Dataset columns (name: type, observed range):

repo: string, 856 distinct values
pull_number: int64, 3 to 127k
instance_id: string, length 12 to 58
issue_numbers: sequence, length 1 to 5
base_commit: string, length 40
patch: string, length 67 to 1.54M
test_patch: string, length 0 to 107M
problem_statement: string, length 3 to 307k
hints_text: string, length 0 to 908k
created_at: timestamp[s]
ray-project/ray
3,312
ray-project__ray-3312
[ "2970" ]
d90f3653946c0be3edaad68fbb2660d164099bf1
diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py --- a/python/ray/scripts/scripts.py +++ b/python/ray/scripts/scripts.py @@ -5,6 +5,7 @@ import click import json import logging +import os import subprocess import ray.services as services @@ -541,6 +542,60 @@ def rsync_up(cluster_config_file, source, target, cluster_name): rsync(cluster_config_file, source, target, cluster_name, down=False) [email protected]() [email protected]("cluster_config_file", required=True, type=str) [email protected]( + "--stop", + is_flag=True, + default=False, + help="Stop the cluster after the command finishes running.") [email protected]( + "--start", + is_flag=True, + default=False, + help="Start the cluster if needed.") [email protected]( + "--screen", + is_flag=True, + default=False, + help="Run the command in a screen.") [email protected]( + "--tmux", is_flag=True, default=False, help="Run the command in tmux.") [email protected]( + "--cluster-name", + "-n", + required=False, + type=str, + help="Override the configured cluster name.") [email protected]( + "--port-forward", required=False, type=int, help="Port to forward.") [email protected]("script", required=True, type=str) [email protected]("script_args", required=False, type=str, nargs=-1) +def submit(cluster_config_file, screen, tmux, stop, start, cluster_name, + port_forward, script, script_args): + """Uploads and runs a script on the specified cluster. + + The script is automatically synced to the following location: + + os.path.join("~", os.path.basename(script)) + """ + assert not (screen and tmux), "Can specify only one of `screen` or `tmux`." + + if start: + create_or_update_cluster(cluster_config_file, None, None, False, False, + True, cluster_name) + + target = os.path.join("~", os.path.basename(script)) + rsync(cluster_config_file, script, target, cluster_name, down=False) + + cmd = " ".join(["python", target] + list(script_args)) + exec_cluster(cluster_config_file, cmd, screen, tmux, stop, False, + cluster_name, port_forward) + if tmux: + logger.info("Use `ray attach {} --tmux` " + "to check on command status.".format(cluster_config_file)) + + @cli.command() @click.argument("cluster_config_file", required=True, type=str) @click.argument("cmd", required=True, type=str) @@ -625,6 +680,7 @@ def stack(): cli.add_command(exec_cmd, name="exec") cli.add_command(rsync_down) cli.add_command(rsync_up) +cli.add_command(submit) cli.add_command(teardown) cli.add_command(teardown, name="down") cli.add_command(get_head_ip)
[autoscaler] Script Submit Functionality ### Describe the problem Add functionality for submitting a script for execution. ### Source code / logs Something similar to: ```python @click.option("--background", is_flag=True, default=False, help="Runs job in a separate screen.") @click.argument("script", required=True, type=str) @click.argument("script_args", required=False, type=str, nargs=-1) # TODO(rliaw): Terminate if job hangs for x minutes def submit(cluster_yaml, shutdown, background, script, script_args): """Uploads and executes script on cluster""" # check that cluster is alive config = load_config(cluster_yaml) head_updater = get_head_updater(config) # check that cluster yaml is on head # syncs file to home directory on cluster base_script = os.path.basename(script) remote_dest = os.path.join("~", base_script) head_updater.sync_files({remote_dest: script}) cmd_list = ["python", base_script] + list(script_args) head_updater.ssh_cmd(cmd, verbose=True) ``` @eugenevinitsky
This would be so helpful!
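For reference, a minimal sketch of the flow the requested command implements, upload the script to the head node and then run it remotely, using the same autoscaler helpers the patch above wires into `ray submit` (the standalone function name here is made up):

```python
import os

# Helpers from ray.autoscaler.commands, as used in the patch above.
from ray.autoscaler.commands import rsync, exec_cluster

def submit_script(cluster_config_file, script, script_args, cluster_name=None):
    # Sync the local script to the home directory on the head node.
    target = os.path.join("~", os.path.basename(script))
    rsync(cluster_config_file, script, target, cluster_name, down=False)
    # Run it remotely; screen/tmux/stop/start are all disabled in this sketch.
    cmd = " ".join(["python", target] + list(script_args))
    exec_cluster(cluster_config_file, cmd, False, False, False, False,
                 cluster_name, None)
```

The actual command layers `--start`, `--stop`, `--screen`, `--tmux`, and `--port-forward` options on top of this core flow.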
2018-11-13T03:31:20
ray-project/ray
3,399
ray-project__ray-3399
[ "3398" ]
b85e7b43f3de688b1c5e34cf2e249890a84f7f9c
diff --git a/python/ray/tune/logger.py b/python/ray/tune/logger.py
--- a/python/ray/tune/logger.py
+++ b/python/ray/tune/logger.py
@@ -97,7 +97,12 @@ class _JsonLogger(Logger):
     def _init(self):
         config_out = os.path.join(self.logdir, "params.json")
         with open(config_out, "w") as f:
-            json.dump(self.config, f, sort_keys=True, cls=_SafeFallbackEncoder)
+            json.dump(
+                self.config,
+                f,
+                indent=2,
+                sort_keys=True,
+                cls=_SafeFallbackEncoder)
         local_file = os.path.join(self.logdir, "result.json")
         self.local_out = open(local_file, "w")
[tune] Tune logger should indent format params.json It would be nice if the tune params.json was automatically formatted. I keep going back to the variants fairly often and always have to manually (automatically) reformat it.
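The fix boils down to passing `indent=2` to `json.dump`; a tiny standalone illustration (the config dict here is made up):

```python
import json

config = {"env": "CartPole-v0", "lr": 0.001, "num_workers": 2}

with open("params.json", "w") as f:
    # Without indent, the whole dict lands on one line; with indent=2
    # (as in the patch), params.json becomes easy to read and diff.
    json.dump(config, f, indent=2, sort_keys=True)
```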
2018-11-25T07:14:07
ray-project/ray
3,431
ray-project__ray-3431
[ "3430" ]
7e319dbf0ce36ce4260ed9425e5b49bb394cc87a
diff --git a/python/ray/experimental/sgd/sgd_worker.py b/python/ray/experimental/sgd/sgd_worker.py
--- a/python/ray/experimental/sgd/sgd_worker.py
+++ b/python/ray/experimental/sgd/sgd_worker.py
@@ -205,9 +205,6 @@ def for_model(self, fn):
     def compute_gradients(self):
         start = time.time()
         feed_dict = self._grad_feed_dict()
-        # Aggregate feed dicts for each model on this worker.
-        for model in self.models:
-            feed_dict.update(model.get_feed_dict())
         # We only need to fetch the first per_device_grad, since they are
         # averaged across all devices by allreduce.
         fetches = self.sess.run(
Remove duplicate feed dict constructing in `python/ray/experimental/sgd/sgd_worker.py` ```python def compute_gradients(self): start = time.time() feed_dict = self._grad_feed_dict() # Aggregate feed dicts for each model on this worker. for model in self.models: feed_dict.update(model.get_feed_dict()) ``` with `_grad_feed_dict` definitions: ```python def _grad_feed_dict(self): # Aggregate feed dicts for each model on this worker. feed_dict = {} for model in self.models: feed_dict.update(model.get_feed_dict()) return feed_dict ```
2018-11-29T10:18:26
ray-project/ray
3,455
ray-project__ray-3455
[ "3412" ]
ddc97864dfcc772b54dc8e4569e967ba0c6c8756
diff --git a/python/ray/tune/schedulers/pbt.py b/python/ray/tune/schedulers/pbt.py --- a/python/ray/tune/schedulers/pbt.py +++ b/python/ray/tune/schedulers/pbt.py @@ -47,7 +47,12 @@ def explore(config, mutations, resample_probability, custom_explore_fn): """ new_config = copy.deepcopy(config) for key, distribution in mutations.items(): - if isinstance(distribution, list): + if isinstance(distribution, dict): + new_config.update({ + key: explore(config[key], mutations[key], resample_probability, + None) + }) + elif isinstance(distribution, list): if random.random() < resample_probability or \ config[key] not in distribution: new_config[key] = random.choice(distribution) @@ -213,8 +218,8 @@ def _exploit(self, trial_executor, trial, trial_to_clone): trial_state = self._trial_state[trial] new_state = self._trial_state[trial_to_clone] if not new_state.last_checkpoint: - logger.warning("[pbt]: no checkpoint for trial" - "skip exploit for Trial {}".format(trial)) + logger.warning("[pbt]: no checkpoint for trial." + " Skip exploit for Trial {}".format(trial)) return new_config = explore(trial_to_clone.config, self._hyperparam_mutations, self._resample_probability,
diff --git a/python/ray/tune/test/trial_scheduler_test.py b/python/ray/tune/test/trial_scheduler_test.py --- a/python/ray/tune/test/trial_scheduler_test.py +++ b/python/ray/tune/test/trial_scheduler_test.py @@ -5,7 +5,7 @@ import random import unittest import numpy as np - +import sys import ray from ray.tune.schedulers import (HyperBandScheduler, AsyncHyperBandScheduler, PopulationBasedTraining, MedianStoppingRule, @@ -17,6 +17,11 @@ from ray.rllib import _register_all _register_all() +if sys.version_info >= (3, 3): + from unittest.mock import MagicMock +else: + from mock import MagicMock + def result(t, rew): return dict( @@ -747,6 +752,104 @@ def assertProduces(fn, values): lambda x: x), {10.0, 100.0}) + def deep_add(seen, new_values): + for k, new_value in new_values.items(): + if isinstance(new_value, dict): + if k not in seen: + seen[k] = {} + seen[k].update(deep_add(seen[k], new_value)) + else: + if k not in seen: + seen[k] = set() + seen[k].add(new_value) + + return seen + + def assertNestedProduces(fn, values): + random.seed(0) + seen = {} + for _ in range(100): + new_config = fn() + seen = deep_add(seen, new_config) + self.assertEqual(seen, values) + + # Nested mutation and spec + assertNestedProduces( + lambda: explore( + { + "a": { + "b": 4 + }, + "1": { + "2": { + "3": 100 + } + }, + }, + { + "a": { + "b": [3, 4, 8, 10] + }, + "1": { + "2": { + "3": lambda: random.choice([10, 100]) + } + }, + }, + 0.0, + lambda x: x), + { + "a": { + "b": {3, 8} + }, + "1": { + "2": { + "3": {80, 120} + } + }, + }) + + custom_explore_fn = MagicMock(side_effect=lambda x: x) + + # Nested mutation and spec + assertNestedProduces( + lambda: explore( + { + "a": { + "b": 4 + }, + "1": { + "2": { + "3": 100 + } + }, + }, + { + "a": { + "b": [3, 4, 8, 10] + }, + "1": { + "2": { + "3": lambda: random.choice([10, 100]) + } + }, + }, + 0.0, + custom_explore_fn), + { + "a": { + "b": {3, 8} + }, + "1": { + "2": { + "3": {80, 120} + } + }, + }) + + # Expect call count to be 100 because we call explore 100 times + self.assertEqual(custom_explore_fn.call_count, 100) + def testYieldsTimeToOtherTrials(self): pbt, runner = self.basicSetup() trials = runner.get_trials()
[tune] PBT should support nested mutations ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04 and 18.04 - **Ray installed from (source or binary)**: source@f372f48bf3b51bc4e6b51ad9691f71b4b9004462 - **Ray version**: 0.5.3 - **Python version**: 3.6 I was trying to run PBT scheduler with nested config and mutation. This fails in the [PBTTrialState.explore](https://github.com/ray-project/ray/blob/f372f48bf3b51bc4e6b51ad9691f71b4b9004462/python/ray/tune/schedulers/pbt.py#L49) because it doesn't support nesting. It seems like this should be pretty easily fixable, assuming that there are no further issues with the nesting. Example config: ``` {'Q_params': {'kwargs': {'hidden_layer_sizes': (256, 256)}, 'type': 'double_feedforward_Q_function'}, 'algorithm_params': {'kwargs': {'action_prior': 'uniform', 'discount': 0.99, 'epoch_length': 1000, 'eval_deterministic': True, 'eval_n_episodes': 1, 'eval_render': False, 'lr': 0.0003, 'n_epochs': 301, 'n_initial_exploration_steps': 10000, 'n_train_repeat': 1, 'reparameterize': True, 'reward_scale': 1.0, 'save_full_state': False, 'store_extra_policy_info': False, 'target_entropy': 'auto', 'target_update_interval': 1, 'tau': 0.005, 'train_every_n_steps': 1}, 'type': 'SAC'}, 'domain': 'swimmer', 'env_params': {}, 'git_sha': '8b4f4ab12499b2f10b6cd24fc0112b35cfea9ffd experiment/pbt-test', 'mode': 'local', 'policy_params': {'kwargs': {'hidden_layer_sizes': (256, 256), 'regularization_coeff': 0.001, 'squash': True}, 'type': 'GaussianPolicy'}, 'preprocessor_params': {'type': None}, 'replay_pool_params': {'kwargs': {'max_size': 1000000.0}, 'type': 'SimpleReplayPool'}, 'run_params': {'checkpoint_at_end': True, 'checkpoint_frequency': 60, 'seed': 2}, 'sampler_params': {'kwargs': {'batch_size': 256, 'max_path_length': 1000, 'min_pool_size': 1000}, 'type': 'SimpleSampler'}, 'task': 'default', 'universe': 'gym'} ``` Example mutation: ``` {'algorithm_params': {'kwargs': {'discount': <function launch_experiments_ray.<locals>.<lambda> at 0x7f21de052950>, 'lr': <function launch_experiments_ray.<locals>.<lambda> at 0x7f21de0529d8>, 'n_train_repeat': [1, 2, 4, 8]}}} ```
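A minimal sketch of the recursive handling the fix introduces: when a mutation entry is itself a dict, recurse into the matching sub-config rather than treating it as a distribution (simplified; the real `explore` also handles the resampling probability and custom explore functions):

```python
import copy
import random

def explore(config, mutations):
    """Return a perturbed copy of config, recursing into nested dict specs."""
    new_config = copy.deepcopy(config)
    for key, distribution in mutations.items():
        if isinstance(distribution, dict):
            # Nested mutation spec: recurse into the corresponding sub-config.
            new_config[key] = explore(config[key], distribution)
        elif isinstance(distribution, list):
            new_config[key] = random.choice(distribution)
        else:
            # Assume a callable that samples a new value, e.g. a lambda.
            new_config[key] = distribution()
    return new_config

# Mirrors the issue's nested spec: only n_train_repeat is mutated.
config = {"algorithm_params": {"kwargs": {"lr": 3e-4, "n_train_repeat": 1}}}
mutations = {"algorithm_params": {"kwargs": {"n_train_repeat": [1, 2, 4, 8]}}}
print(explore(config, mutations))
```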
2018-12-02T06:25:26
ray-project/ray
3,464
ray-project__ray-3464
[ "3450" ]
ce355d13d4b50481ba1a86b63555fef204e25f9f
diff --git a/python/ray/services.py b/python/ray/services.py --- a/python/ray/services.py +++ b/python/ray/services.py @@ -1028,9 +1028,6 @@ def determine_plasma_store_config(object_store_memory=None, "when calling ray.init() or ray start.") object_store_memory = MAX_DEFAULT_MEM - if plasma_directory is not None: - plasma_directory = os.path.abspath(plasma_directory) - # Determine which directory to use. By default, use /tmp on MacOS and # /dev/shm on Linux, unless the shared-memory file system is too small, # in which case we default to /tmp on Linux. @@ -1055,10 +1052,15 @@ def determine_plasma_store_config(object_store_memory=None, else: plasma_directory = "/tmp" - # Do some sanity checks. - if object_store_memory > system_memory: - raise Exception("The requested object store memory size is greater " - "than the total available memory.") + # Do some sanity checks. + if object_store_memory > system_memory: + raise Exception( + "The requested object store memory size is greater " + "than the total available memory.") + else: + plasma_directory = os.path.abspath(plasma_directory) + logger.warning("WARNING: object_store_memory is not verified when " + "plasma_directory is set.") if not os.path.isdir(plasma_directory): raise Exception("The file {} does not exist or is not a directory."
Allow greater than memory allocation for plasma store on Mac ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: OSX 10.12 - **Ray installed from (source or binary)**: source - **Python version**: 3.6 - **Exact command to reproduce**: (See below) ### Describe the problem <!-- Describe the problem clearly here. --> In Modin, it would be great if we could specify `plasma_directory="/tmp"` and `object_store_memory=n * physical_mem`. It seems to work fine on Ubuntu, but I am getting the following error on Mac: ```Exception: The requested object store memory size is greater than the total available memory.``` ### Source code / logs ```python from psutil import virtual_memory mem_bytes = virtual_memory().total object_store_memory = 8 * mem_bytes plasma_directory = "/tmp" ray.init( redirect_output=True, include_webui=False, redirect_worker_output=True, use_raylet=True, ignore_reinit_error=True, plasma_directory=plasma_directory, object_store_memory=object_store_memory, ) ```
Really, that works on Linux? I don't think you actually want this behavior because it will start swapping (even before it hits the total amount of memory) and freeze your laptop. I guess it doesn't on 0.6. On 0.5.X it worked. I don't understand why this would cause the laptop to freeze. The OS would manage the swapping, and it would be slower than in-memory, would you expect that to break the system? Maybe not literally freeze, but in the past when I start using too much memory, I've seen things become sufficiently unresponsive that I've had to reboot the machine. Just to clarify, here is what we need: * plasma store on disk ("/tmp") * having larger than memory dataframes supported with the `object_store_memory` parameter The OS maintains the paging. We have been experimenting with this on 0.5.3, and while it is slower than purely in-memory, it works and allows 10's of GB dataframes on a laptop. This is a very important requirement in Modin. I think it's a reasonable request for the Modin use case. We already provide the ability to specify the plasma directory as a mount point (e.g., for huge pages). We should try mounting a large file as tmpfs mountpoint, passing that to the object store and evaluate performance. This, combined with explicitly specifying object store memory should just work. If there's some internal check in python that overrides the specified object store memory, capping it to available system memory, I'd say it's a bug, because the plasma dir could point to a larger pool of memory.
2018-12-04T18:45:44
ray-project/ray
3,534
ray-project__ray-3534
[ "3533" ]
84fae57ab55751cd5c9b67142621fa016da2b846
diff --git a/python/ray/autoscaler/aws/config.py b/python/ray/autoscaler/aws/config.py --- a/python/ray/autoscaler/aws/config.py +++ b/python/ray/autoscaler/aws/config.py @@ -273,8 +273,11 @@ def _get_role(role_name, config): try: role.load() return role - except botocore.errorfactory.NoSuchEntityException: - return None + except botocore.exceptions.ClientError as exc: + if exc.response.get("Error", {}).get("Code") == "NoSuchEntity": + return None + else: + raise exc def _get_instance_profile(profile_name, config): @@ -283,8 +286,11 @@ def _get_instance_profile(profile_name, config): try: profile.load() return profile - except botocore.errorfactory.NoSuchEntityException: - return None + except botocore.exceptions.ClientError as exc: + if exc.response.get("Error", {}).get("Code") == "NoSuchEntity": + return None + else: + raise exc def _get_key(key_name, config):
[autoscaler] Creating instance profile fails on exception handling ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Mac, Linux - **Ray installed from (source or binary)**: - **Ray version**: 0.6 - **Python version**: 3.6 - **Exact command to reproduce**: `ray up example-full.yaml` on an account without the proper instance profile name. ### Describe the problem I think the exception-handling is just out of date. ### Source code / logs ```bash ╭─ ~/Research/ec2/clustercfgs  ╰─ aws iam remove-role-from-instance-profile --role-name ray-autoscaler-v1 --instance-profile-name ray-autoscaler-v1 ╭─ ~/Research/ec2/clustercfgs  ╰─ aws iam delete-instance-profile --instance-profile-name ray-autoscaler-v1 ╭─ ~/Research/ec2/clustercfgs  ╰─ ray up example-full.yaml Traceback (most recent call last): File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/autoscaler/aws/config.py", line 284, in _get_instance_profile profile.load() File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/boto3/resources/factory.py", line 505, in do_action response = action(self, *args, **kwargs) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/boto3/resources/action.py", line 83, in __call__ response = getattr(parent.meta.client, operation_name)(**params) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/botocore/client.py", line 317, in _api_call return self._make_api_call(operation_name, kwargs) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/botocore/client.py", line 615, in _make_api_call raise error_class(parsed_response, operation_name) botocore.errorfactory.NoSuchEntityException: An error occurred (NoSuchEntity) when calling the GetInstanceProfile operation: Instance Profile ray-autoscaler-v1 cannot be found. 
During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/rliaw/miniconda3/envs/ray/bin/ray", line 11, in <module> sys.exit(main()) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/scripts/scripts.py", line 690, in main return cli() File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/click/core.py", line 722, in __call__ return self.main(*args, **kwargs) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/click/core.py", line 697, in main rv = self.invoke(ctx) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/click/core.py", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/click/core.py", line 535, in invoke return callback(*args, **kwargs) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/scripts/scripts.py", line 470, in create_or_update no_restart, restart_only, yes, cluster_name) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/autoscaler/commands.py", line 42, in create_or_update_cluster config = _bootstrap_config(config) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/autoscaler/commands.py", line 64, in _bootstrap_config resolved_config = bootstrap_config(config) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/autoscaler/aws/config.py", line 43, in bootstrap_aws config = _configure_iam_role(config) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/autoscaler/aws/config.py", line 62, in _configure_iam_role profile = _get_instance_profile(DEFAULT_RAY_INSTANCE_PROFILE, config) File "/Users/rliaw/miniconda3/envs/ray/lib/python3.6/site-packages/ray/autoscaler/aws/config.py", line 286, in _get_instance_profile except botocore.errorfactory.NoSuchEntityException: AttributeError: module 'botocore.errorfactory' has no attribute 'NoSuchEntityException' ```
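The root cause is that `botocore.errorfactory` exceptions are generated per client at runtime and cannot be referenced as module attributes. The portable pattern, which the patch adopts, is to catch `botocore.exceptions.ClientError` and inspect the error code; a standalone sketch (the function name is made up):

```python
import boto3
from botocore.exceptions import ClientError

def get_instance_profile_or_none(profile_name):
    profile = boto3.resource("iam").InstanceProfile(profile_name)
    try:
        profile.load()
        return profile
    except ClientError as exc:
        # NoSuchEntityException only exists on the generated client, so match
        # on the error code carried in the response instead.
        if exc.response.get("Error", {}).get("Code") == "NoSuchEntity":
            return None
        raise
```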
2018-12-13T22:59:52
ray-project/ray
3,556
ray-project__ray-3556
[ "3550" ]
417c7f2d6f280159e31b44dd257e736e0eb03379
diff --git a/python/ray/rllib/evaluation/sampler.py b/python/ray/rllib/evaluation/sampler.py --- a/python/ray/rllib/evaluation/sampler.py +++ b/python/ray/rllib/evaluation/sampler.py @@ -10,8 +10,7 @@ import threading from ray.rllib.evaluation.episode import MultiAgentEpisode, _flatten_action -from ray.rllib.evaluation.sample_batch import MultiAgentSampleBatchBuilder, \ - MultiAgentBatch +from ray.rllib.evaluation.sample_batch import MultiAgentSampleBatchBuilder from ray.rllib.evaluation.tf_policy_graph import TFPolicyGraph from ray.rllib.env.async_vector_env import AsyncVectorEnv from ray.rllib.env.atari_wrappers import get_wrapper_by_cls, MonitorEnv @@ -164,20 +163,6 @@ def get_data(self): if isinstance(rollout, BaseException): raise rollout - # We can't auto-concat rollouts in these modes - if self.async_vector_env.num_envs > 1 or \ - isinstance(rollout, MultiAgentBatch): - return rollout - - # Auto-concat rollouts; TODO(ekl) is this important for A3C perf? - while not rollout["dones"][-1]: - try: - part = self.queue.get_nowait() - if isinstance(part, BaseException): - raise rollout - rollout = rollout.concat(part) - except queue.Empty: - break return rollout def get_metrics(self):
diff --git a/python/ray/rllib/test/test_policy_evaluator.py b/python/ray/rllib/test/test_policy_evaluator.py --- a/python/ray/rllib/test/test_policy_evaluator.py +++ b/python/ray/rllib/test/test_policy_evaluator.py @@ -263,18 +263,6 @@ def testAsync(self): self.assertIn(key, batch) self.assertGreater(batch["advantages"][0], 1) - def testAutoConcat(self): - ev = PolicyEvaluator( - env_creator=lambda _: MockEnv(episode_length=40), - policy_graph=MockPolicyGraph, - sample_async=True, - batch_steps=10, - batch_mode="truncate_episodes", - observation_filter="ConcurrentMeanStdFilter") - time.sleep(2) - batch = ev.sample() - self.assertEqual(batch.count, 40) # auto-concat up to 5 episodes - def testAutoVectorization(self): ev = PolicyEvaluator( env_creator=lambda cfg: MockEnv(episode_length=20, config=cfg),
AsyncSampler auto-concat feature causes non-requested batch size increase <!-- General questions should be asked on the mailing list [email protected]. Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04 - **Ray installed from (source or binary)**: source - **Ray version**: 0.6.0 - **Python version**: 3.6.7 - **Exact command to reproduce**: rllib train --run PPO --env CartPole-v0 --config '{"train_batch_size":200,"sample_batch_size":200,"num_workers":1,"sample_async":true,"observation_filter":"NoFilter"}' <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> When working with the AsyncSampler, the auto-concat feature in the 'get_data()' routine causes sample sizes which exceed the 'batch_steps' passed to PolicyEvaluator, when using batch_mode="truncate_episodes". In this mode the doc says the batch size will be exactly 'batch_steps * num_envs', however due to auto-concat feature, assuming the ENV is faster than the training, we will get the entire queue as additional data (in the provided example command, we get 1200 train steps each iteration instead of the requested 200, due to the AsyncSampler queue size of 5). In case of batch_mode="complete_episodes", the auto-concat feature has no effect (Since each rollout always ends with 'done') The comment suggests this is to improve A3C performance, which it probably does for fast envs, but that would always be true by increasing the batch size. In many algorithms it's important to have a fixed batch size using 'truncate_episodes', while still using the AsyncSampler (For example for real-time environments which continue running during train time) Maybe make the auto-concat an optional configuration? Maybe as part of a new 'batch_mode' option (batch_mode="max_available"?) ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
2018-12-17T10:00:00
ray-project/ray
3,578
ray-project__ray-3578
[ "3449" ]
ffa6ee3ec8056aa1dbaef7b7569d802a5f0ec4b9
diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -154,6 +154,8 @@ def find_version(*filepath):
 setup(
     name="ray",
     version=find_version("ray", "__init__.py"),
+    author="Ray Team",
+    author_email="[email protected]",
     description=("A system for parallel and distributed Python that unifies "
                  "the ML ecosystem."),
     long_description=open("../README.rst").read(),
Fix formatting of PyPI package description. See https://pypi.org/project/ray/. Note that we can test this out first at https://test.pypi.org/project/ray/.
2018-12-19T23:57:09
ray-project/ray
3,593
ray-project__ray-3593
[ "3308" ]
33319502b69ff3859f3ece16d626717505eb0e3e
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -485,6 +485,10 @@ class ActorHandle(object): _ray_actor_driver_id: The driver ID of the job that created the actor (it is possible that this ActorHandle exists on a driver with a different driver ID). + _ray_new_actor_handles: The new actor handles that were created from + this handle since the last task on this handle was submitted. This + is used to garbage-collect dummy objects that are no longer + necessary in the backend. """ def __init__(self, @@ -520,6 +524,7 @@ def __init__(self, actor_creation_dummy_object_id) self._ray_actor_method_cpus = actor_method_cpus self._ray_actor_driver_id = actor_driver_id + self._ray_new_actor_handles = [] def _actor_method_call(self, method_name, @@ -585,6 +590,7 @@ def _actor_method_call(self, actor_creation_dummy_object_id=( self._ray_actor_creation_dummy_object_id), execution_dependencies=execution_dependencies, + new_actor_handles=self._ray_new_actor_handles, # We add one for the dummy return ID. num_return_vals=num_return_vals + 1, resources={"CPU": self._ray_actor_method_cpus}, @@ -596,6 +602,9 @@ def _actor_method_call(self, # The last object returned is the dummy object that should be # passed in to the next actor method. Do not return it to the user. self._ray_actor_cursor = object_ids.pop() + # We have notified the backend of the new actor handles to expect since + # the last task was submitted, so clear the list. + self._ray_new_actor_handles = [] if len(object_ids) == 1: object_ids = object_ids[0] @@ -702,6 +711,19 @@ def _serialization_helper(self, ray_forking): if ray_forking: self._ray_actor_forks += 1 + new_actor_handle_id = actor_handle_id + else: + # The execution dependency for a pickled actor handle is never safe + # to release, since it could be unpickled and submit another + # dependent task at any time. Therefore, we notify the backend of a + # random handle ID that will never actually be used. + new_actor_handle_id = ray.ObjectID(_random_string()) + # Notify the backend to expect this new actor handle. The backend will + # not release the cursor for any new handles until the first task for + # each of the new handles is submitted. + # NOTE(swang): There is currently no garbage collection for actor + # handles until the actor itself is removed. + self._ray_new_actor_handles.append(new_actor_handle_id) return state diff --git a/python/ray/experimental/named_actors.py b/python/ray/experimental/named_actors.py --- a/python/ray/experimental/named_actors.py +++ b/python/ray/experimental/named_actors.py @@ -56,5 +56,8 @@ def register_actor(name, actor_handle): # Add the actor to Redis if it does not already exist. already_exists = _internal_kv_put(actor_name, pickled_state) if already_exists: + # If the registration fails, then erase the new actor handle that + # was added when pickling the actor handle. 
+ actor_handle._ray_new_actor_handles.pop() raise ValueError( "Error: the actor with name={} already exists".format(name)) diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -524,6 +524,7 @@ def submit_task(self, actor_creation_dummy_object_id=None, max_actor_reconstructions=0, execution_dependencies=None, + new_actor_handles=None, num_return_vals=None, resources=None, placement_resources=None, @@ -594,6 +595,9 @@ def submit_task(self, if execution_dependencies is None: execution_dependencies = [] + if new_actor_handles is None: + new_actor_handles = [] + if driver_id is None: driver_id = self.task_driver_id @@ -628,8 +632,8 @@ def submit_task(self, num_return_vals, self.current_task_id, task_index, actor_creation_id, actor_creation_dummy_object_id, max_actor_reconstructions, actor_id, actor_handle_id, - actor_counter, execution_dependencies, resources, - placement_resources) + actor_counter, new_actor_handles, execution_dependencies, + resources, placement_resources) self.raylet_client.submit_task(task) return task.returns() @@ -1949,7 +1953,7 @@ def connect(ray_params, worker.current_task_id, worker.task_index, ray.ObjectID(NIL_ACTOR_ID), ray.ObjectID(NIL_ACTOR_ID), 0, ray.ObjectID(NIL_ACTOR_ID), ray.ObjectID(NIL_ACTOR_ID), - nil_actor_counter, [], {"CPU": 0}, {}) + nil_actor_counter, [], [], {"CPU": 0}, {}) # Add the driver task to the task table. global_state._execute_command(driver_task.task_id(), "RAY.TABLE_ADD",
diff --git a/src/ray/raylet/worker_pool_test.cc b/src/ray/raylet/worker_pool_test.cc --- a/src/ray/raylet/worker_pool_test.cc +++ b/src/ray/raylet/worker_pool_test.cc @@ -66,9 +66,9 @@ static inline TaskSpecification ExampleTaskSpec( const ActorID actor_id = ActorID::nil(), const Language &language = Language::PYTHON) { std::vector<std::string> function_descriptor(3); - return TaskSpecification(UniqueID::nil(), UniqueID::nil(), 0, ActorID::nil(), - ObjectID::nil(), 0, actor_id, ActorHandleID::nil(), 0, {}, 0, - {{}}, {{}}, language, function_descriptor); + return TaskSpecification(UniqueID::nil(), TaskID::nil(), 0, ActorID::nil(), + ObjectID::nil(), 0, actor_id, ActorHandleID::nil(), 0, {}, {}, + 0, {{}}, {{}}, language, function_descriptor); } TEST_F(WorkerPoolTest, HandleWorkerRegistration) { diff --git a/test/actor_test.py b/test/actor_test.py --- a/test/actor_test.py +++ b/test/actor_test.py @@ -1854,8 +1854,87 @@ def fork(queue, key, num_items): # Fork num_iters times. num_forks = 10 num_items_per_fork = 100 - ray.get( - [fork.remote(queue, i, num_items_per_fork) for i in range(num_forks)]) + + # Submit some tasks on new actor handles. + forks = [ + fork.remote(queue, i, num_items_per_fork) for i in range(num_forks) + ] + # Submit some more tasks on the original actor handle. + for item in range(num_items_per_fork): + local_fork = queue.enqueue.remote(num_forks, item) + forks.append(local_fork) + # Wait for tasks from all handles to complete. + ray.get(forks) + # Check that all tasks from all handles have completed. + items = ray.get(queue.read.remote()) + for i in range(num_forks + 1): + filtered_items = [item[1] for item in items if item[0] == i] + assert filtered_items == list(range(num_items_per_fork)) + + +def test_pickled_handle_consistency(setup_queue_actor): + queue = setup_queue_actor + + @ray.remote + def fork(pickled_queue, key, num_items): + queue = ray.worker.pickle.loads(pickled_queue) + x = None + for item in range(num_items): + x = queue.enqueue.remote(key, item) + return ray.get(x) + + # Fork num_iters times. + num_forks = 10 + num_items_per_fork = 100 + + # Submit some tasks on the pickled actor handle. + new_queue = ray.worker.pickle.dumps(queue) + forks = [ + fork.remote(new_queue, i, num_items_per_fork) for i in range(num_forks) + ] + # Submit some more tasks on the original actor handle. + for item in range(num_items_per_fork): + local_fork = queue.enqueue.remote(num_forks, item) + forks.append(local_fork) + # Wait for tasks from all handles to complete. + ray.get(forks) + # Check that all tasks from all handles have completed. + items = ray.get(queue.read.remote()) + for i in range(num_forks + 1): + filtered_items = [item[1] for item in items if item[0] == i] + assert filtered_items == list(range(num_items_per_fork)) + + +def test_nested_fork(setup_queue_actor): + queue = setup_queue_actor + + @ray.remote + def fork(queue, key, num_items): + x = None + for item in range(num_items): + x = queue.enqueue.remote(key, item) + return ray.get(x) + + @ray.remote + def nested_fork(queue, key, num_items): + # Pass the actor into a nested task. + ray.get(fork.remote(queue, key + 1, num_items)) + x = None + for item in range(num_items): + x = queue.enqueue.remote(key, item) + return ray.get(x) + + # Fork num_iters times. + num_forks = 10 + num_items_per_fork = 100 + + # Submit some tasks on new actor handles. 
+ forks = [ + nested_fork.remote(queue, i, num_items_per_fork) + for i in range(0, num_forks, 2) + ] + ray.get(forks) + # Check that all tasks from all handles have completed. items = ray.get(queue.read.remote()) for i in range(num_forks): filtered_items = [item[1] for item in items if item[0] == i]
Garbage collection for actor dummy objects <!-- General questions should be asked on the mailing list [email protected]. Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Any - **Ray installed from (source or binary)**: Any - **Ray version**: 0.5.3 - **Python version**: Any - **Exact command to reproduce**: <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> 1. For long running jobs, actor won't exit and it will submit tasks continuously. Each task will have a dummy object put in `local_objects_` in task_dependency_manager.cc. This unordered map will grow very big. When the size grows to 13-16 million, the rehashing algorithm will take more than 10 second, which cause heartbeat timeout. 2. For python functions, even if there is no return value, a `None` object will be put into Plasma. However, users don't aware there is a None value put into Plasma and they will not call Free to release it, which will also cause very big Plasma usage or large amount of `local_objects_` elements. ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
For 2, if you don't want any return values, then you should decorate the actor method with `@ray.method(num_return_vals=0)` as follows. ```python @ray.remote class Actor: @ray.method(num_return_vals=0) def method(self): pass ``` For 1, maybe the solution is to release unneeded dummy objects every time we do an actor checkpoint. Thanks, @robertnishihara for the `num_return_vals=0` method. That is what I want! For 1, we have implemented in local repo to delete the GCS table as well as the dummy object after checkpointing. Actually, I think we can delete the previous dummy object on the fly after we do `ExtendFrontier`. Because we only need the most recent dummy object to represent Actor's state. I'll experiment this idea when I have time. cc @stephanie-wang @raulchen, I'm not 100% sure, but there may be some issues with that if the actor handles are passed into other tasks because then you may receive tasks that depend on an earlier actor dummy object that has been removed. @robertnishihara I think that's true for now, because when we pass a handle to another task, the new handle will have the same cursor. But I don't really understand why the new handle needs to depend on the old handle's cursor. I think from the perspective of the actor, these 2 handles can be simply 2 independent clients. Their task execution order shouldn't be related, right? @raulchen I think in a lot of cases, it is intuitive to expect the execution orders to be related. For example, ```python a = Actor.remote() # Create an actor. a.initialize.remote() # Do some initialization. f.remote(a) # Pass a into f, which uses a. ``` In this example, I think people would expect the `initialize` task (and definitely the actor creation task) to happen before whatever tasks `f` submits to `a`. https://github.com/ray-project/ray/issues/1149 is a related example in which the lack of dependence caused confusion (though this is with the actor creation task). @robertnishihara Thanks. Regarding the 4 options you mentioned in [this comment](https://github.com/ray-project/ray/issues/1149#issuecomment-345153290), I think it makes sense to have option 2 the default behavior. because in normal cases, users may only want to distribute a handle to multiple workers, and don't care about the order. Regarding the example you commented above, one can explicitly ensure order by doing the following: ``` ray.get(a.initialize.remote()) f.remote(a) ``` or ``` initialized = a.initialize.remote() f.remote(a, initialized) ``` As @robertnishihara, the original reasoning for keeping the dummy objects around is for ordering. It's hard to say definitively how often users do or don't care about ordering, so I think we'll just have to wait and see. As a longer-term solution, you can actually decide when to release dummy objects as long as the frontend provides information about how many times an actor handle has been passed around. So in the example posted above, if you did, this then you would only have to keep around the `initialize` dummy object, and it could be released once `f` calls its first method. ```python a = Actor.remote() # Create an actor. a.initialize.remote() # Do some initialization. f.remote(a) # Pass a into f, which uses a. a.foo.remote() # Tell the backend that a was passed into f. ``` And yes, I completely agree that we should release old dummy objects after a checkpoint. It is possible that an old dummy object ID could become relevant a long time later, right (even after multiple checkpoints)? 
E.g., if a handle was passed into a task but the task didn't get scheduled for a very long time due to some other object dependency. In that case, we really just need to be able to compare the dummy Object ID with the current actor frontier, so if we can encode more semantic information in the IDs, e.g., a vector clock essentially, then we wouldn't need to keep around all of the dummy object IDs. @stephanie-wang Does something like that make sense? > In that case, we really just need to be able to compare the dummy Object ID with the current actor frontier, so if we can encode more semantic information in the IDs, e.g., a vector clock essentially, then we wouldn't need to keep around all of the dummy object IDs. @robertnishihara can you elaborate on how that solves the case you just described (late-scheduled tasks)? Why would having that ordering let you release an old dummy object? I was trying to figure out a solution to the same problem in general for lineage GC after checkpointing. ``` a -> b -> c -> d \ -> x ``` @ujvl say we have the above dependency tree, each letter is a dummy object. Right now, we cannot clean up b until x starts executing, even if c and d have already finished. Because otherwise we don't know x's dependencies are satisfied. (Considering x might be re-submitted, we can never clean up b, right? @stephanie-wang ) However, if we dummy objects are comparable, we can clean up b and c, and only keep d. Because when we try schedule x, we know there's a newer dummy object d than b, x's dependency is satisfied. I really don't think that it's a good idea to pursue comparable object IDs. The best we can do is probabilistic comparison. You will run out of bits very quickly if you try to do perfect comparison. The way that I would go about the case that @robertnishihara listed is: 1. The frontend notifies the backend that a handle has been passed to another task with dummy object `x`. The backend records the fact that it is waiting for `x`. It can do this by preemptively adding a new handle to the actor frontier. 2. The task with the new actor handle hasn't been scheduled yet, but a bunch of other tasks get submitted on other handles. All dummy objects for these handles in between `x` and the present task can be released. A checkpoint may be taken at any point, but it doesn't affect which objects get released (the set of dummy objects held is already the minimum). 4. The task gets scheduled and submits new tasks on its actor handle. The backend releases `x`. Also, @raulchen, I think it's okay to release b as soon as x is scheduled the first time. If x gets resubmitted, then you have to roll back the actor to the latest checkpoint anyway, so b will get recreated. @stephanie-wang, regarding > I think it's okay to release b as soon as x is scheduled the first time. If x gets resubmitted, then you have to roll back the actor to the latest checkpoint anyway, so b will get recreated. This assumes that the checkpoint is before `b`, right? E.g., if the checkpoint was taken at `d` then reloading from the checkpoint won't cause `b` to get recreated, right? Also, is there a way to make this work in the setting where the actor is using the "restart only" fault tolerance mode, where the actor is recreated, but no methods are replayed? I suppose that for such actors, task ordering probably doesn't matter and so maybe such actors simply shouldn't use dummy object IDs. If the checkpoint happens before `b`, then `b` will get recreated. 
If the checkpoint was after `b`, then I think we should make it so that `b` is considered unreconstructable (similar to what we do right now for evicted actor objects), and gets failed immediately. I think that the task ordering is still important for "restart only" actors, since it would not be very intuitive to users if tasks could suddenly be executed out-of-order for that failure mode. The part that would be different is after recovering from a failure, we would only require that any **later** task could be executed immediately, instead of requiring that the **next consecutive** task be executed. @stephanie-wang the scenario I'm concerned about is a little different I think. I'm imagining a setting with no failures. - Actor methods `a`, `b`, `c`, `d`, and `x` are submitted as above. - Methods `a`, `b`, `c`, `d` execute. - A checkpoint is taken (say after `d`), and so dummy object IDs for `a`, `b`, `c`, `d` are flushed. - Then `x` finally arrives at the raylet (it was delayed for some reason). - The correct behavior is to just execute `x`. Doing so respects all ordering constraints. However, `x` depends on the dummy object for `b` which has been flushed, so we don't know that we can execute it. Does that make sense? Actually, I don't think checkpoints really matter at all. The algorithm I'm proposing is to record the fact that the actor handle was forked from `b` in the task that produces `c`. When `c` gets executed, the raylet will see this and extend the actor's frontier to include the new handle on which `x` will be submitted. Any dummy object on the actor's frontier will be pinned, and everything before the frontier will be released. Therefore, `x`'s dependency will be pinned until it arrives at the raylet.
2018-12-21T00:53:39
ray-project/ray
3,621
ray-project__ray-3621
[ "3611" ]
ddd4c842f1eb34097cc3f8795c823fd044a56fc4
diff --git a/python/ray/__init__.py b/python/ray/__init__.py
--- a/python/ray/__init__.py
+++ b/python/ray/__init__.py
@@ -47,7 +47,7 @@
     raise
 
 modin_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), "modin")
-sys.path.insert(0, modin_path)
+sys.path.append(modin_path)
 
 from ray.raylet import ObjectID, _config  # noqa: E402
 from ray.profiling import profile  # noqa: E402
[modin] Importing Modin before Ray can sometimes cause ImportError ### Describe the problem <!-- Describe the problem clearly here. --> When running Modin with Ray installed from source, I am sometimes running into `ImportError` and `ModuleNotFoundError` which is occurring when I am running a modified version of Modin. This forces me to modify Ray's source such that it does not try to use the Modin that is bundled with Ray. I will work on a solution for this. ### Source code / logs `import modin.pandas as pd` ``` Traceback (most recent call last): File "/home/ubuntu/ray/python/ray/function_manager.py", line 165, in fetch_and_register_remote_function function = pickle.loads(serialized_function) ModuleNotFoundError: No module named 'modin.data_management.utils' ```
Is that exception raised on a worker or the driver? It is raised in every worker immediately after import.
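The one-line fix is about import precedence: `sys.path.insert(0, ...)` makes Ray's bundled Modin shadow any user-installed (or locally modified) copy, while `append` only keeps it as a fallback. A small illustration with a made-up path:

```python
import sys

bundled_modin = "/opt/ray/modin"  # hypothetical path to the bundled copy

# Before the fix: the bundled Modin is found before the user's own install.
sys.path.insert(0, bundled_modin)

# After the fix: a user-installed Modin wins, and the bundled copy is only
# used when no other Modin can be imported.
sys.path.remove(bundled_modin)
sys.path.append(bundled_modin)
```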
2018-12-23T18:46:25
ray-project/ray
3,656
ray-project__ray-3656
[ "3652" ]
3df1e1c471aef6a54d2c5011e0839f5fb9443922
diff --git a/python/ray/tempfile_services.py b/python/ray/tempfile_services.py
--- a/python/ray/tempfile_services.py
+++ b/python/ray/tempfile_services.py
@@ -66,8 +66,16 @@ def try_to_create_directory(directory_path):
     # important when multiple people are using the same machine.
     try:
         os.chmod(directory_path, 0o0777)
-    except PermissionError:
-        pass
+    except OSError as e:
+        # Silently suppress the PermissionError that is thrown by the chmod.
+        # This is done because the user attempting to change the permissions
+        # on a directory may not own it. The chmod is attempted whether the
+        # directory is new or not to avoid race conditions.
+        # ray-project/ray/#3591
+        if e.errno in [errno.EACCES, errno.EPERM]:
+            pass
+        else:
+            raise
 
 
 def get_temp_root():
PermissionError not defined in Python 2.7 ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 16 - **Ray installed from (source or binary)**: binary - **Ray version**: 0.6.1 - **Python version**: 2.7 - **Exact command to reproduce**: I don't have access to `/tmp`, and I get this following error: ``` cluster_tests.py:55: in _start_new_cluster "num_heartbeats_timeout": 10 /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/test/cluster_utils.py:43: in __init__ self.add_node(**head_node_args) /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/test/cluster_utils.py:86: in add_node **node_kwargs) /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:1777: in start_ray_head _internal_config=_internal_config) /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:1436: in start_ray_processes redis_max_memory=redis_max_memory) /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:458: in start_redis redis_stdout_file, redis_stderr_file = new_redis_log_file(redirect_output) /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:182: in new_redis_log_file "redis", redirect_output) /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:166: in new_log_files try_to_create_directory("/tmp/ray") _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ directory_path = '/tmp/ray' def try_to_create_directory(directory_path): """Attempt to create a directory that is globally readable/writable. Args: directory_path: The path of the directory to create. """ directory_path = os.path.expanduser(directory_path) if not os.path.exists(directory_path): try: os.makedirs(directory_path) except OSError as e: if e.errno != os.errno.EEXIST: raise e logger.warning( "Attempted to create '{}', but the directory already " "exists.".format(directory_path)) # Change the log directory permissions so others can use it. This is # important when multiple people are using the same machine. try: os.chmod(directory_path, 0o0777) > except PermissionError: E NameError: global name 'PermissionError' is not defined /data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:69: NameError ```
@devin-petersohn looks like this was introduced in https://github.com/ray-project/ray/pull/3591. Looks like an easy fix. `PermissionError` was only available as of Python 3.3.
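A version-agnostic sketch of the pattern the fix switches to: catch `OSError` and check `errno`, rather than naming `PermissionError`, which only exists on Python 3.3+ (the function name here is made up):

```python
import errno
import os

def make_world_writable(directory_path):
    try:
        os.chmod(directory_path, 0o0777)
    except OSError as e:
        # On Python 3, a failed chmod raises PermissionError, a subclass of
        # OSError; on Python 2 the same failure is a plain OSError, so match
        # on errno instead of the exception class.
        if e.errno not in (errno.EACCES, errno.EPERM):
            raise
```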
2018-12-28T20:35:50
ray-project/ray
3,711
ray-project__ray-3711
[ "3710" ]
e78562b2e8d2affc8b0f7fabde37aa192b4385fc
diff --git a/python/ray/tune/registry.py b/python/ray/tune/registry.py --- a/python/ray/tune/registry.py +++ b/python/ray/tune/registry.py @@ -2,6 +2,7 @@ from __future__ import division from __future__ import print_function +import logging from types import FunctionType import ray @@ -17,6 +18,8 @@ TRAINABLE_CLASS, ENV_CREATOR, RLLIB_MODEL, RLLIB_PREPROCESSOR ] +logger = logging.getLogger(__name__) + def register_trainable(name, trainable): """Register a trainable function or class. @@ -30,8 +33,16 @@ def register_trainable(name, trainable): from ray.tune.trainable import Trainable, wrap_function - if isinstance(trainable, FunctionType): + if isinstance(trainable, type): + logger.debug("Detected class for trainable.") + elif isinstance(trainable, FunctionType): + logger.debug("Detected function for trainable.") + trainable = wrap_function(trainable) + elif callable(trainable): + logger.warning( + "Detected unknown callable for trainable. Converting to class.") trainable = wrap_function(trainable) + if not issubclass(trainable, Trainable): raise TypeError("Second argument must be convertable to Trainable", trainable)
diff --git a/python/ray/tune/test/trial_runner_test.py b/python/ray/tune/test/trial_runner_test.py --- a/python/ray/tune/test/trial_runner_test.py +++ b/python/ray/tune/test/trial_runner_test.py @@ -112,6 +112,24 @@ class B(Trainable): self.assertRaises(TypeError, lambda: register_trainable("foo", B())) self.assertRaises(TypeError, lambda: register_trainable("foo", A)) + def testRegisterTrainableCallable(self): + def dummy_fn(config, reporter, steps): + reporter(timesteps_total=steps, done=True) + + from functools import partial + steps = 500 + register_trainable("test", partial(dummy_fn, steps=steps)) + [trial] = run_experiments({ + "foo": { + "run": "test", + "config": { + "script_min_iter_time_s": 0, + }, + } + }) + self.assertEqual(trial.status, Trial.TERMINATED) + self.assertEqual(trial.last_result[TIMESTEPS_TOTAL], steps) + def testBuiltInTrainableResources(self): class B(Trainable): @classmethod
[tune] partial function cannot be registered as trainable

### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04
- **Ray installed from (source or binary)**: binary
- **Ray version**: 0.6.1
- **Python version**: 3.7
- **Exact command to reproduce**:

The following code fails:

```
def dummy_fn(c, a, b):
    print("Called")

from functools import partial
from ray.tune import register_trainable
register_trainable("test", partial(dummy_fn, c=None))
```

while the following code works:

```
def dummy_fn(a, b):
    print("Called")

from functools import partial
from ray.tune import register_trainable
register_trainable("test", dummy_fn)
```

### Describe the problem
The first code sample does not work, even though the function (after the `partial`) fulfills all requirements to be properly registered.

### Source code / logs
Traceback:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/temp/schock/conda/envs/delira_new/lib/python3.7/site-packages/ray/tune/registry.py", line 35, in register_trainable
    if not issubclass(trainable, Trainable):
TypeError: issubclass() arg 1 must be a class
```
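A self-contained sketch of the detection order the fix introduces in `register_trainable`: classes pass through unchanged, plain functions are wrapped, and any other callable (such as a `functools.partial`) is now wrapped too instead of falling through to the `issubclass` check (the helper below is made up for illustration):

```python
from functools import partial
from types import FunctionType

def describe_trainable(trainable):
    # Mirrors the order of checks added by the fix.
    if isinstance(trainable, type):
        return "class: used as-is"
    if isinstance(trainable, FunctionType):
        return "function: wrapped into a Trainable"
    if callable(trainable):
        return "other callable (e.g. functools.partial): wrapped into a Trainable"
    raise TypeError("Second argument must be convertible to Trainable")

def dummy_fn(config, reporter, steps):
    reporter(timesteps_total=steps, done=True)

print(describe_trainable(dummy_fn))                      # function
print(describe_trainable(partial(dummy_fn, steps=500)))  # other callable
```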
2019-01-07T20:01:03
ray-project/ray
3,731
ray-project__ray-3731
[ "3685" ]
edb7aaf7c7e857c2b5b90a33f8cce2ffe374736e
diff --git a/python/ray/autoscaler/commands.py b/python/ray/autoscaler/commands.py --- a/python/ray/autoscaler/commands.py +++ b/python/ray/autoscaler/commands.py @@ -11,6 +11,7 @@ import sys import click import logging +import random import yaml try: # py3 @@ -94,6 +95,35 @@ def teardown_cluster(config_file, yes, workers_only, override_cluster_name): nodes = provider.nodes({TAG_RAY_NODE_TYPE: "worker"}) +def kill_node(config_file, yes, override_cluster_name): + """Kills a random Raylet worker.""" + + config = yaml.load(open(config_file).read()) + if override_cluster_name is not None: + config["cluster_name"] = override_cluster_name + config = _bootstrap_config(config) + + confirm("This will kill a node in your cluster", yes) + + provider = get_node_provider(config["provider"], config["cluster_name"]) + nodes = provider.nodes({TAG_RAY_NODE_TYPE: "worker"}) + node = random.choice(nodes) + logger.info("Terminating worker {}".format(node)) + updater = NodeUpdaterProcess( + node, + config["provider"], + config["auth"], + config["cluster_name"], + config["file_mounts"], [], + "", + redirect_output=False) + + _exec(updater, "ray stop", False, False) + + time.sleep(5) + return provider.external_ip(node) + + def get_or_create_head_node(config, config_file, no_restart, restart_only, yes, override_cluster_name): """Create the cluster head node, which in turn creates the workers.""" @@ -343,6 +373,17 @@ def get_head_node_ip(config_file, override_cluster_name): return provider.external_ip(head_node) +def get_worker_node_ips(config_file, override_cluster_name): + """Returns worker node IPs for given configuration file.""" + + config = yaml.load(open(config_file).read()) + if override_cluster_name is not None: + config["cluster_name"] = override_cluster_name + provider = get_node_provider(config["provider"], config["cluster_name"]) + nodes = provider.nodes({TAG_RAY_NODE_TYPE: "worker"}) + return [provider.external_ip(node) for node in nodes] + + def _get_head_node(config, config_file, override_cluster_name, diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py --- a/python/ray/scripts/scripts.py +++ b/python/ray/scripts/scripts.py @@ -9,9 +9,9 @@ import subprocess import ray.services as services -from ray.autoscaler.commands import (attach_cluster, exec_cluster, - create_or_update_cluster, rsync, - teardown_cluster, get_head_node_ip) +from ray.autoscaler.commands import ( + attach_cluster, exec_cluster, create_or_update_cluster, rsync, + teardown_cluster, get_head_node_ip, kill_node, get_worker_node_ips) import ray.ray_constants as ray_constants import ray.utils @@ -274,8 +274,8 @@ def start(node_ip_address, redis_address, redis_port, num_redis_shards, # Get the node IP address if one is not provided. ray_params.update_if_absent( node_ip_address=services.get_node_ip_address()) - logger.info("Using IP address {} for this node." - .format(ray_params.node_ip_address)) + logger.info("Using IP address {} for this node.".format( + ray_params.node_ip_address)) ray_params.update_if_absent( redis_port=redis_port, redis_shard_ports=redis_shard_ports, @@ -342,8 +342,8 @@ def start(node_ip_address, redis_address, redis_port, num_redis_shards, # Get the node IP address if one is not provided. ray_params.update_if_absent( node_ip_address=services.get_node_ip_address(redis_address)) - logger.info("Using IP address {} for this node." 
- .format(ray_params.node_ip_address)) + logger.info("Using IP address {} for this node.".format( + ray_params.node_ip_address)) # Check that there aren't already Redis clients with the same IP # address connected with this Redis instance. This raises an exception # if the Redis server already has clients on this node. @@ -456,6 +456,7 @@ def stop(): help="Don't ask for confirmation.") def create_or_update(cluster_config_file, min_workers, max_workers, no_restart, restart_only, yes, cluster_name): + """Create or update a Ray cluster.""" if restart_only or no_restart: assert restart_only != no_restart, "Cannot set both 'restart_only' " \ "and 'no_restart' at the same time!" @@ -483,9 +484,30 @@ def create_or_update(cluster_config_file, min_workers, max_workers, no_restart, type=str, help="Override the configured cluster name.") def teardown(cluster_config_file, yes, workers_only, cluster_name): + """Tear down the Ray cluster.""" teardown_cluster(cluster_config_file, yes, workers_only, cluster_name) [email protected]() [email protected]("cluster_config_file", required=True, type=str) [email protected]( + "--yes", + "-y", + is_flag=True, + default=False, + help="Don't ask for confirmation.") [email protected]( + "--cluster-name", + "-n", + required=False, + type=str, + help="Override the configured cluster name.") +def kill_random_node(cluster_config_file, yes, cluster_name): + """Kills a random Ray node. For testing purposes only.""" + click.echo("Killed node with IP " + + kill_node(cluster_config_file, yes, cluster_name)) + + @cli.command() @click.argument("cluster_config_file", required=True, type=str) @click.option( @@ -664,6 +686,19 @@ def get_head_ip(cluster_config_file, cluster_name): click.echo(get_head_node_ip(cluster_config_file, cluster_name)) [email protected]() [email protected]("cluster_config_file", required=True, type=str) [email protected]( + "--cluster-name", + "-n", + required=False, + type=str, + help="Override the configured cluster name.") +def get_worker_ips(cluster_config_file, cluster_name): + worker_ips = get_worker_node_ips(cluster_config_file, cluster_name) + click.echo("\n".join(worker_ips)) + + @cli.command() def stack(): COMMAND = """ @@ -700,7 +735,9 @@ def stack(): cli.add_command(submit) cli.add_command(teardown) cli.add_command(teardown, name="down") +cli.add_command(kill_random_node) cli.add_command(get_head_ip, name="get_head_ip") +cli.add_command(get_worker_ips) cli.add_command(stack)
[autoscaler] Fault tolerance testing ### Describe the problem Add functionality to the autoscaler to kill and restart a raylet (but not the machine), to automate fault tolerance testing.
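To make the intent of the new `kill_node` helper concrete, here is a hypothetical, self-contained sketch of the same pattern with a stand-in provider object; the `FakeProvider` class, its tag name, and the node ids are illustrative only, not Ray's real API:

```python
import random


# FakeProvider stands in for the autoscaler's NodeProvider.
class FakeProvider(object):
    def nodes(self, tag_filters):
        return ["i-worker-1", "i-worker-2"]

    def external_ip(self, node_id):
        return "203.0.113." + node_id[-1]


def kill_random_worker(provider, run_on_node):
    """Stop the raylet on a random worker, leaving the machine itself up."""
    workers = provider.nodes({"ray-node-type": "worker"})
    victim = random.choice(workers)
    run_on_node(victim, "ray stop")  # the VM survives, only Ray is stopped
    return provider.external_ip(victim)


ip = kill_random_worker(FakeProvider(), lambda node, cmd: None)
print("Killed node with IP", ip)
```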
2019-01-09T22:35:51
ray-project/ray
3,793
ray-project__ray-3793
[ "650" ]
75ac016e2bd39060d14a302292546d9dbc49f6a2
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -779,6 +779,13 @@ def __setstate__(self, state): def make_actor(cls, num_cpus, num_gpus, resources, actor_method_cpus, checkpoint_interval, max_reconstructions): + # Give an error if cls is an old-style class. + if not issubclass(cls, object): + raise TypeError( + "The @ray.remote decorator cannot be applied to old-style " + "classes. In Python 2, you must declare the class with " + "'class ClassName(object):' instead of 'class ClassName:'.") + if checkpoint_interval is None: checkpoint_interval = -1 if max_reconstructions is None:
diff --git a/test/actor_test.py b/test/actor_test.py --- a/test/actor_test.py +++ b/test/actor_test.py @@ -98,6 +98,16 @@ def foo(self): ray.get(actor.foo.remote()) [email protected]( + sys.version_info >= (3, 0), reason="This test requires Python 2.") +def test_old_style_error(ray_start_regular): + with pytest.raises(TypeError): + + @ray.remote + class Actor: + pass + + def test_keyword_args(ray_start_regular): @ray.remote class Actor(object):
Actor definition needs Actor(object). The following code does not work on Python 2 but works on Python 3. ``` import ray ray.init() @ray.remote class Outer(): def __init__(self): pass ``` Error thrown: ``` Traceback (most recent call last): File "/Users/rliaw/miniconda2/envs/py2/lib/python2.7/threading.py", line 801, in __bootstrap_inner self.run() File "/Users/rliaw/miniconda2/envs/py2/lib/python2.7/threading.py", line 754, in run self.__target(*self.__args, **self.__kwargs) File "/Users/rliaw/miniconda2/envs/py2/lib/python2.7/site-packages/ray/worker.py", line 1166, in import_thread worker.fetch_and_register_actor(key, worker) File "/Users/rliaw/miniconda2/envs/py2/lib/python2.7/site-packages/ray/actor.py", line 86, in fetch_and_register_actor worker.actors[actor_id_str] = unpickled_class.__new__(unpickled_class) AttributeError: class Class has no attribute '__new__' ```
Do we want to support old-style classes? We should certainly give a better error message. Better error message: yes! I think it's fine not to support old-style classes for actors. I tagged it with "help wanted" since it looks like low-hanging fruit and would be a good first contribution to the project.
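For readers hitting this on Python 2, a minimal sketch of the declaration that works with the new check (the actor name and method below are made up for illustration):

```python
import ray

ray.init()


# In Python 2, the class must be new-style, i.e. inherit from object,
# for @ray.remote to work; this is what the added TypeError now enforces.
@ray.remote
class Counter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value


counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # 1
```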
2019-01-16T23:47:15
ray-project/ray
3,872
ray-project__ray-3872
[ "3850" ]
5fb813ff3918aadee3c464f250d49d0562efac18
diff --git a/python/ray/__init__.py b/python/ray/__init__.py --- a/python/ray/__init__.py +++ b/python/ray/__init__.py @@ -51,7 +51,7 @@ from ray._raylet import (UniqueID, ObjectID, DriverID, ClientID, ActorID, ActorHandleID, FunctionID, ActorClassID, TaskID, - Config as _Config) # noqa: E402 + _ID_TYPES, Config as _Config) # noqa: E402 _config = _Config() @@ -77,7 +77,8 @@ "remote", "profile", "actor", "method", "get_gpu_ids", "get_resource_ids", "get_webui_url", "register_custom_serializer", "shutdown", "is_initialized", "SCRIPT_MODE", "WORKER_MODE", "LOCAL_MODE", - "PYTHON_MODE", "global_state", "_config", "__version__", "internal" + "PYTHON_MODE", "global_state", "_config", "__version__", "internal", + "_ID_TYPES" ] __all__ += [ diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -1099,22 +1099,11 @@ def _initialize_serialization(driver_id, worker=global_worker): serialization_context.set_pickle(pickle.dumps, pickle.loads) pyarrow.register_torch_serialization_handlers(serialization_context) - # Define a custom serializer and deserializer for handling Object IDs. - def object_id_custom_serializer(obj): - return obj.binary() - - def object_id_custom_deserializer(serialized_obj): - return ObjectID(serialized_obj) - - # We register this serializer on each worker instead of calling - # register_custom_serializer from the driver so that isinstance still - # works. - serialization_context.register_type( - ObjectID, - "ray.ObjectID", - pickle=False, - custom_serializer=object_id_custom_serializer, - custom_deserializer=object_id_custom_deserializer) + for id_type in ray._ID_TYPES: + serialization_context.register_type( + id_type, + "{}.{}".format(id_type.__module__, id_type.__name__), + pickle=True) def actor_handle_serializer(obj): return obj._serialization_helper(True)
Too many warning messages when actor handles are passed into tasks. ```python import ray ray.init() @ray.remote class Foo: pass @ray.remote def g(handle): pass f = Foo.remote() g.remote(f) ``` The last line will print out ``` WARNING: Falling back to serializing objects of type <class 'ray._raylet.ActorID'> by using pickle. This may be inefficient. WARNING: Falling back to serializing objects of type <class 'ray._raylet.ActorHandleID'> by using pickle. This may be inefficient. WARNING: Falling back to serializing objects of type <class 'ray._raylet.DriverID'> by using pickle. This may be inefficient. ``` This was presumably introduced in #3541. We should just register pickle as a custom serializer for all ID types ahead of time. cc @suquark
Yes, it should be. Let me create a PR to fix it.
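A rough sketch of the "register pickle ahead of time" idea for a user-defined type; the class here is hypothetical, and the `use_pickle=True` keyword is an assumption about the `register_custom_serializer` signature in this Ray version:

```python
import ray

ray.init()


class TrajectoryID(object):
    """Hypothetical ID-like user type, standing in for the internal ID classes."""

    def __init__(self, binary):
        self.binary = binary


# Registering pickle as the serializer up front avoids the per-type
# "Falling back to serializing ... by using pickle" warning the first time
# an instance crosses a task boundary (keyword name assumed, see above).
ray.register_custom_serializer(TrajectoryID, use_pickle=True)
```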
2019-01-26T22:22:23
ray-project/ray
3,894
ray-project__ray-3894
[ "3873" ]
62a0a7bdc73cb1ee169babc35fd0df2e29c6fa84
diff --git a/python/ray/tune/examples/mnist_pytorch.py b/python/ray/tune/examples/mnist_pytorch.py --- a/python/ray/tune/examples/mnist_pytorch.py +++ b/python/ray/tune/examples/mnist_pytorch.py @@ -8,7 +8,6 @@ import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms -from torch.autograd import Variable # Training settings parser = argparse.ArgumentParser(description='PyTorch MNIST Example') @@ -120,7 +119,6 @@ def train(epoch): for batch_idx, (data, target) in enumerate(train_loader): if args.cuda: data, target = data.cuda(), target.cuda() - data, target = Variable(data), Variable(target) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) @@ -131,16 +129,17 @@ def test(): model.eval() test_loss = 0 correct = 0 - for data, target in test_loader: - if args.cuda: - data, target = data.cuda(), target.cuda() - data, target = Variable(data, volatile=True), Variable(target) - output = model(data) - test_loss += F.nll_loss( - output, target, size_average=False).item() # sum up batch loss - pred = output.data.max( - 1, keepdim=True)[1] # get the index of the max log-probability - correct += pred.eq(target.data.view_as(pred)).long().cpu().sum() + with torch.no_grad(): + for data, target in test_loader: + if args.cuda: + data, target = data.cuda(), target.cuda() + output = model(data) + # sum up batch loss + test_loss += F.nll_loss(output, target, reduction='sum').item() + # get the index of the max log-probability + pred = output.argmax(dim=1, keepdim=True) + correct += pred.eq( + target.data.view_as(pred)).long().cpu().sum() test_loss = test_loss / len(test_loader.dataset) accuracy = correct.item() / len(test_loader.dataset) @@ -176,7 +175,8 @@ def test(): "training_iteration": 1 if args.smoke_test else 20 }, "resources_per_trial": { - "cpu": 3 + "cpu": 3, + "gpu": int(not args.no_cuda) }, "run": "train_mnist", "num_samples": 1 if args.smoke_test else 10, diff --git a/python/ray/tune/examples/mnist_pytorch_trainable.py b/python/ray/tune/examples/mnist_pytorch_trainable.py --- a/python/ray/tune/examples/mnist_pytorch_trainable.py +++ b/python/ray/tune/examples/mnist_pytorch_trainable.py @@ -9,7 +9,6 @@ import torch.nn.functional as F import torch.optim as optim from torchvision import datasets, transforms -from torch.autograd import Variable from ray.tune import Trainable @@ -127,7 +126,6 @@ def _train_iteration(self): for batch_idx, (data, target) in enumerate(self.train_loader): if self.args.cuda: data, target = data.cuda(), target.cuda() - data, target = Variable(data), Variable(target) self.optimizer.zero_grad() output = self.model(data) loss = F.nll_loss(output, target) @@ -138,18 +136,17 @@ def _test(self): self.model.eval() test_loss = 0 correct = 0 - for data, target in self.test_loader: - if self.args.cuda: - data, target = data.cuda(), target.cuda() - data, target = Variable(data, volatile=True), Variable(target) - output = self.model(data) - - # sum up batch loss - test_loss += F.nll_loss(output, target, size_average=False).item() - - # get the index of the max log-probability - pred = output.data.max(1, keepdim=True)[1] - correct += pred.eq(target.data.view_as(pred)).long().cpu().sum() + with torch.no_grad(): + for data, target in self.test_loader: + if self.args.cuda: + data, target = data.cuda(), target.cuda() + output = self.model(data) + # sum up batch loss + test_loss += F.nll_loss(output, target, reduction='sum').item() + # get the index of the max log-probability + pred = output.argmax(dim=1, keepdim=True) 
+ correct += pred.eq( + target.data.view_as(pred)).long().cpu().sum() test_loss = test_loss / len(self.test_loader.dataset) accuracy = correct.item() / len(self.test_loader.dataset) @@ -188,7 +185,8 @@ def _restore(self, checkpoint_path): "training_iteration": 1 if args.smoke_test else 20, }, "resources_per_trial": { - "cpu": 3 + "cpu": 3, + "gpu": int(not args.no_cuda) }, "run": TrainMNIST, "num_samples": 1 if args.smoke_test else 20,
diff --git a/test/jenkins_tests/run_multi_node_tests.sh b/test/jenkins_tests/run_multi_node_tests.sh --- a/test/jenkins_tests/run_multi_node_tests.sh +++ b/test/jenkins_tests/run_multi_node_tests.sh @@ -363,12 +363,11 @@ docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ --smoke-test docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ - python /ray/python/ray/tune/examples/mnist_pytorch.py \ - --smoke-test + python /ray/python/ray/tune/examples/mnist_pytorch.py --smoke-test --no-cuda docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ python /ray/python/ray/tune/examples/mnist_pytorch_trainable.py \ - --smoke-test + --smoke-test --no-cuda docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ python /ray/python/ray/tune/examples/genetic_example.py \
[tune] Option --no-cuda is misleading in mnist_pytorch.py example ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 16.04 - **Ray installed from (source or binary)**: binary - **Ray version**: 0.6.2 - **Python version**: 3.5 - **Exact command to reproduce**: ``` python3 mnist_pytorch.py ``` ### Describe the problem The argument parser option `--no-cuda`, which defaults to False, is misleading because `resources_per_trial` does not allocate a GPU. https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L45-L49 https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L178-L180 Thanks
Nice catch! Would you be interested in pushing a fix?
@richardliaw yes, I can send a PR that fixes both PyTorch examples:
- mnist_pytorch_trainable.py
- mnist_pytorch.py

The idea is to add one GPU:
```python
"resources_per_trial": {
    "cpu": 3,
    "gpu": 1
}
```
What do you think?

PS: IMO the examples are a little outdated, since they use `Variable`, which has been deprecated since `0.4.0`. Maybe we should update this too.
I think maybe do something like `"gpu": int(not args.no_cuda)`, since you don't want to set it like that when CUDA is disabled. Updating deprecated things would be great; thanks!
@richardliaw yes, but this is handled inside the trial code: https://github.com/ray-project/ray/blob/eddd60e14e95a2aadb06192bd141d06c68d5f082/python/ray/tune/examples/mnist_pytorch.py#L62 so we can safely leave `"gpu": 1`.
If you leave `"gpu": 1`, the example will not run if Ray does not detect a GPU.
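A small sketch of the configuration the discussion converged on, with an argparse flag standing in for the example's real one (the empty `parse_args([])` call is only there so the snippet runs on its own):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no-cuda", action="store_true", default=False)
args = parser.parse_args([])  # parse no CLI args so the sketch is standalone

experiment_spec = {
    "resources_per_trial": {
        "cpu": 3,
        # Request a GPU per trial only when CUDA is enabled, so the example
        # still runs on CPU-only machines and in the smoke tests.
        "gpu": int(not args.no_cuda),
    },
}
print(experiment_spec)
```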
2019-01-30T00:24:26
ray-project/ray
3,937
ray-project__ray-3937
[ "3935", "3935" ]
002531b199aaeb40d267b4ac50dbc7ae7689da1b
diff --git a/python/ray/autoscaler/aws/node_provider.py b/python/ray/autoscaler/aws/node_provider.py --- a/python/ray/autoscaler/aws/node_provider.py +++ b/python/ray/autoscaler/aws/node_provider.py @@ -127,11 +127,11 @@ def nodes(self, tag_filters): return [node.id for node in nodes] def is_running(self, node_id): - node = self._node(node_id) + node = self._get_cached_node(node_id) return node.state["Name"] == "running" def is_terminated(self, node_id): - node = self._node(node_id) + node = self._get_cached_node(node_id) state = node.state["Name"] return state not in ["running", "pending"] @@ -142,10 +142,20 @@ def node_tags(self, node_id): return dict(d1, **d2) def external_ip(self, node_id): - return self._node(node_id).public_ip_address + node = self._get_cached_node(node_id) + + if node.public_ip_address is None: + node = self._get_node(node_id) + + return node.public_ip_address def internal_ip(self, node_id): - return self._node(node_id).private_ip_address + node = self._get_cached_node(node_id) + + if node.private_ip_address is None: + node = self._get_node(node_id) + + return node.private_ip_address def set_node_tags(self, node_id, tags): with self.tag_cache_lock: @@ -205,7 +215,7 @@ def create_node(self, node_config, tags, count): self.ec2.create_instances(**conf) def terminate_node(self, node_id): - node = self._node(node_id) + node = self._get_cached_node(node_id) node.terminate() self.tag_cache.pop(node_id, None) @@ -218,14 +228,22 @@ def terminate_nodes(self, node_ids): self.tag_cache.pop(node_id, None) self.tag_cache_pending.pop(node_id, None) - def _node(self, node_id): - if node_id not in self.cached_nodes: - self.nodes({}) # Side effect: should cache it. + def _get_node(self, node_id): + """Refresh and get info for this node, updating the cache.""" + self.nodes({}) # Side effect: fetches and caches the node. assert node_id in self.cached_nodes, "Invalid instance id {}".format( node_id) + return self.cached_nodes[node_id] + def _get_cached_node(self, node_id): + """Return node info from cache if possible, otherwise fetches it.""" + if node_id in self.cached_nodes: + return self.cached_nodes[node_id] + + return self._get_node(node_id) + def cleanup(self): self.tag_cache_update_event.set() self.tag_cache_kill_event.set() diff --git a/python/ray/autoscaler/gcp/node_provider.py b/python/ray/autoscaler/gcp/node_provider.py --- a/python/ray/autoscaler/gcp/node_provider.py +++ b/python/ray/autoscaler/gcp/node_provider.py @@ -51,10 +51,6 @@ def __init__(self, provider_config, cluster_name): # excessive DescribeInstances requests. self.cached_nodes = {} - # Cache of ip lookups. We assume IPs never change once assigned. 
- self.internal_ip_cache = {} - self.external_ip_cache = {} - def nodes(self, tag_filters): if tag_filters: label_filter_expr = "(" + " AND ".join([ @@ -97,15 +93,15 @@ def nodes(self, tag_filters): return [i["name"] for i in instances] def is_running(self, node_id): - node = self._node(node_id) + node = self._get_cached_node(node_id) return node["status"] == "RUNNING" def is_terminated(self, node_id): - node = self._node(node_id) + node = self._get_cached_node(node_id) return node["status"] not in {"PROVISIONING", "STAGING", "RUNNING"} def node_tags(self, node_id): - node = self._node(node_id) + node = self._get_cached_node(node_id) labels = node.get("labels", {}) return labels @@ -114,7 +110,7 @@ def set_node_tags(self, node_id, tags): project_id = self.provider_config["project_id"] availability_zone = self.provider_config["availability_zone"] - node = self._node(node_id) + node = self._get_node(node_id) operation = self.compute.instances().setLabels( project=project_id, zone=availability_zone, @@ -130,23 +126,30 @@ def set_node_tags(self, node_id, tags): return result def external_ip(self, node_id): - if node_id in self.external_ip_cache: - return self.external_ip_cache[node_id] - node = self._node(node_id) - # TODO: Is there a better and more reliable way to do this? - ip = (node.get("networkInterfaces", [{}])[0].get( - "accessConfigs", [{}])[0].get("natIP", None)) - if ip: - self.external_ip_cache[node_id] = ip + node = self._get_cached_node(node_id) + + def get_external_ip(node): + return node.get("networkInterfaces", [{}])[0].get( + "accessConfigs", [{}])[0].get("natIP", None) + + ip = get_external_ip(node) + if ip is None: + node = self._get_node(node_id) + ip = get_external_ip(node) + return ip def internal_ip(self, node_id): - if node_id in self.internal_ip_cache: - return self.internal_ip_cache[node_id] - node = self._node(node_id) - ip = node.get("networkInterfaces", [{}])[0].get("networkIP") - if ip: - self.internal_ip_cache[node_id] = ip + node = self._get_cached_node(node_id) + + def get_internal_ip(node): + return node.get("networkInterfaces", [{}])[0].get("networkIP") + + ip = get_internal_ip(node) + if ip is None: + node = self._get_node(node_id) + ip = get_internal_ip(node) + return ip def create_node(self, base_config, tags, count): @@ -206,14 +209,16 @@ def terminate_node(self, node_id): return result - def _node(self, node_id): + def _get_node(self, node_id): + self.nodes({}) # Side effect: fetches and caches the node. + + assert node_id in self.cached_nodes, "Invalid instance id {}".format( + node_id) + + return self.cached_nodes[node_id] + + def _get_cached_node(self, node_id): if node_id in self.cached_nodes: return self.cached_nodes[node_id] - instance = self.compute.instances().get( - project=self.provider_config["project_id"], - zone=self.provider_config["availability_zone"], - instance=node_id, - ).execute() - - return instance + return self._get_node(node_id)
[autoscaler] AWS node provider broken due to external ip never getting updated ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: Source @ https://github.com/ray-project/ray/commit/002531b199aaeb40d267b4ac50dbc7ae7689da1b - **Ray version**: 0.6.2 @ 002531b199aaeb40d267b4ac50dbc7ae7689da1b - **Python version**: 3.6 - **Exact command to reproduce**: `ray teardown -y ${RAY_PATH}/python/ray/autoscaler/aws/example-minimal.yaml && ray exec ${RAY_PATH}/python/ray/autoscaler/aws/example-minimal.yaml "echo test" --start` ### Describe the problem When running the command above, the autoscaler correctly creates a head node but then hangs waiting for its ip. This happens because when the node is first created, we insert the node (without ip) to the node provider's node cache, and then when waiting for the ip we never refresh the node data, thus leading to just infinite loop [here](https://github.com/ray-project/ray/blob/002531b199aaeb40d267b4ac50dbc7ae7689da1b/python/ray/autoscaler/updater.py#L85) until `NODE_START_WAIT_S` breaks it. This didn't happen before because we used an external ip cache, which would not have a key for a node until the external ip actually existed. The behavior was changed here: https://github.com/ray-project/ray/commit/315edab08508e7b4ca07ce22467d76dad4031a89#diff-6c1c6ac4425d69b4155cb41c15eeb277L89 ### Source code / logs ``` (softlearning) ➜ kristian@jensen2: ~/github/hartikainen/ray on master ✗ $ ray teardown -y ~/github/hartikainen/ray/python/ray/autoscaler/aws/example-minimal.yaml && ray exec ~/github/hartikainen/ray/python/ray/autoscaler/aws/example-minimal.yaml "echo test" --start 2019-02-02 16:12:11,822 INFO commands.py:108 -- teardown_cluster: Terminating 1 nodes... 2019-02-02 16:12:13,112 INFO log_timer.py:21 -- teardown_cluster: Termination done. [LogTimer=1291ms] 2019-02-02 16:12:14,149 INFO commands.py:172 -- get_or_create_head_node: Launching new head node... 2019-02-02 16:12:15,510 INFO commands.py:185 -- get_or_create_head_node: Updating files on head node... 2019-02-02 16:12:15,512 INFO updater.py:126 -- NodeUpdater: i-0a4c4fcfc2ee6a75b: Updating to 7d28d7c48d6357f567a59188f2e558932a05ac3f 2019-02-02 16:12:15,512 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:12:15,633 INFO log_timer.py:21 -- AWSNodeProvider: Set tag ray-node-status=waiting-for-ssh on ['i-0a4c4fcfc2ee6a75b'] [LogTimer=120ms] 2019-02-02 16:12:25,520 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:12:35,529 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:12:45,534 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:12:55,541 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:13:05,549 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:13:15,556 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:13:25,566 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:13:35,576 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:13:45,584 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:13:55,594 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 
2019-02-02 16:14:05,605 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:14:15,615 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:14:25,624 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:14:35,632 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:14:45,642 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:14:55,653 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:15:05,660 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:15:15,668 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:15:25,676 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:15:35,684 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:15:45,692 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:15:55,700 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:16:05,708 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:16:15,718 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:16:25,728 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:16:35,736 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:16:45,744 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:16:55,752 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 2019-02-02 16:17:05,761 INFO updater.py:88 -- NodeUpdater: Waiting for IP of i-0a4c4fcfc2ee6a75b... 
2019-02-02 16:17:15,764 INFO log_timer.py:21 -- NodeUpdater: i-0a4c4fcfc2ee6a75b: Got IP [LogTimer=300251ms] 2019-02-02 16:17:15,764 INFO log_timer.py:21 -- NodeUpdater: i-0a4c4fcfc2ee6a75b: Applied config 7d28d7c48d6357f567a59188f2e558932a05ac3f [LogTimer=300251ms] 2019-02-02 16:17:15,764 ERROR updater.py:138 -- NodeUpdater: i-0a4c4fcfc2ee6a75b: Error updating Unable to find IP of node Exception in thread Thread-3: Traceback (most recent call last): File "/home/kristian/conda/envs/softlearning/lib/python3.6/threading.py", line 916, in _bootstrap_inner self.run() File "/home/kristian/github/hartikainen/ray/python/ray/autoscaler/updater.py", line 141, in run raise e File "/home/kristian/github/hartikainen/ray/python/ray/autoscaler/updater.py", line 130, in run self.do_update() File "/home/kristian/github/hartikainen/ray/python/ray/autoscaler/updater.py", line 183, in do_update self.set_ssh_ip_if_required() File "/home/kristian/github/hartikainen/ray/python/ray/autoscaler/updater.py", line 105, in set_ssh_ip_if_required assert ip is not None, "Unable to find IP of node" AssertionError: Unable to find IP of node ```
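The fix that landed keeps the node cache but refreshes it whenever a requested field is still missing. A self-contained analogue of that pattern, with a fake cloud API in place of boto3/GCE (all names below are illustrative):

```python
# FakeCloudAPI stands in for the cloud SDK; names here are illustrative only.
class FakeCloudAPI(object):
    def __init__(self):
        self._ips = {"i-123": None}

    def assign_ip(self, node_id, ip):
        self._ips[node_id] = ip

    def describe(self, node_id):
        return {"id": node_id, "public_ip": self._ips[node_id]}


class NodeProvider(object):
    def __init__(self, api):
        self.api = api
        self.cached_nodes = {}

    def _get_node(self, node_id):
        # Refresh from the provider and update the cache.
        node = self.api.describe(node_id)
        self.cached_nodes[node_id] = node
        return node

    def _get_cached_node(self, node_id):
        return self.cached_nodes.get(node_id) or self._get_node(node_id)

    def external_ip(self, node_id):
        node = self._get_cached_node(node_id)
        if node["public_ip"] is None:
            # The IP may not exist yet at creation time; force a refresh.
            node = self._get_node(node_id)
        return node["public_ip"]


api = FakeCloudAPI()
provider = NodeProvider(api)
print(provider.external_ip("i-123"))  # None: not assigned yet
api.assign_ip("i-123", "203.0.113.7")
print(provider.external_ip("i-123"))  # refreshed on the next call
```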
2019-02-03T03:03:19
ray-project/ray
3,951
ray-project__ray-3951
[ "3949" ]
b2b84177905d591e9f38b752b60c0f86aa25085b
diff --git a/python/ray/tune/function_runner.py b/python/ray/tune/function_runner.py --- a/python/ray/tune/function_runner.py +++ b/python/ray/tune/function_runner.py @@ -8,7 +8,7 @@ from ray.tune import TuneError from ray.tune.trainable import Trainable -from ray.tune.result import TIMESTEPS_TOTAL +from ray.tune.result import TIMESTEPS_TOTAL, TRAINING_ITERATION logger = logging.getLogger(__name__) @@ -28,6 +28,7 @@ def __init__(self): self._lock = threading.Lock() self._error = None self._done = False + self._iteration = 0 def __call__(self, **kwargs): """Report updated training status. @@ -44,6 +45,7 @@ def __call__(self, **kwargs): with self._lock: self._latest_result = self._last_result = kwargs.copy() + self._iteration += 1 def _get_and_clear_status(self): if self._error: @@ -55,10 +57,13 @@ def _get_and_clear_status(self): "last result. To avoid this, include done=True " "upon the last reporter call.") self._last_result.update(done=True) + self._last_result.setdefault(TRAINING_ITERATION, self._iteration) return self._last_result with self._lock: res = self._latest_result self._latest_result = None + if res: + res.setdefault(TRAINING_ITERATION, self._iteration) return res def _stop(self): diff --git a/python/ray/tune/suggest/bayesopt.py b/python/ray/tune/suggest/bayesopt.py --- a/python/ray/tune/suggest/bayesopt.py +++ b/python/ray/tune/suggest/bayesopt.py @@ -96,7 +96,8 @@ def on_trial_complete(self, self.optimizer.register( params=self._live_trial_mapping[trial_id], target=result[self._reward_attr]) - del self._live_trial_mapping[trial_id] + + del self._live_trial_mapping[trial_id] def _num_live_trials(self): return len(self._live_trial_mapping) diff --git a/python/ray/tune/trainable.py b/python/ray/tune/trainable.py --- a/python/ray/tune/trainable.py +++ b/python/ray/tune/trainable.py @@ -18,9 +18,9 @@ import ray from ray.tune.logger import UnifiedLogger -from ray.tune.result import (DEFAULT_RESULTS_DIR, TIME_THIS_ITER_S, - TIMESTEPS_THIS_ITER, DONE, TIMESTEPS_TOTAL, - EPISODES_THIS_ITER, EPISODES_TOTAL) +from ray.tune.result import ( + DEFAULT_RESULTS_DIR, TIME_THIS_ITER_S, TIMESTEPS_THIS_ITER, DONE, + TIMESTEPS_TOTAL, EPISODES_THIS_ITER, EPISODES_TOTAL, TRAINING_ITERATION) from ray.tune.trial import Resources logger = logging.getLogger(__name__) @@ -181,6 +181,7 @@ def train(self): # self._timesteps_total should not override user-provided total result.setdefault(TIMESTEPS_TOTAL, self._timesteps_total) result.setdefault(EPISODES_TOTAL, self._episodes_total) + result.setdefault(TRAINING_ITERATION, self._iteration) # Provides auto-filled neg_mean_loss for avoiding regressions if result.get("mean_loss"): @@ -191,7 +192,6 @@ def train(self): experiment_id=self._experiment_id, date=now.strftime("%Y-%m-%d_%H-%M-%S"), timestamp=int(time.mktime(now.timetuple())), - training_iteration=self._iteration, time_this_iter_s=time_this_iter, time_total_s=self._time_total, pid=os.getpid(),
diff --git a/python/ray/tune/test/trial_runner_test.py b/python/ray/tune/test/trial_runner_test.py --- a/python/ray/tune/test/trial_runner_test.py +++ b/python/ray/tune/test/trial_runner_test.py @@ -19,7 +19,7 @@ from ray.tune.schedulers import TrialScheduler, FIFOScheduler from ray.tune.registry import _global_registry, TRAINABLE_CLASS from ray.tune.result import (DEFAULT_RESULTS_DIR, TIMESTEPS_TOTAL, DONE, - EPISODES_TOTAL) + EPISODES_TOTAL, TRAINING_ITERATION) from ray.tune.logger import Logger from ray.tune.util import pin_in_object_store, get_pinned_object from ray.tune.experiment import Experiment @@ -559,6 +559,28 @@ def _restore(self, state): self.assertEqual(trial.status, Trial.TERMINATED) self.assertTrue(trial.has_checkpoint()) + def testIterationCounter(self): + def train(config, reporter): + for i in range(100): + reporter(itr=i, done=i == 99) + + register_trainable("exp", train) + config = { + "my_exp": { + "run": "exp", + "config": { + "iterations": 100, + }, + "stop": { + "timesteps_total": 100 + }, + } + } + [trial] = run_experiments(config) + self.assertEqual(trial.status, Trial.TERMINATED) + self.assertEqual(trial.last_result[TRAINING_ITERATION], 100) + self.assertEqual(trial.last_result["itr"], 99) + class RunExperimentTest(unittest.TestCase): def setUp(self):
[tune] Incorrect trial/iteration increment for function API throughout trial ### Describe the problem <!-- Describe the problem clearly here. --> There's an issue with the trial/iteration counter when running experiments with the functional API. Similar to #3834, but the issue persists throughout the experiment, not just the end. ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. --> ``` == Status == Using FIFO scheduling algorithm. Resources requested: 1/4 CPUs, 0/0 GPUs Memory usage on this node: 6.6/8.6 GB Result logdir: /Users/andrewtan/ray_results/my_exp RUNNING trials: - exp_0: RUNNING Result for exp_0: date: 2019-02-04_16-40-41 done: true experiment_id: df4e206afbd4446cb7a9c8257c12cbc4 hostname: airbears2-10-142-33-12.airbears2.1918.berkeley.edu iterations_since_restore: 1 itr: 99 node_ip: 10.142.33.12 pid: 96939 time_since_restore: 1.0051090717315674 time_this_iter_s: 1.0051090717315674 time_total_s: 1.0051090717315674 timestamp: 1549327241 timesteps_since_restore: 0 training_iteration: 1 == Status == Using FIFO scheduling algorithm. Resources requested: 0/4 CPUs, 0/0 GPUs Memory usage on this node: 6.8/8.6 GB Result logdir: /Users/andrewtan/ray_results/my_exp TERMINATED trials: - exp_0: TERMINATED [pid=96939], 1 s, 1 iter == Status == Using FIFO scheduling algorithm. Resources requested: 0/4 CPUs, 0/0 GPUs Memory usage on this node: 6.8/8.6 GB Result logdir: /Users/andrewtan/ray_results/my_exp TERMINATED trials: - exp_0: TERMINATED [pid=96939], 1 s, 1 iter ``` The test file run for this has a stopping criteria of 100 iterations. However, the logger shows that the experiment ended at iteration 1. It seems like the number of iterations logged by the Function Runner and Trainable are not synced
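A minimal reproduction in the spirit of the regression test added with the fix; after the fix, `training_iteration` in the final result should be 100 rather than 1 (assuming a local `ray.init()` is acceptable):

```python
import ray
from ray.tune import register_trainable, run_experiments


def train(config, reporter):
    # Report 100 times; each reporter() call should bump training_iteration.
    for i in range(100):
        reporter(itr=i, done=(i == 99))


ray.init()
register_trainable("exp", train)
[trial] = run_experiments({
    "my_exp": {
        "run": "exp",
        "stop": {"timesteps_total": 100},
    },
})
print(trial.last_result["training_iteration"])  # expected: 100 after the fix
```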
2019-02-05T09:54:58
ray-project/ray
4,043
ray-project__ray-4043
[ "3912", "3963" ]
1fb56a4316dcd5aacb3da4c46e4dd3e404413cdf
diff --git a/python/ray/tune/examples/tune_mnist_ray_hyperband.py b/python/ray/tune/examples/tune_mnist_ray_hyperband.py --- a/python/ray/tune/examples/tune_mnist_ray_hyperband.py +++ b/python/ray/tune/examples/tune_mnist_ray_hyperband.py @@ -199,11 +199,13 @@ def _train(self): return {"mean_accuracy": train_accuracy} def _save(self, checkpoint_dir): - return self.saver.save( + prefix = self.saver.save( self.sess, checkpoint_dir + "/save", global_step=self.iterations) + return {"prefix": prefix} - def _restore(self, path): - return self.saver.restore(self.sess, path) + def _restore(self, ckpt_data): + prefix = ckpt_data["prefix"] + return self.saver.restore(self.sess, prefix) # !!! Example of using the ray.tune Python API !!! @@ -229,7 +231,7 @@ def _restore(self, path): } if args.smoke_test: - mnist_spec['stop']['training_iteration'] = 2 + mnist_spec['stop']['training_iteration'] = 20 mnist_spec['num_samples'] = 2 ray.init()
[tune] PBT (Memory Checkpointing) using TF-Saver is broken
Another issue on the same topic is that the [TensorFlow examples](https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/tune_mnist_ray_hyperband.py) do not work correctly for any training scheduler that requires checkpointing. The issue is related (as far as my understanding goes) to the fact that after running trainable.restore(...), Ray deletes the temporary directory holding the TensorFlow checkpoint. The TensorFlow call self.saver.restore(self.sess, path) only declares the restore; it does not restore the model at the point where it is declared. The model variables are actually restored when session.run(...) is performed, which is when the variables are loaded from the saved checkpoint into the tf.Graph(). Since the checkpoint directory has already been deleted (in trainable.restore(...)), the actual restore fails. Why does ray.tune delete the checkpoint directory in trainable.restore(...)?

_Originally posted by @agniszczotka in https://github.com/ray-project/ray/issues/2856#issuecomment-459011993_

[tune] Checkpointing with tensorflow no longer works
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:
- **Ray installed from (source or binary)**: binary
- **Ray version**: 0.6.2
- **Python version**: 3.6.7
- **Exact command to reproduce**: https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/tune_mnist_ray_hyperband.py

### Describe the problem
Saving and restoring only works with single files in the newest version of Ray. TensorFlow stores multiple files during checkpointing, so the TensorFlow example for saving and restoring is broken.
```
def _save(self, checkpoint_dir):
    return self.saver.save(
        self.sess, checkpoint_dir + "/save", global_step=self.iterations)

def _restore(self, path):
    return self.saver.restore(self.sess, path)
```
https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/tune_mnist_ray_hyperband.py

### Source code / logs
ValueError: The returned checkpoint path does not exist: ray_results/2019-02-06_15-56-40yekao5r5/checkpoint_0/save
I believe this commit broke the functionality https://github.com/ray-project/ray/commit/f9b58d7b0252f312e7b0983ffefabf68d3e7a939#diff-0ba1385a99926375f91eca677328f64c Replacing the save and restore functions with the old variants fixes the problem ``` class ExtendedTrainable(Trainable, ABC): def save(self, checkpoint_dir=None): """Saves the current model state to a checkpoint. Subclasses should override ``_save()`` instead to save state. This method dumps additional metadata alongside the saved path. Args: checkpoint_dir (str): Optional dir to place the checkpoint. Returns: Checkpoint path that may be passed to restore(). """ checkpoint_path = self._save(checkpoint_dir or self.logdir) with open(checkpoint_path + ".tune_metadata", "wb") as file: pickle.dump({ "experiment_id": self._experiment_id, "iteration": self._iteration, "timesteps_total": self._timesteps_total, "time_total": self._time_total, "episodes_total": self._episodes_total, }, file) return checkpoint_path def restore(self, checkpoint_path): with open(checkpoint_path + ".tune_metadata", "rb") as file: metadata = pickle.load(file) self._experiment_id = metadata["experiment_id"] self._iteration = metadata["iteration"] self._timesteps_total = metadata["timesteps_total"] self._time_total = metadata["time_total"] self._episodes_total = metadata["episodes_total"] self._restore(checkpoint_path) self._restored = True ```
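The fix that was eventually merged keeps the newer save/restore flow but has `_save` return a dict, which Tune serializes itself, so the multi-file TensorFlow checkpoint no longer hinges on a single path staying valid. A condensed sketch of that shape (TF1-style; the tiny `_setup` graph is made up so the class is self-contained, and the `_setup(self, config)` signature is assumed for this Tune version):

```python
import tensorflow as tf
from ray.tune import Trainable


class TFCheckpointable(Trainable):
    def _setup(self, config):
        self.step = tf.Variable(0, name="step")  # trivial stand-in graph
        self.sess = tf.Session()
        self.sess.run(tf.global_variables_initializer())
        self.saver = tf.train.Saver()
        self.iterations = 0

    def _train(self):
        self.iterations += 1
        return {"mean_accuracy": 0.5}

    def _save(self, checkpoint_dir):
        prefix = self.saver.save(
            self.sess, checkpoint_dir + "/save", global_step=self.iterations)
        # Returning a dict lets Tune pickle the checkpoint data instead of
        # tracking a single file path on disk.
        return {"prefix": prefix}

    def _restore(self, ckpt_data):
        self.saver.restore(self.sess, ckpt_data["prefix"])
```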
2019-02-14T02:12:37
ray-project/ray
4,108
ray-project__ray-4108
[ "4103" ]
acf4d53b55779822873a438bf696aaa59537a1c0
diff --git a/python/ray/rllib/agents/ddpg/ddpg_policy_graph.py b/python/ray/rllib/agents/ddpg/ddpg_policy_graph.py --- a/python/ray/rllib/agents/ddpg/ddpg_policy_graph.py +++ b/python/ray/rllib/agents/ddpg/ddpg_policy_graph.py @@ -138,7 +138,7 @@ def __init__(self, q_t_selected = tf.squeeze(q_t, axis=len(q_t.shape) - 1) if twin_q: - twin_q_t_selected = tf.squeeze(q_t, axis=len(q_t.shape) - 1) + twin_q_t_selected = tf.squeeze(twin_q_t, axis=len(q_t.shape) - 1) q_tp1 = tf.minimum(q_tp1, twin_q_tp1) q_tp1_best = tf.squeeze(input=q_tp1, axis=len(q_tp1.shape) - 1)
[rllib] Question regarding twin-Q usage in DDPG
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04
- **Ray installed from (source or binary)**: source
- **Ray version**: latest
- **Python version**: 3.6
- **Exact command to reproduce**:

### Describe the problem
When the `twin_q` functionality is activated in DDPG, the following loss-specific operations are defined when creating the policy graph (see `ActorCriticLoss`):
```
q_t_selected = tf.squeeze(q_t, axis=len(q_t.shape) - 1)

if twin_q:
    twin_q_t_selected = tf.squeeze(q_t, axis=len(q_t.shape) - 1)
    q_tp1 = tf.minimum(q_tp1, twin_q_tp1)
```
In this case `q_t_selected` and `twin_q_t_selected` hold the same operation. `twin_q_t_selected` probably needs to be adjusted as follows:
```
twin_q_t_selected = tf.squeeze(twin_q_t, axis=len(q_t.shape) - 1)
```
Yes, that looks like a bug. cc @joneswong yes, we must optimize both Q-networks; twin_q_t should be used here.
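For clarity, a tiny TF1-style sketch of the one-line change, with placeholders standing in for the real critic outputs:

```python
import tensorflow as tf

q_t = tf.placeholder(tf.float32, [None, 1], name="q_t")
twin_q_t = tf.placeholder(tf.float32, [None, 1], name="twin_q_t")

q_t_selected = tf.squeeze(q_t, axis=len(q_t.shape) - 1)
# Before the fix, q_t was squeezed a second time here and twin_q_t was never
# used in the loss; squeezing twin_q_t lets both critics be optimized.
twin_q_t_selected = tf.squeeze(twin_q_t, axis=len(twin_q_t.shape) - 1)
```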
2019-02-21T05:36:02
ray-project/ray
4,114
ray-project__ray-4114
[ "4113" ]
524e69a82d85c61de0ecb93d2d49ef2649e0dd41
diff --git a/python/ray/rllib/rollout.py b/python/ray/rllib/rollout.py --- a/python/ray/rllib/rollout.py +++ b/python/ray/rllib/rollout.py @@ -73,15 +73,15 @@ def run(args, parser): if not config: # Load configuration from file config_dir = os.path.dirname(args.checkpoint) - config_path = os.path.join(config_dir, "params.json") + config_path = os.path.join(config_dir, "params.pkl") if not os.path.exists(config_path): - config_path = os.path.join(config_dir, "../params.json") + config_path = os.path.join(config_dir, "../params.pkl") if not os.path.exists(config_path): raise ValueError( - "Could not find params.json in either the checkpoint dir or " + "Could not find params.pkl in either the checkpoint dir or " "its parent directory.") - with open(config_path) as f: - config = json.load(f) + with open(config_path, 'rb') as f: + config = pickle.load(f) if "num_workers" in config: config["num_workers"] = min(2, config["num_workers"]) @@ -102,18 +102,18 @@ def run(args, parser): def rollout(agent, env_name, num_steps, out=None, no_render=True): if hasattr(agent, "local_evaluator"): env = agent.local_evaluator.env + multiagent = agent.local_evaluator.multiagent + if multiagent: + policy_agent_mapping = agent.config["multiagent"][ + "policy_mapping_fn"] + mapping_cache = {} + policy_map = agent.local_evaluator.policy_map + state_init = {p: m.get_initial_state() for p, m in policy_map.items()} + use_lstm = {p: len(s) > 0 for p, s in state_init.items()} else: env = gym.make(env_name) - - if hasattr(agent, "local_evaluator"): - state_init = agent.local_evaluator.policy_map[ - "default"].get_initial_state() - else: - state_init = [] - if state_init: - use_lstm = True - else: - use_lstm = False + multiagent = False + use_lstm = {'default': False} if out is not None: rollouts = [] @@ -125,13 +125,39 @@ def rollout(agent, env_name, num_steps, out=None, no_render=True): done = False reward_total = 0.0 while not done and steps < (num_steps or steps + 1): - if use_lstm: - action, state_init, logits = agent.compute_action( - state, state=state_init) + if multiagent: + action_dict = {} + for agent_id in state.keys(): + a_state = state[agent_id] + if a_state is not None: + policy_id = mapping_cache.setdefault( + agent_id, policy_agent_mapping(agent_id)) + p_use_lstm = use_lstm[policy_id] + if p_use_lstm: + a_action, p_state_init, _ = agent.compute_action( + a_state, + state=state_init[policy_id], + policy_id=policy_id) + state_init[policy_id] = p_state_init + else: + a_action = agent.compute_action( + a_state, policy_id=policy_id) + action_dict[agent_id] = a_action + action = action_dict else: - action = agent.compute_action(state) + if use_lstm["default"]: + action, state_init, _ = agent.compute_action( + state, state=state_init) + else: + action = agent.compute_action(state) + next_state, reward, done, _ = env.step(action) - reward_total += reward + + if multiagent: + done = done["__all__"] + reward_total += sum(reward.values()) + else: + reward_total += reward if not no_render: env.render() if out is not None: @@ -141,6 +167,7 @@ def rollout(agent, env_name, num_steps, out=None, no_render=True): if out is not None: rollouts.append(rollout) print("Episode reward", reward_total) + if out is not None: pickle.dump(rollouts, open(out, "wb"))
[rllib] make rollout script support multiagent Hi, if I'm not mistaken, only a single agent/policy is currently supported in rollout.py. For instance, https://github.com/ray-project/ray/blob/2e30f7ba386e716bf80f019dcd473b67d83abb95/python/ray/rllib/rollout.py#L109-L110 references the default policy to check whether the policy uses an LSTM, which fails when a multi-agent configuration is loaded. Thanks!
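A simplified, self-contained sketch of the per-agent loop the rollout script needs; `FakeAgent` and the observation dict are made up, and the real fix also threads RNN state and per-policy LSTM flags through this loop:

```python
# FakeAgent stands in for a trained RLlib agent.
class FakeAgent(object):
    def compute_action(self, observation, policy_id="default"):
        return 0  # a real agent would run the mapped policy here


def compute_actions(agent, obs_dict, policy_mapping_fn, mapping_cache):
    actions = {}
    for agent_id, obs in obs_dict.items():
        # Map each agent id to a policy id once and reuse it thereafter.
        policy_id = mapping_cache.setdefault(agent_id,
                                             policy_mapping_fn(agent_id))
        actions[agent_id] = agent.compute_action(obs, policy_id=policy_id)
    return actions


obs = {"car_0": [0.0], "car_1": [1.0]}
cache = {}
print(compute_actions(FakeAgent(), obs, lambda aid: "shared_policy", cache))
```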
2019-02-21T14:14:12
ray-project/ray
4,175
ray-project__ray-4175
[ "4117" ]
33663bef942bc591ae80e32419c2a7c139cf5d00
diff --git a/python/setup.py b/python/setup.py --- a/python/setup.py +++ b/python/setup.py @@ -24,7 +24,9 @@ "ray/core/src/ray/gcs/redis_module/libray_redis_module.so", "ray/core/src/plasma/plasma_store_server", "ray/_raylet.so", "ray/core/src/ray/raylet/raylet_monitor", "ray/core/src/ray/raylet/raylet", - "ray/WebUI.ipynb" + "ray/WebUI.ipynb", "ray/dashboard/dashboard.py", + "ray/dashboard/index.html", "ray/dashboard/res/main.css", + "ray/dashboard/res/main.js" ] # These are the directories where automatically generated Python flatbuffer
dashboard.py is not packaged in the Linux Ray wheels. See the conversation in https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/ray-dev/M8wGAdEhkTw/QIbvbuoJBAAJ. I think we can fix this just by putting `__init__.py` in the `ray/python/dashboard` directory, though we have to make sure that it includes the html and javascript files. cc @virtualluke
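The merged fix simply appends the dashboard files to the wheel's file list in Ray's setup.py. For readers less familiar with that pattern, a generic setuptools sketch of shipping non-Python assets with a package; the project name, package name, and file names below are illustrative, not Ray's actual setup:

```python
# Illustrative setup.py sketch (not Ray's real one): ship HTML/CSS/JS assets
# alongside the Python sources by declaring them as package data.
from setuptools import setup, find_packages

setup(
    name="example-dashboard",
    version="0.0.1",
    packages=find_packages(),
    package_data={
        "example_dashboard": ["index.html", "res/main.css", "res/main.js"],
    },
    include_package_data=True,
)
```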
2019-02-27T02:36:52
ray-project/ray
4,181
ray-project__ray-4181
[ "1444" ]
5bfcfa8ec89316041b45ce9bcde9180f2471c001
diff --git a/python/ray/autoscaler/updater.py b/python/ray/autoscaler/updater.py --- a/python/ray/autoscaler/updater.py +++ b/python/ray/autoscaler/updater.py @@ -271,9 +271,10 @@ def ssh_cmd(self, ssh.append("-tt") if emulate_interactive: force_interactive = ( - "set -i || true && source ~/.bashrc && " + "true && source ~/.bashrc && " "export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && ") - cmd = "bash --login -c {}".format(quote(force_interactive + cmd)) + cmd = "bash --login -c -i {}".format( + quote(force_interactive + cmd)) if port_forward is None: ssh_opt = []
[autoscaler] bash 4.4 does not support set -i
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 17.10
- **Ray installed from (source or binary)**: Source
- **Ray version**: commit b8811cbe3418ab0d3ea10deaa54947d5bb26cecf
- **Python version**: 3.6
- **Exact command to reproduce**: ray create_or_update example.yaml

### Describe the problem
As of bash 4.4, `set -i` is no longer accepted to create an interactive shell. Consider `-t` instead.
Tried using -t, this creates problems down the line. Will try to come up with a better suggestion... OTOH -- why try to force interactive shells anyway? Thanks @atypic, I believe the motivation was so that the sequence of commands, e.g., https://github.com/ray-project/ray/blob/215d526e0d605e2f090da3c7b1ec66c990bec89c/python/ray/autoscaler/aws/development-example.yaml#L69-L80 would share "state" with each other. E.g., modifying the `PATH` in one command carries over to the next. @robertnishihara the commands are actually still separate sessions. The original motivation was so that the setup commands can run with a similar PATH as an interactive shell (otherwise, you'd have to prepend all your commands with things like `PATH=~/anaconda3/bin:$PATH`). You might think that just auto-prepending `source ~/.bashrc` to each command would be sufficient, but it turns out many bashrc scripts early exit if the session is non-interactive. Hence the `set -i`. One solution might just be to remove the `set -i`, this is not as nice from the user perspective but avoids weird pitfalls. This stackoverflow post has a better explanation: https://stackoverflow.com/questions/940533/how-do-i-set-path-such-that-ssh-userhost-command-works I tried to remove set -i and modify the yaml accordingly, however something odd happens to the startup of the redis server, making me unable to connect to the socket. The server is running and all seems fine, but ray.init() fails ("Unable to connect to socket /tmp/<id>"...). I do something similar to `source /home/ubuntu/anaconda3/bin/activate MyEnvironment; ray start --head <...> ` in the head_start_ray_commands. I had to go back to bash 4.3 to get set -i. ray start perhaps depends on something in Env/.bashrc? I have yet to investigate :-) Anyway... Somehow depending on the behavior of bash seems wonky, as people like to use a variety of shells even on AWS. Another solution could perhaps be to have a "PATH" section in the yaml, and use .ssh/environment. Node provisioning is always such a mess :(
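A small sketch of how the fixed command string can be assembled in Python, mirroring the change to `updater.py`: request interactivity via bash's own `-i` flag instead of `set -i`, and shell-quote the payload as a single argument.

```python
try:
    from shlex import quote  # Python 3
except ImportError:
    from pipes import quote  # Python 2


def build_interactive_cmd(cmd):
    prefix = ("true && source ~/.bashrc && "
              "export OMP_NUM_THREADS=1 PYTHONWARNINGS=ignore && ")
    # bash 4.4 rejects `set -i`, so ask for an interactive shell up front.
    return "bash --login -c -i {}".format(quote(prefix + cmd))


print(build_interactive_cmd("ray stop"))
```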
2019-02-27T09:45:22
ray-project/ray
4,195
ray-project__ray-4195
[ "4192", "4192" ]
484708d44db8feb7754f8fa829324bee5ac68bd6
diff --git a/python/ray/remote_function.py b/python/ray/remote_function.py --- a/python/ray/remote_function.py +++ b/python/ray/remote_function.py @@ -57,9 +57,11 @@ def __init__(self, function, num_cpus, num_gpus, resources, self._function_signature = ray.signature.extract_signature( self._function) - # # Export the function. + # Export the function. worker = ray.worker.get_global_worker() worker.function_actor_manager.export(self) + # In which session this function was exported last time. + self._last_export_session = worker._session_index def __call__(self, *args, **kwargs): raise Exception("Remote functions cannot be called directly. Instead " @@ -97,6 +99,13 @@ def _remote(self, """An experimental alternate way to submit remote functions.""" worker = ray.worker.get_global_worker() worker.check_connected() + + if self._last_export_session < worker._session_index: + # If this function was exported in a previous session, we need to + # export this function again, because current GCS doesn't have it. + self._last_export_session = worker._session_index + worker.function_actor_manager.export(self) + kwargs = {} if kwargs is None else kwargs args = ray.signature.extend_args(self._function_signature, args, kwargs) diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -161,6 +161,9 @@ def __init__(self): # This event is checked regularly by all of the threads so that they # know when to exit. self.threads_stopped = threading.Event() + # Index of the current session. This number will + # increment every time when `ray.shutdown` is called. + self._session_index = 0 @property def task_context(self): @@ -2061,6 +2064,7 @@ def disconnect(): if hasattr(worker, "logger_thread"): worker.logger_thread.join() worker.threads_stopped.clear() + worker._session_index += 1 worker.connected = False worker.cached_functions_to_run = []
diff --git a/python/ray/tests/test_object_manager.py b/python/ray/tests/test_object_manager.py --- a/python/ray/tests/test_object_manager.py +++ b/python/ray/tests/test_object_manager.py @@ -13,6 +13,11 @@ import ray from ray.tests.cluster_utils import Cluster +# TODO(yuhguo): This test file requires a lot of CPU/memory, and +# better be put in Jenkins. However, it fails frequently in Jenkins, but +# works well in Travis. We should consider moving it back to Jenkins once +# we figure out the reason. + if (multiprocessing.cpu_count() < 40 or ray.utils.get_system_memory() < 50 * 10**9): warnings.warn("This test must be run on large machines.")
Problem with multiple ray.init and ray.shutdown
```python
import ray

@ray.remote
def f():
    return 1

ray.init(num_cpus=1)
print(ray.get(f.remote()))
ray.shutdown()

ray.init(num_cpus=1)
print(ray.get(f.remote()))
ray.shutdown()
```
The above script will hang at the second `ray.get(f.remote())` with the error message `This worker was asked to execute a function that it does not have registered. You may have to restart Ray.` This is because the function is only registered with the first GCS. I discovered this issue when I was trying to consolidate the Python tests in Travis, see https://github.com/ray-project/ray/blob/387c98cf015e484e8f20fb648f4a378b8f58bf27/.travis.yml#L182-L184 cc @williamma12 Similar but not the same as #3897
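The fix tracks a per-driver session index and re-exports a remote function whenever it was last exported in an older session. A toy, Ray-free analogue of that bookkeeping (all class names here are illustrative):

```python
# Session stands in for the driver/GCS lifecycle; restart() mimics
# ray.shutdown() followed by ray.init() against a fresh backend.
class Session(object):
    def __init__(self):
        self.index = 0
        self.registry = set()

    def restart(self):
        self.index += 1
        self.registry = set()  # a fresh GCS knows nothing about old exports


class RemoteFunction(object):
    def __init__(self, name, session):
        self.name = name
        self._last_export_session = session.index
        session.registry.add(name)

    def remote(self, session):
        if self._last_export_session < session.index:
            # Exported in a previous session; the new backend needs it again.
            self._last_export_session = session.index
            session.registry.add(self.name)
        assert self.name in session.registry  # would otherwise hang


s = Session()
f = RemoteFunction("f", s)
f.remote(s)
s.restart()
f.remote(s)  # re-exported instead of hanging
```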
2019-02-28T07:24:33
ray-project/ray
4,232
ray-project__ray-4232
[ "3965" ]
ba030482542cb670f5169b0d13254ec68c927deb
diff --git a/python/ray/tune/trainable.py b/python/ray/tune/trainable.py --- a/python/ray/tune/trainable.py +++ b/python/ray/tune/trainable.py @@ -245,14 +245,15 @@ def save(self, checkpoint_dir=None): raise ValueError( "`_save` must return a dict or string type: {}".format( str(type(checkpoint)))) - pickle.dump({ - "experiment_id": self._experiment_id, - "iteration": self._iteration, - "timesteps_total": self._timesteps_total, - "time_total": self._time_total, - "episodes_total": self._episodes_total, - "saved_as_dict": saved_as_dict - }, open(checkpoint_path + ".tune_metadata", "wb")) + with open(checkpoint_path + ".tune_metadata", "wb") as f: + pickle.dump({ + "experiment_id": self._experiment_id, + "iteration": self._iteration, + "timesteps_total": self._timesteps_total, + "time_total": self._time_total, + "episodes_total": self._episodes_total, + "saved_as_dict": saved_as_dict + }, f) return checkpoint_path def save_to_object(self): @@ -271,7 +272,8 @@ def save_to_object(self): for path in os.listdir(base_dir): path = os.path.join(base_dir, path) if path.startswith(checkpoint_prefix): - data[os.path.basename(path)] = open(path, "rb").read() + with open(path, "rb") as f: + data[os.path.basename(path)] = f.read() out = io.BytesIO() data_dict = pickle.dumps({ @@ -294,7 +296,8 @@ def restore(self, checkpoint_path): This method restores additional metadata saved with the checkpoint. """ - metadata = pickle.load(open(checkpoint_path + ".tune_metadata", "rb")) + with open(checkpoint_path + ".tune_metadata", "rb") as f: + metadata = pickle.load(f) self._experiment_id = metadata["experiment_id"] self._iteration = metadata["iteration"] self._timesteps_total = metadata["timesteps_total"]
[tune] Open file handles but never close them in Trainable
This file is cluttered with functions that open file handles but never close them: https://github.com/ray-project/ray/blob/master/python/ray/tune/trainable.py
Aren't they closed automatically by Python ref counting?

They are garbage collected eventually, but it is bad practice and results in a bunch of warnings in my stderr: https://stackoverflow.com/q/1832528/1116010

I see, that makes sense to clean up then.
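For reference, a minimal sketch of the pattern the fix moves to: wrapping `pickle.dump`/`pickle.load` in `with` blocks so the handles are closed deterministically instead of relying on garbage collection (the path and metadata dict here are illustrative only):

```python
import pickle

checkpoint_path = "/tmp/checkpoint-1"  # illustrative path
metadata = {"iteration": 10, "time_total": 123.4}

# Write: the file is closed as soon as the block exits, even on error.
with open(checkpoint_path + ".tune_metadata", "wb") as f:
    pickle.dump(metadata, f)

# Read back with the same pattern.
with open(checkpoint_path + ".tune_metadata", "rb") as f:
    restored = pickle.load(f)

assert restored == metadata
```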
2019-03-03T16:56:59
ray-project/ray
4,237
ray-project__ray-4237
[ "4233" ]
348328225489ab18041e4a10152af5adda9366b0
diff --git a/python/ray/tune/trial.py b/python/ray/tune/trial.py --- a/python/ray/tune/trial.py +++ b/python/ray/tune/trial.py @@ -299,6 +299,24 @@ def __init__(self, self.error_file = None self.num_failures = 0 + # AutoML fields + self.results = None + self.best_result = None + self.param_config = None + self.extra_arg = None + + self._nonjson_fields = [ + "_checkpoint", + "config", + "loggers", + "sync_function", + "last_result", + "results", + "best_result", + "param_config", + "extra_arg", + ] + self.trial_name = None if trial_name_creator: self.trial_name = trial_name_creator(self) @@ -509,17 +527,8 @@ def __getstate__(self): state = self.__dict__.copy() state["resources"] = resources_to_json(self.resources) - # These are non-pickleable entries. - pickle_data = { - "_checkpoint": self._checkpoint, - "config": self.config, - "loggers": self.loggers, - "sync_function": self.sync_function, - "last_result": self.last_result - } - - for key, value in pickle_data.items(): - state[key] = binary_to_hex(cloudpickle.dumps(value)) + for key in self._nonjson_fields: + state[key] = binary_to_hex(cloudpickle.dumps(state.get(key))) state["runner"] = None state["result_logger"] = None @@ -535,10 +544,7 @@ def __getstate__(self): def __setstate__(self, state): logger_started = state.pop("__logger_started__") state["resources"] = json_to_resources(state["resources"]) - for key in [ - "_checkpoint", "config", "loggers", "sync_function", - "last_result" - ]: + for key in self._nonjson_fields: state[key] = cloudpickle.loads(hex_to_binary(state[key])) self.__dict__.update(state)
[tune] genetic_example.py --smoke-test output should be reduced ``` 2019-03-03 10:58:40,611 INFO genetic_searcher.py:68 -- [GENETIC SEARCH] Generate the 1th generation, population=10 2019-03-03 10:58:40,678 INFO search_policy.py:115 -- =========== BEGIN Experiment-Round: 1 [10 NEW | 10 TOTAL] =========== 2019-03-03 10:58:41,430 ERROR trial_runner.py:252 -- Trial Runner checkpointing failed. Traceback (most recent call last): File "/ray/python/ray/tune/trial_runner.py", line 250, in step self.checkpoint() File "/ray/python/ray/tune/trial_runner.py", line 144, in checkpoint json.dump(runner_state, f, indent=2) File "/opt/conda/lib/python2.7/json/__init__.py", line 189, in dump for chunk in iterable: File "/opt/conda/lib/python2.7/json/encoder.py", line 434, in _iterencode for chunk in _iterencode_dict(o, _current_indent_level): File "/opt/conda/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict for chunk in chunks: File "/opt/conda/lib/python2.7/json/encoder.py", line 332, in _iterencode_list for chunk in chunks: File "/opt/conda/lib/python2.7/json/encoder.py", line 408, in _iterencode_dict for chunk in chunks: File "/opt/conda/lib/python2.7/json/encoder.py", line 332, in _iterencode_list for chunk in chunks: File "/opt/conda/lib/python2.7/json/encoder.py", line 442, in _iterencode o = _default(o) File "/opt/conda/lib/python2.7/json/encoder.py", line 184, in default raise TypeError(repr(o) + " is not JSON serializable") TypeError: array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) is not JSON serializable 2019-03-03 10:58:41,449 ERROR trial_runner.py:252 -- Trial Runner checkpointing failed. ```
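The underlying technique in the fix is to route fields that are not JSON-serializable (such as NumPy arrays) through cloudpickle and a hex encoding before the JSON checkpoint dump, mirroring what the patch does with Ray's `binary_to_hex`/`cloudpickle` helpers. A minimal standalone sketch of that idea (names and values are illustrative; it assumes `cloudpickle` and `numpy` are installed, which are Ray dependencies):

```python
import binascii
import json

import cloudpickle
import numpy as np


def to_json_safe(value):
    # Serialize an arbitrary Python object to a hex string that json can store.
    return binascii.hexlify(cloudpickle.dumps(value)).decode()


def from_json_safe(hex_str):
    return cloudpickle.loads(binascii.unhexlify(hex_str))


state = {"step": 3, "weights": to_json_safe(np.zeros(100))}
blob = json.dumps(state, indent=2)  # no longer raises TypeError
weights = from_json_safe(json.loads(blob)["weights"])
print(weights.shape)
```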
2019-03-03T22:19:31
ray-project/ray
4,305
ray-project__ray-4305
[ "4300" ]
4c80177d6f3d402618b988badbe330d31816396a
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -125,7 +125,11 @@ def __call__(self, *args, **kwargs): def remote(self, *args, **kwargs): return self._remote(args, kwargs) - def _remote(self, args, kwargs, num_return_vals=None): + def _remote(self, args=None, kwargs=None, num_return_vals=None): + if args is None: + args = [] + if kwargs is None: + kwargs = {} if num_return_vals is None: num_return_vals = self._num_return_vals @@ -233,8 +237,8 @@ def remote(self, *args, **kwargs): return self._remote(args=args, kwargs=kwargs) def _remote(self, - args, - kwargs, + args=None, + kwargs=None, num_cpus=None, num_gpus=None, resources=None): @@ -255,6 +259,11 @@ def _remote(self, Returns: A handle to the newly created actor. """ + if args is None: + args = [] + if kwargs is None: + kwargs = {} + worker = ray.worker.get_global_worker() if worker.mode is None: raise Exception("Actors cannot be created before ray.init() " @@ -293,10 +302,6 @@ def _remote(self, actor_placement_resources = resources.copy() actor_placement_resources["CPU"] += 1 - if args is None: - args = [] - if kwargs is None: - kwargs = {} function_name = "__init__" function_signature = self._method_signatures[function_name] creation_args = signature.extend_args(function_signature, args, diff --git a/python/ray/remote_function.py b/python/ray/remote_function.py --- a/python/ray/remote_function.py +++ b/python/ray/remote_function.py @@ -107,6 +107,7 @@ def _remote(self, worker.function_actor_manager.export(self) kwargs = {} if kwargs is None else kwargs + args = [] if args is None else args args = ray.signature.extend_args(self._function_signature, args, kwargs)
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -827,51 +827,63 @@ def m(x): assert ray.get(k2.remote(1)) == 2 assert ray.get(m.remote(1)) == 2 - def test_submit_api(shutdown_only): - ray.init(num_cpus=1, num_gpus=1, resources={"Custom": 1}) - @ray.remote - def f(n): - return list(range(n)) +def test_submit_api(shutdown_only): + ray.init(num_cpus=1, num_gpus=1, resources={"Custom": 1}) - @ray.remote - def g(): - return ray.get_gpu_ids() + @ray.remote + def f(n): + return list(range(n)) - assert f._remote([0], num_return_vals=0) is None - id1 = f._remote(args=[1], num_return_vals=1) - assert ray.get(id1) == [0] - id1, id2 = f._remote(args=[2], num_return_vals=2) - assert ray.get([id1, id2]) == [0, 1] - id1, id2, id3 = f._remote(args=[3], num_return_vals=3) - assert ray.get([id1, id2, id3]) == [0, 1, 2] - assert ray.get( - g._remote( - args=[], num_cpus=1, num_gpus=1, - resources={"Custom": 1})) == [0] - infeasible_id = g._remote(args=[], resources={"NonexistentCustom": 1}) - ready_ids, remaining_ids = ray.wait([infeasible_id], timeout=0.05) - assert len(ready_ids) == 0 - assert len(remaining_ids) == 1 + @ray.remote + def g(): + return ray.get_gpu_ids() + + assert f._remote([0], num_return_vals=0) is None + id1 = f._remote(args=[1], num_return_vals=1) + assert ray.get(id1) == [0] + id1, id2 = f._remote(args=[2], num_return_vals=2) + assert ray.get([id1, id2]) == [0, 1] + id1, id2, id3 = f._remote(args=[3], num_return_vals=3) + assert ray.get([id1, id2, id3]) == [0, 1, 2] + assert ray.get( + g._remote(args=[], num_cpus=1, num_gpus=1, + resources={"Custom": 1})) == [0] + infeasible_id = g._remote(args=[], resources={"NonexistentCustom": 1}) + assert ray.get(g._remote()) == [] + ready_ids, remaining_ids = ray.wait([infeasible_id], timeout=0.05) + assert len(ready_ids) == 0 + assert len(remaining_ids) == 1 - @ray.remote - class Actor(object): - def __init__(self, x, y=0): - self.x = x - self.y = y + @ray.remote + class Actor(object): + def __init__(self, x, y=0): + self.x = x + self.y = y - def method(self, a, b=0): - return self.x, self.y, a, b + def method(self, a, b=0): + return self.x, self.y, a, b + + def gpu_ids(self): + return ray.get_gpu_ids() + + @ray.remote + class Actor2(object): + def __init__(self): + pass + + def method(self): + pass - def gpu_ids(self): - return ray.get_gpu_ids() + a = Actor._remote( + args=[0], kwargs={"y": 1}, num_gpus=1, resources={"Custom": 1}) - a = Actor._remote( - args=[0], kwargs={"y": 1}, num_gpus=1, resources={"Custom": 1}) + a2 = Actor2._remote() + ray.get(a2.method._remote()) - id1, id2, id3, id4 = a.method._remote( - args=["test"], kwargs={"b": 2}, num_return_vals=4) - assert ray.get([id1, id2, id3, id4]) == [0, 1, "test", 2] + id1, id2, id3, id4 = a.method._remote( + args=["test"], kwargs={"b": 2}, num_return_vals=4) + assert ray.get([id1, id2, id3, id4]) == [0, 1, "test", 2] def test_get_multiple(shutdown_only):
All arguments to `_remote` should be optional. Currently `args` and `kwargs` are required.
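As a side note, the Python idiom used by the fix is to default would-be mutable arguments to `None` and normalize them inside the function, rather than using `[]`/`{}` defaults that would be shared across calls; a tiny illustrative sketch:

```python
def _remote(args=None, kwargs=None, num_return_vals=None):
    # Defaulting to None (rather than [] / {}) avoids sharing one mutable
    # default object across every call.
    if args is None:
        args = []
    if kwargs is None:
        kwargs = {}
    if num_return_vals is None:
        num_return_vals = 1
    return args, kwargs, num_return_vals


# All of these now work:
print(_remote())
print(_remote(args=[1, 2]))
print(_remote(args=[1], kwargs={"x": 3}, num_return_vals=2))
```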
2019-03-08T05:29:14
ray-project/ray
4,323
ray-project__ray-4323
[ "4312" ]
59079a799cf1939935ba2c0a1509d504cacd70ea
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -21,8 +21,6 @@ from ray import (ObjectID, ActorID, ActorHandleID, ActorClassID, TaskID, DriverID) -DEFAULT_ACTOR_METHOD_NUM_RETURN_VALS = 1 - logger = logging.getLogger(__name__) @@ -166,7 +164,7 @@ class ActorClass(object): """ def __init__(self, modified_class, class_id, max_reconstructions, num_cpus, - num_gpus, resources, actor_method_cpus): + num_gpus, resources): self._modified_class = modified_class self._class_id = class_id self._class_name = modified_class.__name__ @@ -174,7 +172,6 @@ def __init__(self, modified_class, class_id, max_reconstructions, num_cpus, self._num_cpus = num_cpus self._num_gpus = num_gpus self._resources = resources - self._actor_method_cpus = actor_method_cpus self._exported = False self._actor_methods = inspect.getmembers( @@ -215,7 +212,7 @@ def __init__(self): method.__ray_num_return_vals__) else: self._actor_method_num_return_vals[method_name] = ( - DEFAULT_ACTOR_METHOD_NUM_RETURN_VALS) + ray_constants.DEFAULT_ACTOR_METHOD_NUM_RETURN_VALS) def __call__(self, *args, **kwargs): raise Exception("Actors methods cannot be instantiated directly. " @@ -276,6 +273,25 @@ def _remote(self, # updated to reflect the new invocation. actor_cursor = None + # Set the actor's default resources if not already set. First three + # conditions are to check that no resources were specified in the + # decorator. Last three conditions are to check that no resources were + # specified when _remote() was called. + if (self._num_cpus is None and self._num_gpus is None + and self._resources is None and num_cpus is None + and num_gpus is None and resources is None): + # In the default case, actors acquire no resources for + # their lifetime, and actor methods will require 1 CPU. + cpus_to_use = ray_constants.DEFAULT_ACTOR_CREATION_CPU_SIMPLE + actor_method_cpu = ray_constants.DEFAULT_ACTOR_METHOD_CPU_SIMPLE + else: + # If any resources are specified (here or in decorator), then + # all resources are acquired for the actor's lifetime and no + # resources are associated with methods. + cpus_to_use = (ray_constants.DEFAULT_ACTOR_CREATION_CPU_SPECIFIED + if self._num_cpus is None else self._num_cpus) + actor_method_cpu = ray_constants.DEFAULT_ACTOR_METHOD_CPU_SPECIFIED + # Do not export the actor class or the actor if run in LOCAL_MODE # Instead, instantiate the actor locally and add it to the worker's # dictionary @@ -290,15 +306,15 @@ def _remote(self, self._exported = True resources = ray.utils.resources_from_resource_arguments( - self._num_cpus, self._num_gpus, self._resources, num_cpus, + cpus_to_use, self._num_gpus, self._resources, num_cpus, num_gpus, resources) # If the actor methods require CPU resources, then set the required # placement resources. If actor_placement_resources is empty, then # the required placement resources will be the same as resources. 
actor_placement_resources = {} - assert self._actor_method_cpus in [0, 1] - if self._actor_method_cpus == 1: + assert actor_method_cpu in [0, 1] + if actor_method_cpu == 1: actor_placement_resources = resources.copy() actor_placement_resources["CPU"] += 1 @@ -322,8 +338,8 @@ def _remote(self, actor_handle = ActorHandle( actor_id, self._modified_class.__module__, self._class_name, actor_cursor, self._actor_method_names, self._method_signatures, - self._actor_method_num_return_vals, actor_cursor, - self._actor_method_cpus, worker.task_driver_id) + self._actor_method_num_return_vals, actor_cursor, actor_method_cpu, + worker.task_driver_id) # We increment the actor counter by 1 to account for the actor creation # task. actor_handle._ray_actor_counter += 1 @@ -664,8 +680,7 @@ def __setstate__(self, state): return self._deserialization_helper(state, False) -def make_actor(cls, num_cpus, num_gpus, resources, actor_method_cpus, - max_reconstructions): +def make_actor(cls, num_cpus, num_gpus, resources, max_reconstructions): # Give an error if cls is an old-style class. if not issubclass(cls, object): raise TypeError( @@ -720,7 +735,7 @@ def __ray_checkpoint__(self): class_id = ActorClassID(_random_string()) return ActorClass(Class, class_id, max_reconstructions, num_cpus, num_gpus, - resources, actor_method_cpus) + resources) ray.worker.global_worker.make_actor = make_actor diff --git a/python/ray/ray_constants.py b/python/ray/ray_constants.py --- a/python/ray/ray_constants.py +++ b/python/ray/ray_constants.py @@ -25,6 +25,17 @@ def env_integer(key, default): # The smallest cap on the memory used by Redis that we allow. REDIS_MINIMUM_MEMORY_BYTES = 10**7 +# Default resource requirements for actors when no resource requirements are +# specified. +DEFAULT_ACTOR_METHOD_CPU_SIMPLE = 1 +DEFAULT_ACTOR_CREATION_CPU_SIMPLE = 0 +# Default resource requirements for actors when some resource requirements are +# specified in . +DEFAULT_ACTOR_METHOD_CPU_SPECIFIED = 0 +DEFAULT_ACTOR_CREATION_CPU_SPECIFIED = 1 +# Default number of return values for each actor method. +DEFAULT_ACTOR_METHOD_NUM_RETURN_VALS = 1 + # If a remote function or actor (or some other export) has serialized size # greater than this quantity, print an warning. PICKLE_OBJECT_WARNING_SIZE = 10**7 diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -76,15 +76,6 @@ ERROR_KEY_PREFIX = b"Error:" -# Default resource requirements for actors when no resource requirements are -# specified. -DEFAULT_ACTOR_METHOD_CPUS_SIMPLE_CASE = 1 -DEFAULT_ACTOR_CREATION_CPUS_SIMPLE_CASE = 0 -# Default resource requirements for actors when some resource requirements are -# specified. -DEFAULT_ACTOR_METHOD_CPUS_SPECIFIED_CASE = 0 -DEFAULT_ACTOR_CREATION_CPUS_SPECIFIED_CASE = 1 - # Logger for this module. It should be configured at the entry point # into the program using Ray. Ray provides a default configuration at # entry/init points. @@ -2480,23 +2471,8 @@ def decorator(function_or_class): raise Exception("The keyword 'max_calls' is not allowed for " "actors.") - # Set the actor default resources. - if num_cpus is None and num_gpus is None and resources is None: - # In the default case, actors acquire no resources for - # their lifetime, and actor methods will require 1 CPU. 
- cpus_to_use = DEFAULT_ACTOR_CREATION_CPUS_SIMPLE_CASE - actor_method_cpus = DEFAULT_ACTOR_METHOD_CPUS_SIMPLE_CASE - else: - # If any resources are specified, then all resources are - # acquired for the actor's lifetime and no resources are - # associated with methods. - cpus_to_use = (DEFAULT_ACTOR_CREATION_CPUS_SPECIFIED_CASE - if num_cpus is None else num_cpus) - actor_method_cpus = DEFAULT_ACTOR_METHOD_CPUS_SPECIFIED_CASE - - return worker.make_actor(function_or_class, cpus_to_use, num_gpus, - resources, actor_method_cpus, - max_reconstructions) + return worker.make_actor(function_or_class, num_cpus, num_gpus, + resources, max_reconstructions) raise Exception("The @ray.remote decorator must be applied to " "either a function or to a class.")
diff --git a/ci/long_running_tests/workloads/many_actor_tasks.py b/ci/long_running_tests/workloads/many_actor_tasks.py --- a/ci/long_running_tests/workloads/many_actor_tasks.py +++ b/ci/long_running_tests/workloads/many_actor_tasks.py @@ -36,9 +36,7 @@ # Run the workload. -# TODO (williamma12): Remove the num_cpus argument once -# https://github.com/ray-project/ray/issues/4312 gets resolved [email protected](num_cpus=0.1) [email protected] class Actor(object): def __init__(self): self.value = 0 @@ -47,10 +45,8 @@ def method(self): self.value += 1 -# TODO (williamma12): Update the actors to each have only 0.1 of a cpu once -# https://github.com/ray-project/ray/issues/4312 gets resolved. actors = [ - Actor._remote([], {}, resources={str(i % num_nodes): 0.1}) + Actor._remote([], {}, num_cpus=0.1, resources={str(i % num_nodes): 0.1}) for i in range(num_nodes * 5) ] diff --git a/python/ray/tests/test_actor.py b/python/ray/tests/test_actor.py --- a/python/ray/tests/test_actor.py +++ b/python/ray/tests/test_actor.py @@ -502,6 +502,89 @@ def echo(self, value): assert ray.get(a.g.remote(2)) == 4 +def test_resource_assignment(shutdown_only): + """Test to make sure that we assign resource to actors at instantiation.""" + # This test will create 16 actors. Declaring this many CPUs initially will + # speed up the test because the workers will be started ahead of time. + ray.init(num_cpus=16, num_gpus=1, resources={"Custom": 1}) + + class Actor(object): + def __init__(self): + self.resources = ray.get_resource_ids() + + def get_actor_resources(self): + return self.resources + + def get_actor_method_resources(self): + return ray.get_resource_ids() + + decorator_resource_args = [{}, { + "num_cpus": 0.1 + }, { + "num_gpus": 0.1 + }, { + "resources": { + "Custom": 0.1 + } + }] + instantiation_resource_args = [{}, { + "num_cpus": 0.2 + }, { + "num_gpus": 0.2 + }, { + "resources": { + "Custom": 0.2 + } + }] + for decorator_args in decorator_resource_args: + for instantiation_args in instantiation_resource_args: + if len(decorator_args) == 0: + actor_class = ray.remote(Actor) + else: + actor_class = ray.remote(**decorator_args)(Actor) + actor = actor_class._remote(**instantiation_args) + actor_resources = ray.get(actor.get_actor_resources.remote()) + actor_method_resources = ray.get( + actor.get_actor_method_resources.remote()) + if len(decorator_args) == 0 and len(instantiation_args) == 0: + assert len(actor_resources) == 0, ( + "Actor should not be assigned resources.") + assert list(actor_method_resources.keys()) == [ + "CPU" + ], ("Actor method should only have CPUs") + assert actor_method_resources["CPU"][0][1] == 1, ( + "Actor method should default to one cpu.") + else: + if ("num_cpus" not in decorator_args + and "num_cpus" not in instantiation_args): + assert actor_resources["CPU"][0][1] == 1, ( + "Actor should default to one cpu.") + correct_resources = {} + defined_resources = decorator_args.copy() + defined_resources.update(instantiation_args) + for resource, value in defined_resources.items(): + if resource == "num_cpus": + correct_resources["CPU"] = value + elif resource == "num_gpus": + correct_resources["GPU"] = value + elif resource == "resources": + for custom_resource, amount in value.items(): + correct_resources[custom_resource] = amount + for resource, amount in correct_resources.items(): + assert (actor_resources[resource][0][0] == + actor_method_resources[resource][0][0]), ( + "Should have assigned same {} for both actor ", + "and actor method.".format(resource)) + assert 
(actor_resources[resource][0][ + 1] == actor_method_resources[resource][0][1]), ( + "Should have assigned same amount of {} for both ", + "actor and actor method.".format(resource)) + assert actor_resources[resource][0][1] == amount, ( + "Actor should have {amount} {resource} but has ", + "{amount} {resource}".format( + amount=amount, resource=resource)) + + def test_multiple_actors(ray_start_regular): @ray.remote class Counter(object): diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -807,7 +807,7 @@ def m(x): def test_submit_api(shutdown_only): - ray.init(num_cpus=1, num_gpus=1, resources={"Custom": 1}) + ray.init(num_cpus=2, num_gpus=1, resources={"Custom": 1}) @ray.remote def f(n): diff --git a/python/ray/tests/test_object_manager.py b/python/ray/tests/test_object_manager.py --- a/python/ray/tests/test_object_manager.py +++ b/python/ray/tests/test_object_manager.py @@ -139,8 +139,11 @@ def set_weights(self, x): pass actors = [ - Actor._remote(args=[], kwargs={}, resources={str(i % num_nodes): 1}) - for i in range(100) + Actor._remote( + args=[], + kwargs={}, + num_cpus=0.01, + resources={str(i % num_nodes): 1}) for i in range(100) ] # Wait for the actors to start up.
Actor resources set when during decoration instead of instantiation ### Describe the problem Actor's resources are set during decoration instead of during instantiation, which causes errors similar to #4255 and below. ### Source code / logs ``` import ray ray.init() @ray.remote class Actor: def __init__(self): x = 1 def method(self): x += 1 a = actor._remote([], {}, num_cpus=0.1) a.method.remote() ``` The above code results in ``` ObjectID(01000000242c0f1ad83b6cb4f21ed5d78d541556) 2019-03-08 10:54:51,920 ERROR worker.py:1752 -- A worker died or was killed while executing task 00000000242c0f1ad83b6cb4f21ed5d78d541556. 2019-03-08 10:54:51,921 ERROR worker.py:1752 -- A worker died or was killed while executing task 00000000acf22b4eb780458806ffed9e8a45b60e. (pid=3883) WARNING: Logging before InitGoogleLogging() is written to STDERR (pid=3883) F0308 10:54:51.906103 1691321792 raylet_client.cc:263] Check failed: whole_fraction == resource_fraction (pid=3883) *** Check failure stack trace: *** (pid=3883) Fatal Python error: Aborted (pid=3883) (pid=3883) Stack (most recent call first): (pid=3883) File "/Users/William/Documents/ray/python/ray/worker.py", line 992 in _get_next_task_from_local_scheduler (pid=3883) File "/Users/William/Documents/ray/python/ray/worker.py", line 1009 in main_loop (pid=3883) File "/Users/William/Documents/ray/python/ray/workers/default_worker.py", line 111 in <module> (pid=3886) WARNING: Logging before InitGoogleLogging() is written to STDERR (pid=3886) F0308 10:54:51.906775 1758442944 raylet_client.cc:263] Check failed: whole_fraction == resource_fraction (pid=3886) *** Check failure stack trace: *** (pid=3886) Fatal Python error: Aborted (pid=3886) (pid=3886) Stack (most recent call first): (pid=3886) File "/Users/William/Documents/ray/python/ray/worker.py", line 992 in _get_next_task_from_local_scheduler (pid=3886) File "/Users/William/Documents/ray/python/ray/worker.py", line 1009 in main_loop (pid=3886) File "/Users/William/Documents/ray/python/ray/workers/default_worker.py", line 111 in <module> ```
@robertnishihara
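For reference, the intended usage that the reported crash blocks is passing resources at instantiation time through `_remote`. With the fix in place, instantiation-time resources are expected to behave like decorator-time ones; a rough usage sketch (the `Counter` class and values are illustrative only):

```python
import ray

ray.init(num_cpus=2)


@ray.remote
class Counter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value


# Resources requested here, at instantiation, rather than in the decorator.
counter = Counter._remote(args=[], kwargs={}, num_cpus=0.1)
print(ray.get(counter.increment.remote()))
```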
2019-03-11T01:46:52
ray-project/ray
4,336
ray-project__ray-4336
[ "4290" ]
7ff56ce82684cb6064ad213aa999ac05cb6aa4d2
diff --git a/python/ray/rllib/rollout.py b/python/ray/rllib/rollout.py --- a/python/ray/rllib/rollout.py +++ b/python/ray/rllib/rollout.py @@ -12,6 +12,7 @@ import gym import ray from ray.rllib.agents.registry import get_agent_class +from ray.tune.util import merge_dicts EXAMPLE_USAGE = """ Example Usage via RLlib CLI: @@ -69,22 +70,23 @@ def create_parser(parser_creator=None): def run(args, parser): - config = args.config - if not config: - # Load configuration from file - config_dir = os.path.dirname(args.checkpoint) - config_path = os.path.join(config_dir, "params.pkl") - if not os.path.exists(config_path): - config_path = os.path.join(config_dir, "../params.pkl") - if not os.path.exists(config_path): + config = {} + # Load configuration from file + config_dir = os.path.dirname(args.checkpoint) + config_path = os.path.join(config_dir, "params.pkl") + if not os.path.exists(config_path): + config_path = os.path.join(config_dir, "../params.pkl") + if not os.path.exists(config_path): + if not args.config: raise ValueError( "Could not find params.pkl in either the checkpoint dir or " "its parent directory.") + else: with open(config_path, 'rb') as f: config = pickle.load(f) - if "num_workers" in config: - config["num_workers"] = min(2, config["num_workers"]) - + if "num_workers" in config: + config["num_workers"] = min(2, config["num_workers"]) + config = merge_dicts(config, args.config) if not args.env: if not config.get("env"): parser.error("the following arguments are required: --env")
rllib rollout does not load the model automatically from params.json

### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux 4.4.0-135-generic x86_64
- **Python version**: Python 3.6.5

### Describe the problem
rllib rollout does not load the model automatically from params.json for a simple 256x256x256x256 model. When I run rllib rollout without specifying `--config` with `"model": {"fcnet_hiddens": [256, 256, 256, 256]}`, it fails with the following error.

### Source code / logs
```
assert len(vector) == i, "Passed weight does not have the correct shape."
AssertionError: Passed weight does not have the correct shape.
```
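The direction of the fix is to always load the saved config that Tune writes next to the checkpoint and then overlay any `--config` overrides on top of it. A rough standalone sketch of that logic (the file name mirrors Tune's `params.pkl` convention; the function name and error text are illustrative):

```python
import os
import pickle


def load_rollout_config(checkpoint, cli_config):
    """Load params.pkl from the checkpoint dir (or its parent) and
    overlay the CLI-provided config on top of it."""
    config = {}
    config_dir = os.path.dirname(checkpoint)
    for candidate in (os.path.join(config_dir, "params.pkl"),
                      os.path.join(config_dir, "..", "params.pkl")):
        if os.path.exists(candidate):
            with open(candidate, "rb") as f:
                config = pickle.load(f)
            break
    else:
        if not cli_config:
            raise ValueError("Could not find params.pkl near the checkpoint.")
    # Shallow merge for illustration; Tune's merge_dicts helper additionally
    # merges nested dicts such as the "model" section.
    config.update(cli_config)
    return config
```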
2019-03-12T05:07:25
ray-project/ray
4,379
ray-project__ray-4379
[ "4366" ]
a6a5b344b954e21e41098cd04986991b61f65b24
diff --git a/python/ray/tune/ray_trial_executor.py b/python/ray/tune/ray_trial_executor.py --- a/python/ray/tune/ray_trial_executor.py +++ b/python/ray/tune/ray_trial_executor.py @@ -18,6 +18,7 @@ logger = logging.getLogger(__name__) +RESOURCE_REFRESH_PERIOD = 0.5 # Refresh resources every 500 ms BOTTLENECK_WARN_PERIOD_S = 60 NONTRIVIAL_WAIT_TIME_THRESHOLD_S = 1e-3 @@ -34,18 +35,24 @@ def unwrap(self): class RayTrialExecutor(TrialExecutor): """An implemention of TrialExecutor based on Ray.""" - def __init__(self, queue_trials=False, reuse_actors=False): + def __init__(self, + queue_trials=False, + reuse_actors=False, + refresh_period=RESOURCE_REFRESH_PERIOD): super(RayTrialExecutor, self).__init__(queue_trials) self._running = {} # Since trial resume after paused should not run # trial.train.remote(), thus no more new remote object id generated. # We use self._paused to store paused trials here. self._paused = {} + self._reuse_actors = reuse_actors + self._cached_actor = None + self._avail_resources = Resources(cpu=0, gpu=0) self._committed_resources = Resources(cpu=0, gpu=0) self._resources_initialized = False - self._reuse_actors = reuse_actors - self._cached_actor = None + self._refresh_period = refresh_period + self._last_resource_refresh = float("-inf") self._last_nontrivial_wait = time.time() if ray.is_initialized(): self._update_avail_resources() @@ -370,11 +377,19 @@ def _update_avail_resources(self, num_retries=5): self._avail_resources = Resources( int(num_cpus), int(num_gpus), custom_resources=custom_resources) + self._last_resource_refresh = time.time() self._resources_initialized = True def has_resources(self, resources): - """Returns whether this runner has at least the specified resources.""" - self._update_avail_resources() + """Returns whether this runner has at least the specified resources. + + This refreshes the Ray cluster resources if the time since last update + has exceeded self._refresh_period. This also assumes that the + cluster is not resizing very frequently. + """ + if time.time() - self._last_resource_refresh > self._refresh_period: + self._update_avail_resources() + currently_available = Resources.subtract(self._avail_resources, self._committed_resources) @@ -445,7 +460,6 @@ def resource_string(self): def on_step_begin(self): """Before step() called, update the available resources.""" - self._update_avail_resources() def save(self, trial, storage=Checkpoint.DISK):
diff --git a/python/ray/tune/tests/test_actor_reuse.py b/python/ray/tune/tests/test_actor_reuse.py --- a/python/ray/tune/tests/test_actor_reuse.py +++ b/python/ray/tune/tests/test_actor_reuse.py @@ -15,27 +15,30 @@ def on_trial_result(self, trial_runner, trial, result): return TrialScheduler.PAUSE -class MyResettableClass(Trainable): - def _setup(self, config): - self.config = config - self.num_resets = 0 - self.iter = 0 +def create_resettable_class(): + class MyResettableClass(Trainable): + def _setup(self, config): + self.config = config + self.num_resets = 0 + self.iter = 0 - def _train(self): - self.iter += 1 - return {"num_resets": self.num_resets, "done": self.iter > 1} + def _train(self): + self.iter += 1 + return {"num_resets": self.num_resets, "done": self.iter > 1} - def _save(self, chkpt_dir): - return {"iter": self.iter} + def _save(self, chkpt_dir): + return {"iter": self.iter} - def _restore(self, item): - self.iter = item["iter"] + def _restore(self, item): + self.iter = item["iter"] - def reset_config(self, new_config): - if "fake_reset_not_supported" in self.config: - return False - self.num_resets += 1 - return True + def reset_config(self, new_config): + if "fake_reset_not_supported" in self.config: + return False + self.num_resets += 1 + return True + + return MyResettableClass class ActorReuseTest(unittest.TestCase): @@ -49,7 +52,7 @@ def testTrialReuseDisabled(self): trials = run_experiments( { "foo": { - "run": MyResettableClass, + "run": create_resettable_class(), "num_samples": 4, "config": {}, } @@ -63,7 +66,7 @@ def testTrialReuseEnabled(self): trials = run_experiments( { "foo": { - "run": MyResettableClass, + "run": create_resettable_class(), "num_samples": 4, "config": {}, } @@ -78,7 +81,7 @@ def run(): run_experiments( { "foo": { - "run": MyResettableClass, + "run": create_resettable_class(), "max_failures": 1, "num_samples": 4, "config": {
[tune] Performance collapses with thousands of queued jobs

### System information
- **Ray version**: 0.6.2 to at least March 12th nightly

When there are thousands of trials queued (not even running), the master node performance is very poor and it processes results from the nodes very slowly. The nodes' CPU utilization is very low. Here is an example where this problem happens:

```
import time
from ray import tune
import ray.tune.util
from ray.tune import trainable


class IdleTrainable(trainable.Trainable):
    def _setup(self, config):
        self.name = config['name']
        self.timeout = config['timeout']

    def _train(self):
        time.sleep(self.timeout)
        string = f"hello from {self.name}"
        print(f"done: {self.name}")
        return {'done': True, 'message': string}


timeout = 3


def build_experiment(i):
    name = f"exp{i}"
    return tune.Experiment(
        run=IdleTrainable,
        name=name,
        config={"name": name, "timeout": timeout},
        #num_samples=1,  # We are using repeats instead.
        checkpoint_at_end=False,
        local_dir="/tmp/rayidle",
        # It seems this means a thread and not a full core.
        resources_per_trial={'cpu': 1})


ray.init()
n_jobs = 2500
experiments = [build_experiment(i) for i in range(n_jobs)]
tune.run_experiments(experiments, verbose=False)
```

You can run this locally and see that the jobs (trials) are processed very slowly. However, if you change `n_jobs = 100`, then the _rate_ at which the jobs are processed is much faster. So this bug is directly related to the length of the list of jobs. I explain why below.

And here is the associated stack trace:

```
ncalls tottime percall cumtime percall filename:lineno(function)
2352/1 0.074 0.000 66.300 66.300 {built-in method builtins.exec}
1 0.000 0.000 66.299 66.299 junk_ray.py:1(<module>)
1 0.000 0.000 62.646 62.646 tune.py:55(run_experiments)
35 0.002 0.000 62.637 1.790 trial_runner.py:218(step)
35 0.000 0.000 43.330 1.238 trial_runner.py:377(_get_next_trial)
35 0.055 0.002 40.574 1.159 trial_scheduler.py:86(choose_trial_to_run)
33375 0.028 0.000 40.519 0.001 trial_runner.py:373(has_resources)
33375 0.089 0.000 40.491 0.001 ray_trial_executor.py:320(has_resources)
33411 0.345 0.000 40.011 0.001 ray_trial_executor.py:292(_update_avail_resources)
167051 0.558 0.000 39.222 0.000 state.py:750(cluster_resources)
167051 0.232 0.000 38.638 0.000 state.py:390(client_table)
167051 2.000 0.000 38.356 0.000 state.py:20(parse_client_table)
34 0.004 0.000 17.156 0.505 trial_runner.py:127(checkpoint)
68 3.035 0.045 17.107 0.252 __init__.py:120(dump)
```

The issue is the performance of the method `FIFOScheduler.choose_trial_to_run(trial_runner)` in tune/scheduler/trial_scheduler.py when there are no resources available. This method loops over the complete list of trials to see if any one of them is small enough to run given the current available resources. But checking each trial ends up calling `parse_client_table(redis_client)` in tune/experimental/state.py, which is very slow (> 1 ms) because it communicates with the Redis server. So if the list of trials contains thousands of items, it will take *many seconds* to iterate over the whole list to realize that not a single job can run when there are 0 CPUs available. This is why the master has very poor performance and can't process results from the nodes. Note that in the main master loop, `FIFOScheduler.choose_trial_to_run(trial_runner)` is basically called in two different situations: right after a job completed and right after a job started.
In the first case, `FIFOScheduler.choose_trial_to_run(trial_runner)` will run very quickly because it will find a job to run given that some resources were just freed by the completed job. However, when a new job just started, it is likely that there are no resources left (unless it's at the very beginning or the user just added new resources) and it is in those cases that `FIFOScheduler.choose_trial_to_run(trial_runner)` iterates over the full list of jobs and finds no runnable job and occupies the master thread for many seconds. So every time a new result comes in and a new job starts, the master thread basically freezes for a few seconds.
Yep, looks like self.has_resources is much more expensive than it needs to be: https://github.com/ray-project/ray/blob/d5f46983056642ad0a2e4dff2a0030845f5c3f03/python/ray/tune/ray_trial_executor.py#L359 A reasonable fix would be to avoid calling `_update_avail_resources` unless there is actually a state change in the cluster. But then it's not clear how the trial executor ever becomes aware that the user added resources. Currently every single pending trial calls `_update_avail_resources`, but maybe it could only be called once instead.

I see, it makes sense to call it once in a while. Perhaps at the beginning of `choose_trial_to_run`?

Actually, I think it would be OK if we just call it on `TrialExecutor.on_step_begin` as already done and remove it from `has_resources`. This is probably fine as the gap between updating the resources and making the admission control decision (in choose_next_trial_to_run) is very small.
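The approach that was eventually taken amounts to rate-limiting the expensive cluster-resource query: cache the last result and only refresh it after a fixed period has elapsed. A generic sketch of that pattern (the 0.5 s period matches the constant added in the patch; `query_cluster_resources` stands in for the real Redis round trip and is illustrative):

```python
import time


class CachedResources(object):
    def __init__(self, query_cluster_resources, refresh_period=0.5):
        self._query = query_cluster_resources  # the expensive call
        self._refresh_period = refresh_period
        self._cached = None
        self._last_refresh = float("-inf")

    def get(self):
        # Only hit the expensive backend if the cache is stale.
        if time.time() - self._last_refresh > self._refresh_period:
            self._cached = self._query()
            self._last_refresh = time.time()
        return self._cached


# Usage with a stand-in query function:
resources = CachedResources(lambda: {"CPU": 8, "GPU": 1})
for _ in range(10000):
    resources.get()  # at most a couple of real queries per second
```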
2019-03-15T08:37:07
ray-project/ray
4,469
ray-project__ray-4469
[ "4467" ]
01699ce4ea52062b8bbf2757ad83da65ae26781f
diff --git a/python/ray/tune/examples/mnist_pytorch.py b/python/ray/tune/examples/mnist_pytorch.py --- a/python/ray/tune/examples/mnist_pytorch.py +++ b/python/ray/tune/examples/mnist_pytorch.py @@ -171,7 +171,6 @@ def test(): tune.run( "TRAIN_FN", name="exp", - verbose=0, scheduler=sched, **{ "stop": { diff --git a/python/ray/tune/examples/mnist_pytorch_trainable.py b/python/ray/tune/examples/mnist_pytorch_trainable.py --- a/python/ray/tune/examples/mnist_pytorch_trainable.py +++ b/python/ray/tune/examples/mnist_pytorch_trainable.py @@ -179,7 +179,6 @@ def _restore(self, checkpoint_path): time_attr="training_iteration", reward_attr="neg_mean_loss") tune.run( TrainMNIST, - verbose=0, scheduler=sched, **{ "stop": { diff --git a/python/ray/tune/examples/tune_mnist_keras.py b/python/ray/tune/examples/tune_mnist_keras.py --- a/python/ray/tune/examples/tune_mnist_keras.py +++ b/python/ray/tune/examples/tune_mnist_keras.py @@ -183,7 +183,6 @@ def create_parser(): tune.run( "TRAIN_FN", name="exp", - verbose=0, scheduler=sched, **{ "stop": {
[tune] EXAMPLE DOESN'T RUN only show failing information from two examples: mnist_pytorch.py and tune_mnist_keras.py <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: > NAME="Ubuntu" > VERSION="16.04.5 LTS (Xenial Xerus)" > ID=ubuntu > ID_LIKE=debian > PRETTY_NAME="Ubuntu 16.04.5 LTS" > VERSION_ID="16.04" > HOME_URL="http://www.ubuntu.com/" > SUPPORT_URL="http://help.ubuntu.com/" > BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" > VERSION_CODENAME=xenial > UBUNTU_CODENAME=xenial - **Ray installed from (source or binary)**: source - **Ray version**: 0.6.5 - **Python version**: Python 3.6.5 - **Exact command to reproduce**: ``` cd ray/python/ray/tune/examples python mnist_pytorch.py ``` pytorch version: > 1.0.0 or ``` cd ray/python/ray/tune/examples python tune_mnist_keras.py ``` TF version: > 1.12.0 keras version: > 2.2.4 <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem Without any modfifications, build Ray from source, try to directly use tune provided examples, but seems most of the examples failed due to the > Destroying actor for trial xxxx. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. Btw, the machine has GPU and the version: > Cuda compilation tools, release 9.0, V9.0.176 However, after trying add `reuse_actors=True` , the same error msg appear. Since the trials are suddenly stopped without any error or exception, could you please help to take a look? @richardliaw @robertnishihara Thanks! ### Source code / logs `python mnist_pytorch.py` > 2019-03-23 23:54:34,913 WARNING worker.py:1406 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes. > 2019-03-23 23:54:34,914 INFO node.py:423 -- Process STDOUT and STDERR is being redirected to /tmp/ray/session_2019-03-23_23-54-34_52746/logs. > 2019-03-23 23:54:35,021 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:24948 to respond... > 2019-03-23 23:54:35,130 INFO services.py:363 -- Waiting for redis server at 127.0.0.1:39939 to respond... > 2019-03-23 23:54:35,132 INFO services.py:760 -- Starting Redis shard with 10.0 GB max memory. > 2019-03-23 23:54:35,147 WARNING services.py:1236 -- Warning: Capping object memory store to 20.0GB. To increase this further, specify `object_store_memory` when calling ray.init() or ray start. > 2019-03-23 23:54:35,148 INFO services.py:1384 -- Starting the Plasma object store with 20.0 GB memory using /dev/shm. > 2019-03-23 23:54:35,793 INFO tune.py:60 -- Tip: to resume incomplete experiments, pass resume='prompt' or resume=True to run() > 2019-03-23 23:54:35,796 INFO tune.py:211 -- Starting a new experiment. > 2019-03-23 23:54:37,283 WARNING util.py:62 -- The `start_trial` operation took 1.3957560062408447 seconds to complete, which may be a performance bottleneck. > 2019-03-23 23:54:58,442 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_0_lr=0.081371,momentum=0.40185. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. 
> 2019-03-23 23:54:58,754 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_3_lr=0.010086,momentum=0.41713. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,133 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_1_lr=0.028139,momentum=0.40255. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,160 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_7_lr=0.030289,momentum=0.55615. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,299 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_5_lr=0.08914,momentum=0.18464. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:54:59,449 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_6_lr=0.066883,momentum=0.68077. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:00,221 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_4_lr=0.059111,momentum=0.82238. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:00,525 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_2_lr=0.063279,momentum=0.43368. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:21,020 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_9_lr=0.084676,momentum=0.45356. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-23 23:55:21,150 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_8_lr=0.051943,momentum=0.6297. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. `python tune_mnist_keras.py` > (pid=57890) 60000 train samples > (pid=57890) 10000 test samples > (pid=57881) x_train shape: (60000, 28, 28, 1) > (pid=57881) 60000 train samples > (pid=57881) 10000 test samples > (pid=57899) x_train shape: (60000, 28, 28, 1) > (pid=57899) 60000 train samples > (pid=57899) 10000 test samples > (pid=57916) x_train shape: (60000, 28, 28, 1) > (pid=57916) 60000 train samples > (pid=57916) 10000 test samples > (pid=57913) x_train shape: (60000, 28, 28, 1) > (pid=57913) 60000 train samples > (pid=57913) 10000 test samples > (pid=57910) x_train shape: (60000, 28, 28, 1) > (pid=57910) 60000 train samples > (pid=57910) 10000 test samples > 2019-03-24 00:09:22,154 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_3_dropout1=0.41208,hidden=53,lr=0.0045996,momentum=0.29457. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:23,633 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_9_dropout1=0.78277,hidden=424,lr=0.085855,momentum=0.11821. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:28,650 WARNING util.py:62 -- The `experiment_checkpoint` operation took 0.14834022521972656 seconds to complete, which may be a performance bottleneck. 
> 2019-03-24 00:09:36,315 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_1_dropout1=0.77148,hidden=307,lr=0.084435,momentum=0.87804. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:09:37,978 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_4_dropout1=0.71993,hidden=442,lr=0.014533,momentum=0.65771. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:18,199 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_6_dropout1=0.72255,hidden=446,lr=0.086364,momentum=0.86826. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:44,899 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_2_dropout1=0.73158,hidden=107,lr=0.087594,momentum=0.5979. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:48,515 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_0_dropout1=0.2571,hidden=236,lr=0.0083709,momentum=0.47214. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:51,434 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_7_dropout1=0.47593,hidden=218,lr=0.067242,momentum=0.85505. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:54,745 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_8_dropout1=0.47459,hidden=383,lr=0.094025,momentum=0.39063. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. > 2019-03-24 00:10:56,552 INFO ray_trial_executor.py:178 -- Destroying actor for trial TRAIN_FN_5_dropout1=0.5431,hidden=429,lr=0.031262,momentum=0.61523. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads.
2019-03-24T04:37:46
ray-project/ray
4,493
ray-project__ray-4493
[ "2887" ]
1bcb0b94cc1d589a93eaedcb3059d6cde27c96c0
diff --git a/python/ray/autoscaler/docker.py b/python/ray/autoscaler/docker.py --- a/python/ray/autoscaler/docker.py +++ b/python/ray/autoscaler/docker.py @@ -98,7 +98,7 @@ def docker_start_cmds(user, image, mount, cname, user_options): cmds.append(" ".join(docker_check + docker_run)) docker_update = [ " && ".join(("apt-get -y update", "apt-get -y upgrade", - "apt-get install -y git wget cmake psmisc")) + "apt-get install -y git wget psmisc")) ] cmds.extend(with_docker_exec(docker_update, container_name=cname)) return cmds diff --git a/python/setup.py b/python/setup.py --- a/python/setup.py +++ b/python/setup.py @@ -17,7 +17,7 @@ # before these files have been created, so we have to move the files # manually. -# NOTE: The lists below must be kept in sync with ray/CMakeLists.txt. +# NOTE: The lists below must be kept in sync with ray/BUILD.bazel. ray_files = [ "ray/core/src/ray/thirdparty/redis/src/redis-server",
diff --git a/ci/long_running_tests/config.yaml b/ci/long_running_tests/config.yaml --- a/ci/long_running_tests/config.yaml +++ b/ci/long_running_tests/config.yaml @@ -47,7 +47,7 @@ setup_commands: - pip install boto3==1.4.8 cython==0.29.0 # # Uncomment the following if you wish to install Ray instead. # - sudo apt-get update - # - sudo apt-get install -y cmake pkg-config build-essential autoconf curl libtool unzip flex bison python + # - sudo apt-get install -y build-essential curl unzip # - git clone https://github.com/ray-project/ray || true # - cd ray/python; git checkout master; git pull; pip install -e . --verbose # Install nightly Ray wheels. diff --git a/ci/stress_tests/stress_testing_config.yaml b/ci/stress_tests/stress_testing_config.yaml --- a/ci/stress_tests/stress_testing_config.yaml +++ b/ci/stress_tests/stress_testing_config.yaml @@ -91,7 +91,7 @@ setup_commands: # - sudo dpkg --configure -a # Install basics. - sudo apt-get update - - sudo apt-get install -y cmake pkg-config build-essential autoconf curl libtool unzip flex bison python + - sudo apt-get install -y build-essential curl unzip # Install Anaconda. - wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh || true - bash Anaconda3-5.0.1-Linux-x86_64.sh -b -p $HOME/anaconda3 || true
Convert build system to Bazel. It seems like using Bazel (https://github.com/bazelbuild/bazel) would improve our build system in a lot of ways. @rshin did an early version of this a long time ago in https://github.com/ray-project/ray-legacy/pull/408. @ericl, @concretevitamin, @pcmoritz have all used Bazel I believe and are proponents. @rsepassi and @dbieber have recently worked on this and have a good sense of what is involved. Any thoughts about this? Any drawbacks to be aware of? cc @chuxi @guoyuhong @raulchen
Will CMake script be retired or will both ways be kept? Currently, there are duplicated building for thirdparty libraries, e.g. zlib for Arrow and Parquet, etc. Avoiding duplicated building may improve the building time. Hi @robertnishihara , actually, while I was working on cmake, I was confused with two problems: 1. how to manage dependency and its transitive dependency in cmake. 2. how to package and deploy. I ever referred to tensorflow project to find out cmake best practice. And draw a principle for thirdparty dependency: `ExternalProject` only. In maven, which is the cornerstone, it has maven center repository and local repository. It helps manage all downloaded jars and you can use them in your maven project. So when it comes to cmake, cmake should also be able to reuse the downloaded dependencies. I would suggest to use `FindXXXX.cmake` to speed up the (second) build process. However, I still can not deal with the transitive problem unless you wrote each `ExternalProject`. So ... for c++ dependency, I need a mechanism to support it. **cmake is not enough**. I did not package ray yet, because it is already packaged by python wheels. But for cmake package mechanism, which is based on gcc library links, I found in tensorflow, it uses static library instead of dynamic for thirdparty libraries. em... I have to admit it is quite reasonable. --- I have not used bazel yet, but I found it is easy to understand once you know cmake. And it has the ability to support other languages, which is good for python and java. Also It has better third-party dependency management. The key problem may be it is too new to cooperate with IDEs, which for me it is important to debug. And at last, if you'd like, we can keep cmake and bazel both solutions. In tensorflow, it has cmake in a standalone directory which contains the whole tensorflow project. @guoyuhong @chuxi We could potentially keep around cmake for IDE reasons, but I think it would be substantially less work to just retire cmake. @chuxi where is the TensorFlow directory that you're talking about? > Will CMake script be retired or will both ways be kept? Currently, there are duplicated building for thirdparty libraries, e.g. zlib for Arrow and Parquet, etc. Avoiding duplicated building may improve the building time. yes, transitive dependency problem. And I am considering to write some findXXX modules, which support to find a directory which contains all dependency. we define a cmake property: CMAKE_EXTERNAL_HOME, by default it is ${CMKAE_CURRENT_BINARY_DIR}/build/external. And then we write some findXXX.cmake modules based on the directory. Then in the ${CMAKE_EXTERNAL_HOME}, all dependencies are kept and the user could get benefit from it. And for user he can change it to his own directory. So for each build he can get the benefit from local pre build projects. > @guoyuhong @chuxi We could potentially keep around cmake for IDE reasons, but I think it would be substantially less work to just retire cmake. @chuxi where is the TensorFlow directory that you're talking about? `tensorflow/contrib/cmake` the directory has cmake files for tensorflow, I read and found it works, but may not be always updated as bazel. 1) I don't have much experience with Bazel. From its web site https://www.bazel.build/, it doesn't claim support for Python. Would this be a problem for us? 2) Will buck (https://buckbuild.com/) also be an option? It's similar to bazel. I don't have a in-depth comparison for buck and bazel. 
But IFAIK, buck has good support for C++, python and java. Bazel supports Python: https://docs.bazel.build/versions/master/be/python.html and is backed by Google. I looked a bit into the currently available python support: - Bazel provides a set of basic rules for Python components and unit tests: https://docs.bazel.build/versions/master/be/python.html#py_test - They have experimental support for PyPI library dependencies: https://github.com/bazelbuild/rules_python - It also looks straightforward to use a shell rule or custom rule to build wheels, for example, here is a such a custom rule: https://github.com/horia141/bazel-pypi-package Overall it looks workable, though it would probably make sense to tackle Python build conversion incrementally since things aren't fully baked here. Re: buck, my impression is that does not have much adoption outside of facebook. We've got all C++ dependencies building internally at v0.5.2. I've opened a PR with a partial BUILD file that still needs the external stuff to be added. Totally fine to not merge that, but thought I'd send along so that you all have a starting point. Ideally the code would be restructured so that there's a BUILD file per directory and things are a bit more decoupled. But that would be a lot more work to disentangle stuff. The BUILD file in the PR was written so as to require no/minimal changes to the source given the cmake build. For the external dependencies, it looks like each would have a [`new_git_repository` rule](https://docs.bazel.build/versions/master/be/workspace.html#new_git_repository) to download them. A [`genrule`](https://docs.bazel.build/versions/master/be/general.html#genrule) could be used to call the cmake command (or whatever bash command needs to be called to build it). A `cc_library` can then wrap all the [compiled units](https://docs.bazel.build/versions/master/cpp-use-cases.html#adding-dependencies-on-precompiled-libraries), or if it's easy you can write BUILD rules for them ([`gtest` example](https://docs.bazel.build/versions/master/cpp-use-cases.html#including-external-libraries)). I had quite some issues in the past installing Bazel to compile some libraries on some supercomputing clusters (like Berkeley's Savio) because it depends on Java. And then you have this problem that to compile a simple thing which does not use Java, you have to get Java onto the system. Was not impressed. @pcmoritz, if we use Arrow binary artifacts instead of compiling Arrow, then the switch to Bazel will probably be much easier. Note - This requires frequent (e.g., nightly) and persistent binary artifacts from Arrow. - This makes it harder to test changes to Arrow in Ray. Can't remember if I had previously shared this, but in case it's relevant to what you're thinking about, Bazel genrule <https://docs.bazel.build/versions/master/be/general.html#genrule> can wrap cmake commands. On Tue, Dec 4, 2018 at 11:15 AM Robert Nishihara <[email protected]> wrote: > @pcmoritz <https://github.com/pcmoritz>, if we use Arrow binary artifacts > instead of compiling Arrow, then the switch to Bazel will probably be much > easier. > > Note > > - This requires frequent (e.g., nightly) and persistent binary > artifacts from Arrow. > - This makes it harder to test changes to Arrow in Ray. > > — > You are receiving this because you were mentioned. 
> Reply to this email directly, view it on GitHub > <https://github.com/ray-project/ray/issues/2887#issuecomment-444221786>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/ABEGW2QswoV-s3Tgs_to-JDjqNe705_Hks5u1snKgaJpZM4WqPnW> > . > To interface with cmake, see also https://github.com/bazelbuild/rules_foreign_cc (cc @irengrig). Thanks @rsepassi and @laurentlb! Mostly done in #3898. We still aren't using Bazel to build our wheels because the Bazel build upgraded Arrow, which is causing other issues (see https://github.com/ray-project/ray/issues/4129). Might already know this, but just in case, I think Bazel wasn't added as a dependency yet on master, so this breaks compilation unless the user explicitly installs it before running `pip install -v -e .` Looks like some error also pops up during redis compilation: ``` config.status: creating src/glog/stl_logging.h config.status: creating libglog.pc config.status: creating src/config.h config.status: executing depfiles commands config.status: executing libtool commands 'src/config.h' -> 'bazel-out/k8-opt/genfiles/external/com_github_google_glog/config.h' 'src/glog/logging.h' -> 'bazel-out/k8-opt/genfiles/external/com_github_google_glog/src/glog/logging.h' 'src/glog/raw_logging.h' -> 'bazel-out/k8-opt/genfiles/external/com_github_google_glog/src/glog/raw_logging.h' 'src/glog/stl_logging.h' -> 'bazel-out/k8-opt/genfiles/external/com_github_google_glog/src/glog/stl_logging.h' 'src/glog/vlog_is_on.h' -> 'bazel-out/k8-opt/genfiles/external/com_github_google_glog/src/glog/vlog_is_on.h' [137 / 163] Compiling src/ray/raylet/reconstruction_policy.cc; 80s processwrapper-sandbox ... (24 actions running) INFO: From Compiling src/ray/thirdparty/hiredis/dict.c: src/ray/thirdparty/hiredis/dict.c:53:21: warning: 'dictGenHashFunction' defined but not used [-Wunused-function] static unsigned int dictGenHashFunction(const unsigned char *buf, int len) { ^ src/ray/thirdparty/hiredis/dict.c:73:14: warning: 'dictCreate' defined but not used [-Wunused-function] static dict *dictCreate(dictType *type, void *privDataPtr) { ^ src/ray/thirdparty/hiredis/dict.c:160:12: warning: 'dictReplace' defined but not used [-Wunused-function] static int dictReplace(dict *ht, void *key, void *val) { ^ src/ray/thirdparty/hiredis/dict.c:182:12: warning: 'dictDelete' defined but not used [-Wunused-function] static int dictDelete(dict *ht, const void *key) { ^ src/ray/thirdparty/hiredis/dict.c:238:13: warning: 'dictRelease' defined but not used [-Wunused-function] static void dictRelease(dict *ht) { ^ src/ray/thirdparty/hiredis/dict.c:258:22: warning: 'dictGetIterator' defined but not used [-Wunused-function] static dictIterator *dictGetIterator(dict *ht) { ^ src/ray/thirdparty/hiredis/dict.c:268:19: warning: 'dictNext' defined but not used [-Wunused-function] static dictEntry *dictNext(dictIterator *iter) { ^ src/ray/thirdparty/hiredis/dict.c:288:13: warning: 'dictReleaseIterator' defined but not used [-Wunused-function] static void dictReleaseIterator(dictIterator *iter) { ^ ERROR: /data/dtk-pipeline/ray/BUILD.bazel:511:1: Executing genrule //:redis failed (Exit 2) bash failed: error executing command /bin/bash -c ... (remaining 1 argument(s) skipped) Use --sandbox_debug to see verbose messages from the sandbox + curl -sL https://github.com/antirez/redis/archive/5.0.3.tar.gz + tar xz --strip-components=1 -C . 
gzip: stdin: unexpected end of file tar: Child returned status 1 tar: Error is not recoverable: exiting now [166 / 191] Compiling src/ray/raylet/worker.cc; 183s processwrapper-sandbox ... (20 actions running) Target //:ray_pkg failed to build Use --verbose_failures to see the command lines of failed build steps. INFO: Elapsed time: 1093.283s, Critical Path: 239.37s INFO: 136 processes: 136 processwrapper-sandbox. FAILED: Build did NOT complete successfully FAILED: Build did NOT complete successfully Traceback (most recent call last): File "<string>", line 1, in <module> File "/data/dtk-pipeline/ray/python/setup.py", line 188, in <module> license="Apache 2.0") File "/scratch/anaconda3/envs/pipe/lib/python3.7/site-packages/setuptools/__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "/scratch/anaconda3/envs/pipe/lib/python3.7/distutils/core.py", line 148, in setup dist.run_commands() File "/scratch/anaconda3/envs/pipe/lib/python3.7/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/scratch/anaconda3/envs/pipe/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/scratch/anaconda3/envs/pipe/lib/python3.7/site-packages/setuptools/command/develop.py", line 38, in run self.install_for_development() File "/scratch/anaconda3/envs/pipe/lib/python3.7/site-packages/setuptools/command/develop.py", line 140, in install_for_development self.run_command('build_ext') File "/scratch/anaconda3/envs/pipe/lib/python3.7/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/scratch/anaconda3/envs/pipe/lib/python3.7/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/data/dtk-pipeline/ray/python/setup.py", line 81, in run subprocess.check_call(["../build.sh", "-p", sys.executable]) File "/scratch/anaconda3/envs/pipe/lib/python3.7/subprocess.py", line 347, in check_call raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['../build.sh', '-p', '/scratch/anaconda3/envs/pipe/bin/python']' returned non-zero exit status 1. Cleaning up... Removed build tracker '/tmp/pip-req-tracker-kethrq6g' ``` Hmm, I haven't seen this one. What happens if you do `curl -sL https://github.com/antirez/redis/archive/5.0.3.tar.gz` manually and try to extract the file with `tar xz --strip-components=1 -C .`? Does it give the same error? Sorry for delay. Found something really weird. Using the system curl, it works just fine. The (my) anaconda curl is what is "broken". System: CentOS 7 `uname -r`: `3.10.0-693.21.1.el7.x86_64` ``` [user@host ~]$ /usr/bin/curl --version curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.28.4 zlib/1.2.11 libidn/1.28 libssh2/1.4.3 Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smtp smtps telnet tftp Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz unix-sockets ``` With anaconda3: ``` [user@host ~]$ which curl /scratch/anaconda3/bin/curl [user@host ~]$ curl --version curl 7.63.0 (x86_64-conda_cos6-linux-gnu) libcurl/7.63.0 OpenSSL/1.1.1a zlib/1.2.11 libssh2/1.8.0 Release-Date: 2018-12-12 Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP UnixSockets HTTPS-proxy ``` It appears that `curl -sL https://github.com/antirez/redis/archive/5.0.3.tar.gz` just returns instantly with the anaconda 3 curl, with no content. 
For what it's worth, `curl https://google.com -sL` seems to spit out the google homepage to the terminal just fine with this same curl... weird. E: ok, same problem / error message either way. Not knowing much about `curl`, it appears that the `curl blah -sL` just spits the output straight to the terminal (at least for my system / anaconda curl versions). If `tar` is expecting a file on disk instead of a stream based on how the build system is set up (no idea), I guess this might be the problem? ``` ERROR: /data/ray/BUILD.bazel:515:1: Executing genrule //:redis failed (Exit 2) bash failed: error executing command /bin/bash -c ... (remainin g 1 argument(s) skipped) Use --sandbox_debug to see verbose messages from the sandbox + curl -sL https://github.com/antirez/redis/archive/5.0.3.tar.gz + tar xz --strip-components=1 -C . gzip: stdin: unexpected end of file tar: Child returned status 1 tar: Error is not recoverable: exiting now [306 / 317] Compiling src/ray/raylet/reconstruction_policy.cc; 1s processwrapper-sandbox ... (7 actions running) [306 / 317] Compiling src/ray/raylet/reconstruction_policy.cc; 11s processwrapper-sandbox ... (7 actions running) ``` Any idea what may be happening? Weird that none of the previous `curl`s fail until this one is hit. Would be nice to be able to compile bleeding-edge Ray versions Hm, for me `curl -sL "https://github.com/antirez/redis/archive/5.0.3.tar.gz"` also appears to return nothing, but the composition `curl -sL "https://github.com/antirez/redis/archive/5.0.3.tar.gz" | tar xz --strip-components=1 -C .` seems to do the right thing. This is with ``` $ which curl /Users/rnishihara/anaconda3/bin/curl $ curl --version curl 7.58.0 (x86_64-apple-darwin13.4.0) libcurl/7.58.0 OpenSSL/1.0.2n zlib/1.2.11 Release-Date: 2018-01-24 Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp Features: AsynchDNS IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP UnixSockets HTTPS-proxy ``` Your composition works for me, too, so even more confused now... is that the actual command being run by the build system, though? The remaining issues that need to be addressed before we deprecate cmake are https://github.com/ray-project/ray/issues/4273 and https://github.com/ray-project/ray/issues/4272. @zbarry I've seen the same error that you mentioned in our CI (on Travis) nondeterministically. It's possible that we need to just retry that command if it fails, though it'd be nice to have a more reliable approach. Gotcha. Let me get a fresh repo + conda env going and see what happens Hmm. At least in my case, I don't think the issue is a Monte Carlo `curl`. I changed the `bazel build` line in `build.sh` to be: `bazel build //:ray_pkg -c opt --sandbox_debug --verbose_failures` which gave the exact command being executed. Maybe this is helpful(?). Here, you can see clearly that the command run is indeed `curl -sL "https://github.com/antirez/redis/archive/5.0.3.tar.gz" | tar xz --strip-components=1 -C .`, which works if I just paste into a terminal. ``` [99 / 177] Executing genrule @com_github_google_glog//:run_configure; 9s processwrapper-sandbox ... 
(24 actions running) ERROR: /data/external_repos/ray/BUILD.bazel:515:1: Executing genrule //:redis failed (Exit 2) process-wrapper failed: error executing command (cd /data/external_repos/ray/cache/_bazel_me/e4ce00b73b332e14ab5d91799de1ce7a/execroot/__main__ && \ exec env - \ LD_LIBRARY_PATH=/cm/shared/apps/uge/8.6.0/lib/lx-amd64:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/scratch/anaconda3/lib/libjpeg-turbo/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/scratch/anaconda3/lib/libjpeg-turbo/lib \ PATH=/scratch/anaconda3/envs/raybuild/bin:/scratch/anaconda3/condabin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin \ TMPDIR=/tmp \ /data/external_repos/ray/cache/_bazel_me/install/25bd58125e78809ebbc928ae699c5e3d/_embedded_binaries/process-wrapper '--timeout=0' '--kill_delay=15' /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; set -x && curl -sL "https://github.com/antirez/redis/archive/5.0.3.tar.gz" | tar xz --strip-components=1 -C . && make && mv ./src/redis-server bazel-out/k8-opt/genfiles/redis-server && chmod +x bazel-out/k8-opt/genfiles/redis-server && mv ./src/redis-cli bazel-out/k8-opt/genfiles/redis-cli && chmod +x bazel-out/k8-opt/genfiles/redis-cli '): process-wrapper failed: error executing command (cd /data/external_repos/ray/cache/_bazel_me/e4ce00b73b332e14ab5d91799de1ce7a/execroot/__main__ && \ exec env - \ LD_LIBRARY_PATH=/cm/shared/apps/uge/8.6.0/lib/lx-amd64:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/scratch/anaconda3/lib/libjpeg-turbo/lib:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/scratch/anaconda3/lib/libjpeg-turbo/lib \ PATH=/scratch/anaconda3/envs/raybuild/bin:/scratch/anaconda3/condabin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin \ TMPDIR=/tmp \ /data/external_repos/ray/cache/_bazel_me/install/25bd58125e78809ebbc928ae699c5e3d/_embedded_binaries/process-wrapper '--timeout=0' '--kill_delay=15' /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; set -x && curl -sL "https://github.com/antirez/redis/archive/5.0.3.tar.gz" | tar xz --strip-components=1 -C . && make && mv ./src/redis-server bazel-out/k8-opt/genfiles/redis-server && chmod +x bazel-out/k8-opt/genfiles/redis-server && mv ./src/redis-cli bazel-out/k8-opt/genfiles/redis-cli && chmod +x bazel-out/k8-opt/genfiles/redis-cli ') + curl -sL https://github.com/antirez/redis/archive/5.0.3.tar.gz + tar xz --strip-components=1 -C . gzip: stdin: unexpected end of file tar: Child returned status 1 tar: Error is not recoverable: exiting now Target //:ray_pkg failed to build INFO: Elapsed time: 15.938s, Critical Path: 11.03s INFO: 67 processes: 67 processwrapper-sandbox. FAILED: Build did NOT complete successfully FAILED: Build did NOT complete successfully Traceback (most recent call last): ``` Hmm, I don't really see what is going wrong here. As a workaround, do you want to try replacing curl by wget and see if that fixes the problem? Ugh. I figured it out because `wget` gave me a useful error message. For whatever reason, that particular `genrule` is not pulling my `http_proxy` and `https_proxy` environment variables in, so github.com is not being resolved... I added them explicitly to the rule via `export http_proxy=blah`, and Ray built to completion. Is there a way to make sure these vars (I guess all the user's environment vars?) are present in the build environment? 
So not including your environment variables is actually a feature of Bazel (it's supposed to create the same build environment on every machine, this also means having a well-defined set of environment variables). For use cases like this, you can use the following functionality I think: https://bazel.build/designs/2016/06/21/environment.html If this helps, contributing a patch that allows to pass arguments from setup.py into bazel would be welcome! Actually, I think putting the option into your `.bazelrc` would also work and is probably a better solution. Thanks. Will look into modding `.bazelrc`. Those `--sandbox_debug --verbose_failures` flags might be worth adding into the build script, by the way! Ok. Good to go. Since that Bazel env config page is not super explicit about how to specify vars to pass through the build environment, for others that have the misfortune of living life behind a proxy, just plop these into `~/.bazelrc`: ``` build --action_env=http_proxy build --action_env=https_proxy ``` @zbarry thanks for the fix, @jliagouris just ran into the same problem and your suggestion worked. @zbarry Thanks for the suggestion, I added the flags here: https://github.com/ray-project/ray/pull/4278
2019-03-27T23:00:06
ray-project/ray
4,504
ray-project__ray-4504
[ "4502" ]
ab55a1f93a15e7b6bbfac0805e362505a9fbcf88
diff --git a/python/ray/rllib/agents/dqn/dqn_policy_graph.py b/python/ray/rllib/agents/dqn/dqn_policy_graph.py --- a/python/ray/rllib/agents/dqn/dqn_policy_graph.py +++ b/python/ray/rllib/agents/dqn/dqn_policy_graph.py @@ -387,9 +387,9 @@ def __init__(self, observation_space, action_space, config): # update_target_fn will be called periodically to copy Q network to # target Q network update_target_expr = [] - for var, var_target in zip( - sorted(self.q_func_vars, key=lambda v: v.name), - sorted(self.target_q_func_vars, key=lambda v: v.name)): + assert len(self.q_func_vars) == len(self.target_q_func_vars), \ + (self.q_func_vars, self.target_q_func_vars) + for var, var_target in zip(self.q_func_vars, self.target_q_func_vars): update_target_expr.append(var_target.assign(var)) self.update_target_expr = tf.group(*update_target_expr)
Inconsistent weight assignment operations in DQNPolicyGraph <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., macOS 10.14.3)**: - **Ray installed from (source or binary)**: source - **Ray version**: 0.7.0.dev2 ab55a1f9 - **Python version**: 3.6.8 <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> `DQNPolicyGraph` creates tensorflow assign operations by looping through lists of `self.q_func_vars` and `self.target_q_func_vars` sorted by variable name on lines 390:393. The default sorting is not consistent between the two lists of variable names and as a result the operations can mix up the assignments. The attached code snippet produces the error message below. ``` 2019-03-28 18:34:31,402 WARNING worker.py:1397 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes. 2019-03-28 18:34:31.415440: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA WARNING:tensorflow:From /Users/ristovuorio/miniconda3/envs/ray_fiddle/lib/python3.6/site-packages/tensorflow/python/util/decorator_utils.py:127: GraphKeys.VARIABLES (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.GraphKeys.GLOBAL_VARIABLES` instead. 
Traceback (most recent call last): File "dqn_fail_demonstrator.py", line 37, in <module> trainer = DQNAgent(env="CartPole-v0", config=config) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/agents/agent.py", line 280, in __init__ Trainable.__init__(self, config, logger_creator) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/tune/trainable.py", line 88, in __init__ self._setup(copy.deepcopy(self.config)) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/agents/agent.py", line 377, in _setup self._init() File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/agents/dqn/dqn.py", line 207, in _init self.env_creator, self._policy_graph) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/agents/agent.py", line 510, in make_local_evaluator extra_config or {})) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/agents/agent.py", line 727, in _make_evaluator async_remote_worker_envs=config["async_remote_worker_envs"]) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/evaluation/policy_evaluator.py", line 296, in __init__ self._build_policy_map(policy_dict, policy_config) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/evaluation/policy_evaluator.py", line 692, in _build_policy_map policy_map[name] = cls(obs_space, act_space, merged_conf) File "/Users/ristovuorio/projects/ray_doodle/ray/python/ray/rllib/agents/dqn/dqn_policy_graph.py", line 394, in __init__ update_target_expr.append(var_target.assign(var)) File "/Users/ristovuorio/miniconda3/envs/ray_fiddle/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 951, in assign self._shape.assert_is_compatible_with(value_tensor.shape) File "/Users/ristovuorio/miniconda3/envs/ray_fiddle/lib/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py", line 848, in assert_is_compatible_with raise ValueError("Shapes %s and %s are incompatible" % (self, other)) ValueError: Shapes (3,) and (11,) are incompatible ``` ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. --> ``` from tensorflow.keras.layers import Dense import ray from ray.rllib.models import ModelCatalog, Model from ray.rllib.agents.dqn import DQNAgent class DemoNN(Model): def _build_layers_v2(self, input_dict, num_outputs, options): x = input_dict["obs"] x = Dense(1)(x) x = Dense(1)(x) x = Dense(3)(x) x = Dense(1)(x) x = Dense(1)(x) x = Dense(1)(x) x = Dense(1)(x) x = Dense(1)(x) x = Dense(1)(x) x = Dense(1)(x) x = Dense(11)(x) x = Dense(2)(x) return x, x ray.init(local_mode=True) ModelCatalog.register_custom_model("demo_nn", DemoNN) config = { "model": { "custom_model": "demo_nn", }, "hiddens": [], "num_workers": 0, } trainer = DQNAgent(env="CartPole-v0", config=config) ```
Would removing the sorted() be the right fix here? According to the TF docs for get_collection(), the variables are already returned in the "order they were collected", which is presumably a consistent order. Yup, that seems to fix it.
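For illustration, here is a minimal, self-contained sketch of how name-based sorting can scramble the pairing. The layer names are hypothetical stand-ins for the globally numbered names Keras assigns when the Q network and then the target network are built one after the other:

```python
# Hypothetical names only: Keras increments a global suffix, so the online net
# might own "dense" .. "dense_11" while the target copy owns "dense_12" .. "dense_23".
q_names = ["dense"] + ["dense_{}".format(i) for i in range(1, 12)]
target_names = ["dense_{}".format(i) for i in range(12, 24)]

print(sorted(q_names)[:5])       # ['dense', 'dense_1', 'dense_10', 'dense_11', 'dense_2']
print(sorted(target_names)[:5])  # ['dense_12', 'dense_13', 'dense_14', 'dense_15', 'dense_16']
```

Lexicographic sorting reorders the first list but leaves the second in creation order, so zipping the two sorted lists pairs layers of different widths, which is consistent with the `Shapes (3,) and (11,) are incompatible` error above. Zipping the variables in collection order, as the patch does, keeps corresponding layers aligned.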
2019-03-28T23:48:48
ray-project/ray
4,518
ray-project__ray-4518
[ "4511" ]
5693cd13442f3f17be690446d39c107ff759d10e
diff --git a/python/ray/tune/commands.py b/python/ray/tune/commands.py --- a/python/ray/tune/commands.py +++ b/python/ray/tune/commands.py @@ -194,16 +194,14 @@ def list_trials(experiment_path, print_format_output(checkpoints_df) if output: - experiment_path = os.path.expanduser(experiment_path) - output_path = os.path.join(experiment_path, output) file_extension = os.path.splitext(output)[1].lower() if file_extension in (".p", ".pkl", ".pickle"): - checkpoints_df.to_pickle(output_path) + checkpoints_df.to_pickle(output) elif file_extension == ".csv": - checkpoints_df.to_csv(output_path, index=False) + checkpoints_df.to_csv(output, index=False) else: raise ValueError("Unsupported filetype: {}".format(output)) - print("Output saved at:", output_path) + print("Output saved at:", output) def list_experiments(project_path, @@ -295,15 +293,14 @@ def list_experiments(project_path, print_format_output(info_df) if output: - output_path = os.path.join(base, output) file_extension = os.path.splitext(output)[1].lower() if file_extension in (".p", ".pkl", ".pickle"): - info_df.to_pickle(output_path) + info_df.to_pickle(output) elif file_extension == ".csv": - info_df.to_csv(output_path, index=False) + info_df.to_csv(output, index=False) else: raise ValueError("Unsupported filetype: {}".format(output)) - print("Output saved at:", output_path) + print("Output saved at:", output) def add_note(path, filename="note.txt"):
[tune] Add `--output` to the Tune docs We should add --output to the docs. _Originally posted by @richardliaw in https://github.com/ray-project/ray/pull/4322#issuecomment-477903993_ cc @andrewztan
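For the docs, the flag maps onto the `output` argument of `commands.list_trials` / `commands.list_experiments`, so a minimal sketch of the behavior to document could look like this (the results paths and filenames are made up; per `commands.py`, the file extension selects CSV vs. pickle output):

```python
from ray.tune import commands

# Hypothetical results directory; writes the trial table to trials.csv.
commands.list_trials("/tmp/ray_results/my_experiment", output="trials.csv")

# Same idea one level up, for the experiment listing.
commands.list_experiments("/tmp/ray_results", output="experiments.pkl")
```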
2019-03-30T10:32:36
ray-project/ray
4,560
ray-project__ray-4560
[ "4440" ]
5693cd13442f3f17be690446d39c107ff759d10e
diff --git a/python/ray/rllib/models/action_dist.py b/python/ray/rllib/models/action_dist.py --- a/python/ray/rllib/models/action_dist.py +++ b/python/ray/rllib/models/action_dist.py @@ -261,17 +261,35 @@ def sampled_action_prob(self): class Dirichlet(ActionDistribution): - """Dirichlet distribution for countinuous actions that are between + """Dirichlet distribution for continuous actions that are between [0,1] and sum to 1. e.g. actions that represent resource allocation.""" def __init__(self, inputs): - self.dist = tf.distributions.Dirichlet(concentration=inputs) - ActionDistribution.__init__(self, inputs) + """Input is a tensor of logits. The exponential of logits is used to + parametrize the Dirichlet distribution as all parameters need to be + positive. An arbitrary small epsilon is added to the concentration + parameters to be zero due to numerical error. + + See issue #4440 for more details. + """ + self.epsilon = 1e-7 + concentration = tf.exp(inputs) + self.epsilon + self.dist = tf.distributions.Dirichlet( + concentration=concentration, + validate_args=True, + allow_nan_stats=False, + ) + ActionDistribution.__init__(self, concentration) @override(ActionDistribution) def logp(self, x): + # Support of Dirichlet are positive real numbers. x is already be + # an array of positive number, but we clip to avoid zeros due to + # numerical errors. + x = tf.maximum(x, self.epsilon) + x = x / tf.reduce_sum(x, axis=-1, keepdims=True) return self.dist.log_prob(x) @override(ActionDistribution)
Agent returns action with np.nan when using extra_spaces.Simplex ### System information - **Linux Ubuntu 18.04** - **Ray installed with `pip install ray`** - **Ray 0.6.4** - **Python version 3.6.7** - **Exact command to reproduce**: see section `Source code / logs` below. ### Describe the problem **Context**: My use case is to create custom `gym.Env` and solve those environments with `ray` agents. Ray has worked very well in my use case, where different agents have solved both continuous (`gym.spaces.Box`) and discrete (`gym.spaces.Discrete`) action spaces in custom environments. **Problem**: Problems start when using the action space `ray.rllib.models.extra_spaces.Simplex` in the environment (see [#4070](https://github.com/ray-project/ray/pull/4070)). In particular, the problem is that the agent returns actions with some np.nan(s). However, I've done some diggings and logits are calculated correctly (before being mapped to action distribution with a softmax I guess), so the issue should be in the layer of logic in ray mapping logits to an action. I want to remark that simply switching `Simplex` to `Box` in the code below allows PPO to solve the environment in a few minutes, so there must be something wrong with `Simplex`. **Motivation**: `Simplex` does not seem to work properly at the moment but it would allow to solve many interesting use cases with constrained action spaces. **Question**: Why is Simplex with PPO not working correctly in the self-contained and reproducible code snippet below? ### Source code / logs ``` # Self contained reproducible example. import ray from ray import rllib import gym import numpy as np ray.init() class ContextualBanditSimplex(gym.Env): """Contextual bandit environment, optionally non-stationary.""" def __init__(self, env_config: dict): """ Parameters ---------- k : int Number of arms. nr_iter_max : int Fixed episode length. std_dev : float Variance of the means sampled at the start of the episode. The higher the variance, the simpler the environment. References ---------- https://gym.openai.com/docs/ """ # Inputs. self.k = env_config['k'] self.nr_iter_max = env_config['nr_iter_max'] self.std_dev = env_config['std_dev'] # Spaces. # See: https://github.com/ray-project/ray/pull/4070 self.action_space = ray.rllib.models.extra_spaces.Simplex(shape=(self.k, )) self.observation_space = gym.spaces.Box(-np.inf, +np.inf, (self.k, ), np.float32) # Reset at the start of an episode by ContextualBanditEnv.reset. self.nr_iter: int = None self.done: bool = None self.mean_reward: np.ndarray = None self.cov: np.ndarray = None self.state: np.ndarray = None def _sample_mean_reward(self): """Sample k numbers, representing the average daily return of each stock.""" return np.random.normal(loc=self.std_dev, size=self.k) def _get_observation(self): """Sample reward from each leaver.""" return np.random.multivariate_normal(self.mean_reward, self.cov) def reset(self): """It's your responsibility to call this method every time before the start of an episode.""" self.nr_iter = 0 self.done = False self.mean_reward = self._sample_mean_reward() self.cov = np.identity(self.k) self.state = self._get_observation() return self.state def step(self, action: int): """ Parameters ---------- action : int Integer presenting the i-th stock out of k in which you want to invest. Returns ------- state : np.ndarray Next state, which consists in a sample from the return distribution from each stock. reward : float Return from your investment. done : bool Indicates whether the episode is ended. 
If so, call .reset() to start a new episode. info : dict An empty dict. """ if self.done: raise ValueError("Episode is ended. Call ContextualBanditEnv.reset" "to start a new episode.") if not self.action_space.contains(action): raise ValueError("Action {} does not belong to the action space {}." "".format(action, self.action_space)) reward = self._get_reward(action) self.nr_iter += 1 self.done = self.nr_iter >= self.nr_iter_max self.state = self._get_observation() info = {} return self.state, reward, self.done, info def _get_reward(self, action): """Returns weighted average return of the portfolio.""" if not self.action_space.contains(action): raise ValueError( "Action {} does not belong to action space {}." "".format(action, self.action_space) ) rewards = self._get_observation() return (rewards * action).sum() # Specify the custom env. config = rllib.agents.ppo.DEFAULT_CONFIG.copy() config['env'] = ContextualBanditSimplex config['env_config'] = {'k': 5, 'nr_iter_max': 500, 'std_dev': 1} # Instance the PPO agent. agent = rllib.agents.ppo.PPOAgent(config, ContextualBanditSimplex) # Sample a state from the environment to be fed in the policy to compute the action. env = ContextualBanditSimplex(config['env_config']) state = env.reset() # (UNEXPECTED OUTPUT): Agent returns action with nans! # It is not an exploding gradient problem because: # 1- Training hasn't started yet. # 2- I've decreased 'lr' just in case and the issue persists. action = agent.compute_action(state) print(action) # [nan nan 0. 0. 0.] # (UNEXPECTED OUTPUT): Action is still with the same nan issue, but logits seem are correct! policy = agent.get_policy() action, _, info = policy.compute_single_action(state, []) print(action) # [nan nan 0. nan 0.] print(info) # {'action_prob': nan, 'vf_preds': 0.005049658, # 'logits': array([-0.00207247, -0.00186774, 0.00244786, -0.00415111, 0.00086402], dtype=float32)} # (FAILS) As expected, `agent.train()` fails because agent returns actions with nans. agent.train() # [RayTaskError] ValueError: Action [ 0. nan 0. nan nan] does not # belong to the action space Simplex((10,); [1, 1, 1, 1, 1]). ```
@Szkered, any ideas here? I've done more digging in the source code of ray and I've shed some light on why logits are well defined and actions probabilities are not. The context is that the input of [Dirichlet](https://github.com/ray-project/ray/blob/master/python/ray/rllib/models/action_dist.py#L263) (the action distribution associated with Simplex) are logits, then passed to tf.distrib.Dirichlet in `Dirichlet.__init__`. So Dirichlet is parametrized by the logits of the Policy graph (I'm not sure if this is correct, probably not considering the output from the code below, but there we go). Now, it turns out that internal parameters of tf.distrib.Dirichlet might be invalid, in which case a sample from the distribution return `np.nan`. See docs [here](https://www.tensorflow.org/api_docs/python/tf/distributions/Dirichlet#allow_nan_stats)) and example below. ``` with tf.Session() as session: logits = [-0.00207247, -0.00186774, 0.00244786, -0.00415111, 0.00086402] action_distrib = tf.distributions.Dirichlet( concentration=logits, allow_nan_stats=False, ) sample = action_distrib.sample().eval() print(sample) # Console: [nan nan 0. nan 0.] ``` So we now know that it's the action distribution which created `np.nan` in the action probabilities. So the point seems to be that logits cannot directly feed to tf.distrib.Dirichlet, which is what's happening in the ray source code as of now. So now the question is: what is the right way to parametrize Dirichlet? One possible solution could be to use the softmax of logits (see code below). ``` def softmax(x, axis=None): x = x - x.max(axis=axis, keepdims=True) y = np.exp(x) return y / y.sum(axis=axis, keepdims=True) with tf.Session() as session: # Note that the concentration parameter is now the softmax logits, # as opposed to logits. action_distrib = tf.distributions.Dirichlet( concentration=softmax(logits), allow_nan_stats=False, ) sample = action_distrib.sample().eval() print(sample) # Console: [2.39824214e-04 1.06406130e-05 6.64767340e-02 6.38899220e-02 8.69382879e-01] ``` **QUESTION**: could anyone confirm if providing the softmax of logits as the parametrization of Dirichlet makes sense? I'm not sure if this is mathematically sound. I would be happy to create a pull request if someone confirmed the solution. @Szkered @ericl Based on https://en.m.wikipedia.org/wiki/Dirichlet_distribution, it might be undesirable to restrict the parameters to [0,1]. Perhaps an alternative parametrization would be to square each parameter independently? > Values of the concentration parameter above 1 prefer variates that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 prefer sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values I've tested a few possible solutions using the contextual bandit environment above. - Red line (top line): contextual bandit with discrete action space (so no Dirichlet). This is just a benchmark. - Grey and orange (2 lines in the middle): two runs using exponential of logits or -logits. - Lines at the bottom: square of logits, absolute value of logits, softmax of logits ![image](https://user-images.githubusercontent.com/23344446/55009981-57253880-4fdb-11e9-8bdc-9c269971a796.png) So exp of logits might seem empirically and intuitively the way to go. 
Somehow, logits become all `nan` starting from iteration ~200k, so the action becomes a constant vector of 1/n values summing up to 1. This is probably due to me not being familiar enough with the source code to appropriately fix the bug in [Dirichlet](https://github.com/ray-project/ray/blob/master/python/ray/rllib/models/action_dist.py#L263). My attempt has been to change the following line of code in `Dirichlet.__init__`, and I'm not aware of side effects of this change or of anything else that should be changed.
```
# Before.
self.dist = tf.distributions.Dirichlet(concentration=inputs)

# After
self.dist = tf.distributions.Dirichlet(concentration=tf.exp(inputs))
```
@ericl Let's assume that the solution is to use the exponential of the logits as the concentration parameter for Dirichlet. Would you be able to provide insights about what else I should change in the source code? Thank you.
@ericl I know how to fix this issue. I'm happy to send a pull request if you or someone else agrees with the solution below (and after that, #4550 will be fixed).

**Context**: Use `Simplex` to describe the action space and therefore `Dirichlet` as the action distribution.

**Bug**: Agents return nans.

**Diagnosis**: there are two separate issues.
1. The Dirichlet is parametrized by the output of the policy network (logits), which can be either positive or negative. However, Dirichlet requires all concentration parameters to be positive. Therefore, the agent returns nans whenever there is a non-positive value in the tensor of logits. This issue has been discussed in the previous posts.
2. When max(logits) - min(logits) >> 0, a sampled action from the Dirichlet might contain a zero due to numerical error. However, the support of the Dirichlet is the positive real numbers, so zeros are not allowed. Therefore, when calculating the log probability of the sample during training, TensorFlow raises an error.

**Solutions**:
1. Rather than parametrizing the Dirichlet with the logits of the policy network, use the exponential of the logits. If the logits are normally distributed, then their exp follows a log-normal distribution. A [log-normal distribution](https://en.wikipedia.org/wiki/Log-normal_distribution) is a good representation of the concentration parameters of a Dirichlet (positive, with high density around 1, i.e. no strong prior at the beginning of training).
2. Use clipping (either when sampling or when calculating the log probabilities). I'm inclined to go for the latter because there is nothing wrong with sampling zeros per se (e.g. a Dirichlet describing the weights of a financial portfolio, where 0 means no investment in the i-th financial asset).

@FedericoFontana using exp(logits) seems like a reasonable choice to me. Do you observe stable training with this support? Clipping to some epsilon value during probability calculation sounds good too.
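A self-contained sketch of that combination (exponential parametrization plus clipping inside the log-probability), mirroring the patch above; it assumes the TF 1.x `tf.distributions` API and an arbitrary epsilon:

```python
import tensorflow as tf

epsilon = 1e-7
logits = tf.constant([-0.002, -0.0019, 0.0025, -0.0042, 0.0009])

# exp(logits) keeps every concentration parameter strictly positive.
concentration = tf.exp(logits) + epsilon
dist = tf.distributions.Dirichlet(
    concentration=concentration, validate_args=True, allow_nan_stats=False)

sample = dist.sample()
# Clip before evaluating the log-prob so numerical zeros stay inside the support,
# then renormalize so the clipped sample still sums to one.
clipped = tf.maximum(sample, epsilon)
clipped = clipped / tf.reduce_sum(clipped, axis=-1, keepdims=True)
log_prob = dist.log_prob(clipped)

with tf.Session() as sess:
    print(sess.run([sample, log_prob]))
```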
2019-04-04T15:31:22
ray-project/ray
4,564
ray-project__ray-4564
[ "4523" ]
bfd0af52bc6910c30186da65dc6eb0036fe45aad
diff --git a/python/ray/tune/commands.py b/python/ray/tune/commands.py --- a/python/ray/tune/commands.py +++ b/python/ray/tune/commands.py @@ -129,8 +129,8 @@ def list_trials(experiment_path, sort=None, output=None, filter_op=None, - info_keys=DEFAULT_EXPERIMENT_INFO_KEYS, - result_keys=DEFAULT_RESULT_KEYS): + info_keys=None, + result_keys=None): """Lists trials in the directory subtree starting at the given path. Args: @@ -151,6 +151,10 @@ def list_trials(experiment_path, checkpoint_dicts = [flatten_dict(g) for g in checkpoint_dicts] checkpoints_df = pd.DataFrame(checkpoint_dicts) + if not info_keys: + info_keys = DEFAULT_EXPERIMENT_INFO_KEYS + if not result_keys: + result_keys = DEFAULT_RESULT_KEYS result_keys = ["last_result:{}".format(k) for k in result_keys] col_keys = [ k for k in list(info_keys) + result_keys if k in checkpoints_df @@ -208,7 +212,7 @@ def list_experiments(project_path, sort=None, output=None, filter_op=None, - info_keys=DEFAULT_PROJECT_INFO_KEYS): + info_keys=None): """Lists experiments in the directory subtree. Args: @@ -263,6 +267,8 @@ def list_experiments(project_path, sys.exit(0) info_df = pd.DataFrame(experiment_data_collection) + if not info_keys: + info_keys = DEFAULT_PROJECT_INFO_KEYS col_keys = [k for k in list(info_keys) if k in info_df] if not col_keys: print("None of keys {} in experiment data!".format(info_keys)) diff --git a/python/ray/tune/scripts.py b/python/ray/tune/scripts.py --- a/python/ray/tune/scripts.py +++ b/python/ray/tune/scripts.py @@ -28,9 +28,26 @@ def cli(): default=None, type=str, help="Select filter in the format '<column> <operator> <value>'.") -def list_trials(experiment_path, sort, output, filter_op): [email protected]( + "--columns", + default=None, + type=str, + help="Select columns to be displayed.") [email protected]( + "--result-columns", + "result_columns", + default=None, + type=str, + help="Select columns of last result to be displayed.") +def list_trials(experiment_path, sort, output, filter_op, columns, + result_columns): """Lists trials in the directory subtree starting at the given path.""" - commands.list_trials(experiment_path, sort, output, filter_op) + if columns: + columns = columns.split(',') + if result_columns: + result_columns = result_columns.split(',') + commands.list_trials(experiment_path, sort, output, filter_op, columns, + result_columns) @cli.command() @@ -50,9 +67,16 @@ def list_trials(experiment_path, sort, output, filter_op): default=None, type=str, help="Select filter in the format '<column> <operator> <value>'.") -def list_experiments(project_path, sort, output, filter_op): [email protected]( + "--columns", + default=None, + type=str, + help="Select columns to be displayed.") +def list_experiments(project_path, sort, output, filter_op, columns): """Lists experiments in the directory subtree.""" - commands.list_experiments(project_path, sort, output, filter_op) + if columns: + columns = columns.split(',') + commands.list_experiments(project_path, sort, output, filter_op, columns) @cli.command()
diff --git a/python/ray/tune/tests/test_commands.py b/python/ray/tune/tests/test_commands.py --- a/python/ray/tune/tests/test_commands.py +++ b/python/ray/tune/tests/test_commands.py @@ -79,9 +79,18 @@ def test_ls(start_ray, tmpdir): }) with Capturing() as output: - commands.list_trials(experiment_path, info_keys=("status", )) + commands.list_trials( + experiment_path, + info_keys=("status", ), + result_keys=( + "episode_reward_mean", + "training_iteration", + )) lines = output.captured assert sum("TERMINATED" in line for line in lines) == num_samples + columns = ["status", "episode_reward_mean", "training_iteration"] + assert all(col in lines[1] for col in columns) + assert lines[1].count('|') == 4 with Capturing() as output: commands.list_trials( @@ -113,6 +122,8 @@ def test_lsx(start_ray, tmpdir): commands.list_experiments(project_path, info_keys=("total_trials", )) lines = output.captured assert sum("1" in line for line in lines) >= num_experiments + assert "total_trials" in lines[1] + assert lines[1].count('|') == 2 with Capturing() as output: commands.list_experiments(
[tune] Specify `--columns` for CLI ### Describe the problem Right now it's not possible to specify the columns shown for `list-experiments` and `list-trials`. It's already implemented in `tune/commands.py`, but just needs to be exposed via `scripts.py`. This should also support `result-columns`. > This script is awesome. I was not able to find the tune CLI (even under ray in site_packages). However ran it manually via python <some_path>/python2.7/site-packages/ray/tune/scripts.py list-trials experiment2/ which produces experiment_tag besides other columns. However, is there a way to also show each hyperparameter separately and final metrics (i.e return value of _train()) after the trials terminate in separate columns? See: https://groups.google.com/d/msg/ray-dev/T5q7DkkGlpQ/ZzbOTIsVBwAJ @andrewztan can you take a look at this? cc @Nithanaroy
Sorry, what do you mean by `reset-columns`? What would that command look like? Thanks! Oh `result_columns` are the keys in `last_result` (i.e., see the already implemented `commands.list_trials(result_keys=RESULT_KEYS)`). Let me know if you have any questions... BTW, this depends on #4519, so you can base a PR off of that. The command would look like `tune ls [EXP_PATH] --columns id,name,cidr --result-columns mean_accuracy,mean_loss`
2019-04-04T22:46:41
ray-project/ray
4,605
ray-project__ray-4605
[ "4330" ]
4eade036a0505e244c976f36aaa2d64386b5129b
diff --git a/python/ray/node.py b/python/ray/node.py --- a/python/ray/node.py +++ b/python/ray/node.py @@ -62,6 +62,7 @@ def __init__(self, if shutdown_at_exit and connect_only: raise ValueError("'shutdown_at_exit' and 'connect_only' cannot " "be both true.") + self.head = head self.all_processes = {} # Try to get node IP address with the parameters. @@ -78,6 +79,7 @@ def __init__(self, include_log_monitor=True, resources={}, include_webui=False, + temp_dir="/tmp/ray", worker_path=os.path.join( os.path.dirname(os.path.abspath(__file__)), "workers/default_worker.py")) @@ -87,7 +89,19 @@ def __init__(self, self._config = (json.loads(ray_params._internal_config) if ray_params._internal_config else None) - self._init_temp() + if head: + redis_client = None + # date including microsecond + date_str = datetime.datetime.today().strftime( + "%Y-%m-%d_%H-%M-%S_%f") + self.session_name = "session_{date_str}_{pid}".format( + pid=os.getpid(), date_str=date_str) + else: + redis_client = self.create_redis_client() + self.session_name = ray.utils.decode( + redis_client.get("session_name")) + + self._init_temp(redis_client) if connect_only: # Get socket names from the configuration. @@ -119,7 +133,6 @@ def __init__(self, ray_params.update_if_absent(num_redis_shards=1, include_webui=True) self._webui_url = None else: - redis_client = self.create_redis_client() self._webui_url = ( ray.services.get_webui_url_from_redis(redis_client)) ray_params.include_java = ( @@ -128,6 +141,10 @@ def __init__(self, # Start processes. if head: self.start_head_processes() + redis_client = self.create_redis_client() + redis_client.set("session_name", self.session_name) + redis_client.set("session_dir", self._session_dir) + redis_client.set("temp_dir", self._temp_dir) if not connect_only: self.start_ray_processes() @@ -136,25 +153,31 @@ def __init__(self, atexit.register(lambda: self.kill_all_processes( check_alive=False, allow_graceful=True)) - def _init_temp(self): + def _init_temp(self, redis_client): # Create an dictionary to store temp file index. self._incremental_dict = collections.defaultdict(lambda: 0) - self._temp_dir = self._ray_params.temp_dir - if self._temp_dir is None: - date_str = datetime.datetime.today().strftime("%Y-%m-%d_%H-%M-%S") - self._temp_dir = self._make_inc_temp( - prefix="session_{date_str}_{pid}".format( - pid=os.getpid(), date_str=date_str), - directory_name="/tmp/ray") + if self.head: + self._temp_dir = self._ray_params.temp_dir + else: + self._temp_dir = ray.utils.decode(redis_client.get("temp_dir")) + + try_to_create_directory(self._temp_dir, warn_if_exist=False) - try_to_create_directory(self._temp_dir) + if self.head: + self._session_dir = os.path.join(self._temp_dir, self.session_name) + else: + self._session_dir = ray.utils.decode( + redis_client.get("session_dir")) + + # Send a warning message if the session exists. + try_to_create_directory(self._session_dir) # Create a directory to be used for socket files. - self._sockets_dir = os.path.join(self._temp_dir, "sockets") - try_to_create_directory(self._sockets_dir) + self._sockets_dir = os.path.join(self._session_dir, "sockets") + try_to_create_directory(self._sockets_dir, warn_if_exist=False) # Create a directory to be used for process log files. 
- self._logs_dir = os.path.join(self._temp_dir, "logs") - try_to_create_directory(self._logs_dir) + self._logs_dir = os.path.join(self._session_dir, "logs") + try_to_create_directory(self._logs_dir, warn_if_exist=False) @property def node_ip_address(self): @@ -204,6 +227,7 @@ def address_info(self): "object_store_address": self._plasma_store_socket_name, "raylet_socket_name": self._raylet_socket_name, "webui_url": self._webui_url, + "session_dir": self._session_dir, } def create_redis_client(self): @@ -215,6 +239,10 @@ def get_temp_dir_path(self): """Get the path of the temporary directory.""" return self._temp_dir + def get_session_dir_path(self): + """Get the path of the session directory.""" + return self._session_dir + def get_logs_dir_path(self): """Get the path of the log files directory.""" return self._logs_dir diff --git a/python/ray/utils.py b/python/ray/utils.py --- a/python/ray/utils.py +++ b/python/ray/utils.py @@ -500,11 +500,27 @@ def is_main_thread(): return threading.current_thread().getName() == "MainThread" -def try_to_create_directory(directory_path): +def try_make_directory_shared(directory_path): + try: + os.chmod(directory_path, 0o0777) + except OSError as e: + # Silently suppress the PermissionError that is thrown by the chmod. + # This is done because the user attempting to change the permissions + # on a directory may not own it. The chmod is attempted whether the + # directory is new or not to avoid race conditions. + # ray-project/ray/#3591 + if e.errno in [errno.EACCES, errno.EPERM]: + pass + else: + raise + + +def try_to_create_directory(directory_path, warn_if_exist=True): """Attempt to create a directory that is globally readable/writable. Args: directory_path: The path of the directory to create. + warn_if_exist (bool): Warn if the directory already exists. """ logger = logging.getLogger("ray") directory_path = os.path.expanduser(directory_path) @@ -514,20 +530,11 @@ def try_to_create_directory(directory_path): except OSError as e: if e.errno != errno.EEXIST: raise e - logger.warning( - "Attempted to create '{}', but the directory already " - "exists.".format(directory_path)) - # Change the log directory permissions so others can use it. This is - # important when multiple people are using the same machine. - try: - os.chmod(directory_path, 0o0777) - except OSError as e: - # Silently suppress the PermissionError that is thrown by the chmod. - # This is done because the user attempting to change the permissions - # on a directory may not own it. The chmod is attempted whether the - # directory is new or not to avoid race conditions. - # ray-project/ray/#3591 - if e.errno in [errno.EACCES, errno.EPERM]: - pass - else: - raise + if warn_if_exist: + logger.warning( + "Attempted to create '{}', but the directory already " + "exists.".format(directory_path)) + + # Change the log directory permissions so others can use it. This is + # important when multiple people are using the same machine. + try_make_directory_shared(directory_path) diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -1688,8 +1688,7 @@ def connect(node, mode=WORKER_MODE, log_to_driver=False, worker=global_worker, - driver_id=None, - load_code_from_local=False): + driver_id=None): """Connect this worker to the raylet, to Plasma, and to Redis. Args:
diff --git a/python/ray/tests/test_tempfile.py b/python/ray/tests/test_tempfile.py --- a/python/ray/tests/test_tempfile.py +++ b/python/ray/tests/test_tempfile.py @@ -9,12 +9,6 @@ import ray from ray.tests.cluster_utils import Cluster -# Py2 compatibility -try: - FileNotFoundError -except NameError: - FileNotFoundError = OSError - def test_conn_cluster(): # plasma_store_socket_name @@ -45,13 +39,25 @@ def test_conn_cluster(): def test_tempdir(): + shutil.rmtree("/tmp/ray", ignore_errors=True) ray.init(temp_dir="/tmp/i_am_a_temp_dir") assert os.path.exists( "/tmp/i_am_a_temp_dir"), "Specified temp dir not found." + assert not os.path.exists("/tmp/ray"), "Default temp dir should not exist." ray.shutdown() shutil.rmtree("/tmp/i_am_a_temp_dir", ignore_errors=True) +def test_tempdir_commandline(): + shutil.rmtree("/tmp/ray", ignore_errors=True) + os.system("ray start --head --temp-dir=/tmp/i_am_a_temp_dir2") + assert os.path.exists( + "/tmp/i_am_a_temp_dir2"), "Specified temp dir not found." + assert not os.path.exists("/tmp/ray"), "Default temp dir should not exist." + os.system("ray stop") + shutil.rmtree("/tmp/i_am_a_temp_dir2", ignore_errors=True) + + def test_raylet_socket_name(): ray.init(raylet_socket_name="/tmp/i_am_a_temp_socket") assert os.path.exists( @@ -59,7 +65,7 @@ def test_raylet_socket_name(): ray.shutdown() try: os.remove("/tmp/i_am_a_temp_socket") - except FileNotFoundError: + except OSError: pass # It could have been removed by Ray. cluster = Cluster(True) cluster.add_node(raylet_socket_name="/tmp/i_am_a_temp_socket_2") @@ -68,7 +74,7 @@ def test_raylet_socket_name(): cluster.shutdown() try: os.remove("/tmp/i_am_a_temp_socket_2") - except FileNotFoundError: + except OSError: pass # It could have been removed by Ray. @@ -79,7 +85,7 @@ def test_temp_plasma_store_socket(): ray.shutdown() try: os.remove("/tmp/i_am_a_temp_socket") - except FileNotFoundError: + except OSError: pass # It could have been removed by Ray. cluster = Cluster(True) cluster.add_node(plasma_store_socket_name="/tmp/i_am_a_temp_socket_2") @@ -88,14 +94,14 @@ def test_temp_plasma_store_socket(): cluster.shutdown() try: os.remove("/tmp/i_am_a_temp_socket_2") - except FileNotFoundError: + except OSError: pass # It could have been removed by Ray. def test_raylet_tempfiles(): ray.init(num_cpus=0) node = ray.worker._global_node - top_levels = set(os.listdir(node.get_temp_dir_path())) + top_levels = set(os.listdir(node.get_session_dir_path())) assert top_levels.issuperset({"sockets", "logs"}) log_files = set(os.listdir(node.get_logs_dir_path())) assert log_files.issuperset({ @@ -110,7 +116,7 @@ def test_raylet_tempfiles(): ray.init(num_cpus=2) node = ray.worker._global_node - top_levels = set(os.listdir(node.get_temp_dir_path())) + top_levels = set(os.listdir(node.get_session_dir_path())) assert top_levels.issuperset({"sockets", "logs"}) time.sleep(3) # wait workers to start log_files = set(os.listdir(node.get_logs_dir_path())) @@ -128,3 +134,20 @@ def test_raylet_tempfiles(): socket_files = set(os.listdir(node.get_sockets_dir_path())) assert socket_files == {"plasma_store", "raylet"} ray.shutdown() + + +def test_tempdir_privilege(): + os.chmod("/tmp/ray", 0o000) + ray.init(num_cpus=1) + session_dir = ray.worker._global_node.get_session_dir_path() + assert os.path.exists(session_dir), "Specified socket path not found." 
+ ray.shutdown() + + +def test_session_dir_uniqueness(): + session_dirs = set() + for _ in range(3): + ray.init(num_cpus=1) + session_dirs.add(ray.worker._global_node.get_session_dir_path) + ray.shutdown() + assert len(session_dirs) == 3
No session_* subfolder if temp_dir is specified Currently if `temp_dir` is specified, the sockets and logs are directly stored there, and not in a `session_${DATE}_${PROCESS_ID}` subfolder. This may lead to logs being mixed from multiple runs and/or collisions for the socket files if the `temp_dir` is hardcoded in a script which is run multiple times. cc @nileshtrip
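For reference, a small sketch of the session-directory naming that the patch above introduces (the temp dir value here is made up):

```python
import datetime
import os

temp_dir = "/tmp/my_ray_temp_dir"  # hypothetical --temp-dir value
date_str = datetime.datetime.today().strftime("%Y-%m-%d_%H-%M-%S_%f")
session_name = "session_{date_str}_{pid}".format(date_str=date_str, pid=os.getpid())
session_dir = os.path.join(temp_dir, session_name)
print(session_dir)  # e.g. /tmp/my_ray_temp_dir/session_2019-04-11_18-53-01_123456_4242
```

Sockets and logs then live under `session_dir/sockets` and `session_dir/logs`, so repeated runs against the same `temp_dir` no longer collide.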
2019-04-11T18:53:01
ray-project/ray
4,676
ray-project__ray-4676
[ "2280" ]
4dd628a837a957dfc1905a521247bdf9aab4f9cb
diff --git a/python/ray/services.py b/python/ray/services.py --- a/python/ray/services.py +++ b/python/ray/services.py @@ -1177,21 +1177,21 @@ def start_raylet(redis_address, command = [ RAYLET_EXECUTABLE, - raylet_name, - plasma_store_name, - str(object_manager_port), - str(node_manager_port), - node_ip_address, - gcs_ip_address, - gcs_port, - str(num_initial_workers), - str(maximum_startup_concurrency), - resource_argument, - config_str, - start_worker_command, - java_worker_command, - redis_password or "", - temp_dir, + "--raylet_socket_name={}".format(raylet_name), + "--store_socket_name={}".format(plasma_store_name), + "--object_manager_port={}".format(object_manager_port), + "--node_manager_port={}".format(node_manager_port), + "--node_ip_address={}".format(node_ip_address), + "--redis_address={}".format(gcs_ip_address), + "--redis_port={}".format(gcs_port), + "--num_initial_workers={}".format(num_initial_workers), + "--maximum_startup_concurrency={}".format(maximum_startup_concurrency), + "--static_resource_list={}".format(resource_argument), + "--config_list={}".format(config_str), + "--python_worker_command={}".format(start_worker_command), + "--java_worker_command={}".format(java_worker_command), + "--redis_password={}".format(redis_password or ""), + "--temp_dir={}".format(temp_dir), ] process_info = start_ray_process( command, @@ -1555,7 +1555,12 @@ def start_raylet_monitor(redis_address, redis_password = redis_password or "" config = config or {} config_str = ",".join(["{},{}".format(*kv) for kv in config.items()]) - command = [RAYLET_MONITOR_EXECUTABLE, gcs_ip_address, gcs_port, config_str] + command = [ + RAYLET_MONITOR_EXECUTABLE, + "--redis_address={}".format(gcs_ip_address), + "--redis_port={}".format(gcs_port), + "--config_list={}".format(config_str), + ] if redis_password: command += [redis_password] process_info = start_ray_process(
Using gflags or similar for argument parsing Did a quick search through the issues list and didn't see anything related. Since the raylet code is fast-moving, it may be helpful to rely on an argument parsing library for ```raylet``` and ```raylet_monitor``` rather than magic numbers for argument positions. Should be a small change, but raising as an issue rather than PR first because there may be some reason that the team doesn't want to do this.
You're completely right. We do it *slightly* better in legacy Ray https://github.com/ray-project/ray/blob/6bf48f47bcf91610db42b350e8149c85e01afb0d/src/local_scheduler/local_scheduler.cc#L1522-L1557 but if there's a more canonical C++ way of parsing arguments, that would be preferable.
Yeah, getopt is the POSIX way. There are innumerable c++ libraries doing this now, but [gflags](https://gflags.github.io/gflags/) seems to be reasonably popular. Some people don’t like the dependence on macros, though.
Boost also has a library and we already have that dependency.
+1 to gflags.
We are trying to minimize the boost dependency (just asio in the Ray codebase). gflags might be a good option.
2019-04-20T10:57:21
ray-project/ray
4,734
ray-project__ray-4734
[ "4724" ]
8b6f0d3224055e5e028569e31cfd56316f7ce29e
diff --git a/python/ray/tune/experiment.py b/python/ray/tune/experiment.py --- a/python/ray/tune/experiment.py +++ b/python/ray/tune/experiment.py @@ -96,7 +96,8 @@ def __init__(self, "config": config, "resources_per_trial": resources_per_trial, "num_samples": num_samples, - "local_dir": os.path.expanduser(local_dir or DEFAULT_RESULTS_DIR), + "local_dir": os.path.abspath( + os.path.expanduser(local_dir or DEFAULT_RESULTS_DIR)), "upload_dir": upload_dir, "trial_name_creator": trial_name_creator, "loggers": loggers, @@ -107,7 +108,8 @@ def __init__(self, "checkpoint_score_attr": checkpoint_score_attr, "export_formats": export_formats or [], "max_failures": max_failures, - "restore": restore + "restore": os.path.abspath(os.path.expanduser(restore)) + if restore else None } self.name = name or run_identifier diff --git a/python/ray/tune/trainable.py b/python/ray/tune/trainable.py --- a/python/ray/tune/trainable.py +++ b/python/ray/tune/trainable.py @@ -350,6 +350,14 @@ def restore(self, checkpoint_path): self._timesteps_since_restore = 0 self._iterations_since_restore = 0 self._restored = True + logger.info("Restored from checkpoint: {}".format(checkpoint_path)) + state = { + "_iteration": self._iteration, + "_timesteps_total": self._timesteps_total, + "_time_total": self._time_total, + "_episodes_total": self._episodes_total, + } + logger.info("Current state after restoring: {}".format(state)) def restore_from_object(self, obj): """Restores training state from a checkpoint object.
diff --git a/python/ray/tune/tests/test_tune_save_restore.py b/python/ray/tune/tests/test_tune_save_restore.py new file mode 100644 --- /dev/null +++ b/python/ray/tune/tests/test_tune_save_restore.py @@ -0,0 +1,159 @@ +# coding: utf-8 +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +import pickle +import shutil +import tempfile +import unittest + +import ray +from ray import tune +from ray.rllib import _register_all +from ray.tune import Trainable + + +class SerialTuneRelativeLocalDirTest(unittest.TestCase): + local_mode = True + prefix = "Serial" + + class MockTrainable(Trainable): + _name = "MockTrainable" + + def _setup(self, config): + self.state = {"hi": 1} + + def _train(self): + return {"timesteps_this_iter": 1, "done": True} + + def _save(self, checkpoint_dir): + checkpoint_path = os.path.join( + checkpoint_dir, "checkpoint-{}".format(self._iteration)) + with open(checkpoint_path, "wb") as f: + pickle.dump(self.state, f) + return checkpoint_path + + def _restore(self, checkpoint_path): + with open(checkpoint_path, "rb") as f: + extra_data = pickle.load(f) + self.state.update(extra_data) + + def setUp(self): + ray.init(num_cpus=1, num_gpus=0, local_mode=self.local_mode) + + def tearDown(self): + shutil.rmtree(self.absolute_local_dir, ignore_errors=True) + self.absolute_local_dir = None + ray.shutdown() + # Without this line, test_tune_server.testAddTrial would fail. + _register_all() + + def _get_trial_dir(self, absoulte_exp_dir): + trial_dirname = next( + (child_dir for child_dir in os.listdir(absoulte_exp_dir) + if (os.path.isdir(os.path.join(absoulte_exp_dir, child_dir)) + and child_dir.startswith(self.MockTrainable._name)))) + + trial_absolute_dir = os.path.join(absoulte_exp_dir, trial_dirname) + + return trial_dirname, trial_absolute_dir + + def _train(self, exp_name, local_dir, absolute_local_dir): + trial, = tune.run( + self.MockTrainable, + name=exp_name, + stop={ + "training_iteration": 1 + }, + checkpoint_freq=1, + local_dir=local_dir, + config={ + "env": "CartPole-v0", + "log_level": "DEBUG" + }).trials + + exp_dir = os.path.join(absolute_local_dir, exp_name) + _, abs_trial_dir = self._get_trial_dir(exp_dir) + + self.assertIsNone(trial.error_file) + self.assertEqual(trial.local_dir, exp_dir) + self.assertEqual(trial.logdir, abs_trial_dir) + + self.assertTrue(os.path.isdir(absolute_local_dir), absolute_local_dir) + self.assertTrue(os.path.isdir(exp_dir)) + self.assertTrue(os.path.isdir(abs_trial_dir)) + self.assertTrue( + os.path.isfile( + os.path.join(abs_trial_dir, "checkpoint_1/checkpoint-1"))) + + def _restore(self, exp_name, local_dir, absolute_local_dir): + trial_name, abs_trial_dir = self._get_trial_dir( + os.path.join(absolute_local_dir, exp_name)) + + checkpoint_path = os.path.join( + local_dir, exp_name, trial_name, + "checkpoint_1/checkpoint-1") # Relative checkpoint path + + # The file tune would find. The absolute checkpoint path. + tune_find_file = os.path.abspath(os.path.expanduser(checkpoint_path)) + self.assertTrue( + os.path.isfile(tune_find_file), + "{} is not exist!".format(tune_find_file)) + + trial, = tune.run( + self.MockTrainable, + name=exp_name, + stop={ + "training_iteration": 2 + }, # train one more iteration. 
+ restore=checkpoint_path, # Restore the checkpoint + config={ + "env": "CartPole-v0", + "log_level": "DEBUG" + }).trials + self.assertIsNone(trial.error_file) + + def testDottedRelativePath(self): + local_dir = "./test_dotted_relative_local_dir" + exp_name = self.prefix + "DottedRelativeLocalDir" + absolute_local_dir = os.path.abspath(local_dir) + self.absolute_local_dir = absolute_local_dir + self.assertFalse(os.path.exists(absolute_local_dir)) + self._train(exp_name, local_dir, absolute_local_dir) + self._restore(exp_name, local_dir, absolute_local_dir) + + def testRelativePath(self): + local_dir = "test_relative_local_dir" + exp_name = self.prefix + "RelativePath" + absolute_local_dir = os.path.abspath(local_dir) + self.absolute_local_dir = absolute_local_dir + self.assertFalse(os.path.exists(absolute_local_dir)) + self._train(exp_name, local_dir, absolute_local_dir) + self._restore(exp_name, local_dir, absolute_local_dir) + + def testTildeAbsolutePath(self): + local_dir = "~/test_tilde_absolute_local_dir" + exp_name = self.prefix + "TildeAbsolutePath" + absolute_local_dir = os.path.abspath(os.path.expanduser(local_dir)) + self.absolute_local_dir = absolute_local_dir + self.assertFalse(os.path.exists(absolute_local_dir)) + self._train(exp_name, local_dir, absolute_local_dir) + self._restore(exp_name, local_dir, absolute_local_dir) + + def testTempfile(self): + local_dir = tempfile.mkdtemp() + exp_name = self.prefix + "Tempfile" + self.absolute_local_dir = local_dir + self._train(exp_name, local_dir, local_dir) + self._restore(exp_name, local_dir, local_dir) + + +class ParallelTuneRelativeLocalDirTest(SerialTuneRelativeLocalDirTest): + local_mode = False + prefix = "Parallel" + + +if __name__ == "__main__": + unittest.main(verbosity=2)
[tune] Relative local_dir is not supported. ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS Mojave - **Ray installed from (source or binary)**: - **Ray version**: 0.7.0.dev2 - **Python version**: 3.6 - **Exact command to reproduce**: ### Describe the problem If you use ``` tune.run( self.algo, name="TuneRelavtieLocalDirTest", stop={"training_iteration": 1}, checkpoint_freq=1, local_dir="ray_result", config={ "env": "CartPole-v0", }, ) ``` the checkpoint path turns out to be: `$CURRENT_DIR/ray_result/TuneRelavtieLocalDirTest/PG_CartPole-v0_0_2019-05-01_15-24-54xo9z1bq8/ray_result/TuneRelavtieLocalDirTest/PG_CartPole-v0_0_2019-05-01_15-24-54xo9z1bq8/checkpoint_1/checkpoint-1` ![image](https://user-images.githubusercontent.com/22206995/57008878-bd7a2800-6c25-11e9-890c-363d0f4facca.png) You can see that the path is nested. It ought to be `$CURRENT_DIR/ray_result/TuneRelavtieLocalDirTest/PG_CartPole-v0_0_2019-05-01_15-24-54xo9z1bq8/checkpoint_1/checkpoint-1` Using a `local_dir` with a leading dot gives the same result. ``` tune.run( self.algo, name="TuneRelavtieLocalDirTest", stop={"training_iteration": 1}, checkpoint_freq=1, local_dir="./ray_result", # This should be ray/tune/tests/ray_result config={ "env": "CartPole-v0", }, ) ``` Why does this matter? Because in some cases, the user needs to keep all experiment results inside their code directory, e.g., to move them around via git or FTP. ------ What about using an absolute path? ``` abs_path = os.path.abspath("ray_result3") print("abs_path:", abs_path) # abs_path: /Users/XXX/ray/python/ray/ray_result3 tune.run( self.algo, name="TuneRelavtieLocalDirTest", stop={"training_iteration": 1}, checkpoint_freq=1, local_dir=abs_path, # This should be ray/tune/tests/ray_result3 config={ "env": "CartPole-v0", }, ) ``` It's correct! <img width="422" alt="image" src="https://user-images.githubusercontent.com/22206995/57009099-26ae6b00-6c27-11e9-85e4-80c2ef142a1a.png"> ### Discussion I think maybe we can divide the `local_dir` into four kinds: 1. "./result" 2. "result" 3. "/User/XXX/result" 4. "~/result" We should consider No. 1 & 2 to be relative paths and the last two to be absolute paths.
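To make the intended behavior concrete, here is a minimal, self-contained sketch (not the actual Tune internals) of the normalization that maps all four path styles above to the same absolute location; it mirrors the `os.path.abspath(os.path.expanduser(...))` combination used in the patch for this PR:

```python
import os


def normalize_local_dir(local_dir):
    """Resolve './result', 'result', '~/result', and absolute paths alike."""
    return os.path.abspath(os.path.expanduser(local_dir))


for path in ["./ray_result", "ray_result", "~/ray_result", "/tmp/ray_result"]:
    # Relative forms resolve against the current working directory;
    # '~' is expanded to the home directory first.
    print(path, "->", normalize_local_dir(path))
```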
2019-05-02T08:56:19
ray-project/ray
4,735
ray-project__ray-4735
[ "4693" ]
af463e8bb1f08af917feb3b6d90f154b7fdbfe23
diff --git a/python/ray/rllib/agents/qmix/qmix_policy_graph.py b/python/ray/rllib/agents/qmix/qmix_policy_graph.py --- a/python/ray/rllib/agents/qmix/qmix_policy_graph.py +++ b/python/ray/rllib/agents/qmix/qmix_policy_graph.py @@ -46,16 +46,19 @@ def __init__(self, self.double_q = double_q self.gamma = gamma - def forward(self, rewards, actions, terminated, mask, obs, action_mask): + def forward(self, rewards, actions, terminated, mask, obs, next_obs, + action_mask, next_action_mask): """Forward pass of the loss. Arguments: - rewards: Tensor of shape [B, T-1, n_agents] - actions: Tensor of shape [B, T-1, n_agents] - terminated: Tensor of shape [B, T-1, n_agents] - mask: Tensor of shape [B, T-1, n_agents] + rewards: Tensor of shape [B, T, n_agents] + actions: Tensor of shape [B, T, n_agents] + terminated: Tensor of shape [B, T, n_agents] + mask: Tensor of shape [B, T, n_agents] obs: Tensor of shape [B, T, n_agents, obs_size] + next_obs: Tensor of shape [B, T, n_agents, obs_size] action_mask: Tensor of shape [B, T, n_agents, n_actions] + next_action_mask: Tensor of shape [B, T, n_agents, n_actions] """ B, T = obs.size(0), obs.size(1) @@ -68,9 +71,9 @@ def forward(self, rewards, actions, terminated, mask, obs, action_mask): mac_out.append(q) mac_out = th.stack(mac_out, dim=1) # Concat over time - # Pick the Q-Values for the actions taken -> [B * n_agents, T-1] + # Pick the Q-Values for the actions taken -> [B * n_agents, T] chosen_action_qvals = th.gather( - mac_out[:, :-1], dim=3, index=actions.unsqueeze(3)).squeeze(3) + mac_out, dim=3, index=actions.unsqueeze(3)).squeeze(3) # Calculate the Q-Values necessary for the target target_mac_out = [] @@ -79,32 +82,37 @@ def forward(self, rewards, actions, terminated, mask, obs, action_mask): for s in self.target_model.state_init() ] for t in range(T): - target_q, target_h = _mac(self.target_model, obs[:, t], target_h) + target_q, target_h = _mac(self.target_model, next_obs[:, t], + target_h) target_mac_out.append(target_q) - - # We don't need the first timesteps Q-Value estimate for targets - target_mac_out = th.stack( - target_mac_out[1:], dim=1) # Concat across time + target_mac_out = th.stack(target_mac_out, dim=1) # Concat across time # Mask out unavailable actions - target_mac_out[action_mask[:, 1:] == 0] = -9999999 + ignore_action = (next_action_mask == 0) & (mask == 1).unsqueeze(-1) + target_mac_out[ignore_action] = -np.inf # Max over target Q-Values if self.double_q: # Get actions that maximise live Q (for double q-learning) - mac_out[action_mask == 0] = -9999999 - cur_max_actions = mac_out[:, 1:].max(dim=3, keepdim=True)[1] + ignore_action = (action_mask == 0) & (mask == 1).unsqueeze(-1) + mac_out = mac_out.clone() # issue 4742 + mac_out[ignore_action] = -np.inf + cur_max_actions = mac_out.max(dim=3, keepdim=True)[1] target_max_qvals = th.gather(target_mac_out, 3, cur_max_actions).squeeze(3) else: target_max_qvals = target_mac_out.max(dim=3)[0] + assert target_max_qvals.min().item() != -np.inf, \ + "target_max_qvals contains a masked action; \ + there may be a state with no valid actions." + # Mix if self.mixer is not None: # TODO(ekl) add support for handling global state? This is just # treating the stacked agent obs as the state. 
- chosen_action_qvals = self.mixer(chosen_action_qvals, obs[:, :-1]) - target_max_qvals = self.target_mixer(target_max_qvals, obs[:, 1:]) + chosen_action_qvals = self.mixer(chosen_action_qvals, obs) + target_max_qvals = self.target_mixer(target_max_qvals, next_obs) # Calculate 1-step Q-Learning targets targets = rewards + self.gamma * (1 - terminated) * target_max_qvals @@ -239,48 +247,53 @@ def compute_actions(self, def learn_on_batch(self, samples): obs_batch, action_mask = self._unpack_observation( samples[SampleBatch.CUR_OBS]) + next_obs_batch, next_action_mask = self._unpack_observation( + samples[SampleBatch.NEXT_OBS]) group_rewards = self._get_group_rewards(samples[SampleBatch.INFOS]) # These will be padded to shape [B * T, ...] - [rew, action_mask, act, dones, obs], initial_states, seq_lens = \ + [rew, action_mask, next_action_mask, act, dones, obs, next_obs], \ + initial_states, seq_lens = \ chop_into_sequences( samples[SampleBatch.EPS_ID], samples[SampleBatch.UNROLL_ID], samples[SampleBatch.AGENT_INDEX], [ - group_rewards, action_mask, samples[SampleBatch.ACTIONS], - samples[SampleBatch.DONES], obs_batch + group_rewards, action_mask, next_action_mask, + samples[SampleBatch.ACTIONS], samples[SampleBatch.DONES], + obs_batch, next_obs_batch ], [samples["state_in_{}".format(k)] for k in range(len(self.get_initial_state()))], max_seq_len=self.config["model"]["max_seq_len"], - dynamic_max=True, - _extra_padding=1) - # TODO(ekl) adding 1 extra unit of padding here, since otherwise we - # lose the terminating reward and the Q-values will be unanchored! - B, T = len(seq_lens), max(seq_lens) + 1 + dynamic_max=True) + B, T = len(seq_lens), max(seq_lens) def to_batches(arr): new_shape = [B, T] + list(arr.shape[1:]) return th.from_numpy(np.reshape(arr, new_shape)) - rewards = to_batches(rew)[:, :-1].float() - actions = to_batches(act)[:, :-1].long() + rewards = to_batches(rew).float() + actions = to_batches(act).long() obs = to_batches(obs).reshape([B, T, self.n_agents, self.obs_size]).float() action_mask = to_batches(action_mask) + next_obs = to_batches(next_obs).reshape( + [B, T, self.n_agents, self.obs_size]).float() + next_action_mask = to_batches(next_action_mask) # TODO(ekl) this treats group termination as individual termination terminated = to_batches(dones.astype(np.float32)).unsqueeze(2).expand( - B, T, self.n_agents)[:, :-1] + B, T, self.n_agents) + + # Create mask for where index is < unpadded sequence length filled = (np.reshape(np.tile(np.arange(T), B), [B, T]) < np.expand_dims(seq_lens, 1)).astype(np.float32) - mask = th.from_numpy(filled).unsqueeze(2).expand(B, T, - self.n_agents)[:, :-1] - mask[:, 1:] = mask[:, 1:] * (1 - terminated[:, :-1]) + mask = th.from_numpy(filled).unsqueeze(2).expand(B, T, self.n_agents) # Compute loss loss_out, mask, masked_td_error, chosen_action_qvals, targets = \ - self.loss(rewards, actions, terminated, mask, obs, action_mask) + self.loss(rewards, actions, terminated, mask, obs, + next_obs, action_mask, next_action_mask) # Optimise self.optimiser.zero_grad()
Qmix Bug with Truncated Episodes or when max_seq_len is set ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: source - **Ray version**: 0.6.2 - **Python version**: 3.5.5 - **Exact command to reproduce**: (running on custom env) ### Describe the problem Qmix is set up to add padded q-values of -9999999 to the end of every sequence. These q-values are not being correctly masked out. They are ignored in terminal states, due to line 108 in qmix_policy_graph.py; however, they are being used in non-terminal states that happen to be at the end of a sequence. This is causing astronomical TD-errors on my custom env with 1 agent. The issue is also a bit more complicated than just masking them out, as the code tries to do: there does need to be a next state (observation) to complete the bootstrapped return. If it is simply masked out, then certain states will never receive a backpropagated loss. I would suggest also passing in next_states to the loss function (new_obs in the samples dictionary, I believe). If terminal states are not recorded with a next observation, then one can be added as padding, since it will be zeroed out by (1 - terminated) in line 108 anyway. This issue can be verified by truncating episodes and noting that reward + self.gamma*-9999999 is contained in the variable masked_td_error.
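A small numerical sketch of the failure mode described above, using made-up numbers rather than the actual RLlib tensors: when the last step of a truncated sequence is non-terminal, the padded Q-value of -9999999 flows straight into the one-step target `r + gamma * (1 - terminated) * max_q_next` used by the loss:

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 1.0, 1.0])            # last step ends a truncated sequence
terminated = np.array([0.0, 0.0, 0.0])         # the episode did NOT terminate
max_q_next = np.array([5.0, 5.0, -9999999.0])  # padded Q-value at the sequence end

targets = rewards + gamma * (1.0 - terminated) * max_q_next
print(targets)  # [5.95, 5.95, -9899998.01] -> a huge TD error for the last step
```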
To clarify, this only happens if batch mode is set to truncate episodes? I assume it would also be an issue with complete episodes but with a max_seq_len less than the length of the episode. It seems to be an issue if there is a non-terminal state at the end of a sequence. I have attached code that I believe reproduces the same bug. The commands are: python3 twostep_game_no_bug.py --run QMIX python3 twostep_game_bug.py --run QMIX the variable masked_td_error seems to still contain the dummy constant in the bug version. [Qmix bug.zip](https://github.com/ray-project/ray/files/3116258/Qmix.bug.zip) Hm I see. The proposed fix makes sense to me, and it also seems ok to potentially ignore the last state when truncating. Do you want to make a patch? Sure. I can't do it this second, but should have some time next week.
2019-05-02T11:51:21
ray-project/ray
4,775
ray-project__ray-4775
[ "2891" ]
a7d01aba9b94232cf2ee385e4cad15f435022033
diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -782,18 +782,27 @@ def _get_arguments_for_execution(self, function_name, serialized_args): RayError: This exception is raised if a task that created one of the arguments failed. """ - arguments = [] + arguments = [None] * len(serialized_args) + object_ids = [] + object_indices = [] + for (i, arg) in enumerate(serialized_args): if isinstance(arg, ObjectID): - # get the object from the local object store - argument = self.get_object([arg])[0] - if isinstance(argument, RayError): - raise argument + object_ids.append(arg) + object_indices.append(i) else: # pass the argument by value - argument = arg + arguments[i] = arg + + # Get the objects from the local object store. + if len(object_ids) > 0: + values = self.get_object(object_ids) + for i, value in enumerate(values): + if isinstance(value, RayError): + raise value + else: + arguments[object_indices[i]] = value - arguments.append(argument) return arguments def _store_outputs_in_object_store(self, object_ids, outputs):
Task arguments are retrieved serially from the object store. @stephanie-wang noticed that when we do `ray.get`, we fetch objects in parallel from the object store: https://github.com/ray-project/ray/blob/e05baed336470c6568c0f32017ed1fdde9a17132/python/ray/worker.py#L2510-L2511 But when we get the arguments for a task, we get them serially: https://github.com/ray-project/ray/blob/e05baed336470c6568c0f32017ed1fdde9a17132/python/ray/worker.py#L850-L853 In settings where many object IDs are passed into a remote function, this can slow things down.
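For illustration, a rough sketch of the difference using the public `ray.put`/`ray.get` API (not the internal `_get_arguments_for_execution` code path): collecting the object IDs first and fetching them with one `ray.get` call lets the store serve them in parallel, which is what the patch above does for task arguments:

```python
import ray

ray.init()

object_ids = [ray.put(i) for i in range(5)]

# Serial: one round trip to the object store per argument.
serial_values = [ray.get(oid) for oid in object_ids]

# Batched: a single call fetches all of the values in parallel.
batched_values = ray.get(object_ids)

assert serial_values == batched_values
```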
2019-05-12T05:48:53
ray-project/ray
4,844
ray-project__ray-4844
[ "1609" ]
ba6c595094a27828358e891227ed3852c6f1e50f
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -186,9 +186,12 @@ class ActorClass(object): task. _resources: The default resources required by the actor creation task. _actor_method_cpus: The number of CPUs required by actor method tasks. - _last_export_session: The index of the last session in which the remote - function was exported. This is used to determine if we need to - export the remote function again. + _last_driver_id_exported_for: The ID of the driver ID of the last Ray + session during which this actor class definition was exported. This + is an imperfect mechanism used to determine if we need to export + the remote function again. It is imperfect in the sense that the + actor class definition could be exported multiple times by + different workers. _actor_methods: The actor methods. _method_decorators: Optional decorators that should be applied to the method invocation function before invoking the actor methods. These @@ -209,7 +212,7 @@ def __init__(self, modified_class, class_id, max_reconstructions, num_cpus, self._num_cpus = num_cpus self._num_gpus = num_gpus self._resources = resources - self._last_export_session = None + self._last_driver_id_exported_for = None self._actor_methods = inspect.getmembers( self._modified_class, ray.utils.is_function_or_method) @@ -342,12 +345,13 @@ def _remote(self, *copy.deepcopy(args), **copy.deepcopy(kwargs)) else: # Export the actor. - if (self._last_export_session is None - or self._last_export_session < worker._session_index): + if (self._last_driver_id_exported_for is None + or self._last_driver_id_exported_for != + worker.task_driver_id): # If this actor class was exported in a previous session, we # need to export this function again, because current GCS # doesn't have it. - self._last_export_session = worker._session_index + self._last_driver_id_exported_for = worker.task_driver_id worker.function_actor_manager.export_actor_class( self._modified_class, self._actor_method_names) diff --git a/python/ray/function_manager.py b/python/ray/function_manager.py --- a/python/ray/function_manager.py +++ b/python/ray/function_manager.py @@ -342,7 +342,7 @@ def export(self, remote_function): # and export it later. self._functions_to_export.append(remote_function) return - if self._worker.mode != ray.worker.SCRIPT_MODE: + if self._worker.mode == ray.worker.LOCAL_MODE: # Don't need to export if the worker is not a driver. return self._do_export(remote_function) diff --git a/python/ray/remote_function.py b/python/ray/remote_function.py --- a/python/ray/remote_function.py +++ b/python/ray/remote_function.py @@ -43,9 +43,12 @@ class RemoteFunction(object): return the resulting ObjectIDs. For an example, see "test_decorated_function" in "python/ray/tests/test_basic.py". _function_signature: The function signature. - _last_export_session: The index of the last session in which the remote - function was exported. This is used to determine if we need to - export the remote function again. + _last_driver_id_exported_for: The ID of the driver ID of the last Ray + session during which this remote function definition was exported. + This is an imperfect mechanism used to determine if we need to + export the remote function again. It is imperfect in the sense that + the actor class definition could be exported multiple times by + different workers. 
""" def __init__(self, function, num_cpus, num_gpus, resources, @@ -69,10 +72,7 @@ def __init__(self, function, num_cpus, num_gpus, resources, self._function_signature = ray.signature.extract_signature( self._function) - # Export the function. - worker = ray.worker.get_global_worker() - self._last_export_session = worker._session_index - worker.function_actor_manager.export(self) + self._last_driver_id_exported_for = None def __call__(self, *args, **kwargs): raise Exception("Remote functions cannot be called directly. Instead " @@ -111,10 +111,11 @@ def _remote(self, worker = ray.worker.get_global_worker() worker.check_connected() - if self._last_export_session < worker._session_index: + if (self._last_driver_id_exported_for is None + or self._last_driver_id_exported_for != worker.task_driver_id): # If this function was exported in a previous session, we need to # export this function again, because current GCS doesn't have it. - self._last_export_session = worker._session_index + self._last_driver_id_exported_for = worker.task_driver_id worker.function_actor_manager.export(self) kwargs = {} if kwargs is None else kwargs
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -303,6 +303,23 @@ def f(x): assert_equal(obj, ray.get(ray.put(obj))) +def test_nested_functions(ray_start_regular): + # Make sure that remote functions can use other values that are defined + # after the remote function but before the first function invocation. + @ray.remote + def f(): + return g(), ray.get(h.remote()) + + def g(): + return 1 + + @ray.remote + def h(): + return 2 + + assert ray.get(f.remote()) == (1, 2) + + def test_ray_recursive_objects(ray_start_regular): class ClassA(object): pass @@ -2968,3 +2985,17 @@ def method(self): ray.get(f.remote()) a = Actor.remote() ray.get(a.method.remote()) + + ray.shutdown() + + # Start Ray again and make sure that these definitions can be exported from + # workers. + ray.init(num_cpus=2) + + @ray.remote + def export_definitions_from_worker(remote_function, actor_class): + ray.get(remote_function.remote()) + actor_handle = actor_class.remote() + ray.get(actor_handle.method.remote()) + + ray.get(export_definitions_from_worker.remote(f, Actor)) diff --git a/python/ray/tests/test_failure.py b/python/ray/tests/test_failure.py --- a/python/ray/tests/test_failure.py +++ b/python/ray/tests/test_failure.py @@ -95,7 +95,15 @@ def temporary_helper_function(): # fail when it is unpickled. @ray.remote def g(): - return module.temporary_python_file() + try: + module.temporary_python_file() + except Exception: + # This test is not concerned with the error from running this + # function. Only from unpickling the remote function. + pass + + # Invoke the function so that the definition is exported. + g.remote() wait_for_errors(ray_constants.REGISTER_REMOTE_FUNCTION_PUSH_ERROR, 2) errors = relevant_errors(ray_constants.REGISTER_REMOTE_FUNCTION_PUSH_ERROR) @@ -499,6 +507,9 @@ def test_export_large_objects(ray_start_regular): def f(): large_object + # Invoke the function so that the definition is exported. + f.remote() + # Make sure that a warning is generated. wait_for_errors(ray_constants.PICKLING_LARGE_OBJECT_PUSH_ERROR, 1) diff --git a/python/ray/tests/test_monitors.py b/python/ray/tests/test_monitors.py --- a/python/ray/tests/test_monitors.py +++ b/python/ray/tests/test_monitors.py @@ -46,13 +46,6 @@ def Driver(success): # Two new objects. ray.get(ray.put(1111)) ray.get(ray.put(1111)) - attempts = 0 - while (2, 1, summary_start[2]) != StateSummary(): - time.sleep(0.1) - attempts += 1 - if attempts == max_attempts_before_failing: - success.value = False - break @ray.remote def f(): @@ -61,7 +54,7 @@ def f(): # 1 new function. attempts = 0 - while (2, 1, summary_start[2] + 1) != StateSummary(): + while (2, 1, summary_start[2]) != StateSummary(): time.sleep(0.1) attempts += 1 if attempts == max_attempts_before_failing:
Ship remote function definitions lazily when they are first invoked. We may want to ship remote function definitions when they are first invoked (as opposed to when they are first defined). This would help avoid problems like #1607. This could also avoid shipping a ton of remote function definitions that are not used (e.g., because they are defined in a Python module which is imported by a user). The main downside is that it increases latency for the first invocation. cc @ludwigschmidt
A few thoughts on this: - From personal experience, Robert's point regarding slow `import` due to remote functions could become a serious usability problem. I remember similar issues from Julia where compiling packages on import was quite annoying because it led to slow program / notebook starts. - I find it more intuitive to have some extra latency on the actual invocation. As a user, I know that data is being transferred over the network now. And probably my remote code takes some time anyway. - Would lazy shipping also make the @remote annotation unnecessary? That would significantly simplify usability since I don't have to worry about annotating my functions any more (this is a nice feature of Pywren for me). Responding to your third point, I think we'd still need the decorator if we want to invoke functions with `f.remote()`. We could add a `ray.submit(f)` function to the API which would avoid the decorator. I like it less (aesthetically), though it does have the advantage of allowing you to pass in other arguments (e.g., resource requirements) when the function is invoked as opposed to when it is defined (so they can differ between invocations). As a user who doesn't know what's going on under the hood, I prefer `ray.submit(f)` (or `ray.run(f)` or `ray.execute(f)`). It is more transparent to me because I know that `f` is still just the function I defined. Also, if `f.remote()` turns out to be the only thing that the decorator enables after lazy shipping, it might be good to clarify that in the documentation. It might also be good to make the decorator optional at this point. One thing to note: ``` @ray.remote def f(): return 1 ``` and ``` def f(): return 1 f = ray.remote(f) ``` The above two are equivalent and currently work. @ludwigschmidt does the latter approach provide the semantics you were suggesting above? Just as a note, oftentimes I don't use the annotation, but it's helpful to have it for fast scripting. I forgot to comment on this earlier. The latter approach does not quite have the semantics I had in mind because f is still a special object and not the function I initially defined. With the `ray.submit(f)` proposed by @robertnishihara, f can remain the "plain" Python function I define.
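A toy sketch of the export-on-first-invocation bookkeeping this would require (all names here are hypothetical stand-ins, loosely modeled on the `_last_driver_id_exported_for` field the patch adds): the definition is only pushed when `.remote()` is first called for the current driver, not at definition time:

```python
class FakeWorker(object):
    """Toy stand-in for the Ray worker; not the real implementation."""

    def __init__(self, task_driver_id):
        self.task_driver_id = task_driver_id
        self.exported = []

    def export(self, function):
        self.exported.append(function.__name__)


class LazilyExportedFunction(object):
    """Toy stand-in for RemoteFunction: export on first invocation only."""

    def __init__(self, function):
        self._function = function
        self._last_driver_id_exported_for = None

    def remote(self, worker, *args):
        if self._last_driver_id_exported_for != worker.task_driver_id:
            self._last_driver_id_exported_for = worker.task_driver_id
            worker.export(self._function)
        return self._function(*args)


worker = FakeWorker(task_driver_id="driver-1")
f = LazilyExportedFunction(lambda x: x + 1)
assert worker.exported == []            # nothing is shipped at definition time
f.remote(worker, 1)
f.remote(worker, 2)
assert worker.exported == ["<lambda>"]  # exported exactly once, on first call
```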
2019-05-23T17:45:12
ray-project/ray
4,912
ray-project__ray-4912
[ "4881" ]
89722ff00349203424cde8d468c9f1815799f807
diff --git a/python/ray/tune/schedulers/pbt.py b/python/ray/tune/schedulers/pbt.py --- a/python/ray/tune/schedulers/pbt.py +++ b/python/ray/tune/schedulers/pbt.py @@ -19,10 +19,6 @@ logger = logging.getLogger(__name__) -# Parameters are transferred from the top PBT_QUANTILE fraction of trials to -# the bottom PBT_QUANTILE fraction. -PBT_QUANTILE = 0.25 - class PBTTrialState(object): """Internal PBT state tracked per-trial.""" @@ -134,6 +130,10 @@ class PopulationBasedTraining(FIFOScheduler): A function specifies the distribution of a continuous parameter. You must specify at least one of `hyperparam_mutations` or `custom_explore_fn`. + quantile_fraction (float): Parameters are transferred from the top + `quantile_fraction` fraction of trials to the bottom + `quantile_fraction` fraction. Needs to be between 0 and 0.5. + Setting it to 0 essentially implies doing no exploitation at all. resample_probability (float): The probability of resampling from the original distribution when applying `hyperparam_mutations`. If not resampled, the value will be perturbed by a factor of 1.2 or 0.8 @@ -172,6 +172,7 @@ def __init__(self, mode="max", perturbation_interval=60.0, hyperparam_mutations={}, + quantile_fraction=0.25, resample_probability=0.25, custom_explore_fn=None, log_config=True): @@ -180,6 +181,11 @@ def __init__(self, "You must specify at least one of `hyperparam_mutations` or " "`custom_explore_fn` to use PBT.") + if quantile_fraction > 0.5 or quantile_fraction < 0: + raise TuneError( + "You must set `quantile_fraction` to a value between 0 and" + "0.5. Current value: '{}'".format(quantile_fraction)) + assert mode in ["min", "max"], "`mode` must be 'min' or 'max'!" if reward_attr is not None: @@ -199,6 +205,7 @@ def __init__(self, self._time_attr = time_attr self._perturbation_interval = perturbation_interval self._hyperparam_mutations = hyperparam_mutations + self._quantile_fraction = quantile_fraction self._resample_probability = resample_probability self._trial_state = {} self._custom_explore_fn = custom_explore_fn @@ -247,6 +254,7 @@ def _log_config_on_step(self, trial_state, new_state, trial, For each step, logs: [target trial tag, clone trial tag, target trial iteration, clone trial iteration, old config, new config]. + """ trial_name, trial_to_clone_name = (trial_state.orig_tag, new_state.orig_tag) @@ -277,7 +285,9 @@ def _log_config_on_step(self, trial_state, new_state, trial, def _exploit(self, trial_executor, trial, trial_to_clone): """Transfers perturbed state from trial_to_clone -> trial. - If specified, also logs the updated hyperparam state.""" + If specified, also logs the updated hyperparam state. + + """ trial_state = self._trial_state[trial] new_state = self._trial_state[trial_to_clone] @@ -318,7 +328,9 @@ def _exploit(self, trial_executor, trial, trial_to_clone): def _quantiles(self): """Returns trials in the lower and upper `quantile` of the population. - If there is not enough data to compute this, returns empty lists.""" + If there is not enough data to compute this, returns empty lists. 
+ + """ trials = [] for trial, state in self._trial_state.items(): @@ -329,14 +341,19 @@ def _quantiles(self): if len(trials) <= 1: return [], [] else: - return (trials[:int(math.ceil(len(trials) * PBT_QUANTILE))], - trials[int(math.floor(-len(trials) * PBT_QUANTILE)):]) + num_trials_in_quantile = int( + math.ceil(len(trials) * self._quantile_fraction)) + if num_trials_in_quantile > len(trials) / 2: + num_trials_in_quantile = int(math.floor(len(trials) / 2)) + return (trials[:num_trials_in_quantile], + trials[-num_trials_in_quantile:]) def choose_trial_to_run(self, trial_runner): """Ensures all trials get fair share of time (as defined by time_attr). This enables the PBT scheduler to support a greater number of concurrent trials than can fit in the cluster at any given time. + """ candidates = []
diff --git a/python/ray/tune/tests/test_trial_scheduler.py b/python/ray/tune/tests/test_trial_scheduler.py --- a/python/ray/tune/tests/test_trial_scheduler.py +++ b/python/ray/tune/tests/test_trial_scheduler.py @@ -627,6 +627,7 @@ def basicSetup(self, resample_prob=0.0, explore=None, log_config=False): time_attr="training_iteration", perturbation_interval=10, resample_probability=resample_prob, + quantile_fraction=0.25, hyperparam_mutations={ "id_factor": [100], "float_factor": lambda: 100.0,
[tune] Make PBT Quantile fraction configurable ### Describe the problem Currently PBT has the quantile fraction hardcoded to 25% (that is, trials from the top 25% get transferred to the bottom 25%). It would be nice if this were configurable. I have been bothering you guys with comments/issues for a while, so I'd be happy to use this as my first issue to contribute, if there are no objections from you guys or any reasons you see that this should not be configurable. :)
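A standalone sketch (not the actual scheduler code) of how a configurable quantile fraction would split a population of trial scores into a bottom and a top set, capped so the two sets never overlap; it mirrors the `_quantiles` logic in the patch:

```python
import math


def quantiles(scores, quantile_fraction):
    """Return (bottom, top) trial indices, with trials sorted by ascending score."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    if len(order) <= 1:
        return [], []
    num = int(math.ceil(len(order) * quantile_fraction))
    num = min(num, len(order) // 2)  # cap so the two sets never overlap
    if num == 0:
        return [], []
    return order[:num], order[-num:]


bottom, top = quantiles([0.2, 0.9, 0.5, 0.7, 0.1, 0.8], quantile_fraction=0.25)
print(bottom, top)  # [4, 0] (worst trials) [5, 1] (best trials)
```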
2019-05-31T16:11:35
ray-project/ray
4,937
ray-project__ray-4937
[ "4932" ]
2702b15b04f3e8a84f65f98ccb6e7300755a217f
diff --git a/python/ray/__init__.py b/python/ray/__init__.py --- a/python/ray/__init__.py +++ b/python/ray/__init__.py @@ -96,7 +96,7 @@ from ray.runtime_context import _get_runtime_context # noqa: E402 # Ray version string. -__version__ = "0.7.1" +__version__ = "0.8.0.dev1" __all__ = [ "global_state",
Bumping to 0.8.0.dev1? @robertnishihara @devin-petersohn Is there a followup PR to bump back to 0.8.0.dev1? _Originally posted by @richardliaw in https://github.com/ray-project/ray/pull/4890#issuecomment-498979267_
2019-06-05T22:59:59
ray-project/ray
5,002
ray-project__ray-5002
[ "4674" ]
7bda5edc16d40880b16ac04a5421dbfe79f4ccb2
diff --git a/python/ray/experimental/signal.py b/python/ray/experimental/signal.py --- a/python/ray/experimental/signal.py +++ b/python/ray/experimental/signal.py @@ -2,6 +2,8 @@ from __future__ import division from __future__ import print_function +import logging + from collections import defaultdict import ray @@ -13,6 +15,8 @@ # in node_manager.cc ACTOR_DIED_STR = "ACTOR_DIED_SIGNAL" +logger = logging.getLogger(__name__) + class Signal(object): """Base class for Ray signals.""" @@ -125,10 +129,16 @@ def receive(sources, timeout=None): for s in sources: task_id_to_sources[_get_task_id(s).hex()].append(s) + if timeout < 1e-3: + logger.warning("Timeout too small. Using 1ms minimum") + timeout = 1e-3 + + timeout_ms = int(1000 * timeout) + # Construct the redis query. query = "XREAD BLOCK " - # Multiply by 1000x since timeout is in sec and redis expects ms. - query += str(1000 * timeout) + # redis expects ms. + query += str(timeout_ms) query += " STREAMS " query += " ".join([task_id for task_id in task_id_to_sources]) query += " "
diff --git a/python/ray/tests/test_signal.py b/python/ray/tests/test_signal.py --- a/python/ray/tests/test_signal.py +++ b/python/ray/tests/test_signal.py @@ -353,3 +353,36 @@ def f(sources): assert len(result_list) == 1 result_list = ray.get(f.remote([a])) assert len(result_list) == 1 + + +def test_non_integral_receive_timeout(ray_start_regular): + @ray.remote + def send_signal(value): + signal.send(UserSignal(value)) + + a = send_signal.remote(0) + # make sure send_signal had a chance to execute + ray.get(a) + + result_list = ray.experimental.signal.receive([a], timeout=0.1) + + assert len(result_list) == 1 + + +def test_small_receive_timeout(ray_start_regular): + """ Test that receive handles timeout smaller than the 1ms min + """ + # 0.1 ms + small_timeout = 1e-4 + + @ray.remote + def send_signal(value): + signal.send(UserSignal(value)) + + a = send_signal.remote(0) + # make sure send_signal had a chance to execute + ray.get(a) + + result_list = ray.experimental.signal.receive([a], timeout=small_timeout) + + assert len(result_list) == 1
Non-integer timeout for signal.receive causes malformed redis query <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: source - **Ray version**: 0.7.0.dev2 - **Python version**: 3.7.2 ### Describe the problem The timeout argument in `signal.receive()` does not work with non-integer values. For instance, calling `signal.receive(sources, timeout=0.01)` causes a `redis.exceptions.ResponseError: timeout is not an integer or out of range`. This is because of L131 in `signal.py`, where the timeout is converted to ms and then converted to a string - redis expects an int, but if you pass a double to the method (like 0.1), the result of `str(1000 * timeout)` is `100.0`. The correct string would have been `100`. A fix would be to change `str(1000 * timeout)` to `str(int(1000 * timeout))` and have checks to ensure timeout is not < 1000. https://github.com/ray-project/ray/blob/d951eb740ffe22c385d75df62aa18da790706804/python/ray/experimental/signal.py#L129-L133 ### Source code / logs Traceback: ``` Traceback (most recent call last): File "driver.py", line 21, in <module> signals = signal.receive(self.clients, timeout=0.01) File "/home/romilb/ray/python/ray/experimental/signal.py", line 141, in receive answers = ray.worker.global_worker.redis_client.execute_command(query) File "/home/romilb/anaconda3/lib/python3.7/site-packages/redis/client.py", line 668, in execute_command return self.parse_response(connection, command_name, **options) File "/home/romilb/anaconda3/lib/python3.7/site-packages/redis/client.py", line 680, in parse_response response = connection.read_response() File "/home/romilb/anaconda3/lib/python3.7/site-packages/redis/connection.py", line 629, in read_response raise response redis.exceptions.ResponseError: timeout is not an integer or out of range ``` I also printed the corresponding redis query: ``` (pid=31330) XREAD BLOCK 10.0 STREAMS fc96f993802bb5a31b5868e69179fe76b62b7c3b 0 ```
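A tiny sketch of the conversion the fix ends up doing (mirroring the patch above): clamp the float timeout in seconds to a 1 ms minimum, then turn it into the integer millisecond count that the `XREAD BLOCK` query expects:

```python
def timeout_to_ms(timeout):
    """Convert a timeout in seconds to the integer milliseconds redis expects."""
    if timeout < 1e-3:
        timeout = 1e-3  # redis cannot block for less than 1 ms
    return int(1000 * timeout)


print(timeout_to_ms(0.01))    # 10
print(timeout_to_ms(0.1))     # 100
print(timeout_to_ms(0.0001))  # 1 (clamped to the minimum)
```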
I'm experiencing this problem. Seems like an easy fix, is this all that needs to happen? Yes, do you want to submit a PR? I do.
2019-06-19T20:44:09
ray-project/ray
5,097
ray-project__ray-5097
[ "5041" ]
1cf7728f359bd26bc54a52ded3373b8c8a37311b
diff --git a/python/ray/tune/schedulers/pbt.py b/python/ray/tune/schedulers/pbt.py --- a/python/ray/tune/schedulers/pbt.py +++ b/python/ray/tune/schedulers/pbt.py @@ -268,8 +268,8 @@ def _log_config_on_step(self, trial_state, new_state, trial, "pbt_policy_" + trial_to_clone_id + ".txt") policy = [ trial_name, trial_to_clone_name, - trial.last_result[TRAINING_ITERATION], - trial_to_clone.last_result[TRAINING_ITERATION], + trial.last_result.get(TRAINING_ITERATION, 0), + trial_to_clone.last_result.get(TRAINING_ITERATION, 0), trial_to_clone.config, new_config ] # Log to global file.
diff --git a/python/ray/tune/tests/test_trial_scheduler.py b/python/ray/tune/tests/test_trial_scheduler.py --- a/python/ray/tune/tests/test_trial_scheduler.py +++ b/python/ray/tune/tests/test_trial_scheduler.py @@ -622,10 +622,15 @@ def tearDown(self): ray.shutdown() _register_all() # re-register the evicted objects - def basicSetup(self, resample_prob=0.0, explore=None, log_config=False): + def basicSetup(self, + resample_prob=0.0, + explore=None, + perturbation_interval=10, + log_config=False, + step_once=True): pbt = PopulationBasedTraining( time_attr="training_iteration", - perturbation_interval=10, + perturbation_interval=perturbation_interval, resample_probability=resample_prob, quantile_fraction=0.25, hyperparam_mutations={ @@ -646,9 +651,10 @@ def basicSetup(self, resample_prob=0.0, explore=None, log_config=False): }) runner.add_trial(trial) trial.status = Trial.RUNNING - self.assertEqual( - pbt.on_trial_result(runner, trial, result(10, 50 * i)), - TrialScheduler.CONTINUE) + if step_once: + self.assertEqual( + pbt.on_trial_result(runner, trial, result(10, 50 * i)), + TrialScheduler.CONTINUE) pbt.reset_stats() return pbt, runner @@ -959,6 +965,24 @@ def explore(new_config): self.assertEqual(trials[0].config["id_factor"], 42) self.assertEqual(trials[0].config["float_factor"], 43) + def testFastPerturb(self): + pbt, runner = self.basicSetup( + perturbation_interval=1, step_once=False, log_config=True) + trials = runner.get_trials() + + tmpdir = tempfile.mkdtemp() + for i, trial in enumerate(trials): + trial.local_dir = tmpdir + trial.last_result = {} + pbt.on_trial_result(runner, trials[0], result(1, 10)) + self.assertEqual( + pbt.on_trial_result(runner, trials[2], result(1, 200)), + TrialScheduler.CONTINUE) + self.assertEqual(pbt._num_checkpoints, 1) + + pbt._exploit(runner.trial_executor, trials[1], trials[2]) + shutil.rmtree(tmpdir) + class AsyncHyperBandSuite(unittest.TestCase): def setUp(self):
[tune] PBT perturbing after first iteration KeyError <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux - **Ray installed from (source or binary)**: binary - **Ray version**: 0.7.1 - **Python version**: 3.7.3 - **Exact command to reproduce**: run PBT with `perturbation_interval=1` and `time_attr="training_iteration"`. ### Describe the problem Occasionally, with the above described config PBT attempts to perturb config before `trial.last_result` is populated. This results in `KeyError` and the worker dies. ### Source code / logs ```python Traceback (most recent call last): File "/home/magnus/.cache/pypoetry/virtualenvs/document-classification-py3.7/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 458, in _process_trial self, trial, result) File "/home/magnus/.cache/pypoetry/virtualenvs/document-classification-py3.7/lib/python3.7/site-packages/ray/tune/schedulers/pbt.py", line 217, in on_trial_result self._exploit(trial_runner.trial_executor, trial, trial_to_clone) File "/home/magnus/.cache/pypoetry/virtualenvs/document-classification-py3.7/lib/python3.7/site-packages/ray/tune/schedulers/pbt.py", line 279, in _exploit trial_to_clone, new_config) File "/home/magnus/.cache/pypoetry/virtualenvs/document-classification-py3.7/lib/python3.7/site-packages/ray/tune/schedulers/pbt.py", line 244, in _log_config_on_step trial.last_result[TRAINING_ITERATION], KeyError: 'training_iteration' ```
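The patch above avoids the crash with a defensive lookup; a one-line sketch of the pattern, using a plain dict in place of the real `trial.last_result`:

```python
TRAINING_ITERATION = "training_iteration"

last_result = {}  # e.g. a trial that has not reported any result yet

# last_result[TRAINING_ITERATION] would raise KeyError here;
# .get() falls back to 0 instead.
iteration = last_result.get(TRAINING_ITERATION, 0)
print(iteration)  # 0
```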
Can you post what your `run` command looks like? (not actual code we're running, since it's scattered all over modules, but something like this) ```python experiment = ray.tune.Experiment( name=experiment_name, run=SupervisedTrainable, resources_per_trial={"cpu": cpu_fraction, "gpu": gpu_fraction}, stop=experiment_config.training.stopping_criteria, num_samples=4, config=resolved_config, checkpoint_at_end=True, checkpoint_freq=experiment_config.training.checkpoint_freq, ) scheduler = PopulationBasedTraining( time_attr="training_iteration", reward_attr="val/accuracy", perturbation_interval=1, hyperparam_mutations=hparam_mutations, ) ray.tune.run( experiment, scheduler=scheduler, reuse_actors=False ) ``` OK, I see what the issue is; I'll open a PR to fix.
2019-07-02T20:41:06
ray-project/ray
5,117
ray-project__ray-5117
[ "5105" ]
09bde397c9b0aa85bf0fba261c601717f8f79c18
diff --git a/python/ray/tune/tune.py b/python/ray/tune/tune.py --- a/python/ray/tune/tune.py +++ b/python/ray/tune/tune.py @@ -49,6 +49,8 @@ def run(run_or_experiment, sync_to_driver=None, checkpoint_freq=0, checkpoint_at_end=False, + keep_checkpoints_num=None, + checkpoint_score_attr=None, global_checkpoint_period=10, export_formats=None, max_failures=3, @@ -114,6 +116,13 @@ def run(run_or_experiment, checkpoints. A value of 0 (default) disables checkpointing. checkpoint_at_end (bool): Whether to checkpoint at the end of the experiment regardless of the checkpoint_freq. Default is False. + keep_checkpoints_num (int): Number of checkpoints to keep. A value of + `None` keeps all checkpoints. Defaults to `None`. If set, need + to provide `checkpoint_score_attr`. + checkpoint_score_attr (str): Specifies by which attribute to rank the + best checkpoint. Default is increasing order. If attribute starts + with `min-` it will rank attribute in decreasing order, i.e. + `min-validation_loss`. global_checkpoint_period (int): Seconds between global checkpointing. This does not affect `checkpoint_freq`, which specifies frequency for individual trials. @@ -199,6 +208,8 @@ def run(run_or_experiment, loggers=loggers, checkpoint_freq=checkpoint_freq, checkpoint_at_end=checkpoint_at_end, + keep_checkpoints_num=keep_checkpoints_num, + checkpoint_score_attr=checkpoint_score_attr, export_formats=export_formats, max_failures=max_failures, restore=restore,
[tune] Missing keep_checkpoints_num arg in tune Hi everybody, Thanks a lot for this lib; I've been using it for the past few weeks with TF2 and it works pretty well! In my current use case, I'm checkpointing regularly to ensure minimal loss in the face of OOM. I control TF checkpoints manually to avoid filling my disk space, but I still need to dump some ray metadata (iteration count, etc.). I've noticed that ray creates folders like `checkpoint_{iter}` to store those, but from `tune` there is no way to tell it that I don't need to keep dozens of folders. I also noticed that the `experiment` and `trial` classes have the perfect argument: `keep_checkpoints_num`. Sadly this argument is missing from the `tune` config. Is there any reason why it is missing from the `tune` arguments? Have a nice day.
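With the change in the patch above, the setting can be passed straight to `tune.run`; a hedged usage sketch (it assumes RLlib's registered "PG" trainable and the CartPole-v0 environment are available — any trainable that reports `episode_reward_mean` would do):

```python
from ray import tune

tune.run(
    "PG",                                         # any registered trainable
    config={"env": "CartPole-v0"},
    stop={"training_iteration": 10},
    checkpoint_freq=1,
    keep_checkpoints_num=3,                       # keep only the 3 best checkpoints
    checkpoint_score_attr="episode_reward_mean",  # rank checkpoints by this metric
)
```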
No particular reason, we just missed it - would you be willing to make a PR?
2019-07-04T08:41:11
ray-project/ray
5,136
ray-project__ray-5136
[ "5135", "5135" ]
274233962f160df599252a7388810c32e6c4113a
diff --git a/python/ray/tune/function_runner.py b/python/ray/tune/function_runner.py --- a/python/ray/tune/function_runner.py +++ b/python/ray/tune/function_runner.py @@ -3,10 +3,10 @@ from __future__ import print_function import logging -import sys import time import inspect import threading +import traceback from six.moves import queue from ray.tune import track @@ -100,12 +100,9 @@ def run(self): # report the error but avoid indefinite blocking which would # prevent the exception from being propagated in the unlikely # case that something went terribly wrong - err_type, err_value, err_tb = sys.exc_info() - err_tb = err_tb.format_exc() + err_tb_str = traceback.format_exc() self._error_queue.put( - (err_type, err_value, err_tb), - block=True, - timeout=ERROR_REPORT_TIMEOUT) + err_tb_str, block=True, timeout=ERROR_REPORT_TIMEOUT) except queue.Full: logger.critical( ("Runner Thread was unable to report error to main " @@ -234,13 +231,10 @@ def _stop(self): def _report_thread_runner_error(self, block=False): try: - err_type, err_value, err_tb = self._error_queue.get( + err_tb_str = self._error_queue.get( block=block, timeout=ERROR_FETCH_TIMEOUT) - raise TuneError(("Trial raised a {err_type} exception with value: " - "{err_value}\nWith traceback:\n{err_tb}").format( - err_type=err_type, - err_value=err_value, - err_tb=err_tb)) + raise TuneError(("Trial raised an exception. Traceback:\n{}" + .format(err_tb_str))) except queue.Empty: pass
[tune] err_tb.format_exc() undefined in function_runner ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: binary (PyPI) - **Ray version**: 0.7.2 - **Python version**: 3.6 - **Exact command to reproduce**: N/A ### Describe the problem When the objective function I am attempting to optimise with Tune crashes, Ray throws another exception complaining that the traceback object it gets has no `.format_exc()` attribute. It then claims that my function completed without dying or reporting any results. ### Source code / logs Here is the relevant part of the traceback: ``` … traceback from my app up here… (pid=21249) During handling of the above exception, another exception occurred: (pid=21249) (pid=21249) Traceback (most recent call last): (pid=21249) File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner (pid=21249) self.run() (pid=21249) File "/home/sam/.virtualenvs/asnets/lib/python3.6/site-packages/ray/tune/function_runner.py", line 105, in run (pid=21249) err_tb = err_tb.format_exc() (pid=21249) AttributeError: 'traceback' object has no attribute 'format_exc' (pid=21249) 2019-07-07 10:14:16,574 ERROR trial_runner.py:487 -- Error processing event. Traceback (most recent call last): File "/home/sam/.virtualenvs/asnets/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 436, in _process_trial result = self.trial_executor.fetch_result(trial) File "/home/sam/.virtualenvs/asnets/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 323, in fetch_result result = ray.get(trial_future[0]) File "/home/sam/.virtualenvs/asnets/lib/python3.6/site-packages/ray/worker.py", line 2195, in get raise value ray.exceptions.RayTaskError: ray_WrappedFunc:train() (pid=21249, host=sam-ThinkPad-E470) File "/home/sam/.virtualenvs/asnets/lib/python3.6/site-packages/ray/tune/trainable.py", line 150, in train result = self._train() File "/home/sam/.virtualenvs/asnets/lib/python3.6/site-packages/ray/tune/function_runner.py", line 206, in _train ("Wrapped function ran until completion without reporting " ray.tune.error.TuneError: Wrapped function ran until completion without reporting results or raising an exception. ``` The error seems to be due to [the following lines](https://github.com/ray-project/ray/blob/6a14f1a540d1c97d812cfcc2aecb1654028b279f/python/ray/tune/function_runner.py#L100-L108) in `function_runner.py`: ```python # report the error but avoid indefinite blocking which would # prevent the exception from being propagated in the unlikely # case that something went terribly wrong err_type, err_value, err_tb = sys.exc_info() err_tb = err_tb.format_exc() ``` Python traceback objects don't seem to have a `format_exc()` attribute (at least as of 3.7), so I assume that last was a typo. Importing the `traceback` module and replacing `err_tb.format_exc()` with `traceback.format_tb(err_tb)` (or `traceback.format_exc()`) makes it go away.
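For reference, a minimal standalone example of the pattern the fix uses (the patch above does essentially this inside the runner thread): `traceback.format_exc()` returns the formatted traceback of the exception currently being handled as a string:

```python
import traceback

try:
    1 / 0
except ZeroDivisionError:
    err_tb_str = traceback.format_exc()

print(err_tb_str)  # the full "Traceback (most recent call last): ..." text
```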
2019-07-07T18:00:15
ray-project/ray
5,169
ray-project__ray-5169
[ "5159" ]
691c9733f95a28d9a1ceeb291eda94a2ffd93cdd
diff --git a/python/ray/autoscaler/gcp/config.py b/python/ray/autoscaler/gcp/config.py --- a/python/ray/autoscaler/gcp/config.py +++ b/python/ray/autoscaler/gcp/config.py @@ -383,7 +383,8 @@ def _add_iam_policy_binding(service_account, roles): email = service_account["email"] member_id = "serviceAccount:" + email - policy = crm.projects().getIamPolicy(resource=project_id).execute() + policy = crm.projects().getIamPolicy( + resource=project_id, body={}).execute() already_configured = True
[autoscaler] GCP error missing required parameter body <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux - **Ray installed from (source or binary)**: - **Ray version**: 0.6.2 - **Python version**: 3.6 - **Exact command to reproduce**: ray up gcp_trainer.yaml <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> Code worked well until today (no update). I got an error at the beginning after the getIamPolicy function in /autoscaler/gcp/config.py. I have all the rights / permissions in my GCP. The yaml file is similar to the example-full.yaml ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. --> file_cache is unavailable when using oauth2client >= 4.0.0 Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/__init__.py", line 36, in autodetect from google.appengine.api import memcache ModuleNotFoundError: No module named 'google.appengine' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 33, in <module> from oauth2client.contrib.locked_file import LockedFile ModuleNotFoundError: No module named 'oauth2client.contrib.locked_file' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 37, in <module> from oauth2client.locked_file import LockedFile ModuleNotFoundError: No module named 'oauth2client.locked_file' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/__init__.py", line 41, in autodetect from . import file_cache File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 41, in <module> 'file_cache is unavailable when using oauth2client >= 4.0.0') ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 URL being requested: GET https://www.googleapis.com/discovery/v1/apis/cloudresourcemanager/v1/rest /opt/tools/anaconda3/lib/python3.6/site-packages/google/auth/_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. 
For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) file_cache is unavailable when using oauth2client >= 4.0.0 Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/__init__.py", line 36, in autodetect from google.appengine.api import memcache ModuleNotFoundError: No module named 'google.appengine' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 33, in <module> from oauth2client.contrib.locked_file import LockedFile ModuleNotFoundError: No module named 'oauth2client.contrib.locked_file' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 37, in <module> from oauth2client.locked_file import LockedFile ModuleNotFoundError: No module named 'oauth2client.locked_file' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/__init__.py", line 41, in autodetect from . import file_cache File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 41, in <module> 'file_cache is unavailable when using oauth2client >= 4.0.0') ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 URL being requested: GET https://www.googleapis.com/discovery/v1/apis/iam/v1/rest /opt/tools/anaconda3/lib/python3.6/site-packages/google/auth/_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) file_cache is unavailable when using oauth2client >= 4.0.0 Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/__init__.py", line 36, in autodetect from google.appengine.api import memcache ModuleNotFoundError: No module named 'google.appengine' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 33, in <module> from oauth2client.contrib.locked_file import LockedFile ModuleNotFoundError: No module named 'oauth2client.contrib.locked_file' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 37, in <module> from oauth2client.locked_file import LockedFile ModuleNotFoundError: No module named 'oauth2client.locked_file' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/__init__.py", line 41, in autodetect from . 
import file_cache File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery_cache/file_cache.py", line 41, in <module> 'file_cache is unavailable when using oauth2client >= 4.0.0') ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 URL being requested: GET https://www.googleapis.com/discovery/v1/apis/compute/v1/rest /opt/tools/anaconda3/lib/python3.6/site-packages/google/auth/_default.py:66: UserWarning: Your application has authenticated using end user credentials from Google Cloud SDK. We recommend that most server applications use service accounts instead. If your application continues to use end user credentials from Cloud SDK, you might receive a "quota exceeded" or "API not enabled" error. For more information about service accounts, see https://cloud.google.com/docs/authentication/ warnings.warn(_CLOUD_SDK_CREDENTIALS_WARNING) URL being requested: GET https://cloudresourcemanager.googleapis.com/v1/projects/d-dls-dlsi?alt=json URL being requested: GET https://iam.googleapis.com/v1/projects/d-dls-dlsi/serviceAccounts/[email protected]?alt=json Traceback (most recent call last): File "/opt/tools/anaconda3/bin/ray", line 10, in <module> sys.exit(main()) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/scripts/scripts.py", line 744, in main return cli() File "/opt/tools/anaconda3/lib/python3.6/site-packages/click/core.py", line 722, in __call__ return self.main(*args, **kwargs) File "/opt/tools/anaconda3/lib/python3.6/site-packages/click/core.py", line 697, in main rv = self.invoke(ctx) File "/opt/tools/anaconda3/lib/python3.6/site-packages/click/core.py", line 1066, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/opt/tools/anaconda3/lib/python3.6/site-packages/click/core.py", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/tools/anaconda3/lib/python3.6/site-packages/click/core.py", line 535, in invoke return callback(*args, **kwargs) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/scripts/scripts.py", line 463, in create_or_update no_restart, restart_only, yes, cluster_name) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/autoscaler/commands.py", line 43, in create_or_update_cluster config = _bootstrap_config(config) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/autoscaler/commands.py", line 65, in _bootstrap_config resolved_config = bootstrap_config(config) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/autoscaler/gcp/config.py", line 109, in bootstrap_gcp config = _configure_iam_role(config) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/autoscaler/gcp/config.py", line 169, in _configure_iam_role _add_iam_policy_binding(service_account, DEFAULT_SERVICE_ACCOUNT_ROLES) File "/opt/tools/anaconda3/lib/python3.6/site-packages/ray/autoscaler/gcp/config.py", line 381, in _add_iam_policy_binding policy = crm.projects().getIamPolicy(resource=project_id).execute() File "/opt/tools/anaconda3/lib/python3.6/site-packages/googleapiclient/discovery.py", line 730, in method raise TypeError('Missing required parameter "%s"' % name) TypeError: Missing required parameter "body"
Looks like https://github.com/googleapis/google-api-python-client/issues/416 is a fix. Odd that this didn't fail before. Can you try it out to see if it works? You can follow these instructions for setting up an environment (it'll take 3 minutes) - https://ray.readthedocs.io/en/latest/tune-contrib.html#setting-up-a-development-environment This issue appeared for me too on several different machines independently starting at the same time a couple of days ago. I believe something in the Google API changed recently and the Google API python client library wasn't changed accordingly. The fix for autoscaler is to add `body={}` argument to the `getIamPolicy` here: https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/gcp/config.py#L386.
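For reference, a minimal sketch of the suggested call, assuming `google-api-python-client` with application-default credentials; the project id below is a placeholder:

```python
from googleapiclient import discovery

# Build the Cloud Resource Manager client (uses application-default credentials).
crm = discovery.build("cloudresourcemanager", "v1")

# Newer client versions require the request body explicitly, so pass body={}.
policy = crm.projects().getIamPolicy(
    resource="my-placeholder-project", body={}).execute()
print(policy)
```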
2019-07-11T03:54:30
ray-project/ray
5,208
ray-project__ray-5208
[ "5207" ]
4fa2a6006c305694a682086b1b52608cc3b7b8ee
diff --git a/python/ray/rllib/utils/debug.py b/python/ray/rllib/utils/debug.py --- a/python/ray/rllib/utils/debug.py +++ b/python/ray/rllib/utils/debug.py @@ -78,7 +78,10 @@ def _summarize(obj): elif isinstance(obj, tuple): return tuple(_summarize(x) for x in obj) elif isinstance(obj, np.ndarray): - if obj.dtype == np.object: + if obj.size == 0: + return _StringValue("np.ndarray({}, dtype={})".format( + obj.shape, obj.dtype)) + elif obj.dtype == np.object: return _StringValue("np.ndarray({}, dtype={}, head={})".format( obj.shape, obj.dtype, _summarize(obj[0]))) else: diff --git a/python/ray/rllib/utils/memory.py b/python/ray/rllib/utils/memory.py --- a/python/ray/rllib/utils/memory.py +++ b/python/ray/rllib/utils/memory.py @@ -56,7 +56,11 @@ def aligned_array(size, dtype, align=64): empty = np.empty(n + (align - 1), dtype=np.uint8) data_align = empty.ctypes.data % align offset = 0 if data_align == 0 else (align - data_align) - output = empty[offset:offset + n].view(dtype) + if n == 0: + # stop np from optimising out empty slice reference + output = empty[offset:offset + 1][0:0].view(dtype) + else: + output = empty[offset:offset + n].view(dtype) assert len(output) == size, len(output) assert output.ctypes.data % align == 0, output.ctypes.data
[rllib] utils.debug.summarize() dies on empty arrays # System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: binary (pypi) - **Ray version**: 0.7.1 & 0.7.2 - **Python version**: 3.6 - **Exact command to reproduce**: N/A ### Describe the problem I'm running rllib on an environment that returns weird zero-length observations. RLLib chokes when it tries to summarise them because the arrays don't have a min/max/mean. Example traceback below. ### Source code / logs ``` 2019-07-16 14:10:09,054 ERROR trial_runner.py:487 -- Error processing event. Traceback (most recent call last): File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 436, in _process_trial result = self.trial_executor.fetch_result(trial) …snip… File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/ray/rllib/evaluation/sampler.py", line 308, in _env_runner summarize(unfiltered_obs))) File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/ray/rllib/utils/debug.py", line 65, in summarize return _printer.pformat(_summarize(obj)) File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/ray/rllib/utils/debug.py", line 70, in _summarize return {k: _summarize(v) for k, v in obj.items()} …snip… File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/ray/rllib/utils/debug.py", line 87, in _summarize obj.shape, obj.dtype, round(float(np.min(obj)), 3), File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2618, in amin initial=initial) File "/home/sam/.virtualenvs/weird-env/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction return ufunc.reduce(obj, axis, dtype, out, **passkwargs) ValueError: zero-size array to reduction operation minimum which has no identity ```
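A minimal numpy-only reproduction of the failing reduction (not Ray-specific; shown only to illustrate why the patch special-cases empty arrays):

```python
import numpy as np

obs = np.zeros((0, 4), dtype=np.float32)  # zero-length observation batch

try:
    np.min(obs)
except ValueError as e:
    # "zero-size array to reduction operation minimum which has no identity"
    print("ValueError:", e)

# The fix summarizes only shape and dtype when the array is empty.
print("np.ndarray({}, dtype={})".format(obs.shape, obs.dtype))
```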
2019-07-16T21:36:58
ray-project/ray
5,221
ray-project__ray-5221
[ "5220" ]
0af07bd4934d6cd0d578683da7a6e1e2917014f6
diff --git a/python/ray/log_monitor.py b/python/ray/log_monitor.py --- a/python/ray/log_monitor.py +++ b/python/ray/log_monitor.py @@ -4,6 +4,7 @@ import argparse import errno +import glob import json import logging import os @@ -89,18 +90,20 @@ def close_all_files(self): def update_log_filenames(self): """Update the list of log files to monitor.""" - log_filenames = os.listdir(self.logs_dir) - - for log_filename in log_filenames: - full_path = os.path.join(self.logs_dir, log_filename) - if full_path not in self.log_filenames: - self.log_filenames.add(full_path) + # we only monior worker log files + log_file_paths = glob.glob("{}/worker*[.out|.err]".format( + self.logs_dir)) + for file_path in log_file_paths: + if os.path.isfile( + file_path) and file_path not in self.log_filenames: + self.log_filenames.add(file_path) self.closed_file_infos.append( LogFileInfo( - filename=full_path, + filename=file_path, size_when_last_opened=0, file_position=0, file_handle=None)) + log_filename = os.path.basename(file_path) logger.info("Beginning to track file {}".format(log_filename)) def open_closed_files(self): @@ -172,20 +175,21 @@ def check_log_files_and_publish_updates(self): lines_to_publish = [] max_num_lines_to_read = 100 for _ in range(max_num_lines_to_read): - next_line = file_info.file_handle.readline() - if next_line == "": - break - if next_line[-1] == "\n": - next_line = next_line[:-1] - lines_to_publish.append(next_line) - - # Publish the lines if this is a worker process. - filename = file_info.filename.split("/")[-1] - is_worker = (filename.startswith("worker") - and (filename.endswith("out") - or filename.endswith("err"))) - - if is_worker and file_info.file_position == 0: + try: + next_line = file_info.file_handle.readline() + if next_line == "": + break + if next_line[-1] == "\n": + next_line = next_line[:-1] + lines_to_publish.append(next_line) + except Exception: + logger.error("Error: Reading file: {}, position: {} " + "failed.".format( + file_info.full_path, + file_info.file_info.file_handle.tell())) + raise + + if file_info.file_position == 0: if (len(lines_to_publish) > 0 and lines_to_publish[0].startswith("Ray worker pid: ")): file_info.worker_pid = int( @@ -195,7 +199,7 @@ def check_log_files_and_publish_updates(self): # Record the current position in the file. file_info.file_position = file_info.file_handle.tell() - if len(lines_to_publish) > 0 and is_worker: + if len(lines_to_publish) > 0: self.redis_client.publish( ray.gcs_utils.LOG_FILE_CHANNEL, json.dumps({
log monitor report UnicodeDecodeError <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: centos 7 - **Ray installed from (source or binary)**: binay - **Ray version**: 0.7.2 0.8 - **Python version**: python 3.7 - **Exact command to reproduce**: <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> Now the `log_monitor` monitor all the files under logs. This could be causing the following errors when we read those log file with `VIM` because of the `.*.swap` file will be created. ```Traceback (most recent call last): File "/home/xianyang/anaconda3/lib/python3.7/site-packages/ray/log_monitor.py", line 278, in <module> raise e File "/home/xianyang/anaconda3/lib/python3.7/site-packages/ray/log_monitor.py", line 268, in <module> log_monitor.run() File "/home/xianyang/anaconda3/lib/python3.7/site-packages/ray/log_monitor.py", line 219, in run anything_published = self.check_log_files_and_publish_updates() File "/home/xianyang/anaconda3/lib/python3.7/site-packages/ray/log_monitor.py", line 175, in check_log_files_and_publish_updates next_line = file_info.file_handle.readline() File "/home/xianyang/anaconda3/lib/python3.7/codecs.py", line 322, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0xca in position 21: invalid continuation byte ``` ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
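A small standalone sketch of the failure mode (standard library only): opening a binary file such as an editor swap file with a UTF-8 text handle raises the same error as the traceback above.

```python
import tempfile

# Simulate a non-UTF-8 file (e.g. an editor swap file) inside the logs dir.
with tempfile.NamedTemporaryFile(suffix=".swp", delete=False) as f:
    f.write(b"\xca\xfe\xba\xbe")
    path = f.name

try:
    with open(path, "r", encoding="utf-8") as f:
        f.readline()
except UnicodeDecodeError as e:
    print("UnicodeDecodeError:", e)
```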
2019-07-18T06:47:20
ray-project/ray
5,287
ray-project__ray-5287
[ "5280" ]
6f737e6a500dc9f500d4cf7ba7b31f979922a18b
diff --git a/python/ray/tune/trial.py b/python/ray/tune/trial.py --- a/python/ray/tune/trial.py +++ b/python/ray/tune/trial.py @@ -30,7 +30,7 @@ from ray.utils import binary_to_hex, hex_to_binary DEBUG_PRINT_INTERVAL = 5 -MAX_LEN_IDENTIFIER = 130 +MAX_LEN_IDENTIFIER = int(os.environ.get("MAX_LEN_IDENTIFIER", 130)) logger = logging.getLogger(__name__)
[Tune] The logdir string of Trial is always truncated For now, the logdir string of a trial is created by `Trial.create_logdir`: https://github.com/ray-project/ray/blob/6f737e6a500dc9f500d4cf7ba7b31f979922a18b/python/ray/tune/trial.py#L373-L389 The `identifier` is always be truncated to a length of `MAX_LEN_IDENTIFIER`. This should be configurable since the max length of file names could be 256 in some systems. @richardliaw
Can you make a PR? Maybe just set it to an environment variable? Yeah, I'll try.
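With the patch above, the limit can be raised via the environment. A sketch (the variable must be set before `ray.tune.trial` is imported, since the module reads it at import time):

```python
import os

# Set before importing ray.tune; the module reads the value at import time.
os.environ["MAX_LEN_IDENTIFIER"] = "200"

from ray.tune.trial import MAX_LEN_IDENTIFIER  # noqa: E402

print(MAX_LEN_IDENTIFIER)  # -> 200
```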
2019-07-26T16:24:37
ray-project/ray
5,295
ray-project__ray-5295
[ "5294" ]
a62c5f40f67785439a143c6151f8dc8599351e94
diff --git a/python/ray/tune/logger.py b/python/ray/tune/logger.py --- a/python/ray/tune/logger.py +++ b/python/ray/tune/logger.py @@ -13,6 +13,7 @@ import numpy as np import ray.cloudpickle as cloudpickle +from ray.tune.util import flatten_dict from ray.tune.syncer import get_log_syncer from ray.tune.result import (NODE_IP, TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL, EXPR_PARAM_FILE, @@ -107,19 +108,15 @@ def update_config(self, config): def to_tf_values(result, path): - values = [] - for attr, value in result.items(): - if value is not None: - if use_tf150_api: - type_list = [int, float, np.float32, np.float64, np.int32] - else: - type_list = [int, float] - if type(value) in type_list: - values.append( - tf.Summary.Value( - tag="/".join(path + [attr]), simple_value=value)) - elif type(value) is dict: - values.extend(to_tf_values(value, path + [attr])) + if use_tf150_api: + type_list = [int, float, np.float32, np.float64, np.int32] + else: + type_list = [int, float] + flat_result = flatten_dict(result, delimiter="/") + values = [ + tf.Summary.Value(tag="/".join(path + [attr]), simple_value=value) + for attr, value in flat_result.items() if type(value) in type_list + ] return values @@ -175,6 +172,10 @@ def _init(self): self._csv_out = None def on_result(self, result): + tmp = result.copy() + if "config" in tmp: + del tmp["config"] + result = flatten_dict(tmp, delimiter="/") if self._csv_out is None: self._csv_out = csv.DictWriter(self._file, result.keys()) if not self._continuing: @@ -182,6 +183,7 @@ def on_result(self, result): self._csv_out.writerow( {k: v for k, v in result.items() if k in self._csv_out.fieldnames}) + self._file.flush() def flush(self): self._file.flush() diff --git a/python/ray/tune/util.py b/python/ray/tune/util.py --- a/python/ray/tune/util.py +++ b/python/ray/tune/util.py @@ -180,14 +180,15 @@ def deep_update(original, new_dict, new_keys_allowed, whitelist): return original -def flatten_dict(dt): +def flatten_dict(dt, delimiter=":"): + dt = copy.deepcopy(dt) while any(isinstance(v, dict) for v in dt.values()): remove = [] add = {} for key, value in dt.items(): if isinstance(value, dict): for subkey, v in value.items(): - add[":".join([key, subkey])] = v + add[delimiter.join([key, subkey])] = v remove.append(key) dt.update(add) for k in remove:
[tune] support nested results for CSVLogger `Trainable._train` may return a nested dictionary, e.g., ``` { "train": {"loss": 1.0, "acc": 0.6, "f1": 0.5}, "test": {"acc": 0.6, "f1": 0.5} } ``` Currently, `CSVLogger` regards nested dictionaries as strings and writes them directly into the file. It would be convenient for follow-up analysis to flatten the nested dictionaries, as the function [to_tf_values](https://github.com/ray-project/ray/blob/a62c5f40f67785439a143c6151f8dc8599351e94/python/ray/tune/logger.py#L109-L123) does. Then, the header would be something like `train/loss,train/acc,train/f1,test/acc,test/f1`.
Oh that's a good idea! Would you be interested in this patch? There's a `flatten_dict` function in `tune/utils.py`, and putting that in the CSVLogger would only be a 5 line change. If not, I could take it later.
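A small sketch of the intended flattening (a simplified stand-in for Tune's `flatten_dict`, using a "/" delimiter):

```python
def flatten(d, delimiter="/"):
    # Recursively turn nested dicts into single-level "outer/inner" keys.
    out = {}
    for key, value in d.items():
        if isinstance(value, dict):
            for subkey, subvalue in flatten(value, delimiter).items():
                out[delimiter.join([key, subkey])] = subvalue
        else:
            out[key] = value
    return out

result = {
    "train": {"loss": 1.0, "acc": 0.6, "f1": 0.5},
    "test": {"acc": 0.6, "f1": 0.5},
}
print(flatten(result))
# {'train/loss': 1.0, 'train/acc': 0.6, 'train/f1': 0.5,
#  'test/acc': 0.6, 'test/f1': 0.5}
```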
2019-07-27T17:26:30
ray-project/ray
5,354
ray-project__ray-5354
[ "5337" ]
13fb9fe3db81eacfeb03f2fe647e343bd0929d38
diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py --- a/python/ray/scripts/scripts.py +++ b/python/ray/scripts/scripts.py @@ -397,8 +397,8 @@ def stop(): ] for process in processes_to_kill: - command = ("kill $(ps aux | grep '" + process + "' | grep -v grep | " + - "awk '{ print $2 }') 2> /dev/null") + command = ("kill -9 $(ps aux | grep '" + process + + "' | grep -v grep | " + "awk '{ print $2 }') 2> /dev/null") subprocess.call([command], shell=True) # Find the PID of the jupyter process and kill it.
`ray.get` on cluster mode sometimes does not return ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux - **Ray installed from (source or binary)**: wheels - **Ray version**: 0.8.0.dev3 - **Python version**: 3.6 - **Exact command to reproduce**: With 2 nodes: ```ipython3 In [1]: import ray In [2]: ray.init(redis_address="localhost:6379") 2019-08-01 02:57:18,898 WARNING worker.py:1372 -- WARNING: Not updating worker name since `setproctitle` is not installed. Install this with `pip install setproctitle` (or ray[debug]) to enable monitoring of worker processes. Out[2]: {'node_ip_address': '172.31.95.217', 'redis_address': '172.31.95.217:6379', 'object_store_address': '/tmp/ray/session_2019-08-01_02-55-05_728763_2867/sockets/plasma_store', 'raylet_socket_name': '/tmp/ray/session_2019-08-01_02-55-05_728763_2867/sockets/raylet', 'webui_url': None, 'session_dir': '/tmp/ray/session_2019-08-01_02-55-05_728763_2867'} In [3]: @ray.remote ...: def test(): ...: print("hello!") ...: return 123 ...: ...: In [4]: ray.get(test.remote()) (pid=2896) hello! Out[4]: 123 In [5]: ray.get(test.remote()) (pid=2833, ip=172.31.89.59) hello! ``` Sometimes, `ray.get` does not return. ```yaml # An unique identifier for the head node and workers of this cluster. cluster_name: sgd-pytorch # The maximum number of workers nodes to launch in addition to the head # node. This takes precedence over min_workers. min_workers default to 0. min_workers: 1 initial_workers: 1 max_workers: 1 target_utilization_fraction: 0.9 # If a node is idle for this many minutes, it will be removed. idle_timeout_minutes: 20 provider: type: aws region: us-east-1 availability_zone: us-east-1f auth: ssh_user: ubuntu head_node: InstanceType: c5.xlarge ImageId: ami-0d96d570269578cd7 worker_nodes: InstanceType: c5.xlarge ImageId: ami-0d96d570269578cd7 setup_commands: - pip install -U https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-0.8.0.dev3-cp36-cp36m-manylinux1_x86_64.whl file_mounts: {} # Custom commands that will be run on the head node after common setup. head_setup_commands: [] # Custom commands that will be run on worker nodes after common setup. worker_setup_commands: [] # # Command to start ray on the head node. You don't need to change this. head_start_ray_commands: - ray stop - ray start --head --redis-port=6379 --object-manager-port=8076 --autoscaling-config=~/ray_bootstrap_config.yaml --object-store-memory=1000000000 # Command to start ray on worker nodes. You don't need to change this. worker_start_ray_commands: - ray stop - ray start --redis-address=$RAY_HEAD_IP:6379 --object-manager-port=8076 --object-store-memory=1000000000 ```
you can run `ray up example-sgd.yaml --restart-only` over again and `ray exec example-sgd.yaml 'ipython'` for fast testing. I'm not really sure what's happening here, but it seems to have something to do with the gRPC change in the object manager. With `DEBUG` messages on, I see this on the driver's node: ``` I0801 05:08:49.677681 7593 object_manager.cc:202] Sending pull request from 2b40211fa4822dae803472cbffaa1964743f499c to 7ff8904fdde34990457c041b537723df072ed05b of object 9000448a466ea5adf7ea7c12b3b9828c01000000 I0801 05:08:49.677784 7593 object_manager.cc:797] Get rpc client, address: 172.31.83.159, port: 8076, local port: 8076 ``` The remote node receives the `PullRequest` message and seems to send the object (which is already in the remote node's object store): ``` I0801 05:08:49.681157 6323 object_manager.cc:720] Received pull request from client 2b40211fa4822dae803472cbffaa1964743f499c for object [9000448a466ea5adf7ea7c12b3b9828c01000000]. I0801 05:08:49.681197 6312 object_manager.cc:329] Push on 7ff8904fdde34990457c041b537723df072ed05b to 2b40211fa4822dae803472cbffaa1964743f499c of object 9000448a466ea5adf7ea7c12b3b9828c01000000 I0801 05:08:49.681258 6312 object_manager.cc:797] Get rpc client, address: 172.31.86.139, port: 8076, local port: 8076 I0801 05:08:49.681265 6312 object_manager.cc:395] Sending object chunks of 9000448a466ea5adf7ea7c12b3b9828c01000000 to client 2b40211fa4822dae803472cbffaa1964743f499c, number of chunks: 1, total data size: 448 I0801 05:08:49.681928 6312 object_manager.cc:284] HandleSendFinished on 7ff8904fdde34990457c041b537723df072ed05b to 2b40211fa4822dae803472cbffaa1964743f499c of object 9000448a466ea5adf7ea7c12b3b9828c01000000 chunk 0, status: OK ``` But then the driver's node doesn't seem to receive the pushed object. I don't see any messages with `ReceiveObjectChunk` in them, and the driver's node eventually tries to send the `PullRequest` again. @raulchen @jiangzihao2009 I traced down the real issue. So this bug only occurs after https://github.com/ray-project/ray/pull/5120. However, the real issue seems to be somehow `ray stop` is unable to stop the `default_worker` and `raylet` processes. I'm suspecting that `kill` failed for some reason; or gRPC thread doesn't respond to signals correctly. Checking... Steps to reproduce the ray stop bug https://gist.github.com/simon-mo/7194824e161f336d699d9a0bcb65c13e
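A quick way to check the symptom described here (a sketch, not part of Ray): after `ray stop`, look for raylet or worker processes that survived the kill.

```python
import subprocess

# Any surviving raylet / default_worker processes after `ray stop` indicate
# the plain kill did not take effect, which the `kill -9` change addresses.
for name in ["raylet", "default_worker.py"]:
    out = subprocess.run(
        "ps aux | grep '{}' | grep -v grep".format(name),
        shell=True,
        stdout=subprocess.PIPE).stdout.decode()
    if out.strip():
        print("still running:", name)
        print(out)
```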
2019-08-02T07:50:51
ray-project/ray
5,402
ray-project__ray-5402
[ "5335" ]
17c6835c3f3a0a604b7807a54d7f0b77e51e5d90
diff --git a/python/ray/tune/schedulers/median_stopping_rule.py b/python/ray/tune/schedulers/median_stopping_rule.py --- a/python/ray/tune/schedulers/median_stopping_rule.py +++ b/python/ray/tune/schedulers/median_stopping_rule.py @@ -27,13 +27,18 @@ class MedianStoppingRule(FIFOScheduler): mode (str): One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. grace_period (float): Only stop trials at least this old in time. + The mean will only be computed from this time onwards. The units + are the same as the attribute named by `time_attr`. + min_samples_required (int): Minimum number of trials to compute median + over. + min_time_slice (float): Each trial runs at least this long before + yielding (assuming it isn't stopped). Note: trials ONLY yield if + there are not enough samples to evaluate performance for the + current result AND there are other trials waiting to run. The units are the same as the attribute named by `time_attr`. - min_samples_required (int): Min samples to compute median over. hard_stop (bool): If False, pauses trials instead of stopping them. When all other trials are complete, paused trials will be resumed and allowed to run FIFO. - verbose (bool): If True, will output the median and best result each - time a trial reports. Defaults to True. """ def __init__(self, @@ -43,10 +48,9 @@ def __init__(self, mode="max", grace_period=60.0, min_samples_required=3, - hard_stop=True, - verbose=True): + min_time_slice=0, + hard_stop=True): assert mode in ["min", "max"], "`mode` must be 'min' or 'max'!" - if reward_attr is not None: mode = "max" metric = reward_attr @@ -54,21 +58,20 @@ def __init__(self, "`reward_attr` is deprecated and will be removed in a future " "version of Tune. " "Setting `metric={}` and `mode=max`.".format(reward_attr)) - FIFOScheduler.__init__(self) self._stopped_trials = set() - self._completed_trials = set() - self._results = collections.defaultdict(list) self._grace_period = grace_period self._min_samples_required = min_samples_required + self._min_time_slice = min_time_slice self._metric = metric - if mode == "max": - self._metric_op = 1. - elif mode == "min": - self._metric_op = -1. + assert mode in {"min", "max"}, "`mode` must be 'min' or 'max'." + self._worst = float("-inf") if mode == "max" else float("inf") + self._compare_op = max if mode == "max" else min self._time_attr = time_attr self._hard_stop = hard_stop - self._verbose = verbose + self._trial_state = {} + self._last_pause = collections.defaultdict(lambda: float("-inf")) + self._results = collections.defaultdict(list) def on_trial_result(self, trial_runner, trial, result): """Callback for early stopping. @@ -82,19 +85,38 @@ def on_trial_result(self, trial_runner, trial, result): if trial in self._stopped_trials: assert not self._hard_stop - return TrialScheduler.CONTINUE # fall back to FIFO + # Fall back to FIFO + return TrialScheduler.CONTINUE time = result[self._time_attr] self._results[trial].append(result) - median_result = self._get_median_result(time) + + if time < self._grace_period: + return TrialScheduler.CONTINUE + + trials = self._trials_beyond_time(time) + trials.remove(trial) + + if len(trials) < self._min_samples_required: + action = self._on_insufficient_samples(trial_runner, trial, time) + if action == TrialScheduler.PAUSE: + self._last_pause[trial] = time + action_str = "Yielding time to other trials." + else: + action_str = "Continuing anyways." 
+ logger.debug( + "MedianStoppingRule: insufficient samples={} to evaluate " + "trial {} at t={}. {}".format( + len(trials), trial.trial_id, time, action_str)) + return action + + median_result = self._median_result(trials, time) best_result = self._best_result(trial) - if self._verbose: - logger.info("Trial {} best res={} vs median res={} at t={}".format( - trial, best_result, median_result, time)) - if best_result < median_result and time > self._grace_period: - if self._verbose: - logger.info("MedianStoppingRule: " - "early stopping {}".format(trial)) + logger.debug("Trial {} best res={} vs median res={} at t={}".format( + trial, best_result, median_result, time)) + + if self._compare_op(median_result, best_result) != best_result: + logger.debug("MedianStoppingRule: early stopping {}".format(trial)) self._stopped_trials.add(trial) if self._hard_stop: return TrialScheduler.STOP @@ -105,33 +127,39 @@ def on_trial_result(self, trial_runner, trial, result): def on_trial_complete(self, trial_runner, trial, result): self._results[trial].append(result) - self._completed_trials.add(trial) - - def on_trial_remove(self, trial_runner, trial): - """Marks trial as completed if it is paused and has previously ran.""" - if trial.status is Trial.PAUSED and trial in self._results: - self._completed_trials.add(trial) def debug_string(self): return "Using MedianStoppingRule: num_stopped={}.".format( len(self._stopped_trials)) - def _get_median_result(self, time): - scores = [] - for trial in self._completed_trials: - scores.append(self._running_result(trial, time)) - if len(scores) >= self._min_samples_required: - return np.median(scores) - else: - return float("-inf") - - def _running_result(self, trial, t_max=float("inf")): + def _on_insufficient_samples(self, trial_runner, trial, time): + pause = time - self._last_pause[trial] > self._min_time_slice + pause = pause and [ + t for t in trial_runner.get_trials() + if t.status in (Trial.PENDING, Trial.PAUSED) + ] + return TrialScheduler.PAUSE if pause else TrialScheduler.CONTINUE + + def _trials_beyond_time(self, time): + trials = [ + trial for trial in self._results + if self._results[trial][-1][self._time_attr] >= time + ] + return trials + + def _median_result(self, trials, time): + return np.median([self._running_mean(trial, time) for trial in trials]) + + def _running_mean(self, trial, time): results = self._results[trial] # TODO(ekl) we could do interpolation to be more precise, but for now # assume len(results) is large and the time diffs are roughly equal - return self._metric_op * np.mean( - [r[self._metric] for r in results if r[self._time_attr] <= t_max]) + scoped_results = [ + r for r in results + if self._grace_period <= r[self._time_attr] <= time + ] + return np.mean([r[self._metric] for r in scoped_results]) def _best_result(self, trial): results = self._results[trial] - return max(self._metric_op * r[self._metric] for r in results) + return self._compare_op([r[self._metric] for r in results])
diff --git a/python/ray/tune/tests/test_trial_scheduler.py b/python/ray/tune/tests/test_trial_scheduler.py --- a/python/ray/tune/tests/test_trial_scheduler.py +++ b/python/ray/tune/tests/test_trial_scheduler.py @@ -36,6 +36,12 @@ def result(t, rew): time_total_s=t, episode_reward_mean=rew, training_iteration=int(t)) +def mock_trial_runner(trials=None): + trial_runner = MagicMock() + trial_runner.get_trials.return_value = trials or [] + return trial_runner + + class EarlyStoppingSuite(unittest.TestCase): def setUp(self): ray.init() @@ -47,93 +53,105 @@ def tearDown(self): def basicSetup(self, rule): t1 = Trial("PPO") # mean is 450, max 900, t_max=10 t2 = Trial("PPO") # mean is 450, max 450, t_max=5 + runner = mock_trial_runner() for i in range(10): + r1 = result(i, i * 100) + print("basicSetup:", i) self.assertEqual( - rule.on_trial_result(None, t1, result(i, i * 100)), - TrialScheduler.CONTINUE) + rule.on_trial_result(runner, t1, r1), TrialScheduler.CONTINUE) for i in range(5): + r2 = result(i, 450) self.assertEqual( - rule.on_trial_result(None, t2, result(i, 450)), - TrialScheduler.CONTINUE) + rule.on_trial_result(runner, t2, r2), TrialScheduler.CONTINUE) return t1, t2 def testMedianStoppingConstantPerf(self): rule = MedianStoppingRule(grace_period=0, min_samples_required=1) t1, t2 = self.basicSetup(rule) - rule.on_trial_complete(None, t1, result(10, 1000)) + runner = mock_trial_runner() + rule.on_trial_complete(runner, t1, result(10, 1000)) self.assertEqual( - rule.on_trial_result(None, t2, result(5, 450)), + rule.on_trial_result(runner, t2, result(5, 450)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t2, result(6, 0)), + rule.on_trial_result(runner, t2, result(6, 0)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t2, result(10, 450)), + rule.on_trial_result(runner, t2, result(10, 450)), TrialScheduler.STOP) def testMedianStoppingOnCompleteOnly(self): rule = MedianStoppingRule(grace_period=0, min_samples_required=1) t1, t2 = self.basicSetup(rule) + runner = mock_trial_runner() self.assertEqual( - rule.on_trial_result(None, t2, result(100, 0)), + rule.on_trial_result(runner, t2, result(100, 0)), TrialScheduler.CONTINUE) - rule.on_trial_complete(None, t1, result(10, 1000)) + rule.on_trial_complete(runner, t1, result(101, 1000)) self.assertEqual( - rule.on_trial_result(None, t2, result(101, 0)), + rule.on_trial_result(runner, t2, result(101, 0)), TrialScheduler.STOP) def testMedianStoppingGracePeriod(self): rule = MedianStoppingRule(grace_period=2.5, min_samples_required=1) t1, t2 = self.basicSetup(rule) - rule.on_trial_complete(None, t1, result(10, 1000)) - rule.on_trial_complete(None, t2, result(10, 1000)) + runner = mock_trial_runner() + rule.on_trial_complete(runner, t1, result(10, 1000)) + rule.on_trial_complete(runner, t2, result(10, 1000)) t3 = Trial("PPO") self.assertEqual( - rule.on_trial_result(None, t3, result(1, 10)), + rule.on_trial_result(runner, t3, result(1, 10)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t3, result(2, 10)), + rule.on_trial_result(runner, t3, result(2, 10)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t3, result(3, 10)), TrialScheduler.STOP) + rule.on_trial_result(runner, t3, result(3, 10)), + TrialScheduler.STOP) def testMedianStoppingMinSamples(self): rule = MedianStoppingRule(grace_period=0, min_samples_required=2) t1, t2 = self.basicSetup(rule) - rule.on_trial_complete(None, t1, result(10, 1000)) + runner = mock_trial_runner() + 
rule.on_trial_complete(runner, t1, result(10, 1000)) t3 = Trial("PPO") + # Insufficient samples to evaluate t3 self.assertEqual( - rule.on_trial_result(None, t3, result(3, 10)), + rule.on_trial_result(runner, t3, result(5, 10)), TrialScheduler.CONTINUE) - rule.on_trial_complete(None, t2, result(10, 1000)) + rule.on_trial_complete(runner, t2, result(5, 1000)) + # Sufficient samples to evaluate t3 self.assertEqual( - rule.on_trial_result(None, t3, result(3, 10)), TrialScheduler.STOP) + rule.on_trial_result(runner, t3, result(5, 10)), + TrialScheduler.STOP) def testMedianStoppingUsesMedian(self): rule = MedianStoppingRule(grace_period=0, min_samples_required=1) t1, t2 = self.basicSetup(rule) - rule.on_trial_complete(None, t1, result(10, 1000)) - rule.on_trial_complete(None, t2, result(10, 1000)) + runner = mock_trial_runner() + rule.on_trial_complete(runner, t1, result(10, 1000)) + rule.on_trial_complete(runner, t2, result(10, 1000)) t3 = Trial("PPO") self.assertEqual( - rule.on_trial_result(None, t3, result(1, 260)), + rule.on_trial_result(runner, t3, result(1, 260)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t3, result(2, 260)), + rule.on_trial_result(runner, t3, result(2, 260)), TrialScheduler.STOP) def testMedianStoppingSoftStop(self): rule = MedianStoppingRule( grace_period=0, min_samples_required=1, hard_stop=False) t1, t2 = self.basicSetup(rule) - rule.on_trial_complete(None, t1, result(10, 1000)) - rule.on_trial_complete(None, t2, result(10, 1000)) + runner = mock_trial_runner() + rule.on_trial_complete(runner, t1, result(10, 1000)) + rule.on_trial_complete(runner, t2, result(10, 1000)) t3 = Trial("PPO") self.assertEqual( - rule.on_trial_result(None, t3, result(1, 260)), + rule.on_trial_result(runner, t3, result(1, 260)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t3, result(2, 260)), + rule.on_trial_result(runner, t3, result(2, 260)), TrialScheduler.PAUSE) def _test_metrics(self, result_func, metric, mode): @@ -145,20 +163,21 @@ def _test_metrics(self, result_func, metric, mode): mode=mode) t1 = Trial("PPO") # mean is 450, max 900, t_max=10 t2 = Trial("PPO") # mean is 450, max 450, t_max=5 + runner = mock_trial_runner() for i in range(10): self.assertEqual( - rule.on_trial_result(None, t1, result_func(i, i * 100)), + rule.on_trial_result(runner, t1, result_func(i, i * 100)), TrialScheduler.CONTINUE) for i in range(5): self.assertEqual( - rule.on_trial_result(None, t2, result_func(i, 450)), + rule.on_trial_result(runner, t2, result_func(i, 450)), TrialScheduler.CONTINUE) - rule.on_trial_complete(None, t1, result_func(10, 1000)) + rule.on_trial_complete(runner, t1, result_func(10, 1000)) self.assertEqual( - rule.on_trial_result(None, t2, result_func(5, 450)), + rule.on_trial_result(runner, t2, result_func(5, 450)), TrialScheduler.CONTINUE) self.assertEqual( - rule.on_trial_result(None, t2, result_func(6, 0)), + rule.on_trial_result(runner, t2, result_func(6, 0)), TrialScheduler.CONTINUE) def testAlternateMetrics(self):
[tune] MedianStoppingRule dependent only on complete trials <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: RHEL 7.6 - **Ray installed from (source or binary)**: source - **Ray version**: 0.8 - **Python version**: 3.6.8 - **Exact command to reproduce**: <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> I'm looking to implement some of the different schedulers.. and the MedianStoppingRule by the docs seems like it should implement an early stopping rule... but when running it, the median_result seems to be dependent on values from a list of trials which only get added "on_trial_complete"... which doesn't really make sense to me, as shouldn't the median be updating all along with iteration results? ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
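For context, a minimal usage sketch of the scheduler under discussion; the argument values are illustrative and `my_trainable` is a placeholder:

```python
from ray.tune.schedulers import MedianStoppingRule

scheduler = MedianStoppingRule(
    time_attr="time_total_s",
    metric="episode_reward_mean",
    mode="max",
    grace_period=60.0,
    min_samples_required=3)

# from ray import tune
# tune.run(my_trainable, num_samples=8, scheduler=scheduler)
```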
Forget it... i I understand what it's doing... I actually think the version I'm thinking of might be useful as well, more like PBT updating in parallel and dropping trials that are below a median value... in this case I guess it's more of a sequential process of running trials, rather than in parallel. I think this is actually supposed to be addressed in https://github.com/ray-project/ray/pull/4119 but we never merged it because we haven't gotten around to fixing the tests. You're welcome to try that one out though! That's excellent Richard, thank you... I created a new class and incorporated those changes. I also added to it the PBT style pausing/selecting of trials to allow all trials to stay in step, and the combination of those two gives me exactly what I'm looking for... I also added an extra argument to control the evaluation interval, similar to "perturbation_interval" as well as adding an optional parameter to take from the end of the list of results, thus the median calculated can be from a list of means constructed of tail_length... rather than the entire training history to reflect more recent performance. A question I have is, when running PBT or MedianStopping using APEX DQN... when a trial is PAUSED... what happens to the replay buffer? I'm trying to understand what the limitations might be regarding running multiple trials in a distributed fashion, which I have a decent idea when just outright training, but not sure if one is to PAUSE & CONTINUE along the way with this type of scheduling. here is what my current version looks like... it seems to be doing what I expected... class MedianTrialState(object): def __init__(self, trial): self.orig_tag = trial.experiment_tag self.last_eval_time = 0 def __repr__(self): return str((self.last_eval_time,)) class MedianStoppingResult(FIFOScheduler): def __init__(self, time_attr="time_total_s", reward_attr=None, metric="episode_reward_mean", mode="max", grace_period=60.0, eval_interval=600.0, min_samples_required=3, hard_stop=True, verbose=True, tail_length= None): assert mode in ["min", "max"], "`mode` must be 'min' or 'max'!" if reward_attr is not None: mode = "max" metric = reward_attr logger.warning( "`reward_attr` is deprecated and will be removed in a future " "version of Tune. " "Setting `metric={}` and `mode=max`.".format(reward_attr)) FIFOScheduler.__init__(self) self._stopped_trials = set() self._completed_trials = set() self._grace_period = grace_period self._eval_interval = eval_interval self._min_samples_required = min_samples_required self._metric = metric if mode == "max": self._metric_op = 1. elif mode == "min": self._metric_op = -1. self._time_attr = time_attr self._hard_stop = hard_stop self._verbose = verbose self._tail_length = tail_length self._trial_state = {} self._results = collections.defaultdict(list) @property def _trials_beyond_grace_period(self): trials = [ trial for trial in self._results if (trial.last_result.get( self._time_attr, -float('inf')) > self._grace_period) ] return trials def on_trial_add(self, trial_runner, trial): self._trial_state[trial] = MedianTrialState(trial) def on_trial_result(self, trial_runner, trial, result): """Callback for early stopping. This stopping rule stops a running trial if the trial's best objective value by step `t` is strictly worse than the median of the running averages of all completed trials' objectives reported up to step `t`. 
""" if trial in self._stopped_trials: assert not self._hard_stop return TrialScheduler.CONTINUE # fall back to FIFO state = self._trial_state[trial] time = result[self._time_attr] self._results[trial].append(result) if time - state.last_eval_time < self._eval_interval: return TrialScheduler.CONTINUE # avoid overhead state.last_eval_time = time median_result = self._get_median_result(time) best_result = self._best_result(trial) if self._verbose: logger.info("Trial {} best res={} vs median res={} at t={}".format( trial, best_result, median_result, time)) if best_result < median_result and time > self._grace_period: if self._verbose: logger.info("MedianStoppingResult: " "early stopping {}".format(trial)) self._stopped_trials.add(trial) if self._hard_stop: return TrialScheduler.STOP else: return TrialScheduler.PAUSE else: for _trial in trial_runner.get_trials(): if _trial.status in [Trial.PENDING, Trial.PAUSED]: return TrialScheduler.PAUSE # yield time to other trials return TrialScheduler.CONTINUE def on_trial_complete(self, trial_runner, trial, result): self._results[trial].append(result) self._completed_trials.add(trial) def on_trial_remove(self, trial_runner, trial): """Marks trial as completed if it is paused and has previously ran.""" if trial.status is Trial.PAUSED and trial in self._results: self._completed_trials.add(trial) def debug_string(self): return "Using MedianStoppingResult: num_stopped={}.".format( len(self._stopped_trials)) def _get_median_result(self, time): scores = [] for trial in self._trials_beyond_grace_period: scores.append(self._running_result(trial, time)) if len(scores) >= self._min_samples_required: return np.median(scores) else: return float("-inf") def _running_result(self, trial, t_max=float("inf")): results = self._results[trial] if self._tail_length is not None: results = results[-self._tail_length:] return self._metric_op * np.mean( [flatten_dict(r)[self._metric] for r in results \ if flatten_dict(r)[self._time_attr] <= t_max]) def _best_result(self, trial): results = self._results[trial] if self._tail_length is not None: results = results[-self._tail_length:] return max([self._metric_op * flatten_dict(r)[self._metric] for r in results]) def choose_trial_to_run(self, trial_runner): candidates = [] for trial in trial_runner.get_trials(): if trial.status in [Trial.PENDING, Trial.PAUSED] and \ trial_runner.has_resources(trial.resources): candidates.append(trial) candidates.sort( key=lambda trial: self._trial_state[trial].last_eval_time) return candidates[0] if candidates else None I think the entire replay buffer gets serialized. On Thu, Aug 1, 2019 at 11:06 AM waldroje <[email protected]> wrote: > That's excellent Richard, thank you... I created a new class and > incorporated those changes. I also added to it the PBT style > pausing/selecting of trials to allow all trials to stay in step, and the > combination of those two gives me exactly what I'm looking for... I also > added an extra argument to control the evaluation interval, similar to > "perturbation_interval" as well as adding an optional parameter to take > from the end of the list of results, thus the median calculated can be from > a list of means constructed of tail_length... rather than the entire > training history to reflect more recent performance. > > A question I have is, when running PBT or MedianStopping using APEX DQN... > when a trial is PAUSED... what happens to the replay buffer? 
I'm trying to > understand what the limitations might be regarding running multiple trials > in a distributed fashion, which I have a decent idea when just outright > training, but not sure if one is to PAUSE & CONTINUE along the way with > this type of scheduling. > > here is what my current version looks like... it seems to be doing what I > expected... > > class MedianTrialState(object): > > def __init__(self, trial): > self.orig_tag = trial.experiment_tag > self.last_eval_time = 0 > > def __repr__(self): > return str((self.last_eval_time,)) > > class MedianStoppingResult(FIFOScheduler): > > def __init__(self, > time_attr="time_total_s", > reward_attr=None, > metric="episode_reward_mean", > mode="max", > grace_period=60.0, > eval_interval=600.0, > min_samples_required=3, > hard_stop=True, > verbose=True, > tail_length= None): > assert mode in ["min", "max"], "`mode` must be 'min' or 'max'!" > > if reward_attr is not None: > mode = "max" > metric = reward_attr > logger.warning( > "`reward_attr` is deprecated and will be removed in a future " > "version of Tune. " > "Setting `metric={}` and `mode=max`.".format(reward_attr)) > > FIFOScheduler.__init__(self) > self._stopped_trials = set() > self._completed_trials = set() > self._grace_period = grace_period > self._eval_interval = eval_interval > self._min_samples_required = min_samples_required > self._metric = metric > if mode == "max": > self._metric_op = 1. > elif mode == "min": > self._metric_op = -1. > self._time_attr = time_attr > self._hard_stop = hard_stop > self._verbose = verbose > self._tail_length = tail_length > > self._trial_state = {} > self._results = collections.defaultdict(list) > > @property > def _trials_beyond_grace_period(self): > trials = [ > trial for trial in self._results if (trial.last_result.get( > self._time_attr, -float('inf')) > self._grace_period) > ] > return trials > > def on_trial_add(self, trial_runner, trial): > self._trial_state[trial] = MedianTrialState(trial) > > def on_trial_result(self, trial_runner, trial, result): > """Callback for early stopping. > > This stopping rule stops a running trial if the trial's best objective > value by step `t` is strictly worse than the median of the running > averages of all completed trials' objectives reported up to step `t`. 
> """ > > if trial in self._stopped_trials: > assert not self._hard_stop > return TrialScheduler.CONTINUE # fall back to FIFO > > state = self._trial_state[trial] > time = result[self._time_attr] > self._results[trial].append(result) > > if time - state.last_eval_time < self._eval_interval: > return TrialScheduler.CONTINUE # avoid overhead > > state.last_eval_time = time > median_result = self._get_median_result(time) > best_result = self._best_result(trial) > if self._verbose: > logger.info("Trial {} best res={} vs median res={} at t={}".format( > trial, best_result, median_result, time)) > > if best_result < median_result and time > self._grace_period: > if self._verbose: > logger.info("MedianStoppingResult: " > "early stopping {}".format(trial)) > self._stopped_trials.add(trial) > if self._hard_stop: > return TrialScheduler.STOP > else: > return TrialScheduler.PAUSE > else: > for _trial in trial_runner.get_trials(): > if _trial.status in [Trial.PENDING, Trial.PAUSED]: > return TrialScheduler.PAUSE # yield time to other trials > > return TrialScheduler.CONTINUE > > > > def on_trial_complete(self, trial_runner, trial, result): > self._results[trial].append(result) > self._completed_trials.add(trial) > > def on_trial_remove(self, trial_runner, trial): > """Marks trial as completed if it is paused and has previously ran.""" > if trial.status is Trial.PAUSED and trial in self._results: > self._completed_trials.add(trial) > > def debug_string(self): > return "Using MedianStoppingResult: num_stopped={}.".format( > len(self._stopped_trials)) > > def _get_median_result(self, time): > scores = [] > for trial in self._trials_beyond_grace_period: > scores.append(self._running_result(trial, time)) > if len(scores) >= self._min_samples_required: > return np.median(scores) > else: > return float("-inf") > > def _running_result(self, trial, t_max=float("inf")): > results = self._results[trial] > if self._tail_length is not None: > results = results[-self._tail_length:] > return self._metric_op * np.mean( > [flatten_dict(r)[self._metric] for r in results \ > if flatten_dict(r)[self._time_attr] <= t_max]) > > > def _best_result(self, trial): > results = self._results[trial] > if self._tail_length is not None: > results = results[-self._tail_length:] > return max([self._metric_op * flatten_dict(r)[self._metric] for r in results]) > > def choose_trial_to_run(self, trial_runner): > > candidates = [] > for trial in trial_runner.get_trials(): > if trial.status in [Trial.PENDING, Trial.PAUSED] and \ > trial_runner.has_resources(trial.resources): > candidates.append(trial) > candidates.sort( > key=lambda trial: self._trial_state[trial].last_eval_time) > return candidates[0] if candidates else None > > — > You are receiving this because you commented. > Reply to this email directly, view it on GitHub > <https://github.com/ray-project/ray/issues/5335?email_source=notifications&email_token=ABCRZZLH3VWP4RHEWQAOZQDQCMQ2HA5CNFSM4IILBDB2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD3LNVUQ#issuecomment-517397202>, > or mute the thread > <https://github.com/notifications/unsubscribe-auth/ABCRZZL4AUAEHEN6ILTQQO3QCMQ2HANCNFSM4IILBDBQ> > . > BTW, this change looks really neat - would you be interested in creating a PR with these changes? On Thu, Aug 1, 2019 at 12:42 PM Richard Liaw <[email protected]> wrote: > I think the entire replay buffer gets serialized. > > On Thu, Aug 1, 2019 at 11:06 AM waldroje <[email protected]> wrote: > >> That's excellent Richard, thank you... 
I created a new class and >> incorporated those changes. I also added to it the PBT style >> pausing/selecting of trials to allow all trials to stay in step, and the >> combination of those two gives me exactly what I'm looking for... I also >> added an extra argument to control the evaluation interval, similar to >> "perturbation_interval" as well as adding an optional parameter to take >> from the end of the list of results, thus the median calculated can be from >> a list of means constructed of tail_length... rather than the entire >> training history to reflect more recent performance. >> >> A question I have is, when running PBT or MedianStopping using APEX >> DQN... when a trial is PAUSED... what happens to the replay buffer? I'm >> trying to understand what the limitations might be regarding running >> multiple trials in a distributed fashion, which I have a decent idea when >> just outright training, but not sure if one is to PAUSE & CONTINUE along >> the way with this type of scheduling. >> >> here is what my current version looks like... it seems to be doing what I >> expected... >> >> class MedianTrialState(object): >> >> def __init__(self, trial): >> self.orig_tag = trial.experiment_tag >> self.last_eval_time = 0 >> >> def __repr__(self): >> return str((self.last_eval_time,)) >> >> class MedianStoppingResult(FIFOScheduler): >> >> def __init__(self, >> time_attr="time_total_s", >> reward_attr=None, >> metric="episode_reward_mean", >> mode="max", >> grace_period=60.0, >> eval_interval=600.0, >> min_samples_required=3, >> hard_stop=True, >> verbose=True, >> tail_length= None): >> assert mode in ["min", "max"], "`mode` must be 'min' or 'max'!" >> >> if reward_attr is not None: >> mode = "max" >> metric = reward_attr >> logger.warning( >> "`reward_attr` is deprecated and will be removed in a future " >> "version of Tune. " >> "Setting `metric={}` and `mode=max`.".format(reward_attr)) >> >> FIFOScheduler.__init__(self) >> self._stopped_trials = set() >> self._completed_trials = set() >> self._grace_period = grace_period >> self._eval_interval = eval_interval >> self._min_samples_required = min_samples_required >> self._metric = metric >> if mode == "max": >> self._metric_op = 1. >> elif mode == "min": >> self._metric_op = -1. >> self._time_attr = time_attr >> self._hard_stop = hard_stop >> self._verbose = verbose >> self._tail_length = tail_length >> >> self._trial_state = {} >> self._results = collections.defaultdict(list) >> >> @property >> def _trials_beyond_grace_period(self): >> trials = [ >> trial for trial in self._results if (trial.last_result.get( >> self._time_attr, -float('inf')) > self._grace_period) >> ] >> return trials >> >> def on_trial_add(self, trial_runner, trial): >> self._trial_state[trial] = MedianTrialState(trial) >> >> def on_trial_result(self, trial_runner, trial, result): >> """Callback for early stopping. >> >> This stopping rule stops a running trial if the trial's best objective >> value by step `t` is strictly worse than the median of the running >> averages of all completed trials' objectives reported up to step `t`. 
>> """ >> >> if trial in self._stopped_trials: >> assert not self._hard_stop >> return TrialScheduler.CONTINUE # fall back to FIFO >> >> state = self._trial_state[trial] >> time = result[self._time_attr] >> self._results[trial].append(result) >> >> if time - state.last_eval_time < self._eval_interval: >> return TrialScheduler.CONTINUE # avoid overhead >> >> state.last_eval_time = time >> median_result = self._get_median_result(time) >> best_result = self._best_result(trial) >> if self._verbose: >> logger.info("Trial {} best res={} vs median res={} at t={}".format( >> trial, best_result, median_result, time)) >> >> if best_result < median_result and time > self._grace_period: >> if self._verbose: >> logger.info("MedianStoppingResult: " >> "early stopping {}".format(trial)) >> self._stopped_trials.add(trial) >> if self._hard_stop: >> return TrialScheduler.STOP >> else: >> return TrialScheduler.PAUSE >> else: >> for _trial in trial_runner.get_trials(): >> if _trial.status in [Trial.PENDING, Trial.PAUSED]: >> return TrialScheduler.PAUSE # yield time to other trials >> >> return TrialScheduler.CONTINUE >> >> >> >> def on_trial_complete(self, trial_runner, trial, result): >> self._results[trial].append(result) >> self._completed_trials.add(trial) >> >> def on_trial_remove(self, trial_runner, trial): >> """Marks trial as completed if it is paused and has previously ran.""" >> if trial.status is Trial.PAUSED and trial in self._results: >> self._completed_trials.add(trial) >> >> def debug_string(self): >> return "Using MedianStoppingResult: num_stopped={}.".format( >> len(self._stopped_trials)) >> >> def _get_median_result(self, time): >> scores = [] >> for trial in self._trials_beyond_grace_period: >> scores.append(self._running_result(trial, time)) >> if len(scores) >= self._min_samples_required: >> return np.median(scores) >> else: >> return float("-inf") >> >> def _running_result(self, trial, t_max=float("inf")): >> results = self._results[trial] >> if self._tail_length is not None: >> results = results[-self._tail_length:] >> return self._metric_op * np.mean( >> [flatten_dict(r)[self._metric] for r in results \ >> if flatten_dict(r)[self._time_attr] <= t_max]) >> >> >> def _best_result(self, trial): >> results = self._results[trial] >> if self._tail_length is not None: >> results = results[-self._tail_length:] >> return max([self._metric_op * flatten_dict(r)[self._metric] for r in results]) >> >> def choose_trial_to_run(self, trial_runner): >> >> candidates = [] >> for trial in trial_runner.get_trials(): >> if trial.status in [Trial.PENDING, Trial.PAUSED] and \ >> trial_runner.has_resources(trial.resources): >> candidates.append(trial) >> candidates.sort( >> key=lambda trial: self._trial_state[trial].last_eval_time) >> return candidates[0] if candidates else None >> >> — >> You are receiving this because you commented. >> Reply to this email directly, view it on GitHub >> <https://github.com/ray-project/ray/issues/5335?email_source=notifications&email_token=ABCRZZLH3VWP4RHEWQAOZQDQCMQ2HA5CNFSM4IILBDB2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOD3LNVUQ#issuecomment-517397202>, >> or mute the thread >> <https://github.com/notifications/unsubscribe-auth/ABCRZZL4AUAEHEN6ILTQQO3QCMQ2HANCNFSM4IILBDBQ> >> . >> > I’m happy to give it a shot... I should have some time later today, early tomorrow... 

I think the entire replay buffer gets serialized.

BTW, this change looks really neat - would you be interested in creating a PR with these changes?

I'm happy to give it a shot... I should have some time later today, early tomorrow...

@waldroje did you end up getting around to this?

Not yet, been tied up, haven't ever done a PR, so was looking into linting and tests... I have a few days that I'm free this week, was hoping to get to it.
2019-08-08T00:08:08
ray-project/ray
5,426
ray-project__ray-5426
[ "4660" ]
a1d2e1762325cd34e14dc411666d63bb15d6eaf0
diff --git a/python/ray/tune/trial.py b/python/ray/tune/trial.py --- a/python/ray/tune/trial.py +++ b/python/ray/tune/trial.py @@ -321,7 +321,7 @@ def location_string(hostname, pid): location_string( self.last_result.get(HOSTNAME), self.last_result.get(PID))), "{} s".format( - int(self.last_result.get(TIME_TOTAL_S))) + int(self.last_result.get(TIME_TOTAL_S, 0))) ] if self.last_result.get(TRAINING_ITERATION) is not None: diff --git a/python/ray/tune/trial_runner.py b/python/ray/tune/trial_runner.py --- a/python/ray/tune/trial_runner.py +++ b/python/ray/tune/trial_runner.py @@ -506,7 +506,7 @@ def _process_trial(self, trial): result = trial.last_result result.update(done=True) - self._total_time += result[TIME_THIS_ITER_S] + self._total_time += result.get(TIME_THIS_ITER_S, 0) flat_result = flatten_dict(result) if trial.should_stop(flat_result): diff --git a/rllib/evaluation/postprocessing.py b/rllib/evaluation/postprocessing.py --- a/rllib/evaluation/postprocessing.py +++ b/rllib/evaluation/postprocessing.py @@ -28,7 +28,7 @@ def compute_advantages(rollout, last_r, gamma=0.9, lambda_=1.0, use_gae=True): last_r (float): Value estimation for last observation gamma (float): Discount factor. lambda_ (float): Parameter for GAE - use_gae (bool): Using Generalized Advantage Estamation + use_gae (bool): Using Generalized Advantage Estimation Returns: SampleBatch (SampleBatch): Object with experience from rollout and
diff --git a/python/ray/tune/tests/test_trial_runner.py b/python/ray/tune/tests/test_trial_runner.py --- a/python/ray/tune/tests/test_trial_runner.py +++ b/python/ray/tune/tests/test_trial_runner.py @@ -459,6 +459,15 @@ def train(config, reporter): self.assertEqual(trial.status, Trial.TERMINATED) self.assertEqual(trial.last_result[TIMESTEPS_TOTAL], 100) + def testReporterNoUsage(self): + def run_task(config, reporter): + print("hello") + + experiment = Experiment(run=run_task, name="ray_crash_repro") + [trial] = ray.tune.run(experiment).trials + print(trial.last_result) + self.assertEqual(trial.last_result[DONE], True) + def testErrorReturn(self): def train(config, reporter): raise Exception("uh oh")
[tune] run crashes if reporter isn't used ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: OSX 10.11.6 - **Ray installed from (source or binary)**: I ran `pip install -U ray` not sure which that is - **Ray version**: 0.6.5 - **Python version**: 3.5.2 - **Exact command to reproduce**: ``` import ray from ray.tune import Experiment def run_task(config, reporter): print('hello') experiment = Experiment(run=run_task, config={}, resources_per_trial={"cpu": 1, "gpu": 0}, name="ray_crash_repro") ray.init() ray.tune.run(experiment) print("end") # never reached ``` ### Describe the problem Not using the `reporter` in the function run by an `Experiment` causes `ray` to crash when it tries to get timing statistics. If this is desired behavior, it should be documented a little better. The [quick start guide here](https://ray.readthedocs.io/en/latest/tune.html?highlight=reporter) doesn't mention that using the reporter is required. If this isn't desired behavior I think you can just fill in default values in `fetch_result` for any missing keys that are used later. ### Source code / logs Traceback (most recent call last): ``` 2019-06-25 15:02:03,270 ERROR trial_runner.py:460 -- Error processing event. Traceback (most recent call last): File "/Users/Neil/anaconda/envs/rllab3ray/lib/python3.5/site-packages/ray/tune/trial_runner.py", line 420, in _process_trial self._total_time += result[TIME_THIS_ITER_S] KeyError: 'time_this_iter_s' 2019-06-25 15:02:03,271 INFO ray_trial_executor.py:178 -- Destroying actor for trial run_task_0. If your trainable is slow to initialize, consider setting reuse_actors=True to reduce actor creation overheads. Traceback (most recent call last): File "ray_crash_repro.py", line 12, in <module> ray.tune.run(experiment) File "/Users/Neil/anaconda/envs/rllab3ray/lib/python3.5/site-packages/ray/tune/tune.py", line 242, in run print(runner.debug_string(max_debug=99999)) File "/Users/Neil/anaconda/envs/rllab3ray/lib/python3.5/site-packages/ray/tune/trial_runner.py", line 354, in debug_string t, t.progress_string())) File "/Users/Neil/anaconda/envs/rllab3ray/lib/python3.5/site-packages/ray/tune/trial.py", line 441, in progress_string int(self.last_result.get(TIME_TOTAL_S))) TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType' (pid=65702) hello (pid=65702) WARNING: Not monitoring node memory since `psutil` is not installed. Install this with `pip install psutil` (or ray[debug]) to enable debugging of memory-related crashes. ```
2019-08-10T21:50:34
ray-project/ray
5,429
ray-project__ray-5429
[ "5423" ]
cc86271cf8e01e5f97e52a32c33b0e07de61be58
diff --git a/python/ray/tune/analysis/experiment_analysis.py b/python/ray/tune/analysis/experiment_analysis.py --- a/python/ray/tune/analysis/experiment_analysis.py +++ b/python/ray/tune/analysis/experiment_analysis.py @@ -75,7 +75,7 @@ def get_best_logdir(self, metric, mode="max"): mode (str): One of [min, max]. """ - df = self.dataframe() + df = self.dataframe(metric=metric, mode=mode) if mode == "max": return df.iloc[df[metric].idxmax()].logdir elif mode == "min":
diff --git a/python/ray/tune/tests/test_experiment_analysis.py b/python/ray/tune/tests/test_experiment_analysis.py --- a/python/ray/tune/tests/test_experiment_analysis.py +++ b/python/ray/tune/tests/test_experiment_analysis.py @@ -141,6 +141,13 @@ def testBestLogdir(self): self.assertTrue(logdir2.startswith(self.test_dir)) self.assertNotEquals(logdir, logdir2) + def testBestConfigIsLogdir(self): + analysis = Analysis(self.test_dir) + for metric, mode in [(self.metric, "min"), (self.metric, "max")]: + logdir = analysis.get_best_logdir(metric, mode=mode) + best_config = analysis.get_best_config(metric, mode=mode) + self.assertEquals(analysis.get_all_configs()[logdir], best_config) + if __name__ == "__main__": unittest.main(verbosity=2)
[Tune] Experiment Analysis get_best behaviour Hi, In their current version, the [`get_best_config`](https://github.com/ray-project/ray/blob/ed89897a311fbe63afdd5fa05a4ef8b7576ca6a4/python/ray/tune/analysis/experiment_analysis.py#L56) and [`get_best_logdir`](https://github.com/ray-project/ray/blob/ed89897a311fbe63afdd5fa05a4ef8b7576ca6a4/python/ray/tune/analysis/experiment_analysis.py#L70) methods of the `Analysis` object may consider different Trials as the best one: - `get_best_config` will first retrieve the best row of each trial dataframe and then select the best trial from these rows. - `get_best_logdir` will first retrieve the last row of each trial and then select the best one. Is this the expected behaviour? If it isn't, I think that the correct way of doing it is the first one. This could be done by simply passing the `metric` and `mode` arguments to the [`self.dataframe`](https://github.com/ray-project/ray/blob/ed89897a311fbe63afdd5fa05a4ef8b7576ca6a4/python/ray/tune/analysis/experiment_analysis.py#L78) call in `get_best_logdir`.
Hi @TomVeniat, that's a good point, and I think your proposal is right. Would you be willing to push a fix?
2019-08-11T01:47:30
ray-project/ray
5,580
ray-project__ray-5580
[ "5578" ]
04b869678ea328089f4fb6c7eca67332e57dda05
diff --git a/python/ray/node.py b/python/ray/node.py --- a/python/ray/node.py +++ b/python/ray/node.py @@ -19,7 +19,7 @@ import ray.ray_constants as ray_constants import ray.services from ray.resource_spec import ResourceSpec -from ray.utils import try_to_create_directory +from ray.utils import try_to_create_directory, try_to_symlink_directory # Logger for this module. It should be configured at the entry point # into the program using Ray. Ray configures it by default automatically @@ -27,6 +27,7 @@ logger = logging.getLogger(__name__) PY3 = sys.version_info.major >= 3 +SESSION_LATEST = "session_latest" class Node(object): @@ -171,9 +172,11 @@ def _init_temp(self, redis_client): else: self._session_dir = ray.utils.decode( redis_client.get("session_dir")) + session_symlink = os.path.join(self._temp_dir, SESSION_LATEST) # Send a warning message if the session exists. try_to_create_directory(self._session_dir) + try_to_symlink_directory(self._session_dir, session_symlink) # Create a directory to be used for socket files. self._sockets_dir = os.path.join(self._session_dir, "sockets") try_to_create_directory(self._sockets_dir, warn_if_exist=False) diff --git a/python/ray/utils.py b/python/ray/utils.py --- a/python/ray/utils.py +++ b/python/ray/utils.py @@ -614,3 +614,49 @@ def try_to_create_directory(directory_path, warn_if_exist=True): # Change the log directory permissions so others can use it. This is # important when multiple people are using the same machine. try_make_directory_shared(directory_path) + + +def try_to_symlink_directory(directory_path, symlink_path): + """Attempt to create a symlink to an existing directory. + + If the directory doesn't exist, the symlink path exists, or the symlink + failed to be created, a warning will be logged and the symlink will not + be created. If a symlink exists in the path, it will be attempted to be + removed and replaced. + + Args: + directory_path: The path of the existing directory. + symlink_path: The path of the symlink to create. + """ + logger = logging.getLogger("ray") + directory_path = os.path.expanduser(directory_path) + symlink_path = os.path.expanduser(symlink_path) + if not os.path.exists(directory_path): + logger.warning("Attempted to create symlink to directory '{}', but " + "the directory doesn't exist.".format(directory_path)) + return + elif not os.path.isdir(directory_path): + logger.warning("Attempted to create symlink to directory '{}', but " + "the '{}' isn't a directory.".format( + directory_path, directory_path)) + return + + if os.path.exists(symlink_path): + if os.path.islink(symlink_path): + try: + os.remove(symlink_path) + except OSError as e: + logger.warning("Failed to remove existing symlink '{}': {}" + .format(symlink_path, e)) + return + else: + logger.warning("Attempted to create symlink '{}' to directory '{}'" + ", but the symlink path exists and isn't a symlink." + .format(symlink_path, directory_path)) + return + + try: + os.symlink(directory_path, symlink_path) + except OSError as e: + logger.warning("Failed to create symlink '{}' to directory '{}': {}" + .format(symlink_path, directory_path, e))
Finding the latest session log files is annoying The log files are timestamped, so if you're running many sessions and want to find the latest logs, you have to manually find the latest timestamp and then `cd` into that directory. Would be much nicer to automatically create a symlink to the latest session so you can `cat /tmp/ray/session_latest/...`.
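To make the request concrete, here is a minimal sketch of the behaviour being asked for (illustrative only; the helper name and the `/tmp/ray` root are assumptions, not Ray's actual implementation): whenever a new session directory is created, re-point a `session_latest` symlink at it.

```python
import os

def update_session_symlink(session_dir, temp_dir="/tmp/ray"):
    """Re-point <temp_dir>/session_latest at the newest session directory."""
    link = os.path.join(temp_dir, "session_latest")
    if os.path.islink(link):
        os.remove(link)  # drop the link left over from the previous session
    os.symlink(session_dir, link)
```

With that in place, `cat /tmp/ray/session_latest/...` always reads from the most recent session.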
2019-08-29T22:33:16
ray-project/ray
5,599
ray-project__ray-5599
[ "5594" ]
3e70daba740bc0d306aa12e3c1dc2917b53359b2
diff --git a/python/ray/tune/schedulers/pbt.py b/python/ray/tune/schedulers/pbt.py --- a/python/ray/tune/schedulers/pbt.py +++ b/python/ray/tune/schedulers/pbt.py @@ -13,6 +13,7 @@ from ray.tune.error import TuneError from ray.tune.result import TRAINING_ITERATION +from ray.tune.logger import _SafeFallbackEncoder from ray.tune.schedulers import FIFOScheduler, TrialScheduler from ray.tune.suggest.variant_generator import format_vars from ray.tune.trial import Trial, Checkpoint @@ -276,13 +277,13 @@ def _log_config_on_step(self, trial_state, new_state, trial, ] # Log to global file. with open(os.path.join(trial.local_dir, "pbt_global.txt"), "a+") as f: - f.write(json.dumps(policy) + "\n") + print(json.dumps(policy, cls=_SafeFallbackEncoder), file=f) # Overwrite state in target trial from trial_to_clone. if os.path.exists(trial_to_clone_path): shutil.copyfile(trial_to_clone_path, trial_path) # Log new exploit in target trial log. with open(trial_path, "a+") as f: - f.write(json.dumps(policy) + "\n") + f.write(json.dumps(policy, cls=_SafeFallbackEncoder) + "\n") def _exploit(self, trial_executor, trial, trial_to_clone): """Transfers perturbed state from trial_to_clone -> trial.
[tune] `tune.function` does not work with PTB scheduler <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**:macOS Moave 10.14.6 - **Ray installed from (source or binary)**: binary - **Ray version**:0.73 - **Python version**:3.6 - **Exact command to reproduce**: I simply took the `pbt_example.py` from the example folder and added a `tune.function` object in the config of `tune.run`: ```python """examples/pbt_example.py""" from ray.tune import function # ... def some_function(): # <== added this function return 42 run( PBTBenchmarkExample, name="pbt_test", scheduler=pbt, reuse_actors=True, verbose=False, **{ "stop": { "training_iteration": 2000, }, "num_samples": 4, "config": { "lr": 0.0001, # note: this parameter is perturbed but has no effect on # the model training in this example "some_other_factor": 1, "some_function": function(some_function) # <== added this line }, }) ``` ### Describe the problem Hi, it seems that there is a problem in dumping the function as json? When you execute the code you will also get the known error `AttributeError: 'NoneType' object has no attribute 'get_global_worker'` as discussed in #5042. Both errors do not occur when I remove the scheduler from the `tune.run` call. Would be great if you could look into this. ## Source code / logs ``` 2019-08-30 16:46:30,213 INFO pbt.py:82 -- [explore] perturbed config from {'lr': 0.0001, 'some_other_factor': 1, 'some_function': tune.function(<function some_function at 0x11fd31ae8>)} -> {'lr': 0.00012, 'some_other_factor': 2, 'some_function': tune.function(<function some_function at 0x11fd31ae8>)} 2019-08-30 16:46:30,213 INFO pbt.py:304 -- [exploit] transferring weights from trial PBTBenchmarkExample_3 (score 24.859665593890774) -> PBTBenchmarkExample_2 (score 1.1905894444680376) 2019-08-30 16:46:30,213 ERROR trial_runner.py:550 -- Error processing event. Traceback (most recent call last): File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 520, in _process_trial self, trial, result) File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/site-packages/ray/tune/schedulers/pbt.py", line 243, in on_trial_result self._exploit(trial_runner.trial_executor, trial, trial_to_clone) File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/site-packages/ray/tune/schedulers/pbt.py", line 308, in _exploit trial_to_clone, new_config) File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/site-packages/ray/tune/schedulers/pbt.py", line 277, in _log_config_on_step f.write(json.dumps(policy) + "\n") File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/json/encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/json/encoder.py", line 257, in iterencode return _iterencode(o, 0) File "/Users/XXXX/anaconda2/envs/dev-p36/lib/python3.6/json/encoder.py", line 180, in default o.__class__.__name__) TypeError: Object of type 'function' is not JSON serializable ```
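For context on the fix that was merged: the PBT logging path now serializes the policy with Tune's `_SafeFallbackEncoder` instead of the default JSON encoder. A minimal stand-in for what such a fallback encoder does (the class below is a sketch, not Tune's actual implementation): anything the default encoder cannot handle is stringified instead of raising.

```python
import json

class SafeFallbackEncoder(json.JSONEncoder):
    def default(self, value):
        # json.JSONEncoder.default() would raise TypeError here; fall back
        # to the repr so values like tune.function wrappers don't crash the dump.
        return str(value)

policy = {"lr": 0.00012, "some_other_factor": 2, "some_function": lambda: 42}
print(json.dumps(policy, cls=SafeFallbackEncoder))
```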
2019-08-31T04:00:56
ray-project/ray
5,653
ray-project__ray-5653
[ "5648" ]
8a352a8e701978bfafa2f79d2a7e39c071227681
diff --git a/python/ray/tune/resources.py b/python/ray/tune/resources.py --- a/python/ray/tune/resources.py +++ b/python/ray/tune/resources.py @@ -5,11 +5,10 @@ from collections import namedtuple import logging import json +from numbers import Number # For compatibility under py2 to consider unicode as str from six import string_types -from numbers import Number - from ray.tune import TuneError logger = logging.getLogger(__name__) @@ -66,6 +65,23 @@ def __new__(cls, custom_resources.setdefault(value, 0) extra_custom_resources.setdefault(value, 0) + cpu = round(cpu, 2) + gpu = round(gpu, 2) + memory = round(memory, 2) + object_store_memory = round(object_store_memory, 2) + extra_cpu = round(extra_cpu, 2) + extra_gpu = round(extra_gpu, 2) + extra_memory = round(extra_memory, 2) + extra_object_store_memory = round(extra_object_store_memory, 2) + custom_resources = { + resource: round(value, 2) + for resource, value in custom_resources.items() + } + extra_custom_resources = { + resource: round(value, 2) + for resource, value in extra_custom_resources.items() + } + all_values = [ cpu, gpu, memory, object_store_memory, extra_cpu, extra_gpu, extra_memory, extra_object_store_memory
diff --git a/python/ray/tune/tests/test_trial_runner.py b/python/ray/tune/tests/test_trial_runner.py --- a/python/ray/tune/tests/test_trial_runner.py +++ b/python/ray/tune/tests/test_trial_runner.py @@ -1531,6 +1531,14 @@ def testFractionalGpus(self): self.assertEqual(trials[2].status, Trial.PENDING) self.assertEqual(trials[3].status, Trial.PENDING) + def testResourceNumericalError(self): + resource = Resources(cpu=0.99, gpu=0.99, custom_resources={"a": 0.99}) + small_resource = Resources( + cpu=0.33, gpu=0.33, custom_resources={"a": 0.33}) + for i in range(3): + resource = Resources.subtract(resource, small_resource) + self.assertTrue(resource.is_nonnegative()) + def testResourceScheduler(self): ray.init(num_cpus=4, num_gpus=1) runner = TrialRunner()
[tune] AssertionError: Resource invalid <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 16.04 - **Ray installed from (source or binary)**: pip install https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-0.8.0.dev4-cp36-cp36m-manylinux1_x86_64.whl - **Ray version**: 0.8.0.dev4 - **Python version**: 3.6.7 - **Exact command to reproduce**: <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> I run 5 trials with ray.tune. In one of the trials (each time), an error occurs at the end of training: `AssertionError: Resource invalid: Resources(cpu=3, gpu=0.33, memory=0, object_store_memory=0, extra_cpu=0, extra_gpu=0, extra_memory=0, extra_object_store_memory=0, custom_resources={}, extra_custom_resources={})`. When I trace back the error, I end up in the following function (ray/tune/resources.py): ``` def is_nonnegative(self): all_values = [self.cpu, self.gpu, self.extra_cpu, self.extra_gpu] all_values += list(self.custom_resources.values()) all_values += list(self.extra_custom_resources.values()) return all(v >= 0 for v in all_values) ``` It seems `custom_resources` and `extra_custom_resources` are not defined. It is weird that the error only occurs in one run... Is this a bug, or any suggestions on how to fix? ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. --> __This is how I call `tune.run`__ ``` tune.run( ModelTrainerMT, resources_per_trial={ 'cpu': config['ncpu'], 'gpu': config['ngpu'], }, num_samples=1, config=best_config, local_dir=store, raise_on_failed_trial=True, verbose=1, with_server=False, ray_auto_init=False, scheduler=early_stopping_scheduler, loggers=[JsonLogger, CSVLogger], checkpoint_at_end=True, reuse_actors=True, stop={'epoch': 2 if args.test else config['max_t']} ) ``` __Traceback__ ``` 2019-09-06 09:56:45,526 ERROR trial_runner.py:557 -- Error processing event. Traceback (most recent call last): File "/opt/conda/lib/python3.6/site-packages/ray/tune/trial_runner.py", line 552, in _process_trial self.trial_executor.stop_trial(trial) File "/opt/conda/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 246, in stop_trial self._return_resources(trial.resources) File "/opt/conda/lib/python3.6/site-packages/ray/tune/ray_trial_executor.py", line 388, in _return_resources "Resource invalid: {}".format(resources)) AssertionError: Resource invalid: Resources(cpu=3, gpu=0.33, memory=0, object_store_memory=0, extra_cpu=0, extra_gpu=0, extra_memory=0, extra_object_store_memory=0, custom_resources={}, extra_custom_resources={}) ```
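The numbers in the error message make the likely cause easy to reproduce: with fractional GPU allocations such as 0.33, repeated floating-point subtraction does not land exactly on zero, so the `>= 0` check in `is_nonnegative` can fail even though the bookkeeping is logically correct. A quick illustration in plain Python (no Ray needed); the fix merged for this issue rounds resource values to two decimal places, which makes the final check pass:

```python
available = 0.99           # e.g. a fractional GPU budget on one node
for _ in range(3):
    available -= 0.33      # three trials each take 0.33 of a GPU

print(available)                  # about -1.1e-16: slightly negative, not 0.0
print(available >= 0)             # False, which trips the "Resource invalid" assertion
print(round(available, 2) >= 0)   # True once rounded, as the fix does
```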
@richardliaw
2019-09-07T05:26:29
ray-project/ray
5,673
ray-project__ray-5673
[ "5513" ]
147e7d46ec9bcdd69315468c9161cd784c3038d6
diff --git a/python/ray/services.py b/python/ray/services.py --- a/python/ray/services.py +++ b/python/ray/services.py @@ -772,6 +772,8 @@ def _start_redis_instance(executable, # Construct the command to start the Redis server. command = [executable] if password: + if " " in password: + raise ValueError("Spaces not permitted in redis password.") command += ["--requirepass", password] command += ( ["--port", str(port), "--loglevel", "warning"] + load_module_args)
hang on ray.get when redis_password has a space ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04.3 LTS - **Ray installed from (source or binary)**: pip install ray - **Ray version**: 0.7.3 - **Python version**: 3.6.8 - **Exact command to reproduce**: See below ### Describe the problem Calling ray.init with redis_password containing no spaces works fine. Using a redis_password with a space causes the code to hang between the two print statements. ### Source code / logs ``` import ray ray.init(redis_password="pw 1") @ray.remote def increment(num): return num + 1 x = increment.remote(2) print(x) print(ray.get(x)) ```
2019-09-10T02:28:53
ray-project/ray
5,687
ray-project__ray-5687
[ "5686" ]
336aef1774aecb3db41f6e2c1d35f28e41279e1a
diff --git a/rllib/agents/ppo/ppo.py b/rllib/agents/ppo/ppo.py --- a/rllib/agents/ppo/ppo.py +++ b/rllib/agents/ppo/ppo.py @@ -128,6 +128,8 @@ def warn_about_bad_reward_scales(trainer, result): def validate_config(config): if config["entropy_coeff"] < 0: raise DeprecationWarning("entropy_coeff must be >= 0") + if isinstance(config["entropy_coeff"], int): + config["entropy_coeff"] = float(config["entropy_coeff"]) if config["sgd_minibatch_size"] > config["train_batch_size"]: raise ValueError( "Minibatch size {} must be <= train batch size {}.".format(
[rllib] Integer entropy coeff cannot be passed in <!-- General questions should be asked on the mailing list [email protected]. Questions about how to use Ray should be asked on [StackOverflow](https://stackoverflow.com/questions/tagged/ray). Before submitting an issue, please fill out the following form. --> ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Steropes - **Ray installed from (source or binary)**: pip install -U <latest whl> - **Ray version**: nightly - **Python version**: 3.7 - **Exact command to reproduce**: Pass integer value of entropy_coeff into run() with PPO <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem <!-- Describe the problem clearly here. --> ### Source code / logs <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. --> ``` 2019-09-11 00:11:50,889 ERROR trial_runner.py:552 -- Error processing event. Traceback (most recent call last): File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 498, in _process_trial result = self.trial_executor.fetch_result(trial) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py", line 347, in fetch_result result = ray.get(trial_future[0]) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 2340, in get raise value ray.exceptions.RayTaskError: ray_PPO:train() (pid=11050, host=steropes) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 527, in _apply_op_helper preferred_dtype=default_dtype) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1224, in internal_convert_to_tensor ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 1018, in _TensorTensorConversionFunction (dtype.name, t.dtype.name, str(t))) ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("default_policy/Sum_5:0", shape=(?,), dtype=float32)' During handling of the above exception, another exception occurred: ray_PPO:train() (pid=11050, host=steropes) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__ Trainer.__init__(self, config, env, logger_creator) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 366, in __init__ Trainable.__init__(self, config, logger_creator) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/tune/trainable.py", line 99, in __init__ self._setup(copy.deepcopy(self.config)) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 486, in _setup self._init(self.config, self.env_creator) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 109, in _init self.config["num_workers"]) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 531, in _make_workers logdir=self.logdir) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 64, in __init__ 
RolloutWorker, env_creator, policy, 0, self._local_config) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/worker_set.py", line 220, in _make_worker _fake_sampler=config.get("_fake_sampler", False)) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 348, in __init__ self._build_policy_map(policy_dict, policy_config) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 762, in _build_policy_map policy_map[name] = cls(obs_space, act_space, merged_conf) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/tf_policy_template.py", line 143, in __init__ obs_include_prev_action_reward=obs_include_prev_action_reward) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 196, in __init__ self._initialize_loss() File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 337, in _initialize_loss loss = self._do_loss_init(train_batch) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 349, in _do_loss_init loss = self._loss_fn(self, self.model, self._dist_class, train_batch) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/ppo/ppo_policy.py", line 146, in ppo_surrogate_loss model_config=policy.config["model"]) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/ray/rllib/agents/ppo/ppo_policy.py", line 106, in __init__ vf_loss_coeff * vf_loss - entropy_coeff * curr_entropy) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1045, in _run_op return tensor_oper(a.value(), *args, **kwargs) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 884, in binary_op_wrapper return func(x, y, name=name) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py", line 1180, in _mul_dispatch return gen_math_ops.mul(x, y, name=name) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6490, in mul "Mul", x=x, y=y, name=name) File "/data/ashwineep/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 563, in _apply_op_helper inferred_from[input_arg.type_attr])) TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int32 of argument 'x'. ```
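A compact way to see the dtype clash outside of RLlib (illustrative sketch; assumes TensorFlow is installed): a coefficient built from a Python int becomes an int32 tensor, which TensorFlow refuses to multiply with the float32 entropy term. The merged fix simply coerces the config value to float in `validate_config` before the graph is built.

```python
import tensorflow as tf

entropy = tf.constant([0.5, 0.7])   # float32, like the policy's entropy term
coeff = tf.Variable(0)              # int config value -> int32 variable
# coeff * entropy                   # fails: int32 cannot be multiplied with float32

coeff = tf.Variable(float(0))       # coercing to float, as validate_config now does
loss_term = coeff * entropy         # works: both operands are float32
```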
2019-09-11T07:13:50
ray-project/ray
5,754
ray-project__ray-5754
[ "5726" ]
249ca2cf9e5eb9aaee8fc8b35dd9dee27c0b8f5b
diff --git a/python/ray/tune/experiment.py b/python/ray/tune/experiment.py --- a/python/ray/tune/experiment.py +++ b/python/ray/tune/experiment.py @@ -3,6 +3,7 @@ from __future__ import print_function import copy +import inspect import logging import os import six @@ -87,11 +88,19 @@ def __init__(self, _raise_deprecation_note( "sync_function", "sync_to_driver", soft=False) + stop = stop or {} + if not isinstance(stop, dict) and not callable(stop): + raise ValueError("Invalid stop criteria: {}. Must be a callable " + "or dict".format(stop)) + if callable(stop) and len(inspect.getargspec(stop).args) != 2: + raise ValueError("Invalid stop criteria: {}. Callable criteria " + "must take exactly 2 parameters.".format(stop)) + config = config or {} run_identifier = Experiment._register_if_needed(run) spec = { "run": run_identifier, - "stop": stop or {}, + "stop": stop, "config": config, "resources_per_trial": resources_per_trial, "num_samples": num_samples, diff --git a/python/ray/tune/trial.py b/python/ray/tune/trial.py --- a/python/ray/tune/trial.py +++ b/python/ray/tune/trial.py @@ -284,6 +284,9 @@ def should_stop(self, result): if result.get(DONE): return True + if callable(self.stopping_criterion): + return self.stopping_criterion(self.trial_id, result) + for criteria, stop_value in self.stopping_criterion.items(): if criteria not in result: raise TuneError( diff --git a/python/ray/tune/tune.py b/python/ray/tune/tune.py --- a/python/ray/tune/tune.py +++ b/python/ray/tune/tune.py @@ -80,9 +80,11 @@ def run(run_or_experiment, If Experiment, then Tune will execute training based on Experiment.spec. name (str): Name of experiment. - stop (dict): The stopping criteria. The keys may be any field in - the return result of 'train()', whichever is reached first. - Defaults to empty dict. + stop (dict|func): The stopping criteria. If dict, the keys may be + any field in the return result of 'train()', whichever is + reached first. If function, it must take (trial_id, result) as + arguments and return a boolean (True if trial should be stopped, + False otherwise). config (dict): Algorithm-specific configuration for Tune variant generation (e.g. env, hyperparams). Defaults to empty dict. Custom search algorithms may ignore this.
diff --git a/python/ray/tune/tests/test_trial_runner.py b/python/ray/tune/tests/test_trial_runner.py --- a/python/ray/tune/tests/test_trial_runner.py +++ b/python/ray/tune/tests/test_trial_runner.py @@ -453,6 +453,17 @@ def train(config, reporter): [trial] = tune.run(train, stop={"test/test1/test2": 6}).trials self.assertEqual(trial.last_result["training_iteration"], 7) + def testStoppingFunction(self): + def train(config, reporter): + for i in range(10): + reporter(test=i) + + def stop(trial_id, result): + return result["test"] > 6 + + [trial] = tune.run(train, stop=stop).trials + self.assertEqual(trial.last_result["training_iteration"], 8) + def testEarlyReturn(self): def train(config, reporter): reporter(timesteps_total=100, done=True)
[tune] stopping criterion to propagate to other trials We use ray.tune to do automated hyperparameter tuning (yay 🙌). To check we have done everything correctly in our projects, we often have an integration test that runs tune and checks if an Experiment can reach a very achievable accuracy in a certain time. Currently it seems that the stopping criterion that you pass to a ray.tune.Experiment is only applied per trial (that is if one trial reaches it it stops, but the other worse trials keep running). Since we are satisfied if any trial reaches the stopping criterion, my question: Is there a way to stop the entire Experiment (and get a result from ray.tune.run) once any trial reaches the stopping criterion? Thanks in advance! And thanks for building `ray` 🧡
Ah, I see; I'll get to the programmatic stopping condition PR this weekend. That should allow you to do what you want.
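The change referenced above lets `stop` be a callable taking `(trial_id, result)` in addition to a dict. A sketch of how that hook could be used to wind down the whole experiment once any trial reaches a target (the function name and the `mean_accuracy` key are illustrative assumptions; each remaining trial stops the next time it reports a result):

```python
experiment_done = {"flag": False}

def stop_all_when_any_succeeds(trial_id, result):
    # The stop callable is evaluated in the driver process, so this
    # module-level flag is shared across all trials of the experiment.
    if result.get("mean_accuracy", 0.0) >= 0.95:
        experiment_done["flag"] = True
    return experiment_done["flag"]

# tune.run(my_trainable, stop=stop_all_when_any_succeeds)
```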
2019-09-23T01:11:18
ray-project/ray
5,844
ray-project__ray-5844
[ "5828" ]
08e4e3a1530e106d97f79a653ae51646a36cde19
diff --git a/python/ray/tune/ray_trial_executor.py b/python/ray/tune/ray_trial_executor.py --- a/python/ray/tune/ray_trial_executor.py +++ b/python/ray/tune/ray_trial_executor.py @@ -301,18 +301,22 @@ def get_alive_node_ips(self): def get_current_trial_ips(self): return {t.node_ip for t in self.get_running_trials()} - def get_next_available_trial(self): + def get_next_failed_trial(self): + """Gets the first trial found to be running on a node presumed dead. + + Returns: + A Trial object that is ready for failure processing. None if + no failure detected. + """ if ray.worker._mode() != ray.worker.LOCAL_MODE: live_cluster_ips = self.get_alive_node_ips() if live_cluster_ips - self.get_current_trial_ips(): for trial in self.get_running_trials(): if trial.node_ip and trial.node_ip not in live_cluster_ips: - logger.warning( - "{} (ip: {}) detected as stale. This is likely " - "because the node was lost. Processing this " - "trial first.".format(trial, trial.node_ip)) return trial + return None + def get_next_available_trial(self): shuffled_results = list(self._running.keys()) random.shuffle(shuffled_results) # Note: We shuffle the results because `ray.wait` by default returns diff --git a/python/ray/tune/trial_executor.py b/python/ray/tune/trial_executor.py --- a/python/ray/tune/trial_executor.py +++ b/python/ray/tune/trial_executor.py @@ -158,6 +158,15 @@ def get_next_available_trial(self): """ raise NotImplementedError + def get_next_failed_trial(self): + """Non-blocking call that detects and returns one failed trial. + + Returns: + A Trial object that is ready for failure processing. None if + no failure detected. + """ + raise NotImplementedError + def fetch_result(self, trial): """Fetches one result for the trial. diff --git a/python/ray/tune/trial_runner.py b/python/ray/tune/trial_runner.py --- a/python/ray/tune/trial_runner.py +++ b/python/ray/tune/trial_runner.py @@ -497,9 +497,18 @@ def _get_next_trial(self): return trial def _process_events(self): - trial = self.trial_executor.get_next_available_trial() # blocking - with warn_if_slow("process_trial"): - self._process_trial(trial) + failed_trial = self.trial_executor.get_next_failed_trial() + if failed_trial: + with warn_if_slow("process_failed_trial"): + self._process_trial_failure( + failed_trial, + error_msg="{} (ip: {}) detected as stale. This is likely" + "because the node was lost".format(failed_trial, + failed_trial.node_ip)) + else: + trial = self.trial_executor.get_next_available_trial() # blocking + with warn_if_slow("process_trial"): + self._process_trial(trial) def _process_trial(self, trial): try: @@ -558,16 +567,25 @@ def _process_trial(self, trial): decision) except Exception: logger.exception("Error processing event.") - error_msg = traceback.format_exc() - if trial.status == Trial.RUNNING: - if trial.should_recover(): - self._try_recover(trial, error_msg) - else: - self._scheduler_alg.on_trial_error(self, trial) - self._search_alg.on_trial_complete( - trial.trial_id, error=True) - self.trial_executor.stop_trial( - trial, error=True, error_msg=error_msg) + self._process_trial_failure(trial, traceback.format_exc()) + + def _process_trial_failure(self, trial, error_msg): + """Handle trial failure. + + Attempt trial recovery if possible, clean up state otherwise. + + Args: + trial (Trial): Failed trial. + error_msg (str): Error message prior to invoking this method. 
+ """ + if trial.status == Trial.RUNNING: + if trial.should_recover(): + self._try_recover(trial, error_msg) + else: + self._scheduler_alg.on_trial_error(self, trial) + self._search_alg.on_trial_complete(trial.trial_id, error=True) + self.trial_executor.stop_trial( + trial, error=True, error_msg=error_msg) def _checkpoint_trial_if_needed(self, trial, force=False): """Checkpoints trial based off trial.last_result."""
diff --git a/python/ray/tune/tests/test_cluster.py b/python/ray/tune/tests/test_cluster.py --- a/python/ray/tune/tests/test_cluster.py +++ b/python/ray/tune/tests/test_cluster.py @@ -8,6 +8,7 @@ import os import pytest import shutil +import sys import ray from ray import tune @@ -20,6 +21,11 @@ from ray.tune.trial_runner import TrialRunner from ray.tune.suggest import BasicVariantGenerator +if sys.version_info >= (3, 3): + from unittest.mock import MagicMock +else: + from mock import MagicMock + def _start_new_cluster(): cluster = Cluster( @@ -98,6 +104,26 @@ def test_counting_resources(start_connected_cluster): assert sum(t.status == Trial.RUNNING for t in runner.get_trials()) == 2 +def test_trial_processed_after_node_failure(start_connected_emptyhead_cluster): + """Tests that Tune processes a trial as failed if its node died.""" + cluster = start_connected_emptyhead_cluster + node = cluster.add_node(num_cpus=1) + cluster.wait_for_nodes() + + runner = TrialRunner(BasicVariantGenerator()) + mock_process_failure = MagicMock(side_effect=runner._process_trial_failure) + runner._process_trial_failure = mock_process_failure + + runner.add_trial(Trial("__fake")) + runner.step() + runner.step() + assert not mock_process_failure.called + + cluster.remove_node(node) + runner.step() + assert mock_process_failure.called + + def test_remove_node_before_result(start_connected_emptyhead_cluster): """Tune continues when node is removed before trial returns.""" cluster = start_connected_emptyhead_cluster
[tune] Failed spot instances should not stall entire cluster We should keep a running list of trials and their update times, and only jumpstart when necessary cc @stevenlin1111
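Sketching the detection step this asks for, modelled on the change that was merged (names simplified, not the exact Tune API): before blocking on the next result, compare each running trial's node IP against the set of node IPs the cluster still reports as alive, and hand any orphaned trial straight to failure processing.

```python
def get_next_failed_trial(running_trials, live_node_ips):
    """Return the first running trial whose node is no longer alive, else None."""
    for trial in running_trials:
        if trial.node_ip and trial.node_ip not in live_node_ips:
            return trial
    return None
```

In the merged version this check runs in `_process_events` ahead of the blocking wait, so a lost spot instance is processed as a failure immediately instead of stalling the remaining trials.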
2019-10-04T10:17:42
ray-project/ray
5,863
ray-project__ray-5863
[ "5715" ]
785670bc18a8595219c96e9512192922fafcf510
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -369,9 +369,7 @@ def _remote(self, # Instead, instantiate the actor locally and add it to the worker's # dictionary if worker.mode == ray.LOCAL_MODE: - actor_id = ActorID.of(worker.current_job_id, - worker.current_task_id, - worker.task_context.task_index + 1) + actor_id = ActorID.from_random() worker.actors[actor_id] = meta.modified_class( *copy.deepcopy(args), **copy.deepcopy(kwargs)) core_handle = ray._raylet.ActorHandle(
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -1766,6 +1766,28 @@ def returns_multiple_throws(): with pytest.raises(Exception, match=exception_str): ray.get(obj2) + # Check that Actors are not overwritten by remote calls from different + # classes. + @ray.remote + class RemoteActor1(object): + def __init__(self): + pass + + def function1(self): + return 0 + + @ray.remote + class RemoteActor2(object): + def __init__(self): + pass + + def function2(self): + return 1 + + actor1 = RemoteActor1.remote() + _ = RemoteActor2.remote() + assert ray.get(actor1.function1.remote()) == 0 + def test_resource_constraints(shutdown_only): num_workers = 20
[local mode] Actors are not handled correctly The below fails with: ```bash Traceback (most recent call last): File "/Users/rliaw/Research/riselab/ray/doc/examples/parameter_server/failure.py", line 35, in <module> accuracies = run_sync_parameter_server() File "/Users/rliaw/Research/riselab/ray/doc/examples/parameter_server/failure.py", line 32, in run_sync_parameter_server current_weights = ps.get_weights.remote() File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 148, in remote return self._remote(args, kwargs) File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 169, in _remote return invocation(args, kwargs) File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 163, in invocation num_return_vals=num_return_vals) File "/Users/rliaw/miniconda3/lib/python3.7/site-packages/ray/actor.py", line 588, in _actor_method_call function = getattr(worker.actors[self._ray_actor_id], method_name) AttributeError: 'DataWorker' object has no attribute 'get_weights' ``` ```python import ray @ray.remote class ParameterServer(object): def __init__(self, learning_rate): pass def apply_gradients(self, *gradients): pass def get_weights(self): pass @ray.remote class DataWorker(object): def __init__(self): pass def compute_gradient_on_batch(self, data, target): pass def compute_gradients(self, weights): pass def run_sync_parameter_server(): iterations = 50 num_workers = 2 ps = ParameterServer.remote(1e-4 * num_workers) # Create workers. workers = [DataWorker.remote() for i in range(num_workers)] current_weights = ps.get_weights.remote() ray.init(ignore_reinit_error=True, local_mode=True) accuracies = run_sync_parameter_server() ```
cc @edoakes I'll take a look into this. I get a similar error in this test case. ```python import ray from ray import tune config = {"env": "CartPole-v1"} ray.init(local_mode=True) tune.run("PPO", config=config) ``` ``` Traceback (most recent call last): File "/home/matt/Code/ray/python/ray/tune/trial_runner.py", line 506, in _process_trial result = self.trial_executor.fetch_result(trial) File "/home/matt/Code/ray/python/ray/tune/ray_trial_executor.py", line 347, in fetch_result result = ray.get(trial_future[0]) File "/home/matt/Code/ray/python/ray/worker.py", line 2349, in get raise value ray.exceptions.RayTaskError: python test.py (pid=32468, host=Rocko2) File "/home/matt/Code/ray/python/ray/local_mode_manager.py", line 55, in execute results = function(*copy.deepcopy(args)) File "/home/matt/Code/ray/python/ray/rllib/agents/trainer.py", line 395, in train w.set_global_vars.remote(self.global_vars) File "/home/matt/Code/ray/python/ray/actor.py", line 148, in remote return self._remote(args, kwargs) File "/home/matt/Code/ray/python/ray/actor.py", line 169, in _remote return invocation(args, kwargs) File "/home/matt/Code/ray/python/ray/actor.py", line 163, in invocation num_return_vals=num_return_vals) File "/home/matt/Code/ray/python/ray/actor.py", line 588, in _actor_method_call function = getattr(worker.actors[self._ray_actor_id], method_name) AttributeError: 'PPO' object has no attribute 'set_global_vars' ``` > I get a similar error in this test case. > > ```python > import ray > from ray import tune > config = {"env": "CartPole-v1"} > ray.init(local_mode=True) > tune.run("PPO", config=config) > ``` > > ``` > Traceback (most recent call last): > File "/home/matt/Code/ray/python/ray/tune/trial_runner.py", line 506, in _process_trial > result = self.trial_executor.fetch_result(trial) > File "/home/matt/Code/ray/python/ray/tune/ray_trial_executor.py", line 347, in fetch_result > result = ray.get(trial_future[0]) > File "/home/matt/Code/ray/python/ray/worker.py", line 2349, in get > raise value > ray.exceptions.RayTaskError: python test.py (pid=32468, host=Rocko2) > File "/home/matt/Code/ray/python/ray/local_mode_manager.py", line 55, in execute > results = function(*copy.deepcopy(args)) > File "/home/matt/Code/ray/python/ray/rllib/agents/trainer.py", line 395, in train > w.set_global_vars.remote(self.global_vars) > File "/home/matt/Code/ray/python/ray/actor.py", line 148, in remote > return self._remote(args, kwargs) > File "/home/matt/Code/ray/python/ray/actor.py", line 169, in _remote > return invocation(args, kwargs) > File "/home/matt/Code/ray/python/ray/actor.py", line 163, in invocation > num_return_vals=num_return_vals) > File "/home/matt/Code/ray/python/ray/actor.py", line 588, in _actor_method_call > function = getattr(worker.actors[self._ray_actor_id], method_name) > AttributeError: 'PPO' object has no attribute 'set_global_vars' > ``` I'm getting the same error with: ``` ray.init(num_cpus=N_CPUS, local_mode=True) # defining dictionary for the experiment experiment_params = dict( run="PPO", # must be the same as the default config env=gym_name, config={**ppo_config}, checkpoint_freq=20, checkpoint_at_end=True, max_failures=999, stop={"training_iteration": 200, }, # stop conditions ) experiment_params = {params["exp_tag"]: experiment_params} # running the experiment trials = run_experiments(experiment_params) ``` With the following Traceback: ``` ray.exceptions.RayTaskError: /anaconda3/envs/dmas/bin/python /Applications/PyCharm.app/Contents/helpers/pydev/pydevconsole.py 
--mode=client --port=49411 (pid=1002, host=client-145-120-37-77.surfnet.eduroam.rug.nl) File "/anaconda3/envs/dmas/lib/python3.6/site-packages/ray/local_mode_manager.py", line 55, in execute results = function(*copy.deepcopy(args)) File "/anaconda3/envs/dmas/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 395, in train w.set_global_vars.remote(self.global_vars) File "/anaconda3/envs/dmas/lib/python3.6/site-packages/ray/actor.py", line 148, in remote return self._remote(args, kwargs) File "/anaconda3/envs/dmas/lib/python3.6/site-packages/ray/actor.py", line 169, in _remote return invocation(args, kwargs) File "/anaconda3/envs/dmas/lib/python3.6/site-packages/ray/actor.py", line 163, in invocation num_return_vals=num_return_vals) File "/anaconda3/envs/dmas/lib/python3.6/site-packages/ray/actor.py", line 548, in _actor_method_call function = getattr(worker.actors[self._ray_actor_id], method_name) AttributeError: 'PPO' object has no attribute 'set_global_vars' ```
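The one-line fix that was merged (deriving the local-mode actor ID from `ActorID.from_random()` rather than from the creating task's context) points at a likely mechanism for the reports above: two actors created from the same driver context could receive the same deterministic ID, so the second registration overwrites the first in the worker's actor table, and method calls on the first handle then dispatch to the wrong object. A toy illustration of that overwrite (plain Python, not Ray internals; the names are made up):

```python
actors = {}  # stand-in for the worker's local-mode actor table

def register(actor_id, instance):
    actors[actor_id] = instance

class ParameterServer:
    def get_weights(self):
        return "weights"

class DataWorker:
    pass

register("actor-1", ParameterServer())
register("actor-1", DataWorker())  # colliding ID silently replaces the ParameterServer

actors["actor-1"].get_weights()
# AttributeError: 'DataWorker' object has no attribute 'get_weights'
```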
2019-10-08T17:40:27
ray-project/ray
5,971
ray-project__ray-5971
[ "5970" ]
252a5d13ed129107584a26766ac934d336e4b755
diff --git a/python/ray/tune/experiment.py b/python/ray/tune/experiment.py --- a/python/ray/tune/experiment.py +++ b/python/ray/tune/experiment.py @@ -102,9 +102,9 @@ def __init__(self, "criteria must take exactly 2 parameters.".format(stop)) config = config or {} - run_identifier = Experiment._register_if_needed(run) + self._run_identifier = Experiment._register_if_needed(run) spec = { - "run": run_identifier, + "run": self._run_identifier, "stop": stop, "config": config, "resources_per_trial": resources_per_trial, @@ -125,7 +125,7 @@ def __init__(self, if restore else None } - self.name = name or run_identifier + self.name = name or self._run_identifier self.spec = spec @classmethod @@ -202,6 +202,11 @@ def remote_checkpoint_dir(self): if self.spec["upload_dir"]: return os.path.join(self.spec["upload_dir"], self.name) + @property + def run_identifier(self): + """Returns a string representing the trainable identifier.""" + return self._run_identifier + def convert_to_experiment_list(experiments): """Produces a list of Experiment objects. diff --git a/python/ray/tune/tune.py b/python/ray/tune/tune.py --- a/python/ray/tune/tune.py +++ b/python/ray/tune/tune.py @@ -4,6 +4,7 @@ import logging import time +import six from ray.tune.error import TuneError from ray.tune.experiment import convert_to_experiment_list, Experiment @@ -45,6 +46,9 @@ def _make_scheduler(args): def _check_default_resources_override(run_identifier): + if not isinstance(run_identifier, six.string_types): + # If obscure dtype, assume it is overriden. + return True trainable_cls = get_trainable_cls(run_identifier) return hasattr(trainable_cls, "default_resource_request") and ( trainable_cls.default_resource_request.__code__ != @@ -265,7 +269,7 @@ def run(run_or_experiment, dict) and "gpu" in resources_per_trial: # "gpu" is manually set. pass - elif _check_default_resources_override(run_identifier): + elif _check_default_resources_override(experiment.run_identifier): # "default_resources" is manually overriden. pass else:
Undefined Reference To "run_identifier" on GPU Machine With tune.Experiment On the latest nightly build, passing a `tune.Experiment` into `tune.Run` on a GPU machine results in an undefined reference to `run_identifier`. This occurs because `run_identifier` is defined conditionally [here](https://github.com/ray-project/ray/blob/91acecc9f9eb8d1a7e9fe651bd50e3b2d68ecee2/python/ray/tune/tune.py#L214). The `run_identifier` reference occurs on [this line](https://github.com/ray-project/ray/blob/91acecc9f9eb8d1a7e9fe651bd50e3b2d68ecee2/python/ray/tune/tune.py#L268). Traceback for the error: ``` 2019-10-22 06:57:15,904 DEBUG registry.py:59 -- Detected class for trainable. 2019-10-22 06:57:15,906 DEBUG tune.py:236 -- Ignoring some parameters passed into tune.run. 2019-10-22 06:57:15,909 DEBUG trial_runner.py:175 -- Starting a new experiment. Traceback (most recent call last): File "/home/steven/res/railrl-private/railrl/launchers/ray/local_launch.py", line 94, in <module> launch_local_experiment(**local_launch_variant) File "/home/steven/res/railrl-private/railrl/launchers/ray/local_launch.py", line 82, in launch_local_experiment queue_trials=True, File "/env/lib/python3.5/site-packages/ray/tune/tune.py", line 268, in run elif _check_default_resources_override(run_identifier): UnboundLocalError: local variable 'run_identifier' referenced before assignment ```
2019-10-22T07:36:32
ray-project/ray
5,999
ray-project__ray-5999
[ "5996" ]
c69e9aafdc22c47fd9278a3d10e66e5e748a1b71
diff --git a/python/ray/tune/progress_reporter.py b/python/ray/tune/progress_reporter.py --- a/python/ray/tune/progress_reporter.py +++ b/python/ray/tune/progress_reporter.py @@ -4,7 +4,8 @@ from ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, PID, EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS, - HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S) + HOSTNAME, TRAINING_ITERATION, TIME_TOTAL_S, + TIMESTEPS_TOTAL) from ray.tune.util import flatten_dict try: @@ -21,6 +22,7 @@ MEAN_ACCURACY: "acc", MEAN_LOSS: "loss", TIME_TOTAL_S: "total time (s)", + TIMESTEPS_TOTAL: "timesteps", TRAINING_ITERATION: "iter", } @@ -135,7 +137,7 @@ def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=100): trial_table.append(_get_trial_info(trial, params, keys, has_failed)) # Parse columns. parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys] - columns = ["Trial name", "ID", "status", "loc"] + columns = ["Trial name", "status", "loc"] columns += ["failures", "error file"] if has_failed else [] columns += params + parsed_columns messages.append( @@ -146,7 +148,7 @@ def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=100): def _get_trial_info(trial, parameters, metrics, include_error_data=False): """Returns the following information about a trial: - name | ID | status | loc | # failures | error_file | params... | metrics... + name | status | loc | # failures | error_file | params... | metrics... Args: trial (Trial): Trial to get information for. @@ -155,7 +157,7 @@ def _get_trial_info(trial, parameters, metrics, include_error_data=False): include_error_data (bool): Include error file and # of failures. """ result = flatten_dict(trial.last_result) - trial_info = [str(trial), trial.trial_id, trial.status] + trial_info = [str(trial), trial.status] trial_info += [_location_str(result.get(HOSTNAME), result.get(PID))] if include_error_data: # TODO(ujvl): File path is too long to display in a single row. diff --git a/python/ray/tune/result.py b/python/ray/tune/result.py --- a/python/ray/tune/result.py +++ b/python/ray/tune/result.py @@ -62,8 +62,8 @@ DEFAULT_EXPERIMENT_INFO_KEYS = ("trainable_name", EXPERIMENT_TAG, TRIAL_ID) -DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, MEAN_ACCURACY, - MEAN_LOSS) +DEFAULT_RESULT_KEYS = (TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL, + MEAN_ACCURACY, MEAN_LOSS) # __duplicate__ is a magic keyword used internally to # avoid double-logging results when using the Function API.
Tune doesn't show timesteps anymore This makes it not very useful for monitoring RL runs: ``` +--------------------------+----------+----------+-----------+--------+------------------+----------+ | Trial name | ID | status | loc | iter | total time (s) | reward | |--------------------------+----------+----------+-----------+--------+------------------+----------| | SAC_Pendulum-v0_79dcd0fe | 79dcd0fe | RUNNING | pid=11031 | 66 | 163.499 | -1105.85 | +--------------------------+----------+----------+-----------+--------+------------------+----------+ ``` cc @richardliaw
Can we also drop "ID"? It seems redundant given that it is encoded in the trial name already. @ujvl can you take a look at this? will take a look today
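For context, the `timesteps` column comes from the standard `timesteps_total` result key that the patch above adds to the default columns. A minimal function-API sketch (the metric values are made up) that reports it:

```python
from ray import tune


def train(config, reporter):
    for i in range(10):
        # `timesteps_total` is a standard tune result key; with the patch
        # above it should surface as the `timesteps` column in the table.
        reporter(timesteps_total=(i + 1) * 200, episode_reward_mean=float(i))


tune.run(train)
```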
2019-10-24T20:27:43
ray-project/ray
6,141
ray-project__ray-6141
[ "2615" ]
c75ada9e0465928d97b3ebaeb105149009a592ef
diff --git a/python/ray/node.py b/python/ray/node.py --- a/python/ray/node.py +++ b/python/ray/node.py @@ -192,10 +192,10 @@ def atexit_handler(): atexit.register(atexit_handler) - # Register the a handler to be called if we get a SIGTERM. + # Register the handler to be called if we get a SIGTERM. # In this case, we want to exit with an error code (1) after # cleaning up child processes. - def sigterm_handler(): + def sigterm_handler(signum, frame): return clean_up_children(lambda *args, **kwargs: sys.exit(1)) signal.signal(signal.SIGTERM, sigterm_handler)
Killing `ray start --block` should stop all child processes It would be useful for deployment if `ray start --block` (or some other equivalent command) would start up all the Ray subprocesses in such a way that killing the initial command would stop all child processes. This makes it easier for Ray to be deployed using existing deployment tools which assume killing the started process fully cleans things up. See https://github.com/ray-project/ray/issues/2214#issuecomment-411659135.
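The patch above fixes the SIGTERM handler signature in `node.py`; as a standalone sketch of why the signature matters (standard library only, Unix):

```python
import signal
import sys


def sigterm_handler(signum, frame):
    # Handlers registered with signal.signal receive (signum, frame); a
    # zero-argument handler raises TypeError when the signal arrives, so
    # any cleanup behind it never runs.
    sys.exit(1)


signal.signal(signal.SIGTERM, sigterm_handler)
signal.pause()  # Unix-only: block until a signal arrives; `kill -TERM <pid>` exits with code 1
```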
See also #2587. On Linux, the best way to ensure this is by setting `pdeathsig` on each child process. Then if the parent process dies, Linux also kills all of the children. In Python you can do this by adding to the `subprocess` `Popen` call arguments: ``` preexec_fn=lambda: prctl.set_pdeathsig(signal.SIGKILL) ``` You [need this package](https://pythonhosted.org/python-prctl/). The issue is that this is not graceful, but that is OK. You should gracefully try to kill stuff from the parent process while it is running, but if it dies, then the rest dies with it. I would say that it might be better to just reuse some existing process supervisor here instead of reimplementing all this and figuring out all the details on how to manage subprocesses on various systems. Isn't this also related to #2005? One related comment about this: `ray start --block` should probably exit if the child processes have died. It could just check periodically to see if the raylet that it started is still up and running and, if not, then exit.
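A slightly fuller sketch of that `pdeathsig` suggestion, hedged as Linux-only and requiring the `python-prctl` package; the child command below is just a placeholder:

```python
import signal
import subprocess

import prctl  # pip install python-prctl (Linux only)


def _die_with_parent():
    # Runs in the child between fork and exec: ask the kernel to send this
    # child SIGKILL if its parent process ever dies.
    prctl.set_pdeathsig(signal.SIGKILL)


child = subprocess.Popen(
    ["python", "-c", "import time; time.sleep(3600)"],
    preexec_fn=_die_with_parent,
)
print("child pid:", child.pid)  # killing this script now also kills the child
```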
2019-11-11T22:21:15
ray-project/ray
6,170
ray-project__ray-6170
[ "6115" ]
7d33e9949b942acde92db6698abdb6b409c0648c
diff --git a/python/ray/local_mode_manager.py b/python/ray/local_mode_manager.py --- a/python/ray/local_mode_manager.py +++ b/python/ray/local_mode_manager.py @@ -5,6 +5,7 @@ import copy import traceback +import ray from ray import ObjectID from ray.utils import format_error_message from ray.exceptions import RayTaskError @@ -20,7 +21,18 @@ class LocalModeObjectID(ObjectID): it equates to the object not existing in the object store. This is necessary because None is a valid object value. """ - pass + + def __copy__(self): + new = LocalModeObjectID(self.binary()) + if hasattr(self, "value"): + new.value = self.value + return new + + def __deepcopy__(self, memo=None): + new = LocalModeObjectID(self.binary()) + if hasattr(self, "value"): + new.value = self.value + return new class LocalModeManager(object): @@ -49,23 +61,37 @@ def execute(self, function, function_name, args, kwargs, num_return_vals): Returns: LocalModeObjectIDs corresponding to the function return values. """ - object_ids = [ + return_ids = [ LocalModeObjectID.from_random() for _ in range(num_return_vals) ] + new_args = [] + for i, arg in enumerate(args): + if isinstance(arg, ObjectID): + new_args.append(ray.get(arg)) + else: + new_args.append(copy.deepcopy(arg)) + + new_kwargs = {} + for k, v in kwargs.items(): + if isinstance(v, ObjectID): + new_kwargs[k] = ray.get(v) + else: + new_kwargs[k] = copy.deepcopy(v) + try: - results = function(*copy.deepcopy(args), **copy.deepcopy(kwargs)) + results = function(*new_args, **new_kwargs) if num_return_vals == 1: - object_ids[0].value = results + return_ids[0].value = results else: - for object_id, result in zip(object_ids, results): + for object_id, result in zip(return_ids, results): object_id.value = result except Exception as e: backtrace = format_error_message(traceback.format_exc()) task_error = RayTaskError(function_name, backtrace, e.__class__) - for object_id in object_ids: + for object_id in return_ids: object_id.value = task_error - return object_ids + return return_ids def put_object(self, value): """Store an object in the emulated object store.
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -2227,6 +2227,17 @@ def function2(self): _ = RemoteActor2.remote() assert ray.get(actor1.function1.remote()) == 0 + # Test passing ObjectIDs. + @ray.remote + def direct_dep(input): + return input + + @ray.remote + def indirect_dep(input): + return ray.get(direct_dep.remote(input[0])) + + assert ray.get(indirect_dep.remote(["hello"])) == "hello" + def test_resource_constraints(shutdown_only): num_workers = 20
Passing ObjectID as a function argument in local_mode is broken ### System information - **OS Platform and Distribution**: Ubuntu 18.04 - **Ray installed from (source or binary)**: binary - **Ray version**: 0.8.0.dev6 - **Python version**: 3.7 - **Exact command to reproduce**: see below ### Describe the problem The argument passing behavior with local_mode=True vs False seems to be different. When I run the code snippet below: ```import ray ray.init(local_mode=True) # Replace with False to get a working example @ray.remote def remote_function(x): obj = x['a'] return ray.get(obj) a = ray.put(42) d = {'a': a} result = remote_function.remote(d) print(ray.get(result)) ``` With local_mode=False I get output `42`, as expected. With local_mode=True I get the following error: ``` Traceback (most recent call last): File "/home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py", line 13, in <module> print(ray.get(result)) File "/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/worker.py", line 2194, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(KeyError): /home/alex/miniconda3/envs/doom-rl/bin/python /home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py (pid=2449, ip=10.136.109.38) File "/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/local_mode_manager.py", line 55, in execute results = function(*copy.deepcopy(args)) File "/home/alex/all/projects/doom-neurobot/playground/ray_local_mode_bug.py", line 7, in remote_function return ray.get(obj) File "/home/alex/miniconda3/envs/doom-rl/lib/python3.7/site-packages/ray/local_mode_manager.py", line 105, in get_objects raise KeyError("Value for {} not found".format(object_id)) KeyError: 'Value for LocalModeObjectID(89f92e430883458c8107c10ed53eb35b26099831) not found' ``` It looks like the LocalObjectID instance inside `d` loses it's field `value` when it gets deep copied during the "remote" function call (currently it's `local_mode_manager.py:55`). It's hard to tell why exactly that happens, looks like a bug.
Indeed, this example seems to confirm that it might be a `deepcopy` problem: ``` import ray ray.init(local_mode=True) a = ray.put(42) from copy import deepcopy a_copy = deepcopy(a) print(ray.get(a)) print(ray.get(a_copy)) ``` I get `42` printed and then an error upon attempting to `ray.get(a_copy)`. Is this intended? Related to #5853? cc @edoakes It is related, and I agree that local_mode should mimic normal operation as closely as possible. Although probably there's an easier fix for this small issue. Fixed it temporarily by implementing `__deepcopy__` on `LocalModeObjectID`: ``` def __deepcopy__(self, memodict={}): obj_copy = copy.copy(self) content_copy = copy.deepcopy(self.__dict__, memodict) obj_copy.__dict__.update(content_copy) return obj_copy ```
2019-11-15T20:14:50
ray-project/ray
6,233
ray-project__ray-6233
[ "6294" ]
2797c11b6983e55081733c8c498eea4290a155ea
diff --git a/python/setup.py b/python/setup.py --- a/python/setup.py +++ b/python/setup.py @@ -3,6 +3,7 @@ from __future__ import print_function import glob +from itertools import chain import os import re import shutil @@ -82,6 +83,8 @@ "tune": ["tabulate"], } +extras["all"] = list(set(chain.from_iterable(extras.values()))) + class build_ext(_build_ext.build_ext): def run(self):
Kubernetes Docker Container untagged The container used by the Kubernetes manifests (rayproject/autoscaler) doesn't have any useful tags. Furthermore, the contents of the container are actually built from github.com/edoakes/ray and the container is two months old. It might be better to properly tag the containers with versions and ensure they are built from the main repo (by Travis) so that there aren't small differences in the code and it stays up to date.
2019-11-21T23:07:22
ray-project/ray
6,258
ray-project__ray-6258
[ "6091" ]
9f0d005ce62e5450f3d58167245769817e8d0295
diff --git a/rllib/examples/custom_keras_model.py b/rllib/examples/custom_keras_model.py --- a/rllib/examples/custom_keras_model.py +++ b/rllib/examples/custom_keras_model.py @@ -13,12 +13,14 @@ from ray.rllib.models.tf.tf_modelv2 import TFModelV2 from ray.rllib.agents.dqn.distributional_q_model import DistributionalQModel from ray.rllib.utils import try_import_tf +from ray.rllib.models.tf.visionnet_v2 import VisionNetwork as MyVisionNetwork tf = try_import_tf() parser = argparse.ArgumentParser() parser.add_argument("--run", type=str, default="DQN") # Try PG, PPO, DQN parser.add_argument("--stop", type=int, default=200) +parser.add_argument("--use_vision_network", action="store_true") class MyKerasModel(TFModelV2): @@ -90,13 +92,18 @@ def forward(self, input_dict, state, seq_lens): if __name__ == "__main__": ray.init() args = parser.parse_args() - ModelCatalog.register_custom_model("keras_model", MyKerasModel) - ModelCatalog.register_custom_model("keras_q_model", MyKerasQModel) + ModelCatalog.register_custom_model( + "keras_model", MyVisionNetwork + if args.use_vision_network else MyKerasModel) + ModelCatalog.register_custom_model( + "keras_q_model", MyVisionNetwork + if args.use_vision_network else MyKerasQModel) tune.run( args.run, stop={"episode_reward_mean": args.stop}, config={ - "env": "CartPole-v0", + "env": "BreakoutNoFrameskip-v4" + if args.use_vision_network else "CartPole-v0", "num_gpus": 0, "model": { "custom_model": "keras_q_model" diff --git a/rllib/models/catalog.py b/rllib/models/catalog.py --- a/rllib/models/catalog.py +++ b/rllib/models/catalog.py @@ -257,12 +257,11 @@ def get_model_v2(obs_space, model_cls = _global_registry.get(RLLIB_MODEL, model_config["custom_model"]) if issubclass(model_cls, ModelV2): - if model_interface and not issubclass(model_cls, - model_interface): - raise ValueError("The given model must subclass", - model_interface) - if framework == "tf": + logger.info("Wrapping {} as {}".format( + model_cls, model_interface)) + model_cls = ModelCatalog._wrap_if_needed( + model_cls, model_interface) created = set() # Track and warn if vars were created but not registered
DQN does not allow custom models ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: source - **Ray version**: 0.8.0.dev6 - **Python version**: 3.7.5 - **Exact command to reproduce**: The following code tries to set built-in VisionNetwork from TF as custom model and it errors out as described below. However, the code succeeds if custom model was not set in which case exact same VisionNetwork gets selected automatically by [_get_v2_model](https://github.com/ray-project/ray/blob/master/rllib/models/catalog.py#L517). The cause of this issue is explained below however I'm not sure about the fix. ``` import ray from ray.rllib.agents.dqn import DQNTrainer from ray.rllib.models import ModelCatalog from ray.rllib.models.tf.visionnet_v2 import VisionNetwork ModelCatalog.register_custom_model("my_model", VisionNetwork) config = {'model': { "custom_model": "my_model", "custom_options": {}, # extra options to pass to your model }} ray.init() agent = DQNTrainer(config=config, env="BreakoutNoFrameskip-v4") ``` ### Describe the problem Current code in master is not allowing the use of custom models in DQN. When trying to use custom model (either for TF or PyTorch), error is thrown indicating that model has not been subclassed from `DistributionalQModel`. This happens even when custom model is set to simply `ray.rllib.models.tf.visionnet_v2.VisionNetwork`. Error message: ``` 'The given model must subclass', <class 'ray.rllib.agents.dqn.distributional_q_model.DistributionalQModel'>) ``` ### Source code / logs Cause of this issue is [this check](https://github.com/ray-project/ray/blob/master/rllib/models/catalog.py#L262). Notice that this check is only done if custom_model is set. Apparently built-in models don't subclass `DistributionalQModel` either however as this check is not applied to built-in models they work fine.
There's a bit of code below that check that auto-wraps the default model in the interface. I'm open to auto-wrapping custom models as well if you want to make a patch. Why not instead subclass the right model class though? It makes the behaviour a bit clearer, I think: https://github.com/ray-project/ray/blob/master/rllib/examples/custom_keras_model.py#L59 @sytelus Hey, I ran into this exact issue a few days back and all I did was subclass the right Model, and everything works as expected. Copy-paste this Model code below. `
ModelCatalog.register_custom_model("NatureCNN", VisionNetwork)` ``` But I think this should be implemented in the library by default. I see, that makes sense as I guess it's the expected behaviour. Cc @AmeerHajAli I think this would be a good issue to get started on if you're interested. Yeah, I think rllib shouldn't make distinction between built-in models and custom model. If it wraps up internal models then it should probably do so for custom models as well. That sounds good!
2019-11-24T12:02:55
ray-project/ray
6,320
ray-project__ray-6320
[ "6228" ]
b8669bc06c4535e1715b3a84ad193cdfa0e237f3
diff --git a/python/ray/tune/ray_trial_executor.py b/python/ray/tune/ray_trial_executor.py --- a/python/ray/tune/ray_trial_executor.py +++ b/python/ray/tune/ray_trial_executor.py @@ -8,6 +8,7 @@ import random import time import traceback +from contextlib import contextmanager import ray from ray.exceptions import RayTimeoutError @@ -97,8 +98,9 @@ def _setup_remote_runner(self, trial, reuse_allowed): if self._cached_actor: logger.debug("Cannot reuse cached runner {} for new trial".format( self._cached_actor)) - self._cached_actor.stop.remote() - self._cached_actor.__ray_terminate__.remote() + with self._change_working_directory(trial): + self._cached_actor.stop.remote() + self._cached_actor.__ray_terminate__.remote() self._cached_actor = None cls = ray.remote( @@ -128,7 +130,9 @@ def logger_creator(config): } if issubclass(trial.get_trainable_cls(), DurableTrainable): kwargs["remote_checkpoint_dir"] = trial.remote_checkpoint_dir - return cls.remote(**kwargs) + + with self._change_working_directory(trial): + return cls.remote(**kwargs) def _train(self, trial): """Start one iteration of training and save remote id.""" @@ -148,7 +152,8 @@ def _train(self, trial): return assert trial.status == Trial.RUNNING, trial.status - remote = trial.runner.train.remote() + with self._change_working_directory(trial): + remote = trial.runner.train.remote() # Local Mode if isinstance(remote, dict): @@ -216,8 +221,9 @@ def _stop_trial(self, trial, error=False, error_msg=None, self._cached_actor = trial.runner else: logger.debug("Trial %s: Destroying actor.", trial) - trial.runner.stop.remote() - trial.runner.__ray_terminate__.remote() + with self._change_working_directory(trial): + trial.runner.stop.remote() + trial.runner.__ray_terminate__.remote() except Exception: logger.exception("Trial %s: Error stopping runner.", trial) self.set_status(trial, Trial.ERROR) @@ -299,14 +305,16 @@ def reset_trial(self, trial, new_config, new_experiment_tag): trial.experiment_tag = new_experiment_tag trial.config = new_config trainable = trial.runner - with warn_if_slow("reset_config"): - try: - reset_val = ray.get( - trainable.reset_config.remote(new_config), - DEFAULT_GET_TIMEOUT) - except RayTimeoutError: - logger.exception("Trial %s: reset_config timed out.", trial) - return False + with self._change_working_directory(trial): + with warn_if_slow("reset_config"): + try: + reset_val = ray.get( + trainable.reset_config.remote(new_config), + DEFAULT_GET_TIMEOUT) + except RayTimeoutError: + logger.exception("Trial %s: reset_config timed out.", + trial) + return False return reset_val def get_running_trials(self): @@ -562,14 +570,16 @@ def save(self, trial, storage=Checkpoint.PERSISTENT, result=None): Checkpoint future, or None if an Exception occurs. """ result = result or trial.last_result - if storage == Checkpoint.MEMORY: - value = trial.runner.save_to_object.remote() - checkpoint = Checkpoint(storage, value, result) - else: - with warn_if_slow("save_checkpoint_to_storage"): - # TODO(ujvl): Make this asynchronous. - value = ray.get(trial.runner.save.remote()) + + with self._change_working_directory(trial): + if storage == Checkpoint.MEMORY: + value = trial.runner.save_to_object.remote() checkpoint = Checkpoint(storage, value, result) + else: + with warn_if_slow("save_checkpoint_to_storage"): + # TODO(ujvl): Make this asynchronous. 
+ value = ray.get(trial.runner.save.remote()) + checkpoint = Checkpoint(storage, value, result) with warn_if_slow("on_checkpoint", DEFAULT_GET_TIMEOUT) as profile: try: trial.on_checkpoint(checkpoint) @@ -605,18 +615,21 @@ def restore(self, trial, checkpoint=None): logger.debug("Trial %s: Attempting restore from object", trial) # Note that we don't store the remote since in-memory checkpoints # don't guarantee fault tolerance and don't need to be waited on. - trial.runner.restore_from_object.remote(value) + with self._change_working_directory(trial): + trial.runner.restore_from_object.remote(value) else: logger.debug("Trial %s: Attempting restore from %s", trial, value) if issubclass(trial.get_trainable_cls(), DurableTrainable): - remote = trial.runner.restore.remote(value) + with self._change_working_directory(trial): + remote = trial.runner.restore.remote(value) elif trial.sync_on_checkpoint: # This provides FT backwards compatibility in the # case where a DurableTrainable is not provided. logger.warning("Trial %s: Reading checkpoint into memory.", trial) data_dict = TrainableUtil.pickle_checkpoint(value) - remote = trial.runner.restore_from_object.remote(data_dict) + with self._change_working_directory(trial): + remote = trial.runner.restore_from_object.remote(data_dict) else: raise AbortTrialExecution( "Pass in `sync_on_checkpoint=True` for driver-based trial" @@ -633,9 +646,10 @@ def export_trial_if_needed(self, trial): A dict that maps ExportFormats to successfully exported models. """ if trial.export_formats and len(trial.export_formats) > 0: - return ray.get( - trial.runner.export_model.remote(trial.export_formats), - DEFAULT_GET_TIMEOUT) + with self._change_working_directory(trial): + return ray.get( + trial.runner.export_model.remote(trial.export_formats), + DEFAULT_GET_TIMEOUT) return {} def has_gpus(self): @@ -643,6 +657,23 @@ def has_gpus(self): self._update_avail_resources() return self._avail_resources.gpu > 0 + @contextmanager + def _change_working_directory(self, trial): + """Context manager changing working directory to trial logdir. + Used in local mode. + + For non-local mode it is no-op. + """ + if ray.worker._mode() == ray.worker.LOCAL_MODE: + old_dir = os.getcwd() + try: + os.chdir(trial.logdir) + yield + finally: + os.chdir(old_dir) + else: + yield + def _to_gb(n_bytes): return round(n_bytes / (1024**3), 2)
[tune] Working directory not set to logdir in local_mode ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 18.04 - **Ray installed from (source or binary)**: binary (Nightlies) - **Ray version**: `'0.8.0.dev6'` - **Python version**: 3.7.3, conda ### Describe the problem `ray.tune` runs particular trials with working dir set to respective logdir. This is not the case when script is run in ray `local_mode`. This is a problem when training function produces artifacts using relative paths. Although it is possible to get `logdir` in training function and manually obtain absolute path, it does not appear to be enforced in documentation (see docstring of `ray.tune.trainable.Trainable.logdir` or `python/ray/tune/tests/tutorial.py#L32`). ### How to reproduce File `example.py`: ```python from os import environ import ray from ray import tune def example_train(config, reporter): with open('foo.txt', 'w') as fp: fp.write(f'bar: {config}') reporter(acc=config['x'] / 2) local_mode = bool(int(environ.get('RAY_LOCAL', '0'))) ray.init(local_mode=local_mode) tune.run(example_train, config={'x': tune.grid_search([1, 2])}) ``` *Run in default mode* ```bash rm -R ~/ray_results/example_train # ignore error python example.py ls ~/ray_results/example_train/*/foo.txt # notice presence of foo.txt files in logdirs ``` *Run in local mode* ```bash rm -R ~/ray_results/example_train # ignore error RAY_LOCAL=1 python example.py ls ~/ray_results/example_train/*/foo.txt # foo.txt files are *missing* in logdirs ls foo.txt # it is in project root ``` Related issue: #2822
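Until local_mode matches the normal per-trial working directory behaviour, one hedged workaround is to avoid relative paths in the training function; the `artifact_dir` config key and the `getattr` fallback below are illustrative, not an official API:

```python
import os

from ray import tune


def example_train(config, reporter):
    # Prefer the trial logdir when the reporter exposes it (availability
    # depends on the tune version), otherwise fall back to a directory
    # passed explicitly through the config.
    out_dir = getattr(reporter, "logdir", None) or config.get(
        "artifact_dir", os.getcwd())
    with open(os.path.join(out_dir, "foo.txt"), "w") as fp:
        fp.write("bar: {}".format(config))
    reporter(acc=config["x"] / 2)


tune.run(example_train,
         config={"x": tune.grid_search([1, 2]), "artifact_dir": "/tmp"})
```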
2019-12-01T17:06:56
ray-project/ray
6,395
ray-project__ray-6395
[ "6369" ]
6272907a57599f42a9effe685a423e1333c623dd
diff --git a/python/ray/tune/progress_reporter.py b/python/ray/tune/progress_reporter.py --- a/python/ray/tune/progress_reporter.py +++ b/python/ray/tune/progress_reporter.py @@ -1,5 +1,7 @@ from __future__ import print_function +import collections + from ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS, TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL) @@ -25,6 +27,8 @@ class ProgressReporter(object): + # TODO(ujvl): Expose ProgressReporter in tune.run for custom reporting. + def report(self, trial_runner): """Reports progress across all trials of the trial runner. @@ -49,7 +53,8 @@ def report(self, trial_runner): "== Status ==", memory_debug_str(), trial_runner.debug_string(delim=delim), - trial_progress_str(trial_runner.get_trials(), fmt="html") + trial_progress_str(trial_runner.get_trials(), fmt="html"), + trial_errors_str(trial_runner.get_trials(), fmt="html"), ] from IPython.display import clear_output from IPython.core.display import display, HTML @@ -64,7 +69,8 @@ def report(self, trial_runner): "== Status ==", memory_debug_str(), trial_runner.debug_string(), - trial_progress_str(trial_runner.get_trials()) + trial_progress_str(trial_runner.get_trials()), + trial_errors_str(trial_runner.get_trials()), ] print("\n".join(messages) + "\n") @@ -90,7 +96,7 @@ def memory_debug_str(): "(or ray[debug]) to resolve)") -def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=100): +def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=20): """Returns a human readable message for printing to the console. This contains a table where each row represents a trial, its parameters @@ -109,52 +115,116 @@ def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=100): return delim.join(messages) num_trials = len(trials) - trials_per_state = {} + trials_by_state = collections.defaultdict(list) for t in trials: - trials_per_state[t.status] = trials_per_state.get(t.status, 0) + 1 - messages.append("Number of trials: {} ({})".format(num_trials, - trials_per_state)) + trials_by_state[t.status].append(t) + for local_dir in sorted({t.local_dir for t in trials}): messages.append("Result logdir: {}".format(local_dir)) + num_trials_strs = [ + "{} {}".format(len(trials_by_state[state]), state) + for state in trials_by_state + ] + messages.append("Number of trials: {} ({})".format( + num_trials, ", ".join(num_trials_strs))) + if num_trials > max_rows: - overflow = num_trials - max_rows # TODO(ujvl): suggestion for users to view more rows. - messages.append("Table truncated to {} rows ({} overflow).".format( - max_rows, overflow)) + trials_by_state_trunc = _fair_filter_trials(trials_by_state, max_rows) + trials = [] + overflow_strs = [] + for state in trials_by_state: + trials += trials_by_state_trunc[state] + overflow = len(trials_by_state[state]) - len( + trials_by_state_trunc[state]) + overflow_strs.append("{} {}".format(overflow, state)) + # Build overflow string. + overflow = num_trials - max_rows + overflow_str = ", ".join(overflow_strs) + messages.append("Table truncated to {} rows. {} trials ({}) not " + "shown.".format(max_rows, overflow, overflow_str)) # Pre-process trials to figure out what columns to show. keys = list(metrics or DEFAULT_PROGRESS_KEYS) keys = [k for k in keys if any(t.last_result.get(k) for t in trials)] - # Build trial rows. 
- trial_table = [] params = list(set().union(*[t.evaluated_params for t in trials])) - for trial in trials[:min(num_trials, max_rows)]: - trial_table.append(_get_trial_info(trial, params, keys)) + trial_table = [_get_trial_info(trial, params, keys) for trial in trials] # Parse columns. parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys] columns = ["Trial name", "status", "loc"] columns += params + parsed_columns messages.append( tabulate(trial_table, headers=columns, tablefmt=fmt, showindex=False)) + return delim.join(messages) + - # Build trial error rows. +def trial_errors_str(trials, fmt="psql", max_rows=20): + """Returns a readable message regarding trial errors. + + Args: + trials (List[Trial]): List of trials to get progress string for. + fmt (str): Output format (see tablefmt in tabulate API). + max_rows (int): Maximum number of rows in the error table. + """ + messages = [] failed = [t for t in trials if t.error_file] - if len(failed) > 0: - messages.append("Number of errored trials: {}".format(len(failed))) + num_failed = len(failed) + if num_failed > 0: + messages.append("Number of errored trials: {}".format(num_failed)) + if num_failed > max_rows: + messages.append("Table truncated to {} rows ({} overflow)".format( + max_rows, num_failed - max_rows)) error_table = [] - for trial in failed: + for trial in failed[:max_rows]: row = [str(trial), trial.num_failures, trial.error_file] error_table.append(row) columns = ["Trial name", "# failures", "error file"] messages.append( tabulate( error_table, headers=columns, tablefmt=fmt, showindex=False)) - + delim = "<br>" if fmt == "html" else "\n" return delim.join(messages) +def _fair_filter_trials(trials_by_state, max_trials): + """Filters trials such that each state is represented fairly. + + The oldest trials are truncated if necessary. + + Args: + trials_by_state (Dict[str, List[Trial]]: Trials by state. + max_trials (int): Maximum number of trials to return. + Returns: + Dict mapping state to List of fairly represented trials. + """ + num_trials_by_state = collections.defaultdict(int) + no_change = False + # Determine number of trials to keep per state. + while max_trials > 0 and not no_change: + no_change = True + for state in trials_by_state: + if num_trials_by_state[state] < len(trials_by_state[state]): + no_change = False + max_trials -= 1 + num_trials_by_state[state] += 1 + # Sort by start time, descending. + sorted_trials_by_state = { + state: sorted( + trials_by_state[state], + reverse=True, + key=lambda t: t.start_time if t.start_time else float("-inf")) + for state in trials_by_state + } + # Truncate oldest trials. 
+ filtered_trials = { + state: sorted_trials_by_state[state][:num_trials_by_state[state]] + for state in trials_by_state + } + return filtered_trials + + def _get_trial_info(trial, parameters, metrics): """Returns the following information about a trial: diff --git a/python/ray/tune/trial.py b/python/ray/tune/trial.py --- a/python/ray/tune/trial.py +++ b/python/ray/tune/trial.py @@ -162,6 +162,7 @@ def __init__(self, self.export_formats = export_formats self.status = Trial.PENDING + self.start_time = None self.logdir = None self.runner = None self.result_logger = None @@ -251,6 +252,12 @@ def set_location(self, location): """Sets the location of the trial.""" self.address = location + def set_status(self, status): + """Sets the status of the trial.""" + if status == Trial.RUNNING and self.start_time is None: + self.start_time = time.time() + self.status = status + def close_logger(self): """Closes logger.""" if self.result_logger: diff --git a/python/ray/tune/trial_executor.py b/python/ray/tune/trial_executor.py --- a/python/ray/tune/trial_executor.py +++ b/python/ray/tune/trial_executor.py @@ -41,7 +41,7 @@ def set_status(self, trial, status): """ logger.debug("Trial %s: Changing status from %s to %s.", trial, trial.status, status) - trial.status = status + trial.set_status(status) if status in [Trial.TERMINATED, Trial.ERROR]: self.try_checkpoint_metadata(trial)
diff --git a/python/ray/tune/tests/test_progress_reporter.py b/python/ray/tune/tests/test_progress_reporter.py new file mode 100644 --- /dev/null +++ b/python/ray/tune/tests/test_progress_reporter.py @@ -0,0 +1,59 @@ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import collections +import sys +import time +import unittest + +from ray.tune.trial import Trial +from ray.tune.progress_reporter import _fair_filter_trials + +if sys.version_info >= (3, 3): + from unittest.mock import MagicMock +else: + from mock import MagicMock + + +class ProgressReporterTest(unittest.TestCase): + def mock_trial(self, status, start_time): + mock = MagicMock() + mock.status = status + mock.start_time = start_time + return mock + + def testFairFilterTrials(self): + """Tests that trials are represented fairly.""" + trials_by_state = collections.defaultdict(list) + # States for which trials are under and overrepresented + states_under = (Trial.PAUSED, Trial.ERROR) + states_over = (Trial.PENDING, Trial.RUNNING, Trial.TERMINATED) + + max_trials = 13 + num_trials_under = 2 # num of trials for each underrepresented state + num_trials_over = 10 # num of trials for each overrepresented state + + for state in states_under: + for _ in range(num_trials_under): + trials_by_state[state].append( + self.mock_trial(state, time.time())) + for state in states_over: + for _ in range(num_trials_over): + trials_by_state[state].append( + self.mock_trial(state, time.time())) + + filtered_trials_by_state = _fair_filter_trials( + trials_by_state, max_trials=max_trials) + for state in trials_by_state: + if state in states_under: + expected_num_trials = num_trials_under + else: + expected_num_trials = (max_trials - num_trials_under * + len(states_under)) / len(states_over) + state_trials = filtered_trials_by_state[state] + self.assertEqual(len(state_trials), expected_num_trials) + # Make sure trials are sorted newest-first within state. + for i in range(len(state_trials) - 1): + self.assertGreaterEqual(state_trials[i].start_time, + state_trials[i + 1].start_time)
[tune] Overflow table only shows TERMINATED runs ``` Memory usage on this node: 34.4/251.9 GiB Using FIFO scheduling algorithm. Resources requested: 63/64 CPUs, 0/0 GPUs, 0.0/221.19 GiB heap, 0.0/12.84 GiB objects Number of trials: 1000 ({'TERMINATED': 252, 'RUNNING': 63, 'PENDING': 685}) Result logdir: /home/ubuntu/ray_results/default Table truncated to 100 rows (900 overflow). +-------------------------+------------+-------+--------+------------------+-------------+----------+ | Trial name | status | loc | iter | total time (s) | timesteps | reward | |-------------------------+------------+-------+--------+------------------+-------------+----------| | PG_CartPole-v0_61e5eca8 | TERMINATED | | 1 | 20.0927 | 200 | 21.125 | | PG_CartPole-v0_61e6577e | TERMINATED | | 1 | 29.8103 | 200 | 23.75 | | PG_CartPole-v0_61e6ba48 | TERMINATED | | 1 | 30.2616 | 200 | 16.4545 | | PG_CartPole-v0_61e716dc | TERMINATED | | 1 | 29.7899 | 200 | 27.1429 | | PG_CartPole-v0_61e7783e | TERMINATED | | 1 | 29.6441 | 200 | 30.3333 | ``` This is a regression from the previous behaviour, which would show the last N examples of each state.
cc @richardliaw @ujvl Got it, will address some time this week.
2019-12-08T06:14:29
ray-project/ray
6,450
ray-project__ray-6450
[ "6431" ]
64d8626d6d413bb2f4a266205acd538f506ab473
diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -431,6 +431,7 @@ def sigterm_handler(signum, frame): signal.signal(signal.SIGTERM, sigterm_handler) self.core_worker.run_task_loop() + sys.exit(0) def get_gpu_ids(): @@ -834,6 +835,12 @@ def shutdown(exiting_interpreter=False): disconnect(exiting_interpreter) + # We need to destruct the core worker here because after this function, + # we will tear down any processes spawned by ray.init() and the background + # IO thread in the core worker doesn't currently handle that gracefully. + if hasattr(global_worker, "core_worker"): + del global_worker.core_worker + # Disconnect global state from GCS. ray.state.state.disconnect() @@ -843,7 +850,7 @@ def shutdown(exiting_interpreter=False): _global_node.kill_all_processes(check_alive=False, allow_graceful=True) _global_node = None - # TODO(rkn): Instead of manually reseting some of the worker fields, we + # TODO(rkn): Instead of manually resetting some of the worker fields, we # should simply set "global_worker" to equal "None" or something like that. global_worker.set_mode(None) global_worker._post_get_hooks = [] @@ -1332,12 +1339,6 @@ def disconnect(exiting_interpreter=False): worker.cached_functions_to_run = [] worker.serialization_context_map.clear() - # We need to destruct the core worker here because after this function, - # we will tear down any processes spawned by ray.init() and the background - # threads in the core worker don't currently handle that gracefully. - if hasattr(worker, "core_worker"): - del worker.core_worker - @contextmanager def _changeproctitle(title, next_title):
diff --git a/python/ray/tests/test_tempfile.py b/python/ray/tests/test_tempfile.py --- a/python/ray/tests/test_tempfile.py +++ b/python/ray/tests/test_tempfile.py @@ -37,13 +37,12 @@ def test_conn_cluster(): "temp_dir must not be provided.") -def test_tempdir(): +def test_tempdir(shutdown_only): shutil.rmtree("/tmp/ray", ignore_errors=True) ray.init(temp_dir="/tmp/i_am_a_temp_dir") assert os.path.exists( "/tmp/i_am_a_temp_dir"), "Specified temp dir not found." assert not os.path.exists("/tmp/ray"), "Default temp dir should not exist." - ray.shutdown() shutil.rmtree("/tmp/i_am_a_temp_dir", ignore_errors=True) @@ -57,7 +56,7 @@ def test_tempdir_commandline(): shutil.rmtree("/tmp/i_am_a_temp_dir2", ignore_errors=True) -def test_raylet_socket_name(): +def test_raylet_socket_name(shutdown_only): ray.init(raylet_socket_name="/tmp/i_am_a_temp_socket") assert os.path.exists( "/tmp/i_am_a_temp_socket"), "Specified socket path not found." @@ -77,7 +76,7 @@ def test_raylet_socket_name(): pass # It could have been removed by Ray. -def test_temp_plasma_store_socket(): +def test_temp_plasma_store_socket(shutdown_only): ray.init(plasma_store_socket_name="/tmp/i_am_a_temp_socket") assert os.path.exists( "/tmp/i_am_a_temp_socket"), "Specified socket path not found." @@ -97,7 +96,7 @@ def test_temp_plasma_store_socket(): pass # It could have been removed by Ray. -def test_raylet_tempfiles(): +def test_raylet_tempfiles(shutdown_only): ray.init(num_cpus=0) node = ray.worker._global_node top_levels = set(os.listdir(node.get_session_dir_path())) @@ -132,15 +131,13 @@ def test_raylet_tempfiles(): socket_files = set(os.listdir(node.get_sockets_dir_path())) assert socket_files == {"plasma_store", "raylet"} - ray.shutdown() -def test_tempdir_privilege(): +def test_tempdir_privilege(shutdown_only): os.chmod("/tmp/ray", 0o000) ray.init(num_cpus=1) session_dir = ray.worker._global_node.get_session_dir_path() assert os.path.exists(session_dir), "Specified socket path not found." - ray.shutdown() def test_session_dir_uniqueness():
Fatal Python error: PyImport_GetModuleDict: no module dictionary <!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant--> ### What is the problem? On actor shutdown, we get a lot of warnings like this: ``` (pid=58704) Current thread 0x00007f61957f3700 (most recent call first): (pid=58704) File "/home/ubuntu/ray/python/ray/worker.py", line 433 in main_loop (pid=58704) File "/home/ubuntu/ray/python/ray/workers/default_worker.py", line 99 in <module> (pid=58709) WARNING: Not monitoring node memory since `psutil` is not installed. Install this with `pip install psutil` (or ray[debug]) to enable debugging of memory-related crashes. (pid=58709) Fatal Python error: PyImport_GetModuleDict: no module dictionary! (pid=58709) (pid=58709) Current thread 0x00007f27c36a4700 (most recent call first): (pid=58709) File "/home/ubuntu/ray/python/ray/worker.py", line 433 in main_loop (pid=58709) File "/home/ubuntu/ray/python/ray/workers/default_worker.py", line 99 in <module> (pid=58659) WARNING: Not monitoring node memory since `psutil` is not installed. Install this with `pip install psutil` (or ray[debug]) to enable debugging of memory-related crashes. (pid=58659) Fatal Python error: PyImport_GetModuleDict: no module dictionary! (pid=58659) (pid=58659) Current thread 0x00007f2b0d61f700 (most recent call first): (pid=58659) File "/home/ubuntu/ray/python/ray/worker.py", line 433 in main_loop (pid=58659) File "/home/ubuntu/ray/python/ray/workers/default_worker.py", line 99 in <module> (pid=58693) Fatal Python error: PyImport_GetModuleDict: no module dictionary! (pid=58693) ``` ### Reproduction Run any RLlib job. cc @edoakes
Manually testing indicates it was introduced by: https://github.com/ray-project/ray/pull/5783 Does it also only happen on CUDA instances? I can't reproduce this locally, only on EC2. The Python versions are both 3.6. I'm able to reproduce it on my laptop. Googling around shows that it might be caused by referencing imported modules inside of threads during shutdown. There are a few separate issues people have seen that look similar to: https://jira.mongodb.org/browse/SERVER-22142
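A small, hedged illustration of the general pattern implied by that hint: background threads that touch imported modules should be stopped and joined before the interpreter starts tearing modules down, rather than left racing shutdown:

```python
import threading
import time

stop = threading.Event()


def background_loop():
    while not stop.is_set():
        time.sleep(0.01)  # touches module-level state in `time` on every loop


t = threading.Thread(target=background_loop)
t.start()

# Explicitly stopping and joining before exit avoids the race where a
# still-running thread uses modules during interpreter shutdown.
stop.set()
t.join()
```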
2019-12-12T06:04:49
ray-project/ray
6,470
ray-project__ray-6470
[ "6030" ]
d8eeb9641314740572e81f9836cbce3e5b8f2b73
diff --git a/rllib/examples/rl_attention.py b/rllib/examples/rl_attention.py new file mode 100644 --- /dev/null +++ b/rllib/examples/rl_attention.py @@ -0,0 +1,173 @@ +import argparse + +import gym + +import numpy as np + +import ray +from ray import tune + +from ray.tune import registry + +from ray.rllib import models +from ray.rllib.utils import try_import_tf +from ray.rllib.models.tf import attention +from ray.rllib.models.tf import recurrent_tf_modelv2 +from ray.rllib.examples.custom_keras_rnn_model import RepeatAfterMeEnv +from ray.rllib.examples.custom_keras_rnn_model import RepeatInitialEnv + +tf = try_import_tf() + +parser = argparse.ArgumentParser() +parser.add_argument("--run", type=str, default="PPO") +parser.add_argument("--env", type=str, default="RepeatAfterMeEnv") +parser.add_argument("--stop", type=int, default=90) +parser.add_argument("--num-cpus", type=int, default=0) + + +class OneHot(gym.Wrapper): + + def __init__(self, env): + super(OneHot, self).__init__(env) + self.observation_space = gym.spaces.Box(0., 1., + (env.observation_space.n,)) + + def reset(self, **kwargs): + obs = self.env.reset(**kwargs) + return self._encode_obs(obs) + + def step(self, action): + obs, reward, done, info = self.env.step(action) + return self._encode_obs(obs), reward, done, info + + def _encode_obs(self, obs): + new_obs = np.ones(self.env.observation_space.n) + new_obs[obs] = 1.0 + return new_obs + + +class LookAndPush(gym.Env): + def __init__(self): + self.action_space = gym.spaces.Discrete(2) + self.observation_space = gym.spaces.Discrete(5) + self._state = None + self._case = None + + def reset(self): + self._state = 2 + self._case = np.random.choice(2) + return self._state + + def step(self, action): + assert self.action_space.contains(action) + + if self._state == 4: + if action and self._case: + return self._state, 10., True, {} + else: + return self._state, -10, True, {} + else: + if action: + if self._state == 0: + self._state = 2 + else: + self._state += 1 + elif self._state == 2: + self._state = self._case + + return self._state, -1, False, {} + + +class GRUTrXL(recurrent_tf_modelv2.RecurrentTFModelV2): + + def __init__(self, obs_space, action_space, num_outputs, model_config, + name): + super(GRUTrXL, self).__init__(obs_space, action_space, num_outputs, + model_config, name) + self.max_seq_len = model_config["max_seq_len"] + self.obs_dim = obs_space.shape[0] + input_layer = tf.keras.layers.Input( + shape=(self.max_seq_len, obs_space.shape[0]), + name="inputs", + ) + + trxl_out = attention.make_GRU_TrXL( + seq_length=model_config["max_seq_len"], + num_layers=model_config["custom_options"]["num_layers"], + attn_dim=model_config["custom_options"]["attn_dim"], + num_heads=model_config["custom_options"]["num_heads"], + head_dim=model_config["custom_options"]["head_dim"], + ff_hidden_dim=model_config["custom_options"]["ff_hidden_dim"], + )(input_layer) + + # Postprocess TrXL output with another hidden layer and compute values + logits = tf.keras.layers.Dense( + self.num_outputs, + activation=tf.keras.activations.linear, + name="logits")(trxl_out) + values_out = tf.keras.layers.Dense( + 1, activation=None, name="values")(trxl_out) + + self.trxl_model = tf.keras.Model( + inputs=[input_layer], + outputs=[logits, values_out], + ) + self.register_variables(self.trxl_model.variables) + self.trxl_model.summary() + + def forward_rnn(self, inputs, state, seq_lens): + state = state[0] + + # We assume state is the history of recent observations and append + # the current inputs to the end and 
only keep the most recent (up to + # max_seq_len). This allows us to deal with timestep-wise inference + # and full sequence training with the same logic. + state = tf.concat((state, inputs), axis=1)[:, -self.max_seq_len:] + logits, self._value_out = self.trxl_model(state) + + in_T = tf.shape(inputs)[1] + logits = logits[:, -in_T:] + self._value_out = self._value_out[:, -in_T:] + + return logits, [state] + + def get_initial_state(self): + return [np.zeros((self.max_seq_len, self.obs_dim), np.float32)] + + def value_function(self): + return tf.reshape(self._value_out, [-1]) + + +if __name__ == "__main__": + args = parser.parse_args() + ray.init(num_cpus=args.num_cpus or None) + models.ModelCatalog.register_custom_model("trxl", GRUTrXL) + registry.register_env("RepeatAfterMeEnv", lambda c: RepeatAfterMeEnv(c)) + registry.register_env("RepeatInitialEnv", lambda _: RepeatInitialEnv()) + registry.register_env("LookAndPush", lambda _: OneHot(LookAndPush())) + tune.run( + args.run, + stop={"episode_reward_mean": args.stop}, + config={ + "env": args.env, + "env_config": { + "repeat_delay": 2, + }, + "gamma": 0.99, + "num_workers": 0, + "num_envs_per_worker": 20, + "entropy_coeff": 0.001, + "num_sgd_iter": 5, + "vf_loss_coeff": 1e-5, + "model": { + "custom_model": "trxl", + "max_seq_len": 10, + "custom_options": { + "num_layers": 1, + "attn_dim": 10, + "num_heads": 1, + "head_dim": 10, + "ff_hidden_dim": 20, + }, + }, + }) diff --git a/rllib/examples/supervised_attention.py b/rllib/examples/supervised_attention.py new file mode 100644 --- /dev/null +++ b/rllib/examples/supervised_attention.py @@ -0,0 +1,81 @@ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import numpy as np + +from rllib.models.tf import attention +from ray.rllib.utils import try_import_tf + +tf = try_import_tf() + + +def bit_shift_generator(seq_length, shift, batch_size): + while True: + values = np.array([0., 1.], dtype=np.float32) + seq = np.random.choice(values, (batch_size, seq_length, 1)) + targets = np.squeeze(np.roll(seq, shift, axis=1).astype(np.int32)) + targets[:, :shift] = 0 + yield seq, targets + + +def make_model(seq_length, num_tokens, num_layers, attn_dim, num_heads, + head_dim, ff_hidden_dim): + + return tf.keras.Sequential(( + attention.make_TrXL(seq_length, num_layers, attn_dim, num_heads, + head_dim, ff_hidden_dim), + tf.keras.layers.Dense(num_tokens), + )) + + +def train_loss(targets, outputs): + loss = tf.nn.sparse_softmax_cross_entropy_with_logits( + labels=targets, logits=outputs) + return tf.reduce_mean(loss) + + +def train_bit_shift(seq_length, num_iterations, print_every_n): + + optimizer = tf.keras.optimizers.Adam(1e-3) + + model = make_model( + seq_length, + num_tokens=2, + num_layers=1, + attn_dim=10, + num_heads=5, + head_dim=20, + ff_hidden_dim=20, + ) + + shift = 10 + train_batch = 10 + test_batch = 100 + data_gen = bit_shift_generator( + seq_length, shift=shift, batch_size=train_batch) + test_gen = bit_shift_generator( + seq_length, shift=shift, batch_size=test_batch) + + @tf.function + def update_step(inputs, targets): + loss_fn = lambda: train_loss(targets, model(inputs)) + var_fn = lambda: model.trainable_variables + optimizer.minimize(loss_fn, var_fn) + + for i, (inputs, targets) in zip(range(num_iterations), data_gen): + update_step( + tf.convert_to_tensor(inputs), tf.convert_to_tensor(targets)) + + if i % print_every_n == 0: + test_inputs, test_targets = next(test_gen) + print(i, train_loss(test_targets, model(test_inputs))) 
+ + +if __name__ == "__main__": + tf.enable_eager_execution() + train_bit_shift( + seq_length=20, + num_iterations=2000, + print_every_n=200, + ) diff --git a/rllib/models/tf/attention.py b/rllib/models/tf/attention.py new file mode 100644 --- /dev/null +++ b/rllib/models/tf/attention.py @@ -0,0 +1,287 @@ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import numpy as np + +from ray.rllib.models.tf.tf_modelv2 import TFModelV2 +from ray.rllib.utils import try_import_tf + +tf = try_import_tf() + + +def relative_position_embedding(seq_length, out_dim): + inverse_freq = 1 / (10000**(tf.range(0, out_dim, 2.0) / out_dim)) + pos_offsets = tf.range(seq_length - 1., -1., -1.) + inputs = pos_offsets[:, None] * inverse_freq[None, :] + return tf.concat((tf.sin(inputs), tf.cos(inputs)), axis=-1) + + +def rel_shift(x): + # Transposed version of the shift approach implemented by Dai et al. 2019 + # https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/tf/model.py#L31 + x_size = tf.shape(x) + + x = tf.pad(x, [[0, 0], [0, 0], [1, 0], [0, 0]]) + x = tf.reshape(x, [x_size[0], x_size[2] + 1, x_size[1], x_size[3]]) + x = tf.slice(x, [0, 1, 0, 0], [-1, -1, -1, -1]) + x = tf.reshape(x, x_size) + + return x + + +class MultiHeadAttention(tf.keras.layers.Layer): + + def __init__(self, out_dim, num_heads, head_dim, **kwargs): + super(MultiHeadAttention, self).__init__(**kwargs) + + # no bias or non-linearity + self._num_heads = num_heads + self._head_dim = head_dim + self._qkv_layer = tf.keras.layers.Dense( + 3 * num_heads * head_dim, use_bias=False) + self._linear_layer = tf.keras.layers.TimeDistributed( + tf.keras.layers.Dense(out_dim, use_bias=False)) + + def call(self, inputs): + L = tf.shape(inputs)[1] # length of segment + H = self._num_heads # number of attention heads + D = self._head_dim # attention head dimension + + qkv = self._qkv_layer(inputs) + + queries, keys, values = tf.split(qkv, 3, -1) + queries = queries[:, -L:] # only query based on the segment + + queries = tf.reshape(queries, [-1, L, H, D]) + keys = tf.reshape(keys, [-1, L, H, D]) + values = tf.reshape(values, [-1, L, H, D]) + + score = tf.einsum("bihd,bjhd->bijh", queries, keys) + score = score / D ** 0.5 + + # causal mask of the same length as the sequence + mask = tf.sequence_mask(tf.range(1, L + 1), dtype=score.dtype) + mask = mask[None, :, :, None] + + masked_score = score * mask + 1e30 * (mask - 1.) 
+ wmat = tf.nn.softmax(masked_score, axis=2) + + out = tf.einsum("bijh,bjhd->bihd", wmat, values) + out = tf.reshape(out, tf.concat((tf.shape(out)[:2], [H * D]), axis=0)) + return self._linear_layer(out) + + +class RelativeMultiHeadAttention(tf.keras.layers.Layer): + def __init__(self, + out_dim, + num_heads, + head_dim, + rel_pos_encoder, + input_layernorm=False, + output_activation=None, + **kwargs): + super(RelativeMultiHeadAttention, self).__init__(**kwargs) + + # no bias or non-linearity + self._num_heads = num_heads + self._head_dim = head_dim + self._qkv_layer = tf.keras.layers.Dense( + 3 * num_heads * head_dim, use_bias=False) + self._linear_layer = tf.keras.layers.TimeDistributed( + tf.keras.layers.Dense( + out_dim, use_bias=False, activation=output_activation)) + + self._uvar = self.add_weight(shape=(num_heads, head_dim)) + self._vvar = self.add_weight(shape=(num_heads, head_dim)) + + self._pos_proj = tf.keras.layers.Dense( + num_heads * head_dim, use_bias=False) + self._rel_pos_encoder = rel_pos_encoder + + self._input_layernorm = None + if input_layernorm: + self._input_layernorm = tf.keras.layers.LayerNormalization(axis=-1) + + def call(self, inputs, memory=None): + L = tf.shape(inputs)[1] # length of segment + H = self._num_heads # number of attention heads + D = self._head_dim # attention head dimension + + # length of the memory segment + M = memory.shape[0] if memory is not None else 0 + + if memory is not None: + inputs = np.concatenate( + (tf.stop_gradient(memory), inputs), axis=1) + + if self._input_layernorm is not None: + inputs = self._input_layernorm(inputs) + + qkv = self._qkv_layer(inputs) + + queries, keys, values = tf.split(qkv, 3, -1) + queries = queries[:, -L:] # only query based on the segment + + queries = tf.reshape(queries, [-1, L, H, D]) + keys = tf.reshape(keys, [-1, L + M, H, D]) + values = tf.reshape(values, [-1, L + M, H, D]) + + rel = self._pos_proj(self._rel_pos_encoder) + rel = tf.reshape(rel, [L, H, D]) + + score = tf.einsum("bihd,bjhd->bijh", queries + self._uvar, keys) + pos_score = tf.einsum("bihd,jhd->bijh", queries + self._vvar, rel) + score = score + rel_shift(pos_score) + score = score / D**0.5 + + # causal mask of the same length as the sequence + mask = tf.sequence_mask(tf.range(M + 1, L + M + 1), dtype=score.dtype) + mask = mask[None, :, :, None] + + masked_score = score * mask + 1e30 * (mask - 1.) + wmat = tf.nn.softmax(masked_score, axis=2) + + out = tf.einsum("bijh,bjhd->bihd", wmat, values) + out = tf.reshape(out, tf.concat((tf.shape(out)[:2], [H * D]), axis=0)) + return self._linear_layer(out) + + +class PositionwiseFeedforward(tf.keras.layers.Layer): + + def __init__(self, out_dim, hidden_dim, output_activation=None, **kwargs): + super(PositionwiseFeedforward, self).__init__(**kwargs) + + self._hidden_layer = tf.keras.layers.Dense( + hidden_dim, + activation=tf.nn.relu, + ) + self._output_layer = tf.keras.layers.Dense( + out_dim, activation=output_activation) + + def call(self, inputs, **kwargs): + del kwargs + output = self._hidden_layer(inputs) + return self._output_layer(output) + + +class SkipConnection(tf.keras.layers.Layer): + """Skip connection layer. + + If no fan-in layer is specified, then this layer behaves as a regular + residual layer. 
+ """ + + def __init__(self, layer, fan_in_layer=None, **kwargs): + super(SkipConnection, self).__init__(**kwargs) + self._fan_in_layer = fan_in_layer + self._layer = layer + + def call(self, inputs, **kwargs): + del kwargs + outputs = self._layer(inputs) + if self._fan_in_layer is None: + outputs = outputs + inputs + else: + outputs = self._fan_in_layer((inputs, outputs)) + + return outputs + + +class GRUGate(tf.keras.layers.Layer): + + def __init__(self, init_bias=0., **kwargs): + super(GRUGate, self).__init__(**kwargs) + self._init_bias = init_bias + + def build(self, input_shape): + x_shape, y_shape = input_shape + if x_shape[-1] != y_shape[-1]: + raise ValueError( + "Both inputs to GRUGate must equal size last axis.") + + self._w_r = self.add_weight(shape=(y_shape[-1], y_shape[-1])) + self._w_z = self.add_weight(shape=(y_shape[-1], y_shape[-1])) + self._w_h = self.add_weight(shape=(y_shape[-1], y_shape[-1])) + self._u_r = self.add_weight(shape=(x_shape[-1], x_shape[-1])) + self._u_z = self.add_weight(shape=(x_shape[-1], x_shape[-1])) + self._u_h = self.add_weight(shape=(x_shape[-1], x_shape[-1])) + + def bias_initializer(shape, dtype): + return tf.fill(shape, tf.cast(self._init_bias, dtype=dtype)) + + self._bias_z = self.add_weight( + shape=(x_shape[-1], ), initializer=bias_initializer) + + def call(self, inputs, **kwargs): + x, y = inputs + r = (tf.tensordot(y, self._w_r, axes=1) + tf.tensordot( + x, self._u_r, axes=1)) + r = tf.nn.sigmoid(r) + + z = (tf.tensordot(y, self._w_z, axes=1) + tf.tensordot( + x, self._u_z, axes=1) + self._bias_z) + z = tf.nn.sigmoid(z) + + h = (tf.tensordot(y, self._w_h, axes=1) + tf.tensordot( + (x * r), self._u_h, axes=1)) + h = tf.nn.tanh(h) + + return (1 - z) * x + z * h + + +def make_TrXL(seq_length, num_layers, attn_dim, num_heads, head_dim, + ff_hidden_dim): + pos_embedding = relative_position_embedding(seq_length, attn_dim) + + layers = [tf.keras.layers.Dense(attn_dim)] + for _ in range(num_layers): + layers.append( + SkipConnection( + RelativeMultiHeadAttention(attn_dim, num_heads, head_dim, + pos_embedding))) + layers.append(tf.keras.layers.LayerNormalization(axis=-1)) + + layers.append( + SkipConnection(PositionwiseFeedforward(attn_dim, ff_hidden_dim))) + layers.append(tf.keras.layers.LayerNormalization(axis=-1)) + + return tf.keras.Sequential(layers) + + +def make_GRU_TrXL(seq_length, + num_layers, + attn_dim, + num_heads, + head_dim, + ff_hidden_dim, + init_gate_bias=2.): + # Default initial bias for the gate taken from + # Parisotto, Emilio, et al. "Stabilizing Transformers for Reinforcement Learning." arXiv preprint arXiv:1910.06764 (2019). + pos_embedding = relative_position_embedding(seq_length, attn_dim) + + layers = [tf.keras.layers.Dense(attn_dim)] + for _ in range(num_layers): + layers.append( + SkipConnection( + RelativeMultiHeadAttention( + attn_dim, + num_heads, + head_dim, + pos_embedding, + input_layernorm=True, + output_activation=tf.nn.relu), + fan_in_layer=GRUGate(init_gate_bias), + )) + + layers.append( + SkipConnection( + tf.keras.Sequential( + (tf.keras.layers.LayerNormalization(axis=-1), + PositionwiseFeedforward( + attn_dim, ff_hidden_dim, + output_activation=tf.nn.relu))), + fan_in_layer=GRUGate(init_gate_bias), + )) + + return tf.keras.Sequential(layers)
[rllib] Add examples for attention network models For example, the Numpad / memory maze experiments in https://arxiv.org/pdf/1910.06764.pdf cc @gehring
that would be awesome if you guys tackled this... would love to see such an example.... @waldroje This is a priority for us so, barring anything major, it will get done! I can't give you a concrete estimate for when this will be completed but it should be relatively soon, hopefully within a month or so.
2019-12-13T03:37:21
ray-project/ray
6,475
ray-project__ray-6475
[ "6393" ]
eb6f3f86e5c64a14c06e24894c0ff64b4b99a462
diff --git a/rllib/models/tf/tf_action_dist.py b/rllib/models/tf/tf_action_dist.py --- a/rllib/models/tf/tf_action_dist.py +++ b/rllib/models/tf/tf_action_dist.py @@ -166,7 +166,7 @@ def kl(self, other): @override(ActionDistribution) def entropy(self): return tf.reduce_sum( - .5 * self.log_std + .5 * np.log(2.0 * np.pi * np.e), + self.log_std + .5 * np.log(2.0 * np.pi * np.e), reduction_indices=[1]) @override(TFActionDistribution)
Inconsistency in Diagonal Gaussian between log probability and entropy In the DiagonalGaussian action distribution the logp is calculated like so: ` return (-0.5 * tf.reduce_sum( tf.square((x - self.mean) / self.std), reduction_indices=[1]) - 0.5 * np.log(2.0 * np.pi) * tf.to_float(tf.shape(x)[1]) - tf.reduce_sum(self.log_std, reduction_indices=[1]))` Here the term tf.reduce_sum(self.log_std, reduction_indices=[1]) is supposed to compute the part of the log-probability due to the determinant of the covariance matrix. The diagonal entries of the covariance matrix are the squared standard deviations, so the absence of the 0.5 in front of the term makes sense. But then, in that case, the entropy `tf.reduce_sum( .5 * self.log_std + .5 * np.log(2.0 * np.pi * np.e), reduction_indices=[1])` <img width="310" alt="image" src="https://user-images.githubusercontent.com/7660397/70380361-ed418a80-18ee-11ea-917f-0f266b48ada5.png"> should also not have a 0.5 in front of the log-std term and should instead be `tf.reduce_sum( self.log_std + .5 * np.log(2.0 * np.pi * np.e), reduction_indices=[1])`
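A quick, self-contained numerical check of this claim (not part of the PR; SciPy is used here purely as an independent reference for the entropy of a diagonal Gaussian):

```python
# Verify the diagonal-Gaussian entropy expressions against SciPy.
import numpy as np
from scipy.stats import multivariate_normal

log_std = np.array([0.3, -0.5, 1.2])
std = np.exp(log_std)

# Expression after the fix: sum(log_std + 0.5 * log(2*pi*e))
fixed = np.sum(log_std + 0.5 * np.log(2.0 * np.pi * np.e))
# Pre-fix expression with the extra 0.5 in front of log_std.
buggy = np.sum(0.5 * log_std + 0.5 * np.log(2.0 * np.pi * np.e))

reference = multivariate_normal(
    mean=np.zeros(3), cov=np.diag(std ** 2)).entropy()
print(fixed, buggy, reference)  # `fixed` agrees with `reference`; `buggy` does not
```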
I see, should we be reverting https://github.com/ray-project/ray/pull/2968 then? cc @pcmoritz cc @michaelzhiluo, as he was the one who mentioned this change I think? I agree that `0.5` should be removed because of using `log_std` not the `log_variance`. I also agree that `0.5` should not be in front of `self.log_std` in entropy since there is a 2x term factoring out of the variance terms in the covariance matrix. @eugenevinitsky do you mind pushing a fix? Yeah, one sec!
2019-12-13T08:18:58
ray-project/ray
6,485
ray-project__ray-6485
[ "6484" ]
e2b7459bfc4c5025d256485000544f1b5e76b720
diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py --- a/python/ray/scripts/scripts.py +++ b/python/ray/scripts/scripts.py @@ -381,7 +381,7 @@ def start(node_ip_address, redis_address, address, redis_port, redis_client = services.create_redis_client( redis_address, password=redis_password) - # Check that the verion information on this node matches the version + # Check that the version information on this node matches the version # information that the cluster was started with. services.check_version_info(redis_client)
Fix simple typo: verion -> version # Issue Type [x] Bug (Typo) # Steps to Replicate 1. Examine python/ray/scripts/scripts.py. 2. Search for `verion`. # Expected Behaviour 1. Should read `version`.
2019-12-14T11:22:38
ray-project/ray
6,571
ray-project__ray-6571
[ "6569" ]
7bbfa85c6681c65b5da4e01e3c2cea65f694f4b3
diff --git a/python/ray/tune/function_runner.py b/python/ray/tune/function_runner.py --- a/python/ray/tune/function_runner.py +++ b/python/ray/tune/function_runner.py @@ -7,6 +7,7 @@ import inspect import threading import traceback +import sys from six.moves import queue from ray.tune import track @@ -248,7 +249,10 @@ def wrap_function(train_func): use_track = False try: - func_args = inspect.getargspec(train_func).args + if sys.version_info >= (3, 3): + func_args = inspect.getfullargspec(train_func).args + else: + func_args = inspect.getargspec(train_func).args use_track = ("reporter" not in func_args and len(func_args) == 1) if use_track: logger.info("tune.track signature detected.")
diff --git a/docker/tune_test/Dockerfile b/docker/tune_test/Dockerfile --- a/docker/tune_test/Dockerfile +++ b/docker/tune_test/Dockerfile @@ -6,7 +6,7 @@ FROM ray-project/base-deps # a test runner. RUN conda install -y numpy RUN pip install -U pip -RUN pip install -U https://ray-wheels.s3-us-west-2.amazonaws.com/latest/ray-0.9.0.dev-cp36-cp36m-manylinux1_x86_64.whl +RUN pip install -U https://ray-wheels.s3-us-west-2.amazonaws.com/latest/ray-0.9.0.dev0-cp36-cp36m-manylinux1_x86_64.whl RUN pip install -U boto3 # We install this after the latest wheels -- this should not override the latest wheels. # Needed to run Tune example with a 'plot' call - which does not actually render a plot, but throws an error.
[tune] Ray Tune fails to parse typing hints of the function for experiment ### What is the problem? If the function for experiment has a [typing hint](https://docs.python.org/3/library/typing.html) for its argument `config`, then Ray Tune fails to parse the argument and assumes that there is a reporter signature. The cause of this problem is in this source code: https://github.com/ray-project/ray/blob/1eaa57c98f8870a43e1ea14ec011b6bd4be97c8d/python/ray/tune/function_runner.py#L250-L257 Changing `func_args = inspect.getargspec(train_func).args` to `func_args = inspect.getfullargspec(train_func).args` might solve the problem. *Ray version and other system information (Python version, TensorFlow version, OS):* Ray: 0.8.0 Python: 3.7.5 OS: Ubuntu 18.04 *Does the problem occur on the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html)?* I couldn't install the latest wheel. So I can't confirm it. ### Reproduction The following is a modification of the first examples in the [Ray Tune Documentation](https://ray.readthedocs.io/en/latest/tune.html#quick-start), where I added a typing hint `config: Dict[str, Any]` for the argument of function `train_mnist`. ``` from typing import Dict, Any import torch.optim as optim from ray import tune from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test def train_mnist(config: Dict[str, Any]): train_loader, test_loader = get_data_loaders() model = ConvNet() optimizer = optim.SGD(model.parameters(), lr=config["lr"]) for i in range(10): train(model, optimizer, train_loader) acc = test(model, test_loader) tune.track.log(mean_accuracy=acc) analysis = tune.run(train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])}) print("Best config: ", analysis.get_best_config(metric="mean_accuracy")) # Get a dataframe for analyzing trial results. df = analysis.dataframe() ``` When running the code you get this error message: **TypeError: train_mnist() takes 1 positional argument but 2 were given**
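The failure can be reproduced outside of Ray: on Python 3, `inspect.getargspec` raises `ValueError` for any function carrying annotations, which is presumably why the signature detection fell back to the legacy `(config, reporter)` path. A minimal sketch (assuming an interpreter where `getargspec` still exists; it was removed in Python 3.11):

```python
# Minimal reproduction of the signature-detection failure, without Ray.
import inspect

def train_mnist(config: dict):
    return config

try:
    print(inspect.getargspec(train_mnist).args)
except ValueError as exc:
    # getargspec() refuses functions that have annotations.
    print("getargspec failed:", exc)

# getfullargspec() supports annotations and keyword-only arguments.
print(inspect.getfullargspec(train_mnist).args)  # ['config']
```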
2019-12-21T14:20:37
ray-project/ray
6,634
ray-project__ray-6634
[ "6632" ]
3e0f07468fb117bfbe25feb815c83f02028284b7
diff --git a/doc/examples/parameter_server/async_parameter_server.py b/doc/examples/parameter_server/async_parameter_server.py deleted file mode 100644 --- a/doc/examples/parameter_server/async_parameter_server.py +++ /dev/null @@ -1,80 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import argparse -import time - -import ray -import model - -parser = argparse.ArgumentParser(description="Run the asynchronous parameter " - "server example.") -parser.add_argument("--num-workers", default=4, type=int, - help="The number of workers to use.") -parser.add_argument("--redis-address", default=None, type=str, - help="The Redis address of the cluster.") - - [email protected] -class ParameterServer(object): - def __init__(self, keys, values): - # These values will be mutated, so we must create a copy that is not - # backed by the object store. - values = [value.copy() for value in values] - self.weights = dict(zip(keys, values)) - - def push(self, keys, values): - for key, value in zip(keys, values): - self.weights[key] += value - - def pull(self, keys): - return [self.weights[key] for key in keys] - - [email protected] -def worker_task(ps, worker_index, batch_size=50): - # Download MNIST. - mnist = model.download_mnist_retry(seed=worker_index) - - # Initialize the model. - net = model.SimpleCNN() - keys = net.get_weights()[0] - - while True: - # Get the current weights from the parameter server. - weights = ray.get(ps.pull.remote(keys)) - net.set_weights(keys, weights) - - # Compute an update and push it to the parameter server. - xs, ys = mnist.train.next_batch(batch_size) - gradients = net.compute_update(xs, ys) - ps.push.remote(keys, gradients) - - -if __name__ == "__main__": - args = parser.parse_args() - - ray.init(redis_address=args.redis_address) - - # Create a parameter server with some random weights. - net = model.SimpleCNN() - all_keys, all_values = net.get_weights() - ps = ParameterServer.remote(all_keys, all_values) - - # Start some training tasks. - worker_tasks = [worker_task.remote(ps, i) for i in range(args.num_workers)] - - # Download MNIST. - mnist = model.download_mnist_retry() - - i = 0 - while True: - # Get and evaluate the current model. - current_weights = ray.get(ps.pull.remote(all_keys)) - net.set_weights(all_keys, current_weights) - test_xs, test_ys = mnist.test.next_batch(1000) - accuracy = net.compute_accuracy(test_xs, test_ys) - print("Iteration {}: accuracy is {}".format(i, accuracy)) - i += 1 - time.sleep(1) diff --git a/doc/examples/parameter_server/model.py b/doc/examples/parameter_server/model.py deleted file mode 100644 --- a/doc/examples/parameter_server/model.py +++ /dev/null @@ -1,203 +0,0 @@ -# Most of the tensorflow code is adapted from Tensorflow's tutorial on using -# CNNs to train MNIST -# https://www.tensorflow.org/get_started/mnist/pros#build-a-multilayer-convolutional-network. 
# noqa: E501 - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import time - -import tensorflow as tf -from tensorflow.examples.tutorials.mnist import input_data - -import ray -import ray.experimental.tf_utils - - -def download_mnist_retry(seed=0, max_num_retries=20): - for _ in range(max_num_retries): - try: - return input_data.read_data_sets( - "MNIST_data", one_hot=True, seed=seed) - except tf.errors.AlreadyExistsError: - time.sleep(1) - raise Exception("Failed to download MNIST.") - - -class SimpleCNN(object): - def __init__(self, learning_rate=1e-4): - with tf.Graph().as_default(): - - # Create the model - self.x = tf.placeholder(tf.float32, [None, 784]) - - # Define loss and optimizer - self.y_ = tf.placeholder(tf.float32, [None, 10]) - - # Build the graph for the deep net - self.y_conv, self.keep_prob = deepnn(self.x) - - with tf.name_scope("loss"): - cross_entropy = tf.nn.softmax_cross_entropy_with_logits( - labels=self.y_, logits=self.y_conv) - self.cross_entropy = tf.reduce_mean(cross_entropy) - - with tf.name_scope("adam_optimizer"): - self.optimizer = tf.train.AdamOptimizer(learning_rate) - self.train_step = self.optimizer.minimize(self.cross_entropy) - - with tf.name_scope("accuracy"): - correct_prediction = tf.equal( - tf.argmax(self.y_conv, 1), tf.argmax(self.y_, 1)) - correct_prediction = tf.cast(correct_prediction, tf.float32) - self.accuracy = tf.reduce_mean(correct_prediction) - - self.sess = tf.Session( - config=tf.ConfigProto( - intra_op_parallelism_threads=1, - inter_op_parallelism_threads=1)) - self.sess.run(tf.global_variables_initializer()) - - # Helper values. - - self.variables = ray.experimental.tf_utils.TensorFlowVariables( - self.cross_entropy, self.sess) - - self.grads = self.optimizer.compute_gradients(self.cross_entropy) - self.grads_placeholder = [(tf.placeholder( - "float", shape=grad[1].get_shape()), grad[1]) - for grad in self.grads] - self.apply_grads_placeholder = self.optimizer.apply_gradients( - self.grads_placeholder) - - def compute_update(self, x, y): - # TODO(rkn): Computing the weights before and after the training step - # and taking the diff is awful. - weights = self.get_weights()[1] - self.sess.run( - self.train_step, - feed_dict={ - self.x: x, - self.y_: y, - self.keep_prob: 0.5 - }) - new_weights = self.get_weights()[1] - return [x - y for x, y in zip(new_weights, weights)] - - def compute_gradients(self, x, y): - return self.sess.run( - [grad[0] for grad in self.grads], - feed_dict={ - self.x: x, - self.y_: y, - self.keep_prob: 0.5 - }) - - def apply_gradients(self, gradients): - feed_dict = {} - for i in range(len(self.grads_placeholder)): - feed_dict[self.grads_placeholder[i][0]] = gradients[i] - self.sess.run(self.apply_grads_placeholder, feed_dict=feed_dict) - - def compute_accuracy(self, x, y): - return self.sess.run( - self.accuracy, - feed_dict={ - self.x: x, - self.y_: y, - self.keep_prob: 1.0 - }) - - def set_weights(self, variable_names, weights): - self.variables.set_weights(dict(zip(variable_names, weights))) - - def get_weights(self): - weights = self.variables.get_weights() - return list(weights.keys()), list(weights.values()) - - -def deepnn(x): - """deepnn builds the graph for a deep net for classifying digits. - - Args: - x: an input tensor with the dimensions (N_examples, 784), where 784 is - the number of pixels in a standard MNIST image. - - Returns: - A tuple (y, keep_prob). 
y is a tensor of shape (N_examples, 10), with - values equal to the logits of classifying the digit into one of 10 - classes (the digits 0-9). keep_prob is a scalar placeholder for the - probability of dropout. - """ - # Reshape to use within a convolutional neural net. - # Last dimension is for "features" - there is only one here, since images - # are grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc. - with tf.name_scope("reshape"): - x_image = tf.reshape(x, [-1, 28, 28, 1]) - - # First convolutional layer - maps one grayscale image to 32 feature maps. - with tf.name_scope("conv1"): - W_conv1 = weight_variable([5, 5, 1, 32]) - b_conv1 = bias_variable([32]) - h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) - - # Pooling layer - downsamples by 2X. - with tf.name_scope("pool1"): - h_pool1 = max_pool_2x2(h_conv1) - - # Second convolutional layer -- maps 32 feature maps to 64. - with tf.name_scope("conv2"): - W_conv2 = weight_variable([5, 5, 32, 64]) - b_conv2 = bias_variable([64]) - h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) - - # Second pooling layer. - with tf.name_scope("pool2"): - h_pool2 = max_pool_2x2(h_conv2) - - # Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image - # is down to 7x7x64 feature maps -- maps this to 1024 features. - with tf.name_scope("fc1"): - W_fc1 = weight_variable([7 * 7 * 64, 1024]) - b_fc1 = bias_variable([1024]) - - h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64]) - h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) - - # Dropout - controls the complexity of the model, prevents co-adaptation of - # features. - with tf.name_scope("dropout"): - keep_prob = tf.placeholder(tf.float32) - h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob) - - # Map the 1024 features to 10 classes, one for each digit - with tf.name_scope("fc2"): - W_fc2 = weight_variable([1024, 10]) - b_fc2 = bias_variable([10]) - - y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2 - return y_conv, keep_prob - - -def conv2d(x, W): - """conv2d returns a 2d convolution layer with full stride.""" - return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME") - - -def max_pool_2x2(x): - """max_pool_2x2 downsamples a feature map by 2X.""" - return tf.nn.max_pool( - x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME") - - -def weight_variable(shape): - """weight_variable generates a weight variable of a given shape.""" - initial = tf.truncated_normal(shape, stddev=0.1) - return tf.Variable(initial) - - -def bias_variable(shape): - """bias_variable generates a bias variable of a given shape.""" - initial = tf.constant(0.1, shape=shape) - return tf.Variable(initial) diff --git a/doc/examples/parameter_server/sync_parameter_server.py b/doc/examples/parameter_server/sync_parameter_server.py deleted file mode 100644 --- a/doc/examples/parameter_server/sync_parameter_server.py +++ /dev/null @@ -1,76 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import argparse -import numpy as np - -import ray -import model - -parser = argparse.ArgumentParser(description="Run the synchronous parameter " - "server example.") -parser.add_argument("--num-workers", default=4, type=int, - help="The number of workers to use.") -parser.add_argument("--redis-address", default=None, type=str, - help="The Redis address of the cluster.") - - [email protected] -class ParameterServer(object): - def __init__(self, learning_rate): - self.net = model.SimpleCNN(learning_rate=learning_rate) - - def 
apply_gradients(self, *gradients): - self.net.apply_gradients(np.mean(gradients, axis=0)) - return self.net.variables.get_flat() - - def get_weights(self): - return self.net.variables.get_flat() - - [email protected] -class Worker(object): - def __init__(self, worker_index, batch_size=50): - self.worker_index = worker_index - self.batch_size = batch_size - self.mnist = model.download_mnist_retry(seed=worker_index) - self.net = model.SimpleCNN() - - def compute_gradients(self, weights): - self.net.variables.set_flat(weights) - xs, ys = self.mnist.train.next_batch(self.batch_size) - return self.net.compute_gradients(xs, ys) - - -if __name__ == "__main__": - args = parser.parse_args() - - ray.init(redis_address=args.redis_address) - - # Create a parameter server. - net = model.SimpleCNN() - ps = ParameterServer.remote(1e-4 * args.num_workers) - - # Create workers. - workers = [Worker.remote(worker_index) - for worker_index in range(args.num_workers)] - - # Download MNIST. - mnist = model.download_mnist_retry() - - i = 0 - current_weights = ps.get_weights.remote() - while True: - # Compute and apply gradients. - gradients = [worker.compute_gradients.remote(current_weights) - for worker in workers] - current_weights = ps.apply_gradients.remote(*gradients) - - if i % 10 == 0: - # Evaluate the current model. - net.variables.set_flat(ray.get(current_weights)) - test_xs, test_ys = mnist.test.next_batch(1000) - accuracy = net.compute_accuracy(test_xs, test_ys) - print("Iteration {}: accuracy is {}".format(i, accuracy)) - i += 1
Some examples no longer run ### What is the problem? *Ray version and other system information (Python version, TensorFlow version, OS):* - Using the current Ray master 3e0f07468fb117bfbe25feb815c83f02028284b7. - TensorFlow 2.0.0 - MacOS - Python 3.7.4 ### Reproduction To reproduce the issue, run ``` python doc/examples/parameter_server/async_parameter_server.py ``` This fails with ``` Traceback (most recent call last): File "doc/examples/parameter_server/async_parameter_server.py", line 9, in <module> import model File "/Users/rkn/anyscale/ray/doc/examples/parameter_server/model.py", line 12, in <module> from tensorflow.examples.tutorials.mnist import input_data ModuleNotFoundError: No module named 'tensorflow.examples.tutorials' ``` The same is true of `sync_parameter_server.py`. All of the examples under `doc/examples/` should be tested.
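For reference, the removed examples relied on `tensorflow.examples.tutorials.mnist.input_data`, which no longer ships with TensorFlow 2.x. The recorded fix simply deletes the examples; the snippet below is only a hedged sketch of the closest modern replacement (`tf.keras.datasets`), not code from this PR.

```python
# Loading MNIST without the removed tensorflow.examples.tutorials module
# (assumes TensorFlow 2.x is installed).
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Flatten to (N, 784) and one-hot encode, roughly what input_data() returned.
x_train = x_train.reshape(-1, 784).astype(np.float32) / 255.0
y_train = np.eye(10, dtype=np.float32)[y_train]

print(x_train.shape, y_train.shape)  # (60000, 784) (60000, 10)
```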
2019-12-30T03:03:57
ray-project/ray
6,756
ray-project__ray-6756
[ "4687" ]
60d4d5e1aaa9fde3cf541ee335e284d05e75679c
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -215,12 +215,19 @@ def __init__(self): self.method_signatures = {} self.actor_method_num_return_vals = {} for method_name, method in self.actor_methods: + # Whether or not this method requires binding of its first + # argument. For class and static methods, we do not want to bind + # the first argument, but we do for instance methods + is_bound = (ray.utils.is_class_method(method) + or ray.utils.is_static_method(self.modified_class, + method_name)) + # Print a warning message if the method signature is not # supported. We don't raise an exception because if the actor # inherits from a class that has a method whose signature we # don't support, there may not be much the user can do about it. self.method_signatures[method_name] = signature.extract_signature( - method, ignore_first=not ray.utils.is_class_method(method)) + method, ignore_first=not is_bound) # Set the default number of return values for this method. if hasattr(method, "__ray_num_return_vals__"): self.actor_method_num_return_vals[method_name] = ( diff --git a/python/ray/function_manager.py b/python/ray/function_manager.py --- a/python/ray/function_manager.py +++ b/python/ray/function_manager.py @@ -25,6 +25,7 @@ binary_to_hex, is_function_or_method, is_class_method, + is_static_method, check_oversized_pickle, decode, ensure_str, @@ -757,7 +758,9 @@ def actor_method_executor(actor, *args, **kwargs): # Execute the assigned method and save a checkpoint if necessary. try: - if is_class_method(method): + is_bound = (is_class_method(method) + or is_static_method(type(actor), method_name)) + if is_bound: method_returns = method(*args, **kwargs) else: method_returns = method(actor, *args, **kwargs) diff --git a/python/ray/utils.py b/python/ray/utils.py --- a/python/ray/utils.py +++ b/python/ray/utils.py @@ -131,6 +131,21 @@ def is_class_method(f): return hasattr(f, "__self__") and f.__self__ is not None +def is_static_method(cls, f_name): + """Returns whether the class has a static method with the given name. + + Args: + cls: The Python class (i.e. object of type `type`) to + search for the method in. + f_name: The name of the method to look up in this class + and check whether or not it is static. + """ + for cls in inspect.getmro(cls): + if f_name in cls.__dict__: + return isinstance(cls.__dict__[f_name], staticmethod) + return False + + def random_string(): """Generate a random string to use as an ID.
diff --git a/python/ray/tests/test_actor.py b/python/ray/tests/test_actor.py --- a/python/ray/tests/test_actor.py +++ b/python/ray/tests/test_actor.py @@ -212,6 +212,55 @@ def g(self): assert ray.get(t.g.remote()) == 3 +def test_actor_static_attributes(ray_start_regular): + class Grandparent: + GRANDPARENT = 2 + + @staticmethod + def grandparent_static(): + assert Grandparent.GRANDPARENT == 2 + return 1 + + class Parent1(Grandparent): + PARENT1 = 6 + + @staticmethod + def parent1_static(): + assert Parent1.PARENT1 == 6 + return 2 + + def parent1(self): + assert Parent1.PARENT1 == 6 + + class Parent2: + PARENT2 = 7 + + def parent2(self): + assert Parent2.PARENT2 == 7 + + @ray.remote + class TestActor(Parent1, Parent2): + X = 3 + + @staticmethod + def f(): + assert TestActor.GRANDPARENT == 2 + assert TestActor.PARENT1 == 6 + assert TestActor.PARENT2 == 7 + assert TestActor.X == 3 + return 4 + + def g(self): + assert TestActor.GRANDPARENT == 2 + assert TestActor.PARENT1 == 6 + assert TestActor.PARENT2 == 7 + assert TestActor.f() == 4 + return TestActor.X + + t = TestActor.remote() + assert ray.get(t.g.remote()) == 3 + + def test_caching_actors(shutdown_only): # Test defining actors before ray.init() has been called.
Actor class static methods are treated as instance methods Trying to call a static method of a Ray actor (a class wrapped with @ray.remote) fails because the method is treated as if it expected a `self` argument. ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: - **Ray installed from (source or binary)**: binary (pip) - **Ray version**: 0.6.4 - **Python version**: 3.6.7 - **Exact command to reproduce**: ``` import ray ray.init() @ray.remote class my_class: def __init__(self): self.a = 1 def get_a(self): return self.a @staticmethod def static_sum(a, b): return a + b mc = my_class.remote() ray.get(mc.static_sum.remote(1, 2)) ``` fails with the exception: ``` Exception: Too many arguments were passed to the function 'static_sum' ``` and calling ``` ray.get(mc.static_sum.remote(1)) ``` causes ``` RayTaskError: ray_my_class:static_sum() (pid=28795, host=***) File "<ipython-input-64-88df51b11f3a>", line 11, in static_sum TypeError: unsupported operand type(s) for +: 'my_class' and 'int' ``` <!-- You can obtain the Ray version with python -c "import ray; print(ray.__version__)" --> ### Describe the problem Actor static methods are treated as instance methods. Additional question: Is there any way to access an actor property (a function wrapped with the property decorator) or a plain instance field? They are not callable and are not present in the ActorHandle attributes, e.g. ``` mc.a ``` causes ``` AttributeError: 'ActorHandle' object has no attribute 'a' ``` <!-- Describe the problem clearly here. --> <!-- Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem. -->
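The fix above distinguishes bound instance methods from static methods by walking the MRO and checking each class `__dict__`. A standalone sketch of that check, with a toy class hierarchy for illustration:

```python
# Standalone version of the static-method check introduced by the patch.
import inspect

def is_static_method(cls, f_name):
    # Walk the MRO so static methods inherited from base classes are found.
    for klass in inspect.getmro(cls):
        if f_name in klass.__dict__:
            return isinstance(klass.__dict__[f_name], staticmethod)
    return False

class Base:
    @staticmethod
    def static_sum(a, b):
        return a + b

class MyClass(Base):
    def get_a(self):
        return 1

print(is_static_method(MyClass, "static_sum"))  # True  (inherited staticmethod)
print(is_static_method(MyClass, "get_a"))       # False (instance method)
```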
Thanks for reporting this, would you be interested in submitting a patch for this? Regarding accessing fields of an object, there isn't a way that is exposed right now. We could always expose the builtin methods like `__getattribute__`, which would make it possible. One reason for not making it too easy is that accessing a field feels very "cheap", but in our case it requires remote procedure calls, so isn't particularly cheap. Thanks, @robertnishihara! Talking about the patch, are the staticmethods supposed to be treated as remote functions or just ignored with some exception message? About the accessing the fields: the documentation seems to lack this info. From the first view some people may get the incorrect understanding of ray remote actors. Up (talking about the last comment) Up
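As for the follow-up question about reading actor fields: attribute access on handles is not supported, so the usual pattern is an explicit accessor method, which the `__getattribute__` suggestion above would generalize. A hedged sketch of that workaround (not an API addition from this PR; the class and method names are made up):

```python
# Workaround for reading actor state: expose an explicit accessor method.
import ray

ray.init()

@ray.remote
class MyActor:
    def __init__(self):
        self.a = 1

    def read(self, name):
        # Fields are not reachable as attributes on the handle, so they
        # have to be fetched through a remote method call like this one.
        return getattr(self, name)

handle = MyActor.remote()
print(ray.get(handle.read.remote("a")))  # 1
```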
2020-01-09T14:00:22
ray-project/ray
6,787
ray-project__ray-6787
[ "6762" ]
3ea3b56eb14b79c7601ecbad9859a93050ddd2dd
diff --git a/python/ray/projects/scripts.py b/python/ray/projects/scripts.py --- a/python/ray/projects/scripts.py +++ b/python/ray/projects/scripts.py @@ -326,6 +326,10 @@ def attach(screen, tmux): @click.option("--name", help="Name of the session to stop", default=None) def stop(name): project_definition = load_project_or_throw() + + if not name: + name = project_definition.config["name"] + teardown_cluster( project_definition.cluster_yaml(), yes=True,
[projects] ray session stop does not kill pods <!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant--> ### What is the problem? Using ray 0.9.0dev0, Ray projects does not delete pods associated with the project when running `ray session stop`. ### Reproduction (REQUIRED) When launching the cluster with `ray session start <command>`, the name of the cluster supplied in the `cluster.yml` is overwritten with the name of the project. However, the same replacement is not made when running `ray session stop`. A temporary workaround is possible by running `ray session stop --name project_name`. Refer to the stop code here https://github.com/ray-project/ray/blob/master/python/ray/projects/scripts.py#L327 and the start code here https://github.com/ray-project/ray/blob/master/python/ray/projects/scripts.py#L352 ### Potential fix Alter the stop function to the following (loading the project definition before the name is read from it): ``` def stop(name): project_definition = load_project_or_throw() if not name: name = project_definition.config["name"] teardown_cluster( project_definition.cluster_yaml(), yes=True, workers_only=False, override_cluster_name=name) ``` I think, however, it would be neater if the config loader did the replacement, i.e. here https://github.com/ray-project/ray/blob/master/python/ray/projects/projects.py#L49
Thanks for the report, I will reproduce and fix it if it's a bug. @Nintorac @Qstar Thanks for pointing this out and offering to fix it, let me know if you need help! The proposed fix looks great to me :)
2020-01-14T04:37:53
ray-project/ray
6,849
ray-project__ray-6849
[ "5567" ]
341ddd0a0909fcc755e3474427dda0b590fb19dd
diff --git a/python/ray/tune/suggest/variant_generator.py b/python/ray/tune/suggest/variant_generator.py --- a/python/ray/tune/suggest/variant_generator.py +++ b/python/ray/tune/suggest/variant_generator.py @@ -2,7 +2,6 @@ import logging import numpy import random -import types from ray.tune import TuneError from ray.tune.sample import sample_from @@ -126,7 +125,7 @@ def _generate_variants(spec): grid_vars = [] lambda_vars = [] for path, value in unresolved.items(): - if isinstance(value, types.FunctionType): + if callable(value): lambda_vars.append((path, value)) else: grid_vars.append((path, value))
diff --git a/python/ray/tune/tests/test_var.py b/python/ray/tune/tests/test_var.py --- a/python/ray/tune/tests/test_var.py +++ b/python/ray/tune/tests/test_var.py @@ -1,5 +1,6 @@ import os import numpy as np +import random import unittest import ray @@ -210,6 +211,29 @@ def testDependentGridSearch(self): self.assertEqual(trials[0].config, {"x": 100, "y": 1}) self.assertEqual(trials[1].config, {"x": 200, "y": 1}) + def testDependentGridSearchCallable(self): + class Normal: + def __call__(self, _config): + return random.normalvariate(mu=0, sigma=1) + + class Single: + def __call__(self, _config): + return 20 + + trials = self.generate_trials({ + "run": "PPO", + "config": { + "x": grid_search( + [tune.sample_from(Normal()), + tune.sample_from(Normal())]), + "y": tune.sample_from(Single()), + }, + }, "dependent_grid_search") + trials = list(trials) + self.assertEqual(len(trials), 2) + self.assertEqual(trials[0].config["y"], 20) + self.assertEqual(trials[1].config["y"], 20) + def testNestedValues(self): trials = self.generate_trials({ "run": "PPO",
[tune] Feature request: tune.sample_from does not support callable objects. ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04 - **Ray installed from (source or binary)**: binary - **Ray version**: 0.7.2 - **Python version**: 3.2 - **Exact command to reproduce**: See below ### Describe the problem The `tune` sample_from interface is strictly limited to function objects, such as lambdas. This serves most use cases, but there are a number of instances where it's very useful to define a callable object to yield samples. (See trivial example below.) At the moment, providing a callable object returns errors from within tune variant generation, as the non-function-based `sample_from` entries are processed in grid entries. This can be resolved by changeing the sample/grid check from a direct check for `FunctionType` (Source location: https://github.com/ray-project/ray/blob/fadfa5f30bb654a74c781eaf8396a35af3ab7760/python/ray/tune/suggest/variant_generator.py#L116) to the builtin function `callable`. I'm not entirely clear if this is an intentional limitation, and changing this logic will likely require expansion of tune's tests and documentation to cover the new behavior. I would be happy to open a PR for this if a maintainer gives the feature a 👍. ### Source code / logs ```python import random import ray.tune as tune from ray.tune.suggest.variant_generator import generate_variants class Normal: def __call__(self, _config): return random.normalvariate(mu=0, sigma=1) grid_config = {"grid": tune.grid_search(list(range(2)))} sample_config = {"normal": tune.sample_from(Normal())} print(grid_config) print(list(generate_variants(grid_config))) print(sample_config) print(list(generate_variants(sample_config))) ``` Results: ``` {'grid': {'grid_search': [0, 1]}} [('grid=0', {'grid': 0}), ('grid=1', {'grid': 1})] {'normal': tune.sample_from(<__main__.Normal object at 0x7f08ed1d0f50>)} Traceback (most recent call last): File "sample_error.py", line 19, in <module> print(list(generate_variants(sample_config))) File "/work/home/lexaf/workspace/alphabeta/.conda/lib/python3.7/site-packages/ray/tune/suggest/variant_generator.py", line 43, in generate_variants for resolved_vars, spec in _generate_variants(unresolved_spec): File "/work/home/lexaf/workspace/alphabeta/.conda/lib/python3.7/site-packages/ray/tune/suggest/variant_generator.py", line 123, in _generate_variants for resolved_spec in grid_search: File "/work/home/lexaf/workspace/alphabeta/.conda/lib/python3.7/site-packages/ray/tune/suggest/variant_generator.py", line 193, in _grid_search_generator while value_indices[-1] < len(grid_vars[-1][1]): TypeError: object of type 'Normal' has no len() ```
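The root cause is easy to see outside of Tune: a callable class instance is not a `types.FunctionType`, so the old check routed it to the grid-search branch. A minimal sketch:

```python
# Why the FunctionType check misses callable objects.
import types

class Normal:
    def __call__(self, _config):
        return 0.0

func = lambda _config: 0.0
obj = Normal()

print(isinstance(func, types.FunctionType), callable(func))  # True True
print(isinstance(obj, types.FunctionType), callable(obj))    # False True
# Switching the check to callable(value), as in the patch, sends callable
# objects down the lambda/sample branch instead of the grid-search branch.
```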
@asford this would be great! I'd be more than happy to help shepherd this. I just had a case where this feature would've been useful.
2020-01-20T00:07:18
ray-project/ray
6,890
ray-project__ray-6890
[ "6884" ]
1558307ac4e66d77d2ee92514b45311046fa93c6
diff --git a/rllib/examples/random_env.py b/rllib/examples/random_env.py new file mode 100644 --- /dev/null +++ b/rllib/examples/random_env.py @@ -0,0 +1,69 @@ +""" +Example of a custom gym environment and model. Run this for a demo. + +This example shows: + - using a custom environment + - using a custom model + - using Tune for grid search + +You can visualize experiment results in ~/ray_results using TensorBoard. +""" + +import gym +from gym.spaces import Tuple, Discrete +import numpy as np + +from ray.rllib.agents.ppo import PPOTrainer +from ray.rllib.utils import try_import_tf + +tf = try_import_tf() + + +class RandomEnv(gym.Env): + """ + A randomly acting environment that can be instantiated with arbitrary + action and observation spaces. + """ + + def __init__(self, config): + # Action space. + self.action_space = config["action_space"] + # Observation space from which to sample. + self.observation_space = config["observation_space"] + # Reward space from which to sample. + self.reward_space = config.get( + "reward_space", + gym.spaces.Box(low=-1.0, high=1.0, shape=(), dtype=np.float32)) + # Chance that an episode ends at any step. + self.p_done = config.get("p_done", 0.1) + + def reset(self): + return self.observation_space.sample() + + def step(self, action): + return self.observation_space.sample(), \ + float(self.reward_space.sample()), \ + bool(np.random.choice( + [True, False], p=[self.p_done, 1.0 - self.p_done] + )), {} + + +if __name__ == "__main__": + trainer = PPOTrainer( + config={ + "model": { + "use_lstm": True, + }, + "vf_share_layers": False, + "num_workers": 0, # no parallelism + "env_config": { + "action_space": Discrete(2), + # Test a simple Tuple observation space. + "observation_space": Tuple([Discrete(3), + Discrete(2)]) + } + }, + env=RandomEnv, + ) + results = trainer.train() + print(results) diff --git a/rllib/models/tf/modelv1_compat.py b/rllib/models/tf/modelv1_compat.py --- a/rllib/models/tf/modelv1_compat.py +++ b/rllib/models/tf/modelv1_compat.py @@ -1,3 +1,4 @@ +import copy import logging import numpy as np @@ -6,7 +7,6 @@ from ray.rllib.models.tf.misc import linear, normc_initializer from ray.rllib.utils.annotations import override from ray.rllib.utils import try_import_tf -from ray.rllib.utils.debug import log_once from ray.rllib.utils.tf_ops import scope_vars tf = try_import_tf() @@ -124,19 +124,29 @@ def value_function(self): # Create a new separate model with no RNN state, etc. branch_model_config = self.model_config.copy() branch_model_config["free_log_std"] = False + obs_space_vf = self.obs_space + if branch_model_config["use_lstm"]: branch_model_config["use_lstm"] = False - if log_once("vf_warn"): - logger.warning( - "It is not recommended to use a LSTM model " - "with vf_share_layers=False (consider setting " - "it to True). If you want to not share " - "layers, you can implement a custom LSTM " - "model that overrides the value_function() " - "method.") + logger.warning( + "It is not recommended to use an LSTM model " + "with the `vf_share_layers=False` option. " + "If you want to use separate policy- and vf-" + "networks with LSTMs, you can implement a custom " + "LSTM model that overrides the value_function() " + "method. " + "NOTE: Your policy- and vf-NNs will use the same " + "shared LSTM!") + # Remove original space from obs-space not to trigger + # preprocessing (input to vf-NN is already vectorized + # LSTM output). 
+ obs_space_vf = copy.copy(self.obs_space) + if hasattr(obs_space_vf, "original_space"): + delattr(obs_space_vf, "original_space") + branch_instance = self.legacy_model_cls( self.cur_instance.input_dict, - self.obs_space, + obs_space_vf, self.action_space, 1, branch_model_config,
diff --git a/ci/jenkins_tests/run_rllib_tests.sh b/ci/jenkins_tests/run_rllib_tests.sh --- a/ci/jenkins_tests/run_rllib_tests.sh +++ b/ci/jenkins_tests/run_rllib_tests.sh @@ -487,3 +487,6 @@ docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ /ray/ci/suppress_output python /ray/rllib/tests/test_env_with_subprocess.py + +docker run --rm --shm-size=${SHM_SIZE} --memory=${MEMORY_SIZE} $DOCKER_SHA \ + /ray/ci/suppress_output python /ray/rllib/examples/random_env.py
[RLlib] Using LSTM model raises ValueError (Tuple obs_space, similar to #3367) ### System Information - Fedora 7.7 (Maipo) server - Ray from pip - Ray version 0.8.0 - Python version 3.6.8 ### Problem I have a custom environment with a Tuple observation space, using the default model in rllib that is trained using PPO. Everything works when I don't turn on the LSTM option. However, when I do turn it on, I get the following stack trace: ` File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 90, in __init__ Trainer.__init__(self, config, env, logger_creator) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 398, in __init__ Trainable.__init__(self, config, logger_creator) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/tune/trainable.py", line 96, in __init__ self._setup(copy.deepcopy(self.config)) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 523, in _setup self._init(self.config, self.env_creator) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer_template.py", line 109, in _init self.config["num_workers"]) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/trainer.py", line 568, in _make_workers logdir=self.logdir) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/worker_set.py", line 64, in __init__ RolloutWorker, env_creator, policy, 0, self._local_config) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/worker_set.py", line 220, in _make_worker _fake_sampler=config.get("_fake_sampler", False)) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 350, in __init__ self._build_policy_map(policy_dict, policy_config) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/evaluation/rollout_worker.py", line 766, in _build_policy_map policy_map[name] = cls(obs_space, act_space, merged_conf) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/policy/tf_policy_template.py", line 143, in __init__ obs_include_prev_action_reward=obs_include_prev_action_reward) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/policy/dynamic_tf_policy.py", line 198, in __init__ before_loss_init(self, obs_space, action_space, config) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/policy/tf_policy_template.py", line 127, in before_loss_init_wrapper self._extra_action_fetches = extra_action_fetches_fn(self) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/agents/ppo/ppo_policy.py", line 170, in vf_preds_and_logits_fetches SampleBatch.VF_PREDS: policy.model.value_function(), File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/models/tf/modelv1_compat.py", line 148, in value_function seq_lens=None) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/models/catalog.py", line 481, in get_model seq_lens) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/models/catalog.py", line 521, in _get_model num_outputs, options) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/models/model.py", line 57, in __init__ input_dict["obs"], 
obs_space) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/models/model.py", line 230, in restore_original_dimensions return _unpack_obs(obs, obs_space.original_space, tensorlib=tensorlib) File "/afs/ece.cmu.edu/usr/charlieh/.local/lib/python3.6/site-packages/ray/rllib/models/model.py", line 260, in _unpack_obs prep.shape[0], obs.shape)) ValueError: Expected flattened obs shape of [None, 143], got (?, 256) ` My state space is `obs_space = spaces.Tuple(spaces.Discrete(35),spaces.Discrete(35),spaces.Discrete(35), spaces.Discrete(3), spaces.Discrete(35))` and if I run `prep = get_preprocessor(spy_state_space)(spy_state_space) print(prep)` I get > (143,) As expected. I believe that this is similar to issue #3367 (except this time the observation space is a Tuple rather than a Dict. Am I doing something wrong or is this a bug? If requested, I can also try posting my code.
Cc @sven1977 Will try to reproduce. ... My minimal example works fine. Could you check, to see what I might do differently? ``` import gym from gym.spaces import Tuple, Discrete import numpy as np from ray.rllib.agents.ppo import PPOTrainer from ray.rllib.utils import try_import_tf tf = try_import_tf() class RandomEnv(gym.Env): """ A randomly acting environment that can be instantiated with arbitrary action and observation spaces. """ def __init__(self, config): # Action space. self.action_space = config["action_space"] # Observation space from which to sample. self.observation_space = config["observation_space"] # Reward space from which to sample. self.reward_space = config.get( "reward_space", gym.spaces.Box(low=-1.0, high=1.0, shape=(), dtype=np.float32) ) # Chance that an episode ends at any step. self.p_done = config.get("p_done", 0.1) def reset(self): return self.observation_space.sample() def step(self, action): return self.observation_space.sample(), float(self.reward_space.sample()), \ bool(np.random.choice( [True, False], p=[self.p_done, 1.0 - self.p_done] )), {} if __name__ == "__main__": trainer = PPOTrainer( config={ "model": { "use_lstm": True, }, "vf_share_layers": True, "num_workers": 0, # no parallelism "env_config": { "action_space": Discrete(2), # Test a simple Tuple observation space. "observation_space": Tuple([Discrete(2), Discrete(2)]) } }, env=RandomEnv, ) for _ in range(2): results = trainer.train() print(results) ``` Ah, got it! When I set `vf_share_layers` to False (in my example above), I get the same error as you do. Ok, this seems to be a bug. So RLlib sets `use_lstm` automatically to False if you do `vf_share_layers == False` (separate policy and vf networks) and outputs a warning: `2020-01-22 13:26:40,266 WARNING modelv1_compat.py:131 -- It is not recommended to use a LSTM model with vf_share_layers=False (consider setting it to True). If you want to not share layers, you can implement a custom LSTM model that overrides the value_function() method.` I'll fix the error message (it shouldn't be thrown at all), but in the meantime, could you set your config to `vf_share_layers=True` and model->`use_lstm=True`? Then it should work.
2020-01-22T15:21:43
ray-project/ray
6,915
ray-project__ray-6915
[ "6270" ]
e516c5074587a5bbd38822b46093d7baf0ebebd3
diff --git a/python/ray/tune/__init__.py b/python/ray/tune/__init__.py --- a/python/ray/tune/__init__.py +++ b/python/ray/tune/__init__.py @@ -6,6 +6,8 @@ from ray.tune.trainable import Trainable from ray.tune.durable_trainable import DurableTrainable from ray.tune.suggest import grid_search +from ray.tune.progress_reporter import (ProgressReporter, CLIReporter, + JupyterNotebookReporter) from ray.tune.sample import (function, sample_from, uniform, choice, randint, randn, loguniform) @@ -29,4 +31,7 @@ "loguniform", "ExperimentAnalysis", "Analysis", + "CLIReporter", + "JupyterNotebookReporter", + "ProgressReporter", ] diff --git a/python/ray/tune/progress_reporter.py b/python/ray/tune/progress_reporter.py --- a/python/ray/tune/progress_reporter.py +++ b/python/ray/tune/progress_reporter.py @@ -1,10 +1,11 @@ from __future__ import print_function import collections +import time -from ray.tune.result import (DEFAULT_RESULT_KEYS, CONFIG_PREFIX, - EPISODE_REWARD_MEAN, MEAN_ACCURACY, MEAN_LOSS, - TRAINING_ITERATION, TIME_TOTAL_S, TIMESTEPS_TOTAL) +from ray.tune.result import (CONFIG_PREFIX, EPISODE_REWARD_MEAN, MEAN_ACCURACY, + MEAN_LOSS, TRAINING_ITERATION, TIME_TOTAL_S, + TIMESTEPS_TOTAL) from ray.tune.utils import flatten_dict try: @@ -14,67 +15,202 @@ "Please re-run 'pip install ray[tune]' or " "'pip install ray[rllib]'.") -DEFAULT_PROGRESS_KEYS = DEFAULT_RESULT_KEYS + (EPISODE_REWARD_MEAN, ) -# Truncated representations of column names (to accommodate small screens). -REPORTED_REPRESENTATIONS = { - EPISODE_REWARD_MEAN: "reward", - MEAN_ACCURACY: "acc", - MEAN_LOSS: "loss", - TIME_TOTAL_S: "total time (s)", - TIMESTEPS_TOTAL: "timesteps", - TRAINING_ITERATION: "iter", -} - class ProgressReporter: - # TODO(ujvl): Expose ProgressReporter in tune.run for custom reporting. + """Abstract class for experiment progress reporting. + + `should_report()` is called to determine whether or not `report()` should + be called. Tune will call these functions after trial state transitions, + receiving training results, and so on. + """ - def report(self, trial_runner): - """Reports progress across all trials of the trial runner. + def should_report(self, trials, done=False): + """Returns whether or not progress should be reported. Args: - trial_runner: Trial runner to report on. + trials (list[Trial]): Trials to report on. + done (bool): Whether this is the last progress report attempt. """ raise NotImplementedError + def report(self, trials, *sys_info): + """Reports progress across trials. + + Args: + trials (list[Trial]): Trials to report on. + sys_info: System info. + """ + raise NotImplementedError + + +class TuneReporterBase(ProgressReporter): + """Abstract base class for the default Tune reporters.""" + + # Truncated representations of column names (to accommodate small screens). + DEFAULT_COLUMNS = { + EPISODE_REWARD_MEAN: "reward", + MEAN_ACCURACY: "acc", + MEAN_LOSS: "loss", + TIME_TOTAL_S: "total time (s)", + TIMESTEPS_TOTAL: "ts", + TRAINING_ITERATION: "iter", + } + + def __init__(self, + metric_columns=None, + max_progress_rows=20, + max_error_rows=20, + max_report_frequency=5): + """Initializes a new TuneReporterBase. + + Args: + metric_columns (dict[str, str]|list[str]): Names of metrics to + include in progress table. If this is a dict, the keys should + be metric names and the values should be the displayed names. + If this is a list, the metric name is used directly. + max_progress_rows (int): Maximum number of rows to print + in the progress table. 
The progress table describes the + progress of each trial. Defaults to 20. + max_error_rows (int): Maximum number of rows to print in the + error table. The error table lists the error file, if any, + corresponding to each trial. Defaults to 20. + max_report_frequency (int): Maximum report frequency in seconds. + Defaults to 5s. + """ + self._metric_columns = metric_columns or self.DEFAULT_COLUMNS + self._max_progress_rows = max_progress_rows + self._max_error_rows = max_error_rows + + self._max_report_freqency = max_report_frequency + self._last_report_time = 0 + + def should_report(self, trials, done=False): + if time.time() - self._last_report_time > self._max_report_freqency: + self._last_report_time = time.time() + return True + return done -class JupyterNotebookReporter(ProgressReporter): - def __init__(self, overwrite): + def add_metric_column(self, metric, representation=None): + """Adds a metric to the existing columns. + + Args: + metric (str): Metric to add. This must be a metric being returned + in training step results. + representation (str): Representation to use in table. Defaults to + `metric`. + """ + if metric in self._metric_columns: + raise ValueError("Column {} already exists.".format(metric)) + + if isinstance(self._metric_columns, collections.Mapping): + representation = representation or metric + self._metric_columns[metric] = representation + else: + if representation is not None and representation != metric: + raise ValueError( + "`representation` cannot differ from `metric` " + "if this reporter was initialized with a list " + "of metric columns.") + self._metric_columns.append(metric) + + def _progress_str(self, trials, *sys_info, fmt="psql", delim="\n"): + """Returns full progress string. + + This string contains a progress table and error table. The progress + table describes the progress of each trial. The error table lists + the error file, if any, corresponding to each trial. The latter only + exists if errors have occurred. + + Args: + trials (list[Trial]): Trials to report on. + fmt (str): Table format. See `tablefmt` in tabulate API. + delim (str): Delimiter between messages. + """ + messages = ["== Status ==", memory_debug_str(), *sys_info] + if self._max_progress_rows > 0: + messages.append( + trial_progress_str( + trials, + metric_columns=self._metric_columns, + fmt=fmt, + max_rows=self._max_progress_rows)) + if self._max_error_rows > 0: + messages.append( + trial_errors_str( + trials, fmt=fmt, max_rows=self._max_error_rows)) + return delim.join(messages) + delim + + +class JupyterNotebookReporter(TuneReporterBase): + """Jupyter notebook-friendly Reporter that can update display in-place.""" + + def __init__(self, + overwrite, + metric_columns=None, + max_progress_rows=20, + max_error_rows=20, + max_report_frequency=5): """Initializes a new JupyterNotebookReporter. Args: overwrite (bool): Flag for overwriting the last reported progress. + metric_columns (dict[str, str]|list[str]): Names of metrics to + include in progress table. If this is a dict, the keys should + be metric names and the values should be the displayed names. + If this is a list, the metric name is used directly. + max_progress_rows (int): Maximum number of rows to print + in the progress table. The progress table describes the + progress of each trial. Defaults to 20. + max_error_rows (int): Maximum number of rows to print in the + error table. The error table lists the error file, if any, + corresponding to each trial. Defaults to 20. 
+ max_report_frequency (int): Maximum report frequency in seconds. + Defaults to 5s. """ - self.overwrite = overwrite - - def report(self, trial_runner): - delim = "<br>" - messages = [ - "== Status ==", - memory_debug_str(), - trial_runner.scheduler_alg.debug_string(), - trial_runner.trial_executor.debug_string(), - trial_progress_str(trial_runner.get_trials(), fmt="html"), - trial_errors_str(trial_runner.get_trials(), fmt="html"), - ] + super(JupyterNotebookReporter, + self).__init__(metric_columns, max_progress_rows, max_error_rows, + max_report_frequency) + self._overwrite = overwrite + + def report(self, trials, *sys_info): from IPython.display import clear_output from IPython.core.display import display, HTML - if self.overwrite: + if self._overwrite: clear_output(wait=True) - display(HTML(delim.join(messages) + delim)) + progress_str = self._progress_str( + trials, *sys_info, fmt="html", delim="<br>") + display(HTML(progress_str)) + +class CLIReporter(TuneReporterBase): + """Command-line reporter""" + + def __init__(self, + metric_columns=None, + max_progress_rows=20, + max_error_rows=20, + max_report_frequency=5): + """Initializes a CLIReporter. + + Args: + metric_columns (dict[str, str]|list[str]): Names of metrics to + include in progress table. If this is a dict, the keys should + be metric names and the values should be the displayed names. + If this is a list, the metric name is used directly. + max_progress_rows (int): Maximum number of rows to print + in the progress table. The progress table describes the + progress of each trial. Defaults to 20. + max_error_rows (int): Maximum number of rows to print in the + error table. The error table lists the error file, if any, + corresponding to each trial. Defaults to 20. + max_report_frequency (int): Maximum report frequency in seconds. + Defaults to 5s. + """ + super(CLIReporter, self).__init__(metric_columns, max_progress_rows, + max_error_rows, max_report_frequency) -class CLIReporter(ProgressReporter): - def report(self, trial_runner): - messages = [ - "== Status ==", - memory_debug_str(), - trial_runner.scheduler_alg.debug_string(), - trial_runner.trial_executor.debug_string(), - trial_progress_str(trial_runner.get_trials()), - trial_errors_str(trial_runner.get_trials()), - ] - print("\n".join(messages) + "\n") + def report(self, trials, *sys_info): + print(self._progress_str(trials, *sys_info)) def memory_debug_str(): @@ -98,18 +234,21 @@ def memory_debug_str(): "(or ray[debug]) to resolve)") -def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=20): +def trial_progress_str(trials, metric_columns, fmt="psql", max_rows=None): """Returns a human readable message for printing to the console. This contains a table where each row represents a trial, its parameters and the current values of its metrics. Args: - trials (List[Trial]): List of trials to get progress string for. - metrics (List[str]): Names of metrics to include. Defaults to - metrics defined in DEFAULT_RESULT_KEYS. + trials (list[Trial]): List of trials to get progress string for. + metric_columns (dict[str, str]|list[str]): Names of metrics to include. + If this is a dict, the keys are metric names and the values are + the names to use in the message. If this is a list, the metric + name is used in the message directly. fmt (str): Output format (see tablefmt in tabulate API). - max_rows (int): Maximum number of rows in the trial table. + max_rows (int): Maximum number of rows in the trial table. Defaults to + unlimited. 
""" messages = [] delim = "<br>" if fmt == "html" else "\n" @@ -131,6 +270,7 @@ def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=20): messages.append("Number of trials: {} ({})".format( num_trials, ", ".join(num_trials_strs))) + max_rows = max_rows or float("inf") if num_trials > max_rows: # TODO(ujvl): suggestion for users to view more rows. trials_by_state_trunc = _fair_filter_trials(trials_by_state, max_rows) @@ -148,33 +288,41 @@ def trial_progress_str(trials, metrics=None, fmt="psql", max_rows=20): "shown.".format(max_rows, overflow, overflow_str)) # Pre-process trials to figure out what columns to show. - keys = list(metrics or DEFAULT_PROGRESS_KEYS) + if isinstance(metric_columns, collections.Mapping): + keys = list(metric_columns.keys()) + else: + keys = metric_columns keys = [k for k in keys if any(t.last_result.get(k) for t in trials)] # Build trial rows. params = list(set().union(*[t.evaluated_params for t in trials])) trial_table = [_get_trial_info(trial, params, keys) for trial in trials] - # Parse columns. - parsed_columns = [REPORTED_REPRESENTATIONS.get(k, k) for k in keys] - columns = ["Trial name", "status", "loc"] - columns += params + parsed_columns + # Format column headings + if isinstance(metric_columns, collections.Mapping): + formatted_columns = [metric_columns[k] for k in keys] + else: + formatted_columns = keys + columns = ["Trial name", "status", "loc"] + params + formatted_columns + # Tabulate. messages.append( tabulate(trial_table, headers=columns, tablefmt=fmt, showindex=False)) return delim.join(messages) -def trial_errors_str(trials, fmt="psql", max_rows=20): +def trial_errors_str(trials, fmt="psql", max_rows=None): """Returns a readable message regarding trial errors. Args: - trials (List[Trial]): List of trials to get progress string for. + trials (list[Trial]): List of trials to get progress string for. fmt (str): Output format (see tablefmt in tabulate API). - max_rows (int): Maximum number of rows in the error table. + max_rows (int): Maximum number of rows in the error table. Defaults to + unlimited. """ messages = [] failed = [t for t in trials if t.error_file] num_failed = len(failed) if num_failed > 0: messages.append("Number of errored trials: {}".format(num_failed)) + max_rows = max_rows or float("inf") if num_failed > max_rows: messages.append("Table truncated to {} rows ({} overflow)".format( max_rows, num_failed - max_rows)) @@ -196,7 +344,7 @@ def _fair_filter_trials(trials_by_state, max_trials): The oldest trials are truncated if necessary. Args: - trials_by_state (Dict[str, List[Trial]]: Trials by state. + trials_by_state (dict[str, list[Trial]]: Trials by state. max_trials (int): Maximum number of trials to return. Returns: Dict mapping state to List of fairly represented trials. @@ -234,8 +382,8 @@ def _get_trial_info(trial, parameters, metrics): Args: trial (Trial): Trial to get information for. - parameters (List[str]): Names of trial parameters to include. - metrics (List[str]): Names of metrics to include. + parameters (list[str]): Names of trial parameters to include. + metrics (list[str]): Names of metrics to include. 
""" result = flatten_dict(trial.last_result) trial_info = [str(trial), trial.status, str(trial.location)] diff --git a/python/ray/tune/tune.py b/python/ray/tune/tune.py --- a/python/ray/tune/tune.py +++ b/python/ray/tune/tune.py @@ -1,12 +1,11 @@ import logging -import time import six from ray.tune.error import TuneError from ray.tune.experiment import convert_to_experiment_list, Experiment from ray.tune.analysis import ExperimentAnalysis from ray.tune.suggest import BasicVariantGenerator -from ray.tune.trial import Trial, DEBUG_PRINT_INTERVAL +from ray.tune.trial import Trial from ray.tune.trainable import Trainable from ray.tune.ray_trial_executor import RayTrialExecutor from ray.tune.registry import get_trainable_cls @@ -51,6 +50,21 @@ def _check_default_resources_override(run_identifier): Trainable.default_resource_request.__code__) +def _report_progress(runner, reporter, done=False): + """Reports experiment progress. + + Args: + runner (TrialRunner): Trial runner to report on. + reporter (ProgressReporter): Progress reporter. + done (bool): Whether this is the last progress report attempt. + """ + trials = runner.get_trials() + if reporter.should_report(trials, done=done): + sched_debug_str = runner.scheduler_alg.debug_string() + executor_debug_str = runner.trial_executor.debug_string() + reporter.report(trials, sched_debug_str, executor_debug_str) + + def run(run_or_experiment, name=None, stop=None, @@ -77,6 +91,7 @@ def run(run_or_experiment, with_server=False, server_port=TuneServer.DEFAULT_PORT, verbose=2, + progress_reporter=None, resume=False, queue_trials=False, reuse_actors=False, @@ -169,6 +184,10 @@ def run(run_or_experiment, server_port (int): Port number for launching TuneServer. verbose (int): 0, 1, or 2. Verbosity mode. 0 = silent, 1 = only status updates, 2 = status and trial results. + progress_reporter (ProgressReporter): Progress reporter for reporting + intermediate experiment progress. Defaults to CLIReporter if + running in command-line, or JupyterNotebookReporter if running in + a Jupyter notebook. resume (str|bool): One of "LOCAL", "REMOTE", "PROMPT", or bool. LOCAL/True restores the checkpoint from the local_checkpoint_dir. REMOTE restores the checkpoint from remote_checkpoint_dir. 
@@ -272,10 +291,11 @@ def run(run_or_experiment, for exp in experiments: runner.add_experiment(exp) - if IS_NOTEBOOK: - reporter = JupyterNotebookReporter(overwrite=verbose < 2) - else: - reporter = CLIReporter() + if progress_reporter is None: + if IS_NOTEBOOK: + progress_reporter = JupyterNotebookReporter(overwrite=verbose < 2) + else: + progress_reporter = CLIReporter() # User Warning for GPUs if trial_executor.has_gpus(): @@ -295,13 +315,10 @@ def run(run_or_experiment, "`Trainable.default_resource_request` if using the " "Trainable API.") - last_debug = 0 while not runner.is_finished(): runner.step() - if time.time() - last_debug > DEBUG_PRINT_INTERVAL: - if verbose: - reporter.report(runner) - last_debug = time.time() + if verbose: + _report_progress(runner, progress_reporter) try: runner.checkpoint(force=True) @@ -309,7 +326,7 @@ def run(run_or_experiment, logger.exception("Trial Runner checkpointing failed.") if verbose: - reporter.report(runner) + _report_progress(runner, progress_reporter, done=True) wait_for_sync() @@ -339,6 +356,7 @@ def run_experiments(experiments, with_server=False, server_port=TuneServer.DEFAULT_PORT, verbose=2, + progress_reporter=None, resume=False, queue_trials=False, reuse_actors=False, @@ -380,6 +398,7 @@ def run_experiments(experiments, with_server=with_server, server_port=server_port, verbose=verbose, + progress_reporter=progress_reporter, resume=resume, queue_trials=queue_trials, reuse_actors=reuse_actors, @@ -396,6 +415,7 @@ def run_experiments(experiments, with_server=with_server, server_port=server_port, verbose=verbose, + progress_reporter=progress_reporter, resume=resume, queue_trials=queue_trials, reuse_actors=reuse_actors,
diff --git a/python/ray/tune/tests/test_progress_reporter.py b/python/ray/tune/tests/test_progress_reporter.py --- a/python/ray/tune/tests/test_progress_reporter.py +++ b/python/ray/tune/tests/test_progress_reporter.py @@ -4,7 +4,7 @@ from unittest.mock import MagicMock from ray.tune.trial import Trial -from ray.tune.progress_reporter import _fair_filter_trials +from ray.tune.progress_reporter import CLIReporter, _fair_filter_trials class ProgressReporterTest(unittest.TestCase): @@ -48,3 +48,22 @@ def testFairFilterTrials(self): for i in range(len(state_trials) - 1): self.assertGreaterEqual(state_trials[i].start_time, state_trials[i + 1].start_time) + + def testAddMetricColumn(self): + """Tests edge cases of add_metric_column.""" + + # Test list-initialized metric columns. + reporter = CLIReporter(metric_columns=["foo", "bar"]) + with self.assertRaises(ValueError): + reporter.add_metric_column("bar") + + with self.assertRaises(ValueError): + reporter.add_metric_column("baz", "qux") + + reporter.add_metric_column("baz") + self.assertIn("baz", reporter._metric_columns) + + # Test default-initialized (dict) metric columns. + reporter = CLIReporter() + reporter.add_metric_column("foo", "bar") + self.assertIn("foo", reporter._metric_columns)
[tune] Adding train_acc and eval_acc tags in the layout of the result table
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Linux Ubuntu 16.04
- **Ray installed from (source or binary)**: source
- **Ray version**: 0.8.0dev
- **Python version**: 3.6
- **Exact command to reproduce**:
### Describe the problem
![image](https://user-images.githubusercontent.com/33815430/69534534-fc523f80-0fb4-11ea-90e4-eacf6af87e1f.png)
What does the `acc` in the last column refer to, train_acc or test_acc? Can we edit the table layout, for example by adding train_acc and eval_acc columns? Thanks.
`acc` here refers to `mean_accuracy`, which is whatever you set it to in the `result`. > Can we edit the table layout like adding train_acc and eval_acc ? Currently it isn't possible to add custom metrics but I agree that could be useful to have.
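For readers landing on this record: the patch above is what eventually made the reporter columns configurable. A minimal sketch of how the new options could be used, based only on the diff shown here; the trainable and the `train_acc`/`eval_acc` result keys are assumptions about the user's own training code:

```python
from ray import tune
from ray.tune.progress_reporter import CLIReporter

# Map result keys to the column headings shown in the status table.
# "train_acc" and "eval_acc" are assumed to be keys the trainable reports.
reporter = CLIReporter(metric_columns={
    "training_iteration": "iter",
    "train_acc": "train_acc",
    "eval_acc": "eval_acc",
})

# Columns can also be added after construction.
reporter.add_metric_column("episode_reward_mean", "reward")

# `my_trainable` is a placeholder for the user's trainable.
tune.run(my_trainable, progress_reporter=reporter)
```

Passing a dict both selects which result keys appear in the table and controls the short display names used for them.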
2020-01-24T09:50:38
ray-project/ray
6,916
ray-project__ray-6916
[ "6659" ]
e516c5074587a5bbd38822b46093d7baf0ebebd3
diff --git a/python/ray/tune/schedulers/async_hyperband.py b/python/ray/tune/schedulers/async_hyperband.py --- a/python/ray/tune/schedulers/async_hyperband.py +++ b/python/ray/tune/schedulers/async_hyperband.py @@ -141,7 +141,8 @@ def __init__(self, min_t, max_t, reduction_factor, s): def cutoff(self, recorded): if not recorded: return None - return np.percentile(list(recorded.values()), (1 - 1 / self.rf) * 100) + return np.nanpercentile( + list(recorded.values()), (1 - 1 / self.rf) * 100) def on_result(self, trial, cur_iter, cur_rew): action = TrialScheduler.CONTINUE
diff --git a/python/ray/tune/tests/test_trial_scheduler.py b/python/ray/tune/tests/test_trial_scheduler.py --- a/python/ray/tune/tests/test_trial_scheduler.py +++ b/python/ray/tune/tests/test_trial_scheduler.py @@ -1091,6 +1091,21 @@ def basicSetup(self, scheduler): TrialScheduler.CONTINUE) return t1, t2 + def nanSetup(self, scheduler): + t1 = Trial("PPO") # mean is 450, max 450, t_max=10 + t2 = Trial("PPO") # mean is nan, max nan, t_max=10 + scheduler.on_trial_add(None, t1) + scheduler.on_trial_add(None, t2) + for i in range(10): + self.assertEqual( + scheduler.on_trial_result(None, t1, result(i, 450)), + TrialScheduler.CONTINUE) + for i in range(10): + self.assertEqual( + scheduler.on_trial_result(None, t2, result(i, np.nan)), + TrialScheduler.CONTINUE) + return t1, t2 + def testAsyncHBOnComplete(self): scheduler = AsyncHyperBandScheduler(max_t=10, brackets=1) t1, t2 = self.basicSetup(scheduler) @@ -1145,6 +1160,21 @@ def testAsyncHBUsesPercentile(self): scheduler.on_trial_result(None, t3, result(2, 260)), TrialScheduler.STOP) + def testAsyncHBNanPercentile(self): + scheduler = AsyncHyperBandScheduler( + grace_period=1, max_t=10, reduction_factor=2, brackets=1) + t1, t2 = self.nanSetup(scheduler) + scheduler.on_trial_complete(None, t1, result(10, 450)) + scheduler.on_trial_complete(None, t2, result(10, np.nan)) + t3 = Trial("PPO") + scheduler.on_trial_add(None, t3) + self.assertEqual( + scheduler.on_trial_result(None, t3, result(1, 260)), + TrialScheduler.STOP) + self.assertEqual( + scheduler.on_trial_result(None, t3, result(2, 260)), + TrialScheduler.STOP) + def _test_metrics(self, result_func, metric, mode): scheduler = AsyncHyperBandScheduler( grace_period=1,
[tune] Handle nan case for AsynchScheduler
To deal with the case of recording nan values, I would suggest changing np.percentile to np.nanpercentile in this line.
https://github.com/ray-project/ray/blob/f7455839bf5686cf990c9e6625c6ada9a3ffd7c8/python/ray/tune/schedulers/async_hyperband.py#L145-L148
As stated in https://docs.scipy.org/doc/numpy/reference/generated/numpy.nanpercentile.html#numpy.nanpercentile, any time a nan result is recorded, the cutoff will be nan:
>>> a
array([[10., nan,  4.],
       [ 3.,  2.,  1.]])
>>> np.percentile(a, 50)
nan
This ultimately leads to the following line evaluating to false, preventing any following trial from stopping.
https://github.com/ray-project/ray/blob/f7455839bf5686cf990c9e6625c6ada9a3ffd7c8/python/ray/tune/schedulers/async_hyperband.py#L157-L158
I think the change is good. cc @richardliaw to confirm. Thanks for reporting the issue. @hugwi would you want to create a pull request for it? perhaps with a unit test. This change sounds good! @hugwi could you push a PR? @richardliaw @hugwi any follow up on this? Yeah, sorry haven't had time. Should be able to push a PR at the weekend @richardliaw @edoakes
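To make the suggested change concrete, here is a small numpy-only illustration of the difference between the two functions (the recorded values are made up):

```python
import numpy as np

recorded = np.array([450.0, np.nan, 300.0])

# np.percentile propagates nan, so the cutoff comparison is never True.
print(np.percentile(recorded, 50))     # nan

# np.nanpercentile ignores nan entries, so a finite cutoff is still produced.
print(np.nanpercentile(recorded, 50))  # 375.0
```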
2020-01-24T13:31:31
ray-project/ray
6,969
ray-project__ray-6969
[ "6952" ]
5bdfc50bf6ffeca914a91855366129410a75bb5f
diff --git a/python/ray/experimental/joblib/__init__.py b/python/ray/experimental/joblib/__init__.py new file mode 100644 --- /dev/null +++ b/python/ray/experimental/joblib/__init__.py @@ -0,0 +1,17 @@ +from joblib.parallel import register_parallel_backend + + +def register_ray(): + """ Register Ray Backend to be called with parallel_backend("ray"). """ + try: + from ray.experimental.joblib.ray_backend import RayBackend + register_parallel_backend("ray", RayBackend) + except ImportError: + msg = ("To use the ray backend you must install ray." + "Try running 'pip install ray'." + "See https://ray.readthedocs.io/en/latest/installation.html" + "for more information.") + raise ImportError(msg) + + +__all__ = ["register_ray"] diff --git a/python/ray/experimental/joblib/ray_backend.py b/python/ray/experimental/joblib/ray_backend.py new file mode 100644 --- /dev/null +++ b/python/ray/experimental/joblib/ray_backend.py @@ -0,0 +1,58 @@ +from joblib._parallel_backends import MultiprocessingBackend +from joblib.pool import PicklingPool +import logging + +from ray.experimental.multiprocessing.pool import Pool +import ray + +RAY_ADDRESS_ENV = "RAY_ADDRESS" + +logger = logging.getLogger(__name__) + + +class RayBackend(MultiprocessingBackend): + """Ray backend uses ray, a system for scalable distributed computing. + More info about Ray is available here: https://ray.readthedocs.io. + """ + + def configure(self, + n_jobs=1, + parallel=None, + prefer=None, + require=None, + **memmappingpool_args): + """Make Ray Pool the father class of PicklingPool. PicklingPool is a + father class that inherits Pool from multiprocessing.pool. The next + line is a patch, which changes the inheritance of Pool to be from + ray.experimental.multiprocessing.pool. + """ + PicklingPool.__bases__ = (Pool, ) + """Use all available resources when n_jobs == -1. Must set RAY_ADDRESS + variable in the environment or run ray.init(address=..) to run on + multiple nodes. + """ + if n_jobs == -1: + if not ray.is_initialized(): + import os + if RAY_ADDRESS_ENV in os.environ: + ray_address = os.environ[RAY_ADDRESS_ENV] + logger.info( + "Connecting to ray cluster at address='{}'".format( + ray_address)) + ray.init(address=ray_address) + else: + logger.info("Starting local ray cluster") + ray.init() + ray_cpus = int(ray.state.cluster_resources()["CPU"]) + n_jobs = ray_cpus + + eff_n_jobs = super(RayBackend, self).configure( + n_jobs, parallel, prefer, require, **memmappingpool_args) + return eff_n_jobs + + def effective_n_jobs(self, n_jobs): + eff_n_jobs = super(RayBackend, self).effective_n_jobs(n_jobs) + if n_jobs == -1: + ray_cpus = int(ray.state.cluster_resources()["CPU"]) + eff_n_jobs = ray_cpus + return eff_n_jobs
diff --git a/python/ray/tests/BUILD b/python/ray/tests/BUILD --- a/python/ray/tests/BUILD +++ b/python/ray/tests/BUILD @@ -259,6 +259,14 @@ py_test( deps = ["//:ray_lib"], ) +py_test( + name = "test_joblib", + size = "medium", + srcs = ["test_joblib.py"], + tags = ["exclusive"], + deps = ["//:ray_lib"], +) + py_test( name = "test_multi_node_2", size = "medium", diff --git a/python/ray/tests/test_joblib.py b/python/ray/tests/test_joblib.py new file mode 100644 --- /dev/null +++ b/python/ray/tests/test_joblib.py @@ -0,0 +1,159 @@ +import numpy as np +import joblib +from sklearn.datasets import load_digits, load_iris +from sklearn.model_selection import RandomizedSearchCV +from time import time +from sklearn.datasets import fetch_openml +from sklearn.ensemble import ExtraTreesClassifier +from sklearn.ensemble import RandomForestClassifier +from sklearn.kernel_approximation import Nystroem +from sklearn.kernel_approximation import RBFSampler +from sklearn.pipeline import make_pipeline +from sklearn.svm import LinearSVC, SVC +from sklearn.tree import DecisionTreeClassifier +from sklearn.utils import check_array +from sklearn.linear_model import LogisticRegression +from sklearn.neural_network import MLPClassifier +from sklearn.model_selection import cross_val_score + +from ray.experimental.joblib import register_ray +import ray + + +def test_register_ray(): + register_ray() + assert "ray" in joblib.parallel.BACKENDS + assert not ray.is_initialized() + + +def test_ray_backend(shutdown_only): + register_ray() + from ray.experimental.joblib.ray_backend import RayBackend + with joblib.parallel_backend("ray"): + assert type(joblib.parallel.get_active_backend()[0]) == RayBackend + + +def test_svm_single_node(shutdown_only): + digits = load_digits() + param_space = { + "C": np.logspace(-6, 6, 10), + "gamma": np.logspace(-8, 8, 10), + "tol": np.logspace(-4, -1, 3), + "class_weight": [None, "balanced"], + } + + model = SVC(kernel="rbf") + search = RandomizedSearchCV( + model, param_space, cv=3, n_iter=50, verbose=10) + register_ray() + with joblib.parallel_backend("ray"): + search.fit(digits.data, digits.target) + assert ray.is_initialized() + + +def test_svm_multiple_nodes(ray_start_cluster_2_nodes): + digits = load_digits() + param_space = { + "C": np.logspace(-6, 6, 30), + "gamma": np.logspace(-8, 8, 30), + "tol": np.logspace(-4, -1, 30), + "class_weight": [None, "balanced"], + } + + model = SVC(kernel="rbf") + search = RandomizedSearchCV( + model, param_space, cv=5, n_iter=100, verbose=10) + register_ray() + with joblib.parallel_backend("ray"): + search.fit(digits.data, digits.target) + assert ray.is_initialized() + + +"""This test only makes sure the different sklearn classifiers are supported +and do not fail. It can be improved to check for accuracy similar to +'test_cross_validation' but the classifiers need to be improved (to improve +the accuracy), which results in longer test time. 
+""" + + +def test_sklearn_benchmarks(ray_start_cluster_2_nodes): + ESTIMATORS = { + "CART": DecisionTreeClassifier(), + "ExtraTrees": ExtraTreesClassifier(n_estimators=10), + "RandomForest": RandomForestClassifier(), + "Nystroem-SVM": make_pipeline( + Nystroem(gamma=0.015, n_components=1000), LinearSVC(C=1)), + "SampledRBF-SVM": make_pipeline( + RBFSampler(gamma=0.015, n_components=1000), LinearSVC(C=1)), + "LogisticRegression-SAG": LogisticRegression( + solver="sag", tol=1e-1, C=1e4), + "LogisticRegression-SAGA": LogisticRegression( + solver="saga", tol=1e-1, C=1e4), + "MultilayerPerceptron": MLPClassifier( + hidden_layer_sizes=(32, 32), + max_iter=100, + alpha=1e-4, + solver="sgd", + learning_rate_init=0.2, + momentum=0.9, + verbose=1, + tol=1e-2, + random_state=1), + "MLP-adam": MLPClassifier( + hidden_layer_sizes=(32, 32), + max_iter=100, + alpha=1e-4, + solver="adam", + learning_rate_init=0.001, + verbose=1, + tol=1e-2, + random_state=1) + } + # Load dataset. + print("Loading dataset...") + data = fetch_openml("mnist_784") + X = check_array(data["data"], dtype=np.float32, order="C") + y = data["target"] + + # Normalize features. + X = X / 255 + + # Create train-test split. + print("Creating train-test split...") + n_train = 6000 + X_train = X[:n_train] + y_train = y[:n_train] + register_ray() + + train_time = {} + random_seed = 0 + # Use two workers per classifier. + num_jobs = 2 + with joblib.parallel_backend("ray"): + for name in sorted(ESTIMATORS.keys()): + print("Training %s ... " % name, end="") + estimator = ESTIMATORS[name] + estimator_params = estimator.get_params() + estimator.set_params( + **{ + p: random_seed + for p in estimator_params if p.endswith("random_state") + }) + + if "n_jobs" in estimator_params: + estimator.set_params(n_jobs=num_jobs) + time_start = time() + estimator.fit(X_train, y_train) + train_time[name] = time() - time_start + print("training", name, "took", train_time[name], "seconds") + + +def test_cross_validation(shutdown_only): + register_ray() + iris = load_iris() + clf = SVC(kernel="linear", C=1, random_state=0) + with joblib.parallel_backend("ray", n_jobs=5): + accuracy = cross_val_score(clf, iris.data, iris.target, cv=5) + assert len(accuracy) == 5 + for result in accuracy: + assert result > 0.95
Test joblib failing on master
tests:test_joblib has been failing due to missing import `joblib`.
You probably need to modify `./ci/travis/install-dependencies.sh` to install joblib. @AmeerHajAli
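For context, the patch in this record adds a Ray backend for joblib. A minimal usage sketch modeled on the `test_cross_validation` case in the test patch above; note that the `ray.experimental.joblib` module path follows this PR and may live elsewhere in later releases:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

from ray.experimental.joblib import register_ray

register_ray()  # makes "ray" available as a joblib backend name

iris = load_iris()
clf = SVC(kernel="linear", C=1, random_state=0)

# The cross-validation folds run on Ray workers instead of local processes.
with joblib.parallel_backend("ray", n_jobs=5):
    scores = cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)
```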
2020-01-30T07:28:01
ray-project/ray
7,065
ray-project__ray-7065
[ "6693" ]
93ed86f17567128f9cac2767da2f134203320f0b
diff --git a/python/ray/autoscaler/commands.py b/python/ray/autoscaler/commands.py --- a/python/ray/autoscaler/commands.py +++ b/python/ray/autoscaler/commands.py @@ -441,7 +441,12 @@ def _exec(updater, cmd, screen, tmux, port_forward=None): port_forward=port_forward) -def rsync(config_file, source, target, override_cluster_name, down): +def rsync(config_file, + source, + target, + override_cluster_name, + down, + all_nodes=False): """Rsyncs files. Arguments: @@ -450,6 +455,7 @@ def rsync(config_file, source, target, override_cluster_name, down): target: target dir override_cluster_name: set the name of the cluster down: whether we're syncing remote -> local + all_nodes: whether to sync worker nodes in addition to the head node """ assert bool(source) == bool(target), ( "Must either provide both or neither source and target.") @@ -458,32 +464,46 @@ def rsync(config_file, source, target, override_cluster_name, down): if override_cluster_name is not None: config["cluster_name"] = override_cluster_name config = _bootstrap_config(config) - head_node = _get_head_node( - config, config_file, override_cluster_name, create_if_needed=False) provider = get_node_provider(config["provider"], config["cluster_name"]) try: - updater = NodeUpdaterThread( - node_id=head_node, - provider_config=config["provider"], - provider=provider, - auth_config=config["auth"], - cluster_name=config["cluster_name"], - file_mounts=config["file_mounts"], - initialization_commands=[], - setup_commands=[], - ray_start_commands=[], - runtime_hash="", - ) - if down: - rsync = updater.rsync_down - else: - rsync = updater.rsync_up - - if source and target: - rsync(source, target) - else: - updater.sync_file_mounts(rsync) + nodes = [] + if all_nodes: + # technically we re-open the provider for no reason + # in get_worker_nodes but it's cleaner this way + # and _get_head_node does this too + nodes = _get_worker_nodes(config, override_cluster_name) + + nodes += [ + _get_head_node( + config, + config_file, + override_cluster_name, + create_if_needed=False) + ] + + for node_id in nodes: + updater = NodeUpdaterThread( + node_id=node_id, + provider_config=config["provider"], + provider=provider, + auth_config=config["auth"], + cluster_name=config["cluster_name"], + file_mounts=config["file_mounts"], + initialization_commands=[], + setup_commands=[], + ray_start_commands=[], + runtime_hash="", + ) + if down: + rsync = updater.rsync_down + else: + rsync = updater.rsync_up + + if source and target: + rsync(source, target) + else: + updater.sync_file_mounts(rsync) finally: provider.cleanup() @@ -530,6 +550,21 @@ def get_worker_node_ips(config_file, override_cluster_name): provider.cleanup() +def _get_worker_nodes(config, override_cluster_name): + """Returns worker node ids for given configuration.""" + # todo: technically could be reused in get_worker_node_ips + if override_cluster_name is not None: + config["cluster_name"] = override_cluster_name + + provider = get_node_provider(config["provider"], config["cluster_name"]) + try: + return provider.non_terminated_nodes({ + TAG_RAY_NODE_TYPE: NODE_TYPE_WORKER + }) + finally: + provider.cleanup() + + def _get_head_node(config, config_file, override_cluster_name, diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py --- a/python/ray/scripts/scripts.py +++ b/python/ray/scripts/scripts.py @@ -647,8 +647,20 @@ def rsync_down(cluster_config_file, source, target, cluster_name): required=False, type=str, help="Override the configured cluster name.") -def 
rsync_up(cluster_config_file, source, target, cluster_name): - rsync(cluster_config_file, source, target, cluster_name, down=False) [email protected]( + "--all-nodes", + "-A", + is_flag=True, + required=False, + help="Upload to all nodes (workers and head).") +def rsync_up(cluster_config_file, source, target, cluster_name, all_nodes): + rsync( + cluster_config_file, + source, + target, + cluster_name, + down=False, + all_nodes=all_nodes) @cli.command(context_settings={"ignore_unknown_options": True})
rsync_up should have an option to sync with the workers
The `rsync_up` command would be a lot more useful if it synced not only between the local host and the head node, but also with the worker nodes. If there is a mapped path between all the nodes, I don't see any reason for this not to be feasible. The tools seem to be in place for this feature, so I guess the changes just need to be made to the CLI.
Usage example: `ray rsync_up --sync-workers cluster.yaml`
Does `ray up cluster.yaml --restart-only` work for you? I don't think it does. But, anyway, IMO it makes sense that rsync_up supports this
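The merged change (diff above) exposes this as an `--all-nodes` / `-A` flag on the rsync-up command rather than the proposed `--sync-workers`. A minimal sketch of the equivalent Python call, following the updated `rsync` signature from the diff; the config path and directories are placeholders:

```python
from ray.autoscaler.commands import rsync

# Push a local directory to the head node and every worker node.
rsync(
    config_file="cluster.yaml",   # placeholder cluster config
    source="./src/",
    target="~/src/",
    override_cluster_name=None,
    down=False,      # local -> cluster
    all_nodes=True,  # also sync the worker nodes, not just the head
)
```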
2020-02-05T20:13:11
ray-project/ray
7,111
ray-project__ray-7111
[ "7108" ]
58c94f6381922b2f1b8cd53b5c351468c0537898
diff --git a/rllib/optimizers/async_replay_optimizer.py b/rllib/optimizers/async_replay_optimizer.py --- a/rllib/optimizers/async_replay_optimizer.py +++ b/rllib/optimizers/async_replay_optimizer.py @@ -3,15 +3,16 @@ https://arxiv.org/abs/1803.00933""" import collections +import logging +import numpy as np import os import random -import time -import threading - -import numpy as np from six.moves import queue +import threading +import time import ray +from ray.exceptions import RayError from ray.rllib.evaluation.metrics import get_learner_stats from ray.rllib.policy.sample_batch import SampleBatch, DEFAULT_POLICY_ID, \ MultiAgentBatch @@ -27,6 +28,8 @@ REPLAY_QUEUE_DEPTH = 4 LEARNER_QUEUE_MAX_SIZE = 16 +logger = logging.getLogger(__name__) + class AsyncReplayOptimizer(PolicyOptimizer): """Main event loop of the Ape-X optimizer (async sampling with replay). @@ -206,19 +209,42 @@ def _step(self): with self.timers["sample_processing"]: completed = list(self.sample_tasks.completed()) - counts = ray_get_and_free([c[1][1] for c in completed]) + # First try a batched ray.get(). + ray_error = None + try: + counts = { + i: v + for i, v in enumerate( + ray_get_and_free([c[1][1] for c in completed])) + } + # If there are failed workers, try to recover the still good ones + # (via non-batched ray.get()) and store the first error (to raise + # later). + except RayError: + counts = {} + for i, c in enumerate(completed): + try: + counts[i] = ray_get_and_free(c[1][1]) + except RayError as e: + logger.exception( + "Error in completed task: {}".format(e)) + ray_error = ray_error if ray_error is not None else e + for i, (ev, (sample_batch, count)) in enumerate(completed): - sample_timesteps += counts[i] + # Skip failed tasks. + if i not in counts: + continue + sample_timesteps += counts[i] # Send the data to the replay buffer random.choice( self.replay_actors).add_batch.remote(sample_batch) - # Update weights if needed + # Update weights if needed. self.steps_since_update[ev] += counts[i] if self.steps_since_update[ev] >= self.max_weight_sync_delay: # Note that it's important to pull new weights once - # updated to avoid excessive correlation between actors + # updated to avoid excessive correlation between actors. if weights is None or self.learner.weights_updated: self.learner.weights_updated = False with self.timers["put_weights"]: @@ -228,9 +254,14 @@ def _step(self): self.num_weight_syncs += 1 self.steps_since_update[ev] = 0 - # Kick off another sample request + # Kick off another sample request. self.sample_tasks.add(ev, ev.sample_with_count.remote()) + # Now that all still good tasks have been kicked off again, + # we can throw the error. + if ray_error: + raise ray_error + with self.timers["replay_processing"]: for ra, replay in self.replay_tasks.completed(): self.replay_tasks.add(ra, ra.replay.remote())
[RLlib] AsyncReplayOptimizer should retain good sample_tasks even if other sample_tasks have failed.
### What is the problem?
AsyncReplayOptimizer::_step() collects all self.sample_tasks that are completed (including crashed ones), leaving self.sample_tasks with a count of 0 (empty). After that collection step, it calls ray_get_and_free on all these tasks' IDs, which crashes (due to the errored, crashed tasks). This causes even the still-good tasks to not be processed and reinstated, leaving the optimizer in an infinite loop (trying to collect samples without any sample_tasks left).
### Reproduction
This can be reproduced by debugging the test case `rllib/tests/test_ignore_worker_failure.py::testAsyncReplay`, stepping through the code and waiting a while in async_replay_optimizer.py::_step before the call to `self.sample_tasks.completed()` (so that all tasks are completed, the good and the failed ones). This should then make ray_get_and_free error out and skip the necessary
```
# Kick off another sample request
self.sample_tasks.add(ev, ev.sample_with_count.remote())
```
code.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).
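The fix in the diff above keeps the healthy tasks by retrying a failed batched get one object at a time and deferring the error. A stripped-down sketch of that recovery pattern, independent of the optimizer internals; `pending` is assumed to be a list of object IDs:

```python
import ray
from ray.exceptions import RayError

def get_with_fallback(pending):
    """Fetch results for a list of object IDs, tolerating failed tasks.

    Returns (results, first_error), where results maps index -> value for
    the IDs that could still be fetched. The caller can re-schedule work for
    the healthy tasks before deciding whether to raise first_error.
    """
    try:
        # Fast path: a single batched get when nothing has failed.
        return dict(enumerate(ray.get(pending))), None
    except RayError:
        results, first_error = {}, None
        for i, object_id in enumerate(pending):
            try:
                results[i] = ray.get(object_id)
            except RayError as e:
                first_error = first_error or e
        return results, first_error
```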
2020-02-10T19:52:37
ray-project/ray
7,114
ray-project__ray-7114
[ "7112" ]
1e690673d8bd5f2204ad49a78cc93d29cf9e4d12
diff --git a/python/ray/__init__.py b/python/ray/__init__.py --- a/python/ray/__init__.py +++ b/python/ray/__init__.py @@ -14,9 +14,9 @@ "packaged along with Ray).") if "OMP_NUM_THREADS" not in os.environ: - logger.warning("[ray] Forcing OMP_NUM_THREADS=1 to avoid performance " - "degradation with many workers (issue #6998). You can " - "override this by explicitly setting OMP_NUM_THREADS.") + logger.debug("[ray] Forcing OMP_NUM_THREADS=1 to avoid performance " + "degradation with many workers (issue #6998). You can " + "override this by explicitly setting OMP_NUM_THREADS.") os.environ["OMP_NUM_THREADS"] = "1" # Add the directory containing pickle5 to the Python path so that we find the
Turn off OMP_NUM_THREADS warnings? Can we just turn off the warnings on each ray.init? We can't force everyone to set the environment variable. https://github.com/ray-project/ray/blob/3f99be8dad5e0e1abfaede1f25753a0af74f1648/python/ray/__init__.py#L16-L21
I promoted this to a release blocker because many users may find this warning annoying. (cc @pcmoritz @robertnishihara) How about we change the log level to INFO?
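For anyone who does want more OpenMP threads, the warning can also be avoided by setting the variable before Ray is imported, since the check in `ray/__init__.py` only fills in a default when the variable is absent. A minimal sketch (the value 8 is an arbitrary example):

```python
import os

# Must be set before `import ray`; otherwise Ray fills in OMP_NUM_THREADS=1.
os.environ["OMP_NUM_THREADS"] = "8"

import ray

ray.init()
```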
2020-02-10T23:05:52
ray-project/ray
7,139
ray-project__ray-7139
[ "7106" ]
fc9352c588a0906bb8810c3da2f4fb09e39bb0c5
diff --git a/rllib/utils/actors.py b/rllib/utils/actors.py --- a/rllib/utils/actors.py +++ b/rllib/utils/actors.py @@ -1,6 +1,7 @@ import logging import os import ray +from collections import deque logger = logging.getLogger(__name__) @@ -11,7 +12,7 @@ class TaskPool: def __init__(self): self._tasks = {} self._objects = {} - self._fetching = [] + self._fetching = deque() def add(self, worker, all_obj_ids): if isinstance(all_obj_ids, list): @@ -38,15 +39,11 @@ def completed_prefetch(self, blocking_wait=False, max_yield=999): for worker, obj_id in self.completed(blocking_wait=blocking_wait): self._fetching.append((worker, obj_id)) - remaining = [] - num_yielded = 0 - for worker, obj_id in self._fetching: - if num_yielded < max_yield: - yield (worker, obj_id) - num_yielded += 1 - else: - remaining.append((worker, obj_id)) - self._fetching = remaining + for _ in range(max_yield): + if not self._fetching: + break + + yield self._fetching.popleft() def reset_workers(self, workers): """Notify that some workers may be removed.""" @@ -54,11 +51,14 @@ def reset_workers(self, workers): if ev not in workers: del self._tasks[obj_id] del self._objects[obj_id] - ok = [] - for ev, obj_id in self._fetching: + + # We want to keep the same deque reference so that we don't suffer from + # stale references in generators that are still in flight + for _ in range(len(self._fetching)): + ev, obj_id = self._fetching.popleft() if ev in workers: - ok.append((ev, obj_id)) - self._fetching = ok + # Re-queue items that are still valid + self._fetching.append((ev, obj_id)) @property def count(self):
diff --git a/rllib/utils/tests/test_taskpool.py b/rllib/utils/tests/test_taskpool.py new file mode 100644 --- /dev/null +++ b/rllib/utils/tests/test_taskpool.py @@ -0,0 +1,138 @@ +import unittest +from unittest.mock import patch + +import ray +from ray.rllib.utils.actors import TaskPool + + +def createMockWorkerAndObjectId(obj_id): + return ({obj_id: 1}, obj_id) + + +class TaskPoolTest(unittest.TestCase): + @patch("ray.wait") + def test_completed_prefetch_yieldsAllComplete(self, rayWaitMock): + task1 = createMockWorkerAndObjectId(1) + task2 = createMockWorkerAndObjectId(2) + # Return the second task as complete and the first as pending + rayWaitMock.return_value = ([2], [1]) + + pool = TaskPool() + pool.add(*task1) + pool.add(*task2) + + fetched = list(pool.completed_prefetch()) + self.assertListEqual(fetched, [task2]) + + @patch("ray.wait") + def test_completed_prefetch_yieldsAllCompleteUpToDefaultLimit( + self, rayWaitMock): + # Load the pool with 1000 tasks, mock them all as complete and then + # check that the first call to completed_prefetch only yields 999 + # items and the second call yields the final one + pool = TaskPool() + for i in range(1000): + task = createMockWorkerAndObjectId(i) + pool.add(*task) + + rayWaitMock.return_value = (list(range(1000)), []) + + # For this test, we're only checking the object ids + fetched = [pair[1] for pair in pool.completed_prefetch()] + self.assertListEqual(fetched, list(range(999))) + + # Finally, check the next iteration returns the final taks + fetched = [pair[1] for pair in pool.completed_prefetch()] + self.assertListEqual(fetched, [999]) + + @patch("ray.wait") + def test_completed_prefetch_yieldsAllCompleteUpToSpecifiedLimit( + self, rayWaitMock): + # Load the pool with 1000 tasks, mock them all as complete and then + # check that the first call to completed_prefetch only yield 999 items + # and the second call yields the final one + pool = TaskPool() + for i in range(1000): + task = createMockWorkerAndObjectId(i) + pool.add(*task) + + rayWaitMock.return_value = (list(range(1000)), []) + + # Verify that only the first 500 tasks are returned, this should leave + # some tasks in the _fetching deque for later + fetched = [pair[1] for pair in pool.completed_prefetch(max_yield=500)] + self.assertListEqual(fetched, list(range(500))) + + # Finally, check the next iteration returns the remaining tasks + fetched = [pair[1] for pair in pool.completed_prefetch()] + self.assertListEqual(fetched, list(range(500, 1000))) + + @patch("ray.wait") + def test_completed_prefetch_yieldsRemainingIfIterationStops( + self, rayWaitMock): + # Test for issue #7106 + # In versions of Ray up to 0.8.1, if the pre-fetch generator failed to + # run to completion, then the TaskPool would fail to clear up already + # fetched tasks resulting in stale object ids being returned + pool = TaskPool() + for i in range(10): + task = createMockWorkerAndObjectId(i) + pool.add(*task) + + rayWaitMock.return_value = (list(range(10)), []) + + # This should fetch just the first item in the list + try: + for _ in pool.completed_prefetch(): + # Simulate a worker failure returned by ray.get() + raise ray.exceptions.RayError + except ray.exceptions.RayError: + pass + + # This fetch should return the remaining pre-fetched tasks + fetched = [pair[1] for pair in pool.completed_prefetch()] + self.assertListEqual(fetched, list(range(1, 10))) + + @patch("ray.wait") + def test_reset_workers_pendingFetchesFromFailedWorkersRemoved( + self, rayWaitMock): + pool = TaskPool() + # We need to hold onto 
the tasks for this test so that we can fail a + # specific worker + tasks = [] + + for i in range(10): + task = createMockWorkerAndObjectId(i) + pool.add(*task) + tasks.append(task) + + # Simulate only some of the work being complete and fetch a couple of + # tasks in order to fill the fetching queue + rayWaitMock.return_value = ([0, 1, 2, 3, 4, 5], [6, 7, 8, 9]) + fetched = [pair[1] for pair in pool.completed_prefetch(max_yield=2)] + + # As we still have some pending tasks, we need to update the + # completion states to remove the completed tasks + rayWaitMock.return_value = ([], [6, 7, 8, 9]) + + pool.reset_workers([ + tasks[0][0], + tasks[1][0], + tasks[2][0], + tasks[3][0], + # OH NO! WORKER 4 HAS CRASHED! + tasks[5][0], + tasks[6][0], + tasks[7][0], + tasks[8][0], + tasks[9][0] + ]) + + # Fetch the remaining tasks which should already be in the _fetching + # queue + fetched = [pair[1] for pair in pool.completed_prefetch()] + self.assertListEqual(fetched, [2, 3, 5]) + + +if __name__ == "__main__": + unittest.main(verbosity=2)
[RLlib] Errors originating from environment workers cause training to stall
### What is the problem?
Ray: 0.8.0/0.8.1
OS: Ubuntu 16.04/18.04

Errors originating from environment `step()` or `reset()` functions cause training to stall when using `AsyncSamplesOptimizer` (e.g. IMPALA), even if `ignore_worker_failures: True` is set in the experiment config.

If an environment request fails, the generator returned by `TaskPool.completed_prefetch()` never runs to completion and so `self._fetching` isn't cleared down properly. This results in stale object ids being used on the next training iteration, causing #7105 to manifest itself and stalling the experiment.
### Reproduction
1. Set up an IMPALA experiment with external environments
2. After a few training iterations, kill one of the external environments
#### Result:
After `Trainer._try_recover()` has blacklisted the failed worker, the experiment will stall with the following call stack:
```
Thread 46562 (idle): "MainThread"
    get_objects (ray/worker.py:318)
    get (ray/worker.py:1450)
    ray_get_and_free (ray/rllib/utils/memory.py:33)
    _augment_with_replay (ray/rllib/optimizers/aso_aggregator.py:170)
    iter_train_batches (ray/rllib/optimizers/aso_aggregator.py:117)
    _step (ray/rllib/optimizers/async_samples_optimizer.py:178)
    step (ray/rllib/optimizers/async_samples_optimizer.py:136)
    _train (ray/rllib/agents/trainer_template.py:129)
    train (ray/tune/trainable.py:176)
    train (ray/rllib/agents/trainer.py:433)
    actor_method_executor (ray/function_manager.py:766)
    main_loop (ray/worker.py:433)
    <module> (ray/workers/default_worker.py:118)
```
### Potential Fix
Rather than using a list for `self._fetching` in `TaskPool`, you can use a queue so that items that are going to be freed are automatically removed from the list of tasks to be fetched. This also ensures that the fetching list is always up to date, even if the generator is stopped early. Here's an example fix that I'm currently testing: [actors.py.diff](https://gist.github.com/elpollouk/1d0cb83bd98c7a9fa4c9226c66ea07ee)

I haven't dug into the access pattern for `TaskPool` beyond my current scenario, so I'm not sure if a lock should be added around modifications to `self._fetching`. I'm also not sure whether there are any subtle side effects of removing items from the fetching list that the original authors were considering when implementing the current logic, so any thoughts here would be greatly appreciated.
@elpollouk could you try out the latest wheel and see if this is still an issue? The hang fix is merged in master as of https://github.com/ray-project/ray/pull/7117

Trying the latest nightly and the experiment does continue to run after knocking out workers. However, it does appear that you're still re-using object ids for sample batches you've already processed after an error. Is this a concern?

The issue I was attempting to fix with my patch was that you are relying on the `completed_prefetch` generator to run to completion in order to maintain correct state in the `TaskPool`. However, generator completion is never guaranteed, as iteration can be stopped at any time either by an exception or another coded stopping condition. Although fixing the symptom of #7105 is working now, the fact that it is possible for `TaskPool` to get itself into an inconsistent state will likely manifest itself again in the future via other non-obvious bugs.

The patch makes sense. Do you want to make a PR for it?
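To see why the deque-based rewrite in the patch is robust to early exits, here is a small self-contained sketch with no Ray dependency; the integers stand in for prefetched (worker, object_id) pairs:

```python
from collections import deque

class PrefetchQueue:
    def __init__(self, items):
        self._fetching = deque(items)

    def completed_prefetch(self, max_yield=999):
        # Items are removed *before* they are yielded, so an early exit in
        # the caller (break or exception) cannot leave stale entries behind.
        for _ in range(max_yield):
            if not self._fetching:
                break
            yield self._fetching.popleft()

q = PrefetchQueue(range(5))
for item in q.completed_prefetch():
    break  # consumer stops early, e.g. because a ray.get failed
print(list(q._fetching))  # [1, 2, 3, 4]: item 0 is gone, nothing is stale
```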
2020-02-12T17:26:02
ray-project/ray
7,181
ray-project__ray-7181
[ "7174", "7174" ]
734629b4eaddfef8cd42122e4186658929e141d6
diff --git a/python/ray/cloudpickle/cloudpickle_fast.py b/python/ray/cloudpickle/cloudpickle_fast.py --- a/python/ray/cloudpickle/cloudpickle_fast.py +++ b/python/ray/cloudpickle/cloudpickle_fast.py @@ -411,6 +411,16 @@ def _property_reduce(obj): return property, (obj.fget, obj.fset, obj.fdel, obj.__doc__) +def _numpy_frombuffer(buffer, dtype, shape, order): + # Get the _frombuffer() function for reconstruction + from numpy.core.numeric import _frombuffer + array = _frombuffer(buffer, dtype, shape, order) + # Unfortunately, numpy does not follow the standard, so we still + # have to set the readonly flag for it here. + array.setflags(write=not buffer.readonly) + return array + + def _numpy_ndarray_reduce(array): # This function is implemented according to 'array_reduce_ex_picklebuffer' # in numpy C backend. This is a workaround for python3.5 pickling support. @@ -443,10 +453,7 @@ def _numpy_ndarray_reduce(array): # (gh-12745). return array.__reduce__() - # Get the _frombuffer() function for reconstruction - import numpy.core.numeric as numeric_mod - from_buffer_func = numeric_mod._frombuffer - return from_buffer_func, (buffer, array.dtype, array.shape, order) + return _numpy_frombuffer, (buffer, array.dtype, array.shape, order) class CloudPickler(Pickler):
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -478,6 +478,15 @@ def f(): assert wr() is None +def test_deserialized_from_buffer_immutable(ray_start_regular): + x = np.full((2, 2), 1.) + o = ray.put(x) + y = ray.get(o) + with pytest.raises( + ValueError, match="assignment destination is read-only"): + y[0, 0] = 9. + + def test_passing_arguments_by_value_out_of_the_box(ray_start_regular): @ray.remote def f(x):
Numpy arrays are mutable in object store in ray >=0.8
My first ever GitHub post and hopefully I am doing this right...
### System information
OS Platform: Linux RHEL7
Ray installed from binary
Ray version: 0.8.x onwards
Python version: 3.7.4
Exact command to reproduce: See example below.
### Describe the problem
Numpy arrays (and pandas dataframes) were immutable in 0.7.6, as described in the documentation. Any modification to a numpy array fetched with ray.get would result in an error (`"ValueError: assignment destination is read-only"`). In 0.8 (tested 0.8.1 and 0.9.0 dev), remote numpy arrays are no longer immutable.
### Example
```python
import ray
import numpy as np

ray.init("auto")  # initialize as necessary in your local install
x = np.full((2, 2), 1.)
o = ray.put(x)
y = ray.get(o)
y[0, 0] = 9.  # no error
print(ray.get(o))
```
prints
[[9. 1.]
 [1. 1.]]
expected
[[1. 1.]
 [1. 1.]]
It is a good catch. Thanks so much. This issue could be related to the new serializer we are using. I am going to fix it.
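For reference, the fix above reconstructs arrays through a wrapper that copies the pickle buffer's read-only flag onto the array. A minimal numpy-only sketch of the flags involved, independent of Ray's serializer:

```python
import numpy as np

data = bytes(32)                      # an immutable (read-only) buffer
arr = np.frombuffer(data, dtype=np.float64)
print(arr.flags.writeable)            # False: the array respects the read-only buffer

writable = arr.copy()                 # copying yields a writable array
writable.setflags(write=False)        # setflags(write=...) is what the patch uses
print(writable.flags.writeable)       # False again
```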
2020-02-16T05:49:19
ray-project/ray
7,198
ray-project__ray-7198
[ "8704" ]
cd5a207d69cdaf05b47d956c18e89d928585eec7
diff --git a/python/ray/autoscaler/commands.py b/python/ray/autoscaler/commands.py --- a/python/ray/autoscaler/commands.py +++ b/python/ray/autoscaler/commands.py @@ -453,7 +453,6 @@ def _exec(updater, cmd, screen, tmux, port_forward=None, with_output=False): cmd = " ".join(cmd) return updater.cmd_runner.run( cmd, - allocate_tty=True, exit_on_fail=True, port_forward=port_forward, with_output=with_output) diff --git a/python/ray/autoscaler/updater.py b/python/ray/autoscaler/updater.py --- a/python/ray/autoscaler/updater.py +++ b/python/ray/autoscaler/updater.py @@ -47,7 +47,6 @@ def __init__(self, log_prefix, namespace, node_id, auth_config, def run(self, cmd=None, timeout=120, - allocate_tty=False, exit_on_fail=False, port_forward=None, with_output=False): @@ -77,12 +76,12 @@ def run(self, raise Exception(exception_str) else: logger.info(self.log_prefix + "Running {}...".format(cmd)) - final_cmd = self.kubectl + [ - "exec", - "-it" if allocate_tty else "-i", + final_cmd = self.kubectl + ["exec", "-it"] + final_cmd += [ self.node_id, "--", - ] + with_interactive(cmd) + ] + final_cmd += with_interactive(cmd) try: if with_output: return self.process_runner.check_output( @@ -230,16 +229,13 @@ def set_ssh_ip_if_required(self): def run(self, cmd, timeout=120, - allocate_tty=False, exit_on_fail=False, port_forward=None, with_output=False): self.set_ssh_ip_if_required() - ssh = ["ssh"] - if allocate_tty: - ssh.append("-tt") + ssh = ["ssh", "-tt"] if port_forward: if not isinstance(port_forward, list): @@ -259,7 +255,8 @@ def run(self, else: # We do this because `-o ControlMaster` causes the `-N` flag to # still create an interactive shell in some ssh versions. - final_cmd.append("while true; do sleep 86400; done") + final_cmd.append(quote("while true; do sleep 86400; done")) + try: if with_output: return self.process_runner.check_output(final_cmd)
IOCTL error in Autoscaler
### What is the problem?
The following error message is printed repeatedly when using the autoscaler. It appears many times during `ray up ...`:
```
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
```
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [ ] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
2020-02-17T19:36:05
ray-project/ray
7,250
ray-project__ray-7250
[ "6718" ]
6c80071a7dc8aec4a16eef1352ff51eeaa83ad68
diff --git a/python/ray/services.py b/python/ray/services.py --- a/python/ray/services.py +++ b/python/ray/services.py @@ -1,4 +1,5 @@ import collections +import errno import json import logging import multiprocessing @@ -282,10 +283,10 @@ def get_node_ip_address(address="8.8.8.8:53"): # connection. s.connect((ip_address, int(port))) node_ip_address = s.getsockname()[0] - except Exception as e: + except OSError as e: node_ip_address = "127.0.0.1" # [Errno 101] Network is unreachable - if e.errno == 101: + if e.errno == errno.ENETUNREACH: try: # try get node ip address from host name host_name = socket.getfqdn(socket.gethostname()) @@ -582,11 +583,15 @@ def start_reaper(): try: os.setpgrp() except OSError as e: - logger.warning("setpgrp failed, processes may not be " - "cleaned up properly: {}.".format(e)) - # Don't start the reaper in this case as it could result in killing - # other user processes. - return None + if e.errno == errno.EPERM and os.getpgrp() == os.getpid(): + # Nothing to do; we're already a session leader. + pass + else: + logger.warning("setpgrp failed, processes may not be " + "cleaned up properly: {}.".format(e)) + # Don't start the reaper in this case as it could result in killing + # other user processes. + return None reaper_filepath = os.path.join( os.path.dirname(os.path.abspath(__file__)), "ray_process_reaper.py")
Error message on ray init when run inside Jupyter: setpgrp failed, processes may not be cleaned up properly
### What is the problem?
*Ray version and other system information (Python version, TensorFlow version, OS):*
Ray 0.8.0, Python 3.6, Ubuntu 18.04
*Does the problem occur on the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html)?*
Yes, with ray-0.9.0.dev0.
### Reproduction
I am running jupyterlab 1.2.4 within docker, using an 18.04 base image. To reproduce, simply import ray and call ray.init:
```python
import ray
ray.init()
```
And the error message will be generated:
```
2020-01-06 17:54:50,339 WARNING services.py:595 -- setpgrp failed, processes may not be cleaned up properly: [Errno 1] Operation not permitted.
```
PID, PGID, PPID, and SID all have the same value in this case. Outside of Jupyter, this error does not occur.
Thanks for reporting this @mc-allen. After looking into this a bit, I was only able to reproduce in jupyterlab and unfortunately am not able to find a workaround. This shouldn't cause any significant problems, it just means that if the python process that called `ray.init()` (the jupyter notebook in this case) is terminated via `SIGKILL`, there may be some background processes left running. If this does happen for some reason, you can always clean these processes up manually using `ray stop`.

Does "ray stop" have the same effect as "ray.shutdown()"?

`ray stop` is a bash command and will stop all ray processes running on the machine (started by `ray.init()` in python or `ray start` on the command line). `ray.shutdown()` will stop all ray processes started by a previous `ray.init()` call in the current python process.

I see. I'm looking for something I can call from python that will release all ray resources, including object store shared memory, and ideally redis too, and kill all tasks. This is needed in the event of a crash or restart of a script.

If ray was started from within the process that crashed, resources should be cleaned up automatically. Also, not sure if this exactly fits the bill, but if you call `ray start` with `--block`, it will clean up all processes started by ray if any of them die (e.g., if redis goes down or the raylet crashes).

Sorry if this is the wrong place to continue this conversation, but... In my experience, it seems like ray.shutdown() does not clear ray's usage of /dev/shm. Is there a surefire way to accomplish this?

The `/dev/shm` usage will be cleared once the object store goes down. This should be done by `ray.shutdown()` if you called `ray.init()` in the same process, otherwise you'll have to call `ray stop`.

This looks very similar to this [os.setsid](https://stackoverflow.com/questions/25701333/os-setsid-operation-not-permitted) question where python raises an exception if the value is already set. Could this be as simple as changing `os.setpgrp()` to `if os.getpgrp() != os.getpid(): os.setpgrp()`? This worked for me on Ubuntu 18.04.

Thanks @hubcity! @mehrdadn do you think you could take a look when you have a chance?

Yeah, I'll try to think about it. It seems like a nontrivial problem, but there might be a solution.
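The merged fix (diff earlier in this record) takes a slightly different route from the suggestion above: it still calls `os.setpgrp()` but treats an EPERM failure as benign when the process is already its own group leader, which is the situation inside JupyterLab. A minimal sketch of that guard, assuming a POSIX platform:

```python
import errno
import os

def try_setpgrp():
    """Become a process group leader if possible; tolerate already being one."""
    try:
        os.setpgrp()
        return True
    except OSError as e:
        if e.errno == errno.EPERM and os.getpgrp() == os.getpid():
            # Already the group leader (e.g. when launched by JupyterLab);
            # nothing to do.
            return True
        return False
```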
2020-02-20T20:58:43
ray-project/ray
7,262
ray-project__ray-7262
[ "7197" ]
d190e73727a2f883e6179fcc6a51ab97a606bc05
diff --git a/python/ray/worker.py b/python/ray/worker.py --- a/python/ray/worker.py +++ b/python/ray/worker.py @@ -1445,6 +1445,10 @@ def show_in_webui(message, key="", dtype="text"): worker.core_worker.set_webui_display(key.encode(), message_encoded) +# Global varaible to make sure we only send out the warning once +blocking_get_inside_async_warned = False + + def get(object_ids, timeout=None): """Get a remote object or a list of remote objects from the object store. @@ -1454,7 +1458,7 @@ def get(object_ids, timeout=None): object has been created). If object_ids is a list, then the objects corresponding to each object in the list will be returned. - This method will error will error if it's running inside async context, + This method will issue a warning if it's running inside async context, you can use ``await object_id`` instead of ``ray.get(object_id)``. For a list of object ids, you can use ``await asyncio.gather(*object_ids)``. @@ -1479,9 +1483,13 @@ def get(object_ids, timeout=None): if hasattr( worker, "core_worker") and worker.core_worker.current_actor_is_asyncio(): - raise RayError("Using blocking ray.get inside async actor. " - "This blocks the event loop. Please " - "use `await` on object id with asyncio.gather.") + global blocking_get_inside_async_warned + if not blocking_get_inside_async_warned: + logger.warning("Using blocking ray.get inside async actor. " + "This blocks the event loop. Please use `await` " + "on object id with asyncio.gather if you want to " + "yield execution to the event loop instead.") + blocking_get_inside_async_warned = True with profiling.profile("ray.get"): is_individual_id = isinstance(object_ids, ray.ObjectID) @@ -1547,6 +1555,10 @@ def put(value, weakref=False): return object_id +# Global variable to make sure we only send out the warning once. +blocking_wait_inside_async_warned = False + + def wait(object_ids, num_returns=1, timeout=None): """Return a list of IDs that are ready and a list of IDs that are not. @@ -1565,8 +1577,9 @@ def wait(object_ids, num_returns=1, timeout=None): precede B in the ready list. This also holds true if A and B are both in the remaining list. - This method will error if it's running inside an async context. Instead of - ``ray.wait(object_ids)``, you can use ``await asyncio.wait(object_ids)``. + This method will issue a warning if it's running inside an async context. + Instead of ``ray.wait(object_ids)``, you can use + ``await asyncio.wait(object_ids)``. Args: object_ids (List[ObjectID]): List of object IDs for objects that may or @@ -1584,9 +1597,12 @@ def wait(object_ids, num_returns=1, timeout=None): if hasattr(worker, "core_worker") and worker.core_worker.current_actor_is_asyncio( ) and timeout != 0: - raise RayError("Using blocking ray.wait inside async method. " - "This blocks the event loop. Please use `await` " - "on object id with asyncio.wait. ") + global blocking_wait_inside_async_warned + if not blocking_wait_inside_async_warned: + logger.warning("Using blocking ray.wait inside async method. " + "This blocks the event loop. Please use `await` " + "on object id with asyncio.wait. ") + blocking_wait_inside_async_warned = True if isinstance(object_ids, ObjectID): raise TypeError(
Cannot call ray.get inside async actor (not a request for async get) **Problem:** ray.get not allowed inside async actors Have been discussing this issue with @edoakes on Slack. Apparently the decision was made to hard stop ray.get inside async actors to prevent people from accidentally running blocking code inside otherwise fully async tasks. The problem is that blocking code is actually needed inside _lightly_ async tasks. **Key Example:** Ray.signal is quite buggy and sadly hasn't seen much love lately (and I have heard it may be deprecated). One possible replacement is to use the Async api to add queues to synchronous actors that function as async inboxes. In this case, you would want to have a light async workload, such as adding items to a queue (represented by asyncWork in the snippet below). You wouldn't want most of your run() method preempted, but instead would designate a small segment where control can be yielded using asyncio.sleep() (where I have time.sleep() now). **Impact:** Point-to-point communication is a key feature in many distributed applications. Without ray.get inside async actors, it's not really possible to replicate the functionality of ray.signal with the async api (at least I couldn't find a solution). Adding an allow_blocking flag or similar to ray.init would solve this issue for now. I'd like to emphasize that this really isn't a good long term solution. Ray.signal is a much, much better API for what it's intended to do. Async code introduces much more complexity than signaling, so having async replace signal is far from ideal unless we can come up with a really easy plug-and-play async queue for use with otherwise synchronous actors. *Ray version and other system information (Python version, TensorFlow version, OS):* Latest Ray wheel on Ubuntu (though this should hold on all systems -- seems more of a design choice that has outlived its usefulness than anything) **Reproduction:** ``` import ray import time import asyncio @ray.remote def bar(): return @ray.remote class Foo: def run(self): while True: time.sleep(1) ray.get(bar.remote()) async def asyncWork(self): pass if __name__ == '__main__': ray.init() foo = Foo.remote() foo.run.remote() while True: time.sleep(1) foo.asyncWork.remote() ``` - [ y] I have verified my script runs in a clean environment and reproduces the issue. - [ y] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).
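The "async inbox" replacement for ray.signal that the report alludes to can be sketched roughly as follows: an actor whose coroutine methods push to and pop from an `asyncio.Queue`. This is only an illustration of the pattern with made-up names, not code from the issue or from Ray itself.

```python
import asyncio

import ray


@ray.remote
class Inbox:
    """A hypothetical point-to-point mailbox built on the async actor API."""

    def __init__(self):
        self._queue = asyncio.Queue()

    async def put(self, message):
        # Lightweight async work: enqueue without blocking the event loop.
        await self._queue.put(message)

    async def get(self):
        # Suspends this coroutine (yielding the loop) until a message arrives.
        return await self._queue.get()
```

A producer would call `inbox.put.remote(msg)` and a consumer `ray.get(inbox.get.remote())`; the catch the issue raises is that an ordinary synchronous method running on such an actor still cannot call `ray.get` without blocking the loop.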
Edit: fixed code format It seems the alternative async version of the run method is ```python async def run(self): while True: time.sleep(1) # simulate work, this won't be preempted await bar.remote() ``` the blocking code itself won't be pre-empted. `asyncio` will only context switch when you are waiting for bar.remote() to execute. When the context switch happens, other coroutines will be allowed to run. That works if you happen to be calling ray.get from an async method, but what if you want to call it from a synchronous method? Can you elaborate on why you need to call it from a synchronous method? Wrapping your synchronous method inside an async method has no performance penalty, and there won't be any preemption. You have to explicitly yield control with await. Isn't the bigger issue that `run` will block the entire event loop unless it's an `async def`? Even if we allowed get, the example above still wouldn't work. You would need: ``` import ray import time import asyncio @ray.remote def bar(): return @ray.remote class Foo: async def run(self): while True: await asyncio.sleep(1) await bar.remote() print("running") async def asyncWork(self): print("do async poll") if __name__ == '__main__': ray.init() foo = Foo.remote() foo.run.remote() while True: time.sleep(1) ray.get(foo.asyncWork.remote()) ``` @simon-mo can we raise a warning if a non-async method is called? @ericl Yes, I missed an async def in my example, thank you. @simon-mo The larger issue is that having to await ray.get forces you to redefine a potentially large number of functions as asynchronous (see below). ``` import ray import time import asyncio @ray.remote def bar(): return @ray.remote class Foo: async def run(self): while True: time.sleep(1) self.f1() def f1(self): return self.f2() def f2(self): return self.f3() def f3(self): return ray.get(bar.remote()) if __name__ == '__main__': ray.init() foo = Foo.remote() foo.run.remote() while True: time.sleep(1) ```
2020-02-21T19:48:47
ray-project/ray
7,312
ray-project__ray-7312
[ "7300" ]
3fc162f93c0ead3d9d53719795f2349f2bc85cef
diff --git a/python/ray/tune/logger.py b/python/ray/tune/logger.py --- a/python/ray/tune/logger.py +++ b/python/ray/tune/logger.py @@ -231,7 +231,12 @@ def flush(self): def close(self): if self._file_writer is not None: if self.trial and self.trial.evaluated_params and self.last_result: - self._try_log_hparams(self.last_result) + scrubbed_result = { + k: value + for k, value in self.last_result.items() + if type(value) in VALID_SUMMARY_TYPES + } + self._try_log_hparams(scrubbed_result) self._file_writer.close() def _try_log_hparams(self, result):
diff --git a/python/ray/tune/tests/test_logger.py b/python/ray/tune/tests/test_logger.py --- a/python/ray/tune/tests/test_logger.py +++ b/python/ray/tune/tests/test_logger.py @@ -8,12 +8,14 @@ Trial = namedtuple("MockTrial", ["evaluated_params", "trial_id"]) -def result(t, rew): - return dict( +def result(t, rew, **kwargs): + results = dict( time_total_s=t, episode_reward_mean=rew, mean_accuracy=rew * 2, training_iteration=int(t)) + results.update(kwargs) + return results class LoggerSuite(unittest.TestCase): @@ -31,22 +33,25 @@ def testCSV(self): logger = CSVLogger(config=config, logdir=self.test_dir, trial=t) logger.on_result(result(2, 4)) logger.on_result(result(2, 4)) + logger.on_result(result(2, 4, score=[1, 2, 3])) logger.close() def testJSON(self): config = {"a": 2, "b": 5} t = Trial(evaluated_params=config, trial_id="json") logger = JsonLogger(config=config, logdir=self.test_dir, trial=t) - logger.on_result(result(2, 4)) - logger.on_result(result(2, 4)) + logger.on_result(result(0, 4)) + logger.on_result(result(1, 4)) + logger.on_result(result(2, 4, score=[1, 2, 3])) logger.close() def testTBX(self): config = {"a": 2, "b": 5} t = Trial(evaluated_params=config, trial_id="tbx") logger = TBXLogger(config=config, logdir=self.test_dir, trial=t) - logger.on_result(result(2, 4)) - logger.on_result(result(2, 4)) + logger.on_result(result(0, 4)) + logger.on_result(result(1, 4)) + logger.on_result(result(2, 4, score=[1, 2, 3])) logger.close()
[tune] HParams Dashboard shows no data when using Histogram Dashboard. ### What is your question? The HParams dashboard shows no data when the Histogram dashboard is in use. ![Screen Shot 2020-02-24 at 3 57 51 PM](https://user-images.githubusercontent.com/44219935/75191208-e6712980-571f-11ea-9349-f918e1a6f79b.png) ![Screen Shot 2020-02-24 at 3 57 33 PM](https://user-images.githubusercontent.com/44219935/75191225-ee30ce00-571f-11ea-9cd9-e0c765db0ef9.png) *Ray version and other system information (Python version, TensorFlow version, OS):* Ray version: 0.9.0.dev0 Python version: 3.7.5 TensorFlow version: 2.1.0 ### Reproduction I modified the [mnist_pytorch_trainable.py](https://github.com/ray-project/ray/blob/master/python/ray/tune/examples/mnist_pytorch_trainable.py) example to log model parameters to TensorBoard. The modification is as follows: ``` def _train(self): train(self.model, self.optimizer, self.train_loader) acc = test(self.model, self.test_loader) result = {"mean_accuracy": acc} for tag, value in self.model.named_parameters(): tag = tag.replace('.', '/') result[tag] = value.data.cpu().tolist() return result ``` If I remove the modification, the HParams dashboard goes back to normal. ![Screen Shot 2020-02-24 at 4 12 08 PM](https://user-images.githubusercontent.com/44219935/75191712-ddcd2300-5720-11ea-8ab4-d0600b61f007.png) Am I doing this the wrong way?
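A hedged sketch of the direction a fix or workaround can take: before handing the final result to an hparams-style writer, drop every entry whose value is not a plain scalar, since the per-layer weight lists logged for histograms are what break the HParams view. The `VALID_SUMMARY_TYPES` tuple and the function name below are illustrative stand-ins, not Tune's actual definitions.

```python
import numpy as np

# Assumed set of scalar types an hparams writer can digest.
VALID_SUMMARY_TYPES = (int, float, np.float32, np.float64, np.int32, np.int64)


def scrub_for_hparams(last_result):
    """Return a copy of the result that keeps only scalar metrics.

    Histogram payloads (e.g. lists of layer weights) remain available to the
    other loggers, but are filtered out here so the HParams tab still renders.
    """
    return {
        key: value
        for key, value in last_result.items()
        if type(value) in VALID_SUMMARY_TYPES
    }
```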
2020-02-25T06:11:52
ray-project/ray
7,347
ray-project__ray-7347
[ "7121" ]
58073f726086867cecf3dbaf94c363ef8c5dfa2d
diff --git a/rllib/rollout.py b/rllib/rollout.py --- a/rllib/rollout.py +++ b/rllib/rollout.py @@ -15,6 +15,7 @@ from ray.rllib.env.base_env import _DUMMY_AGENT_ID from ray.rllib.evaluation.episode import _flatten_action from ray.rllib.policy.sample_batch import DEFAULT_POLICY_ID +from ray.rllib.utils.deprecation import deprecation_warning from ray.tune.utils import merge_dicts EXAMPLE_USAGE = """ @@ -182,26 +183,34 @@ def create_parser(parser_creator=None): default=False, action="store_const", const=True, - help="Surpress rendering of the environment.") + help="Suppress rendering of the environment.") parser.add_argument( "--monitor", default=False, - action="store_const", - const=True, - help="Wrap environment in gym Monitor to record video.") + action="store_true", + help="Wrap environment in gym Monitor to record video. NOTE: This " + "option is deprecated: Use `--video-dir [some dir]` instead.") + parser.add_argument( + "--video-dir", + type=str, + default=None, + help="Specifies the directory into which videos of all episode " + "rollouts will be stored.") parser.add_argument( - "--steps", default=10000, help="Number of steps to roll out.") + "--steps", + default=10000, + help="Number of timesteps to roll out (overwritten by --episodes).") + parser.add_argument( + "--episodes", + default=0, + help="Number of complete episodes to roll out (overrides --steps).") parser.add_argument("--out", default=None, help="Output filename.") parser.add_argument( "--config", default="{}", type=json.loads, help="Algorithm-specific configuration (e.g. env, hyperparams). " - "Surpresses loading of configuration from checkpoint.") - parser.add_argument( - "--episodes", - default=0, - help="Number of complete episodes to roll out. (Overrides --steps)") + "Gets merged with loaded configuration from checkpoint file.") parser.add_argument( "--save-info", default=False, @@ -226,21 +235,30 @@ def create_parser(parser_creator=None): def run(args, parser): config = {} - # Load configuration from file + # Load configuration from checkpoint file. config_dir = os.path.dirname(args.checkpoint) config_path = os.path.join(config_dir, "params.pkl") + # Try parent directory. if not os.path.exists(config_path): config_path = os.path.join(config_dir, "../params.pkl") + + # If no pkl file found, require command line `--config`. if not os.path.exists(config_path): if not args.config: raise ValueError( "Could not find params.pkl in either the checkpoint dir or " - "its parent directory.") + "its parent directory AND no config given on command line!") + + # Load the config from pickled. else: with open(config_path, "rb") as f: config = pickle.load(f) + + # Set num_workers to be at least 2. if "num_workers" in config: config["num_workers"] = min(2, config["num_workers"]) + + # Merge with command line `--config` settings. config = merge_dicts(config, args.config) if not args.env: if not config.get("env"): @@ -249,11 +267,26 @@ def run(args, parser): ray.init() + # Create the Trainer from config. cls = get_agent_class(args.run) agent = cls(env=args.env, config=config) + # Load state from checkpoint. agent.restore(args.checkpoint) num_steps = int(args.steps) num_episodes = int(args.episodes) + + # Determine the video output directory. + # Deprecated way: Use (--out|~/ray_results) + "/monitor" as dir. + video_dir = None + if args.monitor: + video_dir = os.path.join( + os.path.dirname(args.out or "") + or os.path.expanduser("~/ray_results/"), "monitor") + # New way: Allow user to specify a video output path. 
+ elif args.video_dir: + video_dir = os.path.expanduser(args.video_dir) + + # Do the actual rollout. with RolloutSaver( args.out, args.use_shelve, @@ -262,7 +295,7 @@ def run(args, parser): target_episodes=num_episodes, save_info=args.save_info) as saver: rollout(agent, args.env, num_steps, num_episodes, saver, - args.no_render, args.monitor) + args.no_render, video_dir) class DefaultMapping(collections.defaultdict): @@ -295,7 +328,7 @@ def rollout(agent, num_episodes=0, saver=None, no_render=True, - monitor=False): + video_dir=None): policy_agent_mapping = default_policy_agent_mapping if saver is None: @@ -320,13 +353,14 @@ def rollout(agent, multiagent = False use_lstm = {DEFAULT_POLICY_ID: False} - if monitor and not no_render and saver and saver.outfile is not None: - # If monitoring has been requested, - # manually wrap our environment with a gym monitor - # which is set to record every episode. + # If monitoring has been requested, manually wrap our environment with a + # gym monitor, which is set to record every episode. + if video_dir: env = gym.wrappers.Monitor( - env, os.path.join(os.path.dirname(saver.outfile), "monitor"), - lambda x: True) + env=env, + directory=video_dir, + video_callable=lambda x: True, + force=True) steps = 0 episodes = 0 @@ -396,4 +430,25 @@ def rollout(agent, if __name__ == "__main__": parser = create_parser() args = parser.parse_args() + + # Old option: monitor, use video-dir instead. + if args.monitor: + deprecation_warning("--monitor", "--video-dir=[some dir]") + # User tries to record videos, but no-render is set: Error. + if (args.monitor or args.video_dir) and args.no_render: + raise ValueError( + "You have --no-render set, but are trying to record rollout videos" + " (via options --video-dir/--monitor)! " + "Either unset --no-render or do not use --video-dir/--monitor.") + # --use_shelve w/o --out option. + if args.use_shelve and not args.out: + raise ValueError( + "If you set --use-shelve, you must provide an output file via " + "--out as well!") + # --track-progress w/o --out option. + if args.track_progress and not args.out: + raise ValueError( + "If you set --track-progress, you must provide an output file via " + "--out as well!") + run(args, parser)
[rllib] How can I record the full episode results of rllib? ### What is your question? *Ray version and other system information (Python version, TensorFlow version, OS):* ``` Python version: Python 3.6.10 :: Anaconda, Inc. Anaconda version: 4.7.12 TensorFlow version: 1.15.0 OS: Ubuntu 16.04 ray version: 0.8.1 ``` Hello, I've trained an RL model using RLlib. I tested the Breakout environment and the agent runs successfully; here is the Python code I used (via RLlib's Python API). When I test the trained agent in the OpenAI Gym environment, it works as expected. Now I want to record the agent's results. I ran the evaluation using RLlib's rollout command; a screenshot of my results is below. This is the command I executed: ``` rllib rollout checkpoint_4301/checkpoint-4301 \ --run PPO --env BreakoutNoFrameskip-v4 --monitor --config '{"monitor": true}' ``` ![image](https://user-images.githubusercontent.com/19910566/74238350-f6990a00-4d18-11ea-97b5-681da6163a18.png) When testing with rollout, about 8 games run on average. Many episodes ran, but only the results of one game (four lives, in the case of Breakout) were saved as video, and the recording appears to mix parts of different games. I additionally tested Ms. Pacman, and its recording showed the same problem. I have attached my RLlib model and the recorded result files. I would be grateful for any documentation or advice on how to record the full videos. [PPO_BreakoutNoFrameskip-v4_2020-02-11_21-32-52m48o1b48.zip](https://github.com/ray-project/ray/files/4185820/PPO_BreakoutNoFrameskip-v4_2020-02-11_21-32-52m48o1b48.zip) [BreakoutModel.zip](https://github.com/ray-project/ray/files/4185821/BreakoutModel.zip)
Thanks for this issue. This sounds almost like a bug. @ericl could you comment? If yes, I'll take a look to get this fixed. Hm, all rollout.py does is add `env = gym.wrappers.Monitor(...)`. Is it possible this is an issue with the env wrapper configuration from gym? Thank you for the answer. This is the Python code I used to train with RLlib; I tested the result using rollout.py. The gym environment is ``BreakoutNoFrameskip-v4`` and I didn't modify it. Could ``num_workers`` or ``num_envs_per_worker`` be the problem? ```python import ray import ray.rllib.agents.ppo as ppo from ray.tune.logger import pretty_print ray.init() config = ppo.DEFAULT_CONFIG.copy() config["num_gpus"] = 4 config["num_workers"] = 10 config["num_envs_per_worker"] = 5 config["train_batch_size"] = 50000 config["sample_batch_size"] = 1000 config["sgd_minibatch_size"] = 5000 config["eager"] = False trainer = ppo.PPOTrainer(config=config, env="BreakoutNoFrameskip-v4") for i in range(100000): if i % 100 == 0: checkpoint = trainer.save() print("checkpoint saved at", checkpoint) ``` I can reproduce the issue with CartPole as well. It only records a very short sequence of the rolled-out episode, and only a few of these as well (4 out of 50 rolled-out episodes have mpeg snippet files). Taking a look. ...
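For context, gym's `Monitor` wrapper records episodes on a sparse schedule by default (perfect cubes, then every 1000th episode), which is consistent with only a handful of short video snippets being written; passing a `video_callable` that always returns `True` records every episode. The snippet below is an illustrative sketch of that wrapping, not the exact code in `rollout.py`.

```python
import gym


def wrap_env_for_recording(env, video_dir):
    # Record every episode instead of gym's default sparse schedule, and
    # overwrite any previous recordings found in the target directory.
    return gym.wrappers.Monitor(
        env=env,
        directory=video_dir,
        video_callable=lambda episode_id: True,
        force=True)


# Hypothetical usage:
# env = wrap_env_for_recording(gym.make("CartPole-v0"), "/tmp/rollout-videos")
```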
2020-02-27T10:48:38
ray-project/ray
7,362
ray-project__ray-7362
[ "7349" ]
3fc162f93c0ead3d9d53719795f2349f2bc85cef
diff --git a/python/ray/tune/utils/util.py b/python/ray/tune/utils/util.py --- a/python/ray/tune/utils/util.py +++ b/python/ray/tune/utils/util.py @@ -58,7 +58,12 @@ def _read_utilization(self): self.values["ram_util_percent"].append( float(getattr(psutil.virtual_memory(), "percent"))) if GPUtil is not None: - for gpu in GPUtil.getGPUs(): + gpu_list = [] + try: + gpu_list = GPUtil.getGPUs() + except Exception: + logger.debug("GPUtil failed to retrieve GPUs.") + for gpu in gpu_list: self.values["gpu_util_percent" + str(gpu.id)].append( float(gpu.load)) self.values["vram_util_percent" + str(gpu.id)].append(
[tune] GPU utilization getting checked for cluster with no gpu ### What is the problem? Trivial issue, which does not actually impact performance as far as I can tell, but it seems that if you launch a cluster with a configuration that sets gpus = 0, a check should probably be added to avoid this error message... ``` (pid=4628) Exception in thread Thread-2: (pid=4628) Traceback (most recent call last): (pid=4628) File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner (pid=4628) self.run() (pid=4628) File "/home/ubuntu/algo/lib/python3.6/site-packages/ray/tune/utils/util.py", line 89, in run (pid=4628) self._read_utilization() (pid=4628) File "/home/ubuntu/algo/lib/python3.6/site-packages/ray/tune/utils/util.py", line 65, in _read_utilization (pid=4628) for gpu in GPUtil.getGPUs(): (pid=4628) File "/home/ubuntu/algo/lib/python3.6/site-packages/GPUtil/GPUtil.py", line 102, in getGPUs (pid=4628) deviceIds = int(vals[i]) (pid=4628) ValueError: invalid literal for int() with base 10: "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running." ``` *Ray version and other system information (Python version, TensorFlow version, OS):* Ray 0.8.1, TF 2.1, RHEL 7.7 ### Reproduction (REQUIRED) Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments): If we cannot run your script, we cannot fix your issue. - [ ] I have verified my script runs in a clean environment and reproduces the issue. - [ ] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).
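A hedged sketch of the defensive pattern that avoids the traceback above: on nodes without a working NVIDIA driver, `nvidia-smi` prints an error string instead of numbers and `GPUtil.getGPUs()` raises, so the utilization probe should treat that case as "no GPUs". The helper name is illustrative, not Tune's actual code.

```python
import logging

logger = logging.getLogger(__name__)

try:
    import GPUtil
except ImportError:
    GPUtil = None


def probe_gpus():
    """Return the list of visible GPUs, or [] if GPUtil/nvidia-smi fails."""
    if GPUtil is None:
        return []
    try:
        return GPUtil.getGPUs()
    except Exception:
        # nvidia-smi is missing or cannot talk to the driver (e.g. on a
        # CPU-only cluster node); report no GPUs instead of crashing the
        # metrics thread.
        logger.debug("GPUtil failed to retrieve GPUs.")
        return []
```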
2020-02-28T02:25:29
ray-project/ray
7,392
ray-project__ray-7392
[ "7376" ]
2d97650b1e01c299eda8d973c3b7792b3ac85307
diff --git a/python/ray/cloudpickle/cloudpickle_fast.py b/python/ray/cloudpickle/cloudpickle_fast.py --- a/python/ray/cloudpickle/cloudpickle_fast.py +++ b/python/ray/cloudpickle/cloudpickle_fast.py @@ -541,8 +541,8 @@ def reducer_override(self, obj): # This is a patch for python3.5 if isinstance(obj, numpy.ndarray): if (self.proto < 5 or - (not obj.flags.c_contiguous and - not obj.flags.f_contiguous) or + (not obj.flags.c_contiguous and not obj.flags.f_contiguous) or + (issubclass(type(obj), numpy.ndarray) and type(obj) is not numpy.ndarray) or obj.dtype == "O" or obj.itemsize == 0): return NotImplemented return _numpy_ndarray_reduce(obj)
diff --git a/python/ray/tests/test_basic.py b/python/ray/tests/test_basic.py --- a/python/ray/tests/test_basic.py +++ b/python/ray/tests/test_basic.py @@ -378,6 +378,56 @@ def test_complex_serialization_with_pickle(shutdown_only): complex_serialization(use_pickle=True) +def test_numpy_serialization(ray_start_regular): + array = np.zeros(314) + from ray.cloudpickle import dumps + buffers = [] + inband = dumps(array, protocol=5, buffer_callback=buffers.append) + assert len(inband) < array.nbytes + assert len(buffers) == 1 + + +def test_numpy_subclass_serialization(ray_start_regular): + class MyNumpyConstant(np.ndarray): + def __init__(self, value): + super().__init__() + self.constant = value + + def __str__(self): + print(self.constant) + + constant = MyNumpyConstant(123) + + def explode(x): + raise RuntimeError("Expected error.") + + ray.register_custom_serializer( + type(constant), serializer=explode, deserializer=explode) + + try: + ray.put(constant) + assert False, "Should never get here!" + except (RuntimeError, IndexError): + print("Correct behavior, proof that customer serializer was used.") + + +def test_numpy_subclass_serialization_pickle(ray_start_regular): + class MyNumpyConstant(np.ndarray): + def __init__(self, value): + super().__init__() + self.constant = value + + def __str__(self): + print(self.constant) + + constant = MyNumpyConstant(123) + ray.register_custom_serializer(type(constant), use_pickle=True) + + repr_orig = repr(constant) + repr_ser = repr(ray.get(ray.put(constant))) + assert repr_orig == repr_ser + + def test_function_descriptor(): python_descriptor = ray._raylet.PythonFunctionDescriptor( "module_name", "function_name", "class_name", "function_hash")
Serialization of certain objects not handled correctly after 0.8.1 ### What is the problem? When using Ray 0.8.2 on Ubuntu 18.04, I am serializing data structures from the astropy library (https://pypi.org/project/astropy/ , version 4.0). Previously, I had configured Ray to use cloudpickle for many of these astropy types, as they couldn't be natively handled by ray/pyarrow. In 0.8.2, however, Ray seems to be ignoring this configuration, and it converts the data type in question (astropy.constants.constant.Constant) to a numpy ndarray upon ray.put/ray.get, which is incorrect behavior and breaks downstream parts of my code. Version 0.8.1 seems to be fine. I also reproduced the issue with the test cases below in ray-0.9.0.dev0 as of today. I suspect that this is a bug related to classes that have a numpy-related base class. ### Reproduction (REQUIRED) Case 1: ignoring custom serializer ```python import numpy import ray class MyConstant(numpy.ndarray): def __init__(self, value): super().__init__() self.constant = value def __str__(self): print(self.constant) constant = MyConstant(123) ray.shutdown() ray.init() def explode(x): raise RuntimeError() ray.register_custom_serializer(type(constant), serializer=explode, deserializer=explode) try: ray.put(constant) print('Should never get here!') except (RuntimeError, IndexError): print('Correct behavior, proof that custom serializer was used.') ``` Case 2: Incorrect round-trip ```python import numpy import ray class MyConstant(numpy.ndarray): def __init__(self, value): super().__init__() self.constant = value def __str__(self): print(self.constant) constant = MyConstant(123) ray.shutdown() ray.init() ray.register_custom_serializer(type(constant), use_pickle=True) repr_orig = repr(constant) repr_ser = repr(ray.get(ray.put(constant))) if repr_orig == repr_ser: print('Good round trip') else: print('Bad round trip!') print(repr_orig) print(repr_ser) ``` If we cannot run your script, we cannot fix your issue. - [x] I have verified my script runs in a clean environment and reproduces the issue. - [x] I have verified the issue also occurs with the [latest wheels](https://ray.readthedocs.io/en/latest/installation.html).
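To make the failure mode concrete, the distinction a serializer has to draw can be sketched as below: only exact `numpy.ndarray` instances are safe candidates for a zero-copy fast path, while subclasses (such as astropy's `Constant`, or `MyConstant` above) may carry extra state or have custom serializers registered and should fall back to the generic pickle path. This is an illustrative check with a made-up name, not the actual cloudpickle patch.

```python
import numpy as np


def safe_for_ndarray_fast_path(obj):
    # Exact ndarrays only: `type(obj) is np.ndarray` rejects subclasses.
    # Object arrays and zero-itemsize arrays are also excluded because
    # their buffers cannot simply be shipped out-of-band.
    return (type(obj) is np.ndarray
            and obj.dtype != object
            and obj.itemsize != 0)
```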
Thanks, I can reproduce this and it looks like it was introduced in https://github.com/ray-project/ray/pull/6675 @suquark Can you look into this? @mc-allen it's a great catch! let me fix this
2020-03-02T02:04:31
ray-project/ray
7,398
ray-project__ray-7398
[ "7397" ]
2771af103635fe5f701ce913b48a704af353d479
diff --git a/rllib/agents/ppo/ppo_torch_policy.py b/rllib/agents/ppo/ppo_torch_policy.py --- a/rllib/agents/ppo/ppo_torch_policy.py +++ b/rllib/agents/ppo/ppo_torch_policy.py @@ -67,9 +67,15 @@ def __init__(self, vf_loss_coeff (float): Coefficient of the value function loss use_gae (bool): If true, use the Generalized Advantage Estimator. """ + if valid_mask is not None: - def reduce_mean_valid(t): - return torch.mean(t * valid_mask) + def reduce_mean_valid(t): + return torch.mean(t * valid_mask) + + else: + + def reduce_mean_valid(t): + return torch.mean(t) prev_dist = dist_class(prev_logits, model) # Make loss functions. @@ -109,13 +115,11 @@ def ppo_surrogate_loss(policy, model, dist_class, train_batch): logits, state = model.from_batch(train_batch) action_dist = dist_class(logits, model) + mask = None if state: max_seq_len = torch.max(train_batch["seq_lens"]) mask = sequence_mask(train_batch["seq_lens"], max_seq_len) mask = torch.reshape(mask, [-1]) - else: - mask = torch.ones_like( - train_batch[Postprocessing.ADVANTAGES], dtype=torch.bool) policy.loss_obj = PPOLoss( dist_class, diff --git a/rllib/models/torch/torch_action_dist.py b/rllib/models/torch/torch_action_dist.py --- a/rllib/models/torch/torch_action_dist.py +++ b/rllib/models/torch/torch_action_dist.py @@ -71,7 +71,15 @@ def deterministic_sample(self): @override(TorchDistributionWrapper) def logp(self, actions): - return TorchDistributionWrapper.logp(self, actions).sum(-1) + return super().logp(actions).sum(-1) + + @override(TorchDistributionWrapper) + def entropy(self): + return super().entropy().sum(-1) + + @override(TorchDistributionWrapper) + def kl(self, other): + return super().kl(other).sum(-1) @staticmethod @override(ActionDistribution)
diff --git a/rllib/tests/test_supported_spaces.py b/rllib/tests/test_supported_spaces.py --- a/rllib/tests/test_supported_spaces.py +++ b/rllib/tests/test_supported_spaces.py @@ -11,6 +11,8 @@ from ray.rllib.agents.registry import get_agent_class from ray.rllib.models.tf.fcnet_v2 import FullyConnectedNetwork as FCNetV2 from ray.rllib.models.tf.visionnet_v2 import VisionNetwork as VisionNetV2 +from ray.rllib.models.torch.visionnet import VisionNetwork as TorchVisionNetV2 +from ray.rllib.models.torch.fcnet import FullyConnectedNetwork as TorchFCNetV2 from ray.rllib.tests.test_multi_agent_env import MultiCartpole, \ MultiMountainCar from ray.rllib.utils.error import UnsupportedSpaceException @@ -75,10 +77,11 @@ def check_support(alg, config, stats, check_bounds=False, name=None): covered_o = set() config["log_level"] = "ERROR" first_error = None + torch = config.get("use_pytorch", False) for a_name, action_space in ACTION_SPACES_TO_TEST.items(): for o_name, obs_space in OBSERVATION_SPACES_TO_TEST.items(): - print("=== Testing {} A={} S={} ===".format( - alg, action_space, obs_space)) + print("=== Testing {} (torch={}) A={} S={} ===".format( + alg, torch, action_space, obs_space)) stub_env = make_stub_env(action_space, obs_space, check_bounds) register_env("stub_env", lambda c: stub_env()) stat = "ok" @@ -86,14 +89,26 @@ def check_support(alg, config, stats, check_bounds=False, name=None): try: if a_name in covered_a and o_name in covered_o: stat = "skip" # speed up tests by avoiding full grid + # TODO(sven): Add necessary torch distributions. + elif torch and a_name in ["tuple", "multidiscrete"]: + stat = "unsupported" else: a = get_agent_class(alg)(config=config, env="stub_env") if alg not in ["DDPG", "ES", "ARS", "SAC"]: if o_name in ["atari", "image"]: - assert isinstance(a.get_policy().model, - VisionNetV2) + if torch: + assert isinstance( + a.get_policy().model, TorchVisionNetV2) + else: + assert isinstance( + a.get_policy().model, VisionNetV2) elif o_name in ["vector", "vector2"]: - assert isinstance(a.get_policy().model, FCNetV2) + if torch: + assert isinstance( + a.get_policy().model, TorchFCNetV2) + else: + assert isinstance( + a.get_policy().model, FCNetV2) a.train() covered_a.add(a_name) covered_o.add(o_name) @@ -144,15 +159,15 @@ def tearDown(self): ray.shutdown() def test_a3c(self): - check_support( - "A3C", { - "num_workers": 1, - "optimizer": { - "grads_per_step": 1 - } - }, - self.stats, - check_bounds=True) + config = { + "num_workers": 1, + "optimizer": { + "grads_per_step": 1 + } + } + check_support("A3C", config, self.stats, check_bounds=True) + config["use_pytorch"] = True + check_support("A3C", config, self.stats, check_bounds=True) def test_appo(self): check_support("APPO", {"num_gpus": 0, "vtrace": False}, self.stats) @@ -201,25 +216,25 @@ def test_impala(self): check_support("IMPALA", {"num_gpus": 0}, self.stats) def test_ppo(self): - check_support( - "PPO", { - "num_workers": 1, - "num_sgd_iter": 1, - "train_batch_size": 10, - "sample_batch_size": 10, - "sgd_minibatch_size": 1, - }, - self.stats, - check_bounds=True) + config = { + "num_workers": 1, + "num_sgd_iter": 1, + "train_batch_size": 10, + "sample_batch_size": 10, + "sgd_minibatch_size": 1, + } + check_support("PPO", config, self.stats, check_bounds=True) + config["use_pytorch"] = True + check_support("PPO", config, self.stats, check_bounds=True) def test_pg(self): - check_support( - "PG", { - "num_workers": 1, - "optimizer": {} - }, - self.stats, - check_bounds=True) + config = { + "num_workers": 1, + 
"optimizer": {} + } + check_support("PG", config, self.stats, check_bounds=True) + config["use_pytorch"] = True + check_support("PG", config, self.stats, check_bounds=True) def test_sac(self): check_support("SAC", {}, self.stats, check_bounds=True)
[rllib] TorchDiagGaussian doesn’t handle multiple actions correctly. This is not a contribution. Ray version: 0.8.2 Python version: 3.6.8 PyTorch version: 1.4 OS: Ubuntu 18.04 Docker TorchDiagGaussian doesn’t handle multiple actions correctly. As a result, training PPO with PyTorch will crash when the action space has more than one dimension. Here’s a minimal reproduction script: ```python import gym from gym.spaces import Box from ray import tune class ContinuousEnv(gym.Env): def __init__(self, config): self.action_space = Box(0.0, 1.0, shape=(2,)) self.observation_space = Box(0.0, 1.0, shape=(1, )) def reset(self): return [0.0] def step(self, action): return [0.0], 1.0, False, {} tune.run( "PPO", config={"env": ContinuousEnv, "use_pytorch": True, "num_workers": 1}) ```
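To illustrate the underlying requirement: for a diagonal Gaussian over an N-dimensional action, the per-dimension log-probabilities, entropies, and KL terms must be summed over the action axis so that the policy loss sees one scalar per sample. The standalone sketch below uses plain `torch.distributions` and is not RLlib code.

```python
import torch
from torch.distributions import Normal, kl_divergence

batch, action_dim = 4, 2
dist = Normal(torch.zeros(batch, action_dim), torch.ones(batch, action_dim))
other = Normal(torch.ones(batch, action_dim), torch.ones(batch, action_dim))
actions = torch.rand(batch, action_dim)

# Reduce over the action dimension so each sample yields a single scalar.
logp = dist.log_prob(actions).sum(-1)     # shape [4], not [4, 2]
entropy = dist.entropy().sum(-1)          # shape [4]
kl = kl_divergence(dist, other).sum(-1)   # shape [4]
```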
Thanks for filing this. Taking a look ...
2020-03-02T10:25:06
ray-project/ray
7,434
ray-project__ray-7434
[ "3472" ]
476b5c6196fa734794e395a53d2506e7c8485d12
diff --git a/python/ray/actor.py b/python/ray/actor.py --- a/python/ray/actor.py +++ b/python/ray/actor.py @@ -652,6 +652,14 @@ def __init__(self, decorator=self._ray_method_decorators.get(method_name)) setattr(self, method_name, method) + def __del__(self): + # Mark that this actor handle has gone out of scope. Once all actor + # handles are out of scope, the actor will exit. + worker = ray.worker.get_global_worker() + if worker.connected and hasattr(worker, "core_worker"): + worker.core_worker.remove_actor_handle_reference( + self._ray_actor_id) + def _actor_method_call(self, method_name, args=None, @@ -752,36 +760,6 @@ def __repr__(self): self._ray_actor_creation_function_descriptor.class_name, self._actor_id.hex()) - def __del__(self): - """Terminate the worker that is running this actor.""" - # TODO(swang): Also clean up forked actor handles. - # Kill the worker if this is the original actor handle, created - # with Class.remote(). TODO(rkn): Even without passing handles around, - # this is not the right policy. the actor should be alive as long as - # there are ANY handles in scope in the process that created the actor, - # not just the first one. - worker = ray.worker.get_global_worker() - exported_in_current_session_and_job = ( - self._ray_session_and_job == worker.current_session_and_job) - if (worker.mode == ray.worker.SCRIPT_MODE - and not exported_in_current_session_and_job): - # If the worker is a driver and driver id has changed because - # Ray was shut down re-initialized, the actor is already cleaned up - # and we don't need to send `__ray_terminate__` again. - logger.warning( - "Actor is garbage collected in the wrong driver." + - " Actor id = %s, class name = %s.", self._ray_actor_id, - self._ray_actor_creation_function_descriptor.class_name) - return - if worker.connected and self._ray_original_handle: - # Note: in py2 the weakref is destroyed prior to calling __del__ - # so we need to set the hardref here briefly - try: - self.__ray_terminate__._actor_hard_ref = self - self.__ray_terminate__.remote() - finally: - self.__ray_terminate__._actor_hard_ref = None - def __ray_kill__(self): """Deprecated - use ray.kill() instead.""" logger.warning("actor.__ray_kill__() is deprecated and will be removed" @@ -792,13 +770,9 @@ def __ray_kill__(self): def _actor_id(self): return self._ray_actor_id - def _serialization_helper(self, ray_forking): + def _serialization_helper(self): """This is defined in order to make pickling work. - Args: - ray_forking: True if this is being called because Ray is forking - the actor handle and false if it is being called by pickling. - Returns: A dictionary of the information needed to reconstruct the object. """ @@ -807,10 +781,11 @@ def _serialization_helper(self, ray_forking): if hasattr(worker, "core_worker"): # Non-local mode - state = worker.core_worker.serialize_actor_handle(self) + state = worker.core_worker.serialize_actor_handle( + self._ray_actor_id) else: # Local mode - state = { + state = ({ "actor_language": self._ray_actor_language, "actor_id": self._ray_actor_id, "method_decorators": self._ray_method_decorators, @@ -819,18 +794,20 @@ def _serialization_helper(self, ray_forking): "actor_method_cpus": self._ray_actor_method_cpus, "actor_creation_function_descriptor": self. _ray_actor_creation_function_descriptor, - } + }, None) return state @classmethod - def _deserialization_helper(cls, state, ray_forking): + def _deserialization_helper(cls, state, outer_object_id=None): """This is defined in order to make pickling work. 
Args: state: The serialized state of the actor handle. - ray_forking: True if this is being called because Ray is forking - the actor handle and false if it is being called by pickling. + outer_object_id: The ObjectID that the serialized actor handle was + contained in, if any. This is used for counting references to + the actor handle. + """ worker = ray.worker.get_global_worker() worker.check_connected() @@ -838,7 +815,7 @@ def _deserialization_helper(cls, state, ray_forking): if hasattr(worker, "core_worker"): # Non-local mode return worker.core_worker.deserialize_and_register_actor_handle( - state) + state, outer_object_id) else: # Local mode return cls( @@ -855,8 +832,8 @@ def _deserialization_helper(cls, state, ray_forking): def __reduce__(self): """This code path is used by pickling but not by Ray forking.""" - state = self._serialization_helper(False) - return ActorHandle._deserialization_helper, (state, False) + state = self._serialization_helper() + return ActorHandle._deserialization_helper, (state) def modify_class(cls): diff --git a/python/ray/serialization.py b/python/ray/serialization.py --- a/python/ray/serialization.py +++ b/python/ray/serialization.py @@ -135,11 +135,18 @@ def __init__(self, worker): self._thread_local = threading.local() def actor_handle_serializer(obj): - return obj._serialization_helper(True) + serialized, actor_handle_id = obj._serialization_helper() + # Update ref counting for the actor handle + self.add_contained_object_id(actor_handle_id) + return serialized def actor_handle_deserializer(serialized_obj): + # If this actor handle was stored in another object, then tell the + # core worker. + context = ray.worker.global_worker.get_serialization_context() + outer_id = context.get_outer_object_id() return ray.actor.ActorHandle._deserialization_helper( - serialized_obj, True) + serialized_obj, outer_id) self._register_cloudpickle_serializer( ray.actor.ActorHandle, @@ -153,15 +160,7 @@ def id_deserializer(serialized_obj): return serialized_obj[0](*serialized_obj[1]) def object_id_serializer(obj): - if self.is_in_band_serialization(): - self.add_contained_object_id(obj) - else: - # If this serialization is out-of-band (e.g., from a call to - # cloudpickle directly or captured in a remote function/actor), - # then pin the object for the lifetime of this worker by adding - # a local reference that won't ever be removed. - ray.worker.get_global_worker( - ).core_worker.add_object_id_reference(obj) + self.add_contained_object_id(obj) owner_id = "" owner_address = "" # TODO(swang): Remove this check. Otherwise, we will not be able to @@ -243,10 +242,20 @@ def get_and_clear_contained_object_ids(self): return object_ids def add_contained_object_id(self, object_id): - if not hasattr(self._thread_local, "object_ids"): - self._thread_local.object_ids = set() - - self._thread_local.object_ids.add(object_id) + if self.is_in_band_serialization(): + # This object ID is being stored in an object. Add the ID to the + # list of IDs contained in the object so that we keep the inner + # object value alive as long as the outer object is in scope. + if not hasattr(self._thread_local, "object_ids"): + self._thread_local.object_ids = set() + self._thread_local.object_ids.add(object_id) + else: + # If this serialization is out-of-band (e.g., from a call to + # cloudpickle directly or captured in a remote function/actor), + # then pin the object for the lifetime of this worker by adding + # a local reference that won't ever be removed. 
+ ray.worker.get_global_worker().core_worker.add_object_id_reference( + object_id) def _deserialize_pickle5_data(self, data): if not self.use_pickle: diff --git a/streaming/python/runtime/graph.py b/streaming/python/runtime/graph.py --- a/streaming/python/runtime/graph.py +++ b/streaming/python/runtime/graph.py @@ -62,7 +62,7 @@ def __init__(self, task_pb): self.task_id = task_pb.task_id self.task_index = task_pb.task_index self.worker_actor = ray.actor.ActorHandle.\ - _deserialization_helper(task_pb.worker_actor, False) + _deserialization_helper(task_pb.worker_actor) class ExecutionGraph:
diff --git a/java/test/src/main/resources/test_cross_language_invocation.py b/java/test/src/main/resources/test_cross_language_invocation.py --- a/java/test/src/main/resources/test_cross_language_invocation.py +++ b/java/test/src/main/resources/test_cross_language_invocation.py @@ -34,7 +34,7 @@ def py_func_call_java_actor(value): @ray.remote def py_func_call_java_actor_from_handle(value): assert isinstance(value, bytes) - actor_handle = ray.actor.ActorHandle._deserialization_helper(value, False) + actor_handle = ray.actor.ActorHandle._deserialization_helper(value) r = actor_handle.concat.remote(b"2") return ray.get(r) @@ -42,7 +42,7 @@ def py_func_call_java_actor_from_handle(value): @ray.remote def py_func_call_python_actor_from_handle(value): assert isinstance(value, bytes) - actor_handle = ray.actor.ActorHandle._deserialization_helper(value, False) + actor_handle = ray.actor.ActorHandle._deserialization_helper(value) r = actor_handle.increase.remote(2) return ray.get(r) @@ -52,7 +52,7 @@ def py_func_pass_python_actor_handle(): counter = Counter.remote(2) f = ray.java_function("org.ray.api.test.CrossLanguageInvocationTest", "callPythonActorHandle") - r = f.remote(counter._serialization_helper(False)) + r = f.remote(counter._serialization_helper()) return ray.get(r) diff --git a/python/ray/tests/test_actor.py b/python/ray/tests/test_actor.py --- a/python/ray/tests/test_actor.py +++ b/python/ray/tests/test_actor.py @@ -106,6 +106,7 @@ class Actor(object): # The cache of ActorClassMethodMetadata. cache = ray.actor.ActorClassMethodMetadata._cache + cache.clear() # Check cache hit during ActorHandle deserialization. A1 = ray.remote(Actor) @@ -532,6 +533,34 @@ def method(self): assert ray.get(Actor.remote().method.remote()) == 1 +def test_distributed_actor_handle_deletion(ray_start_regular): + @ray.remote + class Actor: + def method(self): + return 1 + + def getpid(self): + return os.getpid() + + @ray.remote + def f(actor, signal): + ray.get(signal.wait.remote()) + return ray.get(actor.method.remote()) + + signal = ray.test_utils.SignalActor.remote() + a = Actor.remote() + pid = ray.get(a.getpid.remote()) + # Pass the handle to another task that cannot run yet. + x_id = f.remote(a, signal) + # Delete the original handle. The actor should not get killed yet. + del a + + # Once the task finishes, the actor process should get killed. 
+ ray.get(signal.send.remote()) + assert ray.get(x_id) == 1 + ray.test_utils.wait_for_pid_to_exit(pid) + + def test_multiple_actors(ray_start_regular): @ray.remote class Counter: diff --git a/python/ray/tests/test_metrics.py b/python/ray/tests/test_metrics.py --- a/python/ray/tests/test_metrics.py +++ b/python/ray/tests/test_metrics.py @@ -202,7 +202,7 @@ def getpid(self): try: assert len(actor_info) == 1 _, parent_actor_info = actor_info.popitem() - assert parent_actor_info["numObjectIdsInScope"] == 11 + assert parent_actor_info["numObjectIdsInScope"] == 13 assert parent_actor_info["numLocalObjects"] == 10 children = parent_actor_info["children"] assert len(children) == 2 diff --git a/src/ray/core_worker/test/core_worker_test.cc b/src/ray/core_worker/test/core_worker_test.cc --- a/src/ray/core_worker/test/core_worker_test.cc +++ b/src/ray/core_worker/test/core_worker_test.cc @@ -618,9 +618,10 @@ TEST_F(ZeroNodeTest, TestTaskSpecPerf) { /*is_detached*/ false, /*is_asyncio*/ false}; const auto job_id = NextJobId(); - ActorHandle actor_handle(ActorID::Of(job_id, TaskID::ForDriverTask(job_id), 1), job_id, - ObjectID::FromRandom(), function.GetLanguage(), true, - function.GetFunctionDescriptor(), ""); + ActorHandle actor_handle(ActorID::Of(job_id, TaskID::ForDriverTask(job_id), 1), + TaskID::Nil(), rpc::Address(), job_id, ObjectID::FromRandom(), + function.GetLanguage(), true, function.GetFunctionDescriptor(), + ""); // Manually create `num_tasks` task specs, and for each of them create a // `PushTaskRequest`, this is to batch performance of TaskSpec @@ -734,8 +735,9 @@ TEST_F(ZeroNodeTest, TestWorkerContext) { TEST_F(ZeroNodeTest, TestActorHandle) { // Test actor handle serialization and deserialization round trip. JobID job_id = NextJobId(); - ActorHandle original(ActorID::Of(job_id, TaskID::ForDriverTask(job_id), 0), job_id, - ObjectID::FromRandom(), Language::PYTHON, /*is_direct_call=*/false, + ActorHandle original(ActorID::Of(job_id, TaskID::ForDriverTask(job_id), 0), + TaskID::Nil(), rpc::Address(), job_id, ObjectID::FromRandom(), + Language::PYTHON, /*is_direct_call=*/false, ray::FunctionDescriptorBuilder::BuildPython("", "", "", ""), ""); std::string output; original.Serialize(&output);
Actor handle GC is problematic. ### System information - **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: all - **Ray installed from (source or binary)**: source - **Ray version**: latest master - **Python version**: all - **Exact command to reproduce**: n/a ### Describe the problem Currently, when the original actor handle is garbage collected by the Python interpreter, a `__ray_terminate__` message is sent to also terminate the actor, see https://github.com/ray-project/ray/blob/06f6431765b16da7cdb924cb4716a35acf0fba84/python/ray/actor.py#L662. This mechanism is problematic because: 1) if there are forked handles, the actor shouldn't be terminated; 2) `__del__` is unreliable and not recommended, see https://stackoverflow.com/questions/1481488/what-is-the-del-method-how-to-call-it. The first point is a critical problem and is hard to handle automatically. Even if we count the number of forked handles and only terminate the actor when all handles are GC'ed, it's still problematic if users serialize a handle and deserialize it after a long time. Ray doesn't know when an actor is no longer needed; only the user knows. Thus, I'm in favor of letting users manually terminate an actor with `actor.__ray_terminate__.remote()` when they don't need it any more. To prevent actors from wasting resources forever, there is already a mechanism that cleans up all workers when a driver exits.
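A small hedged sketch of the explicit-teardown style the issue argues for, where the caller decides when the actor's life ends instead of relying on `__del__` of the original handle. `__ray_terminate__` is the internal method the issue itself names; the surrounding script and class are illustrative only.

```python
import ray


@ray.remote
class Cache:
    def get(self, key):
        return key  # placeholder work


ray.init()
cache = Cache.remote()
print(ray.get(cache.get.remote("hello")))

# Explicitly ask the actor to exit once it is no longer needed, rather than
# relying on garbage collection of the original handle.
cache.__ray_terminate__.remote()
```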
@raulchen is this still an issue? @robertnishihara Hi Robert. Can confirm, still an issue on `0.8.1` Yeah, this is still an issue on latest master. BTW, we've removed the `__del__` function in our production code.
2020-03-04T00:52:14