Please provide a description of the function:def mark_as_duplicate(self, duplicated_cid, master_cid, msg=''): content_id_from = self.get_post(duplicated_cid)["id"] content_id_to = self.get_post(master_cid)["id"] params = { "cid_dupe": content_id_from, "cid_to": content_id_to, "msg": msg } return self._rpc.content_mark_duplicate(params)
[ "Mark the post at ``duplicated_cid`` as a duplicate of ``master_cid``\n\n :type duplicated_cid: int\n :param duplicated_cid: The numeric id of the duplicated post\n :type master_cid: int\n :param master_cid: The numeric id of an older post. This will be the\n post that gets kept and ``duplicated_cid`` post will be concatinated\n as a follow up to ``master_cid`` post.\n :type msg: string\n :param msg: the optional message (or reason for marking as duplicate)\n :returns: True if it is successful. False otherwise\n " ]
Please provide a description of the function:def resolve_post(self, post): try: cid = post["id"] except KeyError: cid = post params = { "cid": cid, "resolved": "true" } return self._rpc.content_mark_resolved(params)
[ "Mark post as resolved\n\n :type post: dict|str|int\n :param post: Either the post dict returned by another API method, or\n the `cid` field of that post.\n :returns: True if it is successful. False otherwise\n " ]
Please provide a description of the function:def pin_post(self, post): try: cid = post['id'] except KeyError: cid = post params = { "cid": cid, } return self._rpc.content_pin(params)
[ "Pin post\n\n :type post: dict|str|int\n :param post: Either the post dict returned by another API method, or\n the `cid` field of that post.\n :returns: True if it is successful. False otherwise\n " ]
Please provide a description of the function:def delete_post(self, post): try: cid = post['id'] except KeyError: cid = post except TypeError: post = self.get_post(post) cid = post['id'] params = { "cid": cid, } return self._rpc.content_delete(params)
[ " Deletes post by cid\n\n :type post: dict|str|int\n :param post: Either the post dict returned by another API method, the post ID, or\n the `cid` field of that post.\n :rtype: dict\n :returns: Dictionary with information about the post cid.\n " ]
Please provide a description of the function:def get_feed(self, limit=100, offset=0): return self._rpc.get_my_feed(limit=limit, offset=offset)
[ "Get your feed for this network\n\n Pagination for this can be achieved by using the ``limit`` and\n ``offset`` params\n\n :type limit: int\n :param limit: Number of posts from feed to get, starting from ``offset``\n :type offset: int\n :param offset: Offset starting from bottom of feed\n :rtype: dict\n :returns: Feed metadata, including list of posts in feed format; this\n means they are not the full posts but only in partial form as\n necessary to display them on the Piazza feed. For example, the\n returned dicts only have content snippets of posts rather\n than the full text.\n " ]
Please provide a description of the function:def get_filtered_feed(self, feed_filter): assert isinstance(feed_filter, (UnreadFilter, FollowingFilter, FolderFilter)) return self._rpc.filter_feed(**feed_filter.to_kwargs())
[ "Get your feed containing only posts filtered by ``feed_filter``\n\n :type feed_filter: FeedFilter\n :param feed_filter: Must be an instance of either: UnreadFilter,\n FollowingFilter, or FolderFilter\n :rtype: dict\n " ]
Please provide a description of the function:def get_all_datasets(self): success = True for dataset in tqdm(self.datasets): individual_success = self.get_dataset(dataset) if not individual_success: success = False return success
[ "\n Make sure the datasets are present. If not, downloads and extracts them.\n Attempts the download five times because the file hosting is unreliable.\n :return: True if successful, false otherwise\n " ]
Please provide a description of the function:def get_dataset(self, dataset): # If the dataset is present, no need to download anything. success = True dataset_path = self.base_dataset_path + dataset if not isdir(dataset_path): # Try 5 times to download. The download page is unreliable, so we need a few tries. was_error = False for iteration in range(5): # Guard against trying again if successful if iteration == 0 or was_error is True: zip_path = dataset_path + ".zip" # Download zip files if they're not there if not isfile(zip_path): try: with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset) as pbar: urlretrieve(self.datasets[dataset]["url"], zip_path, pbar.hook) except Exception as ex: print("Error downloading %s: %s" % (dataset, ex)) was_error = True # Unzip the data files if not isdir(dataset_path): try: with zipfile.ZipFile(zip_path) as zip_archive: zip_archive.extractall(path=dataset_path) zip_archive.close() except Exception as ex: print("Error unzipping %s: %s" % (zip_path, ex)) # Usually the error is caused by a bad zip file. # Delete it so the program will try to download it again. try: remove(zip_path) except FileNotFoundError: pass was_error = True if was_error: print("\nThis recognizer is trained by the CASIA handwriting database.") print("If the download doesn't work, you can get the files at %s" % self.datasets[dataset]["url"]) print("If you have download problems, " "wget may be effective at downloading because of download resuming.") success = False return success
[ "\n Checks to see if the dataset is present. If not, it downloads and unzips it.\n " ]
Please provide a description of the function:def get_raw(self, verbose=True): assert self.get_all_datasets() is True, "Datasets aren't properly downloaded, " \ "rerun to try again or download datasets manually." for dataset in self.datasets: # Create a folder to hold the dataset prefix_path = self.base_dataset_path + "raw/" + dataset if not isdir(prefix_path): makedirs(prefix_path) label_count = Counter() for image, label in tqdm(self.load_dataset(dataset, verbose=verbose)): #assert type(image) == "PIL.Image.Image", "image is not the correct type. " # Make sure there's a folder for the class label. label_path = prefix_path + "/" + label if not isdir(label_path): makedirs(label_path) label_count[label] = label_count[label] + 1 image.save(label_path + "/%s_%s.jpg" % (label, label_count[label]))
[ "\n Used to create easily introspectable image directories of all the data.\n :return:\n " ]
Please provide a description of the function:def load_character_images(self, verbose=True): for dataset in self.character_sets: assert self.get_dataset(dataset) is True, "Datasets aren't properly downloaded, " \ "rerun to try again or download datasets manually." for dataset in self.character_sets: for image, label in self.load_dataset(dataset, verbose=verbose): yield image, label
[ "\n Generator to load all images in the dataset. Yields (image, character) pairs until all images have been loaded.\n :return: (Pillow.Image.Image, string) tuples\n " ]
Please provide a description of the function:def load_dataset(self, dataset, verbose=True): assert self.get_dataset(dataset) is True, "Datasets aren't properly downloaded, " \ "rerun to try again or download datasets manually." if verbose: print("Loading %s" % dataset) dataset_path = self.base_dataset_path + dataset for path in tqdm(glob.glob(dataset_path + "/*.gnt")): for image, label in self.load_gnt_file(path): yield image, label
[ "\n Load a directory of gnt files. Yields the image and label in tuples.\n :param dataset: The directory to load.\n :return: Yields (Pillow.Image.Image, label) pairs.\n " ]
Please provide a description of the function:def load_gnt_file(filename): # Thanks to nhatch for the code to read the GNT file, available at https://github.com/nhatch/casia with open(filename, "rb") as f: while True: packed_length = f.read(4) if packed_length == b'': break length = struct.unpack("<I", packed_length)[0] raw_label = struct.unpack(">cc", f.read(2)) width = struct.unpack("<H", f.read(2))[0] height = struct.unpack("<H", f.read(2))[0] photo_bytes = struct.unpack("{}B".format(height * width), f.read(height * width)) # Comes out as a tuple of chars. Need to be combined. Encoded as gb2312, gotta convert to unicode. label = decode(raw_label[0] + raw_label[1], encoding="gb2312") # Create an array of bytes for the image, match it to the proper dimensions, and turn it into an image. image = toimage(np.array(photo_bytes).reshape(height, width)) yield image, label
[ "\n Load characters and images from a given GNT file.\n :param filename: The file path to load.\n :return: (image: Pillow.Image.Image, character) tuples\n " ]
Please provide a description of the function:def middleware(self, *args, **kwargs): kwargs.setdefault('priority', 5) kwargs.setdefault('relative', None) kwargs.setdefault('attach_to', None) kwargs.setdefault('with_context', False) if len(args) == 1 and callable(args[0]): middle_f = args[0] self._middlewares.append( FutureMiddleware(middle_f, args=tuple(), kwargs=kwargs)) return middle_f def wrapper(middleware_f): self._middlewares.append( FutureMiddleware(middleware_f, args=args, kwargs=kwargs)) return middleware_f return wrapper
[ "Decorate and register middleware\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The middleware function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def exception(self, *args, **kwargs): if len(args) == 1 and callable(args[0]): if isinstance(args[0], type) and issubclass(args[0], Exception): pass else: # pragma: no cover raise RuntimeError("Cannot use the @exception decorator " "without arguments") def wrapper(handler_f): self._exceptions.append(FutureException(handler_f, exceptions=args, kwargs=kwargs)) return handler_f return wrapper
[ "Decorate and register an exception handler\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def listener(self, event, *args, **kwargs): if len(args) == 1 and callable(args[0]): # pragma: no cover raise RuntimeError("Cannot use the @listener decorator without " "arguments") def wrapper(listener_f): if len(kwargs) > 0: listener_f = (listener_f, kwargs) self._listeners[event].append(listener_f) return listener_f return wrapper
[ "Create a listener from a decorated function.\n :param event: Event to listen to.\n :type event: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The function to use as the listener\n :rtype: fn\n " ]
Please provide a description of the function:def route(self, uri, *args, **kwargs): if len(args) == 0 and callable(uri): # pragma: no cover raise RuntimeError("Cannot use the @route decorator without " "arguments.") kwargs.setdefault('methods', frozenset({'GET'})) kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', False) kwargs.setdefault('stream', False) kwargs.setdefault('name', None) def wrapper(handler_f): self._routes.append(FutureRoute(handler_f, uri, args, kwargs)) return handler_f return wrapper
[ "Create a plugin route from a decorated function.\n :param uri: endpoint at which the route will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def websocket(self, uri, *args, **kwargs): kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', None) kwargs.setdefault('subprotocols', None) kwargs.setdefault('name', None) def wrapper(handler_f): self._ws.append(FutureWebsocket(handler_f, uri, args, kwargs)) return handler_f return wrapper
[ "Create a websocket route from a decorated function\n :param uri: endpoint at which the socket endpoint will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def static(self, uri, file_or_directory, *args, **kwargs): kwargs.setdefault('pattern', r'/?.+') kwargs.setdefault('use_modified_since', True) kwargs.setdefault('use_content_range', False) kwargs.setdefault('stream_large_files', False) kwargs.setdefault('name', 'static') kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', None) self._static.append(FutureStatic(uri, file_or_directory, args, kwargs))
[ "Create a websocket route from a decorated function\n :param uri: endpoint at which the socket endpoint will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def first_plugin_context(self): # Note, because registrations are stored in a set, its not _really_ # the first one, but whichever one it sees first in the set. first_spf_reg = next(iter(self.registrations)) return self.get_context_from_spf(first_spf_reg)
[ "Returns the context is associated with the first app this plugin was\n registered on" ]
Please provide a description of the function:def decorate(cls, app, *args, run_middleware=False, with_context=False, **kwargs): from spf.framework import SanicPluginsFramework spf = SanicPluginsFramework(app) # get the singleton from the app try: assoc = spf.register_plugin(cls, skip_reg=True) except ValueError as e: # this is normal, if this plugin has been registered previously assert e.args and len(e.args) > 1 assoc = e.args[1] (plugin, reg) = assoc inst = spf.get_plugin(plugin) # plugin may not actually be registered # registered might be True, False or None at this point regd = True if inst else None if regd is True: # middleware will be run on this route anyway, because the plugin # is registered on the app. Turn it off on the route-level. run_middleware = False req_middleware = deque() resp_middleware = deque() if run_middleware: for i, m in enumerate(plugin._middlewares): attach_to = m.kwargs.pop('attach_to', 'request') priority = m.kwargs.pop('priority', 5) with_context = m.kwargs.pop('with_context', False) mw_handle_fn = m.middleware if attach_to == 'response': relative = m.kwargs.pop('relative', 'post') if relative == "pre": mw = (0, 0 - priority, 0 - i, mw_handle_fn, with_context, m.args, m.kwargs) else: # relative = "post" mw = (1, 0 - priority, 0 - i, mw_handle_fn, with_context, m.args, m.kwargs) resp_middleware.append(mw) else: # attach_to = "request" relative = m.kwargs.pop('relative', 'pre') if relative == "post": mw = (1, priority, i, mw_handle_fn, with_context, m.args, m.kwargs) else: # relative = "pre" mw = (0, priority, i, mw_handle_fn, with_context, m.args, m.kwargs) req_middleware.append(mw) req_middleware = tuple(sorted(req_middleware)) resp_middleware = tuple(sorted(resp_middleware)) def _decorator(f): nonlocal spf, plugin, regd, run_middleware, with_context nonlocal req_middleware, resp_middleware, args, kwargs async def wrapper(request, *a, **kw): nonlocal spf, plugin, regd, run_middleware, with_context nonlocal req_middleware, resp_middleware, f, args, kwargs # the plugin was not registered on the app, it might be now if regd is None: _inst = spf.get_plugin(plugin) regd = _inst is not None context = plugin.get_context_from_spf(spf) if run_middleware and not regd and len(req_middleware) > 0: for (_a, _p, _i, handler, with_context, args, kwargs) \ in req_middleware: if with_context: resp = handler(request, *args, context=context, **kwargs) else: resp = handler(request, *args, **kwargs) if isawaitable(resp): resp = await resp if resp: return response = await plugin.route_wrapper( f, request, context, a, kw, *args, with_context=with_context, **kwargs) if isawaitable(response): response = await response if run_middleware and not regd and len(resp_middleware) > 0: for (_a, _p, _i, handler, with_context, args, kwargs) \ in resp_middleware: if with_context: _resp = handler(request, response, *args, context=context, **kwargs) else: _resp = handler(request, response, *args, **kwargs) if isawaitable(_resp): _resp = await _resp if _resp: response = _resp break return response return update_wrapper(wrapper, f) return _decorator
[ "\n This is a decorator that can be used to apply this plugin to a specific\n route/view on your app, rather than the whole app.\n :param app:\n :type app: Sanic | Blueprint\n :param args:\n :type args: tuple(Any)\n :param run_middleware:\n :type run_middleware: bool\n :param with_context:\n :type with_context: bool\n :param kwargs:\n :param kwargs: dict(Any)\n :return: the decorated route/view\n :rtype: fn\n " ]
Please provide a description of the function:async def route_wrapper(self, route, request, context, request_args, request_kw, *decorator_args, with_context=None, **decorator_kw): # by default, do nothing, just run the wrapped function if with_context: resp = route(request, context, *request_args, **request_kw) else: resp = route(request, *request_args, **request_kw) if isawaitable(resp): resp = await resp return resp
[ "This is the function that is called when a route is decorated with\n your plugin decorator. Context will normally be None, but the user\n can pass use_context=True so the route will get the plugin\n context\n " ]
Please provide a description of the function:def replace(self, key, value): if key in self._inner().keys(): return self.__setitem__(key, value) parents_searched = [self] parent = self._parent_context while parent: try: if key in parent.keys(): return parent.__setitem__(key, value) except (KeyError, AttributeError): pass parents_searched.append(parent) # noinspection PyProtectedMember next_parent = parent._parent_context if next_parent in parents_searched: raise RuntimeError("Recursive ContextDict found!") parent = next_parent return self.__setitem__(key, value)
[ "\n If this ContextDict doesn't already have this key, it sets\n the value on a parent ContextDict if that parent has the key,\n otherwise sets the value on this ContextDict.\n :param key:\n :param value:\n :return: Nothing\n :rtype: None\n " ]
Please provide a description of the function:def update(self, E=None, **F): if E is not None: if hasattr(E, 'keys'): for K in E: self.replace(K, E[K]) elif hasattr(E, 'items'): for K, V in E.items(): self.replace(K, V) else: for K, V in E: self.replace(K, V) for K in F: self.replace(K, F[K])
[ "\n Update ContextDict from dict/iterable E and F\n :return: Nothing\n :rtype: None\n " ]
Please provide a description of the function:def middleware(self, *args, **kwargs): kwargs.setdefault('priority', 5) kwargs.setdefault('relative', None) kwargs.setdefault('attach_to', None) kwargs['with_context'] = True # This is the whole point of this plugin plugin = self.plugin reg = self.reg if len(args) == 1 and callable(args[0]): middle_f = args[0] return plugin._add_new_middleware(reg, middle_f, **kwargs) def wrapper(middle_f): nonlocal plugin, reg nonlocal args, kwargs return plugin._add_new_middleware(reg, middle_f, *args, **kwargs) return wrapper
[ "Decorate and register middleware\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The middleware function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def route(self, uri, *args, **kwargs): if len(args) == 0 and callable(uri): raise RuntimeError("Cannot use the @route decorator without " "arguments.") kwargs.setdefault('methods', frozenset({'GET'})) kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', False) kwargs.setdefault('stream', False) kwargs.setdefault('name', None) kwargs['with_context'] = True # This is the whole point of this plugin plugin = self.plugin reg = self.reg def wrapper(handler_f): nonlocal plugin, reg nonlocal uri, args, kwargs return plugin._add_new_route(reg, uri, handler_f, *args, **kwargs) return wrapper
[ "Create a plugin route from a decorated function.\n :param uri: endpoint at which the route will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def listener(self, event, *args, **kwargs): if len(args) == 1 and callable(args[0]): raise RuntimeError("Cannot use the @listener decorator without " "arguments") kwargs['with_context'] = True # This is the whole point of this plugin plugin = self.plugin reg = self.reg def wrapper(listener_f): nonlocal plugin, reg nonlocal event, args, kwargs return plugin._add_new_listener(reg, event, listener_f, *args, **kwargs) return wrapper
[ "Create a listener from a decorated function.\n :param event: Event to listen to.\n :type event: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The function to use as the listener\n :rtype: fn\n " ]
Please provide a description of the function:def websocket(self, uri, *args, **kwargs): kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', None) kwargs.setdefault('subprotocols', None) kwargs.setdefault('name', None) kwargs['with_context'] = True # This is the whole point of this plugin plugin = self.plugin reg = self.reg def wrapper(handler_f): nonlocal plugin, reg nonlocal uri, args, kwargs return plugin._add_new_ws_route(reg, uri, handler_f, *args, **kwargs) return wrapper
[ "Create a websocket route from a decorated function\n :param uri: endpoint at which the socket endpoint will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def middleware(self, *args, **kwargs): kwargs.setdefault('priority', 5) kwargs.setdefault('relative', None) kwargs.setdefault('attach_to', None) kwargs['with_context'] = True # This is the whole point of this plugin if len(args) == 1 and callable(args[0]): middle_f = args[0] return super(Contextualize, self).middleware(middle_f, **kwargs) def wrapper(middle_f): nonlocal self, args, kwargs return super(Contextualize, self).middleware( *args, **kwargs)(middle_f) return wrapper
[ "Decorate and register middleware\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The middleware function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def route(self, uri, *args, **kwargs): if len(args) == 0 and callable(uri): raise RuntimeError("Cannot use the @route decorator without " "arguments.") kwargs.setdefault('methods', frozenset({'GET'})) kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', False) kwargs.setdefault('stream', False) kwargs.setdefault('name', None) kwargs['with_context'] = True # This is the whole point of this plugin def wrapper(handler_f): nonlocal self, uri, args, kwargs return super(Contextualize, self).route( uri, *args, **kwargs)(handler_f) return wrapper
[ "Create a plugin route from a decorated function.\n :param uri: endpoint at which the route will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def listener(self, event, *args, **kwargs): if len(args) == 1 and callable(args[0]): raise RuntimeError("Cannot use the @listener decorator without " "arguments") kwargs['with_context'] = True # This is the whole point of this plugin def wrapper(listener_f): nonlocal self, event, args, kwargs return super(Contextualize, self).listener( event, *args, **kwargs)(listener_f) return wrapper
[ "Create a listener from a decorated function.\n :param event: Event to listen to.\n :type event: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the listener\n :rtype: fn\n " ]
Please provide a description of the function:def websocket(self, uri, *args, **kwargs): kwargs.setdefault('host', None) kwargs.setdefault('strict_slashes', None) kwargs.setdefault('subprotocols', None) kwargs.setdefault('name', None) kwargs['with_context'] = True # This is the whole point of this plugin def wrapper(handler_f): nonlocal self, uri, args, kwargs return super(Contextualize, self).websocket( uri, *args, **kwargs)(handler_f) return wrapper
[ "Create a websocket route from a decorated function\n :param uri: endpoint at which the socket endpoint will be accessible.\n :type uri: str\n :param args: captures all of the positional arguments passed in\n :type args: tuple(Any)\n :param kwargs: captures the keyword arguments passed in\n :type kwargs: dict(Any)\n :return: The exception function to use as the decorator\n :rtype: fn\n " ]
Please provide a description of the function:def get_peercred(sock): buf = sock.getsockopt(_PEERCRED_LEVEL, _PEERCRED_OPTION, struct.calcsize('3i')) return struct.unpack('3i', buf)
[ "Gets the (pid, uid, gid) for the client on the given *connected* socket." ]
Please provide a description of the function:def check_credentials(client): pid, uid, gid = get_peercred(client) euid = os.geteuid() client_name = "PID:%s UID:%s GID:%s" % (pid, uid, gid) if uid not in (0, euid): raise SuspiciousClient("Can't accept client with %s. It doesn't match the current EUID:%s or ROOT." % ( client_name, euid )) _LOG("Accepted connection on fd:%s from %s" % (client.fileno(), client_name)) return pid, uid, gid
[ "\n Checks credentials for given socket.\n " ]
Please provide a description of the function:def handle_connection_exec(client): class ExitExecLoop(Exception): pass def exit(): raise ExitExecLoop() client.settimeout(None) fh = os.fdopen(client.detach() if hasattr(client, 'detach') else client.fileno()) with closing(client): with closing(fh): try: payload = fh.readline() while payload: _LOG("Running: %r." % payload) eval(compile(payload, '<manhole>', 'exec'), {'exit': exit}, _MANHOLE.locals) payload = fh.readline() except ExitExecLoop: _LOG("Exiting exec loop.")
[ "\n Alternate connection handler. No output redirection.\n " ]
Please provide a description of the function:def handle_connection_repl(client): client.settimeout(None) # # disable this till we have evidence that it's needed # client.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 0) # # Note: setting SO_RCVBUF on UDS has no effect, see: http://man7.org/linux/man-pages/man7/unix.7.html backup = [] old_interval = getinterval() patches = [('r', ('stdin', '__stdin__')), ('w', ('stdout', '__stdout__'))] if _MANHOLE.redirect_stderr: patches.append(('w', ('stderr', '__stderr__'))) try: client_fd = client.fileno() for mode, names in patches: for name in names: backup.append((name, getattr(sys, name))) setattr(sys, name, _ORIGINAL_FDOPEN(client_fd, mode, 1 if PY3 else 0)) try: handle_repl(_MANHOLE.locals) except Exception as exc: _LOG("REPL failed with %r." % exc) _LOG("DONE.") finally: try: # Change the switch/check interval to something ridiculous. We don't want to have other thread try # to write to the redirected sys.__std*/sys.std* - it would fail horribly. setinterval(2147483647) try: client.close() # close before it's too late. it may already be dead except IOError: pass junk = [] # keep the old file objects alive for a bit for name, fh in backup: junk.append(getattr(sys, name)) setattr(sys, name, fh) del backup for fh in junk: try: if hasattr(fh, 'detach'): fh.detach() else: fh.close() except IOError: pass del fh del junk finally: setinterval(old_interval) _LOG("Cleaned up.")
[ "\n Handles connection.\n " ]
Please provide a description of the function:def handle_repl(locals): dump_stacktraces() namespace = { 'dump_stacktraces': dump_stacktraces, 'sys': sys, 'os': os, 'socket': socket, 'traceback': traceback, } if locals: namespace.update(locals) ManholeConsole(namespace).interact()
[ "\n Dumps stacktraces and runs an interactive prompt (REPL).\n " ]
Please provide a description of the function:def install(verbose=True, verbose_destination=sys.__stderr__.fileno() if hasattr(sys.__stderr__, 'fileno') else sys.__stderr__, strict=True, **kwargs): # pylint: disable=W0603 global _MANHOLE with _LOCK: if _MANHOLE is None: _MANHOLE = Manhole() else: if strict: raise AlreadyInstalled("Manhole already installed!") else: _LOG.release() _MANHOLE.release() # Threads might be started here _LOG.configure(verbose, verbose_destination) _MANHOLE.configure(**kwargs) # Threads might be started here return _MANHOLE
[ "\n Installs the manhole.\n\n Args:\n verbose (bool): Set it to ``False`` to squelch the logging.\n verbose_destination (file descriptor or handle): Destination for verbose messages. Default is unbuffered stderr\n (stderr ``2`` file descriptor).\n patch_fork (bool): Set it to ``False`` if you don't want your ``os.fork`` and ``os.forkpy`` monkeypatched\n activate_on (int or signal name): set to ``\"USR1\"``, ``\"USR2\"`` or some other signal name, or a number if you\n want the Manhole thread to start when this signal is sent. This is desireable in case you don't want the\n thread active all the time.\n oneshot_on (int or signal name): Set to ``\"USR1\"``, ``\"USR2\"`` or some other signal name, or a number if you\n want the Manhole to listen for connection in the signal handler. This is desireable in case you don't want\n threads at all.\n thread (bool): Start the always-on ManholeThread. Default: ``True``. Automatically switched to ``False`` if\n ``oneshort_on`` or ``activate_on`` are used.\n sigmask (list of ints or signal names): Will set the signal mask to the given list (using\n ``signalfd.sigprocmask``). No action is done if ``signalfd`` is not importable.\n **NOTE**: This is done so that the Manhole thread doesn't *steal* any signals; Normally that is fine because\n Python will force all the signal handling to be run in the main thread but signalfd doesn't.\n socket_path (str): Use a specific path for the unix domain socket (instead of ``/tmp/manhole-<pid>``). This\n disables ``patch_fork`` as children cannot reuse the same path.\n reinstall_delay (float): Delay the unix domain socket creation *reinstall_delay* seconds. This\n alleviates cleanup failures when using fork+exec patterns.\n locals (dict): Names to add to manhole interactive shell locals.\n daemon_connection (bool): The connection thread is daemonic (dies on app exit). Default: ``False``.\n redirect_stderr (bool): Redirect output from stderr to manhole console. Default: ``True``.\n connection_handler (function): Connection handler to use. Use ``\"exec\"`` for simple implementation without\n output redirection or your own function. (warning: this is for advanced users). Default: ``\"repl\"``.\n " ]
Please provide a description of the function:def dump_stacktraces(): lines = [] for thread_id, stack in sys._current_frames().items(): # pylint: disable=W0212 lines.append("\n######### ProcessID=%s, ThreadID=%s #########" % ( os.getpid(), thread_id )) for filename, lineno, name, line in traceback.extract_stack(stack): lines.append('File: "%s", line %d, in %s' % (filename, lineno, name)) if line: lines.append(" %s" % (line.strip())) lines.append("#############################################\n\n") print('\n'.join(lines), file=sys.stderr if _MANHOLE.redirect_stderr else sys.stdout)
[ "\n Dumps thread ids and tracebacks to stdout.\n " ]
Please provide a description of the function:def clone(self, **kwargs): return ManholeThread( self.get_socket, self.sigmask, self.start_timeout, connection_handler=self.connection_handler, daemon_connection=self.daemon_connection, **kwargs )
[ "\n Make a fresh thread with the same options. This is usually used on dead threads.\n " ]
Please provide a description of the function:def run(self): self.serious.set() if signalfd and self.sigmask: signalfd.sigprocmask(signalfd.SIG_BLOCK, self.sigmask) pthread_setname_np(self.ident, self.psname) if self.bind_delay: _LOG("Delaying UDS binding %s seconds ..." % self.bind_delay) _ORIGINAL_SLEEP(self.bind_delay) sock = self.get_socket() while self.should_run: _LOG("Waiting for new connection (in pid:%s) ..." % os.getpid()) try: client = ManholeConnectionThread(sock.accept()[0], self.connection_handler, self.daemon_connection) client.start() client.join() except socket.timeout: continue except (InterruptedError, socket.error) as e: if e.errno != errno.EINTR: raise continue finally: client = None
[ "\n Runs the manhole loop. Only accepts one connection at a time because:\n\n * This thread is a daemon thread (exits when main thread exists).\n * The connection need exclusive access to stdin, stderr and stdout so it can redirect inputs and outputs.\n " ]
Please provide a description of the function:def reinstall(self): with _LOCK: if not (self.thread.is_alive() and self.thread in _ORIGINAL__ACTIVE): self.thread = self.thread.clone(bind_delay=self.reinstall_delay) if self.should_restart: self.thread.start()
[ "\n Reinstalls the manhole. Checks if the thread is running. If not, it starts it again.\n " ]
Please provide a description of the function:def patched_fork(self): pid = self.original_os_fork() if not pid: _LOG('Fork detected. Reinstalling Manhole.') self.reinstall() return pid
[ "Fork a child process." ]
Please provide a description of the function:def patched_forkpty(self): pid, master_fd = self.original_os_forkpty() if not pid: _LOG('Fork detected. Reinstalling Manhole.') self.reinstall() return pid, master_fd
[ "Fork a new process with a new pseudo-terminal as controlling tty." ]
Please provide a description of the function:def update( # noqa: C901 self, alert_condition_nrql_id, policy_id, name=None, threshold_type=None, query=None, since_value=None, terms=None, expected_groups=None, value_function=None, runbook_url=None, ignore_overlap=None, enabled=True): conditions_nrql_dict = self.list(policy_id) target_condition_nrql = None for condition in conditions_nrql_dict['nrql_conditions']: if int(condition['id']) == alert_condition_nrql_id: target_condition_nrql = condition break if target_condition_nrql is None: raise NoEntityException( 'Target alert condition nrql is not included in that policy.' 'policy_id: {}, alert_condition_nrql_id {}'.format( policy_id, alert_condition_nrql_id ) ) data = { 'nrql_condition': { 'type': threshold_type or target_condition_nrql['type'], 'enabled': target_condition_nrql['enabled'], 'name': name or target_condition_nrql['name'], 'terms': terms or target_condition_nrql['terms'], 'nrql': { 'query': query or target_condition_nrql['nrql']['query'], 'since_value': since_value or target_condition_nrql['nrql']['since_value'], } } } if enabled is not None: data['nrql_condition']['enabled'] = str(enabled).lower() if runbook_url is not None: data['nrql_condition']['runbook_url'] = runbook_url elif 'runbook_url' in target_condition_nrql: data['nrql_condition']['runbook_url'] = target_condition_nrql['runbook_url'] if expected_groups is not None: data['nrql_condition']['expected_groups'] = expected_groups elif 'expected_groups' in target_condition_nrql: data['nrql_condition']['expected_groups'] = target_condition_nrql['expected_groups'] if ignore_overlap is not None: data['nrql_condition']['ignore_overlap'] = ignore_overlap elif 'ignore_overlap' in target_condition_nrql: data['nrql_condition']['ignore_overlap'] = target_condition_nrql['ignore_overlap'] if value_function is not None: data['nrql_condition']['value_function'] = value_function elif 'value_function' in target_condition_nrql: data['nrql_condition']['value_function'] = target_condition_nrql['value_function'] if data['nrql_condition']['type'] == 'static': if 'value_function' not in data['nrql_condition']: raise ConfigurationException( 'Alert is set as static but no value_function config specified' ) data['nrql_condition'].pop('expected_groups', None) data['nrql_condition'].pop('ignore_overlap', None) elif data['nrql_condition']['type'] == 'outlier': if 'expected_groups' not in data['nrql_condition']: raise ConfigurationException( 'Alert is set as outlier but expected_groups config is not specified' ) if 'ignore_overlap' not in data['nrql_condition']: raise ConfigurationException( 'Alert is set as outlier but ignore_overlap config is not specified' ) data['nrql_condition'].pop('value_function', None) return self._put( url='{0}alerts_nrql_conditions/{1}.json'.format(self.URL, alert_condition_nrql_id), headers=self.headers, data=data )
[ "\n Updates any of the optional parameters of the alert condition nrql\n\n :type alert_condition_nrql_id: int\n :param alert_condition_nrql_id: Alerts condition NRQL id to update\n\n :type policy_id: int\n :param policy_id: Alert policy id where target alert condition belongs to\n\n :type condition_scope: str\n :param condition_scope: The scope of the condition, can be instance or application\n\n :type name: str\n :param name: The name of the alert\n\n :type threshold_type: str\n :param threshold_type: The tthreshold_typeype of the condition, can be static or outlier\n\n :type query: str\n :param query: nrql query for the alerts\n\n :type since_value: str\n :param since_value: since value for the alert\n\n :type terms: list[hash]\n :param terms: list of hashes containing threshold config for the alert\n\n :type expected_groups: int\n :param expected_groups: expected groups setting for outlier alerts\n\n :type value_function: str\n :param type: value function for static alerts\n\n :type runbook_url: str\n :param runbook_url: The url of the runbook\n\n :type ignore_overlap: bool\n :param ignore_overlap: Whether to ignore overlaps for outlier alerts\n\n :type enabled: bool\n :param enabled: Whether to enable that alert condition\n\n :rtype: dict\n :return: The JSON response of the API\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NoEntityException>`\n if target alert condition is not included in target policy\n\n :raises: This will raise a\n :class:`ConfigurationException<newrelic_api.exceptions.ConfigurationException>`\n if metric is set as user_defined but user_defined config is not passed\n ::\n {\n \"nrql_condition\": {\n \"name\": \"string\",\n \"runbook_url\": \"string\",\n \"enabled\": \"boolean\",\n \"expected_groups\": \"integer\",\n \"ignore_overlap\": \"boolean\",\n \"value_function\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"nrql\": {\n \"query\": \"string\",\n \"since_value\": \"string\"\n }\n }\n }\n " ]
Please provide a description of the function:def create( self, policy_id, name, threshold_type, query, since_value, terms, expected_groups=None, value_function=None, runbook_url=None, ignore_overlap=None, enabled=True): data = { 'nrql_condition': { 'type': threshold_type, 'name': name, 'enabled': enabled, 'terms': terms, 'nrql': { 'query': query, 'since_value': since_value } } } if runbook_url is not None: data['nrql_condition']['runbook_url'] = runbook_url if expected_groups is not None: data['nrql_condition']['expected_groups'] = expected_groups if ignore_overlap is not None: data['nrql_condition']['ignore_overlap'] = ignore_overlap if value_function is not None: data['nrql_condition']['value_function'] = value_function if data['nrql_condition']['type'] == 'static': if 'value_function' not in data['nrql_condition']: raise ConfigurationException( 'Alert is set as static but no value_function config specified' ) data['nrql_condition'].pop('expected_groups', None) data['nrql_condition'].pop('ignore_overlap', None) elif data['nrql_condition']['type'] == 'outlier': if 'expected_groups' not in data['nrql_condition']: raise ConfigurationException( 'Alert is set as outlier but expected_groups config is not specified' ) if 'ignore_overlap' not in data['nrql_condition']: raise ConfigurationException( 'Alert is set as outlier but ignore_overlap config is not specified' ) data['nrql_condition'].pop('value_function', None) return self._post( url='{0}alerts_nrql_conditions/policies/{1}.json'.format(self.URL, policy_id), headers=self.headers, data=data )
[ "\n Creates an alert condition nrql\n\n :type policy_id: int\n :param policy_id: Alert policy id where target alert condition nrql belongs to\n\n :type name: str\n :param name: The name of the alert\n\n :type threshold_type: str\n :param type: The threshold_type of the condition, can be static or outlier\n\n :type query: str\n :param query: nrql query for the alerts\n\n :type since_value: str\n :param since_value: since value for the alert\n\n :type terms: list[hash]\n :param terms: list of hashes containing threshold config for the alert\n\n :type expected_groups: int\n :param expected_groups: expected groups setting for outlier alerts\n\n :type value_function: str\n :param type: value function for static alerts\n\n :type runbook_url: str\n :param runbook_url: The url of the runbook\n\n :type ignore_overlap: bool\n :param ignore_overlap: Whether to ignore overlaps for outlier alerts\n\n :type enabled: bool\n :param enabled: Whether to enable that alert condition\n\n :rtype: dict\n :return: The JSON response of the API\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NoEntityException>`\n if target alert condition is not included in target policy\n\n :raises: This will raise a\n :class:`ConfigurationException<newrelic_api.exceptions.ConfigurationException>`\n if metric is set as user_defined but user_defined config is not passed\n ::\n {\n \"nrql_condition\": {\n \"name\": \"string\",\n \"runbook_url\": \"string\",\n \"enabled\": \"boolean\",\n \"expected_groups\": \"integer\",\n \"ignore_overlap\": \"boolean\",\n \"value_function\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"nrql\": {\n \"query\": \"string\",\n \"since_value\": \"string\"\n }\n }\n }\n " ]
Please provide a description of the function:def delete(self, alert_condition_nrql_id): return self._delete( url='{0}alerts_nrql_conditions/{1}.json'.format(self.URL, alert_condition_nrql_id), headers=self.headers )
[ "\n This API endpoint allows you to delete an alert condition nrql\n\n :type alert_condition_nrql_id: integer\n :param alert_condition_nrql_id: Alert Condition ID\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n {\n \"nrql_condition\": {\n \"type\": \"string\",\n \"id\": \"integer\",\n \"name\": \"string\",\n \"runbook_url\": \"string\",\n \"enabled\": \"boolean\",\n \"expected_groups\": \"integer\",\n \"ignore_overlap\": \"boolean\",\n \"value_function\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"nrql\": {\n \"query\": \"string\",\n \"since_value\": \"string\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def list(self, filter_name=None, filter_ids=None, filter_labels=None, page=None): label_param = '' if filter_labels: label_param = ';'.join(['{}:{}'.format(label, value) for label, value in filter_labels.items()]) filters = [ 'filter[name]={0}'.format(filter_name) if filter_name else None, 'filter[ids]={0}'.format(','.join([str(app_id) for app_id in filter_ids])) if filter_ids else None, 'filter[labels]={0}'.format(label_param) if filter_labels else None, 'page={0}'.format(page) if page else None ] return self._get( url='{0}servers.json'.format(self.URL), headers=self.headers, params=self.build_param_string(filters) )
[ "\n This API endpoint returns a paginated list of the Servers\n associated with your New Relic account. Servers can be filtered\n by their name or by a list of server IDs.\n\n :type filter_name: str\n :param filter_name: Filter by server name\n\n :type filter_ids: list of ints\n :param filter_ids: Filter by server ids\n\n :type filter_labels: dict of label type: value pairs\n :param filter_labels: Filter by server labels\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"servers\": [\n {\n \"id\": \"integer\",\n \"account_id\": \"integer\",\n \"name\": \"string\",\n \"host\": \"string\",\n \"reporting\": \"boolean\",\n \"last_reported_at\": \"time\",\n \"summary\": {\n \"cpu\": \"float\",\n \"cpu_stolen\": \"float\",\n \"disk_io\": \"float\",\n \"memory\": \"float\",\n \"memory_used\": \"integer\",\n \"memory_total\": \"integer\",\n \"fullest_disk\": \"float\",\n \"fullest_disk_free\": \"integer\"\n }\n }\n ],\n \"pages\": {\n \"last\": {\n \"url\": \"https://api.newrelic.com/v2/servers.json?page=2\",\n \"rel\": \"last\"\n },\n \"next\": {\n \"url\": \"https://api.newrelic.com/v2/servers.json?page=2\",\n \"rel\": \"next\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def update(self, id, name=None): nr_data = self.show(id)['server'] data = { 'server': { 'name': name or nr_data['name'], } } return self._put( url='{0}servers/{1}.json'.format(self.URL, id), headers=self.headers, data=data )
[ "\n Updates any of the optional parameters of the server\n\n :type id: int\n :param id: Server ID\n\n :type name: str\n :param name: The name of the server\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"server\": {\n \"id\": \"integer\",\n \"account_id\": \"integer\",\n \"name\": \"string\",\n \"host\": \"string\",\n \"reporting\": \"boolean\",\n \"last_reported_at\": \"time\",\n \"summary\": {\n \"cpu\": \"float\",\n \"cpu_stolen\": \"float\",\n \"disk_io\": \"float\",\n \"memory\": \"float\",\n \"memory_used\": \"integer\",\n \"memory_total\": \"integer\",\n \"fullest_disk\": \"float\",\n \"fullest_disk_free\": \"integer\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def metric_names(self, id, name=None, page=None): params = [ 'name={0}'.format(name) if name else None, 'page={0}'.format(page) if page else None ] return self._get( url='{0}servers/{1}/metrics.json'.format(self.URL, id), headers=self.headers, params=self.build_param_string(params) )
[ "\n Return a list of known metrics and their value names for the given resource.\n\n :type id: int\n :param id: Server ID\n\n :type name: str\n :param name: Filter metrics by name\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"metrics\": [\n {\n \"name\": \"string\",\n \"values\": [\n \"string\"\n ]\n }\n ],\n \"pages\": {\n \"last\": {\n \"url\": \"https://api.newrelic.com/v2/servers/{server_id}/metrics.json?page=2\",\n \"rel\": \"last\"\n },\n \"next\": {\n \"url\": \"https://api.newrelic.com/v2/servers/{server_id}/metrics.json?page=2\",\n \"rel\": \"next\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def create(self, name, incident_preference): data = { "policy": { "name": name, "incident_preference": incident_preference } } return self._post( url='{0}alerts_policies.json'.format(self.URL), headers=self.headers, data=data )
[ "\n This API endpoint allows you to create an alert policy\n\n :type name: str\n :param name: The name of the policy\n\n :type incident_preference: str\n :param incident_preference: Can be PER_POLICY, PER_CONDITION or\n PER_CONDITION_AND_TARGET\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"policy\": {\n \"created_at\": \"time\",\n \"id\": \"integer\",\n \"incident_preference\": \"string\",\n \"name\": \"string\",\n \"updated_at\": \"time\"\n }\n }\n\n " ]
Please provide a description of the function:def update(self, id, name, incident_preference): data = { "policy": { "name": name, "incident_preference": incident_preference } } return self._put( url='{0}alerts_policies/{1}.json'.format(self.URL, id), headers=self.headers, data=data )
[ "\n This API endpoint allows you to update an alert policy\n\n :type id: integer\n :param id: The id of the policy\n\n :type name: str\n :param name: The name of the policy\n\n :type incident_preference: str\n :param incident_preference: Can be PER_POLICY, PER_CONDITION or\n PER_CONDITION_AND_TARGET\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"policy\": {\n \"created_at\": \"time\",\n \"id\": \"integer\",\n \"incident_preference\": \"string\",\n \"name\": \"string\",\n \"updated_at\": \"time\"\n }\n }\n\n " ]
Please provide a description of the function:def delete(self, id): return self._delete( url='{0}alerts_policies/{1}.json'.format(self.URL, id), headers=self.headers )
[ "\n This API endpoint allows you to delete an alert policy\n\n :type id: integer\n :param id: The id of the policy\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"policy\": {\n \"created_at\": \"time\",\n \"id\": \"integer\",\n \"incident_preference\": \"string\",\n \"name\": \"string\",\n \"updated_at\": \"time\"\n }\n }\n\n " ]
Please provide a description of the function:def associate_with_notification_channel(self, id, channel_id): return self._put( url='{0}alerts_policy_channels.json?policy_id={1}&channel_ids={2}'.format( self.URL, id, channel_id ), headers=self.headers )
[ "\n This API endpoint allows you to associate an alert policy with an\n notification channel\n\n :type id: integer\n :param id: The id of the policy\n\n :type channel_id: integer\n :param channel_id: The id of the notification channel\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"policy\": {\n \"channel_ids\": \"list\",\n \"id\": \"integer\"\n }\n }\n\n " ]
Please provide a description of the function:def dissociate_from_notification_channel(self, id, channel_id): return self._delete( url='{0}alerts_policy_channels.json?policy_id={1}&channel_id={2}'.format( self.URL, id, channel_id ), headers=self.headers )
[ "\n This API endpoint allows you to dissociate an alert policy from an\n notification channel\n\n :type id: integer\n :param id: The id of the policy\n\n :type channel_id: integer\n :param channel_id: The id of the notification channel\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"channel\":{\n \"configuration\": \"hash\",\n \"type\": \"string\",\n \"id\": \"integer\",\n \"links\":{\n \"policy_ids\": \"list\"\n },\n \"name\": \"string\"\n }\n }\n\n " ]
Please provide a description of the function:def list(self, policy_id, page=None): filters = [ 'policy_id={0}'.format(policy_id), 'page={0}'.format(page) if page else None ] return self._get( url='{0}alerts_conditions.json'.format(self.URL), headers=self.headers, params=self.build_param_string(filters) )
[ "\n This API endpoint returns a paginated list of alert conditions associated with the\n given policy_id.\n\n This API endpoint returns a paginated list of the alert conditions\n associated with your New Relic account. Alert conditions can be filtered\n by their name, list of IDs, type (application, key_transaction, or\n server) or whether or not policies are archived (defaults to filtering\n archived policies).\n\n :type policy_id: int\n :param policy_id: Alert policy id\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"conditions\": [\n {\n \"id\": \"integer\",\n \"type\": \"string\",\n \"condition_scope\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"entities\": [\n \"integer\"\n ],\n \"metric\": \"string\",\n \"runbook_url\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"user_defined\": {\n \"metric\": \"string\",\n \"value_function\": \"string\"\n }\n }\n ]\n }\n\n " ]
Please provide a description of the function:def update( self, alert_condition_id, policy_id, type=None, condition_scope=None, name=None, entities=None, metric=None, runbook_url=None, terms=None, user_defined=None, enabled=None): conditions_dict = self.list(policy_id) target_condition = None for condition in conditions_dict['conditions']: if int(condition['id']) == alert_condition_id: target_condition = condition break if target_condition is None: raise NoEntityException( 'Target alert condition is not included in that policy.' 'policy_id: {}, alert_condition_id {}'.format(policy_id, alert_condition_id) ) data = { 'condition': { 'type': type or target_condition['type'], 'name': name or target_condition['name'], 'entities': entities or target_condition['entities'], 'condition_scope': condition_scope or target_condition['condition_scope'], 'terms': terms or target_condition['terms'], 'metric': metric or target_condition['metric'], 'runbook_url': runbook_url or target_condition['runbook_url'], } } if enabled is not None: data['condition']['enabled'] = str(enabled).lower() if data['condition']['metric'] == 'user_defined': if user_defined: data['condition']['user_defined'] = user_defined elif 'user_defined' in target_condition: data['condition']['user_defined'] = target_condition['user_defined'] else: raise ConfigurationException( 'Metric is set as user_defined but no user_defined config specified' ) return self._put( url='{0}alerts_conditions/{1}.json'.format(self.URL, alert_condition_id), headers=self.headers, data=data )
[ "\n Updates any of the optional parameters of the alert condition\n\n :type alert_condition_id: int\n :param alert_condition_id: Alerts condition id to update\n\n :type policy_id: int\n :param policy_id: Alert policy id where target alert condition belongs to\n\n :type type: str\n :param type: The type of the condition, can be apm_app_metric,\n apm_kt_metric, servers_metric, browser_metric, mobile_metric\n\n :type condition_scope: str\n :param condition_scope: The scope of the condition, can be instance or application\n\n :type name: str\n :param name: The name of the server\n\n :type entities: list[str]\n :param name: entity ids to which the alert condition is applied\n\n :type : str\n :param metric: The target metric\n\n :type : str\n :param runbook_url: The url of the runbook\n\n :type terms: list[hash]\n :param terms: list of hashes containing threshold config for the alert\n\n :type user_defined: hash\n :param user_defined: hash containing threshold user_defined for the alert\n required if metric is set to user_defined\n\n :type enabled: bool\n :param enabled: Whether to enable that alert condition\n\n :rtype: dict\n :return: The JSON response of the API\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NoEntityException>`\n if target alert condition is not included in target policy\n\n :raises: This will raise a\n :class:`ConfigurationException<newrelic_api.exceptions.ConfigurationException>`\n if metric is set as user_defined but user_defined config is not passed\n\n ::\n\n {\n \"condition\": {\n \"id\": \"integer\",\n \"type\": \"string\",\n \"condition_scope\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"entities\": [\n \"integer\"\n ],\n \"metric\": \"string\",\n \"runbook_url\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"user_defined\": {\n \"metric\": \"string\",\n \"value_function\": \"string\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def create( self, policy_id, type, condition_scope, name, entities, metric, terms, runbook_url=None, user_defined=None, enabled=True): data = { 'condition': { 'type': type, 'name': name, 'enabled': enabled, 'entities': entities, 'condition_scope': condition_scope, 'terms': terms, 'metric': metric, 'runbook_url': runbook_url, } } if metric == 'user_defined': if user_defined: data['condition']['user_defined'] = user_defined else: raise ConfigurationException( 'Metric is set as user_defined but no user_defined config specified' ) return self._post( url='{0}alerts_conditions/policies/{1}.json'.format(self.URL, policy_id), headers=self.headers, data=data )
[ "\n Creates an alert condition\n\n :type policy_id: int\n :param policy_id: Alert policy id where target alert condition belongs to\n\n :type type: str\n :param type: The type of the condition, can be apm_app_metric,\n apm_kt_metric, servers_metric, browser_metric, mobile_metric\n\n :type condition_scope: str\n :param condition_scope: The scope of the condition, can be instance or application\n\n :type name: str\n :param name: The name of the server\n\n :type entities: list[str]\n :param name: entity ids to which the alert condition is applied\n\n :type : str\n :param metric: The target metric\n\n :type : str\n :param runbook_url: The url of the runbook\n\n :type terms: list[hash]\n :param terms: list of hashes containing threshold config for the alert\n\n :type user_defined: hash\n :param user_defined: hash containing threshold user_defined for the alert\n required if metric is set to user_defined\n\n :type enabled: bool\n :param enabled: Whether to enable that alert condition\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"condition\": {\n \"id\": \"integer\",\n \"type\": \"string\",\n \"condition_scope\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"entities\": [\n \"integer\"\n ],\n \"metric\": \"string\",\n \"runbook_url\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"user_defined\": {\n \"metric\": \"string\",\n \"value_function\": \"string\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def delete(self, alert_condition_id): return self._delete( url='{0}alerts_conditions/{1}.json'.format(self.URL, alert_condition_id), headers=self.headers )
[ "\n This API endpoint allows you to delete an alert condition\n\n :type alert_condition_id: integer\n :param alert_condition_id: Alert Condition ID\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"condition\": {\n \"id\": \"integer\",\n \"type\": \"string\",\n \"condition_scope\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"entities\": [\n \"integer\"\n ],\n \"metric\": \"string\",\n \"runbook_url\": \"string\",\n \"terms\": [\n {\n \"duration\": \"string\",\n \"operator\": \"string\",\n \"priority\": \"string\",\n \"threshold\": \"string\",\n \"time_function\": \"string\"\n }\n ],\n \"user_defined\": {\n \"metric\": \"string\",\n \"value_function\": \"string\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def show(self, id): return self._get( url='{root}key_transactions/{id}.json'.format( root=self.URL, id=id ), headers=self.headers, )
[ "\n This API endpoint returns a single Key transaction, identified its ID.\n\n :type id: int\n :param id: Key transaction ID\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"key_transaction\": {\n \"id\": \"integer\",\n \"name\": \"string\",\n \"transaction_name\": \"string\",\n \"application_summary\": {\n \"response_time\": \"float\",\n \"throughput\": \"float\",\n \"error_rate\": \"float\",\n \"apdex_target\": \"float\",\n \"apdex_score\": \"float\"\n },\n \"end_user_summary\": {\n \"response_time\": \"float\",\n \"throughput\": \"float\",\n \"apdex_target\": \"float\",\n \"apdex_score\": \"float\"\n },\n \"links\": {\n \"application\": \"integer\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def list(self, policy_id, limit=None, offset=None):
    # The fallback branches previously produced bare '50' and '0' tokens,
    # which are not valid query parameters; keep the key in both cases.
    filters = [
        'policy_id={0}'.format(policy_id),
        'limit={0}'.format(limit) if limit else 'limit=50',
        'offset={0}'.format(offset) if offset else 'offset=0'
    ]
    return self._get(
        url='{0}alerts/conditions'.format(self.URL),
        headers=self.headers,
        params=self.build_param_string(filters)
    )
[ "\n This API endpoint returns a paginated list of alert conditions for infrastucture\n metrics associated with the given policy_id.\n\n :type policy_id: int\n :param policy_id: Alert policy id\n\n :type limit: string\n :param limit: Max amount of results to return\n\n :type offset: string\n :param offset: Starting record to return\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"data\": [\n {\n \"id\": \"integer\",\n \"policy_id\": \"integer\",\n \"type\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"where_clause\": \"string\",\n \"comparison\": \"string\",\n \"filter\": \"hash\",\n \"critical_threshold\": \"hash\",\n \"process_where_clause\": \"string\",\n \"created_at_epoch_millis\": \"time\",\n \"updated_at_epoch_millis\": \"time\"\n }\n ],\n \"meta\": {\n \"limit\": \"integer\",\n \"offset\": \"integer\",\n \"total\": \"integer\"\n }\n }\n\n " ]
Please provide a description of the function:def show(self, alert_condition_infra_id): return self._get( url='{0}alerts/conditions/{1}'.format(self.URL, alert_condition_infra_id), headers=self.headers, )
[ "\n This API endpoint returns an alert condition for infrastucture, identified by its\n ID.\n\n :type alert_condition_infra_id: int\n :param alert_condition_infra_id: Alert Condition Infra ID\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"data\": {\n \"id\": \"integer\",\n \"policy_id\": \"integer\",\n \"type\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"where_clause\": \"string\",\n \"comparison\": \"string\",\n \"filter\": \"hash\",\n \"critical_threshold\": \"hash\",\n \"event_type\": \"string\",\n \"process_where_clause\": \"string\",\n \"created_at_epoch_millis\": \"time\",\n \"updated_at_epoch_millis\": \"time\"\n }\n }\n\n " ]
Please provide a description of the function:def create(self, policy_id, name, condition_type, alert_condition_configuration, enabled=True): data = { "data": alert_condition_configuration } data['data']['type'] = condition_type data['data']['policy_id'] = policy_id data['data']['name'] = name data['data']['enabled'] = enabled return self._post( url='{0}alerts/conditions'.format(self.URL), headers=self.headers, data=data )
[ "\n This API endpoint allows you to create an alert condition for infrastucture\n\n :type policy_id: int\n :param policy_id: Alert policy id\n\n :type name: str\n :param name: The name of the alert condition\n\n :type condition_type: str\n :param condition_type: The type of the alert condition can be\n infra_process_running, infra_metric or infra_host_not_reporting\n\n :type alert_condition_configuration: hash\n :param alert_condition_configuration: hash containing config for the alert\n\n :type enabled: bool\n :param enabled: Whether to enable that alert condition\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"data\": {\n \"id\": \"integer\",\n \"policy_id\": \"integer\",\n \"type\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"where_clause\": \"string\",\n \"comparison\": \"string\",\n \"filter\": \"hash\",\n \"critical_threshold\": \"hash\",\n \"event_type\": \"string\",\n \"process_where_clause\": \"string\",\n \"created_at_epoch_millis\": \"time\",\n \"updated_at_epoch_millis\": \"time\"\n }\n }\n\n " ]
Please provide a description of the function:def update(self, alert_condition_infra_id, policy_id, name, condition_type, alert_condition_configuration, enabled=True): data = { "data": alert_condition_configuration } data['data']['type'] = condition_type data['data']['policy_id'] = policy_id data['data']['name'] = name data['data']['enabled'] = enabled return self._put( url='{0}alerts/conditions/{1}'.format(self.URL, alert_condition_infra_id), headers=self.headers, data=data )
[ "\n This API endpoint allows you to update an alert condition for infrastucture\n\n :type alert_condition_infra_id: int\n :param alert_condition_infra_id: Alert Condition Infra ID\n\n :type policy_id: int\n :param policy_id: Alert policy id\n\n :type name: str\n :param name: The name of the alert condition\n\n :type condition_type: str\n :param condition_type: The type of the alert condition can be\n infra_process_running, infra_metric or infra_host_not_reporting\n\n :type alert_condition_configuration: hash\n :param alert_condition_configuration: hash containing config for the alert\n\n :type enabled: bool\n :param enabled: Whether to enable that alert condition\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"data\": {\n \"id\": \"integer\",\n \"policy_id\": \"integer\",\n \"type\": \"string\",\n \"name\": \"string\",\n \"enabled\": \"boolean\",\n \"where_clause\": \"string\",\n \"comparison\": \"string\",\n \"filter\": \"hash\",\n \"critical_threshold\": \"hash\",\n \"event_type\": \"string\",\n \"process_where_clause\": \"string\",\n \"created_at_epoch_millis\": \"time\",\n \"updated_at_epoch_millis\": \"time\"\n }\n }\n\n " ]
Please provide a description of the function:def delete(self, alert_condition_infra_id): return self._delete( url='{0}alerts/conditions/{1}'.format(self.URL, alert_condition_infra_id), headers=self.headers )
[ "\n This API endpoint allows you to delete an alert condition for infrastucture\n\n :type alert_condition_infra_id: integer\n :param alert_condition_infra_id: Alert Condition Infra ID\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {}\n\n " ]
Please provide a description of the function:def create(self, name, category, applications=None, servers=None): data = { "label": { "category": category, "name": name, "links": { "applications": applications or [], "servers": servers or [] } } } return self._put( url='{0}labels.json'.format(self.URL), headers=self.headers, data=data )
[ "\n This API endpoint will create a new label with the provided name and\n category\n\n :type name: str\n :param name: The name of the label\n\n :type category: str\n :param category: The Category\n\n :type applications: list of int\n :param applications: An optional list of application ID's\n\n :type servers: list of int\n :param servers: An optional list of server ID's\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"label\": {\n \"key\": \"string\",\n \"category\": \"string\",\n \"name\": \"string\",\n \"links\": {\n \"applications\": [\n \"integer\"\n ],\n \"servers\": [\n \"integer\"\n ]\n }\n }\n }\n\n " ]
Please provide a description of the function:def delete(self, key): return self._delete( url='{url}labels/labels/{key}.json'.format( url=self.URL, key=key), headers=self.headers, )
[ "\n When applications are provided, this endpoint will remove those\n applications from the label.\n\n When no applications are provided, this endpoint will remove the label.\n\n :type key: str\n :param key: Label key. Example: 'Language:Java'\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"label\": {\n \"key\": \"string\",\n \"category\": \"string\",\n \"name\": \"string\",\n \"links\": {\n \"applications\": [\n \"integer\"\n ],\n \"servers\": [\n \"integer\"\n ]\n }\n }\n }\n\n " ]
Please provide a description of the function:def list(self, filter_guid=None, filter_ids=None, detailed=None, page=None): filters = [ 'filter[guid]={0}'.format(filter_guid) if filter_guid else None, 'filter[ids]={0}'.format(','.join([str(app_id) for app_id in filter_ids])) if filter_ids else None, 'detailed={0}'.format(detailed) if detailed is not None else None, 'page={0}'.format(page) if page else None ] return self._get( url='{0}plugins.json'.format(self.URL), headers=self.headers, params=self.build_param_string(filters) )
[ "\n This API endpoint returns a paginated list of the plugins associated\n with your New Relic account.\n\n Plugins can be filtered by their name or by a list of IDs.\n\n :type filter_guid: str\n :param filter_guid: Filter by name\n\n :type filter_ids: list of ints\n :param filter_ids: Filter by user ids\n\n :type detailed: bool\n :param detailed: Include all data about a plugin\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"plugins\": [\n {\n \"id\": \"integer\",\n \"name\": \"string\",\n \"guid\": \"string\",\n \"publisher\": \"string\",\n \"details\": {\n \"description\": \"integer\",\n \"is_public\": \"string\",\n \"created_at\": \"time\",\n \"updated_at\": \"time\",\n \"last_published_at\": \"time\",\n \"has_unpublished_changes\": \"boolean\",\n \"branding_image_url\": \"string\",\n \"upgraded_at\": \"time\",\n \"short_name\": \"string\",\n \"publisher_about_url\": \"string\",\n \"publisher_support_url\": \"string\",\n \"download_url\": \"string\",\n \"first_edited_at\": \"time\",\n \"last_edited_at\": \"time\",\n \"first_published_at\": \"time\",\n \"published_version\": \"string\"\n },\n \"summary_metrics\": [\n {\n \"id\": \"integer\",\n \"name\": \"string\",\n \"metric\": \"string\",\n \"value_function\": \"string\",\n \"thresholds\": {\n \"caution\": \"float\",\n \"critical\": \"float\"\n },\n \"values\": {\n \"raw\": \"float\",\n \"formatted\": \"string\"\n }\n }\n ]\n }\n ],\n \"pages\": {\n \"last\": {\n \"url\": \"https://api.newrelic.com/v2/plugins.json?page=2\",\n \"rel\": \"last\"\n },\n \"next\": {\n \"url\": \"https://api.newrelic.com/v2/plugins.json?page=2\",\n \"rel\": \"next\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def show(self, id, detailed=None): filters = [ 'detailed={0}'.format(detailed) if detailed is not None else None, ] return self._get( url='{root}plugins/{id}.json'.format( root=self.URL, id=id ), headers=self.headers, params=self.build_param_string(filters) or None )
[ "\n This API endpoint returns a single Key transaction, identified its ID.\n\n :type id: int\n :param id: Key transaction ID\n\n :type detailed: bool\n :param detailed:\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"plugin\": {\n \"id\": \"integer\",\n \"name\": \"string\",\n \"guid\": \"string\",\n \"publisher\": \"string\",\n \"details\": {\n \"description\": \"integer\",\n \"is_public\": \"string\",\n \"created_at\": \"time\",\n \"updated_at\": \"time\",\n \"last_published_at\": \"time\",\n \"has_unpublished_changes\": \"boolean\",\n \"branding_image_url\": \"string\",\n \"upgraded_at\": \"time\",\n \"short_name\": \"string\",\n \"publisher_about_url\": \"string\",\n \"publisher_support_url\": \"string\",\n \"download_url\": \"string\",\n \"first_edited_at\": \"time\",\n \"last_edited_at\": \"time\",\n \"first_published_at\": \"time\",\n \"published_version\": \"string\"\n },\n \"summary_metrics\": [\n {\n \"id\": \"integer\",\n \"name\": \"string\",\n \"metric\": \"string\",\n \"value_function\": \"string\",\n \"thresholds\": {\n \"caution\": \"float\",\n \"critical\": \"float\"\n },\n \"values\": {\n \"raw\": \"float\",\n \"formatted\": \"string\"\n }\n }\n ]\n }\n }\n\n " ]
Please provide a description of the function:def list( self, application_id, filter_hostname=None, filter_ids=None, page=None): filters = [ 'filter[hostname]={0}'.format(filter_hostname) if filter_hostname else None, 'filter[ids]={0}'.format(','.join([str(app_id) for app_id in filter_ids])) if filter_ids else None, 'page={0}'.format(page) if page else None ] return self._get( url='{root}applications/{application_id}/instances.json'.format( root=self.URL, application_id=application_id ), headers=self.headers, params=self.build_param_string(filters) )
[ "\n This API endpoint returns a paginated list of instances associated with the\n given application.\n\n Application instances can be filtered by hostname, or the list of\n application instance IDs.\n\n :type application_id: int\n :param application_id: Application ID\n\n :type filter_hostname: str\n :param filter_hostname: Filter by server hostname\n\n :type filter_ids: list of ints\n :param filter_ids: Filter by application instance ids\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"application_instances\": [\n {\n \"id\": \"integer\",\n \"application_name\": \"string\",\n \"host\": \"string\",\n \"port\": \"integer\",\n \"language\": \"integer\",\n \"health_status\": \"string\",\n \"application_summary\": {\n \"response_time\": \"float\",\n \"throughput\": \"float\",\n \"error_rate\": \"float\",\n \"apdex_score\": \"float\"\n },\n \"end_user_summary\": {\n \"response_time\": \"float\",\n \"throughput\": \"float\",\n \"apdex_score\": \"float\"\n },\n \"links\": {\n \"application\": \"integer\",\n \"application_host\": \"integer\",\n \"server\": \"integer\"\n }\n }\n ],\n \"pages\": {\n \"last\": {\n \"url\": \"https://api.newrelic.com/v2/applications/{application_id}/instances.json?page=2\",\n \"rel\": \"last\"\n },\n \"next\": {\n \"url\": \"https://api.newrelic.com/v2/applications/{application_id}/instances.json?page=2\",\n \"rel\": \"next\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def show(self, application_id, host_id): return self._get( url='{root}applications/{application_id}/hosts/{host_id}.json'.format( root=self.URL, application_id=application_id, host_id=host_id ), headers=self.headers, )
[ "\n This API endpoint returns a single application host, identified by its\n ID.\n\n :type application_id: int\n :param application_id: Application ID\n\n :type host_id: int\n :param host_id: Application host ID\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"application_host\": {\n \"id\": \"integer\",\n \"application_name\": \"string\",\n \"host\": \"string\",\n \"language\": \"integer\",\n \"health_status\": \"string\",\n \"application_summary\": {\n \"response_time\": \"float\",\n \"throughput\": \"float\",\n \"error_rate\": \"float\",\n \"apdex_score\": \"float\"\n },\n \"end_user_summary\": {\n \"response_time\": \"float\",\n \"throughput\": \"float\",\n \"apdex_score\": \"float\"\n },\n \"links\": {\n \"application\": \"integer\",\n \"application_instances\": [\n \"integer\"\n ],\n \"server\": \"integer\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def metric_names(self, application_id, host_id, name=None, page=None): params = [ 'name={0}'.format(name) if name else None, 'page={0}'.format(page) if page else None ] return self._get( url='{root}applications/{application_id}/hosts/{host_id}/metrics.json'.format( root=self.URL, application_id=application_id, host_id=host_id ), headers=self.headers, params=self.build_param_string(params) )
[ "\n Return a list of known metrics and their value names for the given resource.\n\n :type application_id: int\n :param application_id: Application ID\n\n :type host_id: int\n :param host_id: Application Host ID\n\n :type name: str\n :param name: Filter metrics by name\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'pages' key\n if there are paginated results\n\n ::\n\n {\n \"metrics\": [\n {\n \"name\": \"string\",\n \"values\": [\n \"string\"\n ]\n }\n ],\n \"pages\": {\n \"last\": {\n \"url\": \"https://api.newrelic.com/v2/\\\napplications/{application_id}/hosts/{host_id}/metrics.json?page=2\",\n \"rel\": \"last\"\n },\n \"next\": {\n \"url\": \"https://api.newrelic.com/v2/\\\napplications/{application_id}/hosts/{host_id}/metrics.json?page=2\",\n \"rel\": \"next\"\n }\n }\n }\n\n " ]
Please provide a description of the function:def metric_data( self, id, names, values=None, from_dt=None, to_dt=None, summarize=False): params = [ 'from={0}'.format(from_dt) if from_dt else None, 'to={0}'.format(to_dt) if to_dt else None, 'summarize=true' if summarize else None ] params += ['names[]={0}'.format(name) for name in names] if values: params += ['values[]={0}'.format(value) for value in values] return self._get( url='{0}components/{1}/metrics/data.json'.format(self.URL, id), headers=self.headers, params=self.build_param_string(params) )
[ "\n This API endpoint returns a list of values for each of the requested\n metrics. The list of available metrics can be returned using the Metric\n Name API endpoint.\n\n Metric data can be filtered by a number of parameters, including\n multiple names and values, and by time range. Metric names and values\n will be matched intelligently in the background.\n\n You can also retrieve a summarized data point across the entire time\n range selected by using the summarize parameter.\n\n **Note** All times sent and received are formatted in UTC. The default\n time range is the last 30 minutes.\n\n :type id: int\n :param id: Component ID\n\n :type names: list of str\n :param names: Retrieve specific metrics by name\n\n :type values: list of str\n :param values: Retrieve specific metric values\n\n :type from_dt: datetime\n :param from_dt: Retrieve metrics after this time\n\n :type to_dt: datetime\n :param to_dt: Retrieve metrics before this time\n\n :type summarize: bool\n :param summarize: Summarize the data\n\n :rtype: dict\n :return: The JSON response of the API\n\n ::\n\n {\n \"metric_data\": {\n \"from\": \"time\",\n \"to\": \"time\",\n \"metrics\": [\n {\n \"name\": \"string\",\n \"timeslices\": [\n {\n \"from\": \"time\",\n \"to\": \"time\",\n \"values\": \"hash\"\n }\n ]\n }\n ]\n }\n }\n\n " ]
Please provide a description of the function:def _get(self, *args, **kwargs): response = requests.get(*args, **kwargs) if not response.ok: raise NewRelicAPIServerException('{}: {}'.format(response.status_code, response.text)) json_response = response.json() if response.links: json_response['pages'] = response.links return json_response
[ "\n A wrapper for getting things\n\n :returns: The response of your get\n :rtype: dict\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NewRelicAPIServerException>`\n if there is an error from New Relic\n " ]
Please provide a description of the function:def _put(self, *args, **kwargs): if 'data' in kwargs: kwargs['data'] = json.dumps(kwargs['data']) response = requests.put(*args, **kwargs) if not response.ok: raise NewRelicAPIServerException('{}: {}'.format(response.status_code, response.text)) return response.json()
[ "\n A wrapper for putting things. It will also json encode your 'data' parameter\n\n :returns: The response of your put\n :rtype: dict\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NewRelicAPIServerException>`\n if there is an error from New Relic\n " ]
Please provide a description of the function:def _post(self, *args, **kwargs): if 'data' in kwargs: kwargs['data'] = json.dumps(kwargs['data']) response = requests.post(*args, **kwargs) if not response.ok: raise NewRelicAPIServerException('{}: {}'.format(response.status_code, response.text)) return response.json()
[ "\n A wrapper for posting things. It will also json encode your 'data' parameter\n\n :returns: The response of your post\n :rtype: dict\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NewRelicAPIServerException>`\n if there is an error from New Relic\n " ]
Please provide a description of the function:def _delete(self, *args, **kwargs): response = requests.delete(*args, **kwargs) if not response.ok: raise NewRelicAPIServerException('{}: {}'.format(response.status_code, response.text)) if response.text: return response.json() return {}
[ "\n A wrapper for deleting things\n\n :returns: The response of your delete\n :rtype: dict\n\n :raises: This will raise a\n :class:`NewRelicAPIServerException<newrelic_api.exceptions.NewRelicAPIServerException>`\n if there is an error from New Relic\n " ]
Please provide a description of the function:def list(self, filter_title=None, filter_ids=None, page=None): filters = [ 'filter[title]={0}'.format(filter_title) if filter_title else None, 'filter[ids]={0}'.format(','.join([str(dash_id) for dash_id in filter_ids])) if filter_ids else None, 'page={0}'.format(page) if page else None ] return self._get( url='{0}dashboards.json'.format(self.URL), headers=self.headers, params=self.build_param_string(filters) )
[ "\n :type filter_title: str\n :param filter_title: Filter by dashboard title\n\n :type filter_ids: list of ints\n :param filter_ids: Filter by dashboard ids\n\n :type page: int\n :param page: Pagination index\n\n :rtype: dict\n :return: The JSON response of the API, with an additional 'page' key\n if there are paginated results\n\n ::\n\n {\n \"dashboards\": [\n {\n \"id\": \"integer\",\n \"title\": \"string\",\n \"description\": \"string\",\n \"icon\": \"string\",\n \"created_at\": \"time\",\n \"updated_at\": \"time\",\n \"visibility\": \"string\",\n \"editable\": \"string\",\n \"ui_url\": \"string\",\n \"api_url\": \"string\",\n \"owner_email\": \"string\",\n \"filter\": {\n \"event_types\": [\"string\"],\n \"attributes\": [\"string\"]\n }\n }\n ],\n \"pages\": {\n \"last\": {\n \"url\": \"https://api.newrelic.com/v2/dashboards.json?page=1&per_page=100\",\n \"rel\": \"last\"\n },\n \"next\": {\n \"url\": \"https://api.newrelic.com/v2/dashboards.json?page=1&per_page=100\",\n \"rel\": \"next\"\n }\n }\n }\n " ]
Please provide a description of the function:def create(self, dashboard_data): return self._post( url='{0}dashboards.json'.format(self.URL), headers=self.headers, data=dashboard_data, )
[ "\n This API endpoint creates a dashboard and all defined widgets.\n\n :type dashboard: dict\n :param dashboard: Dashboard Dictionary\n\n :rtype dict\n :return: The JSON response of the API\n\n ::\n {\n \"dashboard\": {\n \"id\": \"integer\",\n \"title\": \"string\",\n \"description\": \"string\",\n \"icon\": \"string\",\n \"created_at\": \"time\",\n \"updated_at\": \"time\",\n \"visibility\": \"string\",\n \"editable\": \"string\",\n \"ui_url\": \"string\",\n \"api_url\": \"string\",\n \"owner_email\": \"string\",\n \"metadata\": {\n \"version\": \"integer\"\n },\n \"widgets\": [\n {\n \"visualization\": \"string\",\n \"layout\": {\n \"width\": \"integer\",\n \"height\": \"integer\",\n \"row\": \"integer\",\n \"column\": \"integer\"\n },\n \"widget_id\": \"integer\",\n \"account_id\": \"integer\",\n \"data\": [\n \"nrql\": \"string\"\n ],\n \"presentation\": {\n \"title\": \"string\",\n \"notes\": \"string\"\n }\n }\n ],\n \"filter\": {\n \"event_types\": [\"string\"],\n \"attributes\": [\"string\"]\n }\n }\n }\n " ]
Please provide a description of the function:def update(self, id, dashboard_data): return self._put( url='{0}dashboards/{1}.json'.format(self.URL, id), headers=self.headers, data=dashboard_data, )
[ "\n This API endpoint updates a dashboard and all defined widgets.\n\n :type id: int\n :param id: Dashboard ID\n\n :type dashboard: dict\n :param dashboard: Dashboard Dictionary\n\n :rtype dict\n :return: The JSON response of the API\n\n ::\n {\n \"dashboard\": {\n \"id\": \"integer\",\n \"title\": \"string\",\n \"description\": \"string\",\n \"icon\": \"string\",\n \"created_at\": \"time\",\n \"updated_at\": \"time\",\n \"visibility\": \"string\",\n \"editable\": \"string\",\n \"ui_url\": \"string\",\n \"api_url\": \"string\",\n \"owner_email\": \"string\",\n \"metadata\": {\n \"version\": \"integer\"\n },\n \"widgets\": [\n {\n \"visualization\": \"string\",\n \"layout\": {\n \"width\": \"integer\",\n \"height\": \"integer\",\n \"row\": \"integer\",\n \"column\": \"integer\"\n },\n \"widget_id\": \"integer\",\n \"account_id\": \"integer\",\n \"data\": [\n \"nrql\": \"string\"\n ],\n \"presentation\": {\n \"title\": \"string\",\n \"notes\": \"string\"\n }\n }\n ],\n \"filter\": {\n \"event_types\": [\"string\"],\n \"attributes\": [\"string\"]\n }\n }\n }\n " ]
Please provide a description of the function:def operatorPrecedence(base, operators): # The full expression, used to provide sub-expressions expression = Forward() # The initial expression last = base | Suppress('(') + expression + Suppress(')') def parse_operator(expr, arity, association, action=None, extra=None): return expr, arity, association, action, extra for op in operators: # Use a function to default action to None expr, arity, association, action, extra = parse_operator(*op) # Check that the arity is valid if arity < 1 or arity > 2: raise Exception("Arity must be unary (1) or binary (2)") if association not in (opAssoc.LEFT, opAssoc.RIGHT): raise Exception("Association must be LEFT or RIGHT") # This will contain the expression this = Forward() # Create an expression based on the association and arity if association is opAssoc.LEFT: new_last = (last | extra) if extra else last if arity == 1: operator_expression = new_last + OneOrMore(expr) elif arity == 2: operator_expression = last + OneOrMore(expr + new_last) elif association is opAssoc.RIGHT: new_this = (this | extra) if extra else this if arity == 1: operator_expression = expr + new_this # Currently no operator uses this, so marking it nocover for now elif arity == 2: # nocover operator_expression = last + OneOrMore(new_this) # nocover # Set the parse action for the operator if action is not None: operator_expression.setParseAction(action) this <<= (operator_expression | last) last = this # Set the full expression and return it expression <<= last return expression
[ "\n This re-implements pyparsing's operatorPrecedence function.\n\n It gets rid of a few annoying bugs, like always putting operators inside\n a Group, and matching the whole grammar with Forward first (there may\n actually be a reason for that, but I couldn't find it). It doesn't\n support trinary expressions, but they should be easy to add if it turns\n out I need them.\n " ]
Please provide a description of the function:def addevensubodd(operator, operand): try: for i, x in enumerate(operand): if x % 2: operand[i] = -x return operand except TypeError: if operand % 2: return -operand return operand
[ "Add even numbers, subtract odd ones. See http://1w6.org/w6 " ]
Please provide a description of the function:def main(argv=None): args = docopt.docopt(__doc__, argv=argv, version=__version__) verbose = bool(args['--verbose']) f_roll = dice.roll kwargs = {} if args['--min']: f_roll = dice.roll_min elif args['--max']: f_roll = dice.roll_max if args['--max-dice']: try: kwargs['max_dice'] = int(args['--max-dice']) except ValueError: print("Invalid value for --max-dice: '%s'" % args['--max-dice']) exit(1) expr = ' '.join(args['<expression>']) try: roll, kwargs = f_roll(expr, raw=True, return_kwargs=True, **kwargs) if verbose: print('Result: ', end='') print(str(roll.evaluate_cached(**kwargs))) if verbose: print('Breakdown:') print(dice.utilities.verbose_print(roll, **kwargs)) except DiceBaseException as e: print('Whoops! Something went wrong:') print(e.pretty_print()) exit(1)
[ "Run roll() from a command line interface" ]
Please provide a description of the function:def set_parse_attributes(self, string, location, tokens): "Fluent API for setting parsed location" self.string = string self.location = location self.tokens = tokens return self
[]
Please provide a description of the function:def evaluate_object(obj, cls=None, cache=False, **kwargs): old_obj = obj if isinstance(obj, Element): if cache: obj = obj.evaluate_cached(**kwargs) else: obj = obj.evaluate(cache=cache, **kwargs) if cls is not None and type(obj) != cls: obj = cls(obj) for attr in ('string', 'location', 'tokens'): if hasattr(old_obj, attr): setattr(obj, attr, getattr(old_obj, attr)) return obj
[ "Evaluates elements, and coerces objects to a class if needed" ]
Please provide a description of the function:def evaluate_cached(self, **kwargs): if not hasattr(self, 'result'): self.result = self.evaluate(cache=True, **kwargs) return self.result
[ "Wraps evaluate(), caching results" ]
Please provide a description of the function:def readGraph(edgeList, nodeList = None, directed = False, idKey = 'ID', eSource = 'From', eDest = 'To'): progArgs = (0, "Starting to reading graphs") if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: if directed: grph = nx.DiGraph() else: grph = nx.Graph() if nodeList: PBar.updateVal(0, "Reading " + nodeList) f = open(os.path.expanduser(os.path.abspath(nodeList))) nFile = csv.DictReader(f) for line in nFile: vals = line ndID = vals[idKey] del vals[idKey] if len(vals) > 0: grph.add_node(ndID, **vals) else: grph.add_node(ndID) f.close() PBar.updateVal(.25, "Reading " + edgeList) f = open(os.path.expanduser(os.path.abspath(edgeList))) eFile = csv.DictReader(f) for line in eFile: vals = line eFrom = vals[eSource] eTo = vals[eDest] del vals[eSource] del vals[eDest] if len(vals) > 0: grph.add_edge(eFrom, eTo, **vals) else: grph.add_edge(eFrom, eTo) PBar.finish("{} nodes and {} edges found".format(len(grph.nodes()), len(grph.edges()))) f.close() return grph
[ "Reads the files given by _edgeList_ and _nodeList_ and creates a networkx graph for the files.\n\n This is designed only for the files produced by metaknowledge and is meant to be the reverse of [writeGraph()](#metaknowledge.graphHelpers.writeGraph), if this does not produce the desired results the networkx builtin [networkx.read_edgelist()](https://networkx.github.io/documentation/networkx-1.10/reference/generated/networkx.readwrite.edgelist.read_edgelist.html) could be tried as it is aimed at a more general usage.\n\n The read edge list format assumes the column named _eSource_ (default `'From'`) is the source node, then the column _eDest_ (default `'To'`) givens the destination and all other columns are attributes of the edges, e.g. weight.\n\n The read node list format assumes the column _idKey_ (default `'ID'`) is the ID of the node for the edge list and the resulting network. All other columns are considered attributes of the node, e.g. count.\n\n **Note**: If the names of the columns do not match those given to **readGraph()** a `KeyError` exception will be raised.\n\n **Note**: If nodes appear in the edgelist but not the nodeList they will be created silently with no attributes.\n\n # Parameters\n\n _edgeList_ : `str`\n\n > a string giving the path to the edge list file\n\n _nodeList_ : `optional [str]`\n\n > default `None`, a string giving the path to the node list file\n\n _directed_ : `optional [bool]`\n\n > default `False`, if `True` the produced network is directed from _eSource_ to _eDest_\n\n _idKey_ : `optional [str]`\n\n > default `'ID'`, the name of the ID column in the node list\n\n _eSource_ : `optional [str]`\n\n > default `'From'`, the name of the source column in the edge list\n\n _eDest_ : `optional [str]`\n\n > default `'To'`, the name of the destination column in the edge list\n\n # Returns\n\n `networkx Graph`\n\n > the graph described by the input files\n " ]
Please provide a description of the function:def writeGraph(grph, name, edgeInfo = True, typing = False, suffix = 'csv', overwrite = True, allSameAttribute = False): progArgs = (0, "Writing the graph to files starting with: {}".format(name)) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: if typing: if isinstance(grph, nx.classes.digraph.DiGraph) or isinstance(grph, nx.classes.multidigraph.MultiDiGraph): grphType = "_directed" else: grphType = "_undirected" else: grphType = '' nameCompts = os.path.split(os.path.expanduser(os.path.normpath(name))) if nameCompts[0] == '' and nameCompts[1] == '': edgeListName = "edgeList"+ grphType + '.' + suffix nodesAtrName = "nodeAttributes"+ grphType + '.' + suffix elif nameCompts[0] == '': edgeListName = nameCompts[1] + "_edgeList"+ grphType + '.' + suffix nodesAtrName = nameCompts[1] + "_nodeAttributes"+ grphType + '.' + suffix elif nameCompts[1] == '': edgeListName = os.path.join(nameCompts[0], "edgeList"+ grphType + '.' + suffix) nodesAtrName = os.path.join(nameCompts[0], "nodeAttributes"+ grphType + '.' + suffix) else: edgeListName = os.path.join(nameCompts[0], nameCompts[1] + "_edgeList"+ grphType + '.' + suffix) nodesAtrName = os.path.join(nameCompts[0], nameCompts[1] + "_nodeAttributes"+ grphType + '.' + suffix) if not overwrite: if os.path.isfile(edgeListName): raise OSError(edgeListName+ " already exists") if os.path.isfile(nodesAtrName): raise OSError(nodesAtrName + " already exists") writeEdgeList(grph, edgeListName, extraInfo = edgeInfo, allSameAttribute = allSameAttribute, _progBar = PBar) writeNodeAttributeFile(grph, nodesAtrName, allSameAttribute = allSameAttribute, _progBar = PBar) PBar.finish("{} nodes and {} edges written to file".format(len(grph.nodes()), len(grph.edges())))
[ "Writes both the edge list and the node attribute list of _grph_ to files starting with _name_.\n\n The output files start with _name_, the file type (edgeList, nodeAttributes) then if typing is True the type of graph (directed or undirected) then the suffix, the default is as follows:\n\n >> name_fileType.suffix\n\n Both files are csv's with comma delimiters and double quote quoting characters. The edge list has two columns for the source and destination of the edge, `'From'` and `'To'` respectively, then, if _edgeInfo_ is `True`, for each attribute of the node another column is created. The node list has one column call \"ID\" with the node ids used by networkx and all other columns are the node attributes.\n\n To read back these files use [readGraph()](#metaknowledge.graphHelpers.readGraph) and to write only one type of lsit use [writeEdgeList()](#metaknowledge.graphHelpers.writeEdgeList) or [writeNodeAttributeFile()](#metaknowledge.graphHelpers.writeNodeAttributeFile).\n\n **Warning**: this function will overwrite files, if they are in the way of the output, to prevent this set _overwrite_ to `False`\n\n **Note**: If any nodes or edges are missing an attribute a `KeyError` will be raised.\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > A networkx graph of the network to be written.\n\n _name_ : `str`\n\n > The start of the file name to be written, can include a path.\n\n _edgeInfo_ : `optional [bool]`\n\n > Default `True`, if `True` the the attributes of each edge are written to the edge list.\n\n _typing_ : `optional [bool]`\n\n > Default `False`, if `True` the directed ness of the graph will be added to the file names.\n\n _suffix_ : `optional [str]`\n\n > Default `\"csv\"`, the suffix of the file.\n\n _overwrite_ : `optional [bool]`\n\n > Default `True`, if `True` files will be overwritten silently, otherwise an `OSError` exception will be raised.\n " ]
Please provide a description of the function:def writeEdgeList(grph, name, extraInfo = True, allSameAttribute = False, _progBar = None):
    count = 0
    eMax = len(grph.edges())
    if metaknowledge.VERBOSE_MODE or isinstance(_progBar, _ProgressBar):
        if isinstance(_progBar, _ProgressBar):
            PBar = _progBar
            PBar.updateVal(0, "Writing edge list {}".format(name))
        else:
            PBar = _ProgressBar(0, "Writing edge list {}".format(name))
    else:
        PBar = _ProgressBar(0, "Writing edge list {}".format(name), dummy = True)
    if len(grph.edges(data = True)) < 1:
        outFile = open(os.path.expanduser(os.path.abspath(name)), 'w')
        outFile.write('"From","To"\n')
        outFile.close()
        PBar.updateVal(1, "Done edge list '{}', 0 edges written.".format(name))
    else:
        if extraInfo:
            csvHeader = []
            if allSameAttribute:
                # next(iter(...)) works for both list- and view-style edge
                # containers; the edge views have no __next__ method
                csvHeader = ['From'] + ['To'] + list(next(iter(grph.edges(data = True)))[2].keys())
            else:
                extraAttribs = set()
                for eTuple in grph.edges(data = True):
                    count += 1
                    if count % 1000 == 0:
                        PBar.updateVal(count / eMax * .10, "Checking over edge: '{}' to '{}'".format(eTuple[0], eTuple[1]))
                    s = set(eTuple[2].keys()) - extraAttribs
                    if len(s) > 0:
                        for i in s:
                            extraAttribs.add(i)
                csvHeader = ['From', 'To'] + list(extraAttribs)
        else:
            csvHeader = ['From'] + ['To']
        count = 0
        PBar.updateVal(.01, "Opening file {}".format(name))
        f = open(os.path.expanduser(os.path.abspath(name)), 'w', newline = '')
        outFile = csv.DictWriter(f, csvHeader, delimiter = ',', quotechar = '"', quoting=csv.QUOTE_NONNUMERIC)
        outFile.writeheader()
        if extraInfo:
            for e in grph.edges(data = True):
                count += 1
                if count % 1000 == 0:
                    PBar.updateVal(count / eMax * .90 + .10, "Writing edge: '{}' to '{}'".format(e[0], e[1]))
                eDict = e[2].copy()
                eDict['From'] = e[0]
                eDict['To'] = e[1]
                try:
                    outFile.writerow(eDict)
                except UnicodeEncodeError: #Because Windows
                    newDict = {k.encode('ASCII', errors='ignore').decode('ASCII', errors='ignore') if isinstance(k, str) else k: v.encode('ASCII', errors='ignore').decode('ASCII', errors='ignore') if isinstance(v, str) else v for k, v in eDict.items()}
                    outFile.writerow(newDict)
                except ValueError:
                    raise ValueError("Some edges in the graph do not have the same attributes")
        else:
            for e in grph.edges():
                count += 1
                if count % 1000 == 0:
                    PBar.updateVal(count / eMax * .90 + .10, "Writing edge: '{}' to '{}'".format(e[0], e[1]))
                # eDict was previously used without being defined in this
                # branch, which raised a NameError on the first edge
                eDict = {'From': e[0], 'To': e[1]}
                try:
                    outFile.writerow(eDict)
                except UnicodeEncodeError: #Because Windows
                    newDict = {k.encode('ASCII', errors='ignore').decode('ASCII', errors='ignore') if isinstance(k, str) else k: v.encode('ASCII', errors='ignore').decode('ASCII', errors='ignore') if isinstance(v, str) else v for k, v in eDict.items()}
                    outFile.writerow(newDict)
        PBar.updateVal(1, "Closing {}".format(name))
        f.close()
        if not isinstance(_progBar, _ProgressBar):
            PBar.finish("Done edge list {}, {} edges written.".format(name, count))
[ "Writes an edge list of _grph_ at the destination _name_.\n\n The edge list has two columns for the source and destination of the edge, `'From'` and `'To'` respectively, then, if _edgeInfo_ is `True`, for each attribute of the node another column is created.\n\n **Note**: If any edges are missing an attribute it will be left blank by default, enable _allSameAttribute_ to cause a `KeyError` to be raised.\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > The graph to be written to _name_\n\n _name_ : `str`\n\n > The name of the file to be written\n\n _edgeInfo_ : `optional [bool]`\n\n > Default `True`, if `True` the attributes of each edge will be written\n\n _allSameAttribute_ : `optional [bool]`\n\n > Default `False`, if `True` all the edges must have the same attributes or an exception will be raised. If `False` the missing attributes will be left blank.\n " ]
Please provide a description of the function:def writeNodeAttributeFile(grph, name, allSameAttribute = False, _progBar = None):
    count = 0
    nMax = len(grph.nodes())
    if metaknowledge.VERBOSE_MODE or isinstance(_progBar, _ProgressBar):
        if isinstance(_progBar, _ProgressBar):
            PBar = _progBar
            PBar.updateVal(0, "Writing node list {}".format(name))
        else:
            PBar = _ProgressBar(0, "Writing node list {}".format(name))
    else:
        PBar = _ProgressBar(0, "Writing node list {}".format(name), dummy = True)
    if len(grph.nodes(data = True)) < 1:
        outFile = open(os.path.expanduser(os.path.abspath(name)), 'w')
        outFile.write('ID\n')
        outFile.close()
        PBar.updateVal(1, "Done node attribute list: {}, 0 nodes written.".format(name))
    else:
        csvHeader = []
        if allSameAttribute:
            # next(iter(...)) works for both list- and view-style node
            # containers; the node views have no __next__ method
            csvHeader = ['ID'] + list(next(iter(grph.nodes(data = True)))[1].keys())
        else:
            extraAttribs = set()
            for n, attribs in grph.nodes(data = True):
                count += 1
                if count % 100 == 0:
                    PBar.updateVal(count / nMax * .10, "Checking over node: '{}'".format(n))
                s = set(attribs.keys()) - extraAttribs
                if len(s) > 0:
                    for i in s:
                        extraAttribs.add(i)
            csvHeader = ['ID'] + list(extraAttribs)
        count = 0
        PBar.updateVal(.10, "Opening '{}'".format(name))
        f = open(name, 'w', newline = '')
        outFile = csv.DictWriter(f, csvHeader, delimiter = ',', quotechar = '"', quoting = csv.QUOTE_NONNUMERIC)
        outFile.writeheader()
        for n in grph.nodes(data = True):
            count += 1
            if count % 100 == 0:
                PBar.updateVal(count / nMax * .90 + .10, "Writing node: '{}'".format(n[0]))
            nDict = n[1].copy()
            nDict['ID'] = n[0]
            try:
                outFile.writerow(nDict)
            except UnicodeEncodeError: #Because Windows
                newDict = {k.encode('ASCII', errors='ignore').decode('ASCII', errors='ignore') if isinstance(k, str) else k: v.encode('ASCII', errors='ignore').decode('ASCII', errors='ignore') if isinstance(v, str) else v for k, v in nDict.items()}
                outFile.writerow(newDict)
            except ValueError:
                raise ValueError("Some nodes in the graph do not have the same attributes")
        PBar.updateVal(1, "Closing {}".format(name))
        f.close()
        if not isinstance(_progBar, _ProgressBar):
            PBar.finish("Done node attribute list: {}, {} nodes written.".format(name, count))
[ "Writes a node attribute list of _grph_ to the file given by the path _name_.\n\n The node list has one column call `'ID'` with the node ids used by networkx and all other columns are the node attributes.\n\n **Note**: If any nodes are missing an attribute it will be left blank by default, enable _allSameAttribute_ to cause a `KeyError` to be raised.\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > The graph to be written to _name_\n\n _name_ : `str`\n\n > The name of the file to be written\n\n _allSameAttribute_ : `optional [bool]`\n\n > Default `False`, if `True` all the nodes must have the same attributes or an exception will be raised. If `False` the missing attributes will be left blank.\n " ]
Please provide a description of the function:def writeTnetFile(grph, name, modeNameString, weighted = False, sourceMode = None, timeString = None, nodeIndexString = 'tnet-ID', weightString = 'weight'): count = 0 eMax = len(grph.edges()) progArgs = (0, "Writing tnet edge list {}".format(name)) if metaknowledge.VERBOSE_MODE: progKwargs = {'dummy' : False} else: progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: if sourceMode is not None: modes = [sourceMode] else: modes = [] mode1Set = set() PBar.updateVal(.1, "Indexing nodes for tnet") for nodeIndex, node in enumerate(grph.nodes(data = True), start = 1): try: nMode = node[1][modeNameString] except KeyError: #too many modes so will fail modes = [1,2,3] nMode = 4 if nMode not in modes: if len(modes) < 2: modes.append(nMode) else: raise RCValueError("Too many modes of '{}' found in the network or one of the nodes was missing its mode. There must be exactly 2 modes.".format(modeNameString)) if nMode == modes[0]: mode1Set.add(node[0]) node[1][nodeIndexString] = nodeIndex if len(modes) != 2: raise RCValueError("Too few modes of '{}' found in the network. There must be exactly 2 modes.".format(modeNameString)) with open(name, 'w', encoding = 'utf-8') as f: edgesCaller = {'data' : True} if timeString is not None: edgesCaller['keys'] = True for *nodes, eDict in grph.edges(**edgesCaller): if timeString is not None: n1, n2, keyVal = nodes else: n1, n2 = nodes count += 1 if count % 1000 == 1: PBar.updateVal(count/ eMax * .9 + .1, "writing edge: '{}'-'{}'".format(n1, n2)) if n1 in mode1Set: if n2 in mode1Set: raise RCValueError("The nodes '{}' and '{}' have an edge and the same type. The network must be purely 2-mode.".format(n1, n2)) elif n2 in mode1Set: n1, n2 = n2, n1 else: raise RCValueError("The nodes '{}' and '{}' have an edge and the same type. The network must be purely 2-mode.".format(n1, n2)) if timeString is not None: eTimeString = '"{}" '.format(keyVal) else: eTimeString = '' if weighted: f.write("{}{} {} {}\n".format(eTimeString, grph.node[n1][nodeIndexString], grph.node[n2][nodeIndexString], eDict[weightString])) else: f.write("{}{} {}\n".format(eTimeString, grph.node[n1][nodeIndexString], grph.node[n2][nodeIndexString])) PBar.finish("Done writing tnet file '{}'".format(name))
[ "Writes an edge list designed for reading by the _R_ package [tnet](https://toreopsahl.com/tnet/).\n\n The _networkx_ graph provided must be a pure two-mode network, the modes must be 2 different values for the node attribute accessed by _modeNameString_ and all edges must be between different node types. Each node will be given an integer id, stored in the attribute given by _nodeIndexString_, these ids are then written to the file as the endpoints of the edges. Unless _sourceMode_ is given which mode is the source (first column) and which the target (second column) is random.\n\n **Note** the _grph_ will be modified by this function, the ids of the nodes will be written to the graph at the attribute _nodeIndexString_.\n\n # Parameters\n\n _grph_ : `network Graph`\n\n > The graph that will be written to _name_\n\n _name_ : `str`\n\n > The path of the file to write\n\n _modeNameString_ : `str`\n\n > The name of the attribute _grph_'s modes are stored in\n\n _weighted_ : `optional bool`\n\n > Default `False`, if `True` then the attribute _weightString_ will be written to the weight column\n\n _sourceMode_ : `optional str`\n\n > Default `None`, if given the name of the mode used for the source (first column) in the output file\n\n _timeString_ : `optional str`\n\n > Default `None`, if present the attribute _timeString_ of an edge will be written to the time column surrounded by double quotes (\").\n\n **Note** The format used by tnet for dates is very strict it uses the ISO format, down to the second and without time zones.\n\n _nodeIndexString_ : `optional str`\n\n > Default `'tnet-ID'`, the name of the attribute to save the id for each node\n\n _weightString_ : `optional str`\n\n > Default `'weight'`, the name of the weight attribute\n " ]
Please provide a description of the function:def getWeight(grph, nd1, nd2, weightString = "weight", returnType = int): if not weightString: return returnType(1) else: return returnType(grph.edges[nd1, nd2][weightString])
[ "\n A way of getting the weight of an edge with or without weight as a parameter\n returns a the value of the weight parameter converted to returnType if it is given or 1 (also converted) if not\n " ]
Please provide a description of the function:def getNodeDegrees(grph, weightString = "weight", strictMode = False, returnType = int, edgeType = 'bi'): ndsDict = {} for nd in grph.nodes(): ndsDict[nd] = returnType(0) for e in grph.edges(data = True): if weightString: try: edgVal = returnType(e[2][weightString]) except KeyError: if strictMode: raise KeyError("The edge from " + str(e[0]) + " to " + str(e[1]) + " does not have the attribute: '" + str(weightString) + "'") else: edgVal = returnType(1) else: edgVal = returnType(1) if edgeType == 'bi': ndsDict[e[0]] += edgVal ndsDict[e[1]] += edgVal elif edgeType == 'in': ndsDict[e[1]] += edgVal elif edgeType == 'out': ndsDict[e[0]] += edgVal else: raise ValueError("edgeType must be 'bi', 'in', or 'out'") return ndsDict
[ "\n Retunrs a dictionary of nodes to their degrees, the degree is determined by adding the weight of edge with the weight being the string weightString that gives the name of the attribute of each edge containng thier weight. The Weights are then converted to the type returnType. If weightString is give as False instead each edge is counted as 1.\n\n edgeType, takes in one of three strings: 'bi', 'in', 'out'. 'bi' means both nodes on the edge count it, 'out' mans only the one the edge comes form counts it and 'in' means only the node the edge goes to counts it. 'bi' is the default. Use only on directional graphs as otherwise the selected nodes is random.\n " ]
Please provide a description of the function:def dropEdges(grph, minWeight = - float('inf'), maxWeight = float('inf'), parameterName = 'weight', ignoreUnweighted = False, dropSelfLoops = False):
    count = 0
    total = len(grph.edges())
    if metaknowledge.VERBOSE_MODE:
        progArgs = (0, "Dropping edges")
        progKwargs = {}
    else:
        progArgs = (0, "Dropping edges")
        progKwargs = {'dummy' : True}
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        if dropSelfLoops:
            slps = list(grph.selfloop_edges())
            PBar.updateVal(0, "Dropping {} self loops".format(len(slps)))
            for e in slps:
                grph.remove_edge(e[0], e[1])
        edgesToDrop = []
        if minWeight != - float('inf') or maxWeight != float('inf'):
            for e in grph.edges(data = True):
                try:
                    val = e[2][parameterName]
                except KeyError:
                    if not ignoreUnweighted:
                        raise KeyError("One or more Edges do not have weight or " + str(parameterName), " is not the name of the weight")
                    else:
                        pass
                else:
                    count += 1
                    if count % 100000 == 0:
                        PBar.updateVal(count/ total, str(count) + " edges analysed and " + str(total -len(grph.edges())) + " edges dropped")
                    if val > maxWeight or val < minWeight:
                        edgesToDrop.append((e[0], e[1]))
        grph.remove_edges_from(edgesToDrop)
        PBar.finish(str(total - len(grph.edges())) + " edges out of " + str(total) + " dropped, " + str(len(grph.edges())) + " returned")
[ "Modifies _grph_ by dropping edges whose weight is not within the inclusive bounds of _minWeight_ and _maxWeight_, i.e after running _grph_ will only have edges whose weights meet the following inequality: _minWeight_ <= edge's weight <= _maxWeight_. A `Keyerror` will be raised if the graph is unweighted unless _ignoreUnweighted_ is `True`, the weight is determined by examining the attribute _parameterName_.\n\n **Note**: none of the default options will result in _grph_ being modified so only specify the relevant ones, e.g. `dropEdges(G, dropSelfLoops = True)` will remove only the self loops from `G`.\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > The graph to be modified.\n\n _minWeight_ : `optional [int or double]`\n\n > default `-inf`, the minimum weight for an edge to be kept in the graph.\n\n _maxWeight_ : `optional [int or double]`\n\n > default `inf`, the maximum weight for an edge to be kept in the graph.\n\n _parameterName_ : `optional [str]`\n\n > default `'weight'`, key to weight field in the edge's attribute dictionary, the default is the same as networkx and metaknowledge so is likely to be correct\n\n _ignoreUnweighted_ : `optional [bool]`\n\n > default `False`, if `True` unweighted edges will kept\n\n _dropSelfLoops_ : `optional [bool]`\n\n > default `False`, if `True` self loops will be removed regardless of their weight\n " ]
Please provide a description of the function:def dropNodesByDegree(grph, minDegree = -float('inf'), maxDegree = float('inf'), useWeight = True, parameterName = 'weight', includeUnweighted = True): count = 0 total = len(grph.nodes()) if metaknowledge.VERBOSE_MODE: progArgs = (0, "Dropping nodes by degree") progKwargs = {} else: progArgs = (0, "Dropping nodes by degree") progKwargs = {'dummy' : True} with _ProgressBar(*progArgs, **progKwargs) as PBar: badNodes = [] for n in grph.nodes(): if PBar: count += 1 if count % 10000 == 0: PBar.updateVal(count/ total, str(count) + " nodes analysed and " + str(len(badNodes)) + " nodes dropped") val = 0 if useWeight: for e in grph.edges(n, data = True): try: val += e[2][parameterName] except KeyError: if not includeUnweighted: raise KeyError("One or more Edges do not have weight or " + str(parameterName), " is not the name of the weight") else: val += 1 else: val = len(grph.edges(n)) if val < minDegree or val > maxDegree: badNodes.append(n) if PBar: PBar.updateVal(1, "Cleaning up graph") grph.remove_nodes_from(badNodes) if PBar: PBar.finish("{} nodes out of {} dropped, {} returned".format(len(badNodes), total, total - len(badNodes)))
[ "Modifies _grph_ by dropping nodes that do not have a degree that is within inclusive bounds of _minDegree_ and _maxDegree_, i.e after running _grph_ will only have nodes whose degrees meet the following inequality: _minDegree_ <= node's degree <= _maxDegree_.\n\n Degree is determined in two ways, the default _useWeight_ is the weight attribute of the edges to a node will be summed, the attribute's name is _parameterName_ otherwise the number of edges touching the node is used. If _includeUnweighted_ is `True` then _useWeight_ will assign a degree of 1 to unweighted edges.\n\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > The graph to be modified.\n\n _minDegree_ : `optional [int or double]`\n\n > default `-inf`, the minimum degree for an node to be kept in the graph.\n\n _maxDegree_ : `optional [int or double]`\n\n > default `inf`, the maximum degree for an node to be kept in the graph.\n\n _useWeight_ : `optional [bool]`\n\n > default `True`, if `True` the the edge weights will be summed to get the degree, if `False` the number of edges will be used to determine the degree.\n\n _parameterName_ : `optional [str]`\n\n > default `'weight'`, key to weight field in the edge's attribute dictionary, the default is the same as networkx and metaknowledge so is likely to be correct.\n\n _includeUnweighted_ : `optional [bool]`\n\n > default `True`, if `True` edges with no weight will be considered to have a weight of 1, if `False` they will cause a `KeyError` to be raised.\n " ]
Please provide a description of the function:def dropNodesByCount(grph, minCount = -float('inf'), maxCount = float('inf'), parameterName = 'count', ignoreMissing = False):
    count = 0
    total = len(grph.nodes())
    if metaknowledge.VERBOSE_MODE:
        progArgs = (0, "Dropping nodes by count")
        progKwargs = {}
    else:
        progArgs = (0, "Dropping nodes by count")
        progKwargs = {'dummy' : True}
    with _ProgressBar(*progArgs, **progKwargs) as PBar:
        badNodes = []
        for n in grph.nodes(data = True):
            if PBar:
                count += 1
                if count % 10000 == 0:
                    PBar.updateVal(count / total, str(count) + " nodes analysed and {} nodes dropped".format(len(badNodes)))
            try:
                val = n[1][parameterName]
            except KeyError:
                if not ignoreMissing:
                    raise KeyError("One or more nodes do not have counts or " + str(parameterName) + " is not the name of the count parameter")
                else:
                    pass
            else:
                if val < minCount or val > maxCount:
                    badNodes.append(n[0])
        if PBar:
            PBar.updateVal(1, "Cleaning up graph")
        grph.remove_nodes_from(badNodes)
        if PBar:
            PBar.finish("{} nodes out of {} dropped, {} returned".format(len(badNodes), total, total - len(badNodes)))
[ "Modifies _grph_ by dropping nodes that do not have a count that is within inclusive bounds of _minCount_ and _maxCount_, i.e after running _grph_ will only have nodes whose degrees meet the following inequality: _minCount_ <= node's degree <= _maxCount_.\n\n Count is determined by the count attribute, _parameterName_, and if missing will result in a `KeyError` being raised. _ignoreMissing_ can be set to `True` to suppress the error.\n\n minCount and maxCount default to negative and positive infinity respectively so without specifying either the output should be the input\n\n # Parameters\n\n _grph_ : `networkx Graph`\n\n > The graph to be modified.\n\n _minCount_ : `optional [int or double]`\n\n > default `-inf`, the minimum Count for an node to be kept in the graph.\n\n _maxCount_ : `optional [int or double]`\n\n > default `inf`, the maximum Count for an node to be kept in the graph.\n\n _parameterName_ : `optional [str]`\n\n > default `'count'`, key to count field in the nodes's attribute dictionary, the default is the same thoughout metaknowledge so is likely to be correct.\n\n _ignoreMissing_ : `optional [bool]`\n\n > default `False`, if `True` nodes missing a count will be kept in the graph instead of raising an exception\n " ]
Please provide a description of the function:def mergeGraphs(targetGraph, addedGraph, incrementedNodeVal = 'count', incrementedEdgeVal = 'weight'):
    for addedNode, attribs in addedGraph.nodes(data = True):
        if incrementedNodeVal:
            try:
                targetGraph.node[addedNode][incrementedNodeVal] += attribs[incrementedNodeVal]
            except KeyError:
                targetGraph.add_node(addedNode, **attribs)
        else:
            if not targetGraph.has_node(addedNode):
                targetGraph.add_node(addedNode, **attribs)
    for edgeNode1, edgeNode2, attribs in addedGraph.edges(data = True):
        if incrementedEdgeVal:
            try:
                targetGraph.edges[edgeNode1, edgeNode2][incrementedEdgeVal] += attribs[incrementedEdgeVal]
            except KeyError:
                targetGraph.add_edge(edgeNode1, edgeNode2, **attribs)
        else:
            if not targetGraph.has_edge(edgeNode1, edgeNode2):
                targetGraph.add_edge(edgeNode1, edgeNode2, **attribs)
[ "A quick way of merging graphs, this is meant to be quick and is only intended for graphs generated by metaknowledge. This does not check anything and as such may cause unexpected results if the source and target were not generated by the same method.\n\n **mergeGraphs**() will **modify** _targetGraph_ in place by adding the nodes and edges found in the second, _addedGraph_. If a node or edge exists _targetGraph_ is given precedence, but the edge and node attributes given by _incrementedNodeVal_ and incrementedEdgeVal are added instead of being overwritten.\n\n # Parameters\n\n _targetGraph_ : `networkx Graph`\n\n > the graph to be modified, it has precedence.\n\n _addedGraph_ : `networkx Graph`\n\n > the graph that is unmodified, it is added and does **not** have precedence.\n\n _incrementedNodeVal_ : `optional [str]`\n\n > default `'count'`, the name of the count attribute for the graph's nodes. When merging this attribute will be the sum of the values in the input graphs, instead of _targetGraph_'s value.\n\n _incrementedEdgeVal_ : `optional [str]`\n\n > default `'weight'`, the name of the weight attribute for the graph's edges. When merging this attribute will be the sum of the values in the input graphs, instead of _targetGraph_'s value.\n " ]
Please provide a description of the function:def graphStats(G, stats = ('nodes', 'edges', 'isolates', 'loops', 'density', 'transitivity'), makeString = True, sentenceString = False):
    for sts in stats:
        if sts not in ['nodes', 'edges', 'isolates', 'loops', 'density', 'transitivity']:
            raise RuntimeError('"{}" is not a valid stat.'.format(sts))
    if makeString:
        stsData = []
    else:
        stsData = {}
    if 'nodes' in stats:
        if makeString:
            if sentenceString:
                stsData.append("{:G} nodes".format(len(G.nodes())))
            else:
                stsData.append("Nodes: {:G}".format(len(G.nodes())))
        else:
            stsData['nodes'] = len(G.nodes())
    if 'edges' in stats:
        if makeString:
            if sentenceString:
                stsData.append("{:G} edges".format(len(G.edges())))
            else:
                stsData.append("Edges: {:G}".format(len(G.edges())))
        else:
            stsData['edges'] = len(G.edges())
    if 'isolates' in stats:
        if makeString:
            if sentenceString:
                stsData.append("{:G} isolates".format(len(list(nx.isolates(G)))))
            else:
                stsData.append("Isolates: {:G}".format(len(list(nx.isolates(G)))))
        else:
            stsData['isolates'] = len(list(nx.isolates(G)))
    if 'loops' in stats:
        if makeString:
            if sentenceString:
                stsData.append("{:G} self loops".format(len(list(G.selfloop_edges()))))
            else:
                stsData.append("Self loops: {:G}".format(len(list(G.selfloop_edges()))))
        else:
            stsData['loops'] = len(list(G.selfloop_edges()))
    if 'density' in stats:
        if makeString:
            if sentenceString:
                stsData.append("a density of {:G}".format(nx.density(G)))
            else:
                stsData.append("Density: {:G}".format(nx.density(G)))
        else:
            stsData['density'] = nx.density(G)
    if 'transitivity' in stats:
        if makeString:
            if sentenceString:
                stsData.append("a transitivity of {:G}".format(nx.transitivity(G)))
            else:
                stsData.append("Transitivity: {:G}".format(nx.transitivity(G)))
        else:
            stsData['transitivity'] = nx.transitivity(G)
    if makeString:
        if sentenceString:
            retString = "The graph has "
            if len(stsData) < 1:
                return retString
            elif len(stsData) == 1:
                return retString + stsData[0]
            else:
                return retString + ', '.join(stsData[:-1]) + ' and ' + stsData[-1]
        else:
            return '\n'.join(stsData)
    else:
        retLst = []
        for sts in stats:
            retLst.append(stsData[sts])
        return tuple(retLst)
[ "Returns a string or list containing statistics about the graph _G_.\n\n **graphStats()** gives 6 different statistics: number of nodes, number of edges, number of isolates, number of loops, density and transitivity. The ones wanted can be given to _stats_. By default a string giving each stat on a different line it can also produce a sentence containing all the requested statistics or the raw values can be accessed instead by setting _makeString_ to `False`.\n\n # Parameters\n\n _G_ : `networkx Graph`\n\n > The graph for the statistics to be determined of\n\n _stats_ : `optional [list or tuple [str]]`\n\n > Default `('nodes', 'edges', 'isolates', 'loops', 'density', 'transitivity')`, a list or tuple containing any number or combination of the strings:\n\n > `\"nodes\"`, `\"edges\"`, `\"isolates\"`, `\"loops\"`, `\"density\"` and `\"transitivity\"``\n\n > At least one occurrence of the corresponding string causes the statistics to be provided in the string output. For the non-string (tuple) output the returned tuple has the same length as the input and each output is at the same index as the string that requested it, e.g.\n\n > `_stats_ = (\"edges\", \"loops\", \"edges\")`\n\n > The return is a tuple with 2 elements the first and last of which are the number of edges and the second is the number of loops\n\n _makeString_ : `optional [bool]`\n\n > Default `True`, if `True` a string is returned if `False` a tuple\n\n _sentenceString_ : `optional [bool]`\n\n >Default `False` : if `True` the returned string is a sentce, otherwise each value has a seperate line.\n\n # Returns\n\n `str or tuple [float and int]`\n\n > The type is determined by _makeString_ and the layout by _stats_\n " ]
Please provide a description of the function:def AD(val):
    retDict = {}
    for v in val:
        split = v.split(' : ')
        retDict[split[0]] = [s for s in ' : '.join(split[1:]).replace('\n', '').split(';') if s != '']
    return retDict
[ "Affiliation\n Undoing what the parser does then splitting at the semicolons and dropping newlines extra fitlering is required beacuse some AD's end with a semicolon" ]
Please provide a description of the function:def AUID(val):
    retDict = {}
    for v in val:
        split = v.split(' : ')
        retDict[split[0]] = ' : '.join(split[1:])
    return retDict
[ "AuthorIdentifier\n one line only just need to undo the parser's effects" ]
Please provide a description of the function:def isInteractive():
    if sys.stdout.isatty() and os.name != 'nt':
        # Hopefully everything but ms supports '\r'
        try:
            import threading
        except ImportError:
            return False
        else:
            return True
    else:
        return False
[ "\n A basic check of if the program is running in interactive mode\n " ]