One of the most religious states in America was not pleased with a billboard's unfavorable message about God. Mississippi is known as one of the most religious states in the country, and a billboard put up there by the FFRF (Freedom From Religion Foundation) was pulled down after heavy community backlash. The digital board showed Uncle Sam warning onlookers that 'God Fixation Won't Fix this Nation'. It went up last Friday in Tupelo, a majority-Christian city, and was brought down in less than a week. The FFRF called the decision a 'heckler's veto'.

The FFRF is a nationwide state/church watchdog with a membership of over 24,000 people from around the country, including Mississippi. The billboard was clearly intended to stir up emotions and send a strong message, but it seemed to be too effective: as one FFRF member put it, it generated 'too much heat'. Another spokesperson from the organization expressed distaste at what happened and said she was disappointed that there is no room for alternative views or dissent in Mississippi. It is not healthy, she went on to say, when religion cannot be debated or questioned.

So I read a headline about an atheist organization buying a billboard mocking God in Mississippi. Smh. A mess. — Whit ✨ (@Whitley__Kayla) June 30, 2016

Alabama and Mississippi are tied as the most religious states in the USA, with 77% of the adult population claiming to be 'very religious', according to a Pew Research Center report. The FFRF has been active since the 1970s but has met with stiff resistance through most of its existence. Since 2007, however, it has been quite successful in displaying its billboards all over the country. People are becoming more open to debating and discussing religion, and censorship is easing almost everywhere. That, however, is not the case in Mississippi. A member of the FFRF said that facing such strong emotions and censorship in this day and age is disturbing. Is it necessary?

The FFRF feels that the image and what it represents are appropriate for Mississippi: the most religious state is also the poorest state in the country, with the weakest numbers in terms of quality of life. A 2015 Wall Street survey put Mississippi in 50th place among the best states to live in; it is, all things considered, the worst place to live in all of the USA. The FFRF feels that the extent to which Mississippians take their religion could be one reason for this.
# SDFConv/code/utils/vis/unet_vis.py
"""Visualization helpers shared across different projects.

----ZY.2020.Oct.
"""
import os
from easydict import EasyDict
from torchvision.utils import save_image
from logging import Logger
from subprocess import call


def create_save_folders(root_folder, folder_list: list):
    """Create folders to save visualization images.

    :param root_folder: The root folder.
    :param folder_list: The list of folders.
    """
    for folder in folder_list:
        os.makedirs(os.path.join(root_folder, folder), exist_ok=True)


def unet_vis(in_batch: dict, out_batch: tuple, training: bool, epoch: int,
             step: int, options: EasyDict, logger: Logger):
    """The visualization function of UNet.

    :param in_batch: The input batch.
    :param out_batch: The output batch.
    :param training: Whether it is the training stage.
    :param epoch: The epoch number, starting at 1.
    :param step: The step.
    :param options: The options for visualization.
    :param logger: The logger.
    """
    # Folders
    if training:
        vis_dir = os.path.join(options.vis.dir, "train_vis")
    else:
        vis_dir = os.path.join(options.vis.dir, "val_vis")
    out_dir = os.path.join(vis_dir, "epoch-{:04d}".format(epoch))

    # Customize the list of folders.
    dir_list = ["input_image", "info"]
    # Create the listed folders.
    create_save_folders(out_dir, dir_list)
    # The list of keys in the input and output batches.
    key_list = ["input_image", ["loss"]]

    batch = {}
    batch.update(in_batch)
    batch.update(out_batch[0])
    batch.update(out_batch[1])

    # Get the batch size.
    if training:
        batch_size = options.train.batch_size
    else:
        batch_size = options.test.batch_size

    # Get the number of steps in each epoch.
    if training:
        # Derived from the number of training samples in options.
        num_step_each_epoch = options.dataset.len_train // (options.train.batch_size * options.num_gpus)
    else:
        # Derived from the number of validation samples in options.
        num_step_each_epoch = options.dataset.len_test // (options.test.batch_size * options.num_gpus)

    # Save images and info.
    for i in range(batch_size):
        batch_id = step % num_step_each_epoch
        fn = "data-{:04d}.png".format(batch_id * batch_size + i)  # file name.
        for key, folder in zip(key_list, dir_list):
            if folder == "info":
                with open(os.path.join(out_dir, folder, fn.replace('.png', '.txt')), 'w') as file:
                    for loss_item in key:
                        file.write("{}: {}\n".format(loss_item, batch[loss_item][i].item()))
            else:
                save_image(batch[key][i], os.path.join(out_dir, folder, fn))

    # Get the KC step interval.
    if training:
        kc_steps = options.train.kc_steps
    else:
        kc_steps = options.test.kc_steps

    # Generate the HTML file.
    mod_step = step % num_step_each_epoch  # step starts at 1.
    extra_step = (mod_step + kc_steps) / num_step_each_epoch
    if mod_step == 0 or extra_step > 1.0:
        # Visualize HTML.
        logger.info("Generating html visualization ...")
        sublist = ",".join(dir_list)
        script_path = os.path.join(os.path.abspath(os.getcwd()), "utils", "gen_html_hierarchy_local.py")
        if not os.path.exists(script_path):
            raise ValueError("{} this python script does not exist!".format(script_path))
        cmd = "cd {} && python {} . 10 htmls {} {} > /dev/null".format(
            out_dir, script_path, sublist, sublist
        )
        call(cmd, shell=True)
        logger.info("DONE")
#ifndef TRANSFORM_H #define TRANSFORM_H // lib includes #include <glm/glm.hpp> #include <glm/gtc/quaternion.hpp> namespace mirage { class Transform { public: Transform( glm::vec3 position = glm::vec3(), glm::quat orientation = glm::quat(), glm::vec3 sscale = glm::vec3(1.0f, 1.0f, 1.0f) ); Transform( Transform * const parent, glm::vec3 position = glm::vec3(), glm::quat orientation = glm::quat(), glm::vec3 sscale = glm::vec3(1.0f, 1.0f, 1.0f) ); void translate(const glm::vec3 & direction, float t); void rotate(const glm::vec3 & axis, float angle); void rotate(const glm::quat & rotation); void scale(const glm::vec3 & scale); glm::mat4 getParentMatrix() const; glm::mat4 getModelMatrix() const; void setParent(Transform * const parent); Transform * const getParent() const; void setPosition(const glm::vec3 & position); const glm::vec3 & getPosition() const; void setOrientation(const glm::quat & orientation); const glm::quat & getOrientation() const; glm::vec3 getForward() const; glm::vec3 getBackward() const; glm::vec3 getLeft() const; glm::vec3 getRight() const; glm::vec3 getUp() const; glm::vec3 getDown() const; void setScale(const glm::vec3 & scale); const glm::vec3 & getScale() const; private: Transform * m_parent; glm::vec3 m_position; glm::quat m_orientation; glm::vec3 m_scale; }; } #endif // TRANSFORM_H
/** * Model tests for UpdateInstancesByNameStateRequest */ public class UpdateInstancesByNameStateRequestTest { private final UpdateInstancesByNameStateRequest model = new UpdateInstancesByNameStateRequest(); /** * Model tests for UpdateInstancesByNameStateRequest */ @Test public void testUpdateInstancesByNameStateRequest() { // TODO: test UpdateInstancesByNameStateRequest } /** * Test the property 'action' */ @Test public void actionTest() { // TODO: test action } /** * Test the property 'timeout' */ @Test public void timeoutTest() { // TODO: test timeout } /** * Test the property 'force' */ @Test public void forceTest() { // TODO: test force } /** * Test the property 'stateful' */ @Test public void statefulTest() { // TODO: test stateful } }
// Source: zkrami/SuperLibrary
#include <string.h>
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct Map {
    char *key;
    int value;
} map;

/**
 * Counts the number of vowels and consonants in a given string.
 * The string is lowercased in place before counting. The returned
 * two-entry array is heap-allocated; the caller must free it.
 *
 * @param char *string
 * @return map *
 */
map *countVowelsAndConsonants(char *string)
{
    int vowelsCount = 0;
    int consonantsCount = 0;

    char vowels[] = {'a', 'e', 'i', 'o', 'u'};
    char consonants[] = {'b', 'c', 'd', 'f', 'g', 'h', 'j', 'k', 'l', 'm', 'n',
                         'p', 'q', 'r', 's', 't', 'v', 'w', 'x', 'y', 'z'};

    int vowelsLength = sizeof(vowels) / sizeof(vowels[0]);
    int consonantsLength = sizeof(consonants) / sizeof(consonants[0]);
    int stringLength = strlen(string);

    /* Lowercase the string in place so uppercase letters are counted too. */
    for (int i = 0; string[i]; i++) {
        string[i] = (char) tolower((unsigned char) string[i]);
    }

    for (int i = 0; i < stringLength; i++) {
        int check = 0;
        for (int j = 0; j < vowelsLength; j++) {
            if (vowels[j] == string[i]) {
                vowelsCount++;
                check = 1;
                break;
            }
        }
        if (check == 1) {
            continue;
        }
        for (int j = 0; j < consonantsLength; j++) {
            if (consonants[j] == string[i]) {
                consonantsCount++;
                break;
            }
        }
    }

    /* Allocate the two-entry result instead of writing through an
     * uninitialized pointer. */
    map *result = malloc(2 * sizeof(map));
    if (result == NULL) {
        return NULL;
    }
    result[0].key = "vowels_count";
    result[0].value = vowelsCount;
    result[1].key = "consonants_count";
    result[1].value = consonantsCount;
    return result;
}
Elucidation of the Transmission Patterns of an Insect-Borne Bacterium ABSTRACT Quantitative data on modes of transmission are a crucial element in understanding the ecology of microorganisms associated with animals. We investigated the transmission patterns of a γ-proteobacterium informally known as pea aphid Bemisia-like symbiont (PABS), also known as T-type, which is widely but not universally distributed in natural populations of the pea aphid, Acyrthosiphon pisum. The vertical transmission of PABS to asexual and sexual morphs and sexually produced eggs was demonstrated by a diagnostic PCR-based assay, and the maximum estimated failure rate was 2%. Aphids naturally lacking PABS acquired PABS bacteria administered via the diet, and the infection persisted by vertical transmission for at least three aphid generations. PABS was also detected in two of five aphid honeydew samples tested and in all five siphuncular fluid samples tested but in none of 15 samples of salivary secretions from PABS-positive aphids. However, PABS-negative aphids did not acquire PABS when they were cocultured with PABS-positive aphids; the maximal estimated level of horizontal transmission was 18%. A deterministic model indicated that the force of infection by a horizontal transmission rate of 3% is sufficient to maintain a previously described estimate of the prevalence of PABS-positive aphids (37%), if the vertical transmission rate is 98%. We concluded that PABS infections in A. pisum can be maintained by high vertical transmission rates and occasional horizontal transmission, possibly via the oral route, in the absence of selection either for or against aphids bearing this bacterium.
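The closing claim invites a quick numerical illustration. Below is a minimal Python sketch of a discrete-generation prevalence model of the kind described, assuming vertical inheritance at rate v and a horizontal force of infection proportional to current prevalence at rate h; the exact model form used in the study is not given in the abstract, so this formulation is an assumption for illustration only.

```python
# Minimal sketch of a discrete-generation symbiont-prevalence model.
# Assumptions (not taken from the paper): a fraction v of offspring of
# infected aphids inherit the symbiont, and uninfected aphids acquire it
# horizontally at a per-generation rate h scaled by current prevalence.
def prevalence(p0=0.1, v=0.98, h=0.03, generations=2000):
    p = p0
    for _ in range(generations):
        p = v * p + h * p * (1.0 - p)
    return p

# With v = 0.98 and h = 0.03 the prevalence settles near 1 - (1 - v) / h,
# i.e. about 1/3, the same order as the reported 37% field prevalence.
print(round(prevalence(), 3))
```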
// SubsetSumBrute returns a subset of ints that sums to k by performing a
// brute-force search. The first subset found is returned; otherwise the
// search is exhaustive. The runtime is exponential.
func SubsetSumBrute(ints []int, k int) []int {
	for i, v := range ints {
		if v == k {
			return []int{v}
		} else if k > v {
			if subset := SubsetSumBrute(ints[i+1:], k-v); len(subset) != 0 {
				return append([]int{v}, subset...)
			}
		}
	}
	return nil
}
// GD_HW/hw6/PhysicalEngine.h
#pragma once
#include <vector>
#include "CollisionDetector.h"
#include "CollisionResolver.h"

class PhysicalEngine {
public:
    PhysicalEngine(bool resolveCollision)
        : RigidBodies(nullptr), resolveCollision(resolveCollision), isChangedRigidBodies(false) {}

    void AddRigidBody(Body2D *body);
    void SetRigidBodies(std::vector<Body2D *> *);
    std::vector<CollisionInfo> CurrentCollisionInfo();
    void SetResolverAcivity(bool resolveCollision);
    inline void SetChangedRigidBodies(bool changed) { isChangedRigidBodies = changed; }
    void Update(float dt);

private:
    CollisionDetector detector;
    CollisionResolver resolver;
    std::vector<Body2D *> *RigidBodies;
    std::vector<CollisionInfo> currentCollisionInfo;
    bool resolveCollision;
    bool isChangedRigidBodies;
};
/* * Copyright 2010 - 2013 <NAME>, <NAME> * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or * implied. * * See the License for the specific language governing permissions and * limitations under the License. */ package org.ops4j.pax.exam.junit; import java.util.List; import org.junit.runner.Description; import org.junit.runner.Runner; import org.junit.runner.manipulation.Filter; import org.junit.runner.manipulation.Filterable; import org.junit.runner.manipulation.NoTestsRemainException; import org.junit.runner.manipulation.Sortable; import org.junit.runner.manipulation.Sorter; import org.junit.runner.notification.RunNotifier; import org.junit.runners.Parameterized; import org.junit.runners.ParentRunner; import org.junit.runners.model.InitializationError; import org.ops4j.pax.exam.junit.impl.ExtensibleSuite; import org.ops4j.pax.exam.junit.impl.ParameterizedDriverExtension; /** * JUnit runner for parameterized Pax Exam tests. See {@link Parameterized} for more details on * specifying parameter sets. * <p> * See {@link PaxExam} for more information on other annotations supported on Pax Exam test classes * or methods. * * @author <NAME> * */ public class PaxExamParameterized extends ParentRunner<Runner> implements Filterable, Sortable { private ExtensibleSuite delegate; private ParameterizedDriverExtension extension; public PaxExamParameterized(Class<?> klass) throws InitializationError { super(klass); extension = new ParameterizedDriverExtension(klass); delegate = new ExtensibleSuite(klass, extension); extension.setBase(delegate); } @Override public Description getDescription() { return delegate.getDescription(); } @Override public void run(RunNotifier notifier) { delegate.run(notifier); } @Override public void filter(Filter filter) throws NoTestsRemainException { delegate.filter(filter); } @Override public void sort(Sorter sorter) { delegate.sort(sorter); } @Override protected List<Runner> getChildren() { return extension.getChildren(); } @Override protected Description describeChild(Runner child) { return extension.describeChild(child); } @Override protected void runChild(Runner child, RunNotifier notifier) { extension.runChild(child, notifier); } }
class GsmHttpResponse(object):
    """Parses the HTTP response embedded in a raw GSM modem reply."""

    def __init__(self, raw_response):
        self.raw_response = raw_response
        self.return_code = -1
        self.response_body = ''
        self.parse(self.raw_response)

    def parse(self, raw_response):
        # Everything after the modem's "SEND OK" marker is the HTTP response.
        split_string = raw_response.split('\r\n\r\nSEND OK\r\n')
        if len(split_string) < 2:
            raise ValueError('malformed response: missing "SEND OK" marker')
        response = split_string[1]
        end_first_line_idx = response.find('\n')
        if end_first_line_idx == -1:
            raise ValueError('malformed response: no status line')
        # The status line looks like "HTTP/1.1 200 OK"; its second token
        # is the numeric return code.
        first_line = response[0:end_first_line_idx]
        self.return_code = int(first_line.split(' ')[1])
        # Headers and body are separated by a blank line.
        response_parts = response.split('\r\n\r\n')
        if len(response_parts) > 1:
            self.response_body = response_parts[1]
        else:
            self.response_body = ''

    def get_return_code(self):
        return self.return_code

    def get_response(self):
        return self.response_body
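A short usage sketch follows; the modem transcript is hypothetical and only illustrates the "SEND OK" framing and the blank-line split that the parser relies on.

```python
# Hypothetical raw modem capture: everything before "SEND OK" is the AT
# command echo; the HTTP response follows it.
raw = (
    "AT+CIPSEND=64\r\n\r\nSEND OK\r\n"
    "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nhello"
)

resp = GsmHttpResponse(raw)
print(resp.get_return_code())  # 200
print(resp.get_response())     # hello
```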
Antifungal activity of iridoid glycosides from the heartwood of Gmelina arborea
Gmelina arborea Linn. (Verbenaceae) is a deciduous broad-leaved tree widely distributed from Southeast Asia to South Asia (Greaves 1981), and all parts of this plant are used in medicine to cure various ailments (Rao et al. 1967; Tiwari 1995; Rechnagel and Glende 1973; Chanh et al. 1987). Trees of the Verbenaceae family contain iridoid glycosides (Jacke and Rimpler 1983; Rathore et al. 1989; Otsuka et al. 1991, 1992; Stuppner et al. 1993; Caliş et al. 1994; Dellar et al. 1996; Hosny and Rosazza 1998; Hannedouche et al. 1999; Helfrich and Rimpler 1999; Ghisalberti 2000; Chowdhury et al. 2003). A total of 15 and 8 derivatives of 6-O-α-L-rhamnopyranosylcatalpol have been reported from the leaves of G. arborea (Hosny and Rosazza 1998) and the aerial parts of Holmskioldia sanguinea (Helfrich and Rimpler 1999), respectively. The aerial parts of Gmelina philippensis, a species of the same genus as G. arborea, contain catalpol (Helfrich and Rimpler 2000). Iridoid glycosides possess a range of biological activities, e.g., anti-inflammatory (Schapoval et al. 1998), antitumoral (Kapadia et al. 1996; Konoshima et al. 2000) and antioxidative (Chander et al. 1992). Cinnamate esters of catalpol from Westringia fruticosa and W. viminalis exhibited antifungal activity against the plant-pathogenic fungus Cladosporium cucumerinum (Dellar et al. 1996). In the first part of this series, it was shown that four lignans contained in the heartwood of G. arborea showed antifungal activity against Trametes versicolor (Kawamura et al. 2004). In the present paper, we report the isolation and identification of four iridoid glycosides from the same source and the determination of their antifungal activity.
def insert_marker_lines(self, lines):
    # The first entry is the progenitor marker; subsequent markers chain
    # back to it via parent hashes.
    progenitor = lines[0]
    line = qc.QLineF(progenitor[1], progenitor[2], progenitor[3], progenitor[4])
    position = qc.QPointF(progenitor[5], progenitor[6])
    tmp_line = self.scene().addLine(line, self._pens.get_display_pen())
    tmp_line.setPos(position)
    tmp_line.setData(ItemDataTypes.PARENT_HASH, "p")
    tmp_line.setData(ItemDataTypes.REGION_INDEX, self._region_index)
    tmp_line.setData(ItemDataTypes.FRAME_NUMBER, progenitor[7])

    for data in lines[1:]:
        line = qc.QLineF(data[1], data[2], data[3], data[4])
        position = qc.QPointF(data[5], data[6])
        # Hash of the previously inserted line; it becomes this line's parent.
        p_hash = hash_graphics_line(tmp_line)
        tmp_line = self.scene().addLine(line, self._pens.get_display_pen())
        tmp_line.setPos(position)
        tmp_line.setData(ItemDataTypes.PARENT_HASH, p_hash)
        tmp_line.setData(ItemDataTypes.REGION_INDEX, self._region_index)
        tmp_line.setData(ItemDataTypes.FRAME_NUMBER, data[7])
#include "oiserver.h" #include <QDebug> OIServer::OIServer(QObject *parent) : QObject(parent), m_quality(MbData::Quality::Good), m_works(false), m_clientAddr(DEFAULT_CLIENT_ADDRESS), m_requestDelay(DEFAULT_REQUESTS_INTERVAL) { qDebug() << "IOServer constructor"; //--- IODriver ---- m_ptrLib = std::make_unique<IODriverLib>(); m_ptrDrv = m_ptrLib->createDriver("Modbus"); if (IDriver::Config::Ok != m_ptrDrv->setComConfigure(m_clientAddr.toStdString())) cout << "Bad Com config" << endl; if (IDriver::Config::Ok != m_ptrDrv->setDataConfigure("{0,10}")) cout << "Bad data config" << endl; cout << "Driver: " << m_ptrDrv->name() << endl; if (m_ptrDrv->setRequestDelay(m_requestDelay)) m_ptrDrv->start(); // internal modbus register's map m_ptrRegisterMap = QSharedPointer<MbData::MbRegisterMap>::create(MbData::MbRegisterMap(DEFAULT_MODBUS_MAP_SIZE)); // m_ptrRegisterMap->resize(MODBUS_MAP_SIZE); qDebug() << "OIServer: MbRegisterMap has created (size=" << m_ptrRegisterMap->size() << ")"; // cyclic interval of updating m_timer.setInterval(DEFAULT_REQUESTS_INTERVAL); connect(&m_timer, &QTimer::timeout, this, &OIServer::updateTime); m_timer.start(); } void OIServer::setClientAddress(const QString &addr) { qDebug() << "setClientAddress " << addr; m_clientAddr = addr; Q_ASSERT(nullptr != m_ptrDrv); if (IDriver::Config::Ok != m_ptrDrv->setComConfigure(m_clientAddr.toStdString())) cout << "Bad Com config" << endl; std::cout << m_ptrDrv->comConfigure(); } /*! * \brief Registers widget * \param[in] indicator */ void OIServer::addMbWidget(MbWidget *widget) { widget->setMbRegisterMap(m_ptrRegisterMap); connect(widget, &MbWidget::cmdActivated, this, &OIServer::mbCmdReceived); m_widget.push_back(widget); } void OIServer::addMbWidgets(QList<MbWidget *> &widgets) { for (auto widget: widgets) { // qDebug() << "\t" << widget->objectName(); this->addMbWidget(widget); // m_oiserver.addMbWidget(widget); } } void OIServer::addMbButton(MbButton *button) { button->setMbRegisterMap(m_ptrRegisterMap); connect(button, &MbButton::cmdValueChanged, this, &OIServer::cmdReceived); m_buttons.push_back(button); } void OIServer::addMbButtons(QList<MbButton *> &buttons) { for (auto button: buttons) { // qDebug() << "\t" << button->objectName(); this->addMbButton(button); } } /*! * \brief Unregisters widget * \param[in] indicator */ void OIServer::delMbWidget(MbWidget *widget) { m_widget.remove(widget); } QVector<OIServer::Range> OIServer::ranges() { QVector<OIServer::Range> ranges; // for (auto &wgt: m_widget) { // ranges.push_back(OIServer::Range(wgt->regAddr(), 10)); // } // dummy ranges.push_back({10, 21}); ranges.push_back({3121, 3200}); return ranges; } /*! * \brief Notifies all registered widgets * * \todo make method more properly */ void OIServer::notifyAll() { if (m_works) { for (auto indicator: m_widget) { indicator->onDataUpdated(); } } } /*! 
* \brief Changes register's values * * \brief Increase each register in modbus map (first MAX_NUMBER registers) */ void OIServer::changeMbMap() { const quint16 MAX_VALUE_NUMBER = 255; const quint16 NUMBER_REGISTERS = 125; Q_ASSERT(DEFAULT_MODBUS_MAP_SIZE > NUMBER_REGISTERS); // MbRegisterMap filling std::vector<IDriver::Data> datas = m_ptrDrv->readDatas(); MbData::MbRegisterMap::Iterator it = m_ptrRegisterMap->begin(); for (; it!=m_ptrRegisterMap->end(); it++) { std::vector<IDriver::Data> datas = m_ptrDrv->readDatas(); Q_ASSERT(datas.size()>0); auto data = datas.at(0); it->direct = MbData::Register::Direction::r; it->qulity = MbData::Quality::Good; } for (auto &data: datas) { Q_ASSERT(data.addr+data.regs.size() <= m_ptrRegisterMap->size()); for (unsigned int i=0; i<data.regs.size(); i++) { auto addr = data.addr+i; (*m_ptrRegisterMap)[addr].value = data.regs.at(i); // (*m_ptrRegisterMap)[addr].qulity = data.quality; } } // while(true) { // std::this_thread::sleep_for(std::chrono::seconds(1)); // std::vector<IDriver::Data> datas = drv->readDatas(); // std::cout << "Number of ranges: " << datas.size() << endl; // for (auto &data: datas) { // cout << "[addr=" << data.addr << ", number=" << data.regs.size() << "] : "; // for (uint16_t addr=data.addr; addr<data.addr+data.regs.size(); addr++) { // std::cout << data.regs.at(addr-data.addr) << " "; // } // cout << "(" << data.quality_to_string(data.quality) << ")" << endl; // } // cout << endl; // } } /*! * \brief Changes quality * \param[in] quality - (true - good quality) */ void OIServer::updateQuality(bool quality) { if (quality) m_quality = MbData::Quality::Good; else m_quality = MbData::Quality::NoConnect; notifyAll(); } void OIServer::cmdReceived(int cmd) { MbButton* senderPointer = qobject_cast<MbButton*>(sender()); if(senderPointer == nullptr) qDebug() << "1: Sender isn't MbButton"; else { qDebug() << "1: Sender: " << senderPointer->cmdAddress() << "; " << cmd; m_ptrDrv->write(senderPointer->cmdAddress(), cmd); } } void OIServer::mbCmdReceived() { MbWidget* senderPointer = qobject_cast<MbWidget*>(sender()); if(senderPointer == nullptr) qDebug() << "2: Sender isn't MbWidget"; else qDebug() << "2: Sender: " << senderPointer->cmdAddr() << "; " << senderPointer->cmdCode(); } /*! * \brief Timer's timeout */ void OIServer::updateTime() { changeMbMap(); notifyAll(); }
/** * Handles a path in a tree from the root node to the position inside this tree. * The position of the root node is dropped in the list, because it would always be zero. * The path of the root node as length 0. * <br> * Example: * <pre> * + Root Path: [] * | * +-+ Node Path: [0] * | | * | +-+ Sub-Node Path: [0, 0] * | | * | +-+ Sub-Node Path: [0, 1] * | * +-+ Node Path: [1] * | * +-+ Sub-Node Path: [1, 0] * | * +-+ Sub-Node Path: [1, 1] * | * +-+ Sub-Node Path: [1, 2] * </pre> * * @since 1.5.0 */ public class TreePath implements Serializable { private final int[] path; public TreePath(final int... path) { this.path = path; } public TreePath(final List<Integer> pathList) { path = new int[pathList.size()]; for (int i = 0; i < path.length; i++) { path[i] = pathList.get(i); } } public TreePath(final TreeNode node) { if (node == null) { throw new IllegalArgumentException(); } TreeNode p = node; final List<TreeNode> list = new ArrayList<>(); int n = 0; while (p != null) { list.add(p); p = p.getParent(); n++; } path = new int[n - 1]; for (int i = n - 2; i >= 0; i--) { final TreeNode parent = list.get(i + 1); final TreeNode child = list.get(i); for (int j = 0; j < parent.getChildCount(); j++) { if (parent.getChildAt(j) == child) { // == is okay in this case path[n - 2 - i] = j; break; } } } } public int[] getPath() { return path; } public TreePath getParent() { return new TreePath(Arrays.copyOf(path, path.length - 1)); } public boolean isRoot() { return path.length == 0; } public int getLength() { return path.length; } @Override public boolean equals(final Object o) { if (this == o) { return true; } if (o == null || getClass() != o.getClass()) { return false; } final TreePath nodeIndex = (TreePath) o; return Arrays.equals(path, nodeIndex.path); } @Override public int hashCode() { return path != null ? Arrays.hashCode(path) : 0; } @Override public String toString() { return Arrays.toString(path); } }
// Source: AISSProjects22/extjwnl, extjwnl/src/test/java/net/sf/extjwnl/dictionary/TestEditMapBackedDictionary.java
package net.sf.extjwnl.dictionary;

import java.io.InputStream;

/**
 * Tests MapBackedDictionary editing.
 *
 * @author <a href="http://autayeu.com/"><NAME></a>
 */
public class TestEditMapBackedDictionary extends DictionaryEditTester {

    @Override
    protected InputStream getProperties() {
        return TestEditMapBackedDictionary.class.getResourceAsStream("/test_clean_map.xml");
    }
}
#include <stdio.h>

int main()
{
    int i, n;
    long long FBsum = 0, sum = 0;
    scanf("%d", &n);
    for (i = 1; i <= n; i++) {
        sum += i;
        /* Accumulate the multiples of 3 or 5 separately. */
        if (i % 3 == 0 || i % 5 == 0) {
            FBsum += i;
        }
    }
    /* Print the sum of 1..n excluding multiples of 3 or 5. */
    printf("%lld\n", sum - FBsum);
    return 0;
}
<gh_stars>100-1000 import * as React from "react"; import { SetStateAction, useState } from "react"; import { CalendarSource } from "../types"; type SourceWith<T extends Partial<CalendarSource>, K> = T extends K ? T : never; interface BasicProps<T extends Partial<CalendarSource>> { source: T; } function DirectorySetting<T extends Partial<CalendarSource>>({ source, }: BasicProps<T>) { let sourceWithDirectory = source as SourceWith<T, { directory: undefined }>; return ( <div className="setting-item-control"> <input disabled type="text" value={sourceWithDirectory.directory} style={{ width: "100%", marginLeft: 4, marginRight: 4, }} /> </div> ); } function UrlSetting<T extends Partial<CalendarSource>>({ source, }: BasicProps<T>) { let sourceWithUrl = source as SourceWith<T, { url: undefined }>; return ( <div className="setting-item-control"> <input disabled type="text" value={sourceWithUrl.url} style={{ width: "100%", marginLeft: 4, marginRight: 4, }} /> </div> ); } function NameSetting<T extends Partial<CalendarSource>>({ source, }: BasicProps<T>) { let sourceWithName = source as SourceWith<T, { name: undefined }>; return ( <div className="setting-item-control"> <input disabled type="text" value={sourceWithName.name} style={{ width: "100%", marginLeft: 4, marginRight: 4, }} /> </div> ); } function Username<T extends Partial<CalendarSource>>({ source, }: BasicProps<T>) { let sourceWithUsername = source as SourceWith<T, { username: undefined }>; return ( <div className="setting-item-control"> <input disabled type="text" value={sourceWithUsername.username} style={{ width: "100%", marginLeft: 4, marginRight: 4, }} /> </div> ); } interface CalendarSettingsProps { setting: Partial<CalendarSource>; onColorChange: (s: string) => void; deleteCalendar: () => void; } export const CalendarSettingRow = ({ setting, onColorChange, deleteCalendar, }: CalendarSettingsProps) => { const isCalDAV = setting.type === "caldav" || setting.type === "icloud"; return ( <div className="setting-item"> <button type="button" onClick={deleteCalendar} style={{ maxWidth: "15%" }} > ✕ </button> {setting.type === "local" ? 
( <DirectorySetting source={setting} /> ) : ( <UrlSetting source={setting} /> )} {isCalDAV && <NameSetting source={setting} />} {isCalDAV && <Username source={setting} />} <input style={{ maxWidth: "25%", minWidth: "3rem" }} type="color" value={setting.color} onChange={(e) => onColorChange(e.target.value)} /> </div> ); }; interface CalendarSettingProps { sources: CalendarSource[]; submit: (payload: CalendarSource[]) => void; } type CalendarSettingState = { sources: CalendarSource[]; dirty: boolean; }; export class CalendarSettings extends React.Component< CalendarSettingProps, CalendarSettingState > { constructor(props: CalendarSettingProps) { super(props); this.state = { sources: props.sources, dirty: false }; } addSource(source: CalendarSource) { this.setState((state, props) => ({ sources: [...state.sources, source], dirty: true, })); } render() { return ( <div style={{ width: "100%" }}> {this.state.sources.map((s, idx) => ( <CalendarSettingRow key={idx} setting={s} onColorChange={(color) => this.setState((state, props) => ({ sources: [ ...state.sources.slice(0, idx), { ...state.sources[idx], color }, ...state.sources.slice(idx + 1), ], dirty: true, })) } deleteCalendar={() => this.setState((state, props) => ({ sources: [ ...state.sources.slice(0, idx), ...state.sources.slice(idx + 1), ], dirty: true, })) } /> ))} <div className="setting-item-control"> {this.state.dirty && ( <button onClick={() => { this.props.submit( this.state.sources.map( (elt) => elt as CalendarSource ) ); this.setState({ dirty: false }); }} style={{ backgroundColor: this.state.dirty ? "var(--interactive-accent)" : undefined, color: this.state.dirty ? "var(--text-on-accent)" : undefined, }} > {this.state.dirty ? "Save" : "Settings Saved"} </button> )} </div> </div> ); } }
import React, { useState, useEffect } from "react"; import { Link } from "react-router-dom"; import { StyledNavBar, StyledNavBarContainer, StyledLogo, NavTitleContainer, StyledCartSVGContainer, StyledCartSVG, } from "./style"; interface NavBarProps { fixed?: boolean; } const NavBar: React.FC<NavBarProps> = (props) => { const [scrollPosition, setScrollPosition] = useState(0); const [show, setShow] = useState(true); const onScroll = () => { setScrollPosition(document.body.getBoundingClientRect().top); setShow(document.body.getBoundingClientRect().top > scrollPosition); }; useEffect(() => { if (!props.fixed) { window.addEventListener("scroll", onScroll); } return () => { window.removeEventListener("scroll", onScroll); }; }); return ( <StyledNavBarContainer toShow={show}> <StyledNavBar className={show ? "active" : "hidden"}> <Link to="/"> <StyledLogo /> </Link> <NavTitleContainer to="/">Vestimentum</NavTitleContainer> <StyledCartSVGContainer to="/checkout"> <StyledCartSVG /> </StyledCartSVGContainer> </StyledNavBar> </StyledNavBarContainer> ); }; export default NavBar;
/** * @return {@code true} if the widget should be created by default. Otherwise, the user must enable it explicitly * via status bar context menu or settings. */ @Override public boolean isEnabledByDefault() { return true; }
<filename>mybatis-generator-extention-core/src/main/java/io/github/elricboa/util/MethodUtil.java package io.github.elricboa.util; import io.github.elricboa.constant.GeneratorConstant; import io.github.elricboa.enums.MethodEnum; import org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang.StringUtils; import org.apache.ibatis.session.SqlSession; import org.mybatis.generator.api.IntrospectedTable; import org.mybatis.generator.api.dom.java.*; import org.mybatis.generator.config.Context; import java.util.Properties; import java.util.Set; /** * @author shentongzhou on 2019-09-17 */ public class MethodUtil { /** * 从Properties 获取key对应的value * * @param introspectedTable * @param keyName * @return */ public static String getPropertyValueByName(IntrospectedTable introspectedTable, String keyName) { return introspectedTable.getTableConfigurationProperty(keyName); } /** * 填充父类 * * @param topLevelClass * @param introspectedTable * @return */ public static FullyQualifiedJavaType setSuperClass(TopLevelClass topLevelClass, IntrospectedTable introspectedTable) { FullyQualifiedJavaType superClass = getSuperClass(introspectedTable); if (superClass != null) { topLevelClass.setSuperClass(superClass); topLevelClass.addImportedType(superClass); } return superClass; } /** * 获取父类 * * @param introspectedTable * @return */ public static FullyQualifiedJavaType getSuperClass(IntrospectedTable introspectedTable) { FullyQualifiedJavaType superClass; if (null != introspectedTable && null != introspectedTable.getRules() && introspectedTable.getRules().generatePrimaryKeyClass()) { superClass = new FullyQualifiedJavaType(introspectedTable.getPrimaryKeyType()); } else { String rootClass = getRootClass(introspectedTable, introspectedTable.getContext()); if (rootClass != null) { superClass = new FullyQualifiedJavaType(rootClass); } else { superClass = null; } } return superClass; } /** * 获取根类 * * @param introspectedTable * @param context * @return */ public static String getRootClass(IntrospectedTable introspectedTable, Context context) { String rootClass = introspectedTable.getTableConfigurationProperty(GeneratorConstant.ROOT_CLASS_NAME); if (rootClass == null) { Properties properties = context.getJavaModelGeneratorConfiguration().getProperties(); rootClass = properties.getProperty(GeneratorConstant.ROOT_CLASS_NAME); } return rootClass; } /** * 接口类转换成 普通类 * * @param interfaceClazz * @return */ public static TopLevelClass convertInterfaceToTopLevelClass(Interface interfaceClazz) { TopLevelClass topLevelClass = new TopLevelClass(interfaceClazz.getType()); topLevelClass.addImportedTypes(interfaceClazz.getImportedTypes()); topLevelClass.setVisibility(JavaVisibility.PUBLIC); // 父类接口是否为空 if (CollectionUtils.isNotEmpty(interfaceClazz.getSuperInterfaceTypes())) { topLevelClass.setSuperClass(interfaceClazz.getSuperInterfaceTypes().iterator().next()); } // 生成GetSqlSession方法 Method method = new Method(); method.setName("getSqlSession"); method.getBodyLines().add(" return null; "); method.setVisibility(JavaVisibility.PUBLIC); method.setReturnType(new FullyQualifiedJavaType(SqlSession.class.getName())); topLevelClass.addImportedType(new FullyQualifiedJavaType(SqlSession.class.getName())); topLevelClass.getMethods().add(method); return topLevelClass; } /** * 检测mapper对应方法是否已经存在 * * @param introspectedTable * @return true 存在 false 不存在 */ public static boolean checkExistMethodElement(IntrospectedTable introspectedTable, MethodEnum sourceMethodEnum) { String mapperXmlName = 
MethodUtil.getPropertyValueByName(introspectedTable, GeneratorConstant .MAPPER_XML_NAME); if (StringUtils.isNotBlank(mapperXmlName)) { mapperXmlName = mapperXmlName.toLowerCase(); } String domainObjectName = introspectedTable.getFullyQualifiedTable().getDomainObjectName(); String defaultMapperXmlName = (domainObjectName + "Mapper").toLowerCase(); if (GeneratorConstant.existElementForMapperMap.containsKey(mapperXmlName)) { Set<MethodEnum> methodEnumSet = GeneratorConstant.existElementForMapperMap.get(mapperXmlName); if (CollectionUtils.isNotEmpty(methodEnumSet)) { if (methodEnumSet.contains(sourceMethodEnum)) { return true; } } } if (GeneratorConstant.existElementForMapperMap.containsKey(defaultMapperXmlName)) { Set<MethodEnum> methodEnumSet = GeneratorConstant.existElementForMapperMap.get(defaultMapperXmlName); if (CollectionUtils.isNotEmpty(methodEnumSet)) { if (methodEnumSet.contains(sourceMethodEnum)) { return true; } } } return false; } }
def Move_Stoped(self, pos): self.status_sig.emit(ThreadCommand("move_done", [pos]))
import { Component, h, Prop, Event, EventEmitter, State, Listen, Element } from '@stencil/core' import '@a11y/focus-trap' import { ButtonVariants } from '../../shared/types' /** * @slot - Triggering control goes here */ @Component({ tag: 'bk-pop-confirm', scoped: true, styleUrl: './index.scss', }) export class PopConfirm { private cancelButtonRef?: HTMLButtonElement @Element() el!: HTMLElement @State() show = false /** Message to show */ @Prop() message?: string /** Confirm button text */ @Prop() confirmButtonText = 'Confirm' /** Cancel button text */ @Prop() cancelButtonText = 'Cancel' /** Confirm button variant */ @Prop() confirmButtonVariant: ButtonVariants = 'primary' /** Cancel button variant */ @Prop() cancelButtonVariant: ButtonVariants = 'default' /** Enable or disable popover */ @Prop() disabled = false /** on confirm action */ @Event({ bubbles: false }) bkConfirmed!: EventEmitter /** on cancel action */ @Event({ bubbles: false }) bkCancelled!: EventEmitter onActionHandler = (e: Event, eventToEmit: EventEmitter) => { e.stopImmediatePropagation() this.show = false this.focusOnControl() eventToEmit.emit() } @Listen('click') onClickHandler() { if (!this.disabled && !this.show) { this.show = true } } focusOnControl = () => { const control = this.el.querySelector('[slot=control]') as HTMLElement control.focus() } onPopConfirmOpenHandler = () => { this.cancelButtonRef?.focus() } getButtonVariant = (variant: ButtonVariants) => (variant !== 'default' ? `bk-button--${variant}` : '') render() { return ( <bk-popover show={this.show} triggerOn="manual" disabled={this.disabled} placement="bottom-end" onBkOpened={this.onPopConfirmOpenHandler} aria-label="confirmation popup" > <div class="bk-pop-confirm" slot="content"> <focus-trap> <div class="bk-pop-confirm__message">{this.message}</div> <div class="bk-pop-confirm__footer"> <button class={`bk-button bk-button--mini ${this.getButtonVariant(this.cancelButtonVariant)}`} onClick={(e) => this.onActionHandler(e, this.bkCancelled)} ref={(el) => (this.cancelButtonRef = el)} > {this.cancelButtonText} </button> <button class={`bk-button bk-button--mini ${this.getButtonVariant(this.confirmButtonVariant)}`} onClick={(e) => this.onActionHandler(e, this.bkConfirmed)} > {this.confirmButtonText} </button> </div> </focus-trap> </div> <slot name="control" /> </bk-popover> ) } }
// Invokes the funcs in order, returning the first resulting non-nil handler. func (self Coalesce) Han(req *http.Request) http.Handler { for _, fun := range self { if fun != nil { val := fun(req) if val != nil { return val } } } return nil }
import { Box, Paper, Table, TableBody, TableCell, TableContainer, TableHead, TableRow, } from '@material-ui/core'; import React from 'react'; import { IDespesas } from '../../Interfaces/IDespesas'; import { round } from '../../services/math'; export default function Despesas({ despesas }: any): React.ReactElement { return ( <Box> <TableContainer component={Paper}> <Table> <TableHead> <TableRow> <TableCell align="left">Despesa</TableCell> <TableCell align="left">Categoria</TableCell> <TableCell align="left">Dia</TableCell> <TableCell align="right">Valor</TableCell> </TableRow> </TableHead> <TableBody> {despesas ? ( despesas.map((despesa: IDespesas) => ( <TableRow key={despesa.id}> <TableCell align="left">{despesa.descricao}</TableCell> <TableCell align="left">{despesa.categoria}</TableCell> <TableCell align="left">{despesa.dia}</TableCell> <TableCell align="right">{round(despesa.valor)}</TableCell> </TableRow> )) ) : ( <TableRow> <TableCell align="left">NO DATA</TableCell> <TableCell align="left">NO DATA</TableCell> <TableCell align="left">NO DATA</TableCell> <TableCell align="right">NO DATA</TableCell> </TableRow> )} </TableBody> </Table> </TableContainer> </Box> ); }
import sys

X = int(input())
if X < 100:
    print("0")
    sys.exit()

# With amari = X // 100 items, totals from 100*amari to 105*amari are reachable.
amari = X // 100
kake = amari * 5
if amari * 100 <= X <= amari * 100 + kake:
    print("1")
else:
    print("0")
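The closed-form test above (X is payable iff X ≤ 105·⌊X/100⌋) can be cross-checked by brute force; the sketch below assumes the task is deciding whether X is a sum of values each between 100 and 105.

```python
def payable(x):
    # Closed form used above: with k = x // 100 items, the reachable
    # totals span [100*k, 105*k].
    k = x // 100
    return x >= 100 and x <= 105 * k

def payable_bruteforce(x):
    # reachable[s] is True when s is a sum of values in 100..105.
    reachable = [False] * (x + 1)
    reachable[0] = True
    for s in range(100, x + 1):
        reachable[s] = any(reachable[s - p] for p in range(100, 106) if s >= p)
    return reachable[x]

# The two tests agree on a small range.
assert all(payable(x) == payable_bruteforce(x) for x in range(1, 2001))
```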
<reponame>jsdw/git-backup<filename>src/services/gitlab.rs use regex::Regex; use lazy_static::lazy_static; use crate::error::Error; use super::service::{ Service, Repository }; pub struct GitLab { /// Which user are we backing up repositories for? owner: String, /// An access token token: String } impl GitLab { pub fn new(url: String, token: String) -> Option<GitLab> { lazy_static! { static ref HTTP_URL_RE: Regex = Regex::new("^(?:http(?:s)?://)?(?:www\\.)?gitlab(?:\\.org)?/([^/]+)(?:/)?$").unwrap(); static ref SSH_URL_RE: Regex = Regex::new("^(?:git@)?gitlab(?:\\.org)?:([^/.]+)(?:/)?$").unwrap(); static ref BASIC_SSH_RE: Regex = Regex::new("^([^@]+)@gitlab(?:\\.org)?(?:/)?$").unwrap(); } // In all of the regexs, first capture is owner let caps = HTTP_URL_RE.captures(&url) .or_else(|| SSH_URL_RE.captures(&url)) .or_else(|| BASIC_SSH_RE.captures(&url))?; let owner = caps.get(1).unwrap().as_str().to_owned(); Some(GitLab { owner, token }) } #[cfg(test)] pub fn owner(&self) -> &str { &self.owner } } impl Service for GitLab { fn username(&self) -> String { self.owner.to_owned() } fn list_repositories(&self) -> Result<Vec<Repository>,Error> { let token = &self.token; let client = reqwest::Client::new(); let url = format!("https://gitlab.com/api/v4/users/{user}/projects?simple=true&owned=true", user=self.owner); let empty = vec![]; let mut res = client .get(&url) .header("Private-Token", token) .send() .map_err(|e| err!("There was a problem talking to GitLab: {}", e))?; // Return an error if the response was not successful: let status = res.status(); if !status.is_success() { return Err(match status.as_u16() { 401 => err!("Not authorized: is the app password that you provided for GitLab valid?"), _ => err!("Error talking to GitLab: {} (code {})", status.canonical_reason().unwrap_or("Unknown"), status.as_str()) }); } // We convert our response back to a loosely typed JSON Value: let data: serde_json::Value = res .json() .map_err(|_| err!("Invalid JSON response from GitLab"))?; let mut repos = vec![]; let repo_values = data.as_array().unwrap_or(&empty); for repo in repo_values { let url = repo["http_url_to_repo"] .as_str() .ok_or_else(|| err!("Invalid clone URL"))?; let name = repo["path"] .as_str() .ok_or_else(|| err!("Invalid repo name"))?; // Push to our repo list: repos.push(Repository { name: name.to_owned(), git_url: url.to_owned() }) } Ok(repos) } } #[cfg(test)] mod test { use super::*; #[test] fn test_valid_urls() { let urls = vec![ ("http://www.gitlab.org/jsdw", "jsdw"), ("http://www.gitlab.org/jsdw/", "jsdw"), ("http://gitlab.org/jsdw", "jsdw"), ("https://gitlab.org/jsdw", "jsdw"), ("https://gitlab/jsdw", "jsdw"), ("gitlab.org/jsdw", "jsdw"), ("gitlab.org/jsdw/", "jsdw"), ("gitlab/jsdw", "jsdw"), ("<EMAIL>:jsdw", "jsdw"), ("<EMAIL>:jsdw/", "jsdw"), ("gitlab.org:jsdw", "jsdw"), ("gitlab.org:jsdw/", "jsdw"), ("gitlab:jsdw", "jsdw"), ("<EMAIL>", "jsdw"), ("jsdw@gitlab", "jsdw"), ]; for (url, owner) in urls { if let Some(gh) = GitLab::new(url.to_owned(), "token".to_owned()) { assert_eq!(gh.owner(), owner, "url {} expected owner {} but got {}", url, owner, gh.owner()); } else { panic!("url {} was not parsed properly", url); } } } }
Penetrating cardiac injury by wire thrown from a lawn mower. The first successfully surgically treated case of penetrating heart injury, specifically of the right ventricle, caused by a fragment of coat-hanger wire thrown by a lawn mower is reported. Though traumatic heart injuries are rare, this case illustrates accurate surgical management and judgment, especially in the preoperative phase, which resulted in early operation and excellent postoperative results. It is our feeling that if the patient can be transferred safely to the operating room, the mortality rate is considerably lowered; however, emergency room thoracotomy, which will undoubtedly result in a greater survival rate from these spectacular injuries, should be performed in the emergency center if cardiac activity ceases or the patient's condition deteriorates considerably.
// web/src/main/java/net/chrisrichardson/monolithic/customersandorders/web/orderhistory/CustomerView.java
package net.chrisrichardson.monolithic.customersandorders.web.orderhistory;

import net.chrisrichardson.monolithic.customersandorders.domain.money.Money;

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CustomerView {

    private Long id;
    private Map<Long, OrderView> orders;
    private String name;
    private Money creditLimit;

    public CustomerView(String name, Money creditLimit, List<OrderView> orders) {
        this.name = name;
        this.creditLimit = creditLimit;
        // Index the orders by id for direct lookup.
        this.orders = orders.stream().collect(Collectors.toMap(OrderView::getId, x -> x));
    }

    public void setId(Long id) {
        this.id = id;
    }

    public Long getId() {
        return id;
    }

    public Map<Long, OrderView> getOrders() {
        return orders;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setCreditLimit(Money creditLimit) {
        this.creditLimit = creditLimit;
    }

    public Money getCreditLimit() {
        return creditLimit;
    }
}
<reponame>ruoranluomu/AliOS-Things /* * Copyright (C) 2015-2019 Alibaba Group Holding Limited */ #ifndef BE_MQTT_INSTANCE_H #define BE_MQTT_INSTANCE_H enum { MQTT_INSTANCE_EVENT_DISCONNECTED = 0, MQTT_INSTANCE_EVENT_CONNECTED = 1, }; /** * @brief Get the mqtt singleton instance. * * @param None * * @retval NULL : failed * @retval NOT_NULL : The singleton instance of mqtt client. * @see None. */ void *mqtt_get_instance(); /** * @brief Remove the mqtt singleton instance. * * @param None * * @retval None * @see None. */ void mqtt_remove_instance(); /** * @brief Set the mqtt singleton instance. * * @param None * * @retval -1: * @retval 0: * @see None. */ int mqtt_set_instance(void *mqtt_t); /** * @brief Initialize the mqtt singleton instance. * * @param [in] productKey * @param [in] deviceName * @param [in] deviceSecret * @param [in] maxMsgSize: mqtt read/send buffer size * * @retval 1: mqtt instance have been init * @retval 0: mqtt instance init success * IOT_MQTT_Construct success, MQTT connected. * @retval -1: mqtt instance init fail * @see None. */ int mqtt_init_instance(char *productKey, char *deviceName, char *deviceSecret, int maxMsgSize); /** * @brief Deinitialize the mqtt singleton instance. * * * @retval 0: success * @retval -1: fail * @see None. */ int mqtt_deinit_instance(); /** * @brief Set mqtt event callback. * * @param [in] event callback * @param [in] user data * * @retval 0: success * @retval -1: fail * @see None. */ int mqtt_set_event_cb(void (*event)(int event, void *ctx), void *ctx); /** * @brief Subscribe topic. * * @param [in] topic * @param [in] callback * @param [in] user data * * @retval 0: success * @retval -1: fail * @see None. */ int mqtt_subscribe(char *topic, void (*cb)(char *topic, int topic_len, void *payload, int payload_len, void *ctx), void *ctx); /** * @brief Unsubscribe topic. * * @param [in] topic * * @retval 0: success * @retval -1: fail * @see None. */ int mqtt_unsubscribe(char *topic); /** * @brief Publish packet. * * @param [in] topic * @param [in] qos * @param [in] payload data * @param [in] payload data length * * @retval 0: success * @retval -1: fail * @see None. */ int mqtt_publish(char *topic, int qos, void *data, int len); /** * @brief set mqtt domain * */ void mqtt_set_domain(char *domain, int port); #endif /* BE_MQTT_INSTANCE_H */
<gh_stars>0 """$ fio collect""" from functools import partial import json import logging import click import cligj from fiona.fio import helpers from fiona.fio import options from fiona.transform import transform_geom @click.command(short_help="Collect a sequence of features.") @cligj.precision_opt @cligj.indent_opt @cligj.compact_opt @click.option('--record-buffered/--no-record-buffered', default=False, help="Economical buffering of writes at record, not collection " "(default), level.") @click.option('--ignore-errors/--no-ignore-errors', default=False, help="log errors but do not stop serialization.") @options.src_crs_opt @click.option('--with-ld-context/--without-ld-context', default=False, help="add a JSON-LD context to JSON output.") @click.option('--add-ld-context-item', multiple=True, help="map a term to a URI and add it to the output's JSON LD " "context.") @click.option('--parse/--no-parse', default=True, help="load and dump the geojson feature (default is True)") @click.pass_context def collect(ctx, precision, indent, compact, record_buffered, ignore_errors, src_crs, with_ld_context, add_ld_context_item, parse): """Make a GeoJSON feature collection from a sequence of GeoJSON features and print it.""" verbosity = (ctx.obj and ctx.obj['verbosity']) or 2 logger = logging.getLogger('fio') stdin = click.get_text_stream('stdin') sink = click.get_text_stream('stdout') dump_kwds = {'sort_keys': True} if indent: dump_kwds['indent'] = indent if compact: dump_kwds['separators'] = (',', ':') item_sep = compact and ',' or ', ' if src_crs: if not parse: raise click.UsageError("Can't specify --src-crs with --no-parse") transformer = partial(transform_geom, src_crs, 'EPSG:4326', antimeridian_cutting=True, precision=precision) else: transformer = lambda x: x first_line = next(stdin) # If parsing geojson if parse: # If input is RS-delimited JSON sequence. if first_line.startswith(u'\x1e'): def feature_text_gen(): buffer = first_line.strip(u'\x1e') for line in stdin: if line.startswith(u'\x1e'): if buffer: feat = json.loads(buffer) feat['geometry'] = transformer(feat['geometry']) yield json.dumps(feat, **dump_kwds) buffer = line.strip(u'\x1e') else: buffer += line else: feat = json.loads(buffer) feat['geometry'] = transformer(feat['geometry']) yield json.dumps(feat, **dump_kwds) else: def feature_text_gen(): feat = json.loads(first_line) feat['geometry'] = transformer(feat['geometry']) yield json.dumps(feat, **dump_kwds) for line in stdin: feat = json.loads(line) feat['geometry'] = transformer(feat['geometry']) yield json.dumps(feat, **dump_kwds) # If *not* parsing geojson else: # If input is RS-delimited JSON sequence. if first_line.startswith(u'\x1e'): def feature_text_gen(): buffer = first_line.strip(u'\x1e') for line in stdin: if line.startswith(u'\x1e'): if buffer: yield buffer buffer = line.strip(u'\x1e') else: buffer += line else: yield buffer else: def feature_text_gen(): yield first_line for line in stdin: yield line try: source = feature_text_gen() if record_buffered: # Buffer GeoJSON data at the feature level for smaller # memory footprint. indented = bool(indent) rec_indent = "\n" + " " * (2 * (indent or 0)) collection = { 'type': 'FeatureCollection', 'features': []} if with_ld_context: collection['@context'] = helpers.make_ld_context( add_ld_context_item) head, tail = json.dumps(collection, **dump_kwds).split('[]') sink.write(head) sink.write("[") # Try the first record. 
try: i, first = 0, next(source) if with_ld_context: first = helpers.id_record(first) if indented: sink.write(rec_indent) sink.write(first.replace("\n", rec_indent)) except StopIteration: pass except Exception as exc: # Ignoring errors is *not* the default. if ignore_errors: logger.error( "failed to serialize file record %d (%s), " "continuing", i, exc) else: # Log error and close up the GeoJSON, leaving it # more or less valid no matter what happens above. logger.critical( "failed to serialize file record %d (%s), " "quiting", i, exc) sink.write("]") sink.write(tail) if indented: sink.write("\n") raise # Because trailing commas aren't valid in JSON arrays # we'll write the item separator before each of the # remaining features. for i, rec in enumerate(source, 1): try: if with_ld_context: rec = helpers.id_record(rec) if indented: sink.write(rec_indent) sink.write(item_sep) sink.write(rec.replace("\n", rec_indent)) except Exception as exc: if ignore_errors: logger.error( "failed to serialize file record %d (%s), " "continuing", i, exc) else: logger.critical( "failed to serialize file record %d (%s), " "quiting", i, exc) sink.write("]") sink.write(tail) if indented: sink.write("\n") raise # Close up the GeoJSON after writing all features. sink.write("]") sink.write(tail) if indented: sink.write("\n") else: # Buffer GeoJSON data at the collection level. The default. collection = { 'type': 'FeatureCollection', 'features': []} if with_ld_context: collection['@context'] = helpers.make_ld_context( add_ld_context_item) head, tail = json.dumps(collection, **dump_kwds).split('[]') sink.write(head) sink.write("[") sink.write(",".join(source)) sink.write("]") sink.write(tail) sink.write("\n") except Exception: logger.exception("Exception caught during processing") raise click.Abort()
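To exercise the RS-delimited branch handled above, one can generate an `\x1e`-prefixed GeoJSON sequence and pipe it to the command on stdin; the two point features below are illustrative.

```python
# Writes an RS (\x1e)-delimited GeoJSON feature sequence of the kind
# `fio collect` accepts on stdin; the feature content is made up.
import json

features = [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [0.0, 0.0]},
     "properties": {}},
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [102.0, 0.5]},
     "properties": {}},
]

with open("features.txt", "w") as f:
    for feat in features:
        f.write("\x1e" + json.dumps(feat) + "\n")

# Then, on the shell: fio collect < features.txt
```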
/* * Mapping of DWARF debug register numbers into register names. * * Copyright (C) 2010 Matt Fleming <[email protected]> * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #include <stddef.h> #include <dwarf-regs.h> /* * Generic dwarf analysis helpers */ #define SH_MAX_REGS 18 const char *sh_regs_table[SH_MAX_REGS] = { "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15", "pc", "pr", }; /* Return architecture dependent register string (for kprobe-tracer) */ const char *get_arch_regstr(unsigned int n) { return (n <= SH_MAX_REGS) ? sh_regs_table[n] : NULL; }
Physical exercise and endothelial dysfunction. The role of the endothelium was long considered mainly that of a selective barrier for the diffusion of macromolecules from the lumen of blood vessels to the interstitial space. During the last 20 years, many other functions have been attributed to the endothelium, such as the regulation of vascular tone, the promotion and inhibition of neovascular growth, and the modulation of inflammation, platelet aggregation and coagulation. This is considered one of the most important concepts in modern vascular biology. Currently, atherosclerosis is the prototype of a disease characterized in all its phases by endothelial dysfunction, defined as an insufficient supply of nitric oxide (NO), which predisposes the endothelium to oxidative stress, inflammation, erosion and vasoconstriction. In this sense, several experimental studies have demonstrated that physical exercise is capable of restoring and improving endothelial function, and the impact of exercise on the endothelium has been broadly discussed. Considering its vasodilating effect and the risk factors involved, treating coronary artery disease and its outcomes without including physical exercise has become inconceivable. However, the literature is still controversial regarding the intensity of physical effort necessary to cause significant protective alterations in endothelial function. Moreover, the association between intense physical exercise and increased oxygen consumption, with a consequent increase in free radical formation, is also discussed.
/** * Creates the meta-model objects for the package. This method is * guarded to have no affect on any invocation but its first. * <!-- begin-user-doc --> * <!-- end-user-doc --> * @generated */ public void createPackageContents() { if (isCreated) return; isCreated = true; stateMachineEClass = createEClass(STATE_MACHINE); createEReference(stateMachineEClass, STATE_MACHINE__STATES); stateEClass = createEClass(STATE); createEAttribute(stateEClass, STATE__NAME); simpleStateEClass = createEClass(SIMPLE_STATE); createEReference(simpleStateEClass, SIMPLE_STATE__OUT); createEAttribute(simpleStateEClass, SIMPLE_STATE__LOWER_BOUND); createEAttribute(simpleStateEClass, SIMPLE_STATE__UPPER_BOUND); createEAttribute(simpleStateEClass, SIMPLE_STATE__DEPTH); createEReference(simpleStateEClass, SIMPLE_STATE__TYPE); createEAttribute(simpleStateEClass, SIMPLE_STATE__KIND); createEReference(simpleStateEClass, SIMPLE_STATE__ECORE); startStateEClass = createEClass(START_STATE); createEReference(startStateEClass, START_STATE__OUT); stopStateEClass = createEClass(STOP_STATE); dataTypeEClass = createEClass(DATA_TYPE); complexTypeEClass = createEClass(COMPLEX_TYPE); tByteEClass = createEClass(TBYTE); createEAttribute(tByteEClass, TBYTE__MIN); createEAttribute(tByteEClass, TBYTE__MAX); createEAttribute(tByteEClass, TBYTE__STEPPING); tCharEClass = createEClass(TCHAR); tShortEClass = createEClass(TSHORT); createEAttribute(tShortEClass, TSHORT__MIN); createEAttribute(tShortEClass, TSHORT__MAX); createEAttribute(tShortEClass, TSHORT__STEPPING); tIntEClass = createEClass(TINT); createEAttribute(tIntEClass, TINT__MIN); createEAttribute(tIntEClass, TINT__MAX); createEAttribute(tIntEClass, TINT__STEPPING); tLongEClass = createEClass(TLONG); createEAttribute(tLongEClass, TLONG__MIN); createEAttribute(tLongEClass, TLONG__MAX); createEAttribute(tLongEClass, TLONG__STEPPING); tFloatEClass = createEClass(TFLOAT); createEAttribute(tFloatEClass, TFLOAT__MIN); createEAttribute(tFloatEClass, TFLOAT__MAX); createEAttribute(tFloatEClass, TFLOAT__FRACTION_DIGITS); createEAttribute(tFloatEClass, TFLOAT__STEPPING); tDoubleEClass = createEClass(TDOUBLE); createEAttribute(tDoubleEClass, TDOUBLE__MIN); createEAttribute(tDoubleEClass, TDOUBLE__MAX); createEAttribute(tDoubleEClass, TDOUBLE__FRACTION_DIGITS); createEAttribute(tDoubleEClass, TDOUBLE__STEPPING); tStringEClass = createEClass(TSTRING); createEAttribute(tStringEClass, TSTRING__LENGTH); tTimestampEClass = createEClass(TTIMESTAMP); intTypeEClass = createEClass(INT_TYPE); }
// isMarkdown returns true if the file name's extension indicates Markdown.
func isMarkdown(fn string) bool {
	switch filepath.Ext(fn) {
	case ".md", ".markdown":
		return true
	}
	return false
}
# -*- coding: utf-8 -*-
"""
Created on Tue Aug 14 21:19:20 2018

@author: <NAME>
"""
"""I am using the scikit-learn library to find DECISION RULES from
decision trees, so we can make decisions."""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn import tree

plt.style.use('bmh')

# This is a simulated dataset
df = pd.read_csv("C:/Users/mauri/Desktop/Big Data/kaggle/HR_kaggle.csv")
df.head()

# Size of the dataset
df.shape
df.dtypes

# There are no null values
df.isnull().sum()

# Create dummy variables. Preprocessing is very important when modeling;
# for now I will just use the dataset as given.
df = pd.get_dummies(df, columns=['salary', 'sales'], drop_first=True)

x = df.drop(labels='left', axis=1)
y = df['left']
feature_name = x.columns

# Train/test split to check the score of our model
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.35)

"""When modeling, picking a simple algorithm is often better than a more
complex one. Just follow Occam's razor (lex parsimoniae): the simplest
solution tends to be the right one, so we should prefer hypotheses with
simple assumptions. This is of course not an irrefutable principle, but it
is why we keep complexity low here: a small tree (few nodes) may work
well, and it is also simpler to extract rules from."""

# Fit the decision tree algorithm
clf = DecisionTreeClassifier(max_depth=5)
clf.fit(x_train, y_train)

# Print the score of the model
print(clf.score(x_test, y_test))

tree.export_graphviz(clf, out_file='C:/Users/mauri/Desktop/d_tree.dot',
                     feature_names=feature_name,
                     class_names=["left", "in"])

"""It is also relevant to understand how important each feature is to the
model we built. We can say that these variables matter in the employees'
decision making."""
importances = clf.feature_importances_
# Sort all features by importance, then keep only the non-negligible ones.
# (Sorting a filtered copy and applying those positions to the full arrays,
# as before, would misalign the indices.)
indices = np.argsort(importances)
indices = indices[importances[indices] > 0.001]

plt.figure()
plt.title('Importance of the features',
          fontdict={'fontsize': 20, 'weight': 'bold', 'alpha': 0.67})
plt.barh(range(len(indices)), importances[indices], color='#5CD1BD',
         align='center')
plt.yticks(range(len(indices)), feature_name[indices])
plt.xlabel('Relative Importance')
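Since the stated goal of the script is to extract decision rules, here is a short sketch that prints the fitted tree as nested if/else rules without needing Graphviz. It assumes the clf and feature_name objects defined above and scikit-learn 0.21 or later, which provides sklearn.tree.export_text.

from sklearn.tree import export_text

# Print the fitted tree as nested if/else rules (max_depth=5 keeps it short).
rules = export_text(clf, feature_names=list(feature_name))
print(rules)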
async def info(self) -> dict: if not hasattr(Connection.info, "cb"): self.logger.debug("vcx_connection_info: Creating callback") Connection.info.cb = create_cb(CFUNCTYPE(None, c_uint32, c_uint32, c_char_p)) c_connection_handle = c_uint32(self.handle) details = await do_call('vcx_connection_info', c_connection_handle, Connection.info.cb) return json.loads(details.decode())
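A minimal usage sketch for the coroutine above, assuming an already established Connection object from the surrounding vcx wrapper; the connection argument here is hypothetical and would be supplied by the caller.

import asyncio

async def show_connection_info(connection):
    # info() resolves to the dict decoded from the library's JSON reply.
    details = await connection.info()
    print(details)

# asyncio.run(show_connection_info(connection))  # 'connection' supplied by caller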
import pandas as pd
from tqdm import tqdm


def repeatexp(n, d, grid_size, reps, tho_scale=0.1, is_classification=True,
              no_signal=True):
    """Repeat the tuning experiment `reps` times and collect per-split
    performance for the standard and thresholdout variants."""
    datasetList = ['Train', 'Holdout', 'Test']
    colList = ['perm', 'performance', 'dataset']
    df_list_std = []
    df_list_tho = []
    for perm in tqdm(range(reps)):
        # fitModels_paramTuning is defined elsewhere in this project.
        vals_std, vals_tho = fitModels_paramTuning(
            n, d, grid_size,
            is_classification=is_classification,
            tho_scale=tho_scale,
            no_signal=no_signal)
        for i, ds in enumerate(datasetList):
            df_list_std.append((perm, vals_std[i], ds))
            df_list_tho.append((perm, vals_tho[i], ds))
    df_std = pd.DataFrame(df_list_std, columns=colList)
    df_tho = pd.DataFrame(df_list_tho, columns=colList)
    return df_std, df_tho
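A hedged usage sketch for repeatexp; it assumes the fitModels_paramTuning helper referenced above is importable, and the small parameter values are purely illustrative.

# Run a few repetitions and compare mean performance per dataset split.
df_std, df_tho = repeatexp(n=200, d=10, grid_size=5, reps=3)
print(df_std.groupby('dataset')['performance'].mean())
print(df_tho.groupby('dataset')['performance'].mean())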
/** * Renders a JFreeChart chart, ready for display via a servlet-rendered web * page. The chart is rendered into a PNG image of the given size. * * @param chart the chart to render * @param width width of the image to create, in pixels * @param height height of the image to create, in pixels * @return a map containing the location, width, height, imageMap and * imageMapName * @throws IOException if there is an error writing the image file */ public static Map renderForWeb(JFreeChart chart, int width, int height) throws IOException { Map<String, Object> params = new HashMap<String, Object>(); ChartRenderingInfo chartRenderingInfo = new ChartRenderingInfo(); String location = ServletUtilities.saveChartAsPNG(chart, width, height, chartRenderingInfo, null); params.put("location", location); params.put("width", width); params.put("height", height); String mapName = "imageMap-" + RandomUtils.insecureRandomString(3); params.put("imageMap", ChartUtilities.getImageMap(mapName, chartRenderingInfo)); params.put("imageMapName", mapName); return params; }
/** * Controller for REST operations for the /processes resource * @author JKRANES */ @Controller @RequestMapping("/processes") public class ProcessesController extends CowServerController { @Autowired ProcessService procService; @Autowired ProcessInstanceService processInstanceService; private static Logger log = Logger.getLogger(ProcessesController.class); /** * The terminology for key, id, ext was inconsistent so I extracted then to constants. */ private static final String WFLOW_NAME = "workflowName"; private static final String WFLOW_NAME_URL = "/{" + WFLOW_NAME + "}"; /** * Retrieve the process (workflow) XML in native (JPDL) format * @param wflowName the process key * @return */ @RequestMapping(value = WFLOW_NAME_URL, params = "format=native") @ResponseBody public Definitions getNativeProcess(@PathVariable(WFLOW_NAME) String wflowName) { // return new StreamSource(processService.getNativeProcessAsStream(key)); return getBpmn20Process(wflowName); } /** * Retrieves a workflow process in BPMN 2.0 format. This method only works for workflow processes * that were originally created in COW format. * @param wflowName the process key * @return the process in BPMN2.0 format */ @RequestMapping(value = WFLOW_NAME_URL, params = "format=bpmn20") @ResponseBody public Definitions getBpmn20Process(@PathVariable(WFLOW_NAME) String wflowName) { return procService.getBpmn20Process(wflowName); } /** * Retrieves a workflow process in COW format. This method only works for workflow * processes that were originally created in COW format. * @param wflowName the process key. Note: any "/" characters must be doubly encoded to "%252F" * @return the XML process document */ @RequestMapping(value = WFLOW_NAME_URL, params = "format=cow", produces="application/xml") @ResponseBody public org.wiredwidgets.cow.server.api.model.v2.Process getCowProcess( @PathVariable(WFLOW_NAME) String wflowName) { return getV2Process(wflowName); } /** * For backward compatibility. 'cow' is preferred over 'v2'. 
* Calls getCowProcess * @param workFlowName * @return * @see #getCowProcess(java.lang.String) */ @RequestMapping(value = WFLOW_NAME_URL, params = "format=v2", produces="application/xml") @ResponseBody public org.wiredwidgets.cow.server.api.model.v2.Process getV2Process( @PathVariable(WFLOW_NAME) String workFlowName) { return procService.getV2Process(workFlowName); } @RequestMapping(value = WFLOW_NAME_URL, params = "format=graph", produces="application/json") @ResponseBody public Map<String, Object> getCowProcessGraph(@PathVariable(WFLOW_NAME) String wflowName) { return procService.getProcessGraph(wflowName); } /** * Get a Process (workflow) * @param wflowName The name of the Process * @return */ @RequestMapping(value = WFLOW_NAME_URL, method = GET) @ResponseBody public ResponseEntity<Process> getProcess(@PathVariable(WFLOW_NAME) String wflowName) { return createGetResponse(procService.getV2Process(wflowName)); } /** * Retrieves the list of running instances for a given process * @param wflowName * @return */ @RequestMapping(value = WFLOW_NAME_URL + "/processInstances", method = GET) @ResponseBody public ResponseEntity<ProcessInstances> getProcessInstances( @PathVariable(WFLOW_NAME) String wflowName) { if (!processExists(wflowName)) { return notFound(); } ProcessInstances pis = getProcInstances(wflowName); return ok(pis); } /** * Delete all running instances of process * @param wflowName * @return 204 on success, 404 if process doesn't exist */ @RequestMapping(value = WFLOW_NAME_URL + "/processInstances", method = DELETE) @ResponseBody public ResponseEntity<Void> deleteProcessInstances( @PathVariable(WFLOW_NAME) String wflowName) { if (!processExists(wflowName)) { return notFound(); } processInstanceService.deleteProcessInstancesByKey(wflowName); return noContent(); } /** * Create a new process. Attempts to use process.getKey() as the id. If the id is taken the * process's key will be set to a unique value. The response body will contain the process * with its key updated. The uri that can be used to get the newly created process can be * found in the response's "Location" header. * * @param process * @param uriBuilder * @return */ @RequestMapping(method = POST) @ResponseBody public ResponseEntity<Process> createProcess( @RequestBody Process process, UriComponentsBuilder uriBuilder) { String id = getUniqueKey(process.getKey()); process.setKey(id); procService.save(process); return getCreatedResponse("/processes/{id}", id, uriBuilder, getV2Process(id)); } /** * Attempts to update the specified process. If the process updates normally the response * will have status code 200. If the process doesn't already exist it will be created * and the response will have status code 201. * A process cannot be modified when there are instances of it running. If there are instances * of the process running the response will have status code 409, and the body will contain * the running instances of the process. 
* * @param wflowName * @param process * @param uriBuilder * @return */ @RequestMapping(value = WFLOW_NAME_URL, method = PUT) @ResponseBody public ResponseEntity<?> updateProcess( @PathVariable(WFLOW_NAME) String wflowName, @RequestBody Process process, UriComponentsBuilder uriBuilder) { process.setKey(wflowName); if (!processExists(wflowName)) { //201 created procService.save(process); return getCreatedResponse("/processes/{id}", wflowName, uriBuilder, process); } ProcessInstances runningInstances = getProcInstances(wflowName); if (runningInstances.getProcessInstances().isEmpty()) { //200 OK procService.save(process); return ok(getV2Process(wflowName)); } else { //409 need to delete process instances return conflict(runningInstances); } } /** * A process cannot be modified when there are instances of it running. If there are instances * of the process running the response will have status code 409, and the body will contain * the running instances of the process. * @param wflowName * @return 204 if successful, 404 if not found, 409 if running instances */ @RequestMapping(value = WFLOW_NAME_URL, method = DELETE) @ResponseBody public ResponseEntity<?> deleteProcess(@PathVariable(WFLOW_NAME) String wflowName) { if (!processExists(wflowName)) { return notFound(); } ProcessInstances runningInstances = getProcInstances(wflowName); if (!runningInstances.getProcessInstances().isEmpty()) { return conflict(runningInstances); } if (!procService.deleteProcess(wflowName)) { return internalError(); } return noContent(); } private String getUniqueKey(String key) { String orginalKey = key; int i = 1; while (processExists(key)) { key = orginalKey + i; i++; } return key; } private boolean processExists(String wflowName) { return procService.getV2Process(wflowName) != null; } private ProcessInstances getProcInstances(final String wflowName) { return doWithRetry(new RetryCallback<ProcessInstances>() { public ProcessInstances doWithRetry(RetryContext arg0) throws Exception { ProcessInstances pis = new ProcessInstances(); pis.getProcessInstances().addAll(processInstanceService .findProcessInstancesByKey(wflowName)); return pis; } }); } }
/** * Exception thrown when an HTML document contains WAT tags with invalid arguments. * * @author Francois-Xavier Bonnet */ public class AggregationSyntaxException extends RuntimeException { private static final long serialVersionUID = 1L; /** * @param string * Error message */ public AggregationSyntaxException(String string) { super(string); } }
Hydrogeologic and Paleo-Geographic Characteristics of Riverside Alluvium at an Artificial Recharge Site in Korea
This study characterized the hydrogeology of an alluvial aquifer composed of sand, silt and clay layers within a small domain. The aquifer can be divided into a lower high-salinity layer and an upper freshwater layer, and it contains shells and remnant paleo-seawater (average 5,000 μS/cm) left behind by sea-level fluctuation. Geological and electrical-conductivity logging, a long-term pumping test, and multi-depth water-quality measurements were conducted at pumping, injection and observation wells to evaluate the hydrogeologic properties, identify the optimal recharge rate and assess artificial recharge. During the hydraulic test, large differences in drawdown and salinity appeared among the radially located observation wells because of differences in hydraulic connectivity between the wells, even within the small study area. It was concluded that the hydraulic anisotropy and heterogeneity of an alluvial aquifer should be carefully examined when siting an injection well and designing an efficient artificial recharge scheme.
/* * Tests use current thread in payload and reply has the thread that actually * performed the send() on gatewayThreadChannel. */ @Test public void testExecs() throws Exception { assertThat(TestUtils.getPropertyValue(execGatewayFB, "asyncExecutor")).isSameAs(exec); assertThat(TestUtils.getPropertyValue(noExecGatewayFB, "asyncExecutor")).isNull(); Future<Thread> result = this.int2634Gateway.test3(Thread.currentThread()); assertThat(result.get()).isNotEqualTo(Thread.currentThread()); assertThat(result.get().getName()).startsWith("SimpleAsync"); result = this.execGateway.test1(Thread.currentThread()); assertThat(result.get()).isNotEqualTo(Thread.currentThread()); assertThat(result.get().getName()).startsWith("exec-"); result = this.noExecGateway.test1(Thread.currentThread()); assertThat(result.get()).isEqualTo(Thread.currentThread()); ListenableFuture<Thread> result2 = this.execGateway.test2(Thread.currentThread()); final CountDownLatch latch = new CountDownLatch(1); final AtomicReference<Thread> thread = new AtomicReference<>(); result2.addCallback(new ListenableFutureCallback<>() { @Override public void onSuccess(Thread result) { thread.set(result); latch.countDown(); } @Override public void onFailure(Throwable t) { } }); assertThat(latch.await(10, TimeUnit.SECONDS)).isTrue(); assertThat(result2.get().getName()).startsWith("exec-"); /* @IntegrationComponentScan(useDefaultFilters = false, includeFilters = @ComponentScan.Filter(TestMessagingGateway.class)) excludes this a candidate */ assertThat(this.notAGatewayByScanFilter).isNull(); }
#include<bits/stdc++.h>
using namespace std;

const int inf = 1e9 + 10;
const int M = 1000000007;

#define enter(x) for(auto & it : x) cin>>it;
#define print(x) for(auto & it : x) cout<<it<<' '; cout<<endl;

// Answer "Yes" when some prefix of s concatenated with a disjoint suffix
// of s equals "2020". The branches enumerate the possible splits of
// "2020" into kept prefix + kept suffix: 4+0, 0+4, 2+2, 1+3 and 3+1.
void solve() {
    int n; cin >> n;
    string s; cin >> s;
    if (n < 4) {
        cout << "No" << endl;
        return;
    }
    if (s.substr(0, 4) == "2020") {                                     // 4 + 0
        cout << "Yes" << endl;
    } else if (s.substr(n - 4, 4) == "2020") {                          // 0 + 4
        cout << "Yes" << endl;
    } else if (s.substr(0, 2) == "20" and s.substr(n - 2, 2) == "20") { // 2 + 2
        cout << "Yes" << endl;
    } else if (s[0] == '2' and s.substr(n - 3, 3) == "020") {           // 1 + 3
        cout << "Yes" << endl;
    } else if (s.substr(0, 3) == "202" and s[n - 1] == '0') {           // 3 + 1
        cout << "Yes" << endl;
    } else {
        cout << "No" << endl;
    }
}

int32_t main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL); cout.tie(NULL);
    cin.exceptions(ios::badbit | ios::failbit);
    int TC;
    cin >> TC;
    while (TC--) solve();
    return 0;
}
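The branch structure above enumerates the ways to split "2020" into a kept prefix and a kept suffix of s. The same predicate can be restated compactly; the Python version below is purely an illustration of the case analysis, not part of the submission.

def can_make_2020(s: str) -> bool:
    n = len(s)
    if n < 4:
        return False
    # Keep the first i characters and the last 4 - i characters.
    return any(s[:i] + s[n - (4 - i):] == "2020" for i in range(5))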
Occipital condyle fracture (OCF) is rarely seen and can be missed during medical evaluation because of its varied clinical presentation and the difficulty of visualizing it radiographically. This fracture can be associated with cranial nerve injuries (31%), the hypoglossal nerve being the most frequently involved (67%). We report a 58-year-old female patient who presented with an OCF, injury of the lower cranial nerves and a Jefferson's fracture. The patient was treated with cervical traction for six weeks, followed by halo immobilization for three months. After this period there was bone consolidation and recovery from the nerve injury. This report emphasizes the importance of investigating the craniocervical transition in all patients with cervical trauma. Although Jefferson's fracture is rarely associated with OCF, the combination should be kept in mind and treated appropriately when diagnosed.
from pyastar2d.astar_wrapper import astar_path, Heuristic __all__ = ["astar_path", "Heuristic"]
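A minimal usage sketch for the re-exported API, assuming the usual pyastar2d call signature: a float32 weight grid whose values are all at least 1.0, plus start and goal cell coordinates.

import numpy as np
import pyastar2d

weights = np.ones((4, 4), dtype=np.float32)
weights[1, :3] = 100.0  # an expensive band the path should route around
path = pyastar2d.astar_path(weights, (0, 0), (3, 3), allow_diagonal=False)
print(path)  # array of (row, col) cells from start to goal, or None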
// AddTerminfo can be called to register a new Term entry. func AddTerminfo(t *Term) { mu.Lock() infos[t.Name] = t for _, x := range t.Aliases { infos[x] = t } mu.Unlock() }
<gh_stars>1-10 package JSHOP2; import java.util.Vector; /** Each method at compile time is represented as an instance of this class. * * @author <NAME> * @author <a href="http://www.cs.umd.edu/~okhtay">http://www.cs.umd.edu/~okhtay</a> * @version 1.0.3 */ public class InternalMethod extends InternalElement { /** The number of objects already instantiated from this class. */ private static int classCnt = 0; /** A <code>Vector</code> of <code>String</code>s each of which represents * the label of a branch of this method. */ private Vector<String> labels; /** A <code>Vector</code> of logical preconditions each of which represents * the precondition of a branch of this method. Each branch is an * alternative on how to decompose the task associated with this method. */ private Vector<LogicalPrecondition> pres; /** A <code>Vector</code> of task lists each of which represents a possible * way to decompose the task associated with this method if the * corresponding precondition is satisfied in the current state of the * world. */ private Vector<TaskList> subs; /** To initialize an <code>InternalMethod</code> object. * * @param head * head of the method (i.e., the compound task which can be * decomposed by using this method). * @param labelsIn * a <code>Vector</code> of <code>String</code> labels. * @param presIn * a <code>Vector</code> of logical preconditions. * @param subsIn * a <code>Vector</code> of task lists. */ public InternalMethod(Predicate head, Vector<String> labelsIn, Vector<LogicalPrecondition> presIn, Vector<TaskList> subsIn) { //-- Set the head of this InternalMethod. Note the use of 'classCnt' to //-- make this object distinguishable from other objects instantiated from //-- this same class. super(head, classCnt++); //-- Set the labels, preconditions and task decompositions of //-- branches in this method. labels = labelsIn; pres = presIn; subs = subsIn; //-- To iterate over branch preconditions. //-- For each branch, set the number of variables in the precondition for //-- that branch. This will be used to produce the code that will be used //-- to find bindings, since a binding is an array of this size. for (LogicalPrecondition pre : pres) pre.setVarCount(getHead().getVarCount()); //-- To iterate over task decompositions. //-- For each task decomposition, set the number of variables in the task //-- list for that decomposition. for (TaskList tl : subs) tl.setVarCount(getHead().getVarCount()); } /** This function produces the Java code needed to implement this method. */ public String toCode() { String s = ""; //-- First produce the initial code for the preconditions of each branch. for (int i = 0; i < pres.size(); i++) s += pres.get(i).getInitCode(); //-- The header of the class for this method at run time. Note the use of //-- 'getCnt()' to make the name of this class unique. s += "class Method" + getCnt() + " extends Method" + endl + "{" + endl; //-- The constructor of the class. s += "\tpublic Method" + getCnt() + "()" + endl + "\t{" + endl; //-- Call the constructor of the base class (class 'Method') with the code //-- that produces the head of this method. s += "\t\tsuper(" + getHead().toCode() + ");" + endl; //-- Allocate the array to keep the possible task lists that represent //-- possible decompositions of this method. s += "\t\tTaskList[] subsIn = new TaskList[" + subs.size() + "];" + endl; s += endl; //-- For each possible decomposition, for (int i = 0; i < subs.size(); i++) { if ((subs.get(i)).isEmpty()) //-- This decomposition is an empty task list. 
s += "\t\tsubsIn[" + i + "] = TaskList.empty;" + endl; else //-- This decomposition is not an empty task list, so call the function //-- that will produce the task list for this decomposition. This //-- function will be implemented later on. Note the use of variable //-- 'i' to make the header of the function being called unique. s += "\t\tsubsIn[" + i + "] = createTaskList" + i + "();" + endl; } //-- Call the function that sets the method's task list to the array that //-- was created and initialized. s += endl + "\t\tsetSubs(subsIn);" + endl + "\t}" + endl + endl; //-- For each possible decomposition, for (int i = 0; i < subs.size(); i++) { //-- If the decomposition is not an empty list, we need to implement the //-- function that returns this decomposition. if (!(subs.get(i)).isEmpty()) { //-- The function header. s += "\tTaskList createTaskList" + i + "()" + endl + "\t{" + endl; //-- The code that will produce this task list. s += (subs.get(i)).toCode() + "\t}" + endl + endl; } } //-- The function that returns an iterator that can be used to find all the //-- bindings that satisfy a given precondition of this method and return //-- them one-by-one. s += "\tpublic Precondition getIterator(Term[] unifier, int which)" + endl; s += "\t{" + endl + "\t\tPrecondition p;" + endl + endl; //-- The switch statement to choose the appropriate precondition. s += "\t\tswitch (which)" + endl + "\t\t{"; //-- For each possible decomposition, for (int i = 0; i < pres.size(); i++) { //-- Retrieve the logical precondition. LogicalPrecondition pre = pres.get(i); //-- Produce the code that will return the appropriate iterator. s += endl + "\t\t\tcase " + i + ":" + endl + "\t\t\t\tp = "; s += pre.toCode() + ";" + endl; //-- If the logical precondition is marker ':first', set the appropriate //-- flag. if (pre.getFirst()) s += "\t\t\t\tp.setFirst(true);" + endl; s += "\t\t\tbreak;"; } //-- Close the switch statement. s += endl + "\t\t\tdefault:" + endl + "\t\t\t\treturn null;" + endl; s += "\t\t}" + endl; //-- Reset the precondition and return it. s += endl + "\t\tp.reset();" + endl + endl + "\t\treturn p;" + endl; //-- This function returns the label of a given branch of this method. s += "\t}" + endl + endl + "\tpublic String getLabel(int which)" + endl; //-- The switch statement to choose the appropriate label. s += "\t{" + endl + "\t\tswitch (which)" + endl + "\t\t{"; //-- For each branch; for (int i = 0; i < labels.size(); i++) //-- Return its associated label. s += endl + "\t\t\tcase " + i + ": return \"" + labels.get(i) + "\";"; //-- Close the switch statement. s += endl + "\t\t\tdefault: return null;" + endl + "\t\t}" + endl; //-- Close the function definition and the class definition and return the //-- resulting string. return s + "\t}" + endl + "}" + endl + endl; } }
A new rooftop solar system is installed every three minutes in the U.S., up from one every 80 minutes just eight short years ago. If this pace continues to accelerate or even just holds steady, it will not be long before solar panels become visible, if not ubiquitous, in many neighborhoods nationwide. That prospect is enough to upset the Koch brothers, the heirs of the Walmart fortune and the utility industry, all of which are trying to stamp out the rooftop solar movement or at least make a tidy profit penalizing the people who join it. With the help of powerful lobbyists and PACs like the American Legislative Exchange Council (ALEC) and Americans for Prosperity, they are set to do battle in statehouses across the nation in 2015. ALEC, which receives much of its funding from the utility industry and fossil-fuel investors like the Kochs, has long been an opponent of renewable energy and the Obama administration's effort to reduce carbon emissions. It's working with conservative activists and corporate interests to fight homeowners who are installing solar panels on their roofs. Calling people who install rooftop solar panels "freeriders," another word for freeloaders, the pro-corporate group is actively promoting state legislation to charge fees, even exorbitant ones, for rooftop solar installations. Behind the lobbyists are the megarich Walton family. The majority owners of the Walmart retail chain also hold several energy interests, including a 30% stake in First Solar, which makes the parts for huge commercial installations of solar panels that operate like power plants. A recent report by the Institute for Local Self-Reliance shows that the Waltons are giving lobbyist organizations millions to attack renewable energy laws at the state level. Their prime targets are the homeowners and businesses that opt for solar panels to provide their own electricity. "Rooftop solar in the U.S. is growing exponentially and more and more Americans have access to affordable solar power that cuts their energy bills and builds a more sustainable energy future," says Erich Pica, president of Friends of the Earth. "Yet, the Waltons' money is instead limiting average Americans' ability to go solar and control their own energy future." Tag-teaming with the Koch brothers and some of the nation's largest utilities, the Waltons have not been shy about browbeating state lawmakers and agencies to roll back or throw out their renewable energy policies. Over the past few years, they've bankrolled campaigns against residential solar in Arizona, Kansas, North Carolina, South Carolina, Ohio, Oklahoma and Washington. Results in these states have been mixed, so far. In Arizona, Americans for Prosperity and First Solar succeeded in securing fees on rooftop solar installations for the state's energy utilities. Initially, the utilities asked for a $100-a-month surcharge, which would have utterly destroyed any economic incentive to opt for home-generated power. But with some pushback from state regulators, a compromise was reached, and the new fee for putting solar panels on a home now comes to about $5 a month. As relatively small as it is, the new fee seems to have scared off would-be solar adopters in the Grand Canyon State. The thought of the "tax" increasing in coming years may have homeowners thinking twice about installing panels; rooftop installations in Arizona have dropped 40% since the compromise was reached.
While not going directly after homeowners with solar rooftops, ALEC was similarly successful in Ohio, making it the first state to hold back on new mandates for renewable energy generation. This had ALEC’s John Eick doing his happy dance. “[Ohio] may have laid the groundwork for other states to move in this direction in the coming year,” said ALEC’s legislative analyst. Ohio’s 2008 standards had mandated that utilities sell an increasing amount of power generated by renewable or alternative fuels over time until it comprised a quarter of the state’s electricity output 10 years from now. It also mandated a sharp reduction in power consumption. Those requirements are now frozen for at least two years while a state legislative commission considers altering them. Utilities Versus ‘Youtilities’ Why are conservative luminaries, corporate lobbyists, and the power companies pushing so hard against the little guy trying to save a few bucks while helping the planet? Because even though solar energy still only accounts for 0.23 percent of the nation’s electricity today, rooftop solar is a real threat to the very existence of utilities in the near future. For utilities, the most immediate cause for concern is net metering policies in many states, which allow homeowners and businesses to sell back any excess electricity they create with their solar panels. The surplus electricity goes back into the power grid and is sold to other consumers at low rates, often lower than what the utilities charge for electricity themselves. John Eick told the Guardian that ALEC is worried about how individual homeowners are being compensated for feeding electricity back into the system. He said ALEC wants to reduce the rate homeowners are paid for direct power generation and perhaps even penalize homeowners for selling electricity back to the grid. While power fed back to the grid from homeowners and businesses isn’t much of a threat to utilities currently, it will be in the near future as solar installations become more popular and affordable. Homegrown solar power, says the utility industry, may soon lay waste to the status quo. The industry’s Edison Institute released a report in 2013 in which it voiced this warning: “[T]here is a perception that customers will always need to remain on the grid. While we would expect customers to remain on the grid until a fully viable and economic distributed non-variable resource is available, one can imagine a day when battery storage technology or micro turbines could allow customers to be electric grid independent. To put this in perspective, who would have believed 10 years ago that traditional wire telephone customers could economically ‘cut the cord?’” The Edison Institute further predicts that as more homes with rooftop solar panels are connected to the grid, the price to provide electricity to traditional ratepayers will rise, and will drive even more homeowners to rooftop. At that point “it may be too late to repair the utility business model.” That time seems to be coming sooner than later. The prices of solar installations keep dropping and the technology behind home solar energy storage is rapidly progressing. Solar City, one of the largest rooftop solar leasing companies, is already marketing advanced battery backup systems to help owners of its solar panels store more surplus energy in the home, and they are doing so with one leader in battery technology, electric-car manufacturer Tesla Motors. (High-tech entrepreneur Elon Musk heads both companies.) 
And they are not alone; Solar City is being joined by several other technology upstarts in the sales of newer and longer-lasting battery backup systems. It may only be a few short years, energy analysts say, before these battery backups evolve into systems that allow homeowners to untie themselves from the grid for good. But for now, they are little more than relatively expensive (roughly between $2,000 and $10,000) failsafes for infrequent power outages. Right now, the utilities are more concerned about the grid-tied net-metering arrangements, since they're starting to serve as virtual batteries for electric consumers. Rooftop solar panel installations on net-metering systems often produce excess power during the day and feed it back into the grid. When the home needs to tap into energy at night and on overcast days, the consumer often isn't charged for the energy they use until they've spent all the credits they generated by selling energy through their panels. So, utilities do not so much fear consumers leaving the grid as they do the grid increasingly becoming a network over which they have little control. Rather than being a top-down supply chain of electricity from power plants, it becomes a web of sharing between consumers and commercial energy producers across North America. Electricity will not be a commodity, but a shared resource, as consumers barter energy credits with the utilities and one another. If everyone becomes an energy producer, it challenges and can even break the utility monopolies, transforming the system entirely. Home and business owners, as a network, will compete in an open market rising from the ashes of broken cartels. These corporations, still powerful today, foresee this doomsday scenario (for them) if they can't rig the system to work solely for them. This is why the Walton Family Trust, heavily vested in a future of immense solar arrays, with thousands of solar panels that amount to a power plant, instead of modest rooftop panels, is channeling its self-interest through corporate lobbyists. First Solar is a $6 billion corporation and the Waltons could lose their collective shirts if solar arrays turn out to be a bad investment. The drift toward residential rooftop solar has been called a "revolution" by the Institute for Local Self-Reliance. "The Waltons claim to have a deep commitment to sustainability, but their support for anti-solar initiatives tells a different story," says Stacy Mitchell, a senior researcher at ILSR. "The Waltons are investing in efforts that both undercut clean energy and prevent average Americans from benefiting economically from solar power." In short, the Waltons have a product on the market, and they want to stop their potential customers from producing this product themselves. In essence, they're calling for a tax on individuals who harvest their own energy. If the Waltons tried to tax backyard vegetable plots because they sell tomatoes at Walmart, people would start to get the idea of what's at stake here. Is the Electric Company Too Big to Fail? In the 16 states where electricity markets are deregulated, the utility industry looks quite different; that is, if you can figure it out.
While an electric utility is still a regulated monopoly that provides power to the region through its infrastructure, consumers can often purchase electricity from a variety of providers, or the utility may act as the consumer's agent, buying energy from the market to resell. As murky as this may make the utility picture for consumers, it doesn't change the basic fact that the ultimate goal of any electric utility is to move as much energy through its network as possible. As long as those meters keep spinning through the kilowatt-hours, it's easy money. But what happens if those meters grind to a halt, or even run in reverse? Worse yet, what if electrical customers are providing all the electricity they need during the daytime, when demand is at a peak and prices are high? How does the electric company make up for this? And what of the expensive solar arrays, nuclear power plants, and coal- and natural gas-powered plants that might become idle? What happens to them when they're no longer in critical demand during peak usage times, especially in the summer months when air conditioners may run throughout the day? It's hard to comprehend what happens in a world where electricity consumers are increasingly self-reliant. Is the future of the electric utilities merely to become sentinels of the grid? And what of power plants? Are they destined to become backup systems for dark hours and cloudy days? And what about the electric ratepayers who haven't switched over to alternative energies such as solar or wind; do their rates skyrocket when it costs more per capita to service the grid and sell power? Will this eventually drive those ratepayers to renewable energy as well, the final sling taking down the electric Goliath? It's hard to predict, but both alternative energy advocates and corporatists say it will be a game-changer within a few short years. The ILSR is optimistic about a transforming market for consumers. "It's moving the U.S. from a system in which electricity generation is controlled by a small number of investor-owned utilities and toward a future in which households produce energy and reap the financial benefits," says its recent report. No matter how you slice it, rooftop solar power creates uncertainty for the prototypical utility, the mass energy provider and the corporations that build and provide resources to these facilities. And uncertainty is something corporations and their investors do not like. So, if you're the CEO of a large energy utility owner like Duke Energy, or you're the Kochs, the Waltons or any other person or institution heavily vested in energy, you've got millions, if not billions, of reasons to circumvent and gut the competition. And because your chief rival is not another corporation but millions of individual homeowners and businesses, you can't buy them out directly, so you buy out their government representatives. In this era of Citizens United, nothing is stopping you from dispatching swarms of lobbyists to butter up or even threaten politicians to do your bidding. In 2015, this is the American way.
// PickUtxos Picks Utxos for spending. Tell it how much money you want. // It returns a tx-sortable utxoslice, and the overshoot amount. Also errors. // if "ow" is true, only gives witness utxos (for channel funding) // The overshoot amount is *after* fees, so can be used directly for a // change output. func (w *Wallit) PickUtxos( amtWanted, outputByteSize, feePerByte int64, ow bool) (portxo.TxoSliceByBip69, int64, error) { curHeight, err := w.GetDBSyncHeight() if err != nil { return nil, 0, err } var allUtxos portxo.TxoSliceByAmt allUtxos, err = w.GetAllUtxos() if err != nil { return nil, 0, err } for i := len(allUtxos) - 1; i >= 0; i-- { _, frozen := w.FreezeSet[allUtxos[i].Op] if frozen { allUtxos[i] = allUtxos[len(allUtxos)-1] allUtxos = allUtxos[:len(allUtxos)-1] } } sort.Sort(sort.Reverse(allUtxos)) maxFeeGuess := feePerByte * consts.MaxTxCount for len(allUtxos) > 1 && allUtxos[1].Value > amtWanted+maxFeeGuess && allUtxos[1].Height > 100 && !(ow && allUtxos[1].Mode&portxo.FlagTxoWitness == 0) { allUtxos = allUtxos[1:] } for len(allUtxos) > 2 && allUtxos[2].Height > 100 && allUtxos[1].Mature(curHeight) && allUtxos[2].Mature(curHeight) && allUtxos[1].Value+allUtxos[2].Value > amtWanted+maxFeeGuess && !(ow && allUtxos[2].Mode&portxo.FlagTxoWitness == 0) && !(ow && allUtxos[1].Mode&portxo.FlagTxoWitness == 0) { logging.Infof("remaining utxo list, in order:\n") for _, u := range allUtxos { logging.Infof("\t h: %d amt: %d\n", u.Height, u.Value) } allUtxos = allUtxos[1:] } var rSlice portxo.TxoSliceByBip69 remaining := amtWanted for _, utxo := range allUtxos { if !utxo.Mature(curHeight) { continue } if ow && utxo.Mode&portxo.FlagTxoWitness == 0 { continue } if utxo.Value < 1 { continue } rSlice = append(rSlice, utxo) remaining -= utxo.Value if remaining <= 0 { fee := EstFee(rSlice, outputByteSize, feePerByte) remaining += fee if remaining < -fee { break } } } if remaining > 0 { return nil, 0, fmt.Errorf("wanted %d but %d available.", amtWanted, amtWanted-remaining) } sort.Sort(rSlice) return rSlice, -remaining, nil }
/**
 * Creates a new UI layout.
 */
@Command(scope = "onos", name = "layout-add", description = "Creates a new UI layout")
public class LayoutAddCommand extends AbstractShellCommand {

    private static final String FMT = "id=%s, name=%s, type=%s";
    private static final String FMT_MASTER = " master=%s";

    @Argument(index = 0, name = "id", description = "Layout ID",
            required = true, multiValued = false)
    String id = null;

    @Argument(index = 1, name = "regionId", description = "Region ID (optional)",
            required = false, multiValued = false)
    String regionId = null;

    @Argument(index = 2, name = "parentId", description = "Parent layout ID (optional)",
            required = false, multiValued = false)
    String parentId = null;

    @Override
    protected void execute() {
        UiTopoLayoutService service = get(UiTopoLayoutService.class);
        RegionService regionService = get(RegionService.class);

        Region region = regionId == null ? null : regionService.getRegion(regionId(regionId));
        UiTopoLayoutId pid = parentId == null ? UiTopoLayoutId.DEFAULT_ID : layoutId(parentId);

        UiTopoLayout layout = new UiTopoLayout(layoutId(id)).region(region).parent(pid);
        service.addLayout(layout);
    }
}
import sys
input = sys.stdin.readline
import bisect  # kept from the original source, though it is not used below

# Binary search on x, the number of entries of ag taken from the top:
# the weakest of the chosen group has value ag[m - x]. Every (l, r, d)
# interval with d greater than that must be covered by a detour, so the
# dangerous intervals are merged and each covered cell costs two extra
# moves on top of the n + 1 base steps; x is feasible if the total is <= t.
m, n, k, t = map(int, input().split())
ag = list(map(int, input().split()))
ag.sort()
lrd = [list(map(int, input().split())) for i in range(k)]
lrd.sort()

lf = 0
rg = m + 1
while lf + 1 < rg:
    x = (lf + rg) // 2
    agl = ag[m - x]  # value of the weakest among the x strongest
    lr = []
    for l, r, d in lrd:
        if d > agl:
            if lr and lr[-1][1] < l:
                lr.append([l, r])       # disjoint from the last interval
            elif lr:
                lr[-1][1] = max(lr[-1][1], r)  # overlaps: extend the last one
            else:
                lr.append([l, r])
    ans = n + 1  # base number of steps
    if lr:
        for l, r in lr:
            ans += (r - l + 1) * 2  # each dangerous cell is walked twice more
    if ans > t:
        rg = x  # too slow: shrink the group
    else:
        lf = x  # feasible: try a larger group
print(lf)
/**
 * Edits the details of an existing item in the sales list.
 */
public class EditCommand extends Command {

    public static final String COMMAND_WORD = "edit";

    private final int index;
    private final int quantity;
    private final Logger logger = LogsCenter.getLogger(getClass());

    /**
     * Creates an EditCommand to edit an item.
     * @param index of the item in the sales list to edit
     * @param quantity of the edited item
     */
    public EditCommand(int index, int quantity) {
        assert index > 0 : "Index must be a positive integer.";
        assert quantity > 0 : "Quantity must be a positive integer.";
        logger.info("index of item edited: " + index);
        logger.info("quantity of item edited: " + quantity);
        this.index = index;
        this.quantity = quantity;
    }

    @Override
    public CommandResult execute(seedu.address.cashier.model.Model model,
            seedu.address.person.model.CheckAndGetPersonByNameModel personModel) throws Exception {
        Item i;
        try {
            i = model.findItemByIndex(index);
        } catch (IndexOutOfBoundsException e) {
            throw new NoSuchIndexException(CashierMessages.NO_SUCH_INDEX_CASHIER);
        }
        if (!model.hasSufficientQuantityToEdit(index, quantity)) {
            String description = model.findItemByIndex(index).getDescription();
            int quantityLeft = model.getStockLeft(description);
            throw new InsufficientAmountException(String.format(MESSAGE_INSUFFICIENT_STOCK,
                    quantityLeft, description));
        }
        i = model.editItem(index, quantity);
        logger.info("Edited item: " + i.toString());
        return new CommandResult(String.format(MESSAGE_EDIT_SUCCESS, i.getDescription(),
                i.getQuantity()));
    }

    @Override
    public boolean equals(Object other) {
        return other == this // short circuit if same object
                || (other instanceof EditCommand // instanceof handles nulls
                && index == (((EditCommand) other).index)
                && quantity == ((EditCommand) other).quantity);
    }
}
package com.tal.file.transform.common.storage.local;

import com.tal.file.transform.common.storage.StorageConfig;
import com.tal.file.transform.common.storage.StorageFile;
import com.tal.file.transform.common.storage.StorageZone;

import org.apache.log4j.Logger;

/**
 * Local-filesystem implementation of the storage zone (root) object.
 *
 * @author lazycathome
 */
public class LocalZone implements StorageZone {
    protected static final Logger log = Logger.getLogger(LocalZone.class);

    private StorageConfig conf = null;

    /**
     * Static factory method that simplifies the calling code.
     * @param conf the storage configuration
     * @return a new storage zone backed by the local filesystem
     */
    public static StorageZone create(StorageConfig conf) {
        return new LocalZone(conf);
    }

    protected LocalZone(StorageConfig conf) {
        this.conf = conf;
    }

    @Override
    public StorageFile create(String urlOrPath) {
        return LocalFile.create(conf, urlOrPath, false);
    }

    @Override
    public StorageFile lookup(String urlOrPath) {
        return LocalFile.create(conf, urlOrPath, true);
    }
}
def from_device(self, other: "Device") -> None:
    """Copy addressing state from another device of the same type,
    merging its old-address history into this one."""
    if other is self:
        return
    if not isinstance(other, type(self)):
        raise TypeError("other")
    self.reset_addresses()
    self.address = other.address
    self.address_aliases = other.address_aliases
    # Keep the union of both old-address histories, minus anything
    # already present in self.all_addresses.
    self._old_addresses = [address
                           for address in {*self._old_addresses, *other._old_addresses}
                           if address not in self.all_addresses]
/** Import an ECC key from a binary packet, using user supplied domain params
    rather than one of the NIST ones
  @param in      The packet to import
  @param inlen   The length of the packet
  @param key     [out] The destination of the import
  @param dp      pointer to user supplied params; must be the same as the params used when exporting
  @return CRYPT_OK if successful, upon error all allocated memory will be freed
*/
int ecc_bl_import_ex(const unsigned char *in, unsigned long inlen, ecc_key *key,
                     const ltc_ecc_set_type *dp)
{
   unsigned long key_size;
   unsigned char flags[1];
   int           err;

   LTC_ARGCHK(in  != NULL);
   LTC_ARGCHK(key != NULL);
   LTC_ARGCHK(ltc_mp.name != NULL);

   /* init the key's bignums */
   if (mp_init_multi(&key->pubkey.x, &key->pubkey.y, &key->pubkey.z, &key->k, NULL) != CRYPT_OK) {
      return CRYPT_MEM;
   }

   /* read the flags to find out whether this is a private or public key */
   if ((err = der_decode_sequence_multi(in, inlen,
                                        LTC_ASN1_BIT_STRING, 1UL, &flags,
                                        LTC_ASN1_EOL,        0UL, NULL)) != CRYPT_OK) {
      goto done;
   }

   if (flags[0] == 1) {
      /* private key */
      key->type = PK_PRIVATE;
      if ((err = der_decode_sequence_multi(in, inlen,
                                           LTC_ASN1_BIT_STRING,    1UL, flags,
                                           LTC_ASN1_SHORT_INTEGER, 1UL, &key_size,
                                           LTC_ASN1_INTEGER,       1UL, key->pubkey.x,
                                           LTC_ASN1_INTEGER,       1UL, key->pubkey.y,
                                           LTC_ASN1_INTEGER,       1UL, key->k,
                                           LTC_ASN1_EOL,           0UL, NULL)) != CRYPT_OK) {
         goto done;
      }
   } else {
      /* public key */
      key->type = PK_PUBLIC;
      if ((err = der_decode_sequence_multi(in, inlen,
                                           LTC_ASN1_BIT_STRING,    1UL, flags,
                                           LTC_ASN1_SHORT_INTEGER, 1UL, &key_size,
                                           LTC_ASN1_INTEGER,       1UL, key->pubkey.x,
                                           LTC_ASN1_INTEGER,       1UL, key->pubkey.y,
                                           LTC_ASN1_EOL,           0UL, NULL)) != CRYPT_OK) {
         goto done;
      }
   }

   if (dp == NULL) {
      /* find the index in the built-in table based on the key size */
      for (key->idx = 0; ltc_ecc_bl_sets[key->idx].size &&
                         (unsigned long)ltc_ecc_bl_sets[key->idx].size != key_size; ++key->idx);
      if (ltc_ecc_bl_sets[key->idx].size == 0) {
         err = CRYPT_INVALID_PACKET;
         goto done;
      }
      key->dp = &ltc_ecc_bl_sets[key->idx];
   } else {
      key->idx = -1;
      key->dp  = dp;
   }

   /* set z so the point is in affine coordinates */
   if ((err = mp_set(key->pubkey.z, 1)) != CRYPT_OK) {
      goto done;
   }

   /* verify the public point is on the curve */
   if ((err = ltc_ecc_bl_CheckKey(key)) != CRYPT_OK) {
      goto done;
   }

   return CRYPT_OK;

done:
   mp_clear_multi(key->pubkey.x, key->pubkey.y, key->pubkey.z, key->k, NULL);
   return err;
}
/**
 * Created by jobob on 17/5/16.
 */
@Slf4j
public class AppTest {

    public static void main(String[] args) {
        for (int i = 0; i < 10000000; i++) {
            new Thread(() -> {
                System.out.println(System.currentTimeMillis());
            }).start();
        }
    }

    @Before
    public void test() {
        Af af = UserServiceImpl.class.getAnnotation(Af.class);
        String res = String.format("AliasForServiceImpl Annotation "
                + "Af.value = [%s], Af.attribute = [%s]", af.value(), af.attribute());
        log.error(res);
    }

    @Test
    public void test1() {
        Af af = AnnotationUtils.getAnnotation(UserServiceImpl.class, Af.class);
        String res = String.format("AnnotationUtils#getAnnotation(targetClazz, AnnotationClazz) "
                + "AliasForServiceImpl Annotation "
                + "Af.value = [%s], Af.attribute = [%s]", af.value(), af.attribute());
        log.error(res);
    }

    //10:22:26.875 [main] ERROR com.auto.test.AppTest - AliasForServiceImpl Annotation Af.value = [aa], Af.attribute = [bb]
    //10:23:15.393 [main] ERROR com.auto.test.AppTest - AliasForServiceImpl Annotation Af.value = [aa], Af.attribute = [bb]
}
/**
 * Description: StaticAccessNonStatic
 * Author: silence
 * Update: silence(2017-04-20 18:41)
 */
public class StaticAccessNonStatic {

    static public void info() {
        System.out.println("A simple info method");
    }

    public static void main(String[] args) {
        // main is a static method invoked on the class itself rather than on
        // an instance, so it has no implicit 'this'. It can call info()
        // directly only because info() is also declared static; if info()
        // were an instance method, this call would not compile.
        info();
    }
}
import * as React from 'react';

import { renderWithTheme } from '../utils/tests';
import { Button } from './Button';

describe('<Button />', () => {
  it('renders with Dark theme', () => {
    const { container } = renderWithTheme(
      <Button color="primary">Test</Button>,
      'dark'
    );
    expect(container.firstChild).toMatchSnapshot();
    expect(container.firstChild).toHaveStyleRule('color', '#fff');
  });

  it('renders with Light theme', () => {
    const { container } = renderWithTheme(
      <Button color="primary">Test</Button>,
      'light'
    );
    expect(container.firstChild).toMatchSnapshot();
    expect(container.firstChild).toHaveStyleRule('color', '#fff');
  });
});
package page70.q6955; import java.util.Locale; import java.util.Scanner; public class Main { public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); double max = Double.MIN_VALUE, pi = 3.14159; in.nextLine(); while(n-- > 0) { String s[] = in.nextLine().split(" "); if("C".equals(s[0])) { max = Math.max(max, (1/3.0) * pi * Math.pow(Double.parseDouble(s[1]), 2) * Double.parseDouble(s[2])); } else if("L".equals(s[0])) { max = Math.max(max, pi * Math.pow(Double.parseDouble(s[1]), 2) * Double.parseDouble(s[2])); } else { max = Math.max(max, (4/3.0) * pi * Math.pow(Double.parseDouble(s[1]), 3)); } } System.out.printf(Locale.US, "%.3f\n", max); } }
import FirebaseAdmin from 'lesgo/services/FirebaseAdminService';

const firebase = new FirebaseAdmin(); // $ExpectType FirebaseAdmin

// $ExpectType void
firebase.connect({
    serviceAccount: require('path/to/serviceAccountKey.json'),
    projectName: 'testproject',
});

(async () => {
    await firebase.getAllUsers(); // $ExpectType UserRecord[]

    // $ExpectType UserRecord
    await firebase.createUser({
        email: '<EMAIL>',
        username: 'xXLXx',
        password: '<PASSWORD>',
    });

    await firebase.deleteUser('1234'); // $ExpectType void

    await firebase.delete(); // $ExpectType void
})();
# coding: utf-8 from __future__ import absolute_import from datetime import date, datetime # noqa: F401 from typing import List, Dict # noqa: F401 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.base_model_ import Model from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.amf_info import AmfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.ausf_info import AusfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.bsf_info import BsfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.chf_info import ChfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.default_notification_subscription import DefaultNotificationSubscription from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.gmlc_info import GmlcInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.hss_info import HssInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.ipv6_addr import Ipv6Addr from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.lmf_info import LmfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nf_service import NFService from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nf_status import NFStatus from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nf_type import NFType from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nef_info import NefInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nwdaf_info import NwdafInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.pcf_info import PcfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.pcscf_info import PcscfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.plmn_id import PlmnId from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.plmn_id_nid import PlmnIdNid from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.plmn_snssai import PlmnSnssai from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.smf_info import SmfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.snssai import Snssai from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.udm_info import UdmInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.udr_info import UdrInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.udsf_info import UdsfInfo from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.upf_info import UpfInfo from openapi_server import util from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.amf_info import AmfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.ausf_info import AusfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.bsf_info import BsfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.chf_info import ChfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.default_notification_subscription import DefaultNotificationSubscription # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.gmlc_info import GmlcInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.hss_info import HssInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.ipv6_addr import Ipv6Addr # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.lmf_info import LmfInfo # noqa: E501 from 
openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nef_info import NefInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nf_service import NFService # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nf_status import NFStatus # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nf_type import NFType # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.nwdaf_info import NwdafInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.pcf_info import PcfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.pcscf_info import PcscfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.plmn_id import PlmnId # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.plmn_id_nid import PlmnIdNid # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.plmn_snssai import PlmnSnssai # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.smf_info import SmfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.snssai import Snssai # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.udm_info import UdmInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.udr_info import UdrInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.udsf_info import UdsfInfo # noqa: E501 from openapi_server.com.h21lab.TS29510_Nnrf_NFDiscovery.handler.upf_info import UpfInfo # noqa: E501 class NFProfile(Model): """NOTE: This class is auto generated by OpenAPI Generator (https://openapi-generator.tech). Do not edit the class manually. """ def __init__(self, nf_instance_id=None, nf_instance_name=None, nf_type=None, nf_status=None, plmn_list=None, s_nssais=None, per_plmn_snssai_list=None, nsi_list=None, fqdn=None, ipv4_addresses=None, ipv6_addresses=None, capacity=None, load=None, load_time_stamp=None, locality=None, priority=None, udr_info=None, udr_info_ext=None, udm_info=None, udm_info_ext=None, ausf_info=None, ausf_info_ext=None, amf_info=None, amf_info_ext=None, smf_info=None, smf_info_ext=None, upf_info=None, upf_info_ext=None, pcf_info=None, pcf_info_ext=None, bsf_info=None, bsf_info_ext=None, chf_info=None, chf_info_ext=None, udsf_info=None, udsf_info_ext=None, nwdaf_info=None, nef_info=None, pcscf_info=None, hss_info=None, custom_info=None, recovery_time=None, nf_service_persistence=False, nf_services=None, default_notification_subscriptions=None, lmf_info=None, gmlc_info=None, snpn_list=None, nf_set_id_list=None, serving_scope=None, lc_h_support_ind=False, olc_h_support_ind=False): # noqa: E501 """NFProfile - a model defined in OpenAPI :param nf_instance_id: The nf_instance_id of this NFProfile. # noqa: E501 :type nf_instance_id: str :param nf_instance_name: The nf_instance_name of this NFProfile. # noqa: E501 :type nf_instance_name: str :param nf_type: The nf_type of this NFProfile. # noqa: E501 :type nf_type: NFType :param nf_status: The nf_status of this NFProfile. # noqa: E501 :type nf_status: NFStatus :param plmn_list: The plmn_list of this NFProfile. # noqa: E501 :type plmn_list: List[PlmnId] :param s_nssais: The s_nssais of this NFProfile. # noqa: E501 :type s_nssais: List[Snssai] :param per_plmn_snssai_list: The per_plmn_snssai_list of this NFProfile. # noqa: E501 :type per_plmn_snssai_list: List[PlmnSnssai] :param nsi_list: The nsi_list of this NFProfile. 
# noqa: E501 :type nsi_list: List[str] :param fqdn: The fqdn of this NFProfile. # noqa: E501 :type fqdn: str :param ipv4_addresses: The ipv4_addresses of this NFProfile. # noqa: E501 :type ipv4_addresses: List[str] :param ipv6_addresses: The ipv6_addresses of this NFProfile. # noqa: E501 :type ipv6_addresses: List[Ipv6Addr] :param capacity: The capacity of this NFProfile. # noqa: E501 :type capacity: int :param load: The load of this NFProfile. # noqa: E501 :type load: int :param load_time_stamp: The load_time_stamp of this NFProfile. # noqa: E501 :type load_time_stamp: datetime :param locality: The locality of this NFProfile. # noqa: E501 :type locality: str :param priority: The priority of this NFProfile. # noqa: E501 :type priority: int :param udr_info: The udr_info of this NFProfile. # noqa: E501 :type udr_info: UdrInfo :param udr_info_ext: The udr_info_ext of this NFProfile. # noqa: E501 :type udr_info_ext: List[UdrInfo] :param udm_info: The udm_info of this NFProfile. # noqa: E501 :type udm_info: UdmInfo :param udm_info_ext: The udm_info_ext of this NFProfile. # noqa: E501 :type udm_info_ext: List[UdmInfo] :param ausf_info: The ausf_info of this NFProfile. # noqa: E501 :type ausf_info: AusfInfo :param ausf_info_ext: The ausf_info_ext of this NFProfile. # noqa: E501 :type ausf_info_ext: List[AusfInfo] :param amf_info: The amf_info of this NFProfile. # noqa: E501 :type amf_info: AmfInfo :param amf_info_ext: The amf_info_ext of this NFProfile. # noqa: E501 :type amf_info_ext: List[AmfInfo] :param smf_info: The smf_info of this NFProfile. # noqa: E501 :type smf_info: SmfInfo :param smf_info_ext: The smf_info_ext of this NFProfile. # noqa: E501 :type smf_info_ext: List[SmfInfo] :param upf_info: The upf_info of this NFProfile. # noqa: E501 :type upf_info: UpfInfo :param upf_info_ext: The upf_info_ext of this NFProfile. # noqa: E501 :type upf_info_ext: List[UpfInfo] :param pcf_info: The pcf_info of this NFProfile. # noqa: E501 :type pcf_info: PcfInfo :param pcf_info_ext: The pcf_info_ext of this NFProfile. # noqa: E501 :type pcf_info_ext: List[PcfInfo] :param bsf_info: The bsf_info of this NFProfile. # noqa: E501 :type bsf_info: BsfInfo :param bsf_info_ext: The bsf_info_ext of this NFProfile. # noqa: E501 :type bsf_info_ext: List[BsfInfo] :param chf_info: The chf_info of this NFProfile. # noqa: E501 :type chf_info: ChfInfo :param chf_info_ext: The chf_info_ext of this NFProfile. # noqa: E501 :type chf_info_ext: List[ChfInfo] :param udsf_info: The udsf_info of this NFProfile. # noqa: E501 :type udsf_info: UdsfInfo :param udsf_info_ext: The udsf_info_ext of this NFProfile. # noqa: E501 :type udsf_info_ext: List[UdsfInfo] :param nwdaf_info: The nwdaf_info of this NFProfile. # noqa: E501 :type nwdaf_info: NwdafInfo :param nef_info: The nef_info of this NFProfile. # noqa: E501 :type nef_info: NefInfo :param pcscf_info: The pcscf_info of this NFProfile. # noqa: E501 :type pcscf_info: List[PcscfInfo] :param hss_info: The hss_info of this NFProfile. # noqa: E501 :type hss_info: List[HssInfo] :param custom_info: The custom_info of this NFProfile. # noqa: E501 :type custom_info: object :param recovery_time: The recovery_time of this NFProfile. # noqa: E501 :type recovery_time: datetime :param nf_service_persistence: The nf_service_persistence of this NFProfile. # noqa: E501 :type nf_service_persistence: bool :param nf_services: The nf_services of this NFProfile. 
# noqa: E501 :type nf_services: List[NFService] :param default_notification_subscriptions: The default_notification_subscriptions of this NFProfile. # noqa: E501 :type default_notification_subscriptions: List[DefaultNotificationSubscription] :param lmf_info: The lmf_info of this NFProfile. # noqa: E501 :type lmf_info: LmfInfo :param gmlc_info: The gmlc_info of this NFProfile. # noqa: E501 :type gmlc_info: GmlcInfo :param snpn_list: The snpn_list of this NFProfile. # noqa: E501 :type snpn_list: List[PlmnIdNid] :param nf_set_id_list: The nf_set_id_list of this NFProfile. # noqa: E501 :type nf_set_id_list: List[str] :param serving_scope: The serving_scope of this NFProfile. # noqa: E501 :type serving_scope: List[str] :param lc_h_support_ind: The lc_h_support_ind of this NFProfile. # noqa: E501 :type lc_h_support_ind: bool :param olc_h_support_ind: The olc_h_support_ind of this NFProfile. # noqa: E501 :type olc_h_support_ind: bool """ self.openapi_types = { 'nf_instance_id': str, 'nf_instance_name': str, 'nf_type': NFType, 'nf_status': NFStatus, 'plmn_list': List[PlmnId], 's_nssais': List[Snssai], 'per_plmn_snssai_list': List[PlmnSnssai], 'nsi_list': List[str], 'fqdn': str, 'ipv4_addresses': List[str], 'ipv6_addresses': List[Ipv6Addr], 'capacity': int, 'load': int, 'load_time_stamp': datetime, 'locality': str, 'priority': int, 'udr_info': UdrInfo, 'udr_info_ext': List[UdrInfo], 'udm_info': UdmInfo, 'udm_info_ext': List[UdmInfo], 'ausf_info': AusfInfo, 'ausf_info_ext': List[AusfInfo], 'amf_info': AmfInfo, 'amf_info_ext': List[AmfInfo], 'smf_info': SmfInfo, 'smf_info_ext': List[SmfInfo], 'upf_info': UpfInfo, 'upf_info_ext': List[UpfInfo], 'pcf_info': PcfInfo, 'pcf_info_ext': List[PcfInfo], 'bsf_info': BsfInfo, 'bsf_info_ext': List[BsfInfo], 'chf_info': ChfInfo, 'chf_info_ext': List[ChfInfo], 'udsf_info': UdsfInfo, 'udsf_info_ext': List[UdsfInfo], 'nwdaf_info': NwdafInfo, 'nef_info': NefInfo, 'pcscf_info': List[PcscfInfo], 'hss_info': List[HssInfo], 'custom_info': object, 'recovery_time': datetime, 'nf_service_persistence': bool, 'nf_services': List[NFService], 'default_notification_subscriptions': List[DefaultNotificationSubscription], 'lmf_info': LmfInfo, 'gmlc_info': GmlcInfo, 'snpn_list': List[PlmnIdNid], 'nf_set_id_list': List[str], 'serving_scope': List[str], 'lc_h_support_ind': bool, 'olc_h_support_ind': bool } self.attribute_map = { 'nf_instance_id': 'nfInstanceId', 'nf_instance_name': 'nfInstanceName', 'nf_type': 'nfType', 'nf_status': 'nfStatus', 'plmn_list': 'plmnList', 's_nssais': 'sNssais', 'per_plmn_snssai_list': 'perPlmnSnssaiList', 'nsi_list': 'nsiList', 'fqdn': 'fqdn', 'ipv4_addresses': 'ipv4Addresses', 'ipv6_addresses': 'ipv6Addresses', 'capacity': 'capacity', 'load': 'load', 'load_time_stamp': 'loadTimeStamp', 'locality': 'locality', 'priority': 'priority', 'udr_info': 'udrInfo', 'udr_info_ext': 'udrInfoExt', 'udm_info': 'udmInfo', 'udm_info_ext': 'udmInfoExt', 'ausf_info': 'ausfInfo', 'ausf_info_ext': 'ausfInfoExt', 'amf_info': 'amfInfo', 'amf_info_ext': 'amfInfoExt', 'smf_info': 'smfInfo', 'smf_info_ext': 'smfInfoExt', 'upf_info': 'upfInfo', 'upf_info_ext': 'upfInfoExt', 'pcf_info': 'pcfInfo', 'pcf_info_ext': 'pcfInfoExt', 'bsf_info': 'bsfInfo', 'bsf_info_ext': 'bsfInfoExt', 'chf_info': 'chfInfo', 'chf_info_ext': 'chfInfoExt', 'udsf_info': 'udsfInfo', 'udsf_info_ext': 'udsfInfoExt', 'nwdaf_info': 'nwdafInfo', 'nef_info': 'nefInfo', 'pcscf_info': 'pcscfInfo', 'hss_info': 'hssInfo', 'custom_info': 'customInfo', 'recovery_time': 'recoveryTime', 'nf_service_persistence': 
'nfServicePersistence', 'nf_services': 'nfServices', 'default_notification_subscriptions': 'defaultNotificationSubscriptions', 'lmf_info': 'lmfInfo', 'gmlc_info': 'gmlcInfo', 'snpn_list': 'snpnList', 'nf_set_id_list': 'nfSetIdList', 'serving_scope': 'servingScope', 'lc_h_support_ind': 'lcHSupportInd', 'olc_h_support_ind': 'olcHSupportInd' } self._nf_instance_id = nf_instance_id self._nf_instance_name = nf_instance_name self._nf_type = nf_type self._nf_status = nf_status self._plmn_list = plmn_list self._s_nssais = s_nssais self._per_plmn_snssai_list = per_plmn_snssai_list self._nsi_list = nsi_list self._fqdn = fqdn self._ipv4_addresses = ipv4_addresses self._ipv6_addresses = ipv6_addresses self._capacity = capacity self._load = load self._load_time_stamp = load_time_stamp self._locality = locality self._priority = priority self._udr_info = udr_info self._udr_info_ext = udr_info_ext self._udm_info = udm_info self._udm_info_ext = udm_info_ext self._ausf_info = ausf_info self._ausf_info_ext = ausf_info_ext self._amf_info = amf_info self._amf_info_ext = amf_info_ext self._smf_info = smf_info self._smf_info_ext = smf_info_ext self._upf_info = upf_info self._upf_info_ext = upf_info_ext self._pcf_info = pcf_info self._pcf_info_ext = pcf_info_ext self._bsf_info = bsf_info self._bsf_info_ext = bsf_info_ext self._chf_info = chf_info self._chf_info_ext = chf_info_ext self._udsf_info = udsf_info self._udsf_info_ext = udsf_info_ext self._nwdaf_info = nwdaf_info self._nef_info = nef_info self._pcscf_info = pcscf_info self._hss_info = hss_info self._custom_info = custom_info self._recovery_time = recovery_time self._nf_service_persistence = nf_service_persistence self._nf_services = nf_services self._default_notification_subscriptions = default_notification_subscriptions self._lmf_info = lmf_info self._gmlc_info = gmlc_info self._snpn_list = snpn_list self._nf_set_id_list = nf_set_id_list self._serving_scope = serving_scope self._lc_h_support_ind = lc_h_support_ind self._olc_h_support_ind = olc_h_support_ind @classmethod def from_dict(cls, dikt) -> 'NFProfile': """Returns the dict as a model :param dikt: A dict. :type: dict :return: The NFProfile of this NFProfile. # noqa: E501 :rtype: NFProfile """ return util.deserialize_model(dikt, cls) @property def nf_instance_id(self): """Gets the nf_instance_id of this NFProfile. :return: The nf_instance_id of this NFProfile. :rtype: str """ return self._nf_instance_id @nf_instance_id.setter def nf_instance_id(self, nf_instance_id): """Sets the nf_instance_id of this NFProfile. :param nf_instance_id: The nf_instance_id of this NFProfile. :type nf_instance_id: str """ if nf_instance_id is None: raise ValueError("Invalid value for `nf_instance_id`, must not be `None`") # noqa: E501 self._nf_instance_id = nf_instance_id @property def nf_instance_name(self): """Gets the nf_instance_name of this NFProfile. :return: The nf_instance_name of this NFProfile. :rtype: str """ return self._nf_instance_name @nf_instance_name.setter def nf_instance_name(self, nf_instance_name): """Sets the nf_instance_name of this NFProfile. :param nf_instance_name: The nf_instance_name of this NFProfile. :type nf_instance_name: str """ self._nf_instance_name = nf_instance_name @property def nf_type(self): """Gets the nf_type of this NFProfile. :return: The nf_type of this NFProfile. :rtype: NFType """ return self._nf_type @nf_type.setter def nf_type(self, nf_type): """Sets the nf_type of this NFProfile. :param nf_type: The nf_type of this NFProfile. 
:type nf_type: NFType """ if nf_type is None: raise ValueError("Invalid value for `nf_type`, must not be `None`") # noqa: E501 self._nf_type = nf_type @property def nf_status(self): """Gets the nf_status of this NFProfile. :return: The nf_status of this NFProfile. :rtype: NFStatus """ return self._nf_status @nf_status.setter def nf_status(self, nf_status): """Sets the nf_status of this NFProfile. :param nf_status: The nf_status of this NFProfile. :type nf_status: NFStatus """ if nf_status is None: raise ValueError("Invalid value for `nf_status`, must not be `None`") # noqa: E501 self._nf_status = nf_status @property def plmn_list(self): """Gets the plmn_list of this NFProfile. :return: The plmn_list of this NFProfile. :rtype: List[PlmnId] """ return self._plmn_list @plmn_list.setter def plmn_list(self, plmn_list): """Sets the plmn_list of this NFProfile. :param plmn_list: The plmn_list of this NFProfile. :type plmn_list: List[PlmnId] """ self._plmn_list = plmn_list @property def s_nssais(self): """Gets the s_nssais of this NFProfile. :return: The s_nssais of this NFProfile. :rtype: List[Snssai] """ return self._s_nssais @s_nssais.setter def s_nssais(self, s_nssais): """Sets the s_nssais of this NFProfile. :param s_nssais: The s_nssais of this NFProfile. :type s_nssais: List[Snssai] """ self._s_nssais = s_nssais @property def per_plmn_snssai_list(self): """Gets the per_plmn_snssai_list of this NFProfile. :return: The per_plmn_snssai_list of this NFProfile. :rtype: List[PlmnSnssai] """ return self._per_plmn_snssai_list @per_plmn_snssai_list.setter def per_plmn_snssai_list(self, per_plmn_snssai_list): """Sets the per_plmn_snssai_list of this NFProfile. :param per_plmn_snssai_list: The per_plmn_snssai_list of this NFProfile. :type per_plmn_snssai_list: List[PlmnSnssai] """ self._per_plmn_snssai_list = per_plmn_snssai_list @property def nsi_list(self): """Gets the nsi_list of this NFProfile. :return: The nsi_list of this NFProfile. :rtype: List[str] """ return self._nsi_list @nsi_list.setter def nsi_list(self, nsi_list): """Sets the nsi_list of this NFProfile. :param nsi_list: The nsi_list of this NFProfile. :type nsi_list: List[str] """ self._nsi_list = nsi_list @property def fqdn(self): """Gets the fqdn of this NFProfile. Fully Qualified Domain Name # noqa: E501 :return: The fqdn of this NFProfile. :rtype: str """ return self._fqdn @fqdn.setter def fqdn(self, fqdn): """Sets the fqdn of this NFProfile. Fully Qualified Domain Name # noqa: E501 :param fqdn: The fqdn of this NFProfile. :type fqdn: str """ self._fqdn = fqdn @property def ipv4_addresses(self): """Gets the ipv4_addresses of this NFProfile. :return: The ipv4_addresses of this NFProfile. :rtype: List[str] """ return self._ipv4_addresses @ipv4_addresses.setter def ipv4_addresses(self, ipv4_addresses): """Sets the ipv4_addresses of this NFProfile. :param ipv4_addresses: The ipv4_addresses of this NFProfile. :type ipv4_addresses: List[str] """ self._ipv4_addresses = ipv4_addresses @property def ipv6_addresses(self): """Gets the ipv6_addresses of this NFProfile. :return: The ipv6_addresses of this NFProfile. :rtype: List[Ipv6Addr] """ return self._ipv6_addresses @ipv6_addresses.setter def ipv6_addresses(self, ipv6_addresses): """Sets the ipv6_addresses of this NFProfile. :param ipv6_addresses: The ipv6_addresses of this NFProfile. :type ipv6_addresses: List[Ipv6Addr] """ self._ipv6_addresses = ipv6_addresses @property def capacity(self): """Gets the capacity of this NFProfile. :return: The capacity of this NFProfile. 
:rtype: int """ return self._capacity @capacity.setter def capacity(self, capacity): """Sets the capacity of this NFProfile. :param capacity: The capacity of this NFProfile. :type capacity: int """ if capacity is not None and capacity > 65535: # noqa: E501 raise ValueError("Invalid value for `capacity`, must be a value less than or equal to `65535`") # noqa: E501 if capacity is not None and capacity < 0: # noqa: E501 raise ValueError("Invalid value for `capacity`, must be a value greater than or equal to `0`") # noqa: E501 self._capacity = capacity @property def load(self): """Gets the load of this NFProfile. :return: The load of this NFProfile. :rtype: int """ return self._load @load.setter def load(self, load): """Sets the load of this NFProfile. :param load: The load of this NFProfile. :type load: int """ if load is not None and load > 100: # noqa: E501 raise ValueError("Invalid value for `load`, must be a value less than or equal to `100`") # noqa: E501 if load is not None and load < 0: # noqa: E501 raise ValueError("Invalid value for `load`, must be a value greater than or equal to `0`") # noqa: E501 self._load = load @property def load_time_stamp(self): """Gets the load_time_stamp of this NFProfile. :return: The load_time_stamp of this NFProfile. :rtype: datetime """ return self._load_time_stamp @load_time_stamp.setter def load_time_stamp(self, load_time_stamp): """Sets the load_time_stamp of this NFProfile. :param load_time_stamp: The load_time_stamp of this NFProfile. :type load_time_stamp: datetime """ self._load_time_stamp = load_time_stamp @property def locality(self): """Gets the locality of this NFProfile. :return: The locality of this NFProfile. :rtype: str """ return self._locality @locality.setter def locality(self, locality): """Sets the locality of this NFProfile. :param locality: The locality of this NFProfile. :type locality: str """ self._locality = locality @property def priority(self): """Gets the priority of this NFProfile. :return: The priority of this NFProfile. :rtype: int """ return self._priority @priority.setter def priority(self, priority): """Sets the priority of this NFProfile. :param priority: The priority of this NFProfile. :type priority: int """ if priority is not None and priority > 65535: # noqa: E501 raise ValueError("Invalid value for `priority`, must be a value less than or equal to `65535`") # noqa: E501 if priority is not None and priority < 0: # noqa: E501 raise ValueError("Invalid value for `priority`, must be a value greater than or equal to `0`") # noqa: E501 self._priority = priority @property def udr_info(self): """Gets the udr_info of this NFProfile. :return: The udr_info of this NFProfile. :rtype: UdrInfo """ return self._udr_info @udr_info.setter def udr_info(self, udr_info): """Sets the udr_info of this NFProfile. :param udr_info: The udr_info of this NFProfile. :type udr_info: UdrInfo """ self._udr_info = udr_info @property def udr_info_ext(self): """Gets the udr_info_ext of this NFProfile. :return: The udr_info_ext of this NFProfile. :rtype: List[UdrInfo] """ return self._udr_info_ext @udr_info_ext.setter def udr_info_ext(self, udr_info_ext): """Sets the udr_info_ext of this NFProfile. :param udr_info_ext: The udr_info_ext of this NFProfile. :type udr_info_ext: List[UdrInfo] """ self._udr_info_ext = udr_info_ext @property def udm_info(self): """Gets the udm_info of this NFProfile. :return: The udm_info of this NFProfile. 
:rtype: UdmInfo """ return self._udm_info @udm_info.setter def udm_info(self, udm_info): """Sets the udm_info of this NFProfile. :param udm_info: The udm_info of this NFProfile. :type udm_info: UdmInfo """ self._udm_info = udm_info @property def udm_info_ext(self): """Gets the udm_info_ext of this NFProfile. :return: The udm_info_ext of this NFProfile. :rtype: List[UdmInfo] """ return self._udm_info_ext @udm_info_ext.setter def udm_info_ext(self, udm_info_ext): """Sets the udm_info_ext of this NFProfile. :param udm_info_ext: The udm_info_ext of this NFProfile. :type udm_info_ext: List[UdmInfo] """ self._udm_info_ext = udm_info_ext @property def ausf_info(self): """Gets the ausf_info of this NFProfile. :return: The ausf_info of this NFProfile. :rtype: AusfInfo """ return self._ausf_info @ausf_info.setter def ausf_info(self, ausf_info): """Sets the ausf_info of this NFProfile. :param ausf_info: The ausf_info of this NFProfile. :type ausf_info: AusfInfo """ self._ausf_info = ausf_info @property def ausf_info_ext(self): """Gets the ausf_info_ext of this NFProfile. :return: The ausf_info_ext of this NFProfile. :rtype: List[AusfInfo] """ return self._ausf_info_ext @ausf_info_ext.setter def ausf_info_ext(self, ausf_info_ext): """Sets the ausf_info_ext of this NFProfile. :param ausf_info_ext: The ausf_info_ext of this NFProfile. :type ausf_info_ext: List[AusfInfo] """ self._ausf_info_ext = ausf_info_ext @property def amf_info(self): """Gets the amf_info of this NFProfile. :return: The amf_info of this NFProfile. :rtype: AmfInfo """ return self._amf_info @amf_info.setter def amf_info(self, amf_info): """Sets the amf_info of this NFProfile. :param amf_info: The amf_info of this NFProfile. :type amf_info: AmfInfo """ self._amf_info = amf_info @property def amf_info_ext(self): """Gets the amf_info_ext of this NFProfile. :return: The amf_info_ext of this NFProfile. :rtype: List[AmfInfo] """ return self._amf_info_ext @amf_info_ext.setter def amf_info_ext(self, amf_info_ext): """Sets the amf_info_ext of this NFProfile. :param amf_info_ext: The amf_info_ext of this NFProfile. :type amf_info_ext: List[AmfInfo] """ self._amf_info_ext = amf_info_ext @property def smf_info(self): """Gets the smf_info of this NFProfile. :return: The smf_info of this NFProfile. :rtype: SmfInfo """ return self._smf_info @smf_info.setter def smf_info(self, smf_info): """Sets the smf_info of this NFProfile. :param smf_info: The smf_info of this NFProfile. :type smf_info: SmfInfo """ self._smf_info = smf_info @property def smf_info_ext(self): """Gets the smf_info_ext of this NFProfile. :return: The smf_info_ext of this NFProfile. :rtype: List[SmfInfo] """ return self._smf_info_ext @smf_info_ext.setter def smf_info_ext(self, smf_info_ext): """Sets the smf_info_ext of this NFProfile. :param smf_info_ext: The smf_info_ext of this NFProfile. :type smf_info_ext: List[SmfInfo] """ self._smf_info_ext = smf_info_ext @property def upf_info(self): """Gets the upf_info of this NFProfile. :return: The upf_info of this NFProfile. :rtype: UpfInfo """ return self._upf_info @upf_info.setter def upf_info(self, upf_info): """Sets the upf_info of this NFProfile. :param upf_info: The upf_info of this NFProfile. :type upf_info: UpfInfo """ self._upf_info = upf_info @property def upf_info_ext(self): """Gets the upf_info_ext of this NFProfile. :return: The upf_info_ext of this NFProfile. :rtype: List[UpfInfo] """ return self._upf_info_ext @upf_info_ext.setter def upf_info_ext(self, upf_info_ext): """Sets the upf_info_ext of this NFProfile. 
:param upf_info_ext: The upf_info_ext of this NFProfile. :type upf_info_ext: List[UpfInfo] """ self._upf_info_ext = upf_info_ext @property def pcf_info(self): """Gets the pcf_info of this NFProfile. :return: The pcf_info of this NFProfile. :rtype: PcfInfo """ return self._pcf_info @pcf_info.setter def pcf_info(self, pcf_info): """Sets the pcf_info of this NFProfile. :param pcf_info: The pcf_info of this NFProfile. :type pcf_info: PcfInfo """ self._pcf_info = pcf_info @property def pcf_info_ext(self): """Gets the pcf_info_ext of this NFProfile. :return: The pcf_info_ext of this NFProfile. :rtype: List[PcfInfo] """ return self._pcf_info_ext @pcf_info_ext.setter def pcf_info_ext(self, pcf_info_ext): """Sets the pcf_info_ext of this NFProfile. :param pcf_info_ext: The pcf_info_ext of this NFProfile. :type pcf_info_ext: List[PcfInfo] """ self._pcf_info_ext = pcf_info_ext @property def bsf_info(self): """Gets the bsf_info of this NFProfile. :return: The bsf_info of this NFProfile. :rtype: BsfInfo """ return self._bsf_info @bsf_info.setter def bsf_info(self, bsf_info): """Sets the bsf_info of this NFProfile. :param bsf_info: The bsf_info of this NFProfile. :type bsf_info: BsfInfo """ self._bsf_info = bsf_info @property def bsf_info_ext(self): """Gets the bsf_info_ext of this NFProfile. :return: The bsf_info_ext of this NFProfile. :rtype: List[BsfInfo] """ return self._bsf_info_ext @bsf_info_ext.setter def bsf_info_ext(self, bsf_info_ext): """Sets the bsf_info_ext of this NFProfile. :param bsf_info_ext: The bsf_info_ext of this NFProfile. :type bsf_info_ext: List[BsfInfo] """ self._bsf_info_ext = bsf_info_ext @property def chf_info(self): """Gets the chf_info of this NFProfile. :return: The chf_info of this NFProfile. :rtype: ChfInfo """ return self._chf_info @chf_info.setter def chf_info(self, chf_info): """Sets the chf_info of this NFProfile. :param chf_info: The chf_info of this NFProfile. :type chf_info: ChfInfo """ self._chf_info = chf_info @property def chf_info_ext(self): """Gets the chf_info_ext of this NFProfile. :return: The chf_info_ext of this NFProfile. :rtype: List[ChfInfo] """ return self._chf_info_ext @chf_info_ext.setter def chf_info_ext(self, chf_info_ext): """Sets the chf_info_ext of this NFProfile. :param chf_info_ext: The chf_info_ext of this NFProfile. :type chf_info_ext: List[ChfInfo] """ self._chf_info_ext = chf_info_ext @property def udsf_info(self): """Gets the udsf_info of this NFProfile. :return: The udsf_info of this NFProfile. :rtype: UdsfInfo """ return self._udsf_info @udsf_info.setter def udsf_info(self, udsf_info): """Sets the udsf_info of this NFProfile. :param udsf_info: The udsf_info of this NFProfile. :type udsf_info: UdsfInfo """ self._udsf_info = udsf_info @property def udsf_info_ext(self): """Gets the udsf_info_ext of this NFProfile. :return: The udsf_info_ext of this NFProfile. :rtype: List[UdsfInfo] """ return self._udsf_info_ext @udsf_info_ext.setter def udsf_info_ext(self, udsf_info_ext): """Sets the udsf_info_ext of this NFProfile. :param udsf_info_ext: The udsf_info_ext of this NFProfile. :type udsf_info_ext: List[UdsfInfo] """ self._udsf_info_ext = udsf_info_ext @property def nwdaf_info(self): """Gets the nwdaf_info of this NFProfile. :return: The nwdaf_info of this NFProfile. :rtype: NwdafInfo """ return self._nwdaf_info @nwdaf_info.setter def nwdaf_info(self, nwdaf_info): """Sets the nwdaf_info of this NFProfile. :param nwdaf_info: The nwdaf_info of this NFProfile. 
:type nwdaf_info: NwdafInfo """ self._nwdaf_info = nwdaf_info @property def nef_info(self): """Gets the nef_info of this NFProfile. :return: The nef_info of this NFProfile. :rtype: NefInfo """ return self._nef_info @nef_info.setter def nef_info(self, nef_info): """Sets the nef_info of this NFProfile. :param nef_info: The nef_info of this NFProfile. :type nef_info: NefInfo """ self._nef_info = nef_info @property def pcscf_info(self): """Gets the pcscf_info of this NFProfile. :return: The pcscf_info of this NFProfile. :rtype: List[PcscfInfo] """ return self._pcscf_info @pcscf_info.setter def pcscf_info(self, pcscf_info): """Sets the pcscf_info of this NFProfile. :param pcscf_info: The pcscf_info of this NFProfile. :type pcscf_info: List[PcscfInfo] """ self._pcscf_info = pcscf_info @property def hss_info(self): """Gets the hss_info of this NFProfile. :return: The hss_info of this NFProfile. :rtype: List[HssInfo] """ return self._hss_info @hss_info.setter def hss_info(self, hss_info): """Sets the hss_info of this NFProfile. :param hss_info: The hss_info of this NFProfile. :type hss_info: List[HssInfo] """ self._hss_info = hss_info @property def custom_info(self): """Gets the custom_info of this NFProfile. :return: The custom_info of this NFProfile. :rtype: object """ return self._custom_info @custom_info.setter def custom_info(self, custom_info): """Sets the custom_info of this NFProfile. :param custom_info: The custom_info of this NFProfile. :type custom_info: object """ self._custom_info = custom_info @property def recovery_time(self): """Gets the recovery_time of this NFProfile. :return: The recovery_time of this NFProfile. :rtype: datetime """ return self._recovery_time @recovery_time.setter def recovery_time(self, recovery_time): """Sets the recovery_time of this NFProfile. :param recovery_time: The recovery_time of this NFProfile. :type recovery_time: datetime """ self._recovery_time = recovery_time @property def nf_service_persistence(self): """Gets the nf_service_persistence of this NFProfile. :return: The nf_service_persistence of this NFProfile. :rtype: bool """ return self._nf_service_persistence @nf_service_persistence.setter def nf_service_persistence(self, nf_service_persistence): """Sets the nf_service_persistence of this NFProfile. :param nf_service_persistence: The nf_service_persistence of this NFProfile. :type nf_service_persistence: bool """ self._nf_service_persistence = nf_service_persistence @property def nf_services(self): """Gets the nf_services of this NFProfile. :return: The nf_services of this NFProfile. :rtype: List[NFService] """ return self._nf_services @nf_services.setter def nf_services(self, nf_services): """Sets the nf_services of this NFProfile. :param nf_services: The nf_services of this NFProfile. :type nf_services: List[NFService] """ self._nf_services = nf_services @property def default_notification_subscriptions(self): """Gets the default_notification_subscriptions of this NFProfile. :return: The default_notification_subscriptions of this NFProfile. :rtype: List[DefaultNotificationSubscription] """ return self._default_notification_subscriptions @default_notification_subscriptions.setter def default_notification_subscriptions(self, default_notification_subscriptions): """Sets the default_notification_subscriptions of this NFProfile. :param default_notification_subscriptions: The default_notification_subscriptions of this NFProfile. 
:type default_notification_subscriptions: List[DefaultNotificationSubscription] """ self._default_notification_subscriptions = default_notification_subscriptions @property def lmf_info(self): """Gets the lmf_info of this NFProfile. :return: The lmf_info of this NFProfile. :rtype: LmfInfo """ return self._lmf_info @lmf_info.setter def lmf_info(self, lmf_info): """Sets the lmf_info of this NFProfile. :param lmf_info: The lmf_info of this NFProfile. :type lmf_info: LmfInfo """ self._lmf_info = lmf_info @property def gmlc_info(self): """Gets the gmlc_info of this NFProfile. :return: The gmlc_info of this NFProfile. :rtype: GmlcInfo """ return self._gmlc_info @gmlc_info.setter def gmlc_info(self, gmlc_info): """Sets the gmlc_info of this NFProfile. :param gmlc_info: The gmlc_info of this NFProfile. :type gmlc_info: GmlcInfo """ self._gmlc_info = gmlc_info @property def snpn_list(self): """Gets the snpn_list of this NFProfile. :return: The snpn_list of this NFProfile. :rtype: List[PlmnIdNid] """ return self._snpn_list @snpn_list.setter def snpn_list(self, snpn_list): """Sets the snpn_list of this NFProfile. :param snpn_list: The snpn_list of this NFProfile. :type snpn_list: List[PlmnIdNid] """ self._snpn_list = snpn_list @property def nf_set_id_list(self): """Gets the nf_set_id_list of this NFProfile. :return: The nf_set_id_list of this NFProfile. :rtype: List[str] """ return self._nf_set_id_list @nf_set_id_list.setter def nf_set_id_list(self, nf_set_id_list): """Sets the nf_set_id_list of this NFProfile. :param nf_set_id_list: The nf_set_id_list of this NFProfile. :type nf_set_id_list: List[str] """ self._nf_set_id_list = nf_set_id_list @property def serving_scope(self): """Gets the serving_scope of this NFProfile. :return: The serving_scope of this NFProfile. :rtype: List[str] """ return self._serving_scope @serving_scope.setter def serving_scope(self, serving_scope): """Sets the serving_scope of this NFProfile. :param serving_scope: The serving_scope of this NFProfile. :type serving_scope: List[str] """ self._serving_scope = serving_scope @property def lc_h_support_ind(self): """Gets the lc_h_support_ind of this NFProfile. :return: The lc_h_support_ind of this NFProfile. :rtype: bool """ return self._lc_h_support_ind @lc_h_support_ind.setter def lc_h_support_ind(self, lc_h_support_ind): """Sets the lc_h_support_ind of this NFProfile. :param lc_h_support_ind: The lc_h_support_ind of this NFProfile. :type lc_h_support_ind: bool """ self._lc_h_support_ind = lc_h_support_ind @property def olc_h_support_ind(self): """Gets the olc_h_support_ind of this NFProfile. :return: The olc_h_support_ind of this NFProfile. :rtype: bool """ return self._olc_h_support_ind @olc_h_support_ind.setter def olc_h_support_ind(self, olc_h_support_ind): """Sets the olc_h_support_ind of this NFProfile. :param olc_h_support_ind: The olc_h_support_ind of this NFProfile. :type olc_h_support_ind: bool """ self._olc_h_support_ind = olc_h_support_ind
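
# Illustrative usage sketch (not part of the generated code): populating the
# model from a decoded JSON payload via `from_dict()`. The camelCase keys come
# from `attribute_map` above; we assume that the generated
# `util.deserialize_model()` accepts plain strings for the enum-typed fields
# (nfType, nfStatus) -- verify against the generated util module.
if __name__ == '__main__':
    payload = {
        'nfInstanceId' : '6ba7b810-9dad-11d1-80b4-00c04fd430c8',
        'nfType'       : 'SMF',
        'nfStatus'     : 'REGISTERED',
        'ipv4Addresses': ['198.51.100.10'],
        'priority'     : 1,
    }
    profile = NFProfile.from_dict(payload)
    print(profile.nf_instance_id, profile.nf_type, profile.priority)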
__author__ = "<NAME>, <NAME>, <NAME>" __copyright__ = "Copyright 2012-2013, The SAGA Project" __license__ = "MIT" ''' SLURM job adaptor implementation ''' import re import os import math import time import datetime import tempfile import radical.utils as ru from ...job import constants as c from ...utils import pty_shell as rsups from ... import job as api_job from ... import exceptions as rse from ... import filesystem as sfs from .. import base as a_base from ..cpi import job as cpi_job from ..cpi import decorators as cpi_decs from ...utils.job import TransferDirectives SYNC_CALL = cpi_decs.SYNC_CALL ASYNC_CALL = cpi_decs.ASYNC_CALL # ------------------------------------------------------------------------------ # some private defs # _PTY_TIMEOUT = 2.0 # ------------------------------------------------------------------------------ # the adaptor name # _ADAPTOR_NAME = "radical.saga.adaptors.slurm_job" _ADAPTOR_SCHEMAS = ["slurm", "slurm+ssh", "slurm+gsissh"] _ADAPTOR_OPTIONS = [] # ------------------------------------------------------------------------------ # the adaptor capabilities & supported attributes # # TODO: FILL ALL IN FOR SLURM _ADAPTOR_CAPABILITIES = { "jdes_attributes" : [c.NAME, c.EXECUTABLE, c.PRE_EXEC, c.POST_EXEC, c.ARGUMENTS, c.ENVIRONMENT, c.SPMD_VARIATION, c.TOTAL_CPU_COUNT, c.TOTAL_GPU_COUNT, c.NUMBER_OF_PROCESSES, c.PROCESSES_PER_HOST, c.THREADS_PER_PROCESS, c.WORKING_DIRECTORY, # c.INTERACTIVE, c.INPUT, c.OUTPUT, c.ERROR, c.FILE_TRANSFER, c.CLEANUP, c.WALL_TIME_LIMIT, c.TOTAL_PHYSICAL_MEMORY, c.SYSTEM_ARCHITECTURE, # c.OPERATING_SYSTEM_TYPE, c.CANDIDATE_HOSTS, c.QUEUE, c.PROJECT, c.JOB_CONTACT], "job_attributes" : [c.EXIT_CODE, c.EXECUTION_HOSTS, c.CREATED, c.STARTED, c.FINISHED], "metrics" : [c.STATE, c.STATE_DETAIL], "contexts" : {"ssh" : "public/private keypair", "x509" : "X509 proxy for gsissh", "userpass" : "<PASSWORD>/<PASSWORD> simple ssh"} } # ------------------------------------------------------------------------------ # the adaptor documentation # # General Notes # ************* # On Stampede, returning a non-zero exit code results in the scheduler # putting the job into a FAILED state and assigning it an exit code of # 127. # **Example:** # js = rs.job.Service("slurm+ssh://stampede") # jd.executable = '/bin/exit' # jd.arguments = ['3'] # job = js.create_job(jd) # job.run() # Will return something similar to (personal account information # removed):: # (ve) ashleyz@login1:~$ scontrol show job 309684 # JobId=309684 Name=SlurmJob # UserId=_____ GroupId=__________ # Priority=3000 Account=_____ QOS=normal # JobState=FAILED Reason=NonZeroExitCode Dependency=(null) # Requeue=0 Restarts=0 BatchFlag=1 ExitCode=127:0 # RunTime=00:00:05 TimeLimit=00:01:00 TimeMin=N/A # SubmitTime=2013-02-22T20:26:50 EligibleTime=2013-02-22T20:26:50 # StartTime=2013-02-22T20:26:50 EndTime=2013-02-22T20:26:55 # PreemptTime=None SuspendTime=None SecsPreSuspend=0 # Partition=development AllocNode:Sid=login1:12070 # ReqNodeList=(null) ExcNodeList=(null) # NodeList=c557-401 # BatchHost=c557-401 # NumNodes=1 NumCPUs=16 CPUs/Task=1 ReqS:C:T=*:*:* # MinCPUsNode=1 MinMemoryNode=0 MinTmpDiskNode=0 # Features=(null) Gres=(null) Reservation=(null) # Shared=0 Contiguous=0 Licenses=(null) Network=(null) # Command=/home1/01414/_______/.saga/adaptors/slurm_job/wrapper.sh # WorkDir=/home1/01414/_______/ # I'm not sure how to fix this for the time being. # Suspend/resume do not appear to be supported for regular # users on Stampede. 
# run_job is not supported, as there are many attributes (queues,
# projects, etc.) which need to be passed to the adaptor. I could
# add URL parsing so that you could pile on queue/project/jobname
# information if this has any strong use cases, but I avoided doing
# so for now to decrease complexity/possible confusion.
#
# Cancelling a job with scancel puts it into a COMPLETING state, which
# is parsed by the SLURM status parser as rs.job.RUNNING (see the
# SLURM docs: COMPLETING is a state a job goes into when it is done
# running but still flushing IO/etc). Anyhow, I put some code in to
# manually put the job into CANCELED state when the job is canceled,
# but I'm not sure that this is reported correctly everywhere yet.
#
# What exit code should be returned for a CANCELED job?

_ADAPTOR_DOC = {
    "name"         : _ADAPTOR_NAME,
    "cfg_options"  : _ADAPTOR_OPTIONS,
    "capabilities" : _ADAPTOR_CAPABILITIES,
    "description"  : '''
        The SLURM adaptor allows one to run and manage jobs on a
        `SLURM <https://computing.llnl.gov/linux/slurm/>`_ HPC cluster.

        Implementation Notes
        ********************

        - If scontrol can't find an exit code, it returns None
          (see _job_get_exit_code)
        - If scancel can't cancel a job, we raise an exception
          (see _job_cancel)
        - If we can't suspend a job with scontrol suspend, we raise an
          exception (see _job_suspend). scontrol suspend is NOT supported
          on Stampede.
        - I started to implement a dictionary to keep track of jobs locally.
          It works to the point where the unit tests pass, but I have not
          gone over this extensively...
        - Relating to the above, _job_get_info is written, but
          unused/untested (mostly from the PBS adaptor)
        ''',
    "example"      : "examples/jobs/slurmjob.py",
    "schemas"      : {"slurm"        : "connect to a local cluster",
                      "slurm+ssh"    : "connect to a remote cluster via SSH",
                      "slurm+gsissh" : "connect to a remote cluster via GSISSH"}
}

# ------------------------------------------------------------------------------
# the adaptor info is used to register the adaptor with SAGA
_ADAPTOR_INFO = {
    "name"         : _ADAPTOR_NAME,
    "version"      : "v0.2.1",
    "schemas"      : _ADAPTOR_SCHEMAS,
    "capabilities" : _ADAPTOR_CAPABILITIES,
    "cpis"         : [
        {
            "type"  : "radical.saga.job.Service",
            "class" : "SLURMJobService"
        },
        {
            "type"  : "radical.saga.job.Job",
            "class" : "SLURMJob"
        }
    ]
}


################################################################################
#
# The adaptor class
#
class Adaptor(a_base.Base):
    '''
    This is the actual adaptor class, which gets loaded by SAGA (i.e. by the
    SAGA engine), and which registers the CPI implementation classes which
    provide the adaptor's functionality.
    '''

    # --------------------------------------------------------------------------
    #
    def __init__(self):

        a_base.Base.__init__(self, _ADAPTOR_INFO, _ADAPTOR_OPTIONS)

        self.id_re = re.compile(r'^\[(.*)\]-\[(.*?)\]$')
        self.epoch = datetime.datetime(1970, 1, 1)

    # --------------------------------------------------------------------------
    #
    def sanity_check(self):
        pass

    def parse_id(self, id):
        # split the id '[rm]-[pid]' in its parts, and return them.
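        # Illustrative example (hypothetical host): ids produced by this
        # adaptor look like '[slurm+ssh://cluster.example.org]-[309684]',
        # which parses into ('slurm+ssh://cluster.example.org', '309684').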
match = self.id_re.match(id) if not match or len(match.groups()) != 2: raise rse.BadParameter("Cannot parse job id '%s'" % id) return(match.group(1), match.group(2)) ############################################################################### # class SLURMJobService(cpi_job.Service): ''' Implements cpi_job.Service ''' # -------------------------------------------------------------------------- # def __init__(self, api, adaptor): _cpi_base = super(SLURMJobService, self) _cpi_base.__init__(api, adaptor) # TODO make sure this formats properly and works right! self.exit_code_re = re.compile(r"\bExitCode \b=(\d*)", re.VERBOSE) self.scontrol_jobstate_re = re.compile(r"\bJobState \b=(\S*)", re.VERBOSE) self.scontrol_job_name_re = re.compile(r"\bJobName \b=(\S*)", re.VERBOSE) self.scontrol_create_time_re = re.compile(r"\bSubmitTime\b=(\S*)", re.VERBOSE) self.scontrol_start_time_re = re.compile(r"\bStartTime \b=(\S*)", re.VERBOSE) self.scontrol_end_time_re = re.compile(r"\bEndTime \b=(\S*)", re.VERBOSE) self.scontrol_comp_time_re = re.compile(r"\bRunTime \b=(\S*)", re.VERBOSE) self.scontrol_exec_hosts_re = re.compile(r"\bNodeList \b=(\S*)", re.VERBOSE) # these are the commands that we need in order to interact with SLURM # the adaptor will try to find them when it first opens the shell # connection, and bails out in case they are not available. self._commands = {'sbatch' : None, 'squeue' : None, 'scontrol': None, 'scancel' : None} # -------------------------------------------------------------------------- # def __del__(self): try: if self.shell: del(self.shell) except: pass # -------------------------------------------------------------------------- # @SYNC_CALL def init_instance(self, adaptor_state, rm_url, session): ''' Service instance constructor ''' self.rm = rm_url self.session = session self.jobs = {} self._open() return self.get_api() # -------------------------------------------------------------------------- # def close(self): if self.shell: self.shell.finalize(True) # -------------------------------------------------------------------------- # def _open(self): ''' Open our persistent shell for this job adaptor. We use the pty_shell functionality for this. ''' # check to see what kind of connection we will want to create if self.rm.schema == "slurm": shell_schema = "fork://" elif self.rm.schema == "slurm+ssh": shell_schema = "ssh://" elif self.rm.schema == "slurm+gsissh": shell_schema = "gsissh://" else: raise rse.IncorrectURL("Schema %s not supported by SLURM adaptor." % self.rm.schema) # <scheme>://<user>:<pass>@<host>:<port>/<path>?<query>#<fragment> # build our shell URL shell_url = shell_schema # did we provide a username and password? 
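        # Illustrative examples (hypothetical hosts): an rm URL of
        # 'slurm+ssh://user:pass@cluster.example.org:2222' is rewritten below
        # into the shell URL 'ssh://user:pass@cluster.example.org:2222', while
        # a plain 'slurm://localhost' rm runs over a local 'fork://' shell.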
if self.rm.username and self.rm.password: shell_url += self.rm.username + ":" + self.rm.password + "@" # only provided a username if self.rm.username and not self.rm.password: shell_url += self.rm.username + "@" # add hostname shell_url += self.rm.host # add port if self.rm.port: shell_url += ":" + str(self.rm.port) shell_url = ru.Url(shell_url) # establish shell connection self._logger.debug("Opening shell of type: %s" % shell_url) self.shell = rsups.PTYShell(shell_url, self.session, self._logger) # verify our SLURM environment contains the commands we need for this # adaptor to work properly self._logger.debug("Verifying existence of remote SLURM tools.") for cmd in list(self._commands.keys()): ret, out, _ = self.shell.run_sync("which %s " % cmd) if ret != 0: message = "Error finding SLURM tool %s on remote server %s!\n" \ "Locations searched:\n%s\n" \ "Is SLURM installed on that machine? " \ "If so, is your remote SLURM environment "\ "configured properly? " % (cmd, self.rm, out) raise rse.NoSuccess._log(self._logger, message) self._logger.debug("got cmd prompt (%s)(%s)" % (ret, out)) self.rm.detected_username = self.rm.username # figure out username if it wasn't made explicit # important if .ssh/config info read+connected with # a different username than what we expect if not self.rm.username: self._logger.debug("No username provided in URL %s, so we are" " going to find it with whoami" % self.rm) ret, out, _ = self.shell.run_sync("whoami") self.rm.detected_username = out.strip() self._logger.debug("Username detected as: %s", self.rm.detected_username) _, out, _ = self.shell.run_sync('scontrol --version') self._version = out.split()[1].strip() self._logger.info('slurm version: %s' % self._version) ppn_pat = '\'s/.*\\(CPUTot=[0-9]*\\).*/\\1/g\'' ppn_cmd = 'scontrol show nodes ' + \ '| grep CPUTot' + \ '| sed -e ' + ppn_pat + \ '| sort ' + \ '| uniq -c ' + \ '| cut -f 2 -d = ' + \ '| xargs echo' _, out, _ = self.shell.run_sync(ppn_cmd) ppn_vals = [o.strip() for o in out.split() if o.strip()] if len(ppn_vals) >= 1: self._ppn = int(ppn_vals[0]) else : self._ppn = None if 'stampede2' in self.rm.host.lower(): # FIXME: this only works on the KNL nodes self._ppn = 68 elif 'traverse' in self.rm.host.lower(): self._ppn = 32 elif 'frontera' in self.rm.host.lower(): # not that this is incorrect for the rtx queue self._ppn = 56 elif 'comet' in self.rm.host.lower(): self._ppn = 24 elif 'longhorn' in self.rm.host.lower(): # FIXME: other option - get it later by `processes_per_host` self._ppn = 40 self._logger.info("ppn: %s", self._ppn) # -------------------------------------------------------------------------- # def _close(self): ''' Close our shell connection ''' del(self.shell) self.shell = None # -------------------------------------------------------------------------- # def _handle_file_transfers(self, ft, mode): """ if mode == 'in' : perform sanity checks on all staging directives. 
        if mode == 'in' : stage files to the SLURM submission site
        if mode == 'out': stage files from the SLURM submission site
        """

        td = TransferDirectives(ft)

        assert(mode in ['in', 'out'])

        if mode == 'in':

            if td.in_append:
                raise Exception('File append (>>) not supported')

            if td.out_append:
                raise Exception('File append (<<) not supported')

            if td.in_overwrite:
                for (local, remote) in td.in_overwrite:
                    source = local
                    target = remote
                    self._logger.info("Transferring in %s to %s",
                                      source, target)
                    self.shell.stage_to_remote(source, target)

        elif mode == 'out':

            if td.out_overwrite:
                for (local, remote) in td.out_overwrite:
                    source = remote
                    target = local
                    self._logger.info("Transferring out %s to %s",
                                      source, target)
                    self.shell.stage_from_remote(source, target)

    # --------------------------------------------------------------------------
    #
    def _job_run(self, jd):
        '''
        runs a job on the wrapper via pty, and returns the job id
        '''

        # define a bunch of default args
        exe                = jd.executable
        pre                = jd.as_dict().get(c.PRE_EXEC)
        post               = jd.as_dict().get(c.POST_EXEC)
        args               = jd.as_dict().get(c.ARGUMENTS, [])
        env                = jd.as_dict().get(c.ENVIRONMENT, dict())
        cwd                = jd.as_dict().get(c.WORKING_DIRECTORY)
        job_name           = jd.as_dict().get(c.NAME)
        spmd_variation     = jd.as_dict().get(c.SPMD_VARIATION)
        cpu_count          = jd.as_dict().get(c.TOTAL_CPU_COUNT)
        gpu_count          = jd.as_dict().get(c.TOTAL_GPU_COUNT)
        n_procs            = jd.as_dict().get(c.NUMBER_OF_PROCESSES)
        processes_per_host = jd.as_dict().get(c.PROCESSES_PER_HOST)
        output             = jd.as_dict().get(c.OUTPUT, "radical.saga.stdout")
        error              = jd.as_dict().get(c.ERROR,  "radical.saga.stderr")
        file_transfer      = jd.as_dict().get(c.FILE_TRANSFER)
        wall_time          = jd.as_dict().get(c.WALL_TIME_LIMIT)
        queue              = jd.as_dict().get(c.QUEUE)
        project            = jd.as_dict().get(c.PROJECT)
        total_memory       = jd.as_dict().get(c.TOTAL_PHYSICAL_MEMORY)
        sys_arch           = jd.as_dict().get(c.SYSTEM_ARCHITECTURE) or {}  # guard against unset arch
        job_contact        = jd.as_dict().get(c.JOB_CONTACT)
        c_hosts            = jd.as_dict().get(c.CANDIDATE_HOSTS)

        cpu_arch = sys_arch.get('cpu')
        gpu_arch = sys_arch.get('gpu')

        # check to see what's available in our job description
        # to override defaults

        # try to create the working directory (if defined)
        # NOTE: this assumes a shared filesystem between login node and
        #       compute nodes.
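        # Illustrative sketch (hypothetical file names): SAGA file-transfer
        # directives, as handled by _handle_file_transfers() above, follow the
        # 'local OP remote' form, e.g.
        #     jd.file_transfer = ['input.dat  > input.dat',   # stage in
        #                         'output.dat < output.dat']  # stage out
        # the append operators ('>>' / '<<') are rejected by this adaptor.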
if cwd: self._logger.info("Creating working directory %s" % cwd) ret, out, _ = self.shell.run_sync("mkdir -p %s" % cwd) if ret: raise rse.NoSuccess("Couldn't create workdir: %s" % out) self._handle_file_transfers(file_transfer, mode='in') if isinstance(c_hosts, list): c_hosts = ','.join(c_hosts) if isinstance(job_contact, list): job_contact = job_contact[0] if project and ':' in project: account, reservation = project.split(':', 1) else: account, reservation = project, None script = "#!/bin/sh\n\n" # make sure we have something for cpu_count if not cpu_count: cpu_count = 1 # make sure we have something for n_procs if not n_procs: n_procs = cpu_count # get memory_per_node from total_memory and make sure it is not None memory_per_node = total_memory or 0 # define n_nodes and recalculate memory_per_node (if self._ppn is set) n_nodes = None if self._ppn: # exception(s) for earlier defined `self._ppn` if 'frontera' in self.rm.host.lower() and \ queue and 'rtx' in queue.lower(): self._ppn = 16 # other option is to use: processes_per_host n_nodes = int(math.ceil(float(cpu_count) / self._ppn)) memory_per_node = int(memory_per_node / float(n_nodes)) elif total_memory: raise rse.NotImplemented( 'cannot allocate memory, node number unknown') if spmd_variation: if spmd_variation.lower() not in 'mpi': raise rse.BadParameter("Slurm cannot handle spmd variation '%s'" % spmd_variation) mpi_cmd = 'mpirun -n %d ' % n_procs else: # we start N independent processes mpi_cmd = '' if 'stampede2' in self.rm.host.lower() or \ 'longhorn' in self.rm.host.lower(): assert(n_nodes), 'need unique number of cores per node' script += "#SBATCH -N %d\n" % n_nodes script += "#SBATCH -n %s\n" % n_procs elif 'frontera' in self.rm.host.lower() or \ 'andes' in self.rm.host.lower(): assert(n_nodes), 'need unique number of cores per node' script += "#SBATCH -N %d\n" % n_nodes elif self._version in ['17.11.5', '18.08.0', '18.08.3']: assert(n_nodes), 'need unique number of cores per node' script += "#SBATCH -N %d\n" % n_nodes script += "#SBATCH --ntasks=%s\n" % n_procs else: script += "#SBATCH --ntasks=%s\n" % n_procs if not processes_per_host: script += "#SBATCH --cpus-per-task=%s\n" \ % (int(cpu_count / n_procs)) else: script += "#SBATCH --ntasks-per-node=%s\n" % processes_per_host # target host specifica # FIXME: these should be moved into resource config files self._logger.debug ("submit SLURM script to %s", self.rm) if 'bridges2' in self.rm.host.lower(): if gpu_count: # gres resources are specified *per node* assert(n_nodes), 'need unique number of cores per node' script += "#SBATCH --gres=gpu:%s:8\n" % (gpu_arch) elif 'comet' in self.rm.host.lower(): if gpu_count: # gres resources are specified *per node* assert(n_nodes), 'need unique number of cores per node' # if no `gpu_arch` then first available gpu node (either type) # gpu types are "p100" and "k80" if gpu_arch: gpu_arch_str = ':%s' % gpu_arch.lower() else : gpu_arch_str = '' count = 4 # Make sure we take a full GPU node script += "#SBATCH --gres=gpu%s:%d\n" % (gpu_arch_str, count) elif 'tiger' in self.rm.host.lower(): if gpu_count: # gres resources are specified *per node* assert(n_nodes), 'need unique number of cores per node' count = int(gpu_count / n_nodes) if count: script += "#SBATCH --gres=gpu:%s\n" % count elif 'cori' in self.rm.host.lower(): # Set to "haswell" for Haswell nodes, to "knl,quad,cache" (or other # modes) for KNL, etc. 
if cpu_arch : script += "#SBATCH -C %s\n" % cpu_arch if gpu_count: script += "#SBATCH --gpus=%s\n" % gpu_count elif 'longhorn' in self.rm.host.lower(): self._logger.debug("SLURM GRES is not set (longhorn exception)\n") elif queue == 'tmp3': # this is a special queue, which is associated with SuperMUC-NG, # but since there is no machine name in config data we only track # this queue name to set SLURM QoS option script += "#SBATCH --qos=nolimit\n" self._logger.debug("SLURM QoS is set (SuperMUC-NG only)\n") else: if gpu_count: script += "#SBATCH --gpus=%s\n" % gpu_count if job_name : script += '#SBATCH -J "%s"\n' % job_name if cwd : script += '#SBATCH -D "%s"\n' % cwd if output : script += '#SBATCH --output "%s"\n' % output if error : script += '#SBATCH --error "%s"\n' % error if queue : script += '#SBATCH --partition "%s"\n' % queue if c_hosts : script += '#SBATCH --nodelist="%s"\n' % c_hosts if job_contact : script += '#SBATCH --mail-user="%s"\n' % job_contact if account : script += '#SBATCH --account "%s"\n' % account if reservation : script += '#SBATCH --reservation "%s"\n' % reservation if memory_per_node: script += '#SBATCH --mem="%s"\n' % memory_per_node if wall_time : script += '#SBATCH --time %02d:%02d:00\n' \ % (int(wall_time / 60), wall_time % 60) if env: script += "\n## ENVIRONMENT\n" for key,val in env.items(): script += 'export "%s"="%s"\n' % (key, val) if pre: script += "\n## PRE_EXEC\n" + "\n".join(pre) script += '\n' # create our commandline script += "\n## EXEC\n" script += '%s%s %s' % (mpi_cmd, exe, ' '.join(args)) script += '\n' if post: script += "\n## POST_EXEC\n" + '\n'.join(post) script += '\n' # write script into a tmp file for staging self._logger.info("SLURM script generated:\n%s" % script) tgt = os.path.basename(tempfile.mktemp(suffix='.slurm', prefix='tmp_')) self.shell.write_to_remote(src=script, tgt=tgt) # submit the job ret, out, _ = self.shell.run_sync("sbatch '%s'; echo rm -f '%s'" % (tgt,tgt)) self._logger.debug("submit SLURM script (%s) (%s)" % (tgt, ret)) # find out what our job ID is # TODO: Could make this more efficient job_id = None for line in out.split("\n"): if "Submitted batch job" in line: job_id = "[%s]-[%s]" % (self.rm, int(line.split()[-1:][0])) break # if we have no job ID, there's a failure... if not job_id: raise rse.NoSuccess._log(self._logger, "Couldn't get job id from submitted job!" " sbatch output:\n%s" % out) self._logger.debug("started job %s" % job_id) self._logger.debug("Batch system output:\n%s" % out) # create local jobs dictionary entry self.jobs[job_id] = {'state' : c.PENDING, 'create_time': None, 'start_time' : None, 'end_time' : None, 'comp_time' : None, 'exec_hosts' : None, 'gone' : False, 'output' : output, 'error' : error, 'stdout' : None, 'stderr' : None, 'ft' : file_transfer, } return job_id # -------------------------------------------------------------------------- # # FROM STAMPEDE'S SQUEUE MAN PAGE # # JOB STATE CODES # Jobs typically pass through several states in the course of their # execution. The typical states are PENDING, RUNNING, SUSPENDED, # COMPLETING, and COMPLETED. An explanation of each state follows. # # CA CANCELED Job was explicitly cancelled by the user or system # administrator. The job may or may not have been # initiated. # CD COMPLETED Job has terminated all processes on all nodes. # CF CONFIGURING Job has been allocated resources, but are waiting # for them to become ready for use (e.g. booting). # CG COMPLETING Job is in the process of completing. 
Some processes # on some nodes may still be active. # F FAILED Job terminated with non-zero exit code or other # failure condition. # NF NODE_FAIL Job terminated due to failure of one or more # allocated nodes. # OOM OUT_OF_MEMORY Job required more memory than available. # PD PENDING Job is awaiting resource allocation. # PR PREEMPTED Job terminated due to preemption. # R RUNNING Job currently has an allocation. # S SUSPENDED Job has an allocation, but execution has been # suspended. # TO TIMEOUT Job terminated upon reaching its time limit. # def _slurm_to_saga_state(self, slurmjs): ''' translates a slurm one-letter state to saga ''' if slurmjs in ['CA' , "CANCELLED" ]: return c.CANCELED elif slurmjs in ['CD' , "COMPLETED" ]: return c.DONE elif slurmjs in ['CF' , "CONFIGURING" ]: return c.PENDING elif slurmjs in ['CG' , "COMPLETING" ]: return c.RUNNING elif slurmjs in ['F' , "FAILED" ]: return c.FAILED elif slurmjs in ['NF' , "NODE_FAIL" ]: return c.FAILED elif slurmjs in ['OOM', "OUT_OF_MEMORY"]: return c.FAILED elif slurmjs in ['PD' , "PENDING" ]: return c.PENDING elif slurmjs in ['PR' , "PREEMPTED" ]: return c.CANCELED elif slurmjs in ['R' , "RUNNING" ]: return c.RUNNING elif slurmjs in ['S' , "SUSPENDED" ]: return c.SUSPENDED elif slurmjs in ['TO' , "TIMEOUT" ]: return c.CANCELED else : return c.UNKNOWN # -------------------------------------------------------------------------- # def _job_cancel(self, job): ''' Given a job id, attempt to cancel it through use of commandline scancel. Raises exception when unsuccessful. ''' if job._state in c.FINAL: # job is already final - nothing to do return if job._state in [c.NEW]: # job is not yet submitted - nothing to do job._state = c.CANCELED if not job._id: # uh oh - what to do? raise rse.NoSuccess._log(self._logger, "Could not cancel job: no job ID") rm, pid = self._adaptor.parse_id(job._id) ret, out, _ = self.shell.run_sync("scancel %s" % pid) if ret != 0: raise rse.NoSuccess._log(self._logger, "Could not cancel job %s because: %s" % (pid, out)) job._state = c.CANCELED # -------------------------------------------------------------------------- # def _job_suspend(self, job): ''' Attempt to suspend a job with commandline scontrol. Raise exception when unsuccessful. ''' if job._state in [c.DONE, c.FAILED, c.CANCELED, c.NEW, c.SUSPENDED]: raise rse.IncorrectState._log(self._logger, "Could not suspend job %s [%s]" % (job._id, job._state)) rm, pid = self._adaptor.parse_id(job._id) ret, out, _ = self.shell.run_sync("scontrol suspend %s" % pid) if ret == 0: return True # check to see if the error was a permission error elif "Access/permission denied" in out: raise rse.PermissionDenied._log(self._logger, "Could not suspend job %s because: %s" % (pid, out)) # it's some other error else: raise rse.NoSuccess._log(self._logger, "Could not suspend job %s because: %s" % (pid, out)) # -------------------------------------------------------------------------- # def _job_resume(self, job): ''' Attempt to resume a job with commandline scontrol. Raise exception when unsuccessful. 
        '''

        if job._state in [c.DONE, c.FAILED, c.CANCELED, c.NEW, c.RUNNING]:
            raise rse.IncorrectState._log(self._logger,
                      "Could not resume job %s [%s]" % (job._id, job._state))

        rm, pid = self._adaptor.parse_id(job._id)

        ret, out, _ = self.shell.run_sync("scontrol resume %s" % pid)

        if ret == 0:
            return True

        # check to see if the error was a permission error
        elif "Access/permission denied" in out:
            raise rse.PermissionDenied._log(self._logger,
                      "Could not resume job %s because: %s" % (pid, out))

        # it's some other error
        else:
            raise rse.NoSuccess._log(self._logger,
                      "Could not resume job %s because: %s" % (pid, out))

    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def create_job(self, jd):
        '''
        Implements cpi_job.Service.create_job()
        '''

        # this dict is passed on to the job adaptor class -- use it to pass any
        # state information you need there.
        adaptor_state = {"job_service"    : self,
                         "job_description": jd,
                         "job_schema"     : self.rm.schema,
                         "reconnect"      : False}

        return api_job.Job(_adaptor=self._adaptor,
                           _adaptor_state=adaptor_state)

    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_url(self):
        '''
        Implements cpi_job.Service.get_url()
        '''
        return self.rm

    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def list(self):
        '''
        Implements rs.adaptors.cpi.job.Service.list()
        '''

        # ashleyz@login1:~$ squeue -h -o "%i" -u ashleyz
        # 255042
        # 255035
        # 255028
        # 255018

        # this line gives us nothing but job ids for our user
        ret, out, _ = self.shell.run_sync('squeue -h -o "%%i" -u %s'
                                         % self.rm.detected_username)

        # mangle our results into the proper id format
        output = ["[%s]-[%s]" % (self.rm, i) for i in out.strip().split("\n")]

        return output

    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_job(self, jobid):

        # this dict is passed on to the job adaptor class -- use it to pass any
        # state information you need there.  The job adaptor will run 'scontrol
        # show job $jobid' to complement the information.
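        # Illustrative example (hypothetical id): reconnect to a job which is
        # already known to the batch system, e.g.
        #     job = js.get_job('[slurm+ssh://cluster.example.org]-[309684]')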
adaptor_state = {"job_service" : self, "job_description": api_job.Description(), "job_schema" : self.rm.schema, "reconnect" : True, "reconnect_jobid": jobid } return api_job.Job(_adaptor=self._adaptor, _adaptor_state=adaptor_state) # -------------------------------------------------------------------------- # def container_run(self, jobs): for job in jobs: job.run() # -------------------------------------------------------------------------- # def container_wait(self, jobs, mode, timeout): # TODO: this is not optimized yet for job in jobs: job.wait() # -------------------------------------------------------------------------- # def container_cancel(self, jobs, timeout): # TODO: this is not optimized yet for job in jobs: job.cancel(timeout) # -------------------------------------------------------------------------- # def container_get_states(self, jobs): # TODO: this is not optimized yet states = list() for job in jobs: states.append(job.get_state()) return states # ------------------------------------------------------------------------------ # class SLURMJob(cpi_job.Job): # -------------------------------------------------------------------------- # def __init__(self, api, adaptor): _cpi_base = super(SLURMJob, self) _cpi_base.__init__(api, adaptor) # -------------------------------------------------------------------------- # @SYNC_CALL def init_instance(self, job_info): self.jd = job_info["job_description"] self.js = job_info["job_service"] # the js is responsible for job bulk operations -- which # for jobs only work for run() self._container = self.js self._method_type = "run" # initialize job attribute values self._id = None self._name = self.jd.as_dict().get(c.NAME, 'saga') self._state = c.NEW self._exit_code = None self._exception = None self._started = None self._finished = None # think "reconnect" in terms of "reloading" job id, _NOT_ # physically creating a new network connection if job_info['reconnect']: self._id = job_info['reconnect_jobid'] other_info = self._job_get_info() self._name = other_info.get('job_name') self._started = True else: self._started = False return self.get_api() # -------------------------------------------------------------------------- # def _job_get_info(self): ''' use scontrol to grab job info NOT CURRENTLY USED/TESTED, here for later ''' # prev. info contains the info collect when _job_get_info # was called the last time prev_info = self.js.jobs.get(self._id) # if the 'gone' flag is set, there's no need to query the job # state again. it's gone forever if prev_info: if prev_info.get('gone', False): self._logger.debug("Job is gone.") return prev_info # curr. info will contain the new job info collect. 
it starts off # as a copy of prev_info (don't use deepcopy because there is an API # object in the dict -> recursion) curr_info = dict() if prev_info: curr_info['job_id' ] = prev_info.get('job_id' ) curr_info['job_name' ] = prev_info.get('job_name' ) curr_info['state' ] = prev_info.get('state' ) curr_info['create_time'] = prev_info.get('create_time') curr_info['start_time' ] = prev_info.get('start_time' ) curr_info['end_time' ] = prev_info.get('end_time' ) curr_info['comp_time' ] = prev_info.get('comp_time' ) curr_info['exec_hosts' ] = prev_info.get('exec_hosts' ) curr_info['gone' ] = prev_info.get('gone' ) curr_info['output' ] = prev_info.get('output' ) curr_info['error' ] = prev_info.get('error' ) curr_info['stdout' ] = prev_info.get('stdout' ) curr_info['stderr' ] = prev_info.get('stderr' ) curr_info['ft' ] = prev_info.get('ft' ) else: curr_info['job_id' ] = None curr_info['job_name' ] = None curr_info['state' ] = None curr_info['create_time'] = None curr_info['start_time' ] = None curr_info['end_time' ] = None curr_info['comp_time' ] = None curr_info['exec_hosts' ] = None curr_info['gone' ] = None curr_info['output' ] = None curr_info['error' ] = None curr_info['stdout' ] = None curr_info['stderr' ] = None curr_info['ft' ] = None rm, pid = self._adaptor.parse_id(self._id) # update current info with scontrol ret, out, _ = self.js.shell.run_sync('scontrol show job %s' % pid) # out is comprised of a set of space-limited words like this: # # ---------------------------------------------------------------------- # $ scontrol show job 8101313 # JobId=8101313 JobName=pilot.0000 UserId=tg803521(803521) # GroupId=G-81625(81625) Priority=1701 Nice=0 Account=TG-MCB090174 # QOS=normal JobState=RUNNING Reason=None Dependency=(null) Requeue=0 # Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0 RunTime=00:00:25 # TimeLimit=00:15:00 TimeMin=N/A SubmitTime=2017-01-11T15:47:19 # EligibleTime=2017-01-11T15:47:19 StartTime=2017-01-11T15:47:19 # EndTime=2017-01-11T16:02:19 PreemptTime=None SuspendTime=None # SecsPreSuspend=0 Partition=development AllocNode:Sid=login3:2886 # ReqNodeList=(null) ExcNodeList=(null) NodeList=c557-[901-904] # BatchHost=c557-901 NumNodes=4 NumCPUs=64 CPUs/Task=1 # ReqB:S:C:T=0:0:*:* TRES=cpu=64,node=4 Socks/Node=* # NtasksPerN:B:S:C=0:0:*:* CoreSpec=* MinCPUsNode=1 MinMemoryNode=0 # MinTmpDiskNode=0 Features=(null) Gres=(null) Reservation=(null) # Shared=0 Contiguous=0 Licenses=(null) Network=(null) # Command=/home1/01083/tg803521/tmp_egGk1n.slurm # WorkDir=/work/01083/ # StdIn=/dev/null # StdOut=/work/01083/bootstrap_1.out # StdErr=/work/01083/bootstrap_1.err # Power= SICP=0 # ---------------------------------------------------------------------- # # so we split on spaces and newlines, and then on '=' to get # key-value-pairs. 
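        # Illustrative example: the token 'JobState=RUNNING' yields the pair
        # ('JobState', 'RUNNING'); tokens without '=' get a None value, and
        # empty or '(null)' values are normalized to None below.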
        elems = out.split()
        data  = dict()
        for elem in sorted(elems):

            parts = elem.split('=', 1)
            if len(parts) == 1:
                # default if no '=' is found
                parts.append(None)  # ignore non-splittable ones

            key, val = parts
            if val in ['', '(null)']:
                val = None
            self._logger.info('%-20s := %s', key, val)
            data[key] = val

        if data.get('JobState'):
            curr_info['state'] = self.js._slurm_to_saga_state(data['JobState'])
        else:
            curr_info['state'] = self._job_get_state(self._id)

        # update exit code (scontrol reports it as 'code:signal')
        if data.get('ExitCode'):
            curr_info['exit_code'] = data['ExitCode'].split(':')[0]
        else:
            curr_info['exit_code'] = None

        curr_info['job_name'   ] = data.get('JobName')
      # curr_info['create_time'] = data.get('SubmitTime')
      # curr_info['start_time' ] = data.get('StartTime')
      # curr_info['end_time'   ] = data.get('EndTime')
        curr_info['comp_time'  ] = data.get('RunTime')
        curr_info['exec_hosts' ] = data.get('NodeList')

        # Alas, time stamps are not in EPOCH, and do not contain time zone
        # info, so we set approximate values here
        now = time.time()
        if not curr_info['create_time']:
            curr_info['create_time'] = now

        if curr_info['state'] in [c.RUNNING] + c.FINAL:
            if not curr_info['start_time']:
                curr_info['start_time'] = now

        if curr_info['state'] in c.FINAL:

            if not curr_info['end_time']:
                curr_info['end_time'] = now

            if curr_info['stdout'] is None:
                if curr_info['output'] is None:
                    curr_info['output'] = data.get('StdOut')
                ret, out, err = self.js.shell.run_sync('cat %s' % curr_info['output'])
                if ret: curr_info['stdout'] = None
                else  : curr_info['stdout'] = out

            if curr_info['stderr'] is None:
                if curr_info['error'] is None:
                    curr_info['error'] = data.get('StdErr')
                ret, out, err = self.js.shell.run_sync('cat %s' % curr_info['error'])
                if ret: curr_info['stderr'] = None
                else  : curr_info['stderr'] = out

            self.js._handle_file_transfers(curr_info['ft'], mode='out')

            curr_info['gone'] = True

        self.js.jobs[self._id] = curr_info
        return curr_info


    # --------------------------------------------------------------------------
    #
    def _job_get_state(self, job_id):
        '''
        get the job state from the wrapper shell
        '''

        # if the state is NEW and we haven't sent out a run command, keep
        # it listed as NEW
        if self._state == c.NEW and not self._started:
            return c.NEW

        # if the state is DONE, CANCELED or FAILED, it is considered
        # final and we don't need to query the backend again
        if self._state in c.FINAL:
            return self._state

        rm, pid = self._adaptor.parse_id(job_id)

        try:
            ret, out, _ = self.js.shell.run_sync('scontrol show job %s' % pid)

            match = self.js.scontrol_jobstate_re.search(out)
            if match:
                slurm_state = match.group(1)
            else:
                # no jobstate found from scontrol
                # the job may have finished a while back, use sacct to
                # look at the full slurm history
                slurm_state = self._sacct_jobstate_match(pid)
                if not slurm_state:
                    # no jobstate found in slurm
                    return c.UNKNOWN

            return self.js._slurm_to_saga_state(slurm_state)

        except Exception as e:
            self._logger.exception('failed to get job state')
            raise rse.NoSuccess("Error getting the job state for "
                                "job %s:\n%s" % (pid, e)) from e

        raise rse.NoSuccess._log(self._logger,
                                 "Internal SLURM adaptor error"
                                 " in _job_get_state")


    # --------------------------------------------------------------------------
    #
    def _sacct_jobstate_match(self, pid):
        '''
        get the job state from the slurm accounting data
        '''

        ret, sacct_out, _ = self.js.shell.run_sync(
            "sacct --format=JobID,State --parsable2 --noheader --jobs=%s" % pid)

        # output will look like:
        #     500723|COMPLETED
        #     500723.batch|COMPLETED
        # or:
        #     500682|CANCELLED by 900369
        #     500682.batch|CANCELLED
        try:
            for line in sacct_out.strip().split('\n'):
                slurm_id, slurm_state = line.split('|', 1)
                if slurm_id == pid and slurm_state:
                    return slurm_state.split()[0].strip()
        except Exception:
            self._logger.warning('cannot parse sacct output:\n%s' % sacct_out)

        return None


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_state(self):

        self._state = self._job_get_state(self._id)
        return self._state


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_stdout(self):

        out = self._job_get_info()['stdout']
        if out is None:
            out = ''
          # raise rse.NoSuccess("Couldn't fetch stdout (js reconnected?)")
        return out


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_stderr(self):

        err = self._job_get_info()['stderr']
        if err is None:
            err = ''
          # raise rse.NoSuccess("Couldn't fetch stderr (js reconnected?)")
        return err


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_description(self):
        return self.jd


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_service_url(self):
        return self.js.rm


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def wait(self, timeout):

        time_start = time.time()
        rm, pid    = self._adaptor.parse_id(self._id)

        while True:
            state = self._job_get_state(self._id)
            self._logger.debug("wait() for job id %s:%s" % (self._id, state))

            if state == c.UNKNOWN:
                raise rse.IncorrectState("cannot get job state")

            if state in c.FINAL:
                self._job_get_info()
                return True

            # check if we hit timeout
            if timeout >= 0:
                if time.time() - time_start > timeout:
                    return False

            # avoid busy poll
            time.sleep(0.5)


    # --------------------------------------------------------------------------
    #
    # In general, the job ID is something which is generated by the adaptor or
    # by the backend, and the user should not interpret it.  Two caveats though:
    #
    # (a) The ID MUST remain constant once it is assigned to a job (imagine an
    #     application hashes on job ids, for example).
    #
    # (b) the ID SHOULD follow the scheme [service_url]-[backend-id] -- and in
    #     that case, we should make sure that the URL part of the ID can be
    #     used to create a new job service instance.
    #
    @SYNC_CALL
    def get_id(self):
        return self._id


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_name(self):

        if not self._name:
            self._name = self._job_get_info()['job_name']
        return self._name


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_exit_code(self):

        # FIXME: use cache
        return self._job_get_info()['exit_code']


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def suspend(self):
        return self.js._job_suspend(self)


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def resume(self):
        return self.js._job_resume(self)


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_created(self):

        # FIXME: use cache
        # FIXME: convert to EPOCH
        return self._job_get_info()['create_time']


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_started(self):

        # FIXME: use cache
        # FIXME: convert to EPOCH
        return self._job_get_info()['start_time']


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_finished(self):

        # FIXME: use cache
        # FIXME: convert to EPOCH
        return self._job_get_info()['end_time']


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def get_execution_hosts(self):

        # FIXME: use cache
        return self._job_get_info()['exec_hosts']


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def cancel(self, timeout):
        self.js._job_cancel(self)


    # --------------------------------------------------------------------------
    #
    @SYNC_CALL
    def run(self):

        self._id      = self.js._job_run(self.jd)
        self._started = True


# ------------------------------------------------------------------------------
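The ID contract in the comment above is easy to state concretely. Below is a minimal, self-contained Python sketch of the [service_url]-[backend-id] round trip it describes; the helper names (make_id, parse_id) and the bracketed wire format are illustrative assumptions, not the adaptor's actual implementation of self._adaptor.parse_id:

import re

def make_id(service_url, backend_id):
    # compose the documented scheme: [service_url]-[backend-id]
    return "[%s]-[%s]" % (service_url, backend_id)

def parse_id(job_id):
    # recover (service_url, backend_id); raise if the ID is not in that form
    match = re.match(r'^\[(.+)\]-\[(.+)\]$', job_id)
    if not match:
        raise ValueError("job id '%s' is not in [rm]-[pid] form" % job_id)
    return match.group(1), match.group(2)

# caveat (b) in action: the URL half of a parsed ID is enough
# to point a fresh job service at the same backend
rm, pid = parse_id(make_id("slurm://cluster.example.org/", "8101313"))
assert rm  == "slurm://cluster.example.org/"
assert pid == "8101313"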
/**
 * Replaces the old fisheye figure with a new one.
 *
 * @param oldFigure
 * @param newFigure
 */
boolean replaceFishFigure(IFigure oldFigure, IFigure newFigure) {
	if (this.fishEyeLayer.getChildren().contains(oldFigure)) {
		Rectangle bounds = oldFigure.getBounds();
		newFigure.setBounds(bounds);
		this.fishEyeLayer.remove(oldFigure);
		this.fishEyeLayer.add(newFigure);
		for (Iterator iterator = fisheyeListeners.iterator(); iterator.hasNext();) {
			FisheyeListener listener = (FisheyeListener) iterator.next();
			listener.fisheyeReplaced(this, oldFigure, newFigure);
		}
		return true;
	}
	return false;
}
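replaceFishFigure is a swap-then-notify pattern: the layer is mutated first, and the registered FisheyeListeners are only walked once the figure tree is consistent, so callbacks observe the new state. A rough Python sketch of that ordering, with illustrative names (FisheyeLayer, plain callables) standing in for the Draw2D/Zest types:

class FisheyeLayer:
    # sketch of swap-then-notify; names are illustrative, not the Zest API
    def __init__(self):
        self.children  = []
        self.listeners = []          # callables taking (old_figure, new_figure)

    def replace(self, old_figure, new_figure):
        if old_figure not in self.children:
            return False             # nothing swapped, so no callbacks fire
        idx = self.children.index(old_figure)
        self.children[idx] = new_figure       # 1. mutate the layer first
        for listener in self.listeners:       # 2. then notify observers, who
            listener(old_figure, new_figure)  #    now see a consistent tree
        return True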
<filename>src/lib/config.rs
use std::collections::HashMap;
use std::fs;
use std::path::PathBuf;

use dirs;
use failure::Fail;
use serde::Deserialize;
use serde::Serialize;

use crate::types::ResultDynError;

#[derive(Debug, Fail)]
pub enum ProjectConfigError {
  #[fail(
    display = "Project config {} does not exist, please check config or create it",
    name
  )]
  ProjectConfigDoesNotExist { name: String },
}

#[derive(Hash, Eq, PartialEq, Serialize, Deserialize)]
pub struct ProjectConfig {
  pub name: String,
  pub db_uri: String,
}

#[derive(Serialize, Deserialize)]
pub struct JabConfig {
  pub projects: HashMap<String, ProjectConfig>,
}

impl JabConfig {
  pub fn read() -> ResultDynError<JabConfig> {
    let config_path = JabConfig::get_path();
    let config_str = fs::read_to_string(config_path)?;
    let config: JabConfig = serde_json::from_str(&config_str)?;

    return Ok(config);
  }

  pub fn persist(config: &JabConfig) -> ResultDynError<()> {
    let config_path = JabConfig::get_path();
    let config_str = serde_json::to_string_pretty(&config)?;

    fs::write(config_path, config_str)?;

    return Ok(());
  }

  pub fn get_path() -> PathBuf {
    return get_jab_dir().join("config");
  }

  pub fn empty_config_str() -> String {
    let config = JabConfig {
      projects: HashMap::new(),
    };

    return serde_json::to_string_pretty(&config).unwrap();
  }
}

impl JabConfig {
  pub fn register_project_config(&mut self, project_config: ProjectConfig) {
    self
      .projects
      .insert(project_config.name.clone(), project_config);
  }

  pub fn project_config(&self, name: &str) -> ResultDynError<&ProjectConfig> {
    return self.projects.get(name).ok_or(
      ProjectConfigError::ProjectConfigDoesNotExist {
        name: String::from(name),
      }
      .into(),
    );
  }
}

pub fn get_jab_dir() -> PathBuf {
  let project_dir = dirs::home_dir().unwrap();

  return project_dir.join(".jab");
}
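Everything JabConfig does reduces to a read-modify-persist cycle over one pretty-printed JSON file at ~/.jab/config. A small Python sketch of the same cycle, assuming only what the structs above pin down (a top-level "projects" map keyed by project name, each entry carrying name and db_uri); like JabConfig::read, it assumes the config file already exists:

import json
import os

CONFIG_PATH = os.path.join(os.path.expanduser("~"), ".jab", "config")

def register_project(name, db_uri):
    # read: the whole config is loaded into memory
    with open(CONFIG_PATH) as f:
        config = json.load(f)

    # modify: "projects" is a map keyed by project name, per the structs above
    config["projects"][name] = {"name": name, "db_uri": db_uri}

    # persist: pretty-printed, as serde_json::to_string_pretty does
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=2)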
/* $OpenBSD: cl.c,v 1.59 2010/06/28 14:13:29 deraadt Exp $ */ /* * Copyright (c) 1995 <NAME>. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ /* DMA mode still does not work!!! */ #include <sys/param.h> #include <sys/ioctl.h> #include <sys/proc.h> #include <sys/tty.h> #include <sys/uio.h> #include <sys/systm.h> #include <sys/time.h> #include <sys/device.h> #include <sys/syslog.h> #include <machine/autoconf.h> #include <machine/conf.h> #include <machine/cpu.h> #include <machine/psl.h> #include <dev/cons.h> #include <mvme88k/dev/clreg.h> #include <mvme88k/dev/pcctworeg.h> #include <mvme88k/dev/pcctwovar.h> #ifdef DDB #include <ddb/db_var.h> #endif #define splcl() spltty() /* min timeout 0xa, what is a good value */ #define CL_TIMEOUT 0x10 #define CL_FIFO_MAX 0x20 #define CL_FIFO_CNT 0xc #define CL_RX_TIMEOUT 0x10 #define CL_RXDMAINT 0x82 #define CL_TXDMAINT 0x42 #define CL_TXMASK 0x47 #define CL_RXMASK 0x87 #define CL_TXINTR 0x02 #define CL_RXINTR 0x02 struct cl_cons { bus_space_tag_t cl_iot; bus_space_handle_t cl_ioh; volatile u_int8_t *cl_rxiack; u_int8_t channel; } cl_cons; struct cl_info { struct tty *tty; u_char cl_swflags; u_char cl_softchar; u_char cl_consio; u_char cl_speed; u_char cl_parstop; /* parity, stop bits. 
*/ u_char cl_rxmode; u_char cl_txmode; u_char cl_clen; u_char cl_parity; #if 0 u_char transmitting; #endif u_long txcnt; u_long rxcnt; void *rx[2]; void *rxp[2]; void *tx[2]; void *txp[2]; }; #define CLCD_PORTS_PER_CHIP 4 #define CL_BUFSIZE 256 #ifndef DO_MALLOC /* four (4) buffers per port */ char cl_dmabuf[CLCD_PORTS_PER_CHIP * CL_BUFSIZE * 4]; char cl_dmabuf1[CLCD_PORTS_PER_CHIP * CL_BUFSIZE * 4]; #endif struct clsoftc { struct device sc_dev; bus_space_tag_t sc_iot; bus_space_handle_t sc_ioh; time_t sc_fotime; /* time of last fifo overrun */ struct cl_info sc_cl[CLCD_PORTS_PER_CHIP]; struct intrhand sc_ih_e; struct intrhand sc_ih_m; struct intrhand sc_ih_t; struct intrhand sc_ih_r; char sc_errintrname[16 + 4]; char sc_mxintrname[16 + 3]; char sc_rxintrname[16 + 3]; char sc_txintrname[16 + 3]; struct pcctwosoftc *sc_pcctwo; }; const struct { u_int speed; u_char divisor; u_char clock; u_char rx_timeout; } cl_clocks[] = { { 64000, 0x26, 0, 0x01}, { 56000, 0x2c, 0, 0x01}, { 38400, 0x40, 0, 0x01}, { 19200, 0x81, 0, 0x02}, { 9600, 0x40, 1, 0x04}, { 7200, 0x56, 1, 0x04}, { 4800, 0x81, 1, 0x08}, { 3600, 0xad, 1, 0x08}, { 2400, 0x40, 2, 0x10}, { 1200, 0x81, 2, 0x20}, { 600, 0x40, 3, 0x40}, { 300, 0x81, 3, 0x80}, { 150, 0x40, 3, 0x80}, { 110, 0x58, 4, 0xff}, { 50, 0xC2, 4, 0xff}, { 0, 0x00, 0, 0}, }; #define CL_SAFE_CLOCK 4 /* 9600 entry */ /* prototypes */ cons_decl(cl); int cl_instat(struct clsoftc *sc); u_int8_t cl_clkdiv(int speed); u_int8_t cl_clknum(int speed); u_int8_t cl_clkrxtimeout(int speed); void clstart(struct tty *tp); void cl_unblock(struct tty *tp); int clccparam(struct clsoftc *sc, struct termios *par, int channel); int clparam(struct tty *tp, struct termios *t); int cl_mintr(void *); int cl_txintr(void *); int cl_rxintr(void *); void cl_overflow(struct clsoftc *sc, int channel, long *ptime, char *msg); void cl_parity(struct clsoftc *sc, int channel); void cl_frame(struct clsoftc *sc, int channel); void cl_break( struct clsoftc *sc, int channel); int clmctl(dev_t dev, int bits, int how); #ifdef DEBUG void cl_dumpport(struct clsoftc *, int); #endif int clprobe(struct device *parent, void *self, void *aux); void clattach(struct device *parent, struct device *self, void *aux); void cl_initchannel(struct clsoftc *sc, int channel); void clputc(struct clsoftc *sc, int unit, u_char c); struct cfattach cl_ca = { sizeof(struct clsoftc), clprobe, clattach }; struct cfdriver cl_cd = { NULL, "cl", DV_TTY }; #if 0 #define CLCDBUF 80 void cloutput(struct tty *tp); #endif #define CL_UNIT(x) (minor(x) >> 2) #define CL_CHANNEL(x) (minor(x) & 3) struct tty * cltty(dev) dev_t dev; { int unit, channel; struct clsoftc *sc; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return NULL; } channel = CL_CHANNEL(dev); return sc->sc_cl[channel].tty; } int clprobe(parent, self, aux) struct device *parent; void *self; void *aux; { struct confargs *ca = aux; bus_space_handle_t ioh; int rc; if (brdtyp == BRD_188) return 0; /* * We do not accept empty locators here... 
*/ if (ca->ca_paddr == CD2400_BASE_ADDR || (ca->ca_paddr == CD2400_SECONDARY_ADDR && brdtyp == BRD_8120)) { if (bus_space_map(ca->ca_iot, ca->ca_paddr, CD2400_SIZE, 0, &ioh) != 0) return 0; rc = badaddr((vaddr_t)bus_space_vaddr(ca->ca_iot, ioh), 1); bus_space_unmap(ca->ca_iot, ioh, CD2400_SIZE); return rc == 0; } return 0; } void clattach(parent, self, aux) struct device *parent; struct device *self; void *aux; { struct clsoftc *sc = (struct clsoftc *)self; struct confargs *ca = aux; bus_space_tag_t iot; bus_space_handle_t ioh; int i; if (ca->ca_ipl < 0) ca->ca_ipl = IPL_TTY; iot = sc->sc_iot = ca->ca_iot; if (bus_space_map(iot, ca->ca_paddr, CD2400_SIZE, 0, &ioh) != 0) { printf(": can't map registers!\n"); return; } sc->sc_ioh = ioh; sc->sc_pcctwo = (struct pcctwosoftc *)parent; if (ca->ca_paddr == CD2400_BASE_ADDR) { /* * Although we are still running using the BUG routines, * this device will be elected as the console after * autoconf. Mark it as such. */ sc->sc_cl[0].cl_consio = 1; printf(": console"); } else { /* reset chip only if we are not console device */ /* wait for GFRCR */ } /* allow chip to settle before continuing */ delay(800); /* set up global registers */ bus_space_write_1(iot, ioh, CL_TPR, CL_TIMEOUT); bus_space_write_1(iot, ioh, CL_RPILR, 0x03); bus_space_write_1(iot, ioh, CL_TPILR, 0x02); bus_space_write_1(iot, ioh, CL_MPILR, 0x01); #ifdef DO_MALLOC sc->sc_cl[0].rx[0] = (void *)(dvma_malloc(16 * CL_BUFSIZE)); #else /* XXX */ if ((vaddr_t)ca->ca_paddr == CD2400_BASE_ADDR) sc->sc_cl[0].rx[0] = (void *)(&cl_dmabuf); else sc->sc_cl[0].rx[0] = (void *)(&cl_dmabuf1); #endif sc->sc_cl[0].rx[1] = (void *)(((int)sc->sc_cl[0].rx[0]) + CL_BUFSIZE); sc->sc_cl[1].rx[0] = (void *)(((int)sc->sc_cl[0].rx[1]) + CL_BUFSIZE); sc->sc_cl[1].rx[1] = (void *)(((int)sc->sc_cl[1].rx[0]) + CL_BUFSIZE); sc->sc_cl[2].rx[0] = (void *)(((int)sc->sc_cl[1].rx[1]) + CL_BUFSIZE); sc->sc_cl[2].rx[1] = (void *)(((int)sc->sc_cl[2].rx[0]) + CL_BUFSIZE); sc->sc_cl[3].rx[0] = (void *)(((int)sc->sc_cl[2].rx[1]) + CL_BUFSIZE); sc->sc_cl[3].rx[1] = (void *)(((int)sc->sc_cl[3].rx[0]) + CL_BUFSIZE); sc->sc_cl[0].tx[0] = (void *)(((int)sc->sc_cl[3].rx[1]) + CL_BUFSIZE); sc->sc_cl[0].tx[1] = (void *)(((int)sc->sc_cl[0].tx[0]) + CL_BUFSIZE); sc->sc_cl[1].tx[0] = (void *)(((int)sc->sc_cl[0].tx[1]) + CL_BUFSIZE); sc->sc_cl[1].tx[1] = (void *)(((int)sc->sc_cl[1].tx[0]) + CL_BUFSIZE); sc->sc_cl[2].tx[0] = (void *)(((int)sc->sc_cl[1].tx[1]) + CL_BUFSIZE); sc->sc_cl[2].tx[1] = (void *)(((int)sc->sc_cl[2].tx[0]) + CL_BUFSIZE); sc->sc_cl[3].tx[0] = (void *)(((int)sc->sc_cl[2].tx[1]) + CL_BUFSIZE); sc->sc_cl[3].tx[1] = (void *)(((int)sc->sc_cl[3].tx[0]) + CL_BUFSIZE); for (i = 0; i < CLCD_PORTS_PER_CHIP; i++) { #if 0 int j; for (j = 0; j < 2 ; j++) { sc->sc_cl[i].rxp[j] = (void *)kvtop(sc->sc_cl[i].rx[j]); printf("cl[%d].rxbuf[%d] %x p %x\n", i, j, sc->sc_cl[i].rx[j], sc->sc_cl[i].rxp[j]); sc->sc_cl[i].txp[j] = (void *)kvtop(sc->sc_cl[i].tx[j]); printf("cl[%d].txbuf[%d] %x p %x\n", i, j, sc->sc_cl[i].tx[j], sc->sc_cl[i].txp[j]); } #endif #if 0 sc->sc_cl[i].cl_rxmode = !(!((flags >> (i * CL_FLAG_BIT_PCH)) & 0x01)); sc->sc_cl[i].cl_txmode = !(!((flags >> (i * CL_FLAG_BIT_PCH)) & 0x02)); sc->sc_cl[i].cl_softchar = !(!((flags >> (i * CL_FLAG_BIT_PCH)) & 0x04)); #endif cl_initchannel(sc, i); } /* clear errors */ bus_space_write_1(sc->sc_pcctwo->sc_iot, sc->sc_pcctwo->sc_ioh, PCCTWO_SCCERR, 0x01); /* enable interrupts */ sc->sc_ih_e.ih_fn = cl_rxintr; sc->sc_ih_e.ih_arg = sc; sc->sc_ih_e.ih_wantframe = 0; sc->sc_ih_e.ih_ipl = 
ca->ca_ipl; sc->sc_ih_m.ih_fn = cl_mintr; sc->sc_ih_m.ih_arg = sc; sc->sc_ih_m.ih_wantframe = 0; sc->sc_ih_m.ih_ipl = ca->ca_ipl; sc->sc_ih_t.ih_fn = cl_txintr; sc->sc_ih_t.ih_arg = sc; sc->sc_ih_t.ih_wantframe = 0; sc->sc_ih_t.ih_ipl = ca->ca_ipl; sc->sc_ih_r.ih_fn = cl_rxintr; sc->sc_ih_r.ih_arg = sc; sc->sc_ih_r.ih_wantframe = 0; sc->sc_ih_r.ih_ipl = ca->ca_ipl; snprintf(sc->sc_errintrname, sizeof sc->sc_errintrname, "%s_err", self->dv_xname); snprintf(sc->sc_mxintrname, sizeof sc->sc_mxintrname, "%s_mx", self->dv_xname); snprintf(sc->sc_rxintrname, sizeof sc->sc_rxintrname, "%s_rx", self->dv_xname); snprintf(sc->sc_txintrname, sizeof sc->sc_txintrname, "%s_tx", self->dv_xname); pcctwointr_establish(PCC2V_SCC_RXE, &sc->sc_ih_e, sc->sc_errintrname); pcctwointr_establish(PCC2V_SCC_M, &sc->sc_ih_m, sc->sc_mxintrname); pcctwointr_establish(PCC2V_SCC_TX, &sc->sc_ih_t, sc->sc_txintrname); pcctwointr_establish(PCC2V_SCC_RX, &sc->sc_ih_r, sc->sc_rxintrname); bus_space_write_1(sc->sc_pcctwo->sc_iot, sc->sc_pcctwo->sc_ioh, PCCTWO_SCCICR, PCC2_IRQ_IEN | (ca->ca_ipl & PCC2_IRQ_IPL)); bus_space_write_1(sc->sc_pcctwo->sc_iot, sc->sc_pcctwo->sc_ioh, PCCTWO_SCCTX, PCC2_IRQ_IEN | (ca->ca_ipl & PCC2_IRQ_IPL)); bus_space_write_1(sc->sc_pcctwo->sc_iot, sc->sc_pcctwo->sc_ioh, PCCTWO_SCCRX, PCC2_IRQ_IEN | (ca->ca_ipl & PCC2_IRQ_IPL)); printf("\n"); } void cl_initchannel(sc, channel) struct clsoftc *sc; int channel; { int s; bus_space_tag_t iot; bus_space_handle_t ioh; iot = sc->sc_iot; ioh = sc->sc_ioh; /* set up option registers */ sc->sc_cl[channel].tty = NULL; s = splhigh(); bus_space_write_1(iot, ioh, CL_CAR, channel); bus_space_write_1(iot, ioh, CL_LIVR, PCC2_VECT + PCC2V_SCC_RXE); bus_space_write_1(iot, ioh, CL_IER, 0); if (sc->sc_cl[channel].cl_consio == 0) { bus_space_write_1(iot, ioh, CL_CMR, 0x02); bus_space_write_1(iot, ioh, CL_COR1, 0x17); bus_space_write_1(iot, ioh, CL_COR2, 0x00); bus_space_write_1(iot, ioh, CL_COR3, 0x02); bus_space_write_1(iot, ioh, CL_COR4, 0xec); bus_space_write_1(iot, ioh, CL_COR5, 0xec); bus_space_write_1(iot, ioh, CL_COR6, 0x00); bus_space_write_1(iot, ioh, CL_COR7, 0x00); bus_space_write_1(iot, ioh, CL_SCHR1, 0x00); bus_space_write_1(iot, ioh, CL_SCHR2, 0x00); bus_space_write_1(iot, ioh, CL_SCHR3, 0x00); bus_space_write_1(iot, ioh, CL_SCHR4, 0x00); bus_space_write_1(iot, ioh, CL_SCRL, 0x00); bus_space_write_1(iot, ioh, CL_SCRH, 0x00); bus_space_write_1(iot, ioh, CL_LNXT, 0x00); bus_space_write_1(iot, ioh, CL_RBPR, 0x40); /* 9600 */ bus_space_write_1(iot, ioh, CL_RCOR, 0x01); bus_space_write_1(iot, ioh, CL_TBPR, 0x40); /* 9600 */ bus_space_write_1(iot, ioh, CL_TCOR, 0x01 << 5); /* console port should be 0x88 already */ bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x00); bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x00); bus_space_write_1(iot, ioh, CL_RTPRL, CL_RX_TIMEOUT); bus_space_write_1(iot, ioh, CL_RTPRH, 0x00); } bus_space_write_1(iot, ioh, CL_CCR, 0x20); while (bus_space_read_1(iot, ioh, CL_CCR) != 0) ; splx(s); } int cldefaultrate = TTYDEF_SPEED; int clmctl(dev, bits, how) dev_t dev; int bits; int how; { struct clsoftc *sc; bus_space_tag_t iot; bus_space_handle_t ioh; int s; /* should only be called with valid device */ sc = (struct clsoftc *)cl_cd.cd_devs[CL_UNIT(dev)]; iot = sc->sc_iot; ioh = sc->sc_ioh; /* settings are currently ignored */ s = splcl(); switch (how) { case DMSET: if (bits & TIOCM_RTS) bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x01); else bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x00); if (bits & TIOCM_DTR) bus_space_write_1(iot, ioh, CL_MSVR_DTR, 
0x02); else bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x00); break; case DMBIC: if (bits & TIOCM_RTS) bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x00); if (bits & TIOCM_DTR) bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x00); break; case DMBIS: if (bits & TIOCM_RTS) bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x01); if (bits & TIOCM_DTR) bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x02); break; case DMGET: bits = 0; { u_int8_t msvr; msvr = bus_space_read_1(iot, ioh, CL_MSVR_RTS); if (msvr & 0x80) bits |= TIOCM_DSR; if (msvr & 0x40) bits |= TIOCM_CD; if (msvr & 0x20) bits |= TIOCM_CTS; if (msvr & 0x10) bits |= TIOCM_DTR; if (msvr & 0x02) bits |= TIOCM_DTR; if (msvr & 0x01) bits |= TIOCM_RTS; } break; } splx(s); #if 0 bits = 0; /* proper defaults? */ bits |= TIOCM_DTR; bits |= TIOCM_RTS; bits |= TIOCM_CTS; bits |= TIOCM_CD; /* bits |= TIOCM_RI; */ bits |= TIOCM_DSR; #endif return bits; } int clopen(dev, flag, mode, p) dev_t dev; int flag; int mode; struct proc *p; { int s, unit, channel; struct cl_info *cl; struct clsoftc *sc; struct tty *tp; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return ENODEV; } channel = CL_CHANNEL(dev); cl = &sc->sc_cl[channel]; s = splcl(); if (cl->tty) { tp = cl->tty; } else { tp = cl->tty = ttymalloc(0); } tp->t_oproc = clstart; tp->t_param = clparam; tp->t_dev = dev; if ((tp->t_state & TS_ISOPEN) == 0) { tp->t_state |= TS_WOPEN; ttychars(tp); if (tp->t_ispeed == 0) { /* * only when cleared do we reset to defaults. */ tp->t_iflag = TTYDEF_IFLAG; tp->t_oflag = TTYDEF_OFLAG; tp->t_lflag = TTYDEF_LFLAG; tp->t_ispeed = tp->t_ospeed = cldefaultrate; if (sc->sc_cl[channel].cl_consio != 0) { /* console is 8N1 */ tp->t_cflag = (CREAD | CS8 | HUPCL); } else { tp->t_cflag = TTYDEF_CFLAG; } } /* * do these all the time */ if (cl->cl_swflags & TIOCFLAG_CLOCAL) tp->t_cflag |= CLOCAL; if (cl->cl_swflags & TIOCFLAG_CRTSCTS) tp->t_cflag |= CRTSCTS; if (cl->cl_swflags & TIOCFLAG_MDMBUF) tp->t_cflag |= MDMBUF; clparam(tp, &tp->t_termios); ttsetwater(tp); (void)clmctl(dev, TIOCM_DTR | TIOCM_RTS, DMSET); #ifdef XXX if ((cl->cl_swflags & TIOCFLAG_SOFTCAR) || (clmctl(dev, 0, DMGET) & TIOCM_CD)) { tp->t_state |= TS_CARR_ON; } else { tp->t_state &= ~TS_CARR_ON; } #endif tp->t_state |= TS_CARR_ON; { u_int8_t save; save = bus_space_read_1(sc->sc_iot, sc->sc_ioh, CL_CAR); bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_CAR, channel); bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_IER, 0x88); bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_CAR, save); } } else if (tp->t_state & TS_XCLUDE && suser(p, 0) != 0) { splx(s); return EBUSY; } splx(s); /* * Reset the tty pointer, as there could have been a dialout * use of the tty with a dialin open waiting. 
*/ tp->t_dev = dev; #ifdef DEBUG cl_dumpport(sc, channel); #endif return (*linesw[tp->t_line].l_open)(dev, tp, p); } int clparam(tp, t) struct tty *tp; struct termios *t; { int unit, channel; struct clsoftc *sc; int s; dev_t dev; dev = tp->t_dev; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return ENODEV; } channel = CL_CHANNEL(dev); tp->t_ispeed = t->c_ispeed; tp->t_ospeed = t->c_ospeed; tp->t_cflag = t->c_cflag; clccparam(sc, t, channel); s = splcl(); cl_unblock(tp); splx(s); return 0; } #if 0 void cloutput(tp) struct tty *tp; { int cc, s, unit, cnt; u_char *tptr; int channel; struct clsoftc *sc; dev_t dev; u_char cl_obuffer[CLCDBUF+1]; dev = tp->t_dev; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return; } channel = CL_CHANNEL(dev); if ((tp->t_state & TS_ISOPEN) == 0) return; s = splcl(); cc = tp->t_outq.c_cc; while (cc > 0) { /*XXX*/ cnt = min (CLCDBUF,cc); cnt = q_to_b(&tp->t_outq, cl_obuffer, cnt); if (cnt == 0) { break; } for (tptr = cl_obuffer; tptr < &cl_obuffer[cnt]; tptr++) { clputc(sc, channel, *tptr); } cc -= cnt; } splx(s); } #endif int clclose(dev, flag, mode, p) dev_t dev; int flag; int mode; struct proc *p; { int unit, channel; struct tty *tp; struct cl_info *cl; struct clsoftc *sc; bus_space_tag_t iot; bus_space_handle_t ioh; int s; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return ENODEV; } channel = CL_CHANNEL(dev); cl = &sc->sc_cl[channel]; iot = sc->sc_iot; ioh = sc->sc_ioh; tp = cl->tty; (*linesw[tp->t_line].l_close)(tp, flag, p); s = splcl(); bus_space_write_1(iot, ioh, CL_CAR, channel); if (cl->cl_consio == 0 && (tp->t_cflag & HUPCL) != 0) { bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x00); bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x00); bus_space_write_1(iot, ioh, CL_CCR, 0x05); } splx(s); ttyclose(tp); #if 0 cl->tty = NULL; #endif #ifdef DEBUG cl_dumpport(sc, channel); #endif return 0; } int clread(dev, uio, flag) dev_t dev; struct uio *uio; int flag; { int unit, channel; struct tty *tp; struct cl_info *cl; struct clsoftc *sc; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return ENODEV; } channel = CL_CHANNEL(dev); cl = &sc->sc_cl[channel]; tp = cl->tty; if (tp == NULL) return ENXIO; return (*linesw[tp->t_line].l_read)(tp, uio, flag); } int clwrite(dev, uio, flag) dev_t dev; struct uio *uio; int flag; { int unit, channel; struct tty *tp; struct cl_info *cl; struct clsoftc *sc; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return ENODEV; } channel = CL_CHANNEL(dev); cl = &sc->sc_cl[channel]; tp = cl->tty; if (tp == NULL) return ENXIO; return (*linesw[tp->t_line].l_write)(tp, uio, flag); } int clioctl(dev, cmd, data, flag, p) dev_t dev; u_long cmd; caddr_t data; int flag; struct proc *p; { int error; int unit, channel; struct tty *tp; struct cl_info *cl; struct clsoftc *sc; unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return ENODEV; } channel = CL_CHANNEL(dev); cl = &sc->sc_cl[channel]; tp = cl->tty; if (tp == NULL) return ENXIO; error = (*linesw[tp->t_line].l_ioctl)(tp, cmd, data, flag, p); if (error >= 0) return error; error = ttioctl(tp, cmd, data, flag, p); if (error >= 0) return error; switch (cmd) { case TIOCSBRK: /* */ break; case TIOCCBRK: /* */ break; case TIOCSDTR: (void) clmctl(dev, 
TIOCM_DTR | TIOCM_RTS, DMBIS); break; case TIOCCDTR: (void) clmctl(dev, TIOCM_DTR | TIOCM_RTS, DMBIC); break; case TIOCMSET: (void) clmctl(dev, *(int *) data, DMSET); break; case TIOCMBIS: (void) clmctl(dev, *(int *) data, DMBIS); break; case TIOCMBIC: (void) clmctl(dev, *(int *) data, DMBIC); break; case TIOCMGET: *(int *)data = clmctl(dev, 0, DMGET); break; case TIOCGFLAGS: *(int *)data = cl->cl_swflags; break; case TIOCSFLAGS: error = suser(p, 0); if (error != 0) return EPERM; cl->cl_swflags = *(int *)data; cl->cl_swflags &= /* only allow valid flags */ (TIOCFLAG_SOFTCAR | TIOCFLAG_CLOCAL | TIOCFLAG_CRTSCTS); break; default: return ENOTTY; } return 0; } int clstop(tp, flag) struct tty *tp; int flag; { int s; s = splcl(); if (tp->t_state & TS_BUSY) { if ((tp->t_state & TS_TTSTOP) == 0) tp->t_state |= TS_FLUSH; } splx(s); return 0; } void clcnprobe(cp) struct consdev *cp; { int maj; /* bomb if it'a a MVME188 */ if (brdtyp == BRD_188 || badaddr(CD2400_BASE_ADDR, 1) != 0) return; /* do not attach as console if cl has been disabled */ if (cl_cd.cd_ndevs == 0 || cl_cd.cd_devs[0] == NULL) return; /* locate the major number */ for (maj = 0; maj < nchrdev; maj++) if (cdevsw[maj].d_open == clopen) break; if (maj == nchrdev) return; cp->cn_dev = makedev(maj, 0); cp->cn_pri = CN_LOWPRI; } void clcninit(cp) struct consdev *cp; { struct clsoftc *sc; sc = (struct clsoftc *)cl_cd.cd_devs[0]; cl_cons.cl_iot = sc->sc_iot; cl_cons.cl_ioh = sc->sc_ioh; cl_cons.cl_rxiack = (void *)(sc->sc_pcctwo->sc_base + PCCTWO_SCCRXIACK); } int cl_instat(sc) struct clsoftc *sc; { u_int8_t rir; if (sc == NULL) rir = bus_space_read_1(cl_cons.cl_iot, cl_cons.cl_ioh, CL_RIR); else rir = bus_space_read_1(sc->sc_iot, sc->sc_ioh, CL_RIR); return (rir & 0x40); } int clcngetc(dev) dev_t dev; { u_int8_t val, reoir, licr, data; int got_char = 0; u_int8_t ier_old; bus_space_tag_t iot; bus_space_handle_t ioh; iot = cl_cons.cl_iot; ioh = cl_cons.cl_ioh; bus_space_write_1(iot, ioh, CL_CAR, 0); ier_old = bus_space_read_1(iot, ioh, CL_IER); if ((ier_old & 0x08) == 0) { bus_space_write_1(iot, ioh, CL_IER, 0x08); } else ier_old = 0xff; while (got_char == 0) { val = bus_space_read_1(iot, ioh, CL_RIR); /* if no receive interrupt pending wait */ if ((val & 0x80) == 0) continue; /* XXX do we need to suck the entire FIFO contents? */ reoir = *cl_cons.cl_rxiack; /* receive PIACK */ licr = bus_space_read_1(iot, ioh, CL_LICR); /* is the interrupt for us? 
(port 0) */ if (((licr >> 2) & 0x3) == 0) { (void)bus_space_read_1(iot, ioh, CL_RISRL); (void)bus_space_read_1(iot, ioh, CL_RFOC); data = bus_space_read_1(iot, ioh, CL_RDR); if (ier_old != 0xff) bus_space_write_1(iot, ioh, CL_IER, ier_old); got_char = 1; } else { /* read and discard the character */ data = bus_space_read_1(iot, ioh, CL_RDR); } bus_space_write_1(iot, ioh, CL_TEOIR, 0x00); } return data; } void clcnputc(dev, c) dev_t dev; u_char c; { clputc(0, 0, c); } void clcnpollc(dev, on) dev_t dev; int on; { if (on != 0) { /* enable polling */ } else { /* disable polling */ } } void clputc(sc, unit, c) struct clsoftc *sc; int unit; u_char c; { u_int8_t schar; u_int8_t oldchannel; bus_space_tag_t iot; bus_space_handle_t ioh; int s; if (sc == NULL) { /* output on console */ iot = cl_cons.cl_iot; ioh = cl_cons.cl_ioh; } else { iot = sc->sc_iot; ioh = sc->sc_ioh; } s = splhigh(); oldchannel = bus_space_read_1(iot, ioh, CL_CAR); bus_space_write_1(iot, ioh, CL_CAR, unit); if (unit == 0) { schar = bus_space_read_1(iot, ioh, CL_SCHR3); /* send special char, number 3 */ bus_space_write_1(iot, ioh, CL_SCHR3, c); bus_space_write_1(iot, ioh, CL_STCR, 0x08 | 3); while (bus_space_read_1(iot, ioh, CL_STCR) != 0) { /* wait until cl notices the command * otherwise it may not notice the character * if we send characters too fast. */ } DELAY(5); bus_space_write_1(iot, ioh, CL_SCHR3, schar); } else { if (bus_space_read_1(iot, ioh, CL_TFTC) != 0) bus_space_write_1(iot, ioh, CL_TDR, c); } bus_space_write_1(iot, ioh, CL_CAR, oldchannel); splx(s); } int clccparam(sc, par, channel) struct clsoftc *sc; struct termios *par; int channel; { bus_space_tag_t iot; bus_space_handle_t ioh; u_int divisor, clk, clen; int s, imask, ints; iot = sc->sc_iot; ioh = sc->sc_ioh; s = splcl(); bus_space_write_1(iot, ioh, CL_CAR, channel); if (par->c_ospeed == 0) { /* dont kill the console */ if (sc->sc_cl[channel].cl_consio == 0) { /* disconnect, drop RTS DTR stop receiver */ bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x00); bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x00); bus_space_write_1(iot, ioh, CL_CCR, 0x05); } splx(s); return 0xff; } bus_space_write_1(iot, ioh, CL_MSVR_RTS, 0x03); bus_space_write_1(iot, ioh, CL_MSVR_DTR, 0x03); divisor = cl_clkdiv(par->c_ospeed); clk = cl_clknum(par->c_ospeed); bus_space_write_1(iot, ioh, CL_TBPR, divisor); bus_space_write_1(iot, ioh, CL_TCOR, clk << 5); divisor = cl_clkdiv(par->c_ispeed); clk = cl_clknum(par->c_ispeed); bus_space_write_1(iot, ioh, CL_RBPR, divisor); bus_space_write_1(iot, ioh, CL_RCOR, clk); bus_space_write_1(iot, ioh, CL_RTPRL, cl_clkrxtimeout(par->c_ispeed)); bus_space_write_1(iot, ioh, CL_RTPRH, 0x00); switch (par->c_cflag & CSIZE) { case CS5: clen = 4; /* this is the mask for the chip. */ imask = 0x1F; break; case CS6: clen = 5; imask = 0x3F; break; case CS7: clen = 6; imask = 0x7F; break; default: clen = 7; imask = 0xFF; } bus_space_write_1(iot, ioh, CL_COR3, par->c_cflag & PARENB ? 
4 : 2); { u_int8_t cor1; if (par->c_cflag & PARENB) { if (par->c_cflag & PARODD) { cor1 = 0xE0 | clen ; /* odd */ } else { cor1 = 0x40 | clen ; /* even */ } } else { cor1 = 0x10 | clen; /* ignore parity */ } if (bus_space_read_1(iot, ioh, CL_COR1) != cor1) { bus_space_write_1(iot, ioh, CL_COR1, cor1); bus_space_write_1(iot, ioh, CL_CCR, 0x20); while (bus_space_read_1(iot, ioh, CL_CCR) != 0) ; } } if (sc->sc_cl[channel].cl_consio == 0 && (par->c_cflag & CREAD) == 0) bus_space_write_1(iot, ioh, CL_CCR, 0x08); else bus_space_write_1(iot, ioh, CL_CCR, 0x0a); while (bus_space_read_1(iot, ioh, CL_CCR) != 0) ; ints = 0; #define SCC_DSR 0x80 #define SCC_DCD 0x40 #define SCC_CTS 0x20 if ((par->c_cflag & CLOCAL) == 0) { ints |= SCC_DCD; } if ((par->c_cflag & CCTS_OFLOW) != 0) { ints |= SCC_CTS; } if ((par->c_cflag & CRTSCTS) != 0) { ints |= SCC_CTS; } #ifdef DONT_LET_HARDWARE if ((par->c_cflag & CCTS_IFLOW) != 0) { ints |= SCC_DSR; } #endif bus_space_write_1(iot, ioh, CL_COR4, ints | CL_FIFO_CNT); bus_space_write_1(iot, ioh, CL_COR5, ints | CL_FIFO_CNT); splx(s); return imask; } static int clknum = 0; u_int8_t cl_clkdiv(speed) int speed; { int i; if (cl_clocks[clknum].speed == speed) return cl_clocks[clknum].divisor; for (i = 0; cl_clocks[i].speed != 0; i++) { if (cl_clocks[i].speed == speed) { clknum = i; return cl_clocks[clknum].divisor; } } /* return some sane value if unknown speed */ return cl_clocks[CL_SAFE_CLOCK].divisor; } u_int8_t cl_clknum(speed) int speed; { int i; if (cl_clocks[clknum].speed == speed) return cl_clocks[clknum].clock; for (i = 0; cl_clocks[i].speed != 0; i++) { if (cl_clocks[clknum].speed == speed) { clknum = i; return cl_clocks[clknum].clock; } } /* return some sane value if unknown speed */ return cl_clocks[CL_SAFE_CLOCK].clock; } u_int8_t cl_clkrxtimeout(speed) int speed; { int i; if (cl_clocks[clknum].speed == speed) return cl_clocks[clknum].rx_timeout; for (i = 0; cl_clocks[i].speed != 0; i++) { if (cl_clocks[i].speed == speed) { clknum = i; return cl_clocks[clknum].rx_timeout; } } /* return some sane value if unknown speed */ return cl_clocks[CL_SAFE_CLOCK].rx_timeout; } void cl_unblock(tp) struct tty *tp; { tp->t_state &= ~TS_FLUSH; if (tp->t_outq.c_cc != 0) clstart(tp); } void clstart(tp) struct tty *tp; { dev_t dev; struct clsoftc *sc; int channel, unit, s; #if 0 int cnt; u_int8_t cbuf; #endif dev = tp->t_dev; channel = CL_CHANNEL(dev); /* hack to test output on non console only */ #if 0 if (channel == 0) { cloutput(tp); return; } #endif unit = CL_UNIT(dev); if (unit >= cl_cd.cd_ndevs || (sc = (struct clsoftc *)cl_cd.cd_devs[unit]) == NULL) { return; } if ((tp->t_state & TS_ISOPEN) == 0) return; s = splcl(); #if 0 if (sc->sc_cl[channel].transmitting == 1) { /* i'm busy, go away, I will get to it later. 
*/ splx(s); return; } cnt = q_to_b(&tp->t_outq, &cbuf, 1); if ( cnt != 0 ) { sc->sc_cl[channel].transmitting = 1; bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_CAR, channel); bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_TDR, cbuf); } else { sc->sc_cl[channel].transmitting = 0; } #else if ((tp->t_state & (TS_TIMEOUT | TS_BUSY | TS_TTSTOP | TS_FLUSH)) == 0) { tp->t_state |= TS_BUSY; bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_CAR, channel); bus_space_write_1(sc->sc_iot, sc->sc_ioh, CL_IER, bus_space_read_1(sc->sc_iot, sc->sc_ioh, CL_IER) | 0x03); } #endif splx(s); } int cl_mintr(arg) void *arg; { struct clsoftc *sc = arg; bus_space_tag_t iot; bus_space_handle_t ioh; u_int8_t mir, misr, msvr; int channel; iot = sc->sc_iot; ioh = sc->sc_ioh; mir = bus_space_read_1(iot, ioh, CL_MIR); if ((mir & 0x40) == 0) { return 0; } channel = mir & 0x03; misr = bus_space_read_1(iot, ioh, CL_MISR); msvr = bus_space_read_1(iot, ioh, CL_MSVR_RTS); if (misr & 0x01) { /* timers are not currently used?? */ log(LOG_WARNING, "cl_mintr: channel %x timer 1 unexpected\n",channel); } if (misr & 0x02) { /* timers are not currently used?? */ log(LOG_WARNING, "cl_mintr: channel %x timer 2 unexpected\n",channel); } if (misr & 0x20) { #ifdef DEBUG log(LOG_WARNING, "cl_mintr: channel %x cts %x\n",channel, ((msvr & 0x20) != 0x0) ); #endif } if (misr & 0x40) { struct tty *tp = sc->sc_cl[channel].tty; #ifdef DEBUG log(LOG_WARNING, "cl_mintr: channel %x cd %x\n",channel, ((msvr & 0x40) != 0x0) ); #endif ttymodem(tp, ((msvr & 0x40) != 0x0) ); } if (misr & 0x80) { #ifdef DEBUG log(LOG_WARNING, "cl_mintr: channel %x dsr %x\n",channel, ((msvr & 0x80) != 0x0) ); #endif } bus_space_write_1(iot, ioh, CL_MEOIR, 0); return 1; } int cl_txintr(arg) void *arg; { static int empty; struct clsoftc *sc = arg; bus_space_tag_t iot; bus_space_handle_t ioh; u_int8_t tir, cmr, teoir; u_int8_t max; int channel; struct tty *tp; int cnt; u_char buffer[CL_FIFO_MAX +1]; iot = sc->sc_iot; ioh = sc->sc_ioh; tir = bus_space_read_1(iot, ioh, CL_TIR); if ((tir & 0x40) == 0) { return 0; } channel = tir & 0x03; sc->sc_cl[channel].txcnt ++; cmr = bus_space_read_1(iot, ioh, CL_CMR); tp = sc->sc_cl[channel].tty; if (tp == NULL || (tp->t_state & TS_ISOPEN) == 0) { bus_space_write_1(iot, ioh, CL_IER, bus_space_read_1(iot, ioh, CL_IER) & ~0x03); bus_space_write_1(iot, ioh, CL_TEOIR, 0x08); return 1; } switch (cmr & CL_TXMASK) { case CL_TXDMAINT: { u_int8_t dmabsts; int nbuf, busy, resid; void *pbuffer; dmabsts = bus_space_read_1(iot, ioh, CL_DMABSTS); nbuf = ((dmabsts & 0x8) >> 3) & 0x1; busy = ((dmabsts & 0x4) >> 2) & 0x1; do { pbuffer = sc->sc_cl[channel].tx[nbuf]; resid = tp->t_outq.c_cc; cnt = min (CL_BUFSIZE,resid); log(LOG_WARNING, "cl_txintr: resid %x cnt %x pbuf %p\n", resid, cnt, pbuffer); if (cnt != 0) { cnt = q_to_b(&tp->t_outq, pbuffer, cnt); resid -= cnt; if (nbuf == 0) { bus_space_write_2(iot, ioh, CL_ATBADRU, ((u_long)sc->sc_cl[channel].txp[nbuf]) >> 16); bus_space_write_2(iot, ioh, CL_ATBADRL, ((u_long) sc->sc_cl[channel].txp[nbuf]) & 0xffff); bus_space_write_2(iot, ioh, CL_ATBCNT, cnt); bus_space_write_1(iot, ioh, CL_ATBSTS, 0x43); } else { bus_space_write_2(iot, ioh, CL_BTBADRU, ((u_long)sc->sc_cl[channel].txp[nbuf]) >> 16); bus_space_write_2(iot, ioh, CL_BTBADRL, ((u_long) sc->sc_cl[channel].txp[nbuf]) & 0xffff); bus_space_write_2(iot, ioh, CL_BTBCNT, cnt); bus_space_write_1(iot, ioh, CL_BTBSTS, 0x43); } teoir = 0x08; } else { teoir = 0x08; if (tp->t_state & TS_BUSY) { tp->t_state &= ~(TS_BUSY | TS_FLUSH); if (tp->t_state & TS_ASLEEP) { tp->t_state 
&= ~TS_ASLEEP; wakeup((caddr_t) &tp->t_outq); } selwakeup(&tp->t_wsel); } bus_space_write_1(iot, ioh, CL_IER, bus_space_read_1(iot, ioh, CL_IER) & ~0x03); } nbuf = ~nbuf & 0x1; busy--; } while (resid != 0 && busy != -1);/* if not busy do other buffer */ } break; case CL_TXINTR: max = bus_space_read_1(iot, ioh, CL_TFTC); cnt = min((int)max,tp->t_outq.c_cc); if (cnt != 0) { cnt = q_to_b(&tp->t_outq, buffer, cnt); empty = 0; bus_space_write_multi_1(iot, ioh, CL_TDR, buffer, cnt); teoir = 0x00; } else { if (empty > 5 && ((empty % 20000 )== 0)) { log(LOG_WARNING, "cl_txintr to many empty intr %d channel %d\n", empty, channel); } empty++; teoir = 0x08; if (tp->t_state & TS_BUSY) { tp->t_state &= ~(TS_BUSY | TS_FLUSH); if (tp->t_state & TS_ASLEEP) { tp->t_state &= ~TS_ASLEEP; wakeup((caddr_t) &tp->t_outq); } selwakeup(&tp->t_wsel); } bus_space_write_1(iot, ioh, CL_IER, bus_space_read_1(iot, ioh, CL_IER) & ~0x03); } break; default: log(LOG_WARNING, "cl_txintr unknown mode %x\n", cmr); /* we probably will go to hell quickly now */ teoir = 0x08; } bus_space_write_1(iot, ioh, CL_TEOIR, teoir); return 1; } int cl_rxintr(arg) void *arg; { struct clsoftc *sc = arg; bus_space_tag_t iot; bus_space_handle_t ioh; u_int8_t rir, channel, cmr, risrl; u_int8_t fifocnt; struct tty *tp; int i; u_int8_t reoir; u_char buffer[CL_FIFO_MAX +1]; #ifdef DDB int wantddb = 0; #endif iot = sc->sc_iot; ioh = sc->sc_ioh; rir = bus_space_read_1(iot, ioh, CL_RIR); if ((rir & 0x40) == 0x0) { return 0; } channel = rir & 0x3; cmr = bus_space_read_1(iot, ioh, CL_CMR); sc->sc_cl[channel].rxcnt ++; risrl = bus_space_read_1(iot, ioh, CL_RISRL); if (risrl & 0x80) { /* timeout, no characters */ } else /* We don't need no stinkin special characters */ if (risrl & 0x08) { cl_overflow(sc, channel, (long *)&sc->sc_fotime, "fifo"); } else if (risrl & 0x04) { cl_parity(sc, channel); } else if (risrl & 0x02) { cl_frame(sc, channel); } else if (risrl & 0x01) { #ifdef DDB if (sc->sc_cl[channel].cl_consio) wantddb = db_console; #endif cl_break(sc, channel); } reoir = 0x08; switch (cmr & CL_RXMASK) { case CL_RXDMAINT: { int nbuf; u_int16_t cnt; int bufcomplete; u_int8_t status, dmabsts; u_int8_t risrh; risrh = bus_space_read_1(iot, ioh, CL_RISRH); dmabsts = bus_space_read_1(iot, ioh, CL_DMABSTS); nbuf = (risrh & 0x08) ? 1 : 0; bufcomplete = (risrh & 0x20) ? 1 : 0; if (nbuf == 0) { cnt = bus_space_read_2(iot, ioh, CL_ARBCNT); status = bus_space_read_1(iot, ioh, CL_ARBSTS); } else { cnt = bus_space_read_2(iot, ioh, CL_BRBCNT); status = bus_space_read_1(iot, ioh, CL_BRBSTS); } #if USE_BUFFER cl_appendbufn(sc, channel, sc->rx[nbuf], cnt); #else { int i; u_char *pbuf; tp = sc->sc_cl[channel].tty; pbuf = sc->sc_cl[channel].rx[nbuf]; /* this should be done at off level */ { u_int16_t rcbadru, rcbadrl; u_int8_t arbsts, brbsts; u_char *pbufs, *pbufe; rcbadru = bus_space_read_2(iot, ioh, CL_RCBADRU); rcbadrl = bus_space_read_2(iot, ioh, CL_RCBADRL); arbsts = bus_space_read_1(iot, ioh, CL_ARBSTS); brbsts = bus_space_read_1(iot, ioh, CL_BRBSTS); pbufs = sc->sc_cl[channel].rxp[nbuf]; pbufe = (u_char *)(((u_long)rcbadru << 16) | (u_long)rcbadrl); cnt = pbufe - pbufs; } reoir = 0x0 | (bufcomplete) ? 
0 : 0xd0; bus_space_write_1(iot, ioh, CL_REOIR, reoir); DELAY(10); /* give the chip a moment */ for (i = 0; i < cnt; i++) { u_char c; c = pbuf[i]; (*linesw[tp->t_line].l_rint)(c,tp); } /* this should be done at off level */ if (nbuf == 0) { bus_space_write_2(iot, ioh, CL_ARBCNT, CL_BUFSIZE); bus_space_write_2(iot, ioh, CL_ARBSTS, 0x01); } else { bus_space_write_2(iot, ioh, CL_BRBCNT, CL_BUFSIZE); bus_space_write_2(iot, ioh, CL_BRBSTS, 0x01); } } #endif } bus_space_write_1(iot, ioh, CL_REOIR, reoir); break; case CL_RXINTR: fifocnt = bus_space_read_1(iot, ioh, CL_RFOC); tp = sc->sc_cl[channel].tty; bus_space_read_multi_1(iot, ioh, CL_RDR, buffer, fifocnt); if (tp == NULL) { /* if the channel is not configured, * dont send characters upstream. * also fix problem with NULL dereference */ reoir = 0x00; break; } bus_space_write_1(iot, ioh, CL_REOIR, reoir); for (i = 0; i < fifocnt; i++) { u_char c; c = buffer[i]; #if USE_BUFFER cl_appendbuf(sc, channel, c); #else /* does any restricitions exist on spl * for this call */ (*linesw[tp->t_line].l_rint)(c,tp); #endif } break; default: log(LOG_WARNING, "cl_rxintr unknown mode %x\n", cmr); /* we probably will go to hell quickly now */ bus_space_write_1(iot, ioh, CL_REOIR, 0x08); } #ifdef DDB if (wantddb != 0) Debugger(); #endif return 1; } void cl_overflow(sc, channel, ptime, msg) struct clsoftc *sc; int channel; long *ptime; char *msg; { log(LOG_WARNING, "%s[%d]: %s overrun\n", sc->sc_dev.dv_xname, channel, msg); } void cl_parity(sc, channel) struct clsoftc *sc; int channel; { log(LOG_WARNING, "%s[%d]: parity error\n", sc->sc_dev.dv_xname, channel); } void cl_frame(sc, channel) struct clsoftc *sc; int channel; { log(LOG_WARNING, "%s[%d]: frame error\n", sc->sc_dev.dv_xname, channel); } void cl_break(sc, channel) struct clsoftc *sc; int channel; { #ifdef DEBUG log(LOG_WARNING, "%s[%d]: break detected\n", sc->sc_dev.dv_xname, channel); #endif } #ifdef DEBUG void cl_dumpport(struct clsoftc *sc, int channel) { bus_space_tag_t iot; bus_space_handle_t ioh; u_int8_t livr, cmr, cor1, cor2, cor3, cor4, cor5, cor6, cor7, schr1, schr2, schr3, schr4, scrl, scrh, lnxt, rbpr, rcor, tbpr, tcor, rpilr, rir, tpr, ier, ccr, dmabsts, arbsts, brbsts, atbsts, btbsts, csr, rts, dtr, rtprl, rtprh; u_int16_t rcbadru, rcbadrl, arbadru, arbadrl, arbcnt, brbadru, brbadrl, brbcnt; u_int16_t tcbadru, tcbadrl, atbadru, atbadrl, atbcnt, btbadru, btbadrl, btbcnt; int s; iot = sc->sc_iot; ioh = sc->sc_ioh; s = splcl(); bus_space_write_1(iot, ioh, CL_CAR, channel); livr = bus_space_read_1(iot, ioh, CL_LIVR); cmr = bus_space_read_1(iot, ioh, CL_CMR); cor1 = bus_space_read_1(iot, ioh, CL_COR1); cor2 = bus_space_read_1(iot, ioh, CL_COR2); cor3 = bus_space_read_1(iot, ioh, CL_COR3); cor4 = bus_space_read_1(iot, ioh, CL_COR4); cor5 = bus_space_read_1(iot, ioh, CL_COR5); cor6 = bus_space_read_1(iot, ioh, CL_COR6); cor7 = bus_space_read_1(iot, ioh, CL_COR7); schr1 = bus_space_read_1(iot, ioh, CL_SCHR1); schr2 = bus_space_read_1(iot, ioh, CL_SCHR2); schr3 = bus_space_read_1(iot, ioh, CL_SCHR3); schr4 = bus_space_read_1(iot, ioh, CL_SCHR4); scrl = bus_space_read_1(iot, ioh, CL_SCRL); scrh = bus_space_read_1(iot, ioh, CL_SCRH); lnxt = bus_space_read_1(iot, ioh, CL_LNXT); rbpr = bus_space_read_1(iot, ioh, CL_RBPR); rcor = bus_space_read_1(iot, ioh, CL_RCOR); tbpr = bus_space_read_1(iot, ioh, CL_TBPR); rpilr = bus_space_read_1(iot, ioh, CL_RPILR); rir = bus_space_read_1(iot, ioh, CL_RIR); ier = bus_space_read_1(iot, ioh, CL_IER); ccr = bus_space_read_1(iot, ioh, CL_CCR); tcor = 
bus_space_read_1(iot, ioh, CL_TCOR); csr = bus_space_read_1(iot, ioh, CL_CSR); tpr = bus_space_read_1(iot, ioh, CL_TPR); rts = bus_space_read_1(iot, ioh, CL_MSVR_RTS); dtr = bus_space_read_1(iot, ioh, CL_MSVR_DTR); rtprl = bus_space_read_1(iot, ioh, CL_RTPRL); rtprh = bus_space_read_1(iot, ioh, CL_RTPRH); dmabsts = bus_space_read_1(iot, ioh, CL_DMABSTS); tcbadru = bus_space_read_2(iot, ioh, CL_TCBADRU); tcbadrl = bus_space_read_2(iot, ioh, CL_TCBADRL); rcbadru = bus_space_read_2(iot, ioh, CL_RCBADRU); rcbadrl = bus_space_read_2(iot, ioh, CL_RCBADRL); arbadru = bus_space_read_2(iot, ioh, CL_ARBADRU); arbadrl = bus_space_read_2(iot, ioh, CL_ARBADRL); arbcnt = bus_space_read_2(iot, ioh, CL_ARBCNT); arbsts = bus_space_read_1(iot, ioh, CL_ARBSTS); brbadru = bus_space_read_2(iot, ioh, CL_BRBADRU); brbadrl = bus_space_read_2(iot, ioh, CL_BRBADRL); brbcnt = bus_space_read_2(iot, ioh, CL_BRBCNT); brbsts = bus_space_read_1(iot, ioh, CL_BRBSTS); atbadru = bus_space_read_2(iot, ioh, CL_ATBADRU); atbadrl = bus_space_read_2(iot, ioh, CL_ATBADRL); atbcnt = bus_space_read_2(iot, ioh, CL_ATBCNT); atbsts = bus_space_read_1(iot, ioh, CL_ATBSTS); btbadru = bus_space_read_2(iot, ioh, CL_BTBADRU); btbadrl = bus_space_read_2(iot, ioh, CL_BTBADRL); btbcnt = bus_space_read_2(iot, ioh, CL_BTBCNT); btbsts = bus_space_read_1(iot, ioh, CL_BTBSTS); splx(s); printf("{ port %x livr %x cmr %x\n", channel,livr, cmr); printf("cor1 %x cor2 %x cor3 %x cor4 %x cor5 %x cor6 %x cor7 %x\n", cor1, cor2, cor3, cor4, cor5, cor6, cor7); printf("schr1 %x schr2 %x schr3 %x schr4 %x\n", schr1, schr2, schr3, schr4); printf("scrl %x scrh %x lnxt %x\n", scrl, scrh, lnxt); printf("rbpr %x rcor %x tbpr %x tcor %x\n", rbpr, rcor, tbpr, tcor); printf("rpilr %x rir %x ier %x ccr %x\n", rpilr, rir, ier, ccr); printf("tpr %x csr %x rts %x dtr %x\n", tpr, csr, rts, dtr); printf("rtprl %x rtprh %x\n", rtprl, rtprh); printf("rxcnt %x txcnt %x\n", sc->sc_cl[channel].rxcnt, sc->sc_cl[channel].txcnt); printf("dmabsts %x, tcbadru %x, tcbadrl %x, rcbadru %x, rcbadrl %x,\n", dmabsts, tcbadru, tcbadrl, rcbadru, rcbadrl ); printf("arbadru %x, arbadrl %x, arbcnt %x, arbsts %x\n", arbadru, arbadrl, arbcnt, arbsts); printf("brbadru %x, brbadrl %x, brbcnt %x, brbsts %x\n", brbadru, brbadrl, brbcnt, brbsts); printf("atbadru %x, atbadrl %x, atbcnt %x, atbsts %x\n", atbadru, atbadrl, atbcnt, atbsts); printf("btbadru %x, btbadrl %x, btbcnt %x, btbsts %x\n", btbadru, btbadrl, btbcnt, btbsts); printf("}\n"); } #endif
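One detail worth calling out in the driver above: cl_clkdiv, cl_clknum and cl_clkrxtimeout all scan the same cl_clocks baud table and memoize the index of the last hit in the static clknum, falling back to the CL_SAFE_CLOCK (9600) entry for unknown speeds. Note that cl_clknum's loop body compares cl_clocks[clknum] where the other two compare cl_clocks[i], so its scan apparently never advances. The Python sketch below (a trimmed table, illustrative names) shows the lookup-with-cache the three routines evidently intend:

# a few rows of the driver's cl_clocks table: (speed, divisor, clock, rx_timeout)
CL_CLOCKS = [
    (38400, 0x40, 0, 0x01),
    (19200, 0x81, 0, 0x02),
    ( 9600, 0x40, 1, 0x04),
    ( 4800, 0x81, 1, 0x08),
    ( 2400, 0x40, 2, 0x10),
    (  300, 0x81, 3, 0x80),
]
CL_SAFE_CLOCK = 2          # index of the 9600 entry in this trimmed table

_clknum = 0                # cache of the last matching row, like the C static

def cl_clk_lookup(speed):
    """Return (divisor, clock, rx_timeout) for a baud rate, with last-hit cache."""
    global _clknum
    if CL_CLOCKS[_clknum][0] == speed:          # fast path: same speed as last call
        row = CL_CLOCKS[_clknum]
    else:
        for i, row in enumerate(CL_CLOCKS):     # linear scan; compare row i, not the cache
            if row[0] == speed:
                _clknum = i
                break
        else:
            row = CL_CLOCKS[CL_SAFE_CLOCK]      # unknown speed: fall back to 9600

    return row[1], row[2], row[3]

assert cl_clk_lookup(19200) == (0x81, 0, 0x02)
assert cl_clk_lookup(12345) == cl_clk_lookup(9600)   # unknown speed falls back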
It's been a week chock-full of goodies to capture in GIF form (and there was occasionally some hockey in there, as well) - but before we get to this week's set, it's time to unveil the one that ran away with the competition seven days ago. Ladies and gentlemen, we give you... Joel Ward, hat thief: "Hey, I was wearin' that!" Ah, well. So what's in the treasure chest this time around? First out of the gate is, appropriately, a first - otherwise known as the first time rookie Nate Schmidt found the back of the net: It's not just the fact that he scored his first career NHL goal (although that in and of itself is pretty flippin' great). Kicking this up a notch is the celebratory bear hug from the guy who appears to give the best embraces on the team - fellow blueliner and hugger extraordinaire, John Carlson. N'aww. So that's a pretty fun goal celebration. After all, first goal joy is the best goal joy. ...or is it? Observe, if you will, one Alex Ovechkin, whose fourth tally of the night proved to be the late, game-tying, overtime-forcing, thrill-inducing goal. He runs. He leaps. He... toe-picks and goes flying and belly flops onto the ice. Heeeeee's okay, folks! Swimming, er, rolling right along... As we head into the middle of December, there's a marked difference in our surroundings - holiday cheer takes over, in the form of smiles, songs and sparkly lights. And sometimes those three things combine to form the greatest holiday card EVER: So. Much. Happening. The captain loses a jingle bell because he is just too strong for mere musical instruments... but does it stop him? No, he just keeps right on jingling! And then there's giant Tom Wilson with the world's tiniest ukulele. And the little elf hat on Mike Green's head as he wails on the cowbell. And Joel Ward rocking out on his pink saxophone. And... whatever is happening with Troy Brouwer, because every bit of it is just horribly wonderful. ...yes. Well, then. Switching gears from the goofy to the great, we head over to Sunday's showdown at Madison Square Garden. With the Caps already up 2-0, Mikhail Grabovski draws a penalty shot, and heads to center ice to face off against the King, Henrik Lundqvist. And then the guy with the silky mitts and sick moves does... this: Wow. Pretty sure Lundqvist is still looking behind him in disbelief after that one. Speaking of Lundqvist, remember how he had that insane, frustrating, heartbreaking and downright annoying shutout streak against the Caps, dating back to last May? Remember how it stretched out over 200 minutes? And remember how there is one guy on the Caps who always seems to score on him? Take it away, Jason Chimera: Streak: busted. Heavy is the head that wears the crown... when he also wears a Rangers' jersey. With a home-and-home coming up against the ever-charming Philadelphia Flyers, it seems only fitting to wrap things up with one of the great moments in Caps-Flyers history - so for this week's Flashback Friday moment, come along with us to April 16, 1988. Overtime. Game 7. And the puck on Dale Hunter's stick... A great moment in the rivalry, a great moment in franchise history, and almost enough of a good memory to erase the stench of "Hunter hockey"... almost. Now go vote for your favorite from the past week, and be sure to check back next Friday to see who agreed with you!
<filename>src/data-model/value.ts import { DateTime, Duration } from "luxon"; import { DEFAULT_QUERY_SETTINGS, QuerySettings } from "settings"; import { getFileTitle, normalizeHeaderForLink, renderMinimalDuration } from "util/normalize"; /** Shorthand for a mapping from keys to values. */ export type DataObject = { [key: string]: Literal }; /** The literal types supported by the query engine. */ export type LiteralType = | "boolean" | "number" | "string" | "date" | "duration" | "link" | "array" | "object" | "html" | "function" | "null"; /** The raw values that a literal can take on. */ export type Literal = | boolean | number | string | DateTime | Duration | Link | Array<Literal> | DataObject | HTMLElement | Function | null; /** A grouping on a type which supports recursively-nested groups. */ export type GroupElement<T> = { key: Literal; rows: Grouping<T> }; export type Grouping<T> = T[] | GroupElement<T>[]; /** Maps the string type to it's external, API-facing representation. */ export type LiteralRepr<T extends LiteralType> = T extends "boolean" ? boolean : T extends "number" ? number : T extends "string" ? string : T extends "duration" ? Duration : T extends "date" ? DateTime : T extends "null" ? null : T extends "link" ? Link : T extends "array" ? Array<Literal> : T extends "object" ? Record<string, Literal> : T extends "html" ? HTMLElement : T extends "function" ? Function : any; /** A wrapped literal value which can be switched on. */ export type WrappedLiteral = | LiteralWrapper<"string"> | LiteralWrapper<"number"> | LiteralWrapper<"boolean"> | LiteralWrapper<"date"> | LiteralWrapper<"duration"> | LiteralWrapper<"link"> | LiteralWrapper<"array"> | LiteralWrapper<"object"> | LiteralWrapper<"html"> | LiteralWrapper<"function"> | LiteralWrapper<"null">; export interface LiteralWrapper<T extends LiteralType> { type: T; value: LiteralRepr<T>; } export namespace Values { /** Convert an arbitary value into a reasonable, Markdown-friendly string if possible. */ export function toString( field: any, setting: QuerySettings = DEFAULT_QUERY_SETTINGS, recursive: boolean = false ): string { let wrapped = wrapValue(field); if (!wrapped) return "null"; switch (wrapped.type) { case "string": return wrapped.value; case "number": case "boolean": case "html": case "null": return "" + wrapped.value; case "link": return wrapped.value.markdown(); case "function": return "<function>"; case "array": let result = ""; if (recursive) result += "["; result += wrapped.value.map(f => toString(f, setting, true)).join(", "); if (recursive) result += "]"; return result; case "object": return ( "{ " + Object.entries(wrapped.value) .map(e => e[0] + ": " + toString(e[1], setting, true)) .join(", ") + " }" ); case "date": if (wrapped.value.second == 0 && wrapped.value.hour == 0 && wrapped.value.minute == 0) { return wrapped.value.toFormat(setting.defaultDateFormat); } return wrapped.value.toFormat(setting.defaultDateTimeFormat); case "duration": return renderMinimalDuration(wrapped.value); } } /** Wrap a literal value so you can switch on it easily. 
*/ export function wrapValue(val: Literal): WrappedLiteral | undefined { if (isNull(val)) return { type: "null", value: val }; else if (isNumber(val)) return { type: "number", value: val }; else if (isString(val)) return { type: "string", value: val }; else if (isBoolean(val)) return { type: "boolean", value: val }; else if (isDuration(val)) return { type: "duration", value: val }; else if (isDate(val)) return { type: "date", value: val }; else if (isHtml(val)) return { type: "html", value: val }; else if (isArray(val)) return { type: "array", value: val }; else if (isLink(val)) return { type: "link", value: val }; else if (isFunction(val)) return { type: "function", value: val }; else if (isObject(val)) return { type: "object", value: val }; else return undefined; } /** Recursively map complex objects at the leaves. */ export function mapLeaves(val: Literal, func: (t: Literal) => Literal): Literal { if (isObject(val)) { let result: DataObject = {}; for (let [key, value] of Object.entries(val)) result[key] = mapLeaves(value, func); return result; } else if (isArray(val)) { let result: Literal[] = []; for (let value of val) result.push(mapLeaves(value, func)); return result; } else { return func(val); } } /** Compare two arbitrary JavaScript values. Produces a total ordering over ANY possible dataview value. */ export function compareValue(val1: Literal, val2: Literal, linkNormalizer?: (link: string) => string): number { // Handle undefined/nulls first. if (val1 === undefined) val1 = null; if (val2 === undefined) val2 = null; if (val1 === null && val2 === null) return 0; else if (val1 === null) return 1; else if (val2 === null) return -1; // A non-null value now which we can wrap & compare on. let wrap1 = wrapValue(val1); let wrap2 = wrapValue(val2); if (wrap1 === undefined && wrap2 === undefined) return 0; else if (wrap1 === undefined) return 1; else if (wrap2 === undefined) return -1; if (wrap1.type != wrap2.type) return wrap1.type.localeCompare(wrap2.type); switch (wrap1.type) { case "string": return wrap1.value.localeCompare(wrap2.value as string); case "number": if (wrap1.value < (wrap2.value as number)) return -1; else if (wrap1.value == (wrap2.value as number)) return 0; return 1; case "null": return 0; case "boolean": if (wrap1.value == wrap2.value) return 0; else return wrap1.value ? 1 : -1; case "link": let link1 = wrap1.value; let link2 = wrap2.value as Link; let normalize = linkNormalizer ?? ((x: string) => x); // We can't compare by file name or display, since that would break link equality. Compare by path. let pathCompare = normalize(link1.path).localeCompare(normalize(link2.path)); if (pathCompare != 0) return pathCompare; // Then compare by type. let typeCompare = link1.type.localeCompare(link2.type); if (typeCompare != 0) return typeCompare; // Then compare by subpath existence. if (link1.subpath && !link2.subpath) return 1; if (!link1.subpath && link2.subpath) return -1; if (!link1.subpath && !link2.subpath) return 0; // Since both have a subpath, compare by subpath. return (link1.subpath ?? "").localeCompare(link2.subpath ?? ""); case "date": return wrap1.value < (wrap2.value as DateTime) ? -1 : wrap1.value.equals(wrap2.value as DateTime) ? 0 : 1; case "duration": return wrap1.value < (wrap2.value as Duration) ? -1 : wrap1.value.equals(wrap2.value as Duration) ? 
0 : 1; case "array": let f1 = wrap1.value; let f2 = wrap2.value as any[]; for (let index = 0; index < Math.min(f1.length, f2.length); index++) { let comp = compareValue(f1[index], f2[index]); if (comp != 0) return comp; } return f1.length - f2.length; case "object": let o1 = wrap1.value; let o2 = wrap2.value as Record<string, any>; let k1 = Array.from(Object.keys(o1)); let k2 = Array.from(Object.keys(o2)); k1.sort(); k2.sort(); let keyCompare = compareValue(k1, k2); if (keyCompare != 0) return keyCompare; for (let key of k1) { let comp = compareValue(o1[key], o2[key]); if (comp != 0) return comp; } return 0; case "html": return 0; case "function": return 0; } } /** Find the corresponding Dataveiw type for an arbitrary value. */ export function typeOf(val: any): LiteralType | undefined { return wrapValue(val)?.type; } /** Determine if the given value is "truthy" (i.e., is non-null and has data in it). */ export function isTruthy(field: Literal): boolean { let wrapped = wrapValue(field); if (!wrapped) return false; switch (wrapped.type) { case "number": return wrapped.value != 0; case "string": return wrapped.value.length > 0; case "boolean": return wrapped.value; case "link": return !!wrapped.value.path; case "date": return wrapped.value.toMillis() != 0; case "duration": return wrapped.value.as("seconds") != 0; case "object": return Object.keys(wrapped.value).length > 0; case "array": return wrapped.value.length > 0; case "null": return false; case "html": return true; case "function": return true; } } /** Deep copy a field. */ export function deepCopy<T extends Literal>(field: T): T { if (field === null || field === undefined) return field; if (Values.isArray(field)) { return ([] as Literal[]).concat(field.map(v => deepCopy(v))) as T; } else if (Values.isObject(field)) { let result: Record<string, Literal> = {}; for (let [key, value] of Object.entries(field)) result[key] = deepCopy(value); return result as T; } else { return field; } } export function isString(val: any): val is string { return typeof val == "string"; } export function isNumber(val: any): val is number { return typeof val == "number"; } export function isDate(val: any): val is DateTime { return val instanceof DateTime; } export function isDuration(val: any): val is Duration { return val instanceof Duration; } export function isNull(val: any): val is null | undefined { return val === null || val === undefined; } export function isArray(val: any): val is any[] { return Array.isArray(val); } export function isBoolean(val: any): val is boolean { return typeof val === "boolean"; } export function isLink(val: any): val is Link { return val instanceof Link; } export function isHtml(val: any): val is HTMLElement { if (typeof HTMLElement !== "undefined") { return val instanceof HTMLElement; } else { return false; } } export function isObject(val: any): val is Record<string, any> { return ( typeof val == "object" && !isHtml(val) && !isArray(val) && !isDuration(val) && !isDate(val) && !isLink(val) ); } export function isFunction(val: any): val is Function { return typeof val == "function"; } } /////////////// // Groupings // /////////////// export namespace Groupings { /** Determines if the given group entry is a standalone value, or a grouping of sub-entries. */ export function isElementGroup<T>(entry: T | GroupElement<T>): entry is GroupElement<T> { return Values.isObject(entry) && Object.keys(entry).length == 2 && "key" in entry && "rows" in entry; } /** Determines if the given array is a grouping array. 
*/ export function isGrouping<T>(entry: T[] | GroupElement<T>[]): entry is GroupElement<T>[] { for (let element of entry) if (!isElementGroup(element)) return false; return true; } } ////////// // LINK // ////////// /** The Obsidian 'link', used for uniquely describing a file, header, or block. */ export class Link { /** The file path this link points to. */ public path: string; /** The display name associated with the link. */ public display?: string; /** The block ID or header this link points to within a file, if relevant. */ public subpath?: string; /** Is this link an embedded link (!)? */ public embed: boolean; /** The type of this link, which determines what 'subpath' refers to, if anything. */ public type: "file" | "header" | "block"; /** Create a link to a specific file. */ public static file(path: string, embed: boolean = false, display?: string) { return new Link({ path, embed, display, subpath: undefined, type: "file", }); } /** Create a link to a specific file and header in that file. */ public static header(path: string, header: string, embed?: boolean, display?: string) { // Headers need to be normalized to alpha-numeric & with extra spacing removed. return new Link({ path, embed, display, subpath: normalizeHeaderForLink(header), type: "header", }); } /** Create a link to a specific file and block in that file. */ public static block(path: string, blockId: string, embed?: boolean, display?: string) { return new Link({ path, embed, display, subpath: blockId, type: "block", }); } public static fromObject(object: Record<string, any>) { return new Link(object); } private constructor(fields: Partial<Link>) { Object.assign(this, fields); } /** Checks for link equality (i.e., that the links are pointing to the same exact location). */ public equals(other: Link): boolean { if (other == undefined || other == null) return false; return this.path == other.path && this.type == other.type && this.subpath == other.subpath; } /** Convert this link to it's markdown representation. */ public toString(): string { return this.markdown(); } /** Convert this link to a raw object which is serialization-friendly. */ public toObject(): Record<string, any> { return { path: this.path, type: this.type, subpath: this.subpath, display: this.display, embed: this.embed }; } /** Update this link with a new path. */ public withPath(path: string) { return new Link(Object.assign({}, this, { path })); } /** Return a new link which points to the same location but with a new display value. */ public withDisplay(display?: string) { return new Link(Object.assign({}, this, { display })); } /** Convert a file link into a link to a specific header. */ public withHeader(header: string) { return Link.header(this.path, header, this.embed, this.display); } /** Convert any link into a link to its file. */ public toFile() { return Link.file(this.path, this.embed, this.display); } /** Convert this link into an embedded link. */ public toEmbed(): Link { if (this.embed) { return this; } else { let link = new Link(this); link.embed = true; return link; } } /** Convert this link to markdown so it can be rendered. */ public markdown(): string { let result = (this.embed ? "!" 
: "") + "[[" + this.path; if (this.type == "header") result += "#" + this.subpath; else if (this.type == "block") result += "#^" + this.subpath; if (this.display) { result += "|" + this.display; } else { result += "|" + getFileTitle(this.path); if (this.type == "header" || this.type == "block") result += " > " + this.subpath; } result += "]]"; return result; } /** The stripped name of the file this link points to. */ public fileName(): string { return getFileTitle(this.path).replace(".md", ""); } }
import * as React from 'react';
import { DjeetaState } from '../../../states/djeetaState';
import { CurrentAbilityActions } from '../../../containers/Djeeta/currentAbilityContainer';
import { createStyles, Theme, WithStyles } from '@material-ui/core/styles';
import Paper from '@material-ui/core/Paper';
import Typography from '@material-ui/core/Typography';
import Divider from '@material-ui/core/Divider';
import ListItem from '@material-ui/core/ListItem';
import ListItemText from '@material-ui/core/ListItemText';
import tmpIcon from '../../../images/tmpIcon.png';

export const styles = (theme: Theme) =>
    createStyles({
        root: {
            flexGrow: 0,
        },
        paper: {},
        title: {
            paddingTop: theme.spacing.unit * 2,
            paddingLeft: theme.spacing.unit * 2,
        },
        textField: {
            marginLeft: theme.spacing.unit,
            marginRight: theme.spacing.unit,
            width: 100,
        },
        img: {
            margin: 'auto',
            display: 'block',
            maxWidth: '100%',
            maxHeight: '100%',
        },
    });

interface OwnProps {}
interface StylesProps extends WithStyles<typeof styles> {}

type CurrentAbilityProps = OwnProps & DjeetaState & CurrentAbilityActions & StylesProps;

export const CurrentAbility: React.SFC<any> = (props: CurrentAbilityProps) => {
    const { classes } = props;
    return (
        <Paper className={classes.paper}>
            <Typography color="textPrimary" gutterBottom className={classes.title}>
                アビリティ
            </Typography>
            <Divider light />
            {props.abilityList.map(({ icon, name, secondary }) => (
                // The list key must sit on the element created inside map().
                <AbilityItem key={name} icon={icon} name={name} secondary={secondary} />
            ))}
        </Paper>
    );
};

interface AbilityItemProps {
    icon: string;
    name: string;
    secondary: string;
    isChangingAbility: boolean;
    onClick(v: string): void;
}

// TODO: share this component with AbilityList.tsx
const AbilityItem: React.SFC<any> = (props: AbilityItemProps & CurrentAbilityActions) => {
    // `icon` is unused until real icons land; destructure only what is rendered.
    const { name, secondary } = props;
    return (
        <React.Fragment>
            <ListItem>
                <img src={tmpIcon} alt="icon" />
                <ListItemText primary={name} secondary={secondary} />
            </ListItem>
        </React.Fragment>
    );
};
<reponame>vpnachev/kubernikus<filename>pkg/controller/metrics/routegc.go<gh_stars>100-1000 package metrics import "github.com/prometheus/client_golang/prometheus" func init() { prometheus.MustRegister( OrphanedRoutesTotal, RouteGCFailedOperationsTotal, ) OrphanedRoutesTotal.With(prometheus.Labels{}).Add(0) RouteGCFailedOperationsTotal.With(prometheus.Labels{}).Add(0) } var OrphanedRoutesTotal = prometheus.NewCounterVec( prometheus.CounterOpts{ Namespace: "kubernikus", Subsystem: "routegc", Name: "orphaned_routes_total", Help: "Number of orphaned routes removed from OpenStack router", }, []string{}, ) var RouteGCFailedOperationsTotal = prometheus.NewCounterVec( prometheus.CounterOpts{ Namespace: "kubernikus", Subsystem: "routegc", Name: "failed_operation_total", Help: "Number of failed operations.", }, []string{}, )
import itertools

# Assumed sources for the helpers used below; the original snippet does not
# show its imports. Swap in your own C-index implementation if needed.
from IPython.display import display
from lifelines.utils import concordance_index


def holdout_grid_search(clf, X_train_hp, y_train_hp, X_val_hp, y_val_hp, hyperparam, verbose=False):
    """Grid search over `hyperparam` (a dict mapping names to lists of values),
    scoring each fitted estimator by concordance index on the holdout split."""
    best_estimator = None
    best_hyperparam = {}
    best_score = 0.0

    # Cartesian product of all hyperparameter values, as a list of tuples.
    hyper_param_l = list(hyperparam.values())
    combination_l_of_t = list(itertools.product(*hyper_param_l))

    # Convert each tuple back into a {name: value} dict.
    combination_l_of_d = []
    for val_tuple in combination_l_of_t:
        param_d = {}
        for i, k in enumerate(hyperparam):
            param_d[k] = val_tuple[i]
        combination_l_of_d.append(param_d)

    # Fit one estimator per combination and keep the best-scoring one.
    for param_d in combination_l_of_d:
        estimator = clf(**param_d)
        estimator.fit(X_train_hp, y_train_hp)
        preds = estimator.predict_proba(X_val_hp)
        estimator_score = concordance_index(y_val_hp, preds[:, 1])
        if estimator_score > best_score:
            best_score = estimator_score
            best_estimator = estimator
            best_hyperparam = param_d

    if verbose:
        print("hyperparam:")
        display(hyperparam)
        print("hyper_param_l")
        display(hyper_param_l)
        print("combination_l_of_t")
        display(combination_l_of_t)
        print("combination_l_of_d")
        display(combination_l_of_d)
        print("best_hyperparam")
        display(best_hyperparam)
        print(f"best_score: {best_score:.4f}")

    return best_estimator, best_hyperparam
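A minimal usage sketch (the estimator, data splits, and grid below are stand-ins; any sklearn-style class exposing fit and predict_proba works):

# Illustrative call, assuming X_train/y_train/X_val/y_val already exist.
from sklearn.ensemble import RandomForestClassifier

grid = {"n_estimators": [100, 150], "max_depth": [3, 5]}
best_rf, best_params = holdout_grid_search(
    RandomForestClassifier, X_train, y_train, X_val, y_val, grid
)
print(best_params)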
Some dog owners living in a Burnaby, B.C., apartment building are raising a stink after they were told to provide a stool sample from their pets or face eviction. The landlord of the building in the 7400 block of 14th Avenue issued the letters to about 30 dog owners early Sunday morning, after somebody's pet left an anonymous sample in the building's stairwell — for the second time. The landlord is looking for the offending dog, and if tenants don't submit a solid sample for DNA testing, it would be an "admission of guilt" and a "reason for immediate eviction," the letter said. A company called PooPrints offers DNA matching services to apartment and condominium communities as a way to crack down on rogue pooches. Dog owners vow to fight Tenant Daniel Charlie got one of the letters, along with a plastic bag for the sample. "They even put if you deny the sample, they can evict you, How can they do that? I don't know, especially for dog poop," said Charlie. Tenant Claude Paulin-Dupere said the landlord is asking for too much. "I am going to fight this until I can't no more. I think it is a pretty big invasion of privacy." The Tenant Resource & Advisory Centre said the owner of the offending dog could be evicted if caught, but unless there was specific language in the lease, a tenant cannot be evicted for not providing a sample. Lisa Mackie, strata property and residential tenancy lawyer, agreed. "It is frustrating for landlords when it's evident one tenant or more are breaching terms of a tenancy agreement ... Unfortunately the onus is on the landlord to establish that a tenant or multiple tenants are in breach of their tenancy agreement," said Mackie. About 30 tenants in the Burnaby building received the letter threatening them with eviction if they don't hand over a stool sample from their dogs. (CBC)
#!/usr/bin/env python # Copyright (c) 2009, David Buxton <[email protected]> # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in the # documentation and/or other materials provided with the distribution. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS # IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED # TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED # TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF # LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Tools to convert between Python datetime instances and Microsoft times. """ from datetime import datetime, timedelta, tzinfo from calendar import timegm # http://support.microsoft.com/kb/167296 # How To Convert a UNIX time_t to a Win32 FILETIME or SYSTEMTIME EPOCH_AS_FILETIME = 116444736000000000 # January 1, 1970 as MS file time HUNDREDS_OF_NANOSECONDS = 10000000 ZERO = timedelta(0) HOUR = timedelta(hours=1) class UTC(tzinfo): """UTC""" def utcoffset(self, dt): return ZERO def tzname(self, dt): return "UTC" def dst(self, dt): return ZERO utc = UTC() def dt_to_filetime(dt): """Converts a datetime to Microsoft filetime format. If the object is time zone-naive, it is forced to UTC before conversion. >>> "%.0f" % dt_to_filetime(datetime(2009, 7, 25, 23, 0)) '128930364000000000' >>> dt_to_filetime(datetime(1970, 1, 1, 0, 0, tzinfo=utc)) 116444736000000000L >>> dt_to_filetime(datetime(1970, 1, 1, 0, 0)) 116444736000000000L """ if (dt.tzinfo is None) or (dt.tzinfo.utcoffset(dt) is None): dt = dt.replace(tzinfo=utc) return EPOCH_AS_FILETIME + (timegm(dt.timetuple()) * HUNDREDS_OF_NANOSECONDS) def filetime_to_dt(ft): """Converts a Microsoft filetime number to a Python datetime. The new datetime object is time zone-naive but is equivalent to tzinfo=utc. >>> filetime_to_dt(116444736000000000) datetime.datetime(1970, 1, 1, 0, 0) >>> filetime_to_dt(128930364000000000) datetime.datetime(2009, 7, 25, 23, 0) """ return datetime.utcfromtimestamp((ft - EPOCH_AS_FILETIME) / HUNDREDS_OF_NANOSECONDS) if __name__ == "__main__": import doctest doctest.testmod()
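The module is Python 2-era (the `L` long-integer suffixes in the doctests are a syntax error under Python 3), but both functions work unchanged on Python 3. A quick round-trip check using the values from the doctests:

# Python 3 round-trip sketch (long-literal suffixes dropped).
from datetime import datetime

ft = dt_to_filetime(datetime(2009, 7, 25, 23, 0))
assert ft == 128930364000000000
assert filetime_to_dt(ft) == datetime(2009, 7, 25, 23, 0)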
<filename>imdb_preprocess.py import argparse import glob import math import os import pickle import sys import cv2 import face_alignment import numpy as np import torch from transform3d import euler face_model_path = ( '/opt/intel/openvino/deployment_tools/intel_models' '/face-detection-adas-0001/FP32/face-detection-adas-0001.xml' ) def best_fit_transform(A, B): ''' Calculates the least-squares best-fit transform that maps corresponding points A to B in m spatial dimensions Input: A: Nxm numpy array of corresponding points B: Nxm numpy array of corresponding points Returns: T: (m+1)x(m+1) homogeneous transformation matrix that maps A on to B R: mxm rotation matrix t: mx1 translation vector ''' assert A.shape == B.shape # get number of dimensions m = A.shape[1] # translate points to their centroids centroid_A = np.mean(A, axis=0) centroid_B = np.mean(B, axis=0) AA = A - centroid_A BB = B - centroid_B # rotation matrix H = np.dot(AA.T, BB) U, S, Vt = np.linalg.svd(H) R = np.dot(Vt.T, U.T) # special reflection case if np.linalg.det(R) < 0: Vt[m-1,:] *= -1 R = np.dot(Vt.T, U.T) # translation t = centroid_B.T - np.dot(R,centroid_A.T) # homogeneous transformation T = np.identity(m+1) T[:m, :m] = R T[:m, m] = t return T, R, t def norm3d_t(landmark, ref): t, _, _ = best_fit_transform(landmark, ref) #print(t) n = np.dot(t[0:3, 0:3], landmark.T).T n += t[:3, 3] return n.astype(np.float32), t def parse_args(): parser = argparse.ArgumentParser() parser.add_argument('--data-dir', required=True) parser.add_argument('--limit', type=int, default=0) parser.add_argument('--output-dir', required=True) parser.add_argument('--threshold', default=0.5, type=float) parser.add_argument('--min-size', default=100, type=int) return parser.parse_args() def main(): args = parse_args() min_face_size = args.min_size min_box_diagonal = int(math.sqrt(2 * (min_face_size ** 2))) print_fun('List files...') image_paths = glob.glob(os.path.join(args.data_dir, '**/*.jpg')) print_fun(f'Done list files: {len(image_paths)}') print_fun('Load face detect driver...') device = 'cuda' if torch.cuda.is_available() else 'cpu' fa3d = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, device=device) print_fun('Done loading.') landmark_ref = np.load('./landmark_ref.npy') processed = 0 threshold = 0.2 landmarks = {} boxes = {} if not os.path.exists(args.output_dir): os.makedirs(args.output_dir) for i, path in enumerate(image_paths): if i % 100 == 0: print_fun(f'Progress {i / len(image_paths) * 100:.2f} %.') print_fun(f'Processed {processed} images, looked: {i}.') try: with open(path, 'rb') as f: raw_img = f.read() frame = cv2.imdecode(np.frombuffer(raw_img, np.uint8), cv2.IMREAD_COLOR) frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) detected_faces = fa3d.face_detector.detect_from_image(frame_rgb) except Exception as e: print_fun(f'ERROR: {path}, {e}; Skip') continue if len(detected_faces) != 1: continue box = detected_faces[0] x1 = box[0] / frame.shape[1] x2 = box[2] / frame.shape[1] y1 = box[1] / frame.shape[0] y2 = box[3] / frame.shape[0] l3d = fa3d.get_landmarks(frame, detected_faces=[box]) scale = (box[2] - box[0] + box[3] - box[1]) / 195 if l3d is None or len(l3d) < 1: continue l3d = l3d[0] landmark_3d = l3d[:, :].astype(np.float32) landmark_3d[:, 0] = (landmark_3d[:, 0]) / frame.shape[1] landmark_3d[:, 1] = (landmark_3d[:, 1]) / frame.shape[0] landmark_3d[:, 2] = landmark_3d[:, 2] / (200 * scale) landmark_3d[:, 0:2] = np.clip(landmark_3d[:, 0:2], 0, 1) _, land_transform = norm3d_t(landmark_3d.copy(), landmark_ref) a1, a2, 
a3 = euler.mat2euler(land_transform) # print(a1, a2, a3) # cv2.imshow('Image', frame) # cv2.waitKey(0) is_frontal = abs(a1) < threshold and abs(a2) < threshold and abs(a3) < threshold if not is_frontal: continue dirname = path.split('/')[-2] basename = os.path.basename(path) save_path = os.path.join(args.output_dir, dirname, basename) if not os.path.exists(os.path.join(args.output_dir, dirname)): os.makedirs(os.path.join(args.output_dir, dirname)) with open(save_path, 'wb') as f: f.write(raw_img) landmarks[f'{dirname}/{basename}'] = landmark_3d boxes[f'{dirname}/{basename}'] = np.array([x1, y1, x2, y2]) processed += 1 if args.limit != 0 and processed >= args.limit: break with open(os.path.join(args.output_dir, 'landmarks.pkl'), 'wb') as f: pickle.dump(landmarks, f) with open(os.path.join(args.output_dir, 'boxes.pkl'), 'wb') as f: pickle.dump(boxes, f) print_fun(f'Processed {processed} images, looked: {i}') def print_fun(s): print(s) sys.stdout.flush() def box_diagonal(box): w = box[2] - box[0] h = box[3] - box[1] return math.sqrt(w ** 2 + h ** 2) if __name__ == '__main__': main()
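The script is driven entirely by argparse; note that the parsed --threshold flag is never read (the code hardcodes threshold = 0.2 for the pose check). An example invocation, with placeholder paths:

# Example run (paths are placeholders):
#   python imdb_preprocess.py --data-dir ./imdb_crop --output-dir ./imdb_frontal \
#       --limit 10000 --min-size 100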
def otoshidama(N, Y):
    # Work in units of 1000 yen; Y is guaranteed to be a multiple of 1000.
    y = Y // 1000
    # a, b, c = counts of 10000-, 5000-, and 1000-yen bills.
    for a in range(min(N, y // 10), -1, -1):
        for b in range(N - a, -1, -1):
            c = N - a - b
            if 10 * a + 5 * b + c == y:
                return a, b, c
    return -1, -1, -1


if __name__ == '__main__':
    N, Y = map(int, input().split())
    a, b, c = otoshidama(N, Y)
    print(a, b, c)
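For example, with nine bills summing to 45,000 yen the search settles on four 10000-yen and five 1000-yen bills:

# 4*10000 + 0*5000 + 5*1000 == 45000, using 9 bills in total.
print(otoshidama(9, 45000))  # (4, 0, 5)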
<filename>LexicalAnalyzer.cpp /*********************************************************** * CPSC323 - RAT32S - Lexical Analyzer Component * --------------------------------------------------------- * California State University, Fullerton (CSUF) * CPSC 323 - Spring 2017 * Prof <NAME> * Assignment #1 - Lexical Analyzer * --------------------------------------------------------- * Authors: <NAME>, <NAME>, <NAME> * --------------------------------------------------------- * LexicalAnalyzer.cpp * * CLASS LexicalAnalyzer * - This class contains a lexer function which reads * from a file and determines lexeme and token ***********************************************************/ //Read OverallStructure.txt for Guide #include "LexicalAnalyzer.h" //Enumerate the states/columns to make it easier to read enum States { letter = 1, digit, decimal, dollar, space, punct, reject = 11, spaceReject, punctReject }; //This is the transistion table for all both finite state machines. //This should be able to tell us if it is an Integer, Identifier/KEYWORD, or Real //The first row isn't used by this program and is there for clarity int transistionTable[14][7] = { { 0, letter, digit, decimal, dollar, space, punct }, { 1, 3, 2, 11, 11, 12, 13 }, //Starting State { 2, 11, 5, 4, 11, 12, 13 }, //Accept Integer { 3, 8, 7, 11, 6, 12, 13 }, //Accepting Identifier { 4, 11, 9, 11, 11, 12, 13 }, //Accept Integer { 5, 11, 5, 4, 11, 12, 13 }, { 6, 11, 11, 11, 11, 12, 13 }, //Accept Identifier { 7, 8, 7, 11, 6, 12, 13 }, { 8, 8, 7, 11, 6, 12, 13 }, //Accept Identifier/KEYWORD - Check for KEYWORD if last state { 9, 11, 10, 11, 11, 12, 13 }, //Accept Real { 10, 11, 10, 11, 11, 12, 13 }, { 11, 11, 11, 11, 11, 12, 13 }, { 12, 12, 12, 12, 12, 12, 12 }, { 13, 11, 11, 11, 11, 11, 13 } }; LexicalAnalyzer::LexicalAnalyzer() { //Initializing Hashtable of special case lexemes //OPERATORs: specialLexs.insert({ //OPERATORs: { "=", "OPERATOR" }, { "+", "OPERATOR" }, { "-", "OPERATOR" }, { "*", "OPERATOR" }, { "/", "OPERATOR" }, { "==", "OPERATOR" }, { "^=", "OPERATOR" }, { "<", "OPERATOR" }, { ">", "OPERATOR" }, { "=<", "OPERATOR" }, { "=>", "OPERATOR" }, //SEPERATORs { "%%", "SEPERATOR" }, { ":", "SEPERATOR" }, { "[", "SEPERATOR" }, { "]", "SEPERATOR" }, { ",", "SEPERATOR" }, { "{", "SEPERATOR" }, { "}", "SEPERATOR" }, { "(", "SEPERATOR" }, { ")", "SEPERATOR" }, { ";", "SEPERATOR" }, //KEYWORDs { "function", "KEYWORD" }, { "int", "KEYWORD" }, { "boolean", "KEYWORD" }, { "real", "KEYWORD" }, { "if", "KEYWORD" }, { "else", "KEYWORD" }, { "endif", "KEYWORD" }, { "return", "KEYWORD" }, { "put", "KEYWORD" }, { "get", "KEYWORD" }, { "while", "KEYWORD" }, { "true", "KEYWORD" }, { "false", "KEYWORD" }, }); } LexicalAnalyzer::~LexicalAnalyzer() { } ///////////////////////////////////////////////////////////////////////// //FUNCTION: Lexer //USE: Read the line from the source file and run each character through // the transistion table. It will then find out if the token is // a ketword, integer, real, etc... //@param line - (string) line taken from the sourcefile //////////////////////////////////////////////////////////////////////// int LexicalAnalyzer::lexer(string line) { //Starting state int curState = 1; //This will be used to go back a state if it is a space, tab, OPERATOR, etc... int prevState = 1; //Get the length of the line. This will be int lineLength = line.length(); // Column variable will be used for the transistion table int curCol = 0; //No use for this variable yet. Might use it to see if we found token yet or not. 
currentChar = ' '; token = ""; for (int i = 0; i < lineLength;) { //Get the character currentChar = line[i]; curCol = colNum(currentChar); prevState = curState; bool tokenFound = false; //Check the transistion table to see what state it currently is after getting the next column curState = transistionTable[curState][curCol]; //Still working on these down here. Will update with more info //Currently it is creating the token one character at a time if (curState != reject && curState != spaceReject && curState != punctReject) { token += currentChar; i++; } //This program can find most tokens with a space after it. Will need fine tuning and testing to complete else if (curState == spaceReject && curState != punctReject) { if (prevState == 8) { unordered_map <string, string>::iterator itr = specialLexs.find(token); if (itr != specialLexs.end()) { cout << itr->first << "\t\t" << itr->second << endl; tokenFound = true; } } if (tokenFound == false) { cout << token << "\t\t" << getLexemeName(prevState) << endl; } token = ""; curState = 1; i++; } //Operators/Separator isn't working yet. Im doing a few tests here to see which will work best else if (curState == punctReject) { if (prevState != punctReject && prevState != 1) { cout << token << "\t\t" << getLexemeName(curState) << endl; token = currentChar; } else if (curState == punctReject) { token += currentChar; i++; //int k = i; //currentChar = line[++k]; //curCol = colNum(currentChar); //prevState = curState; //curState = transistionTable[curState][curCol]; //if (curState == punctReject) { // token += currentChar; // cout << token << "\t\t" << getLexemeName(curState) << endl; // token = ""; // i++; //} //else { // token += currentChar; // cout << token << "\t\t" << getLexemeName(prevState) << endl; // token = ""; //} //if (curState == spaceReject) { // i++; //} //if (token.length() > 0) { // cout << token << "\t\t" << getLexemeName(prevState) << endl; // token = ""; //} } } } return 0; } int LexicalAnalyzer::colNum(char ch) { if(isdigit(ch)) { return digit; } else if(isalpha(ch)) { return letter; } else if(ch == '$') { return dollar; } else if (ch == '.') { return decimal; } else if(isspace(ch)) { return space; } else if (ispunct(ch)) { return punct; } } string LexicalAnalyzer::getLexemeName(int state) { if (state == 2 || state == 4) { return "INTEGER"; } else if (state == 6 || state == 3) { return "IDENTIFIER"; } else if (state == 8) { return "KEYWORD"; } else if (state == 9) { return "REAL"; } else if (state == 13) { return "OPERATOR"; } else return "UNKNOWN"; } //string LexicalAnalyzer::getLexeme() { // //} // //string LexicalAnalyzer::getToken() { // //} // //bool LexicalAnalyzer::isKEYWORD(string KEYWORD) { // //} // //bool LexicalAnalyzer::isOPERATOR(char ch) { // //} // //bool LexicalAnalyzer::isSEPERATOR(char ch) { // //} // //bool LexicalAnalyzer::idDFSM(char ch) { // //} // //int LexicalAnalyzer::DFSM(char state) { // //} // //bool LexicalAnalyzer::realDFSM(char ch) { // //}
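The class ships without a caller; below is a minimal driver sketch, assuming LexicalAnalyzer.h declares the interface used above (the trailing space is a workaround, since lexer() only flushes a token when it sees a separator):

// Hypothetical driver, not part of the original sources.
#include <fstream>
#include <iostream>
#include <string>
#include "LexicalAnalyzer.h"

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::cout << "usage: lexer <source-file>" << std::endl;
        return 1;
    }
    std::ifstream source(argv[1]);
    if (!source) {
        std::cout << "cannot open " << argv[1] << std::endl;
        return 1;
    }
    LexicalAnalyzer analyzer;
    std::string line;
    while (std::getline(source, line)) {
        analyzer.lexer(line + " "); // trailing space flushes the final token
    }
    return 0;
}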
use super::Error;
use derive_new::new;

pub type Result<T> = std::result::Result<T, Error>;

/// The result of the entire parse process.
#[derive(PartialEq, PartialOrd, Debug, Clone, new)]
pub struct EntireResult<T> {
    pub final_result: Result<T>,
    pub inner_errors: Vec<Error>,
}

impl<T> EntireResult<T> {
    pub fn aggregate_errors(self) -> Result<T> {
        match (self.final_result, self.inner_errors.into_iter().next()) {
            // The parsing process succeeded without any errors.
            (Ok(v), None) => Ok(v),
            // An error occurred during the parsing process.
            (Ok(_), Some(e)) => Err(e),
            // The parsing process failed with an error.
            (Err(e), _) => Err(e),
        }
    }
}
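A usage sketch: fold a parse that produced a value but also recorded a recoverable error back into a plain Result (how Error values are constructed is not shown in this file, so the parameter below is a stand-in):

// Illustrative only; `recovered` would be collected during parsing.
// `new` is generated by derive_new from the struct's field order.
fn finish_parse<T>(value: T, recovered: Vec<Error>) -> Result<T> {
    EntireResult::new(Ok(value), recovered).aggregate_errors()
}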
// newDateGreaterThanEqualsFunc - returns new DateGreaterThanEquals function.
func newDateGreaterThanEqualsFunc(key Key, values ValueSet) (Function, error) {
	v, err := valueToTime(dateGreaterThanEquals, values)
	if err != nil {
		return nil, err
	}

	return NewDateGreaterThanEqualsFunc(key, v)
}
What a difference six months makes. The political landscape is barely recognisable from the day Theresa May stood in Downing Street to announce a snap general election. The pundits expected a Tory landslide. The election would strengthen the government's hand in the Brexit negotiations and stabilise the country, they agreed. Labour faced oblivion. The country thought otherwise. May's claim to be "strong and stable" went from mantra to millstone. The Tories were found wanting on all of the big issues facing our country. Her government was shown to be strong only against the weak, unwilling to stand up to the powerful and the elite. The Tory manifesto was shredded almost overnight, as Conservative activists huddled in empty halls. Labour, meanwhile, relished the battle and seized the chance to campaign for our manifesto to end austerity and transform society. We didn't succeed in winning a Labour majority and we need to do more to build trust and support. But we achieved the biggest increase in Labour's vote since 1945, and the Conservatives lost their majority. The election campaign demonstrated the thirst for real change across Britain. We changed the debate and we have set the political agenda. The government has had to drop one damaging policy after another – from ditching free school meals to means-testing winter fuel payments for the elderly. The policies we campaigned for attracted support because they are what most people actually want. We have changed the political centre of gravity. We are now the political mainstream and have the chance to transform our country. To do that we must use our new strength inside and outside parliament to challenge the Conservatives at every step – and prepare to form a government to change Britain when the next election is called. The Tories are weak and divided. They have no mandate for what they are doing. Wherever we can, we will block their attempts to pay for tax cuts for the wealthy by making life worse for millions of people in the name of austerity. We are in a moment of great change – in the economy, politics and across the world. Our challenge is to marshal these forces of change for the real wealth creators – that means all of us. And our mission must be to work with the people of Britain to transfer wealth, power and opportunity to the many from the few. For the first time in a long time, we can provide a politics of hope and a politics for the people. We are now on a permanent campaign footing. Labour membership has almost tripled to 570,000 in the last two years. Contrast that with the Conservatives, who have few members and are backed by hedge funds and billionaires, not millions of working-class people. Since losing their majority, the Conservatives have gone for power grab after power grab to stay in office, propping themselves up with a tawdry £1bn deal with the Democratic Unionist party. The next Labour government will be different. To earn the trust of the people of our country, we must show that we mean it when we say we hand power back to the people. For the first time in years, we are handing our conference back to our members. Politics isn't some technical specialism for an elite. Politics is about us all coming together to decide our futures. Taking back power for the many should be fun and exhilarating. We aren't a lobbyists' playground. This will be a real conference whose decisions matter.
Labour is preparing for government and we are already deepening and extending the policies we set out in our election manifesto. It is Labour, not the Tories, who are prepared to tackle the long-term challenges facing our country, including automation, the threat to the environment, health costs and an ageing population. The disarray at the heart of government is painful to see: from the public sector pay cap to tuition fees, Tory ministers are flip-flopping and incoherent. When we brought these issues to the Commons recently, the government was forced to concede it did not have a majority and refused to vote. May is leading a weak government at a critical time for our country’s future. Fifteen months after the EU referendum, the government is still floundering over what to do about Brexit. It was evident in Florence on Friday that the prime minister is still no clearer about what our long-term relationship with the EU will look like. The only advance seems to be that she has listened to Labour and faced up to the reality that Britain needs a transition on the same basic terms to provide security for jobs and the economy, though May and her cabinet are spending more time negotiating with each other than with the EU. The Tories have made abundantly clear they want to use Brexit to deregulate and cut taxes for the wealthy. Labour is making the case instead for a jobs-first Brexit that prioritises access to European markets, uses powers returned from Brussels to invest and upgrade Britain’s economy, and protects and extends workers’ and consumer rights and environmental standards. We will not accept any Transatlantic Trade and Investment Partnership-style deregulation and investor protection deals with the Trump administration, which is what this Tory government wants to use Brexit for. No wonder it is relying on a power grab through their EU withdrawal bill in an attempt to bypass democracy and steamroller through their race-to-the-bottom approach. The commentators wrote us off in April. But the election and the months that followed have proved people do not have to accept the establishment’s rules of the game, or what they’re told is inevitable. We do not have to accept that millions of people are in work but in poverty. We do not have to accept rising homelessness, food banks and zero-hour contracts. We do not have to accept rip-off energy prices or austerity without end. As I said when I was first elected leader two years ago, things can, and will, change. We remain in opposition, for now. But we are a government in waiting. Politics has changed and Labour has driven that change.
Myoglobinuric renal failure after generalised tonic-clonic seizures. A case report. A 47-year-old man developed progressive renal impairment after a series of seven generalised tonic-clonic seizures. The patient did not become oliguric and because recovery of renal function was rapid, dialysis was not required. The diagnosis of myoglobin-induced renal failure was made on the basis of markedly elevated muscle enzyme values, and myoglobin in the urine.
/**
 * Contains integration tests (interaction with the Model) and unit tests for SortPersonCommand.
 */
public class SortPersonCommandTest {

    private Model model = new ModelManager(getTypicalPersonAddressBook(), getTypicalJobAddressBook(),
            new UserPrefs());
    private Model expectedModel = new ModelManager(getTypicalPersonAddressBook(), getTypicalJobAddressBook(),
            new UserPrefs());

    @Test
    public void execute_ascendingName_sortedSuccess() {
        PersonNameComparator comparator = new PersonNameComparator();
        String expectedMessage = MESSAGE_SUCCESS + comparator.toString() + "in ascending order.";
        SortPersonCommand command = new SortPersonCommand(comparator, true);
        expectedModel.updateSortedPersonList(comparator);
        assertCommandSuccess(command, model, expectedMessage, expectedModel);
        assertEquals(expectedModel.getSortedPersonList(), model.getSortedPersonList());
    }

    @Test
    public void execute_ascendingExpectedSalary_sortedSuccess() {
        PersonExpectedSalaryComparator comparator = new PersonExpectedSalaryComparator();
        String expectedMessage = MESSAGE_SUCCESS + comparator.toString() + "in ascending order.";
        SortPersonCommand command = new SortPersonCommand(comparator, true);
        expectedModel.updateSortedPersonList(comparator);
        assertCommandSuccess(command, model, expectedMessage, expectedModel);
        assertEquals(expectedModel.getSortedPersonList(), model.getSortedPersonList());
    }

    @Test
    public void execute_ascendingExperience_sortedSuccess() {
        PersonExperienceComparator comparator = new PersonExperienceComparator();
        String expectedMessage = MESSAGE_SUCCESS + comparator.toString() + "in ascending order.";
        SortPersonCommand command = new SortPersonCommand(comparator, true);
        expectedModel.updateSortedPersonList(comparator);
        assertCommandSuccess(command, model, expectedMessage, expectedModel);
        assertEquals(expectedModel.getSortedPersonList(), model.getSortedPersonList());
    }

    @Test
    public void execute_descendingDateOfApplication_sortedSuccess() {
        PersonDateOfApplicationComparator comparator = new PersonDateOfApplicationComparator();
        String expectedMessage = MESSAGE_SUCCESS + comparator.toString() + "in descending order.";
        SortPersonCommand command = new SortPersonCommand(comparator, false);
        expectedModel.updateSortedPersonList(comparator.reversed());
        assertCommandSuccess(command, model, expectedMessage, expectedModel);
        assertEquals(expectedModel.getSortedPersonList(), model.getSortedPersonList());
    }

    @Test
    public void execute_descendingBlacklist_sortedSuccess() {
        PersonBlackListComparator comparator = new PersonBlackListComparator();
        String expectedMessage = MESSAGE_SUCCESS + comparator.toString() + "in descending order.";
        SortPersonCommand command = new SortPersonCommand(comparator, false);
        expectedModel.updateSortedPersonList(comparator.reversed());
        assertCommandSuccess(command, model, expectedMessage, expectedModel);
        assertEquals(expectedModel.getSortedPersonList(), model.getSortedPersonList());
    }

    @Test
    public void equals() {
        PersonBlackListComparator blackListComparator = new PersonBlackListComparator();
        PersonDateOfApplicationComparator dateOfApplicationComparator = new PersonDateOfApplicationComparator();
        PersonExperienceComparator experienceComparator = new PersonExperienceComparator();
        PersonExpectedSalaryComparator expectedSalaryComparator = new PersonExpectedSalaryComparator();
        PersonNameComparator nameComparator = new PersonNameComparator();

        SortPersonCommand sortFirstCommand = new SortPersonCommand(blackListComparator, true);
        SortPersonCommand sortSecondCommand = new SortPersonCommand(dateOfApplicationComparator, false);
        SortPersonCommand sortThirdCommand = new SortPersonCommand(experienceComparator, true);
        SortPersonCommand sortFourthCommand = new SortPersonCommand(expectedSalaryComparator, true);
        SortPersonCommand sortFifthCommand = new SortPersonCommand(nameComparator, false);

        // same object -> returns true
        assertTrue(sortFirstCommand.equals(sortFirstCommand));

        // same values -> returns true
        SortPersonCommand sortFirstCommandCopy = new SortPersonCommand(blackListComparator, true);
        assertTrue(sortFirstCommand.equals(sortFirstCommandCopy));

        // different types -> returns false
        assertFalse(sortFirstCommand.equals(1));

        // null -> returns false
        assertFalse(sortFirstCommand.equals(null));

        // different commands -> returns false
        assertFalse(sortFirstCommand.equals(sortSecondCommand));
        assertFalse(sortFirstCommand.equals(sortThirdCommand));
        assertFalse(sortFirstCommand.equals(sortFourthCommand));
        assertFalse(sortFirstCommand.equals(sortFifthCommand));
    }
}
n = list(map(int, input().split()))
lista = []
for i in range(n[0]):
    a, m = input().split()
    lista.append([a, int(m), i + 1])

# Two stable sorts: score descending first, then name ascending; entries with
# the same name therefore keep their score order.
lista.sort(key=lambda x: x[1], reverse=True)
lista.sort(key=lambda x: x[0])

for i in range(n[0]):
    print(lista[i][2])
C4-dicarboxylate metabolons: Interaction of C4-dicarboxylate transporters of Escherichia coli with cytosolic enzymes and regulators Metabolons represent the structural organization of proteins for metabolic or regulatory pathways. Here the interaction of enzymes fumarase FumB and aspartase AspA with the C4-DC transporters DcuA and DcuB of Escherichia coli was tested by a bacterial two-hybrid (BACTH) assay in situ, or by co-chromatography (mSPINE). DcuB interacted strongly with FumB and AspA, and DcuA with AspA. The fumB-dcuB and the dcuA-aspA genes encoding the respective proteins are known for their colocalization on the genome and the production of co-transcripts. The data consistently suggest the formation of DcuB/FumB, DcuB/AspA and DcuA/AspA metabolons in fumarate respiration for the uptake of L-malate, or L-aspartate, conversion to fumarate and excretion of succinate after reduction. The DcuA/AspA metabolon catalyzes L-Asp uptake and fumarate excretion in concerted action also to provide ammonia for nitrogen assimilation. The aerobic C4-DC transporter DctA interacted with the regulator EIIAGlc of the E. coli glucose phosphotransferase system. It is suggested that EIIAGlc inhibits C4-DC uptake by DctA in the presence of the preferred substrate glucose.
// To run this, start Redis with Docker: run redis-up.sh in the home dir.
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"time"

	"github.com/go-redis/redis/v8"
)

// redisConnect returns the nationality prediction for a name, serving it
// from the Redis cache when possible and falling back to the API otherwise.
func redisConnect(name string) string {
	var ctx = context.Background()

	rdb := redis.NewClient(&redis.Options{
		Addr:     "localhost:6379",
		Password: "", // no password set
		DB:       0,  // use default DB
	})

	val, err := rdb.Get(ctx, "name:"+name).Result()
	if err != nil {
		// Cache miss: hit the API, then populate the cache below.
		val = getName(name)
	} else {
		fmt.Println("##########From the Redis cache##########")
		return val
	}

	err = rdb.Set(ctx, "name:"+name, val, 1*time.Hour).Err()
	if err != nil {
		panic(err)
	}

	fmt.Println("##########From the internet##########")
	return val
}

// getName fetches the raw JSON response from the nationalize.io API.
func getName(name string) string {
	resp, err := http.Get("https://api.nationalize.io/?name=" + name)
	if err != nil {
		log.Fatalln(err)
	}
	defer resp.Body.Close() // avoid leaking the connection

	// Read the response body and return it as a string.
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatalln(err)
	}
	return string(body)
}

func main() {
	// Example names to try: rahul, tharun, tom.
	name := "tom"

	t1 := time.Now()
	res := redisConnect(name)
	t := time.Now()

	fmt.Println(res)
	fmt.Println(t.Sub(t1)) // a second run should be much faster (cache hit)
}
#include <iostream> using namespace std; #include <string> string str; int n,w=0,op; main() { cin>>str; for(int i=0;i<str.size();i++) { if(str[i]>=97) {w++;} } op=str.size()%2+str.size()/2; if(w>= op ) { for(int i=0;i<str.size();i++) if(str[i]<97)cout<<(char)(str[i]+32); else cout<<(str[i]); } else { for(int i=0;i<str.size();i++) if(str[i]>=97)cout<<(char)(str[i]-32); else cout<<(str[i]); } }
// Add a transaction for an existing customer in a branch; returns false if
// the branch does not exist or the delegated call reports failure.
public boolean addBranchCustomerTransaction(String branchName, String customerName, double amount) {
    Branch existingBranch = findBranch(branchName);
    if (existingBranch != null) {
        return existingBranch.addCustomerTransaction(customerName, amount);
    }
    return false;
}
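A sketch of the assumed findBranch helper, reconstructed from how it is used above (the `branches` field and Branch.getName() are assumptions about the surrounding class):

// Hypothetical helper; not shown in the original snippet.
private Branch findBranch(String branchName) {
    for (Branch branch : branches) {
        if (branch.getName().equals(branchName)) {
            return branch;
        }
    }
    return null;
}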
// Call for any operation needing GLSL float16 data-type support. void TParseVersions::float16Check(const TSourceLoc& loc, const char* op, bool builtIn) { if (!builtIn) { const char* const extensions[] = { E_GL_AMD_gpu_shader_half_float, E_GL_EXT_shader_explicit_arithmetic_types, E_GL_EXT_shader_explicit_arithmetic_types_float16}; requireExtensions(loc, sizeof(extensions)/sizeof(extensions[0]), extensions, op); } }
#pragma once #include "VPShader.h" #include "ConstantBufferHolder.h" namespace Storm { // This class would be interfaced with Grid.hlsl class GridShader : public Storm::VPShaderBase, private Storm::ConstantBufferHolder { public: GridShader(const ComPtr<ID3D11Device> &device, unsigned int indexCount); public: void draw(const ComPtr<ID3D11DeviceContext> &deviceContext); void setup(const ComPtr<ID3D11Device> &device, const ComPtr<ID3D11DeviceContext> &deviceContext, const Storm::Camera &currentCamera); private: unsigned int _gridIndexCount; }; }
/** * Public API Surface of ng-polymorpheus */ export * from './classes/component'; export * from './directives/template'; export * from './directives/outlet'; export * from './tokens/context'; export * from './types/content'; export * from './types/handler'; export * from './types/primitive'; export * from './polymorpheus.module';
Amid all the dreadful economic news last week—the European meltdown, the insane trading glitch, the oil spill—it was easy to forget just how good the jobs report was. The report is one piece of data, and subject to revision, and you can't make too much of it. But jobs rose by 290,000, with 231,000 of those gained in the private sector. The unemployment rate increased to 9.9 percent largely because people flooded back into the job market. And the jobs trend is just as encouraging. The Bureau of Labor Statistics revised February and March job numbers upward, from -14,000 in February to +39,000, and from 162,000 in March to 230,000. I have been arguing since December that the combination of GDP growth and unsustainably high productivity figures would lead to strong job growth. Payroll jobs have now risen in five of the last six months, and the pace of growth is picking up steam. March and April 2010 have been the first two consecutive months of 200,000-plus jobs growth since November and December of 2006. (Washington Monthly's Steve Benen's "bikini chart" is starting to look less like a bikini.) But economists have been slow to catch on to the trend of stronger-than-expected jobs growth, and they are still skeptical of recovery. The Wall Street Journal printed a table showing that of 26 economists polled by Dow Jones Newswires, only four said the economy would create more than 226,000 jobs in April, while 19 said it would create less than 200,000 jobs in the month. Why are they still behind the curve? Many analysts and market commentators have repeatedly had difficulty foreseeing a recovery given that housing is still poor, consumers are still struggling, and credit still isn't freely available. The reality is that the recovery has taken place in spite of housing, consumers, and credit. It's been led by business, investment, trade, and exports. Business cycles get into a sweet spot when rising production leads to more jobs, which leads to more consumer spending, which leads to more orders for production. I wouldn't say we're quite there. But the business recovery is beginning to spill over into the consumer economy. Retail sales are coming around. The last piece of the puzzle to fall into place will be housing—still dependent on government support, still plagued by big problems. But there are signs that it may stop getting worse. On Monday, the Associated Press reported that the mortgage delinquency rate—i.e., the percentage of people behind 60 days or more on mortgages—"dropped in the first quarter for the first time since 2006, according to credit reporting agency TransUnion. The 60-day delinquency rate slipped to 6.77 percent, from 6.89 percent in the fourth quarter of 2009." It could be that there are just fewer and fewer people with mortgages to fall behind. Or it could be that, four years after the housing market began to decline, the sector that led the nation into recession may finally be bottoming out.
/// Return the starting country in single player playthroughs. If playing in multiplayer or if /// the starting country can't be determined then none is returned. pub fn starting_country(&self, histories: &[PlayerHistory]) -> Option<CountryTag> { match histories { [player] => Some(player.history.initial), _ => None, } }
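A call-site sketch (the save value and histories slice are assumptions about the surrounding crate; CountryTag is printed via Debug since its display impl is not shown):

// Hypothetical usage, e.g. after parsing a save file.
if let Some(tag) = save.starting_country(&histories) {
    println!("single-player starting country: {:?}", tag);
}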
Republican presidential nominee Donald Trump gives a thumbs-up as he arrives at a barbecue restaurant in Greensboro, North Carolina, U.S. September 20, 2016. REUTERS/Jonathan Ernst (Reuters) - Republican Donald Trump may have gotten it wrong when he said the moderator of the first U.S. presidential debate next week between him and his Democratic rival, Hillary Clinton, is a Democrat. NBC’s Lester Holt, who will moderate the debate on Monday, is registered as a Republican, according to voter information on the New York State Board of Elections website. NBC did not respond to requests for comment on Tuesday about Holt’s party affiliation. Trump has said repeatedly the presidential debates will be stacked against him. He said on Fox News on Monday that Holt was a “professional” but that he was a Democrat, adding that the NBC News anchor may be under pressure after critics accused fellow NBC journalist Matt Lauer of giving Clinton tougher treatment than Trump during a recent forum on defense issues. “Look, it’s a phony system. Lester is a Democrat. I mean, they are all Democrats. OK? It’s a very unfair system,” Trump said on the show. Bill O’Reilly, the Fox News host conducting the interview, said two other debate moderators, ABC’s Martha Raddatz and CNN’s Anderson Cooper, were also Democrats. Cooper is registered in New York with no party affiliation, according to the state elections site, which CNN confirmed. ABC did not respond to a request for comment about Raddatz’s registration. The fourth presidential debate moderator, Chris Wallace of Fox News, is a registered Democrat in Washington, D.C., according to the District’s elections website. Wallace has said in interviews that he registered with the party so he could vote in primaries in the heavily Democratic District of Columbia.
<gh_stars>1000+ import { useMemo } from './use-memo'; /** * @function * @template T * @param {T} initialValue * @return {{ current: T }} Ref */ const useRef = <T>(initialValue: T) => useMemo(() => ({ current: initialValue }), []); export { useRef }
import { Inject, Injectable } from '@nestjs/common'; import { Repository } from 'typeorm'; import { CreateTreinoDto } from './dto/create-treino.dto'; import { UpdateTreinoDto } from './dto/update-treino.dto'; import { Treino } from './entities/treino.entity'; @Injectable() export class TreinosService { constructor( @Inject('TREINO_REPOSITORY') private treinoRepository: Repository<Treino>, ) { } create(createTreinoDto: CreateTreinoDto) { const treino = new Treino(createTreinoDto) return this.treinoRepository.save(treino); } async findAll() { try { return await this.treinoRepository.find() } catch (error) { return new Error(error.message) } } async findAllForCref(cref: string) { try { return await this.treinoRepository.find({ where: { crefProfessor: cref } }) } catch (error) { return new Error(error.message) } } async findOne(id: string) { try { const treino = await this.treinoRepository.findOne(id); const idDados = await this.treinoRepository.query(`SELECT "dadosId" FROM professores WHERE "cref"='${treino.crefProfessor}' LIMIT 1`) const professor = await this.treinoRepository.query(`SELECT nome FROM dados WHERE id='${idDados[0].dadosId}' LIMIT 1`) if (professor && treino) { return { nomeProfessor: professor[0].nome, ...treino } } throw new Error("Dados não carregados") } catch (error) { return new Error(error) } } async update(id: string, updateTreinoDto: UpdateTreinoDto) { try { const updateTreino = new Treino(updateTreinoDto) const treino = await this.treinoRepository.findOne(id) if (treino) { this.treinoRepository.merge(treino, updateTreino) await this.treinoRepository.save(treino) } } catch (error) { throw new Error(error.message) } } async remove(id: string) { try { const treino = await this.treinoRepository.findOne(id) if (treino) return await this.treinoRepository.delete(id); return new Error("treino não encontrado") } catch (error) { return new Error(error.message) } } }
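The raw lookups in findOne interpolate values straight into SQL, which invites injection once `cref` comes from user input. A safer sketch with bound parameters (placeholder syntax below is Postgres-style; the table and column names are copied from the original queries):

// Parameterized variant of the two raw queries (illustrative).
const idDados = await this.treinoRepository.query(
  'SELECT "dadosId" FROM professores WHERE "cref" = $1 LIMIT 1',
  [treino.crefProfessor],
);
const professor = await this.treinoRepository.query(
  'SELECT nome FROM dados WHERE id = $1 LIMIT 1',
  [idDados[0].dadosId],
);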
Versican V0 and V1 Guide Migratory Neural Crest Cells* We previously showed the selective expression of the chondroitin sulfate proteoglycans versican V0 and V1 in barrier tissues that impede the migration of neural crest cells during embryonic trunk development (Landolt, R. M., Vaughan, L., Winterhalter, K. H., and Zimmermann, D. R. (1995) Development 121, 2303-2312). To test for an active involvement of these isoforms in the guidance process, we have now established protocols to isolate intact versican V0 and V1 in quantities sufficient for functional experiments. Using stripe choice assays, we demonstrate that pure preparations of either a mixture of versican V0/V1 or V1 alone strongly inhibit the migration of multipotent Sox10/p75NTR double-positive early neural crest stem cells on fibronectin by interfering with cell-substrate adhesion. We show that this inhibition is largely core glycoprotein-dependent, as the complete removal of the glycosaminoglycan chains has only a minor effect on the inhibitory capacity. Our findings support the notion that versican variants V0 and V1 act, possibly in concert with other inhibitory molecules such as aggrecan and ephrins, in directing the migratory streams of neural crest cells to their appropriate target tissues. The highly precise and coordinated migration of neural crest cells during early phases of embryonic development is controlled by the differential expression of permissive substrates and non-permissive/inhibitory molecules within the pathways and the bordering tissues, and the set of membrane receptors present on the moving cells (reviewed by Refs. . The journey of the multipotent neural crest stem and progenitor cells begins in the dorsal neural tube from where they emerge shortly after its closure. In the trunk region the cells are initially guided along a ventral trajectory before a second wave starts to invade the dorsolateral tissue underneath the ectoderm. Whereas the ventrally migrating populations differentiate into neurons and glia of the sensory and the sympathetic nervous system, the laterally progressing cells give rise to the melanocytes of the skin. On their route, neural crest cells pass through highly permissive extracellular matrices, which allow rapid cellular movements. These pathways are flanked by tissues that block neural crest cell immigration and thus provide the directional information. These barrier tissues, previously identified by microsurgical manipulations, include the posterior sclerotomes (5), the perinotochordal region (6), and for a short period also the subectodermal matrix prior to melanocyte precursor invasion (7).
Consequently, the streams of neural crest cells, which originally emigrate in an unsegmented fashion from the dorsal neural tube, are on their ventral path canalized into the anterior sclerotome, strictly avoiding the posterior somitic halves and the more ventrally localized perinotochordal zone. This particular migration behavior finally leads to the characteristic segmental pattern of the forming sensory and sympathetic ganglia. Since the major migration-promoting substrates, fibronectin and laminin (8), are uniformly expressed in both halves of the somites (9), the guidance of the migratory neural crest cells appears to depend mainly on inhibitory cues. Several extracellular matrix and cell surface components match the candidate profile for a migration-blocking function, as they are selectively expressed in non-permissive tissues. Molecules consistently absent from the pathways but highly expressed within the barriers include chondroitin 6-sulfate proteoglycans, peanut agglutinin (PNA)-binding glycoproteins (10,11), F-spondin (12), semaphorin3A (13), T-cadherin (14), collagen IX (15), and, except for the dorsolateral path, ephrins (16–18). For some of these molecules, like semaphorin3A (13) and ephrins (ephrin-B1 in avian (16) and ephrin-B2 in mammalian embryos (18)), inhibitory activities on neural crest cell migration have been demonstrated in vitro using neural tube or whole trunk explant culture systems. Unexpectedly, however, neither the gene inactivation of semaphorin3A (19) nor that of ephrin-B2 (20) or of the corresponding Eph receptors on neural crest cells (21,22) resulted in aberrant migration patterns through the somites in the mutant mice. These observations suggested that a concerted action of multiple inhibitory and some attractive cues is required to guide trunk neural crest cells in vivo (1,17,23,24). Prime candidates for a cooperative partnership with the cell surface contact inhibitors of neural crest motility are extracellular matrix components belonging to the chondroitin sulfate proteoglycans (CSPGs) and PNA-binding glycoproteins (10,11,25,26). In particular, the chondroitin sulfate proteoglycans versican (27) and aggrecan (28) appear to be functionally involved in the inhibition. Both proteoglycans are members of the hyalectan family, forming large complexes through interactions with hyaluronan and link proteins (29). They play key roles during development, as the constitutive abrogation of their expression leads in homozygous mice to early intra-uterine (versican) or perinatal death (aggrecan) (30,31). At least four different isoforms of versican (V0 to V3) exist as a result of alternative splicing of two exons encoding the central glycosaminoglycan-carrying domains, glycosaminoglycan (GAG)-α and GAG-β (32,33). The largest splice variants of versican, V0 and V1, are highly expressed during embryogenesis and are frequently associated with poorly adhesive, fast-proliferating tissues displaying a high extracellular matrix turnover rate (34,35). Versican V2, a smaller central nervous system-specific isoform, is in contrast produced late during neurogenesis (36). It is a potent inhibitor of axonal growth and hence seems to participate in restricting the structural plasticity of myelinated fiber tracts and in impeding regeneration in the mature central nervous system (37). We have previously shown that the expression of versican V0 and V1 is, during neural crest cell migration, tightly associated with the formation of barrier tissues in the trunk of chicken embryos (35). Versican V0 and V1 are selectively deposited in the posterior half of the sclerotome, transiently present within the dorsolateral subectodermal tissues, and to a lesser extent expressed in the perinotochordal tissue, where aggrecan appears to be the prominent hyalectan (38,39). This intriguing codistribution of versican V0 and V1 with barrier tissues prompted us to isolate the intact forms of these large versican proteoglycan variants and to explore their potential guidance function in neural crest cell migration during early embryonic trunk development.

* This work was supported in part by grants from the Swiss National Science Foundation, the Hartmann Müller, the Lydia Hochstrasser, and the Velux Foundations (to D. R. Z.), and grants from the Swiss National Science Foundation and the National Center of Competence in Research "Neural Plasticity and Repair" (to L. S.). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

MATERIALS AND METHODS

Antibodies-Polyclonal antibodies against recombinant fragments of the GAG-α or GAG-β domains of bovine, human, and mouse versicans were affinity-purified from rabbit antisera previously prepared in our laboratory (32,37,40,41). The monoclonal antibodies CS-56, recognizing intact chondroitin sulfate chains, and ΔDi-6S, specific for chondroitin 6-sulfate "stubs" exposed after chondroitinase ABC digestion, were obtained from Sigma and Seikagaku, respectively. Polyclonal antibodies against p75NTR were purchased from Chemicon, and a monoclonal antibody specific for Sox10 (42) was a kind gift of Michael Wegner (University of Erlangen, Germany). For immunostaining of fibronectin, the monoclonal antibody IST-4 (Sigma) was used.

Isolation of Versican Isoforms-Intact versican V1, versican V2, and a mixture of V0 and V1 were isolated from calf aorta, bovine spinal cord, and the spent culture medium of the human glioma cell line U251MG, respectively. Our previously developed procedure for the isolation of brain versican V2 was adapted for this purpose (40). Fig. 1 summarizes the sources and the steps involved in the protein-chemical purification of the individual isoforms. Bovine tissues obtained from the local abattoir were homogenized in 6 volumes of ice-cold extraction buffer containing 4 M guanidine hydrochloride (GdnHCl), 50 mM sodium acetate, pH 5.8, 2 mM EDTA, 10 mM N-ethylmaleimide, 1 mM Pefabloc, 25 mM 6-aminocaproic acid, and 5 mM benzamidine and extracted overnight. After filtration through cheesecloth, the extract was cleared by centrifugation at 100,000 × g for 45 min. The supernatant was dialyzed extensively against 50 mM Tris, 10 mM EDTA, pH 7.0. The precipitate formed during dialysis was removed by centrifugation at 27,000 × g for 30 min. Subsequently, ammonium sulfate was added to the supernatant to reach 20% saturation. Proteins were allowed to precipitate overnight at 4°C. Following centrifugation at 27,000 × g for 45 min, the ammonium sulfate concentration was raised to 60% saturation. After an additional overnight incubation at 4°C, the precipitate was collected, resuspended in urea buffer (6 M urea, 0.25 M NaCl, 50 mM Tris, 10 mM EDTA, pH 6), and dialyzed against 10 volumes of the same buffer.
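The ammonium sulfate steps above are specified as percent saturation rather than as absolute amounts of salt. As a minimal sketch, not taken from the paper, the amount of solid ammonium sulfate to add can be estimated with the widely used empirical formula g/l = 533(S2 − S1)/(100 − 0.3 S2) at about 20°C; the helper function below is hypothetical, and the constants are standard textbook values rather than figures from this study.

```python
def ammonium_sulfate_to_add(volume_l, s1, s2):
    """Estimate grams of solid ammonium sulfate needed to raise a solution
    from s1 to s2 percent saturation at roughly 20 degrees C.

    Uses the common empirical formula g/l = 533*(s2 - s1)/(100 - 0.3*s2);
    the result is approximate and temperature-dependent.
    """
    if not (0 <= s1 < s2 <= 100):
        raise ValueError("require 0 <= s1 < s2 <= 100")
    grams_per_liter = 533.0 * (s2 - s1) / (100.0 - 0.3 * s2)
    return grams_per_liter * volume_l

# The two cuts described above: 0 -> 20% saturation, then 20 -> 60%.
print(round(ammonium_sulfate_to_add(1.0, 0, 20)))   # ~113 g per liter
print(round(ammonium_sulfate_to_add(1.0, 20, 60)))  # ~260 g per liter
```

The resuspended, dialyzed material from these cuts then enters the ion-exchange step described next.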
This crude extract was batch-absorbed on Q-Sepharose FF (Amersham Biosciences) overnight and then packed into a column. The bound proteins were first washed with urea buffer containing 0.25 M NaCl and then eluted with a linear NaCl gradient to 1 M. Aliquots of 10 µl of each fraction were tested for the presence of versicans by slot blotting using antibodies against bovine GAG-α (V2) or GAG-β (V1) epitopes (40). Samples positive on the blot were pooled, dialyzed against 0.5 M NaCl, 20 mM Tris, 10 mM EDTA, pH 8, and batch-absorbed overnight to hyaluronan-Sepharose, which had previously been prepared according to the method of Tengblad (43) by coupling hyaluronan from bovine trachea (Sigma) to EAH-Sepharose 4B (Amersham Biosciences). Separate hyaluronan-Sepharose batches were used for each versican isoform preparation to avoid cross-contamination. The hyaluronan-Sepharose affinity column was subsequently packed and washed with a gradient of 0.5 to 3 M NaCl. Finally, hyalectans were eluted with 4 M guanidinium buffer (4 M GdnHCl, 20 mM Tris, 10 mM EDTA, pH 8). After analysis by slot blotting, the versican-containing fractions were pooled and concentrated to a volume of 500 µl using Biomax 100/Ultrafree-15 filters (Millipore). For the V1 preparation from aorta, an additional gel filtration step on Sepharose CL-4B was required to remove partially degraded versican. For this purpose, the column was equilibrated with 3 bed volumes of the GdnHCl buffer before loading of the sample. The column was run at a slow flow rate of 0.4 ml/min. Every second fraction was tested for the presence of versican V1 on a slot blot. Antibody-reactive samples were pooled, dialyzed against PBS, and concentrated. For the isolation of a mixture of versicans V0 and V1 from U251MG cell supernatants, the proteoglycans were first precipitated from conditioned medium by adding ammonium sulfate to 70% saturation, followed by purification with anion exchange and hyaluronan affinity chromatography as described above. Domain-specific antibodies recognizing the human versican homologues were used for the slot blot detection instead.

Protein Electrophoresis and Immunoblotting-Versican samples were separated on 4–15% PHAST SDS-polyacrylamide gels (Amersham Biosciences) under reducing conditions and stained with either Coomassie Blue (NOVEX colloidal blue staining kit, Invitrogen) or silver stain (PHAST GEL Silver Kit; Amersham Biosciences). For immunoblotting, chondroitinase ABC-digested samples were resolved on 4–15% PHAST polyacrylamide gels, followed by diffusion transfer onto Immobilon-P membranes (Millipore) at 70°C for 30 min. The blots were blocked with 3% dry milk in PBS for 30 min, followed by incubation with the primary antibodies (all diluted 1:1000 in the blocking buffer) overnight. After washing, alkaline phosphatase-conjugated goat anti-rabbit or anti-mouse Ig secondary antibodies (BIOSOURCE International, CA; diluted 1:15,000 in PBS) were allowed to bind for 1 h. The color reaction was performed with Western Blue substrate solution (Promega).

Preparation of Primary Cultures of Early Neural Crest Stem Cells (eNCSCs) and Embryonic Fibroblasts-eNCSC cultures were derived from neural tubes of embryonic day 9 (E9) mice according to the protocol of Kléber et al. (44,45). Briefly, embryos were gently squeezed out of the uterus.
After cleaning with Hanks' balanced salt solution, the trunk region caudal to the heart and extending to the most posterior somite was cut from the rest of the embryo with the help of forceps and tungsten needles. The isolated trunks were pooled and transferred to a digestion solution containing 0.4 units/ml Dispase (Roche Diagnostics) in Hanks' buffer without Ca2+ and Mg2+. The trunks were slowly triturated and then kept at 4°C for 6 min. This procedure was repeated 3 times using a fresh digestion mixture. Finally, the neural tubes were triturated again very gently to free them completely of all other tissues and then transferred to Dulbecco's modified Eagle's medium containing 10% fetal bovine serum to stop the digestion reaction. The explants were subsequently grown on various substrates in SN1 medium (SN medium with 10 ng/ml bovine fibroblast growth factor) while keeping the cultures in a gas-tight modular incubator chamber flushed with 1% O2, 6% CO2, and balanced N2 (44). Embryonic fibroblasts were prepared from the trunks of E14.5 mouse embryos as described by Talts et al. (46).

Stripe Choice Assays-Stripe choice assays were performed as described previously (37) (modified from the protocol of Vielmetter et al. (47)). Glass coverslips (20 × 20 mm) were coated in alternating stripes of test and control substrates for migratory eNCSCs. The coating solutions of the test substrate contained variable concentrations of versican isoforms (0 to 100 µg/ml) admixed to 100 µg/ml human fibronectin, whereas the control lanes were treated with 20 µg/ml fibronectin (Roche Diagnostics) alone. To exclude major differences in the coating efficiency of fibronectin upon addition of versican to the test substrate, the ratios of the immunofluorescence staining intensities of control and test substrate lanes were determined using the analySIS image software (Soft Imaging System/Olympus; region of interest: 480 µm², n = 10 each) (Fig. 2). For each assay, three to four neural tubes were placed perpendicular to the stripe pattern, covered with a small amount of SN1 culture medium, and first incubated for 45 to 50 min at 37°C in a 5% CO2 atmosphere to allow attachment of the explants. Once the tubes adhered to the substrate, the dishes were gently flooded with medium and kept at reduced oxygen levels. For this purpose, the dishes were placed in a gas-tight modular incubator chamber, which was flushed for 3 to 5 min with a gas mixture containing 1% O2, 6% CO2, and balanced N2 to generate actual O2 levels of 3 to 6%. eNCSCs were allowed to migrate out of the neural tube for 20 h and were then fixed with 3.7% formaldehyde in PBS for 10 min. They were either directly visualized by phase-contrast microscopy or processed for immunofluorescence staining. Alternatively, embryonic fibroblasts were allowed to emigrate onto the stripe pattern from the surface of small, uncoated circular coverslips.

Neural Crest Cell Adhesion Assay-For the adhesion assays, eNCSCs were isolated from E10.5 rat embryos as described (48) and re-plated after 15 h in culture on plastic dishes coated with fibronectin alone (100 µg/ml), versican alone (75 µg/ml), or a mixture of versican and fibronectin (75 and 100 µg/ml, respectively). The time course and saturation of the attachment were monitored by fixing the cells after 5, 15, 30, or 45 min of incubation. The number of cells that attached after 45 min on fibronectin alone was taken as 100%.
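To make the normalization explicit: attachment is scored relative to the fibronectin-only dish at the 45-min time point. The snippet below is a hypothetical analysis helper, not code from the study; the counts are invented placeholders, and only the normalization rule (fibronectin alone at 45 min equals 100%) comes from the text above.

```python
# Hypothetical example: normalize adhesion counts to the fibronectin-only
# control at 45 min, as defined in the assay above.
raw_counts = {
    # condition -> {minutes: attached cells per field} (placeholder numbers)
    "fibronectin":            {5: 40, 15: 110, 30: 180, 45: 200},
    "versican":               {5: 0,  15: 0,   30: 0,   45: 0},
    "versican + fibronectin": {5: 5,  15: 10,  30: 15,  45: 18},
}

reference = raw_counts["fibronectin"][45]  # defined as 100% adhesion

for condition, series in raw_counts.items():
    percent = {t: 100.0 * n / reference for t, n in series.items()}
    print(condition, {t: round(p, 1) for t, p in percent.items()})
# With these placeholder counts, "versican + fibronectin" plateaus below
# 10%, mirroring the qualitative result reported for the real assay.
```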
Immunofluorescence Staining of Cells and Tissue Sections-eNCSCs were visualized by double immunofluorescence staining of fixed cultures with polyclonal antibodies against p75NTR (1:300 dilution; Chemicon) and with a monoclonal antibody against Sox10 (1:3 dilution (42)). For this purpose, cells were first blocked and permeabilized with PBS containing 10% goat serum and 0.3% Triton X-100 for 10 min. Subsequently, they were incubated with the primary antibodies for 2 h at room temperature and then labeled for fluorescence detection with Cy3-conjugated goat anti-rabbit IgG and fluorescein isothiocyanate-conjugated anti-mouse IgG secondary antibodies (both 1:200 dilution; Jackson Laboratories). Alternatively, in some experiments the substrate coats were immunostained with polyclonal antibodies against the GAG-β domain of versican (1:100 dilution) and with the monoclonal antibody IST-4, recognizing human fibronectin. For immunostaining of tissue sections, mouse embryos were formalin-fixed, dehydrated, and embedded in paraffin. Sections of 2 µm were deparaffinized in xylene and re-hydrated through a dilution series of ethanol in water. Heat unmasking of the antigens was performed in 10 mM Tris, 1.7 mM EDTA, 1 mM sodium citrate, pH 7.8, in a steam cooker for 2 min at 100°C. The sections were washed twice in PBS for 5 min and subsequently blocked, first in 0.2% gelatin and 0.5% bovine serum albumin in PBS for 30 min and then in blocking buffer from a M.O.M. kit (mouse-on-mouse, Vector Laboratories). Incubation with the monoclonal antibody against Sox10 (1:6) and the polyclonal antibodies specific for versican GAG-β (1:1000) was allowed to proceed overnight at 4°C. All antibodies were diluted in 0.5% bovine serum albumin, 0.2% gelatin, 0.02% NaN3 in PBS. After washing in PBS, the sections were incubated with Alexa 488 goat anti-rabbit IgG and Alexa 594 goat anti-mouse IgG secondary antibodies (both Molecular Probes) diluted 1:200 in PBS. Counterstaining was performed with Hoechst H33258 bis-benzimide stain (Invitrogen) for 2 min. Sections were finally washed in PBS and mounted in fluorescence mounting medium (Dako). Images were taken with an Olympus BX61 microscope equipped with an F-View II camera using the analySIS 3.2 software (Soft Imaging System). Red, green, and blue fluorescence signals were stored separately in the corresponding color channels in RGB format. Brightness and contrast were adjusted with Photoshop 7.0.

RESULTS

Isolation of the Proteoglycan Isoforms of Versican (V0, V1, and V2)-The preparation of the large isoforms of versican for functional studies has notoriously been difficult, as they are, except for versican V2, predominantly expressed in embryonic tissues, yielding only very small quantities of the intact proteoglycans (49). In contrast, adult tissues such as blood vessels frequently contain moderate amounts of immunohistochemically detectable versicans (50). At this stage, however, versican is largely present as fragments resulting from a physiological cleavage process involving ADAMTS proteases and/or other matrix-degrading metalloenzymes (51). We therefore sought new sources of intact versicans in tissues of young animals (calves) and in supernatants of cultured cells.
Adapting our previously developed protein-chemical protocol for purifying versican V2 from bovine brain (40), we were able to isolate a mixture of versican V0/V1 from the culture supernatant of the glioma cell line U251MG, versican V1 with trace amounts of V0 from calf aorta, and, for biochemical comparison, versican V2 from bovine spinal cord (Fig. 1). Practically all of the isoforms were intact, revealing their characteristic high molecular mass core protein bands on SDS-PAGE only after removal of the chondroitin sulfate side chains (Fig. 3; V0, ≈650 kDa; V1, ≈520 kDa; and V2, 400 kDa). Degradation products were nearly absent. Only two minute bands migrating in the size range of the V2 isoform became weakly apparent on immunoblots of the U251MG and aorta preparations (Fig. 3C). Their immunoreactivity with GAG-β-specific polyclonal antibodies clearly excluded an identity with the V2 variant, which lacks this domain. Except for some low molecular mass components later introduced by the treatment with chondroitinase ABC, no other contaminating proteins were detectable in Coomassie Blue- and silver-stained gels. The yields of these preparations were 0.25 mg of versican V0/V1 per liter of culture supernatant, 0.2 mg of versican V1 per 100 g of aortic tissue, and 0.3 mg of versican V2 per 100 g of spinal cord, respectively (referred to the protein content).

Versicans V0/V1 and V1 Specifically Restrict the Migration of Early Neural Crest Stem Cells in Vitro-Having sufficient amounts of intact versicans at hand, we could now study their effect on the migration behavior of eNCSCs in stripe choice assays. These in vitro assays simulate the metameric expression pattern of versican V0 and V1 within the posterior sclerotome during the active neural crest cell migration period, occurring around stage 20 in chick (35) and around E9.5 in mouse embryos (Fig. 4A). Consequently, neural tubes from E9.5 mouse embryos were used as the stem cell source for the in vitro experiments. The explants were cultured on coverslips coated with alternating stripes of the migration-promoting extracellular matrix protein fibronectin and the versican test substrate admixed to fibronectin. As in vivo, the eNCSCs emigrated from the neural tube as sheets and moved onto the substrate-coated surface. This migration remained uniform in assays in which the coverslips were coated exclusively with 20 and 100 µg/ml fibronectin in a stripe pattern (Fig. 4C). The picture changed dramatically, however, when eNCSCs were confronted with lanes of fibronectin alternating with stripes treated with a mixture of versican V0 and V1 plus fibronectin (Fig. 4B). In these experiments, the continuous sheets of migratory neural crest cells divided into separate streams shortly after leaving the neural tube, subsequently advancing only on the versican-free surfaces. This selective movement differed greatly from the behavior of embryonic trunk fibroblasts, which did not display a substrate preference. In a comparable stripe choice assay, the fibroblasts migrated uniformly, even at a high versican coating concentration (Fig. 5). In contrast to the fibroblasts, early embryonic neural crest stem cells avoided versican V0/V1-containing substrates already at versican coating concentrations as low as 25 µg/ml (Fig. 6, B and F), clearly overriding the migration-promoting effect of the fibronectin (100 µg/ml) also present in these lanes.
Whereas the inhibition was not yet complete at this low versican V0/V1 level, no crossing of the cells could be observed anymore at higher coating concentrations (Fig. 6, C and G, and D and H). Of note, parallel experiments with intact versican V1 from bovine aorta gave very similar results in this experimental setting (data not shown). In contrast to the strong effect on migration behavior, contact with versican substrates appeared to have no influence on cellular differentiation. Double immunofluorescence staining with the markers p75NTR and Sox10, characteristic of an early neural crest stem cell phenotype (42,48), revealed a rather homogeneous population along the entire migration path throughout the versican concentration range tested (Fig. 6, E–H).

Versican Core Glycoprotein Retains the Inhibitory Capacity after Removal of the Chondroitin Sulfate Side Chains-To investigate whether the inhibitory function of versican V0/V1 originates from the GAG moiety or from the core glycoproteins, we digested our preparations with chondroitinase ABC and tested them again in stripe choice assays (Fig. 7). Prior to these experiments, the efficient removal of the chondroitin sulfate side chains had been confirmed by slot and Western blot analysis with the monoclonal antibodies CS-56, against intact chondroitin sulfate, and ΔDi-6S, recognizing the stubs of chondroitin 6-sulfate exposed next to the core protein linker region after GAG cleavage (Fig. 7, A and B). Because intact versican V0/V1 was still detected in the CS-56 slot blot at a 1:100 dilution, whereas no staining was observed in a 1:2-diluted sample after chondroitinase ABC treatment, we concluded that the digestion was more than 98% complete (a worked version of this bound is sketched at the end of this section). The stripe choice experiments demonstrated that the versican core glycoproteins were still able to restrict the migration of eNCSCs. Similar to the results with the intact proteoglycans, the effect was directly proportional to the increasing concentration of the GAG-free versican core glycoprotein, although some reduction of the inhibitory capacity was observed. At low coating concentrations (25 µg/ml) of digested V0/V1, cells migrating out of the neural tube showed only a marginal stripe restriction (Fig. 7C). At concentrations of 50 µg/ml (Fig. 7D) and higher, almost all of the cells followed the stripe pattern, again migrating on the areas coated with fibronectin alone. This set of experiments clearly showed that the capacity to inhibit eNCSC migration resides within the versican core glycoprotein. The chondroitin sulfate side chains may, however, be required to modulate this inhibitory function, because the coating concentration had to be roughly doubled after GAG removal to achieve an effect similar to that of the intact proteoglycan.

Versican Interferes with the Substrate Adhesion of eNCSCs to Fibronectin-To test whether intact versican interferes with fibronectin-mediated cell adhesion, uniform Sox10- and p75-positive eNCSC populations were re-plated on culture dishes coated with versican V0/V1 alone (75 µg/ml), fibronectin alone (100 µg/ml), or a mixture of versican and fibronectin (75 and 100 µg/ml, respectively). These experiments demonstrated that, relative to the number of cells binding to fibronectin, less than 10% adhered to the surface coated with versican V0/V1 plus fibronectin, whereas on pure versican V0/V1 (75 µg/ml) no cell adhesion was observed, even long after cell attachment had reached saturation in the control dishes (Fig. 8).
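To spell out the digestion-completeness bound referred to above, as read from the dilution series: if the intact sample still gives a CS-56 signal at 1:100, the assay detects down to roughly 1/100 of the starting chondroitin sulfate; if the digested sample gives no signal even at 1:2, its residual intact chondroitin sulfate must lie below 2/100 = 2% of the input. The snippet below merely formalizes this reading of the argument; it is not a calculation given in the paper.

```python
# Back-of-envelope for the digestion-completeness bound described above.
detectable_dilution_intact = 100   # intact sample still positive at 1:100
negative_dilution_digested = 2     # digested sample already negative at 1:2

# Detection threshold, in units of the undiluted intact CS signal:
threshold = 1.0 / detectable_dilution_intact           # <= 1/100 of input

# The digested sample at 1:2 falls below that threshold, so:
residual_fraction = threshold * negative_dilution_digested   # < 2/100
print(f"residual intact chondroitin sulfate < {residual_fraction:.0%}")
print(f"digestion completeness > {1 - residual_fraction:.0%}")  # > 98%
```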
Hence, these comparative in vitro assays provided strong evidence that versicans inhibit neural crest cell migration by negatively controlling the binding of neural crest cells to adhesion-promoting extracellular matrix substrates such as fibronectin.

DISCUSSION

The findings of our present and previous work (35) together provide clear evidence for a direct functional involvement of versicans V0 and V1 in the inhibition and guidance of neural crest cell migration during embryonic trunk development. In particular, these are: 1) the selective, spatiotemporally coordinated expression patterns of versican V0 and V1 in barrier tissues of chicken and mouse embryos impeding the invasion of neural crest stem and progenitor cells; 2) the strictly concentration-dependent exclusion of eNCSC movement from areas containing intact versican V0 and V1 in stripe choice experiments; 3) the specific, mainly core glycoprotein-associated inhibitory function of versicans interfering with eNCSC migration even after complete removal of the chondroitin sulfate chains; and 4) the strong activity of isolated versicans in suppressing the adhesion of eNCSCs to their physiological substrate fibronectin.

A prerequisite for our functional studies has been the identification of suitable sources for the isolation of the intact versicans. This has in the past been hampered by the fact that the largest versican splice variants, V0 and V1, are predominantly expressed during early phases of embryonic development (34,35), where they are subjected to a highly dynamic turnover, most likely controlled by the action of specific ADAMTS proteinases and matrix metalloproteinases (51,52). Consequently, the yields of intact versican are minimal when embryonic tissues are used as a source (e.g. 30 µg of core protein from the limb buds of 750 chick embryos (49)). During maturation, the expression of the V0 and V1 isoforms is greatly reduced, and the core proteins left in the adult organism are largely fragmented, as demonstrated by protein-chemical analysis of tissue extracts and by immunological staining of neo-epitopes exposed after physiological cleavage with ADAMTS proteinases (51,52).

[FIGURE 5 legend: Phase-contrast (A) and immunofluorescence staining (B: versican, red; Hoechst nuclear dye, blue) demonstrate that embryonic mouse fibroblasts, which themselves express versican, show, in contrast to eNCSCs, no substrate preference in an analogous stripe choice migration assay. Bars, 100 µm.]

[FIGURE 6 legend: The inhibition of eNCSC migration by versican V0/V1 is concentration-dependent. Neural tubes of E9.5 mouse embryos were placed on an alternating substrate stripe pattern coated with 20 µg/ml fibronectin alone and a mixture of increasing concentrations of intact versican V0/V1 (A/E, 0 µg/ml; B/F, 25 µg/ml; C/G, 50 µg/ml; D/H, 100 µg/ml) and 100 µg/ml fibronectin. Phase-contrast images (A–D) show that the eNCSCs emigrate uniformly from the neural tube but rapidly divide into separate streams when they encounter versican. Whereas the cells moving at the front completely avoid versican V0/V1 at coating concentrations of 50 and 100 µg/ml, some lane crossing can still be observed at 25 µg/ml. The migratory cells maintain their multipotent phenotype all along the migration path, as demonstrated by double immunofluorescence staining with the early stem cell markers Sox10 (green) and p75NTR (red) (E–H). Bars, 100 µm.]
Despite this partial degradation, V0 and V1 fragments stay incorporated in various elastic tissues such as blood vessels (50,53) and skin (41) and can therefore be detected by immunohistochemical techniques. Whereas these fragments may still contribute to the mechanical properties of mature extracellular matrices, it appears likely that the limited proteolysis of versicans V0 and V1 is required for the abrogation of their functions in cell proliferation and migration during embryonic development. As the proteolytic products of versican V0 and V1 rapidly accumulate postnatally, we have used bovine tissues from very young animals and supernatants from cell cultures to obtain sufficient quantities of highly purified and intact proteoglycans for our functional studies. In consequence, the proportion of degradation products could be minimized in these preparations. Nevertheless, minute amounts of versican fragments migrating on SDS-PAGE around 400 kDa and slightly above could still be detected on immunoblots apart from the intact core glycoproteins. Despite their size similarity with versican V2, they are most probably derived from versican V0 by cleavage within the GAG-β domain. This is concluded from the fact that, unlike versican V2, they are reactive with both GAG-α and GAG-β domain-specific antibodies. Hence, from our tissue expression studies (32) and protein-chemical analysis, we currently have no evidence that versican V2 is expressed outside of the central nervous system, as has been indicated in a recent report (54). Whereas the aorta extract contained predominantly intact versican V1 with only trace amounts of versican V0, a roughly equal proportion of these isoforms could be isolated from the glioma preparation. After chondroitinase ABC digestion, both versican isoforms were strongly reactive with the monoclonal antibody ΔDi-6S (3B3) used in various immunohistological studies (10,11,25,26). Because all proteoglycan isoforms of versican also bind the peanut agglutinin, it appears that at least parts of the CS-6 epitopes and PNA-binding carbohydrates are directly linked to the core proteins of versican V0 and V1 in tissues forming barriers to neural crest cell migration and axonal growth. The relationship between PNA-binding fragments of versican V0 and V1 and the previously described axon growth-inhibitory glycoproteins isolated from the chick somites (55) is currently unknown. Because primary cell cultures of dissociated embryonic trunks of chicken and mouse express the large versicans V0 and V1, we have performed most of our functional studies with preparations containing a mixture of these isoforms. Nonetheless, versican V1 alone proved to be similarly effective in analogous experiments (data not shown). In all stripe choice assays, fibronectin was added to the versican coating solutions. In this way, we have excluded that versicans simply act as non-permissive substrates for migratory neural crest stem cells; rather, they function as active inhibitors, which suppress the migration-promoting properties of fibronectin.

[FIGURE 7 legend (A and B): Different dilutions of digested and undigested versican V0/V1 samples were applied to Immobilon membranes and developed with monoclonal antibodies against the intact chondroitin sulfate side chains (CS-56) and against the stub epitopes (ΔDi-6S) exposed after digestion. The complete disappearance of CS-56 immunoreactivity after chondroitinase ABC treatment (A), coinciding with a strong increase in ΔDi-6S-specific staining (A and B), demonstrates the efficient removal of the glycosaminoglycan side chains. Phase-contrast images (C and D) reveal that the migration preference of eNCSCs for the control lanes containing exclusively fibronectin is maintained when the test substrate stripes have been coated with at least 50 µg/ml chondroitinase ABC-digested versican V0/V1 together with 100 µg/ml fibronectin (D). At lower versican core glycoprotein concentrations (25 µg/ml), only a marginal stripe pattern can be noted (arrowhead in C).]

As fibronectin is present in the embryonic trunk in both pathways and barrier tissues (9), the stripe choice assays closely reflect the in vivo situation. In comparison to other inhibitory molecules that have previously been tested in a similar experimental set-up (13,15,18), only relatively low versican concentrations had to be applied to observe maximal inhibition. Considering molar relationships, versican V0 and V1 may even be the most potent inhibitors of neural crest cell migration (see the estimate sketched below). The data of our functional study are in line with previous experiments demonstrating that the perturbation of chondroitin sulfate proteoglycan biosynthesis in mouse embryos leads to an abnormal migration behavior of neural crest cells through the posterior sclerotomes (56), which normally express large amounts of chondroitin sulfate proteoglycans, in particular the versican isoforms V0 and V1 (35). They are, furthermore, supported by observations made in splotch mice, which closely correlate the disruption of normal neural crest cell migration, and the consequent failure of target tissue colonization, with the ectopic expression of versican in the migratory pathways of these Pax3 mutants (57). Nevertheless, a controversy concerning the inhibitory potential of sclerotomal versicans on neural crest cell migration has been raised, leaving a similar role for the notochord-associated aggrecan undisputed (54). In this previous paper (54), a long-range attractive function of versicans has been postulated, whereas our experiments clearly demonstrate the contact-inhibiting effect of the intact versicans V0 and V1. This discrepancy may be explained by the fact that partially degraded versican preparations from adult bovine aorta and chicken trunks were used in this earlier study. Because the specific cleavage of versicans with ADAMTS proteinases may form part of the physiological process that neutralizes the inhibitory properties of versicans, these partly fragmented preparations may have been less active. How the signal initiated through the contact of moving neural crest cells with versican is translated into the inhibition of cellular migration is currently still open. Versicans may either directly activate a versican-specific receptor on the surface of neural crest cells or act indirectly by sterically interfering with the interaction between migration-promoting substrates and their integrin receptors (58,59). The steric hindrance model is supported by the observation that the complete removal of the chondroitin sulfate side chains leads only to a modest reduction, but not to the abolition, of the inhibitory activity.
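For the molar comparison invoked above, the mass-based coating concentrations can be converted using the core protein masses estimated from SDS-PAGE. The sketch below is illustrative only: it uses the core glycoprotein masses (≈650 and ≈520 kDa) and ignores the glycosaminoglycan chains, which make the intact proteoglycans considerably heavier, so the true molar concentrations of the intact molecules are lower than these figures.

```python
# Illustrative molarity estimate for the coating solutions (assumption:
# core-protein mass only; intact proteoglycans with GAG chains are heavier).
CORE_MASS_DA = {"V0": 650_000, "V1": 520_000}  # from the SDS-PAGE estimates

def molar_nM(mass_ug_per_ml, molar_mass_da):
    grams_per_liter = mass_ug_per_ml * 1e-3       # ug/ml equals mg/l
    return grams_per_liter / molar_mass_da * 1e9  # mol/l -> nM

for isoform, mass_da in CORE_MASS_DA.items():
    for coat in (25, 50, 100):  # ug/ml, as in the stripe assays
        print(f"{isoform} at {coat} ug/ml ~ {molar_nM(coat, mass_da):.0f} nM")
# e.g. V1 at 25 ug/ml corresponds to roughly 48 nM of core protein.
```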
This relatively small effect of GAG removal could be caused by a partial collapse of the core protein in the absence of the highly sulfated glycosaminoglycan side chains, greatly diminishing the hydrodynamic size of the versican molecule and possibly allowing the re-establishment of a few interactions between the cell surface and the permissive substrate. Along this line, it appears plausible that the extent of inhibition is controlled in vivo by the expression of specific splice forms varying in core protein size and in the number of covalently attached chondroitin sulfate side chains. Despite the fact that no versican-specific transmembrane signaling molecule has been identified as yet, the versican-receptor hypothesis should, however, not be discarded. Only recently, we have shown that the central nervous system-derived versican V2 efficiently blocks axonal growth (37). Schweigreiter et al. (60) could subsequently demonstrate that this inhibition is mediated by a signaling cascade involving two members of the Rho family of small GTPases, RhoA and Rac1. This study showed that axonal contact with versican V2 activates RhoA and inactivates Rac1 in cerebellar granule cells. Consequently, the existence of an as yet unknown versican receptor has been postulated that triggers this reaction, finally leading to growth cone collapse. An analogous signaling mechanism could also be responsible for the inhibition of neural crest cell migration by versican V0 and V1. During cellular movement, the activation of Rho is generally associated with the retraction of migratory cells, whereas the activation of Rac promotes the formation of membrane protrusions at the leading edge (61). Integrin receptors are key mediators of these responses (62). For instance, the activation of RhoA and its downstream target ROCK directly affects the migration of leukocytes by decreasing the affinity of the α4β1 integrin for its extracellular ligands (63). Interestingly, the same fibronectin-binding integrin is known for its central role in the locomotion of neural crest cells (64,65). Hence, a RhoA-mediated signaling process engaged upon contact with versican V0 and V1 could also negatively regulate the integrin-dependent migration of neural crest cells during embryonic development. The suppression of cell-matrix interactions by versican V0 and V1 may not be sufficient to completely abrogate neural crest cell invasion of barrier tissues in vivo, as some neural crest cells could eventually switch from a collective chain-type movement (66) to a random amoeboid migration pattern (reviewed in Ref. 67). For a few cells, such an erratic locomotion within the caudal, versican-containing portion of the sclerotome has indeed been observed in chicken trunk explants after disrupting the interaction between the EphB3 receptor on migratory neural crest cells and ephrin-B1 on the surface of posterior sclerotomal cells by addition of soluble ephrin-B1 (1,16).

[FIGURE 9 legend: Inhibitors of neural crest cell migration selectively expressed in barrier tissues during trunk development. Routes of neural crest cell migration are indicated by green arrows; barrier tissues are marked in red, whereas pathways are colored in green. Versicans V0 and V1 carry both CS-6S epitopes and PNA-binding carbohydrates; they have therefore been grouped together in this scheme, modified from Ref. 2: F-spondin (12), ephrins (16,18), Sema3A (13), aggrecan (38,68), collagen IX (15), T-cadherin (14), and CS-6S/PNA (10,11).]
Nonetheless, the constitutive inactivation of the corresponding ephrin or Eph receptor genes alone has not led to an aberrant phenotype with regard to trunk neural crest cell migration (20). This indicates that the cell surface-bound ephrins and the extracellular matrix-embedded versicans V0 and V1 may together direct the migration of neural crest stem and progenitor cells through the rostral sclerotome. Thus, future investigations will likely have to rely on complex mouse models carrying combinations of multiple constitutively and/or conditionally inactivated genes to elucidate how versicans, ephrins, and the other putative migration inhibitors (summarized in Fig. 9) join their functions in vivo to regulate, in a concerted action, the highly precise migration patterns of the various neural crest cell subpopulations.