Hermitian formulation of multiple-scattering-induced topological phases in metamaterial crystals. In this article we develop an analog of the SSH model in tight-binding chains of resonators, together with an innovative Hermitian matrix formulation describing the topological phases induced by multiple scattering at subwavelength scales in one-dimensional structured, locally resonant metamaterial crystals. We start from a set of coupled-dipole equations capturing the nature of the wave-matter interactions, i.e., the hybridization between locally dispersive resonances and a continuum as well as infinitely long-range multiple-scattering coupling, to analytically derive a matrix operator ${H}_{\mathrm{MS}}$, which is found to be Hermitian when evaluated on propagative bands. This new operator directly shows how the composition, structure, scattering resonance, and Bloch periodicity together set the macroscopic properties of the chain, in particular its topology. We analytically confirm the existence of a structure-based topological transition in chiral-symmetric biperiodic metamaterial crystal chains, characterized by a winding number. We further demonstrate that chiral symmetry breaking in metamaterial crystal chains prevents the definition of a proper topological invariant. We finally confirm numerically, in the microwave domain, the existence of the topological transition in a chiral-symmetric metamaterial crystal chain through the study of topological interface modes and their robustness to disorder.
// createConditionsRoles creates Conditions and Roles in the database and
// returns their ent references.
func (pm *sqlPolicyMgr) createConditionsRoles(roles []*redtape.Role, conditions redtape.Conditions, ctx context.Context) ([]*ent.Roles, []*ent.Conditions, error) {
	entRoles := []*ent.Roles{}
	entConds := []*ent.Conditions{}

	for _, role := range roles {
		r, err := pm.client.Roles.Create().
			SetID(role.ID).
			SetName(role.Name).
			SetDescription(role.Description).
			Save(ctx)
		if err != nil {
			return nil, nil, err
		}
		entRoles = append(entRoles, r)
	}

	for name, cond := range conditions {
		typ, val := getTypeAndVal(cond)
		opts := map[string]interface{}{
			"value": val,
		}

		c, err := pm.client.Conditions.Create().
			SetName(name).
			SetType(typ).
			SetOptions(opts).
			Save(ctx)
		if err != nil {
			return nil, nil, err
		}
		entConds = append(entConds, c)
	}

	return entRoles, entConds, nil
}
/**
 * Register a GBean instance.
 *
 * @param gbeanInstance the GBean to register
 * @throws GBeanAlreadyExistsException if there is already a GBean registered with the instance's name
 */
public synchronized void register(GBeanInstance gbeanInstance) throws GBeanAlreadyExistsException {
    ObjectName name = normalizeObjectName(gbeanInstance.getObjectNameObject());
    if (objectNameRegistry.containsKey(name)) {
        throw new GBeanAlreadyExistsException("Cannot register GBean with abstract name: " + gbeanInstance.getAbstractName() +
                ", GBean with abstract name: " + objectNameRegistry.get(name).getAbstractName() +
                " already registered under ObjectName: " + name);
    }
    objectNameRegistry.put(name, gbeanInstance);
    infoRegistry.put(gbeanInstance.getAbstractName(), gbeanInstance);
    gbeanInstance.setInstanceRegistry(this);
}
def sentence_to_vec(data):
    # Attach a TF-IDF-weighted average word vector to each document in `data`.
    # Relies on module-level gensim objects: `c_dictionary`, `tfidfmodel`,
    # and the word2vec `model`.
    for doc in data:
        word_vectors = []
        for token in doc["text"]:
            bow = c_dictionary.doc2bow([token])
            if not bow:
                continue  # token not in the dictionary
            weighted = tfidfmodel[bow]
            if weighted:
                # weighted[0] is a (term_id, tfidf_weight) pair
                word_vectors.append(model.wv[token] * weighted[0][1])
        if word_vectors:
            doc["vector"] = average(word_vectors, axis=0)
        else:
            doc["vector"] = 300 * [0]  # zero vector matching the embedding size
    return data
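The function above leans on gensim objects (`c_dictionary`, `tfidfmodel`, `model`) that are defined elsewhere. As a self-contained illustration of the same weighting idea, here is a minimal sketch where toy, hypothetical IDF weights and embeddings stand in for those objects:

```python
# Toy stand-ins for the gensim artifacts used by sentence_to_vec:
# a vocabulary with per-term IDF weights and a tiny embedding table.
idf = {"cat": 2.0, "sat": 1.0, "mat": 2.0}
embeddings = {"cat": [1.0, 0.0], "sat": [0.0, 1.0], "mat": [1.0, 1.0]}

def weighted_sentence_vector(tokens, dim=2):
    """Average of IDF-weighted word vectors; zero vector if nothing matches."""
    vectors = []
    for tok in tokens:
        if tok in idf and tok in embeddings:
            # Weight the word vector by the term's IDF, as the snippet
            # does with TF-IDF weights from gensim.
            vectors.append([idf[tok] * v for v in embeddings[tok]])
    if not vectors:
        return [0.0] * dim
    # Column-wise mean of the weighted vectors.
    return [sum(col) / len(vectors) for col in zip(*vectors)]
```

Out-of-vocabulary tokens are skipped, and an all-unknown sentence falls back to the zero vector, mirroring the `300*[0]` branch above.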
#include "ConsolePrinter.h"
#include <iostream>

int main() {
    std::cout << "main() called" << std::endl;

    auto allComponents = ConsolePrintersRegistry::constructAll();
    for (auto& component : allComponents) {
        component->print();
    }
    return 0;
}
/**
 * Provides the means to set or reset the model to a default state.
 */
public void initDefault() {
    setName("Sample Document");
    setWidth(500);
    setHeight(500);
}
GOVERNMENTS often expel foreigners who enrage them. North Korea offers a worse fate: not being allowed to leave. On March 7th authorities in Pyongyang, the capital, said that 11 Malaysian citizens living in North Korea would be prevented from flying home until the two countries had resolved their differences over the murder of Kim Jong Nam—the half-brother of Kim Jong Un, the North’s dictator. Kim Jong Nam was assassinated last month at Kuala Lumpur’s main airport using VX, a nerve agent renounced by nearly all governments except North Korea’s. The North later released two of the hostages, but continued to hold the other nine. Najib Razak, Malaysia’s prime minister, condemned North Korea’s decision to detain its citizens as “abhorrent”. He announced that North Koreans in Malaysia—of whom there are perhaps as many as 1,000, many doing dirty jobs such as mining—would in turn be prevented from leaving until the regime backed down. Malaysian authorities are watching who enters and leaves North Korea’s embassy in Kuala Lumpur. The chief of police believes that at least two North Koreans wanted for questioning about Kim Jong Nam’s murder are hiding inside; he has said that his men will stand guard for “five years” if it takes that long for them to come out. The stand-off caps a week of diplomatic drama. On March 6th Malaysia kicked out the North Korean ambassador, Kang Chol, who denies that North Korean spies were responsible for the murder or that the victim was Kim Jong Nam; he accused Malaysia of cooking up the story with America and South Korea to blacken the North’s reputation. The North Korean government formally expelled Malaysia’s ambassador the same day, though by then his bosses had already called him back to Kuala Lumpur. In a further display of recalcitrance, North Korea tested four missiles simultaneously on March 6th, in defiance of UN sanctions.
In response to the North’s frequent tests, America and South Korea are accelerating the deployment in South Korea of THAAD, an American anti-missile system. That, in turn, has riled China, which fears THAAD could render its missiles less potent, too. Meanwhile, on March 8th, a previously unknown outfit called Cheollima Civil Defence posted a video it said was of Kim Han Sol, the son of Kim Jong Nam. It claimed to have responded to a request to “extract and protect” him from his home in Macau, along with his mother and sister. The group said it had received help from China, America and the Netherlands. Whether Kim Han Sol will become a vocal critic of the regime that murdered his father, or choose to vanish from sight, remains unclear.
def moment(self, x):
    # Internal bending moment at position x along the segment: end moment M1,
    # end shear V1, and a distributed load varying linearly from w1 at x=0
    # to w2 at x=L.
    V1 = self.V1
    M1 = self.M1
    w1 = self.w1
    w2 = self.w2
    L = self.Length()
    return M1 + V1*x + w1*x**2/2 + x**3*(-w1 + w2)/(6*L)
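As a quick sanity check of the expression above, a standalone sketch (with made-up segment values, not taken from the original class) shows the formula reducing to M1 + V1*x when there is no distributed load:

```python
def moment(x, V1, M1, w1, w2, L):
    """Internal moment from end moment M1, end shear V1, and a distributed
    load varying linearly from w1 to w2 over a segment of length L."""
    return M1 + V1 * x + w1 * x**2 / 2 + x**3 * (-w1 + w2) / (6 * L)

# With w1 = w2 = 0 the last two terms vanish and the moment is M1 + V1*x.
print(moment(2.0, V1=5.0, M1=10.0, w1=0.0, w2=0.0, L=4.0))  # 20.0
```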
import React, { SVGProps } from 'react';
import generateIcon from '../../generateIcon';

const BrandsDribbbleSquare = (props: SVGProps<SVGSVGElement>) => {
  return (
    <svg viewBox="0 0 448 512" width="1em" height="1em" {...props}>
      <path d="M90.2 228.2c8.9-42.4 37.4-77.7 75.7-95.7 3.6 4.9 28 38.8 50.7 79-64 17-120.3 16.8-126.4 16.7zM314.6 154c-33.6-29.8-79.3-41.1-122.6-30.6 3.8 5.1 28.6 38.9 51 80 48.6-18.3 69.1-45.9 71.6-49.4zM140.1 364c40.5 31.6 93.3 36.7 137.3 18-2-12-10-53.8-29.2-103.6-55.1 18.8-93.8 56.4-108.1 85.6zm98.8-108.2c-3.4-7.8-7.2-15.5-11.1-23.2C159.6 253 93.4 252.2 87.4 252c0 1.4-.1 2.8-.1 4.2 0 35.1 13.3 67.1 35.1 91.4 22.2-37.9 67.1-77.9 116.5-91.8zm34.9 16.3c17.9 49.1 25.1 89.1 26.5 97.4 30.7-20.7 52.5-53.6 58.6-91.6-4.6-1.5-42.3-12.7-85.1-5.8zm-20.3-48.4c4.8 9.8 8.3 17.8 12 26.8 45.5-5.7 90.7 3.4 95.2 4.4-.3-32.3-11.8-61.9-30.9-85.1-2.9 3.9-25.8 33.2-76.3 53.9zM448 80v352c0 26.5-21.5 48-48 48H48c-26.5 0-48-21.5-48-48V80c0-26.5 21.5-48 48-48h352c26.5 0 48 21.5 48 48zm-64 176c0-88.2-71.8-160-160-160S64 167.8 64 256s71.8 160 160 160 160-71.8 160-160z" />
    </svg>
  );
};

export default generateIcon(BrandsDribbbleSquare);
package wxm

// GetUnlimitedParam https://developers.weixin.qq.com/miniprogram/dev/api-backend/open-api/qr-code/wxacode.getUnlimited.html
type GetUnlimitedParam struct {
	Scene     string     `json:"scene"`
	Page      string     `json:"page"`
	Width     int        `json:"width,omitempty"`
	AutoColor bool       `json:"auto_color"`
	LineColor *LineColor `json:"line_color,omitempty"`
	IsHyaline bool       `json:"is_hyaline"`
}

type LineColor struct {
	R int `json:"r"`
	G int `json:"g"`
	B int `json:"b"`
}

type GetUnlimitedRsp struct {
	ErrCode ErrCode `json:"errcode"`
	ErrMsg  string  `json:"errmsg"`
	Data    []byte  `json:"data"`
}
/*
Licensed Materials - Property of IBM
License: BSD 3-Clause
5747-C31, 5747-C32
© Copyright IBM Corp. 2016, 2017 All Rights Reserved
US Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corp.
*/
#ifndef RDB2_UTILS_H
#define RDB2_UTILS_H

#include <Rcpp.h>

namespace rdb2 {
void R_msg(const std::string& txt);
}

#endif
def destroy(self):
    c_destroy_session = self.get_libgmt_func(
        "GMT_Destroy_Session", argtypes=[ctp.c_void_p], restype=ctp.c_int
    )
    status = c_destroy_session(self.session_pointer)
    if status:
        raise GMTCLibError(
            f"Failed to destroy GMT API session:\n{self._error_message}"
        )
    self.session_pointer = None
//[email protected] //http://jfi.uchicago.edu/~tcaswell // //This program is free software; you can redistribute it and/or modify //it under the terms of the GNU General Public License as published by //the Free Software Foundation; either version 3 of the License, or (at //your option) any later version. // //This program is distributed in the hope that it will be useful, but //WITHOUT ANY WARRANTY; without even the implied warranty of //MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU //General Public License for more details. // //You should have received a copy of the GNU General Public License //along with this program; if not, see <http://www.gnu.org/licenses>. // //Additional permission under GNU GPL version 3 section 7 // //If you modify this Program, or any covered work, by linking or //combining it with IPP (or a modified version of that library), //containing parts covered by the terms of End User License Agreement //for the Intel(R) Software Development Products, the licensors of //this Program grant you additional permission to convey the resulting //work. #ifndef CORR_SOFQ #define CORR_SOFQ #include <vector> #include <string> #include <complex> #include "accumulator.h" namespace tracking { /** s(|q|) computation. This takes in a specific direction in reciprocal space and computes s(|q|) along that direction. If the assumption that the particles have no preferred direction holds, then this saves a lot of computation time. 
*/ class Accum_sofq : public Accumulator { public: // basic inherited stuff void add_particle(const particle *) ; void out_to_wrapper(utilities::Generic_wrapper &, const utilities::Md_store & md_store ) const ; // special stuff /** constructor,will gain arguments @param[in] q_range Pair with the min and max \f$|\vec{q}|\f$ @param[in] q the direction \f$\hat{q}\f$ to take s(q) along, does not need to be a unit vector @param[in] n_bins the number of bins to use */ Accum_sofq(const utilities::Tuple<float,2>& q_range, utilities::Tuplef q, const int n_bins); ~Accum_sofq(); /** Returns an array with the magnitudes of the complex s(q) values @param[out] out vector of \f$|s(q)|^2|\f$. Vector will be re-sized if needed */ void get_magnitude_sqr(std::vector<float>& out )const; /** Plots to screen using gnuplot */ void display() const; /** Fills a vector with the q values used. Out is cleared and resized as needed. @param[out] out vector of q values */ void get_q_vec(std::vector<float> &out)const; int get_n_bins() const {return n_bins_;} private: /** number of bins */ const unsigned int n_bins_; /** range */ const utilities::Tuple<float,2> q_range_; /** Spacing of evaluation points in reciprocal space */ const float q_step_; /** direction in reciprocal space to evaluate along. */ const utilities::Tuplef q_; /** vector to hold the values */ std::vector<std::complex<float> > s_of_q_; /** count of the number of particles added */ int parts_added_; /** the units of q */ std::string units_; /** \f$i\f$ */ const static std::complex<float> i_; /** \f$\pi\f$ */ const static float pi_; }; } #endif
def detect(self, image):
    confidence = 0
    if self.__detect_flag > 0:
        self.__points, self.__detect_flag, confidence = \
            self.__detector.ffp_detect(image)
    else:
        self.__points, self.__detect_flag, confidence = \
            self.__detector.ffp_track(image, self.__points)
    return confidence
/*
 * tuplestore_begin_xxx
 *
 * Initialize for a tuple store operation.
 */
static Tuplestorestate *
tuplestore_begin_common(int eflags, bool interXact, int maxKBytes)
{
    Tuplestorestate *state;

    state = (Tuplestorestate *) palloc0(sizeof(Tuplestorestate));

    state->status = TSS_INMEM;
    state->eflags = eflags;
    state->interXact = interXact;
    state->availMem = maxKBytes * 1024L;
    state->availMemMin = state->availMem;
    state->allowedMem = state->availMem;
    state->myfile = NULL;
    state->context = CurrentMemoryContext;
    state->resowner = CurrentResourceOwner;

    state->memtupcount = 0;
    state->memtupsize = 1024;
    state->memtuples = (void **) palloc(state->memtupsize * sizeof(void *));

    state->pos.eof_reached = false;
    state->pos.current = 0;

    USEMEM(state, GetMemoryChunkSpace(state->memtuples));

    state->eof_reached = false;
    state->current = 0;

    return state;
}
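The bookkeeping above (a byte budget in availMem, an in-memory tuple array, and read-position state) can be illustrated with a small sketch. This is a loose Python analogy of the pattern, not the PostgreSQL implementation, and it omits the spill-to-file path that the real tuplestore falls back to:

```python
class TupleStore:
    """Sketch of the tuplestore bookkeeping: start in memory, track a
    memory budget, and keep a read cursor with an eof flag."""

    def __init__(self, max_kbytes=4):
        self.allowed_mem = max_kbytes * 1024  # byte budget, like allowedMem
        self.avail_mem = self.allowed_mem     # remaining budget, like availMem
        self.memtuples = []                   # in-memory tuple array
        self.eof_reached = False
        self.current = 0                      # read position

    def put(self, tup, size):
        # The real tuplestore switches to a temp file when the budget is
        # exhausted; this sketch just refuses.
        if size > self.avail_mem:
            raise MemoryError("budget exhausted; real tuplestore would spill")
        self.avail_mem -= size
        self.memtuples.append(tup)

    def get(self):
        if self.current >= len(self.memtuples):
            self.eof_reached = True
            return None
        tup = self.memtuples[self.current]
        self.current += 1
        return tup
```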
Pair-Testing Method to Complement Users' Behavior Records. In evacuation behavior analysis, records of behaviors such as straying depend on participants' memory, so some behavior records are lost. In this research, we propose a pair-testing method that performs subjective evaluation and interaction evaluation simultaneously, complementing records that would otherwise rely on the user's memory of their own behavior. We apply the proposed method to evacuation experiments. Using the proposed method, the evaluator supplemented 47.5% of the records by observing the participants; as a complementary effect of the pair-testing method, "straying" behaviors were captured in the evaluator's records.
import json

from dispatch.enums import DispatchEnum
from dispatch.incident.models import Incident
from dispatch.feedback.enums import FeedbackRating


class RatingFeedbackBlockId(DispatchEnum):
    anonymous = "anonymous_field"
    feedback = "feedback_field"
    rating = "rating_field"


class RatingFeedbackCallbackId(DispatchEnum):
    submit_form = "rating_feedback_submit_form"


def rating_feedback_view(incident: Incident, channel_id: str):
    """Builds all blocks required to rate and provide feedback about an incident."""
    modal_template = {
        "type": "modal",
        "title": {"type": "plain_text", "text": "Incident Feedback"},
        "blocks": [
            {
                "type": "context",
                "elements": [
                    {
                        "type": "plain_text",
                        "text": "Use this form to rate your experience and provide feedback about the incident.",
                    }
                ],
            },
        ],
        "close": {"type": "plain_text", "text": "Cancel"},
        "submit": {"type": "plain_text", "text": "Submit"},
        "callback_id": RatingFeedbackCallbackId.submit_form,
        "private_metadata": json.dumps({"incident_id": str(incident.id), "channel_id": channel_id}),
    }

    rating_picker_options = []
    for rating in FeedbackRating:
        rating_picker_options.append(
            {"text": {"type": "plain_text", "text": rating}, "value": rating}
        )

    rating_picker_block = {
        "type": "input",
        "block_id": RatingFeedbackBlockId.rating,
        "label": {"type": "plain_text", "text": "Rate your experience"},
        "element": {
            "type": "static_select",
            "placeholder": {"type": "plain_text", "text": "Select a rating"},
            "options": rating_picker_options,
        },
        "optional": False,
    }
    modal_template["blocks"].append(rating_picker_block)

    feedback_block = {
        "type": "input",
        "block_id": RatingFeedbackBlockId.feedback,
        "label": {"type": "plain_text", "text": "Give us feedback"},
        "element": {
            "type": "plain_text_input",
            "action_id": RatingFeedbackBlockId.feedback,
            "placeholder": {
                "type": "plain_text",
                "text": "How would you describe your experience?",
            },
            "multiline": True,
        },
        "optional": False,
    }
    modal_template["blocks"].append(feedback_block)

    anonymous_checkbox_block = {
        "type": "input",
        "block_id": RatingFeedbackBlockId.anonymous,
        "label": {
            "type": "plain_text",
            "text": "Check the box if you wish to provide your feedback anonymously",
        },
        "element": {
            "type": "checkboxes",
            "action_id": RatingFeedbackBlockId.anonymous,
            "options": [
                {
                    "value": "anonymous",
                    "text": {"type": "plain_text", "text": "Anonymize my feedback"},
                },
            ],
        },
        "optional": True,
    }
    modal_template["blocks"].append(anonymous_checkbox_block)

    return modal_template
/** * Created by seckcoder on 14-11-26. */
VATICAN CITY (AP) -- The Vatican has registered one of its worst budget deficits in years, plunging back into the red with a €15 million ($19 million) deficit in 2011 after a brief respite of profit. The Vatican on Thursday blamed the poor outcome on high personnel and communications costs and adverse market conditions, particularly for its real estate holdings. Not even a €50 million gift to the pope from the Vatican bank and increased donations from dioceses and religious orders could offset the expenses and poor investment returns, the Vatican said in its annual financial report. The Vatican said it ran a €14.9 million deficit in 2011 after posting a surplus of €9.85 million in 2010. The 2010 surplus, however, was something of an anomaly. In 2009 the Vatican ran a deficit of €4.01 million, in 2008 the deficit was €0.9 million and in 2007 it was nearly €9.1 million. The Vatican city state, which mainly manages the Vatican Museums and is a separate and autonomous administration, managed a budget surplus of €21.8 million. That's largely due to a spike in revenue from the museums: More than five million people visited the Sistine Chapel and other works of art in the Vatican museums last year, bringing in €91.3 million in 2011 compared to €82.4 million a year earlier. And the Vatican could also cheer that donations from the faithful were up last year despite the global economic crisis: Donations from Peter's Pence, which are donations from the faithful to support the pope's charity works, rose from $67.7 million in 2010 to $69.7 million last year. That money, however, doesn't figure into the Vatican's operating budget, though contributions from dioceses, religious orders and the Vatican bank do. The Vatican bank, known as the Institute for Religious Works, is able to make such a big contribution to the Vatican's budget each year based on investments.
Draining the Vatican's finances were the high costs for its main job of spreading the faith via Vatican media: Vatican Radio, the Vatican newspaper L'Osservatore Romano and Vatican television all have significant expenses and little or nothing in the way of revenue. Vatican Radio, however, is expected to save hundreds of thousands of euros a year in energy costs after it cut back short- and medium-wave transmissions to Europe and the United States from its main transmission point in Rome. The Rev. Federico Lombardi, who runs the Vatican radio and television departments and is also the Vatican spokesman, stressed that layoffs among the 2,832 Holy See personnel aren't in the offing, although he acknowledged that savings must come from elsewhere. During the meeting of cardinals who oversee the Vatican's finances this week, he said, there was a "request for prudence and savings." "I'm not an expert," he said of the deficit. "Yes, it's bigger than in past years, it's true." But he noted that the amounts on a global scale aren't alarming. "Certainly they indicate a need to pay attention and see the criteria by which the Vatican's assets are administered." ___ Follow Nicole Winfield at www.twitter.com/nwinfield
package main

import (
	"flag"
	"fmt"
	"html"
	"log"
	"net/http"
	"strconv"
)

func main() {
	name := flag.String("name", "", "service name")
	port := flag.Int("port", 0, "port number")
	flag.Parse()

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		log.Printf("Method: %s", r.Method)
		log.Printf("Protocol: %s", r.Proto)
		log.Printf("Headers: %v", r.Header)
		fmt.Fprintf(w, "Hello from %s, resource: %q\n", *name, html.EscapeString(r.URL.Path))
	})

	log.Printf("Service %s, running on port %d", *name, *port)
	log.Fatal(http.ListenAndServe(":"+strconv.Itoa(*port), nil))
}
8 June 2012 State Department on Brett McGurk-Gina Chon Emails: http://cryptome.org/2012/06/state-mcgurk-chon.htm 7 June 2012 Gina Chon and Brett McGurk at Senate Foreign Relations Committee, June 6, 2012. C-Span 5 June 2012 Ambassadorial Nominee Brett McGurk and WSJ Gina Chon Emails A sends: I rec'd this and thought you might post the details. McGurk is the Ambassadorial Nominee to represent the US in Iraq. His confirmation hearing is June 6. At the height of the war and during the SOFA negotiations while countless American troops and Iraqi civilians were being slaughtered, it appears that Brett McGurk was engaged in an affair with Wall Street Journal reporter Gina Chon. He bragged endlessly about senior-level dinners, the secret SOFA negotiations, and "self-healing" exercises to cure his blue balls. In a tribute to his professionalism and discretion, see emails: http://www.flickr.com/photos/80005642@N02/
def repo_create(request, repo_base):
    username = request.user.get_username()
    if username != repo_base:
        message = (
            'Error: Permission Denied. '
            '%s cannot create new repositories in %s.' % (username, repo_base))
        return HttpResponseForbidden(message)

    if request.method == 'POST':
        repo = request.POST['repo']
        with DataHubManager(user=username, repo_base=repo_base) as manager:
            manager.create_repo(repo)
        return HttpResponseRedirect(reverse('browser-user', args=(username,)))
    elif request.method == 'GET':
        res = {'repo_base': repo_base, 'login': username}
        res.update(csrf(request))
        return render_to_response("repo-create.html", res)
/// <reference types="Cypress" />

import {Login} from '../../page-objects/auth/auth-login.po';
import {ConfirmModal} from '../../page-objects/modal/confirm-modal.po';
import {UserOptionsMailToExpert} from '../../page-objects/user-options/user-options-mail-to-expert.po';

before(() => {
  cy.fixture('users/zkMember.json').as('zkMember');
});

describe('Mail to Expert', () => {
  beforeEach(function () {
    cy.visit('/login');
    Login.loginAs(this.zkMember);
  });

  it('should send mail to expert', () => {
    cy.deleteAllMails();
    UserOptionsMailToExpert.openSendMailToExpertModal()
      .addMessageToSupport('Wsparcie zdalne', 'Test Message')
      .sendMessageToSupport();
    ConfirmModal.accept();
    UserOptionsMailToExpert.confirmationShouldToAppear();
    cy.getMailBySubject('Prośba');
    cy.getMailBySubject('Potwierdzenie prośby');
  });
});
/*
 * Called by dumper via archiver from within a data dump routine.
 * We substitute this for _WriteData while emitting a BLOB.
 */
static size_t _WriteBlobData(ArchiveHandle* AH, const void* data, size_t dLen)
{
    if (dLen > 0) {
        PQExpBuffer buf = createPQExpBuffer();

        appendByteaLiteralAHX(buf, (const unsigned char*)data, dLen, AH);
        ahprintf(AH, "SELECT pg_catalog.lowrite(0, %s);\n", buf->data);

        destroyPQExpBuffer(buf);
    }
    return dLen;
}
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import unittest
import pytest
import asyncio
import six
from base64 import (
    b64decode,
)
from json import (
    loads,
    dumps,
)
from cryptography.hazmat import backends
from cryptography.hazmat.primitives.ciphers import Cipher
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.ciphers.modes import CBC
from cryptography.hazmat.primitives.padding import PKCS7
from azure.core.exceptions import HttpResponseError, ResourceExistsError
from devtools_testutils import ResourceGroupPreparer, StorageAccountPreparer
from multidict import CIMultiDict, CIMultiDictProxy
from azure.storage.queue._shared.encryption import (
    _ERROR_OBJECT_INVALID,
    _WrappedContentKey,
    _EncryptionAgent,
    _EncryptionData,
)
from azure.core.pipeline.transport import AioHttpTransport
from azure.storage.queue import (
    VERSION,
    BinaryBase64EncodePolicy,
    BinaryBase64DecodePolicy,
)
from azure.storage.queue.aio import (
    QueueServiceClient,
    QueueClient
)
from encryption_test_helper import (
    KeyWrapper,
    KeyResolver,
    RSAKeyWrapper,
)
from devtools_testutils.storage.aio import AsyncStorageTestCase
from settings.testcase import QueuePreparer

# ------------------------------------------------------------------------------
TEST_QUEUE_PREFIX = 'encryptionqueue'
# ------------------------------------------------------------------------------


def _decode_base64_to_bytes(data):
    if isinstance(data, six.text_type):
        data = data.encode('utf-8')
    return b64decode(data)


class AiohttpTestTransport(AioHttpTransport):
    """Workaround to vcrpy bug:
    https://github.com/kevin1024/vcrpy/pull/461
    """
    async def send(self, request, **config):
        response = await super(AiohttpTestTransport, self).send(request, **config)
        if not isinstance(response.headers, CIMultiDictProxy):
            response.headers = CIMultiDictProxy(CIMultiDict(response.internal_response.headers))
            response.content_type = response.headers.get("content-type")
        return response


class StorageQueueEncryptionTestAsync(AsyncStorageTestCase):
    # --Helpers-----------------------------------------------------------------
    def _get_queue_reference(self, qsc, prefix=TEST_QUEUE_PREFIX, **kwargs):
        queue_name = self.get_resource_name(prefix)
        queue = qsc.get_queue_client(queue_name, **kwargs)
        return queue

    async def _create_queue(self, qsc, prefix=TEST_QUEUE_PREFIX, **kwargs):
        queue = self._get_queue_reference(qsc, prefix, **kwargs)
        try:
            created = await queue.create_queue()
        except ResourceExistsError:
            pass
        return queue
    # --------------------------------------------------------------------------

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_get_messages_encrypted_kek(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        qsc.key_encryption_key = KeyWrapper('key1')
        queue = await self._create_queue(qsc)
        await queue.send_message(u'encrypted_message_2')

        # Act
        li = None
        async for m in queue.receive_messages():
            li = m

        # Assert
        self.assertEqual(li.content, u'encrypted_message_2')

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_get_messages_encrypted_resolver(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        qsc.key_encryption_key = KeyWrapper('key1')
        queue = await self._create_queue(qsc)
        await queue.send_message(u'encrypted_message_2')
        key_resolver = KeyResolver()
        key_resolver.put_key(qsc.key_encryption_key)
        queue.key_resolver_function = key_resolver.resolve_key
        queue.key_encryption_key = None  # Ensure that the resolver is used

        # Act
        li = None
        async for m in queue.receive_messages():
            li = m

        # Assert
        self.assertEqual(li.content, u'encrypted_message_2')

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_peek_messages_encrypted_kek(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        qsc.key_encryption_key = KeyWrapper('key1')
        queue = await self._create_queue(qsc)
        await queue.send_message(u'encrypted_message_3')

        # Act
        li = await queue.peek_messages()

        # Assert
        self.assertEqual(li[0].content, u'encrypted_message_3')

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_peek_messages_encrypted_resolver(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        qsc.key_encryption_key = KeyWrapper('key1')
        queue = await self._create_queue(qsc)
        await queue.send_message(u'encrypted_message_4')
        key_resolver = KeyResolver()
        key_resolver.put_key(qsc.key_encryption_key)
        queue.key_resolver_function = key_resolver.resolve_key
        queue.key_encryption_key = None  # Ensure that the resolver is used

        # Act
        li = await queue.peek_messages()

        # Assert
        self.assertEqual(li[0].content, u'encrypted_message_4')

    @pytest.mark.live_test_only
    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_peek_messages_encrypted_kek_RSA(self, storage_account_name, storage_account_key):
        # We can only generate random RSA keys, so this must be run live or
        # the playback test will fail due to a change in kek values.

        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        qsc.key_encryption_key = RSAKeyWrapper('key2')
        queue = await self._create_queue(qsc)
        await queue.send_message(u'encrypted_message_3')

        # Act
        li = await queue.peek_messages()

        # Assert
        self.assertEqual(li[0].content, u'encrypted_message_3')

    @pytest.mark.live_test_only
    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_update_encrypted_message(self, storage_account_name, storage_account_key):
        # TODO: Recording doesn't work
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc)
        queue.key_encryption_key = KeyWrapper('key1')
        await queue.send_message(u'Update Me')
        messages = []
        async for m in queue.receive_messages():
            messages.append(m)
        list_result1 = messages[0]
        list_result1.content = u'Updated'

        # Act
        message = await queue.update_message(list_result1)
        async for m in queue.receive_messages():
            messages.append(m)
        list_result2 = messages[0]

        # Assert
        self.assertEqual(u'Updated', list_result2.content)

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_update_encrypted_binary_message(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc,
                                         message_encode_policy=BinaryBase64EncodePolicy(),
                                         message_decode_policy=BinaryBase64DecodePolicy())
        queue.key_encryption_key = KeyWrapper('key1')
        binary_message = self.get_random_bytes(100)
        await queue.send_message(binary_message)
        messages = []
        async for m in queue.receive_messages():
            messages.append(m)
        list_result1 = messages[0]

        # Act
        binary_message = self.get_random_bytes(100)
        list_result1.content = binary_message
        await queue.update_message(list_result1)
        async for m in queue.receive_messages():
            messages.append(m)
        list_result2 = messages[0]

        # Assert
        self.assertEqual(binary_message, list_result2.content)

    @pytest.mark.live_test_only
    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_update_encrypted_raw_text_message(self, storage_account_name, storage_account_key):
        # TODO: Recording doesn't work
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc, message_encode_policy=None,
                                         message_decode_policy=None)
        queue.key_encryption_key = KeyWrapper('key1')
        raw_text = u'Update Me'
        await queue.send_message(raw_text)
        messages = []
        async for m in queue.receive_messages():
            messages.append(m)
        list_result1 = messages[0]

        # Act
        raw_text = u'Updated'
        list_result1.content = raw_text
        async for m in queue.receive_messages():
            messages.append(m)
        list_result2 = messages[0]

        # Assert
        self.assertEqual(raw_text, list_result2.content)

    @pytest.mark.live_test_only
    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_update_encrypted_json_message(self, storage_account_name, storage_account_key):
        # TODO: Recording doesn't work
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc, message_encode_policy=None,
                                         message_decode_policy=None)
        queue.key_encryption_key = KeyWrapper('key1')
        message_dict = {'val1': 1, 'val2': '2'}
        json_text = dumps(message_dict)
        await queue.send_message(json_text)
        messages = []
        async for m in queue.receive_messages():
            messages.append(m)
        list_result1 = messages[0]

        # Act
        message_dict['val1'] = 0
        message_dict['val2'] = 'updated'
        json_text = dumps(message_dict)
        list_result1.content = json_text
        await queue.update_message(list_result1)
        async for m in queue.receive_messages():
            messages.append(m)
        list_result2 = messages[0]

        # Assert
        self.assertEqual(message_dict, loads(list_result2.content))

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_invalid_value_kek_wrap(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc)
        queue.key_encryption_key = KeyWrapper('key1')
        queue.key_encryption_key.get_kid = None

        with self.assertRaises(AttributeError) as e:
            await queue.send_message(u'message')

        self.assertEqual(str(e.exception), _ERROR_OBJECT_INVALID.format('key encryption key', 'get_kid'))

        queue.key_encryption_key = KeyWrapper('key1')
        queue.key_encryption_key.get_kid = None
        with self.assertRaises(AttributeError):
            await queue.send_message(u'message')

        queue.key_encryption_key = KeyWrapper('key1')
        queue.key_encryption_key.wrap_key = None
        with self.assertRaises(AttributeError):
            await queue.send_message(u'message')

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_missing_attribute_kek_wrap(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc)
        valid_key = KeyWrapper('key1')

        # Act
        invalid_key_1 = lambda: None  # functions are objects, so this effectively creates an empty object
        invalid_key_1.get_key_wrap_algorithm = valid_key.get_key_wrap_algorithm
        invalid_key_1.get_kid = valid_key.get_kid
        # No attribute wrap_key
        queue.key_encryption_key = invalid_key_1
        with self.assertRaises(AttributeError):
            await queue.send_message(u'message')

        invalid_key_2 = lambda: None  # functions are objects, so this effectively creates an empty object
        invalid_key_2.wrap_key = valid_key.wrap_key
        invalid_key_2.get_kid = valid_key.get_kid
        # No attribute get_key_wrap_algorithm
        queue.key_encryption_key = invalid_key_2
        with self.assertRaises(AttributeError):
            await queue.send_message(u'message')

        invalid_key_3 = lambda: None  # functions are objects, so this effectively creates an empty object
        invalid_key_3.get_key_wrap_algorithm = valid_key.get_key_wrap_algorithm
        invalid_key_3.wrap_key = valid_key.wrap_key
        # No attribute get_kid
        queue.key_encryption_key = invalid_key_3
        with self.assertRaises(AttributeError):
            await queue.send_message(u'message')

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_invalid_value_kek_unwrap(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc)
        queue.key_encryption_key = KeyWrapper('key1')
        await queue.send_message(u'message')

        # Act
        queue.key_encryption_key.unwrap_key = None
        with self.assertRaises(HttpResponseError):
            await queue.peek_messages()

        queue.key_encryption_key.get_kid = None
        with self.assertRaises(HttpResponseError):
            await queue.peek_messages()

    @QueuePreparer()
    @AsyncStorageTestCase.await_prepared_test
    async def test_missing_attribute_kek_unrwap(self, storage_account_name, storage_account_key):
        # Arrange
        qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"),
                                 storage_account_key, transport=AiohttpTestTransport())
        queue = await self._create_queue(qsc)
        queue.key_encryption_key = KeyWrapper('key1')
        await queue.send_message(u'message')

        # Act
        valid_key = KeyWrapper('key1')
        invalid_key_1 = lambda: None  # functions are objects, so this effectively creates an empty object
        invalid_key_1.unwrap_key = valid_key.unwrap_key
        # No attribute get_kid
        queue.key_encryption_key = invalid_key_1
        with self.assertRaises(HttpResponseError) as e:
            await queue.peek_messages()

        self.assertEqual(str(e.exception), "Decryption failed.")

        invalid_key_2 = lambda: None  # functions are objects, so this effectively creates an empty object
        invalid_key_2.get_kid = valid_key.get_kid
        # No attribute unwrap_key
        queue.key_encryption_key = invalid_key_2
        with self.assertRaises(HttpResponseError):
            await queue.peek_messages()

    @QueuePreparer()
@AsyncStorageTestCase.await_prepared_test async def test_validate_encryption(self, storage_account_name, storage_account_key): qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"), storage_account_key, transport=AiohttpTestTransport()) # Arrange queue = await self._create_queue(qsc) kek = KeyWrapper('key1') queue.key_encryption_key = kek await queue.send_message(u'message') # Act queue.key_encryption_key = None # Message will not be decrypted li = await queue.peek_messages() message = li[0].content message = loads(message) encryption_data = message['EncryptionData'] wrapped_content_key = encryption_data['WrappedContentKey'] wrapped_content_key = _WrappedContentKey( wrapped_content_key['Algorithm'], b64decode(wrapped_content_key['EncryptedKey'].encode(encoding='utf-8')), wrapped_content_key['KeyId']) encryption_agent = encryption_data['EncryptionAgent'] encryption_agent = _EncryptionAgent( encryption_agent['EncryptionAlgorithm'], encryption_agent['Protocol']) encryption_data = _EncryptionData( b64decode(encryption_data['ContentEncryptionIV'].encode(encoding='utf-8')), encryption_agent, wrapped_content_key, {'EncryptionLibrary': VERSION}) message = message['EncryptedMessageContents'] content_encryption_key = kek.unwrap_key( encryption_data.wrapped_content_key.encrypted_key, encryption_data.wrapped_content_key.algorithm) # Create decryption cipher backend = backends.default_backend() algorithm = AES(content_encryption_key) mode = CBC(encryption_data.content_encryption_IV) cipher = Cipher(algorithm, mode, backend) # decode and decrypt data decrypted_data = _decode_base64_to_bytes(message) decryptor = cipher.decryptor() decrypted_data = (decryptor.update(decrypted_data) + decryptor.finalize()) # unpad data unpadder = PKCS7(128).unpadder() decrypted_data = (unpadder.update(decrypted_data) + unpadder.finalize()) decrypted_data = decrypted_data.decode(encoding='utf-8') # Assert self.assertEqual(decrypted_data, u'message') @QueuePreparer() 
@AsyncStorageTestCase.await_prepared_test async def test_put_with_strict_mode(self, storage_account_name, storage_account_key): qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"), storage_account_key, transport=AiohttpTestTransport()) # Arrange queue = await self._create_queue(qsc) kek = KeyWrapper('key1') queue.key_encryption_key = kek queue.require_encryption = True await queue.send_message(u'message') queue.key_encryption_key = None # Assert with self.assertRaises(ValueError) as e: await queue.send_message(u'message') self.assertEqual(str(e.exception), "Encryption required but no key was provided.") @QueuePreparer() @AsyncStorageTestCase.await_prepared_test async def test_get_with_strict_mode(self, storage_account_name, storage_account_key): qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"), storage_account_key, transport=AiohttpTestTransport()) # Arrange queue = await self._create_queue(qsc) await queue.send_message(u'message') queue.require_encryption = True queue.key_encryption_key = KeyWrapper('key1') with self.assertRaises(ValueError) as e: messages = [] async for m in queue.receive_messages(): messages.append(m) _ = messages[0] self.assertEqual(str(e.exception), 'Message was not encrypted.') @QueuePreparer() @AsyncStorageTestCase.await_prepared_test async def test_encryption_add_encrypted_64k_message(self, storage_account_name, storage_account_key): qsc = QueueServiceClient(self.account_url(storage_account_name, "queue"), storage_account_key, transport=AiohttpTestTransport()) # Arrange queue = await self._create_queue(qsc) message = u'a' * 1024 * 64 # Act await queue.send_message(message) # Assert queue.key_encryption_key = KeyWrapper('key1') with self.assertRaises(HttpResponseError): await queue.send_message(message) @QueuePreparer() @AsyncStorageTestCase.await_prepared_test async def test_encryption_nonmatching_kid(self, storage_account_name, storage_account_key): qsc = 
QueueServiceClient(self.account_url(storage_account_name, "queue"), storage_account_key, transport=AiohttpTestTransport()) # Arrange queue = await self._create_queue(qsc) queue.key_encryption_key = KeyWrapper('key1') await queue.send_message(u'message') # Act queue.key_encryption_key.kid = 'Invalid' # Assert with self.assertRaises(HttpResponseError) as e: messages = [] async for m in queue.receive_messages(): messages.append(m) self.assertEqual(str(e.exception), "Decryption failed.") # ------------------------------------------------------------------------------ if __name__ == '__main__': unittest.main()
Accelerating Select where and Select Join Queries on a GPU This paper presents implementations of a few selected SQL operations using the CUDA programming framework on the GPU platform. Nowadays, GPU parallel architectures give a high speed-up on certain problems, so the number of non-graphical problems that can be run and sped up on the GPU keeps increasing. In particular, there has been a lot of research in data mining on GPUs, which in many cases proves the advantage of offloading processing from the CPU to the GPU. At the beginning of our project we chose the SELECT WHERE and SELECT JOIN instructions as the most common operations used in databases. We parallelized these SQL operations using three main mechanisms in CUDA: the thread group hierarchy, shared memory, and barrier synchronization. Our results show that the implemented highly parallel SELECT WHERE and SELECT JOIN operations on the GPU platform can be significantly faster than their sequential counterparts in a database system run on the CPU.
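The data-parallel structure the abstract describes can be illustrated with a hypothetical CPU sketch (this is not the paper's CUDA code; the function names and shapes are illustrative assumptions). A SELECT WHERE evaluates its predicate on every row independently — one GPU thread per row — and then compacts the matches, while a nested-loop SELECT JOIN tests every key pair, with each thread block handling a tile of pairs:

```python
# Hypothetical sketch of the two operations (not the paper's CUDA code).
def select_where(rows, predicate):
    # Each predicate(row) call is independent -> trivially parallel on a GPU,
    # followed by a stream-compaction step to gather the matching rows.
    return [row for row in rows if predicate(row)]

def select_join(table_a, table_b, key_a, key_b):
    # O(|A|*|B|) pair tests; on a GPU each thread block would evaluate
    # one tile of the (row_a, row_b) pair grid, using shared memory for
    # its tile and barrier synchronization between tile loads.
    return [(a, b) for a in table_a for b in table_b if a[key_a] == b[key_b]]
```

On the GPU, the compaction step after the per-row predicate is what makes shared memory and barrier synchronization necessary — threads must agree on output positions before writing.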
/**
 * Adds the given parameter to the list of parameters, iff the value is not null or empty.
 *
 * @param name   The name of the parameter to add.
 * @param value  The value of the parameter to add.
 * @param params The list of parameters to add the parameter to.
 */
protected void addParameter(String name, String value, List<NameValuePair> params) {
    if (null == name) {
        throw new IllegalArgumentException("name must not be null.");
    }
    if (null == params) {
        throw new IllegalArgumentException("params must not be null.");
    }
    if (null != value && !value.isEmpty()) {
        params.add(new BasicNameValuePair(name, value));
    }
}
#include "../../../include/Core/ExtLibs.h" #include "../../../include/Core/Component.h" #include "../../../include/Core/AbstractFrame.h" using namespace A2D; Component::Component() : m_forcedBounds(false), m_parent(NULL), m_pipeline(NULL), m_id(rand()), m_componentManager(NULL), m_focused(false), m_focusable(true), m_nextCompListener(NULL), m_prevCompListener(NULL), m_activeInterpolations(false), m_componentTreeValidationRequest(false), m_calculatedRowIndex(0), m_calculatedColumnIndex(0), m_previousCalculatedRowIndex(0), m_previousCalculatedColumnIndex(0), m_positionAnimationXY(NULL), m_eventQueue(NULL), m_depth(0), m_scrollTop(0), m_scrolling(false), m_scrollLeft(0), m_cachedAnimationPositionXY(Animator::COMPONENT_BOUNDS_XY, Easing::OUT_CIRC, 0, 0, 800, NULL, NULL, NULL) { m_styleSet.m_visibleRegion = &m_visibleRegion; m_styleSet.m_region = &m_region; m_styleSet.m_subRegion = &m_subRegion; m_styleSet.m_subBordersRegion = &m_subBordersRegion; m_styleSet.m_id = &m_id; m_styleSet.markBorderColorsAsDirty(); m_styleSet.markBackgroundAsDirty(); } Component::~Component(){} void Component::paintComponentBorder(){} void Component::interpolate() { OrderedList<A2DINTERPOLATORFLOAT*>::Node<A2DINTERPOLATORFLOAT*> * node = m_interpolators._head(); double currentTime = nanotime__; while (node->value) { A2DINTERPOLATORFLOAT * interpolator = node->value; Callable * callable = interpolator->m_callable; float duration = SFLOAT((currentTime - interpolator->m_startTime) * 1000.0f); // Save the next node node = node->right; // Remove the node if (duration > interpolator->m_period) { // Force end switch (interpolator->m_mode) { case A2DINTERPOLATORFLOAT::Mode::FOUR_PARAMETERS:{ (this->*(*interpolator->m_interpolatable_d))(interpolator->m_start_a + interpolator->m_range_a, interpolator->m_start_b + interpolator->m_range_b, interpolator->m_start_c + interpolator->m_range_c, interpolator->m_start_d + interpolator->m_range_d); break; } case A2DINTERPOLATORFLOAT::Mode::THREE_PARAMETERS: { 
(this->*(*interpolator->m_interpolatable_c))(interpolator->m_start_a + interpolator->m_range_a, interpolator->m_start_b + interpolator->m_range_b, interpolator->m_start_c + interpolator->m_range_c); break; } case A2DINTERPOLATORFLOAT::Mode::TWO_PARAMETERS:{ (this->*(*interpolator->m_interpolatable_b))(interpolator->m_start_a + interpolator->m_range_a, interpolator->m_start_b + interpolator->m_range_b); break; } case A2DINTERPOLATORFLOAT::Mode::ONE_PARAMETER:{ (this->*(*interpolator->m_interpolatable))(interpolator->m_start_a + interpolator->m_range_a); break; } } // Execute callback if (callable) { callable->callback(interpolator->m_arg); } // Remove request Animator::stop(*this, &interpolator->m_removeTicket); } // OR Update the value else { float interpolated_a, interpolated_b, interpolated_c, interpolated_d, period = interpolator->m_period; TWEEN * tween = interpolator->m_tween; switch (interpolator->m_mode) { case A2DINTERPOLATORFLOAT::Mode::FOUR_PARAMETERS : interpolated_d = (*tween)(duration, interpolator->m_start_d, interpolator->m_range_d, period); case A2DINTERPOLATORFLOAT::Mode::THREE_PARAMETERS: interpolated_c = (*tween)(duration, interpolator->m_start_c, interpolator->m_range_c, period); case A2DINTERPOLATORFLOAT::Mode::TWO_PARAMETERS: interpolated_b = (*tween)(duration, interpolator->m_start_b, interpolator->m_range_b, period); case A2DINTERPOLATORFLOAT::Mode::ONE_PARAMETER: interpolated_a = (*tween)(duration, interpolator->m_start_a, interpolator->m_range_a, period); } switch (interpolator->m_mode) { case A2DINTERPOLATORFLOAT::Mode::FOUR_PARAMETERS:{ (this->*(*interpolator->m_interpolatable_d))(interpolated_a, interpolated_b, interpolated_c, interpolated_d); break; } case A2DINTERPOLATORFLOAT::Mode::THREE_PARAMETERS:{ (this->*(*interpolator->m_interpolatable_c))(interpolated_a, interpolated_b, interpolated_c); break; } case A2DINTERPOLATORFLOAT::Mode::TWO_PARAMETERS:{ (this->*(*interpolator->m_interpolatable_b))(interpolated_a, interpolated_b); break; 
} case A2DINTERPOLATORFLOAT::Mode::ONE_PARAMETER:{ (this->*(*interpolator->m_interpolatable))(interpolated_a); break; } } } } // Remaining interpolators? if (m_interpolators.size() == 0) { m_activeInterpolations = false; #ifdef A2D_DE__ SYSOUT_F("[Component] [ComponentId: 0x%X] Turning off interpolators.", m_id); #endif // A2D_DE__ } } Component& Component::getParent() { return *m_parent; } void Component::setId(int x_id) { m_id = x_id; } int Component::getDepth() { return m_depth; } AbstractFrame& Component::getFrame() { return *m_frame; } void Component::setGraphics(Graphics& xGraphics) { m_graphics = &xGraphics; } Graphics& Component::getGraphics() { return *m_graphics; } void Component::setDepth(int xDepth) { m_styleSet.m_depth = (A2D_MAX_Z_DEPTH - SFLOAT(m_depth = xDepth)) / A2D_MAX_Z_DEPTH; m_styleSet.markDepthAsDirty(); } void Component::setFrame(AbstractFrame& xFrame) { m_frame = &xFrame; } void Component::setComponentManager(ComponentManager& x_componentManager) { m_componentManager = &x_componentManager; } void Component::setParent(Component& xParent) { m_parent = &xParent; } void Component::setEventQueue(AbstractEventQueue& x_eventQueue) { m_eventQueue = &x_eventQueue; } Rect Component::getBounds() { return m_region; } Rect * Component::getBoundsAsPtr() { return &m_region; } Rect * Component::getVisibleRegion() { return &m_visibleRegion; } void Component::add(Component& xContainer) { m_children.push_back(&xContainer, NULL); } void Component::remove(Component& xContainer) { // FIXME Use remove_request m_children.remove(&xContainer); } void Component::invalidate() { m_validatedContents = false; } void Component::revalidate() { validate(); } void Component::validated() { m_validatedContents = true; } void Component::validate() { Rect& region = m_region; bool hasParent = m_parent != NULL; if (!m_styleSet.m_visible) { m_validatedContents = true; return; } if (!hasParent) { m_visibleRegion = m_calculatedRegion = { max__(0.0f, region.m_x), max__(0.0f, 
region.m_y), max__(0.0f, region.m_width), max__(0.0f, region.m_height) }; } else { // Create shifts m_calculatedRegion = { m_parent->m_calculatedRegion.m_x + region.m_x, m_parent->m_calculatedRegion.m_y + region.m_y, region.m_width, region.m_height }; A2DPIXELDISTANCESETUINT4& borderWidths = m_styleSet.m_borders.m_precalculatedBorderWidths; // Applying constraints m_visibleRegion = Math::intersect(m_parent->m_visibleRegion, m_calculatedRegion); m_subRegion = Math::subtract(m_visibleRegion, m_calculatedRegion); m_subBordersRegion = Math::subtract(m_parent->m_visibleRegion, { m_calculatedRegion.m_x - borderWidths.m_left, m_calculatedRegion.m_y - borderWidths.m_top, m_calculatedRegion.m_width, m_calculatedRegion.m_height }); if (m_visibleRegion.m_height != m_previousVisibleDimensions.m_height || m_visibleRegion.m_width != m_previousVisibleDimensions.m_width) { // FIXME Use SSE2 Acceleration m_previousVisibleDimensions.m_width = m_visibleRegion.m_width; m_previousVisibleDimensions.m_height = m_visibleRegion.m_height; // Request the validation of the components m_componentTreeValidationRequest = true; // Mark background as dirty m_styleSet.markBackgroundAsDirty(); #ifdef A2D_DE__ SYSOUT_F("[Component] [ComponentId: 0x%X] Requesting background update.", m_id); #endif // A2D_DE__ } } CascadingLayout::doLayout(*this); m_validatedContents = true; m_componentTreeValidationRequest = false; m_styleSet.markVisibleRegionAsDirty(); } void Component::forceBounds(bool xForce) { m_forcedBounds = xForce; } void Component::setSize(Style::Units xWidthUnits, float xWidth, Style::Units xHeightUnits, float xHeight) { A2DDISTANCESET2& size = m_styleSet.m_size; size.m_widthUnits = xWidthUnits; size.m_heightUnits = xHeightUnits; size.m_width = xWidth; size.m_height = xHeight; m_validatedContents = false; } void Component::setWidth(float x_width) { m_styleSet.m_size.m_width = x_width; m_parent->m_componentTreeValidationRequest = true; } void Component::setWidthUnits(Style::Units x_units) { 
m_styleSet.m_size.m_widthUnits = x_units; m_parent->m_componentTreeValidationRequest = true; } void Component::setHeight(float x_height) { m_styleSet.m_size.m_height = x_height; m_parent->m_componentTreeValidationRequest = true; } void Component::setHeightUnits(Style::Units x_units) { m_styleSet.m_size.m_heightUnits = x_units; m_parent->m_componentTreeValidationRequest = true; } void Component::setDisplay(Style::Display xDisplay) { m_styleSet.m_display = xDisplay; m_validatedContents = false; } void Component::setMargins(Style::Units xLeftUnits, float xLeft, Style::Units xTopUnits, float xTop, Style::Units xRightUnits, float xRight, Style::Units xBottomUnits, float xBottom) { A2DDISTANCESET4& margins = m_styleSet.m_margins; margins.m_leftUnits = xLeftUnits; margins.m_topUnits = xTopUnits; margins.m_rightUnits = xRightUnits; margins.m_bottomUnits = xBottomUnits; margins.m_left = xLeft; margins.m_top = xTop; margins.m_bottom = xBottom; margins.m_right = xRight; m_validatedContents = false; } void Component::setPositioning(Style::Units xLeftUnits, float xLeft, Style::Units xTopUnits, float xTop, Style::Units xRightUnits, float xRight, Style::Units xBottomUnits, float xBottom) { A2DDISTANCESET4& positioning = m_styleSet.m_positioning; positioning.m_leftUnits = xLeftUnits; positioning.m_topUnits = xTopUnits; positioning.m_rightUnits = xRightUnits; positioning.m_bottomUnits = xBottomUnits; positioning.m_left = xLeft; positioning.m_top = xTop; positioning.m_bottom = xBottom; positioning.m_right = xRight; m_validatedContents = false; } void Component::setPadding(Style::Units xLeftUnits, float xLeft, Style::Units xTopUnits, float xTop, Style::Units xRightUnits, float xRight, Style::Units xBottomUnits, float xBottom) { A2DDISTANCESET4& padding = m_styleSet.m_padding; padding.m_leftUnits = xLeftUnits; padding.m_topUnits = xTopUnits; padding.m_rightUnits = xRightUnits; padding.m_bottomUnits = xBottomUnits; padding.m_left = xLeft; padding.m_top = xTop; padding.m_bottom = 
xBottom; padding.m_right = xRight; m_validatedContents = false; } void Component::setPosition(Style::Position xPosition) { m_styleSet.m_position = xPosition; m_validatedContents = false; } STATUS Component::initialize() { m_graphics->resetDrawable(m_styleSet.m_drawable); return STATUS_OK; } void Component::paintComponent() { Graphics& graphics = *m_graphics; graphics.drawComponent(&m_pipeline, m_styleSet); } void Component::update() { Graphics& graphics = *m_graphics; // If Component is not visible return if (!m_styleSet.m_visible) { return; } // Interpolate the options if (m_activeInterpolations) { interpolate(); } // Update the visible region of the component if (!m_validatedContents) { validate(); } // Update the location and revalidate else if (m_componentTreeValidationRequest) { CascadingLayout::doLayout(*this); m_componentTreeValidationRequest = false; } // Set the graphics clip graphics.setClip(&m_visibleRegion, m_styleSet.m_depth); // Render the current component paintComponent(); // Force region graphics.setClip(&m_visibleRegion, m_styleSet.m_depth); // Render the currect component border paintComponentBorder(); } STATUS Component::requestFocus() { // Also it's broken, as aFrame is not initialized. 
if (m_focusable && !m_focused) { FocusEvent& focusRequest = *new FocusEvent(this, FocusEvent::FOCUS_GAINED); Toolkit::getSystemEventQueue(m_frame->id())->processFocusEvent(&focusRequest); } return STATUS_OK; } void Component::setFocusable(bool xFocusable) { m_focusable = xFocusable; } Rect * Component::getEventRegion() { return &m_visibleRegion; } STATUS Component::addMouseListener(MouseListener * xListener) { if (m_eventQueue) { // Add depth manually STATUS status = ComponentEventSource::addMouseListener(xListener); if (xListener) { m_eventQueue->addEventDepthTracker(this, abs__(m_depth)); } else { m_eventQueue->removeEventDepthTracker(this, abs__(m_depth + 1)); } return status; } return ComponentEventSource::addMouseListener(xListener); } STATUS Component::addMouseMotionListener(MouseMotionListener * xListener) { if (m_eventQueue) { // Add depth manually STATUS status = ComponentEventSource::addMouseMotionListener(xListener); if (xListener) { m_eventQueue->addEventDepthTracker(this, abs__(m_depth)); } else { m_eventQueue->removeEventDepthTracker(this, abs__(m_depth + 1)); } return status; } return ComponentEventSource::addMouseMotionListener(xListener); } STATUS Component::addFocusListener(FocusListener * xListener) { if (m_eventQueue) { // Add depth manually STATUS status = ComponentEventSource::addFocusListener(xListener); if (xListener) { m_eventQueue->addEventDepthTracker(this, abs__(m_depth)); } else { m_eventQueue->removeEventDepthTracker(this, abs__(m_depth + 1)); } return status; } return ComponentEventSource::addFocusListener(xListener); } STATUS Component::addActionListener(ActionListener * xListener) { if (m_eventQueue) { // Add depth manually STATUS hr = ComponentEventSource::addActionListener(xListener); if (xListener != NULL) { m_eventQueue->addEventDepthTracker(this, abs__(m_depth)); } else { m_eventQueue->removeEventDepthTracker(this, abs__(m_depth + 1)); } return hr; } return ComponentEventSource::addActionListener(xListener); } void 
Component::setBackgroundImage(wchar_t* x_src) { m_styleSet.m_drawable.setSource(x_src); m_graphics->bindDrawable(m_styleSet.m_drawable); m_styleSet.markBackgroundAsDirty(); } void Component::setBackgroundPaint(Paint& xOptPaint) { Paint::from(m_styleSet.m_backgroundPaint, xOptPaint); m_styleSet.markBackgroundAsDirty(); }; void Component::setBorderWidths(Style::Units xLeftUnits, float xLeft, Style::Units xTopUnits, float xTop, Style::Units xRightUnits, float xRight, Style::Units xBottomUnits, float xBottom) { A2DDISTANCESET4& bordersWidths = m_styleSet.m_borders.m_borderWidths; bordersWidths.m_leftUnits = xLeftUnits; bordersWidths.m_topUnits = xTopUnits; bordersWidths.m_rightUnits = xRightUnits; bordersWidths.m_bottomUnits = xBottomUnits; bordersWidths.m_left = xLeft; bordersWidths.m_top = xTop; bordersWidths.m_bottom = xBottom; bordersWidths.m_right = xRight; m_styleSet.markBorderWidthsAsDirty(); } void Component::setOpacity(float x_opacity) { m_styleSet.m_opacity = x_opacity; m_styleSet.markOpacityAsDirty(); } void Component::setBorderRadii(Style::Units xLeftUnits, float xLeft, Style::Units xTopUnits, float xTop, Style::Units xRightUnits, float xRight, Style::Units xBottomUnits, float xBottom) { A2DDISTANCESET4& borderRadii = m_styleSet.m_borderRadii; borderRadii.m_leftUnits = xLeftUnits; borderRadii.m_topUnits = xTopUnits; borderRadii.m_rightUnits = xRightUnits; borderRadii.m_bottomUnits = xBottomUnits; borderRadii.m_left = xLeft; borderRadii.m_top = xTop; borderRadii.m_bottom = xBottom; borderRadii.m_right = xRight; m_styleSet.markBorderRadiiAsDirty(); } void Component::setBorderRadiiTopLeft(float x_value) { A2DDISTANCESET4& borderRadii = m_styleSet.m_borderRadii; A2DPIXELDISTANCESETUINT4& precalculatedBorderRadii = m_styleSet.m_precalculatedBorderRadii; unsigned int width = m_styleSet.m_precalculatedSize.m_width; unsigned int height = m_styleSet.m_precalculatedSize.m_height; unsigned int usableDim = min__(width, height); precalculatedBorderRadii.m_left = 
min__(SUINT(cvtsu2px__(borderRadii.m_leftUnits, (borderRadii.m_left = x_value), width)), usableDim / 2); m_styleSet.markBorderRadiiAsDirty(); } void Component::setBorderRadiiTopLeftUnits(Style::Units x_units) { m_styleSet.m_borderRadii.m_leftUnits = x_units; m_styleSet.markBorderRadiiAsDirty(); } void Component::setBorderRadiiUnified(float x_value) { A2DDISTANCESET4& borderRadii = m_styleSet.m_borderRadii; A2DPIXELDISTANCESETUINT4& precalculatedBorderRadii = m_styleSet.m_precalculatedBorderRadii; unsigned int width = m_styleSet.m_precalculatedSize.m_width; unsigned int height = m_styleSet.m_precalculatedSize.m_height; unsigned int usableDim = min__(width, height); precalculatedBorderRadii.m_left = min__(SUINT(cvtsu2px__(borderRadii.m_leftUnits, (borderRadii.m_left = x_value), width)), usableDim / 2); precalculatedBorderRadii.m_top = min__(SUINT(cvtsu2px__(borderRadii.m_topUnits, (borderRadii.m_top = x_value), height)), usableDim / 2); precalculatedBorderRadii.m_right = min__(SUINT(cvtsu2px__(borderRadii.m_bottomUnits, (borderRadii.m_bottom = x_value), height)), usableDim / 2); precalculatedBorderRadii.m_bottom = min__(SUINT(cvtsu2px__(borderRadii.m_bottomUnits, (borderRadii.m_right = x_value), width)), usableDim / 2); m_styleSet.markBorderRadiiAsDirty(); } void Component::setBorderColor(unsigned int xLeft, unsigned int xTop, unsigned int xRight, unsigned int xBottom) { A2DBORDERSET4& borders = m_styleSet.m_borders; Color3D::from(borders.m_leftColor, xLeft); Color3D::from(borders.m_topColor, xTop); Color3D::from(borders.m_rightColor, xRight); Color3D::from(borders.m_bottomColor, xBottom); m_validatedContents = false; m_styleSet.markBorderColorsAsDirty(); } wchar_t* Component::getBackgroundImage() { return m_styleSet.m_drawable.getSource(); } Paint& Component::getBackgroundPaint() { return m_styleSet.m_backgroundPaint; }; void Component::setBoundsX(float x_x) { m_region.m_x = x_x; m_validatedContents = false; m_styleSet.markRequestRegionAsDirty(); } float 
Component::getBoundsX() { return m_region.m_x; } void Component::setBoundsY(float x_y) { m_region.m_y = x_y; m_validatedContents = false; m_styleSet.markRequestRegionAsDirty(); } void Component::setBoundsXY(float x_x, float x_y) { m_region.m_x = x_x; m_region.m_y = x_y; m_validatedContents = false; m_styleSet.markRequestRegionAsDirty(); } float Component::getBoundsY() { return m_region.m_y; } void Component::setScroll(float x_left, float x_top) { m_scrollLeft = x_left; m_scrollTop = x_top; m_componentTreeValidationRequest = m_validatedContents = false; m_styleSet.markRequestRegionAsDirty(); } void Component::setScrollTop(float x_top) { m_scrollTop = x_top; m_componentTreeValidationRequest = m_validatedContents = false; m_styleSet.markRequestRegionAsDirty(); } void Component::captureScroll() { m_scrolling = true; } void Component::releaseScroll() { m_scrolling = false; } void Component::setBounds(float xX, float xY, float xWidth, float xHeight) { // FIX-ME if ((m_previousCalculatedRowIndex != m_calculatedRowIndex || m_previousCalculatedColumnIndex != m_calculatedColumnIndex) && !m_parent->m_scrolling) { m_region.m_width = xWidth; m_region.m_height = xHeight; if (m_positionAnimationXY) { Animator::stop(*this, m_positionAnimationXY); } m_cachedAnimationPositionXY.toValues(xX, xY); m_positionAnimationXY = Animator::animate(*this, m_cachedAnimationPositionXY); m_previousCalculatedRowIndex = m_calculatedRowIndex; m_previousCalculatedColumnIndex = m_calculatedColumnIndex; } else /*if (!Animator::isAnimating(*this, m_positionAnimationXY))*/ { m_region.m_width = xWidth; m_region.m_height = xHeight; m_region.m_x = xX; m_region.m_y = xY; m_backgroundRegion.m_width = xWidth; m_backgroundRegion.m_height = xHeight; if (m_region.m_height != m_previousDimensions.m_height || m_region.m_width != m_previousDimensions.m_width) { // FIXME Use SSE2 Acceleration m_previousDimensions = { m_region.m_width, m_region.m_height }; m_styleSet.markBackgroundAsDirty(); } } m_validatedContents = 
false; m_styleSet.markRequestRegionAsDirty(); }
// NewHTTPPoolOpts initializes an HTTP pool of peers with the given options.
// Unlike NewHTTPPool, this function does not register the created pool as an HTTP handler.
// The returned *HTTPPool implements http.Handler and must be registered using http.Handle.
func NewHTTPPoolOpts(self string, o *HTTPPoolOptions) *HTTPPool {
	if httpPoolMade {
		panic("groupcache: NewHTTPPool must be called only once")
	}
	httpPoolMade = true

	p := &HTTPPool{
		self:        self,
		httpGetters: make(map[string]*httpGetter),
	}
	if o != nil {
		p.opts = *o
	}
	if p.opts.BasePath == "" {
		p.opts.BasePath = defaultBasePath
	}
	if p.opts.Replicas == 0 {
		p.opts.Replicas = defaultReplicas
	}
	p.peers = consistenthash.New(p.opts.Replicas, p.opts.HashFn)
	RegisterPeerPicker(func() PeerPicker { return p })
	return p
}
import sys

def subset(a):
    # Enumerate all 2**len(a) subsets of a via the bits of n.
    for n in range(2 ** len(a)):
        yield [a[i] for i in range(len(a)) if (n >> i) & 1 == 1]

weights = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]

for line in sys.stdin:
    w = int(line)
    for s in subset(weights):
        if sum(s) == w:
            print(' '.join(map(str, s)))
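The bitmask enumeration above has a standard-library equivalent via `itertools` (an illustrative sketch, not a required change). Note also that since the weights are distinct powers of two, any target sum below 1024 has at most one matching subset — the set bits of its binary representation:

```python
from itertools import chain, combinations

def subsets(a):
    # All subsets of a, grouped by size -- equivalent to iterating
    # over the 2**len(a) bitmasks in the generator above.
    return chain.from_iterable(combinations(a, r) for r in range(len(a) + 1))

weights = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]

# Distinct powers of two -> at most one subset per target sum.
matches = [s for s in subsets(weights) if sum(s) == 13]  # 13 = 1 + 4 + 8
```

Exploiting the powers-of-two structure directly (reading off the set bits of `w`) would avoid the exponential scan entirely.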
import { QuestionBase } from "./QuestionBase";
import { ChildEntity, Column } from "typeorm";

@ChildEntity()
export class TextQuestion extends QuestionBase {
  @Column({ type: "integer", comment: "the max length of text", default: 255 })
  maxLength: number;
}
// GetToken returns the oauth2.Token corresponding to this request.
func (a *Authenticator) GetToken(r *http.Request) (*oauth2.Token, error) {
	session := a.getSession(r)
	if session.Token == nil {
		return nil, ErrNotAuthenticated
	}
	return session.Token, nil
}
import pytest


def run_Lorenz(efficient, skip_residual_computation, num_procs=1):
    from pySDC.implementations.problem_classes.Lorenz import LorenzAttractor
    from pySDC.implementations.hooks.log_errors import LogGlobalErrorPostRun
    from pySDC.implementations.hooks.log_solution import LogSolution
    from pySDC.implementations.hooks.log_work import LogWork
    from pySDC.projects.Resilience.sweepers import generic_implicit_efficient, generic_implicit
    from pySDC.implementations.controller_classes.controller_nonMPI import controller_nonMPI

    # initialize level parameters
    level_params = {}
    level_params['dt'] = 1e-2

    # initialize sweeper parameters
    sweeper_params = {}
    sweeper_params['quad_type'] = 'RADAU-RIGHT'
    sweeper_params['num_nodes'] = 3
    sweeper_params['QI'] = 'IE'
    sweeper_params['skip_residual_computation'] = (
        ('IT_CHECK', 'IT_FINE', 'IT_COARSE', 'IT_DOWN', 'IT_UP') if skip_residual_computation else ()
    )

    problem_params = {
        'newton_tol': 1e-9,
        'newton_maxiter': 99,
    }

    # initialize step parameters
    step_params = {}
    step_params['maxiter'] = 4

    # initialize controller parameters
    controller_params = {}
    controller_params['logger_level'] = 30
    controller_params['hook_class'] = [LogSolution, LogWork, LogGlobalErrorPostRun]
    controller_params['mssdc_jac'] = False

    # fill description dictionary for easy step instantiation
    description = {}
    description['problem_class'] = LorenzAttractor
    description['problem_params'] = problem_params
    description['sweeper_class'] = generic_implicit_efficient if efficient else generic_implicit
    description['sweeper_params'] = sweeper_params
    description['level_params'] = level_params
    description['step_params'] = step_params

    # set time parameters
    t0 = 0.0

    # instantiate controller
    controller = controller_nonMPI(num_procs=num_procs, controller_params=controller_params, description=description)

    P = controller.MS[0].levels[0].prob
    uinit = P.u_exact(t0)

    uend, stats = controller.run(u0=uinit, t0=t0, Tend=1.0)
    return stats


def run_Schroedinger(efficient=False, num_procs=1, skip_residual_computation=False):
    from pySDC.implementations.problem_classes.NonlinearSchroedinger_MPIFFT import nonlinearschroedinger_imex
    from pySDC.implementations.sweeper_classes.imex_1st_order import imex_1st_order
    from pySDC.projects.Resilience.sweepers import imex_1st_order_efficient
    from pySDC.implementations.hooks.log_errors import LogGlobalErrorPostRunMPI
    from pySDC.implementations.hooks.log_solution import LogSolution
    from pySDC.implementations.hooks.log_work import LogWork
    from pySDC.implementations.controller_classes.controller_MPI import controller_MPI
    from mpi4py import MPI

    space_comm = MPI.COMM_SELF
    rank = space_comm.Get_rank()

    # initialize level parameters
    level_params = {}
    level_params['restol'] = 1e-8
    level_params['dt'] = 1e-01 / 2
    level_params['nsweeps'] = 1

    # initialize sweeper parameters
    sweeper_params = {}
    sweeper_params['quad_type'] = 'RADAU-RIGHT'
    sweeper_params['num_nodes'] = 3
    sweeper_params['QI'] = 'IE'
    sweeper_params['initial_guess'] = 'spread'
    sweeper_params['skip_residual_computation'] = (
        ('IT_FINE', 'IT_COARSE', 'IT_DOWN', 'IT_UP') if skip_residual_computation else ()
    )

    # initialize problem parameters
    problem_params = {}
    problem_params['nvars'] = (128, 128)
    problem_params['spectral'] = False
    problem_params['c'] = 1.0
    problem_params['comm'] = space_comm

    # initialize step parameters
    step_params = {}
    step_params['maxiter'] = 50

    # initialize controller parameters
    controller_params = {}
    controller_params['logger_level'] = 30 if rank == 0 else 99
    controller_params['hook_class'] = [LogSolution, LogWork, LogGlobalErrorPostRunMPI]
    controller_params['mssdc_jac'] = False

    # fill description dictionary for easy step instantiation
    description = {}
    description['problem_params'] = problem_params
    description['problem_class'] = nonlinearschroedinger_imex
    description['sweeper_class'] = imex_1st_order_efficient if efficient else imex_1st_order
    description['sweeper_params'] = sweeper_params
    description['level_params'] = level_params
    description['step_params'] = step_params

    # set time parameters
    t0 = 0.0

    # instantiate controller
    controller_args = {
        'controller_params': controller_params,
        'description': description,
    }
    comm = MPI.COMM_SELF
    controller = controller_MPI(**controller_args, comm=comm)

    P = controller.S.levels[0].prob
    uinit = P.u_exact(t0)

    uend, stats = controller.run(u0=uinit, t0=t0, Tend=1.0)
    return stats


@pytest.mark.base
def test_generic_implicit_efficient(skip_residual_computation=True):
    stats_normal = run_Lorenz(efficient=False, skip_residual_computation=skip_residual_computation)
    stats_efficient = run_Lorenz(efficient=True, skip_residual_computation=skip_residual_computation)
    assert_sameness(stats_normal, stats_efficient, 'generic_implicit')


@pytest.mark.base
def test_residual_skipping():
    stats_normal = run_Lorenz(efficient=True, skip_residual_computation=False)
    stats_efficient = run_Lorenz(efficient=True, skip_residual_computation=True)
    assert_sameness(stats_normal, stats_efficient, 'generic_implicit', check_residual=False)


@pytest.mark.mpi4py
def test_residual_skipping_with_residual_tolerance():
    stats_normal = run_Schroedinger(efficient=True, skip_residual_computation=False)
    stats_efficient = run_Schroedinger(efficient=True, skip_residual_computation=True)
    assert_sameness(stats_normal, stats_efficient, 'imex_first_order', check_residual=False)


@pytest.mark.mpi4py
def test_imex_first_order_efficient():
    stats_normal = run_Schroedinger(efficient=False)
    stats_efficient = run_Schroedinger(efficient=True)
    assert_sameness(stats_normal, stats_efficient, 'imex_first_order')


def assert_sameness(stats_normal, stats_efficient, sweeper_name, check_residual=True):
    from pySDC.helpers.stats_helper import get_sorted, get_list_of_types
    import numpy as np

    for me in get_list_of_types(stats_normal):
        normal = [you[1] for you in get_sorted(stats_normal, type=me)]
        if 'timing' in me or all(you is None for you in normal) or (not check_residual and 'residual' in me):
            continue
        assert np.allclose(normal, [you[1] for you in get_sorted(stats_efficient, type=me)]), (
            f'Stats don\'t match in type "{me}" for efficient and regular implementations of {sweeper_name} sweeper!'
        )
class FilterCollection:
    """
    Collection of filters to test messages against.
    """

    def __init__(self):
        self._filters = []

    def test_message(self, message, default=False):
        """
        Test if the message matches any filter.

        :param message: message to be tested against the filters, ''Message''
        :param default: return value when there is no active filter, ''bool''
        :returns: True if the message matches any filter, ''bool''
        """
        match = default
        for f in self._filters:
            if f.is_enabled() and f.has_filter():
                if f.test_message(message):
                    return True
                else:
                    match = False
        return match

    def append(self, filter):
        self._filters.append(filter)

    def count_enabled_filters(self):
        enabled = [f for f in self._filters if f.is_enabled()]
        return len(enabled)

    def __len__(self):
        return len(self._filters)

    def __delitem__(self, index):
        del self._filters[index]
/*
 * Try to initialise xendev. Depends on the frontend being ready
 * for it (shared ring and evtchn info in xenstore, state being
 * Initialised or Connected).
 *
 * Goes to Connected on success.
 */
static int xen_be_try_initialise(struct XenDevice *xendev)
{
    int rc = 0;

    if (xendev->fe_state != XenbusStateInitialised &&
        xendev->fe_state != XenbusStateConnected) {
        if (xendev->ops->flags & DEVOPS_FLAG_IGNORE_STATE) {
            xen_pv_printf(xendev, 2, "frontend not ready, ignoring\n");
        } else {
            xen_pv_printf(xendev, 2, "frontend not ready (yet)\n");
            return -1;
        }
    }

    if (xendev->ops->initialise) {
        rc = xendev->ops->initialise(xendev);
    }
    if (rc != 0) {
        xen_pv_printf(xendev, 0, "initialise() failed\n");
        return rc;
    }

    xen_be_set_state(xendev, XenbusStateConnected);
    return 0;
}
import math


def calculateAngle(p1, p2, inDegrees=True):
    xDiff = p2[0] - p1[0]
    yDiff = p2[1] - p1[1]
    angle = math.atan2(yDiff, xDiff)
    if inDegrees:
        angle = math.degrees(angle)
    return angle
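A couple of spot checks on the helper above, restated here so the snippet runs on its own (the assumed convention is a standard Cartesian plane with angles measured counter-clockwise from the positive x-axis):

```python
import math

def calculateAngle(p1, p2, inDegrees=True):
    # Angle of the vector from p1 to p2, counter-clockwise from the +x axis.
    angle = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    return math.degrees(angle) if inDegrees else angle

# The diagonal (0, 0) -> (1, 1) sits at 45 degrees;
# (0, 0) -> (-1, 0) points along the negative x-axis at 180 degrees.
print(calculateAngle((0, 0), (1, 1)))
print(calculateAngle((0, 0), (-1, 0)))
```

Using atan2 rather than atan(yDiff / xDiff) keeps the quadrant information and avoids a division by zero for vertical vectors.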
import React, { FC, useEffect, useState } from 'react';

import { getScrollPosition } from '@utils/tools';

import './index.scss';

export interface BackTopProps {
  // visible height
  minHeight?: number;
}

const BackTop: FC<BackTopProps> = ({ minHeight }) => {
  const [isVisible, setVisible] = useState(false);

  const handleBackToTop = () => {
    const s = document.documentElement.scrollTop || document.body.scrollTop;
    if (s > 0) {
      window.scrollTo({
        top: 0,
        behavior: 'smooth',
      });
    } else {
      if (!isVisible) {
        setVisible(false);
      }
    }
  };

  const visibleBtn = () => {
    if (getScrollPosition().y > (minHeight as number)) {
      if (!isVisible) {
        setVisible(true);
      }
    } else {
      setVisible(false);
    }
  };

  useEffect(() => {
    window.addEventListener('scroll', visibleBtn);
    return () => {
      window.removeEventListener('scroll', visibleBtn);
    };
  });

  return (
    <div
      id="fzj-backtop"
      className={isVisible ? 'show' : 'hide'}
      onClick={handleBackToTop}
    >
      <svg viewBox="0 0 98 125" xmlns="http://www.w3.org/2000/svg">
        <path
          d="m.41627505 120.505813c.36557376-2.799314 38.38567845-112.09375627 40.40633855-116.52273479 2.0206601-4.42897853 7.0443939-6.06190829 10.0049342.20172081 1.9736935 4.17575274 17.3630227 42.22464148 46.1679874 114.14666598.3068629 1.030234.4237748 1.755017.3507358 2.174348-.4483666 2.574161-2.3292736 3.429521-4.6125739 2.516093-15.0683018-6.028033-36.2681037-23.1692513-43.8663627-23.1692513-7.5982589 0-27.0568176 12.9171643-43.18383202 23.1692513-.96511235.613531-1.91473089 1.069619-2.88840599 1.149501s-2.74439509-.86628-2.37882134-3.665594z"
          fill="#666"
          fillRule="evenodd"
        />
      </svg>
    </div>
  );
};

BackTop.defaultProps = {
  minHeight: 300,
};

export default BackTop;
/**
 * Write a portion of an array of characters.
 *
 * @param ca  The array from which to write
 * @param off Starting offset
 * @param len Number of characters to write
 */
public void write(char ca[], int off, int len) {
    super.write(ca, off, len);
    super.flush();
}
import * as fs from 'fs';

import utils from 'evm-lite-utils';
import Inquirer from 'inquirer';
import Vorpal from 'vorpal';

import Session from '../core/Session';
import Command, { IArgs, IOptions } from '../core/Command';

interface Opts extends IOptions {
	interactive?: boolean;
	debug?: boolean;
	pwd?: string;
	out: string;
}

interface Args extends IArgs<Opts> {
	options: Opts;
	moniker: string;
}

interface Answers {
	moniker: string;
	outpath: string;
	passphrase: string;
	verifyPassphrase: string;
}

const command = (evmlc: Vorpal, session: Session) => {
	const description = 'Creates an encrypted keypair locally';

	return evmlc
		.command('accounts create [moniker]')
		.alias('a c')
		.description(description)
		.option('-i, --interactive', 'enter interactive mode')
		.option('--pwd <file_path>', 'passphrase file path')
		.option('--out <output_path>', 'write keystore to output path')
		.types({
			string: ['_', 'pwd', 'out']
		})
		.action((args: Args) => new AccountCreateCommand(session, args).run());
};

class AccountCreateCommand extends Command<Args> {
	protected async init(): Promise<boolean> {
		this.args.options.interactive =
			this.args.options.interactive || this.session.interactive;

		this.args.moniker = this.args.moniker || this.config.defaults.from;

		this.args.options.out =
			this.args.options.out || this.datadir.keystorePath;

		return this.args.options.interactive;
	}

	protected async prompt(): Promise<void> {
		const questions: Inquirer.QuestionCollection<Answers> = [
			{
				message: 'Moniker: ',
				name: 'moniker',
				type: 'input'
			},
			{
				message: 'Output Path: ',
				name: 'outpath',
				type: 'input',
				default: this.args.options.out
			},
			{
				message: 'Passphrase: ',
				name: 'passphrase',
				type: 'password'
			},
			{
				message: 'Re-enter passphrase: ',
				name: 'verifyPassphrase',
				type: 'password'
			}
		];

		const answers = await Inquirer.prompt<Answers>(questions);

		if (!(answers.passphrase && answers.verifyPassphrase)) {
			throw Error('Fields cannot be blank');
		}

		if (answers.passphrase !== answers.verifyPassphrase) {
			throw Error('Passphrases do not match');
		}

		this.args.moniker = answers.moniker;
		this.args.options.out = answers.outpath;
		this.passphrase = answers.passphrase.trim();
	}

	protected async check(): Promise<void> {
		if (!this.args.moniker) {
			throw Error('Moniker cannot be empty');
		}

		if (!utils.validMoniker(this.args.moniker)) {
			throw Error('Moniker contains illegal characters');
		}

		if (!this.passphrase) {
			if (!this.args.options.pwd) {
				throw Error('No passphrase file path provided');
			}

			if (!utils.exists(this.args.options.pwd)) {
				throw Error('Passphrase file path provided does not exist');
			}

			if (utils.isDirectory(this.args.options.pwd)) {
				throw Error('Passphrase file path provided is a directory');
			}

			this.passphrase = fs
				.readFileSync(this.args.options.pwd, 'utf8')
				.trim();
		}

		if (this.args.options.out) {
			if (!utils.exists(this.args.options.out)) {
				throw Error('Output path provided does not exist');
			}

			if (!utils.isDirectory(this.args.options.out)) {
				throw Error('Output path provided is not a directory');
			}
		}
	}

	protected async exec(): Promise<string> {
		this.log.info('keystore', this.datadir.keystorePath);

		const account = await this.datadir.newKeyfile(
			this.args.moniker,
			this.passphrase!,
			this.args.options.out
		);

		return JSON.stringify(account, null, 2);
	}
}

export const AccountCreate = AccountCreateCommand;
export default command;
package pq

import (
	"fmt"
	"reflect"
	"sync"
	"time"

	"github.com/armon/relay/broker"
)

// The pq package provides a simple priority queue. All that is required is a
// known number of available priorities. Queues labeled from 0..N will be
// automatically created, where the higher numbers are higher priority queues.
// High-level methods are exposed to make dealing with the priority queues
// very simple and abstract.

var (
	// The minimum amount of time to wait after receiving a message for a higher
	// priority message to arrive.
	MinQuietPeriod = 10 * time.Millisecond
)

// PriorityQueue is a simple wrapper around a relay.Broker to manage
// a set of queues with varying priority.
type PriorityQueue struct {
	max         int
	source      broker.Broker
	prefix      string
	quietPeriod time.Duration
	publishers  []broker.Publisher

	shutdownLock sync.Mutex
	shutdown     bool
}

// priorityResp is used as a container for response data during a Consume().
// Since the response must be fed down a channel and contains multiple values,
// we stuff them all into this struct and thread it through.
type priorityResp struct {
	value    interface{}
	priority int
	consumer broker.Consumer
}

// NewPriorityQueue returns a new priority queue from which a consumer or
// producer at a given priority can be easily retrieved.
func NewPriorityQueue(
	b broker.Broker,
	pri int,
	prefix string,
	quietPeriod time.Duration) (*PriorityQueue, error) {

	if b == nil {
		return nil, fmt.Errorf("Broker must not be nil")
	}
	if pri < 1 {
		return nil, fmt.Errorf("Must be 1 or more priorities")
	}

	// Guard against a quiet period which is too small
	if quietPeriod < MinQuietPeriod {
		quietPeriod = MinQuietPeriod
	}

	q := PriorityQueue{
		source:      b,
		prefix:      prefix,
		max:         pri - 1,
		quietPeriod: quietPeriod,
	}

	// Initialize the publisher cache
	q.publishers = make([]broker.Publisher, pri)

	return &q, nil
}

// queueName formats the name of a priority queue by appending its priority to
// the user-provided queue prefix.
func queueName(prefix string, pri int) string {
	return fmt.Sprintf("%s-%d", prefix, pri)
}

// Max returns the highest priority number.
func (q *PriorityQueue) Max() int {
	return q.max
}

// Min returns the lowest priority number. This is always 0.
func (q *PriorityQueue) Min() int {
	return 0
}

// publisher returns a new publisher from the priority indicated by pri.
func (q *PriorityQueue) publisher(pri int) (broker.Publisher, error) {
	if pri > q.Max() || pri < q.Min() {
		return nil, fmt.Errorf("Priority out of range: %d", pri)
	}
	if q.publishers[pri] == nil {
		pub, err := q.source.Publisher(queueName(q.prefix, pri))
		if err != nil {
			return nil, err
		}
		q.publishers[pri] = pub
	}
	return q.publishers[pri], nil
}

// consumer returns a new consumer with the indicated priority.
func (q *PriorityQueue) consumer(pri int) (broker.Consumer, error) {
	if pri > q.Max() || pri < q.Min() {
		return nil, fmt.Errorf("Priority out of range: %d", pri)
	}
	cons, err := q.source.Consumer(queueName(q.prefix, pri))
	if err != nil {
		return nil, err
	}
	return cons, nil
}

// Publish will publish a message at a given priority. The publisher is
// automatically closed afterward.
func (q *PriorityQueue) Publish(payload interface{}, pri int) error {
	pub, err := q.publisher(pri)
	if err != nil {
		return err
	}
	if err := pub.Publish(payload); err != nil {
		return err
	}
	return nil
}

// consume consumes a message from the priority queue. This is a blocking call
// which will watch every queue at every priority until a message is received on
// at least one of them. Once a message is received, we continue blocking for a
// configurable amount of time for any other higher priority message to arrive.
// After the quiet period, if no higher priority message has been received, all
// lower priority messages are marked for re-delivery, and the most urgent
// message is returned.
//
// The quiet period should almost always carry a value greater than zero. If
// there is no quiet period, then messages are returned as soon as they are
// received, which makes the quickness of the server and queue the decider of
// message priority. Therefore, we enforce a minimum quiet period.
//
// The consumer is also returned as part of the result, as it will contain
// the session open to the queue with an un-Ack()'ed message. It is the
// responsibility of the caller to Ack() and Close() the consumer.
func (q *PriorityQueue) consume(
	out interface{},
	cancelCh chan struct{}) (broker.Consumer, int, error) {

	// Populate the cancelCh if none was provided
	if cancelCh == nil {
		cancelCh = make(chan struct{})
	}

	// Initialize the data channels
	errCh := make(chan error, q.Max()+1)
	respCh := make(chan priorityResp, q.Max()+1)

	// Create consumers and map them to their corresponding priority
	consumers := make(map[int]broker.Consumer, q.Max()+1)

	// Close all consumers when we return. We will remove the consumer
	// of the highest priority entry before this is called so as to
	// avoid nack'ing the returned message.
	defer func() {
		for _, cons := range consumers {
			cons.Close()
		}
	}()

	// Initialize the consumers
	for i := q.Min(); i <= q.Max(); i++ {
		cons, err := q.consumer(i)
		if err != nil {
			return nil, 0, err
		}
		consumers[i] = cons
	}

	// Start each consumer
	for pri, cons := range consumers {
		go func(cons broker.Consumer, pri int) {
			// Make a new object and consume into it.
			val := reflect.New(reflect.TypeOf(out)).Interface()
			if err := cons.Consume(&val); err != nil {
				errCh <- err
				return
			}

			// Create the response object to send down the channel
			respCh <- priorityResp{
				value:    val,
				priority: pri,
				consumer: cons,
			}
		}(cons, pri)
	}

	var wait <-chan time.Time
	highest := q.Min() - 1 // Allows Min() messages to be accepted

OUTER:
	for {
		select {
		case err := <-errCh:
			return nil, 0, err

		case r := <-respCh:
			if r.priority <= highest {
				continue OUTER
			}

			// Received message was higher priority, so re-assign the results
			// and enter the quiet period for any higher-priority messages
			// to arrive.
			dst := reflect.Indirect(reflect.ValueOf(out))
			src := reflect.Indirect(reflect.ValueOf(r.value))
			dst.Set(reflect.Indirect(src))
			highest = r.priority
			wait = time.After(q.quietPeriod)

		case <-wait:
			// Pop the consumer of the highest priority message out of the
			// map and return it. All other consumers are automatically closed.
			cons := consumers[highest]
			delete(consumers, highest)
			return cons, highest, nil

		case <-cancelCh:
			break OUTER
		}
	}

	return nil, 0, nil
}

// Consume is the public method for consuming data out of a priority queue. It
// will block until data is received, and returns the priority level of the
// consumed message along with any errors. The consumer is also returned, which
// should be Ack'ed and Closed by the caller.
func (q *PriorityQueue) Consume(out interface{}) (broker.Consumer, int, error) {
	return q.consume(out, nil)
}

// ConsumeCancel allows passing in a channel to signal that we should
// stop trying to consume a message. Internally this channel will be checked on
// a short interval, and will shut down the job if the channel has been closed.
func (q *PriorityQueue) ConsumeCancel(
	out interface{},
	cancelCh chan struct{}) (broker.Consumer, int, error) {

	if cancelCh == nil {
		return nil, 0, fmt.Errorf("Cancellation channel cannot be nil")
	}
	return q.consume(out, cancelCh)
}

// Close will call a shutdown on all publishers we have used. By default, all
// publishers are kept open so that multiple calls to establish the sessions are
// not always required. This method shuts them all down and resets the pool.
func (q *PriorityQueue) Close() error {
	q.shutdownLock.Lock()
	defer q.shutdownLock.Unlock()

	if q.shutdown {
		return nil
	}
	q.shutdown = true

	for i, pub := range q.publishers {
		if pub != nil {
			if err := pub.Close(); err != nil {
				return err
			}
		}
		q.publishers[i] = nil
	}
	return nil
}
Paulie Malignaggi explains his sparring session with Conor McGregor and adjusting to the styles of an MMA fighter.

LAS VEGAS -- UFC star Conor McGregor shared a boxing ring with former two-weight world champion Paulie Malignaggi on Thursday. McGregor is preparing for an Aug. 26 bout with Floyd Mayweather.

McGregor, 29, posted a photo from the eight-round session to social media. It shows McGregor with his hands behind his back, and the caption, "They say I've got no hands."

They say I've got no hands. pic.twitter.com/FJfvj5qjKi — Conor McGregor (@TheNotoriousMMA) July 21, 2017

According to Malignaggi, who says he will continue to serve as one of McGregor's regular sparring partners during the next five weeks, the photo represented the spirit of their first sparring session.

"There was a lot of trash talking right away," Malignaggi told ESPN. "A lot of fighting right away. At the end, you look back on it, it was kind of fun. I don't have many people who can match my trash talk, but Conor definitely can. It was making it a lot of fun."

Of course, the world already knows McGregor, a two-division champ in the UFC, can talk trash. The question, going into his first professional boxing match against the undefeated Mayweather (49-0, 26 KOs), is whether the UFC star can box. Malignaggi, who retired from professional boxing in March, wouldn't divulge specific details, but repeatedly referred to his session with McGregor as "good work."

"To say a mixed martial artist is coming into boxing and wouldn't be awkward is an understatement," Malignaggi said. "He's going to have his own style and set of things he does. He's got a game plan. It's not what people think.

"I'll put it like this: He knows what he wants to do and he has a method of how he wants to get there. The mechanism of how he gets there may look, to the naked eye, 'hmm, I don't know about this.' But there's a method to his madness. He's a thinker."
Malignaggi said they were originally scheduled for six rounds, but ended up extending it to eight. He said both showboated at times, but he couldn't guess whether McGregor will do so in the fight with Mayweather.

Asked to describe the power of McGregor's left hand, which has been his best weapon in mixed martial arts, Malignaggi said there is "pop."

"He's got some pop in the left hand, I can't take that away from him," Malignaggi said. "In boxing, especially against a guy like Floyd Mayweather, you need to devise a few more weapons, and I think that's what Conor is working on. I think ... coming into this situation already knowing Conor has a big left hand ... you're probably going to prepare for that left hand.

"Obviously, Conor is working on other things besides the left hand, so there can be that surprise element to it. ... There's going to be other things he needs to make you worry about, and that's what's being worked on in camp. He's effective at what he's doing."

Malignaggi described the camp's mood as "serious" and said McGregor initiated a conversation at the end of the session. Once the intensity of competition lowered, Malignaggi said McGregor was "actually a chill, normal guy."

A Brooklyn native, Malignaggi fought professionally from 2001 to 2017. He faced the likes of Miguel Cotto, Ricky Hatton, Juan Diaz, Amir Khan, Adrien Broner, Zab Judah, Shawn Porter and Danny Garcia, among others.

"I think [McGregor] is definitively an underdog, but he has a method to what he's doing and he has a thinking process behind it," Malignaggi said. "This is a fight of moments, and I think he can give himself certain moments.

"If those moments turn into bigger moments, that's not up to me.
That's up to Conor McGregor."
/*
 * Copyright (C) 2016 Singular Studios (a.k.a Atom Tecnologia) - www.opensingular.com
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.opensingular.form.persistence;

/**
 * Key based on int.
 *
 * @author <NAME>
 */
public class FormKeyInt extends AbstractFormKey<Integer> implements FormKeyNumber {

    public FormKeyInt(int value) {
        super(value);
    }

    public FormKeyInt(Integer value) {
        super(value);
    }

    public FormKeyInt(String persistenceString) {
        super(persistenceString);
    }

    @Override
    protected Integer parseValuePersistenceString(String persistenceString) {
        try {
            return Integer.valueOf(persistenceString);
        } catch (Exception e) {
            throw new SingularFormPersistenceException("The key value is not a valid integer", e)
                    .add("key", persistenceString);
        }
    }

    public static FormKeyInt convertToKey(Object objectValueToBeConverted) {
        if (objectValueToBeConverted == null) {
            throw new SingularFormPersistenceException("Cannot convert a null value to FormKey");
        } else if (objectValueToBeConverted instanceof FormKeyInt) {
            return (FormKeyInt) objectValueToBeConverted;
        } else if (objectValueToBeConverted instanceof Integer) {
            return new FormKeyInt((Integer) objectValueToBeConverted);
        } else if (objectValueToBeConverted instanceof Number) {
            return new FormKeyInt(((Number) objectValueToBeConverted).intValue());
        } else if (objectValueToBeConverted instanceof String) {
            return new FormKeyInt((String) objectValueToBeConverted);
        }
        throw new SingularFormPersistenceException("Cannot convert the requested value")
                .add("value", objectValueToBeConverted).add("value type", objectValueToBeConverted.getClass());
    }

    @Override
    public Long longValue() {
        return Long.valueOf(getValue());
    }

    @Override
    public Integer intValue() {
        return getValue();
    }
}
import {
  window,
  CancellationToken,
  CustomTextEditorProvider,
  Disposable,
  ExtensionContext,
  TextDocument,
  WebviewPanel
} from 'vscode';

import { TableView } from './tableView';
import { ViewTypes } from './viewTypes';

/**
 * Defines custom tabular data text editor provider.
 *
 * @see https://code.visualstudio.com/api/references/vscode-api#CustomTextEditorProvider
 */
export class TableEditor implements CustomTextEditorProvider {

  /**
   * Registers custom Table editor.
   *
   * @see https://code.visualstudio.com/api/references/vscode-api#window.registerCustomEditorProvider
   *
   * @param context Extension context.
   * @returns Disposable object for this editor.
   */
  public static register(context: ExtensionContext): Disposable {
    return window.registerCustomEditorProvider(
      ViewTypes.TableEditor,
      new TableEditor(context), {
        webviewOptions: {
          enableFindWidget: true,
          retainContextWhenHidden: true
        }
      });
  }

  /**
   * Creates new Table editor instance.
   *
   * @param context Extension context.
   */
  constructor(private readonly context: ExtensionContext) {
  }

  /**
   * Resolves a custom text editor for a given tabular data text document source,
   * and creates new TableView for that table data display.
   *
   * @param document Text document for the tabular data source to resolve.
   * @param webviewPanel Webview panel used to display the editor UI for this resource.
   * @param token A cancellation token that indicates the result is no longer needed.
   */
  public async resolveCustomTextEditor(
    document: TextDocument,
    webviewPanel: WebviewPanel,
    token: CancellationToken
  ): Promise<void> {
    // create new table view for the given tabular text data document and render it
    TableView.render(this.context.extensionUri, document.uri, webviewPanel);
  }
}
def representable(val: int, bits: int, signed: bool = True, shift: int = 0) -> bool:
    if val % (1 << shift) != 0:
        return False
    val >>= shift
    if signed:
        return -2**(bits - 1) <= val < 2**(bits - 1)
    else:
        return 0 <= val < 2**bits
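A typical use for a helper like this is checking whether a constant fits a fixed-width, possibly shifted, encoding field; a few illustrative checks, restating the function so the snippet runs on its own (the encoding-field framing is an assumption, not stated in the original):

```python
def representable(val: int, bits: int, signed: bool = True, shift: int = 0) -> bool:
    # val must be a multiple of 2**shift, and fit in `bits` bits after shifting
    if val % (1 << shift) != 0:
        return False
    val >>= shift
    if signed:
        return -2**(bits - 1) <= val < 2**(bits - 1)
    return 0 <= val < 2**bits

# Signed 12-bit range is -2048 .. 2047
print(representable(-2048, 12))  # bottom of the range
print(representable(2048, 12))   # one past the top of the range
# With shift=1 only even values are encodable
print(representable(6, 8, shift=1))
print(representable(5, 8, shift=1))
```

The shift check rejects values that are not multiples of the implicit scaling factor before the range check is applied.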
import java.util.List;

public class RegExDemo {
    public static void main(String[] args) {
        String re = "\\d{3,4}-\\d{7,8}";
        for (String s : List.of("010-12345678", "020-9999999", "0755-7654321")) {
            if (!s.matches(re)) {
                System.out.println("Test failed: " + s);
                return;
            }
        }
        for (String s : List.of("010 12345678", "A20-9999999", "0755-7654.321")) {
            if (s.matches(re)) {
                System.out.println("Test failed: " + s);
                return;
            }
        }
        System.out.println("All tests passed!");
        System.out.println("AaAaAa".hashCode()); // 0x7460e8c0
        System.out.println("BBAaBB".hashCode()); // 0x7460e8c0
    }
}
Not to be confused with the 1962 comic series Sabrina the Teenage Witch.

Teen Witch
Home video cover, also used for a theatrical release poster
Directed by: Dorian Walker
Produced by: Moshe Diamant, Rafael Eisenman, Alana H. Lambros, Bob Manning, Eduard Sarlai
Written by: Robin Menken, Vernon Zimmerman
Starring: Robyn Lively, Zelda Rubinstein
Music by: Richard Elliot, Larry Weir
Cinematography: Marc Reshvosky
Edited by: Natan Zahavi
Distributed by: Trans World Entertainment
Release date: April 23, 1989
Running time: 94 minutes
Country: United States
Language: English
Budget: $2.5 million
Box office: $27,843

Teen Witch is a 1989 American teen fantasy comedy film directed by Dorian Walker, written by Robin Menken and Vernon Zimmerman, and starring Robyn Lively and Zelda Rubinstein. Originally pitched as a female version of Teen Wolf (1985) and later reworked into a film of its own, the film features numerous impromptu rap musical numbers and has since become a cult classic,[1][2] aided by midnight theater showings, regular airings on cable network channels, and on ABC Family's 13 Nights of Halloween. The film is also popular for its music and 1980s fashion nostalgia.[1]

Plot

After a bike accident, the sweet-yet-nerdy 15-year-old Louise Miller knocks on the door of a strange-looking house, hoping to use the phone. Instead, she meets a strange but welcoming woman, the seer Madame Serena. Reading Louise's palm, Serena is stunned when she learns that Louise is a reincarnated witch and an old friend from one of her previous lives. A week later, on Louise's 16th birthday, her magical powers return through a powerful amulet that was lost in a former life, an item that Madame Serena says searches for its owner. Now that Louise has the power to alter the world around her, she intends to make her dreams come true by casting a spell to win over Brad, the hottest guy in school, without earning his love.
With Madame Serena's help, Louise uses her newfound powers to become the most popular girl in school, while also getting back at her harassing English teacher, Mr. Weaver, and the cheerleaders who never respected her. It is only after her popularity spell gets out of hand—which in turn causes her to abandon her equally unpopular, but loyal, best friend Polly—that Louise realizes she doesn't need magic. In the end, she relinquishes her powers by giving her amulet to Madame Serena, creating her own happy ending in the process. Cast [ edit ] Box office and reception [ edit ] The production budget for Teen Witch was $2,500,000. The film was released in the United States on April 23, 1989 and grossed $3,875 in its opening weekend at the box office, and only $27,843 in its entire run.[3] April 1989 box office competition included Field of Dreams, starring Kevin Costner and Pet Sematary, written by Stephen King. Both films were released on April 21, 1989, two days before the Teen Witch release. Teen Witch is a cult classic, having gained newer, younger audiences after regular re-airings on cable network channels such as HBO and Cinemax in the 1990s.[1][2][4][5] Jarett Wieselman of the New York Post stated, "There are good movies, there are bad movies, there are movies that are so bad they're good and then there is Teen Witch -- a cult classic that defies classification thanks to a curious combination of songs, spells and skin."[1] Joshua John Miller stated of his involvement with the film as character Richie, "If you look at Teen Witch, it was a very campy performance. But it's a really fun film and something I have grown to honor."[2] There are parodies or homages of the film, especially of its rap song "Top That" (including a homage starring Alia Shawkat).[4][6] Drew Grant of Nerve.com stated, "If you've never seen the original rap scene from the 80s classic Teen Witch, you must immediately stop what you're doing and watch it right now. 
It's everything wonderful and terrible about that decade rolled into one misguided appropriation of... hip-hop."[6] Stephanie Marcus of The Huffington Post called "Top That" "the worst song of all time."[7] On July 12, 2005, MGM released the film to DVD in its original widescreen theatrical version. In 2007, ABC Family acquired the television rights and has since re-aired it regularly as part of their yearly 13 Nights of Halloween movie specials.[8] Soundtrack [ edit ] "All Washed Up" - Larry Weir "Dream Lover" - Cathy Car "Finest Hour" - Cindy Valentine featuring Larry Weir "High School Blues" - The Puppy Boys "I Keep on Falling" - Blue Future "I Like Boys" - Elizabeth and The Weirz "Get Up and Move" - Cathy Car "Much too Much" - Cathy Car "Never Gonna Be the Same Again" (opening sequence) - Lori Ruso "Never Gonna Be the Same Again" (concert version) - Cindy Valentine "Popular Girl" - Theresa and The Weirz "Rap" - Philip McKean and Larry Weir "Shame" - The Weirz "Top That" - The Michael Terry Rappers "In Your Arms" - Richard Elliot Music was recorded at Weir Brothers Studio.[9] Accolades [ edit ] Year Nominee / work Award Result Eleventh Annual Youth in Film Awards 1988-1989[10] 1989 Best Young Actor Starring in a Motion Picture Young Artist Awards: Joshua John Miller Nominated 1989 Best Young Actress Starring in a Motion Picture Young Artist Awards: Robyn Lively Nominated Adaptations [ edit ] The Weir brothers created Caption Records and collaborated with Teen Witch film producer Alana Lambros for the Teen Witch the Musical project.[5][11] Financial backers of Teen Witch had neglected to provide funding for the original soundtrack release: After a decade and a half, the master audio tapes had become unavailable. 
The Weir brothers were interested in recreating the now-popular songs that Larry Weir had written, and Alana Lambros brought to the project her long-held view that Teen Witch the Musical was viable as a Broadway-bound production.[5] In 2007, the audio CD for Teen Witch the Musical was released, and a new generation of actors was cast for the stage play, which was presented in workshop. This adaptation never found a larger venue.[12]

The cast of Teen Witch the Musical:[13]

Alycia Adler as Randa (Cheerleader)
Bryce Blue as Rhet
Blake McIver Ewing as Brad Powell
Ashley Crowe as Madame Serena
Monet Lerner as Darcy (Cheerleader)
Tessa Ludwick as Phoebe (Cheerleader)
Lauren Patten as Polly
Sara Niemietz as Louise Miller
Heather Youmans as Shana the Rock Star
V-Style as rapper

In April 2008, Variety reported that Ashley Tisdale had signed with FremantleMedia North America and was in talks with United Artists to star in a remake of Teen Witch.[14]
/**
 * Adds a {@link CommandSender} to the repeating schedule.
 * A {@link CommandSender} will not be added twice.
 * @param commandSender the sender to subscribe; must carry a valid current command
 * @throws NotYetConnectedException if no connection has been established yet
 * @throws IllegalArgumentException if the sender or its current command is null
 */
public void subscribe(CommandSender commandSender) throws NotYetConnectedException, IllegalArgumentException {
	if (!isConnected()) {
		throw new NotYetConnectedException();
	}
	if (commandSender == null || commandSender.getCurrentCommand() == null) {
		throw new IllegalArgumentException("A valid command is required.");
	}
	if (!this.pendingSubscribersList.contains(commandSender)) {
		this.pendingSubscribersList.add(commandSender);
	}
}
def UpdateInternetScsiName(self, iScsiHbaDevice, iScsiName):
    return self.delegate("UpdateInternetScsiName")(iScsiHbaDevice, iScsiName)
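The wrapper above forwards the call to a dynamically resolved operation (presumably a remote stub). A minimal sketch of that delegation pattern, with a hypothetical `Backend` class standing in for whatever `delegate` resolves against, might look like:

```python
class Backend:
    """Hypothetical stand-in for the stub that delegate() resolves to."""
    def UpdateInternetScsiName(self, device, name):
        # A real backend would issue a remote call; here we just echo the args.
        return f"{device}:{name}"

class Client:
    def __init__(self, backend):
        self._backend = backend

    def delegate(self, method_name):
        # Resolve the named operation on the backend and return it as a callable.
        return getattr(self._backend, method_name)

    def UpdateInternetScsiName(self, iScsiHbaDevice, iScsiName):
        return self.delegate("UpdateInternetScsiName")(iScsiHbaDevice, iScsiName)

result = Client(Backend()).UpdateInternetScsiName("vmhba33", "iqn.example:node1")
# result == "vmhba33:iqn.example:node1" for this toy backend
```

The point of the indirection is that the client class only needs thin one-line wrappers; all dispatch logic lives in `delegate`.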
def planetary_positions(self):
    temp_pos_list, temp_vel_list = [], []
    # Compute the interactions once per step; the original called
    # self.planetary_interaction() twice per planet, redoing the same work.
    interactions = self.planetary_interaction()
    for i, o in enumerate(self.planets_list):
        temp_pos_x, temp_v_x = tools.verlet_algorithm(o.pos_x_real, o.v_x, interactions[i][0])
        temp_pos_y, temp_v_y = tools.verlet_algorithm(o.pos_y_real, o.v_y, interactions[i][1])
        temp_pos_list.append([temp_pos_x, temp_pos_y])
        temp_vel_list.append([temp_v_x, temp_v_y])
    return temp_pos_list, temp_vel_list
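`tools.verlet_algorithm` is not shown here. A common choice for such a helper, and purely an assumption on my part, is a velocity Verlet update that treats the acceleration as constant over the timestep:

```python
def verlet_step(pos, vel, acc, dt=1.0):
    """One velocity Verlet update, assuming acceleration is constant over dt.

    Sketch of what a helper like tools.verlet_algorithm might do; the real
    project may use a different integrator or signature.
    """
    new_pos = pos + vel * dt + 0.5 * acc * dt * dt
    new_vel = vel + acc * dt  # simplification: reuses acc at both endpoints
    return new_pos, new_vel

# Free fall from rest under acc = -10 for one unit of time:
# position moves by 0.5 * (-10) * 1^2 = -5, velocity becomes -10.
```

A full velocity Verlet scheme would re-evaluate the acceleration at the new position before updating the velocity; the one-argument form above matches the call shape in `planetary_positions`, which passes a single acceleration per axis.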
module Potato.Trader.Tests (
  tests
) where

import Potato.Trader.Helpers
import Potato.Trader.Types

import Test.Hspec

-- bids are people trying to buy TT
testBids :: [(AmountRatio USDT TT, Amount TT)]
testBids =
  [ (AmountRatio 100, Amount 100)
  , (AmountRatio 90, Amount 200)
  , (AmountRatio 80, Amount 100)]

-- asks are people trying to sell TT
testAsks :: [(AmountRatio USDT TT, Amount TT)]
testAsks =
  [ (AmountRatio 100, Amount 100)
  , (AmountRatio 110, Amount 200)
  , (AmountRatio 120, Amount 100)]

test_make_sellt1 :: Spec
test_make_sellt1 = do
  let mySellt1 = make_sellt1_from_bidst1 testBids
  it "returns expected value for boundary bids" $ do
    mySellt1 (Amount 100) `shouldBe` Amount (100*100)
    mySellt1 (Amount (100+200)) `shouldBe` Amount (100*100+200*90)
  it "returns correct value for partially executed bid" $ do
    mySellt1 (Amount (100+100)) `shouldBe` Amount (100*100+100*90)

test_make_buyt1 :: Spec
test_make_buyt1 = do
  let myBuyt1 = make_buyt1_from_askst1 testAsks
  it "returns expected value for boundary asks" $ do
    myBuyt1 (Amount 10000) `shouldBe` Amount (100)
    myBuyt1 (Amount (100*100+110*200)) `shouldBe` Amount (100+200)
  it "returns correct value for partially executed ask" $ do
    myBuyt1 (Amount (100*100+110*100)) `shouldBe` Amount (100+100)

tests :: IO ()
tests = hspec $
  describe "Bilaxy" $ do
    describe "make_sellt1" $ test_make_sellt1
    describe "make_buyt1" $ test_make_buyt1
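The helpers under test walk an order book level by level, filling from the best price outward. A minimal Python sketch of the selling side (the USDT proceeds from selling a given amount of TT into the bids, the behavior `make_sellt1_from_bidst1` appears to test) might be:

```python
def sell_proceeds(bids, amount):
    """Walk bids (price, size) from best to worst and total the proceeds.

    Sketch of the order-book fill the Haskell tests exercise; the real
    helpers use typed amounts (Amount, AmountRatio) rather than plain ints.
    """
    proceeds = 0
    for price, size in bids:
        fill = min(amount, size)   # take as much of this level as is needed
        proceeds += price * fill
        amount -= fill
        if amount == 0:
            break
    return proceeds

bids = [(100, 100), (90, 200), (80, 100)]
# Selling 200 TT: 100 TT at price 100, then 100 TT at price 90 -> 19000 USDT,
# matching the "partially executed bid" case in the Haskell spec above.
```

The buy side is symmetric: walk the asks from cheapest upward and accumulate the amount of TT obtained for a given USDT budget.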
Place of death in Parkinson's disease: trends in the USA

Background: Parkinson's disease (PD) is a significant cause of mortality, but little is known about the place of death for patients with PD in the USA, a key metric of end-of-life care.

Methodology: A trend analysis was conducted for the years 2003-2017 using aggregated death certificate data from the Centers for Disease Control and Prevention Wide-ranging Online Data for Epidemiologic Research (CDC WONDER) database, with individual-level mortality data from the Mortality Multiple Cause-of-Death Public Use Record available between 2013 and 2017. All natural deaths for which PD was identified as an underlying cause of death were identified. Place of death was categorised as hospital, decedent home, hospice facility, nursing home/long-term care and other.

Results: Between 2003 and 2017, 346,141 deaths were attributed to PD (59% males, 93.7% White). Most deaths occurred in patients aged 75-84 years (43.9%), followed by those aged ≥85 years (40.9%). Hospital and nursing home deaths decreased from 18% (n=3,240) and 52.6% (n=9,474) in 2003 to 9.2% (n=2,949) and 42% (n=13,429) in 2017, respectively. Home deaths increased from 21.1% (n=3,804) to 32.4% (n=10,347), and hospice facility deaths increased from 0.3% (n=47) in 2003 to 8.6% (n=2,739) in 2017. Female sex, being married and college education were associated with increased odds of home death, while Hispanic ethnicity and non-white race were associated with increased odds of hospital death.

Conclusion: Home and hospice facility deaths are gradually increasing in patients with PD. Particular attention should be paid to vulnerable socioeconomic groups that continue to have higher rates of hospital deaths and lower usage of hospice facilities.
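The place-of-death shares reported above are simple proportions of each year's PD deaths. A small sketch reproducing them from the reported counts; the yearly totals are my back-calculation from the stated percentages (e.g., 3,240 hospital deaths at 18% implies roughly 18,000 PD deaths in 2003), not figures from the paper:

```python
# Place-of-death counts for 2003 and 2017, as reported in the abstract.
counts = {
    2003: {"hospital": 3240, "nursing home": 9474, "home": 3804, "hospice": 47},
    2017: {"hospital": 2949, "nursing home": 13429, "home": 10347, "hospice": 2739},
}

# Assumption: yearly denominators back-calculated from the reported percentages
# (3240 / 0.18 ~= 18000 for 2003; 13429 / 0.42 ~= 31970 for 2017).
totals = {2003: 18000, 2017: 31970}

def share(year, place):
    """Percentage of that year's PD deaths occurring in the given place."""
    return round(100 * counts[year][place] / totals[year], 1)
```

With these denominators the function recovers each percentage quoted in the Results section, which is a useful internal-consistency check on the reported figures.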
/* sta_jfcd.c - Sleeping Teaching Assistant problem (pthreads + semaphores). */
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

pthread_mutex_t mutex_chairs;   /* Controls access to the number of chairs taken. */
int used_chairs = 0;
int c = 0;                      /* Chair index offset on the array. */

pthread_mutex_t mutex_stds;     /* Controls access to the number of active students (tasks). */
int active_students;

sem_t sem_ta_sleep;             /* Signal for TA sleep. */
sem_t sem_std;                  /* Signal for ending a student's assistance. */
sem_t sem_chairs[3];            /* Signal for chair call. */

int waking_student;             /* Who woke up the TA. */

void log_msg(int pid, char *msg)
{
    printf("[%d] %li: %s\n", pid, (long) time(NULL), msg);
    fflush(stdout);
}

/* Simulates the TA as a running thread (task). */
void *ta_task(void *arg)
{
    printf("TA is sleeping.\n");
    while (1) {
        /* Waiting for a student to awaken him. */
        sem_wait(&sem_ta_sleep);
        printf("Student #%d is awaking the TA.\n", waking_student); /* UNSTABLE: global */
        fflush(stdout);
        /* Awake cycle. */
        while (1) {
            /* Check if there is any student waiting. */
            pthread_mutex_lock(&mutex_chairs);
            if (used_chairs == 0) {
                pthread_mutex_unlock(&mutex_chairs);
                printf("TA went back to sleep.\n");
                fflush(stdout);
                break;
            }
            sem_post(&sem_chairs[c]);   /* Call the next waiting student in. */
            used_chairs--;
            c = (c + 1) % 3;            /* Advance the chair index offset. */
            pthread_mutex_unlock(&mutex_chairs);
            sleep(rand() % 5 + 1);      /* Assisting the student. */
            sem_post(&sem_std);         /* Notify that assistance is over. */
            /* Check if all students were assisted. */
            pthread_mutex_lock(&mutex_stds);
            active_students--;
            if (active_students == 0) {
                pthread_mutex_unlock(&mutex_stds); /* Fix: do not return holding the lock. */
                return NULL;
            }
            pthread_mutex_unlock(&mutex_stds);
        }
    }
}

/* Simulates a student as a running task. Gets the student id as its unique arg. */
void *student_task(void *arg)
{
    int id = *(int *) arg;
    int seat;
    while (1) {
        /* Entry section (student programming for a random time). */
        sleep(rand() % 5 + 1);
        printf("Student #%d is going to the TA room.\n", id);
        fflush(stdout);
        /* Check for an available chair and sit while still holding the lock,
           so another student cannot grab the last chair in between (the
           original released and re-acquired the mutex, allowing overbooking). */
        pthread_mutex_lock(&mutex_chairs);
        if (used_chairs < 3) {
            /* CASE 1: there is enough space. */
            if (used_chairs == 0) {
                /* Empty chairs; the TA may be sleeping: try to wake him up. */
                waking_student = id;
                sem_post(&sem_ta_sleep);
            }
            seat = (c + used_chairs) % 3;   /* Where he is sitting next. */
            used_chairs++;
            printf("Student #%d sat on chair #%d. %d chair(s) remain.\n",
                   id, seat + 1, 3 - used_chairs);
            fflush(stdout);
            pthread_mutex_unlock(&mutex_chairs);
            /* Waits for the TA to call the student. */
            sem_wait(&sem_chairs[seat]);
            printf("TA is teaching student #%d.\n", id);
            /* Waits for the time with the TA to finish. */
            sem_wait(&sem_std);
            printf("TA finished teaching student #%d.\n", id);
            printf("Student #%d left the TA room.\n", id);
            fflush(stdout);
            return NULL;
        } else {
            /* CASE 2: no chair available; the student will come back later. */
            pthread_mutex_unlock(&mutex_chairs);
            printf("There is no available chair for student %d. The student will return later.\n", id);
        }
    }
}

int main(int argc, char const *argv[])
{
    int n = atoi(argv[1]);      /* Starting number of students. */
    active_students = n;
    pthread_t ta;               /* TA task id. */
    pthread_t stds_tid[n];      /* Student task ids. */
    int stds_num[n];            /* Student numbers/names (ids). */

    sem_init(&sem_ta_sleep, 0, 0);
    sem_init(&sem_std, 0, 0);
    for (int i = 0; i < 3; i++) {
        sem_init(&sem_chairs[i], 0, 0);
    }
    pthread_mutex_init(&mutex_chairs, NULL);
    pthread_mutex_init(&mutex_stds, NULL);

    /* Run tasks. Arrays are indexed 0..n-1 (the original 1..n loop wrote
       one element past the end of both arrays). */
    srand(time(NULL));
    pthread_create(&ta, NULL, ta_task, NULL);
    for (int i = 0; i < n; i++) {
        stds_num[i] = i + 1;
        pthread_create(&stds_tid[i], NULL, student_task, (void *) &stds_num[i]);
    }

    /* Wait for tasks to finish. */
    pthread_join(ta, NULL);
    for (int i = 0; i < n; i++) {
        pthread_join(stds_tid[i], NULL);
    }
    printf("There are no more students to help. The TA left the room.\n");
    fflush(stdout);

    /* Cleanup. */
    pthread_mutex_destroy(&mutex_chairs);
    pthread_mutex_destroy(&mutex_stds);
    sem_destroy(&sem_ta_sleep);
    sem_destroy(&sem_std);
    for (int i = 0; i < 3; i++) {
        sem_destroy(&sem_chairs[i]);
    }
    return 0;
}
/** * This is a patient object. Contains the patient details and its history. * The history contains its possible events, prescriptions and measurements. * @author MG * */ public class Patient implements Comparable<Patient>,Cloneable{ //the mandatory column indexes public static final short COLUMN_PATIENT_ID = 0; public static final short COLUMN_BIRTHDATE = 1; public static final short COLUMN_GENDER = 2; public static final short COLUMN_START_DATE = 3; public static final short COLUMN_END_DATE = 4; public final int NO_DATA = -1; //patient details public String ID; public int birthDate; public byte gender; public int startDate; public int endDate; public String subset; //keeps all possible extended attributes public ExtendedData extended = new ExtendedData(DataDefinition.PATIENT); //population definition public int populationStartDate; public int populationEndDate; public boolean inPopulation; //cohort definition public int cohortStartDate; public int cohortEndDate; public boolean inCohort; public List<Cohort> cohorts; //patient history public List<Event> events; public List<Prescription> prescriptions; public List<Measurement> measurements; public List<Prescription> originalPrescriptions; //all the column indexes for the extended data public int indexPracticeId = ExtendedData.NO_DATA; private int anonymizedPatientID; /** * Retrieves and sets the indexes of all eventual * extended data columns for this patient. This is * used in order to manipulate the extended data based * on indexes and not hard coded strings. */ private void setExtendedDataIndexes(){ this.indexPracticeId = this.extended.getIndexOfAttribute(DataDefinition.PATIENT_PRACTICE_ID); } //CONSTRUCTORS /** * Basic constructor. 
 */
public Patient(){
	this.subset = DataDefinition.DEFAULT_SUBSET_ID;
	this.events = new ArrayList<Event>();
	this.prescriptions = new ArrayList<Prescription>();
	this.measurements = new ArrayList<Measurement>();
	this.originalPrescriptions = new ArrayList<Prescription>();
	this.cohorts = new ArrayList<Cohort>();
	if (this.extended == null)
		this.extended = new ExtendedData(DataDefinition.PATIENT);
	setExtendedDataIndexes();
}

/**
 * Constructor initializing a patient object with an input patient object.
 * The patient history is allowed to be null. It is to be used in order to
 * produce copies of a patient.
 * @param patient - the patient to copy
 */
public Patient(Patient patient){
	//copy patient details
	this.ID = patient.ID;
	this.birthDate = patient.birthDate;
	this.gender = patient.gender;
	this.startDate = patient.startDate;
	this.endDate = patient.endDate;
	this.subset = patient.subset;
	this.extended = new ExtendedData(patient.extended);
	//copy patient history
	if (patient.getEvents() != null) {
		this.events = new ArrayList<Event>();
		for (Event event : patient.getEvents())
			this.events.add(new Event(event));
	} else {
		this.events = null;
	}
	if (patient.getPrescriptions() != null) {
		this.prescriptions = new ArrayList<Prescription>();
		for (Prescription prescription : patient.getPrescriptions())
			this.prescriptions.add(new Prescription(prescription));
	} else {
		this.prescriptions = null;
	}
	if (patient.getMeasurements() != null) {
		this.measurements = new ArrayList<Measurement>();
		for (Measurement measurement : patient.getMeasurements())
			this.measurements.add(new Measurement(measurement));
	} else {
		this.measurements = null;
	}
	if (patient.getOriginalPrescriptions() != null) {
		this.originalPrescriptions = new ArrayList<Prescription>();
		//fix: copy from the original prescriptions; the original code iterated
		//getPrescriptions() here, silently replacing the original history
		for (Prescription prescription : patient.getOriginalPrescriptions())
			this.originalPrescriptions.add(new Prescription(prescription));
	} else {
		this.originalPrescriptions = null;
	}
	if (patient.getCohorts() != null) {
		this.cohorts = new ArrayList<Cohort>();
		for (Cohort cohort : patient.getCohorts())
			this.cohorts.add(new Cohort(cohort));
	} else {
		this.cohorts = null;
	}
	setModifierDefaults();
	setExtendedDataIndexes();
}

/**
 * Constructor initializing the patient details and its history.
 * The list of events, prescriptions or measurements is allowed to be null.
 * @param patient - the patient to process
 * @param events - the list of events for this patient; null allowed
 * @param prescriptions - the list of prescriptions for this patient; null allowed
 * @param measurements - the list of measurements for this patient; null allowed
 */
public Patient(Patient patient, List<Event> events, List<Prescription> prescriptions, List<Measurement> measurements){
	this.ID = patient.ID;
	this.birthDate = patient.birthDate;
	this.gender = patient.gender;
	this.startDate = patient.startDate;
	this.endDate = patient.endDate;
	this.subset = patient.subset;
	this.extended = patient.extended;
	this.events = events != null && events.size() > 0 ? events : null;
	this.prescriptions = prescriptions != null && prescriptions.size() > 0 ? prescriptions : null;
	this.measurements = measurements != null && measurements.size() > 0 ? measurements : null;
	this.originalPrescriptions = prescriptions != null ? new ArrayList<Prescription>() : null;
	// Copy the prescriptions to the original prescriptions
	if (this.prescriptions != null) {
		for (Prescription prescription : this.prescriptions)
			this.originalPrescriptions.add(new Prescription(prescription));
	}
	this.cohorts = new ArrayList<Cohort>();
	setModifierDefaults();
	setExtendedDataIndexes();
}

/**
 * Constructor of a patient object from an input file line.
 * Each attribute of the object is brought to a compressed form,
 * making use of conversion methods and/or look-up tables.
 * Used in the PatientObjectCreator class.
* @param attributes - a line from the patient input file * @param patientsFile - the patient input file containing all formatting details (e.g., data order, date format) */ public Patient(String [] attributes, InputFile patientsFile){ this.ID = attributes[patientsFile.getDataOrder()[Patient.COLUMN_PATIENT_ID]]; this.gender = AttributeChecker.checkGender(attributes[patientsFile.getDataOrder()[Patient.COLUMN_GENDER]].trim()); this.birthDate = DateUtilities.dateToDays(attributes[patientsFile.getDataOrder()[Patient.COLUMN_BIRTHDATE]].trim(), patientsFile.getDateFormat()); this.startDate = DateUtilities.dateToDays(attributes[patientsFile.getDataOrder()[Patient.COLUMN_START_DATE]].trim(), patientsFile.getDateFormat()); this.endDate = DateUtilities.dateToDays(attributes[patientsFile.getDataOrder()[Patient.COLUMN_END_DATE]].trim(), patientsFile.getDateFormat()); //extended data if (this.extended == null) this.extended = new ExtendedData(DataDefinition.PATIENT); if (patientsFile.hasExtendedData()){ this.extended.setData(this.extended.setExtendedAttributesFromInputFile(attributes)); } this.subset = patientsFile.getSubsetIndex() != -1 ? attributes[patientsFile.getSubsetIndex()].trim() : DataDefinition.DEFAULT_SUBSET_ID; this.events = new ArrayList<Event>(); this.prescriptions = new ArrayList<Prescription>(); this.measurements = new ArrayList<Measurement>(); this.originalPrescriptions = new ArrayList<Prescription>(); this.cohorts = new ArrayList<Cohort>(); setModifierDefaults(); setExtendedDataIndexes(); } /** * Constructor of a patient object from a patient object file. * Used in the PatientUtilities class. Note: +1 shift of columns due to subset ID. 
* @param attributes - the attributes of the patient object */ public Patient(String[] attributes){ this.subset = attributes[0]; this.ID = attributes[COLUMN_PATIENT_ID + 1]; this.gender = Byte.valueOf(attributes[COLUMN_GENDER + 1].trim()); this.birthDate = Integer.valueOf(attributes[COLUMN_BIRTHDATE + 1]); this.startDate = Integer.valueOf(attributes[COLUMN_START_DATE + 1]); this.endDate = Integer.valueOf(attributes[COLUMN_END_DATE + 1]); //extended data if (this.extended == null) this.extended = new ExtendedData(DataDefinition.PATIENT); this.extended.setData(this.extended.setExtendedAttributesFromPOF(attributes)); this.events = new ArrayList<Event>(); this.prescriptions = new ArrayList<Prescription>(); this.measurements = new ArrayList<Measurement>(); this.originalPrescriptions = new ArrayList<Prescription>(); this.cohorts = new ArrayList<Cohort>(); setModifierDefaults(); setExtendedDataIndexes(); } //TO STRING METHODS @Override /** * Returns a string representation of the patient details, * having its gender under a string representation. * @return - the patient details separated by comma and * a newline character appended. */ public String toString(){ return (subset+","+this.getPracticeIDAsString()+","+ID+","+birthDate+","+ convertGender(gender)+","+startDate+","+endDate+System.lineSeparator()); } /** * Returns a string representation of the patient details with the eventual * extended data in its compressed form. * @return - the patient details separated by comma and its extended data, with a newline character appended */ public String toStringWithExtendedData(){ return (subset+","+ID+","+birthDate+","+convertGender(gender)+","+startDate+","+endDate+ toStringExtendedDataCompressed()+System.lineSeparator()); } /** * Returns the patient details with or without newline character. 
* @param newLine - true if a new line character should be appended at the end * @return - a string representation of the patient details separated by comma, * with or without a new line character appended. */ public String toString(boolean newLine){ if (newLine) return this.toString(); else return (subset+","+ID+","+birthDate+","+convertGender(gender)+","+startDate+","+endDate); } /** * Returns the patient details and its extended data with or without newline character. * @param newLine - true if a new line character should be appended at the end * @return - a string representation of the patient details and its extended * data, separated by comma, with or without a new line character appended. */ public String toStringWithExtendedData(boolean newLine){ if (newLine) return (subset+","+ID+","+birthDate+","+convertGender(gender)+","+startDate+","+endDate+ toStringExtendedDataCompressed()+System.lineSeparator()); else return (subset+","+ID+","+birthDate+","+convertGender(gender)+","+startDate+","+endDate+ toStringExtendedDataCompressed()); } /** * Used to output patient details under a compressed form to file * during the compression step. * @param flag - an indication about the type of the data, as it appears in DataDefinition. * @return - a compressed string representation of the patient details */ public String toStringWithFlag(short flag){ return (subset+","+ID+","+birthDate+","+gender+","+startDate+","+endDate+ toStringExtendedDataCompressed()+ (flag != -1 ? ","+flag : "")); } /** * Returns the patient details under a string representation * with all the dates in YYYYMMDD format. * @return - a string representation of the patient details separated by comma, including * its extended data. 
*/ public String toStringConvertedDate(){ return (subset+","+ID+","+DateUtilities.daysToDate(birthDate)+","+convertGender(gender)+"," +DateUtilities.daysToDate(startDate)+","+ DateUtilities.daysToDate(endDate))+ toStringExtendedData(); } /** * Return a string with all details of the patient, including its history: * patient attributes and extended data, events, prescriptions, measurements. * @return - a string representation of the patient details for debug purposes */ public String toStringDetails(){ StrBuilder s = new StrBuilder(); s.appendln("Patient"); s.appendln("---------------"); s.appendln("Id:\t\t\t\t\t"+ID); s.appendln("Birth date:\t\t\t"+DateUtilities.daysToDate(birthDate)); s.appendln("Gender:\t\t\t\t"+convertGender(gender)); s.appendln("Patient start:\t\t"+DateUtilities.daysToDate(startDate)); s.appendln("Patient end:\t\t"+DateUtilities.daysToDate(endDate)); s.appendln("Population start:\t"+DateUtilities.daysToDate(populationStartDate)); s.appendln("Population end:\t\t"+DateUtilities.daysToDate(populationEndDate)); s.appendln("In Population:\t\t"+inPopulation); s.appendln("Cohort start:\t\t"+DateUtilities.daysToDate(cohortStartDate)); s.appendln("Cohort end:\t\t\t"+DateUtilities.daysToDate(cohortEndDate)); s.appendln("In cohort:\t\t\t"+inCohort); s.appendln(""); if (hasExtended()){ s.appendln("Extended"); s.appendln("---------------"); for (Integer extendedColumn : this.extended.getData().keySet()) s.appendln(this.extended.getAttributeName(extendedColumn)+"\t\t"+ this.extended.getAttributeAsString(extendedColumn)); s.appendln(""); } if (hasEvents()){ s.appendln("Events"); s.appendln("---------------"); for (Event e : this.getEvents()) s.appendln(e.toString()); s.appendln(""); } s.appendln(""); if (hasPrescriptions()){ s.appendln("Prescriptions"); s.appendln("---------------"); for (Prescription p : this.getPrescriptions()) s.appendln(p.toString()); s.appendln(""); } s.appendln(""); if (hasPrescriptions()){ s.appendln("Original Prescriptions"); 
s.appendln("---------------"); for (Prescription p : this.getOriginalPrescriptions()) s.appendln(p.toString()); s.appendln(""); } if (hasMeasurements()){ s.appendln("Measurements"); s.appendln("---------------"); for (Measurement m : this.getMeasurements()) s.appendln(m.toString()); s.appendln(""); } if (hasCohorts()){ s.appendln("Cohort"); s.appendln("---------------"); for (Cohort c : this.getCohorts()) s.appendln(c.toString()); s.appendln(""); } return s.toString(); } /** * Returns the patient details with the dates in YYYYMMDD format. * @param newLine - true if a newline character should be appended * @return - a string representation of the patient details, with or without * a new line character appended */ public String toStringConvertedDate(boolean newLine){ if (newLine){ return (subset+","+ID+","+DateUtilities.daysToDate(birthDate)+","+convertGender(gender)+"," +DateUtilities.daysToDate(startDate)+","+ DateUtilities.daysToDate(endDate)+ toStringExtendedData()+System.lineSeparator()); } else { return (subset+","+ID+","+DateUtilities.daysToDate(birthDate)+","+convertGender(gender)+"," +DateUtilities.daysToDate(startDate)+","+ DateUtilities.daysToDate(endDate))+ toStringExtendedData(); } } /** * Returns the string representation of this patient formatted for * data export to CSV file, including its extended data (if any) * Note that the export order should be the same as in data definition. * * @see DataDefinition * @return - a string representation of this patient's attributes separated by comma. */ public String toStringForExport(){ return ID + "," + DateUtilities.daysToDate(birthDate) + "," + convertGender(gender) + "," + DateUtilities.daysToDate(startDate) + "," + DateUtilities.daysToDate(endDate) + toStringExtendedData(); } /** * Returns the string representation of this patient formatted for * data export to CSV file, including its extended data (if any) and * population and cohort start and end dates. 
* Note that the export order should be the same as in data definition. * * @see DataDefinition * @return - a string representation of this patient's attributes separated by comma. */ public String toStringForExportLong(boolean withID){ String record = toStringForExport() + "," + (inPopulation ? DateUtilities.daysToDate(getPopulationStartDate()) : "") + "," + (inPopulation ? DateUtilities.daysToDate(getPopulationEndDate()) : "") + "," + (inCohort ? DateUtilities.daysToDate(getCohortStartDate()) : "") + "," + (inCohort ? DateUtilities.daysToDate(getCohortEndDate()) : ""); if (!withID) { record = record.substring(record.indexOf(",") + 1); } return record; } /** * Returns the patient details with spaces in order to be aligned * if multiple patients output. Note that the extended data is not included. * @return - a string representation of the patient details */ public String toStringAligned(){ return ID+" "+ DateUtilities.daysToDate(birthDate)+" "+ (convertGender(gender).equals("F") ? convertGender(gender)+" " : convertGender(gender))+" "+ DateUtilities.daysToDate(startDate)+" "+ DateUtilities.daysToDate(endDate)+" "; } /** * Returns the part of string that will contain the extended data (if any) for this patient in its uncompressed form. * This string is to be added to the first part containing the mandatory data. It is used in the toString methods. * @return - a formatted string containing the extended data columns separated by comma */ protected String toStringExtendedData(){ String s = ""; for (Integer extColumnIndex : this.extended.getKeySet()) s+= ","+(this.extended.get(extColumnIndex) != ExtendedData.NO_DATA ? (this.extended.getAttributeLookUp(extColumnIndex).get(this.extended.get(extColumnIndex))) : ""); return s; } /** * Returns the part of string that will contain the extended data for this episodeType (if any) in its compressed form. * This string is to be added to the first part containing the mandatory data. It is used in the toString methods. 
* Note that it makes use of the look-up tables created for the extended data. * @return - a formatted string containing the extended data columns separated by comma */ protected String toStringExtendedDataCompressed(){ String s = ""; for (Integer extColumnIndex : this.extended.getKeySet()) s+= ","+this.extended.getData().get(extColumnIndex); return s; } //COMPARATORS @Override /** * Basic comparator on the patient identifier. * @param patient - the patient to compare this patient with. * @return - 0 if both patient IDs are the same; 1 if this patient ID is superior; * -1 if this patient ID is inferior to the parameter patient */ public int compareTo(Patient patient) { return this.ID.compareTo(patient.ID); } /** * Inner class used in order to perform sorting based on subset ID. * * @author MG * */ public static class CompareSubset implements Comparator<Patient>{ @Override public int compare(Patient p1, Patient p2) { return p1.subset.compareTo(p2.subset); } } //GETTERS AND SETTERS /** * Forces population and cohort start and end dates to patient start and end. * The patient is considered by default in the population and in the cohort * until the modifiers are run. */ private void setModifierDefaults(){ //set defaults for modifier routines this.populationStartDate = startDate; this.populationEndDate = endDate; this.inPopulation = true; this.cohortStartDate = startDate; this.cohortEndDate = endDate; this.inCohort = true; } /** * Returns the age of the patient in the beginning of the observation period. * @return - the age of the patient in days as difference between the start date and birth date */ public int getAgeAtStartDate(){ return (this.startDate - this.birthDate); } /** * Returns the age of this patient at the end of the observation period. 
* @return - the age of the patient in days as difference between the end date and birth date */ public int getAgeAtEndDate(){ return (this.endDate - this.birthDate); } /** * Returns the age of the patient in the beginning of the cohort. * @return - the age of the patient in days as difference between the cohort start date and patient birth date */ public int getAgeAtCohortStartDate(){ return (this.cohortStartDate - this.birthDate); } /** * Returns the age of this patient at the end of the cohort. * @return - the age of the patient in days as difference between the cohort end date and patient birth date */ public int getAgeAtCohortEndDate(){ return (this.cohortEndDate - this.birthDate); } /** * Returns the age of this patient in the beginning of a calendar year. * It assumes that the patient is active during that year. * @param year - the year of reference * @return - the age of this patient in days as difference between the * 1st of January of year and patient birthDate; if negative, -1 is returned */ public int getAgeInBeginningOfYear(int year){ int nbDays = DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD) - this.birthDate; return (nbDays < 0 ? -1 : nbDays); } /** * Returns the age of this patient at a certain date. * @param date - the date of interest * @return - the age of the patient in days as difference between the patient birth date and date; * if negative, -1 is returned */ public int getAgeAtDate(int date){ return (date - birthDate > 0 ? date - birthDate : -1); } /** * Returns the age of this patient in years at a certain date. * It takes into consideration if date is prior or post the birthday celebration. 
 * @param date - the date of interest
 * @return - the age of this patient in years; if negative, -1 is returned
 */
public int getAgeAtDateInYears(int date){
	//split the birth date and date into components
	int[] birthDateComponents = DateUtilities.daysToDateComponents(birthDate);
	int[] dateComponents = DateUtilities.daysToDateComponents(date);
	//get the number of years
	int age = dateComponents[0] - birthDateComponents[0];
	//check if the month of date is before the celebration month
	if (dateComponents[1] < birthDateComponents[1])
		//then subtract one year
		age--;
	//or, in the same month, if the celebration day was not reached yet
	else if (dateComponents[1] == birthDateComponents[1])
		if (dateComponents[2] < birthDateComponents[2])
			//then subtract one year
			age--;
	return age < 0 ? -1 : age;
}

/**
 * Returns the age as double taking into account the leap years using the SAS method:
 *
 * if n365 equals the number of days between the start and end dates in a 365 day year,
 * and n366 equals the number of days between the start and end dates in a 366 day year,
 * the YRDIF calculation is computed as YRDIF=n365/365.0 + n366/366.0.
 * This calculation corresponds to the commonly understood ACT/ACT day count basis that is
 * documented in the financial literature.
* * double checked with SAS YRDIF function * @param date - the date of interest * @return - the age of this patient at date, under a double representation */ public double getAge(int date){ int[] birthDateComponents = DateUtilities.daysToDateComponents(birthDate); int[] dateComponents = DateUtilities.daysToDateComponents(date); int n365 = 0; int n366 = 0; // Determine number of days in first year int nrDaysFirstYear = 0; if (dateComponents[0]>birthDateComponents[0]) nrDaysFirstYear = DateUtilities.dateToDays(birthDateComponents[0]+"1231",DateUtilities.DATE_ON_YYYYMMDD) - DateUtilities.dateToDays(birthDateComponents)+1; else nrDaysFirstYear = date - DateUtilities.dateToDays(birthDateComponents); if (DateUtilities.isLeapYear(birthDateComponents[0])) n366 = nrDaysFirstYear; else n365 = nrDaysFirstYear; // Add all the remaining day for (int i=birthDateComponents[0]+1;i<=dateComponents[0];i++){ //not at the last year yet then add full year if (i!=dateComponents[0]){ if (DateUtilities.isLeapYear(i)) n366 = n366 + 366; else n365 = n365 + 365; } else { int nrDaysLastYear = date - DateUtilities.dateToDays(dateComponents[0]+"0101",DateUtilities.DATE_ON_YYYYMMDD); if (DateUtilities.isLeapYear(dateComponents[0])) n366 = n366 + nrDaysLastYear; else n365 = n365 + nrDaysLastYear; } } return n365/365.0 + n366/366.0; } /** * Returns the length of the observation period of this patient. * @return - the number of days between startDate and endDate */ public int getPatientTime(){ return (this.endDate - this.startDate); } /** * Returns the length of the period this patient is in population. * @return - the number of days between the population start date and population end date */ public int getPopulationTime(){ return (this.populationEndDate - this.populationStartDate); } /** * Returns the length of the cohort default period of this patient. 
 * @return - the number of days between start date and end date of the cohort period
 */
public int getCohortTime(){
	return (this.cohortEndDate - this.cohortStartDate);
}

/**
 * Returns the length of the cohort period of this patient for a specific cohort.
 * @param nr - the number of the cohort
 * @return - the number of days between start date and end date of the cohort period;
 * if cohort nr does not exist, -1 is returned
 */
public int getCohortTime(int nr){
	int result = -1;
	if (nr < cohorts.size()){
		Cohort cohort = cohorts.get(nr);
		result = cohort.cohortEndDate - cohort.cohortStartDate;
	}
	return result;
}

/**
 * Returns the length of the observation period of this patient before the start of a certain year.
 * It checks that the endDate is not prior to the beginning of the year.
 * @param year - the calendar year of interest
 * @return - the number of days between the start date of the patient and the 1st of January of year;
 * if it results in a negative value, the length of the observation period is considered 0 days
 */
public int getPatientTimeBeforeStartOfYear(int year){
	int patientTime = Math.min(this.endDate, DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD)) - this.startDate;
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the length of the observation period of this patient after the start of a certain year,
 * if active on the 1st of January; if not, its start date is considered.
 * @param year - the calendar year of interest
 * @return - the number of days between the 1st of January of the specified year and the end date of the patient;
 * if it results in a negative value, the length of the observation period is considered 0 days
 */
public int getPatientTimeAfterStartOfYear(int year){
	int patientTime = this.endDate - Math.max(this.startDate, DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD));
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the length of the observation period of the patient in a certain calendar year.
 * If the start date of the patient is after the 1st of January of year, then the difference between the start date
 * and the last day of year is considered.
 * @param year - the calendar year for which the patient time is to be retrieved
 * @return - the number of days between the 1st of January of the specified year and the 31st of December of the same year;
 * if the result is negative, 0 is returned
 */
public int getPatientTimeInYear(int year){
	int nbDaysAtStartOfYear = DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD);
	int nbDaysAtEndOfYear = DateUtilities.dateToDays((year+1)+"0101",DateUtilities.DATE_ON_YYYYMMDD);
	int patientTime = (Math.min(endDate, nbDaysAtEndOfYear) - Math.max(startDate, nbDaysAtStartOfYear));
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the amount of patient time in days between the
 * 1st of January of year and the birthday of this patient.
 * @param year - the year of interest
 * @return - the patient time until the birthday of the patient that year.
 * If it results in a negative value, then zero is returned
 */
public int getPatientTimeInYearBeforeBirthday(int year){
	int patientTime = Math.min(getBirthdayInYear(year), this.endDate) -
		Math.max(this.startDate, DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD));
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the amount of patient time in days between the
 * 1st of January of year and the birthday of this patient in that year.
 * Note that the purpose of this method, as opposed to getPatientTimeInYearBeforeBirthday(year),
 * is purely optimization. It receives the birthday as parameter and does not have to calculate it at
 * each method call.
 * @param year - the year of interest
 * @param birthday - the number of days from the first legal date to the birthday celebration that year
 * @return - the patient time until the birthday of the patient that year.
 * If it results in a negative value, then zero is returned
 */
public int getPatientTimeInYearBeforeBirthday(int year, int birthday){
	int patientTime = Math.min(birthday, this.endDate) -
		Math.max(this.startDate, DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD));
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the amount of patient time in days between
 * the birthday of this patient and the end of year.
 * @param year - the year of interest
 * @return - the patient time between the birthday of the patient that year and the end of year.
 * If it results in a negative value, then zero is returned.
 * Note that the actual birthday is added in this interval and NOT in getPatientTimeInYearBeforeBirthday(year)
 */
public int getPatientTimeInYearAfterBirthday(int year){
	int patientTime = (Math.min(this.endDate, DateUtilities.dateToDays(year+"1231",DateUtilities.DATE_ON_YYYYMMDD)) -
		Math.min(this.endDate, Math.max(getBirthdayInYear(year), this.startDate))) + 1;
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the amount of patient time in days between
 * the birthday of this patient during this year and the end of this year.
 * Note that the purpose of this method, as opposed to getPatientTimeInYearAfterBirthday(year),
 * is purely optimization. It receives the birthday as parameter and does not have to calculate it at
 * each method call.
 * @param year - the year of interest
 * @param birthday - the number of days from the first legal date to the birthday celebration that year
 * If it results in a negative value, then zero is returned.
 * Note that the actual birthday is added in this interval and NOT in getPatientTimeInYearBeforeBirthday(year)
 * @return - the patient time between the birthday of the patient that year and the end of year.
 */
public int getPatientTimeInYearAfterBirthday(int year, int birthday){
	int patientTime = (Math.min(this.endDate, DateUtilities.dateToDays(year+"1231",DateUtilities.DATE_ON_YYYYMMDD)) -
		Math.min(this.endDate, Math.max(birthday, this.startDate))) + 1;
	return patientTime < 0 ? 0 : patientTime;
}

/**
 * Returns the two (if applicable) different age values for this patient during a certain calendar year
 * and the observation time for each one of the two ages. The result is pair-based. First age with
 * its observation time (e.g., result[0] and result[1]) and second age with its observation time
 * (e.g., result[2] and result[3]).
 * @param year - the calendar year for which the observation time and ages are requested
 * @return - an array containing two pairs of possible ages during year
 */
//NOT USED
public int[] getPatientTimePerAgesInCalendarYear(int year){
	int[] result = new int[4];

	//get nb days for first day of year and last day of year
	int nbDaysAtStartOfYear = DateUtilities.dateToDays(year+"0101",DateUtilities.DATE_ON_YYYYMMDD);
	int nbDaysAtEndOfYear = DateUtilities.dateToDays((year+1)+"0101",DateUtilities.DATE_ON_YYYYMMDD);

	//get ages of patient during that year
	result[0] = (nbDaysAtStartOfYear - birthDate);
	result[2] = (nbDaysAtEndOfYear - birthDate);

	//get the observation time range for year
	int observationStart = Math.max(this.startDate, nbDaysAtStartOfYear);
	int observationEnd = Math.min(this.endDate, nbDaysAtEndOfYear);

	//compute the celebration day of the birth date
	int[] birthdateComponents = DateUtilities.daysToDateComponents(this.birthDate);
	birthdateComponents[0] = year;
	int birthdayCelebration = DateUtilities.dateToDays(birthdateComponents);

	//put observation times per age during year; iff > 0;
	int patientTimeBeforeCelebration, patientTimeAfterCelebration;
	patientTimeBeforeCelebration = (birthdayCelebration - observationStart);
	patientTimeAfterCelebration = (observationEnd - birthdayCelebration);
	result[1] = patientTimeBeforeCelebration > 0 ?
		patientTimeBeforeCelebration : 0;
	result[3] = patientTimeAfterCelebration > 0 ?
		patientTimeAfterCelebration : 0;

	return result;
}

/**
 * Returns the patient time in days before a certain date
 * as the difference between the date and the patient start date.
 * If the result is negative, then zero is returned.
 * @param date - the date of interest
 * @return - the patient time in days before date
 */
public int getPatientTimeBeforeDateInDays(int date){
	return (date - this.startDate < 0 ? 0 : date - this.startDate);
}

/**
 * Returns the patient time in days from the cohort start until date.
 * If the result is negative, then zero is returned.
 * @param date - the date of interest
 * @return - the patient time in days from cohort start to date
 */
public int getPatientTimeFromCohortStartInDays(int date){
	return (date - this.cohortStartDate < 0 ? 0 : date - this.cohortStartDate);
}

/**
 * Returns the patient time in months before a certain date
 * as the difference between the date and the patient start date,
 * divided by the average number of days per month.
 * If the result is negative, then zero is returned.
 * @param date - the date of interest
 * @return - the patient time in months before the date
 */
public double getPatientTimeBeforeDateInMonths(int date){
	return (date - this.startDate < 0 ? 0 : (date - this.startDate)/DateUtilities.daysPerMonth);
}

/**
 * Returns the patient time in days after a certain date
 * as the difference between the patient end date and the date.
 * If the result is negative, then zero is returned.
 * @param date - the date of interest
 * @return - the patient time in days after the date
 */
public int getPatientTimeAfterDateInDays(int date){
	return (this.endDate - date < 0 ? 0 : this.endDate - date);
}

/**
 * Returns the patient time in days between a certain date
 * and the end of the cohort.
 * If the result is negative, then zero is returned.
 * @param date - the date of interest
 * @return - the patient time in days after date and until cohort end.
 */
public int getPatientTimeUntilCohortEndInDays(int date){
	return (this.cohortEndDate - date < 0 ? 0 : this.cohortEndDate - date);
}

/**
 * Returns the patient time in months after a certain date
 * as the difference between the patient end date and the date,
 * divided by the average number of days per month.
 * If the result is negative, then zero is returned.
 * @param date - the date of interest
 * @return - the patient time in months after date
 */
public double getPatientTimeAfterDateInMonths(int date){
	return (this.endDate - date < 0 ? 0 : (this.endDate - date)/DateUtilities.daysPerMonth);
}

/**
 * Returns the number of days from the first legal date until the
 * birthday "celebration" of this patient in the year passed as argument.
 * If year is not a leap year and the patient is born on a leap day,
 * then the birthday is set to the 1st of March of year.
 * @param year - the year of interest
 * @return - the number of days representing the birthday celebration in year
 */
public int getBirthdayInYear(int year){
	int[] birthdateComponents = DateUtilities.daysToDateComponents(this.birthDate);
	boolean bornOnLeapDay = DateUtilities.isLeapYear(birthdateComponents[0]) &&
		(birthdateComponents[1] == 2 && birthdateComponents[2] == 29);
	birthdateComponents[0] = year;
	//set the birthday as 1st of March if the patient is born on a leap day and year is not a leap year
	if (bornOnLeapDay && !DateUtilities.isLeapYear(year)){
		birthdateComponents[1] = 3;
		birthdateComponents[2] = 1;
	}
	return DateUtilities.dateToDays(birthdateComponents);
}

//SPECIFIC METHODS

/**
 * Converts the gender of a patient from a byte form to a string representation.
 * It is based on constant values stored in the DataDefinition class.
* @param gender - the gender of this patient under byte representation * @return - a corresponding String representation of the gender */ public static String convertGender(byte gender){ switch (gender){ case DataDefinition.FEMALE_GENDER : return "F"; case DataDefinition.MALE_GENDER : return "M"; } return "U"; } /** * Checks if the patient is born on a leap day (i.e., 29th of February). * @return - true if the patient is born on a leap day; false otherwise */ public boolean isBornOnLeapDay(){ int[] birthdateComponents = DateUtilities.daysToDateComponents(this.birthDate); return DateUtilities.isLeapYear(birthdateComponents[0]) && (birthdateComponents[1] == 2 && birthdateComponents[2] == 29); } /** * Sorts the measurements of this patient by date. */ public void sortMeasurements(){ Collections.sort(this.measurements); } /** * Sorts the events of this patient by date. */ public void sortEvents(){ Collections.sort(this.events); } /** * Sorts the prescriptions of this patient by date. */ public void sortPrescriptions(){ Collections.sort(this.prescriptions); } /** * Sorts the original prescriptions of this patient by date. */ public void sortOriginalPrescriptions(){ Collections.sort(this.originalPrescriptions); } /** * Sorts the cohorts of this patient by cohort startdate. */ public void sortCohort(){ Collections.sort(this.cohorts); } /** * Verifies if date is between the cohort start and the cohort end. * @param date - the date to be checked * @return - true if the date is in the cohort; false otherwise */ public boolean dateInCohort(int date) { return (date >= cohortStartDate) && (date < cohortEndDate); } /** * Verifies if date is between the cohort start and the cohort end * for a selected cohort * @param date - the date to be checked. 
 * End date is not included
 * @return - true if the date is in the cohort; false otherwise;
 * returns false if the cohort does not exist
 */
public boolean dateInCohort(int date, int nr) {
	boolean result = false;
	if (nr < cohorts.size()) {
		Cohort cohort = cohorts.get(nr);
		result = (date >= cohort.cohortStartDate) && (date < cohort.cohortEndDate);
	}
	return result;
}

/**
 * Checks if this is the first cohort of this type.
 * @param date - the date to be checked
 * @return - true if first, false otherwise.
 * If there is no cohort defined for that date, false is returned
 */
public boolean isInFirstCohortOfType(int date) {
	boolean result = true;
	String type = getCohortType(date);
	if (!type.equals("None")) {
		for (Cohort cohort : getCohorts()) {
			if (cohort.isInCohort(date))
				break;
			if (cohort.getType().equals(type)){
				result = false;
				break;
			}
		}
	} else
		result = false;
	return result;
}

/**
 * Returns the type label of the cohort the patient is in at date
 * @param date - the date to be checked. End date is not included
 * @return - label of the cohort type; "None" if no cohort is found
 */
public String getCohortType(int date) {
	String result = "None";
	for (Cohort cohort : this.getCohorts()) {
		if ((date >= cohort.cohortStartDate) && (date < cohort.cohortEndDate)) {
			result = cohort.getType();
			break;
		}
	}
	return result;
}

/**
 * Returns the start date of the cohort the patient is in at date
 * @param date - the date to be checked. End date is not included
 * @return - the date as int or -1 if no cohort is found
 */
public int getCohortStartDate(int date) {
	int result = -1;
	for (Cohort cohort : this.getCohorts()) {
		if ((date >= cohort.cohortStartDate) && (date < cohort.cohortEndDate)) {
			result = cohort.getCohortStartDate();
			break;
		}
	}
	return result;
}

/**
 * Returns the end date of the cohort the patient is in at date
 * @param date - the date to be checked. End date is not included
 * @return - the date as int or -1 if no cohort is found
 */
public int getCohortEndDate(int date) {
	int result = -1;
	for (Cohort cohort : this.getCohorts()) {
		if ((date >= cohort.cohortStartDate) && (date < cohort.cohortEndDate)) {
			result = cohort.getCohortEndDate();
			break;
		}
	}
	return result;
}

/**
 * Returns the sequence number of the cohort the patient is in at date
 * @param date - the date to be checked. End date is not included
 * @return - sequence number as int starting at 1, or -1 if no cohort is found
 */
public int getCohortSequenceNr(int date) {
	int result = -1;
	int sequenceNr = 1;
	for (Cohort cohort : this.getCohorts()) {
		if ((date >= cohort.cohortStartDate) && (date < cohort.cohortEndDate)) {
			result = sequenceNr;
			break;
		}
		sequenceNr++;
	}
	return result;
}

/**
 * Verifies if date is between the cohort start and the cohort end,
 * optionally with the cohort end date inclusive.
 * @param date - the date to be checked
 * @param endInclusive - true if the actual end day of the cohort should be included; false otherwise
 * @return - true if the date is in the cohort; false otherwise
 */
public boolean dateInCohort(int date, boolean endInclusive) {
	if (endInclusive)
		return (date >= cohortStartDate) && (date <= cohortEndDate);
	else
		return (date >= cohortStartDate) && (date < cohortEndDate);
}

/**
 * Verifies if date is between the population start and the population end.
 * Note that population start date is inclusive.
 * @param date - the date to be checked
 * @return - true if the date is in the population; false otherwise
 */
public boolean dateInPopulation(int date) {
	return (date >= populationStartDate) && (date < populationEndDate);
}

/**
 * Checks if the patient is active at the specified date.
 * It is not the same as isInPopulation() which is specific for
 * the PopulationDefinition modifier.
* @param days - the number of days from the first legal date until the date of interest * @return - true if the difference between the start date * of the patient and the number of days is positive; false otherwise */ public boolean isActive(int days){ return days - this.startDate >= 0; } //GETTERS AND SETTERS FOR OBJECT ATTRIBUTES public List<Event> getEvents() { return events; } public List<Prescription> getPrescriptions() { return prescriptions; } public List<Measurement> getMeasurements() { return measurements; } public List<Prescription> getOriginalPrescriptions() { return originalPrescriptions; } public ExtendedData getExtended() { return extended; } public List<Cohort> getCohorts() { return cohorts; } public void setAnonymizedPatientId(int anonymizedId) { this.anonymizedPatientID = anonymizedId; } public void setEvents(List<Event> events) { this.events = events; } public void setPrescriptions(List<Prescription> prescriptions) { this.prescriptions = prescriptions; } public void setMeasurements(List<Measurement> measurements) { this.measurements = measurements; } public void setOriginalPrescriptions(List<Prescription> prescriptions) { this.originalPrescriptions = prescriptions; } public void setCohorts(List<Cohort> cohorts) { this.cohorts = cohorts; } public void setExtended(ExtendedData extended) { this.extended = extended; } public boolean hasEvents(){ return this.events != null && this.events.size() > 0; } public boolean hasPrescriptions(){ return this.prescriptions != null && this.prescriptions.size() > 0; } public boolean hasMeasurements(){ return this.measurements != null && this.measurements.size() > 0; } public boolean hasOriginalPrescriptions(){ return this.originalPrescriptions != null && this.originalPrescriptions.size() > 0; } public boolean hasCohorts(){ return this.cohorts != null && this.cohorts.size() > 0; } public boolean hasExtended(){ return this.extended != null && this.extended.getData() != null && this.extended.getData().size() > 0; } 
public String getAnonymizedPatientId() { return Jerboa.unitTest ? ID : Integer.toString(anonymizedPatientID); } public String getGender(){ return convertGender(gender); } public int getBirthYear(){ return DateUtilities.getYearFromDays(birthDate); } public int getBirthMonth(){ return DateUtilities.getMonthFromDays(birthDate); } public int getBirthDate(){ return birthDate; } public int getStartYear(){ return DateUtilities.getYearFromDays(startDate); } public int getEndYear(){ return DateUtilities.getYearFromDays(endDate); } public int getPopulationStartDate() { return populationStartDate; } public void setPopulationStartDate(int populationStartDate) { this.populationStartDate = populationStartDate; } public int getPopulationEndDate() { return populationEndDate; } public void setPopulationEndDate(int populationEndDate) { this.populationEndDate = populationEndDate; } public boolean isInPopulation() { return inPopulation; } public void setInPopulation(boolean inPopulation) { this.inPopulation = inPopulation; } public int getCohortStartDate() { return cohortStartDate; } public void setCohortStartDate(int cohortStartDate) { this.cohortStartDate = cohortStartDate; } public int getCohortEndDate() { return cohortEndDate; } public void setCohortEndDate(int cohortEndDate) { this.cohortEndDate = cohortEndDate; } public String getPatientID(){ return ID; } public boolean isInCohort() { return inCohort; } public void setInCohort(boolean inCohort) { this.inCohort = inCohort; } /** * Returns the difference in days between the * end date and start date of this patient. * @return - the patient time in days; 0 if result is negative */ public int getPatientTimeInDays(){ return this.endDate > this.startDate ? this.endDate - this.startDate : 0; } /** * Returns the difference in months between the * end date and start date of this patient. Rounded down. * @return - the patient time in months; 0 if negative value */ public int getPatientTimeInMonths(){ return this.endDate > this.startDate ? 
(int)((this.endDate - this.startDate)/DateUtilities.daysPerMonth) : 0; } /** * Returns the difference in years between the * end date and start date of this patient. Rounded down. * @return - the patient time in years; 0 if negative */ public int getPatientTimeInYears(){ return this.endDate > this.startDate ? (int)((this.endDate - this.startDate)/DateUtilities.daysPerYear) : 0; } //GETTERS AND SETTERS FOR EXTENDED DATA /** * Returns the practice ID (if any) for this patient under a string representation. * Note that the practice ID is considered to be extended data. * @return - the practice ID as string */ public String getPracticeIDAsString(){ String practiceID = this.extended.getAttributeAsString(indexPracticeId); return practiceID.equals(DataDefinition.NO_DATA) ? "" : practiceID; } /** * Returns the practice ID (if any) for this patient in its compressed form. * Note that the practice ID is considered to be extended data. * @return - the practice ID as the index in the look-up table */ public int getPracticeID(){ return this.extended.getAttribute(indexPracticeId); } /** * Returns the anonymized practice ID (if any) for this patient in its compressed form. * Note that the practice ID is considered to be extended data. 
* @return - the practice ID as the index in the look-up table */ public int getAnonymizedPracticeID(){ return this.extended.getAttribute(indexPracticeId); } //MAIN METHOD FOR TESTING public static void main(String[] args) { Patient patient = new Patient(); patient.birthDate = DateUtilities.dateToDays("19441201",DateUtilities.DATE_ON_YYYYMMDD); int date = DateUtilities.dateToDays("20091201",DateUtilities.DATE_ON_YYYYMMDD); double age = patient.getAge(date); System.out.println("Age: " + age); //Age: 68.833565 } //WRAPPERS FOR EXTENDED DATA public boolean hasExtendedAttribute(int index){ return this.extended.hasAttribute(index); } public boolean hasExtendedAttribute(String extendedAttribute) { extendedAttribute = extendedAttribute.toLowerCase(); return (this.extended.hasAttribute(extendedAttribute) || this.extended.getAsIs(extendedAttribute) != null); } public Integer getExtendedAttribute(int index) { return this.extended.getAttribute(index); } public void setExtendedAttribute(int attributeIndex, int value) { if (this.extended.getAttribute(attributeIndex) != NO_DATA) this.extended.put(attributeIndex, value); } public void setExtendedAttribute(int attributeIndex, String value) { if (this.extended.getAttribute(attributeIndex) != NO_DATA) this.extended.put(attributeIndex, value); } public void setExtendedAttribute(String attributeName, String value) { attributeName = attributeName.toLowerCase(); if (this.extended.getAttribute(attributeName) == null || this.extended.getAttribute(attributeName) == NO_DATA) this.extended.setAsIs(attributeName, value); else { this.extended.setExtendedAttributePatient(this, attributeName, value); } } public Integer getExtendedAttribute(String attribute) { return this.extended.getAttribute(attribute.toLowerCase()); } public String getExtendedAttributeAsString(String attribute) { return extended.getAttributeAsString(attribute.toLowerCase()); } public String getExtendedAttributeAsString(int index){ return 
this.extended.getAttributeAsString(index); } public Integer getIndexOfExtendedAttribute(String attribute){ return this.extended.getIndexOfAttribute(attribute.toLowerCase()); } public int getIndex(DualHashBidiMap list, String value){ return this.extended.getIndex(list, value); } public String getValue(DualHashBidiMap list, int key){ return this.extended.getValue(list, key); } public DualHashBidiMap getExtendedAttributeLookUp(Integer extendedColumnIndex){ return this.extended.getAttributeLookUp(extendedColumnIndex); } public DualHashBidiMap getExtendedAttributeLookUp(String attribute){ return this.extended.getAttributeLookUp(attribute.toLowerCase()); } public HashMap<Integer, Integer> getExtendedData(){ return this.extended.getData(); } public String getExtendedAttributeName(Integer extendedColumnIndex){ return this.extended.getAttributeName(extendedColumnIndex); } }
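The ACT/ACT (SAS YRDIF) day-count convention implemented by getAge can be cross-checked with a small stand-alone sketch. The class below is hypothetical, not part of the code base, and uses java.time instead of the project's DateUtilities; it counts the days of the interval that fall in leap years with weight 1/366 and the rest with weight 1/365, exactly as the javadoc of getAge describes.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Illustrative stand-alone sketch of the SAS YRDIF (ACT/ACT) convention used by
// Patient.getAge. This is a cross-check only, not part of the Patient class.
public class YrdifSketch {

    public static double yrdif(LocalDate start, LocalDate end) {
        int n365 = 0;
        int n366 = 0;
        for (int year = start.getYear(); year <= end.getYear(); year++) {
            // Overlap of the half-open interval [start, end) with this calendar year
            LocalDate yearStart = LocalDate.of(year, 1, 1);
            LocalDate nextYearStart = yearStart.plusYears(1);
            LocalDate lo = start.isAfter(yearStart) ? start : yearStart;
            LocalDate hi = end.isBefore(nextYearStart) ? end : nextYearStart;
            int days = (int) ChronoUnit.DAYS.between(lo, hi);
            if (days <= 0) continue;
            if (yearStart.isLeapYear()) n366 += days; else n365 += days;
        }
        // ACT/ACT: leap-year days count as 1/366 of a year, other days as 1/365
        return n365 / 365.0 + n366 / 366.0;
    }

    public static void main(String[] args) {
        // A 65-year anniversary span; ACT/ACT gives a value very close to,
        // but not exactly, 65
        double age = yrdif(LocalDate.of(1944, 12, 1), LocalDate.of(2009, 12, 1));
        System.out.printf("YRDIF age: %.4f%n", age);
    }
}
```

Because whole leap years contribute exactly 366/366 and whole ordinary years 365/365, a span of full calendar years comes out as an exact integer, while anniversary-to-anniversary spans differ from an integer only in the fourth decimal place.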
Application of low frequency SQUID NMR to the ultra-low temperature study of atomically layered 3He films adsorbed on graphite

Low frequency pulsed NMR is a powerful technique for the investigation of the nuclear susceptibility and spin-dynamics of strongly correlated systems down to ultra-low temperatures. We describe a versatile broadband pulsed NMR spectrometer using a two-stage Superconducting Quantum Interference Device (SQUID). This instrument has enabled the investigation of the spin-dynamics of 3He films adsorbed on exfoliated graphite into the microkelvin temperature range. In prior work we reported a SQUID NMR study of two-dimensional ferromagnetism on a triangular lattice, where 3He films adsorbed on graphite provide an ideal model system. Here we describe the detection of the much weaker signals from strongly correlated fluid films in the second layer of 3He on graphite, using a spectrometer with improved sensitivity. We have measured the low frequency spin-spin relaxation and spin-lattice relaxation of two-dimensional 3He into the microkelvin range. These show an unusual time and frequency dependence. We also describe the use of the 13C-signal from the exfoliated graphite for thermometry, and the unusual properties of the spin-lattice relaxation of that system.

Introduction

Helium isotopes adsorbed onto the atomically flat basal planes of exfoliated graphite provide model systems for the investigation of quantum matter in two dimensions. For an overview and a review of earlier work see . These atomically layered films provide a quantum simulator to further the understanding of key contemporary problems in condensed matter physics, including: frustrated magnetism ; heavy-fermion quantum criticality ; the Mott-Hubbard transition .
NMR provides a powerful tool to study 3He films, but until now the focus of measurements at ultra-low temperatures has been on studies of a static property, the 3He nuclear magnetization, complementing measurements of heat capacity. While NMR studies of spin dynamics played a central role in the pioneering studies of 3He submonolayer films on graphite , their use in measurements at ultra-low temperatures has so far been limited. In this article we briefly survey our recent work applying SQUIDs to the detection of NMR signals at relatively low frequencies. The gains are: improved precision in the measurement of magnetization; the ability to conveniently measure as a function of magnetic field; and the ability to study spin-dynamics into the microkelvin range.

The Spectrometer

Our pulsed NMR spectrometer includes a helium adsorption cell and a home-made NMR magnet assembly, mounted at the demagnetization stage and mixing chamber of a dilution refrigerator respectively. The adsorption cell contains a stack of 64 exfoliated graphite sheets (Grafoil ), enclosed in a Stycast-1266 body . The cell is oriented such that the Grafoil sheets are normal to the static NMR field B0. In Grafoil, graphite platelets show a preferential alignment of their c-axis normal to the foil surface. The mosaic spread of these platelets was found to be ±19° for a sample from the same batch. In the cell, each Grafoil sheet is individually heatsunk to the experimental stage through a diffusion bonded silver foil which is connected to the central thermal anchor of the cell (see Fig. 1). Helium can be admitted to the cell via a narrow fill line that extends up to a room temperature gas handling system. The total surface area of the graphite substrate is 12 m², as determined from point-B of a 4He adsorption isotherm at 4.2 K .
The low field NMR magnet and coil assembly is contained within a 31 mm diameter, 105 mm long superconducting niobium shield and is of a similar design to those described previously . The NMR magnet is a 4-layer solenoid wound using 106 µm CuNi clad NbTi wire. The transmitter coil is wound around a 25 mm diameter former using similar wire, following the optimum saddle coil geometry described by Hoult and Richards , and has a field to current ratio of 0.275 mT A⁻¹. An overlapping superconducting Hechtfischer shield between the NMR magnet former and transmitter coil screens eddy-current transients in the copper induced by transmitter pulses. The receiver coil is wound directly onto the epoxy sample cell using NbTi wire and has a 10 mm square cross-section. The NMR cell is mechanically and thermally decoupled from the magnet assembly in order to reduce magneto-acoustic resonances and heat leaks into the nuclear demagnetization stage. Orthogonality of the transmitter and receiver coils is optimized whilst assembling the setup by cross-coupling measurements.

Figure 1. Schematics of the ultra-low temperature SQUID NMR setup. The NMR cell is mounted on the 100 µK plate of a nuclear demagnetization stage whereas the magnet assembly is supported by the mixing chamber. NMR signals are coupled into a two-stage DC-SQUID, which is controlled by room-temperature FLL electronics. Tipping pulses are generated with a PXI arbitrary waveform generator.

NMR signals are coupled into the SQUID amplifier by a superconducting flux transformer consisting of the receiver coil and the integrated SQUID input coil, which are connected by a superconducting twisted pair shielded by a niobium capillary. Both coils have matching inductances of Li = 1.8 µH to maximize the flux transfer ratio. This set-up provides a broadband detection scheme allowing measurements at different B0, and hence NMR frequencies. We use a two-stage DC-SQUID with integrated current limiter developed by PTB .
When operated in flux-locked loop (FLL) mode it serves as a low noise, high bandwidth, high gain flux-to-voltage converter. Control and feedback for the SQUID is provided by room-temperature Magnicon XXF-1 FLL electronics with 6 MHz bandwidth . The coupled energy sensitivity of our spectrometer is εc = SΦ Li / (2 Mi²), where Mi is the mutual inductance between the input coil and SQUID and SΦ the flux noise per unit bandwidth, with the SQUID cooled to 700 mK. NMR pulse sequences and signal acquisition are controlled by a National Instruments PXI system . Tipping pulses are produced by a PXI-5412 arbitrary waveform generator and the resultant NMR signals captured by a PXI-5922 digitizer with 20-bit resolution at 5 MS/s . A 4 kHz high-pass filter is used to attenuate low frequency eddy-current transients. The captured signals are background-subtracted with an off-resonance and a zero-field transient to eliminate magneto-acoustics around the Larmor frequency and low-frequency transients respectively. The signals are analyzed in the frequency domain by applying a conventional FFT. However, the exceptionally high signal-to-noise ratio also permits study directly in the time domain by using lock-in demodulation; this allows us to see the precise shape of the recorded free induction decays (FIDs).

2D 3He Adsorbed on Graphite at Ultra-Low Temperatures

This section describes pulsed NMR measurements on a monolayer 3He film sample with areal density 5.5 nm⁻² adsorbed onto graphite preplated by a monolayer of 4He, to highlight the versatility of our broadband spectrometer. We show magnetization, spin-spin and spin-lattice relaxation time measurements of this strongly correlated 2D Fermi fluid.

Nuclear Susceptibility

The nuclear susceptibility of the sample was measured at 100 kHz, at temperatures from 240 µK to 10 mK during the natural warmup of the demagnetization stage and from 6 mK to 300 mK using thermally stabilized data points.
The magnetization was determined from a stretched exponential fit to the envelope function of the FID in the time domain. Figure 2 shows the temperature dependence of the total magnetization. The data is fitted by a Curie-Weiss term plus a Dyugaev Fermi-fluid magnetization of the form

M(T) = c/(T − Θ) + C/√(T² + T_F**²),

where C is the Curie constant per spin, Θ the Curie temperature, T_F** the renormalized Fermi temperature, and c the amplitude of the first term, which phenomenologically describes an upturn above the Pauli susceptibility at the lowest temperatures. We find an effective Fermi temperature T_F** = 131 mK, consistent with previous measurements. We used the Landau Fermi liquid susceptibility enhancement χ/χ_0 = (m*/m)(1 + F_0^a)^{-1}, together with the relation (1 + F_0^a) = ((1 + √(1 − m/m*))/2)², which depends only weakly on the effective mass, to obtain an effective mass of 5.8 ± 0.2 m_0. This effective mass is consistent with data obtained by Greywall when density corrections due to first layer compression are taken into account, and also with the results of Lusher and Morhard.

Figure 3. A 750 µK measurement of T1 at 100 kHz made using an inversion-recovery pulse sequence. The vertical scaling is relative to the magnetization obtained using the same readout pulse, without the preparation pulse, before the measurement. The data (open circles) is fitted by an exponential decay (dashed blue line).

Spin-Spin Relaxation

The transverse relaxation times T2* were extracted from fits to the Fourier-transformed FIDs. Since T2* includes contributions from both intrinsic spin-spin interactions and from inhomogeneities in the static magnetic field, measurements of the intrinsic T2 are desirable. T2 was measured using the usual 90° − τ − 180° − τ spin-echo pulse sequence, requiring additional hardware: a Stanford DG535 delay generator provided a precise trigger source for two Agilent 33220A arbitrary waveform generators and for the PXI digitizer.
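A fit of this form — a phenomenological Curie-Weiss upturn plus a Dyugaev term interpolating between Curie (T ≫ T_F) and Pauli (T ≪ T_F) behaviour — can be sketched with SciPy. Every numerical value below is illustrative, not a measured one:

```python
import numpy as np
from scipy.optimize import curve_fit

def magnetization(T, c_w, theta, C, T_F):
    # Curie-Weiss upturn plus Dyugaev Fermi-fluid term (temperatures in kelvin).
    return c_w / (T - theta) + C / np.sqrt(T**2 + T_F**2)

T = np.linspace(0.24e-3, 0.3, 400)                      # 240 uK .. 300 mK
truth = magnetization(T, 1e-5, -1e-3, 1.0, 0.131)
rng = np.random.default_rng(0)
data = truth * (1 + 0.01 * rng.standard_normal(T.size))  # 1% noise

popt, _ = curve_fit(
    magnetization, T, data, p0=[1e-5, -1e-3, 1.0, 0.1],
    bounds=([0, -0.01, 0, 0.01], [1e-3, 0, 10, 1]),      # keep theta <= 0
)
print("fitted T_F** = %.0f mK" % (popt[3] * 1e3))
```

Bounding Θ to non-positive values keeps the Curie-Weiss denominator away from a pole inside the measured temperature window while fitting.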
An example of the quality of data obtained for a spin-echo measurement is shown in Fig. 4. Fig. 5 shows the spin-echo heights as a function of the delay, τ, for frequencies between 100 and 300 kHz. Here the decay of the magnetization was well described by a superposition of two exponential functions, as previously found for similar systems. We observe that the shorter component, T2S = 14.8 ms, is frequency independent, whilst the relaxation rate of the longer component increased non-linearly with increasing frequency (see inset of Fig. 5), in contrast to previous measurements at higher frequencies. Here the low-frequency T2 is governed by the intrinsic dipole-dipole interaction. The magnetization of the shorter component was an order of magnitude larger than that of the longer component.

Spin-Lattice Relaxation

Measurements of the spin-lattice relaxation time of adsorbed 3He have previously been used, for example, to measure exchange and dimensionality at T > 1 K. Fig. 3 shows a preliminary measurement of T1, using a 180° − τ − readout sequence, where the sample was at 750 µK before the measurement. Here the vertical scale is the magnetization normalized to its equilibrium value before each data point was taken, sampled with a 0.6° pulse. The 11% reduction of magnetization at long τ is due to overheating of the sample during the measurement and corresponds to a temperature increase of 80 µK, using the magnetization data in Fig. 2.

13C NMR Thermometry

In our setup we used the spin-1/2 13C nuclei present in the Grafoil substrate, at a natural abundance of 1.1%, as a thermometer below 1 mK. This enabled us to measure directly the temperature of the substrate, which should have the smallest possible temperature gradient to the helium sample. Within graphite, 13C forms a dilute nuclear paramagnet with weak dipole-dipole interactions and hence a narrow NMR line width (T2* = 2.4 ms).
An advantage of the broadband spectrometer is that it allows us to observe FIDs from both 3He and 13C. Fig. 6 shows the magnitude of an FFT of a 40 µs transmitter pulse at 100 kHz, demonstrating that there is significant spectral power over a broad frequency range. Fig. 6 also shows an FFT of the NMR response to this pulse where B_0 = 3.08 mT, so that the 3He is on resonance and the 13C resonance is close to the second side maximum of the pulse power spectrum. The magnetization of the 13C is obtained from a Lorentzian fit to the complex FFT of the FID. This is calibrated against the 3He melting curve thermometer (MCT) over the temperature range 5.5 mK to 20 mK using thermally stabilised data points. The MCT is self-calibrated against the superfluid A-transition. The thermal coupling of the 13C and the adsorbed helium sample is demonstrated in Fig. 7, where the magnetization of the 13C is compared to that of a pure bilayer 3He film of areal density 13.0 nm^-2, for which the first-layer paramagnetism strongly dominates at low temperatures. The linearity shows that both spin species were in thermal equilibrium over the entire temperature range. Since the pulse repetition rate of NMR thermometers at ultra-low temperatures is limited by their T1 relaxation time, we investigated the longitudinal recovery of 13C down to the lowest temperatures. The T1 recovery of the 13C, shown in Fig. 8, was measured using small-angle tipping, where the magnetization was initially prepared by a 180° pulse and the recovery was subsequently probed by repeated 2° tipping pulses. The recovery is fitted by a stretched exponential according to

M(τ) = M_eq[1 − exp(−(τ/T1)^α)],

where τ is the time between the preparation pulse and the readout pulse and α is a fitting parameter. Non-exponential relaxation is commonly observed where a continuous range of relaxation times exists.
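Fitting a stretched-exponential recovery of the form M(τ) = M_eq[1 − exp(−(τ/T1)^α)] can be sketched as follows; the delay times, T1 and noise level here are synthetic and purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(tau, M_eq, T1, alpha):
    # Stretched-exponential longitudinal recovery after a preparation pulse.
    return M_eq * (1 - np.exp(-(tau / T1) ** alpha))

tau = np.geomspace(1, 1e4, 60)                 # delay times in seconds
rng = np.random.default_rng(1)
data = recovery(tau, 1.0, 900.0, 2 / 3) + 0.01 * rng.standard_normal(tau.size)

popt, _ = curve_fit(
    recovery, tau, data, p0=[1.0, 500.0, 0.5],
    bounds=([0, 1, 0.1], [10, 1e5, 1]),
)
print("fitted alpha = %.3f" % popt[2])
```

Logarithmically spaced delays (`geomspace`) sample both the fast initial rise and the slow tail, which is what constrains α in practice.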
However, we find that the exponent α = 0.65 is close to 2/3, which is the expected value for a 2D rigid lattice of spins, suggesting that the relaxation may be due to a low-dimensional effect.

Summary

We have installed a versatile broadband NMR spectrometer to measure samples thermalized to a nuclear demagnetization stage. The broadband nature of the spectrometer allows both frequency and temperature to be used as tuning parameters. The nuclear magnetization of 13C within the graphite substrate has been used as a thermometer below 1 mK and is shown to be well thermalized to the 3He film sample. We have demonstrated the feasibility of measuring T1, T2, T2* and the nuclear magnetism of 3He films on graphite with high precision. This is currently being applied to the study of quantum criticality of the Mott-Hubbard transition and to the identification of a possible quantum spin-liquid phase. This work has been financially supported by EPSRC grant EP/H048375/1.
__copyright__ = ''
__author__ = 'Son-Huy TRAN'
__email__ = "[email protected]"
__doc__ = ''
__version__ = '1.0'


def main() -> int:
    k = int(input())
    l = int(input())

    if l < k:
        print('NO')
    else:
        # Repeated integer division avoids the precision pitfalls of
        # floating-point log()/pow() for large inputs.
        count = 0
        while l % k == 0:
            l //= k
            count += 1
        if l == 1:
            print('YES', count - 1, sep='\n')
        else:
            print('NO')
    return 0


if __name__ == '__main__':
    exit(main())
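A float-based `log`/`pow` check like the one above can misreport exact powers once the numbers get large, because `log()` works in double precision. A small sketch contrasting the floating-point approach with an exact integer-only check:

```python
from math import log

def is_power_float(k, l):
    # Floating-point attempt: int(log(l, k)) can land one off for large values,
    # so two candidate exponents must be tried.
    n = int(log(l, k))
    return k ** n == l or k ** (n + 1) == l

def is_power_exact(k, l):
    # Exact check using integer arithmetic only: strip factors of k.
    while l % k == 0:
        l //= k
    return l == 1

# 3**40 is a power of 3; both happen to agree here, but only the integer
# version is guaranteed exact for arbitrarily large inputs.
print(is_power_float(3, 3**40), is_power_exact(3, 3**40))
```

The exact variant also yields the exponent for free (count the divisions), which is what the problem's `YES`/`NO` output needs.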
/*
 * terminal_read
 *   DESCRIPTION: Reads the data from one line, which will be terminated either
 *                by ENTER or by the buffer becoming full.
 *   INPUTS: fd -- file descriptor (unused), buf -- destination buffer,
 *           nbytes -- maximum number of bytes to read
 *   OUTPUTS: none
 *   RETURN VALUE: the number of bytes read into the buffer.
 *   SIDE EFFECTS: fills the buffer with data.
 */
int32_t terminal_read(int32_t fd, void* buf, int32_t nbytes) {
    int i = 0;
    /* Spin until the keyboard handler signals that ENTER was pressed.
     * The `enter` flag must be declared volatile for this busy-wait to work. */
    while(enter != 1);
    char* temp_buf = (char*) buf;
    int buffer_index;
    char buffer[BUFFER_SIZE];
    if(terminal_num == 1) {
        buffer_index = buffer_index1;
        strcpy(buffer, buffer1);
    }
    else if(terminal_num == 2) {
        buffer_index = buffer_index2;
        strcpy(buffer, buffer2);
    }
    else if(terminal_num == 3) {
        buffer_index = buffer_index3;
        strcpy(buffer, buffer3);
    }
    /* Copy at most nbytes - 1 characters so the trailing '\n' always fits
     * inside the caller's buffer. */
    for(i = 0; i < buffer_index && i < nbytes - 1; i++)
        temp_buf[i] = buffer[i];
    temp_buf[i] = '\n';
    i++;
    clear_buffer();
    enter = 0;
    return i;
}
// isTicketCommitP2SH returns whether or not the passed ticket output commitment
// script commits to a hash which represents a pay-to-script-hash output.  When
// false, it commits to a hash which represents a pay-to-pubkey-hash output.
//
// NOTE: The caller MUST have already determined that the provided script is
// a commitment output script or the function may panic.
func isTicketCommitP2SH(script []byte) bool {
	// This is a faster equivalent of:
	//
	//	amtBytes := script[commitAmountStartIdx:commitAmountEndIdx]
	//	amtEncoded := binary.LittleEndian.Uint64(amtBytes)
	//	return (amtEncoded & commitP2SHFlag) != 0
	//
	return script[commitAmountEndIdx-1]&0x80 != 0
}
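The single-byte shortcut works because, in a little-endian encoding, the most significant bit of the 64-bit amount lives in the high bit of the final byte. A quick sanity check (Python used for illustration; placing the flag at bit 63 is an assumption consistent with the `& 0x80` test on the last byte):

```python
import struct

COMMIT_P2SH_FLAG = 1 << 63  # assumed flag position: the top bit of the amount

def flag_via_decode(amt_bytes):
    # Full decode: interpret all 8 bytes as a little-endian uint64, then mask.
    (amt,) = struct.unpack("<Q", amt_bytes)
    return (amt & COMMIT_P2SH_FLAG) != 0

def flag_via_last_byte(amt_bytes):
    # Shortcut: in little-endian order, the most significant bit of the value
    # is the high bit of the final byte.
    return (amt_bytes[-1] & 0x80) != 0

for value in (0, 1, 2**63, 2**63 + 12345, 2**64 - 1):
    b = struct.pack("<Q", value)
    assert flag_via_decode(b) == flag_via_last_byte(b)
```

The equivalence holds for every possible amount, so the Go function can skip the full 8-byte decode entirely.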
DAVAO CITY – Incoming President Rodrigo Duterte on Monday said he would release all political prisoners once peace talks with the rebels start. “This is part of confidence-building,” he told reporters in a press briefing here. “I will welcome them back to society,” he added. The presumptive President-elect said he would grant amnesty and pardon to all political prisoners in the country. “They can even join the peace talks,” he said. His only precondition was for rebels to join the peace talks. “You must come here. We must be talking to each other,” he said. Duterte has previously said that he is open to revive the stalled peace talks under the Aquino administration. He even announced that he was offering communist leaders positions in the Department of Agrarian Reform (DAR), Department of Labor and Employment (DOLE), Department of Environment and Natural Resources (DENR) and Department of Social Welfare and Development (DSWD). “We have accepted the offer,” said Luis Jalandoni, peace negotiator for the National Democratic Front of the Philippines (NDFP), in a radio interview last week. Jalandoni said the CPP and the NDFP were very happy about Duterte’s statements. “It shows his trust and confidence in the Communist Party of the Philippines, the New People’s Army and the National Democratic Front,” Jalandoni added. He said that after hearing about Duterte’s statements, the CPP and the NDFP immediately started preparing a list of nominees to the positions.
import lzma
import os
import tempfile
import urllib.request


def download_device_database(aUrl, aPath):
    """Download the xz-compressed device database and decompress it to aPath."""
    headers = {
        "User-Agent": "ZaberDeviceControlToolbox/1.2.0 (Python)"
    }
    request = urllib.request.Request(aUrl, None, headers)
    with tempfile.TemporaryFile() as tmpFile:
        with urllib.request.urlopen(request) as response:
            data = response.read()
            tmpFile.write(data)
        tmpFile.seek(0)
        print("Decompressing downloaded file...")
        with lzma.open(tmpFile) as ifp:
            data = ifp.read()
        if len(data) < 1:
            raise IOError("Failed to decompress downloaded device database.")
        if os.path.exists(aPath):
            os.remove(aPath)
        with open(aPath, "wb") as ofp:
            ofp.write(data)
package leveldb

import (
	"bytes"
	"fmt"

	"github.com/renproject/kv/db"
	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/iterator"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// levelDB is a leveldb implementation of the `db.Iterable`.
type levelDB struct {
	db    *leveldb.DB
	codec db.Codec
}

// New returns a new `db.Iterable`.
func New(path string, codec db.Codec) db.DB {
	if codec == nil {
		panic("codec cannot be nil")
	}
	ldb, err := leveldb.OpenFile(path, nil)
	if err != nil {
		panic(fmt.Sprintf("error initialising leveldb: %v", err))
	}
	return &levelDB{
		db:    ldb,
		codec: codec,
	}
}

func (ldb *levelDB) Close() error {
	return ldb.db.Close()
}

// Insert implements the `db.DB` interface.
func (ldb *levelDB) Insert(key string, value interface{}) error {
	if key == "" {
		return db.ErrEmptyKey
	}
	data, err := ldb.codec.Encode(value)
	if err != nil {
		return err
	}
	return ldb.db.Put([]byte(key), data, nil)
}

// Get implements the `db.DB` interface.
func (ldb *levelDB) Get(key string, value interface{}) error {
	if key == "" {
		return db.ErrEmptyKey
	}
	data, err := ldb.db.Get([]byte(key), nil)
	if err != nil {
		return convertErr(err)
	}
	return ldb.codec.Decode(data, value)
}

// Delete implements the `db.DB` interface.
func (ldb *levelDB) Delete(key string) error {
	if key == "" {
		return db.ErrEmptyKey
	}
	return ldb.db.Delete([]byte(key), nil)
}

// Size implements the `db.DB` interface.
func (ldb *levelDB) Size(prefix string) (int, error) {
	iter := ldb.db.NewIterator(util.BytesPrefix([]byte(prefix)), nil)
	defer iter.Release()
	counter := 0
	for iter.Next() {
		counter++
	}
	return counter, nil
}

// Iterator implements the `db.DB` interface.
func (ldb *levelDB) Iterator(prefix string) db.Iterator {
	iterator := ldb.db.NewIterator(util.BytesPrefix([]byte(prefix)), nil)
	return &iter{
		prefix: []byte(prefix),
		iter:   iterator,
		codec:  ldb.codec,
	}
}

// iter implements the `db.Iterator` interface.
type iter struct { prefix []byte iter iterator.Iterator codec db.Codec } // Next implements the `db.Iterator` interface. func (iter *iter) Next() bool { next := iter.iter.Next() // Release the iter when it finishes iterating. if !next { iter.iter.Release() } return next } // Key implements the `db.Iterator` interface. func (iter *iter) Key() (string, error) { key := iter.iter.Key() if key == nil { return "", db.ErrIndexOutOfRange } return string(bytes.TrimPrefix(key, iter.prefix)), nil } // Value implements the `db.Iterator` interface. func (iter *iter) Value(value interface{}) error { val := iter.iter.Value() if val == nil { return db.ErrIndexOutOfRange } return iter.codec.Decode(val, value) } // Close implements the `db.Iterator` interface. func (iter *iter) Close() { iter.iter.Release() } // convertErr will convert levelDB-specific error to kv error. func convertErr(err error) error { switch err { case leveldb.ErrNotFound: return db.ErrKeyNotFound default: return err } }
/**
 * Checks that a move is possible: after the move, all of the moved
 * coordinates of a block must still be within the range of the board.
 *
 * @param dir
 *            the direction of the intended move
 * @return true if every covered coordinate remains within range after the move
 */
boolean isMoveWithinRange(@Nonnull Direction dir) {
    Collection<Coordinate> allCoordinates = getCoveredCoordinates().getUniqueCoordinates();
    for (Coordinate coord : allCoordinates) {
        if (dir.equals(Direction.FORWARD)) {
            if (Alignment.VERTICAL.equals(this.alignment)) {
                if (!coord.isMoveWithinRange(0, 1)) {
                    return false;
                }
            } else {
                if (!coord.isMoveWithinRange(1, 0)) {
                    return false;
                }
            }
        } else {
            if (Alignment.VERTICAL.equals(this.alignment)) {
                if (!coord.isMoveWithinRange(0, -1)) {
                    return false;
                }
            } else {
                if (!coord.isMoveWithinRange(-1, 0)) {
                    return false;
                }
            }
        }
    }
    return true;
}
import { BaseModel } from '@modules/common/base.model';
import { format } from 'date-fns';
import {
  BeforeInsert,
  BeforeUpdate,
  Column,
  Entity,
  PrimaryGeneratedColumn,
} from 'typeorm';

@Entity('address')
export class Address extends BaseModel {
  @PrimaryGeneratedColumn({ name: 'address_id' })
  public addressId: number;

  @Column({ name: 'customer_id' })
  public customerId: number;

  @Column({ name: 'country_id' })
  public countryId: number;

  @Column({ name: 'zone_id' })
  public zoneId: number;

  @Column({ name: 'first_name' })
  public firstName: string;

  @Column({ name: 'last_name' })
  public lastName: string;

  @Column({ name: 'company' })
  public company: string;

  @Column({ name: 'address_1' })
  public address1: string;

  @Column({ name: 'address_2' })
  public address2: string;

  @Column({ name: 'postcode' })
  public postcode: number;

  @Column({ name: 'city' })
  public city: string;

  @Column({ name: 'state' })
  public state: string;

  @Column({ name: 'email_id' })
  public emailId: string;

  @Column({ name: 'phone_no' })
  public phoneNo: number;

  @Column({ name: 'address_type' })
  public addressType: number;

  @Column({ name: 'is_active' })
  public isActive: number;

  @BeforeInsert()
  public async createDetails(): Promise<void> {
    this.createdDate = format(new Date(), 'yyyy-MM-dd HH:mm:ss');
  }

  @BeforeUpdate()
  public async updateDetails(): Promise<void> {
    this.modifiedDate = format(new Date(), 'yyyy-MM-dd HH:mm:ss');
  }
}
/** * Check to see if the two locations have any overlapping component * locations * * @param location1 * first location to test * @param location2 * second location to test * @return true if one or more sub-location of location1 overlaps one or * more sub-location of location2 in frame */ public static boolean overlapsInFrame(RichLocation location1, int offset1, RichLocation location2, int offset2) { for (RichLocation sloc1 : sortLocation(location1)) { if (sloc1.equals(location1)) { for (RichLocation sloc2 : sortLocation(location2)) { if (sloc2.equals(location2)) { if (outerOverlapsInFrame(sloc1, offset1, sloc2, offset2)) { return true; } } else { if (overlapsInFrame(sloc1, offset1, sloc2, offset2)) { return true; } } offset2 += getLocationLength(sloc2); } } else { if (overlapsInFrame(sloc1, offset1, location2, offset2)) { return true; } } offset1 += getLocationLength(sloc1); } return false; }
class Simulation:
    """
    Sets up and runs a simulation of trading activity using the given strategy
    over the loaded data, with the ability to tune or optimize arbitrary
    strategy hyperparameters.
    """

    def __init__(
        self,
        strategy_class: Type[Strategy],
        rates: Dict[str, pandas.DataFrame],
        coins: Dict[str, int],
        starting_value: int,
        start_datetime: str,
        end_datetime: Optional[str] = None
    ):
        if not rates.keys() == coins.keys():
            raise ValueError(
                "Data mismatch! The strategy must have rates corresponding to each coin"
                " to trade, and each coin must have an initial value."
            )
        elif not all(start_datetime in r.index for r in rates.values()):
            raise ValueError(
                "Data mismatch! The provided start datetime does not have associated"
                " data for one or more of the provided coin trading rates. All coins"
                " must have data for the given start datetime and end datetime."
            )
        elif not end_datetime and len({r.index[-1] for r in rates.values()}) != 1:
            raise ValueError(
                "Data mismatch! The provided rates do not share a common end datetime."
                " You must explicitly pass a common end datetime."
            )

        self.start_datetime = start_datetime
        """
        The datetime stamp on which to start the simulation. Should be a string
        of the same form as the `rates` timeseries. Most commonly, this is
        `YYYY-MM-DD hh:mm:ss`. Denomination data must exist for this date for
        all coins in the provided `rates` table.
        """

        self.end_datetime = end_datetime or next(iter(rates.values())).index[-1]
        """
        Optional. The datetime stamp on which to end the simulation. Should be a
        string of the same form as the `rates` timeseries. Most commonly, this
        is `YYYY-MM-DD hh:mm:ss`. Denomination data must exist for this date for
        all coins in the provided `rates` table. If no value is provided, the
        simulation will run until the end of the available data.
        """

        initials = []
        cmap = {"USDC": 0}
        for i, (coin, init) in enumerate(coins.items()):
            cmap[coin] = i + 1
            initials.append(init)
        for k in rates.keys():
            rates[k] = rates[k][self.start_datetime:self.end_datetime]  # type: ignore
        self.duration = len(next(iter(rates.values())))

        self.strategy = strategy_class(
            rates=rates,
            coins=cmap,
        )

        self.coins = coins.keys()
        """ The coins with which to trade. This excludes USDC, which is the
        default collateral to use."""

        self.wallets: List[Wallet] = [[starting_value, *initials, starting_value]]

    @final
    def simulate(self, *args) -> List[Wallet]:
        # for every timestep in simulation duration
        for timestep in range(1, self.duration + 1):
            wallet = self.wallets[timestep - 1].copy()
            for coin in self.coins:
                self.strategy.trade(coin, timestep, wallet, *args)
            # calculate the current wallet value; the total always occupies the
            # last slot of the wallet, regardless of how many coins are traded
            wallet[-1] = wallet[0] + sum(
                wallet[i + 1] * self.strategy.rates[coin][timestep - 1]
                for i, coin in enumerate(self.coins)
            )
            self.wallets.append(wallet)
        return self.wallets
/* * linux/include/asm-i386/timex.h * * i386 architecture timex specifications */ #ifndef _ASMi386_TIMEX_H #define _ASMi386_TIMEX_H #include <linux/config.h> #include <asm/processor.h> #ifdef CONFIG_X86_ELAN # define CLOCK_TICK_RATE 1189200 /* AMD Elan has different frequency! */ #else # define CLOCK_TICK_RATE 1193182 /* Underlying HZ */ #endif /* * Standard way to access the cycle counter on i586+ CPUs. * Currently only used on SMP. * * If you really have a SMP machine with i486 chips or older, * compile for that, and this will just always return zero. * That's ok, it just means that the nicer scheduling heuristics * won't work for you. * * We only use the low 32 bits, and we'd simply better make sure * that we reschedule before that wraps. Scheduling at least every * four billion cycles just basically sounds like a good idea, * regardless of how fast the machine is. */ typedef unsigned long long cycles_t; static inline cycles_t get_cycles (void) { unsigned long long ret=0; #ifndef CONFIG_X86_TSC if (!cpu_has_tsc) return 0; #endif #if defined(CONFIG_X86_GENERIC) || defined(CONFIG_X86_TSC) rdtscll(ret); #endif return ret; } extern unsigned int cpu_khz; extern int read_current_timer(unsigned long *timer_value); #define ARCH_HAS_READ_CURRENT_TIMER 1 #endif
// EditNote accept one argument to execute a // fuzzy search for the note to be edited // and open the EDITOR with the found result func EditNote(context *cli.Context) { notePath := viper.GetString("notePath") noteTitle := context.String("title") _, err := os.Stat(notePath) if err != nil { fmt.Println(err) os.Exit(1) } caseSensitive := context.Bool("case-sensitive") if noteTitle == "" { err = editContent(notePath, context.Args(), caseSensitive) } else { err = editTitle(notePath, noteTitle, context.Args(), caseSensitive) } if err != nil { fmt.Println(err) os.Exit(1) } }
def extract(self, index):
    # Use IndexError (rather than a bare Exception) so callers can catch
    # out-of-range access specifically.
    if index < 0 or index >= len(self.annotation_data):
        raise IndexError("annotation index out of range")
    manifest_data = self.annotation_data[index]
    return self.extract_annotation_with_data(manifest_data)
def forward(self, input):
    # torch.nn.functional has no `log`; use torch.log, and the argument is
    # `input`, not the undefined name `x`.
    return torch.log(1 + self.beta * F.relu(input))
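The activation intended here appears to be log(1 + β·relu(x)): zero for non-positive inputs, logarithmic growth for positive ones. Its behaviour can be checked without torch:

```python
import math

def log_relu(x, beta=1.0):
    # log(1 + beta * relu(x)); relu clamps negative inputs to zero, so the
    # output is exactly 0 there, and grows logarithmically for x > 0.
    return math.log(1.0 + beta * max(x, 0.0))

print(log_relu(-2.0), log_relu(0.0), log_relu(math.e - 1.0))
```

For x = e − 1 and β = 1 the output is log(e) = 1, which makes the formula easy to spot-check.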
package binary

import (
	"encoding/binary"
	"errors"
	"io"
)

type OPCUAType string

// OPC UA Message types
// Described in OPC Unified Architecture 1.04, Part 6, 7.1.2.2
const (
	OPCUAHello        OPCUAType = "HEL"
	OPCUAAcknowledge  OPCUAType = "ACK"
	OPCUAError        OPCUAType = "ERR"
	OPCUAReverseHello OPCUAType = "RHE"
)

// An OPCUAMessage is an OPC UA protocol message
type OPCUAMessage interface {
	Type() OPCUAType
	marshal() ([]byte, error)
	unmarshal(b []byte) error
}

// HelloMessage is an OPC UA Hello message type
type HelloMessage struct {
	ProtocolVersion   uint32
	ReceiveBufferSize uint32
	SendBufferSize    uint32
	MaxMessageSize    uint32
	MaxChunkCount     uint32
	EndpointURL       string
}

func (m *HelloMessage) Type() OPCUAType { return OPCUAHello }

func (m *HelloMessage) marshal() ([]byte, error) {
	if m.ReceiveBufferSize < 8192 {
		return []byte{}, errors.New("ReceiveBufferSize must be at least 8192 bytes")
	}
	if m.SendBufferSize < 8192 {
		return []byte{}, errors.New("SendBufferSize must be at least 8192 bytes")
	}
	if len(m.EndpointURL) > 4096 {
		return []byte{}, errors.New("EndpointURL length cannot be greater than 4096 bytes")
	}
	buf := make([]byte, 24+len(m.EndpointURL))
	PutUint32(buf[0:4], m.ProtocolVersion)
	PutUint32(buf[4:8], m.ReceiveBufferSize)
	PutUint32(buf[8:12], m.SendBufferSize)
	PutUint32(buf[12:16], m.MaxMessageSize)
	PutUint32(buf[16:20], m.MaxChunkCount)
	PutString(buf[20:], m.EndpointURL)
	return buf, nil
}

func (m *HelloMessage) unmarshal(b []byte) error {
	if len(b) < 24 {
		return io.ErrUnexpectedEOF
	}
	protVer := binary.LittleEndian.Uint32(b[0:4])
	recvBufSize := binary.LittleEndian.Uint32(b[4:8])
	sendBufSize := binary.LittleEndian.Uint32(b[8:12])
	maxMsgSize := binary.LittleEndian.Uint32(b[12:16])
	maxChkCount := binary.LittleEndian.Uint32(b[16:20])
	urlSize := binary.LittleEndian.Uint32(b[20:24])
	endpoint := ""
	if urlSize > 0 {
		// Guard against a length prefix that overruns the buffer.
		if uint32(len(b)) < 24+urlSize {
			return io.ErrUnexpectedEOF
		}
		endpoint = string(b[24 : 24+urlSize])
	}
	*m = HelloMessage{
		ProtocolVersion:   protVer,
		ReceiveBufferSize: recvBufSize,
		SendBufferSize:    sendBufSize,
		MaxMessageSize:    maxMsgSize,
		MaxChunkCount:     maxChkCount,
		EndpointURL:       endpoint,
	}
	return nil
}

// AckMessage is an OPC UA Acknowledge message type
type AckMessage struct {
	ProtocolVersion   uint32
	ReceiveBufferSize uint32
	SendBufferSize    uint32
	MaxMessageSize    uint32
	MaxChunkCount     uint32
}

func (m *AckMessage) marshal() ([]byte, error) {
	if m.ReceiveBufferSize < 8192 {
		return []byte{}, errors.New("ReceiveBufferSize must be at least 8192 bytes")
	}
	if m.SendBufferSize < 8192 {
		return []byte{}, errors.New("SendBufferSize must be at least 8192 bytes")
	}
	buf := make([]byte, 20)
	PutUint32(buf[0:4], m.ProtocolVersion)
	PutUint32(buf[4:8], m.ReceiveBufferSize)
	PutUint32(buf[8:12], m.SendBufferSize)
	PutUint32(buf[12:16], m.MaxMessageSize)
	PutUint32(buf[16:20], m.MaxChunkCount)
	return buf, nil
}

func (m *AckMessage) unmarshal(b []byte) error {
	if len(b) < 20 {
		return io.ErrUnexpectedEOF
	}
	protVer := binary.LittleEndian.Uint32(b[0:4])
	recvBufSize := binary.LittleEndian.Uint32(b[4:8])
	sendBufSize := binary.LittleEndian.Uint32(b[8:12])
	maxMsgSize := binary.LittleEndian.Uint32(b[12:16])
	maxChkCount := binary.LittleEndian.Uint32(b[16:20])
	*m = AckMessage{
		ProtocolVersion:   protVer,
		ReceiveBufferSize: recvBufSize,
		SendBufferSize:    sendBufSize,
		MaxMessageSize:    maxMsgSize,
		MaxChunkCount:     maxChkCount,
	}
	return nil
}

func (m *AckMessage) Type() OPCUAType { return OPCUAAcknowledge }

// ErrorMessage is an OPC UA Error message type
type ErrorMessage struct {
	Error  uint32
	Reason string
}

func (m *ErrorMessage) marshal() ([]byte, error) {
	buf := make([]byte, 8+len(m.Reason))
	PutUint32(buf[0:4], m.Error)
	PutString(buf[4:], m.Reason)
	return buf, nil
}

func (m *ErrorMessage) unmarshal(b []byte) error {
	if len(b) < 8 {
		return io.ErrUnexpectedEOF
	}
	errCode := binary.LittleEndian.Uint32(b[0:4])
	reasonSize := binary.LittleEndian.Uint32(b[4:8])
	reason := ""
	if reasonSize > 0 {
		// Guard against a length prefix that overruns the buffer.
		if uint32(len(b)) < 8+reasonSize {
			return io.ErrUnexpectedEOF
		}
		reason = string(b[8 : 8+reasonSize])
	}
	*m = ErrorMessage{
		Error:  errCode,
		Reason: reason,
	}
	return nil
}

func (m *ErrorMessage) Type() OPCUAType { return OPCUAError }
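The Hello wire layout above — five little-endian uint32 fields, then a uint32 length prefix, then the URL bytes — can be exercised independently of the Go package. This Python sketch mirrors that layout (`PutUint32`/`PutString` are package helpers not shown above; treating the string as a uint32 length prefix plus raw bytes is an assumption consistent with the `24 + len(URL)` buffer sizing):

```python
import struct

def marshal_hello(ver, recv, send, max_msg, max_chunk, url):
    # Five little-endian uint32 fields, then a length-prefixed string.
    body = url.encode()
    return (struct.pack("<5I", ver, recv, send, max_msg, max_chunk)
            + struct.pack("<I", len(body)) + body)

def unmarshal_hello(b):
    # Fixed header is 24 bytes; the URL follows at offset 24.
    ver, recv, send, max_msg, max_chunk, url_len = struct.unpack_from("<6I", b)
    url = b[24:24 + url_len].decode()
    return ver, recv, send, max_msg, max_chunk, url

msg = marshal_hello(0, 8192, 8192, 0, 0, "opc.tcp://localhost:4840")
assert unmarshal_hello(msg) == (0, 8192, 8192, 0, 0, "opc.tcp://localhost:4840")
```

A round trip like this is a useful cross-check that the offsets (`b[24 : 24+urlSize]` for the endpoint) line up with the marshalled layout.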
<reponame>evacs/astr-119-hw-2 #dealing with unexpected results #great for writing complex programs try: print (a) #throw an exception except: print("a is not defined") #a is not defined, instead of crashing program, #we can ask it to tell us what the problem is try: print(a) except NameError: #if this is the error... print("a still isn't defined") except: #if not... print("Something else is wrong") print(a) #this will not work and will #BREAK the program
/**
 * @author Jan Gabler
 * @author Malte Schwering
 * @version 0.3
 */
@Stateless
public class UserGroupEntityFacade extends AbstractFacade<UserGroupEntity> implements de.th.wildau.webapp.buchladen.facades.UserGroupEntityFacadeRemote {

    /**
     * The entity manager.
     */
    @PersistenceContext(unitName = "buchladen-ejbPU")
    private EntityManager em;

    /**
     * Returns the entity manager.
     * @return entity manager
     */
    @Override
    protected EntityManager getEntityManager() {
        return em;
    }

    /**
     * Constructor that calls the constructor of the abstract class.
     */
    public UserGroupEntityFacade() {
        super(UserGroupEntity.class);
    }

    /**
     * Returns the user group entity for a given group name.
     * @param groupName name of the user group
     * @return UserGroupEntity
     */
    @Override
    public UserGroupEntity findByGroupName(String groupName) {
        try {
            Query query = em.createNamedQuery("UserGroupEntity.findByGroupName");
            return (UserGroupEntity) query.setParameter("groupName", groupName).getSingleResult();
        }
        catch(NoResultException e) {
            return null;
        }
    }
}
import math import os import numpy as np import pytorch_lightning as pl import torch import torch.nn as nn import torch.nn.functional as F from torchvision.utils import save_image from models.rdff_base_net import Encoder_MDCBlock1, Decoder_MDCBlock1 class make_dense(nn.Module): def __init__(self, nChannels, growthRate, kernel_size=3): super(make_dense, self).__init__() self.conv = nn.Conv2d(nChannels, growthRate, kernel_size=kernel_size, padding=(kernel_size - 1) // 2, bias=False) def forward(self, x): out = F.relu(self.conv(x)) out = torch.cat((x, out), 1) return out # Residual dense block (RDB) architecture class RDB(nn.Module): def __init__(self, nChannels, nDenselayer, growthRate, scale=1.0): super(RDB, self).__init__() nChannels_ = nChannels self.scale = scale modules = [] for i in range(nDenselayer): modules.append(make_dense(nChannels_, growthRate)) nChannels_ += growthRate self.dense_layers = nn.Sequential(*modules) self.conv_1x1 = nn.Conv2d(nChannels_, nChannels, kernel_size=1, padding=0, bias=False) def forward(self, x): out = self.dense_layers(x) out = self.conv_1x1(out) * self.scale out = out + x return out class ConvLayer(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride): super(ConvLayer, self).__init__() reflection_padding = kernel_size // 2 self.reflection_pad = nn.ReflectionPad2d(reflection_padding) self.conv2d = nn.Conv2d(in_channels, out_channels, kernel_size, stride) def forward(self, x): out = self.reflection_pad(x) out = self.conv2d(out) return out class UpsampleConvLayer(torch.nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride): super(UpsampleConvLayer, self).__init__() self.conv2d = nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=stride) def forward(self, x): out = self.conv2d(x) return out class ResidualBlock(torch.nn.Module): def __init__(self, channels): super(ResidualBlock, self).__init__() self.conv1 = ConvLayer(channels, channels, kernel_size=3, stride=1) 
self.conv2 = ConvLayer(channels, channels, kernel_size=3, stride=1) self.relu = nn.PReLU() def forward(self, x): residual = x out = self.relu(self.conv1(x)) out = self.conv2(out) * 0.1 out = torch.add(out, residual) return out class RDFFNetwork(pl.LightningModule): def modify_commandline_options(parser): parser.add_argument('--n_resblocks', type=int, default=18, help='number of residual blocks') parser.add_argument('--save_output', type=str, default=None, help='save the output images during testing') return parser def __init__(self, args): super(RDFFNetwork, self).__init__() self.args = args self.conv_input = ConvLayer(3, 16, kernel_size=11, stride=1) self.dense0 = nn.Sequential( ResidualBlock(16), ResidualBlock(16), ResidualBlock(16) ) self.conv2x = ConvLayer(16, 32, kernel_size=3, stride=2) self.conv1 = RDB(16, 4, 16) self.fusion1 = Encoder_MDCBlock1(16, 2, mode='iter2') self.dense1 = nn.Sequential( ResidualBlock(32), ResidualBlock(32), ResidualBlock(32) ) self.conv4x = ConvLayer(32, 64, kernel_size=3, stride=2) self.conv2 = RDB(32, 4, 32) self.fusion2 = Encoder_MDCBlock1(32, 3, mode='iter2') self.dense2 = nn.Sequential( ResidualBlock(64), ResidualBlock(64), ResidualBlock(64) ) self.conv8x = ConvLayer(64, 128, kernel_size=3, stride=2) self.conv3 = RDB(64, 4, 64) self.fusion3 = Encoder_MDCBlock1(64, 4, mode='iter2') self.dense3 = nn.Sequential( ResidualBlock(128), ResidualBlock(128), ResidualBlock(128) ) self.conv16x = ConvLayer(128, 256, kernel_size=3, stride=2) self.conv4 = RDB(128, 4, 128) self.fusion4 = Encoder_MDCBlock1(128, 5, mode='iter2') self.dehaze = nn.Sequential() for i in range(0, args.n_resblocks): self.dehaze.add_module('res%d' % i, ResidualBlock(256)) self.convd16x = UpsampleConvLayer(256, 128, kernel_size=3, stride=2) self.dense_4 = nn.Sequential( ResidualBlock(128), ResidualBlock(128), ResidualBlock(128) ) self.conv_4 = RDB(64, 4, 64) self.fusion_4 = Decoder_MDCBlock1(64, 2, mode='iter2') self.convd8x = UpsampleConvLayer(128, 64, kernel_size=3, 
        stride=2)
        self.dense_3 = nn.Sequential(
            ResidualBlock(64),
            ResidualBlock(64),
            ResidualBlock(64)
        )
        self.conv_3 = RDB(32, 4, 32)
        self.fusion_3 = Decoder_MDCBlock1(32, 3, mode='iter2')

        self.convd4x = UpsampleConvLayer(64, 32, kernel_size=3, stride=2)
        self.dense_2 = nn.Sequential(
            ResidualBlock(32),
            ResidualBlock(32),
            ResidualBlock(32)
        )
        self.conv_2 = RDB(16, 4, 16)
        self.fusion_2 = Decoder_MDCBlock1(16, 4, mode='iter2')

        self.convd2x = UpsampleConvLayer(32, 16, kernel_size=3, stride=2)
        self.dense_1 = nn.Sequential(
            ResidualBlock(16),
            ResidualBlock(16),
            ResidualBlock(16)
        )
        self.conv_1 = RDB(8, 4, 8)
        self.fusion_1 = Decoder_MDCBlock1(8, 5, mode='iter2')

        self.conv_output = ConvLayer(16, 3, kernel_size=3, stride=1)

    def forward(self, x):
        res1x = self.conv_input(x)
        res1x_1, res1x_2 = res1x.split([(res1x.size()[1] // 2), (res1x.size()[1] // 2)], dim=1)
        feature_mem = [res1x_1]
        x = self.dense0(res1x) + res1x

        res2x = self.conv2x(x)
        res2x_1, res2x_2 = res2x.split([(res2x.size()[1] // 2), (res2x.size()[1] // 2)], dim=1)
        res2x_1 = self.fusion1(res2x_1, feature_mem)
        res2x_2 = self.conv1(res2x_2)
        feature_mem.append(res2x_1)
        res2x = torch.cat((res2x_1, res2x_2), dim=1)
        res2x = self.dense1(res2x) + res2x

        res4x = self.conv4x(res2x)
        res4x_1, res4x_2 = res4x.split([(res4x.size()[1] // 2), (res4x.size()[1] // 2)], dim=1)
        res4x_1 = self.fusion2(res4x_1, feature_mem)
        res4x_2 = self.conv2(res4x_2)
        feature_mem.append(res4x_1)
        res4x = torch.cat((res4x_1, res4x_2), dim=1)
        res4x = self.dense2(res4x) + res4x

        res8x = self.conv8x(res4x)
        res8x_1, res8x_2 = res8x.split([(res8x.size()[1] // 2), (res8x.size()[1] // 2)], dim=1)
        res8x_1 = self.fusion3(res8x_1, feature_mem)
        res8x_2 = self.conv3(res8x_2)
        feature_mem.append(res8x_1)
        res8x = torch.cat((res8x_1, res8x_2), dim=1)
        res8x = self.dense3(res8x) + res8x

        res16x = self.conv16x(res8x)
        res16x_1, res16x_2 = res16x.split([(res16x.size()[1] // 2), (res16x.size()[1] // 2)], dim=1)
        res16x_1 = self.fusion4(res16x_1, feature_mem)
        res16x_2 = self.conv4(res16x_2)
        res16x = torch.cat((res16x_1, res16x_2), dim=1)

        res_dehaze = res16x
        in_ft = res16x * 2
        res16x = self.dehaze(in_ft) + in_ft - res_dehaze
        res16x_1, res16x_2 = res16x.split([(res16x.size()[1] // 2), (res16x.size()[1] // 2)], dim=1)
        feature_mem_up = [res16x_1]

        res16x = self.convd16x(res16x)
        # F.upsample is deprecated; F.interpolate has the same behavior here.
        res16x = F.interpolate(res16x, res8x.size()[2:], mode='bilinear')
        res8x = torch.add(res16x, res8x)
        res8x = self.dense_4(res8x) + res8x - res16x
        res8x_1, res8x_2 = res8x.split([(res8x.size()[1] // 2), (res8x.size()[1] // 2)], dim=1)
        res8x_1 = self.fusion_4(res8x_1, feature_mem_up)
        res8x_2 = self.conv_4(res8x_2)
        feature_mem_up.append(res8x_1)
        res8x = torch.cat((res8x_1, res8x_2), dim=1)

        res8x = self.convd8x(res8x)
        res8x = F.interpolate(res8x, res4x.size()[2:], mode='bilinear')
        res4x = torch.add(res8x, res4x)
        res4x = self.dense_3(res4x) + res4x - res8x
        res4x_1, res4x_2 = res4x.split([(res4x.size()[1] // 2), (res4x.size()[1] // 2)], dim=1)
        res4x_1 = self.fusion_3(res4x_1, feature_mem_up)
        res4x_2 = self.conv_3(res4x_2)
        feature_mem_up.append(res4x_1)
        res4x = torch.cat((res4x_1, res4x_2), dim=1)

        res4x = self.convd4x(res4x)
        res4x = F.interpolate(res4x, res2x.size()[2:], mode='bilinear')
        res2x = torch.add(res4x, res2x)
        res2x = self.dense_2(res2x) + res2x - res4x
        res2x_1, res2x_2 = res2x.split([(res2x.size()[1] // 2), (res2x.size()[1] // 2)], dim=1)
        res2x_1 = self.fusion_2(res2x_1, feature_mem_up)
        res2x_2 = self.conv_2(res2x_2)
        feature_mem_up.append(res2x_1)
        res2x = torch.cat((res2x_1, res2x_2), dim=1)

        res2x = self.convd2x(res2x)
        res2x = F.interpolate(res2x, x.size()[2:], mode='bilinear')
        x = torch.add(res2x, x)
        x = self.dense_1(x) + x - res2x
        x_1, x_2 = x.split([(x.size()[1] // 2), (x.size()[1] // 2)], dim=1)
        x_1 = self.fusion_1(x_1, feature_mem_up)
        x_2 = self.conv_1(x_2)
        x = torch.cat((x_1, x_2), dim=1)

        x = self.conv_output(x)

        return x

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=self.args.learning_rate, betas=(0.9, 0.99))
        scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=100, T_mult=2, eta_min=1e-5)
        return [optimizer], [scheduler]

    def training_step(self, batch, batch_idx):
        data = batch['data']
        label = batch['label']
        output = self.forward(data)
        loss = F.l1_loss(output, label)
        self.log_dict({'train_loss': loss}, prog_bar=True, logger=True)
        return loss

    def validation_step(self, batch, batch_idx):
        data = batch['data']
        label = batch['label']
        with torch.no_grad():
            output = self.forward(data)
        diff = output - label
        mse = diff.pow(2).mean()
        psnr = -10 * math.log10(mse)
        return {'psnr': psnr, 'data': data, 'label': label, 'output': output}

    def validation_epoch_end(self, outputs):
        data_img = torch.clamp(outputs[0]['data'], 0., 1.)
        label_img = torch.clamp(outputs[0]['label'], 0., 1.)
        output_img = torch.clamp(outputs[0]['output'], 0., 1.)
        self.logger[0].experiment.add_images('Validation/input', data_img, self.current_epoch)
        self.logger[0].experiment.add_images('Validation/label', label_img, self.current_epoch)
        self.logger[0].experiment.add_images('Validation/output', output_img, self.current_epoch)
        loss_val = np.mean([x['psnr'] for x in outputs])
        self.log_dict({'avg_psnr': loss_val}, prog_bar=True, logger=True, on_epoch=True)
        torch.cuda.empty_cache()

    def test_step(self, batch, batch_idx):
        data = batch['data']
        label = batch['label']
        filename = batch['path'][0]
        with torch.no_grad():
            output = self.forward(data)
        if self.args.save_output:
            save_image(output, os.path.join(self.args.save_output, 'output', os.path.basename(filename)))
            save_image(label, os.path.join(self.args.save_output, 'label', os.path.basename(filename)))
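The PSNR computed in `validation_step` above reduces to `-10 * log10(MSE)`, which implicitly assumes pixel values normalized to the range [0, 1]. A minimal pure-Python sketch of the general formula (the helper name is mine, not from the model code):

```python
import math

def psnr_from_mse(mse, max_val=1.0):
    """Peak signal-to-noise ratio in dB, assuming pixels lie in [0, max_val]."""
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

With `max_val=1.0` this is exactly the `-10 * math.log10(mse)` used in `validation_step`; for 8-bit images one would pass `max_val=255`.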
#include <iostream>
#include <cstdio>
#include <cmath>
#include <cstring>
#include <algorithm>
using namespace std;
typedef long long ll;

const int M = 3e5 + 44;

struct Node
{
    int cnt1, cnt2;
    ll val;
};

Node dp[M];
int mono[4][M], num[4];
ll sum3[M];
int n, m;

bool cmp(int x, int y)
{
    return x > y;
}

int main()
{
    ll ans;
    int i, j, a, b;
    scanf("%d%d", &n, &m);
    for (i = 1; i <= 3; i++)
        num[i] = 0;
    for (i = 1; i <= n; i++)
    {
        scanf("%d%d", &a, &b);
        mono[a][++num[a]] = b;
    }
    for (i = 1; i <= 3; i++)
        sort(mono[i] + 1, mono[i] + num[i] + 1, cmp);
    dp[0].val = 0;
    dp[0].cnt1 = dp[0].cnt2 = 0;
    for (i = 1; i <= m; i++)
    {
        dp[i] = dp[i - 1];
        if (i - 1 >= 0 && dp[i - 1].cnt1 + 1 <= num[1] && dp[i].val < dp[i - 1].val + mono[1][dp[i - 1].cnt1 + 1])
        {
            dp[i].val = dp[i - 1].val + mono[1][dp[i - 1].cnt1 + 1];
            dp[i].cnt1 = dp[i - 1].cnt1 + 1;
            dp[i].cnt2 = dp[i - 1].cnt2;
        }
        if (i - 2 >= 0 && dp[i - 2].cnt2 + 1 <= num[2] && dp[i].val < dp[i - 2].val + mono[2][dp[i - 2].cnt2 + 1])
        {
            dp[i].val = dp[i - 2].val + mono[2][dp[i - 2].cnt2 + 1];
            dp[i].cnt1 = dp[i - 2].cnt1;
            dp[i].cnt2 = dp[i - 2].cnt2 + 1;
        }
    }
    sum3[0] = 0;
    for (i = 1; i <= num[3]; i++)
        sum3[i] = sum3[i - 1] + mono[3][i];
    ans = 0;
    for (i = 0; i <= num[3] && i * 3 <= m; i++)
        ans = max(ans, sum3[i] + dp[m - i * 3].val);
    printf("%I64d", ans);
    return 0;
}
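The program above is the standard pattern for a knapsack whose item weights are restricted to {1, 2, 3}: sort each weight class descending (within one class, an optimal solution always takes a prefix of the sorted values), run a DP over capacity that extends either with the next-best weight-1 item or the next-best weight-2 item, then enumerate how many weight-3 items to take on top. A Python sketch of the same idea (all names are mine, not taken from the C++ code):

```python
def max_value(items, m):
    """items: list of (weight, value) pairs with weight in {1, 2, 3};
    m: total capacity. Returns the maximum achievable total value."""
    groups = {1: [], 2: [], 3: []}
    for w, v in items:
        groups[w].append(v)
    for w in groups:
        groups[w].sort(reverse=True)

    # dp[c] = (best value, #weight-1 items used, #weight-2 items used)
    # for capacity c, mirroring the Node struct in the C++ version.
    dp = [(0, 0, 0)] * (m + 1)
    for c in range(1, m + 1):
        best = dp[c - 1]
        v1, c1, c2 = dp[c - 1]
        if c1 < len(groups[1]) and v1 + groups[1][c1] > best[0]:
            best = (v1 + groups[1][c1], c1 + 1, c2)
        if c >= 2:
            v2, d1, d2 = dp[c - 2]
            if d2 < len(groups[2]) and v2 + groups[2][d2] > best[0]:
                best = (v2 + groups[2][d2], d1, d2 + 1)
        dp[c] = best

    # Prefix sums of the weight-3 values, then enumerate how many to take.
    prefix3 = [0]
    for v in groups[3]:
        prefix3.append(prefix3[-1] + v)
    return max(prefix3[k] + dp[m - 3 * k][0]
               for k in range(len(prefix3)) if 3 * k <= m)
```

The enumeration over weight-3 counts keeps the DP two-dimensional-free: only weights 1 and 2 interact inside the capacity loop, which is what makes tracking just the two counters sufficient.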
package secrets

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"

	"github.com/mlab-lattice/lattice/pkg/api/client"
	"github.com/mlab-lattice/lattice/pkg/api/v1"
	"github.com/mlab-lattice/lattice/pkg/definition/tree"
	"github.com/mlab-lattice/lattice/pkg/util/cli"
	"github.com/mlab-lattice/lattice/pkg/util/cli/color"
	"github.com/mlab-lattice/lattice/pkg/util/cli/flags"
)

const (
	setFileFlag  = "file"
	setValueFlag = "value"
)

var setContentFlags = []string{setFileFlag, setValueFlag}

func Set() *cli.Command {
	var (
		file  string
		value string
	)

	cmd := Command{
		Flags: map[string]cli.Flag{
			setFileFlag:  &flags.String{Target: &file},
			setValueFlag: &flags.String{Target: &value},
		},
		MutuallyExclusiveFlags: [][]string{setContentFlags},
		RequiredFlagSet:        [][]string{setContentFlags},
		Run: func(ctx *SecretCommandContext, args []string, flags cli.Flags) error {
			if flags[setFileFlag].Set() {
				var data []byte
				var err error

				if file == "-" {
					data, err = ioutil.ReadAll(os.Stdin)
				} else {
					data, err = ioutil.ReadFile(file)
				}
				if err != nil {
					return err
				}

				value = string(data)
			}

			return SetSecret(ctx.Client, ctx.System, ctx.Secret, value, os.Stdout)
		},
	}

	return cmd.Command()
}

func SetSecret(
	client client.Interface,
	system v1.SystemID,
	secret tree.PathSubcomponent,
	value string,
	w io.Writer,
) error {
	err := client.V1().Systems().Secrets(system).Set(secret, value)
	if err != nil {
		return err
	}

	fmt.Fprint(w, color.BoldHiSuccessString(fmt.Sprintf("✓ successfully set %v\n", secret.String())))
	return nil
}
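The `--file` handling above follows the common CLI convention that `-` means "read the value from standard input" rather than from a file. The same pattern can be sketched in a few lines of Python (the function name is illustrative and not part of lattice):

```python
import sys
from pathlib import Path

def read_secret_value(source):
    """Return the secret's contents: stdin when source is '-', else the file."""
    if source == "-":
        return sys.stdin.read()
    return Path(source).read_text()
```

Supporting `-` lets callers pipe a secret in (`cat token | cli secrets set --file -`) without it ever touching the shell history or the filesystem.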
Watchung Celebrates Opening With $1 Burritos For All on December 6th

Watchung, NJ (RestaurantNews.com) Pancheros Mexican Grill (www.pancheros.com), the Iowa-based burrito connoisseurs, opened its first location today in Watchung, NJ. In honor of the opening, Pancheros will host a special day for residents to enjoy its mouthwatering burrito creations for just $1 on December 6, 2012, from 4 p.m. to 6 p.m.

Supplying burrito lovers with a Mexican dining experience to remember, Pancheros uses only the freshest ingredients and homemade tortillas to concoct its trademark dishes. With the assistance of “Bob the Tool,” a unique plastic spatula, Pancheros has revolutionized burrito building forever, ensuring every customer gets an equally tasty mouthful in each bite! For those yearning to satisfy their palate in a different way, other menu selections include quesadillas, tacos, burrito bowls and salads, all of which are made with fresh grilled meats and vegetables.

“Pancheros is thrilled to be opening its ninth location in the New Jersey area, where we hope to show residents what a ‘burrito better built’ truly is,” said Rodney Anderson, President of Pancheros Mexican Grill. “Our tortillas are our friends. They won’t let you down. Wrap your fingers around a Pancheros burrito and it feels like a firm handshake. One that says, ‘Don’t worry, buddy, I’m gonna keep it together and make sure none of my marinated amazingness gets re-routed to your chinos.’”

The new Pancheros location will open November 29th and is located at 1680 Route 22 East, Watchung, NJ 07069. The store will be open from 10:30 a.m. to 10:00 p.m. Sunday through Thursday and 10:30 a.m. to 11:00 p.m. Friday through Saturday. To contact the Watchung Pancheros, please call (908) 561-0546.

About Pancheros Mexican Grill

Founded in 1992 by Rodney L.
Anderson, Coralville, Iowa-based Pancheros Mexican Grill is a quick-serve, fresh-Mexican franchise that serves its signature fresh-pressed tortillas filled with the freshest, highest-quality ingredients. Burritos are customized and mixed with “Bob the Tool” to get every ingredient in each bite. Along with their tasty burritos, the menu also includes quesadillas, tacos, burrito bowls, and salads. Pancheros currently has more than 50 locations in the United States and the company plans to have 60 restaurants open across the country by the end of 2013. For more information, visit http://www.pancheros.com.
package net.dgardiner.markdown.core.base;

public interface Type {
    String getGroup();
    String getKey();
    String getId();
}
After confirming plans back in June to rebrand, the brewery located at 909 W. Fifth Avenue in Grandview (previously operating as Zauber Brewing, though no longer affiliated with the owner of that brand name) shared today that the operation will be relaunching under the name Endeavor Brewing Company. “Our brewery is inspired by our team’s unique travel and cultural experiences,” said Endeavor’s Marketing Coordinator Oliver Convertini in a statement. “We draw upon these experiences for inspiration, and we believe our beer reflects this distinct viewpoint.” The taproom will include a cocktail menu designed by award-winning mixologist Annie Williams, guest drafts, and of course Endeavor’s own craft creations from brewmaster Cameron Lloyd, who had previously headed the brewing operation at Zauber. According to the brewery website, the current lineup includes a pilsner, Belgian witbier, English mild, Belgian tripel, barrel-aged pilsner and an IPA called New World. “We’re taking a big differentiation from Zauber and will no longer be just German and Belgian focused, although that’s certainly an area of the world that I draw a lot of inspiration from myself, and a lot of my favorite styles come from there, so there will be a fair number of German and Belgian styles within the wheelhouse,” said Lloyd back in June. Along with a continued commitment to soccer, the new(-ish) brewery will continue to feature food trucks as well. Endeavor will celebrate its grand (re-)opening on Saturday, September 23, including tours and a chance to meet the brewer. photo via Endeavor on Instagram
/**
 * @brief Add an element at the tail of the queue.
 * @param element The element to store in the queue.
 * @note If the queue is already full, the element is silently dropped.
 */
void s_queue_enqueue(size_t element)
{
    if (queue_tail_index == MAX_QUEUE_SIZE) {
        return;
    }

    queue[queue_tail_index++] = element;
}
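The guard above drops the element silently when the backing array is full. The same bounded-FIFO semantics can be sketched in Python (the class name and the dequeue behavior are my assumptions; the C dequeue routine is not shown here):

```python
class BoundedQueue:
    """Fixed-capacity FIFO mirroring the C array queue: an enqueue on a
    full queue is silently dropped, matching the early return in C."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = []

    def enqueue(self, element):
        if len(self._items) == self.capacity:
            return  # full: drop the element, as in the C version
        self._items.append(element)

    def dequeue(self):
        if not self._items:
            raise IndexError("dequeue from empty queue")
        return self._items.pop(0)
```

Note that because the C version only ever advances `queue_tail_index`, the array fills up permanently unless the dequeue side compacts or wraps it; a production version would typically use a ring buffer with head and tail indices taken modulo the capacity.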
import { Employee } from '@app/employee/entities/employee.entity';

export interface IEmployeeWithType {
  main: Employee[];
  list: Employee[];
}
import java.util.function.Supplier;

import seal.*;

class JavaSealedTest {
    public void testNesting() {
        new SubSealed.Nested();
        Supplier<SubSealed.Nested> nestedSupplier = SubSealed.Nested::new;
        SubSealed.INSTANCE.internalFunction();
        Runnable noArgFunction = SubSealed.INSTANCE::internalFunction;
    }
}
/* * Copyright 2019 Carnegie Technologies * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #pragma once #include "basic/MemVector.hpp" #include "object/OwnedObject.hpp" #include "error/Error.hpp" #include "sys/SocketApi.hpp" #include "log/TextLog.hpp" #include "log/LogId.hpp" #include "event/EventManager.hpp" namespace Pravala { class Socket; class IpSocket; class LocalSocket; /// @brief The owner of the Socket class SocketOwner { protected: /// @brief Called when data is received over a socket. /// @param [in] sock Pointer to the socket that received the data. /// @param [in] data The data received. If the receiver consumes that data, /// it should modify this object using, for example, data.consume() or data.clear(). /// If the data is only partially consumed, this callback will be called again, /// with the same data object (with remaining data). It will be repeated until /// either the data is fully consumed, or when not even a single byte of it is consumed. /// If the data is not consumed, the behaviour depends on the socket type. /// /// Sockets that can lose data (like UDP), will simply drop the data. /// /// Lossless sockets (like TCP) will keep it around in internal buffer, /// which can be accessed using getReadBuffer(). They also may stop reading more data until /// pending data is fully consumed (using consumeReadBuffer() call). virtual void socketDataReceived ( Socket * sock, MemHandle & data ) = 0; /// @brief Called when the socket is closed. 
/// @param [in] sock Pointer to the socket that was closed. /// @param [in] reason An additional reason for the socket being closed. virtual void socketClosed ( Socket * sock, ERRCODE reason ) = 0; /// @brief Callback generated when the socket successfully connects. /// @param [in] sock Pointer to the socket that was connected. virtual void socketConnected ( Socket * sock ) = 0; /// @brief Callback for notifying the owner when the connection attempt failed. /// @note This callback is only generated if the failure happened before successfully establishing /// the connection. If the connection succeeds, socketConnected() is generated, followed /// by socketClosed() when it is disconnected. /// @param [in] sock Pointer to the socket that failed to connect. /// @param [in] reason An additional reason for the connection failing. virtual void socketConnectFailed ( Socket * sock, ERRCODE reason ) = 0; /// @brief Callback for notifying the owner when the socket is ready to send data. /// @note This callback is only generated if previous send*() call failed due to insufficient buffer space. /// @param [in] sock The socket that is now ready to send the data again. virtual void socketReadyToSend ( Socket * sock ) = 0; /// @brief Callback for notifying the owner when the write size hint of a socket changes. /// Default implementation does nothing. /// @see getWriteSizeHint() for details. /// @param [in] sock The socket that generated the callback. /// @param [in] sizeHint The new size hint (in bytes); Could be 0. virtual void socketWriteSizeHintUpdated ( Socket * sock, size_t sizeHint ); /// @brief Destructor. virtual ~SocketOwner(); friend class Socket; }; /// @brief An abstract socket that can be used to connect, send and receive data. /// @note Passing Socket objects between threads is NOT SUPPORTED. class Socket: public OwnedObject<SocketOwner>, public LogId, protected EventManager::LoopEndEventHandler { public: /// @brief Closes the socket. 
/// No further callbacks will be generated. /// Safe to call on a socket that is already closed. /// Default implementation clears all the flags and scheduled events. /// @note Closing the socket typically clears the read buffer, so all unread data will be lost! virtual void close(); /// @brief Sends the data over the socket. /// This API simply uses underlying socket's behaviour. /// If it is a stream socket (like TCP), it will write the data into the stream. /// If it is a datagram socket (like UDP), it will treat the data as a single datagram. /// @param [in,out] data Data to send. /// The data is consumed to reflect how much of it has actually been written, /// and it's possible that it won't be fully accepted. If not all of the data is sent, /// socketReadyToSend() callback will be generated once the socket is ready to accept /// more data. Note that packet-based sockets (like UDP) will either send all or none /// of the data, and either not modify or clear this object. /// @return Standard error code. If there is an error resulting in socket being closed, /// a socketClosed() callback will be generated at the end of the event loop. /// That callback is generated only once, when the socket becomes closed. /// If this method is called on a socket that is already closed, /// no additional socketClosed() callbacks will be generated. virtual ERRCODE send ( MemHandle & data ) = 0; /// @brief Sends the data over the socket. /// This API simply uses underlying socket's behaviour. /// If it is a stream socket (like TCP), it will write the data into the stream. /// If it is a datagram socket (like UDP), it will treat the data as a single datagram. /// @param [in,out] data Data to send. /// The data is consumed to reflect how much of it has actually been written, /// and it's possible that it won't be fully accepted. If not all of the data is sent, /// socketReadyToSend() callback will be generated once the socket is ready to accept /// more data. 
Note that packet-based sockets (like UDP) will either send all or none /// of the data, and either not modify or clear this object. /// @return Standard error code. If there is an error resulting in socket being closed, /// a socketClosed() callback will be generated at the end of the event loop. /// That callback is generated only once, when the socket becomes closed. /// If this method is called on a socket that is already closed, /// no additional socketClosed() callbacks will be generated. virtual ERRCODE send ( MemVector & data ) = 0; /// @brief Sends the data over the socket. /// This API simply uses underlying socket's behaviour. /// If it is a stream socket (like TCP), it will write the data into the stream. /// If it is a datagram socket (like UDP), it will treat the data as a single datagram. /// @param [in] data Pointer to the data to send. /// @param [in,out] dataSize The size of the data to send. /// On success it is set to the number of bytes sent (or queued), /// which may be less than the original size. If not all of the data is sent, /// socketReadyToSend() callback will be generated once the socket is ready to accept /// more data. Note that packet-based sockets (like UDP) will either send all or none /// of the data, so they never modify this value. /// @return Standard error code. If there is an error resulting in socket being closed, /// a socketClosed() callback will be generated at the end of the event loop. /// That callback is generated only once, when the socket becomes closed. /// If this method is called on a socket that is already closed, /// no additional socketClosed() callbacks will be generated. virtual ERRCODE send ( const char * data, size_t & dataSize ) = 0; /// @brief Gets the data received over the socket without removing it from the receive buffer. /// Default implementation always returns an empty buffer. /// @return The read buffer (could be empty). 
virtual const MemHandle & getReadBuffer() const; /// @brief Consumes the data in the read buffer. /// This should be called to notify the socket that data returned by getReadBuffer() has been fully /// or partially consumed. If the socket's read is blocked, this may resume it. /// @param [in] size The number of consumed bytes from the buffer returned by getReadBuffer(). virtual void consumeReadBuffer ( size_t size ); /// @brief Gets the size of the data in the read buffer. /// @return The number of bytes of data in the read buffer. inline size_t getReadBufferSize() const { return getReadBuffer().size(); } /// @brief Returns the size of a single write/send operation supported by the socket internally. /// It may not be supported (and return 0), and should be treated by the caller as a hint /// for an optimal size of a single packet. Even when socket returns a non-zero value, /// there is no guarantee that when sending data, this size of data will actually be accepted. /// Default implementation returns 0. /// @return The optimal size of a single data write for this socket (in bytes). /// 0 if unknown or not relevant. virtual size_t getWriteSizeHint() const; /// @brief Checks if this socket is valid. /// @return True if this socket is valid; False otherwise. inline bool isValid() const { return hasFlag ( SockFlagValid ); } /// @brief Checks if this socket is connecting. /// @return True if this socket is connecting; False otherwise. inline bool isConnecting() const { return hasFlag ( SockFlagConnecting ); } /// @brief Checks if this socket is connected. /// @note This should typically only be used from the outside. Classes inheriting Socket should /// check specific flags. Some sockets may require several steps to become fully connected. /// @return True if this socket is connected; False otherwise. inline bool isConnected() const { return hasFlag ( SockFlagConnected ); } /// @brief This function returns the underlying file descriptor and removes it from the socket. 
/// This file descriptor will be unsubscribed from EventManager, and the socket will no longer be /// responsible for closing it. Not all socket types support this operation, in which case nothing /// will happen (and -1 will be returned). If this function succeeds, it will be equivalent /// to calling close() on the socket, except the actual file descriptor will not be closed. /// @note Successful call typically clears the read buffer, so all unread data will be lost! /// @return Underlying file descriptor, or -1 on error (or if the socket does not support this operation). virtual int stealSockFd(); /// @brief Returns this object as an IpSocket. /// @return This object as an IpSocket pointer, or 0 if it is not an IpSocket. virtual IpSocket * getIpSocket(); /// @brief Returns this object as a LocalSocket. /// @return This object as a LocalSocket pointer, or 0 if it is not a LocalSocket. virtual LocalSocket * getLocalSocket(); /// @brief Returns the description of the local endpoint. /// The default implementation returns an empty string. /// @return The description of the local endpoint. virtual String getLocalDesc() const; /// @brief Returns the description of the remote endpoint. /// The default implementation returns an empty string. /// @return The description of the remote endpoint. virtual String getRemoteDesc() const; /// @brief Gets the value of a socket option. /// It is analogous to POSIX 'getsockopt' call and may not be supported. /// Default implementation always fails. /// @param [in] level The protocol level at which the option resides. /// @param [in] optName Specifies the option to get. /// @param [out] value Memory to store the value in. /// If not empty, the size of the memory will be used as expected option value size. /// Otherwise an automatic mode will be used. /// @return True on success, false on error. virtual bool getOption ( int level, int optName, MemHandle & value ) const; /// @brief Tries to increase receive buffer size of the socket. 
/// This function changes SO_RCVBUF option of the socket. /// If the currently used socket's buffer size is greater (or equal) to the size requested, /// it will NOT be modified (this function never shrinks the buffer). /// Otherwise the buffer will be increased up to the size requested, if possible. /// If it cannot be increased to the requested size, it will be increased as much as possible. /// Default implementation always fails. /// @param [in] size The requested size (in bytes); Should be > 0. /// @return New receive buffer size in bytes (even if it was not modified, can be larger than size requested); /// -1 on error. virtual int increaseRcvBufSize ( int size ); /// @brief Tries to increase send buffer size of the socket. /// This function changes SO_SNDBUF option of the socket. /// If the currently used socket's buffer size is greater (or equal) to the size requested, /// it will NOT be modified (this function never shrinks the buffer). /// Otherwise the buffer will be increased up to the size requested, if possible. /// If it cannot be increased to the requested size, it will be increased as much as possible. /// Default implementation always fails. /// @param [in] size The requested size (in bytes); Should be > 0. /// @return New send buffer size in bytes (even if it was not modified, can be larger than size requested); /// -1 on error. virtual int increaseSndBufSize ( int size ); protected: /// @brief Socket will be closed (using close()) and socket "closed" callback will be generated. /// This will NOT clear any flags! Instead, flags should be handled in close(). static const uint16_t SockEventClosed = ( 1 << 0 ); /// @brief Socket will be marked 'connected', and "connected" callback will be generated. /// It will set 'connected' and unset 'connecting' flag. static const uint16_t SockEventConnected = ( 1 << 1 ); /// @brief Socket will be marked as not connected and not connecting. /// Also "failed to connect" callback will be generated. 
static const uint16_t SockEventConnectFailed = ( 1 << 2 ); static const uint16_t SockFlagValid = ( 1 << 0 ); ///< Set when this socket is valid. static const uint16_t SockFlagConnecting = ( 1 << 1 ); ///< Set when this socket is connecting. static const uint16_t SockFlagConnected = ( 1 << 2 ); ///< Set when this socket is connected. /// @brief This flag is set if the send buffer was filled up and send() blocked. static const uint16_t SockFlagSendBlocked = ( 1 << 3 ); /// @brief The lowest event bit that can be used by the class inheriting this one. /// Classes that inherit it should use ( 1 << next_shift + 0), ( 1 << next_shift + 1), etc. values. static const uint8_t SockNextEventShift = 3; /// @brief The lowest flag bit that can be used by the class inheriting this one. /// Classes that inherit it should use ( 1 << next_shift + 0), ( 1 << next_shift + 1), etc. values. static const uint8_t SockNextFlagShift = 4; static TextLog _log; ///< Log stream. /// @brief Constructor. /// @param [in] owner The initial owner to set. Socket ( SocketOwner * owner ); /// @brief Destructor. virtual ~Socket(); /// @brief Sets given flags. /// @param [in] flags The flags to set. inline void setFlags ( uint16_t flags ) { _sockFlags |= flags; } /// @brief Clears given flags. /// @param [in] flags The flags to clear. inline void clearFlags ( uint16_t flags ) { _sockFlags &= ~flags; } /// @brief Clears all flags. inline void clearAllFlags() { _sockFlags = 0; } /// @brief Checks if at least one of given socket flags is set. /// @param [in] flags Flags to check. /// @return True if at least one of the flags passed is set; False otherwise. inline bool hasFlag ( uint16_t flags ) const { return ( ( _sockFlags & flags ) != 0 ); } /// @brief Returns a multi-bit flag value stored in socket flags. /// "Flag value" is a value stored in multiple flag bits, but interpreted as a number, not using /// individual bits. valueMask describes which bits of socket flags are to be used. 
/// @param [in] valueMask The mask to use for the value. /// @return Value stored in flags using given mask. inline uint16_t getFlagValue ( uint16_t valueMask ) const { return ( _sockFlags & valueMask ); } /// @brief Sets a multi-bit flag value to be stored in socket flags. /// "Flag value" is a value stored in multiple flag bits, but interpreted as a number, not using /// individual bits. 'valueMask' describes which bits of socket flags are to be used. /// All the bits that are part of 'value' will be set, and all the bits that are part of 'valueMask', /// but not the 'value' will be cleared. getFlagValue() will return the exact value set. /// @param [in] value The value to set. No bits that are outside of 'valueMask' will be set. /// @param [in] valueMask The mask in flags to use. inline void setFlagValue ( uint16_t value, uint16_t valueMask ) { _sockFlags = ( _sockFlags & ( ~valueMask ) ) | ( value & valueMask ); } /// @brief Runs socket events. /// It may change socket flags, and generate some callback(s). /// The default implementation runs a single 'closed', 'connect failed', or 'connected' callback /// (it selects the first one, prioritized in this order). /// For error callbacks it will use either 'closed' or 'connect failed' error codes. /// If this returns 'true', the event was not 'closed' or 'connect failed' (so it was 'connected'), /// and there are some other events remaining, they will be re-scheduled. /// @param [in] events The events to run. /// @return True if the event has been handled; False otherwise. /// If this returns 'true', other versions cannot access local state and should return right away, /// since a callback might have been called by then. virtual bool runEvents ( uint16_t events ); /// @brief Schedules given event (one or more of SockEvent*). /// If multiple events are scheduled at the same time, it's up to runEvents() to select /// the event (or events) to run, and ignore or re-schedule additional events. 
/// @param [in] events The event(s) to schedule. void scheduleEvents ( uint16_t events ); /// @brief Checks if the given socket event is scheduled. /// @param [in] events The event(s) to check. /// @return True if at least one of the events passed is scheduled. inline bool isEventScheduled ( uint16_t events ) const { return ( ( _sockEvents & events ) != 0 ); } /// @brief Clears passed events. /// @param [in] events Event (or events) to clear. inline void clearEvents ( uint16_t events ) { _sockEvents &= ~events; } /// @brief Clears all scheduled events. inline void clearAllEvents() { _sockEvents = 0; } /// @brief Helper function that sends memory vector one chunk at a time. /// It can be used by stream sockets to send the entire memory vector, one chunk at a time. /// It will keep sending until there is an error, or a partial write. /// @note It does not make sense for datagram sockets, as each chunk would result in a separate message. /// @param [in,out] data Data to send. /// The data is consumed to reflect how much of it has actually been written, /// and it's possible that it won't be fully accepted. If not all of the data is sent, /// socketReadyToSend() callback will be generated once the socket is ready to accept /// more data. /// @return Standard error code. If there is an error resulting in socket being closed, /// a socketClosed() callback will be generated at the end of the event loop. /// That callback is generated only once, when the socket becomes closed. /// If this method is called on a socket that is already closed, /// no additional socketClosed() callbacks will be generated. ERRCODE streamSend ( MemVector & data ); /// @brief A helper function that initializes socket's FD. /// It does nothing if the socket is already initialized. /// If a new socket is created, this function will also enable non-blocking mode on it. /// It also sets 'valid' flag if the socket is initialized successfully. 
/// @param [in] sockType The type of the socket to initialize. /// @param [in,out] sockFd The FD of the socket to initialize. It is not modified if it's already >= 0. /// @return True if the socket has been initialized properly (or was already initialized); /// False if we failed to generate a new socket file descriptor or enable non-blocking mode. virtual bool sockInitFd ( SocketApi::SocketType sockType, int & sockFd ); /// @brief Immediately sends a 'data received' callback. /// It will hold a temporary reference and keep calling the callback until all the data is accepted, /// or the owner stops accepting the data (or it goes away). /// See SocketOwner for detailed description. /// @param [in] data Data received. virtual void doSockDataReceived ( MemHandle & data ); /// @brief Helper function that calls 'data received' callback in the owner (if it's valid). /// @param [in] data Data received. inline void callSockDataReceived ( MemHandle & data ) { if ( getOwner() != 0 ) { getOwner()->socketDataReceived ( this, data ); } } /// @brief Closes the socket (using close()) and immediately sends a 'closed' callback. /// See SocketOwner for detailed description. /// @param [in] reason Reason code. virtual void doSockClosed ( ERRCODE reason ); /// @brief Sets 'connected' flag, clears 'connecting' flag, and immediately sends a 'connected' callback. /// See SocketOwner for detailed description. virtual void doSockConnected(); /// @brief Clears 'connected' and 'connecting' flags, and immediately sends a 'connect failed' callback. /// See SocketOwner for detailed description. /// @param [in] reason Reason code. virtual void doSockConnectFailed ( ERRCODE reason ); /// @brief Clears 'send blocked' flag and immediately sends a 'ready to send' callback. /// See SocketOwner for detailed description. virtual void doSockReadyToSend(); /// @brief Sends 'write size hint updated' callback. /// @param [in] sizeHint The new size hint (in bytes); Could be 0. 
virtual void doSockWriteSizeHintUpdated ( size_t sizeHint ); virtual void receiveLoopEndEvent(); private: uint16_t _sockFlags; ///< Socket flags. uint16_t _sockEvents; ///< Pending socket events. }; }
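The event-scheduling helpers declared above (`scheduleEvents`, `isEventScheduled`, `clearEvents`, `clearAllEvents`) are plain bitmask bookkeeping over the `_sockEvents` field. A minimal, self-contained sketch of that pattern follows; `EventTracker` and the event bit names are illustrative stand-ins, not the real class or its flag values:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical event bits; the real header defines its own flag values.
enum : uint16_t { EventRead = 1 << 0, EventWrite = 1 << 1, EventClose = 1 << 2 };

// Miniature of the scheduling helpers: OR to schedule, AND-NOT to clear,
// and "scheduled" means at least one of the queried bits is set.
struct EventTracker
{
    uint16_t _sockEvents = 0;

    void scheduleEvents ( uint16_t events ) { _sockEvents |= events; }
    bool isEventScheduled ( uint16_t events ) const { return ( _sockEvents & events ) != 0; }
    void clearEvents ( uint16_t events ) { _sockEvents &= ~events; }
    void clearAllEvents() { _sockEvents = 0; }
};
```

Because `isEventScheduled` tests with a single AND, passing several bits at once asks "is any of these scheduled?", which matches the doc comment above.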
/** * The List View for an <code>RTextFileChooser</code>. This is similar to the * List View found in Microsoft Windows file choosers. * * @author Robert Futrell * @version 0.1 */ class ListView extends JList<File> implements RTextFileChooserView { private RTextFileChooser chooser; // The chooser that owns this list view. private MouseListener mouseListener; private SelectionListener selectionListener; /** * Constructor. * * @param chooser The file chooser that owns this list view. */ ListView(RTextFileChooser chooser) { super(new DefaultListModel<>()); // Ensure we have a DefaultListModel. this.chooser = chooser; // Just some other stuff to keep things looking nice. ListCellRenderer<File> cellRenderer = FileChooserViewRendererFactory.createListViewRenderer(chooser); setCellRenderer(cellRenderer); setLayoutOrientation(JList.VERTICAL_WRAP); addComponentListener(new ComponentAdapter() { @Override public void componentResized(ComponentEvent e) { setVisibleRowCount(-1); } }); setTransferHandler(new FileChooserViewTransferHandler(this)); setDragEnabled(true); addKeyListener(new ViewKeyListener()); // Add any listeners. mouseListener = new MouseListener(chooser); addMouseListener(mouseListener); selectionListener = new SelectionListener(chooser); addListSelectionListener(selectionListener); fixInputMap(); ComponentOrientation orientation = ComponentOrientation. getOrientation(getLocale()); applyComponentOrientation(orientation); } /** * Clears all files displayed by this view. */ @Override public void clearDisplayedFiles() { // setListData() replaces our ListModel, with a non-DefaultListModel // model, which we don't want to do //setListData(new File[0]); setModel(new DefaultListModel<>()); } /** * Makes sure the specified file is visible in the view. * * @param file The file that is to be visible. */ @Override public void ensureFileIsVisible(File file) { // This will always be true because we explicitly set the // model below. 
DefaultListModel<File> model = (DefaultListModel<File>)getModel(); int index = model.indexOf(file); if (index!=-1) ensureIndexIsVisible(index); } /** * Removes keyboard mappings that interfere with our file chooser's * shortcuts. */ private void fixInputMap() { InputMap im = getInputMap(); // Prevent shift+delete from doing nothing (registered to delete an // element?). im.put(KeyStroke.getKeyStroke("shift DELETE"), "none"); } /** * {@inheritDoc} */ @Override public Color getDefaultFileColor() { return getForeground(); } /** * Returns the number of files currently being displayed. * * @return The number of files currently being displayed. */ @Override public int getDisplayedFileCount() { return getModel().getSize(); } /** * Returns the file at the specified point in the view. * * @param p The point at which to look for a file. * @return The file at that point (or <code>null</code> if there isn't * one???). */ @Override public File getFileAtPoint(Point p) { int row = locationToIndex(p); return getModel().getElementAt(row); } /** * Gets the selected file, for use when a single file is selected. * * @return The selected file, or <code>null</code> if no file is * selected. */ @Override public File getSelectedFile() { return getSelectedValue(); } /** * Returns all selected files in this view. * * @return An array of all selected files. */ @Override public File[] getSelectedFiles() { return getSelectedValuesList().toArray(new File[0]); } /** * Returns the tool tip to display for a given mouse event. * * @param e The mouse event. * @return The tool tip. 
*/ @Override public String getToolTipText(MouseEvent e) { String tip = null; Point p = e.getPoint(); int index = locationToIndex(p); if (index==-1) return null; Rectangle bounds = getCellBounds(index, index); if (bounds.contains(p)) { File file = getModel().getElementAt(index); if (file==null || file.isDirectory()) return null; tip = chooser.getToolTipFor(file); } return tip; } /** * Removes all listeners this view has created and added to itself. This * method is here to get around the fact that <code>finalize</code> is * not going to be called as long as listeners are still registered for * this view, but nobody else knows about these listeners except for the * view. */ @Override public void removeAllListeners() { removeMouseListener(mouseListener); removeListSelectionListener(selectionListener); } /** * Selects the file at the specified point in the view. If no file * exists at that point, the selection should be cleared. * * @param p The point at which a file should be selected. */ @Override public void selectFileAtPoint(Point p) { int row = locationToIndex(p); Rectangle bounds = getCellBounds(row, row); if (bounds.contains(p)) { setSelectedIndex(row); ensureIndexIsVisible(row); } else clearSelection(); } /** * {@inheritDoc} */ @Override public void setDisplayedFiles(List<File> files) { // setListData() replaces our ListModel, with a non-DefaultListModel // model, which we don't want to do //setListData(files); DefaultListModel<File> model = new DefaultListModel<>(); for (File file : files) { model.addElement(file); } setModel(model); } /** * Sets whether or not this view allows the selection of multiple files. * * @param enabled Whether or not to allow the selection of multiple * files. */ @Override public void setMultiSelectionEnabled(boolean enabled) { getSelectionModel().setSelectionMode( enabled ? ListSelectionModel.MULTIPLE_INTERVAL_SELECTION : ListSelectionModel.SINGLE_SELECTION); } /** * Selects the specified files in the view. 
* * @param files The files to select. If any of the files are not in * the file chooser's <code>currentDirectory</code>, then * they are not selected. */ @Override public void setSelectedFiles(File[] files) { int num = files.length; if(num>0) { ListModel<File> model = getModel(); int modelSize = model.getSize(); for (File f1 : files) { if (!f1.exists()) continue; File parentFile = f1.getParentFile(); if (!parentFile.equals(chooser.currentDirectory)) continue; for (int j = 0; j < modelSize; j++) { File f2 = model.getElementAt(j); if (f1.equals(f2)) { addSelectionInterval(j, j); break; } } } } } /** * Listens for key events in the list, to allow the user to type the name * of a file and have it selected. */ private class ViewKeyListener extends KeyAdapter { private String typed; private long lastTime; private int getNextMatch(String text, int fromCell) { text = text.toUpperCase(); ListModel<File> model = getModel(); // First, try everything after the selected row for (int row=fromCell; row<model.getSize(); row++) { File value = model.getElementAt(row); String fileName = value.getName(); fileName = fileName.toUpperCase(); if (fileName.startsWith(text)) { return row; } } // Then, wrap around to before the selected row for (int row=0; row<fromCell; row++) { File value = model.getElementAt(row); String fileName = value.getName(); fileName = fileName.toUpperCase(); if (fileName.startsWith(text)) { return row; } } return -1; } @Override public void keyTyped(KeyEvent e) { if (getModel().getSize()==0) { return; } long time = e.getWhen(); if (time<lastTime+1000) { if (typed==null) { typed = String.valueOf(e.getKeyChar()); } else { typed += e.getKeyChar(); } } else { typed = String.valueOf(e.getKeyChar()); } lastTime = time; int startCell = getLeadSelectionIndex(); if (startCell==-1) { startCell = 0; } int matchCell = getNextMatch(typed, startCell); if (matchCell!=-1) { setSelectedIndex(matchCell); ensureFileIsVisible(getSelectedFile()); } } } }
A contextual account of the psychosocial impacts of social identity in a sample of digital gamers.

Drawing on social identity theory (SIT), the current research explored the psychosocial impacts of digital gaming through two studies. In Study 1, Football Manager players (N = 349) completed an online questionnaire measuring their social identity, quality of friendships, self-esteem, and psychological well-being. Study 2 utilised an equivalent methodology with FIFA players (N = 95), for whom social identity was framed by their affiliation as online versus offline players. Study 1 found that social identity was positively related to well-being. Study 2 found differential effects of social identity as a result of the context of play: specifically, positive associations were found for players who played in offline contexts in respect of support and depth of relationships. Overall, positive associations were found between social identity and self-esteem. This research highlights the application of SIT through a more nuanced contextual lens, to more fully understand its psychosocial impacts.
// Copyright 2021 The Matrix.org Foundation C.I.C. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. use std::{ net::{SocketAddr, TcpListener}, sync::Arc, time::Duration, }; use anyhow::Context; use clap::Parser; use futures::{future::TryFutureExt, stream::TryStreamExt}; use hyper::{header, Server, Version}; use mas_config::RootConfig; use mas_storage::MIGRATOR; use mas_tasks::TaskQueue; use mas_templates::Templates; use opentelemetry_http::HeaderExtractor; use tower::{make::Shared, ServiceBuilder}; use tower_http::{ compression::CompressionLayer, sensitive_headers::SetSensitiveHeadersLayer, trace::{MakeSpan, OnResponse, TraceLayer}, }; use tracing::{error, field, info}; use super::RootCommand; #[derive(Parser, Debug, Default)] pub(super) struct ServerCommand { /// Automatically apply pending migrations #[clap(long)] migrate: bool, /// Watch for changes for templates on the filesystem #[clap(short, long)] watch: bool, } #[derive(Debug, Clone, Default)] struct OtelMakeSpan; impl<B> MakeSpan<B> for OtelMakeSpan { fn make_span(&mut self, request: &hyper::Request<B>) -> tracing::Span { // Extract the context from the headers let headers = request.headers(); let extractor = HeaderExtractor(headers); let cx = opentelemetry::global::get_text_map_propagator(|propagator| { propagator.extract(&extractor) }); // Attach the context so when the request span is created it gets properly // parented let _guard = cx.attach(); let version = match request.version() { Version::HTTP_09 => 
"0.9", Version::HTTP_10 => "1.0", Version::HTTP_11 => "1.1", Version::HTTP_2 => "2.0", Version::HTTP_3 => "3.0", _ => "", }; let span = tracing::info_span!( "request", http.method = %request.method(), http.target = %request.uri(), http.flavor = version, http.status_code = field::Empty, http.user_agent = field::Empty, otel.kind = "server", otel.status_code = field::Empty, ); if let Some(user_agent) = headers .get(header::USER_AGENT) .and_then(|s| s.to_str().ok()) { span.record("http.user_agent", &user_agent); } span } } #[derive(Debug, Clone, Default)] struct OtelOnResponse; impl<B> OnResponse<B> for OtelOnResponse { fn on_response(self, response: &hyper::Response<B>, _latency: Duration, span: &tracing::Span) { let s = response.status(); let status = if s.is_success() { "ok" } else if s.is_client_error() || s.is_server_error() { "error" } else { "unset" }; span.record("otel.status_code", &status); span.record("http.status_code", &s.as_u16()); } } #[cfg(not(unix))] async fn shutdown_signal() { // Wait for the CTRL+C signal tokio::signal::ctrl_c() .await .expect("failed to install Ctrl+C signal handler"); tracing::info!("Got Ctrl+C, shutting down"); } #[cfg(unix)] async fn shutdown_signal() { use tokio::signal::unix::{signal, SignalKind}; // Wait for SIGTERM and SIGINT signals // This might panic but should be fine let mut term = signal(SignalKind::terminate()).expect("failed to install SIGTERM signal handler"); let mut int = signal(SignalKind::interrupt()).expect("failed to install SIGINT signal handler"); tokio::select! 
{ _ = term.recv() => tracing::info!("Got SIGTERM, shutting down"), _ = int.recv() => tracing::info!("Got SIGINT, shutting down"), }; } /// Watch for changes in the templates folders async fn watch_templates( client: &watchman_client::Client, templates: &Templates, ) -> anyhow::Result<()> { use watchman_client::{ fields::NameOnly, pdu::{QueryResult, SubscribeRequest}, CanonicalPath, SubscriptionData, }; let templates = templates.clone(); // Find which roots we're supposed to watch let roots = templates.watch_roots().await; let mut streams = Vec::new(); for root in roots { // For each root, create a subscription let resolved = client .resolve_root(CanonicalPath::canonicalize(root)?) .await?; // TODO: we could subscribe to less, properly filter here let (subscription, _) = client .subscribe::<NameOnly>(&resolved, SubscribeRequest::default()) .await?; // Create a stream out of that subscription let stream = futures::stream::try_unfold(subscription, |mut sub| async move { let next = sub.next().await?; anyhow::Ok(Some((next, sub))) }); streams.push(Box::pin(stream)); } let files_changed_stream = futures::stream::select_all(streams).try_filter_map(|event| async move { match event { SubscriptionData::FilesChanged(QueryResult { files: Some(files), .. 
}) => { let files: Vec<_> = files.into_iter().map(|f| f.name.into_inner()).collect(); Ok(Some(files)) } _ => Ok(None), } }); let fut = files_changed_stream .try_for_each(move |files| { let templates = templates.clone(); async move { info!(?files, "Files changed, reloading templates"); templates .clone() .reload() .await .context("Could not reload templates") } }) .inspect_err(|err| error!(%err, "Error while watching templates, stop watching")); tokio::spawn(fut); Ok(()) } impl ServerCommand { pub async fn run(&self, root: &RootCommand) -> anyhow::Result<()> { let config: RootConfig = root.load_config()?; let addr: SocketAddr = config .http .address .parse() .context("could not parse listener address")?; let listener = TcpListener::bind(addr).context("could not bind address")?; // Connect to the database let pool = config.database.connect().await?; if self.migrate { info!("Running pending migrations"); MIGRATOR .run(&pool) .await .context("could not run migrations")?; } info!("Starting task scheduler"); let queue = TaskQueue::default(); queue.recuring(Duration::from_secs(15), mas_tasks::cleanup_expired(&pool)); queue.start(); // Initialize the key store let key_store = config .oauth2 .key_store() .context("could not import keys from config")?; // Wrap the key store in an Arc let key_store = Arc::new(key_store); // Load and compile the templates let templates = Templates::load_from_config(&config.templates) .await .context("could not load templates")?; // Watch for changes in templates if the --watch flag is present if self.watch { let client = watchman_client::Connector::new() .connect() .await .context("could not connect to watchman")?; watch_templates(&client, &templates) .await .context("could not watch for templates changes")?; } // Start the server let root = mas_handlers::root(&pool, &templates, &key_store, &config); let warp_service = warp::service(root); let service = ServiceBuilder::new() // Add high level tracing/logging to all requests .layer( 
TraceLayer::new_for_http() .make_span_with(OtelMakeSpan) .on_response(OtelOnResponse), ) // Set a timeout .timeout(Duration::from_secs(10)) // Compress responses .layer(CompressionLayer::new()) // Mark the `Authorization` and `Cookie` headers as sensitive so it doesn't show in logs .layer(SetSensitiveHeadersLayer::new(vec![ header::AUTHORIZATION, header::COOKIE, ])) .service(warp_service); info!("Listening on http://{}", listener.local_addr().unwrap()); Server::from_tcp(listener)? .serve(Shared::new(service)) .with_graceful_shutdown(shutdown_signal()) .await?; Ok(()) } }
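The `OtelOnResponse` hook above maps the HTTP status class onto an OpenTelemetry status string: 2xx becomes "ok", 4xx/5xx becomes "error", and everything else (1xx/3xx) is left "unset". That classification can be sketched on its own (C++ here for illustration; the original is Rust, and `otelStatus` is a hypothetical name):

```cpp
#include <cassert>
#include <string>

// Mirrors the OtelOnResponse mapping above:
//   2xx -> "ok", 4xx/5xx -> "error", anything else -> "unset".
std::string otelStatus(int httpStatus) {
    if (httpStatus >= 200 && httpStatus < 300) return "ok";
    if (httpStatus >= 400 && httpStatus < 600) return "error";
    return "unset";
}
```

Leaving 1xx/3xx as "unset" follows the OpenTelemetry convention that redirects and informational responses are neither success nor failure from the server's point of view.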
/*
 * si7051.c
 *
 *  Created on: Jul 26, 2021
 *      Author: alema
 */
#include "si7051.h"

#define SI7051_I2C_ADDR          0x40
/* "Measure Temperature, No Hold Master Mode" command (0xF3). */
#define SI7051_CMD_MEASURE_TEMP  0xF3

HAL_StatusTypeDef si7051Temp(I2C_HandleTypeDef *hi2c1, float *siValue)
{
	uint8_t cmd = SI7051_CMD_MEASURE_TEMP;

	/* Send the measurement command to the sensor. */
	HAL_StatusTypeDef ret = HAL_I2C_Master_Transmit(hi2c1,
			(uint16_t)(SI7051_I2C_ADDR << 1), &cmd, 1, 50);
	if (HAL_OK == ret)
	{
		HAL_Delay(10); /* delay is needed for the conversion/response time */

		uint8_t addata[2];
		ret = HAL_I2C_Master_Receive(hi2c1, (SI7051_I2C_ADDR << 1) | 0x01,
				addata, 2, 50);
		if (HAL_OK == ret)
		{
			/* 16-bit temperature code, MSB first; datasheet conversion:
			 * T[degC] = 175.72 * code / 65536 - 46.85 */
			uint16_t si7051_temp = addata[0] << 8 | addata[1];
			*siValue = (175.72 * si7051_temp) / 65536 - 46.85;
		}
	}

	return ret;
}
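The code-to-temperature conversion used in the driver above can be isolated as a pure function, which makes it testable without I2C hardware. A sketch of the Si7051 datasheet formula (the function name here is illustrative, not part of the driver):

```cpp
#include <cassert>
#include <cstdint>

// Si7051 datasheet conversion: T[degC] = 175.72 * code / 65536 - 46.85.
// Pure function over the raw 16-bit temperature code.
inline float si7051CodeToCelsius(uint16_t code) {
    return (175.72f * static_cast<float>(code)) / 65536.0f - 46.85f;
}
```

A code of 0 maps to the lower end of the range (about -46.85 °C) and 0xFFFF to roughly +128.9 °C, so the function is monotonically increasing across the full 16-bit range.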
#pragma once namespace fast_io { /* https://www.youtube.com/watch?v=c1gO9aB9nbs&t=3166s CppCon 2014: <NAME> "Lock-Free Programming (or, Juggling Razor Blades), Part I" */ template<std::movable acceptor_type,std::size_t size=900,typename server_type,typename Func> inline void thread_pool_accept(server_type& server,Func&& func) { std::vector<std::thread> pool; pool.reserve(size); std::array<std::pair<std::mutex,std::condition_variable>,size> cvs; std::vector<std::optional<acceptor_type>> slot(size); for(std::size_t i{};i!=size;++i) pool.emplace_back([func,&cvs,&slot](std::size_t i) { auto &cvp(cvs[i]); for(std::optional<acceptor_type>& opt(slot[i]);;) { std::unique_lock ul{cvp.first}; cvp.second.wait(ul,[&opt](){return opt.has_value();}); auto acc(*std::move(opt)); opt.reset(); func(acc); } },i); for(auto i(slot.begin());;) { std::optional<acceptor_type> opt(std::in_place,server); for(auto e(slot.end());*i;) if(++i==e) i=slot.begin(); *i=std::move(opt); cvs[i-slot.begin()].second.notify_one(); } } }
/* * Copyright (c) 2004-2008 The Trustees of Indiana University and Indiana * University Research and Technology * Corporation. All rights reserved. * Copyright (c) 2004-2005 The University of Tennessee and The University * of Tennessee Research Foundation. All rights * reserved. * Copyright (c) 2004-2005 High Performance Computing Center Stuttgart, * University of Stuttgart. All rights reserved. * Copyright (c) 2004-2005 The Regents of the University of California. * All rights reserved. * $COPYRIGHT$ * * Additional copyrights may follow * * $HEADER$ */ /** @file: * * The OpenRTE Environment-Specific Services * */ #ifndef ORTE_ESS_H #define ORTE_ESS_H #include "orte_config.h" #include "orte/types.h" #include "opal/mca/mca.h" BEGIN_C_DECLS /* * API functions */ /* * Initialize the RTE for this environment */ typedef int (*orte_ess_base_module_init_fn_t)(char flags); /* * Finalize the RTE for this environment */ typedef int (*orte_ess_base_module_finalize_fn_t)(void); /** * Abort the current application * * Aborts currently running application, NOTE: We do NOT call the * regular C-library "abort" function, even * though that would have alerted us to the fact that this is * an abnormal termination, because it would automatically cause * a core file to be generated. The "report" flag indicates if the * function should create an appropriate file to alert the local * orted that termination was abnormal. */ typedef void (*orte_ess_base_module_abort_fn_t)(int status, bool report); /** * Determine if a process is local to me * * MPI procs need to know if a process is "local" or not - i.e., * if they share the same node. Different environments are capable * of making that determination in different ways - e.g., they may * provide a callable utility to return the answer, or download * a map of information into each process. This API provides a * means for each environment to do the "right thing". 
*/ typedef bool (*orte_ess_base_module_proc_is_local_fn_t)(orte_process_name_t *proc); /** * Get the hostname where a proc resides * * MPI procs need to know the hostname where a specified proc resides. * Different environments provide that info in different ways - e.g., they may * provide a callable utility to return the answer, or download * a map of information into each process. This API provides a * means for each environment to do the "right thing". * * NOTE: To avoid memory waste, this function returns a pointer * to a static storage. IT MUST NOT BE FREED! */ typedef char* (*orte_ess_base_module_proc_get_hostname_fn_t)(orte_process_name_t *proc); /** * Determine the arch of the node where a specified proc resides * * MPI procs need to know the arch being used by a specified proc. * Different environments provide that info in different ways - e.g., they may * provide a callable utility to return the answer, or download * a map of information into each process. This API provides a * means for each environment to do the "right thing". 
typedef uint32_t (*orte_ess_base_module_proc_get_arch_fn_t)(orte_process_name_t *proc);

/**
 * Get the local rank of a remote process
 */
typedef orte_local_rank_t (*orte_ess_base_module_proc_get_local_rank_fn_t)(orte_process_name_t *proc);

/**
 * Get the node rank of a remote process
 */
typedef orte_node_rank_t (*orte_ess_base_module_proc_get_node_rank_fn_t)(orte_process_name_t *proc);

/**
 * Update the arch of a remote process
 */
typedef int (*orte_ess_base_module_update_arch_fn_t)(orte_process_name_t *proc, uint32_t arch);

/**
 * Handle fault tolerance updates
 *
 * @param[in] state Fault tolerance state update
 *
 * @retval ORTE_SUCCESS The operation completed successfully
 * @retval ORTE_ERROR An unspecified error occurred
 */
typedef int (*orte_ess_base_module_ft_event_fn_t)(int state);

/*
 * the standard module data structure
 */
struct orte_ess_base_module_1_0_0_t {
    orte_ess_base_module_init_fn_t                  init;
    orte_ess_base_module_finalize_fn_t              finalize;
    orte_ess_base_module_abort_fn_t                 abort;
    orte_ess_base_module_proc_is_local_fn_t         proc_is_local;
    orte_ess_base_module_proc_get_hostname_fn_t     proc_get_hostname;
    orte_ess_base_module_proc_get_arch_fn_t         proc_get_arch;
    orte_ess_base_module_proc_get_local_rank_fn_t   get_local_rank;
    orte_ess_base_module_proc_get_node_rank_fn_t    get_node_rank;
    orte_ess_base_module_update_arch_fn_t           update_arch;
    orte_ess_base_module_ft_event_fn_t              ft_event;
};
typedef struct orte_ess_base_module_1_0_0_t orte_ess_base_module_1_0_0_t;
typedef struct orte_ess_base_module_1_0_0_t orte_ess_base_module_t;

/*
 * the standard component data structure
 */
struct orte_ess_base_component_2_0_0_t {
    mca_base_component_t base_version;
    mca_base_component_data_t base_data;
};
typedef struct orte_ess_base_component_2_0_0_t orte_ess_base_component_2_0_0_t;
typedef struct orte_ess_base_component_2_0_0_t orte_ess_base_component_t;

/*
 * Macro for use in components that are of type ess
 */
#define ORTE_ESS_BASE_VERSION_2_0_0 \
    MCA_BASE_VERSION_2_0_0, \
    "ess", 2, 0, 0

/* Global structure for accessing ESS functions */
ORTE_DECLSPEC extern orte_ess_base_module_t orte_ess;  /* holds selected module's function pointers */

END_C_DECLS

#endif
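The module and component structures above follow a common plugin pattern: each environment fills in a table of function pointers, one implementation is selected, and the rest of the system calls through a single global module instance (here, `orte_ess`). A toy, self-contained sketch of that pattern — the `mini_ess_*` names are hypothetical and not part of the real ORTE API:

```cpp
#include <cassert>

// Miniature of the ESS pattern: a "module" is a struct of function
// pointers, and callers go through one selected instance.
struct mini_ess_module {
    int  (*init)(char flags);
    int  (*finalize)(void);
    bool (*proc_is_local)(int proc_id);
};

// One environment's implementation (a trivial single-node environment
// where only proc 0 is local).
static int  env_init(char /*flags*/)        { return 0; }  // 0 == success
static int  env_finalize(void)              { return 0; }
static bool env_proc_is_local(int proc_id)  { return proc_id == 0; }

// The "selected" module, analogous to the global `orte_ess`.
static mini_ess_module mini_ess = { env_init, env_finalize, env_proc_is_local };
```

Swapping environments then means pointing the global table at a different set of functions, without touching any call sites.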
#include <stdio.h>

char fast_char_var;
#define fast_unsigned_dtype unsigned long
fast_unsigned_dtype fast_unsigned_var;

/* Fast unsigned integer parser: skips leading non-digits, then
 * accumulates digits using val*10 == (val<<1)+(val<<3). */
fast_unsigned_dtype fast_unsigned()
{
	fast_char_var = getchar();
	while (fast_char_var < 48 || fast_char_var > 57)
		fast_char_var = getchar();
	fast_unsigned_var = 0;
	while (fast_char_var > 47 && fast_char_var < 58)
	{
		fast_unsigned_var = (fast_unsigned_var << 1) + (fast_unsigned_var << 3) + fast_char_var - 48;
		fast_char_var = getchar();
	}
	return fast_unsigned_var;
}

int main()
{
	static unsigned long n, m, i, j, a[128], count = 0;
	char pic[102][102];

	n = fast_unsigned();
	m = fast_unsigned();
	for (i = 0; i < n; i++)
	{
		for (j = 0; j < m; j++)
			pic[i][j] = getchar();
		getchar(); /* consume the newline after each row */
	}

	/* Count 2x2 blocks whose four cells contain all of 'f', 'a', 'c'
	 * and 'e', i.e. blocks whose letters can spell "face". */
	for (i = 0; i < (n - 1); i++)
		for (j = 0; j < (m - 1); j++)
		{
			a[pic[i][j]] = 1;
			a[pic[i][j + 1]] = 1;
			a[pic[i + 1][j]] = 1;
			a[pic[i + 1][j + 1]] = 1;
			if (a['a'] && a['f'] && a['c'] && a['e'])
				count++;
			a[pic[i][j]] = 0;
			a[pic[i][j + 1]] = 0;
			a[pic[i + 1][j]] = 0;
			a[pic[i + 1][j + 1]] = 0;
		}

	printf("%lu\n", count);
	return 0;
}
#
# Copyright (c) Facebook, Inc. and its affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
#
from rlstructures import logging
from rlstructures import DictTensor, TemporalDictTensor
import torch


class Agent:
    """
    Describes an agent responsible for producing actions when receiving
    observations and agent states.

    At each time step, an agent receives observations (DictTensor of size B)
    and agent states (DictTensor of size B) that reflect the agent's
    internal state. It then returns a triplet:

    * agent state when receiving the observation (DictTensor): the agent
      state before computing anything. It is mainly used to initialize the
      state of the agent when facing initial states from the environment.
    * action (DictTensor): the action, plus any additional outputs produced
      by the agent.
    * next agent state (DictTensor): the new state of the agent after all
      the computation. This value will then be provided to the agent at the
      next timestep.
    """

    def __init__(self):
        pass

    def require_history(self):
        """
        If True, then the 'history' argument in the __call__ method will
        contain the set of previous transitions (e.g. for transformer-based
        policies).
        """
        return False

    def __call__(self, state: DictTensor, input: DictTensor, user_info: DictTensor, history: TemporalDictTensor = None):
        """
        Execute one step of the agent.

        :param state: the previous state of the agent, or None if the agent
            needs to be initialized
        :type state: DictTensor
        :param input: the observation coming from the environment
        :type input: DictTensor
        :param user_info: an additional DictTensor provided by the user
            (such as the epsilon value in epsilon-greedy policies)
        :type user_info: DictTensor
        :param history: None if require_history() == False, or the set of
            previous transitions (as a TemporalDictTensor) if True
        :type history: TemporalDictTensor, optional
        """
        raise NotImplementedError

    def update(self, info):
        """
        Update the agent. For instance, may update the pytorch model of
        this agent.
        """
        raise NotImplementedError

    def close(self):
        """
        Terminate the agent.
        """
        pass
/**
 * Copyright 2018-2021 BOTLabs GmbH.
 *
 * This source code is licensed under the BSD 4-Clause "Original" license
 * found in the LICENSE file in the root directory of this source tree.
 */

/**
 * @packageDocumentation
 * @module IQuote
 */

import type { ICType } from './CType'
import type { DidSignature } from './DidDetails'

export interface ICostBreakdown {
  tax: Record<string, unknown>
  net: number
  gross: number
}

export interface IQuote {
  attesterDid: string
  cTypeHash: ICType['hash']
  cost: ICostBreakdown
  currency: string
  timeframe: string
  termsAndConditions: string
}

export interface IQuoteAttesterSigned extends IQuote {
  attesterSignature: DidSignature
}

export interface IQuoteAgreement extends IQuoteAttesterSigned {
  rootHash: string
  claimerSignature: DidSignature
}

export type CompressedCostBreakdown = [
  ICostBreakdown['gross'],
  ICostBreakdown['net'],
  ICostBreakdown['tax']
]

export type CompressedQuote = [
  IQuote['attesterDid'],
  IQuote['cTypeHash'],
  CompressedCostBreakdown,
  IQuote['currency'],
  IQuote['termsAndConditions'],
  IQuote['timeframe']
]

export type CompressedQuoteAttesterSigned = [
  ...CompressedQuote,
  [
    IQuoteAttesterSigned['attesterSignature']['signature'],
    IQuoteAttesterSigned['attesterSignature']['keyId']
  ]
]

export type CompressedQuoteAgreed = [
  ...CompressedQuoteAttesterSigned,
  [
    IQuoteAgreement['claimerSignature']['signature'],
    IQuoteAgreement['claimerSignature']['keyId']
  ],
  IQuoteAgreement['rootHash']
]
// DefaultReqResConfig is configuration for logging one entry when request is received and one when response is written. func DefaultReqResConfig() Config { return Config{ RequestFields: DefaultRequestFields, ResponseFields: DefaultResponseFields, } }
Eitan Jankelewitz is a technology lawyer at the law firm Sheridans. He provides commercial legal advice to all kinds of technology businesses, including some operating in the bitcoin economy. In this article, Jankelewitz explains how UK regulation applies to bitcoin and other digital currencies. He also describes the approach to compliance generally taken by UK businesses.

The UK, especially London, is considered a global centre for financial services and new technologies. You might assume, therefore, that the UK would be a great adoptive home for bitcoin and other digital currencies. Digital currency is, after all, the ultimate example of a finance/technology hybrid.

Well, you would be right. The British public has shown keen interest in digital currencies – the London bitcoin meetup is possibly the biggest in the world and there are numerous other events and meetings being held in cities up and down the UK. Britain is also home to some of the world's most popular bitcoin products and services.

Despite this, the UK's government and regulators have been remarkably quiet on the subject of digital currencies, and have left the development and adoption of digital currencies largely unacknowledged.

There are three areas of regulation to consider when examining this subject: consumer protection, the prevention of money laundering, and taxation. Foreign regulations also have certain implications for those operating in the UK.

Consumer protection

In the UK, the Financial Conduct Authority (FCA) is the regulator with responsibility for ensuring that financial services are provided in a way that protects consumers and maintains the integrity of the market. The FCA regulates businesses that provide financial services or promote financial services (whether retail or wholesale).

In the last year, a number of bitcoin businesses have approached the FCA seeking clarification on the legalities of operating bitcoin exchanges.
However, the FCA has not offered any constructive guidance or comment on the regulation of digital currencies. In fact, the FCA has gone as far as stating that it does not regulate digital currencies and has no intention of doing so. The result is that bitcoin businesses in the UK are not obliged to register with or be authorised by the FCA.

The UK has a well-established tradition of self-regulation. Despite the regulator's approach, a number of bitcoin businesses have told me that they act in accordance with FCA rules, even though they are not required to do so. Without any formal guidance, businesses act on their own interpretation of what the rules ought to be. As a result, an unusual scenario has arisen: instead of regulators chasing after businesses and insisting on compliance, UK businesses are chasing after regulators and insisting on rules with which they can comply. There was even one instance where, allegedly, the FCA, on discovering that a bitcoin business had managed to add itself to an FCA register, politely invited that business to de-register itself.

Prevention of money laundering

The prevention of money laundering is taken very seriously in the UK, and indeed in many countries around the world. In the UK, the Money Laundering Regulations 2007 set out who must assist in the prevention of money laundering and provide steps on how this should be achieved. Customer due diligence is central to these regulations – businesses should know where money is coming from by identifying their customers.

The Money Laundering Regulations 2007 are enforced by a number of entities, principally the UK's tax authority, HMRC (HM Revenue & Customs), and the FCA, but also some others. For example, lawyers are obligated to conduct customer due diligence by the Law Society.

In the UK, however, there is no formal obligation to take any steps to prevent money laundering through dealings made in bitcoin. This is quite remarkable.
Compare this to the position in the US, where businesses must comply with anti-money laundering regulations at a federal level and then essentially repeat this compliance in almost every other state.

Once again, UK businesses take regulation into their own hands. UK bitcoin businesses seem, for the most part, to take some measure or another to try to identify their customers for the purposes of preventing money laundering. It is fair to say that some businesses go above and beyond what would be required if their business was dealing with pounds sterling rather than bitcoin.

The reason for this is simple: UK businesses don't think that this status quo can be maintained for much longer. If (or, indeed, when) UK bitcoin businesses are required to comply with anti-money laundering regulation, those businesses could be obligated to undertake customer due diligence on their entire existing customer base. This could be an overwhelming task for a company that has been in business for some years. Businesses may eventually even be required to report all of their previous dealings as part of a suspicious activity report. It therefore makes much more sense to identify customers from the outset in order to be prepared for these requirements.

Taxation

Four or five months ago, after receiving a number of requests from bitcoin stakeholders about the VAT (value added tax) treatment of bitcoin, HMRC began to issue guidance in the form of a letter. The guidance stated that bitcoin was to be treated as a single-purpose face-value voucher. This type of voucher is, as the name suggests, redeemable for just a single use. This means that at the time the voucher is bought, it is known whether or not VAT is chargeable on the goods or services for which the voucher can be redeemed. HMRC therefore charges VAT on the purchase of the voucher – they don't wait around for the redemption.

If you know a little about bitcoin, you will know you can buy more than just one thing with it.
It seems to me that someone at HMRC had simply misunderstood bitcoin, but the consequences were serious: anyone selling bitcoin or operating an exchange would have to charge VAT on the value of the bitcoin being sold. This meant that no UK exchange could be both compliant and competitive.

Along with a few others, I was lucky enough to be invited to HMRC to talk about this particular point. Following the meeting, HMRC agreed to withdraw this guidance and re-examine bitcoin to see how VAT should be applied to it. For once, UK businesses were happy to have no regulation. We were told that VAT would most likely be charged on bitcoin service charges, but not on bitcoin itself. An exchange would therefore have to charge VAT on its commission, but not on the bitcoins traded.

HMRC is continuing to consider how best to tax bitcoin, and meetings with stakeholders are ongoing. I also understand that HMRC is considering all other aspects of taxation, not just VAT. Hopefully we will see some development in this area soon and a definitive position on how bitcoin businesses should account for tax.

Foreign regulation

Just because there is so little regulation in the UK doesn't mean that UK businesses aren't affected by foreign laws. Regulations in the US have a habit of reaching beyond the borders of the 50 states. In the US, operating a money transmission business is regulated by the Financial Crimes Enforcement Network (FinCEN) at a federal level, and then again at state level. In order to be compliant throughout the US, money transmitters must comply with all sorts of customer due diligence obligations and maintain many expensive registrations in each state in which their services are available. Famously, on 18 March 2013, FinCEN extended the scope of this regulation to bitcoin exchanges and others buying and selling bitcoin or other digital currencies.
Unfortunately for UK businesses, this regulation has extraterritorial scope: it applies even to non-US businesses providing their services to US citizens. Given the burden of complying with US regulation, most UK businesses simply close their doors to US citizens until they are ready to expand into the US market and have sufficient funds to undertake the compliance process. This involves geo-blocking US IP addresses, as well as blocking any contact made through VPNs or Tor.

Conclusion

The lack of regulation in the UK has caused more problems than opportunities for bitcoin businesses. Unable to be sure of what regulation is on the horizon, and keen to avoid future liability, bitcoin businesses often find themselves taking more regulatory measures than regulated businesses. On top of this is the biggest problem facing bitcoin in the UK: access to UK banking services. In short, there isn't any. With the regulatory picture unclear, banks consider it too risky to offer bitcoin businesses a bank account.

In jurisdictions around the world, law makers and regulators are considering if and how to bring digital currencies under their regulatory frameworks. Meanwhile the entrepreneurs, who can't help but get started on their new businesses, are left second-guessing what form this new regulation will take and what effect it will have on their own particular business. Until the inevitable question of regulation is settled, one way or another, digital currency businesses will be unable to reach their true potential.
// Author(s): <NAME> and <NAME>
// Copyright: see the accompanying file COPYING or copy at
// https://svn.win.tue.nl/trac/MCRL2/browser/trunk/COPYING
//
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
//

#include "springlayout.h"
#include "ui_springlayout.h"
#include <QThread>
#include <cstdlib>
#include <ctime>
#include <algorithm>

namespace Graph
{

//
// Utility functions
//

inline float frand(float min, float max)
{
  return ((float)qrand() / RAND_MAX) * (max - min) + min;
}

inline float cube(float x)
{
  return x * x * x;
}

inline void clip(float& f, float min, float max)
{
  if (f < min) f = min;
  else if (f > max) f = max;
}

//
// SpringLayout
//

SpringLayout::SpringLayout(Graph& graph)
  : m_speed(0.001f), m_attraction(0.13f), m_repulsion(50.0f),
    m_natLength(50.0f), m_controlPointWeight(0.001f),
    m_clipMin(Coord3D(0.0f, 0.0f, 0.0f)),
    m_clipMax(Coord3D(1000.0f, 1000.0f, 1000.0f)),
    m_graph(graph), m_ui(NULL),
    m_forceCalculation(&SpringLayout::forceLTSGraph)
{
  srand(time(NULL));
}

SpringLayout::~SpringLayout()
{
  delete m_ui;
}

SpringLayoutUi* SpringLayout::ui(QWidget* parent)
{
  if (m_ui == NULL)
    m_ui = new SpringLayoutUi(*this, parent);
  return m_ui;
}

void SpringLayout::setForceCalculation(ForceCalculation c)
{
  switch (c)
  {
    case ltsgraph:
      m_forceCalculation = &SpringLayout::forceLTSGraph;
      break;
    case linearsprings:
      m_forceCalculation = &SpringLayout::forceLinearSprings;
      break;
  }
}

SpringLayout::ForceCalculation SpringLayout::forceCalculation()
{
  if (m_forceCalculation == &SpringLayout::forceLTSGraph)
    return ltsgraph;
  return linearsprings;
}

Coord3D SpringLayout::forceLTSGraph(const Coord3D& a, const Coord3D& b, float ideal)
{
  Coord3D diff = (a - b);
  float dist = (std::max)(diff.size(), 1.0f);
  float factor = m_attraction * 10000 * log(dist / (ideal + 1.0f)) / dist;
  return diff * factor;
}

Coord3D SpringLayout::forceLinearSprings(const Coord3D& a, const Coord3D& b, float ideal)
{
  Coord3D diff = (a - b);
  float dist = diff.size() - ideal;
  float factor = (std::max)(dist, 0.0f) * m_attraction;
  // Let springs attract really strong near their equilibrium
  if (dist > 0.0f)
  {
    factor = (std::max)(factor, 100 * m_attraction / (std::max)(dist * dist / 10000.0f, 0.1f));
  }
  return diff * factor;
}

inline Coord3D repulsionForce(const Coord3D& a, const Coord3D& b, float repulsion, float natlength)
{
  Coord3D diff = a - b;
  float r = repulsion;
  r /= cube((std::max)(diff.size() / 2.0f, natlength / 10));
  diff = diff * r + Coord3D(frand(-0.01f, 0.01f), frand(-0.01f, 0.01f), frand(-0.01f, 0.01f));
  return diff;
}

void SpringLayout::apply()
{
  m_nforces.resize(m_graph.nodeCount());
  m_hforces.resize(m_graph.edgeCount());
  m_lforces.resize(m_graph.edgeCount());
  m_sforces.resize(m_graph.nodeCount());

  for (size_t n = 0; n < m_graph.nodeCount(); ++n)
  {
    m_nforces[n] = Coord3D(0, 0, 0);
    for (size_t m = 0; m < n; ++m)
    {
      Coord3D diff = repulsionForce(m_graph.node(n).pos(), m_graph.node(m).pos(),
                                    m_repulsion, m_natLength);
      m_nforces[n] += diff;
      m_nforces[m] -= diff;
    }
    m_sforces[n] = (this->*m_forceCalculation)(m_graph.node(n).pos(),
                                               m_graph.stateLabel(n).pos(), 0.0);
  }

  for (size_t n = 0; n < m_graph.edgeCount(); ++n)
  {
    Edge e = m_graph.edge(n);
    Coord3D f; // Variable for repulsion calculations
    m_hforces[n] = Coord3D(0, 0, 0);
    m_lforces[n] = Coord3D(0, 0, 0);

    if (e.from() == e.to())
    {
      m_hforces[n] += repulsionForce(m_graph.handle(n).pos(),
                                     m_graph.node(e.from()).pos(),
                                     m_repulsion, m_natLength);
    }

    f = (this->*m_forceCalculation)(m_graph.node(e.to()).pos(),
                                    m_graph.node(e.from()).pos(), m_natLength);
    m_nforces[e.from()] += f;
    m_nforces[e.to()] -= f;

    f = (this->*m_forceCalculation)((m_graph.node(e.to()).pos() + m_graph.node(e.from()).pos()) / 2.0,
                                    m_graph.handle(n).pos(), 0.0);
    m_hforces[n] += f;

    f = (this->*m_forceCalculation)(m_graph.handle(n).pos(),
                                    m_graph.transitionLabel(n).pos(), 0.0);
    m_lforces[n] += f;

    for (size_t m = 0; m < n; ++m)
    {
      // Handles
      f = repulsionForce(m_graph.handle(n).pos(), m_graph.handle(m).pos(),
                         m_repulsion * m_controlPointWeight, m_natLength);
      m_hforces[n] += f;
      m_hforces[m] -= f;

      // Labels
      f = repulsionForce(m_graph.transitionLabel(n).pos(), m_graph.transitionLabel(m).pos(),
                         m_repulsion * m_controlPointWeight, m_natLength);
      m_lforces[n] += f;
      m_lforces[m] -= f;
    }
  }

  for (size_t n = 0; n < m_graph.nodeCount(); ++n)
  {
    if (!m_graph.node(n).anchored())
    {
      m_graph.node(n).pos() = m_graph.node(n).pos() + m_nforces[n] * m_speed;
      m_graph.node(n).pos().clip(m_clipMin, m_clipMax);
    }
    if (!m_graph.stateLabel(n).anchored())
    {
      m_graph.stateLabel(n).pos() = m_graph.stateLabel(n).pos() + m_sforces[n] * m_speed;
      m_graph.stateLabel(n).pos().clip(m_clipMin, m_clipMax);
    }
  }

  for (size_t n = 0; n < m_graph.edgeCount(); ++n)
  {
    if (!m_graph.handle(n).anchored())
    {
      m_graph.handle(n).pos() = m_graph.handle(n).pos() + m_hforces[n] * m_speed;
      m_graph.handle(n).pos().clip(m_clipMin, m_clipMax);
    }
    if (!m_graph.transitionLabel(n).anchored())
    {
      m_graph.transitionLabel(n).pos() = m_graph.transitionLabel(n).pos() + m_lforces[n] * m_speed;
      m_graph.transitionLabel(n).pos().clip(m_clipMin, m_clipMax);
    }
  }
}

void SpringLayout::setClipRegion(const Coord3D& min, const Coord3D& max)
{
  if (min.z < m_clipMin.z || max.z > m_clipMax.z)
  {
    // Depth is increased; add random z values to improve spring movement
    // in the z direction. Add at most 1/100th of the change.
    float change = (std::min)(m_clipMin.z - min.z, max.z - m_clipMax.z) / 100.0f;
    for (size_t n = 0; n < m_graph.nodeCount(); ++n)
    {
      if (!m_graph.node(n).anchored())
      {
        m_graph.node(n).pos().z = m_graph.node(n).pos().z + frand(-change, change);
      }
    }
  }
  m_clipMin = min;
  m_clipMax = max;
}

//
// SpringLayoutUi
//

class WorkerThread : public QThread
{
  private:
    bool m_stopped;
    QTime m_time;
    SpringLayout& m_layout;
    int m_period;

  public:
    WorkerThread(SpringLayout& layout, int period = 50, QObject* parent = 0)
      : QThread(parent), m_stopped(false), m_layout(layout), m_period(period)
    {}

    void stop()
    {
      m_stopped = true;
    }

    void setPeriod(int period)
    {
      m_period = period;
    }

    int period() const
    {
      return m_period;
    }

    virtual void run()
    {
      m_time.start();
      int elapsed;
      while (!m_stopped)
      {
        m_layout.apply();
        elapsed = m_time.elapsed();
        m_time.restart();
        if (m_period > elapsed)
          msleep(m_period - elapsed);
      }
    }
};

SpringLayoutUi::SpringLayoutUi(SpringLayout& layout, QWidget* parent)
  : QDockWidget(parent), m_layout(layout), m_thread(NULL)
{
  m_ui.setupUi(this);
  m_ui.sldAttraction->setValue(m_layout.attraction());
  m_ui.sldRepulsion->setValue(m_layout.repulsion());
  m_ui.sldSpeed->setValue(m_layout.speed());
  m_ui.sldHandleWeight->setValue(m_layout.controlPointWeight());
  m_ui.sldNatLength->setValue(m_layout.naturalTransitionLength());
  m_ui.cmbForceCalculation->setCurrentIndex(m_layout.forceCalculation());
}

SpringLayoutUi::~SpringLayoutUi()
{
  if (m_thread != NULL)
  {
    static_cast<WorkerThread*>(m_thread)->stop();
    m_thread->wait();
  }
}

QByteArray SpringLayoutUi::settings()
{
  QByteArray result;
  QDataStream out(&result, QIODevice::WriteOnly);
  out << quint32(m_ui.sldAttraction->value())
      << quint32(m_ui.sldRepulsion->value())
      << quint32(m_ui.sldSpeed->value())
      << quint32(m_ui.sldHandleWeight->value())
      << quint32(m_ui.sldNatLength->value())
      << quint32(m_ui.cmbForceCalculation->currentIndex());
  return result;
}

void SpringLayoutUi::setSettings(QByteArray state)
{
  if (state.isEmpty())
    return;
  QDataStream in(&state, QIODevice::ReadOnly);
  quint32 attraction, repulsion, speed, handleWeight, NatLength, ForceCalculation;
  in >> attraction >> repulsion >> speed >> handleWeight >> NatLength >> ForceCalculation;
  if (in.status() == QDataStream::Ok)
  {
    m_ui.sldAttraction->setValue(attraction);
    m_ui.sldRepulsion->setValue(repulsion);
    m_ui.sldSpeed->setValue(speed);
    m_ui.sldHandleWeight->setValue(handleWeight);
    m_ui.sldNatLength->setValue(NatLength);
    m_ui.cmbForceCalculation->setCurrentIndex(ForceCalculation);
  }
}

void SpringLayoutUi::onAttractionChanged(int value)
{
  m_layout.setAttraction(value);
}

void SpringLayoutUi::onRepulsionChanged(int value)
{
  m_layout.setRepulsion(value);
}

void SpringLayoutUi::onSpeedChanged(int value)
{
  if (m_thread != NULL)
    static_cast<WorkerThread*>(m_thread)->setPeriod(100 - value);
}

void SpringLayoutUi::onHandleWeightChanged(int value)
{
  m_layout.setControlPointWeight(value);
}

void SpringLayoutUi::onNatLengthChanged(int value)
{
  m_layout.setNaturalTransitionLength(value);
}

void SpringLayoutUi::onForceCalculationChanged(int value)
{
  switch (value)
  {
    case 0:
      m_layout.setForceCalculation(SpringLayout::ltsgraph);
      break;
    case 1:
      m_layout.setForceCalculation(SpringLayout::linearsprings);
      break;
  }
}

void SpringLayoutUi::onStarted()
{
  m_ui.btnStartStop->setText("Stop");
  m_ui.btnStartStop->setEnabled(true);
}

void SpringLayoutUi::onStopped()
{
  m_ui.btnStartStop->setText("Start");
  m_ui.btnStartStop->setEnabled(true);
  emit runningChanged(false);
}

void SpringLayoutUi::onStartStop()
{
  m_ui.btnStartStop->setEnabled(false);
  if (m_thread == NULL)
  {
    emit runningChanged(true);
    m_thread = new WorkerThread(m_layout, 100 - m_ui.sldSpeed->value(), this);
    m_thread->connect(m_thread, SIGNAL(started()), this, SLOT(onStarted()));
    m_thread->connect(m_thread, SIGNAL(finished()), this, SLOT(onStopped()));
    m_thread->start();
  }
  else
  {
    static_cast<WorkerThread*>(m_thread)->stop();
    m_thread->wait();
    m_thread = NULL;
  }
}

void SpringLayoutUi::setActive(bool active)
{
  if (active && m_thread == NULL)
    onStartStop();
  else if (!active && m_thread != NULL)
    onStartStop();
}

} // namespace Graph
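The scalar factor computed in `SpringLayout::forceLTSGraph` above can be isolated in a few lines. The following is a Python re-expression for illustration only (not part of the ltsgraph sources), using plain numbers in place of `Coord3D` and the source's default attraction constant:

```python
import math

def lts_attraction_factor(dist, ideal, attraction=0.13):
    # Mirrors forceLTSGraph: clamp the distance to at least 1, then
    # scale by log(dist / (ideal + 1)) / dist, as in the C++ above.
    d = max(dist, 1.0)
    return attraction * 10000 * math.log(d / (ideal + 1.0)) / d

# The factor is exactly zero when dist == ideal + 1, positive (pulling the
# endpoints together) when they are further apart, and negative (pushing
# them apart) when they are closer.
assert lts_attraction_factor(51.0, 50.0) == 0.0
assert lts_attraction_factor(80.0, 50.0) > 0.0
assert lts_attraction_factor(20.0, 50.0) < 0.0
```

The logarithmic profile is the design choice worth noting: unlike the linear-spring variant, the pull grows only slowly with distance, which keeps far-apart components from collapsing onto each other in a single step.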
export const LANGUAGE_DB_KEY = 'language-1.8'
Factors that Determine the Adoption of Scientific Indicators in Community-Based Monitoring of Mangrove Ecosystems in Tanzania

Abstract: This article reveals factors that need to be considered by facilitating institutions and organisations prior to the adoption of scientific indicators in community-based monitoring of mangrove ecosystems, as a necessary route towards achieving effective participation and meaningful experiential learning. It employs an Experiential Learning Intervention Workshop (ELIW) as a key methodological tool and a useful space for analysing the conditions necessary for the adoption of scientific frameworks in the Tanzanian coastal area. The ELIW also offers an opportunity for local people to share knowledge and decide what kind of input is required for monitoring mangroves and fisheries.
/*
 * Copyright 2020 Google LLC
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// This build script is used to generate the Rust source files that
// we need for XDS gRPC communication.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let proto_files = vec![
        "proto/data-plane-api/envoy/config/accesslog/v3/accesslog.proto",
        "proto/data-plane-api/envoy/config/cluster/v3/cluster.proto",
        "proto/data-plane-api/envoy/config/listener/v3/listener.proto",
        "proto/data-plane-api/envoy/config/route/v3/route.proto",
        "proto/data-plane-api/envoy/service/cluster/v3/cds.proto",
        "proto/data-plane-api/envoy/service/discovery/v3/ads.proto",
        "proto/data-plane-api/envoy/service/discovery/v3/discovery.proto",
        "proto/data-plane-api/envoy/type/metadata/v3/metadata.proto",
        "proto/data-plane-api/envoy/type/tracing/v3/custom_tag.proto",
        "proto/udpa/xds/core/v3/resource_name.proto",
        "proto/quilkin/extensions/filters/debug/v1alpha1/debug.proto",
        "proto/quilkin/extensions/filters/capture_bytes/v1alpha1/capture_bytes.proto",
        "proto/quilkin/extensions/filters/compress/v1alpha1/compress.proto",
        "proto/quilkin/extensions/filters/concatenate_bytes/v1alpha1/concatenate_bytes.proto",
        "proto/quilkin/extensions/filters/load_balancer/v1alpha1/load_balancer.proto",
        "proto/quilkin/extensions/filters/local_rate_limit/v1alpha1/local_rate_limit.proto",
        "proto/quilkin/extensions/filters/token_router/v1alpha1/token_router.proto",
        "proto/quilkin/extensions/filters/firewall/v1alpha1/firewall.proto",
    ]
    .iter()
    .map(|name| std::env::current_dir().unwrap().join(name))
    .collect::<Vec<_>>();

    let include_dirs = vec![
        "proto/data-plane-api",
        "proto/udpa",
        "proto/googleapis",
        "proto/protoc-gen-validate",
        "proto/quilkin",
    ]
    .iter()
    .map(|i| std::env::current_dir().unwrap().join(i))
    .collect::<Vec<_>>();

    let config = {
        let mut c = prost_build::Config::new();
        c.disable_comments(Some("."));
        c
    };

    tonic_build::configure()
        .build_server(true)
        .compile_with_config(
            config,
            &proto_files
                .iter()
                .map(|path| path.to_str().unwrap())
                .collect::<Vec<_>>(),
            &include_dirs
                .iter()
                .map(|p| p.to_str().unwrap())
                .collect::<Vec<_>>(),
        )?;

    // This tells cargo to re-run this build script only when the proto files
    // we're interested in change or any of the proto directories were updated.
    for path in vec![proto_files, include_dirs].concat() {
        println!("cargo:rerun-if-changed={}", path.to_str().unwrap());
    }

    Ok(())
}
/**
 * Row information to be displayed in the Editor.<br/>
 * Row data to be displayed in the search results editing screen.<br/>
 */
public class JbmEditorMigrationRow implements MigrationEditorRow {

    /** Font of the first layer */
    private static final Font CHARACTOR = PluginUtil.getJbmEditorFont();

    /** Hierarchy level: parent */
    public static final int LEVEL_FIRST = 0;

    /** Hierarchy level: child */
    public static final int LEVEL_SECOND = 1;

    /** Hierarchy level: grandchild */
    public static final int LEVEL_THIRD = 2;

    /** No. */
    private String no;

    /** Count of file names or line numbers held by the first-level row */
    private String countNo;

    /** File name */
    private String fileName;

    /** Line number */
    private String rowNo;

    /** Degree of difficulty */
    private String difficulty;

    /** Guide chapter number */
    private String chapterNo;

    /** Visual confirmation item */
    private String visualConfirmationItem;

    /** Visual confirmation status */
    private String checkEyeStatus;

    /** Hearing status */
    private String hearingStatus;

    /** Level (represents the hierarchy) */
    private int level;

    /** Large item description */
    private String bigMessage;

    /** Middle item description */
    private String middleMessage;

    /** Visual confirmation description */
    private String checkEyeMessage;

    /** Hearing confirmation description */
    private String hearingMessage;

    /**
     * Whether line number results are held (true: the third level holds line
     * numbers; false: display only up to the second level (file name search))
     */
    private boolean hasLine;

    /** Conversion flag */
    private String convert;

    /** Number of lines */
    private String lineNumber;

    /** Line number basis */
    private String lineNumberContents;

    /** Porting factor */
    private String lineFactor;

    /** Difficulty details */
    private String degreeDetail;

    /** Parent element */
    private JbmEditorMigrationRow parent;

    /** Child JbmEditorMigrationRow list */
    private List<JbmEditorMigrationRow> childList = new ArrayList<JbmEditorMigrationRow>();

    /** One row of data in CSV format (held only by the first level) */
    private String writeData;

    private boolean expand = true;

    public boolean isExpand() {
        return expand;
    }

    public void setExpand(boolean expand) {
        this.expand = expand;
    }

    /**
     * Get No.
     *
     * @return No
     */
    public String getNo() {
        return no;
    }

    /**
     * Set No.
     *
     * @param no No
     */
    public void setNo(String no) {
        this.no = no;
    }

    /**
     * Get the count of file names or line numbers held by the first-level row.
     *
     * @return count of file names or line numbers
     */
    public String getCountNo() {
        return countNo;
    }

    /**
     * Set the count of file names or line numbers held by the first-level row.
     *
     * @param countNo count of file names or line numbers
     */
    public void setCountNo(String countNo) {
        this.countNo = countNo;
    }

    /**
     * Get the file name.
     *
     * @return file name
     */
    public String getFileName() {
        return fileName;
    }

    /**
     * Set the file name.
     *
     * @param fileName file name
     */
    public void setFileName(String fileName) {
        // if (fileName.matches(File.separator)) {
        //     String[] token = fileName.split(File.separator);
        //     this.fileName = token[token.length - 1];
        // }
        this.fileName = fileName;
    }

    /**
     * Get the line number.
     *
     * @return line number
     */
    public String getRowNo() {
        return rowNo;
    }

    /**
     * Set the line number.
     *
     * @param rowNo line number
     */
    public void setRowNo(String rowNo) {
        this.rowNo = rowNo;
    }

    /**
     * Get the degree of difficulty.
     *
     * @return degree of difficulty
     */
    public String getDifficulty() {
        return difficulty;
    }

    /**
     * Set the degree of difficulty.
     *
     * @param difficulty degree of difficulty
     */
    public void setDifficulty(String difficulty) {
        this.difficulty = difficulty;
    }

    /**
     * Get the guide chapter number.
     *
     * @return guide chapter number
     */
    public String getChapterNo() {
        return chapterNo;
    }
    /**
     * Set the guide chapter number.
     *
     * @param chapterNo guide chapter number
     */
    public void setChapterNo(String chapterNo) {
        this.chapterNo = chapterNo;
    }

    /**
     * Get the visual confirmation item.
     *
     * @return visual confirmation item
     */
    public String getVisualConfirmationItem() {
        return visualConfirmationItem;
    }

    /**
     * Set the visual confirmation item.
     *
     * @param visualConfirmationItem visual confirmation item
     */
    public void setVisualConfirmationItem(String visualConfirmationItem) {
        this.visualConfirmationItem = visualConfirmationItem;
    }

    /**
     * Get the visual confirmation status.
     *
     * @return visual confirmation status
     */
    public String getCheckEyeStatus() {
        return checkEyeStatus;
    }

    /**
     * Set the visual confirmation status.
     *
     * @param checkEyeStatus visual confirmation status
     */
    public void setCheckEyeStatus(String checkEyeStatus) {
        this.checkEyeStatus = checkEyeStatus;
    }

    /**
     * Get the hearing status.
     *
     * @return hearing status
     */
    public String getHearingStatus() {
        return hearingStatus;
    }

    /**
     * Set the hearing status.
     *
     * @param hearingStatus hearing status
     */
    public void setHearingStatus(String hearingStatus) {
        this.hearingStatus = hearingStatus;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public int getLevel() {
        return level;
    }

    /**
     * Set the level.
     *
     * @param level level
     */
    public void setLevel(int level) {
        this.level = level;
    }

    /**
     * Get the child JbmEditorMigrationRow list.
     *
     * @return child JbmEditorMigrationRow list
     */
    public List<JbmEditorMigrationRow> getChildList() {
        return childList;
    }

    /**
     * Set the child JbmEditorMigrationRow list.
     *
     * @param childList child JbmEditorMigrationRow list
     */
    public void setChildList(List<JbmEditorMigrationRow> childList) {
        this.childList = childList;
    }

    /**
     * Add a child JbmEditorMigrationRow.
     *
     * @param child child JbmEditorMigrationRow
     */
    public void addChild(JbmEditorMigrationRow child) {
        childList.add(child);
    }

    /**
     * Determine whether this row is a visual confirmation target.<br/>
     * The row needs visual confirmation if its visual confirmation
     * description is non-empty.<br/>
     *
     * @return true: needs visual confirmation, false: no visual confirmation
     */
    public boolean isCheckEye() {
        return !StringUtil.isEmpty(CheckListInformationFactory
                .getCheckListInformationFacade()
                .getCheckEyeDescription(getNo()));
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public JbmEditorMigrationRow getParent() {
        return parent;
    }

    /**
     * Set the parent element.
     *
     * @param parent parent element
     */
    public void setParent(JbmEditorMigrationRow parent) {
        this.parent = parent;
    }

    /**
     * Get the conversion flag.
     *
     * @return conversion flag
     */
    public String getConvert() {
        return convert;
    }

    /**
     * Set the conversion flag.
     *
     * @param convert conversion flag
     */
    public void setConvert(String convert) {
        this.convert = convert;
    }

    /**
     * Determine whether this row is a hearing target.<br/>
     * The row needs a hearing check if its hearing description is
     * non-empty.<br/>
     *
     * @return true: hearing item, false: not a hearing item
     */
    public boolean isHearing() {
        return !StringUtil.isEmpty(CheckListInformationFactory
                .getCheckListInformationFacade()
                .getHearingDescription(getNo()));
    }

    /**
     * Determine whether the number of lines for this row is unknown.<br/>
     * The line count is unknown if the description reads "unknown".<br/>
     *
     * @return true: unknown, false: not unknown
     */
    public boolean isLineUnKnown() {
        return CheckListInformationFactory.getCheckListInformationFacade()
                .getLineNumberDescription(getNo())
                .equals(ResourceUtil.WORKVIEW_MESSAGE_UNKNOWN);
    }

    /**
     * Determine whether the number of lines for this row is an "SE manual
     * calculation" item.<br/>
     *
     * @return true: SE manual calculation item, false: not an SE manual
     *         calculation item
     */
    public boolean isLineToDoSe() {
        return CheckListInformationFactory.getCheckListInformationFacade()
                .getLineNumberDescription(getNo())
                .equals(ResourceUtil.WORKVIEW_MESSAGE_TODO_SE);
    }
    /**
     * Return the large item description.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return large item description
     */
    public String getBigMessage() {
        if (bigMessage == null) {
            bigMessage = CheckListInformationFactory
                    .getCheckListInformationFacade().getBigDescription(getNo());
        }
        return bigMessage;
    }

    /**
     * Return the middle item description.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return middle item description
     */
    public String getMiddleMessage() {
        if (middleMessage == null) {
            middleMessage = CheckListInformationFactory
                    .getCheckListInformationFacade().getMiddleDescription(getNo());
        }
        return middleMessage;
    }

    /**
     * Return the visual confirmation description.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return visual confirmation description
     */
    public String getCheckEyeMessage() {
        if (checkEyeMessage == null) {
            checkEyeMessage = CheckListInformationFactory
                    .getCheckListInformationFacade().getCheckEyeDescription(getNo());
        }
        return checkEyeMessage;
    }

    /**
     * Return the hearing confirmation description.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return hearing confirmation description
     */
    public String getHiaringMessage() {
        if (hearingMessage == null) {
            hearingMessage = CheckListInformationFactory
                    .getCheckListInformationFacade().getHearingDescription(getNo());
        }
        return hearingMessage;
    }

    /**
     * Return the number of lines.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return number of lines
     */
    public String getLineNumber() {
        if (lineNumber == null) {
            lineNumber = CheckListInformationFactory
                    .getCheckListInformationFacade().getLineNumberDescription(getNo());
        }
        return lineNumber;
    }

    /**
     * Set the number of lines.
     *
     * @param lineNumber number of lines
     */
    public void setLineNumber(String lineNumber) {
        this.lineNumber = lineNumber;
    }

    /**
     * Return the line number basis.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return line number basis
     */
    public String getLineNumberContents() {
        if (lineNumberContents == null) {
            lineNumberContents = CheckListInformationFactory
                    .getCheckListInformationFacade()
                    .getLineNumberContentsDescription(getNo());
        }
        return lineNumberContents;
    }

    /**
     * Set the line number basis.
     *
     * @param lineNumberContents line number basis
     */
    public void setLineNumberContents(String lineNumberContents) {
        this.lineNumberContents = lineNumberContents;
    }

    /**
     * Return the total number of lines.<br/>
     *
     * @return total number of lines
     */
    public String getTotalLine() {
        String totalLine;
        long sum;
        try {
            Integer lineNum = Integer.valueOf(getLineNumber());
            Integer hitNum = Integer.valueOf(getHitNum());
            if (getLevel() == JbmEditorMigrationRow.LEVEL_FIRST) {
                sum = lineNum.longValue() * hitNum.longValue();
            } else {
                sum = lineNum.longValue();
            }
            totalLine = String.valueOf(sum);
        } catch (NumberFormatException e) {
            totalLine = "0";
        }
        return totalLine;
    }

    /**
     * Return the porting factor.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return porting factor
     */
    public String getLineFactor() {
        if (lineFactor == null) {
            lineFactor = CheckListInformationFactory
                    .getCheckListInformationFacade().getFactorDescription(getNo());
        }
        return lineFactor;
    }

    /**
     * Set the porting factor.
     *
     * @param factor porting factor
     */
    public void setLineFactor(String factor) {
        lineFactor = factor;
    }

    /**
     * Return the difficulty details.<br/>
     * If the information has already been obtained from the check list file,
     * the value already held is returned.<br/>
     *
     * @return difficulty details
     */
    public String getDegreeDetail() {
        if (degreeDetail == null) {
            degreeDetail = CheckListInformationFactory
                    .getCheckListInformationFacade().getDegreeDescription(getNo());
        }
        return degreeDetail;
    }

    /**
     * Set the difficulty details.
     *
     * @param detail difficulty details
     */
    public void setDegreeDetail(String detail) {
        degreeDetail = detail;
    }

    /**
     * Get whether line number results are held.<br/>
     * First level only.
     *
     * @return true: third level holds line numbers, false: display only up to
     *         the second level (file name search)
     */
    public boolean isHasLine() {
        return hasLine;
    }

    /**
     * Set whether line number results are held.<br/>
     *
     * @param isHasLine true: third level holds line numbers, false: display
     *            only up to the second level (file name search)
     */
    public void setHasLine(boolean isHasLine) {
        hasLine = isHasLine;
    }

    /**
     * Return the number of hits.<br/>
     *
     * @return number of hits
     */
    public int getHitNum() {
        int hitNum = 0;
        if (getRowNo().equals("0")) {
            // With zero search hits, return 0.
            return hitNum;
        }
        if (isHasLine()) {
            for (JbmEditorMigrationRow child : getChildList()) {
                hitNum += child.getChildList().size();
            }
        } else {
            hitNum += getChildList().size();
        }
        return hitNum;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public String getColumnText(int columnIndex) {
        String text = StringUtil.EMPTY;
        switch (JbmEditorEnum.get(columnIndex)) {
        case INDEX_NO: // No
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                String orgNo = getNo();
                Integer no = PythonUtil.PY_SEARCH_PROGRESS_STATUS_MAP.get(getNo());
                text = Integer.toString(no) + "(" + orgNo + ")";
            }
            break;
        case HIT_NUM: // Hits
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = String.valueOf(getHitNum());
            }
            break;
        case BIG_ITEM: // Large item
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getBigMessage();
            }
            break;
        case MIDDLE_ITEM: // Middle item
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getMiddleMessage();
            }
            break;
        case TARGET_FILE_PATH: // File name
            if (JbmEditorMigrationRow.LEVEL_SECOND == getLevel()) {
                text = getFileName();
                // String pattern = Pattern.quote(System.getProperty("file.separator"));
                // String[] splittedFileName = text.split(pattern);
                // if (splittedFileName != null) {
                //     text = splittedFileName[splittedFileName.length - 1];
                // }
            }
            break;
        case ROW_NO: // Line number
            if (JbmEditorMigrationRow.LEVEL_THIRD == getLevel()) {
                text = getRowNo();
            }
            break;
        case DIFFICULTY: // Degree of difficulty
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getDifficulty();
            }
            break;
        case CHAPTER_NO: // Guide chapter number
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getChapterNo();
            }
            break;
        case VISUAL_CONFIRM_ITEM: // Visual confirmation item
            text = getCheckEyeMessage();
            break;
        case HIARING_ITEM: // Hearing confirmation content
            text = getHiaringMessage();
            break;
        case VISUAL_CONFIRM_STATSU_ITEM: // Visual confirmation status
            text = getHiaringStatusItem(getLevel(), false);
            break;
        case HIARING_STATUS: // Hearing confirmation status
            text = getHiaringStatusItem(getLevel(), true);
            break;
        case LINE_NUM: // Number of lines
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getLineNumber();
            }
            break;
        case LINE_NUM_BASIS: // Line number basis
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getLineNumberContents();
            }
            break;
        case TOTAL_LINE: // Total lines
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getTotalLine();
            }
            break;
        case LINE_FACTOR: // Porting factor
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getLineFactor();
            }
            break;
        case DEGREE_DETAIL: // Difficulty details
            if (JbmEditorMigrationRow.LEVEL_FIRST == getLevel()) {
                text = getDegreeDetail();
            }
            break;
        }
        return text;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public Color getForeground(int index) {
        // Third level only gets the color
        String temp = getColumnText(index);
        if (ConfirmItemEnum.STATUS_NON.getStatusText().equals(temp)
                || ConfirmItemChangeAction.NG.equals(temp)) {
            return ColorUtil.getRed();
        }
        return null;
    }

    /**
     * {@inheritDoc}
     */
    @Override
    public Color getBackground(int index) {
        // Third level only gets the color
        if (JbmEditorMigrationRow.LEVEL_THIRD == getLevel()) {
            // Visual confirmation
            String temp = getColumnText(index);
            if (ConfirmItemEnum.STATUS_NON.getStatusText().equals(temp)) {
                return ColorUtil.getConfirmItemStatusNonColor();
            }
            if (ConfirmItemEnum.STATUS_OK.getStatusText().equals(temp)) {
                return ColorUtil.getConfirmItemStatusOkColor();
            }
            if (ConfirmItemEnum.STATUS_NG.getStatusText().equals(temp)) {
                return ColorUtil.getConfirmItemStatusNgColor();
            }
        }
        return null;
    }

    /**
     * Get the check item string to display on screen.<br/>
     *
     * @param level hierarchy level
     * @param hiaring whether this is a hearing item
     * @return display string. First level: O or X. Second level: O or X.
     *         Third level: confirmed (porting unnecessary), unconfirmed, or
     *         confirmed (porting required)
     */
    private String getHiaringStatusItem(int level, boolean hiaring) {
        boolean type;
        if (hiaring) {
            type = isHearing();
        } else {
            type = isCheckEye();
        }
        if (type) {
            if (JbmEditorMigrationRow.LEVEL_FIRST == level) {
                // First level (show O or X)
                for (JbmEditorMigrationRow secount : getChildList()) {
                    if (ConfirmItemChangeAction.NG.equals(getTopStatus(
                            secount.getChildList(), hiaring))) {
                        return ConfirmItemChangeAction.NG;
                    }
                }
                return ConfirmItemChangeAction.OK;
            }
            if (JbmEditorMigrationRow.LEVEL_SECOND == level) {
                // Second level (show O or X)
                return getTopStatus(getChildList(), hiaring);
            }
            // Third level
            ConfirmItemEnum status;
            if (hiaring) {
                status = ConfirmItemEnum.getForString(getHearingStatus());
            } else {
                status = ConfirmItemEnum.getForString(getCheckEyeStatus());
            }
            return status.getStatusText();
        }
        return StringUtil.EMPTY;
    }

    /**
     * Get the display string for the first/second level of the visual
     * confirmation item.<br/>
     * If at least one unconfirmed item exists among the statuses of the
     * child hierarchy below this row,<br/>
     * ConfirmItemChangeAction.NG is returned.<br/>
* * * @param rowList * Inspection list * @param hiaring * Whether hearing item * @return Display character */ private String getTopStatus(List<JbmEditorMigrationRow> rowList, boolean hiaring) { ConfirmItemEnum confirmItemEnum; for (JbmEditorMigrationRow row : rowList) { if (hiaring) { confirmItemEnum = ConfirmItemEnum.getForString(row .getHearingStatus()); } else { confirmItemEnum = ConfirmItemEnum.getForString(row .getCheckEyeStatus()); } if (ConfirmItemEnum.STATUS_NON.equals(confirmItemEnum)) { return ConfirmItemChangeAction.NG; } } return ConfirmItemChangeAction.OK; } /** * Create a confirmation status from visual / hearing confirmation item.<br/> * * @return Check status */ public String getConfirmStatus() { String result = String.valueOf(ConfirmItemStatusEnum.NON_NON .getStatus()); if (isCheckEye() || isHearing()) { ConfirmItemStatusEnum confirmItemStatusEnum = ConfirmItemStatusEnum .getStatus(getCheckEyeStatus(), getHearingStatus()); if (confirmItemStatusEnum != null) { result = String.valueOf(confirmItemStatusEnum.getStatus()); } } return result; } /** * {@inheritDoc} */ @Override public String toString() { StringBuilder sb = new StringBuilder(); sb.append(getNo()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getFileName()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getMoreParent().getCountNo()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getRowNo()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getDifficulty()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getConvert()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getChapterNo()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getConfirmStatus()); return sb.toString(); } /** * ToString to be specified line number, the confirmation status.<br/> * * @param lineNumberList * Line number list * @param confirmList * Checklist * @param count * Number of third level * @return String */ public String toStringParamRowAndStatus(String lineNumberList, String confirmList, int count) { StringBuilder sb 
= new StringBuilder(); sb.append(getNo()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getFileName()); sb.append(StringUtil.CSV_DELIMITER); sb.append(count); sb.append(StringUtil.CSV_DELIMITER); sb.append(lineNumberList.trim()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getDifficulty()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getConvert()); sb.append(StringUtil.CSV_DELIMITER); sb.append(getChapterNo()); sb.append(StringUtil.CSV_DELIMITER); sb.append(confirmList.trim()); return sb.toString(); } /** * {@inheritDoc} */ @Override public Image getColumnImage(int columnIndex) { if (columnIndex == JbmEditorEnum.DIFFICULTY.getCode() && getLevel() == JbmEditorMigrationRow.LEVEL_FIRST) { DifficultyEnum difficultyEnum = DifficultyEnum.get(getDifficulty()); if (difficultyEnum != null) { return difficultyEnum.getImage(); } } return null; } /** * {@inheritDoc} */ @Override public Font getFont() { if (getLevel() == JbmEditorMigrationRow.LEVEL_FIRST) { return JbmEditorMigrationRow.CHARACTOR; } return null; } /** * {@inheritDoc} */ @Override public boolean hasChildren() { if (getChildList() != null && getChildList().size() > 0) { return true; } return false; } /** * {@inheritDoc} */ @Override public Object[] getChildren() { return getChildList().toArray(); } /** * {@inheritDoc} */ @Override public JbmEditorMigrationRow getMoreParent() { if (getLevel() == JbmEditorMigrationRow.LEVEL_FIRST) { return this; } if (getLevel() == JbmEditorMigrationRow.LEVEL_SECOND) { return getParent(); } return getParent().getParent(); } /** * {@inheritDoc} */ @Override public void updateWriteData() { JbmEditorMigrationRow top = getMoreParent(); StringBuilder sb = new StringBuilder(); for (JbmEditorMigrationRow second : top.getChildList()) { String totalLine = ""; if (second.getChildList().size() == 0) { sb.append(second.toString()); } if (second.getChildList().size() > 0) { StringBuilder comfirmStatus = new StringBuilder(); StringBuilder lineNumber = new StringBuilder(); int countNo = 0; 
for (JbmEditorMigrationRow third : second.getChildList()) { comfirmStatus.append(third.getConfirmStatus()); comfirmStatus.append(StringUtil.BLANK); lineNumber.append(third.getRowNo()); lineNumber.append(StringUtil.BLANK); if (Integer.parseInt(top.getCountNo()) != 0) { countNo++; } } sb.append(second .getChildList() .get(0) .toStringParamRowAndStatus(lineNumber.toString(), comfirmStatus.toString(), countNo)); totalLine = getCsvTotalLine(top, countNo); } // Set the contents of the checklist sb.append(StringUtil.CSV_DELIMITER); sb.append(top.getLineFactor()); sb.append(StringUtil.CSV_DELIMITER); sb.append(top.getDegreeDetail()); sb.append(StringUtil.CSV_DELIMITER); sb.append(top.getLineNumber()); sb.append(StringUtil.CSV_DELIMITER); // Edit line number basis in the CSV output String lineNumContents = top.getLineNumberContents(); // Escaped if it contains a double quote lineNumContents = lineNumContents.replaceAll("\"", "\"\""); // Enclosed in double quotes if it contains a comma, newline, double // Fort if (lineNumContents.contains(StringUtil.CSV_DELIMITER) || lineNumContents.contains(StringUtil.LINE_SEPARATOR) || lineNumContents.contains("\"")) { lineNumContents = "\"" + lineNumContents + "\""; } sb.append(lineNumContents); sb.append(StringUtil.CSV_DELIMITER); sb.append(totalLine); sb.append(StringUtil.LINE_SEPARATOR); } top.setWriteData(sb.toString()); } /** * Get the CSV output for the total number of lines.<br/> * * @param top * Top hierarchy * @param count * Number of counts * @return CSV output for the total number of lines */ private String getCsvTotalLine(JbmEditorMigrationRow top, int count) { String totalLine; long sum; try { Integer lineNum = Integer.valueOf(top.getLineNumber()); Integer hitNum = Integer.valueOf(count); if (getLevel() == JbmEditorMigrationRow.LEVEL_FIRST) { sum = lineNum.longValue() * hitNum.longValue(); } else { sum = lineNum.longValue(); } totalLine = String.valueOf(sum); } catch (NumberFormatException e) { totalLine = "0"; } return 
totalLine; } /** * Get (to hold the value only the first level) one row of data in CSV * format.<br/> * * @return One row of data in CSV format (hold the value only the first * level) */ public String getWriteData() { return writeData; } /** * Set (to hold the value only the first level) one row of data in CSV * format.<br/> * * @param writeData * one row of data in CSV format (To hold the value only the * first level) */ public void setWriteData(String writeData) { this.writeData = writeData; } }
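The CSV escaping performed inline in `updateWriteData` (double each embedded quote, then wrap the field in quotes when it contains a delimiter, newline, or quote) can be isolated into a small helper. A minimal sketch, assuming a comma delimiter; `escapeCsvField` is a hypothetical name, not part of the original class:

```java
public class CsvFieldEscaper {
    /**
     * Escape one CSV field: embedded double quotes are doubled, and the
     * field is wrapped in quotes when it contains a comma, newline, or
     * quote (the same rule as RFC 4180).
     */
    public static String escapeCsvField(String field) {
        String escaped = field.replace("\"", "\"\"");
        if (escaped.contains(",") || escaped.contains("\n") || escaped.contains("\"")) {
            escaped = "\"" + escaped + "\"";
        }
        return escaped;
    }

    public static void main(String[] args) {
        System.out.println(escapeCsvField("plain"));      // plain
        System.out.println(escapeCsvField("a,b"));        // "a,b"
        System.out.println(escapeCsvField("say \"hi\"")); // "say ""hi"""
    }
}
```

Keeping the rule in one place avoids the subtle inconsistency where only one of several CSV-producing methods escapes its fields.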
/**
 * Remote proxy to a RecoveryStore.
 */
public class RecoveryStoreProxy extends TxLogProxy implements RecoveryStore {

    private RecoveryStoreBeanMBean rsProxy; // proxy for the recovery store

    public RecoveryStoreProxy(RecoveryStoreBeanMBean rsProxy) {
        super(rsProxy);
        this.rsProxy = rsProxy;
    }

    public boolean allObjUids(String type, InputObjectState buff, int match)
            throws ObjectStoreException {
        ObjectStateWrapper ios = rsProxy.allObjUids(type, match);
        OutputObjectState oos = ios.getOOS();

        if (oos == null)
            return false;

        buff.copyFrom(oos);

        return ios.isValid();
    }

    public boolean allObjUids(String type, InputObjectState buff)
            throws ObjectStoreException {
        ObjectStateWrapper ios = rsProxy.allObjUids(type);
        OutputObjectState oos = ios.getOOS();

        if (oos == null)
            return false;

        buff.copyFrom(oos);

        return ios.isValid();
    }

    public boolean allTypes(InputObjectState buff) throws ObjectStoreException {
        ObjectStateWrapper ios = rsProxy.allTypes();
        OutputObjectState oos = ios.getOOS();

        if (oos == null)
            return false;

        buff.copyFrom(oos);

        return ios.isValid();
    }

    public int currentState(Uid u, String tn) throws ObjectStoreException {
        return rsProxy.currentState(u, tn);
    }

    public boolean hide_state(Uid u, String tn) throws ObjectStoreException {
        return rsProxy.hide_state(u, tn);
    }

    public boolean reveal_state(Uid u, String tn) throws ObjectStoreException {
        return rsProxy.reveal_state(u, tn);
    }

    public InputObjectState read_committed(Uid u, String tn) throws ObjectStoreException {
        ObjectStateWrapper osw = rsProxy.read_committed(u, tn);
        return osw.getIOS();
    }

    public boolean isType(Uid u, String tn, int st) throws ObjectStoreException {
        return rsProxy.isType(u, tn, st);
    }
}
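The three `allObjUids`/`allTypes` overloads above repeat the same unwrap, null-check, copy sequence. One way to factor that out is a small helper; the sketch below uses hypothetical stand-in types (`Wrapper`, `Payload`, `Buffer`) in place of `ObjectStateWrapper`, `OutputObjectState`, and `InputObjectState`, which belong to the transaction library and are not reproduced here:

```java
public class CopyHelperSketch {
    /** Hypothetical stand-in for OutputObjectState. */
    static class Payload {
        final byte[] data;
        Payload(byte[] data) { this.data = data; }
    }

    /** Hypothetical stand-in for ObjectStateWrapper. */
    static class Wrapper {
        final Payload payload;
        final boolean valid;
        Wrapper(Payload payload, boolean valid) {
            this.payload = payload;
            this.valid = valid;
        }
        Payload getOOS() { return payload; }
        boolean isValid() { return valid; }
    }

    /** Hypothetical stand-in for InputObjectState. */
    static class Buffer {
        byte[] data;
        void copyFrom(Payload p) { this.data = p.data.clone(); }
    }

    /**
     * The shared unwrap-check-copy sequence from the proxy methods:
     * copy the wrapper's payload into buff, or return false when the
     * remote call produced no payload.
     */
    public static boolean copyInto(Wrapper ios, Buffer buff) {
        Payload oos = ios.getOOS();
        if (oos == null)
            return false;
        buff.copyFrom(oos);
        return ios.isValid();
    }
}
```

With such a helper each proxy method reduces to one delegation plus one `copyInto` call.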
/**
 * Parses the input string into commands. Boolean flags are set and executed
 * in the main loop.
 *
 * @param input the input string to parse
 *
 * @return 0 if passed, -1 if failed
 */
int parseArgs(char* input){
    char* arg;

    arg = strtok(input, " \n");
    if(arg == NULL){
        printf("\nPlease enter a command\n");
        return -1;
    }
    while(arg != NULL){
        /* strlwr() lowercases the token in place (non-standard, Windows CRT) */
        if(strcmp(strlwr(arg), "arm") == 0){
            arg = strtok(NULL, " \n");
            if(arg != NULL){
                strcpy(filename, arg);
                fArm = true;
            }
            else{
                printf("Error: No filename specified\n");
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "start") == 0 || strcmp(strlwr(arg), "immediate") == 0){
            arg = strtok(NULL, " \n");
            if(arg != NULL){
                strcpy(filename, arg);
                fImmediate = true;
            }
            else{
                printf("Error: No filename specified\n");
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "stop") == 0){
            fStop = true;
        }
        else if(strcmp(strlwr(arg), "length") == 0){
            arg = strtok(NULL, " \n");
            if(arg == NULL){
                printf("Please enter a length value, EX: length 42\n");
                return -1;
            }
            if(arg[0] < '0' || arg[0] > '9'){
                printf("Invalid length value \"%s\". Please enter a length value. EX: length 42\n", arg);
                return -1;
            }
            if(arg[0] == '0' && (arg[1] == 'x' || arg[1] == 'X')){
                sscanf(arg, "0x%X", &length);
            }
            else{
                length = atoi(arg);
            }
            if(length > 0 && length < 0x4000){
                fSetLength = true;
            }
            else{
                printf("Length value must be between 1 and 16383\n");
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "gain") == 0){
            arg = strtok(NULL, " \n");
            if(arg == NULL){
                printf("Please enter a gain value, EX: gain high\n");
                return -1;
            }
            if(strcmp(strlwr(arg), "low") == 0 || arg[0] == '0'){
                gain = 0;
                fSetGain = true;
            }
            else if(strcmp(strlwr(arg), "high") == 0 || arg[0] == '1'){
                gain = 1;
                fSetGain = true;
            }
            else{
                printf("Invalid gain value \"%s\". Expected low, 0, high, or 1\n", arg);
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "edge") == 0){
            arg = strtok(NULL, " \n");
            if(arg == NULL){
                printf("Please enter a trigger edge, EX: edge rising\n");
                return -1;
            }
            if(strcmp(strlwr(arg), "rising") == 0 || arg[0] == '1'){
                triggerEdge = 0;
                fSetTriggerEdge = true;
            }
            else if(strcmp(strlwr(arg), "falling") == 0 || arg[0] == '0'){
                triggerEdge = 1;
                fSetTriggerEdge = true;
            }
            else{
                printf("Invalid trigger edge value \"%s\". Expected rising, 1, falling, 0\n", arg);
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "coupling") == 0){
            arg = strtok(NULL, " \n");
            if(arg == NULL){
                printf("Please enter a coupling, EX: coupling ac\n");
                return -1;
            }
            if(strcmp(strlwr(arg), "ac") == 0){
                coupling = 1;
                fSetCoupling = true;
            }
            else if(strcmp(strlwr(arg), "dc") == 0){
                coupling = 0;
                fSetCoupling = true;
            }
            else{
                printf("Invalid coupling value \"%s\". Expected [ac, dc]\n", arg);
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "level") == 0){
            arg = strtok(NULL, " \n");
            if(arg == NULL){
                printf("Please enter a trigger level, EX: level 1.5\n");
                return -1;
            }
            /* accept a leading digit, sign, or decimal point */
            if((arg[0] < '0' || arg[0] > '9') && arg[0] != '-' && arg[0] != '.'){
                printf("Invalid trigger level value \"%s\". Please enter a trigger level value. EX: level 1.5\n", arg);
                return -1;
            }
            triggerLevel = atof(arg);
            if((gain == 0 && triggerLevel >= -25 && triggerLevel <= 25)
                    || (gain == 1 && triggerLevel >= -1 && triggerLevel <= 1)){
                fSetTriggerLevel = true;
            }
            else{
                if(gain == 0){
                    printf("Trigger Level value must be between -25.0 and 25.0 when gain = 0\n");
                }else{
                    printf("Trigger Level value must be between -1.0 and 1.0 when gain = 1\n");
                }
                return -1;
            }
        }
        else if(strcmp(strlwr(arg), "ch1") == 0){
            channel = 1;
        }
        else if(strcmp(strlwr(arg), "ch2") == 0){
            channel = 2;
        }
        else if(strcmp(strlwr(arg), "help") == 0 || arg[0] == '?'){
            printUsage();
        }
        else {
            printf("Error: Invalid argument %s\n", arg);
            return -1;
        }
        arg = strtok(NULL, " \n");
    }
    return 0;
}
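The `length` branch above accepts either decimal or `0x`-prefixed hex input and range-checks the result against `0x4000`. The same validation as a standalone, testable function (a sketch in Java rather than C; `parseLength` and its return-`-1`-on-error convention are choices made here, not part of the original program):

```java
public class LengthArg {
    /**
     * Parse a capture length given as decimal or 0x-prefixed hex.
     * Returns the value when it lies in 1..16383 (the range the original
     * check `length > 0 && length < 0x4000` enforces), or -1 on any error.
     */
    public static int parseLength(String arg) {
        int value;
        try {
            if (arg.startsWith("0x") || arg.startsWith("0X")) {
                value = Integer.parseInt(arg.substring(2), 16);
            } else {
                value = Integer.parseInt(arg);
            }
        } catch (NumberFormatException e) {
            return -1;
        }
        return (value > 0 && value < 0x4000) ? value : -1;
    }
}
```

Unlike `atoi`/`sscanf`, this rejects trailing garbage (`"42x"`) outright, which is usually what a command parser wants.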
/**
 * Fast Number Theoretic Transform that uses a "two-pass" algorithm to
 * calculate a very long transform on data that resides on a mass storage
 * device. The storage medium should preferably be a solid state disk for
 * good performance; on normal hard disks performance is usually
 * inadequate.<p>
 *
 * The "two-pass" algorithm only needs to do two passes through the data
 * set. In comparison, a basic FFT algorithm of length 2<sup>n</sup> needs
 * to do n passes through the data set. Although the algorithm is fairly
 * optimal in terms of the amount of data transferred between the mass
 * storage and main memory, the mass storage access is not linear but done
 * in small discontinuous pieces, so due to disk seek times the performance
 * can be quite lousy.<p>
 *
 * When the data to be transformed is considered to be an
 * n<sub>1</sub> x n<sub>2</sub> matrix of data, instead of a linear array,
 * the two passes go as follows:
 *
 * <ol>
 *   <li>Do n<sub>2</sub> transforms of length n<sub>1</sub> by transforming the matrix columns.
 *       Do this by fetching n<sub>1</sub> x b blocks in memory so that the
 *       blocks are as large as possible but fit in main memory.</li>
 *   <li>Then do n<sub>1</sub> transforms of length n<sub>2</sub> by transforming the matrix rows.
 *       Do this also by fetching b x n<sub>2</sub> blocks in memory so that the blocks just
 *       fit in the available memory.</li>
 * </ol>
 * <p>
 *
 * The algorithm requires reading blocks of b elements from the mass storage
 * device. The smaller the available memory is compared to the transform
 * length, the smaller b is as well. Reading very short blocks of data from
 * hard disks can be prohibitively slow.<p>
 *
 * When reading the column data to be transformed, the data can be
 * transposed to rows by reading the b-length blocks to proper locations in
 * memory and then transposing the b x b blocks.<p>
 *
 * In a convolution algorithm the data elements can remain in any order
 * after the transform, as long as the inverse transform can transform it
 * back. The convolution's element-by-element multiplication is not
 * sensitive to the order in which the elements are.<p>
 *
 * All access to this class must be externally synchronized.
 *
 * @see DataStorage#getTransposedArray(int,int,int,int)
 *
 * @since 1.7.0
 * @version 1.9.0
 * @author Mikko Tommila
 */
public class TwoPassFNTStrategy extends AbstractStepFNTStrategy {

    /**
     * Default constructor.
     */
    public TwoPassFNTStrategy() {
    }

    @Override
    protected void transform(DataStorage dataStorage, int n1, int n2, long length, int modulus)
            throws ApfloatRuntimeException {
        assert (n2 >= n1);

        int maxBlockSize = getMaxMemoryBlockSize(length); // Maximum memory array size that can be allocated
        int b;

        if (n1 > maxBlockSize || n2 > maxBlockSize) {
            throw new ApfloatInternalException("Not enough memory available to fit one row or column of matrix to memory; n1=" + n1 + ", n2=" + n2 + ", available=" + maxBlockSize);
        }

        b = maxBlockSize / n1;

        for (int i = 0; i < n2; i += b) {
            // Read the data in n1 x b blocks, transposed
            try (ArrayAccess arrayAccess = getColumns(dataStorage, i, b, n1)) {
                // Do b transforms of size n1
                transformColumns(arrayAccess, n1, b, false, modulus);
            }
        }

        b = maxBlockSize / n2;

        for (int i = 0; i < n1; i += b) {
            // Read the data in b x n2 blocks
            try (ArrayAccess arrayAccess = getRows(dataStorage, i, b, n2)) {
                // Multiply each matrix element by w^(i*j)
                multiplyElements(arrayAccess, i, 0, b, n2, length, 1, false, modulus);

                // Do b transforms of size n2
                transformRows(arrayAccess, n2, b, false, modulus);
            }
        }
    }

    @Override
    protected void inverseTransform(DataStorage dataStorage, int n1, int n2, long length, long totalTransformLength, int modulus)
            throws ApfloatRuntimeException {
        assert (n2 >= n1);

        int maxBlockSize = getMaxMemoryBlockSize(length); // Maximum memory array size that can be allocated
        int b;

        if (n1 > maxBlockSize || n2 > maxBlockSize) {
            throw new ApfloatInternalException("Not enough memory available to fit one row or column of matrix to memory; n1=" + n1 + ", n2=" + n2 + ", available=" + maxBlockSize);
        }

        b = maxBlockSize / n2;

        for (int i = 0; i < n1; i += b) {
            // Read the data in b x n2 blocks
            try (ArrayAccess arrayAccess = getRows(dataStorage, i, b, n2)) {
                // Do b transforms of size n2
                transformRows(arrayAccess, n2, b, true, modulus);

                // Multiply each matrix element by w^(i*j) / n
                multiplyElements(arrayAccess, i, 0, b, n2, length, totalTransformLength, true, modulus);
            }
        }

        b = maxBlockSize / n1;

        for (int i = 0; i < n2; i += b) {
            // Read the data in n1 x b blocks, transposed
            try (ArrayAccess arrayAccess = getColumns(dataStorage, i, b, n1)) {
                // Do b transforms of size n1
                transformColumns(arrayAccess, n1, b, true, modulus);
            }
        }
    }

    /**
     * Get a block of column data. The data may be transposed, depending on the implementation.
     *
     * @param dataStorage The data storage.
     * @param startColumn The starting column where data is read.
     * @param columns The number of columns of data to read.
     * @param rows The number of rows of data to read. This should be equivalent to n<sub>1</sub>, the number of rows in the matrix.
     *
     * @return Access to an array of size <code>columns</code> x <code>rows</code> containing the data.
     */
    protected ArrayAccess getColumns(DataStorage dataStorage, int startColumn, int columns, int rows) {
        return dataStorage.getTransposedArray(DataStorage.READ_WRITE, startColumn, columns, rows);
    }

    /**
     * Get a block of row data. The data may be transposed, depending on the implementation.
     *
     * @param dataStorage The data storage.
     * @param startRow The starting row where data is read.
     * @param rows The number of rows of data to read.
     * @param columns The number of columns of data to read. This should be equivalent to n<sub>2</sub>, the number of columns in the matrix.
     *
     * @return Access to an array of size <code>columns</code> x <code>rows</code> containing the data.
     */
    protected ArrayAccess getRows(DataStorage dataStorage, int startRow, int rows, int columns) {
        return dataStorage.getArray(DataStorage.READ_WRITE, startRow * columns, rows * columns);
    }

    /**
     * Multiply each matrix element <code>(i, j)</code> by <code>w<sup>i * j</sup> / totalTransformLength</code>.
     * The matrix size is n<sub>1</sub> x n<sub>2</sub>.
     *
     * @param arrayAccess The memory array to multiply.
     * @param startRow Which row in the whole matrix the starting row in the <code>arrayAccess</code> is.
     * @param startColumn Which column in the whole matrix the starting column in the <code>arrayAccess</code> is.
     * @param rows The number of rows in the <code>arrayAccess</code> to multiply.
     * @param columns The number of columns in the matrix (= n<sub>2</sub>).
     * @param length The length of data in the matrix being transformed.
     * @param totalTransformLength The total transform length, for the scaling factor. Used only for the inverse case.
     * @param isInverse If the multiplication is done for the inverse transform or not.
     * @param modulus Index of the modulus.
     */
    protected void multiplyElements(ArrayAccess arrayAccess, int startRow, int startColumn, int rows, int columns, long length, long totalTransformLength, boolean isInverse, int modulus) {
        super.stepStrategy.multiplyElements(arrayAccess, startRow, startColumn, rows, columns, length, totalTransformLength, isInverse, modulus);
    }

    /**
     * Transform the columns of the data matrix.
     * The data may be in transposed format, depending on the implementation.<p>
     *
     * By default the column transforms permute the data, leaving it in the correct
     * order so the element-by-element multiplication is simpler.
     *
     * @param arrayAccess The memory array to split to columns and to transform.
     * @param length Length of one transform (one column).
     * @param count Number of columns.
     * @param isInverse <code>true</code> if an inverse transform is performed, <code>false</code> if a forward transform is performed.
     * @param modulus Index of the modulus.
     */
    protected void transformColumns(ArrayAccess arrayAccess, int length, int count, boolean isInverse, int modulus) {
        super.stepStrategy.transformRows(arrayAccess, length, count, isInverse, true, modulus);
    }

    /**
     * Transform the rows of the data matrix.
     * The data may be in transposed format, depending on the implementation.<p>
     *
     * By default the row transforms do not permute the data, leaving it in
     * scrambled order, as this does not matter when the data is only used for
     * convolution.
     *
     * @param arrayAccess The memory array to split to rows and to transform.
     * @param length Length of one transform (one row).
     * @param count Number of rows.
     * @param isInverse <code>true</code> if an inverse transform is performed, <code>false</code> if a forward transform is performed.
     * @param modulus Index of the modulus.
     */
    protected void transformRows(ArrayAccess arrayAccess, int length, int count, boolean isInverse, int modulus) {
        super.stepStrategy.transformRows(arrayAccess, length, count, isInverse, false, modulus);
    }

    private int getMaxMemoryBlockSize(long length) {
        ApfloatContext ctx = ApfloatContext.getContext();
        long maxMemoryBlockSize = Util.round2down(Math.min(ctx.getMaxMemoryBlockSize(), Integer.MAX_VALUE)) / ctx.getBuilderFactory().getElementSize();
        int maxBlockSize = (int) Math.min(length, maxMemoryBlockSize);

        return maxBlockSize;
    }
}
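The two passes described in the class Javadoc, column transforms of length n1, a twiddle-factor multiplication by w^(i*j), then row transforms of length n2 with the output left transposed, are the classic four-step (matrix) FFT factorization. A small in-memory demonstration over the complex numbers, using naive DFTs; this sketch only illustrates the factorization, whereas the class above works in modular arithmetic and streams blocks from mass storage:

```java
public class FourStepDftDemo {
    // Naive in-place DFT of the n elements at off, off+stride, off+2*stride, ...
    static void dft(double[] re, double[] im, int off, int stride, int n) {
        double[] outRe = new double[n], outIm = new double[n];
        for (int k = 0; k < n; k++) {
            for (int j = 0; j < n; j++) {
                double ang = -2 * Math.PI * j * k / n;
                double c = Math.cos(ang), s = Math.sin(ang);
                int idx = off + j * stride;
                outRe[k] += re[idx] * c - im[idx] * s;
                outIm[k] += re[idx] * s + im[idx] * c;
            }
        }
        for (int k = 0; k < n; k++) {
            re[off + k * stride] = outRe[k];
            im[off + k * stride] = outIm[k];
        }
    }

    /** Four-step DFT of a length n1*n2 signal stored row-major as an n1 x n2 matrix. */
    static double[][] fourStep(double[] re, double[] im, int n1, int n2) {
        int n = n1 * n2;
        // Step 1: n2 column transforms of length n1 (stride n2)
        for (int j2 = 0; j2 < n2; j2++) dft(re, im, j2, n2, n1);
        // Step 2: multiply element (k1, j2) by the twiddle factor w_N^(k1*j2)
        for (int k1 = 0; k1 < n1; k1++)
            for (int j2 = 0; j2 < n2; j2++) {
                double ang = -2 * Math.PI * k1 * j2 / n;
                double c = Math.cos(ang), s = Math.sin(ang);
                int idx = k1 * n2 + j2;
                double r = re[idx] * c - im[idx] * s;
                im[idx] = re[idx] * s + im[idx] * c;
                re[idx] = r;
            }
        // Step 3: n1 row transforms of length n2 (stride 1)
        for (int k1 = 0; k1 < n1; k1++) dft(re, im, k1 * n2, 1, n2);
        // Step 4: the result is transposed: X[k1 + n1*k2] = matrix[k1][k2]
        double[] xr = new double[n], xi = new double[n];
        for (int k1 = 0; k1 < n1; k1++)
            for (int k2 = 0; k2 < n2; k2++) {
                xr[k1 + n1 * k2] = re[k1 * n2 + k2];
                xi[k1 + n1 * k2] = im[k1 * n2 + k2];
            }
        return new double[][] { xr, xi };
    }

    /** Max abs difference between the four-step result and a direct DFT of a test signal. */
    public static double maxError(int n1, int n2) {
        int n = n1 * n2;
        double[] re = new double[n], im = new double[n];
        for (int j = 0; j < n; j++) re[j] = Math.sin(j) + 0.5 * Math.cos(3 * j);
        double[] dr = re.clone(), di = im.clone();
        dft(dr, di, 0, 1, n); // direct DFT as the reference
        double[][] x = fourStep(re, im, n1, n2);
        double err = 0;
        for (int k = 0; k < n; k++)
            err = Math.max(err, Math.hypot(x[0][k] - dr[k], x[1][k] - di[k]));
        return err;
    }
}
```

The transposed output order in step 4 is exactly why the class can leave data "scrambled" for convolution: the element-by-element multiplication does not care about element order, so the readout permutation can be skipped entirely.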