Spain’s Garbine Muguruza won her second Grand Slam title, and her first at Wimbledon, beating Venus Williams 7-5 6-0. It was a surprising result given that Venus was likely a slight favorite going in, and a closer match was certainly expected. The 23-year-old Muguruza likely has more Slams in her future. Her all-court game was on display against Venus: she hit with power from corner to corner, and after saving two set points at 4-5 in the first, she broke Venus and served out the opening set. After that it was one-way traffic. The veteran Williams sister got sloppy, was broken three times, and never challenged Muguruza’s serve, taking a bagel in the second set to hand Muguruza the trophy. Williams, a five-time Wimbledon champion, can still claim two Slam finals this season, something she last managed in 2003, well over a decade ago. That said, she hasn’t won a Grand Slam since 2008, reaching two semifinals and three finals since that point, and she will remain at seven Slams in total. Still, it has been an important year for the older Williams sister, who has shown the world she can still compete at the top level. If she keeps reaching Grand Slam finals, she absolutely can win another one, even at the age of 37 (or older). Muguruza now has a real chance to attract a new generation of Spaniards to tennis, as she is quickly becoming one of the most accomplished Spanish women ever to play the sport. With a friendly personality and an exciting game to boot, the Spanish federation has to be happy. Spanish tennis has been driven by Rafael Nadal for a very long time now, but it is very good for Spain to know that they have someone coming up who can keep the fans interested after Nadal eventually retires, whenever that is.
import {EdmMapping,EdmType} from '@themost/data/odata';
import StructuredValue = require('./structured-value-model');

/**
 * @class
 */
@EdmMapping.entityType('ElectionSpecification')
class ElectionSpecification extends StructuredValue {

    public minimumSelection?: number;
    public maximumSelection?: number;
    public validFrom?: Date;
    public validThrough?: Date;
    public id?: number;

    /**
     * @constructor
     */
    constructor() {
        super();
    }
}

export = ElectionSpecification;
// Draw a series of diagonal lines across a square canvas of width/height of
// the length requested. The lines will start from the top left corner to the
// bottom right corner, and move from left to right (at the top) and from right
// to left (at the bottom) until 10,000 lines are drawn.
//
// The resulting image will be an hourglass shape.
void BM_DrawLine(benchmark::State& state,
                 BackendType backend_type,
                 unsigned attributes) {
  auto canvas_provider = CreateCanvasProvider(backend_type);
  DisplayListBuilder builder;
  builder.setAttributesFromPaint(GetPaintForRun(attributes),
                                 DisplayListOpFlags::kDrawLineFlags);
  AnnotateAttributes(attributes, state, DisplayListOpFlags::kDrawLineFlags);

  size_t length = state.range(0);
  canvas_provider->InitializeSurface(length, length);
  auto canvas = canvas_provider->GetSurface()->getCanvas();

  state.counters["DrawCallCount"] = kLinesToDraw;
  for (size_t i = 0; i < kLinesToDraw; i++) {
    builder.drawLine(SkPoint::Make(i % length, 0),
                     SkPoint::Make(length - i % length, length));
  }

  auto display_list = builder.Build();

  for ([[maybe_unused]] auto _ : state) {
    display_list->RenderTo(canvas);
    canvas_provider->GetSurface()->flushAndSubmit(true);
  }

  auto filename = canvas_provider->BackendName() + "-DrawLine-" +
                  std::to_string(state.range(0)) + ".png";
  canvas_provider->Snapshot(filename);
}
/**
 * Returns whether the result of the check is considered secure or not.
 *
 * @return whether the result of the check is considered secure or not
 */
public boolean isSecure() {
    boolean retVal = delegate.passed();
    if (deviceCheck.secureWhenFalse) {
        return !retVal;
    }
    return retVal;
}
“WE WERE selling $1m a year in merchandise with the company logo on it,” says Erik Prince with a mixture of nostalgia and defiance. Blackwater, the company in question, rose to worldwide prominence as an outsourced branch of the American army during the occupations of Iraq and Afghanistan. It had plenty of admirers for the way it pioneered a new branch of the defence industry, earning a total of around $2 billion from Uncle Sam for providing armed personnel to the Pentagon, the State Department and, secretly, the CIA. But the firm was overwhelmed by its more numerous critics, who said it was an undisciplined, unaccountable bunch of mercenaries. In 2010 Mr Prince gave up the fight. He sold the firm, which he had founded in 1997 and which got its first big break by teaching police to handle school shootings after the 1999 Columbine massacre. It is smaller now and mostly does less controversial work such as guarding diplomats and providing training. Its new name, Academi, could scarcely sound less aggressive. Mr Prince has started talking, after a long silence, to promote his book, “Civilian Warriors”, in which he mounts a defence of his firm as unyielding as a Blackwater contractor under enemy fire. Mr Prince says he never intended to build a new sort of defence firm, but stumbled on a huge opportunity to fill gaps in military capacity cost-effectively. Blackwater provided security to government officials such as Paul Bremer, the head of the transitional authority after the invasion of Iraq, and later to senior State Department employees in Iraq and Afghanistan.
He says he got his entrepreneurial instinct from his father, who built a successful motor-industry supplier and taught him that the best way to win and keep a customer is always to say yes, then overdeliver: a formula that worked spectacularly well for Blackwater until the deadly nature of its overdelivery became so controversial. Now he is glad to be out of the industry he helped build. With the winding down of America’s military campaigns in Iraq and Afghanistan, it has become a “very crowded field; too many firms competing for a shrinking pie.” But Blackwater’s demise created space for two rivals: DynCorp International, a 60-year-old firm that diversified into military security, and Triple Canopy, founded in 2003 with a similar business model to Blackwater’s. Groups such as Human Rights First campaign against governments’ use of private military contractors, and Barack Obama attacked Blackwater in his first presidential campaign. George Bush responded by modestly tightening the rules on its deployment, but there has been no change of course since Mr Obama took office. “There is no going back; we just need to figure out the new rules for when private military firms should be used and when they should not,” says Sean McFate, an academic at Georgetown University who previously worked for DynCorp.

Imitators everywhere

Post-Blackwater, two trends have dominated the new industry, says Mr McFate: globalisation and indigenisation. On the supply side, there is a growing number of private military firms, and not all of the new ones were formed by former special forces from Western powers, as Aegis and Blue Mountain, two British firms, were. Warlords in places such as Afghanistan and Somalia are creating contracting firms that they staff with local talent, and their embattled national governments are seeing the merits of contracting out security. So America is no longer the only big buyer of private force, notes Mr McFate.
One thing that would greatly improve the industry’s prospects is if the United Nations began using private contractors for peacekeeping missions, as it is said to be considering. Today, such missions are staffed by soldiers from poorer countries, who are often badly trained. Mr Prince thinks that private contracting would make the UN more effective, but he has no intention of going after that business. For him, the new promised land is Africa, where he is investing in firms providing services to the oil and gas industry, in places where he thinks his expertise in providing logistics and security can give him a competitive edge.
Pretesting mHealth: Implications for Campaigns among Underserved Patients.
BACKGROUND For health campaigns, pretesting the message-delivery channel and the process evaluation is important to eventual campaign effectiveness. We conducted a pilot study to pretest text messaging as an mHealth channel for traditionally underserved patients.
AIMS The primary objectives of the research were to assess 1) whether these patients could be successfully recruited for a text message study and 2) whether recruited patients would engage in a process evaluation after receiving the text message.
METHODS Recruited patients were sent a text message and then called a few hours later to assess whether they had received, read, and remembered the sent text message.
RESULTS We approached twenty patients, of whom fifteen consented to participate. Of these consented participants, ten (67%) engaged in the process evaluation and eight (53%) were confirmed as having received, read, and remembered the text message.
CONCLUSION We found that traditionally underserved and under-researched patients can be recruited to participate in a text message study, and that recruited patients will engage in a process evaluation after receiving the text message.
Bandai Namco has announced that free DLC will be available in the European eShop for both the 3DS and Wii U versions of One Piece Unlimited World Red. The action-adventure title, which is based on the popular manga series, was released just last week for Nintendo platforms, and in order to celebrate the game’s launch, Bandai Namco will release the “Red Stands Alone” quest as free downloadable content. The publisher has not specified how long the DLC will remain free, but presumably fans will be able to download the quest when the Nintendo eShop updates with another round of new content on Thursday, July 3. One Piece Unlimited World Red will arrive in North America next week, though Bandai Namco has yet to confirm the free DLC there. For a rundown of what to expect in Red Stands Alone, you can check out part of the press release below. “Engage in combat once again against Red! Players will face some of the most familiar and toughest enemies in a completely new setup with a higher difficulty threshold!”
import React from 'react'
import { StyleSheet, View } from 'react-native-web'

export const Button = (props) => (
  <View
    style={[
      sheet.button,
      props.isEven ? sheet.blue : sheet.red,
      props.isEven ? sheet.size1 : sheet.size2,
      props.style,
    ]}
  >
    {props.children}
  </View>
)

const sheet = StyleSheet.create({
  button: {
    alignItems: 'center',
    flexShrink: 0,
    justifyContent: 'center',
    backgroundColor: 'white',
    borderColor: '#999',
    borderWidth: 1,
    borderRadius: 3,
    height: 25,
    paddingLeft: 10,
    paddingRight: 10,
  },
  disabled: {
    backgroundColor: 'gray',
    shadowColor: 'gray',
    pointerEvents: 'none',
  },
  size1: {
    borderRadius: 2, // was the string '2'; borderRadius expects a number
    height: 25,
    paddingHorizontal: 10,
  },
  size2: {
    borderRadius: 3, // was the string '3'; borderRadius expects a number
    height: 35,
    paddingHorizontal: 15,
  },
  blue: {
    backgroundColor: 'blue',
    borderColor: 'gray',
  },
  red: {
    backgroundColor: 'red',
    borderColor: 'gray',
  },
})
#include "ItemSlot.h"
#include "../../game.h"
#include <cassert>
#include "../UserInterface.h"

Vector2 ItemSlot::size = Vector2(100, 100);

void ItemSlot::addTextbox()
{
	if (this->showTextBox)
	{
		this->textBox = std::make_unique<TextBox>("-- " + this->slot->getItem()->getName() + " --\n" + this->slot->getItem()->getDescription(),
			Color(Colors::White), Vector2(), ArrowPlacement::TOP);
		this->textBox->setPosition(this->position + Vector2(ItemSlot::size.x * 0.5f - this->textBox->getSize().x * 0.5f, ItemSlot::size.y + 10.0f));
	}
}

ItemSlot::ItemSlot(Vector2 position, bool showTextBox)
	: Element(position), showTextBox(showTextBox), slot(nullptr), rotationTimer(0)
{
	Game::getGraphics().loadTexture("UI/itemSlot");
	this->textureSlot = Game::getGraphics().getTexturePointer("UI/itemSlot");
	assert(textureSlot && "Texture failed to load!");
}

ItemSlot::~ItemSlot()
{
}

void ItemSlot::draw(bool selected)
{
	UserInterface::getSpriteBatch()->Draw(this->textureSlot->getShaderResView(), this->position);

	if (selected && this->textBox)
	{
		this->textBox->draw(selected);
	}
}

void ItemSlot::update(bool selected, float deltaTime)
{
	if (this->slot && this->slot->getItem()->getObject())
	{
		if (selected)
		{
			rotationTimer = std::fmodf(rotationTimer + deltaTime * 4, XM_2PI);
			//rotation = Quaternion::CreateFromYawPitchRoll(rotationTimer, 0.0f, 0.0f);
			rotation = Quaternion::Slerp(rotation, Quaternion::CreateFromYawPitchRoll(rotationTimer, 0, 0), deltaTime * 10);
			transform = Item::generateTransform(this->slot->getItem()->getObject(),
				this->position + Vector2(ItemSlot::size.x * 0.5f, ItemSlot::size.y - 10.0f),
				Vector3(1.5f), rotation, true);
		}
		else
		{
			rotationTimer = Game::lerp(rotationTimer, 190 * XM_PI / 180, deltaTime * 4);
			rotation = Quaternion::Slerp(rotation, Quaternion::CreateFromYawPitchRoll(XM_PI + 0.3f, 0.26f, 0.0f), deltaTime * 4);
			transform = Item::generateTransform(this->slot->getItem()->getObject(),
				this->position + Vector2(ItemSlot::size.x * 0.5f, ItemSlot::size.y - 10.0f),
				Vector3(1.5f), rotation, true);
		}
	}
}

Matrix& ItemSlot::getTransform()
{
	return this->transform;
}

Container::Slot* ItemSlot::getSlot() const
{
	return this->slot;
}

void ItemSlot::setSlot(Container::Slot* slot)
{
	if (this->slot)
	{
		if (this->slot->getItem()->getObject())
		{
			Game::getGraphics().removeFromUIDraw(this->slot->getItem()->getObject(), &this->transform);
		}
		this->textBox.reset();
	}

	this->slot = slot;

	if (slot)
	{
		if (this->slot->getItem()->getObject())
		{
			rotationTimer = 190 * XM_PI / 180;
			rotation = Quaternion::CreateFromYawPitchRoll(XM_PI + 0.3f, 0.26f, 0.0f);
			transform = Item::generateTransform(this->slot->getItem()->getObject(),
				this->position + Vector2(ItemSlot::size.x * 0.5f, ItemSlot::size.y - 10.0f),
				Vector3(1.5f), rotation, true);
			Game::getGraphics().addToUIDraw(this->slot->getItem()->getObject(), &this->transform);
		}
		this->addTextbox();
	}
}

void ItemSlot::unloadTextures()
{
	Game::getGraphics().unloadTexture("UI/itemSlot");
}
import React, { forwardRef } from "react";
import cx from "classnames";
import { ExpandButton } from "@forbrukerradet/jkl-expand-button-react";

import type { TableCellProps } from "./TableCell";
import { TableCell } from "./TableCell";
import { useTableContext } from "./tableContext";

export interface ExpandableTableRowControllerProps extends TableCellProps {
    /** Set automatically by ExpandableTableRow */
    isOpen?: boolean;
    /** Set automatically by ExpandableTableRow */
    onClick?: () => void;
}

const ExpandableTableRowController = forwardRef<HTMLTableCellElement, ExpandableTableRowControllerProps>(
    ({ isOpen, onClick, children, className, id, "aria-controls": ariaControls, ...rest }, ref) => {
        if (isOpen === undefined || typeof onClick !== "function") {
            throw new Error("ExpandableTableRowController must have ExpandableTableRow as parent");
        }
        const { compact, collapseToList } = useTableContext();

        // pick text from data-th if possible, but only if it's a list
        const showTextFromTh: string | undefined = collapseToList
            ? (rest as Record<string, string>)["data-th"]
            : undefined;

        return (
            <TableCell
                className={cx(
                    "jkl-table-cell--expand",
                    { ["jkl-table-cell--expand-without-text"]: !children },
                    className,
                )}
                {...rest}
                ref={ref}
            >
                <ExpandButton
                    className={cx("jkl-table-row-expand-button", {
                        ["jkl-table-row-expand-button--expanded"]: isOpen,
                    })}
                    id={id}
                    forceCompact={compact}
                    isExpanded={isOpen}
                    aria-controls={ariaControls}
                    onClick={(e) => {
                        e.stopPropagation();
                        onClick();
                    }}
                    onKeyDown={(e) => {
                        if (e.key === "Enter" || e.key === " ") {
                            e.stopPropagation();
                            e.preventDefault();
                            onClick();
                        }
                    }}
                >
                    {/* show children, or try to use data-th if children is undefined */}
                    {children ?? showTextFromTh}
                </ExpandButton>
            </TableCell>
        );
    },
);
ExpandableTableRowController.displayName = "ExpandableTableRowController";

export { ExpandableTableRowController };
def add_child(self, instance):
    assert isinstance(instance, (Module, Gate))
    assert instance.id not in self.children
    instance.parent = self
    self.children[instance.id] = instance
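The method above keeps a two-way link between parent and child: the child records its parent, and the parent indexes its children by id. A minimal, runnable sketch of that bookkeeping, using hypothetical stand-in Module and Gate classes (the real classes are not shown in this excerpt and surely define more than this):

```python
class Module:
    def __init__(self, id):
        self.id = id
        self.parent = None
        self.children = {}  # maps child id -> child instance

    def add_child(self, instance):
        # Only Module or Gate instances may be attached, and each child
        # id must be unique within this parent.
        assert isinstance(instance, (Module, Gate))
        assert instance.id not in self.children
        instance.parent = self
        self.children[instance.id] = instance


class Gate(Module):
    pass


top = Module("top")
gate = Gate("and0")
top.add_child(gate)
print(gate.parent is top)      # → True
print("and0" in top.children)  # → True
```

Note that attaching the same id twice trips the second assertion, which is what guarantees ids stay unique within a parent.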
/**
 * Created by Administrator on 2015/11/5.
 */
public class OjalgoUtils {

    public static Matrix computeHTWHInv(Matrix HH, Matrix WWInv) {
        MatrixStore<Double> hhoj = la4jMatrixToMatrixStore(HH);
        MatrixStore<Double> wwinvoj = la4jMatrixToMatrixStore(WWInv);
        MatrixStore<Double> res = hhoj.transpose().multiply(wwinvoj).multiply(hhoj);
        res = invertOjMatrix(res);
        return ojMatrixStoreToLa4jMatrix(res);
    }

    public static MatrixStore<Double> la4jMatrixToMatrixStore(Matrix matrix) {
        final PhysicalStore.Factory<Double, PrimitiveDenseStore> matrixFactory = PrimitiveDenseStore.FACTORY;
        PrimitiveDenseStore WInvOjStore = matrixFactory.makeZero(matrix.rows(), matrix.columns());
        for (int i = 0; i < WInvOjStore.getRowDim(); i++) {
            for (int j = 0; j < WInvOjStore.getColDim(); j++) {
                WInvOjStore.set(i, j, matrix.get(i, j));
            }
        }
        MatrixStore<Double> ret = WInvOjStore.builder().build();
        return ret;
    }

    public static Matrix ojMatrixStoreToLa4jMatrix(MatrixStore<Double> matrixStore) {
        Matrix res = Matrix.zero((int) matrixStore.countRows(), (int) matrixStore.countColumns());
        for (int i = 0; i < res.rows(); i++) {
            for (int j = 0; j < res.columns(); j++) {
                res.set(i, j, matrixStore.get(i, j));
            }
        }
        return res;
    }

    public static MatrixStore<Double> invertOjMatrix(MatrixStore matrixStore) {
        MatrixStore<Double> res = null;
        InverterTask<Double> tmpInverter = InverterTask.PRIMITIVE.make(matrixStore, false);
        final DecompositionStore<Double> tmpAlloc = tmpInverter.preallocate(matrixStore);
        try {
            res = tmpInverter.invert(matrixStore, tmpAlloc);
        } catch (TaskException e) {
            e.printStackTrace();
        }
        return res;
    }
}
# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'untitled.ui'
#
# Created by: PyQt5 UI code generator 5.13.2
#
# WARNING! All changes made in this file will be lost!

from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtWidgets import QMessageBox
from lib.currencies import get_latest_btc
import random
import json


class Ui_AutoInvest(object):
    def setupUi(self, AutoInvest):
        AutoInvest.setObjectName("AutoInvest")
        AutoInvest.resize(646, 675)
        self.centralwidget = QtWidgets.QWidget(AutoInvest)
        self.centralwidget.setObjectName("centralwidget")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(10, 10, 181, 41))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(24)
        self.label.setFont(font)
        self.label.setObjectName("label")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setGeometry(QtCore.QRect(10, 180, 131, 41))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(14)
        self.label_2.setFont(font)
        self.label_2.setToolTip("")
        self.label_2.setObjectName("label_2")
        self.email_recipient = QtWidgets.QTextEdit(self.centralwidget)
        self.email_recipient.setGeometry(QtCore.QRect(10, 220, 161, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(11)
        self.email_recipient.setFont(font)
        self.email_recipient.setToolTip("")
        self.email_recipient.setObjectName("email_recipient")
        self.label_3 = QtWidgets.QLabel(self.centralwidget)
        self.label_3.setGeometry(QtCore.QRect(10, 90, 131, 41))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(14)
        self.label_3.setFont(font)
        self.label_3.setToolTip("")
        self.label_3.setObjectName("label_3")
        self.email_sender = QtWidgets.QTextEdit(self.centralwidget)
        self.email_sender.setGeometry(QtCore.QRect(60, 130, 151, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(11)
        self.email_sender.setFont(font)
        self.email_sender.setToolTip("")
        self.email_sender.setStatusTip("")
        self.email_sender.setWhatsThis("")
        self.email_sender.setAccessibleName("")
        self.email_sender.setObjectName("email_sender")
        self.label_4 = QtWidgets.QLabel(self.centralwidget)
        self.label_4.setGeometry(QtCore.QRect(10, 130, 61, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(12)
        self.label_4.setFont(font)
        self.label_4.setToolTip("")
        self.label_4.setObjectName("label_4")
        self.label_5 = QtWidgets.QLabel(self.centralwidget)
        self.label_5.setGeometry(QtCore.QRect(220, 130, 61, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(12)
        self.label_5.setFont(font)
        self.label_5.setToolTip("")
        self.label_5.setObjectName("label_5")
        self.email_passwd = QtWidgets.QTextEdit(self.centralwidget)
        self.email_passwd.setGeometry(QtCore.QRect(290, 130, 151, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(11)
        self.email_passwd.setFont(font)
        self.email_passwd.setToolTip("")
        self.email_passwd.setStatusTip("")
        self.email_passwd.setWhatsThis("")
        self.email_passwd.setAccessibleName("")
        self.email_passwd.setObjectName("email_passwd")
        self.label_6 = QtWidgets.QLabel(self.centralwidget)
        self.label_6.setGeometry(QtCore.QRect(10, 310, 131, 41))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(14)
        self.label_6.setFont(font)
        self.label_6.setToolTip("")
        self.label_6.setObjectName("label_6")
        self.your_btc = QtWidgets.QTextEdit(self.centralwidget)
        self.your_btc.setGeometry(QtCore.QRect(10, 350, 161, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(11)
        self.your_btc.setFont(font)
        self.your_btc.setToolTip("")
        self.your_btc.setObjectName("your_btc")
        self.label_7 = QtWidgets.QLabel(self.centralwidget)
        self.label_7.setGeometry(QtCore.QRect(10, 410, 281, 41))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(14)
        self.label_7.setFont(font)
        self.label_7.setToolTip("")
        self.label_7.setObjectName("label_7")
        self.value = QtWidgets.QTextEdit(self.centralwidget)
        self.value.setGeometry(QtCore.QRect(10, 450, 161, 31))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(11)
        self.value.setFont(font)
        self.value.setToolTip("")
        self.value.setObjectName("value")
        self.currency = QtWidgets.QComboBox(self.centralwidget)
        self.currency.setGeometry(QtCore.QRect(180, 450, 71, 31))
        font = QtGui.QFont()
        font.setPointSize(10)
        self.currency.setFont(font)
        self.currency.setObjectName("currency")
        self.currency.addItem("")
        self.currency.addItem("")
        self.label_8 = QtWidgets.QLabel(self.centralwidget)
        self.label_8.setGeometry(QtCore.QRect(260, 440, 131, 41))
        font = QtGui.QFont()
        font.setFamily("Arial")
        font.setPointSize(14)
        self.label_8.setFont(font)
        self.label_8.setToolTip("")
        self.label_8.setObjectName("label_8")
        self.start_btn = QtWidgets.QPushButton(self.centralwidget)
        self.start_btn.setGeometry(QtCore.QRect(10, 580, 141, 41))
        font = QtGui.QFont()
        font.setPointSize(12)
        self.start_btn.setFont(font)
        self.start_btn.setObjectName("start_btn")
        AutoInvest.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(AutoInvest)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 646, 21))
        self.menubar.setObjectName("menubar")
        AutoInvest.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(AutoInvest)
        self.statusbar.setObjectName("statusbar")
        AutoInvest.setStatusBar(self.statusbar)

        self.retranslateUi(AutoInvest)
        QtCore.QMetaObject.connectSlotsByName(AutoInvest)

        self.start_btn.clicked.connect(lambda: self.start_clicked(
            self.email_sender.toPlainText(),
            self.email_passwd.toPlainText(),
            self.email_recipient.toPlainText(),
            self.isfloat(self.your_btc.toPlainText()),
            self.isfloat(self.value.toPlainText()),
            str(self.currency.currentText()),
            get_latest_btc(),
        ))

    def retranslateUi(self, AutoInvest):
        _translate = QtCore.QCoreApplication.translate
        AutoInvest.setWindowTitle(_translate("AutoInvest", "AutoInvest"))
        self.label.setText(_translate("AutoInvest", "AutoInvest"))
        self.label_2.setText(_translate("AutoInvest", "Email recipient"))
        self.email_recipient.setHtml(_translate("AutoInvest", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'Arial\'; font-size:11pt; font-weight:400; font-style:normal;\">\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; font-family:\'Arial\';\"><br /></p></body></html>"))
        self.label_3.setText(_translate("AutoInvest", "Email sender"))
        self.email_sender.setHtml(_translate("AutoInvest", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'Arial\'; font-size:11pt; font-weight:400; font-style:normal;\">\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; font-family:\'Arial\';\"><br /></p></body></html>"))
        self.label_4.setText(_translate("AutoInvest", "Email:"))
        self.label_5.setText(_translate("AutoInvest", "Passwd:"))
        self.email_passwd.setHtml(_translate("AutoInvest", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'Arial\'; font-size:11pt; font-weight:400; font-style:normal;\">\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; font-family:\'Arial\';\"><br /></p></body></html>"))
        self.label_6.setText(_translate("AutoInvest", "Your Bitcoins"))
        self.your_btc.setHtml(_translate("AutoInvest", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'Arial\'; font-size:11pt; font-weight:400; font-style:normal;\">\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; font-family:\'Arial\';\"><br /></p></body></html>"))
        self.label_7.setText(_translate("AutoInvest", "Notify me when BTC value is "))
        self.value.setHtml(_translate("AutoInvest", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'Arial\'; font-size:11pt; font-weight:400; font-style:normal;\">\n"
"<p style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px; font-family:\'Arial\';\"><br /></p></body></html>"))
        self.currency.setItemText(0, _translate("AutoInvest", "BTC"))
        self.currency.setItemText(1, _translate("AutoInvest", "EUR"))
        self.label_8.setText(_translate("AutoInvest", "greater"))
        self.start_btn.setText(_translate("AutoInvest", "Start"))

    def start_clicked(self, email_sender, email_passwd, email_recipient, your_btc, value, currency, past_btc_value):
        if email_sender == '' or email_passwd == '' or email_recipient == '' or your_btc == '' or value == '':
            self.show_popup('AutoInvest', 'Looks like you haven\'t entered all values.', QMessageBox.Warning)
        else:
            new_user = {
                'email_sender': email_sender,
                'email_passwd': email_passwd,
                'email_recipient': email_recipient,
                'your_btc': your_btc,
                'value': value,
                'currency': currency,
                'past_btc_value': past_btc_value,
            }
            users = json.load(open('./data/users.json'))
            user_id = str(random.randint(100000000000000000000000000000, 999999999999999999999999999999))
            with open('./data/users.json', 'w') as f:
                users[user_id] = new_user
                json.dump(users, f, indent=4, sort_keys=True)
            self.show_popup('Success!', 'We have now added you to our users.', QMessageBox.Information)

    def show_popup(self, title, text, icon):
        msg = QMessageBox()
        msg.setWindowTitle(title)
        msg.setText(text)
        msg.setIcon(icon)
        msg.setStandardButtons(QMessageBox.Ok)
        msg.exec_()

    def isfloat(self, value):
        try:
            float(value)
        except ValueError:
            self.show_popup('AutoInvest', 'Looks like you haven\'t entered a valid number. Quitting...', QMessageBox.Critical)
            exit()
        else:
            return float(value)


if __name__ == "__main__":
    import sys
    app = QtWidgets.QApplication(sys.argv)
    AutoInvest = QtWidgets.QMainWindow()
    ui = Ui_AutoInvest()
    ui.setupUi(AutoInvest)
    AutoInvest.show()
    sys.exit(app.exec_())
//+----------------------------------------------------------------------------
//
//  Copyright (C) 1996, Microsoft Corporation
//
//  File:       wsdfs.c
//
//  Classes:    None
//
//  Functions:  DfsDcName
//
//  History:    Feb 1, 1996     Milans  Created
//
//-----------------------------------------------------------------------------

#include <nt.h>
#include <ntrtl.h>
#include <nturtl.h>
#include <dfsfsctl.h>
#include <stdlib.h>
#include <windows.h>
#include <lm.h>
#include <dsgetdc.h>    // DsGetDcName
#include "wsdfs.h"
#include "dominfo.h"
#include "wsmain.h"
#include "wsutil.h"
#include <config.h>
#include <confname.h>

//
// Timeouts for domain change notifications
//
#define TIMEOUT_MINUTES(_x) (_x) * 1000 * 60

#define DOMAIN_NAME_CHANGE_TIMEOUT      1
#define DOMAIN_NAME_CHANGE_TIMEOUT_LONG 15

#define DFS_DC_NAME_DELAY TEXT("DfsDcNameDelay")

extern NET_API_STATUS
WsSetWorkStationDomainName(
    VOID);

DWORD
DfsGetDelayInterval(void);

VOID
DfsDcName(
    LPVOID pContext,
    BOOLEAN fReason
);

NTSTATUS
DfsGetDomainNameInfo(void);

extern HANDLE hMupEvent;
extern BOOLEAN MupEventSignaled;
extern BOOLEAN GotDomainNameInfo;
extern ULONG DfsDebug;

ULONG g_ulCount;
ULONG g_ulLastCount;
ULONG g_ulInitThreshold;
ULONG g_ulForce;
ULONG g_ulForceThreshold;

HANDLE PollDCNameEvent = NULL;
HANDLE TearDownDoneEvent;
HANDLE WsDomainNameChangeEvent = NULL;
HANDLE g_WsDomainNameChangeWorkItem;

//+----------------------------------------------------------------------------
//
//  Function:   WsInitializeDfs
//
//  Synopsis:   Initializes the Dfs thread that waits for calls from the
//              driver to map Domain names into DC lists
//
//  Arguments:  None
//
//  Returns:    WIN32 error from CreateThread
//
//-----------------------------------------------------------------------------

NET_API_STATUS
WsInitializeDfs()
{
    NTSTATUS Status = STATUS_SUCCESS;
    NET_API_STATUS ApiStatus;
    OBJECT_ATTRIBUTES obja;
    DWORD dwTimeout = INFINITE;
    HANDLE hEvent;

    g_ulInitThreshold = 4;
    g_ulForceThreshold = 60;
    g_ulForce = g_ulForceThreshold;

    // initialize workstation tear down done event
    InitializeObjectAttributes(&obja, NULL, OBJ_OPENIF, NULL, NULL);
    Status = NtCreateEvent(
                 &TearDownDoneEvent,
                 SYNCHRONIZE | EVENT_QUERY_STATE | EVENT_MODIFY_STATE,
                 &obja,
                 SynchronizationEvent,
                 FALSE);
    if (Status != STATUS_SUCCESS) {
        return Status;
    }

    if (hMupEvent == NULL) {
        hMupEvent = CreateMupEvent();
        if (WsInAWorkgroup() == TRUE && MupEventSignaled == FALSE) {
            SetEvent(hMupEvent);
            MupEventSignaled = TRUE;
        }
    }

    //
    // Watch for Domain Name changes, and automatically pick them up
    //
    ApiStatus = NetRegisterDomainNameChangeNotification(&WsDomainNameChangeEvent);
    if (ApiStatus != NO_ERROR) {
        WsDomainNameChangeEvent = NULL;
        InitializeObjectAttributes(&obja, NULL, OBJ_OPENIF, NULL, NULL);
        Status = NtCreateEvent(
                     &PollDCNameEvent,
                     SYNCHRONIZE | EVENT_QUERY_STATE | EVENT_MODIFY_STATE,
                     &obja,
                     SynchronizationEvent,
                     FALSE);
        if (Status != STATUS_SUCCESS) {
            NtClose(TearDownDoneEvent);
            TearDownDoneEvent = NULL;
            return Status;
        }
    }

    //
    // If we aren't in a workgroup or are in one but
    // don't need to wait for domain name change.
    //
    if (WsInAWorkgroup() != TRUE || WsDomainNameChangeEvent != NULL) {
        if (WsInAWorkgroup() != TRUE) {
            //
            // If we are not in a workgroup, set the timeout value to poll the DC name.
            //
            dwTimeout = 1;
        }
        hEvent = WsDomainNameChangeEvent ? WsDomainNameChangeEvent : PollDCNameEvent;
        Status = RtlRegisterWait(
                     &g_WsDomainNameChangeWorkItem,
                     hEvent,
                     DfsDcName,               // callback fcn
                     hEvent,                  // parameter
                     dwTimeout,               // timeout
                     WT_EXECUTEONLYONCE |     // flags
                     WT_EXECUTEDEFAULT |
                     WT_EXECUTELONGFUNCTION);
    }

    if (!NT_SUCCESS(Status)) {
        NtClose(TearDownDoneEvent);
        TearDownDoneEvent = NULL;
        if (PollDCNameEvent != NULL) {
            NtClose(PollDCNameEvent);
            PollDCNameEvent = NULL;
        }
        return( RtlNtStatusToDosError(Status) );
    } else {
        return( NERR_Success );
    }
}

//+----------------------------------------------------------------------------
//
//  Function:   WsShutdownDfs
//
//  Synopsis:   Stops the thread created by WsInitializeDfs
//
//  Arguments:  None
//
//  Returns:    Nothing
//
//-----------------------------------------------------------------------------

VOID
WsShutdownDfs()
{
    DWORD dwReturn, cbRead;
    NTSTATUS Status;
    HANDLE hDfs;

    Status = DfsOpen(&hDfs, NULL);
    if (NT_SUCCESS(Status)) {
        Status = DfsFsctl(hDfs, FSCTL_DFS_STOP_DFS, NULL, 0L, NULL, 0L);
        NtClose(hDfs);
    }

    if (WsDomainNameChangeEvent) {
        //
        // Stop waiting for domain name changes
        //
        SetEvent(WsDomainNameChangeEvent);
        WaitForSingleObject(TearDownDoneEvent, INFINITE);
        NetUnregisterDomainNameChangeNotification(WsDomainNameChangeEvent);
        WsDomainNameChangeEvent = NULL;
    } else {
        SetEvent(PollDCNameEvent);
        WaitForSingleObject(TearDownDoneEvent, INFINITE);
        NtClose(PollDCNameEvent);
        PollDCNameEvent = NULL;
    }
    NtClose(TearDownDoneEvent);
    TearDownDoneEvent = NULL;
}

//+----------------------------------------------------------------------------
//
//  Function:   DfsDcName
//
//  Synopsis:   Gets a DC name and sends it to the mup(dfs) driver
//
//              This routine is intended to be called as the entry proc for a
//              thread.
//
//  Arguments:  pContext -- Context data (handle to domain name change event)
//              fReason  -- TRUE if the wait timed out
//                          FALSE if the event was signalled
//
//  Returns:
//
//-----------------------------------------------------------------------------

VOID
DfsDcName(
    LPVOID pContext,
    BOOLEAN fReason
)
{
    NTSTATUS Status;
    HANDLE hDfs;
    DWORD dwTimeout = INFINITE;
    ULONG Flags = 0;
    BOOLEAN needRefresh = FALSE;
    BOOLEAN DcNameFailed;

    Status = RtlDeregisterWait(g_WsDomainNameChangeWorkItem);
    if (!NT_SUCCESS(Status)) {
        NetpKdPrint(("WKSTA DfsDcName: RtlDeregisterWait FAILED %#x\n", Status));
    }

    if (WsGlobalData.Status.dwCurrentState == SERVICE_STOP_PENDING ||
        WsGlobalData.Status.dwCurrentState == SERVICE_STOPPED) {
        //
        // The service is shutting down -- stop waiting for a domain name change
        //
        SetEvent(TearDownDoneEvent);
        return;
    }

    if (fReason) {
        //
        // TRUE == timeout
        //
        if ((g_ulCount <= g_ulInitThreshold) || (g_ulLastCount >= DfsGetDelayInterval())) {
            g_ulLastCount = 0;
            needRefresh = TRUE;
        }
        if (needRefresh) {
            Status = DfsOpen(&hDfs, NULL);
            if (NT_SUCCESS(Status)) {
                Status = DfsFsctl(hDfs, FSCTL_DFS_PKT_SET_DC_NAME, L"", sizeof(WCHAR), NULL, 0L);
                NtClose(hDfs);
            }
            if (NT_SUCCESS(Status) && GotDomainNameInfo == FALSE) {
                DfsGetDomainNameInfo();
            }
            if (g_ulCount > g_ulInitThreshold) {
                Flags |= DS_BACKGROUND_ONLY;
            }
            Status = DfsGetDCName(Flags, &DcNameFailed);
            if (!NT_SUCCESS(Status) && DcNameFailed == FALSE && g_ulForce >= g_ulForceThreshold) {
                g_ulForce = 0;
                Flags |= DS_FORCE_REDISCOVERY;
                Status = DfsGetDCName(Flags, &DcNameFailed);
            }
        }
        if (MupEventSignaled == FALSE) {
#if DBG
            if (DfsDebug)
                DbgPrint("Signaling mup event\n");
#endif
            SetEvent(hMupEvent);
            MupEventSignaled = TRUE;
        }
        if (NT_SUCCESS(Status) || (g_ulCount > g_ulInitThreshold)) {
            dwTimeout = DOMAIN_NAME_CHANGE_TIMEOUT_LONG;
        } else {
            dwTimeout = DOMAIN_NAME_CHANGE_TIMEOUT;
        }
        g_ulCount += dwTimeout;
        g_ulForce += dwTimeout;
        g_ulLastCount += dwTimeout;
        dwTimeout = TIMEOUT_MINUTES(dwTimeout);
    } else {
        // set the new
WorkStation domain name if the event is triggered by domain // name change event. NetpKdPrint(("WKSTA DfsDcName set WorkStation Domain Name\n")); WsSetWorkStationDomainName(); // timeout needs to be adjusted accordingly if change occurs between workgroup // and domain so that DC name is also updated on the DFS. if (WsInAWorkgroup() != TRUE) { dwTimeout = TIMEOUT_MINUTES(DOMAIN_NAME_CHANGE_TIMEOUT); } else { dwTimeout = INFINITE; // DFS needs to take care of the transition from domain to workgroup. } } // // Reregister the wait on the domain name change event // Status = RtlRegisterWait( &g_WsDomainNameChangeWorkItem, (HANDLE)pContext, // waitable handle DfsDcName, // callback fcn pContext, // parameter dwTimeout, // timeout WT_EXECUTEONLYONCE | // flags WT_EXECUTEDEFAULT | WT_EXECUTELONGFUNCTION); return; } #define DFS_DC_NAME_DELAY_POLICY_KEY TEXT("Software\\Policies\\Microsoft\\System\\DFSClient") DWORD DfsGetDelayInterval(void) { NET_API_STATUS ApiStatus; LPNET_CONFIG_HANDLE SectionHandle = NULL; DWORD Value=0; HKEY hKey; LONG lResult=0; DWORD dwValue=0, dwSize = sizeof(dwValue); DWORD dwType = 0; // First, check for a policy lResult = RegOpenKeyEx (HKEY_LOCAL_MACHINE, DFS_DC_NAME_DELAY_POLICY_KEY, 0, KEY_READ, &hKey); if (lResult == ERROR_SUCCESS) { lResult = RegQueryValueEx (hKey, DFS_DC_NAME_DELAY, 0, &dwType, (LPBYTE) &dwValue, &dwSize); RegCloseKey (hKey); } // Exit now if a policy value was found if (lResult == ERROR_SUCCESS && dwType == REG_DWORD) { return dwValue; } // Second, check for a preference // // Open section of config data. // ApiStatus = NetpOpenConfigData( &SectionHandle, NULL, // Local server. SECT_NT_WKSTA, // Section name. FALSE // Don't want read-only access. 
); if (ApiStatus != NERR_Success) { return DOMAIN_NAME_CHANGE_TIMEOUT_LONG; } ApiStatus = NetpGetConfigDword( SectionHandle, DFS_DC_NAME_DELAY, 0, &Value ); NetpCloseConfigData( SectionHandle ); if (ApiStatus != NERR_Success) { return DOMAIN_NAME_CHANGE_TIMEOUT_LONG; } if (Value < DOMAIN_NAME_CHANGE_TIMEOUT_LONG) { Value = DOMAIN_NAME_CHANGE_TIMEOUT_LONG; } return Value; }
"""Python adventure game prototype.""" class Room(object): """Room class object.""" def __init__(self): """Initialization of Room object.""" self.name = 'Room' self.desc = 'Description' self.nsew = [None, None, None, None] self.updown = [None] self.visited = False class Map(object): """Map class object.""" def __init__(self): """Initialization of Map object.""" self.name = 'Map' self.layout = {} def add_room(self): """Add a new room to the map.""" new_room = Room() def add_route(self, room1, room2, direction): """Create a path from one room to another via the direction passed."""
Cambridge officials received hundreds of submissions from residents hoping to make their mark as literary legends through the city’s first-ever “Sidewalk Poetry” contest this spring. In the end, only five scribes emerged victorious.

In March, the city put out a call for poets to participate in the project. Winners were promised a permanent display space for their musings — the poems would be imprinted in the freshly poured concrete as Department of Public Works crews replaced sidewalk slabs cracked or damaged during the winter.

The response was great, said Molly Akin, the Cambridge Arts Council’s marketing director. More than 300 submissions flooded in from writers ranging in age from 4 to 95, according to organizers.

A special committee that included workers from the DPW, representatives from the local libraries, members of the Arts Council, and Cambridge’s former Poet Populists helped select the finalists. Each of the winning poems will have one display, in locations that will be determined as DPW crews repair sidewalks this summer.

Special tools will be used to imprint the poems. Akin said workers this week did a trial run on Calendar Street, with a poem that wasn’t among the winners. “It was a test to make sure they had the right tools and technology to stamp the winners this summer,” she said. The result, she said, came out beautifully.

Finalists will get more than just a spot on the sidewalk. In June, the selected poets will read their work during the annual River Festival, beneath the Poetry Tent. Six semifinalists whose work won’t be etched in cement were also selected to share their poems at the event, Akin said.

The Sidewalk Poetry contest was inspired by a similar initiative in St. Paul that began in 2008.
The city now has more than 450 poems imprinted on sidewalks. Cambridge’s efforts are more modest, but given the success of the contest this spring, there are tentative plans to seek submissions again next year. Below are the names of the winners, and their poems: Rose Breslin Blake Advertisement Children, look up Cherish those clouds Ride grey ponies over their hills Feed the shiny fish Boo the big bear Chase the gloomy giant Giggle with the geese Sing with the lambs Cherish those clouds; they cherish you Rest on their pillows. Benjamin Grimm I could not forget you if I tried. I have tried. Ty Muto Your blue-green glances My heart skips double dutch beats Caught in your rhythm Carolyn Russell Stonewell Sun takes a bite of mango as it sets. Its last rays run down my cheek. Elissa Warner A Mother’s Wish Little boys, little treasures Shine like lights from above My son, my only one My wish for you is that you wake One day when you are old And feel raindrops on your cheek Tears of joy from my heart For you to keep Steve Annear can be reached at [email protected] . Follow him on Twitter @steveannear
// Uint64Ptr casts a field pointer to *uint64.
func (f *Field) Uint64Ptr(structPtr unsafe.Pointer) *uint64 {
	result := AsUint64AddrPtr(f.Pointer(structPtr))
	if result == nil {
		return nil
	}
	return *result
}
/** * Created by Niko on 3/17/16. */ public class Message { String mBody; long mDate; boolean mIncoming; public Message(){ switch (new Random().nextInt(9)){ case 0: mBody = "This game is so fun"; break; case 1: mBody = "OMG YEA IT ISSSS!!!!! Lets kill some people!!!!"; break; case 2: mBody = "Sure"; break; case 3: mBody = "Oh, lets kill Will"; break; case 4: mBody = "Yessss lets kill will"; break; case 5: mBody = "Oh yea definetly here is a long message of gibberish. To make things less boring. We should all vote on one person to kill. Any thoughts on who?"; break; case 6: mBody = "Guys, I am NOT a member of the mafia."; break; case 7: mBody = "I dream of my next victim, so I have an alibi for killing that man"; break; default: mBody = "Okay, lets just kill Sean, just because nobody likes him"; break; } mDate = System.currentTimeMillis(); mIncoming = new Random().nextInt(2) == 1; } public String getBody() { return mBody; } public long getTimestamp() { return mDate; } public boolean isIncoming() { return mIncoming; } }
from bs4 import BeautifulSoup, SoupStrainer


def link_tags(doc, parse_full=False):
    """Return all <a> tags in doc.

    When parse_full is False, a SoupStrainer restricts parsing to <a> tags,
    which is faster on large documents.
    """
    if parse_full:
        soup = BeautifulSoup(doc, "html.parser")
        links = soup.find_all("a")
    else:
        only_a_tags = SoupStrainer("a")
        # find_all() is needed in this branch too; without it the function
        # returns a BeautifulSoup object instead of a list of tags.
        links = BeautifulSoup(doc, "html.parser", parse_only=only_a_tags).find_all("a")
    return links
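For environments where BeautifulSoup is not installed, the same link extraction can be sketched with only the standard library's `html.parser`. The class name below is illustrative, not part of the original code:

```python
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag
        if tag == "a":
            self.hrefs.extend(v for k, v in attrs if k == "href")


collector = LinkCollector()
collector.feed('<p><a href="/one">1</a> text <a href="/two">2</a></p>')
print(collector.hrefs)  # -> ['/one', '/two']
```

Unlike the BeautifulSoup version, this returns only href values rather than full tag objects, which is often all that is needed.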
/* * add resource records to a list, duplicate them if they are not database * RR's and hence from the cache since cache RR's are shared. */ RR* rrcat(RR **start, RR *rp, int type) { RR *next; RR *np; RR **last; SOA *soa; last = start; while(*last) last = &(*last)->next; for(;rp && tsame(type, rp->type); rp = next){ next = rp->next; if(rp->db) np = rp; else { np = rralloc(rp->type); if(type == Tsoa){ soa = np->soa; *soa = *(rp->soa); *np = *rp; np->soa = soa; } else *np = *rp; } np->next = 0; *last = np; last = &np->next; } return *start; }
Does it seem like everyone you know is getting a tablet computer? There's a reason for that. They have been. Tablets are getting more popular than ever. According to the Pew Research Center's Internet and American Life Project, over a third of American adults now own a tablet. By Pew's latest survey numbers, "A third (34%) of American adults ages 18 and older own a tablet computer like an iPad, Samsung Galaxy Tab, Google Nexus, or Kindle Fire—almost twice as many as the 18% who owned a tablet a year ago."

While tablet computers aren't new, their popularity only dates back to April 3, 2010, when Apple introduced the iPad. By May 2010, older tablets and the iPad reached 3% of the adult market. Not three years later, the number of tablet owners has increased eleven-fold.

Who are these people? Pew's numbers reveal few surprises. Demographic groups most likely to own tablets include:

Those living in households earning at least $75,000 per year (56%), compared with lower income brackets

Adults ages 35-44 (49%), compared with younger and older adults

College graduates (49%), compared with adults with lower levels of education

The Pew survey did find one oddity: "Unlike smartphones, which are most popular with younger adults ages 18-34, we see the highest rates of tablet ownership among adults in their late thirties and early forties. In fact, almost half (49%) of adults ages 35-44 now own a tablet computer, significantly more than any other age group. Adults ages 65 and older, on the other hand, are less likely to own a tablet (18%) than younger age groups."

In other words, when it comes to tablets, Generation X--not the Baby Boomers nor the Millennials, aka Generation Y--are the ones driving tablet sales.

Users are only going to continue to buy tablets in ever greater numbers. Indeed, market research firm NPD claims that by 2017, tablets will outsell notebooks by six to one.
Today, IDC has found that global tablet shipments grew by 142.4 percent year-over-year during the first quarter of 2013, and that "tablets have shown no sign of slowing down." A big part of what's driving this explosive growth, according to IDC, is inexpensive Android devices with screen sizes of about 7 inches. Looking ahead, IDC predicts that smaller tablets, such as the Nexus 7 and the Apple iPad mini, will have 63% of the market by 2017.

At the same time, PC sales continue to head into the toilet. Indeed, by IDC's count of the last sales quarter, global PC shipments have plunged into their worst drop in a generation. It's no wonder that Microsoft appears to be aiming Windows 8.1 at tablets and other mobile devices rather than making disgruntled Windows 8 desktop users happy. The future belongs to tablets.

Which tablets are people buying? Pew doesn't look into this, but many other research firms do. ABI Research states that "the tide is definitely turning toward Android-based tablets, though Apple will not slouch as it feels the competition approaching." IDC found that Android-based tablets are already edging ahead of iPads, 48.8 percent to 46 percent. ABI Research senior practice director Jeff Orr wouldn't go that far, but he would say, "It's inevitable that Android tablets will overtake iOS-powered slates, though we see no single vendor challenging Apple's dominance anytime soon."

As for would-be challengers to Android and Apple, there really aren't any. Microsoft with Windows 8 and RT is the closest thing to a viable alternative, but even combined their sales lag far behind tablet market leaders Apple, Samsung, ASUS, and Amazon. We're well on our way to a world where tablets, and not PCs, will be the most popular computing device, and the real battle for market supremacy will be between Apple and Google's Android allies. No one else at this point (Firefox, Microsoft, Ubuntu) appears to be in the running for top tablet honors.
//
// Created by <NAME> on 8/29/15.
//

#ifndef EMULATORCORE_MEMORYCONTROLLER_H
#define EMULATORCORE_MEMORYCONTROLLER_H

#include "../../Common.h"

class MemoryMap;

class MemoryController {
protected:
    MemoryMap *_mmap;

public:
    MemoryController(MemoryMap *map) { _mmap = map; }

    virtual ~MemoryController() { }

    virtual uint8_t read(uint16_t addr) = 0;
    virtual void write(uint16_t addr, uint8_t value) = 0;
    virtual void reset() = 0;
};

#endif //EMULATORCORE_MEMORYCONTROLLER_H
use arrayvec::ArrayVec; const BUF_SIZE: usize = 40; // The maximum X3.28 message length is 18 bytes #[derive(Debug)] pub struct Buffer { data: ArrayVec<u8, BUF_SIZE>, read_pos: usize, } impl Buffer { pub fn new() -> Self { Self { data: ArrayVec::new(), read_pos: 0, } } pub fn len(&self) -> usize { self.data.len() - self.read_pos } pub fn consume(&mut self, len: usize) { assert!(len <= self.len()); self.read_pos += len; } pub fn write(&mut self, mut bytes: &[u8]) { if self.read_pos == self.data.len() { self.clear(); } if bytes.len() > self.data.capacity() { bytes = &bytes[(bytes.len() - self.data.capacity())..]; self.clear(); } else { let cap = self.data.remaining_capacity(); if cap < bytes.len() { let drain_len = bytes.len() - cap; self.data.drain(..drain_len); self.read_pos = self.read_pos.saturating_sub(drain_len); } } let write_pos = self.data.len(); self.data.try_extend_from_slice(bytes).unwrap(); for byte in self.data[write_pos..].iter_mut() { if *byte > 0x7f { *byte = 0; // map all non-ASCII bytes to NUL } } } pub fn clear(&mut self) { self.data.clear(); self.read_pos = 0; } } impl AsRef<[u8]> for Buffer { fn as_ref(&self) -> &[u8] { &self.data[self.read_pos..] } } #[cfg(test)] mod tests { use super::*; fn get_buffer() -> Buffer { let mut buf = Buffer::new(); buf.write(b"abcdabcdabcd"); buf } #[test] fn test_slice() { let buf = get_buffer(); assert_eq!(buf.as_ref().len(), buf.len()); } #[test] fn buffer_spill() { let mut buf = Buffer::new(); for _ in 0..(BUF_SIZE + 1) { buf.write(b"a"); } assert_eq!(buf.read_pos, 0); buf.consume(5); assert_eq!(buf.read_pos, 5); buf.write(b"1234"); assert_eq!(buf.read_pos, 1); buf.write(b"5"); assert_eq!(buf.read_pos, 0); buf.write(b"67"); assert_eq!(buf.read_pos, 0); } #[test] fn too_large_write() { let mut buf = Buffer::new(); let data: String = std::iter::once("abc").cycle().take(BUF_SIZE).collect(); buf.write(data.as_bytes()); assert_eq!(buf.data, data.as_bytes()[(data.len() - BUF_SIZE)..]) } }
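The `write` path above implements a drop-oldest policy: when incoming bytes do not fit, the oldest buffered bytes are discarded, and oversized writes keep only their tail. A hedged Python sketch of that policy (toy capacity; the class name is mine, and like the Rust version it maps non-ASCII bytes to NUL):

```python
class DropOldestBuffer:
    """Bounded byte buffer that discards the oldest data when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = bytearray()

    def write(self, chunk):
        # Map all non-ASCII bytes to NUL, as the Rust Buffer does.
        chunk = bytes(b if b <= 0x7F else 0 for b in chunk)
        if len(chunk) >= self.capacity:
            # Oversized write: keep only the tail that fits.
            self.data = bytearray(chunk[-self.capacity:])
            return
        overflow = len(self.data) + len(chunk) - self.capacity
        if overflow > 0:
            del self.data[:overflow]  # drop the oldest bytes
        self.data.extend(chunk)


buf = DropOldestBuffer(4)
buf.write(b"abc")
buf.write(b"de")
print(bytes(buf.data))  # -> b'bcde'
```

The sketch omits the `read_pos`/`consume` bookkeeping the Rust type uses to avoid shifting memory on every read; the eviction semantics are the part being illustrated.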
Over a decade ago, Margin Walker’s Graham Williams, then a booker for the downtown Emo’s location, conceptualized Free Week as a way to bring warm bodies into the club during one of the slowest weeks of the year. The philosophy is simple: throw open the doors, drop the cover charge and invite folks to come experience Austin music. The event has grown into a vibrant mini-festival, a free local music binge with events all over town geared to all sorts of music enthusiasts. After several years of expansion that made the better part of January a musical free-for-all, most Austin clubs have contracted back to one week in 2018. » FREE WEEK EVENTS IN OUR CALENDAR Every year there are contentious conversations about whether the event devalues Austin music in the eyes of stingy local music fans who already tend to balk about reasonable cover charges. It’s debatable, but we’d be lying if we said we didn’t love the way the event generates serious excitement about local music. A few things to remember: Some Free Week bands are getting paid. Some play for free as a favor to venues that support them year round. All of them would like you to spend some of the money you save at the door at the merch table, or drop it into the tip jar. Also, part of the reason artists play Free Week is exposure, so expose the beejeebers out of any bands you see. Post photos and videos to your socials and tell all your friends. Go ahead and tag the clubs in those posts too. Ambassador for the excitement of local music is a great look, and let’s all start 2018 by wearing it well. SIX GRAND SLAM FREE WEEK BILLS Monday, Jan 1: Peelander-Z, Dead Music Capital Band, Drakulas at Empire. From the Japanese action comic antics of the Peelander punks to DIY marching zombies, this unpretentious hair-of-the-dog throwdown indulges in the best of new school Austin weird. Thursday, Jan. 4: Whiskey Shivers, Sailor Poon, Booher at Empire. Rollicking rebel bluegrass? Check. 
Feminist art punk with songs about “Boobies”? Check. Emotionally literate rock ‘n’ roll? Yep. What else do you need, people? Jackie Venson, Mobley, Mélat at Stubb’s. Three of Austin’s brightest rising stars on the same bill. Venson is Austin’s next great electric blues guitarist who recently did a high profile road stint with Gary Clark Jr. Mobley has a catalog loaded with catchy pop songs and an electrifying stage show and Mélat sings dreamy R&B for the lover in all of us. Riverboat Gamblers, American Sharks, Eagle Claw at Barracuda. Oh, you like it rough? The Gamblers’ rowdy bash is a perennial Free Week fave. Friday, Jan 5: Los Coast, Emily Wolfe, Otis the Destroyer at Mohawk. After a year spent building a reputation as one of the hottest live acts in Austin, rock ‘n’ soul outfit, Los Coast promises “a slew of album releases” in 2018. Emily Wolfe spent the year writing and plotting her debut full-length debut and in November, she teased us with “Holy Roller,” a blistering single with stormy melodies that swirl around muscular guitar riffs. Otis the Destroyer also performs. This is a solid opportunity to catch a few winners before they blow up. Saturday, Jan. 6: Applied Pressure Orchestra at Empire. A one-off performance from the live band version of the excellent DJ/Electronic music collective that includes Hobo D, Kid Slyce and Boombaptist. The evening will also include a performance from Vapor Caves, the new project from Keeper’s Yadira Brown and Boombaptist, and a rare return to the turntables by DJ Tats, best known these days as a local ramen kingpin. FIVE RISING AUSTIN ACTS TO CATCH Annabelle Chairlegs. There is no Annabelle (as far as we know) but singer and guitarist Lindsey Mackin is one of Austin’s most captivating leads. The band spent a good portion of 2017 on the road, enchanting audiences around the country with their trippy garage rock and we suspect 2018 will be a big year for them. Playing: Jan. 2 at Mohawk; Jan. 6 at Hotel Vegas. 
Blastfamous USA. The raucous rap-rock project fronted by Zeale, one of Austin’s fiercest rhymeslingers, unleashes a seething cauldron righteous rage. Playing: Jan 2 at Cheer Up Charlies; Jan. 5 at Empire. Go Fever. Their self-titled album built around Aussie ex-pat Acey Monaro’s solid songwriting was one of 2017 strongest debuts. Playing: Jan 2 at Swan Dive; Jan. 6 at Valhalla. Alesia Lani. Her warm and wonderful 2017 release “Resilient,” a rich platter of R&B, provided a much-needed salve to our city’s soul this year. Playing: Jan. 4 at Cheer Up Charlie’s Darkbird. Go on with your “Bad Self” and indulge in some furious rock ‘n’ roll. Jan. 3 at Hotel Vegas; Jan 6 at Mohawk; Jan. 7 at Empire. MORE FREE WEEK HIGHLIGHTS Jan. 1 Mayeux and Brossard, Jane Ellen Bryant at Stubb’s Marmalakes, Get a Life at Barracuda Migrant Kids, the Canvas People at Mohawk Duncan Fellowes, Delmar Dennis at Hotel Vegas Mode Dodeca at Cheer Up Charlie’s The Cover Letter at the Swan Dive Jan. 2 Whiite Walls, Slomo Drags at Barracuda Good Field, Blastfamous U.S. at Cheer Up Charlie’s White Dog, Think No Think at the Volstead The Wild Now, Tinnarose at Stubb’s Jeff Plankenhorn, Peterson Brothers at Empire Vampire, The Millbrook Estates at Mohawk Pretty (Expletive) at Beerland Sailor Poon, Big Bill at Hotel Vegas Go Fever at the Swan Dive Jan. 3 Ringo Deathstarr, The Halfways at Empire The Well, the Ghost Wolves at Barracuda Tinnarose, Otis the Destroyer, Darkbird at Hotel Vegas Smiile, Thor and Friends at Mohawk The Zoltars at Spiderhouse Jimmy Eat Wednesday at Barbarella Stretch Panic at Beerland Jan. 4 Riverboat Gamblers, American Sharks at Barracuda Wood & Wire, the Deer at Mohawk Otis the Destroyer, Megafauna at Empire Stoner Jam with Amplified Heat, more at Swan Dive Leopold & His Fiction at the Belmont Trouble in the Streets, Alesia Lani at Cheer Up Charlies Borzoi, USA/Mexico at Beerland Continental Drift, Deny Our Salvation at Elysium The Lagoons, Groovethink at the Blackheart Jan. 
5 Holy Wave, the Diamond Center at Cheer Up Charlie’s Ringo Deathstarr, the Reputations at Hotel Vegas Netherfriends, Corduroi at Empire Cloudchord, Sphynx at Stubb’s Trouble in the Streets, Blastfamous US at Empire Marmalakes, Growl at Mohawk Tinnarose, Abram Shook at Barracuda Honey & Salt, Seafire 3 at the Sidewinder Roxy Roca, the Crack Pipes at Hotel Vegas Blxpltn, Major Grizz, Como Las Movies at Sahara Lounge Timberos del Norte, Zoumountchi at Flamingo 99 Crimes EP release at Dirty Dog The Watters, Memphis Strange at the Blackheart Jan. 6 Moving Panoramas, Lowin, Go Fever at Valhalla Roadkill Ghost Choir, Calliope Musicals, Darkbird at Mohawk Quiet Company, Dayshifters at Barracuda A. Sinclair, Megafauna at Mohawk Knifight, Light Wheel at Stubb’s Annabelle Chairlegs, Deep Time at Hotel Vegas Leche, High at Beerland Andy, Starfruit at Cheer Up Charlie’s Knife in the Water, Croy and the Boys at Hotel Vegas Melat, Shy Beast at Empire Lincoln Durham, Altamesa at the Belmont Primo, Reagan Jones at Elysium Crypt Trip, Transit Method at Sidewinder The Big News, Los Kurados at Flamingo The Canvas People & the Matters at the Blackheart Jan. 7 The Zoltars, the Borzoi at Hotel Vegas Tomar & the FCs, Honey Made at Stubb’s Zeale & the Nght Hcklrs at Empire Smiile, Batty Jr. at Cheer Up Charlies Wonderbitch, Darkbird at Empire
// AlterTableDropUnique drops a unique constraint; the constraint name must be provided.
func (s Sql) AlterTableDropUnique(table, constraintName string) {
	q := ""
	switch s.Dialect {
	case "mssql", "oracle":
		q = fmt.Sprintf("ALTER TABLE %s DROP CONSTRAINT %s", table, constraintName)
	case "mysql":
		q = fmt.Sprintf("ALTER TABLE %s DROP INDEX %s", table, constraintName)
	default:
		// unsupported dialect: leave q empty and do nothing
	}
	if q == "" {
		return
	}
	s.Exec(q)
}
<reponame>Eluinhost/hosts.uhc import * as React from 'react'; import { Button, NonIdealState, Spinner } from '@blueprintjs/core'; import { SelectField, SelectFieldProps } from '../../components/fields/SelectField'; import { connect } from 'react-redux'; import { ListVersionsState } from '../reducer'; import { getListVersionsState } from '../selectors'; import { Dispatch } from 'redux'; import { FETCH_VERSIONS } from '../actions'; import { Version } from '../Version'; export type MainVersionFieldProps = Omit<SelectFieldProps, 'options'>; type StateProps = ListVersionsState; type DispatchProps = { updateVersionList: () => void; }; class MainVersionFieldComponent extends React.PureComponent<MainVersionFieldProps & StateProps & DispatchProps> { componentDidMount(): void { if (!this.props.isFetching && !this.props.error && this.props.data.length === 0) { this.props.updateVersionList(); } } render() { if (this.props.isFetching) { return <Spinner />; } if (this.props.error) { return ( <NonIdealState icon="warning-sign" title="Failed to load versions list" action={<Button onClick={this.props.updateVersionList}>Try Again</Button>} /> ); } const options = this.props.data.map(item => ({ display: item.displayName, value: item.displayName, })); return <SelectField {...this.props} options={options} />; } } const mapDispatchToProps = (dispatch: Dispatch): DispatchProps => ({ updateVersionList: () => dispatch(FETCH_VERSIONS.TRIGGER()), }); export const MainVersionField: React.ComponentType<MainVersionFieldProps> = connect( getListVersionsState, mapDispatchToProps, )(MainVersionFieldComponent);
<filename>engine/src/core/logger.h #pragma once #include "defines.h" // switches to enable or disable specific logging level #define MC_LOG_WARN_ENABLED 1 #define MC_LOG_INFO_ENABLED 1 #define MC_LOG_DEBUG_ENABLED 1 #define MC_LOG_TRACE_ENABLED 1 #define MC_LOG_ERROR_ENABLED 1 #define MC_LOG_FATAL_ENABLED 1 #if MIRAI_RELEASE == 1 // disable debug and trace logging in release mode #define MC_LOG_DEBUG_ENABLED 0 #define MC_LOG_TRACE_ENABLED 0 #endif typedef enum MC_LOG_LEVEL { MC_LOG_LEVEL_TRACE = 0, MC_LOG_LEVEL_DEBUG = 1, MC_LOG_LEVEL_INFO = 2, MC_LOG_LEVEL_WARN = 3, MC_LOG_LEVEL_ERROR = 4, MC_LOG_LEVEL_FATAL = 5 } MC_LOG_LEVEL; // log formated message based on it's log level, most propably you will not need to call this // function directly and will use bellow macros. // level: MC_LOG_LEVEL enum to specify log level // message: string to log // ...: args to pass for formatting the message MIRAI_API void mc_log(MC_LOG_LEVEL level, const char *message, ...); #if MC_LOG_TRACE_ENABLED == 1 // __VA_ARGS__: accept variable number of arguments in macro // we add '##' before __VA_ARGS__ to remove extra ',' before it if variable arguments are // omitted or empty. #define MC_TRACE(message, ...) mc_log(MC_LOG_LEVEL_TRACE, message, ##__VA_ARGS__) #else #define MC_TRACE(message, ...) // do nothing #endif #if MC_LOG_DEBUG_ENABLED == 1 #define MC_DEBUG(message, ...) mc_log(MC_LOG_LEVEL_DEBUG, message, ##__VA_ARGS__) #else #define MC_DEBUG(message, ...) // do nothing #endif #if MC_LOG_INFO_ENABLED == 1 #define MC_INFO(message, ...) mc_log(MC_LOG_LEVEL_INFO, message, ##__VA_ARGS__) #else #define MC_INFO(message, ...) // do nothing #endif #if MC_LOG_WARN_ENABLED == 1 #define MC_WARN(message, ...) mc_log(MC_LOG_LEVEL_WARN, message, ##__VA_ARGS__) #else #define MC_WARN(message, ...) // do nothing #endif #if MC_LOG_ERROR_ENABLED == 1 #define MC_ERROR(message, ...) mc_log(MC_LOG_LEVEL_ERROR, message, ##__VA_ARGS__) #else #define MC_ERROR(message, ...) 
// do nothing #endif #if MC_LOG_FATAL_ENABLED == 1 #define MC_FATAL(message, ...) mc_log(MC_LOG_LEVEL_FATAL, message, ##__VA_ARGS__) #else #define MC_FATAL(message, ...) // do nothing #endif
#include <cstdio>   // fopen, fprintf, fflush, fclose
#include <unistd.h> // sleep

int main(int argc, char** argv)
{
    int data[] = {10, 5, 263}; // Random data we want to send

    FILE *file = fopen("/dev/ttyACM0", "w"); // Opening device file
    if (file == NULL) {
        perror("fopen /dev/ttyACM0");
        return 1;
    }

    for (int i = 0; i < 3; i++) {
        fprintf(file, "%d", data[i]); // Writing to the file
        fprintf(file, "%c", ',');     // To separate values
        fflush(file); // push the bytes out now, not only when the file is closed
        sleep(1);
    }

    fclose(file);
    return 0;
}
import Discord from 'discord.js'; import EventEmitter from 'events'; import { UserPromtResult } from '../Interface/UserPromtResult.js'; interface UserActionEmitterEvents { action: (user: Discord.GuildMember, channel: Discord.TextChannel, result: UserPromtResult, creator?: Discord.Message | Discord.ButtonInteraction) => void } export declare interface UserActionEmitter { on<U extends keyof UserActionEmitterEvents>( event: U, listener: UserActionEmitterEvents[U] ): this emit<U extends keyof UserActionEmitterEvents>( event: U, ...args: Parameters<UserActionEmitterEvents[U]> ): boolean removeListener<U extends keyof UserActionEmitterEvents>( event: U, listener: UserActionEmitterEvents[U] ): this } // add listen for? export class UserActionEmitter extends EventEmitter { emitOnMessage(msg: Discord.Message): void { if (msg.content.toLowerCase() === 'cancel') { this.emit('action', msg.member, msg.channel as Discord.TextChannel, 'cancel', msg); return; } const opt = parseInt(msg.content); if (opt) { this.emit('action', msg.member, msg.channel as Discord.TextChannel, opt, msg); } } emitOnButton(inter: Discord.ButtonInteraction): void { if (inter.customId === 'cancel') { this.emit('action', inter.member as Discord.GuildMember, inter.channel as Discord.TextChannel, 'cancel', inter); } const opt = parseInt(inter.customId); if (opt) { this.emit('action', inter.member as Discord.GuildMember, inter.channel as Discord.TextChannel, opt, inter); } } }
// Copyright (c) Aptos // SPDX-License-Identifier: Apache-2.0 use crate::common::{ types::{CliCommand, CliTypedResult, EncodingOptions, ProfileOptions, WriteTransactionOptions}, utils::submit_transaction, }; use aptos_rest_client::{aptos_api_types::WriteSetChange, Transaction}; use aptos_types::account_address::AccountAddress; use async_trait::async_trait; use cached_framework_packages::aptos_stdlib; use clap::Parser; use serde::Serialize; use std::collections::BTreeMap; /// Command to transfer coins between accounts /// #[derive(Debug, Parser)] pub struct TransferCoins { #[clap(flatten)] write_options: WriteTransactionOptions, #[clap(flatten)] encoding_options: EncodingOptions, #[clap(flatten)] profile_options: ProfileOptions, /// Address of account you want to send coins to #[clap(long, parse(try_from_str = crate::common::types::load_account_arg))] account: AccountAddress, /// Amount of coins to transfer #[clap(long)] amount: u64, } #[async_trait] impl CliCommand<TransferSummary> for TransferCoins { fn command_name(&self) -> &'static str { "TransferCoins" } async fn execute(self) -> CliTypedResult<TransferSummary> { let sender_key = self.write_options.private_key_options.extract_private_key( self.encoding_options.encoding, &self.profile_options.profile, )?; submit_transaction( self.write_options .rest_options .url(&self.profile_options.profile)?, self.write_options .chain_id(&self.profile_options.profile) .await?, sender_key, aptos_stdlib::encode_test_coin_transfer(self.account, self.amount), self.write_options.max_gas, ) .await .map(TransferSummary::from) } } const SUPPORTED_COINS: [&str; 1] = ["0x1::Coin::CoinStore<0x1::TestCoin::TestCoin>"]; /// A shortened transaction output #[derive(Clone, Debug, Default, Serialize)] pub struct TransferSummary { gas_used: Option<u64>, balance_changes: BTreeMap<AccountAddress, serde_json::Value>, sender: Option<AccountAddress>, success: bool, version: Option<u64>, vm_status: String, } impl From<Transaction> for 
TransferSummary { fn from(transaction: Transaction) -> Self { let mut summary = TransferSummary { success: transaction.success(), version: transaction.version(), vm_status: transaction.vm_status(), ..Default::default() }; if let Transaction::UserTransaction(txn) = transaction { summary.sender = Some(*txn.request.sender.inner()); summary.gas_used = Some(txn.info.gas_used.0); summary.version = Some(txn.info.version.0); summary.balance_changes = txn .info .changes .iter() .filter_map(|change| match change { WriteSetChange::WriteResource { address, data, .. } => { if SUPPORTED_COINS.contains(&data.typ.to_string().as_str()) { Some(( *address.inner(), serde_json::to_value(data.data.clone()).unwrap_or_default(), )) } else { None } } _ => None, }) .collect(); } summary } }
/** * Created by andrey on 24.04.14. */ @Stateless @Local(AttachmentService.class) @Interceptors(SpringBeanAutowiringInterceptor.class) public class LocalAttachmentServiceImpl extends BaseAttachmentServiceImpl implements AttachmentService { @Override protected RemoteInputStream wrapStream(InputStream inputStream) throws RemoteException { return new DirectRemoteInputStream(inputStream, false); } }
import java.util.*;

public class Main {
    // Triangular number: 0 + 1 + ... + n = n * (n + 1) / 2
    static long sumUpTo(int n) {
        return (long) n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        // a[i] holds the i-th triangular number.
        long[] a = new long[10000 + 1];
        for (int i = 0; i <= 10000; i++) {
            a[i] = sumUpTo(i);
        }
        Scanner sc = new Scanner(System.in);
        int n = sc.nextInt();
        // Find the largest index whose cumulative sum of triangular numbers
        // does not exceed n. The accumulator must be a long: the running sum
        // overflows int well before the loop bound is reached.
        long sum = 0;
        int save = 0;
        for (int i = 0; i <= 10000; i++) {
            sum += a[i];
            if (sum > n) {
                save = i;
                break;
            }
        }
        System.out.println(save - 1);
    }
}
/** * @author Rinat Gareev (Kazan Federal University) * */ class NativeLibrary { private static boolean loaded = false; public synchronized static void load() { if (!loaded) { System.loadLibrary("crfsuite-jni"); loaded = true; } } }
import tensorflow as tf


def separable_filter(tensor: tf.Tensor, kernel: tf.Tensor) -> tf.Tensor:
    """Apply a 1-D kernel separably along each spatial axis of a 5-D tensor.

    Equivalent to convolving with the 3-D outer product of the kernel,
    but done as three cheap 1-D convolutions instead of one cubic one.
    """
    strides = [1, 1, 1, 1, 1]
    kernel = tf.cast(kernel, dtype=tensor.dtype)
    tensor = tf.nn.conv3d(
        tf.nn.conv3d(
            tf.nn.conv3d(
                tensor,
                filters=tf.reshape(kernel, [-1, 1, 1, 1, 1]),
                strides=strides,
                padding="SAME",
            ),
            filters=tf.reshape(kernel, [1, -1, 1, 1, 1]),
            strides=strides,
            padding="SAME",
        ),
        filters=tf.reshape(kernel, [1, 1, -1, 1, 1]),
        strides=strides,
        padding="SAME",
    )
    return tensor
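The separability trick above can be sanity-checked outside TensorFlow. The following plain-NumPy sketch (function names are mine, not part of the codebase) shows that three axis-wise 1-D convolutions with a symmetric kernel match a brute-force 3-D convolution with the kernel's outer product:

```python
import numpy as np

def conv1d_along(vol: np.ndarray, kernel: np.ndarray, axis: int) -> np.ndarray:
    # Zero-padded, 'same'-sized 1-D convolution applied along one axis.
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), axis, vol)

def separable_filter_np(vol: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # Filter each of the three spatial axes in turn with the same 1-D kernel.
    out = vol.astype(float)
    for axis in range(3):
        out = conv1d_along(out, kernel, axis)
    return out

def full_filter_np(vol: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # Brute-force equivalent: correlate with the 3-D outer-product kernel.
    # (Correlation equals convolution here because the kernel is symmetric.)
    k3 = np.einsum("i,j,k->ijk", kernel, kernel, kernel)
    n = len(kernel)
    pad = n // 2
    padded = np.pad(vol.astype(float), pad)
    out = np.empty(vol.shape, dtype=float)
    for i in range(vol.shape[0]):
        for j in range(vol.shape[1]):
            for k in range(vol.shape[2]):
                out[i, j, k] = np.sum(padded[i:i + n, j:j + n, k:k + n] * k3)
    return out

rng = np.random.default_rng(0)
vol = rng.random((5, 6, 7))
gaussian_ish = np.array([1.0, 2.0, 1.0]) / 4.0  # symmetric, odd-length
assert np.allclose(separable_filter_np(vol, gaussian_ish), full_filter_np(vol, gaussian_ish))
```

The separable version does O(3k) work per voxel instead of O(k³), which is why the TF function above chains three `conv3d` calls with reshaped kernels.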
def f(self, x0, y0, z0, w0, a, b, usecontrol:bool=True): if self.config['useControl'] and usecontrol: if self.config['usePositiveControl']: k = np.array(self.kp) e = np.array(self.e) x = np.array([x0, y0, z0, w0]) sigma = self.sigma self.u = sigma * (-np.matmul(np.transpose(k), (x - e))) u = self.u dx0 = x0 * (b[0] - a[0][0]*x0 - a[0][1]*y0 - a[0][2]*z0 - a[0][3]*w0 + k[0]*u) dy0 = y0 * (b[1] - a[1][0]*x0 - a[1][1]*y0 - a[1][2]*z0 - a[1][3]*w0 + k[1]*u) dz0 = z0 * (b[2] - a[2][0]*x0 - a[2][1]*y0 - a[2][2]*z0 - a[2][3]*w0 + k[2]*u) dw0 = w0 * (b[3] - a[3][0]*x0 - a[3][1]*y0 - a[3][2]*z0 - a[3][3]*w0 + k[3]*u) elif self.config['useOptimalControl']: F0, F1, F2, F3 = self.computeOptimalControl(x0, y0, z0, w0, a, b) dx0 = x0 * (b[0] - a[0][0]*x0 - a[0][1]*y0 - a[0][2]*z0 - a[0][3]*w0) + F0 dy0 = y0 * (b[1] - a[1][0]*x0 - a[1][1]*y0 - a[1][2]*z0 - a[1][3]*w0) + F1 dz0 = z0 * (b[2] - a[2][0]*x0 - a[2][1]*y0 - a[2][2]*z0 - a[2][3]*w0) + F2 dw0 = w0 * (b[3] - a[3][0]*x0 - a[3][1]*y0 - a[3][2]*z0 - a[3][3]*w0) + F3 else: dx0 = x0 * (b[0] - a[0][0]*x0 - a[0][1]*y0 - a[0][2]*z0 - a[0][3]*w0) dy0 = y0 * (b[1] - a[1][0]*x0 - a[1][1]*y0 - a[1][2]*z0 - a[1][3]*w0) dz0 = z0 * (b[2] - a[2][0]*x0 - a[2][1]*y0 - a[2][2]*z0 - a[2][3]*w0) dw0 = w0 * (b[3] - a[3][0]*x0 - a[3][1]*y0 - a[3][2]*z0 - a[3][3]*w0) return dx0, dy0, dz0, dw0
{-# LANGUAGE CPP #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE EmptyDataDecls #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE ForeignFunctionInterface #-}
{-# LANGUAGE JavaScriptFFI #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE OverloadedStrings #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeSynonymInstances #-}

-- | Entry point: renders the VK app when compiled with GHCJS and is a
-- no-op otherwise. The CPP pragma is required for the #ifdef below.
module Main where

#ifdef __GHCJS__
import React.Flux

import qualified VK.App as VK

main :: IO ()
main = do
  av <- VK.initApp VK.app
  case VK.app of
    VK.App{VK.appRouter = Just ar} -> VK.initRouter ar
    _ -> return ()
  reactRender "vk-app" av Nothing
#else
main :: IO ()
main = return ()
#endif
#ifndef _AB_UNIQUE_PTR_
#define _AB_UNIQUE_PTR_

#include <cassert>

namespace ab {
    template<typename T>
    class unique_ptr {
    public:
        explicit unique_ptr(T* ptr) : ptr(ptr) {}
        unique_ptr() : ptr(nullptr) {}
        // Move construction transfers ownership and empties the source.
        unique_ptr(unique_ptr<T>&& other) : ptr(other.get()) {
            other.reset();
        }
        unique_ptr(const unique_ptr<T>&) = delete;
        ~unique_ptr() {
            delete ptr;
        }
        T& operator*() const {
            assert(ptr);
            return *ptr;
        }
        T* operator->() const {
            return ptr;
        }
        explicit operator bool() const {
            return ptr != nullptr;
        }
        unique_ptr<T>& operator=(const unique_ptr<T>&) = delete;
        T* get() const {
            return ptr;
        }
        // Drops ownership without deleting; used by the move constructor,
        // which has already taken the pointer.
        void reset() {
            ptr = nullptr;
        }
    private:
        T* ptr;
    };
}

#endif /* _AB_UNIQUE_PTR_ */
import { OpenApiPublisher } from "./publishing/openApiPublisher"; import * as fs from "fs"; import * as path from "path"; import { IInjector, IInjectorModule } from "@paperbits/common/injection"; import { ConsoleLogger } from "@paperbits/common/logging"; import { ListOfApisModule } from "./components/apis/list-of-apis/ko/listOfApis.module"; import { DetailsOfApiModule } from "./components/apis/details-of-api/ko/detailsOfApi.module"; import { StaticRouter } from "./components/staticRouter"; import { StaticUserService } from "./services/userService"; import { StaticAuthenticator } from "./components/staticAuthenticator"; import { OperationListModule } from "./components/operations/operation-list/ko/operationList.module"; import { OperationDetailsPublishModule } from "./components/operations/operation-details/operationDetails.publish.module"; import { ValidationSummaryModule } from "./components/users/validation-summary/validationSummary.module"; import { StaticRoleService } from "./services/roleService"; import { FileSystemBlobStorage } from "./components/fileSystemBlobStorage"; import { FileSystemDataProvider } from "./persistence/fileSystemDataProvider"; import { FileSystemObjectStorage } from "./persistence/fileSystemObjectStorage"; import { openapiSpecsPathSettingName, dataPathSettingName, mediaPathSettingName, websiteContentFileName } from "./constants"; export class MainPublishModule implements IInjectorModule { public register(injector: IInjector): void { injector.bindModule(new ListOfApisModule()); injector.bindModule(new DetailsOfApiModule()); injector.bindModule(new OperationListModule()); injector.bindModule(new OperationDetailsPublishModule()); injector.bindModule(new ValidationSummaryModule()); injector.bindSingleton("userService", StaticUserService); injector.bindSingleton("roleService", StaticRoleService); injector.bindSingleton("router", StaticRouter); injector.bindSingleton("authenticator", StaticAuthenticator); injector.bindSingleton("logger", 
ConsoleLogger); const configPath = path.resolve(__dirname, "config.json"); const configRaw = fs.readFileSync(configPath, "utf8"); const config = JSON.parse(configRaw); const mediaFolder = config[mediaPathSettingName]; const contentFolder = config[dataPathSettingName]; const openapiSpecsFolder = config[openapiSpecsPathSettingName]; const basePath = path.dirname(__filename); const contentFilePath = path.resolve(basePath, contentFolder, websiteContentFileName); const specsFolderPath = path.resolve(basePath, openapiSpecsFolder); const mediaFolderPath = path.resolve(basePath, mediaFolder); injector.bindInstance("specsBlobStorage", new FileSystemBlobStorage(specsFolderPath)); injector.bindInstance("blobStorage", new FileSystemBlobStorage(mediaFolderPath)); injector.bindInstance("dataProvider", new FileSystemDataProvider(contentFilePath)); injector.bindInstance("objectStorage", new FileSystemObjectStorage(contentFilePath)); injector.bindInstance("outputBlobStorage", new FileSystemBlobStorage(path.resolve("../website"))); injector.bindToCollection("publishers", OpenApiPublisher); } }
// Returns the AgentPosition (relative strength measure) of the agent when at the minimum HP defined by the condition and conditionOp func (a *CustomAgent3) requiredHPLevel(treaty messages.Treaty) AgentPosition { if treaty.ConditionOp() == messages.LT || treaty.ConditionOp() == messages.LE || treaty.ConditionValue() > 100 { return SurvivalLevel } switch hp := treaty.ConditionValue(); { case hp >= 75: return Strong case hp >= 55: return Healthy case hp >= 35: return Average case hp >= a.HealthInfo().WeakLevel: return Weak case hp == a.HealthInfo().HPCritical: return SurvivalLevel default: return Reject } }
<reponame>Opty-MSc/HDS<filename>Project/HDLT/Server/src/main/java/pt/tecnico/ulisboa/hds/hdlt/server/repository/DBPopulate.java<gh_stars>0 package pt.tecnico.ulisboa.hds.hdlt.server.repository; import org.intellij.lang.annotations.Language; import java.sql.Connection; import java.sql.SQLException; import java.sql.Statement; public class DBPopulate { @Language("SQL") public static final String InsertNonceSQL = "INSERT INTO UserNonces (nonce) VALUES (?);"; @Language("SQL") public static final String InsertUserReportSQL = "INSERT INTO UserReports (uname, epoch, x, y) VALUES (?, ?, ?, ?);"; @Language("SQL") public static final String InsertUserLocationProofSQL = "INSERT INTO UserLocationProofs (uname, epoch, uSigner, uSignedProof) VALUES (?, ?, ?, ?);"; @Language("SQL") public static final String InsertServerLocationProofSQL = "INSERT INTO ServerLocationProofs (uname, epoch, uSigner, sSigner, sSignedProof) VALUES (?, ?, ?, ?, ?);"; @Language("SQL") public static final String SelectNoncesSQL = "SELECT * FROM UserNonces;"; @Language("SQL") public static final String SelectHasUserReportSQL = "SELECT * FROM UserReports WHERE uname = ? AND epoch = ?;"; @Language("SQL") public static final String SelectUserReportSQL = "SELECT * FROM UserReports NATURAL JOIN UserLocationProofs NATURAL JOIN ServerLocationProofs WHERE uname = ? AND epoch = ?;"; @Language("SQL") public static final String SelectUserReportsByLocationSQL = "SELECT * FROM UserReports NATURAL JOIN UserLocationProofs NATURAL JOIN ServerLocationProofs WHERE epoch = ? AND x < ? AND x > ? AND y < ? AND y > ?;"; @Language("SQL") public static final String SelectLocationProofsSQL = "SELECT * FROM UserReports NATURAL JOIN UserLocationProofs NATURAL JOIN ServerLocationProofs WHERE uSigner = ? 
AND epoch = ANY(?::INTEGER[]);"; public static void createTables(Connection connection) throws SQLException { Statement stmt = connection.createStatement(); stmt.executeUpdate( "CREATE TABLE IF NOT EXISTS UserNonces(" + "nonce BYTEA NOT NULL," + "PRIMARY KEY (nonce));"); stmt.executeUpdate( "CREATE TABLE IF NOT EXISTS UserReports(" + "uname VARCHAR(20) NOT NULL," + "epoch INTEGER NOT NULL," + "x INTEGER NOT NULL," + "y INTEGER NOT NULL," + "PRIMARY KEY (uname, epoch));"); stmt.executeUpdate( "CREATE TABLE IF NOT EXISTS UserLocationProofs(" + "uname VARCHAR(20) NOT NULL," + "epoch INTEGER NOT NULL," + "uSigner VARCHAR(20) NOT NULL," + "uSignedProof BYTEA NOT NULL," + "FOREIGN KEY(uname, epoch) REFERENCES UserReports(uname, epoch)," + "PRIMARY KEY (uname, epoch, uSigner));"); stmt.executeUpdate( "CREATE TABLE IF NOT EXISTS ServerLocationProofs(" + "uname VARCHAR(20) NOT NULL," + "epoch INTEGER NOT NULL," + "uSigner VARCHAR(20) NOT NULL," + "sSigner VARCHAR(20) NOT NULL," + "sSignedProof BYTEA NOT NULL," + "FOREIGN KEY(uname, epoch, uSigner) REFERENCES UserLocationProofs(uname, epoch, uSigner)," + "PRIMARY KEY (uname, epoch, uSigner, sSigner));"); stmt.close(); } }
package com.venky.swf.plugins.background.extensions; import com.venky.swf.plugins.background.core.agent.Agent; import com.venky.swf.plugins.background.core.agent.PersistedTaskPollingAgent; public class SWFAgentRegistry { static { Agent.instance().registerAgentSeederTaskBuilder("PERSISTED_TASK_POLLER", new PersistedTaskPollingAgent()); } }
import ThankYou from '../components/ThankYou' export default ThankYou
/** * Test of setTournamentSize method, of class * gov.sandia.isrc.learning.reinforcement.TournamentSelector. */ public void testSetTournamentSize() { System.out.println("setTournamentSize"); double pct = Math.random(); int tournieSize = (int) (Math.random() * 1000) + 1; TournamentSelector<Double> selector = new TournamentSelector<Double>( pct, tournieSize ); assertEquals( tournieSize, selector.getTournamentSize() ); int t2 = tournieSize + 1; selector.setTournamentSize( t2 ); assertEquals( t2, selector.getTournamentSize() ); int t3 = 0; try { selector.setTournamentSize( t3 ); fail( "Should have thrown exception" ); } catch (Exception e) { System.out.println( "Good! Properly threw exception" ); } int t4 = -tournieSize; try { selector.setTournamentSize( t4 ); fail( "Should have thrown exception" ); } catch (Exception e) { System.out.println( "Good! Properly threw exception" ); } }
There he goes again. Just when you thought Donald Trump must be feeling the heat for saying Judge Gonzalo Curiel's "Mexican heritage" should preclude him from ruling on the lawsuits against Trump University, the presumptive GOP nominee reportedly ordered surrogates on a Monday conference call to keep fanning the flames. "Take that order and throw it the hell out," Trump said of a campaign memo asking them not to comment on the controversy. All of which raises the question: Why is Trump doing everything in his power to keep this terrible story alive?

Way back in March, I argued that Trump would likely lose — badly — in a general election contest with Hillary Clinton, not because he is too divisive or too ignorant or too ideologically unmoored (though all of these posed problems for his candidacy), but for a much more basic reason. As I said then, "Trump — at the very moment that he most obviously needs to begin making a general-election argument — is instead driving the conversation back to himself, and to his peculiar obsessions and insecurities." The polls won't reflect it for a while yet, but Trump's continuing obsession with Curiel provides strong evidence that I was right.

Many Republicans are increasingly worried that Trump's attacks on Curiel's background — the judge is a natural-born American citizen of Mexican ancestry — will further infuriate Hispanic, immigrant, and non-white voters. And they undoubtedly will. The more thoughtful among them are also worried for a more principled reason: Trump's point of view is the most dead-end identity politics — the kind of thing principled conservatives, principled liberals, and, indeed, anyone who believes in the rule of law viscerally opposes. Trump's isn't a plea for justice, but a declaration that justice is impossible.

But if you're a purely cynical Republican operative, the main reason to despair is the creeping realization of just why it is that Trump is talking about Curiel, or his lawsuit, at all.
It's not because he's trying to win the lawsuit by getting Curiel to recuse himself. Trump's lawyers surely know that there is clear legal precedent against requiring such recusal on the basis of either ethnic background or overall political alignment. Nor is it an attempt to pressure Curiel in the absence of a strong legal argument; a judge who faced down drug cartels is unlikely to feel threatened by Trump's bluster. Nor is it a political strategy. If Trump doesn't have the kind of voters who might share his view of Hispanic judges locked up by now, then he's in bigger trouble than the polls suggest. Meanwhile, Trump University is a terrible story for Donald Trump — because it's the perfect story for making the case that Trump is both a con artist and a failure. And when you don't like what people are saying, you're supposed to change the conversation, not whine about it endlessly. Trump is ranting about Curiel's bias not because doing so is part of any kind of rational political strategy, but because he is going to lose the case. And if he loses, it must be somebody else's fault. He's not just talking about himself instead of something that actually matters to voters. He's talking to himself, telling himself a story of how big a winner he is, no matter how often he loses. And he's doing it in front of the entire country. In a very basic sense, this is the emotional connection that Trump forged from the beginning of his campaign. Trump sees himself as a winner whose occasional setbacks are the result of other people's unfairness or incompetence. He has connected with a slice of the voting public that sees America's problems in similar terms: the fault of corrupt, incompetent, and disloyal elites. But successful political leaders — whether they operate within established norms or, like Trump, gleefully flout them — use that emotional connection for something larger. It's the ground on which they build loyalty to a political program and organization. 
Trump isn't building anything. Indeed, he hasn't built anything in a good long time; for decades, he's been a marketer whose only product is his own mystique. And so it is with his political campaign. The purpose of the emotional connection he has forged is entirely personal: to reaffirm his own greatness, his own winningness. "I've always won and I'm going to continue to win. And that's the way it is," he told supporters on the Monday conference call. The conversation keeps coming back to him because that's where he wants it to go. Because that's all his campaign has ever been about. Which is why that cynical operative should despair. He's made his peace with Trump, and is now focused on highlighting his distinctive strengths and controlling the damage of his distinctive style of politics, so as to get the best result for the party come November. But Trump isn't interested in getting the best result for the party. He's got a whole host of strategies for convincing himself that he's a winner even when he loses, because the loss is always somebody else's fault. And he's got a whole host of strategies for making sure that, monetarily and psychically speaking, the bulk of his losses hits somebody else's balance sheet rather than his own. Historically, those have been his priorities, and from the look of things, they still are. So if Trump looks like he's in real trouble after Labor Day, what makes anyone think he's even going to try to win — as opposed to assuring himself (and his supporters) that someone else is to blame for his loss? And if he does lose, who do you think he's going to make sure gets blamed?
<filename>mars/learn/ensemble/tests/test_blockwise.py # Copyright 1999-2021 Alibaba Group Holding Ltd. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import numpy as np import pytest from sklearn.datasets import make_classification from sklearn.linear_model import LogisticRegression from .... import dataframe as md from .... import tensor as mt from .. import BlockwiseVotingClassifier, BlockwiseVotingRegressor fit_raw_X, fit_raw_y = make_classification() fit_X, fit_y = mt.tensor(fit_raw_X, chunk_size=25), mt.tensor(fit_raw_y, chunk_size=25) fit_df_X = md.DataFrame(fit_X) predict_raw_X, predict_raw_y = make_classification() predict_X, predict_y = ( mt.tensor(predict_raw_X, chunk_size=20), mt.tensor(predict_raw_y, chunk_size=20), ) predict_df_X = md.DataFrame(predict_X) @pytest.mark.parametrize( "fit_X, fit_y, predict_X, predict_y", [ (fit_X, fit_y, predict_X, predict_y), (fit_raw_X, fit_raw_y, predict_raw_X, predict_raw_y), (fit_df_X, fit_raw_y, predict_df_X, predict_raw_y), ], ) def test_blockwise_voting_classifier_hard(setup, fit_X, fit_y, predict_X, predict_y): clf = BlockwiseVotingClassifier(LogisticRegression(solver="lbfgs")) clf.fit(fit_X, fit_y) estimators = clf.estimators_.fetch() if not isinstance(fit_X, np.ndarray): assert len(estimators) == 4 clf.predict(predict_X) score = clf.score(predict_X, predict_y) assert isinstance(score.fetch(), float) with pytest.raises(AttributeError, match="hard"): clf.predict_proba(predict_X) @pytest.mark.parametrize( "fit_X, fit_y, 
predict_X, predict_y", [ (fit_X, fit_y, predict_X, predict_y), (fit_raw_X, fit_raw_y, predict_raw_X, predict_raw_y), (fit_df_X, fit_raw_y, predict_df_X, predict_raw_y), ], ) def test_blockwise_voting_classifier_soft(setup, fit_X, fit_y, predict_X, predict_y): clf = BlockwiseVotingClassifier( LogisticRegression(solver="lbfgs"), voting="soft", classes=[0, 1], ) clf.fit(fit_X, fit_y) estimators = clf.estimators_.fetch() if not isinstance(fit_X, np.ndarray): assert len(estimators) == 4 result = clf.predict(predict_X) assert result.dtype == np.dtype("int64") assert result.shape == (predict_X.shape[0],) result = clf.predict_proba(predict_X) assert result.dtype == np.dtype("float64") assert result.shape == (predict_X.shape[0], 2) score = clf.score(predict_X, predict_y) assert isinstance(score.fetch(), float) @pytest.mark.parametrize( "fit_X, fit_y, predict_X, predict_y", [ (fit_X, fit_y, predict_X, predict_y), (fit_raw_X, fit_raw_y, predict_raw_X, predict_raw_y), (fit_df_X, fit_raw_y, predict_df_X, predict_raw_y), ], ) def test_blockwise_voting_regressor(setup, fit_X, fit_y, predict_X, predict_y): est = BlockwiseVotingRegressor(LogisticRegression()) est.fit(fit_X, fit_y) estimators = est.estimators_.fetch() if not isinstance(fit_X, np.ndarray): assert len(estimators) == 4 result = est.predict(predict_X) assert result.dtype == np.dtype("float64") assert result.shape == (predict_X.shape[0],) score = est.score(predict_X, predict_y) assert isinstance(score.fetch(), float)
# Reads m ranges over an n-element array and accumulates only the
# range sums that are non-negative. (Ported from Python 2 to Python 3:
# raw_input -> input, xrange -> range, and map() materialized as a list
# so it can be sliced.)
n, m = map(int, input().split())
A = list(map(int, input().split()))
total = 0
for _ in range(m):
    start, end = map(int, input().split())
    s = sum(A[start - 1:end])  # ranges are 1-indexed and inclusive
    if s >= 0:
        total += s
print(total)
THE HIERARCHICAL ORGANIZATION OF ARM STROKE IN A 400-M FREESTYLE SWIMMING RACE

Purpose. Arm stroke is a key variable of successful performance in front crawl swimming. In the present study, the effects of the arm stroke on front crawl swimming performance were analysed by considering the complementarity of macro-consistency and micro-variability in the arm stroke as a hierarchically organized adaptive system. In this case, consistency is necessary to achieve outcomes reliably, and variability is fundamental for coping with environmental instability.

Methods. Displacements of swimmers (n = 31) who competed in the 400-m freestyle race of the Paulista Master championship were captured at 4 moments (partial races 1-4). From the aerial and aquatic phases of the left and right arm strokes, the macrostructure (components' relative timing) and microstructure (components' overall time) had their variability rates calculated for all partial races on the basis of the biological coefficients of variation.

Results. It was revealed that swimmers: (i) increased consistency of the macrostructure related to the left arm aerial phase stroke in the final race; (ii) maintained consistency of the microstructure across races; and (iii) presented a macrostructure with a rate of variability inferior to that of the microstructure in the final race.

Conclusions. Given these results, coaches should emphasize instruction for swimmers to maintain the temporal relationship among arm stroke components (macrostructure) rather than focus on the components themselves (microstructure).

Introduction

In the last few decades, there has been an increasing number of studies seeking to understand the arm stroke as a key variable of successful performance in front crawl swimming. As is well known, the arm stroke is responsible for most of the propulsion in front crawl swimming.
In front crawl swimming, the arm stroke is characterized by a cycle formed by continuous and alternating circular movements of the left and right arms, in and out of the water. On the basis of Chollet's index of arm coordination, 3 main arm stroke cycles can be identified, which differ in terms of the beginning of the propulsive movement of one arm and the end of the propulsive movement of the other arm. These are: (a) opposition, when the end of the propulsive action of one arm occurs concomitant with the beginning of the propulsive action of the other arm; (b) catch-up, when there is a delay between the propulsive actions of the 2 arms resulting in a negative percentage, i.e. a period during which the swimmer does not produce propulsion with either arm; and (c) superposition, when the index is positive, i.e. when there is simultaneous propulsive action of the 2 arms. It has been suggested that for elite swimmers of short and middle sprint races, the superposition coordination generates the largest crawl propulsion and, consequently, higher speed.

Besides, several studies have shown how the arm stroke affects swimming performance through 2 main dimensions: spatial (stroke length, SL) and temporal (stroke rate, SR). Generally, SL has been considered in meters per stroke. In turn, SR has been calculated as the number of strokes in a given time. Swimming speed (v) has been determined as the product of those parameters and technically described by the following equation:

v = SL × SR

Numerous studies have indicated that the intra- and inter-race variabilities of SL and SR play an important role in performance. Although such variabilities are dependent upon characteristics of swimmers and race speed, they have also been suggested to be a consequence of the swimmer's decision-making on adaptive strategies for dealing with constraints that emerge throughout the race.
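The relation v = SL × SR can be made concrete with a short numerical sketch (the values below are illustrative, not data from the study):

```python
def swimming_speed(stroke_length_m: float, stroke_rate_hz: float) -> float:
    """Speed (m/s) as the product of stroke length (m per stroke)
    and stroke rate (strokes per second)."""
    return stroke_length_m * stroke_rate_hz

# Illustrative values: 2.0 m per stroke at 0.8 strokes per second.
v = swimming_speed(2.0, 0.8)
assert abs(v - 1.6) < 1e-9  # 1.6 m/s
```

The same speed can be reached by trading the two parameters off against each other (e.g. a longer stroke at a lower rate), which is why their intra-race variability is informative about a swimmer's strategy.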
Notwithstanding the advances provided by previous studies, as with any complex phenomenon, the role of the arm stroke in crawl swimming performance needs to be investigated from different perspectives and levels of analysis. In the present paper, the analysis was based on an alternative view of variability in motor skills as hierarchical adaptive systems. Such a view allows advancing the existing knowledge by conceiving the complementarity of different levels of variability (e.g. macro- and microscopic) of a system (e.g. arm stroke), as well as the meaning of both in performance as hallmarks of motor skill.

It is worth mentioning that, although arm stroke coordination (index of coordination), SL, and SR play an important role in crawl swimming performance, they do not stand for the complementarity of different levels of variability (e.g. macro- and microscopic) of a hierarchical adaptive system (e.g. arm stroke). Hierarchical adaptive systems, also named metastable open systems, are those systems in which the variability of the general pattern is significantly smaller than the variability of the components, and for this reason they present consistency of macroscopic behaviour and variability of microscopic behaviour. This is the case with motor skills.

In terms of motor skills, variability in the interaction of the components guarantees the general configuration of a pattern and, therefore, its consistency. In turn, the variability of the components themselves is related to the performance options available within each one. Thus, variability of the components themselves is responsible for pattern flexibility (i.e. parameterization). In this hierarchical adaptive system view of motor skills, the pattern consistency and component variability as dimensions of motor skills have been referred to as macro- and microstructures, respectively. The macrostructure results from the interaction between components.
It refers to the motor skill's overall pattern and, therefore, it is well-defined and order-oriented. Relative timing and sequencing have been recognized as invariant features of the macrostructure. In turn, the microstructure refers to the components themselves. It is ill-defined, variable, and disorder-oriented. Absolute movement time and muscle group selection are some of the measures of the microstructure.

For instance, the arm stroke emerges from the interaction between 4 main movement components: (a) right arm aerial phase; (b) right arm aquatic phase; (c) left arm aerial phase; and (d) left arm aquatic phase. Invariably, the arm stroke of front crawl swimming comprises the simultaneous execution of 2 components (1 and 4; 2 and 3) and the sequential and continuous execution of these component dyads (1-4, 2-3...). In turn, the details of these simultaneities, such as the duration of each component, as well as the beginning time of each one, vary depending on the race context (e.g. 50-m vs. 400-m race; skilful swimmers vs. not skilled ones).

At the macrostructure level, the temporal and sequential relationship of the arms is well established. For example, while the right arm performs the aerial phase, the left arm performs the aquatic phase by considering the total stroke cycle time. On the other hand, at the microstructure level, each component can behave freely within certain limits established by the macrostructure. Thus, the aerial phase component of the right arm has a relative time that can be significantly invariable (macrostructure), although its absolute movement times may vary (microstructure).

On the basis of this view, we sought to investigate how variabilities of the arm stroke's macro- and microstructure would affect the performance of front crawl swimming. Considering the rate of macro- and microstructure variability change throughout a race, we focused on a 400-m freestyle race to allow for capturing the swimmers' performance at different moments.
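The macro/micro distinction for a single arm can be sketched in a few lines (a hypothetical helper for illustration, not part of the study's analysis): the relative time of the aerial phase is a macrostructure measure, while the absolute durations themselves are the microstructure.

```python
def relative_timing(aerial_s: float, aquatic_s: float) -> float:
    """Macrostructure measure for one arm: the aerial phase's share of
    that arm's full stroke cycle. The absolute durations passed in are
    the microstructure."""
    return aerial_s / (aerial_s + aquatic_s)

# Two cycles with different absolute times but identical relative timing:
r1 = relative_timing(0.40, 0.60)   # 1.0 s cycle
r2 = relative_timing(0.48, 0.72)   # 1.2 s cycle, same proportions
assert abs(r1 - r2) < 1e-9
assert abs(r1 - 0.4) < 1e-9
```

This is exactly the invariance described above: the microstructure (0.40 s vs. 0.48 s) changed, while the macrostructure (a 40% aerial share) stayed constant.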
It was hypothesized that the fastest swimmers would show higher macro-consistency (i.e. a lower variability rate) or higher micro-flexibility (i.e. higher variability) than other swimmers. Consequently, they would also present a macrostructure with a rate of variability inferior to that of the microstructure, i.e. of the components themselves.

The 400-m freestyle race was held in a semi-Olympic pool (25 m). Thus, the subjects swam 16 times the length of the pool. It is worth highlighting an original aspect of the study: it was carried out in a real competitive situation. Considering the race length, as a matter of methodological viability, the swimmers' displacements were captured at 4 moments: the third, seventh, eleventh, and fifteenth laps, hereinafter referred to as partial races 1, 2, 3, and 4, respectively. Besides, these laps were chosen because they were always in the same direction and did not include the start (first lap) and the finish (last lap), in which swimmers have to make adjustments regarding departure and arrival, respectively.

Participants

From the recording of swimmers' displacement, 3 stroke cycles performed over 10 m of the central pool (i.e. between the initial and final 7.5 m) were selected for analysis through the VirtualDub 1.6 (GNU General Public License, Cambridge, MA, USA) software with a frequency of 60 Hz. Although front crawl swimming emerges from the interaction among arm stroke, leg kick, body position, and breathing, this study focused only on the first one because it is strongly associated with speed in swimming. A stroke cycle was defined on the basis of the right arm's attack into the water. That is, a stroke cycle was considered from the moment the right arm attacked the water until it returned to the same position.
This procedure allowed obtaining the times of the aerial (considered from the moment the arm got out of the water until the attack) and aquatic (considered from the attack until the arm got out of the water) phases of both the right and left strokes as stroke components. Measures of the stroke cycle and stroke components made it possible to calculate the measures of macro- and microstructure of the stroke patterns.

Measures

The macrostructure referred to the invariant stroke overall pattern, which emerged from the interaction of its components (C1 - right arm aerial phase; C2 - right arm aquatic phase; C3 - left arm aerial phase; C4 - left arm aquatic phase). It was assessed on the basis of the relationship between each component and the stroke overall pattern. On the other hand, the microstructure measure referred to each component's behaviour and, therefore, reflected the variability in the stroke patterns.

From the values of each component of both strokes' macro- and microstructure in each cycle, the invariant and variable dimensions of the stroke pattern were assessed by calculating their rates of variability in the first, second, third, and fourth partial races. This was made by biological coefficients of variation:

BCV = CV - SEM

where BCV stood for the biological coefficient of variation, CV was the rate of variability (coefficient of variation), and SEM was the standard error of the mean. CV was calculated as:

CV = (σ / μ) × 100

where σ referred to the standard deviation and μ was the arithmetic mean. SEM, expressed on the same percentage scale, was obtained as:

SEM = (σ / (μ √n)) × 100

where σ referred to the standard deviation, μ was the arithmetic mean, and n referred to the number of trials considered in the calculation. The subtraction of SEM from CV has been used as a way of separating (filtering) possible human errors in the capture of kinematic data, including data concerning swimming.
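The BCV computation can be sketched as follows. Note one assumption, flagged here because the source leaves the formulas implicit: CV and SEM are both expressed as percentages of the mean, which the subtraction CV - SEM requires for the units to match.

```python
import math

def biological_cv(trials: list) -> float:
    """BCV = CV - SEM, with both terms expressed as percentages of the
    mean, so that measurement error (SEM) is filtered out of the raw
    coefficient of variation (CV). Assumes at least 2 trials."""
    n = len(trials)
    mean = sum(trials) / n
    # Sample standard deviation (n - 1 denominator).
    sd = math.sqrt(sum((t - mean) ** 2 for t in trials) / (n - 1))
    cv = (sd / mean) * 100.0
    sem = (sd / math.sqrt(n)) / mean * 100.0
    return cv - sem

# Illustrative durations (s) of one stroke component over 4 cycles
# (hypothetical numbers, not data from the study).
times = [0.62, 0.65, 0.60, 0.63]
bcv = biological_cv(times)
assert bcv > 0  # CV always exceeds SEM% when n > 1 and sd > 0
```

Because SEM on this scale equals CV/√n, the filter removes a fixed fraction of the raw variability that shrinks as more cycles are averaged.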
Furthermore, the performances in the race were obtained with the official electronic timing and scoreboard system of the local aquatic federation, which was triggered by the swimmer's exit from the block and stopped by the touch on plates (0.90 m high by 2.40 m long) attached to the pool wall.

Statistical analysis

To investigate the influence of the hierarchical organization of the front crawl stroke pattern on the performance of the 400-m freestyle swimming race, we analysed the macro- and microstructure on the basis of race times. These ranged from 265 to 531 seconds and were divided in accordance with quartiles as the cut-off points. The first quartile comprised the fastest swimmers and the fourth quartile the slowest ones. This procedure was carried out separately for male and female swimmers because their performances, as expected, were significantly different (F(1, 29) = 8.916, p = 0.006). In other words, male and female swimmers were grouped depending on the performance quartiles rather than on the absolute values of performance. To consider the interactions between performance quartiles and partial races, 4 × 4 mixed-model ANOVAs (performance quartiles × partial races) were conducted on the data from each stroke component. Finally, considering that in a hierarchically organized system the macro-variability is smaller than the micro-variability, the macro- and microstructure were compared within each partial race by a 4 × 2 mixed-model ANOVA (performance quartiles × structure levels). In this case, the structures as a whole (overall macrostructure and overall microstructure) were represented by the arithmetic means of the components' BCV. Both analyses were preceded by sex comparisons with one-way ANOVAs. All observed significant effects were followed up with Fisher HSD tests. All analyses were preceded by Shapiro-Wilk's W and Bartlett's tests of normality and homogeneity of variance.
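The quartile-based grouping can be sketched as follows. This is a sketch only: the paper does not state its exact cut-off convention, so simple empirical quartiles of the sample are assumed, applied separately to each sex's race times.

```python
def quartile_groups(times):
    """Split race times into 4 performance groups (group 1 = fastest).

    Assumption: cut-offs are the empirical 25th/50th/75th percentiles
    of the sample itself, with boundary values assigned downward.
    """
    ranked = sorted(times)
    n = len(ranked)
    q1, q2, q3 = (ranked[int(n * f)] for f in (0.25, 0.5, 0.75))
    groups = {1: [], 2: [], 3: [], 4: []}
    for t in times:
        if t <= q1:
            groups[1].append(t)
        elif t <= q2:
            groups[2].append(t)
        elif t <= q3:
            groups[3].append(t)
        else:
            groups[4].append(t)
    return groups
```

With the grouping done per sex, the subsequent 4 × 4 and 4 × 2 ANOVAs operate on the group labels rather than on the raw times.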
All analyses were conducted with the Statistica 13.0 software (StatSoft, Tulsa, OK, USA), with the level of significance set at α = 0.05.

Ethical approval

The research related to human use has complied with all the relevant national regulations and institutional policies, has followed the tenets of the Declaration of Helsinki, and has been approved by the authors' institutional review board.

Informed consent

Informed consent has been obtained from all individuals included in this study.

Figure 1 presents the BCV of the macro- and microstructure components C1 (right arm aerial phase), C2 (right arm aquatic phase), C3 (left arm aerial phase), and C4 (left arm aquatic phase) for the first, second, third, and fourth performance quartiles of the swimmer groups. Regarding the macrostructure, it can be observed that the arm stroke's aerial phases had rates of variability superior to those of the aquatic phases. Also, throughout the partial races, the rates of variability of the aerial phases fluctuated more than those of the aquatic phases. Similar behaviour can be found for the microstructure.

Figure 1. Biological coefficients of variation (BCV) of the macro- and microstructure components C1 (right arm aerial phase), C2 (right arm aquatic phase), C3 (left arm aerial phase), and C4 (left arm aquatic phase) in the first, second, third, and fourth performance quartiles of swimmer groups.

Concerning the macrostructure, no effects were revealed for sex comparisons. The 4 × 4 mixed-model ANOVAs (performance quartiles × partial races) determined effects only for partial races in C3 (F(3, 81) = 3.37, p = 0.002, ηp² = 0.11). The Fisher HSD test showed that the fourth partial race had a lower rate of variability than the second (p = 0.039) and third (p = 0.005) partial races. This allows inferring that the left arm aerial phase became more consistent in the final part of the race regardless of the performance level. As for the microstructure, no difference was revealed.
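The reported effect size for C3 is consistent with the usual conversion from an F statistic to partial eta squared; a quick check, assuming the degrees of freedom given in the text:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    # Standard conversion from an ANOVA F statistic:
    # eta_p^2 = F * df_effect / (F * df_effect + df_error)
    return f_value * df_effect / (f_value * df_effect + df_error)

# For the C3 effect reported above, F(3, 81) = 3.37 gives roughly 0.11.
```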
Results

Concerning the macro- vs. microstructure comparisons, Figure 2 presents the BCV of the macro- and microstructure in partial races 1, 2, 3, and 4 for the first, second, third, and fourth performance quartiles of the swimmer groups. It can be noted that in most partial races, all performance quartile groups presented a macrostructure with inferior rates of variability compared with the microstructure; this did not occur only in the third partial race. Furthermore, it appears that the third performance quartile group had a superior rate of variability in most partial races. Similarly to the previous results, no effects were revealed for sex comparisons here either. A 4 × 2 mixed-model ANOVA (performance quartiles × structure levels) determined effects only for structure levels in the fourth partial race (F(1, 27) = 15.05, p = 0.0001, ηp² = 0.45). The Fisher HSD test showed that the macrostructure had an inferior rate of variability compared with the microstructure (p = 0.0003).

Discussion

Total and relative times have long been used for assessing different dimensions of movement patterns, including inferring the role of the central nervous system in the variable (parameterization) and invariant (structure) aspects of movement patterns. In the present study, both the variable and the invariant dimensions were assumed to be levels of a single structure, that is, of a complex system with hierarchical organization. It was expected that the fastest swimmers would show a lower rate of variability in the macrostructure (invariant dimension) and, therefore, more consistency in the pattern structure, and a higher rate of variability in the microstructure (variant dimension) than other swimmers. Furthermore, assuming that in hierarchical adaptive systems the variability of the general pattern is significantly smaller than the variability of each component, we hypothesized that the fastest swimmers would also show an inferior variability rate in the macro- compared with the microstructure.
These hypotheses are rooted in the prediction that macro-consistency and micro-variability allow an open system (i.e. the arm stroke) to maintain its identity and to adapt itself to environmental demands, respectively. These hypotheses were partially confirmed. On the one hand, the results did not show the expected differences among groups. This was because all groups changed their movement pattern similarly in terms of the left arm aerial phase as a macrostructure component, making it more consistent in the final partial race. Why does this change occur? Perhaps it is related to how athletes distribute work and energy throughout an exercise task (strategy). As is known, athletes self-select intensity and an optimal pacing strategy according to their ability to resist fatigue (i.e. anaerobic and aerobic supply). Additionally, a key characteristic of the best swimmers is maintaining a consistent stroke rate (SR) over a 100-m race. Concerning the results of the present study specifically, 400-m freestyle swimmers are characterized by an increasing consistency of the arm stroke in the final partial race. Another important question is: why are the groups similar in making this change? We think this is related to the explanatory hypothesis for the other results. As expected, the results showed that all performance groups presented a similar macrostructure, with an inferior rate of variability compared with the microstructure in most partial races. We believe this makes sense because all of the investigated swimmers were skilful. As indicated, we collected data for swimmers in the central lanes (3, 4, 5, and 6), therefore involving the fastest swimmers; thus, they may have a similar level of capability. The maintenance of pattern identity simultaneous with adaptability to environmental requirements is a characteristic of skilful behaviour for swimmers at the national and international performance levels.
Besides, the behaviour of the left arm aerial movement should be re-investigated to confirm the consistency of these results.

Conclusions

In summary, the findings of this study allowed us to conclude that 400-m freestyle race swimmers performed more consistently in the left arm aerial phase in the final partial race and that they showed an inferior rate of variability for the macrostructure compared with the microstructure. As is widely recognized, an athlete's pacing and arm stroke organization can have a significant impact on performance. On the basis of our findings, swimming coaches and teachers should instruct swimmers to maintain the temporal relationships among the arm stroke components (macrostructure) rather than the components themselves (microstructure). Regarding this suggestion, 2 aspects deserve to be highlighted: (i) our findings were obtained in a real competition context with elite athletes; (ii) the results had a moderate power of generalization, which reinforces the possibility of providing useful insights for the design of practice tasks in swimming.

Disclosure statement

No author has any financial interest or received any financial benefit from this research.
// src/api/java/appeng/api/implementations/items/IAEWrench.java
package appeng.api.implementations.items;

import net.minecraft.entity.player.EntityPlayer;
import net.minecraft.item.ItemStack;
import net.minecraft.util.math.BlockPos;

/**
 * Implemented on AE's wrench(s) as a substitute for if BC's API is not
 * available.
 */
public interface IAEWrench {

    /**
     * Check if the wrench can be used.
     *
     * @param wrench the wrench item stack
     * @param player wrenching player
     * @param pos    of block.
     *
     * @return true if wrench can be used
     */
    boolean canWrench(ItemStack wrench, EntityPlayer player, BlockPos pos);
}
// Assumed imports for the RequisitionHeader class below: @SerializedName is
// Gson's annotation and needBy is a java.util.Date. The Item type used for
// requisitionItems is defined elsewhere in the same project.
import com.google.gson.annotations.SerializedName;
import java.util.Date;
/** * Class for requisition header * */ public class RequisitionHeader { @SerializedName("Name") public String name; @SerializedName("Comment") public String comment; @SerializedName("NeedBy") public Date needBy; @SerializedName("Requester") public String requester; @SerializedName("Preparer") public String preparer; @SerializedName("RequisitionItems") public Item[] requisitionItems; /** * Returns name * * @return name */ public String getName() { return name; } /** * Sets name * * @param name */ public void setName(String name) { this.name = name; } /** * Returns comment * * @return comment */ public String getComment() { return comment; } /** * Sets comment * * @param comment */ public void setComment(String comment) { this.comment = comment; } /** * Returns need by * * @return need by */ public Date getNeedBy() { return needBy; } /** * Sets need by * * @param needBy */ public void setNeedBy(Date needBy) { this.needBy = needBy; } /** * Returns requester * * @return requester */ public String getRequester() { return requester; } /** * Sets requester * * @param requester */ public void setRequester(String requester) { this.requester = requester; } /** * Returns preparer * * @return preparer */ public String getPreparer() { return preparer; } /** * Sets preparer * * @param preparer */ public void setPreparer(String preparer) { this.preparer = preparer; } /** * Returns requisition items * * @return requisition items */ public Item[] getRequisitionItems() { return requisitionItems; } /** * Sets requisition items * * @param requisitionItems */ public void setRequisitionItems(Item[] requisitionItems) { this.requisitionItems = requisitionItems; } }
# The following imports are needed to execute the tests.
import django
import os

# Uncomment to run all tests in one shot:
# if __name__ == "__main__":
#     os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'tests_settings')
#     django.setup()
#     TestRunner = get_runner(settings)
#     test_runner = TestRunner()
#     failures = test_runner.run_tests(["tests"])
#     sys.exit(bool(failures))

from django.conf import settings
from django.test.utils import get_runner
from django.test import TestCase
from django.test import Client
from burnDownBackEnd.models import Company, Team, Sprint, Pbi
from burnDownBackEnd.forms import SprintForm, PbiForm
from burnDownBackEnd.serializers import CompanySerializer, PbiSerializer
from datetime import datetime, timedelta, date
from django.utils import timezone
import logging
from rest_framework.test import APIRequestFactory, APIClient, APITestCase
from rest_framework import status
from django.db.models import Q
from django.urls import reverse
from django.core import mail
import json

# Create your tests here.
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)


class Test_CompanyTestCase(TestCase):
    def setUp(self):
        # Team.objects.create(name="team1", pouet=444)
        Company.objects.create(name="company1")

    def test_company_created(self):
        comp = Company.objects.get(name="company1")
        self.assertEqual(comp.name, "company1")


class TeamTestCase(TestCase):
    def setUp(self):
        comp = Company.objects.create(name="company1")
        Team.objects.create(name="team1", pouet=444, company=comp)

    def test_team_have_name(self):
        team1 = Team.objects.get(name="team1")
        self.assertEqual(team1.was_created_recently(), True)
        self.assertEqual(team1.name, "team1")


# initialize the APIClient app
client = Client()


# from django.test.utils import override_settings  # to send real email:
# @override_settings(EMAIL_BACKEND='django.core.mail.backends.smtp.EmailBackend')
class PbiTestCase(APITestCase):
    def setUp(self):
        comp = Company.objects.create(name="companyPbiTest")
        team = Team.objects.create(name="teamPbi", pouet=444, company=comp)
        sprint = Sprint.objects.create(goal='goalPbi', team=team,
                                       start_date=timezone.now(),
                                       end_date=timezone.now() + timedelta(days=7))
        pbi1 = Pbi.objects.create(sprint=sprint, pbi_type="US", state="NEW", story_points=5,
                                  local_id="one", title="title 1", link="http://link1.com", area="area1")
        pbi2 = Pbi.objects.create(sprint=sprint, pbi_type="US", state="NEW", story_points=5,
                                  local_id="two", title="title 2", link="http://link2.com", area="area1")

    def test_pbi_form(self):
        sprint = Sprint.objects.get(goal='goalPbi')
        form = PbiForm(data={'sprint': sprint.id, 'snapshot_date': timezone.now(),
                             'pbi_type': 'US', 'state': 'NEW', 'story_points': 0,
                             'local_id': 'zero', 'title': 'pbiForm', 'link': 'http://pbiform.com',
                             'area': 'forms', 'is_interruption': False})
        self.assertTrue(form.is_valid())

    def test_pbi_no_sprint(self):
        sprint = Sprint.objects.get(goal='goalPbi')
        form = PbiForm(data={'snapshot_date': timezone.now(), 'pbi_type': 'US',
                             'state': 'NEW', 'story_points': 0,
                             'local_id': 'zero', 'title': 'pbiForm', 'link': 'http://pbiform.com',
                             'area': 'forms', 'is_interruption': False})
        self.assertFalse(form.is_valid())
        self.assertEqual(form.errors['__all__'], ['Sprint cannot be null'])

    def test_pbi_cannot_create_tomorrow(self):
        sprint = Sprint.objects.get(goal='goalPbi')
        form = PbiForm(data={'snapshot_date': timezone.now() + timedelta(days=1),
                             'pbi_type': 'US', 'state': 'NEW', 'story_points': 0,
                             'local_id': 'zero', 'title': 'pbiForm', 'link': 'http://pbiform.com',
                             'area': 'forms', 'is_interruption': False})
        self.assertFalse(form.is_valid())
        self.assertEqual(form.errors['__all__'], ['Pbi cannot be in the future'])

    def test_pbi_change_sprint_auto(self):
        team = Team.objects.get(name="teamPbi")
        oldSprint = Sprint.objects.create(goal='goalPbiold', team=team,
                                          start_date=timezone.now() - timedelta(days=20),
                                          end_date=timezone.now() - timedelta(days=10))
        currentSprint = Sprint.objects.get(goal='goalPbi')
        pbi = PbiForm(data={'sprint': oldSprint.id,
                            'snapshot_date': timezone.now().strftime("%Y-%m-%d"),
                            'pbi_type': 'US', 'state': 'NEW', 'story_points': 0,
                            'local_id': 'zero', 'title': 'pbiForm', 'link': 'http://pbiform.com',
                            'area': 'forms', 'is_interruption': False})
        self.assertTrue(pbi.is_valid())
        pouet = pbi.save()
        self.assertEqual(pouet.sprint.id, currentSprint.id)

    # NOTE: the serializer does not go through form validation.
    def test_serilalizer_call_clean(self):
        currentSprint = Sprint.objects.get(goal='goalPbi')
        pbi = Pbi()
        pbi.area = 'area'
        pbi.link = 'http://plouf.com'
        pbi.local_id = 'lid'
        pbi.title = 'title'
        pbi.sprint = currentSprint
        pbi.pbi_type = 'US'
        pbi.state = 'NEW'
        pbi.is_interruption = False
        pbi.snapshot_date = date.today().strftime("%Y-%m-%d")
        pbi.story_points = 33
        testreverse = reverse('burnDown:pbi_list')
        serializer = PbiSerializer(instance=[pbi], many=True)
        response = client.post(testreverse, json.dumps(serializer.data),
                               content_type="application/json")
        self.assertEqual(response.status_code, status.HTTP_200_OK)

    def test_pbi_create_sprint_auto(self):
        comp = Company.objects.create(name="companyPbiTestAutoCreate")
        team = Team.objects.create(name="teamPbiAutoCreate", pouet=555, company=comp)
        oldSprint = Sprint.objects.create(goal='goalPbiold2', team=team,
                                          start_date=timezone.now() - timedelta(days=11),
                                          end_date=timezone.now() - timedelta(days=1))
        pbi = PbiForm(data={'sprint': oldSprint.id, 'snapshot_date': timezone.now(),
                            'pbi_type': 'US', 'state': 'NEW', 'story_points': 0,
                            'local_id': 'zero', 'title': 'pbiForm', 'link': 'http://pbiform.com',
                            'area': 'forms', 'is_interruption': False})
        self.assertTrue(pbi.is_valid())
        pouet = pbi.save()
        self.assertNotEqual(pouet.sprint.id, oldSprint.id)
        self.assertEqual(pouet.snapshot_date, pouet.sprint.start_date)
        self.assertEqual(pouet.snapshot_date + timedelta(days=10), pouet.sprint.end_date)
        self.assertEqual(pouet.sprint.goal, 'GOAL UNDEFINED')
        self.assertEqual(len(mail.outbox), 1)

    def test_pbi_create_sprint_auto_rest(self):
        comp = Company.objects.create(name="companyPbiTest555")
        team = Team.objects.create(name="teamPbiTest", pouet=555, company=comp)
        sprint = Sprint.objects.create(goal='goalPbi', team=team,
                                       start_date=timezone.now() - timedelta(days=10),
                                       end_date=timezone.now() - timedelta(days=1))
        pbi = Pbi()
        pbi.area = 'area'
        pbi.link = 'http://plouf.com'
        pbi.local_id = 'lid'
        pbi.title = 'title'
        pbi.sprint = sprint
        pbi.pbi_type = 'US'
        pbi.state = 'NEW'
        pbi.is_interruption = False
        pbi.snapshot_date = date.today().strftime("%Y-%m-%d")
        pbi.story_points = 33
        testreverse = reverse('burnDown:pbi_list')
        serializer = PbiSerializer(instance=[pbi], many=True)
        response = client.post(testreverse, json.dumps(serializer.data),
                               content_type="application/json")
        self.assertEqual(response.status_code, status.HTTP_200_OK)

    def test_rest_get_serialized_pbis(self):
        response = client.get(reverse('burnDown:pbi_list'), format='json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        pbis = Pbi.objects.filter(Q(local_id='one') | Q(local_id='two')).order_by('snapshot_date')
        self.assertEqual(pbis[0].local_id, response.json()[0]['local_id'])
        self.assertEqual(pbis[1].local_id, response.json()[1]['local_id'])
        self.assertEqual(len(response.json()), len(pbis))
        # The following stricter comparisons do not work (reason unknown):
        # serializer = PbiSerializer(pbis, many=True)
        # self.assertEqual(serializer.data, response.json())
        # self.assertEqual(list(pbis.values()), response.json())
        # self.assertQuerysetEqual(serializer.data, response.json(), ordered=False)

    def test_post_update_serialize_pbi(self):
        pbis = Pbi.objects.filter(Q(local_id='one') | Q(local_id='two')).order_by('local_id')
        # Evaluate the whole queryset, otherwise the in-memory updates are lost; see
        # https://stackoverflow.com/questions/22980448/django-objects-filter-is-not-updating-field-but-objects-get-is
        bool(pbis)
        pbis[0].state = "ACTIVE"
        pbis[1].story_points = 5
        serializer = PbiSerializer(pbis, many=True)
        response = client.post(reverse('burnDown:pbi_list'),
                               data=json.dumps(serializer.data),
                               content_type='application/json')
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        pbis = Pbi.objects.filter(Q(local_id='one') | Q(local_id='two')).order_by('snapshot_date')
        self.assertEqual(pbis[0].state, "ACTIVE")
        self.assertEqual(pbis[1].story_points, 5)

    def test_get_list_date_pbis(self):
        sprint = Sprint.objects.get(goal='goalPbi')
        testreverse = (reverse('burnDown:pbisByDate', kwargs={'sprint_id': sprint.id})
                       + '?_date=' + date.today().strftime("%d %m %Y"))
        response = client.get(testreverse)
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        pbis = Pbi.objects.filter(Q(local_id='one') | Q(local_id='two')).order_by('snapshot_date')
        serializer = PbiSerializer(pbis, many=True)
        self.assertEqual(response.json(), serializer.data)


# TEST COMMANDS
from io import StringIO
from django.core.management import call_command


class CommandTest(TestCase):
    def setUp(self):
        comp = Company.objects.create(name="companyPbiTest")
        team1 = Team.objects.create(name="team1", pouet=444, company=comp)
        team2 = Team.objects.create(name="team2", pouet=444, company=comp)
        sprint = Sprint.objects.create(goal='goalPbi', team=team1,
                                       start_date=timezone.now(),
                                       end_date=timezone.now() + timedelta(days=7))
        sprint2 = Sprint.objects.create(goal='goalPbi', team=team1,
                                        start_date=timezone.now(),
                                        end_date=timezone.now() + timedelta(days=7))
        pbi1 = Pbi.objects.create(sprint=sprint, pbi_type="US", state="NEW", story_points=5,
                                  local_id="one", title="title 1", link="http://link1.com", area="area1")
        pbi2 = Pbi.objects.create(sprint=sprint, pbi_type="US", state="NEW", story_points=5,
                                  local_id="two", title="title 2", link="http://link2.com", area="area1")
        pbit2 = PbiForm(data={'sprint': sprint2.id,
                              'snapshot_date': timezone.now() - timedelta(days=1),
                              'pbi_type': 'US', 'state': 'NEW', 'story_points': 0,
                              'local_id': 'zero', 'title': 'pbiForm', 'link': 'http://pbiform.com',
                              'area': 'forms', 'is_interruption': False})

    def test_command_output(self):
        out = StringIO()
        call_command('InactivityEmail', stdout=out)
        if datetime.today().weekday() < 5:
            self.assertIn('Successfully checked activity', out.getvalue())
            self.assertEqual(len(mail.outbox), 1)
// rasterizer/rasterizer.go (repo: m-rei/go-3d-rasterizer)
package rasterizer

import (
	m "go-3d-rasterizer/math3d"
	"math"
)

// Scene contains all the matrices needed to transform a vertex into screen coordinates
type Scene struct {
	ModelViewMatrix  m.Matrix
	ProjectionMatrix m.Matrix
	ViewportMatrix   m.Matrix
	Buffers          buffers
	width            int
	height           int
	wh               int
}

type buffers struct {
	FrameBuffer []m.Vector
	DepthBuffer []float64
}

// LightingCalcCb is a callback function type for lighting calculation
type LightingCalcCb func(w, u, t float64, frameBufferIdx int)

// NewScene creates a new scene struct
func NewScene(winWidth, winHeight, fov, zNear, zFar float64) *Scene {
	return &Scene{
		ModelViewMatrix: m.IdentityMatrix(),
		// build the projection from the supplied parameters
		ProjectionMatrix: m.ProjectionMatrix(fov, winWidth/winHeight, zNear, zFar),
		ViewportMatrix:   m.Viewport(0, 0, winWidth, winHeight),
		Buffers: buffers{
			FrameBuffer: make([]m.Vector, int(winWidth*winHeight)),
			DepthBuffer: make([]float64, int(winWidth*winHeight)),
		},
		width:  int(winWidth),
		height: int(winHeight),
		wh:     int(winWidth * winHeight),
	}
}

// ClearBuffers clears the buffers
func (s *Scene) ClearBuffers(clearColor m.Vector) {
	i := 0
	for y := 0; y < s.height; y++ {
		for x := 0; x < s.width; x++ {
			s.Buffers.FrameBuffer[i] = clearColor
			s.Buffers.DepthBuffer[i] = 1
			i++
		}
	}
}

// VectorToScreencoords converts a vector to screen coordinates
func (s *Scene) VectorToScreencoords(v m.Vector) m.Vector {
	v = m.Transform(s.ModelViewMatrix, v, false)
	v = m.Transform(s.ProjectionMatrix, v, false)
	v = m.Mul(v, 1./v.W)
	v = m.Transform(s.ViewportMatrix, v, false)
	return v
}

// RasterizeLine draws a line from a to b with the given color, using Bresenham's line algorithm found here
// https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm#All_cases
func (s *Scene) RasterizeLine(a, b, colorA, colorB m.Vector) {
	a = s.VectorToScreencoords(a)
	b = s.VectorToScreencoords(b)
	x0, y0 := int(a.X), int(a.Y)
	x1, y1 := int(b.X), int(b.Y)
	dx := int(math.Abs(float64(x1 - x0)))
	sx := 1
	if x0 >= x1 {
		sx = -1
	}
	dy := int(-math.Abs(float64(y1 - y0)))
	sy := 1
	if y0 >= y1 {
		sy = -1
	}
	err := dx + dy
	for {
		var t float64
		if b.X-a.X == 0 {
			t = (float64(y0) - a.Y) / (b.Y - a.Y)
		} else {
			t = (float64(x0) - a.X) / (b.X - a.X)
		}
		depth := a.Z*(1-t) + b.Z*t
		if y0 >= 0 && y0 < s.height && x0 >= 0 && x0 < s.width && depth >= 0 && depth <= 1.0 {
			idx := s.width*y0 + x0
			s.Buffers.FrameBuffer[idx] = lerpTriColor(colorA, colorB, m.Vector{}, 1-t, t, 0)
			s.Buffers.DepthBuffer[idx] = depth
		}
		if x0 == x1 && y0 == y1 {
			break
		}
		e2 := 2 * err
		if e2 >= dy {
			err += dy
			x0 += sx
		}
		if e2 <= dx {
			err += dx
			y0 += sy
		}
	}
}

func sf(x, y int, a, b, c m.Vector) float64 {
	return ((a.Y-c.Y)*float64(x) + (c.X-a.X)*float64(y) + a.X*c.Y - c.X*a.Y) /
		((a.Y-c.Y)*b.X + (c.X-a.X)*b.Y + a.X*c.Y - c.X*a.Y)
}

func tf(x, y int, a, b, c m.Vector) float64 {
	return ((a.Y-b.Y)*float64(x) + (b.X-a.X)*float64(y) + a.X*b.Y - b.X*a.Y) /
		((a.Y-b.Y)*c.X + (b.X-a.X)*c.Y + a.X*b.Y - b.X*a.Y)
}

// RasterizeTriangle draws a triangle with the three vectors a, b and c and the given color
// optimization: incremental barycentric coordinate calculation (u,t)
// source: http://gamma.cs.unc.edu/graphicscourse/09_rasterization.pdf (page 32,33)
func (s *Scene) RasterizeTriangle(a, b, c, colorA, colorB, colorC m.Vector, lightCalcCb LightingCalcCb) {
	a = s.VectorToScreencoords(a)
	b = s.VectorToScreencoords(b)
	c = s.VectorToScreencoords(c)
	bbMinX := int(math.Ceil(math.Min(math.Min(a.X, b.X), c.X)))
	bbMinY := int(math.Ceil(math.Min(math.Min(a.Y, b.Y), c.Y)))
	bbMaxX := int(math.Ceil(math.Max(math.Max(a.X, b.X), c.X)))
	bbMaxY := int(math.Ceil(math.Max(math.Max(a.Y, b.Y), c.Y)))
	u := sf(bbMinX, bbMinY, a, b, c)
	ux := sf(bbMinX+1, bbMinY, a, b, c) - u
	uy := sf(bbMinX, bbMinY+1, a, b, c) - u
	t := tf(bbMinX, bbMinY, a, b, c)
	tx := tf(bbMinX+1, bbMinY, a, b, c) - t
	ty := tf(bbMinX, bbMinY+1, a, b, c) - t
	n := float64(bbMaxX - bbMinX + 1)
	for y := bbMinY; y <= bbMaxY; y++ {
		idxOffset := s.width*y + bbMinX
		for x := bbMinX; x <= bbMaxX; x++ {
			insideViewport := x >= 0 && x < s.width && y >= 0 && y < s.height
			if t >= 0 && u >= 0 && t+u <= 1 && insideViewport {
				w := 1. - u - t
				depth := a.Z*w + b.Z*u + c.Z*t
				if depth >= 0. && depth <= s.Buffers.DepthBuffer[idxOffset] {
					s.Buffers.FrameBuffer[idxOffset] = lerpTriColor(colorA, colorB, colorC, w, u, t)
					s.Buffers.DepthBuffer[idxOffset] = depth
					if lightCalcCb != nil {
						lightCalcCb(w, u, t, idxOffset)
					}
				}
			}
			idxOffset++
			u += ux
			t += tx
		}
		u += uy - n*ux
		t += ty - n*tx
	}
}

// DrawTriangleWireframe draws a triangle with lines
func (s *Scene) DrawTriangleWireframe(a, b, c, colorA, colorB, colorC m.Vector) {
	s.RasterizeLine(a, b, colorA, colorB)
	s.RasterizeLine(b, c, colorB, colorC)
	s.RasterizeLine(c, a, colorC, colorA)
}

// DrawAxisLines draws an axis
func (s *Scene) DrawAxisLines(size float64) {
	xAxisA := m.Vector{X: -size, Y: 0, Z: 0, W: 1}
	xAxisB := m.Vector{X: size, Y: 0, Z: 0, W: 1}
	yAxisA := m.Vector{X: 0, Y: -size, Z: 0, W: 1}
	yAxisB := m.Vector{X: 0, Y: size, Z: 0, W: 1}
	zAxisA := m.Vector{X: 0, Y: 0, Z: -size, W: 1}
	zAxisB := m.Vector{X: 0, Y: 0, Z: size, W: 1}
	red := m.Vector{X: 1, Y: 0, Z: 0, W: 1}
	green := m.Vector{X: 0, Y: 1, Z: 0, W: 1}
	blue := m.Vector{X: 0, Y: 0, Z: 1, W: 1}
	s.RasterizeLine(xAxisA, xAxisB, red, red)
	s.RasterizeLine(yAxisA, yAxisB, green, green)
	s.RasterizeLine(zAxisA, zAxisB, blue, blue)
}

// DrawQuad renders 2 triangles
func (s *Scene) DrawQuad(a, b, c, d m.Vector, ca, cb, cc, cd m.Vector) {
	s.RasterizeTriangle(a, b, c, ca, cb, cc, nil)
	s.RasterizeTriangle(a, c, d, ca, cc, cd, nil)
}

// DrawCube renders a cube
func (s *Scene) DrawCube(center, color m.Vector, size float64) {
	size /= 2
	v := []m.Vector{
		{X: -size, Y: -size, Z: -size, W: 1},
		{X: -size, Y: +size, Z: -size, W: 1},
		{X: +size, Y: +size, Z: -size, W: 1},
		{X: +size, Y: -size, Z: -size, W: 1},
		{X: -size, Y: -size, Z: +size, W: 1},
		{X: -size, Y: +size, Z: +size, W: 1},
		{X: +size, Y: +size, Z: +size, W: 1},
		{X: +size, Y: -size, Z: +size, W: 1},
	}
	for i := range v {
		v[i] = m.Add(v[i], center)
	}
	s.DrawQuad(v[0], v[1], v[2], v[3], color, color, color, color)
	s.DrawQuad(v[4], v[5], v[6], v[7], color, color, color, color)
	s.DrawQuad(v[0], v[4], v[7], v[3], color, color, color, color)
	s.DrawQuad(v[1], v[5], v[6], v[2], color, color, color, color)
	s.DrawQuad(v[0], v[1], v[5], v[4], color, color, color, color)
	s.DrawQuad(v[2], v[3], v[7], v[6], color, color, color, color)
}

func lerpTriColor(c1, c2, c3 m.Vector, s, t, u float64) m.Vector {
	return m.Add(m.Add(m.Mul(c1, s), m.Mul(c2, t)), m.Mul(c3, u))
}
from typing import Sequence

import cirq


def operations(self, qubits: Sequence[cirq.Qid]) -> cirq.OP_TREE:
    # Method fragment: the body is elided in the original; imports are added
    # so the signature is self-contained.
    ...
Jeannine Goodhue regularly shops at Market Basket for its low prices. But on Tuesday she was wheeling a shopping cart stuffed with groceries from the Stop & Shop in Reading because she could not bring herself to patronize Market Basket while its employees were fighting management over control of the company. “It’s just like crossing a picket line, it feels just like that,” said Goodhue, who lives nearby in Melrose. Not that she was happy about shifting allegiance; the peaches at Stop & Shop, she noted tartly, were 50 cents a pound more than at Market Basket. Supermarkets across Massachusetts are experiencing a rush of business from customers once loyal to Market Basket. Many say they are honoring the Market Basket employees’ request to boycott their chain, while some are turned off by the aisles of empty shelves as the widespread employee protests threaten to paralyze the 71-store chain. Meanwhile, the new Market Basket co-chief executives sought to stanch the loss of customers such as Goodhue, issuing a request to their 25,000 employees to call off the protest and return the stores to normal operations. “We understand the strain and emotion facing Market Basket associates,” co-chief executives Felicia Thornton and James Gooch said in their statement. “We strongly encourage all associates to return their focus to Market Basket’s customers, their needs and expectations.” But many customers had already voted with their feet, and evidence of a mass flight from Market Basket was not hard to find at competitors with stores nearby. In the middle of the day all the checkout lines at the Stop & Shop in Reading, for example, were busy with a queue of at least five customers.
Photo (Joanne Rather/Globe Staff): Customers walked in and out of Market Basket with protesting workers outside a Lowell store on Tuesday. The parking lot was nearly full, and a handwritten sign out front advertised the store was “hiring for all positions.” Meanwhile, the Market Basket nearby was described as a “ghost town” by two customers who went in to shop. In Lowell, a Hannaford supermarket was filled with the kinds of crowds normally seen on a weekend. Deirdre O’Connor usually shops at the Market Basket across the street, but out of support for the protesting employees found herself competing for quickly disappearing stock at Hannaford. “I’ve popped in and out of Hannaford, but I’ve never seen this Hannaford as busy as it is today. The shelves were bare,” said O’Connor. Back at Market Basket, customers taped receipts from purchases made at competing food stores on the glass windows, evidence of their support for the employees’ boycott. Hannaford and Stop & Shop declined to comment. Although the current crisis dates to late June, when the directors of the parent company, Demoulas Super Markets Inc., fired longtime president Arthur T. Demoulas, the origins of the feud are decades old. Demoulas has been fighting with his cousin, Arthur S. Demoulas, for control of the company since the 1990s, and the power bloc on the board of directors finally moved in Arthur S. Demoulas’s favor last year. The bitter family fight had largely been confined to inside the company until the dismissal of Arthur T. Demoulas, who was extremely popular among rank-and-file employees. What began as a rally and public protest over his firing has spread into a cause célèbre in which Demoulas and the protesting employees have won widespread support. So far eight Market Basket supervisors and managers have been fired.
Gooch and Thornton said the “individuals who were terminated took significant actions that harmed the company and therefore compromised Market Basket’s ability to be there for our customers. We took the difficult step of termination only after we saw no alternative.” While protests Tuesday were largely peaceful, one Market Basket employee was arrested outside a store in Epping, N.H., and charged with reckless driving. Kevin Griffin, a longtime grocery industry analyst, said the employees were playing “a dangerous game” by openly encouraging customers to spend their money at competitors. “This is somewhat of a powder keg,” said Griffin, publisher of The Griffin Report of Food Marketing, based in Duxbury. “I think it’s going to get worse before it gets better, but I don’t see it lasting much longer.” Meanwhile, reports of food shortages — especially perishables such as produce, seafood, and meat — throughout the chain continued to mount. The stores have not been getting fresh deliveries since Friday because workers at the company’s warehouse have abandoned their posts in support of Demoulas. The company, organizers have said, has tried to bring in replacements, but without staff and drivers in place, few deliveries are making it into stores. A representative for Demoulas Super Markets could not be reached Tuesday. An afternoon call to company headquarters went straight to a recorded message that the switchboard was “currently closed.” Market Basket suppliers were also feeling the effects. One Boston-based seafood vendor, who asked not to be named out of fear of losing Market Basket’s business, said daily orders from the chain have dried up. The vendor added that deliveries made Thursday probably never made it to stores, and by now that food has spoiled, probably costing Market Basket millions of dollars in lost product. And yet the numbers in support of Arthur T. Demoulas keep growing. 
By Tuesday the Facebook pages created by protesting employees had 73,000 fans, and more than 80 elected officials in Massachusetts and New Hampshire had signed petitions calling for boycotts unless management rehired Demoulas. At a Market Basket in Lowell, employees had set up a folding table next to the store entrance to collect signatures for a petition asking the company's directors to reinstate Demoulas. Jay Davis, a meat clerk at the Reading Market Basket, said the petitions signed by customers and other supporters filled the bed of a pickup truck that drove around stores to collect them. Two petitions circulating online had also collected some 18,000 signatures. Although he was pushing customers to boycott his chain, Davis said he and his colleagues would still keep the stores clean and presentable with what food remained. "If you want to boycott, boycott. If you don't, don't. We're not going to hold it against you," Davis said. "We're not going to shut down the store. I'm pretty sure that's illegal."

Erin Ailworth contributed to this report. Jack Newsham can be reached at [email protected]. Follow him on Twitter @TheNewsHam. Nina Joy Godlewski can be reached at [email protected]. Follow her on Twitter @NinaJGodlewski.
/**
 * adds all registries in SynBioHubFrontend and local preferences to doc's
 * internal registries
 */
public static void populateRegistries(SBOLDocument doc) {
	Registries.get().forEach(registry -> {
		if (registry.isMetadata()) {
			doc.addRegistry(registry.getLocation(), registry.getUriPrefix());
		}
	});
}
/*
 * Copyright (C) 2008-2009 QUALCOMM Incorporated.
 */

#ifndef __LINUX_MSM_CAMERA_H
#define __LINUX_MSM_CAMERA_H

#include <linux/types.h>
#include <asm/sizes.h>
#include <linux/ioctl.h>

#define MSM_CAM_IOCTL_MAGIC 'm'

#define MSM_CAM_IOCTL_GET_SENSOR_INFO \
	_IOR(MSM_CAM_IOCTL_MAGIC, 1, struct msm_camsensor_info *)

#define MSM_CAM_IOCTL_REGISTER_PMEM \
	_IOW(MSM_CAM_IOCTL_MAGIC, 2, struct msm_pmem_info *)

#define MSM_CAM_IOCTL_UNREGISTER_PMEM \
	_IOW(MSM_CAM_IOCTL_MAGIC, 3, unsigned)

#define MSM_CAM_IOCTL_CTRL_COMMAND \
	_IOW(MSM_CAM_IOCTL_MAGIC, 4, struct msm_ctrl_cmd *)

#define MSM_CAM_IOCTL_CONFIG_VFE \
	_IOW(MSM_CAM_IOCTL_MAGIC, 5, struct msm_camera_vfe_cfg_cmd *)

#define MSM_CAM_IOCTL_GET_STATS \
	_IOR(MSM_CAM_IOCTL_MAGIC, 6, struct msm_camera_stats_event_ctrl *)

#define MSM_CAM_IOCTL_GETFRAME \
	_IOR(MSM_CAM_IOCTL_MAGIC, 7, struct msm_camera_get_frame *)

#define MSM_CAM_IOCTL_ENABLE_VFE \
	_IOW(MSM_CAM_IOCTL_MAGIC, 8, struct camera_enable_cmd *)

#define MSM_CAM_IOCTL_CTRL_CMD_DONE \
	_IOW(MSM_CAM_IOCTL_MAGIC, 9, struct camera_cmd *)

#define MSM_CAM_IOCTL_CONFIG_CMD \
	_IOW(MSM_CAM_IOCTL_MAGIC, 10, struct camera_cmd *)

#define MSM_CAM_IOCTL_DISABLE_VFE \
	_IOW(MSM_CAM_IOCTL_MAGIC, 11, struct camera_enable_cmd *)

#define MSM_CAM_IOCTL_PAD_REG_RESET2 \
	_IOW(MSM_CAM_IOCTL_MAGIC, 12, struct camera_enable_cmd *)

#define MSM_CAM_IOCTL_VFE_APPS_RESET \
	_IOW(MSM_CAM_IOCTL_MAGIC, 13, struct camera_enable_cmd *)

#define MSM_CAM_IOCTL_RELEASE_FRAME_BUFFER \
	_IOW(MSM_CAM_IOCTL_MAGIC, 14, struct camera_enable_cmd *)

#define MSM_CAM_IOCTL_RELEASE_STATS_BUFFER \
	_IOW(MSM_CAM_IOCTL_MAGIC, 15, struct msm_stats_buf *)

#define MSM_CAM_IOCTL_AXI_CONFIG \
	_IOW(MSM_CAM_IOCTL_MAGIC, 16, struct msm_camera_vfe_cfg_cmd *)

#define MSM_CAM_IOCTL_GET_PICTURE \
	_IOW(MSM_CAM_IOCTL_MAGIC, 17, struct msm_camera_ctrl_cmd *)

#define MSM_CAM_IOCTL_SET_CROP \
	_IOW(MSM_CAM_IOCTL_MAGIC, 18, struct crop_info *)

#define MSM_CAM_IOCTL_PICT_PP \
	_IOW(MSM_CAM_IOCTL_MAGIC, 19, uint8_t *)

#define MSM_CAM_IOCTL_PICT_PP_DONE \
	_IOW(MSM_CAM_IOCTL_MAGIC, 20, struct msm_snapshot_pp_status *)

#define MSM_CAM_IOCTL_SENSOR_IO_CFG \
	_IOW(MSM_CAM_IOCTL_MAGIC, 21, struct sensor_cfg_data *)

#define MSM_CAMERA_LED_OFF  0
#define MSM_CAMERA_LED_LOW  1
#define MSM_CAMERA_LED_HIGH 2

#define MSM_CAM_IOCTL_FLASH_LED_CFG \
	_IOW(MSM_CAM_IOCTL_MAGIC, 22, unsigned *)

#define MSM_CAM_IOCTL_UNBLOCK_POLL_FRAME \
	_IO(MSM_CAM_IOCTL_MAGIC, 23)

#define MSM_CAM_IOCTL_CTRL_COMMAND_2 \
	_IOW(MSM_CAM_IOCTL_MAGIC, 24, struct msm_ctrl_cmd *)

#define MAX_SENSOR_NUM  3
#define MAX_SENSOR_NAME 32

#define MSM_CAM_CTRL_CMD_DONE  0
#define MSM_CAM_SENSOR_VFE_CMD 1

/*****************************************************
 * structure
 *****************************************************/

/* define five type of structures for userspace <==> kernel
 * space communication:
 * command 1 - 2 are from userspace ==> kernel
 * command 3 - 4 are from kernel ==> userspace
 *
 * 1. control command: control command(from control thread),
 *    control status (from config thread);
 */
struct msm_ctrl_cmd {
	uint16_t type;
	uint16_t length;
	void *value;
	uint16_t status;
	uint32_t timeout_ms;
	int resp_fd;
};

struct msm_vfe_evt_msg {
	unsigned short type;	/* 1 == event (RPC), 0 == message (adsp) */
	unsigned short msg_id;
	unsigned int len;	/* size in, number of bytes out */
	void *data;
};

#define MSM_CAM_RESP_CTRL         0
#define MSM_CAM_RESP_STAT_EVT_MSG 1
#define MSM_CAM_RESP_V4L2         2
#define MSM_CAM_RESP_MAX          3

/* this one is used to send ctrl/status up to config thread */
struct msm_stats_event_ctrl {
	/* 0 - ctrl_cmd from control thread,
	 * 1 - stats/event kernel,
	 * 2 - V4L control or read request */
	int resptype;
	int timeout_ms;
	struct msm_ctrl_cmd ctrl_cmd;
	/* struct vfe_event_t stats_event; */
	struct msm_vfe_evt_msg stats_event;
};

/* 2. config command: config command(from config thread); */
struct msm_camera_cfg_cmd {
	/* what to config:
	 * 1 - sensor config, 2 - vfe config */
	uint16_t cfg_type;
	/* sensor config type */
	uint16_t cmd_type;
	uint16_t queue;
	uint16_t length;
	void *value;
};

#define CMD_GENERAL               0
#define CMD_AXI_CFG_OUT1          1
#define CMD_AXI_CFG_SNAP_O1_AND_O2 2
#define CMD_AXI_CFG_OUT2          3
#define CMD_PICT_T_AXI_CFG        4
#define CMD_PICT_M_AXI_CFG        5
#define CMD_RAW_PICT_AXI_CFG      6
#define CMD_STATS_AXI_CFG         7
#define CMD_STATS_AF_AXI_CFG      8
#define CMD_FRAME_BUF_RELEASE     9
#define CMD_PREV_BUF_CFG          10
#define CMD_SNAP_BUF_RELEASE      11
#define CMD_SNAP_BUF_CFG          12
#define CMD_STATS_DISABLE         13
#define CMD_STATS_ENABLE          14
#define CMD_STATS_AF_ENABLE       15
#define CMD_STATS_BUF_RELEASE     16
#define CMD_STATS_AF_BUF_RELEASE  17
#define UPDATE_STATS_INVALID      18

/* vfe config command: config command(from config thread) */
struct msm_vfe_cfg_cmd {
	int cmd_type;
	uint16_t length;
	void *value;
};

#define MAX_CAMERA_ENABLE_NAME_LEN 32
struct camera_enable_cmd {
	char name[MAX_CAMERA_ENABLE_NAME_LEN];
};

#define MSM_PMEM_OUTPUT1         0
#define MSM_PMEM_OUTPUT2         1
#define MSM_PMEM_OUTPUT1_OUTPUT2 2
#define MSM_PMEM_THUMBAIL        3
#define MSM_PMEM_MAINIMG         4
#define MSM_PMEM_RAW_MAINIMG     5
#define MSM_PMEM_AEC_AWB         6
#define MSM_PMEM_AF              7
#define MSM_PMEM_MAX             8

#define FRAME_PREVIEW_OUTPUT1 0
#define FRAME_PREVIEW_OUTPUT2 1
#define FRAME_SNAPSHOT        2
#define FRAME_THUMBAIL        3
#define FRAME_RAW_SNAPSHOT    4
#define FRAME_MAX             5

struct msm_pmem_info {
	int type;
	int fd;
	void *vaddr;
	uint32_t y_off;
	uint32_t cbcr_off;
	uint8_t active;
};

struct outputCfg {
	uint32_t height;
	uint32_t width;
	uint32_t window_height_firstline;
	uint32_t window_height_lastline;
};

#define OUTPUT_1                  0
#define OUTPUT_2                  1
#define OUTPUT_1_AND_2            2
#define CAMIF_TO_AXI_VIA_OUTPUT_2 3
#define OUTPUT_1_AND_CAMIF_TO_AXI_VIA_OUTPUT_2 4
#define OUTPUT_2_AND_CAMIF_TO_AXI_VIA_OUTPUT_1 5
#define LAST_AXI_OUTPUT_MODE_ENUM = OUTPUT_2_AND_CAMIF_TO_AXI_VIA_OUTPUT_1 6

#define MSM_FRAME_PREV_1 0
#define MSM_FRAME_PREV_2 1
#define MSM_FRAME_ENC    2

struct msm_frame {
	int path;
	unsigned long buffer;
	uint32_t y_off;
	uint32_t cbcr_off;
	int fd;
	void *cropinfo;
	int croplen;
};

#define STAT_AEAW 0
#define STAT_AF   1
#define STAT_MAX  2

struct msm_stats_buf {
	int type;
	unsigned long buffer;
	int fd;
};

#define MSM_V4L2_VID_CAP_TYPE 0
#define MSM_V4L2_STREAM_ON    1
#define MSM_V4L2_STREAM_OFF   2
#define MSM_V4L2_SNAPSHOT     3
#define MSM_V4L2_QUERY_CTRL   4
#define MSM_V4L2_GET_CTRL     5
#define MSM_V4L2_SET_CTRL     6
#define MSM_V4L2_QUERY        7
#define MSM_V4L2_MAX          8

struct crop_info {
	void *info;
	int len;
};

struct msm_postproc {
	int ftnum;
	struct msm_frame fthumnail;
	int fmnum;
	struct msm_frame fmain;
};

struct msm_snapshot_pp_status {
	void *status;
};

#define CFG_SET_MODE              0
#define CFG_SET_EFFECT            1
#define CFG_START                 2
#define CFG_PWR_UP                3
#define CFG_PWR_DOWN              4
#define CFG_WRITE_EXPOSURE_GAIN   5
#define CFG_SET_DEFAULT_FOCUS     6
#define CFG_MOVE_FOCUS            7
#define CFG_REGISTER_TO_REAL_GAIN 8
#define CFG_REAL_TO_REGISTER_GAIN 9
#define CFG_SET_FPS               10
#define CFG_SET_PICT_FPS          11
#define CFG_SET_BRIGHTNESS        12
#define CFG_SET_CONTRAST          13
#define CFG_SET_ZOOM              14
#define CFG_SET_EXPOSURE_MODE     15
#define CFG_SET_WB                16
#define CFG_SET_ANTIBANDING       17
#define CFG_SET_EXP_GAIN          18
#define CFG_SET_PICT_EXP_GAIN     19
#define CFG_SET_LENS_SHADING      20
#define CFG_GET_PICT_FPS          21
#define CFG_GET_PREV_L_PF         22
#define CFG_GET_PREV_P_PL         23
#define CFG_GET_PICT_L_PF         24
#define CFG_GET_PICT_P_PL         25
#define CFG_GET_AF_MAX_STEPS      26
#define CFG_GET_PICT_MAX_EXP_LC   27
#define CFG_MAX                   28

#define MOVE_NEAR 0
#define MOVE_FAR  1

#define SENSOR_PREVIEW_MODE      0
#define SENSOR_SNAPSHOT_MODE     1
#define SENSOR_RAW_SNAPSHOT_MODE 2

#define SENSOR_QTR_SIZE     0
#define SENSOR_FULL_SIZE    1
#define SENSOR_INVALID_SIZE 2

#define CAMERA_EFFECT_OFF        0
#define CAMERA_EFFECT_MONO       1
#define CAMERA_EFFECT_NEGATIVE   2
#define CAMERA_EFFECT_SOLARIZE   3
#define CAMERA_EFFECT_PASTEL     4
#define CAMERA_EFFECT_MOSAIC     5
#define CAMERA_EFFECT_RESIZE     6
#define CAMERA_EFFECT_SEPIA      7
#define CAMERA_EFFECT_POSTERIZE  8
#define CAMERA_EFFECT_WHITEBOARD 9
#define CAMERA_EFFECT_BLACKBOARD 10
#define CAMERA_EFFECT_AQUA       11
#define CAMERA_EFFECT_MAX        12

struct sensor_pict_fps {
	uint16_t prevfps;
	uint16_t pictfps;
};

struct exp_gain_cfg {
	uint16_t gain;
	uint32_t line;
};

struct focus_cfg {
	int32_t steps;
	int dir;
};

struct fps_cfg {
	uint16_t f_mult;
	uint16_t fps_div;
	uint32_t pict_fps_div;
};

struct sensor_cfg_data {
	int cfgtype;
	int mode;
	int rs;
	uint8_t max_steps;
	union {
		int8_t effect;
		uint8_t lens_shading;
		uint16_t prevl_pf;
		uint16_t prevp_pl;
		uint16_t pictl_pf;
		uint16_t pictp_pl;
		uint32_t pict_max_exp_lc;
		uint16_t p_fps;
		struct sensor_pict_fps gfps;
		struct exp_gain_cfg exp_gain;
		struct focus_cfg focus;
		struct fps_cfg fps;
	} cfg;
};

#define GET_NAME                     0
#define GET_PREVIEW_LINE_PER_FRAME   1
#define GET_PREVIEW_PIXELS_PER_LINE  2
#define GET_SNAPSHOT_LINE_PER_FRAME  3
#define GET_SNAPSHOT_PIXELS_PER_LINE 4
#define GET_SNAPSHOT_FPS             5
#define GET_SNAPSHOT_MAX_EP_LINE_CNT 6

struct msm_camsensor_info {
	char name[MAX_SENSOR_NAME];
	uint8_t flash_enabled;
};

#endif /* __LINUX_MSM_CAMERA_H */
Identification of proteins for controlled nucleation of metal-organic crystals for nanoenergetics

Here, we report that a marine sandworm (Nereis virens) jaw protein, Nvjp1, nucleates hemozoin with activity similar to the native parasite hemozoin protein, HisRPII. X-ray diffraction and scanning electron microscopy confirm the identity of the hemozoin produced from Nvjp1-containing reactions. Finally, we observed that nanoaluminum (nAl) assembled with hemozoin from Nvjp1 reactions has a substantially higher energetic output than analogous thermite made from the synthetic standard or from HisRPII-nucleated hemozoin. Our results demonstrate that a marine sandworm protein can nucleate malaria pigment, and they set the stage for engineering recombinant hemozoin production for nanoenergetic applications.
import subprocess


def warp(inputFile, outputFile, xsize, ysize, dst_srs, src_srs=None,
         doClip=False, xmin=None, xmax=None, ymin=None, ymax=None,
         method='bilinear'):
    command = 'gdalwarp -overwrite'
    if src_srs is not None:
        command = command + ' -s_srs "{}"'.format(src_srs)
    command = command + ' -t_srs "{}"'.format(dst_srs)
    if doClip:
        command = command + ' -te {} {} {} {}'.format(xmin, ymin, xmax, ymax)
    command = command + ' -tr {} {}'.format(xsize, ysize)
    command = command + ' -r {}'.format(method)
    command = command + ' "{}" "{}"'.format(inputFile, outputFile)
    print('GDAL command\n------------\n' + command)
    print('\nOutput\n------')
    try:
        # The command is a single string, so it must run through the shell;
        # shell=False here would fail because no argument list is built.
        retMessage = subprocess.check_output(command, shell=True)
        retMessage = retMessage.decode("utf-8")
    except subprocess.CalledProcessError as err:
        retMessage = "ERROR. GDAL returned code {}.\n{}\n".format(
            err.returncode, err.output.decode("utf-8"))
    return retMessage
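The wrapper above does nothing more than assemble a gdalwarp command line flag by flag. A hypothetical helper that mirrors only the string-assembly step makes the resulting command easy to inspect (and unit test) without invoking GDAL at all; the flag names follow the gdalwarp CLI.

```python
# Hypothetical helper mirroring warp()'s command assembly, for inspection
# without running GDAL. Same flags, same ordering.
def build_warp_command(inputFile, outputFile, xsize, ysize, dst_srs,
                       src_srs=None, doClip=False, xmin=None, xmax=None,
                       ymin=None, ymax=None, method='bilinear'):
    parts = ['gdalwarp -overwrite']
    if src_srs is not None:
        parts.append('-s_srs "{}"'.format(src_srs))
    parts.append('-t_srs "{}"'.format(dst_srs))
    if doClip:
        # gdalwarp expects -te as xmin ymin xmax ymax
        parts.append('-te {} {} {} {}'.format(xmin, ymin, xmax, ymax))
    parts.append('-tr {} {}'.format(xsize, ysize))
    parts.append('-r {}'.format(method))
    parts.append('"{}" "{}"'.format(inputFile, outputFile))
    return ' '.join(parts)

cmd = build_warp_command('in.tif', 'out.tif', 30, 30, 'EPSG:4326',
                         doClip=True, xmin=0, xmax=1, ymin=0, ymax=1)
print(cmd)
```

Separating command construction from execution also makes it straightforward to switch to `subprocess.check_output([...])` with an argument list later, avoiding shell quoting issues.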
package com.cloudera.sa.hive.utils;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.util.ArrayList;
import java.util.EnumSet;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.zip.GZIPInputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.BZip2Codec;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.io.compress.SnappyCodec;

/**
 * This is a job that runs on a client. It will take non-splittable gzip or zip files in HDFS
 * and uncompress them to write them back out to HDFS.
 * <br><br>
 * Note: this is not a map/reduce job, so it is limited to the network and processing power of one box.
 *
 * @author ted.malaska
 */
public class UnCompressor {

	public static int finishedThreaded = 0;

	public static void main(String[] args) throws IOException, InterruptedException {
		if (args.length < 3) {
			System.out.println("UnCompressor Help:");
			System.out.println("Parameters: <inputFilePath> <outputPath> <numOfThreads>");
			System.out.println();
			return;
		}

		String inputLocation = args[0];
		String outputLocation = args[1];
		int numOfThreads = Integer.parseInt(args[2]);

		ThreadPoolExecutor threadPool = new ThreadPoolExecutor(numOfThreads, numOfThreads,
				1, TimeUnit.MINUTES, new ArrayBlockingQueue<Runnable>(100, true));

		Configuration config = new Configuration();
		FileSystem hdfs = FileSystem.get(config);

		Path inputFilePath = new Path(inputLocation);
		ArrayList<Path> pathArray = new ArrayList<Path>();

		if (hdfs.isDirectory(inputFilePath)) {
			RemoteIterator<LocatedFileStatus> files = hdfs.listFiles(inputFilePath, false);
			while (files.hasNext()) {
				LocatedFileStatus lfs = files.next();
				pathArray.add(lfs.getPath());
			}
		} else {
			pathArray.add(inputFilePath);
		}

		for (Path path : pathArray) {
			UnCompresserThread thread = new UnCompresserThread(path, outputLocation, hdfs);
			System.out.println("Starting thread for file: " + path);
			threadPool.execute(thread);
		}

		while (finishedThreaded < pathArray.size()) {
			Thread.sleep(1000);
		}
		System.out.println("Finished all files");
		System.exit(0);
	}

	public static class UnCompresserThread implements Runnable {
		Path inputFilePath;
		String outputLocation;
		FileSystem hdfs;

		public UnCompresserThread(Path inputFilePath, String outputLocation, FileSystem hdfs) {
			this.inputFilePath = inputFilePath;
			this.outputLocation = outputLocation;
			this.hdfs = hdfs;
		}

		public void run() {
			try {
				if (inputFilePath.getName().endsWith(".zip")) {
					uncompressZipFile(outputLocation, hdfs, inputFilePath);
				} else if (inputFilePath.getName().endsWith("gz")
						|| inputFilePath.getName().endsWith("gzip")) {
					uncompressGzipFile(outputLocation, hdfs, inputFilePath);
				}
			} catch (Exception e) {
				e.printStackTrace();
			}
			System.out.println("Finished thread for file: " + inputFilePath);
			UnCompressor.finishedThreaded++;
		}

		private void uncompressGzipFile(String outputLocation, FileSystem hdfs,
				Path inputFilePath) throws IOException {
			FSDataInputStream inputStream = hdfs.open(inputFilePath);
			GZIPInputStream gzipStream = new GZIPInputStream(inputStream);
			BufferedReader reader = new BufferedReader(new InputStreamReader(gzipStream));
			try {
				String newFileName = inputFilePath.getName();
				newFileName = newFileName.substring(0, newFileName.lastIndexOf(".gz"));
				Path outputFilePath = new Path(outputLocation + "/" + newFileName);
				FSDataOutputStream outputStream = hdfs.create(outputFilePath);
				BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(outputStream));
				try {
					int singleChar = reader.read();
					while (singleChar > -1) {
						writer.write(singleChar);
						singleChar = reader.read();
					}
				} finally {
					if (writer != null) {
						writer.close();
						writer = null;
					}
				}
			} finally {
				if (reader != null) {
					reader.close();
					reader = null;
				}
			}
		}

		private void uncompressZipFile(String outputLocation, FileSystem hdfs,
				Path inputFilePath) throws IOException {
			ZipInputStream zipReader = getZipReader(hdfs, inputFilePath);
			BufferedReader reader = new BufferedReader(new InputStreamReader(zipReader));
			try {
				long counter = 0;
				ZipEntry ze;
				while ((ze = zipReader.getNextEntry()) != null) {
					String entryName = ze.getName();
					System.out.println("Entry Name: " + entryName + " " + ze.getSize());
					Path outputFilePath = new Path(outputLocation + "/" + entryName);
					FSDataOutputStream outputStream = hdfs.create(outputFilePath);
					BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(outputStream));
					try {
						int singleChar = reader.read();
						while (singleChar > -1) {
							writer.write(singleChar);
							singleChar = reader.read();
						}
					} finally {
						if (writer != null) {
							writer.close();
							writer = null;
						}
					}
				}
				System.out.println("Finished: Processed " + counter + " lines.");
			} finally {
				if (reader != null) {
					reader.close();
					reader = null;
				}
			}
		}

		public static ZipInputStream getZipReader(FileSystem hdfs, Path path) throws IOException {
			FSDataInputStream inputStream = hdfs.open(path);
			if (path.getName().endsWith("zip")) {
				ZipInputStream zipInputStream = new ZipInputStream(inputStream);
				System.out.println("processing zip file");
				return zipInputStream;
			} else {
				throw new IOException(
						"UnKnown compress type. Can only process files with ext of (zip)");
			}
		}
	}

	public static SequenceFile.Writer getSequenceFileWriter(Configuration config,
			FileSystem hdfs, Path path, String compressionCodecStr) throws IOException {
		// Create our writer
		SequenceFile.Metadata metaData = new SequenceFile.Metadata();
		EnumSet<CreateFlag> enumSet = EnumSet.of(CreateFlag.CREATE);

		return SequenceFile.createWriter(FileContext.getFileContext(), config, path,
				NullWritable.class, Text.class, SequenceFile.CompressionType.BLOCK,
				getCompressionCodec(compressionCodecStr), metaData, enumSet);
	}

	public static CompressionCodec getCompressionCodec(String value) {
		if (value.equals("snappy")) {
			return new SnappyCodec();
		} else if (value.equals("gzip")) {
			return new GzipCodec();
		} else if (value.equals("bzip2")) {
			return new BZip2Codec();
		} else {
			return new SnappyCodec();
		}
	}
}
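The core pattern UnCompressor applies — stream a non-splittable gzip file through a decompressor and copy the bytes back out, never holding the whole file in memory — can be sketched in a few lines of Python against local files (an assumption; the Java version targets HDFS). Unlike the character-by-character loop above, this copies in chunks, which is the usual way to keep throughput up.

```python
# Minimal local-file sketch of the stream-and-rewrite pattern; the Java
# tool does the same against HDFS streams.
import gzip
import os
import shutil
import tempfile

def uncompress_gzip(src_path, dst_path, chunk_size=64 * 1024):
    # gzip.open yields a decompressing file object; copyfileobj streams
    # it out chunk by chunk, so memory use stays bounded.
    with gzip.open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        shutil.copyfileobj(src, dst, chunk_size)

# Round-trip demonstration on a throwaway file.
tmp = tempfile.mkdtemp()
gz_path = os.path.join(tmp, 'data.txt.gz')
out_path = os.path.join(tmp, 'data.txt')
payload = b'hello world\n' * 1000
with gzip.open(gz_path, 'wb') as f:
    f.write(payload)
uncompress_gzip(gz_path, out_path)
```

The chunked copy is also safe for binary data, whereas the Reader/Writer pair in the Java code routes bytes through a character decoder and can corrupt non-text files.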
def make_url(name_or_url: str):
    # Accept an RFC 1738 URL string, a dict of connection parameters,
    # or nothing usable, in which case a bare "postgres" URL is returned.
    if isinstance(name_or_url, (str, bytes)):
        return _parse_rfc1738_args(name_or_url)
    elif isinstance(name_or_url, dict):
        return URL("postgres", **name_or_url)
    else:
        return URL("postgres")
package usecase

import (
	"context"
	"log"
	"time"

	"github.com/tamanyan/oauth2"
	_oauth2 "github.com/tamanyan/oauth2-server/oauth2"
	"github.com/tamanyan/oauth2-server/oauth2/http/request"
	"github.com/tamanyan/oauth2/errors"
)

// OAuth2Usecase usecase
type OAuth2Usecase struct {
	Manager oauth2.Manager
}

// NewOAuth2Usecase will create a new OAuth2Usecase object representation of the oauth2.Usecase interface
func NewOAuth2Usecase(Manager oauth2.Manager, mamatimeout time.Duration) _oauth2.Usecase {
	return &OAuth2Usecase{
		Manager: Manager,
	}
}

// IssuePasswordCredentialAccessToken Issue password credential access token
func (a *OAuth2Usecase) IssuePasswordCredentialAccessToken(ctx context.Context, request request.OAuth2PasswordCredentialRequest) (ti oauth2.TokenInfo, err error) {
	// Check Client ID and Secret
	cli, err := a.Manager.GetClient(request.ClientID)
	if err != nil || cli.GetSecret() != request.ClientSecret {
		err = errors.ErrInvalidClient
		return
	}
	// Check username and password
	if !(request.Username == "test" && request.Password == "<PASSWORD>") {
		err = errors.ErrInvalidGrant
		return
	}
	tgr := &oauth2.TokenGenerateRequest{
		ClientID:     request.ClientID,
		ClientSecret: request.ClientSecret,
		Scope:        request.Scope,
		UserID:       request.Username,
	}
	ti, err = a.Manager.GenerateAccessToken(oauth2.GrantType(request.GrantType), tgr)
	return
}

// IssueRefreshAccessToken Issue refresh token
func (a *OAuth2Usecase) IssueRefreshAccessToken(ctx context.Context, request request.OAuth2RefreshTokenRequest) (ti oauth2.TokenInfo, err error) {
	// Check Client ID and Secret
	cli, err := a.Manager.GetClient(request.ClientID)
	if err != nil || cli.GetSecret() != request.ClientSecret {
		err = errors.ErrInvalidClient
		return
	}
	tgr := &oauth2.TokenGenerateRequest{
		ClientID:     request.ClientID,
		ClientSecret: request.ClientSecret,
		Scope:        request.Scope,
		Refresh:      request.RefreshToken,
	}
	ti, err = a.Manager.RefreshAccessToken(tgr)
	log.Println(ti)
	return
}

// IssueClientCredentialAccessToken will issue client credential access token
func (a *OAuth2Usecase) IssueClientCredentialAccessToken(ctx context.Context, request request.OAuth2ClientCredentialRequest) (ti oauth2.TokenInfo, err error) {
	// Check Client ID and Secret
	cli, err := a.Manager.GetClient(request.ClientID)
	if err != nil || cli.GetSecret() != request.ClientSecret {
		err = errors.ErrInvalidClient
		return
	}
	tgr := &oauth2.TokenGenerateRequest{
		ClientID:     request.ClientID,
		ClientSecret: request.ClientSecret,
		Scope:        request.Scope,
	}
	ti, err = a.Manager.GenerateAccessToken(oauth2.GrantType(request.GrantType), tgr)
	return
}

// RevokeAccessToken will revoke access token
func (a *OAuth2Usecase) RevokeAccessToken(ctx context.Context, token string) error {
	return a.Manager.RemoveAccessToken(token)
}

// VerifyAccessToken will verify access token
func (a *OAuth2Usecase) VerifyAccessToken(ctx context.Context, token string) (ti oauth2.TokenInfo, err error) {
	return a.Manager.LoadAccessToken(token)
}
export * from "./default-message"
export * from "./message"
#include <stdio.h>
#include "btree.h"

int
main(void)
{
	btree_t t;
	int rc = 0;

	t = make_btree(true);
	for (size_t j = 0U; j < 100U; j++) {
		btree_val_t *v = btree_put(t, 1.dd + j);
		if (btree_val_nil_p(*v)) {
			*v = (btree_val_t){make_plqu(), {1.dd + j}};
			printf("%p ", v);
		} else {
			rc = 1;
			break;
		}
		v = btree_get(t, 1.dd + j);
		if (v && !btree_val_nil_p(*v)) {
			printf(" %p %f\n", v, (double)v->sum.dis);
		} else {
			puts("");
			rc = 1;
			break;
		}
	}
	free_btree(t);
	return rc;
}
import json

from django.http import HttpResponse
from django.template.loader import get_template
from rest_framework import generics
from rest_framework.request import Request

from api.pdf import render as render_pdf


class SurveyPdfView(generics.GenericAPIView):
    # FIXME - restore authentication?
    permission_classes = ()  # permissions.IsAuthenticated,)

    def post(self, request: Request, name=None):
        tpl_name = "survey-{}.html".format(name)
        # return HttpResponseBadRequest('Unknown survey name')
        responses = json.loads(request.POST["data"])
        # responses = {'question1': 'test value'}
        template = get_template(tpl_name)
        html_content = template.render(responses)
        if name == "primary":
            instruct_template = get_template("instructions-primary.html")
            instruct_html = instruct_template.render(responses)
            docs = (instruct_html,) + (html_content,) * 4
            pdf_content = render_pdf(*docs)
        else:
            pdf_content = render_pdf(html_content)
        response = HttpResponse(content_type="application/pdf")
        response["Content-Disposition"] = 'attachment; filename="report.pdf"'
        response.write(pdf_content)
        return response
// check for duplicates will fail if we pass GlobalParameters as arg
static public String isValidSyncTaskName(Context c, ArrayList<SyncTaskItem> stl,
		String t_name, boolean checkDup, boolean showAllError) {
	String result = "", sep = "";
	if (t_name.length() > 0) {
		result = hasSyncTaskNameInvalidLength(c, t_name);
		if (!result.equals("")) sep = "\n";
		result += sep + hasSyncTaskNameUnusableCharacter(c, t_name);
		if (!result.equals("")) sep = "\n";
		if (!checkDup && isSyncTasknameExists(stl, t_name)) {
			result += sep + c.getString(R.string.msgs_duplicate_task_name);
		} else if (checkDup && isSyncTaskNameDuplicate(stl, t_name)) {
			result += sep + c.getString(R.string.msgs_duplicate_task_name);
		}
		if (!result.equals("")) sep = "\n";
		if (!result.equals("") && !showAllError && result.contains(sep)) {
			result = result.substring(0, result.indexOf(sep));
		}
	} else {
		result = c.getString(R.string.msgs_specify_task_name);
	}
	return result;
}
/**
 * Finds anagrams and builds a multidimensional array list. The size of the first dimension
 * is the number of sets of anagrams found; the second dimension holds one row per set of anagrams.
 *
 * Approach:
 * For each element of the input array, compare it to the first element of each row to determine if
 * it is an anagram of something already within the solution array. The first string of the input
 * will be added to the first ArrayList, since there is nothing yet to compare to. From that point,
 * loop over all rows and check if the next string is an anagram of the first word in the row.
 * Either append the string to an existing row or add it as the first string of a new row.
 *
 * @param possibleAnagrams ArrayList of strings such as "ate", "tea", "note", "tone"
 * @return multi-dimensional ArrayList<ArrayList<String>> where each row is a set of anagrams
 */
public ArrayList<ArrayList<String>> buildAnagramList(ArrayList<String> possibleAnagrams) {
	var solution = new ArrayList<ArrayList<String>>();
	for (String a : possibleAnagrams) {
		boolean inserted = false;
		for (ArrayList<String> s : solution) {
			if (isAnagram(s.get(0).toCharArray(), a.toCharArray())) {
				inserted = true;
				s.add(a);
			}
		}
		if (!inserted) {
			var list = new ArrayList<String>();
			list.add(a);
			solution.add(list);
		}
	}
	return solution;
}
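The grouping strategy described in the javadoc — compare each word against the first word of every existing group, append on a match, otherwise open a new group — is compact enough to sketch in Python (chosen here for brevity; the `isAnagram` helper is replaced by the standard sorted-characters check, which is equivalent for this purpose):

```python
# Sketch of the same grouping approach: two words are anagrams iff their
# sorted characters match, so comparing against the first word of each
# existing group reproduces the Java logic above.
def build_anagram_list(words):
    groups = []
    for w in words:
        for g in groups:
            if sorted(g[0]) == sorted(w):
                g.append(w)   # anagram of this group's representative
                break
        else:
            groups.append([w])  # no group matched; start a new one
    return groups

result = build_anagram_list(["ate", "tea", "note", "tone"])
print(result)
# → [['ate', 'tea'], ['note', 'tone']]
```

A dict keyed by the sorted characters would reduce the pairwise comparisons to a single lookup per word, turning the quadratic scan over groups into linear time.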
// test/examples/define.spec.ts
UTest({
    $config: {
        'http.process': {
            command: 'atma custom examples/index',
            matchReady: '/Listen /'
        }
    },
    $before: function (next) {
        UTest
            .server
            .request('http://localhost:5771/define')
            .done(next);
    },
    'counter: click to increment model and update the ui' (done, doc, win) {
        UTest
            .domtest(doc.body, `
                find('li[name=counter]') {
                    find ('i') > text 0;
                    find ('button') > do click;
                    find ('i') > text 1;
                }
            `)
            .always(() => done(doc, win));
    },
    'fooos: clickable' (done, doc, win) {
        UTest
            .domtest(doc.body, `
                find('li[name=foos] > section[name=clickable] > .container') {
                    text ('');
                    do click;
                    text ('foo');
                }
            `)
            .always(() => done(doc, win));
    },
    'fooos: pressable' (done, doc, win) {
        UTest
            .domtest(doc.body, `
                find('li[name=foos] > section[name=pressable] > input') {
                    val ('');
                    do keydown a;
                    val ('foo');
                    do keydown b;
                    do keydown c;
                    val ('foofoofoo');
                }
            `)
            .always(() => done(doc, win));
    }
})
<filename>src/it/unimi/dsi/fastutil/io/TextIO.java /* Generic definitions */ /* Assertions (useful to generate conditional code) */ /* Current type and class (and size, if applicable) */ /* Value methods */ /* Interfaces (keys) */ /* Interfaces (values) */ /* Abstract implementations (keys) */ /* Abstract implementations (values) */ /* Static containers (keys) */ /* Static containers (values) */ /* Implementations */ /* Synchronized wrappers */ /* Unmodifiable wrappers */ /* Other wrappers */ /* Methods (keys) */ /* Methods (values) */ /* Methods (keys/values) */ /* Methods that have special names depending on keys (but the special names depend on values) */ /* Equality */ /* Object/Reference-only definitions (keys) */ /* Primitive-type-only definitions (keys) */ /* Object/Reference-only definitions (values) */ /* * Copyright (C) 2005-2013 <NAME> * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package it.unimi.dsi.fastutil.io; import static it.unimi.dsi.fastutil.BigArrays.SEGMENT_MASK; import static it.unimi.dsi.fastutil.BigArrays.segment; import static it.unimi.dsi.fastutil.BigArrays.start; import java.io.*; import java.util.*; import it.unimi.dsi.fastutil.booleans.*; import it.unimi.dsi.fastutil.bytes.*; import it.unimi.dsi.fastutil.shorts.*; import it.unimi.dsi.fastutil.ints.*; import it.unimi.dsi.fastutil.longs.*; import it.unimi.dsi.fastutil.floats.*; import it.unimi.dsi.fastutil.doubles.*; /** Provides static methods to perform easily textual I/O. 
* * <P>This class fills a gap in the Java API: a natural operation on sequences * of primitive elements is to load or store them in textual form. This format * makes files humanly readable. * * <P>For each primitive type, this class provides methods that read elements * from a {@link BufferedReader} or from a filename (which will be opened * using a buffer of {@link #BUFFER_SIZE} bytes) into an array. Analogously, * there are methods that store the content of an array (fragment) or the * elements returned by an iterator to a {@link PrintStream} or to a given * filename. * * <P>Finally, there are useful wrapper methods that {@linkplain #asIntIterator(CharSequence) * exhibit a file as a type-specific iterator}. * * <P>Note that, contrarily to the binary case, there is no way to * {@linkplain BinIO#loadInts(CharSequence) load from a file without providing an array}. You can * easily work around the problem as follows: * <pre> * array = IntIterators.unwrap( TextIO.asIntIterator("foo") ); * </pre> * * @since 4.4 */ public class TextIO { private TextIO() {} /** The size of the buffer used for all I/O on files. 
*/ final public static int BUFFER_SIZE = 8 * 1024; /* Generic definitions */ /* Assertions (useful to generate conditional code) */ /* Current type and class (and size, if applicable) */ /* Value methods */ /* Interfaces (keys) */ /* Interfaces (values) */ /* Abstract implementations (keys) */ /* Abstract implementations (values) */ /* Static containers (keys) */ /* Static containers (values) */ /* Implementations */ /* Synchronized wrappers */ /* Unmodifiable wrappers */ /* Other wrappers */ /* Methods (keys) */ /* Methods (values) */ /* Methods (keys/values) */ /* Methods that have special names depending on keys (but the special names depend on values) */ /* Equality */ /* Object/Reference-only definitions (keys) */ /* Primitive-type-only definitions (keys) */ /* Object/Reference-only definitions (values) */ /* * Copyright (C) 2004-2013 <NAME> * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** Loads elements from a given fast buffered reader, storing them in a given array fragment. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). 
     */
    public static int loadBooleans( final BufferedReader reader, final boolean[] array, final int offset, final int length ) throws IOException {
        it.unimi.dsi.fastutil.booleans.BooleanArrays.ensureOffsetLength( array, offset, length );
        int i = 0;
        String s;
        try {
            for( i = 0; i < length; i++ )
                if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Boolean.parseBoolean( s );
                else break;
        }
        catch( EOFException itsOk ) {}
        return i;
    }

    /** Loads elements from a given buffered reader, storing them in a given array.
     *
     * @param reader a buffered reader.
     * @param array an array which will be filled with data from <code>reader</code>.
     * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends).
     */
    public static int loadBooleans( final BufferedReader reader, final boolean[] array ) throws IOException {
        return loadBooleans( reader, array, 0, array.length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment.
     *
     * @param file a file.
     * @param array an array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static int loadBooleans( final File file, final boolean[] array, final int offset, final int length ) throws IOException {
        final BufferedReader reader = new BufferedReader( new FileReader( file ) );
        final int result = loadBooleans( reader, array, offset, length );
        reader.close();
        return result;
    }

    /** Loads elements from a file given by a filename, storing them in a given array fragment.
     *
     * @param filename a filename.
     * @param array an array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static int loadBooleans( final CharSequence filename, final boolean[] array, final int offset, final int length ) throws IOException {
        return loadBooleans( new File( filename.toString() ), array, offset, length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given array.
     *
     * @param file a file.
     * @param array an array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static int loadBooleans( final File file, final boolean[] array ) throws IOException {
        return loadBooleans( file, array, 0, array.length );
    }

    /** Loads elements from a file given by a filename, storing them in a given array.
     *
     * @param filename a filename.
     * @param array an array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static int loadBooleans( final CharSequence filename, final boolean[] array ) throws IOException {
        return loadBooleans( filename, array, 0, array.length );
    }

    /** Stores an array fragment to a given print stream.
     *
     * @param array an array whose elements will be written to <code>stream</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param stream a print stream.
     */
    public static void storeBooleans( final boolean array[], final int offset, final int length, final PrintStream stream ) {
        it.unimi.dsi.fastutil.booleans.BooleanArrays.ensureOffsetLength( array, offset, length );
        for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] );
    }

    /** Stores an array to a given print stream.
     *
     * @param array an array whose elements will be written to <code>stream</code>.
     * @param stream a print stream.
     */
    public static void storeBooleans( final boolean array[], final PrintStream stream ) {
        storeBooleans( array, 0, array.length, stream );
    }

    /** Stores an array fragment to a file given by a {@link File} object.
     *
     * @param array an array whose elements will be written to <code>file</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param file a file.
     */
    public static void storeBooleans( final boolean array[], final int offset, final int length, final File file ) throws IOException {
        final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
        storeBooleans( array, offset, length, stream );
        stream.close();
    }

    /** Stores an array fragment to a file given by a pathname.
     *
     * @param array an array whose elements will be written to <code>filename</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param filename a filename.
     */
    public static void storeBooleans( final boolean array[], final int offset, final int length, final CharSequence filename ) throws IOException {
        storeBooleans( array, offset, length, new File( filename.toString() ) );
    }

    /** Stores an array to a file given by a {@link File} object.
     *
     * @param array an array whose elements will be written to <code>file</code>.
     * @param file a file.
     */
    public static void storeBooleans( final boolean array[], final File file ) throws IOException {
        storeBooleans( array, 0, array.length, file );
    }

    /** Stores an array to a file given by a pathname.
     *
     * @param array an array whose elements will be written to <code>filename</code>.
     * @param filename a filename.
     */
    public static void storeBooleans( final boolean array[], final CharSequence filename ) throws IOException {
        storeBooleans( array, 0, array.length, filename );
    }

    /** Stores the elements returned by an iterator to a given print stream.
     *
     * @param i an iterator whose output will be written to <code>stream</code>.
     * @param stream a print stream.
     */
    public static void storeBooleans( final BooleanIterator i, final PrintStream stream ) {
        while( i.hasNext() ) stream.println( i.nextBoolean() );
    }

    /** Stores the elements returned by an iterator to a file given by a {@link File} object.
     *
     * @param i an iterator whose output will be written to <code>file</code>.
     * @param file a file.
     */
    public static void storeBooleans( final BooleanIterator i, final File file ) throws IOException {
        final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
        storeBooleans( i, stream );
        stream.close();
    }

    /** Stores the elements returned by an iterator to a file given by a pathname.
     *
     * @param i an iterator whose output will be written to <code>filename</code>.
     * @param filename a filename.
     */
    public static void storeBooleans( final BooleanIterator i, final CharSequence filename ) throws IOException {
        storeBooleans( i, new File( filename.toString() ) );
    }

    /** Loads elements from a given fast buffered reader, storing them in a given big-array fragment.
     *
     * @param reader a buffered reader.
     * @param array a big array which will be filled with data from <code>reader</code>.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends).
     */
    public static long loadBooleans( final BufferedReader reader, final boolean[][] array, final long offset, final long length ) throws IOException {
        it.unimi.dsi.fastutil.booleans.BooleanBigArrays.ensureOffsetLength( array, offset, length );
        long c = 0;
        String s;
        try {
            for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) {
                final boolean[] t = array[ i ];
                final int l = (int)Math.min( t.length, offset + length - start( i ) );
                for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) {
                    if ( ( s = reader.readLine() ) != null ) t[ d ] = Boolean.parseBoolean( s );
                    else return c;
                    c++;
                }
            }
        }
        catch( EOFException itsOk ) {}
        return c;
    }

    /** Loads elements from a given buffered reader, storing them in a given big array.
     *
     * @param reader a buffered reader.
     * @param array a big array which will be filled with data from <code>reader</code>.
     * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends).
     */
    public static long loadBooleans( final BufferedReader reader, final boolean[][] array ) throws IOException {
        return loadBooleans( reader, array, 0, it.unimi.dsi.fastutil.booleans.BooleanBigArrays.length( array ) );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment.
     *
     * @param file a file.
     * @param array a big array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
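     *
     * The segment( )/start( ) arithmetic driving the big-array loop above can be sketched in isolation. The toy segment size below (4 elements) and the class name are purely illustrative; the helper names mirror, but do not come from, the big-array support classes:

```java
// Illustrative big-array geometry: a long index is split into a segment
// (top bits) and a displacement (bottom bits). A tiny shift is used here
// so the walk is easy to follow; real implementations use a much larger one.
public class BigArraySketch {
    static final int SEGMENT_SHIFT = 2;                     // hypothetical, for illustration
    static final long SEGMENT_MASK = (1 << SEGMENT_SHIFT) - 1;

    static int segment(long index) { return (int) (index >>> SEGMENT_SHIFT); }
    static long start(int segment) { return (long) segment << SEGMENT_SHIFT; }

    // Writes `values` into the big array starting at `offset`, visiting
    // segments the same way as the big-array load loop above.
    static long fill(boolean[][] array, long offset, long length, boolean[] values) {
        long c = 0;
        for (int i = segment(offset); i < segment(offset + length + SEGMENT_MASK); i++) {
            boolean[] t = array[i];
            int l = (int) Math.min(t.length, offset + length - start(i));
            for (int d = (int) Math.max(0, offset - start(i)); d < l; d++) {
                if (c == values.length) return c;
                t[d] = values[(int) c];
                c++;
            }
        }
        return c;
    }
}
```

     * The outer loop visits only the segments overlapping the range from <code>offset</code> to <code>offset + length</code>, and the inner bounds clamp the displacement range within each segment.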
     */
    public static long loadBooleans( final File file, final boolean[][] array, final long offset, final long length ) throws IOException {
        final BufferedReader reader = new BufferedReader( new FileReader( file ) );
        final long result = loadBooleans( reader, array, offset, length );
        reader.close();
        return result;
    }

    /** Loads elements from a file given by a filename, storing them in a given big-array fragment.
     *
     * @param filename a filename.
     * @param array a big array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static long loadBooleans( final CharSequence filename, final boolean[][] array, final long offset, final long length ) throws IOException {
        return loadBooleans( new File( filename.toString() ), array, offset, length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given big array.
     *
     * @param file a file.
     * @param array a big array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static long loadBooleans( final File file, final boolean[][] array ) throws IOException {
        return loadBooleans( file, array, 0, it.unimi.dsi.fastutil.booleans.BooleanBigArrays.length( array ) );
    }

    /** Loads elements from a file given by a filename, storing them in a given big array.
     *
     * @param filename a filename.
     * @param array a big array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static long loadBooleans( final CharSequence filename, final boolean[][] array ) throws IOException {
        return loadBooleans( filename, array, 0, it.unimi.dsi.fastutil.booleans.BooleanBigArrays.length( array ) );
    }

    /** Stores a big-array fragment to a given print stream.
     *
     * @param array a big array whose elements will be written to <code>stream</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param stream a print stream.
     */
    public static void storeBooleans( final boolean array[][], final long offset, final long length, final PrintStream stream ) {
        it.unimi.dsi.fastutil.booleans.BooleanBigArrays.ensureOffsetLength( array, offset, length );
        for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) {
            final boolean[] t = array[ i ];
            final int l = (int)Math.min( t.length, offset + length - start( i ) );
            for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] );
        }
    }

    /** Stores a big array to a given print stream.
     *
     * @param array a big array whose elements will be written to <code>stream</code>.
     * @param stream a print stream.
     */
    public static void storeBooleans( final boolean array[][], final PrintStream stream ) {
        storeBooleans( array, 0, it.unimi.dsi.fastutil.booleans.BooleanBigArrays.length( array ), stream );
    }

    /** Stores a big-array fragment to a file given by a {@link File} object.
     *
     * @param array a big array whose elements will be written to <code>file</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param file a file.
     */
    public static void storeBooleans( final boolean array[][], final long offset, final long length, final File file ) throws IOException {
        final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
        storeBooleans( array, offset, length, stream );
        stream.close();
    }

    /** Stores a big-array fragment to a file given by a pathname.
     *
     * @param array a big array whose elements will be written to <code>filename</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param filename a filename.
     */
    public static void storeBooleans( final boolean array[][], final long offset, final long length, final CharSequence filename ) throws IOException {
        storeBooleans( array, offset, length, new File( filename.toString() ) );
    }

    /** Stores a big array to a file given by a {@link File} object.
     *
     * @param array a big array whose elements will be written to <code>file</code>.
     * @param file a file.
     */
    public static void storeBooleans( final boolean array[][], final File file ) throws IOException {
        storeBooleans( array, 0, it.unimi.dsi.fastutil.booleans.BooleanBigArrays.length( array ), file );
    }

    /** Stores a big array to a file given by a pathname.
     *
     * @param array a big array whose elements will be written to <code>filename</code>.
     * @param filename a filename.
     */
    public static void storeBooleans( final boolean array[][], final CharSequence filename ) throws IOException {
        storeBooleans( array, 0, it.unimi.dsi.fastutil.booleans.BooleanBigArrays.length( array ), filename );
    }

    /** A wrapper that exhibits the content of a reader as a type-specific iterator.
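     *
     * The lazy-advance pattern the wrapper below relies on (hasNext( ) reads and parses a line on demand; the next call consumes the cached value) can be sketched with only JDK types; the class name is hypothetical:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch of a reader-backed iterator: a line is read lazily in hasNext(),
// cached, and handed out by next(). Boxing is used here for brevity; the
// library's type-specific iterators avoid it.
public class LineBooleanIterator implements Iterator<Boolean> {
    private final BufferedReader reader;
    private boolean toAdvance = true; // true when a new line must be read
    private String s;                 // last line read; null at end of stream
    private boolean next;             // parsed value waiting to be returned

    public LineBooleanIterator(BufferedReader reader) { this.reader = reader; }

    @Override public boolean hasNext() {
        if (!toAdvance) return s != null;
        toAdvance = false;
        try { s = reader.readLine(); }
        catch (IOException e) { throw new UncheckedIOException(e); }
        if (s == null) return false;
        next = Boolean.parseBoolean(s);
        return true;
    }

    @Override public Boolean next() {
        if (!hasNext()) throw new NoSuchElementException();
        toAdvance = true;
        return next;
    }
}
```

     * Calling hasNext( ) repeatedly is safe: the toAdvance flag ensures each line is read at most once until it is consumed.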
     */
    final private static class BooleanReaderWrapper extends AbstractBooleanIterator {
        final private BufferedReader reader;
        private boolean toAdvance = true;
        private String s;
        private boolean next;

        public BooleanReaderWrapper( final BufferedReader reader ) {
            this.reader = reader;
        }

        public boolean hasNext() {
            if ( ! toAdvance ) return s != null;
            toAdvance = false;
            try {
                s = reader.readLine();
            }
            catch( EOFException itsOk ) {}
            catch( IOException rethrow ) {
                throw new RuntimeException( rethrow );
            }
            if ( s == null ) return false;
            next = Boolean.parseBoolean( s );
            return true;
        }

        public boolean nextBoolean() {
            if ( ! hasNext() ) throw new NoSuchElementException();
            toAdvance = true;
            return next;
        }
    }

    /** Wraps the given buffered reader into an iterator.
     *
     * @param reader a buffered reader.
     */
    public static BooleanIterator asBooleanIterator( final BufferedReader reader ) {
        return new BooleanReaderWrapper( reader );
    }

    /** Wraps a file given by a {@link File} object into an iterator.
     *
     * @param file a file.
     */
    public static BooleanIterator asBooleanIterator( final File file ) throws IOException {
        return new BooleanReaderWrapper( new BufferedReader( new FileReader( file ) ) );
    }

    /** Wraps a file given by a pathname into an iterator.
     *
     * @param filename a filename.
     */
    public static BooleanIterator asBooleanIterator( final CharSequence filename ) throws IOException {
        return asBooleanIterator( new File( filename.toString() ) );
    }

    /** Loads elements from a given fast buffered reader, storing them in a given array fragment.
     *
     * @param reader a buffered reader.
     * @param array an array which will be filled with data from <code>reader</code>.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends).
     */
    public static int loadBytes( final BufferedReader reader, final byte[] array, final int offset, final int length ) throws IOException {
        it.unimi.dsi.fastutil.bytes.ByteArrays.ensureOffsetLength( array, offset, length );
        int i = 0;
        String s;
        try {
            for( i = 0; i < length; i++ )
                if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Byte.parseByte( s );
                else break;
        }
        catch( EOFException itsOk ) {}
        return i;
    }

    /** Loads elements from a given buffered reader, storing them in a given array.
     *
     * @param reader a buffered reader.
     * @param array an array which will be filled with data from <code>reader</code>.
     * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends).
     */
    public static int loadBytes( final BufferedReader reader, final byte[] array ) throws IOException {
        return loadBytes( reader, array, 0, array.length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment.
     *
     * @param file a file.
     * @param array an array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static int loadBytes( final File file, final byte[] array, final int offset, final int length ) throws IOException {
        final BufferedReader reader = new BufferedReader( new FileReader( file ) );
        final int result = loadBytes( reader, array, offset, length );
        reader.close();
        return result;
    }

    /** Loads elements from a file given by a filename, storing them in a given array fragment.
     *
     * @param filename a filename.
     * @param array an array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static int loadBytes( final CharSequence filename, final byte[] array, final int offset, final int length ) throws IOException {
        return loadBytes( new File( filename.toString() ), array, offset, length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given array.
     *
     * @param file a file.
     * @param array an array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static int loadBytes( final File file, final byte[] array ) throws IOException {
        return loadBytes( file, array, 0, array.length );
    }

    /** Loads elements from a file given by a filename, storing them in a given array.
     *
     * @param filename a filename.
     * @param array an array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static int loadBytes( final CharSequence filename, final byte[] array ) throws IOException {
        return loadBytes( filename, array, 0, array.length );
    }

    /** Stores an array fragment to a given print stream.
     *
     * @param array an array whose elements will be written to <code>stream</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param stream a print stream.
     */
    public static void storeBytes( final byte array[], final int offset, final int length, final PrintStream stream ) {
        it.unimi.dsi.fastutil.bytes.ByteArrays.ensureOffsetLength( array, offset, length );
        for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] );
    }

    /** Stores an array to a given print stream.
     *
     * @param array an array whose elements will be written to <code>stream</code>.
     * @param stream a print stream.
     */
    public static void storeBytes( final byte array[], final PrintStream stream ) {
        storeBytes( array, 0, array.length, stream );
    }

    /** Stores an array fragment to a file given by a {@link File} object.
     *
     * @param array an array whose elements will be written to <code>file</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param file a file.
     */
    public static void storeBytes( final byte array[], final int offset, final int length, final File file ) throws IOException {
        final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
        storeBytes( array, offset, length, stream );
        stream.close();
    }

    /** Stores an array fragment to a file given by a pathname.
     *
     * @param array an array whose elements will be written to <code>filename</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param filename a filename.
     */
    public static void storeBytes( final byte array[], final int offset, final int length, final CharSequence filename ) throws IOException {
        storeBytes( array, offset, length, new File( filename.toString() ) );
    }

    /** Stores an array to a file given by a {@link File} object.
     *
     * @param array an array whose elements will be written to <code>file</code>.
     * @param file a file.
     */
    public static void storeBytes( final byte array[], final File file ) throws IOException {
        storeBytes( array, 0, array.length, file );
    }

    /** Stores an array to a file given by a pathname.
     *
     * @param array an array whose elements will be written to <code>filename</code>.
     * @param filename a filename.
     */
    public static void storeBytes( final byte array[], final CharSequence filename ) throws IOException {
        storeBytes( array, 0, array.length, filename );
    }

    /** Stores the elements returned by an iterator to a given print stream.
     *
     * @param i an iterator whose output will be written to <code>stream</code>.
     * @param stream a print stream.
     */
    public static void storeBytes( final ByteIterator i, final PrintStream stream ) {
        while( i.hasNext() ) stream.println( i.nextByte() );
    }

    /** Stores the elements returned by an iterator to a file given by a {@link File} object.
     *
     * @param i an iterator whose output will be written to <code>file</code>.
     * @param file a file.
     */
    public static void storeBytes( final ByteIterator i, final File file ) throws IOException {
        final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
        storeBytes( i, stream );
        stream.close();
    }

    /** Stores the elements returned by an iterator to a file given by a pathname.
     *
     * @param i an iterator whose output will be written to <code>filename</code>.
     * @param filename a filename.
     */
    public static void storeBytes( final ByteIterator i, final CharSequence filename ) throws IOException {
        storeBytes( i, new File( filename.toString() ) );
    }

    /** Loads elements from a given fast buffered reader, storing them in a given big-array fragment.
     *
     * @param reader a buffered reader.
     * @param array a big array which will be filled with data from <code>reader</code>.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends).
     */
    public static long loadBytes( final BufferedReader reader, final byte[][] array, final long offset, final long length ) throws IOException {
        it.unimi.dsi.fastutil.bytes.ByteBigArrays.ensureOffsetLength( array, offset, length );
        long c = 0;
        String s;
        try {
            for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) {
                final byte[] t = array[ i ];
                final int l = (int)Math.min( t.length, offset + length - start( i ) );
                for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) {
                    if ( ( s = reader.readLine() ) != null ) t[ d ] = Byte.parseByte( s );
                    else return c;
                    c++;
                }
            }
        }
        catch( EOFException itsOk ) {}
        return c;
    }

    /** Loads elements from a given buffered reader, storing them in a given big array.
     *
     * @param reader a buffered reader.
     * @param array a big array which will be filled with data from <code>reader</code>.
     * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends).
     */
    public static long loadBytes( final BufferedReader reader, final byte[][] array ) throws IOException {
        return loadBytes( reader, array, 0, it.unimi.dsi.fastutil.bytes.ByteBigArrays.length( array ) );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment.
     *
     * @param file a file.
     * @param array a big array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static long loadBytes( final File file, final byte[][] array, final long offset, final long length ) throws IOException {
        final BufferedReader reader = new BufferedReader( new FileReader( file ) );
        final long result = loadBytes( reader, array, offset, length );
        reader.close();
        return result;
    }

    /** Loads elements from a file given by a filename, storing them in a given big-array fragment.
     *
     * @param filename a filename.
     * @param array a big array which will be filled with data from the specified file.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short).
     */
    public static long loadBytes( final CharSequence filename, final byte[][] array, final long offset, final long length ) throws IOException {
        return loadBytes( new File( filename.toString() ), array, offset, length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given big array.
     *
     * @param file a file.
     * @param array a big array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static long loadBytes( final File file, final byte[][] array ) throws IOException {
        return loadBytes( file, array, 0, it.unimi.dsi.fastutil.bytes.ByteBigArrays.length( array ) );
    }

    /** Loads elements from a file given by a filename, storing them in a given big array.
     *
     * @param filename a filename.
     * @param array a big array which will be filled with data from the specified file.
     * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short).
     */
    public static long loadBytes( final CharSequence filename, final byte[][] array ) throws IOException {
        return loadBytes( filename, array, 0, it.unimi.dsi.fastutil.bytes.ByteBigArrays.length( array ) );
    }

    /** Stores a big-array fragment to a given print stream.
     *
     * @param array a big array whose elements will be written to <code>stream</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param stream a print stream.
     */
    public static void storeBytes( final byte array[][], final long offset, final long length, final PrintStream stream ) {
        it.unimi.dsi.fastutil.bytes.ByteBigArrays.ensureOffsetLength( array, offset, length );
        for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) {
            final byte[] t = array[ i ];
            final int l = (int)Math.min( t.length, offset + length - start( i ) );
            for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] );
        }
    }

    /** Stores a big array to a given print stream.
     *
     * @param array a big array whose elements will be written to <code>stream</code>.
     * @param stream a print stream.
     */
    public static void storeBytes( final byte array[][], final PrintStream stream ) {
        storeBytes( array, 0, it.unimi.dsi.fastutil.bytes.ByteBigArrays.length( array ), stream );
    }

    /** Stores a big-array fragment to a file given by a {@link File} object.
     *
     * @param array a big array whose elements will be written to <code>file</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param file a file.
     */
    public static void storeBytes( final byte array[][], final long offset, final long length, final File file ) throws IOException {
        final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
        storeBytes( array, offset, length, stream );
        stream.close();
    }

    /** Stores a big-array fragment to a file given by a pathname.
     *
     * @param array a big array whose elements will be written to <code>filename</code>.
     * @param offset the index of the first element of <code>array</code> to be written.
     * @param length the number of elements of <code>array</code> to be written.
     * @param filename a filename.
     */
    public static void storeBytes( final byte array[][], final long offset, final long length, final CharSequence filename ) throws IOException {
        storeBytes( array, offset, length, new File( filename.toString() ) );
    }

    /** Stores a big array to a file given by a {@link File} object.
     *
     * @param array a big array whose elements will be written to <code>file</code>.
     * @param file a file.
     */
    public static void storeBytes( final byte array[][], final File file ) throws IOException {
        storeBytes( array, 0, it.unimi.dsi.fastutil.bytes.ByteBigArrays.length( array ), file );
    }

    /** Stores a big array to a file given by a pathname.
     *
     * @param array a big array whose elements will be written to <code>filename</code>.
     * @param filename a filename.
     */
    public static void storeBytes( final byte array[][], final CharSequence filename ) throws IOException {
        storeBytes( array, 0, it.unimi.dsi.fastutil.bytes.ByteBigArrays.length( array ), filename );
    }

    /** A wrapper that exhibits the content of a reader as a type-specific iterator.
     */
    final private static class ByteReaderWrapper extends AbstractByteIterator {
        final private BufferedReader reader;
        private boolean toAdvance = true;
        private String s;
        private byte next;

        public ByteReaderWrapper( final BufferedReader reader ) {
            this.reader = reader;
        }

        public boolean hasNext() {
            if ( ! toAdvance ) return s != null;
            toAdvance = false;
            try {
                s = reader.readLine();
            }
            catch( EOFException itsOk ) {}
            catch( IOException rethrow ) {
                throw new RuntimeException( rethrow );
            }
            if ( s == null ) return false;
            next = Byte.parseByte( s );
            return true;
        }

        public byte nextByte() {
            if ( ! hasNext() ) throw new NoSuchElementException();
            toAdvance = true;
            return next;
        }
    }

    /** Wraps the given buffered reader into an iterator.
     *
     * @param reader a buffered reader.
     */
    public static ByteIterator asByteIterator( final BufferedReader reader ) {
        return new ByteReaderWrapper( reader );
    }

    /** Wraps a file given by a {@link File} object into an iterator.
     *
     * @param file a file.
     */
    public static ByteIterator asByteIterator( final File file ) throws IOException {
        return new ByteReaderWrapper( new BufferedReader( new FileReader( file ) ) );
    }

    /** Wraps a file given by a pathname into an iterator.
     *
     * @param filename a filename.
     */
    public static ByteIterator asByteIterator( final CharSequence filename ) throws IOException {
        return asByteIterator( new File( filename.toString() ) );
    }

    /** Loads elements from a given fast buffered reader, storing them in a given array fragment.
     *
     * @param reader a buffered reader.
     * @param array an array which will be filled with data from <code>reader</code>.
     * @param offset the index of the first element of <code>array</code> to be filled.
     * @param length the number of elements of <code>array</code> to be filled.
     * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends).
     */
    public static int loadShorts( final BufferedReader reader, final short[] array, final int offset, final int length ) throws IOException {
        it.unimi.dsi.fastutil.shorts.ShortArrays.ensureOffsetLength( array, offset, length );
        int i = 0;
        String s;
        try {
            for( i = 0; i < length; i++ )
                if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Short.parseShort( s );
                else break;
        }
        catch( EOFException itsOk ) {}
        return i;
    }

    /** Loads elements from a given buffered reader, storing them in a given array.
     *
     * @param reader a buffered reader.
     * @param array an array which will be filled with data from <code>reader</code>.
     * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends).
     */
    public static int loadShorts( final BufferedReader reader, final short[] array ) throws IOException {
        return loadShorts( reader, array, 0, array.length );
    }

    /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment.
* * @param file a file. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadShorts( final File file, final short[] array, final int offset, final int length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final int result = loadShorts( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given array fragment. * * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadShorts( final CharSequence filename, final short[] array, final int offset, final int length ) throws IOException { return loadShorts( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadShorts( final File file, final short[] array ) throws IOException { return loadShorts( file, array, 0, array.length ); } /** Loads elements from a file given by a filename, storing them in a given array. 
* * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadShorts( final CharSequence filename, final short[] array ) throws IOException { return loadShorts( filename, array, 0, array.length ); } /** Stores an array fragment to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeShorts( final short array[], final int offset, final int length, final PrintStream stream ) { it.unimi.dsi.fastutil.shorts.ShortArrays.ensureOffsetLength( array, offset, length ); for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] ); } /** Stores an array to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeShorts( final short array[], final PrintStream stream ) { storeShorts( array, 0, array.length, stream ); } /** Stores an array fragment to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. 
*/ public static void storeShorts( final short array[], final int offset, final int length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeShorts( array, offset, length, stream ); stream.close(); } /** Stores an array fragment to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeShorts( final short array[], final int offset, final int length, final CharSequence filename ) throws IOException { storeShorts( array, offset, length, new File( filename.toString() ) ); } /** Stores an array to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param file a file. */ public static void storeShorts( final short array[], final File file ) throws IOException { storeShorts( array, 0, array.length, file ); } /** Stores an array to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeShorts( final short array[], final CharSequence filename ) throws IOException { storeShorts( array, 0, array.length, filename ); } /** Stores the elements returned by an iterator to a given print stream. * * @param i an iterator whose output will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeShorts( final ShortIterator i, final PrintStream stream ) { while( i.hasNext() ) stream.println( i.nextShort() ); } /** Stores the elements returned by an iterator to a file given by a {@link File} object. * * @param i an iterator whose output will be written to <code>filename</code>. * @param file a file. 
*/ public static void storeShorts( final ShortIterator i, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeShorts( i, stream ); stream.close(); } /** Stores the elements returned by an iterator to a file given by a pathname. * * @param i an iterator whose output will be written to <code>filename</code>. * @param filename a filename. */ public static void storeShorts( final ShortIterator i, final CharSequence filename ) throws IOException { storeShorts( i, new File( filename.toString() ) ); } /** Loads elements from a given fast buffered reader, storing them in a given big-array fragment. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static long loadShorts( final BufferedReader reader, final short[][] array, final long offset, final long length ) throws IOException { it.unimi.dsi.fastutil.shorts.ShortBigArrays.ensureOffsetLength( array, offset, length ); long c = 0; String s; try { for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final short[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) { if ( ( s = reader.readLine() ) != null ) t[ d ] = Short.parseShort( s ); else return c; c++; } } } catch( EOFException itsOk ) {} return c; } /** Loads elements from a given buffered reader, storing them in a given array. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. 
* @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static long loadShorts( final BufferedReader reader, final short[][] array ) throws IOException { return loadShorts( reader, array, 0, it.unimi.dsi.fastutil.shorts.ShortBigArrays.length( array ) ); } /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadShorts( final File file, final short[][] array, final long offset, final long length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final long result = loadShorts( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given big-array fragment. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadShorts( final CharSequence filename, final short[][] array, final long offset, final long length ) throws IOException { return loadShorts( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. 
* * @param file a file. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadShorts( final File file, final short[][] array ) throws IOException { return loadShorts( file, array, 0, it.unimi.dsi.fastutil.shorts.ShortBigArrays.length( array ) ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadShorts( final CharSequence filename, final short[][] array ) throws IOException { return loadShorts( filename, array, 0, it.unimi.dsi.fastutil.shorts.ShortBigArrays.length( array ) ); } /** Stores a big-array fragment to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeShorts( final short array[][], final long offset, final long length, final PrintStream stream ) { it.unimi.dsi.fastutil.shorts.ShortBigArrays.ensureOffsetLength( array, offset, length ); for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final short[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] ); } } /** Stores a big array to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param stream a print stream. 
*/ public static void storeShorts( final short array[][], final PrintStream stream ) { storeShorts( array, 0, it.unimi.dsi.fastutil.shorts.ShortBigArrays.length( array ), stream ); } /** Stores a big-array fragment to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. */ public static void storeShorts( final short array[][], final long offset, final long length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeShorts( array, offset, length, stream ); stream.close(); } /** Stores a big-array fragment to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeShorts( final short array[][], final long offset, final long length, final CharSequence filename ) throws IOException { storeShorts( array, offset, length, new File( filename.toString() ) ); } /** Stores a big array to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param file a file. */ public static void storeShorts( final short array[][], final File file ) throws IOException { storeShorts( array, 0, it.unimi.dsi.fastutil.shorts.ShortBigArrays.length( array ), file ); } /** Stores a big array to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param filename a filename. 
*/ public static void storeShorts( final short array[][], final CharSequence filename ) throws IOException { storeShorts( array, 0, it.unimi.dsi.fastutil.shorts.ShortBigArrays.length( array ), filename ); } /** A wrapper that exhibits the content of a reader as a type-specific iterator. */ final private static class ShortReaderWrapper extends AbstractShortIterator { final private BufferedReader reader; private boolean toAdvance = true; private String s; private short next; public ShortReaderWrapper( final BufferedReader reader ) { this.reader = reader; } public boolean hasNext() { if ( ! toAdvance ) return s != null; toAdvance = false; try { s = reader.readLine(); } catch( EOFException itsOk ) {} catch( IOException rethrow ) { throw new RuntimeException( rethrow ); } if ( s == null ) return false; next = Short.parseShort( s ); return true; } public short nextShort() { if (! hasNext()) throw new NoSuchElementException(); toAdvance = true; return next; } } /** Wraps the given buffered reader into an iterator. * * @param reader a buffered reader. */ public static ShortIterator asShortIterator( final BufferedReader reader ) { return new ShortReaderWrapper( reader ); } /** Wraps a file given by a {@link File} object into an iterator. * * @param file a file. */ public static ShortIterator asShortIterator( final File file ) throws IOException { return new ShortReaderWrapper( new BufferedReader( new FileReader( file ) ) ); } /** Wraps a file given by a pathname into an iterator. * * @param filename a filename. 
*/ public static ShortIterator asShortIterator( final CharSequence filename ) throws IOException { return asShortIterator( new File( filename.toString() ) ); } /** Loads elements from a given fast buffered reader, storing them in a given array fragment. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. 
* @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static int loadInts( final BufferedReader reader, final int[] array, final int offset, final int length ) throws IOException { it.unimi.dsi.fastutil.ints.IntArrays.ensureOffsetLength( array, offset, length ); int i = 0; String s; try { for( i = 0; i < length; i++ ) if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Integer.parseInt( s ); else break; } catch( EOFException itsOk ) {} return i; } /** Loads elements from a given buffered reader, storing them in a given array. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static int loadInts( final BufferedReader reader, final int[] array ) throws IOException { return loadInts( reader, array, 0, array.length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadInts( final File file, final int[] array, final int offset, final int length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final int result = loadInts( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given array fragment. * * @param filename a filename. 
* @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadInts( final CharSequence filename, final int[] array, final int offset, final int length ) throws IOException { return loadInts( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadInts( final File file, final int[] array ) throws IOException { return loadInts( file, array, 0, array.length ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadInts( final CharSequence filename, final int[] array ) throws IOException { return loadInts( filename, array, 0, array.length ); } /** Stores an array fragment to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. 
*/ public static void storeInts( final int array[], final int offset, final int length, final PrintStream stream ) { it.unimi.dsi.fastutil.ints.IntArrays.ensureOffsetLength( array, offset, length ); for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] ); } /** Stores an array to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeInts( final int array[], final PrintStream stream ) { storeInts( array, 0, array.length, stream ); } /** Stores an array fragment to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. */ public static void storeInts( final int array[], final int offset, final int length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeInts( array, offset, length, stream ); stream.close(); } /** Stores an array fragment to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeInts( final int array[], final int offset, final int length, final CharSequence filename ) throws IOException { storeInts( array, offset, length, new File( filename.toString() ) ); } /** Stores an array to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param file a file. 
*/ public static void storeInts( final int array[], final File file ) throws IOException { storeInts( array, 0, array.length, file ); } /** Stores an array to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeInts( final int array[], final CharSequence filename ) throws IOException { storeInts( array, 0, array.length, filename ); } /** Stores the elements returned by an iterator to a given print stream. * * @param i an iterator whose output will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeInts( final IntIterator i, final PrintStream stream ) { while( i.hasNext() ) stream.println( i.nextInt() ); } /** Stores the elements returned by an iterator to a file given by a {@link File} object. * * @param i an iterator whose output will be written to <code>filename</code>. * @param file a file. */ public static void storeInts( final IntIterator i, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeInts( i, stream ); stream.close(); } /** Stores the elements returned by an iterator to a file given by a pathname. * * @param i an iterator whose output will be written to <code>filename</code>. * @param filename a filename. */ public static void storeInts( final IntIterator i, final CharSequence filename ) throws IOException { storeInts( i, new File( filename.toString() ) ); } /** Loads elements from a given fast buffered reader, storing them in a given big-array fragment. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. 
* @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static long loadInts( final BufferedReader reader, final int[][] array, final long offset, final long length ) throws IOException { it.unimi.dsi.fastutil.ints.IntBigArrays.ensureOffsetLength( array, offset, length ); long c = 0; String s; try { for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final int[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) { if ( ( s = reader.readLine() ) != null ) t[ d ] = Integer.parseInt( s ); else return c; c++; } } } catch( EOFException itsOk ) {} return c; } /** Loads elements from a given buffered reader, storing them in a given array. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static long loadInts( final BufferedReader reader, final int[][] array ) throws IOException { return loadInts( reader, array, 0, it.unimi.dsi.fastutil.ints.IntBigArrays.length( array ) ); } /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). 
*/ public static long loadInts( final File file, final int[][] array, final long offset, final long length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final long result = loadInts( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given big-array fragment. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadInts( final CharSequence filename, final int[][] array, final long offset, final long length ) throws IOException { return loadInts( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadInts( final File file, final int[][] array ) throws IOException { return loadInts( file, array, 0, it.unimi.dsi.fastutil.ints.IntBigArrays.length( array ) ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). 
*/ public static long loadInts( final CharSequence filename, final int[][] array ) throws IOException { return loadInts( filename, array, 0, it.unimi.dsi.fastutil.ints.IntBigArrays.length( array ) ); } /** Stores a big-array fragment to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeInts( final int array[][], final long offset, final long length, final PrintStream stream ) { it.unimi.dsi.fastutil.ints.IntBigArrays.ensureOffsetLength( array, offset, length ); for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final int[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] ); } } /** Stores a big array to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeInts( final int array[][], final PrintStream stream ) { storeInts( array, 0, it.unimi.dsi.fastutil.ints.IntBigArrays.length( array ), stream ); } /** Stores a big-array fragment to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. 
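 * <p>For instance, to write a slice of a big array, one value per line
 * (a hypothetical sketch; the pathname and the big array <code>a</code> are assumed):
 * <pre>
 * storeInts( a, 10, 20, new File( "slice.txt" ) );
 * </pre>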
*/ public static void storeInts( final int array[][], final long offset, final long length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeInts( array, offset, length, stream ); stream.close(); } /** Stores a big-array fragment to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeInts( final int array[][], final long offset, final long length, final CharSequence filename ) throws IOException { storeInts( array, offset, length, new File( filename.toString() ) ); } /** Stores a big array to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param file a file. */ public static void storeInts( final int array[][], final File file ) throws IOException { storeInts( array, 0, it.unimi.dsi.fastutil.ints.IntBigArrays.length( array ), file ); } /** Stores a big array to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeInts( final int array[][], final CharSequence filename ) throws IOException { storeInts( array, 0, it.unimi.dsi.fastutil.ints.IntBigArrays.length( array ), filename ); } /** A wrapper that exhibits the content of a reader as a type-specific iterator. */ final private static class IntReaderWrapper extends AbstractIntIterator { final private BufferedReader reader; private boolean toAdvance = true; private String s; private int next; public IntReaderWrapper( final BufferedReader reader ) { this.reader = reader; } public boolean hasNext() { if ( ! 
toAdvance ) return s != null; toAdvance = false; try { s = reader.readLine(); } catch( EOFException itsOk ) {} catch( IOException rethrow ) { throw new RuntimeException( rethrow ); } if ( s == null ) return false; next = Integer.parseInt( s ); return true; } public int nextInt() { if (! hasNext()) throw new NoSuchElementException(); toAdvance = true; return next; } } /** Wraps the given buffered reader into an iterator. * * @param reader a buffered reader. */ public static IntIterator asIntIterator( final BufferedReader reader ) { return new IntReaderWrapper( reader ); } /** Wraps a file given by a {@link File} object into an iterator. * * @param file a file. */ public static IntIterator asIntIterator( final File file ) throws IOException { return new IntReaderWrapper( new BufferedReader( new FileReader( file ) ) ); } /** Wraps a file given by a pathname into an iterator. * * @param filename a filename. */ public static IntIterator asIntIterator( final CharSequence filename ) throws IOException { return asIntIterator( new File( filename.toString() ) ); } /* Generic definitions */ /* Assertions (useful to generate conditional code) */ /* Current type and class (and size, if applicable) */ /* Value methods */ /* Interfaces (keys) */ /* Interfaces (values) */ /* Abstract implementations (keys) */ /* Abstract implementations (values) */ /* Static containers (keys) */ /* Static containers (values) */ /* Implementations */ /* Synchronized wrappers */ /* Unmodifiable wrappers */ /* Other wrappers */ /* Methods (keys) */ /* Methods (values) */ /* Methods (keys/values) */ /* Methods that have special names depending on keys (but the special names depend on values) */ /* Equality */ /* Object/Reference-only definitions (keys) */ /* Primitive-type-only definitions (keys) */ /* Object/Reference-only definitions (values) */ /* * Copyright (C) 2004-2013 <NAME> * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance 
with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** Loads elements from a given buffered reader, storing them in a given array fragment. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static int loadLongs( final BufferedReader reader, final long[] array, final int offset, final int length ) throws IOException { it.unimi.dsi.fastutil.longs.LongArrays.ensureOffsetLength( array, offset, length ); int i = 0; String s; try { for( i = 0; i < length; i++ ) if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Long.parseLong( s ); else break; } catch( EOFException itsOk ) {} return i; } /** Loads elements from a given buffered reader, storing them in a given array. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static int loadLongs( final BufferedReader reader, final long[] array ) throws IOException { return loadLongs( reader, array, 0, array.length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment. * * @param file a file. 
* @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadLongs( final File file, final long[] array, final int offset, final int length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final int result = loadLongs( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given array fragment. * * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadLongs( final CharSequence filename, final long[] array, final int offset, final int length ) throws IOException { return loadLongs( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadLongs( final File file, final long[] array ) throws IOException { return loadLongs( file, array, 0, array.length ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. 
* @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadLongs( final CharSequence filename, final long[] array ) throws IOException { return loadLongs( filename, array, 0, array.length ); } /** Stores an array fragment to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeLongs( final long array[], final int offset, final int length, final PrintStream stream ) { it.unimi.dsi.fastutil.longs.LongArrays.ensureOffsetLength( array, offset, length ); for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] ); } /** Stores an array to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeLongs( final long array[], final PrintStream stream ) { storeLongs( array, 0, array.length, stream ); } /** Stores an array fragment to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. */ public static void storeLongs( final long array[], final int offset, final int length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeLongs( array, offset, length, stream ); stream.close(); } /** Stores an array fragment to a file given by a pathname. 
* * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeLongs( final long array[], final int offset, final int length, final CharSequence filename ) throws IOException { storeLongs( array, offset, length, new File( filename.toString() ) ); } /** Stores an array to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>file</code>. * @param file a file. */ public static void storeLongs( final long array[], final File file ) throws IOException { storeLongs( array, 0, array.length, file ); } /** Stores an array to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeLongs( final long array[], final CharSequence filename ) throws IOException { storeLongs( array, 0, array.length, filename ); } /** Stores the elements returned by an iterator to a given print stream. * * @param i an iterator whose output will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeLongs( final LongIterator i, final PrintStream stream ) { while( i.hasNext() ) stream.println( i.nextLong() ); } /** Stores the elements returned by an iterator to a file given by a {@link File} object. * * @param i an iterator whose output will be written to <code>file</code>. * @param file a file. */ public static void storeLongs( final LongIterator i, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeLongs( i, stream ); stream.close(); } /** Stores the elements returned by an iterator to a file given by a pathname. 
* * @param i an iterator whose output will be written to <code>filename</code>. * @param filename a filename. */ public static void storeLongs( final LongIterator i, final CharSequence filename ) throws IOException { storeLongs( i, new File( filename.toString() ) ); } /** Loads elements from a given buffered reader, storing them in a given big-array fragment. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static long loadLongs( final BufferedReader reader, final long[][] array, final long offset, final long length ) throws IOException { it.unimi.dsi.fastutil.longs.LongBigArrays.ensureOffsetLength( array, offset, length ); long c = 0; String s; try { for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final long[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) { if ( ( s = reader.readLine() ) != null ) t[ d ] = Long.parseLong( s ); else return c; c++; } } } catch( EOFException itsOk ) {} return c; } /** Loads elements from a given buffered reader, storing them in a given big array. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). 
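 * <p>A hypothetical sketch (the pathname and the length <code>n</code> are assumed):
 * <pre>
 * long[][] a = it.unimi.dsi.fastutil.longs.LongBigArrays.newBigArray( n );
 * BufferedReader br = new BufferedReader( new FileReader( "longs.txt" ) );
 * long read = loadLongs( br, a );
 * br.close();
 * </pre>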
*/ public static long loadLongs( final BufferedReader reader, final long[][] array ) throws IOException { return loadLongs( reader, array, 0, it.unimi.dsi.fastutil.longs.LongBigArrays.length( array ) ); } /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadLongs( final File file, final long[][] array, final long offset, final long length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final long result = loadLongs( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given big-array fragment. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadLongs( final CharSequence filename, final long[][] array, final long offset, final long length ) throws IOException { return loadLongs( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array a big array which will be filled with data from the specified file. 
* @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadLongs( final File file, final long[][] array ) throws IOException { return loadLongs( file, array, 0, it.unimi.dsi.fastutil.longs.LongBigArrays.length( array ) ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadLongs( final CharSequence filename, final long[][] array ) throws IOException { return loadLongs( filename, array, 0, it.unimi.dsi.fastutil.longs.LongBigArrays.length( array ) ); } /** Stores a big-array fragment to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeLongs( final long array[][], final long offset, final long length, final PrintStream stream ) { it.unimi.dsi.fastutil.longs.LongBigArrays.ensureOffsetLength( array, offset, length ); for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final long[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] ); } } /** Stores a big array to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param stream a print stream. 
*/ public static void storeLongs( final long array[][], final PrintStream stream ) { storeLongs( array, 0, it.unimi.dsi.fastutil.longs.LongBigArrays.length( array ), stream ); } /** Stores a big-array fragment to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. */ public static void storeLongs( final long array[][], final long offset, final long length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeLongs( array, offset, length, stream ); stream.close(); } /** Stores a big-array fragment to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeLongs( final long array[][], final long offset, final long length, final CharSequence filename ) throws IOException { storeLongs( array, offset, length, new File( filename.toString() ) ); } /** Stores a big array to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param file a file. */ public static void storeLongs( final long array[][], final File file ) throws IOException { storeLongs( array, 0, it.unimi.dsi.fastutil.longs.LongBigArrays.length( array ), file ); } /** Stores a big array to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param filename a filename. 
*/ public static void storeLongs( final long array[][], final CharSequence filename ) throws IOException { storeLongs( array, 0, it.unimi.dsi.fastutil.longs.LongBigArrays.length( array ), filename ); } /** A wrapper that exhibits the content of a reader as a type-specific iterator. */ final private static class LongReaderWrapper extends AbstractLongIterator { final private BufferedReader reader; private boolean toAdvance = true; private String s; private long next; public LongReaderWrapper( final BufferedReader reader ) { this.reader = reader; } public boolean hasNext() { if ( ! toAdvance ) return s != null; toAdvance = false; try { s = reader.readLine(); } catch( EOFException itsOk ) {} catch( IOException rethrow ) { throw new RuntimeException( rethrow ); } if ( s == null ) return false; next = Long.parseLong( s ); return true; } public long nextLong() { if (! hasNext()) throw new NoSuchElementException(); toAdvance = true; return next; } } /** Wraps the given buffered reader into an iterator. * * @param reader a buffered reader. */ public static LongIterator asLongIterator( final BufferedReader reader ) { return new LongReaderWrapper( reader ); } /** Wraps a file given by a {@link File} object into an iterator. * * @param file a file. */ public static LongIterator asLongIterator( final File file ) throws IOException { return new LongReaderWrapper( new BufferedReader( new FileReader( file ) ) ); } /** Wraps a file given by a pathname into an iterator. * * @param filename a filename. 
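 * <p>For instance, to sum the numbers in a file without loading them all into memory
 * (a hypothetical sketch; the pathname is assumed):
 * <pre>
 * LongIterator it = asLongIterator( "longs.txt" );
 * long sum = 0;
 * while( it.hasNext() ) sum += it.nextLong();
 * </pre>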
*/ public static LongIterator asLongIterator( final CharSequence filename ) throws IOException { return asLongIterator( new File( filename.toString() ) ); } /** Loads elements from a given buffered reader, storing them in a given array fragment. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. 
* @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static int loadFloats( final BufferedReader reader, final float[] array, final int offset, final int length ) throws IOException { it.unimi.dsi.fastutil.floats.FloatArrays.ensureOffsetLength( array, offset, length ); int i = 0; String s; try { for( i = 0; i < length; i++ ) if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Float.parseFloat( s ); else break; } catch( EOFException itsOk ) {} return i; } /** Loads elements from a given buffered reader, storing them in a given array. * * @param reader a buffered reader. * @param array an array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static int loadFloats( final BufferedReader reader, final float[] array ) throws IOException { return loadFloats( reader, array, 0, array.length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadFloats( final File file, final float[] array, final int offset, final int length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final int result = loadFloats( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given array fragment. * * @param filename a filename. 
* @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadFloats( final CharSequence filename, final float[] array, final int offset, final int length ) throws IOException { return loadFloats( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadFloats( final File file, final float[] array ) throws IOException { return loadFloats( file, array, 0, array.length ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadFloats( final CharSequence filename, final float[] array ) throws IOException { return loadFloats( filename, array, 0, array.length ); } /** Stores an array fragment to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. 
*/ public static void storeFloats( final float array[], final int offset, final int length, final PrintStream stream ) { it.unimi.dsi.fastutil.floats.FloatArrays.ensureOffsetLength( array, offset, length ); for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] ); } /** Stores an array to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeFloats( final float array[], final PrintStream stream ) { storeFloats( array, 0, array.length, stream ); } /** Stores an array fragment to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. */ public static void storeFloats( final float array[], final int offset, final int length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeFloats( array, offset, length, stream ); stream.close(); } /** Stores an array fragment to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeFloats( final float array[], final int offset, final int length, final CharSequence filename ) throws IOException { storeFloats( array, offset, length, new File( filename.toString() ) ); } /** Stores an array to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param file a file. 
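 * <p>A hypothetical round-trip sketch (the pathname and the array <code>data</code> are assumed):
 * <pre>
 * storeFloats( data, new File( "floats.txt" ) );
 * float[] copy = new float[ data.length ];
 * loadFloats( new File( "floats.txt" ), copy );
 * </pre>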
*/ public static void storeFloats( final float array[], final File file ) throws IOException { storeFloats( array, 0, array.length, file ); } /** Stores an array to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeFloats( final float array[], final CharSequence filename ) throws IOException { storeFloats( array, 0, array.length, filename ); } /** Stores the elements returned by an iterator to a given print stream. * * @param i an iterator whose output will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeFloats( final FloatIterator i, final PrintStream stream ) { while( i.hasNext() ) stream.println( i.nextFloat() ); } /** Stores the elements returned by an iterator to a file given by a {@link File} object. * * @param i an iterator whose output will be written to <code>file</code>. * @param file a file. */ public static void storeFloats( final FloatIterator i, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeFloats( i, stream ); stream.close(); } /** Stores the elements returned by an iterator to a file given by a pathname. * * @param i an iterator whose output will be written to <code>filename</code>. * @param filename a filename. */ public static void storeFloats( final FloatIterator i, final CharSequence filename ) throws IOException { storeFloats( i, new File( filename.toString() ) ); } /** Loads elements from a given buffered reader, storing them in a given big-array fragment. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. 
* @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static long loadFloats( final BufferedReader reader, final float[][] array, final long offset, final long length ) throws IOException { it.unimi.dsi.fastutil.floats.FloatBigArrays.ensureOffsetLength( array, offset, length ); long c = 0; String s; try { for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final float[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) { if ( ( s = reader.readLine() ) != null ) t[ d ] = Float.parseFloat( s ); else return c; c++; } } } catch( EOFException itsOk ) {} return c; } /** Loads elements from a given buffered reader, storing them in a given array. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static long loadFloats( final BufferedReader reader, final float[][] array ) throws IOException { return loadFloats( reader, array, 0, it.unimi.dsi.fastutil.floats.FloatBigArrays.length( array ) ); } /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). 
*/ public static long loadFloats( final File file, final float[][] array, final long offset, final long length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final long result = loadFloats( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given big-array fragment. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadFloats( final CharSequence filename, final float[][] array, final long offset, final long length ) throws IOException { return loadFloats( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadFloats( final File file, final float[][] array ) throws IOException { return loadFloats( file, array, 0, it.unimi.dsi.fastutil.floats.FloatBigArrays.length( array ) ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). 
*/ public static long loadFloats( final CharSequence filename, final float[][] array ) throws IOException { return loadFloats( filename, array, 0, it.unimi.dsi.fastutil.floats.FloatBigArrays.length( array ) ); } /** Stores a big-array fragment to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeFloats( final float array[][], final long offset, final long length, final PrintStream stream ) { it.unimi.dsi.fastutil.floats.FloatBigArrays.ensureOffsetLength( array, offset, length ); for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final float[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] ); } } /** Stores a big array to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeFloats( final float array[][], final PrintStream stream ) { storeFloats( array, 0, it.unimi.dsi.fastutil.floats.FloatBigArrays.length( array ), stream ); } /** Stores a big-array fragment to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. 
 */
	public static void storeFloats( final float array[][], final long offset, final long length, final File file ) throws IOException {
		final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) );
		storeFloats( array, offset, length, stream );
		stream.close();
	}

	/** Stores a big-array fragment to a file given by a pathname.
	 *
	 * @param array a big array whose elements will be written to <code>filename</code>.
	 * @param offset the index of the first element of <code>array</code> to be written.
	 * @param length the number of elements of <code>array</code> to be written.
	 * @param filename a filename.
	 */
	public static void storeFloats( final float array[][], final long offset, final long length, final CharSequence filename ) throws IOException {
		storeFloats( array, offset, length, new File( filename.toString() ) );
	}

	/** Stores a big array to a file given by a {@link File} object.
	 *
	 * @param array a big array whose elements will be written to <code>filename</code>.
	 * @param file a file.
	 */
	public static void storeFloats( final float array[][], final File file ) throws IOException {
		storeFloats( array, 0, it.unimi.dsi.fastutil.floats.FloatBigArrays.length( array ), file );
	}

	/** Stores a big array to a file given by a pathname.
	 *
	 * @param array a big array whose elements will be written to <code>filename</code>.
	 * @param filename a filename.
	 */
	public static void storeFloats( final float array[][], final CharSequence filename ) throws IOException {
		storeFloats( array, 0, it.unimi.dsi.fastutil.floats.FloatBigArrays.length( array ), filename );
	}

	/** A wrapper that exhibits the content of a reader as a type-specific iterator. */
	final private static class FloatReaderWrapper extends AbstractFloatIterator {
		final private BufferedReader reader;
		private boolean toAdvance = true;
		private String s;
		private float next;

		public FloatReaderWrapper( final BufferedReader reader ) {
			this.reader = reader;
		}

		public boolean hasNext() {
			if ( ! toAdvance ) return s != null;
			toAdvance = false;
			try {
				s = reader.readLine();
			}
			catch( EOFException itsOk ) {}
			catch( IOException rethrow ) { throw new RuntimeException( rethrow ); }
			if ( s == null ) return false;
			next = Float.parseFloat( s );
			return true;
		}

		public float nextFloat() {
			if ( ! hasNext() ) throw new NoSuchElementException();
			toAdvance = true;
			return next;
		}
	}

	/** Wraps the given buffered reader into an iterator.
	 *
	 * @param reader a buffered reader.
	 */
	public static FloatIterator asFloatIterator( final BufferedReader reader ) {
		return new FloatReaderWrapper( reader );
	}

	/** Wraps a file given by a {@link File} object into an iterator.
	 *
	 * @param file a file.
	 */
	public static FloatIterator asFloatIterator( final File file ) throws IOException {
		return new FloatReaderWrapper( new BufferedReader( new FileReader( file ) ) );
	}

	/** Wraps a file given by a pathname into an iterator.
	 *
	 * @param filename a filename.
	 */
	public static FloatIterator asFloatIterator( final CharSequence filename ) throws IOException {
		return asFloatIterator( new File( filename.toString() ) );
	}

	/* Generic definitions */
	/* Assertions (useful to generate conditional code) */
	/* Current type and class (and size, if applicable) */
	/* Value methods */
	/* Interfaces (keys) */
	/* Interfaces (values) */
	/* Abstract implementations (keys) */
	/* Abstract implementations (values) */
	/* Static containers (keys) */
	/* Static containers (values) */
	/* Implementations */
	/* Synchronized wrappers */
	/* Unmodifiable wrappers */
	/* Other wrappers */
	/* Methods (keys) */
	/* Methods (values) */
	/* Methods (keys/values) */
	/* Methods that have special names depending on keys (but the special names depend on values) */
	/* Equality */
	/* Object/Reference-only definitions (keys) */
	/* Primitive-type-only definitions (keys) */
	/* Object/Reference-only definitions (values) */

	/*
	 * Copyright (C) 2004-2013 <NAME>
	 *
	 * Licensed under the Apache License, Version 2.0 (the "License");
	 * you may not use this file except in compliance with the License.
	 * You may obtain a copy of the License at
	 *
	 *     http://www.apache.org/licenses/LICENSE-2.0
	 *
	 * Unless required by applicable law or agreed to in writing, software
	 * distributed under the License is distributed on an "AS IS" BASIS,
	 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
	 * See the License for the specific language governing permissions and
	 * limitations under the License.
	 */

	/** Loads elements from a given fast buffered reader, storing them in a given array fragment.
	 *
	 * @param reader a buffered reader.
	 * @param array an array which will be filled with data from <code>reader</code>.
	 * @param offset the index of the first element of <code>array</code> to be filled.
	 * @param length the number of elements of <code>array</code> to be filled.
	 * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends).
	 */
	public static int loadDoubles( final BufferedReader reader, final double[] array, final int offset, final int length ) throws IOException {
		it.unimi.dsi.fastutil.doubles.DoubleArrays.ensureOffsetLength( array, offset, length );
		int i = 0;
		String s;
		try {
			for( i = 0; i < length; i++ )
				if ( ( s = reader.readLine() ) != null ) array[ i + offset ] = Double.parseDouble( s );
				else break;
		}
		catch( EOFException itsOk ) {}
		return i;
	}

	/** Loads elements from a given buffered reader, storing them in a given array.
	 *
	 * @param reader a buffered reader.
	 * @param array an array which will be filled with data from <code>reader</code>.
	 * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends).
*/ public static int loadDoubles( final BufferedReader reader, final double[] array ) throws IOException { return loadDoubles( reader, array, 0, array.length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array fragment. * * @param file a file. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadDoubles( final File file, final double[] array, final int offset, final int length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final int result = loadDoubles( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given array fragment. * * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static int loadDoubles( final CharSequence filename, final double[] array, final int offset, final int length ) throws IOException { return loadDoubles( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array an array which will be filled with data from the specified file. 
* @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadDoubles( final File file, final double[] array ) throws IOException { return loadDoubles( file, array, 0, array.length ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array an array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static int loadDoubles( final CharSequence filename, final double[] array ) throws IOException { return loadDoubles( filename, array, 0, array.length ); } /** Stores an array fragment to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. */ public static void storeDoubles( final double array[], final int offset, final int length, final PrintStream stream ) { it.unimi.dsi.fastutil.doubles.DoubleArrays.ensureOffsetLength( array, offset, length ); for( int i = 0; i < length; i++ ) stream.println( array[ offset + i ] ); } /** Stores an array to a given print stream. * * @param array an array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeDoubles( final double array[], final PrintStream stream ) { storeDoubles( array, 0, array.length, stream ); } /** Stores an array fragment to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. 
* @param file a file. */ public static void storeDoubles( final double array[], final int offset, final int length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeDoubles( array, offset, length, stream ); stream.close(); } /** Stores an array fragment to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. */ public static void storeDoubles( final double array[], final int offset, final int length, final CharSequence filename ) throws IOException { storeDoubles( array, offset, length, new File( filename.toString() ) ); } /** Stores an array to a file given by a {@link File} object. * * @param array an array whose elements will be written to <code>filename</code>. * @param file a file. */ public static void storeDoubles( final double array[], final File file ) throws IOException { storeDoubles( array, 0, array.length, file ); } /** Stores an array to a file given by a pathname. * * @param array an array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeDoubles( final double array[], final CharSequence filename ) throws IOException { storeDoubles( array, 0, array.length, filename ); } /** Stores the element returned by an iterator to a given print stream. * * @param i an iterator whose output will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeDoubles( final DoubleIterator i, final PrintStream stream ) { while( i.hasNext() ) stream.println( i.nextDouble() ); } /** Stores the element returned by an iterator to a file given by a {@link File} object. 
* * @param i an iterator whose output will be written to <code>filename</code>. * @param file a file. */ public static void storeDoubles( final DoubleIterator i, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeDoubles( i, stream ); stream.close(); } /** Stores the element returned by an iterator to a file given by a pathname. * * @param i an iterator whose output will be written to <code>filename</code>. * @param filename a filename. */ public static void storeDoubles( final DoubleIterator i, final CharSequence filename ) throws IOException { storeDoubles( i, new File( filename.toString() ) ); } /** Loads elements from a given fast buffered reader, storing them in a given big-array fragment. * * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from <code>reader</code> (it might be less than <code>length</code> if <code>reader</code> ends). */ public static long loadDoubles( final BufferedReader reader, final double[][] array, final long offset, final long length ) throws IOException { it.unimi.dsi.fastutil.doubles.DoubleBigArrays.ensureOffsetLength( array, offset, length ); long c = 0; String s; try { for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final double[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) { if ( ( s = reader.readLine() ) != null ) t[ d ] = Double.parseDouble( s ); else return c; c++; } } } catch( EOFException itsOk ) {} return c; } /** Loads elements from a given buffered reader, storing them in a given array. 
* * @param reader a buffered reader. * @param array a big array which will be filled with data from <code>reader</code>. * @return the number of elements actually read from <code>reader</code> (it might be less than the array length if <code>reader</code> ends). */ public static long loadDoubles( final BufferedReader reader, final double[][] array ) throws IOException { return loadDoubles( reader, array, 0, it.unimi.dsi.fastutil.doubles.DoubleBigArrays.length( array ) ); } /** Loads elements from a file given by a {@link File} object, storing them in a given big-array fragment. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). */ public static long loadDoubles( final File file, final double[][] array, final long offset, final long length ) throws IOException { final BufferedReader reader = new BufferedReader( new FileReader( file ) ); final long result = loadDoubles( reader, array, offset, length ); reader.close(); return result; } /** Loads elements from a file given by a filename, storing them in a given big-array fragment. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @param offset the index of the first element of <code>array</code> to be filled. * @param length the number of elements of <code>array</code> to be filled. * @return the number of elements actually read from the given file (it might be less than <code>length</code> if the file is too short). 
*/ public static long loadDoubles( final CharSequence filename, final double[][] array, final long offset, final long length ) throws IOException { return loadDoubles( new File( filename.toString() ), array, offset, length ); } /** Loads elements from a file given by a {@link File} object, storing them in a given array. * * @param file a file. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadDoubles( final File file, final double[][] array ) throws IOException { return loadDoubles( file, array, 0, it.unimi.dsi.fastutil.doubles.DoubleBigArrays.length( array ) ); } /** Loads elements from a file given by a filename, storing them in a given array. * * @param filename a filename. * @param array a big array which will be filled with data from the specified file. * @return the number of elements actually read from the given file (it might be less than the array length if the file is too short). */ public static long loadDoubles( final CharSequence filename, final double[][] array ) throws IOException { return loadDoubles( filename, array, 0, it.unimi.dsi.fastutil.doubles.DoubleBigArrays.length( array ) ); } /** Stores a big-array fragment to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param stream a print stream. 
*/ public static void storeDoubles( final double array[][], final long offset, final long length, final PrintStream stream ) { it.unimi.dsi.fastutil.doubles.DoubleBigArrays.ensureOffsetLength( array, offset, length ); for( int i = segment( offset ); i < segment( offset + length + SEGMENT_MASK ); i++ ) { final double[] t = array[ i ]; final int l = (int)Math.min( t.length, offset + length - start( i ) ); for( int d = (int)Math.max( 0, offset - start( i ) ); d < l; d++ ) stream.println( t[ d ] ); } } /** Stores a big array to a given print stream. * * @param array a big array whose elements will be written to <code>stream</code>. * @param stream a print stream. */ public static void storeDoubles( final double array[][], final PrintStream stream ) { storeDoubles( array, 0, it.unimi.dsi.fastutil.doubles.DoubleBigArrays.length( array ), stream ); } /** Stores a big-array fragment to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param file a file. */ public static void storeDoubles( final double array[][], final long offset, final long length, final File file ) throws IOException { final PrintStream stream = new PrintStream( new FastBufferedOutputStream( new FileOutputStream( file ) ) ); storeDoubles( array, offset, length, stream ); stream.close(); } /** Stores a big-array fragment to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param offset the index of the first element of <code>array</code> to be written. * @param length the number of elements of <code>array</code> to be written. * @param filename a filename. 
*/ public static void storeDoubles( final double array[][], final long offset, final long length, final CharSequence filename ) throws IOException { storeDoubles( array, offset, length, new File( filename.toString() ) ); } /** Stores a big array to a file given by a {@link File} object. * * @param array a big array whose elements will be written to <code>filename</code>. * @param file a file. */ public static void storeDoubles( final double array[][], final File file ) throws IOException { storeDoubles( array, 0, it.unimi.dsi.fastutil.doubles.DoubleBigArrays.length( array ), file ); } /** Stores a big array to a file given by a pathname. * * @param array a big array whose elements will be written to <code>filename</code>. * @param filename a filename. */ public static void storeDoubles( final double array[][], final CharSequence filename ) throws IOException { storeDoubles( array, 0, it.unimi.dsi.fastutil.doubles.DoubleBigArrays.length( array ), filename ); } /** A wrapper that exhibits the content of a reader as a type-specific iterator. */ final private static class DoubleReaderWrapper extends AbstractDoubleIterator { final private BufferedReader reader; private boolean toAdvance = true; private String s; private double next; public DoubleReaderWrapper( final BufferedReader reader ) { this.reader = reader; } public boolean hasNext() { if ( ! toAdvance ) return s != null; toAdvance = false; try { s = reader.readLine(); } catch( EOFException itsOk ) {} catch( IOException rethrow ) { throw new RuntimeException( rethrow ); } if ( s == null ) return false; next = Double.parseDouble( s ); return true; } public double nextDouble() { if (! hasNext()) throw new NoSuchElementException(); toAdvance = true; return next; } } /** Wraps the given buffered reader into an iterator. * * @param reader a buffered reader. 
*/ public static DoubleIterator asDoubleIterator( final BufferedReader reader ) { return new DoubleReaderWrapper( reader ); } /** Wraps a file given by a {@link File} object into an iterator. * * @param file a file. */ public static DoubleIterator asDoubleIterator( final File file ) throws IOException { return new DoubleReaderWrapper( new BufferedReader( new FileReader( file ) ) ); } /** Wraps a file given by a pathname into an iterator. * * @param filename a filename. */ public static DoubleIterator asDoubleIterator( final CharSequence filename ) throws IOException { return asDoubleIterator( new File( filename.toString() ) ); } }
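All of the `loadFloats`/`storeFloats` and `loadDoubles`/`storeDoubles` pairs above share one trivial wire format: one decimal value per line, with loading stopping early when the input runs out and returning the count actually read. A minimal Python sketch of the same round trip (function names are illustrative, not part of the library):

```python
import io

def store_floats(values, stream):
    # Mirrors storeFloats(array, stream): one decimal value per line.
    for v in values:
        stream.write(f"{v}\n")

def load_floats(stream, out, offset, length):
    # Mirrors loadFloats(reader, array, offset, length): reads at most
    # `length` values into out[offset:], returning the count actually
    # read (less than `length` if the stream ends first).
    n = 0
    for line in stream:
        if n == length:
            break
        out[offset + n] = float(line)
        n += 1
    return n

buf = io.StringIO()
store_floats([1.5, 2.5, 3.5], buf)
buf.seek(0)
dest = [0.0] * 4
count = load_floats(buf, dest, 0, 4)  # stream ends after three values
print(count, dest)  # → 3 [1.5, 2.5, 3.5, 0.0]
```

Because loading is line-oriented, the same file can be consumed incrementally through the `asFloatIterator`/`asDoubleIterator` wrappers instead of being materialized into an array.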
import re

def remove_namespace(xml):
    regex = re.compile(' xmlns(:ns2)?="[^"]+"|(ns2:)|(xml:)')
    return regex.sub('', xml)
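A quick usage sketch of the helper above (the sample XML is hypothetical): one pass of `re.sub` drops the `xmlns:ns2` declaration and strips the `ns2:` and `xml:` prefixes from element and attribute names.

```python
import re

def remove_namespace(xml):
    regex = re.compile(' xmlns(:ns2)?="[^"]+"|(ns2:)|(xml:)')
    return regex.sub('', xml)

doc = '<ns2:root xmlns:ns2="http://example.com"><ns2:item xml:lang="en">x</ns2:item></ns2:root>'
print(remove_namespace(doc))  # → <root><item lang="en">x</item></root>
```

Note that this is purely textual: it only handles the hard-coded `ns2` and `xml` prefixes, so documents using other prefixes would pass through unchanged.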
// src/main/java/br/org/itai/amqpservice/proxy/services/impl/AnnotatedServiceImpl.java
package br.org.itai.amqpservice.proxy.services.impl;

import java.lang.annotation.Annotation;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import javax.jms.Destination;
import javax.jms.Session;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import br.org.itai.amqpservice.connection.interfaces.ConnectionFactory;
import br.org.itai.amqpservice.connection.interfaces.DestinationFactory;
import br.org.itai.amqpservice.convertion.interfaces.Converter;
import br.org.itai.amqpservice.convertion.interfaces.ConverterFactory;
import br.org.itai.amqpservice.convertion.interfaces.ConverterType;
import br.org.itai.amqpservice.proxy.annotations.param.Param;
import br.org.itai.amqpservice.proxy.annotations.param.config.AddressConfig;
import br.org.itai.amqpservice.proxy.annotations.param.config.QueueConfig;
import br.org.itai.amqpservice.proxy.annotations.param.config.ServiceConfig;
import br.org.itai.amqpservice.proxy.annotations.service.ServiceAddress;
import br.org.itai.amqpservice.proxy.annotations.service.ServiceQueue;
import br.org.itai.amqpservice.proxy.message.RequestMessage;
import br.org.itai.amqpservice.proxy.services.AnnotatedService;
import br.org.itai.amqpservice.proxy.services.MessengerService;
import br.org.itai.amqpservice.proxy.util.ParametizerStringUtils;

public class AnnotatedServiceImpl implements AnnotatedService, InvocationHandler {

	protected transient Logger logger = LoggerFactory.getLogger(this.getClass());

	private ServiceAddress serviceAddress;
	private ServiceQueue serviceQueue;
	private MessengerService messenger = new MessengerServiceImpl();
	private Class<?> clazz;
	private ConnectionFactory cf;
	private DestinationFactory qf;
	private ConverterFactory cvf;

	public AnnotatedServiceImpl(Class<?> clazz, Annotation serviceAnnotation,
			Annotation queueAnnotation, ConnectionFactory cf,
			DestinationFactory qf, ConverterFactory cvf) throws Exception {
		this.clazz = clazz;
		this.cf = cf;
		this.qf = qf;
		this.cvf = cvf;
		this.serviceQueue = (ServiceQueue) queueAnnotation;
		this.serviceAddress = (ServiceAddress) serviceAnnotation;
	}

	public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
		if (!Arrays.asList(clazz.getDeclaredMethods()).contains(m))
			return Object.class.getMethod(m.getName()).invoke(clazz);
		try {
			// check that an address annotation is present on the class or the method
			if (serviceAddress == null && m.getAnnotation(ServiceAddress.class) == null)
				throw new IllegalStateException("There must be a "
						+ ServiceAddress.class.getCanonicalName()
						+ " annotation on class or method "
						+ clazz.getCanonicalName() + "#" + m.getName());
			// check that a queue annotation is present on the class or the method
			if (serviceQueue == null && m.getAnnotation(ServiceQueue.class) == null)
				throw new IllegalStateException("There must be a "
						+ ServiceQueue.class.getCanonicalName()
						+ " annotation on class or method "
						+ clazz.getCanonicalName() + "#" + m.getName());
			RequestMessage message = getMessage(m, args);
			String address = getAddress(m, args);
			String queue = getQueue(m, args);
			return request(message, address, queue, m.getReturnType());
		} catch (Exception e) {
			throw e;
		}
	}

	public String getAddress(Method m, Object[] args) throws Exception {
		Annotation[][] parameterAnnotations = m.getParameterAnnotations();
		Map<String, Object> addressConfigParams = new HashMap<String, Object>();
		// use the method-level address config if present, otherwise the class-level one
		String urlPattern = (m.getAnnotation(ServiceAddress.class) != null)
				? m.getAnnotation(ServiceAddress.class).value()
				: serviceAddress.value();
		for (int i = 0; i < args.length; i++) {
			for (Annotation a : parameterAnnotations[i]) {
				// fill the URL pattern from the parameter-level address config
				if (a instanceof AddressConfig) {
					String paramName = ((AddressConfig) a).value();
					if (!ParametizerStringUtils.hasParam(urlPattern, paramName))
						throw new IllegalArgumentException(urlPattern
								+ " should have " + paramName);
					addressConfigParams.put(((AddressConfig) a).value(),
							args[i].toString());
				}
			}
		}
		String address = ParametizerStringUtils.format(urlPattern, addressConfigParams);
		if (!ParametizerStringUtils.isCompleteFilled(address, 0))
			throw new IllegalArgumentException(urlPattern
					+ " should have all params specified.");
		return address;
	}

	public RequestMessage getMessage(Method m, Object[] args) throws Exception {
		RequestMessage message = new RequestMessage();
		if (m.getAnnotation(ServiceConfig.class) != null) {
			ServiceConfig serviceConfig = m.getAnnotation(ServiceConfig.class);
			if (serviceConfig.enable())
				message.setService(serviceConfig.name());
			// if (!serviceConfig.groupParams())
			//     if (serviceConfig.enable())
			//         message.addParameter(serviceConfig.name(), m.getName());
		} else
			message.setService(m.getName());
		Annotation[][] parameterAnnotations = m.getParameterAnnotations();
		for (int i = 0; i < args.length; i++) {
			for (Annotation an : parameterAnnotations[i]) {
				// collect the annotated parameters
				if (an instanceof Param) {
					message.addParameter(((Param) an).value(), args[i].toString());
				}
			}
		}
		return message;
	}

	public String getQueue(Method m, Object[] args) throws Exception {
		Annotation[][] parameterAnnotations = m.getParameterAnnotations();
		Map<String, Object> queueConfigParams = new HashMap<String, Object>();
		// use the method-level queue config if present, otherwise the class-level one
		String description = (m.getAnnotation(ServiceQueue.class) != null)
				? m.getAnnotation(ServiceQueue.class).value()
				: serviceQueue.value();
		for (int i = 0; i < args.length; i++) {
			for (Annotation a : parameterAnnotations[i]) {
				// fill the queue name from the parameter-level queue config
				if (a instanceof QueueConfig) {
					String paramName = ((QueueConfig) a).value();
					if (!ParametizerStringUtils.hasParam(description, paramName))
						throw new IllegalArgumentException(description
								+ " should have " + paramName);
					queueConfigParams.put(((QueueConfig) a).value(),
							args[i].toString());
				}
			}
		}
		String queue = ParametizerStringUtils.format(description, queueConfigParams);
		// TODO: verify if all parameters were filled / improve it
		if (!ParametizerStringUtils.isCompleteFilled(queue, 1))
			throw new IllegalArgumentException(description
					+ " should have all params specified.");
		return queue;
	}

	public <T> T request(RequestMessage message, String address, String queue,
			Class<T> returnType) throws Exception {
		// QPID send and receive logic
		Session session = cf.getSession(address);
		Destination destination = qf.createQueue(queue);
		Converter c = cvf.createConverter(ConverterType.JSON);
		String jsonMessage = c.marshall(message);
		logger.debug("Sending message {} to address {} on queue {}",
				jsonMessage, address, queue);
		String messageReceived = messenger.requestTo(jsonMessage, session, destination);
		logger.debug("Message {} received.", messageReceived);
		return c.unmarshall(messageReceived, returnType);
	}
}
/// Requests the setting value from a given signature for a setting handler.
async fn request_value(&mut self, signature: Signature) -> Result<SettingInfo, Error> {
    let mut send_receptor = self
        .messenger_client
        .message(
            Payload::Command(Command::HandleRequest(Request::Get)).into(),
            Audience::Messenger(signature),
        )
        .send();

    send_receptor.next_payload().await.and_then(|payload| {
        if let Ok(Payload::Result(Ok(Some(setting)))) = Payload::try_from(payload.0) {
            Ok(setting)
        } else {
            // TODO(fxbug.dev/68479): Propagate the returned error or
            // generate a proper error.
            Err(format_err!("did not receive setting value"))
        }
    })
}
//! s3du: A tool for informing you of the used space in AWS S3 buckets.
#![forbid(unsafe_code)]
#![deny(missing_docs)]
#![allow(clippy::redundant_field_names)]

use anyhow::Result;
use clap::value_t;
use log::{
    debug,
    info,
};
use std::str::FromStr;

/// Command line parsing.
mod cli;

/// Common types and traits.
mod common;

use common::{
    BucketSizer,
    ClientConfig,
    ClientMode,
    HumanSize,
    Region,
    SizeUnit,
};

#[cfg(feature = "s3")]
use common::ObjectVersions;

/// CloudWatch Client.
#[cfg(feature = "cloudwatch")]
mod cloudwatch;

/// S3 Client.
#[cfg(feature = "s3")]
mod s3;

/// `Client` struct wraps a `Box<dyn BucketSizer>`.
struct Client(Box<dyn BucketSizer>);

/// `Client` implementation.
impl Client {
    /// Return the appropriate AWS client with the given `ClientConfig`.
    async fn new(config: ClientConfig) -> Self {
        let mode = &config.mode;
        let region = &config.region;

        info!("Client in region {} for mode {:?}", region.name(), mode);

        let client: Box<dyn BucketSizer> = match mode {
            #[cfg(feature = "cloudwatch")]
            ClientMode::CloudWatch => {
                let client = cloudwatch::Client::new(config);
                Box::new(client.await)
            },
            #[cfg(feature = "s3")]
            ClientMode::S3 => {
                let client = s3::Client::new(config);
                Box::new(client.await)
            },
        };

        Client(client)
    }

    /// Perform the actual get and output of the bucket sizes.
    async fn du(&self, unit: SizeUnit) -> Result<()> {
        // List all of our buckets
        let buckets = self.0.buckets().await?;

        debug!("du: Got buckets: {:?}", buckets);

        // Track total size of all buckets.
        let mut total_size: usize = 0;

        // For each bucket name, get the size
        for bucket in buckets {
            let size = self.0.bucket_size(&bucket).await?;
            total_size += size;

            let size = size.humansize(&unit);
            println!("{size}\t{bucket}", size=size, bucket=bucket.name);
        }

        let total_size = total_size.humansize(&unit);

        // Display the total size the same way du(1) would: the total size
        // followed by a `.`.
        println!("{size}\t.", size=total_size);

        Ok(())
    }
}

/// Entry point
#[tokio::main]
async fn main() -> Result<()> {
    pretty_env_logger::init();

    // Parse the CLI
    let matches = cli::parse_args();

    // Get the bucket name, if any.
    let bucket_name = matches
        .value_of("BUCKET")
        .map(|name| name.to_string());

    // Get the client mode
    let mode = value_t!(matches, "MODE", ClientMode)?;

    // Get the unit size to display
    let unit = value_t!(matches, "UNIT", SizeUnit)?;

    // Here we get the region; if a custom endpoint is set, that is used,
    // otherwise we get the regular region.
    // Unwraps on values here should be fine, as they're checked when the CLI
    // is validated.
    #[cfg(feature = "s3")]
    let region = if matches.is_present("ENDPOINT") {
        if mode == ClientMode::S3 {
            let endpoint = matches.value_of("ENDPOINT").unwrap();
            Region::new().set_endpoint(endpoint)
        }
        else {
            eprintln!("Error: Endpoint supplied but client mode is not S3");
            ::std::process::exit(1);
        }
    }
    else {
        let region = matches.value_of("REGION").unwrap();
        Region::new().set_region(region)
    };

    // Endpoint selection isn't supported for CloudWatch, so we can drop it if
    // we're compiled without the S3 feature.
    #[cfg(all(feature = "cloudwatch", not(feature = "s3")))]
    let region = {
        let region = matches.value_of("REGION").unwrap();
        Region::new().set_region(region)
    };

    // This warning will trigger if compiled without the "s3" feature. We're
    // aware, allow it.
    #[allow(unused_mut)]
    let mut config = ClientConfig {
        bucket_name: bucket_name,
        mode:        mode,
        region:      region,
        ..Default::default()
    };

    // If we have S3 mode available, we also need to pull in the
    // ObjectVersions from the command line.
    #[cfg(feature = "s3")]
    {
        if config.mode == ClientMode::S3 {
            // This should be safe, we validated this in the CLI parser.
            let versions = matches.value_of("OBJECT_VERSIONS").unwrap();

            // This should be safe, due to the validation above.
            let versions = ObjectVersions::from_str(versions).unwrap();
            config.object_versions = versions;
        }
    }

    // The region here will come from CLI args in the future
    let client = Client::new(config).await;

    client.du(unit).await
}
Changes to Trash Collection - Beginning July 3, 2017 Newport Moves to Weekly Trash Collection, Curbside Recycling for All Newport, Kentucky – June 21, 2017 – Newport is making big changes to its trash program. Beginning July 1, curbside recycling is available to all residents at no additional cost, and trash moves to once-weekly collection. As part of Newport's contract with Rumpke, all residents will receive a 65-gallon recycling cart that will be collected once per week at no additional cost. Residents can opt for a smaller 18-gallon bin by contacting Rumpke. Rumpke will deliver recycling containers June 24 and 25. With the move to once-weekly trash collection and the adoption of curbside recycling for all, most residents will have a new collection day. Residents should watch their mailboxes next week for information about their new trash and recycling collection schedule. The new schedule, along with an online searchable map of pick-up days, can also be accessed on the City's website. "It's a big change for our residents," said Tom Fromme, Newport City Manager. "For years, we offered twice weekly trash collection service and offered curbside recycling for an additional fee." Fromme said that only 400 of the City's 6,000+ households participated in the curbside recycling program. "Like trash, recycling collection is an essential service," Fromme said. "It's important that our residents make an effort to reduce the amount of trash we send to the landfill. To help encourage participation, we took away the cost barrier." For all questions, or to adjust the size of your cart, call 1-877-786-7537 or email [email protected]. Rumpke Waste & Recycling has been committed to keeping neighborhoods and businesses clean and green since 1932 by providing environmentally friendly waste disposal solutions. 
Headquartered in Colerain Township, Ohio, the firm employs 2,600 people and provides service to areas of Ohio, Kentucky, Indiana and West Virginia. Rumpke divisions include Rumpke Recycling, Rumpke Portable Restrooms, The William-Thomas Group, Rumpke Hydraulics and Rumpke Haul-it-Away. Please visit www.rumpke.com for more information.
Quantifying the roles of ocean circulation and biogeochemistry in governing ocean carbon-13 and atmospheric carbon dioxide at the last glacial maximum Abstract. We use a state-of-the-art ocean general circulation and biogeochemistry model to examine the impact of changes in ocean circulation and biogeochemistry in governing the change in ocean carbon-13 and atmospheric CO2 at the last glacial maximum (LGM). We examine 5 different realisations of the ocean's overturning circulation produced by a fully coupled atmosphere-ocean model under LGM forcing and suggested changes in the atmospheric deposition of iron and phytoplankton physiology at the LGM. Measured changes in carbon-13 and carbon-14, as well as a qualitative reconstruction of the change in ocean carbon export are used to evaluate the results. Overall, we find that while a reduction in ocean ventilation at the LGM is necessary to reproduce carbon-13 and carbon-14 observations, this circulation results in a low net sink for atmospheric CO2. In contrast, while biogeochemical processes contribute little to carbon isotopes, we propose that most of the change in atmospheric CO2 was due to such factors. However, the lesser role for circulation means that when all plausible factors are accounted for, most of the necessary CO2 change remains to be explained. This presents a serious challenge to our understanding of the mechanisms behind changes in the global carbon cycle during the geologic past. Introduction Reproducing past changes in the global carbon cycle is a key test of our understanding of the Earth's climate system and, as such, explaining the documented changes in atmospheric gases and geochemical proxies that occurred during the last glacial maximum (LGM) remains an enduring challenge. 
Since the carbon stored in the terrestrial biosphere likely declined (e.g., Bird et al., 1994; Sigman and Boyle, 2000), the ocean is believed to be responsible for the 80 ppm reduction in atmospheric carbon dioxide (pCO 2atm ) measured in ice cores (Petit et al., 1999) at the LGM. In reproducing this change, only ∼50 ppm needs to be explained by a given hypothesis, as the remainder can be accounted for by subsequent deep-ocean carbonate compensation (e.g., Brovkin et al., 2007). In parallel to reduced pCO 2atm , sediment cores have shown that the gradient in δ 13 C DIC between upper and deeper waters (as measured by benthic foraminifera) was 50% greater at the LGM (e.g., Duplessy et al., 1988; Curry and Oppo, 2005). Accordingly, pCO 2atm and the δ 13 C DIC gradient provide two constraints for hypotheses that seek to explain the processes that resulted in the LGM climate and pCO 2atm . Since the LGM was typified by changes in both ocean circulation (Lynch-Stieglitz et al., 2007) and dust deposition of the micronutrient iron (Petit et al., 1999; Martin, 1990; Mahowald et al., 2006), modified pCO 2atm and δ 13 C DIC potentially reflect changes in productivity and/or circulation (e.g., Brovkin et al., 2007; Lynch-Stieglitz et al., 2007; Toggweiler, 1999), which would be manifested by changes in the ocean's solubility and/or biological pumps (driven by physical and biological processes). Colder sea surface temperatures can increase the solubility of CO 2 in the ocean at the LGM, while ocean productivity could be stimulated by additional iron from dust (Martin, 1990) or by changes in phytoplankton stoichiometry that more efficiently produce organic carbon per unit macronutrient (Broecker, 1982; Omta et al., 2006). The widely measured changes in δ 13 C DIC are, thus, an important additional constraint on hypotheses that seek to explain LGM pCO 2atm . 
Most models that have investigated LGM pCO 2atm and δ 13 C DIC have been box models or models of intermediate complexity (IC-models) (e.g., Toggweiler, 1999; Brovkin et al., 2007), which permit a wide exploration of parameter space, but sacrifice degrees of physical and/or biogeochemical realism that preclude a detailed spatial comparison with observations. 3-D ocean-general-circulation and biogeochemistry models (OGCBMs) can represent the effect of LGM climate on circulation and the subsequent impact on biogeochemistry in a more mechanistic sense (e.g., Bopp et al., 2003; Kurahashi-Nakamura et al., 2007) and can also simulate well-measured paleo-proxies such as δ 13 C and 14 C. Such a procedure permits a thorough spatial comparison of the δ 13 C DIC distribution resulting from a given LGM atmosphere-ocean scenario to observations and provides confidence in model results. Here we use a state-of-the-art OGCBM to examine how pCO 2atm and δ 13 C DIC respond to changes in ocean circulation and dust deposition. We constrain the modelled LGM δ 13 C DIC using 133 observations from benthic foraminifera. Our results allow us to delineate differing roles for circulation and biology in governing LGM δ 13 C and pCO 2atm , but highlight important shortcomings in capturing the required LGM change in pCO 2atm using existing hypotheses.

Ocean biogeochemical model

Our OGCBM PISCES (Aumont and Bopp, 2006) simulates nanophytoplankton and diatoms, meso-/micro-zooplankton, small/large detritus, carbon-13, carbon-14, calcium carbonate, biogenic silica, dissolved inorganic carbon, carbonate, dissolved organic carbon, nitrate, phosphate, silicic acid and iron. Fixed organic matter production ratios are employed for nitrogen and phosphorus, while ratios of both silica, and iron, to carbon vary as a function of the phytoplankton group and environmental variables. 
The PISCES model has previously been evaluated and used for a wide variety of studies concerning historical and future climate (e.g., Bopp et al., 2003; Aumont and Bopp, 2006). We explicitly resolve carbon-13 in the existing 3 dissolved and 7 particulate carbon pools, with fractionation occurring during photosynthesis, precipitation of calcite, gas exchange and carbonate chemistry. We parameterize photosynthetic fractionation (‰) to be a function of the CO 2 (aq) concentration and the specific growth rate (µ) of each phytoplankton group. In an attempt to account for the influence of cell size on photosynthetic fractionation, as well as the observed minimum at high values of µ/CO 2 (aq), we restrict the variation in photosynthetic fractionation to between 5 and 20, and 10 and 26‰ for diatoms and nanophytoplankton, respectively. Calcite formation has a fixed fractionation of 1‰. Fractionation during gas exchange, and the conversion of CO 2 (aq) to DIC, are a function of temperature and the proportion of the DIC present as CO 2− 3 . We refer readers to for a complete description of the model and an evaluation against observations for the historical period (1860 to 2000). We note that we begin our simulations with the configuration that best matches modern deep ocean δ 13 C DIC (i.e., PISCES-D in the notation of ), see Supplementary Fig. 1: http://www.clim-past.net/5/695/2009/cp-5-695-2009. 14 C simulations followed the OCMIP protocol (http://www.ipsl.jussieu.fr/OCMIP/) and were conducted for 3000 years, assuming LGM values for pCO 2atm , salinity and alkalinity and constant atmospheric 14 C production.

Physical model

The physical model coupled to PISCES is based on the ORCA2 global ocean model configuration of OPA version 8.2 (Madec et al., 1998) and also includes a dynamic-thermodynamic sea-ice model (Timmermann et al., 2003). 
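As a rough illustration, the clamped photosynthetic fractionation scheme described above can be sketched in Python. The paper states only that fractionation depends on µ and CO2(aq) and is bounded per phytoplankton group; the linear dependence on µ/CO2(aq) and its slope below are illustrative assumptions, not the model's actual functional form.

```python
def photosynthetic_fractionation(mu, co2_aq, group):
    """Per-mil 13C fractionation during photosynthesis (illustrative sketch).

    Bounds (5-20 permil for diatoms, 10-26 permil for nanophytoplankton)
    are taken from the text; the linear form and its slope are assumptions.
    """
    bounds = {"diatom": (5.0, 20.0), "nano": (10.0, 26.0)}
    lo, hi = bounds[group]
    # Fractionation decreases as mu/CO2(aq) rises (carbon demand outpaces
    # supply), reproducing the observed minimum at high mu/CO2(aq).
    eps = hi - 25.0 * (mu / co2_aq)  # the slope of 25 is an assumed constant
    return max(lo, min(hi, eps))
```

With mu near zero the function returns the group's upper bound, and at high mu/CO2(aq) it clamps to the lower bound, mirroring the restricted ranges quoted in the text.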
The mean horizontal resolution is approximately 2° by 2° cos(latitude) and the meridional resolution is enhanced to 0.5° at the Equator. The model has 30 vertical levels, with an increment that increases from 10 m at the surface to 500 m at depth (12 levels are located in the first 125 m). Our simulations were forced by the output from the IPSL coupled atmosphere-ocean climate model under LGM forcing in terms of the radiative properties of atmospheric gases and orbital forcing, with an LGM land/ocean mask and LGM topography and ice sheets (see Sect. 2.4). As such, this produces changes in physical transport and temperature and salinity properties of the ocean that can be used to force PISCES offline.

The intermediate complexity model CLIMBER-2

The IC-model CLIMBER-2 (Petoukhov et al., 2000) simulates the atmosphere, ocean and land biosphere. The atmospheric model has a coarse resolution of 10° and 51° in latitude and longitude, respectively. CLIMBER-2 includes a zonally averaged ocean model with a 2.5° meridional resolution and 20 uneven vertical levels, a sea-ice model and the carbon system (including carbon-13), as well as a simplistic representation of ocean biota. Due to its intermediate complexity, we decided to use CLIMBER-2 to examine the robustness of the major conclusions from our OGCBM with simulations of 20 000 years (i.e., to equilibrium). Ocean circulation in CLIMBER-2 was modified via additions of freshwater, which permitted us to evaluate the impact of changes in ocean ventilation on δ 13 C DIC in a similar fashion as for the OGCBM.

Experimental strategy

We employed the OGCBM PISCES to prognostically simulate ocean biogeochemistry, including carbon-13 and carbon-14, when forced by a variety of LGM circulation schemes that arise from the fully coupled IPSL model (see Sect. 2.2, Table 1). In all LGM simulations, snow accumulates on the ice sheets and this freshwater sink has to be compensated in order to obtain stable simulations. 
Therefore, simulations differ in the regions chosen for the redistribution of freshwater (CircD and CircE) and different degrees of freshwater forcing (CircA, CircB, CircC), which results in the variable overturning circulations summarized in Table 1 (Alkama et al., 2007; Arsouze et al., 2008; Kageyama et al., 2009). We use an estimation of LGM dust deposition that is constrained by the sediment record and iron supply increases 1.5-fold, primarily in the Southern Ocean (SO) Atlantic sector (Mahowald et al., 2006). Pre-industrial and LGM dust deposition fields assumed Fe to be 3.5% of dust, with a solubility of 0.5%. Shelf iron supply was recalculated assuming a 120 m drop in sea level. We also tested the mean proposed changes in phytoplankton stoichiometry as reported from phytoplankton physiology models under an LGM climate, which results in a 12% increase in the mean phytoplankton C/N ratio (Omta et al., 2006).

[Table 1 notes: a "NADW Max" is the maximum value of the meridional stream function between 500 and 1500 m and 30° N to 50° N; "NADW depth" is the depth at which the stream function is zero. b "AABW Max" is the minimum value of the meridional stream function between 55° S and 75° S (circumpolar), whereas "AABW at depth" is the minimum value of the stream function at 3000 m (also between 55° S and 75° S, to give an indication as to the depth of ventilation by AABW).]

OGCM LGM simulations using the IPSL model employ a modified land/sea mask, accounting for topography and ice sheets (Peltier, 2004). Atmospheric concentrations of CO 2 , CH 4 and N 2 O for radiative forcing were 185 ppm, 350 ppb and 200 ppb, respectively, with orbital parameters from 21 kyr BP (following the PMIP2 protocol, http://pmip2.lsce.ipsl.fr) and this climate impacts the properties of the ocean (in terms of temperature and salinity). Our experiments are designed to test the impact of different LGM overturning circulations on carbon isotopes, ocean biogeochemistry and pCO 2atm . 
While in this particular study, alternative circulations are obtained via changes in surface freshwater forcing of the ocean, these could have arisen from a number of climatic forcings, such as other freshwater fluxes or changes in winds. For example, the overturning circulations could also represent the postulated changes induced by latitudinal changes in southern hemisphere westerly winds, especially with regards to the SO (e.g., Toggweiler et al., 2006). We used the resulting circulations (Table 1) to force PISCES offline (i.e., the circulations were constant for the duration of the experiments). Before running each simulation, we remove 0.4‰ from oceanic δ 13 C DIC (by decreasing DI 13 C) and add a further 1 psu to salinity (on top of LGM climate related changes) to account for changes in land biosphere carbon and sea level, respectively. We also increased nutrient stocks by 3% to account for the change in sea level in the first year of simulation (we note that we did not change DIC or alkalinity). We performed integrations of 500 years for each experiment (except those concerned with carbon-14, which ran for 3000 years), following a 3000-year spin-up under pre-industrial conditions. Initially, we ran PISCES with LGM dust under a pre-industrial (PI) circulation and then with each "LGM" circulation scheme (Table 1) in turn, evaluating the simulated δ 13 C DIC (LGM-PI) against 133 observations from benthic foraminifera, as well as evaluating simulated 14 C against previously published data. Subsequently, we examined the impact on ocean biogeochemistry and pCO 2atm of the LGM overturning circulation that is able to reproduce carbon isotope observations. We also examined the relationship between ocean circulation and δ 13 C DIC using the IC-model CLIMBER-2 (see Sect. 2.3). 
To that end, we made a number of idealised freshwater additions to the Northern and Southern Hemispheres under LGM climate and examined the impact on ocean carbon-13 distributions after simulations of 20 000 years. This permits us to evaluate the robustness of our findings from the OGCBM, which, while more complex than CLIMBER-2 (in terms of its spatial resolution and biogeochemical processes), necessitates shorter simulation durations. LGM and pre-industrial δ 13 C DIC data from benthic foraminifera are from the MOTIF database (http://motif.lsce.ipsl.fr). We removed data without a complete PI-LGM pair, leaving 133 paired observations to compare with our model results.

LGM ocean carbon-13

While increasing dust iron supply under a pre-industrial circulation reduces deep-water δ 13 C DIC , due to an increased export of organic matter with light δ 13 C values, its impact remains too local and too slight to explain the observations alone (Fig. 1a, Table 2). Stimulation of carbon export (see Table 3 and Sect. 3.2) in the iron-limited SO is responsible for a slight decrease in deep-water δ 13 C DIC in the SO and a slight increase in northern latitudes (Fig. 1a). Increased LGM overturning (CircC, CircD, and CircE) drives a larger change in δ 13 C DIC , but since enhanced mixing of enriched surface waters with depleted deep waters results, the change is opposite to that observed (i.e., increased at depth, relative to the surface, and vice-versa, Fig. 1d-f, Table 2).

[Table 2 caption: Statistical comparison of model-simulated glacial-interglacial change in ocean δ 13 C DIC with observations from benthic foraminifera collated in the MOTIF database (values compared at the identical latitude, longitude and depth as the observations; see Table 1 for details on model simulations). a There are 133 valid data points in the entire ocean, with 60 of these at or below 3 km.]
On the other hand, under diminished north Atlantic (NA) ventilation (CircA and CircB) an excellent regional agreement with NA δ 13 C DIC observations results (Figs. 1b and c, 2, Table 2, with R=0.73). The change in the NA δ 13 C DIC gradient is robustly related to changes in either the speed or depth of NA ventilation (across all simulations, Fig. 3). This relationship is also remarkably consistent when examined with CLIMBER-2 (Fig. 3), which supports the statement from the OGCBM that NA ventilation is the dominant process governing the NA δ 13 C DIC vertical gradient (as reconstructed from benthic foraminifera). Using the OGCBM relationship, the observed 0.49‰ increase in gradient requires a 59% or 68% reduction in overturning speed, or depth, respectively. Since simulations with different degrees of SO ventilation (CircA and CircB) both reproduce NA δ 13 C DIC (Fig. 1b and c), we suggest local (i.e., NA) forcings predominate and the role of Antarctic processes is of second order. Unfortunately, in all the circulations tested in this study, increased NA ventilation speed was always accompanied by a deepening of ventilation (and vice-versa, Table 1), leaving us unable to isolate their separate effects. Nevertheless, assuming modern overturning is 18 Sv and ventilates to 4000 m (Talley et al., 2003), an LGM NA ventilation of 7 Sv to 1300 m would satisfy δ 13 C DIC observations. Turning to the SO, deep-water δ 13 C DIC increases when the necessary reduction in NA ventilation is accompanied by increased SO ventilation (CircB), as suggested by some LGM studies (Curry and Oppo, 2005; Brovkin et al., 2007), which is contrary to observations (Fig. 1c, Table 2). We find that only lesser SO ventilation (mostly in the Atlantic and Indian ocean sectors) at the LGM can reproduce the observed depletion in deep δ 13 C DIC (CircA, Fig. 1b). Furthermore, simulated bottom δ 13 C DIC highlights the widespread impact of elevated SO ventilation (CircB) on δ 13 C DIC (Fig. 
2b) and, since dust is insufficient (there is only a weak change in deep-water δ 13 C DIC due to elevated SO export; see further discussion of the export response in Sect. 3.2), confirms the necessity for reduced LGM SO ventilation (CircA) in order to satisfy the deep δ 13 C DIC observational constraint (Fig. 2a). In contrast to previous IC-models (which assume increased SO ventilation, Brovkin et al., 2007), our OGCBM demonstrates the far-field effects of greater SO ventilation on δ 13 C DIC and its discord with observations (CircB, Fig. 2b). Indeed, additional tests with CLIMBER-2 show that when we reduce SO ventilation, the deep SO δ 13 C DIC declines (by 0.03‰ Sv −1 ) in line with observations. Finally, 14 C simulations (performed offline for 3000 years) also demonstrate that only CircA captures the observed increase in deep-water ages (Robinson et al., 2005; Sikes et al., 2000), with enhanced SO ventilation (CircB) causing younger ages for the deep ocean, in contrast to observations (Fig. 4). While CircA has reduced overall SO ventilation, we note that this response is heterogeneous in space, with a shift in ventilation sites and slight increases in ventilation in some places (such as the Ross Sea), which explains the weaker δ 13 C DIC change in this region (Fig. 2a). Overall, only reduced NA and SO ventilation, isolating the deep ocean, can reproduce LGM δ 13 C DIC , as well as 14 C, observations. The reduced SO ventilation at the LGM that is necessitated by δ 13 C DIC and 14 C observations might have arisen from the postulated equatorward shift in Southern Hemisphere westerly winds, although this is not the forcing which we have used in this study. Our results regarding the nature of the LGM overturning circulation agree with earlier box model studies (Toggweiler, 1999), whereas previous CLIMBER-2 simulations with greater SO ventilation cannot replicate SO δ 13 C DIC observations (Brovkin et al., 2007). 
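The back-of-the-envelope arithmetic behind the NA ventilation estimate above can be written out explicitly. The modern values (18 Sv, ventilating to 4000 m; Talley et al., 2003) and the percentage reductions inferred from the δ 13 C DIC gradient relationship (59% in speed, 68% in depth) are taken directly from the text; the sketch simply applies them.

```python
# Modern NA overturning (Talley et al., 2003), as quoted in the text.
modern_sv, modern_depth_m = 18.0, 4000.0

# Reductions required by the observed 0.49 permil gradient increase.
speed_cut, depth_cut = 0.59, 0.68

lgm_sv = modern_sv * (1.0 - speed_cut)          # ~7.4 Sv
lgm_depth = modern_depth_m * (1.0 - depth_cut)  # ~1280 m

print(round(lgm_sv, 1), round(lgm_depth))  # 7.4 1280
```

This is consistent with the "7 Sv to 1300 m" LGM NA ventilation quoted in the text.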
Similarly, a variety of other proxies ( 18 O, Cd/Ca, 15 N/ 14 N, and 231 Pa/ 230 Th) support a poorly ventilated LGM deep-ocean (e.g., Robinson et al., 2005; Sikes et al., 2000; Sigman et al., 2004; Francois et al., 1997; Adkins et al., 2005; Marchitto and Broecker, 2006; Keigwin and Boyle, 2008).

[Figure 3 caption (fragment): The relationships are described by δ 13 C DIC gradient = −0.666(ΔNA-ventilation) − 0.145 (R=−0.98) and δ 13 C DIC gradient = −0.776(ΔNA-ventilation) + 0.045 (R=−0.96) for ventilation strength and depth, respectively. Superimposed (in grey) is the impact of changes in NA ventilation on the δ 13 C DIC gradient in the IC-model CLIMBER-2 (δ 13 C DIC gradient = −0.771(ΔNA-ventilation) + 0.079, R=−0.999, calculated in the same fashion as for the 3-D OGCBM), which highlights that the fundamental relationship between NA ventilation and the δ 13 C DIC gradient is robust and not specific to the OGCBM. Observations show an increase in gradient of 0.49‰ for the same region. We note that our depth ranges for the δ 13 C DIC gradient were chosen to examine the effect of circulation on δ 13 C DIC , rather than the biological pump. Changes in NA overturning for CLIMBER-2 were calculated in the same fashion as for the OGCBM.]

[Figure 4 caption: Δ 14 C (‰) for (a) the pre-industrial circulation, (b) CircA and (c) CircB presented as an Atlantic zonal mean (where Δ 14 C = 1000*((DI 14 C/DIC)−1)). We include LGM 14 C data collected from the western North Atlantic (Robinson et al., 2005) as circles for (b) and (c), which demonstrates that only CircA (which has reduced SO ventilation, Table 1) captures the increased deep-water ages present in the data. The GLODAP estimation (Key et al., 2004) of the pre-industrial Δ 14 C for the Atlantic basin (zonal mean) is presented in Supplementary Fig. 2.]

Surface waters were, thus, enriched with carbon-13 and deep waters more strongly reflected the light carbon introduced from organic matter remineralisation. 
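The Δ 14 C notation used in the figure caption above is just a normalised isotope ratio expressed in per mil; a minimal helper makes the definition concrete.

```python
def delta14c_permil(di14c, dic):
    """Delta-14C in per mil, as defined in the Fig. 4 caption:
    1000 * ((DI14C / DIC) - 1), with DI14C expressed in the same
    normalised units as DIC."""
    return 1000.0 * (di14c / dic - 1.0)

# A water parcel whose normalised 14C/C ratio has decayed to 90% of the
# reference reads -100 per mil:
print(round(delta14c_permil(0.90, 1.0), 6))  # -100.0
```

Older, less recently ventilated deep water has lower DI 14 C/DIC and hence more negative Δ 14 C, which is why the reduced ventilation of CircA shows up as older (more depleted) deep-water values.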
Accordingly, CircA provides the best statistical fit to the 133 δ 13 C DIC observations from benthic foraminifera (R=0.6, p<0.001) and reproduces the greater reduction in δ 13 C DIC at depths greater than 3 km (Table 2). Deep-water oxygen responds to the changes in ocean circulation and biology. In response to LGM dust, the greater export of organic matter in the SO results in reduced deep-water oxygen (Fig. 5a), which is then transported northward and throughout the SO by the abyssal ocean circulation. The reduced ventilation present in CircA that is necessary to reconcile δ 13 C DIC observations elevates global suboxia (defined as O 2 <5 µM) 3-fold when it acts in conjunction with LGM dust, which should ostensibly increase denitrification rates, and this depletes the oceanic fixed nitrogen inventory. Moreover, the reduced overturning rate necessitated by carbon-13 and carbon-14 measurements might also inhibit the degree of compensation (in terms of the oceanic fixed N inventory) by N 2 fixation. Nevertheless, despite a sluggish PI circulation (Table 1), CircA and LGM dust did not drive total anoxia at depth in all places (Fig. 5b). In fact, colder temperatures, reduced subduction of oxygen-rich surface waters and shifts in SO ventilation sites associated with CircA and the LGM climate can actually elevate deep oxygen in some regions. These include modern denitrification sites (e.g., eastern Pacific and Arabian Sea, Fig. 5b), consistent with observations of reduced LGM denitrification therein (Ganeshram et al., 2000). Overall, oxygen declined in almost all of the deep ocean during our experiments, but since a spatial reorganisation of denitrification sites will likely follow modified LGM circulation (especially if it impacts deep-water formation sites), a reduction in denitrification at a modern denitrification site might not necessarily imply reduced global LGM denitrification. 
LGM atmospheric CO 2

A circulation that reproduces LGM δ 13 C DIC lowers pCO 2atm by 3.5 ppm (CircA alone, Table 3). Since the included temperature/salinity changes associated with LGM climate increase CO 2 uptake and typically reduce pCO 2atm by ∼15 ppm (e.g., Brovkin et al., 2007), this means that the lesser ventilation necessary to reconcile δ 13 C DIC observations actually causes increased pCO 2atm . This is because vertical nutrient supply to surface waters is reduced and export production declines, which increases preformed nutrients (Table 3). Overall, the increased solubility pump that results from the LGM climate is almost entirely compensated for by a reduced biological pump and results in a net 3.5 ppm pCO 2atm reduction. Our OGCBM represents the spatial variability in CO 2 fluxes, as well as circulation-biogeochemistry feedbacks, and demonstrates that although δ 13 C DIC observations require a less well-ventilated glacial ocean (which might have arisen from a northward redistribution of westerly winds in the Southern Hemisphere), this cannot drive a significant portion of the LGM pCO 2atm reduction (Table 3). In addition, reduced SO ventilation also retards the vertical supply of alkalinity to surface waters and this decreased surface alkalinity also contributes to the weak impact of CircA on pCO 2atm . The weak response in terms of pCO 2atm that results from CircA is due to the counteracting effects of the oceanic solubility, carbonate and biological pumps that are represented by our OGCBM. Despite contributing little to δ 13 C DIC , LGM dust does reduce pCO 2atm . While iron fertilisation of SO export elevates local CO 2 uptake, total global export declines due to reduced low-latitude nutrient supply (Dust+PI circulation, Table 3). Nevertheless, the SO is an effective control of pCO 2atm (Marinov et al., 2006) and the 15% enhancement of export therein and overall reduction in preformed nutrients reduces pCO 2atm by 11 ppm (Table 3). 
This is >3-fold lower than the 35 ppm found when IC-models are forced by arbitrary increases in SO productivity (Brovkin et al., 2007), highlighting the importance of explicitly representing dust iron and other limiting factors (such as light, macronutrients and physiology) when appraising the impact of LGM dust. Our OGCBM has a relatively low sensitivity to the large increase in dust iron supply, which results from phytoplankton physiological and ocean circulation processes. We find that the OGCBM primary productivity response to the additional dust iron is tempered by the 60% increase in the physiological (carbon-specific) iron demand (the Fe/C ratio) that follows elevated iron supply. The greater iron demand results from the up-regulation of iron-requiring processes by phytoplankton, which increases the Fe/C ratio (Sunda and Huntsman, 1997). In addition, increased dust deposition of iron also promotes a floristic shift in phytoplankton species composition towards diatoms (by 25%, as seen during iron enrichment experiments; de Baar et al., 2005), which have a greater iron demand. Explicitly representing these two processes, as well as the role of macronutrients and light in limiting phytoplankton growth (in the northern and southern parts of the Southern Ocean, respectively), retards the response of ocean biology to LGM dust deposition. This explains why our OGCBM has a lower sensitivity to dust than previous OGCBMs that did not include such effects (Bopp et al., 2003) or IC-models that do not explicitly account for iron (Brovkin et al., 2007). Compensatory changes in carbon export at low latitudes that result from modified nutrient utilisation at high latitudes are also important in understanding the impact on the global carbon cycle.
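A toy calculation (not the OGCBM parameterisation; the 2.5-fold supply increase is an assumed illustrative value, while the 60% Fe/C increase is from the text) shows why up-regulated iron demand damps the productivity response to extra dust iron:

```python
# Iron-supported carbon fixation scales roughly as supply / (carbon-specific
# iron demand). A 60% rise in Fe/C therefore tempers the gain from more dust.
fe_supply_pi = 1.0     # arbitrary pre-industrial iron supply units
fe_supply_lgm = 2.5    # assumed multiplicative increase from LGM dust
fe_to_c_pi = 1.0       # relative carbon-specific iron demand, PI
fe_to_c_lgm = 1.6      # 60% higher Fe/C under elevated iron supply

c_fix_naive = fe_supply_lgm / fe_to_c_pi      # response if Fe/C were fixed
c_fix_tempered = fe_supply_lgm / fe_to_c_lgm  # response with up-regulated demand
print(c_fix_naive, c_fix_tempered)  # 2.5 vs 1.5625: the gain is strongly damped
```

Under these assumptions, more than a third of the naive fertilisation response is absorbed by the physiological increase in iron demand alone, before the floristic shift towards iron-hungry diatoms is even considered.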
In addition to the effects of the solubility and biological pumps, pCO2atm can be affected by changes in the relative export of alkalinity that follow dust-driven changes in high-latitude utilisation of silicic acid (e.g., Matsumoto and Sarmiento, 2008). We find that LGM dust reduces SO diatom silicification (the Si/C ratio) by 12%, and the resulting silicic acid "leakage" to low latitudes does promote diatom productivity (over calcifiers), which reduces the relative export of alkalinity by 7% globally. As suggested by Matsumoto and Sarmiento (2008), this occurs despite a 6% reduction in absolute biogenic silica export at low latitudes (between 40°S and 40°N) that might be measured in a sediment core. However, since additional dust also increases SO diatom abundance (see above), and hence absolute silicic acid utilisation, the low-latitude supply is reduced in absolute terms. This explains the small changes in relative alkalinity export, which likely contribute <1 ppm to pCO2atm (Matsumoto and Sarmiento, 2008). Nevertheless, it should be noted that even our relatively complex OGCBM describes only two generic phytoplankton groups (diatoms and nanophytoplankton), and the potential for small diatoms (and possibly other species) to outcompete calcifiers in the nutrient-poor low latitudes might not be adequately represented. Figure 6 presents the spatial distribution of the proportional change in carbon export at 100 m from the OGCBM (LGM dust under the PI circulation and LGM dust under CircA, Table 1), overlain with a qualitative appraisal of the change in carbon export at the LGM from a recent data-based compilation (Kohfeld et al., 2005). It should be borne in mind that the data compilation aggregates estimates from a variety of different proxies to assign "scores" regarding LGM export (Kohfeld et al., 2005) and is thus necessarily qualitative. On the other hand, the OGCBM presents export at 100 m, which is not a precise analogue of the data.
The data present some clear trends: increased export in the LGM sub-Antarctic region, with lesser export at more poleward latitudes, and increased export in the North Atlantic. At low latitudes, the trend in the Pacific appears equivocal, while in the tropical Atlantic export increased at modern upwelling sites near Africa and probably increased throughout the tropical Atlantic basin, although some cores suggest reductions (Kohfeld et al., 2005). Our results suggest that LGM dust drives the increased export in the South Atlantic, while the reduced export in the Antarctic sector (i.e., south of the Polar Front) of the Southern Ocean results from circulation/sea-ice changes (although we find the transition from increased to decreased export occurs poleward of where the data suggest, Fig. 6). Increased export in the North Atlantic results from circulation changes in conjunction with LGM dust deposition, while the OGCBM trend in the Equatorial Pacific is equivocal (in accord with the data, Fig. 6).

Fig. 6. The proportional change in organic carbon export (at 100 m) from the OGCBM (relative to the pre-industrial) for (a) LGM dust added to the PI circulation and (b) LGM dust added to CircA. Circles represent a data compilation of the qualitative change in export production at the LGM from Kohfeld et al. (2005). We plot the LGM change-in-export data on the same scale as the OGCBM results, where (for the data points) −1 = decrease, −0.5 = slight decrease, 0 = no trend, +0.5 = slight increase and +1 = increase (black circles represent data points where the LGM trend in export was equivocal). Note that the OGCBM results represent the proportional change in organic carbon export at 100 m and can therefore only be compared with the data (which rely on a variety of methods) in a qualitative sense.

While both OGCBM candidates
(LGM dust and CircA + LGM dust) suggest reduced export in the tropical Atlantic, the trend in the data is unclear, with increased, slightly increased and decreased export all present in the data synthesis (Fig. 6). Reduced vertical nutrient supply (under CircA) and increased nutrient utilisation in the Southern Ocean (following LGM dust) both act to decrease tropical nutrient concentrations and, thus, carbon export. This negative effect on nutrient supply to the tropics is only enhanced when LGM dust acts alongside CircA. If tropical LGM export did increase, as suggested by Kohfeld et al. (2005), then additional processes (e.g., elevated nutrient supply from rivers, or changes in stoichiometry or species composition) might be important in compensating for the decline in nutrients that results from the dust (greater high-latitude macronutrient utilisation) and circulation (less ocean ventilation) changes that occurred at the LGM. LGM dust causes a 16 ppm decline in pCO2atm when added to CircA (Table 3), which is an additional 1.5 ppm (or 14%) relative to the sum of the separate effects of CircA (−3.5 ppm) and LGM dust (−11 ppm). Although dust fertilises SO export less under CircA than under the pre-industrial circulation (12% instead of 15%, Table 3, owing to reduced vertical nutrient supply), the deep-water sequestration of the exported carbon is enhanced by its greater residence time at depth (illustrated by the impact of CircA on carbon-14, Fig. 4). Consequently, while circulation changes might not explain LGM pCO2atm, the reduced ventilation necessitated by δ13C_DIC can potentially amplify the impact of dust or other biogeochemical forcings. Including a 12% increase in LGM planktonic C/N ratios (Omta et al., 2006) in combination with LGM dust and CircA reduces pCO2atm by another 9 ppm (25 ppm in total, Table 3).
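The amplification claim is a simple nonlinearity check, which can be verified directly from the Table 3 numbers quoted in the text:

```python
# The combined CircA + dust effect exceeds the sum of the individual effects,
# showing that reduced ventilation amplifies the dust forcing.
circa_only_ppm = -3.5    # CircA alone (Table 3)
dust_only_ppm = -11.0    # LGM dust under PI circulation (Table 3)
combined_ppm = -16.0     # LGM dust added to CircA (Table 3)

linear_sum_ppm = circa_only_ppm + dust_only_ppm   # -14.5 ppm if purely additive
amplification_ppm = combined_ppm - linear_sum_ppm # extra drawdown from interaction
print(amplification_ppm)  # -1.5 ppm of nonlinear amplification
```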
Interestingly, adding modified stoichiometry to CircA with LGM dust lowers pCO2atm further while leaving preformed nutrients almost unchanged (Table 3), which demonstrates that preformed nutrients become decoupled from changes in pCO2atm when the overall C/N stoichiometry is modified.

Synthesis

Overall, we capture 25 ppm of the required 50 ppm pCO2atm decline (prior to carbonate compensation) and almost all of the necessary change in the δ13C_DIC gradient (Fig. 7). Using our results in combination with two observational constraints (pCO2atm and the δ13C_DIC gradient), we estimate that ocean circulation (CircA) explains two-thirds of the LGM δ13C_DIC gradient, but <10% of the necessary pCO2atm change (Fig. 7, "LGM-circulation" isoline, −9.48 ppm ‰−1). In contrast, biogeochemical processes explain only one-third of the LGM δ13C_DIC gradient, but would have to account for >90% of the pCO2atm decline (Fig. 7, "LGM-biology" isoline, −238 ppm ‰−1). Because our shortfall of 25 ppm (or 45 ppm when reduced terrestrial carbon storage is considered; Sigman and Boyle, 2000) lies along the "LGM-biology" isoline in Fig. 7, we suggest it reflects additional biogeochemical processes, which can be amplified by the LGM ocean circulation. Ocean ventilation must have declined at the LGM in order to satisfy geochemical proxy constraints (in particular δ13C_DIC and 14C), but because of the concomitantly reduced biological pump, our OGCBM suggests a lesser role for circulation changes in governing LGM pCO2atm. Accordingly, the integrated impact of all plausible changes tested in this study (circulation, dust and stoichiometry) cannot account for the entire amplitude of the observed LGM change in pCO2atm. This presents a challenge to our prevailing understanding of the mechanisms behind LGM changes in pCO2atm and the global carbon cycle, and necessitates additional biogeochemical mechanisms. For example, better representing additional biogeochemical mechanisms, such as changes in nutrient supply from land (which remain constant in our scenarios) or shifts in phytoplankton species composition/stoichiometry that impact the export of carbon and/or alkalinity (which are represented in a relatively simplistic fashion even by our complex OGCBM), could reconcile tropical LGM export observations and lower pCO2atm further. The spatial and mechanistic detail of our OGCBM is much greater than in prior LGM studies using box models and IC-models. Nevertheless, we may require even more complex OGCBMs, representing additional biogeochemical processes (both biotic and abiotic) as well as those dominant at high latitudes (ocean ventilation and dust deposition), in order to explain LGM pCO2atm.

Fig. 7. A summary of the relationship between the globally averaged change in the upper (0-2000 m) to deep (3000-5000 m) gradient in δ13C_DIC (‰) and the resulting change in pCO2atm (ppm) at the LGM for various OGCBM simulations. The black diamond denoted "observations" represents the observational constraints on the change in the δ13C_DIC gradient and pCO2atm. Light grey dotted lines represent the "LGM-circulation", "PI-biology" and "LGM-biology" isolines, which have slopes of −9.48, −529.4 and −237.68 ppm ‰−1, respectively. Changes in circulation are necessary to shift the LGM δ13C_DIC gradient to an appropriate value (x axis), but only changes in biological production (mediated by dust and/or changes in phytoplankton stoichiometry) can reduce pCO2atm markedly (y axis). The amplification effect of reduced ventilation on LGM dust is illustrated by the difference between the slopes of the "PI-biology" and "LGM-biology" isolines. Our shortfall reflects as yet unconstrained additional biogeochemical changes (see discussion in the text). We note that our depth ranges for the δ13C_DIC gradient were chosen to examine the effect of circulation on δ13C_DIC, rather than the biological pump.
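The isoline decomposition in the synthesis can be sanity-checked with a back-of-envelope calculation. This assumes the two contributions add linearly along their respective isolines (a simplification of the figure, not an OGCBM diagnostic); conveniently, the absolute magnitude of the δ13C_DIC gradient change cancels when taking fractions:

```python
# Fractional pCO2atm contributions implied by the isoline slopes and the
# stated gradient shares (two-thirds circulation, one-third biology).
s_circ = -9.48     # ppm per permil, "LGM-circulation" isoline
s_bio = -237.68    # ppm per permil, "LGM-biology" isoline
f_circ_grad = 2.0 / 3.0   # fraction of the d13C gradient from circulation
f_bio_grad = 1.0 / 3.0    # fraction from biogeochemistry

# pCO2 weight = (isoline slope) x (gradient share); the gradient magnitude
# itself cancels in the ratio below.
w_circ = s_circ * f_circ_grad
w_bio = s_bio * f_bio_grad
frac_circ_pco2 = w_circ / (w_circ + w_bio)
print(round(frac_circ_pco2, 3))  # 0.074: circulation drives <10% of the pCO2 change
```

The result (~7%) is consistent with the <10% circulation contribution quoted in the synthesis.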
Conclusions

In conclusion, we have used δ13C_DIC, 14C and models of differing complexity to constrain the LGM ocean circulation; while circulation changes can satisfy two-thirds of the LGM δ13C_DIC signal, biogeochemical processes were responsible for >90% of the pCO2atm reduction (Fig. 3). We concur with previous studies that NA ventilation was reduced (Lynch-Stieglitz et al., 2007), but quantify its reduction. We also suggest that the impact of reduced NA ventilation was comparatively local and must have been accompanied by lesser SO ventilation to reconcile the δ13C_DIC and 14C observations. Such changes may have been brought about by changes in freshwater input or westerly wind patterns. Constraining the general nature of the LGM circulation explains the δ13C_DIC observations, but biogeochemical changes (including increased dust deposition and modified phytoplankton stoichiometry) lowered pCO2atm. More work is required to understand the biogeochemical mechanisms (beyond dust and stoichiometry) behind the final 25-45 ppm change in pCO2atm at the LGM that is required before carbonate compensation.
/* * Copyright (C) 2006, 2008 Apple Inc. All rights reserved. * Copyright (C) 2009 Google Inc. All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions * are met: * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in the * documentation and/or other materials provided with the distribution. * * THIS SOFTWARE IS PROVIDED BY APPLE COMPUTER, INC. ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE COMPUTER, INC. OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
*/ #ifndef THIRD_PARTY_BLINK_RENDERER_PLATFORM_LOADER_FETCH_RESOURCE_RESPONSE_H_ #define THIRD_PARTY_BLINK_RENDERER_PLATFORM_LOADER_FETCH_RESOURCE_RESPONSE_H_ #include <memory> #include <utility> #include "base/memory/scoped_refptr.h" #include "base/optional.h" #include "base/time/time.h" #include "services/network/public/mojom/cross_origin_embedder_policy.mojom-shared.h" #include "services/network/public/mojom/fetch_api.mojom-shared.h" #include "third_party/blink/public/platform/web_url_response.h" #include "third_party/blink/renderer/platform/network/http_header_map.h" #include "third_party/blink/renderer/platform/network/http_parsers.h" #include "third_party/blink/renderer/platform/platform_export.h" #include "third_party/blink/renderer/platform/weborigin/kurl.h" #include "third_party/blink/renderer/platform/wtf/allocator/allocator.h" #include "third_party/blink/renderer/platform/wtf/ref_counted.h" #include "third_party/blink/renderer/platform/wtf/vector.h" namespace blink { class ResourceLoadTiming; struct ResourceLoadInfo; // A ResourceResponse is a "response" object used in blink. Conceptually // it is https://fetch.spec.whatwg.org/#concept-response, but it contains // a lot of blink specific fields. WebURLResponse is the "public version" // of this class and public classes (i.e., classes in public/platform) use it. // // This class is thread-bound. Do not copy/pass an instance across threads. 
class PLATFORM_EXPORT ResourceResponse final { USING_FAST_MALLOC(ResourceResponse); public: enum HTTPVersion : uint8_t { kHTTPVersionUnknown, kHTTPVersion_0_9, kHTTPVersion_1_0, kHTTPVersion_1_1, kHTTPVersion_2_0 }; enum CTPolicyCompliance { kCTPolicyComplianceDetailsNotAvailable, kCTPolicyComplies, kCTPolicyDoesNotComply }; class PLATFORM_EXPORT SignedCertificateTimestamp final { DISALLOW_NEW(); public: SignedCertificateTimestamp(String status, String origin, String log_description, String log_id, int64_t timestamp, String hash_algorithm, String signature_algorithm, String signature_data) : status_(status), origin_(origin), log_description_(log_description), log_id_(log_id), timestamp_(timestamp), hash_algorithm_(hash_algorithm), signature_algorithm_(signature_algorithm), signature_data_(signature_data) {} explicit SignedCertificateTimestamp( const struct blink::WebURLResponse::SignedCertificateTimestamp&); SignedCertificateTimestamp IsolatedCopy() const; String status_; String origin_; String log_description_; String log_id_; int64_t timestamp_; String hash_algorithm_; String signature_algorithm_; String signature_data_; }; using SignedCertificateTimestampList = WTF::Vector<SignedCertificateTimestamp>; struct SecurityDetails { DISALLOW_NEW(); SecurityDetails(const String& protocol, const String& key_exchange, const String& key_exchange_group, const String& cipher, const String& mac, const String& subject_name, const Vector<String>& san_list, const String& issuer, time_t valid_from, time_t valid_to, const Vector<AtomicString>& certificate, const SignedCertificateTimestampList& sct_list) : protocol(protocol), key_exchange(key_exchange), key_exchange_group(key_exchange_group), cipher(cipher), mac(mac), subject_name(subject_name), san_list(san_list), issuer(issuer), valid_from(valid_from), valid_to(valid_to), certificate(certificate), sct_list(sct_list) {} // All strings are human-readable values. 
String protocol; // keyExchange is the empty string if not applicable for the connection's // protocol. String key_exchange; // keyExchangeGroup is the empty string if not applicable for the // connection's key exchange. String key_exchange_group; String cipher; // mac is the empty string when the connection cipher suite does not // have a separate MAC value (i.e. if the cipher suite is AEAD). String mac; String subject_name; Vector<String> san_list; String issuer; time_t valid_from; time_t valid_to; // DER-encoded X509Certificate certificate chain. Vector<AtomicString> certificate; SignedCertificateTimestampList sct_list; }; ResourceResponse(); explicit ResourceResponse(const KURL& current_request_url); ResourceResponse(const ResourceResponse&); ResourceResponse& operator=(const ResourceResponse&); ~ResourceResponse(); bool IsNull() const { return is_null_; } bool IsHTTP() const; // The current request URL for this resource (the URL after redirects). // Corresponds to: // https://fetch.spec.whatwg.org/#concept-request-current-url // // Beware that this might not be the same the response URL, so it is usually // incorrect to use this in security checks. Use GetType() to determine origin // sameness. // // Specifically, if a service worker responded to the request for this // resource, it may have fetched an entirely different URL and responded with // that resource. WasFetchedViaServiceWorker() and ResponseUrl() can be used // to determine whether and how a service worker responded to the request. // Example service worker code: // // onfetch = (event => { // if (event.request.url == 'https://abc.com') // event.respondWith(fetch('https://def.com')); // }); // // If this service worker responds to an "https://abc.com" request, then for // the resulting ResourceResponse, CurrentRequestUrl() is "https://abc.com", // WasFetchedViaServiceWorker() is true, and ResponseUrl() is // "https://def.com". 
const KURL& CurrentRequestUrl() const; void SetCurrentRequestUrl(const KURL&); // The response URL of this resource. Corresponds to: // https://fetch.spec.whatwg.org/#concept-response-url // // This returns the same URL as CurrentRequestUrl() unless a service worker // responded to the request. See the comments for that function. KURL ResponseUrl() const; // Returns true if this response is the result of a service worker // effectively calling `evt.respondWith(fetch(evt.request))`. Specifically, // it returns false for synthetic constructed responses, responses fetched // from different URLs, and responses produced by cache_storage. bool IsServiceWorkerPassThrough() const; const AtomicString& MimeType() const; void SetMimeType(const AtomicString&); int64_t ExpectedContentLength() const; void SetExpectedContentLength(int64_t); const AtomicString& TextEncodingName() const; void SetTextEncodingName(const AtomicString&); int HttpStatusCode() const; void SetHttpStatusCode(int); const AtomicString& HttpStatusText() const; void SetHttpStatusText(const AtomicString&); const AtomicString& HttpHeaderField(const AtomicString& name) const; void SetHttpHeaderField(const AtomicString& name, const AtomicString& value); void AddHttpHeaderField(const AtomicString& name, const AtomicString& value); void AddHttpHeaderFieldWithMultipleValues(const AtomicString& name, const Vector<AtomicString>& values); void ClearHttpHeaderField(const AtomicString& name); const HTTPHeaderMap& HttpHeaderFields() const; bool IsAttachment() const; AtomicString HttpContentType() const; // These functions return parsed values of the corresponding response headers. // NaN means that the header was not present or had invalid value. 
bool CacheControlContainsNoCache() const; bool CacheControlContainsNoStore() const; bool CacheControlContainsMustRevalidate() const; bool HasCacheValidatorFields() const; base::Optional<base::TimeDelta> CacheControlMaxAge() const; base::Optional<base::Time> Date() const; base::Optional<base::TimeDelta> Age() const; base::Optional<base::Time> Expires() const; base::Optional<base::Time> LastModified() const; // Will always return values >= 0. base::TimeDelta CacheControlStaleWhileRevalidate() const; unsigned ConnectionID() const; void SetConnectionID(unsigned); bool ConnectionReused() const; void SetConnectionReused(bool); bool WasCached() const; void SetWasCached(bool); ResourceLoadTiming* GetResourceLoadTiming() const; void SetResourceLoadTiming(scoped_refptr<ResourceLoadTiming>); scoped_refptr<ResourceLoadInfo> GetResourceLoadInfo() const; void SetResourceLoadInfo(scoped_refptr<ResourceLoadInfo>); HTTPVersion HttpVersion() const { return http_version_; } void SetHttpVersion(HTTPVersion version) { http_version_ = version; } int RequestId() const { return request_id_; } void SetRequestId(int request_id) { request_id_ = request_id; } bool HasMajorCertificateErrors() const { return has_major_certificate_errors_; } void SetHasMajorCertificateErrors(bool has_major_certificate_errors) { has_major_certificate_errors_ = has_major_certificate_errors; } CTPolicyCompliance GetCTPolicyCompliance() const { return ct_policy_compliance_; } void SetCTPolicyCompliance(CTPolicyCompliance); bool IsLegacyTLSVersion() const { return is_legacy_tls_version_; } void SetIsLegacyTLSVersion(bool value) { is_legacy_tls_version_ = value; } bool TimingAllowPassed() const { return timing_allow_passed_; } void SetTimingAllowPassed(bool value) { timing_allow_passed_ = value; } SecurityStyle GetSecurityStyle() const { return security_style_; } void SetSecurityStyle(SecurityStyle security_style) { security_style_ = security_style; } const base::Optional<SecurityDetails>& GetSecurityDetails() const { 
return security_details_; } void SetSecurityDetails(const String& protocol, const String& key_exchange, const String& key_exchange_group, const String& cipher, const String& mac, const String& subject_name, const Vector<String>& san_list, const String& issuer, time_t valid_from, time_t valid_to, const Vector<AtomicString>& certificate, const SignedCertificateTimestampList& sct_list); int64_t AppCacheID() const { return app_cache_id_; } void SetAppCacheID(int64_t id) { app_cache_id_ = id; } const KURL& AppCacheManifestURL() const { return app_cache_manifest_url_; } void SetAppCacheManifestURL(const KURL& url) { app_cache_manifest_url_ = url; } bool WasFetchedViaSPDY() const { return was_fetched_via_spdy_; } void SetWasFetchedViaSPDY(bool value) { was_fetched_via_spdy_ = value; } // See network::ResourceResponseInfo::was_fetched_via_service_worker. bool WasFetchedViaServiceWorker() const { return was_fetched_via_service_worker_; } void SetWasFetchedViaServiceWorker(bool value) { was_fetched_via_service_worker_ = value; } network::mojom::FetchResponseSource GetServiceWorkerResponseSource() const { return service_worker_response_source_; } void SetServiceWorkerResponseSource( network::mojom::FetchResponseSource value) { service_worker_response_source_ = value; } // See network::ResourceResponseInfo::was_fallback_required_by_service_worker. bool WasFallbackRequiredByServiceWorker() const { return was_fallback_required_by_service_worker_; } void SetWasFallbackRequiredByServiceWorker(bool value) { was_fallback_required_by_service_worker_ = value; } network::mojom::FetchResponseType GetType() const { return response_type_; } void SetType(network::mojom::FetchResponseType value) { response_type_ = value; } // https://html.spec.whatwg.org/C/#cors-same-origin bool IsCorsSameOrigin() const; // https://html.spec.whatwg.org/C/#cors-cross-origin bool IsCorsCrossOrigin() const; // See network::ResourceResponseInfo::url_list_via_service_worker. 
const Vector<KURL>& UrlListViaServiceWorker() const { return url_list_via_service_worker_; } void SetUrlListViaServiceWorker(const Vector<KURL>& url_list) { url_list_via_service_worker_ = url_list; } const String& CacheStorageCacheName() const { return cache_storage_cache_name_; } void SetCacheStorageCacheName(const String& cache_storage_cache_name) { cache_storage_cache_name_ = cache_storage_cache_name; } const Vector<String>& CorsExposedHeaderNames() const { return cors_exposed_header_names_; } void SetCorsExposedHeaderNames(const Vector<String>& header_names) { cors_exposed_header_names_ = header_names; } bool DidServiceWorkerNavigationPreload() const { return did_service_worker_navigation_preload_; } void SetDidServiceWorkerNavigationPreload(bool value) { did_service_worker_navigation_preload_ = value; } base::Time ResponseTime() const { return response_time_; } void SetResponseTime(base::Time response_time) { response_time_ = response_time; } const AtomicString& RemoteIPAddress() const { return remote_ip_address_; } void SetRemoteIPAddress(const AtomicString& value) { remote_ip_address_ = value; } uint16_t RemotePort() const { return remote_port_; } void SetRemotePort(uint16_t value) { remote_port_ = value; } bool WasAlpnNegotiated() const { return was_alpn_negotiated_; } void SetWasAlpnNegotiated(bool was_alpn_negotiated) { was_alpn_negotiated_ = was_alpn_negotiated; } const AtomicString& AlpnNegotiatedProtocol() const { return alpn_negotiated_protocol_; } void SetAlpnNegotiatedProtocol(const AtomicString& value) { alpn_negotiated_protocol_ = value; } net::HttpResponseInfo::ConnectionInfo ConnectionInfo() const { return connection_info_; } void SetConnectionInfo(net::HttpResponseInfo::ConnectionInfo value) { connection_info_ = value; } AtomicString ConnectionInfoString() const; int64_t EncodedDataLength() const { return encoded_data_length_; } void SetEncodedDataLength(int64_t value); int64_t EncodedBodyLength() const { return encoded_body_length_; } void 
SetEncodedBodyLength(int64_t value); int64_t DecodedBodyLength() const { return decoded_body_length_; } void SetDecodedBodyLength(int64_t value); const base::Optional<base::UnguessableToken>& RecursivePrefetchToken() const { return recursive_prefetch_token_; } void SetRecursivePrefetchToken( const base::Optional<base::UnguessableToken>& token) { recursive_prefetch_token_ = token; } unsigned MemoryUsage() const { // average size, mostly due to URL and Header Map strings return 1280; } bool AsyncRevalidationRequested() const { return async_revalidation_requested_; } void SetAsyncRevalidationRequested(bool requested) { async_revalidation_requested_ = requested; } bool NetworkAccessed() const { return network_accessed_; } void SetNetworkAccessed(bool network_accessed) { network_accessed_ = network_accessed; } bool FromArchive() const { return from_archive_; } void SetFromArchive(bool from_archive) { from_archive_ = from_archive; } bool WasAlternateProtocolAvailable() const { return was_alternate_protocol_available_; } void SetWasAlternateProtocolAvailable(bool was_alternate_protocol_available) { was_alternate_protocol_available_ = was_alternate_protocol_available; } bool IsSignedExchangeInnerResponse() const { return is_signed_exchange_inner_response_; } void SetIsSignedExchangeInnerResponse( bool is_signed_exchange_inner_response) { is_signed_exchange_inner_response_ = is_signed_exchange_inner_response; } bool WasInPrefetchCache() const { return was_in_prefetch_cache_; } void SetWasInPrefetchCache(bool was_in_prefetch_cache) { was_in_prefetch_cache_ = was_in_prefetch_cache; } network::mojom::CrossOriginEmbedderPolicyValue GetCrossOriginEmbedderPolicy() const; private: void UpdateHeaderParsedState(const AtomicString& name); KURL current_request_url_; AtomicString mime_type_; int64_t expected_content_length_ = 0; AtomicString text_encoding_name_; unsigned connection_id_ = 0; int http_status_code_ = 0; AtomicString http_status_text_; HTTPHeaderMap http_header_fields_; // 
  // Remote IP address of the socket which fetched this resource.
  AtomicString remote_ip_address_;

  // Remote port number of the socket which fetched this resource.
  uint16_t remote_port_ = 0;

  bool was_cached_ = false;
  bool connection_reused_ = false;
  bool is_null_ = false;
  mutable bool have_parsed_age_header_ = false;
  mutable bool have_parsed_date_header_ = false;
  mutable bool have_parsed_expires_header_ = false;
  mutable bool have_parsed_last_modified_header_ = false;

  // True if the resource was retrieved by the embedder in spite of
  // certificate errors.
  bool has_major_certificate_errors_ = false;

  // The Certificate Transparency policy compliance status of the resource.
  CTPolicyCompliance ct_policy_compliance_ =
      kCTPolicyComplianceDetailsNotAvailable;

  // True if the response was sent over TLS 1.0 or 1.1, which are deprecated
  // and will be removed in the future.
  bool is_legacy_tls_version_ = false;

  // True if the Timing-Allow-Origin check passes.
  // https://fetch.spec.whatwg.org/#concept-response-timing-allow-passed
  bool timing_allow_passed_ = false;

  // The time at which the resource's certificate expires. Null if there was
  // no certificate.
  base::Time cert_validity_start_;

  // Was the resource fetched over SPDY. See http://dev.chromium.org/spdy
  bool was_fetched_via_spdy_ = false;

  // Was the resource fetched over a ServiceWorker.
  bool was_fetched_via_service_worker_ = false;

  // The source of the resource, if it was fetched via ServiceWorker. This is
  // kUnspecified if |was_fetched_via_service_worker| is false.
  network::mojom::FetchResponseSource service_worker_response_source_ =
      network::mojom::FetchResponseSource::kUnspecified;

  // Was the fallback request with skip service worker flag required.
  bool was_fallback_required_by_service_worker_ = false;

  // True if service worker navigation preload was performed due to
  // the request for this resource.
  bool did_service_worker_navigation_preload_ = false;

  // True if this resource is stale and needs async revalidation. Will only
  // possibly be set if the load_flags indicated SUPPORT_ASYNC_REVALIDATION.
  bool async_revalidation_requested_ = false;

  // True if this resource is from an inner response of a signed exchange.
  // https://wicg.github.io/webpackage/draft-yasskin-http-origin-signed-responses.html
  bool is_signed_exchange_inner_response_ = false;

  // True if this resource is served from the prefetch cache.
  bool was_in_prefetch_cache_ = false;

  // True if this resource was loaded from the network.
  bool network_accessed_ = false;

  // True if this resource was loaded from a MHTML archive.
  bool from_archive_ = false;

  // True if response could use alternate protocol.
  bool was_alternate_protocol_available_ = false;

  // True if the response was delivered after ALPN is negotiated.
  bool was_alpn_negotiated_ = false;

  // https://fetch.spec.whatwg.org/#concept-response-type
  network::mojom::FetchResponseType response_type_ =
      network::mojom::FetchResponseType::kDefault;

  // HTTP version used in the response, if known.
  HTTPVersion http_version_ = kHTTPVersionUnknown;

  // Request id given to the resource by the WebUrlLoader.
  int request_id_ = 0;

  // The security style of the resource.
  // This only contains a valid value when the DevTools Network domain is
  // enabled. (Otherwise, it contains a default value of Unknown.)
  SecurityStyle security_style_ = SecurityStyle::kUnknown;

  // Security details of this request's connection.
  base::Optional<SecurityDetails> security_details_;

  scoped_refptr<ResourceLoadTiming> resource_load_timing_;
  scoped_refptr<ResourceLoadInfo> resource_load_info_;

  mutable CacheControlHeader cache_control_header_;

  mutable base::Optional<base::TimeDelta> age_;
  mutable base::Optional<base::Time> date_;
  mutable base::Optional<base::Time> expires_;
  mutable base::Optional<base::Time> last_modified_;

  // The id of the appcache this response was retrieved from, or zero if
  // the response was not retrieved from an appcache.
  int64_t app_cache_id_ = 0;

  // The manifest url of the appcache this response was retrieved from, if any.
  // Note: only valid for main resource responses.
  KURL app_cache_manifest_url_;

  // The URL list of the response which was fetched by the ServiceWorker.
  // This is empty if the response was created inside the ServiceWorker.
  Vector<KURL> url_list_via_service_worker_;

  // The cache name of the CacheStorage from where the response is served via
  // the ServiceWorker. Null if the response isn't from the CacheStorage.
  String cache_storage_cache_name_;

  // The headers that should be exposed according to CORS. Only guaranteed
  // to be set if the response was fetched by a ServiceWorker.
  Vector<String> cors_exposed_header_names_;

  // The time at which the response headers were received. For cached
  // responses, this time could be "far" in the past.
  base::Time response_time_;

  // ALPN negotiated protocol of the socket which fetched this resource.
  AtomicString alpn_negotiated_protocol_;

  // Information about the type of connection used to fetch this resource.
  net::HttpResponseInfo::ConnectionInfo connection_info_ =
      net::HttpResponseInfo::ConnectionInfo::CONNECTION_INFO_UNKNOWN;

  // Size of the response in bytes prior to decompression.
  int64_t encoded_data_length_ = 0;

  // Size of the response body in bytes prior to decompression.
  int64_t encoded_body_length_ = 0;

  // Sizes of the response body in bytes after any content-encoding is
  // removed.
  int64_t decoded_body_length_ = 0;

  // This is propagated from the browser process's PrefetchURLLoader on
  // cross-origin prefetch responses. It is used to pass the token along to
  // preload header requests from these responses.
  base::Optional<base::UnguessableToken> recursive_prefetch_token_;
};

}  // namespace blink

#endif  // THIRD_PARTY_BLINK_RENDERER_PLATFORM_LOADER_FETCH_RESOURCE_RESPONSE_H_
// Returns a value "top", for which datums should be on the pMore side if t >= top.
int DirPDTreeNode::SortNodeForSplit()
{
  int top = NData;
  static int callNumber = 0;
  callNumber++;
  vct3 Ck;
  vct3 Ct;
  vct3 r = F.Rotation().Row(0);
  double px = F.Translation()[0];
  for (int k = 0; k < top; k++) {
    Ck = pMyTree->DatumSortPoint(Datum(k));
    double kx = r * Ck + px;
    if (kx > 0) {
      while ((--top) > k) {
        Ct = pMyTree->DatumSortPoint(Datum(top));
        double tx = r * Ct + px;
        if (tx <= 0) {
          int Temp = Datum(k);
          Datum(k) = Datum(top);
          Datum(top) = Temp;
          break;
        }
      }
    }
  }
  return top;
}
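The routine above is a two-pointer partition: it sweeps datums from the front, and whenever a datum's sort point falls on the positive side of the splitting plane it searches from the back for one on the non-positive side to swap with, finally returning the boundary index. The same invariant can be sketched in Python (illustrative only; `sort_key` is a hypothetical stand-in for the projection `r * Ck + px`):

```python
def partition_for_split(data, sort_key):
    """Two-pointer partition mirroring SortNodeForSplit: afterwards every
    element with sort_key > 0 sits at an index >= top, and top is returned."""
    top = len(data)
    k = 0
    while k < top:
        if sort_key(data[k]) > 0:
            # Scan from the back for an element that belongs in front.
            while True:
                top -= 1          # mirrors the C++ pre-decrement (--top)
                if top <= k:
                    break
                if sort_key(data[top]) <= 0:
                    data[k], data[top] = data[top], data[k]
                    break
        k += 1
    return top
```

For example, partitioning `[1, -2, 3, -4]` with the identity key returns `2` and leaves the non-positive values in the first two slots.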
/** Encrypts/decrypts sent/received data by running the data transformers against it; wraps
 * an inner socket to which the resulting data is sent, or from which data is read. */
public class OBSocketImpl implements ISocketTL {

    ISocketFactory _innerFactory = new TCPSocketFactory();
    TLAddress _addr;
    ISocketTL _socket;
    InputStream _inputStream;
    OutputStream _outputStream;
    boolean _socketCameFromServerSocket = false;
    List<IDataTransformer> _dataTransformerList;

    private String _debugStr;

    private void instantiateTransformers(List<IDataTransformer> transformers) {
        _dataTransformerList = new ArrayList<IDataTransformer>();
        for (IDataTransformer dt : transformers) {
            _dataTransformerList.add(dt.instantiateDataTransformer());
        }
    }

    public OBSocketImpl(List<IDataTransformer> transformers, ISocketFactory factory) throws IOException {
        _innerFactory = factory;
        instantiateTransformers(transformers);
    }

    /** Uses the default factory (TCP). */
    public OBSocketImpl(TLAddress address, List<IDataTransformer> transformers) throws IOException {
        _addr = address;
        instantiateTransformers(transformers);
        connect(address);
    }

    public OBSocketImpl(TLAddress address, List<IDataTransformer> transformers, ISocketFactory factory) throws IOException {
        _addr = address;
        _innerFactory = factory;
        instantiateTransformers(transformers);
        connect(address);
    }

    // Called by server-socket only
    protected OBSocketImpl(List<IDataTransformer> transformers, ISocketTL socket) throws IOException {
        instantiateTransformers(transformers);
        _socket = socket;
        _addr = _socket.getAddress();
        _inputStream = new OBInputStream(_socket.getInputStream(), _dataTransformerList);
        _outputStream = new OBOutputStream(_socket.getOutputStream(), _dataTransformerList);
        _socketCameFromServerSocket = true;
    }

    @Override
    public void close() throws IOException {
        _socket.close();
    }

    @Override
    public void connect(TLAddress endpoint, int timeout) throws IOException {
        // TODO: LOWER - Implement timeout
        connect(endpoint);
    }

    @Override
    public void connect(TLAddress endpoint) throws IOException {
        _socketCameFromServerSocket = false;
        _addr = endpoint;
        _socket = _innerFactory.instantiateSocket(endpoint);
        _inputStream = new OBInputStream(_socket.getInputStream(), _dataTransformerList);
        _outputStream = new OBOutputStream(_socket.getOutputStream(), _dataTransformerList);
    }

    @Override
    public InputStream getInputStream() throws IOException {
        return _inputStream;
    }

    @Override
    public OutputStream getOutputStream() throws IOException {
        return _outputStream;
    }

    @Override
    public boolean isClosed() {
        return _socket.isClosed();
    }

    @Override
    public boolean isConnected() {
        return _socket.isConnected();
    }

    @Override
    public TLAddress getAddress() {
        return _addr;
    }

    @Override
    public String toString() {
        return "[OBSocketImpl: fromServerSocket:" + _socketCameFromServerSocket
                + " addr:" + _addr + ", inner:{" + super.toString() + "} ]";
    }

    @Override
    public void setDebugStr(String s) {
        _debugStr = s;
    }

    @Override
    public String getDebugStr() {
        return _debugStr;
    }
}
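The key idea in this class is that the wrapped input/output streams run every configured transformer over the byte stream, with the output path applying them in order and the input path undoing them in reverse. A minimal Python sketch of that pipelining (illustrative only; `XorTransformer` is a toy hypothetical transformer, not the real `IDataTransformer` API):

```python
class XorTransformer:
    """Toy reversible transformer; a real IDataTransformer might
    encrypt or compress instead. XOR is its own inverse."""

    def __init__(self, key):
        self.key = key

    def transform(self, data):
        return bytes(b ^ self.key for b in data)


def write_through(transformers, data):
    # Output path (OBOutputStream): apply each transformer in order
    # before handing the bytes to the inner socket.
    for t in transformers:
        data = t.transform(data)
    return data


def read_through(transformers, data):
    # Input path (OBInputStream): undo the transforms in reverse order.
    for t in reversed(transformers):
        data = t.transform(data)
    return data
```

A round trip through the same transformer chain recovers the original bytes, which is the contract the wrapped streams rely on.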
import * as boom from "boom";
import * as hapi from "hapi";
import * as Joi from "joi";
import { IConfigureOptions, IWebhook, IWebhookEvent, PayPalRestApi, WebhookModel } from "paypal-rest-api";
import * as pkg from "../package.json";

export interface IHapiPayPalOptions {
  sdk: IConfigureOptions;
  routes?: string[];
  webhook?: IWebhook;
}

export interface IRouteConfiguration extends hapi.RouteConfiguration {
  handler: (request: hapi.Request, reply: hapi.ReplyNoContinue) => Promise<any>;
  custom?: (request: hapi.Request, reply: hapi.ReplyNoContinue, error: any, response: any) => Promise<any>;
}

export class HapiPayPal {

  public static webhookEvents = new Set([
    "BILLING.PLAN.CREATED", "BILLING.PLAN.UPDATED",
    "BILLING.SUBSCRIPTION.CANCELLED", "BILLING.SUBSCRIPTION.CREATED",
    "BILLING.SUBSCRIPTION.RE-ACTIVATED", "BILLING.SUBSCRIPTION.SUSPENDED",
    "BILLING.SUBSCRIPTION.UPDATED",
    "CUSTOMER.DISPUTE.CREATED", "CUSTOMER.DISPUTE.RESOLVED", "CUSTOMER.DISPUTE.UPDATED",
    "IDENTITY.AUTHORIZATION-CONSENT.REVOKED",
    "INVOICING.INVOICE.CANCELLED", "INVOICING.INVOICE.CREATED", "INVOICING.INVOICE.PAID",
    "INVOICING.INVOICE.REFUNDED", "INVOICING.INVOICE.UPDATED",
    "MERCHANT.ONBOARDING.COMPLETED",
    "PAYMENT.AUTHORIZATION.CREATED", "PAYMENT.AUTHORIZATION.VOIDED",
    "PAYMENT.CAPTURE.COMPLETED", "PAYMENT.CAPTURE.DENIED", "PAYMENT.CAPTURE.PENDING",
    "PAYMENT.CAPTURE.REFUNDED", "PAYMENT.CAPTURE.REVERSED",
    "PAYMENT.ORDER.CANCELLED", "PAYMENT.ORDER.CREATED",
    "PAYMENT.PAYOUTS-ITEM.BLOCKED", "PAYMENT.PAYOUTS-ITEM.CANCELED",
    "PAYMENT.PAYOUTS-ITEM.DENIED", "PAYMENT.PAYOUTS-ITEM.FAILED",
    "PAYMENT.PAYOUTS-ITEM.HELD", "PAYMENT.PAYOUTS-ITEM.REFUNDED",
    "PAYMENT.PAYOUTS-ITEM.RETURNED", "PAYMENT.PAYOUTS-ITEM.SUCCEEDED",
    "PAYMENT.PAYOUTS-ITEM.UNCLAIMED",
    "PAYMENT.PAYOUTSBATCH.DENIED", "PAYMENT.PAYOUTSBATCH.PROCESSING",
    "PAYMENT.PAYOUTSBATCH.SUCCESS",
    "PAYMENT.SALE.COMPLETED", "PAYMENT.SALE.DENIED", "PAYMENT.SALE.PENDING",
    "PAYMENT.SALE.REFUNDED", "PAYMENT.SALE.REVERSED",
    "VAULT.CREDIT-CARD.CREATED", "VAULT.CREDIT-CARD.DELETED", "VAULT.CREDIT-CARD.UPDATED",
  ]);

  public routes: Map<string, IRouteConfiguration> = new Map();
  public webhook: WebhookModel;
  // private routes: Map<string, InternalRouteConfiguration> = new Map();
  private _server: hapi.Server;
  private _paypal: PayPalRestApi;

  constructor() {
    this.register.attributes = {
      pkg,
    };

    // Setup Routes
    this.routes.set("paypal_payment_create", {
      config: { id: "paypal_payment_create" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.payment.api.create({ body: request.payload });
        } catch (err) { error = err; }
        this.responseHandler("paypal_payment_create", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/payment",
    });

    this.routes.set("paypal_webhooks_listen", {
      config: {
        id: "paypal_webhooks_listen",
        payload: { parse: false },
      },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          // tslint:disable-next-line:max-line-length
          response = await this.paypal.webhookEvent.verify(this.webhook.model.id, request.headers, request.payload.toString());
          if (response.verification_status !== "SUCCESS") {
            throw new Error("Webhook Verification Error");
          }
          request.payload = JSON.parse(request.payload.toString());
        } catch (err) { error = err; }
        this.responseHandler("paypal_webhooks_listen", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/webhooks/listen",
    });

    this.routes.set("paypal_webhooks_test", {
      config: { id: "paypal_webhooks_test" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.webhookEvent.api.get(request.params.webhookid);
        } catch (err) { error = err; }
        this.responseHandler("paypal_webhooks_test", request, reply, error, response);
      },
      method: "GET",
      path: "/paypal/webhooks/listen/{webhookid}",
    });

    this.routes.set("paypal_invoice_search", {
      config: { id: "paypal_invoice_search" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.search({ body: request.payload });
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_search", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/invoice/search",
    });

    this.routes.set("paypal_invoice_create", {
      config: { id: "paypal_invoice_create" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.create({ body: request.payload });
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_create", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/invoice",
    });

    this.routes.set("paypal_invoice_send", {
      config: { id: "paypal_invoice_send" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.send(request.params.invoiceid, {
            body: request.payload,
          });
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_send", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/invoice/{invoiceid}/send",
    });

    this.routes.set("paypal_invoice_get", {
      config: { id: "paypal_invoice_get" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.get(request.params.invoiceid);
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_get", request, reply, error, response);
      },
      method: "GET",
      path: "/paypal/invoice/{invoiceid}",
    });

    this.routes.set("paypal_invoice_cancel", {
      config: { id: "paypal_invoice_cancel" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.cancel(request.params.invoiceid, {
            body: request.payload,
          });
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_cancel", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/invoice/{invoiceid}/cancel",
    });

    this.routes.set("paypal_invoice_update", {
      config: { id: "paypal_invoice_update" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.update(request.params.invoiceid, {
            body: request.payload,
          });
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_update", request, reply, error, response);
      },
      method: "PUT",
      path: "/paypal/invoice/{invoiceid}",
    });

    this.routes.set("paypal_invoice_remind", {
      config: { id: "paypal_invoice_remind" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.invoice.api.remind(request.params.invoiceid, {
            body: request.payload,
          });
        } catch (err) { error = err; }
        this.responseHandler("paypal_invoice_remind", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/invoice/{invoiceid}/remind",
    });

    this.routes.set("paypal_sale_refund", {
      config: { id: "paypal_sale_refund" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.sale.api.refund(request.params.transactionid, {
            body: request.payload,
          });
        } catch (err) { error = err; }
        this.responseHandler("paypal_sale_refund", request, reply, error, response);
      },
      method: "POST",
      path: "/paypal/sale/{transactionid}/refund",
    });

    this.routes.set("paypal_webhook_list", {
      config: { id: "paypal_webhook_list" },
      handler: async (request, reply) => {
        let response = null;
        let error = null;
        try {
          response = await this.paypal.webhook.api.list({
            qs: request.query,
          });
        } catch (err) { error = err; }
        this.responseHandler("paypal_webhook_list", request, reply, error, response);
      },
      method: "GET",
      path: "/paypal/webhook",
    });
  }

  get paypal() {
    return this._paypal;
  }

  set paypal(paypal) {
    if (!(paypal instanceof PayPalRestApi)) {
      throw new Error("paypal must be instance of PayPalRestApi");
    }
    this._paypal = paypal;
  }

  get server() {
    return this._server;
  }

  set server(server) {
    this._server = server;
  }

  // tslint:disable-next-line:max-line-length
  public register: hapi.PluginFunction<any> = (server: hapi.Server, options: IHapiPayPalOptions, next: hapi.ContinuationFunction) => {
    if (!this.server) {
      this.server = server;
    }
    if (!this.paypal) {
      this.paypal = new PayPalRestApi(options.sdk);
    }
    if (!this.paypal.config.requestOptions.headers["PayPal-Partner-Attribution-Id"]) {
      this.paypal.config.requestOptions.headers["PayPal-Partner-Attribution-Id"] = "Hapi-PayPal";
    }
    this.server.expose("paypal", this.paypal);

    let webhookPromise = Promise.resolve();
    if (options.webhook) {
      const webhooksSchema = Joi.object().keys({
        event_types: Joi.array().min(1).required(),
        url: Joi.string()
          .replace(/^https:\/\/(www\.)?localhost/gi, "").uri({ scheme: ["https"] }).required()
          .error(new Error("Webhook url must be https and cannot be localhost.")),
      });
      const validate = Joi.validate(options.webhook, webhooksSchema);
      if (validate.error) {
        this.server.log("error", validate.error);
      } else {
        webhookPromise = this.enableWebhooks(options.webhook);
      }
    }

    if (options.routes && options.routes.length > 0) {
      options.routes.forEach((route) => {
        const { custom, ...hRoute } = this.routes.get(route);
        this.server.route(hRoute);
        this.server.log(["info", "hapi-paypal", "route"], hRoute);
      });
    }

    webhookPromise.then(() => {
      next();
    });
  }

  // tslint:disable-next-line:max-line-length
  private responseHandler(routeId: string, request: hapi.Request, reply: hapi.ReplyNoContinue, error: Error, response: any) {
    const route = this.routes.get(routeId);
    if (route.custom) {
      return route.custom(request, reply, error, response);
    }
    if (error) {
      const bError = boom.badRequest(error.message);
      bError.reformat();
      return reply(bError);
    }
    return reply(response.body);
  }

  private setupRoutes() {
    /*
    this.routes.set("paypal_webhook_list", {
      config: { id: "paypal_webhook_list" },
      handler: async (request, reply, ohandler) => {
        try {
          const response = await this.paypal.webhook.list({ qs: request.query });
          this.defaultResponseHandler(ohandler, request, reply, null, response);
        } catch (err) {
          this.defaultResponseHandler(ohandler, request, reply, err, null);
        }
      },
      method: "GET",
      path: "/paypal/webhook",
    });
    this.routes.set("paypal_webhook_create", {
      config: { id: "paypal_webhook_create" },
      handler: async (request, reply, ohandler) => {
        try {
          const response = await this.paypal.webhook.api.create({ body: request.payload });
          this.defaultResponseHandler(ohandler, request, reply, null, response);
        } catch (err) {
          this.defaultResponseHandler(ohandler, request, reply, err, null);
        }
      },
      method: "POST",
      path: "/paypal/webhook",
    });
    this.routes.set("paypal_webhook_get", {
      config: { id: "paypal_webhook_get" },
      handler: async (request, reply, ohandler) => {
        try {
          const response = await this.paypal.webhook.get(request.params.webhook_id);
          this.defaultResponseHandler(ohandler, request, reply, null, response);
        } catch (err) {
          this.defaultResponseHandler(ohandler, request, reply, err, null);
        }
      },
      method: "GET",
      path: "/paypal/webhook/{webhook_id}",
    });
    this.routes.set("paypal_webhook_update", {
      config: { id: "paypal_webhook_update" },
      handler: async (request, reply, ohandler) => {
        try {
          const response = await this.paypal.webhook.api.update(request.params.webhook_id, request.payload);
          this.defaultResponseHandler(ohandler, request, reply, null, response);
        } catch (err) {
          this.defaultResponseHandler(ohandler, request, reply, err, null);
        }
      },
      method: "PATCH",
      path: "/paypal/webhook/{webhook_id}",
    });
    this.routes.set("paypal_webhook_event_get", {
      config: { id: "paypal_webhook_event_get" },
      handler: async (request, reply, ohandler) => {
        try {
          const response = await this.paypal.webhookEvent.api.get(request.params.webhookEvent_id);
          this.defaultResponseHandler(ohandler, request, reply, null, response);
        } catch (err) {
          this.defaultResponseHandler(ohandler, request, reply, err, null);
        }
      },
      method: "PATCH",
      path: "/paypal/webhookEvent/{webhookEvent_id}",
    });
    this.routes.set("paypal_webhook_events", {
      config: { id: "paypal_webhook_events" },
      handler: async (request, reply, ohandler) => {
        try {
          const response = await this.paypal.webhook.api.types();
          this.defaultResponseHandler(ohandler, request, reply, null, response);
        } catch (err) {
          this.defaultResponseHandler(ohandler, request, reply, err, null);
        }
      },
      method: "PATCH",
      path: "/paypal/webhookEvent",
    });
    */
  }

  private async enableWebhooks(webhook: IWebhook) {
    try {
      const accountWebHooks = await this.getAccountWebhooks();
      const twebhook = accountWebHooks.filter((hook: IWebhook) => hook.url === webhook.url)[0];
      !twebhook ? await this.createWebhook(webhook) : await this.replaceWebhook({ ...twebhook, ...webhook });
    } catch (err) {
      try {
        if (err.message) {
          const error = JSON.parse(err.message);
          if (error.name !== "WEBHOOK_PATCH_REQUEST_NO_CHANGE") {
            throw err;
          }
        }
      } catch (err) {
        throw err;
      }
    }
    this.server.log(["info", "hapi-paypal", "webhook"], this.webhook.model);
  }

  private async getAccountWebhooks() {
    const response = await this.paypal.webhook.api.list();
    return response.body.webhooks;
  }

  private async createWebhook(webhook: IWebhook) {
    const webhookmodel = new this.paypal.webhook(webhook);
    this.webhook = webhookmodel;
    await webhookmodel.create();
  }

  private async replaceWebhook(webhook: IWebhook) {
    const webhookmodel = new this.paypal.webhook(webhook);
    this.webhook = webhookmodel;
    await webhookmodel.update([
      {
        op: "replace",
        path: "/event_types",
        value: webhook.event_types,
      },
    ]);
  }
}
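The interesting decision logic in `enableWebhooks` is "create if no registered hook shares the URL, otherwise merge the desired settings over the existing entry". That branch can be isolated in a small Python sketch (illustrative only; the dict shape is a simplified stand-in for the PayPal webhook resource):

```python
def ensure_webhook(existing, desired):
    """Decide between creating a new webhook and replacing an existing one.

    existing: list of webhook dicts already registered on the account
    desired:  the webhook configuration we want in place
    Returns ("create" | "replace", resulting_webhook_dict).
    """
    match = next((h for h in existing if h["url"] == desired["url"]), None)
    if match is None:
        return "create", dict(desired)
    # Merge: desired fields win, but the existing id (etc.) is preserved,
    # mirroring the TypeScript spread { ...twebhook, ...webhook }.
    return "replace", {**match, **desired}
```

Matching on URL (rather than id) is what lets the plugin be restarted with a changed `event_types` list and converge on the same registered hook.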
public class xxx {
    public void addConsumer(NettyConsumer consumer) {
        if (compatibleCheck) {
            if (bootstrapConfiguration != consumer.getConfiguration()
                    && !bootstrapConfiguration.compatible(consumer.getConfiguration())) {
                throw new IllegalArgumentException(
                        "Bootstrap configuration must be identical when adding additional consumer: "
                        + consumer.getEndpoint() + " on same port: " + port
                        + ".\n Existing " + bootstrapConfiguration.toStringBootstrapConfiguration()
                        + "\n New " + consumer.getConfiguration().toStringBootstrapConfiguration());
            }
        }
        if (LOG.isDebugEnabled()) {
            NettyHttpConsumer httpConsumer = (NettyHttpConsumer) consumer;
            log.info("bootstrapfactory on port is adding consumer with context path");
        }
    }
}
# -*- coding: utf-8 -*-
from unittest import TestCase
from importlib import reload

from krllint import config
from krllint.reporter import Category, MemoryReporter
from krllint.linter import _create_arg_parser, Linter


class MixedIndentationTestCase(TestCase):
    TEST_INPUT_WITH_SPACES = [" bar\n"]
    TEST_INPUT_WITH_TABS = ["\tbar\n"]
    TEST_RESULT_WITH_SPACES = ["   bar\n"]
    TEST_RESULT_WITH_TABS = ["\tbar\n"]

    def test_rule_with_spaces_allowed(self):
        cli_args = _create_arg_parser().parse_args(["test_rule_with_spaces_allowed"])
        reload(config)
        config.REPORTER = MemoryReporter
        config.INDENT_CHAR = " "
        config.DISABLE = ["bad-indentation"]

        linter = Linter(cli_args, config)
        lines, reporter = linter.lint_lines("test_rule_with_spaces_allowed",
                                            self.TEST_INPUT_WITH_TABS)

        self.assertEqual(reporter.found_issues[Category.CONVENTION], 0)
        self.assertEqual(reporter.found_issues[Category.REFACTOR], 0)
        self.assertEqual(reporter.found_issues[Category.WARNING], 1)
        self.assertEqual(reporter.found_issues[Category.ERROR], 0)
        self.assertEqual(reporter.found_issues[Category.FATAL], 0)
        self.assertEqual(lines, self.TEST_INPUT_WITH_TABS)
        self.assertEqual(reporter.messages[0].line_number, 0)
        self.assertEqual(reporter.messages[0].column, 0)
        self.assertEqual(reporter.messages[0].message, "line contains tab(s)")
        self.assertEqual(reporter.messages[0].code, "mixed-indentation")

    def test_rule_with_tabs_allowed(self):
        cli_args = _create_arg_parser().parse_args(["test_rule_with_tabs_allowed"])
        reload(config)
        config.REPORTER = MemoryReporter
        config.INDENT_CHAR = "\t"
        config.DISABLE = ["bad-indentation"]

        linter = Linter(cli_args, config)
        lines, reporter = linter.lint_lines("test_rule_with_tabs_allowed",
                                            self.TEST_INPUT_WITH_SPACES)

        self.assertEqual(reporter.found_issues[Category.CONVENTION], 0)
        self.assertEqual(reporter.found_issues[Category.REFACTOR], 0)
        self.assertEqual(reporter.found_issues[Category.WARNING], 1)
        self.assertEqual(reporter.found_issues[Category.ERROR], 0)
        self.assertEqual(reporter.found_issues[Category.FATAL], 0)
        self.assertEqual(lines, self.TEST_INPUT_WITH_SPACES)
        self.assertEqual(reporter.messages[0].line_number, 0)
        self.assertEqual(reporter.messages[0].column, 0)
        self.assertEqual(reporter.messages[0].message, "line contains tab(s)")
        self.assertEqual(reporter.messages[0].code, "mixed-indentation")

    def test_rule_with_tabs_allowed_and_fix(self):
        cli_args = _create_arg_parser().parse_args(
            ["--fix", "test_rule_with_tabs_allowed_and_fix"])
        reload(config)
        config.REPORTER = MemoryReporter
        config.INDENT_CHAR = "\t"
        config.INDENT_SIZE = 1
        config.DISABLE = ["bad-indentation"]

        linter = Linter(cli_args, config)
        lines, _ = linter.lint_lines("test_rule_with_tabs_allowed_and_fix",
                                     self.TEST_INPUT_WITH_SPACES)

        self.assertEqual(lines, self.TEST_RESULT_WITH_TABS)

    def test_rule_with_spaces_allowed_and_fix(self):
        cli_args = _create_arg_parser().parse_args(
            ["--fix", "test_rule_with_spaces_allowed_and_fix"])
        reload(config)
        config.REPORTER = MemoryReporter
        config.INDENT_CHAR = " "
        config.INDENT_SIZE = 3
        config.DISABLE = ["bad-indentation"]

        linter = Linter(cli_args, config)
        lines, _ = linter.lint_lines("test_rule_with_spaces_allowed_and_fix",
                                     self.TEST_INPUT_WITH_TABS)

        self.assertEqual(lines, self.TEST_RESULT_WITH_SPACES)
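The rule these tests exercise flags leading whitespace that uses the character the configuration forbids and reports its line and column. A minimal, generic sketch of that check (illustrative only; the real krllint rule's exact message and column semantics may differ):

```python
def find_mixed_indentation(lines, indent_char):
    """Return (line_number, column) pairs where a line's leading
    whitespace contains the disallowed indentation character.

    indent_char is the configured character (" " or "\t"); the
    opposite character is treated as the violation.
    """
    bad = "\t" if indent_char == " " else " "
    issues = []
    for lineno, line in enumerate(lines):
        # Leading whitespace is everything before the stripped remainder.
        leading = line[:len(line) - len(line.lstrip())]
        col = leading.find(bad)
        if col != -1:
            issues.append((lineno, col))
    return issues
```

Run against the fixtures above, a space-indented line under a tab configuration (and vice versa) is reported at line 0, column 0, while a tab-indented line under a tab configuration is clean.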
#pragma once

class Collider {
};
package com.ggj.java.rpc.demo.netty.first.server;

import com.ggj.java.rpc.demo.netty.first.server.annation.RpcService;
import com.ggj.java.rpc.demo.netty.first.server.handle.RpcServerHandler;
import com.ggj.java.rpc.demo.netty.first.server.handle.ServerDecoder;
import com.ggj.java.rpc.demo.netty.first.server.handle.ServerEncoder;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import lombok.extern.slf4j.Slf4j;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * @author gaoguangjin
 */
@Slf4j
public class ServerProvider {

    public static Map<String, Object> cacheSericeMap = new HashMap<>();

    private static AtomicBoolean initStatus = new AtomicBoolean(false);

    public static void init() {
        if (initStatus.compareAndSet(false, true)) {
            scanAndCacheService();
            startNettyServer();
        } else {
            throw new IllegalStateException("can not repeat init");
        }
    }

    private static void startNettyServer() {
        ServerBootstrap serverBootstrap = new ServerBootstrap();
        NioEventLoopGroup boosGroup = new NioEventLoopGroup();
        NioEventLoopGroup workGroup = new NioEventLoopGroup();
        serverBootstrap.group(boosGroup, workGroup).channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) throws Exception {
                        // custom protocol encoder/decoder to handle TCP packet sticking - begin
                        ch.pipeline().addLast(new ServerEncoder());
                        ch.pipeline().addLast(new ServerDecoder());
                        // custom protocol encoder/decoder to handle TCP packet sticking - end
                        ch.pipeline().addLast(new RpcServerHandler());
                    }
                }).bind(8000);
    }

    public static void main(String[] args) {
        init();
    }

    /**
     * Scan the specified directory for class files and cache every @RpcService implementation.
     */
    private static void scanAndCacheService() {
        try {
            log.info("scan rpc class begin");
            ResourcePatternResolver rp = new PathMatchingResourcePatternResolver();
            Resource[] resources = rp.getResources(
                    "classpath:com/ggj/java/rpc/demo/netty/first/server/service/imp/*.class");
            if (resources == null || resources.length == 0) {
                throw new IllegalArgumentException("scan package error");
            }
            for (Resource resource : resources) {
                String className = resource.getFile().getPath().split("classes\\/")[1]
                        .replaceAll("\\/", ".").replaceAll(".class", "");
                Class<?> clazz = Thread.currentThread().getContextClassLoader().loadClass(className);
                if (clazz.getAnnotation(RpcService.class) != null) {
                    Object object = clazz.newInstance();
                    cacheSericeMap.putIfAbsent(object.getClass().getInterfaces()[0].getName(), object);
                }
            }
            log.info("scan rpc class end");
        } catch (Exception e) {
            log.error("", e);
        }
    }
}
package cat.udl.eps.softarch.hello.service;

import cat.udl.eps.softarch.hello.model.Alert;
import cat.udl.eps.softarch.hello.model.User;
import cat.udl.eps.softarch.hello.repository.AlertRepository;
import cat.udl.eps.softarch.hello.repository.UserRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

/**
 * Created by http://rhizomik.net/~roberto/
 */
@Service
public class UserAlertsServiceImpl implements UserAlertsService {
    final Logger logger = LoggerFactory.getLogger(UserAlertsServiceImpl.class);

    @Autowired
    AlertRepository alertRepository;
    @Autowired
    UserRepository userRepository;

    @Transactional(readOnly = true)
    @Override
    public User getUserAndAlerts(String username) {
        User u = userRepository.findOne(username);
        logger.info("User {} has {} alerts", u.getUsername(), u.getAlerts().size());
        return u;
    }

    @Transactional
    @Override
    public Alert addAlertToUser(String username, String weather, String region, Integer regionId) {
        User u = userRepository.findOne(username);
        Alert newAlert = new Alert(u, weather, region);
        u.addAlert(newAlert);
        alertRepository.save(newAlert);
        userRepository.save(u);
        return newAlert;
    }

    @Transactional
    @Override
    public void removeAlertFromUser(long alertId) {
        Alert a = alertRepository.findOne(alertId);
        User u = userRepository.findOne(a.getUser().getUsername());
        if (u != null) {
            u.removeAlert(a);
            userRepository.save(u);
        }
        alertRepository.delete(a);
    }

    @Transactional
    @Override
    public void changeEnabledAlert(long alertId) {
        Alert a = alertRepository.findOne(alertId);
        a.changeEnabled();
        alertRepository.save(a);
    }
}
/**
 * Returns a mutable copy of the source, filtered to only those features whose years fall within the range
 * and which retain enough years to satisfy the threshold.
 */
private List<PointFeature> filterBy(List<PointFeature> source, int minYear, int maxYear, int yearThreshold) {
    List<PointFeature> result = Lists.newArrayList();
    for (PointFeature f : source) {
        Map<String, Integer> yearCopy = Maps.newHashMap();
        for (Map.Entry<String, Integer> e : f.getYearCounts().entrySet()) {
            if (Integer.parseInt(e.getKey()) >= minYear && Integer.parseInt(e.getKey()) <= maxYear) {
                yearCopy.put(e.getKey(), e.getValue());
            }
        }
        if (yearCopy.size() >= yearThreshold) {
            result.add(new PointFeature(f.getLatitude(), f.getLongitude(), yearCopy));
        }
    }
    return result;
}
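The filtering logic is a two-stage test per feature: restrict the per-year counts to the requested range, then keep the feature only if enough distinct years survive. A Python sketch of the same behavior (illustrative only; features are modeled as `(lat, lon, year_counts)` tuples rather than the Java `PointFeature` class):

```python
def filter_by(features, min_year, max_year, year_threshold):
    """Return copies of the features whose year counts, restricted to
    [min_year, max_year], still cover at least year_threshold years."""
    result = []
    for lat, lon, year_counts in features:
        # Stage 1: keep only counts for years inside the range.
        kept = {y: n for y, n in year_counts.items()
                if min_year <= int(y) <= max_year}
        # Stage 2: the feature survives only with enough distinct years.
        if len(kept) >= year_threshold:
            result.append((lat, lon, kept))
    return result
```

Note the threshold applies to the number of distinct years remaining, not to the summed counts, matching the `yearCopy.size()` check above.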
import getProgressA11y from "../getProgressA11y";

describe("getProgressA11y", () => {
  it("should return undefined if the progressing arg is false", () => {
    expect(getProgressA11y("", false)).toBeUndefined();
    expect(getProgressA11y("something-else", false)).toBeUndefined();
  });

  it("should return the correct a11y props when progressing", () => {
    expect(getProgressA11y("some-id", true)).toEqual({
      "aria-busy": true,
      "aria-describedby": "some-id",
    });
  });
});
// ------------------------------------
//
// Code Tree Manager
//
// ------------------------------------

class code_tree_manager {
    label_hasher &    m_lbl_hasher;
    mam_trail_stack & m_trail_stack;
    region &          m_region;

    template<typename OP>
    OP * mk_instr(opcode op, unsigned size) {
        void * mem = m_region.allocate(size);
        OP * r = new (mem) OP;
        r->m_opcode = op;
        r->m_next = nullptr;
#ifdef _PROFILE_MAM
        r->m_counter = 0;
#endif
        return r;
    }

    instruction * mk_init(unsigned n) {
        SASSERT(n >= 1);
        opcode op = n <= 6 ? static_cast<opcode>(INIT1 + n - 1) : INITN;
        if (op == INITN) {
            initn * r = mk_instr<initn>(op, sizeof(initn));
            r->m_num_args = n;
            return r;
        }
        else {
            return mk_instr<instruction>(op, sizeof(instruction));
        }
    }

public:
    code_tree_manager(label_hasher & h, mam_trail_stack & s):
        m_lbl_hasher(h),
        m_trail_stack(s),
        m_region(s.get_region()) {
    }

    code_tree * mk_code_tree(func_decl * lbl, unsigned short num_args, bool filter_candidates) {
        code_tree * r = alloc(code_tree, m_lbl_hasher, lbl, num_args, filter_candidates);
        r->m_root = mk_init(num_args);
        return r;
    }

    joint2 * mk_joint2(func_decl * f, unsigned pos, unsigned reg) {
        return new (m_region) joint2(f, pos, reg);
    }

    compare * mk_compare(unsigned reg1, unsigned reg2) {
        compare * r = mk_instr<compare>(COMPARE, sizeof(compare));
        r->m_reg1 = reg1;
        r->m_reg2 = reg2;
        return r;
    }

    check * mk_check(unsigned reg, enode * n) {
        check * r = mk_instr<check>(CHECK, sizeof(check));
        r->m_reg = reg;
        r->m_enode = n;
        return r;
    }

    filter * mk_filter_core(opcode op, unsigned reg, approx_set s) {
        filter * r = mk_instr<filter>(op, sizeof(filter));
        r->m_reg = reg;
        r->m_lbl_set = s;
        return r;
    }

    filter * mk_filter(unsigned reg, approx_set s) {
        return mk_filter_core(FILTER, reg, s);
    }

    filter * mk_pfilter(unsigned reg, approx_set s) {
        return mk_filter_core(PFILTER, reg, s);
    }

    filter * mk_cfilter(unsigned reg, approx_set s) {
        return mk_filter_core(CFILTER, reg, s);
    }

    get_enode_instr * mk_get_enode(unsigned reg, enode * n) {
        get_enode_instr * s = mk_instr<get_enode_instr>(GET_ENODE, sizeof(get_enode_instr));
        s->m_oreg = reg;
        s->m_enode = n;
        return s;
    }

    choose * mk_choose(choose * alt) {
        choose * r = mk_instr<choose>(CHOOSE, sizeof(choose));
        r->m_alt = alt;
        return r;
    }

    choose * mk_noop() {
        choose * r = mk_instr<choose>(NOOP, sizeof(choose));
        r->m_alt = nullptr;
        return r;
    }

    bind * mk_bind(func_decl * lbl, unsigned short num_args, unsigned ireg, unsigned oreg) {
        SASSERT(num_args >= 1);
        opcode op = num_args <= 6 ? static_cast<opcode>(BIND1 + num_args - 1) : BINDN;
        bind * r = mk_instr<bind>(op, sizeof(bind));
        r->m_label = lbl;
        r->m_num_args = num_args;
        r->m_ireg = ireg;
        r->m_oreg = oreg;
        return r;
    }

    get_cgr * mk_get_cgr(func_decl * lbl, unsigned oreg, unsigned num_args, unsigned const * iregs) {
        SASSERT(num_args >= 1);
        opcode op = num_args <= 6 ? static_cast<opcode>(GET_CGR1 + num_args - 1) : GET_CGRN;
        get_cgr * r = mk_instr<get_cgr>(op, sizeof(get_cgr) + num_args * sizeof(unsigned));
        r->m_label = lbl;
        r->m_lbl_set.insert(m_lbl_hasher(lbl));
        r->m_oreg = oreg;
        r->m_num_args = num_args;
        memcpy(r->m_iregs, iregs, sizeof(unsigned) * num_args);
        return r;
    }

    is_cgr * mk_is_cgr(func_decl * lbl, unsigned ireg, unsigned num_args, unsigned const * iregs) {
        SASSERT(num_args >= 1);
        is_cgr * r = mk_instr<is_cgr>(IS_CGR, sizeof(is_cgr) + num_args * sizeof(unsigned));
        r->m_label = lbl;
        r->m_ireg = ireg;
        r->m_num_args = num_args;
        memcpy(r->m_iregs, iregs, sizeof(unsigned) * num_args);
        return r;
    }

    yield * mk_yield(quantifier * qa, app * pat, unsigned num_bindings, unsigned * bindings) {
        SASSERT(num_bindings >= 1);
        opcode op = num_bindings <= 6 ? static_cast<opcode>(YIELD1 + num_bindings - 1) : YIELDN;
        yield * y = mk_instr<yield>(op, sizeof(yield) + num_bindings * sizeof(unsigned));
        y->m_qa = qa;
        y->m_pat = pat;
        y->m_num_bindings = num_bindings;
        memcpy(y->m_bindings, bindings, sizeof(unsigned) * num_bindings);
        return y;
    }

    cont * mk_cont(func_decl * lbl, unsigned short num_args, unsigned oreg, approx_set const & s, enode * const * joints) {
        SASSERT(num_args >= 1);
        cont * r = mk_instr<cont>(CONTINUE, sizeof(cont) + num_args * sizeof(enode*));
        r->m_label = lbl;
        r->m_num_args = num_args;
        r->m_oreg = oreg;
        r->m_lbl_set = s;
        memcpy(r->m_joints, joints, num_args * sizeof(enode *));
        return r;
    }

    void set_next(instruction * instr, instruction * new_next) {
        m_trail_stack.push(mam_value_trail<instruction*>(instr->m_next));
        instr->m_next = new_next;
    }

    void save_num_regs(code_tree * tree) {
        m_trail_stack.push(mam_value_trail<unsigned>(tree->m_num_regs));
    }

    void save_num_choices(code_tree * tree) {
        m_trail_stack.push(mam_value_trail<unsigned>(tree->m_num_choices));
    }

    void insert_new_lbl_hash(filter * instr, unsigned h) {
        m_trail_stack.push(mam_value_trail<approx_set>(instr->m_lbl_set));
        instr->m_lbl_set.insert(h);
    }
};
Silicon Valley is throwing its support behind Hillary Clinton. On Thursday, Clinton's campaign sent out a list of endorsements from business leaders, including big names from Facebook, Netflix, Airbnb, and Alphabet (Google's parent company), as well as prominent venture capitalists.

Netflix CEO Reed Hastings went one step further in his support and put out a statement in favor of Clinton. "Trump would destroy much of what is great about America," Hastings said, according to Politico. "Hillary Clinton is the strong leader we need, and it's important that Trump lose by a landslide to reject what he stands for."

Here is the full list of supporters from the tech industry:

Sheryl Sandberg, chief operating officer of Facebook
Eric Schmidt, executive chairman of Alphabet
Reed Hastings, founder and CEO of Netflix
Drew Houston, founder and CEO of Dropbox
Anne Wojcicki, cofounder and CEO of 23andMe
Brook Byers, partner at Kleiner Perkins Caufield and Byers
John Doerr, partner at Kleiner Perkins Caufield and Byers
Reid Hoffman, partner at Greylock
Peter Chernin, CEO of The Chernin Group
Nathan Blecharczyk, cofounder and CTO of Airbnb
Brian Chesky, cofounder and CEO of Airbnb
Joe Gebbia, cofounder and CPO of Airbnb
Irwin Jacobs, founding chairman and CEO emeritus of Qualcomm
Paul Jacobs, executive chairman of Qualcomm
David Karp, founder and CEO of Tumblr
Aaron Levie, cofounder and CEO of Box
Mark Pincus, cofounder of Zynga
Jeremy Stoppelman, cofounder and CEO of Yelp
Barry Diller, chairman and senior executive of IAC and Expedia
Candy Ergen, cofounder of DISH Network

Additional reporting by Kif Leswing.
/**
 * Test for illegal mappings between collection types, iterable and non-iterable types etc.
 *
 * @author Gunnar Morling
 */
@RunWith(AnnotationProcessorTestRunner.class)
public class ErroneousCollectionMappingTest {

    @Test
    @IssueKey("6")
    @WithClasses({ ErroneousCollectionToNonCollectionMapper.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = ErroneousCollectionToNonCollectionMapper.class,
                kind = Kind.ERROR,
                line = 28,
                messageRegExp = "Can't generate mapping method from iterable type to non-iterable type"),
            @Diagnostic(type = ErroneousCollectionToNonCollectionMapper.class,
                kind = Kind.ERROR,
                line = 30,
                messageRegExp = "Can't generate mapping method from non-iterable type to iterable type")
        }
    )
    public void shouldFailToGenerateImplementationBetweenCollectionAndNonCollection() {
    }

    @Test
    @IssueKey("729")
    @WithClasses({ ErroneousCollectionToPrimitivePropertyMapper.class, Source.class, Target.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = ErroneousCollectionToPrimitivePropertyMapper.class,
                kind = Kind.ERROR,
                line = 26,
                messageRegExp = "Can't map property \"java.util.List<java.lang.String> strings\" to \"int strings\". "
                    + "Consider to declare/implement a mapping method: \"int map\\(java.util.List<java.lang.String>"
                    + " value\\)\"")
        }
    )
    public void shouldFailToGenerateImplementationBetweenCollectionAndPrimitive() {
    }

    @Test
    @IssueKey("417")
    @WithClasses({ EmptyItererableMappingMapper.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = EmptyItererableMappingMapper.class,
                kind = Kind.ERROR,
                line = 35,
                messageRegExp = "'nullValueMappingStrategy','dateformat', 'qualifiedBy' and 'elementTargetType' are "
                    + "undefined in @IterableMapping, define at least one of them.")
        }
    )
    public void shouldFailOnEmptyIterableAnnotation() {
    }

    @Test
    @IssueKey("417")
    @WithClasses({ EmptyMapMappingMapper.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = EmptyMapMappingMapper.class,
                kind = Kind.ERROR,
                line = 35,
                messageRegExp = "'nullValueMappingStrategy', 'keyDateFormat', 'keyQualifiedBy', 'keyTargetType', "
                    + "'valueDateFormat', 'valueQualfiedBy' and 'valueTargetType' are all undefined in @MapMapping, "
                    + "define at least one of them.")
        }
    )
    public void shouldFailOnEmptyMapAnnotation() {
    }

    @Test
    @IssueKey("459")
    @WithClasses({ ErroneousCollectionNoElementMappingFound.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = ErroneousCollectionNoElementMappingFound.class,
                kind = Kind.ERROR,
                line = 37,
                messageRegExp = "Can't map Collection element \".*AttributedString attributedString\" to \".*String string\". "
                    + "Consider to declare/implement a mapping method: \".*String map(.*AttributedString value)")
        }
    )
    public void shouldFailOnNoElementMappingFound() {
    }

    @Test
    @IssueKey("459")
    @WithClasses({ ErroneousCollectionNoKeyMappingFound.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = ErroneousCollectionNoKeyMappingFound.class,
                kind = Kind.ERROR,
                line = 37,
                messageRegExp = "Can't map Map key \".*AttributedString attributedString\" to \".*String string\". "
                    + "Consider to declare/implement a mapping method: \".*String map(.*AttributedString value)")
        }
    )
    public void shouldFailOnNoKeyMappingFound() {
    }

    @Test
    @IssueKey("459")
    @WithClasses({ ErroneousCollectionNoValueMappingFound.class })
    @ExpectedCompilationOutcome(
        value = CompilationResult.FAILED,
        diagnostics = {
            @Diagnostic(type = ErroneousCollectionNoValueMappingFound.class,
                kind = Kind.ERROR,
                line = 37,
                messageRegExp = "Can't map Map value \".*AttributedString attributedString\" to \".*String string\". "
                    + "Consider to declare/implement a mapping method: \".*String map(.*AttributedString value)")
        }
    )
    public void shouldFailOnNoValueMappingFound() {
    }
}
Analysis of Human Activity Recognition using Deep Learning

The deluge of data keeps growing as new technologies appear daily, and these advancements have also driven growth in fields such as Robotics and the Internet of Things (IoT). This paper draws a comparison between the usage and accuracy of different Human Activity Recognition (HAR) models, focusing on two: the 2-D Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). To maintain the consistency and credibility of the survey, both models are trained on the same dataset, acquired from a public website, containing information collected by wearable sensors. The models are compared using their accuracy and confusion matrices to check true and false positives; the paper then discusses the various aspects and fields where the two models can be used, separately and together, in the wider field of Human Activity Recognition on image data. The experimental results indicate that both the CNN and the LSTM model are equipped for different situations, yet the LSTM model generally appears to be more consistent than the CNN.
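The accuracy and confusion-matrix comparison described above can be sketched in plain NumPy. This is a minimal illustration; the three activity classes and the predictions below are made-up examples, not values from the paper's dataset:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Rows are the actual class, columns the predicted class."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Fraction of predictions on the diagonal (true positives per class)."""
    return np.trace(cm) / cm.sum()

# Hypothetical labels for 3 activities: 0 = walking, 1 = sitting, 2 = standing.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]   # e.g. one model's output on six sensor windows
cm = confusion_matrix(y_true, y_pred, 3)
acc = accuracy(cm)            # 4 of the 6 predictions are on the diagonal
```

Computing the same two statistics for each model on a shared held-out split is what makes the CNN-vs-LSTM comparison in the paper consistent.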
function minTransfers(transactions: number[][]): number {
    // Net balance per person (the problem guarantees at most 12 people).
    const g: number[] = new Array(12).fill(0);
    for (const [f, t, x] of transactions) {
        g[f] -= x;
        g[t] += x;
    }
    // Only people with a non-zero balance need to settle up.
    const nums = g.filter(x => x !== 0);
    const m = nums.length;
    // f[i]: minimum number of transfers to settle the subset encoded by bitmask i.
    const f: number[] = new Array(1 << m).fill(1 << 29);
    f[0] = 0;
    for (let i = 1; i < 1 << m; ++i) {
        let s = 0;
        for (let j = 0; j < m; ++j) {
            if (((i >> j) & 1) === 1) {
                s += nums[j];
            }
        }
        // A subset can be settled internally only if its balances sum to zero.
        if (s === 0) {
            // k people can always settle among themselves with at most k - 1 transfers.
            f[i] = bitCount(i) - 1;
            // Try splitting the subset into two independent zero-sum groups.
            for (let j = (i - 1) & i; j; j = (j - 1) & i) {
                f[i] = Math.min(f[i], f[j] + f[i ^ j]);
            }
        }
    }
    return f[(1 << m) - 1];
}

// Branch-free popcount for 32-bit integers.
function bitCount(i: number): number {
    i = i - ((i >>> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >>> 2) & 0x33333333);
    i = (i + (i >>> 4)) & 0x0f0f0f0f;
    i = i + (i >>> 8);
    i = i + (i >>> 16);
    return i & 0x3f;
}
import NewsController from './news';

declare module 'egg' {
    export interface IController {
        news: NewsController;
    }
}
// sweeps one span
// returns number of pages returned to heap, or ^uintptr(0) if there is nothing to sweep
//go:nowritebarrier
func sweepone() uintptr {
	_g_ := getg()
	sweepRatio := mheap_.sweepPagesPerByte

	_g_.m.locks++
	if atomic.Load(&mheap_.sweepdone) != 0 {
		_g_.m.locks--
		return ^uintptr(0)
	}
	atomic.Xadd(&mheap_.sweepers, +1)

	npages := ^uintptr(0)
	sg := mheap_.sweepgen
	for {
		s := mheap_.sweepSpans[1-sg/2%2].pop()
		if s == nil {
			atomic.Store(&mheap_.sweepdone, 1)
			break
		}
		if s.state != mSpanInUse {
			if s.sweepgen != sg {
				print("runtime: bad span s.state=", s.state, " s.sweepgen=", s.sweepgen, " sweepgen=", sg, "\n")
				throw("non in-use span in unswept list")
			}
			continue
		}
		if s.sweepgen != sg-2 || !atomic.Cas(&s.sweepgen, sg-2, sg-1) {
			continue
		}
		npages = s.npages
		if !s.sweep(false) {
			npages = 0
		}
		break
	}

	if atomic.Xadd(&mheap_.sweepers, -1) == 0 && atomic.Load(&mheap_.sweepdone) != 0 {
		if debug.gcpacertrace > 0 {
			print("pacer: sweep done at heap size ", memstats.heap_live>>20, "MB; allocated ", (memstats.heap_live-mheap_.sweepHeapLiveBasis)>>20, "MB during sweep; swept ", mheap_.pagesSwept, " pages at ", sweepRatio, " pages/byte\n")
		}
	}
	_g_.m.locks--
	return npages
}
/* Distributed under the OSI-approved BSD 3-Clause License.  See accompanying
   file Copyright.txt or https://cmake.org/licensing for details.  */
#include "cmProjectCommand.h"

#include "cmsys/RegularExpression.hxx"
#include <sstream>
#include <stdio.h>

#include "cmMakefile.h"
#include "cmPolicies.h"
#include "cmStateTypes.h"
#include "cmSystemTools.h"
#include "cmake.h"

class cmExecutionStatus;

// cmProjectCommand
bool cmProjectCommand::InitialPass(std::vector<std::string> const& args,
                                   cmExecutionStatus&)
{
  if (args.empty()) {
    this->SetError("PROJECT called with incorrect number of arguments");
    return false;
  }

  std::string const& projectName = args[0];

  this->Makefile->SetProjectName(projectName);

  std::string bindir = projectName;
  bindir += "_BINARY_DIR";
  std::string srcdir = projectName;
  srcdir += "_SOURCE_DIR";

  this->Makefile->AddCacheDefinition(
    bindir, this->Makefile->GetCurrentBinaryDirectory(),
    "Value Computed by CMake", cmStateEnums::STATIC);
  this->Makefile->AddCacheDefinition(
    srcdir, this->Makefile->GetCurrentSourceDirectory(),
    "Value Computed by CMake", cmStateEnums::STATIC);

  bindir = "PROJECT_BINARY_DIR";
  srcdir = "PROJECT_SOURCE_DIR";

  this->Makefile->AddDefinition(bindir,
                                this->Makefile->GetCurrentBinaryDirectory());
  this->Makefile->AddDefinition(srcdir,
                                this->Makefile->GetCurrentSourceDirectory());

  this->Makefile->AddDefinition("PROJECT_NAME", projectName.c_str());

  // Set the CMAKE_PROJECT_NAME variable to be the highest-level
  // project name in the tree. If there are two project commands
  // in the same CMakeLists.txt file, and it is the top level
  // CMakeLists.txt file, then go with the last one, so that
  // CMAKE_PROJECT_NAME will match PROJECT_NAME, and cmake --build
  // will work.
  if (!this->Makefile->GetDefinition("CMAKE_PROJECT_NAME") ||
      (this->Makefile->IsRootMakefile())) {
    this->Makefile->AddDefinition("CMAKE_PROJECT_NAME", projectName.c_str());
    this->Makefile->AddCacheDefinition(
      "CMAKE_PROJECT_NAME", projectName.c_str(), "Value Computed by CMake",
      cmStateEnums::STATIC);
  }

  bool haveVersion = false;
  bool haveLanguages = false;
  bool haveDescription = false;
  std::string version;
  std::string description;
  std::vector<std::string> languages;
  enum Doing
  {
    DoingDescription,
    DoingLanguages,
    DoingVersion
  };
  Doing doing = DoingLanguages;
  for (size_t i = 1; i < args.size(); ++i) {
    if (args[i] == "LANGUAGES") {
      if (haveLanguages) {
        this->Makefile->IssueMessage(
          cmake::FATAL_ERROR, "LANGUAGES may be specified at most once.");
        cmSystemTools::SetFatalErrorOccured();
        return true;
      }
      haveLanguages = true;
      doing = DoingLanguages;
    } else if (args[i] == "VERSION") {
      if (haveVersion) {
        this->Makefile->IssueMessage(cmake::FATAL_ERROR,
                                     "VERSION may be specified at most once.");
        cmSystemTools::SetFatalErrorOccured();
        return true;
      }
      haveVersion = true;
      doing = DoingVersion;
    } else if (args[i] == "DESCRIPTION") {
      if (haveDescription) {
        this->Makefile->IssueMessage(
          cmake::FATAL_ERROR, "DESCRIPTION may be specified at most once.");
        cmSystemTools::SetFatalErrorOccured();
        return true;
      }
      haveDescription = true;
      doing = DoingDescription;
    } else if (doing == DoingVersion) {
      doing = DoingLanguages;
      version = args[i];
    } else if (doing == DoingDescription) {
      doing = DoingLanguages;
      description = args[i];
    } else // doing == DoingLanguages
    {
      languages.push_back(args[i]);
    }
  }

  if (haveVersion && !haveLanguages && !languages.empty()) {
    this->Makefile->IssueMessage(
      cmake::FATAL_ERROR,
      "project with VERSION must use LANGUAGES before language names.");
    cmSystemTools::SetFatalErrorOccured();
    return true;
  }
  if (haveLanguages && languages.empty()) {
    languages.push_back("NONE");
  }

  cmPolicies::PolicyStatus cmp0048 =
    this->Makefile->GetPolicyStatus(cmPolicies::CMP0048);
  if (haveVersion) {
    // Set project VERSION variables to given values
    if (cmp0048 == cmPolicies::OLD || cmp0048 == cmPolicies::WARN) {
      this->Makefile->IssueMessage(
        cmake::FATAL_ERROR,
        "VERSION not allowed unless CMP0048 is set to NEW");
      cmSystemTools::SetFatalErrorOccured();
      return true;
    }

    cmsys::RegularExpression vx(
      "^([0-9]+(\\.[0-9]+(\\.[0-9]+(\\.[0-9]+)?)?)?)?$");
    if (!vx.find(version)) {
      std::string e = "VERSION \"" + version + "\" format invalid.";
      this->Makefile->IssueMessage(cmake::FATAL_ERROR, e);
      cmSystemTools::SetFatalErrorOccured();
      return true;
    }

    std::string vs;
    const char* sep = "";
    char vb[4][64];
    unsigned int v[4] = { 0, 0, 0, 0 };
    int vc =
      sscanf(version.c_str(), "%u.%u.%u.%u", &v[0], &v[1], &v[2], &v[3]);
    for (int i = 0; i < 4; ++i) {
      if (i < vc) {
        sprintf(vb[i], "%u", v[i]);
        vs += sep;
        vs += vb[i];
        sep = ".";
      } else {
        vb[i][0] = 0;
      }
    }

    std::string vv;
    vv = projectName + "_VERSION";
    this->Makefile->AddDefinition("PROJECT_VERSION", vs.c_str());
    this->Makefile->AddDefinition(vv, vs.c_str());
    vv = projectName + "_VERSION_MAJOR";
    this->Makefile->AddDefinition("PROJECT_VERSION_MAJOR", vb[0]);
    this->Makefile->AddDefinition(vv, vb[0]);
    vv = projectName + "_VERSION_MINOR";
    this->Makefile->AddDefinition("PROJECT_VERSION_MINOR", vb[1]);
    this->Makefile->AddDefinition(vv, vb[1]);
    vv = projectName + "_VERSION_PATCH";
    this->Makefile->AddDefinition("PROJECT_VERSION_PATCH", vb[2]);
    this->Makefile->AddDefinition(vv, vb[2]);
    vv = projectName + "_VERSION_TWEAK";
    this->Makefile->AddDefinition("PROJECT_VERSION_TWEAK", vb[3]);
    this->Makefile->AddDefinition(vv, vb[3]);
  } else if (cmp0048 != cmPolicies::OLD) {
    // Set project VERSION variables to empty
    std::vector<std::string> vv;
    vv.push_back("PROJECT_VERSION");
    vv.push_back("PROJECT_VERSION_MAJOR");
    vv.push_back("PROJECT_VERSION_MINOR");
    vv.push_back("PROJECT_VERSION_PATCH");
    vv.push_back("PROJECT_VERSION_TWEAK");
    vv.push_back(projectName + "_VERSION");
    vv.push_back(projectName + "_VERSION_MAJOR");
    vv.push_back(projectName + "_VERSION_MINOR");
    vv.push_back(projectName + "_VERSION_PATCH");
    vv.push_back(projectName + "_VERSION_TWEAK");
    std::string vw;
    for (std::string const& i : vv) {
      const char* v = this->Makefile->GetDefinition(i);
      if (v && *v) {
        if (cmp0048 == cmPolicies::WARN) {
          vw += "\n ";
          vw += i;
        } else {
          this->Makefile->AddDefinition(i, "");
        }
      }
    }
    if (!vw.empty()) {
      std::ostringstream w;
      w << cmPolicies::GetPolicyWarning(cmPolicies::CMP0048)
        << "\nThe following variable(s) would be set to empty:" << vw;
      this->Makefile->IssueMessage(cmake::AUTHOR_WARNING, w.str());
    }
  }

  if (haveDescription) {
    this->Makefile->AddDefinition("PROJECT_DESCRIPTION", description.c_str());
    // Set the CMAKE_PROJECT_DESCRIPTION variable to be the highest-level
    // project name in the tree. If there are two project commands
    // in the same CMakeLists.txt file, and it is the top level
    // CMakeLists.txt file, then go with the last one.
    if (!this->Makefile->GetDefinition("CMAKE_PROJECT_DESCRIPTION") ||
        (this->Makefile->IsRootMakefile())) {
      this->Makefile->AddDefinition("CMAKE_PROJECT_DESCRIPTION",
                                    description.c_str());
      this->Makefile->AddCacheDefinition(
        "CMAKE_PROJECT_DESCRIPTION", description.c_str(),
        "Value Computed by CMake", cmStateEnums::STATIC);
    }
  }

  if (languages.empty()) {
    // if no language is specified do C and C++
    languages.push_back("C");
    languages.push_back("CXX");
  }
  this->Makefile->EnableLanguage(languages, false);
  std::string extraInclude = "CMAKE_PROJECT_" + projectName + "_INCLUDE";
  const char* include = this->Makefile->GetDefinition(extraInclude);
  if (include) {
    bool readit = this->Makefile->ReadDependentFile(include);
    if (!readit && !cmSystemTools::GetFatalErrorOccured()) {
      std::string m = "could not find file:\n"
                      " ";
      m += include;
      this->SetError(m);
      return false;
    }
  }

  return true;
}
/**
 * Send a response to a queue add attempt.
 *
 * @param request
 *            the original payload with the request to enqueue
 * @param player
 *            the player to send it to
 * @param type
 *            the type of response
 * @param media
 *            the media clip, or null
 */
protected void sendEnqueueResponse(Enqueue request, EntityPlayer player, Response type, Media media) {
    EnqueueResponse response = new EnqueueResponse(type, media);
    ResponseTracker.markResponseFor(request, response);
    List<EntityPlayer> players = new ArrayList<EntityPlayer>();
    players.add(player);
    firePayloadSend(new BehaviorPayload(BehaviorType.ENQUEUE_RESULT, response), players);
}
/****************************************************************************
Copyright 2004, Colorado School of Mines and others.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
****************************************************************************/
package edu.mines.jtk.sgl;

import java.awt.*;

import static edu.mines.jtk.ogl.Gl.*;

/**
 * OpenGL color state.
 * @author <NAME>, Colorado School of Mines
 * @version 2005.05.31
 */
public class ColorState implements State {

  /**
   * Constructs color state.
   */
  public ColorState() {
  }

  /**
   * Determines whether current color is set.
   * @return true, if set; false, otherwise.
   */
  public boolean hasColor() {
    return _colorSet;
  }

  /**
   * Gets the current color.
   * @return the current color.
   */
  public Color getColor() {
    return _color;
  }

  /**
   * Sets the current color.
   * @param color the current color.
   */
  public void setColor(Color color) {
    _color = color;
    _colorSet = true;
  }

  /**
   * Unsets the current color.
   */
  public void unsetColor() {
    _color = _colorDefault;
    _colorSet = false;
  }

  /**
   * Determines whether shade model is set.
   * @return true, if set; false, otherwise.
   */
  public boolean hasShadeModel() {
    return _shadeModelSet;
  }

  /**
   * Gets the shade model.
   * @return the shade model.
   */
  public int getShadeModel() {
    return _shadeModel;
  }

  /**
   * Sets the shade model.
   * @param shadeModel the shade model.
   */
  public void setShadeModel(int shadeModel) {
    _shadeModel = shadeModel;
    _shadeModelSet = true;
  }

  /**
   * Unsets the shade model.
   */
  public void unsetShadeModel() {
    _shadeModel = _shadeModelDefault;
    _shadeModelSet = false;
  }

  public void apply() {
    if (_colorSet) {
      byte r = (byte)_color.getRed();
      byte g = (byte)_color.getGreen();
      byte b = (byte)_color.getBlue();
      byte a = (byte)_color.getAlpha();
      glColor4ub(r,g,b,a);
    }
    if (_shadeModelSet) {
      glShadeModel(_shadeModel);
    }
  }

  public int getAttributeBits() {
    int bits = 0;
    if (_colorSet) bits |= GL_CURRENT_BIT;
    if (_shadeModelSet) bits |= GL_LIGHTING_BIT;
    return bits;
  }

  private static Color _colorDefault = new Color(1.0f,1.0f,1.0f,1.0f);
  private Color _color = _colorDefault;
  private boolean _colorSet;

  private static int _shadeModelDefault = GL_SMOOTH;
  private int _shadeModel = _shadeModelDefault;
  private boolean _shadeModelSet;
}
use common::{Ray, Transform, Vec3};

use crate::CubeCollider;

pub type Tri = [Vec3; 3];

/// given a triangle that is counter clockwise, it will return the normal that is normalized
pub fn get_normal_from_tri(tri: &Tri) -> Vec3 {
    -(tri[1] - tri[2]).cross(tri[0] - tri[2]).normalized()
}

/// get proper vertex positions in world space
pub fn get_vertex(w: &Vec3, t: &Transform, c: &CubeCollider) -> Vec<Vec3> {
    let s = c.scale * t.scale;
    let r = t.rotation * c.local_rotation;
    let mut vec: Vec<Vec3> = Vec::with_capacity(8);
    for x in [-1.0, 1.0] {
        for y in [-1.0, 1.0] {
            for z in [-1.0, 1.0] {
                vec.push(w + r * Vec3::new(s.x * x, s.y * y, s.z * z))
            }
        }
    }
    vec
}

/// in binary order, aka v000 v001 v010, not rotated, where v000 is min and v111 is max;
/// note that it does not apply rotation or world position
pub fn get_verts(t: &Transform, c: &CubeCollider) -> [Vec3; 8] {
    let c = t.scale * c.scale;
    let v111 = c;
    let v000 = -c;
    let v001 = Vec3::new(v000.x, v000.y, v111.z);
    let v010 = Vec3::new(v000.x, v111.y, v000.z);
    let v100 = Vec3::new(v111.x, v000.y, v000.z);
    let v011 = Vec3::new(v000.x, v111.y, v111.z);
    let v110 = Vec3::new(v111.x, v111.y, v000.z);
    let v101 = Vec3::new(v111.x, v000.y, v111.z);
    [v000, v001, v010, v011, v100, v101, v110, v111]
}

#[test]
fn test_get_verts() {
    use crate::PhysicsMaterial;
    let scale = Vec3::new(2.0, 1.0, 10.0);
    let t = Transform {
        position: Vec3::zero(),
        rotation: common::Quaternion::identity(),
        scale,
    };
    let material = PhysicsMaterial {
        friction: 1.0,
        restfullness: 1.0,
    };
    let c = CubeCollider::new(Vec3::one(), material);
    let verts = get_verts(&t, &c);
    for x in [-1, 1] {
        for y in [-1, 1] {
            for z in [-1, 1] {
                assert!(verts.contains(&Vec3::new(
                    scale.x * x as f32,
                    scale.y * y as f32,
                    scale.z * z as f32,
                )))
            }
        }
    }
}

pub fn get_rays_for_cube(verts: &[Vec3; 8]) -> [Ray; 12] {
    let [v000, v001, v010, v011, v100, v101, v110, _v111] = *verts;
    [
        Ray::new(v000, Vec3::unit_x()),
        Ray::new(v001, Vec3::unit_x()),
        Ray::new(v010, Vec3::unit_x()),
        Ray::new(v011, Vec3::unit_x()),
        Ray::new(v000, Vec3::unit_y()),
        Ray::new(v001, Vec3::unit_y()),
        Ray::new(v100, Vec3::unit_y()),
        Ray::new(v101, Vec3::unit_y()),
        Ray::new(v000, Vec3::unit_z()),
        Ray::new(v010, Vec3::unit_z()),
        Ray::new(v100, Vec3::unit_z()),
        Ray::new(v110, Vec3::unit_z()),
    ]
}

pub fn get_tris_for_cube(verts: &[Vec3; 8]) -> [Tri; 12] {
    let [v000, v001, v010, v011, v100, v101, v110, v111] = *verts;
    // counter clockwise
    [
        // x face
        [v101, v100, v110],
        [v101, v110, v111],
        // -x face
        [v000, v001, v011],
        [v000, v011, v010],
        // y face
        [v111, v010, v011],
        [v010, v111, v110],
        // -y face
        [v000, v101, v001],
        [v000, v100, v101],
        // z face
        [v001, v101, v111],
        [v001, v111, v011],
        // -z face
        [v000, v010, v100],
        [v010, v110, v100],
    ]
}
import java.util.Scanner;

public class ProperNutrition {

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        long n, gar, bar;
        n = Long.parseLong(in.nextLine());
        gar = Long.parseLong(in.nextLine());
        bar = Long.parseLong(in.nextLine());
        in.close();

        long resp;
        for (int i = 0; i <= n; i++) {
            // compute what is left of n after buying i bars
            resp = (n - (i * bar));
            // check whether the remainder is divisible by the bottle price
            if (resp % gar == 0 && resp >= 0) {
                System.out.println("YES");
                System.out.println(resp / gar + " " + i);
                return;
            }
        }
        System.out.println("NO");
    }
}
Story highlights

The memo was submitted to the White House Office of Management and Budget

The move would likely be controversial

Washington (CNN) The White House is considering a proposal to move both the State Department's bureau of Consular Affairs and its bureau of Population, Refugees, and Migration to the Department of Homeland Security, a senior White House official tells CNN.

The move, which the White House official cautioned was far from becoming official policy, would likely be controversial among diplomats and experts in State Department matters.

The bureau of Consular Affairs is one of the largest sections of the State Department. Its many responsibilities include issuing passports and assisting citizens overseas by putting out travel alerts and helping with emergency services. The bureau is also tasked with granting temporary visas to foreigners who want to visit or work in the United States.

"It would be a huge mistake," said Anne Richard, who led the bureau of Population, Refugees, and Migration during President Barack Obama's second term.

The proposals were written in a memo submitted to the White House Office of Management and Budget by the White House Domestic Policy Council as part of President Trump's March executive order calling for ideas on government reorganization.
import React from 'react';
import createSvgIcon from './helpers/createSvgIcon';

export default createSvgIcon(
  <path d="M17 5v8.61l2 2V1H5v.61L8.39 5zM2.9 2.35L1.49 3.76 5 7.27V23h14v-1.73l1.7 1.7 1.41-1.41L2.9 2.35zM7 19V9.27L16.73 19H7z" />,
  'MobileOffSharp',
);
//==============================================================================
//         Copyright 2003 - 2012   LASMEA UMR 6602 CNRS/Univ. Clermont II
//         Copyright 2009 - 2012   LRI    UMR 8623 CNRS/Univ Paris Sud XI
//
//          Distributed under the Boost Software License, Version 1.0.
//                 See accompanying file LICENSE.txt or copy at
//                     http://www.boost.org/LICENSE_1_0.txt
//==============================================================================
#ifndef NT2_CORE_FUNCTIONS_COMMON_FREQSPACE_HPP_INCLUDED
#define NT2_CORE_FUNCTIONS_COMMON_FREQSPACE_HPP_INCLUDED

#include <nt2/core/functions/freqspace.hpp>
#include <nt2/include/functions/freqspace1.hpp>
#include <nt2/include/functions/colon.hpp>
#include <nt2/include/functions/tie.hpp>
#include <nt2/include/functions/scalar/floor.hpp>
#include <nt2/include/functions/scalar/rec.hpp>
#include <nt2/include/constants/half.hpp>
#include <nt2/options.hpp>
#include <boost/mpl/bool.hpp>

namespace nt2 { namespace ext
{
  //============================================================================
  // This version of freqspace is called whenever a tie(...) = freqspace(...) is
  // captured before assign is resolved. As a tieable function, freqspace
  // retrieves rhs/lhs pair as inputs
  //============================================================================
  NT2_FUNCTOR_IMPLEMENTATION( nt2::tag::freqspace_, tag::cpu_
                            , (A0)(N0)(A1)(N1)
                            , ((node_ < A0, nt2::tag::freqspace_
                                      , N0, nt2::container::domain
                                      >
                              ))
                              ((node_ < A1, nt2::tag::tie_
                                      , N1, nt2::container::domain
                                      >
                              ))
                            )
  {
    typedef void result_type;

    typedef typename boost::proto::result_of::child_c<A0&,0>::type child0;
    typedef typename boost::proto::result_of::child_c<A1&,0>::type child1;

    typedef typename boost::dispatch::meta::
            terminal_of< typename boost::dispatch::meta::
                         semantic_of<child0>::type
                       >::type                                     in0_t;
    typedef typename boost::dispatch::meta::
            terminal_of< typename boost::dispatch::meta::
                         semantic_of<child1>::type
                       >::type                                     out_t;
    typedef typename out_t::value_type                             value_t;

    BOOST_FORCEINLINE result_type operator()( A0& a0, A1& a1 ) const
    {
      int n = 0, m = 0;
      bool whole    = false;
      bool meshgrid = false;
      getmn(a0, m, n, whole, meshgrid, N0(), N1() );
      compute(a1, m, n, whole, meshgrid, N1());
    }

  private:
    BOOST_FORCEINLINE
    void compute( A1 & a1, int m, int, bool whole
                , bool, boost::mpl::long_<1> const&
                ) const
    {
      if (whole)
        boost::proto::child_c<0>(a1) = freqspace1(m, nt2::whole_, meta::as_<value_t>());
      else
        boost::proto::child_c<0>(a1) = freqspace1(m, meta::as_<value_t>());
    }

    void compute( A1 & a1, int m, int n, bool
                , bool /*meshgrid*/, boost::mpl::long_<2> const&
                ) const
    {
      value_t hvm = m*nt2::Half<value_t>();
      value_t hvn = n*nt2::Half<value_t>();
      value_t hm  = nt2::rec(hvm);
      value_t hn  = nt2::rec(hvn);
      value_t lm  = -nt2::floor(hvm)*hm;
      value_t ln  = -nt2::floor(hvn)*hn;

      // TODO: implement support for meshgrid option
      //if (meshgrid)
      //  {
      boost::proto::child_c<0>(a1) = nt2::_(ln, hn, value_t(1)-value_t(2)/n);
      boost::proto::child_c<1>(a1) = nt2::_(lm, hm, value_t(1)-value_t(1)/m);
      //  }
      //  else
      //  {
      //    boost::proto::child_c<0>(a1) = ??;
      //    boost::proto::child_c<1>(a1) = ??;
      //  }
    }

    BOOST_FORCEINLINE //[f] = freqspace(n)
    void getmn( A0 const& a0, int& m, int& n, bool&, bool&
              , boost::mpl::long_<3> const&  // number of inputs
              , boost::mpl::long_<1> const&  // number of outputs
              ) const
    {
      m = int(boost::proto::value(boost::proto::child_c<1>(a0)));
      n = 0;
    }

    BOOST_FORCEINLINE //[f1, f2] = freqspace(n)
    void getmn( A0 const& a0, int& m, int& n, bool&, bool&
              , boost::mpl::long_<3> const&  // number of inputs
              , boost::mpl::long_<2> const&  // number of outputs
              ) const
    {
      typedef typename boost::proto::result_of::child_c<A0&,1>::type child1;
      typedef typename boost::proto::result_of::value<child1>::type  type_t;
      typedef typename meta::is_scalar<type_t>::type                 choice_t;
      m = getval(boost::proto::value(boost::proto::child_c<1>(a0)), 0, choice_t());
      n = getval(boost::proto::value(boost::proto::child_c<1>(a0)), 1, choice_t());
    }

    template < class T >
    static int getval(const T& a0, int, const boost::mpl::bool_<true>&)
    {
      return a0;
    }

    template < class T >
    static int getval(const T& a0, int i, const boost::mpl::bool_<false>&)
    {
      return a0[i];
    }

    BOOST_FORCEINLINE //[f] = freqspace(n, whole_)
    void getmn( A0 const& a0, int& m, int& n, bool& whole, bool&
              , boost::mpl::long_<4> const&  // number of inputs
              , boost::mpl::long_<1> const&  // number of outputs
              ) const
    {
      m = int(boost::proto::value(boost::proto::child_c<1>(a0)));
      n = 0;
      whole = true;
    }

    BOOST_FORCEINLINE //[f,g] = freqspace(n, whole_)
    void getmn( A0 const& a0, int& m, int& n, bool& whole, bool&
              , boost::mpl::long_<4> const&  // number of inputs
              , boost::mpl::long_<2> const&  // number of outputs
              ) const
    {
      m = int(boost::proto::value(boost::proto::child_c<1>(a0)));
      n = 0;
      whole = true;
    }

    template < class Dummy >
    BOOST_FORCEINLINE // [f1, f2] = freqspace([m, n])
    void getmn( A0 const& a0, int& m, int& n, bool&, bool&
              , boost::mpl::long_<3> const&  // number of inputs
              , boost::mpl::long_<2> const&  // number of outputs
              , Dummy()
              ) const
    {
      typedef typename boost::proto::result_of::child_c<A0&,1>::type child1;
      typedef typename boost::proto::result_of::value<child1>::type  type_t;
      typedef typename meta::is_scalar<type_t>::type                 choice_t;
      m = getval(boost::proto::value(boost::proto::child_c<1>(a0)), 0, choice_t());
      n = getval(boost::proto::value(boost::proto::child_c<1>(a0)), 1, choice_t());
    }

    template < class Dummy >
    BOOST_FORCEINLINE // [f1, f2] = freqspace([m, n], meshgrid_)
    void getmn( A0 const& a0, int& m, int& n, bool&, bool& meshgrid
              , boost::mpl::long_<4> const&  // number of inputs
              , boost::mpl::long_<2> const&  // number of outputs
              , Dummy()
              ) const
    {
      typedef typename boost::proto::result_of::child_c<A0&,1>::type child1;
      typedef typename boost::proto::result_of::value<child1>::type  type_t;
      typedef typename meta::is_scalar<type_t>::type                 choice_t;
      m = getval(boost::proto::value(boost::proto::child_c<1>(a0)), 0, choice_t());
      n = getval(boost::proto::value(boost::proto::child_c<1>(a0)), 1, choice_t());
      meshgrid = true;
    }
  };
} }

#endif
package com.rochakgupta.stocktrading.main.section.portfolio;

import android.view.View;
import android.widget.TextView;

import androidx.recyclerview.widget.RecyclerView;

import com.rochakgupta.stocktrading.R;

public class PortfolioHeaderViewHolder extends RecyclerView.ViewHolder {
    final TextView netWorthView;

    public PortfolioHeaderViewHolder(View view) {
        super(view);
        netWorthView = (TextView) view.findViewById(R.id.portfolio_tv_net_worth);
    }
}