What word will you say the most often in your life? The word you use most commonly is probably the word all English speakers use: the. What are the most-used words in the English language?

One way to trace the most frequently used words is with the search magic of the Google Ngram Viewer. With the Ngram Viewer, you type a word, and it tells you how often that word is used over a specific time period, based on the Google Books database. For example, the word "the" is used about 5% of the time, which means that in every text of 100 words, 5 of those words are "the." Similarly, the word "of" is used about 3% of the time, and the word "and" is used about 2.5% of the time.

The folks over at Oxford Dictionaries compiled a comprehensive analysis of English-language usage called the Oxford English Corpus. In this sense, a corpus is the entire body of words and phrases that constitute a language. See their complete analysis here.

Obviously, most of the most commonly used words are short words that help build sentences. As in the previous sentence, the words of, are, that, and the join the parts of the sentence that make the idea. Linguists call these "function words." 84 of the top 100 words are function words.

Things get interesting when you look at the list of the top 10 nouns: time, person, year, way, day, thing, man, world, life, hand.

Some of these words, like way, have many different meanings, which may be why they are used more frequently. For example, you could say "She's lost her way" or "That's the way to the grocery store." It is the same word in both instances, but with very different meanings. Another reason certain words occur frequently has to do with their use in common phrases. So a word like time is used often on its own, and it also appears in many common phrases, like last time, in time, next time, etc.

Journalists have suggested banishing overused words. Learn about the banished words from 2011 here. Do these words surprise you? Are there words you expected to be common that aren't?
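The percentages above are just word counts divided by the total number of words. As a rough illustration, here is a minimal sketch in Python with a made-up sample text; it is not connected to the Ngram Viewer or the Oxford English Corpus, it only shows how such shares are computed:

from collections import Counter

def word_frequencies(text):
    """Return each word's share of all words in the text, as a percentage."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return {word: 100.0 * count / total for word, count in counts.items()}

sample = "the cat sat on the mat and the dog sat by the door"
freqs = word_frequencies(sample)
print(f"'the' makes up {freqs['the']:.1f}% of this sample")  # 4 of 13 words, about 30.8%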
Bogota Mayor Gustavo Petro is in London to receive an award for his administration's environmental policies, a recognition that stands in stark contrast to how he is perceived in Colombia's capital. When Bogota awakes, the thin, high-altitude air allows an uninterrupted urban panorama punctuated only by pine-covered mountains. This paradisaical image, however, vanishes once the daily commute begins, giving way to traffic-heavy thoroughfares, packed buses and an unpleasant smog that hangs low over the city. It is this "Bogota Humana" strategy, together with the work of previous administrations, that has brought Gustavo Petro to London, where he shared a stage with London Mayor Boris Johnson. In an almost incongruous turn of events, given the unrest in the city in recent weeks, Bogota has seen off competition from the likes of Buenos Aires, Paris and Singapore to win the prestigious award from Siemens and the C40 Cities Climate Leadership Group for its work, policies and future plans in urban transportation. This triumph will hopefully enable Bogota to remove itself from the list of cities most contaminated by sulfur dioxide, as ranked by the Latin American Green City Index. All this seems at first glance a far cry from the troublesome scenes in the pocked and defaced streets of Bogota, but the mayor has likened the disturbances in his city to those in Paris in 2005 and London in 2012, where "a new disaffected generation" is finding a voice. Never one to shirk controversy, the mayor seems to court it: in July he decided to transplant his office from the stately environs of the Palacio Lievano, overlooking the opulent neoclassical Plaza de Bolivar in downtown Bogota, to the working-class district of Ciudad Bolivar. He said: "The poor layers that make up the city's fabric have traditionally surrounded the mayor with enthusiastic support." This has left the downtown mayoralty with precious little cohesion. His aggressive position has left him with a vocal opposition, not least former President Alvaro Uribe, whom he denounced over the "parapolitics" scandal, which implicated politicians in alliances with paramilitary organizations. As a former guerrilla with experience in the struggle for political participation, Petro is in a privileged position regarding the ongoing peace talks between the Colombian government and the FARC guerrillas in Havana, Cuba. The long-running Colombian conflict has pitted various leftist groups – with the FARC being the largest and oldest – against the state for roughly 50 years, in what began as an armed struggle over land. Bogota's mayor must be seen as a success story for transitional justice, arguably holding the country's most powerful position behind only that of President Juan Manuel Santos. That in itself is a message to all guerrillas present at the negotiating table. The mayor admits that he has been consulted by both the Santos administration and the FARC regarding the dialogues; a trip to Havana to participate does not seem far off. The mayor recognizes that any dialogue does indeed represent a strengthening of democracy in Colombia. He speaks of a violent and bloody culture of silence and assassinations dating back only 15 years. And while political assassinations may be largely consigned to the past and Colombia has improved – he has the figures to prove it – peace itself has been largely absent from the streets of Bogota in the last fortnight.
Huge protests that originated in the countryside, led by striking farmers opposed to free trade agreements they see as hindering their livelihoods, gathered momentum and mass support in the city. Thousands took to the streets of Bogota, culminating in riots on August 29. Much of the city, in particular some outlying districts and the colonial heart, was left in tatters after vandalism and wanton violence erupted between some protesters and the riot police. Subsequent government claims of a deliberate and malicious infiltration of the protest marches by members of the FARC guerrillas have been given short shrift by Petro and his allies. This is perhaps due to the unrelenting and uncompromising style Petro employs in Colombian politics.
package broadcast

import (
	"context"

	"github.com/iotaledger/hive.go/configuration"
	"github.com/iotaledger/hive.go/daemon"
	"github.com/iotaledger/hive.go/events"
	"github.com/iotaledger/hive.go/node"

	"github.com/iotaledger/goshimmer/packages/shutdown"
	"github.com/iotaledger/goshimmer/packages/tangle"
	"github.com/iotaledger/goshimmer/plugins/broadcast/server"
)

const (
	pluginName = "Broadcast"
)

var (
	// Plugin defines the plugin instance of the broadcast plugin.
	Plugin *node.Plugin

	deps = new(dependencies)
)

type dependencies struct {
	Tangle *tangle.Tangle
}

func init() {
	Plugin = node.NewPlugin(pluginName, deps, node.Disabled, run)
	configuration.BindParameters(Parameters, "Broadcast")
}

// ParametersDefinition contains the configuration parameters used by the plugin.
type ParametersDefinition struct {
	// BindAddress defines on which address the broadcast plugin should listen on.
	BindAddress string `default:"0.0.0.0:5050" usage:"the bind address for the broadcast plugin"`
}

// Parameters contains the configuration parameters of the broadcast plugin.
var Parameters = &ParametersDefinition{}

func run(_ *node.Plugin) {
	// Server to connect to.
	Plugin.LogInfof("Starting Broadcast plugin on %s", Parameters.BindAddress)
	if err := daemon.BackgroundWorker("Broadcast worker", func(ctx context.Context) {
		if err := server.Listen(Parameters.BindAddress, Plugin, ctx.Done()); err != nil {
			Plugin.LogErrorf("Failed to start Broadcast server: %v", err)
		}
		<-ctx.Done()
	}); err != nil {
		Plugin.LogFatalf("Failed to start Broadcast daemon: %v", err)
	}

	// Get messages from the node and broadcast them.
	notifyNewMsg := events.NewClosure(func(messageID tangle.MessageID) {
		deps.Tangle.Storage.Message(messageID).Consume(func(message *tangle.Message) {
			go func() {
				server.Broadcast([]byte(message.String()))
			}()
		})
	})

	if err := daemon.BackgroundWorker("Broadcast[MsgUpdater]", func(ctx context.Context) {
		deps.Tangle.Storage.Events.MessageStored.Attach(notifyNewMsg)
		<-ctx.Done()
		Plugin.LogInfof("Stopping Broadcast...")
		deps.Tangle.Storage.Events.MessageStored.Detach(notifyNewMsg)
		Plugin.LogInfof("Stopping Broadcast... \tDone")
	}, shutdown.PriorityBroadcast); err != nil {
		Plugin.LogErrorf("Failed to start as daemon: %s", err)
	}
}
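A quick way to check what the plugin emits is to connect to its bind address and dump whatever arrives. The sketch below (Python, for brevity) assumes the server speaks a plain, unframed TCP byte stream on the default port 5050; the address and that framing assumption are mine, not something the plugin documents here.

import socket

# Assumed address; matches the plugin's default BindAddress of 0.0.0.0:5050.
HOST, PORT = "127.0.0.1", 5050

with socket.create_connection((HOST, PORT)) as conn:
    while True:
        chunk = conn.recv(4096)
        if not chunk:  # the server closed the connection
            break
        # Messages arrive as the raw bytes of message.String(); print them as text.
        print(chunk.decode("utf-8", errors="replace"), end="")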
Ricardo Zonta

Early career
Born in Curitiba, Brazil, Zonta began karting in 1987, winning his first race shortly thereafter. The following year, he was runner-up for the Curitiba Karting Championship, and in 1991, he won the title. He continued karting until 1992, finishing fourth in the São Paulo Karting Championship before progressing to single-seaters for 1993. He finished sixth in the Brazilian Formula Chevrolet Championship, and then in 1994, came fifth in the Brazilian Formula Three Championship. A year later, Zonta won both the Brazilian and South American Formula Three Championships. Moving to Europe in 1996, Zonta competed in the International Formula 3000 Championship for Draco Racing, winning two races and finishing fourth overall. In the same year, he became the first Brazilian to compete in International Touring Cars, with Mercedes. In 1997, he won three races and the Formula 3000 championship. He also took home the "Golden Helmet" award for best international driver for his efforts. The Jordan Formula One team signed him as their official test driver following his championship, and in 1998, he was signed by McLaren boss Ron Dennis. Zonta tested with the McLaren Formula One team in 1998, and concurrently won the FIA GT Championship (GT1 class) and the "Golden Helmet" award in the "world prominence" category. In October 1998, immediately after winning the FIA GT championship, Zonta signed up with the B.A.R. Formula One racing team as one of its race drivers for the 1999 season, after rejecting offers from Jordan and Sauber.

Formula One career
In 1999, Zonta started as a Formula One racing driver alongside 1997 World Champion Jacques Villeneuve at the new B.A.R. team. Zonta injured his foot in an accident during practice for the Brazilian Grand Prix, and was forced to miss three races. He also had a large accident at Spa-Francorchamps, and finished the season with no championship points. Zonta remained with B.A.R. for the 2000 season, scoring his first world championship point with a sixth place in the opening race. He had another large accident when his front suspension broke during testing at Silverstone, but continued the season, scoring points in both the Italian and United States Grands Prix, to finish 14th in the championship. Replaced by Olivier Panis for the 2001 season, Zonta became the third driver for the Jordan team, standing in for the injured Heinz-Harald Frentzen for one race, and then again when Frentzen was sacked, but he was overlooked as Frentzen's replacement for the remainder of the season. In 2002, he decided to focus on the Telefónica World Series, which he won. Zonta was then hired as test driver for the Toyota F1 team in 2003, retaining the position in 2004. Toward the end of the season, the team dropped Cristiano da Matta from its race seat, and Zonta drove in four Grands Prix. In Belgium, a brilliant fourth place went begging when engine failure struck three laps from the finish. At Suzuka the team hired Jarno Trulli and Zonta had to sit the event out; however, the team allowed him to compete in his home race, the Brazilian Grand Prix, which he finished in 13th place. He continued as a test driver for Toyota in 2005, alongside veteran French driver Olivier Panis. At the US Grand Prix later that year, he stood in for an injured Ralf Schumacher and took his place on the grid, only for Toyota, like the other six Michelin-shod teams, to withdraw from the race due to safety concerns. 2006 saw Zonta continue with Toyota as the team's third and test driver.
Ricardo Zonta was confirmed as test driver for the Renault Formula One team for the 2007 season on 6 September 2006.

After Formula One
In 2007, Zonta entered the Stock Car Brasil series in parallel with his work for the Renault team. In 2008, he kicked off his sportscar career by contesting the 24 Hours of Le Mans with Peugeot Sport, driving the #9 car alongside Franck Montagny and Formula One tester Christian Klien. He also drives in the Grand-Am Championship in America with Krohn Racing, while serving as team owner and driver of Panasonic Racing in Stock Car Brasil.
// promptForNewAlbum asks the user for the name of an album to create,
// falling back to albumId if the user just presses enter.
func promptForNewAlbum(albumId string) (*Album, error) {
	fmt.Printf("Enter name for new album (press enter for '%s'): \n", albumId)
	reader := bufio.NewReader(os.Stdin)
	chosenAlbumName, err := reader.ReadString('\n')
	if err != nil {
		return nil, err
	}
	chosenAlbumName = strings.TrimSuffix(chosenAlbumName, "\n")
	if chosenAlbumName == "" {
		chosenAlbumName = albumId
	}
	chosenAlbum := Album{Name: chosenAlbumName}
	return &chosenAlbum, nil
}
Equine laminitis model: cryotherapy reduces the severity of lesions evaluated seven days after induction with oligofructose. REASONS FOR PERFORMING STUDY A previous preliminary study demonstrated the potential of distal limb cryotherapy (DLC) for preventing laminitis. Clinically, DLC must be effective for periods longer than 48 h, and the preventive effect must extend beyond its discontinuation. OBJECTIVES To evaluate the effect of DLC, applied during the developmental phase of induced laminitis, on the severity of clinical laminitis and lamellar histopathology 7 days after dosing. METHODS Eighteen normal Standardbred horses were divided into 3 groups of 6. Continuous cryotherapy was applied for 72 h to the distal limbs of the first group. The second and third groups were administered laminitis-inducing doses of oligofructose, with 72 h of cryotherapy applied (immediately after dosing) to the second group only. After clinical assessment, all horses were subjected to euthanasia 7 days after dosing, and hoof lamellar tissues were harvested and analysed. RESULTS In the laminitis-induced horses, clinical lameness and laminitis histopathology were significantly reduced in horses that underwent 72 h of DLC compared with untreated controls. Cryotherapy alone produced no significant lameness or other ill effect. CONCLUSIONS Continuous, medium- to long-term (72 h) cryotherapy applied to the distal limbs of horses safely and effectively ameliorates the clinical signs and pathology of acute laminitis. POTENTIAL RELEVANCE Pre-emptive distal limb cryotherapy is a practical method of ameliorating laminitis in ill horses at risk of developing the disease.
Since the typhoon hit, Danny Estember has been hiking three hours round-trip into the mountains each day to obtain what he can only hope is clean water for his five daughters and two sons. The exhausting journey is necessary because safe water is desperately scarce in this storm-ravaged portion of the Philippines. Without it, people struggling to rebuild and even survive risk catching intestinal and other diseases that can spread if they're unable to wash properly. While aid agencies work to provide a steady supply, survivors have resorted to scooping from streams, catching rainwater in buckets and smashing open pipes to obtain what is left from disabled pumping stations. With at least 600,000 people homeless, the demand is massive. "I'm thirsty and hungry. I'm worried — no food, no house, no water, no money," said Estember, a 50-year-old ambulance driver. Thousands of other people who sought shelter under the solid roof of the Tacloban City Astrodome also must improvise, taking water from wherever they can — a broken water pipe or a crumpled tarp. The water is salty and foul-tasting, but it is all many have had for days. The U.S. Institute of Medicine defines an adequate daily intake of fluids as roughly 100 ounces for men and about 75 ounces for women. Given the shortages and hot climate, it's certain that most in the disaster zone aren't getting anything like those amounts, leaving them prone to energy-sapping dehydration. Providing clean, safe drinking water is key to preventing the toll of dead and injured from rising in the weeks after a major natural disaster. Not only do survivors need to stay hydrated, they also need to be protected from waterborne diseases such as cholera and typhoid. Haiti's devastating January 2010 earthquake was followed by a cholera outbreak in October 2010 that health officials say has killed more than 8,000 people and sickened nearly 600,000. The two events were not directly linked, but the outbreak added misery to a country still recovering from the earlier disaster. Some studies have shown that cholera may have been introduced in Haiti by U.N. troops from Nepal, where the disease is endemic. Washing regularly, using latrines and boiling drinking water are the best ways to avoid contracting diarrhea and other ailments that could burden already stressed health services. It took several days for aid groups to bring large quantities of water to Tacloban, the eastern Philippine city where the typhoon wreaked its worst destruction. By Friday, tankers were arriving. Philippine Red Cross workers sluiced water into enormous plastic bladders fitted with faucets, from which people filled jerry cans, buckets, bottles and whatever other containers they might have. "I'm thirsty," said Lydia Advincula, 54, who for the last few days had been placing buckets outdoors to catch some of the torrential downpours that have added to the misery of homeless storm survivors. Water provisioning should get a big boost with the recent arrival of the U.S. Navy aircraft carrier USS George Washington, a virtual floating city with a distillation plant that can produce 400,000 gallons of fresh water per day — enough to supply 2,000 homes, according to the ship's website. Britain also is sending an aircraft carrier, the HMS Illustrious, with seven helicopters and facilities to produce fresh water, Britain's Ministry of Defense said. It said the ship is expected to reach the area about Nov. 25.
Filtration systems are now operating in Tacloban, the center of the relief effort, and two other towns in Leyte province, the hardest-hit area. Helicopters are dropping bottled water along with other relief supplies to more isolated areas. Other more high-tech water purification solutions are also available, such as water purification bottles developed since the 2004 Indian Ocean tsunami that devastated parts of Thailand, Indonesia, India and Sri Lanka. Those contain systems that filter out parasites, bacteria and other dangerous substances from virtually any water source, making it safe to drink and alleviating the high cost and logistical difficulties that shipping in bottled water entails. Longer-term water solutions will come once the crucial issues of shelter and security are settled and will likely have to wait several months, said John Saunders, of the U.S.-based International Association of Emergency Managers. Those water systems are far more complex, requiring expensive, specialized equipment and training for operators, he said. "I can bring in a $300,000 water system that provides thousands of liters per day of drinking water, but who pays for the system and how is it maintained and distribution managed?" Saunders said. Long-term solutions are a distant concern for Jaime Llanera, 44, as he stands in a shelter he and his family have fashioned out of broken plywood and a tarpaulin. A single 12-ounce bottle of mineral water delivered by the military three days earlier is all that's available for his parents, sister, brother-in-law and a friend. To stretch their supply, they've been collecting rainwater in buckets and any other containers they can find and boiling it. They're also using rainwater to clean: His mother dunks clothing into a bucket of rainwater and tries to scrub out the filth. The family plans to wait one more week. If help hasn't come by then, they'll try to find a way out of Tacloban so they can stay with relatives elsewhere. "We have no house. We have no home. But we're still intact," Llanera said.
// core/src/Physics/WorldDistanceJoint.cpp
#include "WorldDistanceJoint.h"

#include <cmath>
#include <iostream>

namespace ample::physics
{
WorldDistanceJoint2d::WorldDistanceJoint2d(const std::string &name,
                                           WorldObject2d &bodyA,
                                           WorldObject2d &bodyB,
                                           const ample::graphics::Vector2d<float> &anchorOnBodyA,
                                           const ample::graphics::Vector2d<float> &anchorOnBodyB,
                                           float length,
                                           bool collideConnected,
                                           float frequencyHz,
                                           float dampingRatio)
    : WorldJoint2d(name, "WorldDistanceJoint2d", bodyA, bodyB)
{
    jointDef.Initialize(getB2Body(bodyA), getB2Body(bodyB),
                        {anchorOnBodyA.x, anchorOnBodyA.y},
                        {anchorOnBodyB.x, anchorOnBodyB.y});
    jointDef.collideConnected = collideConnected;
    jointDef.frequencyHz = frequencyHz;
    jointDef.dampingRatio = dampingRatio;
    if (length > 0)
    {
        jointDef.length = length;
    }
    initB2Joint(bodyA.getWorldLayer(), &jointDef);
}

void WorldDistanceJoint2d::onActive()
{
    if (_form)
    {
        _form->setTranslate({(getAnchorA().x + getAnchorB().x) / 2,
                             (getAnchorA().y + getAnchorB().y) / 2,
                             _bodyA.getZ()});
        float angle = atan((getAnchorA().x - getAnchorB().x) / (getAnchorA().y - getAnchorB().y));
        _form->setRotate({0.0f, 0.0f, 1.0f}, 180 - angle * 180.0f / M_PI);
        float curLength = sqrt(pow(getAnchorA().x - getAnchorB().x, 2) +
                               pow(getAnchorA().y - getAnchorB().y, 2));
        _form->setScale({1.0f, curLength / _initLength, 1.0f});
    }
}

void WorldDistanceJoint2d::setForm(std::shared_ptr<graphics::GraphicalObject2d> form)
{
    _form = form;
    _initLength = getLength();
    _bodyA.getWorldLayer().addObject(form);
}

ample::graphics::Vector2d<float> WorldDistanceJoint2d::getLocalAnchorA() const
{
    const b2Vec2 &anchor = static_cast<b2DistanceJoint *>(_joint)->GetLocalAnchorA();
    return {anchor.x, anchor.y};
}

ample::graphics::Vector2d<float> WorldDistanceJoint2d::getLocalAnchorB() const
{
    const b2Vec2 &anchor = static_cast<b2DistanceJoint *>(_joint)->GetLocalAnchorB();
    return {anchor.x, anchor.y};
}

void WorldDistanceJoint2d::setLength(float length)
{
    static_cast<b2DistanceJoint *>(_joint)->SetLength(length);
}

float WorldDistanceJoint2d::getLength() const
{
    return static_cast<b2DistanceJoint *>(_joint)->GetLength();
}

void WorldDistanceJoint2d::setFrequency(float hz)
{
    static_cast<b2DistanceJoint *>(_joint)->SetFrequency(hz);
}

float WorldDistanceJoint2d::getFrequency() const
{
    return static_cast<b2DistanceJoint *>(_joint)->GetFrequency();
}

void WorldDistanceJoint2d::setDampingRatio(float ratio)
{
    static_cast<b2DistanceJoint *>(_joint)->SetDampingRatio(ratio);
}

float WorldDistanceJoint2d::getDampingRatio() const
{
    return static_cast<b2DistanceJoint *>(_joint)->GetDampingRatio();
}

WorldDistanceJoint2d::WorldDistanceJoint2d(const filing::JsonIO &input,
                                           std::shared_ptr<WorldObject2d> bodyA,
                                           std::shared_ptr<WorldObject2d> bodyB)
    : WorldJoint2d(input, *bodyA, *bodyB)
{
    jointDef.localAnchorA = input.read<graphics::Vector2d<float>>("local_anchorA");
    jointDef.localAnchorB = input.read<graphics::Vector2d<float>>("local_anchorB");
    jointDef.length = input.read<float>("length");
    jointDef.frequencyHz = input.read<float>("frequency_hz");
    jointDef.dampingRatio = input.read<float>("damping_ratio");
    jointDef.collideConnected = input.read<bool>("collide_connected");
    initB2Joint(bodyA->getWorldLayer(), &jointDef);
}

std::string WorldDistanceJoint2d::dump()
{
    filing::JsonIO output = WorldJoint2d::dump();
    output.write<graphics::Vector2d<float>>("local_anchorA", jointDef.localAnchorA);
    output.write<graphics::Vector2d<float>>("local_anchorB", jointDef.localAnchorB);
    output.write<float>("length", jointDef.length);
    output.write<float>("frequency_hz", jointDef.frequencyHz);
    output.write<float>("damping_ratio", jointDef.dampingRatio);
    output.write<bool>("collide_connected", jointDef.collideConnected);
    return output;
}
} // namespace ample::physics
Framework for biosignal interpretation in intensive care and anesthesia. Improved monitoring improves outcomes of care. As critical care is "critical", everything that can be done to detect and prevent complications as early as possible benefits the patients. In spite of major efforts by the research community to develop and apply sophisticated biosignal interpretation methods (BSI), the uptake of the results by industry has been poor. Consequently, the BSI methods used in clinical routine are fairly simple. This paper postulates that the main reason for the poor uptake is the insufficient bridging between the actors (i.e., clinicians, industry and research). This makes it difficult for the BSI developers to understand what can be implemented into commercial systems and what will be accepted by clinicians as routine tools. A framework is suggested that enables improved interaction and cooperation between the actors. This framework is based on the emerging commercial patient monitoring and data management platforms which can be shared and utilized by all concerned, from research to development and finally to clinical evaluation.
UTSA (Oct. 17): The Roadrunners (0-3) looked like they'd be tough to handle in a 42-32 loss at No. 22 Arizona in the opener, but they've struggled the last two weeks against Kansas State and Oklahoma State. This will be USM's homecoming game, and it has to go in the win column to reach a bowl.

Charlotte (Oct. 24): This is one road game that is a must-win for USM. The 49ers (2-1) trailed Middle Tennessee State 42-7 after ONE QUARTER Saturday before falling 73-14.

UTEP (Oct. 31): This game looks very winnable for the Golden Eagles in Hattiesburg. UTEP (1-2) lost its best player, running back Aaron Jones, to injury. The Miners are giving up 54.7 points a game on defense.

Old Dominion (Nov. 21): The Monarchs won't fold in Hattiesburg, but they aren't quite as good as last season, when quarterback Taylor Heinicke led them to a 6-6 record in their first season.

If USM protects its home field the rest of the way, it will have a good shot at its first bowl bid since 2011. In the end, six wins may be all USM needs.
Death from Diamox: Three Case Reports

Diamox (acetazolamide) is an inhibitor of the enzyme carbonic anhydrase and a nonbacteriostatic sulfonamide. It is widely used in ophthalmic practice to prevent and control abnormal rises in intra-ocular pressure in glaucoma, for pre-operative prophylaxis in intra-ocular surgery, for prophylaxis after YAG laser treatment, and in cystoid macular edema and retinal arterial occlusion. It is also used in non-ophthalmic practice, for conditions such as acute mountain sickness1, peptic ulcer2, idiopathic intracranial hypertension in pregnancy3, chronic hydrocephalus4, epilepsy5 and obstructive sleep apnea6. Diamox is not without its adverse reactions7-11. Common side effects include paresthesias and GIT disturbances, while occasional side effects are transient myopia, photosensitivity, urticaria, and melena/hematuria. Diamox also has certain rare but fatal complications, which include Stevens-Johnson syndrome, erythema multiforme, toxic epidermal necrolysis, metabolic acidosis, anaphylaxis, acute delirium and depression. We report three cases where the use of Diamox in an eye care setup proved fatal. The practice of pre-op Diamox in cataract surgery has since been stopped at Al-Shifa.

CASE REPORTS
Case One, March 2004
A 60-year-old male was admitted for cataract surgery at the Pakistan Institute of Medical Sciences (PIMS), Islamabad. Routine systemic examination and lab profile were normal. Pre-op 500 mg of Diamox was given. The patient became restless on the morning of the operation and complained of increased micturition. In the ward, located on the first floor, the patient looked confused and lost. He went to the toilet, walked out of a window, and died of a head injury.
/* * Title: CloudSim Toolkit * Description: CloudSim (Cloud Simulation) Toolkit for Modeling and Simulation * of Clouds * Licence: GPL - http://www.gnu.org/copyleft/gpl.html * * Copyright (c) 2009, The University of Melbourne, Australia */ package org.cloudbus.cloudsim.AutonomicLoadManagementStrategies; import org.cloudbus.cloudsim.brokers.DatacenterBroker; import org.cloudbus.cloudsim.cloudlets.Cloudlet; import org.cloudbus.cloudsim.core.CloudSim; import org.cloudbus.cloudsim.hosts.Host; import org.cloudbus.cloudsim.power.models.PowerAware; import org.cloudsimplus.builders.tables.CloudletsTableBuilder; import java.util.DoubleSummaryStatistics; import java.util.HashMap; import java.util.List; import java.util.Map; /** * This class is used to print the simulation results. This class is still under * development * * @see SimulationResults#printHostCpuUtilizationAndPowerConsumption(CloudSim * simulation, DatacenterBroker broker, List<Host> hostList) * */ public class SimulationResults { /** * Main method * * @param args */ public static void main(String[] args) { new SimulationResults(); } private boolean showAllHostUtilizationHistoryEntries; /** * @param simulation is the created instance of the class {@link CloudSim} * through which the simulation engine is launched * @param broker is the created data center broker used in the simulation * @param hostList is the created hosts during the simulation * * @author <NAME> * @since CloudSim Plus 1.0 */ public void SetHostIdealperiodMapTonew() { HostwithIdealperiod = new HashMap<Host, Double>(); } double AllhostpowerKWattsHour; public Map<Host, Double> HostwithIdealperiod = new HashMap<Host, Double>(); public double totalexecutiontimeofallCloudlets = 0; public double totalFinishedCloudlets = 0; public void printHostCpuUtilizationAndPowerConsumption(CloudSim simulation, DatacenterBroker broker, List<Host> hostList) { totalFinishedCloudlets = 0; totalexecutiontimeofallCloudlets = 0; List<Cloudlet> newList = broker.getCloudletFinishedList(); newList.forEach((cloudlet) -> { if (cloudlet.isFinished()) { totalFinishedCloudlets += 1; totalexecutiontimeofallCloudlets += cloudlet.getExecStartTime(); } }); new CloudletsTableBuilder(newList).build(); System.out.println(getClass().getSimpleName() + " finished!"); /** * Since the utilization history are stored in the reverse chronological order, * the values are presented in this way. */ for (Host host : hostList) { System.out.printf("Host %d CPU utilization and power consumption\n", host.getId()); System.out.println( "-------------------------------------------------------------------------------------------"); double prevUtilizationPercent = -1, prevWattsPerInterval = -1; final Map<Double, DoubleSummaryStatistics> utilizationPercentHistory = host.getUtilizationHistory(); double totalPowerWattsSec = 0; // double time = simulation.clock(); // time difference from the current to the previous line in the history double utilizationHistoryTimeInterval; double prevTime = 0; for (Map.Entry<Double, DoubleSummaryStatistics> entry : utilizationPercentHistory.entrySet()) { utilizationHistoryTimeInterval = entry.getKey() - prevTime; final double utilizationPercent = entry.getValue().getSum(); /** * The power consumption is returned in Watt-second, but it's measured the * continuous consumption before a given time, according to the time interval * defined by {@link #SCHEDULING_INTERVAL} set to the Datacenter. 
*/ final double wattsSec = host.getPowerModel().getPower(utilizationPercent); final double wattsPerInterval = wattsSec * utilizationHistoryTimeInterval; if (!(utilizationHistoryTimeInterval > 2000)) {// to ignore the ideal host period (considered as host // shut down) totalPowerWattsSec += wattsPerInterval; if (showAllHostUtilizationHistoryEntries || prevUtilizationPercent != utilizationPercent || prevWattsPerInterval != wattsPerInterval) { System.out.printf( "\tTime %8.2f | CPU Utilization %6.2f%% | Power Consumption: %8.0f Watt-Second in %f Seconds\n", entry.getKey(), utilizationPercent * 100, wattsPerInterval, utilizationHistoryTimeInterval); } // commented for minimizing console output } else { HostwithIdealperiod.put(host, utilizationHistoryTimeInterval); } prevUtilizationPercent = utilizationPercent; prevWattsPerInterval = wattsPerInterval; prevTime = entry.getKey(); } System.out.printf( "Total Host %d Power Consumption in %.0f secs: %.2f Watt-Sec (mean of %.2f Watt-Second)\n", host.getId(), simulation.clock(), totalPowerWattsSec, totalPowerWattsSec / simulation.clock()); final double powerWattsSecMean = totalPowerWattsSec / simulation.clock(); // AllhostpowerconsumptioninWattsSec += totalPowerWattsSec / simulation.clock(); System.out.printf("Mean %.2f Watt-Sec for %d usage samples (%.5f KWatt-Hour)\n", powerWattsSecMean, utilizationPercentHistory.size(), PowerAware.wattsSecToKWattsHour(powerWattsSecMean)); System.out.println( "-------------------------------------------------------------------------------------------\n"); } } public void printHostCpuUtilizationAndPowerConsumptionNew(CloudSim simulation, DatacenterBroker broker, List<Host> hostList) { totalFinishedCloudlets = 0; totalexecutiontimeofallCloudlets = 0; List<Cloudlet> newList = broker.getCloudletFinishedList(); newList.forEach((cloudlet) -> { if (cloudlet.isFinished()) { totalFinishedCloudlets += 1; totalexecutiontimeofallCloudlets += cloudlet.getExecStartTime(); } }); new CloudletsTableBuilder(newList).build(); for (final Host host : hostList) { printHostCpuUtilizationAndPowerConsumption(host, simulation); } } public void printHostCpuUtilizationAndPowerConsumptionNewDaas(CloudSim simulation, DatacenterBroker broker, List<Host> hostList) { totalFinishedCloudlets = 0; totalexecutiontimeofallCloudlets = 0; List<Cloudlet> newList = broker.getCloudletFinishedList(); newList.forEach((cloudlet) -> { if (cloudlet.isFinished()) { totalFinishedCloudlets += 1; totalexecutiontimeofallCloudlets += cloudlet.getExecStartTime(); } }); new CloudletsTableBuilder(newList).build(); for (final Host host : hostList) { printHostCpuUtilizationAndPowerConsumptionDaaS(host, simulation); } // printVmsCpuUtilizationAndPowerConsumption(); } /* * private void printVmsCpuUtilizationAndPowerConsumption() { * * * for (Vm vm : vmList) { System.out.println("Vm " + vm.getId() + " at Host " + * vm.getHost().getId() + " CPU Usage and Power Consumption"); * System.out.println( * "----------------------------------------------------------------------------------------------------------------------" * ); double vmPower; //watt-sec double utilizationHistoryTimeInterval, prevTime * = 0; final UtilizationHistory history = vm.getUtilizationHistory(); for * (final double time : history.getHistory().keySet()) { * utilizationHistoryTimeInterval = time - prevTime; vmPower = * history.powerConsumption(time); final double wattsPerInterval = * vmPower*utilizationHistoryTimeInterval; System.out.printf( * "\tTime %8.1f | Host CPU Usage: %6.1f%% | Power Consumption: %8.0f 
Watt-Sec * %6.0f Secs = %10.2f Watt-Sec%n" * , time, history.getHostCpuUtilization(time) *100, vmPower, * utilizationHistoryTimeInterval, wattsPerInterval); prevTime = time; } * System.out.println(); } * * * } */ private void printHostCpuUtilizationAndPowerConsumption(final Host host, CloudSim simulation) { System.out.printf("Host %d CPU utilization and power consumption%n", host.getId()); System.out.println( "----------------------------------------------------------------------------------------------------------------------"); final Map<Double, DoubleSummaryStatistics> utilizationPercentHistory = host.getUtilizationHistory(); double totalWattsSec = 0; double prevUtilizationPercent = -1, prevWattsSec = -1; // time difference from the current to the previous line in the history double utilizationHistoryTimeInterval; double prevTime = 0; for (Map.Entry<Double, DoubleSummaryStatistics> entry : utilizationPercentHistory.entrySet()) { utilizationHistoryTimeInterval = entry.getKey() - prevTime; // The total Host's CPU utilization for the time specified by the map key final double utilizationPercent = entry.getValue().getSum(); final double watts = host.getPowerModel().getPower(utilizationPercent); // Energy consumption in the time interval final double wattsSec = watts * utilizationHistoryTimeInterval; // Energy consumption in the entire simulation time totalWattsSec += wattsSec; // only prints when the next utilization is different from the previous one, or // it's the first one if (showAllHostUtilizationHistoryEntries || prevUtilizationPercent != utilizationPercent || prevWattsSec != wattsSec) { // System.out.printf( // "\tTime %8.1f | Host CPU Usage: %6.1f%% | Power Consumption: %8.0f Watts * // %6.0f Secs = %10.2f Watt-Sec%n", // entry.getKey(), utilizationPercent * 100, watts, // utilizationHistoryTimeInterval, wattsSec); } prevUtilizationPercent = utilizationPercent; prevWattsSec = wattsSec; prevTime = entry.getKey(); } System.out.printf("Total Host %d Power Consumption in %.0f secs: %.0f Watt-Sec (%.5f KWatt-Hour)%n", host.getId(), simulation.clock(), totalWattsSec, PowerAware.wattsSecToKWattsHour(totalWattsSec)); final double powerWattsSecMean = totalWattsSec / simulation.clock(); // AllhostpowerconsumptioninWattsSec += totalWattsSec; AllhostpowerKWattsHour += PowerAware.wattsSecToKWattsHour(totalWattsSec); System.out.printf("Mean %.2f Watt-Sec for %d usage samples (%.5f KWatt-Hour)%n", powerWattsSecMean, utilizationPercentHistory.size(), PowerAware.wattsSecToKWattsHour(powerWattsSecMean)); System.out.printf( "----------------------------------------------------------------------------------------------------------------------%n%n"); } private void printHostCpuUtilizationAndPowerConsumptionDaaS(final Host host, CloudSim simulation) { HostwithIdealperiod = new HashMap<Host, Double>(); System.out.printf("Host %d CPU utilization and power consumption%n", host.getId()); System.out.println( "----------------------------------------------------------------------------------------------------------------------"); final Map<Double, DoubleSummaryStatistics> utilizationPercentHistory = host.getUtilizationHistory(); double totalWattsSec = 0; double prevUtilizationPercent = -1, prevWattsSec = -1; // time difference from the current to the previous line in the history double utilizationHistoryTimeInterval; double prevTime = 0; for (Map.Entry<Double, DoubleSummaryStatistics> entry : utilizationPercentHistory.entrySet()) { utilizationHistoryTimeInterval = entry.getKey() - prevTime; // The total 
Host's CPU utilization for the time specified by the map key final double utilizationPercent = entry.getValue().getSum(); if (!(utilizationHistoryTimeInterval > 2000)) {// to ignore the ideal host period (considered as host shut // down) final double watts = host.getPowerModel().getPower(utilizationPercent); final double wattsSec = watts * utilizationHistoryTimeInterval; totalWattsSec += wattsSec; if (showAllHostUtilizationHistoryEntries || prevUtilizationPercent != utilizationPercent || prevWattsSec != wattsSec) { System.out.printf("\tTime %8.1f | Host CPU Usage: %6.1f%% | Power Consumption: %8.0f Watts *%6.0f Secs = %10.2f Watt-Sec%n", entry.getKey(), utilizationPercent * 100, watts, utilizationHistoryTimeInterval, wattsSec); } prevUtilizationPercent = utilizationPercent; prevWattsSec = wattsSec; prevTime = entry.getKey(); } else { HostwithIdealperiod.put(host, utilizationHistoryTimeInterval); } } // System.out.printf( // "Total Host %d Power Consumption in %.0f secs (Ignoring ideal period) : %.0f Watt-Sec (%.5f KWatt-Hour)%n", // host.getId(), simulation.clock(), totalWattsSec, PowerAware.wattsSecToKWattsHour(totalWattsSec)); double powerWattsSecMean =0; if((HostwithIdealperiod != null) && HostwithIdealperiod.containsKey(host)) { powerWattsSecMean = totalWattsSec / (simulation.clock() - HostwithIdealperiod.get(host)); }else { powerWattsSecMean = totalWattsSec / simulation.clock(); } // AllhostpowerconsumptioninWattsSec += totalWattsSec; AllhostpowerKWattsHour += PowerAware.wattsSecToKWattsHour(totalWattsSec); System.out.printf("Mean %.2f Watt-Sec for %d usage samples (%.5f KWatt-Hour)%n", powerWattsSecMean, utilizationPercentHistory.size(), PowerAware.wattsSecToKWattsHour(powerWattsSecMean)); System.out.printf( "----------------------------------------------------------------------------------------------------------------------%n%n"); } public void setdcClusterpowertozero() { AllhostpowerKWattsHour = 0; // AllhostpowerconsumptioninWattsSec = 0; } public double getdcClusterPowerinKWattsHour() { // AllhostpowerKWattsHour = // PowerAware.wattsSecToKWattsHour(AllhostpowerWattsSecMean); return AllhostpowerKWattsHour; } }
from concurrent import futures
import logging

import grpc

import MessageService_pb2
import MessageService_pb2_grpc


class MessageService(MessageService_pb2_grpc.MessagingServiceServicer):

    def requestReply(self, request, context):
        # Echo the payload back in upper case, keeping the original headers.
        print("Server received Payload: %s and Headers: %s"
              % (request.payload.decode(), request.headers))
        return MessageService_pb2.GrpcMessage(
            payload=str.encode(request.payload.decode().upper()),
            headers=request.headers)


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    MessageService_pb2_grpc.add_MessagingServiceServicer_to_server(MessageService(), server)
    server.add_insecure_port('[::]:50051')
    server.start()
    server.wait_for_termination()


if __name__ == '__main__':
    logging.basicConfig()
    print("gRPC server started on port: 50051 ...")
    serve()
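For completeness, here is a minimal client sketch for exercising this server. The stub name MessagingServiceStub and the GrpcMessage field names are inferred from the generated modules the server imports, not verified against the .proto file, so treat them as assumptions.

import grpc

import MessageService_pb2
import MessageService_pb2_grpc


def main():
    # Connect to the server above; it listens on port 50051 without TLS.
    with grpc.insecure_channel('localhost:50051') as channel:
        # MessagingServiceStub is the assumed name of the generated client stub.
        stub = MessageService_pb2_grpc.MessagingServiceStub(channel)
        # The headers field is left at its default because its exact type
        # is not visible from the server code.
        request = MessageService_pb2.GrpcMessage(payload=b'hello grpc')
        reply = stub.requestReply(request)
        print("Client received:", reply.payload.decode())  # expected: HELLO GRPC


if __name__ == '__main__':
    main()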
World-History, Itihāsa, and Memory: Rabindranath Tagore's Musical Program in the Age of Nationalism. This essay explores the historical and historiographical implications of the interplay of individual, local, national, and global forms of memory in the music of Rabindranath Tagore. Produced at a time of crises in Indian postcolonial subjectivity, this music offers a critique of the Eurocentric discourses of World-history and nationalism by invoking the alternative Indian discourses of itihāsa and samāj. At the same time, Tagore departs from the contemporary Hindu cultural-nationalist, revivalist approach to the tradition of North Indian (Hindustani) classical music and subjects it to a creative, regenerative endeavor by reconnecting the tradition with its original subaltern roots. Skeptical of several kinds of homogenizing impulses, this music offers an alternative idea of universalism that is as much human as it is a specific civilizational concept. Tagore's musical program thus offers an aesthetic blueprint for a more inclusive indigenous modernity in the subcontinent.
A demonize-and-polarize strategy has worked for the Turkish president in the past. But it may ultimately tear his country apart. Since 2003, Turkish President Recep Tayyip Erdogan has been a guiding light for the ascendant global class of anti-elite, nationalist, conservative leaders. And all along, he has played the political underdog, rallying support by demonizing those who oppose him. Just weeks ahead of a constitutional referendum that, if passed, would further consolidate his authoritarian grip on the country, he has even taken to internationalizing this strategy, lashing out at various European leaders as “Nazis” for criticizing him. It may be a reasonable gamble from his perspective; after all, it has brought him success in the past. He has boosted his popularity by relying on a steady supply of domestic adversaries to cast as the latest “enemy of the people.” But this has also polarized his society to such an extent that even the security services, the traditional bulwark of Turkish unity, have become politicized and weakened at a time when the country faces violence on multiple fronts—along with the implosion of Turkey’s relationships in Europe. Amid a divisive campaign ahead of the April 16 referendum, terrorist groups ranging from the Kurdistan Workers Party (PKK) to the Islamic State exploit these divisions to turn Turks even more bitterly against each other. Today, as evidenced by surveys measuring expected support for Erdogan in the referendum, Turkey is about evenly split between pro- and anti-Erdogan factions: the former, a conservative right-wing coalition, believes that Turkey is a paradise; the latter, a loose group of leftists, secularists, liberals, Alevis (liberal Muslims), and Kurds, think they live in hell. For years, Turkey’s vaunted national-security institutions, including the military and the police, had helped the country navigate its perilous political fissures, first in the civil war-like street clashes pitting the left against the right in the 1970s, and later in the full-blown Kurdish nationalist insurgency and terror attacks led by the PKK in the 1990s. However illiberal and brutal their methods, including several coups d’état and police crackdowns, the military and police kept Turkey from imploding. But this has changed since Erdogan’s unprecedented purge of the security services in the aftermath of the failed coup of July 15. At the same time, Turkey’s involvement in the Syrian civil war is having unexpected, destabilizing repercussions back home, which are also severely undermining the country’s ability to withstand societal polarization. Ankara has sought to oust the Assad regime since the outbreak of civil war in Syria in 2011. After sending troops into northern Syria in August 2016, Turkey has also conducted military operations against both ISIS and the Kurdish Party for Democratic Unity (PYD). Accordingly, Ankara now has the distinction of being hated by all major parties in the Syrian civil war—Assad, ISIS, and the Kurds. Syria will no doubt continue trying to punish Turkish citizens for their country’s actions: Turkey has blamed the Assad regime for a 2013 set of car bombings in Reyhanli, in the south of Turkey, that killed 51 people, though the Syrian government denied involvement. Erdogan’s Syria policy is also a driver of ISIS and PKK terror attacks in Turkey. Each time Ankara makes a gain against the PYD in Syria, the PKK targets Turkey. 
And each ISIS attack in Turkey similarly seems to be a direct response to a Turkish attack against jihadists across the border. For instance, the June 2016 ISIS attack on the Istanbul airport, which killed 45 people, occurred just after Ankara’s Syrian-Arab proxies took territory from the terrorist group. The New Year’s Eve attack on an Istanbul nightclub that claimed at least 39 victims came just as Turkey-backed forces launched a campaign to take the strategic Syrian city of al-Bab from ISIS. ISIS and the PKK represent the extremes of Turkey’s two halves, each intent on widening the country’s political chasm—a chasm that, in turn, prevents the country from holding a candid debate on its Syria policy, and that policy’s impact on domestic security. Consider ISIS’s chosen targets: venues like the nightclub, frequented by secular and liberal Turks; foreign tourists, who have been targeted in multiple attacks in Istanbul; Kurds and leftists like those killed in a July 2015 twin suicide bombing in the Turkish border town of Suruc; as well as liberal Muslim sects like the Alevis, a key bloc in the anti-Erdogan opposition and the main victims in the most devastating ISIS attack in Turkey to date, which killed 103 people at a peace rally in Ankara in October 2015. Erdogan’s policies may hasten this trend. At present, as seen in pro-Erdogan media, his government recognizes those killed by the PKK as “martyrs,” granting them special status. He has so far refused to endow those killed by ISIS with such special recognition. Left unchanged, this policy could help create a two-tier taxonomy for deaths from terror attacks, further entrenching Turkey’s divisions along a PKK-ISIS axis. The failed coup, meanwhile, gave Erdogan license to consolidate power over the military and police forces, pulling them further onto the pro-Erdogan side of Turkey’s divisions. The next time the military intervenes in politics in Turkey, it will probably not be to topple Erdogan, but to defend him. The 22-year-old man who assassinated the Russian ambassador in Ankara on December 19, a member of Ankara’s elite police force who came of age in Erdogan’s Turkey, is a sign of the politicization of the police forces as well as the consequences of Erdogan's Syria policy. This was an explicitly political murder: Before pulling the trigger, he declared he was punishing his victim for Moscow’s policy in Syria. For Erdogan, chaos may breed opportunity. If the constitutional referendum passes, it will vastly expand the powers of the office of the president, making Erdogan head of government, head of state, and head of his ruling AKP party, consolidating power over the entire country. (Currently, he is only head of state, and as such lacks de jure control over the government. The country’s constitution also stipulates that the president be a nonpartisan figure, barring him from formally heading the ruling AKP.) But even if he does win, only half of the country will embrace his agenda. The other half will work to undermine it politically—and in the case of the PKK and other leftist militant groups, violently. Turkey is a country divided against itself. If terror attacks, societal polarization, and violence catapult it into an unfortunate civil war, the country will have no one to save it from itself. Soner Cagaptay is a senior fellow at the Washington Institute and the author of The New Sultan: Erdogan and the Crisis of Modern Turkey.
# validate_binary_search_tree.py
class Solution:
    # @param root, a tree node
    # @return a boolean
    def isValidBST(self, root):
        result, _, _ = self.isValid(root)
        return result

    def isValid(self, root):
        # Returns (is_valid, smallest value, largest value) for the subtree rooted at `root`.
        if not root:
            return True, None, None
        result = True
        low = root.val
        high = root.val
        if root.left:
            left, llow, lhigh = self.isValid(root.left)
            if left and lhigh < root.val:
                low = llow
            else:
                return False, None, None
        if root.right:
            right, rlow, rhigh = self.isValid(root.right)
            if right and rlow > root.val:
                high = rhigh
            else:
                return False, None, None
        return result, low, high
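To try the snippet out, you can build a couple of small trees by hand. The TreeNode class below is an assumption (the usual val/left/right node this solution expects) and is not part of the original file.

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right


solution = Solution()

# A valid BST: 5 with children 3 and 8.
valid = TreeNode(5, TreeNode(3), TreeNode(8))
# An invalid tree: 4 as the right child of 8 violates the BST property.
invalid = TreeNode(5, TreeNode(3), TreeNode(8, None, TreeNode(4)))

print(solution.isValidBST(valid))    # True
print(solution.isValidBST(invalid))  # False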
<reponame>capeanalytics/aws-sdk-cpp<gh_stars>0 /* * Copyright 2010-2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. * * Licensed under the Apache License, Version 2.0 (the "License"). * You may not use this file except in compliance with the License. * A copy of the License is located at * * http://aws.amazon.com/apache2.0 * * or in the "license" file accompanying this file. This file is distributed * on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either * express or implied. See the License for the specific language governing * permissions and limitations under the License. */ #pragma once #include <aws/servicecatalog/ServiceCatalog_EXPORTS.h> #include <aws/servicecatalog/ServiceCatalogRequest.h> #include <aws/core/utils/memory/stl/AWSString.h> #include <utility> #include <aws/core/utils/UUID.h> namespace Aws { namespace ServiceCatalog { namespace Model { /** */ class AWS_SERVICECATALOG_API CreateConstraintRequest : public ServiceCatalogRequest { public: CreateConstraintRequest(); Aws::String SerializePayload() const override; Aws::Http::HeaderValueCollection GetRequestSpecificHeaders() const override; /** * <p>The language code to use for this operation. Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline const Aws::String& GetAcceptLanguage() const{ return m_acceptLanguage; } /** * <p>The language code to use for this operation. Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline void SetAcceptLanguage(const Aws::String& value) { m_acceptLanguageHasBeenSet = true; m_acceptLanguage = value; } /** * <p>The language code to use for this operation. Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline void SetAcceptLanguage(Aws::String&& value) { m_acceptLanguageHasBeenSet = true; m_acceptLanguage = std::move(value); } /** * <p>The language code to use for this operation. Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline void SetAcceptLanguage(const char* value) { m_acceptLanguageHasBeenSet = true; m_acceptLanguage.assign(value); } /** * <p>The language code to use for this operation. Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline CreateConstraintRequest& WithAcceptLanguage(const Aws::String& value) { SetAcceptLanguage(value); return *this;} /** * <p>The language code to use for this operation. Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline CreateConstraintRequest& WithAcceptLanguage(Aws::String&& value) { SetAcceptLanguage(std::move(value)); return *this;} /** * <p>The language code to use for this operation. 
Supported language codes are as * follows:</p> <p>"en" (English)</p> <p>"jp" (Japanese)</p> <p>"zh" (Chinese)</p> * <p>If no code is specified, "en" is used as the default.</p> */ inline CreateConstraintRequest& WithAcceptLanguage(const char* value) { SetAcceptLanguage(value); return *this;} /** * <p>The portfolio identifier.</p> */ inline const Aws::String& GetPortfolioId() const{ return m_portfolioId; } /** * <p>The portfolio identifier.</p> */ inline void SetPortfolioId(const Aws::String& value) { m_portfolioIdHasBeenSet = true; m_portfolioId = value; } /** * <p>The portfolio identifier.</p> */ inline void SetPortfolioId(Aws::String&& value) { m_portfolioIdHasBeenSet = true; m_portfolioId = std::move(value); } /** * <p>The portfolio identifier.</p> */ inline void SetPortfolioId(const char* value) { m_portfolioIdHasBeenSet = true; m_portfolioId.assign(value); } /** * <p>The portfolio identifier.</p> */ inline CreateConstraintRequest& WithPortfolioId(const Aws::String& value) { SetPortfolioId(value); return *this;} /** * <p>The portfolio identifier.</p> */ inline CreateConstraintRequest& WithPortfolioId(Aws::String&& value) { SetPortfolioId(std::move(value)); return *this;} /** * <p>The portfolio identifier.</p> */ inline CreateConstraintRequest& WithPortfolioId(const char* value) { SetPortfolioId(value); return *this;} /** * <p>The product identifier.</p> */ inline const Aws::String& GetProductId() const{ return m_productId; } /** * <p>The product identifier.</p> */ inline void SetProductId(const Aws::String& value) { m_productIdHasBeenSet = true; m_productId = value; } /** * <p>The product identifier.</p> */ inline void SetProductId(Aws::String&& value) { m_productIdHasBeenSet = true; m_productId = std::move(value); } /** * <p>The product identifier.</p> */ inline void SetProductId(const char* value) { m_productIdHasBeenSet = true; m_productId.assign(value); } /** * <p>The product identifier.</p> */ inline CreateConstraintRequest& WithProductId(const Aws::String& value) { SetProductId(value); return *this;} /** * <p>The product identifier.</p> */ inline CreateConstraintRequest& WithProductId(Aws::String&& value) { SetProductId(std::move(value)); return *this;} /** * <p>The product identifier.</p> */ inline CreateConstraintRequest& WithProductId(const char* value) { SetProductId(value); return *this;} /** * <p>The constraint parameters.</p> */ inline const Aws::String& GetParameters() const{ return m_parameters; } /** * <p>The constraint parameters.</p> */ inline void SetParameters(const Aws::String& value) { m_parametersHasBeenSet = true; m_parameters = value; } /** * <p>The constraint parameters.</p> */ inline void SetParameters(Aws::String&& value) { m_parametersHasBeenSet = true; m_parameters = std::move(value); } /** * <p>The constraint parameters.</p> */ inline void SetParameters(const char* value) { m_parametersHasBeenSet = true; m_parameters.assign(value); } /** * <p>The constraint parameters.</p> */ inline CreateConstraintRequest& WithParameters(const Aws::String& value) { SetParameters(value); return *this;} /** * <p>The constraint parameters.</p> */ inline CreateConstraintRequest& WithParameters(Aws::String&& value) { SetParameters(std::move(value)); return *this;} /** * <p>The constraint parameters.</p> */ inline CreateConstraintRequest& WithParameters(const char* value) { SetParameters(value); return *this;} /** * <p>The type of the constraint.</p> */ inline const Aws::String& GetType() const{ return m_type; } /** * <p>The type of the constraint.</p> */ inline void 
SetType(const Aws::String& value) { m_typeHasBeenSet = true; m_type = value; } /** * <p>The type of the constraint.</p> */ inline void SetType(Aws::String&& value) { m_typeHasBeenSet = true; m_type = std::move(value); } /** * <p>The type of the constraint.</p> */ inline void SetType(const char* value) { m_typeHasBeenSet = true; m_type.assign(value); } /** * <p>The type of the constraint.</p> */ inline CreateConstraintRequest& WithType(const Aws::String& value) { SetType(value); return *this;} /** * <p>The type of the constraint.</p> */ inline CreateConstraintRequest& WithType(Aws::String&& value) { SetType(std::move(value)); return *this;} /** * <p>The type of the constraint.</p> */ inline CreateConstraintRequest& WithType(const char* value) { SetType(value); return *this;} /** * <p>The text description of the constraint.</p> */ inline const Aws::String& GetDescription() const{ return m_description; } /** * <p>The text description of the constraint.</p> */ inline void SetDescription(const Aws::String& value) { m_descriptionHasBeenSet = true; m_description = value; } /** * <p>The text description of the constraint.</p> */ inline void SetDescription(Aws::String&& value) { m_descriptionHasBeenSet = true; m_description = std::move(value); } /** * <p>The text description of the constraint.</p> */ inline void SetDescription(const char* value) { m_descriptionHasBeenSet = true; m_description.assign(value); } /** * <p>The text description of the constraint.</p> */ inline CreateConstraintRequest& WithDescription(const Aws::String& value) { SetDescription(value); return *this;} /** * <p>The text description of the constraint.</p> */ inline CreateConstraintRequest& WithDescription(Aws::String&& value) { SetDescription(std::move(value)); return *this;} /** * <p>The text description of the constraint.</p> */ inline CreateConstraintRequest& WithDescription(const char* value) { SetDescription(value); return *this;} /** * <p>A token to disambiguate duplicate requests. You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline const Aws::String& GetIdempotencyToken() const{ return m_idempotencyToken; } /** * <p>A token to disambiguate duplicate requests. You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline void SetIdempotencyToken(const Aws::String& value) { m_idempotencyTokenHasBeenSet = true; m_idempotencyToken = value; } /** * <p>A token to disambiguate duplicate requests. You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline void SetIdempotencyToken(Aws::String&& value) { m_idempotencyTokenHasBeenSet = true; m_idempotencyToken = std::move(value); } /** * <p>A token to disambiguate duplicate requests. You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline void SetIdempotencyToken(const char* value) { m_idempotencyTokenHasBeenSet = true; m_idempotencyToken.assign(value); } /** * <p>A token to disambiguate duplicate requests. 
You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline CreateConstraintRequest& WithIdempotencyToken(const Aws::String& value) { SetIdempotencyToken(value); return *this;} /** * <p>A token to disambiguate duplicate requests. You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline CreateConstraintRequest& WithIdempotencyToken(Aws::String&& value) { SetIdempotencyToken(std::move(value)); return *this;} /** * <p>A token to disambiguate duplicate requests. You can create multiple resources * using the same input in multiple requests, provided that you also specify a * different idempotency token for each request.</p> */ inline CreateConstraintRequest& WithIdempotencyToken(const char* value) { SetIdempotencyToken(value); return *this;} private: Aws::String m_acceptLanguage; bool m_acceptLanguageHasBeenSet; Aws::String m_portfolioId; bool m_portfolioIdHasBeenSet; Aws::String m_productId; bool m_productIdHasBeenSet; Aws::String m_parameters; bool m_parametersHasBeenSet; Aws::String m_type; bool m_typeHasBeenSet; Aws::String m_description; bool m_descriptionHasBeenSet; Aws::String m_idempotencyToken; bool m_idempotencyTokenHasBeenSet; }; } // namespace Model } // namespace ServiceCatalog } // namespace Aws
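For illustration, here is a minimal sketch of issuing the same CreateConstraint call from Python with boto3 rather than the C++ SDK shown above. This is an assumption for readability, not part of the source: the portfolio id, product id, and role ARN are placeholders, and the exact JSON expected in Parameters depends on the constraint type.

import json
import uuid

import boto3  # assumes the AWS SDK for Python is installed and credentials are configured

client = boto3.client("servicecatalog")

response = client.create_constraint(
    AcceptLanguage="en",                # "en", "jp", or "zh", per the docs above
    PortfolioId="port-EXAMPLE",         # placeholder portfolio identifier
    ProductId="prod-EXAMPLE",           # placeholder product identifier
    Type="LAUNCH",                      # constraint type; LAUNCH constraints take a RoleArn
    Parameters=json.dumps({"RoleArn": "arn:aws:iam::123456789012:role/ExampleLaunchRole"}),
    Description="Example launch constraint",
    IdempotencyToken=str(uuid.uuid4()), # disambiguates duplicate requests, as documented above
)
print(response["ConstraintDetail"]["ConstraintId"])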
/**
 * Specifies an initial string with which to begin the replacement
 * text (e.g. "ORDER BY"); default is an empty string; may not be
 * <b>null</b>.
 */
protected QueryTransformer initialText(String text) {
    if (text == null) {
        throw new IllegalArgumentException("The initialText may not be null.");
    }
    this.m_initialText = text;
    return this;
}
def _fix_size_recv(self, size):
    # Read exactly `size` bytes; a single recv() may return fewer.
    res = self.con.recv(size)
    while len(res) < size:
        chunk = self.con.recv(size - len(res))
        if not chunk:  # peer closed the connection before `size` bytes arrived
            raise ConnectionError("connection closed before %d bytes were received" % size)
        res += chunk
    return res
package com.jstarcraft.rns.model.benchmark.rating;

import com.jstarcraft.ai.data.DataInstance;
import com.jstarcraft.ai.modem.ModemDefinition;
import com.jstarcraft.rns.model.AbstractModel;

/**
 * Global Average recommender
 *
 * <pre>
 * Based on the LibRec team's implementation
 * </pre>
 *
 * @author Birdy
 */
@ModemDefinition(value = { "meanOfScore" })
public class GlobalAverageModel extends AbstractModel {

    @Override
    protected void doPractice() {
        // No extra training is needed: the prediction only uses the inherited meanScore.
    }

    @Override
    public void predict(DataInstance instance) {
        // Predict the same global mean score for every instance.
        instance.setQuantityMark(meanScore);
    }
}
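In formula terms, this baseline predicts the same value for every user-item pair: the global mean of all observed ratings. This is the standard statement of the global-average baseline, written out here for clarity rather than taken from the source:

$$\hat{r}_{u,i} = \mu = \frac{1}{|R|} \sum_{(u,i) \in R} r_{u,i}$$

where R is the set of observed (user, item) ratings and mu corresponds to the inherited meanScore field used in predict().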
Bioequivalence assessment of a pregabalin capsule and oral solution in fasted healthy volunteers: a randomized, crossover study. OBJECTIVE To determine the oral bioavailability of a pregabalin capsule relative to a pregabalin oral solution. METHODS This was an open-label, randomized, crossover study in 12 healthy volunteers. Pharmacokinetics were compared for a 100-mg capsule and a 100-mg capsule dissolved in water, both administered fasted. RESULTS Mean Cmax and AUC0-∞ for the capsule were within 2% of those for the solution (3.8 vs. 3.7 µg/mL and 26.7 vs. 27.0 µg·h/mL, respectively). The 90% confidence intervals for the capsule-to-solution ratios of Cmax and AUC0-∞ fell entirely within the 80%-125% bioequivalence limits. CONCLUSIONS A 100-mg pregabalin capsule is bioequivalent to a pregabalin oral solution (a 100-mg capsule dissolved in water).
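As a back-of-the-envelope check on the reported means (an illustration, not a value stated in the abstract), the capsule-to-solution ratios implied by those figures are

$$\frac{C_{\max,\text{capsule}}}{C_{\max,\text{solution}}} = \frac{3.8}{3.7} \approx 1.03 \;(103\%), \qquad \frac{\mathrm{AUC}_{0-\infty,\text{capsule}}}{\mathrm{AUC}_{0-\infty,\text{solution}}} = \frac{26.7}{27.0} \approx 0.99 \;(99\%),$$

both comfortably inside the 80%-125% acceptance range. The formal bioequivalence test uses 90% confidence intervals around geometric mean ratios, which the abstract reports as passing; the simple ratio of arithmetic means above is only a rough consistency check.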
def add_edge( self, id1, edge, id2, edge_label=None, locked_with=None, edge_desc=None, full_label=False, ): if edge_desc is None: edge_desc = self.get_props_from_either(id2, id1, 'path_desc')[0] if (edge, id2) not in self._node_to_edges[id1]: self._node_to_edges[id1][(edge, id2)] = { 'label': edge_label, 'examine_desc': edge_desc, 'locked_desc': edge_desc, 'unlocked_desc': edge_desc, 'locked_with': locked_with, 'is_locked': False, 'full_label': full_label, }
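A minimal usage sketch of add_edge. The graph object, node ids, and edge name below are hypothetical; only the method signature and the stored property keys come from the code above.

# Assume `g` is an instance of the graph class shown above, with both nodes already added.
g.add_edge(
    "kitchen",           # id1: source node (hypothetical id)
    "north door",        # edge: name of the connection (hypothetical)
    "hallway",           # id2: destination node (hypothetical id)
    edge_label="a door to the north",
    locked_with="brass key",
)
# The edge is stored under g._node_to_edges["kitchen"][("north door", "hallway")],
# with is_locked initialized to False and the description defaulting to the
# nodes' 'path_desc' property when edge_desc is not supplied.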
The Burial Options Society of Strathcona County is not only pushing the county to create a public cemetery, a service missing from the community, but is also trying to educate the public on current options by holding an info session. The session is scheduled for the morning of Saturday, April 13, from 9:30 a.m. until 11 a.m. at Headquarters Restaurant on Granada Boulevard. The meeting will cover where the cemeteries are located, what services they provide, who the contacts are, and other topics related to finding a final resting place. Residents are often surprised there is no public cemetery in the county, said the non-profit's representative, so the group will use these sessions to educate them about the options available right now. Cemetery maps will also be provided during the coffee chat. The non-profit hopes around 20 people will attend, and no pre-registration is required. The county is currently looking at creating a public cemetery and will hire an outside consultant to do a review and report, as well as an engagement process with residents. Those findings will return to council. In the meantime, Hoffmann said, the group will continue to provide residents with information on their current options, and it will also hold cemetery tours. The next tour is scheduled for September 21. The tours take people on a big yellow school bus to a cemetery, where attendees learn about some of the people buried there. A tour ticket costs around $20, which also pays for a flower to place on a gravesite. All residents are welcome at either the info session this Saturday, which is free, or the cemetery tour in the fall.
def remove_interceptor(self, registration_id): check_not_none(registration_id, "Interceptor registration id should not be None") request = map_remove_interceptor_codec.encode_request(self.name, registration_id) return self._invoke(request, map_remove_interceptor_codec.decode_response)
Left Ventricular Assist Devices: When the Bridge to Transplantation Becomes the Destination Heart failure affects more than 5 million people in the United States. Left ventricular assist devices (LVADs), originally designed as a bridge to heart transplantation, are now implanted as either a bridge to transplantation or as a destination therapy for those individuals who are not transplant candidates. Left ventricular assist devices have improved survival and may improve the quality of life for many individuals. However, individuals who originally had LVADs implanted as a bridge to transplantation may be delisted because of changes in health status and, like those with LVADs as destination therapy, will live with this therapy until the end of life. Decision making can become more complicated when adverse effects or comorbid health conditions cause a significant decline in health status. Challenges related to informed consent, advance care planning, quality of life, and end-of-life care in this population will be discussed. Clinical interventions will be addressed to improve care in this growing population.
/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package edu.harvard.iq.dataverse;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

/**
 * @author sarahferry
 * Modeled after PasswordValidator and EMailValidator
 */
public class UserNameValidator implements ConstraintValidator<ValidateUserName, String> {

    @Override
    public void initialize(ValidateUserName constraintAnnotation) {
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        return isUserNameValid(value, context);
    }

    /**
     * Here we will validate the username
     *
     * @param username
     * @return boolean
     */
    public static boolean isUserNameValid(final String username, ConstraintValidatorContext context) {
        if (username == null) {
            return false;
        }
        //TODO: What other characters do we need to support?
        String validCharacters = "[a-zA-Z0-9\\_\\-\\.";
        /*
         * if you would like to support accents or chinese characters, uncomment this
         *
         * //support accents
         * validCharacters += "À-ÿ\\u00C0-\\u017F";
         *
         * //support chinese characters
         * validCharacters += "\\x{4e00}-\\x{9fa5}";
         */
        //end
        validCharacters += "]";
        validCharacters += "{2,60}"; //must be between 2 and 60 characters for user name
        Pattern p = Pattern.compile(validCharacters);
        Matcher m = p.matcher(username);
        return m.matches();
    }
}
<reponame>metux/chromium-deb // Copyright 2017 The Chromium Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. #ifndef WorkerFetchContext_h #define WorkerFetchContext_h #include <memory> #include "core/CoreExport.h" #include "core/loader/BaseFetchContext.h" #include "platform/wtf/Forward.h" namespace blink { class ResourceFetcher; class SubresourceFilter; class WebTaskRunner; class WebURLLoader; class WebWorkerFetchContext; class WorkerClients; class WorkerOrWorkletGlobalScope; CORE_EXPORT void ProvideWorkerFetchContextToWorker( WorkerClients*, std::unique_ptr<WebWorkerFetchContext>); // The WorkerFetchContext is a FetchContext for workers (dedicated, shared and // service workers) and threaded worklets (animation and audio worklets). This // class is used only when off-main-thread-fetch is enabled, and is still under // development. // TODO(horo): Implement all methods of FetchContext. crbug.com/443374 class WorkerFetchContext final : public BaseFetchContext { public: static WorkerFetchContext* Create(WorkerOrWorkletGlobalScope&); virtual ~WorkerFetchContext(); RefPtr<WebTaskRunner> GetTaskRunner() { return loading_task_runner_; } // BaseFetchContext implementation: KURL GetFirstPartyForCookies() const override; bool AllowScriptFromSource(const KURL&) const override; SubresourceFilter* GetSubresourceFilter() const override; bool ShouldBlockRequestByInspector(const ResourceRequest&) const override; void DispatchDidBlockRequest(const ResourceRequest&, const FetchInitiatorInfo&, ResourceRequestBlockedReason) const override; bool ShouldBypassMainWorldCSP() const override; bool IsSVGImageChromeClient() const override; void CountUsage(WebFeature) const override; void CountDeprecation(WebFeature) const override; bool ShouldBlockFetchByMixedContentCheck( const ResourceRequest&, const KURL&, SecurityViolationReportingPolicy) const override; bool ShouldBlockFetchAsCredentialedSubresource(const ResourceRequest&, const KURL&) const override; ReferrerPolicy GetReferrerPolicy() const override; String GetOutgoingReferrer() const override; const KURL& Url() const override; const SecurityOrigin* GetParentSecurityOrigin() const override; Optional<WebAddressSpace> GetAddressSpace() const override; const ContentSecurityPolicy* GetContentSecurityPolicy() const override; void AddConsoleMessage(ConsoleMessage*) const override; // FetchContext implementation: SecurityOrigin* GetSecurityOrigin() const override; std::unique_ptr<WebURLLoader> CreateURLLoader( const ResourceRequest&) override; void PrepareRequest(ResourceRequest&, RedirectType) override; bool IsControlledByServiceWorker() const override; int ApplicationCacheHostID() const override; void AddAdditionalRequestHeaders(ResourceRequest&, FetchResourceType) override; void DispatchWillSendRequest(unsigned long, ResourceRequest&, const ResourceResponse&, const FetchInitiatorInfo&) override; void DispatchDidReceiveResponse(unsigned long identifier, const ResourceResponse&, WebURLRequest::FrameType, WebURLRequest::RequestContext, Resource*, ResourceResponseType) override; void DispatchDidReceiveData(unsigned long identifier, const char* data, int dataLength) override; void DispatchDidReceiveEncodedData(unsigned long identifier, int encodedDataLength) override; void DispatchDidFinishLoading(unsigned long identifier, double finishTime, int64_t encodedDataLength, int64_t decodedBodyLength) override; void DispatchDidFail(unsigned long identifier, const ResourceError&, int64_t 
encodedDataLength, bool isInternalRequest) override; void AddResourceTiming(const ResourceTimingInfo&) override; void PopulateResourceRequest(const KURL&, Resource::Type, const ClientHintsPreferences&, const FetchParameters::ResourceWidth&, const ResourceLoaderOptions&, SecurityViolationReportingPolicy, ResourceRequest&) override; void SetFirstPartyCookieAndRequestorOrigin(ResourceRequest&) override; DECLARE_VIRTUAL_TRACE(); private: WorkerFetchContext(WorkerOrWorkletGlobalScope&, std::unique_ptr<WebWorkerFetchContext>); Member<WorkerOrWorkletGlobalScope> global_scope_; std::unique_ptr<WebWorkerFetchContext> web_context_; Member<SubresourceFilter> subresource_filter_; Member<ResourceFetcher> resource_fetcher_; RefPtr<WebTaskRunner> loading_task_runner_; }; } // namespace blink #endif // WorkerFetchContext_h
package seda.apps.Haboob;

public interface HaboobConst {
    public static final String HTTP_RECV_STAGE = "HttpRecv";
    public static final String CACHE_STAGE = "CacheStage";
    public static final String BOTTLENECK_STAGE = "BottleneckStage";
    public static final String HTTP_SEND_STAGE = "HttpSend";
    public static final String DYNAMIC_HTTP_STAGE = "DynamicHttp";
}
Investigation on Long Term Operation of Thermochemical Heat Storage with MgO-Based Composite Honeycombs. Efficient storage and utilization of industrial waste heat can contribute to reductions in CO2 emissions and primary energy consumption. Thermochemical heat storage uses a chemical and/or an adsorption-desorption reaction to store heat without heat loss. This study aims to assess the long-term operational feasibility of honeycombs made from a thermochemical-material-based composite, for which a new thermochemical heat storage unit and peripheral system were prepared. The evaluation covered three aspects: the compressive strength of the honeycomb, and the heat charging and discharging capabilities of the thermochemical heat storage. The compressive strength exceeded 1 MPa, which is sufficient for safe use. The thermal performance was also assessed in a variety of ways over 100 cycles, 550 h in total. By introducing a new process, the amount of thermochemical-only charging was successfully measured for the first time. Furthermore, the heat charging capability was measured at 55.8% at the end of the experiment. Finally, the heat discharging capability decreased up to 60 cycles, with no further degradation thereafter. This degradation was caused by charging at too high a temperature (550 °C). In comparative tests using a lower temperature (450 °C), the performance degradation slowed, which indicates that finding the optimal charging temperature is important.
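For context, MgO-based thermochemical storage is commonly built on the reversible magnesium hydroxide/magnesium oxide reaction. This is a general description of the material class, not a detail stated in the abstract: heat charging drives the endothermic dehydration, and discharging recovers heat through the exothermic hydration, with a reaction enthalpy commonly quoted around 81 kJ per mole of MgO.

$$\text{Charging (heat in):}\quad \mathrm{Mg(OH)_2(s)} \rightarrow \mathrm{MgO(s)} + \mathrm{H_2O(g)}$$
$$\text{Discharging (heat out):}\quad \mathrm{MgO(s)} + \mathrm{H_2O(g)} \rightarrow \mathrm{Mg(OH)_2(s)}, \qquad |\Delta H| \approx 81\ \mathrm{kJ\,mol^{-1}}$$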
Cutting out animal products from your diet can be difficult when you fancy going out for a meal or grabbing a snack with your coffee. But Harrogate has much more to offer vegans than meets the eye. In no particular order, here are ten of the best restaurants and cafes for plant-based yummy-ness in the town. Zizzi has a whole separate menu just for vegans, and it isn't all just hearty salads. The chain offers its customers delicious vegan pizzas and pasta dishes as well as a range of vegan desserts! This independent business has been leading the way for vegan goodness for years. Whether you're looking for a quick snack from the shop on James Street or a proper sit-down meal at the restaurant on Station Parade, they've got you covered. Wagamama has been bringing us a variety of oriental foods for a long time, but what some customers may not have realised is that many of the dishes are by their nature suitable for vegans - just ask your waiter for more info! In the market for a pizza? Can't eat cheese? No problem. This lovely indie has got you covered; they have great deals on their pizzas and plenty for vegans and veggies to choose from.
// eslint-disable-next-line eslint-comments/disable-enable-pair
/* eslint-disable @typescript-eslint/no-var-requires */
import { Command } from '../../../lib/command'
// eslint-disable-next-line quotes
import debugFn = require(`debug`)

const debug = debugFn(`dev`)

export default class Dev extends Command {
  static description = `workflow development`
  static hidden = true

  static args = [
    {
      name: `file`,
      required: true,
      description: `nodejs app entry point`
    }
  ]

  async run(): Promise<void> {
    const { args } = this.parse(Dev)
    debug(args)

    const nodemon = require(`nodemon`)
    const ngrok = require(`ngrok`)
    const port = 3000

    nodemon({
      script: args.file,
      ext: `js`,
    })

    let url: string
    nodemon.on(`start`, async () => {
      if (!url) {
        url = await ngrok.connect({ port })
        console.log(`Server now available at ${url}`)
      }
    })
    nodemon.on(`quit`, async () => {
      await ngrok.kill()
    })
  }
}
import re
import sys
import logging

log = logging.getLogger(__name__)  # the original module defines `log` elsewhere; declared here so the snippet is self-contained


def extract_geonames_id(url, rex):
    # Extract a numeric GeoNames id from a URL, using the supplied regular
    # expression `rex` or, if none is given, a set of default patterns.
    default_rex = [
        r"https?://www\.geonames\.org/([0-9]+)/.+$",
        r"https?://sws\.geonames\.org/([0-9]+)/about\.rdf$",
    ]

    def match_geonames_id(url, rex):
        log.debug("match_geonames_id: url %s, regexp /%s/" % (url, rex))
        geo_id = None
        try:
            matchobject = re.match(rex, url)
            if matchobject:
                geo_id = matchobject.group(1)
        except IndexError:
            geo_id = None
        except re.error as e:
            print("Invalid regular expression '%s' (%s)" % (rex, e), file=sys.stderr)
            raise
        return geo_id

    if rex:
        geo_id = match_geonames_id(url, rex)
    else:
        for rex in default_rex:
            geo_id = match_geonames_id(url, rex)
            if geo_id:
                break
    return geo_id
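A quick usage sketch. The URLs below are illustrative, and the function is assumed to be importable as a standalone helper; only extract_geonames_id itself comes from the code above.

# Using the built-in default patterns (rex=None):
print(extract_geonames_id("https://www.geonames.org/2643743/london.html", None))
# -> "2643743"
print(extract_geonames_id("http://sws.geonames.org/2950159/about.rdf", None))
# -> "2950159"

# Supplying a custom pattern with one capturing group (hypothetical URL scheme):
custom = r"https?://example\.org/geonames/([0-9]+)$"
print(extract_geonames_id("https://example.org/geonames/42", custom))
# -> "42"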
import glob
import importlib
from os.path import dirname, isfile, basename

from fastapi import FastAPI


def exec_initializers(app: FastAPI):
    py_files = glob.glob(dirname(__file__) + "/*.py")
    for f in py_files:
        if isfile(f) and not f.endswith('__init__.py'):
            module_name = basename(f)[:-3]
            module_path = '%s.%s' % (__package__, module_name)
            module = importlib.import_module(module_path)
            if hasattr(module, 'init'):
                func = getattr(module, 'init')
                func(app)
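A sketch of how this loader is meant to be used. The package layout and module names below are hypothetical; only exec_initializers and the init(app) convention come from the code above. Every .py file placed alongside the loader module that defines an init(app) function is imported and called once at startup.

# initializers/cors.py  (hypothetical sibling module)
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

def init(app: FastAPI):
    # Picked up automatically by exec_initializers because it defines init(app).
    app.add_middleware(CORSMiddleware, allow_origins=["*"])

# main.py  (hypothetical application entry point)
from fastapi import FastAPI
from initializers import exec_initializers  # assumes the loader lives in initializers/__init__.py

app = FastAPI()
exec_initializers(app)  # imports initializers/*.py and calls each module's init(app)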
Marissa Alexander, Jailed for 3 Years, Speaks Out on Intimate Partner Violence & Building Movements | Democracy Now!

Marissa Alexander was initially sentenced to 20 years in prison for firing a single warning shot against her abusive husband in 2010. Her attorneys unsuccessfully tried to use Florida's "stand your ground" law in her defense, saying she feared for her life when she fired the shot. Earlier this year, she was freed from house arrest after being jailed for three years and serving two years of court-ordered home confinement.

AMY GOODMAN: We turn now to the case of Marissa Alexander, the African-American mother of three who was sentenced to 20 years in prison for firing what she maintains was a warning shot at her abusive husband in 2010. She attempted to use Florida's "stand your ground" law in her defense, the law that was made famous when white vigilante George Zimmerman successfully used it in his defense after he shot and killed unarmed African-American teenager Trayvon Martin. But in March 2012, the jury rejected Alexander's use of "stand your ground" and convicted her after only 12 minutes of deliberation. She was sentenced to 20 years behind bars under a Florida law known as "10-20-Life" that carries a mandatory minimum for certain gun crimes regardless of the circumstance. Alexander won an appeal for a new trial and later accepted a plea deal that capped her sentence to three years of time served.

AMY GOODMAN: Earlier this year, Marissa Alexander was freed from house arrest after being jailed for three years and serving two years of court-ordered home confinement. We go now to Jacksonville, Florida, to speak with Marissa Alexander. Welcome to Democracy Now!, Marissa. Talk about how it feels to be free now and what these last years were like, when you weren't in prison, but you were under house arrest.

MARISSA ALEXANDER: Thank you for having me, Amy. It's been—you see I'm smiling. I'm so excited to be able to be a part of the movement that's going on. I really wanted to hit the ground running. I wanted to be able to travel. I wanted to spend, you know, time with my kids outside of my home. So, that's been great. And basically, the last two years, I really want to spend that time in building what I'm doing right now, which is my nonprofit. I pretty much completed a large portion of my book. So, I really wanted to use that time to not be bitter, but be stronger and hit the ground running and just make an impact.
NERMEEN SHAIKH: Well, your daughter was only nine days old when this altercation occurred between you and your ex-husband in 2010. NERMEEN SHAIKH: So, can you tell us about her now, how old she is, and whether you’re able to spend time with her? MARISSA ALEXANDER: Right. She will be turning seven in July. So, she’s getting to be a big little girl. And we do have—we are divorced, and we have shared custody of her, so we, you know, split that time. And we just have a love fest. You know, for me to not be able to see her until she was three and for us to have the bond that we have now is just—it’s, you know, a beautiful thing. AMY GOODMAN: In 2013, we spoke with civil rights advocate and attorney Michelle Alexander, author the best-selling book The New Jim Crow: Mass Incarceration in the Age of Colorblindness. She talked about what role mandatory minimum sentencing played in your case, Marissa. MICHELLE ALEXANDER: She received a 20-year sentence because of harsh mandatory minimum sentences, sentences that exist in Florida and in states nationwide. Mandatory minimum sentences give no discretion to judges about the amount of time that the person should receive once a guilty verdict is rendered. Harsh mandatory minimum sentences for drug offenses were passed by Congress in the 1980s as part of the war on drugs and the “get tough” movement, sentences that have helped to fuel our nation’s prison boom and have also greatly aggravated racial disparities, particularly in the application of mandatory minimum sentences for crack cocaine. AMY GOODMAN: So, that’s Michelle Alexander. We’re speaking with Marissa Alexander—no relation—who is speaking to us from Jacksonville. And, Marissa, if you can go back to 2010—yes, I want you to respond to what Michelle Alexander said, but back to 2010, and describe to us what happened, and then, with the killing of Trayvon Martin by George Zimmerman, the vigilante, who was acquitted, unlike you, how that changed your case? MARISSA ALEXANDER: Well, you know, for me, that particular day, it was a reaction to an action. So, you know, it was a matter of a fight or flight. You know, I felt like I did the best I could. I maintain that. I still don’t believe what I did was wrong. The kids were not present. I would have never done that. That was not an issue. And that was—I believe that came out in my first trial. As far as it played forward with George Zimmerman, that was around the same time, because, by this time, you know, his case was going along, but mine hadn’t made it to the media. You know, his was—originally, he was given immunity at the crime scene but then, later on, was charged. I was charged first and then had to have a hearing. So there’s the difference between, you know, he and I, our cases. So that was the difference. And then, essentially, it’s in the—it’s in the jury instructions, regardless. So, you know, the difference in our cases is the fact that, you know, you had a child that was killed and was not present, and so you had no testimony. In my particular case, you had witnesses, or victims, if you will, that had the opportunity to change their stories, as was done in my case. So, that’s where some of that was a little bit different. But, essentially, our cases—you know, the only similarity is the fact that “stand your ground” was a commonality. Other than that, there were different circumstances and, obviously, different scenarios. 
AMY GOODMAN: And, of course, you had Angela Corey, who was the special prosecutor in both your case as well as the case of George Zimmerman. And at a news conference in 2012, a reporter asked Corey about that controversial law, known as “stand your ground” or what some call “right to shoot first.” Here’s how Corey responded. ANGELA COREY: If “stand your ground” becomes an issue, we fight it, if we believe it’s the right thing to do. So, if it becomes an issue in this case, we will fight that affirmative defense. REPORTER: How would you say “stand your ground” has affected your job since it became law? ANGELA COREY: My prosecutors—and a lot of them are here, and I’m so proud of them—they have worked tirelessly running this office while we’ve been working on this case. They fight these “stand your ground” motions. Mr. Moody just finished a four-day full “stand your ground” motion on another case. We fight hard. Some of them, we won, and we’ve had to appeal them—or the defense has appealed, and we’ve won it on appeal. Some, we fought hard, and the judge ruled against us. That’s happening to prosecutors all over the state. It is the law of the state of Florida, and it will be applied. REPORTER: But you think it’s invoked too much? ANGELA COREY: Justifiable use of deadly force, as we all knew it before “stand your ground” was issued, was still a tough affirmative defense to overcome, but we still fight these cases hard. NERMEEN SHAIKH: So, that was Angela Corey, the special prosecutor in both your case, Marissa, as well as the case of George Zimmerman. So, could you comment on what she said and also say what you think the explanation is for how “stand your ground”—the “stand your ground” law was applied in your case and in Zimmerman’s case? MARISSA ALEXANDER: OK. So I can just tell you from my perspective. I went into it, and I was always trained in the castle doctrine. So when I did what I did, I had no idea about “stand your ground.” I felt I did what I was taught, which was a duty to retreat before you use lethal force. I was inside my own home. I had a concealed weapons license, permit, and I also had a restraining order for—at that time. So, that’s in and of itself. Given that that’s—I’ve seen—I haven’t—I mean, from my experience on the inside, I can tell you this: A lot of times the defendants do not get the opportunity in cases where it is truly self-defense to even utilize that, especially defendants that are black. That’s what I have seen. And in most cases, they automatically have a “10-20-Life” placed on their folder. They have the enhancement. This legislation that was passed, which is—oddly enough, doesn’t give judges the discretion, what it does give is the prosecutors discretion. And I think that’s backwards, because it obviously is advantageous to the prosecutor to use it. So, I believe to put a mutual party in there that does give them discretion to look at it and say, “OK, I feel that it fits into a minimum mandatory situation,” as opposed to it not, and they’re supposed to be a mutual party, what it does is it gives the prosecution that advantageous advantage. And that just, to me, doesn’t work well for the defense, in most cases. AMY GOODMAN: Marissa Alexander, during the time of your house arrest, you had to pay for the costs of the monitoring. Also, you had a monthly drug test. But over the course of the two years, you paid around $10,000? 
Can you talk about the significance of this and also what it meant, in January, when you had your ankle bracelet, your monitor, taken off, what you did, and what it meant in terms of your freedom? MARISSA ALEXANDER: OK. You know, I was fortunate that I had support and the means to be able to pay for my ankle monitor. You’re talking about the cost of supervision. You’re talking about the cost of monitoring. You’re talking about drug tests, that I only got one the entire time I was on it, but I paid for it every month. You’re talking about it was being—it was taxed. I had court costs. And, essentially, I did not have an idea about how much I was supposed to pay towards the end, so I ended up having to come up with a couple thousand dollars in a short period of time. You’re talking about people who are—it’s hard for defendants, who come out, to get a job, if you’re on regular probation, with the fees where you don’t have an ankle monitor and pay for that, let alone have the ankle monitor and have to pay for those, which are obviously a little bit higher, and then try to obtain employment and pay those fees. And a lot of the time, they’re not able to do that and are subject to being violated on a technical violation because they can’t make the fees. It’s hard for the convicted to be able to get jobs that will allow for them to pay—you know, pay for those services—I mean, pay for that actual ankle monitor. Now, when I got my release, it was very important. My sister was pregnant. She had just got a new place, so that was the first thing I wanted to do, was spend time with her. So, me, my mom and her, we met, and we had a glass of wine. I took my baby girl to breakfast that following morning. We sat down at Cracker Barrel and had breakfast. And I took my twins to dinner that night. So it was important for me to just spend time outside of my home with the people that I love the most. And then, the following day, I had a celebration. NERMEEN SHAIKH: Well, Marissa Alexander, very quickly, before we conclude, I mean, it’s extraordinary what good use you made of your time under house arrest. I mean, first, you mentioned earlier that you completed or almost completed a book manuscript, and then, also, the Marissa Alexander Justice Project that you began. Can you talk about both those projects? MARISSA ALEXANDER: OK. So, the first year, I didn’t want to lose sight of what I had experienced. So, there was a lady who had written a book. She’s an older lady, much wiser, and she kind of experienced some of the things that I did. So she came in, and it was very cathartic for me to just be able to, you know, kind of regurgitate all of the experiences that I had, so that I would not lose them. So I did that for about six to eight months. And honestly, it was just emotionally taxing. And so I got that to a certain point. And then the next order of business, because I was doing paralegal school—I was bored out of my mind. And I felt like the best impact for me to do was to start my own nonprofit, not to trump what was going on, but to add to what’s already existing. And so, my nonprofit really focuses on the things that I feel like affected my case the most and just what affects our community as a whole. You’re talking about domestic violence and intimate partner violence, and the impact that it plays in just the homes and social norms and what people experience. And then you’re talking about criminal policy reform, those things, because my case did assist in changing some laws. 
The fact that I was given a minimum mandatory sentence, that alone has increased—like Michelle Alexander—mass incarceration. You’re talking about juveniles. I mean, these kids are experiencing things that, I mean, most of us have never had to experience. They don’t have a chance from the start. So I just feel like, you know, my nonprofit really wanted to touch on all of these things and be able to, you know, from the—I’ve been in it from the trenches, and now I’m looking at it from a bird’s-eye view. And I just believe that we have an opportunity to do better, and we have services that are available, but it’s not connected to the community. The community don’t know about the services. So, my question is: Where is the disconnect? So, I have answers for that, but, you know, that’s probably for another time. AMY GOODMAN: Well, Marissa Alexander, I want to thank you so much for joining us, initially sentenced to 20 years in prison for firing a single warning shot against her abusive husband into the ceiling in 2010. Her attorneys unsuccessfully tried to use Florida’s “stand your ground” law in her defense, saying she feared for her life when she fired the shot. In January, she was freed from house arrest after being jailed for three years and serving two years of court-ordered home confinement. Congratulations, before, being able to walk into the studio freely and leave it freely. AMY GOODMAN: This is Democracy Now! When we come back, we’re going to speak with a Syrian refugee who just graduated from college here in Tampa, Florida, and he gave the commencement address. Stay with us.
from django.apps import AppConfig


class TimerConfig(AppConfig):
    name = 'timer'
/**
 * Class : Supplier
 * Usage : Base supplier for all implemented suppliers.
 *         A Supplier removes participants from the candidate set who do not
 *         match the filter conditions, and produces a candidate set to which
 *         the workitem can be offered.
 */
public abstract class Supplier extends Selector {

    /**
     * Perform the supply action on the candidate set to select a set for offering.
     *
     * @param candidateSet candidate participant set
     * @param workitem     resourcing workitem
     * @return filtered participant set
     */
    public abstract HashSet<ParticipantContext> performSupply(Set<ParticipantContext> candidateSet, WorkitemContext workitem);
}
/** This method clears the dialog and hides it. */
private void clearAndHide() {
    userField.setText(null);
    passField.setText(null);
    passField2.setText(null);
    setVisible(false);
    dispose();
}
In a showdown between the Maryland Scholastic Association B Conference's two best baseball teams yesterday, Boys' Latin (10-3, 7-2) downed host Mount Carmel (9-6, 6-3), 11-3, to clinch first place in a game that was shortened to six innings because of rain. Lakers sophomore right-hander Myron Hayes pitched the entire game, striking out four and walking two. Senior center fielder Phil Booker was 2-for-3, including a leadoff double in the first inning and a triple in the fourth. Senior catcher Jay Arminger was 3-for-4 with three RBI.
// -*- Mode: C++; -*- // Package : omniORBpy // omniORBpy.h Created on: 2002/05/25 // Author : <NAME> (dgrisby) // // Copyright (C) 2002 <NAME> // // This file is part of the omniORBpy library // // The omniORBpy library is free software; you can redistribute it // and/or modify it under the terms of the GNU Lesser General // Public License as published by the Free Software Foundation; // either version 2.1 of the License, or (at your option) any later // version. // // This library is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Lesser General Public License for more details. // // You should have received a copy of the GNU Lesser General Public // License along with this library; if not, write to the Free // Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, // MA 02111-1307, USA // // Description: // Header for the C++ API to omniORBpy #ifndef _omniORBpy_h_ #define _omniORBpy_h_ // The file including this file must include the correct Python.h // header. This file does not include it, to avoid difficulties with // its name. #include <omniORB4/CORBA.h> // The omniORBpy C++ API consists of a singleton structure containing // function pointers. A pointer to the API struct is stored as a // PyCObject in the _omnipy module with the name API. Access it with // code like: // // PyObject* omnipy = PyImport_ImportModule((char*)"_omnipy"); // PyObject* pyapi = PyObject_GetAttrString(omnipy, (char*)"API"); // omniORBpyAPI* api = (omniORBpyAPI*)PyCObject_AsVoidPtr(pyapi); // Py_DECREF(pyapi); // // Obviously, you MUST NOT modify the function pointers! // // This arrangement of things means you do not have to link to the // _omnipymodule library to be able to use the API. struct omniORBpyAPI { PyObject* (*cxxObjRefToPyObjRef)(const CORBA::Object_ptr cxx_obj, CORBA::Boolean hold_lock); // Convert a C++ object reference to a Python object reference. // If <hold_lock> is true, caller holds the Python interpreter lock. CORBA::Object_ptr (*pyObjRefToCxxObjRef)(PyObject* py_obj, CORBA::Boolean hold_lock); // Convert a Python object reference to a C++ object reference. // Raises BAD_PARAM if the Python object is not an object reference. // If <hold_lock> is true, caller holds the Python interpreter lock. PyObject* (*handleCxxSystemException)(const CORBA::SystemException& ex); // Sets the Python exception state to reflect the given C++ system // exception. Always returns NULL. The caller must hold the Python // interpreter lock. void (*handlePythonSystemException)(); // Handles the current Python exception. An exception must have // occurred. Handles all system exceptions and omniORB. // LocationForward; all other exceptions print a traceback and raise // CORBA::UNKNOWN. The caller must hold the Python interpreter lock. void (*marshalPyObject)(cdrStream& stream, PyObject* desc, PyObject* obj, CORBA::Boolean hold_lock); // Marshal the Python object into the stream, based on the type // descriptor desc. PyObject* (*unmarshalPyObject)(cdrStream& stream, PyObject* desc, CORBA::Boolean hold_lock); // Unmarshal a Python object from the stream, based on type // descriptor desc. void (*marshalTypeDesc)(cdrStream& stream, PyObject* desc, CORBA::Boolean hold_lock); // Marshal the type descriptor into the stream as a TypeCode. PyObject* (*unmarshalTypeDesc)(cdrStream& stream, CORBA::Boolean hold_lock); // Unmarshal a TypeCode from the stream, giving a type descriptor. 
omniORBpyAPI(); // Constructor for the singleton. Sets up the function pointers. }; // Macros to catch all C++ system exceptions and convert to Python // exceptions. Use like // // try { // ... // } // OMNIORBPY_CATCH_AND_HANDLE_SYSTEM_EXCEPTIONS // // The macros assume that api is a pointer to the omniORBpyAPI // structure above. #ifdef HAS_Cplusplus_catch_exception_by_base #define OMNIORBPY_CATCH_AND_HANDLE_SYSTEM_EXCEPTIONS \ catch (const CORBA::SystemException& ex) { \ return api->handleCxxSystemException(ex); \ } #else #define OMNIORBPY_CATCH_AND_HANDLE_SPECIFIED_EXCEPTION(exc) \ catch (const CORBA::exc& ex) { \ return api->handleCxxSystemException(ex); \ } #define OMNIORBPY_CATCH_AND_HANDLE_SYSTEM_EXCEPTIONS \ OMNIORB_FOR_EACH_SYS_EXCEPTION(OMNIORBPY_CATCH_AND_HANDLE_SPECIFIED_EXCEPTION) #endif // Extensions to omniORB / omniORBpy may create their own pseudo // object reference types. To provide a Python mapping for these, a // function must be provided that takes a CORBA::Object_ptr and // returns a suitable PyObject. Functions are registered by appending // PyCObjects to the list _omnipy.pseudoFns. The CObjects must contain // pointers to functions with this signature: typedef PyObject* (*omniORBpyPseudoFn)(const CORBA::Object_ptr); #endif // _omniORBpy_h_
Integrating Principles of Safety Culture and Just Culture Into Nursing Homes: Lessons From the Pandemic

Abstract

Decades of concerns about the quality of care provided by nursing homes have led state and federal agencies to create layers of regulations and penalties. As such, regulatory efforts to improve nursing home care have largely focused on the identification of deficiencies and assignment of sanctions. The current regulatory strategy often places nursing home teams and government agencies at odds, hindering their ability to build a culture of safety in nursing homes that is foundational to health care quality. Imbuing safety culture into nursing homes will require nursing homes and regulatory agencies to acknowledge the high-risk nature of post-acute and long-term care settings, embrace just culture, and engage nursing home staff and stakeholders in actions that are supported by evidence-based best practices. The response to the COVID-19 pandemic prompted some of these actions, leading to changes in nursing survey and certification processes as well as deployment of strike teams to support nursing homes in crisis. These actions, coupled with investments in public health that include funds earmarked for nursing homes, could become the initial phases of an intentional renovation of the existing regulatory oversight from one that is largely punitive to one that is rooted in safety culture and proactively designed to achieve meaningful and sustained improvements in the quality of care and life for nursing home residents. Published by Elsevier Inc. on behalf of AMDA - The Society for Post-Acute and Long-Term Care Medicine.

Nursing homes have evolved from residences for the aged and infirm to dynamic health care settings that serve a diverse population with an array of medical complexity and a broad spectrum of functional disabilities. During this evolution, concerns about the quality of care prompted many regulatory changes. In 1986, the Institute of Medicine reported widespread abuse, neglect, and inadequacies in care within nursing homes.
This inspired the Nursing Home Reform Act, passed as part of the Omnibus Budget Reconciliation Act (OBRA), which set minimum standards for the rights and care of nursing home residents in the United States. OBRA stipulated that in order to receive reimbursement from the Centers for Medicare & Medicaid Services (CMS), nursing homes must undergo annual certification of compliance with federal regulations, manifested as timed inspections or surveys. The Nursing Home Reform Act also emphasized the need for continuous, rather than cyclical, compliance. More recently, reports of hazards such as preventable hospitalizations, falls, health careeassociated infections and inappropriate antipsychotic prescribing have prompted additional regulatory requirements focused on the identification of deficiencies and assignment of consequences for noncompliance. Many stakeholders, including AMDAeThe Society for Post-Acute and Long-Term Care Medicine, have also worked to improve the quality of care in nursing homes. Despite the many layers of regulations and stakeholder efforts, we have not consistently achieved high-quality care in nursing homes. In 2019, surveyors identified deficiencies in more than 90% of nursing homes; 23% of those were for actual harm or immediate jeopardy to residents. 1 The COVID-19 pandemic brought to light that many nursing homes were poorly prepared and dramatically underresourced to provide safe and quality care to their residents during times of crisis. The fallout from the COVID-19 pandemic's effects on nursing homes has renewed discussions about a redesign of the system of nursing home care and its oversight. Some have suggested that the current nursing home regulatory process has not inspired sustainable improvements because of a "carrot and stick" approach rather than a positive behavioral approach. 2 To address this, Nazir et al 3 proposed changes to the regulatory system that account for superior performance and use behavioral economics to promote positive change. In contrast, Dark et al 4 observed that the impact of the pandemic and structural trends in nursing home ownership and management make continuing robust survey processes and regulatory sanctions critical to ensuring accountability necessary to protect vulnerable residents. Other groups emphasized that alignment of person-centered community values should be central to efforts to ensure high quality of care in nursing homes. 5 The common goal of these discussions has been optimization of care in nursing homes, which requires balancing efforts to support quality improvement with accountability for substandard performance. To further quality improvement, many health care systems have integrated the concept of safety culture into their overall organizational culture (Table 1). Safety culture represents an organizational commitment to safety at all levels (frontline to leadership) by minimizing adverse events even when carrying out complex and hazardous work. Discussed below, safety culture seeks to create a blame-free environment that in turn encourages health care workers to recognize and report errors or near misses that could result in patient harm. By doing so, policies and procedures can be changed to prevent similar events from occurring in the future, thus improving overall patient safety. These aspects of safety culture alone, however, are not sufficient to address poor performance and negligence. 
This gap gave rise to the idea of just culture, an aspect of safety culture, that we address more thoroughly below (Table 2). In brief, just culture focuses on identifying and addressing behaviors that create the potential for adverse events and calls for appropriate accountability. Just culture supports disciplinary actions against individuals or organizations who engage in reckless behavior or willfully violate best practices and standards of care. Just culture avoids punishing individuals for adverse events over which they have no control. In this article, we use the principles of safety culture and just culture to reenvision the organizational culture of nursing homes and their relationship with the regulatory agencies that oversee them, with the goal of improving the quality of care and life for nursing home residents. Safety Culture in Nursing Homes In 1999, the Institute of Medicine published their seminal report "To Err Is Human: Building a Safer Health System" that established a clear connection between the organizational culture and patient safety. 9 The report emphasized the need for an organizational culture that uses errors (and near misses) to improve and integrate safety into systems and processes. Safety culture has become an accepted part of health care, complete with accepted nomenclature, domains, processes, and outcome assessments. 10 Organizations with a robust safety culture acknowledge the high-risk nature of their organization's activities, use their resources to collaborate across ranks and disciplines to solve problems, and seek to optimize patient outcomes. 6 A systematic review of safety culture in hospitals observed that team perception of safety culture can improve care processes, reduce patient harm, and even decrease staff turnover. 11 Although limited, previous work indicates that incorporating safety culture into nursing homes improves resident outcomes. Bonner et al 12 evaluated the resident safety culture among certified nursing assistants (CNAs) and demonstrated a positive association between resident safety culture and increased reporting of falls. Restraint use was also lower in facilities with an ingrained safety culture among the CNAs. 12 A strong culture of safety was also associated with decreases in several negative outcomes specific to residents including falls, urinary tract infections in long-stay residents, and ulcers in short-stay residents. 13 Similarly, Guo et al 14 found a positive association between safety culture domains such as teamwork and successful discharges from long-term care into the community. Safety culture also appears to benefit the nursing home as an organization, with perceived patient safety culture in nursing homes associated with reduced deficiency citations, fewer substantiated complaints, lower amounts of fines paid by nursing home to the CMS for quality and safety issues, and increased odds of being designated as 4-or 5-star facilities. 15 Despite their successful integration into the organization culture of hospitals and their positive influence on the care of residents, the principles of safety culture are slow to permeate into the overall organizational culture of some nursing homes. Based on surveys conducted in 2005 using a hospital survey on patient safety culture, Castle et al 16 found that nursing homes, 11 of 12 scores were considerably lower compared with the hospital benchmarks. 
Similar surveys conducted in 40 nursing homes in 2006 and 2007, this time using a nursing home survey on patient safety culture, found that staff typically feel the safety culture is poor in their workplaces. 17 Notably, direct care staff reported weaker safety culture compared to administration and managers. This difference persists in community nursing homes as well as in Veterans Affairs nursing homes, termed Community Living Centers (CLCs). 17,18 Interestingly, Quach et al 19 reported that some elements of a strong safety culture exist among direct care providers in VA CLCs; how this compared to community-based nursing homes is not clear. Regardless, in all 3 of these studies, the authors indicate that improved communication between nursing home administrators and staff who provide bedside care may help address the differences in perceived safety culture among nursing home employees. 17e19 Other factors may contribute to the limited uptake of safety culture in nursing homes, as shown by Grunier and Mor, 20 who noted that organizational management, staff turnover, workforce shortages, and the traditional cyclical regulatory environment of identifying and punishing "bad" behavior impeded progress toward culture change. Twenty years ago, AMDA raised concerns about the survey process. 21 We contend that the punitive nature of the survey process, which has a strong influence on nursing home operations, is a marked barrier to implementing safety culture in nursing homes. Table 1 Principles of Safety Culture Acknowledgment of the high-risk nature of an organization's activities and the determination to achieve consistently safe operations A blame-free environment where individuals are able to report errors or near misses without fear of reprimand or punishment Encouragement of collaboration across ranks and disciplines to seek solutions to patient safety problems Organizational commitment of resources to address safety concerns Adapted from "Culture of Safety." 6 Fundamental changes to the survey process are needed in order to support the integration of safety culture into the nursing home sector as a whole. The survey process cannot endorse the blame-free environment called for by safety culture while still holding nursing homes accountable for poor performance and negligence. The principles of just culture, which hold individuals and organizations accountable for their behavior and actions, complement the use of safety culture to improve patient saftey. 22 Reimagining the survey process through the lens of just culture should support a more systemic cultural shift of nursing homes toward safety culture. Integrating Principles of Just Culture Into the Survey Process Just culture encourages transparency and error reporting while creating a balance between blame-free and punitive environments that ensure accountability. With just culture, rather than only focusing on outcomes, an organization examines behavioral choices, thereby reducing severity bias. 7 Similar to previous descriptions of just culture, for the purposes applying just culture to the nursing home survey process, we group behaviors that may cause harm into 3 categories (Table 2). 7 The first category is human error, where an unintentional failure that is beyond the control of humans causes or almost causes harm. From an organizational perspective, this includes adverse events as a result of factors that are outside of an individual's control. 
Just culture does not call for punishment or sanction of individuals with behavior that involves human error. Rather, it aligns with safety culture principles by supporting acceptance of risk along with system redesign to prevent future errors from happening. In the context of a nursing home survey, adverse events that occur despite a nursing home's efforts to prevent them should not incur deficiencies or other punitive measures. Integrating this aspect of just culture into nursing home surveys would require a process that forgoes assessments based only on scope and severity, in favor of an evaluation that examines reasonable steps taken to prevent the adverse event. One example is a resident who falls and experiences an injury requiring hospitalization. In just culture, the survey team would review overall efforts to prevent falls from occurring, including staff education on fall risk factors and prevention strategies, medication reviews, and handrails along with other environmental modifications to support safe mobility. The survey team would also account for choices made by the resident and family to choose mobility despite a clear recognition of an increased fall risk, recognizing the adverse event occurred as a result of shared decision making and despite prevention strategies. The second category is at-risk behavior, which occurs when individuals and/or organizations either do not recognize a risk as a consequence of a choice or otherwise minimize or justify the risk. Under just culture, the response to at-risk behaviors includes removing barriers to safe choices, removing any rewards associated with at-risk behaviors, and coaching individuals and/or organizations to recognize the consequences of their choices. At an organizational level, at-risk behaviors are the most challenging to identify and also the areas of greatest opportunity. In the context of a nursing home survey, if the ultimate goal is to improve resident safety, adverse events due to at-risk behaviors should require a remediation plan that includes staff education and coaching as well as objective improvement in process and outcome measures. The nursing home might choose to rely on their Quality Assurance and Process Improvement (QAPI) committee to develop a remediation plan. The severity or magnitude of the problem might also lead to requests for assistance from regional Quality Improvement Organizations (QIOs), state or jurisdictional health agencies, or even nearby hospitals. It is not the role or function of regulatory survey teams to provide this type of coaching to nursing homes, as this construct has potential for conflict of interest that could undermine their role in sanctioning reckless behavior. An example of at-risk behavior is of nurse who does not change his or her personal protective equipment (PPE) between providing wound care for sacral wound and a surgical site wound on the leg. The reasoning expressed is that PPE is stored far from the resident's bed and that using the same gloves to change multiple dressings should not be harmful as it is the same resident. Although no harm is intended, the nurse should recognize the risk of cross-contamination and infection in the wounds. Even if in this instance there is no demonstrable harm, under the rubric of just culture, the survey team would put the nursing home on notice to correct the behavior within a specified time frame. 
The resulting process improvement plan should include several features: staff education about transmission-based precautions and how to use PPE; coaching on proper donning and doffing of PPE; adapting the system to ensure PPE supplies and trash receptacles are convenient to rooms with residents on transmission-based precautions; a process measure (surveillance for how often rooms do not have adequate supplies of PPE); and an outcome measure (surveillance for wound infections acquired in the nursing home). The survey team would need to reassess the nursing home, perhaps limiting its scope only to infection control and prevention issues. If the nursing home demonstrates that it has addressed the at-risk behavior, the survey team would take no further action. If the nursing home has not sufficiently rectified the at-risk behavior, the survey team could elect to take punitive action. The third category is reckless behavior, which involves a conscious disregard of a substantial and unjustifiable risk of harm. Similar to others, we have grouped reckless behavior with the more severe categories of knowingly causing harm and intentionally causing harm. 23 Reckless behavior is outside of what is accepted as the norm and may be self-serving. Just culture calls for punishment or sanction of individuals who engage in reckless behavior. Under the existing process, survey teams may invoke any of several punitive measures, including civil monetary penalties, withholding reimbursement from CMS, and even closure. Examples of reckless behavior by individuals include drug diversion or continued refusal to properly wear a mask during the COVID-19 pandemic, despite repeated education and coaching. At the organizational level, reckless behavior might manifest as severe cost-cutting or intentional understaffing. The COVID-19 pandemic offers a striking example of a missed opportunity for applying the principles of just culture. At least 1 nursing home in Washington state, the epicenter for COVID-19 infections in the United States, received significant fines following an initial outbreak of SARS-CoV-2. 24 These sentinel events were severe enough to require assistance not only from local and state health departments but also from the Centers for Disease Control and Prevention (CDC). The observations by the CDC shed light on the novel nature of SARS-CoV-2 and informed subsequent infection prevention and control activities across the nation. Nevertheless, CMS fined the nursing home more than $600,000 for not providing "quality care and services for residents during a respiratory outbreak," among other concerns. 25 A survey team trained in principles of just culture would have forgone assessing the scope and severity of the deaths due to SARS-CoV-2 and instead evaluated the behavior and actions of the nursing home. The survey team would have recognized that, especially in early 2020, several circumstances were beyond the control of the nursing home staff: a novel pathogen with a long incubation period, insufficient knowledge of transmission, a new disease with a variable set of clinical symptoms, no diagnostic tests, no experience in treating COVID-19 infections, and no pathogen-specific medications. A second example also comes from early in the pandemic. Nursing homes that reported higher numbers of COVID-19 cases did not receive Phase Three of Provider Relief Funds under the Coronavirus Aid, Relief, and Economic Security (CARES) Act as a consequence of what was interpreted as poor infection prevention and control practices.
26 Although this may have been true for some nursing homes, the practice also penalized nursing homes that were early adopters of a universal testing strategy to limit the spread of COVID-19 in their buildings. Eventually, CMS recognized the benefits of this approach and required all nursing homes to engage in universal testing of staff and residents. 27

Table 3. Examples of applying just culture to challenges faced by nursing homes during the COVID-19 pandemic:
Human error: outbreaks of COVID-19 in nursing homes in areas of high community prevalence, despite adequate supplies of PPE and optimal infection control practices. Response: accept risk; recognition that the risk of COVID-19 outbreaks in nursing homes mirrors the rates of COVID-19 infections in the community.
At-risk behavior: staff reusing PPE due to supply shortages. Responses: coach* (CMS engaged regional quality improvement organizations [QIOs] to coach nursing homes on using a PPE calculator) and remove barriers to safe choices (establishment of state or regional collaborations to help connect nursing homes with supplies of PPE).
At-risk behavior: nursing homes unable to cohort staff or use consistent assignments because of staff shortages due to illness and/or quarantine. Response: system redesign; states deployed strike teams to assess and provide crisis staff with expertise in nursing homes.
Reckless behavior: a nursing home did not screen all of its staff on entry to the building, and employees with an elevated temperature and signs/symptoms of COVID-19 infection were allowed to work. Response: sanction; CMS sanctioned nursing homes through civil monetary penalties and nonpayment for admissions.
Reckless behavior: a nursing home permitted a COVID-positive resident to share a room with a COVID-negative resident after positive test results were known. Response: sanction; CMS sanctioned nursing homes through civil monetary penalties and nonpayment for admissions.
*Note, regulatory survey teams are not designed to coach facilities. In this context, coaching refers to relationships between other governmental agencies and nursing homes for the purpose of teaching and supervision in the interest of resolving at-risk behaviors.

In the example above, applying the principles of just culture would call for CMS to recognize that some nursing homes with high case rates had actually engaged in innovative behaviors supportive of patient safety. The value of the approach is evidenced by CMS's eventual endorsement of this practice. 27 Changes to Nursing Home Culture Caused by the COVID-19 Pandemic Over time, the response to the COVID-19 pandemic gave rise to practices that advanced some aspects of safety culture. First, the devastation of SARS-CoV-2 infections on nursing home residents and staff forced acknowledgment of the high-risk nature of nursing home care during the pandemic. Second, the COVID-19 pandemic led to collaborations across ranks and disciplines, which included federal, state, county, and local agencies racing to develop educational resources and strike teams, discussed in further detail below. 28-31 Third, CMS demonstrated organizational commitment of resources to address safety concerns. The agency suspended the regular survey process and instead focused on infection control and prevention. 32 CMS also issued blanket waivers for many activities like telemedicine, relaxed training and certification requirements for nurse aides, and allowed physicians to more freely delegate tasks to a nurse practitioner, clinical nurse specialist, or physician assistant.
33 Strike teams leveraged key attributes of safety culture to support and coach nursing homes through access to resources including counsel from post-acute and long-term care experts. 34 Many states implemented strike teams in partnership with the National Guard and were able to provide personnel, technical expertise, and material help including staffing, testing assistance, vaccine clinics, personal protective equipment (PPE), and administration of monoclonal antibodies. 35-37 Massachusetts provided an especially successful example of using strike teams to control infections in nursing homes. 38 In April 2020, the governor of Massachusetts authorized disbursement of $130 million to focus facilities, those with the highest rates of COVID-19 infection, if the facilities complied with infection control guidance by experts in the field. To ensure compliance, education, infection control expertise, and resources, including PPE, were made available. This resulted in a decrease in case counts in the focus facilities and a dramatic increase in adherence to the infection control core competencies as evidenced by audit compliance. Later in the pandemic, modifications to some aspects of the survey process also aligned with the principles of just culture (Table 3). The changes acknowledged several challenges faced by nursing homes: ongoing PPE shortages; understaffing due to staff who were ill, on quarantine, or had left health care altogether; and outbreaks in nursing homes located in regions with high community prevalence of COVID-19. Implications for Policy The COVID-19 pandemic led to changes by CMS and other agencies that advanced safety culture in nursing homes. As stakeholders, it is important that we acknowledge this early shift and work to further develop safety culture in nursing homes, which is foundational to high-quality care. Continuing to support culture change in a more purposeful manner, that is, through policy changes, can help transform the current punitive oversight process into one that recognizes and promotes principles of safety culture. Embedded within the larger concept of safety culture, just culture should continue to guide survey processes that respond to human error and at-risk behavior with education, coaching, and strategies to reduce risk (Table 4). Just culture also sanctions nursing homes that engage in reckless behavior. As they did successfully for some nursing homes struggling with COVID-19, survey teams can prompt regional quality improvement organizations (QIOs) and/or state and local agencies to provide education, materials support, and technical assistance. 39 Strike teams, implemented as a short-term crisis-oriented solution during the COVID-19 pandemic, have the potential to evolve into a sustainable program that promotes safety culture in nursing homes.
40 Funds from the American Rescue Plan distributed to state and other jurisdictional health departments for the development of additional state-based COVID-19 strike teams could be the first steps in developing such a program. 41

Table 4. Examples of applying just culture to the survey process:
Accept risk: recognize that the facility made reasonable efforts to prevent this adverse event, which was out of the facility's control. System redesign: systematically review the event to identify potential root causes and develop a contingency plan for residents with a longer than expected leave of absence.
At-risk behavior: a surveyor finds antibiotic prescriptions based on urinalysis results, without documentation of symptoms or culture results. Existing process: penalize based on scope (pattern) and severity (actual harm that is not immediate). Coach*: the regulatory survey team may refer the nursing home to local, state, or regional agencies that offer educational and technical resources for coaching, and require staff education on antibiotic stewardship principles and the nursing home's antibiotic use protocol for suspected urinary tract infection. System redesign: the nursing home revises its protocols to require the presence of signs and symptoms that localize to the genitourinary tract prior to collecting a urine sample.
Reckless behavior: several staff members are frequently wearing masks below their nose. Existing process: penalize based on scope (pattern) and severity (immediate jeopardy to resident health or safety). Sanction: penalize the organization for failure to recognize and address reckless behavior by staff. Coach*: the regulatory survey team may refer the nursing home to local, state, or regional agencies that offer educational and technical resources for coaching.
*Note, regulatory survey teams are not designed to coach facilities. In this context, coaching refers to relationships between other governmental agencies and nursing homes for the purpose of teaching and supervision in the interest of resolving at-risk behaviors.

This would help foster continued instillation of safety culture into nursing homes, while maintaining the focus of survey teams on identifying nursing homes with deficiencies and applying the principles of just culture, including holding organizations accountable for reckless and negligent behavior. Integrating safety culture into nursing homes and just culture into the survey process will require a significant commitment of resources. Both nursing home staff and surveyors will need education and training on safety culture to advance increased reporting, transparency, and appropriate accountability. Furthermore, federal, state, and jurisdictional health agencies may need to develop new standards and structured mechanisms for evaluation. In addition to the investment of financial resources, promoting expertise among staff and surveyors alike will be integral to the continued transformation of nursing homes into institutions that are firmly rooted in safety culture. The COVID-19 pandemic revealed fundamental weaknesses among nursing homes across the United States and among the agencies that oversee them. The response of the dedicated people working within this sector of health care also demonstrated that safety culture principles were integral to successful responses to the pandemic. Over time, measured and deliberate integration of safety culture into nursing homes will advance sustained improvements in the quality of care and life for nursing home residents.
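The process improvement plan sketched earlier pairs a process measure (how often rooms lack adequate PPE supplies) with an outcome measure (wound infections acquired in the nursing home). As a rough, purely illustrative example of the process measure, the snippet below tallies made-up audit records with hypothetical field names; it is not part of any CMS or survey tool.

# Minimal sketch: a PPE process measure computed from room audit records.
# The audit records and field names ("room", "ppe_stocked") are hypothetical.
from collections import namedtuple

Audit = namedtuple("Audit", ["room", "ppe_stocked"])

audits = [
    Audit("101A", True),
    Audit("101B", False),   # supplies missing at the time of the audit
    Audit("102A", True),
    Audit("103C", False),
]

total = len(audits)
missing = sum(1 for a in audits if not a.ppe_stocked)

# Process measure: share of audited rooms without adequate PPE supplies.
print(f"Rooms lacking PPE supplies: {missing}/{total} ({100 * missing / total:.0f}%)")

Tracked over successive surveys, a falling percentage would be one concrete signal that the at-risk behavior has been addressed.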
Hong Kong, 16 December (Asiantribune.com): Talks on strengthening the implementation and effective monitoring of the ceasefire agreement between the Government of Sri Lanka and the Liberation Tigers of Tamil Eelam are expected to commence early next year. This was finalized at a meeting between Sri Lankan Foreign Minister Mangala Samaraweera and Norwegian Foreign Minister Jonas Gahr Støre held yesterday on the sidelines of the WTO Conference in Hong Kong. The two ministers held wide-ranging discussions on bilateral relations between the two countries, including matters related to the Norwegian facilitation of the Sri Lankan peace process. According to a government statement released by Norway, "Minister Samaraweera reiterated the new President's commitment to the peace process and the role of Norway as the facilitator. The Sri Lankan minister also discussed operational modalities for the resumption of talks." Norwegian Foreign Minister Jonas Gahr Støre welcomed the commitment demonstrated by the Sri Lankan government to move the peace process forward. Mr. Støre underlined that the new Norwegian government is fully committed to its engagement as facilitator.
// seip2020_practical_assignments/SourceCodeAnalyzer/src/test/java/codeanalyzer/AnalyzerFacadeTest.java
package codeanalyzer;

import static org.junit.Assert.*;

import java.util.HashMap;
import java.util.Map;

import org.junit.BeforeClass;
import org.junit.Test;

public class AnalyzerFacadeTest {

    private static AnalyzerFacade af = new AnalyzerFacade();
    private static String filepathLocal = "src/test/resources/TestClass.java";
    private static String filepathWeb = "https://drive.google.com/uc?export=download&id=1z51FZXqPyun4oeB7ERFlOgfcoDfLLLhg";
    private static Map<String, Integer> expectedMetricsForRegex = new HashMap<>();
    private static Map<String, Integer> expectedMetricsForStrcomp = new HashMap<>();

    /*
     * No need to test for empty files because they have already been checked in
     * the file readers.
     */
    @BeforeClass
    public static void setUp() {
        expectedMetricsForRegex.put("loc", 21);
        expectedMetricsForRegex.put("nom", 3);
        expectedMetricsForRegex.put("noc", 3);

        expectedMetricsForStrcomp.put("loc", 7);
        expectedMetricsForStrcomp.put("nom", 3);
        expectedMetricsForStrcomp.put("noc", 3);
    }

    @Test
    public void testCalculateMetricsWithRegexLocal() {
        Map<String, Integer> resultMetrics = af.calculateMetrics(filepathLocal, "local", "regex");
        assertEquals(expectedMetricsForRegex, resultMetrics);
    }

    @Test
    public void testCalculateMetricsWithRegexWeb() {
        Map<String, Integer> resultMetrics = af.calculateMetrics(filepathWeb, "web", "regex");
        assertEquals(expectedMetricsForRegex, resultMetrics);
    }

    @Test(expected = IllegalArgumentException.class)
    public void testCalculateMetricsWithRegexNoFileLocation() {
        af.calculateMetrics(filepathLocal, "wrong_location", "regex");
    }

    @Test
    public void testCalculateMetricsWithStrcompLocal() {
        Map<String, Integer> resultMetrics = af.calculateMetrics(filepathLocal, "local", "strcomp");
        assertEquals(expectedMetricsForStrcomp, resultMetrics);
    }

    @Test
    public void testCalculateMetricsWithStrcompWeb() {
        Map<String, Integer> resultMetrics = af.calculateMetrics(filepathWeb, "web", "strcomp");
        assertEquals(expectedMetricsForStrcomp, resultMetrics);
    }

    @Test(expected = IllegalArgumentException.class)
    public void testCalculateMetricsWithStrcompNoFileLocation() {
        af.calculateMetrics(filepathLocal, "wrong_location", "strcomp");
    }

    @Test(expected = IllegalArgumentException.class)
    public void testCalculateMetricsWithNoanalyzerType() {
        af.calculateMetrics(filepathLocal, "regex", "wrong_type");
    }
}
Nitric oxide in chronic airway inflammation in children: diagnostic use and pathophysiological significance Background: The levels of exhaled and nasal nitric oxide (eNO and nNO) in groups of patients with inflammatory lung diseases are well documented but the diagnostic use of these measurements in an individual is unknown. Methods: The levels of nNO and eNO were compared in 31 children with primary ciliary dyskinesia (PCD), 21 with non-CF bronchiectasis (Bx), 17 with cystic fibrosis (CF), 35 with asthma (A), and 53 healthy controls (C) using a chemiluminescence NO analyser. A diagnostic receiver-operator characteristic (ROC) curve for PCD using NO was constructed. Results: The median (range) levels of nNO in parts per billion (ppb) in PCD, Bx, CF, and C were 60.3 (3.3-920), 533.6 (80-2053), 491.3 (31-1140), and 716 (398-1437), respectively; nNO levels were significantly lower in PCD than in all other groups (p<0.05). The median (range) levels of eNO in ppb in PCD, Bx, CF, A, and C were 2.0 (0.2-5.2), 5.4 (1.0-22.1), 2.6 (0.8-12.9), 10.7 (1.6-46.7), and 4.85 (2.5-18.3), respectively. The difference in eNO levels in PCD reached significance (p<0.05) when compared with those in Bx, A and C but not when compared with CF. Using the ROC curve, nNO of 250 ppb showed a sensitivity of 97% and a specificity of 90% for the diagnosis of PCD. Conclusions: eNO and nNO cannot be used diagnostically to distinguish between most respiratory diseases. However, nNO in particular is a quick and useful diagnostic marker which may be used to screen patients with a clinical suspicion of PCD.
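The ROC analysis above reports 97% sensitivity and 90% specificity at a 250 ppb nNO cutoff; for orientation, the sketch below shows how such figures are computed from labeled measurements. The sample values are invented for illustration, and only the 250 ppb cutoff comes from the abstract; in PCD, low nNO indicates disease, so a value below the cutoff counts as test positive.

# Sensitivity and specificity of an nNO cutoff for PCD screening (toy data).
CUTOFF_PPB = 250.0

# (nasal NO in ppb, has_pcd) pairs -- hypothetical measurements
samples = [(45.0, True), (120.0, True), (300.0, True),
           (200.0, False), (310.0, False), (520.0, False), (700.0, False)]

tp = sum(1 for v, pcd in samples if pcd and v < CUTOFF_PPB)       # PCD correctly flagged
fn = sum(1 for v, pcd in samples if pcd and v >= CUTOFF_PPB)
tn = sum(1 for v, pcd in samples if not pcd and v >= CUTOFF_PPB)  # non-PCD correctly passed
fp = sum(1 for v, pcd in samples if not pcd and v < CUTOFF_PPB)

print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))

Sweeping the cutoff over the observed range and plotting sensitivity against 1 - specificity is what produces the ROC curve used in the study.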
package model;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Enumeration;
import java.util.Properties;
import java.util.ResourceBundle;

/**
 * A class providing static methods to serialize/deserialize objects to/from
 * files. References are noted in method comments.
 *
 * @author <NAME>
 */
public class SaverLoader {

    private static final String KNOWN_FILES = "src/resources/data/KnownFiles.properties";

    /**
     * Modified from https://www.tutorialspoint.com/java/java_serialization.htm
     * Saves a serializable object to the given file.
     */
    public static void save(Serializable object, String file) {
        try {
            FileOutputStream fileOut = new FileOutputStream(file);
            ObjectOutputStream out = new ObjectOutputStream(fileOut);
            out.writeObject(object);
            out.close();
            fileOut.close();
        } catch (IOException e) {
            throw new SLogoException("SaveFail", file);
        }
    }

    /**
     * @param file To load from
     * @return the object stored in the file
     */
    public static Object load(String file) {
        try {
            FileInputStream fileIn = new FileInputStream(file);
            ObjectInputStream in = new ObjectInputStream(fileIn);
            Object result = in.readObject();
            in.close();
            fileIn.close();
            return result;
        } catch (IOException | ClassNotFoundException exc) {
            throw new SLogoException("LoadFail", file);
        }
    }

    /**
     * Modified from
     * https://stackoverflow.com/questions/22370051/how-to-write-values-in-a-properties-file-through-java-code
     * Updates a specific properties file to include file in its keyset.
     */
    public static void addToKnown(String file) {
        try {
            Properties prop = new Properties();
            FileInputStream in = new FileInputStream(KNOWN_FILES);
            prop.load(in);
            in.close();
            prop.put(file, "Added by Program");
            FileOutputStream out = new FileOutputStream(KNOWN_FILES);
            prop.store(out, null);
            out.close();
        } catch (IOException e) {
            throw new SLogoException("KnownFiles");
        }
    }

    /**
     * @return The known files, drawn from the properties file being written by
     *         addToKnown(String file)
     */
    public static Enumeration<String> knownFiles() {
        return ResourceBundle.getBundle(KNOWN_FILES).getKeys();
    }
}
def checkProcess(self, process, min=1):
    # Count running instances of the given process via a WMI query.
    processInfo = self._wmi.ExecQuery('select * from Win32_Process where Name="%s"' % process)
    if len(processInfo) >= min:
        # Enough instances are running; report the actual count.
        self.logger.info('Process %s is running with %d instance(s)' % (process, len(processInfo)))
        return 0
    elif len(processInfo) == 0:
        self.logger.info('Process %s is not running' % process)
        return 1
    else:
        # Running, but with fewer instances than required.
        self.logger.info('Process %s is running with only %d instance(s)' % (process, len(processInfo)))
        return 1
from .get_many import get_many as get_many_sub_collections
Development and Validation of Computerized Adaptive Assessment Tools for the Measurement of Posttraumatic Stress Disorder Among US Military Veterans Key Points Question Can rapid psychometrically sound adaptive diagnostic screening and dimensional severity measures be developed for posttraumatic stress disorder? Findings In this diagnostic study including 713 US military veterans, the Computerized Adaptive DiagnosticPosttraumatic Stress Disorder measure was shown to have excellent diagnostic accuracy. The Computerized Adaptive TestPosttraumatic Stress Disorder also provided valid severity ratings and demonstrated convergent validity with the Post-Traumatic Stress Disorder checklist for Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition. Meaning In this study, the Computerized Adaptive DiagnosticPosttraumatic Stress Disorder and Computerized Adaptive TestPosttraumatic Stress Disorder measures appeared to provide valid screening diagnoses and severity scores, with substantial reductions in patient and clinician burden. Introduction Posttraumatic stress disorder (PTSD) in US military veterans is recognized as one of the signature injuries of the conflicts in Iraq and Afghanistan. Fulton et al 1 conducted a meta-analysis of 33 studies published between 2007 and 2013, and PTSD prevalence among Operations Enduring Freedom and Iraqi Freedom veterans was estimated at 23%. Disease burden associated with PTSD is also notable among veterans from previous conflicts. Magruder and colleagues 2 estimated the temporal course of PTSD among Vietnam veterans and identified 5 mutually exclusive groups (ie, no PTSD, early recovery, late recovery, late onset, and chronic). Based on these findings, the authors suggested that PTSD remains "a prominent issue" for many who served. 2(p2) Among adults in the US without a history of military service, lifetime incidence of PTSD is estimated at 6.8%, 3 with women being twice as likely as men to be diagnosed with the condition. 3,4 Provision of evidence-based treatment for those with PTSD is contingent on accurate identification. Traditionally, this identification has required the use of measures developed using classical test theory (ie, summing responses to a fixed set of items). 5 Limitations of classical test theory are amplified when measuring complex conditions, such as PTSD. 5 Diagnostically, criterion A events of PTSD include "exposure to actual or threatened death, serious injury, or sexual violence." 6(p271) Such exposure can be secondary to directly experiencing, witnessing, learning about (occurred in a close family member or friend), and/or experiencing repeated or extreme exposure to aversive details regarding 1 or more traumatic events. Symptombased criteria include intrusive symptoms (eg, distressing memories of the events), avoidance of stimuli (eg, people and/or places that remind the affected person of the events), and negative alterations in cognitions and mood associated with the events (eg, feeling detached from others). 6 As would be expected based on the above-stated criteria, individuals with PTSD experience a wide range of symptoms with varying severity. Using latent profile analysis, Jongedijk et al 7 identified 3 classes of individuals among Dutch veterans with PTSD, including average, severe, and highly severe symptom severity classes. Among trauma-exposed, inner-city primary care patients, Rahman et al 8 examined data to assess associations between PTSD subclasses and major depressive disorder. 
The investigators identified 4 subclasses, including high severity and comorbidity, moderate severity, low PTSD and high depression, and resilient. These findings highlight the need to identify strategies capable of measuring complex traits. One alternative to administering traditional assessment measures is computerized adaptive testing (CAT) in which a person's initial item responses are used to determine a provisional estimate of their standing on the measured trait, which is then used for the selection of subsequent items, 9 thereby increasing the precision of measurement and accuracy of diagnostic screening and minimizing clinician and patient burden. 10 For complex disorders, such as PTSD, in which items are selected from distinct yet related subdomains (eg, exposure, negative alteration in mood and/or cognition, alteration in arousal and/or activity, avoidance, and intrusion), selection of items is based on multidimensional rather than unidimensional item response theory (IRT). 11 Adaptive diagnosis and measurement are fundamentally different. In measurement (ie, CAT) the objective is to move the items to the severity level of the patient. In computerized adaptive diagnosis (CAD), we move the items at the tipping point between a positive and negative diagnosis. 12 15 ), can be used to validate the CAT, but these tools are not used to derive a CAT. By contrast, CAD is based on machine-learning models for supervised learning (eg, random forest). We can use the same set of symptom items as the CAT to derive a CAD, but here we need an external criterion, such as the CAPS-5, to train the machinelearning model. CAD adaptively derives a binary screening diagnosis with an associated level of confidence, and CAT derives a dimensional severity measure that can be used to assess the severity of the underlying disorder and change in severity over time. CAD and CAT are complementary but are fundamentally different in theory and application. To do large-scale screening and measurement of PTSD, both measures are needed. Evidence for other mental health conditions (ie, depression, 16 anxiety, 17 mania/hypomania, 18 psychosis, 19 suicide risk, 20 and substance use disorders 21 ) indicates that one can create large item banks (hundreds of items for a given disorder), from which a small optimal subset of items can be adaptively administered for a given individual with no or minimal loss of information, yielding a substantial reduction in patient and clinician burden while maintaining high sensitivity and specificity for diagnostic categorization, as well as high correlation with extant self-and clinician-rated symptom severity standard measures. For CAD, Gibbons et al 12 for PTSD using multidimensional IRT. Initially, the investigators conducted a systematic review of PTSD instruments to identify items representing each of the 3 symptom clusters (reexperiencing, avoidance, and hypervigilance), as well as 3 additional subdomains (depersonalization, guilt, and sexual problems). A 104-item bank was constructed. Eighty-nine of these items were retained to further develop and validate a computerized test for PTSD (P-CAT). Although the DSM-5 was not completed at that time, the authors indicated that they included items related to domains that they expected to be included. Similarly, because DSM-5 measures were not yet developed, validation measures (eg, civilian version of the PTSD Checklist) 25 were based on DSM-IV criteria. 
Moreover, to "minimize burden and distress for participants," 24(p118) the SCID PTSD module 26 vs the Clinician-Administered PTSD scale 27 was administered. Work by Weathers et al 28 suggests that the CAPS is the most valid measure of PTSD relative to other clinical interviews or self-report measures. According to Eisen et al, 24 although concurrent validity was supported by high correlations, sensitivity and specificity were variable and the P-CAT was found to not be as reliable among those with "low levels of PTSD." 24(p1120) Although there are similarities between the CAT-PTSD and the P-CAT in terms of the underlying method, there are important differences as well. First, unlike the CAT-PTSD, which varies in length and has fixed precision of measurement, the P-CAT is fixed in length and allows the precision of measurement to vary. This difference has implications for longitudinal assessments in which constant precision of measurement is important and is assumed in most statistical models for the analysis of longitudinal data. 29 Second, the P-CAT item bank was limited to 89 items, whereas our item bank has 211 items. As such, these new methods provide better coverage of the entire PTSD continuum and have more exchangeable items at any point on that continuum. Third, we have developed both a CAT for the measurement of severity and a CAD for diagnostic screening. Diagnostic screening based on a CAD generally outperforms thresholding a continuous CAT-based measure, using fewer items. 12 The limitation of CAD is that it does not provide a quantitative determination, a gap that is filled by the CAT-PTSD. In combination, however, CAT and CAD can be used for both screening and measurement. Based on DSM-5 criteria, this study aimed to develop and test the psychometric properties of the CAD-PTSD (diagnostic screener) and the CAT-PTSD (dimensional severity measure) against the standard criterion measure (CAPS-5), 14 as well as the PCL-5. 15 Measure Development We developed the CAD-PTSD and CAT-PTSD scales using the general method introduced by Gibbons and colleagues. 16 First, a large item bank containing 211 PTSD symptom items was developed to create both the CAD-PTSD and CAT-PTSD measures, using separate analyses. The CAT-PTSD measure was developed by first calibrating the item bank using a multidimensional IRT model (the bifactor model 30 ) and then simulating CAT from the complete item response patterns (211 items) to select optimal CAT tuning parameters from 1200 different simulations. Next, the CAT-PTSD scale was validated against an extant PTSD scale, the PCL-5 (convergent validity) and the CAPS-5 (diagnostic discriminant validity). For CAD, we used an extremely randomized trees algorithm 31 to develop a classifier for the CAPS-5 PTSD diagnosis based on adaptive administration of no more than 6 items from the bank. 12 Classification accuracy was assessed using data not used to calibrate the model. Most applications of IRT are based on unidimensional models that assume that all of the association between the items is explained by a single primary latent dimension or factor (eg, mathematical ability). However, mental health constructs are inherently multidimensional; for example, in the area of depression, items may be sampled from the mood, cognition, behavior, and somatic subdomains, which produce residual associations between items within the subdomains that are not accounted for by the primary dimension. 
If we attempt to fit such data to a traditional unidimensional IRT model, we will typically have to discard most candidate items to achieve a reasonable fit of the model to the data. Bock and Aitkin 32 developed the first multidimensional IRT model, where each item can load on each subdomain that the test is designed to measure. This model is a form of exploratory item factor analysis and can accommodate the complexity of mental health constructs such as PTSD. In some cases, however, the multidimensionality is produced by the sampling of items from unique subdomains (eg, negative alterations in mood and/or cognition, avoidance, and intrusion). In such cases, the bifactor model, originally developed by Gibbons and Once the entire bank (ie, 211 PTSD items) is calibrated, we have estimates of each item's associated severity and we can adaptively match the severity of the items to the severity of the person. We do not know the severity of the person in advance of testing, but we learn it as we adaptively administer items. Beginning with an item in the middle of the severity distribution, we administer the item, obtain a categorical response, estimate the person's severity level and the uncertainty in that estimate, and select the next maximally informative item. 16 This process continues until the uncertainty falls below a predefined threshold, in our case, 5 points on a 100-point scale. The CAT has several tuning parameters 16 total bank score. The tuning parameters include the level of uncertainty at which we stop the adaptive test, a second stopping rule based on available information remaining in the item bank at the current level of severity, and an additional random component that selects the maximally informative item or the second maximally informative item to increase variety in the items administered. We select the next maximally informative item based on the following item information criteria. Item information describes the information contained in a given item for a specific severity estimate. Our goal is to administer the item with maximum item information at each step in the adaptive process. Unlike a CAT, which is criterion-free, a CAD uses the diagnostic information (ie, external criterion) to derive a classifier based on a subset of the symptoms in the item bank that maximize the association between the items and the diagnosis. A CAD is used for diagnostic screening, whereas a CAT is used for symptom severity measurement. Gibbons et al 12 Measures We developed an item bank containing 211 PTSD items drawn from 16 existing self-report and clinician-administered PTSD scales (eTable in the Supplement) and newly created items. Existing items were reworded to make them appropriate for adaptive administration, self-report, and userselectable time frames. Items were drawn from 5 subdomains: exposure (5 items), negative alterations in mood/cognition (58 items), alterations in arousal/reactivity (79 items), avoidance (18 items), and intrusion (51 items). Items were rated on 4-or 5-point Likert scales with categories of not at all, a little bit, moderately, quite a bit, very much, never, rarely, sometimes, and often. The trauma/PTSD L Module of the SCID 13 was used to assess criterion A events and the presence of symptoms. If a criterion A event and at least 1 current symptom were endorsed, the CAPS-5 was administered. 14 The CAPS-5 is the standard for assessing PTSD diagnosis. 
28 Non-PTSD modules of the SCID 13 were administered to obtain information regarding current mental health conditions. The PCL-5 15 was used to determine self-reported PTSD symptom severity. Statistical Analysis The bifactor IRT models were fitted with the POLYBIF program. Improvement in fit of the bifactor model over a unidimensional alternative was determined using a likelihood ratio 2 statistic. The extra-trees classification algorithm was fitted using the Scikit-learn Python library. Logistic regression was used to estimate diagnostic discrimination capacity for the CAT-PTSD and area under the curve (AUC) for the receiver operating characteristic curve with 10-fold cross-validation using Stata, version 16 (StataCorp LLC). The Pearson r correlation coefficient test was used to assess the association between the CAT-PTSD score and the PCL-5 score. Using 2-sided testing, findings were considered significant at P <.05. Participants In To aid in patient triage, severity thresholds were selected based on sensitivity and specificity for the CAPS-5 diagnosis of PTSD. Scores on the CAT-PTSD can range from 0 to 100 and map on to PTSD severity categories. Categories of none, mild, moderate, and severe were selected; the shift between Median (range) 0 (0-27.6) 0 (0-27.5) a Some participants declined to respond to certain items; in these cases, the number who responded to that item or measure is reported. In Table 3, example CAT-PTSD interviews for patients with low, moderate, and high PTSD severity are presented. The testing session result is classification as having no evidence of PTSD (requires 12 items), possible PTSD (requires 9 items), and PTSD definite or highly likely (requires 11 items). In Table 4 Future directions include the need for additional field testing, which would also allow for evaluation of the acceptability and feasibility of implementing these tools in clinical settings, including via telehealth, which has been increasingly implemented as a result of the COVID-19 JAMA Network Open | Psychiatry pandemic. Use of telehealth assessment will in part be facilitated by designing a graphical user interface 45 in a cloud computing environment for routine test administration on internet-capable devices, such as smartphones, tablets, notebooks, and computers, and providing an advanced programming interface that can be interfaced with the electronic health record. To accommodate literacy issues, audio to the self-report questions can be enabled. Because the generation and testing of subdomain scores is beyond the scope of this study, future research in this area is warranted. Limitations This study has limitations. The CAD-PTSD and CAT-PTSD do not allow for evaluation and monitoring of specific symptoms to the extent that they may not always be adaptively administered. However, items from the 5 subdomains are available from most interviews and can be used to assess specific subdomains of PTSD (eg, avoidance). In addition, this study was conducted exclusively in English. Independent replication of our findings in other patient populations and in other languages (eg, Spanish) is needed. 46 How much were you bothered by repeated, disturbing dreams of a stressful experience from the past? Very much How much did feelings of being "super alert," on guard, or constantly on the lookout for danger occur or become worse after a stressful event or experience in the past? c Very much Have you markedly lost interest in free-time activities that used to be important to you? 
Often How much did having a very negative emotional state occur or become worse after having a stressful event or experience? Very much Someone touched me in a sexual way against my will d Often Diagnosis: positive Probability of having PTSD P =.81 Abbreviations: CAD, computerized adaptive diagnostic; PTSD, posttraumatic stress disorder. Role of the Funder/Sponsor: The funding organizations had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. The VA Office of Mental Health and Suicide Prevention did not influence the decision to submit the manuscript for publication. Disclaimer: The views, opinions, and/or findings contained in this article are those of the authors and should not be construed as an official Department of Veterans Affairs position, policy, or decision unless so designated by other documentation. Additional Information: The POLYBIF program used is freely available at http://www.healthstats.org.
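The CAD screening described in the Methods fits an extremely randomized trees classifier to item responses against the CAPS-5 reference diagnosis and judges accuracy on data not used for calibration. The sketch below illustrates that general workflow with scikit-learn; the synthetic responses, the toy labeling rule, and the model settings are assumptions for illustration and do not reproduce the published CAD-PTSD.

# Sketch of a CAD-style screener: extremely randomized trees fit to item
# responses against a reference diagnosis, then checked on held-out data.
# All data and settings here are invented.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_items = 700, 211
items = rng.integers(0, 5, size=(n_people, n_items))        # Likert-style responses
diagnosis = (items[:, :20].sum(axis=1) > 45).astype(int)    # stand-in for CAPS-5 labels

X_train, X_test, y_train, y_test = train_test_split(
    items, diagnosis, test_size=0.3, random_state=0)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])  # evaluated on unseen data
print(f"held-out AUC: {auc:.2f}")

An adaptive front end would then administer only the handful of items the fitted trees find most informative for a given respondent, which is how the CAD keeps the screening burden to a few questions.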
# -*- coding: utf-8 -*-
"""Census_Bias_Flip.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/19ptga2vYxzJhQAnvM6kaSjtjiY5-2K1B
"""

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

expt_name = 'Census_flip_1'

# Prevalence of the protected attribute a and of the latent label z,
# and the label-flip probability beta indexed by a.
rho_a = .14
p_a = [1 - rho_a, rho_a]
rho_z = .12
p_z = [1 - rho_z, rho_z]
beta = [0, .42]
N = 600
mu = [[1, 4], [4, 1]]
D = len(mu[0])
cov = [[2, 0], [0, 2]]

a = np.random.choice([0, 1], p=p_a, size=N)
z = np.random.choice([0, 1], p=p_z, size=N)
x = [np.random.multivariate_normal(mu[z_i], cov) for z_i in z]
# Observed label y equals z, except it is flipped with probability beta[a].
y = [np.random.choice([zi, 1 - zi], p=[1 - beta[ai], beta[ai]]) for ai, zi in zip(a, z)]

labels_protected = np.asarray([a, z, y]).T
x = np.asarray(x)
data = np.concatenate([labels_protected, x], axis=1)

labels = ['a', 'z', 'y']
labels.extend(['x' + str(i) for i in range(D)])
df = pd.DataFrame(data=data, columns=labels)
df['x0'] = x[:, 0]
df['x1'] = x[:, 1]
df.head()

"""This notebook walks us through a case in which we are choosing a sample of a
protected class of people from a certain demographic group, and generating
insights by flipping labels of people from that sample
"""
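A quick sanity check that could follow the cell above (it is not part of the original notebook): within the protected group (a == 1) the observed labels y should disagree with z roughly beta[1] = 42% of the time, and essentially never otherwise.

# Empirical flip rate by protected attribute, using the df built above.
flip = (df['y'] != df['z'])
print(df.assign(flipped=flip).groupby('a')['flipped'].mean())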
Comparative Retinopathy Risk in People with Type 2 Diabetes Treated with Post Metformin Second-line Incretin Therapies: Study Based on US Electronic Medical Records Studies have reported conflicting results of the association of incretin-based treatment with the risk of diabetic retinopathy (DR), while the risk of DR in people treated with different antidiabetic drugs (ADD) in the context of glycaemic control in real-world settings is limited. This study aimed to evaluate the risk of developing DR in metformin-treated patients with type 2 diabetes (T2DM) who initiated secondline ADD and if glycaemic control over one-year post-therapy initiation is associated with DR risk during follow-up. From US Electronic Medical Records (EMR), those who received second line DPP-4 inhibitor (DPP-4i), GLP-1 receptor agonist (GLP-1RA), sulfonylurea, thiazolidinedione, or insulin for ≥3 months post-2004 were analysed. Based on 237,133 people with an average of 3.2 years follow-up, compared to people who initiated second-line with sulfonylurea, those with DPP-4i/GLP1RA/thiazolidinedione had 30%/31%/15% significantly lower adjusted risk of developing DR; insulin users had 84% increased risk (all p< 0.01), with significantly better sustainable HbA1c control over one year in incretin groups. This population representative EMR based study suggests that DR risk is not higher in people treated with incretins, versus other ADD, with the benefit of better glycaemic control. © 2020 Sanjoy Ketan Paul. Hosting by Science Repository. All rights reserved. Introduction Incretin-based therapies have demonstrated their ability to significantly reduce glucose levels while maintaining a low risk of hypoglycaemia in patients with type 2 diabetes (T2DM). While randomized controlled trials (RCT), have indicated that tight glycaemic control in patients with T2DM reduces the risk of microvascular complications, the clear picture on the effects of incretin-based therapies on macrovascular outcomes is not yet established. GLP-1 receptors are expressed in the cells of the retinal ganglion, Muller cells, and pigment epithelial cells, and preclinical studies suggest that GLP-1 receptor agonists (GLP-1RAs) possess a protective effect against diabetic retinopathy (DR) by reversing and preventing early changes, such as neurodegeneration and bloodretinal barrier permeability. However, the association of incretinbased therapies and DR in humans is limited and inconclusive. Few real-world data-based studies have evaluated the possible association of treatment with incretin-based therapies with the risk of DR, and outcome trials have reported on the association of GLP-1RA with DR in patients with T2DM. The SUSTAIN-6 trial (Trial to Evaluate Cardiovascular and Other Long-term Outcomes With Semaglutide in Subjects With Type 2 Diabetes) reported 76% (95% CI of HR: 1.11, 2.78) significantly increased risk of severe DR in people treated with GLP-1RA compared to placebo. The LEADER trial (Liraglutide Effect and Action in Diabetes: Evaluation of Cardiovascular Outcome Results-A Long Term Evaluation) reported a statistically insignificant 15% (95% CI of HR: 0.87, 1.52) higher risk of DR in people treated with GLP-1RA With Type 2 Diabetes). The pair-wise metaanalysis of 37 clinical trials revealed 27% (95% CI of OR: 1.05, 1.53) increased likelihood of DR in patients treated with DPP-4 inhibitor (DPP-4i) compared with placebo. Based on Medicare data in adults aged ≥ 65 years with 0.8 years of median follow-up, Wang et al. 
reported no increased risk of DR in people treated with incretins compared to other antidiabetic drugs (ADDs). A UK primary care data-based study on 77,115 individuals with 2.8 years of median follow-up also reported no increased DR risk among users of GLP-1RA, compared to those using two or more oral ADDs. While the UK study was based on an exposure-level design comparing GLP-1RA with a combination of any other ADDs, the US cohort study was based on claims data. We are not aware of any real-world electronic medical record (EMR) based study that holistically evaluated the possible association of different ADDs when introduced as post-metformin second-line intensification with the DR risk, in conjunction with the glycaemic control post-second-line ADD intensification. Using the US Centricity Electronic Medical Records (CEMR), the aims of this pharmaco-epidemiological outcome study were to evaluate the rates and risks of developing retinopathy in metformin-treated individuals with T2DM who initiated second-line ADD therapy with DPP-4i, GLP-1RA, sulfonylurea (SU), thiazolidinedione (TZD), or insulin (INS), and if the glycaemic control over one-year post-second-line therapy intensification explains the possible DR risk difference between therapy groups. I Data Source The CEMR incorporates patient-level data from over 40,000 independent physician practices, academic medical centres, hospitals and large integrated delivery networks covering all states of the US. The similarity of the general population characteristics and cardiometabolic risk factors in the CEMR database with those reported in the US national health surveys has been reported, and this database has been extensively used for academic research. Longitudinal EMRs were available for more than 34 million individuals from 1995 until April 2016. II Study Design All individuals with a diagnosis of T2DM (excluding type 1 and gestational diabetes) were included in this study with the conditions of no missing data for age and sex; age ≥ 18 and <80 years at diagnosis of T2DM; initiated therapy with metformin, and received a second-line ADD for at least 3 months from 2005 to 2016. The clinically driven machine-learning-based algorithms to identify patients with T2DM from EMRs have been described previously. The second line ADDs were DPP-4i, GLP-1RA, INS, TZD, or SU. The following cross-exposure users were excluded: users of secondline SU, TZD, and INS who had ever received a DPP-4i or GLP-1RA, users of second-line DPP-4i who had ever received a GLP-1RA, and users of second-line GLP-1RA who had ever received a DPP-4i. Initiation of a second-line ADD was defined as index date (baseline). Sensitivity analyses were conducted by censoring follow-up at the initiation of other restricted ADDs in the individual therapy groups. A robust methodology for extraction and assessment of longitudinal patient-level medication data from the CEMRs has previously been described. A detailed account of glucose-lowering drug use in the US population and the likelihood of sustaining glycaemic control by post-metformin second line ADD classes based on this database has also been reported. The presence of retinopathy and comorbidities prior and post-baseline was assessed by relevant disease identification codes (ICD-9, ICD-10, SNOMED-CT). Cardiovascular disease (CVD) was defined as ischaemic heart disease, peripheral vascular/artery disease, heart failure, or stroke. 
A disease was considered as prevalent if its first available diagnostic date was on or prior to the index date. HbA1c measures at index, 6 and 12 months were obtained as the nearest measure within 3 months either side of the time point. With the condition of at least one non-missing follow-up data over 12 months and complete data at baseline, the missing data were imputed using a Markov Chain Monte Carlo method adjusting for age, diabetes duration and usage of concomitant ADDs. III Statistical Methods Among those without DR at index, the event rates per 1000 person-years (PY) were estimated for retinopathy using the standard life-table method. Using multinomial propensity scores approach, the treatment groups were balanced on age, sex, diabetes duration, history of CVD, neuropathy, and renal diseases. Parametric survival regression models were used to calculate the risk (95% CI) of incident DR under propensity score balanced setup. Time to event was calculated from the second line ADD initiation to the first record of DR if any, or till the end of followup in the database. The final model was adjusted on age, sex, smoking status, baseline HbA1c, BMI, systolic blood pressure, use of third line ADDs, cardio-protective and anti-hypertensive medications, and history of CVD, neuropathy and renal diseases. The probability of HbA1c control below 7.5% within a 6-month follow-up was used as a timevarying covariate in additional risk analysis. The probability of reducing HbA1c level below 7.5% at 6 months and sustaining this glycaemic control over 12 months were estimated using a multivariate logistic regression model, adjusting and balancing for the covariate/confounders mentioned above. Sensitivity analyses were conducted excluding those developed DR within 6 months of the index date. Results From 2,624,954 individuals with T2DM, 237,133 met the inclusion criteria ( Figure 1) and the characteristics of these individuals at index date are presented in the table below (Table 1) 84% increased risk (all p< 0.01). One-point higher baseline HbA1c was associated with 13 % higher risk of developing DR, while 5% higher probability of reducing HbA1c below 7.5% over 6-month was associated with 12 % (HR CI: 1.08, 1.20) lower DR risk. Discussion Given the lack of real-world evidence on the association of second line ADD intensification choices, including incretins in conjunction with population-level glycaemic control with DR risk in the context of the guideline-oriented ADD therapy intensification pathway, our study offers new insight into the DR risk dynamics in people treated with different second-line ADDs with varying levels of HbA1c control. Based on about 237,000 people with mean 3.5 years of follow-up from a US representative EMR, our study suggests no association of treatment with incretins with DR risk in comparison to other ADDs. While other observational studies have evaluated the association of incretins with DR risk, the novelty of our study is the exploration of the glycaemic control post-second-line therapy intensification, and an explanation of how the observed population-level sustainable glycaemic control in these therapy groups could explain the dynamics of DR risk. Our study design is based on the "New User" approach at second line ADD initiation post metformin therapy and is different compared to the published studies based on real-world data from the US and UK. 
We observed that patients treated with incretins or TZD as second-line ADD intensification in the US had significantly lower rates and risk of developing DR over 737,733 person-years of follow-up. Although not comparable, a pharmacovigilance study based on Food and Drug Administration Adverse Event Reporting System, reported that the frequency of retinal adverse events (AEs) for GLP-1RAs was significantly lower than for other glucose-lowering medications. Furthermore, retinal AEs were more than four times more frequent in reports listing (11.7/1000) than in those not listing insulin (2.9/1000). While better glycaemic control is associated with microvascular risk reduction in patients with type 2 diabetes, our study also provides a realworld context in terms of sustainable glucose control with incretins and TZD (compared to insulin and sulfonylurea) and its association with long-term risk of diabetic retinopathy. Limitations of this study include unavoidable indication bias and residual confounding that remains as a common problem in any EMR based outcome studies, and lack of complete and/or reliable data on socioeconomic characteristics, physical activity, the nature of insurance, education, and income. Furthermore, while reliable information on medication adherence is a common problem in all clinical studies, detailed validation studies of US EMRs suggest a high level of agreement between EMR prescription data and the pharmacy claims data, especially in chronic diseases. The results should be interpreted with caution as we are not aware of the validity of the retinopathy coding in the CEMR database. However, a study by Lau and colleagues suggested that diagnostic, procedure and therapeutic codes derived from insurance billing claims accurately reflect the medical record for patients with diabetic retinopathy. A large cohort size with reasonable follow-up post metformin second-line ADD intensification, appropriate segregation of patients treated with insulin, adjustments for baseline risk factors and exposure to different cardioprotective therapies help provide confidence in the reliability of the estimates reported in the present study. While about 20% of patients in the USA are prescribed a non-metformin antidiabetic drug as the firstline therapy, we chose to consider post metformin therapy intensification only. These aspects may introduce some selection bias. In conclusion, with a relatively better likelihood of sustainable glycaemic control in people treated with incretins, compared to those treated with sulphonylurea or insulin as second-line ADD, treatment with incretins was not associated with DR risk. Even a modest glycaemic control below 7.5% significantly reduced the DR risk independent of second line ADD intensification. Ethical Approval The research involved existing data, where the subjects could not be identified directly or through identifiers linked to the subjects. Thus, according to the US Department of Health and Human Services Exemption 4 (CFR 46.101(b)), this study is exempt from ethics approval from an institutional review board and informed consent.
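The event rates reported above are expressed per 1000 person-years; in its crudest form such a rate is simply events divided by accumulated follow-up time, as in the toy calculation below. The figures are invented, and the study itself used the standard life-table method rather than this simple ratio.

# Crude incidence rate per 1000 person-years (invented figures).
events = 120            # hypothetical incident DR cases in one therapy group
person_years = 38_500   # hypothetical total follow-up accrued by that group

rate_per_1000_py = 1000 * events / person_years
print(f"{rate_per_1000_py:.1f} events per 1000 person-years")  # about 3.1

Comparing such rates across therapy groups, after the propensity-score balancing and covariate adjustment described in the methods, is what yields the hazard ratios quoted in the results.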
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <string>

/// Reads the entire given file as a string.
[[nodiscard]] std::u8string _read_file(const std::filesystem::path &p) {
    constexpr std::size_t _read_size = 1024000;
    std::u8string res;
    std::ifstream fin(p);
    while (fin) {
        // Grow the buffer by one chunk and read directly into it.
        std::size_t pos = res.size();
        res.resize(res.size() + _read_size);
        fin.read(reinterpret_cast<char*>(res.data() + pos), _read_size);
        if (fin.eof()) {
            // Trim the buffer to the number of bytes actually read.
            res.resize(pos + fin.gcount());
            break;
        }
    }
    return res;
}
Databases are widely used for data storage and access in computing applications. A goal of database storage is to provide enormous sums of information in an organized manner so that it can be accessed, managed, and updated. In a database, data may be organized into rows, columns, and tables. Different database storage systems may be used for storing different types of content, such as bibliographic, full text, numeric, and/or image content. Further, in computing, different database systems may be classified according to the organization approach of the database. There are many different types of databases, including relational databases, distributed databases, cloud databases, object-oriented and others. Databases are used by various entities and companies for storing information that may need to be accessed or analyzed. In an example, a retail company may store a listing of all sales transactions in a database. The database may include information about when a transaction occurred, where it occurred, a total cost of the transaction, an identifier and/or description of all items that were purchased in the transaction, and so forth. The same retail company may also store, for example, employee information in that same database that might include employee names, employee contact information, employee work history, employee pay rate, and so forth. Depending on the needs of this retail company, the employee information and the transactional information may be stored in different tables of the same database. The retail company may have a need to “query” its database when it wants to learn information that is stored in the database. This retail company may want to find data about, for example, the names of all employees working at a certain store, all employees working on a certain date, all transactions for a certain product made during a certain time frame, and so forth. When the retail store wants to query its database to extract certain organized information from the database, a query statement is executed against the database data. The query returns certain data according to one or more query predicates that indicate what information should be returned by the query. The query extracts specific data from the database and formats that data into a readable form. The query may be written in a language that is understood by the database, such as Structured Query Language (“SQL”), so the database systems can determine what data should be located and how it should be returned. The query may request any pertinent information that is stored within the database. If the appropriate data can be found to respond to the query, the database has the potential to reveal complex trends and activities. This power can only be harnessed through the use of a successfully executed query. In some instances, different organizations, persons, or companies may wish to share database data. For example, an organization may have valuable information stored in a database that could be marketed or sold to third parties. The organization may wish to enable third parties to view the data, search the data, and/or run reports on the data. In traditional methods, data is shared by copying the data in a storage resource that is accessible to the third party. This enables the third party to read, search, and run reports on the data. However, copying data is time and resource intensive and can consume significant storage resources. 
Additionally, when the original data is updated by the owner of the data, those modifications will not be propagated to the copied data. In light of the foregoing, disclosed herein are systems, methods, and devices for instantaneous and zero-copy data sharing in a multiple tenant database system. The systems, methods, and devices disclosed herein provide means for querying shared data, generating and refreshing materialized views over shared data, and sharing materialized views.
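To make the earlier description of querying concrete, the sketch below builds a tiny transactions table with Python's built-in sqlite3 module and runs a query with a predicate, in the spirit of the retail example above. The table, columns, and values are invented for illustration and are not drawn from any particular database system.

# Minimal illustration of querying a database for transactions matching a predicate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (store TEXT, sale_date TEXT, total REAL)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [("Store 12", "2023-06-01", 42.50),
     ("Store 12", "2023-06-02", 19.99),
     ("Store 7",  "2023-06-01", 103.25)],
)

# Query predicate: all transactions for a certain store in a certain time frame.
rows = conn.execute(
    "SELECT sale_date, total FROM transactions "
    "WHERE store = ? AND sale_date BETWEEN ? AND ?",
    ("Store 12", "2023-06-01", "2023-06-30"),
).fetchall()
print(rows)   # [('2023-06-01', 42.5), ('2023-06-02', 19.99)]
conn.close()

In the sharing scenario described above, the point of zero-copy sharing is that a third party could run exactly this kind of query against the owner's data without a separate copy ever being materialized.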
Stimulated Low-Frequency Raman Scattering in Brome Mosaic Virus We experimentally register stimulated low-frequency Raman scattering (SLFRS) in the suspension of brome mosaic virus (BMV) in phosphate buffer with very high conversion efficiency. We identify two components of the SLFRS spectrum as the breathing and quadrupole modes of BMV and determine damping characteristics and gain factors for these modes. We show that, using the coreshell model for BMV and taking into account the influence of the environment, the acoustic properties of individual components of such a composite nanosystem can be determined. Thus, we define the sound velocity in the RNA core of BMV, in view of spectral characteristics of SLFRS. Introduction Studying viruses is evidently an important scientific and practical task. Viruses are responsible for diseases of people, animals, plants, and periodic epidemic outbreaks; therefore, they are an object of intense study. The ability to quickly and accurately characterize new strains of viruses is important for predicting and controlling such outbreaks, therefore, various diagnostic methods are being actively developed. Even more important for practical purposes is to develop the methods of fast identifying certain viruses. Optics methods are one of the most effective for this aim. Real-time nondestructive identification processes are based on the Raman scattering effect. Raman spectroscopy methods, such as ultraviolet resonance Raman (UVRR) spectroscopy and polarized Raman spectroscopy, are being actively developed and improved. A compact and convenient platform for virus capture and identification was recently introduced, where a real-time nondestructive identification process was based on the surface-enhanced Raman scattering effect. Raman spectroscopy provides an important information on the molecular composition of the systems under study. For determining the morphology in the study of nanoscale (1-100 nm) and submicrometer (100-1000 nm) systems, the low-frequency Raman scattering (LFRS) can be used. Such powerful tool as LFRS can be a complement to different Raman spectroscopy methods for the identification of various biological systems. Characteristics of low-frequency acoustic vibrations of viruses (as any nanoparticles or submicroparticles) are defined by their own shape, their elastic properties. and the properties of the surrounding medium, and they are independent on the exciting intensity. Thus, the LFRS use in viruses can provide information on their elastic properties and explain how they change under conditions very close to real ones. This fact makes this method unique and very interesting from the viewpoint of biology and biomedicine. In the case of strong attenuation, where the acoustic impedances of a nanoscale (or submicrometer) object and the environment are slightly different, the LFRS efficiency can be quite low. In this case, SLFRS can be used for studying the morphological properties and for exact definition of natural vibrations of dielectric nanoparticles in suspensions. Of particular interest, there is the measurement of the natural vibration frequencies of nanoscale viruses dangerous to humans, for example, influenza viruses or SARS-CoV-2, with the aim of their subsequent destruction or suppression under optical (two-photon) or microwave resonance irradiation. For these purposes, it is of interest to study plant viruses that are safe for humans and have a similar shape (sphere). 
For the first time, resonant microwave absorption was studied in brome mosaic virus (BMV) and tomato bushy stunt virus and proposed as a means of their destruction. Later, a reduction in tobacco mosaic virus activity was demonstrated after exposure to a microwave field. However, due to strong water absorption in the microwave spectral range, this method is very difficult to implement under real conditions. To overcome this difficulty, one can use excitation sources in the visible range, where water is transparent. Due to its high conversion efficiency, SLFRS can be used as a biharmonic pump source for an effective resonant impact on viruses. In this paper, we present the results of an experimental investigation of SLFRS excited in the suspension of the spherical virus BMV in phosphate buffer. The SLFRS energetic and spectral characteristics are determined. We show that the core-shell model is the most suitable for describing the elastic properties of the system under investigation. Samples BMV is a small (27-30 nm, 86 S), positive-stranded, icosahedral RNA plant virus belonging to the genus Bromovirus, family Bromoviridae in the alphavirus-like superfamily. The BMV virion is composed of 180 identical subunits of the capsid protein arranged in a T = 3 icosahedral geometry. The virion structure was determined by X-ray crystallography with a resolution of 0.34 nm. The capsid subunits exist in three different arrangements forming 12 pentameric and 20 hexameric capsomeres. At the center of each capsomere, there is a channel 0.5-0.6 nm in diameter. The capsid itself has a thickness of 5-6 nm and weighs roughly 3.6×10⁶ Da. Virus particles contain approximately 22% nucleic acid and 78% protein. The genomic RNA does not completely fill the interior of the particle, leaving a central cavity of about 8 nm. The molecular weight of the complete virion is approximately 4.6×10⁶ Da. The isoelectric point determined by isoelectric focusing is 6.8. The brome mosaic virus virions were purified as described previously, with slight modifications. Infected leaves of barley (Hordeum vulgare L) were blended in 0.1 M phosphate buffer at pH 5.0. The leaf sap was filtered through cheesecloth, incubated for 2 h at room temperature, and subjected to high-speed centrifugation (100,000 g, 2.5 h; CP100WX, Hitachi). The pellet was dissolved in 0.1 M phosphate buffer at pH 7.0. SLFRS measurements were carried out using the previously described setup. Ruby laser pulses (λ = 694.3 nm, τ = 20 ns, E_max per pulse equal to 0.3 J, linewidth Δν = 0.015 cm⁻¹) were used for the SLFRS excitation. The length of the active medium (BMV in phosphate buffer) was 10 mm. SLFRS spectra were registered with Fabry–Pérot interferometers with different ranges of dispersion (from 0.3 to 8.3 cm⁻¹) simultaneously in the forward and backward directions. All our measurements were carried out at room temperature. Experimental Setup and Results In Fig. 1, we present the hydrodynamic radius distribution of BMV in phosphate buffer obtained by dynamic light scattering with a Photocor Compact Z analyzer. On the right-hand side of Fig. 1, we show the transmission electron microscopy (TEM) image of brome mosaic viruses obtained with a JEOL JEM-1400. The peak with a maximum at 16 nm corresponds to individual BMV viruses. In addition, there is also a peak at a larger size (38 nm) that corresponds to aggregates of viruses. Figure 2 shows the summarized SLFRS spectra obtained for BMV in phosphate buffer in the forward scattering geometry.
For the case of a free continuous isotropic elastic sphere, this problem was solved by Lamb; in his work, two types of vibrational modes were shown, namely, a spheroidal (SPH) mode and a torsional (TOR) mode, obtained by solving the equation of motion ρ ∂²D/∂t² = (λ + 2μ)∇(∇·D) − μ∇×(∇×D) under stress-free boundary conditions, where D is the lattice displacement vector, λ and μ are the Lamé constants, and ρ is the mass density. The resulting eigenvalue equations for the SPH modes are transcendental relations involving tan(qa) and the spherical Bessel functions j_l(qa) and j_l(Qa), and those for the TOR modes involve j_l(Qa) alone, where v_l and v_t are the longitudinal and transverse sound velocities, a is the sphere radius, l is the quantum number of orbital angular momentum, j_l are spherical Bessel functions of the first kind, and the wave numbers satisfy v_l q = v_t Q = ω. The SPH modes are vibrations with dilatation, while the TOR modes are characterized by a constant density, and their eigenfrequencies are inversely proportional to the particle radius, ω = ξV/R, where V is the sound velocity and ξ is a dimensionless parameter depending on the ratio of the longitudinal and transverse sound velocities. The eigenfrequencies of both SPH and TOR modes are labeled by the quantum number of orbital angular momentum l and the harmonic n. Viruses are small particles (D ≪ λ) with a fairly spherical shape and, for such cases, only the breathing (l = 0) and quadrupole (l = 2) spheroidal modes are Raman-active. On the other hand, the virus particle is heterogeneous, as it consists of the RNA core and the capsid shell suspended in a liquid buffer; determining the eigenfrequencies in this case is a more complex task. After Lamb, the free sphere model (FSM) was extended to the cases of an elastic matrix or a liquid surrounding the sphere and also to special cases of inhomogeneity, including the core-shell model (CSM). These data are in good agreement with LFRS as well as SLFRS experiments for various submicrometer and nanoscale systems, including biological systems. In Fig. 3, we show the lowest spheroidal eigenmodes estimated within the FSM, both without and with the liquid surrounding taken into account, as well as the experimentally obtained values. The longitudinal and transverse sound velocities used in the calculations correspond to the values for lysozyme, V_L = 1817 m/s and V_T = 915 m/s, as usually accepted for viruses. In Fig. 3, one can see that the spectral shift of 1.94 cm⁻¹ (58.5 GHz) most likely corresponds to the breathing (l = 0) oscillation mode. When the virus is considered as a core-shell particle, the boundary conditions at the various interfaces change. For free surfaces, zero surface traction is used and, for interfaces between two different materials, the displacement and the associated surface traction are continuous. To obtain the eigenfrequencies of the BMV virus using the CSM, it is necessary to know the sound velocities in the RNA core but, unfortunately, these data were not available in the literature. Therefore, varying the value of the longitudinal sound velocity and assuming the Poisson ratio to be 0.33 and the RNA density ρ = 1.21 g/cm³, we obtained the eigenfrequency dependence for the breathing (l = 0) and quadrupolar (l = 2) spheroidal modes of the virus in liquid media; see Fig. 4. Under these assumptions, the eigenfrequencies coincide with the experimental ones when the longitudinal sound velocity in RNA ranges from 3700 to 3800 m/s, which is comparable to the values for DNA (3400-3800 m/s). In this case, a spectral shift of 0.74 cm⁻¹ (21.3 GHz) corresponds to the quadrupole (l = 2) spheroidal mode.
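As a rough numerical cross-check of this assignment (not part of the original analysis), one can use the frequently quoted free-sphere approximations ν(l=0) ≈ 0.9 v_l/d and ν(l=2) ≈ 0.85 v_t/d, where d is the particle diameter; the prefactors are approximate and model-dependent. A short Python sketch with the parameters quoted above:

# Rough free-sphere estimate of the lowest Raman-active Lamb modes of BMV.
# The 0.9 and 0.85 prefactors are commonly used approximations, not exact values.
v_l = 1817.0        # longitudinal sound velocity, m/s (lysozyme value used for viruses)
v_t = 915.0         # transverse sound velocity, m/s
d = 27e-9           # virion diameter, m

nu_breathing = 0.9 * v_l / d     # l = 0 mode
nu_quadrupole = 0.85 * v_t / d   # l = 2 mode

print(f"breathing  ~ {nu_breathing / 1e9:.0f} GHz (observed 58.5 GHz)")
print(f"quadrupole ~ {nu_quadrupole / 1e9:.0f} GHz (observed 21.3 GHz)")
# The l = 0 estimate is close to the measured shift, while the l = 2 mode lies
# noticeably lower in the experiment, consistent with the need for the
# core-shell model and the liquid environment discussed in the text.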
It is worth noting that the sound velocity values in the virus particle's core play an important role, as the rigidity of a composite core-shell particle could be strongly dominated by its DNA/RNA core rather than by the capsid shell. As has been shown, the vibrational modes of viruses embedded in a liquid are expected to be severely damped because of a weak acoustic impedance mismatch and viscosity. When the influence of the medium is considered, the eigenfrequencies ω become complex, and their imaginary parts are related to the damping time by τ_D = −1/Im ω. For the BMV suspension, the breathing mode is much more strongly damped than the quadrupole mode; their damping times are 16 and 59 ps, respectively. This is consistent with the obtained SLFRS spectra, where the breathing-mode component is broader and has a smaller conversion efficiency than the quadrupole-mode one. Therefore, the ability to observe these modes experimentally under such conditions is due to the stimulated nature of the SLFRS process. In Table 1, we present the lowest eigenfrequencies estimated for different models. The spectral lines corresponding to 0.45, 0.55, and 1.08 cm⁻¹ may be related to the presence of aggregates with an average size of 38 nm in the system under study, as determined by the DLS method; see Fig. 1. They may also correspond to SPH modes with odd l or to TOR modes, which could become Raman-active, taking into account the anisotropy of the virus as mentioned above. The parameters used in the calculations are as follows: the virus radius is 28 nm, the DNA core radius is 9 nm, the longitudinal and transverse sound velocities for the virus or the shell in the CSM are 1817 and 915 m/s, respectively, the densities of the protein coat and the RNA core are 1.21 g/cm³, the water density is 1.0 g/cm³, and the longitudinal sound velocity for water is 1498 m/s. The half-width of the SLFRS spectral line is inversely proportional to the square root of the laser intensity, and the half-width of the scattered component can be written as Δν_SLFRS = Δν/√(gIz), where g is the gain coefficient in cm/MW, z is the interaction length in cm, Δν is the half-width of the spontaneous LFRS spectral line, and I is the laser intensity in MW/cm². Using this expression, the experimental values of the spectral width of the SLFRS line, the laser intensity, and the values given in Table 1 for the FWHM of the spontaneous scattering line, one can obtain the gain values for the modes l = 0 and l = 2. These values are 0.175 and 0.024 cm/MW, respectively. Note that the gain for the mode l = 0 is larger than the gain of stimulated Brillouin scattering for such a highly nonlinear liquid as CS₂, for which it is 0.15 cm/MW. Discussion Our investigations clearly indicate that SLFRS can be excited with high conversion efficiency in nanoscale biological systems. SLFRS can be used for spectral measurements that allow identification of the nanoscale systems under study. SLFRS can also be used as a source consisting of two spectral lines (biharmonic pumping) that allows a selective and resonant impact on biological objects through the ponderomotive interaction when the object's own acoustic frequency coincides exactly with the frequency difference. As is known, a ponderomotive force acts on a dielectric particle in an external electromagnetic field; it is determined by the gradient of the field intensity, by the relative refractive index n = n₁/n₂, where n₁ is the refractive index of the surrounding medium and n₂ is the refractive index of the nanoparticle, and by the nanoparticle radius r.
If the electromagnetic field consists of two waves with a frequency difference Ω, a component of the ponderomotive force oscillating at Ω appears; this force will excite harmonic acoustic vibrations in the nanoparticle. If the frequency difference is equal to the particle's acoustic eigenfrequency, an effective impact on the nanoparticle can be realized. Owing to the weak absorption in a liquid environment, the application of biharmonic pumping in the visible or near-infrared range for the efficient excitation of a biological object's vibrations makes it a promising tool for acting on such biosystems. Conclusions As we have shown, the model of BMV as a core-shell object in a specific environment is quite suitable both for determining the type of oscillation modes and for estimating the acoustic parameters of the individual BMV components separately. Taking into account the effect of the environment, which leads to attenuation, the damping parameters can be used for calculating the gain coefficients for the corresponding vibrational modes. The study of such plant viruses as BMV by optical methods, in particular by SLFRS, is very important. This virus has a spherical form, as do many human viruses (influenza viruses, SARS-CoV-2) that cause infectious diseases. Studying the elastic properties of plant viruses, determining their natural vibrational frequencies, and creating methods for effectively acting on them can help to employ plant viruses as model nanoparticles for developing methods of investigating and identifying human viruses and of acting on them with the aim of decreasing their activity, up to their destruction.
Fascist Heroes vs. progressive policies and political correctness: Agenda and framing of the Spanish Alt-lite micro-celebrities on YouTube The New Right has generated alternative communities on YouTube in which it wages its cultural battle without journalistic intermediation. Recent research has detected the existence of a popular Alternative Influence Network in the United States formed by creators who use the techniques of digital influencers to spread their radical messages. This article examines this phenomenon in the Spanish context with the aim of analysing the agenda and framing of the content disseminated by its right-wing micro-celebrities. To do so, it applies a content and discourse analysis to 406 videos of five prominent YouTubers. The results show the thematic predominance of feminism, racial diversity, welfare state economics and public freedoms. The general framing is one of opposition to progressive policies and the culture of political correctness. The lack of explicit supremacism, the use of cultural rather than racial arguments, the recourse to offensive humour and the absence of a propositional dimension bring them closer to the discursive strategies of the American Alt-lite.
const second = 1000
const minute = 60 * second
const hour = 60 * minute
// exact value unknown
const threshold = 8 * hour

const PREFIX_KEY_TIMESTAMP = `BlockLimiterTimestamp`
const PREFIX_KEY_COUNT = `BlockLimiterCount`

// Tracks how many blocks a given user has performed within a time window,
// persisting the counter and its timestamp in localStorage.
export default class BlockLimiter {
  public readonly max = 500
  private readonly KEY_TIMESTAMP: string
  private readonly KEY_COUNT: string
  public constructor(userId: string) {
    const identifier = `user=${userId}`
    this.KEY_TIMESTAMP = `${PREFIX_KEY_TIMESTAMP} ${identifier}`
    this.KEY_COUNT = `${PREFIX_KEY_COUNT} ${identifier}`
  }
  // The stored counter is considered stale once the threshold has elapsed.
  private expired() {
    const timestamp = parseInt(localStorage.getItem(this.KEY_TIMESTAMP) || '0', 10)
    const diff = Date.now() - timestamp
    return diff > threshold
  }
  public get count() {
    if (this.expired()) {
      return 0
    } else {
      return parseInt(localStorage.getItem(this.KEY_COUNT) || '0', 10)
    }
  }
  public increment() {
    const count = this.count + 1
    localStorage.setItem(this.KEY_COUNT, count.toString())
    localStorage.setItem(this.KEY_TIMESTAMP, Date.now().toString())
    return count
  }
  public check(): 'ok' | 'danger' {
    const { count, max } = this
    if (count < max) {
      return 'ok'
    } else {
      return 'danger'
    }
  }
  public reset() {
    localStorage.setItem(this.KEY_COUNT, '0')
    localStorage.setItem(this.KEY_TIMESTAMP, '0')
  }
  public static resetCounterByUserId(userId: string) {
    return new BlockLimiter(userId).reset()
  }
}
Preparation of highly activated natural killer cells for advanced lung cancer therapy Background: Natural killer (NK) cells can be used as an adoptive immunotherapy to treat cancer patients. Purpose: In this study, we evaluated the efficacy of highly activated NK (HANK) cell immunotherapy in patients with advanced lung cancer. Patients and methods: Between March 2016 and September 2017, we enrolled 13 patients who met the enrollment criteria. Peripheral blood mononuclear cells were isolated from the patients and the NK cells were expanded. After 12 days of culture, the cells were collected and infused intravenously on days 13 to 15. The enrolled patients received at least one course consisting of three infusions. The lymphocyte subsets, cytokine production, and the expression of carcinoembryonic antigen (CEA) and thymidine kinase 1 (TK1) were measured before treatment and after the last infusion. Results: No side effects were observed. After a three-month follow-up, the percentages of patients who achieved stable disease and progressive disease were 84.6% and 15.4%, respectively. Moreover, the level of IFN-γ was significantly higher after treatment and the level of CEA decreased substantially. The overall immune function of the patients who received the NK cell therapy remained stable. Conclusion: This is the first study to describe the efficacy of NK cell therapy in patients with advanced lung cancer. These clinical observations demonstrate that NK cell therapy is safe and efficient for advanced lung cancer. Introduction Lung carcinoma is the most common type of cancer and the leading cause of cancer mortality in the People's Republic of China. 1 Lung cancer includes non-small cell lung cancer (NSCLC) and small cell lung cancer, of which NSCLC accounts for approximately 80% and is defined as the most dangerous and common malignant cancer. 2,3 In addition, approximately 70% of patients with NSCLC are diagnosed at an advanced stage 4 and the 5-year survival rate is only 16.8%. 5 Surgery and chemotherapy are the standard therapies used to treat patients with NSCLC; however, they are insufficient to manage patients with advanced lung cancer due to a poor prognosis. 6,7 Moreover, severe toxicity is exhibited following chemotherapy. Although biological therapy is an attractive alternative method for clinical treatment, variable therapeutic effects have been reported due to individual differences. Thus, the suppression of tumor cell proliferation in a complicated microenvironment remains a problem for researchers. Recently, progress in the field of adoptive immunotherapy has shown increased potential for the treatment of advanced lung cancer patients. However, a safe and efficient immunotherapy regimen is still required. Natural killer (NK) cells are a critical component of the innate immune system and are characterized by their rapid response to and strong cytotoxicity against virus-infected or malignant cells without presensitization or restriction by major histocompatibility complex class I (MHC-I) molecules. NK cells identify their target cells through a set of activating and inhibitory receptors. NK cells recognize self MHC-I molecules that are expressed on normal cells but downregulated by infected or transformed cells, which is termed the "missing-self" model.
14 When target cells exhibit decreased self MHC-I molecule expression or when the activating signals (activating receptors on NK cells and their corresponding ligands on tumor cells) dominate over the balance of inhibitory signals (inhibitory receptors on NK cells and their ligands on tumor cells), NK cell cytotoxicity is triggered. 15 Inhibitory signals mediated by killer cell immunoglobulin-like receptors and NK group 2A (NKG2A) on NK cells interact with MHC-I molecules that are expressed on target cells. By comparison, various activating receptors like NK group 2D (NKG2D) and the natural cytotoxicity receptors (NCRs), including NKp30, NKp44, and NKp46, on NK cells provide positive signals when activated. 16,17 NK cells are innate lymphocytes that are part of the first line of defense against tumor cells. In addition, NK cells can recognize and kill tumor cells without the requirement of prior antigen exposure. Recent studies have investigated the potential for NK cells to provide therapeutic benefit in patients with advanced lung cancer. Tumor cells can escape immune surveillance by downregulating the level of MHC molecule expression, which releases NK cells from inhibition and initiates antitumor activities. 22 With increased knowledge of NK cell function, adoptive NK cell therapy has been applied as a clinical treatment for advanced cancer patients, including lung cancer. To enhance the immune function of lung cancer patients, we isolated NK cells from the patients themselves for adoptive immunotherapy. We developed a method to expand the NK cell number by 100-fold in 2 weeks with a purity level ≥80%, and the expression level of the activating receptors increased almost 200-fold. 28 We also reported a case in which an advanced ovarian cancer patient received highly activated NK (HANK) cells cultured and proliferated ex vivo by this approach and had a good response. 29 Therefore, we adopted HANK cells to treat lung cancer patients in this clinical trial. In this study, we assessed the clinical effect of HANK cell therapy in patients with advanced lung cancer, as a potential novel therapeutic regimen. Materials and methods Ethics This clinical trial was approved by the Guangzhou Fuda Cancer Hospital ethics committee. In accordance with the Declaration of Helsinki, written informed consent was obtained from each participant. Patient eligibility Patients were enrolled in the present study based on the following criteria: 1) life expectancy >3 months; 2) age >18 years; 3) Karnofsky performance status >60; 4) pathological or radiographic confirmation of stage III-IV lung cancer; 5) no serious abnormalities in liver, lung, and kidney function; 6) no high blood pressure, acute or chronic infection, or severe heart disease; 7) no HIV, HTLV-1, syphilis, tuberculosis, or parasitic infections; and 8) the absence of level 3 hypertension, severe coronary disease, myelosuppression, and autoimmune diseases. Preparation of HANK cells HANK cells were prepared under good manufacturing practice conditions using clinical-grade reagents. The human NK cell in vitro culture kit (HAHK Bioengineering Co. Ltd, Shenzhen, People's Republic of China) was used to costimulate expansion and activation of NK cells in the peripheral blood mononuclear cells (PBMCs) according to the manufacturer's instructions. Briefly, collect approximately 50 mL of peripheral blood from the patient. Transfer the blood to a 50-mL conical tube and centrifuge at 600g for 15 min. Collect the plasma supernatant in a 50-mL conical tube.
Add an equal volume of saline to the blood cells in the bottom, and resuspend the cells for lymphocyte separation. Transfer the plasma tube to a 56°C water bath for 30 min, then let it stand on the bench top until the tube temperature is below 37°C, centrifuge at 400g for 10 min, and transfer the supernatant into a new 50-mL conical tube and store at 4°C for further applications. A plasma tube stored at 4°C for more than 12 h should be centrifuged again and only the supernatant used. Transfer 20 mL of human lymphocyte separation solution (Haoyang Biological Manufacture Co., Ltd, Tianjin, People's Republic of China) into two 50-mL conical tubes, respectively. Carefully lay equal volumes of the blood cell suspension onto the lymphocyte separation solution in the two 50-mL conical tubes, centrifuge at 600g for 15 min, transfer the lymphocytes in the middle layer into a 50-mL conical tube, and wash twice with saline. The total cell number is counted. NK cell culture media consist of 1 L of X-Vivo15 serum-free medium (Lonza, Walkersville, MD, USA) and one tube of HK-002 (5% for the initial 200 mL of medium, then reduced to 1-2%). A tube of HK-001 contains membrane chimeric active cellular factors. One tube of HK-001 is good for an initial culture of 4×10⁷ lymphocytes. Calculate the number of HK-001 tubes needed according to the lymphocyte numbers. Take HK-001 out of the liquid nitrogen or −80°C freezer, and immediately put it into a 37°C water bath to recover. Then centrifuge at 350g for 5 min, and remove the supernatant. Wash the precipitate twice with saline. Resuspend the precipitate in about 3 mL NK cell culture medium. The NK cells were cultured as follows. On day 1, mix 4×10⁷ lymphocytes, 50 mL NK cell culture medium, and one tube of recovered HK-001 in a T175 culture flask (Corning Incorporated, Corning, NY, USA) and incubate at 37°C with 5% CO₂. On day 3, add 30 mL NK cell culture medium to the T175 flask. On day 5, add 60 mL NK cell culture medium to the T175 flask and adjust the cell concentration to 1×10⁶/mL. On day 6, add 60 mL NK cell culture medium to the T175 flask and adjust the cell concentration to 1×10⁶/mL. On day 7, add 50 mL NK cell culture medium (1-2% plasma concentration) to the T175 flask. Count the cell number. If the total cell number is >6×10⁷ cells, add a tube of recovered HK-001; if the total cell number is 3-6×10⁷ cells, add a tube of recovered HK-001 one day later. Perform the first sterility test. On day 8, transfer the total culture from the T175 flask to a 2-L cell culture bag (Haoyang Biological Manufacture Co., Ltd). Add 200 mL NK cell culture medium to the cell culture bag. On days 9, 10, and 11, add 150 mL NK cell culture medium to the cell culture bag each day. On day 12, add 350 mL NK cell culture medium to the cell culture bag. Perform the quality control tests, including cell viability, NK cell purity, and endotoxin. On day 13, harvest the HANK cells. Count the total cell number (it should be around 1×10¹⁰ cells). Collect the cultures into 450-mL conical centrifuge tubes. Precipitate the cells and wash them once with saline. Adjust the cell concentration to 2×10⁷/mL with cell infusion solution (400 mL saline with 1% human serum albumin and 6 mL HK-003). A sample of 3-5×10⁹ HANK cells was harvested into blood transfusion bags each day for infusion at a concentration of 2×10⁷/mL. The release tests were performed on each bag of cells. The process of HANK cell preparation is shown in Figure 1.
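The repeated adjustment of the culture to 1×10⁶ cells/mL is a simple dilution calculation; the helper below only illustrates that arithmetic and is not part of the kit protocol.

# Illustrative dilution arithmetic for adjusting a culture to a target density.
def medium_to_add(total_cells, current_volume_ml, target_per_ml=1e6):
    """Return the volume of medium (mL) to add so the density becomes target_per_ml."""
    required_volume = total_cells / target_per_ml
    return max(0.0, required_volume - current_volume_ml)

# Example: 2.4e8 cells in 140 mL -> 240 mL total volume needed, so add 100 mL.
print(medium_to_add(2.4e8, 140))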
NK cells were stained with anti-CD3-APC-H7, anti-CD16-Alexa Fluor700, anti-CD56-PE, anti-NKG2D-PerCP-Cy5.5, anti-NKG2A-PE-Cy7, anti-NKp30-APC, anti-NKp44-PE, and anti-NKp46-BV786 (BD Biosciences, San Jose, CA, USA). Then, the cells were washed with fluorescence-activated cell sorting buffer (0.2% BSA) (Hyclone), fixed with 1% paraformaldehyde (Sigma-Aldrich Co., St Louis, MO, USA), and analyzed on the FACSCanto™ II (BD, Franklin Lakes, NJ, USA). Data analysis was performed using FlowJo software (Treestar). Cells were gated on CD56+CD3− events (NK cells) and then analyzed for the expression of inhibitory and activating surface receptors. The assessment was performed in freshly isolated peripheral blood mononuclear cells (PBMCs) on day 0 as well as in expanded and activated NK cells on day 12. Figure 1 NK expansion process. PBMCs were obtained from peripheral blood (approximately 50 mL) of each cancer patient. The plasma was prepared, and the lymphocytes were separated using human lymphocyte separation solution. The lymphocytes were resuspended in NK cell culture medium and cultured. For the first week, the cells were cultured in a T175 culture flask. From day 8, the cells were cultured in a 2-L cell culture bag. The first sterility test was performed on day 8, while the quality control (cell viability ≥80%, NK cell purity ≥80%, endotoxin ≤1 EU/mL, and bacteria, fungi, and mycoplasma culture-negative) and sterility tests were performed on day 12. On day 13, approximately 3-5×10⁹ HANK cells were harvested into blood transfusion bags each day for infusion at a concentration of approximately 2×10⁷/mL. Abbreviations: NK, natural killer; PBMC, peripheral blood mononuclear cell; HANK, highly activated natural killer. Therapeutic procedure Enrolled patients stopped pretreatment before HANK cell therapy. All enrolled patients received at least one course of treatment (each course included three infusions; each infusion lasted no more than 30 min). No more than three courses of treatment were received monthly. Each course with three infusions was delivered consecutively over 3 days. Evaluation of safety and clinical efficacy Adverse effect observation The most common adverse reactions recorded include local (eg, pain and retroperitoneal errhysis) and systemic (eg, chills, fatigue, and fever) reactions. Blood toxicity was measured by the white blood cell count (reference: 3.5-9.5×10⁹ cells/L) and the hemoglobin level (reference: 130-175 g/L). Hepatic toxicity was measured by aspartate transaminase (AST; reference: 15-40 U/L) and alanine aminotransferase (ALT; reference: 9-50 U/L). Renal toxicity was measured by the creatinine level (reference: 57-97 μmol/L). All of these indexes were measured 14 days before and after HANK cell infusion as detection indexes to evaluate adverse effects. Detection of immune function Lymphocyte subsets were detected by flow cytometry (FACSCanto™ II; BD). Peripheral blood (2 mL) was drawn to assess the level of immune function before and after the HANK cell infusion. A detection kit (BD) was used to detect the expression of IL-2 (reference range: 8-12.5 pg/mL), IL-4 (reference range: 3.5-6 pg/mL), IL-6 (reference range: 2.7-8.5 pg/mL), IL-10 (reference range: 1.8-4 pg/mL), tumor necrosis factor-α (reference range: 1.7-2.5 pg/mL), and interferon-γ (reference range: 1.5-4 pg/mL).
These factors are inhibited by tumor proliferation, and the level of these cytokines is indicative of the potency of the antitumor response elicited by the immune system. 30 Tumor biomarkers The level of carcinoembryonic antigen (CEA) and thymidine kinase 1 (TK1) was measured. TK1 is associated with DNA synthesis related to cellular proliferation and is used to monitor the effect of tumor therapy, prognosis, and followup. In addition to TK1, CEA is a tumor-associated antigen that is an important marker for lung cancer. Peripheral blood (5-6 mL) was collected for the detection of TK1 and CEA. The TK1 analytical reagent kit and CIS-2 series chemiluminescence digital imaging analyzer (Sino-Swed Tong Kang Bio-Tech Co. Ltd., Shenzhen, People's Republic of China) were used to detect the TK1 level (normal range: 0-2.0 pM) before and after the HANK cell immunotherapy. The expression level of CEA was detected by a chemiluminescent immunoassay. Computed tomography imaging scan Changes in computed tomography tumor imaging were monitored to evaluate the curative effect of the treatment according to the evaluation standards published by the World Health Organization. 31 According to the degree of change in the largest transverse diameter, the therapeutic effect is categorized as: 1) complete response (CR), in which there is a disappearance of the arterial enhancement imaging of all target lesions; 2) partial response (PR), in which there is a total reduction in the diameter of the target lesions >30%; 3) stable disease, in which tumor regression fails to reach PR or tumor progression fails to reach progressive disease; and 4) progressive disease, a total progression of the tumor diameter >20%. To accurately observe the therapeutic effects of HANK cell therapy, the total area of all tumors before and after treatment was compared. Any curative effect must be maintained for more than 4 weeks, with CR+PR representing the effective rate (RR). Statistical analysis SPSS version 13.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analyses, and the results were expressed as the mean and SD values. GraphPad Prism 5 (GraphPad Software Inc., La Jolla, CA, USA) was used to plot graphs. The two groups (immune function and tumor markers) were compared using Wilcoxon signed-rank tests. All statistical tests were two-sided, and differences were considered significant at p<0.05. Patient characteristics From March 2016 to September 2017, a total of 13 patients (eight males, five females) with a median age of 57.3 years (range: 37-84 years) and a diagnosis of adenocarcinoma (n=12) or squamous cell carcinoma (n=1) were enrolled in the study. Among these patients, stage Ⅳ, Ⅲ, and Ⅱ was detected in 10, 2, and 1 patient, respectively (Table 1). Treatment safety Before HANK cell therapy, all other therapies were stopped. The detailed pretreatment data are shown in Table 2. During all infusions, the patients did not report a cold chill, fever, or any other discomfort. There was no case of queasiness or vomiting. After 14 days of HANK cell infusion, the white blood cell count and the aspartate transaminase, alanine aminotransferase, hemoglobin, and creatinine levels were not significantly different from those before treatment (p>0.05) ( Table 3). It was demonstrated that blood function, liver function, and renal function remained at a normal level. 
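The response categories described earlier reduce to simple thresholds on the change in total lesion size; the sketch below is only an illustrative encoding of those thresholds (the actual assessment also requires imaging review and confirmation of the response for more than 4 weeks).

# Illustrative encoding of the response thresholds described in the text.
def classify_response(baseline_sum, followup_sum):
    """Classify tumor response from total lesion diameters before/after therapy."""
    if followup_sum == 0:
        return "CR"                      # complete disappearance of target lesions
    change = (followup_sum - baseline_sum) / baseline_sum
    if change <= -0.30:
        return "PR"                      # total reduction greater than 30%
    if change >= 0.20:
        return "PD"                      # total progression greater than 20%
    return "SD"                          # neither PR nor PD

print(classify_response(10.0, 6.5))      # PR
print(classify_response(10.0, 12.5))     # PD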
NK cell purity before and after in vitro expansion and activation Prior to NK cell expansion, the median percentage of the CD3−CD56+ population in PBMCs was 13.4% (range: 9.4-16.8%). Following expansion, the proportion of viable cells exceeded 92% without any bacterial, fungal, or mycoplasma contamination (endotoxin level <1 EU). The median proportion of CD3−CD56+ cells was 88.1% (range: 81.3-94.9%). The representative results are presented in Figure 2. We also examined the expression rates of various activating and inhibitory receptors on the NK cell surface. The results show that there is a significant increase in the expression of activating receptors after culture compared to those on PBMCs before expansion (Figure 3). For instance, NKG2D+/CD56+ NK cells increased from 24% in PBMCs to 95% after 12 days of NK cell culture. Similarly, NKp30+/CD56+ NK cells, NKp44+/CD56+ NK cells, and NKp46+/CD56+ NK cells increased from 22%, 0.5%, and 62% to 65%, 66%, and 78%, respectively. Moreover, expansion and activation of NK cells had no influence on NKG2A+/CD56+ NK cells, which remained stable, changing from 14% on day 0 to 12% on day 12. Immune function changes in patients with advanced lung cancer following HANK cell therapy The level of interferon-γ was significantly higher after treatment (p<0.05). In addition, the lymphocyte subset numbers remained stable (Table 4). No change was observed regarding the level of cytokines, including IL-2, IL-4, IL-6, IL-10, and tumor necrosis factor-α (p>0.05) (Table 5). Changes in tumor biomarker expression The level of CEA significantly decreased following HANK cell adoptive immunotherapy (p<0.05). The mean TK1 value before HANK cell immunotherapy was 2.08±4.35 pmol/L and increased to 2.7±5.78 pmol/L after the last transfusion. Thus, there was no substantial change in the level of TK1 (Table 6). Clinical outcomes A total of 13 patients received different courses of autologous HANK cell infusion. After 3 months, 11 patients (84.6%) experienced stable disease and 2 patients (15.4%) experienced progressive disease according to the RECIST guidelines (Table 7). Discussion Lung cancer is reported to be a major cause of mortality and is associated with the highest incidence among all malignancies worldwide. 32 Furthermore, the majority of patients are diagnosed with advanced or metastatic disease. 33 Although chemotherapy and radiotherapy can be effective to some extent, the curative and beneficial effects of these treatments are severely limited. 34,35 Adoptive immunotherapy has long been recognized as a promising novel approach for the treatment of solid tumors. 36 As a part of the innate immune system, NK cells are highly related to the tumor microenvironment and mediate cytotoxic activity against tumor cells. 37 Thus, the clinical application of NK cell adoptive immunotherapy as a potential treatment for patients with solid tumors has recently gained attention. A previous study has shown that NK cells can recognize and lyse cells lacking MHC-I molecules through their activating receptors like NKG2D, NKp46, NKp44, and NKp30. 16,17 Hence, tumor cells are highly susceptible to NK cells due to their lack of MHC-I molecule expression. 14 In addition, the balance between activating and inhibitory receptor signaling on NK cells is crucial to NK cell functionality, including cytokine production, cytotoxicity mediated by molecules such as perforin and granzyme, and antibody-dependent cell-mediated cytotoxicity.
Depending on the balance between the activating and inhibitory signals engaged by ligands expressed on tumor cells, NK cells are triggered to kill or to ignore target cells. Data from recent studies indicate that, in response to NK cell therapy, the expression of NK cell activating receptors like NKG2D increased, whereas the expression of inhibitory receptors did not, suggesting that NK cell therapy can activate NK cells in advanced cancer patients. 23,47 However, the clinical application of NK cells has been limited by the low frequency of NK cells in peripheral blood (5-20%). Moreover, the expansion of NK cells to abundant numbers in vitro also remains difficult. A recent study reported that expansion of NK cells can be achieved by inducing proliferation in response to the presence of feeder cells, such as irradiated PBMCs, Epstein-Barr virus-transformed lymphoblastoid cell lines, and gene-modified K562 cells, which act as artificial antigen-presenting cells. 24, In particular, feeder cells based on genetically modified K562 cells that contain chimeric active cellular factors on K562 cell membranes produce a high quantity of NK cells. 51,52 The human NK cell in vitro culture kit used in this study contains membrane chimeric cellular factors. However, a concern is that in vitro expanded NK cells may lose their cytotoxicity and persistence in vivo within several weeks. A rapid onset of action but a short persistence time are, respectively, the advantage and the disadvantage of adoptive NK cell therapy. Thus, repeated infusions are needed. A series of studies have demonstrated that NK cell therapy is a promising therapeutic approach to advanced lung cancer. Tonn et al 53 observed some encouraging responses in patients with advanced lung cancer after two infusions of the natural killer cell line NK-92. Yang et al 26 found that treatment with NK cell-rich lymphocytes combined with docetaxel in patients with advanced NSCLC was feasible without further toxicity or complication. Other regimens, such as targeted therapy and percutaneous cryoablation combined with NK cell immunotherapy in advanced NSCLC patients, also achieved satisfactory results. 54,55 In addition, some clinical studies combining NK cell therapy with chemotherapy or antibodies have been performed with good results in various tumors. 27,56 NK cell therapy is not only safe and feasible, but a better immune response is also achieved when it is applied in cancer patients. 57,58 In summary, we have developed a practical, simple, safe, and economical NK cell expansion and activation procedure, which results in NK cells of high quantity, high purity, and high activity. The application of these NK cells for advanced lung cancer immunotherapy was safe and efficient, and most of the patients remained in a stable condition. Although this NK cell therapy shows promise as a potential cancer treatment, further studies are needed to obtain an additionally improved curative effect. Disclosure The authors report no conflicts of interest in this work.
# filename: MSCL_Examples/Inertial/Python/setCurrentConfig.py
#import the mscl library
import sys
sys.path.append("../../dependencies/Python")
import mscl

#TODO: change these constants to match your setup
COM_PORT = "COM4"

try:
    #create a Serial Connection with the specified COM Port, default baud rate of 921600
    connection = mscl.Connection.Serial(COM_PORT)

    #create an InertialNode with the connection
    node = mscl.InertialNode(connection)

    #many other settings are available than shown below
    #reference the documentation for the full list of commands

    #if the node supports AHRS/IMU
    if node.features().supportsCategory(mscl.MipTypes.CLASS_AHRS_IMU):
        ahrsImuChs = mscl.MipChannels()
        ahrsImuChs.append(mscl.MipChannel(mscl.MipTypes.CH_FIELD_SENSOR_SCALED_ACCEL_VEC, mscl.SampleRate.Hertz(500)))
        ahrsImuChs.append(mscl.MipChannel(mscl.MipTypes.CH_FIELD_SENSOR_SCALED_GYRO_VEC, mscl.SampleRate.Hertz(100)))

        #apply to the node
        node.setActiveChannelFields(mscl.MipTypes.CLASS_AHRS_IMU, ahrsImuChs)

    #if the node supports Estimation Filter
    if node.features().supportsCategory(mscl.MipTypes.CLASS_ESTFILTER):
        estFilterChs = mscl.MipChannels()
        estFilterChs.append(mscl.MipChannel(mscl.MipTypes.CH_FIELD_ESTFILTER_ESTIMATED_GYRO_BIAS, mscl.SampleRate.Hertz(100)))

        #apply to the node
        node.setActiveChannelFields(mscl.MipTypes.CLASS_ESTFILTER, estFilterChs)

    #if the node supports GNSS
    if node.features().supportsCategory(mscl.MipTypes.CLASS_GNSS):
        gnssChs = mscl.MipChannels()
        gnssChs.append(mscl.MipChannel(mscl.MipTypes.CH_FIELD_GNSS_LLH_POSITION, mscl.SampleRate.Hertz(1)))

        #apply to the node
        node.setActiveChannelFields(mscl.MipTypes.CLASS_GNSS, gnssChs)

    node.setPitchRollAid(True)

    node.setAltitudeAid(False)

    offset = mscl.PositionOffset(0.0, 0.0, 0.0)
    node.setAntennaOffset(offset)

except mscl.Error as e:
    print("Error:", e)
The Production of Question Intonation by Young Adult Cochlear Implant Users: Does Age at Implantation Matter? Purpose The purpose of this observational study was to investigate the properties of sentence-final prosody in yes/no questions produced by cochlear implant (CI) users in order to determine whether and how the age at CI implantation impacts CI users' production of question intonation later in life. Method We acoustically analyzed recordings from 46 young adult CI users and 10 young adults with normal hearing who read yes/no questions. Of the 46 CI users, 20 had received their CI before the age of 4.0 years (early implantation group), 15 between ages 4.0 and 8.11 years (midimplantation group), and 11 at the age of 9.0 years or later (late implantation group). We assessed the prosodic properties of the produced questions for each implantation group and the normal hearing comparison group (a) by measuring the sentence-final rise in fundamental frequency, (b) by labeling the question-final intonation contour using the Tones and Breaks Index ( Beckman & Ayers, 1994 ; Silverman, Beckman, et al., 1992 ; Veilleux, Shattuck-Hufnagel, & Brugos, 2006 ), and (c) by assessing phrase-final lengthening. Results The fundamental frequency rises produced by all CI users exhibited a smaller magnitude than those produced by the normal hearing comparison group, although the difference between early implanted CI users and the normal hearing group did not reach statistical significance. Early implanted CI users were more comparable in their use of question-final intonation contours to the individuals with typical hearing than to those users with CI implanted later in life. All CI users exhibited significantly less phrase-final lengthening than the normal hearing comparison group, regardless of age at CI implantation. Conclusion The results of this investigation of question intonation produced by CI users suggest that those CI users who were implanted with CI earlier in life produce yes/no question intonation in a manner that is more similar to, albeit not the same as, individuals with normal hearing when compared to the productions of those users with CI implanted after 4.0 years of age.
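One way to quantify a sentence-final F0 rise of the kind measured here is sketched below; it assumes librosa's pYIN pitch tracker, and the 200 ms analysis windows are arbitrary illustrative choices, not the windows used in the study.

# Sketch: estimate the sentence-final F0 rise of a recorded question (illustrative only).
import numpy as np
import librosa

def final_f0_rise_semitones(wav_path, window_s=0.2):
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"), sr=sr)
    times = librosa.times_like(f0, sr=sr)
    end = times[-1]
    final = f0[(times > end - window_s) & voiced]
    pre = f0[(times > end - 2 * window_s) & (times <= end - window_s) & voiced]
    if len(final) == 0 or len(pre) == 0:
        return np.nan
    # Express the rise as the semitone difference between the two windows.
    return 12 * np.log2(np.nanmean(final) / np.nanmean(pre))

# print(final_f0_rise_semitones("question.wav"))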
// ============================================================================ // // Copyright (C) 2006-2021 Talend Inc. - www.talend.com // // This source code is available under agreement available at // %InstallDIR%\features\org.talend.rcp.branding.%PRODUCTNAME%\%PRODUCTNAME%license.txt // // You should have received a copy of the agreement // along with this program; if not, write to Talend SA // 9 rue Pages 92150 Suresnes, France // // ============================================================================ package org.talend.core.repository.ui.dialog; import org.eclipse.jface.dialogs.Dialog; import org.eclipse.jface.dialogs.IDialogConstants; import org.eclipse.jface.dialogs.IInputValidator; import org.eclipse.jface.resource.StringConverter; import org.eclipse.swt.SWT; import org.eclipse.swt.events.ModifyEvent; import org.eclipse.swt.events.ModifyListener; import org.eclipse.swt.layout.GridData; import org.eclipse.swt.widgets.Button; import org.eclipse.swt.widgets.Composite; import org.eclipse.swt.widgets.Control; import org.eclipse.swt.widgets.Label; import org.eclipse.swt.widgets.Shell; import org.eclipse.swt.widgets.Text; /** * * created by hcyi on Jul 3, 2015 Detailled comment * */ public class CustomInputDialog extends Dialog { /* * The title of the dialog. */ private String title; /** * The message to display, or <code>null</code> if none. */ private String message; /** * The input value; the empty string by default. */ private String value = "";//$NON-NLS-1$ /** * The input validator, or <code>null</code> if none. */ private IInputValidator validator; /** * Ok button widget. */ private Button okButton; /** * Input text widget. */ private Text text; /** * Error message label widget. */ protected Text errorMessageText; /** * Error message string. */ private String errorMessage; /** * Creates an input dialog with OK and Cancel buttons. Note that the dialog will have no visual representation (no * widgets) until it is told to open. * <p> * Note that the <code>open</code> method blocks for input dialogs. 
* </p> * * @param parentShell the parent shell, or <code>null</code> to create a top-level shell * @param dialogTitle the dialog title, or <code>null</code> if none * @param dialogMessage the dialog message, or <code>null</code> if none * @param initialValue the initial input value, or <code>null</code> if none (equivalent to the empty string) * @param validator an input validator, or <code>null</code> if none */ public CustomInputDialog(Shell parentShell, String dialogTitle, String dialogMessage, String initialValue, IInputValidator validator) { super(parentShell); this.title = dialogTitle; message = dialogMessage; if (initialValue == null) { value = "";//$NON-NLS-1$ } else { value = initialValue; } this.validator = validator; } @Override protected void buttonPressed(int buttonId) { if (buttonId == IDialogConstants.OK_ID) { value = text.getText(); } else { value = null; } super.buttonPressed(buttonId); } @Override protected void configureShell(Shell shell) { super.configureShell(shell); if (title != null) { shell.setText(title); } } @Override protected void createButtonsForButtonBar(Composite parent) { // create OK and Cancel buttons by default okButton = createButton(parent, IDialogConstants.OK_ID, IDialogConstants.OK_LABEL, true); createButton(parent, IDialogConstants.CANCEL_ID, IDialogConstants.CANCEL_LABEL, false); // do this here because setting the text will set enablement on the ok // button text.setFocus(); if (value != null) { text.setText(value); text.selectAll(); } } @Override protected Control createDialogArea(Composite parent) { Composite composite = (Composite) super.createDialogArea(parent); createMessageWidget(parent, composite); createInputWidget(composite); createErrorMessageWidget(composite); applyDialogFont(composite); return composite; } protected void createMessageWidget(Composite parent, Composite composite) { // create message if (message != null) { Label label = new Label(composite, SWT.WRAP); label.setText(message); GridData data = new GridData(GridData.GRAB_HORIZONTAL | GridData.GRAB_VERTICAL | GridData.HORIZONTAL_ALIGN_FILL | GridData.VERTICAL_ALIGN_CENTER); data.widthHint = convertHorizontalDLUsToPixels(IDialogConstants.MINIMUM_MESSAGE_AREA_WIDTH); label.setLayoutData(data); label.setFont(parent.getFont()); } } protected void createInputWidget(Composite composite) { text = new Text(composite, getInputTextStyle()); text.setLayoutData(new GridData(GridData.GRAB_HORIZONTAL | GridData.HORIZONTAL_ALIGN_FILL)); text.addModifyListener(new ModifyListener() { @Override public void modifyText(ModifyEvent e) { validateInput(); } }); } protected void createErrorMessageWidget(Composite composite) { errorMessageText = new Text(composite, SWT.READ_ONLY | SWT.WRAP); errorMessageText.setLayoutData(new GridData(GridData.GRAB_HORIZONTAL | GridData.HORIZONTAL_ALIGN_FILL)); errorMessageText.setBackground(errorMessageText.getDisplay().getSystemColor(SWT.COLOR_WIDGET_BACKGROUND)); setErrorMessage(errorMessage); } /** * Returns the error message label. * * @return the error message label * @deprecated use setErrorMessage(String) instead */ @Deprecated protected Label getErrorMessageLabel() { return null; } /** * Returns the ok button. * * @return the ok button */ protected Button getOkButton() { return okButton; } /** * Returns the text area. * * @return the text area */ protected Text getText() { return text; } /** * Returns the validator. 
* * @return the validator */ protected IInputValidator getValidator() { return validator; } /** * Returns the string typed into this input dialog. * * @return the input string */ public String getValue() { return value; } /** * Validates the input. * <p> * The default implementation of this framework method delegates the request to the supplied input validator object; * if it finds the input invalid, the error message is displayed in the dialog's message line. This hook method is * called whenever the text changes in the input field. * </p> */ protected void validateInput() { String errorMessage = null; if (validator != null) { errorMessage = validator.isValid(text.getText()); } setErrorMessage(errorMessage); } /** * Sets or clears the error message. If not <code>null</code>, the OK button is disabled. * * @param errorMessage the error message, or <code>null</code> to clear * @since 3.0 */ public void setErrorMessage(String errorMessage) { this.errorMessage = errorMessage; if (errorMessageText != null && !errorMessageText.isDisposed()) { errorMessageText.setText(errorMessage == null ? " \n " : errorMessage); //$NON-NLS-1$ // Disable the error message text control if there is no error, or // no error text (empty or whitespace only). Hide it also to avoid // color change. // See https://bugs.eclipse.org/bugs/show_bug.cgi?id=130281 boolean hasError = errorMessage != null && (StringConverter.removeWhiteSpaces(errorMessage)).length() > 0; errorMessageText.setEnabled(hasError); errorMessageText.setVisible(hasError); errorMessageText.getParent().update(); // Access the ok button by id, in case clients have overridden button creation. // See https://bugs.eclipse.org/bugs/show_bug.cgi?id=113643 Control button = getButton(IDialogConstants.OK_ID); if (button != null) { button.setEnabled(errorMessage == null); } } } /** * Returns the style bits that should be used for the input text field. Defaults to a single line entry. Subclasses * may override. * * @return the integer style bits that should be used when creating the input text * * @since 3.4 */ protected int getInputTextStyle() { return SWT.SINGLE | SWT.BORDER; } }
package net.saisho.structureadvancertool.block; import net.saisho.structureadvancertool.gui.StructureAdvancerGUIGui; import net.saisho.structureadvancertool.StructureAdvancerToolModElements; import net.minecraftforge.registries.ObjectHolder; import net.minecraftforge.fml.network.NetworkHooks; import net.minecraft.world.World; import net.minecraft.world.IBlockReader; import net.minecraft.util.text.StringTextComponent; import net.minecraft.util.text.ITextComponent; import net.minecraft.util.math.BlockRayTraceResult; import net.minecraft.util.math.BlockPos; import net.minecraft.util.Rotation; import net.minecraft.util.Mirror; import net.minecraft.util.Hand; import net.minecraft.util.Direction; import net.minecraft.util.ActionResultType; import net.minecraft.state.StateContainer; import net.minecraft.state.DirectionProperty; import net.minecraft.network.PacketBuffer; import net.minecraft.loot.LootContext; import net.minecraft.item.ItemStack; import net.minecraft.item.ItemGroup; import net.minecraft.item.Item; import net.minecraft.item.BlockItemUseContext; import net.minecraft.item.BlockItem; import net.minecraft.inventory.container.INamedContainerProvider; import net.minecraft.inventory.container.Container; import net.minecraft.entity.player.ServerPlayerEntity; import net.minecraft.entity.player.PlayerInventory; import net.minecraft.entity.player.PlayerEntity; import net.minecraft.block.material.Material; import net.minecraft.block.SoundType; import net.minecraft.block.DirectionalBlock; import net.minecraft.block.BlockState; import net.minecraft.block.Block; import java.util.List; import java.util.Collections; import io.netty.buffer.Unpooled; @StructureAdvancerToolModElements.ModElement.Tag public class AdvancerBlock extends StructureAdvancerToolModElements.ModElement { @ObjectHolder("structure_advancer_tool:advancer") public static final Block block = null; public AdvancerBlock(StructureAdvancerToolModElements instance) { super(instance, 1); } @Override public void initElements() { elements.blocks.add(() -> new CustomBlock()); elements.items .add(() -> new BlockItem(block, new Item.Properties().group(ItemGroup.BUILDING_BLOCKS)).setRegistryName(block.getRegistryName())); } public static class CustomBlock extends Block { public static final DirectionProperty FACING = DirectionalBlock.FACING; public CustomBlock() { super(Block.Properties.create(Material.ROCK).sound(SoundType.GROUND).hardnessAndResistance(1f, 10f).setLightLevel(s -> 0)); this.setDefaultState(this.stateContainer.getBaseState().with(FACING, Direction.NORTH)); setRegistryName("advancer"); } @Override public int getOpacity(BlockState state, IBlockReader worldIn, BlockPos pos) { return 15; } @Override protected void fillStateContainer(StateContainer.Builder<Block, BlockState> builder) { builder.add(FACING); } public BlockState rotate(BlockState state, Rotation rot) { return state.with(FACING, rot.rotate(state.get(FACING))); } public BlockState mirror(BlockState state, Mirror mirrorIn) { return state.rotate(mirrorIn.toRotation(state.get(FACING))); } @Override public BlockState getStateForPlacement(BlockItemUseContext context) { ; return this.getDefaultState().with(FACING, context.getNearestLookingDirection().getOpposite()); } @Override public List<ItemStack> getDrops(BlockState state, LootContext.Builder builder) { List<ItemStack> dropsOriginal = super.getDrops(state, builder); if (!dropsOriginal.isEmpty()) return dropsOriginal; return Collections.singletonList(new ItemStack(this, 1)); } @Override public ActionResultType 
onBlockActivated(BlockState blockstate, World world, BlockPos pos, PlayerEntity entity, Hand hand, BlockRayTraceResult hit) { super.onBlockActivated(blockstate, world, pos, entity, hand, hit); int x = pos.getX(); int y = pos.getY(); int z = pos.getZ(); if (entity instanceof ServerPlayerEntity) { NetworkHooks.openGui((ServerPlayerEntity) entity, new INamedContainerProvider() { @Override public ITextComponent getDisplayName() { return new StringTextComponent("Advancer"); } @Override public Container createMenu(int id, PlayerInventory inventory, PlayerEntity player) { return new StructureAdvancerGUIGui.GuiContainerMod(id, inventory, new PacketBuffer(Unpooled.buffer()).writeBlockPos(new BlockPos(x, y, z))); } }, new BlockPos(x, y, z)); } return ActionResultType.SUCCESS; } } }
import sys, os, io
I = io.BytesIO(os.read(0, os.fstat(0).st_size)).readline

# ans[i] holds a triple [a, b, c] with 3*a + 5*b + 7*c == i, or [] if i is unreachable.
ans = [[0, 0, 0]]
vs = [3, 5, 7]
for i in range(1, 1001):
    for v in vs:
        if i >= v and len(ans[i - v]) > 0:
            l = ans[i - v][::]
            l[vs.index(v)] += 1
            ans.append(l)
            break
    else:
        ans.append([])

for tc in range(1, 1 + int(I())):
    n = int(I())
    if len(ans[n]) > 0:
        print(*ans[n])
    else:
        print(-1)
A bilattice-based trust model for personalizing recommendations Collaboration, interaction and information sharing are some of the key concepts of the next generation of web applications known as Web 2.0. A recommender system (RS) matches this description very well. Such a system is designed to suggest items (movies, articles,...) to users who might be interested in them. One of the widely used approaches is collaborative filtering, a technique that attempts to identify similar users and recommends items that those users liked. In order to determine the necessary interconnections between these users (and between users of a social network in the broad sense), a collection of data mining techniques commonly referred to as social network analysis is applied. Many online social networks consist of agents (humans or machines) connected by scores indicating how much they trust, or distrust, each other. Typically, such a trust network is sparse. Hence, a very important problem in trust networks is the determination of the scores of the agent pairs for which no explicit score is given. Trust propagation and aggregation operators can be used to solve this problem. Other applications in the context of social network analysis include e.g. to locate (un)trustworthy people in a network. We explain how to alleviate key problems in RSs by establishing a trust network among its users. We propose a new model in which trust scores (i.e. couples consisting of a trust value and a distrust value) are derived from a bilattice that preserves valuable trust provenance information including partial trust, partial distrust, ignorance and inconsistency. Being able to distinguish between those four concepts yields more accurate trust predictions, and consequently more and better recommendations. However, such an approach brings along some new difficulties as well. We focus on the trust score propagation problem and discuss possible ways to combine a recommendation from an unknown agent with the available trust scores, to obtain a personalized recommendation.
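A toy illustration of the trust-score idea follows, assuming the usual square bilattice on [0,1]×[0,1]; the propagation rule shown is just one simple choice for illustration, not necessarily the operator proposed in the paper.

# Toy trust scores on the bilattice [0,1]^2: (trust, distrust) pairs.
# Illustrative only; the propagation operator below is one simple choice among many.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustScore:
    t: float   # trust degree
    d: float   # distrust degree

IGNORANCE = TrustScore(0.0, 0.0)        # no information at all
INCONSISTENCY = TrustScore(1.0, 1.0)    # fully contradictory information

def knowledge_leq(a: TrustScore, b: TrustScore) -> bool:
    """True if b carries at least as much information as a."""
    return a.t <= b.t and a.d <= b.d

def propagate(ab: TrustScore, bc: TrustScore) -> TrustScore:
    """Estimate a's score of c through b: opinions are only passed along trusted links."""
    return TrustScore(ab.t * bc.t, ab.t * bc.d)

a_b = TrustScore(0.8, 0.1)   # a partially trusts b
b_c = TrustScore(0.5, 0.4)   # b is ambivalent about c
print(propagate(a_b, b_c))             # TrustScore(t=0.4, d=0.32)
print(knowledge_leq(IGNORANCE, a_b))   # True: any score is more informative than ignorance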
#pragma once
#include <string>
#include <cstring>
#include <cstdio>
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>
#include <sys/select.h>
#include <sys/time.h>
#include <arpa/inet.h>
#include <netinet/tcp.h>
#include <fcntl.h>
#include <unistd.h>
#include <stddef.h>
#include <thread>

using namespace std;

class async_tcp_server
{
private:
    string _host;
    int _port;
    volatile bool _running = false;
    int _fd = -1;
    struct sockaddr_in _addr;
    fd_set _readset;
    struct timeval _interval;
    int _error = 0;
    thread* _server_thread = nullptr;

public:
    //create async, re-useable, no-delay socket, bind host:port, listen
    async_tcp_server(const string &host, short port, bool resuable = true, bool nodelay = true)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int enable = 1;
        if (resuable)
        {
            //RE-USEABLE
            if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR | SO_NOSIGPIPE, &enable, sizeof(enable)) < 0)
            {
                fprintf(stderr, "SO_REUSEADDR|SO_NOSIGPIPE set failed, errno: %d\n", errno);
                _error = errno;
                return;
            }
        }
        //SOL_TCP
        if (nodelay)
        {
            if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &enable, sizeof(enable)) < 0)
            {
                fprintf(stderr, "TCP_NODELAY set failed, errno: %d\n", errno);
                _error = errno;
                return;
            }
        }
        //bind
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = inet_addr(host.data());
        addr.sin_port = htons(port);
        if (::bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        {
            fprintf(stderr, "bind to %s:%d failed, errno: %d\n", host.data(), (int)port, errno);
            _error = errno;
            return;
        }
        //listen
        int backlog = 100;
        if (listen(fd, backlog) < 0)
        {
            fprintf(stderr, "bind to %s:%d failed\n", host.data(), (int)port);
            _error = errno;
            return;
        }
        _fd = fd;
        //set non-blocking
        set_nonblocking(_fd);
        _host = host;
        _port = port;
        _running = false;
        _addr = addr;
        _error = 0;
        FD_ZERO(&_readset);
        _interval.tv_sec = 0;
        _interval.tv_usec = 10000; //10ms
        printf("create async server socket success.\n");
    }

    ~async_tcp_server()
    {
        stop();
    }

    int set_nonblocking(int fd)
    {
        int old_option = fcntl(fd, F_GETFL);
        int new_option = old_option | O_NONBLOCK;
        return fcntl(fd, F_SETFL, new_option);
    }

    // stop serving and close sockets
    void stop()
    {
        if (!_running)
            return;
        {
            _running = false;
            _server_thread->join();
            delete _server_thread;
            _server_thread = nullptr;
            close(_fd);
            _fd = -1;
        }
        printf("closed server.\n");
    }

    void waitforshutdown()
    {
        if (_running)
        {
            _server_thread->join();
            delete _server_thread;
            _server_thread = nullptr;
            _running = false;
            close(_fd);
            _fd = -1;
        }
    }

    void start()
    {
        if (_running)
        {
            printf("server already started.\n");
            return;
        }
        _running = true;
        _server_thread = new thread([&](async_tcp_server* serv){ serv->serving(); }, this);
        printf("start serving thread...\n");
    }

private:
    static void run_routine(void *server)
    {
        async_tcp_server *serv = (async_tcp_server *)server;
        serv->serving(); //select polling
    }

    // running thread to run async-accept or recv message from read_fd_set
    void serving()
    {
        printf("**** serving...\n");
        FD_SET(_fd, &_readset); //add server fd
        fd_set testfds;
        while (_running)
        {
            //copy readset
            FD_COPY(&_readset, &testfds);
            //do select polling
            int result = select(FD_SETSIZE, &testfds, (fd_set *)0, (fd_set *)0, &_interval); //FD_SETSIZE: the system default maximum number of file descriptors
            if (result < 0) //returns 0 when the interval expires
            {
                perror("select _error");
                _error = errno;
                close(_fd);
                break;
            }
            else if (result == 0)
            {
                //printf("timeout...\n");
                continue;
            }
            /* scan fd */
            for (int fd = 0; fd < FD_SETSIZE; fd++)
            {
                /* check if readable */
                if (FD_ISSET(fd, &testfds))
                {
                    //maybe we should check if the fd is valid, with SOL_SOCKET, SO_TYPE
                    /* server fd readable, get new connect */
                    if (fd == _fd)
                    {
                        struct sockaddr client_address;
                        socklen_t client_len = sizeof(client_address);
                        int client_sockfd = accept(_fd, (struct sockaddr*)&client_address, &client_len);
                        FD_SET(client_sockfd, &_readset); //add to listen readfdsets
                        printf("adding client on fd %d\n", client_sockfd);
                    }
                    /* recv message from clients */
                    else
                    {
                        //1. recv msg size
                        int msg_size = 0;
                        int bytes = 0;
                        int len = sizeof(msg_size);
                        bool fd_closed = false;
                        int so_errno = 0;
                        socklen_t so_errno_len = sizeof(so_errno);
                        while (bytes < len)
                        {
                            int ret = recv(fd, ((char*)(&msg_size)) + bytes, sizeof(msg_size) - bytes, 0);
                            getsockopt(fd, SOL_SOCKET, SO_ERROR, &so_errno, &so_errno_len);
                            if (ret == 0 || ret < 0 && so_errno != 0 && so_errno != EAGAIN && so_errno != EWOULDBLOCK && so_errno != ETIMEDOUT)
                            //if (ret < 0 && errno != EAGAIN && errno != EWOULDBLOCK && errno != ETIMEDOUT)
                            {
                                FD_CLR(fd, &_readset);
                                ::close(fd);
                                printf("ret: %d, errno: %d, closing the connect fd:%d\n", ret, errno, fd);
                                fd_closed = true;
                                break;
                            }
                            // else if (ret == 0)
                            // {
                            //     printf("warn: client close the connection. errno: %d, fd: %d\n", errno, fd);
                            // }
                            bytes += ret;
                            usleep(10); //10us
                        }
                        if (fd_closed)
                            continue;
                        printf("recv msg_size: %d\n", msg_size);
                        //2. recv msg content
                        char* buf = new char[msg_size];
                        bytes = 0;
                        while (bytes < msg_size)
                        {
                            int ret = recv(fd, (char*)buf + bytes, msg_size - bytes, 0);
                            if (ret <= 0 && errno != EAGAIN && errno != EWOULDBLOCK && errno != ETIMEDOUT)
                            {
                                FD_CLR(fd, &_readset);
                                ::close(fd);
                                printf("ret = %d, errno: %d, closing the connect fd:%d\n", ret, errno, fd);
                                fd_closed = true;
                                break;
                            }
                            bytes += ret;
                            usleep(10); //10us
                        }
                        if (fd_closed)
                        {
                            delete[] buf;
                            continue;
                        }
                        //echo back
                        ::send(fd, &msg_size, sizeof(msg_size), 0);
                        ::send(fd, buf, msg_size, 0);
                        printf("recv msg content, %s ok.\n", buf);
                        delete[] buf;
                    }
                }
            }
        } //while(running)
        _running = false;
        printf("serving end.\n");
    }
};
/* * Copyright 2020 nuwansa. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.cloudimpl.stream.service; import com.cloudimpl.cluster.collection.CollectionOptions; import com.cloudimpl.cluster.collection.CollectionProvider; import com.cloudimpl.cluster4j.common.CloudMessage; import com.cloudimpl.cluster4j.common.RouterType; import com.cloudimpl.cluster4j.core.Inject; import com.cloudimpl.cluster4j.core.annon.CloudFunction; import com.cloudimpl.cluster4j.core.annon.Router; import java.util.NavigableMap; import java.util.function.Function; /** @author nuwansa */ @CloudFunction(name = "StreamService") @Router(routerType = RouterType.LEADER) public class StreamService implements Function<CloudMessage, CloudMessage> { private final CollectionProvider collectionProvider; private final NavigableMap<String, StreamDetail> streamDetails; @Inject public StreamService(CollectionProvider collectionProvider) { this.collectionProvider = collectionProvider; this.streamDetails = collectionProvider.createNavigableMap( "StreamIndex", CollectionOptions.builder().withOption("TableName", "StreamIndexTable").build()); } @Override public CloudMessage apply(CloudMessage t) { throw new UnsupportedOperationException( "Not supported yet."); // To change body of generated methods, choose Tools | Templates. } }
Self-Transcendence and Activities of Daily Living

The number of older adults in our population is steadily increasing. Many older adults remain active and care for themselves. However, differences exist in older adults' ability to perform activities of daily living. The purpose of the study was to explore relationships among self-transcendence (ST), health status (SHS), and ability to perform activities of daily living (ADL) in noninstitutionalized older adults. The 88 participants were primarily widowed, White women, 65 years of age and older (M = 73.4), who perceived their health positively and had 12 years or more of education. Findings included statistically significant relationships between ST and ADL and between SHS and ADL. Twenty-two percent of the variance in ability to perform ADL was explained by SHS, and an additional 6% was explained by ST. Nurses are encouraged to explore factors that contribute to older adults' ability to remain independent.
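The reported figures come from a hierarchical regression in which health status is entered first and self-transcendence second, with the increase in R-squared read as the additional variance explained. As a rough illustration of that logic only, here is a small Python sketch on synthetic data; the variables are simulated, not the study's.

import numpy as np

rng = np.random.default_rng(0)
n = 88
shs = rng.normal(size=n)                          # simulated health status scores
st = 0.4 * shs + rng.normal(size=n)               # simulated self-transcendence, correlated with SHS
adl = 0.5 * shs + 0.3 * st + rng.normal(size=n)   # simulated ADL ability

def r_squared(y, X):
    # R^2 of an ordinary least-squares fit of y on X (intercept included).
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

r2_shs = r_squared(adl, shs)                           # step 1: SHS alone
r2_both = r_squared(adl, np.column_stack([shs, st]))   # step 2: SHS + ST
print(r2_shs, r2_both - r2_shs)  # variance explained by SHS, extra variance added by ST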
/* * Copyright 2009, Google Inc. * All rights reserved. * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * * Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * Redistributions in binary form must reproduce the above * copyright notice, this list of conditions and the following disclaimer * in the documentation and/or other materials provided with the * distribution. * * Neither the name of Google Inc. nor the names of its * contributors may be used to endorse or promote products derived from * this software without specific prior written permission. * * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ // This file contains the class definitions for TimingTable and TimingRecord, // which together make up a quick-and-dirty profiling tool for hand-instrumented // code. #ifndef O3D_CORE_CROSS_TIMINGTABLE_H_ #define O3D_CORE_CROSS_TIMINGTABLE_H_ #ifdef PROFILE_CLIENT #include <map> #include <string> #include "core/cross/timer.h" #include "utils/cross/structured_writer.h" // A record for keeping track of timing stats for a single section of code, // identified by a string. TimingRecord stores enough info to tell you the max, // min, and mean time that the code segment took, and the number of times it was // called. It also reports unfinished and unbegun calls, which are cases in // which you told it to start or finish recording data, but that call failed to // have a corresponding finish or start. Unfinished calls usually mean that you // failed to instrument a branch exiting the block early. Unstarted calls are // probably more serious bugs. class TimingRecord { public: TimingRecord() : started_(false), unfinished_(0), unbegun_(0), calls_(0), time_(0), min_time_(LONG_MAX), max_time_(0) { } void Start() { if (started_) { ++unfinished_; } started_ = true; timer_.GetElapsedTimeAndReset(); // This is how you reset the timer. } void Stop() { if (started_) { started_= false; ++calls_; float time = timer_.GetElapsedTimeAndReset(); if (time > max_time_) { max_time_ = time; } if (time < min_time_) { min_time_ = time; } time_ += time; } else { ++unbegun_; } } int UnfinishedCount() const { return unfinished_; } int UnbegunCount() const { return unbegun_; } int CallCount() const { return calls_; } float TimeSpent() const { return time_; } float MinTime() const { return min_time_; } float MaxTime() const { return max_time_; } void Write(o3d::StructuredWriter* writer) const { writer->OpenObject(); writer->WritePropertyName("max"); writer->WriteFloat(max_time_); writer->WritePropertyName("min"); writer->WriteFloat(min_time_); writer->WritePropertyName("mean"); writer->WriteFloat(calls_ ? 
time_ / calls_ : 0); writer->WritePropertyName("total"); writer->WriteFloat(time_); writer->WritePropertyName("calls"); writer->WriteInt(calls_); if (unfinished_) { writer->WritePropertyName("unfinished"); writer->WriteInt(unfinished_); } if (unbegun_) { writer->WritePropertyName("unbegun"); writer->WriteInt(unbegun_); } writer->CloseObject(); } private: bool started_; int unfinished_; int unbegun_; int calls_; float time_; float min_time_; float max_time_; o3d::ElapsedTimeTimer timer_; }; // The TimingTable is a quick-and-dirty profiler for hand-instrumented code. // Don't call its functions directly; wrap them in macros so that they can be // compiled in optionally. Currently we use GLUE_PROFILE_START/STOP/etc. in the // glue code [defined in common.h] and PROFILE_START/STOP/etc. elsewhere // in the plugin [defined below]. class TimingTable { public: TimingTable() { } virtual ~TimingTable() {} virtual void Reset() { table_.clear(); } virtual void Start(const o3d::String& key) { table_[key].Start(); } virtual void Stop(const o3d::String& key) { table_[key].Stop(); } virtual void Write(o3d::StructuredWriter* writer) { std::map<o3d::String, TimingRecord>::iterator iter; writer->OpenArray(); for (iter = table_.begin(); iter != table_.end(); ++iter) { const TimingRecord& record = iter->second; if (record.CallCount() || record.UnfinishedCount() || record.UnbegunCount()) { writer->OpenObject(); writer->WritePropertyName(iter->first); iter->second.Write(writer); writer->CloseObject(); } } writer->CloseArray(); writer->Close(); } private: std::map<o3d::String, TimingRecord> table_; }; #endif // PROFILE_CLIENT #endif // O3D_CORE_CROSS_TIMINGTABLE_H_
THE EFFECTIVENESS OF TECHNOLOGY-ENABLED LEARNING AMONG THE TEACHER TRAINEES OF THE DIPLOMA IN TEACHER EDUCATION (D.TED.) PROGRAMME.

Technology-enabled learning means selecting an appropriate technology and improving learning performance through a suitable learning environment. The aim of the study is to examine the effectiveness of technology-enabled learning among Diploma in Teacher Education teacher trainees. The investigator adopted an experimental research method to test the framed hypotheses. A total sample of thirty first-year diploma teacher trainees from Vellore district was chosen for the present study. The findings show that there is a significant difference between the mean pre-test and post-test scores for learning science through the lecture method. There is also a significant difference between the pre-test and post-test scores of the experimental group learning science through technology-enabled learning. Both the lecture method and technology-enabled learning (TEL) were therefore effective in terms of achievement among the Diploma in Teacher Education students. However, a comparison of the post-test scores of the experimental and control groups reveals that technology-enabled learning was more effective than the lecture method. Hence it is recommended that technology-enabled learning be used for students in the Diploma in Teacher Education programme.
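The abstract does not name the statistical test used, but a paired t-test on the pre-test and post-test scores of the same trainees is the standard way to check such a difference. The sketch below uses invented scores for a group of thirty purely to show the mechanics.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30                                               # thirty first-year trainees
pre = rng.normal(loc=12.0, scale=3.0, size=n)        # invented pre-test science scores
post = pre + rng.normal(loc=4.0, scale=2.0, size=n)  # invented post-test scores after TEL

t_stat, p_value = stats.ttest_rel(post, pre)         # paired (dependent-samples) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level would indicate a significant
# pre/post difference, the kind of result the study reports for both groups.

Comparing the post-test scores of the experimental and control groups, as the final finding does, would instead call for an independent-samples test such as scipy.stats.ttest_ind.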
package def.dom; public class DeviceRotationRate extends def.js.Object { public double alpha; public double beta; public double gamma; public static DeviceRotationRate prototype; public DeviceRotationRate(){} }
/** * Returns the value of a device to someone who is buying the device. * \param resIndex resource id * \param device device type */ double Agent::buyerDeviceValue(int resIndex, device_name_t device) { double gainOverDeviceLife = glob.discoveredDevices[device][resIndex]->gainOverLifetime(*this); double value1 = gainOverDeviceLife - (endDayGPM * glob.discoveredDevices[device][resIndex]->lifetime); vector< double > consideredValues; if (value1 > 0) { consideredValues.push_back(value1); if (devProp[device][resIndex].getDeviceExperience() >= 1.0) { consideredValues.push_back(glob.discoveredDevices[device][resIndex]->costs(*this)); } return *min_element(consideredValues.begin(), consideredValues.end()); } else { return 0.0; } }
import { formatFromFilename, generateImageData, IGatsbyImageHelperArgs, IImage, getLowResolutionImageURL, } from "../image-utils" const generateImageSource = ( file: string, width: number, height: number, format ): IImage => { return { src: `https://example.com/${file}/${width}/${height}/image.${format}`, width, height, format, } } const args: IGatsbyImageHelperArgs = { pluginName: `gatsby-plugin-fake`, filename: `afile.jpg`, generateImageSource, width: 400, layout: `fixed`, sourceMetadata: { width: 800, height: 600, format: `jpg`, }, reporter: { warn: jest.fn(), }, } const fullWidthArgs: IGatsbyImageHelperArgs = { ...args, sourceMetadata: { width: 2000, height: 1500, format: `jpg`, }, layout: `fullWidth`, } const constrainedArgs: IGatsbyImageHelperArgs = { ...args, layout: `constrained`, } describe(`the image data helper`, () => { beforeEach(() => { jest.resetAllMocks() }) it(`throws if there's not a valid generateImageData function`, () => { const generateImageSource = `this should be a function` expect(() => generateImageData(({ ...args, generateImageSource, } as any) as IGatsbyImageHelperArgs) ).toThrow() }) it(`warns if generateImageSource function returns invalid values`, () => { const generateImageSource = jest .fn() .mockReturnValue({ width: 100, height: 200, src: undefined }) const myArgs = { ...args, generateImageSource, } generateImageData(myArgs) expect(args.reporter?.warn).toHaveBeenCalledWith( `[gatsby-plugin-fake] The resolver for image afile.jpg returned an invalid value.` ) ;(args.reporter?.warn as jest.Mock).mockReset() generateImageSource.mockReturnValue({ width: 100, height: undefined, src: `example`, format: `jpg`, }) generateImageData(myArgs) expect(args.reporter?.warn).toHaveBeenCalledWith( `[gatsby-plugin-fake] The resolver for image afile.jpg returned an invalid value.` ) ;(args.reporter?.warn as jest.Mock).mockReset() generateImageSource.mockReturnValue({ width: undefined, height: 100, src: `example`, format: `jpg`, }) generateImageData(myArgs) expect(args.reporter?.warn).toHaveBeenCalledWith( `[gatsby-plugin-fake] The resolver for image afile.jpg returned an invalid value.` ) ;(args.reporter?.warn as jest.Mock).mockReset() generateImageSource.mockReturnValue({ width: 100, height: 100, src: `example`, format: undefined, }) generateImageData(myArgs) expect(args.reporter?.warn).toHaveBeenCalledWith( `[gatsby-plugin-fake] The resolver for image afile.jpg returned an invalid value.` ) ;(args.reporter?.warn as jest.Mock).mockReset() generateImageSource.mockReturnValue({ width: 100, height: 100, src: `example`, format: `jpg`, }) generateImageData(myArgs) expect(args.reporter?.warn).not.toHaveBeenCalled() }) it(`warns if there's no plugin name`, () => { generateImageData(({ ...args, pluginName: undefined, } as any) as IGatsbyImageHelperArgs) expect(args.reporter?.warn).toHaveBeenCalledWith( `[gatsby-plugin-image] "generateImageData" was not passed a plugin name` ) }) it(`calls the generateImageSource function`, () => { const generateImageSource = jest.fn() generateImageData({ ...args, generateImageSource }) expect(generateImageSource).toHaveBeenCalledWith( `afile.jpg`, 800, 600, `jpg`, undefined, undefined ) }) it(`calculates sizes for fixed`, () => { const data = generateImageData(args) expect(data.images.fallback?.sizes).toEqual(`400px`) }) it(`calculates sizes for fullWidth`, () => { const data = generateImageData(fullWidthArgs) expect(data.images.fallback?.sizes).toEqual(`100vw`) }) it(`calculates sizes for constrained`, () => { const data = 
generateImageData(constrainedArgs) expect(data.images.fallback?.sizes).toEqual( `(min-width: 400px) 400px, 100vw` ) }) it(`returns URLs for fixed`, () => { const data = generateImageData(args) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/400/300/image.jpg` ) expect(data.images?.sources?.[0].srcSet).toEqual( `https://example.com/afile.jpg/400/300/image.webp 400w,\nhttps://example.com/afile.jpg/800/600/image.webp 800w` ) }) it(`returns URLs for fullWidth`, () => { const data = generateImageData(fullWidthArgs) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/750/563/image.jpg` ) expect(data.images?.sources?.[0].srcSet) .toEqual(`https://example.com/afile.jpg/750/563/image.webp 750w, https://example.com/afile.jpg/1080/810/image.webp 1080w, https://example.com/afile.jpg/1366/1025/image.webp 1366w, https://example.com/afile.jpg/1920/1440/image.webp 1920w`) }) it(`converts to PNG if requested`, () => { const data = generateImageData({ ...args, formats: [`png`] }) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/400/300/image.png` ) }) it(`does not include sources if only jpg or png format is specified`, () => { let data = generateImageData({ ...args, formats: [`auto`] }) expect(data.images?.sources?.length).toBe(0) data = generateImageData({ ...args, formats: [`png`] }) expect(data.images?.sources?.length).toBe(0) data = generateImageData({ ...args, formats: [`jpg`] }) expect(data.images?.sources?.length).toBe(0) }) it(`does not include fallback if only webp format is specified`, () => { const data = generateImageData({ ...args, formats: [`webp`] }) expect(data.images?.sources?.length).toBe(1) expect(data.images?.fallback).toBeUndefined() }) it(`does not include fallback if only avif format is specified`, () => { const data = generateImageData({ ...args, formats: [`avif`] }) expect(data.images?.sources?.length).toBe(1) expect(data.images?.fallback).toBeUndefined() }) it(`generates the same output as the input format if output is auto`, () => { const sourceMetadata = { width: 800, height: 600, format: `jpg`, } let data = generateImageData({ ...args, formats: [`auto`] }) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/400/300/image.jpg` ) expect(data.images?.sources?.length).toBe(0) data = generateImageData({ ...args, sourceMetadata: { ...sourceMetadata, format: `png` }, formats: [`auto`], }) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/400/300/image.png` ) expect(data.images?.sources?.length).toBe(0) }) it(`treats empty formats or empty string as auto`, () => { let data = generateImageData({ ...args, formats: [``] }) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/400/300/image.jpg` ) expect(data.images?.sources?.length).toBe(0) data = generateImageData({ ...args, formats: [] }) expect(data?.images?.fallback?.src).toEqual( `https://example.com/afile.jpg/400/300/image.jpg` ) expect(data.images?.sources?.length).toBe(0) }) }) describe(`the helper utils`, () => { it(`gets file format from filename`, () => { const names = [ `filename.jpg`, `filename.jpeg`, `filename.png`, `filename.heic`, `filename.jp`, `filename.jpgjpg`, `file.name.jpg`, `file.name.`, `filenamejpg`, `.jpg`, ] const expected = [ `jpg`, `jpg`, `png`, `heic`, undefined, undefined, `jpg`, undefined, undefined, `jpg`, ] for (const idx in names) { const ext = formatFromFilename(names[idx]) expect(ext).toBe(expected[idx]) } }) it(`gets a low-resolution image URL`, () => { 
const url = getLowResolutionImageURL(args) expect(url).toEqual(`https://example.com/afile.jpg/20/15/image.jpg`) }) it(`gets a low-resolution image URL with correct aspect ratio`, () => { const url = getLowResolutionImageURL({ ...fullWidthArgs, aspectRatio: 2 / 1, }) expect(url).toEqual(`https://example.com/afile.jpg/20/10/image.jpg`) }) })
import java.io.*;
import java.util.*;
import java.util.stream.Collectors;

public class IOTest
{
    public static void main(String[] args)
    {
        DLinkedList<Integer> testList = new DLinkedList<>();

        try (BufferedReader reader = new BufferedReader(new FileReader("IO.txt")))
        {
            List<Integer> ints = reader.lines()
                    .map(s -> s.trim())
                    .map(s -> Integer.parseInt(s))
                    .collect(Collectors.toList());

            ints.forEach(num -> testList.add(num * 2));
            //testList.forEach(System.out::println);

            // Print every stored value halved back to its original value.
            // Iterating this way avoids skipping the last element and does not
            // throw on an empty list, unlike calling it.next() before the loop.
            Iterator<Integer> it = testList.iterator();
            while (it.hasNext())
            {
                System.out.println(it.next() / 2);
            }
        }
        catch (IOException e)
        {
            System.err.println("Something went wrong while reading the file:");
            System.err.println(e);
        }
    }
}
Then-Gov. Jeb Bush urged the Legislature to heed the court and revisit the death penalty statute. Legislators ignored the state Supreme Court and ignored Bush. Again, there was no legislative response. Of the 33 states with the death penalty, Florida had been one of only three that allow a judge to override a jury recommendation. But here’s the confounding thing behind the tough-on-crime Legislature’s obdurate refusal to fix the problem. A change would have had no practical effect. A Florida judge hasn’t overridden a jury recommendation for life imprisonment and imposed the death penalty since 1999. A Broward jury had voted 8-4 in favor of a life sentence for Jeffrey Weaver, who had been convicted in the 1996 murder of Fort Lauderdale Police Officer Bryant Peney. But Circuit Judge Mark Speiser, perhaps responding to public outrage around the case, imposed the death penalty. That was the last time a Florida judge ignored a jury recommendation for life. In 2004, the Florida Supreme Court ruled that Speiser had erred. Weaver was re-sentenced to life without possibility of parole. But who knows how the courts will deal with the 384 men and five women in Florida who were sentenced to die under a statute found constitutionally wanting? Florida judges must now wrestle with questions of whether last week’s Supreme Court ruling applies retroactively. Not to mention the logistics associated with hauling dangerous convicts around the state to re-sentencing hearings. All of which could have been avoided if Florida’s legislative leadership, terrified of looking soft on crime, had not been so pigheaded.
Perceived social support and community adaptation in schizophrenia. Prompted by the continuing transition to community care, mental health nurses are considering the role of social support in community adaptation. This article demonstrates the importance of distinguishing between kinds of social support and presents findings from the first round data of a longitudinal study of community adaptation in 156 people with schizophrenia conducted in Brisbane, Australia. All clients were interviewed using the relevant subscales of the Diagnostic Interview Schedule to confirm a primary diagnosis of schizophrenia. The study set out to investigate the relationship between community adaptation and social support. Community adaptation was measured with the Brief Psychiatric Rating Scale (BPRS), the Life Skills Profile (LSP) and measures of dissatisfaction with life and problems in daily living developed by the authors. Social support was measured with the Arizona Social Support Interview Schedule (ASSIS). The BPRS and ASSIS were incorporated into a client interview conducted by trained interviewers. The LSP was completed on each client by an informal carer (parent, relative or friend) or a professional carer (case manager or other health professional) nominated by the client. Hierarchical regression analysis was used to examine the relationship between community adaptation and four sets of social support variables. Given the order in which variables were entered in regression equations, a set of perceived social support variables was found to account for the largest unique variance of four measures of community adaptation in 96 people with schizophrenia for whom complete data are available from the first round of the three-wave longitudinal study. A set of the subjective experiences of the clients accounted for the largest unique variance in measures of symptomatology, life skills, dissatisfaction with life, and problems in daily living. Sets of community support, household support and functional variables accounted for less variance. Implications for mental health nursing practice are considered.
<gh_stars>1-10 //------------------------------------------------------------------------- /* Copyright (C) 2010-2019 EDuke32 developers and contributors Copyright (C) 2019 sirlemonhead, Nuke.YKT This file is part of PCExhumed. PCExhumed is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. */ //------------------------------------------------------------------------- #include "ns.h" #include "engine.h" #include "exhumed.h" #include "aistuff.h" #include "player.h" #include "view.h" #include "status.h" #include "sound.h" #include "mapinfo.h" #include <string.h> #include <assert.h> BEGIN_PS_NS int nPushBlocks; // TODO - moveme? sectortype* overridesect; enum { kMaxPushBlocks = 100, kMaxMoveChunks = 75 }; TObjPtr<DExhumedActor*> nBodySprite[50]; TObjPtr<DExhumedActor*> nChunkSprite[kMaxMoveChunks]; BlockInfo sBlockInfo[kMaxPushBlocks]; TObjPtr<DExhumedActor*> nBodyGunSprite[50]; int nCurBodyGunNum; int sprceiling, sprfloor; Collision loHit, hiHit; // think this belongs in init.c? size_t MarkMove() { GC::MarkArray(nBodySprite, 50); GC::MarkArray(nChunkSprite, kMaxMoveChunks); for(int i = 0; i < nPushBlocks; i++) GC::Mark(sBlockInfo[i].pActor); return 50 + kMaxMoveChunks + nPushBlocks; } FSerializer& Serialize(FSerializer& arc, const char* keyname, BlockInfo& w, BlockInfo* def) { if (arc.BeginObject(keyname)) { arc("at8", w.field_8) ("sprite", w.pActor) ("x", w.x) ("y", w.y) .EndObject(); } return arc; } void SerializeMove(FSerializer& arc) { if (arc.BeginObject("move")) { arc ("pushcount", nPushBlocks) .Array("blocks", sBlockInfo, nPushBlocks) ("chunkcount", nCurChunkNum) .Array("chunks", nChunkSprite, kMaxMoveChunks) ("overridesect", overridesect) .Array("bodysprite", nBodySprite, countof(nBodySprite)) ("curbodygun", nCurBodyGunNum) .Array("bodygunsprite", nBodyGunSprite, countof(nBodyGunSprite)) .EndObject(); } } signed int lsqrt(int a1) { int v1; int v2; signed int result; v1 = a1; v2 = a1 - 0x40000000; result = 0; if (v2 >= 0) { result = 32768; v1 = v2; } if (v1 - ((result << 15) + 0x10000000) >= 0) { v1 -= (result << 15) + 0x10000000; result += 16384; } if (v1 - ((result << 14) + 0x4000000) >= 0) { v1 -= (result << 14) + 0x4000000; result += 8192; } if (v1 - ((result << 13) + 0x1000000) >= 0) { v1 -= (result << 13) + 0x1000000; result += 4096; } if (v1 - ((result << 12) + 0x400000) >= 0) { v1 -= (result << 12) + 0x400000; result += 2048; } if (v1 - ((result << 11) + 0x100000) >= 0) { v1 -= (result << 11) + 0x100000; result += 1024; } if (v1 - ((result << 10) + 0x40000) >= 0) { v1 -= (result << 10) + 0x40000; result += 512; } if (v1 - ((result << 9) + 0x10000) >= 0) { v1 -= (result << 9) + 0x10000; result += 256; } if (v1 - ((result << 8) + 0x4000) >= 0) { v1 -= (result << 8) + 0x4000; result += 128; } if (v1 - ((result << 7) + 4096) >= 0) { v1 -= (result << 7) + 4096; result += 64; } if (v1 - ((result << 6) + 1024) >= 0) { v1 -= (result << 6) + 1024; result += 32; } if (v1 - (32 * result + 256) >= 0) { v1 -= 32 * result + 256; result += 16; } if (v1 
- (16 * result + 64) >= 0) { v1 -= 16 * result + 64; result += 8; } if (v1 - (8 * result + 16) >= 0) { v1 -= 8 * result + 16; result += 4; } if (v1 - (4 * result + 4) >= 0) { v1 -= 4 * result + 4; result += 2; } if (v1 - (2 * result + 1) >= 0) result += 1; return result; } void MoveThings() { thinktime.Reset(); thinktime.Clock(); UndoFlashes(); DoLights(); if (nFreeze) { if (nFreeze == 1 || nFreeze == 2) { DoSpiritHead(); } } else { actortime.Reset(); actortime.Clock(); runlist_ExecObjects(); runlist_CleanRunRecs(); actortime.Unclock(); } DoBubbleMachines(); DoDrips(); DoMovingSects(); DoRegenerates(); if (currentLevel->gameflags & LEVEL_EX_COUNTDOWN) { DoFinale(); if (lCountDown < 1800 && nDronePitch < 2400 && !lFinaleStart) { nDronePitch += 64; BendAmbientSound(); } } thinktime.Unclock(); } void ResetMoveFifo() { movefifoend = 0; movefifopos = 0; } // not used void clipwall() { } int BelowNear(DExhumedActor* pActor, int x, int y, int walldist) { auto pSector = pActor->sector(); int z = pActor->spr.pos.Z; int z2; if (loHit.type == kHitSprite) { z2 = loHit.actor()->spr.pos.Z; } else { z2 = pSector->floorz + pSector->Depth; BFSSectorSearch search(pSector); sectortype* pTempSect = nullptr; while (auto pCurSector = search.GetNext()) { for (auto& wal : wallsofsector(pCurSector)) { if (wal.twoSided()) { if (!search.Check(wal.nextSector())) { vec2_t pos = { x, y }; if (clipinsidebox(&pos, wallnum(&wal), walldist)) { search.Add(wal.nextSector()); } } } } auto pSect2 = pCurSector; while (pSect2) { pTempSect = pSect2; pSect2 = pSect2->pBelow; } int ecx = pTempSect->floorz + pTempSect->Depth; int eax = ecx - z; if (eax < 0 && eax >= -5120) { z2 = ecx; pSector = pTempSect; } } } if (z2 < pActor->spr.pos.Z) { pActor->spr.pos.Z = z2; overridesect = pSector; pActor->spr.zvel = 0; bTouchFloor = true; return kHitAux2; } else { return 0; } } Collision movespritez(DExhumedActor* pActor, int z, int height, int, int clipdist) { auto pSector = pActor->sector(); assert(pSector); overridesect = pSector; auto pSect2 = pSector; // backup cstat auto cstat = pActor->spr.cstat; pActor->spr.cstat &= ~CSTAT_SPRITE_BLOCK; Collision nRet; nRet.setNone(); int nSectFlags = pSector->Flag; if (nSectFlags & kSectUnderwater) { z >>= 1; } int spriteZ = pActor->spr.pos.Z; int floorZ = pSector->floorz; int ebp = spriteZ + z; int eax = pSector->ceilingz + (height >> 1); if ((nSectFlags & kSectUnderwater) && ebp < eax) { ebp = eax; } // loc_151E7: while (ebp > pActor->sector()->floorz && pActor->sector()->pBelow != nullptr) { ChangeActorSect(pActor, pActor->sector()->pBelow); } if (pSect2 != pSector) { pActor->spr.pos.Z = ebp; if (pSect2->Flag & kSectUnderwater) { if (pActor == PlayerList[nLocalPlayer].pActor) { D3PlayFX(StaticSound[kSound2], pActor); } if (pActor->spr.statnum <= 107) { pActor->spr.hitag = 0; } } } else { while ((ebp < pActor->sector()->ceilingz) && (pActor->sector()->pAbove != nullptr)) { ChangeActorSect(pActor, pActor->sector()->pAbove); } } // This function will keep the player from falling off cliffs when you're too close to the edge. // This function finds the highest and lowest z coordinates that your clipping BOX can get to. 
vec3_t pos = pActor->spr.pos; pos.Z -= 256; getzrange(pos, pActor->sector(), &sprceiling, hiHit, &sprfloor, loHit, 128, CLIPMASK0); int mySprfloor = sprfloor; if (loHit.type != kHitSprite) { mySprfloor += pActor->sector()->Depth; } if (ebp > mySprfloor) { if (z > 0) { bTouchFloor = true; if (loHit.type == kHitSprite) { // Path A auto pFloorActor = loHit.actor(); if (pActor->spr.statnum == 100 && pFloorActor->spr.statnum != 0 && pFloorActor->spr.statnum < 100) { int nDamage = (z >> 9); if (nDamage) { runlist_DamageEnemy(loHit.actor(), pActor, nDamage << 1); } pActor->spr.zvel = -z; } else { if (pFloorActor->spr.statnum == 0 || pFloorActor->spr.statnum > 199) { nRet.exbits |= kHitAux2; } else { nRet = loHit; } pActor->spr.zvel = 0; } } else { // Path B if (pActor->sector()->pBelow == nullptr) { nRet.exbits |= kHitAux2; int nSectDamage = pActor->sector()->Damage; if (nSectDamage != 0) { if (pActor->spr.hitag < 15) { IgniteSprite(pActor); pActor->spr.hitag = 20; } nSectDamage >>= 2; nSectDamage = nSectDamage - (nSectDamage>>2); if (nSectDamage) { runlist_DamageEnemy(pActor, nullptr, nSectDamage); } } pActor->spr.zvel = 0; } } } // loc_1543B: ebp = mySprfloor; pActor->spr.pos.Z = mySprfloor; } else { if ((ebp - height) < sprceiling && (hiHit.type == kHitSprite || pActor->sector()->pAbove == nullptr)) { ebp = sprceiling + height; nRet.exbits |= kHitAux1; } } if (spriteZ <= floorZ && ebp > floorZ) { if ((pSector->Depth != 0) || (pSect2 != pSector && (pSect2->Flag & kSectUnderwater))) { BuildSplash(pActor, pSector); } } pActor->spr.cstat = cstat; // restore cstat pActor->spr.pos.Z = ebp; if (pActor->spr.statnum == 100) { nRet.exbits |= BelowNear(pActor, pActor->spr.pos.X, pActor->spr.pos.Y, clipdist + (clipdist / 2)); } return nRet; } int GetActorHeight(DExhumedActor* actor) { return tileHeight(actor->spr.picnum) * actor->spr.yrepeat * 4; } DExhumedActor* insertActor(sectortype* s, int st) { return static_cast<DExhumedActor*>(::InsertActor(RUNTIME_CLASS(DExhumedActor), s, st)); } Collision movesprite(DExhumedActor* pActor, int dx, int dy, int dz, int ceildist, int flordist, unsigned int clipmask) { bTouchFloor = false; int x = pActor->spr.pos.X; int y = pActor->spr.pos.Y; int z = pActor->spr.pos.Z; int nSpriteHeight = GetActorHeight(pActor); int nClipDist = (int8_t)pActor->spr.clipdist << 2; auto pSector = pActor->sector(); assert(pSector); int floorZ = pSector->floorz; if ((pSector->Flag & kSectUnderwater) || (floorZ < z)) { dx >>= 1; dy >>= 1; } Collision nRet = movespritez(pActor, dz, nSpriteHeight, flordist, nClipDist); pSector = pActor->sector(); // modified in movespritez so re-grab this variable if (pActor->spr.statnum == 100) { int nPlayer = GetPlayerFromActor(pActor); int varA = 0; int varB = 0; CheckSectorFloor(overridesect, pActor->spr.pos.Z, &varB, &varA); if (varB || varA) { PlayerList[nPlayer].nDamage.X = varB; PlayerList[nPlayer].nDamage.Y = varA; } dx += PlayerList[nPlayer].nDamage.X; dy += PlayerList[nPlayer].nDamage.Y; } else { CheckSectorFloor(overridesect, pActor->spr.pos.Z, &dx, &dy); } Collision coll; clipmove(pActor->spr.pos, &pSector, dx, dy, nClipDist, nSpriteHeight, flordist, clipmask, coll); if (coll.type != kHitNone) // originally this or'ed the two values which can create unpredictable bad values in some edge cases. 
{ coll.exbits = nRet.exbits; nRet = coll; } if ((pSector != pActor->sector()) && pSector != nullptr) { if (nRet.exbits & kHitAux2) { dz = 0; } if ((pSector->floorz - z) < (dz + flordist)) { pActor->spr.pos.X = x; pActor->spr.pos.Y = y; } else { ChangeActorSect(pActor, pSector); if (pActor->spr.pal < 5 && !pActor->spr.hitag) { pActor->spr.pal = pActor->sector()->ceilingpal; } } } return nRet; } void Gravity(DExhumedActor* pActor) { if (pActor->sector()->Flag & kSectUnderwater) { if (pActor->spr.statnum != 100) { if (pActor->spr.zvel <= 1024) { if (pActor->spr.zvel < 2048) { pActor->spr.zvel += 512; } } else { pActor->spr.zvel -= 64; } } else { if (pActor->spr.zvel > 0) { pActor->spr.zvel -= 64; if (pActor->spr.zvel < 0) { pActor->spr.zvel = 0; } } else if (pActor->spr.zvel < 0) { pActor->spr.zvel += 64; if (pActor->spr.zvel > 0) { pActor->spr.zvel = 0; } } } } else { pActor->spr.zvel += 512; if (pActor->spr.zvel > 16384) { pActor->spr.zvel = 16384; } } } Collision MoveCreature(DExhumedActor* pActor) { return movesprite(pActor, pActor->spr.xvel << 8, pActor->spr.yvel << 8, pActor->spr.zvel, 15360, -5120, CLIPMASK0); } Collision MoveCreatureWithCaution(DExhumedActor* pActor) { int x = pActor->spr.pos.X; int y = pActor->spr.pos.Y; int z = pActor->spr.pos.Z; auto pSectorPre = pActor->sector(); auto ecx = MoveCreature(pActor); auto pSector =pActor->sector(); if (pSector != pSectorPre) { int zDiff = pSectorPre->floorz - pSector->floorz; if (zDiff < 0) { zDiff = -zDiff; } if (zDiff > 15360 || (pSector->Flag & kSectUnderwater) || (pSector->pBelow != nullptr && pSector->pBelow->Flag) || pSector->Damage) { pActor->spr.pos.X = x; pActor->spr.pos.Y = y; pActor->spr.pos.Z = z; ChangeActorSect(pActor, pSectorPre); pActor->spr.ang = (pActor->spr.ang + 256) & kAngleMask; pActor->spr.xvel = bcos(pActor->spr.ang, -2); pActor->spr.yvel = bsin(pActor->spr.ang, -2); Collision c; c.setNone(); return c; } } return ecx; } int GetAngleToSprite(DExhumedActor* a1, DExhumedActor* a2) { if (!a1 || !a2) return -1; return GetMyAngle(a2->spr.pos.X - a1->spr.pos.X, a2->spr.pos.Y - a1->spr.pos.Y); } int PlotCourseToSprite(DExhumedActor* pActor1, DExhumedActor* pActor2) { if (pActor1 == nullptr || pActor2 == nullptr) return -1; int x = pActor2->spr.pos.X - pActor1->spr.pos.X; int y = pActor2->spr.pos.Y - pActor1->spr.pos.Y; pActor1->spr.ang = GetMyAngle(x, y); uint32_t x2 = abs(x); uint32_t y2 = abs(y); uint32_t diff = x2 * x2 + y2 * y2; if (diff > INT_MAX) { DPrintf(DMSG_WARNING, "%s %d: overflow\n", __func__, __LINE__); diff = INT_MAX; } return ksqrt(diff); } DExhumedActor* FindPlayer(DExhumedActor* pActor, int nDistance, bool dontengage) { int var_18 = !dontengage; if (nDistance < 0) nDistance = 100; int x = pActor->spr.pos.X; int y = pActor->spr.pos.Y; auto pSector =pActor->sector(); int z = pActor->spr.pos.Z - GetActorHeight(pActor); nDistance <<= 8; DExhumedActor* pPlayerActor = nullptr; int i = 0; while (1) { if (i >= nTotalPlayers) return nullptr; pPlayerActor = PlayerList[i].pActor; if ((pPlayerActor->spr.cstat & CSTAT_SPRITE_BLOCK_ALL) && (!(pPlayerActor->spr.cstat & CSTAT_SPRITE_INVISIBLE))) { int v9 = abs(pPlayerActor->spr.pos.X - x); if (v9 < nDistance) { int v10 = abs(pPlayerActor->spr.pos.Y - y); if (v10 < nDistance && cansee(pPlayerActor->spr.pos.X, pPlayerActor->spr.pos.Y, pPlayerActor->spr.pos.Z - 7680, pPlayerActor->sector(), x, y, z, pSector)) { break; } } } i++; } if (var_18) { PlotCourseToSprite(pActor, pPlayerActor); } return pPlayerActor; } void CheckSectorFloor(sectortype* pSector, int z, int *x, int 
*y) { int nSpeed = pSector->Speed; if (!nSpeed) { return; } int nFlag = pSector->Flag; int nAng = nFlag & kAngleMask; if (z >= pSector->floorz) { *x += bcos(nAng, 3) * nSpeed; *y += bsin(nAng, 3) * nSpeed; } else if (nFlag & 0x800) { *x += bcos(nAng, 4) * nSpeed; *y += bsin(nAng, 4) * nSpeed; } } int GetUpAngle(DExhumedActor* pActor1, int nVal, DExhumedActor* pActor2, int ecx) { int x = pActor2->spr.pos.X - pActor1->spr.pos.X; int y = pActor2->spr.pos.Y - pActor1->spr.pos.Y; int ebx = (pActor2->spr.pos.Z + ecx) - (pActor1->spr.pos.Z + nVal); int edx = (pActor2->spr.pos.Z + ecx) - (pActor1->spr.pos.Z + nVal); ebx >>= 4; edx >>= 8; ebx = -ebx; ebx -= edx; int nSqrt = lsqrt(x * x + y * y); return GetMyAngle(nSqrt, ebx); } void InitPushBlocks() { nPushBlocks = 0; memset(sBlockInfo, 0, sizeof(sBlockInfo)); } int GrabPushBlock() { if (nPushBlocks >= kMaxPushBlocks) { return -1; } return nPushBlocks++; } void CreatePushBlock(sectortype* pSector) { int nBlock = GrabPushBlock(); int xSum = 0; int ySum = 0; for (auto& wal : wallsofsector(pSector)) { xSum += wal.wall_int_pos().X; ySum += wal.wall_int_pos().Y; } int xAvg = xSum / pSector->wallnum; int yAvg = ySum / pSector->wallnum; sBlockInfo[nBlock].x = xAvg; sBlockInfo[nBlock].y = yAvg; auto pActor = insertActor(pSector, 0); sBlockInfo[nBlock].pActor = pActor; pActor->spr.pos.X = xAvg; pActor->spr.pos.Y = yAvg; pActor->spr.pos.Z = pSector->floorz - 256; pActor->spr.cstat = CSTAT_SPRITE_INVISIBLE; int var_28 = 0; for (auto& wal : wallsofsector(pSector)) { uint32_t xDiff = abs(xAvg - wal.wall_int_pos().X); uint32_t yDiff = abs(yAvg - wal.wall_int_pos().Y); uint32_t sqrtNum = xDiff * xDiff + yDiff * yDiff; if (sqrtNum > INT_MAX) { DPrintf(DMSG_WARNING, "%s %d: overflow\n", __func__, __LINE__); sqrtNum = INT_MAX; } int nSqrt = ksqrt(sqrtNum); if (nSqrt > var_28) { var_28 = nSqrt; } } sBlockInfo[nBlock].field_8 = var_28; pActor->spr.clipdist = (var_28 & 0xFF) << 2; pSector->extra = nBlock; } void MoveSector(sectortype* pSector, int nAngle, int *nXVel, int *nYVel) { if (pSector == nullptr) { return; } int nXVect, nYVect; if (nAngle < 0) { nXVect = *nXVel; nYVect = *nYVel; nAngle = GetMyAngle(nXVect, nYVect); } else { nXVect = bcos(nAngle, 6); nYVect = bsin(nAngle, 6); } int nBlock = pSector->extra; int nSectFlag = pSector->Flag; int nFloorZ = pSector->floorz; walltype *pStartWall = pSector->firstWall(); sectortype* pNextSector = pStartWall->nextSector(); BlockInfo *pBlockInfo = &sBlockInfo[nBlock]; vec3_t pos; pos.X = sBlockInfo[nBlock].x; int x_b = sBlockInfo[nBlock].x; pos.Y = sBlockInfo[nBlock].y; int y_b = sBlockInfo[nBlock].y; int nZVal; int bUnderwater = nSectFlag & kSectUnderwater; if (nSectFlag & kSectUnderwater) { nZVal = pSector->ceilingz; pos.Z = pNextSector->ceilingz + 256; pSector->setceilingz(pNextSector->ceilingz); } else { nZVal = pSector->floorz; pos.Z = pNextSector->floorz - 256; pSector->setfloorz(pNextSector->floorz); } auto pSectorB = pSector; Collision scratch; clipmove(pos, &pSectorB, nXVect, nYVect, pBlockInfo->field_8, 0, 0, CLIPMASK1, scratch); int yvect = pos.Y - y_b; int xvect = pos.X - x_b; if (pSectorB != pNextSector && pSectorB != pSector) { yvect = 0; xvect = 0; } else { if (!bUnderwater) { pos = { x_b, y_b, nZVal }; clipmove(pos, &pSectorB, nXVect, nYVect, pBlockInfo->field_8, 0, 0, CLIPMASK1, scratch); int ebx = pos.X; int ecx = x_b; int edx = pos.Y; int eax = xvect; int esi = y_b; if (eax < 0) { eax = -eax; } ebx -= ecx; ecx = eax; eax = ebx; edx -= esi; if (eax < 0) { eax = -eax; } if (ecx > eax) { xvect = ebx; } eax = 
yvect; if (eax < 0) { eax = -eax; } ebx = eax; eax = edx; if (eax < 0) { eax = -eax; } if (ebx > eax) { yvect = edx; } } } // GREEN if (yvect || xvect) { ExhumedSectIterator it(pSector); while (auto pActor = it.Next()) { if (pActor->spr.statnum < 99) { pActor->spr.pos.X += xvect; pActor->spr.pos.Y += yvect; } else { pos.Z = pActor->spr.pos.Z; if ((nSectFlag & kSectUnderwater) || pos.Z != nZVal || pActor->spr.cstat & CSTAT_SPRITE_INVISIBLE) { pos.X = pActor->spr.pos.X; pos.Y = pActor->spr.pos.Y; pSectorB = pSector; clipmove(pos, &pSectorB, -xvect, -yvect, 4 * pActor->spr.clipdist, 0, 0, CLIPMASK0, scratch); if (pSectorB) { ChangeActorSect(pActor, pSectorB); } } } } it.Reset(pNextSector); while (auto pActor = it.Next()) { if (pActor->spr.statnum >= 99) { pos = pActor->spr.pos; pSectorB = pNextSector; clipmove(pos, &pSectorB, -xvect - (bcos(nAngle) * (4 * pActor->spr.clipdist)), -yvect - (bsin(nAngle) * (4 * pActor->spr.clipdist)), 4 * pActor->spr.clipdist, 0, 0, CLIPMASK0, scratch); if (pSectorB != pNextSector && (pSectorB == pSector || pNextSector == pSector)) { if (pSectorB != pSector || nFloorZ >= pActor->spr.pos.Z) { if (pSectorB) { ChangeActorSect(pActor, pSectorB); } } else { movesprite(pActor, (xvect << 14) + bcos(nAngle) * pActor->spr.clipdist, (yvect << 14) + bsin(nAngle) * pActor->spr.clipdist, 0, 0, 0, CLIPMASK0); } } } } for(auto& wal : wallsofsector(pSector)) { dragpoint(&wal, xvect + wal.wall_int_pos().X, yvect + wal.wall_int_pos().Y); } pBlockInfo->x += xvect; pBlockInfo->y += yvect; } // loc_163DD xvect <<= 14; yvect <<= 14; if (!(nSectFlag & kSectUnderwater)) { ExhumedSectIterator it(pSector); while (auto pActor = it.Next()) { if (pActor->spr.statnum >= 99 && nZVal == pActor->spr.pos.Z && !(pActor->spr.cstat & CSTAT_SPRITE_INVISIBLE)) { pSectorB = pSector; clipmove(pActor->spr.pos, &pSectorB, xvect, yvect, 4 * pActor->spr.clipdist, 5120, -5120, CLIPMASK0, scratch); } } } if (nSectFlag & kSectUnderwater) { pSector->setceilingz(nZVal); } else { pSector->setfloorz(nZVal); } *nXVel = xvect; *nYVel = yvect; /* Update player position variables, in case the player sprite was moved by a sector, Otherwise these can be out of sync when used in sound code (before being updated in PlayerFunc()). Can cause local player sounds to play off-centre. TODO: Might need to be done elsewhere too? 
*/ auto pActor = PlayerList[nLocalPlayer].pActor; initx = pActor->spr.pos.X; inity = pActor->spr.pos.Y; initz = pActor->spr.pos.Z; inita = pActor->spr.ang; initsectp = pActor->sector(); } void SetQuake(DExhumedActor* pActor, int nVal) { int x = pActor->spr.pos.X; int y = pActor->spr.pos.Y; nVal *= 256; for (int i = 0; i < nTotalPlayers; i++) { auto pPlayerActor = PlayerList[i].pActor; uint32_t xDiff = abs((int32_t)((pPlayerActor->spr.pos.X - x) >> 8)); uint32_t yDiff = abs((int32_t)((pPlayerActor->spr.pos.Y - y) >> 8)); uint32_t sqrtNum = xDiff * xDiff + yDiff * yDiff; if (sqrtNum > INT_MAX) { DPrintf(DMSG_WARNING, "%s %d: overflow\n", __func__, __LINE__); sqrtNum = INT_MAX; } int nSqrt = ksqrt(sqrtNum); int eax = nVal; if (nSqrt) { eax = eax / nSqrt; if (eax >= 256) { if (eax > 3840) { eax = 3840; } } else { eax = 0; } } if (eax > nQuake[i]) { nQuake[i] = eax; } } } Collision AngleChase(DExhumedActor* pActor, DExhumedActor* pActor2, int ebx, int ecx, int push1) { int nClipType = pActor->spr.statnum != 107; /* bjd - need to handle cliptype to clipmask change that occured in later build engine version */ if (nClipType == 1) { nClipType = CLIPMASK1; } else { nClipType = CLIPMASK0; } int nAngle; if (pActor2 == nullptr) { pActor->spr.zvel = 0; nAngle = pActor->spr.ang; } else { int nHeight = tileHeight(pActor2->spr.picnum) * pActor2->spr.yrepeat * 2; int nMyAngle = GetMyAngle(pActor2->spr.pos.X - pActor->spr.pos.X, pActor2->spr.pos.Y - pActor->spr.pos.Y); uint32_t xDiff = abs(pActor2->spr.pos.X - pActor->spr.pos.X); uint32_t yDiff = abs(pActor2->spr.pos.Y - pActor->spr.pos.Y); uint32_t sqrtNum = xDiff * xDiff + yDiff * yDiff; if (sqrtNum > INT_MAX) { DPrintf(DMSG_WARNING, "%s %d: overflow\n", __func__, __LINE__); sqrtNum = INT_MAX; } int nSqrt = ksqrt(sqrtNum); int var_18 = GetMyAngle(nSqrt, ((pActor2->spr.pos.Z - nHeight) - pActor->spr.pos.Z) >> 8); int nAngDelta = AngleDelta(pActor->spr.ang, nMyAngle, 1024); int nAngDelta2 = abs(nAngDelta); if (nAngDelta2 > 63) { nAngDelta2 = abs(nAngDelta >> 6); ebx /= nAngDelta2; if (ebx < 5) { ebx = 5; } } int nAngDeltaC = abs(nAngDelta); if (nAngDeltaC > push1) { if (nAngDelta >= 0) nAngDelta = push1; else nAngDelta = -push1; } nAngle = (nAngDelta + pActor->spr.ang) & kAngleMask; int nAngDeltaD = AngleDelta(pActor->spr.zvel, var_18, 24); pActor->spr.zvel = (pActor->spr.zvel + nAngDeltaD) & kAngleMask; } pActor->spr.ang = nAngle; int eax = abs(bcos(pActor->spr.zvel)); int x = ((bcos(nAngle) * ebx) >> 14) * eax; int y = ((bsin(nAngle) * ebx) >> 14) * eax; int xshift = x >> 8; int yshift = y >> 8; uint32_t sqrtNum = xshift * xshift + yshift * yshift; if (sqrtNum > INT_MAX) { DPrintf(DMSG_WARNING, "%s %d: overflow\n", __func__, __LINE__); sqrtNum = INT_MAX; } int z = bsin(pActor->spr.zvel) * ksqrt(sqrtNum); return movesprite(pActor, x >> 2, y >> 2, (z >> 13) + bsin(ecx, -5), 0, 0, nClipType); } int GetWallNormal(walltype* pWall) { auto delta = pWall->delta(); int nAngle = GetMyAngle(delta.X, delta.Y); return (nAngle + 512) & kAngleMask; } void WheresMyMouth(int nPlayer, vec3_t* pos, sectortype **sectnum) { auto pActor = PlayerList[nPlayer].pActor; int height = GetActorHeight(pActor) >> 1; *sectnum = pActor->sector(); *pos = pActor->spr.pos; pos->Z -= height; Collision scratch; clipmove(*pos, sectnum, bcos(pActor->spr.ang, 7), bsin(pActor->spr.ang, 7), 5120, 1280, 1280, CLIPMASK1, scratch); } void InitChunks() { nCurChunkNum = 0; memset(nChunkSprite, 0, sizeof(nChunkSprite)); memset(nBodyGunSprite, 0, sizeof(nBodyGunSprite)); memset(nBodySprite, 0, 
sizeof(nBodySprite)); nCurBodyNum = 0; nCurBodyGunNum = 0; nBodyTotal = 0; nChunkTotal = 0; } DExhumedActor* GrabBodyGunSprite() { DExhumedActor* pActor = nBodyGunSprite[nCurBodyGunNum]; if (pActor == nullptr) { pActor = insertActor(0, 899); nBodyGunSprite[nCurBodyGunNum] = pActor; pActor->spr.lotag = -1; pActor->spr.owner = -1; } else { DestroyAnim(pActor); pActor->spr.lotag = -1; pActor->spr.owner = -1; } nCurBodyGunNum++; if (nCurBodyGunNum >= 50) { // TODO - enum/define nCurBodyGunNum = 0; } pActor->spr.cstat = 0; return pActor; } DExhumedActor* GrabBody() { DExhumedActor* pActor = nullptr; do { pActor = nBodySprite[nCurBodyNum]; if (pActor == nullptr) { pActor = insertActor(0, 899); nBodySprite[nCurBodyNum] = pActor; pActor->spr.cstat = CSTAT_SPRITE_INVISIBLE; } nCurBodyNum++; if (nCurBodyNum >= 50) { nCurBodyNum = 0; } } while (pActor->spr.cstat & CSTAT_SPRITE_BLOCK_ALL); if (nBodyTotal < 50) { nBodyTotal++; } pActor->spr.cstat = 0; return pActor; } DExhumedActor* GrabChunkSprite() { DExhumedActor* pActor = nChunkSprite[nCurChunkNum]; if (pActor == nullptr) { pActor = insertActor(0, 899); nChunkSprite[nCurChunkNum] = pActor; } else if (pActor->spr.statnum) { // TODO MonoOut("too many chunks being used at once!\n"); return nullptr; } ChangeActorStat(pActor, 899); nCurChunkNum++; if (nCurChunkNum >= kMaxMoveChunks) nCurChunkNum = 0; if (nChunkTotal < kMaxMoveChunks) nChunkTotal++; pActor->spr.cstat = CSTAT_SPRITE_YCENTER; return pActor; } DExhumedActor* BuildCreatureChunk(DExhumedActor* pSrc, int nPic, bool bSpecial) { auto pActor = GrabChunkSprite(); if (pActor == nullptr) { return nullptr; } pActor->spr.pos = pSrc->spr.pos; ChangeActorSect(pActor, pSrc->sector()); pActor->spr.cstat = CSTAT_SPRITE_YCENTER; pActor->spr.shade = -12; pActor->spr.pal = 0; pActor->spr.xvel = (RandomSize(5) - 16) << 7; pActor->spr.yvel = (RandomSize(5) - 16) << 7; pActor->spr.zvel = (-(RandomSize(8) + 512)) << 3; if (bSpecial) { pActor->spr.xvel *= 4; pActor->spr.yvel *= 4; pActor->spr.zvel *= 2; } pActor->spr.xrepeat = 64; pActor->spr.yrepeat = 64; pActor->spr.xoffset = 0; pActor->spr.yoffset = 0; pActor->spr.picnum = nPic; pActor->spr.lotag = runlist_HeadRun() + 1; pActor->spr.clipdist = 40; // GrabTimeSlot(3); pActor->spr.extra = -1; pActor->spr.owner = runlist_AddRunRec(pActor->spr.lotag - 1, pActor, 0xD0000); pActor->spr.hitag = runlist_AddRunRec(NewRun, pActor, 0xD0000); return pActor; } void AICreatureChunk::Tick(RunListEvent* ev) { auto pActor = ev->pObjActor; if (!pActor) return; Gravity(pActor); auto pSector = pActor->sector(); pActor->spr.pal = pSector->ceilingpal; auto nVal = movesprite(pActor, pActor->spr.xvel << 10, pActor->spr.yvel << 10, pActor->spr.zvel, 2560, -2560, CLIPMASK1); if (pActor->spr.pos.Z >= pSector->floorz) { // re-grab this variable as it may have changed in movesprite(). Note the check above is against the value *before* movesprite so don't change it. 
pSector = pActor->sector(); pActor->spr.xvel = 0; pActor->spr.yvel = 0; pActor->spr.zvel = 0; pActor->spr.pos.Z = pSector->floorz; } else { if (!nVal.type && !nVal.exbits) return; int nAngle; if (nVal.exbits & kHitAux2) { pActor->spr.cstat = CSTAT_SPRITE_INVISIBLE; } else { if (nVal.exbits & kHitAux1) { pActor->spr.xvel >>= 1; pActor->spr.yvel >>= 1; pActor->spr.zvel = -pActor->spr.zvel; return; } else if (nVal.type == kHitSprite) { nAngle = nVal.actor()->spr.ang; } else if (nVal.type == kHitWall) { nAngle = GetWallNormal(nVal.hitWall); } else { return; } // loc_16E0C int nSqrt = lsqrt(((pActor->spr.yvel >> 10) * (pActor->spr.yvel >> 10) + (pActor->spr.xvel >> 10) * (pActor->spr.xvel >> 10)) >> 8); pActor->spr.xvel = bcos(nAngle) * (nSqrt >> 1); pActor->spr.yvel = bsin(nAngle) * (nSqrt >> 1); return; } } runlist_DoSubRunRec(pActor->spr.owner); runlist_FreeRun(pActor->spr.lotag - 1); runlist_SubRunRec(pActor->spr.hitag); ChangeActorStat(pActor, 0); pActor->spr.hitag = 0; pActor->spr.lotag = 0; } DExhumedActor* UpdateEnemy(DExhumedActor** ppEnemy) { if (*ppEnemy) { if (!((*ppEnemy)->spr.cstat & CSTAT_SPRITE_BLOCK_ALL)) { *ppEnemy = nullptr; } } return *ppEnemy; } END_PS_NS
<gh_stars>0 # coding: utf-8 # Copyright (c) HP-NTU Digital Manufacturing Corporate Lab, Nanyang Technological University, Singapore. # # This source code is licensed under the Apache-2.0 license found in the # LICENSE file in the root directory of this source tree. import numpy as np import os import tensorflow as tf import time import json import re import argparse from tensorflow.python import saved_model from tensorflow.python.saved_model import tag_constants from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def from tensorflow.python.tools import freeze_graph import data_gen import random import sys from preprocessing_factory import * from PIL import Image # Pathes for generated saved_model and TPU_model export_path = "saved_models/" TPU_model_folder = "TPU_models/" def getTensorInfo(filename): """ Get the Input and Output Tensor :param filename: The ".pb" format Tensorflow model :return: The name of inputTensor and outputTensor. The value of input's height, weight, and channels """ detection_graph = tf.Graph() with detection_graph.as_default(): graph_def = tf.GraphDef() # read the original model with tf.gfile.GFile(filename, 'rb') as fid: graph_def.ParseFromString(fid.read()) tf.import_graph_def(graph_def, name='') # Get the inputTensor, outputTensor, inputShape tensor_name_list = [tensor.name for tensor in tf.get_default_graph().as_graph_def().node] tensor_values_list = [tensor.values() for tensor in tf.get_default_graph().get_operations()] inputTensor = tensor_name_list[0] + ":0" outputTensor = tensor_name_list[-1] + ":0" inputshape = tensor_values_list[0] # Get the input's shape a = str(inputshape) parse_items = a.split(' ') id_name1 = re.findall(r"\d+\.?\d*", parse_items[3]) id_name2 = re.findall(r"\d+\.?\d*", parse_items[4]) id_name3 = re.findall(r"\d+\.?\d*", parse_items[5]) try: (height, width, channels) = (int(id_name1[0]), int(id_name2[0]), int(id_name3[0])) except IndexError: print("Error: The input tensor's dimension is not fixed") print(inputshape) sys.exit(0) return inputTensor, outputTensor, height, width, channels def savedModelGenerator(modelpath, dataset, height, width, channels, outTensor, inTensor, modelname): """ Generate the saved_model used for post training quantization :param modelpath: The original model's path :param dataset: The training dataset (Note that here just one image is used to run this model) :param height: The input tensor's height :param width: The input tensor's height :param channels: The input tensor's channel :param outTensor: The name of output Tensor :param inTensor: The name of input Tensor :param normalize: Whether to normalize :return: Saved_model """ # Randomly choose one image filename = [] image_batch = [] files = os.listdir(dataset) # Filter the gray image if channels > 1: for filei in files: imagei = Image.open(dataset + '/' + filei) imagei = np.array(imagei) if len(imagei.shape) > 2: filename.append(filei) break else: filename.append(random.choice(files)) # Preprocessing is neural network dependent: resize, trans the channels, and add one dimension image = Image.open(dataset + '/' + filename[0]) if image.mode == 'L': image = image.convert("RGB") bn, h, w, c = (1, height, width, channels) # image = image.resize((w, h), resample=0) preprocess_t = get_preprocessing(modelname) image = preprocess_t(image, h, w) image = np.expand_dims(image, axis=0) image_batch.append(image) # Loading the model detection_graph = tf.Graph() with detection_graph.as_default(): graph_def = tf.GraphDef() with 
tf.gfile.GFile(modelpath, 'rb') as fid: graph_def.ParseFromString(fid.read()) tf.import_graph_def(graph_def, name='') # make the savel_model dir cwd = os.getcwd() if os.path.exists(export_path): os.chdir(export_path) os.system("rm -rf ./*") os.chdir(cwd) builder = saved_model.builder.SavedModelBuilder(export_path) # run the model to generate the saved_model with detection_graph.as_default(): config = tf.ConfigProto() config.gpu_options.allow_growth = True with tf.Session(graph=detection_graph, config=config) as sess: input_tensor = detection_graph.get_tensor_by_name(inTensor) softmax_tensor = detection_graph.get_tensor_by_name(outTensor) signature = predict_signature_def(inputs={"myinput": input_tensor}, outputs={"myoutput": softmax_tensor}) builder.add_meta_graph_and_variables(sess=sess, tags=[tag_constants.SERVING], signature_def_map={'predict': signature}) _ = sess.run(softmax_tensor, {inTensor: image_batch[0]}) builder.save() def convertToTpu(dg, mn): """ Convert to the TPU-compatible model from the generated saved_model :param dg: The data_generator used for post training quantization :param mn: The corresponding model name :return: The converted model """ converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir=export_path, signature_key="predict") converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] converter.allow_custom_ops = True converter.inference_output_type = tf.uint8 converter.inference_input_type = tf.uint8 converter.representative_dataset = tf.lite.RepresentativeDataset(dg.representative_dataset_gen) tflite_quant_model = converter.convert() if not os.path.exists(TPU_model_folder): os.mkdir(TPU_model_folder) with open(TPU_model_folder + mn + ".tflite", "wb") as wf: wf.write(tflite_quant_model) # Compile the tflite model to tpu-compatible tflite model os.chdir(TPU_model_folder) os.system("edgetpu_compiler -s " + mn + ".tflite") print("Successfully generate TPU model, stored at " + TPU_model_folder) def main(): # parse the args parser = argparse.ArgumentParser() # add new argument here parser.add_argument("-m", "--modelpath", type=str, default='./test/inception_v3_frozen_graph.pb', help="model path", required=True) parser.add_argument("-t", "--trainingset", type=str, default="/home/lsq/Documents/HP-NTU/EdgeTPU/dataSet5000", help="post training quantization imagedir", required=True) parser.add_argument("-vn", "--validnum", type=int, default=30, help="images num to be used in post " "training quantization") parser.add_argument("-mn", "--modelname", type=str, default="inceptionv3", help="model name", required=True) args = parser.parse_args() # Check if the provided images are enough. If not enough, the same images may be utilized more than once check_files = os.listdir(args.trainingset) if len(check_files) < args.validnum: print("Please provide more data than " + str(args.validnum) + " or change the value using -vn value " "(the default is 30") return 0 in_tensor, out_tensor, height, width, channels = getTensorInfo(args.modelpath) savedModelGenerator(args.modelpath, args.trainingset, height, width, channels, out_tensor, in_tensor, args.modelname) # New an object of data_generator dg = data_gen.dataset_generator(args.trainingset, height, width, channels, args.validnum, args.modelname) convertToTpu(dg, args.modelname) if __name__ == "__main__": main()
#ifndef EXODUS_RPCTX
#define EXODUS_RPCTX

#include <univalue.h>

UniValue exodus_sendrawtx(const UniValue& params, bool fHelp);
UniValue exodus_send(const UniValue& params, bool fHelp);
UniValue exodus_sendall(const UniValue& params, bool fHelp);
UniValue exodus_senddexsell(const UniValue& params, bool fHelp);
UniValue exodus_senddexaccept(const UniValue& params, bool fHelp);
UniValue exodus_sendissuancecrowdsale(const UniValue& params, bool fHelp);
UniValue exodus_sendissuancefixed(const UniValue& params, bool fHelp);
UniValue exodus_sendissuancemanaged(const UniValue& params, bool fHelp);
UniValue exodus_sendsto(const UniValue& params, bool fHelp);
UniValue exodus_sendgrant(const UniValue& params, bool fHelp);
UniValue exodus_sendrevoke(const UniValue& params, bool fHelp);
UniValue exodus_sendclosecrowdsale(const UniValue& params, bool fHelp);
UniValue trade_MP(const UniValue& params, bool fHelp);
UniValue exodus_sendtrade(const UniValue& params, bool fHelp);
UniValue exodus_sendcanceltradesbyprice(const UniValue& params, bool fHelp);
UniValue exodus_sendcanceltradesbypair(const UniValue& params, bool fHelp);
UniValue exodus_sendcancelalltrades(const UniValue& params, bool fHelp);
UniValue exodus_sendchangeissuer(const UniValue& params, bool fHelp);
UniValue exodus_sendactivation(const UniValue& params, bool fHelp);
UniValue exodus_sendalert(const UniValue& params, bool fHelp);

#endif // EXODUS_RPCTX
Tomorrow night, Bethesda's Todd Howard will be inducted into the Academy of Interactive Arts & Sciences Hall of Fame. It's a fitting honor for a developer who has steered two of the most esteemed game series, The Elder Scrolls and Fallout.

His most recent works — Skyrim (2011) and Fallout 4 (2015) — are highly regarded role-playing explorations of a fantasy world of dragons, and a post-apocalyptic zone of decay and dastardly conspiracies. Both were critical and commercial successes.

Skyrim particularly seemed to catch its moment. While RPGs were once a niche entertainment for relatively small numbers of adherents, Bethesda's dragon-slaying magical exploration game punched through to the mainstream media, and meme status. He says the game's high profile success caught him by surprise.

"I don't know how it happened," he says. “We could feel it when it crossed over to being referenced on television or other places. It's nothing we could ever plan for. It just kind of happened.

"Certain things came together. People's mood, timing, vibe, marketing, all of it. But it happened very quickly, almost as soon as the game was out."

Howard believes the elevated status of RPGs is due to the fact that so many games now borrow some of that genre's fundamentals such as NPC interactions, exploration, character upgrades and strong story. But the big breakthrough comes from freedom of movement.

"Video games put you in a different place," he says. "They do geography so well. We can put the player anywhere, and the player can do anything.

"Open world games have gotten more popular, so we have to think about creating the kinds of interactivity that make you feel like you're really in that world. We want to avoid activities that feel too 'gamey' and that take you out of the story."

While open worlds have been the engine of role-playing's growth, the genre’s continued success will rely on solving a much trickier problem: character interactions. RPGs can still throw up jarring encounters with NPCs who skirt the uncanny valley. Howard says that Fallout 4's dog and robot were his favorite characters.

“I think we have a very long way to go in how the other characters act and react to you. That's the big issue we're trying to solve. We're pretty good at pushing technology and world building. We have a good handle on game flow, the rate you get new things, how you're rewarded over time. But we need to be innovating on [characters].”

Although Bethesda's RPGs do feature their fair share of fighting critters and clearing rooms, he's proud of the moral choices posed in Fallout 4, particularly in terms of the big twist, and the various factions at play.

“We're pretty good at asking those [moral] questions. We need to get better at letting the player deliver answers to them.”

As far as future projects, Howard is tight-lipped. He says the company's next games will please fans, but offers few specifics, other than generally praising Fallout 4 on VR, mobile game The Elder Scrolls: Legends and Skyrim coming to Nintendo Switch. On the latter, he “can’t say” whether the original Skyrim or the 2015 remaster will be released.

We do know that Bethesda is working on two "bigger" new projects, but Howard offers no specifics. The Elder Scrolls 6 is also working its way through development, but is unlikely to be seen any time soon.

So far as the future goes, Howard says he just wants to carry on doing what he does. "There's a long way to go. We have so many ideas that we didn't think we were ready for. 
But given our size now and how the tech is coming together, we can do some of the things that we've talked about for a very long time. Now they are within our grasp."
The Sounds of Paris in Verdi's La traviata by Emilio Sala (review)

in the European context, an effort not found elsewhere until a group led by Raphael Kiesewetter began doing that in Vienna during the 1820s. As Tim Eggington shows, even though Cooke and his colleagues failed in the short run, their movement began a special British tradition by which amateurs were deeply involved in musical scholarship and journalism. What Steffani, Cooke, and the Academy's members accomplished led towards the work of George Grove and Stanley Sadie in the integration of professional and amateur musical culture in the long term. (I wrote on this problem in "The Intellectual Origins of Musical Canon in Eighteenth-Century England", Journal of the American Musicological Society, 47, 488–520.)

A provocative aspect emerges in the book in its interpretation of how Benjamin Cooke's compositional career evolved. Eggington shows that even though Cooke made a living at Westminster Abbey, he nonetheless wrote relatively little liturgical music, thereby participating in the waning of church music as a focus of composers' lives. The opportunity to write the more challenging kinds of glees, catches, and partsongs drew his attention to an increasing extent. Then in 1762 the windfall of an inheritance of his wife, Mary Jackson, made it possible for him to pursue unusual paths of composition.

The concluding chapter looks in detail into two unusual pieces. In 1773 Cooke published an adaptation of Johann Ernst Galliard's Morning Hymn, a chamber cantata on Milton's poem on Adam and Eve. As the first composer to set a major poem by Milton, Cooke made changes that, as Eggington interprets it, updated the music to the ears of his time and thereby "subsumed Galliard's original to make it his own" (p. 213). In such a fashion might a musician attempt to realize universal values of nature through its particular artistry. An even more unusual piece, Cooke's Collins's Ode, was published in 1785 with 165 subscribers, many of them prominent in public life. Collins's poem, put out in 1747, was written in reference to the odes of John Dryden and Alexander Pope, drawing on mythic and religious themes to personify the passions with which the goddess Music might be possessed. Cooke involved ancient or exotic instruments such as the tibiae pares, trigonale, and cymbalum in word-painting based on styles from early Baroque to recent harmonic experiments. Eggington concludes that we can see in this eccentric work how "the manipulation of stylistic diversity as a means to convey complex ideas would, over the coming century, constitute an ever-increasing component of the art of composition" (p. 249).

WILLIAM WEBER
California State University, Long Beach
n, m, k = map(int, input().split())

min_delta = min(k - 1, n - k)
max_delta = max(k - 1, n - k)
top = ((k - 1) * k + (n - k) * (n - k + 1)) // 2 + (max_delta - min_delta) * min_delta + max_delta + 1


def get_level(n, k, level):
    return 1 + min(level, k - 1) + min(level, n - k)


if top <= m:
    print(max_delta + 1 + (m - top) // n)
else:
    add = m - n
    curr_level = 0
    while add >= get_level(n, k, curr_level):
        add -= get_level(n, k, curr_level)
        curr_level += 1
    print(curr_level + 1)
package com.matchandtrade.rest.v1.controller;

import com.matchandtrade.persistence.common.SearchResult;
import com.matchandtrade.persistence.entity.ArticleEntity;
import com.matchandtrade.rest.v1.json.ArticleJson;
import com.matchandtrade.rest.v1.transformer.ArticleTransformer;
import com.matchandtrade.test.DefaultTestingConfiguration;
import com.matchandtrade.test.JsonTestUtil;
import com.matchandtrade.test.helper.ArticleHelper;
import com.matchandtrade.util.JsonUtil;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.mock.web.MockHttpServletResponse;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.*;
import static org.springframework.test.web.servlet.result.MockMvcResultHandlers.print;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@RunWith(SpringRunner.class)
@DefaultTestingConfiguration
public class ArticleControllerIT extends BaseControllerIT {

    @Autowired
    private ArticleHelper articleHelper;
    private ArticleTransformer articleTransformer = new ArticleTransformer();

    @Before
    public void before() {
        super.before();
    }

    @Test
    public void delete_When_DeleteByArticleId_Then_Succeeds() throws Exception {
        ArticleEntity expected = articleHelper.createPersistedEntity(authenticatedUser);
        mockMvc.perform(
                delete("/matchandtrade-api/v1/articles/{articleId}", expected.getArticleId())
                    .header(HttpHeaders.AUTHORIZATION, authorizationHeader)
            )
            .andExpect(status().isNoContent());
    }

    @Test
    public void get_When_GetByAttachment_Then_Succeeds() throws Exception {
        ArticleEntity expectedEntity = articleHelper.createPersistedEntity();
        ArticleJson expected = articleTransformer.transform(expectedEntity);
        String response = mockMvc.perform(
                get("/matchandtrade-api/v1/articles/{articleId}", expected.getArticleId())
                    .header(HttpHeaders.AUTHORIZATION, authorizationHeader)
            )
            .andExpect(status().isOk())
            .andReturn()
            .getResponse()
            .getContentAsString();
        ArticleJson actual = JsonUtil.fromString(response, ArticleJson.class);
        assertEquals(expected, actual);
    }

    @Test
    public void get_When_GetAllAndPageSizeIs2_Then_Returns2Articles() throws Exception {
        articleHelper.createPersistedEntity();
        articleHelper.createPersistedEntity();
        articleHelper.createPersistedEntity();
        MockHttpServletResponse response = mockMvc.perform(
                get("/matchandtrade-api/v1/articles?_pageNumber=1&_pageSize=2")
                    .header(HttpHeaders.AUTHORIZATION, authorizationHeader)
            )
            .andExpect(status().isOk())
            .andReturn()
            .getResponse();
        SearchResult<ArticleJson> actual = JsonTestUtil.fromSearchResultString(response, ArticleJson.class);
        assertEquals(2, actual.getResultList().size());
        assertEquals(2, actual.getPagination().getSize());
        assertEquals(1, actual.getPagination().getNumber());
        assertTrue(actual.getPagination().getTotal() > 3);
    }

    @Test
    public void post_When_NewArticle_Then_Succeeds() throws Exception {
        ArticleJson expected = ArticleHelper.createRandomJson();
        mockMvc
            .perform(
                post("/matchandtrade-api/v1/articles/")
                    .header(HttpHeaders.AUTHORIZATION, authorizationHeader)
                    .contentType(MediaType.APPLICATION_JSON)
                    .content(JsonUtil.toJson(expected))
            )
            .andExpect(status().isCreated());
    }

    @Test
    public void put_When_ExistingArticle_Then_Succeeds() throws Exception {
        ArticleEntity expectedEntity = articleHelper.createPersistedEntity(authenticatedUser);
        expectedEntity.setName(expectedEntity.getName() + " - updated");
        ArticleJson expected = articleTransformer.transform(expectedEntity);
        String response = mockMvc
            .perform(
                put("/matchandtrade-api/v1/articles/{articleId}", expected.getArticleId())
                    .header(HttpHeaders.AUTHORIZATION, authorizationHeader)
                    .contentType(MediaType.APPLICATION_JSON)
                    .content(JsonUtil.toJson(expectedEntity))
            )
            .andExpect(status().isOk())
            .andReturn()
            .getResponse()
            .getContentAsString();
        ArticleJson actual = JsonUtil.fromString(response, ArticleJson.class);
        assertEquals(expected, actual);
    }
}
Administrative and Taxation Mechanisms Supporting the Purchase and Maintenance of Electric Vehicles Based on the Example of Poland and other Selected European Countries

Abstract
Subject and purpose of work: The aim of this article is to review the current mechanisms supporting the purchase of electric cars, with particular emphasis on tax reliefs and exemptions.
Materials and methods: The research method consists of a review of the literature, legal regulations and industry reports on the presented subject.
Results: The authors analyzed the global electric car market, presenting examples of countries in which the share of electric vehicles in the total number of cars has recently increased significantly. In addition, current discounts and other preferences for the purchase of electric cars in European countries are presented, together with potential future mechanisms for buyers of electric cars in Poland.
Conclusions: Price is the main economic determinant in the purchase of a particular type of car. The costs of acquiring and operating an electric car are currently higher than those of traditional combustion vehicles. However, the EU and national authorities across Europe are working to increase the popularity of electric cars by offering tax reliefs and other preferences, with noticeable effects.
High-fat diet alters weight, caloric intake, and haloperidol sensitivity in the context of effort-based responding

High-fat (HF) diets result in weight gain, hyperphagia, and reduced dopamine D2 signaling; however, these findings have been obtained only under free-feeding conditions. This study tested the extent to which a HF diet affects effort-dependent food procurement and the extent to which dopamine signaling is involved. Male Sprague-Dawley rats consumed either a HF (n=20) or a standard-chow (n=20) diet. We assessed the sensitivity to effort-based reinforcement in 10 rats from each group by measuring consumption across a series of fixed-ratio schedules (FR 5 to FR 300) under a closed economy and quantified performance using the exponential-demand equation. For each FR, acute injections of 0 or 0.1 mg/kg of haloperidol, a D2 antagonist, were administered to assess dopamine-related changes in consumption. Rats fed a HF diet consumed more calories and weighed significantly more than rats fed standard chow. Food consumption decreased in both groups in an effort-dependent manner, but there were no group differences. Haloperidol reduced responding in an FR-dependent manner for both groups. Animals exposed to a HF diet showed an altered sensitivity to haloperidol relative to rats fed a standard diet, suggesting that a HF diet alters sensitivity to DA signaling underlying effort-based food procurement.
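For readers unfamiliar with the analysis named above, the exponential-demand equation is commonly written (following Hursh and Silberberg; this is general background, not a formula quoted from the paper) as log10 Q = log10 Q0 + k·(e^(−α·Q0·C) − 1), where Q is consumption at cost C, Q0 is consumption at zero cost, α indexes demand elasticity and k scales the range of the data. A small illustrative Python function with made-up parameter values, not values estimated in the study:

import numpy as np

def exponential_demand(cost, q0, alpha, k):
    """Predicted log10 consumption at a given price/effort (fixed-ratio) level.

    log10(Q) = log10(Q0) + k * (exp(-alpha * Q0 * cost) - 1)
    """
    return np.log10(q0) + k * (np.exp(-alpha * q0 * cost) - 1.0)

# Illustrative values only (not taken from the study).
fixed_ratios = np.array([5, 30, 100, 300])
print(10 ** exponential_demand(fixed_ratios, q0=50.0, alpha=0.002, k=2.5))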
Modelling the impact of agrometeorological variables on regional tea yield variability in South Indian tea-growing regions: 1981-2015

Abstract: As tea (Camellia sinensis L.) yield is strongly determined by local environmental conditions, assessing the potential impact of seasonal and interannual climate variability on regional crop yield has become crucial. The present study assessed region-level tea yield variability at different temporal scales utilising observed climate data for the period 1981-2015, to understand how climate variability influences tea yields across the South Indian Tea Growing Regions (SITR). Using statistical models, step-wise multiple regression (SMLR), seasonal autoregressive integrated moving average (SARIMAX), artificial neural network (ANN) and vector autoregressive model (VAR), the relations between meteorological factors and crop yield variability were measured. The higher explaining ability of the ANN and VAR models over SMLR and SARIMAX shows that multivariate time series models are better suited for capturing nonlinear short-term fluctuations and long-term variations. The analysis showed considerable spatial variation in the relative contributions of different climate factors to the variance of historical tea yield, from 3 to 95%. Climate variability explained ~84.8% of the annual tea yield variability of 1.9 t ha−1 y−1, which, over 106.85 thousand ha, translates into an annual variation of ~0.02 million ton in tea production over the study area. Among the climatic factors, temperature variability was identified as the most serious factor determining tea yield uncertainty, more so than rainfall variability, in South India (SI). Hence, the study recommends that policymakers develop imperative region-specific adaptation strategies and effective management practices (for temperature-related issues) to reduce the negative impact of climate change on crop yields.

ABOUT THE AUTHORS
Esack Edwin Raj is an Assistant Plant Physiologist at UPASI Tea Research Institute, Valparai and a Doctoral Research Scholar supported by CSIR-Fourth Paradigm Institute (CSIR-4pi), Bangalore through the SPARK Programme. He is interested in using various empirical spatial statistical approaches in understanding the linkages between climate and crop yield variability for sustainable tea production in South India. His research focus is on sustainable land use management and planning, geographical information systems, digital mapping and developing drought-tolerant and disease-resistant clones using marker-assisted selection. K. V. Ramesh is a Senior Principal Scientist at CSIR-4pi, Bangalore. His key research areas are climate change, atmosphere, hydrology, energy and disease. Rajagobal RajKumar is a Senior Plant Physiologist (Ret.) at UPASI Tea Research Institute, Valparai. He is an expert in tea physiology and his research focus is on climate change, carbon sequestration, agronomy and policymaking.

PUBLIC INTEREST STATEMENT
Tea is one of the economically important perennial plantation crops and is cultivated as a rainfed crop in different parts of the world. There has been a significant reduction in crop productivity in the recent past due to changes in temperature and rainfall patterns, especially in the South Indian Tea Growing Regions. In this context, the study explored how the tea crop responds to temperature and rainfall anomalies, to what extent climatic variability is affecting real-world tea productivity and, if climatic variability is affecting tea yield, which climatic variables are of importance and should be targeted with adaptation measures. The research answered these questions using different empirical statistical modelling approaches that would help to devise region-specific adaptation measures for policymakers and capacity-building strategies for farmers in response to the changing climate.

Introduction
Climate change is one of the most important and critical ramifications affecting crop yield, challenging global food security and socioeconomic stability. Crop yield response to climate change has received major attention from farmers, researchers and policymakers over the past few decades (Lobell, Cahill, & Field, 2007). Understanding the relationship between climate and crop yield can help in enhancing crop management strategies and the resilience of crop production systems to climate change. Many empirical models have been developed for various crops to address the climate-yield relationship, varying from simple statistical to complex process models. The process-based crop models (e.g. APSIM, CERES, and DSSAT) simulate key physical and physiological processes involved in crop growth and development with inputs from field experiments (Challinor, Ewert, Arnold, Simelton, & Fraser, 2009; Roudier, Sultan, Quirion, & Berg, 2011). However, the outputs of these crop models are difficult to extrapolate at the regional scale. To reduce uncertainties in the process models, the use of statistical models that rely on past observations of climate and crop yields to establish historical relationships (Lobell & Burke, 2010) has been suggested because of their flexibility and usefulness (Lobell, Schlenker, & Costa-Roberts, 2011). Site-specific rainfall events and regional temperature episodes determine crop growth and development through a wide variety of mechanisms (Changnon & Hollinger, 2003). It is therefore important to quantify the effects of climatic variations on crop yields at a regional scale with in-season inputs rather than aggregating the data to look at decadal and nationwide average effects (Kucharik & Serbin, 2008).
The statistical modelling approaches could represent the combined sensitivity of crop physiological attributes of a specific crop or genotype (e.g. crop physiological properties), adaptations to local/regional environmental conditions (e.g. soil and weather condition), together with the collective response of management practices to the seasonal variability in that area. In this context, Intergovernmental Group on Tea (IGT), held in Washington, DC, recommended to conducting regional and subregional scale analysis, since the global studies on climate change are on the continent or country levels have a large uncertain bias. Analysis of crop production system on the local scale is more precise than macro scale investigation (Osborne, Rose, & Wheeler, 2013) as the local climatic conditions are influenced by the topography of the region and proximity to the sea and oceans (). Based on the above background, the present study measured climate and crop yield variability at the regional scale using statistical approaches. Though localised impact found to be masked in national data, South Indian (hereafter SI) tea production has been increased during the past 35 years ( Figure S1a) despite increasing temperature maximum (Tmax; Figure S1c), declining temperature minimum (Tmin; Figure S1c) and intensifying rainfall (RF; Figure S1b). Across the South Indian tea-growing region (hereafter SITR), the positive rainfall trend was observed, nevertheless, the trends varied among the regions. Generally, Coonoor (CNR), Gudalur (GUD), Koppa (KOP), Meppadi (MEP) and Vandiperiyar (VPR) experienced a positive RF trend, while Munnar (MNR) encountered a negative RF trend ( Figure S1h). Apart from MNR and VLP, a warming trend in Tmin was apparent in the rest of the region studied ( Figure S1f). With a Tmax, GUD, MNR, VLP and VPR, a warming trend was apparent, which resulted in a collective SITR track-warming trend between 1981 and 2015 ( Figure S1g). Among teacultivating regions, trends in rainfall and temperature were not generally identical in magnitude, and in fact were not always in the same direction. These results point to spatial variations in climate variability and thus highlight the importance of the examining region-and month-specific climate relationship rather than generalising over entire SITR and averaging climate data over the all growing season. Before making a mitigation action plan to reduce the effects of climate variability, it is necessary to understand how different climatic factors affect tea yields? is a key question of the study. There are several studies conducted for many stable and perennial crops of commercial importance on a provincial level to investigate the influence of climate variability in crop production, but not on tea yield of SI. Progress has been made to elucidate the relationship between tea production and environment variables using crop models (e.g. CUPPA-TEA model by Matthews & Stephens, 1998a, 1998b across the different tea producing countries (e.g. Sri Lanka and North India) and often identified critical climate factors that significantly affect the crop growth and yield (Parry, Rosenzweig, Iglesias, Livermore, & Fischer, 2004;Tao, Yokozawa, Yinlong, Hayashi, & Zhang, 2006). Only a handful of studies have used observational datasets based on absolutely nonexperimental yield data to develop empirical models, including agricultural systems for a range of food crops in tropical countries (;). 
So, a systematic effort is needed especially with non-experimental datasets to clarify the question. Prediction of yield quantity can provide accurate information on the factors responsible for the suitable growing of crops, and it can help farmers and decision makers to make appropriate management options to minimise production risk. As predictions from different models often disagree, understanding the sources of this divergence are central to establishing a more robust picture of climate change likely impact. Given that individual models may have inaccuracies and misspecification, the use of different forecasting methods can help to minimise such limitations. Multiple Linear Regression (MLR) (Jiang & Thelen, 2004), Principle Component Analysis (Ayoubi, Khormali, & Sahrawat, 2009), Factor Analysis (Kaul, Hill, & Walthall, 2005), Support Vector Machines and Artificial Neural Network (ANN) (Green, Salas, Martinez, & Erskine, 2007;Norouzi, Ayoubi, Jalalian, Khademi, & Dehghani, 2010) are commonly used statistical methods for determining yield and distinguishing factors that influence them. Although the general strengths and limitations of specific methods in predicting yield responses to climate change are widely accepted, there has been a little systematic evaluation of their performance should be done on other methods (Lobell & Burke, 2010). Here, the study using a perfect modelling approach to explore the ability of statistical models in defining yield responses to changes in mean climatic conditions as simulated by a process-based crop model. It is important to note that the study does not compare the algorithms in terms of prediction power since the primary purpose of this study is to describe yield variability rather than to predict it. Shmueli pointed out this crucial difference elsewhere (see Shmueli, 2010). Comparative studies of different statistical methods can facilitate the choice of the best time series model for a further understanding of the nonlinear behaviour of yield response. Typically, tea (Camellia sinensis L.) plants are harvested once in a fifteen to twenty-one-day interval. Hence, environmental and climatic conditions within the plucking cycles control the rate of shoot development and tea yield over short-to-medium time-scales (Costa, De, Janaki Mohotti, & Wijeratne, 2007). Recent studies propose that climatic fluctuation and increased water availability influences the growth and development of new leaves in tea plants (;Carr, 1972;Hadfield, 1975) and quality of manufactured tea of operational plantations ((Ahmed et al.,, 2014;Hsiang, 2016). There is no comprehensive study to address, what extent to which climatic variability is affecting tea productivity in SI? What extent of the resilience or sensitivity of the real-world tea production systems to climate variability? Based on the assumption that yield response to contemporary weather is a proxy for responding to climate change. If climatic variability is affecting tea yield, we do not know which climatic variables are of importance and should be targeted with adaptation measures? Therefore, the study used an array of observed datasets to assess and quantify the relationship between the tea productivity and climate variables over the SITR. Further, we quantify the climate variability on tea productivity over this region. This will facilitate us to establish the basic theoretical understanding of tea response to climate and sensitivity to weather. 
Study area The SITR are situated in the Western Ghats of peninsular India and covers three states, Tamil Nadu, Kerala and Karnataka, latitude ranging from 75.36 to 77.09N and longitude from 9.57 to 13.35E where fragmented segments or part of the administrative district only tea is cultivated ( Figure 1). The altitude of the region starts at an elevation of 630 m (KOP) to 1790 m (CNR) above mean sea level. The climate of the regions is characterised by humid tropics with cool summer with heavy rainfall, which is heavier to the south and extends over six to eight months in a year. Such a climate favours tea production under rainfed so much that accounts, one-fifth of the total tea plantations of the country with an average yield of 2281 kg ha −1 and annually produces~25% of India's tea (Tea Board, 2016). SI is a key teagrowing region produces 243.71 Mkg (million kilogram) from 106.85 thousand hectares and Figure 1. Map of elevation range and meteorological stations in the SITR. Black lines with the cross line are the boundary of tea growing region, grey lines are administrative boundaries of districts and blue line are state boundaries of India. The base map of India including administrative units at different levels was acquired in vector format with data available from GADM database of Global Administrative Areas (http://gadn.org/country) and processed using ESRI® ArcMap Version 10.1. exports~36% (worth of~157 million US$) of total annual production. Agricultural activities will take place around the year and the crop will be harvested once in the 15-21-day interval. Unlike North Indian, the SI tea gardens show bimodal cropping seasons (i.e.) First crop season during April to June and the second crop season during September to November and it peaks respectively in the months of May and October. The total annual tea productivity of the SI ranging from 2030 (VPR) to 2909 (KOP). Seven meteorological stations, Coonoor (CNR), Gudalur (GUD), Koppa (KOP), Meppadi (MEP), Munnar (MUN), Valparai (VLP) and Vandiperiyar (VPR) are chosen as a representative sample of the climate in the study area (Table 1). Weather condition of the regions is influenced by the monsoon rain-belt with the highest rainfall is intense during June to September and even November in some years with large local variation because of orographic effects. According to 35year of daily data, the tea growing regions of SI receives an average annual rainfall from 1,777 to 4,051 mm. About 66% of the annual rainfall occurring in the Southwest monsoon, one of the important seasons over the tea-growing regions. The Northeast rainfall accounted 21% and remaining~14% of rainfall occurred during pre-and-post monsoon. Among the tea growing regions the mean annual air temperatures, both Tmin and Tmax were maximum at KOP and minimum in MNR and CNR. The mean annual relative humidity (RH%) range between 78 and 91% at 830h and from 67 to 77% at 1430h. Estimated average evapotranspiration (ETo) of the region is about 88-120 mm. Data collection and definition For the study, meteorological data recorded by the UPASI Tea Research Institute and its Regional Centres are used (UPASI Annual Reports 1981, which is the longest period continuous data across the region is currently available. The rain gauges are provided by the India Meteorological Department (IMD, Chennai), and the Department personnel inspect these gauges and undertake quality control of the data. 
Reference evapotranspiration (ETo) and cloudiness are calculated according to Penman-Monteith (Allen, Pereira, Raes, Smith, & Ab, 1998) and Black methods, respectively, by using SPEI and EcoHydrology Packages of R Statistics. Soil temperature at 830h and 1430h and soil moisture were derived from daily values input to the Java Newhall Simulation Model (jNSM) Version 1.6.1. All meteorological records were subjected to a visual inspection of reasonableness, completeness and any obvious discontinuities. After quality control of data, the daily values aggregated into monthly, seasonal and annual series. The crop production data at different temporal scales across the regions for the agricultural years 1981-2015 have been collected from the various sources viz., Annual Reports of Tea Board, J Thomas Tea Statistics and Theillai, UPASI monthly advisory circular publications. The productivity of tea is referred as the ratio of the area harvested and the dry weight of the yield. Figure S1a shows that the total tea production of SI steadily increased due to institutional (crop genetics) and technological developments (management techniques). As many nonclimatic factors influence the crop production rapidly with time, a low-pass filter have been used to separate the long-term changes from the short-term inter-annual variations about the trend (Press, Teukolsky, Vetterling, Flannery, & Ziegel, 1987). The lowfrequency (long-term trend) values are got by using a Hamming-type filter followed by applying five points Gaussian moving average. But, the person who reads should consider that it does not exactly remove the technology renovation trend over the years, but it is the best way to exclude the effect of other factors on historical yield change but the climate effects. In order to avoid the spurious effect in the models, stationarity of the data was checked before performing the analysis. Outliers, random walk, drift, trend, or changing variance in the time series is removed by transforming the data using the Augmented Dickey-Fuller (ADF) test and the KPSS test. Values are annual means of meteorological variables and total yield calculated across all years for each location. MSL: Mean above sea level (m); YPH: yield per hector (kg ha −1 ); Tmin: temperature minimum (°C); Tmax: temperature maximum (°C); RH830h; relative humidity at 830 h (%); RH1430h: relative humidity at 1430 h (%); RF: Annual total rainfall (mm); Rday: mean number of rain day in a year (if >$132#0.20 mm day −1 ); S: sunshine hour (h); Sday: mean number of sunny days in a year (if >0.10 h day −1 ); W: wind speed (km day −1 ); S830: minimum soil temperature (°C); S1430: maximum soil temperature (°C); Sm: Soil moisture (%); ETo: evapotranspiration (mm); and Cloud: cloudiness (%). Multiple regression analysis (MLR) In this study, MLR is used to find structural relations of the tea yield by assessing the complex multivariate connections with the independent climate variables. The MLR equation takes the following form: where y t is the dependent variable tea yield in year t, 0 is the constant value or intercept of the regression line, 1n is a vector of independent climate variables and t is the stochastic error term which is distributed as normal with mean zero. The contribution of one or more specific risk factors which has most influenced the tea yield of the region is assessed by the step-wise multiple regression method (SMLR) by combining a forward selection with backward elimination method. 
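In the usual notation, the regression just described is y_t = β0 + β1·x_1,t + … + βn·x_n,t + ε_t. As a purely illustrative sketch (not the study's own code), the forward half of the step-wise selection can be written with statsmodels as below; the synthetic data, climate column names and p-value entry rule are placeholders, and the rule shown is one common variant of SMLR. The multicollinearity screening with VIF follows in the next paragraph.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the monthly yield/climate table (real data not reproduced here).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(420, 5)), columns=["Tmin", "Tmax", "RF", "RH830", "ETo"])
df["yield"] = 200 + 8 * df["Tmin"] - 5 * df["Tmax"] + rng.normal(0, 5, 420)

def forward_stepwise(y, X, enter_p=0.05):
    """Forward selection: repeatedly add the predictor with the smallest p-value
    until no remaining predictor is significant at enter_p."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {v: sm.OLS(y, sm.add_constant(X[selected + [v]])).fit().pvalues[v]
                 for v in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= enter_p:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit(), selected

model, chosen = forward_stepwise(df["yield"], df[["Tmin", "Tmax", "RF", "RH830", "ETo"]])
print("selected:", chosen, "R^2:", round(model.rsquared, 3))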
Then the variance inflation factor (VIF) was calculated in determining the multicollinearity, as VIF 1 where r 2 i is the multiple determination coefficient obtained from X i regression on the K 1 independent variables residual. If VIF 1, no intercorrelation exists between the independent variables; if VIF stays within the range one-to-five, the corresponding model is acceptable; if VIF > 10, the corresponding model is unstable. The choice of the final equation is based on the coefficient of multiple determination R 2, F value of the regression coefficients and the value of the T-test. Seasonal ARIMAX The autoregressive integrated moving average (ARIMA) model is a stochastic model, which has been widely used for modelling and projecting climatological applications. The ARIMA can be written as ARIMA p; d; q where p represents the order of autoregressive processes, d represents the order of difference, and q represents the order of the moving-average lags. This model can be extended to account for seasonal fluctuations, with the expression ARIMA p; d; q P; D; Q s, where s indicates the length of the seasonal period. Box and Jenkins introduced the SARIMA model consists of three iterative steps: identification, estimation, and diagnostic checking (Box, Jenkins, Reinsel, & Greta Ljung, 2015). The quantitative form of an ARIMA model can be expressed as: where p and q are the orders of AR and MA terms respectively, in which y t1 is the current and previous incidence of the time series, i and i the fixed coefficients and tj is current and previous incidence residuals. The autocorrelation functions (ACF) and partial autocorrelation functions (PACF) were used to check for seasonal effects and identify plausible models using the Ljung-Box test. The residuals are further examined for autocorrelation using ACF and PACF. The model diagnostic is performed using Bayesian information criteria (BIC) and significance tests. The model with the lowest BIC values, which are statistically significant is considered a good fit model. Diagnostic plots and the Shapiro-Wilk test are used to check for normality of the residuals in the nonlinear regression. Finally, the predictions are performed by using the best fitting model. Artificial neural network (ANN) Models based on ANN can effectively extract nonlinear relationships in the data-processing paradigm that is inspired by the way biological nervous systems in the human brain to solve specific problems (Ghodsi, Mirabdollah Yani, Jalali, & Ruzbahman, 2012). ANN has been widely used in time series predictions because of their characteristics of robustness, fault tolerance, and adaptive learning ability (). In this study, the multilayer perceptron (MLP) with back propagation learning rule is applied, which is the most commonly used neural network structure in ecological modelling and other allied sciences (Bocco, Willington, & Arias, 2010). Mathematically an MLP can be written as where i is the weight corresponding to the input variable of r i, b is the bias, is the activation function and n is a number of input variable. The entire data is divided into three data sets for training (70%), validation (15%) and testing (15%) processes. The numbers of neurons were determined by trial and error method and finally the model with the lowest RMSE, and the highest coefficient of determination (R 2 ) is selected as the best-fit model. In this study, ANN models are performed using MATLAB software package (MATLAB version R2015a with neural network toolbox). 
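Before the Levenberg-Marquardt details that follow, here is a minimal, hedged sketch of the two steps just described: the VIF screen (VIF_i = 1/(1 − R_i²), with values above about 10 flagging an unstable, collinear model) and a SARIMAX fit with exogenous climate regressors. The synthetic data, column names and (p, d, q)(P, D, Q)_s orders are placeholders rather than the values selected in the study via ACF/PACF, Ljung-Box and BIC.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic stand-in for the monthly yield/climate table (real data not reproduced here).
rng = np.random.default_rng(0)
idx = pd.date_range("1981-01", periods=420, freq="MS")
df = pd.DataFrame({
    "Tmin": 15 + rng.normal(0, 1, 420),
    "Tmax": 27 + rng.normal(0, 1, 420),
    "RF": rng.gamma(2.0, 100.0, 420),
}, index=idx)
df["yield"] = 180 + 5 * df["Tmin"] - 3 * df["Tmax"] + 0.02 * df["RF"] + rng.normal(0, 10, 420)

# VIF_i = 1 / (1 - R_i^2); values above ~10 would flag serious multicollinearity.
X = sm.add_constant(df[["Tmin", "Tmax", "RF"]])
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])], index=X.columns)
print(vif.drop("const"))

# SARIMAX(p, d, q)(P, D, Q)_s with exogenous climate regressors; orders are illustrative only.
result = SARIMAX(df["yield"], exog=df[["Tmin", "Tmax", "RF"]],
                 order=(2, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
print(result.bic)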
A Levenberg-Marquardt (LM) algorithm based ANN model is prepared using a MATLAB code and the change of training weights W is computed as follows: Then, the update of the weights can be adjusted as follows: where J is the Jacobian matrix, I is identify matrix, is the network error, is the Marquardt parameter which is to be updated using the decay rate depending on the outcome. In particular, is multiplied by the decay rate 0 < < 1 whenever E W decreases, while is divided by whenever E W increases in a new step (Ham & Kostanic, 2001). Multivariate vector autoregressive model (VAR) The VAR is one of the straightforward and easy to use multivariate time series model developed by George Box and Gwilym Jenkins (Ltkepohl, 2005). The study used a VAR model to provide useful heuristics for understanding the empirical causal relationships between climate variables and tea yield. The main advantage of VAR is multivariate variables are both explained and explanatory variables. The model can be thought as a linear prediction model that predicts the current value of a k variable based on its own lagged values t 1; ; T and the lagged values of the other variables. A basic autoregressive model of order k is defined as: where c is a k 1 vector of constants (intercept), A i are k k matrices (for every i 1; ; p) and t is a k 1 vector of error terms. The i-periods back observation y ti is called the i th lag of y. To determine the optimal order of lags to be used in VAR, the Akaike Information Criterion (AIC) was used as common selection criteria. The advantage of VAR model is to perform the Granger causality test, which determines the direction of causality among the variables in predicting the value of another variable with zero mean (Granger-cause) (Akinboade & Braimoh, 2009;Keren & Leon, 1991). If a variable c t is found to be helpful for predicting another variable y t, then c t is said to Granger-cause y t. Then it is useful to include variable c t as an explanatory variable in a causal model with the following form: where y is the dependent variable(crop yield) at time t, c is climate variables, t i is lag variables, i and i are coefficients of the model and t is the error term. The Granger test is conducted by testing the following null hypothesis; H 0 : 0 climate variable does not Granger cause tea yield, and H 1 : 0 climate variable Granger-causes tea yield. This can be evaluated using the F-test: Here, SEE is the sum of square errors, n df n p 2p is the degrees of freedom, n p is the number of equations, and 2p is the number of coefficients in the unrestricted model. In addition, impulse response function is used (IRF) to identify shock reactions to the climatic variables (Ji, Zhang, & Hao, 2012;Pesaran & Shin, 1998). It shows that how much the variance of the forecast errors of each variable has been explained by exogenous shocks to the other variables in the VAR : where is constants, and are the coefficients of the models, and t is univariate white noise. Model validation For the construction of a model, the data divided into two sets: training and testing sets. Several criteria are used to evaluate model performance and the difference between simulated and observed data. The root means squared error (RMSE), which measures the difference between fitted and observed values, was calculated to evaluate the systematic bias of the model. The smaller the RMSE, the better the model is for forecasting: Moreover, the statistical model validated based on the statistical significance (i.e. 
p-value) where linear regression was applied to compare observed and calibrated data and the model's explanatory power as measured by the coefficient of determination (R 2 ) for each simulation. A high coefficient of determination (R 2 ) indicates the best model performance in capturing the observed crop yield response to climate (Lobell & Burke, 2010). Furthermore, bias and accuracy of models was measured through the mean absolute percentage error (MAPE) using the formula: where P t is the predicted value at time t, O t is the observed value at time t and T is the number of predictions. In terms of the MAE and MAPE computation, the good model will have the smallest possible value of MAPE (less than or equal to 10%). The statistical significance of the regressions is calculated to compare the predictive accuracy of the proposed models according to the Diebold-Mariano test based on absolute errors. An absolute error is simply defined as a i f i y i j jwhere f i stands for a estimated value and y i is an observed value. The null and alternative hypotheses are defined as:. Statistical significance was set to two-sided p < 0:05. Finally, cross-validation plot of time series of actual and predicted values was used to assess model validity. Sensitivity and uncertainty analyses Additionally, the sensitivity analysis (SA) was implemented to illustrate how the variation of the output of a model can be apportioned to different sources of input (Saltelli, Chan, & Scott, 2000). Parameter sensitivity analysis at the regional level represents the influence of climate variability and other regional factors on crop yield. This would allow the selection of the most sensitive parameters for model calibration and improve the accuracy of the model at the regional scale. The first-order (FS) and global sensitivity (GS) indices were calculated for seven SITR using the Sobol's algorithm proposed by Saltelli et al.. FS provides a measure of the direct importance of each parameter, and the larger the FS index the more important the parameter. On the other hand, the GS index is a measure of the total effect of each parameter, i.e., its direct effect and all the interactions with other parameters. If the FS index is equal to the GS index for a given parameter, it means that this parameter does not interact with the other parameters. Conversely, if the FS index is lower than the GS index, it indicates strong interactions between this parameter and other parameters. With k quantitative input factors, the decomposition of the variance Var Y _ generalises to: where is the total variance (the variance of the model output), D i is the FS (local) index of the parameter i, D ij is the second-order sensitivity index for the interaction of parameters i and j, and S is the total (global) sensitivity index measure the interaction effect of many parameters i (Makowski, Naud, Jeuffroy, Barbottin, & Monod, 2006;Wu, Fushui, Chen, & Chen, 2009). The sensitivities of each parameter are defined by: where S i is the FS index for the parameter i, S ij is the GS index for the interaction of parameters i and j, and S is the total GS index for parameter i (the sum of all effects (first and higher order) involving parameter i) (;). For example, if there is a threefactor model, the three total effect terms for STi are: where Si is simply the fraction of the variance of that value to the total variance of the model, as previously defined. 
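Returning for a moment to the VAR step described earlier (the sensitivity-analysis discussion continues in the next paragraph), lag selection by AIC, the Granger-causality F-test and the impulse responses can be sketched with statsmodels as follows; the variables, lag bound and synthetic data are placeholders, not the study's own choices.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic stand-in for the seasonal yield/climate series used in the paper.
rng = np.random.default_rng(1)
n = 140
data = pd.DataFrame({
    "Tmax": 27 + rng.normal(0, 1, n),
    "RF": rng.gamma(2.0, 100.0, n),
})
data["yield"] = 600 - 8 * data["Tmax"].shift(1).fillna(27) + 0.05 * data["RF"] + rng.normal(0, 20, n)

model = VAR(data[["yield", "Tmax", "RF"]])
fitted = model.fit(maxlags=5, ic="aic")          # lag order chosen by AIC, as in the text
print("selected lag order:", fitted.k_ar)

# H0: the climate variables do not Granger-cause yield.
granger = fitted.test_causality("yield", ["Tmax", "RF"], kind="f")
print(granger.summary())

# Impulse responses of each variable to one-unit shocks, traced over 12 periods.
irf = fitted.irf(12)
print(irf.irfs.shape)    # (13, 3, 3): response of each variable to each shock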
Although the sum of the individual effect terms will add to one, the sum of all the ST i values is typically larger than one because interactions are counted multiple times. The elementary effects of each factor was computed using Monte Carlo integration based upon Saltelli's efficient latin hypercube sampling (LHS) scheme for the estimation of standardised sensitivity indices. Specifically, 35-year time series of datasets resampled to create 5,000 simulations per parameter leading to a total number of model runs of 14 5; 000 70; 000 to compute the FS and GS indices. The analyses were conducted using R package tgp based on 95% confidence interval. Region tea yield variability The coefficient of variation (CV) of tea yield represents the ability of the crop production system to adapt to environmental change in different regions. If the CV is near zero, crop production is considered stable while higher the CV is higher interannual variability (Zhang, Patuwo, & Hu, 1998). A box plot was used to depict the discrete levels of yield data ( Figure 2). The highest coefficient of variation in CNR indicates the greatest relative variability of tea yields at the month, season and annual temporal scales. While the coefficient of variation in tea yield was marginal or less in GUD and KOP. The standard deviation (STD) of tea yields over the analysis period indicated the large interregional and interannual variability. The average yield variability (STD) was,46, 84 and 225 kg ha −1 y −1 respectively at the month (24.3%), season (14.1%) and annual (9.3%) scales over the study period. High annual STD up to 353 kg ha −1 in tea yields was found in CNR followed by MNR while the lowest variability is obtained for GUD (155 kg ha −1 ). Comparison of models The study establishes where and by how much crop yields varied within each tea growing regions and then identified how much of the variation in tea yields was explained by climate variables. The efficiency of the models, observed values at various temporal scales were compared with the results obtained by the SMLR, SARIMAX, ANN and the VAR. These regression equations of all the models are presented in table S1-S4. Though all possible regression combinations were attempted, only the best combination results are summarised in Table 3. Figure 3 shows the observed and predicted monthly yield for the calibration (Figure 3(a)) and the validation (Figure 3(b)) of VLP for all four models. It can be determined from the given graph that the predicted values of the calibration period show relatively good agreement with the observed values where ANN and VAR models outperform SMLR and SARIMAX models in general. Validation also has given near to the observed values with predicted values. It has been observed that the coefficient SMLR and SARIMAX during the testing period found in lower than the calibration period, except sometimes. However, the coefficient was greater during the testing period than the calibration period in the ANN and VAR models that were found to be precise models in elucidating tea yield in the peak and low cropping months/seasons. Figure 3(c) shows the scatter plot of the observed versus predicted values for the tea yield. The plot approximates a straight line, and an angle close to 45°(one-toone line) indicates the high accuracy of the ANN and VAR models for the estimation of the tea yield for VLP. Figure 4 visually outlines the distributions of observed and estimated yield using four models. 
The box plot represents the 25th and 75th percentiles with the bottom and top lines of the box, respectively; the difference between these two percentiles is termed interquartile range. In addition, the line intersecting the box identifies the median of the distribution, while the two whiskers provide a measure of the data's range. The isolated points outside the whiskers represent outliers. As can be seen in the picture, compared to the predicted yield, all models reproduce the observed dispersion of the yield quite well. As displayed by the upper whiskers, all models clearly Figure 3. Comparison of observed and modelled monthly tea yield (kg/ha; y axis) of Valparai during calibration (a) and testing period (b), and cross-validation plot at various temporal scales (c). Observed yield (kg/ha) in the x-axis (in the top and in the bottom) and predicted yield (kg/ha) y-axis (in the left and right). OBS: observed tea yield; and modelled yield by SMLR: step-wise multiple linear regression; SARIMAX: Seasonal autoregressive integrated moving average; ANN: artificial neural network; and VAR: Vector autoregressive model. underestimate the high levels of the yield, even though the model performs best. The performance statistics of the four different models with regard to the calibrating and testing errors for SITR are given in Table 3. All derived models are statistically significant with better model performance, except monthly and annual tea yield in CNR and seasonal yield in MEP. Even though the explained variability is lower elsewhere, the model error suggests that the climate variables have a clear Left, centre and right panel shows monthly (a-g), seasonal (h-n) and annual (o-u) tea yield, respectively. The regional codes are the same as those in the footnotes below Figure 2, but for model performance. impact on crop yields. Considering an average yield of 2500 kg ha −1 the models have a bias of,12 to 20, 25.7 to 35 and 60 to 64 kg ha −1 respectively in the month, season and annual tea yield. SMLR model The percentage of contribution of the climatic variables in the variation of tea yield was quantified by applying the MLR model and results are presented in Table 2. R 2 obtained by the model illustrates the fraction of the dependent variable's (yield) variance the model could explain the response variables. The results suggest that the MRL model is able to represent the tea yield variations ranging between 3.1 and 61.2%, whereas remaining the variation is attributed to residual factors such as better clones, improved crop management operations and the introduction of modern agro-technology. The results of the MRL show that few significant relationships between yield and climate variables, these coefficients can be used to assess the real effects of climate variability in the changes in the crop yields considered in this study. In addition, the sign of the coefficients indicates the direction of change in the yield versus climate variable changes. In the most productive tea-growing region (i.e. KOP) monthly climate variability explained 44.8% of the total yield variation, at the season it was 31.8% and annual it was 61.2% (Table S1). In the low production region (VPR), climate variability explained 28.7-38.9% of the tea yield variability. In high elevation regions such as the CNR, 29.5-43.8% of the tea yield variability was explained by climate. 
For higher rainfall regions such as the MNR and VLP, respectively 31.9-48% and 28.6-55% of the tea yield variability was explained by climate variability. The empirical findings revealed that the response of tea yield to the primary climatic variables, Tmin, Tmax and RF differs temporally and spatially. Tmax has a negative impact on yield at CNR, KOP and VPR in all temporal scales, whereas the seasonal and annual Tmin and RF have a negative impact on tea yield at VLP. Although temperature variability in SITR was more important, rainfall variability explained only part of the tea yield variability. Overall estimates among the primary climatic parameter indicated that Tmax had a negative influence (55.53 kg ha −1 y −1 ), Tmin had a positive influence (90.03 kg ha −1 y −1 ) and RF had an unbiased response to tea yield (0.003 kg ha −1 y −1 ). If we consider regression coefficients of the linear model represent yield increments per unit of weather change (e.g., kg ha −1 y −1 ) by holding other variables constant, a 1°C rise in Tmin can reduce maximum 234.5 kg ha −1 in the annual yield at VLP and Tmax by 641.8 kg ha −1 at KOP. SARIMAX models According to Box-Jenkins, AR models is of second order for GUD, MNR, VLP and VPR, while, AR fourth order for CNR, KOP and MEP monthly data. For MA models, the orders are GUD MA, CNR, MNR, VLP and VPR MA and for KOP and MEP MA. Also, the results show I for CNR, GUD and VLP, and I for KOP, MEP, MNR and VPR. The orders of SARIMAX models determine the dynamic relationship of past values of climate varies from two to four months. Analytical estimations of SARIMAX models are presented in table S2. The Ljung-Box statistics were used to check the adequacy of the tentatively identified model. The p-value of the Ljung-Box statistics was,0.405, 0.652 and 0.535 for values of lags equal to 12, 4 and 2, respectively. This indicated that the model was adequately captured the correlation information in the time series. Moreover, the residuals of the individual sample examined using the autocorrelation (ACF) and partial autocorrelation (PACF) function supported the model adequacy. ANN models A major issue in using neural networks is the selection of network architecture and appropriate patterns of input data that are likely to influence the desired output. After splitting the dataset into training and testing data, different learning rates, learning algorithms and the number of neurons in the hidden layers determined by the seasonal period of the time series. The optimum number of neurons was equal to one more than twice the number of input variables, i.e. 2n 1, where n 16 in the study. The results suggest that the ANN is effective in reproducing nonlinear interactions with fourteen input variables, one hidden layer with twenty-five neurons and an output layer with oneoutput variable (14-25-1 structure). Trial and error method was followed to find out the optimal number of input delay, where maximum 3-12, 1-4 and 1-3 periods are the delays for the month, season and annual time series, respectively. The network configuration, the number of iterations, as well as the smallest global error achieved within the specified number of iterations and training run-time which are given in table S3. The results were compared with actual observations by means of time series analysis and scatter statistics (scatter points, RMSE, MAPE and Pearson correlation coefficient). 
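As an illustration of the 14-25-1 feed-forward configuration described above (the scatter-plot diagnostics continue below), the following sketch uses scikit-learn's MLPRegressor with one 25-neuron hidden layer; note that it trains with L-BFGS rather than the Levenberg-Marquardt rule used in the paper's MATLAB implementation, and the 70/30 split and synthetic inputs are placeholders, not the study's data.

import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 14 climate inputs, one yield output (real data not reproduced here).
rng = np.random.default_rng(2)
X = rng.normal(size=(420, 14))
y = X[:, 0] * 5 - X[:, 1] * 3 + np.sin(X[:, 2]) + rng.normal(0, 0.5, 420)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

# One hidden layer with 25 neurons, mirroring the 14-25-1 structure; the solver differs
# from the Levenberg-Marquardt algorithm described in the paper.
mlp = MLPRegressor(hidden_layer_sizes=(25,), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
print("test R^2:", r2_score(y_test, mlp.predict(scaler.transform(X_test))))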
Figure 3 shows the scatter plot of observed versus predicted values; the points approximate a straight line at an angle close to 45° (the one-to-one line), indicating the high accuracy of the ANN model for the estimation of tea yield.

VAR models
A multivariate VAR model with exogenous variables was adopted in the present study because of its lower forecasting errors. The number of lags for the VAR model ranged from 5 to 12 for the monthly, 3 to 5 for the seasonal and 1 for the annual dataset, chosen on the basis of the AIC and HQIC criteria (i.e., the minimum AIC and HQIC). The results of the VAR model and the SIC criteria are presented in Table S4. In Figure 5, the vertical axis is the impulse response of the yield and the horizontal axis is the lag time (years) after the initial positive shock is applied to the independent climate variables. Figure 4 illustrates the performance of the modelled yield compared with the observed yield. The models explain ~2.8 to 92.1% of the total variance of monthly tea yield, with MAPEs of 4.34 to 17.4% and 3.23 to 25.3% during the training and validation periods, respectively. Based on the performance measures (RMSE, MAPE) and the Diebold-Mariano test, the ANN was found to be the best-fit model for monthly tea yield in all regions because of its strong nonlinear mapping ability and tolerance to complexity in the data; it also captured both mean and outlier values efficiently. With the ANN model, climate variables explained ~86.3% (MEP) to 92.1% (VLP) of the variation in monthly tea yield, with average MAPEs of ~6.1 and 5.5% during the calibration and testing periods, respectively.

Monthly tea yield response to climate variation
The ANN model explained the highest share of yield variation, 92.1%, in the VLP region (YPH = 0.97 × Target + 6), obtained with a four-month delay period and a 14-25-1 model structure. For KOP (YPH = 0.80 × Target + 50), CNR (YPH = 0.89 × Target + 16) and MNR (YPH = 0.93 × Target + 13), the model explained ~91% of the yield variation (14-23-1 structure) using information from a three-month delay period. The best predictions for the MEP, GUD and VPR regions were obtained with a 14-21-1 structure using information from the past five months, with ~20 kg ha−1 (6%) error during the calibration and testing periods. The model delay period of 3-5 months shows that yield was strongly influenced by the weather conditions of earlier months, implying that yield could be predicted in advance of the current month.

Seasonal tea yield response to climate variation
For seasonal tea yield, the explained variation ranged from 7% (MEP) to 38% (MNR) with SMLR, 49% (GUD) to 92% (VLP) with SARIMAX, 79% (CNR) to 91% (MNR) with ANN and 92.5% (KOP) to 99.6% (VLP) with VAR. Among the four models, the ANN had the strongest explanatory ability for the seasonal tea yield variation of the GUD, MEP, MNR and VPR tea-growing regions (Figure 4), while the VAR model for CNR and KOP, and SARIMAX for VLP, were the best-fit models for seasonal yield. For GUD, a 14-21-1 model with a three-season delay period produced the lowest errors during the calibration (3.47%) and testing (1.96%) periods and explained 80% of the seasonal yield variation. For MEP, the model (YPH = 0.99 × Target + 8.9) with a 14-23-1 structure explained 84.6% of the yield variation with a four-period delay.
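A minimal Python sketch of the 14-25-1 feed-forward ANN described above (14 climate inputs, one hidden layer of 25 neurons, one yield output) is given below; the scaling, activation, train/test split and file name are assumptions, not the authors' documented settings.

```python
# Hedged sketch: a 14-25-1 ANN for monthly tea yield (scikit-learn MLPRegressor).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

df = pd.read_csv("monthly_yield_climate.csv")         # hypothetical input file
X = df.drop(columns=["yield_kg_ha"]).values           # 14 climate inputs assumed
y = df["yield_kg_ha"].values
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

scaler = StandardScaler().fit(X_tr)
ann = MLPRegressor(hidden_layer_sizes=(25,), activation="tanh",
                   max_iter=5000, random_state=0)     # one hidden layer, 25 neurons
ann.fit(scaler.transform(X_tr), y_tr)

rmse = mean_squared_error(y_te, ann.predict(scaler.transform(X_te))) ** 0.5
print(f"testing RMSE: {rmse:.1f} kg/ha")
```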
The ANN model for MNR (YPH = 0.92 × Target + 49) explained 91.4% of the variation associated with climatic variables, with a 14-11-1 model structure and four delay periods. For the VPR region, the model (YPH = 0.92 × Target + 38) with a two-period delay explained 90.1% of the variation and showed the lowest training and testing errors. The effects of the climate variables on the seasonal tea yield of CNR and KOP, assessed with the VAR model, are statistically significant (p < 0.001). The R2 values indicate that 50.5 and 71.6% of the variation are explained by the climate variables for CNR and KOP, respectively, showing that the meteorological data included in the model are sufficient to account for regional and other complex factors that influence tea yield. The F-statistics show that the lagged terms, four for CNR (F = 7.767; p < 0.001) and two for KOP (F = 8.972; p < 0.001), were statistically significant. The results in Table 4 show a strong causal relationship between climate variables and tea yield: in each case, the climate variables included in the model were found to "Granger-cause" the yield, except RH830 at KOP. Since the F-statistics were significant and the direction of causality runs from the climate variables to yield, the impulse response functions (IRF) were examined to assess the yield responses to shocks. As seen in Figure 5, crop yield reacts to an impulse (shock) in Tmax, Tmin and RF for twelve periods. According to Figure 5(a), which shows the IRF of yield to its own shock, an increase in yield in the current period has a positive effect on future per-hectare yield, with the positive effect declining over up to seven periods. Regarding the dynamic relationship of climate variables to yield, increases in Tmin, Tmax and RF in the current period positively influence yield for up to six periods at KOP, followed by a gradually decreasing trend with uncertain positive effects. For CNR, increases in Tmin, Tmax and RF in the current period have an uncertain positive effect on future crop yield; YPH decreases sharply for up to three periods, with brief upturns after four and eight periods. For VLP, the best-fit model is SARIMAX 4 (log-likelihood = −764.61, AIC = 1561.21; Table S2), in which Tmax, Tmin, RF, Sday and cloud cover are the most limiting factors, negatively regulating crop yield. All coefficients of the estimated model are significant at the 5% level. The R2 of the estimated model is 0.920, showing that ~92.0% of the variation due to seasonal climate could be explained by the estimated previous lag values and the lagged error terms. The RMSEs of the training and testing periods are 67.29 and 70.82 kg ha−1, and the MAPEs are 8.95 and 9.30%, relative to an average predicted yield of 633 kg ha−1. The Ljung-Box statistic is 17.148 for 15 degrees of freedom, and the probability corresponding to the Box-Ljung Q-statistic is 0.310, which is greater than 0.05; the model is therefore accepted, and it may be concluded that the selected SARIMAX 4 model is adequate for the given time series. The correlogram and Q-statistics show no significant spikes in the ACFs or PACFs at the 95% confidence limits (figure not shown), indicating that the residuals of this SARIMAX model are white noise. Its forecast performance is better than that of the other three models when the number of inputs is kept equal for all models.
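The VAR analysis outlined above (lag selection by information criteria, Granger causality from the climate block to yield, and impulse responses over twelve periods) can be sketched in Python as follows; the file name, column names and the maximum lag are illustrative assumptions.

```python
# Hedged sketch: VAR fit, Granger causality and impulse responses (statsmodels).
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("seasonal_yield_climate.csv", index_col="season")
data = df[["yield_kg_ha", "Tmax", "Tmin", "RF"]]      # yield plus climate series

model = VAR(data)
order = model.select_order(maxlags=5)                 # AIC / HQIC selection
res = model.fit(order.aic)                            # fit at the AIC-chosen lag

# Does the climate block Granger-cause yield?
gc = res.test_causality("yield_kg_ha", ["Tmax", "Tmin", "RF"], kind="f")
print(gc.summary())

irf = res.irf(12)                                     # responses over twelve periods
irf.plot(impulse="Tmin", response="yield_kg_ha")      # yield response to a Tmin shock
```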
Moreover, the model shows no sign of biased estimation across the entire prediction period (5 years), as suggested by the scatter plot of predicted versus observed yield (Figure 3(c)). However, as with the other three models, the error in the high-season period is still larger than in other periods, because the month-to-month and year-to-year variations of the yield are driven by many factors.

Annual tea yield response to climate variation
For annual tea yield variability, the SMLR model explained 22.7% (CNR) to 78.5% (KOP), SARIMAX 68 to 82%, ANN 52.4% (GUD) to 93.1% (VLP) and VAR 66.8% (GUD) to 95.0% (KOP). The VAR and ANN models therefore explained more of the annual tea yield variability than their counterparts. The overall analysis revealed that the VAR model for GUD, KOP and VPR, and the ANN model for CNR, MEP, MNR and VLP, were the best-fit models, explaining more than ~84% of the interannual tea yield variability. The appropriate number of lags for the VAR model is one for GUD (AIC = 1; F = 2.133; p < 0.066), KOP (AIC = 1; F = 2.536; p < 0.001) and VPR (AIC = 1; F = 15.070; p < 0.001), signifying the existence and nature of long-run relationships between climate variables and crop yields at these three locations (p < 0.001). The R2 values of the model are 0.668 (GUD), 0.950 (KOP) and 0.934 (VPR), suggesting that the variation due to climate is significant and can be explained by the estimated previous lag values. The Granger causality test was used to check whether significant causal, equilibrium relationships from climate variables to crop yields exist at the chosen locations (Table 4). There is strong causation between tea yield and RH1430, RF and S in KOP, and Rday in VPR, while no empirical evidence of causality from the climatic variables to the yields of VPR was found. Regarding the IRF pattern of annual yield to its own shock (Figure 5(e)), a rise in yield in the current period has a positive effect for up to twelve periods; MEP is the exception, with positive effects for up to four periods that are then strongly subdued. With rising Tmin, yield is driven positively in the first four periods for CNR and VLP, but CNR shows a brief increase and then drops more slowly (Figure 5(f)), while VLP declines gently in response to the shock and rises again almost nine periods after it. Yield rises sharply after a two-period Tmin shock in MNR, peaks at six periods and maintains the positive effect for a long period with minor fluctuations. After a positive Tmin shock in the first period, yield drops sharply before peaking five periods after the shock and then sustains negative effects for a long time; a similar pattern is observed for Tmax in MEP. From Figure 5(g), the yield increment reaches a maximum about four periods after the initial Tmax shock in CNR and MNR. Likewise, a Tmax shock increases yield by up to 13.25 kg ha−1, with a peak reached two periods after the shock, followed by an uncertain decrease and a brief increase. A rainfall shock causes a quick decay in yield followed by further increments in subsequent periods, while in VLP a rainfall shock induces a further yield increment that takes about four periods to reach its maximum.

Sensitivity and uncertainty of climate variables
To further investigate the influence of climate variables on tea yield, a variance-based sensitivity analysis with bootstrap resampling was carried out.
The analysis helps to identify the key parameters and to rank the uncertain input factors with respect to their effects on the nonlinear statistical model output by calculating quantitative indices. Figure 6 shows the sensitivity indices calculated for tea yield at various temporal scales for the seven regions. The FS (first-order) index reflects the main (local) effect, while the GS (total global sensitivity) index reflects the sum of all effects involving the parameter on the model output, including interactions with all other parameters. The FS values of Tmin, Tmax, S, S830 and W were much higher than those of the other parameters, indicating the direct influence of these parameters on monthly tea yield. The highest FS index of Tmin was registered by VPR (0.188), indicating that Tmin alone determined ~19% of the variation of the model output. Similarly, Tmin in CNR (14%), MNR (9%) and MEP (5%) was found to be a key parameter, recording higher FS values. The FS of Tmin in GUD (0.053), KOP (0.040) and VLP (0.041), by contrast, was lower, between 4 and 5%. The FS of S830 in GUD (0.080), of S in KOP (0.053) and of W in VLP exhibited the highest sensitivity indices and are regarded as region-specific key parameters. The estimated FS for RF and Tmax was ~4 and 5%, which is on a par with the key parameters in some regions; this indicates that, even though their FS values were lower, RF and Tmax were equally important in influencing crop yield. In line with FS, GS shows a similar pattern of highest sensitivity towards Tmin, Tmax, S, S830 and W in the respective regions. Tmin was the most sensitive factor in four of the seven regions. GS was highest in MEP (0.944), denoting that 94% of the variation of the model output was influenced by Tmin; compared with the FS of Tmin in MEP (0.052), the remaining ~89% of the variation was controlled by the interaction of Tmin with other variables. Likewise, the highest GS of Tmin in CNR, MNR and VPR showed that Tmin in these regions acts mainly through interactions with other variables. Based on the FS values for seasonal yield, Tmin, RH830, S and S830 are the key parameters, registering higher, region-specific indices. Tmin is the most influential factor in CNR (0.049) and VPR (0.085), followed by RH830 in MNR (0.086) and VLP (0.044), S in GUD (0.019) and KOP (0.071), and S830 in MEP (0.035). The primary meteorological variables Tmax and RF, however, registered only ~3 and 4% of the variation of the model output. The highest GS values, for Tmin in GUD (0.961), followed by RH830 in VLP (0.950) and S830 in MEP (0.915), show that >94% of the variation is captured, of which ~90% of the model-output variation was controlled by interactions with other parameters. For the annual yield, FS shows explicit variation in the key parameters. The most sensitive factor in CNR is Rday (0.030), Tmin in GUD (0.017), S in KOP (0.072), S830 in MEP (0.056), RH830 in MNR (0.172) and VLP (0.052), and Tmax in VPR (0.056). RH830 was found to be the most sensitive factor, besides the important key variables, irrespective of the regions compared.

Figure 6. First-order (FS, orange) and total global sensitivity (GS, red) of tea yield (kg ha−1) due to climate variability at monthly (a-g, top panel), seasonal (h-o, middle panel) and annual (p-v, bottom panel) scales. All abbreviations are the same as those in the footnotes below Table 1.
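A variance-based (Sobol-type) computation of first-order and total indices analogous to the FS/GS indices discussed above can be sketched in Python with SALib as follows; the input bounds and the stand-in yield function (whose coefficients merely echo the signs reported earlier) are illustrative assumptions, not the fitted regional models.

```python
# Hedged sketch: Sobol first-order (FS-like) and total (GS-like) indices with SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["Tmin", "Tmax", "RF"],
    "bounds": [[10, 20], [22, 35], [0, 600]],     # plausible ranges, not measured
}

def yield_model(x):
    # Stand-in for the trained yield predictor: yield as f(Tmin, Tmax, RF).
    tmin, tmax, rf = x
    return 90.0 * tmin - 55.5 * tmax + 0.003 * rf

params = saltelli.sample(problem, 1024)           # Saltelli sampling scheme
responses = np.apply_along_axis(yield_model, 1, params)
indices = sobol.analyze(problem, responses)

print(indices["S1"])   # first-order indices (direct effects)
print(indices["ST"])   # total indices (including all interactions)
```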
Rday and Tmax are also highly sensitive parameters for three regions in which the global sensitivity index is highest and the interaction with other parameters is usually much smaller than the first-order effect. Figure 7 shows the large spatial variation in the total sensitivity of the different climate variables with respect to the variability of tea yields across SI. In general, increasing Tmin had a positive impact on yields over most of the tea-growing regions, except MEP and VLP (Figure 7(a)). In contrast, Tmax exerted a negative impact on tea yields except in MNR and VLP; the negative impact was strongest in VPR and weakest in CNR (Figure 7(b)). An increase of one standard deviation in RH830 can lead to a decrease in tea yields in the MNR and MEP regions (Figure 7(c)). An increase in RH1430 was found to drive up tea yield, especially in MNR and VPR, while an increase in relative humidity drives down tea yield in CNR, GUD and VLP (Figure 7(d)). Compared with the other regions, the magnitude of tea yield sensitivity to RF was larger in VPR, followed by VLP (Figure 7(e)). Yield is expected to increase with a one standard deviation increase in RF in some major tea-growing regions such as CNR, MEP and MNR; the magnitude of change is larger in the mid-altitude region MEP and smaller in the higher-altitude regions CNR and MNR. Tea yields in several SITR are most sensitive to changes in S, with a ~2 to 4% increase in response to a +1 standard deviation increase in S in GUD, KOP, MEP and VPR (Figure 7(g)). The tea yield of VLP is most sensitive to W, showing a much larger magnitude of change than in the other regions (Figure 7(i)). MEP, MNR and VPR show high sensitivity to S830, with larger positive responses in MNR and VPR and a negative response in MEP to an increase in S830 (Figure 7(j)). The climatic variables S1430, Sm, ETo and cloud have narrow uncertainty bounds but vary substantially in the sign and magnitude of their influence (Figure 7(k-n)).

Discussion
Since the climate factors and their influences on tea yields were found to covary on spatiotemporal scales, understanding the discrete effect of each climate factor and its uncertainty can help to develop more effective adaptation strategies in response to the anticipated climate change. In line with similar studies in related domains, the present study demonstrated the use of empirical statistical modelling on observational data to understand the spatiotemporal dynamics of tea yield in response to climate. The reader should, however, keep in mind that the models do not explicitly take into account extreme weather events (e.g., warm and cold spells, strong wind, hail, very heavy or very low precipitation) that may lead to complete crop failure. Indeed, by using time series at monthly and seasonal scales, the extreme events that act on shorter time scales are only partially reflected in the yield averages. Among the statistical models, the ANN and VAR models produced flexible and powerful results that are highly specific to each region-temporal pair. It is worth pointing out that the ANN models (for monthly, seasonal and annual yield data) ranked first in training and testing performance compared with their counterparts (Table 3). This could be because the nonlinear patterns in the dataset probably do not have as strong an impact on tea yield as the predictors that are linearly related to yield.
On the other hand, the training of an ANN does not use the entire available database, because the test subset is not employed in the learning process. Generally, the performance of SARIMAX is poorer than that of ANN and VAR for most of the time series. The poor performance probably arises because the SARIMAX model assumes that time series are generated from linear processes, whereas yield time series are often nonlinear and seasonal (Granger & Teräsvirta, 1993). The SARIMAX model generally performs better when periodicity exists in the data, as the periodicity is taken into account by differencing the data (Mishra, Desai, & Singh, 2007); the SARIMAX models probably performed poorly because of a lack of seasonality in the data in these months. Irrespective of the model, performance was better for the southern locations (MNR, VLP and VPR) than for the northern locations (CNR, GUD, KOP and MEP), indicating that the relationships between climate variables and tea yield are stronger in the southern parts, probably because of a weaker relationship between the rainfall pattern in the northern locations and the other climate variables. In line with the modelling efficiency, the performance of each method in terms of producing small errors varied with the time series and location. For instance, the MAPE for monthly tea yield was smallest with the ANN approach, whereas that for the annual yield was smallest with the VAR. In general, the variability in monthly tea yield related to climate variability was highest in VLP (~92% of the yield), followed by CNR, KOP and MNR (~91%) across SITR. Approximately 77-80% of the seasonal tea yield variability was explained by climate. At most, 80-93% of the annual yield variability explained by climate variability translates into large fluctuations in tea production. For example, with an average of ~84.8% of the annual tea yield variability of 1.9 t ha−1 y−1 explained by climate variability, over 106.85 thousand ha this translates into an annual fluctuation of ~0.02 million tons of tea production across the study area. Tea cultivation in SI is not consistent with the climatology of all the regions. Climatological records in SI exhibit bimodal rainfall patterns associated with the Southwest (May-Aug) and Northeast (Sep-Dec) monsoons. Except for CNR, all the tea-growing regions receive ~66% of the annual rainfall during the Southwest monsoon and ~21% during the Northeast monsoon. Thus, the tea plant is not always able to capture the optimal window of growing conditions, as the duration and frequency of rainfall and the related cofactors, viz. relative humidity, rain days, cloud cover, sunshine hours and sunny days, differ spatially. The relation between rainfall and rain days is consistent with two possible patterns of impact on yield: a) an increase in rainfall with a higher number of rain days is positively related to crop yield in CNR, MEP and MNR, while it is negatively related in VLP and GUD; and b) a decrease in rainfall with a higher number of rain days is found to affect crop yield in KOP and VPR (Figure 7(e, f)). This result confirms the importance of water availability in rainfed tea cultivation, but also emphasises that the frequency of rainfall matters more than the total amount in most of the localities (Ujjal, Joshi, & Bauer, 2015). Moreover, the influence of RF variability on tea yield was observed to be strong only in VPR, and the RF-related sensitivity of all the other regions was found to be less than 4%.
This is because the tea growers of these regions are already adapted, or adapting, to climate change, which has also made them more adapted to rainfall variability. Nevertheless, tea planters are advised to develop efficient water-harvesting infrastructure in areas where an uneven distribution and low frequency of rainfall are more prevalent. Increasing rainfall with an increasing number of rain days was found to affect yield positively in CNR, MNR and VLP, which is attributed to the ready dissolution of soil nutrients facilitating their absorption by the tea plants. The negative impact of increasing rainfall and rain days in VLP arises because intense rainfall during the monsoon seasons leads to soil erosion, the washout of surface soil and the depletion of soil nutrients through runoff (Ruhul Amin, Zhang, & Yang, 2015). The lower sensitivity of the annual yields of KOP and MNR is primarily because part of the yield loss during the dry period is compensated by a substantially higher yield during the wet period of the year. Temperature variability was found to be the most important factor for tea yield in SI, because the high availability of RF moderates the influence of RF variability, soil moisture, soil temperature and the ETo rate. This result confirms that temperature is a more important contributor to the climate change impact than rainfall. In some regions, changing Tmax and Tmin were found to have a beneficial (harmful) effect on crop production by enhancing (decreasing) photosynthesis, the rate of shoot initiation in tea (Roberts, Summerfield, Ellis, Craufurd, & Wheeler, 1997; Squire, 1990) and shoot growth (Carr, 1972; Carr & Stephens, 1992; Watson, 1986), thereby increasing crop yield (Ruhul Amin, Zhang, & Yang, 2015). However, temperatures above or below the optimum differentially influence the metabolic processes of tea plants by affecting the stability of various proteins and membranes, the rate of photosynthesis and RuBisCO function, thereby inhibiting crop yield (Mathur & Jajoo, 2014). The inverse relationship with increasing temperature suggests that excess temperature would be detrimental to tea production, and that the long-term effects of climate change (in relation to temperature) are larger than the short-term effects. The higher sensitivity to RH830 in the MNR and MEP regions (Figure 7(c)) and to RH1430 in the CNR, GUD and VLP regions (Figure 7(d)) is attributed to the direct and indirect control of relative humidity over crop yield. An increase in relative humidity is not beneficial for higher yield, as it directly controls the plant-water relationship and the absorption of soil nutrients, and indirectly influences stomatal control, photosynthetic rates, leaf water potential and the occurrence of fungal diseases. The joint sensitivity of yield to S and Sday shows three possible patterns of impact: a) an increase in S with Sday is positively related to yield in CNR, KOP, MEP and VPR and negatively related in GUD; b) MNR is sensitive to a decrease in S with increasing Sday; and c) VLP is sensitive to increasing S with decreasing Sday (Figure 7(g, h)). Sunshine hours directly affect crop growth by influencing the onset and release of bud dormancy and shoot development (Matthews & Stephens, 1998b). Excessive sunshine affects crops negatively, much like excess temperature. Some tea-growing regions experience both cooler and warmer climates at a given altitude, which modifies the balance between shoot and root growth by influencing the physiology of shoot growth.
Cooler periods tend to result in banji formation (dormant shoots) due to the higher partitioning of carbohydrates to the roots, while in warmer periods carbohydrates are retranslocated to the developing shoots (Fordham, 1972; Rahman & Dutta, 1988; Squire, 1977). In addition to air temperature, soil temperature also influences the growth of the tea plant (Carr, 1972; Carr & Stephens, 1992), especially where yield is limited by the higher soil temperatures of the regions. Magambo and Othieno reported that high soil temperature during the daytime combined with low soil temperature during the night induced early flowering of tea and reduced vegetative growth. Some tea-growing regions, especially VLP, experience periods of high wind speed at certain times of the year, and crop production there is found to be highly sensitive to it. High wind speeds generally increase ETo from tea canopies considerably and thereby accelerate the development of soil moisture deficits; the resulting stomatal closure in dry periods may reduce photosynthesis. Banerjee reported that wind turbulence can reduce high temperatures, which would otherwise adversely affect photosynthesis. However, the direct effect of wind speed on the physiology of growth and productivity of tea is still unknown.

Limitations of the study
The study considered only climatic variables and did not measure the spatial heterogeneity of the climate-yield response attributable to soil and other socioeconomic attributes, which might have limited the performance of the models. In future, datasets that include input and output economics, soil characteristics and management variables should allow the models to explain a greater share of the yield variability. Finally, to improve the spatial match between crop and climate data, georeferenced data combined with sophisticated statistical models are needed.

Conclusion
The study describes the complex relationships between climate and yield and highlights the crucial roles of the key climatic factors that determine tea yield. Among the statistical models, the ANN and VAR models produced the best fits for monthly, seasonal and annual yields, which probably indicates that multivariate time series models are better suited to capturing the long-term and short-term variations in the time series. The developed statistical models can easily be integrated into a crop yield forecasting system under different climatic scenarios by using probabilistic weather forecasts of the identified meteorological variables. The study also demonstrated the impact of climate variability on tea yields at spatial and temporal scales. Therefore, the study suggests that policymakers develop region-specific adaptation strategies and effective management practices to mitigate the negative impact of climate change on crop yields.

Supplementary material
Supplemental data for this article can be accessed here.

Authors' contributions
EER carried out the research design, data collection, database construction, analyses and synthesis, and wrote this manuscript. KVR and RRK supervised the research work, reviewed the results and commented on the manuscript. All authors read and approved the final version of the manuscript.
CPI inflation falls to record levels, IIP at 3-month low: Will RBI cut rates? Growth in industrial production fell to a three-month low in May while consumer price index (CPI)-based inflation declined below a stipulated floor of 2 per cent in June, providing the Reserve Bank of India leeway to cut the policy interest rate in August. Pulled down by capital goods, consumer durables and manufacturing, and mining, the index of industrial production expanded 1.7 per cent in May, lower than the revised 2.8 per cent rate in April.
Decontamination of skin exposed to nanocarriers using an absorbent textile material and PEG-12 dimethicone The removal of noxious particulate contaminants such as pollutants derived from particle-to-gas conversions from exposed skin is essential to avoid the permeation of potentially harmful substances into deeper skin layers via the stratum corneum or the skin appendages and their dispersion throughout the circulatory system. This study is aimed at evaluating the efficacy of using the silicone glycol polymer PEG-12 dimethicone and an absorbent textile material to remove fluorescing hydroxyethyl starch nanocapsules implemented as model contaminants from exposed porcine ear skin. Using laser scanning microscopy, it could be shown that while the application and subsequent removal of the absorbent textile material alone did not result in sufficient decontamination, the combined application with PEG-12 dimethicone almost completely eliminated the nanocapsules from the surface of the skin. By acting as a wetting agent, PEG-12 dimethicone enabled the transfer of the nanocapsules into a liquid phase which was taken up by the absorbent textile material. Only traces of fluorescence remained detectable in several skin furrows and follicular orifices, suggesting that the repeated implementation of the procedure may be necessary to achieve total skin surface decontamination.
Generating FAIR research data in experimental tribology Solutions for the generation of FAIR (Findable, Accessible, Interoperable, and Reusable) data and metadata in experimental tribology are currently lacking. Nonetheless, FAIR data production is a promising path for implementing scalable data science techniques in tribology, which can lead to a deeper understanding of the phenomena that govern friction and wear. Missing community-wide data standards, and the reliance on custom workflows and equipment are some of the main challenges when it comes to adopting FAIR data practices. This paper, first, outlines a sample framework for scalable generation of FAIR data, and second, delivers a showcase FAIR data package for a pin-on-disk tribological experiment. The resulting curated data, consisting of 2,008 key-value pairs and 1,696 logical axioms, is the result of the close collaboration with developers of a virtual research environment, crowd-sourced controlled vocabulary, ontology building, and numerous seemingly small-scale digital tools. Thereby, this paper demonstrates a collection of scalable non-intrusive techniques that extend the life, reliability, and reusability of experimental tribological data beyond typical publication practices. Introduction Data are the fundamental asset which attaches value to any scientific investigation. It is not surprising that the expectations of high-quality data, which can travel seamlessly between research groups and infrastructures, are shaping the policies responsible for allocating public funds. This drive led to defining the guiding principles that qualify research data as findable, accessible, interoperable, and reusable (FAIR) 4. Observing these guidelines has since then prompted the creation of detailed metrics 5,6 that assess whether shared digital objects satisfy these standards and add value for the future users of published data 7,8. However, the benefits of making data FAIR reach beyond the ease of communication. Increasing data's trustworthiness 9 eases the process of transforming data into knowledge 10 and facilitates its potential utilization by autonomous computer algorithms from the field of machine learning (ML) 11, as shown conceptually in the visual abstract in Fig. 1. Generating FAIR research data in tribology is particularly challenging because of the exceptional interdisciplinarity of the field: many seemingly trivial tribological problems require a deep, but still holistic, understanding of processes and mechanisms that act between, at, and underneath contacting surfaces 12. A tribological response is often regarded as the response of the entire tribological system signifying the importance of all aspects of the actual tribological situation. This complicates the creation of discipline-specific data infrastructures and the standards for experimental procedure and result documentation are still missing 13. The lack of standards can be partially attributed to the characteristic that tribologists usually interpret research results through the prism of their own scientific backgrounds, which can span a wide variety of physical science and engineering fields 14,15. In tribology, the precise sequence of events, and seemingly insignificant external influences, can have a profound effect on the outcomes of any given experiment. Because of that, data provenance is paramount for the generation of knowledge. 
This extends what "FAIR data" means for tribological experiments: besides the data and metadata generated during the tribological experiment itself, tribologically-FAIR data requires a fully machine-actionable information set of all involved processes and equipment that preceded the tribological test. Fig. 1 Visual Abstract: Unlocking the potential for scalable data science techniques in tribology is only possible through the serial production of FAIR datasets. However, generating truly FAIR data cannot be an afterthought, but rather has to be an integral part and objective of every tribological experiment. The technological backbone of the digitalization of an experimental environment is the software infrastructure which acts as a meeting point between controlled vocabularies (organized in an ontology) and the experimental data. Electronic lab notebooks (ELNs) generally offer an environment where researchers can record their observations digitally. However, the choice of an ELN is far from straightforward when the adherence to the FAIR principles as an end-goal becomes a priority. The role of the ELN then is not only to be a digital replacement for handwritten notes, but also to provide an intuitive interface for guiding researchers to the minimum required metadata and ensure their recording with minimal human error (e.g., as predefined key-value pairs), relate records of data and associated metadata, assign unique identifiers to digital objects, and, provide researchers the option to publish their results in data repositories with minimum extra effort 5. The Karlsruhe Data Infrastructure for Materials Science (Kadi4Mat) 32, a virtual research environment which includes an ELN, shows a clear commitment to these principles by offering tribologists two major benefits: direct integration with custom tribometers (for at-source data collection) and export of data and metadata in both machine-and user-readable formats. To assess the feasibility of producing FAIR data via the integration of a controlled vocabulary, an ontology, and an ELN, this paper demonstrates the implementation of a tribological experiment while accounting for as many details as possible. The intricacies of producing such a dataset, at times seemingly administrative (e.g., specimen naming conventions) are equally as important as the global decisions (e.g., using an ontology), in order to provide a reusable pipeline. With this publication, we provide a possible blueprint for FAIR data publication in experimental tribology, and highlight some of the associated challenges and the potential solutions. If applied at large, they may accelerate the rate of innovation in the field and prevent unnecessary and wasteful repetition of experiments. A sister publication 33 offers the software developer's point of view and a detailed description of the programmatic backbone of this project. Results End-to-end framework for FAIR data production. Producing FAIR data and metadata is not a standalone add-on to the operations of an experimental lab, but rather an integrated collection of scientific, software, and administrative solutions (Fig. 2). Each of the elements in this framework is coordinated with the rest, with the aim of producing a FAIR Data Package 34. The many groups and sequential routes which Fig. 2 describes stand to show an example of how the different actors (bottom) make their contribution to digitalization (blue and green layer), in order to facilitate the workflow of lab scientists (in orange). 
The back-end collection of digital tools (in green) can only be effective at communicating with the user if it is provided with the correct knowledge representation (coming from the blue layer). From a managerial point of view, the ELN (Kadi4Mat 32 ) administers the storage of data, the users who interact with it, and the timestamps of its manipulation; in effect this charts a who-and-when map. At least equally important is the what-and-how content of the FAIR Data Package, which originates at the tribological experiment and is the main focus of this publication. To present this multifaceted project most effectively, this chapter first describes the FAIR Data Package, which has a clear target composition, and then the details of its building blocks. FAIR data package of a tribological experiment. The standard data structures based on formal notations are aimed at automated computer algorithms, while the visually appealing, human-friendly outputs aim to engage human perception and natural intelligence. The basic, fundamentally distinct information object in the FAIR Data Package is the Record (Fig. 3). Each Record is stored in the ELN and contains its own metadata (author, last revision/creation time, persistent ID, license, tags, corresponding ontology class, to name a few) and the details of the entity it represents. A Record can contain various externally generated data, such as tables, text, images, and videos, but also Links to other Records (e.g., a tribometer Record is related to a tribological experiment Record) or hierarchies of Records (e.g., a Record is part of a Collection which unites all participating entities in a project). When exported for sharing, a Record has two forms: a human-readable PDF and a key-value structured JSON file; if files were uploaded, such as raw measured or processed data, a zipped file archive is added. For the showcase experiment performed for this publication, all associated Records were grouped in a Kadi4Mat Collection and then uploaded to Zenodo 34, where they were automatically given a digital object identifier (DOI). A noteworthy feature of this export is the ability to anonymize the Records before exporting them, so that researchers' privacy is preserved (more information is listed in Table 1). A detailed video guide to the FAIR Data Package is available at https://youtu.be/xwCpRDnPFvs 35. The Zenodo repository 34 also includes two visual summaries which represent two distinct viewpoints: a time-based workflow (Fig. 4) and a links-based, ontology-derived graph (Fig. 5). When the workflow is considered, the tribological specimens (base and counter bodies) take on central importance as the carriers of information across the experiments. When the logical links are considered, on the other hand, the tribological experiment is the center of semantic connections. For this publication, the logical links visualization was generated automatically, while the workflow was composed manually, as its automatic counterpart is in beta testing. Lastly, the ontology which guides the contents and Links of all Records is referenced with the URL from which it can be downloaded, together with its relevant GitHub "commit hash". Using an ELN in tribology. Kadi4Mat and its ELN were selected because their main objective is the production of FAIR data from a diverse portfolio of data-producing sources.
With the virtual research environment, which automatically records and backs up all necessary data and metadata, tribologists can focus on the procedural details of experiments and pay more attention to previously overlooked characteristics of their workflow. At-source production of FAIR metadata. The success of deploying a new system for data and metadata collection within the established workflows of tribological laboratories is at best challenging. Therefore, it is paramount that a FAIR data framework induces minimum disruption to current research practices (front-end view in Fig. 2). At the same time, implementing a new lab-wide system is also an opportunity to raise the overall efficiency of the lab's operations. With this in mind, the following two example solutions were developed for two representative processes, as an attempt to bridge the hands-on experimental activities with Kadi4Mat. Most of our tribometers are currently "in-house" developments which test a narrow range of research questions. As such, these tribometers are usually controlled by LabVIEW, which conveniently offers access to all data and metadata while they are being collected. To package and upload this information to Kadi4Mat for one of these tribometers, a straightforward piece of code that establishes a connection with the server hosting the ELN was added at the end of the already existing LabVIEW code; the technical details of this procedure are outlined in the sister publication 33. Such machine-operated processes are in contrast to the "analog" processes in experimental tribology, such as specimen milling, polishing, cleaning, and storage. These processes do not have files as an output, and their details have hitherto only been recorded in paper lab notebooks, without any formalized vocabulary or system.

Table 1. Selected administrative conventions (Topic, Solution, Reasoning).
Topic: User Tokens. Solution: Uniquely assigned and randomly generated; these four-character tokens are kept in a registry that is administered internally for the lab. Reasoning: With this approach, the researchers' names cannot be uniquely associated with a specific time and place (lab), but individual-specific trends can still be traced.
Topic: Specimen Name. Solution: Freely chosen by the responsible researchers and kept in a registry, which does not allow repeats. Reasoning: In this way, individual researchers can decide what the most pertinent information to encode in the specimen's name is; as such, specimen names will follow different systems in order to best serve the primary user of the samples. This is, of course, only in addition to the unique persistent identifiers for each specimen within the ELN.
Topic: Record Type within ELN. Solution: lab equipment; industrial procedure; scientific procedure; data processing; experimental object. Reasoning: These types are for ease of navigation within the ELN; they are sourced from the respective superclass in the ontology for each record and displayed via SurfTheOWL.
Topic: Record Name within ELN. Solution: Class Friendly Name + Free Name of Choice + Optional Counter. Reasoning: The Class Friendly Name is listed in the relevant class in the ontology; the Free Name of Choice is only for the users' convenience, so it can be anything that does not result in repeated record titles; the Optional Counter starts with a number sign (#) followed by a 4-digit sequential number with leading zeros if there is more than one of the same process or object. For example: "Interfacial Medium Shell V 1404 #0001".
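To make the at-source upload idea concrete, a minimal Python sketch of pushing a key-value metadata record (e.g., for a specimen-cleaning step) to an ELN over HTTP is given below. The endpoint, token and field names are hypothetical placeholders and do not reproduce the actual Kadi4Mat API; the kadi-apy documentation should be consulted for the real interface.

```python
# Hedged sketch (hypothetical endpoint and credentials): uploading a key-value
# metadata record to an ELN via a generic REST call.
import requests

ELN_URL = "https://eln.example.org/api/records"   # placeholder URL, not Kadi4Mat's
TOKEN = "personal-access-token"                   # placeholder credential

record = {
    "title": "Specimen Cleaning #0001",
    "type": "industrial procedure",
    "extras": {                                   # predefined key-value pairs
        "solvent": "isopropanol",
        "ultrasonic_bath_minutes": 10,
        "operator_token": "A1B2",
    },
}

resp = requests.post(ELN_URL,
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json=record, timeout=30)
resp.raise_for_status()
print("record stored with id:", resp.json().get("id"))
```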
Thus, the showcase solution developed for collecting analog information for specimen cleaning consists of a guided user interface (GUI), which was programmed in LabVIEW and runs as a standalone executable on a tablet computer (Fig. 6). The GUI offers an intuitive way of ensuring that all requisite details are collected in a formalized manner, while in the back end it allows the assembled Record to be uploaded to Kadi4Mat. Critically, the existence of controlled vocabularies will enable the creation of more such GUIs with similar interfaces, which will in turn streamline the onboarding of new researchers into the lab. Removing the intermediary (i.e., the human operator) from the process of metadata collection, wherever possible, has the added benefit of ensuring that the recorded descriptions always comply with the community-agreed standards. In order to bridge the ontology of standards to Kadi4Mat, another module, called SurfTheOWL 36, was programmed, which assembles the tree of required metadata for each Record. SurfTheOWL's JSON output supplies a machine-operable template for this tree, which can be integrated with Kadi4Mat, while its counterpart, SurfTheOWL's web output, composes a human-readable equivalent, which can be used either for validating the ontology's structure or, in exceptional cases, as a backup method for the manual creation of ELN Records (also mentioned in Fig. 2). An ontology of FAIR tribological experiments (TriboDataFAIR Ontology). The main motivation behind building an ontology for the scope of this showcase experiment was threefold: first, to make the collected data interoperable; second, to provide a scalable environment for metadata manipulation and expansion; third, to support the construction of a knowledge graph based on the collected data 37. However, before an ontology could be composed, a controlled vocabulary database had to be amassed. As outlined in the methods section, the group of domain experts collaboratively built such a controlled vocabulary, which contained some basic semantics and ensured that all entities reflecting typical tribological processes and objects are described unambiguously. However, this MediaWiki-based database is nonideal when it comes to scalability and interoperability, two areas in which ontologies excel. The process of transforming the controlled vocabulary into an ontology is nontrivial, as it requires extensive linguistic curation: it is essential that the correct terms are used to achieve the best balance between generality and specificity. Furthermore, extensive domain knowledge was needed to build the class hierarchy in an extensible manner. Strategically, a representative showcase experiment was chosen before the ontology was initiated, which limited the scope of the needed terms and provided a clear envelope for the required level of detail, by asking the question: Can one redo the same experiment based exclusively on the information in the ontology? The general philosophy of the TriboDataFAIR Ontology 37 is that procedures utilize, alter, and/or generate objects. For example, a tactile surface profilometry procedure simultaneously characterizes, but also physically modifies, specimens. Such interactions between physical objects are subclasses of the object property "involves". For simplicity and ease of understanding, the ontology models roles as object properties, rather than separate classes, e.g., counter and base bodies are modeled as object properties.
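As a rough illustration of how such an ontology can be traversed programmatically to derive a metadata template, in the spirit of SurfTheOWL, a minimal owlready2 sketch is shown below; the ontology file location within the repository is an assumption and may differ from the actual published file name.

```python
# Hedged sketch: loading the TriboDataFAIR ontology with owlready2 and walking
# its classes and object properties (the IRI/file path is assumed, not verified).
from owlready2 import get_ontology

onto = get_ontology(
    "https://raw.githubusercontent.com/nick-garabedian/TriboDataFAIR-Ontology/"
    "main/TriboDataFAIR-Ontology.owl"             # assumed file location
).load()

# Classes and their subclasses can seed the key-value template of an ELN Record.
for cls in onto.classes():
    print(cls.name, [sub.name for sub in cls.subclasses()])

# Object properties such as "involves" relate procedures to the objects they use.
for prop in onto.object_properties():
    print(prop.name, prop.domain, prop.range)
```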
As a result, the ontology does not contain a class "Sample" (a case-specific role of an object), but rather "BlockSpecimen" (the object irrespective of its use). Figure 7 exemplifies how the ontology takes the semantic description of an event and in turn provides a template for its documentation in the ELN. Further, the version control of the ontology is ensured through the use of a GitHub repository (https://github.com/nick-garabedian/TriboDataFAIR-Ontology) and shared persistent identifiers for the classes that are used in the ELN: TriboDataFAIR Ontology with an acronym "TDO". The TriboDataFAIR Ontology, also listed on FAIRsharing.org (https://fairsharing.org/3597), can easily be expanded to include more complex description logic, but was decided to keep its structure as general as possible, as long as it satisfies its expected competency. The direct use case of the ontology, which also serves as its competency test, is the inclusion of the Kadi4MatRecord class; tracing the object properties and subclasses that originate at this class supplies the template (via SurfTheOWL 36 ) for the creation of metadata Records. The competency question thus becomes: What keys need to be provided to an ELN, so that after associating each of them with a value, the currently showcased tribological processes and events will be described FAIR'ly? Showcase pin-on-disk experiment. For this publication, a showcase pin-on-disk experiment was conducted, while recording all FAIR data and metadata details; hydrodynamic friction results are shown in Fig. 8. Using the infrastructure developed for this project, documenting thorough descriptions required significantly less time and effort than for other procedures in the lab; furthermore, sharing them publicly took only a few short steps, and facilitated their potential exchange and integration into larger future investigations. The scope and level of detail that descriptions, data, and metadata have to encompass in order to qualify as tribologically FAIR are ultimately up to the discretion of the domain experts composing the FAIR Data Package. Before community standards are firmly established, experienced tribologists have to, at the very least, select the features that make their experiments repeatable and reproducible; in our case three such features are local and global surface topography, magnetization, and hardness -each thoroughly described in their own ELN records. Fig. 4 Timeline of the processes and objects comprising the showcase FAIR tribological experiment. Note that the Experiment itself occupies only the space in the bottom right corner, and sits at the end of the workflow. However, the experimental workflow preceding the tribological test has to be considered as an essential part of the tribological experiment if FAIR data principles are to be satisfied. Selected visuals from the FAIR Data Package are also included. To distinguish the types of workflow elements they were colored and shaped as: procedures (dark blue rectangle), experiments (green rectangle), data processing (light blue hexagon), experimental objects (orange ellipse), qualitative results (teal pentagon), process detail (light turquoise pentagon), raw data (light turquoise rectangle), processed data (light turquoise cylinder). 
Discussion The FAIR Data Package provided together with this publication was assembled with the aim of observing the FAIR data metrics 5 as closely as possible; all descriptions and metadata related to all procedures and objects involved in implementing the showcase experiment are provided. However, there is still information that was impossible to retrieve, because external suppliers of materials and equipment often do not report details that at first might not seem important; yet, as a highly interdisciplinary field, tribology will necessitate these details in the future. The same applies to commercial research software, as already recognized by other groups 38. Existing datasets from prior publications can also be included in the scheme through text mining techniques based on the ontology, especially as it grows. New but incomplete datasets can also be accommodated by the framework, as ontologies offer multiple grades of generality. However, incomplete datasets have lower reasoning weight and, as such, it is ultimately up to the dataset producer to pick the number of details to include and thus determine the data's value. The information that is accessible to tribologists, on the other hand, is often not documented with the necessary depth, although it could be key for future investigations. Developing the appropriate digital infrastructure (e.g., through an ELN) is not a separate, standalone process or a fix-all solution. Rather, it must advance in parallel with the development of controlled vocabularies and ontologies and, most importantly, it has to be in close contact with practicing researchers who can field test them. The exchange of information between the various teams in the digitalization process is easiest to achieve through human-friendly outputs at every step of the way. Finally, finding a common unified standard that serves the needs of all experimental tribologists in the world seems utopian. What can be done, however, is to at least adopt a common framework for metadata creation, which guarantees the interoperability of individual developments, especially through the use of ontologies. Unfortunately, this will involve cross-disciplinary knowledge in the fields of tribology (with its subfields), ontology development, machine learning (for putting the ontology structure in the correct light), and computer science (for developing the data infrastructure); a list of types of expertise that are not readily available in tribology labs. Additionally, the specialized tools needed to accelerate labs' digital transformation are still under development 39. We hope that this publication and the showcased solutions will contribute some basic pieces to such a development, spark the dialogue for this process, and encourage more participation.
(Figure 5 caption fragment: Record types (as in Table 1) are colored as: data processing (light blue), experimental object (dark blue), industrial procedure (light green), lab equipment (dark green), scientific procedure (red).)
Fig. 6 A GUI for collecting predefined key-value descriptions of the event sequence comprising a typical specimen cleaning procedure, i.e. a digital interface for an inherently analog process. This digital event logging (also referenced in Fig. 2) can be included in the same information pipeline as computer-controlled processes.
Methods The sections in this chapter are presented in the chronological order of execution, in order to clarify the motivations behind the choice of the various tools utilized in this project. Conceptualization. The project began by collecting a set of easy-to-identify, well-constrained lab objects, procedures, and datasets. Arguably the most effective way of compiling such a list is through visuals; we used the open-source software Cytoscape 40, a platform originally built for visualizing complex networks and performing network analysis, e.g., on genome interactions based on big biological datasets. Although any other graph-building platform could have been used instead, Cytoscape features an intuitive user interface which let us create the mind-maps needed for the next steps in the project. Interestingly, even at this early stage, the tribological specimen emerged as the most important carrier of information in an experiment, as has also been identified by other groups 31. However, describing the sample itself hardly provides enough information to resolve why a particular tribological phenomenon occurs. In fact, it is the details of the processes and objects that support the preparation of a specimen that let us explore which factors are relevant for reproducing a particular set of tribological results, a well-known challenge as described in the introduction. Controlled vocabulary. After the standalone entities (objects, processes, and data) were identified in the initial Cytoscape chart, their semantics had to be drafted and agreed upon by the group of domain experts (the ten participants in this part of the project are identified under "Author contributions"). The platform for such an effort had to offer simultaneous editing by multiple users, hierarchical and non-hierarchical structuring of elements, version control, and an intuitive user interface. As a result, a local instance of the MediaWiki 41 software was installed at the institute and named TriboWiki; in this well-known environment it was relatively easy to manage the collaborative progress through the extensive use of subpages and links. An additional benefit of this solution was the availability of extension modules as well as an integrated native API (application programming interface), which enabled external archiving and manipulation of the contents, for example for automatic progress visualization in MATLAB. The end result of this stage of the project, after approximately 4 months and 21 group discussion meetings, was a set of entity descriptions that reflected the synchronized views of all researchers, together with a preliminary structure of their relations. Showcase experiment. The decision to implement a relatively standard experiment was vital for the success of this project because it prescribed a clear focus and made the project more tangible, while helping it stand out from other digitalization efforts. The showcase experiment had a lubricated pin-on-disk arrangement and ran at a 15 N normal load over a velocity range of 20 to 170 mm/s. The tests were performed on a CSEM tribometer at room temperature (20.7 °C), and a low-viscosity automotive oil (Shell V-Oil 1404) was applied. The fully detailed technical parameters can be found in the FAIR data package of this paper 34. While the experimental steps were performed, the contents of the internal TriboWiki were field tested for completeness and appropriateness.
When technological solutions were needed to fulfill the FAIR data guidelines and the objectives of this project, the experimental pipeline was paused until suitable solutions, like the ones described in the following paragraphs, were built.

Ontology development. An Ontology of FAIR Tribological Experiments (called the TriboDataFAIR Ontology) was developed both to provide a scalable medium for the showcase-experiment-relevant semantics in the TriboWiki and to make the collected descriptions and metadata interoperable. The software Protégé 42 was used for the development of the ontology, while SUMO 43 and EXPO 44 were used as foundational upper ontologies, and tribAIn 31 was used to a limited extent where possible. Assembling the contents of the TriboDataFAIR Ontology while conducting the showcase experiment had a two-fold effect: on the one hand, it uncovered gaps in the logical structure of the connections in the TriboWiki, while filtering out repetitive and ambiguous definitions; on the other hand, the execution of the experiments ensured the competency of the ontology in accurately representing the needed objects, processes, and data.

Electronic lab notebook. Kadi4Mat 32 was used to capture information at-source and to store all collected data and metadata according to the standards established up to this point in the project. The application and evolution of Kadi4Mat for the purpose of collecting FAIR tribological data are presented in a sister publication 33. LabVIEW was used to enable at-source generation of descriptions, data, and metadata for a showcase computer-controlled process (Tribological Experiment), coupled with an automatic upload to Kadi4Mat, and for the documentation of a sample analog procedure (Specimen Cleaning). A Python application using Django 45 and owlready2 46 was written to automatically pull all experiment-relevant information contained in the TriboDataFAIR Ontology and restructure it from a class hierarchy into an intuitive "description/metadata hierarchy" (Fig. 7). Importantly, this automated approach serves as a competency test for the ontology and verifies its consistency, because it provides a view of the contained entities organized differently from the inherent class structure. This view is easy for a human operator to verify before a process is conducted, and it in turn makes the collected metadata interoperable.

Data availability
The FAIR Data Package of tribological data can be found on Zenodo 34, where it is versioned in case of updates. Additionally, the TriboDataFAIR Ontology can be found at https://github.com/nick-garabedian/TriboDataFAIR-Ontology with its most up-to-date changes, while main updates are listed as versions on Zenodo 37.

Code availability
The virtual research environment Kadi4Mat, its documentation, and its source code can be found at https://kadi.iam-cms.kit.edu/. The SurfTheOWL application (both source code and standalone executable) for deriving key-value pairs from the TriboDataFAIR Ontology is available at https://github.com/nick-garabedian/SurfTheOWL with its most up-to-date changes, while main updates are listed as versions on Zenodo 36.
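The restructuring step performed by the Python application can be pictured with a much simpler sketch: load the ontology with owlready2 and flatten a chosen class and its direct subclasses into an empty key-value template that an operator can fill in. The file path, class name, and output format below are illustrative assumptions and do not reproduce the actual Django-based SurfTheOWL application.

from owlready2 import get_ontology

# Hypothetical local copy of the TriboDataFAIR Ontology.
onto = get_ontology("file://./TriboDataFAIROntology.owl").load()

def keyvalue_template(class_name):
    """Flatten one ontology class and its direct subclasses into an empty
    key-value template that a human operator can fill in before a process."""
    cls = onto.search_one(iri="*" + class_name)
    if cls is None:
        raise ValueError("class not found: " + class_name)
    template = {class_name: None}
    for sub in cls.subclasses():   # direct subclasses become keys to be filled
        template[sub.name] = None
    return template

# Example call for an assumed 'SpecimenCleaning' class.
print(keyvalue_template("SpecimenCleaning"))

Because the template is generated from the ontology itself, any record filled in this way stays consistent with the shared vocabulary and therefore remains interoperable.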
EMG Controlled Artificial Hand and Arm Design Today, many people have lost their hands or arms for various reasons. This situation negatively affects both their psychology and their daily lives. With developing technology, prosthetic hand and arm studies are being carried out to make life easier for disabled people and to reduce this negative impact. Thanks to the biopotentials present in the body, it is possible to read signals from the human body. In this context, hand and arm movements can be inferred from these biopotential signals and transferred to a prosthesis, enabling the user to perform the desired movement. Since the biopotential signals in the body have very low amplitude and frequency, the first goal is to obtain the EMG signal cleanly, without noise. In this study, the acquired analog signal was converted into digital information using software on a computer, so that each signal pattern was given a meaning. The resulting commands were then transferred to stepper motors with the help of an Arduino to drive the prosthesis.
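The processing chain described above (clean EMG acquisition, digital interpretation, motor command) can be sketched in a few lines of Python. The sampling rate, threshold value, serial port, and single-byte command protocol below are illustrative assumptions and are not taken from the study itself, which used its own software and an Arduino-side motor driver.

import numpy as np
import serial  # pyserial; assumed serial link to the Arduino driving the steppers

FS = 1000          # assumed sampling rate in Hz
THRESHOLD = 0.15   # assumed activation threshold on the normalized envelope

def emg_envelope(raw, window_ms=100):
    """Remove the DC offset, rectify the EMG signal, and smooth it with a
    moving average to obtain a normalized activation envelope."""
    rectified = np.abs(raw - np.mean(raw))
    window = np.ones(int(FS * window_ms / 1000))
    envelope = np.convolve(rectified, window / window.size, mode="same")
    return envelope / (np.max(envelope) + 1e-9)

def send_motor_command(envelope, port="/dev/ttyACM0"):
    """Send '1' (close hand) or '0' (open hand) to the Arduino over serial."""
    command = b"1" if envelope[-1] > THRESHOLD else b"0"
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write(command)

# Example with one second of synthetic data: activity only in the second half.
t = np.arange(FS) / FS
fake_emg = 0.3 * np.random.randn(FS) * (t > 0.5)
send_motor_command(emg_envelope(fake_emg))

In a real prosthesis the simple thresholding step would normally be replaced by a trained classifier that maps signal patterns to individual finger or wrist movements.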
""" The Sims 4 Community Library is licensed under the Creative Commons Attribution 4.0 International public license (CC BY 4.0). https://creativecommons.org/licenses/by/4.0/ https://creativecommons.org/licenses/by/4.0/legalcode Copyright (c) COLONOLNUTTY """ from typing import TypeVar, Any ServiceType = TypeVar('ServiceType', bound=object) class _Singleton(type): def __init__(cls, *args, **kwargs) -> None: super(_Singleton, cls).__init__(*args, **kwargs) cls.__instance = None def __call__(cls, *args, **kwargs) -> 'CommonService': if cls.__instance is None: cls.__instance = super(_Singleton, cls).__call__(*args, **kwargs) return cls.__instance class CommonService(metaclass=_Singleton): """An inheritable class that turns a class into a singleton, create an instance by invoking :func:`~get`. :Example usage: .. highlight:: python .. code-block:: python class ExampleService(CommonService): @property def first_value(self) -> str: return 'yes' # ExampleService.get() returns an instance of ExampleService. # Calling ExampleService.get() again, will return the same instance. ExampleService.get().first_value """ @classmethod def get(cls: Any, *_, **__) -> 'CommonService': """get() Retrieve an instance of the service :return: An instance of the service :rtype: The type of the inheriting class """ return cls(*_, **__)
In 50 fetuses and premature infants between the ages of 11 and 23 weeks, the entire nasal mucosa was dissected free and stained by the osmium, the PAS, or the PAS-alcian blue whole-mount method, after which the nasal mucous glands were studied with special reference to their density. At the front of the septum and in the lateral wall, the density increases evenly and reaches 28 glands per square mm by the 23rd week. In the middle and at the back of the septum, where the glands develop later, the density is somewhat lower by the 23rd week. Throughout the entire period, and also after the 23rd week, new glands continue to develop, a fact which is important for understanding the distribution of the glands, since the first glands to develop form a glandular layer deep in the lamina propria and the last glands to develop form a superficial layer.
PANAMA CITY — The Panama City Marine Institute is expanding its partnership with North Bay Haven from a half-day program to full days next year in its Maritime Program for students in ninth through 12th grade. The institute expects to add more than 100 charter school students to the PCMI campus and to offer such core classes as math, English, social studies, and the sciences, with an emphasis on natural and marine science. The program also will offer unique courses in aerospace technology, where students can train to become certified drone pilots, and Sea Cadets, a U.S. Navy-sponsored ROTC extracurricular through which students can earn military rank after completing certain portions, should they decide to join after high school. For students interested in the environmental side of the sciences, the institute has highly active partnerships with Gulf World Marine Institute, Bay Watch, and Florida Fish and Wildlife. In the past, Boyce said, the institute has had joint projects with every water-oriented environmental agency in Bay County and with the National Fish and Wildlife. Currently, the institute is awaiting legislative approval to fund a dolphin study project with Gulf World Marine Institute. The outdoor recreation program offers students training in CPR, lifeguarding, fishing, sailing, kayaking, canoeing, yolo-boarding, snorkeling and scuba. Leeanna Thompson, a North Bay Haven junior, had never considered going into the military until she began the program at PCMI, but now she is interested in starting her career in the Coast Guard. Thompson said she loves the program. Now in her second semester, she has been teaching new students to build her leadership experience. Thompson also is one of six students to earn their diving certification this year through the outdoor recreation program. Ultimately, students should enroll in the program if they love the water and want to graduate with a fat resumé, Boyce said.
def _CompareAuthoredToArchived(self, prim, attrName, archivePrim):
    """Check that the attribute authored on `prim` matches the baseline value
    stored on the corresponding prim in the archived layer."""
    attr = prim.GetAttribute(attrName)
    self.assertTrue(attr.IsDefined())
    newVal = attr.Get()

    archAttr = archivePrim.GetAttribute(attrName)
    self.assertTrue(archAttr.IsDefined())
    archVal = archAttr.Get()

    self.assertEqual(archVal, newVal,
                     "Baseline archived constant '%s' did not "
                     "compare equal to schema constant "
                     "(baseline: %s, schema: %s)." %
                     (attrName, repr(archVal), repr(newVal)))
It's official...Oprah is going to the Democratic Convention -- or at least she'll be in the same city -- and she will not be staying at a Holiday Inn. O is shelling out $50,000 to rent a house in a Denver 'burb, according to the Rocky Mountain News. That's even a lot for the Hamptons -- but Denver, Colo.? Her reps are still mum on whether she'll appear on the floor, but unless she's in a contest with McCain on racking up the most homes, it seems pretty clear.
package p2p;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import static java.util.concurrent.TimeUnit.*;

public class BeeperControl {
    private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);

    /** Prints "beep" every 10 seconds and cancels the task after one hour. */
    public void beepForAnHour() {
        final Runnable beeper = new Runnable() {
            public void run() {
                System.out.println("beep");
            }
        };
        // Start beeping after an initial 10-second delay, then every 10 seconds.
        final ScheduledFuture<?> beeperHandle =
            scheduler.scheduleAtFixedRate(beeper, 10, 10, SECONDS);
        // Cancel the periodic task after one hour.
        scheduler.schedule(new Runnable() {
            public void run() {
                beeperHandle.cancel(true);
            }
        }, 60 * 60, SECONDS);
    }
}
Administration officials said the United States did not seek an endorsement of military action from the Arab League. It sought condemnation of the use of chemical weapons and a clear assignment of responsibility for the attack to the Assad government, both of which the officials said they were satisfied they got. The Obama administration has declined to spell out the legal justification that the president would use in ordering a strike, beyond saying that the large-scale use of chemical weapons violates international norms. But officials said he could draw on a range of treaties and statutes, from the Geneva Conventions to the Chemical Weapons Convention. Mr. Obama, they said, could also cite the need to protect a vulnerable population, as his Democratic predecessor, Bill Clinton, did in ordering NATO’s 78-day air campaign on Kosovo in 1999. Or he could invoke the “responsibility to protect” principle, cited by some officials to justify the American-led bombing campaign in Libya. “There is no doubt here that chemical weapons were used on a massive scale on Aug. 21 outside of Damascus,” said the White House spokesman, Jay Carney. “There is also very little doubt, and should be no doubt for anyone who approaches this logically, that the Syrian regime is responsible for the use of chemical weapons on Aug. 21 outside of Damascus.” A number of nations in Europe and the Middle East, along with several humanitarian organizations, have joined the United States in that assessment. But with the specter of the faulty intelligence assessments before the Iraq war still hanging over American decision making, and with polls showing that only a small fraction of the American public supports military intervention in Syria, some officials in Washington said there needed to be some kind of public presentation making the case for war.
Synthesis, Structures, and Assembly of Geodesic Phenine Frameworks with Isoreticular Networks of Cyclo-para-phenylenes. A series of macrocycles were designed by rendering geodesic phenine frameworks in isoreticular networks of cyclo-para-phenylenes. Large, nanometer-sized molecules exceeding molecular weights of 2000 Da were synthesized by five-step transformations including macrocyclization of cyclo-meta-phenylene panels. The dependence of both the molecular structures and the fundamental properties on the panel numbers was delineated by a combination of spectroscopic and crystallographic analyses with the aid of theoretical calculations. Interestingly, the flexibility of the molecules via panel rotations depends on the hoop size, a behavior that had not been disclosed for the small isoreticular cyclo-para-phenylenes. One of the macrocycles served as a host for C70, and its association behavior and crystal structures were revealed.
The Price Premium for Organic Wines: Estimating a Hedonic Farm-Gate Price Equation* Abstract Organic wines are increasingly produced and appreciated. Because organic production is more costly, a crucial question is whether organic wines benefit from a price premium. We estimate hedonic price functions for Piedmont organic and conventional wines. We use data from the production side in addition to variables of interest to consumers. Our results show that, along with characteristics of interest to consumers, some farm and producer characteristics not directly relevant to consumers do significantly affect wine prices. We find that organic wine tends to obtain higher prices than conventional wine. The price premium is not simply an addition to the other price components; organic quality modifies the impact of the other variables on price. (JEL Classification: C21, D49, L11, Q12)
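A generic hedonic specification of the kind estimated in such studies can be written as follows; the functional form and the regressor set are illustrative and are not taken from the paper itself.

\ln P_i = \beta_0 + \delta\,\mathrm{Organic}_i + \sum_{k} \beta_k x_{ik} + \sum_{k} \gamma_k \left(\mathrm{Organic}_i \times x_{ik}\right) + \varepsilon_i

Here P_i is the farm-gate price of wine i, Organic_i is a dummy equal to one for organically produced wines, and the x_{ik} collect consumer-facing attributes together with farm and producer characteristics. The coefficient \delta captures a level shift in price, while the interaction terms \gamma_k allow the organic attribute to modify the impact of the other variables on price, which is the pattern described in the abstract.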
/**
 * Options for the DragFeature. Use this to set the listeners that will handle
 * the drag events.
 *
 * @author Rafael Ceravolo - LOGANN
 * @author Jon Britton - SpiffyMap Ltd
 */
public class DragFeatureOptions extends ControlOptions {

    /**
     * To restrict dragging to a limited set of geometry types, send a list of strings
     * corresponding to the geometry class names (from OL docs).
     * @param geometryTypes
     */
    public void setGeometryTypes(String[] geometryTypes) {
        JStringArray array = JStringArray.create(geometryTypes);
        getJSObject().setProperty("geometryTypes", array.getJSObject());
    }

    /**
     * If set to true, mouse dragging will continue even if the mouse cursor leaves the
     * map viewport (from OL docs).
     * @param documentDrag
     */
    public void setDocumentDrag(boolean documentDrag) {
        getJSObject().setProperty("documentDrag", documentDrag);
    }

    /**
     * Triggers when a feature has just started being dragged.
     * @param listener
     */
    public void onStart(DragFeatureListener listener) {
        createAndSetCallback(listener, "onStart");
    }

    /**
     * Continually triggered while a feature is being dragged.
     * @param listener
     */
    public void onDrag(DragFeatureListener listener) {
        createAndSetCallback(listener, "onDrag");
    }

    /**
     * Triggers when a feature has finished being dragged (the user releases the mouse).
     * @param listener
     */
    public void onComplete(DragFeatureListener listener) {
        createAndSetCallback(listener, "onComplete");
    }

    /** Creates a JS callback for an event type. */
    private void createAndSetCallback(DragFeatureListener listener, String name) {
        JSObject callback = DragFeatureImpl.createDragCallback(listener);
        getJSObject().setProperty(name, callback);
    }
}
An algorithm to enhance the quality of service in mobile ad hoc networks A mobile ad hoc network (MANET) is formed by a group of wireless mobile hosts or nodes without any fixed infrastructure. As there is no central control in a MANET, each mobile node itself acts as a router. A MANET may function in a stand-alone fashion or may be connected to the Internet. Undoubtedly, MANETs play a critical role in situations where a wired infrastructure is neither available nor easy to install. MANETs are found in applications such as short-term events, battlefield communications, and disaster relief activities. Due to node mobility and the scarcity of resources such as node energy and wireless link bandwidth, it is much harder to provide QoS guarantees in MANETs. Therefore, while designing such a network, we need to ensure a good Quality of Service (QoS). Many approaches have been proposed for MANET QoS, but very few have addressed it from the message transmission point of view. In this paper, we discuss the different factors that affect QoS as well as the challenges of QoS in MANETs. We then propose an algorithm to enhance QoS in mobile ad hoc networks depending on the type of application they are intended for.
import sys

from lib.training.training_base import read_config_from_file
from lib.training.importer import import_scheme

if __name__ == '__main__':
    # Load the experiment configuration from the path given as the first CLI argument.
    config = read_config_from_file(sys.argv[1])

    # Resolve the training scheme class named in the config and instantiate it.
    SCHEME = import_scheme(config['scheme'])
    training = SCHEME(config)

    # Run the configured evaluations.
    training.do_evaluations()
package org.opensha.sha.imr.param.OtherParams;

import org.opensha.commons.param.constraint.impl.StringConstraint;
import org.opensha.commons.param.impl.StringParameter;

/**
 * SigmaTruncTypeParam, a StringParameter that represents the type of
 * truncation to be applied to the probability distribution. The
 * constraint/options are hard-coded here because changes will require
 * changes in the probability calculations elsewhere in the code.
 * The parameter is left non-editable.
 */
public class SigmaTruncTypeParam extends StringParameter {

    public final static String NAME = "Gaussian Truncation";
    public final static String INFO = "Type of distribution truncation to apply when computing exceedance probabilities";

    // Options
    public final static String SIGMA_TRUNC_TYPE_NONE = "None";
    public final static String SIGMA_TRUNC_TYPE_1SIDED = "1 Sided";
    public final static String SIGMA_TRUNC_TYPE_2SIDED = "2 Sided";

    /**
     * This constructor invokes the standard options ("None", "1 Sided", or "2 Sided"),
     * and sets the default as "None". The parameter is left non-editable.
     */
    public SigmaTruncTypeParam() {
        this(SIGMA_TRUNC_TYPE_NONE);
    }

    /**
     * This constructor invokes the standard options ("None", "1 Sided", or "2 Sided"),
     * and uses the given default value. The parameter is left non-editable.
     */
    public SigmaTruncTypeParam(String defaultValue) {
        super(NAME);
        StringConstraint options = new StringConstraint();
        options.addString(SIGMA_TRUNC_TYPE_NONE);
        options.addString(SIGMA_TRUNC_TYPE_1SIDED);
        options.addString(SIGMA_TRUNC_TYPE_2SIDED);
        setConstraint(options);
        setInfo(INFO);
        setDefaultValue(defaultValue);
        setNonEditable();
        setValueAsDefault();
    }
}
Appoints former NSW Supreme Court judge. The Government has appointed a former NSW Supreme Court judge to be Australia's new overseer of national security and terror legislation. The Coalition has been under increasing pressure to permanently fill the role of Independent National Security Legislation Monitor, which has been sitting vacant since former monitor Bret Walker came to the end of his three-year tenure in April. The Government in March announced it would disband the position as part of its package of red tape repeal initiatives, but later reversed the decision after it introduced several significant new national security laws. Roger Gyles, a former NSW Supreme Court judge with over 30 years in the legal profession, will fill the role immediately and start his tenure by examining the federal government's new counter-terrorism legislation, Prime Minister Tony Abbott announced today. His position will be considered acting until the appointment is approved by the Governor-General. In the meantime, Gyles will be tasked with examining whether the Government's counter-terrorism laws impact journalists' ability to report on secret intelligence operations. "Gyles’ experience will equip him well for the important task of monitoring complex national security legislation," Abbott said in a statement. Greens MP Penny Wright last week introduced a bill aiming to shore up the security of the Independent National Security Legislation Monitor by legislating that the statutory position is never vacant and always funded. The bill also attempts to increase the monitor's powers of review: it states that the INSLM can review proposed or draft legislation, not just bills that have already passed into law.