Origin of Shia Islam
Starting point
Shiism began with a reference to the partisans of Ali, the first leader of the Ahl al-Bayt (the household of the prophet). In the early years of Islamic history there was no "orthodox" Sunni or "heretical" Shiite, but rather two points of view that drifted steadily apart, a divergence that became manifest as early as the death of Muhammad, the prophet of Islam.
On the death of Muhammad, in an assembly known as the Saqifah, a group of the Muhajirun pressed on the Ansar their wish that Abu Bakr be accepted as successor to the prophet, whose body was yet to be washed and buried. A distinguished absentee from this gathering was Ali, the cousin and son-in-law of the prophet. Some people, in view of statements made by Muhammad in his lifetime, believed that Ali should have taken the position, not only as temporal head (caliph) but also as spiritual head (Imam).
According to the Sunni sources, Ali "was a valued counselor of the caliphs who preceded him"; Umar is, therefore, reported by some of the important early Sunni authors as saying: "Had there not been Ali, 'Umar would have perished."
Jafri, on the other hand, quotes Veccia Vaglieri as saying: "Ali was included in the council of the caliphs, but although it is probable that he was asked for advice on legal matters in view of his excellent knowledge of the Quran and the Sunnah, it is extremely doubtful whether his advice was accepted by Umar, who had been a ruling power even during the caliphate of Abu Bakr." That, he argues, is why Ali's decisions rarely find a place in the later Sunni schools of law, while Umar's decisions find common currency among them.
According to some sources, the Shiites started as a political party and developed into a religious movement, influencing Sunnis and producing a number of important sects. Other scholars argue that the Western scholarship which views Shi'ism as a purely political movement is factually incorrect. According to Jafri, the origin of Shi'ism is not merely the result of political partisanship concerning the leadership of the Ummah.
In his book The Origins and Early Development of Shi'a Islam, he points out that those who emphasize the political nature of Shi'ism are "perhaps too eager to project the modern Western notion of the separation of church and state back into the seventh century", since such an approach "implies the spontaneous appearance of Shi'ism rather than its gradual emergence and development".
Jafri says Islam is basically religious because Muhammad was appointed and sent by God to deliver His message, and political because of the circumstances in which it arose and grew. In the same way Shi'ism, in its inherent nature, has always been both religious and political. On one occasion, for example, when the shura convened after Umar offered Ali the caliphate on condition that he act according to the Quran, the Sunnah of Muhammad, and the precedents established by the first two caliphs, he refused to accept the last condition.
On another occasion, when Ali's partisans asked him to play politics, reaffirming Muawiyah I as governor of Syria and sweet-talking him with promises until he could be toppled from his position, Ali retorted: "I have no doubt that what you advise is best for this life. But I will have nothing to do with such underhanded schemes, neither yours nor Muawiya's. I do not compromise my faith by cheating, nor do I give contemptible men any say in my command. I will never confirm Muawiya as governor of Syria, not even for two days." Ali accepted the political realities of his day; he believed, however, that he was better qualified for the caliphate, as is evident from his historic exposition known as the Sermon of the Roar of the Camel.
According to Shiite sources, Ali declined to make use of the military support offered to him by Abu Sufyan ibn Harb to fight Abu Bakr. At the same time, however, he did not recognize Abu Bakr and refused to pay him homage for six months.
Imamate, the Distinctive Institution of Shiite Islam
The distinctive institution of Shi’ism is the Imamate and the question of the Imamate is inseparable from that of walayat, or the esoteric function of interpreting the inner mysteries of the Quran and the Shari’ah.
Both Shiites and Sunnis agree on the two functions of prophethood: to reveal God's law to men, and to guide men toward God. However, while Sunnis believe that both came to an end with the death of Muhammad, Shiites believe that whereas legislation ended, the function of guiding and "explaining divine law continued through the line of Imams." In Shiite theology, then, God does not guide through authoritative texts (i.e. the Quran and the Hadith) alone, but also through specially equipped individuals known as Imams. This institution, Shiites say, is not limited to Islam: each great messenger of God had two covenants, one concerning the next prophet who would eventually come, and one regarding the immediate successor, the Imam. For example, Sam was an Imam for Noah, Ishmael for Abraham, Aaron or Joshua for Moses, Simon, John and all the disciples for Jesus, and Ali and his descendants for Muhammad. It is narrated from the sixth Imam, Ja'far al-Sadiq, that "were there to remain on the earth but two men, one of them would be the proof of God". The difference between the apostles (rasul), the prophets (nabi) and the Imams is thus described as follows: the rasul sees and hears the angel both awake and asleep; the nabi hears the angel and sees him while asleep, but while awake hears his speech without seeing him; the Imam (muhaddith) hears the angel while awake but sees him neither awake nor asleep. According to the fifth Imam, however, this kind of revelation is not the revelation of prophethood but rather like the inspiration (ilham) which came to Mary, the mother of Jesus, to the mother of Moses, and to the bee.
Hence the question was not only who the successor to Muhammad was, but also what the attributes of a true successor were.
Imamate vs Caliphate
The very life of Ali and his actions show that he accepted the previous caliphs as understood in the Sunni sense of Caliphate (the ruler and the administrator of the Sharia), but confined the function of Walayah, after the Prophet, to himself. That is why he is respected as the fourth caliph in the Sunni sense and as an Imam in the Shi’ite sense.
Sunnis, on the other hand, reject the Imamate on the basis of the Quran, which they read as saying that Muhammad, as the last of the prophets, was not to be succeeded by any of his family; that is why God let Muhammad's sons die in infancy, and why Muhammad did not nominate a successor, wanting the succession to be resolved "by the Muslim community on the basis of the Quranic principle of consultation (shura)."
The question Madelung poses here is why the family members of Muhammad should not inherit aspects of his character other than prophethood, such as rule (hukm), wisdom (hikmah), and the Imamate, since the Sunnite concept of the "true caliphate" itself defines it as a "succession of the Prophet in every respect except his prophethood". Madelung further asks: "If God really wanted to indicate that he should not be succeeded by any of his family, why did He not let his grandsons and other kin die like his sons?"
It is said that one day the Abbasid caliph Harun al-Rashid questioned the seventh Shiite Imam, Musa al-Kadhim, asking why he permitted people to call him "son of Allah's Apostle" when he and his forefathers were the children of Muhammad's daughter, and "the progeny belongs to the male (Ali) and not to the female (Fatimah)".
In response al-Kadhim recited the verses Quran 6:84 and 6:85 and then asked, "Who is Jesus's father, O Commander of the Faithful?" "Jesus had no father," said Harun. Al-Kadhim argued that God in these verses had ascribed Jesus to the descendants of the prophets through Mary; "similarly, we have been ascribed to the descendants of the Prophet through our mother Fatimah," he said. It is related that Harun asked Musa for more evidence and proof. Al-Kadhim then recited the verse of Mubahala, arguing: "None claims that the Prophet made anyone enter under the cloak, when he challenged the Christians to a contest of prayer to God (mubahala), except Ali, Fatimah, Hasan, and Husayn. So in the verse, 'our sons' refers to Hasan and Husayn."
In one of his long letters to Muawiya I, summoning him to pledge allegiance to him, Hasan ibn Ali made use of the argument of his father, Ali, which the latter had advanced against Abu Bakr after the death of Muhammad. Ali had said: "If Quraysh could claim the leadership over the Ansar on the grounds that the Prophet belonged to Quraysh, then the members of his family, who were the nearest to him in every respect, were better qualified for the leadership of the community."
Muawiya's response to this argument is also interesting, for Muawiyah, while recognizing the excellence of Muhammad's family, asserted that he would willingly follow Hasan's request were it not for his own superior experience in governing. He wrote back: "…You are asking me to settle the matter peacefully and surrender, but the situation concerning you and me today is like the one between you [your family] and Abu Bakr after the death of the Prophet… I have a longer period of reign [probably referring to his governorship], and am more experienced, better in policies, and older in age than you… If you enter into obedience to me now, you will accede to the caliphate after me."
In his book, The Origins and Early Development of Shi’a Islam, Jafri comes to the conclusion that the majority of the Muslims who became known as Sunnis afterwards "placed the religious leadership in the totality of the community (Ahl al-Sunnah wal Jamaah), represented by the Ulama, as the custodian of religion and the exponent of the Quran and the Sunnah of the Prophet, while accepting state authority as binding… A minority of the Muslims, on the other hand, could not find satisfaction for their religious aspirations except in the charismatic leadership from among the people of the house of the Prophet, the Ahl al-Bayt, as the sole exponents of the Quran and the Prophetic Sunnah, although this minority too had to accept the state's authority. This group was called the Shiite."
Husayn's uprising
To Sunnis, Husayn's decision to travel to Iraq was not mere political adventurism that went wrong; rather it was a decision to uphold the religion of Islam, to uphold the teachings of his grandfather, the Prophet Muhammad, and to stand against the wrong changes being incorporated into Islam by Yazid. (Haider)
According to Shiite historians, on the other hand, Husayn had "received plenty of warning of the collapse of the Shii revolt in Kufa as he approached Iraq." They record that on his journey, when Husayn received grim news from Kufa, he addressed his companions, telling them of the death and destruction that awaited them ahead. At this point, they argue, Husayn could have retired to Medina, or at least accepted the offer made to him of refuge in the mountain strongholds of the Tayy tribe. But he refused these, and even told his companions to leave him as he proceeded toward Kufa.
Jafri, the Shiite historian, writes: "Husayn did not try to organize or mobilize military support, which he easily could have done in the Hijaz, nor did he even try to exploit whatever physical strength was available to him…
Is it conceivable that anyone striving for power would ask his supporters to abandon him,… What then did Husayn have in mind? Why was he still heading for Kufa?...
According to Jafri, it is disappointing that historians have given attention only "to external aspects of the event of Karbala" and have "never tried to analyze the inner history and agonizing conflict in Husayn's mind". He points out that Husayn "was aware of the fact that a victory achieved through military strength and might is always temporary, because another stronger power can in course of time bring it down in ruins. But a victory achieved through suffering and sacrifice is everlasting and leaves permanent imprints on man's consciousness… The natural process of conflict and struggle between action and reaction was now at work. That is, Muhammad's progressive Islamic action had succeeded in suppressing Arab conservatism, embodied in heathen pre-Islamic practices and ways of thinking. But in less than thirty years' time this Arab conservatism revitalized itself as a forceful reaction to challenge Muhammad's action once again. The forces of this reaction had already been set in motion with the rise of Muawiya, but the succession of Yazid was a clear sign that the reactionary forces had mobilized themselves and had now re-emerged with full vigor. The strength of this reaction, embodied in Yazid's character, was powerful enough to suppress or at least deface Muhammad's action. Islam was now, in the thinking of Husayn, in dire need of a reactivation of Muhammad's action against the old Arabian reaction, and thus of a complete shake-up."
Jafri continues: "Husayn's acceptance of Yazid, with the latter's openly reactionary attitude against Islamic norms, would not have meant merely a political arrangement, as had been the case with Hasan and Muawiya, but an endorsement of Yazid's character and way of life as well." He then concludes that Husayn "realized that mere force of arms would not have saved Islamic action and consciousness. To him it needed a shaking and jolting of hearts and feelings. This, he decided, could only be achieved through sacrifice and sufferings." This, he writes, will be understood by "those who fully appreciate the heroic deeds and sacrifices of, for example, Socrates and Joan of Arc, both of whom embraced death for their ideals, and above all of the great sacrifice of Jesus Christ for the redemption of mankind. It is in this light that we should read Husayn's replies to those well-wishers who advised him not to go to Iraq. It also explains why Husayn took with him his women and children, though advised by Ibn Abbas that should he insist on his project, at least he should not take his family with him." "Aware of the extent of the brutal nature of the reactionary forces, Husayn knew that after killing him the Umayyads would make his women and children captives and take them all the way from Kufa to Damascus. This caravan of captives of Muhammad's immediate family would publicize Husayn's message and would force the Muslims' hearts to ponder on the tragedy. It would make the Muslims think of the whole affair and would awaken their consciousness." According to Jafri, that is exactly what happened. He continues: "Had Husayn not shaken and awakened Muslim consciousness by this method, who knows whether Yazid's way of life would have become standard behavior in the Muslim community, endorsed and accepted by the grandson of the Prophet."
He then arrives at the conclusion that "although after Yazid kingship did prevail in Islam, and though the character and behavior in the personal lives of these kings was not very different from that of Yazid, the change in thinking which prevailed after the sacrifice of Husayn always served as a line of distinction between Islamic norms and the personal character of the rulers." |
Over the last few months, Muslims have been advised to become part of the national mainstream that wants education and jobs. Abandon old fears, embrace the new order, they have been told.
A front-page story in the Indian Express on Wednesday exposed the fallacy underlying such appeals: that Muslims live in cloisters, study in madrassas, and are different from their "aspirational" Hindu counterparts. It is the story of Mohsin Sadiq Shaikh, a 24-year-old Muslim man from Solapur district. Shaikh was a member of the new economy that the Bharatiya Janata Party has sold to young Indians: he worked as an IT manager with a private firm in Pune. But even membership of this new economy could not save Mohsin's life. On Monday night, as he returned home after a day of rioting in the city, he was killed by a mob identified by the police as members of the Hindu Rashtra Sena, a radical Hindu outfit. His friend, who was with him that night, said he was targeted because "he was wearing a skull cap and had a beard."
A large section of the cosmopolitan urban elite has supported Narendra Modi in the belief that economic prosperity is a secular good that accrues to all and flattens social differences. Does economic growth reduce religious conflict? Anjali Thomas Bohlken and Ernest John Sergenti studied 15 states, including Gujarat, between 1982 and 1995 and found that the occurrence of Hindu-Muslim riots came down in years of higher growth.
However, they found "no support for the conventional wisdom that higher levels of socio-economic well-being – either in the form of higher GDP per capita or higher literacy rates – reduce the occurrence of violence".
While it is fairly plausible that economic distress puts an additional strain on the social fabric, it is simplistic to argue that the social fabric itself is a product of the economy. Society is constructed out of the daily encounters of people, as well as their sustained engagements with each other.
In its three decades of existence, the BJP has not distinguished itself as a party that constructs or fosters associations between communities. Instead, it appears to inhabit a parallel civic universe of its own, populated by Hindutva organisations like the Vishwa Hindu Parishad, the Bajrang Dal and the Durga Vahini. Its leaders are often seen sharing space with an assortment of fringe Hindu groups and activists – for instance, Pragya Thakur, the Sadhvi of Indore, accused in the Malegaon blast. Recently, the party nearly inducted Pramod Muthalik, the leader of Sri Ram Sene, infamous for its attack on young women in a pub in Mangalore. The chief of the Hindu Rashtra Sena in Pune, Dhananjay Desai, according to a report in the Indian Express, has connections with Muthalik and has been a vocal supporter of Pragya Thakur.
Regardless of whether the BJP's top leadership approves, its sweeping victory and ascendance to power in Delhi has filled its hardline supporters, both within and outside the party, not just with a sense of elation, but also with a sense of empowerment. In the Karnataka town of Bijapur, while taking out a victory procession, BJP workers reportedly "molested women belonging to a minority community and tried to forcibly smear gulaal on the faces of vegetable vendors from a particular community", the region's Inspector General of Police told the Hindustan Times.
A former BJP Union Minister has been arrested in the rioting case. In Assam, a BJP MP told the Deccan Chronicle that the party's youth wing would "launch a house-to-house campaign urging people not to engage the immigrants in any kind of work".
The trigger for the rioting in Pune might have come from morphed pictures of Shivaji and Bal Thackeray that circulated on social media. But the scale of the rioting – more than 200 buses were burnt – indicates that organised groups were involved. The Hindu Rashtra Sena leader has been arrested in relation to Shaikh's murder. In a primetime discussion on NDTV, BJP spokesperson Shaina NC sought to downplay the role of Hindutva fringe groups in Pune by deflecting blame onto the Congress-led Maharashtra government, which she claimed had failed to contain the unrest.
While party spokespersons cannot be expected to rise above the fray of partisan politics, surely more can be expected of India's new prime minister, who has won a decisive victory by selling an economic dream to all Indians. He must speak up before it begins to take the shape of a nightmare for some – which, in truth, would be a nightmare for all. |
// sources/Loader/src/Display/Font/Font.cpp
// (c) <NAME> e-mail : <EMAIL>
#include "defines.h"
#include "Display/Font/Font.h"
#include "font8display.inc"
#include "font5display.inc"
#include "fontUGOdisplay.inc"
#include "fontUGO2display.inc"
#include "font8.inc"
#include "font5.inc"
#include "fontUGO.inc"
#include "fontUGO2.inc"
const Font *Font::fonts[TypeFont::Count] = {&font5, &font8, &fontUGO, &fontUGO2};
const Font *Font::font = &font8;
int Font::GetSize()
{
    return font->height;
}

// Sums the widths of all symbols in a null-terminated string.
int Font::GetLengthText(pchar text)
{
    int retValue = 0;
    while (*text)
    {
        retValue += Font::GetLengthSymbol((uint8)(*text));
        text++;
    }
    return retValue;
}

int Font::GetHeightSymbol()
{
    return 9;
}

// Glyph width plus one pixel of inter-character spacing.
int Font::GetLengthSymbol(uchar symbol)
{
    return font->symbol[symbol].width + 1;
}

void Font::Set(TypeFont::E typeFont)
{
    font = fonts[typeFont];
}
|
import { Block } from "../Block/types";
export interface BlockChain {
blocks: Block[]
checkBlockChainValidation: () => boolean
searchInvalidBlock: () => number | undefined
addBlock: (newBlock: Block) => void
lastBlock: () => Block
} |
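The interface above only declares the contract. The following is a minimal sketch of what an implementation might look like; the `Block` shape here is an assumption (the real `../Block/types` module is not shown), and `addBlock` is adapted to take the raw payload and compute the hash itself, so the signatures differ slightly from the interface.

```typescript
import { createHash } from "crypto";

// Assumed Block shape -- the real ../Block/types module is not shown here.
interface Block {
  index: number;
  previousHash: string;
  data: string;
  hash: string;
}

// Hash over the block's contents; any change to data or links changes the hash.
const hashOf = (b: Omit<Block, "hash">): string =>
  createHash("sha256")
    .update(`${b.index}|${b.previousHash}|${b.data}`)
    .digest("hex");

class SimpleBlockChain {
  blocks: Block[] = [];

  constructor() {
    // A genesis block anchors the chain.
    const genesis = { index: 0, previousHash: "0", data: "genesis" };
    this.blocks.push({ ...genesis, hash: hashOf(genesis) });
  }

  lastBlock(): Block {
    return this.blocks[this.blocks.length - 1];
  }

  addBlock(data: string): void {
    const prev = this.lastBlock();
    const partial = { index: prev.index + 1, previousHash: prev.hash, data };
    this.blocks.push({ ...partial, hash: hashOf(partial) });
  }

  // A block is invalid if its stored hash or its back-link no longer matches.
  searchInvalidBlock(): number | undefined {
    for (let i = 1; i < this.blocks.length; i++) {
      const b = this.blocks[i];
      if (b.previousHash !== this.blocks[i - 1].hash || b.hash !== hashOf(b)) {
        return i;
      }
    }
    return undefined;
  }

  checkBlockChainValidation(): boolean {
    return this.searchInvalidBlock() === undefined;
  }
}
```

Tampering with any block's `data` breaks its own hash, so `searchInvalidBlock` reports the first corrupted index and `checkBlockChainValidation` returns false.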
Increased incidence of positive tests for estrogen binding in mammary carcinoma specimens transported in liquid nitrogen. Use of a liquid nitrogen container for convenient transport of frozen mammary carcinoma tissue for hormone-receptor assays is described. The container can be used in dry or wet mode and is appropriate for all types of transport, including air. Experience shows a significantly higher incidence of positive tests on specimens transported by this means than on specimens transported in dry ice. |
package org.rookit.api.dm.album.tracks;
import com.google.common.collect.Lists;
import org.rookit.api.dm.album.disc.Disc;
import org.rookit.api.dm.track.Track;
import org.rookit.convention.annotation.Property;
import org.rookit.convention.annotation.PropertyContainer;
import org.rookit.utils.optional.Optional;
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import java.util.stream.Stream;
@PropertyContainer
public interface AlbumTracks extends Iterable<Track> {
AlbumTrackSlotsAdapter asSlots();
Optional<Disc> disc(String discName);
boolean contains(String discName);
boolean contains(Track track);
/**
 * Returns a collection containing the discs on the album.
 *
 * @return a collection of the album's discs.
 */
@Property
Collection<Disc> discs();
default Collection<Track> asCollection() {
final List<Track> tracks = Lists.newArrayListWithCapacity(size());
for (final Disc disc : discs()) {
tracks.addAll(disc.asTrackCollection());
}
return Collections.unmodifiableCollection(tracks);
}
/**
 * Returns the number of tracks in the entire album, computed as the sum
 * of the number of tracks on each of the discs of this album.
 *
 * @return number of tracks in the entire album.
 */
int size();
Optional<Duration> duration();
Stream<Track> stream();
}
|
Transcriptional circuit dynamics in HSPCs. Currently, the number of UCB units in the global UCB bank inventories exceeds 1.5 million, but many of these units are too small to support hematopoiesis, particularly for adults. UCB expansion technologies as reported here and by others may provide a paradigm-changing opportunity to use smaller and better HLA-matched units for UCBT; they also set a benchmark for the expansion of other important UCB populations such as MSCs, NK cells, or T cells. Indeed, promising results with genetically engineered UCB NK cells targeting CD19 cancers, UCB-derived virus-specific T cells, and T-regulatory cells support the use of UCB units in the global inventory for the treatment of patients with cancer in the coming decades. |
package stringer
import (
"bytes"
"encoding/json"
"fmt"
"github.com/ericchiang/k8s"
)
// ToJSON encodes i to json and returns a string of the encoded JSON.
//
// If the encoding failed, it returns a string that indicates there was an error
func ToJSON(i k8s.Resource, name string) string {
b, err := json.Marshal(i)
if err != nil {
return fmt.Sprintf("error marshaling Deployment %s", name)
}
var buf bytes.Buffer
if err := json.Indent(&buf, b, "", " "); err != nil {
return fmt.Sprintf("error indenting JSON for job %s", name)
}
return string(buf.Bytes())
}
|
Differential gene regulation in DAPT-treated Hydra reveals candidate direct Notch signalling targets. In Hydra, Notch inhibition causes defects in head patterning and prevents differentiation of proliferating nematocyte progenitor cells into mature nematocytes. To understand the molecular mechanisms by which the Notch pathway regulates these processes, we performed RNA-seq and identified genes that are differentially regulated in response to 48h of treating the animals with the Notch inhibitor DAPT. To identify candidate direct regulators of Notch signalling, we profiled gene expression changes that occur during subsequent restoration of Notch activity and performed promoter analyses to identify RBPJ transcription factor-binding sites in the regulatory regions of Notch-responsive genes. Interrogating the available single-cell sequencing data set revealed the gene expression patterns of Notch-regulated Hydra genes. Through these analyses, a comprehensive picture of the molecular pathways regulated by Notch signalling in head patterning and in interstitial cell differentiation in Hydra emerged. As prime candidates for direct Notch target genes, in addition to Hydra (Hy)Hes, we suggest Sp5 and HyAlx. They rapidly recovered their expression levels after DAPT removal and possess Notch-responsive RBPJ transcription factor-binding sites in their regulatory regions. |
package com.chornyi.poc.database.domain;
import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import lombok.Data;
@Data
@Entity
@Table(name = "PRINTJOBCFG")
public class PrintJobCfg implements Serializable {
@Id
@Column(insertable = false, name = "PRINTJOBCFGID", nullable = false)
private String id;
@Column(name = "PROCESSID", insertable = false, nullable = false)
private String processId;
@Column(name = "NAME")
private String name;
@Column(name = "QUERYREF", nullable = false)
private String queryRef;
@Column(name = "EXTRAQUERYREF")
private String extraQueryRef;
@Column(name = "APPID", nullable = false)
private String appId;
@Column(name = "CSVENCODING")
private String csvEncoding;
@Column(name = "DESCRIPTION")
private String description;
@Column(name = "REPRINTAPPID")
private String reprintAppId;
} |
A mix of confusion and camaraderie was in the air at the Bega Showground on Monday morning after residents woke up to the reality of the fire continuing to burn through Tathra and Tarraganda overnight.
Piles of blankets, towels and clothes built up around the edges of the pavilion and makeshift sitting areas were established for people who evacuated their homes on Sunday afternoon. Volunteers took shifts making sandwiches for those forced to take shelter, representatives from the Red Cross accounted for individuals as they arrived and Local Land Services distributed pet food to animal companions. Outside, a sea of cars, caravans and tents surrounded the Bega Showground pavilion, with people eager to stay close by for the next announcement from the Rural Fire Service.
Trevor Banville from Twin Waters on the Sunshine Coast set up his caravan at the Bega Showground only an hour before it was declared an evacuation centre for the Tathra fire, watching hundreds of people flood into the showgrounds. On Monday morning he put his holiday on hold and picked up a pair of tongs to help Bega Lions president Peter Wiley on the free breakfast barbecue.
“I’ve been through tornadoes and floods up in Queensland, so I know what it’s like for people to lose their homes,” Mr Banville said. “You can’t not help, it’s the least I can do when I know these people have lost their property or can’t get back home.” Mr Banville, his wife and their two friends will continue to help volunteers at the Bega Showground for the next three days of their stay.
Back inside, Bega High School students Bella Kilpatrick, Year 9, Woti Fastigata, Year 8, and Grace Stanger, Year 11, were on hand in the kitchen. With schools across the region closed due to the fire, they said many of their friends and fellow students didn’t know what to do. “In some ways it would be better if we did go to school, that way we could see everyone and make sure they’re okay,” Bella said. “We’ve been talking to a few friends who have lost houses which is really sad, we can’t help there, but we can help here.”
Woolworths supermarkets supplied food and toiletries from their Bega, Moruya and Batemans Bay stores for those at the evacuation centre, ranging from infants to the elderly. The smoke from fires burning on the mountains to the east of the Bega Showground could be seen through the doors of the pavilion, creating a sense of unrest among the crowd inside.
COMFORT FOOD: Tourist Trevor Banville lends a hand to Peter Wiley after his camping spot at the Bega Showground was inundated with evacuees.
Bega High School students Bella Kilpatrick, Woti Fastigata and Grace Stanger helped at the Bega Showground evacuation centre while their school is closed due to Tathra fire.
|
Brain grey-matter volume alteration in adult patients with bipolar disorder under different conditions: a voxel-based meta-analysis Background The literature on grey-matter volume alterations in bipolar disorder is heterogeneous in its findings. Methods Using effect-size differential mapping, we conducted a meta-analysis of grey-matter volume alterations in patients with bipolar disorder compared with healthy controls. Results We analyzed data from 50 studies that included 1843 patients with bipolar disorder and 2289 controls. Findings revealed lower grey-matter volumes in the bilateral superior frontal gyri, left anterior cingulate cortex and right insula in patients with bipolar disorder and in patients with bipolar disorder type I. Patients with bipolar disorder in the euthymic and depressive phases had spatially distinct regions of altered grey-matter volume. Meta-regression revealed that the proportion of female patients with bipolar disorder or bipolar disorder type I was negatively correlated with regional grey-matter alteration in the right insula; the proportion of patients with bipolar disorder or bipolar disorder type I taking lithium was positively correlated with regional grey-matter alterations in the left anterior cingulate/paracingulate gyri; and the proportion of patients taking antipsychotic medications was negatively correlated with alterations in the anterior cingulate/paracingulate gyri. Limitations This study was cross-sectional; analysis techniques, patient characteristics and clinical variables in the included studies were heterogeneous. Conclusion Structural grey-matter abnormalities in patients with bipolar disorder and bipolar disorder type I were mainly in the prefrontal cortex and insula. Patients' mood state might affect grey-matter alterations. Abnormalities in regional grey-matter volume could be correlated with patients' specific demographic and clinical features. |
/*
* Copyright 2021 LSD Information Technology (Pty) Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package za.co.lsd.ahoy.server.helm.sealedsecrets;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;
import za.co.lsd.ahoy.server.AhoyServerProperties;
import za.co.lsd.ahoy.server.docker.DockerRegistry;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import static za.co.lsd.ahoy.server.util.ProcessUtil.*;
@Component
@Slf4j
public class DockerConfigSealedSecretProducer {
private final AhoyServerProperties serverProperties;
public DockerConfigSealedSecretProducer(AhoyServerProperties serverProperties) {
this.serverProperties = serverProperties;
}
public String produce(DockerRegistry dockerRegistry) throws IOException {
try {
log.info("Producing docker registry sealed secret for registry: {}", dockerRegistry);
List<Process> processes = ProcessBuilder.startPipeline(Arrays.asList(
new ProcessBuilder("kubectl", "create", "secret", "docker-registry",
"docker-registry",
"--docker-server=" + dockerRegistry.getServer(),
"--docker-username=" + dockerRegistry.getUsername(),
"--docker-password=" + dockerRegistry.getPassword(),
"--dry-run", "-o", "json"),
new ProcessBuilder("kubeseal", "-o", "json", "--scope", "cluster-wide",
"--controller-name=" + serverProperties.getSealedSecrets().getControllerName(),
"--controller-namespace=" + serverProperties.getSealedSecrets().getControllerNamespace())
));
Process sealedSecretProcess = processes.get(processes.size() - 1);
if (sealedSecretProcess.waitFor() == 0) {
log.info("Successfully produced docker registry sealed secret");
String sealedSecret = outputFrom(sealedSecretProcess);
return extractDockerConfigJson(sealedSecret);
} else {
String error = errorFrom(sealedSecretProcess);
throw new IOException("Failed to produce docker config sealed secret: " + error);
}
} catch (InterruptedException e) {
throw new RuntimeException("Failed to produce docker config sealed secret", e);
}
}
private static String extractDockerConfigJson(String sealedSecret) throws IOException {
ObjectMapper mapper = new ObjectMapper();
JsonNode node = mapper.readTree(sealedSecret);
JsonNode dockerConfig = node.at("/spec/encryptedData/.dockerconfigjson");
return dockerConfig.asText();
}
}
/*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; under version 2
* of the License (non-upgradable).
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright (c) 2019 (original work) MedCenter24.com;
*/
import { Component, ElementRef, EventEmitter, Input, Output, ViewChild } from '@angular/core';
import { TranslateService } from '@ngx-translate/core';
import { LoggerComponent } from '../../../core/logger/LoggerComponent';
import { GlobalState } from '../../../../global.state';
import { LoadableComponent } from '../../../core/components/componentLoader';
import { FormService } from '../../form.service';
import { UiToastService } from '../../../ui/toast/ui.toast.service';
@Component({
selector: 'nga-form-viewer',
template: `
<div *ngIf="formId && formableId">
<span
class="fa fa-file-pdf-o mr-2"
(click)="downloadPdf()"
title="{{ 'Save as PDF' | translate }}"></span>
<span
class="fa fa-print mr-2"
(click)="print()"
title="{{ 'Print' | translate }}"
></span>
<span
class="fa fa-window-maximize"
(click)="preview()"
title="{{ 'Preview' | translate }}"
></span>
</div>
<span class="text-muted" *ngIf="!formId" translate>Form not assigned</span>
<iframe id="printf" name="printf" style="display: none;"></iframe>
<p-dialog [(visible)]="formPreviewerVisible"
header="{{ 'Form Preview' | translate }}"
[style]="{width: '800px'}"
[contentStyle]="{'max-height':'90vh'}"
[modal]="true"
[blockScroll]="true"
[closeOnEscape]="true"
[dismissableMask]="true"
[closable]="true"
appendTo="body">
<div class="preview-content" #previewContainer></div>
</p-dialog>
`,
styleUrls: ['./form.viewer.scss'],
})
export class FormViewerComponent extends LoadableComponent {
protected componentName: string = 'FormViewerComponent';
/**
* Identifier of the form to show
*/
@Input() formId: number;
/**
* Identifier of the source of the data for this form
*/
@Input() formableId: number;
@Output() init: any;
@Output() loaded: any;
/**
* When true, the corresponding output event is emitted instead of
* running the action directly, passing control to the parent component
*/
@Input() emitInsteadOfAction: boolean = false;
@Output() onPreview: EventEmitter<any> = new EventEmitter();
@Output() onPrint: EventEmitter<any> = new EventEmitter();
@Output() onPdf: EventEmitter<any> = new EventEmitter();
@ViewChild('previewContainer')
previewContainer: ElementRef;
formPreviewerVisible: boolean = false;
constructor(
protected _logger: LoggerComponent,
protected _state: GlobalState,
protected translateService: TranslateService,
protected formService: FormService,
private uiToastService: UiToastService,
) {
super();
}
private valid(): Promise<any> {
return new Promise<any>((resolve, reject) => {
if (!this.formId || !this.formableId) {
this.translateService.get('`form` and/or `data source` has not been provided').subscribe(res => {
this.uiToastService.errorMessage(res);
});
reject();
} else {
resolve(true);
}
});
}
downloadPdf(forceRun: boolean = false): void {
if (this.emitInsteadOfAction && !forceRun) {
this.onPdf.emit();
} else {
this.valid()
.then(() => this.formService.downloadPdf(this.formId, this.formableId));
}
}
print(forceRun: boolean = false): void {
if (this.emitInsteadOfAction && !forceRun) {
this.onPrint.emit();
} else {
this.valid()
.then( () => {
const postfix = 'Print';
this.startLoader( postfix );
this.formService.getReportHtml( this.formId, this.formableId )
.subscribe({
next: html => {
this.stopLoader( postfix );
const newWin = window.frames[ 'printf' ];
newWin.document.write( `<body onload="window.print()">${html}</body>` );
newWin.document.close();
},
error: () => this.stopLoader( postfix ),
});
} );
}
}
preview(forceRun: boolean = false): void {
if (this.emitInsteadOfAction && !forceRun) {
this.onPreview.emit();
} else {
this.valid()
.then( () => {
this.formService.getReportHtml( this.formId, this.formableId )
.subscribe( ( html: string ) => {
this.formPreviewerVisible = true;
this.previewContainer.nativeElement.innerHTML = html;
});
} );
}
}
}
package fr.insee.lunatic.main;
import fr.insee.lunatic.utils.Modele;
import fr.insee.lunatic.utils.SchemaValidator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.File;
public class DummyTestSchemaValidatorH {
private static final Logger logger = LoggerFactory.getLogger(DummyTestSchemaValidatorH.class);
public static void main(String[] args) {
String basePath = "src/test/resources/dummy";
File in = new File(String.format("%s/form.xml", basePath));
try {
SchemaValidator schemaValidator = new SchemaValidator(Modele.HIERARCHICAL);
logger.info("Valid: {}", schemaValidator.validateFile(in));
} catch (Exception e) {
logger.error("Schema validation failed", e);
}
}
}
3 Weight-Loss Myths You Have to Unlearn If You Want to Actually Lose Weight
When clients come to me to lose weight, they often arrive mentally and physically defeated. “I’ve tried everything,” they tell me. “The weight doesn’t stay off.”
But as I’ve discovered, the real reason it’s so hard for us to maintain weight loss is a faulty belief system. We repeat certain myths that perpetuate yo-yo dieting.
Here are three falsehoods surrounding weight loss and how to shift your mindset to defeat them. Shed these beliefs, and you may finally shed the extra weight once and for all.
Myth #1: “You need to count calories to lose weight”
Commercial weight-loss programs are based on a calories-in, calories-out model, which is an oversimplification of how weight loss works. A recent study from Johns Hopkins University researchers shows how ineffective this equation is.
Many programs don’t work, and the ones that perform the best—Weight Watchers or Jenny Craig—only helped participants lose 3-5% more weight than the control group.
A better equation is to balance your physiology. Weight loss is often a side effect of eating the foods that help clear up nagging symptoms like fatigue and cravings (and sometimes they can also help alleviate more serious ones like acid reflux and pre-menstrual symptoms, too).
The first step to balancing your body is to focus on nutrient-rich foods. When you do, your hunger stays under control and you feel satiated, because your body is getting the nutrients it needs.
Myth #2: Losing weight is all about how much you eat and work out
It’s easy to think this, but other factors—like stress, lack of sleep and exposure to environmental toxins—can actually change how your body processes calories.
Just consider a study published in the journal Obesity Research and Clinical Practice: it found that people today are 10% heavier than people who ate and exercised the same amounts in the 1980s—so it's clearly not as simple as burning more calories than you take in.
A better question than “Am I at a caloric deficit?” is “What can I add into my life to balance my hormones and myself?” Balanced hormones help your body maximize weight-loss efforts. Getting more sleep, drinking more filtered water and prioritizing deep self-care, like learning to be comfortable saying no, could actually do more to help you lose weight.
Myth #3: A Paleo diet will work for you because it worked for your friend or a celebrity
While many realize that different foods work for different bodies, people often assume this difference means following a Paleo or vegan eating plan. However, a study performed by Israeli researchers found that the same foods elevated different people’s blood-sugar levels—which control appetite, cravings and hormones—at different rates. In other words, “good” and “bad” foods are unique to the individual.
Learning how your blood sugar works is one of the most powerful tools you have for learning what your body responds best to. A quick and easy way to figure out your baseline blood-sugar response is an experiment I give to all my clients: Eat a vegetarian, Mediterranean and Paleo lunch on three different days. Each day, observe how you feel for the rest of the afternoon. How you eat at one meal sets up your blood sugar for the next three to five hours. If you’re still hungry, crashing or needing a snack, you can rule out that type of diet.
If you feel full, satiated and focused, you know that way of eating is a good starting point for you. There are nuances within these three diet “camps,” of course, but this test will provide you with a good starting point.
Ali Shapiro, M.S., is a health coach.
Hostelworld expects modest earnings growth next year as it invests in new initiatives identified as part of a strategic review of the business by its recently appointed chief executive.
In a trading update on Thursday the online booking company said the review showed that it was “ideally positioned” to provide solutions to the “unique needs of the hostelling industry”.
Investment in core customer acquisition and platform enhancements could deliver a return to growth, the company said.
While Hostelworld intends to improve its prospects, like-for-like bookings this year are likely to be flat given “expected declines in our supporting brands”.
“We are operating in an attractive and growing market, with a strong and trusted brand, providing relevant and valuable customers to the hostel sector,” said chief executive Gary Morrison, who was appointed in June 2018.
The strategy review found that Hostelworld’s core platform lacked investment, and as a result it will be the focus of a longer term investment strategy.
Next year the business plans to improve the booking experience for users, and provide unique hostel content. Growth and investment in the company will be self-funding from existing resources.
Headquartered in Dublin, Hostelworld will host a capital markets day for analysts this Friday.
def _run(self) -> Tuple[List[EventOddsRatio], bool]:
event_contingency_tables, success_total, failure_total = self.get_partial_event_contingency_tables()
if not success_total or not failure_total:
return [], True
skewed_totals = False
if success_total / failure_total > 10 or failure_total / success_total > 10:
skewed_totals = True
odds_ratios = [
get_entity_odds_ratio(event_stats, FunnelCorrelation.PRIOR_COUNT)
for event_stats in event_contingency_tables
if not FunnelCorrelation.are_results_insignificant(event_stats)
]
positively_correlated_events = sorted(
[odds_ratio for odds_ratio in odds_ratios if odds_ratio["correlation_type"] == "success"],
key=lambda x: x["odds_ratio"],
reverse=True,
)
negatively_correlated_events = sorted(
[odds_ratio for odds_ratio in odds_ratios if odds_ratio["correlation_type"] == "failure"],
key=lambda x: x["odds_ratio"],
reverse=False,
)
events = positively_correlated_events[:10] + negatively_correlated_events[:10]
return events, skewed_totals
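The `get_entity_odds_ratio` helper is not shown here. Conceptually, an odds ratio smoothed with a prior count (which is presumably what `FunnelCorrelation.PRIOR_COUNT` supplies, to avoid division by zero on rare events) can be sketched as follows; this is a hypothetical standalone version, not the actual implementation:

```python
def smoothed_odds_ratio(success_count: int, failure_count: int,
                        success_total: int, failure_total: int,
                        prior_count: float = 1.0) -> float:
    """Odds of having done the event among converting users, divided by
    the same odds among dropping users, with a prior count added to
    every cell of the contingency table as smoothing."""
    success_odds = (success_count + prior_count) / (success_total - success_count + prior_count)
    failure_odds = (failure_count + prior_count) / (failure_total - failure_count + prior_count)
    return success_odds / failure_odds

# An event seen by 9 of 10 converting users but only 1 of 10 dropping
# users is strongly positively correlated with success:
print(smoothed_odds_ratio(9, 1, 10, 10))  # ≈ 25
```

A ratio above 1 would feed the "success" bucket sorted descending, below 1 the "failure" bucket, mirroring the sorting in `_run` above.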
Finding massive planets is nothing new these days. But finding them orbiting each other instead of orbiting a star is unprecedented. An object initially thought to be a single brown dwarf is actually a pair of giant worlds. It’s not yet clear how this binary system formed, but the discovery may help redefine the line between planets and brown dwarfs – failed stars with tens of times the mass of Jupiter.
This pair of planets is made up of two balls of gas the size of Jupiter but almost four times more massive, separated by some 600 million kilometres and slowly circling each other once per century or so. The young couple emits light only at infrared wavelengths, glowing with residual heat from their formation just 10 million years ago.
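As a back-of-the-envelope check (my own rough figures, not quoted from the article): Kepler's third law, with a total mass of about eight Jupiter masses and a 600-million-kilometre separation, does give an orbital period of roughly a century:

```python
import math

# Rough figures assumed from the article: two ~4 Jupiter-mass bodies
# separated by ~600 million km.
M_JUP_IN_SUN = 9.546e-4   # Jupiter's mass in solar masses
AU_IN_KM = 1.496e8        # one astronomical unit in kilometres

total_mass = 2 * 4 * M_JUP_IN_SUN   # ~8 Jupiter masses, in solar masses
a_au = 600e6 / AU_IN_KM             # separation in AU (~4 AU)

# Kepler's third law (T in years, a in AU, M in solar masses): T^2 = a^3 / M
period_years = math.sqrt(a_au**3 / total_mass)
print(round(period_years))  # ≈ 92 years, i.e. "once per century or so"
```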
Observations with the 10-metre Keck II telescope, by a team led by William Best of the University of Hawaii, uncovered the binary system, with the help of adaptive optics that correct for the blurring effects of Earth’s atmosphere.
“This is a careful piece of work and a very nice discovery,” says David Latham of the Harvard-Smithsonian Center for Astrophysics.
No one really understands the formation of rogue worlds that don’t orbit a star. So a binary system is even harder to understand, according to Gibor Basri of the University of California at Berkeley.
Gravitational interactions may slingshot single planets out of their solar systems, but the newly found pair of planets most likely formed from the fragmentation of a condensing protostar.
According to Alex de Koter of the University of Amsterdam, the discovery shows that various scenarios to produce free-floating planetary-mass objects are at work in the universe. Because they’re small and faint, they can only be discovered in our cosmic neighbourhood. This new find – called 2MASS J1119−1137 – is only 85 light years away, and the team thinks there may be many more similar planetary-mass binaries out there.
But are they really planets? Maybe not. In the past, the dividing line between planets and brown dwarfs was generally placed at 14 Jupiter masses, when nuclear fusion of deuterium in the object’s core sets in.
But Latham argues that the best way to distinguish between the two is not by their mass but by how they form: brown dwarfs result from collapsing clouds of gas and dust, while planets form out of a stellar disk.
And if other brown dwarfs are similar – that is, if they’re not brown dwarfs at all but sneaky double bodies – we may have underestimated how many free-floating planets there are in our universe.
package com.senzing.g2.engine.plugin;
import java.util.List;
/**
* Standardizes a feature
*/
public interface G2StandardizationPlugin extends G2PluginInterface
{
/**
* Runs the feature standardization process
*
* @param context The {@link ProcessingContext} for performing the operation.
* @return A non-negative number on success and a negative number on failure.
*/
int process(ProcessingContext context);
/**
* Context for processing.
*/
class ProcessingContext
{
private FeatureInfo input = null;
private FeatureInfo result = null;
private String errorMessage = null;
/**
* Constructs an instance based on an input feature.
*
* @param input The input feature.
*/
public ProcessingContext(FeatureInfo input) {
this.input = input;
this.result = null;
}
/**
* Gets the {@link FeatureInfo} describing the input feature.
* @return The input feature.
*/
public FeatureInfo getInput() { return input; }
/**
* Gets the {@link FeatureInfo} describing the result feature.
* @return The result feature.
*/
public FeatureInfo getResult() { return result; }
/**
* Sets the result feature
* @param result The result feature
*/
public void setResult(FeatureInfo result) { this.result = result; }
/**
* Get the error message (if any).
* @return The error message that was set or <code>null</code> if no error.
*/
public String getErrorMessage() { return errorMessage; }
/**
* Sets the error message (if any).
* @param message The error message to set if an error occurs, or
* <code>null</code> to clear an error.
*/
public void setErrorMessage(String message) { errorMessage = message; }
}
}
#include <string>
#include "color.h"
Color Other(Color color) {
if (color == kWhite) {
return kBlack;
}
return kWhite;
}
std::string ColorToString(Color color) {
return color == kWhite ? "white" : "black";
}
Did she just commit a federal crime?
With a tip of the War Room Kevlar helmet to Raw Story, here's the ever-charming Ann Coulter, speaking Thursday night about her hopes that George W. Bush will get to nominate a replacement for Associate Justice John Paul Stevens. "We need somebody to put rat poisoning in Justice Stevens' creme brulee," Coulter said.
Coulter insisted it was "just a joke, for you in the media."
Pursuant to 18 U.S.C. Section 115, anyone who "threatens to assault, kidnap, or murder . . . a United States judge . . . with intent to impede, intimidate, or interfere with" that judge's duties is guilty of a felony.
That's just a joke for you, Ann. Sort of.
/******************************************************************************
* irq.c
*
* Interrupt distribution and delivery logic.
*
* Copyright (c) 2006, <NAME>, XenSource Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; If not, see <http://www.gnu.org/licenses/>.
*/
#include <xen/config.h>
#include <xen/types.h>
#include <xen/event.h>
#include <xen/sched.h>
#include <xen/irq.h>
#include <xen/keyhandler.h>
#include <asm/hvm/domain.h>
#include <asm/hvm/support.h>
#include <asm/msi.h>
/* Must be called with hvm_domain->irq_lock hold */
static void assert_gsi(struct domain *d, unsigned ioapic_gsi)
{
struct pirq *pirq =
pirq_info(d, domain_emuirq_to_pirq(d, ioapic_gsi));
if ( hvm_domain_use_pirq(d, pirq) )
{
send_guest_pirq(d, pirq);
return;
}
vioapic_irq_positive_edge(d, ioapic_gsi);
}
static void assert_irq(struct domain *d, unsigned ioapic_gsi, unsigned pic_irq)
{
assert_gsi(d, ioapic_gsi);
vpic_irq_positive_edge(d, pic_irq);
}
/* Must be called with hvm_domain->irq_lock hold */
static void deassert_irq(struct domain *d, unsigned isa_irq)
{
struct pirq *pirq =
pirq_info(d, domain_emuirq_to_pirq(d, isa_irq));
if ( !hvm_domain_use_pirq(d, pirq) )
vpic_irq_negative_edge(d, isa_irq);
}
static void __hvm_pci_intx_assert(
struct domain *d, unsigned int device, unsigned int intx)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int gsi, link, isa_irq;
ASSERT((device <= 31) && (intx <= 3));
if ( __test_and_set_bit(device*4 + intx, &hvm_irq->pci_intx.i) )
return;
gsi = hvm_pci_intx_gsi(device, intx);
if ( hvm_irq->gsi_assert_count[gsi]++ == 0 )
assert_gsi(d, gsi);
link = hvm_pci_intx_link(device, intx);
isa_irq = hvm_irq->pci_link.route[link];
if ( (hvm_irq->pci_link_assert_count[link]++ == 0) && isa_irq &&
(hvm_irq->gsi_assert_count[isa_irq]++ == 0) )
assert_irq(d, isa_irq, isa_irq);
}
void hvm_pci_intx_assert(
struct domain *d, unsigned int device, unsigned int intx)
{
spin_lock(&d->arch.hvm_domain.irq_lock);
__hvm_pci_intx_assert(d, device, intx);
spin_unlock(&d->arch.hvm_domain.irq_lock);
}
static void __hvm_pci_intx_deassert(
struct domain *d, unsigned int device, unsigned int intx)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int gsi, link, isa_irq;
ASSERT((device <= 31) && (intx <= 3));
if ( !__test_and_clear_bit(device*4 + intx, &hvm_irq->pci_intx.i) )
return;
gsi = hvm_pci_intx_gsi(device, intx);
--hvm_irq->gsi_assert_count[gsi];
link = hvm_pci_intx_link(device, intx);
isa_irq = hvm_irq->pci_link.route[link];
if ( (--hvm_irq->pci_link_assert_count[link] == 0) && isa_irq &&
(--hvm_irq->gsi_assert_count[isa_irq] == 0) )
deassert_irq(d, isa_irq);
}
void hvm_pci_intx_deassert(
struct domain *d, unsigned int device, unsigned int intx)
{
spin_lock(&d->arch.hvm_domain.irq_lock);
__hvm_pci_intx_deassert(d, device, intx);
spin_unlock(&d->arch.hvm_domain.irq_lock);
}
void hvm_isa_irq_assert(
struct domain *d, unsigned int isa_irq)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int gsi = hvm_isa_irq_to_gsi(isa_irq);
ASSERT(isa_irq <= 15);
spin_lock(&d->arch.hvm_domain.irq_lock);
if ( !__test_and_set_bit(isa_irq, &hvm_irq->isa_irq.i) &&
(hvm_irq->gsi_assert_count[gsi]++ == 0) )
assert_irq(d, gsi, isa_irq);
spin_unlock(&d->arch.hvm_domain.irq_lock);
}
void hvm_isa_irq_deassert(
struct domain *d, unsigned int isa_irq)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int gsi = hvm_isa_irq_to_gsi(isa_irq);
ASSERT(isa_irq <= 15);
spin_lock(&d->arch.hvm_domain.irq_lock);
if ( __test_and_clear_bit(isa_irq, &hvm_irq->isa_irq.i) &&
(--hvm_irq->gsi_assert_count[gsi] == 0) )
deassert_irq(d, isa_irq);
spin_unlock(&d->arch.hvm_domain.irq_lock);
}
static void hvm_set_callback_irq_level(struct vcpu *v)
{
struct domain *d = v->domain;
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int gsi, pdev, pintx, asserted;
ASSERT(v->vcpu_id == 0);
spin_lock(&d->arch.hvm_domain.irq_lock);
/* NB. Do not check the evtchn_upcall_mask. It is not used in HVM mode. */
asserted = !!vcpu_info(v, evtchn_upcall_pending);
if ( hvm_irq->callback_via_asserted == asserted )
goto out;
hvm_irq->callback_via_asserted = asserted;
/* Callback status has changed. Update the callback via. */
switch ( hvm_irq->callback_via_type )
{
case HVMIRQ_callback_gsi:
gsi = hvm_irq->callback_via.gsi;
if ( asserted && (hvm_irq->gsi_assert_count[gsi]++ == 0) )
{
vioapic_irq_positive_edge(d, gsi);
if ( gsi <= 15 )
vpic_irq_positive_edge(d, gsi);
}
else if ( !asserted && (--hvm_irq->gsi_assert_count[gsi] == 0) )
{
if ( gsi <= 15 )
vpic_irq_negative_edge(d, gsi);
}
break;
case HVMIRQ_callback_pci_intx:
pdev = hvm_irq->callback_via.pci.dev;
pintx = hvm_irq->callback_via.pci.intx;
if ( asserted )
__hvm_pci_intx_assert(d, pdev, pintx);
else
__hvm_pci_intx_deassert(d, pdev, pintx);
break;
default:
break;
}
out:
spin_unlock(&d->arch.hvm_domain.irq_lock);
}
void hvm_maybe_deassert_evtchn_irq(void)
{
struct domain *d = current->domain;
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
if ( hvm_irq->callback_via_asserted &&
!vcpu_info(d->vcpu[0], evtchn_upcall_pending) )
hvm_set_callback_irq_level(d->vcpu[0]);
}
void hvm_assert_evtchn_irq(struct vcpu *v)
{
if ( unlikely(in_irq() || !local_irq_is_enabled()) )
{
tasklet_schedule(&v->arch.hvm_vcpu.assert_evtchn_irq_tasklet);
return;
}
if ( v->arch.hvm_vcpu.evtchn_upcall_vector != 0 )
{
uint8_t vector = v->arch.hvm_vcpu.evtchn_upcall_vector;
vlapic_set_irq(vcpu_vlapic(v), vector, 0);
}
else if ( is_hvm_pv_evtchn_vcpu(v) )
vcpu_kick(v);
else if ( v->vcpu_id == 0 )
hvm_set_callback_irq_level(v);
}
void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
u8 old_isa_irq;
int i;
ASSERT((link <= 3) && (isa_irq <= 15));
spin_lock(&d->arch.hvm_domain.irq_lock);
old_isa_irq = hvm_irq->pci_link.route[link];
if ( old_isa_irq == isa_irq )
goto out;
hvm_irq->pci_link.route[link] = isa_irq;
/* PCI pass-through fixup. */
if ( hvm_irq->dpci )
{
if ( old_isa_irq )
clear_bit(old_isa_irq, &hvm_irq->dpci->isairq_map);
for ( i = 0; i < NR_LINK; i++ )
if ( hvm_irq->dpci->link_cnt[i] && hvm_irq->pci_link.route[i] )
set_bit(hvm_irq->pci_link.route[i],
&hvm_irq->dpci->isairq_map);
}
if ( hvm_irq->pci_link_assert_count[link] == 0 )
goto out;
if ( old_isa_irq && (--hvm_irq->gsi_assert_count[old_isa_irq] == 0) )
vpic_irq_negative_edge(d, old_isa_irq);
if ( isa_irq && (hvm_irq->gsi_assert_count[isa_irq]++ == 0) )
{
vioapic_irq_positive_edge(d, isa_irq);
vpic_irq_positive_edge(d, isa_irq);
}
out:
spin_unlock(&d->arch.hvm_domain.irq_lock);
dprintk(XENLOG_G_INFO, "Dom%u PCI link %u changed %u -> %u\n",
d->domain_id, link, old_isa_irq, isa_irq);
}
int hvm_inject_msi(struct domain *d, uint64_t addr, uint32_t data)
{
uint32_t tmp = (uint32_t) addr;
uint8_t dest = (tmp & MSI_ADDR_DEST_ID_MASK) >> MSI_ADDR_DEST_ID_SHIFT;
uint8_t dest_mode = !!(tmp & MSI_ADDR_DESTMODE_MASK);
uint8_t delivery_mode = (data & MSI_DATA_DELIVERY_MODE_MASK)
>> MSI_DATA_DELIVERY_MODE_SHIFT;
uint8_t trig_mode = (data & MSI_DATA_TRIGGER_MASK)
>> MSI_DATA_TRIGGER_SHIFT;
uint8_t vector = data & MSI_DATA_VECTOR_MASK;
if ( !vector )
{
int pirq = ((addr >> 32) & 0xffffff00) | dest;
if ( pirq > 0 )
{
struct pirq *info = pirq_info(d, pirq);
/* if it is the first time, allocate the pirq */
if ( !info || info->arch.hvm.emuirq == IRQ_UNBOUND )
{
int rc;
spin_lock(&d->event_lock);
rc = map_domain_emuirq_pirq(d, pirq, IRQ_MSI_EMU);
spin_unlock(&d->event_lock);
if ( rc )
return rc;
info = pirq_info(d, pirq);
if ( !info )
return -EBUSY;
}
else if ( info->arch.hvm.emuirq != IRQ_MSI_EMU )
return -EINVAL;
send_guest_pirq(d, info);
return 0;
}
return -ERANGE;
}
return vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
}
void hvm_set_callback_via(struct domain *d, uint64_t via)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int gsi=0, pdev=0, pintx=0;
uint8_t via_type;
via_type = (uint8_t)(via >> 56) + 1;
if ( ((via_type == HVMIRQ_callback_gsi) && (via == 0)) ||
(via_type > HVMIRQ_callback_vector) )
via_type = HVMIRQ_callback_none;
spin_lock(&d->arch.hvm_domain.irq_lock);
/* Tear down old callback via. */
if ( hvm_irq->callback_via_asserted )
{
switch ( hvm_irq->callback_via_type )
{
case HVMIRQ_callback_gsi:
gsi = hvm_irq->callback_via.gsi;
if ( (--hvm_irq->gsi_assert_count[gsi] == 0) && (gsi <= 15) )
vpic_irq_negative_edge(d, gsi);
break;
case HVMIRQ_callback_pci_intx:
pdev = hvm_irq->callback_via.pci.dev;
pintx = hvm_irq->callback_via.pci.intx;
__hvm_pci_intx_deassert(d, pdev, pintx);
break;
default:
break;
}
}
/* Set up new callback via. */
switch ( hvm_irq->callback_via_type = via_type )
{
case HVMIRQ_callback_gsi:
gsi = hvm_irq->callback_via.gsi = (uint8_t)via;
if ( (gsi == 0) || (gsi >= ARRAY_SIZE(hvm_irq->gsi_assert_count)) )
hvm_irq->callback_via_type = HVMIRQ_callback_none;
else if ( hvm_irq->callback_via_asserted &&
(hvm_irq->gsi_assert_count[gsi]++ == 0) )
{
vioapic_irq_positive_edge(d, gsi);
if ( gsi <= 15 )
vpic_irq_positive_edge(d, gsi);
}
break;
case HVMIRQ_callback_pci_intx:
pdev = hvm_irq->callback_via.pci.dev = (uint8_t)(via >> 11) & 31;
pintx = hvm_irq->callback_via.pci.intx = (uint8_t)via & 3;
if ( hvm_irq->callback_via_asserted )
__hvm_pci_intx_assert(d, pdev, pintx);
break;
case HVMIRQ_callback_vector:
hvm_irq->callback_via.vector = (uint8_t)via;
break;
default:
break;
}
spin_unlock(&d->arch.hvm_domain.irq_lock);
dprintk(XENLOG_G_INFO, "Dom%u callback via changed to ", d->domain_id);
switch ( via_type )
{
case HVMIRQ_callback_gsi:
printk("GSI %u\n", gsi);
break;
case HVMIRQ_callback_pci_intx:
printk("PCI INTx Dev 0x%02x Int%c\n", pdev, 'A' + pintx);
break;
case HVMIRQ_callback_vector:
printk("Direct Vector 0x%02x\n", (uint8_t)via);
break;
default:
printk("None\n");
break;
}
}
struct hvm_intack hvm_vcpu_has_pending_irq(struct vcpu *v)
{
struct hvm_domain *plat = &v->domain->arch.hvm_domain;
int vector;
if ( unlikely(v->nmi_pending) )
return hvm_intack_nmi;
if ( unlikely(v->mce_pending) )
return hvm_intack_mce;
if ( (plat->irq.callback_via_type == HVMIRQ_callback_vector)
&& vcpu_info(v, evtchn_upcall_pending) )
return hvm_intack_vector(plat->irq.callback_via.vector);
if ( is_pvh_vcpu(v) )
return hvm_intack_none;
if ( vlapic_accept_pic_intr(v) && plat->vpic[0].int_output )
return hvm_intack_pic(0);
vector = vlapic_has_pending_irq(v);
if ( vector != -1 )
return hvm_intack_lapic(vector);
return hvm_intack_none;
}
struct hvm_intack hvm_vcpu_ack_pending_irq(
struct vcpu *v, struct hvm_intack intack)
{
int vector;
switch ( intack.source )
{
case hvm_intsrc_nmi:
if ( !test_and_clear_bool(v->nmi_pending) )
intack = hvm_intack_none;
break;
case hvm_intsrc_mce:
if ( !test_and_clear_bool(v->mce_pending) )
intack = hvm_intack_none;
break;
case hvm_intsrc_pic:
if ( (vector = vpic_ack_pending_irq(v)) == -1 )
intack = hvm_intack_none;
else
intack.vector = (uint8_t)vector;
break;
case hvm_intsrc_lapic:
if ( !vlapic_ack_pending_irq(v, intack.vector, 0) )
intack = hvm_intack_none;
break;
case hvm_intsrc_vector:
break;
default:
intack = hvm_intack_none;
break;
}
return intack;
}
int hvm_local_events_need_delivery(struct vcpu *v)
{
struct hvm_intack intack = hvm_vcpu_has_pending_irq(v);
if ( likely(intack.source == hvm_intsrc_none) )
return 0;
return !hvm_interrupt_blocked(v, intack);
}
void arch_evtchn_inject(struct vcpu *v)
{
if ( has_hvm_container_vcpu(v) )
hvm_assert_evtchn_irq(v);
}
static void irq_dump(struct domain *d)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
int i;
printk("Domain %d:\n", d->domain_id);
printk("PCI 0x%16.16"PRIx64"%16.16"PRIx64
" ISA 0x%8.8"PRIx32" ROUTE %u %u %u %u\n",
hvm_irq->pci_intx.pad[0], hvm_irq->pci_intx.pad[1],
(uint32_t) hvm_irq->isa_irq.pad[0],
hvm_irq->pci_link.route[0], hvm_irq->pci_link.route[1],
hvm_irq->pci_link.route[2], hvm_irq->pci_link.route[3]);
for ( i = 0 ; i < VIOAPIC_NUM_PINS; i += 8 )
printk("GSI [%x - %x] %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8
" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8"\n",
i, i+7,
hvm_irq->gsi_assert_count[i+0],
hvm_irq->gsi_assert_count[i+1],
hvm_irq->gsi_assert_count[i+2],
hvm_irq->gsi_assert_count[i+3],
hvm_irq->gsi_assert_count[i+4],
hvm_irq->gsi_assert_count[i+5],
hvm_irq->gsi_assert_count[i+6],
hvm_irq->gsi_assert_count[i+7]);
printk("Link %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8"\n",
hvm_irq->pci_link_assert_count[0],
hvm_irq->pci_link_assert_count[1],
hvm_irq->pci_link_assert_count[2],
hvm_irq->pci_link_assert_count[3]);
printk("Callback via %i:%#"PRIx32",%s asserted\n",
hvm_irq->callback_via_type, hvm_irq->callback_via.gsi,
hvm_irq->callback_via_asserted ? "" : " not");
}
static void dump_irq_info(unsigned char key)
{
struct domain *d;
printk("'%c' pressed -> dumping HVM irq info\n", key);
rcu_read_lock(&domlist_read_lock);
for_each_domain ( d )
if ( is_hvm_domain(d) )
irq_dump(d);
rcu_read_unlock(&domlist_read_lock);
}
static struct keyhandler dump_irq_info_keyhandler = {
.diagnostic = 1,
.u.fn = dump_irq_info,
.desc = "dump HVM irq info"
};
static int __init dump_irq_info_key_init(void)
{
register_keyhandler('I', &dump_irq_info_keyhandler);
return 0;
}
__initcall(dump_irq_info_key_init);
static int irq_save_pci(struct domain *d, hvm_domain_context_t *h)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
unsigned int asserted, pdev, pintx;
int rc;
spin_lock(&d->arch.hvm_domain.irq_lock);
pdev = hvm_irq->callback_via.pci.dev;
pintx = hvm_irq->callback_via.pci.intx;
asserted = (hvm_irq->callback_via_asserted &&
(hvm_irq->callback_via_type == HVMIRQ_callback_pci_intx));
/*
* Deassert virtual interrupt via PCI INTx line. The virtual interrupt
* status is not save/restored, so the INTx line must be deasserted in
* the restore context.
*/
if ( asserted )
__hvm_pci_intx_deassert(d, pdev, pintx);
/* Save PCI IRQ lines */
rc = hvm_save_entry(PCI_IRQ, 0, h, &hvm_irq->pci_intx);
if ( asserted )
__hvm_pci_intx_assert(d, pdev, pintx);
spin_unlock(&d->arch.hvm_domain.irq_lock);
return rc;
}
static int irq_save_isa(struct domain *d, hvm_domain_context_t *h)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
/* Save ISA IRQ lines */
return ( hvm_save_entry(ISA_IRQ, 0, h, &hvm_irq->isa_irq) );
}
static int irq_save_link(struct domain *d, hvm_domain_context_t *h)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
/* Save PCI-ISA link state */
return ( hvm_save_entry(PCI_LINK, 0, h, &hvm_irq->pci_link) );
}
static int irq_load_pci(struct domain *d, hvm_domain_context_t *h)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
int link, dev, intx, gsi;
/* Load the PCI IRQ lines */
if ( hvm_load_entry(PCI_IRQ, h, &hvm_irq->pci_intx) != 0 )
return -EINVAL;
/* Clear the PCI link assert counts */
for ( link = 0; link < 4; link++ )
hvm_irq->pci_link_assert_count[link] = 0;
/* Clear the GSI link assert counts */
for ( gsi = 0; gsi < VIOAPIC_NUM_PINS; gsi++ )
hvm_irq->gsi_assert_count[gsi] = 0;
/* Recalculate the counts from the IRQ line state */
for ( dev = 0; dev < 32; dev++ )
for ( intx = 0; intx < 4; intx++ )
if ( test_bit(dev*4 + intx, &hvm_irq->pci_intx.i) )
{
/* Direct GSI assert */
gsi = hvm_pci_intx_gsi(dev, intx);
hvm_irq->gsi_assert_count[gsi]++;
/* PCI-ISA bridge assert */
link = hvm_pci_intx_link(dev, intx);
hvm_irq->pci_link_assert_count[link]++;
}
return 0;
}
static int irq_load_isa(struct domain *d, hvm_domain_context_t *h)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
int irq;
/* Load the ISA IRQ lines */
if ( hvm_load_entry(ISA_IRQ, h, &hvm_irq->isa_irq) != 0 )
return -EINVAL;
/* Adjust the GSI assert counts for the ISA IRQ line state.
* This relies on the PCI IRQ state being loaded first. */
for ( irq = 0; platform_legacy_irq(irq); irq++ )
if ( test_bit(irq, &hvm_irq->isa_irq.i) )
hvm_irq->gsi_assert_count[hvm_isa_irq_to_gsi(irq)]++;
return 0;
}
static int irq_load_link(struct domain *d, hvm_domain_context_t *h)
{
struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
int link, gsi;
/* Load the PCI-ISA IRQ link routing table */
if ( hvm_load_entry(PCI_LINK, h, &hvm_irq->pci_link) != 0 )
return -EINVAL;
/* Sanity check */
for ( link = 0; link < 4; link++ )
if ( hvm_irq->pci_link.route[link] > 15 )
{
gdprintk(XENLOG_ERR,
"HVM restore: PCI-ISA link %u out of range (%u)\n",
link, hvm_irq->pci_link.route[link]);
return -EINVAL;
}
/* Adjust the GSI assert counts for the link outputs.
* This relies on the PCI and ISA IRQ state being loaded first */
for ( link = 0; link < 4; link++ )
{
if ( hvm_irq->pci_link_assert_count[link] != 0 )
{
gsi = hvm_irq->pci_link.route[link];
if ( gsi != 0 )
hvm_irq->gsi_assert_count[gsi]++;
}
}
return 0;
}
HVM_REGISTER_SAVE_RESTORE(PCI_IRQ, irq_save_pci, irq_load_pci,
1, HVMSR_PER_DOM);
HVM_REGISTER_SAVE_RESTORE(ISA_IRQ, irq_save_isa, irq_load_isa,
1, HVMSR_PER_DOM);
HVM_REGISTER_SAVE_RESTORE(PCI_LINK, irq_save_link, irq_load_link,
1, HVMSR_PER_DOM);
Image caption Liz Sheppard has been crowdfunding treatment for cancer
There has been a big leap in the number of cancer patients turning to crowdfunding to pay for treatments not available on the NHS, figures seen by BBC Radio 5 live suggest.
Data from JustGiving shows that 2,348 appeals were set up by cancer patients or their loved ones in 2016, a seven-fold rise on the number for 2015.
Over £4.5m was raised by these appeals in 2016 compared with £530,000 in 2015.
Doctors say the number of patients bypassing the NHS is "very worrying".
'Strength and generosity'
Liz Sheppard, a mother-of-three from Mansfield, was diagnosed with small cell stomach cancer - a rare form of the disease - in November 2015.
She has now raised over £135,000 online to help pay for immunotherapy, which she is receiving at a private centre in London.
She has already spent around £60,000 of the money on immunotherapy, and says she is responding well to the treatment.
She told the BBC: "I'm able to get out and lead as normal a life as possible. Certainly I'm not bedridden.
"If it wasn't for people's generosity and kindness, I wouldn't be where I am now. It's not something I could have self-funded. Without that money I wouldn't be here. It means everything.
"I'm a mother. I look at my children every day and they keep me going.
"And the messages people leave when they make a donation can be motivating in themselves. You can draw a lot of strength from them."
Image caption Immunotherapy is one of the most popular treatments people have crowdfunded for
A spokesman for NHS England said: "More people than ever before are surviving cancer thanks to improved NHS care… and together with NICE (the National Institute for Health and Care Excellence) we have also launched a new-look cancer drugs fund, meaning patients will be able to access promising, new and innovative treatments much quicker."
According to the detailed figures released by the platform JustGiving, USA, Germany and Mexico topped the most popular destinations for patients travelling abroad for treatments last year.
More than a fifth of those looking for treatment (404 people) raised £1,393,490 in donations to travel to the United States for care.
Germany followed in second place with 142 people crowdfunding £368,530 (a 461% increase from 2015), whilst 23 people raised £69,660 to travel to Mexico for treatment (a 224% increase from 2015).
'Providing a lifeline'
Immunotherapy was the most popular treatment crowdfunded on the JustGiving platform in 2016.
The therapy uses the body's own immune system to fight off cancer. It has been shown to work in certain cases, but not all, and some such treatments are still in the very early stages of research.
The treatments people have funded are not always considered to have the backing of sufficient scientific evidence by NHS experts.
Charles Wells, chief operations officer for JustGiving, said: "Over the last 12 months, we've seen more and more people crowdfunding on JustGiving to raise money for cancer treatments that aren't available on the NHS.
"It can be a practical way for friends, family and the community to come together and help, as well as providing a lifeline for people by giving them access to pioneering treatments when they've been given a cancer diagnosis."
'Funding pressures'
Consultant oncologist Dr Clive Peedell expressed concern about the rise in the number of patients bypassing the NHS to fund their own treatment.
He told BBC Radio 5 live: "The NHS is clearly financially under pressure at present, but cancer therapy has received preferential funding compared with other diseases and conditions.
"The system for approving effective new cancer drugs is not perfect, but is much improved.
"The vast majority of proven effective treatments for cancer are funded by the NHS.
"This includes immunotherapy for a number of indications including lung cancer, which is my own field.
Future investment
"However, funding pressures are likely to pressurise the current system even further and we could see it break down in future.
"It is therefore very worrying to see this trend of crowdfunding for cancer drugs.
"It would be interesting to review all the cases to find out how many are genuinely appropriate.
"I worry that some patients may be trying to access treatment that may not be beneficial.
"Worse still, there may be significant extra costs involved, especially if patients pay privately or travel abroad."
The NHS England spokesman said it was investing £130m in state-of-the-art radiotherapy equipment, alongside £200m of funding over two years to improve local cancer services.
NEW YORK - For now, Ashton Kutcher is the king of Twitter. But there is a new challenger - Oprah.
Kutcher triumphed over CNN in their much ballyhooed race to be the first to reach a million followers on the microblogging Web site. Kutcher surpassed that benchmark in the early morning hours April 17, narrowly edging out the breaking news feed from the Time Warner Inc.-owned network.
Speaking in a live webcast April 17, Kutcher took the tone of a revolutionary.
Kutcher had long trailed CNN, but he staged a rally in recent days that captured the attention of the Web. The million-mark race was taken by many as a symbol of the huge upswing in Twitter's popularity.
In recent months, the site has increased exponentially in visitors. The search engine Yahoo said that searches for Twitter over the past four months increased more than 5,559 percent over the same time last year.
The site allows users to type "tweets" of 140 characters or less on their computers or cell phones, which others "follow" on Twitter like a stock ticker.
Kutcher, who's an avid user of the site along with wife Demi Moore, said Twitter is democratizing media and removing filters between celebrities and fans, big media companies and their customers.
The 31-year-old Kutcher had claimed he would "ding-dong-ditch" CNN founder Ted Turner if he won, and pledged to make good on his promise after winning. Sean "Diddy" Combs was among the celebrity "Twitteratti" who supported his run.
CNN's Larry King posted a video earlier in the week, playfully threatening Kutcher: "CNN will bury you!" Kutcher was to appear on King's program April 17.
King was far from the only person sucked into Twitter by the million-mark showdown. Among the many new users to join was Oprah Winfrey, whose entry caused ripples across Twitter.
She gained more than 130,000 followers in less than a day, suggesting Winfrey - so successful in television, magazines, books and other media - would thrive on yet another platform.
Kutcher, CNN and Winfrey pledged to mark the occasion by purchasing mosquito bed nets to combat malaria. Kutcher donated $100,000 to the Malaria No More Fund, the charity said April 17.
Winfrey hosted a Twitter special on her show April 17 with Kutcher as a guest, connected through the Internet communications service Skype. She also sent her first tweet.
"I'm still not sure I get it," Winfrey said before tentatively typing "ASHTON IS NEXT" into a laptop, announcing the actor's appearance.
"OK, here goes," she said, pressing a key.
Jake Coyle is an entertainment writer with The Associated Press. He can be reached at [email protected].
import {
EntitySubscriberInterface,
EventSubscriber,
InsertEvent,
} from 'typeorm';
import { Product } from '../product.entity';
@EventSubscriber()
export class ProductSubscriber implements EntitySubscriberInterface<Product> {
listenTo() {
    // indicates that this subscriber listens only to Product entity events
return Product;
}
beforeInsert(e: InsertEvent<Product>) {
    console.log(`[BEFORE PRODUCT INSERTED]: ${JSON.stringify(e.entity)}`);
}
}
Are Smaller Emergency Departments More Prone to Volume Variability?

Introduction: Daily patient volume in emergency departments (ED) varies considerably between days and sites. Although studies have attempted to define high-volume days, no standard definition exists. Furthermore, it is not clear whether the frequency of high-volume days, by any definition, is related to the size of an ED. We aimed to determine the correlation between ED size and the frequency of high-volume days for various volume thresholds, and to develop a measure to identify high-volume days.

Methods: We queried retrospective patient arrival data including 1,682,374 patient visits from 32 EDs in 12 states between July 1, 2018-June 30, 2019 and developed linear regression models to determine the correlation between ED size and volume variability. In addition, we performed a regression analysis and applied the Pearson correlation test to investigate the significance of median daily volumes with respect to the percent of days that crossed four volume thresholds ranging from 5-20% (in 5% increments) greater than each site's median daily volume.

Results: We found a strong negative correlation between ED median daily volume and volume variability (R2 = 81.0%; P < 0.0001). In addition, the four regression models for the percent of days exceeding specified thresholds greater than their daily median volumes had R2 values of 49.4%, 61.2%, 70.0%, and 71.8%, respectively, all with P < 0.0001.

Conclusion: We sought to determine whether smaller EDs experience high-volume days more frequently than larger EDs. We found that high-volume days, when defined as days with a count of arrivals at or above certain median-based thresholds, are significantly more likely to occur in lower-volume EDs than in higher-volume EDs.
To the extent that EDs allocate resources and plan to staff based on median volumes, these results suggest that smaller EDs are more likely to experience unpredictable, volume-based staffing challenges and operational costs. Given the lack of a standard measure to define a high-volume day in an ED, we recommend 10% above the median daily volume as a metric, for its relevance, generalizability across a broad range of EDs, and computational simplicity.

INTRODUCTION

Background

Emergency department (ED) visits in the United States increased from 119.2 million in 2006 to 145.6 million in 2016.1 The increase in visits contributes to crowding, boarding, and overtaxing of clinical staff capabilities.2,3 Several studies highlight the negative effects of crowding on patient satisfaction, care, health outcomes, and staff safety.
2,4,5 Volume predictions and management strategies have been developed to improve operations and mitigate the impact of increased volume.6,7 Staffing all days to the level of high-volume days would reduce crowding; however, it would be costly and inefficient on lower-volume days. Staffing to the average demand is a common approach to balance these tradeoffs.

Importance

A significant limitation of staffing to the average demand is that the method does not consider the day-to-day natural variability of demand, which is inherent to the system and cannot be eliminated. Although research exists on resource mobilization in mass casualty or surge events (eg, the COVID-19 pandemic), few studies investigate the variability in patient volume on a day-to-day basis in the ED. A study demonstrating that lower-volume EDs are more prone to variability is of great value for effective and efficient management of ED operations and staffing. Furthermore, developing a measure for identifying high-volume days in EDs encourages robust staffing approaches, which could balance quality and efficiency while accounting for day-to-day volume variability.

Goals

We compared the variability of patient volume relative to ED size by assessing volume-based thresholds (5%, 10%, 15%, and 20% greater than the daily median volume of the ED). We intentionally avoided standard deviations and percentiles, which naturally scale with ED volume. Using median-based thresholds as the standard measures, we studied whether smaller EDs experience a greater frequency of high-volume days as opposed to those of larger, more resource-heavy EDs.

METHODS

Data

This was a retrospective, observational study of aggregated third-party ED data. The dataset included 1,682,374 unique visits from 32 EDs in 12 states from July 1, 2018-June 30, 2019. The hospitals consisted of 28 urban and 4 rural hospitals.
Collectively 5 out of 32 EDs were in academic hospitals, while the remaining 27 EDs were in community hospitals. We queried historical deidentified and anonymized data from a database of patient billing records provided by a national coding, billing, and analytics company (LogixHealth, Inc., Bedford, MA). The timestamps of patient arrivals were recorded and saved to a hospital database at the time of registration.

Setting

We excluded from the analysis pediatric-only and freestanding EDs, as well as EDs lacking data for all 365 days. Median daily arrivals in the remaining EDs ranged from 79 to 214, resulting in the annual visits ranging from about 29,000 to about 78,000. It is worth noting that although this range is relatively broad, it may not be completely inclusive of extreme ED sizes.

Analysis

To examine the correlation between ED median daily volume and volume variability, we developed a linear regression model with the following hypothesis:

H0: ED median daily volume and the variability of volume are not correlated.
H1: ED median daily volume and the variability of volume are linearly correlated.

Next, for all EDs we calculated the percent of days above 5%, 10%, 15%, and 20% of the median daily volume. We propose that smaller EDs will more frequently experience days with volume above a given threshold, defined as a percentage above their median daily volume. The structured hypothesis is as follows:

H0: The frequency of days that ED volume equals or exceeds 5%, 10%, 15%, and 20% of the median daily volume has no relation to the median daily volume of the ED.
H1: The frequency of days that ED volume equals or exceeds 5%, 10%, 15%, and 20% of the median daily volume is higher in EDs with a smaller median daily volume than those with a larger median daily volume.

We normalized the data to remove the day-of-week (DOW) effect.
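The day-of-week adjustment and the percent-of-days threshold measure described in this section can be sketched as follows. This is a minimal illustration on synthetic arrival counts; the variable names and data are ours, not the study's:

```python
import statistics

# Synthetic daily arrival counts for one ED; the study used 365 days of
# registration timestamps per site.
volumes = [82, 95, 77, 101, 88, 70, 65, 90, 84, 79, 110, 73, 68, 92]
dows = [i % 7 for i in range(len(volumes))]  # day-of-week index 0..6

# Remove the day-of-week (DOW) effect: scale each day's count by the ratio of
# the overall mean volume to that DOW's mean volume.
overall_mean = statistics.mean(volumes)
dow_mean = {d: statistics.mean([v for v, dd in zip(volumes, dows) if dd == d])
            for d in set(dows)}
adjusted = [v * overall_mean / dow_mean[d] for v, d in zip(volumes, dows)]

# Coefficient of variation, used to compare variability across ED sizes.
cov = statistics.stdev(adjusted) / statistics.mean(adjusted)

# Proposed high-volume measure: share of days at or above 10% over the median.
median = statistics.median(adjusted)
high_volume_share = sum(v >= 1.10 * median for v in adjusted) / len(adjusted)
```

Per site, `cov` and `high_volume_share` would then serve as the dependent variables in the regressions against median daily volume.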
For each site, the ratio of the mean volume to the mean volume by DOW was multiplied by the true volume to generate adjusted daily volumes.

RESULTS

To examine the correlation between volume variability (the dependent variable) and ED median daily volume (the independent variable), we calculated the coefficient of variation (COV) for each site. The COV is used to adjust variability for ED size. We then conducted a regression analysis to investigate the correlation between ED size and volume variability. The linear regression model follows the form of Y = mX + b, and here, X is a vector of the median daily volume for each of the EDs (the independent variable), while Y is a vector of the COV for each of the EDs (the dependent variable). The results displayed in Figure 1 indicate a strong negative correlation with R2 of 81.0% and P < 0.0001. These results demonstrate that smaller EDs generally have a higher COV and hence experience more daily volume variability than larger EDs. We then developed a series of linear regression models and Pearson correlation tests (Figure 2) to test the primary study hypothesis. For these models, X is a vector of the median daily volumes for each of the EDs (the independent variable), while Y is a vector of the frequency of days equaling or exceeding a given threshold for each of the EDs (the dependent variable). The results of the regression analysis indicate a statistically significant negative correlation between the independent and dependent variables, which led us to reject the null hypothesis for all four cases. This demonstrates that lower-volume EDs tend to experience high-volume days more frequently than higher-volume EDs. For instance, as shown in Figure 2c, the smaller EDs have days with 15% more volume than their median volume roughly four times as often as the larger EDs.
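The per-site regression just described (Y = mX + b, with median daily volume as X) can be reproduced in a few lines of standard-library Python; the per-ED numbers below are hypothetical, not the paper's data:

```python
import math

# Hypothetical per-ED summaries: median daily volume and the percent of days
# at or above 110% of that median (illustrative values only).
medians = [80.0, 95.0, 110.0, 130.0, 150.0, 175.0, 200.0, 214.0]
pct_high = [22.0, 19.5, 18.0, 15.5, 14.0, 12.5, 11.0, 10.5]

n = len(medians)
mean_x = sum(medians) / n
mean_y = sum(pct_high) / n

# Least-squares fit of Y = mX + b, as in the paper's linear models.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(medians, pct_high))
sxx = sum((x - mean_x) ** 2 for x in medians)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Pearson correlation; its square is the R^2 reported for each model.
syy = sum((y - mean_y) ** 2 for y in pct_high)
r = sxy / math.sqrt(sxx * syy)

# Multiplying a median daily volume by the slope and adding the intercept
# estimates the percent of days exceeding the threshold for that ED.
predicted_at_120 = slope * 120.0 + intercept
```

With real data, one would also test the significance of `r` (the paper reports P < 0.0001 for all four thresholds).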
With the aim of formulating a measure to classify high-volume days that balances generalizability to various ED sizes, relevance, and derivation simplicity, we further analyzed the linear regression model results. To be able to generalize the high-volume metric to a broad range of EDs, we assessed the correlation determinations (R2), for which Figures 2b-d demonstrate sufficient quality. Regarding the relevance of the metric, Figure 2a demonstrates that high-volume days with the threshold set to 5% above the median would occur about 25%-35% of the time, which is too common to be relevant for operational purposes. Figure 2b demonstrates that smaller EDs cross the 10% threshold on roughly 20% of days, whereas larger EDs cross the threshold on roughly 10% of days. Figures 2c and 2d illustrate that larger EDs almost never cross the 15% and 20% thresholds, which would prevent measures with these thresholds from being generalizable to a variety of EDs. Given the overall regression quality, applicability to both large and small EDs, and simplicity of derivation, we recommend 10% above median daily volume to represent a reasonable threshold for identifying high-volume days in EDs. This proposed measure is the first step in developing comprehensive measures beyond the "average" or "median" daily volume to identify "busy" days in an ED and better capture a comprehensive view of daily volume variability.

DISCUSSION

Although EDs vary with respect to the particulars of staffing, volume, acuity, boarding, and admission rate, they all are likely to operate differently on a low-volume day compared to a high-volume day. Unlike low-volume days, where different systems that are critical to efficient ED operation and flow are less likely to be stressed, higher-volume days often lead to boarding and potential concerns for quality and safety because they strain medical resources and hinder the timeliness of emergency care.
However, it is worth noting that low-volume days could also be problematic and impose financial challenges on ED operations, as overstaffed days could lead to waste of resources and excess capacity. Hence, smaller EDs must develop strategies to identify, assess, and accommodate the effect and frequency of daily volume variability. While the identified root causes of ED crowding and long wait times are predominantly linked to the inherent variability of demand, many of the existing solutions are focused on streamlining patient flow.10 Therefore, static solutions are being applied to a dynamic and unpredictable problem. Bridging this gap warrants the development and implementation of novel ED staffing approaches that adaptively align ED resources with demand. With the ability to classify high-volume days, ED leaders will be better equipped to proactively manage this variability and use appropriate staffing strategies that prevent prolonged wait times while balancing quality, provider satisfaction, operational complexity, and cost.

Figure 2 caption: The percent of days exceeding specified thresholds vs daily median volume (2a: 5% above median volume, 2b: 10% above median volume, 2c: 15% above median volume, 2d: 20% above median volume). The data in all four charts indicate a negative slope, demonstrating that smaller emergency departments (ED) tend to cross percent-of-median volume thresholds more frequently than larger EDs. In these models, multiplying an ED's median daily volume by the slope and adding the intercept produces an estimate of the percent of days that exceed the respective threshold.

LIMITATIONS

A limitation of this study is that some EDs naturally have more day-to-day variability than others. For instance, an ED in a seasonal vacation town may experience significantly higher volume in certain months.
Future work could explore the benefit of including additional explanatory variables, such as specific ED location, to correct for this effect. Furthermore, we obtained the data in this study for EDs in only 12 states. Although these states were distributed across broad regions of the United States, further research is recommended to support generalizing the findings.

CONCLUSION

Smaller EDs, in addition to having fewer resources to buffer increased demand, have more frequent high-volume days than larger EDs. Given the lack of a standard measure to define a high-volume day in EDs, we propose 10% above the median daily volume. Our recommended metric is directly related to daily ED volume and could be a starting point in identifying, understanding, and managing high-volume days in EDs. This work is a call to action for further studies in constructing a roadmap to develop robust measures that would help acknowledge, assess, and effectively plan for the daily volume variability in EDs.
Effect of dietary composition on insulin receptors in normal subjects. Six normal subjects were placed on a high carbohydrate diet (80%) and a high fat diet (60%) for 2 weeks each. Glucose tolerance testing with plasma immunoreactive insulin levels was performed along with insulin receptor quantitation after a control period and after each of the dietary manipulations. Despite improved carbohydrate tolerance and decreased plasma immunoreactive insulin after the high carbohydrate diet (evidence for increased insulin sensitivity) insulin receptor number and affinity were unchanged. These studies suggest that the increased insulin sensitivity induced by a high carbohydrate diet is due to some adaptive change in postreceptor activity. Manipulations of dietary composition fail to alter insulin binding to peripheral mononuclear cells.
//
// yas_exception.h
//
#pragma once
#include <string>
namespace yas {
void raise_with_reason(std::string const &reason);
void raise_if_main_thread();
void raise_if_sub_thread();
} // namespace yas
from crispy_forms.utils import render_crispy_form
from django.core.mail import EmailMessage
from django.http import JsonResponse
from django.template.context_processors import csrf
from django.template.loader import render_to_string
from django.urls import reverse_lazy
from django.utils import timezone
from django.views import generic

from .forms import MailForm, PostForm
from .models import Post
class ListPost(generic.ListView):
template_name = 'blog1/post_list.html'
model = Post
context_object_name = 'posts'
def get_queryset(self):
"""Return the last five published questions."""
return Post.objects.filter(published_date__lte=timezone.now()).order_by('-published_date')
class DetailPost(generic.DetailView):
template_name = 'blog1/post_detail.html'
model = Post
class NewPost(generic.CreateView):
model = Post
form_class = PostForm
template_name = 'blog1/post_edit.html'
def form_valid(self, form):
form.instance.published_date = timezone.now()
form.instance.author = self.request.user
return super(NewPost, self).form_valid(form)
def get_success_url(self):
return reverse_lazy('blog1:post_detail', kwargs={'pk': self.object.pk})
class EditPost(generic.UpdateView):
model = Post
form_class = PostForm
def form_valid(self, form):
self.object = form.save()
data = {
'form_is_valid': True,
'id': form.instance.id,
'title': form.instance.title,
'text': form.instance.text
}
return JsonResponse(data)
def form_invalid(self, form):
form_html = render_crispy_form(form, context=csrf(self.request))
return JsonResponse({'html_form': form_html})
def get(self, request, *args, **kwargs):
self.object = self.get_object()
ct = self.get_context_data()
form_html = render_crispy_form(ct["form"], context=csrf(self.request))
return JsonResponse({'html_form': form_html})
class deletepost(generic.DeleteView):
model = Post
def delete(self, request, *args, **kwargs):
self.get_object().delete()
data = {'delete': 'ok'}
return JsonResponse(data)
def get(self, request, *args, **kwargs):
conte = {'post': self.get_object()}
form_html = render_to_string('blog1/post_delete.html',
conte,
request=request, )
return JsonResponse({'html_form': form_html})
class MailPost(generic.FormView):
form_class = MailForm
template_name = 'blog1/post_mail.html'
success_url = reverse_lazy('blog1:post_list')
def form_valid(self, form):
subject = form.cleaned_data['subject']
message = form.cleaned_data['feedback']
destination = form.cleaned_data['destination']
email = EmailMessage(subject, message, to=[destination])
email.send()
return super(MailPost, self).form_valid(form)
PORTLAND, Maine (AP) - The former director of the World Pro Ski Tour from the 1980s and 90s hopes to relaunch the tour in 2017.
Ed Rogers says he’s lining up investors to resurrect the World Pro Ski Tour, which for 40 years served as the only made-for-TV ski racing event with large cash prizes and national sponsors.
Rogers created a style of competition in which two skiers raced against each other instead of the conventional method of individuals racing against the clock. The races pitted Olympic and World Cup champs against weekend warrior ski racers from all over the globe.
Bairi Piya
Picturization
The song is picturized on Parvati (Aishwarya Rai) and Devdas (Shah Rukh Khan). The song picturises the romance and the sweet relation between the two characters and their love for each other since their childhood.
Reception
"Bairi Piya" was an instant success and topped the charts. Ghoshal became the first and till date is the only singer to win both Filmfare and National Film Awards for her debut song. Shreya's rendition of "Ish" or "Eesh" in the song became the highlight of the character Parvati and was well appraised.
Reviewing the soundtrack, Aniket Joshi said, "If you liked "Aankhon Ki Gustakhiyan" from Hum Dil De Chuke Sanam, I can pretty much guarantee you’ll like “Bairi Piya”. The song falls in the same genre as the previously mentioned song from Hum Dil De Chuke Sanam, a chhed-chhad song, but done with a lot of grace and maturity. Yes, that’s quite hard to put together. Shreya Ghoshal and Udit Narayan render this number. The singing, like all of the songs in the album is just mind-blowing. The unique part of the song is the "ish" that Darbar has put in at certain points in the song, very unique!".
While reviewing for Rediff.com, Sukanya Verma wrote, "Udit Narayan and Shreya murmur sweet nothing as they playfully chide and make up in Bairi piya. Narayan successfully captures the eternal romanticism of Devdas whereas Shreya brings an element of impishness to Paro's character by blushing "Eesh" at every given opportunity." |
Determination of Trace Water Content in Petroleum and Petroleum Products. Measurement of water in petroleum and petroleum-based products is of industrial and economic importance; however, the varied and complex matrixes make the analyses difficult. These samples tend to have low amounts of water and contain many compounds which react with iodine, causing Karl Fischer titration (KFT) to give inaccurate, typically higher, results. A simple, rapid, automated headspace gas chromatography (HSGC) method which requires modified instrumentation and ionic liquid stationary phases was developed. Measurement of water in 12 petroleum products along with 3 National Institute of Standards and Technology reference materials was performed with the developed method. The range of water found in these samples was ∼12-3300 ppm. This approach appeared to be unaffected by complicated matrixes. The solvent-free nature of the HSGC method also negates the solubility limitations which are common with KFT. |
package events
import (
"context"
"database/sql"
"fmt"
"strings"
"time"
"github.com/Masterminds/squirrel"
dbtypes "github.com/contiamo/go-base/pkg/db/serialization"
"github.com/contiamo/go-base/pkg/tracing"
"github.com/golang/protobuf/ptypes"
"github.com/jackc/pgx/v4"
"github.com/pkg/errors"
uuid "github.com/satori/go.uuid"
"github.com/trusch/backbone-tools/pkg/api"
"github.com/trusch/backbone-tools/pkg/sqlizers"
"github.com/trusch/backbone-tools/pkg/ticker"
)
var (
pollInterval = 10 * time.Second
)
func NewServer(ctx context.Context, db *sql.DB, connectString string) (api.EventsServer, error) {
srv := &eventsServer{
Tracer: tracing.NewTracer("events", "EventsServer"),
db: db,
connectString: connectString,
}
return srv, srv.init(ctx)
}
type eventsServer struct {
tracing.Tracer
db squirrel.StdSqlCtx
connectString string
}
func (s *eventsServer) init(ctx context.Context) error {
_, err := s.db.ExecContext(ctx, `
CREATE TABLE IF NOT EXISTS events(
event_id UUID PRIMARY KEY,
topic TEXT NOT NULL,
payload BYTEA,
labels JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
sequence SERIAL
);
`)
return err
}
func (s *eventsServer) getBuilder(db squirrel.BaseRunner) squirrel.StatementBuilderType {
return squirrel.StatementBuilder.
PlaceholderFormat(squirrel.Dollar).
RunWith(db)
}
func (s *eventsServer) Publish(ctx context.Context, req *api.PublishRequest) (event *api.Event, err error) {
span, ctx := s.StartSpan(ctx, "Publish")
defer func() {
s.FinishSpan(span, err)
}()
span.SetTag("topic", req.GetTopic())
span.SetTag("labels", req.GetLabels())
span.SetTag("payload", req.GetPayload())
id := uuid.NewV4().String()
now := time.Now()
nowProto, err := ptypes.TimestampProto(now)
if err != nil {
return nil, err
}
if req.Labels == nil {
req.Labels = make(map[string]string)
}
builder := s.getBuilder(s.db).Insert("events").Columns(
"event_id",
"topic",
"labels",
"payload",
"created_at",
).Values(
id,
req.GetTopic(),
dbtypes.JSONBlob(req.GetLabels()),
req.GetPayload(),
now,
).Suffix("RETURNING \"sequence\"")
fmt.Println(builder.ToSql())
row := builder.QueryRowContext(ctx)
var seq uint64
if err := row.Scan(&seq); err != nil {
return nil, errors.Wrap(err, "failed to insert event")
}
_, err = s.db.ExecContext(ctx, `NOTIFY events_`+strings.Replace(req.GetTopic(), "-", "_", -1))
if err != nil {
return nil, err
}
return &api.Event{
Id: id,
Topic: req.GetTopic(),
Labels: req.GetLabels(),
Payload: req.GetPayload(),
CreatedAt: nowProto,
Sequence: seq,
}, nil
}
func (s *eventsServer) Subscribe(req *api.SubscribeRequest, resp api.Events_SubscribeServer) (err error) {
span, ctx := s.StartSpan(resp.Context(), "Subscribe")
defer func() {
s.FinishSpan(span, err)
}()
notifyConn, err := pgx.Connect(ctx, s.connectString)
if err != nil {
return err
}
ticker := ticker.New(pollInterval, 0.1, notifyConn, "events_"+strings.Replace(req.GetTopic(), "-", "_", -1))
if err := ticker.Start(ctx); err != nil {
return err
}
lastSequence := req.GetSinceSequence()
timestamp := req.GetSinceCreatedAt()
for {
select {
case <-ctx.Done():
return ctx.Err()
case <-ticker.C:
filter := squirrel.And{
squirrel.Eq{
"topic": req.GetTopic(),
},
}
if lastSequence > 0 {
filter = append(filter, squirrel.Gt{
"sequence": lastSequence,
})
}
if labels := req.GetLabels(); labels != nil {
filter = append(filter, sqlizers.JSONContains{
"labels": dbtypes.JSONBlob(labels),
})
}
if timestamp != nil {
ts, err := ptypes.Timestamp(timestamp)
if err != nil {
return err
}
filter = append(filter, squirrel.GtOrEq{
"created_at": ts,
})
}
rows, err := s.getBuilder(s.db).
Select("event_id", "labels", "payload", "created_at", "sequence").
From("events").
Where(filter).
QueryContext(ctx)
if err != nil {
return err
}
for rows.Next() {
var (
event = api.Event{Topic: req.GetTopic()}
createdAt time.Time
)
err = rows.Scan(&event.Id, dbtypes.JSONBlob(&event.Labels), &event.Payload, &createdAt, &event.Sequence)
if err != nil {
return err
}
event.CreatedAt, err = ptypes.TimestampProto(createdAt)
if err != nil {
return err
}
span, _ := s.StartSpan(ctx, "sendEvent")
err = resp.Send(&event)
lastSequence = event.Sequence
timestamp = nil
s.FinishSpan(span, err)
if err != nil {
return err
}
}
rows.Close()
if err := rows.Err(); err != nil {
return err
}
}
}
}
|
// repository: kihaev/hapay
import { Component, OnInit } from '@angular/core';
import { validationPatterns } from 'src/app/shared/helpers/validationPatterns';
import { FormGroup, FormControl, Validators } from '@angular/forms';
import { BaseAuthComponent } from '../base-auth/base-auth.component';
import { Router } from '@angular/router';
import { AuthenticationService } from 'src/app/services/authentication.service';
import { AuthService } from 'angularx-social-login';
import { CookieWrapperService } from 'src/app/services/cookie-wrapper.service';
import { ToastrService } from 'ngx-toastr';
const SHARED_PATTERNS = validationPatterns().sharedPatterns;
@Component({
selector: 'app-signin',
templateUrl: 'sign-in.component.html',
styleUrls: ['sign-in.component.scss']
})
export class SignInComponent extends BaseAuthComponent implements OnInit {
public loginForm: FormGroup;
constructor(public router: Router,
public toastr: ToastrService,
public authService: AuthService,
public authenticationService: AuthenticationService,
public cookieWrapperService: CookieWrapperService) {
super(router, toastr, authService, cookieWrapperService, authenticationService);
}
ngOnInit() {
this.loginForm = new FormGroup({
email: new FormControl("", [
Validators.required,
Validators.pattern(SHARED_PATTERNS.emailAddress)
]),
password: new FormControl("", [Validators.required])
});
}
public login(): void {
if (this.loginForm.invalid) {
if (this.loginForm.controls.email.invalid) {
this.loginForm.controls.email.markAsDirty();
}
if (this.loginForm.controls.password.invalid) {
this.loginForm.controls.password.markAsDirty();
}
return;
}
this.authenticationService.login({
user: {
email: this.loginForm.controls.email.value,
password: this.loginForm.controls.password.value
}
}).subscribe();
// this.authenticate(this, this.loginCallback);
}
public forgotPassword() {
this.router.navigate(['/']);
}
public signup(): void {
this.router.navigate(['/account/signup']);
}
} |
# web/mainApp/migrations/0007_auto_20190503_1228.py
# Generated by Django 2.1.8 on 2019-05-03 03:28
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('mainApp', '0006_auto_20190502_1414'),
]
operations = [
migrations.AlterField(
model_name='solvepost',
name='lang',
field=models.PositiveSmallIntegerField(choices=[(1, 'python3')], default=1),
),
]
|
WATCH: The remaining members of The Tragically Hip, Gord Sinclair, Paul Langlois and Rob Baker, talk life after the death of frontman Gord Downie.
Less than a year after the death of The Tragically Hip frontman Gord Downie, the remaining members of the iconic Canadian band are opening up about the grieving process as they embark on a new venture together.
ET Canada’s Carlos Bustamante was in Creemore, Ont. with band members Gord Sinclair, Rob Baker and Paul Langlois, who say the outpouring from fans following Downie’s death from brain cancer in October 2017 was unsurprising.
WATCH BELOW: Will the Tragically Hip play again?
Langlois says Downie was “crushed” that his inevitable death would mean the end of The Hip and lobbied for replacement vocalists to step in for him when he was gone.
“It’s still pretty fresh and it crushed Gord that The Hip wasn’t going to be,” Langlois explains. “Matter of fact, he was constantly saying we should continue, ‘What about this guy?’ or ‘What about this girl?’ And you know at a certain period he was talking that way. And it was like, ‘No way, man you got to stop.’ I think we are all still adjusting,” he adds.
For Baker, performing without Downie wasn’t an option.
Though the band may no longer have any gigs, they are still friends and business partners who say being together is a way to honour Downie’s memory. The band announced their partnership with the federally-licensed medical marijuana grower Up Cannabis in May 2017 and collaborated with the Up North event in Creemore this week. |
// src/main/java/peru/UseMultiSet.java
package peru;
import com.google.common.collect.HashMultiset;
import com.google.common.collect.Multiset;
import com.google.common.collect.Sets;
/**
* Created by kmhaswade on 5/29/16.
*/
public class UseMultiSet {
public static void main(String[] args) {
Multiset<Integer> mset = HashMultiset.create();
mset.add(2);
mset.add(1);
mset.add(1);
mset.add(1);
mset.add(1);
mset.add(3);
mset.add(2);
System.out.println("#1: " + mset.count(1) + ", size: " + mset.size());
mset.forEach(e -> System.out.println(e));
System.out.println("size: "+ Sets.newHashSet("a", "b", "c").size());
}
}
|
export type Listener = (event : any)=>void;
export class EventEmitter {
private _listeners : Array<Listener> = new Array<Listener>();
subscribe(listener : Listener)
{
if(this._listeners.some((element)=>element === listener))
return;
this._listeners.push(listener);
}
unsubscribe(listener : Listener)
{
for(let i = this._listeners.length - 1; i >= 0; i--)
{
if(this._listeners[i] === listener)
{
this._listeners.splice(i, 1);
return;
}
}
}
emit(event? : any)
{
for(let listener of this._listeners)
{
try
{
listener(event);
}
catch (exception)
{
console.error(exception);
}
}
}
}
|
ServiceConfig = {}
ServiceConfig['name'] = "sampleService"
# heartbeat server
ServiceConfig['heartbeatserver'] = "" # e.g. "10.20.30.40:9090"; if "" then do not keep a heartbeat with the server
ServiceConfig['department'] = "alimama"
ServiceConfig['business'] = "zhitongche"
ServiceConfig['product'] = "kgb"
ServiceConfig['desc'] = "this is a sample service"
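Per the comment on `heartbeatserver`, an empty string disables the heartbeat. A minimal sketch of how a consumer of this config might honor that convention (`parse_heartbeat` is a hypothetical helper, not part of the config itself):

```python
def parse_heartbeat(config: dict):
    """Return (host, port) parsed from config['heartbeatserver'],
    or None when the field is empty or absent (heartbeat disabled)."""
    addr = config.get('heartbeatserver', '')
    if not addr:
        return None
    host, _, port = addr.partition(':')
    return host, int(port)

print(parse_heartbeat({'heartbeatserver': '10.20.30.40:9090'}))  # ('10.20.30.40', 9090)
print(parse_heartbeat({'heartbeatserver': ''}))                  # None
```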
|
Design, Preparation, and Evaluation of a Novel 99mTcN Complex of Ciprofloxacin Xanthate as a Potential Bacterial Infection Imaging Agent. In order to seek novel technetium-99m bacterial infection imaging agents, a ciprofloxacin xanthate (CPF2XT) was synthesized and radiolabeled with the [99mTcN]2+ core to obtain the 99mTcN-CPF2XT complex, which exhibited high radiochemical purity, hydrophilicity, and good stability in vitro. The bacteria binding assay indicated that 99mTcN-CPF2XT had specificity to bacteria. A biodistribution study in mice showed that 99mTcN-CPF2XT had a higher uptake in bacterial infection tissue than in turpentine-induced abscesses, indicating that it could distinguish bacterial infection from sterile inflammation. Compared to 99mTcN-CPFXDTC, the abscess/blood and abscess/muscle ratios of 99mTcN-CPF2XT were higher, and its uptakes in the liver and lung were markedly decreased. The results suggested that 99mTcN-CPF2XT would be a potential bacterial infection imaging agent.

Introduction
Public health advanced during the eighteenth and nineteenth centuries, and with the discovery of antibiotics in the twentieth century, the treatment of infection improved greatly. At present, however, antibiotic abuse has led to the emergence of bacterial resistance, which limits the efficacy of antibiotics. Bacterial infection is one of the most common causes of morbidity and mortality in developing countries, and it is predicted that antibiotic-resistant infections will be the leading cause of human death by 2050. Because patients with inflammation may have several disease processes occurring at the same time, clinical assessment does not always identify a clear pathogen, which leads to delayed treatment and antibiotic abuse. Early detection of infection therefore allows the timely and appropriate treatment of patients and avoids the overuse of antibiotics.
Identifying and locating the lesion sites of infection is a critical step in clinical treatment. Computed tomography (CT) and magnetic resonance imaging (MRI) are available for detecting infections. However, these imaging tools depend on anatomical changes that occur late in the disease process, so they detect infection foci inefficiently in the early phase, when obvious morphologic changes cannot yet be observed. Compared to CT and MRI, nuclear medicine techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are based on physiochemical and biochemical changes in organs, which can be located in the early phase. Radiopharmaceuticals with high specificity can selectively concentrate at the site of infection, which leads to accurate detection of pathogens and the rapid, suitable treatment of patients. Currently, discriminating between infection and sterile inflammation is of great significance. Various radiopharmaceuticals have been developed for detecting inflammation in humans; however, they have some limitations, and the development of new infection imaging radiopharmaceuticals is still needed in nuclear medicine. Technetium-99m is extensively used in nuclear medicine owing to its in-house availability, low cost, decay characteristics, and ability to conjugate with bioactive molecules through a bifunctional chelator. Up to now, there have been several kinds of 99mTc-labeled antimicrobial agents. Among them, 99mTc-ciprofloxacin has been widely evaluated by many groups around the world. Compared to radiolabeled leucocytes, 99mTc-ciprofloxacin has more specificity for infection, lower preparation cost, and better imaging quality, and it can be prepared from a kit. However, 99mTc-ciprofloxacin has some limitations, such as a low radiochemical yield and the need for heating during preparation.
In the structure of ciprofloxacin, the carbonyl and carboxyl groups are considered necessary pharmacophores, and they can possibly coordinate with technetium-99m; therefore, labeling directly with 99mTc would reduce the binding affinity to bacteria. The [99mTcN]2+ core was found to conjugate well with ligands containing S atoms, such as dithiocarbamates. Recently, our group has developed some 99mTc-labeled antibiotic tracers. For example, we synthesized ciprofloxacin dithiocarbamate (CPFXDTC, as shown in Figure 1) and radiolabeled it with the [99mTcN]2+ core. 99mTcN-CPFXDTC was easily prepared through a ligand-exchange reaction. In the biodistribution assay in bacteria-infected mice, the infection uptake was 3.21 ± 0.66% ID/g at 4 h post-injection, and the abscess/muscle and abscess/blood ratios were 1.78 and 1.86. Compared to 99mTc-ciprofloxacin, the abscess uptake and abscess/blood ratio were higher, but the abscess/muscle ratio was much lower. In addition, 99mTcN-CPFXDTC was lipophilic, so a higher accumulation of radioactivity was found in the liver, and the lung uptake was appreciable as well. These limitations could affect the quality of infection imaging; therefore, further research is needed to solve these problems. As xanthates contain two sulfur atoms and can be easily labeled with technetium-99m, our group has recently reported some radiolabeled xanthates as potential imaging agents.
These backgrounds encouraged us to synthesize a ciprofloxacin xanthate (CPF2XT, as shown in Figure 1) and evaluate the possibility of 99mTcN-CPF2XT as a potential bacterial infection imaging agent.

Synthesis
The reaction equation is shown in Scheme 1. CPF2XT was prepared by reacting the precursor N4-2-hydroxyethylciprofloxacin (compound 3) with carbon disulfide and NaH in THF, and was characterized by 1H-NMR, 13C-NMR, and ESI-MS.

Radiolabeling
99mTcN-CPF2XT was easily prepared in high yield through a ligand-exchange reaction, as illustrated in Scheme 2 (preparation route and speculative structure of 99mTcN-CPF2XT). The radiochemical purity of the complex was determined by TLC, with the following results: in saline, 99mTcO4− and 99mTcN-CPF2XT stayed at the origin, while [99mTcN]2+ moved to the front; in acetonitrile, 99mTcO4− moved to Rf 0.3-0.5, while [99mTcN]2+ and 99mTcN-CPF2XT remained at the origin. In order to obtain the best labeling conditions, we optimized several parameters, such as the pH of the solution, the amount of ligand, the reaction temperature, and the incubation time. The labeling yield was over 90% when 5 mg of CPF2XT ligand was labeled with the [99mTcN]2+ intermediate at pH 8-9 at room temperature for 30 min.
Stability Tests
In the reaction solution at room temperature after 6 h, the radiochemical purity of the complex was still more than 90%. In mouse serum at 37 °C, the radiochemical purity of the complex was likewise over 90%. Nearly no decomposition of 99mTcN-CPF2XT was found, suggesting its great stability in vitro.

Partition Coefficient
Compared to 99mTcN-CPFXDTC (log P = 1.02), the partition coefficient (log P) value of 99mTcN-CPF2XT was −0.80 ± 0.05. The result suggested that 99mTcN-CPF2XT was hydrophilic while 99mTcN-CPFXDTC was lipophilic.

In Vitro Binding of 99mTcN-CPF2XT with Bacteria
The result of the in vitro bacteria (Staphylococcus aureus) binding study of 99mTcN-CPF2XT is shown in Figure 2. An excess of ciprofloxacin and of CPF2XT was added, respectively, for competition. Columns 2 and 3 indicate that binding of the complex to the bacteria was significantly reduced; the values decreased to 29.32% and 13.25% of the original value, respectively. This suggested that 99mTcN-CPF2XT binds specifically to bacteria. Figure 2. In vitro binding of 99mTcN-CPF2XT. Column 1: binding of 99mTcN-CPF2XT to bacteria, as the control group; the binding value was calculated as 100%. Column 2: binding of 99mTcN-CPF2XT to bacteria competing with ciprofloxacin; the data are shown as the binding value/control group ratio. Column 3: binding of 99mTcN-CPF2XT to bacteria competing with CPF2XT; the data are shown as the binding value/control group ratio.

Biodistribution
Biological distribution results in mice bearing bacterial infection and turpentine-induced abscess are demonstrated in Table 1.
As seen in Table 1, in the bacteria-infected mice, the infection uptakes of 99mTcN-CPF2XT were 2.41 ± 0.37% ID/g at 2 h and 2.63 ± 0.49% ID/g at 4 h post-injection, suggesting a significant abscess uptake and good radioactivity retention in the infection foci. The complex had low initial uptake in normal muscle, while relatively fast blood activity clearance was observed between 0.5 and 4 h post-injection, which led to high abscess/muscle and abscess/blood ratios. Meanwhile, the abscess uptake of 99mTcN-CPF2XT in mice with a turpentine-induced abscess was lower than that in bacteria-infected mice: 1.23 ± 0.13% ID/g at 4 h post-injection, with abscess/muscle and abscess/blood ratios of 2.51 and 1.70 at 4 h post-injection, also lower than those in bacteria-infected mice. As for other organs, the high concentration in the kidney showed that the route of excretion was through the urinary system, and the low radioactivity uptake in the stomach and thyroid indicated that the complex had good stability in vivo. Table 1. Biodistribution of 99mTcN-CPF2XT in mice (% ID/g ± SD): mice with bacterial infection (n = 5) and mice with turpentine-induced abscess (n = 5).

Discussion
Ciprofloxacin xanthate contains two sulfur atoms and could conjugate with the [99mTcN]2+ core to obtain a stable complex of the form 99mTcN(L)2 (L = bidentate ligand). 99mTcO4− and the SDH kit were mixed to form the [99mTcN]2+ intermediate, and the ciprofloxacin xanthate was then added to acquire 99mTcN-CPF2XT at room temperature in high labeling yield by a ligand-exchange reaction. Meanwhile, 99mTcN-CPF2XT retains the pharmacophores of ciprofloxacin, which bind specifically to bacteria. Compared to 99mTc-ciprofloxacin, the preparation of 99mTcN-CPF2XT does not require heating or purification, making it easier for clinical application. In the in vitro bacteria binding assay, the binding efficiency of 99mTcN-CPF2XT to S. aureus was reduced by 86% when the corresponding ciprofloxacin xanthate was added for competition, which showed its good specificity to bacteria. The biodistribution study indicated that 99mTcN-CPF2XT had good accumulation in the infected foci (2.63% ID/g) at 4 h post-injection, with abscess/blood and abscess/muscle ratios of 4.78 and 4.04, while the abscess uptake in mice with turpentine-induced abscess (1.23% ID/g) at 4 h post-injection was lower. As shown in Figure 3, according to the results of the biodistribution studies in bacteria-infected mice, the infection uptake of 99mTcN-CPF2XT (2.63% ID/g) was nearly twice that of 99mTc-ciprofloxacin (1.26% ID/g) at 4 h post-injection. Moreover, the abscess/blood ratio of 99mTcN-CPF2XT (4.04) was much higher than that of 99mTc-ciprofloxacin (0.82), and the abscess/muscle ratio of 99mTcN-CPF2XT was 4.78 at 4 h post-injection, versus 4.28 for 99mTc-ciprofloxacin. Compared to 99mTc-ciprofloxacin, 99mTcN-CPF2XT thus showed higher uptake in infected sections. For 99mTc-ciprofloxacin, the carbonyl and carboxyl groups could coordinate with technetium-99m, which led to a decreased binding affinity to bacteria; 99mTcN-CPF2XT instead has two sulfur atoms that complex with the [99mTcN]2+ core to give a stable radiolabeled product, so the pharmacophore of ciprofloxacin remains intact and the abscess uptake of 99mTcN-CPF2XT was higher than that of 99mTc-ciprofloxacin. Compared to 99mTcN-CPFXDTC, the target/non-target ratios of 99mTcN-CPF2XT were much higher: at 4 h post-injection, the abscess/blood and abscess/muscle ratios were 4.78 and 4.04, while the corresponding data for 99mTcN-CPFXDTC were 1.78 and 1.76. As for other non-target organs, the radioactive accumulations of 99mTcN-CPFXDTC in the lung (21.11 ± 6.80% ID/g) and the liver (34.65 ± 5.93% ID/g) were appreciable, while the lung (3.09 ± 0.24% ID/g) and liver (7.02 ± 0.63% ID/g) uptakes of 99mTcN-CPF2XT were much lower.
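The tracer comparison above is simply a ratio of the quoted %ID/g figures; a quick back-of-the-envelope check in Python (values taken from the text, purely illustrative):

```python
# %ID/g values quoted in the text at 4 h post-injection.
uptake_cpf2xt = 2.63   # 99mTcN-CPF2XT, bacteria-infected mice
uptake_cipro = 1.26    # 99mTc-ciprofloxacin

# Fold difference in infection uptake between the two tracers.
fold = uptake_cpf2xt / uptake_cipro
print(round(fold, 2))  # 2.09, i.e. "nearly twice"
```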
According to the partition coefficient data of these two, 99m TcN-CPFXDTC was lipophilic, while 99m TcN-CPF2XT was hydrophilic. The hydrophilicity of 99m TcN-CPF2XT possibly caused the lower uptake of the non-targets. In that case, 99m TcN-CPF2XT could obtain a higher quality of abdominal images than 99m TcN-CPFXDTC. Comparing the results of biodistribution of infected and inflammation mice, we found a significant difference between the uptake in bacteria-infected and inflammation foci. The results suggested that 99m TcN-CPF2XT had the potential to distinguish infection from sterile inflammation. It should be noted that the bacterial infection uptake of 99m TcN-CPF2XT was not high enough, thus possibly making it insufficient to obtain satisfactory imaging results. Further studies should be conducted to verify this speculation. making it insufficient to obtain satisfactory imaging results. Further studies should be conducted to verify this speculation. In the discovery of 99m Tc imaging agents, most 99m Tc (V) complexes are related to 3+ core and 2+ core. The 3+ core is isoelectronic with 2+, which has the ability to complex with the ciprofloxacin xanthate to form a stable labeled complex. By comparison, we prepared 99m TcO-CPF2XT through a ligand-exchange reaction by mixing CPF2XT and − with the GH kit. The log P value of 99m TcO-CPF2XT is −0.12 ± 0.05 and the bacteria binding value reduced by 58% after the excess of CPF2XT was added as an inhibitor. The uptake of 99m TcO-CPF2XT in infected mice was 1.63 ± 0.23% ID/g at 4 h post-injection. In the inflammation mice, the abscess uptake was 1.44 ± 0.13% ID/g at 4 h post-injection. There was no significant difference between the infected and inflammation mice, suggesting 99m TcO-CPF2XT was non-specific to bacteria. The finding suggested different 99m Tc cores may have a great impact on the properties of 99m Tc-labeled radiopharmaceuticals. 
The incorporation of the 99m TcN core into the ciprofloxacin xanthate ligand can improve the biological features for infection imaging. Materials and Methods Ciprofloxacin was purchased from J&K chemical. Carbon disulfide was purified by distillation before use. Succinic dihydrazide (SDH) kit was acquired from Beijing Shihong Pharmaceutical Center, Beijing Normal University, China. All other agents were of reagent grade and were used with no further purification. 99 Mo/ 99m Tc generator was purchased from Atomic High Tech Co. Ltd., Beijing, China. The NMR spectrum was recorded on a 600 MHz JNM-ECS spectrophotometer (JEOL, Tokyo, Japan). The ESI-MS spectrum was recorded on an LC-MS Shimadzu 2010 series. Synthesis of CPF2XT The precursor N4-2-hydroxyethylciprofloxacin (compound 3) was prepared according to a previously reported method. A total of 0.300 g of compound 3 and 0.043 g of NaH were mixed in 20 mL of THF. Next, 0.5 mL of carbon disulfide was added to the solution. The mixture was stirred for 2 h in an ice water environment and then continued overnight at room temperature. Deionized water was added to remove the excess NaH. The solvent was removed under reduced pressure and the residue was recrystallized from methanol/diethyl ether to give CPF2XT (yellow solid, 0.258 g, 65.15%). 1. The log P value of 99m TcO-CPF2XT is −0.12 ± 0.05 and the bacteria binding value reduced by 58% after the excess of CPF2XT was added as an inhibitor. The uptake of 99m TcO-CPF2XT in infected mice was 1.63 ± 0.23% ID/g at 4 h post-injection. In the inflammation mice, the abscess uptake was 1.44 ± 0.13% ID/g at 4 h post-injection. There was no significant difference between the infected and inflammation mice, suggesting 99m TcO-CPF2XT was non-specific to bacteria. The finding suggested different 99m Tc cores may have a great impact on the properties of 99m Tc-labeled radiopharmaceuticals. 
The incorporation of the 99mTcN core into the ciprofloxacin xanthate ligand can improve the biological features for infection imaging.

Materials and Methods

Ciprofloxacin was purchased from J&K Chemical. Carbon disulfide was purified by distillation before use. The succinic dihydrazide (SDH) kit was acquired from Beijing Shihong Pharmaceutical Center, Beijing Normal University, China. All other agents were of reagent grade and were used without further purification. The 99Mo/99mTc generator was purchased from Atomic High Tech Co. Ltd., Beijing, China. The NMR spectrum was recorded on a 600 MHz JNM-ECS spectrophotometer (JEOL, Tokyo, Japan). The ESI-MS spectrum was recorded on an LC-MS Shimadzu 2010 series.

Synthesis of CPF2XT

The precursor N4-2-hydroxyethylciprofloxacin (compound 3) was prepared according to a previously reported method. A total of 0.300 g of compound 3 and 0.043 g of NaH were mixed in 20 mL of THF. Next, 0.5 mL of carbon disulfide was added to the solution. The mixture was stirred for 2 h in an ice-water bath and then overnight at room temperature. Deionized water was added to remove the excess NaH. The solvent was removed under reduced pressure and the residue was recrystallized from methanol/diethyl ether to give CPF2XT (yellow solid, 0.258 g, 65.15%).

Radiolabeling of 99mTcN-CPF2XT

The [99mTcN]2+ intermediate was first prepared. Then, 5 mg of CPF2XT dissolved in 1 mL of saline was added to the mixture. The reaction solution was kept at room temperature for 30 min. The radiochemical purity of 99mTcN-CPF2XT was evaluated by thin-layer chromatography (TLC). TLC was performed using a polyamide strip as the stationary phase, with saline and acetonitrile as mobile phases, respectively.

Stability Study

The stability of 99mTcN-CPF2XT was evaluated by measuring the radiochemical purity (RCP) of the complex. RCP was checked by TLC in the reaction mixture at room temperature for 6 h and in mouse serum at 37 °C for 6 h.
Partition Coefficient Measurement

The partition coefficient was measured as follows: the radiolabeled complex was mixed with an equal volume of phosphate buffer (0.025 mol/L, pH 7.4) and 1-octanol. The mixture was vortexed at room temperature for 1 min and then centrifuged at 5000 r/min for 5 min. Counts in 0.1 mL of the organic and aqueous layers were measured with a well γ-counter. The partition coefficient value was calculated by the equation P = (cpm in 1-octanol)/(cpm in buffer), and the final partition coefficient was expressed as log P.

In Vitro Bacteria Binding Study

In vitro bacteria binding of 99mTcN-CPF2XT was evaluated using a previously reported method. A total of 0.1 mL of 3.7 MBq 99mTcN-CPF2XT solution and 0.1 mL of PBS (pH 7.4) containing about 1 × 10^8 S. aureus were added to a test tube with 0.8 mL of saline. The mixture was incubated for 1 h at 37 °C and then centrifuged at 2000 r/min for 5 min. The pellets were resuspended in 1 mL of PBS (pH 7.4) and recentrifuged. The removed supernatant and the bacteria pellets were collected, and their radioactivities were determined with a well γ-counter. The bacteria binding value was calculated by the following equation: Bacteria binding% = (cpm in precipitate − cpm in background)/(cpm in precipitate − cpm in background + cpm in supernatant) × 100%. The background tubes were incubations without bacteria added. To determine the specificity of 99mTcN-CPF2XT binding to bacteria, ciprofloxacin and CPF2XT were used as inhibitors. The bacteria were preincubated with 10 mg/mL ciprofloxacin or CPF2XT for 1 h at 37 °C; then the 99mTcN-CPF2XT complex was added and incubated for 1 h at 37 °C. The bacteria binding value was calculated as above. The results are expressed as the mean ± SD.
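The two quantities above are simple ratios of counts. As a rough illustration, they can be computed as follows (all count values here are hypothetical, invented purely for demonstration, not measured data):

```python
import math

# Hypothetical helpers illustrating the log P and bacteria-binding equations
# given in the text; all count (cpm) values below are invented examples.

def log_partition_coefficient(cpm_octanol, cpm_buffer):
    """log P = log10(cpm in 1-octanol / cpm in buffer)."""
    return math.log10(cpm_octanol / cpm_buffer)

def bacteria_binding_percent(cpm_precipitate, cpm_supernatant, cpm_background):
    """Bacteria binding% = (P - B) / (P - B + S) * 100."""
    bound = cpm_precipitate - cpm_background
    return bound / (bound + cpm_supernatant) * 100.0

print(round(log_partition_coefficient(500.0, 1500.0), 2))        # negative log P -> hydrophilic
print(round(bacteria_binding_percent(8000.0, 2000.0, 100.0), 1))  # percent of activity bound
```

A negative log P, as computed in the first call, corresponds to the hydrophilic behavior reported for 99mTcN-CPF2XT.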
Biodistribution Study in Bacterial Infected Mice

Animal studies were performed in accordance with the Regulations on Laboratory Animals of Beijing Municipality and the guidelines of the Ethics Committee of Beijing Normal University. A 0.1 mL suspension (1 × 10^8/mL S. aureus) was injected into the left thigh muscle of Kunming female mice weighing 18-22 g. About three to five days later, 0.1 mL of the complex (7.4 × 10^5 Bq) was injected intravenously into each mouse via the tail vein. The mice were sacrificed at 0.5, 2, and 4 h post-injection. The infected section, normal muscle in the right thigh, blood, and other organs of interest were collected, weighed, and counted for radioactivity. The results were expressed as a percentage of injected dose per gram of tissue (% ID/g).

Biodistribution Study in Turpentine-Induced Abscess Mice

A total of 0.1 mL of turpentine was injected into the left thigh muscle of Kunming female mice weighing 18-22 g. About seven days later, 0.1 mL of the complex (7.4 × 10^5 Bq) was injected intravenously into each mouse via the tail vein. The mice were sacrificed at 4 h post-injection. The abscess, normal muscle in the right thigh, blood, and other organs of interest were collected, weighed, and counted for radioactivity. The results were expressed as a percentage of injected dose per gram of tissue (% ID/g).

Conclusions

In this work, ciprofloxacin xanthate (CPF2XT) was successfully prepared and labeled with the 99mTcN precursor to obtain 99mTcN-CPF2XT in high yield. 99mTcN-CPF2XT showed hydrophilicity, good stability, and specificity to bacteria. The biodistribution results in mice suggested that 99mTcN-CPF2XT exhibited high infection uptake and a high target-to-non-target ratio. Compared to the results in turpentine-induced abscess mice, the complex was able to distinguish infection from sterile inflammation.
By comparison, the target/non-target ratio of 99mTcN-CPF2XT was higher than that of 99mTcN-CPFXDTC, and the liver and lung uptakes of the former were much lower than those of the latter. In the present study, 99mTcN-CPF2XT showed its potential as a good bacterial infection imaging tracer, justifying further investigations.
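The target/non-target ratios quoted above derive from the %ID/g values defined in the biodistribution methods. A minimal sketch of that bookkeeping (the counts, tissue weights, and injected dose below are hypothetical illustration values):

```python
# Hypothetical sketch of the %ID/g calculation used in biodistribution studies;
# the counts, weights, and injected dose are invented for illustration only.

def percent_id_per_gram(tissue_cpm, tissue_weight_g, injected_dose_cpm):
    """%ID/g = (tissue counts / injected-dose counts) * 100 / tissue weight."""
    return tissue_cpm / injected_dose_cpm * 100.0 / tissue_weight_g

infected = percent_id_per_gram(440.0, 0.20, 100000.0)  # 2.2 %ID/g
normal = percent_id_per_gram(50.0, 0.20, 100000.0)     # 0.25 %ID/g
print(round(infected / normal, 1))                     # target/non-target ratio: 8.8
```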
/**
* @author szgooru Created On: 01-Feb-2017
*/
public class UpdatePreferenceHandler implements DBHandler {
private final ProcessorContext context;
private static final Logger LOGGER = LoggerFactory.getLogger(UpdatePreferenceHandler.class);
private static final String REQ_KEY_STANDARD_PREF = "standard_preference";
private static final String REQ_KEY_LANGUAGE_PREF = "language_preference";
private AJEntityUserPreference userPreference;
UpdatePreferenceHandler(ProcessorContext context) {
this.context = context;
}
@Override
public ExecutionResult<MessageResponse> checkSanity() {
// TODO: validation of the request JSON
// The user should not be anonymous
if (context.userId() == null || context.userId().isEmpty() || context.userId()
.equalsIgnoreCase(MessageConstants.MSG_USER_ANONYMOUS)) {
LOGGER.warn("Anonymous or invalid user attempting to update preference");
return new ExecutionResult<>(
MessageResponseFactory.createForbiddenResponse("Not allowed"),
ExecutionResult.ExecutionStatus.FAILED);
}
LOGGER.debug("checkSanity() OK");
return new ExecutionResult<>(null, ExecutionStatus.CONTINUE_PROCESSING);
}
@Override
public ExecutionResult<MessageResponse> validateRequest() {
LazyList<AJEntityUsers> users = AJEntityUsers
.findBySQL(AJEntityUsers.VALIDATE_USER, context.userId());
if (users == null || users.isEmpty()) {
LOGGER.warn("user not found in database");
return new ExecutionResult<>(
MessageResponseFactory.createNotFoundResponse("user not found in database"),
ExecutionStatus.FAILED);
}
this.userPreference = AJEntityUserPreference.findById(UUID.fromString(context.userId()));
return validateRequestPayload();
}
@Override
public ExecutionResult<MessageResponse> executeRequest() {
if (this.userPreference != null) {
return doUpdate();
}
LOGGER.debug("no existing user preference found, creating new");
userPreference = new AJEntityUserPreference();
userPreference.setUserId(context.userId());
userPreference.setPreferenceSettings(context.request().toString());
if (!userPreference.insert()) {
LOGGER.error("error while inserting user preference settings");
return new ExecutionResult<>(
MessageResponseFactory
.createInternalErrorResponse("Error while saving user preference settings"),
ExecutionResult.ExecutionStatus.FAILED);
}
LOGGER.debug("user preference settings stored successfully");
return new ExecutionResult<>(MessageResponseFactory.createNoContentResponse(),
ExecutionResult.ExecutionStatus.SUCCESSFUL);
}
@Override
public boolean handlerReadOnly() {
return false;
}
private ExecutionResult<MessageResponse> doUpdate() {
this.userPreference.setPreferenceSettings(context.request().toString());
if (!this.userPreference.save()) {
LOGGER.error("error while updating user preference settings");
return new ExecutionResult<>(
MessageResponseFactory
.createInternalErrorResponse("Error while saving user preference settings"),
ExecutionResult.ExecutionStatus.FAILED);
}
LOGGER.debug("user preference settings updated successfully");
return new ExecutionResult<>(MessageResponseFactory.createNoContentResponse(),
ExecutionResult.ExecutionStatus.SUCCESSFUL);
}
private ExecutionResult<MessageResponse> validateRequestPayload() {
try {
JsonObject standardPreferences = this.context.request()
.getJsonObject(REQ_KEY_STANDARD_PREF, null);
if (standardPreferences != null && !standardPreferences.isEmpty()) {
for (String subject : standardPreferences.fieldNames()) {
Long count = Base.count(AJEntityTaxonomySubject.TABLE,
AJEntityTaxonomySubject.FETCH_SUBJECT_BY_GUT_AND_FWCODE, subject,
standardPreferences.getString(subject));
if (count < 1) {
LOGGER.warn("invalid subject preference provided '{}' and framework '{}'", subject,
standardPreferences.getString(subject));
return new ExecutionResult<>(MessageResponseFactory.createInvalidRequestResponse(
"Invalid subject preference provided"), ExecutionResult.ExecutionStatus.FAILED);
}
}
}
JsonArray languagePreference = this.context.request()
.getJsonArray(REQ_KEY_LANGUAGE_PREF, null);
if (languagePreference != null && !languagePreference.isEmpty()) {
Set<Integer> languageIds = new HashSet<>();
languagePreference.forEach(langId -> {
languageIds.add(Integer.valueOf(langId.toString()));
});
if (languagePreference.size() != languageIds.size()) {
LOGGER.warn("non unique language preferences provided, aborting");
return new ExecutionResult<>(
MessageResponseFactory
.createInvalidRequestResponse("non unique language preference provided"),
ExecutionResult.ExecutionStatus.FAILED);
}
Long count = Base
.count(AJEntityGooruLanguage.TABLE, AJEntityGooruLanguage.FETCH_LANGUAGES_BY_IDS,
HelperUtility.toPostgresArrayInt(languageIds));
if (count != languageIds.size()) {
LOGGER.warn("invalid language preferences provided, aborting");
return new ExecutionResult<>(
MessageResponseFactory
.createInvalidRequestResponse("Invalid language preference provided"),
ExecutionResult.ExecutionStatus.FAILED);
}
}
return new ExecutionResult<>(null, ExecutionStatus.CONTINUE_PROCESSING);
} catch (Throwable t) {
LOGGER.error("unable to validate request", t);
return new ExecutionResult<>(
MessageResponseFactory.createInternalErrorResponse("unable to validate request"),
ExecutionResult.ExecutionStatus.FAILED);
}
}
}
import { Seconds } from "../type/Units";
import { OfflineContext } from "./OfflineContext";
import { ToneAudioBuffer } from "./ToneAudioBuffer";
/**
* Generate a buffer by rendering all of the Tone.js code within the callback using the OfflineAudioContext.
* The OfflineAudioContext is capable of rendering much faster than real time in many cases.
* The callback function also passes in an offline instance of [[Context]] which can be used
* to schedule events along the Transport.
* @param callback All Tone.js nodes which are created and scheduled within this callback are recorded into the output Buffer.
* @param duration the amount of time to record for.
 * @return A promise which resolves with the ToneAudioBuffer of the recorded output.
* @example
* import { Offline, Oscillator } from "tone";
* // render 2 seconds of the oscillator
* Offline(() => {
* // only nodes created in this callback will be recorded
* const oscillator = new Oscillator().toDestination().start(0);
* }, 2).then((buffer) => {
* // do something with the output buffer
* console.log(buffer);
* });
* @example
* import { Offline, Oscillator } from "tone";
* // can also schedule events along the Transport
* // using the passed in Offline Transport
* Offline(({ transport }) => {
* const osc = new Oscillator().toDestination();
* transport.schedule(time => {
* osc.start(time).stop(time + 0.1);
* }, 1);
* // make sure to start the transport
* transport.start(0.2);
* }, 4).then((buffer) => {
* // do something with the output buffer
* console.log(buffer);
* });
* @category Core
*/
export declare function Offline(callback: (context: OfflineContext) => Promise<void> | void, duration: Seconds, channels?: number, sampleRate?: number): Promise<ToneAudioBuffer>;
ON THE ROLE OF THE INFLUENCE FUNCTION IN THE PERIDYNAMIC THEORY The influence function in the peridynamic theory is used to weight the contribution of all the bonds participating in the computation of volume-dependent properties. In this work, we use influence functions to establish relationships between bond-based and state-based peridynamic models. We also demonstrate how influence functions can be used to modulate nonlocal effects within a peridynamic model independently of the peridynamic horizon. We numerically explore the effects of influence functions by studying wave propagation in simple one-dimensional models and brittle fracture in three-dimensional models.
#pragma once
#ifndef _MY_HASH_TABLE_H_
#define _MY_HASH_TABLE_H_
#include"my_allocator.h"
#include"my_type_traits.h" //for is_pair_v
#include"my_iterator.h"
#include"my_vector.h"
#include<algorithm>//for std::is_permutation
#include<cmath> // for std::ceil
namespace mystl{
// non-pair (set-like) value type
template<typename T, bool = is_pair_v<T>>
struct hash_table_value_traits {
using key_type = T;
using mapped_type = T;
using value_type = T;
// for a set, key and value are the same object
static const key_type& get_key(const value_type& value)
{
return value;
}
static const value_type& get_value(const value_type& value)
{
return value;
}
};
// pair (map-like) value type
template<typename T>
struct hash_table_value_traits<T,true> {
using key_type = typename T::first_type;
using mapped_type = typename T::second_type;
using value_type = T;
// for a map, the key is the pair's first element and the value is the whole pair
static const key_type& get_key(const value_type& value)
{
return value.first;
}
static const value_type& get_value(const value_type& value)
{
return value;
}
};
template<typename T>
struct hash_table_node {
using node_ptr = hash_table_node*;
node_ptr next;
T value;
unsigned hashcode; // cache of the computed hash value to avoid recomputation
hash_table_node() :next(nullptr), value(), hashcode(0) {}
hash_table_node(const T& val) :next(nullptr), value(val), hashcode(0) {}
hash_table_node(const hash_table_node& other) :next(other.next), value(other.value), hashcode(other.hashcode) {}
hash_table_node(hash_table_node&& other) :next(other.next), value(std::move(other.value)), hashcode(other.hashcode) {}
};
template<typename HashTable>
struct hash_table_const_iterator :public mystl::iterator<mystl::forward_iterator_tag, typename HashTable::value_type> {
using value_type = typename HashTable::value_type;
using pointer = const value_type*;
using reference = const value_type&;
using node_ptr = hash_table_node<value_type>*;
using htb_ptr = HashTable*;
using self = hash_table_const_iterator;
node_ptr node_p;
htb_ptr htb_p; // pointer back to the hash table, used for iterator traversal
hash_table_const_iterator() :node_p(nullptr), htb_p(nullptr){}
hash_table_const_iterator(node_ptr np, htb_ptr hp) :node_p(np), htb_p(hp) {}
//hash_table_const_iterator(const self& other):node_p(other.node_p), htb_p(other.htb_p) {};
reference operator*() const
{
return node_p->value;
}
pointer operator->() const
{
return &(operator*());
}
self& operator++() {
// compute the bucket index of the current node
auto idx = node_p->hashcode & (htb_p->bucket_count() - 1);
node_p = htb_p->get_valid_node(node_p->next, idx);
return *this;
}
self operator++(int) {
self tmp = *this;
++*this;
return tmp;
}
bool operator==(const self& rhs) const { return (node_p == rhs.node_p); }
bool operator!=(const self& rhs) const { return (node_p != rhs.node_p); }
};
template<typename HashTable>
struct hash_table_iterator :public hash_table_const_iterator<HashTable> {
using value_type = typename HashTable::value_type;
using pointer = value_type*;
using reference = value_type&;
using node_ptr = hash_table_node<value_type>*;
using htb_ptr = HashTable*;
using self = hash_table_iterator;
using hash_table_const_iterator<HashTable>::node_p;
using hash_table_const_iterator<HashTable>::htb_p;
hash_table_iterator() = default;
hash_table_iterator(node_ptr np, htb_ptr hp) :hash_table_const_iterator<HashTable>(np, hp) {};
//hash_table_iterator(const self& other):hash_table_const_iterator<HashTable>(other.node_p, other.htb_p) {};
reference operator*() const
{
return node_p->value;
}
pointer operator->() const
{
return &(operator*());
}
self& operator++() {
// compute the bucket index of the current node
auto idx = node_p->hashcode & (htb_p->bucket_count() - 1);
node_p = htb_p->get_valid_node(node_p->next, idx);
return *this;
}
self operator++(int) {
self tmp = *this;
++*this;
return tmp;
}
};
// bucket (local) iterators
template<typename T>
struct const_bucket_iterator : public mystl::iterator<mystl::forward_iterator_tag, T> {
using node_ptr = hash_table_node<T>*;
using self = const_bucket_iterator;
using value_type = T;
using reference = const T&;
using pointer = const T*;
node_ptr node_p;
const_bucket_iterator() :node_p(nullptr) {}
const_bucket_iterator(node_ptr ptr) :node_p(ptr) {};
reference operator*() const
{
return node_p->value;
}
pointer operator->() const
{
return &(operator*());
}
self& operator++() {
ASSERT_EXPR(node_p != nullptr);
node_p = node_p->next;
return *this;
}
self operator++(int) {
self tmp = *this;
++*this;
return tmp;
}
bool operator==(const self& rhs) const { return (node_p == rhs.node_p); }
bool operator!=(const self& rhs) const { return (node_p != rhs.node_p); }
};
template<typename T>
struct bucket_iterator : public const_bucket_iterator<T> {
using node_ptr = hash_table_node<T>*;
using self = bucket_iterator;
using value_type = T;
using reference = T&;
using pointer = T*;
using const_bucket_iterator<T>::node_p;
bucket_iterator() = default;
bucket_iterator(node_ptr ptr) :const_bucket_iterator<T>(ptr) {};
reference operator*() const
{
return node_p->value;
}
pointer operator->() const
{
return &(operator*());
}
self& operator++() {
ASSERT_EXPR(node_p != nullptr);
node_p = node_p->next;
return *this;
}
self operator++(int) {
self tmp = *this;
++*this;
return tmp;
}
};
template<typename T, typename Hash, typename KeyEqual, typename Alloc = mystl::allocator<T>>
class hash_table {
public:
// when T is not a pair, key_type, mapped_type, and value_type are all the same
// when T is a pair, key_type is its first element, mapped_type its second, and value_type the pair itself
using value_traits = hash_table_value_traits<T>;
using key_type = typename value_traits::key_type;
using mapped_type = typename value_traits::mapped_type;
using value_type = typename value_traits::value_type;
using hash_func = Hash;
using key_equal = KeyEqual;
using allocator_type = Alloc;
using size_type = typename allocator_type::size_type;
using difference_type = typename allocator_type::difference_type;
using reference = value_type&;
using const_reference = const value_type&;
using pointer = typename allocator_type::pointer;
using const_pointer = typename allocator_type::const_pointer;
using iterator = hash_table_iterator<hash_table>;
using const_iterator = hash_table_const_iterator<hash_table>;
using local_iterator = bucket_iterator<value_type>;
using const_local_iterator = const_bucket_iterator<value_type>;
using node_type = hash_table_node<value_type>;
using node_ptr = node_type*;
using node_allocator = mystl::allocator<node_type>;
using data_allocator = Alloc;
allocator_type get_allocator() const { return data_allocator(); }
static constexpr size_t kInitBucketCount = 16; // default initial bucket count
static constexpr float kInitMaxLoadFactor = 0.75f; // default maximum load factor
static constexpr size_type kMaxBucketCount = 1 << 30; // maximum bucket count
// friends so the iterators can use hash_table internals
friend struct hash_table_iterator<hash_table>;
friend struct hash_table_const_iterator<hash_table>;
private:
size_type bucket_count_; // number of buckets
vector<node_ptr> buckets_; // the buckets
size_type node_count_; // number of elements
hash_func hash_; // hash function
key_equal key_equal_; // key equality predicate
float max_load_factor_; // maximum load factor (grow when node_count_ / buckets_.size() >= max_load_factor_)
private:
void copy_buckets(const vector<node_ptr>& other_buckets);
//https://www.zhihu.com/question/422840340
// XOR the high 16 bits into the low 16 bits to spread the hash value's entropy
unsigned hash_disturb(unsigned hashcode) const { return hashcode ^ (hashcode >> 16); }
// hash the key, then map the hash to a bucket index (the & trick requires the bucket count to be a power of two)
size_type bucket_index(const key_type& key) const { return hash_disturb(hash_(key)) & (bucket_count_ - 1); }
const key_type& node_key(node_ptr ptr) const { return value_traits::get_key(ptr->value); } // key stored in a node
template<typename ...Args>
node_ptr create_node(Args&&... args);
void destroy_node(node_ptr ptr);
void rehash() {
// double the bucket count each time
rehash(bucket_count_ * 2);
}
// find a valid node: if node_count_ is 0, return nullptr directly;
// if ptr is not nullptr, return ptr;
// otherwise search for the next valid node.
// ptr_idx: index of the bucket that contains ptr
node_ptr get_valid_node(node_ptr ptr, size_type ptr_idx) const;
public:
explicit hash_table(size_type bucket_count = kInitBucketCount, const hash_func& hash = hash_func(),
const key_equal key_eq = key_equal(), float max_load_factor = kInitMaxLoadFactor)
:bucket_count_(ge_near_pow2(bucket_count)),
buckets_(bucket_count_, nullptr),
node_count_(0),
hash_(hash),
key_equal_(key_eq),
max_load_factor_(max_load_factor)
{
}
hash_table(const hash_table& other);
hash_table(hash_table&& other) noexcept
:bucket_count_(other.bucket_count_),
buckets_(std::move(other.buckets_)),
node_count_(other.node_count_),
hash_(other.hash_),
key_equal_(other.key_equal_),
max_load_factor_(other.max_load_factor_)
{
other.bucket_count_ = 0;
other.node_count_ = 0;
}
hash_table& operator=(const hash_table& other);
hash_table& operator=(hash_table&& other) noexcept
{
if (this != &other) {
clear();
bucket_count_ = other.bucket_count_;
buckets_ = std::move(other.buckets_);
node_count_ = other.node_count_;
hash_ = other.hash_;
key_equal_ = other.key_equal_;
max_load_factor_ = other.max_load_factor_;
other.bucket_count_ = 0;
other.node_count_ = 0;
}
return *this;
}
~hash_table() {
clear();
}
public:
// iterators
iterator begin()noexcept
{
iterator ret(nullptr, this);
for (size_type i = 0; i < bucket_count_; ++i) {
if (buckets_[i] != nullptr) {
ret.node_p = buckets_[i];
break;
}
}
return ret;
}
const_iterator begin()const noexcept
{
const_iterator ret(nullptr, const_cast<hash_table*>(this));
for (size_type i = 0; i < bucket_count_; ++i) {
if (buckets_[i] != nullptr) {
ret.node_p = buckets_[i];
break;
}
}
return ret;
}
const_iterator cbegin()const noexcept { return begin(); }
iterator end()noexcept { return iterator(nullptr,this); };
const_iterator end()const noexcept { return const_iterator(nullptr,const_cast<hash_table*>(this)); }
const_iterator cend()const noexcept { return end(); }
// capacity
bool empty() const noexcept { return node_count_ == 0; }
size_type size() const noexcept { return node_count_; }
size_type max_size() const noexcept { return static_cast<size_type>(kMaxBucketCount * kInitMaxLoadFactor); }
public:
// modifiers
void clear();
//insert
std::pair<iterator, bool> insert_unique(const value_type& value);
std::pair<iterator, bool> insert_unique(value_type&& value);
// hint is where the search starts; returns an iterator to the inserted element, or to the element that prevented the insertion.
iterator insert_unique(const_iterator hint, const value_type& value);
iterator insert_unique(const_iterator hint, value_type&& value);
template< typename InputIt , std::enable_if_t<mystl::is_input_iter_v<InputIt>, int> = 0>
void insert_unique(InputIt first, InputIt last);
iterator insert_multi(const value_type& value);
iterator insert_multi(value_type&& value);
// hint is where the search starts; returns an iterator to the newly inserted element.
iterator insert_multi(const_iterator hint, const value_type& value);
iterator insert_multi(const_iterator hint, value_type&& value);
template<typename InputIt, std::enable_if_t<mystl::is_input_iter_v<InputIt>, int> = 0>
void insert_multi(InputIt first, InputIt last);
//emplace
template <typename ...Args>
std::pair<iterator, bool> emplace_unique(Args&& ...args);
template <typename ...Args>
iterator emplace_unique_hint(const_iterator hint, Args&& ...args);
template <typename ...Args>
iterator emplace_multi(Args&& ...args);
template <typename ...Args>
iterator emplace_multi_hint(const_iterator hint, Args&& ...args);
//erase
iterator erase(const_iterator pos);
size_type erase_multi(const key_type& key);
size_type erase_unique(const key_type& key);
iterator erase(const_iterator first, const_iterator last);
void swap(hash_table& other) noexcept;
public:
// lookup
size_type count(const key_type& key) const;
iterator find(const key_type& key);
const_iterator find(const key_type& key) const;
std::pair<iterator, iterator> equal_range(const key_type& key);
std::pair<const_iterator, const_iterator> equal_range(const key_type& key) const;
public:
// bucket interface
local_iterator begin(size_type n)
{
ASSERT_EXPR(n < bucket_count_);
return buckets_[n];
}
const_local_iterator begin(size_type n) const
{
ASSERT_EXPR(n < bucket_count_);
return buckets_[n];
}
const_local_iterator cbegin(size_type n) const
{
ASSERT_EXPR(n < bucket_count_);
return buckets_[n];
}
local_iterator end(size_type n)
{
ASSERT_EXPR(n < bucket_count_);
return nullptr;
}
const_local_iterator end(size_type n) const
{
ASSERT_EXPR(n < bucket_count_);
return nullptr;
}
const_local_iterator cend(size_type n) const
{
ASSERT_EXPR(n < bucket_count_);
return nullptr;
}
size_type bucket_count() const { return bucket_count_; }
constexpr size_type max_bucket_count() const { return kMaxBucketCount; }
// returns the number of elements in the bucket with index n.
size_type bucket_size(size_type n) const;
// returns the index of the bucket for key
size_type bucket(const key_type& key) const { return bucket_index(key); }
// hash policy
// returns the average number of elements per bucket.
float load_factor() const { return static_cast<float>(node_count_) / bucket_count_; }
// manages the maximum load factor (average elements per bucket); when the load factor
// exceeds this threshold, the container automatically increases the bucket count.
// returns the maximum load factor
float max_load_factor() const { return max_load_factor_; }
// sets the maximum load factor to ml (changing it is not recommended)
void max_load_factor(float ml) { max_load_factor_ = ml; }
// sets the bucket count to count and rehashes the container; if the requested count would
// make the load factor exceed the maximum (count < size() / max_load_factor()), the new
// bucket count is at least ge_near_pow2(size() / max_load_factor())
void rehash(size_type count);
// sets the bucket count to fit at least count elements without exceeding the maximum load factor, then rehashes.
void reserve(size_type count)
{
size_type suggest_bucket_count = ge_near_pow2(static_cast<size_type>(std::ceil(count / max_load_factor())));
rehash(suggest_bucket_count);
}
// observers
hash_func hash_function() const { return hash_; }
key_equal key_eq() const { return key_equal_; }
private:
// smallest power of two greater than or equal to n
size_type ge_near_pow2(size_type n) const {
if (n == 0)
return 1;
if ((n & (n - 1)) == 0)
return n;
while ((n & (n - 1)) != 0) {
n = n & (n - 1);
}
return n << 1;
}
};
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline void hash_table<T, Hash, KeyEqual, Alloc>::copy_buckets(const vector<node_ptr>& other_buckets)
{
// copy the nodes bucket by bucket
for (size_type i = 0; i < bucket_count_; ++i) {
node_ptr curr = other_buckets[i];
if (curr != nullptr) {
node_ptr new_node = create_node(curr->value);
buckets_[i] = new_node;
while (curr->next != nullptr) {
new_node->next = create_node(curr->next->value);
curr = curr->next;
new_node = new_node->next;
}
}
}
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename ...Args>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::node_ptr
hash_table<T, Hash, KeyEqual, Alloc>::create_node(Args&&... args)
{
node_ptr new_node = node_allocator::allocate(1);
mystl::construct(std::addressof(new_node->value), std::forward<Args>(args)...);
new_node->next = nullptr;
new_node->hashcode = hash_disturb(hash_(node_key(new_node))); // compute the hash value once at node creation
return new_node;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline void hash_table<T, Hash, KeyEqual, Alloc>::destroy_node(node_ptr ptr)
{
mystl::destroy_at(std::addressof(ptr->value));
node_allocator::deallocate(ptr, 1);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
typename hash_table<T, Hash, KeyEqual, Alloc>::node_ptr
hash_table<T, Hash, KeyEqual, Alloc>::get_valid_node(node_ptr ptr, size_type ptr_idx) const
{
if (node_count_ == 0) {
return nullptr;
}
if (ptr != nullptr) {
return ptr;
}
else {
// search the following buckets for the first valid node
while (++ptr_idx < bucket_count_) {
if (buckets_[ptr_idx] != nullptr) {
return buckets_[ptr_idx];
}
}
return nullptr;
}
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline hash_table<T, Hash, KeyEqual, Alloc>::hash_table(const hash_table& other)
:bucket_count_(other.bucket_count_),
buckets_(other.bucket_count_, nullptr),
node_count_(other.node_count_),
hash_(other.hash_),
key_equal_(other.key_equal_),
max_load_factor_(other.max_load_factor_)
{
copy_buckets(other.buckets_);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline hash_table<T, Hash, KeyEqual, Alloc>& hash_table<T, Hash, KeyEqual, Alloc>::operator=(const hash_table& other)
{
if (this != &other) {
clear();
bucket_count_ = other.bucket_count_;
buckets_.resize(bucket_count_, nullptr);
node_count_ = other.node_count_;
hash_ = other.hash_;
key_equal_ = other.key_equal_;
max_load_factor_ = other.max_load_factor_;
copy_buckets(other.buckets_);
}
return *this;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline void hash_table<T, Hash, KeyEqual, Alloc>::clear()
{
if (node_count_ > 0) {
for (size_type i = 0; i < bucket_count_; ++i) {
node_ptr curr = buckets_[i];
while (curr != nullptr) {
node_ptr next = curr->next;
destroy_node(curr);
curr = next;
}
buckets_[i] = nullptr;
}
node_count_ = 0;
}
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
std::pair<typename hash_table<T, Hash, KeyEqual, Alloc>::iterator, bool>
hash_table<T, Hash, KeyEqual, Alloc>::insert_unique(const value_type& value)
{
const key_type& key = value_traits::get_key(value);
size_type idx = bucket_index(key);
node_ptr new_node = nullptr; // create the node only when it is actually needed
if (buckets_[idx] == nullptr) {
// the target bucket is empty, so no duplicate key can exist
buckets_[idx] = new_node = create_node(value);
}
else {
node_ptr curr = buckets_[idx];
if (key_equal_(key, node_key(curr))) {
// same key as the first node in the bucket: return an iterator to it
return std::make_pair(iterator(curr, this), false);
}
// check every remaining node (including the last one) for a duplicate key
while (curr->next != nullptr) {
if (key_equal_(key, node_key(curr->next))) {
return std::make_pair(iterator(curr->next, this), false);
}
curr = curr->next;
}
curr->next = new_node = create_node(value);
}
}
if (++node_count_ > bucket_count_ * max_load_factor_) {
rehash();
}
// new_node is guaranteed to be non-null here
return std::make_pair(iterator(new_node, this), true);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
std::pair<typename hash_table<T, Hash, KeyEqual, Alloc>::iterator, bool>
hash_table<T, Hash, KeyEqual, Alloc>::insert_unique(value_type&& value)
{
const key_type& key = value_traits::get_key(value);
size_type idx = bucket_index(key);
node_ptr new_node = nullptr; // create the node only when it is actually needed
if (buckets_[idx] == nullptr) {
// the target bucket is empty, so no duplicate key can exist
buckets_[idx] = new_node = create_node(std::move(value));
}
else {
node_ptr curr = buckets_[idx];
if (key_equal_(key, node_key(curr))) {
// same key as the first node in the bucket: return an iterator to it
return std::make_pair(iterator(curr, this), false);
}
// check every remaining node (including the last one) for a duplicate key
while (curr->next != nullptr) {
if (key_equal_(key, node_key(curr->next))) {
return std::make_pair(iterator(curr->next, this), false);
}
curr = curr->next;
}
curr->next = new_node = create_node(std::move(value));
}
}
if (++node_count_ > bucket_count_ * max_load_factor_) {
rehash();
}
// new_node is guaranteed to be non-null here
return std::make_pair(iterator(new_node, this), true);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::insert_unique(const_iterator hint, const value_type& value)
{
// the hint is of little use here: to guarantee uniqueness we must compare from the first node of the bucket anyway
(void)hint;
return insert_unique(value).first;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::insert_unique(const_iterator hint, value_type&& value)
{
//hint is of little use here: to guarantee uniqueness we must compare from the first node of the bucket anyway
(void)hint;
return insert_unique(std::move(value)).first;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename InputIt, std::enable_if_t<mystl::is_input_iter_v<InputIt>, int>>
inline void hash_table<T, Hash, KeyEqual, Alloc>::insert_unique(InputIt first, InputIt last)
{
while (first != last) {
insert_unique(*first);
++first;
}
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::insert_multi(const value_type& value)
{
return emplace_multi(value);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::insert_multi(value_type&& value)
{
return emplace_multi(std::move(value));
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::insert_multi(const_iterator hint, const value_type& value)
{
return emplace_multi_hint(hint, value);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::insert_multi(const_iterator hint, value_type&& value)
{
return emplace_multi_hint(hint, std::move(value));
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename InputIt, std::enable_if_t<mystl::is_input_iter_v<InputIt>, int>>
inline void hash_table<T, Hash, KeyEqual, Alloc>::insert_multi(InputIt first, InputIt last)
{
while (first != last) {
emplace_multi(*first);
++first;
}
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename ...Args>
std::pair<typename hash_table<T, Hash, KeyEqual, Alloc>::iterator, bool>
hash_table<T, Hash, KeyEqual, Alloc>::emplace_unique(Args && ...args)
{
node_ptr new_node = create_node(std::forward<Args>(args)...);
const key_type& key = node_key(new_node);
size_type idx = new_node->hashcode & (bucket_count_ - 1);
if (buckets_[idx] == nullptr) {
//the target bucket is empty, so there is certainly no duplicate
buckets_[idx] = new_node;
}
else {
node_ptr curr = buckets_[idx];
node_ptr tail = curr;
while (curr != nullptr) {
    if (key_equal_(key, node_key(curr))) {
        destroy_node(new_node); //discard the node we created
        //a node with the same key already exists; return an iterator to it
        return std::make_pair(iterator(curr, this), false);
    }
    tail = curr;
    curr = curr->next;
}
//no duplicate found: append at the tail of the chain
//(the original loop stopped at the last node without comparing its key)
tail->next = new_node;
}
if (++node_count_ > bucket_count_ * max_load_factor_) {
rehash();
}
return std::make_pair(iterator(new_node, this), true);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename ...Args>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::emplace_unique_hint(const_iterator hint, Args && ...args)
{
//hint is of little use here: to guarantee uniqueness we must compare from the first node of the bucket anyway
(void)hint;
return emplace_unique(std::forward<Args>(args)...).first;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename ...Args>
typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::emplace_multi(Args && ...args)
{
//the insert always happens, so check for rehash first to avoid hashing the new node twice
if (++node_count_ > bucket_count_ * max_load_factor_) {
rehash();
}
node_ptr new_node = create_node(std::forward<Args>(args)...);
const key_type& key = node_key(new_node);
size_type idx = new_node->hashcode & (bucket_count_ - 1);
if (buckets_[idx] == nullptr) {
buckets_[idx] = new_node;
}
else {
node_ptr curr = buckets_[idx];
//if a node with the same key exists, insert right after it so equal keys stay adjacent
while (curr != nullptr) {
if (key_equal_(key, node_key(curr))) {
new_node->next = curr->next;
curr->next = new_node;
break;
}
curr = curr->next;
}
//no equal key found: insert at the head of the bucket
if (curr == nullptr) {
new_node->next = buckets_[idx];
buckets_[idx] = new_node;
}
}
return iterator(new_node, this);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
template<typename ...Args>
typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::emplace_multi_hint(const_iterator hint, Args && ...args)
{
node_ptr hint_node = hint.node_p;
//hint == end()
if (hint_node == nullptr) {
return emplace_multi(std::forward<Args>(args)...);
}
//the insert always happens, so check for rehash first to avoid hashing the new node twice
if (++node_count_ > bucket_count_ * max_load_factor_) {
rehash();
}
node_ptr new_node = create_node(std::forward<Args>(args)...);
const key_type& key = node_key(new_node);
size_type idx = new_node->hashcode & (bucket_count_ - 1);
const key_type& hint_key = node_key(hint_node);
//same key as hint: insert directly after hint_node
if (key_equal_(key, hint_key)) {
new_node->next = hint_node->next;
hint_node->next = new_node;
return iterator(new_node, this);
}
else {
    //search from the first node of the bucket for the insertion point:
    //if an equal key exists, insert right after it so equal keys stay adjacent;
    //otherwise insert at the head of the bucket
node_ptr curr = buckets_[idx];
if (curr == nullptr) {
buckets_[idx] = new_node;
}
else {
//if a node with the same key exists, insert right after it so equal keys stay adjacent
while (curr != nullptr) {
if (key_equal_(key, node_key(curr))) {
new_node->next = curr->next;
curr->next = new_node;
break;
}
curr = curr->next;
}
//no equal key found: insert at the head of the bucket
if (curr == nullptr) {
new_node->next = buckets_[idx];
buckets_[idx] = new_node;
}
}
}
return iterator(new_node, this);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::erase(const_iterator pos)
{
node_ptr ptr = pos.node_p;
size_type idx = ptr->hashcode & (bucket_count_ - 1);
node_ptr curr = buckets_[idx];
iterator ret(nullptr, this);
if (curr == ptr) {
    //the first node of the bucket is the target
    buckets_[idx] = curr->next;
    destroy_node(curr);
    --node_count_;
    //find the next valid node
    ret.node_p = get_valid_node(buckets_[idx], idx);
}
else {
node_ptr prev = curr;
curr = curr->next;
while (curr != nullptr) {
if (curr == ptr) {
    //link the previous node to the next node
    prev->next = curr->next;
    destroy_node(curr);
    --node_count_;
    //find the next valid node
    ret.node_p = get_valid_node(prev->next, idx);
break;
}
prev = curr;
curr = curr->next;
}
}
return ret;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::size_type
hash_table<T, Hash, KeyEqual, Alloc>::erase_multi(const key_type& key)
{
auto p = equal_range(key);
if (p.first.node_p != nullptr) {
    //count before erasing: erase invalidates the range iterators
    size_type n = mystl::distance(p.first, p.second);
    erase(p.first, p.second);
    return n;
}
return 0;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
typename hash_table<T, Hash, KeyEqual, Alloc>::size_type
hash_table<T, Hash, KeyEqual, Alloc>::erase_unique(const key_type& key)
{
size_type idx = bucket_index(key);
node_ptr curr = buckets_[idx];
if (curr == nullptr) {
return 0;
}
else {
if (key_equal_(key, node_key(curr))) {
//delete the first node of the bucket
buckets_[idx] = curr->next;
destroy_node(curr);
--node_count_;
return 1;
}
node_ptr prev = curr;
curr = curr->next;
while (curr != nullptr) {
if (key_equal_(key, node_key(curr))) {
//delete the first node in the chain whose key equals key
prev->next = curr->next;
destroy_node(curr);
--node_count_;
return 1;
}
prev = curr;
curr = curr->next;
}
}
return 0;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::erase(const_iterator first, const_iterator last)
{
if (first.node_p == last.node_p) {
    //empty range: nothing to erase, return an iterator to last
    return iterator(last.node_p, this);
}
node_ptr first_ptr = first.node_p;
node_ptr last_ptr = last.node_p;
size_type first_bucket_idx = first_ptr->hashcode & (bucket_count_ - 1);
//last == end() (null node) means erasing to the end of the table
size_type last_bucket_idx = (last_ptr == nullptr)
    ? bucket_count_
    : (last_ptr->hashcode & (bucket_count_ - 1));
//both iterators are in the same bucket
if (first_bucket_idx == last_bucket_idx) {
    //first, link the node preceding first_ptr to last_ptr
    //first_ptr is the first node in the bucket
    if (first_ptr == buckets_[first_bucket_idx]) {
buckets_[first_bucket_idx] = last_ptr;
}
else {
//find the node preceding first_ptr
node_ptr first_prev = buckets_[first_bucket_idx];
while (first_prev->next != first_ptr) {
first_prev = first_prev->next;
}
first_prev->next = last_ptr;
}
while (first_ptr != last_ptr) {
node_ptr next = first_ptr->next;
destroy_node(first_ptr);
--node_count_;
first_ptr = next;
}
}
else {
    //the iterators are in different buckets
    //first, set the next pointer of the node preceding first_ptr to nullptr
    if (first_ptr == buckets_[first_bucket_idx]) {
        //first_ptr is the first node in the bucket
buckets_[first_bucket_idx] = nullptr;
}
else {
//find the node preceding first_ptr
node_ptr first_prev = buckets_[first_bucket_idx];
while (first_prev->next != first_ptr) {
first_prev = first_prev->next;
}
first_prev->next = nullptr;
}
while (first_ptr != nullptr) {
node_ptr next = first_ptr->next;
destroy_node(first_ptr);
--node_count_;
first_ptr = next;
}
//destroy every node in the buckets between the first and the last bucket
for (++first_bucket_idx; first_bucket_idx != last_bucket_idx; ++first_bucket_idx) {
node_ptr curr = buckets_[first_bucket_idx];
while (curr != nullptr) {
node_ptr next = curr->next;
destroy_node(curr);
--node_count_;
curr = next;
}
buckets_[first_bucket_idx] = nullptr;
}
//last_bucket_idx is a valid index here (last != end())
if (first_bucket_idx != bucket_count_) {
node_ptr curr = buckets_[first_bucket_idx];
while (curr != last_ptr) {
node_ptr next = curr->next;
destroy_node(curr);
--node_count_;
curr = next;
}
buckets_[first_bucket_idx] = last_ptr;
}
}
return iterator(last_ptr, this);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline void hash_table<T, Hash, KeyEqual, Alloc>::swap(hash_table& other) noexcept
{
if (this != &other) {
mystl::swap(bucket_count_, other.bucket_count_);
mystl::swap(buckets_, other.buckets_);
mystl::swap(node_count_, other.node_count_);
mystl::swap(hash_, other.hash_);
mystl::swap(key_equal_, other.key_equal_);
mystl::swap(max_load_factor_, other.max_load_factor_);
}
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::size_type
hash_table<T, Hash, KeyEqual, Alloc>::count(const key_type& key) const
{
size_type idx = bucket_index(key);
size_type ret = 0;
node_ptr curr = buckets_[idx];
while (curr != nullptr) {
if (key_equal_(key, node_key(curr))) {
++ret;
}
curr = curr->next;
}
return ret;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::iterator
hash_table<T, Hash, KeyEqual, Alloc>::find(const key_type& key)
{
//delegate to the const version of find
const_iterator it = static_cast<const hash_table&>(*this).find(key);
return iterator(it.node_p, this);
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::const_iterator
hash_table<T, Hash, KeyEqual, Alloc>::find(const key_type& key) const
{
size_type idx = bucket_index(key);
const_iterator ret(nullptr, const_cast<hash_table*>(this));
node_ptr curr = buckets_[idx];
while (curr != nullptr) {
    if (key_equal_(key, node_key(curr))) {
        ret.node_p = curr;
        break; //stop at the first matching node instead of scanning the whole chain
    }
    curr = curr->next;
}
return ret;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline std::pair<typename hash_table<T, Hash, KeyEqual, Alloc>::iterator, typename hash_table<T, Hash, KeyEqual, Alloc>::iterator>
hash_table<T, Hash, KeyEqual, Alloc>::equal_range(const key_type& key)
{
//delegate to the const version of equal_range
auto p = static_cast<const hash_table&>(*this).equal_range(key);
return std::make_pair(iterator(p.first.node_p, this), iterator(p.second.node_p, this));
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
std::pair<typename hash_table<T, Hash, KeyEqual, Alloc>::const_iterator, typename hash_table<T, Hash, KeyEqual, Alloc>::const_iterator>
hash_table<T, Hash, KeyEqual, Alloc>::equal_range(const key_type& key) const
{
size_type idx = bucket_index(key);
const_iterator first(nullptr, const_cast<hash_table*>(this));
node_ptr curr = buckets_[idx];
while (curr != nullptr) {
if (key_equal_(key, node_key(curr))) {
    //record the first node whose key equals key
    if (first.node_p == nullptr) {
        first.node_p = curr;
    }
}
else if (first.node_p != nullptr) {
    //first was already set, so the run of equal keys has ended
    break;
}
curr = curr->next;
}
return std::make_pair(first, const_iterator(get_valid_node(curr, idx), const_cast<hash_table*>(this)));
}
/*=============== bucket interface ===============*/
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline typename hash_table<T, Hash, KeyEqual, Alloc>::size_type
hash_table<T, Hash, KeyEqual, Alloc>::bucket_size(size_type n) const
{
ASSERT_EXPR(n < bucket_count_);
node_ptr curr = buckets_[n];
size_type ret = 0;
while (curr != nullptr) {
++ret;
curr = curr->next;
}
return ret;
}
template<typename T, typename Hash, typename KeyEqual, typename Alloc>
inline void hash_table<T, Hash, KeyEqual, Alloc>::rehash(size_type count)
{
//minimum number of buckets needed to respect the load factor
size_type least_bucket_count = static_cast<size_type>(std::ceil(node_count_ / max_load_factor_));
if (count < least_bucket_count) {
count = ge_near_pow2(least_bucket_count);
}
else {
count = ge_near_pow2(count);
}
vector<node_ptr> new_buckets(count, nullptr);
if (!empty()) {
//TODO: the chain-splitting trick from Java 8's HashMap (doubtful it improves performance here)
for (size_type i = 0; i < bucket_count_; ++i) {
node_ptr curr = buckets_[i];
while (curr != nullptr) {
node_ptr next = curr->next;
size_type idx = curr->hashcode & (count - 1);
//insert at the head of the new bucket
curr->next = new_buckets[idx];
new_buckets[idx] = curr;
curr = next;
}
}
}
mystl::swap(buckets_, new_buckets);
bucket_count_ = count;
}
//non-member functions
template <typename T, typename Hash, typename KeyEqual, typename Alloc>
bool operator==(const hash_table<T, Hash, KeyEqual, Alloc>& lhs, const hash_table<T, Hash, KeyEqual, Alloc>& rhs)
{
using value_traits = typename hash_table<T, Hash, KeyEqual, Alloc>::value_traits;
if (lhs.size() != rhs.size()) {
return false;
}
else if (lhs.size() != 0) { //same node count and non-zero
    //start from the first valid node
    auto curr = lhs.begin().node_p;
while (curr != nullptr) {
auto& key = value_traits::get_key(curr->value);
auto p1 = lhs.equal_range(key);
auto p2 = rhs.equal_range(key);
if (mystl::distance(p1.first, p1.second) != mystl::distance(p2.first, p2.second)) {
return false;
}
//the range in rhs is not a permutation of the range in lhs
if (!std::is_permutation(p1.first, p1.second, p2.first)) {
    return false;
}
curr = p1.second.node_p;
}
}
return true;
}
template <typename T, typename Hash, typename KeyEqual, typename Alloc>
bool operator!=(const hash_table<T, Hash, KeyEqual, Alloc>& lhs, const hash_table<T, Hash, KeyEqual, Alloc>& rhs)
{
return !(lhs == rhs);
}
template <typename T, typename Hash, typename KeyEqual, typename Alloc>
void swap(hash_table<T, Hash, KeyEqual, Alloc>& lhs, hash_table<T, Hash, KeyEqual, Alloc>& rhs) noexcept {
    lhs.swap(rhs);
}
}
#endif // !_MY_HASH_TABLE_H_
Telemonitoramento de pacientes em um hospital cardiológico no serviço especializado de anticoagulação oral / Telemonitoring of patients in a cardiological hospital in the oral anticoagulation service. Introduction: The development of new technologies is increasingly present in the world's population, and the field of health is no different. The use of Electronic Patient Records (PEP) and the implementation of telemedicine services reflect a new level of care and monitoring in health. Objective: to report the experiences of cardiology nursing residents, working at a cardiology referral hospital in Pernambuco, during the telemonitoring of patients followed by the Anticoagulation Outpatient Clinic (TELE-INR). Methodology: descriptive study, of the experience-report type, carried out from June 2020 to August 2021, in partnership with the State Telehealth Center, linked to the State Health Department (NET-SES-PE), and the TELE-INR of the cardiology referral hospital in the state of Pernambuco. Results: the team responsible for TELE-INR strengthened its work plan based on carrying out teleconsultations and telemonitoring, for which processes were established for the execution of TELE-INR activities. According to available statistics, approximately 500 consultations are carried out monthly, unifying the work groups and taking into account the inclusion of new patients.
Conclusions: for the nursing residents, teleconsultation and nursing telemonitoring added to their academic and personal training, enabling direct contact with patients and their specific needs, generating moments of active health education, autonomy in care, and improvement in clinical decision-making.
import { config, createLocalVue, mount, RouterLinkStub, Wrapper } from '@vue/test-utils';
import VueRouter from 'vue-router';
import Vuetify from 'vuetify';
import LoginComponent from '../login-component';
import Login from '../Login.vue';
config.silent = false;
const localVue = createLocalVue();
localVue.use(Vuetify);
localVue.use(VueRouter);
describe('Login', () => {
let component: Wrapper<LoginComponent>;
beforeEach(() => {
component = mount<LoginComponent>(Login, {
localVue,
stubs: {
'router-link': RouterLinkStub,
'router-view': {
render: h => h('div')
}
}
});
});
it('email should be invalid', () => {
component.find('input').setValue('test');
expect(component.vm.valid).toBeFalsy();
});
it('email should be valid', () => {
component.find('input').setValue('<EMAIL>');
expect(component.vm.valid).toBeTruthy();
});
});
package main
import (
"encoding/json"
"fmt"
"log"
"github.com/codegangsta/cli"
"github.com/docker/libcontainer"
)
var specCommand = cli.Command{
Name: "spec",
Usage: "display the container specification",
Action: specAction,
}
func specAction(context *cli.Context) {
container, err := loadContainer()
if err != nil {
log.Fatal(err)
}
spec, err := getContainerSpec(container)
if err != nil {
log.Fatalf("Failed to get spec - %v\n", err)
}
fmt.Printf("Spec:\n%v\n", spec)
}
// getContainerSpec returns the container spec in JSON format.
func getContainerSpec(container *libcontainer.Container) (string, error) {
spec, err := json.MarshalIndent(container, "", "\t")
if err != nil {
return "", err
}
return string(spec), nil
}
from flask_restplus import Namespace, Resource

from db import dsl
api = Namespace('entities', description='Entities Endpoints')
class Entities(Resource):
def get(self, entity=None):
"""Get Entities"""
return (
dsl.file.entities.get(entity)
if entity else dsl.file.entities
)
api.add_resource(
Entities,
'/',
methods=['GET'])
api.add_resource(
Entities,
'/<string:entity>',
methods=['GET'])
use crate::TrowConfig;
use rocket::request::Request;
pub mod accepted_upload;
pub mod authenticate;
pub mod blob_deleted;
pub mod blob_reader;
pub mod content_info;
pub mod empty;
pub mod errors;
pub mod health;
pub mod html;
pub mod manifest_deleted;
pub mod manifest_history;
pub mod manifest_reader;
pub mod metrics;
pub mod readiness;
pub mod repo_catalog;
pub mod tag_list;
mod test_helper;
pub mod trow_token;
pub mod upload_info;
pub mod verified_manifest;
/// Gets the base URL e.g. <http://registry:8000> using the HOST value from the request header.
/// Falls back to the local hostname if the header is missing.
///
/// TODO: move this helper elsewhere.
fn get_base_url(req: &Request<'_>) -> String {
let host = get_domain_name(req);
let config = req
.rocket()
.state::<TrowConfig>()
.expect("TrowConfig not present!");
// Check if we have an upstream load balancer doing TLS termination
match req.headers().get("X-Forwarded-Proto").next() {
None => match config.tls {
None => format!("http://{}", host),
Some(_) => format!("https://{}", host),
},
Some(proto) => {
if proto == "http" {
warn!("Security issue! Upstream proxy is using HTTP");
}
format!("{}://{}", proto, host)
}
}
}
fn get_domain_name(req: &Request) -> String {
match req.headers().get("HOST").next() {
None => hostname::get()
.expect("Server has no name; cannot give clients my address")
.into_string()
.unwrap(),
Some(s_host) => s_host.to_string(),
}
}
"""
Created on Wed June 1, 2020
working as of June xx, 2020
@author: brian
"""
import sqlite3
import Ch2, Ch7, Ch9
# Create a connection to the sqlite server
#conn = sqlite3.connect('Weather.sqlite')
conn = sqlite3.connect('/home/pi/Python/Weather/Weather.sqlite')
cur = conn.cursor()
data2 = Ch2.weather()
print('channel 2')
cur.executemany('''
INSERT OR REPLACE INTO Main VALUES (NULL,?,?,?,?,?)''', data2)
conn.commit()
data7 = Ch7.weather()
print('channel 7')
cur.executemany('''
INSERT OR REPLACE INTO Main VALUES (NULL,?,?,?,?,?)''',
data7)
conn.commit()
data9 = Ch9.weather()
print('channel 9')
cur.executemany('''
INSERT OR REPLACE INTO Main VALUES (NULL,?,?,?,?,?)''',
data9)
conn.commit()
import matplotlib.pyplot as plt

def plot_sep_frac(sim, snap):
    """Plot the separation fraction for a simulation snapshot.

    Relies on get_hspec, colors and lss defined elsewhere in this module."""
    hspec = get_hspec(sim, snap)
    hspec.plot_sep_frac(color=colors[sim], ls=lss[sim])
    plt.xlabel(r"$v_\mathrm{90}$ (km s$^{-1}$)")
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.sshd.common.cipher.BuiltinCiphers;
import org.apache.sshd.common.mac.BuiltinMacs;
import org.joda.time.DateTime;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
import org.joda.time.format.ISODateTimeFormat;

/**
 * Various netconf resources.
 */
public final class NetconfResources {
public static final List<BuiltinCiphers> SSH_CIPHERS_PREFERENCE =
Collections.unmodifiableList(Arrays.asList(
BuiltinCiphers.aes128ctr,
BuiltinCiphers.aes192ctr,
BuiltinCiphers.aes256ctr
));
public static final List<BuiltinMacs> SSH_MAC_PREFERENCE =
Collections.unmodifiableList(Arrays.asList(
BuiltinMacs.hmacsha1,
BuiltinMacs.hmacsha256,
BuiltinMacs.hmacsha512
));
public static final String CLOSE_SUBSCRIPTION = "close-subscription";
public static final String CREATE_SUBSCRIPTION = "create-subscription";
public static final String TYPE = "type";
public static final String GET_CONFIG = "get-config";
public static final String GET_CONFIG_CONFIG = "config";
public static final String EDIT_CONFIG = "edit-config";
public static final String EDIT_CONFIG_CONFIG = "config";
public static final String RPC = "rpc";
public static final String ACTION = "action";
public static final String MESSAGE_ID = "message-id";
public static final String NETCONF = "NETCONF";
public static final String STATE_CHANGE = "STATE_CHANGE";
public static final String NETCONF_RPC_NS_1_0 = "urn:ietf:params:xml:ns:netconf:base:1.0";
public static final String NETCONF_BASE_CAP_1_0 = "urn:ietf:params:netconf:base:1.0";
public static final String NETCONF_BASE_CAP_1_1 = "urn:ietf:params:netconf:base:1.1";
public static final String CAPABILITY_TYPE = "CAPABILITY_TYPE";
public static final String DEFAULT_VALUE_1_1 = "1.1";
public static final String NETCONF_YANG_1 = "urn:ietf:params:xml:ns:yang:1";
public static final String NETCONF_NOTIFICATION = "urn:ietf:params:netconf:capability:notification:1.0";
public static final String NETCONF_NOTIFICATION_NS = "urn:ietf:params:xml:ns:netconf:notification:1.0";
public static final String IETF_NOTIFICATION_NS = "urn:ietf:params:xml:ns:yang:ietf-netconf-notifications";
public static final String BBF_NOTIFICATION_NS = "urn:broadband-forum-org:yang:bbf-software-image-management";
public static final String NC_NOTIFICATION_NS = "urn:ietf:params:xml:ns:netmod:notification";
public static final String NOTIFICATION_BUFFER_NS = "http://www.test-company.com/solutions/anv-notification-buffer";
public static final String NETCONF_WRITABLE_RUNNNG = "urn:ietf:params:netconf:capability:writable-running:1.0";
public static final String NETCONF_ROLLBACK_ON_ERROR = "urn:ietf:params:netconf:capability:rollback-on-error:1.0";
public static final String WITH_DEFAULTS_NS = "urn:ietf:params:xml:ns:yang:ietf-netconf-with-defaults";
public static final String NOTIFICATION_INTERLEAVE = "urn:ietf:params:netconf:capability:interleave:1.0";
public static final String NOTIFICATION = "notification";
public static final String XMLNS = "xmlns";
public static final String FILTER = "filter";
public static final String SUBTREE_FILTER = "subtree";
public static final String DEFAULT_OPERATION = "default-operation";
public static final String TEST_OPTION = "test-option";
public static final String ERROR_OPTION = "error-option";
public static final String DATA_SOURCE = "source";
public static final String DATA_TARGET = "target";
public static final String EDIT_CONFIG_OPERATION = "operation";
public static final String COPY_CONFIG = "copy-config";
public static final String SRC = "src";
public static final String DELETE_CONFIG = "delete-config";
public static final String LOCK = "lock";
public static final String UNLOCK = "unlock";
public static final String GET = "get";
public static final String WITH_DELAY_NS = "http://www.test-company.com/solutions/anv-test-netconf-extensions";
public static final String WITH_DELAY = "with-delay";
public static final String EXTENSION_NS = "http://www.test-company.com/solutions/netconf-extensions";
public static final String NC_STACK_NS = "urn:bbf:yang:obbaa:netconf-stack";
public static final String SYSTEM_STATE_NS = "urn:ietf:params:xml:ns:yang:ietf-system";
public static final String SYSTEM_STATE = "system-state";
public static final String SYSTEM_STATE_NAMESPACE = "urn:ietf:params:xml:ns:yang:ietf-system";
public static final String CLOCK = "clock";
public static final String SYS_CURRENT_DATE_TIME = "sys:current-datetime";
public static final String CURRENT_DATE_TIME = "current-datetime";
public static final String DEPTH = "depth";
public static final String FIELDS = "fields";
public static final String DATA_NODE = "data-node";
public static final String ATTRIBUTE = "attribute";
public static final String WITH_DEFAULTS = "with-defaults";
public static final String CLOSE_SESSION = "close-session";
public static final String KILL_SESSION = "kill-session";
public static final String SESSION_ID = "session-id";
public static final String STREAMS = "streams";
public static final String STREAM = "stream";
public static final String WRITABLE_RUNNING = ":writable-running";
public static final String HELLO = "hello";
public static final String CAPABILITIES = "capabilities";
public static final String CAPABILITY = "capability";
public static final String RPC_EOM_DELIMITER = "]]>]]>";
public static final String RPC_CHUNKED_DELIMITER = "\n##\n";
public static final String EOM_HANDLER = "EOM_HANDLER";
public static final String CHUNKED_HANDLER = "CHUNKED_HANDLER";
public static final String CHUNK_SIZE = "CHUNK_SIZE";
public static final String MAXIMUM_SIZE_OF_CHUNKED_MESSAGES = "MAXIMUM_SIZE_OF_CHUNKED_MESSAGES";
public static final String RPC_REPLY = "rpc-reply";
public static final String OK = "ok";
public static final String RPC_REPLY_DATA = "data";
public static final String RPC_ERROR = "rpc-error";
public static final String RPC_ERROR_TYPE = "error-type";
public static final String RPC_ERROR_TAG = "error-tag";
public static final String RPC_ERROR_SEVERITY = "error-severity";
public static final String RPC_ERROR_PATH = "error-path";
public static final String RPC_ERROR_MESSAGE = "error-message";
public static final String RPC_ERROR_APP_TAG = "error-app-tag";
public static final String RPC_ERROR_INFO = "error-info";
public static final String NONE = "NONE";
public static final String URL = "url";
public static final String NETCONF_SUBSYSTEM_NAME = "netconf";
public static final int CALL_HOME_IANA_PORT_TLS = 4335;
public static final Long DEFAULT_CONNECTION_TIMEOUT = 100000L;
public static final int DEFAULT_SSH_CONNECTION_PORT = 830;
public static final String COPY_CONFIG_SRC_CONFIG = "config";
public static final String REQUEST_LOG_STMT = "Got request from %s/%s ( %s ) session-id %s \n %s \n"
+ "---------------------------------";
public static final String RESPONSE_LOG_STMT = "Sending response to %s/%s ( %s ) session-id %s\n %s \n"
+ "---------------------------------";
public static final String NOTIFICATION_LOG_STMT = "Got notification for %s stream: \n %s" + "---------------------------------";
public static final String CREATESUBSCRIPTION_LOG_STMT = "Create subscription request: \n %s" + "---------------------------------";
public static final String SUFFIX = "_STREAM_LOGGER";
public static final String UNCLASSIFIED_NOTIFICATIONS = "unclassified notifications";
public static final DateTimeFormatter DATE_TIME_FORMATTER = ISODateTimeFormat.dateTimeNoMillis();
public static final String IMPLIED = "implied";
// This parameter needs to be moved to configuration
public static int RETRY_LIMIT_REVERSE_SSH = 3;
public static final String HEARTBEAT_INTERVAL = "hearbeat-interval";
public static final String NAME = "name";
public static final String DESCRIPTION = "description";
public static final String REPLAY_SUPPORT = "replaySupport";
public static final String REPLAY_LOG_CREATION_TIME = "replayLogCreationTime";
public static final String START_TIME = "startTime";
public static final String STOP_TIME = "stopTime";
public static final String EVENT_TIME = "eventTime";
public static final String DATA_STORE = "datastore";
public static final String CHANGED_BY = "changed-by";
public static final String USER_NAME = "username";
public static final String SOURCE_HOST = "source-host";
public static final String EDIT = "edit";
public static final String TARGET = "target";
public static final String OPERATION = "operation";
public static final String CHANGED_LEAF = "changed-leaf";
public static final String INSERT = "insert";
public static final String KEY = "key";
public static final String VALUE = "value";
public static final String REPLAY_COMPLETE = "replayComplete";
public static final String NOTIFICATION_COMPLETE = "notificationComplete";
public static final String CONFIG_CHANGE_NOTIFICATION = "netconf-config-change";
public static final String STATE_CHANGE_NOTIFICATION = "state-change-notification";
public static final String NC_STATE_CHANGE_NOTIFICATION = "netconf-state-change";
public static final String STATE_CHANGE_VALUE = "value";
public static final String CHANGES = "changes";
public static final String INTERLEAVE = "interleave";
public static final String CONFIG_CHANGE_STREAM = "CONFIG_CHANGE";
public static final String SYSTEM_STREAM = "SYSTEM";
public static final String OPER_STATE_CHANGE = "oper-state-change";
public static final String OLD_OPER_STATUS = "old-oper-status";
public static final String NEW_OPER_STATUS = "new-oper-status";
public static final String YANG_NAMESPACE = "urn:ietf:params:xml:ns:yang:ietf-yang-types";
/**
* ietf-yang-types.yang: typedef date-and-time { type string { pattern
* '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[\+\-]\d{2}:\d{2})'; }
*/
public static final DateTimeFormatter DATE_TIME_WITH_TZ = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ss.SSSZZ");
public static final DateTimeFormatter DATE_TIME_WITH_TZ_WITHOUT_MS = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ssZZ");
public static final Pattern DATE_TIME_WITH_TZ_WITH_MS_PATTERN = Pattern.compile("\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(\\.\\d+)(Z|[\\+\\-]\\d{2}:\\d{2})");
public static final Pattern DATE_TIME_WITH_TZ_WITHOUT_MS_PATTERN = Pattern.compile("\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(Z|[\\+\\-]\\d{2}:\\d{2})");
public static final String IMPLICATION_CHANGE = "--automatic--";
public static final String NC_NBI_CLIENT_IDLE_CONNECTION_TIMEOUT_MS = "NC_NBI_CLIENT_IDLE_CONNECTION_TIMEOUT_MS";
public static DateTime parseDateTime(String dateTimeStr){
return ISODateTimeFormat.dateTimeParser().parseDateTime(dateTimeStr);
}
public static String printWithoutMillis(DateTime dateTime){
return DATE_TIME_WITH_TZ_WITHOUT_MS.print(dateTime);
}
public static String printWithMillis(DateTime dateTime){
return DATE_TIME_WITH_TZ.print(dateTime);
}
}
Benefit Without Cost in a Mechanics Laboratory This paper describes, in some detail, a very simple building block experiment designed to motivate students towards the mastery of the basic principles of stability, dimensional similarity and the interpretation of practical measurements. The experiment consumes very little of either money, space, or time and can easily be adapted to challenge students of widely different abilities. |
NEW YORK (Reuters) - Cisco Systems Inc’s announcement on Wednesday that it plans to lay off 5,500 employees is unlikely to be the last round of Silicon Valley pink slips as hardware companies struggle to keep up with rapid technology shifts, analysts and recruiters said.
Companies that traditionally have made most of their money selling computers, chips, servers, routers and other equipment are especially vulnerable, analysts say, as mobile applications and cloud computing become increasingly important.
The Cisco layoffs come in the wake of Intel’s announcement in April that it was laying off 12,000 workers. Dell Inc said in January it had shed 10,000 jobs and is expected to make further cuts after it closes a $67 billion deal to acquire data storage company EMC Corp.
So far this year, technology companies in the United States have shed about 63,000 jobs, according to outplacement consultancy Challenger, Gray & Christmas, Inc.
Chowdhry said he expects job cuts to rise drastically as more companies subscribe to “super cloud” services from the likes of Amazon.com Inc and Microsoft Corp. These services manage hardware, software, networks and databases and eliminate the need for workers to manage various technology layers, Chowdhry said.
In January, Chowdhry estimated that layoffs in the tech industry would hit 330,000 this year. On Wednesday, he said he had raised his estimate to 370,000. Some other analysts said that forecast was too bleak.
IBM Corp, Hewlett Packard Enterprise Co, Oracle Corp and Dell Inc could be the next to shed workers, analysts said.
Hewlett Packard Enterprise, Dell and Oracle declined comment and IBM could not be immediately reached for comment.
Cisco and other old-guard technology companies have been pursuing a challenging shift to software-oriented services. Margins in software services are higher than hardware because they bring recurring revenue and there are “fewer people involved on the cost side,” said Roger Kay, an analyst at Endpoint Technologies Associates.
That could mean more job cuts. Silicon Valley job recruiters offered mixed views about the fate of hardware engineers laid off at Cisco and other tech firms.
“Nobody wants to be laid off but if job elimination is going to happen, 2016 is not a bad time for it to happen,” said John Reed, Senior Executive Director of the tech recruitment firm Robert Half Technologies.
Still, recruiters said, hardware engineers may need to be flexible and willing to retrain if they want to find work. |
CAIRO • The flight track of EgyptAir Flight MS804 indicated that it crashed halfway between Crete and Egypt, which could mean it landed on what scientists refer to as the Mediterranean Ridge.
The ridge has been pushed upwards by the African plate of the earth's crust sliding under the Aegean Sea, deforming and crumbling the seafloor, said Mr William Ryan, a scientist at the Lamont-Doherty Earth Observatory at Columbia University who has studied the Mediterranean seafloor.
The water there is about 2.4km deep, and picking out wreckage at the bottom from among bumps, which are perhaps 15m to 30m in size, could be complex. If the plane crashed farther to the south, the wreckage would lie on a smoother plain at a depth between 2.3km and 2.7km, he said.
In that case, the search would go faster - and the much-desired answer to what caused the crash could come quicker.
The plane was carrying 56 passengers, including a child and two infants, and 10 crew, when it crashed on Thursday. They included 30 Egyptian and 15 French nationals, along with citizens of 10 other countries.
Egypt's navy, with help from French and other vessels, has been searching an area north of Alexandria, just south of where the signal from the plane was lost early on Thursday.
The Egyptian navy found human remains, wreckage and the personal belongings of passengers floating in the Mediterranean, about 290km north of Alexandria, on Friday. An army spokesman yesterday published pictures of the recovered items - which included blue debris with EgyptAir markings, seat fabric with designs in the airline's colours, and a yellow life jacket - on its official Facebook page.
Analysis of the debris is likely to be key to determining what happened to the flight.
EgyptAir Holding Company chairman Safwat Moslem said that the priority was finding passengers' remains and the flight recorders of the ill-fated plane, which will stop emitting a signal in a month when the batteries run out.
"The families want the bodies. That is what concerns us. The army is working on this. This is what we are focusing on," he said.
A French patrol boat carrying equipment capable of tracing the plane's black boxes is expected to reach the scene by today or tomorrow. |
/**
* May the build success be with you
* With great problems, comes great help from @guilhermesteves
*/
public class StaffHistoryDAOImpl extends SimpleDAOImpl<StaffHistory> implements StaffHistoryDAO {
@Override
public StaffHistory loadByAuthor(String author) {
MongoCollection collection = getCollection(StaffHistory.class);
return collection.findOne("{author : #}", author).as(StaffHistory.class);
}
} |
Some natural peptides, referred to as cytomedins, were isolated from different organs: cortexin and epithalamin (both from the brain), cordialin (heart), hepalin (liver) and thymalin (thymus). They were tested for stimulating effects on the growth, in organotypic culture, of different tissue explants taken from 3-day-old rats. These peptides exerted an obvious stimulating effect on the growth of the cultured explants, compared to the control, at concentrations of 100, 50, 50, 100 and 5 ng/ml, respectively. Thus, these cytomedins may be used in clinical practice for stimulating reparative processes in the appropriate tissues.
A Preliminary Study on Physiological and Molecular Effects of Iron Deficiency in Fuji/Chistock 1 In order to obtain experimental data on apple rootstocks with iron-efficient genotypes capable of improving scion resistance to iron deficiency, this experiment examined the physiological and molecular characteristics of Fuji/Chistock 1 (F/C) under different iron conditions and compared it to Fuji/M. baccata (F/B). F/C was less sensitive to iron deficiency than F/B. F/B showed chlorosis after 25 days under iron-deficient conditions, but F/C showed no phenotypic changes, even after 40 days. The shoot growth and leaf area of F/C were respectively 5 cm and 1000 mm2 higher than those of F/B, under both iron-deficient and iron-sufficient conditions. The young-leaf chlorophyll and active iron of F/C were 5 SPAD and 5 mg kg−1 higher than those of F/B, in both iron-deficient and iron-sufficient conditions. The expression of YSL5 and CS1 showed the same pattern. The enhanced expression of iron transport genes may be one explanation for these findings.
package main
import (
"fmt"
"github.com/Knetic/govaluate"
"math"
"reflect"
"strconv"
"strings"
)
type Expression struct {
E *govaluate.EvaluableExpression
A string // aggregate function name
N [2]int // for A == "top"
C [3]string // for call()
}
var predefinedFunctions = map[string]govaluate.ExpressionFunction{
"min": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
b := args[1].(float64)
return math.Min(a, b), nil
},
"max": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
b := args[1].(float64)
return math.Max(a, b), nil
},
"pow": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
b := args[1].(float64)
return math.Pow(a, b), nil
},
"sqrt": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Sqrt(a), nil
},
"round": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Round(a), nil
},
"isNaN": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.IsNaN(a), nil
},
"ceil": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Ceil(a), nil
},
"floor": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Floor(a), nil
},
"exp": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Exp(a), nil
},
"exp2": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Exp2(a), nil
},
"abs": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Abs(a), nil
},
"log": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Log(a), nil
},
"log2": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Log2(a), nil
},
"log10": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.Log10(a), nil
},
"isInf": func(args ...interface{}) (interface{}, error) {
a := args[0].(float64)
return math.IsInf(a, -1) || math.IsInf(a, 1), nil
},
"strlen": func(args ...interface{}) (interface{}, error) {
length := len(args[0].(string))
return float64(length), nil
},
}
func ParseExpr(ln string, expr string, name string, params map[string]interface{}, valueTmpl interface{}, path string) (res *Expression, eres error) {
var a string
var n [2]int
if strings.HasPrefix(expr, "sum(") {
a = "sum"
expr = expr[4 : len(expr)-1]
} else if strings.HasPrefix(expr, "len(") {
a = "len"
expr = expr[4 : len(expr)-1]
} else if strings.HasPrefix(expr, "mean(") {
a = "mean"
expr = expr[5 : len(expr)-1]
} else if strings.HasPrefix(expr, "std(") {
a = "std"
expr = expr[4 : len(expr)-1]
} else if strings.HasPrefix(expr, "top(") {
expr = expr[4 : len(expr)-1]
fields := split(expr, ",")
if len(fields) > 1 {
expr = fields[0]
i, err := strconv.Atoi(fields[len(fields)-1])
if err != nil {
eres = fmt.Errorf("invalid top expression on line " + ln + ": " + expr + ": bad top length")
return
}
a = "top"
n[1] = i
if len(fields) > 2 {
i, err := strconv.Atoi(fields[len(fields)-2])
if err != nil {
eres = fmt.Errorf("invalid top expression on line " + ln + ": " + expr + ": bad top length")
return
}
if n[1]*i < 0 {
n[0] = i
}
}
} else {
eres = fmt.Errorf("invalid top expression on line " + ln + ": " + expr + ": missing top length")
return
}
} else if strings.HasPrefix(expr, "call(") {
var m string
var f string
var p string
var tmp = map[string]govaluate.ExpressionFunction{
"call": func(args ...interface{}) (res interface{}, eres error) {
if len(args) < 2 {
eres = fmt.Errorf("module name and function name required")
return
}
m = args[0].(string)
f = args[1].(string)
if len(args) > 2 {
p = args[2].(string)
}
if m == "" || f == "" {
eres = fmt.Errorf("module name and function name required")
return
}
res, eres = CallPy(m, f, p, nil, path)
if res == nil {
if eres == nil {
eres = fmt.Errorf("it must return a float number or a name/value tuple list")
}
return
}
return
},
}
e, err := govaluate.NewEvaluableExpressionWithFunctions(expr, tmp)
if err != nil {
eres = fmt.Errorf("invalid " + name + " expression on line " + ln + ": " + expr + ": " + err.Error())
return
}
_, err = e.Evaluate(nil)
if err != nil {
eres = fmt.Errorf("invalid " + name + " expression on line " + ln + ": " + expr + ": " + err.Error())
return
}
res = &Expression{
C: [3]string{m, f, p},
A: "call",
}
return
}
e, err := govaluate.NewEvaluableExpressionWithFunctions(expr, predefinedFunctions)
if err != nil {
eres = fmt.Errorf("invalid " + name + " expression on line " + ln + ": " + expr + ": " + err.Error())
return
}
p := &Position{}
p.Security = &Security{}
v, err2 := Evaluate(&Expression{E: e}, p, params)
if err2 != nil {
eres = fmt.Errorf("invalid " + name + " expression on line " + ln + ": " + expr + ": " + err2.Error())
return
}
if valueTmpl != nil && reflect.TypeOf(v) != reflect.TypeOf(valueTmpl) {
eres = fmt.Errorf("invalid " + name + " expression on line " + ln + ": " + expr + ": which must return " + reflect.TypeOf(valueTmpl).String())
return
}
res = &Expression{
E: e,
A: a,
N: n,
}
return
}
func Evaluate(e *Expression, p *Position, optional ...map[string]interface{}) (interface{}, error) {
params := make(map[string]interface{}, 60)
if len(optional) > 0 && optional[0] != nil {
params = optional[0]
}
s := p.Security
params["Symbol"] = s.Symbol
params["Sector"] = s.Sector
params["Industry"] = s.Industry
params["IndustryGroup"] = s.IndustryGroup
params["SubIndustry"] = s.SubIndustry
params["Market"] = s.Market
params["Type"] = s.Type
params["Currency"] = s.Currency
params["Multiplier"] = s.Multiplier
params["Rate"] = s.Rate
params["Adv20"] = s.Adv20
params["MarketCap"] = s.MarketCap
params["PrevClose"] = s.PrevClose
params["Open"] = s.Open
params["High"] = s.High
params["Low"] = s.Low
close := s.GetClose()
params["Close"] = close
params["Qty"] = s.Qty
params["Vol"] = s.Vol
params["Vwap"] = s.Vwap
params["Ask"] = s.Ask
params["Bid"] = s.Bid
params["AskSize"] = s.AskSize
params["BidSize"] = s.BidSize
params["OutstandBuyQty"] = p.OutstandBuyQty
params["OutstandSellQty"] = p.OutstandSellQty
params["Acc"] = p.Acc
params["Pos"] = p.Qty
params["AvgPx"] = p.AvgPx
params["Commission"] = p.Commission
params["RealizedPnl"] = p.RealizedPnl
params["BuyQty"] = p.BuyQty
params["SellQty"] = p.SellQty
params["BuyValue"] = p.BuyValue
params["SellValue"] = p.SellValue
params["Pos0"] = p.Bod.Qty
params["AvgPx0"] = p.Bod.AvgPx
params["Commission0"] = p.Bod.Commission
params["RealizedPnl0"] = p.Bod.RealizedPnl
params["Target"] = p.Target
params["NaN"] = math.NaN()
return e.E.Evaluate(params)
}
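ParseExpr above dispatches on a leading aggregate wrapper by checking string prefixes and slicing the wrapper off before compiling the inner expression. A minimal Python sketch of that prefix handling (covering only the four simple aggregates; the top() and call() branches are omitted, and the helper name is ours):

```python
def strip_aggregate(expr):
    """Strip a leading aggregate wrapper such as sum(...), len(...),
    mean(...) or std(...), mirroring the prefix handling in ParseExpr.
    Returns (aggregate_name, inner_expression); the name is '' when no
    wrapper is present.
    """
    for name in ("sum", "len", "mean", "std"):
        prefix = name + "("
        if expr.startswith(prefix) and expr.endswith(")"):
            # Drop "name(" at the front and ")" at the back,
            # like expr[4 : len(expr)-1] in the Go code.
            return name, expr[len(prefix):-1]
    return "", expr
```

The remaining inner expression is then what gets compiled against the predefined function table.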
|
#1743. Restore the Array From Adjacent Pairs
class Solution():
def restoreArray(self, adjacentPairs):
d={}
for i, j in adjacentPairs:
d[i]=d.get(i, [])+[j] # bracket is needed
d[j]=d.get(j, [])+[i] # bracket is needed
for i in d:
if len(d[i])==1:
current_pointer = i
break
ans=[]
seen=set()
while current_pointer is not None:
ans.append(current_pointer)
seen.add(current_pointer)
neighbors = d[current_pointer]
current_pointer = None
for neighbor in neighbors:
if neighbor not in seen:
current_pointer = neighbor
return ans
"""
There is an integer array nums that consists of n unique elements, but you have forgotten it. However, you do remember every pair of adjacent elements in nums.
You are given a 2D integer array adjacentPairs of size n - 1 where each adjacentPairs[i] = [ui, vi] indicates that the elements ui and vi are adjacent in nums.
It is guaranteed that every adjacent pair of elements nums[i] and nums[i+1] will exist in adjacentPairs, either as [nums[i], nums[i+1]] or [nums[i+1], nums[i]]. The pairs can appear in any order.
Return the original array nums. If there are multiple solutions, return any of them.
Example 1:
Input: adjacentPairs = [[2,1],[3,4],[3,2]]
Output: [1,2,3,4]
Explanation: This array has all its adjacent pairs in adjacentPairs.
Notice that adjacentPairs[i] may not be in left-to-right order.
Example 2:
Input: adjacentPairs = [[4,-2],[1,4],[-3,1]]
Output: [-2,4,1,-3]
Explanation: There can be negative numbers.
Another solution is [-3,1,4,-2], which would also be accepted.
Example 3:
Input: adjacentPairs = [[100000,-100000]]
Output: [100000,-100000]
""" |
A father who tried to kill his four young children with a hammer before driving them into a pub wall at 92mph has been jailed for life.
Owen Scott had been having a cocaine-induced psychotic episode and believed he was "protecting" his children by fleeing the clutches of an "evil gang".
The 29-year-old had driven his three children and stepdaughter 250 miles from his home in Fawley, Hampshire, to Thurgoland, South Yorkshire last August.
He repeatedly struck all four with a hammer before careering into the wall of The Travellers pub, making no attempt to brake.
Scott, a scaffolder, was arrested at the scene of the crash last August but claims to have no memory of the incident, Sheffield Crown Court heard.
His seven-year-old daughter lost a large section of her skull in the attack, is partially paralysed and will be wheelchair-dependent for the rest of her life. She has undergone 13 operations and remains in hospital six months later.
His 21-month-old son still has a hole in his skull, which will require further surgery, and both he and his nine-month-old brother have to wear protective helmets.
Scott's eight-year-old stepdaughter also suffered severe injuries from which she is still recovering.
Mrs Justice O'Farrell ordered Scott to serve a minimum of 14 years for attempted murder.
"This was a gross abuse of a position of trust," she said.
"You were their father on whom they were reliant for love, affection, comfort and on whom they did rely to keep them safe.
"You will have to live for the rest of your life knowing that you have damaged, in some cases irrecoverably, the health, both physically and psychologically, of your children."
She heard how Scott had been a loving father to his children and his stepdaughter even after the breakdown of his relationship with his former partner.
But in the weeks before the incident, he developed paranoia, put down to a temporary psychosis caused by his long-term recreational cocaine and cannabis use.
Simon Keeley QC, prosecuting, said Scott became convinced he was being chased by a gang who meant him and his children harm.
On the day of the crash, he picked up the children, who lived with his ex-partner Sheryl Rogers in the Southampton area, and went on a two-day trip around the country, crashing his grey Dacia Logan into the pub in the early hours of August 23.
Mr Keeley said police had only traced part of Scott's route but he first went to the Isle of Wight before travelling to Liverpool.
He purchased a satnav in Colne, Lancashire, and visited a Burger King in Bury, Greater Manchester, before heading to the Huddersfield area and then into South Yorkshire, where the crash occurred on the A629.
An off-duty police officer, who witnessed the crash, dialled 999 and cared for the injured children until paramedics arrived. He said Scott, who was uninjured, had clambered over the children to get out of the car.
The two girls were found on top of each other on the central console of the car, the 21-month-old boy was found in a footwell and his younger brother in a carry-cot, also in a footwell, the judge heard.
She told Scott: "You made no attempt to comfort or assist them or check whether they were injured."
Scott was originally arrested on suspicion of dangerous driving while under the influence of alcohol or drugs.
Days later, South Yorkshire Police announced that he had also been charged with attempted murder.
Scott originally pleaded not guilty to the charges.
But last month, he admitted four counts of attempted murder and one count of dangerous driving.
At a previous hearing, prosecutors said Scott had used a hammer to inflict blows on the children in the car before driving deliberately into the front wall of the pub.
Michelle Colborne QC, defending, said Scott had "little or no memory" of events in the car and had undergone a psychiatric evaluation.
But she said that although he was found to be suffering from a "short-lived psychosis" at the time, this was not enough to amount to a psychiatric defence to attempted murder. |
// gem-maven-plugin/src/main/java/de/saumya/mojo/gem/GenerateResourcesMojo.java
package de.saumya.mojo.gem;
import java.io.IOException;
import java.util.List;
import org.apache.maven.model.Resource;
import org.apache.maven.plugin.MojoExecutionException;
import org.apache.maven.plugins.annotations.LifecyclePhase;
import org.apache.maven.plugins.annotations.Mojo;
import org.apache.maven.plugins.annotations.Parameter;
import de.saumya.mojo.ruby.gems.GemException;
import de.saumya.mojo.ruby.script.ScriptException;
/**
* installs a set of given gems without resolving any transitive dependencies
*/
@Mojo( name = "generate-resources", defaultPhase = LifecyclePhase.GENERATE_RESOURCES )
public class GenerateResourcesMojo extends AbstractGemMojo {
@Parameter
protected List<String> includeRubyResources;
@Parameter
protected List<String> excludeRubyResources;
@Parameter
protected boolean includeBinStubs = false;
@Override
protected void executeWithGems() throws MojoExecutionException,
ScriptException, IOException, GemException {
if ( includeRubyResources != null) {
// add it to the classpath so java classes can find the ruby files
Resource resource = new Resource();
resource.setDirectory(project.getBasedir().getAbsolutePath());
for( String include: includeRubyResources) {
resource.addInclude(include);
}
if (excludeRubyResources != null) {
for( String exclude: excludeRubyResources) {
resource.addExclude(exclude);
}
}
addResource(project.getBuild().getResources(), resource);
}
if (includeBinStubs) {
Resource resource = new Resource();
resource.setDirectory(gemsConfig.getBinDirectory().getAbsolutePath());
resource.addInclude("*");
resource.setTargetPath("META-INF/jruby.home/bin");
addResource(project.getBuild().getResources(), resource);
}
}
}
|
#include "podofo/base/PdfEncoding.h"
#include "podofo/base/PdfObject.h"
#include "podofo/base/PdfString.h"
#include "podofo/base/PdfVecObjects.h"
#include "podofo/doc/PdfFont.h"
#include "podofo/doc/PdfFontCache.h"
#include "podofo/doc/PdfFontConfigWrapper.h"
#include "podofo/doc/PdfFontMetrics.h"
#include <vector>
#include "__zz_cib_CibPoDoFo-class-down-cast.h"
#include "__zz_cib_CibPoDoFo-delegate-helper.h"
#include "__zz_cib_CibPoDoFo-generic.h"
#include "__zz_cib_CibPoDoFo-ids.h"
#include "__zz_cib_CibPoDoFo-type-converters.h"
#include "__zz_cib_CibPoDoFo-mtable-helper.h"
#include "__zz_cib_CibPoDoFo-proxy-mgr.h"
namespace __zz_cib_ {
using namespace ::PoDoFo;
template <>
struct __zz_cib_Delegator<::PoDoFo::TFontCacheElement> : public ::PoDoFo::TFontCacheElement {
using __zz_cib_Delegatee = __zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>;
using __zz_cib_AbiType = __zz_cib_Delegatee*;
using ::PoDoFo::TFontCacheElement::TFontCacheElement;
static void __zz_cib_decl __zz_cib_Delete_0(__zz_cib_Delegatee* __zz_cib_obj) {
delete __zz_cib_obj;
}
static __zz_cib_AbiType __zz_cib_decl __zz_cib_New_1() {
return new __zz_cib_Delegatee();
}
static __zz_cib_AbiType __zz_cib_decl __zz_cib_New_2(__zz_cib_AbiType_t<const char*> pszFontName, __zz_cib_AbiType_t<bool> bBold, __zz_cib_AbiType_t<bool> bItalic, __zz_cib_AbiType_t<bool> bIsSymbolCharset, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> pEncoding) {
return new __zz_cib_Delegatee( __zz_cib_::__zz_cib_FromAbiType<const char*>(pszFontName),
__zz_cib_::__zz_cib_FromAbiType<bool>(bBold),
__zz_cib_::__zz_cib_FromAbiType<bool>(bItalic),
__zz_cib_::__zz_cib_FromAbiType<bool>(bIsSymbolCharset),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(pEncoding));
}
#if defined(_WIN32) && !defined(PODOFO_NO_FONTMANAGER)
static __zz_cib_AbiType __zz_cib_decl __zz_cib_New_3(__zz_cib_AbiType_t<const wchar_t*> pszFontName, __zz_cib_AbiType_t<bool> bBold, __zz_cib_AbiType_t<bool> bItalic, __zz_cib_AbiType_t<bool> bIsSymbolCharset, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> pEncoding) {
return new __zz_cib_Delegatee( __zz_cib_::__zz_cib_FromAbiType<const wchar_t*>(pszFontName),
__zz_cib_::__zz_cib_FromAbiType<bool>(bBold),
__zz_cib_::__zz_cib_FromAbiType<bool>(bItalic),
__zz_cib_::__zz_cib_FromAbiType<bool>(bIsSymbolCharset),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(pEncoding));
}
#endif
static __zz_cib_AbiType_t<const ::PoDoFo::TFontCacheElement&> __zz_cib_decl __zz_cib_OperatorEqual_4(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const ::PoDoFo::TFontCacheElement&> rhs) {
return __zz_cib_ToAbiType<const ::PoDoFo::TFontCacheElement&>(
__zz_cib_obj->::PoDoFo::TFontCacheElement::operator=(
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::TFontCacheElement&>(rhs)
)
);
}
static __zz_cib_AbiType_t<bool> __zz_cib_decl __zz_cib_OperatorLT_5(const __zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const ::PoDoFo::TFontCacheElement&> rhs) {
return __zz_cib_ToAbiType<bool>(
__zz_cib_obj->::PoDoFo::TFontCacheElement::operator<(
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::TFontCacheElement&>(rhs)
)
);
}
static __zz_cib_AbiType_t<bool> __zz_cib_decl __zz_cib_OperatorApp_6(const __zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const ::PoDoFo::TFontCacheElement&> r1, __zz_cib_AbiType_t<const ::PoDoFo::TFontCacheElement&> r2) {
return __zz_cib_ToAbiType<bool>(
__zz_cib_obj->::PoDoFo::TFontCacheElement::operator()(
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::TFontCacheElement&>(r1),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::TFontCacheElement&>(r2)
)
);
}
};
}
namespace __zz_cib_ {
namespace __zz_cib_Class333 {
using namespace ::PoDoFo;
namespace __zz_cib_Class432 {
const __zz_cib_MethodTable* __zz_cib_GetMethodTable() {
static const __zz_cib_MTableEntry methodArray[] = {
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_Delete_0),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_New_1),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_New_2),
#if defined(_WIN32) && !defined(PODOFO_NO_FONTMANAGER)
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_New_3),
#else
reinterpret_cast<__zz_cib_MTableEntry> (0),
#endif
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_OperatorEqual_4),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_OperatorLT_5),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::TFontCacheElement>::__zz_cib_OperatorApp_6)
};
static const __zz_cib_MethodTable methodTable = { methodArray, 7 };
return &methodTable;
}
}}}
namespace __zz_cib_ {
using namespace ::PoDoFo;
template <>
struct __zz_cib_Delegator<::PoDoFo::PdfFontCache> : public ::PoDoFo::PdfFontCache {
using __zz_cib_Delegatee = __zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>;
using __zz_cib_AbiType = __zz_cib_Delegatee*;
using ::PoDoFo::PdfFontCache::PdfFontCache;
static __zz_cib_AbiType __zz_cib_decl __zz_cib_Copy_0(const __zz_cib_Delegatee* __zz_cib_obj) {
return new __zz_cib_Delegatee(*__zz_cib_obj);
}
static __zz_cib_AbiType __zz_cib_decl __zz_cib_New_1(__zz_cib_AbiType_t<::PoDoFo::PdfVecObjects*> pParent) {
return new __zz_cib_Delegatee( __zz_cib_::__zz_cib_FromAbiType<::PoDoFo::PdfVecObjects*>(pParent));
}
static __zz_cib_AbiType __zz_cib_decl __zz_cib_New_2(__zz_cib_AbiType_t<const ::PoDoFo::PdfFontConfigWrapper&> rFontConfig, __zz_cib_AbiType_t<::PoDoFo::PdfVecObjects*> pParent) {
return new __zz_cib_Delegatee( __zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfFontConfigWrapper&>(rFontConfig),
__zz_cib_::__zz_cib_FromAbiType<::PoDoFo::PdfVecObjects*>(pParent));
}
static void __zz_cib_decl __zz_cib_Delete_3(__zz_cib_Delegatee* __zz_cib_obj) {
delete __zz_cib_obj;
}
static __zz_cib_AbiType_t<void> __zz_cib_decl EmptyCache_4(__zz_cib_Delegatee* __zz_cib_obj) {
__zz_cib_obj->::PoDoFo::PdfFontCache::EmptyCache();
}
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFont_5(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<::PoDoFo::PdfObject*> pObject) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFont(
__zz_cib_::__zz_cib_FromAbiType<::PoDoFo::PdfObject*>(pObject)
)
);
}
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFont_6(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const char*> pszFontName, __zz_cib_AbiType_t<bool> bBold, __zz_cib_AbiType_t<bool> bItalic, __zz_cib_AbiType_t<bool> bSymbolCharset, __zz_cib_AbiType_t<bool> bEmbedd, __zz_cib_AbiType_t<::PoDoFo::PdfFontCache::EFontCreationFlags> eFontCreationFlags, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> __zz_cib_param6, __zz_cib_AbiType_t<const char*> pszFileName) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFont(
__zz_cib_::__zz_cib_FromAbiType<const char*>(pszFontName),
__zz_cib_::__zz_cib_FromAbiType<bool>(bBold),
__zz_cib_::__zz_cib_FromAbiType<bool>(bItalic),
__zz_cib_::__zz_cib_FromAbiType<bool>(bSymbolCharset),
__zz_cib_::__zz_cib_FromAbiType<bool>(bEmbedd),
__zz_cib_::__zz_cib_FromAbiType<::PoDoFo::PdfFontCache::EFontCreationFlags>(eFontCreationFlags),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(__zz_cib_param6),
__zz_cib_::__zz_cib_FromAbiType<const char*>(pszFileName)
)
);
}
#if defined(_WIN32) && !defined(PODOFO_NO_FONTMANAGER)
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFont_7(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const wchar_t*> pszFontName, __zz_cib_AbiType_t<bool> bBold, __zz_cib_AbiType_t<bool> bItalic, __zz_cib_AbiType_t<bool> bSymbolCharset, __zz_cib_AbiType_t<bool> bEmbedd, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> __zz_cib_param5) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFont(
__zz_cib_::__zz_cib_FromAbiType<const wchar_t*>(pszFontName),
__zz_cib_::__zz_cib_FromAbiType<bool>(bBold),
__zz_cib_::__zz_cib_FromAbiType<bool>(bItalic),
__zz_cib_::__zz_cib_FromAbiType<bool>(bSymbolCharset),
__zz_cib_::__zz_cib_FromAbiType<bool>(bEmbedd),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(__zz_cib_param5)
)
);
}
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFont_8(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const LOGFONTA&> logFont, __zz_cib_AbiType_t<bool> bEmbedd, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> pEncoding) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFont(
__zz_cib_::__zz_cib_FromAbiType<const LOGFONTA&>(logFont),
__zz_cib_::__zz_cib_FromAbiType<bool>(bEmbedd),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(pEncoding)
)
);
}
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFont_9(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const LOGFONTW&> logFont, __zz_cib_AbiType_t<bool> bEmbedd, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> pEncoding) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFont(
__zz_cib_::__zz_cib_FromAbiType<const LOGFONTW&>(logFont),
__zz_cib_::__zz_cib_FromAbiType<bool>(bEmbedd),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(pEncoding)
)
);
}
#endif
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFont_10(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<FT_Face> face, __zz_cib_AbiType_t<bool> bSymbolCharset, __zz_cib_AbiType_t<bool> bEmbedd, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> __zz_cib_param3) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFont(
__zz_cib_::__zz_cib_FromAbiType<FT_Face>(face),
__zz_cib_::__zz_cib_FromAbiType<bool>(bSymbolCharset),
__zz_cib_::__zz_cib_FromAbiType<bool>(bEmbedd),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(__zz_cib_param3)
)
);
}
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetDuplicateFontType1_11(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<::PoDoFo::PdfFont*> pFont, __zz_cib_AbiType_t<const char*> pszSuffix) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetDuplicateFontType1(
__zz_cib_::__zz_cib_FromAbiType<::PoDoFo::PdfFont*>(pFont),
__zz_cib_::__zz_cib_FromAbiType<const char*>(pszSuffix)
)
);
}
static __zz_cib_AbiType_t<::PoDoFo::PdfFont*> __zz_cib_decl GetFontSubset_12(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const char*> pszFontName, __zz_cib_AbiType_t<bool> bBold, __zz_cib_AbiType_t<bool> bItalic, __zz_cib_AbiType_t<bool> bSymbolCharset, __zz_cib_AbiType_t<const ::PoDoFo::PdfEncoding* const> __zz_cib_param4, __zz_cib_AbiType_t<const char*> pszFileName) {
return __zz_cib_ToAbiType<::PoDoFo::PdfFont*>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFontSubset(
__zz_cib_::__zz_cib_FromAbiType<const char*>(pszFontName),
__zz_cib_::__zz_cib_FromAbiType<bool>(bBold),
__zz_cib_::__zz_cib_FromAbiType<bool>(bItalic),
__zz_cib_::__zz_cib_FromAbiType<bool>(bSymbolCharset),
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfEncoding* const>(__zz_cib_param4),
__zz_cib_::__zz_cib_FromAbiType<const char*>(pszFileName)
)
);
}
static __zz_cib_AbiType_t<void> __zz_cib_decl EmbedSubsetFonts_13(__zz_cib_Delegatee* __zz_cib_obj) {
__zz_cib_obj->::PoDoFo::PdfFontCache::EmbedSubsetFonts();
}
#if defined(PODOFO_HAVE_FONTCONFIG)
static __zz_cib_AbiType_t<std::string> __zz_cib_decl GetFontConfigFontPath_14(__zz_cib_AbiType_t<FcConfig*> pConfig, __zz_cib_AbiType_t<const char*> pszFontName, __zz_cib_AbiType_t<bool> bBold, __zz_cib_AbiType_t<bool> bItalic) {
return __zz_cib_ToAbiType<std::string>(
::PoDoFo::PdfFontCache::GetFontConfigFontPath(
__zz_cib_::__zz_cib_FromAbiType<FcConfig*>(pConfig),
__zz_cib_::__zz_cib_FromAbiType<const char*>(pszFontName),
__zz_cib_::__zz_cib_FromAbiType<bool>(bBold),
__zz_cib_::__zz_cib_FromAbiType<bool>(bItalic)
)
);
}
#endif
static __zz_cib_AbiType_t<FT_Library> __zz_cib_decl GetFontLibrary_15(const __zz_cib_Delegatee* __zz_cib_obj) {
return __zz_cib_ToAbiType<FT_Library>(
__zz_cib_obj->::PoDoFo::PdfFontCache::GetFontLibrary()
);
}
static __zz_cib_AbiType_t<void> __zz_cib_decl SetFontConfigWrapper_16(__zz_cib_Delegatee* __zz_cib_obj, __zz_cib_AbiType_t<const ::PoDoFo::PdfFontConfigWrapper&> rFontConfig) {
__zz_cib_obj->::PoDoFo::PdfFontCache::SetFontConfigWrapper(
__zz_cib_::__zz_cib_FromAbiType<const ::PoDoFo::PdfFontConfigWrapper&>(rFontConfig)
);
}
static __zz_cib_AbiType_t<void> __zz_cib_decl Init_17(__zz_cib_Delegatee* __zz_cib_obj) {
__zz_cib_obj->::PoDoFo::PdfFontCache::Init();
}
};
}
namespace __zz_cib_ {
namespace __zz_cib_Class333 {
using namespace ::PoDoFo;
namespace __zz_cib_Class433 {
const __zz_cib_MethodTable* __zz_cib_GetMethodTable() {
static const __zz_cib_MTableEntry methodArray[] = {
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::__zz_cib_Copy_0),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::__zz_cib_New_1),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::__zz_cib_New_2),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::__zz_cib_Delete_3),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::EmptyCache_4),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFont_5),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFont_6),
#if defined(_WIN32) && !defined(PODOFO_NO_FONTMANAGER)
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFont_7),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFont_8),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFont_9),
#else
reinterpret_cast<__zz_cib_MTableEntry> (0),
reinterpret_cast<__zz_cib_MTableEntry> (0),
reinterpret_cast<__zz_cib_MTableEntry> (0),
#endif
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFont_10),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetDuplicateFontType1_11),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFontSubset_12),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::EmbedSubsetFonts_13),
#if defined(PODOFO_HAVE_FONTCONFIG)
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFontConfigFontPath_14),
#else
reinterpret_cast<__zz_cib_MTableEntry> (0),
#endif
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::GetFontLibrary_15),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::SetFontConfigWrapper_16),
reinterpret_cast<__zz_cib_MTableEntry> (&__zz_cib_::__zz_cib_Delegator<::PoDoFo::PdfFontCache>::Init_17)
};
static const __zz_cib_MethodTable methodTable = { methodArray, 18 };
return &methodTable;
}
}}}
/**
* Attach a new observer to the board controller
* @param observer Observer to attach
* @throws NullPointerException
*/
public void attach(iObserver observer) throws NullPointerException{
if (observer == null)
throw new NullPointerException("BoardController.attach() : NULL instance of iObserver");
this.m_observers.add(observer);
observer.setController(this);
}
The Supreme Court was told that authorities had removed encroachments from around 2,280 km of road length by August 31, prompting the court to observe that the state of encroachments was a matter of "great distress".
A bench of Justice Madan B. Lokur and Justice Deepak Gupta said that encroachment was a serious problem and asked the authorities to take the matter seriously, while hearing a plea on encroachments in Delhi.
Encroachments were cleared from 844.33-km roads/streets/footpaths falling in the North Delhi Municipal Corporation (NDMC) area, 811.01 km in South MC, 601.2 km in East MC, 11 km in the New Delhi Municipal Council and 12.44 km in Delhi Development Authority (DDA) jurisdiction.
The top court is dealing with the issue of the validity of the Delhi Laws (Special Provisions) Act, 2006, and subsequent legislation that protects unauthorised constructions from sealing.
The bench said clearing encroachments from such a large area reflected the magnitude of the problem.
Evaluation of pH of curry soup containing coconut milk by near infrared spectroscopy

pH is one of the important parameters used to characterize food deterioration and an indicator of food spoilage. The aim of this research was to apply near infrared (NIR) spectroscopy to evaluate the pH of curry soup containing coconut milk. Soup samples were collected from the mixing tank, water-content-adjusted tank, UHT pipe and laminated containers in the production line. There were also pH-adjusted samples, in which curry made from the same recipe was held for 0, 2, 4 and 6 hr. There were 73 samples in total. Each sample was scanned with an FT-NIR spectrometer. A prediction model for pH was established using NIR spectral data in conjunction with partial least squares regression, validated using leave-one-out cross validation and test set validation. After validation with unknown samples, the leave-one-out cross validation model showed the better prediction performance. The best model, developed using first-derivative spectra in the 9403.8-7498.3, 6102-5446.3 and 4605.4-4242.9 cm−1 ranges, provided a coefficient of determination (r²), root mean square error of cross validation (RMSECV), bias and ratio of performance to interquartile (RPIQ) of 0.73, 0.28, 0.01 and 1.89, respectively. The model was usable for screening and some other approximate calibrations. It could be further improved towards a robust model by using more natural samples for the evaluation of pH in curry soup.

Introduction

pH is one of the important parameters used to characterize food deterioration. It is an indicator of food spoilage and can signal food quality changes to the consumer. Several researchers have studied the relationship between the pH of a food and its deterioration, for example in fresh noodles, chicken sausage, fish, fresh pork burgers and cane juice.
Fish is among the most consumed foods in the world and is very prone to microbial spoilage, which causes an increase in the pH of fish due to rising concentrations of volatile nitrogen bases. In the case of sausage, a low pH value is a positive characteristic in sausage production because microorganism growth is reduced under low-pH conditions; the shelf life of the product therefore increases at low pH values. However, in the study of Korkeala et al., above a level of 10^8 lactobacilli/g in vacuum-packed cooked ring sausages, a sharp decrease in pH from 6.3 down to approximately 5.4 was observed. Biocide-treated cane juice retained its initial pH over 71 hr; in strong contrast, after 71 hr the untreated juice had a markedly lower pH. Coconut milk is used in traditional tropical Asian foods, including curry soup. In order to preserve coconut milk, heat treatment is required. Seow and Gwee describe pasteurization as heating the milk to a temperature of 72 °C for 20 min, whereas Arumughan et al. indicated that ultra-high temperature (UHT) treatment of coconut milk requires heating the milk at 121 °C for 20 min. The drastic heat treatment is required because raw coconut milk is a low-acid liquid food, with a pH of around 6.2. In the coconut-milk curry soup industry, already-mixed samples of the soup are collected for pH measurement. The pH specification for Green curry and Red curry is 5.4-6.4, that for Panang curry is 5.2-6.2 and that for Massaman curry is 5.1-6.1. Near infrared (NIR) spectroscopy, a rapid, accurate and environmentally friendly method for quantifying the constituents and quality parameters of agricultural products and food, has been used both in research and in industry. It has been used to evaluate the pH (acidity) of some foods, such as tomato juice, white vinegars, apple wine, yogurt and loquats. There has been no report on pH measurement using NIR spectroscopy for food deterioration.
Therefore, this research aims to report the application of near infrared spectroscopy to the evaluation of the pH of curry soup for industrial purposes.

Samples

Green curry soup, Massaman curry soup and Panang curry soup were collected from the mixing tank, water-content-adjusted tank, ultra-high temperature (UHT) pipe and laminated containers. After collecting a sample from the processing line, or after pH adjustment, a 200 ml sample was immediately homogenized (T25 digital ULTRA-TURRAX, IKA, Germany) at 2500 rpm for 3 minutes before NIR scanning.

Near infrared scanning of samples

Each sample was transferred into a quartz cup (diameter 64 mm, length 50 mm) and scanned through the quartz bottom of the cup by an FT-NIR spectrometer (MPA, Bruker, Ettlingen, Germany) in diffuse reflection mode at wavenumbers between 12,500-3,600 cm−1 with a nominal resolution of 16 cm−1. All experiments were performed at room temperature (25±1 °C). Each sample was scanned in duplicate. All scan results were recorded in absorption mode (log 1/R). The duplicate spectra of each sample were averaged before further analysis.

Analysis of pH of curry soup

After scanning, the pH of each sample was measured by pH meter (HI 8521, HANNA Instruments, Rhode Island, USA) equipped with a glass electrode, using buffer pH 7.00 and buffer pH 4.00 as calibration standards. Each sample was measured in triplicate.

Repeatability and maximum coefficient of determination

The precision of the reference test for pH of curry soup was determined using the repeatability value (Rep), calculated from the standard deviation of the farthest-differing values of the triplicate. The maximum coefficient of determination (R²max) was then calculated following Dardenne using the equation

R²max = (SD² - Rep²) / SD²   (1)

where SD is the standard deviation of the pH values of the calibration set. According to Dardenne, this is the maximum R² one could obtain with no error in the spectra or the model.
He indicated that sometimes SD and Rep are sufficient grounds to give up NIR model development: they can reveal a range that is too narrow and/or a reference method that is not sufficiently precise.

Spectrum pre-treatment and NIR spectroscopy model establishment

The NIR spectroscopic models for predicting the pH of curry soup were developed by partial least squares (PLS) regression. The OPUS v.7.0.129 multivariate analysis software package (Bruker, Ettlingen, Germany) was used for both spectrum pre-treatment and model development. The NIR spectra used for model development were pre-treated in the following ways: no pre-treatment, constant offset elimination, straight line subtraction, vector normalization (SNV), min-max normalization, multiplicative scatter correction (MSC), first derivatives (17-point segment), second derivatives (17-point segment), first derivatives + straight line subtraction, first derivatives + SNV and first derivatives + MSC. Two types of model were developed: a leave-one-out cross validation model, in which all samples were used, and a test set validation model, in which 50% of samples were used for calibration and the other 50% for validation. The optimum model was selected from combinations of the number of PLS factors, spectral pre-treatment method and wavenumber ranges, based on the lowest root mean squared error of cross (leave-one-out) validation (RMSECV). The model was then validated, and the coefficient of determination (r²), root mean squared error of prediction (RMSEP), ratio of performance to interquartile (RPIQ) for the skewed data set and the prediction bias were calculated. For skewed distributions, the ratio of the standard error of validation to the standard deviation (RPD) is not acceptable for standardizing the SEP with respect to the population spread.
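As a generic illustration (not the OPUS implementation), two of the pre-treatments listed above, SNV and a first derivative, can be sketched with NumPy. The moving-average-plus-gradient derivative only approximates a segment-based Savitzky-Golay derivative, and the toy spectra are invented:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum (row)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra, segment=17):
    """First derivative over a smoothing segment: moving-average smooth
    each spectrum, then take finite differences along the wavenumber axis."""
    kernel = np.ones(segment) / segment
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, spectra)
    return np.gradient(smoothed, axis=1)

# Toy spectra: 3 samples x 100 wavenumber points with a baseline drift
rng = np.random.default_rng(0)
spectra = rng.random((3, 100)) + np.linspace(0.0, 1.0, 100)
pretreated = first_derivative(snv(spectra))
print(pretreated.shape)  # (3, 100)
```

In practice each candidate pre-treatment would be applied to the calibration spectra before fitting the PLS model, and the variant giving the lowest RMSECV retained.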
To calculate the RPIQ index, the SD (standard deviation of the prediction set) was replaced by the interquartile range (Q3-Q1), where Q3 and Q1 are the values below which 75% and 25%, respectively, of the samples are found. Figure 1 shows the average spectra of the four curry soups. The number of samples, minimum (Min), maximum (Max), mean and standard deviation (SD) of pH of the curry soup sample sets are shown in Table 1. The repeatability of the pH reference test for Green, Red, Massaman and Panang curry soup was 0.00, 0.00, 0.01 and 0.01, respectively. The average repeatability was therefore 0.01 and the maximum R² was 1.0, indicating that an NIR spectroscopic model could potentially be developed.

Table 1. Number of samples, minimum (Min), maximum (Max), mean and standard deviation (SD) of pH of curry soup samples of the calibration set and prediction set. [Column headers: Sample set, No. samples, Mean, Max, Min, SD; values not recoverable.]

Results and Discussion

The optimum model was selected as the one giving the best prediction performance, i.e. the minimum RMSECV. Table 2 indicates that an r² of 0.66-0.81 implies a model usable for screening and some other "approximate" calibrations. This result is similar to the prediction of pH of Mediterranean buffalo milk by Fourier-transform mid-infrared spectroscopy, which provided an r² of 0.76 where the average pH was 6.66. In the case of Brown Swiss milk samples, De Marchi et al. reported that mid-infrared spectroscopy models developed for pH could discriminate between high and low values (r² = 0.59 to 0.62). There has been no report on NIR spectroscopy for prediction of the pH of coconut milk, animal milk or their products.

Table 2. Statistics of prediction of pH of the four curry soups (Green curry soup, Red curry soup, Massaman curry soup and Panang curry soup) by PLS models. [Values not recoverable.]

Figures 2 and 3 show the scatter plots of pH measured by the reference method (pH meter) and predicted by NIR spectroscopy for the leave-one-out cross validation model and the test set validation model, respectively.

Figure 2.
Comparison of the pH of curry soup predicted by near infrared (NIR) spectroscopy and measured by pH meter (leave-one-out validation).

Table 3 shows the true and predicted pH of 12 unknown samples of the four curry soups obtained by the PLS models validated by leave-one-out cross validation and by test set validation. The prediction results of the test set validation model and the leave-one-out cross validation model were similar. The trend lines of the models had slopes and offsets far from the target line.

Table 4. Statistics of prediction performance of the leave-one-out validation and test set validation PLS models on unknown samples of the four curry soups (Green curry soup, Red curry soup, Massaman curry soup and Panang curry soup). RMSEP: root mean square error of prediction; SEP: standard error of prediction; RPD: ratio of the standard error of validation to the standard deviation.

Conclusions

From the results presented in this study, NIR spectroscopy could be used as an alternative technique to evaluate the pH of curry soup, since the model showed acceptable prediction accuracy. The predictive statistics suggest that these models are usable for screening and some other "approximate" calibrations of pH of curry soup containing coconut milk.
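The validation statistics used throughout the paper can be computed generically as below. The RPIQ follows the definition quoted in the Methods (interquartile range divided by the prediction error, with RMSEP standing in for SEP), the exact form of Dardenne's maximum-R² formula is assumed here, and the pH values are invented:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean squared error of prediction."""
    err = np.asarray(y_pred) - np.asarray(y_true)
    return float(np.sqrt(np.mean(err ** 2)))

def bias(y_true, y_pred):
    """Mean signed prediction error."""
    return float(np.mean(np.asarray(y_pred) - np.asarray(y_true)))

def rpiq(y_true, y_pred):
    """Ratio of performance to interquartile: (Q3 - Q1) / error,
    with RMSEP standing in for SEP."""
    q1, q3 = np.percentile(y_true, [25, 75])
    return float((q3 - q1) / rmsep(y_true, y_pred))

def r2_max(sd, rep):
    """Maximum attainable R^2 given reference-method repeatability;
    the form (SD^2 - Rep^2) / SD^2 is an assumed reading of Dardenne."""
    return (sd ** 2 - rep ** 2) / sd ** 2

# Invented pH values for illustration
y_true = np.array([5.2, 5.4, 5.6, 5.8, 6.0, 6.2])
y_pred = y_true + np.array([0.1, -0.1, 0.05, -0.05, 0.0, 0.1])
print(round(r2_max(sd=0.33, rep=0.01), 3))  # 0.999
print(round(bias(y_true, y_pred), 3))
print(round(rpiq(y_true, y_pred), 2))
```

With Rep = 0.01 and an SD of, say, 0.33, r2_max gives about 0.999, consistent with the maximum R² of 1.0 reported above.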
import type { Dispatch, ReactNode } from 'react';
import { Children, isValidElement, useState } from 'react';
// Resolve the initial display value: the child whose `value` prop matches
// `defaultValue`, falling back to the first child; returns [value, setValue].
export default function useSelectValue(
children: ReactNode,
defaultValue?: string | null,
): [string, Dispatch<string>] {
const [value, setValue] = useState(() => {
const child = Children.toArray(children).find((c) => (
isValidElement(c) && c.props.value === defaultValue));
if (child && isValidElement(child)) {
return child.props.children;
}
const firstChild = Children.toArray(children)[0];
if (firstChild && isValidElement(firstChild)) {
return firstChild.props.children;
}
return null;
});
return [value, setValue];
}
Constructing Nested Nodal Sets for Multivariate Polynomial Interpolation

We present a robust method for choosing multivariate polynomial interpolation nodes. Our algorithm is an optimization method to greedily minimize a measure of interpolant sensitivity, a variant of a weighted Lebesgue function. Nodes are therefore chosen that tend to control oscillations in the resulting interpolant. This method can produce an arbitrary number of nodes and is not constrained by the dimension of a complete polynomial space. Our method is therefore flexible: nested nodal sets are produced in spaces of arbitrary dimensions, and the number of nodes added at each stage can be arbitrary. The algorithm produces a nodal set given a probability measure on the input space, thus parameterizing interpolants with respect to finite measures. We present examples to show that the method yields nodal sets that behave well with respect to standard interpolation diagnostics: the Lebesgue constant, the Vandermonde determinant, and the Vandermonde condition number. We also show that a nongreedy version of the...
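The greedy idea in this abstract can be illustrated in one dimension: starting from a seed node, repeatedly add the candidate point that minimizes the resulting maximum of the Lebesgue function over a fine grid. This is a toy, unweighted sketch of the strategy, not the authors' algorithm; the interval [-1, 1], the seed node and the grid resolution are arbitrary choices:

```python
import numpy as np

def lebesgue_max(nodes, grid):
    """Maximum over `grid` of sum_i |l_i(x)|, where l_i are the Lagrange
    basis polynomials for the given interpolation nodes."""
    nodes = np.asarray(nodes, dtype=float)
    total = np.zeros_like(grid)
    for i in range(len(nodes)):
        li = np.ones_like(grid)
        for j in range(len(nodes)):
            if j != i:
                li *= (grid - nodes[j]) / (nodes[i] - nodes[j])
        total += np.abs(li)
    return float(total.max())

def greedy_nodes(n, grid_size=201):
    """Greedily build a nested set of n interpolation nodes on [-1, 1],
    each step adding the candidate that minimizes the Lebesgue constant."""
    grid = np.linspace(-1.0, 1.0, grid_size)
    nodes = [0.0]  # arbitrary seed node
    while len(nodes) < n:
        best, best_val = None, float("inf")
        for c in grid:
            if any(abs(c - x) < 1e-12 for x in nodes):
                continue  # node already chosen
            val = lebesgue_max(nodes + [float(c)], grid)
            if val < best_val:
                best, best_val = float(c), val
        nodes.append(best)
    return nodes

print(sorted(greedy_nodes(5)))
```

Because previously chosen nodes are never revisited, the sets produced for increasing n are nested by construction, and any number of nodes can be requested.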
Single-Study Approvals: Quantum of Evidence Required.

When does a single positive adequate and well-controlled study of a new drug meet the statutory requirement to demonstrate substantial evidence of effectiveness? The answer to this question, particularly with respect to new molecular entities, has been a matter of considerable debate since 1962, when the requirement that new drugs prove their benefit to patients became law. A 1997 revision to the statute provided one pathway to a single-study approval (a single adequate and well-controlled study plus confirmatory evidence), while a 1998 guidance issued by FDA provided additional pathways, one of which is the one most frequently cited by FDA (a single statistically very persuasive study). This paper explains these 2 distinct pathways and provides illustrative examples of how FDA uses each of them. Regulators, industry, patients, and investors should each find this exegesis of these 2 independent, yet equally viable and valuable, pathways to an FDA approval both illuminating and invaluable.
Takotsubo's cardiomyopathy with an uncommon complication: implications for management and treatment.

We present the case of a 57-year-old female with no significant history of cardiac disease admitted to our service with stress-induced cardiomyopathy (Takotsubo's cardiomyopathy). Admission echocardiography with contrast showed a non-mobile apical-filling defect, consistent with laminar thrombus. After 1 month of anticoagulation with warfarin (bridged with inpatient intravenous heparin), follow-up echocardiography with contrast showed resolution of the thrombus. Although reported in the literature, to our knowledge, there are no consensus guidelines for the surveillance and treatment of left ventricular thrombus in patients with Takotsubo's cardiomyopathy. An awareness of this adverse effect and its treatment implications is imperative for any clinician caring for these patients.
package com.peter.iliev.kata;
public class InsertionSort16 {
    // Insertion-sorts the subarray a[s..eInc] (both bounds inclusive).
    public static void sort(final Integer[] a, final int s, final int eInc) {
if (a.length < 2) {
return;
}
for (int i = s + 1; i <= eInc; i++) {
int pos = i;
final Integer insertMe = a[pos];
int index = pos - 1;
while (index >= s && a[index].compareTo(insertMe) > 0) {
a[index + 1] = a[index];
index--;
}
a[index + 1] = insertMe;
}
}
}
def remove_NAN_intensity_score_PEP(self, df):
    """Drop rows whose Intensity, Score or PEP is NaN or infinite,
    printing the row count after each filtering step."""
    # requires: import numpy as np (at module level)
    print(len(df))
    df = df[np.isfinite(df["Intensity"])]
    print(len(df))
    df = df[np.isfinite(df["Score"])]
    print(len(df))
    df = df[np.isfinite(df["PEP"])]
    print(len(df))
    return df
package at.meinedomain.CheckIt;
import java.util.ArrayList;
import android.util.Log;
import at.meinedomain.CheckIt.Pieces.*;
public class Board {
public enum MatchState{
RUNNING,
CHECK_MATE_WON,
TIME_UP_WON,
OPPONENT_GONE,
CHECK_MATE_LOST,
TIME_UP_LOST,
STALE_MATE_DRAW,
LITTLE_MATERIAL_DRAW // TODO Test for this...
}
private SendMoveListener sendMoveListener;
private Color myColor;
private Point myKing;
private Point opponentKing;
private MatchState matchState;
private int width;
private int height;
private AbstractPiece[][] board;
private Color turn;
private Point markedPoint;
private Point markedPointOpponent;
private Point enPassant;
// Constructors=============================================================
public Board(SendMoveListener sml, Color player){
this.sendMoveListener = sml;
this.myColor = player;
myKing = myColor==Color.WHITE ? new Point(4,0) : new Point(4,7);
opponentKing = myColor==Color.WHITE ? new Point(4,7) : new Point(4,0);
matchState = MatchState.RUNNING;
width = 8;
height = 8;
turn = Color.WHITE;
markedPoint = null;
markedPointOpponent = null;
enPassant = null;
board = new AbstractPiece[width][height];
// whitePieces = new AbstractPiece[width][height];
// blackPieces = new AbstractPiece[width][height];
for(int i=0; i<width; i++){
for(int j=0; j<height; j++){
board[i][j] = null;
}
}
// init pawns
for(int i=0; i<width; i++){
board[i][1] = new Pawn(this, Color.WHITE, new Point(i,1));
board[i][6] = new Pawn(this, Color.BLACK, new Point(i,6));
}
// init rooks
board[0][0] = new Rook(this, Color.WHITE, new Point(0,0));
board[7][0] = new Rook(this, Color.WHITE, new Point(7,0));
board[0][7] = new Rook(this, Color.BLACK, new Point(0,7));
board[7][7] = new Rook(this, Color.BLACK, new Point(7,7));
//init knights
board[1][0] = new Knight(this, Color.WHITE, new Point(1,0));
board[6][0] = new Knight(this, Color.WHITE, new Point(6,0));
board[1][7] = new Knight(this, Color.BLACK, new Point(1,7));
board[6][7] = new Knight(this, Color.BLACK, new Point(6,7));
//init bishops
board[2][0] = new Bishop(this, Color.WHITE, new Point(2,0));
board[5][0] = new Bishop(this, Color.WHITE, new Point(5,0));
board[2][7] = new Bishop(this, Color.BLACK, new Point(2,7));
board[5][7] = new Bishop(this, Color.BLACK, new Point(5,7));
//init queen and king
board[3][0] = new Queen(this, Color.WHITE, new Point(3,0));
board[4][0] = new King(this, Color.WHITE, new Point(4,0));
board[3][7] = new Queen(this, Color.BLACK, new Point(3,7));
board[4][7] = new King(this, Color.BLACK, new Point(4,7));
// // init Piece-ArrayLists
// for(int i=0; i<width; i++){
// for(int j=0; j<height; j++){
// if(board[i][j] != null){
// if(board[i][j].getColor() == Color.WHITE){
// whitePieces[i][j] = board[i][j];
// }
// else{
// blackPieces[i][j] = board[i][j];
// }
// }
// }
// }
}
//--------------------------------------------------------------------------
// this constructor is used for testing the canMove()-method of pieces.
public Board(SendMoveListener sml, Color player, Color turn){
this(sml, player);
// override whose turn it is (the board itself is unchanged)
this.turn = turn;
}
// Getters/Setters/move-methods=============================================
public AbstractPiece[][] getBoard(){
return board;
}
@Deprecated
public void setBoard(AbstractPiece[][] board){ // USED FOR TESTING ONLY!
this.board = board;
// set the myKing and opponentKing locations:
for(int i=0; i<width; i++){
for(int j=0; j<height; j++){
if(board[i][j]!=null && board[i][j] instanceof King){
if(pieceAt(i,j).getColor() == myColor){
myKing = new Point(i,j);
Log.d("Board", "My king is at "+i+","+j);
}
else{
opponentKing = new Point(i,j);
Log.d("Board", "Opponent's king is at "+i+","+j);
}
}
}
}
}
public int getWidth(){
return width;
}
public int getHeight(){
return height;
}
public AbstractPiece pieceAt(Point pt){
return pieceAt(pt.getX(), pt.getY());
}
public AbstractPiece pieceAt(int i, int j){
return board[i][j];
}
// for rook-placing (castling) and piece-placing (pawn reaches last rank)
public void placePiece(Point from, Point to){
AbstractPiece movingPiece = pieceAt(from);
movingPiece.setLocation(to);
if(movingPiece instanceof King){
if(movingPiece.getColor() == myColor){
myKing = to;
}
else{
opponentKing = to;
}
}
board[ to.getX()][ to.getY()] = movingPiece;
board[from.getX()][from.getY()] = null;
}
// Currently used for en-passant-capturing only.
private void killPiece(int x, int y){
board[x][y] = null;
}
// // move without testing for correctness of the move.
// public void move(Point from, Point to, MoveType mt){
// move(from, to, null, mt);
// }
// move without testing for correctness of the move.
public void move(Point from, Point to, MoveType mt){
// enPassant = ep;
if(turn.equals(myColor)){
sendMoveListener.sendMove(new Move(from, to, mt));
markedPoint = null;
markedPointOpponent = null;
}
else{
markedPointOpponent = to;
}
Log.d("Board", "now placePiece() with from.x="+from.getX()+", from.y="+from.getY());
playSound(mt);
placePiece(from, to);
if(mt==MoveType.CASTLE_KINGSIDE || mt==MoveType.CASTLE_QUEENSIDE){
placeCastlingRook(mt);
}
if(mt==MoveType.EN_PASSANT){
killPiece(to.getX(), from.getY());
}
if(mt==MoveType.DOUBLE_STEP){
enPassant = new Point(to.getX(), (from.getY()+to.getY())/2);
}
else{
enPassant = null;
}
if(mt==MoveType.PAWN_TO_QUEEN){
killPiece(to.getX(), to.getY());
board[to.getX()][to.getY()] = new Queen(this, turn, to);
}
else if(mt==MoveType.PAWN_TO_ROOK){
killPiece(to.getX(), to.getY());
board[to.getX()][to.getY()] = new Rook(this, turn, to);
}
else if(mt==MoveType.PAWN_TO_KNIGHT){
killPiece(to.getX(), to.getY());
board[to.getX()][to.getY()] = new Knight(this, turn, to);
}
else if(mt==MoveType.PAWN_TO_BISHOP){
killPiece(to.getX(), to.getY());
board[to.getX()][to.getY()] = new Bishop(this, turn, to);
}
// check if game is over------------------------------------------------
Color nextCol = (turn.equals(Color.WHITE)) ? Color.BLACK : Color.WHITE;
if(isInCheckMate(nextCol)){
matchState = nextCol==myColor ? MatchState.CHECK_MATE_LOST :
MatchState.CHECK_MATE_WON;
return;
}
if(isInStaleMate(nextCol)){
matchState = MatchState.STALE_MATE_DRAW;
return;
}
// ok, let's continue---------------------------------------------------
turn = (turn.equals(Color.WHITE)) ? Color.BLACK : Color.WHITE;
}
public void tryToMove(Point from, Point to){
AbstractPiece tempPiece = pieceAt(from);
if(tempPiece == null){
Log.wtf("Board", "Trying to move null!");
return;
}
else if(to.getX() >= width || to.getY() >= height){
Log.wtf("Board", "Trying to move outside the board");
return;
}
else{
tempPiece.tryToMove(to);
}
}
public Point getEnPassant(){
return enPassant;
}
public Point getMarkedPoint(){
return markedPoint;
}
public Point getMarkedPointOpponent(){
return markedPointOpponent;
}
public MatchState getMatchState(){
return matchState;
}
public Color getTurn(){
return turn;
}
public void setMarkedPoint(Point P){
markedPoint = P;
}
public void setMatchState(MatchState ms){
matchState = ms;
}
public void toggleTurn(){
turn = (turn==Color.WHITE) ? Color.BLACK : Color.WHITE;
}
public void playSound(MoveType mt){
if(!Settings.soundEnabled){
return;
}
else if(mt == MoveType.CAPTURE)
Assets.capture.play(1);
else if(mt==MoveType.CASTLE_KINGSIDE || mt==MoveType.CASTLE_QUEENSIDE)
Assets.castle.play(1);
else
Assets.move.play(1);
}
// Utility methods =========================================================
public boolean isEmpty(Point pt){
return pieceAt(pt) == null;
}
public boolean isEmpty(int x, int y){
return pieceAt(x, y) == null;
}
public boolean emptyAfterOppMove(Point pt, Point oppFrom, Point oppTo){
if(oppFrom == null){
// assuming oppTo==null too.
return isEmpty(pt);
}
else if(isEmpty(pt)){
return !pt.equals(oppTo);
}
else{ // was not empty
return pt.equals(oppFrom);
}
}
public boolean isOccupiedByTurn(Point pt){
if(!isEmpty(pt) && pieceAt(pt).getColor()==turn){
return true;
}
return false;
}
public boolean isOccupiedByTurnOpponent(Point pt){
if(!isEmpty(pt) && pieceAt(pt).getColor()!=turn){
return true;
}
return false;
}
public boolean isInCheck(Color c){
return leavesInCheck(c, null, null);
}
public boolean isInCheckMate(Color c){
return isInCheck(c) && !canMove(c);
}
public boolean isInStaleMate(Color c){
return !isInCheck(c) && !canMove(c);
}
private boolean canMove(Color c){
for(int i=0; i<width; i++){
for(int j=0; j<height; j++){
if(!isEmpty(i,j) && pieceAt(i,j).getColor()==c &&
pieceAt(i,j).canMoveSomewhere()){
return true;
}
}
}
return false;
}
// Test whether color c would be left in check if the piece on "ignore"
// moved to "consider"; if ignore==consider==null, test the current position.
public boolean leavesInCheck(Color c, Point ignore, Point consider){
Point kingPt = myColor==c ? myKing : opponentKing;
// but if the king is moving right now, we need to reassign.
// color-test too, because a king could move and cause an "Abzugsschach" (discovered check)
if(kingPt.equals(ignore) && pieceAt(ignore).getColor()==c)
kingPt = consider;
for(int i=0; i<width; i++){
for(int j=0; j<height; j++){
if(!isEmpty(i,j) && pieceAt(i,j).getColor()!=c // if opp there
&& !pieceAt(i,j).getLocation().equals(consider)){// and we didn't just capture the attacker
if(pieceAt(i,j).attacks(kingPt, ignore, consider)){
return true;
}
}
}
}
return false;
}
private void placeCastlingRook(MoveType mt){
if(mt==MoveType.CASTLE_KINGSIDE && turn == Color.WHITE){
placePiece(new Point(7,0), new Point(5,0));
}
else if(mt==MoveType.CASTLE_QUEENSIDE && turn == Color.WHITE){
placePiece(new Point(0,0), new Point(3,0));
}
else if(mt==MoveType.CASTLE_KINGSIDE && turn == Color.BLACK){
placePiece(new Point(7,7), new Point(5,7));
}
else if(mt==MoveType.CASTLE_QUEENSIDE && turn == Color.BLACK){
placePiece(new Point(0,7), new Point(3,7));
}
}
}
Forensic genetics through the lens of Lewontin: population structure, ancestry and race

In his famous 1972 paper, Richard Lewontin used classical protein-based markers to show that greater than 85% of human genetic diversity was contained within, rather than between, populations. At that time, these same markers also formed the basis of forensic technology aiming to identify individuals. This review describes the evolution of forensic genetic methods into DNA profiling, and how the field has accounted for the apportionment of genetic diversity in considering the weight of forensic evidence. When investigative databases fail to provide a match to a crime-scene profile, specific markers can be used to seek intelligence about a suspect: these include inferences on population of origin (biogeographic ancestry) and externally visible characteristics, chiefly pigmentation of skin, hair and eyes. In this endeavour, ancestry and phenotypic variation are closely entangled. The markers used show patterns of inter- and intrapopulation diversity that are very atypical compared to the genome as a whole, and reinforce an apparent link between ancestry and racial divergence that is not systematically present otherwise. Despite the legacy of Lewontin's result, therefore, in a major area in which genetics coincides with issues of public interest, methods tend to exaggerate human differences and could thereby contribute to the reification of biological race. This article is part of the theme issue 'Celebrating 50 years since Lewontin's apportionment of human diversity'.

Introduction

When Richard Lewontin wrote his seminal 1972 article, 'The apportionment of human diversity', he had at his disposal extensive molecular population data based on an array of 17 'classical' polymorphisms (figure 1a), detectable by protein electrophoresis or immunological methods, which allowed him to assess variation within and between human groups.
Lewontin found that 85.4% of total human diversity was contained within populations, and he emphasized his point that 'less than 15% of all human genetic diversity is accounted for by differences between human groups' with an exclamation mark. However, in 1972, these same polymorphic markers formed the basis of another field, with a different aim: attributing a biological sample to an individual. That field is now known as forensic genetics. Taking Lewontin as a starting point, this review examines how human individual identification evolved from 'classical' polymorphisms to DNA, how it attempted to account for inter-population variation and population structure, and how, in no-suspect cases where database searches draw a blank, it has considered the apportionment of human diversity to make deductions about the population of origin of a sample ('biogeographic ancestry'; BGA). The lens of Lewontin allows us to see how unchanging and intractable some of the problems are: what characteristics we use to classify populations, how we name them and how they should be grouped in higher-level comparisons. Lewontin's reliance on proteins, rather than DNA, brings phenotypes into play, and this leads to the uncomfortable entanglement of ancestry and phenotype discussed below. The forensic value of such markers was recognized early by Landsteiner: 'to detect the non-identity of blood samples'. Forensic biologists went on to combine sets of these classical polymorphisms to reduce the random match probability (RMP; figure 2), the chance that two different individuals have matching genotypes, and exploited their Mendelian inheritance in kinship testing. If genetic loci are unlinked and the population is randomly mating, then independent inheritance means that, in principle, their allele frequencies can be multiplied in deriving genotype frequencies; this is known as the product rule.
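The product rule just described can be sketched in a few lines. The allele frequencies here are invented, and real casework uses validated population databases and the corrections discussed later in the text:

```python
def heterozygote_freq(p, q):
    """Genotype frequency 2pq for a heterozygote under Hardy-Weinberg equilibrium."""
    return 2 * p * q

# Invented (p, q) allele-frequency pairs for a profile heterozygous at three loci.
loci = [(0.10, 0.20), (0.15, 0.05), (0.25, 0.10)]

rmp = 1.0
for p, q in loci:
    rmp *= heterozygote_freq(p, q)  # product rule: multiply across independent loci

print(f"random match probability: {rmp:.1e}")  # 3.0e-05
```

Adding loci drives the RMP down multiplicatively, which is why modern multiplexes of many STRs give such small match probabilities.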
As a consequence, RMPs fell to more useful average levels of 1% or lower, but there remained practical problems of protein degradation, body fluid specificity and the interpretation of mixed samples. It was the development of DNA-based analysis in the mid-1980s that relegated Lewontin's classical polymorphisms from forensics and began the modern era of robust individual identification. Initially, this was via DNA fingerprinting, based on length variation at multi-allelic autosomal minisatellites, and by the 1990s DNA profiling, based on length variation at short tandem repeats (STRs; also known as microsatellites).

[Figure 1. (a) The 17 'classical' markers are shown in their approximate chromosomal locations (from www.omim.org) on a G-banded human karyotype. Thirteen of the markers were diallelic; for the remaining four, the number of alleles analysed is given in parentheses after the marker name. All markers are also among those used in forensic serological analysis. APh: acid phosphatase 1; AK: adenylate kinase 1; PGM1: phosphoglucomutase 1; PGD: phosphogluconate dehydrogenase; Ag: β-lipoprotein, Ag system; Lp: β-lipoprotein, Lp system; Hp: haptoglobin. (b) Lewontin's 169 populations are shown, with assignment to one of seven racial groups indicated by background colour (n indicates the number of populations per racial group). Not all populations were typed for all 17 markers shown in (a). Sets of populations are placed on the world map to indicate approximate regions of origin; north and south Native Americans are distinguished here, though they were considered as one 'Amerind' group by Lewontin. For some populations, geographical location and racial-group assignment indicate anthropological classifications, and some examples (e.g. US Blacks, Turks) are placed separately from the major sets. Names of populations and racial groups are those given by Lewontin; the significance of inverted commas around some population names is unclear.]
STR profiles are digital (each allele is designated by a number reflecting its number of repeat units) and therefore ideal for databasing. This allowed the development of large investigative databases containing profiles obtained from convicted individuals, suspects and crime-scene samples. The first national DNA database to be developed (in 1995) was that of the UK, which by March 2020 contained 6.6 million profiles, the largest by proportion of population of any in the world. It provides a 'hit' (a match between a crime-scene profile and a stored subject profile) in 66% of queries and thus represents an efficient tool for the detection of crime.

Forensic significance of intragroup and intergroup variation

The forensic geneticist needs genotypes that provide robust individual identification, an aim that emphasizes variation within the population: calculating a RMP to evaluate the significance of a match can then be done by compiling allele frequency data from that population, and assuming homogeneity and random mating. However, this raises two issues: which population is relevant to a particular case? And can population substructure invalidate the assumptions made in RMP calculations? These questions formed some of the battle-lines in the so-called 'DNA fingerprinting wars' of the 1990s, in which Lewontin, together with Daniel Hartl, was a vigorous combatant. The debate was eventually declared settled, although not at all to Lewontin's satisfaction. Accustomed to the compendious collections of population data on classical markers, Lewontin argued that a lack of similarly detailed data on the new-fangled DNA markers made RMP calculations unsound. He also pointed out that the major racial groups used in calculations (for example, 'Caucasian') likely harboured endogamous subgroups with significantly divergent allele frequencies (sub-populations) that violated the random-mating assumption and made the use of the product rule inappropriate.
It was unclear whether the product rule favoured the defence or the prosecution, but in any case, it should not be applied until more detailed data were available; instead, Lewontin suggested, the profile frequency in the population database should be used, and when the profile was unobserved (i.e. in the majority of cases), a frequency of 1/x should be assumed, where x is the population database size.

[Figure 2. Calculation of RMPs and the effect of different population databases. Bar charts show the allele frequencies for three forensic STRs in two population databases, 'Caucasian-Americans' (n = 404 alleles typed) and African-Americans (n = 418). Note that 'Caucasian' is the term used by the authors but is no longer favoured in many areas of human genetics. Below is an evidence profile, heterozygous at each locus, and the corresponding allele frequencies, denoted p and q. An individual can receive either allele from either parent, so the genotype probability is 2pq (for homozygotes, the corresponding probability is p²). Assuming the loci are independently inherited, the per-locus genotype frequencies can be multiplied together (the product rule) to give the profile frequency, which is equivalent to the RMP: the chance that some random unrelated person in the population carries the same profile as the evidential sample. In practice, many more than three STRs are analysed, giving much lower values than in this example. Given the different allele frequencies in the two databases, in this case the profile frequency when using the Caucasian-American database is about five times higher than that for the African-American database. Note that the calculation here assumes the simplest of population genetic models (Hardy-Weinberg equilibrium); typically in casework somewhat more complex models are used (see main text).]

royalsocietypublishing.org/journal/rstb Phil. Trans. R. Soc. B 377: 20200422
However, clearly, without multiplication of allele frequencies at different loci, such assumed genotype frequencies were problematically and implausibly high, and undermined the utility of forensic DNA analysis. Lewontin therefore lost this battle. When a crime-scene profile matches a suspect, but the only information available about the perpetrator is the DNA profile itself, then the choice of reference population for calculating the RMP becomes particularly important. If there were major differences in allele frequencies between different populations, then a DNA profile might be very rare in one population, strongly incriminating the suspect, but orders of magnitude more common in another. The United States National Research Council (NRC) 1996 report made a number of recommendations to deal with this issue that remain widely adhered to today. One is the treatment of rare alleles: when an allele is not represented in a database, or present only a few times, any estimate of its frequency is inherently inaccurate. The recommendation is that each allele should be observed at least five times if its frequency estimate is to be used in statistical calculations, and that the frequency of any allele observed less often than this should be inflated to this minimum, i.e. 5/2N, where N is the number of individuals in the database (and 2N the number of genomes). A second NRC recommendation was for an adjustment to account for population structure, using a correction factor known as θ. For US populations, a conservative value of θ = 0.01 is recommended; this is at least an order of magnitude higher than empirically measured values. For 'some small, isolated populations', a higher value of 0.03 can be used. When factored into calculations, θ has the effect of somewhat elevating genotype frequencies. These kinds of compromises lack rigour but were justified as part of a conservative approach to statistics that favoured the defendant in a case.
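The two NRC adjustments just described, the five-observation floor on allele frequencies and the correction for population structure, can be sketched as follows. The numbers are illustrative, and the homozygote formula shown is the simple NRC-style adjustment rather than the fuller match-probability formulae sometimes used in casework:

```python
def min_allele_freq(observed_count, n_individuals):
    """Floor a rare-allele frequency estimate at 5 observations per database."""
    return max(observed_count, 5) / (2 * n_individuals)  # 2N genomes sampled

def homozygote_freq(p, theta=0.01):
    """Homozygote frequency with a population-structure correction theta:
    p**2 + p*(1 - p)*theta, slightly above the Hardy-Weinberg p**2."""
    return p ** 2 + p * (1 - p) * theta

# An allele seen twice in a database of 200 people is inflated to 5/400.
print(min_allele_freq(2, 200))   # 0.0125
print(homozygote_freq(0.1))      # a little above the uncorrected 0.01
```

Both adjustments nudge genotype frequencies upwards, consistent with the conservative, defendant-favouring approach described in the text.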
However, declaring the end of the 'DNA fingerprinting wars' without solving the underlying issues means that questions of inter-population variation and population substructure have not disappeared, and tend to arise afresh with each new development in forensic technology. In today's age of highly sensitive PCR multiplexes, STR profiles are often partial (missing a full set of loci, or alleles) or mixed, which can increase RMPs and make interpretation more challenging. More rigorous approaches to calculating RMPs under different models of mate choice have been developed. As well as considering the significance of a match between the profile of a known suspect and a crime-scene sample, in the modern world of very large investigative databases (such as those of the US and the UK) 'cold hits' are often reported and evaluated. In these cases, a crime-scene profile matches a profile in the database, sometimes from a case occurring long ago, when other evidence may be scanty. Here, the persuasive power of a low RMP may carry great weight, so careful calculation of the chance of erroneous matches becomes important.

No-suspect cases: DNA-based intelligence on ancestry

No nation holds a 'universal' DNA database containing the DNA profiles of all its citizens (although some have considered building one). A consequence of this is that many crime-scene profiles entered into investigative databases return no hits, and therefore no potential suspects. This has led to attempts to produce intelligence from DNA information that could facilitate suspect identification. One indirect approach is the familial search: seeking autosomal STR profiles in an investigative database that are sufficiently similar to the crime-scene profile to suggest that they could come from a close relative (parent, child or sibling). The reliability of this endeavour, like that of profile matching and database searching, is influenced by population structure.
The reach of the method has recently been extended to more distant relatives by generating genomewide single-nucleotide polymorphism (SNP) genotypes and using these to query publicly accessible data generated by direct-to-consumer testing (investigative genetic genealogy). Intelligence can also be sought more directly by attempting to infer characteristics of the suspect from the crime-scene sample. Here, three areas have been focused upon: BGA, externally visible phenotypes and age. This review focuses on the first of these and ignores the last, since age is unaffected by variation in DNA and is more reliably assessed by measuring epigenetic variation. Because the investigated visible phenotypes show high inter-population variation and correlate with ancestry (and indeed with traditional ideas of race in contexts such as the US ), they intersect with the apportionment of human genetic diversity and are also considered here. If a population geneticist today wished to study the ancestry of an unknown individual sample, they would resort to genomewide analysis via a chip typing hundreds of thousands of SNPs, or even whole-genome sequencing. But forensic scientists do not usually have this luxury, since the amount and quality of DNA available is often low. Furthermore, the need for methods to be forensically validated, acceptable in the courtroom and compatible with existing investigative databases limits the application of genomewide techniques, and the number and type of markers that can be studied. Since standard forensic autosomal STR profiles are generated routinely, many studies have asked whether these contain any information about population of origin. More targeted work has sought SNPs with alleles that are highly differentiated between populations and therefore can have predictive value in combination. 
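Seeking markers with highly differentiated allele frequencies between populations can be sketched as a simple frequency-difference filter. The SNP identifiers and frequencies below are invented for illustration:

```python
# Invented per-SNP allele frequencies in two populations, A and B.
snp_freqs = {
    "rs_demo1": (0.95, 0.10),
    "rs_demo2": (0.50, 0.45),
    "rs_demo3": (0.20, 0.85),
}

def select_informative(freqs, threshold=0.5):
    """Keep SNPs whose allele-frequency difference exceeds the threshold."""
    return [snp for snp, (pa, pb) in freqs.items() if abs(pa - pb) > threshold]

print(select_informative(snp_freqs))  # ['rs_demo1', 'rs_demo3']
```

Only the strongly differentiated markers survive the filter; these are exactly the atypical loci, discussed below, that give ancestry panels their predictive power.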
The autosomal STRs used in DNA profiling are multiallelic, with high mutation rates and high heterozygosity, properties that suit them to individual identification. This might be expected to make forensic STR profiles poorly differentiated between populations, and indeed the 13 CODIS (Combined DNA Index System) loci have a global F_ST of approximately 4.5%, measured in the highly diverse Human Genome Diversity Project (HGDP) panel of indigenous populations, about one third of the approximately 15% observed by Lewontin. An F_ST-based analysis of a large worldwide dataset based on the 13 CODIS loci indicated that these STRs systematically underestimate inter-population genetic variation. However, application of the model-based clustering algorithm STRUCTURE showed that the CODIS loci give patterns of population clustering like those of other similar but independent sets of STRs. This study concluded that although forensic STRs do show relatively low F_ST (a measure that is depressed for markers that are highly heterozygous), their high heterozygosity actually strengthens ancestry inference compared to less heterozygous STR sets. It is worth noting that the correction factor for population structure, θ (discussed in the section above), is equivalent to F_ST if random mating is assumed within sub-populations. The values recommended by the NRC (1% in general and 3% for 'some small, isolated populations') are considerably smaller than the global CODIS estimate (4.5%); such low values are supported by analysis of forensic reference datasets, which could reflect relatively high degrees of inter-population admixture in the underlying samples. Forensic geneticists have investigated the ancestry information contained in autosomal STR profiles and developed methods to apply this in practice.
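The F_ST statistic discussed above can be illustrated with a minimal G_ST-style estimator for a single diallelic locus. The frequencies are invented, and production estimators (e.g. Weir-Cockerham) are considerably more elaborate:

```python
def fst_diallelic(freqs):
    """G_ST-style F_ST = (H_T - H_S) / H_T for one diallelic locus.

    H_S: mean expected heterozygosity within populations.
    H_T: expected heterozygosity at the pooled (mean) allele frequency.
    """
    h_s = sum(2 * p * (1 - p) for p in freqs) / len(freqs)
    p_bar = sum(freqs) / len(freqs)
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t

print(round(fst_diallelic([0.2, 0.4, 0.6]), 4))  # 0.1111
```

Identical frequencies across populations give F_ST of zero, and the statistic rises as populations diverge; as the text notes, highly heterozygous markers tend to depress it.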
For example, a machine-learning-based tool, PopAffiliator, claims approximately 86% accuracy in classifying 17-locus profiles to major regions essentially representing Europe, East Asia and sub-Saharan Africa. As with many predictive methods, the output of PopAffiliator provides probabilities of membership of either three or five different large population groups, and it is left to the user how to interpret or report these. Among adjacent regions, classification is less reliable, as illustrated by a STRUCTURE-based clustering analysis of the HGDP panel using 15 or 20 STRs: while European, African and Native American populations were highly differentiated, the HGDP populations of Europe, the Middle East and South Asia were not, and assigning a profile to one of these regions is inherently unreliable. To build panels of markers to predict the population of origin more robustly, loci were sought that maximized allele frequency differences between populations (ancestry informative markers; AIMs). As Lewontin observed, such markers are atypical: the most highly differentiated example in his set of classical markers was the Duffy blood group, showing a mean of just 63.6% of its total diversity within populations, compared to an average across loci of 85.4%. The Duffy negative allele was at greater than 90% frequency in sub-Saharan African populations, but at low frequency in most others. Such large differences are now taken as a signature of likely natural selection; in the case of Duffy, it was not until 3 years after Lewontin's paper that it was shown that erythrocytes from Duffy negative (now designated FYB^ES/FYB^ES homozygous) individuals were resistant to infection by the malaria parasite Plasmodium vivax. This same strongly selected marker was also the most highly differentiated locus in early AIM searches in the DNA era, and today it persists (as the SNP rs2814778) into many current BGA SNP multiplexes in forensic use.
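A likelihood-based assignment of the kind mentioned above can be sketched by scoring a genotype against per-population allele frequencies and picking the most likely source. The populations and frequencies are invented; real tools such as STRUCTURE use richer admixture models:

```python
import math

# Invented frequency of the 'A' allele at three diallelic markers per population.
freqs = {
    "pop1": [0.9, 0.8, 0.7],
    "pop2": [0.2, 0.3, 0.4],
}

def log_likelihood(genotype, p_list):
    """genotype gives the count of 'A' alleles (0, 1 or 2) at each marker."""
    ll = 0.0
    for g, p in zip(genotype, p_list):
        if g == 2:
            ll += math.log(p * p)              # homozygote AA
        elif g == 1:
            ll += math.log(2 * p * (1 - p))    # heterozygote
        else:
            ll += math.log((1 - p) * (1 - p))  # homozygote aa
    return ll

profile = [2, 2, 1]  # AA, AA, Aa
best = max(freqs, key=lambda pop: log_likelihood(profile, freqs[pop]))
print(best)  # pop1
```

An individual from a population absent from the reference set would still be assigned to whichever reference population scores highest, which is precisely the misassignment risk the text describes.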
Binary AIMs were defined as variants exceeding some threshold δ (the frequency of an allele in one population minus that in another; e.g. δ > 50%) in pairwise comparisons. Early on, both STRs and SNPs were included in AIM panels (with an adjustment of the calculation for multi-allelic loci), but today most sets are SNP-specific. One example designed for forensic use at a global level is a panel of 55 AIM SNPs that is available as a sensitive PCR multiplex. These SNPs were chosen based on their high allele frequency differences in various pairwise comparisons among a diverse collection of 63 populations. In an analysis of 3884 individuals from 73 populations using STRUCTURE, the most likely number of clusters (K) was eight, and the pattern of regional variation essentially resembled that observed for larger numbers of genomewide markers. Other AIM sets have been developed for discrimination at more local levels, for example Australia and the Pacific, and East Asia. At the individual level within populations, there can be considerable variation in cluster membership proportions, so, in a likelihood-based approach, individuals can often be misassigned. If a tested individual belongs to a population that is not included in the reference set, they tend to be assigned to some geographically allied population: assignment is only as good as the reference data. Notably, the development and testing of such SNP panels is mostly based on indigenous populations that are not believed to be admixed. These may be very different from those seen in real forensic scenarios where, in urban settings, complex admixture is commonplace.

No-suspect cases: DNA-based intelligence on phenotype prediction

Traits that are forensically useful are those that a witness might observe, and are collectively known as externally visible characteristics (EVCs). EVCs that are predictable from DNA variants need to be largely genetically determined, and to have relatively simple genetic architecture.
One such phenotype that has long been incorporated into standard DNA profiling is sex, predicted via a test for the presence or absence of the male-determining Y chromosome. Beyond this, research into the genetic basis of facial shape, hair type and stature has generated long lists of variants that contribute to these complex traits, but their predictive value is too low to make them of practical forensic use, despite commercial offerings that promise 'photofits' of individuals following DNA analysis. The phenotype that has received most attention is pigmentation, since this is relatively well characterized at the genetic level and variants are known that have large effects. The global apportionment of diversity in skin colour differs from that of hair and eye colour, reflecting differing evolutionary histories. Skin colour in indigenous populations shows a globally non-random geographical distribution, with people having the darkest skin in the tropics, and those with lighter skin at higher latitudes. The most widely held theory to explain the pattern of depigmentation from the human ancestral state of dark skin is the need to synthesize vitamin D in regions of low UV radiation. Following a similar methodology to Lewontin, Relethford quantified the apportionment of skin colour diversity, finding that just 9% of variation exists within populations: a reversal of the pattern found for classical markers, underscoring the fact that skin colour has not evolved neutrally. By contrast, most of the global variation in hair and eye colour is among Europeans, with non-Europeans tending to show low variation; this has been taken to reflect a lack of a role for natural selection, and sexual selection has been proposed, though not proven, to be involved.
The different histories and patterns of these pigmentation traits have influenced the search for underlying genetic variants: since variation in hair and eye colour is maximal within a relatively homogeneous European metapopulation, association studies have been productive; however, association studies for skin colour cannot easily be done across populations with different phenotypes, since the signal of ancestry obscures the phenotypic signals. Many years of research into the genetic basis of human pigmentation have yielded a collection of genes whose products govern the abundance, properties and distribution of melanin pigments, giving rise to natural variation in the colour of skin, hair and eyes. A set of 41 SNPs in a total of 19 genes (the HIrisPlex-S system) now allows estimation of individual probabilities for five skin, four hair and three eye colour categories from genotypes. The predictive models were developed and validated in a set of individuals (80% of them European) from indigenous populations; in admixed populations, while SNP-based eye and hair colour prediction perform well, skin colour prediction is less accurate, reflecting the more complex nature of this trait, and possible epistatic interactions between alleles.

The entanglement of population classification, ancestry and phenotype

In order to consider the apportionment of diversity among groups, we first need to define the groups. Lewontin's list of 169 populations today has a retro feel to it (figure 1b), involves some rather arbitrary choices, and raises questions about how we label and classify our fellow humans. Some terms are now regarded as derogatory or politically incorrect: there are Lapps (today, Saami), Eskimos (Inuit), Gypsies (Roma) and Hottentots (probably equivalent to Khoisan). Among the Amerinds are the Blackfoot, the Bloods, the Flathead and the Nez Percé.
There are labels of language (speakers of Hindi and Urdu), religion (Oriental Jews) and skin colour (US Blacks). Lewontin's seven racial classifiers (figure 1b) include Caucasian and Mongoloid, two of Blumenbach's eighteenth-century races. As well as using an SNP chip, today's population geneticist would also be likely to use a classification scheme informed by ethnolinguistic affiliation, geography and subject self-definition: the 1000 Genomes Project provides examples. In forensic practice, by contrast, analysis is carried out within the socio-political frameworks of national criminal justice systems that are rooted in their own different census populations, and often reach back into the past. Thus, the battles of the US-focused 'DNA fingerprinting wars' took place among the unhelpful confusion of Caucasian, Black and Hispanic categories. The first two are sometimes recast as European- and African-American; the last (derided by Lewontin as 'a biological hodgepodge') includes a diverse collection of Mexican, Puerto Rican, Guatemalan, Cuban, Spanish and other peoples with differing proportions of European, Native American and African ancestry. In the UK, six 'ethnic appearance' categories have been used: pale-skinned Caucasian, dark-skinned Caucasian, African/African-Caribbean, Indian subcontinent, East Asian and North African/Middle Eastern. Where the lines are to be drawn between these is far from clear, and they must contain endogamous sub-populations and varying degrees of admixture. In Malaysia, there are separate population reference databases for Malay, Chinese, Indian and Orang Asli indigenous people. Racial categories are context-dependent, rather than universal. Pigmentation phenotypes are a hallmark of many traditional race-based classifications, and in forensic genetics the conflation of pigmentation and ancestry persists not only through the way population groups are labelled, but also in the markers used in ancestry testing.
Two SNPs in the 55-SNP AIM set described above are also part of the HIrisPlex-S prediction set, and two more are pigmentation-associated. Other SNPs are associated with less obviously visible phenotypes: Duffy has already been mentioned, and other examples include a variant in the EDAR gene associated with thicker hair in Asians and a variant in the acetaldehyde dehydrogenase gene responsible for the Asian alcohol flush reaction. A move away from EVCs in ancestry SNP panels might help, but in practice ancestry and phenotypes are inextricably linked, because the information that a DNA sample came from a European, an East Asian or an African raises expectations about the appearance and social identity of that person. As well as robust prediction, an EVC has utility if it is generally rare in a population, since it can substantially narrow a pool of suspects. In the UK, a red hair test has been available for many years and is useful because the population frequency of the trait is just 5% or so. Predicted phenotypes that characterize minority ethnic populations can therefore be seen as valuable in a similar way, but are problematic in that they focus attention on groups that are often already the target of excessive police attention. In providing a probability of belonging to a particular group or having a particular appearance, these kinds of tests point not to an individual suspect, but to a pool or collective of similar suspects, and thus to the potential victimization of a community.

Conclusion

Lewontin notes in his 1972 paper that 'our perception of relatively large differences between human races and subgroups, as compared to the variation within these groups, is a biased perception and that, based on randomly chosen genetic differences, human races and populations are remarkably similar to each other'.
By focusing on variants that are far from random and that exaggerate the differences between populations, and by conflating ancestry and phenotypes, forensic BGA testing and the prediction of EVCs have the effect of reinforcing a link between ancestry and racial divergence that is not systematically present in the genome otherwise. Thus, despite the profound legacy of Lewontin's 1972 study, in a major area in which genetics coincides with issues of public engagement and interest, methods in the field tend to emphasize human differences beyond the picture that generally emerges from genetic and genomic evidence. It would be naïve to imagine that forensic scientists will give up their efforts to maximize intelligence from DNA evidence. However, it is also important to remember that these creative endeavours are undertaken because of the absence of universal forensic DNA databases. Indeed, the biases, ethical problems and invasions of privacy that the armoury of investigative methods present have been used to bolster the arguments for universal databasing. It has been argued that universal databases would be fairer to all citizens than the current discriminatory investigative databases, would aid exonerations of innocent people, would deter crime and would eliminate the invasion of privacy represented by mass screens (or 'dragnets') and familial searching. The rise of investigative genetic genealogy and the use (and abuse) of publicly accessible genetic data by law enforcement has been used to further strengthen arguments in favour of universal forensic databases. Problems with BGA testing and the prediction of EVCs could be marshalled as yet another justification.
Given Lewontin's own social activism and his commitment to building a better world, as well as his general scepticism about forensic genetics, it seems most unlikely that he would have signed up to universal databases, and there are certainly powerful arguments to be made against them: they would be expensive, place disproportionate restrictions upon individual rights to privacy, treat the population as suspects (rather than citizens presumed innocent) and raise serious problems in navigating consent and its inevitable refusal by some. Since forensic databases operate at the level of nations, there would be thorny issues around the DNA profiling of visiting workers and tourists, and no doubt different nations would behave differently in this respect. There is no reason to believe that the creation of universal databases would make criminal justice systems fairer for ethnic minorities. How can the current situation be mitigated? There is a clear need for good practice in considering human classifications in imperfect but important forensic probability estimates. Labels matter, and should be used more carefully; this should include a nuanced consideration of admixture, rather than the shoe-horning of DNA donors into individual groups. It is promising that the field has recently woken up to the issue of ethics, and in particular to the question of the informed consent of participants in forensic population studies. This suggests that the broader questions around how forensic genetics interacts with racial classifications and a public view of human difference should also be the subject of consideration and regular re-evaluation, rather than relying on tablets of stone from a previous era that represent empirical and arbitrary standards.
Finally, as with those who study and write about population genetics and genomics, there is a responsibility for the scientist who uses and reports on forensic prediction of ancestry and phenotypes to think carefully about the language, the narrative, and the message that they convey to the public.

Data accessibility. This article has no additional data.
Authors' contributions. M.A.J.: conceptualization, writing (original draft) and writing (review and editing).
|
Hemorrhoid management in women: the role of tribenoside + lidocaine

Hemorrhoids are commonly reported in women. However, despite the high prevalence of hemorrhoids in women and the major impact of this condition on quality of life, specific evidence and recommendations on the treatment of hemorrhoids in women are scant. This paper reviews various options in current therapy for hemorrhoids in women, namely medical intervention (topical and systemic drug therapy), and discusses the available clinical evidence for an appropriate use of over-the-counter topical formulations for the symptomatic treatment of hemorrhoids. Its focus is on a medical preparation containing tribenoside + lidocaine, available as a rectal cream (tribenoside 5%/lidocaine 2%) and a suppository (tribenoside 400 mg/lidocaine 40 mg) and marketed under the brand Procto-Glyvenol® (Recordati, SpA, Italy). Given its rapid, comprehensive efficacy on all the different symptoms of hemorrhoids, the tribenoside + lidocaine combination can find a place in the treatment of hemorrhoidal disease. Importantly, its efficacy and tolerability have been formally evaluated in several well-conducted studies, some of which were specifically conducted in women. In particular, tribenoside + lidocaine can be safely administered in postpartum women and in pregnant women after the first trimester of pregnancy. In pregnant women, the tribenoside/lidocaine combination significantly improved both subjective and objective symptoms of hemorrhoids. Fast onset of symptom relief was reported from 10 minutes after administration, lasting up to 10-12 hours. On these bases, tribenoside + lidocaine can represent a fast, effective, and safe option to treat hemorrhoids when conservative therapy is indicated, and it deserves consideration as a first-line treatment of this disease in clinical practice.
Introduction Hemorrhoids affect approximately 25% of the general population in their lifetime and are associated with several bothersome symptoms, such as painful defecation, itching with the urge to scratch, and bleeding, which in turn limit social activities and have a major impact on quality of life. The severity of hemorrhoids is classified into four stages, according to Goligher's classification (Table 1). More advanced stages of the disease require surgical treatment, while medical management and lifestyle interventions are suitable for grade I/II hemorrhoids, which represent the wide majority (>90%) of all reported cases. Remarkably, many patients experience hemorrhoids without seeking medical consultation because of embarrassment or fear, discomfort, and pain associated with the treatment. 4 In particular, hemorrhoids are commonly reported in women, mostly during pregnancy and postpartum. 5 Pregnancy and vaginal birth predispose women to develop symptomatic hemorrhoids for several reasons: hormonal changes, increased intra-abdominal pressure, straining during defecation due to constipation, prolonged straining during the second stage of labor for more than 20 minutes, and giving birth to a baby with a weight over 3800 g. 6 The high levels of progesterone during pregnancy weaken the muscles of the venous walls and reduce venous tone; any combination of increased intra-abdominal pressure, increased venous congestion from the weight of the fetus, and obstruction of venous return contributes to the development of pathological changes and the incidence of hemorrhoids. Women with this condition may ultimately experience anal incontinence and may also report difficulties in dealing with hygienic problems. 6 Of note, women may perceive hemorrhoids as an embarrassing and sensitive disease, with a consequent reluctance to ask for medical attention. 
6 Despite the high prevalence of hemorrhoids in women and the major impact of this condition on quality of life, specific evidence and recommendations on the treatment of hemorrhoids in women are scant. 5 This paper reviews current options in the therapy for hemorrhoids in women-namely, medical intervention (topical and systemic drug therapy)-and discusses the available clinical evidence for an appropriate use of over-the-counter topical formulations for the symptomatic treatment of hemorrhoids, with a focus on the combination of tribenoside + lidocaine, which has been demonstrated to be a fast, effective, and well-tolerated option for the local treatment of low-grade hemorrhoids. 7 Selection of evidence Papers considered for this review were retrieved by a PubMed search, using different combinations of pertinent keywords (e.g., tribenoside and hemorrhoids), without any limitations on publication date or language. Documents from the authors' personal collection of literature could also be considered. Papers were selected for inclusion according to their relevance for the topic, as judged by the author. Management of hemorrhoids in women: state of the art In women with hemorrhoids, symptoms include pain, itching, and intermittent bleeding from the anus; quality of life can vary from mild physical and psychological discomfort to difficulty in dealing with everyday activities, depending on the severity of pain. 6,8 As hemorrhoids are such a common condition in women and can be associated with both physical symptoms and quality-of-life impairment, prevention is crucial. Dietary modification consisting of adequate fluid and fiber intake represents the primary approach to patients at high risk of hemorrhoid disease. 9 In patients with overt hemorrhoids, hygienic and dietary measures should be taken to prevent constipation, with the aim of maintaining soft, bulked stools that pass easily without straining during defecation. 
9 A diet high in fiber (approximately 20-35 g/day) or intake of fiber supplements, such as maltodextrin-resistant fruit oligosaccharides, psyllium, methylcellulose, or calcium polycarbophil, can be recommended. There have been several randomized controlled trials (RCTs) studying the relationship between dietary fiber and constipation. Some studies reported that dietary fiber can increase stool frequency, improve stool consistency, and have no obvious adverse effects. However, in another study, dietary fiber was not found to be more effective than placebo in achieving therapeutic success, and it might increase the frequency of abdominal pain. Furthermore, large trials examining the effect of dietary fiber in the treatment of constipation are needed, possible influential factors should be considered, and more gastrointestinal symptoms and adverse events should be reported before dietary fiber is formally recommended. 10 In addition to a high-fiber diet, it is also important to increase fluid intake, which adds moisture to stool, thus reducing constipation. Of note, no studies have been published so far showing that increasing liquid volume is effective as a treatment in euhydrated subjects with chronic constipation. Nevertheless, inadequate fluid intake or excessive fluid loss from diarrhea, vomiting, or febrile illness may cause hardening of the stool and is considered to be an important cause of constipation, especially in infants. 11 Increasing liquid intake is commonly recommended for constipated children, adults, and elderly subjects. Although the effects of fluid intake on constipation have never been fully studied or understood, the recommendation has remained mostly out of tradition. 12 Finally, the use of polyethylene glycol (PEG/macrogol 4000), an osmotic laxative, can also be considered a safe and effective treatment, even during pregnancy. 
In this respect, PEG/macrogol should be considered a first-line option, due to its minimal absorption and elimination in the urine without being metabolized. 13 Lastly, moisturizing and cleansing wipes can be used as a replacement for toilet paper by providing a cool, soothing sensation around the back passage. 14 Formulations (either wipes or intimate soap/gel) containing extract of Ruscus aculeatus, well known for its soothing properties, and devoid of alcohol, can provide relief for people suffering from hemorrhoids. Extract of Ruscus aculeatus has been documented to be effective in increasing venous tone because of its anti-inflammatory and astringent properties. 15

Table 1. Goligher's classification of hemorrhoids.
Grade I: The anal cushions bleed but do not prolapse.
Grade II: The anal cushions prolapse through the anus on straining but reduce spontaneously.
Grade III: The anal cushions prolapse through the anus on straining or exertion and require manual replacement into the anal canal.
Grade IV: The prolapse stays out at all times and is irreducible. Acutely thrombosed, incarcerated internal hemorrhoids and incarcerated, thrombosed hemorrhoids involving circumferential rectal mucosal prolapse are also fourth-degree hemorrhoids.

Conservative measures with local treatment are also recommended, while surgical procedures should be indicated only in case of failure of conservative treatment or high-grade disease. 16,17 As stated by the American Society of Colon and Rectal Surgeons (ASCRS) guidelines, medical therapy for hemorrhoids is based upon a heterogeneous group of options that can be offered with expectations of minimal harm and a decent potential for relief. 9,18 Current medical preparations are available as topical creams, ointments, gels, lotions, suppositories, and pads. 
These preparations may contain various ingredients, such as local anesthetics, corticosteroids, vasoconstrictors, antiseptics, keratolytics, protectants (e.g., mineral oils, cocoa butter), and astringents (ingredients that cause coagulation, e.g., witch hazel). 19 Clinicians should educate patients to use only medications whose efficacy and safety, including in special conditions (e.g., pregnancy), have been firmly established. As inflammation plays an important role especially in the cutaneous symptoms of hemorrhoidal disease, 22 topical antihemorrhoidal preparations containing an anti-inflammatory agent and a local anesthetic are extensively used, and their use is supported by current medical practice. 19 Some commonly used combinations include ketocaine/fluocinolone and hydrocortisone/benzocaine. Corticosteroids can be effective in this scenario; however, these molecules, often available as prodrugs, may be associated with a risk of systemic absorption, being distributed, metabolized, and excreted systemically. Thus, their higher lipophilicity may limit their application over the middle-term period or in women who are elderly, breastfeeding, or pregnant. Accordingly, topical formulations providing alternatives to corticosteroids, but still endowed with anti-inflammatory and wound-healing effects, are highly desirable for adequate control of both objective and subjective symptoms of hemorrhoids. The topical combination of tribenoside and lidocaine (marketed under the brand Procto-Glyvenol®, Recordati SpA, Italy) addresses the aforementioned criteria along with a formal and robust evaluation of its efficacy and safety in a number of well-conducted studies, most of which included a comparator arm using a reference treatment for grade I and II hemorrhoids-that is, steroid-based preparations (hydrocortisone, prednisolone, fluocortolone)-as recently reviewed. 
7 Tribenoside + lidocaine combines the rapid local anesthetic action exerted by lidocaine with the efficacy of tribenoside in reducing inflammation, promoting local healing, and favoring the recovery of local vessels to normal conditions. This double mechanism of action allows control of both subjective (e.g., pain and discomfort) and objective (e.g., prolapse and bleeding) symptoms of hemorrhoids. In addition to the topical remedies, preparations for oral use including vasoactive ingredients have been proven to be effective. Based on the experience of their efficacy in the treatment of chronic venous insufficiency, phlebotonic drugs for oral use have largely been prescribed to treat hemorrhoids and are supported by well-grounded evidence. 9, Phlebotonics are a heterogeneous class of drugs consisting of plant extracts (i.e., flavonoids) and synthetic compounds (i.e., calcium dobesilate). Although their precise mechanism of action has not been fully established, they are known to improve venous tone, stabilize capillary permeability, and increase lymphatic drainage. The evidence suggests that there is a potential benefit in using phlebotonics in treating hemorrhoidal disease as well as a benefit in alleviating post-hemorrhoidectomy symptoms. Outcomes, such as bleeding and overall symptom improvement, show a statistically significant beneficial effect, and there were few concerns regarding their overall safety from the evidence presented in the clinical trials. 26 In particular, diosmin shows anti-inflammatory and antioxidant activities, as well as phlebotonic and vasoactive effects, improving venous tone, lymphatic drainage, and capillary hyperpermeability. 27 A new micronized diosmin formulation has been recently developed to increase bioavailability for the treatment of hemorrhoids. 28 Ginkgo-based preparations, including extract from the Ginkgo biloba tree (e.g., Ginkor brand), are also used as an oral supplement. 
29 Tribenoside + lidocaine: an effective and well-tolerated combination therapy

The tribenoside + lidocaine combination is a medical preparation for the local treatment of hemorrhoids, delivered as a suppository or rectal cream (Procto-Glyvenol®). This combination offers rapid and comprehensive efficacy on all the different symptoms of hemorrhoids thanks to the pharmacological peculiarities of its single components, the saccharide tribenoside and the fast-acting anesthetic lidocaine. Tribenoside possesses a wide spectrum of activities, including anti-inflammatory, mild analgesic, antitoxic, wound-healing, fibrinolysis-promoting, antiarthritic, membrane-stabilizing, and venotropic properties, along with a favorable tolerability profile toward the gastrointestinal and immune systems. 30 Importantly, the aforementioned activities differentiate tribenoside from the standard treatment of hemorrhoids-that is, topical corticosteroids-thus strengthening its place in therapy in the local treatment of hemorrhoids, providing comparable efficacy in reducing the associated inflammation and better tolerability than corticosteroids. 7,30 The combination with lidocaine confers an additional benefit (e.g., the fast relief from pain and itching), which is desirable during the acute phase of a hemorrhoidal crisis to help patients cope better with the most bothersome complaints, including itching. Therefore, tribenoside + lidocaine can improve both objective symptoms (inflammation, hemorrhage, secretion), thanks to the presence of tribenoside, and subjective symptoms (pain, itching, sense of weight, and tenesmus), given the local action of lidocaine. 7

The clinical evidence of tribenoside + lidocaine efficacy and tolerability stems from several clinical studies conducted in over 1200 patients of either gender, either versus its two individual components or versus steroids in the same setting as previously described. 7, The studies investigated both formulations of the tribenoside + lidocaine combination (suppositories and cream), either within the same study design or in distinct trials. Most importantly, the combination was compared and contrasted with active comparators, and the treatment duration ranged between 10 and 28 days to investigate long-term tolerability. A detailed description of available studies goes beyond the aims of this review and can be found elsewhere. 7 Here, we focus only on studies specifically conducted in women or those involving a wide majority of women among their participants. However, all studies enrolled a large population of women treated with tribenoside + lidocaine; safety, reported as a subjective evaluation by the study investigators, was excellent in all cases. Of note, the rare adverse reactions reported during treatment were local, such as burning (application site pain), rash, and pruritus. 42

Moggian and colleagues conducted two controlled, double-blind studies, published in a single paper, evaluating 67 women (mean age: 33 years) with hemorrhoidal disease as a consequence of pregnancy or delivery (either internal or external or mixed hemorrhoids). 33 In one study, tribenoside 400 mg + lidocaine 40 mg suppositories (n=21) administered twice daily for up to 10 days were compared with lidocaine 40 mg only (n=20). In the parallel double-blind evaluation, tribenoside + lidocaine suppositories (n=13) were compared with suppositories of hydrocortisone 1% (n=13). 33 Clinical evaluation of objective (secretion, hemorrhage, nodules) and subjective (pain, burning, pruritus) symptoms was assessed on a 4-point scale (0 = absent, 1 = mild, 2 = moderate, 3 = severe). Mean total scores for objective or subjective symptoms were calculated by adding the means of each symptom score, and the differences between treatments were analyzed by a nonparametric test (Figure 1). The tolerability of tribenoside + lidocaine was evaluated by the investigators as "very good" in all cases.

Figure 1. Improvement of subjective and objective symptoms of hemorrhoids with tribenoside + lidocaine and a hydrocortisone-containing preparation in 26 women with hemorrhoidal disease as a consequence of pregnancy or delivery, expressed as the difference between the score reported after treatment and the score reported at baseline. p<0.001 for both preparations versus baseline; p<0.01 for tribenoside + lidocaine versus hydrocortisone 1% in the effect on subjective symptoms. Graphical elaboration of data in Moggian. 33

Delarue and colleagues evaluated the effectiveness and safety of tribenoside + lidocaine during pregnancy and postpartum. In total, 40 women with hemorrhoids as a consequence of pregnancy (n=33) or delivery (n=7) were treated with oral tribenoside 400 mg 2-6 times daily for 10 days (postpartum) or 20 days (pregnancy). 38 Local treatment with tribenoside + lidocaine suppositories twice daily was administered for 5 days in 19 women who complained of bothersome pain. Although the open-label study design may provide biased results, 15 patients were satisfied with the treatment, 18 were moderately satisfied, and only 7 patients were poorly satisfied. Nevertheless, positive results were reported in 82.5% of cases, and a fast onset of symptom relief was reported when suppositories were administered (10 minutes to 1 hour). Tolerability was considered very good in all patients: no systemic or local adverse effects were observed.

Lastly, a long-term, open-label study was carried out by Zurita-Briceno in 30 patients (25 women) who were treated with tribenoside + lidocaine suppositories three times daily for 4 weeks. Efficacy was judged as "excellent" or "good" by over 80% of patients, with very few adverse events, none related to treatment. 41

Conclusions and place in therapy

Hemorrhoids are a common condition. A large number of over-the-counter products are available on the market to address the prevalent attitude of patients to prefer self-medication over medical consultation. 1 The standard treatment is with steroids, but they are unfortunately burdened by an unbalanced benefit/risk ratio that hampers their usefulness, 18,43 thus limiting their therapeutic potential in some special populations, such as pregnant/breastfeeding women or the elderly. Given its rapid and comprehensive efficacy on all the different symptoms of hemorrhoids, the tribenoside + lidocaine combination can find a place in the treatment of hemorrhoidal disease. Importantly, its efficacy and safety have been formally evaluated in several well-conducted studies, most of which included a comparator arm, either semiplacebo preparations containing the monocomponents (only lidocaine or only tribenoside) or steroid-containing preparations. No data versus placebo or vehicle are available. Overall, the combination of tribenoside + lidocaine was found to be superior to the single components in symptom improvement, likely due to its ability to ameliorate both subjective and objective symptoms at the same time. The effects on subjective symptoms were rapidly observed after the administration of the combination (i.e., 10-30 minutes). 31 Moreover, the combination of tribenoside + lidocaine was at least as effective as the gold-standard treatment for hemorrhoids-that is, steroid-based preparations-and sometimes superior in providing prompt relief of bothersome symptoms, such as pain and itching. Finally, tolerability was excellent in all available studies, with only negligible adverse events being reported. 7 It is noteworthy that this combination can be particularly suitable for some populations of patients at high risk of hemorrhoids in whom steroids could be contraindicated. 
In particular, tribenoside + lidocaine can be safely administered in postpartum women and in pregnant women after the first trimester of pregnancy (although no randomized studies have been conducted in this specific population). In addition, the combination of tribenoside + lidocaine can be suitable in athletes for whom rectal steroids are prohibited. In conclusion, tribenoside + lidocaine may represent a fast, effective, and safe option to treat hemorrhoids when conservative therapy is indicated, and it deserves consideration as first-line treatment of this disease in clinical practice. Contributions: The named author meets the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, takes responsibility for the integrity of the work as a whole, and has given his approval for this version to be published. Disclosure and potential conflicts of interest: The author declares that he has no conflicts of interest. The International Committee of Medical Journal Editors (ICMJE) Potential Conflicts of Interests form for the authors is available for download at http://www.drugsincontext.com/wp-content/uploads/2019/08/dic.212602-COI.pdf |
// types/hook_dal.go
package types
// HookDal holds webhook configuration data for the data access layer.
type HookDal struct {
DiscordHook string
Hook string
Type string
}
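Since the file defines only a plain data struct, a short usage sketch may help. The JSON round trip below, the `marshalHook` helper, and the URLs are illustrative assumptions, not part of the original repository:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// HookDal mirrors the struct defined in types/hook_dal.go.
type HookDal struct {
	DiscordHook string
	Hook        string
	Type        string
}

// marshalHook serializes a HookDal to JSON, as a persistence layer
// might do before writing the record to storage.
func marshalHook(h HookDal) (string, error) {
	b, err := json.Marshal(h)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	// Placeholder values, not taken from the original project.
	h := HookDal{
		DiscordHook: "https://discord.example/webhook",
		Hook:        "https://ci.example/hook",
		Type:        "discord",
	}
	s, err := marshalHook(h)
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}
```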
|
Who should have carotid surgery or angioplasty? Carotid endarterectomy reduces the overall risk of stroke in patients with ECST 70-99% recently symptomatic stenosis and, to a lesser extent, at least in the short term, in patients with severe asymptomatic stenosis. Whether angioplasty and stenting is a reasonable alternative will be decided by the results of ongoing RCTs of angioplasty versus endarterectomy. The current policy of operating on all patients with a recently symptomatic severe carotid stenosis will, on average, do more good than harm. However, the number of patients needed to treat to prevent one stroke is still relatively high. The effectiveness of endarterectomy could be improved by selecting patients more rigorously. Subgroup analysis and risk factor modelling are likely to be of some value, but further testing is required before final models can be recommended for routine use in clinical practice. However, it is also likely that predictive models will eventually take into account information on cerebral microemboli, cerebral perfusion, and genetic characteristics. The development and validation of integrated predictive models, combining these different modalities, will require large prospective clinical studies. |
Mapping XML documents to the object-relational form In e-commerce, partners doing business need to exchange many forms of documents as a means of communication between them through the Internet, such as product catalogs, purchase orders, invoices, and so on. XML, a new standard adopted by the World Wide Web Consortium to complement HTML for data exchange on the Web, has been used as a language for describing the documents used in e-commerce, as in xCBL. For XML to be used effectively in e-commerce, we need a system to store XML data, retrieve the data when a request is given, create a required document based on the data, and send the created document to the partners. In this paper, we are concerned with storing XML data. We suggest a method to store XML documents in the object-relational form, as an alternative that combines the advantages of the object-oriented and relational table approaches. We then implement it on the Oracle 8i database. |
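To illustrate the kind of mapping the abstract describes, the sketch below parses a toy purchase-order document into an object view and then flattens the nested items into parent-keyed rows, the shape an object-relational table would store. The element names, struct layout, and flattening rule are illustrative assumptions, not taken from the paper or from xCBL:

```go
package main

import (
	"encoding/xml"
	"fmt"
)

// Order and Item model the object view of a purchase-order document.
type Order struct {
	ID    string `xml:"id,attr"`
	Buyer string `xml:"buyer"`
	Items []Item `xml:"item"`
}

type Item struct {
	SKU string `xml:"sku"`
	Qty int    `xml:"qty"`
}

// ItemRow is the relational projection: each nested <item> becomes a
// row carrying its parent's key, as a table-per-nested-element
// mapping would store it.
type ItemRow struct {
	OrderID string
	SKU     string
	Qty     int
}

// flatten parses the document and emits one row per nested item.
func flatten(doc []byte) ([]ItemRow, error) {
	var o Order
	if err := xml.Unmarshal(doc, &o); err != nil {
		return nil, err
	}
	rows := make([]ItemRow, 0, len(o.Items))
	for _, it := range o.Items {
		rows = append(rows, ItemRow{OrderID: o.ID, SKU: it.SKU, Qty: it.Qty})
	}
	return rows, nil
}

func main() {
	doc := []byte(`<order id="po-17"><buyer>Acme</buyer>` +
		`<item><sku>A1</sku><qty>2</qty></item>` +
		`<item><sku>B2</sku><qty>5</qty></item></order>`)
	rows, err := flatten(doc)
	if err != nil {
		panic(err)
	}
	for _, r := range rows {
		fmt.Printf("%s %s %d\n", r.OrderID, r.SKU, r.Qty)
	}
}
```

An object-relational database would let the nested items be stored either as a collection type inside the order object or, as above, as a child table keyed by the parent order.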
// Copyright 2021 <NAME>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// SPDX-License-Identifier: Apache-2.0
//
#include "ysfx.hpp"
#include "ysfx_config.hpp"
#include "ysfx_api_gfx.hpp"
#include "ysfx_eel_utils.hpp"
#if !defined(YSFX_NO_GFX)
# include "lice_stb/lice_stb_loaders.hpp"
# define WDL_NO_DEFINE_MINMAX
# include "WDL/lice/lice.h"
# include "WDL/lice/lice_text.h"
# include "WDL/wdlstring.h"
#endif
#include <vector>
#include <queue>
#include <unordered_set>
#include <memory>
#include <atomic>
#include <cassert>
#if !defined(YSFX_NO_GFX)
#define GFX_GET_CONTEXT(opaque) (((opaque)) ? ysfx_gfx_get_context((ysfx_t *)(opaque)) : nullptr)
enum {
ysfx_gfx_max_images = 1024,
ysfx_gfx_max_fonts = 128,
ysfx_gfx_max_input = 1024,
};
class eel_lice_state;
struct ysfx_gfx_state_t {
ysfx_gfx_state_t(ysfx_t *fx);
~ysfx_gfx_state_t();
std::unique_ptr<eel_lice_state> lice;
std::queue<uint32_t> input_queue;
std::unordered_set<uint32_t> keys_pressed;
ysfx_real scale = 0.0;
void *callback_data = nullptr;
int (*show_menu)(void *, const char *, int32_t, int32_t) = nullptr;
void (*set_cursor)(void *, int32_t) = nullptr;
const char *(*get_drop_file)(void *user_data, int32_t index) = nullptr;
};
//------------------------------------------------------------------------------
#if !defined(YSFX_NO_GFX)
static bool eel_lice_get_filename_for_string(void *opaque, EEL_F idx, WDL_FastString *fs, int iswrite)
{
if (iswrite)
return false; // this is neither supported nor used
ysfx_t *fx = (ysfx_t *)opaque;
std::string filepath;
if (!ysfx_find_data_file(fx, &idx, filepath))
return false;
if (fs) fs->Set(filepath.data(), (uint32_t)filepath.size());
return true;
}
#define EEL_LICE_GET_FILENAME_FOR_STRING(idx, fs, p) \
eel_lice_get_filename_for_string(opaque, (idx), (fs), (p))
#endif
//------------------------------------------------------------------------------
#if !defined(YSFX_NO_GFX)
# include "ysfx_api_gfx_lice.hpp"
#else
# include "ysfx_api_gfx_dummy.hpp"
#endif
//------------------------------------------------------------------------------
#if !defined(YSFX_NO_GFX)
static bool translate_special_key(uint32_t uni_key, uint32_t &jsfx_key)
{
auto key_c = [](uint8_t a, uint8_t b, uint8_t c, uint8_t d) -> uint32_t {
return a | (b << 8) | (c << 16) | (d << 24);
};
switch (uni_key) {
default: return false;
case ysfx_key_delete: jsfx_key = key_c('d', 'e', 'l', 0); break;
case ysfx_key_f1: jsfx_key = key_c('f', '1', 0, 0); break;
case ysfx_key_f2: jsfx_key = key_c('f', '2', 0, 0); break;
case ysfx_key_f3: jsfx_key = key_c('f', '3', 0, 0); break;
case ysfx_key_f4: jsfx_key = key_c('f', '4', 0, 0); break;
case ysfx_key_f5: jsfx_key = key_c('f', '5', 0, 0); break;
case ysfx_key_f6: jsfx_key = key_c('f', '6', 0, 0); break;
case ysfx_key_f7: jsfx_key = key_c('f', '7', 0, 0); break;
case ysfx_key_f8: jsfx_key = key_c('f', '8', 0, 0); break;
case ysfx_key_f9: jsfx_key = key_c('f', '9', 0, 0); break;
case ysfx_key_f10: jsfx_key = key_c('f', '1', '0', 0); break;
case ysfx_key_f11: jsfx_key = key_c('f', '1', '1', 0); break;
case ysfx_key_f12: jsfx_key = key_c('f', '1', '2', 0); break;
case ysfx_key_left: jsfx_key = key_c('l', 'e', 'f', 't'); break;
case ysfx_key_up: jsfx_key = key_c('u', 'p', 0, 0); break;
case ysfx_key_right: jsfx_key = key_c('r', 'g', 'h', 't'); break;
case ysfx_key_down: jsfx_key = key_c('d', 'o', 'w', 'n'); break;
    case ysfx_key_page_up: jsfx_key = key_c('p', 'g', 'u', 'p'); break;
    case ysfx_key_page_down: jsfx_key = key_c('p', 'g', 'd', 'n'); break;
case ysfx_key_home: jsfx_key = key_c('h', 'o', 'm', 'e'); break;
case ysfx_key_end: jsfx_key = key_c('e', 'n', 'd', 0); break;
case ysfx_key_insert: jsfx_key = key_c('i', 'n', 's', 0); break;
}
return true;
}
static EEL_F NSEEL_CGEN_CALL ysfx_api_gfx_getchar(void *opaque, EEL_F *p)
{
ysfx_gfx_state_t *state = GFX_GET_CONTEXT(opaque);
if (!state)
return 0;
if (*p >= 1/*2*/) { // NOTE(jpc) this is 2.0 originally, which seems wrong
if (*p == 65536) {
// TODO implement window flags
return 0;
}
// current key down status
uint32_t key = (uint32_t)*p;
uint32_t key_id;
if (translate_special_key(key, key))
key_id = key;
else if (key < 256)
key_id = ysfx::latin1_tolower(key);
else // support the Latin-1 character set only
return 0;
return (EEL_F)(state->keys_pressed.find(key_id) != state->keys_pressed.end());
}
if (!state->input_queue.empty()) {
uint32_t key = state->input_queue.front();
state->input_queue.pop();
return (EEL_F)key;
}
return 0;
}
static EEL_F NSEEL_CGEN_CALL ysfx_api_gfx_showmenu(void *opaque, INT_PTR nparms, EEL_F **parms)
{
ysfx_gfx_state_t *state = GFX_GET_CONTEXT(opaque);
if (!state || !state->show_menu)
return 0;
ysfx_t *fx = (ysfx_t *)state->lice->m_user_ctx;
std::string desc;
if (!ysfx_string_get(fx, *parms[0], desc) || desc.empty())
return 0;
int32_t x = (int32_t)*fx->var.gfx_x;
int32_t y = (int32_t)*fx->var.gfx_y;
return state->show_menu(state->callback_data, desc.c_str(), x, y);
}
static EEL_F NSEEL_CGEN_CALL ysfx_api_gfx_setcursor(void *opaque, INT_PTR nparms, EEL_F **parms)
{
ysfx_gfx_state_t *state = GFX_GET_CONTEXT(opaque);
if (!state || !state->set_cursor)
return 0;
int32_t id = (int32_t)*parms[0];
state->set_cursor(state->callback_data, id);
return 0;
}
static EEL_F NSEEL_CGEN_CALL ysfx_api_gfx_getdropfile(void *opaque, INT_PTR np, EEL_F **parms)
{
ysfx_gfx_state_t *state = GFX_GET_CONTEXT(opaque);
if (!state || !state->get_drop_file)
return 0;
const int32_t idx = (int)*parms[0];
if (idx < 0) {
state->get_drop_file(state->callback_data, -1);
return 0;
}
const char *file = state->get_drop_file(state->callback_data, idx);
if (!file)
return 0;
if (np > 1) {
ysfx_t *fx = (ysfx_t *)state->lice->m_user_ctx;
ysfx_string_set(fx, *parms[1], file);
}
return 1;
}
#endif
//------------------------------------------------------------------------------
#if !defined(YSFX_NO_GFX)
ysfx_gfx_state_t::ysfx_gfx_state_t(ysfx_t *fx)
: lice{new eel_lice_state{fx->vm.get(), fx, ysfx_gfx_max_images, ysfx_gfx_max_fonts}}
{
lice->m_framebuffer = new LICE_WrapperBitmap{nullptr, 0, 0, 0, false};
}
ysfx_gfx_state_t::~ysfx_gfx_state_t()
{
}
#endif
ysfx_gfx_state_t *ysfx_gfx_state_new(ysfx_t *fx)
{
return new ysfx_gfx_state_t{fx};
}
void ysfx_gfx_state_free(ysfx_gfx_state_t *state)
{
delete state;
}
void ysfx_gfx_state_set_bitmap(ysfx_gfx_state_t *state, uint8_t *data, uint32_t w, uint32_t h, uint32_t stride)
{
if (stride == 0)
stride = 4 * w;
eel_lice_state *lice = state->lice.get();
assert(stride % 4 == 0);
*static_cast<LICE_WrapperBitmap *>(lice->m_framebuffer) = LICE_WrapperBitmap{(LICE_pixel *)data, (int)w, (int)h, (int)(stride / 4), false};
}
void ysfx_gfx_state_set_scale_factor(ysfx_gfx_state_t *state, ysfx_real scale)
{
state->scale = scale;
}
void ysfx_gfx_state_set_callback_data(ysfx_gfx_state_t *state, void *callback_data)
{
state->callback_data = callback_data;
}
void ysfx_gfx_state_set_show_menu_callback(ysfx_gfx_state_t *state, int (*callback)(void *, const char *, int32_t, int32_t))
{
state->show_menu = callback;
}
void ysfx_gfx_state_set_set_cursor_callback(ysfx_gfx_state_t *state, void (*callback)(void *, int32_t))
{
state->set_cursor = callback;
}
void ysfx_gfx_state_set_get_drop_file_callback(ysfx_gfx_state_t *state, const char *(*callback)(void *, int32_t))
{
state->get_drop_file = callback;
}
bool ysfx_gfx_state_is_dirty(ysfx_gfx_state_t *state)
{
return state->lice->m_framebuffer_dirty;
}
void ysfx_gfx_state_add_key(ysfx_gfx_state_t *state, uint32_t mods, uint32_t key, bool press)
{
if (key < 1)
return;
uint32_t key_id;
if (translate_special_key(key, key))
key_id = key;
else if (key < 256)
key_id = ysfx::latin1_tolower(key);
else // support the Latin-1 character set only
return;
uint32_t key_with_mod = key;
if (key_id >= 'a' && key_id <= 'z') {
uint32_t off = (uint32_t)(key_id - 'a');
if (mods & (ysfx_mod_ctrl|ysfx_mod_alt))
key_with_mod = off + 257;
else if (mods & ysfx_mod_ctrl)
key_with_mod = off + 1;
else if (mods & ysfx_mod_alt)
key_with_mod = off + 321;
}
if (press && key_with_mod > 0) {
while (state->input_queue.size() >= ysfx_gfx_max_input)
state->input_queue.pop();
state->input_queue.push(key_with_mod);
}
if (press)
state->keys_pressed.insert(key_id);
else
state->keys_pressed.erase(key_id);
}
//------------------------------------------------------------------------------
void ysfx_gfx_enter(ysfx_t *fx, bool doinit)
{
    fx->gfx.mutex.lock();
    ysfx_gfx_state_t *state = fx->gfx.state.get();

    if (doinit) {
        if (fx->gfx.must_init.exchange(false, std::memory_order_acquire)) {
            *fx->var.gfx_r = 1.0;
            *fx->var.gfx_g = 1.0;
            *fx->var.gfx_b = 1.0;
            *fx->var.gfx_a = 1.0;
            *fx->var.gfx_a2 = 1.0;
            *fx->var.gfx_dest = -1.0;
            *fx->var.mouse_wheel = 0.0;
            *fx->var.mouse_hwheel = 0.0;
            // NOTE: the above are according to eel_lice.h `resetVarsToStock`;
            // it helps to reset a few more, especially for clearing
            *fx->var.gfx_mode = 0;
            *fx->var.gfx_clear = 0;
            *fx->var.gfx_texth = 0;
            *fx->var.mouse_cap = 0;
            // reset key state
            state->input_queue = {};
            state->keys_pressed = {};
            // reset lice
            eel_lice_state *lice = state->lice.get();
            LICE_WrapperBitmap framebuffer = *static_cast<LICE_WrapperBitmap *>(lice->m_framebuffer);
            state->lice.reset();
            lice = new eel_lice_state{fx->vm.get(), fx, ysfx_gfx_max_images, ysfx_gfx_max_fonts};
            state->lice.reset(lice);
            lice->m_framebuffer = new LICE_WrapperBitmap(framebuffer);
            // load images from filenames
            uint32_t numfiles = (uint32_t)fx->source.main->header.filenames.size();
            for (uint32_t i = 0; i < numfiles; ++i)
                lice->gfx_loadimg(fx, (int32_t)i, (EEL_F)i);
            fx->gfx.ready = true;
        }
    }

    ysfx_set_thread_id(ysfx_thread_id_gfx);
}
void ysfx_gfx_leave(ysfx_t *fx)
{
    ysfx_set_thread_id(ysfx_thread_id_none);
    fx->gfx.mutex.unlock();
}

ysfx_gfx_state_t *ysfx_gfx_get_context(ysfx_t *fx)
{
    if (!fx)
        return nullptr;

    // NOTE: make sure that this will be used from the @gfx thread only
    if (ysfx_get_thread_id() != ysfx_thread_id_gfx)
        return nullptr;

    return fx->gfx.state.get();
}

void ysfx_gfx_prepare(ysfx_t *fx)
{
    ysfx_gfx_state_t *state = ysfx_gfx_get_context(fx);
    eel_lice_state *lice = state->lice.get();

    lice->m_framebuffer_dirty = false;

    // set variables `gfx_w` and `gfx_h`
    ysfx_real gfx_w = (ysfx_real)lice->m_framebuffer->getWidth();
    ysfx_real gfx_h = (ysfx_real)lice->m_framebuffer->getHeight();
    if (state->scale > 1.0) {
        gfx_w *= state->scale;
        gfx_h *= state->scale;
        *fx->var.gfx_ext_retina = state->scale;
    }
    *fx->var.gfx_w = gfx_w;
    *fx->var.gfx_h = gfx_h;
}
#endif
//------------------------------------------------------------------------------
void ysfx_api_init_gfx()
{
#if !defined(YSFX_NO_GFX)
    lice_stb_install_loaders();
#endif

    NSEEL_addfunc_retptr("gfx_lineto", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_lineto);
    NSEEL_addfunc_retptr("gfx_lineto", 2, NSEEL_PProc_THIS, &ysfx_api_gfx_lineto2);
    NSEEL_addfunc_retptr("gfx_rectto", 2, NSEEL_PProc_THIS, &ysfx_api_gfx_rectto);
    NSEEL_addfunc_varparm("gfx_rect", 4, NSEEL_PProc_THIS, &ysfx_api_gfx_rect);
    NSEEL_addfunc_varparm("gfx_line", 4, NSEEL_PProc_THIS, &ysfx_api_gfx_line);
    NSEEL_addfunc_varparm("gfx_gradrect", 8, NSEEL_PProc_THIS, &ysfx_api_gfx_gradrect);
    NSEEL_addfunc_varparm("gfx_muladdrect", 7, NSEEL_PProc_THIS, &ysfx_api_gfx_muladdrect);
    NSEEL_addfunc_varparm("gfx_deltablit", 9, NSEEL_PProc_THIS, &ysfx_api_gfx_deltablit);
    NSEEL_addfunc_exparms("gfx_transformblit", 8, NSEEL_PProc_THIS, &ysfx_api_gfx_transformblit);
    NSEEL_addfunc_varparm("gfx_circle", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_circle);
    NSEEL_addfunc_varparm("gfx_triangle", 6, NSEEL_PProc_THIS, &ysfx_api_gfx_triangle);
    NSEEL_addfunc_varparm("gfx_roundrect", 5, NSEEL_PProc_THIS, &ysfx_api_gfx_roundrect);
    NSEEL_addfunc_varparm("gfx_arc", 5, NSEEL_PProc_THIS, &ysfx_api_gfx_arc);
    NSEEL_addfunc_retptr("gfx_blurto", 2, NSEEL_PProc_THIS, &ysfx_api_gfx_blurto);
    NSEEL_addfunc_exparms("gfx_showmenu", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_showmenu);
    NSEEL_addfunc_varparm("gfx_setcursor", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_setcursor);
    NSEEL_addfunc_retptr("gfx_drawnumber", 2, NSEEL_PProc_THIS, &ysfx_api_gfx_drawnumber);
    NSEEL_addfunc_retptr("gfx_drawchar", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_drawchar);
    NSEEL_addfunc_varparm("gfx_drawstr", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_drawstr);
    NSEEL_addfunc_retptr("gfx_measurestr", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_measurestr);
    NSEEL_addfunc_retptr("gfx_measurechar", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_measurechar);
    NSEEL_addfunc_varparm("gfx_printf", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_printf);
    NSEEL_addfunc_retptr("gfx_setpixel", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_setpixel);
    NSEEL_addfunc_retptr("gfx_getpixel", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_getpixel);
    NSEEL_addfunc_retptr("gfx_getimgdim", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_getimgdim);
    NSEEL_addfunc_retval("gfx_setimgdim", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_setimgdim);
    NSEEL_addfunc_retval("gfx_loadimg", 2, NSEEL_PProc_THIS, &ysfx_api_gfx_loadimg);
    NSEEL_addfunc_retptr("gfx_blit", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_blit);
    NSEEL_addfunc_retptr("gfx_blitext", 3, NSEEL_PProc_THIS, &ysfx_api_gfx_blitext);
    NSEEL_addfunc_varparm("gfx_blit", 4, NSEEL_PProc_THIS, &ysfx_api_gfx_blit2);
    NSEEL_addfunc_varparm("gfx_setfont", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_setfont);
    NSEEL_addfunc_varparm("gfx_getfont", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_getfont);
    NSEEL_addfunc_varparm("gfx_set", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_set);
    NSEEL_addfunc_varparm("gfx_getdropfile", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_getdropfile);
    NSEEL_addfunc_varparm("gfx_getsyscol", 0, NSEEL_PProc_THIS, &ysfx_api_gfx_getsyscol);
    NSEEL_addfunc_retval("gfx_getchar", 1, NSEEL_PProc_THIS, &ysfx_api_gfx_getchar);
}
|
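As a companion sketch, the letter-key modifier encoding used by `ysfx_gfx_state_add_key` above can be expressed in Python. The constants 1, 257, and 321 come from the C++ code; the function and mask names here are illustrative, and the Ctrl+Alt branch assumes both modifier bits must be set, which appears to be the intent of the else-if chain:

```python
# Illustrative Python model of the letter-key modifier encoding above.
MOD_CTRL = 1 << 0
MOD_ALT = 1 << 1

def encode_key(key_id, mods):
    """Encode a lowercase letter key with Ctrl/Alt modifiers; other keys
    pass through unchanged (mirrors the C++ else-if chain)."""
    if not ('a' <= chr(key_id) <= 'z'):
        return key_id
    off = key_id - ord('a')
    if (mods & (MOD_CTRL | MOD_ALT)) == (MOD_CTRL | MOD_ALT):
        return off + 257   # Ctrl+Alt+letter -> 257..282
    if mods & MOD_CTRL:
        return off + 1     # Ctrl+letter -> 1..26 (ASCII control codes)
    if mods & MOD_ALT:
        return off + 321   # Alt+letter -> 321..346
    return key_id

print(encode_key(ord('a'), MOD_CTRL))            # 1
print(encode_key(ord('c'), MOD_CTRL | MOD_ALT))  # 259
print(encode_key(ord('z'), MOD_ALT))             # 346
```

Keys encoded this way are what a JSFX script later dequeues via `gfx_getchar`.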
import unittest
import glob

import nibabel as nib
import vnmrjpy as vj


class Test_SkipintGenerator(unittest.TestCase):

    def test_generate_gems(self):
        reduction = 4
        gemsdir = sorted(glob.glob(vj.fids + '/gems*.fid'))[0]
        procpar = gemsdir + '/procpar'
        gen = vj.util.SkipintGenerator(procpar=procpar)
        kmask = gen.generate_kspace_mask()
        self.assertEqual(len(kmask.shape), 4)
        #nib.viewers.OrthoSlicer3D(kmask).show()

    def test_generate_ge3d(self):
        reduction = 4
        ge3ddir = sorted(glob.glob(vj.fids + '/ge3d_s*.fid'))[0]
        procpar = ge3ddir + '/procpar'
        gen = vj.util.SkipintGenerator(procpar=procpar)
        kmask = gen.generate_kspace_mask()
        self.assertEqual(len(kmask.shape), 4)
        #nib.viewers.OrthoSlicer3D(kmask).show()

    def test_generate_mge3d(self):
        reduction = 4
        mge3ddir = sorted(glob.glob(vj.fids + '/mge3d*.fid'))[0]
        procpar = mge3ddir + '/procpar'
        gen = vj.util.SkipintGenerator(procpar=procpar)
        kmask = gen.generate_kspace_mask()
        self.assertEqual(len(kmask.shape), 4)
        # Keep the interactive viewer disabled so the test suite does not block:
        #nib.viewers.OrthoSlicer3D(kmask).show()

    def test_skiptab_ge3d(self):
        """Generate skiptab (not yet implemented)."""
        pass


if __name__ == '__main__':
    unittest.main()
|
/** Adapts the default reflective key function provided by {@link Controllers}. */
class DefaultReflectiveKeyFunc implements Function<Object, Request> {
    @Override
    public Request apply(Object o) {
        return Controllers.defaultReflectiveKeyFunc().apply(o);
    }
} |
Excessive cell phone use can lead to health problems as well as increased fatigue, lack of concentration and decreased effectiveness.
If you are using your phone for business-related activities, you are ringing your cash register -- if not, you're ringing other people's registers.
To reclaim your time, leave the phone out of the bedroom, do a pattern interrupt or follow the 5-second rule.
Your cell phone is the lifeblood of your business — but it also represents one of the greatest hurdles to your success. If your phone is running your life rather than vice versa, it’s time to reclaim your bedroom, your bathroom and your sanity by freeing yourself from its mind-numbing influence over virtually everything you do. |
Oracle's in-progress purchase of Acme Packet for $1.7 billion (net) in cash is a great wake-up call.
Financial analysts and companies in the IP communications space are running around trying to spin this story to their favor, but I'm not even willing to concede that the deal will close without at least one suitor taking a run at Acme.
Financial data indicates Oracle's offer is more than six times Acme Packet's sales over the past 12 months. That's a bit high, with three to four times yearly sales a typical price premium. Factor in Acme's declining profit margins over the past six months and you can understand why Oracle's stock price took a hit.
Could Acme Packet get more? There's at least one law firm "investigating potential claims" against Acme's Board of Directors for essentially selling too cheaply and not adequately shopping the company around, citing the rise in the company's stock price going from $15.29 in late October 2012 to $24.42 on January 22, 2013 – and one analyst setting a target price of $30 per share for the company.
Don't read too much into the lawsuit talks; it's a relatively (unfortunately) common occurrence. However, I am curious if Cisco put an offer on the table for Acme Packet, and if so, how much.
IBM, believe it or not, would also be on my short list.
Acme's story has been moving beyond the SBC space into session delivery networks, and its strength in security, along with a move into software-based solutions, would fit nicely with IBM's enterprise focus.
I wouldn't be surprised if there was at least one "hostile" bid put on the table before Oracle closes, but there are very few companies that will have the cash to outbid the current deal.
From Acme's perspective, it likely boils down to two things: Cash and solid access to the enterprise space. I've seen crack-brained speculation that Oracle is doing the deal to move hardware – who are these people and why do they have paying jobs? Everyone in the IP communications industry has been moving out of hardware and dedicated servers, so do not say this is a deal motivated to sell more Sun hardware.
The best margins and profits are in software, service contracts for software upgrades and services to get the most out of the software – not gear.
Beyond Acme Packet, there’s a bunch of other companies now doing a happy dance as analysts speculate on their future. GENBAND, Metaswitch and Sonus Networks all come up in the whole "Who's going to buy the next SBC vendor?" discussion, with Broadsoft thrown in for good measure because of its share of the softswitch/application server marketplace.
There's also the potential for smaller companies like AudioCodes and Sangoma to get scooped up by a larger player if a big M&A wave starts to happen. |
import logging

from config import celery_app

from .models import Feed

logger = logging.getLogger('celery')


@celery_app.task(name='fetch-feeds', bind=True)
def fetch_feeds(self, feeds=None):
    """
    Fetches feeds in the background.

    @param feeds: a list of feed ids; all feeds are fetched when omitted
    """
    updated = []
    failed = []
    if feeds:
        feeds = Feed.objects.filter(id__in=feeds)
    for feed in (feeds or Feed.objects.all()):
        parsed, valid = Feed.validate_url(feed.url)
        if valid:
            feed.set_content(parsed)
            feed.fetch()
            updated.append(feed)
        else:
            failed.append(feed)
            logger.error(
                'Error updating feed',
                extra={
                    'feed': feed,
                    'feed_id': feed.id,
                    'user': feed.user,
                }
            )
    logger.info('Updated %d feeds', len(updated))
    if len(failed) > 0:
        # Retry only the feeds that failed, up to 3 times, 30 s apart.
        raise self.retry(
            args=([f.id for f in failed],),
            max_retries=3,
            countdown=30,
        )
    return [f.id for f in updated]
|
import { Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import { Model } from 'mongoose';
import { SortHelper, SortOptions } from 'src/helper/sort.helper';
import { Village } from 'src/village/village.schema';
import { District, DistrictDocument } from './district.schema';

@Injectable()
export class DistrictService {
  private readonly sortHelper: SortHelper;

  constructor(
    @InjectModel(District.name)
    private readonly districtModel: Model<DistrictDocument>,
  ) {
    // Default sort: by code, ascending.
    this.sortHelper = new SortHelper({ sortBy: 'code', sortOrder: 'asc' });
  }

  /**
   * If the name is empty, all districts will be returned.
   * Otherwise, only the districts with a matching name are returned.
   * @param name Filter by district name (optional).
   * @param sort The sort query (optional).
   * @returns The array of districts.
   */
  async find(name = '', sort?: SortOptions): Promise<District[]> {
    return this.districtModel
      .find({ name: new RegExp(name, 'i') })
      .sort(this.sortHelper.query(sort))
      .exec();
  }

  /**
   * Find a district by its code.
   * @param code The district code.
   * @returns A district, or null if there is no matching district.
   */
  async findByCode(code: string): Promise<District> {
    return this.districtModel.findOne({ code: code }).exec();
  }

  /**
   * Find all villages in a district.
   * @param districtCode The district code.
   * @param sort The sort query (optional).
   * @returns Array of villages in the matching district, or `false` if no district was found.
   */
  async findVillages(
    districtCode: string,
    sort?: SortOptions,
  ): Promise<false | Village[]> {
    const villagesVirtualName = 'villages';
    const district = await this.districtModel
      .findOne({ code: districtCode })
      .populate({
        path: villagesVirtualName,
        options: { sort: this.sortHelper.query(sort) },
      })
      .exec();
    return district === null
      ? false
      : (district[villagesVirtualName] as Village[]);
  }
}
|
(KHNL) - Some voters say they don't let party loyalty sway their decisions in the voting booth.
"Most people go by parties because they're pro-Republican or pro-Democrat," said voter Chris Takahashi. "I just prefer to go by individual basis."
Hawaii's voters not only confirmed the Democrats' majority in the state legislature, they added to it. Democrats picked up two seats in the state house.
Democrats now hold 43 house seats compared to just eight for the Republicans. There are no Republicans from the neighbor islands serving in the state house.
"It's just by can the person do the job and that's how I make my decision," said voter Tony Manuel.
Now Hawaii's highest ranking Republican is trying to figure out what that means for her party.
"We know it's possible with the right candidates and with the right campaigns," said Gov. Linda Lingle.
Lingle suspects Republicans need to run better races and do more to reach out to voters.
"I don't think they vote for a group of people," Lingle said. "I think they're voting for a legislator in their district and how that legislator has performed for them."
Hawaii's Democrats called for unity during rallies in the days leading up to the election. Whatever the reason, Hawaii's voters responded. |
import signal
from typing import Callable, TypeVar

import config  # assumed to provide a DEBUG flag, as in the original module

T = TypeVar("T")
delayTime = 10  # per-attempt limit in seconds (undefined in the original; assumed default)


class TimeoutException(Exception):
    pass


def retry(func: Callable[[], T]) -> T:
    """Run func up to 10 times, aborting any attempt that exceeds delayTime."""
    for i in range(10):
        if config.DEBUG and i > 0:
            print("Retry #%s" % str(i))

        def timeoutHandler(signum, frame):
            raise TimeoutException("Timeout!")

        signal.signal(signal.SIGALRM, timeoutHandler)
        signal.alarm(delayTime)
        try:
            t = func()
            signal.alarm(0)  # cancel the pending alarm on success
            return t
        except TimeoutException:
            pass
        signal.alarm(0)
    raise TimeoutException("Retried 10 times... Failed!") |
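For illustration, here is a self-contained sketch of the same SIGALRM-based timeout pattern the retry helper above relies on. It is Unix-only (`signal.alarm` is not available on Windows), and the name `run_with_timeout` and the example values are illustrative, not part of the original code:

```python
import signal

class TimeoutException(Exception):
    pass

def _handler(signum, frame):
    # Raised inside the interrupted call when the alarm fires.
    raise TimeoutException("Timeout!")

def run_with_timeout(func, seconds):
    """Run func(), raising TimeoutException if it runs longer than `seconds`."""
    signal.signal(signal.SIGALRM, _handler)
    signal.alarm(seconds)
    try:
        return func()
    finally:
        signal.alarm(0)  # always cancel the pending alarm

print(run_with_timeout(lambda: 42, 2))  # a fast call completes well within the limit
```

Note that whole-second granularity and reliance on the process-wide SIGALRM handler are the main limitations of this pattern; `concurrent.futures` with a timeout is a common portable alternative.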
Availability of two self-administered diet history questionnaires for pregnant Japanese women: A validation study using 24-hour urinary markers

Background

Accurate and easy dietary assessment methods that can be used during pregnancy are required in both epidemiological studies and clinical settings. To verify the utility of dietary assessment questionnaires in pregnancy, we examined the validity and reliability of a self-administered diet history questionnaire (DHQ) and a brief-type self-administered diet history questionnaire (BDHQ) to measure energy, protein, sodium, and potassium intake among pregnant Japanese women.

Methods

The research was conducted at a university hospital in Tokyo, Japan, between 2010 and 2011. The urinary urea nitrogen, sodium, and potassium levels were used as reference values in the validation study. For the reliability assessment, participants completed the questionnaires twice within a 4-week interval.

Results

For the DHQ (n = 115), the correlation coefficients between survey-assessed energy-adjusted intake and urinary protein, sodium, and potassium levels were 0.359, 0.341, and 0.368, respectively; for the BDHQ (n = 112), corresponding values were 0.302, 0.314, and 0.401, respectively. The DHQ-measured unadjusted protein and potassium intake levels were significantly correlated with the corresponding urinary levels (rs = 0.307 and rs = 0.342, respectively). The intra-class correlation coefficients for energy, protein, sodium, and potassium between the time 1 and time 2 DHQ (n = 58) and between the time 1 and time 2 BDHQ (n = 54) ranged from 0.505 to 0.796.

Conclusions

Both the DHQ and the BDHQ were valid and reliable questionnaires for assessing the energy-adjusted intake of protein, sodium, and potassium during pregnancy. In addition, given the observed validity of unadjusted protein and potassium intake measures, the DHQ can be a useful tool to estimate energy intake of pregnant Japanese women.
Introduction

Maternal nutrition is a significant factor for the well-being of both mother and fetus. 1–3 Deficiencies in energy and protein during pregnancy have been associated with low birth weight, 4 and the pathogeneses of preeclampsia and gestational diabetes mellitus (GDM) have been associated with excess energy and fat intake, as well as vitamin and mineral deficiencies. 1,2,5 The recent increase in the incidence of low birth weight and GDM in Japan 6,7 emphasizes the importance of adequate nutritional status. Therefore, accurate assessment of dietary intake, especially energy intake, is required to estimate the risk of pregnancy complications. Nonetheless, no validated questionnaires for assessing energy intake, which is an indirect indicator of the overall quantity and quality of dietary intake, exist for pregnant Japanese women. Dietary questionnaires, including diet history questionnaires (DHQs) and food frequency questionnaires, are often used in large epidemiological studies because they are less burdensome for participants and less costly than other dietary assessment methods. A self-administered DHQ and a brief-type self-administered diet history questionnaire (BDHQ) have already been validated for assessing energy intake and the intake of most nutrients using the dietary record, 24-hour urine collection, and doubly-labeled water methods in the non-pregnant, adult Japanese population. 8–11 The DHQ is a semi-quantitative questionnaire that assesses dietary intake for a total of 150 food and beverage items in the previous 1 month based on the following categories: the reported consumption frequency and portion size, usual cooking methods, and general dietary behavior. 8–11 The DHQ takes about 40 min to complete. On the other hand, the BDHQ is a fixed-portion questionnaire that assesses dietary intake for a total of 58 food and beverage items based on the reported consumption frequency, usual cooking methods, and general dietary behavior.
8 The BDHQ takes 10–15 min to complete. To identify the utility of the DHQ and the BDHQ as assessment tools for energy intake during pregnancy, validation studies with pregnant women are necessary. Other studies in non-pregnant women have used energy expenditures derived from the doubly-labeled water method, human calorimeters, accelerometer or heart-rate monitoring to assess the validity of estimated energy intake. 12 Of these, the doubly-labeled water method has been regarded as the gold standard. However, this method cannot be applied to pregnant women because safety during pregnancy has not been verified. The other objective methods also have implementation problems: large-scale equipment is needed, the estimation of energy expenditure is difficult during pregnancy, or the value is easily affected by psychological status. 13 On the other hand, 24-hour dietary recalls and dietary records are often used as reference methods. However, these subjective methods have the possibility of reporting bias. Protein and potassium intake are used as alternative markers to explore the validity of overall dietary intake because these nutrients are present in a variety of foods. 14–17 For example, protein is primarily found in meat, fish, beans, and cereals, while potassium is present in fruits, vegetables, beans, and potatoes. Protein and potassium intake measures are frequently validated using a 24-hour urinary excretion test. The 24-hour urine collection method is acceptable even for pregnant women because it is not a physically invasive procedure. In addition, the intake of sodium, which can be validated using a 24-hour urinary sodium level test, may be helpful for assessing energy intake 18 because it also reflects the consumption of a wide range of foods, including fish, shellfish, and processed foods.
19 Previous studies have indicated that the reporting accuracy of sodium intake, as well as those of protein and potassium intake, had a significant positive correlation with the reporting accuracy of energy intake, 20 and that the degree of misreporting did not greatly differ among energy, protein, sodium, and potassium intakes. 21 Protein, sodium, and potassium intake might complement each other as alternative markers of energy intake owing to their different sources. Thus, we assessed the indirect validity of energy intake by carefully interpreting the availability of these nutrients. It is also important to evaluate the validity and the reliability of protein, sodium, and potassium intake measures themselves during pregnancy, since an excess or deficiency of these nutrients affects fetal growth and may result in pregnancy complications. 22–24 The present study was designed 1) to assess the validity of the DHQ and the BDHQ for estimating protein, sodium, and potassium intake levels using the 24-hour urinary markers; 2) to assess the validity of the DHQ and the BDHQ for estimating energy intake levels using unadjusted intake of protein, sodium, and potassium as alternative indicators; and 3) to investigate the reliability of the DHQ and the BDHQ via comparing the dietary intake levels estimated using repeated administrations of the questionnaires.

Methods

Overview of the recruitment criteria and study design

Validation study

The present study was conducted at a university hospital in Tokyo, Japan. The BDHQ was administered between June and December 2010, while the DHQ was administered between January and June 2011. Thus, participants of the DHQ validation study and those of the BDHQ validation study were recruited separately. Healthy Japanese women with singleton pregnancies were recruited at 15–19 weeks of gestation.
Those with diabetes, hypertension, and psychological diseases, as well as those who were less than 20 years of age and those who had a low Japanese literacy level, were excluded from the study. These inclusion and exclusion criteria were the same in validation studies for both the DHQ and BDHQ. Each participant received written and verbal information about the study protocol before providing written informed consent. The research ethics committee of the Graduate School of Medicine at the University of Tokyo approved the study procedures and protocol. The participants responded to the questionnaires while waiting for their pregnancy checkup at 19–23 weeks of gestation. Participants received instructions on how to complete the DHQ and the BDHQ before answering them. The participants who did not have sufficient time to complete the questionnaires in the hospital filled them out after returning home (within 7 days) and submitted them via mail. We resolved missing or unclear data face-to-face or through a telephone interview. Twenty-four-hour urine collection was conducted within the 5 days preceding the pregnancy checkup at 19–23 weeks of gestation.

Reliability study

To assess the reliability of the BDHQ and the DHQ, pregnant women at 15–19 weeks of gestation were recruited between October and December 2010 and between January and March 2011, respectively. The participants were a subsample from each validation study. The first measurement (time 1) was completed upon recruitment, and participants were later asked to complete the questionnaire a second time. The second measurement (time 2) was completed 4 weeks after the time 1 survey.

Diet history questionnaire

The DHQ was designed to assess the dietary intake of Japanese adults over the previous month 8–11 and has been previously validated for some fatty acids and vitamins in pregnant Japanese women.
25–28 Estimates of dietary intake for a total of 150 food and beverage items were calculated using an ad hoc computer algorithm, which included weighting factors for the DHQ. The estimates were based on consumption frequency and portion size of selected food and beverage items, daily intake of staple foods (rice, other grains, bread, noodles, and other wheat products), soup for noodles, and miso soup; usual cooking methods for fish, meat, eggs, and vegetables; and general dietary behavior, such as seasoning preferences. Information from usual cooking methods and general dietary behavior was used for estimation of dietary intake of four seasonings and soy sauce. Food item and standard portion size measures were derived from primary data from the National Nutrition Survey of Japan and various Japanese recipe books. 8 Each consumption frequency item had eight response options, ranging from "more than twice per day" to "almost never". Five response options for portion size were listed, ranging from "less than half of the standard portion size" to "more than 1.5 times the standard portion size". The DHQ also includes questions on dietary supplements and open-ended items for foods consumed more than once weekly but not appearing in the DHQ. However, this information was not used in the calculation of dietary intake. The BDHQ is a short version of the DHQ and can be easily used in clinical settings. The BDHQ is a fixed-portion questionnaire that assesses dietary intake during the previous 1 month. Estimates of dietary intake for a total of 58 food and beverage items were calculated using an ad hoc computer algorithm, which included weighting factors for the BDHQ. The estimates were based on consumption frequency of selected food and beverage items, daily intake of rice and miso soup, usual cooking methods for fish and meat, and general dietary behavior. 8 Most food and beverage items were selected from the DHQ food list.
8 Standard portion sizes for women were determined based on the National Nutrition Survey of Japan and various Japanese recipe books. Intakes of five seasonings were estimated using qualitative information of usual cooking methods and general dietary behavior. Each consumption frequency item included seven response options, ranging from "more than twice per day" to "almost never". The unadjusted intake of energy and nutrients (g/day) measured by the DHQ and the BDHQ were calculated using an ad hoc computer algorithm based on the Japanese standard of food composition tables. 29 We excluded participants who reported an extremely unrealistic energy intake: those whose reported energy intake was less than half the energy intake required for the lowest physical activity category or more than 1.5 times the energy intake required for the moderate physical activity category (as outlined by the Dietary Reference Intakes for Japanese) were excluded. 30,31 We used nutrient density to establish energy-adjusted values to evaluate the usefulness of the DHQ and the BDHQ in epidemiological studies. Using energy-adjusted values is recommended to help reduce intra-individual measurement errors. 32

Twenty-four-hour urine collection

A single 24-hour urine collection was conducted to measure the 24-hour total urine volume and levels of urea nitrogen, sodium, and potassium. Upon enrollment, each participant received written and verbal instructions regarding the 24-hour urine collection method. The participants were provided with a 3-L plastic bottle, a 1-L plastic bottle, a 50-mL plastic bottle, and 350-mL cups. On the day before the urine collection, we called the participants to confirm the urine collection procedure. On the date of collection, the participants discarded their first urine specimen of the day and collected all subsequent specimens for the next 24 h. After all the urine samples were collected, the total urine volume was marked on the 3-L bottle with a felt-tipped marker.
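The nutrient-density adjustment and the implausible-energy screen described above can be sketched as follows. The function names and the example EER (estimated energy requirement) numbers are illustrative placeholders, not the Dietary Reference Intake values the authors used:

```python
def energy_adjusted(nutrient_per_day, energy_kcal_per_day, per_kcal=1000.0):
    """Nutrient density: intake expressed per 1,000 kcal of reported energy."""
    return nutrient_per_day / energy_kcal_per_day * per_kcal

def plausible_energy(energy_kcal_per_day, eer_lowest_activity, eer_moderate_activity):
    """Keep a participant only if reported energy falls within
    [0.5 x EER(lowest activity), 1.5 x EER(moderate activity)]."""
    return (0.5 * eer_lowest_activity
            <= energy_kcal_per_day
            <= 1.5 * eer_moderate_activity)

# Example with illustrative numbers:
print(round(energy_adjusted(60.0, 2000.0), 1))   # 60 g protein at 2,000 kcal -> 30.0 g/1,000 kcal
print(plausible_energy(2000.0, 1700.0, 2050.0))  # True: within the plausible band
print(plausible_energy(700.0, 1700.0, 2050.0))   # False: below half the lowest-activity EER
```

Expressing intakes per 1,000 kcal in this way is one common form of the energy adjustment the paper refers to; it reduces the influence of over- or under-reporting total food volume.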
The well-stoppered 3-L bottle was shaken approximately 10 times and an aliquot of pooled urine was placed into a 50-mL plastic bottle. We collected the 50-mL bottle with the urine sample and the marked 3-L bottle emptied of its contents. The urine samples were stored at room temperature until submission and then were stored at −80 °C until analysis. The urinary urea nitrogen level was determined via the urease and leucine dehydrogenase method using the Iatro-LQ UN rate (A) II assay (LSI Medience Corporation, Tokyo, Japan). The urea nitrogen level in the 24-hour urine sample was used for estimating the amount of dietary protein. Urinary levels of sodium and potassium were measured using ion-selective electrodes. The urinary creatinine level was measured via the enzyme method using the Iatro-LQ CRE (A) II assay (LSI Medience Corporation). These assays were analyzed by LSI Medience Corporation using an automated analyzer (BM6050; JEOL Ltd, Tokyo, Japan). Creatinine excretion in relation to body weight (creatinine level divided by body weight) was used to verify the completeness of the 24-hour urine collection. 33 Participants with values of <10.8 or >25.2 were excluded from the analysis. 33

General questionnaires

Demographic and lifestyle information, such as age and smoking habits, was collected from questionnaires administered during medical checkups at 19–23 weeks of gestation. Participants were also asked about pregnancy-associated nausea during the preceding month. The pre-pregnancy body mass index (BMI) was calculated from the self-reported pre-pregnancy weight and height.

Statistical analysis

All statistical analyses were conducted using the IBM Statistical Package for Social Sciences for Windows version 20.0 (IBM Japan, Tokyo, Japan), and differences with a two-tailed P-value <0.05 were considered statistically significant.
Validation study

We assumed that the level of validity between dietary intake and the corresponding urinary levels was greater than r = 0.30, with 80% power and a 5% significance level. This was based on studies showing that a correlation coefficient between dietary assessment methods greater than 0.50 was considered good and values of 0.30–0.50 were considered acceptable. 34,35 Therefore, a sample size greater than 85 was required. 36 Spearman's correlation coefficients were calculated to examine the associations between dietary intake and the corresponding urinary nutrient levels. The analyses were conducted after the urinary nutrient levels were divided by the urinary creatinine levels. Creatinine adjustment has been used frequently to adjust for urinary dilution. In addition, all pregnant women were classified into quartiles according to their intake and 24-hour urinary protein (urea nitrogen), sodium, and potassium levels. Concordance in quartile ranking based on intake and urinary levels was assessed as the percentage of pregnant women who were classified in the same and adjacent quartiles. Discordance in quartile ranking was assessed as the percentage of pregnant women who were classified in the opposite quartiles for the highest and lowest quartiles (first quartile vs. fourth quartile). We calculated Spearman's correlation coefficients to examine the association between energy intake and the intake of other nutrients. These analyses were conducted for the participants of both the DHQ study and the BDHQ study.

Reliability study

We assumed that the levels of reliability for the DHQ and the BDHQ were greater than r = 0.50, with 80% power and a 5% significance level. This was based on studies showing that good intraclass correlation coefficients (ICCs) for the repeatability of a dietary assessment questionnaire range from 0.50 to 0.70. 37 Therefore, a sample size greater than 29 was required.
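The validation and reliability statistics described above (Spearman's rank correlation, quartile cross-classification, and the test-retest ICC) can be sketched in Python as follows. This is an illustrative re-implementation with made-up data, not the study's analysis code (the authors used SPSS), and the one-way random-effects ICC form is an assumption since the paper does not state which ICC was computed:

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def quartiles(xs):
    """Quartile index (0-3) of each value, assigned by rank."""
    n = len(xs)
    return [min(3, int(4 * (r - 1) // n)) for r in ranks(xs)]

def pct_same_or_adjacent(qx, qy):
    """% of subjects classified in the same or an adjacent quartile."""
    return 100.0 * sum(abs(a - b) <= 1 for a, b in zip(qx, qy)) / len(qx)

def icc_two_visits(t1, t2):
    """One-way random-effects ICC(1,1) for two administrations per subject."""
    n, k = len(t1), 2
    means = [(a + b) / 2.0 for a, b in zip(t1, t2)]
    grand = sum(means) / n
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between subjects
    msw = sum((a - m) ** 2 + (b - m) ** 2
              for a, b, m in zip(t1, t2, means)) / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Made-up intake vs. urinary-marker data for eight subjects:
print(round(spearman([1, 2, 3, 4, 5, 6, 7, 8], [2, 1, 4, 3, 6, 5, 8, 7]), 3))  # 0.905
```

Cross-classifying `quartiles(intake)` against `quartiles(urinary_marker)` and reporting `pct_same_or_adjacent` mirrors the concordance measure the paper tabulates; applying `icc_two_visits` to log-transformed time 1 and time 2 intakes mirrors the reliability analysis.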
36 The unadjusted and energy-adjusted intakes estimated from the time 1 and time 2 DHQ and the time 1 and time 2 BDHQ were compared using the paired t-test. The dietary intake ICCs between the time 1 and time 2 questionnaires were also calculated. The analyses were performed after the log transformation of all dietary intake (except energy intake) data.

Results

Validation study

For the DHQ validation study, 180 pregnant women met the inclusion criteria and 147 (81.7%) provided written informed consent (Fig. 1). Thirty-two pregnant women were excluded from the analyses (4 dropped out, 11 had missing data, 2 had extremely unrealistic energy intake, 12 did not successfully complete all the urine collections, and 3 did not meet the creatinine-weight criteria). Thus, data from 115 women (63.9%) were included in the final analysis. For the BDHQ validation study, 171 pregnant women met the inclusion criteria and 140 (81.9%) provided written informed consent (Fig. 2). Twenty-eight women were excluded from the analyses (4 dropped out, 9 had missing data, 5 had extremely unrealistic energy intake, 7 did not successfully complete all the urine collections, and 3 did not meet the creatinine-weight criteria). Therefore, data from 112 women (65.5%) were analyzed for the BDHQ validation study. Table 1 summarizes the characteristics of the participants. The mean maternal age was 34 years, and the mean pre-pregnancy BMI was 20.2–20.5 kg/m2. More than 60.0% of the participants were primigravidae. Mean dietary intake and mean 24-hour urinary levels are shown in Table 2. Spearman's correlation coefficients between the energy-adjusted intakes and corresponding urinary levels were 0.359 (DHQ) and 0.302 (BDHQ) for protein, 0.341 (DHQ) and 0.314 (BDHQ) for sodium, and 0.368 (DHQ) and 0.401 (BDHQ) for potassium (Table 3).
Unadjusted protein and potassium intakes from the DHQ and unadjusted potassium intake from the BDHQ had correlation coefficients greater than 0.30 with the corresponding urinary nutrient levels (rs = 0.307, 0.342, and 0.354, respectively). However, the DHQ-measured unadjusted sodium intake and the BDHQ-measured unadjusted protein and sodium intake showed poor correlations with the corresponding urinary nutrient levels (rs = 0.105, 0.231, and 0.250, respectively). When the DHQ participants were classified into quartiles based on the dietary intake and urinary nutrient levels, more than 70% of the participants were classified in the same or adjacent quartiles for unadjusted protein and potassium intake and energy-adjusted protein, sodium, and potassium intake (Table 4). Similarly, among BDHQ participants, more than 70% of the participants were classified in the same or adjacent quartiles for unadjusted potassium intake as well as energy-adjusted protein, sodium, and potassium intake.

Reliability study

All the participants in each reliability study were included in the corresponding validation study. Sixty-four pregnant women completed the time 1 DHQ (Fig. 1). Of these 64 women, 6 were excluded from the analyses (1 had missing data, 2 had an extremely unrealistic energy intake in the first DHQ, and 3 dropped out at time 2). Ultimately, 58 women were included in the DHQ reliability analysis. Fifty-six women completed the time 1 BDHQ (Fig. 2). Of them, 2 women were excluded from the analyses because of severely under-reported energy intake at time 1. Therefore, data of 54 participants were included in the final analysis of BDHQ reliability.

[Figure caption: Dietary intakes were assessed using a self-administered diet history questionnaire (DHQ) or a brief-type self-administered diet history questionnaire (BDHQ). The 24-hour urinary excretion levels of urea nitrogen (a marker of dietary protein intake), sodium, and potassium were adjusted by urinary creatinine levels.]
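The reliability analysis described in the methods (a paired t-test plus an intraclass correlation between the time 1 and time 2 administrations) can be sketched as below. The data are synthetic, and the ICC(2,1) formula shown is one standard two-way variant, not necessarily the exact form the authors used:

```python
import numpy as np
from scipy.stats import ttest_rel

# Synthetic repeated measurements (illustrative only): two administrations of the
# same questionnaire for 58 subjects, analysed on the log scale as in the study.
rng = np.random.default_rng(1)
n_subj = 58
true_intake = rng.lognormal(mean=4.0, sigma=0.3, size=n_subj)
time1 = np.log(true_intake * rng.lognormal(0.0, 0.15, n_subj))
time2 = np.log(true_intake * rng.lognormal(0.0, 0.15, n_subj))

# Paired t-test between administrations (tests for a systematic shift).
t_stat, p_val = ttest_rel(time1, time2)

def icc_2_1(a, b):
    """Two-way random, single-measure ICC(2,1) from a simple ANOVA decomposition."""
    x = np.column_stack([a, b])
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)
    ms_err = (np.sum((x - x.mean(axis=1, keepdims=True)
                      - x.mean(axis=0, keepdims=True) + grand) ** 2)
              / ((n - 1) * (k - 1)))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

icc = icc_2_1(time1, time2)
print(f"paired t p = {p_val:.3f}, ICC = {icc:.3f}")
```

By the study's criterion, an ICC in the 0.50-0.70 range would indicate good repeatability.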
No significant differences were detected in the log-transformed mean energy, protein, sodium, or potassium intake levels between the time 1 and time 2 DHQ or BDHQ (Table 5). The ICCs for the DHQ-measured energy and nutrient levels between the time 1 and time 2 surveys ranged from 0.517 to 0.716, while the ICCs between the time 1 and time 2 BDHQ ranged from 0.505 to 0.796.

Discussion

This is the first study to establish the validity and reliability of dietary questionnaires for evaluating the energy-adjusted intake of protein, sodium, and potassium among pregnant Japanese women and to validate unadjusted protein and potassium (as measured by the DHQ) as a proxy of energy intake. In general, a correlation coefficient greater than 0.30 between different dietary assessment methods is considered acceptable in a validation study. 34,35 According to this criterion, the energy-adjusted intake of protein, sodium, and potassium measured using both the DHQ and the BDHQ had acceptable validity. In addition, the results of quartile concordance of energy-adjusted intake showed satisfactory ranking ability. In epidemiological studies, the use of energy-adjusted values is recommended to control for confounding, to reduce extraneous variation, and to minimize reporting bias. 32 The results of the present study indicate that both the DHQ and the BDHQ can be valuable assessment tools for epidemiological studies examining protein, sodium, and potassium intake levels. The DHQ-assessed unadjusted intake of protein and potassium also showed acceptable validity according to the criteria of previous studies. 34,35 However, DHQ-assessed unadjusted sodium intake was not associated with corresponding urinary levels, and the results of quartile ranking showed low concordance. Thus, our findings indicate that the DHQ cannot effectively estimate unadjusted sodium intake. A previous DHQ validation study in nonpregnant women showed similar results.
11 This may be because unadjusted intake values are more readily affected by other factors than energy-adjusted intake values, which reduce intra-individual measurement errors. 32 In addition, obtaining an accurate assessment of sodium intake using a single 24-hour urine collection is challenging. This is because sodium intake often varies daily according to salt consumption, 38 urinary sodium levels are easily affected by sodium loss from sweat, 39 and estimating sodium intake from seasonings is difficult. Regardless of the difficulty of estimating sodium intake using a dietary questionnaire, we included sodium intake in the study in an attempt to show the indirect validity of sodium intake to estimate energy intake. This method has not been thoroughly discussed and is controversial. 18,20,21 However, the observed validity of DHQ-assessed unadjusted protein and potassium intake and the strong correlations between energy intake and the intake of these nutrients should be sufficient to provide an indication of the indirect validity of using these nutrient intakes as a proxy for energy intake. This is supported by previous studies that have proposed protein and potassium intakes as effective estimators of energy intake.

[Table 4 caption: Concordance and discordance between the quartiles of dietary intakes and corresponding urinary levels. DHQ, self-administered diet history questionnaire; BDHQ, brief-type self-administered diet history questionnaire. (a) Concordance in quartile ranking based on intakes and 24-hour urinary levels was assessed as the percentage of pregnant women who were classified in the same and adjacent quartiles. (b) Discordance in quartile ranking was assessed as the percentage of pregnant women who were classified in the opposite quartiles for the highest and lowest quartiles (first quartile vs. fourth quartile).]
14–17 Misreporting of protein and potassium intake is indicative of a degree of misreporting of energy intake because the two nutrients are contained in a variety of foods contributing to energy. 15,17 Thus, accurate reporting (i.e., validity) of these two nutrients would result in accurate reporting for energy intake. On the other hand, the BDHQ was not shown to be a valid measure of unadjusted protein and sodium intake during pregnancy, indicating that estimating energy intake with this questionnaire is difficult. A previous study among non-pregnant Japanese adults also showed low validity for energy intake, as measured by the BDHQ, when compared to the DHQ, for which low-to-modest validity for energy intake has been shown. 8,9 One reason might be that the BDHQ does not include questions about portion size. In addition, estimating dietary quantity for pregnant women using uniform fixed portion sizes may be more challenging than for non-pregnant women because portion sizes are likely to vary with food cravings and aversions experienced by most women during pregnancy. 40 Therefore, the DHQ, which can estimate intake from both the consumption frequency and portion size of each food, is recommended as a diet assessment tool for estimating energy intake among pregnant women. In general, good ICCs for the reliability of a dietary assessment questionnaire range from 0.50 to 0.70. 37 Accordingly, the estimated intake of energy, protein, sodium, and potassium showed good ranking ability between the repeated administrations of the DHQ and the BDHQ. The results of the paired t-test also indicated that the two questionnaires were reliable for measuring these dietary intakes. Thus, both the DHQ and the BDHQ are reliable methods for assessing the intake of energy, protein, sodium, and potassium in pregnant Japanese women. The participants of this study were not representative of all pregnant Japanese women.
Maternal age and education levels were slightly higher than those in other studies 6,41 ; however, there is little reason to believe that such differences would affect the relationships between survey-assessed nutrient intakes and urinary nutrient levels. In fact, the intakes of protein, sodium, and potassium were similar to those found in other Japanese studies. 41,42 Thus, the DHQ and the BDHQ are valid and reliable measurement tools for estimating energy-adjusted protein, sodium, and potassium intake, and the DHQ is a valid tool for measuring energy intake in pregnant Japanese women. This study had a few limitations. First, some selection bias may have occurred in choosing the participants because the research was conducted in a single urban university hospital. Although the dietary intakes of our participants were equivalent to similar reports, differences in demographic characteristics, including age, education level, and BMI, might potentially affect dietary intake. Second, the 24-hour urine collection was performed only once. A single 24-hour urine collection might make it difficult to establish common excretion levels. In conclusion, the present study showed that the DHQ and the BDHQ have acceptable validity and reliability for the assessment of energy-adjusted protein, sodium, and potassium intake in pregnant Japanese women. These results indicate that both the DHQ and the BDHQ could be useful assessment tools in epidemiological studies. In addition, given the observed validity of unadjusted protein and potassium intake measures, the DHQ can be a useful tool for estimating energy intake. However, the BDHQ may not be useful for estimating energy intake.

Conflicts of interest

None declared.
Bounds for Total π-Electron Energy of Conjugated Hydrocarbons

The total π-electron energy of a conjugated molecule is one of the most important quantities which can be calculated within the framework of the Hückel molecular orbital (HMO) model. Although the Hückel model itself is based on a number of very rough approximations, it yields a fairly satisfactory total π-electron energy. In particular, it has been shown that the HMO total π-electron energies are linearly related to the experimental heats of formation of conjugated hydrocarbons. The resonance energies computed from HMO total π-electron energies are of equal quality as those obtained using much more sophisticated calculation schemes. In the case of benzenoid hydrocarbons, the HMO resonance energies are closely related to certain experimental spectroscopic and kinetic data. Within the topological theory of conjugated molecules the problem of the dependence of the HMO total π-electron energy on the structure of the molecule has been considered. Numerous results along these lines have been obtained (see, for example, and the review). Among these results, lower and upper bounds for total π-electron energy play a distinguished role. The first such bounds were obtained by McClelland. We shall often quote the following McClelland's upper bound:
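The bound itself is cut off in the excerpt above. Assuming the standard statement of McClelland's classical inequality for a molecular graph with n vertices (atoms) and m edges (bonds), it reads:

```latex
% McClelland's upper bound (1971): for a conjugated hydrocarbon whose molecular
% graph has n vertices and m edges, the HMO total pi-electron energy (in units
% of the resonance integral beta) satisfies
E_\pi \le \sqrt{2mn}.
% It follows from the Cauchy-Schwarz inequality applied to the adjacency-matrix
% eigenvalues \lambda_1, \dots, \lambda_n, using \sum_i \lambda_i^2 = 2m:
E_\pi = \sum_{i=1}^{n} |\lambda_i|
      \le \sqrt{\, n \sum_{i=1}^{n} \lambda_i^2 \,} = \sqrt{2mn}.
```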
// Code generated by go-swagger; DO NOT EDIT.
package chat
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
import (
"fmt"
"io"
"github.com/go-openapi/runtime"
strfmt "github.com/go-openapi/strfmt"
models "github.com/adampointer/go-bitmex/swagger/models"
)
// ChatNewReader is a Reader for the ChatNew structure.
type ChatNewReader struct {
formats strfmt.Registry
}
// ReadResponse reads a server response into the received o.
func (o *ChatNewReader) ReadResponse(response runtime.ClientResponse, consumer runtime.Consumer) (interface{}, error) {
switch response.Code() {
case 200:
result := NewChatNewOK()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return result, nil
case 400:
result := NewChatNewBadRequest()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return nil, result
case 401:
result := NewChatNewUnauthorized()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return nil, result
case 404:
result := NewChatNewNotFound()
if err := result.readResponse(response, consumer, o.formats); err != nil {
return nil, err
}
return nil, result
default:
return nil, runtime.NewAPIError("unknown error", response, response.Code())
}
}
// NewChatNewOK creates a ChatNewOK with default headers values
func NewChatNewOK() *ChatNewOK {
return &ChatNewOK{}
}
/*ChatNewOK handles this case with default header values.
Request was successful
*/
type ChatNewOK struct {
Payload *models.Chat
}
func (o *ChatNewOK) Error() string {
return fmt.Sprintf("[POST /chat][%d] chatNewOK %+v", 200, o.Payload)
}
func (o *ChatNewOK) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
o.Payload = new(models.Chat)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
// NewChatNewBadRequest creates a ChatNewBadRequest with default headers values
func NewChatNewBadRequest() *ChatNewBadRequest {
return &ChatNewBadRequest{}
}
/*ChatNewBadRequest handles this case with default header values.
Parameter Error
*/
type ChatNewBadRequest struct {
Payload *models.Error
}
func (o *ChatNewBadRequest) Error() string {
return fmt.Sprintf("[POST /chat][%d] chatNewBadRequest %+v", 400, o.Payload)
}
func (o *ChatNewBadRequest) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
o.Payload = new(models.Error)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
// NewChatNewUnauthorized creates a ChatNewUnauthorized with default headers values
func NewChatNewUnauthorized() *ChatNewUnauthorized {
return &ChatNewUnauthorized{}
}
/*ChatNewUnauthorized handles this case with default header values.
Unauthorized
*/
type ChatNewUnauthorized struct {
Payload *models.Error
}
func (o *ChatNewUnauthorized) Error() string {
return fmt.Sprintf("[POST /chat][%d] chatNewUnauthorized %+v", 401, o.Payload)
}
func (o *ChatNewUnauthorized) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
o.Payload = new(models.Error)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
// NewChatNewNotFound creates a ChatNewNotFound with default headers values
func NewChatNewNotFound() *ChatNewNotFound {
return &ChatNewNotFound{}
}
/*ChatNewNotFound handles this case with default header values.
Not Found
*/
type ChatNewNotFound struct {
Payload *models.Error
}
func (o *ChatNewNotFound) Error() string {
return fmt.Sprintf("[POST /chat][%d] chatNewNotFound %+v", 404, o.Payload)
}
func (o *ChatNewNotFound) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error {
o.Payload = new(models.Error)
// response payload
if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF {
return err
}
return nil
}
In trading on Friday, shares of Cavium crossed above their 200 day moving average of $36.00, changing hands as high as $36.05 per share. Cavium Inc shares are currently trading up about 1.3% on the day.
The most recent short interest data has been released by the NASDAQ for the 11/29/2013 settlement date, which shows a 431,274 share increase in total short interest for Cavium Inc, to 4,264,530, an increase of 11.25% since 11/15/2013. Total short interest is just one way to look at short data; another metric that we here at Dividend Channel find particularly useful is the "days to cover" metric because it considers both the total shares short and the average daily volume of shares traded.
Investors in Cavium Inc saw new options become available this week, for the June 2014 expiration. One of the key data points that goes into the price an option buyer is willing to pay, is the time value, so with 241 days until expiration the newly available contracts represent a potential opportunity for sellers of puts or calls to achieve a higher premium than would be available for the contracts with a closer expiration.
The most recent short interest data has been released by the NASDAQ for the 08/30/2013 settlement date, which shows a 745,144 share increase in total short interest for Cavium Inc, to 4,781,084, an increase of 18.46% since 08/15/2013. Total short interest is just one way to look at short data; another metric that we here at Dividend Channel find particularly useful is the "days to cover" metric because it considers both the total shares short and the average daily volume of shares traded.
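The "days to cover" metric mentioned above is simply total short interest divided by average daily trading volume. A minimal sketch; the average-volume figure used here is an illustrative assumption (volumes of roughly 1.1-1.4 million shares per day appear elsewhere in this piece), not a number from this report:

```python
def days_to_cover(short_interest: float, avg_daily_volume: float) -> float:
    """Trading days needed for all short positions to be covered at average volume."""
    return short_interest / avg_daily_volume

# Short interest from the 08/30/2013 settlement date above; the daily-volume
# figure is an assumed, illustrative value.
dtc = days_to_cover(4_781_084, 1_400_000)
print(f"days to cover: {dtc:.2f}")  # ~3.42
```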
And showing bullish technical patterns.
In trading on Wednesday, shares of Cavium Inc crossed above their 200 day moving average of $32.95, changing hands as high as $34.22 per share. Cavium Inc shares are currently trading up about 5% on the day.
Cavium (Nasdaq:CAVM) is trading at unusually high volume Friday with 1.9 million shares changing hands. It is currently at two times its average daily volume and trading down $2.46 (-6.8%).
Cavium (Nasdaq:CAVM) hit a new 52-week high Wednesday as it is currently trading at $38.61, above its previous 52-week high of $38.59 with 387,290 shares traded as of 1:01 p.m. ET. Average volume has been 1.1 million shares over the past 30 days.
I am going to watch for some trading after the reports come in for these companies.
TheStreet Ratings group would like to highlight 5 stocks pushing the electronics industry lower today, Dec. 17, 2012.
Cavium Networks has been trading above a key level and on Monday the bulls stepped in.
Facebook, Clean Energy and Xilinx are among stocks showing big changes in trading activity.
Cavium was a winner within the technology sector, rising 50 cents (1.5%) to $34.42 on average volume.
Cavium was a leading decliner within the electronics industry, falling 54 cents (-1.7%) to $31.04 on average volume.
Cavium was a leading decliner within the technology sector, falling 16 cents (-0.6%) to $28.06 on average volume.
These 15 technology stocks could be particularly vulnerable to Europe's continuing economic woes.
Cavium (Nasdaq:CAVM) hit a new 52-week low Friday as it is currently trading at $22.76, below its previous 52-week low of $22.84 with 1.4 million shares traded as of 3:30 p.m. ET. Average volume has been 1.4 million shares over the past 30 days.
Cavium (Nasdaq:CAVM) hit a new 52-week low Thursday as it is currently trading at $24.03, below its previous 52-week low of $24.20 with 415,146 shares traded as of 10 a.m. ET. Average volume has been 1.3 million shares over the past 30 days.
Cavium (Nasdaq:CAVM) is trading at unusually high volume Wednesday with 4.9 million shares changing hands. It is currently at 4.1 times its average daily volume and trading down $3.03 (-10.2%).
Lending to eurozone banks surged by 32% last week, renewing concern that the region will be unable to contain its debt crisis.
package io.github.intellij.dlanguage.psi;
import com.intellij.psi.PsiElement;
import io.github.intellij.dlanguage.psi.named.DlangEnumMember;
import java.util.List;
import org.jetbrains.annotations.NotNull;
import org.jetbrains.annotations.Nullable;
public interface DLanguageAnonymousEnumDeclaration extends PsiElement {
@Nullable
DLanguageAssignExpression getAssignExpression();
@Nullable
PsiElement getOP_COLON();
@Nullable
PsiElement getKW_ENUM();
@Nullable
DLanguageType getType();
@NotNull
List<DlangEnumMember> getEnumMembers();
}
Linking iron-deficiency with allergy: role of molecular allergens and the microbiome

Atopic individuals tend to develop a Th2 dominant immune response, resulting in hyperresponsiveness to harmless antigens, termed allergens. In the last decade, epidemiological studies have emerged that connected allergy with a deficient iron-status. Immune activation under iron-deficient conditions results in the expansion of Th2-, but not Th1 cells, can induce class-switching in B-cells and hampers the proper activation of M2, but not M1 macrophages. Moreover, many allergens, in particular with the lipocalin and lipocalin-like folds, seem to be capable of binding iron indirectly via siderophores harboring catechol moieties. The resulting locally restricted iron-deficiency may then lead during immune activation to the generation of Th2-cells and thus prepare for allergic sensitization. Moreover, iron-chelators seem to also influence clinical reactivity: mast cells accumulate iron before degranulation and seem to respond differently depending on the type of the encountered siderophore. Whereas deferoxamine triggers degranulation of connective tissue-type mast cells, catechol-based siderophores reduce activation and degranulation and improve clinical symptoms. Considering the complex interplay of iron, siderophores and immune molecules, it remains to be determined whether iron-deficiencies are the cause or the result of allergy.

Introduction

Iron is an essential nutrient utilized in almost every aspect of normal cell function. All cells require iron to proliferate, iron being essential for DNA biosynthesis, protein function and cell cycle progression. In humans, iron is critical for a wide variety of biological processes, as it allows the transport of oxygen, is central to the cellular energy supply and is essential for a healthy immune system.
Iron deficiency is probably the most common cause for anaemia, which by clinical definition is an insufficient mass of circulating red blood cells, and in public health terms is defined as a haemoglobin concentration below the thresholds given by the WHO, UNICEF and UNU. 1,2 As such, iron deficiency can exist in the absence of anaemia, if it is mild or if the deficiency does not last long enough. 2 Iron deficiency can be absolute or functional, though combinations also exist. In absolute iron deficiency, the body has to cope with increased iron demand (e.g. during growth, blood donations, bleeding, and infections) that cannot sufficiently be compensated by dietary iron absorption and the release of recycled iron from senescent erythrocytes by macrophages. 3 In contrast, during functional iron deficiency, the body has enough iron stores in the form of iron-laden ferritin in the liver, spleen and bone marrow, but iron-trafficking is on hold and/or reversed. The different settings of iron metabolism are illustrated in Fig. 1. Under normal steady-state conditions, recycled iron is continuously released into the circulation by macrophages and dietary iron by enterocytes via the iron-exporter ferroportin in the form of Fe(II) (Fig. 1A). Subsequently, ceruloplasmin or membrane-bound hephaestin oxidizes iron to Fe(III) for transport via transferrin. 4 Functional iron deficiency (Fig. 1B) is the consequence of an activated immune system that is triggered by danger signals derived from pathogens or damaged tissue and which results in a stop of the continuous iron release from enterocytes and macrophages and the accumulation of circulating iron into macrophages. Discrimination between the two forms is complicated by the existence of various degrees and levels of absolute and functional iron deficiency and their smooth transition into anaemia. 3 Moreover, many proteins contributing to iron homeostasis are also innate proteins, e.g.
ferritin, lactoferrin, and lipocalin 2, which are released into the circulation upon immune activation to impede iron-sequestration by pathogens. This all complicates the assessment of the true iron status. An interesting aspect of iron deficiency is that a decrease in red blood cells is often accompanied by a relative increase of the white blood cell population representing the immune cells per se. With respect to allergy, several epidemiological studies have correlated a greater degree of iron deficiency in allergic subjects than in non-allergic individuals. 5,6 Whether this is the result of an overshooting immune response, in which a functional iron-deficiency becomes absolute, or whether an absolute iron-deficiency lays the ground for the generation of allergy remains to be determined. This review aims to address several important environmental, immunological and physiological aspects that may influence an individual's iron homeostatic status and may contribute to the establishment of allergy.

Redox chemistry of iron

Under physiological conditions, iron is largely found in the ferrous Fe(II) or ferric Fe(III) form and the metal's redox cycling properties make it highly suited to act as a biocatalyst in proteins or as an electron carrier. In general, Fe(III) prefers oxygen ligands, while Fe(II) favors nitrogen and sulfur ligands. 7 In the human body typical examples of Fe(III) binding proteins are lactotransferrin, transferrin and ferritin oxidizing Fe(II) to Fe(III) upon binding, and low molecular weight compounds in the blood also bind Fe(III), with citric acid being the major representative. 8 Also, amino acids, ATP/AMP, inositol phosphates and 2,5-dihydroxybenzoic acid have been described to chelate Fe(III) but not Fe(II).
9 A different picture emerges in the cytosolic compartment of cells, in which about 1 mM of Fe(II) predominates the labile iron pool, with glutathione in cellular concentrations ranging from 0.5 to 10 mM 10,11 acting as a buffer 9 and thus serving as a means for the subsequent incorporation of Fe(II) into a wide range of iron-dependent enzymes and electron transfer proteins. 9 Importantly, free redox-active iron can be very toxic under aerobic conditions due to the Haber-Weiss cycle. 12 In this cycle Fe(III) is reduced to Fe(II) by superoxide and the oxidation of Fe(II) produces Fe(III) and hydroxyl radicals. As such, iron serves as a catalyst and minute amounts of free iron are sufficient to produce significant levels of reactive oxygen species (ROS).

Human iron homeostasis

An 80 kg human body contains approximately 4 g of iron. Hemoglobin iron accounts for approximately 60% of total iron, the vast majority of which is found within circulating erythrocytes. 13 Most of the remaining iron is stored in the liver within ferritin (≈1 g). From an immunological point of view, it is interesting to note that the next largest iron-stores are the macrophages (≈0.6 g) in the spleen, liver and bone marrow. 14 Around 0.3 g of iron is stored as myoglobin of muscles. All other cellular iron-containing proteins and enzymes are estimated to bind a total of ≈8 mg of iron. Iron is delivered to most tissues via circulating transferrin, which carries ≈2 mg of this metal in the steady state. Iron mass in the total extracellular fluid volume is about 10 mg, implicating that the transferrin iron pool turns over several times a day. 15 In healthy men plasma iron turnover ranges from 25 to 35 mg per day, 16 of which only 5 to 10% is provided by absorption of dietary iron in the gut, and the rest is predominantly iron recycled from monocytes and macrophages of the liver, adipose tissue, bone marrow, spleen and lymph nodes.
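The Haber-Weiss cycle referred to above can be written out explicitly; iron cancels from the sum of the two steps and so acts purely as a catalyst:

```latex
% Step 1: reduction of ferric iron by superoxide
\mathrm{Fe^{3+} + O_2^{\,\cdot-} \longrightarrow Fe^{2+} + O_2}
% Step 2 (Fenton reaction): oxidation of ferrous iron by hydrogen peroxide
\mathrm{Fe^{2+} + H_2O_2 \longrightarrow Fe^{3+} + OH^{-} + {}^{\cdot}OH}
% Net reaction (iron appears on neither side, i.e. it is catalytic):
\mathrm{O_2^{\,\cdot-} + H_2O_2 \longrightarrow O_2 + OH^{-} + {}^{\cdot}OH}
```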
14

Nutritional iron uptake

Dietary iron requirements, as well as bioavailability, are mainly determined by an individual's iron homeostatic status, affected by physiological conditions and reflected to a large extent in serum hepcidin levels. 17 The chief area of iron absorption is the duodenum and the proximal jejunum. 18 The duodenum has some unique characteristics: its pH is more acidic, ranging from 4 to 5 compared to the rest of the small intestine, which has a pH range between 7 and 9, and it is the site where pancreatic juices and bile enter the small intestine. Depending on its form, iron will be transported:

(a) as heme (from meat), into the enterocytes via the high-affinity folate transporter, which is also the intestinal heme transporter PCP/HCP1 (SLC46A1). Interestingly, the duodenal cytochrome b, Dcytb, is also able to bind heme molecules on the lumen and on the cytoplasmic side, though the implications of these binding sites have so far been investigated only for the cytoplasmic heme binding site. 26

(b) as non-heme iron, typically through low molecular weight chelates of ferric iron, which can derive from meat or plants. After reduction by ascorbic acid and/or duodenal cytochrome b, Dcytb, 24,25 iron enters the cells (enterocytes, macrophages, T cells) via the divalent metal-ion transporter 1, DMT1, pathway. 26

(c) via other uptake pathways that seem to exist; e.g. bile itself in "premicellar" concentrations has been shown to interact with iron(II) 27 and contribute to iron absorption. 28,29 Iron-carrying proteins like ferritin from food are efficiently absorbed without depending on reduction or the heme transporter, via receptor-mediated, clathrin-dependent endocytosis. 30 Absorption and increased iron accumulation were also found in the liver when Fe(II) was ingested with glycine and asparagine, but not with other amino acids.
31 Once in the cell, iron is exported by ferroportin 1, also known as IREG1, MTP1, SLC40A1, FPN1, and HFE4, into the circulation. 32 In general, iron excretion is suppressed by iron deficiency and anaemia and enhanced during erythropoiesis and hypoxia. 26 Known inhibitors of bioavailability for non-heme iron are phytates, which are inositol polyphosphates found predominantly in nuts, seeds and grains that form insoluble precipitates with iron, 33 and polyphenolic compounds. Many of these fruit- and plant-derived polyphenols bind with high affinity to iron 34 and can greatly affect iron homeostasis, 35,36 as upon consumption, the plasma concentration of polyphenols can commonly reach values of about 1–10 μM. 37 In most studies, the consumption of large quantities of purified polyphenols has been reported to decrease the volunteers' iron status. However, under more natural circumstances, it is reasonable to assume that polyphenols will be present in the food matrix already complexed with iron. As such, one study demonstrated that, while purified polyphenols decreased the iron parameters in the subjects, ingestion of polyphenols in context with iron significantly improved the iron and redox status in vivo. 38

Regulation of iron homeostasis

Hepcidin, a 25 amino acid-long peptide, is the major regulator of iron homeostasis. It is mainly expressed in the liver, but can also be produced by parietal cells of the stomach 39 and by macrophages. As schematically presented in Fig. 1A, under steady-state conditions only a low concentration of hepcidin is present in circulation, with a median concentration of 7.8 nM found in men and 4.1 nM vs. 8.5 nM in pre- and post-menopausal women, respectively. 40 Dietary and recycled iron is continuously released into the circulation to meet the daily iron requirements of the human body.
Excretion of ferrous iron into the circulation is mediated by ferroportin, and subsequently, ceruloplasmin or membrane-bound hephaestin oxidizes Fe(II) to Fe(III) for further transport via transferrin. During inflammation (Fig. 1B), iron is removed from the circulation, mainly by upregulation of hepcidin, and many innate proteins like Lcn2 and ferritin are also secreted into the circulation to sequester iron. Importantly, hepcidin-independent processes have also been described in humans with iron-deficiency. 40 Hepcidin binds to the iron-exporter ferroportin, leading to its degradation, while iron is retained within the cells. 40,41 Thus, increasing body iron levels cause increased hepcidin expression, resulting in increased macrophage and liver cell iron sequestration and decreased dietary iron absorption; the result is a reduction in serum iron. 42 In contrast, decreasing body iron levels cause decreased hepcidin expression, resulting in an increased release of macrophage iron and accelerated dietary iron absorption. Under absolute iron deficiency, a decreased hepcidin concentration will result in the release of the remaining iron stores and in an increase in dietary iron absorption. Intracellular iron is also regulated by iron regulatory proteins (IRPs) 1 and 2 with iron-responsive elements, in which upregulation of these proteins reflects low body iron stores and an increase of dietary iron absorption. 43 Moreover, oxygen can regulate iron homeostasis. During hypoxia, the hypoxia-inducible factors (HIF) can target genes encoding transferrin and the transferrin receptor, leading to increased expression of transferrin and thus increased transport of ferric iron into cells. 44

Non-transferrin bound iron and the labile iron pool

Excreted iron is usually bound to transferrin, ferritin or heme.
Non-transferrin-bound iron (NTBI) is also present in the circulation; it represents a heterogeneous population that comprises organic anions with low affinity to iron (e.g. citrate, phosphates and carboxylates), polyfunctional ligands (chelates, siderophores and polypeptides), albumin 45 and surface components of membranes (e.g. glycans and sulfonates) able to bind Fe(II) and Fe(III). NTBI is typically present in concentrations up to 10 µM and its existence has been correlated with high levels of transferrin saturation. Intracellularly, a similarly heterogeneous population of redox-active Fe complexes is called the labile iron pool (LIP). In cells, the iron concentration usually ranges from 20 to 100 µM and is largely associated with proteins, whereas only a minor fraction of the cellular iron is present as the LIP (<1%, up to 5 µM 46 ). In the cytosol, Fe(II) is prevalent, with glutathione probably acting as a major buffer. The LIP is primarily found in erythroid and myeloid cells, as well as in neuronal cells. Importantly, though the NTBI or LIP represents only a fraction of the total intra- and extracellular iron, fine-tuning the LIP levels has physiologically and immunologically wide-reaching consequences. Both iron forms are the immediate therapeutic targets of diseases associated with an imbalance in iron homeostasis, e.g. hereditary hemochromatosis, myelodysplastic syndromes and sickle-cell disease. 47

The immune system

The immune system and iron

Iron is essential for many peroxide- and nitric oxide-generating enzymes, 48 and regulates cell growth, cell differentiation, and cytokine production. Iron can activate protein kinase C, which leads to phosphorylation of compounds regulating cell proliferation. In addition, iron is necessary for myeloperoxidase activity to form hydroxyl radicals, enabling neutrophils to efficiently eliminate bacteria.
49 Thus, any imbalance in iron homeostasis towards either deficiency or overload has wide-reaching immunological consequences. Macrophages are immune cells known to store iron. The reason for the high iron content of macrophages is their pivotal role in systemic iron recycling, in which senescent erythrocytes are cleared predominantly by splenic macrophages. 50 In addition to their important role in erythrocyte clearance and maintenance, their anti-inflammatory or inflammatory state seems to depend on the iron content. Anti-inflammatory macrophages have a lower iron content and an increased expression of proteins associated with iron efflux and can be discriminated from inflammatory macrophages, which harbour high iron levels. 51 Macrophage phagocytosis is generally unaffected by iron deficiency, even though macrophage bactericidal activity is affected. 48 Neutrophils under iron-limited conditions show impaired or lower killing activity due to the reduced activity of myeloperoxidase and reduced mobility to inflamed sites. Likewise, NK cells exhibit lower activity due to reduced differentiation and proliferation under iron-restricted conditions. Importantly, T lymphocytes can also actively modulate the NTBI pool by uptake and export, with T-cell deficiency associated with an accumulation of iron in the liver and pancreas. 52 Impaired iron uptake via transferrin receptor 1 (TfR1) caused by mutations can result in severe B- and T-cell deficiencies due to the lack of activation, which can partly be compensated through the internalization of iron via NTBI pathways. 53

Allergy and iron

Allergy is an immune-mediated disease, caused by an aberrant immune response towards exogenous, normally harmless antigens derived from pollen, house dust mites, animal dander, insect venom and food components.
The prevalence of allergy appears to be increasing, particularly in the westernized world, with almost 20% of adults in Germany having at least one self-reported, doctor-diagnosed allergy. 54 In the United States, too, the prevalence of respiratory allergies is approximately 20%, whereas the prevalence of food allergies has increased from 3 to 5% and the prevalence of skin allergies from 7 to 12%. 55 Allergy defined as type I hypersensitivity causes an immediate reaction, usually within minutes of re-exposure to the antigen, in an individual with preformed antigen-specific IgE antibodies. Allergic symptoms may affect the eyes, nose, skin, lungs or gastrointestinal tract, leading to red eyes, an itchy rash, runny nose, shortness of breath, swelling or diarrhea. In severe cases, a systemic reaction can occur, resulting in anaphylactic shock. Mechanistically, it is important to note that the presence of specific IgE antibodies, referred to as allergic sensitization, is a precondition of allergic symptoms. These antibodies are produced by plasma cells when stimulated by the T-helper 2 cell (Th2) cytokines IL-4 and IL-13. A Th2 bias in the immune response is typically associated with allergies. Re-exposure to the allergen results in binding and cross-linking of IgE antibodies bound via high-affinity receptors on effector cells. Mast cells then discharge their granules containing histamine, leukotrienes and other active agents by exocytosis, causing allergic reactions. 56 However, many people may merely be sensitized, thus already having IgE antibodies but not yet showing an allergic reaction upon exposure to the respective allergen. 54 As such, the events leading to the initiation of IgE formation are poorly understood, though it is generally accepted that the shift in the immune balance towards Th2 directly correlates with the overproduction of allergen-specific IgE.

Epidemiological and experimental evidence

Iron and allergy.
A poor iron status at birth has been associated with an increased risk of developing allergic diseases. 57 It has been argued that a massive perinatal lymphocyte expansion may have led to a prioritization of erythrocytes at the expense of other vital developing tissues 58 due to the restricted iron supply. 57 This may then have compromised vulnerable Th1 lymphocytes, which have lower intracellular iron stores, and may have promoted eosinophilia. Accordingly, a reduced maternal iron status during pregnancy was adversely associated with childhood wheezing, altered lung function and atopic sensitisation in the first 10 years of life. 59 In line with a lower iron status increasing the risk of atopy, a high iron concentration in umbilical cord samples was associated with a decreased risk of wheezing and eczema in the population-based Avon Longitudinal Study of Parents and Children. 60 Similarly, in a British study decreased serum ferritin levels were found in children with atopic eczema. 61 A lower iron status is a consistent and reproducible finding in multiple US cohorts, which clearly associate atopy with anemia. 5 Also in a case-controlled, population-based Korean study, low beta-carotene, iron, folic acid, and dietary vitamin E were associated with atopic dermatitis in young children. 62 Another study showed a greatly reduced incidence of wheezing and asthma in infants of mothers who were supplemented during pregnancy with vitamin C, a known contributor to increased iron bioavailability. 63 Changes in iron status also affected allergy in vivo in a murine model of allergic asthma, in which oral iron supplementation, as well as systemic iron administration, suppressed airway manifestations. 64 Misregulated iron metabolism can thus affect atopy and the generation of allergies.
Patients with an excessive iron load due to frequent blood transfusions have a decreased CD4/CD8 ratio, 65,66 and their increased serum ferritin levels significantly correlate with the number of transfusions.

More female allergy sufferers: association with iron?

Finally, as iron homeostasis differs between the genders 67 and before and after adolescence, it may contribute to the differences seen in the prevalence of allergy in these groups. During childhood, boys are more affected by allergy than girls, but this changes in adulthood, when women are more likely to be affected than men. In a German evaluation, 24% of men but 35% of women suffered from at least one allergy, and 2.9% of men but 6.4% of women suffered from food allergies. 68 Over 20% of Portuguese women were found to be iron-deficient, and other studies have also confirmed that females more often present iron deficiency. 69,70 All in all, there is consistent evidence that the iron status of allergic subjects is reduced and may be linked with the disease.

Iron and cancer.

It should be considered that cancer is in general associated with a highly suppressed immune defense (tumor tolerance phenomenon) and thus cancer and allergy have opposite immunological features. 71 In fact, epidemiological studies have suggested that an inverse association between allergic diseases and cancer exists, indicating that allergic and/or atopic subjects with elevated IgE levels have a lower risk of developing certain cancer types. 72 As far as iron content is concerned, while decreased iron levels predispose to allergies, increased serum iron correlates with an increased cancer risk. Therefore, iron levels affect immune regulation in allergy and cancer in opposite ways.

Immune effects of iron

Iron deficiency affects Th1 more than Th2 immune cells. The proliferative phase of T cells is dependent on iron supply.
Activation of T cells leads to the expression of TfR via an IL-2-dependent pathway, which facilitates iron uptake. Iron availability is known to differentially modulate the proliferation of different Th cell subsets. Th1 cells, usually associated with inflammation, have a lower labile iron pool, and activation of Th1 cells can be readily blocked 79 by inhibiting iron uptake via TfR and/or deferoxamine, whereas Th2 cells seem to maintain a larger amount of iron in a presumably less labile and less readily chelatable pool that is only partly affected by blocking TfR and/or deferoxamine. 80 As such, under iron-restricted conditions, DNA synthesis in Th1, but not Th2 cells, was inhibited. In line with these experiments, production of the Th1-associated cytokine IFN-γ and IL-12/IL-18-mediated proliferation were found to be severely affected by iron chelators, whereas the Th2-associated, IL-4-mediated cytokine IL-13 was resistant to potent iron chelators. 81 Thus, it is very likely that under conditions of lymphocyte expansion the limited iron supply in allergic subjects will favour the development of a Th2 environment, thereby preparing for later allergic sensitisation.

M2 macrophages are more susceptible to iron deprivation. Classically activated M1 macrophages are characterized by secretion of inflammatory cytokines, harbour high iron levels 51 and are important contributors to the classical Th1 response. In contrast, alternatively activated M2 macrophages, usually residing in the tissues, have immunoregulatory functions, have a lower iron content and participate in Th2 reactions and in wound healing processes. Corna et al. demonstrated that under iron deprivation by deferoxamine both M1 and M2 macrophages enhanced IRP1 activity, whereas IRP2 was more strongly enhanced in M2 macrophages.
82 Deferoxamine treatment also does not lead to suppression of ferritin heavy chain expression in M1 macrophages, indicating sufficient iron stores, whereas in M2 macrophages TfR1 was upregulated for continuous iron supply. 51 As such, M1 macrophages are not as affected by iron-deficient conditions as M2 macrophages. Importantly, under iron-deficient conditions M2 cells are not as efficient in expressing molecules involved in antigen presentation, such as MHC class II (I-Ab), or the costimulatory molecule CD86 after T-cell stimulation, whereas M1 macrophages seem unaffected by iron chelators.

Iron suppresses class-switching in B cells. Resting B cells express much lower levels of TfR on the plasma membrane than T cells 83 and appear to rely on additional iron uptake mechanisms. 84 B-cell proliferation and plasmacytic differentiation are not affected by Fe(II), although ferrous iron seems to inhibit activation-induced cytidine deaminase (AID), an important enzyme for initiating class-switching, in a dose-dependent and specific manner, while other bivalent ions such as Zn(II), Mn(II), Mg(II), and Ni(II) did not inhibit class-switching. 85 As such, iron deficiency may facilitate IgE class switching in the presence of Th2-associated cytokines like IL-4 and IL-13. Alternatively, B cells can be stimulated by TGF-β to secrete IgA, and the iron-carrying protein lactoferrin is also able to directly interact with betaglycan (TGF-β receptor III, TβRIII) and initiate class-switching to IgA and IgG2b. 86 Thus, though proliferation and plasmacytic differentiation are not affected, 49 the AID enzyme responsible for class switching can be active under iron-deficient conditions. 85

Flavonoids impair mast cell function. Mast cells are important effector cells in triggering IgE-mediated allergic reactions, and impeding mast cell release is a strategy of allergy treatment. It is known that, after priming with IgE and antigens, mast cells accumulate Fe(III) 87 before degranulation.
On the other hand, the iron chelator desferrioxamine can, like compound 48/80, activate connective tissue-type mast cells via non-IgE-mediated mechanisms, but fails to activate basophils and mucosal-type mast cells and, unlike compound 48/80, does not elicit a late-phase reaction. As such, desferrioxamine has been suggested as a positive control in intradermal skin tests. 88 Still, many flavonoids such as apigenin, quercetin, luteolin, fisetin 89 and epicatechin that are strong iron chelators (Fig. 2) have been shown to exhibit anti-allergic activity in vitro and in in vivo models. 90,91 Luteolin inhibited mast cell activation, 92 naringenin inhibited allergen-induced airway inflammation, 93 baicalin suppressed food allergy by the generation of regulatory T cells, 94 cacao-bean extract containing catechins ameliorated house dust mite-induced atopic dermatitis, 95 rutin suppressed atopic dermatitis, 96 afzelin ameliorated asthma, 97 and proanthocyanidins, oligomers consisting of catechin, epicatechin and gallic esters, 98 as well as apigenin 99 inhibited airway inflammation. In double-blind placebo-controlled clinical trials, O-methylated catechins reduced symptoms of Japanese cedar pollinosis 100 and catechins also reduced symptoms in mild and moderate atopic dermatitis. 101 From this, it becomes apparent that iron levels and chelators are able to regulate mast cells and thus have an impact on the severity of allergic symptoms.

Allergens

Structure-function relationships: role for siderophores and iron.

One of the fundamental riddles in allergy is why certain proteins emerge as allergens. It is assumed that they are directly related to the critical events triggering the Th2 bias. Despite the existence of thousands of protein families, the structures of major allergens can be restricted to a few protein families.
Although many allergens have been characterized in terms of secondary and tertiary structures, it is still uncertain whether common structural, functional or biochemical features underlie their ability to generate an allergic response. Nearly all major allergens from mammals belong to the lipocalin family, 102 while plant allergens usually originate from the prolamin (2S albumin, lipid transfer proteins (LTPs)) and cupin (7S, 11S) superfamilies or from the pathogenesis-related (PR)-10 family. 103 All members of these families share certain characteristics, like their great structural stability and their ability to serve as carriers for a variety of compounds with lipidic segments. 104,105 Allergens deriving from mammalian sources usually belong to the lipocalin family. Lipocalins show unusually low levels of overall sequence conservation, with pairwise sequence identities often below 20%. Nevertheless, as illustrated in Fig. 3, the lipocalin fold is highly conserved. 106 This β-barrel structure shapes a calyx-like site, which gives the protein family its name and is the main feature underlying the binding abilities of the lipocalin fold. 107 While the wider end of the barrel is open to the solvent and rich in polar and charged amino acids, the narrower end is an inner, buried region rich in hydrophobic amino acids. Loops flanking the calyx display a great sequence variability that endows lipocalins with the ability to bind a large variety of ligands having polar and non-polar moieties. This property has been exploited in protein design that uses lipocalins as scaffolds to engineer novel binding proteins ("anticalins"). 108 Lipocalins are usually secreted and can be found in the dander, urine, fur, and saliva of animals.
109 They have been described as powerful bacteriostatic agents against various microorganisms, impeding their iron sequestration by binding to siderophore-iron complexes, 110,111 but in accordance with their ligand binding plasticity, lipocalins also seem to act as carriers for lipids and hormones. The human homologue lipocalin 2 (Lcn2, NGAL) has an immune-regulatory function, but is also a growth factor 112 and a stress protein that is released under various inflammatory conditions and in cancer. 113 The binding of Lcn2 to bacterial siderophores, which are low molecular weight compounds that are amongst the strongest soluble Fe(III) binding agents known, is considered a key feature against bacterial infections. Secreted Lcn2 seems to have specialized towards siderophores with catechol moieties that facilitate their binding in the calyx site (see Fig. 4). 114 Moreover, Lcn2 has been described to bind endogenous siderophores like 2,5-dihydroxybenzoic acid (2,5-DHBA), 115 thereby probably ensuring that excess free iron does not accumulate in the cytoplasm. Mammalian cells lacking this endogenous siderophore accumulate abnormally high levels of intracellular iron, leading to high levels of reactive oxygen species. 115

Siderophores and iron modulate the Th2-potency of allergens. As iron is essential for almost all life, microbes and plants have evolved efficient iron sequestration strategies by producing siderophores that usually form a stable, hexadentate, octahedral complex predominantly with Fe(III). 116 Siderophores are usually classified by the ligands used to chelate the ferric iron, with the major types of siderophores having catechol, hydroxamate or α-hydroxycarboxylate moieties, as depicted in Fig. 2. Combinations of these chemical groups are also possible, with microbes often employing non-ribosomal peptides as siderophores.
117 Importantly, many fruits and plants also contain polyphenols/flavonoids known to behave as high-affinity iron chelators, a fact that is commonly overlooked 34 but that may contribute to their mast cell-stabilizing ability. In addition, the ability of these compounds to chelate iron has been proposed to be a key element in their anti-oxidant properties. 118 With regard to allergy, there seems to exist a particular role for catechol-type siderophores, which may also be related to the fact that catechol groups chelate iron 119,120 at physiologically relevant pHs. In this respect, it is relevant that lipocalin 2 can bind only catechol-containing siderophores and not others. This is an important characteristic that also seems to be extendable to many major allergens. Many lipocalin allergens such as the major milk allergen Bos d 5, or the major birch allergen Bet v 1, a prototype of the PR-10 protein family with a lipocalin-like architecture, are capable of transporting iron via catechol-type siderophores. 121,122 Importantly, their loading state, apo- (empty) or holo- (filled), seemed to be decisive for the subsequent immune response. Apo-allergens can mount a Th2 response in vitro, whereas the holo forms are rather immune-suppressive, indicating that the iron-carrying form impedes allergic sensitization. 121,122 As such, the natural ligand of the major birch pollen allergen Bet v 1 has been identified as the flavonoid quercetin-3-O-sophoroside; 123 the major peanut allergens Ara h 2 and Ara h 6, belonging to the 2S family, bind the flavonoid epigallocatechin-3-gallate; 124 and the major peanut allergen Ara h 1 from the 7S family forms large complexes by binding proanthocyanidins, which are oligomers consisting of catechin and epicatechin and their gallic acid esters. 125 Accordingly, the pathogenesis-related PR-10 proteins and major allergens in strawberries, Fra a 1 and Fra a 3, have been crystallized with catechin ligands.
126 In summary, there is solid evidence that allergens are capable of binding in particular catechol-type structures like quercetin and rutin, with iron affinities that surpass the iron-affinity constants of deferoxamine. 127,128 Importantly, a great number of these polyphenols have been described in the literature to exhibit anti-allergic properties. Taking lipocalins as an example, allergens are not only structurally at the border between self and non-self, but may also functionally interfere with human immune-regulatory processes. How normally harmless antigens become allergens with their characteristic Th2-skewing capacity is not known. There are several possible scenarios: (i) iron deficiency turns holo-lipocalins into apo-allergens, or (ii) exogenous lipocalins enter the body already devoid of a ligand in their calyx. In this case, they will, due to their high affinity, immediately sequester endogenous siderophores or comparable ligands directly at the mucosal sites, thereby contributing locally to an iron-deficient state. Additional triggers may then activate lymphocytes to become Th2 rather than Th1 cells. 121,122,128 (iii) Allergenic lipocalins may, in an ongoing immune response, interfere with the function of Lcn2, e.g. during infections, either by providing or by sequestering ligands for Lcn2. In any of these cases, the immune function of Lcn2 may be skewed by allergens towards Th2.

Can microbiota interfere with iron homeostasis?

From hygiene hypothesis to colonizing microbiota. The so-called hygiene hypothesis originated from observations that hay fever is more common in small families than in large families and is associated with a less "dirty" environment, hence less microbial exposure. 129 Moreover, living on a farm protects against atopy, hay fever, and asthma, especially in children. 130 As such, cumulative evidence suggests that decreased exposure to microorganisms promotes allergy.
The microbiota that colonizes humans has not merely a commensal, but rather a mutualistic relationship with its human host. It is accepted today that the bacterial composition has a profound impact on the immune system and vice versa.

Facts on microbiota in healthy versus allergic subjects. In the healthy human gut over 1000 bacterial species have been described, 131 with an individual harbouring at least 160 species, of which 52 species account for over 90%. 132 Aerobic organisms including Streptococci and Lactobacilli, occasionally with Candida spp., predominantly colonize the human stomach, duodenum, and proximal jejunum, 133 whereas an anaerobic predominance prevails in the distal ileum and colon. Dominant colonic organisms in humans include Clostridium spp., Bacteroides and Bifidobacterium. 133 Thus, the Firmicutes and Proteobacteria phyla predominate in the duodenum, whereas Firmicutes and Bacteroidetes are the predominant phyla in the distal colon. Microbiome depletion as a result of broad-spectrum antibiotic treatment led not only to anaemia and thrombocytosis but also to an altered immune homeostasis, resulting in diminished granulocytes and B cells 116 in the bone marrow and increased CD8+ T cells in mice. Importantly, faecal microbiota transfer partly rescued these hematopoietic changes. 134 In rats inoculated with human faecal microbiota, provision of an iron-fortified diet after feeding an iron-deficient diet significantly increased the abundance of dominant bacterial groups such as Bacteroides spp. and Clostridium cluster IV members compared with rats on an iron-deficient diet only. Moreover, iron supplementation increased the gut microbial butyrate concentration 6-fold compared with iron depletion and did not affect histological colitis scores, suggesting that iron supplementation enhanced the concentration of beneficial gut microbiota metabolites and may contribute to gut health.
135 Importantly, microbes able to inhabit the upper GI tract seem to be reduced in allergic subjects compared to non-allergic subjects. Only a few studies on the microbiota have been conducted in humans. As such, food-allergic patients seem to have an increased abundance of bacteria of the order Clostridiales (Lachnospiraceae, Ruminococcaceae) 136-139 and a decreased abundance of the order Bacteroidales. 136,140,141

Microbiota interfere with iron levels via siderophores. An interesting aspect here is that Proteobacteria, Bacteroidetes and some family members of Firmicutes (see Table 1) are more likely to influence dietary iron uptake in the host, because their site of residency coincides with the site of iron uptake. When screening these bacteria for their ability to secrete or utilize siderophores, it is apparent that indeed most of these organisms can acquire iron by siderophore-mediated mechanisms. As the available data do not indicate an increased or decreased abundance of a particular bacterial order, one can only speculate on their impact on iron homeostasis and on immune cells. The microbiota strongly manipulates the immune system. It is tempting to speculate that the composition and localization of the commensal microbiota in allergic subjects may directly impact the homeostatic iron status of the host, but more studies need to be done.

Conclusions

There is a clear epidemiological connection between a poor iron status and allergy risk, especially in females. Of note, iron-deficient conditions seem to promote a Th2 environment, which is a prerequisite for allergy. Potential contributing factors are endogenous iron levels, allergens capable of binding iron chelators, and likely a skewed microbiota in allergic subjects.

Conflicts of interest

The authors declare no conflicts of interest.
In-process Measurement of Gradient Boundary of Resin in Evanescent-wave-based Nano-stereolithography using Reflection Interference near Critical Angle

Evanescent-wave-based nano-stereolithography, which uses the ultra-thin field distribution of the evanescent wave to solidify photosensitive resin, provides a sub-micrometer vertical resolution for each layer. In the fabrication process, cured resin (solid state) is submerged in uncured resin (liquid state). The interface between the cured and the uncured resin is made up of half-cured resin in a state of incomplete polymerization. Because the gradient boundary directly determines the thickness of each fabrication layer and greatly influences the quality of products, it is of great significance to study the gradient boundary in the fabrication process. We propose in-process measurement of the gradient boundary using the reflection interference technique to monitor the formation of cured resin and investigate the gradient boundary. In this method, the variation of the refractive index of resin during the curing process is utilized for the measurement. The measurement light is introduced near the critical angle to obtain a susceptible total internal reflection condition. In the verification experiment, a compact experimental system including fabrication and measurement sections has been developed. Two beams of light of different wavelengths have been delivered into the system as fabrication and measurement light, respectively. Resin exposed for increasing times has been measured by our proposed method at various incident angles near the critical angle. The refractive index distribution and the depth of the gradient boundary have been successfully measured. The results show that the refractive index of cured resin varies with position, and the span of the gradient boundary is not constant either. The maximum span of the gradient boundary, at the center, was measured to be around 250 nm.
This work, which helps us clearly understand the curing process and the formation of the cured layer in EWNSL, provides a research basis for further, detailed research on nanoscale stereolithography.

Introduction

Micro-stereolithography, as one of the most powerful techniques of micro-manufacturing, has been developed to produce micro-sized three-dimensional polymer structures layer by layer. Evanescent-wave-based nano-stereolithography (EWNSL) utilizes the evanescent wave instead of propagating light to provide optical energy. It can produce each layer of resin with a sub-micrometer vertical resolution. Figure 1 illustrates the basic mechanism of EWNSL. In order to generate the evanescent wave, the base resin needs to be placed on a high-refractive-index substrate, and a beam of exposure light needs to be incident at an angle larger than the critical angle of total internal reflection (TIR). The evanescent wave occurs at the interface of TIR. The intensity of the optical field of the evanescent wave decays exponentially with the distance from the interface between resin and substrate. Only resin close to the substrate is exposed to sufficient optical energy. Therefore, by providing an appropriate exposure time, it is possible to fabricate cured resin with a sub-micrometer thickness. One of the biggest problems of EWNSL is the gradient boundary between the cured and uncured resin formed by the half-cured resin. This is caused by the exponential intensity decay of the evanescent wave. Resin exposed to light of different intensities cures at different speeds. The gradient boundary is made up of half-cured resin in a state of incomplete polymerization and non-uniform curing degree; physically it is intermediate between liquid and solid, like a gel; it can be removed by solvent or further cured under the exposure condition. The existence of half-cured resin largely influences the shape and quality of the cured resin.
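The exponential decay of the evanescent field described above can be made concrete with the standard penetration-depth formula. The numeric values below (prism/substrate index 1.7, resin index 1.5, 405 nm exposure light, 65° incidence) are illustrative assumptions, not parameters from this paper.

```python
import math

# Evanescent-wave intensity decays as I(z) = I0 * exp(-z / d), with
# penetration depth d = wavelength / (4*pi*sqrt(n1^2*sin(theta)^2 - n2^2)).
# All numeric values used here are illustrative assumptions.

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """Intensity penetration depth (nm) for TIR at incidence angle theta_deg."""
    theta = math.radians(theta_deg)
    s = n1 ** 2 * math.sin(theta) ** 2 - n2 ** 2
    if s <= 0:
        raise ValueError("below the critical angle: no evanescent wave")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s))

def relative_intensity(z_nm, d_nm):
    """I(z)/I0 at depth z for penetration depth d."""
    return math.exp(-z_nm / d_nm)

d = penetration_depth(wavelength_nm=405, n1=1.7, n2=1.5, theta_deg=65.0)
print(f"penetration depth: {d:.0f} nm")  # well below one micrometer
print(f"I/I0 at depth 2d: {relative_intensity(2 * d, d):.2f}")
```

For these assumed values d is well under 100 nm, so only resin within roughly a hundred nanometers of the substrate receives a substantial dose; the smooth decay of I(z) is exactly what produces the gradient boundary discussed above.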
In order to improve the fabrication accuracy and to further study the curing process in EWNSL, investigation of the gradient boundary is of great significance. To date, we only know theoretically that the gradient boundary exists; the gradient boundary in EWNSL has never been measured. The difficulties of this work are mainly that, first of all, the span of the gradient boundary in EWNSL is on the sub-micrometer scale. In addition, the gradient boundary has gradient physical and chemical characteristics. The gradient boundary is not totally in the solid state and therefore can easily be disturbed by contact measurement. Based on the above demands and difficulties, we propose a measurement method utilizing the reflection interference at the critical angle of total internal reflection to measure the span of the gradient boundary. In the following contents, the principle of the proposed method will be briefly explained, and its feasibility will be proved by experiments.

Curing process of resin

Essentially, the curing process of photosensitive resin is a photo-polymerization reaction. The mechanism and the synthetic process are shown in Fig. 2. The uncured resin contains photosensitive initiators and monomers. The curing process starts when initiators break into reactive species as the resin is exposed to light of a particular wavelength. The reactive species may be a free radical, cation or anion, determined by the type of initiator. In the following reactions, the reactive species add to monomer molecules to form excited monomers as new radical, cation or anion centers, as the case may be. The process is repeated as monomer molecules are successively connected to the continuously propagating reactive centers, which finally results in the generation of high-molecular-weight polymers. Polymerization is a continuous chemical reaction that proceeds over a certain duration. Therefore, resin passes through intermediate states in the curing process.
The gradient boundary is made up of resin in such intermediate states.

Principle of measurement method

A schematic diagram of our measurement method is shown in Fig. 3(a). Besides the exposure light used for fabrication, another beam with a longer wavelength is launched from the bottom of the resin as the measurement light. Note that photosensitive resin is cured only by light within a certain wavelength range, so the effect of the measurement light on the curing process can be minimized by properly selecting its wavelength. The exposure light for fabrication is set at around 65° (larger than the critical angle), which guarantees the conditions for total internal reflection and the formation of the evanescent field. The measurement beam is launched near the critical angle of total internal reflection, and the intensity distribution of its reflection is used to calculate the effective refractive index and the depth of the cured resin. This in-process measurement of cured resin rests on three principles: the increase of the resin's refractive index during curing, the abrupt change of reflectivity at the critical angle, and the interference between the reflections from the bottom and the top of the cured resin. The increase of refractive index is a phenomenon accompanying the curing process, caused by the decrease of intermolecular distances. The increment varies with the type of resin and is normally in the range of 0.01 to 0.05 refractive index units (RIU). Because cured resin has a larger refractive index than uncured resin, when light is incident from a high-refractive-index prism onto the resin, as shown in the inset of Fig. 3(b), the critical angle defined by Eq. (1) becomes larger as the refractive index increases, where n_R and n_P are the refractive indices of the resin and the prism, respectively.
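The critical-angle shift that this method exploits can be sketched directly from Eq. (1), sin(θ_c) = n_R/n_P. The values below are computed from that relation using the indices given in the experimental section (uncured resin 1.478, prism 1.78), not read off the paper's figures; the 0.003 RIU increase is one example within the 0.01 to 0.05 RIU range quoted above.

```python
import math

def critical_angle_deg(n_resin, n_prism):
    """Critical angle of total internal reflection at the prism/resin
    interface, from Eq. (1): sin(theta_c) = n_R / n_P."""
    return math.degrees(math.asin(n_resin / n_prism))

# Uncured resin (n = 1.478) against the n = 1.78 prism used in the paper
theta_uncured = critical_angle_deg(1.478, 1.78)   # about 56.13 degrees

# An example curing-induced index increase of 0.003 RIU
theta_cured = critical_angle_deg(1.481, 1.78)
shift = theta_cured - theta_uncured               # a fraction of a degree
```

The shift is only a fraction of a degree, which is why the measurement beam must be swept in fine angular steps near the critical angle.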
If the measurement light is incident exactly at the critical angle of the uncured resin, the increase of refractive index during curing destroys the total internal reflection and produces a drop in reflectivity, as shown in Fig. 3(b). Where total internal reflection no longer occurs, light is transmitted into the cured resin. Because the measurement light enters from the substrate at the critical angle of the substrate and the uncured resin, total internal reflection occurs when the light reaches the uncured resin. Therefore, all the light transmitted into the cured resin is reflected from the top end of the gradient boundary, as shown in Fig. 3(c). The reflections from the bottom side (R1) and the top side (R2) of the cured resin interfere. The reflection depth of R2 directly determines the optical path difference and thus strongly influences the interference condition and the reflected intensity; it is therefore possible to obtain the reflection depth of the gradient boundary by detecting the reflected intensity. However, the optical path length is determined not only by the depth of the gradient boundary but also by the refractive index of the cured resin, so the refractive index must be introduced into the calculation. It is known that the refractive index of resin increases with its degree of cure, and that the degree of cure decreases with distance from the substrate owing to the field distribution of the evanescent wave. It can therefore be reasonably inferred that the resin's refractive index decreases with distance from the substrate. The profile of the refractive index, however, is unknown and cannot easily be measured during fabrication. In our investigation, given that the span of the gradient boundary is sub-micrometer, we use an effective refractive index as an averaged refractive index of the cured resin.
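The reflection-interference picture just described can be sketched numerically. This is a simplified reconstruction rather than the paper's actual calculation: it keeps only the first-order reflection (as the paper does), treats the top of the gradient boundary as a perfect mirror (|r2| = 1, ignoring the phase shift of total internal reflection), uses the s-polarized Fresnel coefficient for the bottom interface, and takes the standard thin-film round-trip phase. The default incident angle and indices are assumptions chosen near the values in the experimental section.

```python
import math

def reflected_intensity(h_nm, n_eff, n_prism=1.78, theta_i_deg=56.2,
                        wavelength_nm=638.0):
    """Two-beam interference of R1 (prism / cured-resin interface) and R2
    (top of the gradient boundary) versus reflection depth h, in a
    simplified three-layer model."""
    ti = math.radians(theta_i_deg)
    # Refraction angle inside the cured layer, from Snell's law
    tt = math.asin(n_prism * math.sin(ti) / n_eff)
    # s-polarized Fresnel amplitude coefficient at the bottom interface
    r1 = ((n_prism * math.cos(ti) - n_eff * math.cos(tt)) /
          (n_prism * math.cos(ti) + n_eff * math.cos(tt)))
    # Round-trip phase through the cured layer of depth h
    delta = 4 * math.pi * n_eff * h_nm * math.cos(tt) / wavelength_nm
    # Coherent sum of R1 and the once-transmitted, once-reflected beam
    amp_re = r1 + (1 - r1**2) * math.cos(delta)
    amp_im = (1 - r1**2) * math.sin(delta)
    return amp_re**2 + amp_im**2
```

Because the incident angle sits just below the critical angle of the cured layer, the refraction angle is very large and the fringe period in h is long; over sub-micrometer depths the intensity varies smoothly with the reflection depth, which is what makes the depth recoverable from the measured intensity.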
In this way, the prism, the cured resin with its gradient boundary, and the uncured resin are treated as a three-layer reflection model. The effective refractive index of the cured resin is measured by changing the incident angle of the measurement light and finding the critical angle of the cured resin; the relationship between refractive index and critical angle is shown in Fig. 3(d). Once the effective refractive index of the cured resin is measured, the reflectivity from the bottom side of the cured resin (R1) can be calculated from Eq. (2), and the intensity of the reflection contrast from Eq. (3), where δ is the phase difference generated by the two different optical paths of R1 and R2. It can be expressed as

δ = (2π/λ) · (2·n_eff·h/cos(θ_T) − 2·h·tan(θ_T)·n_P·sin(θ_I)),   (4)

which by Snell's law reduces to δ = 4π·n_eff·h·cos(θ_T)/λ, where h is the reflection depth, θ_I is the incident angle, and θ_T is the refraction angle obtained from Snell's law. Equations (3) and (4) relate the reflection depth to the reflection intensity, which means the reflection depth can be obtained by measuring the reflection intensity during fabrication. Figure 3(e) shows the interference reflection intensity as a function of reflection depth for various effective refractive indices of the cured resin. Note that only the first-order reflection is counted in our calculation. This is because the refraction angle is very large and the top side of the gradient boundary is not perfectly flat in the experiment, so higher-order reflections propagate a long distance in the horizontal direction and are easily scattered.

Experimental

The whole experimental system is shown in Fig. 4. It includes a fabrication section and a measurement section. Light at wavelengths of 405 nm and 638 nm was used as the curing light and the measurement light, respectively. As mentioned above, resin is cured only by light within a particular wavelength range.
In this experiment, the measurement does not affect the fabrication, because the wavelength of the measurement light lies outside the curing wavelength range. In the fabrication section, the exposure light was transmitted through a collimator and a polarizer, and a shutter controlled the exposure time. In order to fabricate cured resin of smaller width, matching the real feature size of micro/nano-stereolithography, the curing beam was focused by a lens before entering the prism. In the measurement section, polarized laser light at 638 nm was delivered by the left arm and reflected from the interface between the resin and the substrate. The reflected light propagated into the imaging section and was collected by a CMOS camera. In the experiments, the two arms were fixed on two rotation stages centered on the prism. The incident angle of the fabrication light was fixed at 65°, while the measurement light was swept by its rotation stage through various incident angles near the critical angle. A urethane-acrylate-based resin was used in the experiments; its refractive index before curing is 1.478. The prism has a refractive index of 1.78. To avoid damage to the prism, the resin was placed on a substrate with the same refractive index as the prism.

Results and discussion

After one second of exposure at 1 mW, the resin in the exposed field was cured. Figure 5 shows the experimental and calculated results. The intensity distributions of the reflection were measured with the measurement light incident at 55.10°, 56.19°, 56.24°, and 56.13°, as shown in Fig. 5(a) to (d). The refractive indices of cured resin corresponding to the critical angles at these degrees are 1.478, 1.479, 1.480, and 1.481, respectively. In Fig. 5(a), the incident angle of the measurement light equals the critical angle of the uncured resin.
In this case, total internal reflection occurred at the boundary between the uncured resin and the substrate, giving the highest intensity of light reflected from the uncured resin, while the cured resin, with its relatively higher refractive index, destroyed the condition for total internal reflection; the cured resin, with relatively lower reflectivity, was therefore clearly distinguished from the uncured resin. In Fig. 5(b), the incident angle was increased to 56.19°. The corresponding maximum refractive index of resin that still satisfies total internal reflection also increases from 1.478 to 1.479. In this case, cured resin whose effective refractive index is smaller than 1.479 produces total internal reflection. Therefore, in Fig. 5(b), the bright area within the cured resin marks where the effective refractive index is smaller than 1.479, and the dashed line bounding the region of total internal reflection in the cured resin is the contour line of effective index 1.479. By slightly changing the incident angle step by step, the distribution of the effective refractive index was measured. In this verification experiment, because the oblique observation degrades the imaging quality (a problem that can be solved by using an immersion objective lens to control the incident and observation angles), the distribution of the effective refractive index was only roughly measured. Figure 5(e) shows the cross-section of the effective refractive index along the red line in Fig. 5(a), measured at various incident angles. Figure 5(f) shows an optical-microscope image of the cured resin after the washing and drying process. Note that Fig. 5(a) to (d) were measured during fabrication, when the cured resin was still submerged in liquid resin and the gradient boundary still existed, whereas Fig. 5(f) was obtained after the liquid resin was removed and the gradient boundary had been destroyed before measurement.
The reflection depth, calculated using the effective refractive index and the reflection distribution, is plotted in Fig. 5(g). The red points are the raw data, and the dark solid line is the processed result. To determine the span of the gradient boundary, the thickness of the cured resin after washing and drying was measured by AFM, shown as the blue line in Fig. 5(g). In the proposed method the measurement light is obliquely incident, and the inclined projection distorts the image, whereas Fig. 5(f) and the AFM do not have this problem; to allow the subtraction that yields the span of the gradient boundary, the transverse span of the AFM result was scaled accordingly. In Fig. 5(g), we can see that the reflection depth is slightly larger than the thickness of the cured resin. The span of the gradient boundary varies with position, reaching a maximum of about 250 nm at the center of the cured resin. In addition, comparing the shape and trend of the reflection depth with the thickness of the cured resin, the two curves agree better on the left side. This is because the measurement light was incident from one side only (from the left, i.e., from the top in Fig. 5(a) to (d)); the reflection distribution on the other side may be influenced by scattering and multiple reflections. This problem can also be solved by adding an immersion objective lens to the experimental system, which allows measurement at a particular incident angle from various directions. According to the above results, the gradient boundary of resin in EWNSL has been successfully measured. A remaining problem is that it is hard to verify the correctness and accuracy of the measurement method itself. In future work, the experimental setup will be improved by applying an immersion objective lens and a higher-resolution imaging system, and calibration work will be carried out.
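As a toy illustration of this subtraction step, the sketch below uses invented profile numbers, not the paper's data (though the maximum is chosen to match the roughly 250 nm reported); it assumes both curves have already been registered to the same lateral positions and scaled as described.

```python
# Hypothetical profiles standing in for the curves in Fig. 5(g):
# the reflection depth (top of the gradient boundary, from interference)
# and the AFM thickness of the fully cured resin after washing, in nm,
# sampled at the same lateral positions across the cured line.
reflection_depth = [0, 120, 380, 520, 560, 510, 360, 110, 0]
afm_thickness    = [0, 100, 290, 340, 310, 300, 280,  90, 0]

# Span of the gradient boundary = reflection depth minus cured thickness,
# clipped at zero where noise would make the difference negative
span = [max(rd - t, 0) for rd, t in zip(reflection_depth, afm_thickness)]
max_span = max(span)
```

With these made-up values the span peaks at 250 nm near the center, mirroring the qualitative result of the paper: the half-cured layer is thickest where the exposure dose, and hence the cured depth, is largest.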
Conclusion

In conclusion, we proposed a measurement method based on reflection interference at the critical angle in EWNSL to measure the gradient boundary. The variation of the resin's refractive index during curing is exploited in the measurement. The distribution of the effective refractive index of the cured resin was measured by changing the incident angle of the measurement light near the critical angle. This distribution was used to calculate the reflection depth, which represents the top surface of the gradient boundary. After subtracting the thickness of the cured resin after washing from the reflection depth, the span of the gradient boundary was successfully calculated.
// Repository: merekenji/GameScheduler
package java_game_scheduler;
import static org.junit.Assert.*;
import org.junit.Test;
public class SchedulerTest {
@Test
public void createGameSuccessfully() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Tennis", 2);
assertEquals("Success: Game has been saved successfully", service.createGame(game));
}
@Test
public void createGameWithNullGameName() {
ISchedulerService service = new SchedulerService();
Game game = new Game("", 5);
assertEquals("Error: The Game name should not be empty", service.createGame(game));
}
@Test
public void createGameWithNoPlayers() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Golf", 0);
assertEquals("Error: There should at least be 1 player playing in the game", service.createGame(game));
}
@Test
public void createDuplicateGame() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Tennis", 2);
service.createGame(game);
assertEquals("Error: Game has already exist", service.createGame(game));
}
@Test
public void createNullGame() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: The Game object is null", service.createGame(null));
}
@Test
public void createPlayerSuccessfully() {
ISchedulerService service = new SchedulerService();
Game game1 = new Game("Tennis", 2);
Game game2 = new Game("Football", 11);
Game game3 = new Game("Badminton", 2);
service.createGame(game1);
Game[] games = { game1, game2, game3 };
Player player = new Player("Tom", games);
assertEquals("Success: Player has been saved successfully", service.createPlayer(player));
}
@Test
public void createPlayerThatPlayNoGames() {
ISchedulerService service = new SchedulerService();
Game game1 = new Game("Tennis", 2);
Game game2 = new Game("Football", 11);
Game game3 = new Game("Badminton", 2);
Game[] games = { game1, game2, game3 };
Player player = new Player("Tom", games);
assertEquals("Error: At least 1 game should exist in game repo", service.createPlayer(player));
}
@Test
public void createPlayerWithNoName() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Tennis", 2);
service.createGame(game);
Game[] games = { game };
Player player = new Player("", games);
assertEquals("Error: The Player name should not be empty", service.createPlayer(player));
}
@Test
public void createDuplicatePlayer() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Tennis", 2);
service.createGame(game);
Game[] games = { game };
Player player = new Player("Tom", games);
service.createPlayer(player);
assertEquals("Error: Player has already exist", service.createPlayer(player));
}
@Test
public void createNullPlayer() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: The Player object is null", service.createPlayer(null));
}
@Test
public void createDaySuccessfully() {
ISchedulerService service = new SchedulerService();
Game game1 = new Game("Tennis", 2);
Game game2 = new Game("Basketball", 5);
service.createGame(game1);
service.createGame(game2);
Game[] games = { game1, game2 };
Day day = new Day("Day One", games);
assertEquals("Success: Day has been saved successfully", service.createDay(day));
}
@Test
public void createDayWithNoGamesInRepo() {
ISchedulerService service = new SchedulerService();
Game game1 = new Game("Tennis", 2);
Game game2 = new Game("Basketball", 5);
Game[] games = { game1, game2 };
Day day = new Day("Day One", games);
assertEquals("Error: All game should exist in game repo", service.createDay(day));
}
@Test
public void createDayWithoutName() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Tennis", 2);
service.createGame(game);
Game[] games = { game };
Day day = new Day("", games);
assertEquals("Error: The Day name should not be empty", service.createDay(day));
}
@Test
public void createDuplicateDay() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Tennis", 2);
service.createGame(game);
Game[] games = { game };
Day day = new Day("Day One", games);
service.createDay(day);
assertEquals("Error: Day has already exist", service.createDay(day));
}
@Test
public void createNullDay() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: The Day object is null", service.createDay(null));
}
@Test
public void generateGameReportSuccessfully() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Basketball", 5);
service.createGame(game);
Game[] games = { game };
Player player1 = new Player("Tom", games);
Player player2 = new Player("Jerry", games);
service.createPlayer(player1);
service.createPlayer(player2);
Day day = new Day("Day One", games);
service.createDay(day);
StringBuffer sb = new StringBuffer();
sb.append("Game Report for Basketball\n");
sb.append("No. of Players: 5\n\n");
sb.append("Players playing in this game\n");
sb.append("Tom\n");
sb.append("Jerry\n");
sb.append("Days game is scheduled on\n");
sb.append("Day One\n");
assertEquals(sb.toString(), service.gameWiseReport("Basketball").toString());
}
@Test
public void generateNonExistantGameReport() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: Game does not exist", service.gameWiseReport("Tennis").toString());
}
@Test
public void generateEmptyGameReport() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: Game name should not be empty", service.gameWiseReport("").toString());
}
@Test
public void generatePlayerReportSuccessfully() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Basketball", 5);
service.createGame(game);
Game[] games = { game };
Player player1 = new Player("Tom", games);
service.createPlayer(player1);
Day day = new Day("Day One", games);
service.createDay(day);
StringBuffer sb = new StringBuffer();
sb.append("Player Report for Tom\n\n");
sb.append("Games player is playing in:\n");
sb.append("Basketball\n");
sb.append("Days Game is scheduled on\n");
sb.append("Day One\n");
assertEquals(sb.toString(), service.playerWiseReport("Tom").toString());
}
@Test
public void generateNonExistantPlayerReport() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: Player does not exist", service.playerWiseReport("Tom").toString());
}
@Test
public void generateEmptyPlayerReport() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: Player name should not be empty", service.playerWiseReport("").toString());
}
@Test
public void generateDayReportSuccessfully() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Basketball", 5);
service.createGame(game);
Game[] games = { game };
Player player1 = new Player("Tom", games);
service.createPlayer(player1);
Day day = new Day("Day One", games);
service.createDay(day);
StringBuffer sb = new StringBuffer();
sb.append("Day Report for Day One\n\n");
sb.append("Games played on this day\n");
sb.append("Basketball\n");
sb.append("Players playing in this game\n");
sb.append("Tom\n");
assertEquals(sb.toString(), service.dayWiseReport("Day One").toString());
}
@Test
public void generateNonExistantDayReport() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: Day does not exist", service.dayWiseReport("Day One").toString());
}
@Test
public void generateEmptyDayReport() {
ISchedulerService service = new SchedulerService();
assertEquals("Error: Day name should not be empty", service.dayWiseReport("").toString());
}
@Test
public void generateGameReportThatAreNotScheduledOnAnyDays() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Basketball", 5);
service.createGame(game);
Game[] games = { game };
Player player1 = new Player("Tom", games);
Player player2 = new Player("Jerry", games);
service.createPlayer(player1);
service.createPlayer(player2);
assertEquals("Error: Game not scheduled on any day", service.gameWiseReport("Basketball").toString());
}
@Test
public void generateGameReportThatHasNoPlayers() {
ISchedulerService service = new SchedulerService();
Game game = new Game("Basketball", 5);
service.createGame(game);
Game[] games = { game };
Day day = new Day("Day One", games);
service.createDay(day);
assertEquals("Error: Game does not have any players", service.gameWiseReport("Basketball").toString());
}
}
Measuring loss of life, health, and income due to disease and injury: a method for combining morbidity, mortality, and direct medical cost into a single measure of disease impact. The impact of disease on a population includes illness, death, and medical care cost. Information on all three may be combined in a disease impact scale. The disease impact for a given condition can be defined as the sum of (a) the years of life lost before age 75 per 100,000 population (adjusted to reflect causes of death up to age 100); (b) the person-years of complete disability per 100,000 population; and (c) the direct medical costs in years of average annual personal income per 100,000 population. The sum of (a), (b), and (c), the disease impact in person-years per 100,000 population, can be used to compare one disease with another, to estimate the potential effect of programs for risk alteration, and to measure the outcome of planned or accidental changes in society. The data necessary to calculate disease impact are becoming available in many States. In Minnesota, the total disease impact in 1978 was approximately 26,000 person-years per 100,000 population per year. The disease categories in the International Classification of Diseases, Adapted, Eighth Revision, with the highest disease impact in the State were circulatory diseases (23.7 percent), injury and poisoning (10.9 percent), respiratory system (9.3 percent), neoplasms (9.0 percent), musculoskeletal system and connective tissue (8.8 percent), digestive system diseases (7.5 percent), and nervous system and sense organ diseases (5.8 percent). Circulatory diseases ranked first in morbidity, mortality, and cost, but the rankings for several other categories varied according to the parameter being considered. Use of a disease impact scale such as the one developed in Minnesota avoids dependence on a single parameter such as mortality or cost in making program decisions.
In contrast to economic analyses of disease impact, it does not require estimates of discount rates, future rates of inflation, or salaries for homemakers, students, and children. Although the results of the present calculations are only approximate, they provide a methodological framework within which correctable deficiencies in data collection methods are readily apparent. The disease impact scale is intended to be a component of a comprehensive disease surveillance system that includes measures of disease impact, the prevalence of risk factors for diseases, and the availability of health resources.
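The definition above lends itself to a direct calculation. The sketch below is illustrative, with made-up numbers: it assumes the three components are already expressed per 100,000 population, and converts direct medical cost to person-years by dividing by average annual personal income, as the abstract describes.

```python
def disease_impact(years_of_life_lost, person_years_disability,
                   direct_cost, avg_annual_income):
    """Disease impact in person-years per 100,000 population, as the sum of
    (a) years of life lost before age 75 (adjusted to age 100),
    (b) person-years of complete disability, and
    (c) direct medical cost expressed in years of average annual income.
    All inputs are assumed to be per 100,000 population."""
    cost_in_income_years = direct_cost / avg_annual_income
    return years_of_life_lost + person_years_disability + cost_in_income_years

# Hypothetical condition: 2,000 life-years lost, 1,500 disability-years,
# $30M direct cost, $10,000 average annual income (all per 100,000 pop.)
impact = disease_impact(2000, 1500, 30_000_000, 10_000)  # 6500 person-years
```

Expressing cost in income-years is what lets the three components be summed on a single person-year scale, avoiding the discount-rate and inflation assumptions of purely monetary analyses.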
Sexual behaviors related to HIV infection in Yi women of childbearing age in rural areas of southwest China ABSTRACT Liangshan Prefecture, the region of China most heavily affected by the HIV epidemic, is home to more than 2.5 million Yi people. We present the first investigation of the sexual behaviors and the related social determinants of health for HIV infection among Yi women of childbearing age in this area. A total of 800 Yi women of childbearing age were enrolled. Path analysis of the risk factors revealed that casual sex (0.152) and number of sex partners (0.152) were directly associated with HIV infection. Furthermore, education level (0.057), out-migrating for work (0.032), sense of self-worth (0.024), and number of sex partners (0.079) were indirectly related to HIV infection, mediated by casual sex and multiple sexual partners. The HIV epidemic among Yi women of childbearing age in Liangshan Prefecture is serious; future health promotion should increase their knowledge about condoms and modify their perceptions of sexual behaviors.
Because I did not know these stories were meant to be separate, I kept waiting for them to be tied together and did not realize until the last twenty minutes that they were not going to be. The creators mentioned in the Q&A that they tried to find some connections between the segments (such as having the police officer from the changeling story be the same officer that investigated the ghost story murders). But the primary rationale for making the film as four separate stories was for efficiency’s sake. With only a few months to write and shoot the film the producers divided the work between four writing teams and then found ways to connect the works.
Given the constraints, I feel that not cutting between the segments but instead playing each of them in its entirety before moving to the next would have served the film far better. The film opens in medias res with the Santa Claus segment and then returns to the beginning, and that framing works well for the North Pole story. By continuing to cut back to the North Pole but keeping the other segments whole (stitched together with Shatner’s DJ shtick), each story would have been given more space to work and would not have had to vie as much with the other segments for tonal consistency.
In fact, one of the writers in the Q&A said that many of the comedic elements in the Krampus story had to be pulled back or cut entirely because they did not fit the overall tone of the other segments. Additionally, I felt that the more dramatic and serious tone of the changeling segment did not fit well with the other stories. Had it been a more self-contained story, it could have more comfortably occupied its dramatic space. By letting each segment stand on its own the film could have avoided these problems.
A Christmas Horror Story is also somewhat bedeviled by low production values. The CGI for the North Pole was very disappointing, to the point that I would have preferred that it be cut entirely in favor of a more mysterious setting (which would have served the ending better, in my opinion). Also, many of the action scenes were disorienting and almost nauseating to watch due to shoddy camera work and editing.
Despite these problems, A Christmas Horror Story was still an enjoyable experience. I can’t say that the festival crowd didn’t have an influence on me, and I am a little skeptical that I would have enjoyed the film as much on a small screen (it probably will feel right at home on SyFy). But, ultimately, the over-the-top Santa action scenes, genuinely spooky elements and the delightful William Shatner pulling everything together left me drunk on the joy of the season (and zombie elf decapitations). Perhaps we will return to Bailey Downs in the future — maybe for a Boxing Day Horror Story?
Miscellany
William Shatner’s booze-soaked performance (presumably it was a performance…) as he delivered increasingly troubling notices of some kind of disturbance occurring at the Bailey Downs mall was wonderful, especially his one-sided banter with the off-screen “Susan”.
“No no, Susan, I’m gonna talk about Jesus on the radio and you know why? Because it’s his birthday tomorrow!”
The ghost story was a real stretch to fit into the Christmas theme. The only connection was that the murders took place on Christmas. Honestly, this story could have been in any horror anthology and was probably the weakest of the four.
For a pretty low-budget flick, the Krampus creature design, used in both the Krampus and North Pole segments, was excellent. The filmmakers were lucky to discover their Krampus performer, Rob Archer (who was in attendance and is about as wide across the shoulders as I am tall).
Olunike Adeliyi and Adrian Holmes’s performances in the changeling segment — the second changeling story at this year’s After Dark after The Hallow — were particularly strong and emotional. Their more serious performance and story often felt at odds with the rest of the segments, though.
The final twist was both unexpected and hilarious. It brought laughs and cheers from the After Dark crowd at the Scotiabank.
Maybe I’m crazy, but shouldn’t this movie be called Christmas Horror Stories if it’s an anthology?
Completely Subjective Rating
My final word on A Christmas Horror Story?
A fun, yuletide B-movie held back a little by low production values and a fractured story, A Christmas Horror Story nevertheless delivers enough scares and laughs to take its place among the lower pantheon of Christmas horror films (for me, nothing can hold a candle to Gremlins).
Follow the rest of my 2015 Toronto After Dark reviews and experiences at A View from the Dark.
Joan Jett & The Blackhearts To Unleash New Album ‘Unvarnished’ On October 1st!
Hot on the heels of being honored at this year’s Sunset Strip Music Festival and the City of West Hollywood’s official proclamation of August 1 as “Joan Jett Day,” the rock n roll icon is set to release her first CD of all original music in more than seven years. Joan Jett and the Blackhearts will release their 14th studio album Unvarnished October 1 on the Blackheart Records label. The album features ten original tracks and a Deluxe Edition will feature four additional bonus tracks. The first single, “Any Weather” will make its television debut on Jimmy Kimmel Live! on August 8th. To coincide with the television debut, the single will be available August 6th via iTunes preorder at www.itunes.com.