Reed Johnson
College career
Johnson was born in Riverside, California, and grew up in Temecula, in southern Riverside County. He attended Temecula Valley High School, where he participated in baseball and soccer and was an All-League and All-County selection.
Johnson attended college at Cal State Fullerton and was named an Academic All-American. He also set records by being the first Cal State Fullerton player to score 100 runs and collect 100 hits in a season.
Toronto Blue Jays
Johnson was drafted by the Toronto Blue Jays in the 17th round of the 1999 MLB Draft. In the minors, he was a Southern League All-Star in 2001 with the Tennessee Smokies, hitting .314 with 13 home runs and 74 RBI. Johnson made his Major League debut on April 17, 2003 against the New York Yankees as a pinch runner. He recorded his first major league hit on April 20, 2003 against Boston Red Sox pitcher Casey Fossum and his first home run on May 17, 2003 against Jeremy Affeldt of the Kansas City Royals. He finished his rookie season with a .294 batting average, 10 home runs, and 52 runs batted in. Johnson also won the American League Rookie of the Month Award for September.
He is one of only five batters, through August 2009, to have hit both a leadoff and walk-off home run in the same game (having done so in 2003), the others being Billy Hamilton (1893), Victor Power (1957), Darin Erstad (2000), and Ian Kinsler (2009).
Johnson extended his tenure with the Blue Jays on December 7, 2005, after signing a one-year extension worth $1,425,000.
At the start of the 2006 season, Johnson was platooned with Frank Catalanotto in left field, as the two had been for the previous two seasons. In a Toronto Star article, Johnson was quoted as saying, "I train so that I can play every day. I don't train to be a fourth outfielder, or there would be a lot less training. I wouldn't be waking up as early. I wouldn't be trying to be in the shape that I'm in. I know my body can take the pounding of an everyday season".
In 2006, Johnson led all leadoff hitters in the American League with a .390 on-base percentage and also had a .319 batting average.
One of Johnson's more dubious honors is his propensity for being hit by pitches. Consistently among the Blue Jays' leaders in being hit, Johnson moved past Ed Sprague in 2006 to take second place on the Blue Jays' all-time hit-by-pitch list, trailing only Carlos Delgado. He is also one of several players to have been hit by pitches a major-league-record three times in one game: Johnson was hit three times in a game against the Texas Rangers on April 15, 2005, and equaled the feat on April 7, 2006, against the Tampa Bay Devil Rays.
In 2008, the Blue Jays acquired veteran Matt Stairs, again relegating Johnson to a platoon role. The Blue Jays signed all-star shortstop David Eckstein, and removed Johnson from his familiar role as leadoff hitter. The Blue Jays also signed outfielder Shannon Stewart to a minor league contract. Stewart, who played in 855 games for Toronto from 1995 to 2003, was a dependable and consistent force at the top of the Blue Jays lineup for many years, although by this point he was considered a liability in the field at times because of an injury suffered playing football, which greatly reduced his throwing strength. His presence at spring training made Johnson's role all the more uncertain. Johnson was released by the Jays on March 23, and replaced by Stewart.
Chicago Cubs
On March 25, he signed a one-year deal with the Chicago Cubs. Johnson platooned in center field with Jim Edmonds (as well as Félix Pie to start the season). On June 12, against the Atlanta Braves, Johnson drove in a game-winning run when he was hit by a pitch with the bases loaded. During a crucial game in the 2008 season against the Milwaukee Brewers, Johnson executed a perfect hard slide into second base that prevented a double play and allowed the Cubs to take a one-run lead. When Johnson returned to Rogers Centre to play the Toronto Blue Jays on June 13, 2008, he received a long standing ovation from Blue Jays fans.
During a game early in the 2009 season, also against the Brewers, Johnson showed versatility on the field by catching a Prince Fielder drive that had cleared the wall, preventing the Brewers from tying the game on a grand slam. He was placed on the 15-day DL on July 30 that same year with a left foot fracture.
Los Angeles Dodgers
On February 1, 2010, Johnson signed a one-year deal with the Los Angeles Dodgers to replace Juan Pierre as the team's fourth outfielder. He appeared in 102 games with a .262 batting average during the season.
Johnson appears in the opening introduction sequence of The Tonight Show with Jay Leno as a Dodger player. As the announcer introduces The Tonight Show, game footage of Johnson hitting a ball and running to first base is seen in the opening sequence; the exact game is not known. The shot of Johnson does not last more than two and a half seconds.
Second stint with the Cubs
On January 12, 2011, Johnson signed a minor league deal to return to the Cubs with an invitation to Spring Training.
On April 20, Johnson hit a walk-off homer into the left-field seats off Luke Gregerson to defeat the San Diego Padres 2-1 in the first game of a doubleheader. In 2011, he batted .309 in 246 at-bats. Through 2011, he had the second-best career fielding percentage (.991) among all active major league left fielders, behind Ryan Braun.
On December 21, 2011, Johnson re-signed with the Cubs on a one-year deal.
Atlanta Braves
On July 30, 2012, Johnson was traded along with left-handed pitcher Paul Maholm to the Atlanta Braves for right-handed pitchers Arodys Vizcaíno and Jaye Chapman.
On December 7, 2012, Johnson re-signed with the Atlanta Braves on a one-year contract.
Miami Marlins
On January 31, 2014, Johnson signed a minor league contract with the Miami Marlins. He batted .235/.266/.348 with 2 home runs and 35 RBIs in 113 games with the team.
On February 17, 2015, the Marlins re-signed Johnson to another minor league contract. He was released on March 30.
Washington Nationals
Hours after being released by Miami, Johnson agreed to a minor league contract with the Washington Nationals. He appeared in 17 games for the Nationals in 2015 and hit .227. They re-signed him to a minor league contract after the season. He was released on April 3, 2016.
Personal life
Growing up, Johnson participated in curling. He resides in Chicago.
|
Adaptive virtual machine assignment for multi-tenant data center networks This paper proposes an adaptive virtual machine (VM) assignment scheme for multi-tenant data center networks. In multi-tenant data centers, tenants submit their resource requirements and the data center provides VMs that are assigned to physical servers. These VMs communicate with each other to execute distributed processing. The amount of traffic exchanged by VMs is very large, and this is one of the major issues in data center networks. To reduce the amount of traffic injected into the network, an appropriate VM assignment strategy that considers inter-VM traffic is needed. Furthermore, VMs are dynamically assigned to and removed from physical servers according to tenants' resource requirements, so such dynamic behavior must also be considered. The proposed scheme provides a VM assignment strategy that reduces the amount of traffic injected into data center networks while taking this dynamic behavior into account. Through simulation experiments, we demonstrate that the proposed scheme reduces the network load efficiently.
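The abstract does not spell out its algorithm, but traffic-aware placement of the kind it argues for is easy to illustrate with a greedy heuristic that co-locates heavily communicating VM pairs so their traffic never reaches the network core. The sketch below is a minimal illustration under assumed inputs (a pairwise traffic matrix and uniform slot capacity per server); it is not the authors' scheme.

# Illustrative greedy traffic-aware VM placement (not the paper's algorithm).
def place_vms(traffic, num_servers, slots_per_server):
    """traffic[(a, b)] = expected traffic between VMs a and b."""
    assignment = {}           # vm -> server index
    load = [0] * num_servers  # used slots per server
    # Heaviest-communicating pairs first: co-locating them saves the most traffic.
    for (a, b), _ in sorted(traffic.items(), key=lambda kv: -kv[1]):
        for v in (a, b):
            if v in assignment:
                continue
            peer = b if v == a else a
            if peer in assignment and load[assignment[peer]] < slots_per_server:
                s = assignment[peer]  # join the peer's server while a slot is free
            else:
                s = min(range(num_servers), key=lambda i: load[i])
            assignment[v] = s
            load[s] += 1
    return assignment

print(place_vms({("a", "b"): 10.0, ("b", "c"): 4.0, ("c", "d"): 1.0}, 2, 2))
# {'a': 0, 'b': 0, 'c': 1, 'd': 1}: the heaviest pair shares a server.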
|
San Francisco Board of Supervisors
Government and politics
The City and County of San Francisco is a consolidated city-county, being simultaneously a charter city and charter county with a consolidated government, a status it has had since 1856. Since it is the only such consolidation in California, it is therefore the only California city with a mayor who is also the county executive, and a county board of supervisors that also acts as the city council.
Whereas the overall annual budget of the city and county is about $9 billion as of 2016, various legal restrictions and voter-imposed set-asides mean that the Board of Supervisors can allocate only about $20 million directly without constraints, according to its president's chief of staff.
Salaries
Members of the San Francisco Board of Supervisors were paid $110,858 per year in 2015.
Election
There are 11 members of the Board of Supervisors, each representing a geographic district (see below). The current Board President is Norman Yee, who represents District 7, and was elected by his colleagues on the Board to succeed Malia Cohen, after she won the election to a seat on the California Board of Equalization.
How the Board of Supervisors should be elected has been a matter of contention in recent San Francisco history. Throughout the United States, almost all cities and counties with populations in excess of 200,000 divide the jurisdiction into electoral districts (in cities, often called "wards") to achieve a geographical distribution of members from across the community. But San Francisco, notwithstanding a population of over 700,000, was long an exception.
Prior to 1977 and again from 1980 through 2000, the Board of Supervisors was chosen in 'at-large' elections, with all candidates appearing together on the ballot. The person who received the most votes was elected President of the Board of Supervisors, and the next four or five (depending on how many seats were up for election) were elected to seats on the board. District elections were enacted by Proposition T in November 1976. The first district-based elections in 1977 resulted in a radical change to the composition of the Board, including the election of Harvey Milk, only the third openly gay or lesbian individual (and the first gay man) elected to public office in the United States. Following the assassinations of Supervisor Milk and Mayor George Moscone a year later by former Supervisor Dan White, district elections were deemed divisive and San Francisco returned to at-large elections until the current system was implemented in 2000.
District elections were repealed by Proposition A in August 1980 by a vote of 50.58% Yes to 49.42% No.
An attempt was made to reinstate district elections in November 1980 with Proposition N but it failed by a vote of 48.42% Yes to 51.58% No.
District elections were reinstated by Proposition G in November 1996 with a November runoff.
Runoffs were eliminated and replaced with instant-runoff voting with Proposition A in March 2002.
Under the current system, supervisors are elected by district to four-year terms. The City Charter provides a term limit of two successive four-year terms and requires supervisors to be out of office for four years after the expiration of their second successive term before rejoining the Board, through election or appointment, again. A partial term counts as a full term if the supervisor is appointed and/or elected to serve more than two years of it.
The terms are staggered so that only half the board is elected every two years, thereby providing continuity. Supervisors representing odd-numbered districts (1, 3, 5, 7, 9, and 11) are elected every fourth year counted from 2000 (so 2000, 2004, 2008, etc.). Supervisors representing even-numbered districts (2, 4, 6, 8, and 10) were elected to transitional two-year terms in 2000, thereafter to be elected every fourth year (2002, 2006, 2010, etc.). Terms of office begin on January 8 following the regular election for each seat. Each supervisor is required to live in his or her district, and although elections are ostensibly held on a non-partisan basis, as of 2018 all 11 supervisors are known to be members of the Democratic Party. The most recent supervisorial elections were held on November 6, 2018.
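The staggering rule above is mechanical enough to restate as code; the snippet below simply encodes the schedule described here (odd districts from 2000, even districts from 2002 after the transitional term) and is not an official source.

def election_years(district, through=2030):
    # Regular election years for a district under the post-2000 schedule.
    start = 2000 if district % 2 == 1 else 2002  # even districts had a transitional 2-year term from 2000
    return list(range(start, through + 1, 4))

print(election_years(7))   # [2000, 2004, 2008, 2012, 2016, 2020, 2024, 2028]
print(election_years(10))  # [2002, 2006, 2010, 2014, 2018, 2022, 2026, 2030]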
The President of the Board of Supervisors, under the new system, is elected by the members of the Board from among their number. This is typically done at the first meeting of the new session commencing after the general election, or when a vacancy in the office arises.
|
package com.example.administrator.testone.activity;

import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.widget.AutoCompleteTextView;

import com.example.administrator.testone.R;

public class MyLoginActivity extends AppCompatActivity {

    // Username input with auto-completion, bound from the layout in onCreate().
    private AutoCompleteTextView tv_user_name;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my_login);
        tv_user_name = findViewById(R.id.tv_user_name);
    }
}
|
/**
* This exception is a wrapper around {@link ClassNotFoundException}.
*
* @author Oliver Richers
*/
public class ClassNotFoundRuntimeException extends RuntimeException {
private static final long serialVersionUID = 1L;
public ClassNotFoundRuntimeException(ClassNotFoundException exception) {
super(exception);
}
}
|
def data_from_response(response, data_key_path=None):
    """Split an API response into (metadata, datarows).

    Without a key path (or for the DEAD_RESPONSE sentinel, assumed to be
    defined elsewhere in this module), the response is expected to already
    be a (metadata, datarows) pair. Otherwise the whole response is kept as
    metadata and the rows are dug out by following data_key_path, descending
    into the first element of any intermediate list along the way.
    """
    if data_key_path is None or response == DEAD_RESPONSE:
        metadata, datarows = response
    else:
        metadata = response
        datarows = response.copy()
        for key in data_key_path:
            datarows = datarows[key]
            # Intermediate lists are unwrapped to their first element.
            if key != data_key_path[-1] and isinstance(datarows, list):
                datarows = datarows[0]
    return metadata, datarows
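A minimal usage sketch, assuming a nested payload; the key path, field names, and the stand-in sentinel below are hypothetical:

DEAD_RESPONSE = (None, [])  # stand-in for the module's sentinel, assumed defined elsewhere
response = {"status": "ok", "data": {"items": [{"rows": [1, 2, 3]}]}}
meta, rows = data_from_response(response, data_key_path=["data", "items", "rows"])
print(rows)  # [1, 2, 3]; meta is the whole response dict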
|
Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification In particle image velocimetry (PIV) the measurement signal is contained in the recorded intensity of the particle image pattern superimposed on a variety of noise sources. The signal-to-noise ratio (SNR) strength governs the resulting PIV cross-correlation and ultimately the accuracy and uncertainty of the resulting PIV measurement. Hence we posit that correlation SNR metrics calculated from the correlation plane can be used to quantify the quality of the correlation and the resulting uncertainty of an individual measurement. In this paper we extend the original work by Charonko and Vlachos and present a framework for evaluating the correlation SNR using a set of different metrics, which in turn are used to develop models for uncertainty estimation. Several corrections have been applied in this work. The SNR metrics and corresponding models presented herein are expanded to be applicable to both standard and filtered correlations by applying a subtraction of the minimum correlation value to remove the effect of the background image noise. In addition, the notion of a valid measurement is redefined with respect to the correlation peak width in order to be consistent with uncertainty quantification principles and distinct from an outlier measurement. Finally, the type and significance of the error distribution function is investigated. These advancements lead to more robust and reliable uncertainty estimation models compared with the original work by Charonko and Vlachos. The models are tested against both synthetic benchmark data and experimental measurements. In this work, U68.5 uncertainties are estimated at the 68.5% confidence level while U95 uncertainties are estimated at the 95% confidence level. For all cases the resulting calculated coverage factors approximate the expected theoretical confidence intervals, thus demonstrating the applicability of these new models for estimation of uncertainty for individual PIV measurements.
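One of the simplest correlation-plane SNR metrics of the kind discussed here is the primary peak ratio (PPR): the height of the tallest correlation peak divided by the second tallest. The NumPy sketch below is illustrative only, not the authors' implementation; the minimum subtraction mirrors the background-noise correction mentioned in the abstract, and the exclusion radius is an assumed parameter.

import numpy as np

def primary_peak_ratio(corr, exclude_radius=3):
    """PPR = highest correlation peak / second-highest peak elsewhere."""
    c = corr - corr.min()  # remove the background offset
    i, j = np.unravel_index(np.argmax(c), c.shape)
    peak1 = c[i, j]
    # Mask a small neighborhood around the primary peak; the global max of
    # what remains is the secondary peak.
    masked = c.copy()
    i0, i1 = max(i - exclude_radius, 0), i + exclude_radius + 1
    j0, j1 = max(j - exclude_radius, 0), j + exclude_radius + 1
    masked[i0:i1, j0:j1] = 0.0
    peak2 = masked.max()
    return peak1 / peak2 if peak2 > 0 else np.inf

rng = np.random.default_rng(0)
plane = rng.random((64, 64)) * 0.2
plane[32, 32] = 1.0                  # synthetic displacement peak
print(primary_peak_ratio(plane))     # large ratio -> high-confidence measurement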
|
# Problem: Contains Duplicates II
# Difficulty: Easy
# Category: Array/Hash Table
# Leetcode 219: https://leetcode.com/problems/contains-duplicate-ii/#/description
# Description:
"""
Given an array of integers and an integer k,
find out whether there are two distinct indices i and j in the array
such that nums[i] = nums[j]
and the absolute difference between i and j is at most k.
"""
# Solution:
class Solution(object):
    def contains_duplicate(self, nums, k):
        # Variant of LeetCode 219: instead of a boolean, return the index
        # pair [i, j] of the first duplicate within distance k, or [-1, -1].
        if len(nums) < 2:
            return [-1, -1]
        buff_dic = {}  # value -> most recent index where it was seen
        for i in range(len(nums)):
            if nums[i] in buff_dic and i - buff_dic[nums[i]] <= k:
                return [buff_dic[nums[i]], i]
            buff_dic[nums[i]] = i
        return [-1, -1]
# test cases:
obj = Solution()
print(obj.contains_duplicate([1, 2, 3, 4, 5, 1], 5))
print(obj.contains_duplicate([1, 2, 3, 4, 5, 1], 4))
print(obj.contains_duplicate([1, 2, 3, 4, 5, 1], 6))
print(obj.contains_duplicate([1, 2, 3], 2))
print(obj.contains_duplicate([1, 1, 1, 1], 0))
# special case
print(obj.contains_duplicate([1, 2, 1, 0, 3, 4, 5, 6, 7, 1], 9))
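# For reference, the canonical LeetCode 219 answer returns a boolean; the
# same sliding-dictionary idea applies directly:
def contains_nearby_duplicate(nums, k):
    last_seen = {}  # value -> most recent index
    for i, v in enumerate(nums):
        if v in last_seen and i - last_seen[v] <= k:
            return True
        last_seen[v] = i
    return False

print(contains_nearby_duplicate([1, 2, 3, 4, 5, 1], 5))  # True
print(contains_nearby_duplicate([1, 2, 3, 4, 5, 1], 4))  # False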
|
/**
 * The Permit all authorization generator.
 */
public class PermitAllAuthorizationGenerator implements AuthorizationGenerator<CommonProfile> {

    // Assumed to be injected CAS configuration exposing the admin roles;
    // the snippet referenced casProperties without declaring it.
    private final CasConfigurationProperties casProperties;

    public PermitAllAuthorizationGenerator(final CasConfigurationProperties casProperties) {
        this.casProperties = casProperties;
    }

    @Override
    public CommonProfile generate(final WebContext webContext, final CommonProfile commonProfile) {
        // Grant every authenticated profile the configured admin roles.
        commonProfile.addRoles(casProperties.getMgmt().getAdminRoles());
        return commonProfile;
    }
}
|
from abc import abstractmethod


class Observer:
    @abstractmethod
    def notify(self, subject):
        """
        Notify the view that the model has been updated
        :param subject: the updated model
        """
        raise NotImplementedError()
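# A minimal concrete observer, just to show the intended call pattern
# (the model value here is hypothetical):
class PrintingView(Observer):
    def notify(self, subject):
        print("model changed:", subject)

PrintingView().notify({"count": 3})  # model changed: {'count': 3}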
|
package main

import "net/http" // "net/http"

func main() {
	var x *http.Client // "net/http Client"
	var y int
	z := http.RoundTripper(nil) // "net/http RoundTripper"
	_ = x
	_ = y
	_ = &http.Client{ // "net/http Client"
		Transport: z, // "net/http Client Transport"
	}
}
|
// src/shared/ui/atoms/cardbox-logo.tsx
import React from 'react';
import styled from 'styled-components';
import { IconCardboxLogo, IconUserLogoDefault } from '@box/shared/ui';
export function CardboxLogo() {
  return (
    <IconWrapper>
      <IconUserLogoDefault data-icon="square" />
      <IconCardboxLogo data-icon="text" />
    </IconWrapper>
  );
}

const IconWrapper = styled.div`
  display: flex;
  align-items: baseline;

  [data-icon='text'] {
    height: 24px;
    margin-left: 10px;
  }

  [data-icon='square'] {
    height: 17px;
    width: 17px;
    border-radius: 3px;

    & rect {
      fill: #683aef;
    }
  }
`;
|
import java.io.IOException;
import java.math.BigInteger;
import java.util.Arrays;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

/**
 * @author Franck WOLFF
 */
public class TestBigInteger extends AbstractSpearalTestUnit {
@Before
public void setUp() throws Exception {
// printStream = System.out;
}
@After
public void tearDown() throws Exception {
printStream = NULL_PRINT_STREAM;
}
@Test
public void test() throws IOException {
encodeDecode(newBigInteger(8000, true), 4820);
encodeDecode(newBigInteger(256, true), 158);
encodeDecode(new BigInteger("-10000000000000000000000"), 5);
encodeDecode(BigInteger.valueOf(Long.MIN_VALUE).subtract(BigInteger.ONE), 12);
encodeDecode(BigInteger.valueOf(Long.MIN_VALUE), 12);
encodeDecode(BigInteger.valueOf(Long.MIN_VALUE + 1), 12);
encodeDecode(BigInteger.TEN.negate(), 4);
encodeDecode(BigInteger.ONE.negate(), 3);
encodeDecode(BigInteger.ZERO.negate(), 3);
encodeDecode(BigInteger.ZERO, 3);
encodeDecode(BigInteger.ONE, 3);
encodeDecode(BigInteger.TEN, 3);
encodeDecode(BigInteger.valueOf(Long.MAX_VALUE - 1), 12);
encodeDecode(BigInteger.valueOf(Long.MAX_VALUE), 12);
encodeDecode(BigInteger.valueOf(Long.MAX_VALUE).add(BigInteger.ONE), 12);
encodeDecode(new BigInteger("10000000000000000000000"), 4);
encodeDecode(newBigInteger(256, false), 158);
encodeDecode(newBigInteger(8000, false), 4820);
}
private static BigInteger newBigInteger(int length, boolean negate) {
char[] chars = new char[length];
Arrays.fill(chars, 'f');
BigInteger value = new BigInteger(String.valueOf(chars), 16);
return (negate ? value.negate() : value);
}
private void encodeDecode(BigInteger value, int expectedSize) throws IOException {
byte[] data = encode(value);
BigInteger clone = decode(data, BigInteger.class);
if (expectedSize > 0)
Assert.assertEquals(expectedSize, data.length);
Assert.assertEquals(value, clone);
}
}
|
On the welfare effect of international technology transfer in a two-country Ricardian model The purpose of this paper is to sketch out the consequences of free technology transfer, licensing and foreign direct investment (FDI) for North-South welfare in a two-good, two-country Ricardian model. On the demand side, and following the recent reappearance of the homotheticity restriction on consumers' preferences, we use a Cobb-Douglas utility function. We show that the developed country, which has a technology-based absolute advantage in both goods, gains by selling, giving, or investing in the use of the advanced technology in the developing country's export sector. We find that the developed country unambiguously gains regardless of the mode of technology transfer: even if it receives no income from abroad, it benefits from the transfer through an improvement in its terms of trade. On the other hand, we show that FDI and licensing may decrease the developing country's welfare because of the transfer of income to the developed country.
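For concreteness, the Cobb-Douglas demand side mentioned above takes the standard form below; the notation is generic, not necessarily the paper's own.

\[
U(c_1, c_2) = c_1^{\alpha}\, c_2^{1-\alpha}, \qquad 0 < \alpha < 1,
\]
so each country spends the fixed share $\alpha$ of its income $I$ on good 1 and $1-\alpha$ on good 2,
\[
p_1 c_1 = \alpha I, \qquad p_2 c_2 = (1-\alpha) I,
\]
which is what pins down the terms of trade in the Ricardian equilibrium.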
|
package com.tiktokdemo.lky.tiktokdemo.utils;
import android.content.Context;
import android.content.SharedPreferences;
/**
 * Created by 李凯源 on 2016/4/12.
 * Cache utility class: all data is stored and retrieved via SharedPreferences.
 */
public class CacheUtil {
private static final String CACHE_FILE_NAME = "LkyDemo";
private static SharedPreferences mSharedPreferences;
/**
 * @param context
 * @param key
 *            the key of the value to retrieve
 * @param defValue
 *            the default value
 * @return
 */
public static boolean getBoolean(Context context, String key,
boolean defValue) {
if (mSharedPreferences == null) {
mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
Context.MODE_PRIVATE);
}
return mSharedPreferences.getBoolean(key, defValue);
}
/**
 * Store a boolean value.
 *
 * @param context
 * @param key
 * @param value
 */
public static void putBoolean(Context context, String key, boolean value) {
if (mSharedPreferences == null) {
mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
Context.MODE_PRIVATE);
}
mSharedPreferences.edit().putBoolean(key, value).commit();
}
/**
 * Store a String value.
 *
 * @param context
 * @param key
 * @param value
 */
public static void putString(Context context, String key, String value) {
if (mSharedPreferences == null) {
mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
Context.MODE_PRIVATE);
}
mSharedPreferences.edit().putString(key, value).commit();
}
/**
 * Retrieve a String value by key.
 *
 * @param context
 * @param key
 * @param defValue
 * @return
 */
public static String getString(Context context, String key, String defValue) {
if (mSharedPreferences == null) {
mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
Context.MODE_PRIVATE);
}
return mSharedPreferences.getString(key, defValue);
}
/**
 * Store an int value.
 *
 * @param context
 * @param key
 * @param value
 */
public static void putInt(Context context, String key, int value) {
if (mSharedPreferences == null) {
mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
Context.MODE_PRIVATE);
}
mSharedPreferences.edit().putInt(key, value).commit();
}
/**
 * Retrieve an int value by key.
 *
 * @param context
 * @param key
 * @param defValue
 * @return
 */
public static int getInt(Context context, String key, int defValue) {
if (mSharedPreferences == null) {
mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
Context.MODE_PRIVATE);
}
return mSharedPreferences.getInt(key, defValue);
}
public static void clearData(Context context) {
    if (mSharedPreferences == null) {
        mSharedPreferences = context.getSharedPreferences(CACHE_FILE_NAME,
                Context.MODE_PRIVATE);
    }
    // Clear all cached entries, including on first use when the
    // preferences object has only just been initialized.
    mSharedPreferences.edit().clear().commit();
}
}
|
package network.api;
import model.Shift;
import network.generic.GenericAPI;
import utils.Constants;
/**
* Created By Tony on 23/07/2018
*/
public class ShiftAPI extends GenericAPI<Shift> {
/**
 * Constructs a ShiftAPI using the default shift endpoint route.
 */
public ShiftAPI() {
super(Constants.Routes.shift(),Shift.class);
}
}
|
Former first minister Alex Salmond is in talks to take his sell-out Edinburgh Fringe show on tour, it has been revealed.
The politician, who was ousted from Westminster in June’s snap general election, took to the stage with a series of special guests in his show Alex Salmond... Unleashed.
Mr Salmond said: “The show has been a tremendous success and we have welcomed a host of amazing guests to the Fringe stage, raising money for good causes far and wide.
“I have thoroughly enjoyed my time treading the boards at the Festival and I relished being Alex Salmond... Unleashed for 19 shows in a row.
Guests on Mr Salmond’s show over its run included politicians such as Brexit Secretary David Davis, as well as the former Celtic footballer Neil Lennon and comedian John Bishop.
The last show featured Scottish Brexit Minister Mike Russell as well as a surprise appearance by singer Sheena Wellington, who performed the Robert Burns classic A Man’s A Man For A’ That, which she sang at the opening of the Scottish Parliament in 1999.
Tasmina Ahmed-Sheikh, the former SNP MP who produced the show, confirmed talks were taking place with a view to taking it on the road.
She said: “This has been a fantastic production all round and the city of Edinburgh has been a great host.
|
Type III TGF-β Receptor Down-Regulation Promoted Tumor Progression via Complement Component C5a Induction in Hepatocellular Carcinoma Simple Summary The clinical implications of TGFR3 downregulation are currently unknown in hepatocellular carcinoma (HCC). Clinically, we identified that HCC patients with low expression levels of tumoral TGFR3 exhibited significantly later tumor stages and shortened survival outcomes. Moreover, HCC patients developed lower plasma levels of soluble TGFR3 (sTGFR3) (8.9 ng/mL) compared to healthy individuals (15.9 ng/mL), representing a potential diagnostic marker. Similar to tumoral TGFR3, low levels of plasma sTGFR3 are also associated with poor clinical outcomes in HCC. To determine its tumor-suppressing capacity, continuous injection of sTGFR3 was performed in an orthotopic liver tumor model, resulting in a 2-fold reduction in tumor volume compared to control. Decreased expression of TGFR3 induced the upregulation of tumoral complement component C5a in HCC, which was found to contribute to poor clinical outcomes and to promote tumor progression via a novel function in activating tumor-promoting macrophages. Abstract Background and Aims: Transforming growth factor-beta (TGF-β) signaling orchestrates tumorigenesis, and one of the family members, TGF-β receptor type III (TGFR3), is distinctively under-expressed in numerous malignancies. Currently, the clinical impact of TGFR3 down-regulation and the underlying mechanism remain unclear in hepatocellular carcinoma (HCC). Here, we aimed to identify the tumor-promoting roles of decreased TGFR3 expression in HCC progression. Materials and Methods: For clinical analysis, plasma and liver specimens were collected from 100 HCC patients who underwent curative resection for the quantification of TGFR3 by q-PCR and ELISA. To study the tumor-promoting mechanism of TGFR3 downregulation, HCC mouse models and TGFR3 knockout cell lines were applied. Results: Significant downregulation of TGFR3 and its soluble form (sTGFR3) was found in HCC tissues and plasma compared to healthy individuals (p < 0.01). Patients with <9.4 ng/mL sTGFR3 exhibited advanced tumor stage, higher recurrence rate and shorter disease-free survival (p < 0.05). The tumor-suppressive function of sTGFR3 was further revealed in an orthotopic mouse HCC model, resulting in a 2-fold tumor volume reduction. In TGFR3 knockout hepatocyte and HCC cells, increased complement component C5a was observed and strongly correlated with shorter survival and advanced tumor stage (p < 0.01). Interestingly, C5a activated the tumor-promoting Th-17 response in tumor-associated macrophages. Conclusion: TGFR3 suppressed tumor progression, and decreased expression resulted in poor prognosis in HCC patients through upregulation of tumor-promoting complement C5a. Introduction The multifunctional cytokine transforming growth factor-β (TGF-β) is a key regulator in multiple cellular processes including proliferation, differentiation, migration and immunological responses. Unlike other well-defined signaling pathways, whether TGF-β exhibits tumor-suppressive or tumor-promoting functions remains controversial. Among the members of the super-family, the type III receptor of TGF-β (TGFR3) has shown a distinctive role in tumor biology. TGFR3 is a co-receptor presenting TGF-β ligands to TGFR1 and is ubiquitously expressed on nearly all cell types. It plays an essential role in mediating cell proliferation, apoptosis, differentiation, and migration in most human tissues.
Loss or reduced expression of TGFR3 has been reported in many malignancies, including those of the breast, kidney, lung, ovaries, prostate and liver. It has also been shown to be a key suppressor of tumor cell invasion, proliferation and angiogenesis in both in vitro and in vivo cancer models. Apart from being a transmembrane protein found in the cell membrane, the receptor can undergo ectodomain shedding from the cell surface to form soluble TGFR3 (sTGFR3), which is released into the extracellular matrix and circulation. Studies have demonstrated its anti-tumor capacities in melanoma and breast cancer, where it sequesters TGF-β ligands away from downstream pro-tumor signaling. To date, little is known about the clinical implications and molecular mechanisms of TGFR3 in hepatocellular carcinoma (HCC). HCC has a very high metastatic and fatality rate (overall mortality-to-incidence ratio >90%), representing the second most common cause of death from cancer worldwide. Curative treatment options, including surgical resection and radiofrequency ablation, can be applied only to patients with limited tumor burden. Previously, we reported the clinical significance of alternatively activated macrophages in promoting poor prognosis and tumor invasiveness in the disease. In contrast, the mechanisms by which HCC recruits and activates this immune population remain elusive. TGFR3 has been shown to possess the capacity to promote tumor-suppressing immunity. Together with the observation that 66% of patients showed decreased expression of the receptor, we investigated the clinical impact of TGFR3 downregulation and its immuno-regulatory mechanism in HCC. In the present study, we revealed that the loss of TGFR3 contributed to poor prognosis and promoted tumor progression via the upregulation of complement component C5a. Aberrant Down-Regulation of TGFR3 Expression in HCC Patients The clinical characteristics of the 100 patients who underwent curative resection are described in Table S1. To determine the protein and gene expression levels of tumoral TGFR3, immunohistochemistry, Western blotting and quantitative PCR were applied. Downregulation of TGFR3 in tumor was consistently observed in both immunostaining and blotting studies (Figure 1A-C). Further transcript analysis revealed significant 1.43- and 0.89-fold decreases in TGFR3 in HCC tumor compared to adjacent non-tumoral and normal tissues, respectively (Figure 1D). Such downregulation was observed in 66% of the studied HCC population, as well as in seven HCC-patient-derived cell lines (Figure 1E). All the data collectively illustrated the significant reduction in TGFR3 at both transcript and protein levels in HCC patients. Such down-regulation was also validated in publicly available datasets from Oncomine (www.oncomine.org) and TCGA/GTEx using GEPIA (http://gepia.cancer-pku.cn/) (Figure S1). (Statistical tests: paired t-test for Figure 1E; log-rank test for Figure 1F,G.) Down-Regulation of TGFR3 Correlated with Poor Prognosis in HCC Patients When analyzed by the Kaplan-Meier method with log-rank statistics, low levels of tumoral TGFR3 transcript were found to be associated with poor overall survival (p = 0.017) and disease-free survival (p = 0.047) (Figure 1F,G). Similar clinical traits of TGFR3 were also confirmed in the publicly available datasets from GEPIA (Figure S2). Apart from survival outcomes, the correlation between clinicopathological characteristics and TGFR3 expression was examined.
As summarized (Table 1), high levels of alpha-fetoprotein (AFP) (>20 ng/mL) (p = 0.014) and advanced tumor stage in all grading systems, including UICC (p = 0.01), Edmondson (p = 0.003) and AJCC (p = 0.048), were found to be associated with low expression of TGFR3. These findings highlighted the clinical significance of TGFR3 in prognosis as well as tumor progression in HCC patients. Soluble TGFR3 (sTGFR3) Exhibited Diagnostic and Prognostic Potential in HCC As mentioned, TGFR3 undergoes ectodomain shedding and is released from tissue into the extracellular matrix and circulation as soluble TGFR3 (sTGFR3). Compared to healthy individuals (15.4 ng/mL), a significant reduction in plasma sTGFR3 was observed in 72% of HCC patients (8.9 ng/mL) (p < 0.01) (Figure 2A). Receiver-operating characteristic (ROC) curve analysis revealed that sTGFR3 served as a biomarker for differentiating patients with HCC from healthy individuals, with an AUC of 0.838 (95% CI, 0.78 to 0.90) (p < 0.001) (Figure 2B). At the cut-off value of 9.4 ng/mL sTGFR3 in plasma, the sensitivity was 82.7% and the specificity 77.4%. With the acquired threshold value, Kaplan-Meier analysis revealed that patients with less than 9.4 ng/mL developed significantly poorer overall and disease-free survival compared to the ≥9.4 ng/mL group (Figure 2C,D) (p < 0.05). In terms of clinicopathological characteristics, a continuous decrease in plasma sTGFR3 was observed in patients with advancing tumor stages (Stage I and II: 14.99 ng/mL ± 3.62; Stage III: 7.67 ng/mL ± 0.71; Stage IV: 5.64 ng/mL ± 0.42) (Figure 2E). Low levels of plasma sTGFR3 were also associated with high levels of bilirubin (>20 μmol/L) (p < 0.01), large tumor size (>5 cm) (p = 0.012) and advanced tumor stage by UICC (p < 0.01) and AJCC (p = 0.017) (Table 2). Based on the finding that plasma sTGFR3 exhibited similar clinical associations to tumoral TGFR3, we further confirmed their strong correlation in HCC patients (R² = 0.112, p < 0.01) (Figure 2F). TGFR3 Treatment Suppressed HCC Tumor Growth In Vivo With the evidence that patients with advanced tumor stage expressed TGFR3 poorly, its tumor-suppressive function was further studied in a nude mouse orthotopic liver cancer model. Weekly intraperitoneal injection of recombinant sTGFR3 (25 μg per mouse) significantly reduced tumor density as measured by bioluminescence, with a 2-fold decrease compared to the untreated group (276.9 U ± 40.65 vs. 138.1 U ± 29.1, p = 0.017) (Figure 3A-i). Consistently, the HCC tumor volume measured after sacrifice was also found to be 1.6-fold lower in the sTGFR3 treatment group (0.96 cm³ ± 0.14 vs. 1.53 cm³ ± 0.2) (p = 0.037). Apart from sTGFR3, the tumor-suppressive role of TGFR3 was examined in a subcutaneous tumor model in nude mice, induced by the HCC cell line MHCC97L over-expressing TGFR3 (MHCC97L-TGFR3). Including MHCC97L-NTC as a negative control, the non-transfected and transfected cell lines were injected into the left and right flank of each mouse, respectively (Figure 3B-i). A significant decrease in tumor size in the MHCC97L-TGFR3 group (0.35 cm³ ± 0.07) compared to MHCC97L-NTC (0.74 cm³ ± 0.11) was observed (p = 0.045) (Figure 3B-ii). Loss of TGFR3 Induced the Up-Regulation of C5a, which Was Associated with Poor Prognosis in HCC To simulate its loss during HCC progression, TGFR3 was knocked out by transfection of a CRISPR/Cas9 KO plasmid in two hepatic non-HCC cell lines, MIHA and LO2.
Through analysis by ELISA and molecular array study, we discovered a significant increase in complement component C5a secretion in both MIHA-TGFR3 KO (0.833 ng/mL ± 0.083) and LO2-TGFR3 KO (0.66 ng/mL ± 0.037) supernatants compared to the vector controls, parental MIHA-scramble (0.348 ng/mL ± 0.061) and LO2-scramble cells (0.23 ng/mL ± 0.03) (p < 0.01) (Figure 4A). Furthermore, high secretory levels of C5a were identified in all HCC cell lines (Figure 4B), with minimal co-expression with TGFR3 (Figure S3), in contrast to MIHA and LO2. Interestingly, we detected human C5a in both tumoral tissue and plasma in an orthotopic nude mouse model bearing MHCC97L-induced HCC tumors at week 4 (Figures S4 and S5). This demonstrated the capacity of the implanted low-TGFR3-expression MHCC97L cells to continuously secrete C5a in vivo. Clinically, C5a-expressing HCC cells were also identified in tumoral tissue (Figure S6). Quantitative analysis revealed a significant increase in the complement protein in patients' tumoral tissue (Figure 4C-i) and plasma (Figure 4C-ii). Importantly, strong inverse relationships between C5a and TGFR3 were validated in both human liver tumor tissue (p = 0.0259) (Figure 4D-i) and plasma (p = 0.011) (Figure 4D-ii). Consistent with the clinical phenotypes of TGFR3 downregulation, increased levels of plasma C5a were detected in patients with advanced (stage III: 13.4 ng/mL ± 1.86; stage IV: 13.8 ng/mL ± 3.003) compared to early (stage I/II: 7.428 ng/mL ± 1.075) tumor stages (p = 0.046) (Figure 4E). High levels of both tissue and plasma C5a were also strongly correlated with large tumor size (p < 0.01; p = 0.0198) (Figure 4F-i,ii). Patients with high levels of plasma C5a (>12 ng/mL) developed significantly shorter disease-free survival compared to the low-expression group (log-rank: 5.798, p = 0.016) (Figure 4F-iii). Complement C5a Activated the Th-17 Response in Tumor-Promoting Macrophages In myeloid cells, the macrophage is a major target of C5a, and as we had previously reported its critical roles in HCC, the function of the complement protein in this immune subset was further investigated. First, a positive correlation between plasma C5a and the alternatively activated tumor (M2) macrophage marker (scavenger receptor) was identified in HCC patients (R² = 0.05, p = 0.030) (Figure 5A). When analyzed by human PCR array, incubation with 1 μg/mL recombinant C5a significantly induced the Th-17-related cytokines (IL-17, IL-21 and IL-22) and IL-17 regulatory genes (CXCL-1, CXCL-2, CSF-2) in M2 macrophages, but not in the classically activated M1 subtype (Figure 5B) (Supplementary Table S2). Significantly increased levels of the C5a receptor were also detected in both activated macrophage populations in response to the complement component (Figure S7). Importantly, incubating M2 macrophages with HCC-cell-conditioned medium (MHCC97L, MHCC97H, Hep3B, PLC and Huh7) containing high levels of C5a significantly elevated the expression level of IL-17 (Figure 5C). Furthermore, patients with high levels of plasma C5a exhibited up-regulation of IL-17-secreting macrophages in HCC tumor in both immunostaining (Figure 5C) and flow cytometry analysis (Figure 5D). (Statistical tests: chi-square test for Figure 4D,F-i,ii; log-rank test for Figure 4F-iii; unpaired t-test for Figure 5C.) Discussion Despite the predominant downregulation of TGFR3 shown in both public databases and other studies, its clinical implication in HCC has been unknown to date. For the first time, we revealed that decreased levels of tumoral and plasma TGFR3 are strongly associated with advanced tumor stage and tumor size and, more importantly, with poor clinical outcome, including shortened overall and disease-free survival. Apart from its prognostic associations, the highly differential levels of soluble TGFR3 in plasma between HCC patients and healthy individuals indicated its diagnostic potential, as validated by ROC curve analysis with the highly specific cut-off value of 9.4 ng/mL. All the evidence collectively highlighted the clinical significance of TGFR3 downregulation in HCC patients. Based on the clinical evidence, the tumor-suppressive function of TGFR3 in HCC was hypothesized and then revealed in two tumor models. First, continuous administration of recombinant sTGFR3 significantly decreased HCC tumor size by 2-fold compared to controls in the orthotopic nude mouse liver tumor model, with an increased level of tumor cell apoptosis. Similar antitumor activity of recombinant sTGFR3 in tumor models was previously reported in breast and prostate cancer, indicating its effectiveness against different malignancies, but until now not in HCC. Apart from sTGFR3, restoring TGFR3 in lowly expressing HCC cells also significantly reduced tumor growth, by 2.1-fold, in the subcutaneous tumor model. Findings from both models collectively illustrated the direct tumor-suppressive functions of TGFR3 in HCC. Several studies indicate that TGFR3 suppresses tumor development by negatively mediating TGF-β signaling. In the present study, we identified significant up-regulation of phospho-SMAD2/3 in tumoral tissue with minimal expression levels of TGFR3 (Figure S8). The mechanisms of TGFR3 dysregulation remain unknown, and it is tempting to correlate them with tumor hypoxia based on the close relationship between hypoxia-inducible factor 1 alpha (HIF-1α) and the TGF-β signaling pathway. Nevertheless, emerging evidence has shown that loss of TGFR3 is related to paracrine or cell-autonomous signaling, resulting in alteration of the tumor immune environment, including dendritic cells. Apart from cellular mechanisms, recent studies reported that the disruption of TGFR3 induced dysregulation of complement components, including C4a and complement factor D, in breast and prostate cancer. Since the liver possesses many unique immunological properties, being the residence of many immunological cells and the synthesis site of numerous innate proteins, including complement components, we speculated that a tumor-promoting immunological mechanism follows the dysregulation of TGFR3 in HCC. More importantly, we previously reported that alternatively activated (M2) macrophages represent the key immune population contributing to HCC progression. Hence, we focused on studying the secretory profiles of TGFR3-down-regulated cells to identify potential immune-regulatory mechanisms associated with macrophages in HCC. By silencing TGFR3 in normal hepatocytes, we observed elevated secretion of one particular complement component, C5a, but not C3a or C4a. Further studies confirmed that HCC cell lines with low TGFR3 expression also displayed high secretory levels of C5a. Apart from being the central chemo-attractant provoking an innate immune response, emerging evidence has also suggested novel roles of C5a in shaping the tumor immune microenvironment.
Most tumors are rich in complement proteins, particularly C3a and C5a, produced directly by cancer cells, which have a variety of tumor-promoting mechanisms without activating the complement cascade. Clinically, the concentrations of both plasma and tumoral C5a determine disease progression in malignancies including those of the lung and ovaries. Notably, hepatocytes are responsible for biosynthesizing the majority of complement proteins. Despite the close relationship between hepatocytes and C5a, its clinical implications and underlying mechanisms in HCC are currently unclear. For the first time, we showed that HCC cells upregulate the secretion of C5a, which is particularly induced by the downregulation of TGFR3. High levels of the complement protein were associated with late tumor stage, increased tumor size and poor disease-free survival in HCC patients, consistent with the clinical associations of decreased TGFR3 expression. On the other hand, both the tumoral and circulatory levels of C5a were found to depend inversely on tumoral and soluble TGFR3. The clinical and in vitro evidence collectively suggested that dysregulation of TGFR3 in hepatocytes and HCC cells activates the secretion of pro-tumoral C5a in HCC. C5a possesses many immuno-modulatory functions, particularly in innate immunity. Direct cytokine and chemokine production in immune cells is also regulated by the complement protein. Here, we discovered a novel role of C5a in enhancing the tumor-promoting phenotypes of the alternatively activated (M2) macrophage. Treatment of M2 macrophages with either C5a-rich HCC cell supernatants or recombinant C5a significantly induced their expression of C5aR and, surprisingly, of IL-17 regulatory genes (CXCL-1, CXCL-2, MCSF, IL-17F) and Th-17 secretory cytokines (IL-17F, IL-21 and IL-22). In contrast, a minimal effect of C5a was observed in the other subtype, the classically activated (M1) macrophage. Numerous studies have suggested the clinical significance of Th-17 responses in contributing to poor survival outcomes in HCC patients through the pro-tumor functions of IL-17, IL-21 and IL-22, including tumor proliferation and angiogenesis. Importantly, the capability of HCC-stroma-associated macrophages to induce a Th-17 T cell response has been shown. The findings in the present study collectively indicate a novel mechanism by which C5a activates Th-17 tumor-promoting phenotypes in M2 macrophages. In conclusion, we first reported that both the downregulation of TGFR3 and increased C5a are associated with poor clinical outcomes in HCC. Plasma sTGFR3 served as a potential diagnostic biomarker for identifying patients with advanced tumor stages. A novel pro-tumoral mechanism of TGFR3 downregulation via C5a-activated tumor-promoting macrophages was revealed. Further applications of sTGFR3 and the C5a inhibitor may represent a new approach to treating HCC patients. Patient Samples Liver tumor tissues and blood samples were randomly collected from 100 patients (aged 3-83 years, 77% male) who underwent curative surgery for HCC in Queen Mary Hospital from 2004 to 2008. Normal liver tissues were from healthy living donors (n = 100). Ethical approval (UW11-100 (HKU/HA HKW IRB)) was obtained from the University of Hong Kong and Hospital Authority-Hong Kong Western Cluster, and consent was signed by the studied patients.
Orthotopic Nude Mouse Liver Tumor Model with sTGFR3 Treatment Male athymic nude mice (BALB/c nu/nu, 4-6 weeks old) (total number = 20) were used, and all the studies were conducted according to the Animals (Control of Experiments) Ordinance (Hong Kong) and the Institute's guidance on animal experimentation. All mice were housed in a pathogen-free animal facility at 22 ± 2 °C under controlled 12-h light/dark cycles. Mice were given regular chow (5053-PicoLab® Rodent Diet 20, Lab Diet, MO, USA) and had access to autoclaved water. Surgical procedures were as described previously. Briefly, 3 × 10^5 MHCC97L cells suspended in 0.2 mL DMEM were injected subcutaneously into the flanks of mice. After 4 weeks, the subcutaneous tumors were resected and diced into 1 mm³ cubes, which were then implanted in the left lobes of the livers of another group of nude mice. Simultaneously, 2 mg/kg of recombinant sTGFR3 was injected intraperitoneally in the treatment group weekly for four weeks. Seven mice were used for each of the treatment and control groups; mice injected with PBS served as the negative control. Tumor size and metastasis of the MHCC97L xenograft were monitored weekly by Xenogen IVIS® (Xenogen IVIS® 100, Caliper Life Sciences, Hopkinton, MA, USA). All mice were sacrificed at week 5, and the size of the liver tumor was measured. RT2 Profiler PCR Array Total RNA was extracted from C5a-treated (1 μg/mL, 12 h) and untreated M1 and M2 macrophages using the TRIzol reagent (Invitrogen), purified with the RNeasy MinElute Cleanup Kit (Qiagen) according to standard protocols, and converted to first-strand cDNA using the RT2 First Strand Kit (Qiagen). Gene expression of 84 cytokine- and chemokine-related genes was analyzed using the Human Cytokines and Chemokines RT2 Profiler™ PCR Array (PAHS-150Z, Qiagen) according to the manufacturer's protocol. In Vitro Over-Expression and Knockout of TGFR3 TGFR3-deficient non-HCC hepatocytes LO2 and MIHA were established using a CRISPR/Cas9 system with the TGFR3 CRISPR/Cas9 KO Plasmid (SC-401316) purchased from Santa Cruz Biotech. Control CRISPR/Cas9 Plasmid (SC-418922) was applied as a negative control. All the cells were transfected using Lipofectamine 2000 (Life Technologies) for 48 h with 3 μg of TGFR3 or control plasmid. The transfection efficiency was determined by fluorescence microscopy, and cells were sorted by FACS analysis. On the other hand, to over-express TGFR3 in the HCC MHCC97L cell line, a TGFR3 human clone in the pCMV6-AC-GFP vector was purchased from Origene Technologies. In Vivo Study of Tumorigenicity of TGFR3-Over-Expressing Cells Male athymic nude mice (BALB/c nu/nu, 4-6 weeks old) were used. For the xenograft tumor growth assay, control cells (MHCC97L-NTC) were injected subcutaneously into the left dorsal flank of mice, and TGFR3-expressing cells (MHCC97L-TGFR3) were injected into the right dorsal flank of the same animal. Tumor formation in nude mice was monitored over a 4-week period, and the tumor volume was measured weekly and calculated as 0.5 × L × W². The mice were euthanized in the fifth week, and the tumors were excised and embedded in paraffin. Sections (5 μm) of tumors were stained with H&E to visualize the tumor structure. Quantitative Real-Time RT-PCR Total RNA was extracted from cell lines and frozen tumor specimens using TRIzol Reagent (Invitrogen). Total RNA was reverse-transcribed with the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems).
Messenger RNA expression levels were determined by real-time PCR using FastStart SYBR Green Master (Roche Diagnostics) with an ABI Prism 7700 sequence detection system. Primers used for the amplification of human genes were as follows. ELISA The levels of soluble TGFR3 (Sigma) and complement component C5a (BD Biosciences) were quantified in patients' plasma and tumoral tissue. All procedures were performed according to the manufacturers' instructions. Cell Culture and Stimulation Authentication of the MHCC97L, MIHA and LO2 cells used in the present study was performed using the PowerPlex 16 HS kit (Promega) after PCR amplification and capillary electrophoresis. The human acute monocytic leukemia cell line THP-1 was purchased from ATCC and maintained according to ATCC guidelines. Other hepatic and HCC cell lines were purchased or obtained as described in the Supplementary CTAT Table. The protocols for M1 and M2 macrophage polarization from THP-1 were adopted from Tjiu et al. Briefly, to induce the M1-polarized phenotype, 25 ng/mL interferon gamma (IFN-γ; Invitrogen) and 150 ng/mL lipopolysaccharide (LPS; Sigma) were added to the THP-1 cells. To induce the M2-polarized phenotype, 20 ng/mL of recombinant interleukin 4 (IL-4; Invitrogen) and recombinant interleukin 13 (IL-13; R&D Systems) were used. All stimulations lasted 24 h at 37 °C, and cultures were washed thoroughly with PBS three times prior to further study. For luciferase labeling, MHCC97L cells were transfected with the luciferase gene in the pGL3 vector (Promega), and positive clones were selected according to luciferase activity in the Xenogen In Vivo Imaging System 100 (Xenogen IVIS® 100, Xenogen Corporation). Statistical Methods Comparisons and correlations of quantitative data between two groups were analyzed by unpaired Student's t-test and chi-square test, respectively. Categorical data were analyzed by Fisher's exact test. The Cox proportional hazards model was applied to determine independent factors of survival, based on the variables selected in univariate analysis. The log-rank test was used for comparison of survival in Kaplan-Meier survival plots. A p < 0.05 was considered statistically significant. All analyses were performed with GraphPad Prism 5.0 and SPSS 18.0. Conclusions In conclusion, we first reported that both the downregulation of TGFR3 and increased C5a were associated with poor clinical outcome in HCC. Plasma sTGFR3 could serve as a novel diagnostic biomarker for identifying patients with advanced tumor stages. A novel pro-tumoral mechanism of TGFR3 downregulation via C5a-activated tumor-promoting macrophages was revealed. Therapeutic potentials involving the applications of sTGFR3 and a C5a inhibitor may represent a new approach to treating HCC patients. Supplementary Materials: The following are available online at https://www.mdpi.com/2072-6694/13/7/1503/s1. Figure S1: Downregulation of TGFR3 expression in HCC tumor from publicly available datasets. Figure S2: Kaplan-Meier analysis of overall survival and disease-free survival in HCC patients associated with the expression level of tumoral TGFR3 transcript from the publicly available database GEPIA. Figure S3: Flow cytometry analysis of TGFR3 and intracellular C5a expression in HCC cell lines. Figure S4: Presence of human C5a in tumoral tissue of a mouse HCC orthotopic model. Figure S5: Presence of human C5a in plasma of a mouse HCC orthotopic model. Figure S6: The secretion of C5a in glypican-3-expressing HCC cells.
Figure S7: Increased expression of the C5a receptor (C5aR) in macrophage sub-populations treated with recombinant C5a. Figure S8: Increased p-SMAD2/3 expression in HCC tumoral tissue. Table S1: Clinical characteristics of the studied population. Table S2: Transcript analysis of C5a-treated M1 and M2 macrophages.
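As an aside on the Statistical Methods above: a cut-off like the reported 9.4 ng/mL (sensitivity 82.7%, specificity 77.4%) is conventionally obtained by maximizing Youden's J along the ROC curve. The scikit-learn sketch below runs on made-up data and is not the study's analysis; all values are simulated.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical plasma sTGFR3 values: lower in HCC cases than in controls.
controls = rng.normal(15.4, 4.0, 100)  # healthy, ng/mL
cases = rng.normal(8.9, 3.0, 100)      # HCC, ng/mL
values = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(100), np.ones(100)])  # 1 = HCC

# Low sTGFR3 indicates disease, so score by the negated concentration.
fpr, tpr, thresholds = roc_curve(labels, -values)
print("AUC:", roc_auc_score(labels, -values))
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print("cut-off (ng/mL):", -thresholds[best])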
|
// app/javascript/StaticPages/AboutPage.tsx
export default function AboutPage() {
return (
<>
<h1>About Us</h1>
<ul className="list-unstyled">
<li>
<strong>Lead Developer:</strong> <NAME> (natbudin AT gmail DOT com)
</li>
<li>
<strong>Curator:</strong> <NAME> (valleyviolet AT gmail DOT com)
</li>
<li>
A project of{' '}
<a href="http://interactiveliterature.org">New England Interactive Literature</a>.
</li>
</ul>
<p>
Larp Library is made with Ruby on Rails, TypeScript, and React. The source code for the app
is available <a href="https://github.com/neinteractiveliterature/larp_library">on Github</a>
.
</p>
</>
);
}
|
package constructors
import (
"io"
"text/template"
"github.com/kepkin/gorest/internal/generator/constructors/fields"
"github.com/kepkin/gorest/internal/generator/translator"
)
// MakeFormDataConstructor receive a form-data struct definition and generate corresponding constructor
func MakeFormDataConstructor(wr io.Writer, def translator.TypeDef) error {
return formDataConstructorTemplate.Execute(wr, def)
}
var formDataConstructorTemplate = template.Must(template.New("formDataConstructor").Funcs(fields.BaseConstructor).Parse(`
func Make{{ .Name }}(c *gin.Context) (result {{ .Name }}, errors []FieldError) {
{{- if .HasNoStringFields }}
var err error
{{ end }}
{{- with .Fields }}
{{ if $.HasNoFileFields }}
form, err := c.MultipartForm()
if err != nil {
errors = append(errors, NewFieldError(InFormData, "", "can't parse multipart form", err))
return
}
getFormValue := func(param string) (string, bool) {
values, ok := form.Value[param]
if !ok {
return "", false
}
if len(values) == 0 {
return "", false
}
return values[0], true
}
{{ end }}
{{- end }}
{{ range $, $field := .Fields }}
{{- with $field }}
{{- if not .IsFile}}
{{- if .CheckDefault}}
{{ .StrVarName }}, ok := getFormValue("{{ .Parameter }}")
if !ok {
{{ .StrVarName }} = "{{ .Schema.Default }}"
}
{{- else }}
{{ .StrVarName }}, _ := getFormValue("{{ .Parameter }}")
{{- end }}
{{- end }}
{{- BaseValueFieldConstructor . "InFormData" }}
{{- end -}}
{{ end -}}
return
}
`))
|
/* klever/cli/descs/linux/testing/environment model specifications/tests/tty_v.2/tty_port_unregister_device.c */
/*
* Copyright (c) 2018 ISP RAS (http://www.ispras.ru)
* Ivannikov Institute for System Programming of the Russian Academy of Sciences
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <linux/module.h>
#include <linux/tty.h>
#include <linux/tty_driver.h>
#include <ldv/linux/emg/test_model.h>
#include <ldv/verifier/nondet.h>
int flip_a_coin;
struct tty_driver *driver;
struct tty_port port;
struct device *device;
static int ldv_activate(struct tty_port *tport, struct tty_struct *tty)
{
ldv_invoke_callback();
return 0;
}
static void ldv_shutdown(struct tty_port *tport)
{
ldv_invoke_callback();
}
static const struct tty_port_operations ldv_tty_port_ops = {
.activate = ldv_activate,
.shutdown = ldv_shutdown,
};
static int __init ldv_init(void)
{
	/* Initialize res so a defined value is returned when registration is skipped. */
	int res = 0;
	ldv_invoke_test();
	tty_port_init(&port);
	port.ops = &ldv_tty_port_ops;
	flip_a_coin = ldv_undef_int();
	if (flip_a_coin) {
		ldv_register();
		res = tty_port_register_device(&port, driver, ldv_undef_int(), device);
		if (!res) {
			tty_port_destroy(&port);
		}
		ldv_deregister();
	}
	return res;
}
static void __exit ldv_exit(void)
{
/* Do nothing */
}
module_init(ldv_init);
module_exit(ldv_exit);
|
<reponame>Moxinilian/kira
use crate::{arrangement::Arrangement, sound::Sound};
use super::{
error::{
AddArrangementError, AddGroupError, AddMetronomeError, AddParameterError,
AddSendTrackError, AddSoundError, AddSubTrackError,
},
AudioManager, AudioManagerSettings,
};
fn create_manager_with_limited_capacity() -> AudioManager {
let (manager, _) = AudioManager::new_without_audio_thread(AudioManagerSettings {
num_sounds: 1,
num_arrangements: 1,
num_parameters: 1,
num_instances: 1,
num_sequences: 1,
num_sub_tracks: 1,
num_send_tracks: 1,
num_groups: 1,
num_streams: 1,
num_metronomes: 1,
..Default::default()
});
manager
}
#[test]
fn returns_error_on_exceeded_sound_capacity() {
let mut manager = create_manager_with_limited_capacity();
let sound = Sound::from_frames(48000, vec![], Default::default());
assert!(manager.add_sound(sound.clone()).is_ok());
if let Err(AddSoundError::SoundLimitReached) = manager.add_sound(sound.clone()) {
} else {
panic!("AudioManager::add_sound should return Err(AddSoundError::SoundLimitReached) when the maximum number of sounds is exceeded");
}
}
#[test]
fn returns_error_on_exceeded_arrangement_capacity() {
let mut manager = create_manager_with_limited_capacity();
let arrangement = Arrangement::new(Default::default());
assert!(manager.add_arrangement(arrangement.clone()).is_ok());
if let Err(AddArrangementError::ArrangementLimitReached) =
manager.add_arrangement(arrangement.clone())
{
} else {
panic!("AudioManager::add_arrangement should return Err(AddArrangementError::ArrangementLimitReached) when the maximum number of arrangements is exceeded");
}
}
#[test]
fn returns_error_on_exceeded_parameter_capacity() {
let mut manager = create_manager_with_limited_capacity();
assert!(manager.add_parameter(Default::default()).is_ok());
if let Err(AddParameterError::ParameterLimitReached) = manager.add_parameter(Default::default())
{
} else {
panic!("AudioManager::add_parameter should return Err(AddParameterError::ParameterLimitReached) when the maximum number of arrangements is exceeded");
}
}
#[test]
fn returns_error_on_exceeded_sub_track_capacity() {
let mut manager = create_manager_with_limited_capacity();
assert!(manager.add_sub_track(Default::default()).is_ok());
if let Err(AddSubTrackError::TrackLimitReached) = manager.add_sub_track(Default::default()) {
} else {
panic!("AudioManager::add_sub_track should return Err(AddSubTrackError::TrackLimitReached) when the maximum number of arrangements is exceeded");
}
}
#[test]
fn returns_error_on_exceeded_send_track_capacity() {
let mut manager = create_manager_with_limited_capacity();
assert!(manager.add_send_track(Default::default()).is_ok());
if let Err(AddSendTrackError::TrackLimitReached) = manager.add_send_track(Default::default()) {
} else {
panic!("AudioManager::add_send_track should return Err(AddSendTrackError::TrackLimitReached) when the maximum number of arrangements is exceeded");
}
}
#[test]
fn returns_error_on_exceeded_group_capacity() {
let mut manager = create_manager_with_limited_capacity();
assert!(manager.add_group(Default::default()).is_ok());
if let Err(AddGroupError::GroupLimitReached) = manager.add_group(Default::default()) {
} else {
panic!("AudioManager::add_group should return Err(AddGroupError::GroupLimitReached) when the maximum number of arrangements is exceeded");
}
}
#[test]
fn returns_error_on_exceeded_metronome_capacity() {
let mut manager = create_manager_with_limited_capacity();
assert!(manager.add_metronome(Default::default()).is_ok());
if let Err(AddMetronomeError::MetronomeLimitReached) = manager.add_metronome(Default::default())
{
} else {
panic!("AudioManager::add_metronome should return Err(AddMetronomeError::MetronomeLimitReached) when the maximum number of arrangements is exceeded");
}
}
// TODO: write a test for exceeded stream capacity
|
Most wallpaper shots we feature on Jalopnik contain cars in their entirety, but this fraction of an Alfa Romeo is pretty mesmerizing on its own. Plus, it looks prettier from this angle anyway.
The photo comes from James Grabow and features a 1990 Alfa Romeo SZ. The car, James said, is the same one listed for sale with a $119,500 price tag last year. But after driving the car more, James said “the owner decided he was going to keep it” rather than make the sale. The owner hasn’t driven the car much since, though—apparently, it has less than 2,500 miles on it.
As for the photo itself, it’s artsy enough to make passersby inquire as to what kind of car is on your computer desktop. That, my friends, is an opportunity to tell them all about your love for cars (and your favorite website) until they forget about something they needed to go do and walk away. That happens to me when I talk about pretty much anything, so don’t take it personally.
Photo credit: James Grabow. Used with permission. For more of his photos, check out his Flickr and Instagram accounts. For a big desktop version, click here.
|
The ASU basketball team has unveiled its complete schedule for the 2018-19 season. The Sun Devils, 20-12 overall and 8-10 in Pac-12 play a year ago, open the regular season Nov. 6 at home against Cal State Fullerton in the first of 13 non-conference games.
The most notable comes on Dec. 22 when perennial national title contender Kansas comes in for a 7 p.m. contest at Wells Fargo Arena that will be aired on ESPN2.
Pac-12 play starts on Jan. 5, with Colorado the first foe for the Sun Devils.
Games against in-state rival Arizona will be on Jan. 31 and March 9, the first at home and the second on the road, with the latter being the last game of conference play.
Coach Bobby Hurley, in his third year, brings in a recruiting class ranked 11th by 247Sports. Last year the Sun Devils qualified for the NCAA Tournament for the first time since 2014, losing in the opening round to Syracuse 60-56.
Nov. 6: Cal State Fullerton, 6 p.m.
Nov. 9: McNeese State, 7 p.m.
Nov. 12: Long Beach State, 7 p.m.
Nov. 16: at San Francisco, 7 p.m.
Nov. 19: vs. Mississippi State at Las Vegas/T-Mobile Arena, 9 p.m.
Nov. 28: Nebraska-Omaha, 7 p.m.
Dec. 1: Texas Southern, 7:30 p.m.
Dec. 7: Nevada at Los Angeles/Staples Center, 10 p.m.
Dec. 15: at Georgia, 4 p.m.
Dec. 17: at Vanderbilt, 5 p.m.
Dec. 22: Kansas, 7 p.m.
Dec. 29: Princeton, 2 p.m.
Jan. 3: Utah, 6 p.m.
Jan. 5: Colorado, 4 p.m.
Jan. 9: at California, 7 p.m.
Jan. 12: at Stanford, 7 p.m.
Jan. 17: Oregon State, 8 p.m.
Jan. 19: Oregon, 7:30 p.m.
Jan. 24: at UCLA, 9 p.m.
Jan. 26: at USC, 6 p.m.
Jan. 31: Arizona, 7 p.m.
Feb. 7: Washington State, 6 p.m.
Feb. 9: Washington, 8 p.m.
Feb. 13: at Colorado, 8:30 p.m.
Feb. 16: at Utah, 8 p.m.
Feb. 20: Stanford, 7 p.m.
Feb. 24: California, 4 p.m.
Feb. 28: at Oregon, 9 p.m.
March 3: at Oregon State, 9 p.m.
March 9: at Arizona, 2 p.m.
|
<reponame>kor44/extcap
package extcap
import (
"fmt"
"regexp"
"strings"
)
// Config represents config option which will be shown in Wireshark GUI
// Output examples
// arg {number=0}{call=--delay}{display=Time delay}{tooltip=Time delay between packages}{type=integer}{range=1,15}{required=true}
// arg {number=1}{call=--message}{display=Message}{tooltip=Package message content}{placeholder=Please enter a message here ...}{type=string}
// arg {number=2}{call=--verify}{display=Verify}{tooltip=Verify package content}{type=boolflag}
// arg {number=3}{call=--remote}{display=Remote Channel}{tooltip=Remote Channel Selector}{type=selector}
// arg {number=4}{call=--server}{display=IP address for log server}{type=string}{validation=\\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\b}
// value {arg=3}{value=if1}{display=Remote1}{default=true}
// value {arg=3}{value=if2}{display=Remote2}{default=false}
// ConfigOption
type ConfigOption interface {
call() string
display() string
tooltip() string
setNumber(int)
}
// common for all options
type cfg struct {
number int
callValue string
displayVal string
tooltipVal string
group string
required bool
}
func (c *cfg) call() string {
return c.callValue
}
func (c *cfg) display() string {
return c.displayVal
}
func (c *cfg) tooltip() string {
return c.tooltipVal
}
func (c *cfg) string(optType string, params [][2]string) string {
w := new(strings.Builder)
fmt.Fprintf(w, "arg {number=%d}{call=--%s}{display=%s}{type=%s}", c.number, c.callValue, c.displayVal, optType)
if c.tooltipVal != "" {
fmt.Fprintf(w, "{tooltip=%s}", c.tooltipVal)
}
if c.required {
fmt.Fprintf(w, "{required=true}")
}
if c.group != "" {
fmt.Fprintf(w, "{group=%s", c.group)
}
for i := range params {
fmt.Fprintf(w, "{%s=%s}", params[i][0], params[i][1])
}
return w.String()
}
func (c *cfg) setNumber(i int) {
c.number = i
}
// Integer option
type ConfigIntegerOpt struct {
cfg
min int
max int
defaultValue int
rangeSet bool
defaultSet bool
}
// NewConfigIntegerOpt creates a new INTEGER option
func NewConfigIntegerOpt(call, display string) *ConfigIntegerOpt {
opt := &ConfigIntegerOpt{}
opt.callValue = call
opt.displayVal = display
return opt
}
// Range sets the min and max values for the option
func (c *ConfigIntegerOpt) Range(min, max int) *ConfigIntegerOpt {
if min >= max {
panic("in range max value should be greater min value")
}
c.min = min
c.max = max
c.rangeSet = true
return c
}
// Default sets the default value for the INTEGER option
func (c *ConfigIntegerOpt) Default(val int) *ConfigIntegerOpt {
c.defaultValue = val
c.defaultSet = true
return c
}
// Required marks the option as required
func (c *ConfigIntegerOpt) Required(val bool) *ConfigIntegerOpt {
c.required = val
return c
}
// Group sets the option's group
func (c *ConfigIntegerOpt) Group(group string) *ConfigIntegerOpt {
c.group = group
return c
}
// Tooltip sets the option tooltip
func (c *ConfigIntegerOpt) Tooltip(tooltip string) *ConfigIntegerOpt {
c.tooltipVal = tooltip
return c
}
// String implements the Stringer interface.
// Example output
// arg {number=0}{call=--delay}{display=Time delay}{tooltip=Time delay between packages}{type=integer}{range=1,15}{required=true}
func (c *ConfigIntegerOpt) String() string {
params := [][2]string{}
if c.rangeSet {
params = append(params, [2]string{"range", fmt.Sprintf("%d,%d", c.min, c.max)})
}
return c.string("integer", params)
}
// ConfigStringOpt implements the ConfigOption interface
type ConfigStringOpt struct {
cfg
placeholder string
validation *regexp.Regexp
defaultValue string
defaultSet bool
}
// NewConfigStringOpt creates a new STRING option
func NewConfigStringOpt(call, display string) *ConfigStringOpt {
opt := &ConfigStringOpt{}
opt.callValue = call
opt.displayVal = display
return opt
}
// Default sets default value for STRING option
func (c *ConfigStringOpt) Default(val string) *ConfigStringOpt {
c.defaultValue = val
c.defaultSet = true
return c
}
// Placeholder sets the option placeholder
func (c *ConfigStringOpt) Placeholder(str string) *ConfigStringOpt {
c.placeholder = str
return c
}
// Required marks the option as required
func (c *ConfigStringOpt) Required(val bool) *ConfigStringOpt {
c.required = val
return c
}
// Validation sets a validation regular expression for the option
func (c *ConfigStringOpt) Validation(str string) *ConfigStringOpt {
c.validation = regexp.MustCompile(str)
return c
}
// Tooltip sets the option tooltip
func (c *ConfigStringOpt) Tooltip(tooltip string) *ConfigStringOpt {
c.tooltipVal = tooltip
return c
}
// String implements the Stringer interface.
// arg {number=0}{call=--server}{display=IP address for log server}{type=string}{validation=\\b(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\b}
func (c *ConfigStringOpt) String() string {
params := [][2]string{}
if c.placeholder != "" {
params = append(params, [2]string{"placeholder", c.placeholder})
}
if c.validation != nil {
params = append(params, [2]string{"validation", c.validation.String()})
}
return c.string("string", params)
}
// ConfigBoolOpt implements the ConfigOption interface
type ConfigBoolOpt struct {
cfg
defaultValue bool
defaultSet bool
}
// NewConfigBoolOpt creates a new BOOL option
func NewConfigBoolOpt(call, display string) *ConfigBoolOpt {
opt := &ConfigBoolOpt{}
opt.callValue = call
opt.displayVal = display
return opt
}
// Default sets the default value for the option
func (c *ConfigBoolOpt) Default(val bool) *ConfigBoolOpt {
c.defaultValue = val
c.defaultSet = true
return c
}
// Tooltip sets the option tooltip
func (c *ConfigBoolOpt) Tooltip(tooltip string) *ConfigBoolOpt {
c.tooltipVal = tooltip
return c
}
// Required marks the option as required
func (c *ConfigBoolOpt) Required(val bool) *ConfigBoolOpt {
c.required = val
return c
}
// String implements the Stringer interface.
// arg {number=2}{call=--verify}{display=Verify}{tooltip=Verify package content}{type=boolflag}
func (c *ConfigBoolOpt) String() string {
params := [][2]string{}
if c.defaultSet {
params = append(params, [2]string{"default", fmt.Sprintf("%t", c.defaultValue)})
}
return c.string("boolflag", params)
}
// Need implement
// fileselect
// selector
// radio
// multicheck
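// ExampleConfigIntegerOpt is an illustrative sketch (not part of the original
// API) showing how the builders above compose. Note that the attribute order
// follows cfg.string: type comes before tooltip, and number stays 0 until the
// framework calls setNumber.
func ExampleConfigIntegerOpt() {
	opt := NewConfigIntegerOpt("delay", "Time delay").
		Tooltip("Time delay between packages").
		Range(1, 15).
		Required(true)
	fmt.Println(opt)
	// Output:
	// arg {number=0}{call=--delay}{display=Time delay}{type=integer}{tooltip=Time delay between packages}{required=true}{range=1,15}
}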
|
Electroweak results with the ATLAS 2010 data

The ATLAS Experiment is one of the two multi-purpose detectors at the LHC. During the year 2010 it collected 45 pb⁻¹ of pp collisions at a center-of-mass energy of 7 TeV. In this paper the ATLAS electroweak results with 2010 data are described. Measurements of the total inclusive W and Z production cross sections are presented, as well as differential cross sections and the W charge asymmetry. As the production of pairs of bosons is an important process for investigating the electroweak sector of the Standard Model, the measurements of the cross sections of such processes are also presented.

1. Introduction

Electroweak measurements are among the first studies performed at the LHC with early 2010 pp collision data at a center-of-mass energy of 7 TeV. W and Z final states are interesting because they are the "standard candle" used to get to know the detector, i.e. to understand and calibrate it. They are used, for example, to validate the different object reconstructions and identifications, such as electron and muon reconstruction, and to define the energy scale of the missing energy and the trigger efficiencies. Moreover, channels with W and Z bosons in the final state are important backgrounds to searches for new particles. Diboson production plays an important role in electroweak physics. The production rates and the kinematic distributions of the final states are sensitive to triple gauge boson couplings. In Figure 1 the Standard Model s-channel Feynman diagram for W⁺W⁻ production through quark-antiquark annihilation is shown. This channel contains the WWZ and WWγ triple gauge boson coupling (TGC) vertices. Moreover, diboson channels are an important background to Higgs boson and new physics searches.

ATLAS (A Toroidal LHC ApparatuS) is one of the two multi-purpose experiments at the LHC. It consists of an inner tracking detector surrounded by a superconducting solenoid which provides a 2 T magnetic field, electromagnetic and hadronic calorimeters, and a muon spectrometer with a toroidal magnetic field. The inner detector provides precision tracking of charged particles for |η| < 2.5. It consists of a silicon pixel detector, a silicon strip detector and a straw tube tracker that also provides transition radiation measurements for electron identification. The calorimeter system covers the pseudorapidity range |η| < 4.9. It is composed of sampling calorimeters with either liquid argon (LAr) or scintillating tiles as active media. In the region |η| < 2.5 the electromagnetic LAr calorimeter is finely segmented and plays an important role in electron identification. The muon spectrometer has separate trigger and high-precision tracking chambers which provide muon identification in |η| < 2.7.

In the year 2010 the ATLAS Experiment collected 45 pb⁻¹ of data from pp collisions at a center-of-mass energy of 7 TeV. This corresponds to 93.6% of the luminosity delivered by the LHC, and this high fraction of collected data was achieved thanks to the good performance of all sub-detectors. In Figure 2 the cumulative luminosity during the 2010 pp run at 7 TeV is shown as a function of time. Considering data quality requirements, the luminosity available for analyses amounts to ∼35 pb⁻¹. During the running of the first year, the luminosity profile changed considerably due to intense machine commissioning. Therefore the amount of multiple interactions per beam crossing increased throughout the data taking. This is shown in Figure 3, where the maximum mean number of interactions per beam crossing is plotted as a function of time.

In this paper, several cross section measurements are presented. In ATLAS, the total cross section (here for the W → ℓν decay) is calculated as

$\sigma^{tot}_{W} \times BR(W \to \ell\nu) = \frac{N_{obs} - N_{bkg}}{A_W \cdot C_W \cdot L_{int}}$

where $N_{obs}$ is the number of observed events in data, $N_{bkg}$ is the number of background events (extracted from data or from Monte Carlo, depending on the analysis), $L_{int}$ is the integrated luminosity corresponding to the selected runs and trigger choice, $A_W$ denotes the kinematic and geometric acceptance (fiducial acceptance) for the signal process and is determined from generator-level Monte Carlo, and $C_W$ is the ratio between the number of reconstructed signal events passing the selections of the analysis and the number of events within the fiducial acceptance. The cross section is also measured in the fiducial region as

$\sigma^{fid}_{W} \times BR(W \to \ell\nu) = \frac{N_{obs} - N_{bkg}}{C_W \cdot L_{int}}$

This cross section is not affected by significant theoretical uncertainties. Therefore, future improvements in the prediction of the acceptance can be used to extract improved total cross section measurements.

2. W and Z boson production

In the following, the W and Z/γ* cross section measurements are presented in the electron and muon channels (Section 2.1). The cross section measurement for the production of W and Z bosons in association with jets is described in Section 2.2. Finally, in Section 2.3 the measurement of the W charge asymmetry in the muon channel is presented.

2.1. W and Z/γ* cross section measurements

The W and Z/γ* cross sections have been measured by ATLAS in their leptonic decay channels. In particular, the electron and muon channels have been studied. For the W → eν/μν channels, the signature is one lepton and missing energy from the neutrino. Therefore the event selection requires the presence of one well reconstructed lepton with p_T > 20 GeV, sizable missing energy, E_T^miss > 25 GeV, and transverse mass m_T > 40 GeV. For the Z → ee/μμ channels, the signature is two opposite charged leptons. Therefore in this case the event selection requires the presence of two reconstructed leptons with opposite charge and an invariant mass of the lepton pair in the range 66 < m_ℓℓ < 116 GeV. For the two analyses the main backgrounds come from QCD jets and other electroweak processes. The estimation of the electroweak background is taken from Monte Carlo, while for the QCD background the estimation is extracted directly from data. As seen in Figures 4-7, the amount of residual background after the event selections described before is small both for the W and Z channels. (Figure caption: Invariant mass distribution of candidate Z → μμ events. The QCD background is found to be negligible.)

The main systematics affecting the cross section measurement come from the uncertainty on the electron reconstruction and identification (∼1.5/3%) and the uncertainty on the missing energy scale (2% for the W channels). It has to be noticed that the resulting systematic uncertainty is already smaller than the systematic uncertainty on the luminosity (3.4%). The systematic uncertainty on the acceptance (∼3/4%) is dominated by the uncertainties on the PDFs and on the showering models. It is evaluated taking into account three contributions: uncertainties within one PDF set, which are derived using the CTEQ 6.6 PDF; uncertainties between different PDF sets (the maximal difference between the MRST LO*, CTEQ 6.6 and HERAPDF 1.0 sets is taken as a systematic); and the difference between the PYTHIA and MC@NLO simulations, using the same PDF set, CTEQ 6.6. The results of the measurement of the W and Z/γ* cross sections are reported in Table 1.

Table 1. Results of the W and Z/γ* cross section measurements:
Z/γ* (66 < m_ℓℓ < 116 GeV): 0.945 ± 0.006 (stat.) ± 0.011 (syst.) ± 0.032 (lumi.) ± 0.038 (acc.) nb

Figure 8 summarizes the results of the cross section measurements, showing the measured and predicted W/Z cross section ratio. Good agreement with predictions is observed. Figure 9 shows the measured and predicted W⁻ vs. W⁺ cross sections times leptonic branching ratios: already with the 2010 amount of data, the W cross section measurement has sensitivity to PDFs and some constraints can be set.

2.2. W+jets and Z/γ*+jets cross sections

The study of massive vector boson production in association with one or more jets is an important test of QCD. Moreover, W/Z+jets processes are a significant background to studies of Standard Model processes such as tt̄ or single-top production, as well as searches for the Higgs boson and for physics beyond the Standard Model. Therefore the measurements of the cross section and kinematic properties of W/Z+jets processes and comparisons to theoretical predictions are of significant interest. ATLAS has measured the cross sections for W+jets production and Z/γ*+jets production with the boson decaying either in the electron or the muon channel. Jets are reconstructed with the anti-k_t algorithm with a radius parameter R = 0.4. All jets considered in the analysis for the W+jets (Z/γ*+jets) cross section measurement are required to have a transverse momentum p_T > 20 GeV and a pseudorapidity in the range |η| < 2.8. In addition, a lepton-jet overlap removal is applied: if a jet and a lepton passing some identification requirements are within $\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2} < 0.5$, the jet is removed, regardless of the jet p_T or η.

Both in the W+jets and Z/γ*+jets analyses, events are required to fire a single lepton trigger. Then, for the W+jets channel, the event selection is based on the presence of one isolated lepton (with E_T > 20 GeV). The events must not have any additional identified lepton. Moreover, events are required to have E_T^miss > 25 GeV and transverse mass m_T > 40 GeV, as in the analysis for the W inclusive cross section measurement. For the Z/γ*+jets analysis instead, the presence of two opposite charge leptons with E_T > 20 GeV is required. Furthermore, the invariant mass of the lepton pair must be 66 < m_ℓℓ < 116 GeV. The backgrounds are estimated with different techniques for the different analyses and also depending on the channel. For example, in the W+jets analysis, the number of QCD background events was estimated by fitting, in each jet multiplicity bin, the E_T^miss distribution in the data (without the E_T^miss cut) to a sum of two templates: one for the QCD background and another which included signal and the leptonic backgrounds. In both muon and electron channels, the shapes for the latter template were obtained from simulation. While in the muon channel the template for the QCD background was obtained from simulation, in the electron channel it was obtained from the data, because the mechanisms by which a jet fakes an electron are difficult to simulate. The main systematics affecting the measurements come from the jet energy scale, which is smaller in the W+jets channel (∼9%) and more important for the Z/γ*+jets channel (∼10-20%, depending on the p_T and η of the jet). Another important systematic for the W+jets channel comes from the treatment of pile-up: it amounts to ∼7%.

Figure 10 caption: W+jets cross section as a function of the inclusive jet multiplicity for the electron channel. Also shown are predictions from ALPGEN, SHERPA, PYTHIA, MCFM and BLACKHAT-SHERPA, and the ratio of theoretical predictions to data (PYTHIA is not shown in the ratio).

Figure 11 caption: W+jets cross section as a function of the p_T of the first jet in the event for the electron channel. The p_T of the first jet is shown separately for events with ≥ 1 jet to ≥ 4 jets. The ≥ 2 jet, ≥ 3 jet and ≥ 4 jet distributions have been scaled down by factors of 10, 100 and 1000 respectively. Also shown are predictions from ALPGEN, SHERPA, MCFM and BLACKHAT-SHERPA, and the ratio of theoretical predictions to data for ≥ 1 jet and ≥ 2 jet events.

The results of the measurements are shown in Figures 10 to 13. In Figure 10 the W+jets cross section as a function of the inclusive jet multiplicity for the electron channel is shown, while Figure 11 shows the W+jets cross section as a function of the p_T of the leading jet in the event for the electron channel, separately for events with ≥ 1 jet to ≥ 4 jets. The cross sections are quoted in the limited kinematic region: E_T^jet > 20 GeV, |η_jet| < 2.8, E_T^ℓ > 20 GeV, |η_e| < 2.47 (excluding 1.37 < |η_e| < 1.52), |η_μ| < 2.4, p_T^ν > 25 GeV, m_T > 40 GeV, ΔR(ℓ, jet) > 0.5, where ℓ, jet and ν denote the lepton, jet and neutrino, respectively. Along with the data, the predictions from ALPGEN, SHERPA, PYTHIA, MCFM and BLACKHAT-SHERPA, and the ratio of theoretical predictions to data are also displayed. Good agreement is observed with the predictions of the multi-parton matrix element generators ALPGEN and SHERPA. Calculations based on NLO matrix elements in MCFM and in BLACKHAT-SHERPA are also in good agreement with the data.

Figure 12 shows the measured Z/γ*+jets production cross section in the muon channel as a function of the inclusive jet multiplicity, while Figure 13 shows the measured inclusive cross section as a function of the p_T of the jet, in events with at least one jet with p_T^jet > 30 GeV and |η_jet| < 2.8 in the final state, and normalized to the Z/γ* Drell-Yan cross section. The results are defined in a limited kinematic range for the Z/γ* decay products. In the muon channel the measurements are presented in the region: 66 < m_μμ < 116 GeV, p_T^μ > 20 GeV, |η_μ| < 2.4, and ΔR(μ, jet) > 0.5. The measurements are compared to NLO pQCD predictions from MCFM, as well as the predictions from ALPGEN and SHERPA. The measured cross sections are described by the NLO pQCD predictions, which include non-perturbative corrections, as well as by the ALPGEN and SHERPA predictions.

2.3. W charge asymmetry

The measurement of the W boson charge asymmetry is sensitive to the quark distributions via the dominant production processes $u\bar{d} \to W^+$ and $d\bar{u} \to W^-$. It provides complementary information to that obtained from measurements of inclusive deep inelastic scattering cross sections at the HERA ep collider, as the HERA data do not strongly constrain the ratio between u and d valence quarks in the kinematic regime of low x, where x is the proton momentum fraction carried by the parton. In particular the measurement of the W charge asymmetry at the LHC can contribute to the understanding of PDFs in the parton momentum fraction range 10⁻³ ≲ x ≲ 10⁻¹. The charge asymmetry has been studied in ATLAS in the muon channel. The event selection is very similar to the selection used in the W → μν cross section measurement described before. The asymmetry varies significantly as a function of the pseudorapidity of the charged decay lepton due to its strong correlation with the momentum fraction x of the partons producing the W boson. It is defined from the cross sections for W → μν production, $d\sigma_{W^\pm}/d\eta_\mu$, as

$A_\mu = \frac{d\sigma_{W^+}/d\eta_\mu - d\sigma_{W^-}/d\eta_\mu}{d\sigma_{W^+}/d\eta_\mu + d\sigma_{W^-}/d\eta_\mu}$

where the cross sections include the event kinematical cuts used to select W → μν events. Systematic effects on the W production cross-section measurements are typically the same for positive and negative muons, mostly canceling in the asymmetry. All systematic uncertainties on the asymmetry measurement are determined in each |η_μ| bin, accounting for correlations between the charges. The dominant sources of systematic uncertainty on the asymmetry come from the trigger and reconstruction efficiencies. The measured differential charge asymmetry in eleven bins of muon absolute pseudorapidity is shown in Figure 14. Also shown are expectations from W predictions at NLO with different PDF sets: CTEQ 6.6, HERAPDF 1.0 and MSTW 2008; all predictions are presented with 90% confidence-level error bands.

Figure 14 caption: The muon charge asymmetry from W boson decays in bins of absolute pseudorapidity. The data (shown with error bars including the statistical and systematic uncertainties) are compared to MC@NLO predictions with different PDF sets.

A χ² comparison using the measurement uncertainty and the central value of the PDF predictions yields values per number of degrees of freedom of 9.16/11 for the CTEQ 6.6 PDF set. Whereas none of the predictions are inconsistent with these data, the predictions are not fully consistent with each other, since they are all phenomenological extrapolations in x. The input of the data presented here is therefore expected to contribute to the determination of the next generation of PDF sets, helping to reduce PDF uncertainties, particularly in the shapes of the valence quark distributions in the low-x region.

3. Diboson production

In the following, some measurements of diboson production cross sections are presented. In Section 3.1 the measurements of the Wγ and Zγ cross sections are presented. The analysis of W⁺W⁻ production is described in Section 3.2, while in Section 3.3 the measurement of W±Z production is shown.

3.1. Wγ and Zγ cross section measurements

The measurement of W and Z bosons in association with high energy photons provides important tests of the Standard Model. In fact, physics beyond the Standard Model, such as a composite structure of the W and Z bosons, new vector bosons, and techni-mesons, would enhance those production cross sections and alter the event kinematics. ATLAS studies use measurements of pp → ℓ±νγ + X and pp → ℓ⁺ℓ⁻γ + X with an integrated luminosity of approximately 35 pb⁻¹. Events are selected by requiring the presence of a W or Z boson candidate along with an associated isolated photon having a transverse energy E_T^γ > 15 GeV and being separated from the closest electron or muon by ΔR(ℓ, γ) > 0.7. The event selection is based on the presence of one high-p_T lepton and one high-E_T photon. In addition, for the Wγ channel the event is required to have E_T^miss > 25 GeV and m_T > 40 GeV, while for the Zγ channel events with m_ℓℓ > 40 GeV are selected. The suppression of photons from final state radiation (FSR) is done by means of an isolation cut. A total of 192 Wγ candidates (95 in the electron and 97 in the muon channel) and 48 Zγ candidates (25 in the electron and 23 in the muon channel) pass all the requirements. In Figure 15 the three-body transverse mass of those Wγ candidate events, considering both the electron and muon channels, is shown. In Figure 16 the three-body invariant mass for the Zγ candidate events is shown, again for the electron and muon channels together. In both distributions a good agreement between data and predictions can be observed.

The summary of the measurements of the Wγ and Zγ cross sections is reported in Table 2, along with the NLO theoretical predictions. The main systematic uncertainties affecting the measurements come from the photon reconstruction and identification efficiency (∼11%). While the current measurements are not strongly sensitive to possible new physics, the distributions of kinematic variables determined from the leptons and photons are consistent with the predictions of the Standard Model in a new kinematic regime. The ratio of the Wγ to Zγ cross sections, defined as

$R = \frac{\sigma(pp \to \ell^\pm\nu\gamma + X)}{\sigma(pp \to \ell^+\ell^-\gamma + X)}$

can be measured with a higher relative precision than the individual cross sections, since both experimental and theoretical uncertainties partially cancel. This ratio is a test of the WWγ triple gauge coupling predicted by the Standard Model. In Figure 17 the measured ratio of the production cross sections of Wγ and Zγ, together with the Standard Model prediction, is reported.

Figure 15 caption: Distribution for the combined electron and muon decay channels of the three-body transverse mass m_T(ℓ, ν, γ) of the Wγ candidate events. Monte Carlo predictions for signal and backgrounds are also shown.

Figure 16 caption: Three-body invariant mass m(ℓ⁺, ℓ⁻, γ) distribution for Zγ data candidate events. Monte Carlo predictions for signal and backgrounds are also shown. Both the electron and muon decay channels are included.

Table 2. Production cross sections (in pb) of the pp → ℓ±νγ + X and pp → ℓ⁺ℓ⁻γ + X processes at √s = 7 TeV. Both the experimental measurements and the Standard Model NLO predictions are given. The uncertainty in the Standard Model prediction is the systematic uncertainty coming from PDFs.
pp → ℓ±νγ + X: 42.5 ± 4.2 (stat.) ± 7.2 (syst.) ± 1.4 (lumi.); SM NLO prediction: 42.1 ± 2.7
pp → ℓ⁺ℓ⁻γ + X: 6.4 ± 1.2 (stat.) ± 1.6 (syst.) ± 0.2 (lumi.); SM NLO prediction: 6.9 ± 0.5

Figure 17 caption: The measured ratio of the production cross sections of Wγ and Zγ, together with the Standard Model prediction. Results are shown for the electron and muon final states as well as for their combination. The error bars represent the statistical and the statistical plus systematic uncertainties. The one standard deviation uncertainty in the Standard Model prediction is represented by the yellow band.

3.2. W⁺W⁻ production

The W⁺W⁻ process plays an important role in electroweak physics. The production rate and kinematic distributions of W⁺W⁻ are sensitive to the triple gauge couplings of the W boson, and W⁺W⁻ production is an important background to Standard Model Higgs boson searches. ATLAS has measured the W⁺W⁻ production cross section. Candidate W⁺W⁻ events are reconstructed in the fully leptonic decay channel (including the channels with taus decaying leptonically), looking for ℓ⁺νℓ⁻ν̄ events. This final state has a better signal to background ratio than the semi-leptonic or hadronic channels. In Figure 18 the event display of a W⁺W⁻ candidate is shown: in this case one W decays to eν and the other to μν. In the event display the electron and muon can be seen together with the direction of the missing energy.

Figure 18 caption: Event display of a WW → eνμν candidate.

The event selection starts from the requirement that the event has fired a single lepton trigger. The signal selection requires two opposite sign leptons with p_T > 20 GeV and missing energy. The main backgrounds for the fully leptonic channel are W+jets, Drell-Yan production, top production (both tt̄ and Wt) and other diboson processes. The main sources of background have been evaluated from Monte Carlo, except for the background from W+jets. This background was estimated directly from data, as the rate at which hadronic jets are mis-identified as leptons may not be accurately described in the Monte Carlo. The W+jets background is derived by defining a control region, similar to the W⁺W⁻ signal selection, that is enriched in W+jets events due to the use of an alternative lepton definition. The selected events are then required to pass the full W⁺W⁻ event selection, where the jet is treated as if it were a fully identified lepton. The W+jets background is then estimated by scaling this control sample by a measured fake factor. To reject the background from Z bosons, a cut on the dilepton invariant mass is applied, removing events with |m_ℓℓ − m_Z| < 10 GeV and m_ℓℓ < 15 GeV. Events are required to have a large amount of missing energy, where the missing energy in this case is defined as

$E_{T,rel}^{miss} = \begin{cases} E_T^{miss} \times \sin\Delta\phi & \text{if } \Delta\phi < \pi/2 \\ E_T^{miss} & \text{otherwise} \end{cases}$

where Δφ is the difference in the azimuthal angle between the E_T^miss and the nearest lepton or jet. This definition of E_T^miss,rel helps in rejecting more efficiently those events for which the missing energy is likely to have been mis-measured. For the ee and μμ channels the requirement is E_T^miss,rel > 40 GeV, while for the eμ channel it is E_T^miss,rel > 20 GeV. Another important requirement to suppress background, in particular the tt̄ contribution, is to veto events with reconstructed jets. The effect of this selection can be seen by comparing Figures 19 and 20, where the distribution of E_T^miss,rel for events passing the full event selection apart from the cut on E_T^miss,rel is reported. In Figure 19 the jet veto is not applied, while in Figure 20 it is: the tt̄ background is highly suppressed. After the selection, 8 candidate events are selected in data: 1 in the e⁺e⁻ channel, 2 in the μ⁺μ⁻ channel and 5 in the e±μ∓ channel. In Figure 21 the distribution of the dilepton system p_T for these W⁺W⁻ candidates is shown.

Figure 19 caption: Relative E_T^miss distribution for the ee and μμ sample without the jet-veto requirement.

Figure 20 caption: Relative E_T^miss distribution for the selected ee and μμ events. The distributions show events with all selection criteria applied except for the relative E_T^miss.

Figure 21 caption: Distributions of the dilepton system p_T for W⁺W⁻ candidates. The points are the data and the stacked histograms are from Monte Carlo predictions, except the W+jets background, which is obtained from data-driven methods.

ATLAS measures a cross section of $\sigma^{tot}_{W^+W^-} = 41^{+20}_{-16}(stat.) \pm 5(syst.) \pm 1(lumi.)$ pb. This is in agreement with the NNLO theoretical prediction, which is 44.3 ± 3 pb. Note that the systematic uncertainty on the measurement is far smaller than the statistical uncertainty. This measurement will therefore profit from the high integrated luminosity collected in the year 2011.

3.3. W±Z production

ATLAS has also measured the W±Z production cross section. The analysis uses four channels with leptonic decays (W±Z → ℓνℓℓ) involving electrons and muons: eee, eeμ, eμμ or μμμ (including secondary e or μ leptons from the decay of τ leptons), plus missing transverse energy, E_T^miss. The results are based on an integrated luminosity of 205 pb⁻¹ collected in early 2011. The main sources of background to the leptonic W±Z signal are ZZ, Zγ, Z/γ*+jets, and top events. The signal and background contributions are mainly modeled with Monte Carlo simulation and validated with data-driven techniques.

At least one single lepton trigger is required to fire in order to select the event. Events with two leptons of the same flavour and opposite charge with an invariant mass within 10 GeV of the Z boson mass are selected. This reduces much of the background from QCD and top production, and some diboson backgrounds. Events are then required to have at least 3 reconstructed leptons originating from the same primary vertex: two leptons from a Z → ℓℓ decay and an additional third lepton. This requirement reduces the Z/γ*+jets, top, and some diboson backgrounds. Next, the E_T^miss in the event is required to be greater than 25 GeV and the transverse mass of the system formed from the third lepton and the E_T^miss is required to be greater than 20 GeV. These cuts suppress the remaining backgrounds from Zγ and diboson production. Figures 22 and 23 show the transverse momentum of the W and Z bosons for the selected events.

Figure 22 caption: Transverse momentum of the W in W±Z candidate events.

Figure 23 caption: Transverse momentum of the Z in W±Z candidate events.

ATLAS observes 12 W±Z candidates in data, with 9.1 ± 0.2 (stat.) ± 1.3 (syst.) signal and 2.0 ± 0.3 (stat.) ± 0.7 (syst.) background events expected. The signal definition includes the contribution from tau decays into electrons or muons, which accounts for about 0.5 events. The final result for the combined total inclusive cross section measurement for the W±Z bosons decaying directly into electrons and muons, excluding contributions from tau decays, is

$\sigma^{tot}_{WZ} = 18^{+7}_{-6}(stat.)^{+3}_{-3}(syst.)^{+1}_{-1}(lumi.)$ pb.

This measurement is found to be in agreement with the NNLO theoretical prediction, which is 16.9 pb. Also in this case, the main uncertainty on the measurement comes from the amount of data available. Therefore this measurement will also profit from the high statistics collected during the full 2011 period.

4. Conclusions

To conclude, in this paper the recent electroweak results of the ATLAS Experiment with 2010 data have been presented. They are summarized in Figure 24.

Figure 24 caption: Summary of the ATLAS cross-section measurements from the 2010 and early 2011 datasets, including inclusive W and Z, diboson Wγ and Zγ, W⁺W⁻, W±Z and tt̄ production. The dark error bar represents the statistical uncertainty. The red error bar represents the full uncertainty, including systematics and luminosity uncertainties.

In 2011, already more than 1 fb⁻¹ of pp collisions has been collected. Such an amount of data will give the possibility to improve those measurements for which the statistical error is the dominating uncertainty, like the measurements of diboson production. With these data it will also be possible to measure differential cross sections. Moreover, new analyses will be possible with the increased statistics, for example the measurement of W/Z + b production in W/Z+jets events and of the ZZ production cross section.
|
Predicting Student Success: An Application of Data Mining Techniques in Higher Educational Systems A polarized signal receiver waveguide assembly, or feedhorn, for receiving a selected one of linearly polarized electromagnetic signals in one waveguide of circular cross-section and for launching or transmitting the selected signal into a second waveguide, the axes of the waveguides being disposed at right angle. The first waveguide has a closed end wall, formed as a hemispherical cavity having a hemispherical concave surface. A probe comprising a signal receiver portion disposed in a plane perpendicular to the axis of the first waveguide and a launch or re-transmitter portion having its axis perpendicular to the axis of the second waveguide has its launch or transmitter portion mounted in a controllably rotatable dielectric rod, such that rotation of the rod causes rotation of the signal receiver portion for alignment with a selected one of the polarized signals. The transmission line between the probe signal receiver portion and launch or re-transmitter portion consists of a pair of bifurcated curvilinear branches forming a rectangle disposed along the axis of the first waveguide and having sides bent at an angle in the form of angled and tapered winglets, rearwardly converging.
|
def initialise_dive(self, data, df_gas):
    # Builds a dive profile via the R 'scuba' package through rpy2
    # (assumes 'import rpy2.robjects as robjects' at module level).
    dive = robjects.r['dive']
    gas_list, tank_times = self.trimix_list(df_gas)
    size = len(gas_list)
    d = dive(data, tanklist=gas_list)
    custom_gas = robjects.r('''
        customGas <- function(dive_profile, numgas, list_of_times)
        {
            # Applies names to the tanklist in the format c("1":"n") - necessary
            # to select which gas to use at a specific time.
            names(tanklist(dive_profile)) <- c(1:numgas)
            # Cuts the dive profile and switches to the specific gas at each listed time.
            whichtank(dive_profile) <- cut(times.dive(dive_profile), breaks = c(do.call(c, list_of_times), Inf), include.lowest = TRUE, labels = names(tanklist(dive_profile)))
            return(dive_profile)
        }
    ''')
    dive_profile = custom_gas(d, size, tank_times)
    return dive_profile, gas_list
|
public List<Result> transformR(List<Element> input) {
if (input.isEmpty()) { // end of recursion
return Collections.emptyList();
}
// Deconstruct
Element head = input.get(0);
List<Element> tail = input.subList(1, input.size());
// Handle head
Result transformed = transform(head);
// Recursion
List<Result> results = new ArrayList<>();
results.add(transformed);
results.addAll(transformR(tail));
return results;
}
|
## convert polyspline into 3d path
# Author: <NAME> (Neill3d), e-mail to: <EMAIL>
# www.neill3d.com
#
# Github repo - https://github.com/Neill3d/MoPlugs
# Licensed under BSD 3-clause
# https://github.com/Neill3d/MoPlugs/blob/master/LICENSE
#
from pyfbsdk import *
import pyediting
def ConvertPolySpline(polySpline, skipCount):
    m = FBMatrix()
    polySpline.GetMatrix(m)
    scalingM = FBMatrix()
    polySpline.GetMatrix(scalingM, FBModelTransformationType.kModelScaling)
    vertices = pyediting.GetVertexArray(polySpline, False)
    print len(vertices)
    curve = FBModelPath3D("Spline_" + polySpline.Name)
    curve.Show = True
    curve.Visible = True
    skip = 0
    p2 = FBVertex()
    for point in vertices:
        if skip == 0:
            FBVertexMatrixMult(p2, scalingM, point)
            curve.PathKeyEndAdd(FBVector4d(p2[0], p2[1], p2[2], 1.0))
        skip += 1
        if skip > skipCount:
            skip = 0
    # PATCH: remove the first two points, they are unnecessary
    curve.PathKeyRemove(0)
    curve.PathKeyRemove(0)
    # tangents
    count = curve.PathKeyGetCount()
    for i in range(count):
        curve.PathKeySetXYZDerivative(i, FBVector4d(0.0, 0.0, 0.0, 1.0), False)
    curve.SetMatrix(m)
    curve.SetVector(FBVector3d(1.0, 1.0, 1.0), FBModelTransformationType.kModelScaling)

# END
models = FBModelList()
FBGetSelectedModels(models)
btn, value = FBMessageBoxGetUserValue("Convert PolySpline", "Number Of Points To Skip", 10, FBPopupInputType.kFBPopupInt, "Ok")
for model in models:
    ConvertPolySpline(model, value)
|
<filename>js/popup/src/components/app.tsx
import * as React from "react";
import { Settings } from "./settings";
import { SettingsProvider } from "../providers/settings";
import { TabProvider } from "../providers/tab";
import { Header } from "./header";
import "./app.scss";
export interface Props {
}
export const App = (props: Props) => {
return (
<TabProvider>
<SettingsProvider>
<Header/>
<Settings/>
</SettingsProvider>
</TabProvider>
);
}
|
import { observer } from 'mobx-react-lite';
import React, { useMemo } from 'react';
import { useLocation } from 'react-router';
import LocalItemList from '../../components/local/LocalItemList';
import useNavigate from '../../hooks/useNavigate';
import { useMst } from '../../store/store';
function LocalMedia() {
const store = useMst();
const { navigate } = useNavigate();
const { state } = useLocation();
const navToDir = (dirName: string) => {
navigate('.', { state: { baseDir: dirName } });
};
const join = (...args: string[]) => {
return args
.map((part, i) => {
if (i === 0) {
return part.trim().replace(/[\/]*$/g, '');
} else {
return part.trim().replace(/(^[\/]*|[\/]*$)/g, '');
}
})
.filter(x => x.length)
.join('/');
};
const baseDir = useMemo(() => {
const preferred = store.userPreferences.download_directory;
if (state?.baseDir) {
return join(preferred, state.baseDir);
} else {
return preferred;
}
}, [state, store.userPreferences.download_directory]);
return (
<>
<LocalItemList baseDir={baseDir} onDirChange={navToDir} />
</>
);
}
export default observer(LocalMedia);
|
<filename>plugin/emitter/security/usage/metering_test.go
package usage
import (
"errors"
"testing"
"github.com/emitter-io/emitter/network/http"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
)
func TestNoop_New(t *testing.T) {
s := NewNoop()
assert.Equal(t, &NoopStorage{}, s)
}
func TestNoop_Configure(t *testing.T) {
s := new(NoopStorage)
err := s.Configure(nil)
assert.NoError(t, err)
}
func TestNoop_Name(t *testing.T) {
s := new(NoopStorage)
assert.Equal(t, "noop", s.Name())
}
func TestNoop_Get(t *testing.T) {
s := new(NoopStorage)
assert.Equal(t, uint32(123), s.Get(123).(Meter).GetContract())
}
func TestHTTP_New(t *testing.T) {
s := NewHTTP()
assert.NotNil(t, s.counters)
}
func TestHTTP_Name(t *testing.T) {
s := new(HTTPStorage)
assert.Equal(t, "http", s.Name())
}
func TestHTTP_ConfigureErr(t *testing.T) {
s := NewHTTP()
close(s.done)
err := s.Configure(nil)
assert.EqualError(t, err, "Configuration was not provided for HTTP metering provider")
err = s.Configure(map[string]interface{}{})
assert.EqualError(t, err, "Configuration was not provided for HTTP metering provider")
}
func TestHTTP_Configure(t *testing.T) {
s := NewHTTP()
close(s.done)
err := s.Configure(map[string]interface{}{
"interval": 1000.0,
"url": "http://localhost/test",
"authorization": "test",
})
assert.NoError(t, err)
assert.Equal(t, "http://localhost/test", s.url)
assert.NotNil(t, s.http)
}
func TestHTTP_Store(t *testing.T) {
h := http.NewMockClient()
h.On("Post", "http://127.0.0.1", mock.Anything, nil, mock.Anything).Return([]byte{}, nil)
u1 := usage{MessageIn: 1, TrafficIn: 200, MessageEg: 1, TrafficEg: 100, Contract: 0x1}
u2 := usage{MessageIn: 0, TrafficIn: 0, MessageEg: 0, TrafficEg: 0, Contract: 0x1}
s := NewHTTP()
s.url = "http://127.0.0.1"
defer close(s.done)
s.http = h
c := s.Get(1).(*usage)
c.AddEgress(100)
c.AddIngress(200)
assert.Equal(t, u1.MessageIn, c.MessageIn)
assert.Equal(t, u1.TrafficIn, c.TrafficIn)
assert.Equal(t, u1.MessageEg, c.MessageEg)
assert.Equal(t, u1.TrafficEg, c.TrafficEg)
assert.Equal(t, u1.Contract, c.Contract)
s.store()
assert.Equal(t, u2.MessageIn, c.MessageIn)
assert.Equal(t, u2.TrafficIn, c.TrafficIn)
assert.Equal(t, u2.MessageEg, c.MessageEg)
assert.Equal(t, u2.TrafficEg, c.TrafficEg)
assert.Equal(t, u2.Contract, c.Contract)
}
|
02 JUL 3302
Yesterday, the Empire joined the debate over the Federation's decision to deploy Farragut Battle Cruisers in the Merope system. In a statement, Senator Zemina Torval asserted that the use of Federal warships represented an attempt to claim possession of the non-human structures, commonly known as barnacles, found in Merope.
Responding to this criticism, Federal President Zachary Hudson has released a statement to the media:
"Our motives are entirely altruistic. The Federal presence in the Pleiades is motivated solely by a desire to protect the non-human structures located there."
"Since the barnacles were first discovered, they have been ruthlessly exploited. Given the barnacles' value to xenobiologists, and taking into consideration the possibility that they may possess some form of sentience, this exploitation cannot be allowed to continue."
|
Analysis of Platelet Aggregation, Secretion, Integrin Activation, and Calcium Release Stimulated by Different Agonists in Native Americans Arterial thrombosis results from obstructive vascular blood clots that are largely initiated by platelet activation. As such, individuals whose platelets react more robustly to chemical signals generated by vascular damage or dysfunction are at a higher risk for morbidity and mortality caused by thrombosis. As a population, Native Americans have higher rates of obstructive arterial diseases, and previous research suggests African American populations have higher overall platelet reactivity that positively correlates to their higher risk for cardiovascular disease. Therefore, we set out to measure platelet reactivity in Native Americans and identify potential genetic alleles that correlate to the elevations in responsivity. Five platelet agonists were utilized to simulate vascular damage signals, followed by measurements of markers indicative of subsequent platelet activation: aggregation, secretion, integrin activation, and calcium ion mobilization. Preliminary results from 17 subjects showed that Native American platelets more rapidly aggregate in response to several agonists when compared with Caucasians, whereas other measures of platelet function are largely similar between the groups. These results suggest that Native Americans have more sensitive platelet aggregation responses, which could result in a higher risk for occlusive arterial diseases. Complete genomic sequences were also obtained from the highest responders for use in identifying alleles that correlate to this response.
|
Behavior of Cement Composites Loaded by High Temperature This paper summarizes the results of an experimental program focused on the basic, mechanical and thermal properties of cement composites under high-temperature loading. Four different materials were studied, differing in the kind of cement used and the amount of fibers. Aluminous cement was chosen as the matrix for the studied composites because of its resistance to high temperatures. Portland cement was also tested for comparison. The second main ingredient used to provide better resistance at high temperatures, basalt aggregate, was mixed into every specimen. Basalt fibers were chosen for two of the measured samples; the remaining two were tested without fibers. The data obtained in the presented analyses show that the application of aluminous cement leads to a temperature-dependent increase in porosity, which causes a decrease in the coefficient of thermal conductivity. It might seem that these cement composites would therefore have low mechanical strength at high temperatures, but because of better sintering, aluminous cement retains its strength at high temperatures better than Portland cement.
|
An IoT network coordinated AI engine to produce loading and delivery schedules for capacitated vehicle routing problems The capacitated vehicle routing problem with weight constraints is investigated using a hybrid simulation-genetic algorithm approach. The computational method is capable of producing near-optimal loading and delivery routes for truck fleets delivering products to specific locations. The method is illustrated through its application to a problem consisting of delivering collections of pallets to a number of locations throughout the northern corridor of the United States.
|
<reponame>mightofcode/yinwangblog<filename>src/main/java/com/mocyx/yinwangblog/blog/GithubService.java
package com.mocyx.yinwangblog.blog;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONObject;
import com.mocyx.yinwangblog.BlogException;
import com.mocyx.yinwangblog.Global;
import com.mocyx.yinwangblog.blog.entity.gql.GqlQuery;
import com.mocyx.yinwangblog.blog.entity.issue.IssuesDto;
import lombok.extern.slf4j.Slf4j;
import okhttp3.*;
import org.apache.commons.io.IOUtils;
import org.springframework.stereotype.Component;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;
/**
* @author Administrator
*/
@Component
@Slf4j
public class GithubService {
private final String queryStr = "{\n" +
" repository(owner: \"#{owner}\", name: \"#{name}\") {\n" +
" issues(first: 100, orderBy: {field: CREATED_AT, direction: DESC},states: OPEN) {\n" +
" nodes {\n" +
" author {\n" +
" login\n" +
" }\n" +
" bodyHTML\n" +
" title\n" +
" comments(first: 100) {\n" +
" nodes {\n" +
" bodyHTML\n" +
" author {\n" +
" login\n" +
" }\n" +
" createdAt\n" +
" id\n" +
" databaseId\n" +
" }\n" +
" }\n" +
" createdAt\n" +
" id\n" +
" databaseId\n" +
" }\n" +
" }\n" +
" }\n" +
"}";
private IssuesDto parseJson(String s) {
JSONObject jsonObject = JSON.parseObject(s);
JSONObject data = (JSONObject) jsonObject.get("data");
JSONObject repository = (JSONObject) data.get("repository");
JSONObject issues = (JSONObject) repository.get("issues");
String str = JSON.toJSONString(issues);
IssuesDto issuesDto = JSON.parseObject(str, IssuesDto.class);
return issuesDto;
}
public IssuesDto getIssues() throws IOException {
OkHttpClient client = new OkHttpClient.Builder()
.connectTimeout(3000, TimeUnit.MILLISECONDS)
.writeTimeout(3000, TimeUnit.MILLISECONDS)
.readTimeout(60000, TimeUnit.MILLISECONDS)
.build();
MediaType jsonMedia = MediaType.parse("application/json; charset=utf-8");
GqlQuery query = new GqlQuery();
String gql = queryStr.replace("#{owner}", Global.config.getGithubName())
.replace("#{name}", Global.config.getGithubRepo());
query.setQuery(gql);
String jsonStr = JSON.toJSONString(query);
RequestBody formBody = RequestBody.create(jsonMedia, jsonStr);
Request request = new Request.Builder()
.addHeader("Authorization", "bearer " + Global.config.getGithubToken())
.url(Global.gqlEndpoint)
.post(formBody)
.build();
Response response = client.newCall(request).execute();
String bodyString = IOUtils.toString(response.body().byteStream(), StandardCharsets.UTF_8);
if (!response.isSuccessful()) {
log.error("http fail {} {}", response.code(), bodyString);
throw new BlogException("http error");
} else {
log.debug("http success {} {}", response.code(), bodyString);
}
IssuesDto issuesDto = parseJson(bodyString);
return issuesDto;
}
}
|
// A DataSink that converts received records and hands them to an exporter thread
// through a bounded queue; conversion errors are routed to sendOnError().
public abstract static class ThreadedDataSink<ReceivedRecordType extends Record, CreatedRecordType extends Record> extends DataSink<ReceivedRecordType, CreatedRecordType> {
// queue holds the exported records, which are read by an exporter running in a separate thread.
protected java.util.concurrent.BlockingQueue<CreatedRecordType> queue;
public ThreadedDataSink(int queueSize) {
queue = new java.util.concurrent.ArrayBlockingQueue<CreatedRecordType>(queueSize);
}
// Override this to route errorRecord to another data sink.
protected void sendOnError(Record errorRecord) {
logException(errorRecord);
}
// To handle export conversion errors by sending them to an error sink,
// simply override receive().
public void receive(ReceivedRecordType rec) {
CreatedRecordType exportedRecord = null;
try {
exportedRecord = exportConversion(rec);
try {
queue.put(exportedRecord);
} catch (InterruptedException ex) {
// Restore the interrupt status so the owning thread can observe and act on it.
Thread.currentThread().interrupt();
}
} catch (Throwable ex) {
java.util.Set<hu.sztaki.ilab.giraffe.schema.dataprocessing.EventType> ev = ProcessingElementBaseClasses.addEvent(null, EventType.ERROR_CONVERSION_FAILED);
sendOnError(updateErrorRecord(ex, rec, ev));
}
}
public abstract CreatedRecordType exportConversion(ReceivedRecordType record) throws java.lang.Throwable;
protected abstract Record updateErrorRecord(java.lang.Throwable ex, ReceivedRecordType received, java.util.Set<hu.sztaki.ilab.giraffe.schema.dataprocessing.EventType> events);
}
|
// ReadUintX reads an unsigned integer that was encoded using a variable
// number of bytes: a base-128 varint in which each continuation byte also
// adds one to the accumulated value, so every integer has a unique encoding.
func (e *BigEndianReader) ReadUintX() (uint64, int, error) {
c, err := e.ReadByte()
if err != nil {
return 0, 0, err
}
val := uint64(c) & 0x7f
n := 1
for ; c&0x80 != 0 && n < 9; n++ {
c, err = e.ReadByte()
if err != nil {
return 0, n, err
}
		val++ // the +1 per continuation byte is what makes each encoding unique
if n == 8 && c&0x80 > 0 {
val = (val << 8) | uint64(c)
} else {
val = (val << 7) | uint64(c&0x7f)
}
}
return val, n, nil
}
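To make the offset scheme concrete, here is a small Python sketch of a matching encoder plus a round-trip check. This is my own illustration, not part of the original package, and it covers only the common path (it ignores the reader's special full-byte handling of a ninth byte).
def encode_uintx(v):
    # Emit base-128 groups most-significant first; every continuation
    # byte implies an extra +1 on decode, so each value has one encoding.
    out = [v & 0x7F]
    v >>= 7
    while v:
        v -= 1                       # mirror of the decoder's val++
        out.append(0x80 | (v & 0x7F))
        v >>= 7
    return bytes(reversed(out))

def decode_uintx(data):
    val = data[0] & 0x7F
    for c in data[1:]:
        val = ((val + 1) << 7) | (c & 0x7F)
    return val

assert all(decode_uintx(encode_uintx(v)) == v for v in (0, 127, 128, 16511, 2 ** 32))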
|
How can service providers future-proof themselves against competition in the local loop? The competitive nature of the local loop is examined. Given the convergence of services, the access network must be designed to serve all subscribers with all types of services over all available distribution media. A flexible architecture for platforms deployed in the local loop will be paramount in allowing a service provider to deal with such diversity. The distributed bandwidth architecture proposed offers a future-proof solution that leaves the door open for new services and distribution media as the access network's needs mandate.
|
package be.swsb.pubgclient.model.match;
import java.time.ZonedDateTime;
public class MatchAttributes {
private String shardId;
private String gameMode;
private String mapName;
private ZonedDateTime createdAt;
private int duration;
private String titleId;
public String getShardId() {
return shardId;
}
public void setShardId(String shardId) {
this.shardId = shardId;
}
public String getGameMode() {
return gameMode;
}
public void setGameMode(String gameMode) {
this.gameMode = gameMode;
}
public String getMapName() {
return mapName;
}
public void setMapName(String mapName) {
this.mapName = mapName;
}
public ZonedDateTime getCreatedAt() {
return createdAt;
}
public void setCreatedAt(ZonedDateTime createdAt) {
this.createdAt = createdAt;
}
public int getDuration() {
return duration;
}
public void setDuration(int duration) {
this.duration = duration;
}
public String getTitleId() {
return titleId;
}
public void setTitleId(String titleId) {
this.titleId = titleId;
}
@Override
public String toString() {
return "MatchAttributes{" +
"shardId='" + shardId + '\'' +
", gameMode='" + gameMode + '\'' +
", mapName='" + mapName + '\'' +
", createdAt=" + createdAt +
", duration=" + duration +
", titleId='" + titleId + '\'' +
'}';
}
}
|
(CNS): The first case under the new anti-corruption law to be heard in the Grand Court has been delayed, the court heard Friday, because the RCIPS staffer who is the first person to be charged under the law was refused legal aid. Patricia Webster, who was working as a receptionist at a police station when she reportedly abused police confidentiality, was denied government funding because crimes under the new law have not yet been added to the legal aid schedule. Webster's defence attorney pointed out that, given the gravity of the charges, which could result in a maximum sentence of ten years, she needed representation.
Having established that Webster met the means test, Justice Alex Henderson granted the legal aid request, allowing the attorney to begin work on the case. The case was adjourned until 3 February, when Webster is expected to formally answer the charges against her.
Webster was arrested in October following an investigation by the RCIPS' own anti-corruption team and has been charged with two counts of abuse of public office and two counts of misconduct in a public office, contrary to section 17 of the Anti-Corruption Law 2008. The specific details of the charges have not yet been made public, but the RCIPS has said the counts relate to confidential police data.
|
from cellpose import models

def cellpose_init_model(self, gpu: bool = False, model_type: str = 'nuclei',
                        net_avg: bool = True, device: object = None,
                        torch: bool = True) -> models.Cellpose:
    # Build and return a Cellpose model configured for the requested backend.
    # (The return annotation was "-> None" although the model is returned.)
    model = models.Cellpose(gpu=gpu,
                            model_type=model_type,
                            net_avg=net_avg,
                            device=device,
                            torch=torch)
    return model
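A short usage sketch follows, assuming the cellpose package and a single-channel grayscale image; the random array stands in for a real microscopy image and the channels argument is illustrative.
import numpy as np
from cellpose import models

model = models.Cellpose(gpu=False, model_type='nuclei')
img = np.random.rand(256, 256)  # stand-in for a real microscopy image
# eval returns masks, flows, styles, and estimated diameters
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])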
|
Everyone is waiting for John Wick. The titular assassin in John Wick: Chapter 3 – Parabellum finds himself the target of an army of assassins after he broke the rules at the prestigious Continental, the assassins' hotel which forbids any violence within its walls. But to be fair, they shot first. Watch the latest John Wick: Chapter 3 trailer below.
Picking up shortly after the events of John Wick: Chapter 2, Parabellum finds our titular hero on the run and excommunicado from the Continental after killing a crime lord within its walls. With a $14 million price tag on his head and an army of bloodthirsty killers on his trail, John Wick (Keanu Reeves) turns to the few allies he has, including Halle Berry's assassin Sofia and her dogs. Joining Sofia are a slew of new characters from the assassin's guild and the High Table, most of whom want to see John Wick dead.
That includes Anjelica Huston as The Director, a member of the High Table who is one of John Wick’s few key allies, Asia Kate Dillon as The Adjudicator, another member of the High Table, and Mark Dacascos as Zero, an assassin with a personal vendetta against John Wick. All of these new characters — including the German Shepherds — got their own character teasers debuted by IGN earlier this week leading up to the trailer.
|
#[derive(thiserror::Error, Debug)]
pub enum Error {
#[error("Missing token")]
MissingToken,
#[error("Missing key")]
MissingKey,
#[error("Invalid parameter: {}", .0)]
Parameter(&'static str),
#[error("Unexpected response: {}", .0)]
Response(String),
#[error("Request error: {}", .0)]
Reqwest(#[from] reqwest::Error),
#[error("Request middleware error: {}", .0)]
ReqwestMiddleware(anyhow::Error),
#[error("Error parsing response: {}", .0)]
Parse(#[from] serde_json::Error),
#[error("{}", .0.status())]
Http(reqwest::Response),
#[error("Too many requests. Retry after {}s", .0)]
TooManyRequests(u64),
}
impl From<reqwest_middleware::Error> for Error {
fn from(error: reqwest_middleware::Error) -> Self {
match error {
reqwest_middleware::Error::Reqwest(e) => {
Self::Reqwest(e)
},
reqwest_middleware::Error::Middleware(e) => {
Self::ReqwestMiddleware(e)
},
}
}
}
|
import { TensorBuilder } from "../src/tensor";
import { getGPUDevice } from "../src/gpu";
import { handlePadding } from "../src/layers/padding";
import { PaddingAttr } from "../../common/attr/padding";
import { expect } from "chai";
describe("Test padding layer of WebGPU backend", () => {
it("Test constant padding", async () => {
const device = await getGPUDevice();
const input = TensorBuilder.withData([
[
[
[1, 1.2],
[2.3, 3.4],
[4.5, 5.7],
],
],
]);
const attr = new PaddingAttr();
attr.pads = [0, 0, 0, 2, 0, 0, 0, 0];
const output = await handlePadding(input, attr, device!);
expect(output.shape).deep.equal([1, 1, 3, 4]);
expect(output.data).deep.equal(
new Float32Array([0, 0, 1, 1.2, 0, 0, 2.3, 3.4, 0, 0, 4.5, 5.7])
);
});
});
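For reference, the same padding semantics can be checked against numpy, where the ONNX-style pads array splits into per-axis begin values followed by per-axis end values. This is a sketch I added for illustration, not part of the test suite.
import numpy as np

x = np.array([[[[1, 1.2], [2.3, 3.4], [4.5, 5.7]]]], dtype=np.float32)
pads = [0, 0, 0, 2, 0, 0, 0, 0]            # begins for N,C,H,W then ends for N,C,H,W
pad_width = list(zip(pads[:4], pads[4:]))  # [(0,0), (0,0), (0,0), (2,0)]
y = np.pad(x, pad_width, constant_values=0)
assert y.shape == (1, 1, 3, 4)             # matches the WebGPU test's expectation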
|
Rumor: Will Neill Blomkamp Direct The Hobbit?
Put this one down as a significant but believable rumor.
Since Guillermo del Toro walked away from directing The Hobbit, there has been plenty of speculation about who might take his place. For obvious reasons, Peter Jackson‘s name keeps coming up, but there are a number of reasons that he is unlikely to do the job. David Yates, David Dobkin and Brett Ratner are all names that have come up, though none of those really thrilled the fan contingent. But if Jackson can’t or won’t do the movie, what about his most recent protege, District 9 director Neill Blomkamp?
TheOneRing.net received a spy report saying that Blomkamp is the new director of The Hobbit, but hasn't been able to verify it. The site credits the report with some truth, at least, because it contains other details that it says have been verified.
The story, in short, is that Jackson really isn't interested in directing, and would produce with as little on-set presence as possible. Consequently, he wants someone he knows and/or trusts. del Toro fit that bill, and for that reason, if nothing else, the Blomkamp rumor is an easy one to believe.
Now, Blomkamp had previously told the LA Times that he wasn’t interested in making films “with seriously high budgets,” and that he’d already turned down a few. He wants to retain control over his films, and big budgets lead, among other things, to a loss of control.
But with Peter Jackson shepherding the project, might things be different? It isn’t difficult to assume that Blomkamp might make an exception if he’d be sheltered under Jackson’s oversight. Would he put aside his projected new sci-fi film to take this job?
But, again, this is a rumor. Nothing is substantiated at this point, and we’ll update with a denial or, possibly, a confirmation when possible.
|
r"""
Modified from https://raw.githubusercontent.com/pytorch/pytorch/v1.7.0/torch/distributed/launch.py
This script aims to quickly start Single-Node multi-process distributed training.
From PyTorch:
Copyright (c) 2016- Facebook, Inc (<NAME>)
Copyright (c) 2014- Facebook, Inc (<NAME>)
Copyright (c) 2011-2014 Idiap Research Institute (<NAME>)
Copyright (c) 2012-2014 Deepmind Technologies (<NAME>)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (<NAME>)
Copyright (c) 2006-2010 NEC Laboratories America (<NAME>, <NAME>, <NAME>, <NAME>)
Copyright (c) 2006 Idiap Research Institute (<NAME>)
Copyright (c) 2001-2004 Idiap Research Institute (<NAME>, <NAME>, <NAME>)
From Caffe2:
Copyright (c) 2016-present, Facebook Inc. All rights reserved.
All contributions by Facebook:
Copyright (c) 2016 Facebook Inc.
All contributions by Google:
Copyright (c) 2015 Google Inc.
All rights reserved.
All contributions by Yangqing Jia:
Copyright (c) 2015 <NAME>
All rights reserved.
All contributions by Kakao Brain:
Copyright 2019-2020 Kakao Brain
All contributions from Caffe:
Copyright(c) 2013, 2014, 2015, the respective contributors
All rights reserved.
All other contributions:
Copyright(c) 2015, 2016 the respective contributors
All rights reserved.
Caffe2 uses a copyright model similar to Caffe: each contributor holds
copyright over their contributions to Caffe2. The project versioning records
all such contribution and copyright details. If a contributor wants to further
mark their specific copyright on a particular contribution, they should
indicate their copyright solely in the commit message of the change when it is
committed.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the names of Facebook, Deepmind Technologies, NYU, NEC Laboratories America
and IDIAP Research Institute nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import subprocess
import os
import socket
from argparse import ArgumentParser, REMAINDER
def get_free_port():
    # Bind to port 0 so the OS assigns a currently unused port, then release it.
    sock = socket.socket()
    sock.bind(('', 0))
    _, port = sock.getsockname()
    sock.close()
    return port
def parse_args():
"""
Helper function parsing the command line options
@retval ArgumentParser
"""
parser = ArgumentParser(description="PyTorch distributed training launch "
"helper utility that will spawn up "
"multiple distributed processes")
parser.add_argument("--gpus", default="0", type=str,
help="CUDA_VISIBLE_DEVICES")
# Optional arguments for the launch helper
# parser.add_argument("--nnodes", type=int, default=1,
# help="The number of nodes to use for distributed "
# "training")
# parser.add_argument("--node_rank", type=int, default=0,
# help="The rank of the node for multi-node distributed "
# "training")
# parser.add_argument("--nproc_per_node", type=int, default=1,
# help="The number of processes to launch on each node, "
# "for GPU training, this is recommended to be set "
# "to the number of GPUs in your system so that "
# "each process can be bound to a single GPU.")
# parser.add_argument("--master_addr", default="127.0.0.1", type=str,
# help="Master node (rank 0)'s address, should be either "
# "the IP address or the hostname of node 0, for "
# "single node multi-proc training, the "
# "--master_addr can simply be 127.0.0.1")
# parser.add_argument("--master_port", default=29500, type=int,
# help="Master node (rank 0)'s free port that needs to "
# "be used for communication during distributed "
# "training")
# parser.add_argument("--use_env", default=False, action="store_true",
# help="Use environment variable to pass "
# "'local rank'. For legacy reasons, the default value is False. "
# "If set to True, the script will not pass "
# "--local_rank as argument, and will instead set LOCAL_RANK.")
# parser.add_argument("-m", "--module", default=False, action="store_true",
# help="Changes each process to interpret the launch script "
# "as a python module, executing with the same behavior as"
# "'python -m'.")
# parser.add_argument("--no_python", default=False, action="store_true",
# help="Do not prepend the training script with \"python\" - just exec "
# "it directly. Useful when the script is not a Python script.")
# positional
parser.add_argument("training_script", type=str,
help="The full path to the single GPU training "
"program/script to be launched in parallel, "
"followed by all the arguments for the "
"training script")
# rest from the training program
parser.add_argument('training_script_args', nargs=REMAINDER)
return parser.parse_args()
def main():
args = parse_args()
n_gpus = len(args.gpus.split(","))
# here, we specify some command line parameters manually,
# since in Single-Node multi-process distributed training, they often are fixed or computable
args.nnodes = 1
args.node_rank = 0
args.nproc_per_node = n_gpus
args.master_addr = "127.0.0.1"
args.master_port = get_free_port()
args.use_env = False
args.module = False
args.no_python = False
# world size in terms of number of processes
dist_world_size = args.nproc_per_node * args.nnodes
# set PyTorch distributed related environmental variables
current_env = os.environ.copy()
current_env["CUDA_VISIBLE_DEVICES"] = args.gpus
current_env["MASTER_ADDR"] = args.master_addr
current_env["MASTER_PORT"] = str(args.master_port)
current_env["WORLD_SIZE"] = str(dist_world_size)
processes = []
if 'OMP_NUM_THREADS' not in os.environ and args.nproc_per_node > 1:
current_env["OMP_NUM_THREADS"] = str(1)
# print("*****************************************\n"
# "Setting OMP_NUM_THREADS environment variable for each process "
# "to be {} in default, to avoid your system being overloaded, "
# "please further tune the variable for optimal performance in "
# "your application as needed. \n"
# "*****************************************".format(current_env["OMP_NUM_THREADS"]))
for local_rank in range(0, args.nproc_per_node):
# each process's rank
dist_rank = args.nproc_per_node * args.node_rank + local_rank
current_env["RANK"] = str(dist_rank)
current_env["LOCAL_RANK"] = str(local_rank)
# spawn the processes
with_python = not args.no_python
cmd = []
if with_python:
cmd = [sys.executable, "-u"]
if args.module:
cmd.append("-m")
else:
if not args.use_env:
raise ValueError("When using the '--no_python' flag, you must also set the '--use_env' flag.")
if args.module:
raise ValueError("Don't use both the '--no_python' flag and the '--module' flag at the same time.")
cmd.append(args.training_script)
if not args.use_env:
cmd.append("--local_rank={}".format(local_rank))
cmd.extend(args.training_script_args)
process = subprocess.Popen(cmd, env=current_env)
processes.append(process)
for process in processes:
process.wait()
if process.returncode != 0:
raise subprocess.CalledProcessError(returncode=process.returncode,
cmd=cmd)
if __name__ == "__main__":
main()
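To see how this launcher is used, here is a hypothetical minimal training script and invocation; the script name and flags are placeholders. init_process_group's default env:// method picks up the MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE variables the launcher sets above.
# train.py -- minimal sketch of a script this launcher can spawn
import argparse
import torch
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # passed by the launcher
args = parser.parse_args()

dist.init_process_group(backend="nccl")  # reads RANK/WORLD_SIZE/MASTER_* from env
torch.cuda.set_device(args.local_rank)
print("rank", dist.get_rank(), "of", dist.get_world_size())
Invoked, for example, as: python launch.py --gpus 0,1 train.py --some_flag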
|
Heuristic algorithm for finding the maximal breach path in a wireless sensor network with omnidirectional sensors Barrier coverage is one of the most crucial issues for quality of service in wireless sensor networks (WSN), and it has recently emerged as a premier research topic. Barrier coverage can guarantee intrusion detection when mobile objects enter the boundary of a sensor field or penetrate the sensor field. Several coverage measures exist, such as breach, support, exposure and detection. This paper is interested in the maximal breach path (MBP), which characterizes a penetrating intruder's safety and corresponds to the worst-case coverage. Knowing the MBP, network designers can improve the coverage of the network and maximize the effectiveness of the system. Moreover, the MBP provides the fundamental background for developing intrusion detection and border surveillance applications. Therefore, this paper presents an innovative polynomial-time algorithm, named DBS, for computing the MBP of a given sensor network. The simulation results show that the aforementioned algorithm significantly outperforms an existing one, Megerian's, in terms of both computation time and resource consumption.
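As a rough illustration of the concept (not the DBS algorithm from the paper), the maximal breach path on a grid discretization of the field can be computed with a bottleneck variant of Dijkstra that maximizes the minimum distance to the nearest sensor along the path. The grid size and sensor list below are made up for the sketch.
import heapq, math

def maximal_breach(grid_n, sensors):
    # clearance(i, j) = distance from cell (i, j) to the nearest sensor
    def clearance(i, j):
        return min(math.hypot(i - x, j - y) for x, y in sensors)
    best = [[-1.0] * grid_n for _ in range(grid_n)]
    heap = []
    for i in range(grid_n):            # the intruder may enter anywhere on the left edge
        c = clearance(i, 0)
        best[i][0] = c
        heapq.heappush(heap, (-c, i, 0))
    while heap:
        negb, i, j = heapq.heappop(heap)
        b = -negb
        if j == grid_n - 1:            # reached the right edge: b is the breach value
            return b
        if b < best[i][j]:
            continue                   # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid_n and 0 <= nj < grid_n:
                nb = min(b, clearance(ni, nj))
                if nb > best[ni][nj]:
                    best[ni][nj] = nb
                    heapq.heappush(heap, (-nb, ni, nj))

print(maximal_breach(20, [(5, 7), (12, 3), (9, 15)]))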
|
Specific growth inhibition by acetate of an Escherichia coli strain expressing Era-dE, a dominant negative Era mutant. Escherichia coli Era is a GTP binding protein and essential for cell growth. We have previously reported that an Era mutant, designated Era-dE, causes a dominant negative effect on growth and the loss of the ability to utilize TCA cycle metabolites as carbon sources when overproduced. To investigate the role of Era, gene expression in cells overproducing Era-dE was examined by DNA microarray analysis. The expression of lipA and nadAB, which are involved in lipoic acid synthesis and NAD synthesis, respectively, was found to be reduced in the cells overproducing Era-dE. Lipoic acid and NAD are essential cofactors for the activities of the pyruvate dehydrogenase complex, the 2-oxoglutarate dehydrogenase complex, and the glycine cleavage enzyme complex. The expression of numerous genes involved in dissimilatory carbon metabolism and carbon source transport was increased. This set of genes partially overlaps with the set of genes controlled by cAMP-CAP in E. coli. Moreover, the growth defect caused by Era-dE overproduction was specifically enhanced by acetate but not by TCA cycle metabolites, both in rich and synthetic media. The intracellular serine pool in Era-dE overproducing cells was found to be increased significantly compared to that of cells overproducing wild-type Era. It was further found that even wild-type E. coli cells not overproducing Era-dE became sensitive to acetate in the presence of serine in the medium. We propose that when Era-dE is overproduced, carbon fluxes to the TCA cycle and to C1 units become impaired, resulting in a higher cellular serine concentration. We demonstrated that such cells with a high serine concentration became sensitive to acetate; however, the reason for this acetate sensitivity is not known at present.
|
public class AreaOfCircle
{
    double r;

    double area(double r)
    {
        // "this" distinguishes the field r from the parameter of the same name
        this.r = r;
        double ar = Math.PI * r * r;
        return ar;
    }

    public static void main(String[] args)
    {
        // Create an object of the AreaOfCircle class
        AreaOfCircle a = new AreaOfCircle();
        System.out.println("Area =" + a.area(20.0));
    }
}
|
// Coordinate transform in a clockwise manner
CommonParameters::XY Util::rotateCoordCWR( const CommonParameters::XY& coord, const CommonParameters::XY& centerCoord, const double rotationAngle ){
const double vecXOrg = coord.X - centerCoord.X;
const double vecYOrg = coord.Y - centerCoord.Y;
const double vecX = vecXOrg * cos( rotationAngle ) - vecYOrg * sin( rotationAngle );
const double vecY = vecXOrg * sin( rotationAngle ) + vecYOrg * cos( rotationAngle );
CommonParameters::XY coordRotated = { vecX + centerCoord.X, vecY + centerCoord.Y };
return coordRotated;
}
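A quick numerical check of the transform, as a Python sketch of my own: note that with the usual mathematical convention a positive angle here turns counter-clockwise, so the "clockwise" naming presumably reflects the surrounding project's axis convention.
import math

def rotate(coord, center, angle):
    dx, dy = coord[0] - center[0], coord[1] - center[1]
    return (dx * math.cos(angle) - dy * math.sin(angle) + center[0],
            dx * math.sin(angle) + dy * math.cos(angle) + center[1])

# Rotating (1, 0) about the origin by 90 degrees lands on (0, 1)
x, y = rotate((1.0, 0.0), (0.0, 0.0), math.pi / 2)
assert abs(x) < 1e-12 and abs(y - 1.0) < 1e-12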
|
Case 3/2019 - Type IIB Tricuspid Atresia, in Natural Evolution, at 21 Years of Age DOI: 10.5935/abc.20190083 Clinical data: The patient remained asymptomatic from birth until 16 years of age, when he started to show progressive fatigue on exertion, managed with anti-congestive medication such as furosemide, enalapril, spironolactone, and carvedilol, in addition to warfarin. The diagnosis of heart disease, suggested by a heart murmur, was attained in the first month of life. Physical examination: good overall status, eupneic, acyanotic, normal pulses in the 4 limbs. Weight: 63 kg. Height: 171 cm. Right upper limb blood pressure: 130/90 mmHg. HR: 63 bpm. Sat O2: 89%. Precordium: the apex beat was palpable at the 6th left intercostal space in the anterior axillary line and diffuse, with systolic impulses at the left sternal border. Hyperphonetic heart sounds, with irregular splitting of the second heart sound. Moderate-intensity ejection systolic murmur at the left upper sternal border with systolic thrill, holosystolic murmur ++/4 at the lower sternal border and at the tip, and diastolic murmur ++/4. The liver was palpable 4 cm from the costal border and the lungs were clear. Clinical diagnosis: Type IIB tricuspid atresia with extensive septal defects and moderate infundibulo-pulmonary valve stenosis, mitral insufficiency, maintaining pulmonary hyperflow and high arterial saturation, undergoing natural evolution until adulthood. Clinical reasoning: There were clinical elements leading to a diagnosis of cyanogenic congenital heart disease with marked clinical repercussion and pulmonary hyperflow: tricuspid atresia or double LV inflow tract with mild to moderate pulmonary stenosis limiting pulmonary flow, in the presence of auscultation characteristic of associated pulmonary stenosis. The electrocardiogram emphasized LV overload, compatible with the above diagnoses. Echocardiogram and MRI highlighted the diagnostic elements of the defect. Differential diagnosis: Other cyanogenic heart diseases with pulmonary hyperflow and the same pathophysiological picture should be recalled, among them left atrioventricular valve atresia in the presence of a well-developed LV and any other heart disease accompanied by right ventricular hypoplasia. Clinical conduct: Taking into account the harmonized pulmonary and systemic flows over time, with no signs of hypoxemia and/or heart failure and in the presence of good physical tolerance, expectant clinical management was considered.
Comments: It is known that the different types of tricuspid atresia, whether with pulmonary flow limitation or not, have an unfavorable evolution, with signs of hypoxia or heart failure appearing as early as the first days of life and progressively worsening over the first months, until the end of the first year of life; hence the need for surgical intervention in this period. It can be affirmed that cases of tricuspid atresia with mild repercussion that remain asymptomatic until adulthood are rarely identified. 1 In this circumstance, they may not require early surgical intervention. Thus, it is important to emphasize that these patients require a stringent and thorough evaluation in order to determine the most correct conduct, whether expectant management or surgical intervention. This decision becomes even more difficult in adulthood, since the heart failure observed at a later period, with myocardial dilatation and hypertrophy, even with preserved cardiac function, is a parameter for an indefinite conduct, given the greater surgical risks in this age group. We did not find reports in the literature similar to the case described herein.
|
/**
* Default comparator. Used if a comparator isn't provided by the client.
* Compares only entry keys, which should implement the {@link Comparable} interface.
*/
private static class DefaultHolderComparator<K, V> implements Comparator<Holder<K, V>>, Serializable {
/** */
private static final long serialVersionUID = 0L;
/** {@inheritDoc} */
@SuppressWarnings("unchecked")
@Override public int compare(Holder<K, V> h1, Holder<K, V> h2) {
if (h1 == h2)
return 0;
EvictableEntry<K, V> e1 = h1.entry;
EvictableEntry<K, V> e2 = h2.entry;
int cmp = ((Comparable<K>)e1.getKey()).compareTo(e2.getKey());
return cmp == 0 ? Long.compare(abs(h1.order), abs(h2.order)) : cmp;
}
}
|
/** copies LP data with column matrix into LP solver */
SCIP_RETCODE SCIPlpiLoadColLP(
SCIP_LPI* lpi,
SCIP_OBJSEN objsen,
int ncols,
const SCIP_Real* obj,
const SCIP_Real* lb,
const SCIP_Real* ub,
char** colnames,
int nrows,
const SCIP_Real* lhs,
const SCIP_Real* rhs,
char** rownames,
int nnonz,
const int* beg,
const int* ind,
const SCIP_Real* val
)
{
#ifndef NDEBUG
{
int j;
for( j = 0; j < nnonz; j++ )
assert( val[j] != 0 );
}
#endif
SCIPdebugMessage("calling SCIPlpiLoadColLP()\n");
assert(lpi != NULL);
assert(lpi->clp != NULL);
assert(lhs != NULL);
assert(rhs != NULL);
assert(obj != NULL);
assert(lb != NULL);
assert(ub != NULL);
assert(beg != NULL);
assert(ind != NULL);
assert(val != NULL);
assert( nnonz > beg[ncols-1] );
invalidateSolution(lpi);
ClpSimplex* clp = lpi->clp;
int* mybeg = NULL;
SCIP_ALLOC( BMSallocMemoryArray(&mybeg, ncols + 1) );
BMScopyMemoryArray(mybeg, beg, ncols);
mybeg[ncols] = nnonz;
clp->loadProblem(ncols, nrows, mybeg, ind, val, lb, ub, obj, lhs, rhs);
BMSfreeMemoryArray( &mybeg );
clp->setOptimizationDirection(objsen);
if ( colnames || rownames )
{
std::vector<std::string> columnNames(ncols);
std::vector<std::string> rowNames(nrows);
if (colnames)
{
for (int j = 0; j < ncols; ++j)
columnNames[j].assign(colnames[j]);
}
if (rownames)
{
for (int i = 0; i < nrows; ++i)
rowNames[i].assign(rownames[i]);
}
clp->copyNames(rowNames, columnNames);
}
return SCIP_OKAY;
}
|
The executive asked to come up with a plan to revive Network Rail's fortunes has said she cannot rule out recommending privatisation.
In an interview with the BBC, Nicola Shaw said that a partial or total sell-off "was absolutely on the table; it can't not be."
Ms Shaw was drafted in by the government after Network Rail's upgrade plans fell apart last summer.
Work to electrify key lines had been dogged by delays and mounting costs.
Ministers paused two of the projects, in the Midlands and across the Pennines, and replaced the chairman, while a number of reviews are carried out.
Ms Shaw, who is the boss of Britain's only high speed line, HS1, was asked to come up with a report before the Budget next spring.
She said there were a whole range of issues that had to be considered and she was keen to hear what people had to say.
She said she would also recommend changes to the regulator if necessary.
"I don't believe there is one perfect answer. I think there is something that we'll go forward with for the next period of evolution of the railway. I don't think there has to be a big row. The challenge for me is how to bring people together. To find a way forward that people will support."
The future of Network Rail - which controls 2,500 stations as well as tracks, tunnels and level crossings - has been up in the air ever since the embarrassing admission last June that, just one year into a five year upgrade plan, the company had lost control of timetables and budgets.
Problems came to a head when the company was re-classified as a public sector body in September last year.
Overnight, it meant it could no longer borrow extra money from private sources to fill the funding gap.
I've been told by those close to the situation that the impact of those changes took everyone by surprise.
It also meant the company's £37.7bn debt moved onto the government's books.
One source suggested that before the change "ministers might turn a blind eye" to the extra costs, as long as the job was done. This is no longer possible.
Insiders also talked of a failure to check if they had sufficient numbers of qualified engineers to carry out the necessary work.
And they underestimated how difficult it would be to upgrade and electrify the Great Western Line, which dates from Victorian times, while running a service on it.
Three reviews are under way.
One, looking into what went wrong, is due in a few weeks.
Network Rail's new chairman Sir Peter Hendy is looking at what they can afford to upgrade and how long it will take. It is likely to be published in November.
Nicola Shaw's report into how to change the structure and financing of Network Rail is due in the spring.
Ms Shaw has a difficult job, navigating a wide range of views, including those of the unions and the new Labour leader, Jeremy Corbyn who wants to see the railways back in public hands.
"I am talking to unions, and to representatives of staff and to other members of different parties so I hope we have strong engagement because I think it matters."
|
Jenny Diski died yesterday. You might have discovered that fact if you happened to visit the London Review of Books, where Diski published essays, reviews, and blog posts for nearly twenty-five years. Or maybe, like me, you learned it on Twitter, where, hours before the obituaries arrived, old tweets of Diski’s, some of them years out of date, started swirling back into circulation. They joined a tumble of appreciative links and quotations, an accumulation whose size quickly disqualified the possibility of happy coincidence. This is how death announces itself now, at least for the artists who don’t rate a breaking-news alert on our phones: a surge of mentions on social media, a collective attempt to plug up the vacuum of absence with digital abundance. For a moment you think you’ve lucked into an outpouring of spontaneous enthusiasm. Finally! you tell yourself. We’re talking about her now! But then quickly enough the rational brain reasserts itself and begins working down the checklist: Are they handing out Nobels today? A genius grant, maybe? Was someone quoted by Beyoncé? No? Oh. Oh, no.
This momentary suspension of belief worked again on me yesterday, even though Diski’s death could hardly count as a surprise. She was not old—sixty-eight—but we’d known that her death was coming soon, because she’d been telling us so, in a series of remarkable essays in the LRB, for more than a year and a half. In the fall of 2014, Diski announced that she had an inoperable cancer in her lung. She’d written more than a dozen books, including novels, short stories, and travelogues, and her decision to chronicle her dying was simple. “I’m a writer. I’ve got cancer. Am I going to write about it? How am I not? I pretended for a moment that I might not, but knew I had to, because writing is what I do and now cancer is what I do, too.”
By the time of her first cancer essay, Diski was already used to writing about herself—“I start with me, and often enough end with me,” she once said—and she was already used to writing about herself in extremis. In 2009, she got fed up with a group of celebrities who had protested Roman Polanski’s arrest for a thirty-year-old rape. Particularly galling, she wrote, was the idea, suggested by the protesters’ petition, “of a thirteen-year-old consenting to have oral sex with a forty-four-year-old film director.” She then went on, with startling clarity, to describe her own experience of being raped, at the age of fourteen. In a similar fashion, a review of a memoir by a psychiatric patient became, in its way, a memoir of Diski’s own harrowing experience as a psychiatric patient decades ago.
Still, there is extreme, and then there is extinction. From the start, Diski recognized the difficulty of the task she’d set herself. “Can there possibly be anything new to add?” she wondered. “Isn’t the cliché of writing a cancer diary going to be compounded by the impossibility of writing in it anything other than what has already been written, over and over? Same story, same ending.”
A reasonable worry for most writers, but it turned out to be unnecessary in her case. It helped that Diski’s story was not the same story. In the narrative of her final months—the steroids, the Weetabix, the fentanyl patch—she chose to tell, for the first time, about living with Doris Lessing as a teenager, and about their complicated relationship thereafter. In large part, however, the appeal of Diski’s essays was the appeal of Diski herself. On the page she was brilliant, irritable, mordant, and humane. She could be hard on herself, and hard on others, but the hardness always seemed to have a point, as though anyone who hoped to reach even a tactical accommodation with what she once called “the adamantine way of the world” had to be prepared to match it rigor for rigor. Often hilarious—she was justly proud of answering her initial cancer diagnosis with a joke—she saw the slant in otherwise ordinary situations, and despised tendentiousness or cant of any sort.
In an illuminating profile last year, Giles Harvey wrote that Diski’s cancer diary made for a “marvel of steady and dispassionate self-revelation.” This is true, and yet for all her apparent equipoise, Diski also understood that an unflinching record of her final days meant that sometimes she’d have to show herself mid flinch. As she wrote in her final LRB essay, published in February:
I am scared of dissolution, of casting my particles to the wind, of having nothing to cast my particles to the wind with, of knowing nothing when knowing everything has been the taste every day, little by little, by knowing what little meant compared to a lot, compared to something or nothing.
“From a young girl on,” she wrote in another essay, “writing and being a writer was the only way I could think of to be, the only way to balance the down side of the seesaw.” She carried that lesson to the very end, seesaw and all. “Pretty strange to see myself ebullient about being alive,” she wrote on Twitter in January. “Not sure I believe a word of it.”
Robert P. Baird is The Paris Review’s editor at large.
|
import math
for _ in range(int(input())):
n=int(input())
i=n-1
x=n
y=math.ceil(n**0.5)
l=[]
while(i>=2):
if y<i:
l.append([i,x])
else:
prev=x
x=i
y=math.ceil(i**0.5)
l.append([prev,i])
l.append([prev,i])
i-=1
print(len(l))
for i in l:
print(*i)
|
Ag-Doped Halide Perovskite Nanocrystals for Tunable Band Structure and Efficient Charge Transport Heterovalent doping of halide perovskite nanocrystals (NCs), offering potential tunability in optical and electrical properties, remains a grand challenge. Here, we report for the first time a controlled doping of monovalent Ag+ into CsPbBr3 NCs via a facile room-temperature synthesis method. Our results suggest that Ag+ ions act as substitutional dopants to replace Pb2+ ions in the perovskite NCs, shifting the Fermi level down toward the valence band and in turn inducing a heavy p-type character. Field effect transistors fabricated with Ag+-doped CsPbBr3 NCs exhibit 3 orders of magnitude enhancement in hole mobility at room temperature, compared with undoped CsPbBr3 NCs. Low-temperature electrical studies further confirm the influence of Ag+ doping on the charge-carrier transport. This work demonstrates the tunability of heterovalent doping on the electrical properties of halide perovskite NCs, shedding light on their future applications in versatile optoelectronic devices.
|
Early Musical Impressions from Both Sides of the Loudspeaker A metaphoric image of the loudspeaker and its sides sums up the spatio-temporal ruptures that started shaping aural perception in the late 19th century: on one side, the listener; on the other, sound events conveyed by phonographic products, radio and various sound-recording devices. Diverse practices as well as samples of theoretical and aesthetic thinking from the early 20th century illustrate how new media have affected the musical imagination and listening in general.
|
def recalculate_checksums(self, flags=0):
    # Ask WinDivert to recompute this packet's checksums in place; on Python 2
    # the modified ctypes buffer must be copied back into self.raw.
buff, buff_ = self.__to_buffers()
num = windivert_dll.WinDivertHelperCalcChecksums(ctypes.byref(buff_), len(self.raw), flags)
if PY2:
self.raw = memoryview(buff)[:len(self.raw)]
return num
|
package com.fshows.fubei.biz.merchant.model.param;
import com.alibaba.fastjson.annotation.JSONField;
import com.fshows.fubei.foundation.model.BaseBizContentModel;
/**
* 获得服务器时间
*
* @author John (<EMAIL>)
* @version $Id ParamGetServerTime.java, v1.0 2019-06-11 15:26 John Exp$
*/
@SuppressWarnings("unused")
public class ParamGetServerTime extends BaseBizContentModel {
/**
* 服务器时间
*/
@JSONField(name = "time")
private String time;
public String getTime() {
return time;
}
public void setTime(String time) {
this.time = time;
}
}
|
Atmospheric-pressure plasma technology Major industrial plasma processes operating close to atmospheric pressure are discussed. Applications of thermal plasmas include electric arc furnaces and plasma torches for generation of powders, for spraying refractory materials, for cutting and welding and for destruction of hazardous waste. Other applications include miniature circuit breakers and electrical discharge machining. Non-equilibrium cold plasmas at atmospheric pressure are obtained in corona discharges used in electrostatic precipitators and in dielectric-barrier discharges used for generation of ozone, for pollution control and for surface treatment. More recent applications include UV excimer lamps, mercury-free fluorescent lamps and flat plasma displays.
|
Acne benefits from a series of treatments. The introduction of isotretinoin was a therapeutic breakthrough that considerably improved both the evolution and the prognosis of the disease. The indications of this retinoid have kept changing over the past twenty years. New clinical conditions have emerged, including the management of disease recurrences. Daily dosages must be selected according to the type of acne, the gender of the patient, and the pharmacoeconomic implications. Teratogenicity must never be neglected, as it represents the most dreadful adverse event of the drug. A European Directive currently marks out the way this retinoid is to be used.
|
Oct. 20, 2018, 6:45 p.m.
Former Clippers general manager Elgin Baylor just arrived for the #LeBrome #LeBropener (I know he’s one of the best Lakers ever. Relax. It’s a joke).
|
Development of Highly Luminescent Water-Insoluble Carbon Dots by Using Calixpyrrole as the Carbon Precursor and Their Potential Application in Organic Solar Cells Carbon dots (CDs) are carbon-based fluorescent nanomaterials that are of interest in different research areas due to their low-cost production and low toxicity. Considering their unique photophysical properties, hydrophobic/amphiphilic CDs are powerful alternatives to metal-based quantum dots in LED and photovoltaic cell designs. On the other hand, CDs possess a considerably high amount of surface defects that give rise to two significant drawbacks: causing a decrease in quantum yield (QY), a crucial drawback that limits their utilization in LEDs, and affecting the efficiency of charge transfer, a significant factor that limits the use of CDs in photovoltaic cells. In this study, we synthesized highly luminescent, water-insoluble, slightly amphiphilic CDs by using a macrocyclic compound, calixpyrrole, for the first time in the literature. Calixpyrrole-derived CDs (CP-DOTs) were highly luminescent, with a QY of over 60% and a size of around 4–10 nm, with a graphitic structure. The high quantum yield of CP-DOTs indicated that they had a lower amount of surface defects. Furthermore, CP-DOTs were used as an additive in the active layer of organic solar cells (OSC). The photovoltaic parameters of OSCs improved upon addition of CDs. Our results indicated that calixpyrrole is an excellent carbon precursor to synthesize highly luminescent and water-insoluble carbon dots, and CDs derived from calixpyrrole are excellent candidates to improve optoelectronic devices.
|
/*****************************************************************************
* CDigitalAudio::VRELToVFRACT()
*****************************************************************************
* Translate between VREL and VFRACT, clamping if necessary.
*/
VFRACT CDigitalAudio::VRELToVFRACT(VREL vrVolume)
{
vrVolume /= 10;
if (vrVolume < MINDB * 10) vrVolume = MINDB * 10;
else if (vrVolume >= MAXDB * 10) vrVolume = MAXDB * 10;
return (::vfDbToVolume[vrVolume - MINDB * 10]);
}
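The lookup table itself is not shown. A plausible Python reconstruction follows, assuming VREL is in hundredths of a dB and VFRACT is a linear amplitude factor; both are assumptions, since the real definitions live elsewhere in the project.
import math

# Hypothetical reconstruction of the vfDbToVolume table.
MINDB, MAXDB = -100, 10                         # assumed clamp range in dB
table = [10 ** ((t / 10.0) / 20.0)              # t is tenths of a dB; amplitude = 10^(dB/20)
         for t in range(MINDB * 10, MAXDB * 10 + 1)]

def vrel_to_vfract(vr):
    t = vr // 10                                # hundredths of a dB -> tenths
    t = max(MINDB * 10, min(MAXDB * 10, t))     # clamp, as the C++ code does
    return table[t - MINDB * 10]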
|
/*
* GeniusReferralsLib
*
* This file was automatically generated by APIMATIC v2.0 ( https://apimatic.io ).
*/
package com.geniusreferrals.api.models;
import java.util.*;
public class ForceBonusesFormBuilder {
//the instance to build
private ForceBonusesForm forceBonusesForm;
/**
* Default constructor to initialize the instance
*/
public ForceBonusesFormBuilder() {
forceBonusesForm = new ForceBonusesForm();
}
/**
* The bonuses' wrapper
*/
public ForceBonusesFormBuilder bonus(ForceBonuses bonus) {
forceBonusesForm.setBonus(bonus);
return this;
}
/**
* Build the instance with the given values
*/
public ForceBonusesForm build() {
return forceBonusesForm;
}
}
|
def update_day_usage(self):
(
self._day_usage,
self._day_price,
) = self._retrieve_period_usage_with_retry(DAILY_TYPE)
|
//printOrders method's job is to show the current items for the current order
public static void printOrders(){
System.out.println("*********Current Order**********");
System.out.println("Current pizzas ordered: "+numPizzasOrdered);
System.out.println("Current Hoagies ordered: "+numHoagiesOrdered);
System.out.println("Current orders of Bread sticks ordered: "+numBreadSticksOrdered);
System.out.println("********************************");
}
|
A post-covid economy for health: from the great reset to build back differently
A return to a business as usual economy would be a fatal mistake, argues Ronald Labonté.
One prominent recovery plan has been branded by the World Economic Forum as the great reset (https://www.weforum.org/great-reset/). One of the initiative's proposals is to direct a small amount of the vast wealth held by private investors to businesses whose activities align with the sustainable development goals. Examples include green energy initiatives or companies pledging to hire more women executives, 1 as well as investments in health and education. Investors are told they can make a profit and still "help save the world." 1 The concept of green, socially responsible, and ethical investing is not new, but it has enjoyed a recent surge in portfolios. While an attractive plan on the surface, it does not address why private investors have accumulated such huge amounts of wealth. Moreover, a recent study found that over 70% of these portfolios are non-compliant with global climate change targets 2 and so will do little to reduce deaths from "heat domes" or fossil fuel pollution. The profit they generate, however, will increase wealth inequalities, indirectly worsening health inequities, 3 since such investing is mostly a prerogative of those who are already rich. A related argument advanced by the World Economic Forum's founder, Klaus Schwab, is stakeholder capitalism. Corporations' roles are (or should be) to serve not only their shareholders but also their "employees, customers, suppliers, local communities, and society at large." 4 There is little to fault the ethos that everyone should benefit from economic activities. In practice, however, critics worry that this stakeholder model would be little more than a mask covering up the structurally entrenched value maximising behaviours of transnational corporations and wealthy individuals. 5 The economy would reset as it was pre-pandemic, with little change in how its underlying reliance on constant growth and capital accumulation was imperilling global health. 6
Build back better?
Several of the world's advanced economies have taken slightly bolder steps in their post-pandemic plans to "build back better," the slogan adopted by the Biden administration's $3.5tn 10 year budget proposal for the US. 7 Originally proposed as a more ambitious (and costly) "green new deal" with hefty government investments in climate change and environmental, health, and social protection spending, 8 the plan was subsequently scaled back to $1.75tn to appease conservatives and those with links to fossil fuel. 9 Even that amount has yet to be affirmed by that country's "flawed democracy," with extreme social polarisation and low levels of trust in institutions and political parties. 10 If it is implemented, however, some consider that it will signal a "transformative shift" 11 that provides an advocacy base for more radical environmental measures. Similar arguments apply to the EU's next generation recovery fund and European green deal, which also face challenges from some right wing nationalist member states. 12 Central to both plans is promoting a circular economy in which there is a continuous recirculation of post-consumer materials so that there is "no such thing as waste." 13 This reduces the overall ecological footprint of economic activity, protecting land and water resources essential to people's health.
By reducing pollution it also minimises health risks, especially for those in low income countries, where much of the world's toxic waste eventually winds up. Governments could encourage a shift to a circular economy by making it a condition in procurement contracts, which account for a sizeable 12% of global gross domestic product (GDP). 14 Health enhancing social obligations could also be attached to such contracts; for example, gender equity, compliance with human rights obligations, or alignment with the SDGs. Both plans, and other versions proposed by several other countries, are likely to improve health outcomes, at least in the short term and for those countries with the tax and fiscal space to invest in them. But they face three implementation obstacles. The first is concern over governments' pandemic inflated debt. Fiscal hawks are again calling for austerity measures similar to those imposed after the 2008 financial crisis and which led to underfunded public health systems ill prepared for a pandemic. 33 The second is opposition by transnational corporations and wealthy individuals to tax increases needed to pay for government pandemic economic rescue packages, even though many of them benefited. The third is political willingness to reject the neoliberal model of capitalism that has dominated the past 40 years. Under this model governments' role in the economy has been largely confined to bailing out market failures.
KEY MESSAGES: As the global pandemic recedes, "greening" economic growth and improved employment and social protection measures remain central to a more health equitable future. Proposed reforms such as harnessing private capital for social impact investments or a stakeholder model of capitalism are likely to be insufficient. More far reaching change is needed, with governments shaping markets to ensure that economic activities achieve urgent social and environmental goals. A transformative shift to degrowth would avoid unsustainable and inequitable consumption of finite ecological resources and ensure human survival.
Build back differently: mission economies
Mariana Mazzucato, an internationally influential economist, argues that these barriers could be overcome if governments took on more forceful leadership in mobilising public and private partners to achieve important economic, social, and environmental goal oriented "missions." 15 Rather than responding to market failures, governments should use regulations and tax policies to shape markets towards democratically decided social and environmental outcomes, especially when companies benefit from government spending and infrastructure. Mazzucato chairs the World Health Organization's recently established Council on the Economics of Health for All. The council's first mission policy brief outlined a different approach to health innovation from the flawed government responses to covid-19 vaccines that led to gross inequities in access 16 and pharmaceutical profiteering. 17 In the case of vaccines, governments could (and should) have required technology sharing by companies as a condition of the public financing that supported vaccine research and manufacture. The council's second brief on health system financing goes further by invoking modern monetary theory. This posits that governments that have their own sovereign currency can never run out of money; they simply issue bonds to be held by their central banks.
18 Progressive and redistributive tax systems are still important, but modern monetary theory suggests that these are no longer the sole or even primary source of public financing for health, education, social protection, green growth, or climate mitigation programmes. As the economist Tim Jackson explains in an interview: "That fundamental insight gives us the space that we need to create monetary and fiscal policies that are flexible, that are coordinated and that give government the space to manoeuvre as we navigate these huge environmental and social challenges that are facing us, lifting the veil of the ideology that says the government cannot afford to spend in the well-being of its citizens." 19 To the extent that WHO has normative influence on its member states and civil society actors, the council's support of alternative economic models could help governments resist calls for post-pandemic austerity. There are limitations. Firstly, the trillions of new dollars created by high income countries to keep their pandemic economies afloat led to asset bubbles in financial markets and real estate. Historically, the bursting of such bubbles benefits those who are already wealthy and worsens health and living conditions for poor citizens. 6 Excess liquidity (money supply) also risks inflation, as is now being seen in rising food costs worldwide that will be hardest on the health of the poorest people. 20 Strong regulation of financial markets, targeted taxation to reduce inflation and speculative investing, and measures to restrain monopoly profiteering are all seen as companion policies in building an economy based on modern monetary theory. 34 Secondly, few low and middle income countries have sovereign reserve currencies, and most are dependent on borrowing from international lenders. This makes them particularly vulnerable to inflation, interest rate increases, and volatility in global financial markets, which risks increasing debt burdens and new imposition of austerity programmes that compromise the health and wellbeing of hundreds of millions still living in poverty. Some tax and financial policies must be reformed at global scale to prevent capital flight and redistribute wealth if all countries are to have the resources needed to improve the health of their populations. Finally, strengthening the state's role in disciplining the market's invisible (but inequitable) hand requires governments to be less beholden to business interests and more responsive to public interests. 15 Participatory forums and progressive social movement activism are essential in the clichéd but vital task of "holding governments to account." Challenging the class based power of elite groups requires political struggle, as seen in progressive protests in many countries worldwide, from Black Lives Matter to resurging activism throughout Latin America. Protecting the public space for such struggle is now especially urgent given the rise in autocratic regimes globally and the increased suppression of opposition civil society voices. 21
Towards an eco-just degrowth
Building back better, even if adopting a more fulsome mission economy approach and revitalised participatory politics, inevitably bumps up against the limits of our planetary ecosystem and a capitalist economy predicated on a continuous upward spiral of growth, (over) production, and (excess) consumption.
6 Consider the investment shift to electric vehicles, which has countries competing to produce as many or more of these as are in the fossil fuelled fleet. Vehicle generated greenhouse gas emissions will fall, but environmental damage arising from automobile manufacturing (including new emissions) and the extraction of rare metals needed for batteries will increase, 22 along with the exploitative conditions associated with their mining. 23 Structured global injustices remain, with wealthy nations continuing to inequitably consume and exhaust most of the world's natural resources, just as they did with covid-19 vaccines. To build back differently there has to be a major reduction in and redistribution of aggregate global consumption. This is not a new argument. A half century ago the Club of Rome published Limits to Growth, 25 foreshadowing how the aggressively marketed consumerism of wealthier countries was not environmentally sustainable. It was also patently unjust, resting on the centuries-old and ongoing exploitation of the natural and economic capital of poorer countries. 6 26 More recently terms such as degrowth and postgrowth have entered the policy lexicon, 27 with calls for a democratically led downscaling of material based production and consumption worldwide. Many in poorer nations will still need to increase their level of consumption, while those in wealthier nations can make do with considerably less with no sacrifice to (and more likely improvements in) life quality, happiness, and health. 28 This planned reduction in rich world material and energy consumption would be accompanied by growth, globally, in other desperately needed areas: social care (a low resource, caring economy), green technologies, and environmentally restorative forms of "decent work." 29 An equitable reduction in consumption by humanity's wealthiest decile is essential to create space for growth in countries where livelihoods need to rise if people are to sustain good health and achieve reasonable life expectancies. Fifty years on, the Club of Rome co-published an updated report, the 1.5-Degree Lifestyles. 30 The report contains detailed recommendations in support of its headline policies to achieve "a fair consumption space for all" (box 1).
Restoring, reforming, or transforming capitalism
Our post-covid world confronts the twinned crises of gross undershoots in our social domain (inequalities in wealth as the stellar example) and overshoots in our ecosystem domain (extreme weather and climate being the most obvious ones). 31 Human and planetary health both suffer. Restoring the capitalism that preceded the pandemic, even if in stakeholder rather than shareholder form, will do little to alter this trajectory. Commitments to build back better offer some important reforms but remain too little, too late, and too prone to political capture by elite group interests. Mission economies, if informed by a critical stance on power inequalities, afford more possibility for deeper reform without necessarily challenging the legitimacy of capitalism per se. However, they rest on the abilities of social movements and political actors to disrupt the recent rise in autocracy and to ensure more participatory governance models, from local to global scales. A more transformative pivot would be to advance the radical degrowth policies of redistribution and avid de-consumerism.
These policies draw inspiration from worker, producer, and consumer cooperatives that still do well in Europe, peasant movements worldwide, and the buen vivir commune based principles that pervade South American environmental activism. 32 Whatever economic model emerges, the pre-covid-19 version of rapacious capitalism is well past being fit for (human) purpose.

Contributors and sources: RL is distinguished research chair at the School of Epidemiology and Public Health at the University of Ottawa, and co-author of Health Equity in a Globalizing Era. He is active with the international People's Health Movement, and editor in chief of the BMC journal, Globalization and Health.

Competing interests: I have read and understood BMJ policy on declaration of interests and have no conflicts to declare.

Provenance and peer review: Commissioned; externally peer reviewed.

This article is part of a series commissioned for the Prince Mahidol Awards Conference (PMAC) in January 2022. Funding for the articles, including open access fees, was provided by PMAC. The BMJ commissioned, peer reviewed, edited, and made the decision to publish these articles. Rachael Hinton and Kamran Abbasi were the lead editors for The BMJ.

Ronald Labonté, professor, School of Epidemiology and Public Health, University of Ottawa, Ontario, Canada
Correspondence to: [email protected]

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
|
(The Noah Project) — Hello! I'm just hanging out at the shelter, watching the snow piles melt and following the birds as they flit from one feeder to another. Oh, how I love the thought of spring! Green grass, gentle breezes and a nice warm patch of sunshine to curl up in. Oh boy, I can hardly wait! The sooner the better, right?
This would sure be a great time of year for you to come and meet your new best friend (me). I'm Kaitlin, a really nice calico that's not too big, not too "talkative" and is just about the most purr-fect package of feline friendliness that you're gonna find anywhere. Plus, I think I have the most unusual markings. It's almost like I'm a half-and-half cat! Half calico and half "anybody's guess."
But, most of all, I'm a 100-percent good girl who gets along with other cats and loves to get cozy on a lap for a nice, long nap. Maybe you have a lap that is just my size? Maybe you could come and Meet Me @ Noah Project so I can give it a try? Please?
For more information about Kaitlin and all of her friends at The Noah Project, please visit www.noahproject.petfinder.com.
The Noah Project is located at 5205 Airline Road in Muskegon.
|
Keep your Firefox toolbar clutter-free by adding a vertical toolbar.
Do you like to keep your horizontal toolbar free of clutter, but still want quick access to print preview, sync, downloads, and other options? Try using a simple vertical toolbar.
The Firefox extension is called Vertical Toolbar, and you can get it here.
The toolbar displays on the left edge of the browser by default, but can be moved to the right edge as well. Right-clicking on either the horizontal or vertical toolbars brings up more options, including "Vertical Toolbar Options."
To add or remove Firefox buttons, click on the "Customize" button.
Now you can drag and drop items from the selection box over to the vertical toolbar.
That's it. If you want more functionality than what's available in the Vertical Toolbar, try the All-in-One Sidebar Firefox extension instead.
|
Hierarchical Digital Image Inpainting Using Wavelets

Inpainting is the technique of reconstructing unknown or damaged portions of an image in a visually plausible way. An inpainting algorithm automatically fills the damaged region in an image using the information available in the undamaged region. Propagation of structure and texture information becomes a challenge as the size of the damaged area increases. In this paper, a hierarchical inpainting algorithm using wavelets is proposed. The hierarchical method tries to keep the mask size smaller, while wavelets help in handling the high pass structure information and low pass texture information separately. The performance of the proposed algorithm is tested using different factors. The results of our algorithm are compared with existing methods such as interpolation, diffusion and exemplar techniques.

INTRODUCTION

The objective of image inpainting, or retouching, is to reconstruct missing or damaged portions of an image in an unnoticeable way, producing repaired images with satisfactory visual quality. The majority of image inpainting algorithms achieve such a task by using partial differential equations. The applications of inpainting include reconstruction of small damaged paintings, removal of superimposed text, removal of scratches due to folding of images, removal of an object in an image, etc. Many graphics editing softwares have incorporated a tool for inpainting. The difference between these tools and the inpainting algorithm which we follow is that the former requires the user to select both the region to be inpainted and the pattern that is to be filled in the unknown region, while the latter takes only the region to be inpainted as input from the user. The former therefore creates overhead for the user, who may not be sure of the exact pattern to be used for filling. The result of such tools will also have blocky effects when the original region is simply replaced with the new pattern. Moreover, the process becomes tedious when the user has to fill a large number of smaller areas.

The process of inpainting, as shown in Fig. 1, includes masking out the unknown region selected by the user, after which the inpainting technique is applied to fill the masked region. Many inpainting techniques have been proposed which retouch the image in an effective manner. Some of the existing techniques are interpolation, isophotic diffusion and exemplar based inpainting. Interpolation and diffusion based techniques work well for smaller areas but fail to reproduce texture properly; they also result in blurring of edges. The exemplar based method works well for larger areas but fails in proper reproduction of definite shapes, resulting in excessive propagation of texture and hence damaging the larger structures in the image. In this paper, the structure and texture information are separated and the coarser structures are handled first before moving to finer details. The multi resolution property of wavelets makes them desirable for use in the process of inpainting. As wavelets have the property of separating the low pass and high pass coefficients, they provide us the structure and texture information of the image. The inpainting algorithm is applied to the four subbands of the image formed after applying the wavelet transform. This gives a better reconstruction of the images.

RELATED WORK

Interpolation methods are the primitive methods which can be used for inpainting. In the interpolation method, the neighboring pixels are considered for filling the inpainting area.
The masked pixels are replaced with the average of the neighboring pixels. The technique gives better results for uniform areas and fails in highly structured regions. Bertalmio et al. pioneered a digital image inpainting algorithm based on partial differential equations (PDEs), describing an algorithm where the isophotes are extended or prolonged inwards from the boundary of the mask region. The structure of the image on the boundary of the mask region is extended inward. Anisotropic diffusion is applied, where the gradient vector is computed and rotated by π/2 radians to obtain the direction of the isophote lines. Though the algorithm works well for small textured images, it fails in large textured images. Other methods involving partial differential equations and concentrating on smaller structures have also been reported. Texture synthesis is another way of synthesizing the missing area, and texture synthesis based inpainting has been studied extensively. Criminisi et al. have presented an exemplar based technique for filling image regions, derived from patch based texture synthesis. In their algorithm, both structure and texture information is propagated into the mask region. The algorithm works by taking a patch around a pixel on the boundary and replacing it with the best patch found by searching in the source region. Comparisons of structure based and texture based inpainting, as well as hierarchical TV inpainting, are also available in the literature.

PROPOSED METHOD

The inpainting problem is depicted in Figure 2 to illustrate the notations involved in image inpainting. Initially, the algorithm allows the user to select the target region Ω, which is to be inpainted. The region to be inpainted is also called the mask, represented as the gray area in the figure. The region surrounding Ω, which forms a boundary between the target region and the other regions, is denoted as ∂Ω. The source region Φ is the region of the image which is not to be inpainted; this represents the white portion of the image except the masked region Ω. The inpainting algorithm fills Ω starting from ∂Ω using the information in Φ. It updates ∂Ω while filling, which in turn shrinks Ω.

[Figure 1: Input image → User selection → Masked image → Inpainting algorithm → Inpainted image]

The exemplar method uses patches on ∂Ω involving some unknown pixels of Ω and some known pixels of Φ, as shown in Figure 3. For a pixel p on the boundary, the patch Ψp is formed with p as the centre. In the figure, the inner red square represents the pixel on the boundary and the outer green square represents the patch. A patch that is similar to the known pixels is searched for in the entire image, i.e., a patch that closely matches the known pixels of Ψp is searched for in Φ. The patch that yields the minimum SSD (sum of squared differences) value is taken as the best match. This is called the exemplar patch, Ψq. The values corresponding to the unknown pixels are copied from the best matched patch; that is, the unknown pixels of Ψp are copied from the corresponding locations of Ψq.

Figure 3. Patch based exemplar method.

A higher number of known pixels in the patch Ψp increases the confidence of accuracy. If the pixel p lies on an edge touching ∂Ω, copying the pixels leads to copying of edges and hence avoids blurring of edges. If any other patch near p is considered first for restoration, edges (structure) will not be extended correctly. Hence the pixels on the boundary and the pixels near the edges have to be given a higher priority in the restoration process.
These priorities are termed the confidence and structure terms, respectively. The patch priority is computed as the product of the confidence term and the structure term. The patch with the maximum priority value is restored first. Once the highest priority patch is filled, there will be a change in the confidence values and the boundary. The confidence term of that particular patch is updated as the sum of all the confidence values of the newly filled pixels divided by the total number of pixels in the patch. The new boundary is detected and the process is repeated until all the patches are filled and the number of boundary pixels becomes zero.

In this paper, the exemplar based method is adopted in a hierarchical multi resolution approach. Filling the coarser details first and then the finer details tends to improve the accuracy. Moreover, handling the structures separately ensures the structure continuity in the image.

Figure 2. Inpainting problem.

Wavelet transforms are well known for their multi resolution property. The scaled and translated basis functions of the wavelet transform are given by

φ_{j,m,n}(x, y) = 2^{j/2} φ(2^j x − m, 2^j y − n)
ψ^i_{j,m,n}(x, y) = 2^{j/2} ψ^i(2^j x − m, 2^j y − n), i ∈ {H, V, D}

For an image f of size M × N, the wavelet coefficients at any particular level j can be obtained through

W_φ(j0, m, n) = (1/√(MN)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) φ_{j0,m,n}(x, y)
W^i_ψ(j, m, n) = (1/√(MN)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) ψ^i_{j,m,n}(x, y)

Hence applying the discrete wavelet transform (DWT) to an image decomposes the image into four sub bands LL, HL, LH and HH of reduced spatial resolution, as shown in Figure 4. Here LL corresponds to the low pass band (approximation coefficients) and majorly contains the texture information. The remaining three bands correspond to the high pass bands (detail coefficients) and contain the structure information. The resolution of the LL band can be further reduced by repeatedly applying the wavelet transform. At each level a certain amount of fine detail is shed off the LL band, leaving the coarser details. A wavelet decomposed image after applying the wavelet transform thrice is shown in Figure 5. At level 3 the LL band contains the coarser texture information and HL, LH and HH contain the coarser structure information. Inpainting at the lowest resolution ensures the filling of coarser details first. Inpainting each sub band separately ensures the handling of texture and structure information separately. Once all the sub bands in a particular level are filled, they are combined to form the LL band of the next higher resolution through the inverse wavelet transform,

f(x, y) = (1/√(MN)) [ Σ_m Σ_n W_φ(j0, m, n) φ_{j0,m,n}(x, y) + Σ_{i=H,V,D} Σ_{j≥j0} Σ_m Σ_n W^i_ψ(j, m, n) ψ^i_{j,m,n}(x, y) ]

The process is repeated in the higher resolution levels until the original image size is reached. The algorithm proceeds as follows:

1. Decompose the masked image using the wavelet transform down to the chosen lowest resolution level.
2. Identify the mask pixels, called target pixels, in the decomposed image.
3. Compute the confidence term for all target pixels. Initially the confidence value is assigned as 0 for the target region (Ω) and 1 for the source region (Φ). The confidence value for a patch Ψp is calculated as C(p) = ( Σ_{q ∈ Ψp ∩ Φ} C(q) ) / |Ψp|, where C(q) is the confidence value of those pixels belonging to the source region and the patch Ψp, and the denominator is the cardinality of the patch.
4. Compute the structure term for all target pixels. The structure term ensures the continuity of the structure in an image. The structure term for any high pass band is calculated as S(p) = ( Σ_{q ∈ Ψp ∩ Φ} e(q) ) / |Ψp|, where e(q) specifies the coefficient values of those pixels belonging to the source region and the patch Ψp. The structure term for the LL band is calculated as the average value of the three high pass bands.
5. Compute the patch priorities P(p) = C(p) · S(p), where C(p) is the confidence term and S(p) is the structure term.
6. Find the patch Ψp with the maximum priority, i.e., p = arg max_p P(p).
7. Find the best matching patch Ψq in the source region that minimizes d(Ψp, Ψq), where d is the sum of squared differences (SSD).
8. Copy the image data from Ψq to Ψp for all pixels belonging to the target region.
9. Update C(p) and S(p) for the newly filled pixels.
10. Repeat steps 3 to 9 until all target pixels in each sub band in the current level are filled.
11. Reconstruction to the next level: apply the inverse wavelet transform and construct the LL band of the next higher resolution level from the four sub bands of the current resolution.
12. Mapping of mask pixels: as the target pixels in the lower resolution level are filled, after reconstruction a few target pixels in the higher level will also be filled. A mapping between the pixels in the higher and lower levels has to be made before filling the current level.
13. Steps 3 to 12 are continued until the original image is inpainted.
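For concreteness, the following is a minimal, illustrative Python sketch of the single-level case, assuming NumPy and PyWavelets (pywt) are available. It is a simplification rather than the authors' implementation: the structure term is approximated by the mean absolute known coefficient in the patch, the exemplar search is a brute-force SSD scan (viable only for toy image sizes), and the image is assumed to have even dimensions so the subband mask can be obtained by simple downsampling.

import numpy as np
import pywt

P = 2  # patch radius; the full window is (2P+1) x (2P+1)

def _patch(a, y, x):
    """Return a (2P+1)x(2P+1) view of `a` centred on (y, x)."""
    return a[y - P:y + P + 1, x - P:x + P + 1]

def fill_band(band, mask):
    """Greedy, priority-driven exemplar fill of one wavelet subband."""
    band, mask = band.astype(float).copy(), mask.copy()
    h, w = band.shape
    while mask.any():
        known = ~mask
        # Step 1: pick the fill-front pixel with the highest priority P(p) = C(p) * S(p).
        best, best_pri = None, -1.0
        for y in range(P, h - P):
            for x in range(P, w - P):
                if not mask[y, x]:
                    continue
                k = _patch(known, y, x)
                if not k.any():
                    continue  # patch has no known pixels yet
                conf = k.mean()                                # confidence term C(p)
                struct = np.abs(_patch(band, y, x)[k]).mean()  # simplified structure term S(p)
                if conf * struct >= best_pri:
                    best_pri, best = conf * struct, (y, x)
        if best is None:
            # Only pixels too close to the border remain; fall back to a mean fill.
            band[mask] = band[known].mean() if known.any() else 0.0
            break
        y, x = best
        k, tgt = _patch(known, y, x), _patch(band, y, x)
        # Step 2: brute-force SSD search for the best fully known exemplar patch.
        best_ssd, src = np.inf, None
        for yy in range(P, h - P):
            for xx in range(P, w - P):
                if _patch(known, yy, xx).all():
                    ssd = ((_patch(band, yy, xx) - tgt)[k] ** 2).sum()
                    if ssd < best_ssd:
                        best_ssd, src = ssd, _patch(band, yy, xx)
        if src is None:
            band[y, x] = tgt[k].mean()   # no exemplar available: local average
            mask[y, x] = False
            continue
        # Step 3: copy exemplar data into the unknown pixels of the target patch.
        tgt[~k] = src[~k]
        _patch(mask, y, x)[~k] = False
    return band

def wavelet_inpaint(img, mask, wavelet="haar"):
    """One-level hierarchical inpainting: fill each subband, then reconstruct."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    m = mask[::2, ::2]  # subband-resolution mask (even-sized images assumed);
    # in practice this mask would be dilated to cover coefficient spill-over.
    filled = [fill_band(b, m) for b in (cA, cH, cV, cD)]
    return pywt.idwt2((filled[0], tuple(filled[1:])), wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
    image += 0.02 * rng.standard_normal(image.shape)
    damage = np.zeros(image.shape, dtype=bool)
    damage[12:18, 12:18] = True
    print(wavelet_inpaint(image, damage).shape)  # (32, 32)

Extending the sketch to the full hierarchy would wrap wavelet_inpaint in a loop over decomposition levels, filling the coarsest level first and remapping the mask after each inverse transform, as described in steps 11-13 above.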
EXPERIMENTAL RESULTS

Experiments have been conducted for various images with variable mask size and shape. The mask is chosen on uniform areas, high contrast areas, etc. The image is decomposed using Haar wavelets and the sub bands in the lowest resolution level are filled as explained in the previous section. Interpolation and diffusion are existing techniques which work well for smaller mask sizes, while exemplar based methods perform better for larger mask sizes. The proposed method is compared with these existing methods. Interpolation and diffusion techniques outperform other methods in speed when the mask is chosen from a uniform area. They also perform well for large numbers of smaller masks. However, their performance decays as the mask size increases, and they fail drastically for textured images. The results for textured low contrast and high contrast images are shown in Figure 6. From the results it can be seen that the hierarchical wavelet based inpainting performs better than the existing methods.

The selection of the decomposition level plays a crucial role in the quality of reconstruction. If a smaller value is chosen, the structure propagation is affected; if a larger value is chosen, minor distortions are introduced in the reconstructed area, as shown by the red circle in Figure 9(b). Figure 9(a) shows the result of proper reconstruction at a lower level. Since the coefficients in the transformed domain are copied into the inpainting area, the reconstructed image has varied brightness and contrast, which is proportional to the decomposition level. The decomposition level for proper reconstruction is found to be proportional to the size of the inpainting area.

Figure 9. (a) Image inpainted from the appropriate level. (b) Image inpainted with an excessive level; the inpainted area is marked with a red circle.

CONCLUSION

Digital image inpainting offers a digital technique for restoring a damaged image. The algorithm requires the user to specify the damaged portion manually. It generates the damaged portions using other portions of the same image; it cannot generate a portion which is not available in the undamaged portions. The majority of algorithms concentrate on images with smaller damaged portions, and the quality of performance drops as the mask size increases. Exemplar based methods perform well for larger mask sizes but fail in larger structure propagation.
The hierarchical method proposed in this paper tries to utilize the advantage of the exemplar based method while handling the structures separately through wavelets. The structure propagation is better when inpainting starts from the lower resolution level. Selection of the decomposition level depends on the mask size. Though this method produces better visual quality than the other methods, it changes the overall brightness and contrast of the image as the number of levels increases. The inpainting effort also increases, as the algorithm is applied to all the sub bands at various levels.
|
def clean_edge_lengths(self):
    # Copying is only legal when the root (the first node) has no ascendants.
    if self._nodes[0].numberOfAscendants() != 0:
        raise ValueError("this tree's root has ascendants: this is illegal and precludes copying")
    # Clear the branch length from every node to each of its descendants.
    for node in self._nodes:
        for son in node.descendants():
            node.set_branch_to(son, None)
|
/**
* <pre>
* Authorization Msg requests to execute. Each msg must implement Authorization interface
* The x/authz will try to find a grant matching (msg.signers[0], grantee, MsgTypeURL(msg))
* triple and validate it.
* </pre>
*
* <code>repeated .google.protobuf2.Any msgs = 2 [(.cosmos_proto.accepts_interface) = "sdk.Msg, authz.Authorization"];</code>
*/
public Builder clearMsgs() {
if (msgsBuilder_ == null) {
msgs_ = java.util.Collections.emptyList();
bitField0_ = (bitField0_ & ~0x00000001);
onChanged();
} else {
msgsBuilder_.clear();
}
return this;
}
|
// node_modules/react-styleguidist/lib/client/utils/getInfoFromHash.d.ts
/**
* Returns an object containing component/section name and, optionally, an example index
* from hash part or page URL:
* #!/Button → { targetName: 'Button' }
* #!/Button/1 → { targetName: 'Button', targetIndex: 1 }
*
* @param {string} hash
* @returns {object}
*/
export default function getInfoFromHash(hash: string): {
isolate?: boolean;
hashArray?: string[];
targetName?: string;
targetIndex?: number;
};
|
Microcapillary flow method to investigate emulsion droplet deformability

Emulsions are complex systems widely used in many relevant industrial applications, such as in the food, biomedical and petrochemical fields. They are metastable systems composed of oil, water and surfactants, and show complex structures at rest spanning different spatial scales and depending on composition and temperature. Although the phase behavior of emulsions at rest has been thoroughly investigated, the behavior of emulsions under flow still remains an open question. In particular, emulsion droplet deformation under microconfined conditions is of particular importance because of its applications in a wide range of processes, such as flow in porous media and transdermal drug delivery. In this work, a microfluidic device to investigate the effect of different oils as the emulsion continuous phase on droplet deformability is presented, by measuring in situ the interfacial tension of water-in-oil emulsions flowing in a microcapillary.
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright 2013 The Plaso Project Authors.
# Please see the AUTHORS file for details on individual authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This file contains an Outlook Registry parser."""
from plaso.events import windows_events
from plaso.parsers import winreg
from plaso.parsers.winreg_plugins import interface
__author__ = '<NAME> (<EMAIL>)'
class OutlookSearchMRUPlugin(interface.KeyPlugin):
"""Windows Registry plugin parsing Outlook Search MRU keys."""
NAME = 'winreg_outlook_mru'
DESCRIPTION = u'Parser for Microsoft Outlook search MRU Registry data.'
REG_KEYS = [
u'\\Software\\Microsoft\\Office\\15.0\\Outlook\\Search',
u'\\Software\\Microsoft\\Office\\14.0\\Outlook\\Search']
# TODO: The catalog for Office 2013 (15.0) contains binary values not
# dword values. Check if Office 2007 and 2010 have the same. Re-enable the
# plug-ins once confirmed and OutlookSearchMRUPlugin has been extended to
# handle the binary data or create a OutlookSearchCatalogMRUPlugin.
# Registry keys for:
# MS Outlook 2007 Search Catalog:
# '\\Software\\Microsoft\\Office\\12.0\\Outlook\\Catalog'
# MS Outlook 2010 Search Catalog:
# '\\Software\\Microsoft\\Office\\14.0\\Outlook\\Search\\Catalog'
# MS Outlook 2013 Search Catalog:
# '\\Software\\Microsoft\\Office\\15.0\\Outlook\\Search\\Catalog'
REG_TYPE = 'NTUSER'
def GetEntries(
self, parser_context, key=None, registry_type=None, **unused_kwargs):
"""Collect the values under Outlook and return event for each one.
Args:
parser_context: A parser context object (instance of ParserContext).
key: Optional Registry key (instance of winreg.WinRegKey).
The default is None.
registry_type: Optional Registry type string. The default is None.
"""
value_index = 0
for value in key.GetValues():
# Ignore the default value.
if not value.name:
continue
# Ignore any value that is empty or that does not contain an integer.
if not value.data or not value.DataIsInteger():
continue
# TODO: change this 32-bit integer into something meaningful, for now
# the value name is the most interesting part.
text_dict = {}
text_dict[value.name] = '0x{0:08x}'.format(value.data)
if value_index == 0:
timestamp = key.last_written_timestamp
else:
timestamp = 0
event_object = windows_events.WindowsRegistryEvent(
timestamp, key.path, text_dict, offset=key.offset,
registry_type=registry_type,
source_append=': PST Paths')
parser_context.ProduceEvent(event_object, plugin_name=self.NAME)
value_index += 1
winreg.WinRegistryParser.RegisterPlugin(OutlookSearchMRUPlugin)
|
DES MOINES, IOWA (The Borowitz Report)—Businessman Donald Trump's failure to insult fellow G.O.P. hopeful John Kasich a full twenty-four hours after the Ohio governor entered the 2016 Presidential race has sent Trump's poll numbers plummeting, as many supporters expressed a sudden loss of confidence in the real-estate mogul.
Trump's Kasich gaffe occurred at a campaign rally in Des Moines on Wednesday, when the former reality-show star admitted that he did not yet know enough about the Ohio governor to properly insult him.
"I could get up here and call Kasich a loser, because my gut tells me that's what he is, but you've come to expect something more special out of me," Trump said. "If you bear with me, I promise you that I'll come up with a world-class insult that we can all be proud of."
The audience reacted with stunned silence, leading some observers to question whether Trump's failure to insult Kasich would turn from a mere gaffe into a full-blown scandal.
Carol Foyler, a Trump supporter who attended the Des Moines event, said that she still liked Trump because of the insults he had delivered in the past, but she acknowledged that her belief in him had been shaken.
"When you're in the White House and that phone rings, you've got to be ready to insult someone right away," she said.
|
package json;

import java.net.MalformedURLException;
import java.net.URL;

/** Simple holder for a message and an associated link. */
public class JSONObject {
    private String message;
    private URL link;

    public JSONObject() { }

    public void setMessage(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }

    public void setLink(String link) {
        try {
            this.link = new URL(link);
        } catch (MalformedURLException e) {
            // Surface the bad input to the caller instead of terminating the JVM.
            throw new IllegalArgumentException("Malformed URL: " + link, e);
        }
    }

    public URL getLink() {
        return link;
    }
}
|
Blue piece: a web-based tool to provide screening and intervention services for parents of children with autism spectrum disorder

Early intervention has been associated with the best outcomes for children with autism spectrum disorder (ASD). However, in some cities in Mexico it is not easy to get access to early screening and intervention services due to socioeconomic disparities and lack of information. This paper describes Blue Piece, a web-based tool to provide screening and intervention services for parents of children with ASD. Blue Piece allows parents to get access to web-based screening tests and, depending on the test results, it provides information about the intervention services that are available near them. Blue Piece was designed following a user-centered approach. We close with plans for future work.
|
#!/usr/bin/env python
# coding: utf-8
# In[2]:
import pandas as pd
import numpy as np
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
import re
from gensim import utils
from gensim.models.doc2vec import LabeledSentence
from gensim.models import Doc2Vec
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
import string
# In[6]:
# pip install sklearn  (shell command from the notebook session; not valid Python syntax, so kept commented out)
# In[3]:
df=pd.read_csv('pd.csv',low_memory=False)
# In[18]:
d=df[['about','headline','location','content']][:15]
# In[19]:
d['about'][0]
# In[115]:
def remove_URL(text):
url = re.compile(r'https?://\S+|www\.\S+')
return url.sub(r'',text)
def remove_html(text):
html=re.compile(r'<.*?>')
return html.sub(r'',text)
# Reference : https://gist.github.com/slowkow/7a7f61f495e3dbb7e3d767f97bd7304b
def remove_emoji(text):
emoji_pattern = re.compile("["
u"\U0001F600-\U0001F64F" # emoticons
u"\U0001F300-\U0001F5FF" # symbols & pictographs
u"\U0001F680-\U0001F6FF" # transport & map symbols
u"\U0001F1E0-\U0001F1FF" # flags (iOS)
u"\U00002702-\U000027B0"
u"\U000024C2-\U0001F251"
"]+", flags=re.UNICODE)
return emoji_pattern.sub(r'', text)
def remove_punct(text):
table=str.maketrans('','',string.punctuation)
return text.translate(table)
# In[15]:
wn = nltk.WordNetLemmatizer()
def lemmatizer(text):
text = [wn.lemmatize(word) for word in text]
return text
def tokenization(text):
text = re.split('\W+', text)
return text
stopword = nltk.corpus.stopwords.words('english')
def remove_stopwords(text):
text = [word for word in text if word not in stopword]
return text
# In[56]:
import nltk
nltk.download('wordnet')
# In[79]:
d['content'].fillna('',inplace=True)
# In[76]:
d.info()
# In[88]:
d['about']= d['about'].apply(lambda x: remove_punct(x.lower()))
d['about']= d['about'].apply(lambda x: remove_emoji(x.lower()))
d['about']= d['about'].apply(lambda x: remove_html(x.lower()))
d['about']= d['about'].apply(lambda x: remove_URL(x.lower()))
d['tokenized'] = d['about'].apply(lambda x: tokenization(x.lower()))
d['No_stopwords'] = d['tokenized'].apply(lambda x: remove_stopwords(x))
d['lemmatized'] = d['No_stopwords'].apply(lambda x: lemmatizer(x))
# In[70]:
d['lemmatized'][0]
# In[86]:
d['content']= d['content'].apply(lambda x: remove_punct(x.lower()))
d['content']= d['content'].apply(lambda x: remove_emoji(x.lower()))
d['content']= d['content'].apply(lambda x: remove_html(x.lower()))
d['content']= d['content'].apply(lambda x: remove_URL(x.lower()))
d['tokenized'] = d['content'].apply(lambda x: tokenization(x.lower()))
d['No_stopwords'] = d['tokenized'].apply(lambda x: remove_stopwords(x))
d['lemmatized_'] = d['No_stopwords'].apply(lambda x: lemmatizer(x))
# In[16]:
corpus1 = ["A girl is styling her hair.", "A girl is brushing her hair."]
# The custom lemmatizer above expects a list of tokens, so tokenize each sentence first.
lemmatized_corpus1 = [lemmatizer(tokenization(x)) for x in corpus1]
# In[85]:
def jaccard_similarity(query, document):
intersection = set(query).intersection(set(document))
union = set(query).union(set(document))
return len(intersection)/len(union)
# In[17]:
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
lemmatizer = WordNetLemmatizer()
for x in corpus1:
    words = word_tokenize(x)
    # lemmatize() operates on a single word, so apply it token by token.
    lemmas = [lemmatizer.lemmatize(w) for w in words]
# In[100]:
jaccard_similarity(d['lemmatized'][0], d['lemmatized_'][8])
# In[ ]:
from sklearn.feature_extraction.text import TfidfVectorizer
# sentence pair
for c in range(len(corpus)):
corpus[c] = pre_process(corpus[c])
# creating vocabulary using uni-gram and bi-gram
tfidf_vectorizer = TfidfVectorizer(ngram_range=(1,2))
tfidf_vectorizer.fit(corpus)
feature_vectors = tfidf_vectorizer.transform(corpus)
# In[63]:
df['content'].fillna('',inplace=True)
# In[49]:
corpus[0]
# In[27]:
corpus[-1]
# In[116]:
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords
from unidecode import unidecode
import string
def pre_process(corpus):
# convert input corpus to lower case.
corpus = corpus.lower()
# collecting a list of stop words from nltk and punctuation form
# string class and create single array.
stopset = stopwords.words('english') + list(string.punctuation)
    # remove emoji from the string.
    corpus = remove_emoji(corpus)
    # tokenize the corpus and drop the stop words and punctuation collected above.
    corpus = " ".join([i for i in word_tokenize(corpus) if i not in stopset])
# remove non-ascii characters
corpus = unidecode(corpus)
return corpus
# In[117]:
corpus[2]
# In[40]:
from gensim.scripts.glove2word2vec import glove2word2vec
glove_input_file = 'glove.6B/glove.6B.50d.txt'
word2vec_output_file = 'word2vec.txt'
glove2word2vec(glove_input_file, word2vec_output_file)
from gensim.models import KeyedVectors
# load the Stanford GloVe model
filename = 'word2vec.txt'
word_emb_model = KeyedVectors.load_word2vec_format(filename, binary=False)
# In[44]:
from collections import Counter
import itertools
def map_word_frequency(document):
return Counter(itertools.chain(*document))
def get_sif_feature_vectors(sentence1, sentence2, word_emb_model=word_emb_model):
sentence1 = [token for token in sentence1.split() if token in word_emb_model.wv.vocab]
sentence2 = [token for token in sentence2.split() if token in word_emb_model.wv.vocab]
word_counts = map_word_frequency((sentence1 + sentence2))
embedding_size = 50 # size of vectore in word embeddings
a = 0.001
sentence_set=[]
for sentence in [sentence1, sentence2]:
vs = np.zeros(embedding_size)
sentence_length = len(sentence)
for word in sentence:
a_value = a / (a + word_counts[word]) # smooth inverse frequency, SIF
vs = np.add(vs, np.multiply(a_value, word_emb_model.wv[word])) # vs += sif * word_vector
vs = np.divide(vs, sentence_length) # weighted average
sentence_set.append(vs)
return sentence_set
# In[45]:
from sklearn.metrics.pairwise import cosine_similarity
def get_cosine_similarity(feature_vec_1, feature_vec_2):
return cosine_similarity(feature_vec_1.reshape(1, -1), feature_vec_2.reshape(1, -1))[0][0]
# In[90]:
ss=get_sif_feature_vectors(corpus[2], corpus[-1], word_emb_model=word_emb_model)
# In[106]:
corpus[2]==''
# In[50]:
df['about'].fillna('',inplace=True)
# In[54]:
df['headline'].fillna('',inplace=True)
info=[]
for i in range(df.shape[0]):
    # join with a space so the last word of 'about' does not fuse with 'headline'
    a = df['about'][i] + ' ' + df['headline'][i]
info.append(a)
# In[59]:
df['info']=pd.Series(np.array(info))
# In[133]:
relevance=[]
for i in range(df.shape[0]):
sentence_1=df['info'][i]
sentence_2=df['content'][i]
sentence_1= pre_process(sentence_1)
sentence_2= pre_process(sentence_2)
sentence_set=get_sif_feature_vectors(sentence_1, sentence_2, word_emb_model=word_emb_model)
if np.isnan(sentence_set[0]).any():
similarity=0
elif np.isnan(sentence_set[1]).any():
similarity=0
else:
similarity=get_cosine_similarity(sentence_set[0], sentence_set[1])
relevance.append(similarity)
df['relevance_score']=pd.Series(np.array(relevance))
# In[134]:
df[['info','content','relevance_score']]
|
// server/src/test/java/io/spine/examples/kanban/server/card/CardTest.java
/*
* Copyright 2021, TeamDev. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Redistribution and use in source and/or binary forms, with or without
* modification, must retain the above copyright notice and the following
* disclaimer.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package io.spine.examples.kanban.server.card;
import io.spine.examples.kanban.Card;
import io.spine.examples.kanban.event.CardCreated;
import io.spine.examples.kanban.server.KanbanContextTest;
import io.spine.testing.server.EventSubject;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;
@DisplayName("Card logic should")
class CardTest extends KanbanContextTest {
/**
* Creates a new board for placing cards.
*/
@BeforeEach
void setupBoard() {
context().receivesCommand(createBoard());
}
/**
* Verifies that a command to create a card generates corresponding event.
*/
@Nested
@DisplayName("create new card")
class Creation {
@BeforeEach
void setupCard() {
context().receivesCommand(createCard());
}
@Test
@DisplayName("generating `CardCreated` event")
void event() {
EventSubject assertEvents =
context().assertEvents()
.withType(CardCreated.class);
assertEvents.hasSize(1);
CardCreated expected = CardCreated
.newBuilder()
.setBoard(board())
.setCard(card())
// We call `buildPartial()` instead of `vBuild()` to be able to omit
// the `name` and `description` fields that are `required` in the event.
.buildPartial();
assertEvents.message(0)
.ignoringFields(3 /* name */, 4 /* description */)
.isEqualTo(expected);
}
@Test
@DisplayName("as entity with the `Card` state")
void entity() {
context().assertEntityWithState(card(), Card.class)
.exists();
}
}
}
|
Effect of Greater cardamom (Amomum subulatum Roxb.) on blood lipids, fibrinolysis and total antioxidant status in patients with ischemic heart disease

Surendra Kumar Verma, Vartika Jain, Dharm Pal Singh
1 Department of Medicine, RNT Medical College, Udaipur-313001, Rajasthan, India
2 Department of Botany, Mohanlal Sukhadia University, Udaipur-313001, Rajasthan, India
Asian Pacific Journal of Tropical Disease S739-S743

Introduction
Spices have been consumed in many cultures over centuries. They were primarily consumed because of their taste and aroma. However, recent scientific studies have proved their biological activities beyond their taste and smell. Spices are now known to possess anti-thrombotic, anti-atherosclerotic, hypolipidemic, hypoglycemic, hypotensive, anti-inflammatory, anti-arthritic and platelet aggregation inhibition activities. A few of the spices also possess adaptogenic properties against physical stress. Interestingly, they do have antioxidant components. In this context, Garlic, Onion, Ginger, Curcumin, Small cardamom, etc. have been extensively studied. Greater cardamom or Large cardamom (Amomum subulatum Roxb.), a member of the Zingiberaceae family, is a well known flavoring spice, used to treat various ailments in different medical systems world over. In Ayurveda, it is commonly used for dyspepsia, nausea, cough, vomiting and itching. It is also used as a preventive as well as curative for throat troubles, lung congestion, mouth infections and digestive disorders. Its seeds are cardiac tonic, expectorant, appetizer and diuretic. Sarkar, after observing its ethnomedicinal properties from the Ra`rh civilization, described Greater cardamom for its cardiovascular beneficial properties. He recommended that one teaspoonful of cardamom powder including seeds and pericarp, if taken twice a day, will benefit patients with heart disease. However, its beneficial properties are enhanced if dietary alterations and some yoga postures are practiced along with it. A recent report also documented its cardio-adaptogenic property against physical stress in an animal experimental study. In view of the ethnomedicinal recommendations and its cardiotonic, antioxidant and antistress properties, the present placebo controlled study was carried out to evaluate the effect of Greater cardamom on some of the cardiac risk parameters in patients with ischemic heart disease.

Abstract
Objective: Greater cardamom (Amomum subulatum Roxb.) fruit powder (seeds with pericarp) was evaluated for its effect on some of the cardiovascular risk factors in patients with ischemic heart disease. Methods: Thirty male individuals (50-70 years) with ischemic heart disease (old MI > 6 months) were selected for the study and divided into two groups of fifteen each. Group I (Treated) received 3 g cardamom powder in two divided doses while Group II (Placebo) received matched placebo capsules for 12 weeks. Blood samples were collected initially and at 6 and 12 weeks for analysis of lipid profile, fibrinolytic activity and total antioxidant status. Results: Administration of Greater cardamom significantly (P<0.001) reduced atherogenic lipids without significant alteration in HDL-cholesterol. Plasma fibrinolytic activity and serum total antioxidant status were also enhanced significantly (P<0.05) at the end of the study.
The placebo group however did not show significant alteration in any of these parameters. The treatment was tolerated well without any untoward effects. Conclusions: Dietary supplementation of Greater cardamom favorably modifies lipid profile and significantly enhances fibrinolytic activity and total antioxidant status in patients with ischemic heart disease.

Materials and methods
Fruits of Amomum subulatum were collected from the local market. The fruits were identified and authenticated by Prof. S. S. Katewa at the Department of Botany, Mohanlal Sukhadia University, where a voucher specimen (no. EA-623) was kept for future reference. The fruits were ground well along with their outer shells to make a fine homogenous powder, which was filled in gelatin capsules. Each capsule contained 0.75 g of the cardamom powder. Matched placebo was prepared by filling the capsules with lactose powder.

Study protocol
After approval from the institutional ethical committee, the study was conducted on 30 male, non-obese (BMI<24) individuals with ischemic heart disease (IHD) between the ages of 50 to 70 years. It was a single blinded, placebo controlled study in accordance with the guidelines of the Declaration of Helsinki and Tokyo, 2004. The study subjects were selected from the medical Out Patient Department of Maharana Bhopal General Hospital attached to RNT Medical College, Udaipur. All the patients selected had established coronary artery disease (healed MI > 6 months), were stable in their symptoms and were receiving isosorbide 5-mononitrate and aspirin. Patients with hypertension, diabetes, renal and endocrine diseases were not included in the study. Similarly, patients who were smokers, alcoholics, on lipid lowering drugs, dietary restrictions or a weight reduction program were excluded from this study. After obtaining written consent, they were randomly divided into two groups of 15 each. Group I (treated group) received 3 g of cardamom powder in two divided doses while Group II (placebo group) received matched placebo for a period of 12 weeks. The dose of cardamom was decided based on the ethnomedicinal recommendations. During the entire study period the patients were not allowed to take any medication without prior consultation, except isosorbide 5-mononitrate and aspirin. Also, they were not allowed to alter the dietary and exercise schedule which they had been following for the six months preceding the study period.

Blood chemistry
Blood samples were collected in a fasting state, initially and at the end of the 6th and 12th week, for the analysis of fibrinolytic activity, fibrinogen, lipid profile and total antioxidant status by the methods described earlier.

Statistical analysis
All the data were expressed as mean ± SE. Results were statistically analyzed with Student's t-test, and a P value less than 0.05 was considered a significant difference.

Results
Administration of Greater cardamom in a dose of 1.5 g twice daily did not alter any lipid fraction at the end of six weeks. The reduction in all the atherogenic lipid fractions was, however, significant (P<0.001) at the end of 12 weeks, without significant alteration in HDL-cholesterol (Table 1). This favorable alteration in blood lipids led to a significant (P<0.01) decrease in atherogenic index (Figure 1) and improvement in the ratio between HDL-C and LDL-C (Figure 2).

Table 1. Effect of Greater cardamom (3 g) on lipid profile in patients with ischemic heart disease. Values are expressed as mean ± SE; P is compared to initial; a: P<0.05; b: NS (not significant).
Plasma fibrinolytic activity was also increased significantly (P<0.05), along with a significant rise in serum total antioxidant status, at the end of 12 weeks, without causing significant changes in fibrinogen levels (Tables 2 and 3). The placebo group, on the other hand, did not show any significant alteration in any of these parameters (Tables 1-3).

Discussion
Greater cardamom significantly reduced total cholesterol (10.78%), triglycerides and VLDL-C (10.55%), and LDL-C (14.90%) without significant effect on HDL-cholesterol. This favorable change was good enough to decrease the ratio of TC/HDL-C to a significant extent (P<0.01), which is detrimental to atherogenesis and therefore aptly called the atherogenic index. Along with the decrease in atherogenic index, there was also a significant (P<0.01) improvement in the ratio between HDL-C and LDL-C. The modest decrease of 12% in atherogenic lipids is in accordance with the pattern of hypolipidemic activity demonstrated by other plant products and spices. However, it is worth noting that in spite of significant and favorable alterations in lipid profile by Greater cardamom, the values of total cholesterol, triglycerides and LDL-cholesterol were still in the higher range and undesirable for patients with IHD. If the present dose, which is 60 mg/kg, is increased further for a longer duration of time, it might be possible to further reduce the atherogenic lipids; as has been observed recently in an animal experimental study in which 100 mg/kg chloroform:methanol (50:50) extract of Greater cardamom seeds, given for a period of 4 months, demonstrated a significant decrease in atherogenic lipids and lipid peroxidation along with an increase in HDL-C, glutathione and catalase activities in cholesterol fed rabbits.

It was interesting to note that plasma fibrinolytic activity was also significantly (P<0.05) increased, by 36%, at the end of 12 weeks. The placebo group, however, did not show any significant alteration in fibrinolytic activity. On the other hand, fibrinogen, an independent risk factor for cardiovascular disease, was not favorably modified by administration of Greater cardamom in patients with IHD. Interestingly, serum total antioxidant status was significantly (P<0.05) increased by 21% after 12 weeks of Greater cardamom administration.

Greater cardamom mediated hypolipidemic activity along with significant enhancement of fibrinolysis needs further attention. Lipid and fibrin deposition are the two important components of atheroma formation. The dynamic equilibrium between fibrin deposition and its clearance by fibrinolytic activity determines the healthy status of the coronary arteries. On the contrary, if fibrin is not removed properly by the body's own clearing system, then its organization and fatty deposition on the artery involved will result in atheroma formation. It is interesting that most of the spices used in oriental dishes have been demonstrated to have fibrinolysis enhancing properties in healthy individuals and in patients with IHD and hypertension. Greater cardamom is a further addition to this list. Further, the evidence for dietary antioxidants in the prevention of diseases has been escalating, and in this context, the antioxidant effect of Greater cardamom is a further addition to its cardio-beneficial properties.
In a nutshell, the combination of its hypolipidemic, fibrinolysis enhancing and antioxidant improving properties may prove favorable in patients with athero-thrombotic coronary artery disease. On average, seeds yield 2.5% volatile oil, which, when the 0.18% volatile oil of the pericarp is included, gives a total yield of 2.68% oil. In the present study, both pericarp and seeds of Greater cardamom were incorporated, as recommended in ethnomedicine, which may have its basis in increasing the total concentration of 1,8-cineole, an important cardio-beneficial compound. This incorporation, which increased the total concentration of 1,8-cineole (>73%), might have resulted in its significant hypolipidemic activity, which was not observed with Small cardamom containing less than 40% of 1,8-cineole. Cardiovascular effects of 1,8-cineole, a monoterpenic oxide, have been evaluated in various experimental studies, and it has been demonstrated to possess vascular relaxant, anti-inflammatory and antioxidant properties. The other major components isolated from Greater cardamom are cardamonin and alpinetin, which have also shown significant anti-inflammatory, vasodilatory and platelet aggregation inhibitory activities in various animal studies. These compounds might be responsible for the observed hypolipidemic and fibrinolysis enhancing activities of Greater cardamom in the present study. Seeds also possess antioxidant activity, as studied on hepatic and cardiac antioxidant enzymes, glutathione content and lipid conjugated dienes in rats fed a high fat diet, and in vitro DPPH radical scavenging activity. The antioxidant activity was attributed to their ability to activate antioxidant enzymes that catalyze the reduction of antioxidants. It is therefore clear that cardamom contains components which enhance TAS. Moreover, in the present study, not only the cardamom seeds but also the pericarp was incorporated, containing flavonoids and tannins which also possess antioxidant activities.

The present study therefore suggests that long term dietary supplementation of Greater cardamom favorably alters lipid profile and significantly enhances fibrinolytic activity and total antioxidant status in patients with IHD. It is a safe, well tolerated dietary functional food without any untoward side effects. Furthermore, in view of its stress adaptogenic property, it may prove to be beneficial as a dietary supplement for patients with coronary artery disease. However, further large scale, placebo controlled studies are warranted.

Conflict of interest statement
We declare that we have no conflict of interest.
|
def overlaps(group):
    """Return which geometries in `group` intersect the punched cycle's union.

    Assumes `group` is a GeoDataFrame-like object with a `subgraph` column and
    that `cg` (the cycle-graph helper used elsewhere in this module) is in scope.
    """
    cycle_to_punch = group.subgraph.iloc[0]
    subgraph = cg.create_node_subgraph(cycle_to_punch)
    # Union of the subgraph's faces for the first node of the cycle.
    union, _ = subgraph.compute_intersection(cycle_to_punch[0])
    # Boolean series: geometries in the group touching the merged footprint.
    intersection = group.intersects(union.unary_union)
    return intersection
|
Influence of Deposition Conditions on the Characteristics of Luminescent Silicon Carbonitride Thin Films

The influence of the substrate temperature and argon gas flow on the compositional, structural, optical, and light emission properties of amorphous hydrogenated silicon carbonitride (a-SiCxNy:H) thin films was studied. Thin films were fabricated using electron cyclotron resonance plasma enhanced chemical vapor deposition (ECR PECVD) at a range of substrate temperatures from 120 to 170°C (corresponding to deposition temperatures of 300 to 450°C) in a mixture of SiH4, N2, and CH4 precursors. Variable angle spectroscopic ellipsometry (VASE), elastic recoil detection (ERD), and Rutherford backscattering spectrometry (RBS) verified optical bandgap widening, layer densification, and an increase of the refractive index at higher substrate temperatures. The microstructure of the a-SiCxNy:Hz thin films was determined by X-ray photoelectron spectroscopy (XPS) and Fourier transform infrared (FTIR) spectroscopy. The substrate temperature strongly affected the binding state of all atoms, in particular carbon atoms attached to silicon and nitrogen, as well as hydrogen-terminated bonds. We correlated the films' microstructural changes to a higher mobility of the species arriving on the growing layer at higher temperatures. Photoluminescence (PL) measurements showed that the total intensity of visible light emission increased. A systematic blueshift of the centroid of the wide PL peak was observed, following the increase of the optical gap.

© The Author(s) 2018. Published by ECS. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 License (CC BY, http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse of the work in any medium, provided the original work is properly cited.

Silicon carbonitride (SiCxNy) structures have attracted interest for the manufacturing of materials with robust mechanical properties 1 and promising optical features. 2 This is a consequence of their unique properties inherited from the combined properties of the binary substructures, silicon carbide (SiC), silicon nitride (SiN), and carbonitride (CN). On the one hand, the durability and hardness of SiN structures cannot compete with those of carbon-based hard counterparts such as diamond-like carbon (DLC), CN, and SiC materials. 3 On the other hand, carbon-based films do not offer the required properties for current optical designs, 4 while SiN thin films have been considered as the basis for several optoelectronic devices. 5 The protective properties of SiCxNy materials appear as an intermediate matrix to meet the demands in hard coating technology and, concurrently, the tunability of SiCxNy's electrical and optical properties raises its interest in the field of photonics. SiCxNy structures have appeal to researchers for wear-resistant 6 and corrosion-resistant coatings, 7 and recently, for silicon-based anode materials in lithium ion batteries. 8 In addition, SiCxNy structures are used for gate dielectrics in thin film transistors 9 and diffusion barriers. 10 They appear to broaden the optical parameter space required for the application of ultraviolet (UV) detectors 11,12 and the passivation layer in the third generation "all-silicon" tandem solar cells. 13,14 To this end, Chen et al. reported a direct bandgap of 3.8 eV for c-Si2N4C 15 and Azam et al.
16 investigated the tunability of the bandgap depending on the dominant phase present in the SiCxNy layer. We previously showed that the visible photoluminescence (PL) emission from SiCxNy films is stronger than that of SiC and SiN 3 materials and that all optical properties (bandgap, transmittance, index of refraction, and light emission) can be controlled by adjusting the film composition. 2,17 Despite the advantages of SiCxNy films concerning both mechanical and optical properties, the study of this complex structure is not straightforward due to the presence of three elements (carbon, nitrogen, and silicon). Moreover, depending on the fabrication technique, a quaternary structure containing hydrogen may be produced. Plasma enhanced chemical vapor deposition (PECVD) is most widely applied for the deposition of amorphous hydrogenated silicon carbonitride (a-SiCxNy:H) through the decomposition of hydrocarbon gases (acetylene (C2H2) or methane (CH4)) and silane (SiH4). Hydrogen plays a significant role in structural and electronic properties; its content can be controlled via the growth conditions. In addition to the effect of hydrogen, optical properties are directly affected by the growth parameters. The influence of deposition temperature on the aging mechanism of PECVD-grown SiCxNy films was reviewed by Huber et al. 18 and Haacke et al. 19 The dependence of mechanical properties on the deposition temperature was investigated by Ctvrtlik et al. 20 in a-SiCxNy films grown using sputtering. Despite reports on promising mechanical properties of SiCxNy thin films, they have not yet been well explored optically. Understanding the interdependency of light emission properties and film composition and structure requires a comprehensive study. Previously, we explored the role of CH4 and N2 gas flow 2,21 and post-deposition thermal annealing on the PL emission, composition, and microstructure of a-SiCxNy:Hz 22 and suggested a luminescence model for this ternary material. 23 However, a full understanding of the influence of the deposition conditions on the luminescence properties of SiCxNy thin films is still lacking. To the best of the authors' knowledge, only one research group has examined the PL emission from a-SiCxNy thin films as a function of the deposition temperature, albeit without any discussion of the underlying mechanism and structural evolution. 24 In this contribution, we present the first in-depth analysis of the influence of deposition conditions on the visible luminescence from a-SiCxNy:Hz thin films deposited using the electron cyclotron resonance (ECR) PECVD technique. First, we discuss the influence of the substrate temperature on the growth process and, consequently, the induced changes in the film properties, including hydrogen concentration, film microstructure, and composition. We then link the evolution of these properties to the changes of visible light emission with varying substrate temperature. In addition, in plasma-assisted methods, inert gases such as argon (Ar) can be added to the plasma to enhance the ionization efficiency. We review the influence of Ar addition to the growth process and its consequences on the film composition and luminescence properties.

Experimental

Sample preparation.— a-SiCxNy:Hz thin films were fabricated using an ECR PECVD system, where reactant gases were fed into the main chamber for 30 minutes.
The system was designed to feed N2 and Ar gases into the plasma region and supply CH4 and SiH4 gases downstream from the discharge zone through a dispersion ring positioned out of the plasma region close to the substrate. With a fixed microwave power of 500 W, the stage temperature (denoted as the deposition temperature, Td) was varied from 300-450°C, in 50°C increments, corresponding to substrate temperatures (denoted as Ts) of 120, 137, 154, and 170°C, respectively. The temperature on the substrate surface was assumed to be similar to that at the back side of the sample stage, where the thermocouple was placed in contact with the substrate holder. During the deposition, the plasma can increase the surface temperature of the sample to about 150°C for a deposition time of 120 minutes. For the investigated samples of this work, the "plasma on" time was set to 30 minutes, which resulted in an increase of the substrate temperature by a few tens of degrees. 25 We have described further details of this deposition system elsewhere. 26

Prior to the deposition, the n-type (0.01-0.03 Ω·cm resistivity) silicon wafers were cleaned with buffered hydrofluoric acid (HF) for 60 s to remove the native oxide layer, while the vitreous carbon plates were cleaned using acetone followed by methanol, both using sonication for 10 min. To explore the influence of substrate temperature, four a-SiCxNy:Hz samples (SiCN-300, SiCN-350, SiCN-400, and SiCN-450) were grown using identical parameters except the deposition temperature, which was kept at 300, 350, 400, and 450°C, respectively. Gas flow rates of 5, 10, and 8 (±5%) sccm were used for 30% SiH4 diluted with Ar, 10% N2 diluted with Ar, and pure CH4, with the corresponding partial pressures of 0.42, 0.8, and 0.23 mTorr, respectively (sccm denotes cubic centimeters per minute at standard temperature and pressure). The samples are labeled accordingly in Table I. To investigate the effect of Ar gas flow, sample AR-SiCN-350 was fabricated using deposition parameters similar to those used for the SiCN-350 sample, except for an extra 5 sccm of Ar gas, which corresponds to a partial pressure of 0.4 mTorr.

Characterizations.— The composition of the as-deposited films was measured by Rutherford backscattering spectrometry (RBS) and elastic recoil detection (ERD) using 1.8 and 2 MeV 4He+ beams with silicon detectors located at 170° and 30° in Tandetron (CORNELL) geometry, respectively. RBS was employed to measure all constituent elements excluding hydrogen, and ERD measurements to determine the hydrogen content. We discussed the details of the combined RBS-ERD technique elsewhere. 20 Film chemical structure was investigated ex-situ by X-ray photoelectron spectroscopy (XPS) measurements carried out with a Kratos Axis Ultra spectrometer using a monochromatic Al Kα excitation source (15 mA, 1486 eV). The instrument work function was calibrated to give a binding energy (BE) of 83.96 eV for the Au 4f7/2 line of metallic gold. The spectrometer dispersion was adjusted to provide a BE of 932.62 eV for the Cu 2p3/2 line of metallic copper. The Kratos charge neutralizer system was used on all specimens. The surface of the XPS samples was cleaned with buffered hydrofluoric acid (BHF) for 30 s to remove the surface oxide layer and decontamination, as an alternative to Ar+ pre-sputter cleaning of insulating materials. 27
[27] IR transmission was measured at room temperature using a Bruker Vertex 80v vacuum Fourier-transform spectrometer with a DTGS/KBr detector and a KBr beam splitter in the mid-infrared spectral range (400 to 4000 cm^-1) at a resolution of 4 cm^-1; the OMNIC software was used to perform baseline subtraction and normalization to the film thickness. To avoid infrared (IR) absorption by air constituents, the pressure of the FTIR chamber was kept below 3 mTorr. The thickness, optical bandgap, and refractive index of the films were obtained by simulation of variable-angle spectroscopic ellipsometry (VASE) data using J. A. Woollam's CompleteEASE software package. VASE measurements (Ψ and Δ parameters) of the layers were obtained from the reflection spectra at multiple angles of incidence (55°, 60°, 65°, 70°, and 75°) in the UV-VIS-NIR range (300-1600 nm). The room-temperature PL spectra of the samples were measured in a wavelength range extending from the near-infrared (NIR, 1100 nm) to the UV (350 nm) using charge-coupled device (CCD) arrays and a 325 nm He-Cd laser (Eexc = 3.82 eV) with an optical power of 5 mW exciting an area of 2.3 mm^2. More details regarding the spectrometer and the system response used for the correction can be found in Reference 28. The PL data recorded as a function of wavelength were first corrected with the system response and then smoothed using a Savitzky-Golay function.

Compositional analyses.- The combined RBS-ERD results show that the substrate temperature strongly affects the film composition. Fig. 1 shows the variation of the concentration of all constituent elements of the a-SiCxNy:Hz samples (including hydrogen) as a function of the deposition temperature with the corresponding experimental uncertainties. The small uncertainties are related to the use of a glassy carbon (vitreous carbon) substrate, which has an RBS signal at lower energy than the light elements in the SiCxNy:Hz layer, in contrast to the commonly used silicon substrate.

Table I. The atomic concentrations of carbon, nitrogen, silicon, and hydrogen along with the mass density determined by combined RBS-ERD. The refractive index and thickness were determined using VASE. For all samples, a 30-minute deposition with a mixture of 5, 10, and 8 sccm of SiH4/Ar, N2/Ar, and CH4, respectively, was used.

The values of the atomic concentrations listed in Table I indicate that an increase of T_d from 300 to 400 °C (T_s from 120 to 154 °C) decreases the nitrogen concentration by about 25% of its initial value, leaves the silicon concentration virtually constant, and increases the carbon content by about 75%. At T_d = 450 °C (T_s = 170 °C), the silicon and nitrogen concentrations decrease drastically and the film becomes carbon rich. The hydrogen concentration decreases from 38 to 12 at.% when increasing T_d from 300 to 450 °C (T_s from 120 to 170 °C). The substrate temperature affects the migration of the species on the surface and their reactions.[29] The observed changes of the film composition can be explained by the increased mobility of the species arriving at the surface at higher substrate temperature and the consequent changes of the surface reactions, leading to a larger incorporation of carbon and fewer hydrocarbons into the growing film. As a result, carbon is the main element substituting hydrogen in the coating layer, while nitrogen slightly decreases.
The density of the samples is provided in Table I; it was calculated using the atomic concentrations obtained from the combined RBS-ERD measurements and the thickness given by VASE. The film density increases when T_d increases from 300 to 400 °C (T_s from 120 to 154 °C), which is related to the loss of hydrogen (and the subsequently thinner layers) and the incorporation of a significant amount of carbon into the film. In sample SiCN-450, in contrast to the continual hydrogen loss, the film density decreases to 1.85 g/cm^3 due to the abrupt decrease of the silicon content.

An increase in the substrate temperature leads to an increase of the Si-C and C-N bond densities and a decrease in the concentrations of C-C and Si-N bonds. In agreement with the RBS results (Compositional analyses subsection), the higher reactivity of CH4 at higher temperatures generates more free radicals to form carbon-related bonds such as C-N and Si-C. The decrease of the Si-N bond density can be expected from the lower nitrogen concentration in the film. The IR absorption spectra of samples grown at 300, 400, and 450 °C (corresponding to T_s = 120, 154, and 170 °C) are normalized to the sample grown at the lowest deposition temperature (T_d = 300 °C). The evolution of the film bond structure with varying T_d is particularly significant in the three regions of the IR spectrum shown in Fig. 3. The first considerable change is observed at a peak positioned at 1100 cm^-1 (Fig. 3a), with the second region of interest located at 1700 cm^-1 (Fig. 3b). Both can be assigned to carbon-nitrogen configurations.[22] In most cases, the overlap of the C-N and C=N absorption modes makes it difficult to distinguish them.[30] We attribute the enhancement of these two peaks to the formation of more C-N/C=N bonds at higher T_d, which was also observed by Tomasella et al.[31] Fig. 3c shows the spectra in a third region, between 1900 and 2200 cm^-1, which is an overlap of the Si-H stretching modes and the C≡N stretching vibration at around 2100-2200 cm^-1. A larger density of Si-H bonds is observed at lower substrate temperatures due to the higher hydrogen content of the films. The increase in the substrate temperature makes the contribution of the C≡N bond around 2200 cm^-1 significantly larger. The combination of the changes of the Si-H and C-N bonds with the variation of the substrate temperature leads to the observed shift to larger wavenumbers at higher temperatures. The evolution of the bonding configuration given by IR absorption can be explained by the crosslinking of C to N atoms due to the increase of the carbon content and the hydrogen loss at higher temperatures, in agreement with the XPS analysis discussed above (Fig. 2), which shows a larger density of carbon bonded to nitrogen at higher temperatures. Note that the changes of the Si-N and Si-C bonds were analyzed only using the XPS measurements; their IR results are disregarded due to the large overlap of their IR bands.

Optical properties.- A Cauchy model was employed for the fitting of the VASE data, describing the dispersion relation for the refractive index n as a function of the wavelength λ and the extinction coefficient k as an exponential absorption function:

n(λ) = A + B/λ^2 + C/λ^4,  k(λ) = α exp[β(E - E_b)],

where E is the photon energy. The fitting parameters A, B, and C are the Cauchy coefficients, α is the absorption amplitude, β is the exponent factor, and E_b is the band edge. Only the first three Cauchy coefficients were taken into account for our one-layer SiCxNy thin films [32] and the model of the silicon substrate was adopted.[33]
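For concreteness, the dispersion model above can be written as a short numerical sketch. The function and parameter names, and the use of E = 1.2398/λ for the photon energy (λ in micrometers, E in eV), are our own illustrative choices rather than the CompleteEASE implementation:

import numpy as np

def cauchy_n(lam_um, A, B, C):
    # Cauchy dispersion for the refractive index; lam_um in micrometers
    return A + B / lam_um**2 + C / lam_um**4

def urbach_k(lam_um, alpha, beta, e_band):
    # Exponential (Urbach-type) extinction tail; energies in eV
    energy = 1.2398 / lam_um  # photon energy for a wavelength in micrometers
    return alpha * np.exp(beta * (energy - e_band))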
We reported more details of the analysis of the optical constants of a-SiCxNy:Hz thin films previously.[22] Figs. 4a and 4b show typical spectra obtained from the VASE measurements at different ellipsometric angles together with the fitted data (Ψ and Δ parameters), from which the refractive index and thickness are obtained. Fig. 4c shows an example of the refractive index and extinction coefficient. Hydrogenated SiCxNy thin films are virtually transparent over the visible spectrum, which allows us to assume that the extinction coefficient k is equal to zero over most of the visible spectrum in the employed Cauchy model. The low value of the best-fit mean-squared error (MSE), about 4, validates the accuracy of the modeling.

Fig. 5 shows the variation of the optical bandgap along with the other parameters determined directly from the VASE simulations, i.e., the refractive index and the growth rate, as a function of the deposition temperature. Higher temperatures result in optical gap widening, an increase of the refractive index, and thinner layers. It is worth mentioning that although amorphous materials form a random network, some features associated with crystalline structures, such as an optical bandgap (E04), can still be observed owing to the short-range order of amorphous structures.[34] The values of the optical bandgap related to the localized states were determined from the absorption coefficient (deduced from the VASE simulations). The concept of a direct or indirect bandgap cannot be applied to amorphous structures because the lack of long-range order makes it impossible to define a Brillouin zone. The absorption edge in amorphous semiconductors can therefore be considered "non-direct" rather than direct or indirect as in crystalline structures. Experimentally, the optical gap of amorphous silicon-based materials is quantified using the absorption coefficient α obtained from the extinction coefficient (α = 4πk/λ). Various models exist to estimate the bandgap energy from the optical absorption coefficient.[35] We employed the optical bandgap E04, which is the energy at which α is equal to a standard value of 10^4 cm^-1.[36] E04 was found to be much closer to the effective mobility gap of silicon-based compounds given by theoretical suggestions [36] than previous definitions of the optical bandgap such as the Tauc energy gap.[37]

The values of the film thickness and refractive index listed in Table I show that the increase of the substrate temperature by 50 °C (T_d by 150 °C) results in a decrease of the growth rate from 126 to 96 nm/min due to the incorporation of less hydrogen into the film (24% film shrinkage). The refractive index, which is related to the polarization response of a material, increases with substrate temperature. According to the Lorentz-Lorenz equation, two correlated quantities, the density of the film and the chemical configurations, affect the polarization.[38] Higher temperature induces changes in both the film mass density and the microstructure through the larger density of carbon bonds in the SiCxNy:Hz layer as a direct result of the hydrogen loss and the higher carbon content. The values of the optical gap provided in Table II indicate an increase of 0.3 eV with an increase of the substrate temperature by 50 °C (T_d by 150 °C), which is a result of the competition between the hydrogen loss and the formation of carbon-related bonds.
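As an illustration of the E04 extraction described above, the following minimal sketch interpolates the photon energy at which the absorption coefficient crosses 10^4 cm^-1. The array names are hypothetical, and a monotonically rising absorption edge is assumed (np.interp requires an increasing abscissa):

import numpy as np

def absorption_coefficient(k, lam_cm):
    # alpha = 4*pi*k/lambda; with lambda in cm, alpha is in cm^-1
    return 4.0 * np.pi * k / lam_cm

def e04(energy_ev, alpha_cm):
    # photon energy at which alpha crosses 1e4 cm^-1, assuming alpha_cm
    # increases monotonically with energy_ev near the absorption edge
    return np.interp(1.0e4, alpha_cm, energy_ev)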
It is well known that a lower hydrogen content increases the density of localized mid-gap states in the band structure and, in turn, narrows the optical bandgap. On the other hand, the rearrangement of the film atomic structure, observed by XPS (Fig. 2) and FTIR (Fig. 3), induces a higher concentration of Si-C and C-N bonds, leading to optical bandgap widening. Apparently, the structural evolution has a more profound effect on the optical bandgap than the other contributor, the hydrogen loss.

Photoluminescence.- As discussed above, the increase of the growth temperature leads to layer densification and film shrinkage. The difference in the layer thickness of the samples grown at different temperatures makes it necessary to normalize the PL spectrum of each sample to the corresponding thickness. Fig. 6a shows the normalized PL emission profiles of the different SiCxNy:Hz thin films as a function of wavelength under excitation with a 325 nm laser source. Beyond ∼800 nm the signal is noisy due to the low signal level and the lower sensitivity of the CCD camera in this detection range. The intensity and the full width at half maximum (FWHM) of the PL profiles increase with higher substrate temperature due to the enhancement of the low-energy tail of the PL spectra. To quantitatively understand the changes of the overall luminescent color, the chromaticity coordinates were calculated from the PL emission spectra. Fig. 6b shows the chromaticity coordinates of the SiCxNy:Hz thin films, labelled with the corresponding deposition temperatures, in the CIE 1931 chromaticity diagram; the values are listed in Table II. With an increase of the substrate temperature by 50 °C (T_d by 150 °C), the emission color changes from orange to yellow, accompanied by an increase of the total emission power (determined by integration of the emission spectrum) by a factor of 6.2 (Fig. 6c). In general, all samples show very broad PL spectra covering the whole visible range. The energy of the centroid of the PL spectra increases as the bandgap increases at higher temperatures.

Ar gas.- Ar dilution significantly decreases the hydrogen content from 32 to 20 at.%, while the carbon content increases significantly (see Table I). In fact, the composition of the Ar-SiCN-350 sample is very close to that of the SiCN-400 sample grown without Ar dilution, except for the lower silicon and larger nitrogen concentrations in Ar-SiCN-350. The PL emission of Ar-SiCN-350 is illustrated in Fig. 7 and can be compared with that of SiCN-350 and SiCN-400 presented in Fig. 6a. The addition of Ar enhances the PL in the higher energy range, in analogy to the changes observed in the PL spectrum upon the 50 °C increase of T_d from SiCN-350 to SiCN-400.

Discussion

Substrate temperature.- The dissociation cross section of SiH4 is comparable with that of common carbon sources used in CVD techniques such as C2H2 and C2H6. The energy of formation of free radicals of more stable gases such as CH4 is higher than that of SiH4, causing the dissociation of CH4 to require higher thermal energy.[39] The combination of a substrate temperature of about 300 °C and the energetic reactants coming from the plasma more likely allows the breaking of Si-H bonds (3.6 eV) originating from SiH4 than of C-H bonds (4.3 eV) from CH4. With increasing substrate temperature, the increased number of CH4 free radicals becomes more chemically active with the SiH4 species.
[40] The compositional analyses verified that more carbon was incorporated into the growing layer as a result of the variation of the mobility of the species arriving at the sample surface and the resulting chemical reactions leading to layer formation. A decrease of hydrogen and nitrogen at higher substrate temperatures and a significant increase of carbon in the resultant films were observed. The reasons for the lower hydrogen content at higher temperatures are twofold. First, at such low substrate temperatures, the incomplete dissociation of CH4 causes some hydrogen atoms to remain bonded to carbon atoms in the chemical species reaching the growing surface, which incorporates more hydrogen into the film in the form of hydrocarbon bonds. Second, at higher temperatures, weak hydrogen-terminated bonds are less stable, leading to hydrogen desorption from the growing surface. Consequently, with the increase of temperature, carbon is the element substituting hydrogen in the layer, while nitrogen decreases. From a structural point of view, the IR absorption verified the increase of the representative C-N/C=N bands at 1100 and 1700 cm^-1 with higher deposition temperature. The increase of C-N and C=N bonds, despite the presence of a smaller amount of nitrogen in the film, can be explained by the changes of hydrogen cyanide (HCN) and CN in the gas phase at higher substrate temperatures. These two emission lines were found to be the major components in the gas mixture, which leads to the relatively low concentration of C-N bonds in CVD-grown thin films.[41] The higher reactivity of CH4 at higher substrate temperature yields less formation of hydrogen cyanide (HCN), providing the opportunity for the formation of C-N bonds in the film. The increase of Si-C bonds and the decrease of Si-N bonds in the films grown at higher substrate temperatures can also be correlated with the higher chemical activity of CH4 with SiH4 radicals to form Si-C-related bonds. This provides more attachment of carbon to silicon in the growing film, decreasing the opportunity for nitrogen atoms to bond to silicon. Moreover, the Si-C bonds increase through the connection of silicon and carbon dangling bonds generated by the hydrogen desorption on the surface of the film. Therefore, the higher reactivity of CH4 at higher temperatures not only promotes the incorporation of more carbon in the layer, but also reduces the amount of hydrogen-terminated bonds in the film, providing more cross-linking of carbon atoms in the resultant film. All the induced changes in the optical bandgap, layer density, and refractive index correlate with the described process of hydrogen loss, changes of film stoichiometry, and larger density of carbon-related configurations. We took the characterization results described above into account for the interpretation of the changes of the PL properties as a function of the substrate temperature. Huran et al.[24] reported a similar PL reduction with the deposition temperature, but no details on the underlying mechanism for the changes in both the shape and the intensity of the PL emission were discussed. In our investigated samples, the PL peak position and intensity follow the increase of the optical gap value. This indicates that the observed PL is due to the radiative recombination of carriers in localized states at the bandtails of the amorphous alloys.
Furthermore, because the luminescence is intense enough to be seen with the naked eye, the probability of non-radiative recombination at deep defects is small, owing to the very small mobility of carriers in the bandtails.[42] Fig. 7 shows that with an increase of T_d from 300 to 350 °C, the PL peaks at ∼550 nm and ∼675 nm become far more intense. A further increase of T_d from 350 to 400 °C results in a larger contribution of the PL peak around ∼675 nm, while the PL peak at ∼450 nm remains virtually unchanged. An even further increase of T_d to 450 °C induces more significant changes in the PL band positioned at ∼675 nm. The origin of this PL peak is associated with carbon-related structures,[23] and the systematic increase of this peak suggests an increase of the radiative carbon-related defects with the deposition temperature, in particular from 300 to 400 °C, in agreement with the observed increase of the carbon content given by the RBS analysis. As shown in Fig. 6b, the luminescence color changes from orange to yellow as the deposition temperature increases, in agreement with the increase of the relative intensity of the 675 nm emission. The increase of the PL band intensity at higher temperatures can be expected from the aforementioned structural and compositional discussions, where the presence of more carbon in the film leads to a larger density of carbon bonds (Si-C and C-N) and, consequently, an increase of carbon-related emissions. The PL peak at ∼500 nm arises from nitride-related defects,[23] and the decrease of this peak with increasing deposition temperature can be explained by the lower nitrogen content, as verified by XPS and RBS.

Ar gas.- The addition of Ar gas changes the plasma chemistry and the dissociation rate of the precursors, which consequently affects the film composition and the bonding states of the elements in the growing layer.[43] The active Ar species extracted from the plasma region dissociate CH4 (with its higher activation energy) more effectively, explaining the significant increase of the carbon content in the resultant film (see Table I). This is in agreement with studies indicating a higher conversion rate of CH4 with the addition of rare gases such as Ar.[44] Besides the higher ionization of the N2 gas in the plasma and of CH4 near the sample surface, the kinetic energy of the Ar ions can heat up the sample surface and influence the species reactions on the surface of the growing layer. It can be inferred that Ar dilution influences the film properties in analogy to an increase of the deposition temperature by 50 °C (T_s by about 15 °C). Both the SiCN-400 and Ar-SiCN-350 samples showed higher carbon and lower hydrogen contents compared to the SiCN-350 sample. The hydrogen loss with Ar dilution was previously reported by other researchers for carbon films [45] and is related to hydrogen desorption from the growing surface at higher temperatures. The only difference between SiCN-400 and Ar-SiCN-350 is the higher nitrogen and lower silicon content in Ar-SiCN-350. This can be explained by the high activation of the strongly bonded N2 in the plasma containing more Ar gas,[46] which makes the amount of available N2 species larger than that in the SiCN-400 sample grown with no Ar dilution. In SiCN-400, a decrease in the Si-N bond density (observed by XPS and FTIR) and in the nitrogen content was observed, due to the competition of the constant amount of N2 species with the increasing number of reactants generated from CH4.
In the Ar-SiCN-350 sample, the N2 radicals created in the plasma cone impacted the gas-phase chemical reactions and the species arriving at the surface, which ultimately leads to the incorporation of more nitrogen and less silicon in the film. With respect to the difference in the PL emission between the Ar-SiCN-350 and SiCN-350 samples, the former showed an enhancement at ∼500 nm, where the origin of the PL emission has been assigned to nitrogen-related structures in a-SiCxNy thin films,[23] consistent with the higher nitrogen content of Ar-SiCN-350.

Conclusions

We presented a comprehensive analysis of the influence of substrate temperature on the structure and visible luminescence emission of a-SiCxNy:Hz thin films deposited using the ECR PECVD technique. The hydrogen content decreased with the substrate temperature, i.e., from 38 to 12 at.% for temperatures ranging from 120 to 170 °C, corresponding to deposition temperatures of 300 to 450 °C. Carbon was the main element substituting hydrogen in the film, while nitrogen (and, slightly, silicon) decreased. XPS and FTIR analyses verified that the carbon binding states changed significantly through the formation of more Si-C and C-N/C=N bonds. The optical properties of the a-SiCxNy:Hz thin films (PL, refractive index, and optical bandgap) were discussed in terms of the evolution of the microstructure and film composition. An increase of 50 °C in the substrate temperature (corresponding to 150 °C in deposition temperature) reduced the areal density of the layers due to the formation of denser layers and hydrogen loss. The refractive index of the films was found to increase at higher temperatures, which is due to the hydrogen loss and the subsequently denser structure, along with the formation of carbon-related phases. With an increase of the substrate temperature, the optical bandgap widened due to the more profound effect of the structural changes competing with the hydrogen loss. The visible PL emission showed a systematic blueshift of the centroid of the wide peak, following the increase of the optical gap. These features indicate recombination between localized bandtail states of carbon-related structures. Ar dilution was found to enhance the PL emission in analogy to the enhancement observed with a 50 °C increase of the substrate temperature. However, Ar dilution was found to increase the contribution of the PL band originating from nitrogen-related structures compared to the samples grown without Ar, due to the increased nitrogen content, which coincided with the carbon enrichment. The foregoing discussion suggests that the loss of hydrogen and the higher carbon binding states are consequences of higher substrate temperature and Ar dilution, which opens opportunities for better control of the deposition conditions depending on the desired application.
|
Epidemiological Factors and Clinical Course of COVID-19 in Patients Who Died Following the Disease in Dedicated COVID Hospital, Rewa District, Madhya Pradesh - A Retrospective Study

BACKGROUND The clinical spectrum of SARS-CoV-2 infection encompasses asymptomatic infection, mild upper respiratory tract infection, and severe viral pneumonia with respiratory failure and even death. This study attempts to estimate the time interval from symptom onset to severity, the time taken for hospitalization, and the length of stay in hospital, along with the demographic and clinical characteristics of deceased patients infected with Covid-19.

METHODS This retrospective study was conducted in the SSMC-associated Dedicated Covid Hospital (DCH), Rewa district, India. Covid-19-positive deaths that occurred from May 2020 to January 2021 in this institute were considered for this study. Information regarding socio-demographic profile, systemic diseases / underlying medical conditions, signs and symptoms of the disease, clinical course, and investigations was collected and analysed. Time-duration variables included time from the initial symptom to breathlessness, time taken to seek treatment, delay in hospitalization, and length of stay in the hospital.

RESULTS Elderly males with 2 or more comorbid conditions were found to be at higher risk of mortality. The median duration from onset of the initial symptom to treatment seeking / hospitalization in the DCH was 5 days, while the mean duration from onset of initial symptoms to onset of breathlessness was 2 days 6 hrs. There was a delay of 3 days in hospitalization after experiencing breathlessness. 90% of patients had bilateral lung involvement at the time of admission. More than half of the patients had multiple organ involvement. Delay in hospitalization was positively correlated with disease severity at the time of admission and negatively correlated with length of stay in hospital.

CONCLUSIONS Delay in hospitalization was observed to be an important factor affecting the clinical course: disease severity increases and length of stay decreases with delayed presentation at the time of admission. This should be addressed with awareness-generation activities in the community and a self-assessment tool suitable for implementation in the general population.

KEYWORDS Covid 19, Covid Infection, Mortality, Time Delay, Length of Stay (LoS)
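As an illustration only, correlations of the kind reported above could be computed along the following lines. The data file, the column names, and the choice of Spearman's rank correlation are assumptions for the sketch, not details taken from the study:

import pandas as pd
from scipy.stats import spearmanr

# hypothetical per-patient records: one row per deceased patient
df = pd.read_csv("dch_covid_deaths.csv")

rho_sev, p_sev = spearmanr(df["delay_days"], df["severity_at_admission"])
rho_los, p_los = spearmanr(df["delay_days"], df["length_of_stay_days"])
print(f"delay vs. severity: rho = {rho_sev:.2f}, p = {p_sev:.3f}")
print(f"delay vs. length of stay: rho = {rho_los:.2f}, p = {p_los:.3f}")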
|
/* If a work_unit performed less work than estimated (say by unexpectedly */
/* finding a factor) then do not update the rolling average this period. */
void invalidateNextRollingAverageUpdate (void)
{
	IniWriteInt (LOCALINI_FILE, "RollingStartTime", 0);	/* a zeroed start time marks this period's data as invalid */
	adjust_rolling_average ();
}
|
Within any organization, whether it is a large financial institution or other national or international business entity, a non-business entity, a governmental entity, or some other entity, it is important to monitor and control which members of the organization have access to which of the organization's information and resources as well as the types of access granted to each member. For example, in a banking institution certain people should have access to customer account information while others should not. Of these people with access to customer account information, some should have both read and write access while others should only have read access.
Other examples of access to resources within an organization that may need to be closely monitored include such things as access to customer and employee confidential information, access to different software applications and profiles, access to areas of a building or other physical or virtual structures, and the like. The different access rights granted to members of an organization are generally referred to herein as “entitlements.”
Traditional techniques for monitoring and controlling the distribution of entitlements generally involve persons within the organization periodically reviewing the entitlements assigned to each individual member. Such traditional techniques pose significant problems, perhaps the most significant being that it takes a substantial amount of an organization's resources to individually monitor and manage the entitlements of each member of the organization.
Specifically, large organizations can have tens or even hundreds of thousands of employees and millions of potential entitlements that need to be managed. Furthermore, each member is typically assigned numerous entitlements and, as such, there may be many millions of different entitlement combinations existing within the organization at any one time. Managing so many combinations of entitlements can be a monumental, if not impossible, task using traditional entitlement management techniques. The distribution of entitlements within an organization, however, is so important for both operational and compliance reasons that it must be monitored and controlled.
Additional confusion results when members transition to new roles within the organization. These transitioning members often require new entitlements to be able to operate effectively in their new roles, but they also may need their old entitlements for some period of time after their transition. If not properly managed, a person that transitions within the organization several times may accumulate a long line of legacy entitlements from previous roles in the organization. Such legacy entitlements may not be useful to the person any longer and can, in fact, create security risks or compliance issues if not properly monitored. For example, certain internal or external rules and regulations may require that one person not have access to entitlement "A" and entitlement "B." If a person who required access to entitlement A transitions within the organization several times and ends up in a role where he requires access to entitlement B but still has access to entitlement A from his earlier role, the rules and regulations would be violated.
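To make this kind of rule concrete, a minimal sketch of such a separation-of-duties check is shown below. The data structures and names are purely illustrative and are not part of the system described here:

def find_sod_violations(user_entitlements, forbidden_pairs):
    """Return (user, a, b) tuples for users holding both halves of a forbidden pair."""
    violations = []
    for user, entitlements in user_entitlements.items():
        for a, b in forbidden_pairs:
            if a in entitlements and b in entitlements:
                violations.append((user, a, b))
    return violations

# Example: entitlements "A" and "B" must never be held by the same person.
users = {"alice": {"A", "C"}, "bob": {"A", "B"}}
print(find_sod_violations(users, [("A", "B")]))  # [('bob', 'A', 'B')]

A real entitlement management system would evaluate such rules continuously across all members, rather than only at periodic manual reviews.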
Confusion regarding the dissemination of entitlements may also arise any time a new system or technology is implemented, since the entitlement administrators may not be aware of who needs access to the new system or technology and who can have the old system or technology entitlements removed. Other costs may also arise out of poor management of entitlements. For example, where the entitlements include access to software, improper monitoring and control of entitlements can result in greater licensing payments being paid to the software provider than is necessary. More specifically, the organization may pay a periodic payment to the software provider for each member of the organization that has access to the software. If members of the organization have access to the software but do not use or need the software any longer due to a change in job function or a change in systems, then the organization can save money in licensing payments if it can recognize the existence of such legacy access to the software.
A good entitlement management system should also be able to anticipate which entitlements a new employee or person transitioning into a new role will need to perform their job effectively. Traditional systems cannot anticipate needs effectively since the people managing the entitlements usually do not have intimate knowledge regarding the new employee's job function and which entitlements are needed for that job function. Even the new employee or the person transitioning into the new role will usually not know which entitlements they need because they may not know which entitlements are available. For all these reasons, organizations desire more efficient and accurate systems for managing the distribution of entitlements.
|
#pragma once
#include <Logging/LogMacros.h>
DECLARE_LOG_CATEGORY_EXTERN( LogMissionSystem, Log, All );
|
#
# Copyright (c) 2020 by <NAME>, <NAME> and <NAME>. All Rights Reserved.
# We are not using a controller here because it's a very simple window.
#
from tkinter import *
class Launcherview:
def __init__(self, model):
self.model = model
return None
def createAndShowWindow(self):
self.window = Tk()
adminBtn = Button(self.window, text='Admin Mode', command=self.adminBtnClick)
adminBtn.pack()
studentBtn = Button(self.window, text="Student Mode", command=self.studentBtnClick)
studentBtn.pack()
resultBtn = Button(self.window, text='Results Mode', command=self.resultBtnClick)
resultBtn.pack()
self.window.mainloop()
return None
def adminBtnClick(self):
self.model.startAdminMode()
return None
def studentBtnClick(self):
self.model.startStudentMode()
return None
def resultBtnClick(self):
self.model.startResultMode()
return None
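# For completeness, a minimal sketch of driving this view with a stand-in
# model. The real model class is not shown in this file, so the stub below
# is an assumption, not the project's actual model.
class StubModel:
    def startAdminMode(self):
        print("admin mode requested")
    def startStudentMode(self):
        print("student mode requested")
    def startResultMode(self):
        print("result mode requested")

if __name__ == "__main__":
    Launcherview(StubModel()).createAndShowWindow()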
|
import numpy as np
import onnx
from NNet.converters.nnet2onnx import nnet2onnx
from NNet.converters.onnx2nnet import onnx2nnet
import onnxruntime
from NNet.python.nnet import *
### Options ###
nnetFile = "../nnet/TestNetwork.nnet"
testInput = np.array([1.0,1.0,1.0,100.0,1.0]).astype(np.float32)
##############
# Convert NNET to ONNX and save ONNX network to given file
# Adapt network weights and biases so that no input or output normalization is required to evaluate network
onnxFile = nnetFile[:-4]+"onnx"
nnet2onnx(nnetFile,onnxFile=onnxFile,normalizeNetwork=True)
# Convert ONNX back to NNET and save NNET network
# Note that unless input mins and maxes are specified, the minimum and maximum floating point values will be written
nnetFile2 = nnetFile[:-4]+"v2.nnet"
onnx2nnet(onnxFile,nnetFile=nnetFile2)
## Test that the networks are equivalent
# Load models
nnet = NNet(nnetFile)
sess = onnxruntime.InferenceSession(onnxFile)
nnet2 = NNet(nnetFile2)
# Evaluate ONNX
onnxInputName = sess.get_inputs()[0].name
onnxOutputName = sess.get_outputs()[0].name
onnxEval = sess.run([onnxOutputName],{onnxInputName: testInput})[0]
# Evaluate Original NNET
inBounds = np.all(testInput>=nnet.mins) and np.all(testInput<=nnet.maxes)
if not inBounds:
print("WARNING: Test input is outside input bounds defined in NNet header!")
print("Inputs are clipped before evaluation. so evaluations may differ")
print("Test Input: "+str(testInput))
print("Input Mins: "+str(nnet.mins))
print("Input Maxes: "+str(nnet.maxes))
print("")
nnetEval = nnet.evaluate_network(testInput)
# Evaluate New NNET
inBounds = np.all(testInput>=nnet2.mins) and np.all(testInput<=nnet2.maxes)
if not inBounds:
print("WARNING: Test input is outside input bounds defined in NNet header!")
print("Inputs are clipped before evaluation. so evaluations may differ")
print("Test Input: "+str(testInput))
print("Input Mins: "+str(nnet2.mins))
print("Input Maxes: "+str(nnet2.maxes))
print("")
nnetEval2 = nnet2.evaluate_network(testInput)
print("")
print("NNET Evaluation: "+str(nnetEval))
print("ONNX Evaluation: "+str(onnxEval))
print("NNET2 Evaluation: "+str(nnetEval2))
print("")
print("Percent Error of ONNX evaluation: %.8f%%" % (max(abs((nnetEval-onnxEval)/nnetEval))*100.0))
print("Percent Error of NNET2 evaluation: %.8f%%" % (max(abs((nnetEval-nnetEval2)/nnetEval))*100.0))
|
//
// UIButton+SUIAdditions.h
// SUIUtilsDemo
//
// Created by RandomSuio on 16/2/18.
// Copyright © 2016年 RandomSuio. All rights reserved.
//
#import <UIKit/UIKit.h>
@interface UIButton (SUIAdditions)
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* Normal
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - Normal
@property (nullable,nonatomic,copy) NSString *sui_normalTitle;
@property (nullable,nonatomic,copy) UIColor *sui_normalTitleColo;
@property (nullable,nonatomic,copy) UIImage *sui_normalImage;
@property (nullable,nonatomic,copy) UIImage *sui_normalBackgroundImage;
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* Highlighted
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - Highlighted
@property (nullable,nonatomic,copy) NSString *sui_highlightedTitle;
@property (nullable,nonatomic,copy) UIColor *sui_highlightedTitleColo;
@property (nullable,nonatomic,copy) UIImage *sui_highlightedImage;
@property (nullable,nonatomic,copy) UIImage *sui_highlightedBackgroundImage;
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* Selected
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - Selected
@property (nullable,nonatomic,copy) NSString *sui_selectedTitle;
@property (nullable,nonatomic,copy) UIColor *sui_selectedTitleColo;
@property (nullable,nonatomic,copy) UIImage *sui_selectedImage;
@property (nullable,nonatomic,copy) UIImage *sui_selectedBackgroundImage;
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* Disabled
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - Disabled
@property (nullable,nonatomic,copy) NSString *sui_disabledTitle;
@property (nullable,nonatomic,copy) UIColor *sui_disabledTitleColo;
@property (nullable,nonatomic,copy) UIImage *sui_disabledImage;
@property (nullable,nonatomic,copy) UIImage *sui_disabledBackgroundImage;
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* Padding & Insets
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - Padding & Insets
@property (nonatomic) CGFloat sui_padding; // left & right
@property (nonatomic) UIEdgeInsets sui_insets;
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* TintColor
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - TintColor
@property (nullable,nonatomic,copy) IBInspectable UIColor *sui_imageTintColor;
/**
* set text hex color
*/
@property (nullable,assign,nonatomic) IBInspectable NSString *sui_titleHexColor;
/*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o*
* Resizable
*o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~o~*/
#pragma mark - Resizable
@property (nonatomic) IBInspectable BOOL sui_resizableImage;
@property (nonatomic) IBInspectable BOOL sui_resizableBackground;
@end
|
Jet flow instability of an inviscid compressible fluid A linear analysis of the perturbations associated with the jet flow, $\overline{u}(y) = {\rm sech}\,y$, of an inviscid compressible fluid is considered. Some subsonic stable solutions, not associated with stability boundaries, are first determined. Then a subsonic neutral solution is found and used as an aid in determining stability boundaries of the symmetric and antisymmetric disturbance modes. Numerical methods are also used to determine instability characteristics, including the Reynolds stress distributions. Comparisons are made with previous results obtained for the hyperbolic-tangent velocity profile and with unstable characteristics of the Bickley jet.
|
/*
* This header is generated by classdump-dyld 1.0
* on Sunday, June 7, 2020 at 11:43:02 AM Mountain Standard Time
* Operating System: Version 13.4.5 (Build 17L562)
* Image Source: /System/Library/PrivateFrameworks/AppleMediaServices.framework/AppleMediaServices
* classdump-dyld is licensed under GPLv3, Copyright © 2013-2016 by <NAME>.
*/
@interface AMSMetricsDatabaseSchema : NSObject
+(id)databasePathForContainerId:(id)arg1 ;
+(BOOL)createOrUpdateSchemaUsingConnection:(id)arg1 ;
+(BOOL)removeDatabaseForContainerId:(id)arg1 ;
+(void)migrateVersion0to1WithMigration:(id)arg1 ;
+(void)migrateVersion1to2WithMigration:(id)arg1 ;
+(id)_containerURLForContainerId:(id)arg1 ;
+(void)_applyProtectionClassForDirectoryAtURL:(id)arg1 ;
+(BOOL)_addSkipBackupAttribute:(BOOL)arg1 forURL:(id)arg2 ;
@end
|
Hair Maintenance and Chemical Hair Product Usage as Barriers to Physical Activity in Childhood and Adulthood among African American Women

Qualitative studies have identified haircare practices as important culturally specific barriers to physical activity (PA) among Black/African American (AA) women, but quantitative investigations are lacking. Using the Study of Environment, Lifestyle and Fibroids data among 1558 Black/AA women, we investigated associations between hair product usage/hair maintenance behaviors and PA during childhood and adulthood. Participants reported childhood and current chemical relaxer and leave-in conditioner use. Self-reported PA included childhood recreational sports participation, leisure-time PA engagement during adulthood, and, at each life stage, minutes of and intensity of PA. Adjusting for socioeconomic and health characteristics, we used Poisson regression with robust variance to estimate prevalence ratios (PRs) and 95% confidence intervals (CIs) for each PA measure for more vs. less frequent hair product use/hair maintenance. Thirty-four percent reported ≥twice/year chemical relaxer use and 22% reported ≥once/week leave-in conditioner use at age 10 years, and neither was associated with PA at age 10 years. In adulthood, ≥twice/year chemical relaxer users (30%) were less likely (PR = 0.90) and ≥once/week leave-in conditioner users (24%) were more likely (PR = 1.09) to report intense PA compared to counterparts reporting rarely/never use. Hair product use/maintenance may influence PA among Black/AA women and impact cardiometabolic health disparities.

Introduction

Black or African-American (AA) adolescent girls and women have been consistently shown to have the highest prevalence of physical inactivity as well as obesity among United States (US) adolescent girls and women. Regarding physical activity (PA), the Physical Activity Guidelines Advisory Committee recommends 60 min/day of PA for adolescents and ≥150 min/week of moderate PA or ≥75 min of vigorous PA for adults. Recent data suggest that only 30% of Black/AA adolescent girls compared to 39% of White adolescent girls met PA guidelines five days per week, and only 37% of Black/AA women compared to 55% of White women engaged in the recommended amounts of PA per week. Lack of physical activity is related to a variety of poor health outcomes, including mood disorders, obesity, and other markers of poor cardiometabolic health. Given the association with poor cardiometabolic health outcomes, by which Blacks/AAs are disproportionately affected, disparities in physical activity are of public health importance.

A potentially important, understudied contributor to racial/ethnic disparities in physical activity and, relatedly, poor cardiometabolic health is the difference in hair product usage behaviors among Black/AA women compared to non-Black/AA US women. Black/AA women generally have naturally curly hair that requires either permanent chemical straightening products or the use of heat and temporary straightening products in order to obtain straight hair, which is a widely accepted European beauty standard. These hair manipulation practices begin as early as childhood. Hair straightening and other products marketed to and used mainly by Black/AA girls and women have been shown to contain chemicals with endocrine-disrupting properties.
The recent literature suggests that such chemicals are associated with poor cardiometabolic health outcomes through the alteration of physiologic pathways such as insulin secretion and adipogenesis. Furthermore, likely resulting in synergistic effects on cardiometabolic health, hairstyles achieved by use of these products may be barriers to PA. To attain and maintain desired hairstyles, a significant investment of time and financial resources is made [13], and behaviors that counter these investments are avoided. For instance, moisture in the environment or from sweat can negatively impact such hairstyles. Indeed, "sweating out" hairstyles, or causing hair to revert back to a naturally curly state during PA, is often avoided [16,18]. Moreover, even if Black/AA women wear their hair in its naturally curly state, it is difficult for the hair to stay moisturized because the curl pattern makes it difficult for naturally produced oils to travel from the scalp down the hair strand. Therefore, even wearing hair in a natural state can result in PA avoidance because natural hairstyles may still require extensive hair product use and hair maintenance. In fact, qualitative studies support hair maintenance as a unique barrier to physical activity among Black/AA women as early as adolescence and into adulthood [13]. Lack of physical activity throughout the life course may partially explain recalcitrant disparities in poor cardiometabolic health among AA/Black women, and further study of hair product usage and maintenance behaviors as contributors is warranted.

Although there have been several qualitative or mixed-methods studies to date that have investigated the associations between hair maintenance and the lack of physical activity among Black/AA adolescent girls and women [13], quantitative studies with large sample sizes that consider chemical hair product usage patterns and physical activity are necessary. We sought to investigate the associations of chemical hair product use and hair maintenance behaviors with physical activity. We hypothesized inverse associations between chemical hair product use/hair maintenance frequency and physical activity. Given our previous findings in relation to changes in hair product use at different life stages among Black/AA women, we performed two cross-sectional investigations of these associations: one at age 10 years and one during adulthood.

Study of Environment, Lifestyle, and Fibroids

We used cross-sectional data from the Study of Environment, Lifestyle, and Fibroids (SELF), a prospective cohort study of 1693 women aged 23 to 35 years from the Detroit, Michigan area who self-identified as Black/AA, alone or in combination with other racial/ethnic categories. The SELF was designed to investigate potential lifestyle and environmental risk factors for the development of uterine fibroids, and study details are described elsewhere. Briefly, eligible SELF participants without a prior diagnosis of uterine fibroids were enrolled between January of 2010 and December of 2012. At enrollment, participants completed computer-assisted telephone and web interviews (CATI/CAWI) as well as clinic visits. Follow-up occurred at approximately 20-month intervals over 5 years. The current study used early-life and adulthood data collected via CATI/CAWI. Early-life data were collected at baseline, and adulthood data were collected at the second follow-up, or at the third follow-up if the participant missed the second.
The institutional review boards at the National Institute of Environmental Health Sciences and the Henry Ford Health System approved the SELF protocol, and each participant provided informed consent.

Study Participants

Among the cohort of 1693 participants, a total of 135 participants were sequentially excluded for either loss to follow-up before the adult hair product usage assessment (n = 130) or missing values for either childhood or adulthood relaxer or leave-in conditioner use (n = 5). The final analytic sample comprised 1558 Black/AA women.

Childhood

At baseline, adult participants reported chemical hair product usage at age 10 years. Participants were asked, "How often was your hair treated with chemical products that change the texture of your hair, such as Jheri curl, relaxer, or perm when you were around 10 years old?" and reported use as 10 or more times a year, 5 to 9 times a year, 2 to 4 times a year, once a year, or rarely/never. We combined responses to categorize chemical relaxer use as ≥twice/year, once/year, and rarely/never. Participants also reported leave-in conditioner use at age 10 years by responding to the question, "How often was your hair treated with leave-in conditioners or other hair products that remained on your hair rather than being rinsed out when you were around 10 years old?" Response options were about every day, 3 to 5 times a week, 1 to 2 times a week, 1 to 3 times a month, or rarely or never, and we categorized leave-in conditioner use as ≥once/week, 1-3 times/month, and rarely/never.

Adulthood

During a follow-up interview, participants self-reported chemical relaxer use in the past 12 months by responding 12 or more times, 6-11 times, 2-5 times, once, or did not use in the past 12 months to the following question: "During the past 12 months, about how often did you or someone else apply hair relaxers, straighteners, or perms to your hair?" To standardize responses with the childhood categories, we categorized chemical relaxer use in the past 12 months as ≥twice/year, once/year, and rarely/never. Leave-in conditioner use in the past 12 months was assessed using two questions about the type (i.e., rinse-out vs. leave-in) and frequency of conditioner use; these questions are provided in a prior publication. To standardize with childhood use, leave-in conditioner use in the past 12 months was categorized as ≥once/week, 1-3 times/month, and rarely/never. The assessment of other hair product use in the previous 12 months is described in detail in a prior publication. Briefly, participants also reported use of other hair products, and we used latent class analysis to identify latent classes based on frequency of use (i.e., high (≥once/week), medium (1-3 times/month), none (rarely/never)) of chemical relaxer/straightener, shampoo, individual growth/moisturizing products (i.e., shea butter, natural plant-based oils, hair food, moisturizing creams and lotions, conditioners/detanglers), and a group of common hair styling products (e.g., hairspray or styling spritz, styling gel, mousse, pomade, hair grease, oil sheen, setting lotion). The model fit and interpretation for the three identified classes of hair product use were previously described. The classes were labeled as follows: Class One - High styling/low other product use; Class Two - High styling product/medium shampoo and conditioner use; and Class Three - High styling, shampoo, and growth/moisturizing product use (i.e., conditioner and oils).

Childhood

At baseline, participants self-reported PA at age 10 years.
Participation in recreational sports at age 10 years was dichotomized as yes vs. no. Participants reported their average number of PA minutes on a typical weekday and a typical weekend day. We separately dichotomized weekday and weekend PA as ≥60 min/day (yes vs. no) based on the US Department of Health and Human Services guidelines for PA for children. Among participants who engaged in any minutes of PA, which new PA guidelines suggest is beneficial for health, we assessed the intensity of PA. Participants reported how much of the time their PA was at a level high enough to cause a large increase in their breathing and heart rate. Response options included very little or none of the time, less than half the time, about half the time, and more than half the time. We created the following three categories for time spent engaging in intense PA: >half, ≤half, and very little/none of the time.

Adulthood

At the second follow-up, participants reported PA in the past 12 months. Participants who missed the second follow-up were not asked about PA and were not included in the adulthood PA analysis (n = 118). Participation in leisure-time PA was based on either a yes or no response to three questions about playing sports, having a regular exercise routine or class, and participating in any other activities outside of exercise classes or regular workouts (e.g., biking, hiking, dancing, or other recreational activities). If a participant reported an affirmative response to any of the three questions, she was defined as participating in leisure-time PA. Participants then reported the typical amount of time per week they spent participating in the activities. We applied the US Department of Health and Human Services guidelines for PA among adults to dichotomize minutes of PA in adulthood as ≥150 min/week (yes vs. no). Lastly, participants who reported any minutes of PA also reported how much time during leisure-time PA their activity level was high enough to cause a large increase in breathing and heart rate, and the PA intensity categories were standardized with those of childhood (i.e., >half, ≤half, and very little/none).

Potential Confounders

Participants recalled childhood characteristics at age 10 years. Sociodemographic factors included the educational attainment of the mother or primary caregiver (≤high school or General Educational Development (GED) certificate, some college or associate's/technical degree, ≥bachelor's degree), two-parent household (yes vs. no), household income during the majority of childhood (well-off, middle income, low income/poor), food insecurity at any time during childhood (yes vs. no), and neighborhood safety (very unsafe, somewhat unsafe, somewhat safe, very safe). Health behaviors and characteristics during childhood included weight status (heavier, same weight, lighter in comparison to other children) and enjoyment of physical activity (not at all/a little/somewhat, quite a bit/very much).

Statistical Analysis

Study population characteristics during childhood and adulthood were described as counts and percentages or as means and standard deviations.
Using Poisson regression models with robust variance, we estimated prevalence ratios (PRs) and 95% confidence intervals (CIs) for each PA outcome during childhood or adulthood, separately, among participants who reported more frequent chemical relaxer or leave-in conditioner use, separately, compared to those who reported rarely/never using each, and among participants who reported high styling, shampoo, and growth/moisturizing product use (Class Three) or high styling product/medium shampoo and conditioner use (Class Two) compared to participants with high styling/low other product use (Class One). We determined potential confounders to include in the adjusted models based on the previous literature and our construction of directed acyclic graphs [19,22]. All statistical models were estimated sequentially. For the analysis at age 10 years, Model 1 was unadjusted; Model 2 was adjusted for socioeconomic characteristics (childhood household educational attainment, two-parent household, childhood household income, and childhood food insecurity); Model 3 was additionally adjusted for neighborhood safety; and Model 4 was additionally adjusted for weight status relative to peers. For the adulthood analysis among participants with data for PA outcomes in the past 12 months (n = 1440), Model 1 was unadjusted; Model 2 was adjusted for sociodemographic characteristics (age, educational attainment, annual household income, employment status, and marital status); Model 3 was additionally adjusted for health behaviors/characteristics (smoking status, alcohol use, BMI); and Model 4 was additionally adjusted for enjoyment of PA, which may affect hair styling and maintenance as well as PA engagement, but could also act as a mediator on the pathway to PA. SAS, version 9.4 for Windows (Cary, NC, USA), was used in the analyses.

Sensitivity and Potential Modification Analyses

To test model assumptions about including enjoyment of PA as a potential confounder or mediator versus an effect modifier, we stratified fully adjusted adulthood models by enjoyment of PA (yes vs. no). Secondly, since barriers to physical activity may differ by hairstyle or by whether participants wear their hair naturally or in a relaxed/straightened state, we stratified fully adjusted models for leave-in conditioner use and hair maintenance behaviors in adulthood by whether participants reported chemical relaxer use in the past 12 months (yes vs. no). In additional analyses, we stratified fully adjusted models for adulthood chemical hair product use/hair maintenance behaviors and each PA outcome in adulthood by the following potential modifiers: age group dichotomized at the median (<33 years vs. ≥33 years), dichotomized annual household income (≤$50,000 vs. >$50,000), and dichotomized obesity status (non-obese vs. obese). Age category was considered a potential modifier because studies suggest that attitudes about hairstyles and maintenance practices may vary by generation and/or age as the social environment has changed over time. Studies also suggest that hair product use/maintenance practices and PA engagement may vary by income. Lastly, BMI category may act as an effect modifier if the propensity towards "sweating out" hairstyles varies by obesity status along with PA engagement.
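For illustration only, the prevalence-ratio models described in the Statistical Analysis subsection could be specified along the following lines (a Python/statsmodels sketch with hypothetical data set and variable names; the actual analyses were performed in SAS):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical analytic file: one row per participant
df = pd.read_csv("self_analytic_sample.csv")

# Poisson regression for a binary PA outcome; robust (sandwich) variance
model = smf.glm(
    "leisure_pa ~ C(relaxer_freq) + age + C(income) + C(education) + C(marital_status)",
    data=df,
    family=sm.families.Poisson(),
)
res = model.fit(cov_type="HC1")   # robust standard errors
pr = np.exp(res.params)           # prevalence ratios
ci95 = np.exp(res.conf_int())     # 95% confidence intervals
print(pr, ci95, sep="\n")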
Study Population

During early life, approximately half of participants (47%) resided in a household where the highest educational attainment was ≤high school, lived in middle-income households (53%), and lived in two-parent households (52%) (Table 1). One-third (34%) reported chemical relaxer use at least twice/year, while 22% reported leave-in conditioner use at least once/week. Approximately half (49%) reported participation in sports, and most reported ≥60 min of PA on weekdays (89%) and weekends (92%). Among the 1547 participants who reported any minutes of PA, 30% reported spending more than half of the time engaging in intense PA.

At the time of data collection, the mean (± standard deviation) age of participants was 33 ± 3.4 years (Table 2). Most participants (84%) had attained at least some college or an associate's/technical degree, 25% had an annual household income of >$50,000, and 78% were employed either full- or part-time. Furthermore, 65% of participants were obese, 85% were in good/very good/excellent general physical health, and 44% reported quite a bit or very much enjoyment of PA. Approximately 30% of participants reported chemical relaxer use at least twice/year and 24% reported leave-in conditioner use at least once/week in the past 12 months. The frequencies of chemical relaxer and leave-in conditioner use were not associated, and only 7% of participants reported the most frequent usage category of both (Supplementary Table S1). Regarding hair maintenance in the past 12 months, 36% had high styling/low other product use; 33% had high styling product/medium shampoo and conditioner use; and 31% had high styling, shampoo, and growth/moisturizing product use. Over half of women (59%) participated in leisure-time PA, 30% met PA guidelines, and among those who reported any minutes of PA (n = 817), 42% reported spending more than half of the time engaged in intense PA in the past 12 months. Adulthood characteristics of participants by chemical relaxer use, leave-in conditioner use, and hair maintenance behaviors/hair product use in the prior 12 months are described in Supplementary Tables S2-S4.

Abbreviations: SD (standard deviation); BMI (body mass index); kg (kilograms); m (meters). Note: Percentages may not sum to 100 due to missing values. Missing values: educational attainment = 1; household income = 2; general health = 9. * 118 of the 1558 participants were lost to follow-up and did not provide adulthood PA data. ** The percentage of time spent engaging in intense PA was assessed among the 817 participants who engaged in any minutes of PA per week, on average.

Chemical Hair Product Use and PA in Childhood

Neither chemical relaxer use nor leave-in conditioner use at age 10 years was associated with participation in recreational sports, ≥60 min/day of PA on weekdays or weekends, or PA intensity at age 10 years (Table 3).

Chemical Hair Product Use, Hair Maintenance Behaviors, and PA in Adulthood

Prior to adjustment, both chemical relaxer and leave-in conditioner use were associated with PA in the previous 12 months (Table 4). However, the associations attenuated after adjustment. Nonetheless, after adjustment for sociodemographic characteristics, health behaviors and characteristics, and enjoyment of PA, participants who reported using relaxers the most frequently (≥twice/year) were 10% less likely to participate in leisure-time PA (PR = 0.90) and, among those with any minutes of PA, were marginally less likely to report spending more than half the time engaged in intense PA (PR = 0.90) compared to participants who reported rarely/never using chemical relaxer. Conversely, more frequent vs.
rarely/never use of leave-in conditioner was positively associated with participation in leisure-time PA, meeting PA guidelines (≥150 min/week of leisure-time PA), and PA intensity prior to adjustment. After full adjustment, the associations were attenuated but remained suggestive of a higher prevalence of PA outcomes among more frequent users of leave-in conditioner. For instance, compared to participants who reported rarely/never using leave-in conditioner, participants who reported leave-in conditioner use 1-3 times per month had an 18% higher prevalence of meeting PA guidelines (PR = 1.18) and, among participants with any minutes of PA, participants who reported ≥once/week use had a 9% higher prevalence of reporting spending more than half of the time engaging in intense PA (PR = 1.09) after full adjustment.

Compared to participants with high styling/low other product use, participants with high styling product/medium shampoo and conditioner use and participants with high styling, shampoo, and growth/moisturizing product use were more likely to report participation in leisure-time PA, meeting PA guidelines, and intense PA more than half the time (among participants with any minutes of PA) in unadjusted models (Table 5). After full adjustment, participants who reported high styling, shampoo, and growth/moisturizing product use (Class Three) versus high styling/low other product use (Class One) had a 23% higher prevalence of engaging in ≥150 min/week of PA (PR = 1.23).

Sensitivity and Potential Modification Analyses

The sensitivity analysis did not support enjoyment of PA as a potential effect modifier (Supplementary Table S5). After stratification by chemical relaxer use in the previous 12 months, some strata had small sample sizes, leading to a reduction in precision; although confidence intervals overlapped, estimates differed by chemical relaxer use for certain PA outcomes (Supplementary Table S6). For instance, participants without chemical relaxer use who used leave-in conditioner ≥once/week had a 17% higher prevalence of intense PA more than half the time (PR = 1.17) compared to their counterparts who rarely/never used leave-in conditioner, but there was no association among participants who reported chemical relaxer use in the previous 12 months (PR = 0.89). Furthermore, participants without relaxer use who reported high styling, shampoo, and growth/moisturizing product use and those who reported high styling product/medium shampoo and conditioner use were more likely to report meeting PA guidelines compared to participants who reported high styling/low other product use (PR = 1.43 and PR = 1.34, respectively). However, among participants with relaxer use, neither group was more likely to report meeting PA guidelines than participants who reported high styling/low other product use (PR = 1.03 and PR = 0.94, respectively). After stratification by age group, annual household income, and obesity status, small sample sizes in some strata reduced both precision and our ability to detect potential effect modification; however, age group and obesity status may act as effect modifiers (Supplementary Tables S7-S9). For instance, medium use (1-3 times/month) versus rarely/never use of leave-in conditioner was associated with a higher prevalence of participation in leisure-time PA among participants aged <33 years, but not among participants aged ≥33 years (p-interaction < 0.10).
Additionally, although no association was observed among participants aged <33 years, ≥twice/year versus rarely/never use of chemical relaxer was associated with a lower prevalence of meeting PA guidelines among participants aged ≥33 years (p-interaction < 0.05). Lastly, although there was no association among obese participants, non-obese participants who reported high styling, shampoo, and growth/moisturizing product use were suggestively more likely to report spending over half the time engaged in intense PA compared to their counterparts who reported high styling/low other product use (p-interaction < 0.10).

Discussion

In this large sample of Black/AA women, we evaluated associations between chemical hair product use and hair maintenance behaviors and PA in childhood and adulthood. Hair product use was not associated with childhood PA. Instead, we found that chemical relaxer use was a potential barrier to PA in adulthood: greater chemical relaxer use in adulthood was associated with a lower prevalence of leisure-time PA, meeting PA guidelines, and spending at least half the time engaged in intense PA prior to adjustment. Though attenuated, the association with leisure-time PA, along with a marginal association with intense PA, held even after adjustment for key sociodemographic factors and health behaviors. Furthermore, Black/AA women who frequently vs. rarely used leave-in conditioner and who engaged in more versus fewer hair maintenance behaviors were more likely to meet PA guidelines in adulthood. The associations between greater hair maintenance and higher levels of PA were suggestively stronger among Black/AA women who did not use chemical relaxers in the previous 12 months.

Overall, our results regarding chemical relaxer use in adulthood are consistent with the previous literature. Consistent with the findings from prior qualitative studies suggesting that hair maintenance may act as a barrier to PA among Black/AA women in adulthood [21,22,25], we found that women who reported frequent versus rare/no chemical relaxer use in adulthood were less likely to engage in both leisure-time and intense PA. However, unlike prior studies, our results did not support chemical relaxer use as a barrier to childhood PA. This inconsistent finding is likely related to the differing ages of assessment across studies. Although our analysis corresponded to late childhood/pre-adolescence (age 10 years), prior studies were among Black/AA adolescents, a life stage when girls become more concerned about physical appearance and likely engage in different hair product use/hair maintenance behaviors. Additional quantitative studies among Black/AA children and adolescents are needed.

Our results for associations with leave-in conditioner use and hair maintenance may be related to cultural shifts related to hair and PA. We found that women who reported more frequent leave-in conditioner use and hair maintenance were more likely to meet PA guidelines compared to women who reported rarely/no leave-in conditioner use and less hair maintenance.
While these results appear to be contrary to the previous literature suggesting hair maintenance as a barrier to PA [21,22,25,28], our finding is likely related to the cross-sectional nature of our study and to the natural hair movement, i.e., the recent shift towards wearing hair in its naturally curly state, along with the increasing community support (e.g., online hair communities) and ability to care for natural hair while maintaining physical activity among Black/AA women; these changes may not be reflected in some of the previous literature. Hair maintenance practices captured in SELF occurred years after the onset of the natural hair movement, while previous studies that provided dates of data collection captured practices largely before or during the beginning stages of the movement. Although confidence intervals overlapped in our sensitivity analysis, hair maintenance was not associated with PA among women who reported chemical relaxer use, whereas hair maintenance was positively associated with PA among women who reported no chemical relaxer use. Among women with natural hair, those with greater hair maintenance may be more likely to meet PA guidelines and engage in intense PA compared to women with lower hair maintenance. This observation may result from potential reverse causation, whereby women who engage in more PA spend more time doing the hair maintenance activities required as part of appearance management following PA, including washing hair, using leave-in conditioner, and styling natural hair.

Black/AA women with natural hair may not view hair maintenance as strongly as a barrier to PA because natural hair maintenance may be less costly and time-intensive. This possibility depends upon hairstyle, the approaches women use to maintain their hair, and whether they style their own hair or have it professionally styled. For instance, a prior study among Black/AA women cosmetologists/hair stylists and their clients (aged 18-71 years), all of whom wore natural hair, reported a low prevalence of PA. Those results could be related to the age range of participants as well as to the possibility that hair maintenance can remain a barrier even among women who wear their hair naturally, because of the financial and time investments related to professional styling and because preferred hairstyles (e.g., temporarily straightened hair) are hard to maintain with PA. Additional studies that assess non-professional (e.g., at-home) versus professional styling are warranted to investigate natural hairstyles/styling practices in relation to PA and other potential hypotheses across additional populations of Black/AA women.

Several characteristics of the social environment may explain our results. Traditionally, Black/AA women have used chemical straighteners to achieve straight hairstyles that were deemed socially desirable and acceptable. In order to maintain socially desirable straight hairstyles, behaviors like PA were avoided so as not to "sweat out" the hair or ruin desired hairstyles that required considerable time and financial investments [16,18]. Therefore, the social environmental influence of traditional ideal beauty standards related to hair acted as an upstream determinant, or one of the fundamental causes, of the observed association between frequent chemical relaxer use and physical inactivity.
In recent years, wearing natural hair among Black/AA women has become more culturally acceptable, and there have been changes in the social environment such as the passage of hair anti-discrimination policies, including the Creating a Respectful and Open World for Natural Hair (CROWN) Act, which prohibits discrimination based on hairstyle and hair texture in schools and workplaces and which, to date, has been passed in seven US states and by the US House of Representatives. Embracing natural hair is reflected in our current sample: we previously found that at age 15 years, 17% of participants did not use chemical relaxers/straighteners, and non-use of chemical relaxers/straighteners increased to 59% during adulthood. Often, wearing natural hair allows Black/AA women to style and maintain their own hair without frequent visits to professional hair stylists. With the ability to maintain their own hair, the financial costs of hair maintenance may be reduced, thus potentially reducing hair maintenance as a barrier to PA. Therefore, the recent natural hair movement and the embracing of natural hair within the Black/AA community may have reduced hair maintenance as a barrier to PA.

Also of importance, hair straightening and other products marketed to and used by Black/AA girls and women have been shown to contain chemicals with endocrine-disrupting properties that have been implicated as contributors to poor cardiometabolic health outcomes. The combination of exposure to endocrine-disrupting chemicals and the alteration of physical activity related to chemical hair product use/hair maintenance behaviors may synergistically contribute to disparities in poor cardiometabolic health among Black/AA women over the life course, making this group of US women particularly vulnerable. This population would benefit from future research that considers hair product formulations, hair product use, and health behaviors related to hair product use as important, overarching determinants of cardiometabolic health that can serve as targets for intervention. Furthermore, in our additional analyses, we observed that relationships between hair product use/maintenance and PA may vary by age and obesity status. Each of these characteristics requires additional investigation, as they may aid in the further identification of particularly vulnerable populations within the community of Black/AA women.

Our study should be interpreted in the context of its limitations and strengths. The cross-sectional study design, as well as the assessment of hair product use and PA at one time point in adulthood, leads to the possibility of reverse causality; for example, greater PA may have resulted in greater hair maintenance. Other limitations include our use of subjective versus objective measurements of hair product use and PA, which may lead to non-differential misclassification. Furthermore, there is potential for recall bias in adults' reports of childhood behaviors at age 10 years. Longitudinal studies over the life course are warranted. Additionally, we assessed only leisure-time PA during adulthood and may have underestimated total PA. Nonetheless, by focusing on leisure-time PA, our results better elucidate hair maintenance as a barrier to PA that is engaged in by choice rather than as a necessity in the context of work.
Our results may also not generalize to other populations because our study participants were limited to one geographic location, and age was relatively homogeneous within this cohort of reproductive-age Black/AA women. Results may vary across populations of Black/AA women in different US regions, since beauty practices may vary by region. Results may also vary by birthplace/immigration status and generation, because culture and beauty standards change over time. This study should be replicated in other groups. Nonetheless, the strengths of our study include its large sample size from an understudied population, our detailed assessment of hair product use during both childhood and adulthood, and the examination of several measures of PA.

Conclusions

The common observation in qualitative studies of chemical relaxer use as a barrier to PA among Black/AA women was supported by this novel quantitative study. Wearing natural hair may reduce hair maintenance as a barrier to PA among Black/AA women. With Black/AA women having a higher prevalence of obesity compared to non-Hispanic White women, identifying modifiable factors that contribute to these disparities is imperative. Further investigation of culturally relevant yet understudied barriers to PA, including hair product use and hair maintenance, in the Black/AA community can inform intervention targets that can contribute to the elimination of cardiometabolic health disparities among this vulnerable group of women.

Supplementary Materials: The following are available online at http://www.mdpi.com/1660-4601/17/24/9254/s1, Table S1: Bivariate associations between chemical relaxer and leave-in conditioner use; Table S2: Adulthood characteristics of participants by adulthood chemical relaxer use, SELF, N = 1440 *; Table S3: Adulthood characteristics of participants by adulthood leave-in conditioner use, SELF, N = 1440 *; Table S4: Adulthood characteristics of participants by latent class of hair maintenance behaviors/hair product use in adulthood, SELF, N = 1440 *; Table S5: Prevalence ratios for engagement in physical activity (PA) for participants by chemical hair product use or latent class of hair maintenance behaviors in the past 12 months, stratified by enjoyment of PA: Participants with more hair product use/hair maintenance compared to participants with less hair product use/hair maintenance, Study of Environment, Lifestyle and Fibroids (N = 1440 *); Table S6: Prevalence ratios for engagement in physical activity (PA) for participants by chemical hair product use or latent class of hair maintenance behaviors in the past 12 months, stratified by chemical relaxer use: Participants with more hair product use/hair maintenance compared to participants with less hair product use/hair maintenance, Study of Environment, Lifestyle and Fibroids (N = 1440 *); Table S7: Prevalence ratios for engagement in physical activity (PA) for participants by chemical hair product use or latent class of hair maintenance behaviors in the past 12 months, stratified by age group: Participants with more hair product use/hair maintenance compared to participants with less hair product use/hair maintenance, Study of Environment, Lifestyle and Fibroids (N = 1440 *); Table S8: Prevalence ratios for engagement in physical activity (PA) for participants by chemical hair product use or latent class of hair maintenance behaviors in the past 12 months, stratified by annual household income: Participants with more hair product use/hair maintenance compared to
participants with less hair product use/hair maintenance, Study of Environment, Lifestyle and Fibroids (N = 1440 *); Table S9: Prevalence ratios for engagement in physical activity (PA) for participants by chemical hair product use or latent class of hair maintenance behaviors in the past 12 months, stratified by obesity status: Participants with more hair product use/hair maintenance compared to participants with less hair product use/hair maintenance, Study of Environment, Lifestyle and Fibroids (N = 1440 *).

Funding: This research was supported by the Intramural Research Program of the NIH, National Institute of Environmental Health Sciences (NIEHS) (Z1AES103325 (CLJ) and 1ZIAES049013 (DB)) and, in part, by grant NIH/NIEHS P30ES000002 (TJT). Funding also came from the American Recovery and Reinvestment Act funds designated for NIH research.
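To make the prevalence-ratio estimation described in the statistical methods above concrete, the following is a minimal, hedged sketch in Python with statsmodels (the study itself used SAS 9.4). The synthetic data frame and every variable name (met_pa_guidelines, relaxer_freq, and the covariates) are illustrative assumptions, not the study's actual code or variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per participant (illustrative only).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "met_pa_guidelines": rng.integers(0, 2, n),  # binary PA outcome
    "relaxer_freq": rng.choice(["never", "lt2_yr", "ge2_yr"], n),
    "age": rng.normal(33, 3.4, n),
    "income_gt_50k": rng.integers(0, 2, n),
})

def prevalence_ratios(formula, data):
    # Poisson regression on a binary outcome with a robust ("sandwich")
    # covariance estimator yields prevalence ratios rather than odds ratios.
    fit = smf.glm(formula, data=data, family=sm.families.Poisson()).fit(cov_type="HC1")
    pr = np.exp(fit.params)      # exponentiated coefficients are the PRs
    ci = np.exp(fit.conf_int())  # exponentiated limits give the 95% CIs
    return pr, ci

# Model 1 (unadjusted) and Model 2 (partially adjusted); Models 3 and 4
# would add the further covariate blocks described in the text the same way.
print(prevalence_ratios("met_pa_guidelines ~ C(relaxer_freq, Treatment('never'))", df))
print(prevalence_ratios("met_pa_guidelines ~ C(relaxer_freq, Treatment('never')) + age + income_gt_50k", df))

Effect modification, as in the stratified analyses, can be probed either by fitting the fully adjusted model within each stratum or by adding a product term (e.g., "+ C(relaxer_freq):C(age_group)") and testing the interaction coefficients, which corresponds to the reported p-interaction values.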
|
// Package declaration and project-specific imports (e.g., MMessage,
// CommunicationDiagramEdge, DiagramOptions, PlaceableNode) are assumed to be
// provided by the surrounding project; only standard-library imports are
// listed here.
import java.awt.Color;
import java.awt.FontMetrics;
import java.awt.Graphics2D;
import java.util.List;

/**
* This class represents a group of messages, which belongs to a communication
* edge.
*
* @author Quang Dung Nguyen
*
*/
public final class MessagesGroup extends EdgeProperty {
/**
* The overall vertical (top and bottom) margin between the text of the
* EdgeProperty and the surrounding rectangle.
*/
static final int MARGIN_VERTICAL = 6;
/**
* The overall (left and right) horizontal margin between the text of the
* EdgeProperty and the surrounding rectangle.
*/
static final int MARGIN_HORIZONTAL = 6;
/**
* The length of the arrow drawn for communication messages.
*/
static final int MESSAGE_ARROW_LENGTH = 10;
/**
* The offset between the arrow and the message label of communication messages.
*/
static final int MESSAGE_ARROW_OFFSET = 2;
/**
* The message to be highlighted when navigating through the communication
* messages.
*/
private MMessage coloredMessage;
private Color activatedMessageColor = CommunicationDiagram.ACTIVATED_MESSAGE_COLOR;
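/**
 * Creates a new message group for the given communication diagram edge.
 * The group is placed between the source and target way points of the edge.
 *
 * @param opt the diagram options used for rendering
 * @param edge the communication diagram edge this group belongs to
 */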
public MessagesGroup(DiagramOptions opt, CommunicationDiagramEdge edge) {
super(edge.getId(), new PlaceableNode[] { edge.getSourceWayPoint(), edge.getTargetWayPoint() }, false, opt);
fName = edge.getId();
fOpt = opt;
fEdge = edge;
this.coloredMessage = new MMessage();
this.setStrategy(new StrategyInBetween(this, new PlaceableNode[] { edge.getSourceWayPoint(), edge.getTargetWayPoint() }, 0, -10));
}
@Override
public String name() {
return fName;
}
@Override
public String getStoreType() {
return "Communication Diagram Message";
}
@Override
public String toString() {
return "CommunicationDiagramMessage: " + name();
}
/**
* @return the activatedMessageColor
*/
public Color getActivatedMessageColor() {
return activatedMessageColor;
}
/**
* @param activatedMessageColor the activatedMessageColor to set
*/
public void setActivatedMessageColor(Color activatedMessageColor) {
this.activatedMessageColor = activatedMessageColor;
}
/**
* @return the coloredMessage
*/
public MMessage getColoredMessage() {
return coloredMessage;
}
/**
* @param coloredMessage the coloredMessage to set
*/
public void setColoredMessage(MMessage coloredMessage) {
this.coloredMessage = coloredMessage;
}
void removeColoredMessage() {
this.coloredMessage = new MMessage();
}
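/**
 * Draws one arrow and one sequence-number label per message of the
 * underlying communication edge, stacked line by line. Failed return
 * messages are drawn in red, and the currently navigated message is drawn
 * in the activated message color.
 */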
@Override
protected void onDraw(Graphics2D g) {
if (!isVisible())
return;
Graphics2D g2 = (Graphics2D) g.create();
if (invalid) {
calculateSize(g);
invalid = false;
}
if (this.getColor() != null && !this.getColor().equals(Color.WHITE)) {
Color old = g2.getColor();
// Note: the background is always filled green here, regardless of the
// stored color; only a null or white color suppresses the fill.
g2.setColor(Color.green);
g2.fill(this.getBounds());
g2.setColor(old);
}
g2.setFont(getFont(g2));
if (isSelected()) {
drawSelected(g2);
}
FontMetrics fm = g2.getFontMetrics();
int lineHeight = fm.getAscent() + fm.getDescent();
double xDrawPoint = getX() + MARGIN_HORIZONTAL / 2 + MESSAGE_ARROW_LENGTH;
double yDrawPoint = getY() + fm.getAscent() + MARGIN_VERTICAL / 2;
int xDrawPointInteger = (int) xDrawPoint;
int yDrawPointInteger = (int) yDrawPoint;
int yOfArrow = yDrawPointInteger - fm.getAscent() / 2;
double angle = computeMessageArrowAngle(xDrawPoint, getY() + getHeight() / 2);
Color oldColor = g.getColor();
for (int i = 0; i < ((CommunicationDiagramEdge) fEdge).getMessages().size(); i++) {
MMessage mess = ((CommunicationDiagramEdge) fEdge).getMessages().get(i);
if (mess.isFailedReturn()) {
g2.setColor(Color.red);
}
if (mess.equals(coloredMessage)) {
g2.setColor(activatedMessageColor);
}
if (mess.getDirection() == MMessage.RETURN) {
DrawingUtil.drawReturnArrow(g2, xDrawPointInteger, yOfArrow, (int) (xDrawPointInteger + MESSAGE_ARROW_LENGTH * Math.cos(angle)),
(int) (yOfArrow + MESSAGE_ARROW_LENGTH * Math.sin(angle)));
} else if (mess.getDirection() == MMessage.BACKWARD) {
DrawingUtil.drawArrow(g2, (int) (xDrawPointInteger + MESSAGE_ARROW_LENGTH * Math.cos(angle)),
(int) (yOfArrow + MESSAGE_ARROW_LENGTH * Math.sin(angle)), xDrawPointInteger, yOfArrow);
} else {
DrawingUtil.drawArrow(g2, xDrawPointInteger, yOfArrow, (int) (xDrawPointInteger + MESSAGE_ARROW_LENGTH * Math.cos(angle)),
(int) (yOfArrow + MESSAGE_ARROW_LENGTH * Math.sin(angle)));
}
g2.drawString(mess.getSequenceNumberMessage(), xDrawPointInteger + MESSAGE_ARROW_LENGTH + MESSAGE_ARROW_OFFSET, yDrawPointInteger);
yDrawPointInteger += lineHeight;
yOfArrow += lineHeight;
if (mess.equals(coloredMessage) || mess.isFailedReturn()) {
g2.setColor(oldColor);
}
}
}
/**
* Calculates the angle of the message arrows. The nearest segment of the
* communication edge determines the arrow angle: arrows are always drawn
* parallel to that segment.
*
* @param x x position of the messages' draw point
* @param y y position of the messages' draw point
* @return angle of the message arrows (parallel to the nearest edge segment)
*/
private double computeMessageArrowAngle(double x, double y) {
List<WayPoint> wayPoints = fEdge.getWayPoints();
WayPoint wayPointOne = wayPoints.get(0);
WayPoint wayPointTwo = wayPointOne.getNextWayPoint();
// Search for the segment of the communication edge whose distance
// to the drawing point is minimal.
if (wayPoints.size() > 2) {
double nearestDistance = Double.MAX_VALUE;
for (int i = 0; i < wayPoints.size() - 1; i++) {
WayPoint first = wayPoints.get(i); // Point A
WayPoint second = first.getNextWayPoint(); // Point B
// Point C (x, y) is the argument of this method.
// If the angle CAB is obtuse (dot product of AC and AB is negative),
// the closest point of segment AB is A itself.
double dotCB = (x - first.getX()) * (second.getX() - first.getX()) + (y - first.getY()) * (second.getY() - first.getY());
if (dotCB < 0) {
double distanceCA = Math.sqrt(Math.pow(x - first.getX(), 2) + Math.pow(y - first.getY(), 2));
if (distanceCA < nearestDistance) {
nearestDistance = distanceCA;
wayPointOne = first;
wayPointTwo = second;
continue;
}
}
// If the angle CBA is obtuse, the closest point of segment AB is B itself.
double dotCA = (x - second.getX()) * (first.getX() - second.getX()) + (y - second.getY()) * (first.getY() - second.getY());
if (dotCA < 0) {
double distanceCB = Math.sqrt(Math.pow(x - second.getX(), 2) + Math.pow(y - second.getY(), 2));
if (distanceCB < nearestDistance) {
nearestDistance = distanceCB;
wayPointOne = first;
wayPointTwo = second;
continue;
}
}
// Otherwise the perpendicular from C falls within AB; use the
// point-to-line distance from C to the line through A and B.
if (dotCB > 0 && dotCA > 0) {
double numerator = Math.abs((first.getX() - second.getX()) * (second.getY() - y) - (second.getX() - x) * (first.getY() - second.getY()));
double denominator = Math.sqrt(Math.pow((first.getX() - second.getX()), 2) + Math.pow((first.getY() - second.getY()), 2));
double distance = numerator / denominator;
if (distance < nearestDistance) {
nearestDistance = distance;
wayPointOne = first;
wayPointTwo = second;
}
}
}
}
// Compute the angle of the chosen segment; message arrows are drawn
// parallel to the message edge.
double dx = wayPointTwo.getX() - wayPointOne.getX();
double dy = wayPointTwo.getY() - wayPointOne.getY();
return Math.atan2(dy, dx);
}
/**
* Calculates and sets the rectangle size of this node.
*
* @param g Used Graphics.
*/
@Override
public void doCalculateSize(Graphics2D g) {
double width = 0;
for (int i = 0; i < ((CommunicationDiagramEdge) fEdge).getMessages().size(); i++) {
width = Math.max(width, g.getFontMetrics().stringWidth(((CommunicationDiagramEdge) fEdge).getMessages().get(i).getSequenceNumberMessage()));
}
setCalculatedWidth(width + MARGIN_HORIZONTAL + MESSAGE_ARROW_OFFSET + 2 * MESSAGE_ARROW_LENGTH);
setCalculatedHeight(g.getFontMetrics().getHeight() * ((CommunicationDiagramEdge) fEdge).getMessages().size() + MARGIN_VERTICAL);
}
}
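// Design note: computeMessageArrowAngle() is a standard nearest-point-on-
// segment search. For each segment AB of the edge, the two dot products
// classify where the draw point C projects: if the angle at A is obtuse the
// closest point is A, if the angle at B is obtuse it is B, and otherwise the
// perpendicular distance to the line through A and B applies. The segment
// with the smallest such distance supplies the arrow angle via atan2.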
|
#include <stdio.h>
#include <string.h>

/*
 * Reads a string of length n and prints the shortest string obtained by
 * writing the input k times, overlapping each copy with the longest border
 * (prefix that is also a suffix) of the previous one. The buffer sizes
 * assume n <= 50 and k <= 50, consistent with the declared arrays.
 */
int main() {
int n, k;
char in[51];
char tp[50 * 50 + 1]; /* worst case: 50 + 49 * 50 = 2500 characters */
scanf("%d%d%s", &n, &k, in);
/* Find ff, the length of the longest proper border of the input, i.e.
 * the longest prefix that is also a suffix. (n is read but unused; the
 * code takes the length from strlen instead.) */
int ff, len = strlen(in);
for (ff = len - 1; ff > 0; --ff)
if (memcmp(in, in + len - ff, ff) == 0)
break;
/* Start with one full copy, then append only the non-overlapping part
 * (skipping the first ff characters) k - 1 times. */
strcpy(tp, in);
for (int i = 0; i < k - 1; ++i)
strcat(tp, in + ff);
puts(tp);
return 0;
}
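/*
 * Worked example (assuming the task is to print k copies of the string
 * overlapped on their longest border): for input "3 4" and "aba", the
 * longest border is "a" (ff = 1), so the program prints "aba" followed by
 * "ba" three times, giving "ababababa".
 */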
|