package org.quartz.exception;

public class UnableToInterruptJobException extends SchedulerException {

    public UnableToInterruptJobException(String msg) {
        super(msg);
    }

    public UnableToInterruptJobException(Throwable cause) {
        super(cause);
    }
}
|
1. Field of the Invention
This invention generally relates to a power conserving arrangement for, and a method of, minimizing battery power consumption during stand-by operation of a portable, battery-operated, mobile station and, more particularly, of a cellular telephone in a cellular telephone system.
2. Description of Related Art
A typical cellular telephone system includes a plurality of base stations or towers, each serving a pre-assigned geographical cell or region. Each base station transmits messages to a multitude of mobile stations, e.g. cellular telephones, in its region. Each telephone includes a transceiver and a decoder under microprocessor control.
During a stand-by mode of operation, each telephone waits to receive a telephone call. The message transmitted by a respective base station may be a so-called "global" message intended for all telephones or, most frequently, an individual message intended for just one specific telephone. Hence, the individual message contains a unique mobile identification number (MIN), i.e., the telephone number. Each telephone has its unique MIN pre-stored in an on-board memory.
Many messages are transmitted by a respective base station and, of all those messages, only a very small number, if any, are intended for a particular telephone. Nevertheless, each telephone, during the stand-by mode of operation, continuously receives and decodes all messages transmitted by the respective base station until the decoder of a particular telephone recognizes its MIN, after which the telephone operates in a talk (call in progress) mode. The telephone transmits and receives data, including voice data, to and from the base station in the talk mode.
It will be seen that conventional cellular telephones in current use consume electrical power in both the talk and the stand-by modes. In current portable battery-operated telephones, the on-board battery typically has a working lifetime of approximately 8 hours in the stand-by mode, and about 1-2 hours in the talk mode. The battery must then be re-charged or replaced to continue telephone service. A major electrical current consumer on-board the battery-operated cellular telephone during the stand-by mode is the receiver section of the transceiver which, as previously described, is continuously on while the telephone is waiting to decode its MIN. The microprocessor and other electronic components on-board the telephone are also energized during the stand-by mode and additionally contribute to current drain on the battery. The need to increase the battery working lifetime between re-charges and/or battery replacement is self-evident.
To aid in understanding the invention described herein, a brief review of the prior art structure of the message transmitted by the base station during stand-by operation is presented. The message is a digital stream of bits, and may have one or more words. Usually, a message includes two words. FIG. 1 schematically shows the prior art structure of each word of the message. Each word contains forty bits. The first twenty-eight bits are message data containing, among other things, the MIN and/or a global message and/or a channel assignment message, etc. The last twelve bits are a sequence or parity field of check bits (BCH) forming a block-code parity check sum. The BCH parity field confirms that the message data in the first twenty-eight bits were correctly received.
To overcome the problem of messages that are sometimes lost by rapidly changing radio signals, each word of the message is transmitted from the base station to each portable telephone five times. For a message to be validated, each word must be correctly received at least three out of the five times before the telephone will respond to the message. In addition, to compensate for burst errors, words are interleaved and transmitted in a format based on whether the MIN is odd or even.
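The three-out-of-five validation rule just described can be sketched in a few lines of Python (an illustrative model only, not part of the claimed invention; whole-word voting is used here for brevity, and the helper name `validate_word` is invented for the example):

```python
from collections import Counter

WORD_BITS = 40   # bits per word (FIG. 1)
REPEATS = 5      # each word is transmitted five times

def validate_word(repeats):
    """Majority-vote a word received five times.

    repeats: list of 5 strings of 40 bits each, as received (possibly
    corrupted).  Returns the winning bit pattern if some pattern was
    received identically at least 3 of the 5 times, else None.
    """
    assert len(repeats) == REPEATS and all(len(w) == WORD_BITS for w in repeats)
    candidate, count = Counter(repeats).most_common(1)[0]
    return candidate if count >= 3 else None

# Example: one repetition hit by a burst error, one by a single-bit error.
good = "0101" * 10                    # a clean 40-bit word
corrupted = "1111111111" + good[10:]  # burst error in the first 10 bits
flipped = "1" + good[1:]              # single-bit error
received = [good, corrupted, good, flipped, good]
assert validate_word(received) == good  # 3 of 5 copies agree -> validated
```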
FIG. 2 schematically shows the prior art structure of the interleaved format wherein each word A (designated for even telephone numbers) and each word B (designated for odd telephone numbers) is repeated five times and, for each repetition, the even word A is alternated with the odd word B. In addition, FIG. 2 shows a dotting sequence D which is a sequence of ten bits that advises the telephone that a synchronization word S is coming. The dotting sequence produces a 5 kHz frequency signal which is a precursor and a gross indicator that a message is about to start. The synchronization word is a sequence of eleven bits, and includes a synchronization pattern by which an internal clock of the telephone is synchronized to the base station transmitter.
Also imposed on the message data stream are busy-idle bits which are schematically shown in FIG. 3. A busy-idle bit is sent every ten bits of the message to indicate the status of the system channel. If the busy-idle bit is set to logic 1, then the channel is not busy. If the busy-idle bit is set to logic 0, then the channel is busy. The data rate for transmitted bits is 10 kbps. Hence, as shown in FIG. 2, 463 bits are transmitted in 46.3 msec, and is the total time in which one odd and one even word is transmitted five times in an interleaved format.
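The bit counts quoted above can be cross-checked with a short sketch: 10 dotting bits + 11 synchronization bits + 10 repeated 40-bit words = 421 message bits, and one busy-idle bit after every ten message bits adds 42 more, giving the 463 bits (46.3 msec at 10 kbps) of FIG. 2. This is an illustration of the framing arithmetic only; the exact placement rule for the busy-idle bits is an assumption:

```python
DOTTING = 10   # dotting-sequence bits
SYNC = 11      # synchronization-word bits
WORD = 40      # bits per word
REPEATS = 5    # each of the two words (A and B) is sent five times
MESSAGE_BITS = DOTTING + SYNC + 2 * REPEATS * WORD  # 421 message bits

def insert_busy_idle(bits, idle=True):
    """Insert one busy-idle bit (1 = idle, 0 = busy) after every 10 message bits."""
    flag = "1" if idle else "0"
    out = []
    for i in range(0, len(bits), 10):
        out.append(bits[i:i + 10])
        if i + 10 <= len(bits):  # a flag follows every complete 10-bit group
            out.append(flag)
    return "".join(out)

stream = insert_busy_idle("0" * MESSAGE_BITS)
assert MESSAGE_BITS == 421
assert len(stream) == 463  # 421 message bits + 42 busy-idle bits
```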
As previously noted, a message may, and typically does, contain more than one word. When this happens, each word also advises the on-board microprocessor that more words for the complete message are coming.
FIG. 4 schematically shows the prior art structure and duration of a complete message that consists of two words wherein word C is the second word of the message for an even telephone number which had word A as the first word, and wherein word D is the second word of the message for an odd telephone number which had word B as the first word. A two-word message takes 92.6 msec to be completely transmitted.
|
from abc import ABC

import ci_sdr
import torch

from espnet2.enh.loss.criterions.abs_loss import AbsEnhLoss


class TimeDomainLoss(AbsEnhLoss, ABC):
    pass


EPS = torch.finfo(torch.get_default_dtype()).eps


class CISDRLoss(TimeDomainLoss):
    """CI-SDR loss

    Reference:
        Convolutive Transfer Function Invariant SDR Training
        Criteria for Multi-Channel Reverberant Speech Separation;
        <NAME> et al., 2021;
        https://arxiv.org/abs/2011.15003

    Args:
        ref: (Batch, samples)
        inf: (Batch, samples)
        filter_length (int): a time-invariant filter that allows
            slight distortion via filtering

    Returns:
        loss: (Batch,)
    """

    def __init__(self, filter_length=512):
        super().__init__()
        self.filter_length = filter_length

    @property
    def name(self) -> str:
        return "ci_sdr_loss"

    def forward(
        self,
        ref: torch.Tensor,
        inf: torch.Tensor,
    ) -> torch.Tensor:
        assert ref.shape == inf.shape, (ref.shape, inf.shape)
        return ci_sdr.pt.ci_sdr_loss(
            inf, ref, compute_permutation=False, filter_length=self.filter_length
        )


class SNRLoss(TimeDomainLoss):
    def __init__(self, eps=EPS):
        super().__init__()
        self.eps = float(eps)

    @property
    def name(self) -> str:
        return "snr_loss"

    def forward(
        self,
        ref: torch.Tensor,
        inf: torch.Tensor,
    ) -> torch.Tensor:
        # the returned tensor should have shape (batch,)
        noise = inf - ref
        snr = 20 * (
            torch.log10(torch.norm(ref, p=2, dim=1).clamp(min=self.eps))
            - torch.log10(torch.norm(noise, p=2, dim=1).clamp(min=self.eps))
        )
        return -snr


class SISNRLoss(TimeDomainLoss):
    def __init__(self, eps=EPS):
        super().__init__()
        self.eps = float(eps)

    @property
    def name(self) -> str:
        return "si_snr_loss"

    def forward(
        self,
        ref: torch.Tensor,
        inf: torch.Tensor,
    ) -> torch.Tensor:
        # the returned tensor should have shape (batch,)
        assert ref.size() == inf.size()
        B, T = ref.size()

        # Step 1. Zero-mean norm
        mean_target = torch.sum(ref, dim=1, keepdim=True) / T
        mean_estimate = torch.sum(inf, dim=1, keepdim=True) / T
        zero_mean_target = ref - mean_target
        zero_mean_estimate = inf - mean_estimate

        # Step 2. SI-SNR with order
        # reshape to use broadcast
        s_target = zero_mean_target  # [B, T]
        s_estimate = zero_mean_estimate  # [B, T]
        # s_target = <s', s>s / ||s||^2
        pair_wise_dot = torch.sum(s_estimate * s_target, dim=1, keepdim=True)  # [B, 1]
        s_target_energy = (
            torch.sum(s_target**2, dim=1, keepdim=True) + self.eps
        )  # [B, 1]
        pair_wise_proj = pair_wise_dot * s_target / s_target_energy  # [B, T]
        # e_noise = s' - s_target
        e_noise = s_estimate - pair_wise_proj  # [B, T]
        # SI-SNR = 10 * log_10(||s_target||^2 / ||e_noise||^2)
        pair_wise_si_snr = torch.sum(pair_wise_proj**2, dim=1) / (
            torch.sum(e_noise**2, dim=1) + self.eps
        )
        pair_wise_si_snr = 10 * torch.log10(pair_wise_si_snr + self.eps)  # [B]
        return -1 * pair_wise_si_snr
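For a quick sanity check of the SISNRLoss arithmetic above, the same computation can be re-derived in plain NumPy. This standalone sketch (not part of the ESPnet API) also demonstrates the defining property of SI-SNR: the loss is invariant to rescaling the estimate, which plain SNR is not:

```python
import numpy as np

def si_snr_loss_np(ref, inf, eps=1e-8):
    """Negative SI-SNR for 1-D signals, mirroring SISNRLoss.forward."""
    ref = ref - ref.mean()
    inf = inf - inf.mean()
    # project the estimate onto the target signal
    proj = (inf @ ref) * ref / (ref @ ref + eps)
    noise = inf - proj
    ratio = (proj @ proj) / (noise @ noise + eps)
    return -10 * np.log10(ratio + eps)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
ref = np.sin(2 * np.pi * 440 * t)
inf = ref + 0.1 * rng.standard_normal(t.size)

# Scale invariance: doubling the estimate leaves the loss (nearly) unchanged.
assert abs(si_snr_loss_np(ref, inf) - si_snr_loss_np(ref, 2 * inf)) < 1e-3
```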
|
Analysis of the quotation corpus of the Russian Wiktionary A quantitative evaluation of the quotations in the Russian Wiktionary was performed using the developed Wiktionary parser. It was found that the number of quotations in the dictionary is growing fast (51.5 thousand in 2011, 62 thousand in 2012). These quotations were extracted and saved in the relational database of a machine-readable dictionary, for which tables related to the quotations were designed. A histogram of the distribution of quotations from literary works written in different years was built. An attempt was made to explain the characteristics of the histogram by associating it with the years of the most popular and most cited (in the Russian Wiktionary) writers of the nineteenth century. It was found that more than one-third of all the quotations (example sentences) contained in the Russian Wiktionary were taken by the editors of Wiktionary entries from the Russian National Corpus. INTRODUCTION The progress of computer technologies provides the basis for a new type of dictionary: an online dictionary, in which any interested person can take part in the development. In comparison with traditional lexicography, on the one hand, this way of organizing collective work provides obvious advantages (a high intensity of work, and the possibility to discuss and correct articles in real time at any stage of work). On the other hand, there is a high probability of gaps in the source material and, consequently, in the dictionary itself. One possible solution to this problem is to develop a special software tool that can analyse an online dictionary at any stage of its development. Some such solutions are presented in this paper on the basis of an analysis of the quotations that illustrate word meanings in the Russian Wiktionary.
Therefore, the goals of this work are to construct a quotation corpus from the online dictionary and to analyse the chronological distribution of this corpus over the period 1750 to 2012 (the period comprising the years to which more than 10 quotations in the dictionary refer). The definition given in the paper Narinyany2001 for the dictionaries of the future can be applied to the Wiktionary: it is "an intellectual computer dictionary which combines thesaurus, lexicon and phraseological dictionary, and it is integrated with dictionaries in other languages". Indeed, the Wiktionary is a multilingual and multifunctional dictionary and thesaurus. It combines a glossary with defining, grammatical, etymological, and translation dictionaries. Consequently, the Wiktionary contains not only word definitions, semantically related words (synonyms, hypernyms, etc.) and translations, but also pronunciations (phonetic transcriptions, audio files), hyphenations, etymologies, quotations, parallel texts (quotations with translations), and figures (which illustrate the meanings of words). The advantages of the Wiktionary are its huge volume of data and the great variety of its lexicographical material. It was shown in the papers Krizhanovsky2012 and Meyer2012 that the size of the German Wiktionary is comparable with the thesauri GermaNet and OpenThesaurus. THE FRAMEWORK OF THE MACHINE-READABLE WIKTIONARY The conception of the machine-readable Wiktionary is flexible with respect to input data, but strict and formal with respect to output data. Input data. This conception allows different wiktionaries to have different article structures (e.g. different names and orders of article sections), which must be taken into account by a parser of wiktionaries. Moreover, even within one Wiktionary the structure of an article can change over time, as new sections appear and templates vary and are modified.
Therefore, a flexible and modular framework is needed in order to parse such "live" and varied wiktionaries (Figure 1). The specific properties of different wiktionaries are taken into account in the submodules "ruwikt" and "enwikt" of the module "Data extraction" in Figure 1. Output data. The data extracted from a Wiktionary are stored in the database of the machine-readable dictionary. The resulting databases filled by the parser have an identical structure regardless of the source wiktionary (the machine-readable Wiktionary is freely available for readers and editors at http://code.google.com/p/wikokit/). This provides compatibility of different machine-readable wiktionaries with external applications, along with a common API to the resulting databases (read / write data). Adding new wiktionaries to the system is practical, because many parts of the parser are already developed and do not depend on a specific wiktionary. THE ARCHITECTURE OF THE DATABASE OF THE QUOTATION CORPUS The database of the quotation corpus is a part of the relational database of the machine-readable Wiktionary presented in the paper Krizhanovsky2010. The following fields of the quotation template are recognized and added to the database during the extraction of the semi-structured data of the Wiktionary by the parser (Figure 2): the text of the quotation (stored in the field text of the table quote); the translation into Russian (the table quot_translation); the transcription of the quotation (the table quot_transcription is reserved for the English Wiktionary; it is not used in the Russian Wiktionary). Figure 2. Tables and relations related to quotations in the database of the machine-readable Wiktionary. The quotation's reference joins several elements of the database. DATABASE QUERIES It is possible to construct various SQL queries using the tables related to quotes. 2) Get a sublist of quotations with a non-empty reference (a source).
There are 222 quotations whose "ref_id" is not NULL in the table "quote". 3) Get a sub-sublist of quotations that contain a date in the reference: there are 123 quotations with years. 4) Finally, get a list of quotations whose reference contains a range of years, i.e. the value of the field "to" is greater than that of "from" in the table quot_year (Figure 2). As a result, seven quotes were found; see Table 1. The column "entry" in Table 1 contains the headword of the Wiktionary article. The quotation is placed in the row below, with the word in question marked in bold; if a Russian translation of the quote exists, it is presented in the row below that. (See, for example, the entry "Moscow" in the Russian Wiktionary: http://ru.wiktionary.org/wiki/Moscow) The author's name, the title of the source book and the publication (or writing) date (in years) are given in the columns "author", "title", "from", and "to". Publication date: analysis and hypothesis. These quotations are presented in Figure 3. In order to understand the relatively high number of quotations in Figure 5 in the time range from the 1830s to the 1880s, the contribution of the writers most cited in the Russian Wiktionary was analysed. The writers with the highest numbers of quotations in the Russian Wiktionary are listed in the column "Author" in Table 2. The total number of quotations of the first seven authors in the period 1815-1910 (Chekhov to Leskov in Table 2) is 22429, which is 20.5%, i.e. one fifth of the whole number of quotations in this period in the Russian Wiktionary. It is possible that the heavy citation of these writers is the reason for the peak in Figure 5 in the time range from the 1830s to the 1880s. An open question remains: which authors contribute to the peak in Figure 5 in the pre-war period of the 1920s-1940s?
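The queries described above can be reproduced against a toy version of the quotation schema. The sketch below is not the actual wikokit database: the miniature schema and data are invented for illustration, with the column names quote.ref_id, quot_year."from" and quot_year."to" following Figure 2. It runs query 2 and query 4 on an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# A toy subset of the machine-readable Wiktionary schema (after Figure 2).
cur.executescript("""
    CREATE TABLE quote (
        id INTEGER PRIMARY KEY,
        text TEXT,
        ref_id INTEGER          -- NULL when the quotation has no source
    );
    CREATE TABLE quot_year (
        ref_id INTEGER,
        "from" INTEGER,
        "to" INTEGER            -- "to" > "from" means a range of years
    );
""")
cur.executemany("INSERT INTO quote VALUES (?, ?, ?)",
                [(1, "quote with range", 10),
                 (2, "quote with single year", 11),
                 (3, "quote without reference", None)])
cur.executemany("INSERT INTO quot_year VALUES (?, ?, ?)",
                [(10, 1840, 1842),   # a book written over several years
                 (11, 1869, 1869)])

# Query 2: quotations with a non-empty reference (a source).
with_ref = cur.execute(
    "SELECT COUNT(*) FROM quote WHERE ref_id IS NOT NULL").fetchone()[0]

# Query 4: quotations whose reference spans a range of years ("to" > "from").
ranged = cur.execute(
    'SELECT q.text FROM quote q JOIN quot_year y ON q.ref_id = y.ref_id '
    'WHERE y."to" > y."from"').fetchall()

assert with_ref == 2
assert ranged == [("quote with range",)]
```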
Distribution by century The analysis revealed that the earliest quotation in the Russian Wiktionary is dated 70 BC: Cicero, "Against Verres", in Latin, in the entry "asylum". In the course of the experiments, the distribution of quotes from the Russian Wiktionary dating from the 17th to the 21st century was computed (Figure 6). It can be seen that each subsequent century contains more quotations than the previous one. This tendency will probably continue, since the first 12 years of the present century have already contributed 10% of the whole number of quotations in the dictionary. CONCLUSION In this paper the framework of the machine-readable Wiktionary was designed, with emphasis on the possibility of adding new wiktionaries to the system. The architecture of the database of the quotation corpus was described, and an exemplary search task (to get a list of quotations in English which refer to books written over more than one year) was solved.
|
Q:
Why are Massachusetts electricity costs so high?
Massachusetts has the third highest electricity cost in the US (after Hawaii and Connecticut). Why is this?
A:
To answer this I have to make some guesses, because this is not an area of research for me; but having a spouse from there and having spent time there, I think I can make a somewhat educated guess, especially because Massachusetts uses a market system rather than a rate-setting system for generation.
First, Massachusetts has the third highest population density of any state, and it is concentrated mostly around Boston. Like rail, short distances are comparatively costly. Higher loads in small locations require greater infrastructure costs. Further, the variability will be harder to control, because things that impact east Boston also impact west Boston, while things that impact eastern Montana do not at all impact western Montana. There is less independence for unplanned events in a small space; the covariance is high. A nor'easter can be far more damaging, if it is damaging, in a small space. Repairs are costly.
Boston also has an old system. While I could be wrong, I wouldn't be surprised if it is aging and, depending on the regulation on carrying power, it might not be worth the investment without a regulatory structure that guarantees cost recovery. Further, Massachusetts is growing, which means plugging costly new infrastructure onto old infrastructure. This is a guess, not something I know, but it wouldn't surprise me if the influx is straining the infrastructure, or if the infrastructure is past its planned life. Boston was running electric cable cars in the 1880s.
The right place for this question would be with civil and electrical engineers. You will probably find there is a really long history of how the current state of affairs came to be in Massachusetts in general and Boston in particular. I would be surprised if Boston were not driving your costs. I have no knowledge of Connecticut's system, and if I had to guess for Hawaii, it is due to the intrinsic fragility of being on an island with lots of people and no grid to support local emergencies.
|
Study on Photoemission and Tunneling in Light-Modulated Scanning Tunneling Microscopy We propose the utilization of a light-modulated scanning tunneling microscope (LM-STM) to study the plasmon-enhanced photoemission and tunneling effect between a metal tip and Au nanostructures on a conductive substrate. The periodic Au nanorods are fabricated by electron beam lithography (EBL) and the topography is imaged using atomic force microscopy (AFM) and STM. Under irradiation by continuous lasers at 532 and 805 nm, the performance of electron tunneling emission is measured in the STM's ramp mode. We show that, by exciting localized surface plasmon resonances (LSPRs), a laser intensity as small as around 1 W/cm2, along with a bias voltage of less than 50 mV, can activate a significantly enhanced tunneling current. This method has potential for analyzing the electron energy states and transport characteristics of surface plasmons.
|
Straightforward oxidation of a copper substrate produces an underwater superoleophobic mesh for oil/water separation. A superhydrophilic and underwater superoleophobic Cu(OH)2-covered mesh with micro- and nanoscale hierarchical composite structures is successfully fabricated through a one-step chemical oxidation of a smooth copper mesh. Such a mesh, without any further modification, can selectively separate water from oil/water mixtures with high separation efficiency, and possesses excellent stability even after 60 uses. This method provides a simple, low-cost, and scalable strategy for the purification of oily wastewater.
|
//
// ImageRotateViewController.h
// CJUIKitDemo
//
// Created by ciyouzen on 2018/6/1.
// Copyright © 2018 dvlproad. All rights reserved.
//
// Image rotation
#import <UIKit/UIKit.h>
@interface ImageRotateViewController : UIViewController {
}
@end
|
def CrossHair(*args, **kwargs):
    """Display a cross hair using the current pen; the lines span the whole
    drawing surface and pass through the given point (delegates to the
    SWIG-generated wrapper)."""
    return _gdi_.DC_CrossHair(*args, **kwargs)
|
Cloning and direct sequencing from lambda cDNA libraries using the polymerase chain reaction: suppressin and the vasopressin receptor as models. A strategy using the polymerase chain reaction (PCR) to screen a lambda gt11 pituitary cDNA library for cDNAs encoding suppressin, a putative anti-proliferative protein, and a putative vasopressin receptor is described. The use of this technique will facilitate the demonstration of e.g. the presence of "neuropeptide receptors" on cells of the lymphoid system, confirming the concept of "shared ligands and receptors" by the neuroendocrine and the immune system. Neither of the genes encoding the proteins of the present study have previously been cloned. The PCR-screening procedure requires sequence information from the gene of interest which permits the generation of complementary primers. These primers are then used in combination with lambda phage primers complementary to regions flanking the cloning site in a PCR to amplify cDNAs derived from the gene of interest. This novel screening procedure yields cDNA related to the gene of interest, including the largest clone present in the library. To confirm the utility of this technique for cDNA libraries, the library was also screened using traditional cDNA hybridization techniques. The largest clone obtained by screening the cDNA library with PCR was the same as that obtained by the conventional technique. Thus, the results of these studies show that the PCR method can be used instead of more conventional means to screen cDNA libraries. Lastly, we describe a protocol for directly sequencing PCR-amplified DNA using the same primers that are used for amplification. The combined use of these two strategies permits cloning and sequencing of cDNAs from lambda cDNA libraries in a fraction of the time required using traditional screening techniques, but with identical results.
|
import './register';

import { ID } from '../../app/core/definitions/id';
import { DepositInput } from '../../app/deposit/deposit.in';
import { SignupInput } from '../../app/signup/signup.in';
import { TransferInput } from '../../app/transfer/transfer.in';
import { app } from '../register';
import { DepositPresenter } from './presenter/deposit/deposit.presenter';
import { SignupPresenter } from './presenter/signup/signup.presenter';
import { TransferPresenter } from './presenter/transfer/transfer.presenter';

const megaman: SignupInput = {
  firstname: 'Megaman',
  lastname: '<NAME>',
  email: '<EMAIL>',
  username: 'megamanx',
  password: '<PASSWORD>',
};

const zero: SignupInput = {
  firstname: 'Zero',
  lastname: '<NAME>',
  email: '<EMAIL>',
  username: 'zero',
  password: '<PASSWORD>',
};

const createUser = async (usr: SignupInput): Promise<ID> => {
  const presenter: SignupPresenter = app.container.resolve<SignupPresenter>('signupPresenter');
  const response = await app.main.signUp(usr);
  const output = await presenter.present(response);
  console.log(`User created: (#${output.id}) ${usr.firstname}`);
  return output.id;
};

const makeDeposit = async (userId: ID, username: string, deposit: DepositInput): Promise<void> => {
  const presenter: DepositPresenter = app.container.resolve<DepositPresenter>('depositPresenter');
  const response = await app.main.deposit(deposit);
  const output = await presenter.present(response);
  console.log(`User (#${userId}) ${username} received ${output.balance} coins`);
  console.log(`Transaction #${output.id}`);
};

// The name parameters are ordered (fromName, toName) to match the call site
// below, so each id is logged with the matching username.
const makeTransfer = async (
  toId: ID,
  fromId: ID,
  fromName: string,
  toName: string,
  transfer: TransferInput,
): Promise<void> => {
  const presenter: TransferPresenter = app.container.resolve<TransferPresenter>('transferPresenter');
  const response = await app.main.transfer(transfer);
  const output = await presenter.present(response);
  console.log(`User (#${toId}) ${toName} received ${transfer.value} coins from User (#${fromId}) ${fromName}`);
  console.log(`Transaction #${output.id}`);
};

(async (): Promise<void> => {
  try {
    const megamanId = await createUser(megaman);
    const zeroId = await createUser(zero);

    const megamanDeposit: DepositInput = { userId: megamanId, value: 100 };
    await makeDeposit(megamanId, megaman.username, megamanDeposit);

    const zeroDeposit: DepositInput = { userId: zeroId, value: 200 };
    await makeDeposit(zeroId, zero.username, zeroDeposit);

    const zeroTransfer: TransferInput = { from: zeroId, to: megamanId, value: 200 };
    await makeTransfer(megamanId, zeroId, zero.username, megaman.username, zeroTransfer);
  } catch (error) {
    console.log(error);
  }
})();
|
package processor;

import pcosta.kafka.api.MessageListener;
import pcosta.kafka.api.MessageMetadata;
import pcosta.kafka.api.annotation.DEFAULT_MESSAGE_TYPE;
import pcosta.kafka.api.annotation.MessagingListener;

/**
 * @author <NAME>
 */
@MessagingListener(topic = "Topic", message = DEFAULT_MESSAGE_TYPE.class)
public class MessagingListener_DefaultType implements MessageListener<DEFAULT_MESSAGE_TYPE> {

    @Override
    public void onMessage(final MessageMetadata messageMetadata, final DEFAULT_MESSAGE_TYPE message) {
        // empty
    }
}
|
import os

import requests


def validate(addresses):
    """Return the subset of addresses that the Mailgun API reports as valid."""
    valid_addresses = []
    for address in addresses:
        response = requests.get(
            'https://api.mailgun.net/v3/address/validate',
            auth=('api', os.environ['MAILGUN_API_KEY']),
            params={'address': address},
        )
        if response.json()['is_valid']:
            valid_addresses.append(address)
    return valid_addresses
|
import java.io.File;

import junit.framework.JUnit4TestAdapter;

import org.junit.Before;
import org.junit.Test;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

/**
 * @author David Moss
 */
public class TestAppCParser {

    /** My App.c file */
    private File myAppFile;

    public static junit.framework.Test suite() {
        return new JUnit4TestAdapter(TestAppCParser.class);
    }

    @Before
    public void setUp() {
        myAppFile = new File("app.c");
    }

    @Test
    public void appCExists() {
        assertTrue("app.c file doesn't exist", myAppFile.exists());
    }

    @Test
    public void basicParse() {
        AppCParser parser = new AppCParser(new File("app.c"));
        TestResult result = parser.parse();
        assertTrue(result.getFailMsg(), result.isSuccess());
    }

    @Test
    public void testParseResults() {
        AppCParser parser = new AppCParser(new File("app.c"));
        TestResult result = parser.parse();
        assertTrue(result.getFailMsg(), result.isSuccess());
        assertEquals("Wrong number of tests extracted", 3, parser.getTestCaseMap().size());
        assertEquals("Wrong test name: " + parser.getTestCaseMap().get(Integer.valueOf(0)), "TestStateC.TestForceC", parser.getTestCaseMap().get(Integer.valueOf(0)));
        assertEquals("Wrong test name: " + parser.getTestCaseMap().get(Integer.valueOf(1)), "TestStateC.TestToIdleC", parser.getTestCaseMap().get(Integer.valueOf(1)));
        assertEquals("Wrong test name: " + parser.getTestCaseMap().get(Integer.valueOf(2)), "TestStateC.TestRequestC", parser.getTestCaseMap().get(Integer.valueOf(2)));
    }
}
|
Pulmonary emphysema: current concepts of pathogenesis. Pulmonary emphysema is a major public health problem and is primarily a disease of smokers. The pathogenesis of emphysema in smokers is likely to be multifactorial and may involve protease-antiprotease imbalance, abnormal host response to injury, the inactivation of antiproteases by oxidants, and direct damage of lung tissue by pulmonary phagocytes. The data regarding current concepts of pathogenesis of emphysema in smokers are reviewed in this article.
|
/**
* @author Christopher L Merrill (see LICENSE.txt for license details)
*/
class ResourceStorageTests
{
@Test
void findResourceByIdAndType() throws IOException
{
MuseProject project = new SimpleProject();
MuseTask test = new MockTask();
test.setId("test1");
project.getResourceStorage().addResource(test);
Assertions.assertNotNull(project.getResourceStorage().getResource("test1", MuseTask.class));
}
@Test
void findSingleResourceById() throws IOException
{
MuseProject project = new SimpleProject();
MuseTask test1 = new MockTask();
test1.setId("test1");
project.getResourceStorage().addResource(test1);
MuseTask test2 = new MockTask();
test2.setId("test2");
project.getResourceStorage().addResource(test2);
MuseResource resource = project.getResourceStorage().getResource("test1");
Assertions.assertNotNull(resource);
Assertions.assertEquals(test1, resource, "should find the right resource");
ResourceToken<MuseResource> token = project.getResourceStorage().findResource("test2");
Assertions.assertNotNull(token);
Assertions.assertEquals(test2, token.getResource(), "token doesn't have the right resource");
}
@Test
void findMultipleResourcesByType() throws IOException
{
MuseProject project = new SimpleProject();
MuseTask test = new MockTask();
test.setId("Test1");
project.getResourceStorage().addResource(test);
test = new MockTask();
test.setId("Test2");
project.getResourceStorage().addResource(test);
List<ResourceToken<MuseResource>> resources = project.getResourceStorage().findResources(new ResourceQueryParameters(new MuseTask.TaskResourceType()));
Assertions.assertEquals(2, resources.size(), "Should find 2 resources");
Assertions.assertTrue(resources.get(0).getId().equals("Test1") ^ resources.get(1).getId().equals("Test1"), "Should find one resource with id 'Test1'");
Assertions.assertTrue(resources.get(0).getId().equals("Test2") ^ resources.get(1).getId().equals("Test2"), "Should find one resource with id 'Test2'");
}
@Test
void addAndRemoveResourceEvents() throws IOException
{
MuseProject project = new SimpleProject();
MuseTask test = new MockTask();
test.setId("Test1");
AtomicReference<ResourceToken<MuseResource>> resource_added = new AtomicReference<>(null);
AtomicReference<ResourceToken<MuseResource>> resource_removed = new AtomicReference<>(null);
ProjectResourceListener listener = new ProjectResourceListener()
{
@Override
public void resourceAdded(ResourceToken<MuseResource> added)
{
resource_added.set(added);
}
@Override
public void resourceRemoved(ResourceToken<MuseResource> removed)
{
resource_removed.set(removed);
}
};
project.addResourceListener(listener);
project.getResourceStorage().addResource(test);
Assertions.assertNotNull(resource_added.get());
Assertions.assertEquals(test.getId(), resource_added.get().getId());
project.getResourceStorage().removeResource(new InMemoryResourceToken(test));
Assertions.assertNotNull(resource_removed.get());
Assertions.assertEquals(test.getId(), resource_removed.get().getId());
// ensure listener is deregistered
resource_added.set(null);
project.removeResourceListener(listener);
project.getResourceStorage().addResource(test);
Assertions.assertNull(resource_added.get());
}
@Test
void refuseToAddDuplicateResource() throws IOException
{
MuseProject project = new SimpleProject();
MuseTask test = new MockTask();
test.setId("Test1");
project.getResourceStorage().addResource(test);
MuseTask duplicate = new MockTask();
duplicate.setId("Test1");
try
{
project.getResourceStorage().addResource(duplicate);
Assertions.fail("should have thrown an exception");
}
catch (IllegalArgumentException e)
{
// pass
}
}
@Test
void addInvalidFilename()
{
Assertions.assertFalse(new FilenameValidator().isValid("test<1>"), "bad filename would have been accepted");
}
}
|
Reverse Diastolic Intrarenal Flow Due to Calcineurin Inhibitor (CNI) Toxicity Renal calcineurin inhibitor (CNI) toxicity is a frequent side effect of immunosuppression with CNIs in solid organ transplantation, leading to acute and chronic renal failure. Acute CNI toxicity is due to vasoconstriction of the afferent and efferent arterioles and vacuolization of smooth muscle cells with medial hyalinosis, leading to narrowing of the vessel lumen.
|
Today, financial transactions such as credit card transactions are performed either in person or over a communication link like a telephone. In the case of credit or debit cards, the merchant receives a card number and an expiration date for the card and contacts a bank, credit card company or other financial institution to confirm the card information and verify its authenticity.
Merchants often use a VeriFone system (of Redwood City, Calif.) to obtain confirmation. The VeriFone system allows a merchant to automatically enter the necessary credit card information by swiping the card through a reader that reads the information from a magnetic strip on the card. Once read, the VeriFone system places a phone call to contact the institution to confirm the information. To merchants, the VeriFone system is very easy to use.
When the card holder is not present and wishes to make a credit card transaction, that individual typically contacts the merchant by telephone or by mail. Where time is a consideration, an individual will typically place a credit card order by phone. When making a credit card order by phone (or some other telecommunications medium), security becomes an issue because information is being sent on ordinary communications lines, which are typically not secure channels. Some unscrupulous individuals may intercept credit card information while it is being sent over the communication medium or even while at the merchant's place of business. What is needed is a way to perform secure credit card transactions using an insecure, communications medium.
Facsimile communication is a well-known communications technique. A facsimile transaction is generally not secure. A document may be encrypted prior to being sent to add security. However, the encryption scheme must be known by both the customer and the merchant so that the merchant is able to obtain the customer's information. To use such a system, both the merchant and the customer would have to be familiar with encryption technology, which is typically not the case. Also, ensuring that a merchant is able to decrypt a customer's document may require the customer and the merchant to have contact prior to placing an order. This prior contact may be neither possible nor convenient. Furthermore, a merchant could not practically maintain a large number of different encryption schemes for various customers. However, if only a small number of encryption schemes are used, security may be compromised. Thus, in the prior art, the use of encryption with facsimile transactions has some drawbacks. It would be desirable to utilize a communication medium, such as facsimile communications, without requiring prior contact between a customer and a merchant, and with ease of use for both the customer and the merchant.
Changes could be made to communication systems to make them more secure. However, these changes usually require added costs and some downtime to the system to implement the changes. It would be advantageous to use existing communications hardware that does not have to be modified and is already in place.
The present invention provides for secure financial transactions. The present invention is easy to use for both the customer and the merchant and employs a system of existing telecommunications technology that does not have to be modified.
|
<filename>src/rules/only-export-type-allowed.ts
import { ESLintUtils } from '@typescript-eslint/experimental-utils'
const createRule = ESLintUtils.RuleCreator(
() => `https://github.com/plantain-00/eslint-plugin-plantain#readme`
)
type MessageIds = 'onlyExportTypeAllowed'
export default createRule<[], MessageIds>({
name: 'only-export-type-allowed',
meta: {
type: 'suggestion',
docs: {
description:
'Only export type is allowed',
recommended: false
},
messages: {
onlyExportTypeAllowed: 'Only export type is allowed.'
},
schema: []
},
defaultOptions: [],
create(context) {
return {
ExportAllDeclaration(node) {
if (node.exportKind === 'value') {
context.report({
messageId: 'onlyExportTypeAllowed',
node
})
}
},
ExportNamedDeclaration(node) {
if (node.exportKind === 'value') {
context.report({
messageId: 'onlyExportTypeAllowed',
node
})
}
}
}
}
})
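For reference, the rule's decision reduces to a single check on the `exportKind` field that typescript-estree attaches to export declarations: `'type'` for type-only exports (e.g. `export type { T }`) and `'value'` otherwise (e.g. `export const x = 1`). A minimal sketch of that check, using a hypothetical `shouldReport` helper that is not part of the plugin:

```typescript
// Hypothetical helper mirroring the rule's check; not part of the plugin.
// typescript-estree sets exportKind to 'type' for type-only exports
// (e.g. `export type { T }`) and 'value' for everything else
// (e.g. `export const x = 1`, `export { x }`).
type ExportKind = 'value' | 'type'

function shouldReport(exportKind: ExportKind): boolean {
  // The rule reports only value exports; type-only exports pass.
  return exportKind === 'value'
}
```

Both the `ExportAllDeclaration` and `ExportNamedDeclaration` handlers above apply exactly this condition before calling `context.report`.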
|
/*
* Hash code for a 64bit integer
*/
uint32_t jenkins_hash_uint64(uint64_t x) {
uint32_t a, b, c;
a = (uint32_t) x;
b = (uint32_t) (x >> 32);
c = 0xdeadbeef;
final(a, b, c);
return c;
}
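The `final()` macro used by `jenkins_hash_uint64` is not shown in this snippet; it is the three-word final mixing step from Bob Jenkins' lookup3.c, together with the `rot` rotation helper it depends on. A self-contained version for reference (rotation constants as in lookup3.c):

```c
#include <stdint.h>

/* 32-bit left rotation, as in lookup3.c */
#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))

/* Final mixing of three 32-bit values, from Bob Jenkins' lookup3.c */
#define final(a, b, c)        \
{                             \
    c ^= b; c -= rot(b, 14);  \
    a ^= c; a -= rot(c, 11);  \
    b ^= a; b -= rot(a, 25);  \
    c ^= b; c -= rot(b, 16);  \
    a ^= c; a -= rot(c, 4);   \
    b ^= a; b -= rot(a, 14);  \
    c ^= b; c -= rot(b, 24);  \
}

/* Same function as above, now compilable on its own. */
uint32_t jenkins_hash_uint64(uint64_t x) {
    uint32_t a = (uint32_t) x;          /* low 32 bits  */
    uint32_t b = (uint32_t) (x >> 32);  /* high 32 bits */
    uint32_t c = 0xdeadbeef;            /* initial basis */
    final(a, b, c);
    return c;
}
```

Because `final` fully mixes all three words, equal inputs always hash equally, while flipping any input bit propagates through the rotate-xor-subtract chain and changes the returned `c`.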
|
Floral tributes have been left outside a semi-detached house in Bridge Farm Drive, Maghull, where a couple died following a fatal house fire.
Two people have died following a house fire in Maghull.
A man and a woman were found unconscious and not breathing when firefighters arrived at their Bridge Farm Drive home.
The call came from a neighbour who discovered that their bathroom was filling up with smoke, shortly after 11pm on Saturday.
When fire crews investigated, they found that the smoke was in fact coming from a fire at the adjoining property.
CPR was carried out on both casualties and they were taken to Aintree hospital, but were later pronounced dead.
The names of the couple are not known, nor is the cause of the blaze.
Five fire engines attended the scene.
|
/**
* @license
* Copyright (c) 2014, 2021, Oracle and/or its affiliates.
* Licensed under The Universal Permissive License (UPL), Version 1.0
* as shown at https://oss.oracle.com/licenses/upl/
* @ignore
*/
import NumberRangeValidator = require('../ojvalidator-numberrange');
import { IntlNumberConverter } from '../ojconverter-number';
export interface NumberConverterFactory {
createConverter(options?: IntlNumberConverter.ConverterOptions): IntlNumberConverter;
}
export interface NumberRangeValidatorFactory {
createValidator(options?: NumberRangeValidator.ValidatorOptions): NumberRangeValidator;
}
|
1. Field of the Invention
The present invention is directed in general to the field of semiconductor fabrication and integrated circuits. In one aspect, the present invention relates to forming PMOS field effect transistors (FETs) as part of a complementary metal oxide semiconductor (CMOS) fabrication process.
2. Description of the Related Art
CMOS devices, such as NMOS or PMOS transistors, have conventionally been fabricated on semiconductor wafers with a surface crystallographic orientation of (100), and its equivalent orientations, e.g., (010), (001), (00-1), where the transistor devices are typically fabricated with a <100> crystal channel orientation (i.e., on 45 degree rotated wafer or substrate). The channel defines the dominant direction of electric current flow through the device, and the mobility of the carriers generating the current determines the performance of the devices. While it is possible to improve carrier mobility by intentionally stressing the channels of NMOS and/or PMOS transistors, it is difficult to simultaneously improve the carrier mobility for both types of devices formed on a uniformly-strained substrate because PMOS carrier mobility and NMOS carrier mobility are optimized under different types of stress. For example, some CMOS device fabrication processes have attempted to enhance electron and hole mobilities by using strained (e.g. with a bi-axial tensile strain) silicon for the channel region that is formed by depositing a layer of silicon on a template layer (e.g., silicon germanium) which is relaxed prior to depositing the silicon layer, thereby inducing tensile stress in the deposited layer of silicon. Such processes enhance the electron mobility for NMOS devices by creating tensile stress in NMOS transistor channels, but PMOS devices are insensitive to any uniaxial stress in the channel direction for devices fabricated along the <100> direction. On the other hand, attempts have been made to selectively improve hole mobility in PMOS devices, such as by forming PMOS channel regions with a compressively stressed SiGe layer over a silicon substrate. However, such compressive SiGe channel PMOS devices exhibit a higher sub-threshold slope (SS) and higher voltage threshold temperature sensitivity. 
This is in large part due to a degraded quality of the interface between the compressive SiGe layer and the dielectric layer, which is quantified by the channel defectivity, or interface trap density (Dit), in the PMOS devices.
Accordingly, there is a need for improved semiconductor processes and devices to overcome the problems in the art, such as outlined above. Further limitations and disadvantages of conventional processes and technologies will become apparent to one of skill in the art after reviewing the remainder of the present application with reference to the drawings and detailed description which follow.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the drawings have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements for purposes of promoting and improving clarity and understanding. Further, where considered appropriate, reference numerals have been repeated among the drawings to represent corresponding or analogous elements.
|
Organic acids: Versatile stress response roles in plants. Organic acids (OAs) are central to cellular metabolism. Many plant stress responses involve exudation of OAs at the root-soil interface, which can improve soil mineral acquisition and toxic metal tolerance. Because of their simple structure, the Low Molecular Weight Organic Acids (LMWOAs) are widely studied. We discuss the conventional roles of OAs, along with some newly emerging roles in plant stress tolerance. OAs are more versatile in their role in plant stress tolerance and are efficient chelating agents when compared with other acids, such as amino acids. Root OA exudation is important in soil carbon sequestration. These functions are key processes for combating climate change and helping with more sustainable food production. We briefly review the mechanisms behind the enhanced biosynthesis, secretion and regulation of these activities under different stresses. Also, an outline of the transgenic approaches targeted towards the enhanced production and secretion of OAs is provided. A re-occurring theme of OAs in plant biology is their role as 'acids' modifying pH, as 'chelators' binding metals, or as 'carbon sources' for microbes. We argue that these multiple functions are key factors for understanding these molecules' important roles in plant stress biology. Finally, we contemplate how the functions of OAs in plant stress responses can be put to use and what the important unanswered questions are.
|
US refineries on the Gulf that had been anticipating a boom from Canada’s Alberta tar sands via the planned Keystone XL pipeline are becoming apathetic about the mired pipeline’s future, according to Wednesday’s Wall Street Journal. As the domestic US oil boom has kept refineries busy and rail and new pipelines have filled the shipping gap that Keystone would have filled, the refineries on the Gulf that had been waiting to process the Canadian heavy crude “increasingly doubt that the controversial Keystone XL pipeline expansion will ever be built” and “don’t particularly care.” But does that mean that the 830,000 barrels of heavy crude that would have streamed through the XL pipeline have become irrelevant? Not quite. The pipeline is still the best hope for Canadian tar sands to make it to refineries. Without it, Alberta’s surging industry might find itself choked with no way to move all the oil it produces.
Railroads are carrying soaring amounts of crude from Canada down to refineries along the U.S. Gulf Coast, reducing the need for the TransCanada Corp. project, which is still awaiting approval from the U.S. government after two years of delays.
Meanwhile, a rival pipeline company, Enbridge Inc., is expanding existing pipes to carry Canadian crude south—and it doesn’t need federal permission because it’s using existing pipeline rights of way. In addition, so much oil is sloshing around the U.S. from its own wells that refiners don’t need lots more heavy crude from the north to keep busy.
“Keystone XL has been back-burnered for so long that any relevant parties have been able to make plans as though the project never even existed in the first place,” says Sam Margolin, an analyst at Cowen & Co.
The domestic oil boom from sources like North Dakota’s Bakken region and the sudden glut of tar sands oil coming out of Alberta have overwhelmed existing pipelines, creating bottlenecks and forcing oil companies to find other ways to move oil from wells to refineries. Between 2011 and 2012, shipments to refineries by truck rose by 38 percent, barge transport increased by 53 percent, and rail shipments quadrupled.
The hitch is that Canada is still staring down a massive planned increase in tar sands production—and without Keystone XL being built, it might not be able to move the oil out of Alberta fast enough to keep pace with production. Canadian tar sands produced 1.8 million barrels per day in 2012, and are hoping to crank that up to about 5 million per day by 2030. In fact, Alberta could pass that milestone as soon as 2016.
|
Mitochondrial Quality Control in Age-Related Pulmonary Fibrosis Idiopathic pulmonary fibrosis (IPF) is an age-related interstitial lung disease of unknown etiology. About 100,000 people in the U.S. have IPF, with a 3-year median life expectancy post-diagnosis. The development of an effective treatment for pulmonary fibrosis will require an improved understanding of its molecular pathogenesis and the normal and pathological hallmarks of the aging lung. An important characteristic of the aging organism is its lowered capacity to adapt quickly to, and counteract, disturbances. While it is likely that DNA damage, chronic endoplasmic reticulum (ER) stress, and accumulation of heat shock proteins are capable of initiating tissue repair, recent studies point to a pathogenic role for mitochondrial dysfunction in the development of pulmonary fibrosis. These studies suggest that damage to the mitochondria induces fibrotic remodeling through a variety of mechanisms including the activation of apoptotic and inflammatory pathways. Mitochondrial quality control (MQC) has been demonstrated to play an important role in the maintenance of mitochondrial homeostasis. Different factors can induce MQC, including mitochondrial DNA damage, proteostasis dysfunction, and mitochondrial protein translational inhibition. MQC constitutes a complex signaling response that affects mitochondrial biogenesis, mitophagy, fusion/fission and the mitochondrial unfolded protein response (UPRmt) that, together, can produce new mitochondria, degrade the components of the oxidative complex or clear the entire organelle. In pulmonary fibrosis, defects in mitophagy and mitochondrial biogenesis have been implicated in both cellular apoptosis and senescence during tissue repair. MQC has also been found to play a role in the regulation of other protein activity, inflammatory mediators, latent growth factors, and anti-fibrotic growth factors.
In this review, we delineate the role of MQC in the pathogenesis of age-related pulmonary fibrosis.
Introduction
Idiopathic pulmonary fibrosis (IPF) represents one of the most aggressive and irreversible lung diseases; it is usually diagnosed in the fifth decade of life and carries a very poor prognosis, an unknown etiology and limited therapeutic options. Metabolic alterations seen in aging have also been found in the lungs of patients with IPF, raising the possibility that aging may contribute to IPF pathogenesis. Alveolar epithelial cells (AEC) of IPF lungs have been shown to exhibit short telomeres, predisposing these cells to apoptosis and leading to abnormal parenchymal architecture, dysfunctional re-epithelialization and an exaggerated inflammatory reaction that contributes to the development of fibrotic scarring. In addition, fibroblasts of IPF lungs show abnormal cellular morphology with reduced mitochondrial mass, disrupted membranes and severely ruptured cristae, predisposing the fibroblasts to maladaptive responses to stress after injury and increasing susceptibility to fibrosis. Among these alterations, mitochondrial dysfunction is recognized as a major hallmark that accounts for the predilection to fibrosis in aging. While mitochondria were initially viewed as just the powerhouse of the cell, advances in the field have allowed us to understand the multiple roles of this organelle. Mitochondria are involved in heme biosynthesis, intracellular calcium regulation, ATP production and fatty acid synthesis. Thus, to guarantee adequate organelle function and maintenance of intracellular homeostasis, mitochondria rely on quality control pathways: mitochondrial biogenesis, fusion/fission, mitophagy and the mitochondrial unfolded protein response (UPRmt). Biogenesis of mitochondria allows proper cellular homeostasis; it relies on the action of PGC-1a, the major regulator of this process, and its activation via different proteins (AMPK, SIRT1, MAPK and CREB).
Fusion/fission represents the cornerstone of mitochondrial dynamics, controlling cellular bioenergetics and mitochondrial networks via the actions of Drp1, Fis1, Mff, Mfn1 and Mfn2. Selective removal of damaged and dysfunctional mitochondria requires coordinated activation of the PINK1/Parkin pathway. Accumulation of unfolded proteins inside the mitochondria leads to activation of the mitochondrial unfolded protein response (UPRmt) with the goal of promoting repair and recovery and restoring mitochondrial proteostasis. Activation of these machineries allows maintenance and regulation of the organelle's metabolism, biogenesis, ROS production and mitochondrial DNA (mtDNA) damage repair. Disruption of these processes can subsequently lead to the accumulation of dysfunctional mitochondria, alter the intracellular environment and contribute to the development of age-related lung fibrosis. While the exact triggering injury leading to the irreversible fibrosis seen in IPF is unknown, dysregulation of the mitochondrial quality control systems suggests a clear pathogenic role that could explain the metabolic dysregulation, proteostatic alterations and decline in mitochondrial function of patients with IPF. This review explores the mitochondrial quality control pathways and their association with the development of age-related lung fibrosis, and describes new potential therapeutic targets.
Mitochondrial Biogenesis
Mitochondria biogenesis is a complex and tightly regulated process. Mitochondrial biogenesis is defined as an increase in the number of mitochondria arising from the growth and division of pre-existing mitochondria. The major regulator of this process, the co-transcriptional factor PGC-1a, has the ability to activate different transcription factors such as nuclear respiratory factor 1 (NRF-1) and nuclear respiratory factor 2 (NRF-2).
These transcription factors, in turn, increase the expression of mitochondrial transcription factor (Tfam), which drives transcription and replication of mitochondrial DNA (mtDNA) (Figure 1). In response to injury or proteostatic stress, mitochondria biogenesis is stimulated to increase cellular energy production through AMP-activated protein kinase (AMPK), a protein considered another major regulator of mitochondrial biogenesis when energy levels inside the cell decrease. Furthermore, the reduction of AMPK and PGC-1a is a major contributing factor to the mitochondrial dysfunction evidenced in the bleomycin-induced lung fibrosis model. Another mechanism implicated is the upregulation of the ROS-producing enzyme NADPH oxidase-4 (Nox4), which represses NRF-2 and Tfam, leading to diminished biogenesis. In addition, experimental models have suggested a direct association between DNA damage and activation of the injury sensors poly(ADP-ribose) polymerase 1 (PARP-1) and p53, which diminish mitochondrial biogenesis by inhibiting the expression of PGC-1a. Mitochondria biogenesis is closely associated with mTOR signaling; activation of this pathway directly leads to increased expression of PGC-1a. Studies have demonstrated that activation of mTOR complex 1 (mTORC1) upregulates biogenesis by preventing the binding of the eukaryotic translation initiation factor 4E (eIF4E)-binding proteins (4E-BP) to their targets. As a result, translation of nuclear-encoded mitochondrial proteins of complex V, complex I and Tfam takes place. Furthermore, mTOR signaling is a highly regulated pathway; proper function and quality control rely on proteasome and ubiquitin-ligase activity to accomplish the protein synthesis required for mitochondria biogenesis.
The actions of mTORC1 are carried out within minutes of activation, coordinating the inhibition of autophagy, promoting protein synthesis, suppressing protein turnover and activating the transcription factor SREBP1. The underlying mechanisms by which proteolysis is initially inhibited are unknown. It is possible that phosphorylation of E3 ligases by mTORC1 inhibits ubiquitination and subsequently proteolysis by the 20S subunit of the proteasome. During the early response, inhibition of protein degradation allows the proper stability of newly generated ribosomes and translation initiation. In the following hours, SREBP1 leads to transcriptional activation of the nuclear factor erythroid-derived 2-related factor (Nrf1, also known as NFE2L1), which promotes proteasome gene expression as a delayed response. Therefore, increased protein degradation is seen in the late stages of the mTORC1 response. Conceivably, the delayed production of proteasomes creates an efficient clearance of proteins to protect the cell from the accumulation of misfolded, cytotoxic proteins and simultaneously maintains an adequate pool of recycled amino acids to continue mitochondria biogenesis. The interaction between mitochondrial dysfunction, mTORC1 and the ubiquitin-proteasome system might represent the key element for understanding in depth the mechanisms that lead to mitochondrial biogenesis dysregulation in age-related lung fibrosis.
Figure 1. Regulation of mitochondria biogenesis. Activation of different signaling pathways, such as AMPK, SIRT1, CREB and MAPK, has been associated with mitochondria biogenesis by increasing PGC-1a gene transcription. PGC-1a represents the major co-transcriptional factor that regulates mitochondria biogenesis by activating nuclear respiratory factor 1 (NRF1) and nuclear respiratory factor 2 (NRF2), which leads to increased expression of mitochondrial transcription factor (Tfam), driving transcription and replication of mitochondrial DNA (mtDNA).
Currently, the role of mitochondria biogenesis in age-related lung fibrosis is not fully elucidated. Studies have reported that with aging, mitochondria biogenesis declines due to a reduction in PGC-1a, PGC-1b, AMPK and p53 through senescence-associated mechanisms. Low expression of PGC-1a was evidenced in patients with IPF and in fibrotic mouse lungs after bleomycin treatment.
In addition, Yu et al. demonstrated that aerosolized thyroid hormone (TH) blunted bleomycin- and TGF-B-induced fibrosis in mice. Furthermore, TH was able to restore mitochondrial function and morphology in AECII through PINK1 and PGC-1a actions. However, as we demonstrated in our study, mitochondria biogenesis is increased in the AECII of old mice through upregulation of the mTORC1/PGC-1 signaling axis. In addition, we found upregulation of other downstream molecules associated with mitochondria biogenesis, including Tfam and NRF1. Previous reports have demonstrated that an increase in mitochondria biogenesis could lead to cellular senescence by enhancing ROS-mediated damage as a result of boosting mitochondrial respiration. This is based on the fact that administration of MitoTempo following bleomycin exposure in AECII inhibits the mTORC1 response and suppresses the expression of senescence markers. Moreover, chronic activation of the mTORC1/PGC-1 axis could explain the accumulation of large, dysmorphic mitochondria found in the aged mice by over-activating biogenesis and inhibiting the mitophagy needed to remove defective mitochondria. Although many pieces remain unsolved, the mTORC1/PGC-1 axis certainly represents a potential therapeutic target in the treatment of age-related lung fibrosis. Further translational studies are needed; nonetheless, current evidence indicates that quality control of mitochondria biogenesis plays a crucial role in the pathogenesis of age-related lung fibrosis, and distortions in its regulation can shift the cell towards senescence and promote inflammation and fibrosis.
Mitochondrial Dynamics
Mitochondria are dynamic organelles in that they require the balance of fission and fusion for proper function and adaptation to cell growth, division, and injury response. Fission and fusion represent the first line of quality control in the setting of proteostatic stress.
Quality control of fission and fusion is tied to appropriate cellular bioenergetics and homeostasis of mitochondrial networks. Fission is carried out by the action of dynamin-related protein 1 (Drp1), fission 1 (Fis1), and mitochondria fission factor (Mff). Fission occurs when Drp1, located in the cytosol or at the endoplasmic reticulum, is recruited to the OMM, where it constricts the mitochondrion, resulting in two separate organelles. In order to achieve this, Drp1 requires the formation of a complex through an interaction with the OMM receptors Fis1 and Mff. Hydrolysis of the Drp1-bound GTP constricts the Drp1 ring and allows severing of the enclosed membranes, resulting in fission. Moreover, mitochondria fusion is carried out by the proteins mitofusin 1 (Mfn1), mitofusin 2 (Mfn2), and optic atrophy 1 (OPA1). This process is initiated by Mfn1 and Mfn2, which contain two transmembrane domains anchored in the OMM and a GTPase domain oriented towards the cytoplasm, while OPA1 is found in the IMM and has a dynamin-related GTPase capacity. Then, mitochondrial lipid bilayer mixing is performed in a Mfn1/Mfn2-dependent manner, with GTP hydrolysis providing the energy for OMM fusion. Similarly, IMM fusion requires an analogous action by OPA1 to allow merging. Age-related lung fibrosis comprises a very complex physiopathology that involves a constant cycle of injury and repair along with persistent release of pro-inflammatory and pro-fibrotic peptides, which ultimately lead to mitochondrial dysfunction (Figure 2). Mouse models of IPF have shown increased mitochondria fusion. This increase in mitochondria fusion leads to alterations of mitochondrial dynamics by impairing mitophagy and results in the accumulation of dysmorphic mitochondria. To support this, Bueno et al. reported that AECII in old lungs exhibit enhanced expression of OPA1 and Mfn1 with inactivation of Drp1, in addition to impaired mitophagy and a decline in mitochondrial function.
Also, one suggested mechanism of Drp1 regulation is carried by the E3 ligase MARCH5 and SUMO-1, where ubiquitination and sumolyation stabilizes or recruits Drp1 to modulate fission. Further, the complex formed by Mdm30 with the ubiquitin ligase Skp1-Cullin-F-box (SCF-Mdm30) regulates the degradation of Mfn1/Mfn2. Similarly, Cilenti et al. determined that the MULAN/MAPL ligase can modulate mitochondrial morphology by promoting Mfn2 degradation. Interestingly, Radke et al. demonstrated that proteasome inhibition was capable of augmenting mitochondrial fission. This observation points out a link between the UPS and mitochondrial dynamics. In the aged lung, it is possible that the activity of the proteasome machinery modulates mitochondrial morphology and function by restructuring proteostasis along with regulating fusion/fission. Ultimately, recognition and deeper understanding of the mechanisms involved could lead to new therapeutic approaches targeting mitochondria quality control. and mitochondrial dynamics. In the aged lung, it is possible that the activity of the proteasome machinery modulates mitochondrial morphology and function by restructuring proteostasis along with regulating fusion/fission. Ultimately, recognition and deeper understanding of the mechanisms involved could lead to new therapeutic approaches targeting mitochondria quality control. Figure 2. Mitochondria dysfunction with aging. In aging, alterations of the mitochondria DNA (mtDNA) such as deletions, depletion or point mutations can lead to defects in the mitochondria oxidative phosphorylation system (OXPHOS) decreasing ATP production and increasing reactive oxygen species (ROS) production, generating further mtDNA damage, causing impaired fusion/fission and ultimately leading to mitochondrial dysfunction. Mitophagy Mitophagy is a selective autophagy where damaged and dysfunctional mitochondria are degraded. 
This maintains a homeostatic environment between synthesis and degradation of cellular organelles and proteins, where byproducts are sent to be recycled in other metabolic pathways. Mitophagy is crucial in the quality control of mitochondria not only at the organelle level but also at a cellular one, given its association with senescence, apoptosis, and necroptosis. Impairments in mitophagy have been associated with multiple diseases, including age-related pulmonary fibrosis. In the healthy mitochondria, mitophagy relies in the action of PTEN-induced putative kinase 1 (PINK1), a serine/threonine kinase which is continuously imported to the inner membrane (IM) to be cleaved and degraded by the mitochondria-specific proteases presenilin-associated rhomboid-like protein (PARL) and mitochondrial processing peptidases (MPP). In the setting of mitochondrial damage, PINK1 acts as the initial sensor to activate mitophagy, where stress-induced membrane depolarization inactivates PARL and MPP. This leads to the stabilization and accumulation of PINK1 on the outer-mitochondrial membrane (OMM), which allows the kinase domain of PINK1 to phosphorylate OMM proteins, in addition to recruitment and activation of the E3-ubiquitin ligase Parkin. Once Parkin is activated, it can polyubiquitinate proteins to the OMM, including voltage-dependent-anion channel 1 (VDAC1), mitofusin 1 (Mfn1), mitofusin 2 (Mfn2) and mitochondria1 rho 1 (MIRO) in order to be sub-sequentially phosphorylated by PINK1. The ubiquitination of these proteins on the OMM leads to the formation of the autophagosome by promoting a bridge with microtubule-associated protein 1 light chain 3 (MAP1LC3/LC3) on phagophores through adaptor protein SQSTM1/p62. Additionally, the ubiquitination of Mfn1 and Mfn2 results in fission, fragmentation, and subsequent degradation with mitophagy through the formation of the autophagosome. Mitochondria dysfunction with aging. 
Evidence suggests that impairments in the PINK1/Parkin pathway play a key role in the pathogenesis of age-related pulmonary fibrosis. A decrease in PINK1, a decrease in autophagy markers, and an increase in mitochondrial mass have been associated with aging and with the lungs of IPF patients. In AECII from IPF lungs, mitochondria show dysfunction, evidenced by an enlarged and dysmorphic structure with severely ruptured cristae. This suggests that the accumulation of dysfunctional mitochondria is, in this case, a consequence of impaired mitophagy, especially in highly fibrotic areas. Thus, PINK1 is a crucial mitochondrial quality control regulator, not only for mitophagy but also as a key element in mitochondrial dynamics. PINK1 knockdown increases the expression of pro-fibrotic peptides, such as TGF-B1 and TGF-B2, in AECII after tunicamycin and bafilomycin administration. In addition, PINK1-deficient mice exposed to intratracheal instillation of bleomycin had impaired activity of mitochondrial ETC complexes I and IV, leading to mitochondrial dysfunction, along with an increased susceptibility to developing lung fibrosis through upregulation of the cytokine TNF and downregulation of the cytokine IL-10. Moreover, low PINK1 expression correlates with high expression of apoptotic markers, including caspases. This finding may be attributed to a combination of multiple elements.
Evidence shows that dysmorphic/dysfunctional mitochondria have reduced transmembrane potential, decreased ETC activity, increased ROS production and increased opening of the mitochondrial permeability transition pore, all of which could lead to activation of pro-fibrotic events in AECII. Interestingly, mitophagy homeostasis can be impaired by the upstream effects of increased ER stress, and lung fibrosis has been associated with high levels of ER stress markers. This inappropriate response to proteostasis alterations leads to the downregulation of PINK1 in AECII through the expression of ATF3, which acts as a transcriptional repressor of PINK1. It has been reported that patients with IPF have decreased levels of Parkin expression in isolated lung fibroblasts and myofibroblasts. In addition, Parkin-deficient mice showed exacerbated lung fibrosis in the bleomycin-induced model. Kobayashi et al. reported that impaired mitophagy due to Parkin deficiency induces activation of platelet-derived growth factor receptor (PDGFR)/mammalian target of rapamycin (mTOR) signaling, prompting differentiation and proliferation of myofibroblasts. Furthermore, Yu et al. described a potential therapeutic, anti-fibrotic effect of thyroid hormone (TH). In this study, aerosolized TH resolved fibrosis in two mouse models by promoting biogenesis, improving mitochondrial bioenergetics, and decreasing apoptosis of AEC in a PPARGC1A- or PINK1-dependent manner. This finding represents a novel approach to reversing fibrotic changes; nonetheless, the exact molecular mechanism remains unclear, especially given that previous studies have reported reduced expression of both PINK1 and PPARGC1A in IPF lungs. The integrity of the proteins involved in mitophagy is tightly regulated: for instance, Parkin is regulated by the UPS through its ubiquitin-like domain, which has a high affinity for the subunit Rpn13 of the 26S proteasome.
This allows proteasomal degradation of outer mitochondrial membrane (OMM) proteins, and of Parkin itself once mitophagy has ended, ensuring proper quality control of this process. In contrast, human IPF lungs have impaired proteasome activity, which leads to the accumulation of misfolded and aggregated proteins in the ER, suggesting that the decline in mitophagy evidenced in IPF lungs could be associated with high levels of ER stress and altered proteasome activity. Together, these observations highlight that quality control of this pathway is diminished in lung fibrosis. Mitophagy represents an adaptive response with a protective role against the development of fibrosis, suggesting manipulation of this pathway as a potential pharmacological target.

Mitochondrial Unfolded Protein Response (UPRmt)

The UPRmt is a stress response pathway that promotes repair and recovery in several conditions, such as mitochondrial DNA defects, ROS detoxification, and perturbations in mitochondrial proteostasis, protein import machinery or mitochondrial translation. Currently, the molecular mechanism of the UPRmt has been more broadly described in the model organism Caenorhabditis elegans (C. elegans) than in mammals. In C. elegans, the master regulator of the UPRmt is ATFS-1; in mammals, the master regulator remains unclear, although three possible homologs of ATFS-1 have been identified: ATF4, ATF5, and CHOP. In C. elegans, under favorable conditions, ATFS-1 is localized to the mitochondria, where it makes its way to the mitochondrial matrix and is degraded by the protease LONP1. However, in the presence of mitochondrial stress, such as unfolded proteins in the matrix, ATFS-1 translocates to the nucleus to transcribe UPRmt genes. Unfolded proteins in the matrix are cleaved by CLPP-1, and the cleaved peptides are shuttled out of the mitochondria by the transporter HAF-1, located in the intermembrane space (IMS).
Accumulation of cleaved peptides prevents mitochondrial protein import by inhibiting the action of the translocase of the outer membrane (TOM) and the translocase of the inner membrane (TIM). As a result, the leucine zipper transcription factor ATFS-1 is translocated to the nucleus and initiates transcription of genes such as chaperones, proteases, OXPHOS complexes, and protein import components to promote recovery. In mammals, it is believed that the UPRmt response is regulated by the actions of three bZIP transcription factors: CHOP, ATF4 and ATF5. Interestingly, studies have suggested that each of these transcription factors might play a different role during the response and that their activation obeys different stimuli. For instance, ATF5 is required to increase the expression of mitochondrial chaperones and proteases, while ATF4 levels are upregulated in response to mtDNA depletion. At present, the regulatory mechanisms of ATF5, CHOP and ATF4 are not completely understood, in contrast to what is known in C. elegans about the actions of LONP1 and CLPP-1. In addition, it is not clear how the three transcription factors coordinate signaling during the pathway, or whether each can be activated independently. Further investigations are needed to understand the different regulatory components of the UPRmt in order to fully elucidate its role in diseases associated with mitochondrial dysfunction (Figure 3). The pathway described above is considered the canonical UPRmt, but other UPRmt pathways have been described: the UPRmt-sirtuin axis, the UPRmt IMS/ER axis, and the UPRmt translational axis. While the canonical UPRmt axis aims at restoring proteostasis by increasing the mitochondria's ability to fold proteins correctly, the other axes use different strategies, such as halting translation or removing aberrant proteins.
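In this canonical scheme, the subcellular destination of ATFS-1 acts as the stress readout: efficient mitochondrial import means degradation by LONP1, while import failure under stress reroutes the factor to the nucleus. A minimal sketch of that routing logic follows; the 0-to-1 "import efficiency" scale and the 0.5 threshold are invented illustrative quantities, not measured biology:

```python
# Toy sketch of ATFS-1 routing in the canonical UPRmt (C. elegans model
# described in the text). Threshold and scale are illustrative only.

UPR_GENES = ["chaperones", "proteases", "OXPHOS complexes",
             "protein import components"]

def atfs1_fate(import_efficiency):
    """Return (destination, transcribed gene programs) for ATFS-1."""
    if import_efficiency >= 0.5:
        # Favorable conditions: ATFS-1 reaches the matrix, LONP1 degrades it.
        return ("degraded in matrix by LONP1", [])
    # Stress: CLPP-1-cleaved peptides exported by HAF-1 inhibit TOM/TIM,
    # so ATFS-1 fails to import and instead translocates to the nucleus.
    return ("translocated to nucleus", list(UPR_GENES))

dest, program = atfs1_fate(0.2)
```

The design mirrors the text's point that the UPRmt needs no dedicated stress sensor: the same import machinery that normally silences ATFS-1 becomes the sensor when it fails.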
Increased oxidative stress causes damage to mitochondrial lipids, proteins and mtDNA, resulting in proteotoxic stress in the mitochondria. A recent study by Papa et al. described the role of the mitochondrial deacetylase Sirt3 in the UPRmt. Proteotoxic stress activates Sirt3 to coordinate the antioxidant machinery of the mitochondria by activating the transcription factor FOXO3A, which subsequently increases the expression of the mitochondrial superoxide dismutase MnSOD and catalase. Moreover, Sirt3 activity is required to maintain mitochondrial integrity during proteotoxic stress independently of CHOP expression. Also, inhibition of Sirt3 leads to the downregulation of mitophagy markers and the activation of apoptotic markers, suggesting a major role in the clearance of damaged or dysfunctional mitochondria. Furthermore, patients with IPF have increased acetylation of Sirt3 in AEC, leading to decreased activity of this enzyme and increased acetylation of MnSOD and OGG1 in the lung tissue. Jablonski et al. showed that bleomycin exposure in Sirt3-/- mice augmented oxidant-induced mtDNA damage, apoptosis, and lung fibrosis, while mice overexpressing Sirt3 had preserved mtDNA and were protected from bleomycin-induced lung fibrosis, highlighting Sirt3 as a key element in maintaining the integrity of AEC mtDNA and a potential therapeutic target for preventing mtROS injury and apoptosis in IPF. In sum, these findings show the critical role of Sirt3 as a regulator of mitochondrial network integrity that promotes recovery and prevents dysfunction separately from the actions of the main transcription factors that drive the UPRmt. This pathway, distinct from the canonical pathway, is referred to as the UPRmt-sirtuin axis.
Figure 3. Mitochondria unfolded protein response (UPRmt). In both mammals and C. elegans, activation of the UPRmt is caused by accumulation of unfolded proteins inside the mitochondrial matrix. Proteolysis of impaired proteins is carried out by the ClpP protease. In C. elegans, export of these peptides to the cytosol is accomplished by HAF-1, which activates ATFS-1 (in C. elegans) or ATF5 (in mammals) to allow nuclear translocation and initiate transcription of this pathway's genes. Physiologically, to prevent unnecessary activation of this signaling, ATFS-1 is imported into the mitochondria via its mitochondrial-targeting sequence (MTS) through the translocase of the outer membrane (TOM) and the translocase of the inner membrane (TIM), allowing degradation of ATFS-1 by the protease LON. Furthermore, in C. elegans, translocation to the nucleus of the proteins UBL-5, LIN-65, MET-1 and MET-2 upon UPRmt activation facilitates the binding of ATFS-1 through chromatin remodeling. Ultimately, activation of ATFS-1 and ATF5 induces genes that promote OXPHOS recovery, ROS detoxification, expression of mitochondrial protein import components and upregulation of chaperones and proteases to re-establish mitochondrial proteostasis.
A halt or interference of translation is a hallmark of the UPRmt translational axis, but also of the integrated stress response (ISR). The ISR is activated under cellular stress that is not solely dependent on the mitochondria; for example, it can be activated in response to excessive ROS or amino-acid depletion. Some studies suggest that UPRmt activation requires concurrent ISR activation, although the exact mechanism is still unknown. ISR responses are carried out by kinases that phosphorylate the eukaryotic translation initiation factor 2 subunit 1 (eIF2). The four kinases known to phosphorylate eIF2 are GCN2, PERK, HRI, and PKR. Phosphorylation of eIF2 interferes with the formation of the translation initiation complex; it thus inhibits protein translation in the cytoplasm, reducing the folding load on mitochondrial chaperones while simultaneously allowing translation of ISR-specific mRNAs, such as ATF4. While activation of the ISR during acute mitochondrial stress enables protein recovery, the effects of chronic activation of the ISR-ATF4 pathway are yet to be elucidated. Studies have shown that increased expression of ATF4 facilitated the progression of age-related diseases by modulating the actions of p21 and p27, suggesting that mitochondrial dysfunction in aging could lead to chronic activation of the ISR-ATF4 pathway and that overexpression of ATF4 might, in fact, be harmful in this setting (Figure 4). Although the exact mechanism by which mitochondrial dysfunction promotes chronic ISR-ATF4 activation has not yet been determined, blockade of this pathway opens a new gate for pharmacological inhibitors to treat age-related diseases.

Figure 4. Mitochondria dysfunction and the integrated stress response (ISR).
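The ISR logic above reduces to a translational gate: any of four kinases phosphorylates eIF2, suppressing bulk translation while permitting ISR-specific transcripts such as ATF4. A toy sketch of that gate follows; the stress-to-kinase mapping beyond the stimuli named in the text (heme deficiency for HRI, viral dsRNA for PKR) is standard background added here for illustration, and the dictionary return format is invented:

```python
# Toy model of the ISR translational gate described in the text.
# Return-value structure is an illustrative simplification.

ISR_KINASES = {
    "amino-acid depletion": "GCN2",
    "ER stress": "PERK",
    "heme deficiency": "HRI",
    "viral dsRNA": "PKR",
}

def translation_state(stress=None):
    """Return a dict describing translation under a given stress (or none)."""
    if stress is None:
        # No eIF2 phosphorylation: bulk translation proceeds, ATF4 stays off.
        return {"eIF2_P": False, "bulk_translation": True, "ATF4": False}
    kinase = ISR_KINASES[stress]  # the responding eIF2 kinase
    # Phosphorylated eIF2 blocks initiation-complex formation, halting bulk
    # translation while permitting ISR-specific mRNAs such as ATF4.
    return {"eIF2_P": True, "bulk_translation": False, "ATF4": True,
            "kinase": kinase}

stressed = translation_state("ER stress")
```

The sketch makes the acute-versus-chronic distinction concrete: the same ATF4-on state that aids recovery after a transient stress becomes the persistently active output when the stress input never clears.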
In aging, it is possible that mitochondrial stress and mitochondrial dysfunction caused by mitochondrial DNA (mtDNA) damage activate the integrated stress response (ISR), leading to phosphorylation of the eukaryotic translation initiation factor 2 subunit 1 (eIF2) by the kinases GCN2, PERK, HRI and PKR. This causes activation of the transcription factor 4 (ATF4) to allow protein recovery and promote cellular homeostasis. Overexpression of ATF4 can modulate the actions of p21 and p27, drive the senescence-associated secretory phenotype (SASP) and stimulate a metabolic reprogramming in the cell that could predispose to fibrotic disease.

Mitochondrial Quality Control as Therapeutic Target in IPF

The growing evidence of the role of mitochondria in the development of IPF has established new opportunities to create therapeutic strategies that target the turnover and dynamics of this organelle. For instance, studies suggest that hormonal modulation of mitochondrial function by 17b-estradiol (E2) through nuclear or mitochondrial ER (estrogen receptor) can induce antioxidant responses and activation of NRF1/2, Tfam and PGC-1a for mitochondrial biogenesis. Alternatively, ER-a receptor expression was upregulated in IPF lungs, and pharmacologic blockade of this receptor led to attenuation of fibrosis after bleomycin exposure, presumably by downregulation of the pro-fibrotic pathway carried by Smad2. Further, patients with IPF were reported to have disproportionately decreased synthesis of dehydroepiandrosterone (DHEA), a prohormone linked to antifibrotic properties through decreased fibroblast proliferation, decreased TGF-B1-induced collagen production and promotion of fibroblast apoptosis. The effects of DHEA on fibroblast cell death resulted from the release of mitochondrial cytochrome C into the cytosol and, ultimately, activation of caspase 9. Moreover, the use of the active thyroid hormone T3 has been suggested as a potential therapy in lung fibrosis; Yu et al.
reported that, in mouse models of pulmonary fibrosis, administration of aerosolized T3 offered regenerative properties by inducing PGC-1a and PINK1, leading to mitochondrial biogenesis, restoration of mitochondrial function and attenuation of apoptosis. Additionally, mitochondria-targeted antioxidant agents such as MitoQ might represent a potential therapy. MitoQ, acting as an ROS scavenger within the mitochondria, decreased the expression of TGF-B1 and NOX4 in fibroblasts of IPF patients, attenuating inflammation and collagen deposition. It is well known that PINK1 deficiency in AEC of IPF lungs represents a major factor triggering the mitochondrial dysfunction seen in these cells. Thus, agents that potentiate the activity of PINK1 can represent a therapeutic option.
The neo-substrate kinetin triphosphate (KTP) was shown to increase the activity of PINK1 and Parkin and to lower apoptosis markers, establishing this compound as a potential drug for these patients. Similarly, the induction of the mitochondrial protein SIRT3 represents an attractive therapeutic option given its protective effects against injury and fibrosis. A pharmaceutical compound known as Hexafluoro was shown to induce the expression of SIRT3 in mice treated with bleomycin, lessening the development of lung fibrosis by reducing the expression of collagen 1, a-SMA and fibronectin through TGF-B1 inhibition. Furthermore, mitochondrial quality control also relies on ubiquitylation to clear damaged mitochondrial proteins. This process is negatively regulated by de-ubiquitinating enzymes (DUBs), which inhibit mitophagy by removing the ubiquitin tags added by Parkin. Evidence suggests that deletion of the de-ubiquitinase USP30, found in the mitochondrial outer membrane, might represent a novel therapeutic strategy that could enhance mitophagy and augment substrate ubiquitination. Taken together, pharmacological modulation of mitochondrial quality control that targets biogenesis, mitophagy and fission/fusion could lead to the prevention of mitochondrial dysfunction and, ultimately, fibrosis. Multiple mechanisms of action have been proposed for the different drugs; nonetheless, collective efforts are still required to fully elucidate the specific molecular mechanisms by which many of the suggested compounds exert their effects, with the goal of providing high-quality therapies that can impact survival in patients with this condition.

Mitochondria in Age-Related Lung Fibrosis

In aging, different models have shown similar decreases in mitochondrial function. Multiple studies have pointed out that chronic activation of the UPRmt correlates with increased longevity in C. elegans and mice.
However, this activation brings a variety of changes to the mitochondrial network, including increased fragmentation and decreased oxygen consumption and ATP production. These changes could represent a protective metabolic response that allows mitochondrial repair and, as a result, prolonged cell survival, or, conversely, serve as the initiating factor for the development of age-related diseases. Different regulators of the UPRmt have been found to be altered with aging. For instance, LONP1 expression was reported to be reduced in senescent lung fibroblasts, which led to the accumulation of oxidized proteins following hydrogen peroxide administration. Furthermore, senescent fibroblasts were shown to have altered mitochondrial mass, suggesting that this morphologic change is perhaps partially a result of the reduction in LONP1 activity with aging, and could also serve as another explanation for the abnormal morphology observed in the mitochondria of IPF patients. Several reports have demonstrated reduced activity of CLPP-1 in aged cells. As a result of deficient CLPP-1, cells were shown to have numerous alterations, such as higher amounts of ROS, decreased ATP production and downregulation of fusion markers, establishing a link between protease abnormalities, energy metabolism and mitochondrial dynamics. In addition, CLPP-1-deficient aged cells were found to have upregulation of mTORC1 signaling markers, suggesting that the increased activation of the mTORC1/PGC-1 axis in aged AECII reported in our study could be a response to an upstream alteration in the mitochondrial matrix proteases. In mammals, efficient regulation of the transcription factors ATF5, ATF4 and CHOP is crucial for proper mitochondria-to-nucleus communication during UPRmt activation, and only a few studies have described the association of these transcription factors with age-related diseases.
Firstly, ATF5 has been described as an activator of the mTORC pathway, resulting in autophagy inhibition along with inhibition of apoptosis through the expression of the anti-apoptotic factor BCL-2. In addition, loss of ATF5 leads to significant apoptosis following thapsigargin (Tg) administration to induce ER stress. Altogether, these findings posit ATF5 as a key pro-survival mediator during proteotoxic damage, a hallmark of age-related diseases. Moreover, multiple aging mouse models and AECII from IPF patients have demonstrated increased transcription of ATF4 and CHOP. Studies suggest that ATF4 induces CHOP, which contributes to the induction of GADD34, the main regulator of a common adaptive pathway, termed the integrated stress response (ISR), to restore cellular homeostasis in response to ER stress, depletion of amino acids and oxidative stress. Activation of this pathway through increased ER stress has been associated with chronic injury and diseases that ultimately lead to pulmonary fibrosis, such as SFTPC mutations in familial IPF and Hermansky-Pudlak syndrome, suggesting a link between the UPRmt and ER stress as a possible trigger mechanism of the aberrant fibrotic pattern in lung fibrosis. Likewise, the role of CHOP in the development of lung fibrosis has been described in several reports. In the mouse model of bleomycin-induced pulmonary fibrosis, CHOP expression was found to be upregulated primarily in AECII of highly fibrotic areas, while, in the same model, CHOP-deficient mice showed a marked reduction of fibrotic areas and significantly lower apoptotic markers. Upregulation of apoptosis-related genes, including GADD45A and BNIP3L, is modulated by CHOP expression in AEC. Interestingly, activation of GADD45A is associated with p53 upregulation as a response to DNA damage and apoptosis induction, and increased expression of this protein has been found in AEC in IPF.
Together, these findings indicate that CHOP plays a key role in the promotion of fibrotic remodeling following chronic injury, and that these effects are possibly triggered by AEC apoptosis, making this transcription factor a target for future therapeutic prospects.

Conclusions

Currently, there are a limited number of marginally effective treatment options for patients with progressive forms of fibrotic lung disease, emphasizing the need for further mechanistic insight and translational progress. Evidence that mitochondrial dysfunction initiates the fibrotic response is especially strong in IPF, but the mechanism linking mitochondrial quality control to aberrant repair remains elusive. Future studies determining how dysregulation of mitochondrial quality control contributes to the onset or progression of IPF will ultimately be important for advancing our understanding of the disease and laying the foundation for new and more effective treatments.
|
Clinical and biochemical findings obtained in 76 diabetic children aged 3 to 15 years were analyzed. Osmolarity of the plasma and of its osmotic components (electrolytes, glucose and urea), as well as blood antidiuretic activity (ADA), were studied. Plasma osmolarity and ADA indices increased, and water-electrolyte balance deteriorated, as metabolic disorders developed. No exact linear correlation between osmolarity and blood ADA indices was observed in diabetes decompensation. A high blood ADA level is considered a manifestation of pronounced dehydration of the organism. It was shown that hemoconcentration, accumulation of osmotically active substances in the blood (glucose, urea and other products of disordered metabolism), as well as a decrease in renal glomerular filtration, cause the hyperosmolar syndrome in diabetic children.
|
Identification of two distinct regions of allelic imbalance on chromosome 18q in metastatic prostate cancer Like most cancers, prostate cancer (CaP) is believed to be the result of the accumulation of genetic alterations within cells. Previous studies have implicated numerous chromosomal regions with elevated rates of allelic imbalance (AI), using mostly primary CaPs with an unknown disease outcome. These regions of AI are proposed sites for tumor suppressor genes. One of the regions previously implicated as coding for at least one tumor suppressor gene is the long arm of chromosome 18 (18q). To confirm this observation, as well as to narrow the critical region for this putative tumor suppressor, we analyzed 32 metastatic CaP specimens for AI on chromosome 18q. Thirty-one of these 32 specimens (96.8%) exhibited AI at one or more loci on chromosome 18q. Our analysis using 17 polymorphic markers revealed statistically significant AI on chromosome 18q at 3 markers: D18S35, D18S64 and D18S461. Using these markers as a guide, we have been able to identify 2 distinct minimum regions of AI on 18q. The first region is between the genetic markers D18S1119 and D18S64. The second region lies more distal on the long arm of the chromosome, between the genetic markers D18S848 and D18S58. To determine if 18q loss is a late event in the progression of CaP, we also examined prostatic intraepithelial neoplasia (PIN) and primary prostate tumors from 17 patients for AI with a subset of 18q markers. We found significantly higher AI in the metastatic samples. Our results are consistent with 18q losses occurring late in CaP progression. Int. J. Cancer 85:654-658, 2000. © 2000 Wiley-Liss, Inc.
|
Netflix releases the teaser trailer for filmmaker Ava DuVernay's upcoming limited series about the Central Park Five, When They See Us.
Director Ava DuVernay's Central Park Five Netflix limited series adds Michael K. Williams, Vera Farmiga, and John Leguizamo to its cast.
Selma and 13th director Ava DuVernay reunites with Netflix for a five-part limited TV series about the infamous Central Park Five case.
|
// File: ole-docstore/ole-docstore-engine/src/test/java/org/kuali/ole/repository/CheckinManager_AT.java
package org.kuali.ole.repository;
import org.apache.commons.io.FileUtils;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.kuali.ole.BaseTestCase;
import org.kuali.ole.RepositoryManager;
import org.kuali.ole.docstore.model.xmlpojo.ingest.*;
import org.kuali.ole.docstore.model.xmlpojo.work.bib.marc.DataField;
import org.kuali.ole.docstore.model.xmlpojo.work.bib.marc.SubField;
import org.kuali.ole.docstore.model.xmlpojo.work.bib.marc.WorkBibMarcRecord;
import org.kuali.ole.docstore.model.xmlpojo.work.bib.marc.WorkBibMarcRecords;
import org.kuali.ole.docstore.model.xstream.ingest.RequestHandler;
import org.kuali.ole.docstore.model.xstream.work.bib.marc.WorkBibMarcRecordProcessor;
import org.kuali.ole.docstore.service.BeanLocator;
import org.kuali.ole.docstore.service.IngestNIndexHandlerService;
import org.kuali.ole.docstore.service.ServiceLocator;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import javax.jcr.Node;
import javax.jcr.Session;
import java.io.File;
import java.net.URL;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import static org.junit.Assert.assertNotNull;
/**
* Created by IntelliJ IDEA.
* User: Pranitha
* Date: 3/9/12
* Time: 11:09 AM
* To change this template use File | Settings | File Templates.
*/
@Ignore
@Deprecated
public class CheckinManager_AT
extends BaseTestCase {
private static final Logger LOG = LoggerFactory
.getLogger(CheckinManager_AT.class);
private IngestNIndexHandlerService ingestNIndexHandlerService = BeanLocator
.getIngestNIndexHandlerService();
@Before
public void setUp() throws Exception {
super.setUp();
}
@Test
public void checkInRecord() throws Exception {
CheckoutManager checkoutManager = new CheckoutManager();
NodeHandler nodeHandler = new NodeHandler();
String author = null;
String instIdValue = null;
// input record
URL resource = getClass().getResource("/org/kuali/ole/repository/request.xml");
File file = new File(resource.toURI());
String fileContent = FileUtils.readFileToString(file);
ResponseDocument responseDocument = new ResponseDocument();
RequestHandler requestHandler = new RequestHandler();
Request request = requestHandler.toObject(fileContent);
// Ingest & Index input rec
Response xmlResponse = ingestNIndexHandlerService.ingestNIndexRequestDocuments(request);
responseDocument = xmlResponse.getDocuments().get(0);
String uuid = responseDocument.getUuid();
Session session = RepositoryManager.getRepositoryManager().getSession("ole-khuntley", "batchIngest");
instIdValue = "6cecc116-a5c6-4240-994a-ba652a8ecde4";
//content of marc rec
String checkedOutContent = checkoutManager.checkOut(uuid, "mockUser", "checkout");
assertNotNull(checkedOutContent);
//updating marc content
List<WorkBibMarcRecord> updatedMarcRecs = new ArrayList<WorkBibMarcRecord>();
WorkBibMarcRecordProcessor workBibMarcRecordProcessor = new WorkBibMarcRecordProcessor();
workBibMarcRecordProcessor.fromXML(checkedOutContent);
WorkBibMarcRecords workBibMarcRecords = workBibMarcRecordProcessor.fromXML(checkedOutContent);
for (Iterator<WorkBibMarcRecord> iterator = workBibMarcRecords.getRecords().iterator(); iterator.hasNext(); ) {
WorkBibMarcRecord rec = iterator.next();
DataField dataField = rec.getDataFieldForTag("100");
for (SubField subField : dataField.getSubFields()) {
author = subField.getValue();
author = author + "Updated version for author";
subField.setValue(author);
}
updatedMarcRecs.add(rec);
}
workBibMarcRecords = new WorkBibMarcRecords();
workBibMarcRecords.setRecords(updatedMarcRecs);
workBibMarcRecordProcessor = new WorkBibMarcRecordProcessor();
String updatedContent = workBibMarcRecordProcessor.toXml(workBibMarcRecords);
updatedContent = updatedContent.replace("list", "collection");
RequestDocument updatedReqDoc = new RequestDocument();
buildRequestDoc(updatedReqDoc, responseDocument, updatedContent, instIdValue);
//checking in record with updated content and links
CheckinManager checkinManager = new CheckinManager();
String updatedVersion = checkinManager.updateContent(updatedReqDoc);
checkedOutContent = checkoutManager.checkOut(uuid, "mockUser", "checkout");
assertNotNull(checkedOutContent);
Assert.assertEquals("Content matches with version updated record", checkedOutContent, updatedContent);
Node bibNode = nodeHandler.getNodeByUUID(session, uuid);
String instanceIdentifier = bibNode.getProperty("instanceIdentifier").getValue().getString();
// Assert.assertEquals("Instance Identifier matches Docstore value", instanceIdentifier, instIdValue);
// getting instance identifier value from Solr
QueryResponse queryResponse = ServiceLocator.getIndexerService().searchBibRecord(responseDocument.getCategory(),
responseDocument.getType(),
responseDocument.getFormat(),
"id", uuid,
"instanceIdentifier");
//LOG.info("queryResponse" + queryResponse);
List solrInstIdList = new ArrayList();
solrInstIdList = (List) queryResponse.getResults().get(0).getFieldValue("instanceIdentifier");
Assert.assertEquals("Instance Identifier matches search result", instIdValue, solrInstIdList.get(0));
}
private void buildRequestDoc(RequestDocument updatedReqDoc, ResponseDocument responseDocument,
String updatedContent, String instIdValue) {
updatedReqDoc.setCategory(responseDocument.getCategory());
updatedReqDoc.setType(responseDocument.getType());
updatedReqDoc.setFormat(responseDocument.getFormat());
updatedReqDoc.setId(responseDocument.getUuid());
Content updatedCont = new Content();
updatedCont.setContent(updatedContent);
updatedReqDoc.setContent(updatedCont);
AdditionalAttributes addAtt = new AdditionalAttributes();
addAtt.setDateEntered("12-12-2-2010");
addAtt.setLastUpdated("12-12-2-2010");
addAtt.setFastAddFlag("true");
addAtt.setSupressFromPublic("false");
updatedReqDoc.setAdditionalAttributes(addAtt);
updatedReqDoc.setUuid(responseDocument.getUuid());
}
}
|
New tests pit Samsung's new Galaxy S7 and Galaxy S7 Edge against Apple's iPhone 6s and iPhone 6s Plus. Do the waterproofing claims hold true? Which is the toughest smartphone?
Protection-plan specialists SquareTrade have been busy testing the durability of Samsung's new Galaxy S7 and Galaxy S7 Edge smartphones, and pitting them against Apple's iPhone 6s and iPhone 6s Plus in its latest Breakability tests.
To help them carry out repeatable tests, SquareTrade added two new robots for the latest round of Breakability tests - the Deep Water DunkBot and the TumbleBot.
The Deep Water DunkBot submerges phones under five feet of water for 30 minutes to test water resistance, while the TumbleBot continually drops the devices in an enclosed chamber at a rotational speed of 50 revolutions per minute for 30 seconds in order to test durability.
How did the devices fare?
Water resistant? Yes. Waterproof? No. While the S7 and S7 Edge both survived 30 minutes under water, it turns out that their audio was permanently muffled and distorted. The iPhone 6s lost all audio and suffered water damage under the screen, while the iPhone 6s Plus malfunctioned at 10 minutes and died at 24 minutes.
The iPhone 6s is a tumble master. The iPhone 6s was the only smartphone to survive SquareTrade's tumble test unscathed. The S7 and S7 Edge both suffered significant damage to their back panels, while their front screens had minor cracks. The iPhone 6s Plus' screen completely shattered.
Sidewalk resistant? No. Dropped on their corners from six feet high, the S7 cracked after four falls, while the S7 Edge was completely unusable after seven. Dropped facedown, the S7 shattered on the first fall, while the S7 Edge shattered on the second.
Bend issues persist for Samsung. SquareTrade's testing showed that the S7 Edge performed the same as its S6 Edge predecessor. Not only did the phone crack at 110 pounds of pressure, but it also reached catastrophic failure at less than 170 pounds. The S7 withstood 170 pounds of pressure - the same as the iPhone 6s.
Here's a video of the testing in action.
"Samsung's new phones may hold up to an impressive amount of water, but we've found that they still struggle to keep up with the iPhone when it comes to screen durability," said Aileen Abaya, director of communications at SquareTrade. "So while the S7 and S7 edge may be perfect for underwater adventurers, those of us who are clumsy or accident-prone should still be careful about drops and tumbles."
|
Automatic Headlight Beam Management System for Vehicles Nowadays, many drivers tend to leave their high beam on at all times. Many people don't realize how dangerous leaving the high beam on can be when an oncoming vehicle approaches. People find it tedious to control the high beam, as they may have to switch it on and off many times within a short span. This is where our algorithm comes into action. The Automatic Headlight Beam Management System for Vehicles automatically controls the vehicle's beam and headlight using predefined variables such as location, time, and the approach of oncoming vehicles. The system employs modules such as a camera, GPS, and a microcontroller to achieve the desired result. It gathers live video from the camera module and converts it to greyscale at an intensity threshold such that only headlights remain visible. A colour-inversion technique is then applied to identify an oncoming vehicle's headlights more easily, confirming the approach of a vehicle. The method is designed so that street lamps are not mistaken for vehicles. This data is then passed to a microcontroller, which switches the beam.
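A minimal sketch of the frame-processing pipeline described above, using plain NumPy. The threshold values, the street-lamp rejection heuristic (ignoring the upper part of the frame), and all function names are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

# Hypothetical parameter values; the abstract gives no concrete thresholds.
BRIGHT_THRESHOLD = 200   # greyscale level above which a pixel counts as a headlight
MIN_BRIGHT_PIXELS = 20   # ignore tiny specks of light

def to_greyscale(rgb):
    """Convert an H x W x 3 frame to greyscale using luma weighting."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def oncoming_vehicle_detected(rgb_frame):
    """Return True if the frame appears to contain oncoming headlights.

    Mirrors the pipeline in the abstract: greyscale the frame, invert it,
    and decide from the count of headlight-bright pixels. Street-lamp
    rejection here is a crude stand-in: the abstract does not specify how
    it is done, so we simply ignore the upper third of the frame, where
    street lamps would normally appear.
    """
    grey = to_greyscale(np.asarray(rgb_frame))
    road_region = grey[grey.shape[0] // 3:, :]        # drop the upper third
    inverted = 255.0 - road_region                    # colour-inversion step
    # after inversion, headlights are the darkest spots of the image
    headlight_pixels = inverted < (255.0 - BRIGHT_THRESHOLD)
    return int(headlight_pixels.sum()) >= MIN_BRIGHT_PIXELS

def select_beam(rgb_frame):
    """'low' when an oncoming vehicle is detected, otherwise 'high'."""
    return "low" if oncoming_vehicle_detected(rgb_frame) else "high"
```

In a deployed system the decision would also be smoothed over consecutive frames and combined with the GPS/time inputs mentioned above before the microcontroller actually toggles the beam.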
|
package cn.binarywang.wx.miniapp.api.impl;
import java.io.File;
import org.testng.annotations.*;
import cn.binarywang.wx.miniapp.api.WxMaService;
import cn.binarywang.wx.miniapp.test.ApiTestModule;
import com.google.inject.Inject;
/**
* @author <a href="https://github.com/binarywang"><NAME></a>
*/
@Test
@Guice(modules = ApiTestModule.class)
public class WxMaQrcodeServiceImplTest {
@Inject
private WxMaService wxService;
@Test
public void testCreateQrCode() throws Exception {
final File qrCode = this.wxService.getQrcodeService().createQrcode("111", 122);
System.out.println(qrCode);
}
@Test
public void testCreateWxaCode() throws Exception {
final File wxCode = this.wxService.getQrcodeService().createWxaCode("111", 122);
System.out.println(wxCode);
}
@Test
public void testCreateWxaCodeUnlimit() throws Exception {
final File wxCode = this.wxService.getQrcodeService().createWxaCodeUnlimit("111", null);
System.out.println(wxCode);
}
}
|
import { Actions, RECEIVE_STATUS, REQUEST_STATUS } from '../actions/certificate-holder-actions';
import { CertificateCheckerState, CertificateCheckState } from '../state/certificate-checker';
const defaultState = {candidate: null, certificateStatus: CertificateCheckState.Idle, certificate: ''};
const certificateChecker = (state: CertificateCheckerState = defaultState, action: Actions): CertificateCheckerState => {
switch (action.type) {
case REQUEST_STATUS:
return {
...state,
certificateStatus: CertificateCheckState.Checking
}
case RECEIVE_STATUS:
return {
...state,
certificateStatus: action.hasCertificate
? CertificateCheckState.Confirmed
: CertificateCheckState.Rejected
}
default:
return state;
}
}
export default certificateChecker;
|
package javax.microedition.lcdui.game;
public class Sprite extends Layer {
public static final int TRANS_NONE = 0;
public static final int TRANS_ROT90 = 5;
public static final int TRANS_ROT180 = 3;
public static final int TRANS_ROT270 = 6;
public static final int TRANS_MIRROR = 2;
public static final int TRANS_MIRROR_ROT90 = 7;
public static final int TRANS_MIRROR_ROT180 = 1;
public static final int TRANS_MIRROR_ROT270 = 4;
}
|
Andy Dehnart on the fashion show’s return from the dead.
Heidi Klum has reminded viewers since Project Runway’s first season that “in fashion, one day you’re in, and the next you’re out.” That’s also true of the TV show, on which fashion designers compete to present their clothing at Fashion Week.
Project Runway returns for its ninth season tonight, now firmly back “in.” The show established a new reality-television subgenre when it debuted on Bravo in 2004: talented professionals competing, pushing their skills and craft to the limits in challenges judged by well-known people in the given industry. It became a pop-culture phenomenon—an incredible achievement for a show about sewing clothes. Before its fifth season aired, however, a well-publicized legal battle derailed the series. The show’s owners moved it from Bravo to Lifetime; the production company that created it, Magical Elves, quit; and the producers best known for The Real World, Bunim-Murray, took over. Bravo later created a knockoff, The Fashion Show, which never really worked despite a makeover between its first and second seasons; that attempt to re-create the original continues this year, as NBC has teamed with Magical Elves for a midseason show, Fashion Star, which sounds very, very familiar.
When season six of Project Runway finally debuted on Lifetime, it seemed like rolled-up, acid-wash jeans on Donald Trump: it just didn’t work. Set in Los Angeles instead of New York City, it had uninspired challenges, very little of judges Michael Kors and Nina Garcia, and many other problems.
But something happened last season, the show’s eighth: Project Runway started to resemble its former self, featuring strong personalities and a controversial finish. Viewers who had abandoned the show returned.
Some of its new features even improved upon the original. Besides minor but significant changes—refined sets; a Steadicam on the runway to follow the models as they show off the designers’ work—there were longer, 90-minute episodes. In an era of bloated reality shows that drag out an hour’s worth of content over two hours (NBC’s The Biggest Loser is the biggest offender), the new Runway added time in the right places.
Executive producer Jon Murray told me the expanded episodes “gave us the time to tell a better story” because “it allowed us to open up what was going on beyond those format elements.” There was more time to show designers imagining and constructing garments under extreme time constraints. Tim Gunn came back from being a catchphrase robot to a thoughtful mentor offering informed critiques of work in progress. The judges’ deliberations went into greater detail, so their rationales made more sense. Sometimes small moments, like a designer’s laugh during an interview, were held for an extra beat. Mostly, we got more of what made the show so compelling to begin with.
This season will see some other changes and firsts, including a public, outdoor runway challenge. A casting special will precede the season premiere tonight, on which 20 designers will have their final audition, trying to persuade the judges to let them compete this season.
And Project Runway does, going into its ninth season having succeeded last fall with both the idea and the execution.
|
Georgia head coach Mark Richt confirmed yesterday that wide receivers Malcolm Mitchell and Justin Scott-Wesley will not play Saturday against Clemson.
Mitchell had arthroscopic surgery July 31 on the same knee he injured last year and hasn’t practiced since. Scott-Wesley is serving a suspension for a marijuana arrest.
Those are both big losses for the ‘Dawgs. Mitchell and Scott-Wesley are both big play weapons. Fortunately for Georgia, they have depth at the receiver position.
Related: Georgia releases depth chart; LB Wilson left off
Michael Bennett, Chris Conley and Reggie Davis are listed as the starting wideouts heading into Saturday’s contest.
Bennett is a reliable target in the slot who will catch balls for quarterback Hutson Mason, who is making just his fourth career start. Conley can serve as a deep-ball threat, though he doesn’t possess the breakaway speed that Mitchell and Scott-Wesley have. Davis is also an experienced receiver on the outside who can fight his way open.
Richt feels good about his receiver situation in preparation for Clemson.
“Sometimes you don’t have to have blinding speed to go deep, and sometimes it’s just a matter of getting off the jam and getting the guy cut off,” Richt told reporters during his opening press conference Tuesday. “Not many guys just run away from people, but you know we have a good history of placing the ball where our guys can catch it.”
|
A Dynamical Quantum Cluster Approach to Two-Particle Correlation Functions in the Hubbard Model We investigate the charge and spin dynamical structure factors for the 2D one-band Hubbard model in the strong coupling regime within an extension of the Dynamical Cluster Approximation (DCA) to two-particle response functions. The full irreducible two-particle vertex with three momenta and frequencies is approximated by an effective vertex dependent on the momentum and frequency of the spin/charge excitation. In the spirit of the DCA, the effective vertex is calculated with quantum Monte Carlo methods on a finite cluster. On the basis of a comparison with high-temperature auxiliary-field quantum Monte Carlo data we show that near and beyond optimal doping, our results provide a consistent overall picture of the interplay between charge, spin and single-particle excitations. I. INTRODUCTION Two-particle correlation functions, such as the dynamical spin and charge correlation functions, determine a variety of crucial properties of many-body systems. Their poles as a function of frequency and momentum describe the elementary excitations, i.e. electron-hole excitations and collective modes, such as spin- and charge-density waves. Furthermore, an effective way to identify continuous phase transitions is to search for divergencies of susceptibilities, i.e. two-particle correlation functions. Yet, compared to studies of single-particle Green's functions and their spectral properties, where a good overall accord between theoretical models (Hubbard-type models) and experiment (ARPES) has been established (see Refs. 1,2,3,4), the situation is usually not so satisfying for two-particle Green's functions. This is especially so for the case of correlated electron systems such as high-Tc superconductors (HTSC). The primary reason for this is that calculations of these Green's functions are, from a numerical point of view, much more involved.
Within the Dynamical Cluster Approximation (DCA) 5,6, which we consider in this paper, one can consistently define the two-particle Green's functions by extracting the irreducible vertex function from the cluster. This approximation maps the original lattice problem to a cluster of size L_c = l_c × l_c embedded in a self-consistent host. Thus, correlations up to a range ξ < l_c are treated accurately, while the physics at longer length scales is described at the mean-field level. The DCA is conveniently formulated in momentum space, i.e. via a patching of the BZ. Let K denote such a patch, and k the original lattice momentum. The approximation boils down to restricting momentum conservation only to the patch wave vectors K. This approximation is justified if k-space correlation functions are rather structureless and, thus, in real space short-ranged. From the technical point of view, the approximations are implemented via the Laue function Δ(k_1, k_2, k_3, k_4), which guarantees momentum conservation up to a reciprocal lattice vector. In the DCA, the Laue function is replaced by Δ_DCA(K_1, K_2, K_3, K_4), thereby ensuring momentum conservation only between the cluster momenta. It is understood that k_i belongs to the patch K_i. To define the DCA approximation uniquely, in particular in view of two-particle quantities, it is useful to start with the Luttinger-Ward functional Φ_DCA, which is computed using the DCA Laue function. Hence, Φ_DCA is a functional of a coarse-grained Green's function, Ḡ(K, iω_m) ≡ Ḡ(K). Irreducible quantities such as the self-energy and the two-particle irreducible vertex are calculated on the cluster and correspond, respectively, to the first- and second-order functional derivatives of Φ_DCA with respect to Ḡ. Using the cluster irreducible self-energy, Σ(K), and two-particle vertex, Γ_{K,K′}(Q), one can then compute the lattice single-particle and lattice two-particle correlation functions using the Dyson and Bethe-Salpeter equations.
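The coarse-graining of momentum space can be illustrated with a small sketch: each lattice momentum k is assigned to a cluster momentum K. Here the patch is taken as the set of k closest to K on the periodic Brillouin zone, which is an illustrative choice (equivalent to the square patches of the standard DCA construction); the function names are ours, not the paper's.

```python
import numpy as np

def cluster_momenta(l_c):
    """Cluster momenta K = 2*pi*(m, n)/l_c of an l_c x l_c cluster."""
    vals = 2.0 * np.pi * np.arange(l_c) / l_c
    return [(kx, ky) for kx in vals for ky in vals]

def patch_of(k, Ks):
    """Assign lattice momentum k to the nearest cluster momentum K,
    measuring distances on the periodic Brillouin zone."""
    def dist(a, b):
        d = np.abs(np.asarray(a) - np.asarray(b)) % (2.0 * np.pi)
        d = np.minimum(d, 2.0 * np.pi - d)   # wrap around the zone boundary
        return float(np.hypot(*d))
    return min(Ks, key=lambda K: dist(k, K))
```

Within the DCA, any function of k entering a momentum sum is then evaluated at `patch_of(k, Ks)`, which is precisely what restricting the Laue function to Δ_DCA accomplishes.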
This construction of two-particle quantities has the appealing property that they are thermodynamically consistent 7,8. Hence, the spin susceptibility as calculated using the particle-hole correlation functions corresponds precisely to the derivative of the magnetization with respect to an applied uniform static magnetic field. The technical aspects of the above program are readily carried out for single-particle properties. However, a full calculation of the irreducible two-particle vertex -even within the DCA -is prohibitively expensive 9 and, thus, has never been carried out. In the present work, we would like to overcome this situation by suggesting a scheme where the K- and K′-dependencies of the irreducible vertex are neglected. At low temperatures, this amounts to the assumption that in an energy and momentum window around the Fermi surface, the irreducible vertex depends weakly on K and K′. Following this assumption, an effective two-particle vertex in terms of an average over the K- and K′-dependencies of Γ_{K,K′}(Q) is introduced: U_eff(Q) = [χ̄_0(Q)]^{-1} − [χ^c(Q)]^{-1}, (4) where χ^c(Q) corresponds to the fully interacting susceptibility on the DCA cluster in the particle-hole channel and χ̄_0(Q) is the corresponding bubble as obtained from the coarse-grained Green's functions. Finally, our estimate of the lattice susceptibility reads: χ(q) = χ_0(q) / [1 − U_eff(Q) χ_0(q)], (5) where χ_0(q) corresponds to the bubble of the dressed lattice Green's functions: χ_0(q, iω_m) = −(T/N) Σ_{k,n} G(k, iν_n) G(k + q, iν_n + iω_m). (6) In the following section, we describe some aspects of the explicit implementation of the DCA which is based on the Hirsch-Fye QMC algorithm 11. Our new approach requires substantial testing. In Sec. III A we compare the Néel temperature as obtained within the DCA without any further approximations to the result obtained with our new approach. In Sec. III B, we present results for the temperature and doping dependence of the spin and charge dynamical structure factors and compare the high temperature data with auxiliary field QMC simulations. Finally, Sec. IV draws conclusions. II.
IMPLEMENTATION OF THE DCA We consider the standard model of strongly correlated electron systems, the single-band Hubbard model 12,13,14 with hopping between nearest neighbors t and Hubbard interaction U. The energy scale is set by t and throughout the paper we consider U = 8t. Our goal is to compute the spin, S(q, ω), and charge, C(q, ω), dynamical structure factors. They are given, respectively, by: S(q, ω) = (1/π) Im χ_s(q, ω) / (1 − e^{−βω}), (7) C(q, ω) = (1/π) Im χ_c(q, ω) / (1 − e^{−βω}). (8) The left-hand sides of the above equations are obtained from the corresponding susceptibilities as calculated from Eq. (5). Finally, a stochastic version of the Maximum Entropy method 15,16 is used to extract the dynamical quantities. In order to cross check our results, we slightly modify Eq. (5) to: χ(q) = χ_0(q) / [1 − α U_eff(Q) χ_0(q)]. (9) Here, we have introduced an additional "controlling" parameter α in the susceptibility denominator, which is calculated in a self-consistent manner. It assures, for example in the case of the longitudinal spin response, that χ(q) obeys the following sum rule: (T/N) Σ_{q,m} χ_s(q, iω_m) = ⟨(n_↑ − n_↓)²⟩ (10) (a similar idea, to use sum rules for constructing a controlled local approximation, has been put forward previously). Of course, α should be as close as possible to α = 1, which is indeed what we will find after implementing the sum rule (see below). Our implementation of the DCA for the Hubbard model is standard 6. Here, we will only discuss our interpolation scheme as well as the implementation of an SU(2)-spin symmetry broken algorithm. Since the DCA evaluates the irreducible quantities, Σ(K) as well as U_eff(Q), for the cluster wave vectors, an interpolation scheme has to be used. To this end, we adopt the following strategy: for a fixed Matsubara frequency iω_m and for each cluster vector Q, the effective interaction U_eff is rewritten as a series expansion: U_eff(Q, iω_m) = Σ_{i=0}^{n−1} A_i Σ_{r∈Δ_i} e^{iQ·r}, (11) with i = 0,..., n − 1, where n is the number of the cluster momentum vectors Q. The quantity Δ_i represents shells of vectors, where each vector from the corresponding Δ_i belongs to the same "shell" around the origin in real space, i.e. has the same length |r|. With a given U_eff, Eq. (11) can be inverted to uniquely determine the A_i.
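The inversion of the shell expansion (Eq. 11) and the subsequent interpolation can be sketched numerically. The shells, cluster momenta and vertex values below are illustrative only; a real calculation would use the Betts-cluster momenta and the QMC-determined U_eff. For inversion-symmetric shells the complex exponentials reduce to cosines:

```python
import numpy as np

# First few real-space shells of the square lattice (vectors of equal |r|).
SHELLS = [
    [(0, 0)],
    [(1, 0), (-1, 0), (0, 1), (0, -1)],
    [(1, 1), (1, -1), (-1, 1), (-1, -1)],
    [(2, 0), (-2, 0), (0, 2), (0, -2)],
]

def shell_matrix(momenta):
    """M[j, i] = sum over r in shell i of cos(Q_j . r); cosines suffice
    because the imaginary parts cancel over inversion-symmetric shells."""
    M = np.zeros((len(momenta), len(SHELLS)))
    for j, (qx, qy) in enumerate(momenta):
        for i, shell in enumerate(SHELLS):
            M[j, i] = sum(np.cos(qx * rx + qy * ry) for rx, ry in shell)
    return M

def fit_coefficients(cluster_momenta, u_eff_cluster):
    """Invert the shell expansion: shell coefficients A_i from cluster values."""
    M = shell_matrix(cluster_momenta)
    A, *_ = np.linalg.lstsq(M, np.asarray(u_eff_cluster), rcond=None)
    return A

def interpolate(A, q):
    """Evaluate the interpolated U_eff at an arbitrary lattice momentum q."""
    return (shell_matrix([q]) @ A).item()
```

By construction the interpolation reproduces the input values at the cluster momenta and varies smoothly in between, which is adequate precisely when U_eff is short-ranged in real space, as argued below.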
With these coefficients, one can compute the effective particle-hole interaction for every lattice momentum vector q. This interpolation method works well when U_eff is localized in real space and the sum in Eq. (11) can be cut off at a given shell. The effective particle-hole interaction U_eff is shown in Fig. 2 for a variety of dopings at inverse temperature βt = 6, U/t = 8 and on an L_c = 8 cluster, which corresponds to the so-called "8A" Betts cluster (see Refs. 18,19). The U_eff-function displays a smooth momentum dependence. These observations further support the interpolation scheme (Eq. 11). Thus, indeed, U_eff is rather localized in real space with sizable reduction from its bare U = 8t value for larger doping and a further slight reduction at q = (π, π). The reduction is partly due to the self-energy effects in the single-particle propagator, which reduce χ̄_0 from its non-interacting (U = 0) value. Partly, it also reflects both the Kanamori (see Ref. 14) repeated particle-particle scattering and vertex corrections. Summarizing, the new approach to two-particle properties relies on two approximations which render the calculation of the corresponding Green's function possible. Firstly, the effective particle-hole interaction U_eff(Q) depends only on the center-of-mass momentum and frequency, i.e. Q and iω_m. Secondly, χ^c(Q) is extracted directly from the cluster and χ̄_0(Q) is obtained from the bubble of the coarse-grained Green's functions. To generate DCA results for the Néel temperature, we have used an SU(2) symmetry broken code. The setup is illustrated in Fig. 3. We introduce a doubling of the unit cell -to accommodate AF ordering -which in turn defines the magnetic Brillouin zone. The DCA k-space patching is carried out in the magnetic Brillouin zone and the Dyson equation for the single-particle propagator is given as a 2×2 matrix equation in the sublattice basis, coupling the momenta k and k + (π, π). With the SU(2) symmetry broken algorithm, one can compute directly the staggered magnetization, i.e.
m = (1/L) Σ_j e^{iQ·r_j} (n_{j,↑} − n_{j,↓}) with Q = (π, π), and thereby determine the transition temperature. Since the DCA is a conserving approximation, the so determined transition temperature corresponds precisely to the temperature scale at which the corresponding susceptibility, calculated without any approximations on the irreducible vertex Γ_{K,K′}(Q), diverges. III. RESULTS A. Comparison of the AF transition temperature with a symmetry broken DCA calculation A first test of the validity of our new approach is a comparison with the SU(2) symmetry broken DCA calculation on an L_c = 8 cluster at U = 8t. The idea is to extract the Néel temperature T_N from a divergence in the spin susceptibility as calculated in the above described (paramagnetic) scheme -see Eq. (9) -and to compare it to the DCA result as obtained from the SU(2) symmetry broken algorithm. This comparison provides information on the accuracy of our approximation to the two-particle irreducible vertex (see Eq. 4). Using the SU(2) symmetry breaking algorithm, the magnetic phase diagram for the one-band Hubbard model as a function of doping is shown in Fig. 4. The para-(antiferro)magnetic phase transition is indicated here by gray (blank) circles. At half-filling T_N ≃ 0.4t and magnetism survives up to approximately 15 % hole doping. It is known that the convergence of the magnetization during the self-consistent steps in the DCA approach is extremely poor near the phase transition and, therefore, we cannot estimate the transition temperature more precisely than shown in Fig. 4. However, our precision is sufficient for comparison. We again stress that the so determined magnetic phase diagram corresponds to the exact DCA result where no approximation -apart from coarse graining -is made on the particle-hole irreducible vertex.
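The extraction of T_N from the divergence of the paramagnetic susceptibility can be illustrated with a toy numerical sketch: the Néel temperature is the temperature at which the denominator 1 − α U_eff(Q) χ_0(Q) of Eq. (9) changes sign. The temperature dependences of U_eff and χ_0 used below are purely hypothetical Curie-Weiss-like stand-ins for the QMC-determined quantities:

```python
# Illustrative stand-ins: the true U_eff(Q, T) and chi_0(Q, T) come from the
# cluster QMC and the dressed lattice bubble; only their qualitative
# behaviour (chi_0 grows as T drops, U_eff varies slowly) is mimicked here.
def u_eff(T):
    return 4.0 - 0.5 * T          # hypothetical, weakly T-dependent vertex

def chi0(T):
    return 1.0 / (T + 0.1)        # hypothetical AF bubble at Q = (pi, pi)

def denominator(T, alpha=1.0):
    """Denominator of the lattice susceptibility, cf. Eq. (9)."""
    return 1.0 - alpha * u_eff(T) * chi0(T)

def neel_temperature(alpha=1.0, T_lo=0.05, T_hi=10.0, steps=60):
    """Bisect for the temperature where the denominator changes sign.

    Below T_N the denominator is negative (the susceptibility has
    diverged); above it, positive.
    """
    for _ in range(steps):
        T_mid = 0.5 * (T_lo + T_hi)
        if denominator(T_mid, alpha) < 0.0:
            T_lo = T_mid
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)
```

Note that a sum-rule parameter α < 1 shifts the sign change to lower temperature, which is one way the controlling parameter feeds back into the estimated T_N.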
The blue (red) triangles indicate the transition line from the para- to the antiferromagnetic solutions extracted from the divergent spin susceptibility (Eq. 9) within the paramagnetic calculation. A precise estimation of the Néel temperature requires very accurate results and boils down to finding the zeros of the denominator of Eq. (9). In Fig. 5, we consider the effective irreducible particle-hole interaction U_eff for the static case and for the cluster momentum Q = (π, π) relevant for the AF instability. As apparent, the irreducible particle-hole interaction becomes weaker with increasing doping. On the other hand, the susceptibility χ_0(q, iω_m = 0) grows with increasing doping. At first glance both quantities U_eff and χ_0 (see Fig. 5) vary smoothly as a function of doping. However, in the vicinity of the phase transition, signaled by the vanishing of the denominator in Eq. (9), the precise interplay between U_eff and χ_0 becomes delicately important and renders an accurate estimate of the Néel temperature difficult. Given the difficulty in determining the Néel temperature precisely, we obtain good agreement between both methods at δ ≈ 10 %. Note that in those calculations values of α ≈ 0.86 − 0.97 are required to satisfy the sum rule in Eq. (10). At smaller dopings, and in particular at half-band filling, the Néel temperature, as determined by the vanishing of the denominator in Eq. (9), underestimates the DCA result. Hence, in this limit, the K and K′ dependence of the irreducible vertex plays an important role in the determination of T_N and cannot be neglected. Let us emphasize that a good agreement between the Néel temperatures at δ ≃ 10 % and above is a non-trivial achievement lending substantial support to the above new scheme for extracting two-particle Green's functions. B. Dynamical Spin and Charge structure factors To further assess the validity of our approach, we compare it to exact auxiliary-field Blankenbecler-Scalapino-Sugar (BSS) QMC results (Ref. 3).
This method has a severe sign problem especially in the vicinity of δ ≃ 10 % and, hence, is restricted to high temperatures. Such a comparison for the dynamical spin, S(q, ω), and charge, C(q, ω), structure factors is shown in Fig. 6 at βt = 3, δ ≈ 14 % and U/t = 8. The BSS results correspond to simulations on an 8 × 8 lattice. Fig. 6b) depicts the BSS-QMC data in the spin sector. Due to short-range spin-spin correlations, remnants of the spin-density-wave are observable, displaying a characteristic energy scale of 2J, where J is the usual exchange coupling, i.e. J = 4t²/U. The two-particle DCA calculations show spin excitations with the dominant weight concentrated, as expected and seen in the QMC data, around the AF wave vector (π, π). As apparent from the sum rule (see Fig. 7 a)), the DCA overestimates the weight at this wave vector but does very well away from q = (π, π). The dispersion in the two-particle data has again a higher energy branch around 2J, but it also shows features at J. Since the total spin is a conserved quantity, one expects a zero-energy excitation at q = 0. This is exactly reproduced in the 8 × 8 QMC-BSS data, and qualitatively in the DCA results. As a function of decreasing temperature, the DCA dynamical spin structure factor shows a more pronounced spin-wave spectrum. This is confirmed in Fig. 8 on the left hand side. Here, we fix the temperature to βt = 6 and keep the doping at δ ≈ 14 % but vary the cluster size. As apparent, for all considered cluster sizes (L_c = 4, 8, 10, 16) a spin-wave feature is indeed observable: a peak maximum at q = (π, π) is present and the correct energy scale of 2J at q = (π, 0) is recovered. Unfortunately, a direct comparison of both calculations at lower temperature is not possible due to the severe minus-sign problem in the BSS calculation. The investigation of the dynamical charge correlation function for the above parameters shows that the DCA calculations, which are depicted in Fig.
6 c), can also reproduce basic characteristics of the BSS charge excitation spectrum in Fig. 6 d). Both calculations show excitations at ω ≈ U which are set by the remnants of the Mott-Hubbard gap. Similar results are obtained at lower temperatures (βt = 6) on the right hand side of Fig. 8 for different cluster sizes (L_c = 4, 8, 10, 16). The corresponding values of α are listed in Tab. I. These values confirm the overall correctness of our approach in that the corresponding sum rule for the charge response is accurately (exactly for α = 1) fulfilled. The doping dependence of the spin and charge response is examined in Fig. 9. Here, we restrict our calculations to the L_c = 8 cluster at βt = 6 and dopings between δ = 14 % and δ = 32 %. At δ = 14 % (see Fig. 8) the dynamical spin structure factor displays a spin-wave dispersion with energy scale J. That is, E_SDW(π, 0) = 2J with J = 4t²/U. As the system is further doped (δ = 27 %) the dispersion is no longer sharply peaked around q = (π, π). The excitations broaden and change their energy scale from J = 4t²/U to an energy scale set by the non-interacting bandwidth. This effect becomes even more visible at higher dopings, δ = 32 % (Fig. 9 c)). Furthermore, the spectrum of the charge response shows a reduction of the weight of states at high energies (ω/t ≈ 8). This behavior corresponds to the loss of weight of the upper Hubbard band with increasing doping. The corresponding equal-time spin and charge correlation functions of Fig. 9 c-d) are depicted in Fig. 10. As in auxiliary-field QMC simulations 4, the equal-time spin correlation function shows a set of peaks at q = (π ± ε, π) and q = (π, π ± ε), where ε is proportional to the doping. The overall trend of the doping dependence of the spin and charge responses is in good agreement with the previous findings of QMC simulations (Refs.
2, 3): there it was shown that the spin response has a characteristic energy scale ≈ 2J and an SDW-like dispersion up to about δ ≈ 10-15% doping, despite the fact that at these dopings the spin-spin correlations are very short-ranged (of the order of the lattice parameter). Many of the features of the two-particle spectra have a direct influence on the single-particle spectral function and vice versa. At optimal doping, δ = 14%, the spectral function A(q, ω) in Fig. 11a) shows three distinct features: an upper Hubbard band (ω/t ≈ 8) and a lower Hubbard band which splits into an incoherent background and a quasiparticle band of width set by the magnetic scale J. In agreement with earlier QMC data (Refs. 2, 3), we view this narrow quasiparticle band as a fingerprint of a spin polaron, where the bare particle is dressed by spin fluctuations. The fact that the dynamical spin structure factor in Fig. 8c) shows a well-defined magnon dispersion at this temperature and doping, δ = 14%, allows us to interpret the features centered around q = (0, 0) and below the Fermi energy as backfolding, or shadows, of the quasiparticle band at q = (π, π). A comparison of the charge response spectrum in Fig. 8d) with the corresponding single-particle spectra in Fig. 11a) reveals that the response in the particle-hole channel at almost zero energy is caused by particle-hole excitations around the quasiparticle spin-polaron band close to the Fermi energy. The high-energy excitations mentioned above are due to transitions from the quasiparticle band to the upper Hubbard band. As a function of doping, notable changes in the spectral function, which are reflected in the two-particle properties, are apparent. On the one hand, the spectral weight in the upper Hubbard band is reduced. As mentioned previously, this reduction in high-energy spectral weight is apparent in the dynamical charge structure factor. On the other hand, at higher dopings the magnetic fluctuations are suppressed.
Consequently, the narrow band changes its bandwidth from the magnetic exchange energy J to the free bandwidth. This evolution is clearly apparent in Figs. 11b) and c) and is in good agreement with previous BSS-QMC results 3.

IV. CONCLUSIONS

Two-particle spectral functions, such as the spin and charge dynamical structure factors, are clearly quantities of crucial relevance for the understanding of correlated materials. Within the DCA, those quantities have remained elusive due to numerical complexity. In principle, the two-particle irreducible vertex, Γ_{K,K′}(Q), containing three momenta and frequencies, has to be extracted from the cluster and inserted into the Bethe-Salpeter equation. Calculating this quantity on the cluster is in principle possible, but corresponds to a daunting task which, to the best of our knowledge, has never been carried out. To circumvent this problem, we have proposed a simplification which relies on the assumption that Γ_{K,K′}(Q) is only weakly dependent on K and K′. Given the validity of this assumption, one can average over K and K′ and retain only the dependence on the frequency and momentum, Q, of the excitation. We have tested this idea extensively for the two-dimensional Hubbard model at strong coupling, U/t = 8. At dopings δ ≈ 10% and higher, we have found good agreement between the Néel temperature on an L_c = 8 lattice as calculated within a symmetry-broken DCA scheme and our new two-particle approach. This finding lends support to the validity of our scheme in this doping range. Furthermore, we studied the doping and temperature dependence of the spin and charge dynamical structure factors as well as the single-particle spectral function at βt = 6. Our results provide a consistent picture of the physics of doped Mott insulators, very reminiscent of previous findings within auxiliary-field QMC simulations.
The strong point of the method, in contrast to auxiliary-field QMC approaches, is that it can be pushed to lower temperatures, above and below the superconducting transition temperature. Work in this direction is presently in progress.
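The simplification summarized in the conclusions can be written compactly. With Γ_{K,K′}(Q) the cluster-irreducible particle-hole vertex of the text, χ₀ the corresponding bare susceptibility, and N_c the number of cluster momenta, the averaged form below is our paraphrase of the scheme (up to channel-dependent sign conventions), not an equation quoted from the paper:

```latex
\bar{\Gamma}(Q) = \frac{1}{N_c^{2}} \sum_{K,K'} \Gamma_{K,K'}(Q),
\qquad
\chi(Q) = \chi_0(Q) + \chi_0(Q)\,\bar{\Gamma}(Q)\,\chi(Q)
\;\Rightarrow\;
\chi(Q) = \frac{\chi_0(Q)}{1 - \bar{\Gamma}(Q)\,\chi_0(Q)} .
```

The Bethe-Salpeter equation then closes with a vertex depending only on the transferred momentum and frequency Q, which is what makes the two-particle calculation tractable.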
New Delhi: A united opposition on Thursday disrupted Rajya Sabha proceedings protesting against making Aadhaar card mandatory for availing government benefits like subsidized LPG, PDS supplies and pensions, forcing its adjournment thrice since morning.
The House was first adjourned until noon, then for about 15 minutes and once again until 2pm as the Opposition uproar continued unabated. Opposition Trinamool Congress, Biju Janata Dal and Samajwadi Party had given notices for suspension of business to take up the issue which also found support from the Left parties and the Congress. The notices were rejected by the Chair.
While the government clarified that the unique identification number (UID) or Aadhaar card issued to citizens was not mandatory for availing government benefits and that necessary instructions to this effect were being issued, dissatisfied opposition members trooped into the Well raising slogans, forcing adjournment of proceedings till noon.
Soon after the House assembled and the listed papers were laid, Naresh Agarwal (SP), Derek O’Brien (TMC) and Dilip Tirkey (BJD) said they had given notices under rule 267, but deputy chairman P.J. Kurien said their motions had not been permitted. Ram Gopal Yadav of SP said the Centre had issued instructions to state governments to stop ration card benefits, pensions and subsidized LPG for those not having an Aadhaar card. As much as 40% of the population does not have an Aadhaar card and the move will hit the poor hard, he said.
Derek O’Brien said the Bharatiya Janata Party-led government talks of cooperative federalism but takes decisions without consulting the states. Making Aadhaar mandatory will have serious repercussions across the country, he said.
BJD leader Tirkey said that with 20% of the population in Odisha not having Aadhaar cards, the instruction from the Centre will only create problems for the poor. In his response, minister for urban development M. Venkaiah Naidu said the Act passed by Parliament provides that government benefits can be availed through Aadhaar cards.
The government, he said, has taken note of the concerns raised by members. "It [the Aadhaar card] is not compulsory. If necessary, instructions will be issued," he said. The Direct Benefit Transfer (DBT), the scheme of paying government benefits directly to users, was started by the previous United Progressive Alliance government, he said, adding that "DBT is the need of the hour" as it helps eliminate corruption, middlemen and leakages.
Observing that Aadhaar will not be made mandatory till the entire population gets such cards or UID numbers, he said, "I will ensure necessary clarification is issued at the earliest."
Triacylglycerols and Other Lipids Profiling of Hemp By-Products

Hemp seed by-products, namely hemp cake (hemp meal) and hemp hulls, were studied for their lipid content and composition. Total lipid content of hemp cake and hemp hulls was 13.1% and 17.5%, respectively. Oil extraction yields using hexane, on the other hand, were much lower in hemp cake (7.4%) and hemp hulls (12.1%). Oils derived from both hemp seeds and by-products were primarily composed of neutral lipids (>97.1%), mainly triacylglycerols (TAGs), as determined by SPE and confirmed by NMR. Linoleic acid was the major fatty acid present in oils derived from hemp by-products, covering almost 55%, followed by α-linolenic acid, covering around 18% of the total fatty acids. For the first time, 47 intact TAGs were identified in the hemp oils using UPLC-HRMS. Among them, TAGs with fatty acid acyl chains 18:3/18:2/18:2 and 18:3/18:2/18:1 were the major ones, followed by TAGs with fatty acid acyl chains 18:3/18:3/18:2, 18:2/18:2/16:0, 18:2/18:2/18:1, 18:3/18:2/18:0, 18:2/18:2/18:0, 18:2/18:1/18:1 and 18:3/18:2/16:0. Besides TAGs, low levels of terpenes, carotenoids and cannabidiolic acid were also detected in the oils. Moreover, the oils extracted from hemp by-products possessed a dose-dependent DPPH radical scavenging property, and their potencies were in a similar range compared to other vegetable oils.

Introduction

Cannabis sativa is an annual herbaceous plant that has been used as a source of food, fiber and medicine over centuries. Three main cultivar groups of cannabis plants have been widely grown around the world for the production of industrial fibers, hemp seeds and cannabinoids, especially cannabidiol (CBD). Industrial hemp and cannabis (marijuana) are the two main C. sativa plants cultivated in Canada. According to Health Canada, 77,800 acres of industrial hemp were planted in 2018, mostly in the prairies (Alberta 38.5%, Saskatchewan 35% and Manitoba 15%).
Cultivation of hemp is expected to rise significantly in the coming years because of increasing demand for either hemp seed oil for food or hemp oil containing mainly CBD, which has potential medicinal value. It is forecast that 450,000 acres of hemp will be planted in Canada in the coming years, with a market value of CAN$1 billion. Production of a large quantity of hemp seed oil and hemp oil generates a huge amount of waste, including stems, leaves, roots and the residual biomass obtained after oil extraction. Most of the hemp plant waste biomass ends up at landfills or as low-value products such as compost. The residual biomass collected after oil extraction from hemp seed, commonly known as hemp cake or hemp meal, has been studied for its nutritional components and for possible bread supplementation because of its high protein content. An overview of hemp by-products' nutrients, phytochemical composition, bioavailability and bioefficacy has been documented previously. Hemp waste has also been examined for the production of biofuel or as a cement replacement in concrete production. Since hemp seed oil is reported to have an excellent polyunsaturated fatty acid (PUFA) profile, including both omega-3 (ω-3) and omega-6 (ω-6) fatty acids, hemp cake oil demonstrates a similar fatty acid profile, possessing potential for aquafeed application. Aquaculture is a well-established industry in Canada, with production activities occurring in every province and territory. To sustain current aquaculture farming, it is important to have new sources of feed ingredient inputs, especially lipid and protein. Fishmeals are primarily made from forage fish and by-products of the commercial fishery industry. With the global supply of forage fish at a plateau, the aquaculture industry has heavily shifted to the use of plant-based proteins and oil to reduce dependency on conventional fishmeal.
The inclusion rate of dietary fishmeal and fish oil used within salmon feeds has been significantly reduced, to 15-18% and 12-13% of the total diet, respectively. In the present study we explore the lipid content of hemp waste, especially hemp cake and hemp seed hulls, and its characterization for possible aquafeed application. Cold-pressed commercial hemp oil and oils extracted from hemp hearts and whole hemp seeds are also analyzed for comparison.

Total Lipid Content and Oil Extraction Yield

Hemp by-products, primarily hemp cake and hemp hulls, together with hemp hearts and whole hemp seeds (Figure 1), were ground, and flour with particle size <1.0 mm was collected using a laboratory sifter (Buhler AG, Uzwil, Switzerland). Total lipid content of the ground samples was measured by the Folch method and the results are shown in Table 1. Hemp hearts possessed the highest lipid content (54.7%), followed by the whole hemp seeds (48.0%). Even though hemp cake was obtained after oil extraction from hemp seeds, the biomass retains 13.1% total lipids. Hemp hulls also have 17.5% total lipids.

Table 1. Total lipid and oil extraction yield of hemp hearts (HSHE), hemp whole seeds (HSWH), hemp cake (HSCA) and hemp seed hulls (HSHU), together with the separation of lipid classes using solid phase extraction (SPE), expressed as a percentage of oil.

Hexane was used to extract oil from hemp by-products, hemp hearts and hemp seeds.
Hemp hearts showed the highest oil extraction yield, 45.9%, followed by whole seeds (36.4%). Both hemp hulls and hemp cake had lower oil extraction yields of 12.1% and 7.4%, respectively. Oil extracted from hemp hearts has a light yellow color and oil from whole seeds has a light greenish color. Oils extracted from both hemp by-products and cold-pressed commercial oil are dark green in color (Figure 2).

Separation of Lipid Classes and 1H NMR Analysis

Oils derived from either hemp by-products or hemp seeds, together with cold-pressed hemp oil, were further fractionated using solid phase extraction (SPE) to determine neutral lipid, glycolipid and phospholipid content gravimetrically. All the oil samples contained more than 97.1% neutral lipids, mostly triacylglycerols (TAGs). Glycolipid content ranged from 0.3 to 2.2%; similarly, phospholipids were in the range of 0.3-1.3% (Table 1). The 1H NMR spectra further show an identical pattern for all five oils derived either from hemp by-products or hemp seeds (Supplementary Material: Figure S1). The major signals observed in the 1H NMR spectrum belong entirely to TAGs, either the fatty acid acyl chains (0.75-3.00 and 5.70 ppm) or the glyceride moiety (4.0-4.25 and 5.23 ppm), strongly suggesting that the hexane extracts and the cold-pressed commercial hemp oil are comprised mainly of TAGs.

Triacylglycerols (TAGs) Analysis of Neutral Lipid Using UPLC-HRMS

Ultra-high performance liquid chromatography high-resolution mass spectrometry (UPLC-HRMS) was used for the identification of individual TAGs present in the neutral lipid fractions of the oil extracted from hemp by-products and other hemp biomasses, plus cold-pressed commercial hemp oil. The total ion current (TIC) chromatograms of all oil samples are shown in Figure 3. TAGs eluted between 1.0 min and 5.00 min, and almost identical TICs were observed for all the tested neutral lipid fractions derived from either hemp by-products or hemp seeds.
In total, 47 different TAGs were detected in the oils extracted from either the by-products or hemp seeds, and are listed in Table 2. The relative abundance of TAGs containing 16:0 was also higher, based on the relative intensities of the ammonium adduct ions. TAGs with long-chain fatty acid side chains, including C20:0, C22:0 and C24:0, were relatively lower in concentration. Individual fatty acids of TAGs with acyl side chain lengths 59:8, 50:4, 59:6, 60:6 and 62:3 could not be identified because of overlapping signals, and the MS system only picked the three strongest peaks at a given retention time.

Table 2. Heat map of triacylglycerols (TAGs) identified in oil extracted from hemp hearts (HSHE), hemp whole seeds (HSWH), hemp cake (HSCA), hemp seed hulls (HSHU) and cold-pressed hemp oil (HSCP).

Fatty Acid Analysis

Fatty acid methyl esters (FAMEs) of the oil derived from both hemp by-products and other biomasses, together with cold-pressed commercial oil, were analyzed by gas chromatography (GC) after transesterification. The GC chromatogram is shown in Figure S2 and the data are presented in Table 3. The total fatty acid content of the oils ranged from 747.7 to 863.9 mg/g of oil. Oil extracted from hemp cake had the highest fatty acid content, 863.9 mg/g, followed by cold-pressed hemp oil, 859.2 mg/g. Linoleic acid (C18:2 n-6) was the predominant fatty acid in all the analyzed oils, covering more than 54.9% of the total fatty acids. α-linolenic acid (C18:3 n-3) was the second most dominant fatty acid, followed by oleic acid (C18:1 n-9) and palmitic acid (C16:0). Stearic acid, γ-linolenic acid and heptadecanoic acid were the other fatty acids detected in oil extracted from hemp by-products or other hemp seed samples.
Small amounts of long-chain fatty acids, including arachidic acid (C20:0), cis-11-eicosenoic acid (C20:1), cis-11,14-eicosadienoic acid (C20:2), behenic acid (C22:0), erucic acid (C22:1) and lignoceric acid (C24:0), were also present, each covering less than 0.4% of total fatty acids.

Table 3. Fatty acid profile of oil extracted from hemp hearts (HSHE), hemp whole seeds (HSWH), hemp cake (HSCA), hemp seed hulls (HSHU) and cold-pressed hemp oil (HSCP). Results are expressed in mg/g biomass and the percentage of each fatty acid in the oil is given in parentheses.

Pigment Analysis

The oils extracted from hemp whole seeds, hearts, hulls and cake, plus cold-pressed hemp oil, were analyzed for pigment content, carotenoids and chlorophylls, using high-performance liquid chromatography (HPLC). Lutein was detected in all tested hemp oil samples. The highest lutein content was observed in hemp cake, i.e., 0.125 mg/g oil. α-carotene and β-carotene were also detected in oil extracted from both hemp by-products and cold-pressed oil (Table 4). Several unidentified chlorophyll degradation peaks were detected in all analyzed oils except hemp hearts. HPLC chromatograms are shown in Figure S3.

Table 4. Carotenoid, cannabinoid and terpene content in oil extracted from hemp hearts (HSHE), hemp whole seeds (HSWH), hemp cake (HSCA), hemp seed hulls (HSHU) and cold-pressed hemp oil (HSCP). Results are expressed in mg/g oil; concentrations of terpenes are below the limit of quantitation (LoQ).

Terpene and Cannabinoid Analysis

Gas chromatography mass spectrometry (GC-MS) was used to analyze terpenes present in oil extracted from hemp by-products and hemp seeds, together with cold-pressed hemp oil. The TIC of the GC-MS is shown in Figure S4. A number of terpenes were detected in the oil samples derived either from hemp by-products or hemp seeds, but their concentrations were below the limit of quantitation (LoQ).
α-pinene, β-pinene, terpinolene, β-caryophyllene and α-humulene were common terpenes detected in all tested oils. δ-limonene, p-cymene, isopulegol and geraniol were detected only in oil derived from whole seeds (Table 4). UPLC-HRMS was used for cannabinoid analysis in the oils extracted from hemp by-products and other hemp samples, together with cold-pressed commercial hemp oil. Cannabinoid standards, including cannabidiolic acid (CBDA) and CBD, were used for calibration. A low level of CBDA was detected in oil derived from both hemp cake (0.027 mg/g) and hemp hulls (0.039 mg/g). CBD was detected in all tested oils except hemp hearts, and its concentration was below the limit of quantitation (LoQ). No other cannabinoids were detected in the oils extracted from hemp by-products and hemp seeds. The single ion monitoring of CBD and CBDA is shown in Figure S5 for all hemp oils.

DPPH Radical Scavenging Activity

The percentage yields of the MeOH extracts of the oil derived from the hemp hearts, whole seeds, hemp cake, hemp hulls and the cold-pressed commercial oil were 4.4, 6.6, 6.7, 11.0 and 4.4%, respectively. The MeOH extracts were tested for their DPPH radical scavenging activity at various concentrations. Hemp oils showed weak DPPH radical scavenging potency; IC50 values ranged from 555.0 to 3062.5 µg/mL, as compared to ascorbic acid (IC50 = 2.4 µg/mL), which was used as a positive control. All the tested oils showed dose-dependent DPPH radical scavenging activity (Figure 4). Hemp cake oil possessed the strongest DPPH radical scavenging activity, with an IC50 value of 555.2 µg/mL, followed by cold-pressed commercial hemp oil with an IC50 value of 610.0 µg/mL. The oil derived from the whole hemp seeds had the weakest DPPH radical scavenging activity, with an IC50 value of 3062.5 µg/mL.
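IC50 values like those above are typically read off a dose-response curve. A minimal sketch of that estimation, by linear interpolation between the two concentrations bracketing 50% inhibition, is shown below; the data points are invented for illustration and are not the measured Figure 4 values:

```python
# Sketch: estimate IC50 from a dose-response series by linear interpolation.
# Concentrations in ug/mL and % DPPH inhibition; the points are illustrative.

def ic50(points):
    """points: list of (concentration, percent_inhibition) pairs."""
    pts = sorted(points)
    for (c1, i1), (c2, i2) in zip(pts, pts[1:]):
        if i1 <= 50.0 <= i2:
            # linear interpolation between the bracketing points
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition not bracketed by the data")

dose_response = [(125, 18.0), (250, 31.0), (500, 46.0), (1000, 68.0)]
print(f"estimated IC50 ~= {ic50(dose_response):.0f} ug/mL")
```

In practice a four-parameter logistic fit is more robust than linear interpolation, but the interpolation conveys the idea with no dependencies.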
Discussion

Hemp seed oil is well-known for its health benefits because of its high polyunsaturated fatty acid (PUFA) content, especially linoleic acid (LA, ω-6) and α-linolenic acid (ALA, ω-3). The cold-pressed technique is commonly used to extract commercial hemp oil from seeds, leaving almost half of the total biomass as a by-product commonly known as hemp cake or hemp meal. Hemp seed hulls are another hemp by-product, obtained during the hulling process that produces hemp seed hearts, which can be consumed raw or cooked with other food. A number of studies have been conducted on hemp cake for its chemical composition and protein content, but no systematic study has been conducted on hemp seed hulls for their chemical characterization.
The current study focuses on an in-depth lipid analysis of hemp by-products, especially hemp cake and hemp hulls, for their possible aquafeed application. The total lipid content of both hemp cake and hemp hulls was determined, together with the oil extraction yield using hexane as extracting solvent. Whole hemp seeds and hemp hearts, together with cold-pressed commercial hemp oil, were also studied for comparison. The lipid content of whole hemp seeds (48.0%) was slightly lower than that of hemp hearts (54.7%) because of the presence of the hulls, which have 17.5% total lipid. Even though the hemp cake was collected after cold-pressed oil extraction, it still contained 13.1% total lipid, similar to a previous study. Hemp meal fractions have been reported to have an oil content varying from 8.26 to 18.6% depending on the particle size. In the current study, we used hexane as extracting solvent because it has low toxicity, is easy to evaporate and is widely used in the food industry for oil extraction. The results suggested that hemp cake has the lowest hexane-extractable oil, yielding only 7.4% of the biomass, which is almost half of the total lipid content (13.1%). The oil extraction yields of hemp seed hulls, hemp hearts and hemp whole seeds were also lower than the total lipid content (Table 1). The lower oil extraction yield suggests that polar lipids such as glycolipids or phospholipids remain in the residual biomass after hexane extraction; further study is needed to characterize such lipids in hemp seeds and hemp seed by-products. The solid phase extraction (SPE) technique with a silica gel cartridge was further used to determine the percentage of the various classes of lipids present within the oil extracted by hexane; the SPE results suggested that oils (hexane extracts) extracted from hemp by-products are primarily composed of neutral lipids, which comprise >97.1% of the oil (Table 1).
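The "almost half" observation above can be made concrete with the yields reported in this study (both figures are percentages of biomass; a quick illustrative calculation):

```python
# Sketch: fraction of total (Folch) lipid recovered by hexane extraction,
# using the total lipid contents and hexane yields reported in the text.

folch_total = {"hearts": 54.7, "whole seeds": 48.0, "cake": 13.1, "hulls": 17.5}
hexane_yield = {"hearts": 45.9, "whole seeds": 36.4, "cake": 7.4, "hulls": 12.1}

recovery = {k: 100.0 * hexane_yield[k] / folch_total[k] for k in folch_total}
for sample, pct in sorted(recovery.items(), key=lambda kv: kv[1]):
    print(f"{sample:12s} {pct:5.1f}% of total lipid recovered by hexane")
```

Hemp cake shows the poorest recovery (about 56%), consistent with the suggestion that its residual biomass is relatively richer in non-hexane-extractable polar lipids.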
The proton NMR signals in their respective spectra displayed peaks belonging only to TAGs (Figure S1), i.e., either the fatty acid acyl side chains or the glyceride moiety. The NMR spectral signals clearly demonstrated that the oil extracted either from by-products or from hemp seeds is made up mainly of triacylglycerols (TAGs). Hemp seed and hemp seed meal have been well-studied for their fatty acid composition; results suggested that LA, ALA, oleic acid, γ-linolenic acid and palmitic acid are the main fatty acids present in hemp seed or hemp meal oil. LA (C18:2 n-6) is the major fatty acid, covering more than 50% of the total fatty acids, and its percentage varies depending upon the extraction method and cultivar. Oomah et al. reported 53.4-56.6% LA in hemp seed oils extracted with petroleum ether using the Soxhlet extraction method. Similarly, 58% LA was reported in hemp seed oil extracted with supercritical carbon dioxide (CO2), and 55.2-54.8% LA was reported in hemp seed meal fractions. In the current study, the LA content of the hemp by-product oils was 54.9-55.9% of the total fatty acids. α-linolenic acid (ALA) was the second major fatty acid in the oil extracted from hemp by-products, covering 16.7-18.4% of the total fatty acids. Longer-chain fatty acids, i.e., C20 or higher, were also detected, but their concentrations were significantly lower than those of LA and ALA. Moreover, ω-6 PUFAs cover almost 60% and ω-3 PUFAs only 18% of the total fatty acids in the oil derived from hemp by-products. Almost identical results were observed for the oil derived from hemp hearts, hemp seeds and cold-pressed commercial hemp oil. Dietary lipids play important roles as a source of energy and of the essential fatty acids necessary for fish growth and development. There is an urgent need for alternative sources of protein and lipid in aquaculture diets because of increasing demand for aquaculture products, especially fish.
In general, freshwater fish require either LA (18:2 ω-6), ALA (18:3 ω-3) or both, whereas marine fish require the long-chain fatty acids eicosapentaenoic acid (EPA, 20:5 ω-3) and/or docosahexaenoic acid (DHA, 22:6 ω-3). Oils extracted from hemp by-products, namely hemp cake and hulls, contain high levels of LA and ALA, making them obviously suitable for freshwater aquafeed application. Extensive research has been conducted to study the effects of alternative lipid sources, mainly vegetable oils, on the growth of both marine and freshwater aquatic animals. Even though no growth performance advantage was observed in juvenile Tor tambroides with LA/ALA treatment against palm oil, which is mainly composed of saturated fatty acids, significantly higher concentrations of ω-3 PUFAs were found in the muscle of the LA/ALA-fed group. Studies on Atlantic salmon suggested that partial replacement of fish oil by vegetable oils (40-60%) can produce similar results to diets containing 100% fish oil during the grow-out phase of Atlantic salmon in the sea, and does not significantly affect ω-3 fatty acid composition in muscle when fish are fed a diet containing low levels of fish meal and moderate levels of fish oil, supporting the use of vegetable oils having little or no EPA/DHA for marine aquafeed application. In another study, Xue et al. observed that the fatty acid composition of fish fillets and livers reflected the dietary fatty acid composition when studying six alternative lipid sources in Japanese sea bass (Lateolabrax japonicus). Sankian et al. demonstrated that the complete replacement of fish oil is even possible in mandarin fish diets. These studies clearly demonstrate that vegetable oils are already starting to replace fish oil either partially or fully in fish feed diets. Soybean oil is one of the well-studied vegetable oils for aquafeed, having high levels of the PUFAs LA and ALA, at 55% and 13%, respectively.
Oil derived from hemp by-products has a slightly better ω-3 profile than soybean oil because it has higher ALA (18%). Possible use of hemp seed by-product oils for aquafeed application not only helps in managing the waste biomass, it also gives economic value to emerging hemp industries around the world. In addition, stearidonic acid (SDA), which is also present in hemp by-products, is an intermediate fatty acid in the biosynthetic pathway from α-linolenic acid to EPA/DHA, and the conversion from SDA is more efficient than from α-linolenic acid. The ω-6/ω-3 fatty acid ratio is another factor which plays an important role in health benefits for both humans and aquatic animals. Human beings evolved on a diet with a ratio of ω-6/ω-3 essential fatty acids of about 1, whereas in Western diets the ratio is 15/1 to 16.7/1; a lower ω-6/ω-3 ratio is needed for the prevention and management of chronic diseases. A study suggested that an increase in the dietary ω-6/ω-3 ratio influences the growth of Atlantic salmon challenged with Paramoeba perurans. The ω-6/ω-3 fatty acid ratios of the oils extracted from hemp cake and hemp hulls were 3.0:1 and 3.1:1, respectively. Several factors such as climatic conditions, cultivars and nutrients could affect the ω-6/ω-3 fatty acid ratio of hemp oil, and ratios of 1.71:1-3.31:1 have been reported. Even though fatty acids are a key component of hemp seed oils and of oils derived from hemp seed by-products, only a small amount may exist in free acid form. The majority of fatty acids are present in ester forms, either in triacylglycerols (TAGs) or in polar lipids such as glycolipids or phospholipids. It is worth mentioning here that the bioavailability of the fatty acids depends upon the structural composition of the triacylglycerols. It has been reported that re-esterified triacylglycerols have superior bioavailability of EPA and DHA as compared to natural fish oils, ethyl esters or free fatty acid forms.
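The ratio arithmetic above can be sketched in a few lines. The fatty acid percentages used here are illustrative values chosen to match the ranges reported in this study (LA ≈ 55%, ALA ≈ 18%), not the exact Table 3 entries:

```python
# Sketch: compute the omega-6/omega-3 ratio from fatty acid percentages.
# The composition below is illustrative, in the range reported for hemp
# cake oil in the text, not the measured Table 3 values.

def omega_ratio(fatty_acids):
    """fatty_acids: dict mapping (name, family) -> % of total fatty acids."""
    n6 = sum(pct for (_, fam), pct in fatty_acids.items() if fam == "n-6")
    n3 = sum(pct for (_, fam), pct in fatty_acids.items() if fam == "n-3")
    return n6 / n3

hemp_cake = {
    ("linoleic acid 18:2", "n-6"): 55.0,
    ("gamma-linolenic acid 18:3", "n-6"): 2.5,
    ("alpha-linolenic acid 18:3", "n-3"): 18.0,
    ("stearidonic acid 18:4", "n-3"): 0.9,
}

print(f"omega-6/omega-3 ratio: {omega_ratio(hemp_cake):.1f}:1")
```

With these inputs the ratio comes out near the 3.0:1 reported for hemp cake oil.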
It is crucial to know the structural composition of the intact lipids (or triacylglycerols) to understand their true health benefits for humans or as feedstock, including aquafeed applications. To the best of our knowledge, no research has been conducted to date on the characterization of intact TAGs in hemp seed oil or in oils derived from hemp by-products. In the current study, electrospray ionization (ESI) was used to characterize TAGs in the neutral lipid fractions of oil derived from hemp by-products. ESI is a soft ionization technique previously used by MacDougall et al. for profiling TAGs of microalgae. ESI generates intact molecular ions, typically observed as ammonium, sodium or proton adducts (Figure 5). The ammonium adduct ion was selected for identification of TAGs using the LipidMAPS database. Moreover, the molecular ion of a TAG can be fragmented under low-energy collision-induced dissociation (CID) in a systematic fashion into diacylglycerol (DAG) product ions, which were used for the identification of TAGs. Representative mass spectra of four TAGs having fatty acid side chains 18:3/18:2/18:1, 18:2/18:1/16:0, 24:0/18:3/18:2 and 22:0/18:2/18:1, with their fragment ions, are shown in Figure 5. Based on the accurate masses of adduct and fragment ions, we have for the first time identified 47 TAGs in the oils derived from hemp by-products or hemp seeds. The MS spectrum in Figure S6 showed that the precursor ions at m/z 896.77087, 870.75549 and 844.74030, eluting at 2.35 min in the hemp cake sample, belong to TAGs. Even though some of these TAGs were not chromatographically resolved, their identifications were unambiguous. The major fragments observed in the MSMS spectra represent the neutral losses of fatty acids from the glyceride backbone.
For example, in Figure 5, the precursor ion at m/z 896.77156 first lost the ammonium adduct and then a fatty acid, giving major fragments at m/z 601.51970, 599.50386 and 597.48850, representing the neutral losses of the fatty acids C18:3, C18:2 and C18:1, respectively. Determination of the position of the individual fatty acyl chains on the glyceride backbone was outside the scope of this study. Based on the intensities of the ammonium adduct ions, we were able to create a heat map of the individual TAGs within the neutral lipid fractions to compare their relative concentrations. The heat map of individual TAGs (Table 2) strongly suggested that TAGs with fatty acyl chains 18:3/18:2/18:2 and 18:3/18:2/18:1 are the major components in all the tested hemp oils, derived either from hemp by-products or from hemp seeds. The fatty acid analysis data (Table 3) clearly demonstrated that linoleic acid (C18:2 n-6), α-linolenic acid (C18:3 n-3), oleic acid (C18:1 n-9), stearic acid (C18:0) and palmitic acid (C16:0) were the major fatty acids in hemp oil. This further confirmed that TAGs having these fatty acyl chains (C18:3, C18:2, C18:1, C18:0 and C16:0) are the main TAGs in the hemp oils. TAGs with longer fatty acyl chains, i.e., C20, C22, C23 and C24, were also detected, but their intensities were lower compared with the TAGs containing the major fatty acyl chains. Carotenoids, chlorophyll and cannabinoids, especially CBD, have also been reported in hemp seed oil. In the current study, we also found the presence of lutein in all oil samples extracted from either by-products or hemp seeds and in cold-pressed commercial hemp oil. The light yellow color of the hemp hearts oil may be due to its low levels of lutein. α-Carotene and β-carotene were also detected in oils derived from hemp cake and hemp hulls, plus cold-pressed hemp oil. Chlorophyll degradation products and cannabinoids, especially CBD and CBDA, were observed in most of the oils except those from hemp hearts and whole seeds.
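The identification arithmetic described above can be sketched as follows: compute the monoisotopic [M+NH4]+ m/z of a candidate TAG and the DAG+ fragments expected from neutral loss of each fatty acid. The atomic masses are standard monoisotopic values; the candidate TAG 18:3/18:2/18:1 mirrors the Figure 5 example, and sub-mDa differences from the measured m/z values reflect instrument calibration.

```python
# Sketch of the TAG identification arithmetic: monoisotopic [M+NH4]+ precursor
# m/z of a candidate TAG, and the DAG+ fragments expected from neutral loss of
# each fatty acid. Standard monoisotopic masses; candidate TAG is illustrative.

M_C, M_H, M_O = 12.0, 1.00782503, 15.99491462
M_NH4 = 18.03382555      # NH4+ adduct mass (electron mass accounted for)
M_PROTON = 1.00727646

def fa_mass(carbons, double_bonds):
    """Monoisotopic mass of a free fatty acid CnH(2n-2d)O2."""
    hydrogens = 2 * carbons - 2 * double_bonds
    return carbons * M_C + hydrogens * M_H + 2 * M_O

def tag_mass(acyls):
    """Neutral TAG = glycerol + three fatty acids - 3 H2O (esterification)."""
    glycerol = 3 * M_C + 8 * M_H + 3 * M_O      # C3H8O3
    water = 2 * M_H + M_O
    return glycerol + sum(fa_mass(c, d) for c, d in acyls) - 3 * water

acyls = [(18, 3), (18, 2), (18, 1)]             # TAG 18:3/18:2/18:1
precursor = tag_mass(acyls) + M_NH4             # [M+NH4]+ ~ 896.770
fragments = [tag_mass(acyls) + M_PROTON - fa_mass(c, d) for c, d in acyls]
# DAG+ ions from loss of 18:3, 18:2, 18:1: ~ 601.519, 599.503, 597.488
```

The same two functions cover every TAG in Table 2 by changing the `acyls` list, which is how a candidate list can be screened against the accurate-mass data.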
The absence of cannabinoids and chlorophyll, and the presence of only low levels of lutein, in the hemp hearts oil clearly suggested that the cannabinoids, carotenoids and chlorophyll are present only in the hemp hulls. Higher concentrations of pigments and chlorophyll in hemp cake indicated that the cold-press process enhances the extraction efficiency of pigments. Lower levels of lutein and the absence of other carotenoids in whole seed oil suggested that the heating process may degrade the pigments. Terpenes were also present in all hemp samples, with more terpenes being detected in whole seed, cake and cold-pressed hemp oil; hemp hearts and hemp hulls had fewer terpenes detected. Hemp seeds have been reported to contain antinutrient components such as phytic acid or phytate. Even though we did not analyze phytic acid or phytate in the oils extracted from hemp seeds and by-products, these antinutrient components are most likely not present in the hemp seed oils based on their polarity and limited extractability by hexane. Vegetable oils have been reported to have antioxidant properties; thus, we further tested the oils derived from both hemp cake and hemp hulls for their DPPH radical scavenging potency for comparison with other vegetable oils. Oils extracted from the other hemp biomasses and cold-pressed hemp oils were also tested. Cold-pressed hemp oil and the oil derived from hemp cake possessed the highest DPPH radical scavenging potency compared with the oils extracted from hemp hearts and hemp hulls, which had relatively higher concentrations of α-carotene and β-carotene. The oil derived from hemp seeds showed significantly weaker DPPH radical scavenging activity compared with the other oils; most probably, the roasting process destroyed the antioxidative components. The mechanical crushing process seems to enhance extraction of the antioxidative components present in the hemp oils. The IC50 values of the tested oils range from 555 to 3062.5 µg/mL, weaker than the positive control ascorbic acid, which has an IC50 of 2.4 µg/mL. The IC50 values of hemp cake and cold-pressed hemp oils are in a similar range to those of soybean oil (IC50 362 µg/mL), canola oil (IC50 400 µg/mL) and sesame oil (IC50 500 µg/mL), and are better than those of flax seed oil (IC50 2400 µg/mL) and avocado oil (IC50 6200 µg/mL).
In summary, we have studied the lipid composition of oils derived from hemp seed hulls and hemp cake, together with whole hemp seeds, hemp hearts and commercial cold-pressed hemp oil. Both hemp by-products, i.e., hemp hulls and hemp cake, contain significant amounts of total lipids (17.5% and >13.1% of the biomass, respectively), giving them great commercial value as a lipid source for aquafeed. It is worth mentioning that this is the first report of such comprehensive lipid characterization for oil derived from hemp seed hulls and hemp cake in comparison with cold-pressed commercial oil and oils extracted from whole hemp seeds and hemp hearts. The fatty acid profile suggested that, regardless of material, whether hemp seed or hemp by-product, all biomasses had almost identical fatty acid compositions. LA and ALA were the two dominant fatty acids present in the oils extracted from hemp by-products, similar to cold-pressed hemp oil. The total ω-6/ω-3 ratios of the tested hemp oils were 3.1 to 3.3. For the first time, we have characterized the triacylglycerol content of hemp oil, and 47 individual TAGs were identified by UPLC-HRMS analysis. Among them, TAGs with fatty acyl side chains 18:3/18:2/18:2 and 18:3/18:2/18:1 were the major ones. Low levels of carotenoids, especially lutein, α-carotene and β-carotene, were found in the oils extracted from hemp by-products. Moreover, the oils derived from hemp by-products possessed dose-dependent DPPH radical scavenging properties, and their potency was in a similar range to other vegetable oils such as canola, soybean, avocado and sesame oils. General The NMR spectra were measured on a Bruker 500 or 700 MHz spectrometer in deuterated chloroform. HPLC analysis was carried out on an Agilent 1200 Series HPLC equipped with a diode array detector. High-resolution mass spectra were recorded with a Thermo Fisher Scientific (San Jose, CA, USA) Q Exactive mass spectrometer.
HPLC grade solvents and Milli-Q water were used for the extraction, fractionation and LC/MS analysis. Research Materials Hemp cake, hemp seed hulls, hemp hearts and cold-pressed hemp oil were received from Fresh Hemp Food Ltd., Winnipeg, Manitoba, Canada. Roasted whole hemp seeds were purchased from a local store (Bulk Barn) in Halifax, Nova Scotia, Canada. Samples were stored at room temperature after being received, except the hemp oil, which was stored in a freezer (−20 °C) prior to extraction or LC/MS analysis. All research materials were pulverized using a mechanical grinder and were passed through a laboratory sifter (Buhler AG, Uzwil, Switzerland) to collect flour with particle size <1.0 mm for further analysis. Total Lipid Content and Oil Extraction Total lipid content was determined by the Folch method with slight modifications. In brief, a pulverized sample (~100 mg) was extracted at room temperature, homogenizing with CHCl3/MeOH (2:1, 1 mL × 3) using a bead beater (Bead Mill 24, Fisher Scientific, Hampton, NH, USA) in 2 mL Lysing Matrix Y tubes (3 × 1 min cycles), in triplicate. The combined lipid extracts were dried under nitrogen and kept under vacuum overnight; the weight was measured gravimetrically and the total lipid content was calculated using the following formula: Total lipid (%) = (weight of lipid/weight of sample) × 100. The pulverized sample was further extracted with hexane to collect oil from both the hemp samples and the hemp by-products. The solid/solvent ratio was 1:20 (w/v) and a 20 g sample was used for oil extraction. Extraction was performed at room temperature overnight. Lipid Class Separation by Solid Phase Extraction (SPE) The oils (hexane extracts), either extracted from the hemp samples or received from Hemp Oil Canada, were further fractionated into three different classes of lipid, i.e., neutral lipids containing mostly triacylglycerols (TAGs), glycolipids and phospholipids, following a previously reported solid phase extraction (SPE) method.
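The gravimetric total-lipid formula above can be sketched in a few lines; the sample masses below are illustrative, not measured values from this study.

```python
# Minimal sketch of the gravimetric total-lipid calculation described above:
# Total lipid (%) = (weight of lipid / weight of sample) * 100.
# Masses are illustrative placeholders.

def total_lipid_percent(lipid_mg, sample_mg):
    """Total lipid content as a percentage of the dry sample mass."""
    return lipid_mg / sample_mg * 100.0

# e.g. 17.5 mg of dried lipid recovered from a ~100 mg pulverized sample,
# comparable to the hemp hull figure reported in the summary:
pct = total_lipid_percent(17.5, 100.0)   # 17.5 %
```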
Briefly, the SPE column (Discovery DSC-Si Tube, 3 mL, 500 mg) was conditioned with 10 mL chloroform. Approximately 100 mg oil in 1.0 mL chloroform was applied to the column. The column was then eluted successively with chloroform (10 mL), acetone (10 mL) and methanol (10 mL), yielding the neutral lipid, glycolipid and phospholipid fractions, respectively. The percentage of each class of lipid was determined gravimetrically after drying under nitrogen followed by vacuum overnight. Triacylglycerol (TAG) Analysis by UPLC-HRMS UPLC-HRMS data were acquired on an UltiMate 3000 system coupled to a Q Exactive hybrid quadrupole-Orbitrap mass spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) equipped with a HESI-II probe for electrospray ionization. Separation was achieved on a Thermo Hypersil Gold C8 column (100 × 2.1 mm, 1.9 µm) at 40 °C. Through a flow splitter, approximately 1/15 of the LC eluent was sent to the mass spectrometer. A makeup solution consisting of 5 mM ammonium formate in IPA/de-ionized water/methanol 1/2/7 (v/v/v) was delivered constantly at 100 µL/min to the MS. The solvent system was composed of acetonitrile and IPA. The initial gradient was 100% acetonitrile, which increased linearly to 5% IPA in 1 min, then linearly to 70% IPA in 8 min, and was held for 2 min, at a flow rate of 750 µL/min. MS data were acquired in positive ion mode in data-dependent mode, alternating between full MS and MSMS scans, where the three most abundant precursor ions were subjected to MSMS using 25 eV collision energy. The source parameters were set as follows: sheath gas: 15, auxiliary gas flow: 4, sweep gas: 0, spray voltage: 2.1 kV, capillary temperature: 375 °C, heater temperature: 300 °C. Fatty Acid Analysis by GC Fatty acid analysis was performed according to AOAC official method 991.39 with slight modifications, in triplicate.
Briefly, ~10 mg of oil extracted from the hemp biomasses, or of cold-pressed commercial hemp oil, was placed in a dry 5 mL screw-capped reaction vial with MeOH (1.0 mL) containing 0.1 mg methyl tricosanoate as an internal standard (IS). The mixture was sonicated, then 1.5 N NaOH solution in MeOH (0.5 mL) was added; the vial was blanketed with nitrogen, heated for 5 min at 100 °C and cooled for 5 min. BF3 14% solution in MeOH (1.0 mL, Sigma-Aldrich, St. Louis, MO, USA) was added, mixed, blanketed with nitrogen and heated at 100 °C for 30 min. After cooling, the reaction was quenched by the addition of water (0.5 mL) and the FAME was extracted with hexane (2.0 mL). Part of the hexane layer (300-600 µL) was transferred to a GC vial for analysis by GC-FID. GC-FID was carried out on an Agilent Technologies 7890A GC using an Omegawax 250 fused silica capillary column (30 m × 0.25 mm × 0.25 µm film thickness) for fatty acid analysis. Supelco® 37-component FAME mix and PUFA-3 (Supelco, Bellefonte, PA, USA) were used as fatty acid methyl ester standards. The fatty acid content in hemp oil samples was calculated by the following equation and expressed as mg/g sample: Fatty acid (mg/g) = (A_X × W_IS × CF_X)/(A_IS × W_S × 1.04) × 1000, where A_X = area counts of the fatty acid methyl ester; A_IS = area counts of the internal standard (tricosylic acid methyl ester); CF_X = theoretical detector correction factor, taken as 1; W_IS = weight of IS added to the sample in mg; W_S = sample mass in mg; and 1.04 is the factor necessary to express the result as mg fatty acid/g sample. Pigment Analysis The oils (hexane extracts) were dissolved in MeOH at a concentration of 1 mg/mL by sonicating for 10 min and were filtered; the MeOH-soluble fraction after filtration was used for HPLC analysis. Pigment (carotenoid and chlorophyll) analysis was performed using an Agilent 1200 series HPLC with a YMC Carotenoid column (5 µm, 2 × 250 mm, YMC Co.
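The AOAC 991.39 quantification formula above is a single ratio; the sketch below implements it with illustrative area counts and weights (CF_X is taken as 1, as in the text, and 1.04 converts methyl-ester response to free fatty acid mass).

```python
# Sketch of the AOAC 991.39 fatty acid quantification formula used above:
# FA (mg/g) = (A_X * W_IS * CF_X) / (A_IS * W_S * 1.04) * 1000.
# Area counts and weights below are illustrative placeholders.

def fatty_acid_mg_per_g(a_x, a_is, w_is_mg, w_s_mg, cf_x=1.0):
    """A_X/A_IS: FAME and internal standard peak areas; W_IS/W_S: IS and
    sample masses in mg; CF_X: theoretical detector correction factor."""
    return (a_x * w_is_mg * cf_x) / (a_is * w_s_mg * 1.04) * 1000.0

# e.g. an analyte peak twice the IS area, 0.1 mg IS spiked into a 10 mg sample:
content = fatty_acid_mg_per_g(a_x=2.0e6, a_is=1.0e6, w_is_mg=0.1, w_s_mg=10.0)
# ~19.2 mg fatty acid per g of sample
```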
Ltd., Tokyo, Japan), eluting with 50 mM NH4OAc in MeOH/tert-butyl methyl ether (TBME), linear gradient 5 to 65% B in 30 min at a 0.2 mL min−1 flow rate, with a total run time of 60 min. Standard curves for chlorophyll a, chlorophyll b, astaxanthin, α-carotene, β-carotene, canthaxanthin, fucoxanthin, lutein, lycopene and zeaxanthin at 450 nm were used for carotenoid quantification. Terpene and Cannabinoid Analysis Oil extracted from the hemp biomasses or cold-pressed hemp oil (1.0 g) was dissolved in 5 mL of ethyl acetate. Samples were diluted by half and spiked with 20 µg/mL of dodecane internal standard for GC-MS analysis. GC-MS analysis was carried out on an Agilent Technologies 6890N GC equipped with an Agilent 5975 MSD and FID detector. Separation was carried out using a Restek Rxi 625 Sil MS column (30 m × 0.25 mm × 1.4 µm film thickness). Restek® Cannabis Terpenes Standard #1, containing 19 terpenes, was used as the terpene standard. Samples were injected in triplicate. A terpene standard curve containing 1, 2.5, 5, 10 and 25 µg/mL of terpene standard and 20 µg/mL of dodecane internal standard was injected in duplicate for quantification and identification. For cannabinoid analysis, hemp oils were dissolved in MeOH at 1.0 mg/mL and sonicated for 5 min at room temperature. The MeOH-soluble part was applied to a C-18 solid phase extraction cartridge to remove residual triacylglycerols. The eluent was collected, further diluted 10-fold and subjected to UPLC/HRMS, using an Acquity UPLC HSS-T3 column (Waters, 2.1 × 100 mm, 1.8 µm) with a flow rate of 0.4 mL/min (0.1% formic acid in water/0.1% formic acid in MeOH with linear gradient: 20:80-16:84 in 2 min, 16:84-14:86 in 4 min) in full MS scan mode. The source parameters were as follows: sheath gas: 50, auxiliary gas flow: 10, spray voltage: 3.0 kV, capillary temperature: 300 °C, heater temperature: 300 °C. Standard cannabinoids purchased from Sigma-Aldrich, St. Louis, MO, USA were used as controls.
DPPH Radical Scavenging Activity The 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging assay was performed according to the procedure described by Hatano et al. with minor modifications. Oil derived from both hemp seeds and by-products (25 mL ≈ 22.7 g) was extracted with MeOH (150 mL × 2) by stirring at room temperature for 3 h. The MeOH-soluble part was evaporated to dryness and used for the DPPH radical scavenging assay. In brief, 100 µL of extract at various concentrations was mixed with an equal volume of 60 µM DPPH solution in MeOH; the resulting solution was thoroughly mixed and the absorbance was measured at 520 nm after 30 min using a SpectraMax Plus spectrophotometer plate reader (Molecular Devices, San Jose, CA, USA). The scavenging activity was determined by comparing the absorbance with that of controls containing only DPPH and MeOH (100%). Vitamin C, a known antioxidant, was used as a positive control. Measurements were carried out in triplicate. DPPH radical scavenging activity (%) = 100 − [(A_SD − A_SM)/(A_DM − A_ME)] × 100, where A_SD = absorbance of sample + DPPH; A_SM = absorbance of sample + MeOH; A_DM = absorbance of DPPH + MeOH; and A_ME = absorbance of MeOH at 520 nm. IC50 values were also calculated from the linear range of each assay.
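The scavenging formula above can be sketched directly; the absorbance values below are illustrative, not measurements from this study.

```python
# Sketch of the DPPH radical scavenging calculation described above:
# activity (%) = 100 - (A_SD - A_SM)/(A_DM - A_ME) * 100.
# Absorbances at 520 nm are illustrative placeholders.

def dpph_scavenging_percent(a_sd, a_sm, a_dm, a_me):
    """A_SD: sample + DPPH; A_SM: sample + MeOH (sample blank);
    A_DM: DPPH + MeOH (control); A_ME: MeOH alone (solvent blank)."""
    return 100.0 - (a_sd - a_sm) / (a_dm - a_me) * 100.0

# A sample that quenches half of the DPPH absorbance scores 50%:
activity = dpph_scavenging_percent(a_sd=0.50, a_sm=0.10, a_dm=0.90, a_me=0.10)
```

Repeating this over a dilution series and interpolating where the activity crosses 50% gives the IC50 values quoted in the results.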
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27072339/s1, Figure S1: 1H NMR spectra of oil extracted from hemp seeds and hemp seed by-products recorded on a 500/700 MHz NMR spectrometer; Figure S2: GC-FID fatty acid analysis of oil extracted from hemp seeds and hemp seed by-products; Figure S3: HPLC chromatograms of oils derived from hemp seeds and hemp seed by-products; Figure S4: TIC of oils derived from hemp seeds and hemp by-products in GC-MS analysis; Figure S5: Single ion monitoring of CBD, CBDA, delta-9-tetrahydrocannabinol (THC) and delta-9-tetrahydrocannabinolic acid (THCA). Institutional Review Board Statement: This is NRC publication No. 58308.
|
// Copyright 2013 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package stats
import (
"bytes"
"fmt"
"sort"
"time"
)
// A Timer that can be started and stopped and accumulates the total time it
// was running (the time between Start() and Stop()).
type Timer struct {
name fmt.Stringer
created time.Time
start time.Time
duration time.Duration
}
// Start the timer.
func (t *Timer) Start() *Timer {
t.start = time.Now()
return t
}
// Stop the timer.
func (t *Timer) Stop() {
t.duration += time.Since(t.start)
}
// ElapsedTime returns the time that passed since starting the timer.
func (t *Timer) ElapsedTime() time.Duration {
return time.Since(t.start)
}
// String returns a string representation of the Timer.
func (t *Timer) String() string {
return fmt.Sprintf("%s: %s", t.name, t.duration)
}
// A TimerGroup represents a group of timers relevant to a single query.
type TimerGroup struct {
timers map[fmt.Stringer]*Timer
}
// NewTimerGroup constructs a new TimerGroup.
func NewTimerGroup() *TimerGroup {
return &TimerGroup{timers: map[fmt.Stringer]*Timer{}}
}
// GetTimer gets (and creates, if necessary) the Timer for a given code section.
func (t *TimerGroup) GetTimer(name fmt.Stringer) *Timer {
if timer, exists := t.timers[name]; exists {
return timer
}
timer := &Timer{
name: name,
created: time.Now(),
}
t.timers[name] = timer
return timer
}
// Timers is a slice of Timer pointers that implements Len and Swap from
// sort.Interface.
type Timers []*Timer
type byCreationTimeSorter struct{ Timers }
// Len implements sort.Interface.
func (t Timers) Len() int {
return len(t)
}
// Swap implements sort.Interface.
func (t Timers) Swap(i, j int) {
t[i], t[j] = t[j], t[i]
}
func (s byCreationTimeSorter) Less(i, j int) bool {
return s.Timers[i].created.Before(s.Timers[j].created)
}
// String returns a string representation of the TimerGroup, with its timers
// sorted by creation time.
func (t *TimerGroup) String() string {
timers := byCreationTimeSorter{}
for _, timer := range t.timers {
timers.Timers = append(timers.Timers, timer)
}
sort.Sort(timers)
result := &bytes.Buffer{}
for _, timer := range timers.Timers {
fmt.Fprintf(result, "%s\n", timer)
}
return result.String()
}
|
Thermal radiation in curved spacetime using influence functional formalism Generalizing to relativistic exponential scaling and using the theory of noise from quantum fluctuations, it has been shown that one vacuum (Rindler, Hartle-Hawking, or Gibbons-Hawking for the cases of the uniformly accelerated detector, black hole, and de Sitter universe, respectively) can be understood as resulting from the scaling of quantum noise in another vacuum. We explore this idea more generally to establish a flat spacetime and curved spacetime analogy. For this purpose, we start by examining noise kernels for free fields in some well-known curved spacetimes, e.g., the spacetime of a charged black hole, the spacetime of a Kerr black hole, Schwarzschild-de Sitter, Schwarzschild anti-de Sitter, and Reissner-Nordstrom de Sitter spacetimes. Here, we consider a maximal analytical extension for all these spacetimes and different vacuum states. We show that the exponential scale transformation is responsible for the thermal nature of the radiation. I. INTRODUCTION The Unruh effect refers to the thermal fluctuations experienced by a detector while undergoing linear motion with uniform acceleration in the Minkowski vacuum. This thermality can be demonstrated by tracing the vacuum state of the field over the modes beyond the accelerated detector's event horizon. However, the event horizon is well defined only if the detector moves with eternal uniform linear acceleration, wherein the particle's speed asymptotically approaches the speed of light, entailing the formation of an event horizon. In contrast, in the circular case, the velocity changes direction but its magnitude remains constant, and there is no event horizon. In the realistic Unruh case, eternal uniform linear acceleration is impossible; hence the notion of the horizon is difficult to envisage. In earlier work, the effect was studied as a kinematic effect in terms of influence functionals; see also the related literature in this context.
Thermal radiance from a black hole, or as observed by an accelerated detector, is usually viewed as a geometric effect related to the existence of an event horizon. It was proposed, however, that the detection of thermal radiance in these systems is a local, kinematic effect arising from the vacuum being subjected to a relativistic exponential scale transformation. Generalizing to relativistic exponential scaling and using the theory of the noise kernel from quantum fluctuations, it has been shown that one vacuum (Rindler, Hartle-Hawking, or Gibbons-Hawking for the cases of the uniformly accelerated detector, black hole, and de Sitter universe, respectively) can be understood as resulting from the scaling of quantum noise in another. This paper explores the idea of relativistic exponential scaling and the influence functional formalism more generally to establish a flat spacetime and curved spacetime analogy. For this purpose, in Sec. II, we briefly discuss relativistic exponential scaling. In Sec. III, a sketch of the influence functional formalism is provided. The derivation of this formalism in different spacetimes is essentially the same as in the Unruh effect; hence, in Sec. IV, the key derivation of the Unruh effect using the exponential scale transformation and the influence functional formalism is brought out. This formalism is then applied in the other examples discussed. Thus, Hawking radiation for the 1+1 dimensional charged black hole, the Kerr black hole, and the 1+1 dimensional Schwarzschild-de Sitter, Schwarzschild anti-de Sitter, and Reissner-Nordstrom de Sitter spacetimes is studied in Secs. V, VI, VII, VIII and IX, respectively, using the exponential scale transformation and the influence functional formalism. Use is made of the maximal analytical extension for all these spacetimes and of different vacuum states. The temperatures obtained are consistent with those in the literature, obtained in different contexts. Finally, we present our conclusions. (Author contact: [email protected], [email protected]) II.
RELATIVISTIC EXPONENTIAL SCALING There exist several well-established methods in quantum field theory in curved spacetimes to study particle creation from a black hole or the Unruh effect. The central aspect of the Unruh effect is that the vacuum state where thermal radiance is observed (e.g., the Rindler vacuum) is related to the inertial vacuum state (the Minkowski vacuum) by an exponential scale transformation. Similarly, for Hawking radiation, the vacuum state where thermal radiance is observed (e.g., the Schwarzschild vacuum, also known as the Boulware vacuum) is related to the inertial vacuum state (e.g., the Kruskal vacuum, also known as the Hartle-Hawking vacuum) by an exponential scale transformation. Due to the presence of the exponential terms, the Rindler spacetime used by accelerated observers covers only a wedge-shaped region of Minkowski spacetime. Similarly, for the Hawking effect, the Schwarzschild coordinates cover only a wedge-shaped patch of the Kruskal coordinates. Hence, using these coordinate systems, one can see Hawking radiation and the Unruh effect. One can also consider another vacuum state, such as the Unruh state, for calculating Hawking radiation. The Unruh vacuum state corresponds to the Boulware vacuum in the far past and the Hartle-Hawking vacuum in the far future. In this paper, we also show that the exponential scale transformation is responsible for the thermal nature of the radiation of these spacetimes, thus highlighting a uniform approach to this phenomenon. Further, this also strives to build up an equivalence between geometry and quantum statistical mechanics. III. INFLUENCE FUNCTIONAL We consider the scenario wherein a detector is used to probe the unperturbed state of a scalar field. The detector and the massless scalar field can be considered to be the system and its bath, respectively. The system interacts weakly with the massless scalar field.
We assume that at a given initial time the system and the environment are uncorrelated. Thus, the total density matrix of the combined system (system + environment) at the initial time can be written as the density matrix of the system in an outer product with the density matrix of the environment. As we are interested in the time evolution of the system of interest, which here is the detector, the field degrees of freedom are traced out. This provides the reduced density matrix of the detector, taking into account the influence of the field. This influence is ensconced in the influence functional, which is characterized by ν(t, t′) and μ(t, t′), the noise and dissipation kernels, respectively. These kernels can be written compactly in terms of I(k, t, t′), the field (bath) spectral density. The influence functional is a path integral approach to studying the dynamics of the system of interest, taking into account the effects of the environment, sometimes called the reservoir. From the influence functional characterizing the environment, one obtains what is known as the propagator. The propagator can then be used as a capsule to generate the final state of the system of interest, given its initial state, hence the name propagator. This provides a flexible tool to approach a vast variety of problems, ranging from the early universe and black holes to decoherence in the quantum-to-classical transition. Here, a paradigm model is that of quantum Brownian motion. The noise and dissipation kernels characterizing the influence kernel are determined by the spectral density. In order to obtain true irreversible dynamics, a continuous distribution of bath modes can be introduced, such that the spectral density is represented by a smooth function of the environment's frequencies. In this work, we explore the idea of relativistic exponential scaling and the influence functional formalism more generally, with the aim of establishing a flat spacetime and curved spacetime analogy.
For this purpose, we examine noise kernels for free fields in some well-known curved spacetimes, e.g., the spacetimes of a charged black hole, a Kerr black hole, Schwarzschild-de Sitter, Schwarzschild anti-de Sitter, and Reissner-Nordstrom de Sitter spacetimes. The essential derivation of this formalism is the one involved in the Unruh effect, because the relation between the tortoise and Kruskal coordinates is similar to that between the Rindler and inertial coordinates. We briefly review the key derivation of the Unruh effect using the exponential scale transformation and the influence functional formalism in the next section. IV. UNRUH EFFECT A uniformly accelerated observer can see only a part of Minkowski spacetime. The line element for this observer in Minkowski spacetime is given by ds² = e^{2aξ}(−dη² + dξ²) + dy² + dz², where the observer is uniformly accelerated in the x direction with proper acceleration a. On the other hand, the Minkowski metric is given by ds² = −dt² + dx² + dy² + dz². The {y, z} coordinates are the same in both. The following transformation connects the accelerated and the inertial observers: t = (1/a) e^{aξ} sinh(aη), x = (1/a) e^{aξ} cosh(aη). Here we see that the Rindler coordinates are related to the inertial coordinates by an exponential scale transformation; hence Rindler spacetime, used by accelerated observers, covers only a wedge-shaped region of Minkowski spacetime. Now we consider a two-dimensional massless scalar field in flat spacetime with mode decomposition Φ(x) = Σ_k √(2/L) [q⁺_k cos(kx) + q⁻_k sin(kx)], so that the Lagrangian for the field can be expressed as a sum of oscillators with amplitude q±_k for each mode. Here we consider an observer undergoing constant acceleration a in this field, on the trajectory t(τ) = (1/a) sinh(aτ), x(τ) = (1/a) cosh(aτ). We want to show via the influence functional method that the observer detects thermal radiation. The system-field interaction couples the detector and the field at the spatial point x(τ) with coupling strength λ, where r is the detector's internal coordinate.
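The exponential scale transformation just described can be checked numerically: under t = (1/a) e^{aξ} sinh(aη), x = (1/a) e^{aξ} cosh(aη), a constant-ξ worldline traces the hyperbola x² − t² = (e^{aξ}/a)², independent of η, so ξ enters the inertial coordinates only through an exponential scale factor. A minimal sketch (the numerical values are illustrative):

```python
# Numerical check of the Rindler-to-Minkowski map quoted above:
# t = (1/a) e^{a xi} sinh(a eta), x = (1/a) e^{a xi} cosh(a eta).
# Constant-xi worldlines satisfy x^2 - t^2 = (e^{a xi}/a)^2 for every eta.
import math

def rindler_to_minkowski(a, eta, xi):
    scale = math.exp(a * xi) / a     # the exponential scale factor
    return scale * math.sinh(a * eta), scale * math.cosh(a * eta)

a, xi = 1.0, 0.3                     # illustrative acceleration and position
invariants = []
for eta in (-1.0, 0.0, 0.5, 2.0):
    t, x = rindler_to_minkowski(a, eta, xi)
    invariants.append(x * x - t * t)   # should be eta-independent

expected = (math.exp(a * xi) / a) ** 2
```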
Integrating out the spatial variable, we find that the scalar field is characterized by its spectral density, which in the discrete case is I(k_n, τ, τ′). Here k_n = ω_n = |k_n|, as we take a massless scalar field, and c⁺_n(τ) = √(2/L) cos k_n x(τ), c⁻_n(τ) = √(2/L) sin k_n x(τ) are the effective coupling constants for the accelerated observer coupled to the field. Taking the continuum limit Σ_n → (L/2π)∫dk, the spectral density I(k, τ, τ′) becomes an expression involving I(k), the spectral density of the scalar field seen by an inertial detector. For the case where the scalar field is initially in its vacuum state, a scenario that is apt for the present work, the influence of the quantum field on the detector is expressed in terms of an influence kernel, appearing in the exponent of the influence functional and written in terms of the sum and difference times 2Σ = τ + τ′ and Δ = τ − τ′. Expanding the exponential terms in terms of Bessel functions of imaginary order, the kernel takes a manifestly thermal form, and the temperature associated with the Unruh effect is read off as T = a/(2π k_B). Here a is the acceleration of the detector and k_B is the Boltzmann constant (in units with ℏ = c = 1). This formalism will be applied subsequently. V. HAWKING RADIATION FOR 1+1 DIMENSIONAL CHARGED BLACK HOLE Here we consider a massless, minimally coupled scalar field in a 1+1 dimensional charged black hole (Reissner-Nordstrom black hole). The line element of this black hole is given by ds² = −f(r) dt² + dr²/f(r), where f(r) = 1 − 2M/r + Q²/r². Here M is the mass of the black hole and Q is its charge. From the equation f(r) = 0, one gets two horizons, r± = M ± √(M² − Q²), where r₊ and r₋ are the event horizon and the Cauchy horizon, respectively. The surface gravities at the horizons are defined as κ± = |f′(r)|/2 evaluated at r = r±. Here κ₊ is the surface gravity at the event horizon and κ₋ is the surface gravity at the Cauchy horizon. In terms of the surface gravities and horizons, the tortoise coordinate r⋆ can be written as r⋆ = r + (1/2κ₊) ln|r/r₊ − 1| − (1/2κ₋) ln|r/r₋ − 1|.
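The two temperature formulas in this section can be made concrete numerically. Below is a hedged sketch: the Unruh temperature restored to SI units, T = ℏa/(2πck_B), and the Reissner-Nordstrom horizons, surface gravity and Hawking temperature T = κ₊/(2π) in geometric units (G = c = ℏ = k_B = 1), using the expressions just given.

```python
# Numerical sketch of the temperatures discussed above. Unruh: T = a/(2 pi k_B)
# in natural units, i.e. T = hbar*a/(2 pi c k_B) in SI units. Reissner-
# Nordstrom: r_pm = M +- sqrt(M^2 - Q^2), kappa_+ = (r_+ - r_-)/(2 r_+^2),
# T = kappa_+/(2 pi) in geometric units (G = c = hbar = k_B = 1).
import math

HBAR, C, K_B = 1.054571817e-34, 2.99792458e8, 1.380649e-23  # SI (CODATA)

def unruh_temperature_si(a):
    """Unruh temperature in kelvin for proper acceleration a in m/s^2."""
    return HBAR * a / (2.0 * math.pi * C * K_B)

def rn_temperature(M, Q):
    """Hawking temperature of a Reissner-Nordstrom hole (geometric units)."""
    disc = math.sqrt(M * M - Q * Q)
    r_p, r_m = M + disc, M - disc
    kappa_p = (r_p - r_m) / (2.0 * r_p ** 2)
    return kappa_p / (2.0 * math.pi)

t_earth = unruh_temperature_si(9.81)   # ~4e-20 K: tiny at 1 g acceleration
t_schw = rn_temperature(1.0, 0.0)      # Q = 0 recovers Schwarzschild 1/(8 pi M)
t_extreme = rn_temperature(1.0, 1.0)   # extreme case M = Q: temperature vanishes
```

The extremal limit reproduces the statement made below that the temperature is nonzero only in the non-extreme case.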
Now, using the expression for the surface gravity at the event horizon, we define Kruskal coordinates for the event horizon as, Here the exponential transformation makes its appearance. Thus, the ordinary coordinates (t, r_⋆) of the charged black hole cover only a wedge-shaped patch of the Kruskal coordinates. A detector at constant r_⋆ is similar to the case of the accelerating observer, as can be seen from an analogy with Eq.. The spectral density, in analogy with Eq., is given by, where I(k) is the spectral density of the scalar field, as seen by an inertial detector. The influence of the quantum field on the detector is expressed in terms of the influence kernel, where (for more details, see Appendix A; the calculation is similar, but one has to take the surface gravity κ_+ instead of the acceleration a, and the appropriate coordinate system). From Eq., it can be seen that the temperature associated with Hawking radiation for the 1+1 dimensional charged black hole is T = κ_+/(2π k_B). From the expression for the temperature, it follows that for the non-extreme case (M ≠ Q) the temperature is not zero, but it vanishes in the extreme case (M = Q). One can also arrive at this result from Eq. by writing the influence kernel in terms of the null coordinates (U and V) as, where U = t̃ − r̃_⋆ = −(1/κ_+) e^{−κ_+ u} and V = t̃ + r̃_⋆ = (1/κ_+) e^{κ_+ v} are the Kruskal coordinates. Here u = t − r_⋆ and v = t + r_⋆ are the ordinary null coordinates for the charged black hole. For a detector at fixed r_⋆, the influence kernel can be written exactly as Eq.. One can also consider the Unruh vacuum state, using the null coordinates U and v, for calculating the Hawking radiation of the 1+1 dimensional charged black hole. The influence kernel can be written in terms of the null coordinates (U and v) as, For a detector at fixed r_⋆, the influence kernel in the Unruh vacuum state can be written as, where, as v is not the same as V, we get a form different from that of Eq..
From Eq., it can be seen that the temperature associated with Hawking radiation for the 1+1 dimensional charged black hole is again T = κ_+/(2π k_B). Thus, we get the same temperature using the Unruh vacuum state instead of the Hartle-Hawking vacuum state. VI. HAWKING RADIATION FROM THE KERR BLACK HOLE Here we consider a massless minimally coupled scalar field in the Kerr black hole spacetime. Near the horizon, the scalar field theory in the 4-d Kerr spacetime can be reduced to a 2-d field theory. In Boyer-Lindquist coordinates, the Kerr metric is given by, where ρ² = r² + a² cos²θ. Here M is the mass of the black hole, a is the angular momentum per unit mass of the black hole, and r = r_± are the event horizon and the Cauchy horizon, respectively. The determinant of the above metric is, and the inverse of the (t, φ) part of the metric is given by, The action for a massless minimally coupled scalar field in the 4-d Kerr spacetime is given by, After taking the limit r → r_+ and keeping the leading order terms, we get, Now we transform to the locally non-rotating coordinate system, given by, where Ω_H ≡ a/(r_+² + a²). Using the new coordinates, the action can be rewritten as, Applying the spherical harmonics expansion φ(x) = Σ_{l,m} φ_{lm} Y_{lm}, one finally gets the effective 2-dimensional action as, The effective 2-dimensional metric from the above action can be expressed as, Hence, in the near-horizon region, the geometry of the Kerr spacetime is the same as the Rindler spacetime when r_+ > r_−. One can rewrite this metric as, where dr_⋆ = dr/f(r). The surface gravity at the event horizon is defined as κ_+ = |f′(r)|/2 evaluated at r = r_+. Now, using the expression for the surface gravity, we define Kruskal coordinates as, Here we again come across the exponential transformation relation. Thus, in the near-horizon region, the ordinary coordinates of the Kerr black hole cover only a wedge-shaped patch of the Kruskal coordinates. For a detector at constant r_⋆, the situation is similar to that of an accelerating observer, as can be seen from an analogy with Eq..
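The Kerr quantities used above admit a similar numerical sketch (our own illustration, not part of the paper; natural units are assumed, and the closed forms r_± = M ± √(M² − a²), κ_+ = (r_+ − r_−)/(2(r_+² + a²)) and Ω_H = a/(r_+² + a²) are the standard ones):

```python
import math

def kerr_horizons(M, a):
    """Event and Cauchy horizons: r_pm = M +/- sqrt(M^2 - a^2), for M >= |a|."""
    d = math.sqrt(M * M - a * a)
    return M + d, M - d

def kerr_surface_gravity(M, a):
    """Surface gravity at the event horizon: kappa_+ = (r_+ - r_-)/(2 (r_+^2 + a^2))."""
    rp, rm = kerr_horizons(M, a)
    return (rp - rm) / (2.0 * (rp * rp + a * a))

def horizon_angular_velocity(M, a):
    """Omega_H = a / (r_+^2 + a^2), the angular velocity of the horizon."""
    rp, _ = kerr_horizons(M, a)
    return a / (rp * rp + a * a)

def hawking_temperature(kappa):
    """T = kappa / (2 pi k_B), with hbar = c = k_B = 1."""
    return kappa / (2.0 * math.pi)
```

Again the extreme case a = M gives κ_+ = 0 and hence zero temperature, while a → 0 recovers the Schwarzschild value κ = 1/(4M).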
The spectral density, in analogy with Eq., is given by, where I(k) is the spectral density of the scalar field seen by an inertial detector. The influence of the quantum field on the detector is expressed in terms of the influence kernel, having the form, where (for more details, see Appendix A, where one has to take the surface gravity κ_+ instead of the acceleration a, and the appropriate coordinate system). From Eq., it can be seen that the temperature associated with Hawking radiation for the Kerr black hole is T = κ_+/(2π k_B). From this it follows that for the non-extreme case (M ≠ a) the temperature is not zero, but it vanishes in the extreme case (M = a). As shown above, one can write the influence kernel in terms of the null coordinates (U and V) as, where U = t̃ − r̃_⋆ = −(1/κ_+) e^{−κ_+ u} and V = t̃ + r̃_⋆ = (1/κ_+) e^{κ_+ v} are the Kruskal coordinates, and u = t − r_⋆ and v = t + r_⋆ are the ordinary null coordinates for the Kerr black hole in the near-horizon region. For a detector at fixed r_⋆, the influence kernel can be written exactly as Eq.. One can also consider the Unruh vacuum state for calculating the Hawking radiation of the Kerr black hole. The influence kernel can be written in terms of the null coordinates (U and v) as, For a detector at fixed r_⋆, the influence kernel in the Unruh vacuum state can be written as, where G(k) ∝ sinh(k/κ_+) ∫₀^∞ dk′ I(k′) ⋯ (for more details, see Appendix B, where one has to consider the appropriate coordinate system). From Eq., it can be seen that the temperature associated with Hawking radiation for the Kerr black hole is again T = κ_+/(2π k_B). We get the same temperature using the Unruh vacuum state instead of the Hartle-Hawking vacuum state. VII. HAWKING RADIATION FOR 1+1 DIMENSIONAL SCHWARZSCHILD DE-SITTER SPACETIME We now consider a massless minimally coupled scalar field in the Schwarzschild-de Sitter spacetime. The line element for this spacetime in 1+1 d is, Here M is the mass of the black hole, and Λ is the positive cosmological constant.
For 3M√Λ < 1, this spacetime has three Killing horizons: r_H, the black hole event horizon; r_c > r_H, the cosmological horizon; and r_u, which is negative and corresponds to the unphysical horizon. Now, in terms of the tortoise coordinate, the above line element can be written as, The exact form of the tortoise coordinate r_⋆ is given below, where κ_i is the surface gravity of the corresponding horizon r_i, which is given by, Here κ_H is the surface gravity at the black hole event horizon, and κ_c is the surface gravity at the cosmological horizon. We now define Kruskal coordinates for the black hole event horizon as, We also define Kruskal coordinates for the cosmological horizon as, In both cases we have exponential transformation relations. Thus, the ordinary coordinates (t, r_⋆) of the Schwarzschild-de Sitter spacetime cover only a wedge-shaped patch of the Kruskal coordinates. It follows that for a detector at constant r_⋆, the situation is similar to that of the accelerating observer. The spectral density for the black hole event horizon, in analogy with the previous cases, is, where I(k) is the spectral density of the scalar field seen by an inertial detector. The influence kernel characterizing the influence of the quantum field on the detector has the form, where (for more details, see Appendix A; the calculation is similar, but one has to take the surface gravity κ_H instead of the acceleration a, and the appropriate coordinate system). From Eq., the temperature associated with Hawking radiation at the black hole horizon is seen to be T_H = κ_H/(2π k_B). Similarly, the spectral density for the cosmological horizon is given by, where I(k) is the spectral density of the scalar field. The influence kernel has the form, where G(k) ∝ sinh(k/κ_c) ∫₀^∞ dk′ I(k′) ⋯ (for more details, see Appendix A, with the surface gravity κ_c instead of the acceleration a, and the appropriate coordinate system). From Eq., the temperature associated with Hawking radiation at the cosmological horizon comes out to be T_c = κ_c/(2π k_B).
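The two-horizon structure described above can be reproduced numerically (our own sketch, not from the paper; it assumes 3M√Λ < 1 and uses only stdlib bisection, with f attaining its maximum at r_max = (3M/Λ)^{1/3} between the two horizons):

```python
import math

def f_sds(r, M, Lam):
    """Schwarzschild-de Sitter metric function: f(r) = 1 - 2M/r - Lam r^2 / 3."""
    return 1.0 - 2.0 * M / r - Lam * r * r / 3.0

def bisect(g, lo, hi, tol=1e-12):
    """Simple bisection for a root of g with a sign change on [lo, hi]."""
    glo = g(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        gm = g(mid)
        if glo * gm <= 0.0:
            hi = mid
        else:
            lo, glo = mid, gm
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def sds_horizons(M, Lam):
    """Black hole and cosmological horizons, bracketed on either side of r_max."""
    r_max = (3.0 * M / Lam) ** (1.0 / 3.0)
    rH = bisect(lambda r: f_sds(r, M, Lam), 2.0 * M, r_max)
    rc = bisect(lambda r: f_sds(r, M, Lam), r_max, 2.0 * math.sqrt(3.0 / Lam))
    return rH, rc

def surface_gravity(r, M, Lam):
    """kappa = |f'(r)|/2 with f'(r) = 2M/r^2 - 2 Lam r / 3."""
    return abs(2.0 * M / (r * r) - 2.0 * Lam * r / 3.0) / 2.0
```

For, e.g., M = 1 and Λ = 0.01 one finds κ_H > κ_c, so the black hole horizon is hotter than the cosmological one, consistent with T_H = κ_H/(2π k_B) and T_c = κ_c/(2π k_B).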
The influence kernel can be expressed in terms of the null coordinates (U and V) as, where U = t̃ − r̃_⋆ = −(1/κ_H) e^{−κ_H u} and V = t̃ + r̃_⋆ = (1/κ_H) e^{κ_H v} are the Kruskal coordinates for the black hole event horizon, and u = t − r_⋆ and v = t + r_⋆ are the ordinary null coordinates for the Schwarzschild-de Sitter spacetime. For the Unruh vacuum state, the influence kernel, using the null coordinates U and v, is, At fixed r_⋆, Eq. becomes, where (for more details, see Appendix B, where one has to consider the appropriate coordinate system and the surface gravity κ_H instead of κ_+). From Eq., it can be seen that the temperature associated with Hawking radiation at the black hole horizon is T_H = κ_H/(2π k_B), the same as that obtained using the Hartle-Hawking vacuum state. Similarly, one can also use the Unruh vacuum state to calculate the Hawking radiation at the cosmological horizon. In this case, the temperature associated with Hawking radiation at the cosmological horizon is T_c = κ_c/(2π k_B), which is the same as that obtained using the Hartle-Hawking vacuum state. VIII. HAWKING RADIATION FOR 1+1 DIMENSIONAL SCHWARZSCHILD ANTI-DE SITTER SPACETIME Now we consider a massless minimally coupled scalar field in the 1+1 d Schwarzschild anti-de Sitter spacetime. The line element for this spacetime is, where Here M is the mass of the black hole and l² is related to the negative cosmological constant. From the equation f(r) = 0, one gets a single horizon r_+, the event horizon. The surface gravity at the event horizon is defined as κ = |f′(r)|/2 evaluated at r = r_+. We define the Kruskal coordinates as, again an exponential transformation. It follows that the ordinary coordinates (t, r_⋆) of the Schwarzschild anti-de Sitter spacetime cover only a wedge-shaped patch of the Kruskal coordinates. For a detector at constant r_⋆, the situation is similar to that of an accelerating observer. The spectral density is given by, where I(k) is the spectral density of the scalar field seen by an inertial detector.
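The single event horizon and surface gravity of the Schwarzschild anti-de Sitter case can be sketched the same way (our own illustration, not from the paper; f(r) = 1 − 2M/r + r²/l² is monotonically increasing for r > 0, so a plain bisection suffices):

```python
import math

def f_sads(r, M, l):
    """Schwarzschild anti-de Sitter metric function: f(r) = 1 - 2M/r + r^2/l^2."""
    return 1.0 - 2.0 * M / r + (r * r) / (l * l)

def sads_event_horizon(M, l, tol=1e-12):
    """Single positive root of f: f -> -inf as r -> 0+ and f grows without bound."""
    lo, hi = 1e-9, 2.0 * M + l  # f(hi) > 0 since the r^2/l^2 term dominates there
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_sads(mid, M, l) < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def sads_surface_gravity(M, l):
    """kappa = f'(r_+)/2 = M/r_+^2 + r_+/l^2, matching T = (1/(2 pi k_B)) (M/r_+^2 + r_+/l^2)."""
    rp = sads_event_horizon(M, l)
    return M / (rp * rp) + rp / (l * l)
```

For M = l = 1 the horizon equation reduces to r³ + r − 2 = 0, so r_+ = 1 and κ = M/r_+² + r_+/l² = 2.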
The influence of the quantum field on the detector is expressed in terms of an influence kernel, where (for more details, see Appendix A, with the surface gravity κ instead of the acceleration a, and the appropriate coordinate system). From Eq., it is seen that the temperature associated with Hawking radiation for the 1+1 d Schwarzschild anti-de Sitter spacetime is T = κ/(2π k_B) = (1/(2π k_B)) (M/r_+² + r_+/l²). The influence kernel in terms of the null coordinates (U and V) is, where U = t̃ − r̃_⋆ = −(1/κ) e^{−κu} and V = t̃ + r̃_⋆ = (1/κ) e^{κv} are the Kruskal coordinates, and u = t − r_⋆ and v = t + r_⋆ are the ordinary null coordinates for the Schwarzschild anti-de Sitter spacetime. For a detector at fixed r_⋆, the influence kernel can be written exactly as Eq.. One can also consider the Unruh vacuum state for calculating the Hawking radiation of the 1+1 d Schwarzschild anti-de Sitter spacetime. The influence kernel in terms of the null coordinates (U and v) is, For a detector at fixed r_⋆, the influence kernel in the Unruh vacuum state can be written as, where G(k) ∝ sinh(k/κ) ∫₀^∞ dk′ I(k′) ⋯ (for more details, see Appendix B, with the appropriate coordinate system and the surface gravity κ instead of κ_+). From Eq., it can be seen that the temperature associated with Hawking radiation for the 1+1 d Schwarzschild anti-de Sitter spacetime is T = κ/(2π k_B) = (1/(2π k_B)) (M/r_+² + r_+/l²), as obtained using the Hartle-Hawking vacuum state. IX. HAWKING RADIATION FOR 1+1 DIMENSIONAL REISSNER-NORDSTROM DE-SITTER SPACETIME We now consider a massless minimally coupled scalar field in the Reissner-Nordstrom de-Sitter spacetime. The line element for this spacetime in 1+1 d is, where Here M is the mass of the black hole, Q is the charge of the black hole, and Λ is the positive cosmological constant. From the equation f(r) = 0, one gets three horizons for this spacetime. Here we take r_± as the event and the Cauchy horizons, respectively, and r_c as the cosmological horizon. The surface gravities at the horizons are defined as κ_± = |f′(r)|/2 evaluated at r = r_± and κ_c = |f′(r)|/2 evaluated at r = r_c.
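The three-horizon structure of the Reissner-Nordstrom de-Sitter case can be located numerically by scanning f for sign changes and bisecting each bracket (our own sketch, not from the paper; the parameter values and function names are illustrative only):

```python
import math

def f_rnds(r, M, Q, Lam):
    """Reissner-Nordstrom de-Sitter metric function:
    f(r) = 1 - 2M/r + Q^2/r^2 - Lam r^2 / 3."""
    return 1.0 - 2.0 * M / r + Q * Q / (r * r) - Lam * r * r / 3.0

def rnds_horizons(M, Q, Lam, n=20000):
    """Positive roots of f, found by scanning a grid for sign changes and
    bisecting each one. For suitable parameters these are r_- < r_+ < r_c."""
    r_top = 2.0 * math.sqrt(3.0 / Lam)  # safely beyond the cosmological horizon
    rs = [r_top * (i + 1) / n for i in range(n)]
    roots = []
    for a, b in zip(rs, rs[1:]):
        if f_rnds(a, M, Q, Lam) * f_rnds(b, M, Q, Lam) < 0.0:
            lo, hi = a, b
            for _ in range(100):
                mid = 0.5 * (lo + hi)
                if f_rnds(lo, M, Q, Lam) * f_rnds(mid, M, Q, Lam) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

def rnds_surface_gravity(r, M, Q, Lam):
    """kappa = |f'(r)|/2 with f'(r) = 2M/r^2 - 2Q^2/r^3 - 2 Lam r / 3."""
    return abs(2.0 * M / (r * r) - 2.0 * Q * Q / r**3 - 2.0 * Lam * r / 3.0) / 2.0
```

Each horizon then carries its own temperature T_i = κ_i/(2π k_B), with the event horizon hotter than the cosmological one for small Λ.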
Here κ_+ is the surface gravity at the event horizon, κ_− is the surface gravity at the Cauchy horizon, and κ_c is the surface gravity at the cosmological horizon. The exact form of the tortoise coordinate r_⋆ in terms of the surface gravities and the horizons is, where r_u is negative and corresponds to the unphysical horizon. We now define Kruskal coordinates for the event horizon as, We also define Kruskal coordinates for the cosmological horizon as, In both cases we have exponential transformation relations. Thus, the ordinary coordinates (t, r_⋆) of the Reissner-Nordstrom de-Sitter spacetime cover only a wedge-shaped patch of the Kruskal coordinates. It follows that for a detector at constant r_⋆, the situation is similar to that of the accelerating observer. The spectral density for the event horizon, in analogy with the previous cases, is, where I(k) is the spectral density of the scalar field seen by an inertial detector. The influence kernel characterizing the influence of the quantum field on the detector has the form, where G(k) ∝ sinh(k/κ_+) ∫₀^∞ dk′ I(k′) ⋯ (for more details, see Appendix A, with the surface gravity κ_+ replacing the acceleration a, and an appropriate coordinate system). From Eq., the temperature associated with Hawking radiation at the black hole horizon is seen to be T_+ = κ_+/(2π k_B). Similarly, the spectral density for the cosmological horizon is given by, where I(k) is the spectral density of the scalar field. The influence kernel has the form, where G(k) ∝ sinh(k/κ_c) ∫₀^∞ dk′ I(k′) ⋯ (for more details, see Appendix A, with the surface gravity κ_c instead of the acceleration a, and the appropriate coordinate system). From Eq., the temperature associated with Hawking radiation at the cosmological horizon comes out to be T_c = κ_c/(2π k_B). Further, one can use the Unruh vacuum state, as discussed in the previous sections. The same temperatures associated with the black hole and cosmological horizons are obtained as with the Hartle-Hawking vacuum state. X.
CONCLUSIONS Generalizing to relativistic exponential scaling and using the theory of noise from quantum fluctuations, it was shown that one vacuum (Rindler, Hartle-Hawking, or Gibbons-Hawking, for the cases of the uniformly accelerated detector, the black hole, and the de Sitter universe, respectively) can be understood as resulting from the scaling of quantum noise in another vacuum. Here this idea was explored more generally to establish a flat and curved spacetime analogy, which provides a common perspective on the Unruh and Hawking effects. For this purpose, we examined noise kernels for free fields in some well-known curved spacetimes, e.g., the charged black hole, the Kerr black hole, and the Schwarzschild-de Sitter, Schwarzschild anti-de Sitter, and Reissner-Nordstrom de-Sitter spacetimes. We have shown that the exponential scaling transformation is responsible for the thermal nature of the radiation. Here we considered the maximal analytic extension of these spacetimes and different vacuum states. The temperatures obtained exactly match those found in the literature in different contexts using, for example, Bogoliubov transformations. This formalism can also be applied to other curved spacetimes, such as the Reissner-Nordstrom anti-de Sitter, Kerr-de Sitter, Kerr anti-de Sitter, Kerr-Newman-de Sitter, and Kerr-Newman anti-de Sitter spacetimes. Recently, it has been shown that the quasinormal modes of AdS black holes are related to the black hole temperature. It would be interesting to examine the relation between the quantum fluctuations and the quasinormal modes of a black hole spacetime.
|
Pharmacy & Pharmacology International Journal Treatment failures and the first signs of resistance to artemisinin became evident in China as early as 1973, with arteannuin B and artemisinin monotherapy. This was due to a novel effect: a small fraction of the parasites, as a result of chemotherapeutic pressure, become dormant. At the ring stage, the parasite cycle is halted, making the parasites unsusceptible to further dosing until wakening.8 The parasite encapsulates itself against the aggressive peroxide artesunate and reawakens at the end of the treatment. The same effect is called quiescence by a French research team.9 The dormancy effect is also evident for artemisone, a new artemisinin derivative.10 Resistance to ACTs leads to the survival of the fittest parasite populations, which in turn leads to more virulent infections.11 Severe recrudescence has also been noticed with the use of dihydroartemisinin-piperaquine, along with a large increase in the prevalence of parasitemia. Not only did the load of asexual parasites increase after 4 months, but also the load of gametocytes.12
|
The present invention generally relates to methods and apparatus for determining the fluid level of a container, such as an accumulator, and more specifically, to methods and apparatus for determining the fluid level of a container without the need for either electrical power or a direct line of sight to the container.
Accumulators are frequently used on military and commercial aircraft to accommodate the thermal expansion of coolant and hydraulic fluids. During aircraft servicing, maintenance personnel need to ascertain if the respective liquid loop contains the appropriate fluid level. Often, the accumulator is located in an inconvenient space for visual inspection and there is no aircraft electrical power available to operate a level sensor.
U.S. Pat. No. 943,596, issued to Hans, discloses a method of indicating the height of liquid in a tank. A float is attached to a cable and as the float rises and falls, the cable moves in a linear direction. The opposite end of the cable is wrapped around a drum, which turns linear cable motion into rotational motion of the drum. Additional gearing eventually translates the rotational motion of the drum into rotational motion of a dial indicator, allowing visual indication of the liquid level. The indicator housing is attached to the float housing, requiring a visual line of sight to make a liquid level reading. This requirement of a visual line of sight, as used by Hans, may not be useful in applications such as an aircraft, where components are closely assembled, possibly blocking many sight lines to liquid containers.
U.S. Pat. No. 2,074,959, issued to Guest, discloses a fuel tank gauge that may be remotely located. In the Guest patent, the liquid movement is transferred to linear motion of either a cable or a chain. The cable of Guest is located in a conduit that extends to the gauge. The cable moves linearly in the conduit, wherein the linear motion of the cable is translated to rotary motion of the gauge. This linear cable movement requires a relatively linear path from the tank to the gauge in order to avoid the linearly moving cable from getting stuck in the conduit due to kinking or the like.
U.S. Pat. No. 2,522,988, issued to Caddell, discloses a method of indicating oil level using a float that drives a chain in a linear motion. The chain is looped over a sprocket, causing the sprocket and attached shaft to move in a rotary motion. One end of the shaft may be connected to an indicator assembly. The shaft of the Caddell patent appears to be a rigid shaft, thereby requiring the indicator to be located linearly away from the oil tank. In situations where there is no line of sight to the tank, there would usually also be no direct linear path for a shaft to run to drive a remote indicator, as would be required in the Caddell gauge.
As can be seen, there is a need for improved methods and apparatus for determining the fill level of a container, such as an accumulator, at a location more convenient for maintenance and without the need for electrical power or a direct line of sight to the accumulator.
|
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericData.EnumSymbol;
import org.apache.avro.generic.GenericFixed;
import org.apache.avro.generic.GenericRecord;
public class CreateDatumUtils {
public static <T> ArrayList<T> createArrayDatum(T arg1, T arg2, T arg3) {
ArrayList<T> list = new ArrayList<T>();
list.add(arg1);
list.add(arg2);
list.add(arg3);
return list;
}
public static <T> ArrayList<T> createArrayDatum(T arg1, T arg2) {
ArrayList<T> list = new ArrayList<T>();
list.add(arg1);
list.add(arg2);
return list;
}
public static <T> Map<T, T> createMapDatum(T key1, T key2, T value1, T value2, T value3) {
Map<T, T> map = new HashMap<T, T>();
map.put(key1, value1);
map.put(key1, value2); // note: key1 is reused, so value1 is overwritten by value2
map.put(key2, value3);
return map;
}
public static <T> GenericRecord createRecordDatum(T field1, T field2) {
GenericRecord genRec = new GenericData.Record(SchemaUtils.generateRecordDatumSchema());
// Insert the data according to the schema
genRec.put("first", field1);
genRec.put("last", field2);
return genRec;
}
public static <T> GenericRecord createRecordDatumMutation(T field1) {
GenericRecord genRec = new GenericData.Record(SchemaUtils.generateRecordDatumSchemaForMutation());
// Insert the data according to the schema
GenericFixed genericFixed = new GenericData.Fixed(SchemaUtils.generateFixedSchema(), new byte[0]);
genRec.put("bytes", genericFixed);
genRec.put("first", field1);
return genRec;
}
public static <T> GenericRecord createUnionDatum(T field1, T field2) {
GenericRecord genRec = new GenericData.Record(SchemaUtils.generateUnionSchema());
// Insert the data according to the schema
genRec.put("experience", field1);
genRec.put("age", field2);
return genRec;
}
public static EnumSymbol createEnumSymbolDatum(String enumSymbol) {
return new GenericData.EnumSymbol(SchemaUtils.generateEnumSchema(), enumSymbol);
}
public static GenericFixed createFixedDatum(int size) {
return new GenericData.Fixed(SchemaUtils.generateFixedSchema(), new byte[size]);
}
}
|
Photonics-assisted compressive sampling systems In this paper, a systematic review is made of our research related to photonics-assisted compressive sampling (CS) systems, including principle, structure and applications. We demonstrate their utility in wideband spectrum sensing and high-throughput flow cytometry. Photonics-assisted CS systems not only can significantly reduce the data acquisition rate but also can achieve a large operational bandwidth (several GHz or even a few tens of GHz), which is one to two orders of magnitude larger than that of traditional electric CS systems. Single-channel and multi-channel photonics-assisted CS systems are presented in this paper and demonstrated to enable accurate reconstruction of frequency-sparse signals from only a few percent of the measurements required for Nyquist sampling. On the other hand, we also implement time-stretch-based single-pixel imaging systems with high frame rates, three orders of magnitude faster than conventional single-pixel cameras. To show their utility in biomedical applications, a real-time high-throughput imaging flow cytometer is demonstrated. In general, photonics-assisted CS systems show great potential in both wideband spectrum sensing and biomedical imaging applications.
|
Antimicrobial Activity of Satureja Hortensis L. Essential Oil Against Pathogenic Microbial Strains ABSTRACT A hydrodistilled oil of Satureja hortensis L. was investigated for its antimicrobial activity against a panel of 11 bacterial and three fungal strains. The antimicrobial activity was determined using the disk-diffusion method and the broth microdilution method. The essential oil of S. hortensis L. showed significant activity against a wide spectrum of Gram (-) bacteria (MIC/MBC = 0.025-0.78/0.05-0.78 µl/ml) and Gram (+) bacteria (MIC/MBC = 0.05-0.39/0.05-0.78 µl/ml), as well as against fungal strains (MIC/MBC = 0.20/0.78 µl/ml). Therefore, the present results indicate that this oil can be used in food conservation, in the treatment of various human diseases, and also for the treatment of plants infected by phytopathogens.
|
Association between Emotional Intelligence and Objective Structured Examinations: A Study on Psychiatric Residents Introduction: Emotional Intelligence (EI) is the capacity to handle one's own and others' feelings and reactions, and is important for achieving pleasant social interaction and success in life. The purpose of the present assessment was to explore the connection between the EI of psychiatric residents and their outcomes in objective scholastic evaluations. Methods: A cross-sectional design was used in the present assessment. 31 psychiatric residents of the University of Social Welfare and Rehabilitation Sciences (grades 1 to 4) were requested to answer the Schutte Self Report Emotional Intelligence Test (SSEIT) for probing the relationship between EI and objective structured examinations, like the Mini-Clinical Evaluation Exercise (Mini-CEX), Objective Structured Clinical Examination (OSCE), and Chart-Stimulated Recall (CSR) scores, which had been taken in the previous six months. An SSEIT score of 90 was taken as the demarcating point for dividing the sample population into two target groups: the 1st group with SSEIT scores lower than 90, and the 2nd group with SSEIT scores equal to or greater than 90. Demographic characteristics were analyzed by comparison of proportions regarding gender and year of study, and by comparison of means (t-test) regarding age, scholastic evaluation scores and EI. Data analysis was conducted using MedCalc Statistical Software version 15.2. Statistical significance was set at P≤0.05. Results: 29 participants (93.54%) responded to the assessment and answered the SSEIT. According to the findings, there was no significant difference between the aforesaid groups regarding Mini-CEX, OSCE and CSR (P=0.101, P=0.091 and P=0.156, respectively). Post-hoc power analysis showed an intermediate power of 0.36 for this trial.
Conclusion: According to the findings, while a significant difference, with respect to SSEIT score, was evident between two groups of psychiatric residents with higher and lower SSEIT scores, no significant difference was evident between them regarding the objective structured examinations. Keywords: Emotional intelligence, Objectives, Structural, Examinations, Psychiatric, Residency
|
A robust group synchronization approach to network congestion based on control events for media streaming We developed a robust approach that can synchronize a media streaming group accurately in a network congestion situation. This approach consists of a server-side and a client-side algorithm. The server-side algorithm is used to transmit the playback information to clients using a synchronization control packet. The synchronization control packet is transmitted in an event-driven manner. The client-side algorithm synchronizes the playback position using the information received from the server. The detailed algorithm is used to accurately synchronize the playback position on the client. Finally, we measured the synchronization accuracy under network congestion. The experimental results show that this approach can improve the completion time and the synchronization accuracy under network congestion.
|
A survey of literature data on male breast cancer is reported; on this basis, the authors report their experience of five cases observed between 1980 and 1989 at Ospedale "S. Corona", Pietra Ligure, out of 500 female breast cancers treated in the same period. Epidemiological data and follow-up records are analysed, reporting a survival rate similar to that of larger series.
|
Effects of a health careers program and family support for a health career on eighth graders' career interest. The separate and combined effects of participation in a health careers program and of parental support for a health career on young people's interest in a health career were examined. Twenty-seven eighth graders participating in a health careers orientation program were matched by sex, race, and parental education with 27 eighth grade nonparticipants, and personal interviews were then conducted with students in both groups. Both program participation and parental support were found to be significantly related to two measures of the students' interest in a health career. One measure was of the students' interest in general health-related careers. When program participation and parental support were each studied with the other factor controlled, it was found that parental support had a greater effect when program participation was absent. An analysis of various participation-support combinations revealed that when neither participation nor parental support was present, the students' interest in a health career was considerably less than if one or both were present.
|
High-throughput Identification of FLT3 Wild-type and Mutant Kinase Substrate Preferences and Application to Design of Sensitive In Vitro Kinase Assay Substrates. Acute myeloid leukemia (AML) is an aggressive disease that is characterized by abnormal increase of immature myeloblasts in blood and bone marrow. The FLT3 receptor tyrosine kinase plays an integral role in hematopoiesis, and one third of AML diagnoses exhibit gain-of-function mutations in FLT3, with the juxtamembrane domain internal tandem duplication (ITD) and the kinase domain D835Y variants observed most frequently. Few FLT3 substrates or phosphorylation sites are known, which limits insight into FLT3's substrate preferences and makes assay design particularly challenging. We applied in vitro phosphorylation of a cell lysate digest (adaptation of the Kinase Assay Linked with Phosphoproteomics (KALIP) technique and similar methods) for high-throughput identification of substrates for three FLT3 variants (wild-type, ITD mutant, and D835Y mutant). Incorporation of identified substrate sequences as input into the KINATEST-ID substrate preference analysis and assay development pipeline facilitated the design of several peptide substrates that are phosphorylated efficiently by all three FLT3 kinase variants. These substrates could be used in assays to identify new FLT3 inhibitors that overcome resistant mutations to improve FLT3-positive AML treatment.
|
Thermal Acid Hydrolysis Pretreatment, Enzymatic Saccharification and Ethanol Fermentation from Red Seaweed, Gracilaria verrucosa In this study, the red seaweed Gracilaria verrucosa was used as a bioethanol-producing biomass. G. verrucosa has a high content of easily degradable carbohydrates, making it a potential substrate for the production of liquid fuels. The carbohydrates in G. verrucosa can be categorized according to their chemical structures: alginate, carrageenan, and agar. Carrageenan and agar, which are plentiful in the seaweed, can be used as a source of galactose and glucose. Various pretreatment techniques have been introduced to enhance the overall hydrolysis yield, and can be categorized into physical, chemical, biological, enzymatic, or a combination of these. Dilute acid hydrolysis is commonly used to prepare seaweed hydrolysates for enzymatic saccharification and fermentation for economic reasons. However, thermal acid hydrolysis pretreatment of the 3,6-anhydrogalactose from G. verrucosa produces 5-hydroxymethylfurfural (HMF), an inhibitory compound for ethanol production. One of the problems encountered in G. verrucosa fermentation has been the high concentration of NaCl due to its origin in sea water. High-salt stress is a significant impediment to the production of ethanol from seaweed hydrolysates by yeasts. Salt stress in yeasts results in two phenomena: ion toxicity and osmotic stress. Defense responses to salt stress are based on osmotic adjustments by osmolyte synthesis and cation transport systems for sodium exclusion. The preferential utilization of glucose over non-glucose sugars by yeast often results in low overall ethanol production and yield: when yeast grows on a mixture of glucose and galactose, the glucose is metabolized first. The seaweed, Gracilaria verrucosa, was fermented to produce bioethanol. Optimal pretreatment conditions were determined to be 12% (w/v) seaweed slurry and 270 mM sulfuric acid at 121°C for 60 min.
After thermal acid hydrolysis, enzymatic saccharification was carried out on the G. verrucosa hydrolysates with 16 U/ml of mixed enzymes, using Viscozyme L and Celluclast 1.5 L. A total monosaccharide concentration of 50.4 g/l, representing 84.2% conversion of the 60 g/l total carbohydrate in the 120 g dw/l G. verrucosa slurry, was obtained by thermal acid hydrolysis and enzymatic saccharification. G. verrucosa hydrolysate was used as the substrate for ethanol production by separate hydrolysis and fermentation (SHF). Ethanol production by Candida lusitaniae ATCC 42720 acclimated to high galactose concentrations was 22.0 g/l, with an ethanol yield (YEtOH) of 0.43. Yeast acclimated to high concentrations of a specific sugar could utilize mixed sugars, resulting in higher ethanol yields in the seaweed hydrolysate medium.
|
import math

TEAM = int(input())  # team id - 0 scores on the right side, 1 scores on the left side

# pick the goal location according to the team id
if TEAM == 1:
    GOL = [0, 3750]
else:
    GOL = [16000, 3750]


# class that stores entities and general game data
class Entidades:
    def __init__(self, data):
        self.entitiesId = data[0]
        self.entityType = data[1]  # WIZARD, OPPONENT_WIZARD, SNAFFLE or BLUDGER
        self.x = data[2]
        self.y = data[3]
        self.vx = data[4]
        self.vy = data[5]
        self.state = data[6]  # 1 if the wizard is holding a snaffle, 0 otherwise


class Jogador:
    def __init__(self, data):
        self.entitiesId = data.entitiesId
        self.entityType = data.entityType
        self.x = data.x
        self.y = data.y
        self.vx = data.vx
        self.vy = data.vy
        self.state = data.state

    # distance between two entities, using math.hypot
    def getDist(self, target):
        return math.hypot(self.x - target.x, self.y - target.y)

    # pick the snaffle closest to this wizard
    def getSnaf(self, all_entities):
        snafMin = float('inf')
        snaffle = None
        for target in all_entities:
            if target.entityType != 'SNAFFLE':
                continue
            dist = self.getDist(target)
            if dist <= snafMin:
                snafMin = dist
                snaffle = target
        return snaffle

    # static helpers to throw the snaffle and move the wizard, respectively
    @staticmethod
    def throw(x, y, power):
        print('THROW %s %s %s' % (x, y, power))

    @staticmethod
    def move(x, y, thruster):
        print('MOVE %s %s %s' % (x, y, thruster))


while True:
    gameEntities = []
    # split() is used below to break the input strings into the values to be stored
    myScore, myMagic = input().split()
    opponentScore, opponentMagic = input().split()
    entities = int(input())  # entities still present in the game
    # store the received data in local variables
    for _ in range(entities):
        entityId, entityType, x, y, vx, vy, state = input().split()
        entitie = Entidades([int(entityId), entityType, int(x), int(y), int(vx), int(vy), int(state)])
        gameEntities.append(entitie)
    wiz1, wiz2 = None, None  # the two wizards we control
    # find which of the collected entities in gameEntities are our wizards
    for entitie in gameEntities:
        if entitie.entityType == 'WIZARD':
            if wiz1 is None:
                wiz1 = Jogador(entitie)
            else:
                wiz2 = Jogador(entitie)
    # the nextsnaf variables hold the next snaffle each wizard should chase
    nextsnaf1 = wiz1.getSnaf(all_entities=gameEntities)
    nextsnaf2 = wiz2.getSnaf(all_entities=gameEntities)
    # check whether wizard 1 is within range of the snaffle (the grab range is 400,
    # but 200 is used here as a margin of error for the movement); if not, move
    # toward it, otherwise throw the held snaffle at the goal
    if wiz1.getDist(nextsnaf1) > 200:
        wiz1.move(nextsnaf1.x, nextsnaf1.y, thruster=100)  # not within range yet
    else:
        wiz1.throw(GOL[0], GOL[1], power=500)  # within range: throw toward the goal
    # same procedure for wizard 2
    if wiz2.getDist(nextsnaf2) > 200:
        wiz2.move(nextsnaf2.x, nextsnaf2.y, thruster=100)
    else:
        wiz2.throw(GOL[0], GOL[1], power=500)
|
With the complexity surrounding modern construction projects, dissemination of project information can be a formidable task. For example, a roadway construction project can impact millions of people every day as they travel the affected roads. Further, it is common for large projects to involve many contributing organizations, as well as numerous stakeholders. As these projects may require years to complete, efficiently managing and distributing project information is a critically important task.
When handling project information, one of the many challenges is maintaining current information while avoiding outdated or stale content. As known to those in the art, project schedules change frequently and for a variety of reasons. Further, information may originate from numerous different sources. For example, consider a road construction project. Traffic data representing real-time conditions may be received from a traffic operations center, while construction phasing, closure schedules and alerts may be received from other entities and organizations. Of course, traffic data may be of interest regardless of whether a construction project is active on a given roadway.
Content management platforms for managing and distributing project information currently exist, but these platforms are neither capable of combining current project data from a variety of sources into a single interface nor capable of providing adequate messaging utilities for notification services. While web pages today may provide real-time traffic information, these web pages do not incorporate project information such as upcoming construction or event schedules. Further, these traffic interfaces do not permit user selection of routes for notifications or alerts. Also, currently available systems are not capable of efficiently managing feedback from stakeholders with tools, for example, to handle electronic correspondence. Accordingly, there is a need, among other things, for improved systems and methods for managing and communicating information such as project information and traffic information.
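As a hypothetical sketch of the route-subscription idea described above, matching incoming alerts to user-selected routes might look like the following. All names and structures here are illustrative, not taken from any actual system.

```python
# Toy model of route-based alert subscriptions: users pick routes of interest,
# and each incoming alert is delivered only to subscribers of the affected
# route. Names are illustrative.
from collections import defaultdict

subscriptions = defaultdict(set)  # route name -> set of subscriber ids

def subscribe(user, route):
    subscriptions[route].add(user)

def notify(route, message):
    """Return the (user, message) pairs that would be sent for this alert."""
    return [(user, message) for user in sorted(subscriptions[route])]

subscribe("alice", "I-35 North")
subscribe("bob", "I-35 North")
subscribe("bob", "US-183")

deliveries = notify("I-35 North", "Lane closure 9pm-5am starting Monday")
print(deliveries)
# Both I-35 North subscribers receive the closure alert; US-183 is unaffected.
```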
|
Q:
Switch with 2 red, 2 black and 1 ground wire
I have a switch indoors which controls two outdoor daylights. I opened the wall plate and found that it has two red wires and two black wires and one ground (so 5 terminals in all). I stuck a multimeter on them and only one of the black ones has voltage, the rest are all 0.
Now, I want to replace this whole thing with a timer switch, which has one line, one load and one neutral. I've taken care of the neutral (together with all the other neutrals in the box), but I am a bit confused about what to put in the line and load.
How can I make my timer switch control both the daylight bulbs? Also, what kind of a switch is that, which has five terminals?
Thanks!
A:
Connect the line to the one hot wire. The black wire in its cable gets capped off. Connect the load to the black or red wire of the other cable; you'll know you have the right one because the light will work. Cap off the other one.
This combination will work until you find one of the other 3-way or 4-way switches in the system. Then, it will fail until you throw that switch back. Your layout is
---- 3way ==== 4way ==== 3way ----
or
---- 3way ==== 4way ==== 4way ==== 4way ==== 3way -----
with any number of 4ways in the middle.
I have a personal rule that "the last guy" probably had a pretty good reason for doing it that way. So before you take a wrecking ball to his work, look for those reasons. Otherwise later, you'll be going "Gosh, it'd be a lot more convenient if it worked like --- oh right, it already did, and I destroyed it because I didn't understand it". Those 4-way switches are about $10 each; people don't use them without a pretty good reason.
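The multiway-switching behavior described above (any switch in the chain toggles the light) can be sanity-checked with a toy model. This is just an illustration of the circuit logic, not wiring advice: the light is on for exactly one parity of the switch positions, so flipping any one switch always toggles it.

```python
# Toy model of a 3way ==== 4way ==== 3way circuit. Each 3-way selects one of
# two travelers (0/1); each 4-way is straight (0) or crossed (1). The circuit
# is closed for exactly one parity of the positions, so flipping ANY switch
# toggles the light.
def light_is_on(positions):
    """positions: list of 0/1 states for [3way, 4way, ..., 4way, 3way]."""
    parity = 0
    for p in positions:
        parity ^= p
    return parity == 0  # convention: even parity = closed circuit

positions = [0, 0, 0]  # 3way ==== 4way ==== 3way
assert light_is_on(positions)

# Flipping any single switch (including the middle 4-way) toggles the light:
for i in range(len(positions)):
    flipped = positions.copy()
    flipped[i] ^= 1
    assert not light_is_on(flipped)
```

This is also why wiring the timer as described breaks when someone flips one of the remaining multiway switches: the timer only sees one fixed path through the travelers.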
|
#ifndef _SWAY_CLIENT_PANGO_H
#define _SWAY_CLIENT_PANGO_H
#include <cairo/cairo.h>
#include <pango/pangocairo.h>
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
PangoLayout *get_pango_layout(cairo_t *cairo, const char *font, const char *text,
int32_t scale, bool markup);
void get_text_size(cairo_t *cairo, const char *font, int *width, int *height,
int32_t scale, bool markup, const char *fmt, ...);
void pango_printf(cairo_t *cairo, const char *font, int32_t scale, bool markup, const char *fmt, ...);
#endif
|
Do Institutional Investors Demand Public Disclosure? We examine the effect of institutional ownership on corporate disclosure policy using a regression discontinuity design. Using novel data that encompasses every 8-K filing between 1996 and 2006, we find that positive shocks to institutional ownership around Russell index reconstitutions increase the quantity, form, and quality of disclosure. Compared to those at the bottom of the Russell 1000 index, firms at the top of the Russell 2000 index increase institutional ownership by 9.8%, and disclose 4.7% longer 8-K filings with 21.3% more embedded graphics. This incremental disclosure significantly increases the information content of 8-K filings for the market and for analysts.
|
Postprandial effects of breakfast glycaemic index on cognitive performance among young, healthy adults: A crossover clinical trial Objective: To evaluate the postprandial effects of high and low glycaemic index (GI) breakfasts on cognitive performance in young, healthy adults. Methods: A crossover clinical trial including 40 young, healthy adults (aged 20-40 years, 50% females) recruited from primary healthcare centres in Salamanca, Spain. Verbal memory, phonological fluency, attention, and executive functions were examined 0, 60, and 120 minutes after consuming a low GI (LGI), high GI (HGI), or water breakfast. Every subject tried each breakfast variant, in a randomized order, separated by a washout period of 7 days, for a total of 3 weeks. Results: A significant interaction between the type of breakfast consumed and immediate verbal memory was identified (P<.05). We observed a trend towards better performance in verbal memory (delayed and immediate), attention, and phonological fluency following an LGI breakfast. Discussion: Cognitive performance during the postprandial phase in young, healthy adults was minimally affected by the GI of breakfast. The potential for modulating breakfast GI to improve short- and long-term cognitive functioning requires further research.
|
# File containing all Pyautomagic constant values


# Constant Global Values
class ConstantGlobalValues:
    VERSION = "1.0"
    DEFAULT_KEYWORD = "Default"
    NONE_KEYWORD = "None"
    NEW_PROJECT = {
        "LIST_NAME": "Create New Project",
        "NAME": "Type the name of your new project",
        "DATA_FOLDER": "Choose where your raw data is",
        "FOLDER": "Choose where you want the results to be saved",
    }
    LOAD_PROJECT = {"List_Name": "Load an existing project"}
    RATINGS = {
        "Good": "Good",
        "Bad": "Bad",
        "OK": "OK",
        "Interpolate": "interpolate",
        "NotRated": "Not Rated",
    }
    EXTENSIONS = {
        "mat": ".mat",
        "text": ".txt .asc .csv",
        "fif": ".fif",
        "set": ".set",
        "edf": ".edf",
    }
    PYAUTOMAGIC = "pyautomagic"
    PREPROCESSING_FOLDER = "preprocessing"


# Default Parameters
class DefaultParameters:
    FILTER_PARAMS = {"notch": {}, "high": {}, "low": {}}
    CRD_PARAMS = {}
    PREP_PARAMS = {}
    HIGH_VAR_PARAMS = {}
    INTERPOLATION_PARAMS = {"method": "spherical"}
    RPCA_PARAMS = {}
    MARA_PARAMS = {"largeMap": 0, "high": {"freq": 1.0, "order": []}}
    ICLABEL_PARAMS = {}
    EOG_REGRESSION_PARAMS = {}
    CHANNEL_REDUCTION_PARAMS = {}
    DETRENDING_PARAMS = {}
    EEG_SYSTEM = {
        "name": "Others",
        "sys10_20": 0,
        "locFile": "",
        "refChan": {"idx": {}},
        "fileLocType": "",
        "eogChans": {},
        "powerLineFreq": {},
    }
    SETTINGS = {"trackAllSteps": 0}


# Preprocessing Constants
class PreprocessingConstants:
    FILTER_CSTS = {
        "NOTCH_EU": 50,
        "NOTCH_US": 60,
        "NOTCH_OTHER": {},
        "RUN_MESSAGE": "Perform Filtering...",
    }
    EOG_REGRESSION_CSTS = {"RUN_MESSAGE": "Perform EOG Regression..."}
    GENERAL_CSTS = {"ORIGINAL_FILE": " ", "REDUCED_NAME": "reduced"}
    EEG_SYSTEM_CSTS = {
        "sys10_20_file": "standard-10-5-cap385.elp",
        "EGI_NAME": "EGI",
        "OTHERS_NAME": "Others",
    }
    SETTINGS_PREP = {"pathToSteps": "/allSteps"}


# Recommended Parameters
class RecommendedParameters:
    FILTER_PARAMS_REC = {
        "notch": {"freq": 50},
        "high": {"freq": 0.5, "order": {}},
        "low": {"freq": 30, "order": {}},
    }
    CRD_PARAMS_REC = {
        "ChannelCriterion": 0.85,
        "LineNoiseCriterion": 4,
        "BurstCriterion": 5,
        "WindowCriterion": 0.25,
        "Highpass": [0.25, 0.75],
    }
    PREP_PARAMS_REC = {}
    HIGH_VAR_PARAMS_REC = {"sd": 25}
    INTERPOLATION_PARAMS_REC = {"method": "spherical"}
    RPCA_PARAMS_REC = {"lambda": [], "tol": 1e-7, "maxIter": 1000}
    EOG_REGRESSION_PARAMS_REC = {}
    DETRENDING_PARAMS_REC = {}
    CHANNEL_REDUCTION_PARAMS_REC = {"tobeExcludedChans": []}
    EEG_SYSTEM_REC = {
        "name": "Others",
        "standard_1020": 0,
        "locFile": "",
        "refChan": {"idx": []},
        "fileLocType": "",
        "eogChans": [],
        "powerLineFreq": [],
    }
    SETTINGS_REC = {"trackAllSteps": 0, "pathToSteps": "/allSteps.mat"}


# Default Visualization Parameters
class DefaultVisualizationParameters:
    COLOR_SCALE = 100
    DS_RATE = 2
    CALC_QUALITY_PARAMS = {
        "overall_thresh": [20, 25, 30, 35],
        "time_thresh": [5, 10, 15, 20],
        "chan_thresh": [5, 10, 15, 20],
        "apply_common_avg": 1,
    }
    RATE_QUALITY_PARAMS = {
        "overall_Good_Cutoff": 0.1,
        "overall_Bad_Cutoff": 0.2,
        "time_Good_Cutoff": 0.1,
        "time_Bad_Cutoff": 0.2,
        "channel_Good_Cutoff": 0.15,
        "channel_Bad_Cutoff": 0.3,
        "bad_Channel_Good_Cutoff": 0.15,
        "bad_Channel_Bad_Cutoff": 0.3,
        "Q_measure": {"THV", "OHA", "CHV", "RBC"},
    }
|
/* ************************************************************************ **
** This file is part of rsp2, and is licensed under EITHER the MIT license **
** or the Apache 2.0 license, at your option. **
** **
** http://www.apache.org/licenses/LICENSE-2.0 **
** http://opensource.org/licenses/MIT **
** **
** Be aware that not all of rsp2 is provided under this permissive license, **
** and that the project as a whole is licensed under the GPL 3.0. **
** ************************************************************************ */
//! Resurrected from the grave, this extremely subdued form of rsp2's phonopy code now
//! only exists to help compare outputs.
use crate::FailResult;
use crate::traits::{AsPath, Save, Load};
use crate::meta::{self, prelude::*};
use crate::hlist_aliases::*;
use crate::errors::DisplayPathNice;
use crate::cmd::SupercellSpecExt;
use rsp2_tasks_config as cfg;
use std::io::prelude::*;
use std::rc::Rc;
use std::process::Command;
use std::path::{Path};
use rsp2_fs_util::{TempDir};
use rsp2_fs_util as fsx;
use rsp2_structure::{Coords};
use rsp2_structure::supercell::{SupercellToken};
use rsp2_soa_ops::{Permute, Perm};
use rsp2_structure_io::Poscar;
use rsp2_array_types::{V3, Unvee};
use failure::Backtrace;
#[allow(unused)] // rustc bug
use itertools::Itertools;
//--------------------------------------------------------
// filenames invented by rsp2
//
// (we don't bother with constants for fixed filenames used by phonopy, like "POSCAR")
const FNAME_SETTINGS_ARGS: &'static str = "disp.args";
const FNAME_CONF_DISPS: &'static str = "disp.conf";
const FNAME_OUT_SYMMETRY: &'static str = "symmetry.yaml";
//--------------------------------------------------------
// Directory types in this module follow a pattern of having the datatype constructed
// after all files have been made; this is thrown when that is not upheld.
#[derive(Debug, Fail)]
#[fail(display = "Directory '{}' is missing required file '{}' for '{}'", dir, filename, ty)]
pub(crate) struct MissingFileError {
backtrace: Backtrace,
ty: &'static str,
dir: DisplayPathNice,
filename: String,
}
#[derive(Debug, Fail)]
#[fail(display = "phonopy failed with status {}", status)]
pub(crate) struct PhonopyFailed {
backtrace: Backtrace,
pub status: std::process::ExitStatus,
}
impl MissingFileError {
fn new(ty: &'static str, dir: &dyn AsPath, filename: String) -> Self {
let backtrace = Backtrace::new();
let dir = DisplayPathNice(dir.as_path().to_owned().into());
MissingFileError { backtrace, ty, dir, filename }
}
}
//--------------------------------------------------------
type SymmetryYaml = rsp2_phonopy_io::SymmetryYaml;
impl Load for SymmetryYaml {
fn load(path: impl AsPath) -> FailResult<Self>
{ Ok(rsp2_phonopy_io::symmetry_yaml::read(fsx::open(path.as_path())?)?) }
}
//--------------------------------------------------------
type DispYaml = rsp2_phonopy_io::DispYaml;
impl Load for DispYaml {
fn load(path: impl AsPath) -> FailResult<Self>
{ Ok(rsp2_phonopy_io::disp_yaml::read(fsx::open(path.as_path())?)?) }
}
//--------------------------------------------------------
// this is a type alias so we wrap it
#[derive(Debug, Clone, Default)]
pub struct Conf(pub rsp2_phonopy_io::Conf);
impl Load for Conf {
fn load(path: impl AsPath) -> FailResult<Self>
{ Ok(rsp2_phonopy_io::conf::read(fsx::open_text(path.as_path())?).map(Conf)?) }
}
impl Save for Conf {
fn save(&self, path: impl AsPath) -> FailResult<()>
{ Ok(rsp2_phonopy_io::conf::write(fsx::create(path.as_path())?, &self.0)?) }
}
//--------------------------------------------------------
/// Type representing extra CLI arguments.
///
/// Used internally to store things that must be preserved between
/// runs but cannot be set in conf files, like e.g. `--tolerance`
#[derive(Serialize, Deserialize)]
#[derive(Debug, Clone, Default)]
pub(crate) struct Args(Vec<String>);
impl<S, Ss> From<Ss> for Args
where
S: AsRef<str>,
Ss: IntoIterator<Item=S>,
{
fn from(args: Ss) -> Self
{ Args(args.into_iter().map(|s| s.as_ref().to_owned()).collect()) }
}
impl Load for Args {
fn load(path: impl AsPath) -> FailResult<Self>
{
use path_abs::FileRead;
use crate::util::ext_traits::PathNiceExt;
let path = path.as_path();
let text = FileRead::open(path)?.read_string()?;
if let Some(args) = shlex::split(&text) {
Ok(Args(args))
} else {
bail!("Bad args at {}", path.nice())
}
}
}
impl Save for Args {
fn save(&self, path: impl AsPath) -> FailResult<()>
{
use path_abs::FileWrite;
let mut file = FileWrite::create(path.as_path())?;
for arg in &self.0 {
writeln!(file, "{}", shlex::quote(arg))?;
}
Ok(())
}
}
//--------------------------------------------------------
mod builder {
use super::*;
/// FIXME: This no longer needs to exist but it's just easier to keep around.
///
/// It used to be passed around through a decent amount of the high-level code in rsp2_tasks::cmd,
/// so that different parts of the configuration could be set at times that were convenient.
#[derive(Debug, Clone)]
pub struct Builder {
symprec: Option<f64>,
conf: Conf,
}
impl Default for Builder {
fn default() -> Self {
Builder {
symprec: None,
conf: Default::default(),
}
}
}
impl Builder {
pub fn new() -> Self
{ Default::default() }
pub fn symmetry_tolerance(mut self, x: f64) -> Self
{ self.symprec = Some(x); self }
pub fn conf(mut self, key: impl AsRef<str>, value: impl AsRef<str>) -> Self
{ self.conf.0.insert(key.as_ref().to_owned(), value.as_ref().to_owned()); self }
/// Extend with configuration lines from a phonopy .conf file.
/// If the file defines a value that was already set, the new
/// value from the file will take precedence.
// FIXME: FailResult<Self>... oh, THAT's why the general recommendation is for &mut Self
#[allow(unused)]
pub fn conf_from_file(self, file: impl BufRead) -> FailResult<Self>
{Ok({
let mut me = self;
for (key, value) in rsp2_phonopy_io::conf::read(file)? {
me = me.conf(key, value);
}
me
})}
pub fn supercell_dim(self, dim: [u32; 3]) -> Self
{ self.conf("DIM", dim.iter().join(" ")) }
pub fn diagonal_disps(self, value: bool) -> Self
{ self.conf("DIAG", fortran_bool(value)) }
fn args_from_settings(&self) -> Args
{
let mut out = vec![];
if let Some(tol) = self.symprec {
out.push(format!("--tolerance"));
out.push(format!("{:e}", tol));
}
out.into()
}
}
impl Builder {
/// Make last-second adjustments to the config that are only possible once
/// the structure and metadata are known.
fn finalize_config(&self, meta: HList1<meta::SiteMasses>) -> Self
{
let masses: meta::SiteMasses = meta.pick();
self.clone().conf("MASS", masses.iter().join(" "))
}
pub(super) fn displacements(
&self,
coords: &Coords,
meta: HList2<
meta::SiteMasses,
meta::SiteElements,
>,
) -> FailResult<DirWithDisps>
{
self.finalize_config(meta.sift())
._displacements(coords, meta.sift())
}
// this pattern of having a second impl method is to simulate rebinding
// the output of `finalize_config` to `self`. (otherwise, we'd be forced
// to have a `self: &Self` in scope with the incorrect config, which would
// be a massive footgun)
fn _displacements(
&self,
coords: &Coords,
meta: HList2<
meta::SiteElements,
meta::SiteMasses,
>,
) -> FailResult<DirWithDisps>
{Ok({
let elements: meta::SiteElements = meta.pick();
let dir = TempDir::new_labeled("rsp2", "phonopy")?;
{
let dir = dir.path();
trace!("Displacement dir: '{}'...", dir.display());
let extra_args = self.args_from_settings();
self.conf.save(dir.join(FNAME_CONF_DISPS))?;
Poscar {
comment: "blah", coords, elements,
}.save(dir.join("POSCAR"))?;
extra_args.save(dir.join(FNAME_SETTINGS_ARGS))?;
{
trace!("Calling phonopy for displacements...");
let mut command = Command::new("phonopy");
command
.args(&extra_args.0)
.arg(FNAME_CONF_DISPS)
.arg("--displacement")
.current_dir(&dir);
log_stdio_and_wait(command, None)?;
}
{
trace!("Producing {}...", FNAME_OUT_SYMMETRY);
let mut command = Command::new("phonopy");
command
.args(&extra_args.0)
.arg(FNAME_CONF_DISPS)
.arg("--sym")
.current_dir(&dir)
.stdout(fsx::create(dir.join(FNAME_OUT_SYMMETRY))?);
check_status(command.status()?)?;
//---------------------------
// NOTE: Even though integer-based FracTrans is gone, this limitation is
// still necessary because we're only capable of hashing the rotations
// when using GroupTree. (meaning they must be unique, meaning there
// must be no pure translations)
// (though maybe it would be better to check for pure translations *there*,
// rather than checking PPOSCAR *here*)
//---------------------------
//
// check if input structure was primitive
let Poscar { coords: prim, .. } = Poscar::load(dir.join("PPOSCAR"))?;
let ratio = coords.lattice().volume() / prim.lattice().volume();
let ratio = round_checked(ratio, 1e-4)?;
ensure!(ratio == 1, "attempted to compute symmetry of a supercell");
}
}
DirWithDisps::from_existing(dir)?
})}
}
}
//--------------------------------------------------------
// NOTE: The original motivation for this type's existence was to be generic over
// a path type P, which could be a TempDir, or a borrowed path (in the case
// of opening a previously-existing directory), or etc.
//
// Now that it isn't used outside of this module anymore, it could probably
// be trashed... but it just seems easier to leave it alone for now.
/// Represents a directory with the following data:
/// - `POSCAR`: The input structure
/// - `disp.yaml`: Phonopy file with displacements
/// - configuration settings which impact the selection of displacements
/// - `--tol`, `--dim`
/// - `symmetry.yaml`: Output of `phonopy --sym`, currently present
/// only for validation purposes. (we use spglib)
///
/// Generally, the next step is to supply the force sets, turning this
/// into a DirWithForces.
///
/// # Note
///
/// Currently, the implementation is rather optimistic that files in
/// the directory have not been tampered with since its creation.
/// As a result, some circumstances which probably should return `Error`
/// may instead cause a panic, or may not be detected as early as possible.
#[derive(Debug)]
struct DirWithDisps {
dir: TempDir,
displacements: Vec<(usize, V3)>,
// These are cached in memory from `disp.yaml` due to the likelihood
// that code using `DirWithDisps` will need them.
super_coords: Coords,
super_meta: HList2<meta::SiteElements, meta::SiteMasses>,
}
impl AsPath for DirWithDisps {
fn as_path(&self) -> &Path {
self.dir.as_path()
}
}
impl DirWithDisps {
fn from_existing(dir: TempDir) -> FailResult<Self>
{Ok({
for name in &[
"POSCAR",
"disp.yaml",
FNAME_CONF_DISPS,
FNAME_SETTINGS_ARGS,
FNAME_OUT_SYMMETRY,
] {
let path = dir.as_path().join(name);
if !path.exists() {
throw!(MissingFileError::new("DirWithDisps", &dir, name.to_string()));
}
}
trace!("Parsing disp.yaml...");
let DispYaml {
displacements, coords, elements, masses,
} = Load::load(dir.as_path().join("disp.yaml"))?;
let elements: Rc<[_]> = elements.into();
let masses: Rc<[_]> = masses.into_iter().map(meta::Mass).collect::<Vec<_>>().into();
let meta = hlist![elements, masses];
DirWithDisps {
dir,
displacements,
super_coords: coords,
super_meta: meta,
}
})}
/// Get the structure from `disp.yaml`.
///
/// # Note
///
/// This superstructure was generated by phonopy, and the atoms may be in
/// a different order than most supercells in rsp2 (those produced with SupercellToken).
#[allow(unused)]
fn super_coords(&self) -> &Coords
{ &self.super_coords }
/// Get displacements. *The atom indices are for phonopy's supercell!*
fn displacements(&self) -> &[(usize, V3)]
{ &self.displacements }
// Although we ultimately use `spglib` (since it gives fuller precision for
// the translations), the intent is still to get the spacegroup used *by phonopy*
// (as otherwise we might end up with e.g. underdetermined force constants)
//
// So we call `phonopy --sym` for the sole purpose of validating that the spacegroup
// returned is the same (or a subgroup). This could fail if our method of assigning
// integer atom types differed from phonopy (e.g. are masses checked?).
fn phonopy_sg_op_count(&self) -> FailResult<usize>
{ Ok(SymmetryYaml::load(self.dir.join(FNAME_OUT_SYMMETRY))?.space_group_operations.len()) }
}
/// A smattering of information about the displacements chosen by phonopy, and how they
/// relate to rsp2's conventions.
pub struct PhonopyDisplacements {
/// The original displacements exactly as they were chosen by phonopy.
pub phonopy_super_displacements: Vec<(usize, V3)>,
/// Permutation that rearranges phonopy's superstructure to match `superstructure`.
///
/// I.e. `phonopy_superstructure.permuted_by(&perm_from_phonopy) ≈ superstructure`,
/// modulo lattice point translations.
pub coperm_from_phonopy: Perm,
/// Displacements that use indices into the primitive structure.
///
/// You are free to just use this field and ignore the rest (which merely come
/// for "free" with it). This field should be compatible with superstructures
/// of any size, and obviously does not depend on the convention for ordering
/// sites in a supercell.
pub prim_displacements: Vec<(usize, V3)>,
/// Number of spacegroup operations detected by phonopy when it generated its displacements.
///
/// If this is larger than the amount found by rsp2, the force constants will end up
/// missing some terms.
pub spacegroup_op_count: usize,
}
/// Produce a variety of data describing the displacements in terms of rsp2's conventions
/// (whereas most other methods on `DirWithDisps` use phonopy's conventions).
pub fn phonopy_displacements(
settings: &cfg::Phonons,
prim_coords: &Coords,
prim_meta: HList2<
meta::SiteElements,
meta::SiteMasses,
>,
sc: &SupercellToken,
// supercell coordinates in rsp2's ordering convention, as created by `sc`
our_super_coords: &Coords,
) -> FailResult<PhonopyDisplacements> {
let displacement_distance = settings.displacement_distance.expect("(bug) missing displacement-distance should have been caught earlier");
let symmetry_tolerance = settings.symmetry_tolerance.expect("(bug) missing symmetry-tolerance should have been caught earlier");
let dir = {
let mut builder = {
builder::Builder::new()
// HACK: Give phonopy a slightly smaller symprec to ensure that, in case
// rsp2 and phonopy find different spacegroups, phonopy should find the
// smaller one.
.symmetry_tolerance(symmetry_tolerance * 0.99)
.conf("DISPLACEMENT_DISTANCE", format!("{:e}", displacement_distance))
.supercell_dim(settings.supercell.dim_for_unitcell(prim_coords.lattice()))
};
if let cfg::PhononDispFinder::Phonopy { diag } = settings.disp_finder {
builder = builder.diagonal_disps(diag);
}
builder.displacements(prim_coords, prim_meta.sift())?
};
let sc_dims = sc.periods();
assert_eq!(settings.supercell.dim_for_unitcell(prim_coords.lattice()), sc_dims);
// cmon, big money, big money....
// if these assertions always succeed, it will save us a
// good deal of implementation work.
let perm_from_phonopy;
{
let phonopy_super_coords = Poscar::load(dir.join("SPOSCAR"))?.coords;
perm_from_phonopy = phonopy_super_coords.perm_to_match(&our_super_coords, 1e-10)?;
// make phonopy match us
let phonopy_super_coords = phonopy_super_coords.clone().permuted_by(&perm_from_phonopy);
let err_msg = "\
phonopy's superstructure does not match rsp2's conventions! \
Unfortunately, support for this scenario is not yet implemented.\
";
assert_close!(
abs=1e-10,
our_super_coords.lattice(), phonopy_super_coords.lattice(),
"{}", err_msg,
);
let lattice = our_super_coords.lattice();
let diffs = {
zip_eq!(our_super_coords.to_carts(), phonopy_super_coords.to_carts())
.map(|(a, b)| (a - b) / lattice)
.map(|v| v.map(|x| x - x.round()))
.map(|v| v * lattice)
.collect::<Vec<_>>()
};
assert_close!(
abs=1e-10,
vec![[0.0; 3]; diffs.len()],
diffs.unvee(),
"{}", err_msg,
);
}
let prim_displacements = {
let primitive_atoms = sc.atom_primitive_atoms();
dir.displacements().iter()
.map(|&(phonopy_idx, disp)| {
let our_super_idx = perm_from_phonopy.permute_index(phonopy_idx);
let our_prim_idx = primitive_atoms[our_super_idx];
(our_prim_idx, disp)
})
.collect::<Vec<_>>()
};
Ok(PhonopyDisplacements {
prim_displacements,
coperm_from_phonopy: perm_from_phonopy,
phonopy_super_displacements: dir.displacements().to_vec(),
spacegroup_op_count: dir.phonopy_sg_op_count()?,
})
}
/// Struct to simulate named arguments
pub struct PhonopyForceSets<'a> {
/// The original displacements exactly as they were chosen by phonopy.
pub phonopy_super_displacements: &'a [(usize, V3)],
/// Which cell (as defined in the docs for [`SupercellToken`]) did rsp2 choose for each displacement?
pub rsp2_displaced_site_cells: &'a [[u32; 3]],
/// Perm that rearranges phonopy's superstructure to match rsp2's superstructure.
pub coperm_from_phonopy: &'a Perm,
pub sc: &'a SupercellToken,
}
impl PhonopyForceSets<'_> {
pub fn write(
self,
w: impl Write,
force_sets: &Vec<std::collections::BTreeMap<usize, V3>>,
) -> FailResult<()> {
let PhonopyForceSets {
coperm_from_phonopy,
phonopy_super_displacements,
rsp2_displaced_site_cells,
sc,
} = self;
let num_atoms = coperm_from_phonopy.len();
// rsp2 and phonopy agree about the primitive sites that were displaced,
// but the supercell data may differ in two ways:
//
// * Different images may have been chosen to be displaced.
// * The supercell atoms may be in a different order.
// permutation that turns our metadata into phonopy's metadata
let ref deperm_from_phonopy = coperm_from_phonopy.inverted();
let deperm_to_phonopy = coperm_from_phonopy; // inverse of inverse
let site_cells = sc.atom_cells();
let ref phonopy_displaced_site_cells: Vec<[u32; 3]> = {
phonopy_super_displacements.iter().map(|&(phonopy_site, _)| {
// FIXME I can't quite figure out whether this should use the
// coperm or the deperm, but the deperm seems to make the most sense.
//
// Currently it doesn't seem to matter which one we use, because the permutation
// between rsp2 and phonopy always seems to be involutory; that is, it is equal to
// its own inverse.
if coperm_from_phonopy != deperm_from_phonopy {
warn_once!("Untested code path: 94111e7c-3afe-4838-948a-108580e8d252");
}
let rsp2_site = deperm_from_phonopy.permute_index(phonopy_site);
site_cells[rsp2_site]
}).collect()
};
let phonopy_force_sets: Vec<Vec<V3>> = {
zip_eq!(force_sets, phonopy_displaced_site_cells, rsp2_displaced_site_cells)
.map(|(rsp2_row, &phonopy_cell, &rsp2_cell)| {
// First, perform a translation that translates rsp2_cell to phonopy_cell,
// so that the correct site is displaced.
let rsp2_latt = sc.lattice_point_from_cell(rsp2_cell);
let phonopy_latt = sc.lattice_point_from_cell(phonopy_cell);
let translation_latt = phonopy_latt - rsp2_latt;
let deperm = sc.lattice_point_translation_deperm(translation_latt);
// Then, permute into phonopy's supercell convention
let deperm = deperm.then(deperm_to_phonopy);
// Apply this permutation to the columns while densifying
let mut phonopy_row = vec![V3::zero(); num_atoms];
for (&our_index, &vector) in rsp2_row {
let phonopy_index = deperm.permute_index(our_index);
phonopy_row[phonopy_index] = vector;
}
phonopy_row
}).collect()
};
rsp2_phonopy_io::force_sets::write(w, phonopy_super_displacements, phonopy_force_sets)
}
}
//-----------------------------
// helpers
fn round_checked(x: f64, tol: f64) -> FailResult<i32>
{Ok({
let r = x.round();
ensure!((r - x).abs() < tol, "not nearly integral: {}", x);
r as i32
})}
fn fortran_bool(b: bool) -> &'static str {
match b {
true => ".TRUE.",
false => ".FALSE.",
}
}
pub(crate) fn log_stdio_and_wait(
mut cmd: std::process::Command,
stdin: Option<String>,
) -> FailResult<()>
{Ok({
use std::process::Stdio;
if stdin.is_some() {
cmd.stdin(Stdio::piped());
}
debug!("$ {:?}", cmd);
let mut child = cmd
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()?;
if let Some(text) = stdin {
child.stdin.take().unwrap().write_all(text.as_bytes())?;
}
let stdout_worker = crate::stdout::spawn_log_worker(child.stdout.take().unwrap());
let stderr_worker = crate::stderr::spawn_log_worker(child.stderr.take().unwrap());
check_status(child.wait()?)?;
let _ = stdout_worker.join();
let _ = stderr_worker.join();
})}
fn check_status(status: std::process::ExitStatus) -> Result<(), PhonopyFailed>
{
if status.success() { Ok(()) }
else {
let backtrace = failure::Backtrace::new();
Err(PhonopyFailed { backtrace, status })
}
}
//-----------------------------
|
import { Logger } from '@nestjs/common';
import * as chalk from 'chalk';
import * as fs from 'fs';
export class NginxConfigFile {
private readonly lines: string[] = [];
constructor(private readonly configFilePath: string) {}
addLines(...lines: string[]) {
this.lines.push(...lines);
}
public async write() {
Logger.debug(`Writing ${chalk.bold(this.configFilePath)}`);
await fs.promises.writeFile(this.configFilePath, this.getData());
}
public async delete() {
if (
await fs.promises
.access(this.configFilePath, fs.constants.F_OK)
.then(() => true)
.catch(() => false)
) {
Logger.debug(`Deleting ${chalk.bold(this.configFilePath)}`);
await fs.promises.unlink(this.configFilePath);
} else {
Logger.debug(`File ${chalk.bold(this.configFilePath)} does not exist`);
}
}
private getData(): string {
return this.lines.join('\n');
}
}
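A minimal, self-contained sketch of the line-buffer pattern used by the class above (the real class also logs and writes to disk via `fs.promises`; the `ConfigBuffer` name and the nginx snippet below are illustrative stand-ins, not part of the original module):

```typescript
// Illustrative stand-in for NginxConfigFile's in-memory behavior:
// lines are accumulated with addLines() and serialized with a '\n' join.
class ConfigBuffer {
  private readonly lines: string[] = [];

  addLines(...lines: string[]) {
    this.lines.push(...lines);
  }

  getData(): string {
    return this.lines.join('\n');
  }
}

const cfg = new ConfigBuffer();
cfg.addLines('server {', '  listen 80;');
cfg.addLines('}');
console.log(cfg.getData()); // three lines joined by '\n'
```

Note that `getData()` adds no trailing newline, matching the `join('\n')` in the class above.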
|
import assert from "assert";
import Component from "../component/Component";
import { Port } from "./Port";
import { Wire } from "./Wire";
/**
* The network maintains connections to directly-connected components
*/
export class Network
{
/**
* Maintain a set of the ports in the network
*/
protected ports: Set<Port> = new Set();
/**
* The list of wires associated with this network
*/
private __wires: Wire[] = [];
/**
* Connect a port to the network
*/
public connect(port: Port) {
assert([0, port.bitWidth].includes(this.bitWidth), "Network contains mismatched widths");
		if (this.bitWidth === 0) {
for (let i = 0; i < port.bitWidth; i++) {
this.__wires.push(new Wire());
}
}
this.ports.add(port);
port.connect(this);
}
/**
* Get the bit-width of the network
*/
public get bitWidth() {
return this.__wires.length;
}
/**
* Get the list of components connected to the network
*/
public get components() {
let result = new Set<Component>();
for (let port of this.ports) {
result.add(port.component);
}
return result;
}
/**
* Get the list of wires within this network
*/
public get wires() {
return this.__wires;
}
}
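The width-allocation rule in `connect` can be exercised with a self-contained sketch. The `StubWire`/`StubPort`/`StubNetwork` classes below are illustrative stand-ins for the real `Wire`, `Port`, and `Network` modules, kept only as detailed as needed to show the sizing behavior:

```typescript
// Stub stand-ins for Wire and Port (the real ones live in ./Wire and ./Port).
class StubWire {}

class StubPort {
  constructor(public readonly bitWidth: number) {}
  connect(_network: StubNetwork) { /* the real Port records the network here */ }
}

// Same sizing rule as Network.connect: the first connected port fixes the
// network's width by allocating one wire per bit; later ports must match.
class StubNetwork {
  private wires: StubWire[] = [];
  get bitWidth(): number { return this.wires.length; }

  connect(port: StubPort) {
    if (![0, port.bitWidth].includes(this.bitWidth)) {
      throw new Error('Network contains mismatched widths');
    }
    if (this.bitWidth === 0) {
      for (let i = 0; i < port.bitWidth; i++) this.wires.push(new StubWire());
    }
    port.connect(this);
  }
}

const net = new StubNetwork();
net.connect(new StubPort(8)); // first port: allocates 8 wires
net.connect(new StubPort(8)); // same width: accepted
console.log(net.bitWidth);    // 8
```

Connecting a port of a different width (say 4 bits) to `net` now throws, mirroring the `assert` in `Network.connect`.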
|
NOVO7
History
The first tablets in the Novo 7 line were the Novo 7 Basic and Novo 7 Advanced, released in early 2011. Unlike most competing tablets, the Novo 7 Basic used a MIPS based CPU (1 GHz Ingenic JZ4770 XBurst), while the Advanced had a more common ARM CPU. Both had 800x480 pixel touch screens and were launched with Android 2.2, although the company later released updates to Android 3 and Android 4.
The company released the Novo 7 Paladin in late 2011, which became the world's first Android 4.0 (Ice Cream Sandwich) tablet. Its specifications were similar to the Novo 7 Basic, having a 7-inch touch screen, 1 GHz MIPS processor, 512 MB RAM, 8 to 16 GB of internal flash storage, external microSDHC slot, miniUSB, and Wi-Fi (802.11 b/g/n). Two more tablets, the Aurora and the Elf, were added to the range soon afterwards; both of these tablets used ARM CPUs.
Ainol's latest Novo 7 tablets are the Tornado, Mars, Elf II and Aurora II. These all have ARM processors, 8 GB of internal storage (the Elf II is also available with 16 GB, and all Aurora II models have 16 GB) and 7-inch screens. The main differences between the models are CPU speed and screen resolution: the Tornado has a 1 GHz single-core processor and an 800x480 screen, the Mars has the same processor but a 1024x600 pixel screen, and the Elf II/Aurora II have a dual-core 1.5 GHz processor (currently limited to 1.32 GHz) as well as a 1024x600 screen.
All Novo 7 models have been budget based tablets, having lower specifications than other tablets released at the same time but also a cheaper price. They have all supported Wi-Fi, but none of them have had built-in 3G support.
In August, 2012, Ainol released a new flagship tablet, the Novo 7 Fire / Flame, with features that compete with and even exceed many features of other recently released and popular consumer tablets, including the Google Nexus 7 tablet and Kindle Fire. Features of the Novo 7 Fire / Flame include a high-resolution 1280×800 IPS display with a 5-point capacitive multi-touch screen, Full HD 1080p support, 16 GB memory / 1 GB RAM, Android 4.0.4 Ice Cream Sandwich operating system, official access to Google Play (Google Market), a 1.5 GHz AMLogic ARM 2nd generation Cortex-A9 based dual-core CPU, a micro SD storage card slot, an HDMI port, a micro-USB port, Wi-Fi, Bluetooth wireless connection, 3G access through an external dongle USB device, a 5 MP rear-facing camera with AF and auto flash, and 2 MP front-facing webcam. It also provides for hardware accelerated Flash (access to Flash-based video), features a 3-axis gravity sensor, and contains a 5000 mAh battery.
In September 2012, Ainol announced the release of the Novo 7 Crystal with the latest Android 4.1.1 Jelly Bean. The tablet is set to be made available for shipment on 28 September. The Novo7 Crystal matches and surpasses the specifications of the Novo 7 Elf II and looks set to replace its predecessor. It has a special 7-inch MVA screen with 1024×600 pixel resolution and a wide 170° viewing angle.
Android operating system
On March 28, 2012, Ainol released updates to Android 4.0.3 v1.0 (Ice Cream Sandwich) for the NOVO7 series. It fixes most of the bugs in the [v4.0.3 v0.9] firmware that was released on March 7, 2012.
Community support
Some users have been creating hacks and scripts to enhance the compatibility of the NOVO7 tablets.
At least two ports of CyanogenMod 9 (Android 4.0.4) have been made, one by Feiyu and one by Quarx2k, and they are said to make the tablets much smoother and faster than the initial ROM.
Specifications
(Note: no GPS)
|
1. Field of the Invention
The present invention relates to an ink jet recording material capable of recording ink images having a high color density, a high clarity, a high water-resistance, a high resistance to blotting under high humidity and, optionally, a high resistance to fading, and having a high surface smoothness and a high gloss. The ink jet recording material of the present invention enables sharp ink images, comparable to those of silver-salt photographic images, to be recorded thereon.
2. Description of the Related Art
The ink jet recording system is a system for recording ink images by jetting ink droplets, corresponding to images to be recorded, toward a recording medium to cause the jetted ink droplets to be directly absorbed, imagewise, into the recording medium.
An ink jet printer can easily effect multi-color recording on the recording medium and thus is now rapidly becoming popular, for home use and for office use, as a text- or picture-outputting machine for computers.
The multi-color recording system using the ink jet recording system can rapidly and accurately form complicated images and the quality (color density and clarity) of the recorded colored images is comparable to the quality of color images formed by a conventional printing system using a printing plate or by conventional color photography. In the case where the ink jet recording system is utilized for a small number of prints, the ink jet recording system is advantageous in that the cost for recording is lower than the printing cost of a conventional printing system or a conventional photographic printing system. The progress in the accuracy and color quality of the printer and an increase in the printing speed of the printer require the printing media to have an enhanced performance. High gloss is required. Also, since the ink for the ink jet recording system contains a large amount of water or another liquid medium, particularly a liquid medium having a high boiling temperature to prevent blocking of the ink jet nozzle heads, the coloring material, such as a dye, remains together with the liquid medium in the recording layer for a long period after printing. The conventional recording material is therefore disadvantageous in that the ink images blot with the lapse of time and stabilization of the color tone of the printed ink images is difficult.
To enhance the resistance of ink images printed on an image recording stratum to moisture, a plurality of attempts have been made. For example, in one attempt, a uniform aqueous solution or an emulsion latex of a cationic polymer is added to the ink or, in another attempt, fine solid particles having a cationic surface charge (for example, alumina particles or cation-modified silica particles) are added to the ink.
For example, Japanese Unexamined Patent Publication No. 60-46,288 discloses an ink jet recording method using a recording material comprising an ink containing a specific dye and a polyamine, etc. Also, Japanese Unexamined Patent Publication No. 63-162,275 discloses an ink jet recording material comprising a cationic polymer and a cationic surfactant coated on or impregnated in a support. Further, use of fine inorganic cationic particles, for example, alumina or cation-modified silica particles, is known, for example, from Japanese Examined Patent Publication No. 4-19,037 and Japanese Unexamined Patent Publication No. 11-198,520. The attempts mentioned above contributed considerably to enhancing the water resistance of the printed ink images. However, the enhancing effect on the resistance to blotting of the ink images due to moisture is insufficient and, particularly, substantially no effect was found on stabilization of the color tone of the printed ink images within a short time.
To solve the above-mentioned problems, Japanese Unexamined Patent Publication No. 10-157,277 discloses an attempt in which a two-layered image recording stratum is formed on an opaque support, the opaqueness of an under layer is made higher than the opaqueness of the upper layer, and a white-coloring pigment is contained in the under layer. In this attempt, since the upper layer is formed transparent and the under layer is formed opaque, the portion of the ink dye absorbed in the under layer, which may blot there, is hidden from sight in the opaque layer and thus cannot be recognized through the upper layer. In this attempt, a certain degree of effect is recognized, but the problems are not completely solved. Particularly, the dye absorbed and blotted in the under layer further spreads into the upper layer with the lapse of time and, as a final result, an ink image-blotting phenomenon appears. Also, by this attempt alone, it is difficult to stabilize the color tone of the printed ink images within a short time. Particularly, for a specific use in which stabilization of the color tone within a short time is required, for example, checking the color tone of ink images formed by an ink jet recording system for the purpose of proofreading of colored images of prints, the above-mentioned recording stratum is unsatisfactory.
Currently, since digital cameras have become popular and low-priced ink jet printers using a photo-ink and capable of recording images with high accuracy are available, the demand for a recording material capable of recording ink images of a high quality comparable to that of silver-salt photographic images has increased. Since the printers can record full-colored ink images at high speed with a high quality and accuracy, the recording materials for these printers are also required to have further enhanced properties. Particularly, to use the ink jet recording system in place of the silver-salt photographic printing system, the ink jet recording materials are strongly required to have a high ink-absorbing rate, a high ink absorption capacity, a high roundness of dots, a high density of colored images, and a high surface gloss and smoothness comparable to those of silver-salt photographic printing sheets.
To realize a high clarity and color density of the ink images comparable to those of silver-salt photographic images, the inventors of the present invention provided, in Japanese Unexamined Patent Publication No. 9-286,165, an ink jet recording material having at least one ink receiving layer comprising fine silica particles having an average primary particle size of 3 to 40 nm and an average secondary particle size of 10 to 300 nm, and a water-soluble resin. The fine silica particles contribute to enhancing the color-forming property of the ink and the clarity and brightness of the printed images.
Also, the use of the fine silica particles enables the printed images to exhibit a high color density and a high quality (clarity). However, since the silica particles exhibit an anionic property, the resultant images formed from a cationic dye ink exhibit an unsatisfactory water resistance. Also, a cationization treatment of silica particles is difficult. Further, the silica particle-containing recording stratum is disadvantageous in that the resultant smoothness and gloss thereof, without a gloss-providing treatment, are insufficient.
In another invention disclosed in Japanese Unexamined Patent Publication No. 10-193,776, an ink jet recording material having an ink-receiving and recording layer comprising fine silica particles having an average primary particle size of 20 μm or less and a hydrophilic binder, is provided. Particularly, in this recording material, when fumed silica particles are used as the fine silica particles, a high gloss of the recording stratum can be obtained, and the ink exhibits a good color-forming property. However, the resultant gloss of the recording material is lower than that of the silver-salt photographic material. Also, the fumed silica particles are difficult to subject to a cationization process. Further, the fumed silica particles are disadvantageous in that their thixotropic property is too high, and thus the resultant coating liquid containing the fumed silica particles exhibits a poor stability in storage.
Currently, various types of ink jet recording materials containing alumina hydrate particles are provided. For example, Japanese Unexamined Patent Publication No. 8-324,098 discloses a process for producing an ink jet recording material in which a coating liquid containing alumina hydrate particles dispersed by high speed aqueous streams is employed. When the alumina hydrate particles dispersed by the high speed aqueous streams are employed, a recording stratum having a high transparency can be formed, but this recording stratum is disadvantageous in that the dispersion of the alumina hydrate particles causes the ink-absorbing property of the recording stratum to be decreased. Also, the alumina hydrate particle-containing recording stratum is unsatisfactory in the color-forming property of the dye in the ink and thus clear and sharp images cannot be obtained. A plurality of inventions relating to ink jet recording materials containing alumina hydrate particles having a boehmite structure are provided. The alumina hydrate particles having the boehmite structure exhibit a high laminating property and enable a recording stratum having a high gloss and a high smoothness to be obtained. Also, the resultant recording stratum exhibits a high transparency and the images printed on the recording stratum have a high color density. However, this type of recording stratum has a low ink absorption and thus is difficult to use practically. Also, the alumina hydrate particle-containing recording stratum has an insufficient color-forming property for the dye of the ink and thus clear and bright colored images are not obtained on the recording stratum.
Generally speaking, as a method of imparting a high gloss to a recording material, a method of smoothing a surface of a coating layer of the recording material by feeding the recording material to a smoothing apparatus, for example, a calender, and passing the recording material between a pair of pressing and heating rolls under pressure, is known. When only the above-mentioned conventional procedure is applied, the resultant gloss of the recording material is insufficient. Also, since the press-heating procedure causes the ink-absorbing pores formed in the coating layer to be decreased, the smoothed coating layer easily allows the printed ink images to be blotted. Particularly, in the current ink jet printing system, to form ink images having a photographic image-like tone but no roughened surface-like tone, printers having photo-ink-jetting nozzles through which low-concentration ink images are superposed on each other are mainly used. Thus, the recording material is required to have a further enhanced ink absorption.
Various types of methods of forming an ink-receiving layer from an ink-absorbing polymeric material, for example, starch, gelatin, a water-soluble cellulose derivative, polyvinyl alcohol or polyvinyl pyrrolidone, on a plastic film or a resin-coated paper sheet having a high gloss and a high smoothness, are known. The recording materials produced by the above-mentioned methods have a sufficiently high gloss. However, this type of recording material exhibits a low ink absorption and a low ink-drying rate and, thus, the handling property of the recording material is insufficient, the ink is unevenly absorbed in the recording material, and the water-resistance and the resistance to curling of the recording material are insufficient.
As means for solving the above-mentioned problems, Japanese Unexamined Patent Publications No. 2-274,587, No. 8-67,064, No. 8-118,790, No. 9-286,162 and No. 10-217,601 disclose a coating layer containing, as a main component, super fine pigment particles. Among them, coating layers containing colloidal silica particles having a small particle size (disclosed in Japanese Unexamined Patent Publications No. 2-274,857, No. 8-67,064, and No. 8-118,790) have a high gloss and high water resistance. However, since the colloidal silica particles are primary particles independent from each other, fine pores for absorbing the ink cannot be formed between the particles, and the ink-absorbing properties of the coating layers are unsatisfactory for practical use.
Also, Japanese Unexamined Patent Publication No. 2-43,083 discloses, as a recording material having a high resistance to fading of the recorded images, a recording material having a surface layer comprising, as a main component, an aluminum oxide, and an under layer having an ink absorbing property; the resistance to fading arises because the dye for the images is electrically bonded with the aluminum oxide particles and thus exhibits a high resistance to decomposition.
As mentioned above, the ink jet recording system in which an aqueous ink is jetted imagewise in the form of fine droplets through fine nozzles toward a recording material and ink images are formed on the surface of the recording material is advantageous in that the printing noise is low, full colored images can be easily formed, a high speed recording can be effected, and the recording cost is cheaper than that of other conventional recording systems. Thus, the ink jet recording system is widely employed as an output terminal printer, as a printer for facsimile machines and plotters, and as a printing system for notebooks, slips and tickets.
Due to the facts that the use of the printers is rapidly expanding, that the accuracy and minuteness of the printed images have improved, that the printing speed has increased and that digital cameras have been developed, the recording materials are required to have improved properties. Namely, recording materials having a high ink-absorbing property, a high color density of recorded images, a high water resistance, a high light resistance, and a quality (clarity) and durability of the recorded images comparable to those of the silver-salt type photographic sheets, are in strong demand. Further, to obtain a photographic tone image, the recording material surface must have a high gloss.
As a recording sheet having a high surface gloss, a cast-coated paper sheet produced by contacting a wetted coating layer of the recording sheet with a mirror-finished peripheral surface of a heating drum under pressure, and drying the coating layer to transfer the mirror-like surface to the coating layer surface, is known. The cast-coated paper sheet has a higher surface gloss, a superior surface smoothness, and a more excellent printing effect than the conventional super calender-finished coating sheet, and thus is mainly used for high quality prints. However, when the cast-coated paper sheet is used as an ink jet recording material, various problems occur.
Namely, the conventional cast-coated paper sheet generally exhibits a high gloss when the mirror-finished surface of the cast-coater drum is copied by the film-forming material, for example, a binder, contained in the pigment-containing composition from which the coating layer is formed. However, the film-forming material contained in the coating layer causes the porosity of the coating layer to be decreased or lost, and the ink-absorption of the coating layer when an ink jet recording procedure is applied thereto is significantly reduced. To improve the ink-absorption of the coating layer, it is important that a porous structure is formed in the cast-coating layer to cause the resultant coating layer to exhibit an enhanced ink-absorbing property. For this effect, it is necessary to decrease the film-forming property of the recording stratum. However, the decrease in the content of the film-forming material in the recording stratum creates such a problem that the white sheet gloss of the resultant recording stratum decreases. As mentioned above, it was very difficult to simultaneously keep both the surface gloss and the ink jet recording property of the cast-coating layer at satisfactory levels.
As means for solving the above-mentioned problem, Japanese Unexamined Patent Publication No. 7-89,220 discloses that a cast-coated paper sheet having both an excellent gloss and ink-absorbing property, and thus useful for the ink jet recording system, can be produced by the steps of coating a coating liquid comprising, as a principal component, a composition of a copolymer having a glass transition temperature of 40°C or more on a paper sheet having a recording stratum comprising, as principal components, a pigment and a binder, to form a coating layer for casting; and, while the coating layer is kept in a wetted condition, bringing the wetted coating layer into contact with a heated casting surface of a casting drum under pressure, and then drying the coating layer to impart a high smoothness to the casting layer surface. Further, Japanese Unexamined Patent Publications No. 2-274,587 and No. 10-250,218 disclose a cast-coated recording stratum containing super-fine inorganic colloidal particles.
As mentioned above, currently, due to the development of high speed ink jet recording systems, the high accuracy and quality of the ink jet recorded images and full color recording systems, an improvement in clarity, color density and storage durability of the recorded images is required of the ink jet recording material. For example, an ink jet recording material having a high recording quality and storage durability comparable to those of the silver-salt type photographic recording sheet is required. The above-mentioned prior art recording materials are insufficient to satisfy the above-mentioned requirements. Particularly, the conventional ink jet recording sheets having an excellent gloss and a superior ink jet recording aptitude are not always satisfactory in resistance to fading of the printed ink images upon being exposed to sunlight or room light (for example, fluorescent lamp light). This problem has not yet been solved.
Regarding this problem, many attempts have been made to enhance the light resistance of the printed images by applying a light resistance-enhancing material to the ink jet recording sheets. For example, Japanese Unexamined Patent Publication No. 57-87,988 discloses an ink jet recording sheet containing, as at least one component, an ultraviolet ray-absorber. Japanese Unexamined Patent Publication No. 61-146,591 discloses an ink jet recording medium for recording colored images thereon with an aqueous ink containing a water-soluble dye, characterized in that the recording medium contains a hindered amine compound. Japanese Unexamined Patent Publication No. 4-201,594 discloses an ink jet recording material comprising a base material and an ink receiving layer formed on the base material and characterized in that the ink receiving layer contains super fine particulates of a transition metal compound. The recording materials mentioned above exhibit a certain light resistance-enhancing effect. However, they are insufficient in the ink-absorbing property and disadvantageous in that, with respect to the light resistance, the color balance of the faded images is unsatisfactory.
Japanese Unexamined Patent Publication No. 1-241,487 discloses an aqueous ink recording material having a coating formed on a base sheet surface and comprising 100 parts by weight of a resin binder comprising polyvinyl alcohol and a cationic, water-soluble resin and 0.1 to 30 parts by weight of a light-resistance-enhancing agent consisting of a compound having phenolic hydroxyl groups. This recording sheet is, however, unsatisfactory in the light resistance-enhancing effect thereof. Also, Japanese Unexamined Patent Publication No. 8-132,727 discloses an ink receiving layer comprising a metal complex of polyvinyl alcohol with calcium chloride, and Japanese Unexamined Patent Publication No. 9-290,556 discloses an ink jet recording sheet having a support and magnesium sulfate in a dry amount of 0.2 to 2.0 g/m2 attached to the support. The recording sheets mentioned above exhibit a relatively good color balance of faded colored images, but the retention in color density of the images after fading is insufficient, and thus these recording sheets are not usable in practice.
Japanese Unexamined Patent Publication No. 10-193,776 discloses an ink jet recording material characterized by containing at least one member selected from image-stabilizing agents and ultraviolet ray absorbers, as a fade-preventing agent. However, it was found that certain fade-preventing agents degrade the ink-absorbing property of the recording material, and generally, the light resistance of the resultant recording materials is insufficient.
Japanese Unexamined Patent Publications No. 11-20,306 and No. 11-192,777 respectively disclose an ink jet recording sheet having an ink receiving layer containing, as a cross-linking agent, boric acid or borax, for the purpose of enhancing the water resistance of the ink receiving layer. This type of ink receiving layer is not satisfactory in both gloss and light resistance. Japanese Unexamined Patent Publication No. 2000-73,296 discloses a paper sheet having a porous layer containing borax and thus exhibiting a decreased change in form (curling form) due to change in the environmental conditions. However, this type of paper sheet is unsatisfactory in its gloss.
Japanese Unexamined Patent Publication No. 11-263,065 discloses a mat-type ink jet recording sheet provided with an ink receiving layer comprising cyclodextrin, which thus has an excellent reproducibility of dots, resolving power of images, color-reproducibility of images, color-forming property of ink and pigment ink-applicability. Also, Japanese Unexamined Patent Publication No. 11-286,172 discloses a recording sheet provided with an ink receiving layer containing cyclodextrin, which causes the light resistance of the recorded images to be enhanced. However, the recording sheets mentioned above are unsatisfactory in their gloss.
An object of the present invention is to provide an ink jet recording material capable of recording thereon ink images having excellent color density, clarity, water-resistance and resistance to blotting, and a superior sharpness comparable to that of silver-salt photographic images, and having high surface smoothness and gloss.
Another object of the present invention is to provide an ink jet recording material having a high gloss and excellent ink jet recording properties, such as color density and clarity of ink images, and capable of recording ink images having a high light-resistance.
The above-mentioned objects can be attained by the ink jet recording material of the present invention which comprises:
a substrate and an image-recording stratum, located on at least one surface of the substrate, formed from at least one ink receiving layer and comprising a binder and a plurality of pigment particles dispersed in the binder,
at least one ink receiving layer of the image-recording stratum comprising fine particles of at least one pigment selected from the group consisting of silica, aluminosilicate and α-, θ-, δ- and γ-aluminas and having an average particle size of 1 μm or less.
In the ink jet recording material of the present invention, preferably, at least one ink receiving layer of the image-recording stratum comprises fine particles of at least one silica compound selected from the group consisting of silica and aluminosilicate and fine particles of at least one alumina compound selected from the group consisting of α-, θ-, δ- and γ-aluminas, and the fine particles of the silica compound and the fine particles of the alumina compound respectively have an average particle size of 1 μm or less.
In the ink jet recording material of the present invention, the fine particles of the alumina compounds are preferably in the form of secondary particles having an average secondary particle size of 500 nm or less, and consisting of a plurality of primary particles agglomerated with each other.
In the ink jet recording material of the present invention, the fine particles of the alumina compounds preferably have a BET specific area of 180 to 300 m2/g.
In the ink jet recording material of the present invention, the fine particles of the alumina compounds preferably have a BET specific area of 50 to 300 m2/g and a pore volume of 0.2 to 1.0 ml/g.
In the ink jet recording material of the present invention, the fine particles of the alumina compounds are preferably selected from rod-shaped fine particles of δ- and γ-aluminas having an average particle length of 300 nm or less.
In the ink jet recording material of the present invention, the fine particles of the alumina compounds are preferably a product of hydrolysis of an aluminum alkoxide and have an Al2O3 content of 99.99% by weight or more.
In the ink jet recording material of the present invention, the fine particles of the alumina compounds are preferably fine particles of fumed alumina.
In the ink jet recording material of the present invention, the fine particles of at least one silica compound selected from the group consisting of silica and aluminosilicate contained in the ink image-recording layer are preferably formed from an aqueous slurry containing secondary particles having an average secondary particle size of 500 nm or less, each of the secondary particles consisting of a plurality of primary particles, having an average primary particle size of 3 to 40 nm, agglomerated with each other.
In the ink jet recording material of the present invention, the fine particles of silica are preferably fine particles of fumed silica.
In the ink jet recording material of the present invention, preferably, the fine silica compound particles and the fine alumina compound particles are respectively products obtained by subjecting aqueous dispersions containing particles of materials for the silica compounds and the alumina compounds, to pulverization procedures using pulverization and dispersion means under pressure selected from homogenizers under pressure, ultrasonic homogenizers and high speed stream-impacting homogenizers, to such an extent that the pulverization products have an average particle size of 1 μm or less.
In the ink jet recording material of the present invention, the image-recording stratum preferably has at least one ink receiving inside layer formed on the substrate and an ink receiving outermost layer formed on the outer surface of the ink receiving inside layer.
In the ink jet recording material of the present invention, the ink receiving inside layer of the image-recording stratum preferably contains fine particles of gel-method silica, and the ink receiving outermost layer preferably contains fine pigment particles of at least one member selected from the group consisting of the silica compounds and of the alumina compounds.
In the ink jet recording material of the present invention, the fine pigment particles contained in the ink receiving outermost layer are preferably secondary particles having an average secondary particle size of 800 nm or less and each consisting of a plurality of primary particles having an average primary particle size of 3 to 50 nm and agglomerated with each other to form secondary particles.
In the ink jet recording material of the present invention, the fine pigment particles contained in the ink receiving outermost layer are preferably fine fumed silica particles.
In the ink jet recording material of the present invention, the ink receiving outermost layer optionally further contains a cationic compound.
In the ink jet recording material of the present invention, the ink receiving outermost layer is preferably one formed by coating a coating liquid prepared by subjecting a mixture of the fine pigment particles and the cationic compound to a mechanical mix-dispersing procedure, on a substrate surface; and drying the coated coating liquid layer on the substrate surface.
In the ink jet recording material of the present invention, the fine silica particles contained in the ink receiving inside layers are preferably porous particles each having a plurality of fine pores having an average pore size of 20 nm or less.
In the ink jet recording material of the present invention, the substrate preferably exhibits a non-absorbing property for aqueous liquids.
In the ink jet recording material of the present invention, it is preferable that at least one ink receiving inside layer is formed from an aqueous coating liquid containing the fine pigment particles and a binder on the substrate; and the ink receiving outermost layer is formed from an aqueous coating liquid containing the fine pigment particles and binder on an outermost surface of the ink receiving inside layer,
the ink receiving outermost layer being formed in such a manner that the aqueous coating liquid for the ink receiving outermost layer is coated on the aqueous coating liquid layer for the ink receiving inside layer adjacent to the ink receiving outermost layer, before the aqueous coating liquid layer is dried, and both the aqueous coating liquid strata for the ink receiving outermost layer and the ink receiving inside layer are simultaneously dried, to thereby enhance the ink image-receiving property and the surface smoothness of the image-recording stratum.
In the ink jet recording material of the present invention, the substrate is preferably formed from an air-impermeable material.
In the ink jet recording material of the present invention, the air-impermeable material for the substrate is preferably selected from laminate paper sheets comprising a support sheet consisting of a paper sheet and at least one air-impermeable coating layer formed on at least one surface of the support sheet and comprising a polyolefin resin.
In the ink jet recording material of the present invention, the ink receiving outermost layer optionally further comprises a cationic compound.
In the ink jet recording material of the present invention, the ink receiving outermost layer preferably exhibits a 75° specular surface gloss of 30% or more.
In the ink jet recording material of the present invention, the ink receiving inside layer and the ink receiving outermost layer are preferably formed in such a manner that the coating procedure of the coating liquid for the ink receiving inside layer onto the substrate and the coating procedure of the coating liquid for the ink receiving outermost layer onto the adjacent ink receiving inside layer are substantially simultaneously carried out through a plurality of coating liquid-feeding slits of a multi-strata-coating apparatus.
In the ink jet recording material of the present invention, the simultaneous multi coating apparatus is preferably selected from multi coating slot die coaters, multi coating slide die coaters, and multi coating curtain die coaters.
In the ink jet recording material of the present invention, the ink receiving inside layer and the ink receiving outermost layer are preferably formed in such a manner that the coating procedure of the coating liquid for the ink receiving inside layer onto the substrate and the coating procedure of the coating liquid for the ink receiving outermost layer onto the adjacent ink receiving inside layer are successively carried out through a plurality of coating liquid-feeding slits of a plurality of coating apparatuses located independently from each other.
In the ink jet recording material of the present invention, the independent coating apparatuses are preferably selected from slot die coaters, slide die coaters and curtain die coaters each having a single coating liquid-feeding slit.
In the ink jet recording material of the present invention, the at least one ink receiving layer of the image-recording stratum comprising the binder and the fine pigment particles of at least one pigment selected from the group consisting of silica, aluminosilicate and α-, θ-, δ- and γ-aluminas and having an average particle size of 1 μm or less, optionally further comprises a light resistance-enhancing agent for images comprising at least one member selected from the group consisting of phenolic compounds, boric acid, borate salts and cyclodextrin compounds.
In the ink jet recording material of the present invention, it is preferable that the image-recording stratum comprises a plurality of ink receiving layers superposed on each other, that an ink receiving layer located outermost of the image-recording stratum comprises the fine pigment particles and the binder,
and that at least one ink receiving layer in the image-recording layer contains an image light resistance-enhancing agent comprising at least one member selected from the group consisting of phenolic compounds, boric acid, borate salts and cyclodextrin compounds.
In the ink jet recording material of the present invention, the fine pigment particles contained in the ink receiving layer containing the image light resistance-enhancing agent are preferably in the form of secondary particles having an average secondary particle size of 1 μm or less, each consisting of a plurality of primary particles having an average primary particle size of 3 to 40 nm agglomerated with each other.
In the ink jet recording material of the present invention, the phenolic compounds are preferably selected from the group consisting of hydroquinone compounds, pyrocatechol compounds and phenolsulfonic acid compounds.
In the ink jet recording material of the present invention, the cyclodextrin compounds are preferably selected from the group consisting of
α-cyclodextrins,
β-cyclodextrins,
γ-cyclodextrins,
alkylated cyclodextrins,
hydroxyalkylated cyclodextrins, and
cation-modified cyclodextrins.
In the ink jet recording material of the present invention, the cyclodextrin compounds are preferably γ-cyclodextrins.
In the ink jet recording material of the present invention, the image light resistance-enhancing agent is preferably contained in the ink receiving layer by coating the ink receiving layer with a solution of the image light resistance-enhancing agent and drying the coated solution.
In the ink jet recording material of the present invention, the content of the image light resistance enhancing agent in the ink receiving layer is preferably 0.1 to 10 g/m2.
In the ink jet recording material of the present invention, the fine pigment particles are preferably fine particles of at least one member selected from fumed silica, amorphous silica, aluminas and alumina hydrates.
In the ink jet recording material of the present invention, the fumed silica particles are preferably in the form of secondary particles having an average secondary particle size of 300 nm or less and each consisting of a plurality of primary particles having a primary particle size of 3 to 50 nm and agglomerated with each other.
In the ink jet recording material of the present invention, the ink receiving layer comprising the fine pigment particles and the binder optionally further comprises a cationic compound.
In the ink jet recording material of the present invention, the binder preferably comprises at least one member selected from the group consisting of water-soluble polymeric compounds, latices of copolymers of conjugated diene compounds, latices of vinyl copolymers, water-dispersible acrylic resins, water-dispersible polyester resins and water-dispersible polyurethane resins.
In the ink jet recording material of the present invention, the binder preferably comprises at least one member selected from the group consisting of polyvinyl alcohol, partially saponified polyvinyl alcohols, acetoacetylated polyvinyl alcohols, silyl-modified polyvinyl alcohols, cation-modified polyvinyl alcohols, and anion-modified polyvinyl alcohols.
In the ink jet recording material of the present invention, the substrate is preferably formed from an ink-nonabsorbing material.
In the ink jet recording material of the present invention, the surface of the image-recording stratum preferably has a 75° specular gloss of 30% or more.
|
Q:
Does a blood transfusion cure disease?
Does transferring blood between two people also transfer all the white blood cells?
Why can't AIDS victims with low t-cell count just get blood transfusions till they have more t-cells? Why can't someone who's over a cold give blood to someone with a cold to cure them? I know this is silly, but I really want to know why this won't work.
A:
Not really no. Most blood transfusions we think about are red blood cells or platelets, which don't have the immune function you're asking for. That's a good thing. Usually, if there are white blood cells in the transfused blood, the host's immune system will recognize them as foreign and destroy them. Remember, your cells all look like foreign invaders to my cells; blood transfusions of red blood cells are carefully matched to limit negative reactions. There is also a process called transfusion-associated graft versus host disease in which the donor white blood cells will attack the host cells; this mainly occurs in immune-compromised individuals, but GvHD is definitely something to avoid. Blood transfusions are usually filtered and irradiated to remove, among other things, white blood cells.
That being said, people are beginning to use white blood cells as treatment. A new therapy being studied heavily for all sorts of diseases, from cancer to HIV, is to take the host's own white blood cells and grow them up in the lab to select for the strongest and most effective cells. The researchers then wipe out the individual's immune system and give them a dose of their own, super-powered white blood cells, hoping that works.
Sometimes it kind of does. They've also been trying a new system, similar to what you propose, using bone marrow. They had a huge success with an HIV-positive individual now referred to as "the Berlin patient." They gave him a bone marrow transplant from a donor whose cells were naturally resistant to HIV, which replaced his immune system with one that produces HIV-resistant white blood cells. He was and is effectively cured of HIV/AIDS.
|
If Robert Altman is to be remembered for one thing, it’s his abandonment of the talkative formulas left behind by Classical Hollywood’s decaying relevance. From his hard-boiled adaptation of Philip Marlowe in “The Long Goodbye” (1973) to his ambitious country epic “Nashville” (1975), his directorial flair is replicated, at best, through homage.
Much in the same way we look at “King Kong” (1933) now as being a ridiculous but genius paradigm for special effects, traces of Altman can be seen as bits and pieces, one at a time, of social commentary that hadn’t existed before — perhaps never again — were it not for him. With a keen eye for the American dream’s illusory tactics, Altman was especially gifted in connecting an ethos of nonchalant nationalism to a geographic location: its people, its vibe, its essence, really. Something Altmanesque was seen in Madison last weekend at the premiere of James Runde’s “Played Out” (2019).
Since 2015, Runde and a handful of colleagues have been shooting, developing and nurturing “Played Out” into the state it is today. Following the lives of various Madisonians — an unemployed mother (Leslie), an aspiring hip-hop artist (Booda) and an aimless man (James) — the film compresses external stressors into an obstacle for the psyche, pitting these characters against demands of validation and social normativity to pursue their inner desires. Through my own involvement in the Communications Department and Runde’s generosity, I was able to view a cut of the film in early March and again at its packed WIFF premiere.
The desire for direction pervades “Played Out,” and I was mostly struck by its success in creating an organic drift of this theme across the characters’ various points in life. James, portrayed by Runde himself, seems to float around a baseless existence composed of nothing more than work, hobbies and family. These are all great aspects in their own right, but Runde’s character seems to merely participate in these aspects rather than engage with them.
The product of this lifestyle is ambivalent and aptly mirrors the overwhelming wave of reality that looms over each newly branded adult as they enter the “real world.” To this extent, James isn’t a pitiful character nor an unlikeable one. Behind Runde’s facial pensivity, we see someone who is full of subdued anxieties toward the future, all without explicitly acknowledging their presence.
Leslie, on the other hand, faces an abrupt shattering of stability as opposed to fearing its impending throes. As she attempts to balance unemployment, motherhood and perfecting bass lines on a gorgeous Rickenbacker, we see a more salient externalization of these anxieties in comparison to James. Her character certainly stands out as a conduit for Altman’s counterculturalism; I mean, how often, really, do you hear of suburban mothers laying bass for punk-rock bands? There’s an air of grace about Leslie that embraces the challenge of adaptation, marking an increment of progress that’s so strongly coveted by the cast.
However, I found Booda to be an astonishing fusion of the two other characters. He seemed to be the most dynamic and focal character of the three, which is hardly a complaint. Instead of seeing James and Leslie as afterthoughts in the script, they seemed to have an almost ethereal force on Booda, pushing and pulling his choices toward their respective qualities of subdued anxieties or movement to change.
“Played Out” remains consistent in both its tone and visual style. The latter is a pretty muddy mixture of beige, gray and white. Low contrast gave a look and feel to the world that can only be described as two slices of white bread placed together — that is, flat and mild. With the occasional flair of colorful lighting, though, this proved an excellent metaphor for suburban distillation that, to some degree, seemed to be an argument Runde was after. In fact, the only astonishingly colorful scene I can recall was at some downtown bar; it seems that the only flavors of life to be found beyond the all-too-familiar benders of youth are a decent plate of spaghetti and a good punk-rock jam session.
Cinematography follows suit, with your average, hand-held cinéma vérité: while the composition, lighting and stage blocking all mesh cozily into the narrative, they don’t seem to push any boundaries or match someone’s high expectations. You’ve got the shot-reverse-shot, the close-up, the scene shot and so on. It’s all there, and while the camera work isn’t some revelation of übercinema, it doesn’t try to be. It knows what it is and owns that identity in a superbly fitting fashion.
Speaking of fitting, that’s certainly the most outright theme of the movie. While in one regard, James tries to satiate his hunger for sociality and simultaneously find time to attend his sister’s birthday, Leslie yearns to connect and inspire her son towards loftier academic goals. Meanwhile, Booda tries to launch a new LP while upholding the loving patriarchal position he’s proven to excel at. The film is wrought with an ambition to grow, if not for one’s self, then for others. According to Runde in the Q&A, the title has many meanings, from fleeting moments to utter exhaustion. Without appreciating these truths in life, no one can expect to grow into what they desire.
And so, we return to identifying what makes Runde’s featurette so Altmanesque. The characters of this demographic drama aren’t out searching for total nirvana, and they’re not seeking out some lost treasure or stopping a cackling villain. They’re people like you and me. This was even a driving force for Runde’s approach to multiple narrators, regarding the actors as not only friends but persons with unique stories that deserved to be told. If life is the pie, “Played Out” is the single slice you never want to end. It’s an independent film designed by the ultimate philosophical naturalist, sacrificing bells and whistles for real elements that transport us to the same place at the same time. For all I know, we could have been in that theater as the events unfolded in real time.
|
package com.github.nakjunizm;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class SpringBootSeedApplication {
public static void main(String[] args) {
SpringApplication.run(SpringBootSeedApplication.class, args);
}
}
|
package br.com.conpag.dao.sistema;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.inject.Inject;
import br.com.conpag.controller.sistema.LogController;
import br.com.conpag.dao.BaseDAO;
import br.com.conpag.entity.sistema.Municipio;
public class MunicipioDAO extends BaseDAO {
private static final String SQL_UPDATE_GEODATA = "municipio.updateGeodata";
@Inject
private LogController logController;
public void updateGeoData( Municipio municipio ){
PreparedStatement ps = null;
try{
String sql = this.queryManager.getQuery( SQL_UPDATE_GEODATA );
ps = this.connection.prepareStatement( sql );
ps.setDouble(1, municipio.getLatitude() );
ps.setDouble(2, municipio.getLongitude() );
ps.setInt(3, municipio.getId() );
ps.execute();
}
catch (SQLException e){
logController.printLog(e, true);
}
finally{
// close the statement to avoid leaking JDBC resources
if( ps != null ){
try{
ps.close();
}
catch (SQLException e){
logController.printLog(e, true);
}
}
}
}
}
|
package com.innoz.toolbox.scenarioGeneration.transit;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;
import org.apache.log4j.Logger;
import org.matsim.api.core.v01.Id;
import org.matsim.api.core.v01.Scenario;
import org.matsim.api.core.v01.TransportMode;
import org.matsim.api.core.v01.network.Link;
import org.matsim.api.core.v01.network.Network;
import org.matsim.core.population.routes.NetworkRoute;
import org.matsim.core.population.routes.RouteUtils;
import org.matsim.pt.transitSchedule.TransitScheduleFactoryImpl;
import org.matsim.pt.transitSchedule.api.Departure;
import org.matsim.pt.transitSchedule.api.TransitLine;
import org.matsim.pt.transitSchedule.api.TransitRoute;
import org.matsim.pt.transitSchedule.api.TransitRouteStop;
import org.matsim.pt.transitSchedule.api.TransitSchedule;
import org.matsim.pt.transitSchedule.api.TransitScheduleWriter;
import org.matsim.pt.transitSchedule.api.TransitStopFacility;
/**
*
* Simplifies a given transit schedule by merging its transit routes.
*
* @author dhosse
*
*
*/
public class TransitScheduleSimplifier{
// private final Comparator<TransitRouteStop> arrivalOffsetComparator = new Comparator<TransitRouteStop>() {
//
// @Override
// public int compare(TransitRouteStop o1, TransitRouteStop o2) {
// return Double.compare(o1.getArrivalOffset(), o2.getArrivalOffset());
// }
// };
public static TransitSchedule simplifyTransitSchedule(Scenario scenario, String outputDirectory){
return new TransitScheduleSimplifier().mergeEqualTransitRoutes(scenario, outputDirectory);
}
/**
* Simplifies a transit schedule by merging transit routes within a transit line with equal route profiles.
* The simplified schedule is also written into a new file.
*
* @param scenario the scenario containing the transit schedule to simplify
* @param outputDirectory the destination folder for the simplified transit schedule file
* @return the simplified transit schedule
*/
private TransitSchedule mergeEqualTransitRoutes(final Scenario scenario, String outputDirectory) {
final Logger log = Logger.getLogger(TransitScheduleSimplifier.class);
log.info("starting simplify method for given transit schedule...");
log.info("equal transit routes within a transit line will be merged...");
final String UNDERLINE = "__";
TransitScheduleFactoryImpl factory = new TransitScheduleFactoryImpl();
TransitSchedule schedule = scenario.getTransitSchedule();
Map<Id<TransitLine>,TransitLine> transitLines = schedule.getTransitLines();
TransitSchedule mergedSchedule = factory.createTransitSchedule();
//add all stop facilities of the original schedule to the new one
for(TransitStopFacility stop : schedule.getFacilities().values())
mergedSchedule.addStopFacility(stop);
int routesCounter = 0;
int mergedRoutesCounter = 0;
Iterator<TransitLine> transitLineIterator = transitLines.values().iterator();
while(transitLineIterator.hasNext()){
TransitLine transitLine = transitLineIterator.next();
Map<Id<TransitRoute>,TransitRoute> transitRoutes = transitLine.getRoutes();
TransitRoute refTransitRoute = null;
TransitLine mergedTransitLine = factory.createTransitLine(transitLine.getId());
TransitRoute mergedTransitRoute = null;
routesCounter += transitRoutes.size();
//add all transit routes of this transit line to a queue
PriorityQueue<Id<TransitRoute>> uncheckedRoutes = new PriorityQueue<>();
uncheckedRoutes.addAll(transitRoutes.keySet());
//iterate over all transit routes
while(uncheckedRoutes.size() > 0){
//make the current transit route the reference for the equality test
refTransitRoute = transitRoutes.get(uncheckedRoutes.remove());
String id = refTransitRoute.getId().toString();
//check all other transit routes, except for the reference route
for(Id<TransitRoute> transitRouteId : transitRoutes.keySet()){
if(!transitRouteId.equals(refTransitRoute.getId())){
TransitRoute transitRoute = transitRoutes.get(transitRouteId);
//if the route profiles are equal, "mark" current transit route by adding it to a string array
if(routeProfilesEqual(transitRoute, refTransitRoute)){
id += UNDERLINE+transitRoute.getId().toString();
uncheckedRoutes.remove(transitRoute.getId());
}
}
}
//if the new id equals the old one, there are no routes to be merged...
if(id.equals(refTransitRoute.getId().toString())){
mergedTransitLine.addRoute(refTransitRoute);
mergedRoutesCounter++;
continue;
}
//split new id in order to access the original routes
String[] listOfRoutes = id.split(UNDERLINE);
NetworkRoute newRoute = refTransitRoute.getRoute();//computeNetworkRoute(scenario.getNetwork(), refTransitRoute);
List<TransitRouteStop> newStops = computeNewRouteProfile(factory, refTransitRoute, transitRoutes, listOfRoutes, newRoute, null);
compareRouteProfiles(refTransitRoute.getStops(), newStops);
mergedTransitRoute = factory.createTransitRoute(Id.create(id, TransitRoute.class),
newRoute, newStops, TransportMode.pt);
mergeDepartures(factory, transitRoutes, mergedTransitRoute.getStops().get(0), mergedTransitRoute, listOfRoutes);
//add merged transit route to the transit line
mergedTransitLine.addRoute(mergedTransitRoute);
mergedRoutesCounter++;
}
mergedSchedule.addTransitLine(mergedTransitLine);
}
log.info("number of initial transit routes: " + routesCounter);
String diff = routesCounter > mergedRoutesCounter ? Integer.toString(mergedRoutesCounter - routesCounter)
: "+"+Integer.toString(mergedRoutesCounter - routesCounter);
log.info("number of merged transit routes: " + mergedRoutesCounter + " ( " + diff + " )");
log.info("writing simplified transit schedule to " + outputDirectory);
new TransitScheduleWriter(mergedSchedule).writeFile(outputDirectory);
log.info("... done.");
return mergedSchedule;
}
private void compareRouteProfiles(List<TransitRouteStop> stops,
List<TransitRouteStop> newStops) {
for(TransitRouteStop stop : stops){
if(stops.indexOf(stop) != newStops.indexOf(stop))
newStops.set(stops.indexOf(stop), stop);
}
}
/**
* Simplifies a transit schedule by merging transit routes within a transit line with touching route profiles.
* The initial transit routes are split into sections on which they overlap. A new section is created if the number
* of overlapping transit routes changes.
*
* @param scenario the scenario containing the transit schedule to simplify
* @param outputDirectory the destination folder for the simplified transit schedule file
* @return the simplified transit schedule
*/
private TransitSchedule mergeTouchingTransitRoutes(Scenario scenario, String outputDirectory){
final String UNDERLINE = "__";
Logger log = Logger.getLogger(TransitScheduleSimplifier.class);
log.info("starting simplify method for given transit schedule...");
log.info("transit routes within a transit line that overlap at least at one stop facility will be merged...");
TransitScheduleFactoryImpl factory = new TransitScheduleFactoryImpl();
List<TransitRouteStop> stops = new ArrayList<TransitRouteStop>();
TransitSchedule schedule = scenario.getTransitSchedule();
Map<Id<TransitLine>, TransitLine> transitLines = schedule.getTransitLines();
int mergedRoutesCounter = 0;
Iterator<TransitLine> transitLineIterator = transitLines.values().iterator();
while(transitLineIterator.hasNext()){
TransitLine transitLine = transitLineIterator.next();
Map<Id<TransitRoute>,TransitRoute> transitRoutes = transitLine.getRoutes();
TransitRoute refTransitRoute = null;
TransitRoute mergedTransitRoute;
PriorityQueue<Id<TransitRoute>> uncheckedRoutes = new PriorityQueue<>();
uncheckedRoutes.addAll(transitRoutes.keySet());
List<TransitRouteStop> stopsEqual = new ArrayList<TransitRouteStop>();
//iterate over all transit routes
while(uncheckedRoutes.size() > 0){
stops.clear();
mergedTransitRoute = null;
//make current transit route the reference route
refTransitRoute = transitRoutes.get(uncheckedRoutes.remove());
String id = refTransitRoute.getId().toString();
//iterate over all other transit routes
for(Id<TransitRoute> transitRouteId : transitRoutes.keySet()){
if(transitRouteId.equals(refTransitRoute.getId()))
continue;
TransitRoute transitRoute = transitRoutes.get(transitRouteId);
//if the reference route and the current transit route overlap at one point or more
//add the current transit route id to an id array
if((stopsEqual = routeProfilesTouch(transitRoute,refTransitRoute)).size() > 0){
id += UNDERLINE+transitRoute.getId().toString();
uncheckedRoutes.remove(transitRoute.getId());
}
//add overlaps (stops) for creating new route stops later...
for(TransitRouteStop stop : stopsEqual)
if(!stops.contains(stop))
stops.add(stop);
}
if(id.equals(refTransitRoute.getId().toString()))
continue;
String[] listOfRoutes = id.split(UNDERLINE);
while(stops.size() > 0){
//create new network routes and afterwards new route profiles and transit routes
List<NetworkRoute> newRoutes = computeNetworkRoutesByTransitRouteStops(scenario.getNetwork(), transitRoutes, listOfRoutes);
for(NetworkRoute networkRoute : newRoutes){
List<TransitRouteStop> newStops = computeNewRouteProfile(factory, refTransitRoute, transitRoutes, listOfRoutes,networkRoute, stops);
TransitRouteStop start = newStops.get(0);
mergedTransitRoute = factory.createTransitRoute(Id.create("merged_" + mergedRoutesCounter, TransitRoute.class), networkRoute, newStops, TransportMode.pt);
mergedRoutesCounter++;
mergeDepartures(factory, transitRoutes, start,mergedTransitRoute,listOfRoutes);
transitLine.addRoute(mergedTransitRoute);
}
}
//remove transit routes that have been merged from the transit schedule
for(int i = 0; i < listOfRoutes.length; i++)
transitLine.removeRoute(transitRoutes.get(Id.create(listOfRoutes[i], TransitRoute.class)));
}
}
log.info("writing simplified transit schedule to " + outputDirectory);
new TransitScheduleWriter(schedule).writeFile(outputDirectory);
log.info("... done.");
return null;
}
/**
*
* This method creates a simplified transit route out of the reference transit route.
* The route of the resulting transit route equals the initial route, except that
* it starts at the first and ends at the last transit route stop (no depot
* tours etc.).
*
* @param transitRoute the reference transit route
* @return the simplified network route
*/
private static NetworkRoute computeNetworkRoute(Network network, TransitRoute transitRoute) {
List<Id<Link>> routeLinkIds = new ArrayList<Id<Link>>();
double startOffset = Double.MAX_VALUE;
double endOffset = Double.MIN_VALUE;
TransitRouteStop start = null;
TransitRouteStop end = null;
for(TransitRouteStop stop : transitRoute.getStops()){
if(stop.getArrivalOffset() < startOffset){
startOffset = stop.getArrivalOffset();
start = stop;
}
if(stop.getArrivalOffset() > endOffset){
endOffset = stop.getArrivalOffset();
end = stop;
}
}
Id<Link> startLinkId = start.getStopFacility().getLinkId();
Id<Link> endLinkId = end.getStopFacility().getLinkId();
routeLinkIds.add(transitRoute.getRoute().getStartLinkId());
for(Id<Link> linkId : transitRoute.getRoute().getLinkIds())
routeLinkIds.add(linkId);
routeLinkIds.add(transitRoute.getRoute().getEndLinkId());
int startIndex = routeLinkIds.indexOf(startLinkId);
int endIndex = routeLinkIds.indexOf(endLinkId);
for(int i = 0; i < routeLinkIds.size(); i++){
if(routeLinkIds.indexOf(routeLinkIds.get(i)) < startIndex)
routeLinkIds.remove(routeLinkIds.get(i));
if(routeLinkIds.indexOf(routeLinkIds.get(i)) > endIndex)
routeLinkIds.remove(routeLinkIds.get(i));
}
// //get the start and the end link ids from the first and the last transit route stop
// Id startLinkId = transitRoute.getStops().get(0).getStopFacility().getLinkId();
// Id endLinkId = transitRoute.getStops().get(transitRoute.getStops().size()-1).getStopFacility().getLinkId();
//
// //if the initial network route doesn't contain the link id of the first stop it is added as first link
// if(!transitRoute.getRoute().getLinkIds().contains(startLinkId))
// routeLinkIds.add(startLinkId);
// //if the initial network route contains the start link id
// //set start index at the position of the start link id inside the initial network route
// else{
// startIndex = transitRoute.getRoute().getLinkIds().indexOf(startLinkId);
// routeLinkIds.add(transitRoute.getRoute().getLinkIds().get(startIndex));
// startIndex++;
// }
//
// //add all link ids of the initial network route to the new route as long as the end link is not reached yet
// for(int i = startIndex; i < transitRoute.getRoute().getLinkIds().size() ; i++){
// routeLinkIds.add(transitRoute.getRoute().getLinkIds().get(i));
// if(transitRoute.getRoute().getLinkIds().get(i).equals(endLinkId))
// break;
// }
//
// //if the new network route doesn't contain the end link so far, add it
// if(!routeLinkIds.contains(endLinkId))
// routeLinkIds.add(endLinkId);
return RouteUtils.createNetworkRoute(routeLinkIds, network);
}
/**
* Creates a list of new network routes. These routes are parts of the initial
* network routes. Every time the number of overlapping transit routes on a link changes,
* a new network route is created.
*
* @param network the network the route link ids refer to
* @param transitRoutes all transit routes, accessible by id
* @param listOfRoutes the id list of all touching transit routes
*/
private static List<NetworkRoute> computeNetworkRoutesByTransitRouteStops(Network network, Map<Id<TransitRoute>,TransitRoute> transitRoutes, String[] listOfRoutes) {
List<NetworkRoute> newNetworkRoutes = new ArrayList<NetworkRoute>();
PriorityQueue<Id<TransitRoute>> uncheckedTransitRoutes = new PriorityQueue<Id<TransitRoute>>();
for(int i=0;i<listOfRoutes.length;i++){
uncheckedTransitRoutes.add(Id.create(listOfRoutes[i], TransitRoute.class));
}
List<TransitRouteStop> checkedTransitRouteStops = new ArrayList<TransitRouteStop>();
int maxStops = Integer.MIN_VALUE;
for(TransitRoute transitRoute : transitRoutes.values()){
int size = transitRoute.getStops().size();
if(size > maxStops)
maxStops = size;
}
int transitRoutesContaining = 0;
TransitRoute currentTransitRoute = null;
//until all transit route stops have been visited...
while(checkedTransitRouteStops.size() < maxStops){
List<Id<Link>> routeLinkIds = new ArrayList<Id<Link>>();
//check transit route
currentTransitRoute = transitRoutes.get(uncheckedTransitRoutes.remove());
//counter to store the number of routes containing the LAST stop
transitRoutesContaining = 1;
//iterate over all transit route stops in the current transit route
for(TransitRouteStop stop : currentTransitRoute.getStops()){
//if this stop has not been visited yet
if(!checkedTransitRouteStops.contains(stop)){
//counter to store the number of routes containing the CURRENT stop
int containing = 1;
//iterate over all OTHER transit routes
for(TransitRoute transitRoute : transitRoutes.values()){
if(!transitRoute.getId().equals(currentTransitRoute.getId())){
//if the investigated transit route contains the current stop, increment counter
if(transitRoute.getStop(stop.getStopFacility()) != null){
containing++;
}
}
}
//if the number of containing transit routes changes and there are route links inside the new network route
//split the initial network route. add the current route to the list to be returned and continue with creating
//another network route
if(transitRoutesContaining != containing){
if(routeLinkIds.size() < 1){
transitRoutesContaining = containing;
}
else{
newNetworkRoutes.add(RouteUtils.createNetworkRoute(routeLinkIds, network));
transitRoutesContaining = containing;
//keep only the last link as the start of the next partial route
//(the original remove-while-iterating loop skipped every other element)
routeLinkIds.subList(0, routeLinkIds.size() - 1).clear();
}
}
Id<Link> nextLinkId = stop.getStopFacility().getLinkId();
//if the last and the current link aren't adjacent, add the intervening links from the initial network route
if(routeLinkIds.size() > 0){
Id<Link> lastLinkId = routeLinkIds.get(routeLinkIds.size()-1);
List<Id<Link>> linkIds = currentTransitRoute.getRoute().getLinkIds();
int lastLinkIndex = linkIds.contains(lastLinkId) ? linkIds.indexOf(lastLinkId)+1 : 0;
int nextLinkIndex = linkIds.contains(nextLinkId) ? linkIds.indexOf(nextLinkId) : 0;
for(int i = lastLinkIndex; i < nextLinkIndex-1; i++){
if(!routeLinkIds.contains(linkIds.get(i)))
routeLinkIds.add(linkIds.get(i));
}
}
routeLinkIds.add(stop.getStopFacility().getLinkId());
checkedTransitRouteStops.add(stop);
//if the last stop of the current transit route is reached, create one last network route and add it to the list
if(currentTransitRoute.getStops().indexOf(stop) >= currentTransitRoute.getStops().size()-1)
newNetworkRoutes.add(RouteUtils.createNetworkRoute(routeLinkIds, network));
}
}
}
return newNetworkRoutes;
}
/**
*
* Creates a new route profile for a simplified transit route.
* The arrival and departure offsets of each stop are averaged over
* all merged routes to get the travel time to, and the stop time at, that stop.
*
* @param newRoute the new network route
* @return merged route profile
*/
private List<TransitRouteStop> computeNewRouteProfile(TransitScheduleFactoryImpl factory,
TransitRoute refTransitRoute, Map<Id<TransitRoute>,TransitRoute> transitRoutes, String[] listOfRoutes,NetworkRoute newRoute,
List<TransitRouteStop> stops){
List<TransitRouteStop> newStops = new ArrayList<TransitRouteStop>();
for(int i = 0; i < refTransitRoute.getStops().size(); i++){
double arrivalOffset = 0;
int arrCounter = 0;
double departureOffset = 0;
int depCounter = 0;
for(int j = 0; j < listOfRoutes.length; j++){
TransitRouteStop stop = transitRoutes.get(Id.create(listOfRoutes[j], TransitRoute.class)).getStops().get(i);
arrivalOffset += stop.getArrivalOffset();
arrCounter++;
departureOffset += stop.getDepartureOffset();
depCounter++;
}
TransitRouteStop newStop = factory.createTransitRouteStop(refTransitRoute.getStops().get(i).getStopFacility(), arrivalOffset/arrCounter,
departureOffset/depCounter);
newStop.setAwaitDepartureTime(refTransitRoute.getStops().get(i).isAwaitDepartureTime());
newStops.add(newStop);
}
return newStops;
}
/**
* Merges the departures of all transit routes that are to be merged.
*
* @param startTransitRouteStop the first stop of the new transit route
* @param mergedTransitRoute the new transit route
*/
private void mergeDepartures(TransitScheduleFactoryImpl factory, Map<Id<TransitRoute>,TransitRoute> transitRoutes, TransitRouteStop startTransitRouteStop,
TransitRoute mergedTransitRoute,String[] listOfTransitRoutes) {
for(int i = 0; i < listOfTransitRoutes.length; i++){
TransitRoute transitRoute = transitRoutes.get(Id.create(listOfTransitRoutes[i], TransitRoute.class));
if(mergedTransitRouteContainsTransitRouteStops(mergedTransitRoute, transitRoute, startTransitRouteStop)){
for(Departure departure : transitRoute.getDepartures().values()){
String departureId = String.format("%02d", mergedTransitRoute.getDepartures().size());
Departure dep = factory.createDeparture(Id.create(departureId, Departure.class),
departure.getDepartureTime() + transitRoute.getStop(startTransitRouteStop.getStopFacility()).getDepartureOffset());
dep.setVehicleId(departure.getVehicleId());
mergedTransitRoute.addDeparture(dep);
}
}
}
}
/**
* Compares the route profiles of two given transit routes for equality.
*
* @param transitRoute
* @param transitRoute2
* @return true if the route profiles are equal, false if not
*/
private boolean routeProfilesEqual(TransitRoute transitRoute,
TransitRoute transitRoute2) {
if(transitRoute.getStops().size() != transitRoute2.getStops().size())
return false;
for(int i=0;i<transitRoute.getStops().size();i++){
if(!(transitRoute.getStops().get(i).getStopFacility().getId().equals(transitRoute2.getStops().get(i).getStopFacility().getId())))
return false;
}
return true;
}
/**
* Checks two given transit routes for overlaps.
*
* @param transitRoute
* @param refTransitRoute
* @return an empty list if the transit routes do not overlap, else the collection of the stops that both transit routes contain
*/
private List<TransitRouteStop> routeProfilesTouch(TransitRoute transitRoute,
TransitRoute refTransitRoute) {
List<TransitRouteStop> stops = new ArrayList<TransitRouteStop>();
for(TransitRouteStop stop : refTransitRoute.getStops()){
if(transitRoute.getStops().contains(stop))
stops.add(stop);
}
return stops;
}
private boolean mergedTransitRouteContainsTransitRouteStops(TransitRoute mergedTransitRoute, TransitRoute transitRoute, TransitRouteStop start){
if(!transitRoute.getStops().contains(transitRoute.getStop(start.getStopFacility()))||transitRoute.getId().toString().contains("merged"))
return false;
for(TransitRouteStop stop : mergedTransitRoute.getStops()){
if(!transitRoute.getStops().contains(transitRoute.getStop(stop.getStopFacility())))
return false;
}
return true;
}
}
|
OPTIMIZATION OF THE PHYSICAL FITNESS THROUGH BALLROOM DANCE, IN CHILDREN OF LOW AND MIDDLE SCHOOL-AGE

The optimization of physical fitness through ballroom dance, in any category of children, is an easy, useful, educational and pleasant way of physical, functional and motor education. Compared to other means, specific or non-specific to the physical-education and sports area, it can be used to positively influence the developing human body. This study seeks to underline the qualitative effects of ballroom dance on health and on the optimization of the physical fitness parameters. The assessment tools applied here indicate the beneficial effects that dance has on health.

Introduction

The intent of this article is primarily to create an instrument for assessing and studying in depth the beneficial effects of dancing in preventing the onset or aggravation of certain psycho-postural and walking deficiencies. The idea is to create a working instrument that can be efficient in the prevention of physical and functional deficiencies. The results obtained through dance therapy will be assessed by means of specific methods and techniques. The indications and counter-indications of this form of physical therapy for different deficiencies will also be thoroughly analyzed.

The Importance of the Research

Dance movement therapy is the use of creative movement and dance in a therapeutic relationship. Dancing, like physical movement in general, presupposes training and energy; it stimulates the heart rate and as such represents an excellent form of physical training, which tones the muscles, strengthens the bones, and increases endurance during effort and muscular flexibility.
Moreover, therapy through dance increases the level of motor intelligence, thanks to the stock of knowledge provided by the specific moves: their complexity, the high level of coordination of the motor acts, and the acquired skills and habits specific to this type of effort. Conceptually, the benefits of dance have a cognitive outcome (the realization of complex commands and motor actions, the increase of the improvisation level, the increase of kinesthetic memory); affective outcomes (feelings, expressions, challenges); physical outcomes (healthy habits; development of dancing skills; development of body awareness, control, balance and coordination; accumulation of physical flexibility, stamina, strength and agility; positive physical activity that releases stress; development of sensorimotor skills through brain dance patterns); and social outcomes. This art form is accessible to people of all ages wanting a healthier life, by avoiding obesity and stress, and by improving self-esteem and general disposition. It is especially recommended to children, for a more harmonious development both bodily and emotionally and cognitively, but also to teenagers, in order to correct any possible light physical deficiencies inherited from childhood. The immediate benefits of dance therapy thus include: improving body flexibility, increasing body strength and physical resistance, inducing a good mood and fighting stress, preventing depression, preventing cardiac diseases, facilitating weight loss, improving balance, and strengthening the immune system. Before continuing, a distinction must be made: this article does not analyze dancing from a musical perspective, but merely from the perspective of the physical fitness attained as an effect of dancing, namely of ballroom dancing. Dance therapy is a complementary form of prevention. In ballroom dance, the training process is carried out similarly to any other sport.
The structure of the training session in ballroom dance is complex, including the following:
- adapting the body to the physical effort;
- developing the performance abilities;
- gaining the sport shape;
- obtaining maximum results.

In order to attain the above, the structure of the training session will include two main groups of elements: the static elements and the dynamic elements. The ballroom dance technique is learned individually for each of the 10 dances. At this stage, the dancer learns the step technique and the choreographies, and then learns to execute them to the music, in order to respect the rhythm and to attain the specificity of the learned dance. For the efficient learning of the technical elements, the technical and physical description and illustration play a huge role and have a great influence on the dancers.

Hypothesis, Purpose, and Goals of the Research

Just as rigorously practiced sport leads, in the short term, to often spectacular "leaps" in body strength and endurance, it is certain that dance, practiced regularly, will also lead to significantly improved basic fitness, better coordination and motricity, as well as a better psychological state. The intention is to use the assessment experiments and instruments to measure the progress of the subjects in a given timeframe, and so to prove the validity of the hypothesis.

Research Material and Methods

The research took place over 6 months and was carried out on children aged between 6 and 12 years. During these months, the children participated in two dance lessons weekly, in one assisted practice session, and in ballet and fitness sessions once every two weeks. The dance and assisted practice sessions lasted 90 minutes each, and the ballet and fitness sessions 60 minutes each. In the dance lessons, children learned four dance styles (slow waltz, quickstep, cha-cha-cha and jive), containing sequences specific to ballroom dance.
The fitness program contained exercises designed to improve the body's resistance to physical effort, the muscle tone, the heart rate during sustained effort, and the muscular elasticity. The chosen programs aimed at attaining a correct physical expression, considering the deficiencies typical at the addressed age. In the somatometric assessment, by filling in the anamnestic charts, the values for height, weight and body mass index were registered.

The ballroom dance classes were conducted according to a complex program, in three parts. During the first part, children underwent a general warm-up (adaptation of the body to effort), with usual exercises (rotation of body limbs and of the hips, lunges, running, etc.). During the second part, the program included 6 exercises specific to ballroom dance, for both the Standard and the Latin categories. During the third part, the children deepened or perfected the learned sequences (Table 1).

Table 1. Exercises specific to ballroom dance (initial position, then description of the exercise)

1. Standing, knees slightly flexed, leaning slightly forward, weight on the toes; arms abducted at 90 degrees, elbows bent (dancing position). Knee extension, with raising on the toes, in three strokes; knee flexion, with lifting of the heels off the ground, in three strokes.
2. Standing, knees slightly flexed, leaning slightly forward, weight on the toes; arms abducted. Circumduction of the arms in axis, forwards and backwards.
3. Dancing position. T1 step forward with the right foot; T2 raise on toes with triple extension; T3 triple flexion; T4 step backwards with the left foot; T5 raise on toes with triple extension.
4. Dancing position. T1 step forward with the right foot; T2 raise on toes with triple extension; T3 triple flexion; T4 step backwards with the left foot; T5 raise on toes with triple extension / triple flexion.
5. Dancing position. T1 raise on toes with triple extension; T2 added step to the left (two steps laterally); T3 triple flexion; T4 raise on toes with triple extension; T5 added step to the right (two steps laterally) / triple flexion.
6. Sitting, arms abducted at 90 degrees, elbows bent (dancing position). Maintaining the dancing position in isometric contraction.
7. Standing, feet apart, arms abducted at 90 degrees. "8" moves from the pelvic area.
8. Standing, arms crossed on the chest. T1 side step to the right, with an "8" move from the pelvic area; T2 come back; T3 side step to the left, with an "8" move; T4 come back.
9. Standing, arms crossed on the chest. T1 step forward with the right foot, with an "8" move from the pelvic area; T2 come back; T3 step forward with the left foot, with an "8" move; T4 come back.
10. Standing, arms crossed on the chest. T1 step backwards with the right foot, with an "8" move from the pelvic area; T2 come back; T3 step backwards with the left foot, with an "8" move; T4 come back.
11. Sitting, arms abducted at 90 degrees. Jumping on toes, with kicks forward, laterally and back, alternately, with the right foot.
12. Sitting, arms abducted at 90 degrees. Jumping on toes, with kicks forward, laterally and back, alternately, with the left foot.

The body mass index (the Quetelet index) is calculated by dividing the body weight, expressed in grams, by the height, expressed in centimeters.
The thoracic elasticity involves measuring the amplitude of the respiratory act, the difference between inspiration and expiration. It is obtained by using a metric strip, placed on the back under the lower angle of the scapula, and differently in the front (for the boys under the mammary areola, and for the girls at the level of the joint of the 9th rib with the sternum, supramammary), and by calculating the value of the thoracic elasticity.

The Ruffier test provides real data on the readiness of the cardiovascular system for effort, by registering the heart rate at rest, in lying (supine) or sitting position (P1); immediately after 30 squats performed in 30 seconds (P2); and after 1 minute of rest, again in sitting position (P3). The measuring is done over 15 seconds. The calculation is done according to the standard formula:

Ruffier index = [4 x (P1 + P2 + P3) - 200] / 10

Results of the Research

The first step of the research consists in measuring and centralizing the values for the somatometric and somato-functional assessments. In theory, the body mass index can be calculated for children, but in practice the results do not reflect reality, since 75% of the children had a BMI under 18.50, which would mean that they are underweight, and only 25% scored within the predetermined limit values for normal weight, namely between 18.50 and 24.99.

Chart 1. Body Mass Index

In the somatometric assessments (Chart 2), the thoracic elasticity indicated increased values for each child.

Graph 3. The Ruffier Test

A constant progress or regress, linked directly to the activities performed, could not be established. The established difference may be the result of a multitude of external factors, such as how well rested the child was on that particular day, the (over-)burdening at school on that day, and the nutrition on that day.
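The two indices used in the assessment can be computed directly. The sketch below is illustrative only: it assumes the weight-in-grams over height-in-centimeters form of the Quetelet index as described in the text, and the standard Ruffier formula for pulse counts taken over 15 seconds; the function names and the example values are not taken from the study.

```python
def quetelet_index(weight_g, height_cm):
    """Quetelet index as described in the text: body weight in grams
    divided by height in centimeters (note: this differs from the
    modern kg/m^2 body mass index)."""
    return weight_g / height_cm

def ruffier_index(p1, p2, p3):
    """Ruffier index from three 15-second pulse counts: at rest (p1),
    immediately after 30 squats in 30 s (p2), after 1 min rest (p3).
    The counts are multiplied by 4 to give per-minute heart rates."""
    return (4 * (p1 + p2 + p3) - 200) / 10

# Hypothetical child: 35 kg, 140 cm; pulse counts 17, 22, 19 per 15 s
print(quetelet_index(35000, 140))   # 250.0
print(ruffier_index(17, 22, 19))    # 3.2
```

Lower Ruffier values indicate better cardiovascular readiness for effort, which is why the test is repeated before and after the training period.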
Conclusions

Considering the assertions above, and based on the experiments and investigations performed, the utility of ballroom dance appears evident: ballroom dance provides significant benefits, both physically and mentally. Regularly practiced dance leads, among other things, to the improvement of psycho-motor coordination, of the heart rate and of the pulmonary capacity, and, more broadly, to maintaining a general state of physical health in all the developing components of the human being (somato-functional, psychological and emotional health). The type of music also influences the physical activity: a more vivid and rapid musical rhythm may impose a more alert rhythm of the exercises or may support longer effort, while classical music may constitute the appropriate frame for neuro-psychic and neuro-muscular relaxation and may help in the recovery after effort.
|
Field of the Invention
The invention relates to a heat exchanger for an air conditioner, and more particularly to a cooling fin for a heat exchanger which provides an improved heat transfer performance.
A conventional heat exchanger for an air conditioner includes, as shown in FIG. 1, a plurality of flat vertical fins 1 arranged in a parallel relation to each other at predetermined intervals and a plurality of heat exchanging tubes 2 passing horizontally through the fins 1 perpendicular thereto. The air currents flow in the spaces defined between the fins 1 in the direction of the arrow in FIG. 1 and exchange heat with the fluid flowing in the heat exchanging tubes 2.
For a thermal fluid flowing around each flat fin 1, it has been known that the thickness of the thermal boundary layer 3 on both heat transfer surfaces of the fin 1 gradually thickens in proportion to the square root of the distance from the air current inlet end of the fin 1, as shown in FIG. 2. In this regard, the heat transfer rate of the fin 1 is remarkably reduced in proportion to the distance from the air current inlet end. Therefore, the above heat exchanger has a lower heat transfer efficiency.
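The square-root growth described above is the classical laminar flat-plate result. The rough sketch below uses the textbook Blasius estimate delta(x) ~ 5*sqrt(nu*x/U); the viscosity value and velocities are illustrative assumptions, not values from this disclosure.

```python
import math

def boundary_layer_thickness(x, u_inf, nu=1.5e-5):
    """Blasius flat-plate estimate of laminar boundary-layer thickness,
    delta(x) ~ 5 * sqrt(nu * x / U). nu defaults to air at roughly 20 C
    (m^2/s); x is the distance from the inlet edge in meters."""
    return 5.0 * math.sqrt(nu * x / u_inf)

# Thickness grows with the square root of distance from the fin's inlet
# edge, so the local heat transfer rate falls toward the outlet edge.
for x_mm in (5, 10, 20):
    d = boundary_layer_thickness(x_mm / 1000.0, u_inf=1.0)
    print(f"x = {x_mm:2d} mm -> delta ~ {d * 1000:.2f} mm")
```

Doubling the distance from the inlet edge thickens the layer by a factor of sqrt(2), which is the quantitative form of the efficiency loss the paragraph describes.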
For the thermal fluid flowing about each heat transfer pipe 2, it has also been known that, when lower velocity air currents flow in the direction of the arrow of FIG. 3, the air currents separate from the outer surface of the pipe 2 at portions spaced apart from the center point of the outer surface of the pipe 2 at angles of 70 to 80 degrees. Therefore, an air dead region 4 is formed behind each tube 2 in the direction of the air flow, as shown in the hatched region of FIG. 3. In the air dead region 4, the heat transfer rate of the tube 2 is remarkably reduced, so that the heat transfer efficiency of the above heat exchanger becomes worse.
In order to overcome the above problems, there has been proposed another solution as disclosed in Korean Patent Application No. 96-27642 filed on Jul. 9, 1996 by the present applicant. This heat exchanger, as shown in FIGS. 4 and 5, includes a plurality of heat exchanging tubes 2 which are fitted into the regularly spaced flat fins 1 such that the tubes 2 are perpendicular to the fins 1. The heat exchanger also includes a plurality of angled louver patterns which are formed adjacent the tubes 2 passing through each fin 1. Each louver pattern comprises a pair of louver groups located either above or below one of the tubes 2. A lower louver pattern disposed below a tube 2 comprises a first louver group 20 configured to guide an air current flow in a first direction, and a second louver group 40 which is inclined opposite to the first louver group such that the guided air current is guided in a different direction. An upper louver pattern located above a tube 2 comprises a third louver group 30 and a fourth louver group 50 inclined relative to one another. Each of the louver groups is radially oriented relative to a respective tube 2.
The first and third louver groups 20 and 30 are oriented in mirror image relationship to each other such that the air currents flowing over both surfaces of the flat fin 1 and in the area between adjacent tubes 2 become turbulent and mixed. Further, the second and fourth louver groups 40 and 50 are similarly placed in mirror image relationship to each other such that the air currents which have passed the groups 20 and 30 continue to traverse the remainder of the area between the tubes 2 and become turbulently mixed by the groups 40 and 50, thereby reducing the dead air region.
Each of the louver groups includes louvers 70-75 which are inclined obliquely relative to the plane of the fin, as can be seen in FIG. 5. That is, each of the louvers 71-74 has a left end L projecting past a first surface S1 of the flat fin 1, and a right end R thereof extending past a second surface S2 of the flat fin 1. Each louver provides a slit arranged transversely relative to the air flow. The louvers are formed by way of a cutting and twisting process so as to be integral with the flat fin 1. The fin 1 includes flat, solid portions 60, some of which are round and surround respective tubes 2. For example, one of those round areas occupies a region between upper ends of the louver groups 20, 40 and a lower outer circumference of an adjacent tube 2. The louver groups are radially oriented with respect to respective tubes 2.
The first and second louver groups 20, 40 are arranged symmetrical relative to each other and are separated by a solid portion 60 of the fin. The same is true of the third and fourth groups 30 and 50.
The louvers 70-75 of each group are sequentially arranged relative to one another without any solid fin portion disposed therebetween.
In the drawing, reference numeral 80 denotes beads or ridges which are vertically oriented. Each bead 80 defines a vertical longitudinal axis that perpendicularly intersects the axes of vertically adjacent pipes 2. The beads serve as water guides to drain water, or dew, that condenses on the tubes 2 or fins. The beads also reinforce the fin 1 and enlarge the surface area thereof.
Each bead 80 is located in a solid portion 60 of the fin situated between the first and third groups 20, 30 on the one hand, and the second and fourth groups 40, 50 on the other hand.
The bead projects above the plane of the fin 1 and has a V-shaped cross-section (see FIG. 5).
In the heat exchanger described above, each louver group has a remote edge e facing away from its respective louver group, facing an edge of another louver group, and extending parallel to the direction s of the air flow. The air current flowing over those edges e is not well mixed, resulting in the creation of a wider dead air region behind each tube 2, as well as an increase in the pressure drop, thereby reducing the heat transfer efficiency of the heat exchanger.
Furthermore, since the beads are formed only in vertical alignment with the tubes 2, the strength of portions of the fin 1 disposed in front of and behind the tubes 2 is not improved, which greatly lowers the overall strength of the fin 1. In addition, there are insufficient beads to satisfactorily drain all of the dew formed on the surface of the fin 1.
|
Additive manufacturing (also known as 3D printing, solid free-form fabrication, rapid prototyping and rapid manufacturing) is commonly used to manufacture three-dimensional solid objects. It is particularly useful for applications where speed of manufacture is important but where low costs are desirable, for example in the manufacture of prototypes.
The additive manufacturing process involves the creation of a three dimensional object by successive addition of multiple material layers, each layer having a finite thickness. A variety of methods fall under the umbrella of additive manufacturing including: stereolithography (SLA), fused deposition modelling (FDM), selective deposition modelling (SDM), laser sintering (LS) and selective light modulation (SLM).
Each of the above known methods includes the following steps:
1. The conversion of a computer-generated 3D model to a file format (such as .STL or .OBJ) which provides geometric information in a physical Cartesian space. Computer aided design (CAD) software may be used to generate the initial 3D model.
2. Once converted, the 3D model is broken down (“sliced”) into a series of two-dimensional (‘2D’) discrete cross sections.
3. A computer controlled apparatus successively fabricates each cross section, one on top of another in the z-direction, forming successive layers of build material which in turn form the three dimensional object.
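Step 2 above ("slicing") can be sketched as intersecting every mesh triangle with a series of horizontal planes. The minimal sketch below assumes a mesh given as a plain list of triangles (three (x, y, z) vertex tuples each), which is essentially what an .STL file encodes; a production slicer would additionally stitch the segments into closed contours.

```python
def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane
    at height z; return the 2-D segment of the cross-section, or None."""
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:          # this edge crosses the plane
            t = (z - z1) / (z2 - z1)          # linear interpolation factor
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height, z_max):
    """Slice a triangle soup into per-layer lists of 2-D segments,
    sampling at mid-layer heights to avoid hitting vertices exactly."""
    layers = []
    z = layer_height / 2
    while z < z_max:
        layers.append([s for s in (slice_triangle(t, z) for t in triangles) if s])
        z += layer_height
    return layers
```

Each entry of the returned list corresponds to one 2D cross section that the apparatus in step 3 would then cure or deposit.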
The fabrication process differs between the above-mentioned methods, as does the choice of build material.
The fabrication process used in both stereolithography (SLA) and selective light modulation (SLM) involves a build material of liquid photosensitive polymer (often known as a ‘resin’) and a mechanism for exposing the photosensitive polymer to electromagnetic radiation.
Exposed photosensitive polymer undergoes a chemical reaction leading to polymerization and solidification. The solidification of the photosensitive polymer is commonly known as “curing”, and the solidified photosensitive polymer is said to have been “cured” or “hardened”.
In both SLA and SLM, electromagnetic radiation is applied to a targeted area known as the “working surface”. However, the two processes differ from one another in the way that the electromagnetic radiation is applied to the targeted area: SLA systems use a laser beam mounted on an x-y scanning system to create each material layer of the 3D object by tracing a digital cross-section onto the photosensitive polymer; SLM systems on the other hand, use spatial light modulators such as digital projectors to project the whole digital cross-section onto the photosensitive polymer in one go. The digital projector may be based on: Digital Light Processing (DLP), Digital Micromirror Device (DMD), Liquid Crystal Display (LCD), or Liquid Crystal on Silicon (LCOS).
The apparatus required to carry out SLA or SLM methods usually includes: a vat to hold the photosensitive polymer; a source of electromagnetic radiation (typically UV, near-UV, or visible light); a build platform; an elevator mechanism capable of adjusting the separation of the vat and the build platform; and a controlling computer.
The apparatus may be configured in a “top-down” arrangement in which the source of electromagnetic radiation is located above the vat, or in a “bottom-up” arrangement where the source of electromagnetic radiation is located below the vat.
In a top-down arrangement, such as that shown in FIG. 1A, the source of the electromagnetic radiation is located above the vat. In use, the build platform is positioned below the surface of the photosensitive polymer. The working surface is the photosensitive polymer located above the build platform and the distance between the upper surface of the photosensitive polymer and the upper surface of the build platform defines the cross-sectional thickness of a cured layer. Disadvantages associated with the top-down method include the necessary process of recoating the cured photosensitive polymer with uncured (“fresh”) photosensitive polymer. In addition, the high viscosity of the photopolymer and high surface tension can lead to difficulties in levelling the surface of the photosensitive polymer.
In a bottom-up arrangement, such as that shown in FIG. 1B, the issue of levelling the surface of the photosensitive polymer is avoided by locating the source of electromagnetic radiation below the vat. A layer of photosensitive polymer sandwiched between an optically clear vat floor and the build platform forms the working surface and allows for precise control over the layer thickness and the surface quality of the layer of photopolymer. However, as the photosensitive polymer hardens, it bonds to the surfaces it is in contact with, resulting in high separation forces, difficulties in raising the build platform to build the next layer, and a risk of damage to the cured layer.
It is known that damage during separation can be reduced by non-stick coatings and/or thin film layers on the vat. However, these coatings and layers add to the cost of the 3D printing equipment.
Dendukuri et al. (2006), Nature Mater., Vol. 5, pp. 365-369, suggested applying coatings to the vat floor that inhibit the cure of the photosensitive polymer. A coating of PDMS (an optically clear, oxygen-rich silicone) is applied to the bottom of the vat; the presence of oxygen inhibits the cure of acrylate polymers, creating a layer of uncured liquid polymer (approximately 2.5 μm thick) between the PDMS and the solidified layer. As a result, the cured layer does not adhere to the vat floor, reducing the forces required to raise the elevator. However, when using a cure-inhibition coating, the separation forces between the vat floor and the cured part can still be very large due to the surface tension forces associated with thin-film viscous liquids. The surface tension forces are particularly important because they are inversely proportional to the layer thickness.
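The inverse relationship between separation force and film thickness can be illustrated with a toy calculation (a minimal sketch, not from the source; the proportionality constant `k` is a hypothetical lumped parameter standing in for surface tension, contact area, and geometry):

```python
def separation_force(film_thickness_um: float, k: float = 100.0) -> float:
    """Relative separation force for a thin viscous film, modelled as F = k / h."""
    if film_thickness_um <= 0:
        raise ValueError("film thickness must be positive")
    return k / film_thickness_um

# Under this simple inverse-thickness model, a 2.5 um uncured film
# resists separation ten times more strongly than a 25 um film.
print(separation_force(2.5) / separation_force(25.0))  # → 10.0
```

This is why moving the cured part over a deeper channel, which thickens the liquid film, reduces the separation force so markedly.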
One method of overcoming damage due to surface tension forces is x-translation, which combines a cure-inhibition coating with a slide mechanism and a variable-depth vat. The cure-inhibition coating on the vat floor creates a non-cured layer that acts as a lubricant between the vat floor and the cured part, so the cured part can glide easily on the cure-inhibition layer. The cured cross-section is slid off the cure-inhibition layer into a deeper channel, increasing the distance between the solidified part and the vat floor and reducing surface tension forces by an order of magnitude, allowing the build platform to be raised easily before being translated back to its original position. This method of translating the build platform from a shallow channel to a deeper channel via translation in the x-direction typically requires an additional “over-lift” step, in which the build platform is raised higher than necessary in order to allow photosensitive polymer to recoat the working surface. Any such additional step or extra movement adds to the time taken to prepare the working surface for the next layer.
As 3D models are sliced into thousands of material layers, it is important to reduce the fabrication time of each cross-section. This depends upon a number of factors such as the time to cure the photosensitive polymer at the desired thickness and the time to prepare the working surface for the next layer. The time to cure the photosensitive polymer is a function of the power of the source of the electromagnetic radiation at the working surface and the composition of the photosensitive polymer. Typically, high power sources result in shorter cure times. The time taken to prepare the working surface for the next layer typically depends on the separation method and time taken to recoat the working surface with fresh photosensitive polymer. Several extra seconds taken during the layer separation process for a model with thousands of layers will add extra hours onto the overall fabrication time.
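The cumulative cost of per-layer overhead can be seen with a back-of-the-envelope calculation (illustrative figures only; the layer count and per-layer times below are assumptions, not values from the source):

```python
def build_time_hours(num_layers: int, cure_s: float, prep_s: float) -> float:
    """Total fabrication time when every layer needs curing plus surface preparation."""
    return num_layers * (cure_s + prep_s) / 3600.0

# A 4000-layer model: the same cure time, but a separation/recoat step
# that takes 3 extra seconds per layer.
baseline = build_time_hours(4000, cure_s=2.0, prep_s=1.0)
slower = build_time_hours(4000, cure_s=2.0, prep_s=4.0)
print(round(slower - baseline, 2))  # → 3.33 (extra hours from 3 s/layer of overhead)
```

Even a few seconds of extra per-layer handling therefore dominates the overall build time for models with thousands of layers.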
The apparatus used in the above-described SLA and SLM methods tends to be mechanically complex, difficult to operate and maintain, and expensive to buy and use. The use of high-power lasers and UV light sources tends to significantly increase the cost of the machines, both to purchase and to operate, through high energy consumption. Furthermore, the health and safety risks of high-power lasers and UV light sources make current systems unsuitable for use at home or by untrained personnel.
|
package com.l2d.tuto.springbootwebfluxsecurity.security;
import org.apache.logging.log4j.util.Strings;
import org.springframework.http.HttpHeaders;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.userdetails.User;
import org.springframework.util.StringUtils;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;
/**
* Utility class for Spring Security.
*/
public final class SecurityUtils {
private SecurityUtils() {
}
/**
 * Returns the value of the Authorization header from the given exchange,
 * or an empty string when the header is absent or blank.
 */
public static String getTokenFromRequest(ServerWebExchange serverWebExchange) {
String token = serverWebExchange.getRequest()
.getHeaders()
.getFirst(HttpHeaders.AUTHORIZATION);
return StringUtils.isEmpty(token) ? Strings.EMPTY : token;
}
/**
 * Resolves the authenticated username from the exchange's security principal.
 */
public static Mono<String> getUserFromRequest(ServerWebExchange serverWebExchange) {
return serverWebExchange.getPrincipal()
.cast(UsernamePasswordAuthenticationToken.class)
.map(UsernamePasswordAuthenticationToken::getPrincipal)
.cast(User.class)
.map(User::getUsername);
}
}
|
// libs/boost.task/boost/task/spin/count_down_event.hpp
// Copyright <NAME> 2009.
// Distributed under the Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_TASKS_SPIN_COUNT_DOWN_EVENT_H
#define BOOST_TASKS_SPIN_COUNT_DOWN_EVENT_H
#include <cstddef>
#include <boost/atomic.hpp>
#include <boost/thread/thread_time.hpp>
#include <boost/utility.hpp>
namespace boost {
namespace tasks {
namespace spin {
class count_down_event : private noncopyable
{
private:
std::size_t initial_;
atomic< std::size_t > current_;
public:
explicit count_down_event( std::size_t);
std::size_t initial() const;
std::size_t current() const;
bool is_set() const;
void set();
void wait();
bool timed_wait( system_time const&);
template< typename TimeDuration >
bool timed_wait( TimeDuration const& rel_time)
{ return timed_wait( get_system_time() + rel_time); }
};
}}}
#endif // BOOST_TASKS_SPIN_COUNT_DOWN_EVENT_H
|
/**
* model representing the individual setting.
*
* @author Practical IT
*/
export class IndividualSetting {
allow_email?: number;
email_per_day?: number;
email_per_week?: number;
allow_sms?: number;
sms_per_day?: number;
sms_per_week?: number;
}
|
// Bornsoul/Revenger_JoyContinue
// Fill out your copyright notice in the Description page of Project Settings.
#pragma once
#include "Revenger.h"
#include "Actor/Characters/GameCharacter.h"
#include "GC_Player.generated.h"
/**
*
*/
UCLASS()
class REVENGER_API AGC_Player : public AGameCharacter
{
GENERATED_BODY()
public:
AGC_Player();
};
|
Joint Scheduling of Proactive Caching and On-Demand Transmission Traffics Over Shared Spectrum Proactive caching has emerged as a promising solution to reduce the content access latency in radio access networks (RANs), thereby attracting considerable attention in the era of 6G research. It allows base stations to push popular content items to mobile users' devices proactively. Therefore, a cache-enabled RAN may serve a user by either on-demand transmission or proactive content placement, which share a common radio spectrum. How to efficiently schedule proactive caching and on-demand transmission then becomes a challenging issue that remains open. In this paper, we present a unified framework for joint scheduling of caching and on-demand transmission. In particular, we formulate a Markovian queueing model to analyze the average delay and power consumption of the proposed scheduling policy, which are then jointly minimized via linear programming (LP). Furthermore, a low-complexity heuristic scheduling policy is conceived to strike a sub-optimal tradeoff between delay and power based on greedy algorithms. Simulation results shall demonstrate that the overall service latency of a RAN can be substantially reduced by judiciously designing joint scheduling of caching and on-demand transmission.
|
package com.shu.designpattern.proxy;
import java.lang.reflect.Proxy;
public class Main {
public static void main(String[] args){
XiaoMing xiaoMing=new XiaoMing();
MyJdkProxyInvocationHandler myJdkProxyInvocationHandler=new MyJdkProxyInvocationHandler(xiaoMing);
House o = (House)Proxy.newProxyInstance(xiaoMing.getClass().getClassLoader(), xiaoMing.getClass().getInterfaces(), myJdkProxyInvocationHandler);
o.buyHouse();
// Enhancer enhancer=new Enhancer();
// enhancer.setSuperclass(XiaoMing.class);
// enhancer.setCallback(new CglibProxy());
// House o = (House)enhancer.create();
// o.buyHouse();
}
}
|
// src/main/java/com/studyvm/pcomj/parser/SkipManyParser.java
package com.studyvm.pcomj.parser;
import com.studyvm.pcomj.base.AbstractParserCombinator;
import com.studyvm.pcomj.base.ParseResult;
import com.studyvm.pcomj.base.Parser;
import com.studyvm.pcomj.base.ParserInput;
import java.util.Optional;
import static com.studyvm.pcomj.utils.CommonUtil.succeed;
public class SkipManyParser<T> extends AbstractParserCombinator<Void> {
private final Parser<T> skipper;
public SkipManyParser(Parser<T> skipper) {
this.skipper = skipper;
}
@Override
public Optional<ParseResult<Void>> parse(ParserInput s) {
Optional<ParseResult<T>> r1 = skipper.parse(s);
if (!r1.isPresent()) {
return Optional.empty();
}
do {
s = s.rest();
Optional<ParseResult<T>> r2 = skipper.parse(s);
if (!r2.isPresent()) {
return succeed(s);
}
} while (true);
}
}
|
Campylobacter troglodytis sp. nov., Isolated from Feces of Human-Habituated Wild Chimpanzees (Pan troglodytes schweinfurthii) in Tanzania ABSTRACT The transmission of simian immunodeficiency and Ebola viruses to humans in recent years has heightened awareness of the public health significance of zoonotic diseases of primate origin, particularly from chimpanzees. In this study, we analyzed 71 fecal samples collected from 2 different wild chimpanzee (Pan troglodytes) populations with different histories in relation to their proximity to humans. Campylobacter spp. were detected by culture in 19/56 (34%) group 1 (human habituated for research and tourism purposes at Mahale Mountains National Park) and 0/15 (0%) group 2 (not human habituated but propagated from an introduced population released from captivity over 30 years ago at Rubondo Island National Park) chimpanzees, respectively. Using 16S rRNA gene sequencing, all isolates were virtually identical (at most a single base difference), and the chimpanzee isolates were most closely related to Campylobacter helveticus and Campylobacter upsaliensis (94.7% and 95.9% similarity, respectively). Whole-cell protein profiling, amplified fragment length polymorphism analysis of genomic DNA, hsp60 sequence analysis, and determination of the mol% G+C content revealed two subgroups among the chimpanzee isolates. DNA-DNA hybridization experiments confirmed that both subgroups represented distinct genomic species. In the absence of differential biochemical characteristics and morphology and identical 16S rRNA gene sequences, we propose to classify all isolates into a single novel nomenspecies, Campylobacter troglodytis, with strain MIT 05-9149 as the type strain; strain MIT 05-9157 is suggested as the reference strain for the second C. troglodytis genomovar. Further studies are required to determine whether the organism is pathogenic to chimpanzees and whether this novel Campylobacter colonizes humans and causes enteric disease. 
Humans are coming into closer proximity with wild primates for a variety of reasons, including habitat fragmentation and loss from deforestation, forest encroachment, competition for food and natural resources, bushmeat hunting, and expanding research and ecotourism activities. Evidence that humans and great apes are exchanging microorganisms due to socioecological practices and ecological overlap is accumulating at an alarming rate. Unknowingly, they may become links in each others' host-pathogen cycles. Infectious disease transmission from humans to chimpanzees (Pan troglodytes) and gorillas (Gorilla gorilla), in particular, is becoming more of a concern, with the Red List from the World Conservation Union (IUCN) classifying them as endangered and critically endangered species, respectively, and with pathogenic organisms undoubtedly expected to contribute to declines in wild ape populations and possibly even to contribute to species decimation. Surveillance and reporting of known, uncommon, and new infectious agents in wild primate populations are increasingly important. In 2001, campylobacteriosis, salmonellosis, and shigellosis in free-ranging human-habituated mountain gorillas in Uganda were reported. In 2007, Escherichia coli strains isolated from habituated chimpanzees were genetically more similar to isolates obtained from humans employed in chimpanzee research and tourism than to E. coli isolates obtained from humans in a local village with no regular interactions with these chimpanzees. In our study, we cultured feces for Campylobacter species in 2 groups of wild chimpanzees residing in different National Parks in Tanzania. One group has lived in close proximity to humans studying their behavior and ecology for over 40 years, and in more recent years, to humans involved in ecotourism activities. The second group is not habituated to humans and does not tolerate contact with humans for any length of time. 
It is comprised of chimpanzees once held in captivity and introduced into the wild in the late 1960s and/or their offspring. In this report, we characterize by phenotypic, genotypic, and phylogenetic analyses a novel species of Campylobacter, Campylobacter troglodytis, which was isolated from the feces of human-habituated chimpanzees.

MATERIALS AND METHODS

Animals. Two different groups of wild chimpanzees (Pan troglodytes) from Tanzania National Parks were studied. Group 1 consisted of individually identified chimpanzees (Pan troglodytes schweinfurthii) that reside in the Mahale Mountains National Park in western Tanzania (latitude 6°S, longitude 30°E). They belong to the M group, a group of chimpanzees that have been reported to have loose stools with fluid consistency over the past 14 years and intermittent respiratory illnesses. The M group is habituated to human presence and tolerates observation from close proximity for extended periods. They are regularly observed by local trackers and guides, as well as tourists and researchers from around the world. The M group was once comprised of 101 individuals and is now estimated to contain only 63 individuals. At the time of the study, there were no clinical signs of disease in group 1. Chimpanzees in group 2 live in Rubondo Island National Park, which is surrounded by Lake Victoria (latitude 2°S, longitude 31°E). They originated from a group of 17 chimpanzees that were released from captivity between 1966 and 1969. All 17 introduced chimpanzees were born in the wild, but prior to their release, they were reportedly housed for between 3.5 months and 9 years in various European zoos. There has been little to no human contact with these animals since the time of their release, and they do not tolerate observation or close proximity to humans for any period of time. Most of the group 2 chimpanzees cannot be individually recognized.
Group 2 chimpanzees have been reported to have intermittent loose and watery stools during the past several years. The studies were approved by the Institutional Animal Care and Use Committee. Fecal samples. All feces were collected noninvasively as part of a long-term chimpanzee health-monitoring program. Only fresh, uncontaminated feces were collected from the forest floor. Fecal consistency was recorded as firm, loose, or watery, and fecal occult blood testing was performed on each sample using Hemoccult tests (Beckman Coulter, Fullerton, CA). A total of 71 fecal samples were obtained. From group 1, during June 2005, 56 samples were collected from 29 individuals: 13 males (7 adults, 4 adolescents, and 2 juveniles) and 16 females (8 adults, 3 adolescents, and 5 juveniles); their feces were collected immediately after the chimps were observed defecating to preclude contamination. From group 2, 15 fecal samples were collected on 6 different days between August 2004 and February 2005. Only fresh feces (none older than 12 h) were collected where chimpanzee origin was confirmed by directly observing defecation, finding feces directly under chimpanzee night nests, or immediate chimpanzee tracking after chimpanzee vocalization and subsequent collection without direct observation of defecation. Of the 15 samples from group 2, 8 were collected from different individuals who were directly observed defecating at one tracking location during a single sighting of 11 chimpanzees. In an adjacent but different tracking location, 3 other samples were collected on a single day from under 3 different chimpanzee night nests. Two samples were collected from 2 distant tracking locations and were more than likely from 2 different individual chimpanzees. In one case, chimps were heard vocalizing and were tracked, and a sample was obtained without directly observing the individual defecating or finding the specimen directly under a night nest. 
Since this population is not habituated, genders and age groups for the samples are not known, and although highly unlikely, it is possible that the same individual was sampled more than once. Using a clean wooden applicator stick, a small amount of feces, approximately the size of a small grape, from each sample was placed in a 1-dram vial prefilled with brucella broth with 20% glycerol. The fecal sample was totally submerged, and the vial caps were tightened completely. The vials were stored frozen at approximately −20°C in a solar-powered freezer. The freezer was closed tightly and padlocked. The freezer temperature was monitored and recorded using freezer minimum-maximum thermometers. Frozen samples were transported on dry ice to the United States for analysis at the Division of Comparative Medicine, Massachusetts Institute of Technology. Bacterial isolation and biochemical characterization. Feces were homogenized in 1 ml of phosphate-buffered saline (PBS), and aliquots were placed on CVA (cefoperazone, vancomycin, and amphotericin B) plates or TVP (trimethoprim, vancomycin, and polymyxin) plates and filtered through a 0.45-µm filter onto Trypticase soy agar plates with 5% sheep blood. Selective-medium plates were also used and were prepared as follows: blood agar base (Oxoid; Remel), 5% horse blood (Quad Five, Ryegate, MT), 50 µg amphotericin B/ml, 100 µg vancomycin/ml, 3.3 µg polymyxin B/ml, 200 µg bacitracin/ml, and 10.7 µg nalidixic acid/ml. After incubation under microaerobic conditions (the culture vessels were evacuated to 25 in. of mercury and filled with 80:10:10 N2-CO2-H2) at 37°C, suspect colonies were identified as presumptive campylobacter based on colony morphology, biochemical reactions, phase microscopy, and Gram staining. Biochemical characterization of urease, catalase, and oxidase production, as well as sensitivity to nalidixic acid and cephalothin, were conducted as previously described by our laboratory.
For other tests, the inoculum size was adjusted to 10^6 CFU/ml, and bacteria were grown on a basal medium of brucella agar supplemented with 5% horse blood according to the method of On and Holmes. Tests for growth in the presence of 1% bile, 1% glycine, 0.1% selenite, 0.04% triphenyltetrazolium chloride (TTC), and salt were conducted as described by On and Holmes. The method of Hwang and Ederer was used for hippurate hydrolysis. Nitrate reduction was conducted according to the method of Cook. Discs were used for indoxyl acetate hydrolysis and also for alkaline phosphatase production (Rosco Diagnostica, Denmark). All cultures were incubated for 3 days in a microaerobic environment. Control cultures were Campylobacter jejuni 81-176 (bile, salt, hippurate, selenite, TTC, nitrate, glycine, and growth at 42°C), Helicobacter canis type strain (alkaline phosphatase), Helicobacter cinaedi type strain (alkaline phosphatase), Helicobacter pylori SS1 (bile, salt, hippurate, selenite, TTC, nitrate, glycine, and growth at 42°C), and Campylobacter coli (hippurate hydrolysis). Data for the reference species were taken from On et al. Genomic-DNA extraction for rRNA gene sequencing. For PCR of genomic DNA, isolates were grown on blood agar plates, harvested, and washed once with PBS, and a High Pure PCR template preparation kit (Roche Molecular Biochemicals) was used for DNA extraction according to the manufacturer's specifications. Genus-specific PCR. Campylobacter genus-specific primers that amplified a 280-base product on the 16S rRNA gene were used as previously described. 16S rRNA sequence analysis. Amplification of the 16S rRNA cistrons, 16S rRNA gene sequencing, and analysis of the 16S rRNA data were performed as described elsewhere. For alignment, the 16S rRNA gene sequences were entered into RNA, a program designed and maintained at Forsyth Institute for analysis of 16S rRNA.
The database contains over 600 sequences for Helicobacter, Wolinella, Arcobacter, and Campylobacter strains and 2,000 sequences for other bacteria. Whole-cell protein profiling. Strains were grown on Mueller-Hinton agar supplemented with 5% sterile horse blood and incubated at 37°C for 48 h under microaerobic conditions. Protein extraction and SDS-PAGE were performed as described by Pot et al.. The similarity of the obtained normalized SDS-PAGE patterns was determined by the Pearson correlation coefficient, and clustering was performed by the unweighted pair group method with arithmetic mean (UPGMA), using BioNumerics software version 5.0 (Applied Maths). AFLP analysis. Amplified fragment length polymorphism (AFLP) analysis using the restriction enzyme combination HindIII/HhaI was performed as described previously. The amplified and fluorescently labeled fragments were loaded on a denaturing polyacrylamide gel on an ABI Prism 377 automated sequencer. GeneScan version 3.1 software (Applied Biosystems) was used for data collection, and the generated profiles were imported, using the CrvConv filter, into BioNumerics version 4.61 (Applied Maths, Belgium) for normalization and further analysis. After normalization, the obtained AFLP profiles were imported into an in-house AFLP reference database containing profiles from type and reference strains of all established Campylobacter species. The similarity between profiles was determined by the Pearson correlation coefficient, and cluster analysis was performed by UPGMA. hsp60 sequence analysis. hsp60 sequences were generated as described previously. For tree construction, sequences were aligned using the ClustalX software package, and clustering was performed by the neighbor-joining method using BioNumerics v. 5.1. Unknown bases were discarded for the analysis. Bootstrap values were determined using 500 replicates. DNA-DNA hybridization experiments. 
DNA-DNA hybridizations were performed between strains MIT 05-9149 T and MIT 05-9157. DNA was extracted from 0.25 to 0.5 g (wet weight) cells as described by Pitcher et al. DNA-DNA hybridizations were performed with photobiotin-labeled probes in microplate wells using an HTS7000 Bio Assay Reader (Perkin Elmer) for the fluorescence measurements. The hybridization temperature was 30°C. Determination of mol% GC content. For the determination of the mol% GC content, DNA was enzymatically degraded into nucleosides as described by Mesbah and Whitman. The nucleoside mixture was separated by high-performance liquid chromatography (HPLC) using a Waters SymmetryShield C8 column.

RESULTS

Prevalence of Campylobacter spp. in group 1 and group 2. All fecal samples were firm in consistency, except for 6 in group 1, which were loose. Of these 6, two were positive for the novel campylobacter (Table 1). Thirty-one of the 56 samples in group 1 were positive for fecal occult blood, 4 of which tested positive for the novel campylobacter (Table 1). The age groups and genders of chimpanzees positive for the novel campylobacter are provided in Table 1. All group 2 fecal samples were firm in consistency; 4 of the 15 samples were positive for occult blood, 10 were negative, and 1 sample was not tested. Of the 56 samples collected from chimpanzees at the Mahale Mountains National Park (group 1), 19 and 49 were Campylobacter positive by culture and PCR analyses, respectively. Although 8 samples from chimpanzees at Rubondo Island (group 2) tested positive for Campylobacter by PCR, all samples were negative for Campylobacter spp. by culture. Biochemical characterization. All isolates were positive for catalase, oxidase, alkaline phosphatase, growth at 37 and 42°C, growth on 1% glycine, and sensitivity to nalidixic acid (Table 2). All isolates were negative for urease, growth at 25°C, and growth on 3% NaCl.
Most isolates were positive for growth on triphenyltetrazolium chloride (9/11), and 9/11 isolates were also negative for selenite reduction and growth on 2% NaCl and on 2% bile. Eight of 10 were negative for indoxyl acetate, and 7/10 were negative for nitrate reduction. Only 1 isolate was sensitive to cephalothin. Taking into account variable reactions, similar results for biochemical tests were shared with C. jejuni, Campylobacter hyointestinalis (both subspecies), Campylobacter lari, Campylobacter rectus, and Campylobacter sputorum. It is notable that 3 isolates of C. troglodytis were positive for hippurate hydrolysis. 16S rRNA sequence analysis. Using DNA extracted from the culture, the 16S rRNA gene was amplified and sequenced for 6 out of the 17 samples that tested positive by both culture and PCR. Analyses showed novel gene sequences in all 6, with all strains being essentially identical; 2 isolates differed only by a single base. Phylogenetic relationships based on 16S rRNA sequence similarity values are shown in Fig. 1. By 16S rRNA analysis, the novel campylobacter was most closely related to the named species Campylobacter helveticus and Campylobacter upsaliensis (94.7% and 95.9% similarity). It is also related to two unclassified Campylobacter isolates from hamsters and cotton-topped tamarins (95.5% and 95.2% similarity; unpublished observations), forming a distinct subcluster in the campylobacter phylogenetic tree, as shown in Fig. 1. Whole-cell protein and AFLP fingerprinting. The six strains included in the biochemical analyses and an additional two strains were chosen for whole-cell protein and AFLP fingerprinting. Data for Campylobacter reference strains were available from previous studies. Unexpectedly, the protein profiles of the eight strains revealed the presence of two subgroups.
The first subgroup comprised the strains MIT 05-9149 T, MIT 05-9159, MIT 05-9166, and MIT 05-9175; the second subgroup comprised strains MIT 05-9150, MIT 05-9156, MIT 05-9157, and MIT 05-9164. The protein profiles of both subgroups were clearly different from each other and from those of other Campylobacter species (Fig. 2). For two strains (MIT 05-9166 and MIT 05-9175), repeated analyses failed to generate good-quality AFLP profiles; the remaining isolates again formed the same two subgroups (Fig. 3). The two subgroups had very different AFLP profiles that also allowed us to distinguish them from other Campylobacter species. DNA-DNA hybridization experiments and determination of the mol% GC content. Strains MIT 05-9149 T (subgroup 1) and MIT 05-9157 (subgroup 2) exhibited a hybridization level of 30%; their DNA base ratios were 34 and 38 mol%, respectively. Electron microscopy. By electron microscopy, the organisms from both subgroups were curved, measured on average 2.5 to 3.0 µm by 0.25 to 0.3 µm, and had a single, nonsheathed polar flagellum, although one flagellum at each end of the organism was sometimes seen (Fig. 5).

DISCUSSION

Members of the genus Campylobacter, currently comprising some 20 species, are Gram-negative asaccharolytic bacteria with microaerobic growth requirements and have a low GC content. They are considered either to be biochemically inert or to have indistinct biochemical characteristics. They colonize mucosal surfaces (the gastrointestinal tract, oral cavity, or urogenital tract) of healthy and diseased humans, livestock, domestic and wild animals, and birds, particularly poultry. Most of these species have been associated with disease in humans, with occurrence worldwide. Food-borne and waterborne transmission from fecal contamination are the most frequently reported modes of human acquired infection. Campylobacter spp. have been reported to be pathogenic in various captive, domestic, and wild primate species (2,52).
Studies to date suggest that C. jejuni occurs frequently in nonhuman primates, particularly in juveniles, and is associated with diarrhea. Morton et al. suggest that C. jejuni is not a natural pathogen of wild macaques in Indonesia but infects them postcapture. Campylobacters have been reported in feces of both tourist-habituated and non-touristhabituated mountain gorillas (Gorilla beringei beringei) in Uganda. In this study, we isolated and identified a Gram-negative, nonsporulating bacterium with microaerobic growth requirements from chimpanzees living in the wild but with frequent contact with humans. Among the strains examined, two subgroups could be distinguished. Strains belonging to these subgroups had clearly different whole-cell protein and AFLP profiles, hsp60 sequences, and DNA base compositions; however, by 16S rRNA analysis, morphology, and biochemical criteria, they were indistinguishable. A DNA-DNA hybridization value of 30% in a representative strain of each subgroup demonstrated that they represent distinct genomic species. The divergence in 16S rRNA gene sequences toward C. upsaliensis and C. helveticus and the unique whole-cell protein and AFLP profiles convincingly demonstrate that these bacteria do not belong to one of the established Campylobacter species. Therefore, we believe that it is appropriate to classify both genomic species into a single nomenspecies, for which we propose the name Campylobacter troglodytis below. Our finding of C. troglodytis is the first report of this possible bacterial pathogen in the feces of wild chimpanzees. We found C. troglodytis in the feces of all age groups (infant, juvenile, adolescent, and adult) of M-group chimpanzees, in loose and firm stools, and in stools that tested positive and negative for fecal occult blood (Table 1). Other chimpanzees in the M group have been observed to have loose stools with positive and negative fecal occult blood tests. 
Factors including diet and intestinal pathogens may account for loose stools, and colitis from infectious or noninfectious diseases may account for the presence of blood in the feces. For example, in the M group residing at Mahale Mountains, various parasites, including Bertiella, Oesophagostomum, Prosthenorchis, Strongyloides, and Trichuris species, have been reported. In addition, rotavirus has been detected in the feces of M-group chimpanzees. The potential pathogenicity of C. troglodytis in wild chimpanzees should be investigated, and additional studies should be conducted to determine if other potential bacterial, viral, and helminth and protozoan pathogens may be present in this population. One of the nearest neighbors of C. troglodytis phylogenetically is C. upsaliensis, a catalase-negative or weakly positive campylobacter that was first described when it was isolated from dogs in 1983 and was then reported in cats in 1989. C. troglodytis differs from C. upsaliensis in that C. troglodytis is negative for nitrate reductase and indoxyl acetate hydrolysis. C. upsaliensis has been reported to be a potential human pathogen, with reports of gastroenteritis and bacteremia in healthy hosts and opportunistic infections in immunocompromised individuals. Diarrheic disease in children in socially disadvantaged groups and day care centers has also been reported. C. helveticus, also isolated from domestic cats and dogs, has not been reported to cause disease in humans. Recently, C. avium has been identified in birds, Campylobacter peloridis and C. cuniculorum in humans and molluscs, and Campylobacter insulaenigrae in mammals. C. troglodytis may or may not be of human origin, given that the feces of humans residing in this locale have not been specifically cultured for the organism. Another distinct possibility is that the bacterium colonizes the intestinal tracts of other wild animals, including rodents and other species of nonhuman primates. 
More studies are required to determine its host distribution and pathogenicity. Taxonomy. C. troglodytis sp. nov. (tro.glo.dy.tis) N.L. gen. n. troglodytis of a chimpanzee (Pan troglodytes), from which the bacterium was isolated. Cells are slender and slightly curved (0.2 by 2 to 3 μm). The bacterium is Gram negative and nonsporulating, being motile with a single nonsheathed flagellum at one end. Organisms grow on solid agar and appear as small pinpoint colonies. The organism grows at 37°C and 42°C, but not at 25°C. It is catalase and oxidase positive but hippurate, urease, and indoxyl acetate hydrolysis negative. It is gamma-glutamyl transpeptidase negative and alkaline phosphatase hydrolysis negative. It
|
Comparisons of Achievement Between High School Monolinguals and Bilinguals. The purpose of this study was to determine if bilingual high school students from homes where French and English are spoken achieved at a significantly different level than did high school monolinguals. The subjects, 401 bilinguals and 550 monolinguals, were identified through the Hoffman Bilingual Schedule administered to tenth and eleventh graders from ten schools in Vermilion Parish, Louisiana. The t test comparisons of the English, reading, and spelling test scores on the Stanford Achievement Test, Basic Battery, Form X, made by bilinguals and monolinguals were computed. Subjects were grouped for comparisons by total population, IQ, sex, race, and school. Comparisons for statistically significant differences (tested at .05) indicated that: 1. Monolinguals achieved at significantly higher levels than bilinguals on the English, reading, and spelling tests. 2. No significant differences were found on the three tests when monolinguals and bilinguals with high IQs were compared. 3. A significant difference existed for the English test in favor of monolinguals when subjects of average IQ were grouped. 4. No significant differences were found when subjects of low IQ were grouped. 5. Female monolinguals achieved significantly higher than female bilinguals on the English test. 6. Male monolinguals achieved higher than male bilinguals on the
|
Hexamethylene bisacetamide activates the human immunodeficiency virus type 1 provirus by an NF-kappa B-independent mechanism. Expression of the human immunodeficiency virus type 1 (HIV-1) provirus in T lymphocytic and monocytic cells can be induced by treatment with hexamethylene bisacetamide (HMBA). The induction occurs at the transcriptional level within 1 to 3 h after the addition of the drug, and is not associated with detectable changes in the binding of transcription factors to the enhancer, TATA box or other regulatory regions of the HIV-1 long terminal repeat (LTR). Using the 5' deletion mutants of HIV-1 LTR controlling the expression of the chloramphenicol acetyltransferase gene, we found that the deletion of the kappa B enhancer did not affect HIV-1 inducibility, whereas the deletion of the Sp1 binding sites abolished transcriptional activation. However, the presence of the HIV-1 LTR Sp1 binding sites in the context of the heterologous promoter did not induce responsiveness to HMBA. We conclude that HMBA increases transcription through the secondary modification of the basal transcription complex suggesting the existence of a regulatory pathway that circumvents the requirement for the induction of NF-kappa B or other DNA-specific binding proteins.
|
1) People who block the pavement because they’re preoccupied with their smartphones I’m talking about those who wander around the streets at a snail’s pace whilst staring down at their fucking smartphones and then are shocked when they see me trying to pass them roadside. It can be really fucking dangerous, especially if there is oncoming traffic. Unfortunately it’s more and more common these days. I’m thinking of chaining a cowbell around my neck just so they know that I’m there.
High Risk
2) Angry Motorists I am not fond of maniac drivers who want the footpath and your blood. It’s especially common when I’m on country roads. I hate it when they try to run me off the road when I’m obeying the highway code and wearing reflective shit. There’s no fucking need for it!
High Risk
3) Overly kind motorists As much as I hate angry motorists, I think I hate overly kind motorists even fucking more. I’m talking about the drivers who stop for me at every road. I feel bad as I appreciate the sentiment but when I’m out running it is really fucking dangerous to assume that someone is going to stop if I go in front of their car. Let me assume that you’re a fucking maniac out to mow down joggers and we can maybe meet later for a drink and relax, OK?
High Risk
4) Those on Mobility Scooters Whilst there is negligible danger posed by those on mobility scooters, the risk of being shamed is high when you’re a slow jogger like me. I’ve been passed countless times by the old, the fat and the lazy alike. I pretend not to be annoyed but it’s humiliating as they always take so fucking long to pass me. It’s as if they’re showing off.
Low Risk
5) Other runners As runners, we’re all in it together and we should try to foster a kind, sporting spirit within one another. For instance if I see someone faster than me I look on in admiration. If I see someone who is struggling I will give a nod of encouragement. If I see someone going the same pace I’ll try to take that motherfucker in any way I can. I don’t just reserve my racing for events or those in mobility scooters. If I see someone passing me and I think I can take them, then I’ll do it (I’ll usually fail but hey, God loves a sweater).
Low Risk
6) Gangs The risk is high with these bastards as you’re always sure that one of them has a switchblade. When I was new to running, if I received any verbal abuse from them I’d have responded with much profanity. Nowadays I know running isn’t worth getting stabbed over. If I wanted to be stabbed nightly, I’d marry a girl from Lurgan.
High Risk
7) Vulnerable looking women When I’m out running in the morning I see a few people making their way towards the train alone. I immediately feel self-conscious as nothing says “There’s a rapist on the loose” better than hearing my panting at dawn directly behind you. I try to breathe easier and make less noise but I seem even more suspicious. One day I’ll end up getting pepper sprayed or kicked in the balls for just trying to make my way home.
Moderate Risk
8) Cyclists who refuse to use the road I don’t mind the apologetic cyclists who use the pavement when they aren’t confident enough for the roads. I’ve been there and done that as a kid. Who I hate most are the cunts who try to run you over when they should be on the road. This is not Amsterdam you fucking hippy. Get on the fucking road. And stop ringing your horn so much otherwise I’ll ram it up your hole.
High Risk
9) Strangers who try stopping me for a chat I don’t mind stopping to help with directions but I’ve been stopped by the old and the drunk alike for general chat. Small talk is not my forte and besides I’m not out to shoot the breeze. I’ve been asked “Are you OK?” on 4 separate occasions over the last 18 months by worried old women. There’s something about my general demeanour that sets off pity in the hearts of old dears. But I’m not out to be pitied. I just want to be left alone to run!
Low Risk (but fucking annoying)
|
package org.maltparser.parser.transition;
import org.maltparser.core.exception.MaltChainedException;
import org.maltparser.core.helper.HashMap;
import org.maltparser.core.symbol.Table;
import org.maltparser.core.symbol.TableHandler;
/**
*
* @author <NAME>
**/
public class TransitionTableHandler implements TableHandler {
private final HashMap<String, TransitionTable> transitionTables;
public TransitionTableHandler() {
transitionTables = new HashMap<String, TransitionTable>();
}
public Table addSymbolTable(String tableName) throws MaltChainedException {
TransitionTable table = transitionTables.get(tableName);
if (table == null) {
table = new TransitionTable(tableName);
transitionTables.put(tableName, table);
}
return table;
}
public Table getSymbolTable(String tableName) throws MaltChainedException {
return transitionTables.get(tableName);
}
}
|
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
import { IExecOptions } from 'azure-pipelines-task-lib/toolrunner'
import { singleton } from 'tsyringe'
import * as taskLib from 'azure-pipelines-task-lib/task'
/**
* A wrapper around the Azure Pipelines Task Lib, to facilitate testability.
*/
@singleton()
export default class TaskLibWrapper {
/**
* Logs a debug message.
* @param message The message to log.
*/
public debug (message: string): void {
taskLib.debug(message)
}
/**
* Logs an error message.
* @param message The message to log.
*/
public error (message: string): void {
taskLib.error(message)
}
/**
* Asynchronously executes an external tool.
* @param tool The tool executable to run.
* @param args The arguments to pass to the tool.
* @param options The execution options.
* @returns A promise containing the result of the execution.
*/
public exec (tool: string, args: string | string[], options?: IExecOptions): Promise<number> {
return taskLib.exec(tool, args, options)
}
/**
* Gets the value of an input. If the input is `required` but nonexistent, this method will throw.
* @param name The name of the input.
* @param required A value indicating whether the input is required.
* @returns The value of the input or `undefined` if the input was not set.
*/
public getInput (name: string, required: boolean | undefined): string | undefined {
return taskLib.getInput(name, required)
}
/**
* Gets the localized string from the JSON resource file and optionally formats using the additional parameters.
* @param key The key of the resources string in the resource file.
* @param param Optional additional parameters for formatting the string.
* @returns The localized and formatted string.
*/
public loc (key: string, ...param: any[]): string {
return taskLib.loc(key, ...param)
}
/**
* Logs a warning message.
* @param message The message to log.
*/
public warning (message: string): void {
taskLib.warning(message)
}
}
|
// Positive Assert: it checks for the condition bCond
// and exits the program if it is not TRUE
VOID _Asrt(BOOL bCond,
LPCTSTR cstrMsg,
...)
{
if (!bCond)
{
DWORD dwErr = GetLastError();
va_list arglist;
va_start(arglist, cstrMsg);
_vftprintf(stderr, cstrMsg, arglist);
if (dwErr == ERROR_SUCCESS)
dwErr = ERROR_GEN_FAILURE;
exit(dwErr);
}
}
|
Neuropeptide Y Protects against Methamphetamine-Induced Neuronal Apoptosis in the Mouse Striatum Methamphetamine (METH) is an illicit drug that causes neuronal apoptosis in the mouse striatum, in a manner similar to the neuronal loss observed in neurodegenerative diseases. In the present study, injections of METH to mice were found to cause the death of enkephalin-positive projection neurons but not the death of neuropeptide Y (NPY)/nitric oxide synthase-positive striatal interneurons. In addition, these METH injections were associated with increased expression of neuropeptide Y mRNA and changes in the expression of the NPY receptors Y1 and Y2. Administration of NPY in the cerebral ventricles blocked METH-induced apoptosis, an effect that was mediated mainly by stimulation of NPY Y2 receptors and, to a lesser extent, of NPY Y1 receptors. Finally, we also found that neuropeptide Y knock-out mice were more sensitive than wild-type mice to METH-induced neuronal apoptosis of both enkephalin- and nitric oxide synthase-containing neurons, suggesting that NPY plays a general neuroprotective role within the striatum. Together, our results demonstrate that neuropeptide Y belongs to the class of factors that maintain neuronal integrity during cellular stresses. Given the similarity between the cell death patterns induced by METH and by disorders such as Huntington's disease, our results suggest that NPY analogs might be useful therapeutic agents against some neurodegenerative processes.
|
import megengine as mge
import megengine.functional as F
from megengine.core import Tensor
def softmax_loss(pred, label, ignore_label=-1):
max_pred = F.zero_grad(pred.max(axis=1, keepdims=True))
pred -= max_pred
log_prob = pred - F.log(F.exp(pred).sum(axis=1, keepdims=True))
mask = 1 - F.equal(label, ignore_label)
vlabel = label * mask
loss = -(F.indexing_one_hot(log_prob, vlabel, 1) * mask)
return loss
def smooth_l1_loss(pred, target, beta: float):
abs_x = F.abs(pred - target)
in_mask = abs_x < beta
out_mask = 1 - in_mask
in_loss = 0.5 * abs_x ** 2 / beta
out_loss = abs_x - 0.5 * beta
loss = in_loss * in_mask + out_loss * out_mask
return loss.sum(axis=1)
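As a sanity check on the piecewise definition above, here is a minimal pure-Python scalar version of the same smooth-L1 rule (an illustrative transcription, not part of the megengine code): quadratic inside |x| < beta, linear outside, continuous at the boundary.

```python
def smooth_l1_scalar(x, beta):
    # Same element-wise rule as the tensor version above:
    # 0.5 * x^2 / beta inside the |x| < beta window, |x| - 0.5 * beta outside.
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta
    return ax - 0.5 * beta

# The two branches meet at |x| == beta, where both evaluate to 0.5 * beta,
# so the loss is continuous there.
```

The tensor version computes both branches for every element and blends them with 0/1 masks, which is the usual trick for avoiding data-dependent control flow on accelerators.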
|
OBJECTIVE The study aimed at comparing burnout in staff members at residential drug and alcohol detoxification wards with and without team supervision. METHOD Four times in a period of 18 months, all staff members (n = 44) were assessed for burnout using a German version (Checkliste Burnoutmerkmale) of the Maslach Burnout Inventory (MBI, Maslach and Jackson 1986) to assess severity, and the CBE (Checkliste Burnoutentstehungsmerkmale) for associated burnout risk factors. RESULTS There were no statistically significant differences between the mean scores of the 3 different wards, due to extreme SDs. The interpersonal differences among staff on the 4 occasions were remarkable. On repeated measurement, the intraindividual changes were high. Higher scores were correlated with high workload (seen as frequent admissions). CONCLUSION Work-related variables (admissions) turned out to be of more importance than supervision in times of chronic staff shortage.
|
Q:
Fudge into Australia
Can I take fudge that has been produced in Guernsey (Channel Islands) into Australia together with my personal luggage?
A:
When you arrive you MUST declare all the food you are carrying. The Australian officials will then either confiscate it or let it in. (I think you can also arrange to ship it home or leave it at the airport for you to take home when you go.) You will only get into trouble if you try to sneak something in without declaring it. They have sniffer dogs and other ways of knowing what you're carrying. So if the fudge is bought and you're on your way, just tell them about it on arrival and get a decision made there.
Now, if you're trying to decide whether to buy it, or if you have time to eat it rather than see it confiscated, you need to check out the Australian government web page on what you can bring in. It lists things you must declare, but that may be returned to you (that is, not confiscated). Your fudge absolutely must be declared. It seems, though, that it explicitly will be allowed in, according to another government page:
Confectionery (excluding Indian milk-based desserts and sweets) is allowed into Australia. Confectionery includes chocolate, fudge, toffees, boiled sweets, peppermints, marshmallows and liquorice etc. It does not include liquid dairy desserts, spreads or drinks, which are covered under the Dairy items heading.
[emphasis mine]. But remember, being allowed doesn't mean don't declare it. It means declare it, let them look at it, and thank them when they allow you to keep it.
|
# Comment PC9800 interrupts
# @category: PC9800.Python
pc9800ints = {
0x18: { # Keyboard, CRT BIOS, buzzer
0x00: "Keyboard: Read key data",
0x01: "Keyboard: Get key buffer status",
0x02: "Keyboard: Shift key-Check status",
0x03: "Keyboard: Initialize keyboard interface",
0x04: "Keyboard: Key input status check",
0x05: "Keyboard: Read key code from key buffer",
0x06: "Keyboard: Buffer initialization",
0x07: "Keyboard: Shift key status and key data read",
0x08: "Keyboard: Check shift key status and key data",
0x09: "Keyboard: Create key data",
0x0a: "set text video mode",
0x0b: "get text video mode",
0x0c: "start text screen display",
0x0d: "end text screen display",
0x0e: "set text screen single display area",
0x0f: "set text screen multiple display area",
0x10: "set cursor type",
0x11: "display cursor",
0x12: "terminate cursor",
0x13: "set cursor position",
0x14: "read font pattern 16 dot",
0x16: "initialize text video RAM",
0x1A: "define user character",
0x1b: "set KCG access mode",
0x1c: "init CRT",
0x1d: "set display width",
0x1e: "set cursor type",
0x1f: "read font pattern 24 dot",
0x20: "define user character 24 dot",
0x21: "read memory switch",
0x22: "write memory switch",
0x19: "init light pen",  # NOTE: duplicate key; the 0x19 entry below silently overwrites this one
0x15: "get light pen position",  # NOTE: duplicate key; the 0x15 entry below silently overwrites this one
0x19: "start buzzer",
0x15: "stop buzzer",
0x23: "set buzzer frequency",
0x24: "set buzzer time",
0x40: "start graphic screen",
0x41: "stop graphic screen",
0x42: "set graphic screen mode",
0x43: "set graphic screen palette register (8 color palette)",
0x44: "set graphic screen border color",
0x45: "write bit sequence to VRAM",
0x46: "read bit sequence from VRAM",
0x47: "draw line or rectangle",
0x48: "draw circle",
0x49: "draw graphic character",
0x4a: "set graphic screen fast write mode",
},
0x1b: { # Floppy disk BIOS
0x01: "FDD Verify",
0x02: "FDD Read diagnosis",
0x03: "FDD Initialization",
0x04: "FDD Sense",
0x05: "FDD Data write",
0x06: "FDD Data read",
0x07: "FDD Seek to cylinder 0",
0x09: "FDD Write deleted data",
0x0A: "FDD Read ID",
0x0C: "FDD Read deleted data",
0x0D: "FDD Track format",
0x0E: "FDD Set Operation mode",
0x10: "FDD Seek",
},
0x1c: { # Timer BIOS
0x02: "Timer: set interval",
0x03: "Timer: cancel",
0x04: "Timer: set timer (one-shot)",
0x05: "Timer: set timer (repeat)",
0x06: "Timer: beep function",
},
0x21: { # DOS
0x00: "DOS 1+ - TERMINATE PROGRAM",
0x0d: "DOS 1+ - DISK RESET",
0x0f: "DOS 1+ - OPEN FILE USING FCB",
0x09: "DOS 1+ - WRITE STRING TO STANDARD OUTPUT",
0x10: "DOS 1+ - CLOSE FILE USING FCB",
0x14: "DOS 1+ - SEQUENTIAL READ FROM FCB FILE",
0x1a: "DOS 1+ - SET DISK TRANSFER AREA ADDRESS",
0x25: "DOS 1+ - SET INTERRUPT VECTOR",
0x30: "DOS 2+ - GET DOS VERSION",
0x35: "DOS 2+ - GET INTERRUPT VECTOR",
0x3c: "DOS 2+ - CREAT - CREATE OR TRUNCATE FILE",
0x3d: "DOS 2+ - OPEN - OPEN EXISTING FILE",
0x3e: "DOS 2+ - CLOSE - CLOSE FILE",
0x3f: "DOS 2+ - READ - READ FROM FILE OR DEVICE",
0x40: "DOS 2+ - WRITE - WRITE TO FILE OR DEVICE",
0x43: "DOS 2+ - GET FILE ATTRIBUTES",
0x48: "DOS 2+ - ALLOCATE MEMORY",
0x4a: "DOS 2+ - RESIZE MEMORY BLOCK",
0x4e: "DOS 2+ - FINDFIRST - FIND FIRST MATCHING FILE",
0x4c: "DOS 2+ - EXIT - TERMINATE WITH RETURN CODE",
},
0x40: { # Illegal
}
}
def addComment(inst, int_n, func):
codeUnit = listing.getCodeUnitAt(inst.getAddress())
#if inst.getComment(codeUnit.PLATE_COMMENT) is not None: return
comment = "INT {:X}h\n".format(int_n)
if int_n in pc9800ints:
if func not in pc9800ints[int_n] and int_n == 0x1b: # FDD int hack
func &= 0xf
if int_n == 0x40: # illegal interrupt
if func is None: func = 0
comment += "Illegal interrupt. AH={:X}h".format(func)
elif func in pc9800ints[int_n]:
if func is not None:
comment += "Function {:X}h: ".format(func)
comment += pc9800ints[int_n][func]
else:
print("Unknown function")
return
else:
print("Unknown interrupt")
return
print(comment)
inst.setComment(codeUnit.PLATE_COMMENT, comment)
listing = currentProgram.getListing()
inst = listing.getInstructions(currentProgram.getMemory(), True)
for i in inst:
if monitor.isCancelled(): exit()
if i.getMnemonicString() == "INT":
int_n = i.getOpObjects(0)[0].getValue()
commented = False
prev_i = i
for _ in range(5): # look back for AH or AX value
prev_i = prev_i.getPrevious()
if prev_i is None: break
if prev_i.getMnemonicString() == "MOV":
if type(prev_i.getOpObjects(0)[0]) is ghidra.program.model.lang.Register and type(prev_i.getOpObjects(1)[0]) is ghidra.program.model.scalar.Scalar:
if prev_i.getOpObjects(0)[0].getName() in ("AH", "AX"):
val = prev_i.getOpObjects(1)[0].getValue()
if prev_i.getOpObjects(0)[0].getName() == "AX": val >>= 8
print("{} INT {:X}h AH {:X}h".format(i.getAddress(), int_n, val))
addComment(i, int_n, val)
commented = True
break
if not commented: print("{} INT {:X}h Can't find AH".format(i.getAddress(), int_n))
exit()
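The look-back loop above recovers the interrupt function number from a preceding `MOV AH, imm` or `MOV AX, imm`; with a 16-bit `MOV AX`, the function number sits in the high byte, hence the `val >>= 8`. A standalone sketch of just that extraction step (plain Python, no Ghidra objects — the register names and immediates here are illustrative stand-ins):

```python
def extract_function_number(register, immediate):
    # Mirrors the script's logic: AH holds the function number directly,
    # while for MOV AX, imm the high byte of the immediate is AH.
    if register == "AH":
        return immediate
    if register == "AX":
        return immediate >> 8
    return None  # not a register the script looks for

# e.g. MOV AX, 4C00h sets AH = 4Ch (DOS "terminate with return code")
```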
|
// Copyright 2014 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include <base/check.h>
#include <brillo/osrelease_reader.h>
#include <base/files/file_enumerator.h>
#include <base/files/file_util.h>
#include <base/logging.h>
#include <brillo/strings/string_utils.h>
namespace brillo {
void OsReleaseReader::Load() {
Load(base::FilePath("/"));
}
bool OsReleaseReader::GetString(const std::string& key,
std::string* value) const {
CHECK(initialized_) << "OsReleaseReader.Load() must be called first.";
return store_.GetString(key, value);
}
void OsReleaseReader::LoadTestingOnly(const base::FilePath& root_dir) {
Load(root_dir);
}
void OsReleaseReader::Load(const base::FilePath& root_dir) {
base::FilePath osrelease = root_dir.Append("etc").Append("os-release");
if (!store_.Load(osrelease)) {
// /etc/os-release might not be present (cros deploying a new configuration
// or no fields set at all). Just print a debug message and continue.
DLOG(INFO) << "Could not load fields from " << osrelease.value();
}
base::FilePath osreleased = root_dir.Append("etc").Append("os-release.d");
base::FileEnumerator enumerator(osreleased, false,
base::FileEnumerator::FILES);
for (base::FilePath path = enumerator.Next(); !path.empty();
path = enumerator.Next()) {
std::string content;
if (!base::ReadFileToString(path, &content)) {
// The only way to fail is if a file exist in /etc/os-release.d but we
// cannot read it.
PLOG(FATAL) << "Could not read " << path.value();
}
// There might be a trailing new line. Strip it to keep only the first line
// of the file.
content = brillo::string_utils::SplitAtFirst(content, "\n", true).first;
store_.SetString(path.BaseName().value(), content);
}
initialized_ = true;
}
std::vector<std::string> OsReleaseReader::GetKeys() const {
CHECK(initialized_) << "OsReleaseReader.Load() must be called first.";
return store_.GetKeys();
}
} // namespace brillo
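For reference, the `/etc/os-release.d` convention the class implements can be sketched in a few lines of Python (a hedged illustration of the directory layout, not part of the Chromium code): each filename becomes a key, and the value is the first line of that file's contents.

```python
import os

def read_osrelease_d(root):
    # Mirrors OsReleaseReader::Load: every file under etc/os-release.d
    # contributes one key (its basename), with the file's first line as the value.
    result = {}
    directory = os.path.join(root, "etc", "os-release.d")
    if not os.path.isdir(directory):
        return result
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path) as f:
                # Keep only the first line, as the C++ code does.
                result[name] = f.read().split("\n", 1)[0]
    return result
```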
|
/**
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for
* license information.
*
* Code generated by Microsoft (R) AutoRest Code Generator.
*/
package com.microsoft.azure.management.network.v2020_06_01.implementation;
import com.microsoft.azure.management.network.v2020_06_01.NetworkInterfaceTapConfiguration;
import com.microsoft.azure.arm.model.implementation.CreatableUpdatableImpl;
import rx.Observable;
import com.microsoft.azure.management.network.v2020_06_01.ProvisioningState;
import com.microsoft.azure.management.network.v2020_06_01.VirtualNetworkTap;
class NetworkInterfaceTapConfigurationImpl extends CreatableUpdatableImpl<NetworkInterfaceTapConfiguration, NetworkInterfaceTapConfigurationInner, NetworkInterfaceTapConfigurationImpl> implements NetworkInterfaceTapConfiguration, NetworkInterfaceTapConfiguration.Definition, NetworkInterfaceTapConfiguration.Update {
private final NetworkManager manager;
private String resourceGroupName;
private String networkInterfaceName;
private String tapConfigurationName;
NetworkInterfaceTapConfigurationImpl(String name, NetworkManager manager) {
super(name, new NetworkInterfaceTapConfigurationInner());
this.manager = manager;
// Set resource name
this.tapConfigurationName = name;
//
}
NetworkInterfaceTapConfigurationImpl(NetworkInterfaceTapConfigurationInner inner, NetworkManager manager) {
super(inner.name(), inner);
this.manager = manager;
// Set resource name
this.tapConfigurationName = inner.name();
// set resource ancestor and positional variables
this.resourceGroupName = IdParsingUtils.getValueFromIdByName(inner.id(), "resourceGroups");
this.networkInterfaceName = IdParsingUtils.getValueFromIdByName(inner.id(), "networkInterfaces");
this.tapConfigurationName = IdParsingUtils.getValueFromIdByName(inner.id(), "tapConfigurations");
//
}
@Override
public NetworkManager manager() {
return this.manager;
}
@Override
public Observable<NetworkInterfaceTapConfiguration> createResourceAsync() {
NetworkInterfaceTapConfigurationsInner client = this.manager().inner().networkInterfaceTapConfigurations();
return client.createOrUpdateAsync(this.resourceGroupName, this.networkInterfaceName, this.tapConfigurationName, this.inner())
.map(innerToFluentMap(this));
}
@Override
public Observable<NetworkInterfaceTapConfiguration> updateResourceAsync() {
NetworkInterfaceTapConfigurationsInner client = this.manager().inner().networkInterfaceTapConfigurations();
return client.createOrUpdateAsync(this.resourceGroupName, this.networkInterfaceName, this.tapConfigurationName, this.inner())
.map(innerToFluentMap(this));
}
@Override
protected Observable<NetworkInterfaceTapConfigurationInner> getInnerAsync() {
NetworkInterfaceTapConfigurationsInner client = this.manager().inner().networkInterfaceTapConfigurations();
return client.getAsync(this.resourceGroupName, this.networkInterfaceName, this.tapConfigurationName);
}
@Override
public boolean isInCreateMode() {
return this.inner().id() == null;
}
@Override
public String etag() {
return this.inner().etag();
}
@Override
public String id() {
return this.inner().id();
}
@Override
public String name() {
return this.inner().name();
}
@Override
public ProvisioningState provisioningState() {
return this.inner().provisioningState();
}
@Override
public String type() {
return this.inner().type();
}
@Override
public VirtualNetworkTap virtualNetworkTap() {
VirtualNetworkTapInner inner = this.inner().virtualNetworkTap();
if (inner != null) {
return new VirtualNetworkTapImpl(inner.name(), inner, manager());
} else {
return null;
}
}
@Override
public NetworkInterfaceTapConfigurationImpl withExistingNetworkInterface(String resourceGroupName, String networkInterfaceName) {
this.resourceGroupName = resourceGroupName;
this.networkInterfaceName = networkInterfaceName;
return this;
}
@Override
public NetworkInterfaceTapConfigurationImpl withId(String id) {
this.inner().withId(id);
return this;
}
@Override
public NetworkInterfaceTapConfigurationImpl withName(String name) {
this.inner().withName(name);
return this;
}
@Override
public NetworkInterfaceTapConfigurationImpl withVirtualNetworkTap(VirtualNetworkTapInner virtualNetworkTap) {
this.inner().withVirtualNetworkTap(virtualNetworkTap);
return this;
}
}
|
Ho Ho Ho, Merry Shavemas!
Today I’ll be doing a combined review, of an assortment of Christmas / Winter themed scents from Mama Bear Soaps.
First off, we’ll tackle the lather. If you’ve been following my reviews, you’ll have noticed that I consistently put Mama Bear Soaps at an 8/10 for lather quality. It’s a good soap, but it can be a bit finicky for getting the right amount of water to provide a good level of glide without going too runny. I find it’s best to load the brush a lot, and then add water gradually. So, that 8/10 lather score will be taken into account for the overall score for each of the individual reviews below.
And now, the scents (in the order they appear in the picture above, from left to right):
Frankincense and Myrrh – This is a warm woodsy spicy scented soap. It’s rather nice. I frankly don’t really have any idea what either Frankincense or Myrrh are expected to smell like, but this’ll do the trick. It manages to stay reasonably strong without any noticeable fading throughout the shave. Aroma:8, Strength:9, Overall:8
Welcome Home – Sue says she was going for more of a “spiced apple cider” approach, but what I got out of this was more of a nice sweet apple pie. Either way, it’s a nice scent. It came on somewhat faintly, but didn’t fade noticeably during the shave. A:9, S: 8, O:8
Winter Woods – Winter woods is a rather nice blend of “evergreen and ozone, moss and musk, wildflower and herb”. The flowers seem to dominate the first impression of the scent, with the other stuff providing a backdrop. It’s a complex combination that seems to mesh together rather well. It wasn’t the greatest in the strength department, being only somewhat noticeable when the lather is applied to the face. A:8, S: 7, O: 7
Christmas Forest – Well, in the words of my girlfriend, who I had sniff it (without telling her what it was first) “It smells like a Christmas tree”. Piney, and kind of seems like there might be a bit of something spicy to it too. Was nice, however the scent faded rather fast. A:9, S:7, O: 7
Gingerbread – Nice and sweet and gingery; I like the scent quite a bit, it’s a bit lacking in the strength department, but is still there in the background during the shave, just not by very much. A: 9, S: 7, O: 7
Sleigh Ride – This one is sweet and fruity; minty and spicy. It combines orange, green apple, peppermint, and cloves. I’m not all that much of a fan, it kind of seems like a bit of a mishmash. It wasn’t all that strong, only being barely detectable while lathered on the brush, and not really noticeable at all once applied to the face. A: 7, S: 6, O:5
Christmas Rose – The rose and pine made for an interesting combination. The lather wasn’t strongly scented, but was noticeable without any significant fading throughout the shave. A:8, S:8, O: 7
Winter Grapefruit – This is, thus far, my favourite soap from Mama Bear. It’s cool and grape-fruity, with just a hint of evergreen to give it a “winter” theme. It smells wonderful, and comes on nice and strong without significant fading. Highly recommended. A: 9, S: 9, O: 9
With the exception of the Winter Woods, these can all be found as part of Mama Bears’ “Seasonal and Holiday Favorites” sample pack, which gets you these and a few autumn-themed scents for $6.99. The Winter Woods is considered part of their “Everyday favorites” lineup, and a sample goes for $1. For full size, most are currently only available in her older style 4 oz pucks in a plastic bowl, for $9.99, or extra for a wooden bowl, whereas the Sleigh Ride is available in the newer style 5 oz puck for $7.99, with the option for a plastic tub or wooden bowl for extra.
Ingredients: Coconut Oil, Palm Oil, Castor Oil, Safflower, Glycerine (kosher, of vegetable origin), Purified Water, Sodium Hydroxide (saponifying agent), Sorbitol (moisturizer), Sorbitan oleate (emulsifier), Soybean protein (conditioner), Wheat protein and fragrance either natural or synthetic.
|
package org.hswebframework.web.authorization.token;
import java.util.concurrent.atomic.AtomicLong;
/**
* 用户令牌信息
*
* @author zhouhao
* @since 3.0
*/
public class LocalUserToken implements UserToken {
private static final long serialVersionUID = 1L;
private String userId;
private String token;
private String type = "default";
private volatile TokenState state;
private AtomicLong requestTimesCounter = new AtomicLong(0);
private volatile long lastRequestTime = System.currentTimeMillis();
private volatile long firstRequestTime = System.currentTimeMillis();
private volatile long requestTimes;
private long maxInactiveInterval;
@Override
public long getMaxInactiveInterval() {
return maxInactiveInterval;
}
public void setMaxInactiveInterval(long maxInactiveInterval) {
this.maxInactiveInterval = maxInactiveInterval;
}
public LocalUserToken(String userId, String token) {
this.userId = userId;
this.token = token;
}
public LocalUserToken() {
}
@Override
public String getUserId() {
return userId;
}
@Override
public long getRequestTimes() {
return requestTimesCounter.get();
}
@Override
public long getLastRequestTime() {
return lastRequestTime;
}
@Override
public long getSignInTime() {
return firstRequestTime;
}
@Override
public String getToken() {
return token;
}
@Override
public TokenState getState() {
return state;
}
public void setState(TokenState state) {
this.state = state;
}
public void setUserId(String userId) {
this.userId = userId;
}
public void setToken(String token) {
this.token = token;
}
public void setFirstRequestTime(long firstRequestTime) {
this.firstRequestTime = firstRequestTime;
}
public void setLastRequestTime(long lastRequestTime) {
this.lastRequestTime = lastRequestTime;
}
public void setRequestTimes(long requestTimes) {
this.requestTimes = requestTimes;
requestTimesCounter.set(requestTimes);
}
public void touch() {
requestTimesCounter.addAndGet(1);
lastRequestTime = System.currentTimeMillis();
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
public LocalUserToken copy() {
LocalUserToken userToken = new LocalUserToken();
userToken.firstRequestTime = firstRequestTime;
userToken.lastRequestTime = lastRequestTime;
userToken.requestTimesCounter = new AtomicLong(requestTimesCounter.get());
userToken.token = token;
userToken.userId = userId;
userToken.state = state;
userToken.maxInactiveInterval = maxInactiveInterval;
userToken.type = type;
return userToken;
}
@Override
public int hashCode() {
return token.hashCode();
}
@Override
public boolean equals(Object obj) {
// Comparing hash codes alone is not a valid equality test (collisions);
// compare the token string itself instead.
return obj instanceof LocalUserToken && token.equals(((LocalUserToken) obj).token);
}
}
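For illustration, here is a minimal, self-contained sketch of the bookkeeping pattern `touch()` relies on: an `AtomicLong` request counter paired with a `volatile` timestamp. The `TouchDemo` class name and `main` harness are invented for this example and are not part of the library.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the touch() bookkeeping used above: an AtomicLong
// counts requests thread-safely, while a volatile long records the
// wall-clock time of the most recent request.
public class TouchDemo {
    private final AtomicLong requestTimes = new AtomicLong(0);
    private volatile long lastRequestTime = System.currentTimeMillis();

    public void touch() {
        requestTimes.incrementAndGet();               // one more request served
        lastRequestTime = System.currentTimeMillis(); // refresh activity timestamp
    }

    public long getRequestTimes() {
        return requestTimes.get();
    }

    public long getLastRequestTime() {
        return lastRequestTime;
    }

    public static void main(String[] args) {
        TouchDemo token = new TouchDemo();
        for (int i = 0; i < 3; i++) {
            token.touch();
        }
        System.out.println(token.getRequestTimes()); // prints 3
    }
}
```

Using `AtomicLong` rather than a plain `long` keeps increments race-free when several request threads touch the same token concurrently, which is presumably why the class pairs it with `volatile` timestamp fields.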
|
Qatari Foreign Minister Khalid al-Atiyah said Wednesday that differences between his country and fellow Gulf Cooperation Council (GCC) members had been resolved.
Speaking at a joint press conference with Kuwaiti counterpart Sabah Khaled Al-Sabah, al-Atiyah said the return to Doha of the Saudi, United Arab Emirates (UAE) and Bahraini ambassadors would depend on the latter three countries.
"Differences are possible within the GCC," al-Atiyah said. "They should not, however, lead to severing relations."
Saudi Arabia, the UAE and Bahrain all recalled their ambassadors from Doha last month.
The three Gulf States attributed the move to what they described as Qatari "interference" in their affairs, along with Qatari support for Egypt's Muslim Brotherhood – the group from which ousted president Mohamed Morsi hails.
On April 17, GCC foreign ministers agreed to press ahead with implementation of an agreement signed last November obliging signatories not to interfere in the affairs of fellow GCC member states.
After meeting at a Saudi airbase, GCC foreign ministers agreed not to support any organizations or individuals that threatened the security or stability of GCC member states.
They further agreed to refrain from supporting what they described as "hostile" media outlets.
Khalid al-Atiyah also said that his country supported the "choices" of the Egyptian people.
Al-Atiyah stated that Qatari Emir Tamim Ben Hamad had reiterated Qatar's desire to see stability in Egypt and "support the choices of the Egyptian people."
Relations between Cairo and Doha have soured dramatically since the Egyptian army ousted elected president Mohamed Morsi last July on the back of demonstrations against his leadership.
Following Morsi's ouster and subsequent imprisonment, a number of his supporters fled to Qatar amid a heavy-handed crackdown by Egypt's army-backed interim authorities on pro-democracy protests.
|
#ifndef VIDEO_H
#define VIDEO_H
#include <stdbool.h>
#include <SDL.h>
#define WINDOW_WIDTH 800
#define WINDOW_HEIGHT 632
extern SDL_Window* window;
extern SDL_Renderer* renderer;
int video_init(void);
void video_shutdown(void);
SDL_Texture* load_texture_from_file(const char* filename);
void draw_text(int x, int y, char* format, ...);
#endif
|
Control of stem cell fate and function by engineering physical microenvironments. The phenotypic expression and function of stem cells are regulated by their integrated response to variable microenvironmental cues, including growth factors and cytokines, matrix-mediated signals, and cell-cell interactions. Recently, growing evidence suggests that matrix-mediated signals include mechanical stimuli such as strain, shear stress, substrate rigidity and topography, and these stimuli have a more profound impact on stem cell phenotypes than had previously been recognized, e.g. self-renewal and differentiation through the control of gene transcription and signaling pathways. Using a variety of cell culture models enabled by micro- and nanoscale technologies, we are beginning to systematically and quantitatively investigate the integrated response of cells to combinations of relevant mechanobiological stimuli. This paper reviews recent advances in engineering physical stimuli for stem cell mechanobiology and discusses how micro- and nanoscale engineered platforms can be used to control stem cell niche environments and regulate stem cell fate and function.
|
/*---------------------------------------------------------------------------------------------
* Copyright (c) Microsoft Corporation. All rights reserved.
* Licensed under the MIT License. See License.txt in the project root for license information.
*--------------------------------------------------------------------------------------------*/
import { KeyCode, KeyCodeUtils, IMMUTABLE_CODE_TO_KEY_CODE, ScanCode } from 'vs/base/common/keyCodes';
import { ChordKeybinding, Keybinding, KeybindingModifier, SimpleKeybinding, ScanCodeBinding } from 'vs/base/common/keybindings';
import { OperatingSystem } from 'vs/base/common/platform';
import { BaseResolvedKeybinding } from 'vs/platform/keybinding/common/baseResolvedKeybinding';
import { removeElementsAfterNulls } from 'vs/platform/keybinding/common/resolvedKeybindingItem';
/**
* Do not instantiate. Use KeybindingService to get a ResolvedKeybinding seeded with information about the current kb layout.
*/
export class USLayoutResolvedKeybinding extends BaseResolvedKeybinding<SimpleKeybinding> {
constructor(actual: Keybinding, os: OperatingSystem) {
super(os, actual.parts);
}
private _keyCodeToUILabel(keyCode: KeyCode): string {
if (this._os === OperatingSystem.Macintosh) {
switch (keyCode) {
case KeyCode.LeftArrow:
return '←';
case KeyCode.UpArrow:
return '↑';
case KeyCode.RightArrow:
return '→';
case KeyCode.DownArrow:
return '↓';
}
}
return KeyCodeUtils.toString(keyCode);
}
protected _getLabel(keybinding: SimpleKeybinding): string | null {
if (keybinding.isDuplicateModifierCase()) {
return '';
}
return this._keyCodeToUILabel(keybinding.keyCode);
}
protected _getAriaLabel(keybinding: SimpleKeybinding): string | null {
if (keybinding.isDuplicateModifierCase()) {
return '';
}
return KeyCodeUtils.toString(keybinding.keyCode);
}
protected _getElectronAccelerator(keybinding: SimpleKeybinding): string | null {
return KeyCodeUtils.toElectronAccelerator(keybinding.keyCode);
}
protected _getUserSettingsLabel(keybinding: SimpleKeybinding): string | null {
if (keybinding.isDuplicateModifierCase()) {
return '';
}
const result = KeyCodeUtils.toUserSettingsUS(keybinding.keyCode);
return (result ? result.toLowerCase() : result);
}
protected _isWYSIWYG(): boolean {
return true;
}
protected _getDispatchPart(keybinding: SimpleKeybinding): string | null {
return USLayoutResolvedKeybinding.getDispatchStr(keybinding);
}
public static getDispatchStr(keybinding: SimpleKeybinding): string | null {
if (keybinding.isModifierKey()) {
return null;
}
let result = '';
if (keybinding.ctrlKey) {
result += 'ctrl+';
}
if (keybinding.shiftKey) {
result += 'shift+';
}
if (keybinding.altKey) {
result += 'alt+';
}
if (keybinding.metaKey) {
result += 'meta+';
}
result += KeyCodeUtils.toString(keybinding.keyCode);
return result;
}
protected _getSingleModifierDispatchPart(keybinding: SimpleKeybinding): KeybindingModifier | null {
if (keybinding.keyCode === KeyCode.Ctrl && !keybinding.shiftKey && !keybinding.altKey && !keybinding.metaKey) {
return 'ctrl';
}
if (keybinding.keyCode === KeyCode.Shift && !keybinding.ctrlKey && !keybinding.altKey && !keybinding.metaKey) {
return 'shift';
}
if (keybinding.keyCode === KeyCode.Alt && !keybinding.ctrlKey && !keybinding.shiftKey && !keybinding.metaKey) {
return 'alt';
}
if (keybinding.keyCode === KeyCode.Meta && !keybinding.ctrlKey && !keybinding.shiftKey && !keybinding.altKey) {
return 'meta';
}
return null;
}
/**
* *NOTE*: Check return value for `KeyCode.Unknown`.
*/
private static _scanCodeToKeyCode(scanCode: ScanCode): KeyCode {
const immutableKeyCode = IMMUTABLE_CODE_TO_KEY_CODE[scanCode];
if (immutableKeyCode !== KeyCode.DependsOnKbLayout) {
return immutableKeyCode;
}
switch (scanCode) {
case ScanCode.KeyA: return KeyCode.KeyA;
case ScanCode.KeyB: return KeyCode.KeyB;
case ScanCode.KeyC: return KeyCode.KeyC;
case ScanCode.KeyD: return KeyCode.KeyD;
case ScanCode.KeyE: return KeyCode.KeyE;
case ScanCode.KeyF: return KeyCode.KeyF;
case ScanCode.KeyG: return KeyCode.KeyG;
case ScanCode.KeyH: return KeyCode.KeyH;
case ScanCode.KeyI: return KeyCode.KeyI;
case ScanCode.KeyJ: return KeyCode.KeyJ;
case ScanCode.KeyK: return KeyCode.KeyK;
case ScanCode.KeyL: return KeyCode.KeyL;
case ScanCode.KeyM: return KeyCode.KeyM;
case ScanCode.KeyN: return KeyCode.KeyN;
case ScanCode.KeyO: return KeyCode.KeyO;
case ScanCode.KeyP: return KeyCode.KeyP;
case ScanCode.KeyQ: return KeyCode.KeyQ;
case ScanCode.KeyR: return KeyCode.KeyR;
case ScanCode.KeyS: return KeyCode.KeyS;
case ScanCode.KeyT: return KeyCode.KeyT;
case ScanCode.KeyU: return KeyCode.KeyU;
case ScanCode.KeyV: return KeyCode.KeyV;
case ScanCode.KeyW: return KeyCode.KeyW;
case ScanCode.KeyX: return KeyCode.KeyX;
case ScanCode.KeyY: return KeyCode.KeyY;
case ScanCode.KeyZ: return KeyCode.KeyZ;
case ScanCode.Digit1: return KeyCode.Digit1;
case ScanCode.Digit2: return KeyCode.Digit2;
case ScanCode.Digit3: return KeyCode.Digit3;
case ScanCode.Digit4: return KeyCode.Digit4;
case ScanCode.Digit5: return KeyCode.Digit5;
case ScanCode.Digit6: return KeyCode.Digit6;
case ScanCode.Digit7: return KeyCode.Digit7;
case ScanCode.Digit8: return KeyCode.Digit8;
case ScanCode.Digit9: return KeyCode.Digit9;
case ScanCode.Digit0: return KeyCode.Digit0;
case ScanCode.Minus: return KeyCode.Minus;
case ScanCode.Equal: return KeyCode.Equal;
case ScanCode.BracketLeft: return KeyCode.BracketLeft;
case ScanCode.BracketRight: return KeyCode.BracketRight;
case ScanCode.Backslash: return KeyCode.Backslash;
case ScanCode.IntlHash: return KeyCode.Unknown; // missing
case ScanCode.Semicolon: return KeyCode.Semicolon;
case ScanCode.Quote: return KeyCode.Quote;
case ScanCode.Backquote: return KeyCode.Backquote;
case ScanCode.Comma: return KeyCode.Comma;
case ScanCode.Period: return KeyCode.Period;
case ScanCode.Slash: return KeyCode.Slash;
case ScanCode.IntlBackslash: return KeyCode.IntlBackslash;
}
return KeyCode.Unknown;
}
private static _resolveSimpleUserBinding(binding: SimpleKeybinding | ScanCodeBinding | null): SimpleKeybinding | null {
if (!binding) {
return null;
}
if (binding instanceof SimpleKeybinding) {
return binding;
}
const keyCode = this._scanCodeToKeyCode(binding.scanCode);
if (keyCode === KeyCode.Unknown) {
return null;
}
return new SimpleKeybinding(binding.ctrlKey, binding.shiftKey, binding.altKey, binding.metaKey, keyCode);
}
public static resolveUserBinding(input: (SimpleKeybinding | ScanCodeBinding)[], os: OperatingSystem): USLayoutResolvedKeybinding[] {
const parts: SimpleKeybinding[] = removeElementsAfterNulls(input.map(keybinding => this._resolveSimpleUserBinding(keybinding)));
if (parts.length > 0) {
return [new USLayoutResolvedKeybinding(new ChordKeybinding(parts), os)];
}
return [];
}
}
|