America is being watched from above. Government surveillance planes routinely circle over most major cities — but usually take the weekends off.
Each weekday, dozens of U.S. government aircraft take to the skies and slowly circle over American cities. Piloted by agents of the FBI and the Department of Homeland Security (DHS), the planes are fitted with high-resolution video cameras, often working with “augmented reality” software that can superimpose onto the video images everything from street and business names to the owners of individual homes. At least a few planes have carried devices that can track the cell phones of people below. Most of the aircraft are small, flying a mile or so above ground, and many use exhaust mufflers to mute their engines — making them hard to detect by the people they’re spying on.

The government’s airborne surveillance has received little public scrutiny — until now. BuzzFeed News has assembled an unprecedented picture of the operation’s scale and sweep by analyzing aircraft location data collected by the flight-tracking website Flightradar24 from mid-August to the end of December last year, identifying about 200 federal aircraft. Day after day, dozens of these planes circled above cities across the nation.
The FBI and the DHS would not discuss the reasons for individual flights but told BuzzFeed News that their planes are not conducting mass surveillance. The DHS said that its aircraft were involved with securing the nation’s borders, as well as targeting drug smuggling and human trafficking, and may also be used to support investigations by the FBI and other law enforcement agencies. The FBI said that its planes are only used to target suspects in specific investigations of serious crimes, pointing to a statement issued in June 2015, after reporters and lawmakers started asking questions about FBI surveillance flights. “It should come as no surprise that the FBI uses planes to follow terrorists, spies, and serious criminals,” said FBI Deputy Director Mark Giuliano, in that statement. “We have an obligation to follow those people who want to hurt our country and its citizens, and we will continue to do so.”

But most of these government planes took the weekends off. The BuzzFeed News analysis found that surveillance flight time dropped more than 70% on Saturdays, Sundays, and federal holidays. “The fact that they are mostly not flying on weekends suggests these are relatively run-of-the-mill investigations,” Nathan Freed Wessler, an attorney with the American Civil Liberties Union’s (ACLU) Project on Speech, Privacy, and Technology, told BuzzFeed News.

The government’s aerial surveillance programs deserve scrutiny by the Supreme Court, said Adam Bates, a policy analyst with the Cato Institute, a libertarian think tank in Washington, D.C. “It’s very difficult to know, because these are very secretive programs, exactly what information they’re collecting and what they’re doing with it,” Bates told BuzzFeed News.
The BuzzFeed News analysis also revealed how the government responded to the mass shooting last December in San Bernardino, California.
In the weeks leading up to the deadliest terrorist attack on U.S. soil since 9/11, nearby neighborhoods in and around Los Angeles were watched intensively by FBI aircraft. But San Bernardino itself was apparently ignored: Our data shows no FBI surveillance flights over the city.

That changed abruptly after the attack on the morning of Dec. 2. Within 90 minutes, two planes — one an FBI Cessna, the other a DHS Pilatus PC-12 surveillance aircraft — were circling the scene. Later that afternoon, the FBI plane flew around the home of the two shooters, Syed Rizwan Farook and Tashfeen Malik. Farook attended the nearby Dar Al Uloom Al-Islamiyah of America mosque. And starting Dec. 3, FBI planes traced circles with the mosque near their center. Three different FBI planes flew around the mosque, some circling for more than three hours at a time. There were flights on each day in the week after the attack — except for Saturday and Sunday.
The FBI told BuzzFeed News that it cannot launch investigations based on race, ethnicity, or religion — surveillance means that individual criminal suspects are being watched, not groups of people.
But Shakeel Syed, executive director of the Islamic Shura Council of Southern California, an umbrella organization representing the region’s mosques and Islamic centers, told BuzzFeed News that he is alarmed that the FBI’s knee-jerk reaction to the San Bernardino massacre seems to have been to send its planes to watch the Dar Al Uloom mosque. “That is extremely troubling, and reconfirms the fears that we continue to talk about,” Syed said. “I don’t know that they have ever done surveillance of churches or synagogues when people of those traditions have committed acts of criminality.”

In the months before the San Bernardino attack, some of the government’s surveillance planes circled over other neighborhoods with large Muslim populations. In the San Francisco Bay Area, for instance, there was a clear circle above Little Kabul in Fremont, home to the largest concentration of ethnic Afghans in the nation. The main concentrations of surveillance in Minneapolis, meanwhile, were above an area known as Little Mogadishu for its large Somali population. But these neighborhoods did not come under heightened aerial scrutiny after the terrorist mayhem in Paris on Nov. 13, nor after San Bernardino. And on Thanksgiving Day, less than two weeks after the Paris attacks, with the nation under a State Department–issued global terrorism alert, federal surveillance planes almost entirely stopped flying, only to resume once the holiday was over.

The BuzzFeed News analysis almost certainly underestimates the scope of surveillance by federal aircraft. Some two dozen planes operated by the FBI and more than 130 registered to the DHS never appeared on Flightradar24, suggesting that some surveillance planes may be hidden from public view on plane-tracking websites. (See here for details on the BuzzFeed News analysis.) FBI planes have also on occasion been used to support local law enforcement.
In April 2015, after riots broke out in Baltimore following the death of Freddie Gray in police custody, FBI planes were sent to monitor the situation, documents obtained by the ACLU show. FBI Director James Comey told Congress that the agency’s aircraft also flew over Ferguson, Missouri, in the summer of 2014, after a local police officer shot Michael Brown. Responding to the BuzzFeed News analysis, FBI spokesman Christopher Allen said that planes may circle over cities while waiting for a suspect to emerge from a building. In some cases, the BuzzFeed News analysis showed that FBI aircraft indeed seemed to be following a vehicle from place to place, pausing to circle at each stop. Other flights, however, circled a single location for several hours, and then returned to their airfields.
Left image shows plane likely following a suspect; right shows repeated circling
As to the big drop-off in flights on the weekends, Allen told BuzzFeed News that the agency’s surveillance depends on the needs of individual investigations.
“If we need it, it’s going to happen,” Allen said. The targets of surveillance may simply be less active on the weekends, he said. And because traffic is lighter, he added, it’s easier for the FBI to follow suspects on the ground instead of by air. That explanation did not convince James Wedick, a former FBI agent based near Sacramento, California. “That’s painful,” Wedick told BuzzFeed News. He suspects that the weekend dip reflects the controversial practice of using undercover agents and informants to entice suspects into joining fake terrorist plots devised by the FBI. “The FBI today is better able to control investigations, enabling agents to orchestrate events when more resources were available,” Wedick said.
Left: Two DHS aircraft patrol the U.S. border; Center: a DHS helicopter, with cameras beneath the cockpit door; Right: a Cessna 208, operated by an FBI front company. (Images: U.S. Customs and Border Protection; courtesy William Larkins)
In June of last year, the Associated Press reported that it had linked more than 50 planes, mostly small Cessna Skylane 182 aircraft, to 13 fake companies created as fronts for the FBI. Also using Flightradar24, AP reporters tracked more than 100 flights in 11 states over the course of a month.
BuzzFeed News extended the list of FBI front companies, drawing from other sources that have investigated the agency’s airborne operations. We then looked for planes registered to these front companies in data provided by Flightradar24. (Its data comes from radio signals broadcast by transponders that reveal planes’ locations and identifying information, picked up by receivers on the ground that are hosted by volunteers across the country.) We detected nearly 100 FBI fixed-wing planes, mostly small Cessnas, plus about a dozen helicopters. Collectively, they made more than 1,950 flights over our four-month-plus observation period. The aircraft frequently circled or hovered around specific locations, often for several hours in the daytime over urban areas.

We also tracked more than 90 aircraft, about two-thirds of them helicopters, that were registered to the DHS, which is responsible for border protection, customs, and immigration. Not surprisingly, these planes were especially active around border towns such as McAllen, Texas, which faces the Mexican city of Reynosa across the Rio Grande. But the DHS’s airborne operations also extended far into the U.S. interior. And over some cities, notably Los Angeles, its aircraft seemed to circle around particular locations, behaving like those in the FBI’s fleet.
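The matching step described above can be sketched in Python. This is a hypothetical illustration of the general approach (matching transponder records against a list of registrants), not BuzzFeed News’ actual pipeline; the company names and tail numbers below are placeholders.

```python
# Hypothetical sketch, not BuzzFeed News' actual code: given flattened
# transponder records of the kind Flightradar24 collects, keep only aircraft
# registered to suspected front companies and count each plane's distinct
# flights. Registrants and tail numbers here are placeholders.
from collections import Counter

FRONT_COMPANIES = {"FVX Research", "KQM Aviation"}  # placeholder registrants

def count_front_company_flights(records):
    """Count distinct flights per tail number for front-company aircraft."""
    seen = set()
    flights = Counter()
    for rec in records:
        if rec["registrant"] not in FRONT_COMPANIES:
            continue
        key = (rec["tail_number"], rec["flight_id"])
        if key not in seen:  # transponder feeds repeat one row per position ping
            seen.add(key)
            flights[rec["tail_number"]] += 1
    return flights

records = [
    {"registrant": "FVX Research", "tail_number": "N404KR", "flight_id": "f1"},
    {"registrant": "FVX Research", "tail_number": "N404KR", "flight_id": "f1"},
    {"registrant": "FVX Research", "tail_number": "N404KR", "flight_id": "f2"},
    {"registrant": "Acme Charter", "tail_number": "N100AA", "flight_id": "f3"},
]
print(count_front_company_flights(records))  # prints Counter({'N404KR': 2})
```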
The DHS would not comment on flights over specific cities, but confirmed that its planes regularly support other law enforcement agencies, including the FBI.
DHS spokesman Carlos Lazo told BuzzFeed News by email that its planes are mainly used to combat the illegal drug trade, human trafficking, and violent crime. In 2015, he said, DHS aerial surveillance missions supported investigations that “resulted in 706 arrests including violent criminals and sex traffickers, the seizure of more than 10,000 lbs of cocaine, 342 lbs of heroin, more than 1,000 lbs of methamphetamine, 350 weapons, and $24 million in cash.”
Regulations require that a plane’s owners submit documents to the Federal Aviation Administration describing modifications that might affect a plane’s airworthiness, and BuzzFeed News obtained this paperwork for about 130 of the planes identified in our analysis — giving a strong sense of what the aircraft are capable of.
Many FBI Cessnas, for example, are fitted with exhaust mufflers to reduce engine noise. FBI and DHS aircraft carry sophisticated camera systems in steerable mounts that can provide conventional video, night vision, and infrared thermal imaging. These include Talon devices, manufactured by FLIR Systems of Wilsonville, Oregon. The company’s website boasts that these devices “deliver high-resolution imagery day or night.” On FBI planes, cameras are typically paired with augmented reality systems, which superimpose a variety of information over the video, and can embed the feed from a camera into a wider scene built up from stored satellite images. This promotional video from Churchill Navigation of Boulder, Colorado, whose systems are installed on FBI surveillance aircraft, explains some of their capabilities.
Over the past few years, news organizations and advocacy groups have also accumulated evidence that some government surveillance planes can carry equipment to track cell phones on the ground.
Federal and local law enforcement agencies are known to use devices called cell-site simulators that mimic cell phone towers, emitting powerful signals that trick people’s phones into connecting to them as if they were the real thing. Sometimes called “Stingrays” for the brand name of one popular model, these devices read the unique identification codes of the cell phones that connect to them, and so can be used to track people — even if they are indoors, in dense crowds, or otherwise hidden from view.

Official records indicate that both the DHS and the FBI can connect to cell phones from the air. Documents obtained by Chris Soghoian, principal technologist with the ACLU’s Project on Speech, Privacy, and Technology, show that in 2010 the DHS spent almost $190,000 under a contract that included the purchase of cell-site simulators and an “airborne flight kit” for a Stingray device — consisting of “specialized antennas, antenna mounts, cables and power connections.” The contract also covered training for up to four DHS Immigration and Customs Enforcement agents to be instructed on how to operate Stingrays from an aircraft. Other government spending records, reviewed by BuzzFeed News, show that in 2008 the FBI purchased a Stingray airborne kit for $55,000.

And in March, the San Francisco–based Electronic Frontier Foundation (EFF) released a series of documents, obtained through a Freedom of Information lawsuit, in which FBI officials discussed the use of cell-site simulators from aircraft. The documents reveal that the agency was unsure how many times the devices had been used, when pressed for information by the Senate judiciary committee. “I cannot say for certain that the mission numbers are 100% accurate,” one official wrote, noting that so far, five flights involving Stingrays had been identified. The FBI told BuzzFeed News that cell-site simulators are used very rarely, and only to track suspects.
Calls are not intercepted, and personal data is not captured, Allen said. The DHS declined to comment on how often it used the devices from the air, but Lazo, the department’s spokesman, said the technology provides “invaluable assistance” in hunting down criminal suspects. “Cell-site simulators used by DHS are not used to collect the contents of any communication, including any data contained on the phone itself,” he added.
Still, tracking the movements of specific criminal suspects may entail connecting to the phones of thousands of people who just happen to be nearby. And although government policies say that information about nontarget phones should be quickly discarded, privacy advocates remain concerned about cell-site simulators, which may not require a warrant in emergency situations. “In our opinion, any time Stingrays or the like are used, they need to have a warrant based on probable cause,” Nate Cardozo, senior staff attorney with the EFF, told BuzzFeed News.

One of the most sensitive questions surrounding the government’s surveillance flights is whether Muslims are being disproportionately targeted. Even before San Bernardino, Republican presidential frontrunner Donald Trump was calling for surveillance of certain mosques. And in the wake of the bombings in Brussels in March, rival Ted Cruz said that surveillance in Muslim neighborhoods should be intensified. BuzzFeed News mapped mosques and Islamic centers throughout the nation, as detailed in a database maintained by the Hartford Institute for Religion Research in Connecticut. Some mosques were at the center of the circles traced by FBI planes, but BuzzFeed News could see no clear pattern indicating widespread surveillance of mosques.

Privacy advocates argue that all of the flights should be subjected to greater official scrutiny, to ensure that a balance is being struck between effective law enforcement and privacy. “When people think of surveillance, they think of the NSA, or of specific people being tracked, or mosques being infiltrated,” Ramzi Kassem, a law professor at the City University of New York, told BuzzFeed News. “They aren’t necessarily thinking about planes circling overhead of American cities and doing god knows what. It’s important for people to be aware.”
/**
* Serves as a base class for controls that contain and make use of other
* widgets.
*
* @author Lateef Ojulari
* @since 1.0
*/
@UplAttributes({ @UplAttribute(name = "components", type = UplElementReferences.class),
@UplAttribute(name = "valueMarker", type = String.class) })
public abstract class AbstractMultiControl extends AbstractControl implements MultiControl {
private Map<String, ChildWidgetInfo> widgetInfoMap;
private ValueStore thisValueStore;
private List<String> standalonePanelNames;
private String uplValueMarker;
public AbstractMultiControl() {
widgetInfoMap = new LinkedHashMap<String, ChildWidgetInfo>();
}
@Override
public final void onPageConstruct() throws UnifyException {
super.onPageConstruct();
uplValueMarker = getUplAttribute(String.class, "valueMarker");
doOnPageConstruct();
}
@Override
public void addChildWidget(Widget widget) throws UnifyException {
doAddChildWidget(widget, false, false, false, true);
}
@Override
public void setValueStore(ValueStore valueStore) throws UnifyException {
super.setValueStore(valueStore);
for (ChildWidgetInfo childWidgetInfo : widgetInfoMap.values()) {
if (childWidgetInfo.isConforming()) {
childWidgetInfo.getWidget().setValueStore(valueStore);
}
}
}
@Override
public void setDisabled(boolean disabled) {
super.setDisabled(disabled);
for (ChildWidgetInfo childWidgetInfo : widgetInfoMap.values()) {
if (!childWidgetInfo.isIgnoreParentState()) {
childWidgetInfo.getWidget().setDisabled(disabled);
}
}
}
@Override
public void setEditable(boolean editable) {
super.setEditable(editable);
for (ChildWidgetInfo childWidgetInfo : widgetInfoMap.values()) {
if (!childWidgetInfo.isIgnoreParentState()) {
childWidgetInfo.getWidget().setEditable(editable);
}
}
}
@Override
public void populate(DataTransferBlock transferBlock) throws UnifyException {
if (transferBlock != null) {
DataTransferBlock childBlock = transferBlock.getChildBlock();
if (childBlock == null) {
super.populate(transferBlock);
} else {
DataTransferWidget dtWidget = (DataTransferWidget) getChildWidgetInfo(childBlock.getId()).getWidget();
dtWidget.populate(childBlock);
onInternalChildPopulated(dtWidget);
}
}
}
@Override
public ChildWidgetInfo getChildWidgetInfo(String childId) {
return widgetInfoMap.get(childId);
}
@Override
public Collection<ChildWidgetInfo> getChildWidgetInfos() {
return widgetInfoMap.values();
}
@Override
public int getChildWidgetCount() {
return widgetInfoMap.size();
}
@Override
public void setId(String id) throws UnifyException {
boolean changed = !DataUtils.equals(getId(), id);
super.setId(id);
if (changed && !widgetInfoMap.isEmpty()) {
Map<String, ChildWidgetInfo> map = new LinkedHashMap<String, ChildWidgetInfo>();
for (ChildWidgetInfo childWidgetInfo : widgetInfoMap.values()) {
Widget widget = childWidgetInfo.getWidget();
String newChildId = WidgetUtils.renameChildId(id, widget.getId());
widget.setId(newChildId);
map.put(newChildId, new ChildWidgetInfo(widget, childWidgetInfo.isIgnoreParentState(),
childWidgetInfo.isExternal()));
}
widgetInfoMap = map;
}
}
@Override
public final Object getValue(String attribute) throws UnifyException {
if (attribute != null) {
return super.getValue(attribute);
}
if (getValueStore() != null) {
return getValueStore().getValueObject();
}
return null;
}
@Override
public Widget getChildWidget(String childId) throws UnifyException {
if (widgetInfoMap != null) {
ChildWidgetInfo childWidgetInfo = widgetInfoMap.get(childId);
if (childWidgetInfo != null) {
return childWidgetInfo.getWidget();
}
}
return null;
}
protected String getUplValueMarker() {
return uplValueMarker;
}
/**
* Creates and adds a non-conforming external child widget that doesn't ignore
* parent state.
*
* @param descriptor
* descriptor used to create child widget.
* @return the added child widget
* @throws UnifyException
* if an error occurs
*/
protected Widget addExternalChildWidget(String descriptor) throws UnifyException {
Widget widget = (Widget) getUplComponent(getSessionLocale(), descriptor, false);
doAddChildWidget(widget, true, false, false, true);
return widget;
}
/**
* Creates and adds a non-conforming external child standalone panel that
* doesn't ignore parent state.
*
* @param panelName
* the panelName
* @param cloneId
* the clone ID
* @return the added child widget
* @throws UnifyException
* if an error occurs
*/
protected Widget addExternalChildStandalonePanel(String panelName, String cloneId) throws UnifyException {
String uniqueName = UplUtils.generateUplComponentCloneName(panelName, cloneId);
Page page = getRequestContextUtil().getRequestPage();
StandalonePanel standalonePanel = page.getStandalonePanel(uniqueName);
if (standalonePanel == null) {
standalonePanel = getPageManager().createStandalonePanel(getSessionLocale(), uniqueName);
page.addStandalonePanel(uniqueName, standalonePanel);
getUIControllerUtil().updatePageControllerInfo(
getRequestContextUtil().getResponsePathParts().getControllerName(), uniqueName);
if (standalonePanelNames == null) {
standalonePanelNames = new ArrayList<String>();
}
standalonePanelNames.add(uniqueName);
}
standalonePanel.setContainer(getContainer());
doAddChildWidget(standalonePanel, true, false, false, true);
return standalonePanel;
}
@Override
public void addPageAliases() throws UnifyException {
super.addPageAliases();
if (standalonePanelNames != null) {
Page page = getRequestContextUtil().getRequestPage();
for (String uniqueName : standalonePanelNames) {
StandalonePanel standalonePanel = page.getStandalonePanel(uniqueName);
if (standalonePanel != null) {
List<String> aliases = getPageManager().getExpandedReferences(standalonePanel.getId());
getRequestContextUtil().addPageAlias(getId(), DataUtils.toArray(String.class, aliases));
}
}
}
}
@Override
protected final ValueStore createValueStore(Object storageObject, int dataIndex) throws UnifyException {
if (uplValueMarker != null) {
return super.createValueStore(storageObject, uplValueMarker, dataIndex);
}
return super.createValueStore(storageObject, dataIndex);
}
/**
* Removes all external child widgets.
*
* @throws UnifyException
* if an error occurs
*/
protected void removeAllExternalChildWidgets() throws UnifyException {
for (Iterator<Map.Entry<String, ChildWidgetInfo>> it = widgetInfoMap.entrySet().iterator(); it.hasNext();) {
Map.Entry<String, ChildWidgetInfo> entry = it.next();
if (entry.getValue().isExternal()) {
it.remove();
}
}
if (standalonePanelNames != null) {
Page page = getRequestContextUtil().getRequestPage();
for (String uniqueName : standalonePanelNames) {
page.removeStandalonePanel(uniqueName);
}
}
standalonePanelNames = null;
}
/**
* Creates and adds a non-conforming internal child widget that doesn't ignore
* parent state.
*
* @param descriptor
* descriptor used to create child widget.
* @return the added child widget
* @throws UnifyException
* if an error occurs
*/
protected Widget addInternalChildWidget(String descriptor) throws UnifyException {
return addInternalChildWidget(descriptor, false, false);
}
/**
* Creates and adds an internal child widget.
*
* @param descriptor
* descriptor used to create child widget.
* @param conforming
* indicates if child is conforming
* @param ignoreParentState
set this flag to true if the child widget should ignore parent state.
* @return the added child widget
* @throws UnifyException
* if an error occurs
*/
protected Widget addInternalChildWidget(String descriptor, boolean conforming, boolean ignoreParentState)
throws UnifyException {
Widget widget = (Widget) getUplComponent(getSessionLocale(), descriptor, false);
doAddChildWidget(widget, true, conforming, ignoreParentState, false);
return widget;
}
/**
* Adds child widget id to request context page aliases.
*
* @param widget
* the child widget
* @throws UnifyException
* if an error occurs
*/
protected void addPageAlias(Widget widget) throws UnifyException {
getRequestContextUtil().addPageAlias(getId(), widget.getId());
}
/**
* Adds id to request context page aliases.
*
* @param id
the id to add
* @throws UnifyException
* if an error occurs
*/
protected void addPageAlias(String id) throws UnifyException {
getRequestContextUtil().addPageAlias(getId(), id);
}
protected void onInternalChildPopulated(Widget widget) throws UnifyException {
}
protected abstract void doOnPageConstruct() throws UnifyException;
private void doAddChildWidget(Widget widget, boolean pageConstruct, boolean conforming, boolean ignoreParentState,
boolean external) throws UnifyException {
int childIndex = widgetInfoMap.size();
String childId = WidgetUtils.getChildId(getId(), widget.getId(), childIndex);
widget.setId(childId);
if (pageConstruct) {
widget.onPageConstruct();
widget.setContainer(getContainer());
}
if (!ignoreParentState) {
widget.setEditable(isEditable());
widget.setDisabled(isDisabled());
}
if (conforming) {
widget.setValueStore(getValueStore());
} else {
widget.setValueStore(getThisValueStore());
}
widget.setConforming(conforming);
widgetInfoMap.put(childId, new ChildWidgetInfo(widget, ignoreParentState, external));
}
private ValueStore getThisValueStore() throws UnifyException {
if (thisValueStore == null) {
thisValueStore = createValueStore(this);
}
return thisValueStore;
}
public static class ChildWidgetInfo {
private Widget widget;
private boolean external;
private boolean ignoreParentState;
public ChildWidgetInfo(Widget widget, boolean ignoreParentState, boolean external) {
this.widget = widget;
this.ignoreParentState = ignoreParentState;
this.external = external;
}
public Widget getWidget() {
return widget;
}
public boolean isIgnoreParentState() {
return ignoreParentState;
}
public boolean isExternal() {
return external;
}
public boolean isConforming() {
return widget.isConforming();
}
public boolean isPrivilegeVisible() throws UnifyException {
return widget.isVisible();
}
}
}
// switchAndFetchReplicaHead tries to pull the latest version of a branch. Will fail if the branch
// does not exist on the ReadReplicaDatabase's remote. If the target branch is not a replication
// head, the new branch will not be continuously fetched.
func switchAndFetchReplicaHead(ctx *sql.Context, branch string, db ReadReplicaDatabase) error {
branchRef := ref.NewBranchRef(branch)
var branchExists bool
branches, err := db.ddb.GetBranches(ctx)
if err != nil {
return err
}
for _, br := range branches {
if br.String() == branch {
branchExists = true
break
}
}
cm, err := actions.FetchRemoteBranch(ctx, db.tmpDir, db.remote, db.srcDB, db.DbData().Ddb, branchRef, actions.NoopRunProgFuncs, actions.NoopStopProgFuncs)
if err != nil {
return err
}
if !branchExists {
err = db.ddb.NewBranchAtCommit(ctx, branchRef, cm)
if err != nil {
return err
}
}
err = pullBranches(ctx, db, []string{branch})
if err != nil {
return err
}
return nil
}
def add_namespaces(self, cmd_dict):
j = get_registered_commands(self.logger, cmd_dict)
for namespace, cmds in j.items():
if namespace in self.namespaces:
self.logger.info(f'Replacing namespace {namespace}')
delattr(self, namespace)
else:
self.logger.debug(f'Adding namespace {namespace}')
self.namespaces.append(namespace)
inner = type(namespace, (), {})
setattr(inner, 'methods', [])
for name, jsondict in cmds.items():
cmd = Command(namespace, name, self, jsondict)
self.logger.debug(f'Adding registered command: {name}')
setattr(inner, name, types.MethodType(cmd.genfn(cmd.docstring), self))
inner.methods.append(name)
            setattr(self, namespace, inner)
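The dynamic-namespace pattern used in `add_namespaces` (create an empty class with `type()`, bind generated functions to the host object with `types.MethodType`, then attach the class under the namespace name) can be shown with a minimal self-contained sketch. `Host` and the lambda command below are illustrative stand-ins for the real `Command` machinery.

```python
# Minimal standalone sketch of the same technique. Host and the command
# functions are illustrative stand-ins, not the original classes.
import types

class Host:
    def __init__(self, cmd_dict):
        self.namespaces = []
        for namespace, cmds in cmd_dict.items():
            # an empty class acts as the namespace container
            inner = type(namespace, (), {"methods": []})
            for name, fn in cmds.items():
                # MethodType binds fn so its first argument is this Host
                setattr(inner, name, types.MethodType(fn, self))
                inner.methods.append(name)
            self.namespaces.append(namespace)
            setattr(self, namespace, inner)

host = Host({"math": {"double": lambda self, x: 2 * x}})
print(host.math.double(21))  # prints 42
```

Because the bound method closes over the host instance, commands attached under any namespace can still read the host's state (logger, connection, etc.), which is the point of binding with `MethodType` rather than storing plain functions.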
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import tensorflow as tf
from argo.core.argoLogging import get_logger
tf_logging = get_logger()
from argo.core.utils.argo_utils import create_concat_opts
import os
import numpy as np
from argo.core.hooks.EveryNEpochsTFModelHook import EveryNEpochsTFModelHook
from datasets.Dataset import TRAIN, VALIDATION
class TwoDimPCALatentVariablesHook(EveryNEpochsTFModelHook):
def __init__(self,
model,
tensors,
tensors_names,
period,
time_reference,
dirName,
datasets_keys = [TRAIN, VALIDATION]
):
self._dirName = dirName + '/pca_latent'
super().__init__(model, period, time_reference, dataset_keys=datasets_keys, dirName=self._dirName)
        self._tensors = tensors
        self._tensors_names = tensors_names
self._hook_name = "two_dim_pca_latent_variables_hook"
tf_logging.info("Create TwoDimPCALatentVariablesHook for: " + ", ".join(self._datasets_keys))
# in this specific logger this is not used
self._files = []
def _begin_once(self):
self.concat_ops = {}
self.concat_update_ops = {}
self.concat_reset_ops = {}
self.pca = {}
for ds_key in self._datasets_keys:
self.concat_ops[ds_key] = {}
self.concat_update_ops[ds_key] = {}
self.concat_reset_ops[ds_key] = {}
self.pca[ds_key] = {}
# every hook needs to have its own accumulator to not have problem of erasing memory that other hooks still needs
# maybe memory occupation could be improved if really needed, but great troubles for concurrency in parallel execution
scope = self._hook_name + "/" + ds_key + "_concat_metric/"
#with tf.variable_scope(scope) as scope:
for (tensor, tensor_name) in zip(self._tensors, self._tensors_names):
dim = tensor.shape.as_list()[1]
self.concat_ops[ds_key][tensor_name], self.concat_update_ops[ds_key][tensor_name], self.concat_reset_ops[ds_key][tensor_name] = create_concat_opts(scope + tensor_name, tensor)
# see https://tomaxent.com/2018/01/17/PCA-With-Tensorflow/
singular_values, u, _ = tf.svd(self.concat_ops[ds_key][tensor_name])
sigma = tf.diag(singular_values)
# first two compoments
sigma_reduced = tf.slice(sigma, [0, 0], [dim, 2])
                # couldn't we simply take tf.diag(singular_values[:2])?
                # TODO check: PCA normally projects with the right singular vectors (V, not U) under the convention X = U S V^T
                # the reshape is needed (unclear why; what is the original output shape of u?)
self.pca[ds_key][tensor_name] = tf.matmul(tf.reshape(u,[-1,dim]), sigma_reduced)
                # these are the projections of all the points onto the first two principal axes
def after_create_session(self, session, coord):
super().after_create_session(session, coord)
def do_when_triggered(self, run_context, run_values):
        tf_logging.info("trigger for TwoDimPCALatentVariablesHook")
for ds_key in self._datasets_keys:
#images = self._images[ds_key][1]
session = run_context.session
dataset_initializer = self._ds_initializers[ds_key]
#for tensor_name in self._tensors_names:
session.run([dataset_initializer] + [*self.concat_reset_ops[ds_key].values()])
while True:
try:
session.run([*self.concat_update_ops[ds_key].values()], feed_dict = {self._ds_handle: self._ds_handles[ds_key] }) # **feed_dict,
except tf.errors.OutOfRangeError:
break
returns = session.run([*self.pca[ds_key].values()])
tensors_pca = dict(zip(self.pca[ds_key].keys(),returns))
for tensor_name in self._tensors_names:
pca = tensors_pca[tensor_name]
plt.figure(figsize=(10, 10))
# I need to take into account the number of samples z,
# and replicate z in a little bit tricky way
#labels = self._model.dataset.get_labels(ds_key)
labels = self._model.dataset.get_elements(self._model.y, self._ds_handle, self._ds_handles[ds_key], dataset_initializer, session)
batch_size = self._model.batch_size["eval"]
# TODO document what "samples" means; for tensors, t.shape is clearer than len(t).
samples = int(len(pca)/len(labels)) #self._model.samples
repeated_labels = [0] * (len(labels)*samples)
for i in range(0,len(labels),batch_size):
repeated_labels[i*samples:i*samples+batch_size*samples] = np.tile(labels[i:i+batch_size], samples)
plt.scatter(pca[:, 0], pca[:, 1], c=repeated_labels, cmap='gist_rainbow', s=7)
plt.title(self._plot_title, fontsize=9, loc='center')
fileName = "pca2d_" + tensor_name + "_" + str(ds_key) + "_" + self._time_reference_str + "_" + str(self._time_ref).zfill(4)
plt.savefig(self._dirName + '/' + fileName + '.png') # , bbox_inches='tight'
plt.close()
# TODO this needs to be replaced
# save txt file
data = np.hstack([pca,np.array(repeated_labels).reshape(-1,1)])
np.savetxt(self._dirName + '/' + fileName + '.txt', data, fmt='%.3f %.3f %i', newline='\r\n')
def plot(self):
pass
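The SVD-based projection used by the hook above can be sanity-checked with a plain NumPy sketch of 2-D PCA (names here are illustrative, not from the hook). Note that with X = U S V^T, the first two columns of U*S equal Xc @ V[:, :2], which is why multiplying u by the sliced sigma produces the projections:

```python
import numpy as np

def pca_2d(x):
    """Project the rows of x onto the first two principal axes via SVD."""
    xc = x - x.mean(axis=0)                      # center the data
    u, s, vt = np.linalg.svd(xc, full_matrices=False)
    # u[:, :2] * s[:2] == xc @ vt[:2].T: both give the 2-D projections
    return u[:, :2] * s[:2]

points = np.random.default_rng(0).normal(size=(100, 5))
proj = pca_2d(points)
```

The scatter-plot step in the hook then simply plots `proj[:, 0]` against `proj[:, 1]`.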
|
/**
* Created by ryu on 2/3/17.
*/
import java.util.HashSet;
import java.util.Scanner;

public class AlmostIntegerRockGarden {
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
int x = in.nextInt();
int y = in.nextInt();
int i = 0;
for (int X = 1; X <= 12; X++) {
for (int Y = 1; Y <= X; Y++) {
if (X == Math.max(Math.abs(x), Math.abs(y)) && Y == Math.min(Math.abs(x), Math.abs(y))) {
HashSet<String> unique = new HashSet<>();
unique.add(x + "," + y);
for (int z = 0; z < 11; z++) {
int a = S[i][2*z];
int b = S[i][2*z + 1];
boolean success = false;
for (int Z = 0; Z < 8 && !success; Z++) {
if (Z == 4) {
int tmp = a;
a = b;
b = tmp;
}
if (Z%2==1) a = -a;
b = -b;
success = unique.add(a + "," + b);
}
System.out.println(a + " " + b);
}
return;
}
i++;
}
}
}
static int[][] S = new int[][]{
new int[]{2, 1, 5, 1, 3, 1, 9, 6, 4, 4, 10, 2, 11, 2, 7, 5, 9, 7, 6, 5, 12, 8,},
new int[]{3, 1, 10, 2, 6, 4, 5, 1, 5, 5, 6, 5, 9, 6, 11, 3, 6, 4, 10, 5, 7, 5,},
new int[]{3, 1, 10, 2, 3, 2, 12, 6, 9, 7, 12, 8, 6, 4, 6, 5, 5, 1, 3, 3, 7, 5,},
new int[]{6, 5, 4, 4, 12, 8, 11, 2, 2, 1, 9, 6, 11, 3, 5, 1, 10, 2, 7, 5, 1, 1,},
new int[]{6, 5, 11, 2, 7, 1, 10, 2, 7, 5, 11, 3, 9, 6, 3, 1, 2, 1, 9, 6, 5, 1,},
new int[]{9, 7, 2, 2, 9, 6, 6, 3, 6, 3, 3, 1, 7, 5, 12, 8, 5, 1, 10, 2, 6, 5,},
new int[]{9, 4, 10, 8, 7, 1, 12, 4, 12, 4, 8, 2, 9, 4, 3, 1, 11, 4, 9, 4, 9, 8,},
new int[]{6, 4, 10, 2, 12, 8, 7, 5, 7, 1, 3, 1, 11, 3, 8, 4, 6, 5, 3, 2, 5, 1,},
new int[]{7, 1, 10, 2, 5, 1, 11, 3, 7, 5, 6, 4, 3, 1, 12, 6, 6, 5, 9, 6, 6, 4,},
new int[]{10, 2, 8, 4, 9, 7, 3, 1, 12, 8, 4, 2, 5, 1, 9, 6, 6, 5, 1, 1, 7, 5,},
new int[]{8, 4, 2, 1, 9, 6, 10, 2, 6, 5, 11, 3, 7, 5, 2, 1, 7, 1, 3, 1, 12, 8,},
new int[]{9, 4, 12, 6, 12, 6, 9, 4, 10, 1, 9, 9, 1, 1, 10, 2, 8, 7, 10, 2, 11, 10,},
new int[]{9, 1, 3, 1, 12, 8, 11, 1, 12, 10, 12, 7, 10, 3, 8, 4, 9, 7, 1, 1, 10, 4,},
new int[]{12, 7, 12, 3, 12, 4, 9, 6, 12, 2, 10, 3, 12, 4, 11, 1, 11, 7, 11, 1, 6, 6,},
new int[]{10, 2, 6, 4, 3, 1, 7, 5, 6, 4, 9, 7, 5, 1, 9, 6, 8, 4, 6, 5, 4, 2,},
new int[]{12, 3, 10, 9, 4, 2, 10, 9, 7, 2, 5, 1, 9, 4, 7, 5, 9, 1, 8, 5, 9, 4,},
new int[]{12, 7, 12, 7, 9, 7, 7, 3, 3, 2, 12, 7, 10, 5, 11, 5, 3, 1, 7, 7, 10, 1,},
new int[]{6, 4, 10, 2, 12, 8, 7, 5, 7, 1, 3, 1, 3, 2, 5, 1, 9, 7, 6, 5, 6, 3,},
new int[]{12, 8, 5, 1, 2, 2, 6, 5, 7, 5, 10, 2, 3, 1, 12, 6, 3, 3, 3, 2, 11, 3,},
new int[]{1, 1, 12, 8, 12, 6, 9, 6, 10, 2, 3, 1, 11, 3, 1, 1, 3, 3, 5, 1, 7, 5,},
new int[]{12, 11, 9, 4, 11, 9, 10, 8, 12, 11, 9, 8, 7, 6, 10, 1, 9, 1, 2, 1, 9, 3,},
new int[]{10, 2, 11, 3, 7, 5, 6, 4, 3, 1, 4, 2, 5, 1, 6, 5, 8, 4, 3, 2, 12, 8,},
new int[]{8, 4, 12, 10, 11, 6, 11, 8, 12, 10, 8, 2, 12, 10, 7, 4, 9, 9, 11, 4, 12, 1,},
new int[]{12, 7, 7, 7, 6, 2, 3, 1, 3, 2, 12, 7, 11, 5, 11, 2, 12, 7, 9, 7, 10, 1,},
new int[]{4, 4, 9, 3, 10, 5, 12, 8, 12, 2, 7, 5, 10, 4, 5, 2, 8, 3, 7, 4, 10, 6,},
new int[]{6, 4, 3, 2, 5, 5, 3, 2, 3, 1, 9, 6, 6, 5, 5, 1, 10, 2, 11, 3, 12, 6,},
new int[]{11, 2, 11, 1, 11, 7, 11, 4, 10, 8, 9, 2, 10, 8, 12, 11, 10, 1, 3, 2, 11, 1,},
new int[]{7, 3, 12, 7, 10, 5, 11, 5, 3, 1, 6, 2, 9, 7, 10, 1, 3, 2, 12, 7, 12, 7,},
new int[]{8, 3, 12, 2, 10, 4, 5, 2, 8, 1, 9, 3, 4, 4, 12, 8, 7, 5, 10, 6, 11, 2,},
new int[]{11, 8, 12, 10, 7, 2, 7, 4, 12, 10, 9, 9, 12, 10, 11, 6, 11, 4, 8, 4, 9, 8,},
new int[]{4, 4, 10, 4, 10, 6, 9, 3, 7, 4, 7, 5, 8, 1, 5, 2, 12, 2, 12, 8, 10, 5,},
new int[]{9, 6, 3, 1, 5, 1, 7, 1, 9, 7, 10, 2, 9, 6, 7, 5, 6, 5, 3, 2, 4, 2,},
new int[]{8, 5, 9, 5, 6, 5, 7, 5, 8, 5, 11, 5, 11, 3, 9, 6, 7, 3, 11, 10, 10, 7,},
new int[]{8, 4, 9, 6, 10, 2, 12, 8, 6, 5, 5, 5, 3, 1, 5, 1, 7, 5, 9, 7, 4, 2,},
new int[]{10, 6, 8, 5, 6, 1, 9, 4, 9, 7, 10, 6, 10, 9, 5, 3, 9, 9, 9, 5, 12, 12,},
new int[]{9, 5, 10, 2, 3, 2, 5, 3, 10, 2, 8, 1, 8, 2, 10, 8, 3, 1, 11, 5, 10, 2,},
new int[]{12, 7, 10, 4, 10, 3, 5, 3, 12, 10, 3, 1, 12, 8, 8, 4, 11, 3, 11, 1, 1, 1,},
new int[]{9, 1, 11, 4, 12, 10, 5, 1, 11, 6, 7, 5, 9, 7, 10, 7, 8, 7, 3, 2, 6, 2,},
new int[]{4, 4, 12, 2, 10, 4, 8, 1, 10, 5, 5, 2, 7, 5, 8, 1, 8, 3, 10, 6, 12, 8,},
new int[]{9, 4, 6, 2, 12, 3, 3, 1, 7, 1, 9, 8, 12, 4, 10, 8, 11, 4, 6, 2, 9, 4,},
new int[]{11, 5, 11, 10, 8, 5, 9, 6, 10, 7, 8, 5, 6, 5, 7, 3, 9, 7, 8, 5, 7, 5,},
new int[]{11, 3, 12, 8, 5, 1, 3, 1, 2, 1, 4, 2, 7, 5, 10, 2, 6, 5, 7, 1, 6, 3,},
new int[]{9, 6, 10, 2, 6, 5, 7, 5, 3, 2, 9, 6, 6, 3, 3, 1, 6, 3, 5, 1, 7, 1,},
new int[]{11, 4, 9, 4, 4, 4, 9, 4, 9, 3, 12, 4, 6, 2, 1, 1, 9, 4, 10, 8, 12, 3,},
new int[]{8, 4, 8, 2, 7, 2, 7, 4, 11, 4, 12, 10, 12, 10, 11, 6, 9, 8, 12, 10, 11, 8,},
new int[]{12, 7, 12, 7, 9, 7, 7, 3, 3, 2, 12, 7, 9, 3, 4, 2, 11, 5, 7, 7, 6, 3,},
new int[]{7, 1, 2, 1, 9, 6, 11, 3, 12, 8, 8, 4, 5, 1, 3, 1, 7, 5, 2, 1, 6, 5,},
new int[]{8, 1, 12, 8, 12, 11, 8, 7, 8, 7, 5, 4, 10, 8, 3, 1, 8, 1, 10, 6, 10, 3,},
new int[]{4, 4, 10, 5, 5, 2, 12, 2, 10, 6, 8, 1, 8, 1, 9, 3, 7, 5, 8, 3, 12, 8,},
new int[]{10, 2, 6, 4, 3, 1, 7, 5, 6, 4, 9, 7, 9, 6, 5, 5, 6, 5, 5, 1, 2, 1,},
new int[]{8, 3, 12, 2, 10, 4, 5, 2, 8, 1, 9, 3, 7, 4, 4, 4, 12, 8, 7, 5, 11, 2,},
new int[]{9, 1, 11, 4, 12, 10, 5, 1, 11, 6, 7, 5, 9, 2, 6, 2, 9, 7, 8, 7, 3, 2,},
new int[]{11, 1, 11, 1, 11, 7, 12, 11, 10, 5, 7, 6, 10, 8, 11, 4, 3, 2, 10, 1, 7, 6,},
new int[]{12, 4, 12, 2, 10, 8, 12, 10, 6, 3, 12, 10, 6, 1, 11, 4, 11, 5, 12, 10, 6, 6,},
new int[]{10, 1, 9, 4, 12, 6, 9, 4, 4, 2, 5, 2, 10, 2, 11, 10, 10, 2, 8, 7, 8, 4,},
new int[]{11, 2, 9, 2, 7, 6, 11, 7, 12, 11, 10, 8, 10, 8, 11, 4, 10, 1, 3, 2, 11, 1,},
new int[]{5, 1, 3, 1, 9, 6, 6, 5, 2, 1, 2, 2, 3, 3, 10, 2, 7, 5, 9, 7, 12, 8,},
new int[]{4, 2, 3, 1, 7, 1, 10, 2, 2, 1, 12, 8, 7, 5, 5, 1, 6, 5, 9, 6, 6, 3,},
new int[]{12, 11, 10, 8, 11, 1, 7, 6, 7, 6, 11, 7, 10, 1, 11, 1, 11, 2, 3, 2, 10, 8,},
new int[]{6, 2, 12, 7, 7, 3, 9, 7, 10, 1, 10, 5, 7, 7, 12, 7, 3, 2, 12, 7, 3, 1,},
new int[]{8, 4, 8, 2, 7, 2, 7, 4, 11, 4, 12, 10, 9, 9, 11, 8, 9, 8, 12, 10, 12, 10,},
new int[]{9, 1, 12, 11, 11, 4, 12, 10, 5, 1, 11, 2, 10, 6, 6, 6, 10, 8, 7, 3, 10, 9,},
new int[]{7, 2, 12, 10, 8, 1, 11, 6, 8, 4, 9, 9, 8, 2, 12, 10, 11, 4, 12, 10, 12, 1,},
new int[]{6, 6, 7, 6, 12, 1, 10, 8, 10, 1, 9, 3, 12, 11, 9, 4, 2, 1, 9, 1, 12, 11,},
new int[]{10, 2, 6, 6, 5, 2, 10, 2, 4, 4, 12, 6, 9, 4, 8, 7, 12, 6, 9, 4, 10, 1,},
new int[]{9, 4, 11, 3, 10, 6, 8, 5, 10, 6, 5, 3, 6, 1, 10, 10, 8, 7, 10, 9, 9, 5,},
new int[]{9, 3, 9, 4, 11, 4, 9, 3, 5, 5, 4, 1, 10, 8, 9, 3, 8, 2, 9, 4, 9, 4,},
new int[]{9, 3, 11, 2, 7, 4, 10, 6, 4, 4, 7, 5, 10, 4, 8, 1, 5, 2, 8, 3, 12, 8,},
new int[]{7, 5, 6, 1, 10, 9, 7, 2, 8, 5, 10, 9, 9, 1, 9, 4, 5, 1, 4, 2, 9, 4,},
new int[]{10, 7, 10, 2, 11, 7, 12, 7, 5, 4, 9, 6, 12, 11, 11, 7, 8, 4, 9, 9, 10, 8,},
new int[]{11, 3, 12, 6, 6, 4, 3, 1, 5, 5, 10, 2, 6, 4, 5, 1, 9, 6, 7, 5, 6, 5,},
new int[]{6, 4, 3, 1, 3, 2, 6, 5, 3, 2, 10, 2, 7, 5, 5, 1, 11, 3, 7, 1, 9, 6,},
new int[]{11, 2, 9, 7, 5, 5, 3, 2, 12, 7, 2, 2, 10, 1, 9, 3, 7, 3, 11, 5, 12, 7,},
new int[]{8, 3, 12, 2, 10, 4, 5, 2, 8, 1, 9, 3, 7, 5, 10, 5, 4, 4, 10, 6, 8, 1,},
new int[]{6, 3, 6, 3, 12, 8, 9, 6, 3, 1, 7, 1, 6, 5, 11, 3, 10, 2, 5, 1, 7, 5,},
new int[]{12, 10, 9, 8, 8, 2, 8, 4, 9, 9, 12, 10, 11, 4, 11, 8, 11, 6, 7, 2, 7, 4,},
new int[]{12, 11, 9, 3, 9, 1, 9, 8, 9, 2, 2, 1, 9, 4, 10, 1, 11, 9, 10, 8, 6, 6,},
new int[]{10, 6, 8, 5, 6, 1, 9, 4, 9, 7, 10, 6, 10, 9, 5, 3, 9, 9, 9, 5, 8, 7,}
};
} |
package ccv3
import (
"encoding/json"
"code.cloudfoundry.org/cli/api/cloudcontroller"
"code.cloudfoundry.org/cli/api/cloudcontroller/ccv3/internal"
)
// Relationship represents a one to one relationship.
// An empty GUID will be marshaled as `null`.
type Relationship struct {
GUID string
}
func (r Relationship) MarshalJSON() ([]byte, error) {
if r.GUID == "" {
var emptyCCRelationship struct {
Data interface{} `json:"data"`
}
return json.Marshal(emptyCCRelationship)
}
var ccRelationship struct {
Data struct {
GUID string `json:"guid"`
} `json:"data"`
}
ccRelationship.Data.GUID = r.GUID
return json.Marshal(ccRelationship)
}
func (r *Relationship) UnmarshalJSON(data []byte) error {
var ccRelationship struct {
Data struct {
GUID string `json:"guid"`
} `json:"data"`
}
err := cloudcontroller.DecodeJSON(data, &ccRelationship)
if err != nil {
return err
}
r.GUID = ccRelationship.Data.GUID
return nil
}
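The marshaling behavior above (an empty GUID serializes as `{"data": null}`, a set GUID as a nested object) can be sketched in Python for illustration; this is just a model of the JSON shapes, not part of the Go client:

```python
import json

def marshal_relationship(guid):
    """Mimic Relationship.MarshalJSON: an empty GUID yields null data."""
    if guid == "":
        return json.dumps({"data": None})
    return json.dumps({"data": {"guid": guid}})

def unmarshal_relationship(payload):
    """Mimic Relationship.UnmarshalJSON: pull the GUID back out."""
    data = json.loads(payload).get("data") or {}
    return data.get("guid", "")

print(marshal_relationship(""))       # {"data": null}
print(marshal_relationship("abc"))    # {"data": {"guid": "abc"}}
```

Round-tripping either form recovers the original GUID, with null mapping back to the empty string.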
// DeleteIsolationSegmentOrganization will delete the relationship between
// the isolation segment and the organization provided.
func (client *Client) DeleteIsolationSegmentOrganization(isolationSegmentGUID string, orgGUID string) (Warnings, error) {
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.DeleteIsolationSegmentRelationshipOrganizationRequest,
URIParams: internal.Params{"isolation_segment_guid": isolationSegmentGUID, "organization_guid": orgGUID},
})
return warnings, err
}
// DeleteServiceInstanceRelationshipsSharedSpace will delete the sharing relationship
// between the service instance and the shared-to space provided.
func (client *Client) DeleteServiceInstanceRelationshipsSharedSpace(serviceInstanceGUID string, spaceGUID string) (Warnings, error) {
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.DeleteServiceInstanceRelationshipsSharedSpaceRequest,
URIParams: internal.Params{"service_instance_guid": serviceInstanceGUID, "space_guid": spaceGUID},
})
return warnings, err
}
// GetOrganizationDefaultIsolationSegment returns the relationship between an
// organization and its default isolation segment.
func (client *Client) GetOrganizationDefaultIsolationSegment(orgGUID string) (Relationship, Warnings, error) {
var responseBody Relationship
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.GetOrganizationRelationshipDefaultIsolationSegmentRequest,
URIParams: internal.Params{"organization_guid": orgGUID},
ResponseBody: &responseBody,
})
return responseBody, warnings, err
}
// GetSpaceIsolationSegment returns the relationship between a space and its
// isolation segment.
func (client *Client) GetSpaceIsolationSegment(spaceGUID string) (Relationship, Warnings, error) {
var responseBody Relationship
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.GetSpaceRelationshipIsolationSegmentRequest,
URIParams: internal.Params{"space_guid": spaceGUID},
ResponseBody: &responseBody,
})
return responseBody, warnings, err
}
// SetApplicationDroplet sets the specified droplet on the given application.
func (client *Client) SetApplicationDroplet(appGUID string, dropletGUID string) (Relationship, Warnings, error) {
var responseBody Relationship
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.PatchApplicationCurrentDropletRequest,
URIParams: internal.Params{"app_guid": appGUID},
RequestBody: Relationship{GUID: dropletGUID},
ResponseBody: &responseBody,
})
return responseBody, warnings, err
}
// UpdateOrganizationDefaultIsolationSegmentRelationship sets the default isolation segment
// for an organization on the controller.
// If isoSegGuid is empty it will reset the default isolation segment.
func (client *Client) UpdateOrganizationDefaultIsolationSegmentRelationship(orgGUID string, isoSegGUID string) (Relationship, Warnings, error) {
var responseBody Relationship
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.PatchOrganizationRelationshipDefaultIsolationSegmentRequest,
URIParams: internal.Params{"organization_guid": orgGUID},
RequestBody: Relationship{GUID: isoSegGUID},
ResponseBody: &responseBody,
})
return responseBody, warnings, err
}
// UpdateSpaceIsolationSegmentRelationship assigns an isolation segment to a space and
// returns the relationship.
func (client *Client) UpdateSpaceIsolationSegmentRelationship(spaceGUID string, isolationSegmentGUID string) (Relationship, Warnings, error) {
var responseBody Relationship
_, warnings, err := client.MakeRequest(RequestParams{
RequestName: internal.PatchSpaceRelationshipIsolationSegmentRequest,
URIParams: internal.Params{"space_guid": spaceGUID},
RequestBody: Relationship{GUID: isolationSegmentGUID},
ResponseBody: &responseBody,
})
return responseBody, warnings, err
}
|
import * as React from "react";
import { IValueBoxProps } from "@alethio/ui/lib/layout/content/box/ValueBox";
export interface IUncleHashBoxProps {
children: string;
variant?: IValueBoxProps["variant"];
linkTo?: string;
}
export declare class UncleHashBox extends React.Component<IUncleHashBoxProps> {
render(): JSX.Element;
}
//# sourceMappingURL=UncleHashBox.d.ts.map |
def save_admin(name, value, app=None):
from .models import Parameter
if app is None:
app = guess_extension_name()
__is_defined(app, 'A', name)
fullname = "%s.%s" % (app, name)
try:
p = Parameter.objects.get(name=fullname)
except Parameter.DoesNotExist:
p = Parameter()
p.name = fullname
p.value = None
f = get_parameter_form('A', name, app)
f()._save_parameter(p, name, value) |
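The try/except lookup in save_admin is the usual get-or-create pattern; stripped of Django, it reduces to something like this (a sketch, with a plain dict standing in for the Parameter table and a hypothetical "app.name" key):

```python
def get_or_create(store, fullname):
    """Return the record for fullname, creating a blank one if absent."""
    try:
        rec = store[fullname]
    except KeyError:
        rec = {"name": fullname, "value": None}
        store[fullname] = rec
    return rec

db = {}
p = get_or_create(db, "myapp.timeout")   # hypothetical parameter name
p["value"] = "30"
```

A second call with the same name returns the existing record instead of creating a new one.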
// GetVfPciDevFromMAC returns the VF PCI device associated with the given MAC.
// It compares against the administrative MAC of SR-IOV configured net devices.
// TODO: move this method to github: Mellanox/sriovnet
func GetVfPciDevFromMAC(mac string) (string, error) {
links, err := netlink.LinkList()
if err != nil {
return "", err
}
matchDevs := []string{}
for _, link := range links {
if len(link.Attrs().Vfs) > 0 {
for _, vf := range link.Attrs().Vfs {
if vf.Mac.String() == mac {
vfPath, err := filepath.EvalSymlinks(fmt.Sprintf("/sys/class/net/%s/device/virtfn%d", link.Attrs().Name, vf.ID))
if err == nil {
matchDevs = append(matchDevs, path.Base(vfPath))
}
}
}
}
}
var dev string
switch len(matchDevs) {
case 1:
dev = matchDevs[0]
err = nil
case 0:
err = fmt.Errorf("could not find VF PCI device according to administrative mac address set on PF")
default:
err = fmt.Errorf("found more than one VF PCI device matching provided administrative mac address")
}
return dev, err
} |
Big data epidemic prevention and control framework design based on 5G base station data fusion
This paper studies the design of a big data epidemic prevention and control framework based on 5G base station data fusion. The system addresses the organization of metadata, the storage of metadata, and the application of metadata. An object-oriented method is adopted for organizing the metadata so that the overall data framework can be analyzed more efficiently. A main development goal of 5G mobile communication is close integration with other wireless mobile communication technologies, and it is through this integration that the control framework design is achieved. The design is verified through a comparative analysis.
C O N F I D E N T I A L LA PAZ 000722 SIPDIS STATE FOR WHA A/S SHANNON, DAS BILL MCGLYNN NSC FOR DAN RESTREPO USAID FOR A/ADMINISTRATOR DEBBIE KENNEDY E.O. 12958: DECL: 05/19/2019 TAGS: SNAR, PGOV, PREL, EAID, ASEC, PTER, BL SUBJECT: SCENE SETTER FOR A/S SHANNON VISIT Classified By: Charge d'Affaires Kris Urs for reasons 1.4 (b, d) 1. (C) Welcome to La Paz! Your visit comes at a critical moment in U.S./Bolivian relations. In the past year, bilateral relations sank to their lowest point in decades, largely as a result of unprovoked Government of Bolivia rhetoric and action. Events over the past twelve months include a march on the U.S. Chancery compound by some 70,000 pro-Government of Bolivia demonstrators, the expulsion of USAID from the Chapare (one of Bolivia's two major drug producing areas), the expulsion of first the U.S. Ambassador in La Paz and then the Bolivian Ambassador in Washington, the expulsion of 38 DEA agents and other personnel from Bolivia, drug decertification and suspension of Andean Trade Preference and Drug Enforcement Act (ATPDEA) trade benefits for Bolivia, and the expulsion of a second secretary at the U.S. Embassy in La Paz on espionage charges. An Opportunity -------------- 2. (C) There are emerging signs that the worst may be coming to an end. Since the election of President Barack Obama, the GOB's anti-American rhetoric has softened and the GOB has made a number of approaches to the U.S. Government seeking better relations.
These include: a) two congratulatory letters sent by President Morales to President Obama, b) a congratulatory letter sent by Foreign Minister Choquehuanca to Secretary of State Clinton, c) an approach by Bolivian Charge d'Affaires to the United Nations Pablo Solon to Assistant Secretary Shannon, d) approaches made to the Embassy on behalf of Foreign Minister Choquehuanca by the President of the National Assembly (a MAS politician) and the President of the Foreign Relations Committee of the National Assembly (a MNR politician) seeking better ties, and e) a successful meeting between Foreign Minister Choquehuanca and Secretary of State Clinton on the margins of the April 17-19 Summit of the Americas in Trinidad and Tobago. Foreign Minister Choquehuanca has repeatedly publicly stated that the GOB seeks better relations with the United States Government; however, President Morales and other senior GOB officials continue intermittently to spout anti-American rhetoric. Overall, however, the tone appears to have softened. Why the Change? --------------- 3. (C) There are many possible reasons for the GOB's professed desire to improve ties. President Morales and his advisors may feel that they have gone too far in attacking the U.S. Government and wish to reposition themselves prior to December 6, 2009 presidential elections, with an eye toward regaining middle class voters. Suspension of ATPDEA benefits is substantially affecting employment in El Alto; the government has been hearing from its supporters in the city who want it to do all it can to regain benefits. There are signs President Morales is becoming increasingly anxious about drug cartel activity in Bolivia; while we do not believe that President Morales will ever accept the DEA back into the country, there appears to be a recognition on his part that he has to do something to deal with the drug problem.
He may believe that he is more likely to get European help if the government brings about a rapprochement with the U.S. Government. 4. (C) But we believe the most compelling reason for the change of tone is simply President Obama's popularity in the region and in Bolivia. A key component of the GOB's political strategy during its first three years in power was to bash President Bush and the United States, accusing the U.S. of interfering in Bolivian internal politics, conspiring against Morales and planning his assassination. These accusations, outlandish as they were, boosted Morales' political standing among his base, in part because of President Bush's low ratings. This strategy is no longer viable now that President Obama is in office. President Obama's ratings are high and the U.S. Government is taking steps to end many of the unpopular policies pursued by the former administration. In this context, bashing the United States is not a winning political strategy. But How Durable? ---------------- 5. (C) Ultimately, however, it is not clear whether the change in tone on the part of the GOB will be durable. In his nearly 20 years in public life, President Morales has made attacking the United States a staple of his political discourse, first as a cocalero leader in the Chapare, then as a member of the Chamber of Deputies, and finally as President. Morales has not heretofore demonstrated a large amount of flexibility or creativity in his political discourse; he is a bit of a "one-note Johnny." At the present, with a divided opposition and widespread popular support, Morales is in a strong position and feels he can politically afford some accommodation with the United States. As we move closer to December 2009 elections, however, the President may return to U.S. bashing as a way to boost his political support. Strong Anti American Faction ---------------------------- 6.
(C) Despite the incipient rapprochement, there remain officials in the GOB who are deeply anti-American and who we believe are opposed to improved relations. Presidency Minister Quintana and Government Minister Rada lead the list. It was recently announced that a group of Iranian legislators will visit Bolivia starting May 18; we do not believe that the timing is coincidental, and we suspect Quintana had something to do with the visit. Recently, there has been a spate of public attacks against USAID, including a move to throw USAID out of the city of El Alto; there are signs that Quintana is involved. Quintana and Rada have prevailed in the past over the Foreign Ministry when it has come to relations with the United States. Cuba, Venezuela and Iran ------------------------ 7. (C) Cuba, Venezuela, and Iran are increasingly influential here in La Paz and throughout Bolivia. These countries will not welcome any rapprochement in U.S./Bolivian relations. President Morales relies heavily on the advice of Hugo Chavez, speaking with him frequently. Key Points of Contention -- Drugs and Democracy --------------------------------------------- - 8. (C) Two key substantive points of contention in the U.S./Bolivian relationship are drugs and democracy. Overcoming these issues will be very difficult. 9. (C) Drugs: Since November 2008, when he expelled the DEA "on a personal decision," President Morales has repeatedly (and without proof) accused the DEA of performing "covert political operations," of plotting against him, and of knowing about and/or assisting in narcotrafficking. While we remain committed to working with the government in the fight against drugs despite these setbacks, the expulsion of DEA has severely hampered the government's ability to investigate drug activity.
There is simply no domestic or international group capable of replacing DEA's expertise, despite the government's public calls for increased partnerships with South American and/or European nations (which have not borne fruit). Anecdotal evidence, including the recent discovery of "mega-labs" in the dense jungle region of the Chapare, points to the arrival of Mexican and Colombian cartels. Although the situation could rapidly worsen, Morales appears set against allowing the DEA's return. The government desires that any anti-drug activity be led by Bolivian forces. At the same time, the Morales regime's coca policy, with its emphasis on "social control" enforced by the coca syndicates themselves, has been a failure. Coca cultivation has increased steadily since Morales took office, with a concomitant rise in cocaine production. We have been told frankly by the Vice Foreign Minister that the cocaleros run the GOB's coca policy. While the GOB does not want to be characterized as a "narco-state," given its current pro-coca policies and the lack of a viable intelligence gathering alternative to the DEA, it is clear the narcotics problem will only deteriorate, which will in turn pose a serious irritant to the relationship. 10. (C) Democracy: President Morales and his Movement Toward Socialism (MAS) party have grown increasingly authoritarian in nature since December 2005 elections. The opposition claims the MAS is behind a systematic dismantling of the judiciary branch; the country's Supreme Court, Constitutional Tribunal, and Judicial Council all currently lack a quorum, leaving them effectively defunct and unable to rule on the legality of MAS actions. The MAS has surrounded or threatened to surround the Congress to force favorable votes several times (including one march led personally by Morales), and MAS leadership has called for the closure of Congress and rule by supreme decree.
MAS-related social groups have engaged in mob violence under the pretext of "communitarian justice" to repress dissent, including the invasion of former Vice President Victor Hugo Cardenas' home, the beating of former Congress Representative Marlene Paredes, and the public whipping of former indigenous leader Marcial Fabricano. Although Morales has won several referenda with over 50 or even 60 percent of the vote, the opposition points to a campaign of fraud, intimidation, and bribery in MAS strongholds to ensure high voter turnout. This steady erosion of democratic practices and institutions under the Morales regime shows no sign of abating and is likely to prove a serious irritant to the bilateral relationship. URS |
package org.harmony.toddler.mybatis.groovy.demo.domain;
import lombok.Data;
import java.util.Date;
@Data
public class User {
private Integer id;
private String name;
private Integer age;
private String addr;
private Date createTime;
private Date updateTime;
} |
/**
*
* Do a lot of inserts / removes in patterns: remove n, insert n in random order
*
*/
@Test
public void testInsert() {
insert1(0);
valTime = 0L;
long startTime = System.nanoTime();
int ii = 0;
insert1(ii);
long v1 = (System.nanoTime() - startTime) / 1000000;
long v2 = valTime / 1000000;
System.out.format("Test SZ: %d SZp2: %d took %,d milliseconds%n", SZ, SZp2, v1);
System.out.format("val iter time: %,d insert/remove time: %,d%n", v2, v1 - v2);
for (int i = 0; i < 20; i++) {
    Miter m = iterTimes.get(i);
    System.out.format("outer: %d, time: %,d inner: %,d itemNbr: %,d%n", m.outerIter, m.time,
        m.innerIter, m.itemNbr);
}
// reference timing for item 99999, looked up once after the loop
Miter mRef = iterTimes.get(99999);
System.out.format("outer: %d, time: %,d inner: %,d itemNbr: %,d ref 99999%n", mRef.outerIter,
    mRef.time, mRef.innerIter, mRef.itemNbr);
}
import numpy
def estimate_gaussian(x_array):
"""
This function estimates the parameters of a Gaussian distribution
using a provided dataset.
Parameters
----------
x_array : array_like
The dataset of shape (m x n) with each n-dimensional
data point in one row, and each total of m data points.
Returns
-------
mean_mu : array_like
A vector of shape (n,) containing the means of each dimension.
sigma2 : array_like
A vector of shape (n,) containing the computed
variances of each dimension.
Instructions
------------
Compute the mean of the data and the variances
In particular, mean_mu[i] should contain the mean of
the data for the i-th feature and sigma2[i]
should contain variance of the i-th feature.
"""
num_examples, num_features = x_array.shape
mean_mu = numpy.zeros(num_features)
sigma2 = numpy.zeros(num_features)
mean_mu = (1/num_examples)*numpy.sum(x_array, axis=0)
sigma2 = (1/num_examples)*numpy.sum((x_array - mean_mu)**2, axis=0)
return mean_mu, sigma2
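A quick worked check of those formulas (the per-feature mean, and the biased variance that divides by m rather than m - 1), on a tiny hypothetical dataset:

```python
import numpy

x = numpy.array([[1.0, 10.0],
                 [3.0, 10.0],
                 [5.0, 10.0]])
m = x.shape[0]
mean_mu = x.sum(axis=0) / m                    # per-feature mean -> [3.0, 10.0]
sigma2 = ((x - mean_mu) ** 2).sum(axis=0) / m  # biased variance -> [8/3, 0.0]
```

The second feature is constant, so its variance is exactly zero; in an anomaly-detection setting such a feature would make the Gaussian density degenerate and is usually dropped or smoothed.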
|
A fan in Brazil traveled 80 miles from Guaratuba to Curitiba to watch his beloved team play, but the incredible thing is that he did it lying on a gatch bed. The sick man attended Coritiba's match hoping to see his club continue their epic victorious run in the tournament.
Soccer, an Alternative Healing Therapy for Fans
In today's world, the most common reaction to an illness is paying your most trusted doctor a visit. He then makes a diagnosis and, based on that, may order a couple of lab tests or simply prescribe the 'right' medicine. Western medicine, however, has overlooked people's own capacity to heal through alternatives such as laughter therapy, acupuncture, alphabiotics, raw diets, etc. Can watching soccer live at a stadium also be an alternative path to healing?
The energy that travels through a stadium when the home team scores a goal is incomparable. The joy felt in that glorious moment could have an effect similar to a morphine shot's (well, not exactly, but you get the point).
Soccer disconnects you from the stressful routine of daily life and takes you to a place medicine can't. The rush of adrenaline felt at soccer matches is positive for your brain. Stress and bad eating habits have been found to be among the most common causes of disease. Other medical studies have shown that recreational activities involving mental stress are healthy, especially if they are unpredictable and require your mind to adjust to new conditions. The situations that occur in a soccer game have precisely these characteristics. Although fans are always expecting a goal to come, it is unpredictable when and how it happens. This means that if your team throws an epic comeback with five minutes to go and you are present at the stadium, you will go home feeling light and vigorous.
There is, however, a downside to all of this. The possibility that the team you follow will lose is always present, and that usually leaves fans with a bitter aftertaste that can last for days. A fan in Brazil was willing to take the risk and traveled 80 miles from Guaratuba to Curitiba to watch his beloved team play, but the incredible thing is that he did it lying on a gatch bed. The sick man attended Coritiba's home match against Vasco da Gama for the Brasileirão hoping to see his club continue their epic victorious run in the tournament. Sadly for the spectator, Coritiba lost to Vasco, but still remains among the top three contenders in the Brasileirão, two points behind Cruzeiro and Botafogo.
Next time an ill person visits a stadium in these conditions, hopefully he will have a better experience. Nonetheless, this fan was probably delighted to see his team play despite the loss, as it is obvious that nothing was going to stop him from going to Couto Pereira stadium in Curitiba. |
//
// SaveVM.h
// cepin
//
// Created by dujincai on 15/6/4.
// Copyright (c) 2015年 talebase. All rights reserved.
//
#import "BaseTableViewModel.h"
#import "SaveJobDTO.h"
#import "SaveCompanyModel.h"
@interface SaveVM : BaseTableViewModel
@property(nonatomic,retain)id deleteJobStateCode;
@property(nonatomic,retain)NSMutableArray *selectJobs;
@property(nonatomic,strong)id deleteSaveCompany;
@property(nonatomic,strong)NSMutableArray *selectedCompanies;
-(void)deleteJobs;
-(void)cancelSavecompany;
@end
|
#include<bits/stdc++.h>
using namespace std;
int arr[100] , dp[100];
int n , sum = 0;
int calc(int x){
if(x == n + 1) return 0;
int &ret = dp[x]; if(ret != -(1<<30)) return ret;
ret = arr[x] - calc(x + 1);
int theta = 0;
for(int j = x ; j < n ; j++){
theta -= arr[j];
ret = max(ret , theta + arr[j + 1] - calc(j + 2));
}
return ret;
}
int main(){
cin>>n; for(int j = 1 ; j <= n ; j++){
cin>>arr[j];
sum += arr[j];
}
for(int j = 0 ; j < 100 ; j++) dp[j] = -(1<<30);
int diff = calc(1);  // optimal score difference for the first player
cout<<(sum - diff)/2<<' '<<(sum + diff)/2<<endl;
}
|
# Exercise 9.2: Repetitions in lists
## Question 1
from typing import List, Set, TypeVar
T = TypeVar('T')
def repetes(l : List[T]) -> Set[T]:
    """Return the set of elements repeated in l.
    """
    # Elements already seen
    vus : Set[T] = set()
    # Result set
    ens : Set[T] = set()
    e : T
    for e in l:
        if e in vus:    # seen before?
            ens.add(e)  # a repetition
        else:
            vus.add(e)  # now it has been seen!
    return ens
# Test suite
assert repetes([1, 2, 23, 9, 2, 23, 6, 2, 9]) == {2, 9, 23}
assert repetes([1, 2, 3, 4]) == set()
assert repetes(['bonjour', 'ça', 'ça', 'va', '?']) == {'ça'}
## Question 2
def sans_repetes(l : List[T]) -> List[T]:
    """Return the list of the elements of l without
    their repetition(s)."""
    # Elements already seen
    vus : Set[T] = set()
    # Result list
    lr : List[T] = []
    e : T
    for e in l:
        if e not in vus:
            lr.append(e)
            vus.add(e)
    return lr
# Jeu de tests
assert sans_repetes([1, 2, 23, 9, 2, 23, 6, 2, 9]) == [1, 2, 23, 9, 6]
assert sans_repetes([1, 2, 3, 4]) == [1, 2, 3, 4]
assert sans_repetes([2, 1, 2, 1, 2, 1, 2]) == [2, 1]
assert sans_repetes(['bonjour', 'ça', 'ça', 'va' , '?']) == ['bonjour', 'ça', 'va', '?']
## Question 3
def uniques(l : List[T]) -> Set[T]:
    """Return the set of elements appearing
    exactly once in l."""
    # Elements seen at least once
    unefois : Set[T] = set()
    # Elements seen more than once
    trop : Set[T] = set()
    e : T
    for e in l:
        if e in unefois:
            # seen at least twice
            trop.add(e)
        else:
            # seen for the first time
            unefois.add(e)
    return unefois - trop
# Test suite
assert uniques([1, 2, 23, 9, 2, 23, 6, 2, 1]) == {6, 9}
assert uniques([1, 2, 1, 1]) == {2}
assert uniques([1, 2, 1, 2, 1]) == set()
|
(CNN) After nearly four hours of deliberation, a jury ruled in favor of pop star Taylor Swift in her countersuit against former radio host David Mueller for alleged assault and battery.
Swift accused Mueller of groping her at a meet-and-greet event in June 2013. He will be required to pay $1 in damages to Swift.
The jury, composed of six women and two men, also found the singer's mother, Andrea Swift, not liable for tortious interference.
After the reading of the verdict, Swift embraced her mother.
In a statement, Swift thanked the judge and her legal team for "fighting for me and anyone who feels silenced by a sexual assault."
"I acknowledge the privilege that I benefit from in life, in society and in my ability to shoulder the enormous cost of defending myself in a trial like this," Swift added in the statement, obtained by CNN. "My hope is to help those whose voices should also be heard. Therefore, I will be making donations in the near future to multiple organizations that help sexual assault victims defend themselves."
Mueller had sued Swift, the singer's mom Andrea Swift, and radio promotions director Frank Bell in 2015, accusing them of interfering with his $150,000/year contract as a local morning radio DJ in Denver by pressuring his employer, KYGO radio, to fire him.
Bell was also found not liable for tortious interference.
Swift was dismissed as a defendant in Mueller's suit on Friday after a judge ruled that there was insufficient evidence to show that Swift had acted improperly.
Swift's lawsuit against Mueller argued that the trial would "serve as an example to other women who may resist publicly reliving similar outrageous and humiliating acts."
After the verdict, Swift's attorney, Doug Baldridge, said the ruling was "not just a win" but "something that can make a difference."
"It takes people like Taylor, wonderful people like Taylor, who we all know, to stand up and draw these lines," he told the press.
He added: "As I said in the closing [argument], that dollar, that single dollar, is of immeasurable value in this ever going fight to figure out where the lines are, what's right and what's wrong."
CNN's Scott McLean spoke with Mueller on the phone Monday night.
"My heart is still set on proving my innocence," Mueller said. "I'm resolute."
The trial
The civil trial began August 8 and included four days of testimony.
A courtroom sketch of Taylor Swift on the stand on Thursday, August 10.
Mueller testified on Tuesday, claiming he had not inappropriately touched Swift during the 2013 photo-op. Instead, he said that he had simply touched her arm and ribs while "jostling" for the photo.
Swift testified on Thursday, delivering a confident and assertive account of the incident that led to the legal proceedings and rejecting Mueller's characterization of the event.
"This was not jostling," Swift said. "He did not touch my rib. He did not touch my arm. ... He grabbed my bare ass."
Swift's mother testified that her daughter was visibly upset after the incident.
"Mom, a guy just grabbed my (rear end) in the meet and greet," Andrea Swift said her famous daughter told her.
"I wanted to vomit and cry at the same time," Andrea Swift said.
The photo
A photo of Swift, Mueller, and Mueller's then-girlfriend, Shannon Melcher, at the 2013 event was referenced several times during the trial. A key point of disagreement between the two sides was the positioning of Mueller's hand, which was hidden from view in the photo and positioned below Swift's lower back.
In this courtroom sketch, pop singer Taylor Swift, front left, confers with her attorney as David Mueller, back left, and the judge look on during the civil trial in federal court Tuesday, Aug. 8, 2017, in Denver. (Jeff Kandyba via AP)
Mueller's attorney, Gabriel McFarland, claimed the photo did not show any inappropriate touching, and asked Swift while she was on the stand why her skirt did not appear ruffled in the front.
"Because my ass is located in the back of my body," Swift said.
Swift's bodyguard Greg Dent and photographer Stephanie Simbeck, who took the photo of Mueller and Swift at the meet-and-greet, both claimed they'd witnessed the groping.
Melcher and Mueller's former radio co-host, Ryan Kliesch, also testified during the trial, saying they had never seen Mueller act disrespectfully toward or inappropriately touch women.
At the end of the meet-and-greet, Swift reported the incident to her mother and her management team.
Swift's radio promotions director, Bell, told Mueller's bosses at KYGO, and Mueller was fired two days later after the station conducted its own investigation. (KYGO is a CNN affiliate.)
Mueller testified that he did not grope Swift and said his career in radio had been ruined by what he said was a false accusation.
Mueller was seeking more than $257,500 in damages, according to his attorney's closing arguments.
CNN's Scott McLean and Sara Weisfeldt reported from Denver and Sandra Gonzalez wrote from Los Angeles. CNN's Topher Gauk-Roger, Eric Levenson, Steve Almasy, Kristen Holmes and Chuck Johnston contributed to this report. |
//** FreeRtChangeList - Frees a route-change notification list.
//
// Called to clean up a list of route-change notifications in the failure path
// of 'RTWalk' and 'IPRouteTimeout'.
//
// Entry: RtChangeList - The list to be freed.
//
// Returns: Nothing.
//
void
FreeRtChangeList(RtChangeList* CurrentRtChangeList)
{
RtChangeList *TmpRtChangeList;
while (CurrentRtChangeList) {
TmpRtChangeList = CurrentRtChangeList->rt_next;
CTEFreeMem(CurrentRtChangeList);
CurrentRtChangeList = TmpRtChangeList;
}
} |
/// Parse a Graphic Control extension sub-block
fn parse_sub_block(&mut self, bytes: &[u8]) -> Result<()> {
    if bytes.len() == 4 {
        // Byte 0: packed flags (disposal method, user input, transparency)
        self.set_flags(bytes[0]);
        // Bytes 1-2: delay time in centiseconds, stored little-endian
        let delay = u16::from(bytes[2]) << 8 | u16::from(bytes[1]);
        self.set_delay_time_cs(delay);
        // Byte 3: transparent color index
        self.set_transparent_color_idx(bytes[3]);
        Ok(())
    } else {
        Err(Error::MalformedGraphicControlExtension)
    }
}
Exogenous cannabinoids are structurally and pharmacologically diverse compounds that are widely used. The purpose of this systematic review is to summarize the data characterizing the potential for these compounds to act as substrates, inhibitors, or inducers of human drug metabolizing enzymes, with the aim of clarifying the significance of these properties in clinical care and drug interactions. In vitro data were identified that characterize cytochrome P-450 (CYP-450) enzymes as potential significant contributors to the primary metabolism of several exogenous cannabinoids: tetrahydrocannabinol (THC; CYPs 2C9, 3A4); cannabidiol (CBD; CYPs 2C19, 3A4); cannabinol (CBN; CYPs 2C9, 3A4); JWH-018 (CYPs 1A2, 2C9); and AM2201 (CYPs 1A2, 2C9). CYP-450 enzymes may also contribute to the secondary metabolism of THC, and UDP-glucuronosyltransferases have been identified as capable of catalyzing both primary (CBD, CBN) and secondary (THC, JWH-018, JWH-073) cannabinoid metabolism. Clinical pharmacogenetic data further support CYP2C9 as a significant contributor to THC metabolism, and a pharmacokinetic interaction study using ketoconazole with oromucosal cannabis extract further supports CYP3A4 as a significant metabolic pathway for THC and CBD. However, the absence of interaction between CBD from oromucosal cannabis extract with omeprazole suggests a less significant role of CYP2C19 in CBD metabolism. Studies of THC, CBD, and CBN inhibition and induction of major human CYP-450 isoforms generally reflect a low risk of clinically significant drug interactions with most use, but specific human data are lacking. Smoked cannabis herb (marijuana) likely induces CYP1A2 mediated theophylline metabolism, although the role of cannabinoids specifically in eliciting this effect is questionable. |
/**
 * Create an Array with backtrace information for Kernel#caller
 * @param level the number of initial frames to skip
 * @param length the maximum number of frames to return
 * @param stackStream the stream of JVM stack frames to convert
 * @return an Array with the backtrace
 */
public IRubyObject createCallerBacktrace(int level, int length, Stream<StackWalker.StackFrame> stackStream) {
runtime.incrementCallerCount();
RubyStackTraceElement[] fullTrace = getPartialTrace(level, length, stackStream);
int traceLength = safeLength(level, length, fullTrace);
if (traceLength < 0) return runtime.newEmptyArray();
final IRubyObject[] traceArray = new IRubyObject[traceLength];
for (int i = 0; i < traceLength; i++) {
traceArray[i] = RubyStackTraceElement.to_s_mri(this, fullTrace[i + level]);
}
RubyArray backTrace = RubyArray.newArrayMayCopy(runtime, traceArray);
if (RubyInstanceConfig.LOG_CALLERS) TraceType.logCaller(backTrace);
return backTrace;
} |
import type { SetupContext } from '@vue/composition-api';
import type { App } from '~/entities';
export const useAddItemToQueueMenu = (
root: SetupContext['root'],
trackOrEpisode: App.SimpleTrackDetail | App.EpisodeDetail | undefined,
): App.MenuItem<'custom'> => {
  const type = 'custom';
  // Menu label: "Add to play next" (次に再生に追加)
  const name = '次に再生に追加';
if (trackOrEpisode == null) {
return {
type,
name,
handler: () => {},
disabled: true,
};
}
const trackName = trackOrEpisode.name;
return {
type,
name,
handler: () => {
root.$spotify.player.addItemToQueue({
uri: trackOrEpisode.uri,
deviceId: root.$getters()['playback/playbackDeviceId'],
      }).then(() => {
        // Toast: 'Added "<track name>" to the play-next queue.'
        root.$toast.pushPrimary(`"${trackName}" を次に再生に追加しました。`);
      }).catch((err: Error) => {
        console.error({ err });
        // Toast: 'Could not add "<track name>" to the play-next queue.'
        root.$toast.pushError(`"${trackName}" を次に再生に追加できませんでした。`);
      });
},
};
};
|
import { BaseComponentContext } from '@microsoft/sp-component-base';
import { Guid } from '@microsoft/sp-core-library';
import { LambdaParser } from '@pnp/odata/parsers';
import { SharePointQueryableCollection, sp } from '@pnp/sp';
import '@pnp/sp/taxonomy';
import { ITermInfo, ITermSetInfo, ITermStoreInfo } from '@pnp/sp/taxonomy';
export class SPTaxonomyService {
constructor(private context: BaseComponentContext) {
}
public async getTerms(termSetId: Guid, parentTermId?: Guid, skiptoken?: string, hideDeprecatedTerms?: boolean, pageSize: number = 50): Promise<{ value: ITermInfo[], skiptoken: string }> {
try {
      const parser = new LambdaParser(async (r: Response) => {
        const json = await r.json();
        let newSkiptoken = '';
        if (json['@odata.nextLink']) {
          const urlParams = new URLSearchParams(json['@odata.nextLink'].split('?')[1]);
          if (urlParams.has('$skiptoken')) {
            newSkiptoken = urlParams.get('$skiptoken');
          }
        }
        return { value: json.value, skiptoken: newSkiptoken };
      });
let legacyChildrenUrlAndQuery = '';
if (parentTermId && parentTermId !== Guid.empty) {
legacyChildrenUrlAndQuery = sp.termStore.sets.getById(termSetId.toString()).terms.getById(parentTermId.toString()).concat('/getLegacyChildren').toUrl();
}
else {
legacyChildrenUrlAndQuery = sp.termStore.sets.getById(termSetId.toString()).concat('/getLegacyChildren').toUrl();
}
let legacyChildrenQueryable = SharePointQueryableCollection(legacyChildrenUrlAndQuery).top(pageSize).usingParser(parser);
if (hideDeprecatedTerms) {
legacyChildrenQueryable = legacyChildrenQueryable.filter('isDeprecated eq false');
}
if (skiptoken && skiptoken !== '') {
legacyChildrenQueryable.query.set('$skiptoken', skiptoken);
}
const termsResult = await legacyChildrenQueryable() as {value: ITermInfo[], skiptoken: string};
return termsResult;
} catch (error) {
return { value: [], skiptoken: '' };
}
}
public async getTermById(termSetId: Guid, termId: Guid): Promise<ITermInfo> {
if (termId === Guid.empty) {
return undefined;
}
try {
const termInfo = await sp.termStore.sets.getById(termSetId.toString()).terms.getById(termId.toString()).expand("parent")();
return termInfo;
} catch (error) {
return undefined;
}
}
public async searchTerm(termSetId: Guid, label: string, languageTag: string, parentTermId?: Guid, stringMatchId: string = '0', pageSize: number = 50): Promise<ITermInfo[]> {
try {
const searchTermUrl = sp.termStore.concat(`/searchTerm(label='${label}',setId='${termSetId}',languageTag='${languageTag}',stringMatchId='${stringMatchId}'${parentTermId && parentTermId !== Guid.empty ? `,parentTermId='${parentTermId}'` : ''})`).toUrl();
const searchTermQuery = SharePointQueryableCollection(searchTermUrl).top(pageSize);
const filteredTerms = await searchTermQuery();
return filteredTerms;
} catch (error) {
return [];
}
}
public async getTermSetInfo(termSetId: Guid): Promise<ITermSetInfo | undefined> {
const tsInfo = await sp.termStore.sets.getById(termSetId.toString()).get();
return tsInfo;
}
public async getTermStoreInfo(): Promise<ITermStoreInfo | undefined> {
const termStoreInfo = await sp.termStore();
return termStoreInfo;
}
}
|
Adolescent Adjustment During COVID‐19: The Role of Close Relationships and COVID‐19‐related Stress
During the COVID‐19 pandemic, adolescents' typical social support systems have been disrupted. The present study examined adolescent adjustment during the pandemic (summer, 2020) while controlling for pre‐pandemic adjustment (2017–2018) in 170 youth (ages 12–20) from Missouri and Florida. We also examined whether positive and negative relationship qualities with four close others (i.e., mothers, fathers, siblings, and best friends) interacted with COVID‐related stress to impact adolescent adjustment. In general, we found that close relationships impacted adolescent adjustment in expected directions (i.e., positive relationships better for adjustment, negative relationships more detrimental), but while mothers and fathers impacted adolescent adjustment in largely similar ways to pre‐pandemic studies, influences of relationships with best friends and siblings were more impacted by COVID‐related stress.
Even in the most typical and positive of contexts, adolescence can be a particularly stressful developmental period given the large number of physical, social, and cognitive changes taking place simultaneously. Research also indicates that adolescents are more likely to be highly reactive to stress due to hormonal and brain development changes (Romeo, 2013). It should be no surprise, then, that research on the effects of previous pandemics, such as H1N1, reveals that adolescents are further negatively impacted by stressors and changes resulting from the pandemic (Murray, 2009). These negative outcomes include increased anxiety, depression, mood swings, anger, and involvement in risky and maladaptive coping behaviors (Murray, 2009). This is particularly concerning given that many behavioral and mood disorders also first emerge during adolescence (Kessler et al., 2001; Merikangas et al., 2010). Therefore, the combination of typical stressors with the additional stressors of COVID-19 may interact with pre-existing vulnerabilities to produce even higher levels of stress and reduced mental health for adolescents (Alloy & Abramson, 2007). Importantly, theorists identify social support as an important aspect of the stress process (Pearlin, Menaghan, Lieberman, & Mullan, 1981). Social support serves as a key buffer against stress and the subsequent development of emotional and behavioral problems during adolescence (Cheng et al., 2014; Possel et al., 2018). It does so both directly, as well as through promoting more adaptive coping responses (Calvete & Connor-Smith, 2006; Holahan, Valentiner, & Moos, 1995). During the current COVID-19 pandemic, however, adolescents' schools and activities have been closed or significantly reduced, thus limiting adolescents' social connections with peers, while family members became primary companions.
Therefore, the present study examined whether relationship qualities with close others buffered or exacerbated behavioral and adjustment problems during the pandemic (while controlling for pre-pandemic adjustment), and whether COVID-19-related stress moderated this association.
Importance of Close Relationships During Adolescence
Social relational theoretical models (Collins & Laursen, 1992; Hartup & Laursen, 1993) typically note that while there is a fair degree of developmental continuity across close relationships, the functions and processes of these relationships typically change over the course of adolescence. While parents are considered primary socialization sources earlier in childhood (Maccoby, 1994), and siblings are typically youth's primary out-of-school companions during childhood and often up through early adolescence (McHale & Crouter, 1996), adolescents begin to strive for greater independence from family members in an effort to assert their developing autonomy (Steinberg, 1990). Such autonomy assertion typically reveals itself in the form of increased conflict with parents (Laursen, Coy, & Collins, 1998) and siblings (Kim, McHale, Osgood, & Crouter, 2006). Additionally, adolescents start to seek out peers as their primary sources of social support; first, with close friends, and later, with romantic partners (Furman & Buhrmester, 1992). Across adolescence, however, mothers and best friends are similarly and consistently rated by adolescents as being the most frequent providers of social support, while mothers and siblings are engaged in the greatest amount of conflict (Furman & Buhrmester, 1992).
Despite these differences and transitions taking place throughout adolescence, all of these close, important relationships have been previously shown (1) to engage in both positive (e.g., affection, intimacy, support) and negative (e.g., conflict, criticism, antagonism) processes or qualities, and (2) to show unique associations with adolescent adjustment and well-being. Adams and Laursen (2007) found that negative relationship qualities with mothers and fathers were associated over time with greater anxiety, depression, and delinquency, while positive relationship qualities with mothers and fathers were associated over time with less anxiety, depression, and delinquency. Conflict and negativity with siblings have been found to negatively impact youth adjustment and psychopathology over and above similar relationships with parents (Dirks, Persram, Recchia, & Howe, 2015), while close and positive sibling relationships have been found across several studies to be protective against internalizing and externalizing symptoms (Buist, Dekovic, & Prinzie, 2013). Similarly, recent meta-analytic findings indicate that positive relationship quality with friends is related to lower symptoms of loneliness and depression, whereas negative features of friendship, such as conflict, are related to greater loneliness and depressive symptoms (Schwartz-Mette, Shankman, Dueweke, Borowski, & Rose, 2020). While the research on the direction of effects between positive and negative relationship qualities with these important close relationships is usually clear under typical contexts, what is less understood is how these relationships have functioned during the pandemic. Theorists have long noted the important role that social support plays in the stress process (Pearlin et al., 1981), but risk and resilience theorists also note that close relationships, such as those with family members, can be both protective and risky, depending on the context (Masten, 2018).
During the COVID-19 pandemic, and particularly early on during stay-at-home orders, adolescents were typically less able to utilize their preferred sources of in-person social support, namely peers. With this change, for many adolescents, family members became more central companions and sources of support. Parents certainly play a crucial role in providing support during times of stress, and the way they respond to adolescent distress has been shown to affect adolescent adjustment following acute stressful situations like 9/11 (Gil-Rivas, Silver, Holman, McIntosh, & Poulin, 2007) and more generally (Eisenberg et al., 1999). The role of siblings, though, should not be overlooked. Interactions with siblings may be particularly salient during the COVID-19 pandemic. For many adolescents, siblings served as the only "peers" consistently present in-person in their daily lives, and they may help one another cope with stress from the pandemic. Older siblings have been shown to serve as role models and significant sources of support and advice (Killoren & Roach, 2014; Tucker, Barber, & Eccles, 1997); however, as sibling relationships become more egalitarian and less hierarchical with age during adolescence, younger siblings also may be able to serve in a supportive role. Further, high-quality sibling relationships have been found to be protective against family-wide stressors and during stressful life events (Gass, Jenkins, & Dunn, 2007; Waite, Shanahan, Calkins, Keane, & O'Brien, 2011).
Although adolescents were much less likely to have in-person interactions with friends during the early stages of the COVID-19 pandemic, synchronous and asynchronous communication technologies can be useful in helping youth maintain relationships and provide outlets for disclosure when close relationship partners are not physically present (Lindell, Campione-Barr, & Killoren, 2015). Therefore, friends may have continued to serve as important sources of support even when there was not face-to-face contact. This is important given that youth increasingly turn to friends for support during adolescence (Furman & Rose, 2015). Notably, though, there is likely considerable variation in youths' virtual access to friends.
Pandemic Stress and Adolescent Adjustment
While research on youth emotional and behavioral adjustment during the pandemic is still emerging, recent research suggests that, on average, adjustment problems increased. Comparisons with pre-pandemic rates of youth depression and anxiety in China revealed higher rates of both during the pandemic than expected (Duan et al., 2020; Xie et al., 2020). Longitudinal studies of youth in Australia (Magson et al., 2021), as well as one examining youth from 12 different samples across the United States, Europe, and South America (Berendese et al., under review), have found increases in depression and anxiety in adolescents from pre-pandemic to during the pandemic. Lacking in the literature so far, however, is the examination of factors that may protect against these increases in adjustment problems, as well as those that might exacerbate these difficulties. Positive and negative relationship qualities with close others are likely factors that could contribute.
In considering the impact of the pandemic on adolescent adjustment, taking into account pre-pandemic adjustment is critical. Of course, prior to the COVID-19 pandemic, adolescents ranged widely in terms of their emotional (e.g., depressive and anxiety symptoms) and behavioral (e.g., risky or problematic behaviors) adjustment. Accordingly, statistically controlling for pre-pandemic adjustment was necessary in order to explore relative (residualized) changes in youth adjustment over time (Castro-Schilo & Grimm, 2017). Moreover, prevalence rates of some adjustment problems in adolescents, such as depression, have been increasing over the last couple of decades (Mojtabai, Olfson, & Han, 2016), even prior to the pandemic. Controlling for pre-pandemic adjustment also is important for detecting changes in adjustment that are driven by experiencing the pandemic as opposed to more general cohort trends over time. In order to account for this in the present study, we followed up two samples of adolescents whose emotional and behavioral adjustment we had previously examined in 2017-2018.
The Present Study
Adolescents use their close relationships with mothers, fathers, siblings, and best friends in different ways and for different functions. The COVID-19 pandemic has caused some shifts in the frequency, opportunities, and ways that adolescents engage in positive and negative relationship processes with these close relationship partners. Additionally, the stress of the pandemic (e.g., economic instability, social isolation, fear of illness) appears to negatively impact the emotional and behavioral adjustment of adolescents (Berendese et al., under review; Magson et al., 2021) when compared to pre-pandemic adjustment levels. Therefore, the present study had two primary goals: (1) to examine the associations of adolescents' perceptions of positive and negative relationship qualities across four important close relationship partners with adolescent adjustment during the pandemic (controlling for pre-pandemic adjustment), and (2) to examine the moderating role of COVID-related stress on this association. Generally, we hypothesized that positive relationship qualities would be associated with better (more positive) adolescent adjustment and negative relationship qualities would be associated with worse (more problematic) adolescent adjustment. However, given the salience of friends during adolescence, and the enduring influence of mothers, particularly in that both serve as primary sources of social support (Furman & Buhrmester, 1992), we hypothesized that those relationships would be particularly influential on adolescent adjustment during this time. Additionally, we predicted that COVID-related stress would moderate the association between relationship qualities and emotional and behavioral outcomes, such that there would be a stronger association for negative relationship qualities and poorer outcomes, and a weaker association for positive relationship qualities and better adjustment.
METHOD Participants
Time 1 data (pre-pandemic) were collected during 2017-2018 as part of two separate and larger studies in Central Missouri and Southern Florida, and in June and July of 2020 (Time 2; during pandemic), participants from these studies were contacted again to participate in a study of adolescent coping during the pandemic. The original (Time 1) sample consisted of 244 youth from Missouri (M age = 13.71, SD = 1.66, 48% female, 68% White, 17% African American, 11% Latinx, 4.9% American Indian/Alaska Native, 3.7% Asian, 2.9% Hawaiian/Pacific Islander, and 6.1% another race or multi-racial; median family income range = $70,000-$84,999) and 123 youth from Florida (M age = 13.99, SD = 1.58, 54% female, 53% White, 29% African American, 12% Latinx; M family income = $55,000/year). At Time 2, 36% of the original Missouri sample and 42% of the original Florida sample agreed to participate (141 youth); an additional 29 siblings of youth in the original Florida sample also participated at Time 2 in order to make the number of sibling constellations more comparable across samples. Across both samples, non-attritted participants were older and more likely to be White, with more well-educated parents and a higher overall family income. The final sample of adolescents who participated at Time 2 consisted of 170 youth (89 from Missouri; 81 from Florida) who were approximately half female (n = 86) and half male (n = 82); two participants did not identify as female or male. At the time of data collection during the pandemic, participants ranged in age from 12 to 20 years, with a mean age of 16.21 years old (SD = 1.95). The sample was predominately European American (80%), with 14% African American and <5% each Asian American, American Indian/Alaskan Native, and Hawaiian/ Pacific Islander. Approximately 10% of the sample identified as Latinx. 
Mean family income was $70,000-$79,000/year (15% made <$40,000/year; 34% made more than $100,000/year) and 80% of the sample had at least one parent with a four-year college degree or more.
Some of the participants are siblings (67 pairs) because the original Missouri study involved families with multiple adolescents, and the Florida study was expanded to include siblings during COVID-19 data collection. Another 36 participants did not have siblings in the sample. Siblings who participated were required to include the first-born and a second-born no more than 5 years younger. If more than two children were in the household (<25% of the original Missouri sample), only the oldest two siblings were included in the study and youth were only asked to report on the sibling who participated (or first-born or second-born in their family if their sibling did not participate). We controlled for non-independence of data by clustering participants within families using the "Type = Complex" specification in Mplus, which computes standard errors and chi-square tests of model fit taking into account non-independence of observations due to cluster sampling (Muth en & Muth en, 2017). We did not specify separate models at the within-versus between-family level, however, as adolescents in the sample were answering questions about a wide range of relationships both inside and outside of the family and predicting family-level differences was not the study focus.
Procedures
Participants were originally part of two larger studies of adolescents (Campione-Barr et al., 2019; Rote et al., 2021). Recruitment for the larger studies involved flyers, mailings, and school contacts. Youth adjustment measures were assessed as part of the larger studies (Time 1; collected between June 2017 and December 2018) as well as during the pandemic (Time 2; June/July 2020), while relationship measures and COVID-19-related stress measures were assessed only at Time 2. Original participants from the larger studies were invited to participate in the current research. Adolescents who agreed to participate, and whose parents provided electronic consent (if under age 18), responded to surveys online via the Qualtrics platform.
Measures
Relationship quality. At Time 2 (during pandemic), all youth completed the Network of Relationships Inventory (NRI; Furman & Buhrmester, 1985) to assess relationship quality with mothers, fathers, siblings, and best friends. The entire measure consists of 39 items across 13 sub-scales, although previous research has found that these sub-scales best combine into three categories: positivity/support, negativity, and relative power (Adams & Laursen, 2007). For the present study, relationship positivity (18 items regarding affection, companionship, instrumental aid, intimacy, nurturance, reliable alliance, support, and admiration), and relationship negativity (nine items regarding conflict, criticism, and antagonism) were utilized for each of the close relationships assessed. All items were scored on a five-point Likert scale from 1 (little or none) to 5 (the most). Mean scores for positivity and negativity with each relationship partner were utilized in analyses, and internal consistency was high among all relationships (a = .91-.96).
Anxiety symptoms. Before the pandemic (Time 1; 2017/2018), participants reported on their anxiety symptoms. However, different measures were used at the two different sites. Participants in the Missouri sample responded to the 28-item Revised Children's Manifest Anxiety Scale (RCMAS; Reynolds & Richmond, 1978), which is the same measure used for the combined sample during the pandemic (Time 2; June/July 2020). Items were rated on a Likert scale from 1 (not at all true of yourself) to 5 (really true of yourself). Mean scores at each time point were used, with higher scores indicating greater anxiety. Participants in the Florida sample responded to the Anxiety subscale of the Depression Anxiety Stress Scale (DASS; Lovibond & Lovibond, 1995) at Time 1. The seven items of the subscale were rated on a 0 (Did not apply to me at all) to 3 (Applied to me very much, or most of the time) scale. Scores were the average of these items (Time 1 a = .78). Of the full sample, 136 adolescents (80% of the sample) had pre-pandemic anxiety data.
To include pre-pandemic anxiety data from both samples in the same analyses, the proportion of maximum scaling (POMS) method was applied (Little, 2013). This scaling approach produces proportion scores for each participant ranging from 0 to 1 using the following formula: (participant's score - scale minimum) / (scale maximum - scale minimum). The POMS method is preferable to z-scoring when combining data across different scales in longitudinal research because it does not obscure mean-level differences across time points, allowing full examination of change over time as well as rank-order score differences (Moeller, 2015). As such, the use of POMS scoring for Time 1 adjustment measures should not be a limitation of the study.
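The POMS computation described above is simple enough to sketch directly (a minimal illustration; the two scale ranges are those of the RCMAS and DASS anxiety measures as described above, the example scores are invented):

```python
def poms(score, scale_min, scale_max):
    """Proportion of maximum scaling: map a raw score onto [0, 1]."""
    return (score - scale_min) / (scale_max - scale_min)

# An RCMAS mean of 3.0 on the 1-5 scale and a DASS mean of 1.5 on the
# 0-3 scale both land at the midpoint, 0.5, after rescaling -- which is
# what lets the two samples be pooled in one analysis.
rcmas_rescaled = poms(3.0, 1, 5)
dass_rescaled = poms(1.5, 0, 3)
```

Unlike z-scoring, this transformation is anchored to the scale endpoints rather than the sample distribution, so mean-level change between time points is preserved.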
Depressive symptoms. Adolescents in the Missouri sample at Time 1, and in both samples at Time 2, completed the 20-item Center for Epidemiological Studies Depression Scale (CES-D; Radloff, 1977). Each item represented a depressive symptom and was rated on a 0-3 scale of symptom frequency. Scores were the sum of the ratings across items, with higher scores representing greater depressive symptoms. This scale was internally reliable (Time 1 α = .89; Time 2 α = .91). Participants in the Florida sample responded to the Depression subscale of the Depression Anxiety Stress Scale (DASS; Lovibond & Lovibond, 1995) at Time 1. The seven items of the subscale were also rated on a 0-3 scale of symptom frequency, but scale scores were the average of these items (Time 1 α = .88). The POMS method was used to combine pre-pandemic depression scores across samples. Of the full sample, 136 adolescents (80% of the sample) had pre-pandemic depression data.
Problem behavior. Adolescent involvement in risky or problematic behavior was assessed using a 19-item scale by Eccles and Barber (1990) in the Missouri sample at Time 1 and in both samples at Time 2. Items relating to drug use, cheating at school, theft, etc. were rated on a five-point scale from 1 (never happens) to 5 (happens very often). Cronbach's alphas were .91 at Time 1 and .88 at Time 2. Participants in the Florida sample responded to the 10-item Problem Behavior Scale (PBS; Mason, Cauce, Gonzales, & Hiraga, 1996) at Time 1. These items assessed behaviors similar to those of the Eccles and Barber (1990) scale and were also rated on a five-point "never" to "very often" scale. Scores were the average of these items (Time 1 α = .80). The POMS method was used to combine pre-pandemic problem behavior scores across samples. Of the full sample, 136 adolescents (80% of the sample) had pre-pandemic problem behavior data.
COVID-19-Related Stress (CASPE Questionnaire). Participants responded to four items from the COVID-19 Adolescent Symptom and Psychological Experience Questionnaire (CASPE; Ladouceur, 2020) to assess COVID-19-related stress. All items were rated on a five-point Likert scale from 1 (not at all/very slightly) to 5 (extremely). A sum score was created such that higher scores indicated higher levels of experienced stress due to the pandemic. The items were: "Overall, how much has the COVID-19 outbreak, and the resulting changes to daily life, affected your life in a negative way?", "COVID-19 presents a lot of uncertainty about the future. In the past 7 days, including today, how stressful have you found this uncertainty to be?", "The COVID-19 outbreak has changed and disrupted many existing plans. In the past 7 days, including today, how stressful do you find these disruptions to be?", and "How stressful have the restrictions on leaving home been for you?" The combined items displayed acceptable internal consistency (α = .77).
Analytic plan
Structural equation path analyses predicting anxiety symptoms, depressive symptoms, and problem behavior during COVID-19, controlling for prior levels of each outcome, were examined in Mplus 8.4 (Muthén & Muthén, 2017). In each model, adjustment outcomes were regressed upon positive or negative relationship quality with relationship partners (mothers, fathers, siblings, and best friends), COVID-19-related stress, and interactions between COVID-19-related stress and relationship quality. Relationship partners were modeled simultaneously to better assess each relationship's unique contribution to adolescent adjustment during COVID-19. Positive and negative relationship quality were assessed separately, however, as our sample size precluded examination of all relationships and types of relationship quality within a single model. Based on bivariate correlations between demographic and outcome variables (see Table 1), gender was controlled for in all models, but adolescent age, SES (family income, parent highest education), and race were not included (see Figure 1 for the general moderation model). Significant and marginal interaction terms were explored using Johnson-Neyman regions-of-significance plots, which depict the association between the predictor and outcome variable, surrounded by its 95% confidence interval (CI), across all levels of the moderator. The association between the predictor and outcome is significant across the range of moderator values at which the 95% CI does not include zero (Bauer & Curran, 2005). Only interactions producing regions of significance are discussed in the results.
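The Johnson-Neyman logic can be sketched in a few lines: the simple slope of the predictor at moderator value z is b1 + b3·z, its standard error follows from the coefficient covariance matrix, and the region of significance is the set of z values where the slope's 95% CI excludes zero. All coefficients and variances below are invented for illustration and are not taken from the models reported here:

```python
import math

def jn_significant(z, b1, b3, var_b1, var_b3, cov_b1b3, crit=1.96):
    """True if the simple slope (b1 + b3*z) at moderator value z is significant."""
    slope = b1 + b3 * z
    se = math.sqrt(var_b1 + 2 * z * cov_b1b3 + z ** 2 * var_b3)
    return abs(slope / se) > crit

# Hypothetical coefficients: the predictor-outcome slope only becomes
# significant at relatively high moderator (stress) values.
b1, b3 = 0.05, 0.20
var_b1, var_b3, cov = 0.01, 0.01, 0.001
region = [z / 10 for z in range(-30, 31)
          if jn_significant(z / 10, b1, b3, var_b1, var_b3, cov)]
```

Plotting the slope and its CI band against z, and shading where the band excludes zero, reproduces the kind of figure described in the Results.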
Model fit was evaluated using traditional fit indices (non-significant χ² values, RMSEA < .05 to .08, CFI > .90 to .95, SRMR < .05 to .08; Kline, 2005). Across variables in all models, approximately 7.7% of data were missing completely at random, Little's MCAR test χ²(121) = 127.56, p = .324. Missing data were multiply imputed using 10 datasets, with product terms based on centered variables created prior to imputation (von Hippel, 2009); models were then analyzed using a maximum likelihood estimator. Participants were clustered within families to control for non-independence during both multiple imputation and model estimation (Muthén & Muthén, 2017). For both positive and negative relationship quality models, a main effects model (with path coefficients from the interaction terms to the outcomes constrained to 0) was examined prior to a moderation model (in which these interaction paths were freely estimated). Model fit comparisons were conducted using χ² difference tests.
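For even degrees of freedom, the chi-square survival function has a closed form, so the p value of a χ² difference test can be checked by hand. A sketch, using the Δχ²(12) = 26.93 for the positive relationships model reported in the Results:

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function P(X > x) for even df, via the
    closed form exp(-x/2) * sum_{k < df/2} (x/2)^k / k!."""
    assert df % 2 == 0 and df > 0
    half = x / 2.0
    return math.exp(-half) * sum(half ** k / math.factorial(k)
                                 for k in range(df // 2))

# Delta-chi-square of 26.93 on 12 df (the positive relationships model
# comparison) gives p of roughly .008, matching the reported value.
p = chi2_sf_even_df(26.93, 12)
```

In practice this comparison is done inside the SEM software, but the closed form makes it easy to sanity-check reported p values.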
Descriptive Information
Correlations and descriptive statistics of all study variables can be found in Table 1. Adolescents reported moderate-to-high relationship positivity and moderate-to-low relationship negativity across relationship types. Relationships were generally most positive and least negative with best friends. Relationship quality was moderately correlated among relationship types, with the strongest associations appearing for relationship positivity among family members.
Adolescents reported low levels of problem behavior during COVID-19, but quite high levels of depression and anxiety for a community sample, with 38% of participants reporting depression levels above the clinical cut-off (16.0; Radloff, 1977). Average levels of COVID-19-related stress were at the midpoint of the scale. All adjustment measures were significantly correlated with one another within time points and with themselves over time; anxiety and depressive symptoms had particularly high associations. COVID-19-related stress was significantly associated with concurrent, but not prior, adjustment measures.
Compared to males, female adolescents reported more negative relationship quality with mothers and more positive relationship quality with best friends; they also reported greater COVID-19-related stress and all forms of adjustment problems during COVID-19. Older adolescents reported less negative relationship quality with siblings and higher levels of pre-pandemic problem behavior. White, non-Hispanic adolescents reported higher levels of pre-pandemic depressive symptoms and anxiety. No other significant correlations emerged with demographic variables.
Path Analyses
Fit indices and path coefficients are presented in Table 2 for main effect models and in Table 3 for models including interaction terms. The data fit the models well for positive and negative relationship quality. Because all predictor variables were centered prior to computing interaction terms, no meaningful differences emerged between the main effects observed in the models with and without interactions included. Findings will therefore be discussed based on the interaction models. Notably, the addition of interaction terms significantly improved model fit for the positive relationships model (Δχ²(12) = 26.93, p = .008), but not for the negative relationships model (Δχ²(12) = 15.48, p = .22). Significant interactions in the negative relationships model should therefore be interpreted with caution, given the possibility of an inflated family-wise error rate. Across all models, higher pre-pandemic anxiety, depressive symptoms, and problem behavior were associated with higher levels of the same adjustment problems during COVID-19. Controlling for prior adjustment, greater reported COVID-19-related stress was also associated with poorer adjustment on all outcomes, although this effect was smaller for problem behavior. Likewise, controlling for prior adjustment, female adolescents reported higher levels of anxiety and depressive symptoms during COVID-19, but not significantly more problem behavior.
Relationship Positivity
Controlling for prior adjustment, gender, and COVID-19-related stress, more positivity in relationships with mothers was associated with lower depressive symptoms during COVID-19, and more positivity in relationships with fathers was associated with less problem behavior during COVID-19. Relationship positivity with best friends, and marginally (p = .07) with siblings, interacted with COVID-related stress in predicting problem behavior. More positive relationships with best friends (Figure 2) and siblings (Figure 3) were significantly associated with greater problem behavior when COVID-19-related stress was relatively high (>0.2 SD for best friends, >1.3 SD for siblings), but not when it was average or low.
Relationship Negativity
Controlling for prior adjustment, gender, and COVID-19-related stress, more negativity in relationships with mothers was associated with more depressive symptoms and problem behavior during COVID-19, and more negativity in relationships with fathers was associated with more problem behavior during COVID-19. Negativity in sibling relationships marginally (p = .07) interacted with COVID-related stress in predicting anxiety; more relationship negativity with siblings was significantly associated with more anxiety only when COVID-related stress was low (< -0.67 SD; see Figure 4).
DISCUSSION
The COVID-19 pandemic has impacted the ways that youth interact with close others. While adolescents often seek social support from peers in typical contexts (Furman & Rose, 2015), stay-at-home orders and safety regulations have made face-to-face interaction with peers less accessible while forcing greater time spent with family members. The present study examined the unique role of positive and negative relationship qualities in four important close relationships in adolescents' lives (i.e., with mothers, fathers, siblings, and best friends) in adolescents' emotional and behavioral adjustment during the pandemic. Additionally, we examined whether the effects of these relationship qualities were moderated by COVID-19-related stress levels. In general, we found that adolescents' close relationships with parents predicted their adjustment in expected directions (i.e., positive relationships were better for adjustment, negative relationships were detrimental) regardless of COVID-19-related stress, while the effects on adjustment of positive relationships with the more egalitarian siblings and best friends, and of negative relationships with siblings, depended more on COVID-19-related stress.
Pre-pandemic adjustment levels, as well as COVID-19-related stress, were consistently associated with higher levels of depression, anxiety, and problem behavior during the pandemic. Over and above these findings, unique associations with the four different close relationships were revealed. Positive relationship qualities with both mothers and fathers were uniquely predictive of better youth adjustment, but while positive relationships with mothers were more protective against depressive symptoms, more positive relationships with fathers were protective against problem behavior.
Conversely, more negative relationships with mothers were associated with greater depressive symptoms and problem behavior, whereas more negative relationships with fathers were only associated with greater problem behavior. These findings appear consistent with pre-pandemic research finding that conflict with fathers is associated with higher levels of risky behavior rather than with emotional adjustment problems, which was more the case for conflict with mothers (Adams & Laursen, 2007). Mothers are more likely than fathers to be turned to as significant sources of social support throughout childhood and adolescence (Furman & Buhrmester, 1992), and are often thought of societally as more attuned to, and responsible for, their children's emotional well-being than are fathers. Interestingly, while negative relationships with either parent were associated with greater problem behavior, only positive relationships with fathers were uniquely protective against problem behavior for adolescents in this study. Previous research on father involvement has found that it is particularly protective against adolescent delinquency, and that greater involvement in risky and delinquent behaviors actually increases fathers' involvement (Coley & Medeiros, 2007).

FIGURE 2 Johnson-Neyman plot of the interaction between best friend relationship positivity × COVID stress on problem behavior. Note. All variables standardized. Thick black line represents the association between best friend relationship positivity and problem behavior; light black lines represent the upper and lower bounds of a 95% CI around this association. Region within the gray rectangle depicts values of COVID stress at which the association between best friend relationship positivity and problem behavior is significantly positive. Rel Pos, Relationship Positivity.
In the context of the pandemic, with the added health risks of adolescents engaging in problematic behaviors outside the home, and in combination with the fact that fathers may have been more involved and engaged with their adolescents than is typical due to stay-at-home orders, positive relationship qualities with fathers may have been particularly beneficial to teens.
While the more hierarchical nature of the parent-child relationship lends itself to relationships with mothers and fathers being similar, the more egalitarian nature of relationships with both siblings and best friends also revealed some similarities in the ways those relationships impacted adolescent adjustment. Interestingly, for both siblings and best friends, higher relationship positivity in the context of high levels of COVID-related stress was associated with greater adolescent problem behavior. Previous research suggests that deviancy training processes, a form of peer socialization in which friends or siblings encourage and exacerbate each other's behavior problems by responding positively to deviant talk, are especially strong when these relationships and interactions are positive (Piehler & Dishion, 2007; Whiteman, Jensen, & McHale, 2017). However, whether deviancy training processes might help to explain the current findings is unclear. On one hand, if friends were not seeing each other in person, the effects of deviancy training processes (e.g., over video chat) may have been weaker, but siblings were seeing each other regularly while "stuck at home" together. Alternatively, if deviancy training processes (virtual or in-person) were related to youth breaking social distancing rules and engaging in problematic behavior together outside the home without their parents' knowledge, then deviancy training may help to explain the current findings.

FIGURE 4 Johnson-Neyman plot of the interaction between sibling relationship negativity × COVID stress on anxiety symptoms. Note. All variables standardized. Thick black line represents the association between sibling relationship negativity and anxiety; light black lines represent the upper and lower bounds of a 95% CI around this association. Region within the gray rectangle depicts values of COVID stress at which the association between sibling relationship negativity and anxiety is significantly positive. Sib Rel Neg, Sibling Relationship Negativity.
In terms of findings unique to siblings versus best friends, we found that the combination of higher negative sibling relationship qualities (but not with best friends) and low levels of COVID-related stress was associated with greater adolescent anxiety. It is not particularly surprising that high levels of conflict and negativity with siblings would be associated with greater anxiety, as this has been found in pre-pandemic studies (e.g., Campione-Barr, Greer, & Kruse, 2013; Dirks et al., 2015). Sibling relationships are known to be the quintessential "love-hate" relationship, with high levels of ambivalence particularly common during adolescence (Buist & Vermande, 2014; Killoren, Rodríguez de Jesús, Updegraff, & Wheeler, 2017). It is interesting, however, that this would only be evident under conditions of low COVID-related stress. It is likely that in households experiencing high levels of COVID-related stress, sibling conflict is relatively low on the hierarchy of concerns. Alternatively, families experiencing low levels of COVID-related stress may have experienced a more business-as-usual style of family interaction, but with the added time together and boredom, sibling conflict likely rose to higher-than-usual levels, with some detrimental effects.
Our study is the first we are aware of to examine the potential protective and exacerbating effects of various relationship qualities on adolescent adjustment during the COVID-19 pandemic. Despite the interesting results, the examination is not without limitations. First, although the study drew adolescents from multiple areas of the United States (Midwest and South), the ethnic/racial and socioeconomic diversity of the larger sample was limited, consisting primarily of White, middle-class families. Given that the COVID-19 pandemic disproportionately impacted families with fewer financial means and families from minoritized groups (Tai, Shah, Doubeni, Sia, & Wieland, 2021), the generalizability of these results is likely limited. It is also important to note that compliance with COVID-19 public health regulations, as well as the viral burden of the pandemic, were not consistent across the United States; thus, our samples from Missouri and Florida may not be representative of samples from the East or West coasts. However, we did not assess beliefs or compliance in our study, only the influence of COVID-19-related stress; therefore, it is difficult to know how much these findings would have varied.
Second, given that we studied adolescents, a potential fifth important close relationship that was missing from this examination was romantic relationships. While we did assess the positivity and negativity of adolescents' romantic relationships, only approximately 25% of our sample reported on such a relationship, which was too much missing data to be useful. While romantic partners are increasingly important to youth over the course of adolescence (Furman & Rose, 2015), it is difficult for us to know how much they were able to stay in contact or see each other in-person during this time and this likely ranged widely across families. Some parents may have allowed romantic partners to essentially be a part of their "family bubble," while others may have had strict rules against seeing romantic partners due to health concerns.
Finally, the sample size for our analyses provides reasonable power (80%) to detect moderately small path coefficients (f² = .05, as calculated in G*Power; Faul, Erdfelder, Lang, & Buchner, 2007). However, the sample is somewhat small for the number of parameters in our models, which may have resulted in less precise results (Kline, 2005). Examination of the same models with each relationship considered independently (thus greatly reducing model parameters) provided generally similar findings, however, bolstering confidence in our results (see Tables S1 and S2).
Despite these limitations, our findings not only suggest the ways in which adolescents were uniquely utilizing mothers, fathers, siblings, and best friends as sources of coping during the pandemic, but also have implications beyond it. It appears that relationship qualities with mothers and fathers impacted adolescent adjustment during the pandemic in much the same way as during pre-pandemic (and likely post-pandemic) times. Alternatively, the associations between adolescents' adjustment and their relationships with siblings and best friends were more impacted by the stress of the COVID-19 pandemic. This is likely because of the amount of time they spent together and the ways in which they interacted. Siblings were likely spending more time together than usual due to outside-of-household contact being limited, and they were likely to serve as a substitute for more preferred peers given the restrictions. Best friends, on the other hand, were more likely relegated to online or virtual communication (e.g., texting, social media, video chat, online gaming). In the aftermath of the pandemic, adolescents will likely return more of their time to socializing with peers and utilizing their social support, as is developmentally appropriate, but it will be interesting for future research to examine whether this experience has improved or changed the ways in which adolescents utilize their siblings as sources of support and companionship even when they are not the only alternative.
Supporting Information
Additional supporting information may be found online in the Supporting Information section at the end of the article.
import React from 'react'
import {Col, Form} from 'react-bootstrap'
interface SpellFilterProps {
onChange: (event: React.ChangeEvent<HTMLInputElement>) => void
}
const SpellFilter: React.FC<SpellFilterProps> = ({onChange}) => {
return (
<Col md={4}>
Filter by name:
<Form.Control type="text" placeholder="Enter spell name here" onChange={onChange} />
</Col>
)
}
export default SpellFilter
import numpy as np
import tensorflow as tf

# Relies on module-level constants defined elsewhere in this project:
# TIMESTEPS, DT, DEVICES, BATCH_SIZE, VERBOSE, COLOUR_DICTIONARY.
def energy_estimation(model,
x_test=None,
spiking_model: bool = True,
device_list: list = None,
n_timesteps: int = TIMESTEPS,
dt: float = DT,
verbose: bool = False):
if device_list is None:
device_list = list(DEVICES.keys())
print('Extracting model info...')
[number_of_connection_per_neuron_list, number_of_neurons_list,
activations_to_track_index_list] = extract_model_info(model)
neuron_energy = np.sum(number_of_neurons_list)
if spiking_model:
print('Found a spiking model. Extracting intermediate activations...')
neuron_energy *= n_timesteps
mean_activations = [tf.reduce_mean(tf.abs(model.layers[n].output)) for n in activations_to_track_index_list]
synop_energy = tf.add_n([number_of_connection_per_neuron_list[n] * number_of_neurons_list[n]
* mean_activations[n] for n in range(len(mean_activations))])
new_model = tf.keras.Model(model.input, synop_energy)
new_model.compile()
synop_energy = new_model.predict(x_test, batch_size=BATCH_SIZE, verbose=VERBOSE)
synop_energy = np.mean(synop_energy)
synop_energy *= n_timesteps * dt
else:
print('Found a non-spiking model.')
        # Non-spiking: one synaptic operation per connection per inference,
        # so pair each layer's per-neuron fan-in with its neuron count.
        synop_energy = np.dot(number_of_connection_per_neuron_list, number_of_neurons_list)
synop_energy_dict = {}
neuron_energy_dict = {}
total_energy_dict = {}
for device in device_list:
energy_dict = DEVICES[device]
if spiking_model and not energy_dict['spiking']:
print(COLOUR_DICTIONARY['red'], 'Error!', COLOUR_DICTIONARY['purple'],
'Impossible to infer spiking models on standard hardware!', COLOUR_DICTIONARY['black'])
break
synop_energy_dict[device] = synop_energy * energy_dict['energy_per_synop']
neuron_energy_dict[device] = neuron_energy * energy_dict['energy_per_neuron']
total_energy_dict[device] = synop_energy_dict[device] + neuron_energy_dict[device]
if verbose:
print('Estimating energy on ', COLOUR_DICTIONARY['red'], device, COLOUR_DICTIONARY['black'])
print(COLOUR_DICTIONARY['red'], 'Global model energy', COLOUR_DICTIONARY['black'])
print(COLOUR_DICTIONARY['orange'], '\t--------- Total energy ---------', COLOUR_DICTIONARY['black'])
print('\tSynop layer energy: ', synop_energy_dict[device], 'J/inference')
print('\tNeuron layer energy: ', neuron_energy_dict[device], 'J/inference')
print('\tTotal layer energy:', COLOUR_DICTIONARY['green'], total_energy_dict[device],
COLOUR_DICTIONARY['black'], 'J/inference\n\n')
    return synop_energy_dict, neuron_energy_dict, total_energy_dict
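The accounting performed by `energy_estimation` boils down to two products: synaptic-operation count times energy per synop, plus neuron-update count times energy per neuron update. A stripped-down numeric sketch (every constant here is invented for illustration; it is not a calibrated device model):

```python
# Hypothetical single-layer spiking network and device figures.
n_neurons = 1000          # total neurons (assumed)
fan_in = 100              # connections per neuron (assumed)
mean_activation = 0.05    # mean spike rate per neuron per timestep (assumed)
n_timesteps, dt = 32, 1.0

energy_per_synop = 1e-11   # J per synaptic operation (hypothetical device)
energy_per_neuron = 1e-10  # J per neuron update (hypothetical device)

# Mirrors the spiking branch above: synops scale with activity and time,
# neuron updates scale with timesteps alone.
synops = n_neurons * fan_in * mean_activation * n_timesteps * dt
neuron_updates = n_neurons * n_timesteps
total_energy = synops * energy_per_synop + neuron_updates * energy_per_neuron
```

The real function obtains `mean_activation` by probing intermediate layer outputs with a throwaway Keras model over `x_test`; the sketch just fixes it as a constant.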
Environmental and Economic Effects of Changing to Shelf-Stable Dairy or Soy Milk for the Breakfast in the Classroom Program
Objectives To estimate economic and environmental effects of reducing milk waste from the US Breakfast in the Classroom (BIC) School Breakfast Program by replacing conventional milk with shelf-stable dairy or soy milk. Methods We estimated net greenhouse gas emissions (GHGE; kg CO2 equivalents) from replacing conventional milk with shelf-stable dairy or soy milk by adapting existing life cycle assessments and US Environmental Protection Agency Waste Reduction Model estimates to BIC parameters. We estimated net cost with school meal purchasing data. Results Replacing conventional dairy milk with shelf-stable dairy or soy milk would reduce milk-associated GHGE by 28.5% (0.133 kg CO2e) or 79.8% (0.372 kg CO2e) per student per meal, respectively. Nationally, this equates to driving 248 million or 693 million fewer miles annually, respectively. This change would increase milk costs by 1.9% ($0.005) or 59.4% ($0.163) per student per meal, respectively. Conclusions Replacing conventional milk with shelf-stable dairy or soy milk could substantially reduce waste and concomitant GHGE in BIC; switching to shelf-stable dairy has low net costs. Pilot tests of these options are warranted to optimize the nutritional value, cost, and sustainability of BIC.
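The miles-driven equivalence in the abstract is a unit conversion. A back-of-the-envelope version: the per-mile emission factor and annual meal count below are assumptions chosen for illustration, not the study's actual inputs:

```python
# Per-meal saving taken from the abstract; the other two numbers are
# assumptions (a typical passenger-vehicle factor and a guessed meal count).
kg_co2e_saved_per_meal = 0.133      # dairy -> shelf-stable dairy
meals_per_year = 700_000_000        # hypothetical national BIC meal count
kg_per_mile = 0.39                  # assumed kg CO2e per vehicle-mile

miles_equivalent = kg_co2e_saved_per_meal * meals_per_year / kg_per_mile
# With these assumptions, roughly 240 million miles -- the same order of
# magnitude as the 248 million miles reported in the abstract.
```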
Achievement Evaluation in Electronic Teaching Material
Electronic teaching material, built on media such as sound, images, three-dimensional graphics, and video, is the carrier and representative form of modern educational technology. It differs distinctly from traditional paper-based teaching material and has become a crucial part of teaching material construction in higher education. Because a reasonable and practical assessment system for electronic teaching material has not been established in most universities in China, this essay presents two assessment systems that reflect the properties of electronic teaching material and are convenient to operate.
def dediac_xmlbw(s):
    # Strip all diacritic marks from `s` using the module-level compiled
    # regex _DIAC_RE_XMLBW (XML-friendly Buckwalter transliteration scheme).
    return _DIAC_RE_XMLBW.sub(u'', s)
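`_DIAC_RE_XMLBW` is a precompiled regex defined elsewhere in the module, matching diacritic characters in the XML-friendly Buckwalter transliteration. A self-contained sketch of the same idea, with an assumed (not authoritative) diacritic set:

```python
import re

# Hypothetical diacritic character class for illustration; the real module
# defines its own class for the XML Buckwalter scheme.
_DIAC_RE_DEMO = re.compile(u'[aiuo~FKN`]')

def dediac_demo(s):
    """Remove short vowels and other diacritic markers, leaving the
    consonantal skeleton of the transliterated word."""
    return _DIAC_RE_DEMO.sub(u'', s)
```

For example, under this demo set `dediac_demo(u'kitaAbN')` drops the short vowels and tanween while keeping the letters.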
/* $Id$ */
/*
* Copyright (C) 2008-2011 Teluu Inc. (http://www.teluu.com)
* Copyright (C) 2003-2008 <NAME> <<EMAIL>>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#ifndef __PJLIB_UTIL_SCANNER_CIS_BIT_H__
#define __PJLIB_UTIL_SCANNER_CIS_BIT_H__
#include <ZadarmaPJSIP/pj/types.h>
PJ_BEGIN_DECL
/**
 * This describes the type of individual character specification in
 * #pj_cis_buf_t. Basically, the number of bits in this type determines
 * how many character specifications can share a single buffer.
 */
#ifndef PJ_CIS_ELEM_TYPE
# define PJ_CIS_ELEM_TYPE pj_uint32_t
#endif
/**
* This describes the type of individual character specification in
* #pj_cis_buf_t.
*/
typedef PJ_CIS_ELEM_TYPE pj_cis_elem_t;
/**
* Maximum number of input specification in a buffer.
* Effectively this means the number of bits in pj_cis_elem_t.
*/
#define PJ_CIS_MAX_INDEX (sizeof(pj_cis_elem_t) << 3)
/**
* The scanner input specification buffer.
*/
typedef struct pj_cis_buf_t
{
pj_cis_elem_t cis_buf[256]; /**< Must be 256 (not 128)! */
pj_cis_elem_t use_mask; /**< To keep used indexes. */
} pj_cis_buf_t;
/**
* Character input specification.
*/
typedef struct pj_cis_t
{
pj_cis_elem_t *cis_buf; /**< Pointer to buffer. */
int cis_id; /**< Id. */
} pj_cis_t;
/**
* Set the membership of the specified character.
* Note that this is a macro, and arguments may be evaluated more than once.
*
* @param cis Pointer to character input specification.
* @param c The character.
*/
#define PJ_CIS_SET(cis,c) ((cis)->cis_buf[(int)(c)] |= (1 << (cis)->cis_id))
/**
* Remove the membership of the specified character.
* Note that this is a macro, and arguments may be evaluated more than once.
*
* @param cis Pointer to character input specification.
* @param c The character to be removed from the membership.
*/
#define PJ_CIS_CLR(cis,c)   ((cis)->cis_buf[(int)(c)] &= ~(1 << (cis)->cis_id))
/**
* Check the membership of the specified character.
* Note that this is a macro, and arguments may be evaluated more than once.
*
* @param cis Pointer to character input specification.
* @param c The character.
*/
#define PJ_CIS_ISSET(cis,c) ((cis)->cis_buf[(int)(c)] & (1 << (cis)->cis_id))
PJ_END_DECL
#endif /* __PJLIB_UTIL_SCANNER_CIS_BIT_H__ */
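The header above packs up to 32 independent character sets into one 256-entry table, one bit per set (`cis_id` selects the bit). A quick sketch of the same technique, written in Python purely for illustration of the bit layout:

```python
# One 256-entry table; bit `cis_id` of entry c records whether character c
# belongs to character set `cis_id` -- the idea behind pj_cis_buf_t/pj_cis_t.
cis_buf = [0] * 256

def cis_set(cis_id, ch):
    cis_buf[ord(ch)] |= (1 << cis_id)

def cis_clr(cis_id, ch):
    cis_buf[ord(ch)] &= ~(1 << cis_id)

def cis_isset(cis_id, ch):
    return bool(cis_buf[ord(ch)] & (1 << cis_id))

# Set 0 holds digits, set 1 holds lowercase hex letters; 32 such sets
# (one per bit of each entry) share the single table.
for c in '0123456789':
    cis_set(0, c)
for c in 'abcdef':
    cis_set(1, c)
```

Membership tests are then a single indexed load plus a bit test, which is why the scanner can afford to consult the table per input character.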
|
def _onTabPopupMenu(self, view, event, plugin):
    # Show the plugin tab's context menu: at the pointer for mouse events,
    # or centered on the tab label for keyboard-initiated popups.
menu = self.Menu(plugin, view.get_toplevel())
if hasattr(event, 'button'):
menu.popup(None, None, None, None, event.button, event.time)
else:
tab = view.get_tab_label(plugin)
x, y, w, h = view.getTabAlloc(tab)
rect = gdk.Rectangle(x, y, w, h)
menu.popup(None, None,
lambda m, r: (r.x+r.width/2, r.y+r.height/2, True),
                   rect, 0, event.time)
package scp_test
import (
"github.com/stretchr/testify/assert"
"testing"
"github.com/viant/toolbox/storage/scp"
"io/ioutil"
"os"
"path"
"strings"
)
func TestService_List(t *testing.T) {
service := scp.NewService(nil)
assert.NotNil(t, service)
dir, home := path.Split(os.Getenv("HOME"))
objects, err := service.List("scp://127.0.0.1/" + dir)
assert.Nil(t, err)
for _, object := range objects {
if strings.HasSuffix(object.URL(), home) {
assert.True(t, object.IsFolder())
}
}
}
func TestService_Delete(t *testing.T) {
service := scp.NewService(nil)
assert.NotNil(t, service)
err := service.Upload("scp://127.0.0.1//tmp/file.txt", strings.NewReader("this is test"))
assert.Nil(t, err)
objects, err := service.List("scp://127.0.0.1/tmp/")
assert.Nil(t, err)
assert.True(t, len(objects) > 1)
object, err := service.StorageObject("scp://127.0.0.1//tmp/file.txt")
assert.Nil(t, err)
reader, err := service.Download(object)
if err == nil {
defer reader.Close()
content, err := ioutil.ReadAll(reader)
assert.Nil(t, err)
assert.Equal(t, "this is test", string(content))
err = service.Delete(object)
		assert.Nil(t, err)
}
}
|
// NewSingleLineParser returns a new SingleLineParser.
func NewSingleLineParser(
outputFn func(*Message),
parser parsers.Parser) *SingleLineParser {
return &SingleLineParser{
outputFn: outputFn,
parser: parser,
}
} |
Metabolic and skeletal complications of HIV infection: the price of success.
Over the past 10 years, in conjunction with the broad availability of potent antiretroviral regimens, the care of human immunodeficiency virus (HIV)-infected patients has shifted from prevention and treatment of opportunistic infections and malignancies to management of the metabolic and related complications associated with HIV infection and its treatment. Metabolic disorders, including lipodystrophy, dyslipidemia, and insulin resistance, occur at a high rate in HIV-infected individuals receiving highly active antiretroviral therapy (HAART). These disorders are associated with increased risk of cardiovascular disease and have become an important cause of morbidity and mortality in HIV-infected patients. Herein, we present the case of a patient with HIV infection who responded well to HAART but developed multiple complications potentially related to this therapy. This article reviews the clinical characteristics of the metabolic and skeletal disturbances observed in HIV infection and discusses strategies for their management. |
// check the longest broadcast time
func checkLongestBroadcastTime(msgs []string, statisticsClient *client.StatisticsClient) {
for _, tx := range msgs {
msgReport, err := statisticsClient.GetReportMessage(tx)
if err != nil {
			fmt.Printf("failed to get tx %s's report info: %v\n", tx, err)
os.Exit(1)
}
var timeStart, timeEnd *time.Time
for _, report := range msgReport {
if timeStart == nil || report.Time.Before(*timeStart) {
timeStart = &report.Time
}
if timeEnd == nil || report.Time.After(*timeEnd) {
timeEnd = &report.Time
}
}
		if timeStart == nil || timeEnd == nil {
			fmt.Printf("Tx %s has no report entries\n", tx)
			continue
		}
fmt.Printf("Tx %s broadcast time is %v\n", tx, timeEnd.Sub(*timeStart))
}
} |
<filename>testing/InputTestUtils.tsx
import React from "react";
import { cleanup, render } from "@testing-library/react";
import "@testing-library/jest-dom/extend-expect";
import { configure, mount, ReactWrapper } from "enzyme";
import Adapter from "enzyme-adapter-react-16";
import { spy, SinonSpy } from "sinon";
import { Form } from "antd";
import { Root, Message } from "protobufjs";
import { default as compiledProtobufBundle } from "../src/proto/bundle.json";
import Input from "../src/input/Input";
export class InputTestClassHelper {
originalError: (message?: any, ...optionalParams: any[]) => void;
originalWarn: (message?: any, ...optionalParams: any[]) => void;
originalLog: (message?: any, ...optionalParams: any[]) => void;
excludedMessages: string[];
constructor() {
this.originalError = console.error;
this.originalWarn = console.warn;
this.originalLog = console.log;
this.excludedMessages = ["inside a test was not wrapped in act"];
}
handleLog(log: (t?: any, ...p: any[]) => void, template: string, ...optionalParams: any[]): void {
if (!this.excludedMessages.some((excludedMessage) => template.includes(excludedMessage))) {
      log(template, ...optionalParams);
}
}
doBeforeAll(): void {
    console.error = (t: string, ...p: any[]) => this.handleLog(this.originalError, t, ...p);
    console.warn = (t: string, ...p: any[]) => this.handleLog(this.originalWarn, t, ...p);
    console.log = (t: string, ...p: any[]) => this.handleLog(this.originalLog, t, ...p);
}
doBeforeEach(): void {
global.matchMedia =
global.matchMedia ||
function () {
return {
matches: false,
onchange: null,
addListener: jest.fn(), // Deprecated
addEventListener: jest.fn(),
removeEventListener: jest.fn(),
dispatchEvent: jest.fn(),
};
};
}
doAfterEach(): void {
cleanup();
}
doAfterAll(): void {
console.error = this.originalError;
console.warn = this.originalWarn;
console.log = this.originalLog;
}
addAllSetupAndTearDowns(): void {
const helper = this;
beforeAll(() => helper.doBeforeAll());
beforeEach(() => helper.doBeforeEach());
afterEach(() => helper.doAfterEach());
afterAll(() => helper.doAfterAll());
}
}
export class InputTestCaseHelper {
mockCallback: SinonSpy<any[], any>;
component: ReactWrapper;
constructor(target: string) {
this.mockCallback = spy();
this.component = mount(<Input target={target} callback={this.mockCallback} />);
}
static _clickableTypes = ["button", "a", "icon"];
click(testId: string) {
const selector = "[data-testid='" + testId + "']";
const filter = (rw: ReactWrapper) => InputTestCaseHelper._clickableTypes.includes(rw.name());
const results = this.component.find(selector).filterWhere(filter);
    if (results.length !== 1) {
throw new Error(`Wanted exactly 1 node, found ${results.length} for selector ${selector}`);
}
results.simulate("click");
}
static _settableTypes = ["input", "textarea"];
setValue(testId: string, value: any) {
const selector = "[data-testid='" + testId + "']";
const filter = (rw: ReactWrapper) => InputTestCaseHelper._settableTypes.includes(rw.name());
const results = this.component.find(selector).filterWhere(filter);
    if (results.length !== 1) {
throw new Error(`Wanted exactly 1 node, found ${results.length} for selector ${selector}`);
}
results.simulate("change", { target: { value: "" + value } });
}
async submit(): Promise<Message> {
const selector = "button[data-testid='protostore-submit']";
this.component.find(selector).simulate("submit");
await new Promise((resolve) => {
setTimeout(resolve, 0);
}); // Forces the action queue to run out.
this.component.update(); // Forces an update to the component.
expect(this.mockCallback.called).toBeTruthy();
const result = this.mockCallback.getCalls()[0].args[0];
return result;
}
}
|
<filename>CNNdroid Source Package/rs/convRolledInF8OutF2.rs
#pragma version(1)
#pragma rs_fp_relaxed
#pragma rs java_package_name(layers)
rs_allocation In_Blob;
rs_allocation Kernel_Blob;
rs_allocation Bias_Blob;
int c_i;
int h_i;
int w_i;
int n_k;
int c_k;
int h_k;
int w_k;
int h_o;
int w_o;
int pad_x;
int pad_y;
int stride_x;
int stride_y;
int group;
float2 __attribute__((kernel)) root(uint32_t x)
{
float2 sum = 0;
sum.x = sum.y = 0;
int kernel_num = x % (n_k / 2);
int h_num = (x * 2) / (w_o * n_k);
int w_num = (x % (w_o * n_k / 2)) / (n_k / 2);
int g = (kernel_num * 2) / (n_k / group);
int channel_offset = g * c_k / 4;
int c_k_new = c_k / 4;
for (int h = 0 ; h < h_k ; h++){
for (int w = 0 ; w < w_k ; w++){
for (int i = 0 ; i < c_k_new / 2 ; i++)
{
				int cur_x = h_num * stride_x + h; // input row, accounting for stride
				int cur_y = w_num * stride_y + w; // input column, accounting for stride
if (cur_x < pad_x || cur_x >= (pad_x + h_i))
continue;
else if (cur_y < pad_y || cur_y >= (pad_y + w_i))
continue;
else
{
int frame_index = (cur_x - pad_x) * w_i * c_i / 4 + (cur_y - pad_y) * c_i / 4 + (2 * i + channel_offset);
float4 frame_value1 = rsGetElementAt_float4(In_Blob,frame_index);
float4 frame_value2 = rsGetElementAt_float4(In_Blob,frame_index + 1);
float4 kernel_value1, kernel_value2;
int kernel_size = h_k * w_k * c_k_new;
int kernel_index = kernel_num * 2 * kernel_size + h * w_k * c_k_new + w * c_k_new + 2 * i;
kernel_value1 = rsGetElementAt_float4(Kernel_Blob,kernel_index);
kernel_value2 = rsGetElementAt_float4(Kernel_Blob,kernel_index + 1);
sum.x += dot(frame_value1 ,kernel_value1) + dot(frame_value2 ,kernel_value2);
kernel_index += kernel_size;
kernel_value1 = rsGetElementAt_float4(Kernel_Blob,kernel_index);
kernel_value2 = rsGetElementAt_float4(Kernel_Blob,kernel_index + 1);
sum.y += dot(frame_value1 ,kernel_value1) + dot(frame_value2 ,kernel_value2);
}
}
}
}
return sum + rsGetElementAt_float2(Bias_Blob,kernel_num);
} |
/**
* @author Giovanni Caire - TILAB
*/
class ConnectionPool {
private HashMap connections = new HashMap();
private TransportProtocol myProtocol;
private ConnectionFactory myFactory;
private int maxSize;
private int size;
private boolean closed = false;
private long hitCnt = 0;
private long missCnt = 0;
ConnectionPool(TransportProtocol p, ConnectionFactory f, int ms) {
myProtocol = p;
myFactory = f;
// Temporary hack for HTTP since HTTP connections cannot be re-used
if (myProtocol instanceof HTTPProtocol) {
maxSize = 0;
}
else {
maxSize = ms;
}
size = 0;
}
	// The actual connection creation operation must NOT be included in the synchronized block. In fact,
	// in certain cases it may take a long time due to TCP timeout expiration.
ConnectionWrapper acquire(TransportAddress ta, boolean requireFreshConnection) throws ICPException {
ConnectionWrapper cw = null;
List l = null;
String url = myProtocol.addrToString(ta);
synchronized (this) {
if (closed) {
throw new ICPException("Pool closed");
}
l = (List) connections.get(url);
if (l == null) {
l = new ArrayList();
connections.put(url, l);
}
if (requireFreshConnection) {
// We are checking a given destination. This means that this destination may be no longer valid
// --> In order to avoid keeping invalid connections that can lead to very long waiting times,
// close all non-used connections towards this destination.
closeConnections(l);
}
else {
Iterator it = l.iterator();
while (it.hasNext()) {
cw = (ConnectionWrapper) it.next();
if (cw.lock()) {
cw.setReused();
hitCnt++;
return cw;
}
}
}
}
// If we get here no connection is available --> create a new one
try {
Connection c = myFactory.createConnection(ta);
synchronized (this) {
cw = new ConnectionWrapper(c, ta);
if (size < maxSize) {
// Reusable connection --> Store it
l.add(cw);
size++;
}
else {
// OneShot connection --> don't even store it.
cw.setOneShot();
}
missCnt++;
return cw;
}
}
catch (IOException ioe) {
throw new ICPException("Error creating connection. ", ioe);
}
finally {
// We may have created a new list of connections that end up to be useless (e.g. because the connection is one-shot,
// or because there was an error creating the connection) --> remove it
synchronized (this) {
if (l.isEmpty()) {
connections.remove(url);
}
}
}
}
private void closeConnections(List l) {
List closedConnections = new ArrayList();
Iterator it = l.iterator();
while (it.hasNext()) {
ConnectionWrapper cw = (ConnectionWrapper) it.next();
if (cw.lock()) {
cw.close();
cw.unlock();
closedConnections.add(cw);
}
}
// Now remove all closed connections
it = closedConnections.iterator();
while (it.hasNext()) {
if (l.remove(it.next())) {
size--;
}
}
}
synchronized void release(ConnectionWrapper cw) {
cw.unlock();
}
synchronized void remove(ConnectionWrapper cw) {
try {
String url = myProtocol.addrToString(cw.getDestAddress());
List l = (List) connections.get(url);
if (l != null) {
if (l.remove(cw)) {
size--;
if (l.isEmpty()) {
connections.remove(url);
}
}
}
cw.getConnection().close();
}
catch (Exception e) {
// Just ignore it
}
}
synchronized void shutdown() {
Iterator it = connections.values().iterator();
while (it.hasNext()) {
List l = (List) it.next();
for (int i = 0; i < l.size(); i++) {
ConnectionWrapper cw = (ConnectionWrapper) l.get(i);
cw.close();
}
l.clear();
}
connections.clear();
closed = true;
}
void clearExpiredConnections(long currentTime) {
Iterator it = getConnectionsList().iterator();
while (it.hasNext()) {
ConnectionWrapper cw = (ConnectionWrapper) it.next();
if (cw.isExpired(currentTime)) {
remove(cw);
cw.unlock();
}
}
}
public String toString() {
return "[Connection-pool: total-hit="+hitCnt+", total-miss="+missCnt+", current-size="+size+" connections="+connections+"]";
}
private synchronized List getConnectionsList() {
List cc = new ArrayList();
Iterator it = connections.values().iterator();
while (it.hasNext()) {
List l = (List) it.next();
Iterator it1 = l.iterator();
while (it1.hasNext()) {
cc.add(it1.next());
}
}
return cc;
}
} |
<filename>cmd/bpxe/cmd/execute.go
// Copyright (c) 2021 Aree Enterprises, Inc. and Contributors
// Use of this software is governed by the Business Source License
// included in the file LICENSE
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/LICENSE-Apache-2.0
package cmd
import (
"context"
"encoding/xml"
"fmt"
"io/ioutil"
"bpxe.org/pkg/bpmn"
"bpxe.org/pkg/flow"
"bpxe.org/pkg/process"
"bpxe.org/pkg/tracing"
"github.com/spf13/cobra"
)
// executeCmd represents the execute command
var executeCmd = &cobra.Command{
Use: "execute [file.bpmn]",
Short: "Execute BPMN model",
Long: `This command will execute processes in a BPMN model.`,
Run: func(cmd *cobra.Command, args []string) {
file := args[0]
var document bpmn.Definitions
var err error
src, err := ioutil.ReadFile(file)
if err != nil {
fmt.Printf("Can't read file: %v\n", err)
return
}
err = xml.Unmarshal(src, &document)
if err != nil {
fmt.Printf("XML unmarshalling error: %v\n", err)
return
}
for i := range *document.Processes() {
processElement := &(*document.Processes())[i]
if id, present := processElement.Id(); present {
fmt.Printf("Loaded process %s\n", *id)
} else {
fmt.Println("Loaded an unnamed process")
}
proc := process.New(processElement, &document)
if instance, err := proc.Instantiate(); err == nil {
traces := instance.Tracer.Subscribe()
err := instance.StartAll(context.Background())
if err != nil {
fmt.Printf("failed to run the instance: %s\n", err)
}
done := make(chan bool)
go func() {
for {
trace := tracing.Unwrap(<-traces)
switch trace := trace.(type) {
case flow.NewFlowTrace:
fmt.Printf("New flow %s\n", trace.FlowId.String())
case flow.FlowTrace:
sourceId, present := trace.Source.Id()
if !present {
sourceId = new(string)
*sourceId = "unnamed"
}
for _, flow := range trace.Flows {
					target, err := flow.SequenceFlow().Target()
					if err != nil {
						fmt.Printf("Can't find target in a flow\n")
						continue
					}
targetId, present := target.Id()
if !present {
targetId = new(string)
*targetId = "unnamed"
}
fmt.Printf("Flow(%s) %s -> %s\n", flow.Id().String(), *sourceId, *targetId)
}
case flow.CeaseFlowTrace:
fmt.Printf("No flows left\n")
done <- true
return
case tracing.ErrorTrace:
fmt.Printf("Error: %v\n", trace.Error)
default:
}
}
}()
instance.WaitUntilComplete(context.Background())
<-done
} else {
fmt.Printf("failed to instantiate the process: %s\n", err)
}
}
},
Args: cobra.MinimumNArgs(1),
}
func init() {
rootCmd.AddCommand(executeCmd)
}
|
# flake8: noqa
from rastervision.core.evaluation.evaluation_item import *
from rastervision.core.evaluation.class_evaluation_item import *
from rastervision.core.evaluation.evaluator import *
from rastervision.core.evaluation.evaluator_config import *
from rastervision.core.evaluation.classification_evaluation import *
from rastervision.core.evaluation.classification_evaluator import *
from rastervision.core.evaluation.classification_evaluator_config import *
from rastervision.core.evaluation.chip_classification_evaluation import *
from rastervision.core.evaluation.chip_classification_evaluator import *
from rastervision.core.evaluation.chip_classification_evaluator_config import *
from rastervision.core.evaluation.semantic_segmentation_evaluation import *
from rastervision.core.evaluation.semantic_segmentation_evaluator import *
from rastervision.core.evaluation.semantic_segmentation_evaluator_config import *
from rastervision.core.evaluation.object_detection_evaluation import *
from rastervision.core.evaluation.object_detection_evaluator import *
from rastervision.core.evaluation.object_detection_evaluator_config import *
|
/**
* The method checks the header for a keep-alive or close information in the connection field.
*
* @param header
* the header to check
* @return <code>false</code> if the connection has to be kept alive, <code>true</code> if not
*/
boolean isClosedByHeader(final HttpHeader header) {
final String connectionField;
if (header.hasField(RequestHeaders.CONNECTION)) {
connectionField = header.getField(RequestHeaders.CONNECTION).getValue().toString().toLowerCase();
} else {
connectionField = null;
}
switch (header.getVersion()) {
case HTTP_1_0:
return !"keep-alive".equals(connectionField);
case HTTP_1_1:
return "close".equals(connectionField);
default:
return true;
}
} |
/**
* Utility class for handling device ABIs
*/
public class AbiUtils {
/**
* The set of 32Bit ABIs.
*/
private static final Set<String> ABIS_32BIT = new HashSet<String>();
/**
* The set of 64Bit ABIs.
*/
private static final Set<String> ABIS_64BIT = new HashSet<String>();
/**
* The set of ARM ABIs.
*/
private static final Set<String> ARM_ABIS = new HashSet<String>();
/**
* The set of Intel ABIs.
*/
private static final Set<String> INTEL_ABIS = new HashSet<String>();
/**
* The set of Mips ABIs.
*/
private static final Set<String> MIPS_ABIS = new HashSet<String>();
/**
* The set of ABI names which CTS supports.
*/
private static final Set<String> ABIS_SUPPORTED_BY_CTS = new HashSet<String>();
/**
* The map of architecture to ABI.
*/
private static final Map<String, Set<String>> ARCH_TO_ABIS = new HashMap<String, Set<String>>();
static {
ABIS_32BIT.add("armeabi-v7a");
ABIS_32BIT.add("x86");
ABIS_32BIT.add("mips");
ABIS_64BIT.add("arm64-v8a");
ABIS_64BIT.add("x86_64");
ABIS_64BIT.add("mips64");
ARM_ABIS.add("armeabi-v7a");
ARM_ABIS.add("arm64-v8a");
INTEL_ABIS.add("x86");
INTEL_ABIS.add("x86_64");
MIPS_ABIS.add("mips");
MIPS_ABIS.add("mips64");
ARCH_TO_ABIS.put("arm", ARM_ABIS);
ARCH_TO_ABIS.put("arm64", ARM_ABIS);
ARCH_TO_ABIS.put("x86", INTEL_ABIS);
ARCH_TO_ABIS.put("x86_64", INTEL_ABIS);
ARCH_TO_ABIS.put("mips", MIPS_ABIS);
ARCH_TO_ABIS.put("mips64", MIPS_ABIS);
ABIS_SUPPORTED_BY_CTS.addAll(ARM_ABIS);
ABIS_SUPPORTED_BY_CTS.addAll(INTEL_ABIS);
ABIS_SUPPORTED_BY_CTS.addAll(MIPS_ABIS);
}
/**
* Private constructor to avoid instantiation.
*/
private AbiUtils() {}
/**
* Returns the set of ABIs associated with the given architecture.
* @param arch The architecture to look up.
* @return a new Set containing the ABIs.
*/
public static Set<String> getAbisForArch(String arch) {
if (arch == null || arch.isEmpty() || !ARCH_TO_ABIS.containsKey(arch)) {
return getAbisSupportedByCts();
}
return new HashSet<String>(ARCH_TO_ABIS.get(arch));
}
/**
* Returns the set of ABIs supported by CTS.
* @return a new Set containing the supported ABIs.
*/
public static Set<String> getAbisSupportedByCts() {
return new HashSet<String>(ABIS_SUPPORTED_BY_CTS);
}
/**
* @param abi The ABI name to test.
* @return true if the given ABI is supported by CTS.
*/
public static boolean isAbiSupportedByCts(String abi) {
return ABIS_SUPPORTED_BY_CTS.contains(abi);
}
/**
* Creates a flag for the given ABI.
* @param abi the ABI to create the flag for.
* @return a string which can be add to a command sent to ADB.
*/
public static String createAbiFlag(String abi) {
if (abi == null || abi.isEmpty() || !isAbiSupportedByCts(abi)) {
return "";
}
return String.format("--abi %s ", abi);
}
/**
* Creates a unique id from the given ABI and name.
* @param abi The ABI to use.
* @param name The name to use.
* @return a string which uniquely identifies a run.
*/
public static String createId(String abi, String name) {
return String.format("%s %s", abi, name);
}
/**
* Parses a unique id into the ABI and name.
* @param id The id to parse.
* @return a string array containing the ABI and name.
*/
public static String[] parseId(String id) {
if (id == null || !id.contains(" ")) {
return new String[] {"", ""};
}
		return id.split(" ", 2);
}
/**
* @return the test name portion of the test id.
* e.g. armeabi-v7a android.mytest = android.mytest
*/
public static String parseTestName(String id) {
return parseId(id)[1];
}
/**
* @return the abi portion of the test id.
* e.g. armeabi-v7a android.mytest = armeabi-v7a
*/
public static String parseAbi(String id) {
return parseId(id)[0];
}
/**
* @param name The name of the ABI.
* @return The bitness of the ABI with the given name
*/
public static String getBitness(String name) {
return ABIS_32BIT.contains(name) ? "32" : "64";
}
/**
	 * @param unsupportedAbiDescription A colon-separated description whose second segment is a comma-separated list of ABIs.
	 * @return A Set of Strings containing valid ABIs.
*/
public static Set<String> parseAbiList(String unsupportedAbiDescription) {
Set<String> abiSet = new HashSet<>();
String[] descSegments = unsupportedAbiDescription.split(":");
if (descSegments.length == 2) {
for (String abi : descSegments[1].split(",")) {
String trimmedAbi = abi.trim();
if (isAbiSupportedByCts(trimmedAbi)) {
abiSet.add(trimmedAbi);
}
}
}
return abiSet;
}
} |
// extractError detects any possible errors in responses from Gremlin Server and generates an error for each code
func extractError(r interfaces.Response) error {
switch r.Status.Code {
case interfaces.StatusSuccess, interfaces.StatusNoContent, interfaces.StatusPartialContent:
return nil
case interfaces.StatusUnauthorized:
return Error{Wrapped: fmt.Errorf("unauthorized: %s", r.Status.Message), Category: ErrorCategoryAuth}
case interfaces.StatusAuthenticate:
return Error{Wrapped: fmt.Errorf("not authenticated: %s", r.Status.Message), Category: ErrorCategoryAuth}
case interfaces.StatusMalformedRequest:
return Error{Wrapped: fmt.Errorf("malformed request: %s", r.Status.Message), Category: ErrorCategoryClient}
case interfaces.StatusInvalidRequestArguments:
return Error{Wrapped: fmt.Errorf("invalid request arguments: %s", r.Status.Message), Category: ErrorCategoryClient}
case interfaces.StatusServerError:
return Error{Wrapped: fmt.Errorf("server error: %s", r.Status.Message), Category: ErrorCategoryServer}
case interfaces.StatusScriptEvaluationError:
return Error{Wrapped: fmt.Errorf("script evaluation failed: %s", r.Status.Message), Category: ErrorCategoryClient}
case interfaces.StatusServerTimeout:
return Error{Wrapped: fmt.Errorf("server timeout: %s", r.Status.Message), Category: ErrorCategoryServer}
case interfaces.StatusServerSerializationError:
		return Error{Wrapped: fmt.Errorf("server serialization error: %s", r.Status.Message), Category: ErrorCategoryClient}
default:
return Error{Wrapped: fmt.Errorf("unknown error: %s", r.Status.Message), Category: ErrorCategoryGeneral}
}
} |
<filename>ControlListener.h
#ifndef CONTROL_LISTENER_H
#define CONTROL_LISTENER_H
#include "Arduino.h"
class ControlListener {
public:
    virtual ~ControlListener() = default;
virtual void adjust(double x) = 0;
};
#endif
|
def destroy_classical_register(self, name):
if name not in self.__classical_registers:
raise QISKitError("Can't destroy this register: Not present")
else:
logger.info(">> classical register destroyed: %s", name)
del self.__classical_registers[name] |
<reponame>teilin/adventofcode2020
package main
import (
"fmt"
"io/ioutil"
"log"
"math"
"os"
"strconv"
"strings"
)
func readInput(inputString string) (int, []int) {
file, err := ioutil.ReadFile(inputString)
if err != nil {
log.Fatal(err)
}
tmp := strings.Split(string(file), "\n")
myTimeStamp, _ := strconv.Atoi(tmp[0])
var busIDs []int
for _, busID := range strings.Split(tmp[1], ",") {
if busID != "x" {
c, _ := strconv.Atoi(busID)
busIDs = append(busIDs, c)
}
}
return myTimeStamp, busIDs
}
func readInput2(inputString string) []string {
file, err := ioutil.ReadFile(inputString)
if err != nil {
log.Fatal(err)
}
tmp := strings.Split(string(file), "\n")
var busIDs []string
for _, busID := range strings.Split(tmp[1], ",") {
busIDs = append(busIDs, busID)
}
return busIDs
}
func part1(earliestTimestamp int, busIDs []int) (int, int) {
index := 1
earliestBusID := 0
earliestBusIDTime := 0
for _, busID := range busIDs {
for index > 0 {
if busID*index >= earliestTimestamp {
if earliestBusID == 0 || busID*index <= earliestBusIDTime {
earliestBusID = busID
earliestBusIDTime = busID * index
}
break
}
index++
}
index = 1
}
return earliestBusID, earliestBusIDTime
}
func modPower(b, e, mod int64) int64 {
if e == 0 {
return 1
} else if e%2 == 0 {
return modPower((b*b)%mod, e/2, mod)
}
return (b * modPower(b, e-1, mod)) % mod
}
func modInverse(a, m int64) int64 {
m0 := m
var y int64 = 0
var x int64 = 1
if m == 1 {
return 0
}
for a > 1 {
q := math.Floor(float64(a) / float64(m))
t := m
m = a % m
a = t
t = y
y = x - int64(q)*y
x = t
}
if x < 0 {
x = x + m0
}
return x
}
func part2(busList []string) int64 {
var busMap map[int64]int64 = make(map[int64]int64)
var N int64 = 1
for i, bus := range busList {
if bus != "x" {
b, _ := strconv.Atoi(bus)
busMap[int64((b-i+1)%b)] = int64(b)
N *= int64(b)
}
}
var ans int64 = 0
for i, b := range busMap {
var ni int64 = N / b
mi := modInverse(ni, b)
var forB int64 = i * mi * ni
ans += forB
}
return ans%N - 1
}
func main() {
earliestTimestamp, busIDs := readInput(os.Args[1])
busID, time := part1(earliestTimestamp, busIDs)
minutesWait := time - earliestTimestamp
fmt.Println("Earliest busid: " + strconv.Itoa(busID) + " at time " + strconv.Itoa(time) + ". Multiplied: " + strconv.Itoa(busID*minutesWait))
busList := readInput2(os.Args[1])
firstDeparture := part2(busList)
fmt.Println(firstDeparture)
}
|
#!/usr/bin/env python
#
# Examples from the talk:
# <NAME> - Google I/O 2012 - Go Concurrency Patterns
# http://www.youtube.com/watch?v=f6kdp27TYZs
# http://code.google.com/p/go/source/browse/2012/concurrency.slide?repo=talks
from chan import Chan, chanselect, quickthread
from collections import namedtuple
from collections import OrderedDict
import time
import random
import sys
EXAMPLES = OrderedDict()
#---------------------------------------------------------------------------
# Fan In
#---------------------------------------------------------------------------
def example_fan_in():
def boring(message):
def sender(message, c):
i = 0
while True:
c.put("%s: %d" % (message, i))
time.sleep(0.2 * random.random())
i += 1
c = Chan()
quickthread(sender, message, c)
return c
def fan_in(input1, input2):
def forwarder(input, output):
while True:
output.put(input.get())
c = Chan()
quickthread(forwarder, input1, c)
quickthread(forwarder, input2, c)
return c
c = fan_in(boring("Joe"), boring("Ann"))
for i in xrange(10):
print c.get()
print "You're both boring; I'm leaving."
EXAMPLES['fanin'] = example_fan_in
#---------------------------------------------------------------------------
# Sequence
#---------------------------------------------------------------------------
def example_sequence():
Message = namedtuple("Message", ['string', 'wait'])
def boring(msg):
c = Chan()
wait_for_it = Chan()
def sender():
i = 0
while True:
c.put(Message("%s: %d" % (msg, i), wait_for_it))
time.sleep(0.2 * random.random())
wait_for_it.get()
i += 1
quickthread(sender)
return c
def fan_in(*input_list):
def forward(input, output):
while True:
output.put(input.get())
c = Chan()
for input in input_list:
quickthread(forward, input, c)
return c
c = fan_in(boring('Joe'), boring('Ann'))
for i in xrange(5):
msg1 = c.get(); print msg1.string
msg2 = c.get(); print msg2.string
msg1.wait.put(True)
msg2.wait.put(True)
print "You're all boring; I'm leaving"
EXAMPLES['sequence'] = example_sequence
#---------------------------------------------------------------------------
# Select
#---------------------------------------------------------------------------
def example_select():
def boring(msg):
c = Chan()
def sender():
i = 0
while True:
c.put("%s: %d" % (msg, i))
time.sleep(1.0 * random.random())
i += 1
quickthread(sender)
return c
def fan_in(input1, input2):
c = Chan()
def forward():
while True:
chan, value = chanselect([input1, input2], [])
c.put(value)
quickthread(forward)
return c
c = fan_in(boring("Joe"), boring("Ann"))
for i in xrange(10):
print c.get()
print "You're both boring; I'm leaving."
EXAMPLES['select'] = example_select
#---------------------------------------------------------------------------
# Timeout
#---------------------------------------------------------------------------
def timer(duration):
def timer_thread(chan, duration):
time.sleep(duration)
chan.put(time.time())
c = Chan()
quickthread(timer_thread, c, duration)
return c
def example_timeout():
def boring(msg):
c = Chan()
def sender():
i = 0
while True:
c.put("%s: %d" % (msg, i))
time.sleep(1.5 * random.random())
i += 1
quickthread(sender)
return c
c = boring("Joe")
while True:
chan, value = chanselect([c, timer(1.0)], [])
if chan == c:
print value
else:
print "You're too slow."
return
EXAMPLES['timeout'] = example_timeout
#---------------------------------------------------------------------------
# RCV Quit
#---------------------------------------------------------------------------
def example_rcvquit():
def boring(msg, quit):
c = Chan()
def sender():
i = 0
while True:
time.sleep(1.0 * random.random())
chan, _ = chanselect([quit], [(c, "%s: %d" % (msg, i))])
if chan == quit:
quit.put("See you!")
                    return
i += 1
quickthread(sender)
return c
quit = Chan()
c = boring("Joe", quit)
for i in xrange(random.randint(0, 10), 0, -1):
print c.get()
quit.put("Bye!")
print "Joe says:", quit.get()
EXAMPLES['rcvquit'] = example_rcvquit
#---------------------------------------------------------------------------
# Daisy chain
#---------------------------------------------------------------------------
def example_daisy():
def f(left, right):
left.put(1 + right.get())
N = 1000 # Python's threads aren't that lightweight
leftmost = Chan()
rightmost = leftmost
left = leftmost
for i in xrange(N):
right = Chan()
quickthread(f, left, right)
left = right
def putter():
right.put(1)
quickthread(putter)
print leftmost.get()
EXAMPLES['daisy'] = example_daisy
def main():
if len(sys.argv) < 2 or sys.argv[1] not in EXAMPLES:
print "Possible examples:"
for example in EXAMPLES.iterkeys():
print " %s" % example
return
EXAMPLES[sys.argv[1]]()
if __name__ == '__main__':
main()
|
<reponame>parampavar/vector
use metrics::counter;
use vector_core::internal_event::InternalEvent;
use super::prelude::{error_stage, error_type};
#[derive(Debug)]
pub struct RedisSendEventError<'a> {
error: &'a redis::RedisError,
error_code: String,
}
#[cfg(feature = "sinks-redis")]
impl<'a> RedisSendEventError<'a> {
pub fn new(error: &'a redis::RedisError) -> Self {
Self {
error,
error_code: error.code().unwrap_or("UNKNOWN").to_string(),
}
}
}
impl<'a> InternalEvent for RedisSendEventError<'a> {
fn emit(self) {
error!(
message = "Failed to send message.",
error = %self.error,
error_code = %self.error_code,
error_type = error_type::WRITER_FAILED,
stage = error_stage::SENDING,
rate_limit_secs = 10,
);
counter!(
"component_errors_total", 1,
"error_code" => self.error_code,
"error_type" => error_type::WRITER_FAILED,
"stage" => error_stage::SENDING,
);
// deprecated
counter!("send_errors_total", 1);
}
}
#[derive(Debug)]
pub struct RedisReceiveEventError {
error: redis::RedisError,
error_code: String,
}
impl From<redis::RedisError> for RedisReceiveEventError {
fn from(error: redis::RedisError) -> Self {
let error_code = error.code().unwrap_or("UNKNOWN").to_string();
Self { error, error_code }
}
}
impl InternalEvent for RedisReceiveEventError {
fn emit(self) {
error!(
message = "Failed to read message.",
error = %self.error,
error_code = %self.error_code,
error_type = error_type::READER_FAILED,
            stage = error_stage::RECEIVING,
rate_limit_secs = 10,
);
counter!(
"component_errors_total", 1,
"error_code" => self.error_code,
"error_type" => error_type::READER_FAILED,
"stage" => error_stage::RECEIVING,
);
}
}
|
# Generated by Django 2.2.6 on 2020-01-06 13:37
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django_countries.fields
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('yatranepal', '0003_auto_20200105_1905'),
]
operations = [
migrations.AlterField(
model_name='adventure',
name='adventureDesc',
field=models.TextField(verbose_name='Adventure Description'),
),
migrations.AlterField(
model_name='adventure',
name='adventureImage',
field=models.ImageField(upload_to='adventures/', verbose_name='Adventure Image'),
),
migrations.AlterField(
model_name='adventure',
name='adventureName',
field=models.CharField(max_length=200, verbose_name='Name of Adventure'),
),
migrations.AlterField(
model_name='adventure',
name='adventureSlug',
field=models.CharField(max_length=50, verbose_name='Adventure URL'),
),
migrations.AlterField(
model_name='hotel',
name='hotelAddress',
field=models.CharField(max_length=255, verbose_name='Location of Hotel'),
),
migrations.AlterField(
model_name='hotel',
name='hotelDesc',
field=models.TextField(verbose_name='Hotel Description'),
),
migrations.AlterField(
model_name='hotel',
name='hotelFeatures',
field=models.TextField(verbose_name='Features of Hotel(in Points)'),
),
migrations.AlterField(
model_name='hotel',
name='hotelImage',
field=models.ImageField(upload_to='hotels/', verbose_name='Image of Hotel'),
),
migrations.AlterField(
model_name='hotel',
name='hotelName',
field=models.CharField(max_length=255, verbose_name='Name of the Hotel'),
),
migrations.AlterField(
model_name='hotel',
name='hotelPrice',
field=models.IntegerField(verbose_name='Price of Hotel per Room per day'),
),
migrations.AlterField(
model_name='hotel',
name='hotelSlug',
field=models.CharField(max_length=50, verbose_name='Hotel URL'),
),
migrations.AlterField(
model_name='package',
name='packageDesc',
field=models.TextField(verbose_name='Package Description'),
),
migrations.AlterField(
model_name='package',
name='packageFeatures',
field=models.TextField(verbose_name='Package Features'),
),
migrations.AlterField(
model_name='package',
name='packageImage',
field=models.ImageField(upload_to='packages/', verbose_name='Package Image'),
),
migrations.AlterField(
model_name='package',
name='packageName',
field=models.CharField(max_length=200, verbose_name='Package Name'),
),
migrations.AlterField(
model_name='package',
name='packagePrice',
field=models.IntegerField(verbose_name='Package Tentative Price(NRs)'),
),
migrations.AlterField(
model_name='package',
name='packageSlug',
field=models.CharField(max_length=50, verbose_name='Package URL'),
),
migrations.AlterField(
model_name='package',
name='placeName',
field=models.ManyToManyField(to='yatranepal.Place', verbose_name='Places that Lies in This Package'),
),
migrations.AlterField(
model_name='place',
name='placeDesc',
field=models.TextField(verbose_name='Place Description'),
),
migrations.AlterField(
model_name='place',
name='placeImage',
field=models.ImageField(upload_to='places/', verbose_name='Image of Place'),
),
migrations.AlterField(
model_name='place',
name='placeName',
field=models.CharField(max_length=255, verbose_name='Name of the Place'),
),
migrations.AlterField(
model_name='place',
name='placeSlug',
field=models.CharField(max_length=50, verbose_name='Place URL'),
),
migrations.AlterField(
model_name='transportation',
name='fare',
field=models.IntegerField(verbose_name='Price(NRs.)'),
),
migrations.AlterField(
model_name='transportation',
name='placeFrom',
field=models.CharField(max_length=200, verbose_name='Source Place'),
),
migrations.AlterField(
model_name='transportation',
name='placeTo',
field=models.CharField(max_length=200, verbose_name='Destination Place'),
),
migrations.AlterField(
model_name='transportation',
name='transportationType',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='yatranepal.TransportationType', verbose_name='TransportationType'),
),
migrations.AlterField(
model_name='transportationtype',
name='transportationType',
field=models.CharField(max_length=255, verbose_name='Transportation Type'),
),
migrations.CreateModel(
name='Profile',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('bio', models.CharField(max_length=150)),
('country', django_countries.fields.CountryField(max_length=2)),
('address', models.CharField(max_length=100)),
('phone', models.IntegerField()),
('dob', models.DateField()),
('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
]
|
// Author: <NAME>
// Email: <EMAIL>
// Project page: http://code.google.com/p/csgtestworld/
// Website: http://createuniverses.blogspot.com/
#include "ML_Maths.h"
#include <cassert> // assert
#include <cstring> // memcpy, for bit-level reinterpretation
void mlSinCos(mlAngle angle, mlFloat * sinResult, mlFloat * cosResult)
{
*sinResult = mlSin(angle);
*cosResult = mlCos(angle);
}
mlAngle mlAngleNormalise(mlAngle x)
{
mlFloat t = x * mlHalfInversePi + 100.5f;
mlFloat f = t - mlFloat(int(t));
mlFloat result = (f - 0.5f) * mlTwoPi;
return result;
}
static UInt32 mlReinterpretFloatAsInteger(mlFloat32 x)
{
	// Type-punning through reinterpret_cast violates strict aliasing;
	// memcpy is the well-defined way to reinterpret the bits.
	UInt32 i;
	memcpy(&i, &x, sizeof(i));
	return i;
}
static bool mlInternal_IsNan(mlFloat32 x)
{
assert(sizeof(x) == sizeof(mlFloat32));
UInt32 i = mlReinterpretFloatAsInteger(x);
return ((i & 0x7f800000L) == 0x7f800000L) &&
       ((i & 0x007fffffL) != 0); // exponent all ones, non-zero fraction => NaN
}
static bool mlInternal_IsInfinity(mlFloat32 x)
{
assert(sizeof(x) == sizeof(mlFloat32));
UInt32 i = mlReinterpretFloatAsInteger(x);
return ((i & 0x7f800000L) == 0x7f800000L) &&
       ((i & 0x007fffffL) == 0); // exponent all ones, zero fraction => infinity
}
bool mlIsValid(mlFloat x)
{
return !mlInternal_IsNan(x) && !mlInternal_IsInfinity(x);
}
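The bit-mask tests above follow directly from the IEEE-754 binary32 layout: an all-ones exponent with a non-zero fraction is NaN, and an all-ones exponent with a zero fraction is infinity. A Python sketch of the same checks, using `struct` for the safe bit reinterpretation:

```python
import math
import struct

EXP_MASK = 0x7f800000   # exponent bits of an IEEE-754 binary32
FRAC_MASK = 0x007fffff  # fraction bits

def float32_bits(x):
    """Reinterpret a float as the bits of its binary32 encoding
    (the analogue of the C++ memcpy trick)."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def is_nan32(x):
    i = float32_bits(x)
    return (i & EXP_MASK) == EXP_MASK and (i & FRAC_MASK) != 0

def is_inf32(x):
    i = float32_bits(x)
    return (i & EXP_MASK) == EXP_MASK and (i & FRAC_MASK) == 0

def is_valid32(x):
    return not is_nan32(x) and not is_inf32(x)

assert is_nan32(float("nan")) == math.isnan(float("nan"))
assert is_inf32(float("inf")) and not is_valid32(float("inf"))
assert is_valid32(1.5) and is_valid32(0.0)
```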
|
import { useNavigate } from 'react-router-dom';
import { Add } from '@mui/icons-material';
import { Button, Container, Divider, Stack } from '@mui/material';
import Clients from './Clients';
const ClientsRouteComponent = () => {
const navigate = useNavigate();
return <Container maxWidth="md" sx={{ my: 2 }}>
<Stack direction="column" spacing={1}>
<Button startIcon={<Add />} variant="contained" onClick={() => navigate("new")}>Create New Client</Button>
<Divider />
<Clients />
</Stack>
</Container>
}
export default ClientsRouteComponent
|
RESEARCH ARTICLE Boundary value methods with Crank-Nicolson preconditioner for option pricing model
Under a jump-diffusion process, the option pricing function satisfies a partial integro-differential equation. A fourth-order compact scheme is used to discretize the spatial variable of this equation. The boundary value method is then applied in the temporal direction because of its unconditional stability. To avert the numerical oscillation caused by the non-smooth payoff, both the second-order backward difference formula and the boundary value method are adopted on the initial time layer. Moreover, the resulting linear system of the boundary value method applied in the follow-up layers is solved by the GMRES method with a preconditioner derived from the Crank-Nicolson scheme. With regard to this preconditioner, studies of invertibility and of convergence of the right-preconditioned GMRES method are given. Numerical experiments demonstrate the superiority of the method.
|
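As a rough illustration of why a Crank-Nicolson-style matrix makes a good preconditioner in the abstract above, consider a toy 1D diffusion discretization (an assumption for illustration only; the paper uses a fourth-order compact scheme and boundary value methods). The preconditioned operator M⁻¹A has a much smaller condition number than A, which is what accelerates GMRES:

```python
import numpy as np

# Toy setting: a backward-Euler-style step of 1D diffusion gives
# A = I + c*K, while the Crank-Nicolson matrix M = I + (c/2)*K
# serves as the preconditioner. K is the standard 1D Laplacian stencil.
n, c = 50, 10.0
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.eye(n) + c * K
M = np.eye(n) + (c / 2) * K

cond_plain = np.linalg.cond(A)
cond_prec = np.linalg.cond(np.linalg.solve(M, A))  # spectrum of M^{-1} A
print(cond_prec < cond_plain)  # -> True: the spectrum is tightly clustered
```

Because A and M are both polynomials in K, the eigenvalues of M⁻¹A are (1 + cλ)/(1 + cλ/2) ∈ (1, 2), so the preconditioned condition number stays O(1) regardless of how stiff A is.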
/**
* Created by nguyenbon on 10/18/16.
*/
import android.text.TextUtils;
public class CarPresenterImpl implements CarPresenter {
protected CarView mCarView;
protected CarInteractorImpl mCarInteractorImpl;
public CarPresenterImpl(CarView carView) {
this.mCarView = carView;
this.mCarInteractorImpl = new CarInteractorImpl();
}
@Override
public void validate(String value) {
if (TextUtils.isEmpty(value)) {
mCarView.setEdittextError();
} else {
long registered = mCarInteractorImpl.getRegistered();
if (registered < 2002) {
validateEngine();
} else {
validateEmission();
}
}
}
@Override
public void checkRegistered(String value) {
if (TextUtils.isEmpty(value)) {
mCarView.hiddenEdittextEmission();
mCarView.hiddenEdittextEngine();
} else {
long registered = Long.parseLong(value);
mCarInteractorImpl.setRegistered(registered);
if (registered < 2002) {
mCarView.hiddenEdittextEmission();
mCarView.showEdittextEngine();
} else {
mCarView.hiddenEdittextEngine();
mCarView.showEdittextEmission();
}
}
}
private void validateEngine() {
if (TextUtils.isEmpty(mCarView.getEngineValue())) {
mCarView.setEdittextEngineError();
return;
} else {
mCarInteractorImpl.setEngine(Long.parseLong(mCarView.getEngineValue()));
}
mCarView.displayTaxResult(mCarInteractorImpl.calcTax());
}
private void validateEmission() {
if (TextUtils.isEmpty(mCarView.getEmissionValue())) {
mCarView.setEdittextEmissionError();
return;
} else {
mCarInteractorImpl.setEmission(Long.parseLong(mCarView.getEmissionValue()));
}
mCarView.displayTaxResult(mCarInteractorImpl.calcTax());
}
} |
Cybersecurity analysts say they have discovered a malware-infected version of Pokémon Go for Android devices, which is of particular concern to those in regions where the extremely popular game has not yet launched and users are installing versions downloaded from file-sharing sites.
Proofpoint researchers found an APK (the Android app file format) of Pokémon Go carrying the remote-access exploit called DroidJack. Symantec discovered the malware in late 2014 and describes it as "a Trojan horse for Android devices that opens a back door on the compromised device [and] also steals information."
This exploit is not in any official app-store version of Pokémon Go where that game has launched — currently only the United States, Australia and New Zealand. However, users in other regions looking to get in on the craze could potentially encounter this infected edition.
Proofpoint's writeup on the malware includes detailed screenshots and descriptions of how to know for sure if your device has been infected. Proofpoint notes that "we have not observed this malicious APK in the wild" yet, but added it was found on a known malicious file repository about three days after Pokémon Go's launch in Oceania last week.
Though Symantec gives DroidJack its lowest threat rating at the moment, it has a renewed potential to spread thanks to Pokémon Go's popularity and free-to-download format. Some websites have posted guides on how to "side-load" unofficial copies of Pokémon Go, and the file has been shared directly among users.
"As in the case of the compromised Pokémon Go APK we analyzed, the potential exists for attackers to completely compromise a mobile device," Proofpoint wrote. "If that device is brought onto a corporate network, networked resources are also at risk. ... Bottom line, just because you can get the latest software on your device does not mean that you should."
|
t = int(input())
for _ in range(t):
    l, r = map(int, input().split())
    high = bin(r)[2:]                  # binary of the upper bound r
    low = bin(l)[2:].zfill(len(high))  # lower bound l, zero-padded to match
    best_ones = high.count("1")
    best = high
    for i in range(len(high)):
        if high[i] != low[i]:
            # At the first differing bit, high[i] is 1 and low[i] is 0:
            # dropping that bit to 0 and setting every lower bit to 1
            # yields a candidate that is still within [l, r].
            ones = len(high) - i - 1 + high[:i].count("1")
            if ones >= best_ones:
                best = high[:i] + "0" + "1" * (len(high) - i - 1)
            break
    print(int(best, 2))
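The loop above constructs, per query, the smallest number in [l, r] with the maximum number of set bits. An equivalent and shorter greedy (a standard trick, assumed here to match the intent) repeatedly sets the lowest zero bit of l while the result stays within r, and a brute force confirms the two agree:

```python
def max_ones_in_range(l, r):
    """Smallest x in [l, r] with the maximum number of set bits."""
    x = l
    while x | (x + 1) <= r:  # x | (x + 1) sets the lowest zero bit of x
        x |= x + 1
    return x

def brute(l, r):
    # Max popcount first, smallest value on ties.
    return min(range(l, r + 1), key=lambda x: (-bin(x).count("1"), x))

for lo in range(60):
    for hi in range(lo, 60):
        assert max_ones_in_range(lo, hi) == brute(lo, hi)
print(max_ones_in_range(1, 10))  # -> 7 (0b111)
```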
|
/**
* Ensures that the list of builders is not null. If it's null, the list is created and
* initialized to be the same size as the messages list with null entries.
*/
private void ensureBuilders() {
if (this.builders == null) {
this.builders = new ArrayList<SingleFieldBuilder<MType, BType, IType>>(messages.size());
for (int i = 0; i < messages.size(); i++) {
builders.add(null);
}
}
} |
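The same lazy, size-matched initialization can be sketched in Python (class and method names here are illustrative, not from any protobuf API):

```python
class RepeatedFieldAccessor:
    """Keep a builders list lazily created to mirror the messages list,
    one slot (initially None) per message, as in ensureBuilders()."""
    def __init__(self, messages):
        self.messages = messages
        self.builders = None  # created on first use

    def ensure_builders(self):
        if self.builders is None:
            # Same length as messages, filled with None placeholders.
            self.builders = [None] * len(self.messages)
        return self.builders

acc = RepeatedFieldAccessor(["m1", "m2", "m3"])
print(acc.ensure_builders())  # -> [None, None, None]
```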
/*
//@HEADER
// ************************************************************************
//
// rom_problem_members.hpp
// Pressio
// Copyright 2019
// National Technology & Engineering Solutions of Sandia, LLC (NTESS)
//
// Under the terms of Contract DE-NA0003525 with NTESS, the
// U.S. Government retains certain rights in this software.
//
// Pressio is licensed under BSD-3-Clause terms of use:
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions
// are met:
//
// 1. Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright
// notice, this list of conditions and the following disclaimer in the
// documentation and/or other materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its
// contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
// FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
// COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
// INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
// (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
// SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
// HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
// IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
// POSSIBILITY OF SUCH DAMAGE.
//
// Questions? Contact <NAME> (<EMAIL>)
//
// ************************************************************************
//@HEADER
*/
#ifndef ROM_GALERKIN_IMPL_ROM_PROBLEM_MEMBERS_HPP_
#define ROM_GALERKIN_IMPL_ROM_PROBLEM_MEMBERS_HPP_
namespace pressio{ namespace rom{ namespace galerkin{ namespace impl{
template <class T, class ops_t, class projector_t>
struct ProjectorMixin : T
{
projector_t projector_;
ProjectorMixin() = delete;
ProjectorMixin(const ProjectorMixin &) = default;
ProjectorMixin & operator=(const ProjectorMixin &) = delete;
ProjectorMixin(ProjectorMixin &&) = default;
ProjectorMixin & operator=(ProjectorMixin &&) = delete;
~ProjectorMixin() = default;
template<
class T1, class T2, class T3, class T4, class _ops_t = ops_t,
mpl::enable_if_t<std::is_void<_ops_t>::value, int > = 0
>
ProjectorMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative)
: T(fomObj, decoder, romStateIn, fomNominalStateNative),
projector_(decoder)
{}
template<
class T1, class T2, class T3, class T4, class _ops_t = ops_t,
mpl::enable_if_t<mpl::not_void<_ops_t>::value, int > = 0
>
ProjectorMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const _ops_t & udOps)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, udOps),
projector_(decoder, udOps)
{}
};
//---------------------------------------------------
// implicit policies
//---------------------------------------------------
template <class T, bool def, bool hypredvelo, bool, typename ... Args>
struct ImplicitPoliciesMixin;
// specialize for default
template <class T, class r_pol_t, class j_pol_t>
struct ImplicitPoliciesMixin<T, true, false, false, r_pol_t, j_pol_t> : T
{
r_pol_t residualPolicy_;
j_pol_t jacobianPolicy_;
ImplicitPoliciesMixin() = delete;
ImplicitPoliciesMixin(const ImplicitPoliciesMixin &) = default;
ImplicitPoliciesMixin & operator=(const ImplicitPoliciesMixin &) = delete;
ImplicitPoliciesMixin(ImplicitPoliciesMixin &&) = default;
ImplicitPoliciesMixin & operator=(ImplicitPoliciesMixin &&) = delete;
~ImplicitPoliciesMixin() = default;
template<class T1, class T2, class T3, class T4, class ...Args>
ImplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
Args && ... args)
: T(romStateIn, fomObj, decoder, fomNominalStateNative, std::forward<Args>(args)...),
residualPolicy_(::pressio::ops::extent(romStateIn,0), T::projector_, T::fomCRef(), T::fomStatesMngr_),
jacobianPolicy_(::pressio::ops::extent(romStateIn,0), T::projector_, T::fomCRef(), T::fomStatesMngr_, decoder)
{}
};
// specialize for hyp-red velo
template <class T, class r_pol_t, class j_pol_t>
struct ImplicitPoliciesMixin<T, false, true, false, r_pol_t, j_pol_t> : T
{
r_pol_t residualPolicy_;
j_pol_t jacobianPolicy_;
ImplicitPoliciesMixin() = delete;
ImplicitPoliciesMixin(const ImplicitPoliciesMixin &) = default;
ImplicitPoliciesMixin & operator=(const ImplicitPoliciesMixin &) = delete;
ImplicitPoliciesMixin(ImplicitPoliciesMixin &&) = default;
ImplicitPoliciesMixin & operator=(ImplicitPoliciesMixin &&) = delete;
~ImplicitPoliciesMixin() = default;
template<class T1, class T2, class T3, class T4, class T5, typename ...Args>
ImplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const T5 & projector,
Args && ... args)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, std::forward<Args>(args)...),
residualPolicy_(::pressio::ops::extent(romStateIn,0), projector, T::fomCRef(), T::fomStatesMngr_),
jacobianPolicy_(::pressio::ops::extent(romStateIn,0), projector, T::fomCRef(), T::fomStatesMngr_, decoder)
{}
};
// specialize for masked velo
template <class T, class masker_t, class r_pol_t, class j_pol_t>
struct ImplicitPoliciesMixin<T, false, false, true, masker_t, r_pol_t, j_pol_t> : T
{
/* here we need to consider also the case where the masker is a pybind11::object
that is passed in directly from python: in that scenario, masker_t is
a C++ wrapper class wrapping the actual pure python class,
so we need to create an object of this masker_t and pass that to the policies
because the policies do NOT accept pybind11::objects
*/
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
const masker_t masker_;
#endif
r_pol_t residualPolicy_;
j_pol_t jacobianPolicy_;
ImplicitPoliciesMixin() = delete;
ImplicitPoliciesMixin(const ImplicitPoliciesMixin &) = default;
ImplicitPoliciesMixin & operator=(const ImplicitPoliciesMixin &) = delete;
ImplicitPoliciesMixin(ImplicitPoliciesMixin &&) = default;
ImplicitPoliciesMixin & operator=(ImplicitPoliciesMixin &&) = delete;
~ImplicitPoliciesMixin() = default;
template<class T1, class T2, class T3, class T4, class T5, class T6, class ...Args>
ImplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const T5 & masker,
const T6 & projector,
Args && ... args)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, std::forward<Args>(args)...),
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
masker_(masker),
residualPolicy_(::pressio::ops::extent(romStateIn,0), projector, masker_, T::fomCRef(), T::fomStatesMngr_),
jacobianPolicy_(::pressio::ops::extent(romStateIn,0), projector, masker_, T::fomCRef(), T::fomStatesMngr_, decoder)
#else
residualPolicy_(::pressio::ops::extent(romStateIn,0), projector, masker, T::fomCRef(), T::fomStatesMngr_),
jacobianPolicy_(::pressio::ops::extent(romStateIn,0), projector, masker, T::fomCRef(), T::fomStatesMngr_, decoder)
#endif
{}
};
// aliases to make things easier
template <class T, typename ...Args>
using DefaultImplicitPoliciesMixin = ImplicitPoliciesMixin<T, true, false, false, Args...>;
template <class T, typename ...Args>
using HypRedVeloImplicitPoliciesMixin = ImplicitPoliciesMixin<T, false, true, false, Args...>;
template <class T, typename ...Args>
using HypRedResidualImplicitPoliciesMixin = ImplicitPoliciesMixin<T, false, true, false, Args...>;
template <class T, typename ...Args>
using MaskedVeloImplicitPoliciesMixin = ImplicitPoliciesMixin<T, false, false, true, Args...>;
template <class T, typename ...Args>
using MaskedResidualImplicitPoliciesMixin = ImplicitPoliciesMixin<T, false, false, true, Args...>;
//---------------------------------------------------
// explicit policies
//---------------------------------------------------
template <class T, bool, bool, bool, typename ... Args>
struct ExplicitPoliciesMixin;
// default
template <class T, class rhs_pol_t>
struct ExplicitPoliciesMixin<T, true, false, false, rhs_pol_t> : T
{
rhs_pol_t rhsPolicy_;
ExplicitPoliciesMixin() = delete;
ExplicitPoliciesMixin(const ExplicitPoliciesMixin &) = default;
ExplicitPoliciesMixin & operator=(const ExplicitPoliciesMixin &) = delete;
ExplicitPoliciesMixin(ExplicitPoliciesMixin &&) = default;
ExplicitPoliciesMixin & operator=(ExplicitPoliciesMixin &&) = delete;
~ExplicitPoliciesMixin() = default;
template<
class T1, typename ...Args,
mpl::enable_if_t<::pressio::containers::details::traits<T1>::rank==1,int> = 0
>
ExplicitPoliciesMixin(const T1 & romStateIn, Args && ...args)
: T(romStateIn, std::forward<Args>(args)...),
rhsPolicy_(::pressio::ops::extent(romStateIn,0), T::projector_, T::fomCRef(), T::fomStatesMngr_)
{}
template<
class T1, typename ...Args,
mpl::enable_if_t<::pressio::containers::details::traits<T1>::rank==2,int> = 0
>
ExplicitPoliciesMixin(const T1 & romStateIn, Args && ...args)
: T(romStateIn, std::forward<Args>(args)...),
rhsPolicy_(::pressio::ops::extent(romStateIn,0), ::pressio::ops::extent(romStateIn,1),
T::projector_, T::fomCRef(), T::fomStatesMngr_)
{}
};
// hyper-reduced
template <class T, class rhs_pol_t>
struct ExplicitPoliciesMixin<T, false, true, false, rhs_pol_t> : T
{
rhs_pol_t rhsPolicy_;
ExplicitPoliciesMixin() = delete;
ExplicitPoliciesMixin(const ExplicitPoliciesMixin &) = default;
ExplicitPoliciesMixin & operator=(const ExplicitPoliciesMixin &) = delete;
ExplicitPoliciesMixin(ExplicitPoliciesMixin &&) = default;
ExplicitPoliciesMixin & operator=(ExplicitPoliciesMixin &&) = delete;
~ExplicitPoliciesMixin() = default;
template<
class T1, class T2, class T3, class T4, class T5, typename ...Args,
mpl::enable_if_t<::pressio::containers::details::traits<T1>::rank==1,int> = 0
>
ExplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const T5 & projector,
Args && ...args)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, std::forward<Args>(args)...),
rhsPolicy_(::pressio::ops::extent(romStateIn,0), projector, T::fomCRef(), T::fomStatesMngr_)
{}
template<
class T1, class T2, class T3, class T4, class T5, typename ...Args,
mpl::enable_if_t<::pressio::containers::details::traits<T1>::rank==2,int> = 0
>
ExplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const T5 & projector,
Args && ...args)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, std::forward<Args>(args)...),
rhsPolicy_(::pressio::ops::extent(romStateIn,0), ::pressio::ops::extent(romStateIn,1),
projector, T::fomCRef(), T::fomStatesMngr_)
{}
};
// masked
template <class T, class masker_t, class rhs_pol_t>
struct ExplicitPoliciesMixin<T, false, false, true, masker_t, rhs_pol_t> : T
{
/* here we need to consider also the case where the masker is a pybind11::object
that is passed in directly from python: in that scenario, masker_t is
a C++ wrapper class wrapping the actual pure python class,
so we need to create an object of this masker_t and pass that to the policies
because the policies do NOT accept pybind11::objects
*/
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
const masker_t masker_;
#endif
rhs_pol_t rhsPolicy_;
ExplicitPoliciesMixin() = delete;
ExplicitPoliciesMixin(const ExplicitPoliciesMixin &) = default;
ExplicitPoliciesMixin & operator=(const ExplicitPoliciesMixin &) = delete;
ExplicitPoliciesMixin(ExplicitPoliciesMixin &&) = default;
ExplicitPoliciesMixin & operator=(ExplicitPoliciesMixin &&) = delete;
~ExplicitPoliciesMixin() = default;
template<
class T1, class T2, class T3, class T4, class T5, class T6, class ...Args,
mpl::enable_if_t<::pressio::containers::details::traits<T1>::rank==1,int> = 0
>
ExplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const T5 & masker,
const T6 & projector,
Args && ...args)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, std::forward<Args>(args)...),
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
masker_(masker),
rhsPolicy_(::pressio::ops::extent(romStateIn,0), projector, masker_, T::fomCRef(), T::fomStatesMngr_)
#else
rhsPolicy_(::pressio::ops::extent(romStateIn,0), projector, masker, T::fomCRef(), T::fomStatesMngr_)
#endif
{
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
static_assert
(std::is_same<T5, pybind11::object>::value,
"Masked policies mixin: masker object must be a pybind11::object");
#endif
}
template<
class T1, class T2, class T3, class T4, class T5, class T6, class ...Args,
mpl::enable_if_t<::pressio::containers::details::traits<T1>::rank==2,int> = 0
>
ExplicitPoliciesMixin(const T1 & romStateIn,
const T2 & fomObj,
const T3 & decoder,
const T4 & fomNominalStateNative,
const T5 & masker,
const T6 & projector,
Args && ...args)
: T(fomObj, decoder, romStateIn, fomNominalStateNative, std::forward<Args>(args)...),
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
masker_(masker),
rhsPolicy_(::pressio::ops::extent(romStateIn,0), ::pressio::ops::extent(romStateIn,1),
projector, masker_, T::fomCRef(), T::fomStatesMngr_)
#else
rhsPolicy_(::pressio::ops::extent(romStateIn,0), ::pressio::ops::extent(romStateIn,1),
projector, masker, T::fomCRef(), T::fomStatesMngr_)
#endif
{
#ifdef PRESSIO_ENABLE_TPL_PYBIND11
static_assert
(std::is_same<T5, pybind11::object>::value,
"Masked policies mixin: masker object must be a pybind11::object");
#endif
}
};
// aliases to make things easier
template <class T, typename ...Args>
using DefaultExplicitPoliciesMixin = ExplicitPoliciesMixin<T, true, false, false, Args...>;
template <class T, typename ...Args>
using HypRedVeloExplicitPoliciesMixin = ExplicitPoliciesMixin<T, false, true, false, Args...>;
template <class T, typename ...Args>
using MaskedVeloExplicitPoliciesMixin = ExplicitPoliciesMixin<T, false, false, true, Args...>;
}}}}
#endif // ROM_GALERKIN_IMPL_ROM_PROBLEM_MEMBERS_HPP_
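The layered mixin pattern above (each mixin initializes its member from pieces the base layer has already set up) can be imitated in Python with cooperative inheritance; this is a hypothetical sketch of the structure, not the pressio API:

```python
class Problem:
    """Base layer holding shared state (stand-in for the T base class)."""
    def __init__(self, rom_state, **kw):
        self.rom_state = rom_state

class ProjectorMixin(Problem):
    """Attach a projector on top of the base, like ProjectorMixin<T, ...>."""
    def __init__(self, rom_state, projector, **kw):
        super().__init__(rom_state, **kw)
        self.projector = projector

class ExplicitPoliciesMixin(ProjectorMixin):
    """Build the RHS policy from members the lower layers initialized,
    mirroring how rhsPolicy_ is constructed from T::projector_."""
    def __init__(self, rom_state, projector, **kw):
        super().__init__(rom_state, projector, **kw)
        self.rhs_policy = (len(rom_state), self.projector)

p = ExplicitPoliciesMixin([0.0, 0.0, 0.0], projector="P")
print(p.rhs_policy)  # -> (3, 'P')
```

In the C++ version the same ordering guarantee comes from base-class subobjects being constructed before the derived mixin's members.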
|
Apple's "Think different" logo
"Think different." was an advertising slogan for Apple, Inc. (then Apple Computer, Inc.) in 1997 created by the Los Angeles office of advertising agency TBWA Chiat/Day.[1] The slogan has been widely taken as a response to IBM's slogan "Think".[2][3][4] It was used in a television commercial, several print advertisements, and a number of TV promos for Apple products. Apple's use of the slogan was discontinued in 2002.
Television commercials
Significantly shortened versions of the text were used in two television commercials, known as "Crazy Ones", directed by Chiat\Day's Jennifer Golub, who also shared the art director credit with Jessica Schulman Edelstein and Yvonne Smith. According to Jobs's biography, two versions were created before it first aired: one with a voiceover by Richard Dreyfuss and one featuring a voiceover by Steve Jobs.[5] On the morning of the first air date, Jobs decided to go with the Dreyfuss version, stating that it was about Apple, not about himself. It was edited by Dan Bootzin, Chiat\Day's in-house editor, at Venice Beach Editorial,[6] and post-produced by Hunter Conner.
The slogan "Think Different" was created by Craig Tanimoto, Art Director at Chiat\Day, who also contributed conceptual design work resulting in the use of iconic portraiture for the campaign. Tanimoto is also credited with opting for "Think Different" rather than the grammatically correct "Think Differently," which was considered but rejected by Lee Clow. The full text of the various versions of this commercial were written by creative director Rob Siltanen and copywriter Ken Segall, along with input from many on the team at the agency and at Apple. The commercial's music was composed by Chip Jenkins for Elias Arts.[7]
The one-minute commercial featured black-and-white footage of 17 iconic 20th century personalities. In order of appearance they were: Albert Einstein, Bob Dylan, Martin Luther King, Jr., Richard Branson, John Lennon (with Yoko Ono), Buckminster Fuller, Thomas Edison, Muhammad Ali, Ted Turner, Maria Callas, Mahatma Gandhi, Amelia Earhart, Alfred Hitchcock, Martha Graham, Jim Henson (with Kermit the Frog), Frank Lloyd Wright and Pablo Picasso. The commercial ends with an image of a young girl opening her closed eyes, as if making a wish. The final clip is taken from the All Around The World version of the "Sweet Lullaby" music video, directed by Tarsem Singh; the young girl is Shaan Sahota, Singh's niece.[8]
The thirty-second commercial was a shorter version of the previous one, using 11 of the 17 personalities, but closed with Jerry Seinfeld, instead of the young girl. In order of appearance: Albert Einstein, Bob Dylan, Martin Luther King, Jr., John Lennon, Martha Graham, Muhammad Ali, Alfred Hitchcock, Mahatma Gandhi, Jim Henson, Maria Callas, Pablo Picasso, followed by Jerry Seinfeld. This version aired only once, during the series finale of Seinfeld.
Another early example of the "Think Different" ads aired on February 4, 1998, months before the colors were taken out of the logo: a commercial showed a snail slowly carrying an Intel Pentium II chip on its back, while Apple claimed the Power Macintosh G3 was twice as fast as Intel's Pentium II processor.[9]
Concept, philosophy, background
Apple's famous 1984 commercial was created by advertising agency Chiat/Day. In 1986, CEO John Sculley replaced Chiat/Day with the agency BBDO.[10] Under CEO Gil Amelio, BBDO pitched a new brand campaign with the slogan "We're back" to an internal marketing meeting at the then-struggling Apple. Reportedly everyone in the meeting expressed approval, with the exception of the recently returned Jobs, who said "the slogan was stupid because Apple wasn't back."[11]
Jobs then invited three advertising agencies to present new ideas that reflected the philosophy he thought had to be reinforced within the company he co-founded. Chiat/Day was one of them. While Jobs thought the creative concept "brilliant" he originally hated the words of the television commercial, until changing his mind. According to TBWA/Chiat/Day's creative director of the time Rob Siltanen: "Steve was highly involved with the advertising and every facet of Apple’s business. But he was far from the mastermind behind the renowned launch spot...While Steve Jobs didn’t create the advertising concepts, he does deserve an incredible amount of credit. He was fully responsible for ultimately pulling the trigger on the right ad campaign from the right agency, and he used his significant influence to secure talent and rally people like no one I’ve ever seen before. Without Steve Jobs there’s not a shot in hell that a campaign as monstrously big as this one would get even close to flying off the ground...it got an audience that once thought of Apple as semi-cool, but semi-stupid to suddenly think about the brand in a whole new way."[7]
Jobs said the following in an interview for PBS' 'One Last Thing' documentary:[12]
When you grow up you tend to get told the world is the way it is and your job is just to live your life inside the world. Try not to bash into the walls too much. Try to have a nice family life, have fun, save a little money. That’s a very limited life. Life can be much broader once you discover one simple fact, and that is - everything around you that you call life, was made up by people that were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use. The minute that you understand that you can poke life and actually something will, you know if you push in, something will pop out the other side, that you can change it, you can mold it. That’s maybe the most important thing. It’s to shake off this erroneous notion that life is there and you’re just gonna live in it, versus embrace it, change it, improve it, make your mark upon it. I think that’s very important and however you learn that, once you learn it, you’ll want to change life and make it better, cause it’s kind of messed up, in a lot of ways. Once you learn that, you’ll never be the same again.
The original long version appeared on posters made by Apple. The text was written by Rob Siltanen with participation of Lee Clow and others on his creative team.[13]
Print advertisements
Print advertisements from the campaign were published in many mainstream magazines such as Newsweek and Time. Their style was predominantly traditional, prominently featuring the company's computers or consumer electronics along with the slogan.
There was also another series of print ads which were more focused on brand image than specific products. Those featured a portrait of one historic figure, with a small Apple logo and the words "Think Different" in one corner, with no reference to the company's products. The familiar faces on display included Jimi Hendrix, Richard Clayderman, Miles Davis, Billy Graham, Bryan Adams, Cesar Chavez, John Lennon, Laurence Gartel, Mahatma Gandhi and others.
Promotional posters
Promotional posters from the campaign were produced in small numbers in 24 x 36 inch sizes. They featured the portrait of one historic figure, with a small Apple logo and the words "Think Different" in one corner. The posters were produced between 1997 and 1998.
There were at least 29 different Think Different posters created. The sets were as follows:
Set 1
Set 2
Set 3
Set 4
Set 5 (The Directors set, never officially released)
In addition, around the year 2000, Apple produced a set of ten 11x17 posters often referred to as "The Educators Set", which was distributed through its education channels. Apple sent out boxes (the cover of which reproduced the original "Crazy Ones" Think Different poster) that each contained three plastic-sealed packs of 10 miniature Think Different posters.
Educator Set
During a special event held on October 14, 1998 at the Flint Center in Cupertino, California, a limited edition 11" x 14" softbound book was given to employees and affiliates of Apple Computer, Inc. to commemorate the first year of the ad campaign. The 50-page book contained a foreword by Steve Jobs, the text of the original Think Different ad, and illustrations of many of the posters used in the campaign, along with narratives describing each person.
Reception and influence
Upon release, the "Think Different" campaign proved to be an enormous success for Apple and TBWA\Chiat\Day. Critically acclaimed, the spot garnered numerous awards and accolades, including the 1998 Emmy Award for Best Commercial and the 2000 Grand Effie Award for most effective campaign in America.
In retrospect, the new ad campaign marked the beginning of Apple's re-emergence as a marketing powerhouse. In the years leading up to the ad Apple had lost market share to the Wintel ecosystem which offered lower prices, more software choices, and higher-performance CPUs. Worse for Apple's reputation was the high-profile failure of the Apple Newton, a billion-dollar project that proved to be a technical and commercial dud. The success of the "Think Different" campaign, along with the return of Steve Jobs, bolstered the Apple brand and reestablished the "counter-culture" aura of its earlier days, setting the stage for the immensely successful iMac all-in-one personal computer and later the macOS (previously OS X) operating system.
Revivals
Product packaging
Since late 2009, the box packaging specification sheet for iMac computers has included the following footnote:
Macintosh Think different.
In previous Macintosh packaging, Apple's website URL was printed below the specifications list.
The apparent explanation for this inconspicuous usage is that Apple wished to maintain its trademark registrations on both terms – in most jurisdictions, a company must show continued use of a trademark on its products in order to maintain registration, but neither trademark is widely used in the company's current marketing. (With regard to "Macintosh", Apple's computers are now usually marketed as simply "Mac".) Indeed, this packaging was used as the required specimen of use when Apple filed to re-register "Think Different" as a U.S. trademark in 2009.[14]
macOS
Apple has continued to include portions of the "Crazy Ones" text as Easter eggs in a range of places in OS X. This includes the high-resolution icon for TextEdit introduced in Leopard, the "All My Files" Finder icon introduced in Lion, the high-resolution icon for Notes in Mountain Lion and Mavericks and on the new Color LCD Display preferences menu introduced for MacBook Pro with Retina Display.
Apple Color Emoji
Several emoji glyphs in Apple's Apple Color Emoji font contain portions of the text of "Crazy Ones", including 1F4CB 'Clipboard', 1F4C3 'Page with Curl', 1F4C4 'Page facing up' and 1F4D1 'Bookmark Tabs'.
On at least four separate occasions, the Apple homepage featured images of notable figures not originally part of the campaign alongside the "Think Different" slogan:
Similar portraits were also posted without the "Think different" text on at least seven additional occasions:
Other media
A portion of the text is recited in the trailer for Jobs, a biographical drama film of Steve Jobs' life.[19] Ashton Kutcher, as Jobs, is shown recording the audio for the trailer in the film's final scene.
The Richard Dreyfuss audio version is used in the introduction of the first episode of The Crazy Ones,[20] a podcast provided by Ricochet,[21] hosted by Owen Brennan and Patrick Jones.[22]
Parodies
The Simpsons episode "Mypods and Boomsticks" pokes fun at the slogan, writing it "Think Differently", which is grammatically correct.
For Steam's release on Mac OS X, Valve has released a Left 4 Dead–themed advertisement featuring Francis, whose in-game spoken lines involve him hating various things. The given slogan is "I hate different."[23][24] Subsequently, for Team Fortress 2's release on Mac, a trailer was released which concludes with "Think bullets".[25]
Aiura parodies this through the use of "Think Crabing" in its opening.[26]
In the musical Nerds, which depicts a fictionalized account of the lives of Steve Jobs and Bill Gates, there is a song titled "Think Different" in which Jobs hallucinates an anthropomorphized Oracle dancing with him and urging him to fight back against the Microsoft empire.[27]
In the animated show Gravity Falls in episode "A Tale of Two Stans", a poster with the words "Ponder alternatively" and a strawberry colored in a similar fashion as the old Apple logo shows in the background.[28]
/*
ID: fx.yoyo1
LANG: C++
TASK: milk2
*/
#include<iostream>
#include<cstdio>
using namespace std;
// Doubly linked list node holding one disjoint milking interval [l, r],
// kept sorted by l; p points to the previous node, q to the next.
struct list
{
long int l,r;
list *p,*q;
};
list *head=NULL;
// Insert interval *p into the sorted list, merging any intervals it
// overlaps or touches; *p is freed when absorbed into an existing node.
void create(list *p=NULL)
{
list *q=NULL;
if(head==NULL)
{
head=p;
p->p=NULL;
p->q=NULL;
}
else
{
q=head;
while(q!=NULL)
{
if(p->r<q->l)
{
if(q==head)
{
head=p;
p->p=NULL;
p->q=q;
q->p=p;
}
else
{
p->p=q->p;
p->q=q;
q->p->q=p;
q->p=p;
}
break;
}
else if(p->l<q->l&&p->r>=q->l&&p->r<=q->r)
{
q->l=p->l;
delete p;
break;
}
else if(p->l<q->l&&p->r>q->r)
{
list *pp=p;
q->l=p->l;
q->r=p->r;
p=q->q;
delete pp;
while(p!=NULL)
{
if(q->r<p->l)
{
p->p=q;
q->q=p;
break;
}
else if(q->r<=p->r)
{
q->r=p->r;
if(p->q!=NULL)
p->q->p=q;
q->q=p->q;
delete p;
break;
}
else
{
list *r=p;
p=p->q;
q->q=p;
delete r;
}
}
break;
}
else if(p->l>=q->l&&p->r<=q->r)
{
delete p;
break;
}
else if(p->l>=q->l&&p->l<=q->r&&p->r>q->r&&q->q==NULL)
{
q->r=p->r;
delete p;
break;
}
else if(p->l>=q->l&&p->l<=q->r&&p->r>q->r&&q->q->l>p->r)
{
q->r=p->r;
delete p;
break;
}
else if(p->l>=q->l&&p->l<=q->r&&p->r>q->r&&q->q->l<=p->r)
{
list *pp=p;
q->r=p->r;
p=q->q;
delete pp;
while(p!=NULL)
{
if(q->r<p->l)
{
p->p=q;
q->q=p;
break;
}
else if(q->r<=p->r)
{
q->r=p->r;
if(p->q!=NULL)
p->q->p=q;
q->q=p->q;
delete p;
break;
}
else
{
list *r=p;
p=p->q;
q->q=p;
delete r;
}
}
break;
}
else if(p->l>q->r&&q->q==NULL)
{
p->p=q;
p->q=q->q;
q->q=p;
break;
}
else
q=q->q;
}
}
return;
}
// Walk the merged list to find the longest continuous milking stretch
// and the longest idle gap between consecutive intervals.
void cal()
{
list *p=head;
FILE *out=fopen("milk2.out","w");
long int max1=0,max0=0;
while(p!=NULL)
{
if(p->r-p->l>max1)
max1=p->r-p->l;
if(p->q!=NULL&&p->q->l-p->r>max0)
max0=p->q->l-p->r;
p=p->q;
}
fprintf(out,"%ld %ld\n",max1,max0);
fclose(out);
return;
}
int main()
{
int n;
FILE *in;
in=fopen("milk2.in","r");
fscanf(in,"%d",&n);
for(int i=0;i<n;i++)
{
list *t=NULL;
t=new list;
fscanf(in,"%ld%ld",&t->l,&t->r);
create(t);
}
cal();
fclose(in);
return 0;
}
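The hand-rolled linked-list insertion above distinguishes many overlap cases. For comparison, the same milk2 answer (longest continuous milking time, longest idle gap) falls out of a short sort-and-merge sweep; this Python sketch is a hypothetical alternative, not part of the original submission:

```python
def milk2(intervals):
    """Return (longest busy stretch, longest gap) for a list of (l, r) intervals."""
    merged = []
    for l, r in sorted(intervals):
        if merged and l <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], r)
        else:
            merged.append([l, r])
    longest_busy = max(r - l for l, r in merged)
    longest_gap = max((merged[i + 1][0] - merged[i][1]
                       for i in range(len(merged) - 1)), default=0)
    return longest_busy, longest_gap
```

On the USACO sample input (300,1000), (700,1200), (1500,2100), the intervals merge to two runs and the answers are 900 and 300.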
def _calc_global_pcs(self, drop_pc1=False):
    """Compute the first two principal components across samples.

    Each row (feature) is z-scored, then an SVD gives per-sample
    component scores (vH) and per-feature loadings (U). If drop_pc1 is
    set, PC1 is removed by zeroing its singular value and reconstructing
    the matrix. Relies on numpy (np), pandas (pd), and frame_svd from
    the enclosing module.
    """
    df = self.df.xs('01', axis=1, level=1)  # select samples at level '01' of the column index
    norm = ((df.T - df.mean(1)) / df.std(1)).T  # z-score each feature
    U, S, vH = frame_svd(norm)
    self.global_vars['pc1'] = vH[0]
    self.global_vars['pc2'] = vH[1]
    self.global_loadings['pc1'] = U[0]
    self.global_loadings['pc2'] = U[1]
    if drop_pc1:
        S_n = S.copy()
        S_n[0] = 0  # zero the first singular value
        norm = U.dot(pd.DataFrame(np.diag(S_n)).dot(vH.T))
    return norm
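For intuition, the drop-PC1 reconstruction can be reproduced with plain NumPy; this standalone sketch assumes `frame_svd` above is essentially a DataFrame-labelled wrapper around an SVD like this:

```python
import numpy as np

rng = np.random.default_rng(0)
df = rng.normal(size=(5, 8))          # rows: features, columns: samples
norm = (df - df.mean(axis=1, keepdims=True)) / df.std(axis=1, keepdims=True)

U, S, Vh = np.linalg.svd(norm, full_matrices=False)
pc1, pc2 = Vh[0], Vh[1]               # per-sample component scores
load1, load2 = U[:, 0], U[:, 1]       # per-feature loadings

# Drop PC1 by zeroing its singular value and reconstructing.
S_n = S.copy()
S_n[0] = 0
residual = U @ np.diag(S_n) @ Vh
```

Adding the rank-1 PC1 term back to `residual` recovers the normalized matrix exactly.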
DENVER (AP) – A man imprisoned for 28 years after a woman said she dreamed that he raped her could be freed after a Denver judge overturned his conviction, saying he would likely be acquitted at a new trial because someone else confessed to the crime.
Clarence Moses-EL was convicted in 1988 and sentenced to 48 years in prison for raping and assaulting a woman when she returned home from a night of drinking. When police initially asked who attacked her, she named the man who later confessed.
More than a day after the assault, while in the hospital, the woman identified Moses-EL as her attacker, saying his face came to her in a dream.
Moses-EL has long claimed he was innocent. But his efforts to appeal his conviction were unsuccessful, in part because Denver police threw away DNA evidence from the attack. Police destroyed body swabs and the victim’s clothing despite a judge’s order to preserve it for testing that could have confirmed Moses-EL’s guilt or innocence.
The case inspired legislation requiring the preservation of DNA evidence in major felony cases for a defendant’s lifetime. Lawmakers also took the rare step of sponsoring a bill ordering a new trial for Moses-EL, but it was scrapped after then-Gov. Bill Ritter, a former prosecutor, threatened to veto it.
His break came in December 2013 when another man, L.C. Jackson, sent him a letter in prison saying he couldn’t believe Moses-EL was accused of raping the woman because he “had sex” with her at the same time that night.
“I really don’t know what to say to you, but let’s start by bringing what was done in the dark into the light,” Jackson wrote, according to court documents. “I have a lot on my heart.”
The letter led to a hearing in July, where Jackson testified that he became angry during sex with the woman and hit her in the face. The woman told police that she was lying down to sleep when a man put his hands around her neck and raped her.
Jackson has not been charged in that case. But DNA evidence led to his conviction in the 1992 rapes of a mother and daughter about a mile and a half from the first woman's home. Jackson could not immediately be reached Wednesday; records show he is in prison.
Jurors would likely clear Moses-EL if they heard Jackson’s sworn confession and saw evidence showing that Jackson’s blood type – rather than Moses-EL’s – was found in the victim, Denver District Judge Kandace Gerdes wrote in her Monday order granting a new trial.
“He’s elated,” attorney Gail Johnson said of Moses-EL, who was being held in the Bent County Correctional Facility in southeastern Colorado. “He’s very happy and feels like he’s finally getting justice in this case.”
A judge set Moses-EL’s bond at $50,000, and his attorneys expect he will soon be returned to Denver, where he will likely be released pending a new trial. Prosecutors declined to say whether they would file new charges against him.
Prosecutors were still reviewing the ruling and would meet with the victim, said Lynn Kimbrough, spokeswoman for the Denver district attorney’s office.
– By Sadie Gurman, AP Writer
(© Copyright 2015 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed.)
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include "nx.h"
#include "utils.h"
void nx_set_iv(nx_cbc_context* ctx,
u8 iv[16])
{
memcpy(ctx->iv, iv, 16);
}
void nx_init_cbc_encrypt(nx_cbc_context* ctx,
u8 key[16],
u8 iv[16])
{
mbedtls_aes_setkey_enc(&ctx->aes, key, 128);
nx_set_iv(ctx, iv);
}
void nx_init_cbc_decrypt(nx_cbc_context* ctx,
u8 key[16],
u8 iv[16])
{
mbedtls_aes_setkey_dec(&ctx->aes, key, 128);
nx_set_iv(ctx, iv);
}
void nx_encrypt_cbc(nx_cbc_context* ctx,
u8* input,
u8* output,
u32 size)
{
mbedtls_aes_crypt_cbc(&ctx->aes, MBEDTLS_AES_ENCRYPT, size, ctx->iv, input, output);
}
void nx_decrypt_cbc(nx_cbc_context* ctx,
u8* input,
u8* output,
u32 size)
{
mbedtls_aes_crypt_cbc(&ctx->aes, MBEDTLS_AES_DECRYPT, size, ctx->iv, input, output);
}
void nx_add_ctr(nx_ctr_context* ctx,
u32 block_num)
{
u32 ctr[4];
ctr[3] = getbe32(&ctx->ctr[0]);
ctr[2] = getbe32(&ctx->ctr[4]);
ctr[1] = getbe32(&ctx->ctr[8]);
ctr[0] = getbe32(&ctx->ctr[12]);
for (u32 i = 0; i < 4; i++) {
u64 total = ctr[i] + block_num;
// if there wasn't a wrap around, add the two together and exit
if (total <= 0xffffffff) {
ctr[i] += block_num;
break;
}
// add the difference
ctr[i] = (u32)(total - 0x100000000);
// carry to next word
block_num = (u32)(total >> 32);
}
putbe32(ctx->ctr + 0x00, ctr[3]);
putbe32(ctx->ctr + 0x04, ctr[2]);
putbe32(ctx->ctr + 0x08, ctr[1]);
putbe32(ctx->ctr + 0x0C, ctr[0]);
}
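The carry loop above is 128-bit big-endian addition split into four 32-bit words. Functionally it matches this Python sketch:

```python
def add_ctr(ctr: bytes, block_num: int) -> bytes:
    """Add block_num to a 16-byte big-endian AES-CTR counter (mod 2**128)."""
    value = (int.from_bytes(ctr, "big") + block_num) % (1 << 128)
    return value.to_bytes(16, "big")
```

Like the C version, a carry out of the most significant word is simply dropped, so the counter wraps modulo 2**128.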
void nx_set_ctr(nx_ctr_context* ctx,
u8 ctr[16])
{
memcpy(ctx->ctr, ctr, 16);
}
void nx_init_ctr(nx_ctr_context* ctx,
u8 key[16], u8 ctr[16])
{
mbedtls_aes_setkey_enc(&ctx->aes, key, 128);
if (ctr) nx_set_ctr(ctx, ctr);
}
void nx_crypt_ctr_block(nx_ctr_context* ctx,
u8 input[16],
u8 output[16])
{
int i;
u8 stream[16];
mbedtls_aes_crypt_ecb(&ctx->aes, MBEDTLS_AES_ENCRYPT, ctx->ctr, stream);
if (input)
{
for (i = 0; i<16; i++)
{
output[i] = stream[i] ^ input[i];
}
}
else
{
for (i = 0; i<16; i++)
output[i] = stream[i];
}
nx_add_ctr(ctx, 1);
}
void nx_crypt_ctr(nx_ctr_context* ctx,
u8* input,
u8* output,
u32 size)
{
u8 stream[16];
u32 i;
while (size >= 16)
{
nx_crypt_ctr_block(ctx, input, output);
if (input)
input += 16;
if (output)
output += 16;
size -= 16;
}
if (size)
{
memset(stream, 0, 16);
nx_crypt_ctr_block(ctx, stream, stream);
if (input)
{
for (i = 0; i<size; i++)
output[i] = input[i] ^ stream[i];
}
else
{
memcpy(output, stream, size);
}
}
}
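The flow above, full blocks first and a zero-padded keystream block for any tail, is the usual CTR streaming pattern. Here is a toy Python sketch of that pattern; SHA-256 stands in for the AES block encryption purely for illustration, so this is not interoperable with the real cipher:

```python
import hashlib

def toy_ctr_crypt(key: bytes, ctr: bytes, data: bytes) -> bytes:
    """XOR data with a per-block keystream derived from (key, counter)."""
    out = bytearray()
    counter = int.from_bytes(ctr, "big")
    for i in range(0, len(data), 16):
        # Keystream block; a real implementation uses AES-ECB here.
        stream = hashlib.sha256(key + counter.to_bytes(16, "big")).digest()[:16]
        out.extend(b ^ s for b, s in zip(data[i:i + 16], stream))
        counter = (counter + 1) % (1 << 128)
    return bytes(out)
```

Because CTR mode only XORs a keystream, encryption and decryption are the same operation: applying the function twice with the same key and counter round-trips the data.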
void nx_sha256(const u8* data,
u32 size,
u8 hash[0x20])
{
mbedtls_sha256(data, size, hash, 0);
}
int nx_sha256_verify(const u8* data,
u32 size,
const u8 checkhash[0x20])
{
u8 hash[0x20];
mbedtls_sha256(data, size, hash, 0);
if (memcmp(hash, checkhash, 0x20) == 0)
return Good;
else
return Fail;
}
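The one-shot hash-and-compare above maps directly onto Python's hashlib; a minimal equivalent:

```python
import hashlib

def sha256_verify(data: bytes, checkhash: bytes) -> bool:
    """Return True when SHA-256(data) matches the expected 32-byte digest."""
    return hashlib.sha256(data).digest() == checkhash
```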
void nx_sha256_init(nx_sha256_context* ctx)
{
mbedtls_sha256_starts(&ctx->sha, 0);
}
void nx_sha256_update(nx_sha256_context* ctx,
const u8* data,
u32 size)
{
mbedtls_sha256_update(&ctx->sha, data, size);
}
void nx_sha256_finish(nx_sha256_context* ctx,
u8 hash[0x20])
{
mbedtls_sha256_finish(&ctx->sha, hash);
}
void nx_rsa_init_key_pubmodulus(rsakey* key, u8* modulus, int size)
{
u8 exponent[3] = { 0x01, 0x00, 0x01 };
nx_rsa_init_key_pub(key, modulus, exponent, size);
}
void nx_rsa_init_key_pub(rsakey* key, u8* modulus, u8 exponent[3], int size)
{
key->keytype = RSAKEY_PUB;
key->keysize = size;
memcpy(key->n, modulus, sizeof(key->n) / size);
memcpy(key->e, exponent, sizeof(key->e));
}
int nx_rsa_init(nx_rsa_context* ctx, rsakey* key, int padding)
{
mbedtls_rsa_init(&ctx->rsa, padding, 0);
ctx->rsa.len = sizeof(key->n) / key->keysize;
if (key->keytype == RSAKEY_INVALID)
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.N, key->n, sizeof(key->n) / key->keysize))
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.E, key->e, sizeof(key->e)))
goto clean;
if (mbedtls_rsa_check_pubkey(&ctx->rsa))
goto clean;
if (key->keytype == RSAKEY_PRIV)
{
if (mbedtls_mpi_read_binary(&ctx->rsa.D, key->d, sizeof(key->d) / key->keysize))
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.P, key->p, sizeof(key->p) / key->keysize))
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.Q, key->q, sizeof(key->q) / key->keysize))
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.DP, key->dp, sizeof(key->dp) / key->keysize))
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.DQ, key->dq, sizeof(key->dq) / key->keysize))
goto clean;
if (mbedtls_mpi_read_binary(&ctx->rsa.QP, key->qp, sizeof(key->qp) / key->keysize))
goto clean;
if (mbedtls_rsa_check_privkey(&ctx->rsa))
goto clean;
}
return 1;
clean:
return 0;
}
void nx_rsa_free(nx_rsa_context* ctx)
{
mbedtls_rsa_free(&ctx->rsa);
}
int nx_rsa_public(const u8* signature, u8* output, rsakey* key, int padding)
{
nx_rsa_context ctx;
u32 result;
nx_rsa_init(&ctx, key, padding);
result = mbedtls_rsa_public(&ctx.rsa, signature, output);
nx_rsa_free(&ctx);
if (result == 0)
return 1;
else
return 0;
}
int nx_rsa_verify_hash(const u8* signature, const u8 hash[0x20], rsakey* key, int padding)
{
nx_rsa_context ctx;
u32 result;
// u8 output[0x200];
if (key->keytype == RSAKEY_INVALID)
return Fail;
nx_rsa_init(&ctx, key, padding);
// memset(output, 0, sizeof(output));
// result = nx_rsa_public(signature, output, key, padding);
// printf("Result = %d\n", result);
// memdump(stdout, "output: ", output, sizeof(output) / key->keysize);
mbedtls_entropy_context entropy;
mbedtls_entropy_init(&entropy);
result = mbedtls_rsa_pkcs1_verify(&ctx.rsa, mbedtls_entropy_func, &entropy, MBEDTLS_RSA_PUBLIC, MBEDTLS_MD_SHA256, 0x20, hash, (u8*)signature);
nx_rsa_free(&ctx);
if (result == 0)
return Good;
else
return Fail;
} |
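`mbedtls_rsa_public` above is the raw sig^e mod n operation; the PKCS#1 verify call then checks the padding and the embedded hash. Stripped of padding, the public operation is just modular exponentiation. A toy sketch with a deliberately tiny, insecure key chosen for illustration:

```python
def rsa_public(sig: int, e: int, n: int) -> int:
    """Textbook RSA public operation: sig**e mod n (no padding checks)."""
    return pow(sig, e, n)

# Toy key: n = 61 * 53, e = 17, d = 2753 (e*d = 1 mod phi(n)).
n, e, d = 3233, 17, 2753
message = 123
signature = pow(message, d, n)   # "sign" with the private exponent
```

A real verification additionally checks that the recovered value carries valid PKCS#1 v1.5 padding around the expected SHA-256 digest, which is what `mbedtls_rsa_pkcs1_verify` does.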
package examples
import (
"context"
"fmt"
"net/http"
"testing"
"github.com/infobip-community/infobip-api-go-sdk/v2/pkg/infobip"
"github.com/infobip-community/infobip-api-go-sdk/v2/pkg/infobip/models"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
//const (
// apiKey = "your-api-key"
// baseURL = "your-base-url"
// destNumber = "123456789012"
//)
func TestSendSMS(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
sms := models.SMSMsg{
Destinations: []models.SMSDestination{
{To: destNumber},
},
From: "Infobip Gopher",
Text: "Hello from Go SDK",
}
request := models.SendSMSRequest{
Messages: []models.SMSMsg{sms},
}
msgResp, respDetails, err := client.SMS.Send(context.Background(), request)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Messages[0].MessageID, "MessageID should not be empty")
assert.NotEqual(t, models.SendSMSResponse{}, msgResp)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestSendSMSBulk(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
sms := models.SMSMsg{
Destinations: []models.SMSDestination{
{To: destNumber},
},
From: "Infobip Gopher",
Text: "Hello from Go SDK",
SendAt: "2022-06-02T16:00:00.000+0000",
}
sms2 := models.SMSMsg{
Destinations: []models.SMSDestination{
{To: destNumber},
},
From: "Infobip Gopher",
Text: "Hello (2) from Go SDK",
}
request := models.SendSMSRequest{
BulkID: "f4b07b1a-a009-49d5-a94d-f8fd1bfdc985",
Messages: []models.SMSMsg{sms, sms2},
}
msgResp, respDetails, err := client.SMS.Send(context.Background(), request)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Messages[0].MessageID, "MessageID should not be empty")
assert.NotEqual(t, models.SendSMSResponse{}, msgResp)
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestSendBinarySMS(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
binSMS := models.BinarySMSMsg{
Destinations: []models.SMSDestination{
{To: destNumber},
},
From: "Infobip Gopher",
Binary: &models.SMSBinary{Hex: "0f c2 4a bf 34 13 ba"},
}
request := models.SendBinarySMSRequest{
Messages: []models.BinarySMSMsg{binSMS},
}
msgResp, respDetails, err := client.SMS.SendBinary(context.Background(), request)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Messages[0].MessageID, "MessageID should not be empty")
assert.NotEqual(t, models.SendSMSResponse{}, msgResp)
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestSendSMSOverQueryParameters(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
paramsSMS := models.SendSMSOverQueryParamsParams{
Username: "your-username",
Password: "<PASSWORD>",
From: "Infobip Gopher",
To: []string{destNumber},
Text: "Hello from Go SDK",
}
msgResp, respDetails, err := client.SMS.SendOverQueryParams(context.Background(), paramsSMS)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Messages[0].MessageID, "MessageID should not be empty")
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestPreviewSMS(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
previewReq := models.PreviewSMSRequest{
LanguageCode: "TR",
Text: "Mesaj gönderimi yapmadan önce önizleme seçeneğini kullanmanız doğru karar vermenize olur.",
}
msgResp, respDetails, err := client.SMS.Preview(context.Background(), previewReq)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Previews[0].TextPreview, "TextPreview should not be empty")
assert.NotEqual(t, models.PreviewSMSResponse{}, msgResp)
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestGetSMSDeliveryReports(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
queryParams := models.GetSMSDeliveryReportsParams{
Limit: 10,
}
msgResp, respDetails, err := client.SMS.GetDeliveryReports(context.Background(), queryParams)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Results[0].MessageID, "MessageID should not be empty")
assert.NotEqual(t, models.GetSMSDeliveryReportsResponse{}, msgResp)
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestGetSMSLogs(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
queryParams := models.GetSMSLogsParams{
Limit: 10,
}
msgResp, respDetails, err := client.SMS.GetLogs(context.Background(), queryParams)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.Results[0].MessageID, "MessageID should not be empty")
assert.NotEqual(t, models.GetSMSLogsResponse{}, msgResp)
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestGetScheduledSMS(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
queryParams := models.GetScheduledSMSParams{BulkID: "f4b07b1a-a009-49d5-a94d-f8fd1bfdc985"}
msgResp, respDetails, err := client.SMS.GetScheduledMessages(context.Background(), queryParams)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.BulkID, "BulkID should not be empty")
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestRescheduleSMS(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
params := models.RescheduleSMSParams{BulkID: "f4b07b1a-a009-49d5-a94d-f8fd1bfdc985"}
req := models.RescheduleSMSRequest{SendAt: "2022-06-01T16:00:00.000+0000"}
msgResp, respDetails, err := client.SMS.RescheduleMessages(context.Background(), req, params)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.BulkID, "BulkID should not be empty")
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestGetScheduledSMSStatus(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
params := models.GetScheduledSMSStatusParams{BulkID: "f4b07b1a-a009-49d5-a94d-f8fd1bfdc985"}
msgResp, respDetails, err := client.SMS.GetScheduledMessagesStatus(context.Background(), params)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.BulkID, "BulkID should not be empty")
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
func TestUpdateScheduledSMSStatus(t *testing.T) {
client, err := infobip.NewClient(baseURL, apiKey)
require.Nil(t, err)
params := models.UpdateScheduledSMSStatusParams{BulkID: "f4b07b1a-a009-49d5-a94d-f8fd1bfdc985"}
req := models.UpdateScheduledSMSStatusRequest{Status: "CANCELED"}
msgResp, respDetails, err := client.SMS.UpdateScheduledMessagesStatus(context.Background(), req, params)
fmt.Println(msgResp)
fmt.Println(respDetails)
require.Nil(t, err)
assert.NotNil(t, respDetails)
assert.NotEmptyf(t, msgResp.BulkID, "BulkID should not be empty")
assert.NotEqual(t, models.ResponseDetails{}, respDetails)
assert.Equal(t, http.StatusOK, respDetails.HTTPResponse.StatusCode)
}
|
#ifdef __cplusplus
extern "C" {
#endif
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <strings.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <sys/ioctl.h>
#include <arpa/inet.h>
#include <errno.h>
#include <assert.h>
#include <pthread.h>
#include <semaphore.h>
#include "ff_api.h"
int ff_init(int argc, char * const argv[])
{
printf("ff_init\n");
return 0;
}
void ff_run(loop_func_t loop, void *arg)
{
printf("ff_run\n");
for(;;)
{
loop(arg);
}
}
int ff_socket(int domain, int type, int protocol)
{
printf("ff_socket %d\n", domain);
return socket(domain,type,protocol);
}
int ff_setsockopt(int s, int level, int optname, const void *optval, socklen_t optlen)
{
printf("ff_setsockopt fd %d\n", s);
return setsockopt(s, level, optname, optval, optlen);
}
int ff_getsockopt(int s, int level, int optname, void *optval, socklen_t *optlen)
{
printf("ff_getsockopt fd %d\n", s);
return getsockopt(s, level, optname, optval, optlen);
}
int ff_listen(int s, int backlog)
{
printf("ff_listen fd %d\n", s);
return listen(s, backlog);
}
int ff_bind(int s, const struct linux_sockaddr *addr, socklen_t addrlen)
{
printf("ff_bind fd %d\n", s);
return bind(s, (const struct sockaddr *)addr, addrlen);
}
int ff_accept(int s, struct linux_sockaddr *addr, socklen_t *addrlen)
{
printf("ff_accept fd %d\n", s);
return accept(s, (struct sockaddr *)addr, addrlen);
}
int ff_connect(int s, const struct linux_sockaddr *name, socklen_t namelen)
{
printf("ff_connect fd %d\n", s);
return connect(s, (const struct sockaddr *)name, namelen);
}
int ff_close(int fd)
{
printf("ff_close fd %d\n", fd);
return close(fd);
}
int ff_getpeername(int s, struct linux_sockaddr *name, socklen_t *namelen)
{
printf("ff_getpeername fd %d\n", s);
return getpeername(s, (struct sockaddr *)name, namelen);
}
int ff_getsockname(int s, struct linux_sockaddr *name, socklen_t *namelen)
{
printf("ff_getsockname fd %d\n", s);
return getsockname(s, (struct sockaddr *)name, namelen);
}
ssize_t ff_read(int d, void *buf, size_t nbytes)
{
ssize_t cnt = read(d,buf,nbytes);
//printf("ff_read fd %d, size %lu\n", d, cnt);
return cnt;
}
ssize_t ff_readv(int fd, const struct iovec *iov, int iovcnt)
{
//printf("ff_readv fd %d\n", fd);
return readv(fd,iov,iovcnt);
}
ssize_t ff_write(int fd, const void *buf, size_t nbytes)
{
ssize_t cnt = write(fd,buf,nbytes);
//printf("ff_write fd %d, size %lu\n", fd, cnt);
return cnt;
}
ssize_t ff_writev(int fd, const struct iovec *iov, int iovcnt)
{
//printf("ff_writev fd %d\n", fd);
return writev(fd,iov,iovcnt);
}
#ifdef __cplusplus
}
#endif
|
use std::fmt;
/// CmdArg contains information about an argument on the directive line. An
/// argument is specified in one of the following forms:
///
/// - key (no value)
/// - key= (empty value)
/// - key=() (empty value)
/// - key=a (single value)
/// - key=a,b,c (single value)
/// - key=(a,b,c) (multiple value)
#[derive(Clone)]
pub struct CmdArg {
/// Key of CmdArg
pub key: String,
/// Values of CmdArg
pub vals: Vec<String>,
}
impl fmt::Display for CmdArg {
#[inline]
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.vals.len() {
0 => write!(f, "{}", self.key),
1 => write!(f, "{}={}", self.key, self.vals[0]),
_ => write!(f, "{}=({})", self.key, self.vals.join(",")),
}
}
}
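The three branches of the `Display` impl mirror the argument grammar from the doc comment above: a bare key, a single value, and a parenthesized comma-joined list. The same rule in a quick Python sketch (a hypothetical helper, for illustration only):

```python
def format_cmd_arg(key: str, vals: list) -> str:
    """Render a CmdArg the way the Display impl above does."""
    if not vals:
        return key
    if len(vals) == 1:
        return f"{key}={vals[0]}"
    return f"{key}=({','.join(vals)})"
```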
impl fmt::Debug for CmdArg {
#[inline]
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
fmt::Display::fmt(self, f)
}
}
/// TestData contains information about datadriven testcase that was parsed from the test file.
/// Data format is text file(txt).
#[derive(Clone, Default)]
pub struct TestData {
/// Pos is a file:line prefix for the input test file, suitable for
/// inclusion in logs and error messages.
pub pos: String,
/// Cmd is the first string on the directive line (up to the first whitespace).
pub cmd: String,
/// CmdArgs contains the k/v arguments to the command.
pub cmd_args: Vec<CmdArg>,
/// Input is the text between the first directive line and the ---- separator.
pub input: String,
/// Expected is the value below the ---- separator. In most cases,
/// tests need not check this, and instead return their own actual
/// output.
/// This field is provided so that a test can perform an early return
/// with "return d.expected" to signal that nothing has changed.
pub expected: String,
}
impl TestData {
/// Return `true` if the `cmd_args` contains a value for the specified key.
pub fn contains_key(&self, k: &str) -> bool {
self.cmd_args.iter().any(|cmd_arg| cmd_arg.key == k)
}
}
#[cfg(test)]
mod tests {
use crate::{CmdArg, TestData};
#[test]
fn test_contains_key() {
let cmd_arg = CmdArg {
key: "key".to_string(),
vals: vec!["123".to_string(), "92".to_string(), "92".to_string()],
};
let mut d = TestData::default();
d.cmd_args.push(cmd_arg);
let cmd_arg = CmdArg {
key: "key2".to_string(),
vals: vec!["some string".to_string()],
};
d.cmd_args.push(cmd_arg);
assert_eq!(d.contains_key("key2"), true);
assert_eq!(d.contains_key("key1"), false);
}
}
|
/* These default values correspond to expected usage for the chip. */
SIM_DESC
sim_open (SIM_OPEN_KIND kind,
host_callback *cb,
struct bfd *abfd,
char **argv)
{
int i;
SIM_DESC sd = sim_state_alloc (kind, cb);
mn10300_callback = cb;
SIM_ASSERT (STATE_MAGIC (sd) == SIM_MAGIC_NUMBER);
if (sim_cpu_alloc_all (sd, 1, 0) != SIM_RC_OK)
return 0;
simulator = sd;
STATE_WATCHPOINTS (sd)->pc = &(PC);
STATE_WATCHPOINTS (sd)->sizeof_pc = sizeof (PC);
STATE_WATCHPOINTS (sd)->interrupt_handler = NULL;
STATE_WATCHPOINTS (sd)->interrupt_names = NULL;
if (sim_pre_argv_init (sd, argv[0]) != SIM_RC_OK)
return 0;
sim_add_option_table (sd, NULL, mn10300_options);
sim_do_command (sd, "memory region 0,0x100000");
sim_do_command (sd, "memory region 0x40000000,0x200000");
if (sim_parse_args (sd, argv) != SIM_RC_OK)
{
sim_module_uninstall (sd);
return 0;
}
if ( NULL != board
&& (strcmp(board, BOARD_AM32) == 0 ) )
{
STATE_ENVIRONMENT (sd) = OPERATING_ENVIRONMENT;
sim_do_command (sd, "memory region 0x44000000,0x40000");
sim_do_command (sd, "memory region 0x48000000,0x400000");
sim_hw_parse (sd, "/mn103int@0x34000100/reg 0x34000100 0x7C 0x34000200 0x8 0x34000280 0x8");
sim_hw_parse (sd, "/glue@0x30000000/reg 0x30000000 12");
sim_hw_parse (sd, "/glue@0x30000000 > int0 nmirq /mn103int");
sim_hw_parse (sd, "/glue@0x30000000 > int1 watchdog /mn103int");
sim_hw_parse (sd, "/glue@0x30000000 > int2 syserr /mn103int");
sim_hw_parse (sd, "/glue@0x30002000/reg 0x30002000 4");
sim_hw_parse (sd, "/glue@0x30002000 > int ack /mn103int");
sim_hw_parse (sd, "/glue@0x30004000/reg 0x30004000 8");
sim_hw_parse (sd, "/mn103int > nmi int0 /glue@0x30004000");
sim_hw_parse (sd, "/mn103int > level int1 /glue@0x30004000");
sim_hw_parse (sd, "/glue@0x30006000/reg 0x30006000 32");
sim_hw_parse (sd, "/glue@0x30006000 > int0 irq-0 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int1 irq-1 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int2 irq-2 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int3 irq-3 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int4 irq-4 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int5 irq-5 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int6 irq-6 /mn103int");
sim_hw_parse (sd, "/glue@0x30006000 > int7 irq-7 /mn103int");
sim_hw_parse (sd, "/mn103cpu@0x20000000");
sim_hw_parse (sd, "/mn103cpu@0x20000000/reg 0x20000000 0x42");
sim_hw_parse (sd, "/glue@0x20002000");
sim_hw_parse (sd, "/glue@0x20002000/reg 0x20002000 4");
sim_hw_parse (sd, "/mn103cpu > ack int0 /glue@0x20002000");
sim_hw_parse (sd, "/glue@0x20004000");
sim_hw_parse (sd, "/glue@0x20004000/reg 0x20004000 12");
sim_hw_parse (sd, "/glue@0x20004000 > int0 reset /mn103cpu");
sim_hw_parse (sd, "/glue@0x20004000 > int1 nmi /mn103cpu");
sim_hw_parse (sd, "/glue@0x20004000 > int2 level /mn103cpu");
sim_hw_parse (sd, "/mn103cpu > ack ack /mn103int");
sim_hw_parse (sd, "/mn103int > level level /mn103cpu");
sim_hw_parse (sd, "/mn103int > nmi nmi /mn103cpu");
sim_hw_parse (sd, "/pal@0x31000000");
sim_hw_parse (sd, "/pal@0x31000000/reg 0x31000000 64");
sim_hw_parse (sd, "/pal@0x31000000/poll? true");
sim_hw_parse (sd, "/glue@0x31002000");
sim_hw_parse (sd, "/glue@0x31002000/reg 0x31002000 16");
sim_hw_parse (sd, "/pal@0x31000000 > countdown int0 /glue@0x31002000");
sim_hw_parse (sd, "/pal@0x31000000 > timer int1 /glue@0x31002000");
sim_hw_parse (sd, "/pal@0x31000000 > int int2 /glue@0x31002000");
sim_hw_parse (sd, "/glue@0x31002000 > int0 int3 /glue@0x31002000");
sim_hw_parse (sd, "/glue@0x31002000 > int1 int3 /glue@0x31002000");
sim_hw_parse (sd, "/glue@0x31002000 > int2 int3 /glue@0x31002000");
sim_hw_parse (sd, "/pal@0x31000000 > countdown irq-0 /mn103int");
sim_hw_parse (sd, "/pal@0x31000000 > timer irq-1 /mn103int");
sim_hw_parse (sd, "/pal@0x31000000 > int irq-2 /mn103int");
sim_hw_parse (sd, "/mn103tim@0x34001000/reg 0x34001000 36 0x34001080 100 0x34004000 16");
sim_hw_parse (sd, "/mn103tim > timer-0-underflow timer-0-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-1-underflow timer-1-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-2-underflow timer-2-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-3-underflow timer-3-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-4-underflow timer-4-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-5-underflow timer-5-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-6-underflow timer-6-underflow /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-6-compare-a timer-6-compare-a /mn103int");
sim_hw_parse (sd, "/mn103tim > timer-6-compare-b timer-6-compare-b /mn103int");
sim_hw_parse (sd, "/mn103ser@0x34000800/reg 0x34000800 48");
sim_hw_parse (sd, "/mn103ser@0x34000800/poll? true");
sim_hw_parse (sd, "/mn103ser > serial-0-receive serial-0-receive /mn103int");
sim_hw_parse (sd, "/mn103ser > serial-0-transmit serial-0-transmit /mn103int");
sim_hw_parse (sd, "/mn103ser > serial-1-receive serial-1-receive /mn103int");
sim_hw_parse (sd, "/mn103ser > serial-1-transmit serial-1-transmit /mn103int");
sim_hw_parse (sd, "/mn103ser > serial-2-receive serial-2-receive /mn103int");
sim_hw_parse (sd, "/mn103ser > serial-2-transmit serial-2-transmit /mn103int");
sim_hw_parse (sd, "/mn103iop@0x36008000/reg 0x36008000 8 0x36008020 8 0x36008040 0xc 0x36008060 8 0x36008080 8");
sim_do_command (sd, "memory region 0x32000020,0x30");
sim_do_command (sd, "memory region 0x20000070,0x4");
sim_do_command (sd, "memory region 0x28400000,0x800");
sim_do_command (sd, "memory region 0x28401000,0x800");
sim_do_command (sd, "memory region 0x32000100,0xF");
sim_do_command (sd, "memory region 0x32000200,0xF");
sim_do_command (sd, "memory region 0x32000400,0xF");
sim_do_command (sd, "memory region 0x32000800,0xF");
}
else
{
if (board != NULL)
{
sim_io_eprintf (sd, "Error: Board `%s' unknown.\n", board);
return 0;
}
}
if (sim_analyze_program (sd,
(STATE_PROG_ARGV (sd) != NULL
? *STATE_PROG_ARGV (sd)
: NULL),
abfd) != SIM_RC_OK)
{
sim_module_uninstall (sd);
return 0;
}
if (sim_config (sd) != SIM_RC_OK)
{
sim_module_uninstall (sd);
return 0;
}
if (sim_post_argv_init (sd) != SIM_RC_OK)
{
sim_module_uninstall (sd);
return 0;
}
for (i = 0; i < MAX_NR_PROCESSORS; ++i)
{
SIM_CPU *cpu = STATE_CPU (sd, i);
CPU_PC_FETCH (cpu) = mn10300_pc_get;
CPU_PC_STORE (cpu) = mn10300_pc_set;
}
return sd;
}
def nextPower(self, n, p):
    m = 1
    while m <= n:
        m *= p
    return m
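A standalone sketch of what the method above computes (hypothetical name; `self` is dropped since the class isn't shown): the smallest power of `p` strictly greater than `n`, assuming `p > 1`:

```python
def next_power(n, p):
    # Multiply by p until we pass n; for p > 1 this always terminates.
    m = 1
    while m <= n:
        m *= p
    return m

print(next_power(10, 2))  # 16: powers of 2 are 1, 2, 4, 8, 16
```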
// Copyright 2019 The Cockroach Authors.
//
// Use of this software is governed by the Business Source License
// included in the file licenses/BSL.txt.
//
// As of the Change Date specified in that file, in accordance with
// the Business Source License, use of this software will be governed
// by the Apache License, Version 2.0, included in the file
// licenses/APL.txt.
import { Divider, Tooltip } from "antd";
import classNames from "classnames";
import _ from "lodash";
import { NanoToMilli } from "oss/src/util/convert";
import { FixLong } from "oss/src/util/fixLong";
import { Chip } from "oss/src/views/app/components/chip";
import Empty from "oss/src/views/app/components/empty";
import React from "react";
import { Link } from "react-router-dom";
import { getValueFromString, Identity } from "..";
import "./latency.styl";
interface StdDev {
stddev: number;
stddevMinus2: number;
stddevMinus1: number;
stddevPlus1: number;
stddevPlus2: number;
}
interface ILegendProps {
displayIdentities: Identity[];
staleIDs: Set<number>;
multipleHeader: boolean;
collapsed?: boolean;
nodesSummary: any;
std?: StdDev;
node_id?: string;
}
interface DetailedRow {
latency: number;
identityB: Identity;
}
type DetailedIdentity = Identity & {
row: DetailedRow[];
title?: string;
};
// createHeaderCell creates and decorates a header cell.
function createHeaderCell(staleIDs: Set<number>, id: DetailedIdentity, key: string, isMultiple?: boolean, collapsed?: boolean) {
const node = `n${id.nodeID.toString()}`;
const className = classNames(
"latency-table__cell",
"latency-table__cell--header",
{ "latency-table__cell--header-warning": staleIDs.has(id.nodeID) },
{ "latency-table__cell--start": isMultiple },
);
return (
<td key={key} className={className}>
<Link to={`/node/${id.nodeID}`}>{collapsed ? "" : node}</Link>
</td>
);
}
const generateCollapsedData = (data: [DetailedIdentity[]], rowLength: number) => {
const collapsedData: Array<DetailedIdentity[]> = [];
const rows: number[] = [];
for (let x = 0; x < rowLength; x++) {
data.forEach((dataItems: DetailedIdentity[], dataIndex: number) => {
if (!collapsedData[dataIndex]) {
collapsedData[dataIndex] = [{...dataItems[0], row: []}];
rows.push(data[dataIndex].length);
}
const maxValues: DetailedRow[] = [];
dataItems.forEach(item => maxValues.push(item.row[x]));
collapsedData[dataIndex][0].row.push(maxValues.reduce((prev, current) => (prev.latency === 0 || prev.latency > current.latency) ? prev : current));
});
}
return collapsedData.map(array => array.map(itemValue => {
let rowsCount = 0;
const newRow: DetailedRow[] = [];
rows.forEach(row => {
const rowEach: DetailedRow[] = [];
for (let x = 0; x < row; x++) {
rowEach.push(itemValue.row[rowsCount + x]);
}
rowsCount = rowsCount + row;
newRow.push(rowEach.reduce((prev, current) => (prev.latency === 0 || prev.latency > current.latency) ? prev : current));
});
return {...itemValue, row: newRow };
}));
};
const renderMultipleHeaders = (displayIdentities: Identity[], collapsed: boolean, nodesSummary: any, staleIDs: Set<number>, nodeId: string, multipleHeader: boolean) => {
const data: any = [];
let rowLength = 0;
const filteredData = displayIdentities.map(identityA => {
const row: any[] = [];
displayIdentities.forEach(identityB => {
const a = nodesSummary.nodeStatusByID[identityA.nodeID].activity;
      if (identityA.nodeID === identityB.nodeID) {
        row.push({ latency: 0, identityB });
      } else if (staleIDs.has(identityA.nodeID) || staleIDs.has(identityB.nodeID) || _.isNil(a) || _.isNil(a[identityB.nodeID])) {
        row.push({ latency: -2, identityB });
      } else {
        // Only dereference the activity map once the nil checks have passed.
        const nano = FixLong(a[identityB.nodeID].latency);
        if (nano.eq(0)) {
          row.push({ latency: -1, identityB });
        } else {
          row.push({ latency: NanoToMilli(nano.toNumber()), identityB });
        }
      }
});
rowLength = row.length;
return {row, ...identityA};
});
filteredData.forEach((value) => {
const newValue = {
...value,
title: multipleHeader ? getValueFromString(nodeId, value.locality) : value.locality,
};
if (data.length === 0) {
data[0] = new Array(newValue);
} else {
if (data[data.length - 1][0].title === newValue.title) {
data[data.length - 1].push(newValue);
data[data.length - 1][0].rowCount = data[data.length - 1].length;
} else {
data[data.length] = new Array(newValue);
}
}
});
if (collapsed) {
return generateCollapsedData(data, rowLength);
}
return data;
};
const getVerticalLines = (data: [DetailedIdentity[]], index: number) => {
// tslint:disable-next-line: no-shadowed-variable
const values: any = [];
let currentNumber = 0;
data.forEach(array => {
currentNumber = currentNumber + array.length;
values.push(currentNumber);
});
return values.includes(index);
};
const getLatencyCell = ({ latency, identityB, identityA }: { latency: number; identityB: Identity; identityA: DetailedIdentity }, verticalLine: boolean, isMultiple?: boolean, std?: StdDev, collapsed?: boolean) => {
const generateClassName = (names: string[]) => classNames(
{ "latency-table__cell--end": isMultiple },
{ "latency-table__cell--start": verticalLine },
...names,
);
if (latency === 0) {
return (
<td
className={generateClassName(["latency-table__cell", "latency-table__cell--self"])}
/>
);
}
if (latency === -2) {
return (
<td
className={generateClassName(["latency-table__cell", "latency-table__cell--no-connection"])}
>
<Tooltip placement="bottom" title="Disconnected">
<Chip
title="--"
type="yellow"
/>
</Tooltip>
</td>
);
}
if (latency === -1) {
return (
<td
className={generateClassName(["latency-table__cell", "latency-table__cell--stddev-even"])}
>
<Chip
title="loading..."
type="yellow"
/>
</td>
);
}
const className = classNames({
"latency-table__cell": true,
"latency-table__cell--end": isMultiple,
"latency-table__cell--start": verticalLine,
"latency-table__cell--stddev-minus-2":
std.stddev > 0 && latency < std.stddevMinus2,
"latency-table__cell--stddev-minus-1":
std.stddev > 0 && latency < std.stddevMinus1 && latency >= std.stddevMinus2,
"latency-table__cell--stddev-even":
std.stddev > 0 && latency >= std.stddevMinus1 && latency <= std.stddevPlus1,
"latency-table__cell--stddev-plus-1":
std.stddev > 0 && latency > std.stddevPlus1 && latency <= std.stddevPlus2,
"latency-table__cell--stddev-plus-2":
std.stddev > 0 && latency > std.stddevPlus2,
});
const type: any = classNames({
"green":
std.stddev > 0 && latency < std.stddevMinus2,
"lightgreen":
std.stddev > 0 && latency < std.stddevMinus1 && latency >= std.stddevMinus2,
"grey":
std.stddev > 0 && latency >= std.stddevMinus1 && latency <= std.stddevPlus1,
"lightblue":
std.stddev > 0 && latency > std.stddevPlus1 && latency <= std.stddevPlus2,
"blue":
std.stddev > 0 && latency > std.stddevPlus2,
});
return (
<td className={className}>
{collapsed ? (
<Chip
title={`${latency.toFixed(2)}ms`}
type={type}
/>
) : (
<Tooltip overlayClassName="Chip--tooltip" placement="bottom" title={(
<div>
<div className="Chip--tooltip__nodes">
<div className="Chip--tooltip__nodes--item">
<p className="Chip--tooltip__nodes--item-title">{`Node ${identityB.nodeID}`}</p>
<p className="Chip--tooltip__nodes--item-description">{identityB.locality}</p>
</div>
<Divider type="vertical" />
<div className="Chip--tooltip__nodes--item">
<p className="Chip--tooltip__nodes--item-title">{`Node ${identityA.nodeID}`}</p>
<p className="Chip--tooltip__nodes--item-description">{identityA.locality}</p>
</div>
</div>
<p className={`color--${type}`}>{`${latency.toFixed(2)}ms roundtrip`}</p>
</div>
)}>
<div>
<Chip
title={`${latency.toFixed(2)}ms`}
type={type}
/>
</div>
</Tooltip>
)}
</td>
);
};
// tslint:disable-next-line: variable-name
export const Latency: React.SFC <ILegendProps> = ({
displayIdentities,
staleIDs,
multipleHeader,
collapsed,
nodesSummary,
std,
node_id,
}) => {
const data = renderMultipleHeaders(displayIdentities, collapsed, nodesSummary, staleIDs, node_id, multipleHeader);
const className = classNames(
"latency-table",
{ "latency-table__multiple": multipleHeader },
{ "latency-table__empty": data.length === 0 },
);
// tslint:disable-next-line: no-bitwise
const width = data && (data.reduce((a: any, b: any) => (a.length || a) + b.length, 0) * 108) + 150;
if (data.length === 0) {
return <div className={className}><Empty /></div>;
}
return (
<table className={className} style={{ width }}>
<thead>
{multipleHeader && (
<tr>
<th style={{ width: 115 }} />
<th style={{ width: 45 }} />
{_.map(data, (value, index) => <th className="region-name" colSpan={data[index].length}>{value[0].title}</th>)}
</tr>
)}
{!collapsed && (
<tr className="latency-table__row">
{multipleHeader && <td />}
<td className="latency-table__cell latency-table__cell--spacer" />
{_.map(data, value => _.map(value, (identity, index: number) => createHeaderCell(staleIDs, identity, `0-${value.nodeID}`, index === 0, collapsed)))}
</tr>
)}
</thead>
<tbody>
{_.map(data, (value, index) => _.map(data[index], (identityA, indA: number) => {
return (
<tr key={index} className={`latency-table__row ${data[index].length === indA + 1 ? "latency-table__row--end" : ""}`} >
{multipleHeader && Number(indA) === 0 && (
<th rowSpan={collapsed ? 1 : data[index][0].rowCount}>{value[0].title}</th>
)}
{createHeaderCell(staleIDs, identityA, `${identityA.nodeID}-0`, false, collapsed)}
{_.map(identityA.row, ((identity: any, indexB: number) => getLatencyCell({ ...identity, identityA }, getVerticalLines(data, indexB), false, std, collapsed)))}
</tr>
);
}))}
</tbody>
</table>
);
};
-- CalcFunctions.hs
module CalcFunctions(adds, subs, muls, divs, pis, validateUser, loadHistory, addToHistory, history2String, splitLines, loadFile, getUserHistory) where
import Prelude
import System.IO.Unsafe
import Control.Exception
import Numeric (showIntAtBase)
import Data.Char
{- functions for the basic arithmetic operations, returning a string -}
adds a b = show (a+b)
subs a b = show (a-b)
muls a b = show (a*b)
divs a b = show (a `div` b)
{- Converts a history (list of lists of strings) to a string -}
history2String history = foldr (\x y->x++"\n"++y) "" $ map (\line->foldr (\x y->x++":"++y) "" line) history
{- returns the history of a given user -}
getUserHistory history user = filter (\(uname:_)->uname==user) history
{- adds the given info to the given user's history -}
addToHistory user info history | userExists = (user : info : userHistory):removeUserFromHistory
| otherwise = [user,info]:history
where
userExists = (length finduser) > 0
userHistory = tail $ head $ finduser
finduser = filter (\(uname:_)->uname==user) history
removeUserFromHistory = filter (\(uname:_)->uname/=user) history
lineToString :: [String] -> String
lineToString line = foldr (\s1 s2->s1++":"++s2) "" line
{- loads the history and returns it as a list -}
loadHistory = map (\line->splitLines line ':') $ splitLines (loadFile "history.txt") '\n'
loadFile f = unsafePerformIO . readFile $ f
{- Validates the given user and password -}
validateUser user pass = 0 < (length $
filter (\[u,p]->user==u && pass==p) $
map (\line->splitLines line ' ') $
splitLines (loadFile "users.txt") '\n')
splitLines [] _ = []
splitLines line split = (takeWhile (\c -> c /= split) line) : continue (dropWhile (\c -> c/=split) line)
where
continue [] = []
continue x = splitLines (tail x) split
-- Pi --
{- pow a b computes a^b -}
pow :: Integer -> Integer -> Integer
pow base 0 = 1
pow base 1 = base
pow base exp | even exp = pow2 * pow2
| otherwise = base * pow base (exp - 1)
where
pow2 = pow base (div exp 2)
{- powmod a b c computes (a^b)mod c -}
powmod base 0 _ = 1
powmod base 1 m = mod base m
powmod base exp m | even exp = mod (pow2 * pow2) m
| otherwise = mod (base * powmod base (exp - 1) m) m
where
pow2 = powmod base (div exp 2) m
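The `pow`/`powmod` pair above is square-and-multiply modular exponentiation. A rough Python transcription (checked against Python's built-in three-argument `pow`, which computes the same thing) might look like:

```python
def powmod(base, exp, m):
    # Mirrors the Haskell definition: halve even exponents, peel off odd ones.
    if exp == 0:
        return 1
    if exp == 1:
        return base % m
    if exp % 2 == 0:
        half = powmod(base, exp // 2, m)
        return (half * half) % m
    return (base * powmod(base, exp - 1, m)) % m

print(powmod(16, 1000, 61) == pow(16, 1000, 61))  # True
```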
{- compute a part of S in the formula for BBP algorithm -}
part1 j n = part1' j n n
part1' j n (-1) = 0
part1' j n k = decimalPart $ ((part1' j n (k-1))) + ((fromIntegral (powmod 16 (n-k) r)) / fromIntegral r)
where
r = 8*k+j
{- Returns the decimal part of a double -}
decimalPart :: Double -> Double
decimalPart x = x - fromIntegral (floor x)
{- compute the second part of S in the formula for BBP algorithm -}
part2 j n = part2' j n 1 0.0
part2' :: Integer -> Integer -> Integer -> Double -> Double
part2' j n k oldt | t == oldt = t
| otherwise = part2' j n (k+1) t
where
t = oldt + 1.0/((fromIntegral (pow 16 k))*(fromIntegral (8*k+j)))
{- used for BBP algorithm -}
s :: Integer -> Integer -> Double
s j n = (part1 j n) + (part2 j n)
{- The BBP algorithm. pis n returns the n-th hexadecimal digit of Pi. -}
pis n = head $ showIntAtBase 16 intToDigit (floor (x*fromInteger (pow 16 14))) ""
where
x = decimalPart (4 * (s 1 n2) - 2*(s 4 n2) - (s 5 n2) - (s 6 n2))
n2 = n - 1
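For comparison, here is a loose Python sketch of the same BBP digit extraction (the `1e-17` tail cutoff and the helper names are choices made here, not part of the original):

```python
def bbp_hex_digit(n):
    """n-th hexadecimal digit of pi (1-indexed), like `pis` above."""
    def s(j, d):
        # Finite part: modular exponentiation keeps only each term's fraction.
        total = 0.0
        for k in range(d + 1):
            r = 8 * k + j
            total = (total + pow(16, d - k, r) / r) % 1.0
        # Infinite tail: terms shrink by a factor of 16 each step.
        k = d + 1
        term = 16.0 ** (d - k) / (8 * k + j)
        while term > 1e-17:
            total += term
            k += 1
            term = 16.0 ** (d - k) / (8 * k + j)
        return total % 1.0
    d = n - 1
    x = (4 * s(1, d) - 2 * s(4, d) - s(5, d) - s(6, d)) % 1.0
    return "%x" % int(x * 16)

print("".join(bbp_hex_digit(i) for i in range(1, 7)))  # 243f6a
```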
//
// XTlib.h
// XTlib
//
// Created by xtc on 2018/1/29.
// Copyright © 2018 teason. All rights reserved.
//
// Local Podfile configuration (for development):
//pod 'XTBase',:path => '../XTBase/'
//pod 'XTlib',:path => '../XTlib/'
//pod 'XTlib/Animations',:path => '../XTlib/'
//pod 'XTlib/CustomUIs',:path => '../XTlib/'
#ifndef XTlib_h
#define XTlib_h
// Base
#import <XTBase/XTBase.h>
// DataBase
#import <XTFMDB/XTFMDB.h>
// Color
#import <XTColor/XTColor.h>
// Request
#import <XTReq/XTReq.h>
// table collection
#import <XTTable/XTTable.h>
#import <XTTable/XTCollection.h>
#endif /* XTlib_h */
We still know nothing about Robin Ventura, but we'll learn a lot about him when we find out where he plans to play Alex Rios.
Ozzie Guillen, of course, cemented Rios in center field. He also used Rios as a defensive replacement in center field, even if it meant shoving the superior defender Alejandro De Aza to a corner spot. (He also never pinch-hit for Rios, but that's neither here nor there.)
In the event that Ventura is thinking Rios should be first in line for center field duties, allow me to present some video evidence. I'm knee-deep in player profiles for White Sox Outsider 2012, which means I have to relive the worst moments of the season -- multiple times! -- in order to accurately recount them. I may as well try to make lemonade out of lemons.
Below are 10 videos depicting the Alex Rios 2011 Center Field Experience. And mind you, these aren't the only 10 instances of poor defense; MLB.com only has embeddable highlights of misplays that result in runs. For instance, the drop against Colorado that got him benched, and the misplayed ground-rule double that set up two Texas runs in a 2-1 loss, aren't shown, because no runs scored on those particular plays.
Nor am I counting a couple of homers he probably should have saved -- both hit by Eric Hosmer, one on July 4, and the other on July 6 -- because they have an above-average degree of difficulty by definition.
No. 1: April 22, 2011
An indirect route on a liner by Jhonny Peralta turns into a triple.
No. 2: April 23, 2011
The very next day, Rios takes a right-angled route on what ends up being an Alex Avila triple.
No. 3: May 1, 2011
Rios underestimates the carry of Felix Pie's drive to center, which bounces off the wall and turns into a triple.
No. 4: July 10, 2011
Rios gives up a base by making a fruitless throw home. The disgust in Tom Kelly's voice, both during and after the throw, is downright palpable.
No. 5: July 25, 2011
This is the kind of play that Mark Gonzales seems to pin on Ramirez, but Ramirez had to chase these kinds of pop-ups while having no clue whether he could go at them 100 percent, or whether Rios would lurk underneath him at the last second. And Juan Pierre did this to him, too.
No. 6: July 30, 2011
Rios allows a runner to score from first on a single by fielding the ball awkwardly and taking his sweet time throwing it back in. And then he throws to the wrong bag.
No. 7: Aug. 4, 2011
See No. 5.
No. 8: Aug. 4, 2011
Rios misses the cut-off man.
No. 9: Aug. 4, 2011 (skip to 0:49)
Rios gets caught in between charging a sinking liner, and lets it skip past him for a triple.
No. 10: Sept. 14, 2011
Rios gets caught in between charging a sinking liner, but at least gets part of a glove on it. Which means he actually gets charged with an error.
/**
* Class to test CimMethod
* @author Sharad Singhal
*/
public class CimMethodTest {
private static boolean verbose = false;
static CimStructure struct; // Test_Struct
static StructureValue structValue, structValue2; // Test_Struct value
static CimClass cimClass; // Test_Struct.Test_Class
static CimInstance instance, instance2; // Test_Class instance
static CimEnumeration enumeration; // Enumeration with name "enum", that contains a single enumerationValue called "name"
static EnumerationValue enumValue, enumValue2; // enumeration value (name)
static List<Qualifier> quals;
static {
// create some qualifiers for the parameter
quals = new Vector<Qualifier>();
quals.add(StandardQualifierType.DESCRIPTION.getQualifier("Qualifier", Constants.defaultNameSpacePath));
// create a CimStructure "Test_Struct" with a boolean key property labeled "kz" and true default value
Qualifier keyQual = StandardQualifierType.KEY.getQualifier(true, Constants.defaultNameSpacePath);
Vector<Qualifier> keyQuals = new Vector<Qualifier>();
keyQuals.add(keyQual);
CimProperty keyProperty = new CimProperty("Test_Struct","kz",DataType.BOOLEAN,new DataValue(true),keyQuals);
Vector<CimProperty> keyProperties = new Vector<CimProperty>(); // properties for Test_Struct
keyProperties.add(keyProperty);
struct = new CimStructure(ElementType.STRUCTURE,"Test_Struct",null,null,Constants.defaultNameSpacePath,keyProperties);
// System.out.println(struct.toMOF());
// create a Test_Structure value
HashMap<String,DataValue> propertyValues = new HashMap<String,DataValue>();
propertyValues.put("kz",new DataValue(false));
structValue = StructureValue.createStructureValue(struct, propertyValues, "$as");
structValue2 = StructureValue.createStructureValue(struct, propertyValues, "$as2");
// System.out.println(structValue.toMOF());
// create a CimClass "Test_Class" that inherits from "Test_Struct" with a string property with default value p2Value
CimProperty classProperty = new CimProperty("Test_Struct.Test_Class","p2",DataType.STRING,new DataValue("p2Value"),null);
Vector<CimProperty> classProperties = new Vector<CimProperty>();
classProperties.add(classProperty);
cimClass = new CimClass(ElementType.CLASS,"Test_Class",struct,null,struct.getNameSpacePath(),classProperties);
// System.out.println(cimClass.toMOF());
// create a CimInstance based on Test_Class
HashMap<String,DataValue> classPropertyValues = new HashMap<String,DataValue>();
classPropertyValues.put("p2", new DataValue("newP2Value"));
classPropertyValues.put("KZ", new DataValue(true));
instance = CimInstance.createInstance(cimClass, classPropertyValues, "$ac");
instance2 = CimInstance.createInstance(cimClass, classPropertyValues, "$ac2");
// System.out.println(instance.toMOF());
// create an enumeration "enum" with one string value "name" defined in it
Vector<EnumerationValue> ev = new Vector<EnumerationValue>();
ev.add(new EnumerationValue("name","enum",null,null));
enumeration = new CimEnumeration("enum",null,null,Constants.defaultNameSpacePath,DataType.STRING,ev);
// System.out.println(enumeration.toMOF());
// create an enumValue based on enum
enumValue = enumeration.getValue("name");
enumValue2 = enumeration.getValue("name");
// System.out.println(enumValue.toMOF());
}
// data objects
private static Object [] obj = {null, new UInt8((short)8), (byte) -12, new UInt16(22), (short)55,
new UInt32(100), -75, new UInt64("351"),Long.valueOf(500),
"foobar",true,Float.valueOf((float) 45.0),(double) 500.,
new DateTime(new Date(1397968828080L)),(char)'a',new ObjectPath(ElementType.CLASS,"CIM_Test",new NameSpacePath("/cimv2"), null, null),
new OctetString("0x213244"),
enumValue,
structValue,
instance,
new UInt8[]{new UInt8((short)8)},new Byte[]{(byte) -12},new UInt16[]{new UInt16(22)},new short[]{(short)55},
new UInt32[]{new UInt32(100)}, new Integer[]{-75},new UInt64[]{new UInt64("351")},new long[]{500L},
new String[]{"foobar"},new boolean[]{true},new Float[]{Float.valueOf((float) 45.0)},new double[]{(double) 500.},
new DateTime[]{new DateTime(new Date(1397968828080L))},new char[]{(char)'a'},new ObjectPath[]{new ObjectPath(ElementType.CLASS,"CIM_Test",new NameSpacePath("/cimv2"), null, null)},
new OctetString[]{new OctetString("0x213244")},
new EnumerationValue[]{enumValue},
new StructureValue[]{structValue},
new CimInstance[]{instance}
};
// data objects
private static Object [] obj2 = {null, new UInt8((short)8), (byte) -12, new UInt16(22), (short)55,
new UInt32(100), -75, new UInt64("351"),Long.valueOf(500),
"foobar",true,Float.valueOf((float) 45.0),(double) 500.,
new DateTime(new Date(1397968828080L)),(char)'a',new ObjectPath(ElementType.CLASS,"CIM_Test",new NameSpacePath("/cimv2"), null, null),
new OctetString("0x213244"),
enumValue2,
structValue2,
instance2,
new UInt8[]{new UInt8((short)8)},new Byte[]{(byte) -12},new UInt16[]{new UInt16(22)},new short[]{(short)55},
new UInt32[]{new UInt32(100)}, new Integer[]{-75},new UInt64[]{new UInt64("351")},new long[]{500L},
new String[]{"foobar"},new boolean[]{true},new Float[]{Float.valueOf((float) 45.0)},new double[]{(double) 500.},
new DateTime[]{new DateTime(new Date(1397968828080L))},new char[]{(char)'a'},new ObjectPath[]{new ObjectPath(ElementType.CLASS,"CIM_Test",new NameSpacePath("/cimv2"), null, null)},
new OctetString[]{new OctetString("0x213244")},
new EnumerationValue[]{enumValue2},
new StructureValue[]{structValue2},
new CimInstance[]{instance2}
};
// known data types
private static DataType [] type = {DataType.VOID, DataType.UINT8, DataType.SINT8, DataType.UINT16, DataType.SINT16,
DataType.UINT32, DataType.SINT32, DataType.UINT64, DataType.SINT64,
DataType.STRING, DataType.BOOLEAN, DataType.REAL32, DataType.REAL64,
DataType.DATETIME, DataType.CHAR16, DataType.OBJECTPATH,
DataType.OCTETSTRING,
DataType.ENUMERATIONVALUE,DataType.STRUCTUREVALUE,DataType.INSTANCEVALUE,
DataType.UINT8_ARRAY, DataType.SINT8_ARRAY, DataType.UINT16_ARRAY, DataType.SINT16_ARRAY,
DataType.UINT32_ARRAY, DataType.SINT32_ARRAY, DataType.UINT64_ARRAY, DataType.SINT64_ARRAY,
DataType.STRING_ARRAY, DataType.BOOLEAN_ARRAY, DataType.REAL32_ARRAY, DataType.REAL64_ARRAY,
DataType.DATETIME_ARRAY, DataType.CHAR16_ARRAY, DataType.OBJECTPATH_ARRAY,
DataType.OCTETSTRING_ARRAY,
DataType.ENUMERATIONVALUE_ARRAY,DataType.STRUCTUREVALUE_ARRAY,DataType.INSTANCEVALUE_ARRAY
};
// mof values
String [] mof = {"[Description(\"Qualifier\")]\nVoid Method();\n",
"[Description(\"Qualifier\")]\nUInt8 Method(UInt8 Name = 8);\n",
"[Description(\"Qualifier\")]\nSInt8 Method(SInt8 Name = -12);\n",
"[Description(\"Qualifier\")]\nUInt16 Method(UInt16 Name = 22);\n",
"[Description(\"Qualifier\")]\nSInt16 Method(SInt16 Name = 55);\n",
"[Description(\"Qualifier\")]\nUInt32 Method(UInt32 Name = 100);\n",
"[Description(\"Qualifier\")]\nSInt32 Method(SInt32 Name = -75);\n",
"[Description(\"Qualifier\")]\nUInt64 Method(UInt64 Name = 351);\n",
"[Description(\"Qualifier\")]\nSInt64 Method(SInt64 Name = 500);\n",
"[Description(\"Qualifier\")]\nString Method(String Name = \"foobar\");\n",
"[Description(\"Qualifier\")]\nBoolean Method(Boolean Name = true);\n",
"[Description(\"Qualifier\")]\nReal32 Method(Real32 Name = 45.0);\n",
"[Description(\"Qualifier\")]\nReal64 Method(Real64 Name = 500.0);\n",
"[Description(\"Qualifier\")]\nDatetime Method(Datetime Name = \"20140420044028.080***+000\");\n",
"[Description(\"Qualifier\")]\nChar16 Method(Char16 Name = \'a\');\n",
"[Description(\"Qualifier\")]\nCim_Test ref Method(Cim_Test ref Name = \"/class/cimv2:CIM_Test\");\n",
"[Description(\"Qualifier\")]\nOctetString Method(OctetString Name = \"0x213244\");\n",
"[Description(\"Qualifier\")]\nenum Method(enum Name = name);\n",
"[Description(\"Qualifier\")]\nTest_Struct Method(Test_Struct Name = value of Test_Struct as $as {\n\tkz = false;\n});\n",
"[Description(\"Qualifier\")]\nTest_Class Method(Test_Class Name = instance of Test_Class as $ac {\n\tp2 = \"newP2Value\";\n\tkz = true;\n});\n",
"[Description(\"Qualifier\")]\nUInt8 [] Method(UInt8 [] Name = { 8 });\n",
"[Description(\"Qualifier\")]\nSInt8 [] Method(SInt8 [] Name = { -12 });\n",
"[Description(\"Qualifier\")]\nUInt16 [] Method(UInt16 [] Name = { 22 });\n",
"[Description(\"Qualifier\")]\nSInt16 [] Method(SInt16 [] Name = { 55 });\n",
"[Description(\"Qualifier\")]\nUInt32 [] Method(UInt32 [] Name = { 100 });\n",
"[Description(\"Qualifier\")]\nSInt32 [] Method(SInt32 [] Name = { -75 });\n",
"[Description(\"Qualifier\")]\nUInt64 [] Method(UInt64 [] Name = { 351 });\n",
"[Description(\"Qualifier\")]\nSInt64 [] Method(SInt64 [] Name = { 500 });\n",
"[Description(\"Qualifier\")]\nString [] Method(String [] Name = { \"foobar\" });\n",
"[Description(\"Qualifier\")]\nBoolean [] Method(Boolean [] Name = { true });\n",
"[Description(\"Qualifier\")]\nReal32 [] Method(Real32 [] Name = { 45.0 });\n",
"[Description(\"Qualifier\")]\nReal64 [] Method(Real64 [] Name = { 500.0 });\n",
"[Description(\"Qualifier\")]\nDatetime [] Method(Datetime [] Name = { \"20140420044028.080***+000\" });\n",
"[Description(\"Qualifier\")]\nChar16 [] Method(Char16 [] Name = { \'a\' });\n",
"[Description(\"Qualifier\")]\nCim_Test ref [] Method(Cim_Test ref [] Name = { \"/class/cimv2:CIM_Test\" });\n",
"[Description(\"Qualifier\")]\nOctetString [] Method(OctetString [] Name = { \"0x213244\" });\n",
"[Description(\"Qualifier\")]\nenum [] Method(enum [] Name = { name });\n",
"[Description(\"Qualifier\")]\nTest_Struct [] Method(Test_Struct [] Name = { value of Test_Struct as $as {\n\tkz = false;\n} });\n",
"[Description(\"Qualifier\")]\nTest_Class [] Method(Test_Class [] Name = { instance of Test_Class as $ac {\n\tp2 = \"newP2Value\";\n\tkz = true;\n} });\n"
};
@AfterClass
public static void tearDownAfterClass() throws Exception {
System.out.println("done.");
}
@Before
public void setUp() throws Exception {
System.out.print("-");
}
@After
public void tearDown() throws Exception {
System.out.print(".");
return;
}
@BeforeClass
public static void setUpBeforeClass() throws Exception {
System.out.print("CimMethod ");
assertEquals(DataType.values().length,type.length);
assertEquals(obj.length,type.length);
assertEquals(type.length,obj2.length);
assertNotNull(struct);
assertNotNull(structValue);
assertNotNull(cimClass);
assertNotNull(instance);
assertNotNull(enumeration);
assertNotNull(enumValue);
/*
repository = new InMemoryRepository();
mClass = (CimClass) Java2Cim.getModelForClass(CimMethodTestClass.class, repository.getDefaultNameSpace(), repository);
assertNotNull(mClass);
System.out.println(mClass.toMOF());
*/
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#CimMethod(String, java.lang.String, net.aifusion.metamodel.DataType, java.util.List, java.util.List)}.
* Test method for {@link net.aifusion.metamodel.CimMethod#CimMethod(String, java.lang.String, java.lang.String, boolean, java.util.List, java.util.List)}.
* Test method for {@link net.aifusion.metamodel.CimMethod#CimMethod(String, java.lang.String, net.aifusion.metamodel.CimEnumeration, boolean, java.util.List, java.util.List)}.
* Test method for {@link net.aifusion.metamodel.CimMethod#CimMethod(String, java.lang.String, net.aifusion.metamodel.CimStructure, boolean, java.util.List, java.util.List)}.
*/
@Test
public final void testCimMethod() {
Vector<CimParameter> params = new Vector<CimParameter>();
for(int i = 0; i < type.length; i++){
DataType t = type[i];
DataValue v = new DataValue(t,obj[i]);
params.clear();
CimMethod m = null;
// System.out.println(t+" "+v.toMOF());
if(t.isVoid()){
m = new CimMethod("Cim_Class","Method",t,quals, null);
assertNotNull(m);
} else if(t.isPrimitive()){
params.add(new CimParameter(null,"Name",t,v, null));
m = new CimMethod("Cim_Class","Method",t,quals, params);
} else if(t.isReference()){
params.add(new CimParameter(null,"Name","Cim_Test",t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method","Cim_Test",t.isArray(),quals, params);
} else if(t.isEnumerationValue()){
params.add(new CimParameter(null, "Name", enumeration, t.isArray(), v, null));
m = new CimMethod("Cim_Class","Method",enumeration,t.isArray(),quals, params);
} else if(t.isStructureValue()){
params.add(new CimParameter(null,"Name",struct,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",struct,t.isArray(),quals, params);
} else if(t.isInstanceValue()){
params.add(new CimParameter(null,"Name",cimClass,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",cimClass,t.isArray(),quals, params);
} else {
fail("Unknown data type "+t);
}
assertNotNull(m);
}
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#getRefClassName()}.
* Test method for {@link net.aifusion.metamodel.CimMethod#getEnum()}.
* Test method for {@link net.aifusion.metamodel.CimMethod#getReturnedType()}.
* Test method for {@link net.aifusion.metamodel.CimMethod#getStruct()}.
*/
@Test
public final void testGetMethods() {
Vector<CimParameter> params = new Vector<CimParameter>();
for(int i = 0; i < type.length; i++){
DataType t = type[i];
DataValue v = new DataValue(t,obj[i]);
params.clear();
CimMethod m = null;
// System.out.println(t+" "+v.toMOF());
if(t.isVoid()){
m = new CimMethod("Cim_Class","Method",t,quals, null);
assertNotNull(m);
} else if(t.isPrimitive()){
params.add(new CimParameter(null,"Name",t,v, null));
m = new CimMethod("Cim_Class","Method",t,quals, params);
} else if(t.isReference()){
params.add(new CimParameter(null,"Name","Cim_Test",t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method","Cim_Test",t.isArray(),quals, params);
} else if(t.isEnumerationValue()){
params.add(new CimParameter(null, "Name", enumeration, t.isArray(), v, null));
m = new CimMethod("Cim_Class","Method",enumeration,t.isArray(),quals, params);
} else if(t.isStructureValue()){
params.add(new CimParameter(null,"Name",struct,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",struct,t.isArray(),quals, params);
} else if(t.isInstanceValue()){
params.add(new CimParameter(null,"Name",cimClass,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",cimClass,t.isArray(),quals, params);
} else {
fail("Unknown data type "+t);
}
assertNotNull(m);
assertEquals(t,m.getReturnedType());
if(t.isReference()){
assertEquals("Cim_Test",m.getRefClassName());
} else {
assertEquals(null,m.getRefClassName());
}
if(t.isEnumerationValue()){
assertEquals(enumeration,m.getEnum());
} else {
assertEquals(null,m.getEnum());
}
if(t.isInstanceValue()){
assertEquals(cimClass,m.getStruct());
} else {
if(t.isStructureValue()){
assertEquals(struct,m.getStruct());
} else {
assertEquals(null,m.getStruct());
}
}
}
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#getParameters()}.
*/
@Test
public final void testGetParameters() {
Vector<CimParameter> params = new Vector<CimParameter>();
for(int i = 0; i < type.length; i++){
DataType t = type[i];
DataValue v = new DataValue(t,obj[i]);
params.clear();
CimMethod m = null;
// System.out.println(t+" "+v.toMOF());
if(t.isVoid()){
m = new CimMethod("Cim_Class","Method",t,quals, null);
assertNotNull(m);
} else if(t.isPrimitive()){
params.add(new CimParameter(null,"Name",t,v, null));
m = new CimMethod("Cim_Class","Method",t,quals, params);
} else if(t.isReference()){
params.add(new CimParameter(null,"Name","Cim_Test",t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method","Cim_Test",t.isArray(),quals, params);
} else if(t.isEnumerationValue()){
params.add(new CimParameter(null, "Name", enumeration, t.isArray(), v, null));
m = new CimMethod("Cim_Class","Method",enumeration,t.isArray(),quals, params);
} else if(t.isStructureValue()){
params.add(new CimParameter(null,"Name",struct,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",struct,t.isArray(),quals, params);
} else if(t.isInstanceValue()){
params.add(new CimParameter(null,"Name",cimClass,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",cimClass,t.isArray(),quals, params);
} else {
fail("Unknown data type "+t);
}
assertNotNull(m);
List<CimParameter> l = m.getParameters();
assertEquals(params.size(),l.size());
for(int j = 0; j < params.size(); j++){
assertEquals(params.get(j),l.get(j));
}
}
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#bind(java.lang.reflect.Method, java.lang.Object)}.
* Test method for {@link net.aifusion.metamodel.CimMethod#invoke(java.util.List)}.
*/
@Test
public final void testBindAndInvoke() {
Object [] param = new Object[]{
null,new UInt8((short)4),Byte.valueOf((byte)6),(byte)7,new UInt16(30000), // 0-4
Short.valueOf((short)22),(short)15,new UInt32(54000L),Integer.valueOf(22000),45,new UInt64("530000"),Long.valueOf(540L),(long)50, // 5-12
"String arg",Boolean.valueOf(true),false,Float.valueOf((float) 54000D),(float)45.4,Double.valueOf(6400D),(double)4000,new DateTime(), // 13-20
Character.valueOf('x'),'x', new OctetString("0x3f"), // 21-23
JavaModelMapper.getObjectPathFromClass(CimMethodTestClass.class), // 24
new MethodBindingClass(), // 25
new UInt8[]{new UInt8((short)4)},new Byte[]{Byte.valueOf((byte)6)},new byte[]{(byte)7},new UInt16[]{new UInt16(30000)}, // 26-29
new Short[]{Short.valueOf((short)22)},new short[]{(short)15},new UInt32[]{new UInt32(54000L)}, // 30-32
new Integer[]{Integer.valueOf(22000)},new int[]{45},new UInt64[]{new UInt64("530000")},new Long[]{Long.valueOf(540L)},new long[]{(long)50}, // 33-37
new String[]{"String arg"},new Boolean[]{Boolean.valueOf(true)},new boolean[]{false},new Float[]{Float.valueOf((float) 54000D)},new float[]{(float)45.4}, // 38-42
new Double[]{Double.valueOf(6400D)},new double[]{(double)4000},new DateTime[]{new DateTime()}, // 43-45
new Character[]{Character.valueOf('x')},new char[]{'x'}, new OctetString[]{new OctetString("0x3f")}, // 46-48
new ObjectPath[]{JavaModelMapper.getObjectPathFromClass(CimMethodTestClass.class)}, // 49
new MethodBindingClass[]{new MethodBindingClass()}, // 50
EnumBindingClass.NAME1, // 51
new EnumBindingClass[]{EnumBindingClass.NAME1}, // 52
new MethodBindingSuperClass(), // 53
new MethodBindingSuperClass[]{new MethodBindingSuperClass()} // 54
};
CimMethodTestClass javaClass = new CimMethodTestClass();
Method [] javaMethods = CimMethodTestClass.class.getDeclaredMethods();
Repository r = new InMemoryRepository();
CimClass cimClass = (CimClass) Java2Cim.getModelForClass(CimMethodTestClass.class, r);
assertNotNull(cimClass);
CimEnumeration cimEnum = (CimEnumeration) r.get(new ObjectPath(ElementType.ENUMERATION,"AIFusion_EnumBindingClass",Constants.defaultNameSpacePath, null, null));
assertNotNull(cimEnum);
CimStructure cimStruct = (CimStructure) r.get(new ObjectPath(ElementType.STRUCTURE,"cim_testmethodssup",Constants.defaultNameSpacePath,null, null));
assertNotNull(cimStruct);
CimClass cimClass1 = (CimClass) r.get(new ObjectPath(ElementType.CLASS,"Cim_TestMethods",Constants.defaultNameSpacePath,null, null));
assertNotNull(cimClass1);
if(verbose) {
for(NamedElement e : r.getElements(null,null,null, false)){
System.out.println(e.toMOF());
}
}
Map<String,CimMethod> cimMethods = cimClass.getAllMethods();
// for each CIM method declared
for(CimMethod cimMethod : cimMethods.values()){
if(verbose) System.out.println("Trying CIM Method "+cimMethod.toMOF());
// obtain the method parameters
List<CimParameter> cimParameters = cimMethod.getParameters();
// try all java methods in the java class
for(Method javaMethod : javaMethods){
// ignore any non-exported methods
if(!javaMethod.isAnnotationPresent(Export.class)) continue;
// try to bind the Cim method to the java method
try {
cimMethod.bind(javaMethod, javaClass);
if(verbose) System.out.println("\t\t\tBind Succeeded for "+javaMethod.toString());
if(cimMethod.getReturnedType() == DataType.VOID){
// Void method in class does not take any parameters
assertEquals(0,cimParameters.size());
try {
DataValue returned = cimMethod.invoke(cimParameters);
assertNull(returned);
} catch(ModelException ei){
// we will fail here if invoke() fails on void method
fail(ei.toString());
}
} else {
// all others take one parameter
assertEquals(1,cimParameters.size());
// clone the parameter
CimParameter cimParameter = cimParameters.get(0).createInstanceParameter();
assertNotNull(cimParameter);
if(verbose) System.out.println("\tParameter "+cimParameter.toMOF());
// try every parameter value
for(Object javaParam : param){
DataType t = DataType.getTypeForObject(javaParam);
DataValue v = new DataValue(t,javaParam);
assertNotNull(v);
try {
cimParameter.setValue(v);
if(verbose) System.out.println("\t Matched value "+v.toMOF());
cimParameters.clear();
cimParameters.add(cimParameter);
try {
DataValue rv = cimMethod.invoke(cimParameters);
if(verbose) System.out.println("\t Returned value "+rv.toMOF());
assertEquals(v,rv);
} catch (ModelException ez){
// we fail here if cimMethod#invoke() fails
if(verbose) System.out.println("\t Invocation failed");
assertEquals(1,ez.getReason().getCode());
continue;
}
} catch (ModelException ep){
// cimParameter#setValue() should throw exceptions for mismatched parameter values
assertEquals(4, ep.getReason().getCode());
continue;
}
}
}
} catch (ModelException ex){
// we will fail due to type mismatch during CimMethod#bind()
if(ex.getReason().getCode() != 13){
if(verbose) System.out.println(ex.toString());
fail();
}
continue;
}
}
}
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#createInstanceMethod()}.
*/
@Test
public final void testCreateInstanceMethod() {
String mof = "class Test_Class { [static] String foo(sint32 p);\n String bar(String q); };";
MOFParser p = new MOFParser();
p.parse(new ByteArrayInputStream(mof.getBytes()), Constants.defaultNameSpacePath);
Repository r = p.getRepository();
CimClass e = (CimClass) r.get(new ObjectPath(ElementType.CLASS,"Test_Class",Constants.defaultNameSpacePath,null, null));
assertNotNull(e);
// System.out.println(e.toMOF());
Map<String,CimMethod> m = e.getAllMethods();
assertEquals(2,m.size());
// non static methods are cloned
CimMethod meth = m.get("bar");
assertFalse(meth.isStatic());
CimMethod instanceMethod = meth.createInstanceMethod();
assertEquals(meth,instanceMethod);
assertFalse(meth == instanceMethod);
// static methods are shared
meth = m.get("foo");
assertTrue(meth.isStatic());
instanceMethod = meth.createInstanceMethod();
assertEquals(meth,instanceMethod);
assertTrue(meth == instanceMethod);
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#isStatic()}.
*/
@Test
public final void testIsStatic() {
String mof = "class Test_Class { [static] String foo(sint32 p);\n String bar(String q); };";
MOFParser p = new MOFParser();
p.parse(new ByteArrayInputStream(mof.getBytes()), Constants.defaultNameSpacePath);
Repository r = p.getRepository();
CimClass e = (CimClass) r.get(new ObjectPath(ElementType.CLASS,"Test_Class",Constants.defaultNameSpacePath,null, null));
assertNotNull(e);
// System.out.println(e.toMOF());
Map<String,CimMethod> m = e.getAllMethods();
assertEquals(2,m.size());
assertFalse(m.get("bar").isStatic());
assertTrue(m.get("foo").isStatic());
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#toMOF(java.lang.String)}.
*/
@Test
public final void testToMOFString() {
Vector<CimParameter> params = new Vector<CimParameter>();
for(int i = 0; i < type.length; i++){
DataType t = type[i];
DataValue v = new DataValue(t,obj[i]);
params.clear();
CimMethod m = null;
// System.out.println(t+" "+v.toMOF());
if(t.isVoid()){
m = new CimMethod(null,"Method",t,quals, null);
assertNotNull(m);
} else if(t.isPrimitive()){
params.add(new CimParameter("Cim_Class#Method","Name",t,v, null));
m = new CimMethod(null,"Method",t,quals, params);
} else if(t.isReference()){
params.add(new CimParameter("Cim_Class#Method","Name","Cim_Test",t.isArray(),v, null));
m = new CimMethod(null,"Method","Cim_Test",t.isArray(),quals, params);
} else if(t.isEnumerationValue()){
params.add(new CimParameter("Cim_Class#Method", "Name", enumeration, t.isArray(), v, null));
m = new CimMethod(null,"Method",enumeration,t.isArray(),quals, params);
} else if(t.isStructureValue()){
params.add(new CimParameter("Cim_Class#Method","Name",struct,t.isArray(),v, null));
m = new CimMethod(null,"Method",struct,t.isArray(),quals, params);
} else if(t.isInstanceValue()){
params.add(new CimParameter("Cim_Class#Method","Name",cimClass,t.isArray(),v, null));
m = new CimMethod("Cim_Class#Method","Method",cimClass,t.isArray(),quals, params);
} else {
fail("Unknown data type "+t);
}
assertNotNull(m);
assertEquals(mof[i],m.toMOF());
}
}
/**
* Test method for {@link net.aifusion.metamodel.CimMethod#getFullName()}.
*/
@Test
public final void testGetFullName() {
Vector<CimParameter> params = new Vector<CimParameter>();
for(int i = 0; i < type.length; i++){
DataType t = type[i];
DataValue v = new DataValue(t,obj[i]);
params.clear();
CimMethod m = null;
// System.out.println(t+" "+v.toMOF());
if(t.isVoid()){
m = new CimMethod("Cim_Class","Method",t,quals, null);
assertNotNull(m);
} else if(t.isPrimitive()){
params.add(new CimParameter("Cim_Class#Method","Name",t,v, null));
m = new CimMethod("Cim_Class","Method",t,quals, params);
} else if(t.isReference()){
params.add(new CimParameter("Cim_Class#Method","Name","Cim_Test",t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method","Cim_Test",t.isArray(),quals, params);
} else if(t.isEnumerationValue()){
params.add(new CimParameter("Cim_Class#Method", "Name", enumeration, t.isArray(), v, null));
m = new CimMethod("Cim_Class","Method",enumeration,t.isArray(),quals, params);
} else if(t.isStructureValue()){
params.add(new CimParameter("Cim_Class#Method","Name",struct,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",struct,t.isArray(),quals, params);
} else if(t.isInstanceValue()){
params.add(new CimParameter("Cim_Class#Method","Name",cimClass,t.isArray(),v, null));
m = new CimMethod("Cim_Class","Method",cimClass,t.isArray(),quals, params);
} else {
fail("Unknown data type "+t);
}
assertNotNull(m);
assertEquals("Cim_Class#Method",m.getFullName());
}
}
} |
package com.inlym.lifehelper.common.constant;
/**
 * Custom request attribute names.
 *
 * <h2>Notes</h2>
 * <li>When calling {@code request.getAttribute} and {@code request.setAttribute}, do not use raw
 * string literals; always use the constants defined here.
 *
 * <pre class="code">
 * request.setAttribute(CustomRequestAttribute.CLIENT_IP, clientIp);
 * </pre>
 *
 * @author <a href="https://www.inlym.com">inlym</a>
 * @date 2022-01-19
 * @since 1.0.0
 */
public final class CustomRequestAttribute {
/** Request ID */
public static final String REQUEST_ID = "REQUEST_ID";
/** User ID */
public static final String USER_ID = "USER_ID";
/** Client IP address */
public static final String CLIENT_IP = "CLIENT_IP";
}
|
package providercache
import (
"io/ioutil"
"log"
"path/filepath"
"sort"
"strings"
"github.com/hashicorp/terraform/addrs"
"github.com/hashicorp/terraform/internal/getproviders"
)
// Dir represents a single local filesystem directory containing cached
// provider plugin packages that can be both read from (to find providers to
// use for operations) and written to (during provider installation).
//
// The contents of a cache directory follow the same naming conventions as a
// getproviders.FilesystemMirrorSource, except that the packages are always
// kept in the "unpacked" form (a directory containing the contents of the
// original distribution archive) so that they are ready for direct execution.
//
// A Dir also pays attention only to packages for the current host platform,
// silently ignoring any cached packages for other platforms.
//
// Various Dir methods return values that are technically mutable due to the
// restrictions of the Go typesystem, but callers are not permitted to mutate
// any part of the returned data structures.
type Dir struct {
baseDir string
targetPlatform getproviders.Platform
// metaCache is a cache of the metadata of relevant packages available in
// the cache directory last time we scanned it. This can be nil to indicate
// that the cache is cold. The cache will be invalidated (set back to nil)
// by any operation that modifies the contents of the cache directory.
//
// We intentionally don't make effort to detect modifications to the
// directory made by other codepaths because the contract for NewDir
// explicitly defines using the same directory for multiple purposes
// as undefined behavior.
metaCache map[addrs.Provider][]CachedProvider
}
// NewDir creates and returns a new Dir object that will read and write
// provider plugins in the given filesystem directory.
//
// If two instances of Dir are concurrently operating on a particular base
// directory, or if a Dir base directory is also used as a filesystem mirror
// source directory, the behavior is undefined.
func NewDir(baseDir string) *Dir {
return &Dir{
baseDir: baseDir,
targetPlatform: getproviders.CurrentPlatform,
}
}
// NewDirWithPlatform is a variant of NewDir that allows selecting a specific
// target platform, rather than taking the current one where this code is
// running.
//
// This is primarily intended for portable unit testing and not particularly
// useful in "real" callers, with the exception of terraform-bundle.
func NewDirWithPlatform(baseDir string, platform getproviders.Platform) *Dir {
return &Dir{
baseDir: baseDir,
targetPlatform: platform,
}
}
// AllAvailablePackages returns a description of all of the packages already
// present in the directory. The cache entries are grouped by the provider
// they relate to and then sorted by version precedence, with highest
// precedence first.
//
// This function will return an empty result both when the directory is empty
// and when scanning the directory produces an error.
//
// The caller is forbidden from modifying the returned data structure in any
// way, even though the Go type system permits it.
func (d *Dir) AllAvailablePackages() map[addrs.Provider][]CachedProvider {
if err := d.fillMetaCache(); err != nil {
log.Printf("[WARN] Failed to scan provider cache directory %s: %s", d.baseDir, err)
return nil
}
return d.metaCache
}
// ProviderVersion returns the cache entry for the requested provider version,
// or nil if the requested provider version isn't present in the cache.
func (d *Dir) ProviderVersion(provider addrs.Provider, version getproviders.Version) *CachedProvider {
if err := d.fillMetaCache(); err != nil {
return nil
}
for _, entry := range d.metaCache[provider] {
// We're intentionally comparing exact version here, so if either
// version number contains build metadata and they don't match then
// this will not return true. The rule of ignoring build metadata
// applies only for handling version _constraints_ and for deciding
// version precedence.
if entry.Version == version {
return &entry
}
}
return nil
}
// ProviderLatestVersion returns the cache entry for the latest
// version of the requested provider already available in the cache, or nil if
// there are no versions of that provider available.
func (d *Dir) ProviderLatestVersion(provider addrs.Provider) *CachedProvider {
if err := d.fillMetaCache(); err != nil {
return nil
}
entries := d.metaCache[provider]
if len(entries) == 0 {
return nil
}
return &entries[0]
}
func (d *Dir) fillMetaCache() error {
// For d.metaCache we consider nil to be different than a non-nil empty
// map, so we can distinguish between having scanned and got an empty
// result vs. not having scanned successfully at all yet.
if d.metaCache != nil {
log.Printf("[TRACE] providercache.fillMetaCache: using cached result from previous scan of %s", d.baseDir)
return nil
}
log.Printf("[TRACE] providercache.fillMetaCache: scanning directory %s", d.baseDir)
allData, err := getproviders.SearchLocalDirectory(d.baseDir)
if err != nil {
return err
}
// The getproviders package just returns everything it found, but we're
// interested only in a subset of the results:
// - those that are for the current platform
// - those that are in the "unpacked" form, ready to execute
// ...so we'll filter in these ways while we're constructing our final
// map to save as the cache.
//
// We intentionally always make a non-nil map, even if it might ultimately
// be empty, because we use that to recognize that the cache is populated.
data := make(map[addrs.Provider][]CachedProvider)
for providerAddr, metas := range allData {
for _, meta := range metas {
if meta.TargetPlatform != d.targetPlatform {
log.Printf("[TRACE] providercache.fillMetaCache: ignoring %s because it is for %s, not %s", meta.Location, meta.TargetPlatform, d.targetPlatform)
continue
}
if _, ok := meta.Location.(getproviders.PackageLocalDir); !ok {
// PackageLocalDir indicates an unpacked provider package ready
// to execute.
log.Printf("[TRACE] providercache.fillMetaCache: ignoring %s because it is not an unpacked directory", meta.Location)
continue
}
packageDir := filepath.Clean(string(meta.Location.(getproviders.PackageLocalDir)))
execFile := findProviderExecutableInLocalPackage(meta)
if execFile == "" {
// If the package doesn't contain a suitable executable then
// it isn't considered to be part of our cache.
log.Printf("[TRACE] providercache.fillMetaCache: ignoring %s because it does not seem to contain a suitable plugin executable", meta.Location)
continue
}
log.Printf("[TRACE] providercache.fillMetaCache: including %s as a candidate package for %s %s", meta.Location, providerAddr, meta.Version)
data[providerAddr] = append(data[providerAddr], CachedProvider{
Provider: providerAddr,
Version: meta.Version,
PackageDir: filepath.ToSlash(packageDir),
ExecutableFile: filepath.ToSlash(execFile),
})
}
}
// After we've built our lists per provider, we'll also sort them by
// version precedence so that the newest available version is always at
// index zero. If there are two versions that differ only in build metadata
// then it's undefined but deterministic which one we will select here,
// because we're preserving the order returned by SearchLocalDirectory
// in that case.
for _, entries := range data {
sort.SliceStable(entries, func(i, j int) bool {
// We're using GreaterThan rather than LessThan here because we
// want these in _decreasing_ order of precedence.
return entries[i].Version.GreaterThan(entries[j].Version)
})
}
d.metaCache = data
return nil
}
// This is a helper function to peep into the unpacked directory associated
// with the given package meta and find something that looks like it's intended
// to be the executable file for the plugin.
//
// This is a bit messy and heuristic-y because historically Terraform used the
// filename itself for local filesystem discovery, allowing some variance in
// the filenames to capture extra metadata, whereas now we're using the
// directory structure leading to the executable instead but need to remain
// compatible with the executable names bundled into existing provider packages.
//
// It will return a zero-length string if it can't find a file following
// the expected convention in the given directory.
func findProviderExecutableInLocalPackage(meta getproviders.PackageMeta) string {
packageDir, ok := meta.Location.(getproviders.PackageLocalDir)
if !ok {
// This should never happen because the providercache package only
// uses the local unpacked directory layout. If anything else ends
// up in here then we'll indicate that no executable is available,
// because all other locations require a fetch/unpack step first.
return ""
}
infos, err := ioutil.ReadDir(string(packageDir))
if err != nil {
// If the directory itself doesn't exist or isn't readable then we
// can't access an executable in it.
return ""
}
// For a provider named e.g. tf.example.com/awesomecorp/happycloud, we
// expect an executable file whose name starts with
// "terraform-provider-happycloud", followed by zero or more additional
// characters. If there _are_ additional characters then the first one
// must be an underscore or a period, like in these examples:
// - terraform-provider-happycloud_v1.0.0
// - terraform-provider-happycloud.exe
//
// We don't require the version in the filename to match because the
// executable's name is no longer authoritative, but packages of "official"
// providers may continue to use versioned executable names for backward
// compatibility with Terraform 0.12.
//
// We also presume that providers packaged for Windows will include the
// necessary .exe extension on their filenames but do not explicitly check
// for that. If there's a provider package for Windows that has a file
// without that suffix then it will be detected as an executable but then
// we'll presumably fail later trying to run it.
wantPrefix := "terraform-provider-" + meta.Provider.Type
// We'll visit all of the directory entries and take the first (in
// name-lexical order) that looks like a plausible provider executable
// name. A package with multiple files meeting these criteria is degenerate
// but we will tolerate it by ignoring the subsequent entries.
for _, info := range infos {
if info.IsDir() {
continue // A directory can never be an executable
}
name := info.Name()
if !strings.HasPrefix(name, wantPrefix) {
continue
}
remainder := name[len(wantPrefix):]
if len(remainder) > 0 && (remainder[0] != '_' && remainder[0] != '.') {
continue // subsequent characters must be delimited by _ or .
}
return filepath.Join(string(packageDir), name)
}
// If we fall out here then nothing has matched.
return ""
}
|
Using the Split Bregman Algorithm to Solve the Self-Repelling Snake Model
Preserving the contour topology during image segmentation is useful in many practical scenarios. By keeping the contours isomorphic, it is possible to prevent over-segmentation and under-segmentation, as well as to adhere to given topologies. The self-repelling snake model (SR) is a variational model that preserves contour topology by combining a non-local repulsion term with the geodesic active contour model (GAC). The SR is traditionally solved using the additive operator splitting (AOS) scheme. Although this solution is stable, the memory requirement grows quickly as the image size increases. In our paper, we propose an alternative solution to the SR using the Split Bregman method. Our algorithm breaks the problem down into simpler subproblems to use lower-order evolution equations and approximation schemes. The memory usage is significantly reduced as a result. Experiments show comparable performance to the original algorithm with shorter iteration times.
Introduction
Topology preservation in image segmentation is an external constraint that discourages changes in the topology of the segmentation contour. It is usually applied in problems where the object topology is known a priori. One example is in medical image analysis, where the segmentation of the brain cortical surface must produce results consistent with the real-world brain cortical structure . Another example is the segmentation of objects with complicated interiors, noise, or occlusions, where a topological constraint can be used to prevent over-segmentation, i.e., the forming of "holes" due to image complexity , or under-segmentation, i.e., the merging of contours of separate objects. Much active research is underway in this area, such as image segmentation and registration using the Beltrami representation of shapes and non-local shape descriptors , multi-label image segmentation with preserved topology , and min-cut/max-flow segmentation using topology priors .
Since the problem of topology preservation can be intuitively linked to the process of contour evolution, many active contour models have been proposed for it. In these models, the contour is affected by various forces until it converges to the final segmentation result. To preserve topology during the contour evolution process, a constraint term is usually added to the variational formulation which prevents the contour from self-intersecting, i.e., merging or splitting. For example, Han et al. proposed a simple-point detection scheme in an implicit level set framework in 2003. Meanwhile, Cecil et al. monitored the changes in the Jacobian of the level set. In 2005, Alexandrov et al. recast the topology preservation problem as a shape optimization problem of the level sets, where narrow bands around the segmentation contours are discouraged from overlapping. Sundaramoorthi and Yezzi proposed an approach based on knot energy minimization to realize the same effect. Rochery et al. used a similar idea while introducing a non-local regularization term, which was applied in the tracking of long thin objects in remote sensing images. Building on the previous ideas, the self-repelling snake model (SR) was proposed by Le Guyader et al. in 2008 . The SR uses an implicit level set representation for the curve and adds a non-local repulsion term to the classic geodesic active contour model (GAC) . In the follow-up work , the short-time existence/uniqueness and Lipschitz regularity of the SR model were studied. Later, the SR model was successfully extended to non-local topology-preserving segmentation-guided registration . Attempts have also been made to combine the SR with the region-based Chan-Vese model, though a direct combination proved less successful than the original SR.
The SR model has intuitive and straightforward geometric interpretations, but its non-local term leads to complications in the numerical implementation.
To the best of our knowledge, the SR model has always been solved through the additive operator splitting (AOS) strategy. On the one hand, the derivation of the gradient descent equations is complicated and requires an upwind difference discretization scheme. On the other hand, though the AOS is stable, its memory requirement grows quadratically with the size of the image. In this paper, we propose an alternative solution using the Split Bregman method that aims towards a more concise algorithm and lower memory usage.
The Split Bregman algorithm was first introduced to computer vision by Goldstein and Osher for the total variation (TV) model in image restoration.
By introducing splitting variables and iterative parameters, it transforms the original constrained minimization problem into simpler sub-problems that can be solved alternately. The Split Bregman algorithm is shown to be equivalent to the Alternating Direction Method of Multipliers (ADMM) and the Augmented Lagrangian Method (ALM) . In this paper, we introduce an intermediate variable to split the original problem into two sub-problems, which turns a second-order optimization problem into two first-order ones. Solving the new sub-problems no longer requires taking complex differentials of the geodesic curvature term. We also replace the re-initialization of the signed distance function with a simple projection scheme. As a result, the optimization of the level set function is simplified. In addition, to address some problems arising from the Split Bregman solution, we replace the Heaviside representation of the level set in with one that performed better in our algorithm.
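To give a concrete flavour of the splitting idea (an illustration, not taken from this paper): once an auxiliary variable d ≈ ∇φ is introduced, the L1-type sub-problem in d decouples pointwise and admits the closed-form "shrinkage" (soft-thresholding) solution used in Goldstein and Osher's TV algorithm. A minimal sketch, with illustrative names:

```python
import numpy as np

def shrink(x, lam):
    """Soft-thresholding operator: the minimizer of lam*|z| + 0.5*(z - x)^2.

    In Split Bregman iterations this solves the decoupled L1 sub-problem
    in closed form at every pixel; `x` and `lam` are illustrative names.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Because this sub-problem is solved exactly and cheaply, the remaining sub-problem in the level set function is a smooth first-order one, which is what makes the splitting attractive.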
The paper is organized as follows. In section 2, we review and provide some intuition to the original SR model. In section 3, we design the Split Bregman algorithm for the SR model and derive the Euler-Lagrange equations or gradient descent equations for the sub-problems. In section 4, the discretization schemes for the sub-problems are presented for the alternating iterative optimization.
In section 5, we provide some numerical examples and comparisons of results.
Finally, we draw conclusions in section 6.
The Original Self-Repelling Snake Model
The original SR model as proposed in is an edge-based segmentation model based on the GAC . It adopts the variational level set formulation , where the segmentation contour is implicitly represented as the zero level line of a signed distance function . An energy functional is minimized until convergence is reached and the segmentation contour is obtained. The energy functional comprises three terms, two of which are taken from the GAC model and contribute to edge detection and the balloon force respectively, while the last one accounts for the self-repulsion of the contour as it approaches itself.
The definition of the SR model is as follows. Let f(x) : Ω → R be a scalar-valued image, x ∈ Ω, where Ω is the domain of the image. The standard edge detection function g(x) ∈ (0, 1] is given by

g(x) = 1 / (1 + ρ|∇(G_σ * f)(x)|^s),    (1)

where s = 1 or 2, ρ is a scaling parameter, and G_σ * f denotes a Gaussian convolution of the image with a standard deviation of σ. The object boundary C is represented by the zero level set of the level set function φ,

C = {x ∈ Ω : φ(x) = 0}.    (2)

The level set function φ is defined as a signed distance function,

φ(x) = ±d(x, C),    (3)

taking opposite signs inside and outside the contour, where d(x, C) is the Euclidean distance between the point x and the contour C. As a signed distance function, φ satisfies the constraint condition below, i.e. the Eikonal equation,

|∇φ(x)| = 1.    (4)

To represent the image area and contour, we introduce the Heaviside function H(φ) and the Dirac function δ(φ). Since the original Heaviside function is discontinuous and therefore non-differentiable, we adopt regularized approximations H_ε(φ) and δ_ε(φ) . These schemes are different from the ones chosen for the original model in . In particular, ε does not regularize the entire image domain, which improves the stability of edge-based models. The effect is more apparent in our Split Bregman algorithm, as we will discuss further in section 3.
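The exact regularization pair used by our algorithm is not reproduced in this excerpt; purely for illustration (an assumption, not the scheme adopted in the paper), one widely used smooth pair is H_ε(φ) = ½(1 + (2/π) arctan(φ/ε)) with δ_ε = H_ε′:

```python
import numpy as np

def heaviside_eps(phi, eps=1.0):
    # Smooth approximation of the Heaviside step function H(phi)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac_eps(phi, eps=1.0):
    # Derivative of heaviside_eps: a smooth approximation of delta(phi)
    return eps / (np.pi * (eps**2 + phi**2))
```

As ε → 0 these converge to the sharp step and delta functions; where ε acts controls how far from the zero level set the contour terms have support, which is the stability point made above.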
Given the above, the energy functional E(φ) of the SR model can be written as (7), where γ, α, β are penalty parameters that balance the three terms.
E g (φ) is the geodesic length of the contour: the total variation of the Heaviside function, i.e. the total length of the contour, weighted by the edge detector in (1).
E a (φ) is the area enclosed by the contour, also weighted by the edge detector. It contributes a balloon force that pushes the segmentation contour over weak edges.
The repulsion term measures the nearness of two points x and y, i.e. the further apart the points, the smaller the repulsion. In (10), h ε (φ(x)) and h ε (φ(y)) denote the narrow bands around the points x and y; when the points x and y are further than a distance l from the contour, the repulsion vanishes. Given the energy functional (7) and the constraint (4), the variational formulation for SR and the corresponding evolution equation follow, where (15) is the geodesic curvature that shifts the contour towards the edges detected by g(x). In image areas with near-uniform intensity, ∇g(x) → 0 and |∇φ(x, t)| → 0, so the geodesic curvature term has little effect and the balloon force αg(x) dominates.
Lastly, the evolution equation derived from the repulsion term follows. To summarize, by applying variational methods to the three energy terms and substituting δ ε (φ(x)) with |∇φ(x)|, the evolution equation (17) can be derived. With regard to the constraint |∇φ| = 1, the dynamic re-initialization scheme below is adopted. This is a typical Hamilton-Jacobi equation that can be discretized and solved through an up-wind difference scheme. To solve (17), the original solution adopts the AOS strategy. The first term on the r.h.s. of (17) is discretized with the half-point difference scheme and the harmonic averaging approximation; the next two terms adopt the up-wind scheme. Two semi-implicit schemes are constructed by concatenating the rows and columns of the image respectively. For each A l (l ∈ (x 1 , x 2 )), where i, j are two points in the image, N l (i) is the set of nearest neighbors of i, and A l is a diagonally dominant tridiagonal matrix. Finally, φ k+1 can be calculated from (19). Since i and j span the entire image, if Ω ∈ R m×n , then A l ∈ R (m×n)×(m×n) . Consequently, the variable A greatly increases the memory requirement of the AOS solution. In the last step, (19) is solved via the Thomas algorithm, which involves LU decomposition, forward substitution, and backward substitution, with a computational complexity of O(N). In the following section, we propose another solution to the SR model with the Split Bregman method that aims to be faster by replacing the re-initialization step, more memory efficient by using compact intermediate variables, and more concise by bypassing the complex discretization schemes.
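For reference, the Thomas algorithm used to invert each tridiagonal system A l φ = d can be sketched as below. This is the textbook O(N) forward-elimination/back-substitution form, offered as an illustrative sketch rather than code from the paper.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N).

    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp = np.empty(n)  # modified super-diagonal
    dp = np.empty(n)  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Diagonal dominance of A l , noted above, guarantees the eliminations never divide by zero.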
The Split-Bregman Algorithm for the Self-repelling Snake Model
The Split Bregman method is a fast alternating direction method often used for solving L 1 -regularized constrained optimization problems. To design the Split Bregman algorithm for (7), we first introduce a splitting variable w = ∇φ and the Bregman iterator b. We can re-formulate the energy minimization problem as below, where b 0 = 0, w 0 = 0, and µ is a penalty parameter. The original problem can then be solved as two sub-problems in alternating order for loops k = 1, 2, ..., K.
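The alternating structure — a quadratic φ-like subproblem, a shrinkage w-subproblem, and the Bregman update b ← b + Du − w — can be seen on a deliberately simpler problem, 1D total-variation denoising. This is an illustrative sketch of the Split Bregman alternation, not the SR model itself; `lam` and `mu` are example penalty parameters.

```python
import numpy as np

def tv_denoise_1d(f, lam=10.0, mu=5.0, iters=50):
    """Split Bregman for min_u lam/2 * ||u - f||^2 + |Du|_1 (1D TV denoising)."""
    n = len(f)
    D = np.zeros((n, n))              # forward-difference operator, last row zero
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = lam * np.eye(n) + mu * D.T @ D  # fixed matrix of the quadratic subproblem
    w = np.zeros(n)
    b = np.zeros(n)
    u = f.copy()
    for _ in range(iters):
        # u-subproblem: (lam*I + mu*D^T D) u = lam*f + mu*D^T (w - b)
        u = np.linalg.solve(A, lam * f + mu * D.T @ (w - b))
        du = D @ u
        # w-subproblem: soft thresholding (shrink) with threshold 1/mu
        w = np.sign(du + b) * np.maximum(np.abs(du + b) - 1.0 / mu, 0.0)
        # Bregman update
        b = b + du - w
    return u
```

In the SR model the roles are analogous: φ plays the part of u, w = ∇φ replaces Du, and the w-subproblem gains the repulsion term handled in the next sections.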
The sub-problems are (24) and (25). To solve the sub-problem (24), we can derive the evolution equation of φ via standard variational methods, with the initial and boundary conditions as below. With the Heaviside function originally adopted, the newly introduced component δ ε (φ) in the Split Bregman algorithm may be excessively smoothed. For the sub-problem (25), if |w(x)| ≠ 0, we obtain the corresponding Euler-Lagrange equation. However, since the second term in (30) contains the integral of w(y), it is difficult to construct an iterative scheme for w k . An approximation formula with projection is designed in the next section to address this issue.
Discretization and Iterative Scheme
For the next step in solving (26) and (30), we devise the discretization of the continuous derivatives. Let the spatial step be 1 and the time step be τ, and let the discrete coordinates of pixel (i, j) be x i,j = (x 1i , x 2j ), where i = 0, 1, 2, ..., M + 1 and j = 0, 1, 2, ..., N + 1, so that φ i,j = φ(x 1i , x 2j ); the other variables take similar forms. With first-order finite difference approximations, we obtain the discrete gradient, Laplacian, and divergence respectively. The first-order time derivative of φ i,j can be approximated by forward differences. Therefore, from (26), a semi-implicit iterative scheme can be designed for φ k+1,s+1 i,j , where s = 0, 1, 2, ..., S, and y denotes a point taken from a small window of size 2d × 2d around the point x.
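The forward-difference gradient and its negative-adjoint (backward-difference) divergence can be written compactly, with the discrete Laplacian obtained as div(grad φ). This is a standard discretization sketch consistent with the first-order differences described above, not code from the paper; the Neumann boundary handling is an assumption.

```python
import numpy as np

def grad(phi):
    """Forward differences; last row/column difference set to zero (Neumann)."""
    gx = np.zeros_like(phi)
    gy = np.zeros_like(phi)
    gx[:-1, :] = phi[1:, :] - phi[:-1, :]
    gy[:, :-1] = phi[:, 1:] - phi[:, :-1]
    return gx, gy

def div(wx, wy):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(wx)
    d[0, :] += wx[0, :]
    d[1:-1, :] += wx[1:-1, :] - wx[:-2, :]
    d[-1, :] -= wx[-2, :]
    d[:, 0] += wy[:, 0]
    d[:, 1:-1] += wy[:, 1:-1] - wy[:, :-2]
    d[:, -1] -= wy[:, -2]
    return d

def laplacian(phi):
    gx, gy = grad(phi)
    return div(gx, gy)
```

The adjointness ⟨grad u, w⟩ = −⟨u, div w⟩ is exactly what makes terms like ∇·(w − b) appear in the φ-subproblem after integration by parts.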
The repulsion from points further away is negligible; therefore we only check within a small window. Note that the initial and boundary conditions in (27) still hold.
Next, we solve (30) iteratively. By temporarily fixing w k+1,r (y), we can design a concise approximate generalized soft thresholding formula, with w k+1,0 (y) = w k (y). For r = 0, 1, 2, ..., R, since | w k+1,r i,j | = 1, the iterative formula for w k+1 from (25) can be written as (35). In practice, a single iteration is often enough for computing (35). Alternatively, we can directly use the soft thresholding formula to derive w k+1 i,j , followed by the same projection scheme as in (36). After φ k+1 i,j and w k+1 i,j have been obtained, we can derive b k+1 i,j directly from (23). In summary, the Split Bregman algorithm proposed in this section has four main advantages. First, the memory requirement is reduced. For an image of size m × n, the variable A in the AOS solution has size 2 × (m × n) × (m × n), whereas in the Split Bregman algorithm the sizes of both w and b are only 2 × (m × n). As the image size increases, the memory usage of the original algorithm grows quadratically while that of the new algorithm grows linearly, which matters when dealing with large images. Second, the numerical solution is simplified. In (17), the convection term containing ∇φ is hyperbolic, which requires the up-wind difference scheme; by substituting ∇φ with the auxiliary variable w, we remove the need for complex discretization schemes. Third, the use of a simple projection scheme in place of the re-initialization step improves efficiency. Finally, contour evolution is stabilized by confining the smoothing of the Heaviside function to the narrow bands around the contours.
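The soft thresholding (shrink) operator and the unit-norm projection that replaces re-initialization can be sketched as follows. Function names and the tolerance `eps` are illustrative assumptions.

```python
import numpy as np

def shrink(z, t):
    """Generalized soft thresholding: shrink(z, t) = z/|z| * max(|z| - t, 0),
    applied pointwise to a vector field z = (zx, zy)."""
    zx, zy = z
    nz = np.sqrt(zx ** 2 + zy ** 2)
    mag = np.maximum(nz - t, 0.0)
    scale = np.where(nz > 0, mag / np.maximum(nz, 1e-12), 0.0)
    return zx * scale, zy * scale

def project_unit(wx, wy, eps=1e-12):
    """Project (wx, wy) onto unit vectors, enforcing |w| = 1 in place of
    re-initializing phi as a signed distance function."""
    n = np.maximum(np.sqrt(wx ** 2 + wy ** 2), eps)
    return wx / n, wy / n
```

The projection is a single pointwise normalization, which is why it is so much cheaper than repeatedly solving the re-initialization Hamilton-Jacobi equation.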
The proposed algorithm is summarized in Algorithm 1.
Experimental Results
The experiments below demonstrate that the Split Bregman solution of the SR model successfully prevents contour splitting (which causes over-segmentation) and contour merging.
Algorithm 1 (summary): calculate g(x) using (1); initialize φ 0 (x) as a signed distance function; set the penalty parameters, tolerance errors, time step, and iteration counts; then alternate the φ-, w-, and b-updates until convergence.
In Figure 1, contour splitting is successfully prevented and the topology is preserved. The parameter α controls the outward or inward driving force, γ weighs the geodesic length, β weighs the repelling force, and µ weighs the constraint. A large β causes the contour to become unstable, as the repulsive force is a highly local term. However, increasing β and decreasing the window size narrows the gap between the contours. The window size is typically 5 × 5 or 7 × 7. The time step τ is chosen according to the convergence condition τ ≤ 1/(4µ), based on the Courant-Friedrichs-Lewy condition.
Increasing ε improves the smoothness of the contour but lowers the effectiveness of topology preservation, as it smooths out the repulsive force. In Figure 2, contour merging is prevented as the fingers of the hand remain separate. In the basic GAC model, the proximity of the contours would cause them to merge despite the detected edge, because merging reduces the total geodesic length. One practical application of the algorithm is adhesive cell segmentation. The centers of cells can be detected via k-means clustering or detector filters such as the circular Hough transform or the Laplacian of Gaussian. Since the topology is maintained, the number of cells remains the same. In Figure 4, the repulsive force prevents cell contours from merging and separates the adhesive cells. The algorithm can also be extended to 3D, as seen in Figure 5.
Conclusions
By introducing an intermediate variable and the Bregman iterative parameter, the Self-Repelling Snake model can be solved with the Split Bregman method. The new solution bypasses the frequent re-initialization of the signed distance function, simplifies computation, and reduces the memory requirement. The performance of the Split Bregman solution is similar to that of the original solution: both merging and splitting of the segmentation contour are prevented and the topology is preserved.
So far we have learned many secrets of water revealed by Dr. Mantena Satyanarayana Raju through my previous updates on water. I haven't seen anyone apart from Dr. Raju who has emphasized so much on drinking 5-6 liters of water a day. Many of you might have heard from your elders that drinking water while eating is harmful. Still, we hardly care about their words of wisdom. Let us now try to understand the harm in drinking water while eating, in the words of Dr. Mantena Satyanarayana Raju.
Many of us have the unnatural habit of drinking water while eating. When one settles down for a meal, he invariably places a glass or pitcher of water beside him. People drink water while eating in many ways.
First type: Some people take a glass or two of water and then begin their meal assuming that it reduces their hunger and consequently reduces their body weight.
Second type: Some people take water frequently to facilitate easy gulping down of food and to stop hiccups.
Third type: Some consume one or two glasses of water after eating, believing that ¾ food and ¼ water helps digestion better.
Fourth type: Some drink water half an hour after their meals, as they find it inconvenient to drink water during and immediately after meals.
Fifth type: Many people drink water while eating because they feel thirsty and their throat dries out. There are three reasons for this. Firstly, they do not drink enough water (5-6 liters a day) the rest of the time, as per Dr. Raju's recommendations. Secondly, people, especially in India, consume plenty of spices and masalas in their food. Thirdly, people don't chew their food properly. If you drink 2-4 glasses of water half an hour before your food, your thirst is quenched and you don't feel thirsty while eating. Spices and masalas obviously make the food tasty, but they lead to many gastric and intestinal disorders. Moreover, they create heat in the body, increasing the body temperature, which urges you to drink water. If you avoid these spices and masalas, you don't need any water while eating. If you chew your food properly, you don't need water to gulp down your food.
Nobody has the time or inclination to find out the harm of drinking water while eating. Even doctors, who spend several years of their life studying the human body, fail to advise people about the harm in drinking water while eating food. Many of us are educated yet still ignorant of nature. Laymen and doctors equally suffer from digestive problems and take medicines. We actually do not want to learn and take precautions because we have medicines for every ailment. Many of us are aware that even a medicine is going to harm our body, because a medicine is, after all, a chemical in itself. Still we prefer to suffer and take medicines. It might give you temporary relief, but at the cost of some problem in the long run. Dr. Raju says we must educate ourselves and our children about the harm of drinking water while eating and overcome all sorts of problems arising out of it. According to him, when we drink water while eating, the difficulty in the mouth is one thing and that in the intestines is another. To understand his claim clearly, one should know (1) the relationship of water and digestion in the mouth, (2) the relationship of water and digestion in the stomach, and (3) the relationship of water and digestion in the intestines.
1. The Relationship of Water and Digestion in the Mouth: The first phase of the digestion process starts in the mouth. The food we eat should get 20%-25% ready for digestion in the mouth itself and then enter the stomach. The teeth accomplish this task. But how many of us have the practice and time to chew our food properly? From a young boy to an aged person, everyone is in a hurry. None of us has time to eat our meal peacefully. Everyone is in a rush, wants to finish his meal in a minute or ten, and rushes out. Has anyone thought about the consequences of such eating? Here are a few consequences of eating food in a hurry without proper chewing.
(a) It becomes really hard to gulp food down into the stomach without chewing it properly. To overcome this, we reach for a glass of water. When we eat food or pickles, they sometimes get obstructed in the esophagus and induce hiccups. To avoid hiccups, we again reach out for water. Some people have the habit of eating food while watching television, reading newspapers, talking to companions, or straying in thoughts. Since the concentration is not on the food, it leads to overeating and subsequently the eruption of gases in the stomach, causing belching. To stop it, we drink water. These are the common mistakes we commit, and they don't end here; we pass them on to our children too.
(b) When we chew food thoroughly, the required saliva is produced and the food moves into the stomach easily without any obstruction. The saliva substitutes for the function of water, as saliva contains 98% water and 2% digestive enzymes. Saliva helps in the proper digestion of food. Saliva also kills or injures certain kinds of bacteria found in the food we eat. When we drink water while eating, less saliva is produced. Due to the reduced saliva and increased water intake, the digestive process is hampered. There is no way out but to commit a second mistake: the first mistake takes place in the mouth and the second in the stomach. If we chew food properly, the saliva mixes well with the food, which in turn enables the food to move freely into the stomach. Saliva not only makes the digestion process easy but also complete.
2. The Relationship of Water and Digestion in the Stomach: The second phase of the digestion process begins in the stomach. After swallowing, the chewed food reaches the stomach. Several digestive enzymes and gastric juices are produced there for digesting the food we eat. Among these is the powerful hydrochloric acid, which plays a key role in the digestion of our food. Along with it there are several other enzymes which help in the completion of the digestive process. Hydrochloric acid kills certain harmful bacteria left over in the food. The food is usually digested within two hours when these acids are in concentrated form. Their strength is reduced when water mixes with them. Let us see what happens in the stomach when we drink water while eating.
Acids Weaken: The powerful acids in our stomach become mild and lose their potency if we mix water with food. The mild acid fails to digest food as effectively as the strong one. The stomach is forced to produce double the amount of acid to digest the food. The increase in the volume of digestive gases leads to a burning sensation and the formation of ulcers in the stomach. The stomach gets habituated to producing excess acid, which is a very unhealthy practice. How many of us think about the inner difficulties of our body? Hardly any! We take care of every minute detail of our external body, from our hair to our nails. If you begin to take care of your inner organs, you are going to lead a healthy and happy life free of diseases.
Non-digestion of Food: When we drink water while eating, it goes into the stomach along with the food. It obstructs the digestive juices from reaching the food directly. The food breaks into small pieces with the help of digestive juices and takes a semi-liquid form. However, when we take water while eating, the food breaks into pieces but cannot be digested completely. The process of digestion takes more time. The weight of the water is also greater, and the natural movements of the stomach slow down. This is also one of the reasons for the delay in digestion. With this, the digestion process takes double the usual time.
Sagging Stomach: Generally, two glasses of water take 15 minutes to mix into the blood. However, when we take water along with food, it remains in the stomach until the food takes liquid form after digestion, which may take two to four hours. The stomach, which bears the weight of the food and water, sags. As food and water remain in the stomach for such a long time, the food ferments, which is the reason for belching. The belly protrudes to some extent even among lean people.
Heaviness in the Stomach: As the water and food remain in the stomach for a long period, the food ferments and gases are produced. The stomach becomes a tight compartment. The heavy stomach puts pressure on the lower part of the lungs, and the lungs contract by 20%-30%. As a result, you find difficulty and uneasiness while breathing. The diaphragm, an important part of the breathing process, comes under pressure and its movements are restricted. Likewise, the muscles of our stomach which help respiration lose their capacity to expand and contract. The fermented gases, moving neither up nor down, push the stomach forward, leading to uneasiness and restlessness. In some cases, it may even result in chest pain, which is often mistaken for heart pain by many.
Drowsiness: The oxygen we inhale, to a large extent, reaches the stomach and helps digestion. As long as the oxygen is in the stomach, we feel drowsy. As the food remains in the stomach for a long period, the lungs receive less oxygen and we become dull, drowsy, and weak. That is the reason why many people yawn continuously after food; some prefer to go for a nap, while others sleep on chairs in a sitting posture. Schoolchildren and employees in offices often resort to this practice. If we eat food without drinking water, we don't experience this drowsiness because the food gets digested quickly.
3. The Relationship of Water and Digestion in the Intestines: The third and final phase of digestion takes place in the small intestines. The food that was digested in the above two stages reaches the small intestines for the final process. The food digested in this part is absorbed by the intestines in the form of a semi-juice. When food is taken along with water, only partially digested food goes into the small intestines. The proteins and fats in the food are digested to some extent in the stomach, whereas the rest are digested in the small intestines by the digestive enzymes and juices produced at the junction where the intestines join the stomach. Since the food is not digested properly, more digestive juices are produced for digestion. Here the food is pounded well, turns into liquid form, and moves forward. Where there is water, these movements are restricted. In this way, the process of digestion in the small intestines is also delayed, as in the case of the stomach. The intestines absorb the food that is not digested properly. As the food remains in the intestines along with water, the food ferments and gases are produced. The intestines as such have some amount of gases in them. With the movements of the intestines, gas tries to come out, and we can even hear some sounds. Two hours after we take food, the digested food is ready to mix into the blood. It is more beneficial if we drink water now: if water is drunk during this period, it mixes well with the digested food and is easily absorbed into the blood.
When the food is not digested properly, the body absorbs juices that are not digested completely. Millions of cells die and millions of new cells are formed in the body. The formation of new cells depends on the energy and essence of the food we eat. When the food is not digested properly, the intestines cannot absorb its full benefit; almost 50% of the essence of the food we eat is purged out. The food we take initially becomes a paste and then a juice, popularly known as glucose. The glucose is absorbed into the blood and cells, and the body receives energy. If the food does not become a paste and is not converted to the juice called glucose, we feel weak and feeble. The cells remain healthy and last long only when they receive properly digested food and nutrients.
The moment I learned about the harm in drinking water while eating food, I stopped drinking water while eating. I have been following this practice for almost three years now, and believe me, it's a wonderful feeling which can only be experienced and felt. It's really difficult to explain the feeling of being healthy. I recommend that readers give it a try and feel the pleasure of Dr. Raju's principle of drinking water. You have to drink 2-4 glasses of water half an hour before you eat, every time. This will quench your thirst, and you will never feel thirsty while eating. Drink again two hours after having your food; by this time most of your digestion is complete, and it won't take much time to quench your thirst. I have shared this secret of drinking water with many of my friends, and many have given a positive response. In fact, most followers of Dr. Raju's Natural Way of Living begin their journey onto this path with the water therapy recommended by Dr. M.S. Raju.
Of late, I have come across a few comments from readers favoring the practice of drinking water while eating. Instead of striking off their point of view immediately, I tried to research their claim. I went through various books on nutrition, googled on the internet, and gathered information from people who follow this path as well as from those who don't favor it. I came across a very interesting observation during my research. I noticed that almost all people, including doctors and intellectuals, from the West believe that drinking water while eating aids digestion. Contrary to that, people in the East, especially in Asia, believe that drinking water while eating hinders digestion. All Ayurvedic and naturopathy doctors claim drinking water while eating is harmful. Allopathic doctors, however, supported the Western claim. That is understandable, because they believe only what they have read and studied in their books. I knew that there must be some truth in both claims. Being a sadhaka of Yoga, I never believe in mere beliefs. Yoga teaches you to experiment on your body with intelligence, to experience the effects of an experiment. Since I was following the practice of not drinking water while eating, I already knew the benefits of doing so. Therefore I decided to drink water while eating. Not to my surprise, I found that it was hindering my digestion, and I got all the symptoms mentioned above. Still, I wanted to be very sure before making any claim, so I continued my research further. On reading some of the ancient yoga texts, I found that there are two reasons for these contradictory views among the people of the East and the West.
The first reason is the distinct nature and type of food, and the second is the climate. The customary Indian food is made up of ¾ solid and ¼ liquid. Asian people, especially Indians, consume plenty of fruits and cooked food like rice, wheat, and jowar (solid) with a vast variety of vegetables, dals (lentils), sambar, rasam, yogurt, etc. (semi-solid and liquid). The food was so scientifically designed that you don't need any additional water while eating. Moreover, you are asked to drink 2-4 glasses of water half an hour before food. This water is quickly absorbed into the blood, quenching your thirst and at the same time wetting the intestines, aiding the food to move freely during and after digestion. Secondly, you drink 5-6 liters of water throughout the day, leaving a gap of half an hour before and two hours after eating food. Since Indian food already contains enough water and the modern generation is ignorantly drinking more water while eating, people like Dr. M.S. Raju have stepped up to take us back to our roots by preaching these age-old secrets of good health.
According to the Chandogya Upanishad, an ancient yoga text, the solid foods, fluids, and fats which fuel the body are each split into sixteen parts on consumption. The gross part of food becomes faeces, the medium part becomes flesh, and the subtlest becomes the mind, in the ratio of 10/16, 5/16, and 1/16 respectively. The grossest part of fluids becomes urine, the medium becomes blood, and the subtlest becomes prana (energy). Similarly, the grossest part of fats becomes bone, the medium becomes marrow, and the subtlest becomes speech. Unlike the findings of modern scientists, these are not based on animal experiments; they are purely based on personal experiments and experiences. A yogi named Shvetaketu lived on fluids for fifteen days and lost his power of thinking, but regained it as soon as he ate solid food. His power of speech diminished when he went without fats. This experience revealed to him that the mind is the product of food, energy of fluids, and speech of fats.
The customary Indian food is based on some of these teachings of our ancestors. Some of my friends on Facebook from China, Vietnam, Taiwan, and Singapore mentioned a few similar facts.
Contrary to this, Western food is totally different. Unlike Indian food, Western food usually comprises baked and dry foods like meat, pizza, burgers, and sausages, served with salads, cream, ketchup, and sauces which contain little or almost no water. Secondly, due to the cold weather conditions, it is not possible for them to drink 5-6 liters of water a day, and I suppose 3 liters a day is more than enough for them. Plenty of Dr. M.S. Raju's followers in the West are able to follow his way of drinking water successfully because they make their food in the Indian style even in America or any other Western country. I am not yet sure whether this is the exact reason why people in the West believe that drinking water while eating helps digestion; I leave this up to the people who live there to experiment and experience for themselves. As far as Indians are concerned, they are enjoying a refined state of health after following the practice of avoiding water while eating.
Next time, hope to come up with another interesting topic on Natural Way of Living. Till then keep watching for updates…! |
package run.yuyang.trotsky.service;
import run.yuyang.trotsky.model.conf.IntroConf;
import run.yuyang.trotsky.service.base.SerializableService;
public interface IntroService extends SerializableService {

    /** Returns true if an intro with the given name exists. */
    boolean exist(String name);

    /** Returns the intro configuration for the given name. */
    IntroConf getIntro(String name);

    /** Removes the intro from the in-memory configuration. */
    boolean delIntro(String name);

    /** Removes the intro and persists the change. */
    boolean delIntroAndSave(String name);

    /** Adds the intro to the in-memory configuration. */
    boolean addIntro(IntroConf conf);

    /** Adds the intro and persists the change. */
    boolean addIntroAndSave(IntroConf conf);

    /** Returns true if the backing note exists both in the index and on disk (assumed semantics). */
    boolean existNoteAndDisk(String name);
}
|
package org.realityforge.bazel.depgen.model;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.realityforge.bazel.depgen.AbstractTest;
import org.realityforge.bazel.depgen.config.ArtifactConfig;
import org.realityforge.bazel.depgen.config.JavaConfig;
import org.realityforge.bazel.depgen.config.Nature;
import org.testng.annotations.Test;
import static org.testng.Assert.*;
public class ArtifactModelTest
extends AbstractTest
{
@Test
public void parseArtifactWith2PartCoord()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.isVersioned() );
assertNull( model.getVersion() );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp" );
}
@Test
public void parseArtifactWith3PartCoord()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:1.0" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.isVersioned() );
assertEquals( model.getVersion(), "1.0" );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp:jar:1.0" );
}
@Test
public void parseArtifactWith4PartCoord()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:jszip:1.0" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.isVersioned() );
assertEquals( model.getVersion(), "1.0" );
assertEquals( model.getType(), "jszip" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp:jszip:1.0" );
}
@Test
public void parseArtifactWith5PartCoord()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:jszip:sources:1.0" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.isVersioned() );
assertEquals( model.getVersion(), "1.0" );
assertEquals( model.getType(), "jszip" );
assertEquals( model.getClassifier(), "sources" );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp:jszip:sources:1.0" );
}
@Test
public void parseArtifactWith1PartCoord()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example" );
final InvalidModelException exception =
expectThrows( InvalidModelException.class, () -> ArtifactModel.parse( source ) );
assertEquals( exception.getMessage(),
"The 'coord' property on the dependency must specify 2-5 components separated by the ':' character. The 'coords' must be in one of the forms; 'group:id', 'group:id:version', 'group:id:type:version' or 'group:id:type:classifier:version'." );
assertEquals( exception.getModel(), source );
}
@Test
public void parseArtifactWith6PartCoord()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:jszip:sources:1.0:compile" );
final InvalidModelException exception =
expectThrows( InvalidModelException.class, () -> ArtifactModel.parse( source ) );
assertEquals( exception.getMessage(),
"The 'coord' property on the dependency must specify 2-5 components separated by the ':' character. The 'coords' must be in one of the forms; 'group:id', 'group:id:version', 'group:id:type:version' or 'group:id:type:classifier:version'." );
assertEquals( exception.getModel(), source );
}
@Test
public void parseArtifactWithIncludeOptional()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setIncludeOptional( true );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.isVersioned() );
assertNull( model.getVersion() );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertTrue( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
}
@Test
public void parseArtifactWithExportDeps()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
final JavaConfig java = new JavaConfig();
java.setExportDeps( true );
source.setJava( java );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertEquals( model.getType(), "jar" );
assertTrue( model.exportDeps( false ) );
}
@Test
public void parseArtifactWithExportDeps_overrideGlobal()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
final JavaConfig java = new JavaConfig();
java.setExportDeps( false );
source.setJava( java );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertEquals( model.getType(), "jar" );
assertFalse( model.exportDeps( true ) );
}
@Test
public void parseArtifactWithNature()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setNatures( Collections.singletonList( Nature.Plugin ) );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertEquals( model.getNatures( Nature.Java ), Collections.singletonList( Nature.Plugin ) );
}
@Test
public void parseArtifactWithNoIncludeSource()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setIncludeSource( false );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.isVersioned() );
assertNull( model.getVersion() );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertFalse( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
}
@Test
public void parseArtifactWithIncludeExternalAnnotations()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setIncludeExternalAnnotations( true );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.includeExternalAnnotations( false ) );
}
@Test
public void parseArtifactWithNoIncludeExternalAnnotations()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setIncludeExternalAnnotations( false );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.includeExternalAnnotations( true ) );
}
@Test
public void parseArtifactWithCoordAndExcludes()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setExcludes( Arrays.asList( "com.biz.db", "com.biz.ui:core-ui" ) );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.isVersioned() );
assertNull( model.getVersion() );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
final List<ExcludeModel> excludes = model.getExcludes();
assertFalse( excludes.isEmpty() );
assertEquals( excludes.size(), 2 );
final ExcludeModel exclude1 = excludes.get( 0 );
assertEquals( exclude1.getGroup(), "com.biz.db" );
assertNull( exclude1.getId() );
final ExcludeModel exclude2 = excludes.get( 1 );
assertEquals( exclude2.getGroup(), "com.biz.ui" );
assertEquals( exclude2.getId(), "core-ui" );
assertEquals( model.toCoord(), "com.example:myapp" );
}
@Test
public void parseArtifactWithVisibility()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setVisibility( Arrays.asList( "//project:__subpackages__", "//other:__subpackages__" ) );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.isVersioned() );
assertNull( model.getVersion() );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertEquals( model.getVisibility(), Arrays.asList( "//project:__subpackages__", "//other:__subpackages__" ) );
}
@Test
public void parseArtifactWithNaturesSpecified()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
source.setNatures( Arrays.asList( Nature.Java, Nature.J2cl ) );
assertEquals( ArtifactModel.parse( source ).getNatures( Nature.Java ),
Arrays.asList( Nature.Java, Nature.J2cl ) );
}
@Test
public void parseArtifactWithNaturesNotSpecified()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
assertEquals( ArtifactModel.parse( source ).getNatures( Nature.Java ), Collections.singletonList( Nature.Java ) );
}
@Test
public void parseArtifactWithSpec2Elements()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertFalse( model.isVersioned() );
assertNull( model.getVersion() );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp" );
}
@Test
public void parseArtifactWithSpec3Elements()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:1.0" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.isVersioned() );
assertEquals( model.getVersion(), "1.0" );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp:jar:1.0" );
}
@Test
public void parseArtifactWithSpec4Elements()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:jar:1.0" );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.isVersioned() );
assertEquals( model.getVersion(), "1.0" );
assertEquals( model.getType(), "jar" );
assertNull( model.getClassifier() );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
assertTrue( model.getExcludes().isEmpty() );
assertTrue( model.getVisibility().isEmpty() );
assertEquals( model.toCoord(), "com.example:myapp:jar:1.0" );
}
@Test
public void parseArtifactWithAllSpecElementsAndExcludes()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp:jszip:sources:1.0" );
source.setExcludes( Arrays.asList( "com.biz.db", "com.biz.ui:core-ui" ) );
final ArtifactModel model = ArtifactModel.parse( source );
assertEquals( model.getSource(), source );
assertEquals( model.getGroup(), "com.example" );
assertEquals( model.getId(), "myapp" );
assertTrue( model.isVersioned() );
assertEquals( model.getVersion(), "1.0" );
assertEquals( model.getType(), "jszip" );
assertEquals( model.getClassifier(), "sources" );
assertFalse( model.includeOptional() );
assertTrue( model.includeSource( true ) );
assertFalse( model.exportDeps( false ) );
final List<ExcludeModel> excludes = model.getExcludes();
assertFalse( excludes.isEmpty() );
assertEquals( excludes.size(), 2 );
final ExcludeModel exclude1 = excludes.get( 0 );
assertEquals( exclude1.getGroup(), "com.biz.db" );
assertNull( exclude1.getId() );
final ExcludeModel exclude2 = excludes.get( 1 );
assertEquals( exclude2.getGroup(), "com.biz.ui" );
assertEquals( exclude2.getId(), "core-ui" );
assertEquals( model.toCoord(), "com.example:myapp:jszip:sources:1.0" );
}
@Test
public void parseArtifactMissingCoord()
{
final ArtifactConfig source = new ArtifactConfig();
final InvalidModelException exception =
expectThrows( InvalidModelException.class, () -> ArtifactModel.parse( source ) );
assertEquals( exception.getMessage(), "The dependency must specify the 'coord' property." );
assertEquals( exception.getModel(), source );
}
@Test
public void getRepositories()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
final List<String> repositories = Arrays.asList( "local", "central" );
source.setRepositories( repositories );
assertEquals( ArtifactModel.parse( source ).getRepositories(), repositories );
}
@Test
public void getRepositories_defaultValue()
{
final ArtifactConfig source = new ArtifactConfig();
source.setCoord( "com.example:myapp" );
assertTrue( ArtifactModel.parse( source ).getRepositories().isEmpty() );
}
}
|
/**
* Project Version State DTO object
*/
@ApiModel(description = "Project Version State DTO object")
@javax.annotation.Generated(value = "io.swagger.codegen.languages.JavaClientCodegen", date = "2018-07-09T13:54:27.094-07:00")
public class ProjectVersionState {
@SerializedName("analysisResultsExist")
private Boolean analysisResultsExist = null;
@SerializedName("analysisUploadEnabled")
private Boolean analysisUploadEnabled = null;
@SerializedName("attentionRequired")
private Boolean attentionRequired = null;
@SerializedName("auditEnabled")
private Boolean auditEnabled = null;
@SerializedName("batchBugSubmissionExists")
private Boolean batchBugSubmissionExists = null;
@SerializedName("committed")
private Boolean committed = null;
@SerializedName("criticalPriorityIssueCountDelta")
private Integer criticalPriorityIssueCountDelta = null;
@SerializedName("deltaPeriod")
private Integer deltaPeriod = null;
@SerializedName("extraMessage")
private String extraMessage = null;
@SerializedName("hasCustomIssues")
private Boolean hasCustomIssues = null;
@SerializedName("id")
private Long id = null;
@SerializedName("issueCountDelta")
private Integer issueCountDelta = null;
@SerializedName("lastFprUploadDate")
private OffsetDateTime lastFprUploadDate = null;
@SerializedName("metricEvaluationDate")
private OffsetDateTime metricEvaluationDate = null;
@SerializedName("percentAuditedDelta")
private Float percentAuditedDelta = null;
@SerializedName("percentCriticalPriorityIssuesAuditedDelta")
private Float percentCriticalPriorityIssuesAuditedDelta = null;
public ProjectVersionState analysisResultsExist(Boolean analysisResultsExist) {
this.analysisResultsExist = analysisResultsExist;
return this;
}
/**
* Get analysisResultsExist
* @return analysisResultsExist
**/
@ApiModelProperty(example = "false", required = true, value = "")
public Boolean isAnalysisResultsExist() {
return analysisResultsExist;
}
public void setAnalysisResultsExist(Boolean analysisResultsExist) {
this.analysisResultsExist = analysisResultsExist;
}
public ProjectVersionState analysisUploadEnabled(Boolean analysisUploadEnabled) {
this.analysisUploadEnabled = analysisUploadEnabled;
return this;
}
/**
* Get analysisUploadEnabled
* @return analysisUploadEnabled
**/
@ApiModelProperty(example = "false", required = true, value = "")
public Boolean isAnalysisUploadEnabled() {
return analysisUploadEnabled;
}
public void setAnalysisUploadEnabled(Boolean analysisUploadEnabled) {
this.analysisUploadEnabled = analysisUploadEnabled;
}
public ProjectVersionState attentionRequired(Boolean attentionRequired) {
this.attentionRequired = attentionRequired;
return this;
}
/**
* Get attentionRequired
* @return attentionRequired
**/
@ApiModelProperty(example = "false", required = true, value = "")
public Boolean isAttentionRequired() {
return attentionRequired;
}
public void setAttentionRequired(Boolean attentionRequired) {
this.attentionRequired = attentionRequired;
}
public ProjectVersionState auditEnabled(Boolean auditEnabled) {
this.auditEnabled = auditEnabled;
return this;
}
/**
* Get auditEnabled
* @return auditEnabled
**/
@ApiModelProperty(example = "false", required = true, value = "")
public Boolean isAuditEnabled() {
return auditEnabled;
}
public void setAuditEnabled(Boolean auditEnabled) {
this.auditEnabled = auditEnabled;
}
public ProjectVersionState batchBugSubmissionExists(Boolean batchBugSubmissionExists) {
this.batchBugSubmissionExists = batchBugSubmissionExists;
return this;
}
/**
* Get batchBugSubmissionExists
* @return batchBugSubmissionExists
**/
@ApiModelProperty(example = "false", required = true, value = "")
public Boolean isBatchBugSubmissionExists() {
return batchBugSubmissionExists;
}
public void setBatchBugSubmissionExists(Boolean batchBugSubmissionExists) {
this.batchBugSubmissionExists = batchBugSubmissionExists;
}
public ProjectVersionState committed(Boolean committed) {
this.committed = committed;
return this;
}
/**
* False if Project Version is in an incomplete state
* @return committed
**/
@ApiModelProperty(example = "false", required = true, value = "False if Project Version is in an incomplete state")
public Boolean isCommitted() {
return committed;
}
public void setCommitted(Boolean committed) {
this.committed = committed;
}
public ProjectVersionState criticalPriorityIssueCountDelta(Integer criticalPriorityIssueCountDelta) {
this.criticalPriorityIssueCountDelta = criticalPriorityIssueCountDelta;
return this;
}
/**
* Get criticalPriorityIssueCountDelta
* @return criticalPriorityIssueCountDelta
**/
@ApiModelProperty(required = true, value = "")
public Integer getCriticalPriorityIssueCountDelta() {
return criticalPriorityIssueCountDelta;
}
public void setCriticalPriorityIssueCountDelta(Integer criticalPriorityIssueCountDelta) {
this.criticalPriorityIssueCountDelta = criticalPriorityIssueCountDelta;
}
public ProjectVersionState deltaPeriod(Integer deltaPeriod) {
this.deltaPeriod = deltaPeriod;
return this;
}
/**
* Get deltaPeriod
* @return deltaPeriod
**/
@ApiModelProperty(required = true, value = "")
public Integer getDeltaPeriod() {
return deltaPeriod;
}
public void setDeltaPeriod(Integer deltaPeriod) {
this.deltaPeriod = deltaPeriod;
}
public ProjectVersionState extraMessage(String extraMessage) {
this.extraMessage = extraMessage;
return this;
}
/**
* Get extraMessage
* @return extraMessage
**/
@ApiModelProperty(required = true, value = "")
public String getExtraMessage() {
return extraMessage;
}
public void setExtraMessage(String extraMessage) {
this.extraMessage = extraMessage;
}
public ProjectVersionState hasCustomIssues(Boolean hasCustomIssues) {
this.hasCustomIssues = hasCustomIssues;
return this;
}
/**
* Get hasCustomIssues
* @return hasCustomIssues
**/
@ApiModelProperty(example = "false", required = true, value = "")
public Boolean isHasCustomIssues() {
return hasCustomIssues;
}
public void setHasCustomIssues(Boolean hasCustomIssues) {
this.hasCustomIssues = hasCustomIssues;
}
public ProjectVersionState id(Long id) {
this.id = id;
return this;
}
/**
* Get id
* @return id
**/
@ApiModelProperty(required = true, value = "")
public Long getId() {
return id;
}
public void setId(Long id) {
this.id = id;
}
public ProjectVersionState issueCountDelta(Integer issueCountDelta) {
this.issueCountDelta = issueCountDelta;
return this;
}
/**
* Get issueCountDelta
* @return issueCountDelta
**/
@ApiModelProperty(required = true, value = "")
public Integer getIssueCountDelta() {
return issueCountDelta;
}
public void setIssueCountDelta(Integer issueCountDelta) {
this.issueCountDelta = issueCountDelta;
}
public ProjectVersionState lastFprUploadDate(OffsetDateTime lastFprUploadDate) {
this.lastFprUploadDate = lastFprUploadDate;
return this;
}
/**
* Get lastFprUploadDate
* @return lastFprUploadDate
**/
@ApiModelProperty(required = true, value = "")
public OffsetDateTime getLastFprUploadDate() {
return lastFprUploadDate;
}
public void setLastFprUploadDate(OffsetDateTime lastFprUploadDate) {
this.lastFprUploadDate = lastFprUploadDate;
}
public ProjectVersionState metricEvaluationDate(OffsetDateTime metricEvaluationDate) {
this.metricEvaluationDate = metricEvaluationDate;
return this;
}
/**
* Get metricEvaluationDate
* @return metricEvaluationDate
**/
@ApiModelProperty(required = true, value = "")
public OffsetDateTime getMetricEvaluationDate() {
return metricEvaluationDate;
}
public void setMetricEvaluationDate(OffsetDateTime metricEvaluationDate) {
this.metricEvaluationDate = metricEvaluationDate;
}
public ProjectVersionState percentAuditedDelta(Float percentAuditedDelta) {
this.percentAuditedDelta = percentAuditedDelta;
return this;
}
/**
* Get percentAuditedDelta
* @return percentAuditedDelta
**/
@ApiModelProperty(required = true, value = "")
public Float getPercentAuditedDelta() {
return percentAuditedDelta;
}
public void setPercentAuditedDelta(Float percentAuditedDelta) {
this.percentAuditedDelta = percentAuditedDelta;
}
public ProjectVersionState percentCriticalPriorityIssuesAuditedDelta(Float percentCriticalPriorityIssuesAuditedDelta) {
this.percentCriticalPriorityIssuesAuditedDelta = percentCriticalPriorityIssuesAuditedDelta;
return this;
}
/**
* Get percentCriticalPriorityIssuesAuditedDelta
* @return percentCriticalPriorityIssuesAuditedDelta
**/
@ApiModelProperty(required = true, value = "")
public Float getPercentCriticalPriorityIssuesAuditedDelta() {
return percentCriticalPriorityIssuesAuditedDelta;
}
public void setPercentCriticalPriorityIssuesAuditedDelta(Float percentCriticalPriorityIssuesAuditedDelta) {
this.percentCriticalPriorityIssuesAuditedDelta = percentCriticalPriorityIssuesAuditedDelta;
}
@Override
public boolean equals(java.lang.Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
ProjectVersionState projectVersionState = (ProjectVersionState) o;
return Objects.equals(this.analysisResultsExist, projectVersionState.analysisResultsExist) &&
Objects.equals(this.analysisUploadEnabled, projectVersionState.analysisUploadEnabled) &&
Objects.equals(this.attentionRequired, projectVersionState.attentionRequired) &&
Objects.equals(this.auditEnabled, projectVersionState.auditEnabled) &&
Objects.equals(this.batchBugSubmissionExists, projectVersionState.batchBugSubmissionExists) &&
Objects.equals(this.committed, projectVersionState.committed) &&
Objects.equals(this.criticalPriorityIssueCountDelta, projectVersionState.criticalPriorityIssueCountDelta) &&
Objects.equals(this.deltaPeriod, projectVersionState.deltaPeriod) &&
Objects.equals(this.extraMessage, projectVersionState.extraMessage) &&
Objects.equals(this.hasCustomIssues, projectVersionState.hasCustomIssues) &&
Objects.equals(this.id, projectVersionState.id) &&
Objects.equals(this.issueCountDelta, projectVersionState.issueCountDelta) &&
Objects.equals(this.lastFprUploadDate, projectVersionState.lastFprUploadDate) &&
Objects.equals(this.metricEvaluationDate, projectVersionState.metricEvaluationDate) &&
Objects.equals(this.percentAuditedDelta, projectVersionState.percentAuditedDelta) &&
Objects.equals(this.percentCriticalPriorityIssuesAuditedDelta, projectVersionState.percentCriticalPriorityIssuesAuditedDelta);
}
@Override
public int hashCode() {
return Objects.hash(analysisResultsExist, analysisUploadEnabled, attentionRequired, auditEnabled, batchBugSubmissionExists, committed, criticalPriorityIssueCountDelta, deltaPeriod, extraMessage, hasCustomIssues, id, issueCountDelta, lastFprUploadDate, metricEvaluationDate, percentAuditedDelta, percentCriticalPriorityIssuesAuditedDelta);
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append("class ProjectVersionState {\n");
sb.append(" analysisResultsExist: ").append(toIndentedString(analysisResultsExist)).append("\n");
sb.append(" analysisUploadEnabled: ").append(toIndentedString(analysisUploadEnabled)).append("\n");
sb.append(" attentionRequired: ").append(toIndentedString(attentionRequired)).append("\n");
sb.append(" auditEnabled: ").append(toIndentedString(auditEnabled)).append("\n");
sb.append(" batchBugSubmissionExists: ").append(toIndentedString(batchBugSubmissionExists)).append("\n");
sb.append(" committed: ").append(toIndentedString(committed)).append("\n");
sb.append(" criticalPriorityIssueCountDelta: ").append(toIndentedString(criticalPriorityIssueCountDelta)).append("\n");
sb.append(" deltaPeriod: ").append(toIndentedString(deltaPeriod)).append("\n");
sb.append(" extraMessage: ").append(toIndentedString(extraMessage)).append("\n");
sb.append(" hasCustomIssues: ").append(toIndentedString(hasCustomIssues)).append("\n");
sb.append(" id: ").append(toIndentedString(id)).append("\n");
sb.append(" issueCountDelta: ").append(toIndentedString(issueCountDelta)).append("\n");
sb.append(" lastFprUploadDate: ").append(toIndentedString(lastFprUploadDate)).append("\n");
sb.append(" metricEvaluationDate: ").append(toIndentedString(metricEvaluationDate)).append("\n");
sb.append(" percentAuditedDelta: ").append(toIndentedString(percentAuditedDelta)).append("\n");
sb.append(" percentCriticalPriorityIssuesAuditedDelta: ").append(toIndentedString(percentCriticalPriorityIssuesAuditedDelta)).append("\n");
sb.append("}");
return sb.toString();
}
/**
* Convert the given object to string with each line indented by 4 spaces
* (except the first line).
*/
private String toIndentedString(java.lang.Object o) {
if (o == null) {
return "null";
}
return o.toString().replace("\n", "\n ");
}
} |
package io.ray.streaming.runtime.worker.tasks;
import io.ray.streaming.runtime.core.processor.Processor;
import io.ray.streaming.runtime.worker.JobWorker;
/**
* Input stream task with 1 input. Such as: map operator.
*/
public class OneInputStreamTask extends InputStreamTask {
public OneInputStreamTask(int taskId, Processor inputProcessor, JobWorker jobWorker) {
super(taskId, inputProcessor, jobWorker);
}
}
|
// Copyright 2014 The orujo Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Package basic implements basic auth mechanisms for orujo.
*/
package basic
import (
"crypto/sha256"
"crypto/subtle"
"fmt"
"net/http"
"github.com/jroimartin/orujo"
)
// A BasicHandler is an orujo built-in handler that provides
// basic authentication.
type BasicHandler struct {
realm string
username string
password string
// ErrorMsg can be used to set a custom error message. The parameter
// provuser contains the username provided by the user.
ErrorMsg func(w http.ResponseWriter, provuser string)
}
// NewBasicHandler returns a new BasicHandler.
func NewBasicHandler(realm, username, password string) BasicHandler {
return BasicHandler{
realm: realm,
username: username,
password: password,
ErrorMsg: defaultErrorMsg,
}
}
// ServeHTTP validates the username and password provided by the user.
func (h BasicHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
isValid, provUser := h.isValidAuth(r)
if isValid {
return
}
w.Header().Set("WWW-Authenticate", "Basic realm=\""+h.realm+"\"")
w.WriteHeader(http.StatusUnauthorized)
h.ErrorMsg(w, provUser)
	err := fmt.Errorf("invalid username or password (username: %s)", provUser)
	orujo.RegisterError(w, err)
}
// defaultErrorMsg writes the default unauthorized response to the
// http.ResponseWriter
func defaultErrorMsg(w http.ResponseWriter, provuser string) {
fmt.Fprintln(w, http.StatusText(http.StatusUnauthorized))
}
func (h BasicHandler) isValidAuth(r *http.Request) (valid bool, username string) {
provUser, provPass, ok := r.BasicAuth()
if !ok {
return false, "unknown"
}
provUserSha256 := sha256.Sum256([]byte(provUser))
userSha256 := sha256.Sum256([]byte(h.username))
validUser := subtle.ConstantTimeCompare(provUserSha256[:], userSha256[:]) == 1
provPassSha256 := sha256.Sum256([]byte(provPass))
passSha256 := sha256.Sum256([]byte(h.password))
validPass := subtle.ConstantTimeCompare(provPassSha256[:], passSha256[:]) == 1
return validUser && validPass, provUser
}
|
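The hash-then-compare pattern used in `isValidAuth` above can be sketched in Python (a minimal illustration, not part of the orujo package; the function name `check_basic_auth` is hypothetical). Hashing both sides to fixed-length SHA-256 digests before the constant-time comparison means the comparison time does not depend on the length of the stored secret:

```python
import hashlib
import hmac

def check_basic_auth(prov_user: str, prov_pass: str,
                     user: str, password: str) -> bool:
    """Compare provided credentials against stored ones in constant time.

    Both inputs are hashed to fixed-length digests first, so neither the
    comparison time nor a length mismatch leaks information about the
    stored credentials.
    """
    def digest(s: str) -> bytes:
        return hashlib.sha256(s.encode()).digest()

    valid_user = hmac.compare_digest(digest(prov_user), digest(user))
    valid_pass = hmac.compare_digest(digest(prov_pass), digest(password))
    return valid_user and valid_pass
```

Note that both comparisons are always evaluated, mirroring the Go code, so a wrong username does not short-circuit and reveal itself through timing.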
Transient hot-strip (THS) method for measuring thermal conductivity of thermally insulating materials
The fast and precise transient hot-strip (THS) method is well suited for thermal conductivity measurements on solid materials. The THS method may, however, give large experimental errors when applied to thermally insulating materials of low heat capacity per unit volume. The present work describes models to deal with those potential error sources, together with precautions to be taken in order to minimize them. Measurements of the thermal conductivity of a styrofoam insulating material (thermal conductivity 0.036 W m−1K−1, density 25.4 kg m−3) were performed to verify the models. The result obtained is in good agreement with the standard hot plate method, indicating that the THS method is also well suited for thermal conductivity measurements of thermal insulators.
N = int(input())
# N is printable as 'Yes' iff N = 4*i + 7*v for some non-negative i, v
# (i <= 25 and v <= 14 suffice for 1 <= N <= 100).
if 1 <= N <= 100 and any(4 * i + 7 * v == N
                         for i in range(25)
                         for v in range(14)):
    print('Yes')
else:
    print('No')
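The brute-force scan above can be factored into a reusable helper that iterates over only one coefficient; this is a sketch, and the name `is_representable` is our own:

```python
def is_representable(n: int) -> bool:
    """Return True if n = 4*i + 7*v for some non-negative integers i, v.

    For each candidate v, the remainder n - 7*v must be a non-negative
    multiple of 4.
    """
    return any((n - 7 * v) >= 0 and (n - 7 * v) % 4 == 0
               for v in range(n // 7 + 1))
```

For example, `is_representable(11)` is true (4 + 7), while `is_representable(6)` is false.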
//returns: false if the graph has cycles
public boolean isGraphReady() {
Initializer init = new Initializer(root, this);
this.vertices = init.initialize();
return !cycleWarning;
} |
def allocateSpace(self, nSteps):
self.listOfTimePositionAmplitudes = []
for i in range(self.dim):
self.listOfTimePositionAmplitudes.append(self.mySpace.functionSpacetimeZero(nSteps)) |
def inline_block_locals(comp, variable_names=None):
symbol_tree = transformation_utils.SymbolTree(
transformation_utils.ReferenceCounter)
transform_spec = InlineBlock(comp, variable_names)
return transformation_utils.transform_postorder_with_symbol_bindings(
comp, transform_spec.transform, symbol_tree) |
def model_evaluation(model_name, fc, bac_test, rcc_test, bac_train, rcc_train, ft, st):
evaluation['model'].append(model_name)
evaluation['feature_count'].append(fc)
evaluation['BAC_test'].append(bac_test)
evaluation['Recall_test'].append(rcc_test)
evaluation['BAC_train'].append(bac_train)
evaluation['Recall_train'].append(rcc_train)
evaluation['Fit_time'].append(ft)
evaluation['Score_time'].append(st)
df_eval = pd.DataFrame({'model_name': evaluation['model'],
'feature_count': evaluation['feature_count'],
'Balanced_Accuracy_test': evaluation['BAC_test'],
'Recall_test': evaluation['Recall_test'],
'Balanced_Accuracy_train': evaluation['BAC_train'],
'Recall_train': evaluation['Recall_train'],
'Fit_time': evaluation['Fit_time'],
'Score_time': evaluation['Score_time']
})
return df_eval.sort_values(by='Balanced_Accuracy_test', ascending=False).round(3) |
Dysnatremia in Gastrointestinal Disorders
The primary solutes of the milieu intérieur are sodium and its accompanying anions. The solvent is water. The kidneys acutely regulate homeostasis through filtration, secretion, and resorption of electrolytes, non-electrolytes, and minerals while balancing water retention and clearance. The gastrointestinal absorptive and secretory functions enable the food digestion and water absorption needed to sustain life. Gastrointestinal perturbations, including vomiting and diarrhea, can lead to significant volume and electrolyte losses, overwhelming the renal homeostatic compensatory mechanisms. Dysnatremia, potassium, and acid-base disturbances can result from gastrointestinal pathophysiologic processes. Understanding the renal and gastrointestinal contributions to homeostasis is important for the clinical evaluation of volume disturbances.
INTRODUCTION
The kidneys tightly regulate the amount of water and electrolytes in the human body to maintain a consistent internal fluid composition. They must adjust the balance of excreted water and urine electrolytes in stable conditions and when faced with extreme stress brought on by disease. However, large losses of water and electrolytes through the gastrointestinal tract can overwhelm these impressive homeostatic capabilities, producing dysnatremia, hypokalemia, and acid-base disturbances. Diarrhea and vomiting are the two most common gastrointestinal disturbances causing these homeostatic perturbations. In the following review, we delve into the underlying physiological complexities leading to these abnormalities.
NORMAL PHYSIOLOGY UNDER STEADY STATE
A firm grasp of normal gastrointestinal and renal physiology is necessary to understand the electrolyte disturbances caused by gastrointestinal disturbances.
Sodium and Potassium Distribution in the Body
The ionic compositions of the intracellular and extracellular compartments of the body differ vastly, with transmembrane gradients established by active transport. Sodium (Na + ) is the main cation in the extracellular space, exerting an effect on cell volume by influencing the movement of water. In other words, Na + exerts an osmotic effect when confined to the extracellular space, behaving as an effective osmole.
The sodium concentration measured in plasma water (145-155 mmol/L) is slightly higher than the plasma sodium concentration (137-142 mmol/L), since plasma is composed of both an aqueous phase (water) and a solid phase (proteins and lipids) (1,2). Sodium and other electrolytes are found in the water fraction of the plasma. Normally, the plasma contains 93% water and 7% solids (proteins and lipids); thus the normal range of the sodium concentration in plasma water is higher than the corresponding normal range of the plasma sodium concentration. Modern methods of plasma sodium measurement, including the ion-selective electrode, utilize plasma dilution to indirectly calculate the plasma sodium concentration under the assumption that water makes up 93% of plasma under normal conditions (2,3). Any reduction in the percentage of plasma water, such as addition of lipids or proteins, can lead to a falsely low measurement, yielding pseudohyponatremia.
Osmolality, in which solutes are measured in osmoles (Osm) per kilogram (kg) of solvent, is the preferred unit of concentration in biological fluids over osmolarity, in which solutes are measured in moles (mol) per liter (L) of solvent, a unit based on volume. In the clinical laboratory, osmolality is a directly measurable variable, whereas plasma osmolarity is often estimated from sodium, urea nitrogen, and glucose (and thus does not account for all osmoles). Therefore, osmolality (expressed as Osm/kg or mol/kg) is preferred over osmolarity (mol/L) when dealing with biological fluids. The addition of solutes to a solvent decreases its freezing point, a colligative property, making freezing-point depression useful for measuring plasma osmolality. Since plasma solute concentrations are small (mmol and mOsm), there is little difference between plasma osmolarity and osmolality in practice, as noted by Edelman et al. (4). Although sodium is the predominant extracellular cation, it is not the sole determinant of plasma osmolality (P_Osm), as estimated in the following equation.
P_Osm (mOsm/kg) ≈ 2 × [Na+] + BUN/2.8 + Glc/18

Blood urea nitrogen (BUN) and glucose (Glc) are measured in mg/dL in the above equation and must be converted to mmol/L by the conversion factors 2.8 (for the nitrogen content of BUN) and 18 (for glucose). Normal plasma osmolality ranges 275-295 mOsm/kg of water (2). It should be noted that urea can diffuse between the intracellular and extracellular compartments via transmembrane urea transporters to achieve equilibrium and therefore exerts less (and time-dependent) osmotic pressure across the cell membrane to influence the movement of water. A solute that is confined to a specific space and cannot freely cross a membrane can influence the direction of movement of water into that space, thus exerting a tonicity force. Therefore, an ion such as Na+ or a solute such as glucose, when present in high concentration and confined to the extracellular space, can draw water from the intracellular space, exerting a significant tonic force that changes the volume of surrounding cells. The tonicity of a solution is related to its effect on the volume of a cell; thus isotonic solutions have minimal impact on cell volume in the healthy state.
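The osmolality estimate from sodium, urea nitrogen, and glucose can be sketched as a short function. The conversion factors 2.8 and 18 are those given above; the laboratory values are hypothetical examples, not patient data.

```python
# Sketch of the plasma osmolality estimate described in the text;
# the laboratory values below are hypothetical examples.

def estimated_posm(na_mmol_l, bun_mg_dl, glucose_mg_dl):
    """Estimated plasma osmolality (mOsm/kg). BUN and glucose are entered
    in mg/dL and converted to mmol/L by the factors 2.8 and 18."""
    return 2 * na_mmol_l + bun_mg_dl / 2.8 + glucose_mg_dl / 18

# Normal chemistries: Na 140 mmol/L, BUN 14 mg/dL, glucose 90 mg/dL
print(estimated_posm(140, 14, 90))   # -> 290.0, inside the 275-295 range

# Marked hyperglycemia (glucose 540 mg/dL) adds 30 mOsm/kg of effective osmoles:
print(estimated_posm(140, 14, 540))  # -> 315.0
```

The doubling of sodium accounts for its accompanying anions, which is why the sodium term dominates the estimate.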
Intracellular sodium concentration averages 12 mmol/L. Potassium (K+) is the main cation in the intracellular space, with a concentration ranging 140-150 mmol/L, whereas normal plasma K+ concentration ranges 3.5-5.5 mmol/L, reflecting the 98% intracellular distribution of total body K+ (5-7). The asymmetric distribution of Na+ and K+ is achieved by the sodium-potassium ATPase (Na+-K+ ATPase), which actively transports Na+ and K+ in opposite directions to achieve the transmembrane gradients necessary for cell signaling, energy storage to facilitate solute transport, and maintenance of constant intracellular volume (8). The amounts of potassium in the intracellular compartment and sodium in the extracellular compartment are the main determinants of the fraction of body water in each compartment (9). Edelman articulated through his isotope equilibration studies that plasma sodium "is a reflection of the ratio of the sum of exchangeable monovalent cation (Na E + K E ) to T.B.W." (where T.B.W. is total body water) and introduced the concept of exchangeable sodium (Na_E) and potassium (K_E) in his original equation (4,10). He devised the classical concept that Na+ and K+ exist in the human body in an exchangeable form (as free ions or bound to proteins in plasma and interstitial fluid) and an unexchangeable form (bound to cartilage, skin, and bone), with plasma water sodium determined by the exchangeable form. Burton Rose later simplified the Edelman equation to its modern version shown here (11):

plasma [Na+] ≈ (Na_E + K_E) / TBW

Nguyen and Kurtz later refined the Rose equation in an attempt to improve its accuracy in predicting changes in plasma [Na+], but the simpler Rose formula has proven to be reliable in most clinical cases (12). Daily Na+ intake averages ∼180 mmol (4.2 g) in men and 150 mmol (3.5 g) in women (13). The kidneys filter plasma at a high glomerular filtration rate (GFR), needed to remove waste products and reabsorb solutes (14).
They play an important role in sodium homeostasis, adjusting renal sodium excretion to maintain normal extracellular fluid volume and control arterial blood pressure. More than 500 g of Na+ is extracted from the plasma ultrafiltrate daily in order to excrete the ingested 3 g in urine, demonstrating highly efficient Na+ reabsorption along the nephron to conserve total body Na+ (13). Multiple Na+ transporters located on the apical renal tubular cell surface facilitate sodium reabsorption throughout the nephron, beginning with the Na+-H+ exchanger in the proximal tubule, followed by the Na-K-2Cl transporter in the loop of Henle and the Na-Cl cotransporter in the distal convoluted tubule, and finally the epithelial sodium channel (eNaC) in the collecting duct (Figure 1). Na+-K+ ATPase, located on the basolateral surface of these tubular cells, serves as the efflux mechanism for sodium to return to the vascular compartment, in exchange for entry of blood potassium through the same cells for secretion into the urine. Review of each sodium transporter is outside the scope of this article, so the reader is referred to the review by Palmer and Schnermann for further details (13).

FIGURE 1 | Sodium reabsorption throughout the nephron. The nephron has multiple sodium chloride transporters to reabsorb filtered sodium so that <1% of filtered sodium is excreted from the body. The proximal tubule (PT) reabsorbs the majority of the filtered load (60%), followed by the thick ascending limb of the loop of Henle (TALH) and the distal convoluted tubule (DCT). Very little sodium remains for reabsorption by the time the tubular fluid reaches the cortical collecting duct (CCD). Sodium is conserved through these redundant reabsorption mechanisms.
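The Edelman-Rose relationship, plasma sodium as the ratio of exchangeable monovalent cations to total body water, can be illustrated numerically. The body-composition values below are illustrative assumptions for a roughly 70-kg adult, not data from the cited studies.

```python
# Numerical sketch of the simplified Edelman (Rose) relationship:
# plasma [Na+] ~ (exchangeable Na+ + exchangeable K+) / total body water.
# All values below are illustrative assumptions, not measurements.

def rose_plasma_na(na_e_mmol, k_e_mmol, tbw_liters):
    """Predicted plasma sodium (mmol/L) from exchangeable cations and TBW."""
    return (na_e_mmol + k_e_mmol) / tbw_liters

baseline = rose_plasma_na(na_e_mmol=2800, k_e_mmol=3100, tbw_liters=42.0)
print(round(baseline, 1))  # ~140 mmol/L for these assumed values

# Retaining 2 L of electrolyte-free water dilutes the same cation pool:
water_retention = rose_plasma_na(2800, 3100, 44.0)
print(round(water_retention, 1))  # plasma sodium falls

# Potassium depletion (e.g., from marked kaliuresis) lowers plasma sodium
# the same way, even with total body sodium unchanged:
k_depleted = rose_plasma_na(2800, 2800, 42.0)
print(round(k_depleted, 1))
```

The last case is the reason potassium repletion matters when correcting hyponatremia: K_E sits in the numerator alongside Na_E.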
Antidiuretic Hormone (ADH)
High renal sodium avidity conserves total body sodium. This property of the kidneys, together with renal water loss or water conservation, is a key determinant of fluid balance in the steady state. Antidiuretic hormone (ADH), or arginine vasopressin, secreted by the posterior pituitary reduces water loss in the urine by promoting renal water reabsorption. ADH binds to vasopressin V2 receptors on the renal collecting duct to activate adenylyl cyclase and produce cAMP which, in turn, activates protein kinase A (PKA). PKA-mediated phosphorylation leads to the translocation of the water channel aquaporin 2 (AQP2) from cytosolic vesicles to the apical plasma membrane of collecting duct cells, resulting in the movement of water from tubular lumen to blood (15,16) (Figure 2). ADH expression and release are driven by both osmotic and non-osmotic mechanisms. Specifically, rising osmolality detected by osmoreceptors in the anterior hypothalamus leads to the release of ADH to maintain plasma osmolality between 280 and 295 mOsm/kg (17). ADH release also occurs when arterial baroreceptors detect depletion of circulatory volume (18). Rising plasma osmolality activates osmoreceptors, triggering thirst to increase water intake along with release of ADH (19). Even though ADH both modulates water balance and acts as a vasoconstrictor on the smooth muscle of blood vessels, a reflex buffering mechanism exists to prevent a significant rise in arterial blood pressure until the maximal antidiuretic dose of ADH is released. This is the result of baroreceptor-mediated tonic inhibition of the synthesis and secretion of ADH via vagal afferent nerve input to the pituitary (20). Any disease process that reduces baroreceptor sensitivity would therefore dis-inhibit the synthesis and secretion of ADH, leading to greater release of ADH.
Non-osmotic stimulants of ADH secretion include decreased blood pressure and volume which are sensed by baroreceptors in heart and large arteries. Changes in blood pressure and volume are not as potent stimulators of ADH release as osmotic changes but are powerful in extreme conditions. Nausea and vomiting are also potent non-osmotic stimulators of ADH release (21,22).
Plasma sodium concentration and ultimately plasma tonicity are closely controlled by water homeostasis mediated by thirst, release of ADH and its effects on renal water reclamation (23).
Electrolyte and Water Absorption in the Gastrointestinal Tract
The gastrointestinal (GI) tract absorbs nutrients and fluids daily from dietary intake in addition to performing its endocrine and immune functions. Daily fluid intake ranges 1.5-2 liters, but secretions from saliva, gastric juices, pancreatic juices, and bile add more volume, bringing the total daily flux through the upper small intestine to roughly 10 liters (24)(25)(26). Fluid, electrolytes, and nutrients are absorbed so that the volume is reduced to 1.5 liters by the time the gastrointestinal fluids reach the ileocecal valve. The colon absorbs another 1.4 liters so that the final stool volume is 0.1 liters (Figure 3). Sodium and water absorption are enhanced in the upper small bowel, with the electrolyte content of the intestinal lumen mirroring plasma electrolyte concentrations. The adult gut can absorb up to 1,000 mmol of sodium per day, with most of the absorption occurring in the small intestine. The colon absorbs another 50 mmol of sodium and chloride and secretes a small amount of potassium (25). Failure of small intestinal absorption will lead to delivery of large volumes to the colon, overwhelming the limited absorptive capacity of this segment (25). Table 1 summarizes the electrolyte content of GI secretions and fluids.
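The daily fluid balance described above reduces to simple bookkeeping. A sketch using the quoted volumes; the exact split between oral intake and secretions is an assumption consistent with the stated ~10 L total.

```python
# Daily fluid flux through the gut, using the volumes quoted in the text
# (liters). The intake/secretion split is an assumption consistent with
# the ~10 L total reaching the upper small intestine.

oral_intake = 2.0          # upper end of the 1.5-2 L daily intake
secretions = 8.0           # saliva + gastric + pancreatic + biliary secretions
total_flux = oral_intake + secretions     # ~10 L through the upper small intestine

at_ileocecal_valve = 1.5   # volume left after small-intestinal absorption
stool_water = 0.1          # final stool volume

small_bowel_absorbed = total_flux - at_ileocecal_valve  # 8.5 L
colon_absorbed = at_ileocecal_valve - stool_water       # 1.4 L
fraction_reclaimed = 1 - stool_water / total_flux

print(small_bowel_absorbed, colon_absorbed)  # -> 8.5 1.4
print(round(fraction_reclaimed, 2))          # -> 0.99, ~99% of the flux reclaimed
```

These numbers make clear why even a modest failure of small-intestinal absorption floods the colon, whose absorptive capacity is an order of magnitude smaller.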
The stomach is mostly impermeable with negligible absorption. Duodenal mucosa is freely permeable, with considerable movement of water and ions in response to osmotic and concentration gradients. The gut contents become isotonic with plasma in the duodenum through bulk water and solute absorption, regardless of their original composition in the stomach. Acid is neutralized by bicarbonate (HCO3−) addition from pancreatic and biliary secretions. Pancreatic secretion of HCO3− occurs primarily through the chloride/bicarbonate (Cl−/HCO3−) ion exchanger, with HCO3− concentrations ranging 70-120 mmol/L in the pancreatic fluid (27). The cystic fibrosis transmembrane conductance regulator (CFTR) recycles Cl− across the apical membrane and regulates the activity of the Cl−/HCO3− exchanger (27) (Figure 4). The initial theory that the intestinal tract separates its absorptive and secretory functions spatially, with absorption occurring in locations superficial to the luminal surface (villous cells) while secretion arises from cells in crypts, has been challenged in recent years (28). Molecular localization experiments have shown that the Na+/H+ exchanger (NHE3), essential for Na+ and fluid absorption, and CFTR, NKCC1 (the Na+- and K+-coupled Cl− transporter), and NBCe1 (the Na+ and HCO3− cotransporter), involved in stimulated Cl− and HCO3− secretion, are present together in all upper crypt and villous enterocytes (29). Both the jejunum and ileum absorb and secrete fluid, but absorption normally predominates to reduce the total gut fluid volume to ∼1 L/day by the time it reaches the colon. Sodium, chloride, and bicarbonate make up the ionic content in the jejunum and ileum, with Na+ absorption coupled to glucose absorption through bulk flow along osmotic gradients (30). Fordtran and Carter observed in their early studies increased Na+ absorption when solutions containing glucose or galactose were infused into the jejunum (30).
These findings suggested that Na+ absorption is coupled to glucose absorption through an active transport mechanism, later identified as the Na+/glucose cotransporter SGLT1 (31). Chloride absorption appears to follow that of Na+ passively in the jejunum (25) and can occur actively against an electrochemical gradient in the jejunum, ileum, and colon. In addition to coupled Na+ absorption through SGLT1, the sodium-proton (Na+/H+) exchanger takes up Na+ and secretes H+ into the intestinal lumen. Chloride absorption is coupled with HCO3− secretion through the regulator known as down-regulated in adenoma (DRA), encoded by the gene SLC26A3 (25,27,32) (Figure 4). Chloride and bicarbonate exchange predominates by the end of the ileum, producing an alkaline solution. Chloride is also secreted into the intestinal lumen as part of Cl− recycling by CFTR located in the intestinal crypts (33,34). The presence of luminal Cl− affects HCO3− transport in the ileum and colon, such that when luminal Cl− is absent, HCO3− is absorbed as a paired ion with Na+ (35). When Cl− is present, HCO3− is secreted into the lumen of both these intestinal segments (35). Small intestinal secretory cells lack apical K+ channels, so K+ absorption occurs through passive diffusion and solvent drag (27).
Water absorption is postulated to be coupled with Na + and glucose transport in the small intestine via SGLT1 and passively in the colon (36,37). The role of aquaporin channels in water absorption remains unknown despite the expression of multiple aquaporin isotypes throughout the gastrointestinal system including small intestine (38,39).
Sodium absorption in the colon occurs via eNaC, regulated by aldosterone. Bicarbonate secreted into the colon via the Cl−/HCO3− exchanger is consumed in buffering organic acids produced by colonic bacteria (27). Some of the organic anions produced by this neutralizing reaction are absorbed via a linked HCO3− exchange transporter, but the remainder are excreted in the stool, representing up to 30-40 mmol/day of potential alkali lost (27). The colon absorbs K+ while secreting H+ into its lumen through the apical membrane H+/K+-ATPase. However, the colon secretes more K+ than it absorbs, via luminal K+ channels which respond to aldosterone by increasing luminal K+ content up to 75 mmol/L (27,39). Aldosterone does not seem to affect K+ secretion in other segments such as the ileum, making the colon unique in its response (40). Final stool water volume tends to be small, making net stool K+ loss 10-15 mmol/day (27). The major ion transporters important in absorption and secretion are summarized in Table 2.
ELECTROLYTE DISORDERS IN VOMITING
Vomiting can lead to significant volume and electrolyte loss, producing dysnatremia, metabolic alkalosis, and hypokalemia. Gastric fluid normally contains 120-160 mmol/L of Cl− balanced by Na+, K+, and H+, with K+ concentration up to 10 mmol/L, so small-volume vomiting rarely leads to significant loss of total body electrolyte content or volume. Volume, electrolyte, and acid-base disturbances present in cases of protracted vomiting or nasogastric suction with large volume loss.
Gastric Na+ content varies based on the acidity of the stomach contents. Hyponatremia ensues when large amounts of Na+ are lost with the gastric fluid. Hypotonic hyponatremia is defined as a plasma sodium <135 mmol/L and should be distinguished from pseudohyponatremia, observed in paraproteinemia or uncontrolled hyperlipidemia when plasma sodium is measured by methods requiring pre-measurement plasma sample dilution (3), and from translocational hyponatremia, as seen in hyperglycemia. Hypotonic hyponatremia reflects an excess of water in relation to Na+ (41,42). Volume depletion can precipitate hypotension, which activates the renin-angiotensin-aldosterone system to increase renal tubular Na+ and Cl− reabsorption and triggers the release of ADH (9). ADH release leads to increased renal tubular water reabsorption, acting in concert with aldosterone to preserve intravascular volume. If hypotonic fluids are ingested or administered in the presence of increased ADH secretion, more water may be reabsorbed than Na+, leading to hyponatremia. Reduced GFR from volume depletion may limit renal water excretion, maintaining hyponatremia through water retention. ADH secretion may persist if hypotension is prolonged, perpetuating hyponatremia despite the lack of an osmotic stimulus for its release. Nausea or pain can prolong ADH secretion after resolution of hypovolemia, exacerbating hyponatremia.
Hypernatremia can develop in vomiting when extra-renal water loss exceeds sodium loss in the vomitus causing a net water loss. Hypernatremia is defined as plasma sodium exceeding 145 mmol/L and reflects a hypertonic state (9,23). A rise in plasma osmolality will trigger thirst prompting water intake and ADH release to increase renal water absorption to correct the hyperosmolality. Hypernatremia develops if water is unavailable, thirst drive is impaired, or if the affected are too young or too old to seek water themselves (9,23).
Recurrent vomiting with large gastrointestinal fluid loss can lead to gastric alkalosis. The loss of this H+-rich gastric fluid leads to increased production of HCl in the parietal cells for secretion into the gastric lumen. Bicarbonate, generated as the conjugate anion from HCl production in the parietal cell, is returned to the circulation, raising plasma HCO3− and constituting the generation phase of metabolic alkalosis (43). A sudden rise in plasma HCO3− is followed by increased sodium bicarbonaturia and marked kaliuresis (44). However, ongoing loss of Cl−-rich gastric fluid leads to total body Cl− depletion, volume depletion, and reduced GFR. Reduced GFR limits excretion of urinary HCO3− by reducing the filtered load of HCO3−, contributing to the maintenance of metabolic alkalosis (45). Volume depletion also activates the renin-angiotensin-aldosterone system, which contributes to the maintenance of metabolic alkalosis. In the proximal tubule, angiotensin II promotes apical Na+/H+ exchange and basolateral Na+/HCO3− cotransport back to the vascular space. Aldosterone stimulates increased H+ secretion from the type A intercalated cell of the distal tubule into the tubular lumen, with generation of HCO3− that is extruded back into the blood in exchange for Cl−, maintaining a high plasma HCO3−. Chloride depletion reduces the distal delivery of tubular Cl− needed for HCO3− secretion, contributing to the maintenance phase of metabolic alkalosis. A characteristic feature of metabolic alkalosis caused by gastric losses is a urinary Cl− concentration <10 mmol/L (27).

FIGURE 4 | Ion transporters of the pancreas, small intestine, and colon. Pancreas: the Cl−/HCO3− exchanger and cystic fibrosis transmembrane conductance regulator (CFTR) are essential in bicarbonate secretion to neutralize HCl secreted in the stomach. Small intestine: sodium (Na+) absorption occurs via the Na+/H+ exchanger (NHE). Glucose absorption is coupled to Na+ absorption through the sodium-glucose cotransporter 1 (SGLT1), and glucose exits the absorptive cell through glucose transporter 2 (GLUT2). Na+/K+-ATPase serves as the efflux mechanism for absorbed Na+. Cl− absorption is coupled with HCO3− excretion through the regulator down-regulated in adenoma (DRA) in the jejunum and ileum. Cl− secretion into the lumen occurs through CFTR. Colon: similar transporters for Na+ and Cl− absorption as in the small intestine, with the addition of the epithelial Na+ channel (eNaC). K+ absorption is coupled to H+ secretion via H+/K+-ATPase. K+ secretion occurs under the influence of aldosterone in the colon, a characteristic unique to this segment. Bicarbonate (HCO3−) is secreted in the colon with the absorption of short-chain fatty acids (SCFA).
Marked kaliuresis accompanying sodium bicarbonaturia in the generation phase may lead to total body K + loss since urinary K + loss may exceed gastric K + loss (44,46). Secondary hyperaldosteronism increases Na + reabsorption via eNaC in the principal cell, creating an electronegative gradient favorable for K + secretion via the renal outer medullary potassium channel (ROMK), contributing to ongoing hypokalemia. Hypokalemia in turn increases the activity of the H + /K + -ATPase exchange pumps in the luminal membrane of type A intercalated cells which reabsorb K + in exchange for H + secretion, maintaining metabolic alkalosis. Hypokalemia also reduces the activity of pendrin, the Cl − /HCO − 3 exchanger located in type B intercalated cells, contributing to maintenance of metabolic alkalosis (47). Ultimately, hypokalemia and aldosterone increase activity of H + /K + -ATPase and H + -ATPase, respectively, to enhance distal hydrogen secretion while limiting HCO − 3 excretion to maintain metabolic alkalosis.
Treatment
Addressing the underlying cause of vomiting is the first step to correct the various electrolyte and acid-base disturbances induced by vomiting. Correction of volume depletion restores intravascular volume, improves GFR, and delivers more filtered HCO3− for tubular excretion. In addition, restoration of effective arterial blood volume stops renin-angiotensin-aldosterone and ADH activation, helping to correct hyponatremia. Fluid resuscitation with a chloride-rich crystalloid also corrects Cl− depletion and increases the delivery of urinary Cl− needed for Cl−/HCO3− exchange via pendrin to effectively excrete HCO3− and correct metabolic alkalosis. Repletion of potassium also helps to reverse metabolic alkalosis.
DYSNATREMIA IN DIARRHEA
The definition of diarrhea is nebulous, as most people use the term to mean more frequent bowel movements or loose or watery stool consistency (26). Researchers have used a stool weight >200 grams daily on a Western diet to define diarrhea (26). Stool weight can increase in those who eat a high-fiber diet, but this is not considered diarrhea since the stool is formed (26). The solute or water loss in the fecal matter better reflects the meaning of the term diarrhea, with two broad categories: osmotic and secretory diarrhea. The stool composition of each type of diarrhea discussed below is summarized in Table 3.
Osmotic Diarrhea
Osmotic diarrhea occurs when ingested, poorly absorbable, low-molecular-weight solutes pull water and ions into the intestinal lumen, leading to loose or unformed stools (33). Solutes such as lactulose, mannitol, sorbitol, polyethylene glycol (PEG), magnesium-based antacids or laxatives, or lactose in lactose-intolerant individuals can lead to osmotic diarrhea. Osmotic diarrhea stops once the non-absorbable solute has been purged and decreases with fasting. Hammer et al. induced osmotic diarrhea with PEG and lactulose to uncover its pathophysiology. Polyethylene glycol, which is non-absorbable and not metabolized by colonic bacteria, was administered at increasing doses and served as a contrast to lactulose, which is metabolized by colonic bacteria, to illustrate the differences in stool composition. Increasing osmotic loads of PEG caused a near-linear increase in stool output, with stool water content averaging 75-80% (50). Stool osmolality was 60-70 mOsmol/kg higher than the normal plasma osmolality of 290 mOsmol/kg (50). Stool Na+, Cl−, and HCO3− concentrations were lower than their respective concentrations in the plasma. PEG-induced diarrhea was associated with very small enteric losses of Na+, K+, and Cl− (50). Daily fecal losses of Na+ ranged from 4-31 mmol/day, while fecal K+ loss ranged from 6-13 mmol/day and fecal Cl− loss ranged from 1-10 mmol/day with increasing PEG dose (50).
Lactulose was also administered in increasing doses, with increased stool weight as observed with PEG, but the stool water content rose to an average of 90%, with stool weight reaching a maximum of 1,100 g/day (50). Organic acid concentration decreased due to colonic absorption of these organic acids, with mean stool osmolality ∼90 mOsmol/kg higher than plasma osmolality. Fecal carbohydrate concentration rose due to bacterial metabolism of lactulose, contributing more of an osmotic driving force for diarrhea (50). Fecal Na+ and K+ concentrations and daily fecal losses were higher compared with PEG-induced diarrhea (50). Shiau et al. observed similar fecal Na+ and K+ concentrations in patients with lactulose-induced diarrhea (52). The differences in stool solute composition, fecal water content, and stool osmolality between these two types of osmotic diarrhea illustrate how stool losses can determine plasma electrolyte and acid-base disturbances. Some HCO3− is secreted to neutralize these organic acids, so the loss of fecal organic acids represents the loss of a potential bicarbonate pool, leading to metabolic acidosis. The greater loss of organic acids in lactulose-induced diarrhea leads to greater losses of obligate cations such as Na+ and K+. Greater loss of fecal K+ can precipitate total body K+ depletion, leading to hypokalemia. If fecal water loss exceeds Na+ loss, volume depletion and hypernatremia may ensue despite compensatory renal water reabsorption. Hyponatremia occurs if Na+ loss exceeds water loss.
Secretory Diarrhea -Cholera
Secretory diarrhea results from overstimulation of the intestinal tract's secretory capacity, with net secretion of anions (Cl− and HCO3−), net secretion of K+, or net inhibition of Na+ absorption (26). Most cases of secretory diarrhea are due to infections such as cholera. It is characterized by large stool volumes which can exceed 1 L/h, an absence of red or white blood cells in the stool, absence of fever or other systemic symptoms except those related to volume depletion, persistence of diarrhea with fasting, and lack of an excess stool osmotic gap (33). The stool osmotic gap (OG) is calculated by the equation OG = 290 − 2 × (stool [Na+] + stool [K+]), where 290 is the assumed plasma osmolality. Unmeasured cations such as magnesium (Mg2+), calcium (Ca2+), ammonium (NH4+), and organic cations make up the gap, with a value >50 considered abnormal (33).
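The osmotic-gap calculation lends itself to a short sketch. The stool electrolyte values below are illustrative, chosen to contrast a secretory with an osmotic pattern, not data from the cited studies.

```python
# Stool osmotic gap as defined in the text:
# OG = 290 - 2 x (stool [Na+] + stool [K+]), with 290 mOsm/kg the assumed
# plasma osmolality and a gap > 50 considered abnormal.
# The stool electrolyte values below are illustrative.

def stool_osmotic_gap(stool_na_mmol_l, stool_k_mmol_l, assumed_posm=290):
    return assumed_posm - 2 * (stool_na_mmol_l + stool_k_mmol_l)

# Secretory pattern: electrolyte-rich stool, as in cholera (stool Na+ ~130 mmol/L)
print(stool_osmotic_gap(130, 15))  # -> 0: no excess gap

# Osmotic pattern: electrolyte-poor stool, osmoles supplied by unabsorbed solute
print(stool_osmotic_gap(30, 30))   # -> 170: > 50, consistent with osmotic diarrhea
```

In the secretory pattern the measured electrolytes account for essentially all stool osmoles, while in the osmotic pattern the large gap points to unmeasured solute driving the diarrhea.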
Vibrio cholerae, the Gram-negative bacterial species responsible for copious "rice-water" diarrhea, produces cholera toxin (CT), an 84-kDa protein consisting of a dimeric alpha-subunit and five beta-subunits. The larger A1 subunit of the dimeric alpha-subunit contains the toxic activity. Each of the beta-subunits binds tightly to the GM1 ganglioside abundant in the enterocyte brush border, followed by endocytosis of the A1 subunit (33). The A1 subunit catalyzes ADP-ribosylation of the alpha-subunit of a guanine nucleotide stimulatory protein (Gs), activating adenylyl cyclase to increase cAMP production by 100-fold (53). cAMP activates protein kinase A, which then phosphorylates and activates the apical Cl− channel CFTR, leading to increased intestinal Cl− secretion and diarrhea (53). V. cholerae produces another enterotoxin, the zona occludens toxin (ZOT), which increases paracellular fluid permeability, contributing to the increased stool volume observed in cholera diarrhea (53). V. cholerae also produces a neuraminidase that can increase the GM1 content of adjacent intestinal cells, increasing the number of cholera toxin binding sites (53) (Figure 5). The final result is large-volume diarrhea with increased loss of stool Na+, Cl−, K+, and HCO3−. Volume depletion quickly occurs, with watery diarrhea of up to 1 liter/hour containing a stool Na+ concentration of 130 mmol/L and stool Cl− of 100 mmol/L (48,49,54).
Hypernatremia, hypokalemia, and metabolic acidosis have all been observed in cholera patients with severe volume loss (>3 liters of stool) (54). Hyponatremia can occur in children with significant volume loss (55). Hypokalemia occurs because fecal K+ loss with large-volume diarrhea leads to total body K+ depletion (54,55). Metabolic acidosis occurs due to fecal bicarbonate loss as well as lactic acidosis from hypotension and poor tissue perfusion (54,55).
Congenital Chloride Diarrhea
Congenital chloride diarrhea, due to a defect in the apical Cl−/HCO3− exchanger, results in watery diarrhea with high stool Cl− and metabolic alkalosis (33,56). The disorder is characterized by an autosomal recessive inheritance pattern with a mutation in the SLC26A3 gene, which encodes an apical Cl−/HCO3− exchanger. The exchange mechanism in the ileum and colon is reversed, leading to loss of Cl− and producing a profuse chloride-rich diarrhea while increasing absorption of HCO3− (56,57). Coupled Na+/H+ transport through the Na+/H+ exchangers NHE2 or NHE3 leads to intestinal loss of both NaCl and fluid (56). The disease presents early in life, with antenatal findings including distended bowel loops and polyhydramnios in utero due to fetal secretory "urine-like" diarrhea observed on ultrasonography. Premature birth and lack of meconium are other key characteristics of this disease (56). Electrolyte disturbances present immediately after birth, manifesting as hypochloremia, hyponatremia, and volume depletion. Stool volumes can range from 2-7 liters daily, with lifelong intestinal loss of electrolytes (56). Diagnosis is based on the clinical presentation, confirmed by a high fecal Cl− concentration after repletion of fluid and electrolytes (51,56). Hypochloremic metabolic alkalosis results from increased intestinal HCO3− absorption (57). Disease management includes salt substitution therapy to prevent Cl− and volume loss.

FIGURE 5 | Mechanism of cholera enterotoxicity. Vibrio cholerae produces 3 toxins that lead to the copious "rice-water" cholera diarrhea: cholera toxin (CT), zona occludens toxin (ZOT), and a neuraminidase. CT binding followed by endocytosis leads to an increase in cAMP production and ultimately activation of CFTR, leading to increased intestinal Cl− secretion and diarrhea. ZOT increases paracellular fluid permeability, contributing to increased stool volume. Neuraminidase increases the number of CT binding sites in adjacent intestinal cells, amplifying the enterotoxicity of CT.
Treatment
Treatment of the electrolyte and acid-base disorders accompanying each type of diarrhea begins with addressing the underlying cause, followed by stabilization of hemodynamics, volume repletion, and correction of electrolyte disturbances. Aggressive volume repletion, either through oral rehydration or intravenous fluids, is recommended for cholera patients to replace the volume already lost and to keep up with ongoing losses. Oral solutions containing both glucose and Na+ take advantage of normal coupled intestinal Na+/glucose transport to deliver the NaCl needed for volume repletion, which is essential in saving lives. The observation that the addition of glucose to oral salt solutions increases intestinal Na+ absorption in cholera treatment shed light on the mechanism of coupled Na+/glucose absorption and ultimately led to the discovery of SGLT1 (58).
In those with hyponatremia, slow correction of plasma sodium is recommended to avoid the neurologic sequelae of rapid correction. Antibiotic treatment may be indicated for severe infectious diarrhea such as cholera. Frequent antibiotic use may promote antimicrobial resistance in V. cholerae, so antibiotics are reserved for severe cases.
CONCLUSION
Both the gastrointestinal and renal systems work in concert to maintain a steady physiological state. The gastrointestinal tract absorbs necessary nutrients and water while the kidneys excrete waste products and fine-tune the extracellular solute composition. Gastrointestinal losses of sodium, chloride, bicarbonate, potassium, and water can overwhelm renal compensatory mechanisms, yielding electrolyte and acid-base disturbances. Understanding the underlying pathophysiologic mechanisms provides insight into normal physiology and informs the conventional wisdom guiding therapy.
AUTHOR CONTRIBUTIONS
CD organized and drafted the final narrative review, selected the figures and commissioned the medical art from a professional artist, constructed the tables, and compiled the majority of the references. GJE, JD, GPE, and HL reviewed the manuscript, figures and tables, and gave input on the final version. BW put together the initial draft of the manuscript along with references, reviewed, and gave editorial input on the final draft. All authors contributed to the article and approved the submitted version. |
When people ask me how I’m doing, I always tell them, “I’m great,” bare minimum, I’m doing great. You know, you like positivity, right? Well then why settle for anything less? I tell people how great I’m doing and they like it, they appreciate the jolt of good vibes I’m sending their way. I even like to say it aggressively, like, “I’m great!” but short, like a really intense response, I’m staring at you directly in the eye, that hand shake we’re engaged in, doesn’t it hurt? Not a lot, but just a little bit, right? That’s because it’s a great handshake.
But like I said, that’s my bare minimum. So actually, if you ever run into me on the street, and you say, “What’s up Rob? How’s it going?” and I’m telling you that I’m great, well, honestly, I’m not really doing that well. Because that’s my bare minimum, that’s the basest level of interaction I’ll allow myself with another human being. If I’m just great, yeah, I’m glad that I got to maybe spark some positivity with my left hand, as it smacked your right shoulder while we were in the middle of that ultra-firm handshake. But your great, that’s my not-so-great.
And things aren’t usually that bad in my life, I don’t have too much to complain about. Which is why most of my day-to-day interactions will fall more “terrific” on the scale than they will merely “great.” I say it like an affirmation, “I’m terrific, how are you?” with added emphasis on the “rif.” Ter-RIF-ic. I might forgo the handshake for a high-five, not a regular five, OK, my hand is all the way up here for a reason, and if you present an outstretched low open palm, don’t expect that I’m going to come down to make contact. No, I’m going to stand here with my hand all the way up, and if you don’t make a move, eventually I’ll force it, I’ll say, “Come on man, high five!” and then when we make contact, I’m looking for an audible slapping sound, all right, yeah, it might hurt, but it’s not real pain, that’s the feeling of you not having experienced a real high-five in quite some time, so you’ll get used to it, all right? Terrific, I’ll repeat it again after that slap, it was loud enough that everyone around turned their heads in our direction, and I’ll extend that spotlight to you, I’ll say it a little louder, “We’re doing terrific over here.”
And again, I don’t want to get too hung up on levels and scales, my terrific equals your OK. But that’s exactly what it is, if I’m terrific, I’m just OK. And I don’t know about you, but I really hate settling for just OK, no way dude, life’s too short to go around feeling just kind of all right. Which is why, don’t get too hung up on the high-five thing. Yeah, it’s a little aggressive, and definitely loud, but I try not to really let myself get too comfortable feeling simply terrific. I’d say that the majority of people I run into say hi to me, and when they ask how I’m doing, I’ll tell them that, “I’m better than ever.”
Now we’re getting into some genuine good emotions here, some truly positive positives. Just embrace it, I’m not trying to rub it in your face, because, again, don’t read too much into it, all right, this is my version of how you would say that you’re doing fine, everything’s fine, I’m fine. But I’m better than ever. Just hop on, there’s plenty of room on my express bus to outstanding good feelings.
Just don’t tell me that you’re doing well. I hate it when I ask someone, “How’s it going?” and they’re like, “I’m well.” I’m just like, man, what a buzz kill. Who says well anyway? Like I know it’s correct, and I know it makes sense to write it out that way. But to say it? In actual conversation? You’re well? You sound like a textbook. And now I’m back down to just great again. And I won’t even say anything, I think I see my friend Jim over there anyway. Maybe he’s got what I need to recharge the batteries here.
And no, I don’t think it’s disingenuous, trying to come across as better than I actually am. I’m just constantly reaching, like maybe if I tell you that I’m better than ever, maybe you’ll light up a little inside. Maybe I’ll inspire you to a higher level of however it is you’d describe yourself at the moment. And then I look at your eyes widening, I can see all of that positivity weaseling its way inside your head, I think, I did that, that was me. And I get pumped up. So when I said I was better than ever, maybe I wasn’t. But now I might be.
And so I’ll correct myself, I’ll add something like, “You know what? This is one of the best days of my life!” (emphasis on that life.) And then a high-five isn’t going to do, OK, I need something better, maybe I’ll get up real close and I’ll shadow box, like I’ll give you two or three fake punches to the gut before letting out a really intense laugh, “ha HA!” and then sidle up next to you, my left arm wrapping around your neck, like a noogie without the actual noogie part, and with my right hand, I’ll pat you on the gut, like we’re brothers, like we’re two guys just horsing around, reveling in the unlimited potential of our out-of-this-world dispositions. Hey come on, let’s go get some ice cream. Yeah, the ice cream place two blocks down, come on, I’ll race you! Let’s go! Ha HA!
// test/csvGenerator.test.ts
import CSVGenerator from '../src/csvGenerator';
import {glossentryHandler} from '../src/ContentTypeHandlers/glossentry';
import * as ezdClient from '@jorsek/ezd-client';
// if you used the '@types/mocha' method to install mocha type definitions, uncomment the following line
// import 'mocha';
const testingConnectionConfig = {
org: "public-test",
token: "--",
rootMapId: "--",
};
describe('Publisher tests', () => {
it('retrieve Glossary Entry Content Type Handler', async () => {
const publisher = new CSVGenerator(new ezdClient.Client(testingConnectionConfig));
const result = await publisher.getCorrectContentTypeHandler("Glossary Entry");
const contentTypeHandler = new result.contentHandler();
expect(contentTypeHandler).toBeInstanceOf(glossentryHandler);
});
});
Presently, I'm serving as a branch chief supervisor, supporting the Space Launch System program—known as the SLS—which replaces the retired space shuttles. In my branch, I provide the systems engineering and integration function, which covers all the operations concepts for the design, development, and integration of the vehicles that will be used at the Kennedy Space Center, where the SLS is actually assembled and launched.
Lam: Have you always been passionate about space exploration?
Duncan: I've always been passionate about the space program, but I grew up back in the 1950s in Montgomery, Alabama—the Deep South. My interest was really in urban community development, transportation, and communication—the things that get services to us, that make us understand what is going on around us. And at an early age, I recognized that my interests have always been in the connectedness of the world, the universe itself.
I was drawn to space exploration because I realized it’s about these technologies that not only help us learn about the universe, but also find out if there is someone out there like us. It also provides technologies and innovations that we need to understand and improve life here on Earth.
Space was the answer, but I knew that there had to be some way to get us more connected. I’m also interested in bringing us together here on Earth. I grew up out in rural America, where it was really dark. I didn't have an interest in astronomy, but I always wondered about what other civilizations or groups of people were out there, and whether [they] somehow get along better than we do. I’m interested in how technology can get rid of the artificial barriers that separate us. That's what space exploration is about.
Lam: How did you come to work at the Marshall Space Flight Center?
Duncan: I went to graduate school at Howard University in Washington, D.C., and I thought I would work for a state or even a federal transportation agency, or the Department of Energy because they paid for my advanced studies. And at that time, in 1980, Marshall was in need of systems engineers, and they also didn't have many minorities. I'm a double minority, I'm a female and I'm also African American.
I applied, and they interviewed me. The processes that I had used for urban systems development are the same processes that we use on the space program. The rest is history. They hired me to do simulations of the on-orbit environment. I worked on the first Spacelab mission, where I sat at the consoles working with the researchers to get their commands and controls up to low-Earth orbit.
It was an indirect route, still doing what I wanted to do in terms of enabling technology for transportation, bridges, highways, and communications systems. I thought I’d be doing that on Earth, but I had the opportunity to really impact some of the research at NASA.
Over one year ago, days apart, I began to receive e-mail messages addressed to others. For weeks I worked diligently to try and put a stop to it. My requests fell on deaf ears. I receive regular reminders that it is happening still.
I began to receive many (or all) e-mail messages addressed to someone named Sandy, who lives in Ontario Province, Canada. The domain name is Eastlink.ca, a broadband access provider. It didn’t take long to figure out that I was receiving all of Sandy’s e-mail. I wrote to Sandy, suggesting she complain to her ISP. And of course I also received a copy of the message in my own inbox. I wrote to Sandy a couple of times and never heard from her. I guess she doesn’t care – or maybe she did not receive them. I also complained to Eastlink.ca, and heard nothing from them.
I also receive all of Brian’s e-mail, and his ISP is ica.net, another broadband access provider in eastern Canada. I complained to ica.net, several times, and never received a response. I wrote to Brian also, and he responded and suggested I change my e-mail address. As if!
I also receive messages to someone at charter.net, but this user’s e-mail address does not indicate their name. I wrote to them and to Charter.net – you guessed it: no response.
Soon after this began, I wrote inbox rules to immediately delete all e-mail messages addressed *to* these user accounts that ended up in my inbox. Now and then I look in my Trash Bin (where deleted e-mails go), and sure enough, there are still scores of e-mail messages: thank-yous for online merchant orders, Facebook invites, e-cards, and personal correspondence. I don’t read these messages.
Some of these messages still come to my inbox – this includes messages where the recipient is in the BCC (blind carbon copy) list. My inbox rules don’t know how to respond to these.
I wish this would stop. I’m going to write to ica.net, Charter.net, and eastlink.ca again, but I’m not expecting any response, not to mention action.
I cannot imagine that this is happening only to me. If some malevolent (or even accidental) action is behind this, then chances are that hundreds or thousands of other users’ e-mail messages are also being forwarded without their permission.
This also makes me wonder if this is happening to MY incoming e-mail: could some other user out there be receiving messages sent to me? I sure don’t relish that idea: sometimes I receive “reset your password by clicking on this URL” messages. What if someone else receives these and decides to click the one-time link before I do? Some online account of mine could be compromised as a result.
I’m also worried about my own liability in this matter. I’m receiving e-mail messages that are supposed to be sent to others. I don’t want them, I don’t read them, and I delete them when I see them. But what if I receive messages containing personal medical information, for instance?
There are several possible causes for this inadvertent e-mail forwarding:
Malware, tampering, or compromise of an ISP e-mail server.
Compromise of individual users’ e-mail accounts, where an attacker inserts rules to forward mail to me (and maybe others).
Malware or compromise on individual users’ computers; this may be true if users use workstation-based e-mail software such as Outlook, Outlook Express, or Thunderbird.
There may be other potential causes, but I cannot think of any more.
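To figure out which of these is in play, the message headers are the first place to look. A minimal sketch (the message and addresses below are invented for illustration): the receiving server stamps a Delivered-To header on each message, so comparing it against the To header shows whether the redirection happened upstream of your own mail client.

```python
import email

# Hypothetical raw message: the To header names another subscriber,
# but Delivered-To records the mailbox that actually received it.
raw = b"""Delivered-To: [email protected]
To: [email protected]
Subject: Your order confirmation

Thanks for your order!
"""

msg = email.message_from_bytes(raw)
if msg["Delivered-To"] != msg["To"]:
    # A mismatch points at a server-side forward, alias, or account rule,
    # rather than anything on the recipient's own computer.
    print("Redirected upstream:", msg["To"], "->", msg["Delivered-To"])
```

The full chain of Received headers (not shown here) would additionally reveal which server performed the redirection.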
If malware or a human intruder were behind this, what is their gain? What is the benefit for an intruder if someone’s e-mail is forwarded to someone who lives 3,000 miles away? If the intent is to harm someone, who does it harm? If the intent is to harm the individuals whose e-mail messages are being forwarded to me, then I can think of several more malicious ways to harm them. If the intent is to harm me, I don’t see how this harms me.
import java.io.PrintWriter;
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        PrintWriter out = new PrintWriter(System.out);

        int n = in.nextInt();
        int[] w = new int[n];
        int[] h = new int[n];
        int sum = 0;       // total width of all people
        int firstMax = 0;  // tallest height overall
        int secondMax = 0; // tallest height excluding the tallest person
        int index = 0;     // index of the tallest person

        for (int i = 0; i < n; i++) {
            w[i] = in.nextInt();
            h[i] = in.nextInt();
            sum += w[i];
            if (firstMax < h[i]) {
                firstMax = h[i];
                index = i;
            }
        }
        for (int i = 0; i < n; i++) {
            if (i != index) {
                secondMax = Math.max(secondMax, h[i]);
            }
        }
        // For each i, the photo without person i measures
        // (total width - w[i]) times the tallest remaining height.
        for (int i = 0; i < n; i++) {
            int curH = (i == index) ? secondMax : firstMax;
            out.print((sum - w[i]) * curH);
            out.print(" ");
        }
        out.flush();
    }
}
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <algorithm>
using namespace std;

pair<int, int> f[1005];
int n, k;

int main()
{
    cin >> n >> k;
    // Infeasible: more required games than exist in total.
    if (n * k > n * (n - 1) / 2)
    {
        puts("-1");
        return 0;
    }
    for (int i = 1; i <= n; i++)
    {
        f[i].first = k;
        f[i].second = i;
    }
    printf("%d\n", n * k);
    for (int i = 1; i <= n; i++)
    {
        k = 0;
        for (int j = i; j <= n; j++)
            if (f[j].second == i)
            {
                k = f[j].first;
                f[j].first = 0;
            }
        sort(f + i, f + n + 1);
        for (int j = 1; j <= k; j++)
            printf("%d %d\n", i, f[i + j].second);
        for (int j = i + k + 1; j <= n; j++)
            if (f[j].first)
            {
                f[j].first--;
                printf("%d %d\n", f[j].second, i);
            }
    }
    return 0;
}
#include <mutex>
#include <sstream>
#include <string>
#include <utility>
//Include before spdlog headers to avoid conflicts with Windows types - Solokiller
#include "extdll.h"
#include <spdlog/spdlog.h>
#include <spdlog/details/null_mutex.h>
#include <spdlog/sinks/file_sinks.h>
#include <spdlog/sinks/null_sink.h>
#include "CBaseGameInterface.h"
#include "CConsoleLogSink.h"
#include "CLogSystem.h"
#include "LogDefs.h"
#include "Logging.h"
namespace logging
{
namespace
{
CLogSystem g_LogSystem;
}
CLogSystem& LogSystem()
{
return g_LogSystem;
}
CLogSystem::CLogSystem() = default;
CLogSystem::~CLogSystem() = default;
bool CLogSystem::Initialize()
{
//Configure global logging settings
//Set up the error handler
spdlog::set_error_handler(
[ & ]( const std::string& szErrorMessage )
{
this->LogErrorHandler( szErrorMessage );
}
);
//Default pattern is year-month-day format
spdlog::set_pattern( "[%Y-%m-%d %T.%e] [%n] [%l] %v" );
//Prefix all logs with the name of the library that created it
SetBasePath( "logs/" LIBRARY_NAME "_" );
UTIL_AddCommand( "log_level_" LIBRARY_NAME,
[]
{
LogSystem().Command_LogLevel();
}
);
UTIL_AddCommand( "log_to_file_" LIBRARY_NAME,
[]
{
LogSystem().Command_LogToFile();
}
);
UTIL_AddCommand( "log_list_loggers_" LIBRARY_NAME,
[]
{
LogSystem().Command_ListLoggers();
}
);
UTIL_AddCommand( "log_test_logger_" LIBRARY_NAME,
[]
{
LogSystem().Command_TestLogger();
}
);
//Create shared sinks
m_NullSink = std::make_shared<spdlog::sinks::null_sink<LoggingMutex_t>>();
if( UTIL_CheckParm( "-condebug" ) )
{
m_DebugSink = std::make_shared<LogSink_t>( PrepareFilename( "condebug" ), 0, 0 );
if( !m_DebugSink )
{
Con_Printf( "Fatal error: Couldn't create condebug log sink\n" );
return false;
}
}
m_ConsoleSink = std::make_shared<CConsoleLogSink<LoggingMutex_t>>();
m_LogDistSink = std::make_shared<LOGGING_FACTORY( spdlog::sinks::dist_sink )>();
if( !m_ConsoleSink || !m_LogDistSink )
{
Con_Printf( "Fatal error: Couldn't create log sinks\n" );
return false;
}
m_State = State::ACTIVE;
//Create global, non configurable loggers
null_logger = CreateNullLogger( "null_logger" );
con = CreateConsoleLogger( "console" );
log = CreateMultiLogger( "log" );
if( !null_logger || !con || !log )
{
return false;
}
//TODO: add startup config, add option to control logging - Solokiller
return true;
}
void CLogSystem::Shutdown()
{
//So that there's a "Log file closed" message with a timestamp.
if( IsLogToFileEnabled() )
DisableLogToFile();
m_State = State::UNINITIALIZED;
{
//Drop all loggers we created before internally
std::shared_ptr<spdlog::logger>* const loggers[] =
{
&log,
&con,
&null_logger
};
for( auto logger : loggers )
{
DropLogger( *logger );
( *logger ).reset();
}
}
m_LogDistSink.reset();
m_LogFileSink.reset();
m_DebugSink.reset();
m_ConsoleSink.reset();
m_NullSink.reset();
}
//Helper function to create loggers and handle the exception
template<typename FACTORY>
auto CreateSpdLogObject( const std::string& szName, FACTORY factory )
{
try
{
return factory();
}
catch( const spdlog::spdlog_ex& e )
{
//Should always be shown so devs can catch it
Con_Printf( "Failed to create spdlog object \"%s\": %s\n", szName.c_str(), e.what() );
}
return decltype( factory() ){};
}
std::shared_ptr<spdlog::logger> CLogSystem::CreateBasicLogger( const std::string& logger_name, const spdlog::filename_t& filename, bool truncate )
{
return CreateSpdLogObject( logger_name,
[ & ]()
{
const auto szCompleteFilename{ PrepareFilename( filename ) };
return spdlog::LOGGING_FACTORY( basic_logger )( logger_name, szCompleteFilename, truncate );
}
);
}
std::shared_ptr<spdlog::logger> CLogSystem::CreateRotatingLogger( const std::string& logger_name, const spdlog::filename_t& filename, size_t max_file_size, size_t max_files )
{
return CreateSpdLogObject( logger_name,
[ & ]()
{
const auto szCompleteFilename{ PrepareFilename( filename ) };
return spdlog::LOGGING_FACTORY( rotating_logger )( logger_name, szCompleteFilename, max_file_size, max_files );
}
);
}
std::shared_ptr<spdlog::logger> CLogSystem::CreateDailyLogger( const std::string& logger_name, const spdlog::filename_t& filename, int hour, int minute )
{
return CreateSpdLogObject( logger_name,
[ & ]()
{
const auto szCompleteFilename{ PrepareFilename( filename ) };
auto sink = std::make_shared<LogSink_t>(
szCompleteFilename,
hour,
minute
);
return spdlog::create( logger_name, sink );
}
);
}
std::shared_ptr<spdlog::logger> CLogSystem::CreateConsoleLogger( const std::string& logger_name )
{
if( m_State != State::ACTIVE )
Con_DPrintf( "Warning: log system has not initialized, console loggers may not function properly\n" );
return CreateSpdLogObject( logger_name,
[ & ]()
{
std::vector<spdlog::sink_ptr> sinks;
//Log to console and debug log if enabled
sinks.emplace_back( m_ConsoleSink );
if( m_DebugSink )
sinks.emplace_back( m_DebugSink );
return spdlog::create( logger_name, sinks.begin(), sinks.end() );
}
);
}
std::shared_ptr<spdlog::logger> CLogSystem::CreateMultiLogger( const std::string& logger_name )
{
if( m_State != State::ACTIVE )
Con_DPrintf( "Warning: log system has not fully initialized, multi loggers may not function properly\n" );
return CreateSpdLogObject( logger_name,
[ & ]()
{
std::vector<spdlog::sink_ptr> sinks;
sinks.emplace_back( m_ConsoleSink );
if( m_DebugSink )
sinks.emplace_back( m_DebugSink );
sinks.emplace_back( m_LogDistSink );
return spdlog::create( logger_name, sinks.begin(), sinks.end() );
}
);
}
std::shared_ptr<spdlog::logger> CLogSystem::CreateNullLogger( const std::string& logger_name )
{
return CreateSpdLogObject( logger_name,
[ & ]()
{
return spdlog::create( logger_name, m_NullSink );
}
);
}
std::shared_ptr<spdlog::logger> CLogSystem::GetLogger( const std::string& szLoggerName )
{
return spdlog::get( szLoggerName );
}
void CLogSystem::DropLogger( const std::shared_ptr<spdlog::logger>& logger )
{
ASSERT( logger );
if( !logger )
return;
spdlog::drop( logger->name() );
}
spdlog::level::level_enum CLogSystem::GetLogLevel( const std::string& szLoggerName ) const
{
auto logger = spdlog::get( szLoggerName );
if( !logger )
return spdlog::level::off;
return logger->level();
}
bool CLogSystem::SetLogLevel( spdlog::logger& logger, spdlog::level::level_enum level )
{
logger.set_level( level );
return true;
}
bool CLogSystem::SetLogLevel( spdlog::logger& logger, const char* pszLogLevel )
{
try
{
return SetLogLevel( logger, LogLevelFromString( pszLogLevel ) );
}
catch( const std::runtime_error& )
{
return false;
}
}
bool CLogSystem::SetLogLevel( const std::string& szLoggerName, spdlog::level::level_enum level )
{
auto logger = spdlog::get( szLoggerName );
if( !logger )
return false;
return SetLogLevel( *logger, level );
}
bool CLogSystem::SetLogLevel( const std::string& szLoggerName, const char* pszLogLevel )
{
auto logger = spdlog::get( szLoggerName );
if( !logger )
return false;
return SetLogLevel( *logger, pszLogLevel );
}
void CLogSystem::SetBasePath( std::string&& szPath )
{
UTIL_FixSlashes( &szPath[ 0 ] );
m_szBasePath = GameInterface()->GetGameDirectory();
m_szBasePath += FILESYSTEM_PATH_SEPARATOR_CHAR;
if( !szPath.empty() )
m_szBasePath += szPath;
//Note: this uses the given path because g_pFileSystem already handles the game directory name in its search paths
const auto lastSlash = szPath.find_last_of( FILESYSTEM_PATH_SEPARATOR_CHAR );
const auto szDirectoryPart = lastSlash != szPath.npos ? szPath.substr( 0, lastSlash ) : szPath;
//Make sure the directories exist
g_pFileSystem->CreateDirHierarchy( szDirectoryPart.c_str(), nullptr );
//TODO: update existing sinks? - Solokiller
}
bool CLogSystem::IsLogToFileEnabled() const
{
return !!m_LogFileSink;
}
void CLogSystem::SetLogToFile( const bool bState )
{
if( bState )
EnableLogToFile();
else
DisableLogToFile();
}
void CLogSystem::EnableLogToFile()
{
if( !IsLogToFileEnabled() )
{
const char szBaseFilename[] = "L";
const auto szBaseName = PrepareFilename( szBaseFilename );
auto logFile = CreateLogSink( szBaseFilename );
if( logFile )
{
m_LogFileSink = logFile;
Con_Printf( "%s logging data to file %s\n", LIBRARY_NAME, szBaseName.c_str() );
m_LogDistSink->add_sink( m_LogFileSink );
}
}
}
void CLogSystem::DisableLogToFile()
{
if( IsLogToFileEnabled() )
{
log->critical( "Log file closed" );
m_LogDistSink->remove_sink( m_LogFileSink );
m_LogFileSink.reset();
Con_Printf( "%s logging disabled.\n", LIBRARY_NAME );
}
}
void CLogSystem::Command_LogLevel()
{
if( Cmd_ArgC() < 2 )
{
Con_Printf( "Usage: log_level_%s <logger name> [<log level>]\nAvailable log levels:\n", LIBRARY_NAME );
for( auto pszLevel : spdlog::level::level_names )
{
Con_Printf( "%s\n", pszLevel );
}
return;
}
const char* pszLoggerName = Cmd_ArgV( 1 );
auto logger = GetLogger( pszLoggerName );
if( !logger )
{
Con_Printf( "Logger not found: \"%s\"\n", pszLoggerName );
return;
}
if( Cmd_ArgC() < 3 )
{
Con_Printf( "Log level for logger \"%s\": \"%s\"\n", pszLoggerName, spdlog::level::to_str( logger->level() ) );
return;
}
const char* pszLogLevel = Cmd_ArgV( 2 );
if( SetLogLevel( *logger, pszLogLevel ) )
{
Con_Printf( "Log level for logger \"%s\" set to \"%s\"\n", pszLoggerName, pszLogLevel );
}
else
{
Con_Printf( "Couldn't set log level for logger \"%s\" to \"%s\"\n", pszLoggerName, pszLogLevel );
}
}
void CLogSystem::Command_LogToFile()
{
//Mimics the engine's "log" command
if( Cmd_ArgC() < 2 )
{
Con_Printf( "Usage: log_to_file_%s < on | off >\n", LIBRARY_NAME );
if( m_LogFileSink )
Con_Printf( "currently logging\n" );
else
Con_Printf( "not currently logging\n" );
return;
}
const char* const pszNewState = Cmd_ArgV( 1 );
if( !strcmp( pszNewState, "on" ) )
{
EnableLogToFile();
}
else if( !strcmp( pszNewState, "off" ) )
{
DisableLogToFile();
}
else
{
Con_Printf( "log_to_file_%s: unknown parameter %s, 'on' and 'off' are valid\n", LIBRARY_NAME, pszNewState );
}
}
void CLogSystem::Command_ListLoggers()
{
size_t uiCount = 0;
Con_Printf( "Available loggers:\n" );
spdlog::apply_all(
[ & ]( const std::shared_ptr<spdlog::logger>& logger )
{
++uiCount;
Con_Printf( "%s\n", logger->name().c_str() );
}
);
Con_Printf( "%u loggers\n", uiCount );
}
void CLogSystem::Command_TestLogger()
{
const auto iArgC = Cmd_ArgC();
if( iArgC < 3 )
{
Con_Printf( "Usage: log_test_logger_%s < name > < level > [ < arg >... ]\n", LIBRARY_NAME );
return;
}
const char* pszLoggerName = Cmd_ArgV( 1 );
auto logger = GetLogger( pszLoggerName );
if( !logger )
{
Con_Printf( "Logger not found: \"%s\"\n", pszLoggerName );
return;
}
try
{
const char* pszLogLevel = Cmd_ArgV( 2 );
const auto level = LogLevelFromString( pszLogLevel );
//Unfortunately Cmd_Args isn't available on the client side, so we need a cross-dll solution
std::stringstream stream;
//This won't catch quotes or redundant spaces or anything, but it's enough to test the logger
for( int iArg = 3; iArg < iArgC; ++iArg )
{
stream << Cmd_ArgV( iArg );
if( iArg + 1 < iArgC )
stream << ' ';
}
logger->log( level, "Test string: {}", stream.str() );
}
catch( const std::runtime_error& e )
{
Con_Printf( "%s\n", e.what() );
}
}
void CLogSystem::LogErrorHandler( const std::string& szErrorMessage )
{
Con_Printf( "Error in spdlog while logging: \"%s\"\n", szErrorMessage.c_str() );
}
std::string CLogSystem::PrepareFilename( const std::string& szFilename ) const
{
//Put the log directory in the game directory
//TODO: could check here if users already added the game directory to help flag misuse - Solokiller
return m_szBasePath + szFilename;
}
std::shared_ptr<CLogSystem::LogSink_t> CLogSystem::CreateLogSink( const std::string& szBaseName )
{
return CreateSpdLogObject( szBaseName,
[ & ]()
{
return std::make_shared<LogSink_t>(
PrepareFilename( szBaseName ),
0,
0
);
}
);
}
}
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import math
from dope_selfsup.nets.resnet import resnet18
class DOPEProjector(nn.Module):
def __init__(self, in_dim, latent_dim, proj_dim):
super(DOPEProjector, self).__init__()
self.mlp = nn.Sequential(
nn.Conv2d(in_dim, latent_dim, kernel_size=1, stride=1, padding=0), nn.ReLU(),
nn.Conv2d(latent_dim, latent_dim, kernel_size=1, stride=1, padding=0), nn.ReLU(),
nn.Conv2d(latent_dim, latent_dim, kernel_size=1, stride=1, padding=0), nn.ReLU(),
nn.Conv2d(latent_dim, proj_dim, kernel_size=1, stride=1, padding=0, bias=False))
def forward(self, x):
return torch.nn.functional.normalize(self.mlp(x),dim=1)
class DOPEContrast(nn.Module):
def __init__(self, inputSize, T=0.07, negative_source=None,
n_pos_pts=None, n_obj=None):
super(DOPEContrast, self).__init__()
self.inputSize = inputSize
self.T = T
self.negative_source = negative_source
## pre-initializing indexing variables
M_size = n_obj*n_pos_pts
self.pos_idx = torch.eye(M_size, M_size).bool()
# for 3 obj 2 pts block diag would look like this
# 1 1 0 0 0 0
# 1 1 0 0 0 0
# 0 0 1 1 0 0
# 0 0 1 1 0 0
# 0 0 0 0 1 1
# 0 0 0 0 1 1
block_diag = torch.block_diag(*([torch.ones(n_pos_pts, n_pos_pts)]*n_obj))
# negatives from the same object in the 2nd view
idx_mat_1 = (block_diag - torch.eye(M_size, M_size))
# 0 1 0 0 0 0
# 1 0 0 0 0 0
# 0 0 0 1 0 0
# 0 0 1 0 0 0
# 0 0 0 0 0 1
# 0 0 0 0 1 0
# for each row, what are the columns where we have 1
# [[1],[0],[4],[3],[5],[4]]
self.gather_idx_1 = idx_mat_1.nonzero()[:,1].reshape(-1,n_pos_pts-1).cuda()
# negatives from other objects in the 2nd view
idx_mat_2 = 1 - block_diag
# 0 0 1 1 1 1
# 0 0 1 1 1 1
# 1 1 0 0 1 1
# 1 1 0 0 1 1
# 1 1 1 1 0 0
# 1 1 1 1 0 0
# for each row, what are the columns where we have 1
# [[2,3,4,5],[2,3,4,5],[0,1,4,5],[0,1,4,5] ...
self.gather_idx_2 = idx_mat_2.nonzero()[:,1].reshape(-1, M_size-n_pos_pts).cuda()
assert len(self.negative_source) > 0, "can't train without sampling any negatives"
def extract_local_features(self, q, k, uv_q, uv_k):
'''
q: B x 128 x 56 x 56 input features 1
k: B x 128 x 56 x 56 input features 2
uv_q: N_pts x 2 pixel coordinates to extract features from q
uv_k: N_pts x 2 pixel coordinates to extract features from k
outputs extracted features from q and k of shape B, N_pts, C
'''
B, C, fH, fW = q.shape
# positive features from the foreground in 2nd view
q = q[torch.arange(B)[:,None], :, uv_q[:,:,0], uv_q[:,:,1]]
k = k[torch.arange(B)[:,None], :, uv_k[:,:,0], uv_k[:,:,1]]
return q, k
def get_pos_neg_l(self, M):
l_pos = M[self.pos_idx]
l_neg_2nd_view = M.gather(1, self.gather_idx_1)
l_neg_other_obj = M.gather(1, self.gather_idx_2)
return l_pos, l_neg_2nd_view, l_neg_other_obj
def forward(self, q, k):
k = k.detach()
feature_similarity_matrix = torch.mm(q, k.T)
l_pos, l_neg_2nd_view, l_neg_other_obj = self.get_pos_neg_l(feature_similarity_matrix)
### return 0 if not used
for_logging = {
"local_pos_l":l_pos.detach().mean().item(),
"2nd_view_neg_l":0,
"other_obj_neg_l":0,
}
# pos logit
out = l_pos.view(len(l_pos), 1)
if "2nd_view" in self.negative_source:
out = torch.cat([out, l_neg_2nd_view], dim=1)
for_logging["2nd_view_neg_l"] = l_neg_2nd_view.detach().mean().item()
if "other_obj" in self.negative_source:
out = torch.cat([out, l_neg_other_obj], dim=1)
for_logging["other_obj_neg_l"] = l_neg_other_obj.detach().mean().item()
out = torch.div(out, self.T)
out = out.squeeze().contiguous()
return out, for_logging
class DOPENetworkCNN(nn.Module):
def __init__(self, fpn_dim):
super(DOPENetworkCNN, self).__init__()
# FPN dimension
D = fpn_dim
self.backbone = resnet18()
self.fpn = torchvision.ops.FeaturePyramidNetwork([64, 128, 256, 512], D)
self.panoptic_1 = self._make_upsample_block(D, 0, 3)
self.panoptic_2 = self._make_upsample_block(D, 28, 1)
self.panoptic_3 = self._make_upsample_block(D, 14, 2)
self.panoptic_4 = self._make_upsample_block(D, 7, 3)
self.conv1x1_mask = nn.Conv2d(D, 1,kernel_size=1, stride=1, padding=0, bias=False)
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
def forward(self, x):
backbone_out = self.backbone(x)
# plugging features into FPN
feats = {
"f1":backbone_out["f5"],
"f2":backbone_out["f6"],
"f3":backbone_out["f7"],
"f4":backbone_out["f8"]
}
f1, f2, f3, f4 = self.fpn(feats).values()
# panoptic FPN upsampling
f1 = self.panoptic_1(f1)
f2 = self.panoptic_2(f2)
f3 = self.panoptic_3(f3)
f4 = self.panoptic_4(f4)
# element-wise sum + 1x1 conv "projection"
f = f1 + f2 + f3 + f4
output = {}
output['local_feat_pre_proj'] = f
# mask prediction
mask = self.conv1x1_mask(f)
mask = torch.sigmoid(mask)
output['mask'] = mask.squeeze()
return output
def _make_upsample_block(self, dim, in_size, blocks):
layers = []
for _ in range(blocks):
conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, stride=1)
norm = nn.BatchNorm2d(dim)
activation = nn.ReLU(inplace=True)
if in_size != 0:
upsample = Interpolate(in_size*2, mode='bilinear')
in_size*=2
layers.extend([conv, norm, activation, upsample])
else:
layers.extend([conv, norm, activation])
return nn.Sequential(*layers)
class Interpolate(nn.Module):
def __init__(self, size, mode):
super(Interpolate, self).__init__()
self.interp = nn.functional.interpolate
self.size = size
self.mode = mode
def forward(self, x):
x = self.interp(x, size=self.size, mode=self.mode, align_corners=False)
return x
if __name__ == "__main__":
model = DOPENetworkCNN(128)
x = torch.randn(32, 3, 224, 224)  # renamed from `input` to avoid shadowing the builtin
output = model(x)
import pdb; pdb.set_trace()
def _organize_df(self) -> pd.DataFrame:
    df = self.raw_df.copy()
    period_type = df.loc[0]['period'][0]
    if period_type == 'Q':  # quarterly, e.g. 'Q01'
        df['period'] = df['period'].str.replace('0', '', regex=False)
        df['date'] = df['year'].map(str) + '-' + df['period'].map(str)
        df['date'] = pd.to_datetime(df['date']).apply(lambda x: x.strftime('%Y-%m'))
    elif period_type == 'M':  # monthly, e.g. 'M01'
        df['period'] = df['period'].str.replace('M', '', regex=False)
        df['date'] = df['period'].map(str) + '-' + df['year'].map(str)
        df['date'] = pd.to_datetime(df['date'], format='%m-%Y').apply(lambda x: x.strftime('%Y-%m'))
    elif period_type == 'S':  # semiannual, e.g. 'S01' -> month 6 or 12
        df['period'] = df['period'].str.replace('S', '', regex=False)
        df['period'] = (df['period'].map(int) * 6).map(str)
        df['date'] = df['year'].map(str) + '-' + df['period'].map(str)
        df['date'] = pd.to_datetime(df['date']).apply(lambda x: x.strftime('%Y-%m'))
    elif period_type == 'A':  # annual
        df = df.rename(columns={'year': 'date'}, errors='raise')
    df = df.set_index('date')
    df = df.sort_index()
    df = df.drop(columns=['period', 'year'], errors='ignore')
    return df
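As a sanity check of the monthly branch, here is a self-contained sketch of what the transformation does to a hypothetical raw frame (the column names follow the method above; the data values are invented for illustration):

```python
import pandas as pd

# Hypothetical raw payload in the shape _organize_df expects (monthly series).
raw = pd.DataFrame({
    'year': [2020, 2020, 2019],
    'period': ['M02', 'M01', 'M12'],
    'value': [2.0, 1.0, 3.0],
})

df = raw.copy()
df['period'] = df['period'].str.replace('M', '', regex=False)
df['date'] = df['period'].map(str) + '-' + df['year'].map(str)
df['date'] = pd.to_datetime(df['date'], format='%m-%Y').apply(lambda x: x.strftime('%Y-%m'))
df = df.set_index('date').sort_index().drop(columns=['period', 'year'])
print(df.index.tolist())  # ['2019-12', '2020-01', '2020-02']
```

The 'YYYY-MM' string index sorts chronologically, which is what the `sort_index()` call relies on.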