/** Determinant of a 3x3 matrix.
*
* Computes the determinant of matrix m and stores the result in d.
*/
static final void DETERMINANT_3X3(final RefFloat d, mat3f m)
{
d.d = m.f[M(0,0)] * (m.f[M(1,1)]*m.f[M(2,2)] - m.f[M(1,2)] * m.f[M(2,1)]);
d.d -= m.f[M(0,1)] * (m.f[M(1,0)]*m.f[M(2,2)] - m.f[M(1,2)] * m.f[M(2,0)]);
d.d += m.f[M(0,2)] * (m.f[M(1,0)]*m.f[M(2,1)] - m.f[M(1,1)] * m.f[M(2,0)]);
} |
// Read reads data from the connection.
// No deadline is set if the Conn read timeout is the zero value.
// A deadline, defined as current time + read timeout, is set otherwise.
//
// See net.Conn.Read for more information.
func (c *conn) Read(b []byte) (int, error) {
if c.readTimeout != 0 {
if err := c.Conn.SetReadDeadline(time.Now().Add(c.readTimeout)); err != nil {
return 0, err
}
}
return c.Conn.Read(b)
} |
package org.aksw.sparqlify.admin.web.api;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
public class CollectionJpa<T> {
private EntityManagerFactory emf;
private Class<T> clazz;
public CollectionJpa(Class<T> clazz, EntityManagerFactory emf) {
this.emf = emf;
this.clazz = clazz;
}
public T get(Object id) {
EntityManager em = emf.createEntityManager();
try {
em.getTransaction().begin();
T result = em.find(clazz, id);
em.getTransaction().commit();
return result;
} finally {
// Roll back if find() or commit() threw, and always release the EntityManager.
if (em.getTransaction().isActive()) {
em.getTransaction().rollback();
}
em.close();
}
}
} |
High-Speed Arithmetic: Architecture of a Parallel Multiplier–Accumulator Based on the Radix-2 Modified Booth Algorithm
The sustained growth of VLSI technology is fuelled by the continued shrinking of transistors to ever smaller dimensions. The benefits of miniaturization are high packing densities, high circuit speed and low power dissipation. A binary multiplier is an electronic circuit, used in digital electronics such as computers, that multiplies two binary numbers and is built from binary adders. A fixed-width multiplier is attractive to many multimedia and digital signal processing systems, which need to maintain a fixed data format while allowing a minimum loss of output accuracy. This paper presents the design of high-accuracy modified Booth multipliers using a carry look-ahead adder. The high-accuracy fixed-width modified Booth multiplier satisfies the needs of applications such as digital filtering, arithmetic coding, wavelet transformation and echo cancellation. High-accuracy modified Booth multipliers can also be applied to lossy applications to reduce the area and power consumption of the whole system while maintaining good output quality. This project presents an efficient implementation of a high-speed multiplier using the shift-and-add method and the radix-2 and radix-4 modified Booth multiplier algorithms. Parallel multipliers such as the radix-2 and radix-4 modified Booth multipliers perform their computations using fewer adders and fewer iterative steps, and as a result they occupy less area than a serial multiplier. This is a very important criterion because the fabrication of chips for high-performance systems requires components that are as small as possible. |
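The radix-2 Booth recoding the abstract builds on can be sketched in software: the multiplier is scanned one bit pair at a time, and a trailing `01`/`10` pair triggers an add or subtract of the sign-extended multiplicand before an arithmetic right shift of the combined register. The sketch below is my own illustration, not the paper's hardware design; the function name, register widths, and the pure-Python modular arithmetic are assumptions for clarity. It shows the classic radix-2 recoding; the paper's radix-4 variant scans overlapping three-bit groups to halve the number of iterations.

```python
def booth_multiply(m, r, bits=8):
    """Multiply two `bits`-bit signed integers with radix-2 Booth recoding.

    The combined register P holds [A-part | multiplier | appended 0 bit].
    One extra sign bit on the multiplicand halves avoids overflow when
    m == -2**(bits-1).
    """
    mask = (1 << bits) - 1
    ext = (1 << (bits + 1)) - 1            # bits+1-wide two's complement mask
    total = 2 * bits + 2                   # full register width
    A = (m & ext) << (bits + 1)            # multiplicand, sign-extended
    S = ((-m) & ext) << (bits + 1)         # negated multiplicand
    P = (r & mask) << 1                    # multiplier with an appended 0 bit
    for _ in range(bits):
        pair = P & 0b11
        if pair == 0b01:                   # 01: add the multiplicand
            P += A
        elif pair == 0b10:                 # 10: subtract (add two's complement)
            P += S
        P &= (1 << total) - 1              # modular arithmetic in the register
        sign = P >> (total - 1)            # arithmetic shift right by one
        P = (P >> 1) | (sign << (total - 1))
    prod = (P >> 1) & ((1 << (2 * bits)) - 1)   # drop appended bit
    if prod >= 1 << (2 * bits - 1):             # reinterpret as signed
        prod -= 1 << (2 * bits)
    return prod
```

In hardware each iteration is one adder pass plus a shift, which is why reducing the iteration count (radix-4) or the adder depth (carry look-ahead) directly raises multiplier speed.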
def solve(vec_lst):
x_sum = 0
y_sum = 0
z_sum = 0
for vec in vec_lst:
x_sum += vec[0]
y_sum += vec[1]
z_sum += vec[2]
if x_sum != 0 or y_sum != 0 or z_sum != 0:
print("NO")
else:
print("YES")
n = int(input())
vec_lst = []
for _ in range(n):
    vector = [int(coord) for coord in input().split()]
    vec_lst.append(vector)
solve(vec_lst) |
Some of the eagle-eyed amongst you will have already noticed, but Elite Dangerous and Frontier Developments have been shortlisted for no less than FOUR Golden Joystick awards. They are...
Best Audio
Best moment (Hyperspace)
Studio of the Year
Best PC Game
Being nominated for any award is incredible, but the Golden Joysticks are special because they're voted for by you, the players. Just to be shortlisted in these categories is an absolute honour, but now we're asking you, our dedicated and passionate community, to lend us your votes and your voices.
Head over here to cast your votes and please do share on Facebook and Twitter.
The game and the awards simply aren't possible without our incredible and loyal community, which is why if we win an award we also want to award one to you. So, for a limited time, we will release a free gold Sidewinder Paint Job to everyone so that you can join in the celebrations too.
We'd also like to introduce the teams behind all the hard work that led to our being shortlisted, with Q&As, livestreams, and spotlight Meet the Team articles, so keep your eyes peeled for more info and don't forget to cast your votes! |
/**
* A {@link ConfigWriter} for protobuf format config files.
*/
public final class ProtoConfigWriter implements ConfigWriter {
private final OutputStream writer;
private final boolean writeAsText;
private final ConfigProto.Builder builder;
/**
* Constructs a writer for a protobuf config file.
*
* @param writer The writer to write to.
* @param writeAsText Write as text instead of binary.
*/
public ProtoConfigWriter(OutputStream writer, boolean writeAsText) {
this.writer = writer;
this.writeAsText = writeAsText;
this.builder = ConfigProto.newBuilder();
}
@Override
public void writeStartDocument() throws ConfigWriterException {
// No-op
}
@Override
public void writeEndDocument() throws ConfigWriterException {
// No-op
}
@Override
public void writeGlobalProperties(Map<String, String> props) throws ConfigWriterException {
builder.putAllProperties(props);
}
@Override
public void writeSerializedObjects(Map<String, SerializedObject> map) throws ConfigWriterException {
for (Map.Entry<String, SerializedObject> e : map.entrySet()) {
SerializedObject<?> serObj = e.getValue();
SerializedObjectProto.Builder serObjBuilder = SerializedObjectProto.newBuilder();
serObjBuilder.setName(serObj.getName());
serObjBuilder.setType(serObj.getClassName());
serObjBuilder.setLocation(serObj.getLocation());
builder.addSerializedObject(serObjBuilder.build());
}
}
@Override
public void writeStartComponents() throws ConfigWriterException {
// No-op
}
@Override
public void writeComponent(Map<String, String> attributes, Map<String, Property> properties) {
ComponentProto.Builder componentBuilder = ComponentProto.newBuilder();
componentBuilder.setName(attributes.get(ConfigLoader.NAME));
componentBuilder.setType(attributes.get(ConfigLoader.TYPE));
// Boolean.parseBoolean tolerates a null argument, unlike calling
// equalsIgnoreCase on a possibly absent attribute.
if (Boolean.parseBoolean(attributes.get(ConfigLoader.EXPORT))) {
componentBuilder.setExportable(true);
}
if (Boolean.parseBoolean(attributes.get(ConfigLoader.IMPORT))) {
componentBuilder.setImportable(true);
}
if (attributes.containsKey(ConfigLoader.ENTRIES)) {
componentBuilder.setEntries(attributes.get(ConfigLoader.ENTRIES));
}
if (attributes.containsKey(ConfigLoader.LEASETIME)) {
componentBuilder.setLeaseTime(Long.parseLong(attributes.get(ConfigLoader.LEASETIME)));
}
if (attributes.containsKey(ConfigLoader.SERIALIZED)) {
componentBuilder.setSerialized(attributes.get(ConfigLoader.SERIALIZED));
}
for (Map.Entry<String, Property> property : properties.entrySet()) {
String key = property.getKey();
Property value = property.getValue();
if (value instanceof ListProperty) {
//
// Must be a string/component list
PropertyListProto.Builder listBuilder = PropertyListProto.newBuilder();
listBuilder.setName(key);
for (SimpleProperty s : ((ListProperty) value).getSimpleList()) {
listBuilder.addItem(s.getValue());
}
for (Class<?> c : ((ListProperty) value).getClassList()) {
listBuilder.addType(c.getName());
}
componentBuilder.addListProperty(listBuilder.build());
} else if (value instanceof MapProperty) {
//
// Must be a string,string map
PropertyMapProto.Builder mapBuilder = PropertyMapProto.newBuilder();
mapBuilder.setName(key);
for (Map.Entry<String, SimpleProperty> e : ((MapProperty) value).getMap().entrySet()) {
mapBuilder.putElements(e.getKey(), e.getValue().getValue());
}
componentBuilder.addMapProperty(mapBuilder.build());
} else {
//
// Standard property
componentBuilder.putProperties(key, value.toString());
}
}
builder.addComponents(componentBuilder.build());
}
@Override
public void writeEndComponents() throws ConfigWriterException {
// No-op
}
@Override
public void close() throws ConfigWriterException {
ConfigProto proto = builder.build();
try {
if (writeAsText) {
PrintStream stream = new PrintStream(writer);
stream.println(proto.toString());
stream.close();
} else {
proto.writeTo(writer);
writer.close();
}
} catch (IOException e) {
throw new ConfigWriterException(e);
}
}
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*
*/
package org.jboss.test.ldap;
import java.util.ArrayList;
import java.util.List;
import com.beust.jcommander.Parameter;
/**
* Command line arguments for ldap-server.
*/
public class CLIArguments {
public static final String DEFAULT_ADDR = "0.0.0.0";
public static final int DEFAULT_PORT = 10389;
@Parameter(description = "[LDIFs to import]")
private final List<String> ldifFiles = new ArrayList<String>();
@Parameter(names = { "--help", "-h" }, description = "shows this help and exits", help = true)
private boolean help;
@Parameter(names = { "--allow-anonymous", "-a" }, description = "allows anonymous bind to the server")
private boolean allowAnonymous;
@Parameter(names = { "--port",
"-p" }, description = "takes [portNumber] as a parameter and binds the LDAP server on that port")
private int port = DEFAULT_PORT;
@Parameter(names = { "--bind",
"-b" }, description = "takes [bindAddress] as a parameter and binds the LDAP server on the address")
private String bindAddress = DEFAULT_ADDR;
@Parameter(names = { "--ssl-port",
"-sp" }, description = "adds SSL transport layer (i.e. 'ldaps' protocol). It takes [portNumber] as a parameter and binds the LDAPs server on the port")
private Integer sslPort = null;
@Parameter(names = { "--ssl-need-client-auth", "-snc" }, description = "enables SSL 'needClientAuth' flag")
private boolean sslNeedClientAuth;
@Parameter(names = { "--ssl-want-client-auth", "-swc" }, description = "enables SSL 'wantClientAuth' flag")
private boolean sslWantClientAuth;
@Parameter(names = { "--ssl-enabled-protocol",
"-sep" }, description = "takes [sslProtocolName] as argument and enables it for 'ldaps'. Can be used multiple times."
+ " If the argument is not provided following are used: TLSv1, TLSv1.1, TLSv1.2")
private List<String> sslEnabledProtocols;
@Parameter(names = { "--ssl-enabled-ciphersuite", "-scs" }, description = "takes [sslCipherSuite] as argument and enables it for 'ldaps'. Can be used multiple times.")
private List<String> sslCipherSuite;
@Parameter(names = { "--ssl-keystore-file", "-skf" }, description = "takes keystore [filePath] as argument. The keystore should contain privateKey to be used by LDAPs")
private String sslKeystoreFile;
@Parameter(names = { "--ssl-keystore-password", "-skp" }, description = "takes keystore [password] as argument")
private String sslKeystorePassword;
public List<String> getLdifFiles() {
return ldifFiles;
}
public boolean isHelp() {
return help;
}
public int getPort() {
return port;
}
public String getBindAddress() {
return bindAddress;
}
public boolean isAllowAnonymous() {
return allowAnonymous;
}
public Integer getSslPort() {
return sslPort;
}
public boolean isSslNeedClientAuth() {
return sslNeedClientAuth;
}
public boolean isSslWantClientAuth() {
return sslWantClientAuth;
}
public List<String> getSslEnabledProtocols() {
return sslEnabledProtocols;
}
public List<String> getSslCipherSuite() {
return sslCipherSuite;
}
public String getSslKeystoreFile() {
return sslKeystoreFile;
}
public String getSslKeystorePassword() {
return sslKeystorePassword;
}
}
|
In college, I was in a sort of “future broadcasters” club with a bunch of other students who were looking to do television hosting, news anchoring, radio announcing, podcasting, etc. The majority of us were women, so we often talked about gender-specific issues. When Gretchen Carlson sued Roger Ailes for sexual harassment earlier this month, I wasn’t shocked; my classmates and I had learned from an Emmy-winning producer just how common behavior like Carlson described was in newsrooms across the country.
While at this point we simply don't know whether the allegations are true, we also learned that if we ever experienced harassment of any sort, we should go to our boss immediately. In fact, in the nine places I've worked, I was told to do that every time. At Barnes & Noble, my managers asked a man who was saying lewd things to me to leave the premises. At the spa where I worked reception in college, the owner believed me the moment I told her we were getting creepy phone calls. She helped me create a list of the phone numbers making the calls. When one of them called the next time, she picked up and let them have it. That gave me the impression I was valued. My safety was prioritized above a sale.
Since the second wave of feminism in the 1960s and ’70s, women have been infiltrating and dominating the workforce through nearly every field, but it’s no secret that walking into a well-established boys’ club can be a dangerous gamble. That doesn’t mean it shouldn’t be done. One of the important roles of a manager in the workplace is to provide support to employees who need it. Women who believe they were harassed on the job are employees who need it. Bosses should be the first line of defense against an alleged hostile work environment.
That’s why what Neil Cavuto did this week is so unconscionable. He defended Ailes against Carlson’s allegations and as someone who has a show on Fox News Channel, he joined ranks with a host of other employees of the network. Unlike Harris Faulkner or Greta Van Susteren or Geraldo Rivera, though, he isn’t just a host. According to the Fox website, he is also senior vice president and managing editor of not only Fox News Channel, but Fox Business, too.
That means Cavuto is someone’s boss. He is a lot of people’s boss. Many of them are undoubtedly women. By choosing to put more effort into his role as Ailes’ employee than his role as someone’s boss even though he could have chosen to stay quiet and not take sides, what message is he sending to those women? He’s certainly not sending the message that he is there for them, that he would believe them if they spoke out about harassment or assault, that he would at least reserve judgment until after an outside law firm completed their investigation or the case made its way through the courts, that he prioritizes his employees. He’s certainly not sending the message that he is a good leader. He is sending the message that he is a good follower.
The fear of not being believed is one of the greatest deterrents for women who are considering coming forward with their stories of rape, harassment, or assault. I don’t need to tell you this. You’ve seen the headlines. Television commentators, pundits, defense lawyers, and Internet commenters take turns musing about why Bill Cosby‘s accusers didn’t come forward sooner and if we, as a society, don’t believe dozens of rape accusers, why would we believe individual harassment accusers?
Beyond that, they use dubious sources to aggressively and publicly attempt to discredit other journalists with sexual harassment claims. They allow similar stories of harassment at other networks to go under the radar and disappear. They question why a star like Kesha would stay quiet for so long if she were really being sexually abused by her producer. They celebrate when a rape allegation turns out to be false and go to absurd lengths to bust the story open in a way you know they wouldn't if it were, say, a false claim of robbery. They fill my Twitter mentions and the mentions of other outspoken victims' advocates with messages about how they don't believe Carlson because she doesn't strike them as hot enough to harass. The underlying message here is that women are not believed.
That’s a cultural problem. The way to fix cultural problems is to have leaders address them and show other people the way forward. Leaders can be anyone from a politician to a celebrity to — you guessed it — a boss. Bosses are given more responsibility in the workplace because it is their job to lead. They are trusted to do the right thing and to lead by example. Cavuto didn’t do that. Not because he is expected to necessarily support Carlson, but by speaking out in favor of Ailes even before NewsCorp concludes its own internal investigation, he played judge. He upheld the cultural problem. He let his female employees know what side he would be on if they ever came forward with a similar story.
In my broadcasting club, we didn’t go over what would happen in a situation like this. We weren’t told what to do if our bosses gave public statements condemning respected women and calling their stories of harassment “nonsense.” We weren’t told what sort of reverberations that might cause among our ranks. We were only told that if we were harassed, we should always tell our bosses right away.
[image via screengrab]
For more from Lindsey: Twitter. Facebook.
This is an opinion piece. The views expressed in this article are those of just the author. |
News Corporation, run by billionaire Rupert Murdoch, has formally entered into the process to buy Frank McCourt's Los Angeles Dodgers, according to the Wall Street Journal. News Corp.'s Fox unit is interested in buying a 15% to 20% piece of the MLB team.
The move by News Corp., which owned the Dodgers from 1998 to 2004, is seen as an effort to keep the television rights from rival Time Warner. The team's current local broadcasting rights with Fox's Prime Ticket expire after the 2013 season. McCourt bought the Dodgers and their stadium for $355 million in 2004. The Dodgers and Fox recently came to an agreement whereby the team will abide by the broadcaster's current rights to an exclusive negotiating period later this year to bid for the team's future television rights, and Fox will not interfere with the sale of the Dodgers in bankruptcy court.
Under News Corp. the Dodgers lost a lot of money, but the media company also likely underpaid for the team's television rights. Fox paid the team $15 million in 2003 to show 80 games (the other games were carried on a local station). But in 2004, after McCourt bought the Dodgers and more games were moved to Fox, the team's rights fee increased to $25 million. In 2013, the last year of the contract, the Dodgers are to receive $39 million.
In the nearby video with Rich Brand, a partner at Arent Fox LLP who has worked on media deals with several professional sports teams, the attorney discusses the value of the team's future media rights, likely to be in excess of the multi-billion deal recently announced for the Los Angeles Angels of Anaheim.
Whoever joins Fox in the bidding will have an obvious advantage in buying the Dodgers, given News Corp.'s deep pockets and its ties to MLB through its national broadcasting deal. |
Enhanced therapeutic effect of cis-diamminedichloroplatinum(II) against nude mouse grown human pancreatic adenocarcinoma when combined with 1-beta-D-arabinofuranosylcytosine and caffeine.
We demonstrated previously that the effect of cis-diamminedichloroplatinum(II) (cisplatin) against pancreatic cancer was substantially enhanced by the addition to the chemotherapeutic regimen of 1-beta-D-arabinofuranosylcytosine and caffeine. To obtain information on the factors influencing tumor response to this combination treatment, we investigated two adenocarcinomas of the exocrine pancreas grown in the nude mouse, tumors Capan-1 and SW-1990. Tumor response to cisplatin, characterized by tumor regression and tumor growth arrest, was observed when it was given in the upper limits of tolerance (5 mg/kg). Caffeine and 1-beta-D-arabinofuranosylcytosine singly and in combination had no effect on tumor growth; neither did they influence the effect of cisplatin when combined singly with the latter. However, the triple combination of cisplatin, 1-beta-D-arabinofuranosylcytosine, and caffeine resulted in complete tumor regression. The enhancing effect of the triple combination depended on tumor sensitivity to cisplatin and the amount of cisplatin administered and required rather large amounts of caffeine. The present report indicates that certain combination regimens may enhance the therapeutic effect of cisplatin against pancreatic carcinoma. |
def AdaptCursorOffsetIfNeeded( sanitized_html, cursor_offset ):
preceding_angle_bracket_index = cursor_offset
while True:
if preceding_angle_bracket_index < 0:
return cursor_offset
char = sanitized_html[ preceding_angle_bracket_index ]
if preceding_angle_bracket_index != cursor_offset and char == '>':
return cursor_offset
if char == '<':
break
preceding_angle_bracket_index -= 1
tag = Tag( TAG_REGEX.match( sanitized_html,
preceding_angle_bracket_index ) )
if not tag:
return cursor_offset
if tag.kind == TagType.OPENING:
return tag.end_offset
return tag.start_offset |
France's glacial pace of reform could push the eurozone to "breaking point" if another crisis hits the 18-nation bloc, a leading think tank has warned.
The Centre for Economics and Business Research (CEBR) said that although the rest of the eurozone – including peripheral economies of Portugal, Italy, Ireland, Greece and Spain – was heading towards a tentative recovery, France's big budget deficit, stubbornly high unemployment and chronic competitiveness problem meant Europe's second-largest economy was at risk of being plunged into another crisis, with grave consequences for the rest of the currency area.
"The risk that the French economy will lag behind its neighbours in enacting reforms is not a trivial one," said Danae Kyriakopoulou, an economist at the CEBR, and co-author of the new report. "Though the troubles of the region's periphery have so far proved manageable, a crisis in [France] could have dramatic consequences for the viability of the currency union and push the eurozone to breaking point."
French president François Hollande has introduced a sweeping package of supply side reforms aimed at boosting productivity and reviving growth. Earlier this year, he announced a multi-billion euro package of spending cuts he said would pave the way for a reduction in business taxes.
However, his popularity has plummeted since his election two years ago amid record joblessness, with 3.4m people now unemployed. Meanwhile, the budget deficit, at 4.3pc of gross domestic product, remains well above the European Union limit of 3pc.
Ms Kyriakopoulou said "low political capital" among French politicians had delayed fiscal consolidation. However, there was little room left for manoeuvre. "France has come under a lot of pressure from the EU already. At some point there will be nowhere left to kick the can, so they'll have to do something," she said.
The think tank expects the French economy, which stagnated in the first quarter of this year, to grow by almost 5pc over the next five years, compared with 4.6pc in both Italy and Spain.
Britain, which is forecast to be the fastest-growing economy in the G7 this year, will expand by 7.7pc between now and 2019, the CEBR predicts, and eurozone powerhouse Germany is expected to grow by 9.2pc.
Ms Kyriakopoulou said that even though the size of the French economy recovered to pre-crisis levels in 2011, long before the UK, GDP per person would not recover until 2017, a year after the UK, which only recently surpassed its pre-crisis peak. Greece and Ireland, which shrank massively during the financial crisis, are expected to grow by 15pc and 11pc respectively.
Mario Draghi, the president of the European Central Bank, warned last week that the recovery in Europe remained "weak, fragile and uneven", with tensions between Russia and the West presenting big downside risks for Europe.
The bloc is also dogged by the threat of deflation, which would make it harder for peripheral countries to reduce their huge debt piles, despite stronger growth. Inflation at present stands at only 0.4pc.
The CEBR said that despite the "dynamic comeback" of countries such as Greece and Ireland, they would be haunted by the eurozone crisis for years to come. |
/*
* Copyright 2017, Haiku, Inc. All Rights Reserved.
* Distributed under the terms of the MIT License.
*
* Authors:
* <NAME> <<EMAIL>>
*/
#include <unicode/uversion.h>
#include <RelativeDateTimeFormat.h>
#include <stdlib.h>
#include <time.h>
#include <unicode/gregocal.h>
#include <unicode/reldatefmt.h>
#include <unicode/utypes.h>
#include <ICUWrapper.h>
#include <Language.h>
#include <Locale.h>
#include <LocaleRoster.h>
#include <TimeUnitFormat.h>
static const URelativeDateTimeUnit kTimeUnitToRelativeDateTime[] = {
UDAT_REL_UNIT_YEAR,
UDAT_REL_UNIT_MONTH,
UDAT_REL_UNIT_WEEK,
UDAT_REL_UNIT_DAY,
UDAT_REL_UNIT_HOUR,
UDAT_REL_UNIT_MINUTE,
UDAT_REL_UNIT_SECOND,
};
static const UCalendarDateFields kTimeUnitToICUDateField[] = {
UCAL_YEAR,
UCAL_MONTH,
UCAL_WEEK_OF_MONTH,
UCAL_DAY_OF_WEEK,
UCAL_HOUR_OF_DAY,
UCAL_MINUTE,
UCAL_SECOND,
};
BRelativeDateTimeFormat::BRelativeDateTimeFormat()
: Inherited()
{
Locale icuLocale(fLanguage.Code());
UErrorCode icuStatus = U_ZERO_ERROR;
fFormatter = new RelativeDateTimeFormatter(icuLocale, icuStatus);
if (fFormatter == NULL) {
fInitStatus = B_NO_MEMORY;
return;
}
fCalendar = new GregorianCalendar(icuStatus);
if (fCalendar == NULL) {
fInitStatus = B_NO_MEMORY;
return;
}
if (!U_SUCCESS(icuStatus))
fInitStatus = B_ERROR;
}
BRelativeDateTimeFormat::BRelativeDateTimeFormat(const BLanguage& language,
const BFormattingConventions& conventions)
: Inherited(language, conventions)
{
Locale icuLocale(fLanguage.Code());
UErrorCode icuStatus = U_ZERO_ERROR;
fFormatter = new RelativeDateTimeFormatter(icuLocale, icuStatus);
if (fFormatter == NULL) {
fInitStatus = B_NO_MEMORY;
return;
}
fCalendar = new GregorianCalendar(icuStatus);
if (fCalendar == NULL) {
fInitStatus = B_NO_MEMORY;
return;
}
if (!U_SUCCESS(icuStatus))
fInitStatus = B_ERROR;
}
BRelativeDateTimeFormat::BRelativeDateTimeFormat(const BRelativeDateTimeFormat& other)
: Inherited(other),
fFormatter(other.fFormatter != NULL
? new RelativeDateTimeFormatter(*other.fFormatter) : NULL),
fCalendar(other.fCalendar != NULL
? new GregorianCalendar(*other.fCalendar) : NULL)
{
if ((fFormatter == NULL && other.fFormatter != NULL)
|| (fCalendar == NULL && other.fCalendar != NULL))
fInitStatus = B_NO_MEMORY;
}
BRelativeDateTimeFormat::~BRelativeDateTimeFormat()
{
delete fFormatter;
delete fCalendar;
}
status_t
BRelativeDateTimeFormat::Format(BString& string,
const time_t timeValue) const
{
if (fFormatter == NULL)
return B_NO_INIT;
time_t currentTime = time(NULL);
UErrorCode icuStatus = U_ZERO_ERROR;
fCalendar->setTime((UDate)currentTime * 1000, icuStatus);
if (!U_SUCCESS(icuStatus))
return B_ERROR;
UDate UTimeValue = (UDate)timeValue * 1000;
int delta = 0;
int offset = 1;
URelativeDateTimeUnit unit = UDAT_REL_UNIT_SECOND;
for (int timeUnit = 0; timeUnit <= B_TIME_UNIT_LAST; ++timeUnit) {
delta = fCalendar->fieldDifference(UTimeValue,
kTimeUnitToICUDateField[timeUnit], icuStatus);
if (!U_SUCCESS(icuStatus))
return B_ERROR;
if (abs(delta) >= offset) {
unit = kTimeUnitToRelativeDateTime[timeUnit];
break;
}
}
UnicodeString unicodeResult;
// Note: icu::RelativeDateTimeFormatter::formatNumeric() was introduced in
// ICU 57 and is part of the ICU draft API, so it may change in future versions.
fFormatter->formatNumeric(delta, unit, unicodeResult, icuStatus);
if (!U_SUCCESS(icuStatus))
return B_ERROR;
BStringByteSink byteSink(&string);
unicodeResult.toUTF8(byteSink);
return B_OK;
}
|
/// Check a filepath to see if it is an existing and valid KWFD file
/**
* If a feature_descriptor_io algorithm is not specified only check that the
* file exists. Otherwise read the file and make sure it is valid.
*/
bool valid_feature_file_exists( std::string const& filepath,
kwiver::vital::frame_id_t frame,
kwiver::vital::algo::feature_descriptor_io_sptr fd_io )
{
if( ST::FileExists( filepath ) )
{
if( !fd_io )
{
LOG_INFO( main_logger, "Skipping frame " << frame <<
", output exists: " << filepath );
return true;
}
try
{
kwiver::vital::feature_set_sptr feat;
kwiver::vital::descriptor_set_sptr desc;
fd_io->load(filepath, feat, desc);
if( feat && feat->size() > 0 && desc && desc->size() > 0 )
{
LOG_INFO( main_logger, "Skipping frame " << frame <<
", output exists: " << filepath <<
"\nfile contains " << feat->size() << " features, "
<< desc->size() << " descriptors" );
return true;
}
}
catch(...)
{
LOG_WARN( main_logger, "Not able to load " << filepath << ", recomputing" );
}
}
return false;
} |
#include<bits/stdc++.h>
using namespace std;
typedef long long int ll;
int main()
{
ll n,k,s;
cin>>n>>k>>s;
ll diff=n-1;
if(diff*k<s || k>s)
{
cout<<"NO"<<endl;
}
else
{
cout<<"YES"<<endl;
vector<int> ans;
for(int i=0;i<k;i++)ans.push_back(diff);
ll sum=diff*k;
int j=k-1;
while(1)
{
if(sum-s>(diff-1))
{
ans[j]=1;
sum-=(diff-1);
j--;
}
else
{
ans[j]-=sum-s;
j--;
break;
}
}
vector<int> res;
res.push_back(1+ans[0]);
for(int i=1;i<k;i++)
{
if(i%2==1)
{
res.push_back(res[i-1]-ans[i]);
}
else
{
res.push_back(res[i-1]+ans[i]);
}
}
for(int i=0;i<k;i++)
{
cout<<res[i]<<" ";
}
cout<<endl;
}
return 0;
} |
/**
* \file randomservice/randomservice_instance_create.c
*
* \brief Create a randomservice instance.
*
* \copyright 2019 Velo Payments, Inc. All rights reserved.
*/
#include <agentd/randomservice/private/randomservice.h>
#include <cbmc/model_assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <vpr/parameters.h>
#include "randomservice_internal.h"
/* forward decls */
static void randomservice_instance_dispose(void* disposable);
/**
* \brief Create a randomservice instance.
*
* \param random File descriptor pointing to /dev/random.
*
* \returns a properly created randomservice instance, or NULL on failure.
*/
randomservice_root_context_t* randomservice_instance_create(int random)
{
/* parameter sanity check. */
MODEL_ASSERT(random >= 0);
/* allocate memory for the instance. */
randomservice_root_context_t* instance = (randomservice_root_context_t*)
malloc(sizeof(randomservice_root_context_t));
if (NULL == instance)
{
return NULL;
}
/* clear the instance. */
memset(instance, 0, sizeof(randomservice_root_context_t));
/* set the dispose method. */
instance->hdr.dispose = &randomservice_instance_dispose;
/* set the random handle. */
instance->random_fd = random;
/* success. */
return instance;
}
/**
* \brief Dispose of a randomservice instance.
*
* \param disposable The instance to dispose.
*/
static void randomservice_instance_dispose(void* disposable)
{
randomservice_root_context_t* instance =
(randomservice_root_context_t*)disposable;
/* parameter sanity check. */
MODEL_ASSERT(NULL != instance);
/* clear the structure. */
memset(instance, 0, sizeof(randomservice_root_context_t));
}
|
package br.dominioL.estruturados.testes;
import static org.hamcrest.Matchers.*;
import static org.junit.Assert.*;
import org.junit.Before;
import org.junit.Test;
import br.dominioL.estruturados.elemento.primitivos.Booleano;
import br.dominioL.estruturados.elemento.primitivos.Numero;
import br.dominioL.estruturados.elemento.primitivos.Texto;
import br.dominioL.estruturados.json.ConstrutorJson;
import br.dominioL.estruturados.json.Json;
import br.dominioL.estruturados.json.ListaJson;
import br.dominioL.estruturados.json.ObjetoJson;
public final class TesteConstrutorJson {
private ObjetoJson objetoComUmLucasFalso;
private ObjetoJson objetoVazio;
private ObjetoJson objetoComObjetoVazio;
private ListaJson listaVazia;
private ListaJson listaComUmLucasFalso;
private ListaJson listaComElementosSimplesEObjetoComElementosSimples;
@Before
public void criarFigurantes() {
objetoComUmLucasFalso = Json.criarObjeto(Texto.criar("{ \"numero\": 1, \"texto\": \"Lucas\", \"booleano\": false }"));
objetoVazio = Json.criarObjeto(Texto.criar("{}"));
objetoComObjetoVazio = Json.criarObjeto(Texto.criar("{ \"identificador\": {}}"));
listaVazia = Json.criarLista(Texto.criar("[]"));
listaComUmLucasFalso = Json.criarLista(Texto.criar("[ 1, \"Lucas\", false ]"));
listaComElementosSimplesEObjetoComElementosSimples = Json.criarLista(Texto.criar("[ 1, \"Lucas\", false, { \"numero\": 1, \"texto\": \"Lucas\", \"booleano\": false } ]"));
}
@Test
public void construirObjetoVazio() {
ObjetoJson objetoConstruido = ConstrutorJson.deObjeto().construir();
assertThat(objetoConstruido, is(equalTo(objetoVazio)));
}
@Test
public void construirListaVazia() {
ListaJson listaConstruida = ConstrutorJson.deLista().construir();
assertThat(listaConstruida, is(equalTo(listaVazia)));
}
@Test
public void construirObjetoComElementosSimplesUtilizandoIdentificadoresTexto() {
ObjetoJson objetoConstruido = ConstrutorJson.deObjeto()
.inserir(Texto.criar("numero"), Numero.um())
.inserir(Texto.criar("texto"), Texto.criar("Lucas"))
.inserir(Texto.criar("booleano"), Booleano.falso())
.construir();
assertThat(objetoConstruido, is(equalTo(objetoComUmLucasFalso)));
}
@Test
public void construirObjetoComElementosSimplesUtilizandoIdentificadoresString() {
ObjetoJson objetoConstruido = ConstrutorJson.deObjeto()
.inserir("numero", Numero.um())
.inserir("texto", Texto.criar("Lucas"))
.inserir("booleano", Booleano.falso())
.construir();
assertThat(objetoConstruido, is(equalTo(objetoComUmLucasFalso)));
}
@Test
public void construirObjetoComObjetoVazioDentro() {
ObjetoJson objetoConstruido = ConstrutorJson.deObjeto()
.inserir("identificador", ConstrutorJson.deObjeto()
.construir())
.construir();
assertThat(objetoConstruido, is(equalTo(objetoComObjetoVazio)));
}
@Test
public void construirListaComElementosSimples() {
ListaJson construido = ConstrutorJson.deLista()
.inserir(Numero.um())
.inserir(Texto.criar("Lucas"))
.inserir(Booleano.falso())
.construir();
assertThat(construido, is(equalTo(listaComUmLucasFalso)));
}
@Test
public void construirListaComElementosSimplesEObjetoComElementosSimples() {
ListaJson listaConstruida = ConstrutorJson.deLista()
.inserir(Numero.um())
.inserir(Texto.criar("Lucas"))
.inserir(Booleano.falso())
.inserir(ConstrutorJson.deObjeto()
.inserir(Texto.criar("numero"), Numero.um())
.inserir(Texto.criar("texto"), Texto.criar("Lucas"))
.inserir(Texto.criar("booleano"), Booleano.falso())
.construir())
.construir();
assertThat(listaConstruida, is(equalTo(listaComElementosSimplesEObjetoComElementosSimples)));
}
}
|
Conventional wisdom holds that many of the favorite silent movie actors who failed to survive the transition to sound films—or talkies—in the late-1920s/early-1930s were done in by voices in some way unsuited to the new medium. Talkies are thought to have ruined the career of John Gilbert, for instance, because his “squeaky” voice did not match his on-screen persona as a leading male sex symbol. Audiences reportedly laughed the first time they heard Gilbert’s voice on screen. And in the case of the late silent era’s most popular female performer, the original “It girl” Clara Bow, a voice sometimes described as a “honk,” along with a strong Brooklyn accent and careless diction are often said to have forced her into retirement at the relatively young age of twenty-eight.
The real issue, however, was less the voices than the essence of the art. Despite our early-twenty-first century use of the word “movie” to refer to any cinematic production, silent movies and talkies differed substantially. Where talkies relied upon spoken words to communicate plot, ideas, and emotions, silent movies communicated visually using, as the name suggests, pictures that moved. In short, as seeing differs from hearing, so too movies differed from talkies.
The essence of silent movies was visual. As one observer at the time put it, in silent movies “People are doing something. We see them do it; even if they are only thinking or feeling…, we still see it.” Writing a movie column for a Chicago newspaper in the 1920s, Carl Sandburg marveled especially at actor Charlie Chaplin’s ability to convey complex ideas and emotions “with shrugs, smiles, solemnities, insinuations, blandishments.” Chaplin’s visual “sentences” were so “alive with gesture and intonation,” that Sandburg—one of the great American writers—could not imagine “reproduc[ing] any story Charlie Chaplin tells verbally.”
The visual nature of silent cinema made it particularly interesting to deaf Americans, for whom visual forms of communication were more natural than audible ones. In fact, historian John Schuchman argues that the silent movie era “represents the only time in the cultural history of the United States when deaf persons could participate in one of the performing arts with their hearing peers on a comparatively equal basis.” Alice T. Terry, a leading figure promoting a uniquely Deaf cultural viewpoint in the early-twentieth century, explained why: “[T]he movies [are] pre-eminently the place for pantomime or signs;” they have “no use for speech and lip-reading.”
On the other hand, deaf Americans in the late-1920s understood better than anyone the vulnerability of visual, non-verbal and non-audible, forms of communication confronted by the forces of normality. Beginning after the American Civil War, a number of individuals interested in deaf education—people such as telephone inventor Alexander Graham Bell—stressed the need to eliminate sign language and other visual/manual forms of communication from the nation’s deaf schools in favor of teaching verbal/audible communication in English. At the risk of over-simplification, their reasons—which included evolutionary theory, eugenics, and assimilation—boil down to an assertion that oralism could effectively cure deafness enabling deaf people to live more effectively in “the normal world.”
By the end of the 1920s—a decade famously proclaimed an era for “normalcy” by then presidential candidate Warren G. Harding—the oralist triumph in deaf education was nearly complete. According to data compiled in 1928, over 90 percent of deaf children received some or all of their education via oral methods and just two exclusively manualist schools remained (both were segregated institutions for the African American deaf). Oralists were much less successful reaching people outside the classroom, however. Sign language stayed alive in the Deaf community until the pendulum swung back in the manualist direction later in the twentieth century.
The normalizing attitudes and beliefs that underpinned oralist efforts in deaf education also afflicted ideas about modes of communication at the cinema. Even before talkies became a reality, the world depicted in silent movies was being described as “a dessicated [sic] and dehumanized world, from which all intrinsic worth has departed.” Similarly, the premiere of The Jazz Singer (1927)—widely considered the key film in the transition to talkies—was hailed at the time as “an upward step in [human] racial development which is dependent, absolutely, on the arts of communication.” The silent productions which continued to be produced during the next few years soon began to be referred to as “dummies” or “dumbies.” This, of course, recalls a common slur directed at deaf people, implying that a lack of oral and audible communication was connected to a lack of intelligence.
As in deaf education, the shift from movies to talkies met resistance. Charlie Chaplin was the most prominent resister, insisting that “I can say far more with a gesture than I can with words” and vowing “never [to] use dialogue in my pictures.” “It stands to reason,” added “It girl” Clara Bow, “that you can’t act as well when you have lines to think about, particularly those of us who have never been trained to talk while we act.” No less a movie authority than inventor Thomas A. Edison concurred, lamenting that because talkies required actors to “concentrate on the voice” they had “forgotten how to act.”
Nevertheless, within three years of The Jazz Singer’s appearance, virtually all new production of silent movies in Hollywood ceased. Eventually even Chaplin relented and began using audible dialogue to tell his filmed stories—though not until 1936. In the years since, only The Artist (2011) stands out as an attempt to make a true movie, where the pictures and actions of the performers, rather than their spoken dialogue, tell the story, but even the success of that Academy Award winning picture has not spawned copycats.
The movies had been cured of their deafness—or at least taught to “talk like living people,” as an advertisement for one theater sound system put it—but at what cost?
Featured image: Joyce Compton and Clara Bow in The Wild Party. Public Domain via Wikimedia Commons. |
REPRESENTING AND RECOGNIZING POINT OF VIEW Warren Sack
A representation of ideological point of view is articulated and a method for detecting the point(s) of view expressed in a news story is described. A version of the method, actor-role analysis, is encoded in a computer program, SpinDoctor, which can automatically detect the point(s) of view represented in some news stories. SpinDoctor is a computer program designed to detect ideological point of view¹ in news stories. To detect point of view, SpinDoctor implements a critical reading strategy called actor-role analysis (Sack, 1994a; Sack, 1994b). Actor-role analysis was developed around the following observation: one means of detecting point of view is to examine how certain people, who appear again and again in the news (i.e., news actors), are described or portrayed (i.e., are assigned roles). Thus, for example, if one is given a news story which mentions Oliver North, (in)famous for his role in the Iran-Contra affair and a recent senatorial campaign, and the story assigns North the role of patriot (via the use of certain adjectives and verbs), one can be quite certain that the point of view expressed in the story is significantly further to the right (in the spectrum of US politics) than that expressed by another news story which assigns North the role of villain or criminal. Two aspects of actor-role analysis, as it is implemented in the SpinDoctor system, might be of especial interest to researchers concerned with textual analysis: (1) A representation for ideological point of view: Although practically all contemporary AI systems for NLP are capable of finding actors and roles in texts to fill in scripts, frames, or templates (e.g., Jacobs and Rau, 1993), none of these systems assign any political significance to the pairing of certain actors with certain roles (e.g., North as patriot versus North as criminal). In contrast, I maintain that sets of actor-role pairs are an interesting and implementable representation for differing ideological points of view.
The proposed actor-role representation of ideological point of view accords with some recent work by Lakoff (1991) and generalizes and improves upon previous AI work on the representation of ideology (e.g., Abelson and Carroll, 1966; Carbonell, 1978). (2) An algorithm for anaphoric resolution: Actor-role analysis incorporates a new anaphoric resolution algorithm. It is shown how careful attention to actor-role pairings assists in the resolution of anaphoric reference. By noting, for example, that an instance of the pronoun "he" is cast in the role of victim and that, earlier in the same story, "Lieut. Rodriguez" is also cast in the role of victim, SpinDoctor postulates the resolution of the instance of the pronoun "he" to the proper name "Lieut. Rodriguez." SpinDoctor uses three main data structures: (1) Actors are collections of noun phrases and pronouns hierarchically arranged into groups (e.g., "President Cristiani" is declared to be a member of the actor group "government officials" which, in turn, is declared to be a part of a larger group called simply the "government" that also contains "Army officers" and others). (2) Roles are packets of verbs, adjectives, and adverbs and some associated syntactic constraints. Thus, for example, the role criminal is associated with the subject of the verb "kidnap" while the role victim is associated with the object of the verb. (3) Points of View are sets of actor-role pairings.

¹ Ideological point of view characterizes the political slant of an entire story; it is different from psychological point of view (e.g., as it is used by Wiebe, 1994), which characterizes the source of a given sentence or statement contained within a story.
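An illustrative sketch of the three data structures above, together with a toy version of the final point-of-view matching step (hypothetical Python; the names, fields, and weights are my own, not SpinDoctor's actual implementation):

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set, Tuple

# Hypothetical sketch only: names, fields, and weights are illustrative,
# not SpinDoctor's actual implementation.

@dataclass
class Actor:
    name: str
    phrases: Set[str]                  # noun phrases / pronouns realizing this actor
    parent: Optional["Actor"] = None   # hierarchy, e.g. officials -> government

@dataclass
class Role:
    name: str
    subject_verbs: Set[str] = field(default_factory=set)  # role of the verb's subject
    object_verbs: Set[str] = field(default_factory=set)   # role of the verb's object

@dataclass
class PointOfView:
    name: str
    # (actor name, role name) -> weight of that pairing in this point of view
    pairings: Dict[Tuple[str, str], float]

# e.g. the subject of "kidnap" is cast as criminal, its object as victim
criminal = Role("criminal", subject_verbs={"kidnap"})
victim = Role("victim", object_verbs={"kidnap"})

def best_point_of_view(bindings, povs):
    """Toy version of the final matching step: score each point-of-view
    definition against the weighted actor-role bindings observed in a story."""
    def score(pov):
        return sum(w * bindings.get(pair, 0.0) for pair, w in pov.pairings.items())
    return max(povs, key=score)
```

A story profile such as `{("government", "military"): 3, ("guerrillas", "terrorist"): 1}` would then match a government-slanted definition more strongly than a guerrilla-slanted one.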
Thus, for example, to find a news story from the late-80s/early-90s from El Salvador written from the government's point of view, the machine looks for stories in which the government plays the role of the source of the story and describes its violent actions as legitimate military actions (i.e., assigns itself a military role), while it assigns the guerrillas the role of terrorist by labeling the guerrillas' violent actions as criminal actions. The user of the system fills the database with actor, role, and point of view definitions. However, currently, the hand-built role definitions are being replaced with definitions from Roget's Thesaurus and Wordnet, an on-line thesaurus created by George Miller's group at Princeton University. The analysis process followed by SpinDoctor is as follows: (1) Given a news story, SpinDoctor finds which noun phrases (i.e., actors) play which roles in the story. (2) Then, it determines which actors are of a similar group or are identical to other actors. To do this it does both (a) anaphoric resolution (e.g., determining who or what is being referenced when pronouns like "he," "she," and "it" are used); and (b) actor grouping (e.g., inferring that "army spokesman," "Lieut. Rodriguez," and "Gen. Bustillo" are all part of the same group, e.g., the "armed forces"). (3) It then constructs a profile of the analyzed story which describes how often (within the given story) top-level actors were assigned different roles. (4) Using weighted actor-role bindings allows it to distinguish, for example, stories in which the government is cited as a source once and the guerrillas are cited ten times from stories in which the guerrillas are cited as a source less often than the government. Finally, it matches the weighted actor-role bindings against the point of view definitions, determines which point(s) of view match(es) best, and outputs the name of the one (or more) best-matching point(s) of view.

References
- Sack, W. (1994a) On the Computation of Point of View, Proceedings of the National Conference of Artificial Intelligence (AAAI 94), July 31-August 4, 1994, Seattle, WA.
- Sack, W. (1994b) Actor-Role Analysis: Ideology, Point of View and the News (Tech. Report 94-005 and MS Thesis) Cambridge, MA: MIT Media Laboratory.

From: AAAI Technical Report FS-95-03. Compilation copyright © 1995, AAAI (www.aaai.org). All rights reserved.
package core.backend.restaurant.dto;
import core.backend.menu.dto.MenuResponseDto;
import core.backend.restaurant.domain.Location;
import core.backend.restaurant.domain.Restaurant;
import lombok.Getter;
import java.time.LocalDateTime;
import java.util.List;
import java.util.stream.Collectors;
@Getter
public class RestaurantResponseDto {
private Long id;
private String name;
private String description;
private Location location;
private List<MenuResponseDto> menuList;
private LocalDateTime updatedAt;
private LocalDateTime createdAt;
public RestaurantResponseDto(Restaurant entity) {
id = entity.getId();
name = entity.getName();
description = entity.getDescription();
location = entity.getLocation();
menuList = entity.getMenuList().stream()
.map(MenuResponseDto::new)
.collect(Collectors.toList());
updatedAt = entity.getUpdatedAt();
createdAt = entity.getCreatedAt();
}
}
|
/**
* Save the CT log currently being edited.
* @return an empty string on failure or the constant string CT_LOG_SAVED on success
* @throws IllegalStateException if there is no CT log to save
*/
public String saveCtLogBeingEdited() {
if (ctLogEditor.getCtLogBeingEdited() == null) {
throw new IllegalStateException("The CT log being edited has already been saved or was never loaded.");
}
/* Validate data entry by the user */
if (!ctLogEditor.hasValidUrl()) {
systemConfigurationHelper.addErrorMessage("CTLOGTAB_MISSINGPROTOCOL");
return StringUtils.EMPTY;
}
if (ctLogEditor.getCtLogTimeout() <= 0) {
systemConfigurationHelper.addErrorMessage("CTLOGTAB_TIMEOUTNEGATIVE");
return StringUtils.EMPTY;
}
if (ctLogEditor.getPublicKeyFile() != null) {
final byte[] keyBytes = getCtLogPublicKey(ctLogEditor.getPublicKeyFile());
if (keyBytes == null) {
return StringUtils.EMPTY;
}
}
if (ctLogEditor.getIsAcceptingByExpirationYear() && !StringUtils.isNumeric(ctLogEditor.getExpirationYearRequired())) {
systemConfigurationHelper.addErrorMessage("CTLOGCONFIGURATION_INVALID_YEAR");
return StringUtils.EMPTY;
}
/* Ensure the new log configuration is not conflicting with another log */
final CTLogInfo ctLogToUpdate = ctLogEditor.getCtLogBeingEdited();
for (final CTLogInfo existing : super.getAllCtLogs()) {
final boolean isSameLog = existing.getLogId() == ctLogToUpdate.getLogId();
final boolean urlExistsInCtLogGroup = StringUtils.equals(existing.getUrl(), ctLogEditor.getCtLogUrl())
&& StringUtils.equals(existing.getLabel(), ctLogEditor.getCtLogLabel());
if (!isSameLog && urlExistsInCtLogGroup) {
systemConfigurationHelper.addErrorMessage("CTLOGTAB_ALREADYEXISTS", existing.getUrl());
return StringUtils.EMPTY;
}
}
/* Update the configuration */
final String url = ctLogEditor.getCtLogUrl();
final byte[] keyBytes = ctLogEditor.getPublicKeyFile() != null ? getCtLogPublicKey(ctLogEditor.getPublicKeyFile())
: ctLogEditor.getCtLogBeingEdited().getPublicKeyBytes();
final int timeout = ctLogEditor.getCtLogTimeout();
final String label = ctLogEditor.getCtLogLabel();
ctLogToUpdate.setLogPublicKey(keyBytes);
ctLogToUpdate.setTimeout(timeout);
ctLogToUpdate.setUrl(url);
ctLogToUpdate.setLabel(label);
ctLogToUpdate.setExpirationYearRequired(ctLogEditor.getIsAcceptingByExpirationYear() ?
Integer.valueOf(ctLogEditor.getExpirationYearRequired()) : null);
systemConfigurationHelper.saveCtLogs(super.getAllCtLogs());
ctLogEditor.stopEditing();
return CT_LOG_SAVED;
} |
/*************************************************************************
* Copyright (c) 2013 eProsima. All rights reserved.
*
* This copy of FASTRPC is licensed to you under the terms described in the
* FASTRPC_LICENSE file included in this distribution.
*
*************************************************************************/
#ifndef _UTILS_TYPEDEFS_H_
#define _UTILS_TYPEDEFS_H_
#include "dds/Middleware.h"
namespace eprosima
{
namespace rpc
{
namespace transport
{
class ServerTransport;
}
#define DDS_TIMEOUT(name, duration) DDS::Duration_t name = {duration.total_seconds(), \
static_cast<EPROSIMA_UINT32>(duration.fractional_seconds() * (1000000000 / boost::posix_time::time_duration::traits_type::res_adjust()))};
#define DDS_TIMEOUT_SET(name, duration) name.sec = duration.total_seconds(); \
name.nanosec = static_cast<EPROSIMA_UINT32>(duration.fractional_seconds() * (1000000000 / boost::posix_time::time_duration::traits_type::res_adjust()));
} // namespace rpc
} // namespace eprosima
#endif // _UTILS_TYPEDEFS_H_
|
The reception history of Beowulf
This paper traces both the scholarly and popular reception of the Old English epic Beowulf from the publication of the first edition of the poem in 1815 to the most recent English novel based on it, from 2019. Once the work was first made available to the scholarly community, numerous editions in various languages began to appear, the most recent being in English from 2008; once editions were published, Old English scholars around the world could translate the text into their native languages, beginning with Danish in 1820. Translations, in their turn, made the poem available to a general audience, which responded to the poem through an array of media: music, art, poetry, prose fiction, plays, film, television, video games, comic books, and graphic novels. The poem's appeal remains widespread and enduring.
/**
* Produces events simulating stocks from the AEX.
*/
static public class AEXStocksEventPullSource extends EventPullSource {
String[] stocks = {"abn amro", "26",
"aegon", "38",
"ahold", "34",
"akzo nobel", "51",
"asm lith h", "26",
"corus plc", "2",
"dsm", "40",
"elsevier", "14",
"fortis (nl)", "32",
"getronics", "6",
"gucci", "94",
"hagemeyer", "25",
"heineken", "61",
"ing c", "78",
"klm", "66",
"kon olie", "66",
"kpn", "13",
"numico c", "44",
"philips, kon", "38",
"tnt", "26",
"unilever c", "62",
"vendex kbb", "16",
"vnu", "49",
"wolt-kluw c", "25"};
public long getSleepTime() {
return Rand.randomLong(2000, 4000);
}
public Event pullEvent() {
Event event = Event.createDataEvent("/stocks/aex");
int stockNumber = Rand.randomInt(0, (stocks.length) / 2 - 1);
int nextStockIndex = 2 * stockNumber;
event.setField("number", "" + stockNumber);
event.setField("name", stocks[nextStockIndex]);
if (stocks[nextStockIndex + 1] == null) {
stocks[nextStockIndex + 1] = "" + Rand.randomInt(50, 150);
}
        int currentStockValue = Integer.parseInt(stocks[nextStockIndex + 1]);
int newStockValue = currentStockValue + Rand.randomInt(-2, 2);
event.setField("rate", "" + newStockValue + "." + Rand.randomInt(0, 99));
return event;
}
} |
def process_game_rules_for_user(
self,
user: RedditUser,
author: Redditor,
unread_item: Comment,
patch_notes_line_number: int,
) -> bool:
if not user.can_submit_guess:
return True
user.num_guesses += 1
if user.num_guesses >= MAX_NUM_GUESSES:
user.can_submit_guess = False
self.db.update_user(user)
if not self.update_patch_notes_table_in_db(patch_notes_line_number):
self.reply_with_bad_guess_feedback(
user,
author,
unread_item,
f"Line #{patch_notes_line_number} has already been guessed.\n\n",
)
return True
if not self.process_guess_for_user(
user, author, unread_item, patch_notes_line_number
):
return False
return True |
#pragma once
#include "CesiumGeospatial/Ellipsoid.h"
#include "CesiumGeospatial/Library.h"
#include <glm/mat4x4.hpp>
#include <glm/vec3.hpp>
namespace CesiumGeospatial {
/**
* @brief Transforms positions to various reference frames.
*/
class CESIUMGEOSPATIAL_API Transforms final {
public:
/**
* @brief Computes a transformation from east-north-up axes to an
* ellipsoid-fixed reference frame.
*
* Computes a 4x4 transformation matrix from a reference frame with an
* east-north-up axes centered at the provided origin to the provided
* ellipsoid's fixed reference frame. The local axes are defined as: <ul>
* <li>The `x` axis points in the local east direction.</li>
* <li>The `y` axis points in the local north direction.</li>
* <li>The `z` axis points in the direction of the ellipsoid surface normal
* which passes through the position.</li>
* </ul>
*
* @param origin The center point of the local reference frame.
* @param ellipsoid The {@link Ellipsoid} whose fixed frame is used in the
* transformation. Default value: {@link Ellipsoid::WGS84}.
* @return The transformation matrix
*/
static glm::dmat4x4 eastNorthUpToFixedFrame(
const glm::dvec3& origin,
const Ellipsoid& ellipsoid = Ellipsoid::WGS84) noexcept;
};
} // namespace CesiumGeospatial
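As an illustration of the math in the doc comment above, here is a rough pure-Python sketch of the same construction, using the standard WGS84 formulas (illustrative only, not the Cesium implementation):

```python
import math

# WGS84 ellipsoid constants (meters)
WGS84_A = 6378137.0           # semi-major axis
WGS84_B = 6356752.3142451793  # semi-minor axis

def east_north_up_to_fixed_frame(lon_rad, lat_rad, height=0.0):
    """Return a 4x4 row-major matrix whose columns are the local east,
    north, and up axes plus the ECEF origin of the frame."""
    e2 = 1.0 - (WGS84_B * WGS84_B) / (WGS84_A * WGS84_A)  # eccentricity squared
    sin_lat, cos_lat = math.sin(lat_rad), math.cos(lat_rad)
    sin_lon, cos_lon = math.sin(lon_rad), math.cos(lon_rad)

    # z axis: geodetic surface normal through the position ("up")
    up = (cos_lat * cos_lon, cos_lat * sin_lon, sin_lat)
    # x axis: local east, tangent to the parallel of latitude
    east = (-sin_lon, cos_lon, 0.0)
    # y axis: local north = up x east (right-handed frame)
    north = (up[1] * east[2] - up[2] * east[1],
             up[2] * east[0] - up[0] * east[2],
             up[0] * east[1] - up[1] * east[0])

    # ECEF coordinates of the frame origin
    n = WGS84_A / math.sqrt(1.0 - e2 * sin_lat * sin_lat)  # prime-vertical radius
    origin = ((n + height) * cos_lat * cos_lon,
              (n + height) * cos_lat * sin_lon,
              (n * (1.0 - e2) + height) * sin_lat)

    return [[east[0], north[0], up[0], origin[0]],
            [east[1], north[1], up[1], origin[1]],
            [east[2], north[2], up[2], origin[2]],
            [0.0,     0.0,      0.0,   1.0]]
```

At longitude 0, latitude 0 the local east/north/up axes line up with the ECEF +y/+z/+x axes, which makes a convenient sanity check.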
|
import enum
from django.utils.translation import gettext_lazy as _
class UserConfirmationRequestStatus(enum.Enum):
created = 'created'
sent = 'sent'
confirmed = 'confirmed'
cancelled = 'cancelled'
@classmethod
def translation(cls):
return {
cls.created: _('Created'),
cls.sent: _('Sent'),
cls.confirmed: _('Confirmed'),
cls.cancelled: _('Cancelled'),
}
|
/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* vim: set ts=8 sts=2 et sw=2 tw=80: */
// Copyright (c) 2006-2008 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#ifndef BASE_PORT_H_
#define BASE_PORT_H_
#include <stdarg.h>
#ifdef _MSC_VER
# define GG_LONGLONG(x) x##I64
# define GG_ULONGLONG(x) x##UI64
#else
# define GG_LONGLONG(x) x##LL
# define GG_ULONGLONG(x) x##ULL
#endif
// Per C99 7.8.14, define __STDC_CONSTANT_MACROS before including <stdint.h>
// to get the INTn_C and UINTn_C macros for integer constants. It's difficult
// to guarantee any specific ordering of header includes, so it's difficult to
// guarantee that the INTn_C macros can be defined by including <stdint.h> at
// any specific point. Provide GG_INTn_C macros instead.
#define GG_INT8_C(x) (x)
#define GG_INT16_C(x) (x)
#define GG_INT32_C(x) (x)
#define GG_INT64_C(x) GG_LONGLONG(x)
#define GG_UINT8_C(x) (x##U)
#define GG_UINT16_C(x) (x##U)
#define GG_UINT32_C(x) (x##U)
#define GG_UINT64_C(x) GG_ULONGLONG(x)
namespace base {
// It's possible for functions that use a va_list, such as StringPrintf, to
// invalidate the data in it upon use. The fix is to make a copy of the
// structure before using it and use that copy instead. va_copy is provided
// for this purpose. MSVC does not provide va_copy, so define an
// implementation here. It is not guaranteed that assignment is a copy, so the
// StringUtil.VariableArgsFunc unit test tests this capability.
// The C standard says that va_copy is a "macro", not a function. Trying to
// use va_list as ref args to a function, as above, breaks some machines.
#if defined(__GNUC__)
# define base_va_copy(_a, _b) ::va_copy(_a, _b)
#elif defined(_MSC_VER)
# define base_va_copy(_a, _b) (_a = _b)
#else
# error No va_copy for your compiler
#endif
} // namespace base
// Define an OS-neutral wrapper for shared library entry points
#if defined(XP_WIN)
# define API_CALL __stdcall
#elif defined(XP_LINUX) || defined(XP_DARWIN)
# define API_CALL
#endif
#endif // BASE_PORT_H_
|
Adult primary gastric volvulus, a report of two cases.
Gastric volvulus is a condition in which the stomach twists beyond its physiological range. It is a rare disease, seldom encountered in routine practice. Surgical treatment is in principle essential for the acute type, but conservative therapy should be attempted in selected cases, such as decompression of the stomach with a nasogastric tube or endoscopic reduction. The basis of surgical treatment is reduction of the torsion and immobilization of the stomach. Recently, laparoscopic surgery has been performed in the early stages for patients whose general condition is stable or whose disease is chronically progressive. Percutaneous endoscopic gastrostomy (PEG) has also been performed for gastric immobilization; however, recurrences and twisting around the gastrostomy site have been reported, in addition to poor cosmetic outcomes, so its use is decreasing. In this paper, we present two cases of adult primary gastric volvulus. In the first case, endoscopic reduction was insufficient to release the torsion, so laparoscopic gastropexy was performed successfully. In the second case, we succeeded in endoscopic reduction; since the patient had already experienced gastric volvulus, laparoscopic surgery was performed. The upper and middle gastric bodies were secured to the anterior abdominal wall, and the gastric antrum to the ligamentum teres hepatis, with interrupted absorbable sutures. However, partial gastric volvulus recurred just over ten days postoperatively because the suture securing the antrum to the ligamentum teres hepatis at the previous surgery had torn off. PEG at two points, on the lower gastric body and the antrum, was then performed to secure the antrum. The gastrostomies were removed 6 months after the surgery. Because of their minimal invasiveness, immobilization by laparoscopic gastropexy and PEG are useful for gastric volvulus.
Concerning gastropexy, the number of sutures is important so that the secured part is not torn off.
import factory
from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect
from django.urls import reverse
from faker import Faker
from auto_repair_saas.apps.authentication.models import User
from auto_repair_saas.apps.contacts.models import Contact
from auto_repair_saas.apps.jobs.models import Job
from auto_repair_saas.apps.staff.models import Staff
from auto_repair_saas.apps.vehicles.models import Vehicle
fake = Faker()
class UserFactory(factory.django.DjangoModelFactory):
class Meta:
model = User
username = fake.name()
email = fake.email()
    password = fake.password()
class ContactFactory(factory.django.DjangoModelFactory):
class Meta:
model = Contact
contact_type = 'client'
name = factory.LazyAttribute(lambda _: fake.name())
email = factory.LazyAttribute(lambda _: fake.email())
phone = factory.LazyAttribute(lambda _: fake.phone_number()[:20])
class VehicleFactory(factory.django.DjangoModelFactory):
class Meta:
model = Vehicle
number_plate = factory.LazyAttribute(lambda _: fake.license_plate())
owner = factory.SubFactory(ContactFactory)
class StaffFactory(factory.django.DjangoModelFactory):
class Meta:
model = Staff
name = factory.LazyAttribute(lambda _: fake.name())
email = factory.LazyAttribute(lambda _: fake.email())
class JobFactory(factory.django.DjangoModelFactory):
class Meta:
model = Job
client = factory.SubFactory(ContactFactory)
vehicle = factory.SubFactory(VehicleFactory)
charged = factory.LazyAttribute(lambda _: fake.random_int())
due_start_date = factory.LazyAttribute(
lambda _: fake.date_between(start_date='-1y', end_date='today')
)
due_end_date = factory.LazyAttribute(
lambda _: fake.date_between(start_date='-1y', end_date='today')
)
assigned = factory.SubFactory(StaffFactory)
status = factory.LazyAttribute(lambda _: fake.random_element(
elements=('pending', 'confirmed', 'in_progress', 'done',)
))
payment_method = factory.LazyAttribute(lambda _: fake.random_element(
elements=('cash', 'card', 'mpesa',)
))
paid = factory.LazyAttribute(lambda _: fake.random_element(
elements=(True, False)
))
payment_registered_on = factory.LazyAttribute(
lambda _: fake.date_between(start_date='-1y', end_date='today')
)
@login_required
def seed_data(request, *args, **kwargs):
# clean tables
for model in (Job, Staff, Vehicle, Contact,):
model.objects.all().delete()
for _ in range(0, 20):
client = ContactFactory(contact_type='client')
vehicle = VehicleFactory(owner=client)
JobFactory(client=client, vehicle=vehicle)
for _ in range(0, 5):
ContactFactory(contact_type='supplier')
# cleanup estimates
pending_estimates = Job.objects.filter(status='pending')
for estimate in pending_estimates:
estimate.paid = False
estimate.payment_registered_on = None
estimate.save()
for job in Job.objects.all().exclude(status='pending'):
if not job.paid:
job.payment_registered_on = None
return redirect(reverse('dashboard'))
|
package xjson
import "testing"
func TestJsonStr_UnmarshalJSON(t *testing.T) {
	s := `{"name": "key1", "value": "value1"}`
var s1 JsonStr
err := s1.UnmarshalJSON([]byte(s))
if err != nil {
t.Fatal(err)
}
t.Log("s1.UnmarshalJSON:", s1)
bytes, err := s1.MarshalJSON()
if err != nil {
t.Fatal(err)
}
t.Log("s1.MarshalJSON bytes:", bytes)
t.Log("s1.MarshalJSON:", string(bytes))
}
|
def build_model(cls, args, task):
if args.decoder_layers_to_keep:
args.decoder_layers = len(args.decoder_layers_to_keep.split(","))
if getattr(args, "max_target_positions", None) is None:
args.max_target_positions = getattr(
args, "tokens_per_sample", DEFAULT_MAX_TARGET_POSITIONS
)
if args.character_embeddings:
embed_tokens = CharacterTokenEmbedder(
task.source_dictionary,
eval(args.character_filters),
args.character_embedding_dim,
args.decoder_embed_dim,
args.char_embedder_highway_layers,
)
elif args.adaptive_input:
embed_tokens = AdaptiveInput(
len(task.source_dictionary),
task.source_dictionary.pad(),
args.decoder_input_dim,
args.adaptive_input_factor,
args.decoder_embed_dim,
options.eval_str_list(args.adaptive_input_cutoff, type=int),
args.quant_noise_pq,
args.quant_noise_pq_block_size,
)
else:
embed_tokens = cls.build_embedding(
args, task.source_dictionary, args.decoder_input_dim
)
if args.tie_adaptive_weights:
assert args.adaptive_input
assert args.adaptive_input_factor == args.adaptive_softmax_factor
assert (
args.adaptive_softmax_cutoff == args.adaptive_input_cutoff
), "{} != {}".format(
args.adaptive_softmax_cutoff, args.adaptive_input_cutoff
)
assert args.decoder_input_dim == args.decoder_output_dim
decoder = TransformerDecoder(
args, task.target_dictionary, embed_tokens, no_encoder_attn=True
)
return cls(decoder) |
Adsorption of amino acids on graphene: assessment of current force fields.
We compare the free energies of adsorption (ΔAads) and the structural preferences of amino acids on graphene obtained using the non-polarizable force fields-Amberff99SB-ILDN/TIP3P, CHARMM36/modified-TIP3P, OPLS-AA/M/TIP3P, and Amber03w/TIP4P/2005. The amino acid-graphene interactions are favorable irrespective of the force field. While the magnitudes of ΔAads differ between the force fields, the relative free energy of adsorption across amino acids is similar for the studied force fields. ΔAads positively correlates with amino acid-graphene and negatively correlates with graphene-water interaction energies. Using a combination of principal component analysis and density-based clustering technique, we grouped the structures observed in the graphene adsorbed state. The resulting population of clusters, and the conformation in each cluster indicate that the structures of the amino acid in the graphene adsorbed state vary across force fields. The differences in the conformations of amino acids are more severe in the graphene adsorbed state compared to the bulk state for all the force fields. Our findings suggest that the force fields studied will give qualitatively consistent relative strength of adsorption across proteins but different structural preferences in the graphene adsorbed state. |
def shuffle(self):
    # Re-initialise the network's parameters: one (weight, bias) pair per
    # consecutive layer pair, each filled with fresh random values.
    self.layers_wb = []  # discard any previously generated pairs
    for i in range(len(self.layers) - 1):
        d0, d1 = self.layers[i], self.layers[i + 1]
        w = NNChromosome._random([d0, d1])  # weight matrix of shape (d0, d1)
        b = NNChromosome._random([d1])      # bias vector of length d1
        self.layers_wb.append((w, b))
The characteristics of intestinal injury peripheral to strangulating obstruction lesions in the equine small intestine.
Recent studies suggest that horses requiring surgical correction of strangulating intestinal obstruction may develop post operative complications as a result of ischaemia/reperfusion injury. Therefore, the mucosal and serosal margins of resected small intestine from 9 horses with small intestinal strangulating lesions were examined for evidence of ischaemia/reperfusion injury. Severe mucosal injury and marked elevations in myeloperoxidase activity were detected at ileal resection margins (n = 4), whereas the mucosa from proximal jejunal (n = 9) and distal jejunal (n = 5) resection margins was normal. However, the serosa from jejunal resection margins had evidence of haemorrhage and oedema, and the proximal jejunal serosa had significantly increased numbers of neutrophils. Histological injury in ileal stumps is indicative of the inability fully to resect the ileum in horses with distal small intestinal strangulations. One of 4 horses subjected to ileal resection was subjected to euthanasia and found to have a necrotic ileal stump. Evidence of serosal injury and neutrophil infiltration in the proximal jejunal resection margins may predispose horses to post operative adhesions. Four of 8 horses discharged from the hospital suffered from recurrent colic in the post operative period. |
def _CreateExtractionWorker(self, worker_number, options):
proxy_server = rpc_proxy.StandardRpcProxyServer()
extraction_worker = self._engine.CreateExtractionWorker(
worker_number, rpc_proxy=proxy_server)
extraction_worker.SetDebugMode(self._debug_mode)
extraction_worker.SetSingleProcessMode(self._single_process_mode)
open_files = getattr(options, 'open_files', False)
extraction_worker.SetOpenFiles(open_files)
if getattr(options, 'os', None):
mount_path = getattr(options, 'filename', None)
extraction_worker.SetMountPath(mount_path)
filter_query = getattr(options, 'filter', None)
if filter_query:
filter_object = pfilter.GetMatcher(filter_query)
extraction_worker.SetFilterObject(filter_object)
text_prepend = getattr(options, 'text_prepend', None)
extraction_worker.SetTextPrepend(text_prepend)
return extraction_worker |
// -*- C++ -*-
//
// Package: JetPlusTracks
// Class: JetPlusTrackProducer
//
/**\class JetPlusTrackProducer JetPlusTrackProducer.cc JetPlusTrackProducer.cc
Description: Produces a reco::JPTJet collection by applying Jet Plus Track
(JPT) corrections to input calo jets.
Implementation:
    Track jets are matched to calo jets within the dRcone parameter; the
    matched jets receive optional ZSP and JPT energy corrections, and
    track-based jet shape variables (Zch, Pout, second moments) are filled.
*/
//
// Original Author: Olga Kodolova,40 R-A12,+41227671273,
// Created: Fri Feb 19 10:14:02 CET 2010
//
//
// system include files
#include <memory>
// user include files
#include "FWCore/Framework/interface/Frameworkfwd.h"
#include "FWCore/Framework/interface/stream/EDProducer.h"
#include "FWCore/Framework/interface/Event.h"
#include "FWCore/Framework/interface/MakerMacros.h"
#include "FWCore/ParameterSet/interface/ParameterSet.h"
#include "RecoJets/JetPlusTracks/plugins/JetPlusTrackProducer.h"
#include "DataFormats/JetReco/interface/CaloJetCollection.h"
#include "DataFormats/JetReco/interface/CaloJet.h"
#include "DataFormats/JetReco/interface/JPTJetCollection.h"
#include "DataFormats/JetReco/interface/JPTJet.h"
#include "DataFormats/JetReco/interface/TrackJetCollection.h"
#include "DataFormats/JetReco/interface/TrackJet.h"
#include "DataFormats/TrackReco/interface/TrackFwd.h"
#include "DataFormats/TrackReco/interface/Track.h"
#include "DataFormats/JetReco/interface/Jet.h"
#include "DataFormats/VertexReco/interface/VertexFwd.h"
#include "DataFormats/VertexReco/interface/Vertex.h"
#include "DataFormats/Math/interface/deltaPhi.h"
#include "DataFormats/Math/interface/deltaR.h"
#include <string>
using namespace std;
using namespace jpt;
//
// constants, enums and typedefs
//
//
// static data member definitions
//
//
// constructors and destructor
//
JetPlusTrackProducer::JetPlusTrackProducer(const edm::ParameterSet& iConfig) {
//register your products
src_ = iConfig.getParameter<edm::InputTag>("src");
srcTrackJets_ = iConfig.getParameter<edm::InputTag>("srcTrackJets");
alias_ = iConfig.getUntrackedParameter<string>("alias");
srcPVs_ = iConfig.getParameter<edm::InputTag>("srcPVs");
vectorial_ = iConfig.getParameter<bool>("VectorialCorrection");
useZSP_ = iConfig.getParameter<bool>("UseZSP");
ptCUT_ = iConfig.getParameter<double>("ptCUT");
dRcone_ = iConfig.getParameter<double>("dRcone");
usePAT_ = iConfig.getParameter<bool>("UsePAT");
mJPTalgo = new JetPlusTrackCorrector(iConfig, consumesCollector());
if (useZSP_)
mZSPalgo = new ZSPJPTJetCorrector(iConfig);
produces<reco::JPTJetCollection>().setBranchAlias(alias_);
produces<reco::CaloJetCollection>().setBranchAlias("ak4CaloJetsJPT");
input_jets_token_ = consumes<edm::View<reco::CaloJet> >(src_);
input_addjets_token_ = consumes<edm::View<reco::CaloJet> >(iConfig.getParameter<edm::InputTag>("srcAddCaloJets"));
input_trackjets_token_ = consumes<edm::View<reco::TrackJet> >(srcTrackJets_);
input_vertex_token_ = consumes<reco::VertexCollection>(srcPVs_);
mExtrapolations_ =
consumes<std::vector<reco::TrackExtrapolation> >(iConfig.getParameter<edm::InputTag>("extrapolations"));
}
JetPlusTrackProducer::~JetPlusTrackProducer() {
// do anything here that needs to be done at destruction time
// (e.g. close files, deallocate resources etc.)
}
//
// member functions
//
bool sort_by_pt(const reco::JPTJet& a, const reco::JPTJet& b) { return (a.pt() > b.pt()); }
// ------------ method called to produce the data ------------
void JetPlusTrackProducer::produce(edm::Event& iEvent, const edm::EventSetup& iSetup) {
using namespace edm;
auto const& jets_h = iEvent.get(input_jets_token_);
auto const& addjets_h = iEvent.get(input_addjets_token_);
auto const& iExtrapolations = iEvent.get(mExtrapolations_);
edm::RefProd<reco::CaloJetCollection> pOut1RefProd = iEvent.getRefBeforePut<reco::CaloJetCollection>();
edm::Ref<reco::CaloJetCollection>::key_type idxCaloJet = 0;
auto pOut = std::make_unique<reco::JPTJetCollection>();
auto pOut1 = std::make_unique<reco::CaloJetCollection>();
double scaleJPT = 1.;
for (auto const& jet : iEvent.get(input_trackjets_token_)) {
int icalo = -1;
int i = 0;
for (auto const& oldjet : addjets_h) {
double dr2 = deltaR2(jet, oldjet);
if (dr2 <= dRcone_ * dRcone_) {
icalo = i;
}
i++;
} // Calojets
if (icalo < 0)
continue;
auto const& mycalo = addjets_h[icalo];
std::vector<edm::Ptr<reco::Track> > tracksinjet = jet.tracks();
reco::TrackRefVector tracksincalo;
reco::TrackRefVector tracksinvert;
for (auto const& itrack : tracksinjet) {
for (auto const& ixtrp : iExtrapolations) {
if (ixtrp.positions().empty())
continue;
if (usePAT_) {
double mydphi = deltaPhi(ixtrp.track()->phi(), itrack->phi());
if (fabs(ixtrp.track()->pt() - itrack->pt()) > 0.001 || fabs(ixtrp.track()->eta() - itrack->eta()) > 0.001 ||
mydphi > 0.001)
continue;
} else {
if (itrack.id() != ixtrp.track().id() || itrack.key() != ixtrp.track().key())
continue;
}
tracksinvert.push_back(ixtrp.track());
reco::TrackBase::Point const& point = ixtrp.positions().at(0);
double dr2 = deltaR2(jet, point);
if (dr2 <= dRcone_ * dRcone_) {
tracksincalo.push_back(ixtrp.track());
}
} // Track extrapolations
} // tracks
const reco::TrackJet& corrected = jet;
math::XYZTLorentzVector p4;
jpt::MatchedTracks pions;
jpt::MatchedTracks muons;
jpt::MatchedTracks elecs;
scaleJPT =
mJPTalgo->correction(corrected, mycalo, iEvent, iSetup, tracksinvert, tracksincalo, p4, pions, muons, elecs);
if (p4.pt() > ptCUT_) {
reco::JPTJet::Specific jptspe;
jptspe.pionsInVertexInCalo = pions.inVertexInCalo_;
jptspe.pionsInVertexOutCalo = pions.inVertexOutOfCalo_;
jptspe.pionsOutVertexInCalo = pions.outOfVertexInCalo_;
jptspe.muonsInVertexInCalo = muons.inVertexInCalo_;
jptspe.muonsInVertexOutCalo = muons.inVertexOutOfCalo_;
jptspe.muonsOutVertexInCalo = muons.outOfVertexInCalo_;
jptspe.elecsInVertexInCalo = elecs.inVertexInCalo_;
jptspe.elecsInVertexOutCalo = elecs.inVertexOutOfCalo_;
jptspe.elecsOutVertexInCalo = elecs.outOfVertexInCalo_;
reco::CaloJetRef myjet(pOut1RefProd, idxCaloJet++);
jptspe.theCaloJetRef = edm::RefToBase<reco::Jet>(myjet);
jptspe.JPTSeed = 1;
reco::JPTJet fJet(p4, jet.primaryVertex()->position(), jptspe, mycalo.getJetConstituents());
pOut->push_back(fJet);
pOut1->push_back(mycalo);
}
} // trackjets
int iJet = 0;
for (auto const& oldjet : jets_h) {
reco::CaloJet corrected = oldjet;
// ZSP corrections
double factorZSP = 1.;
if (useZSP_)
factorZSP = mZSPalgo->correction(corrected, iEvent, iSetup);
corrected.scaleEnergy(factorZSP);
// JPT corrections
scaleJPT = 1.;
math::XYZTLorentzVector p4;
jpt::MatchedTracks pions;
jpt::MatchedTracks muons;
jpt::MatchedTracks elecs;
bool validMatches = false;
if (!vectorial_) {
scaleJPT = mJPTalgo->correction(corrected, oldjet, iEvent, iSetup, pions, muons, elecs, validMatches);
p4 = math::XYZTLorentzVector(corrected.px() * scaleJPT,
corrected.py() * scaleJPT,
corrected.pz() * scaleJPT,
corrected.energy() * scaleJPT);
} else {
scaleJPT = mJPTalgo->correction(corrected, oldjet, iEvent, iSetup, p4, pions, muons, elecs, validMatches);
}
reco::JPTJet::Specific specific;
if (validMatches) {
specific.pionsInVertexInCalo = pions.inVertexInCalo_;
specific.pionsInVertexOutCalo = pions.inVertexOutOfCalo_;
specific.pionsOutVertexInCalo = pions.outOfVertexInCalo_;
specific.muonsInVertexInCalo = muons.inVertexInCalo_;
specific.muonsInVertexOutCalo = muons.inVertexOutOfCalo_;
specific.muonsOutVertexInCalo = muons.outOfVertexInCalo_;
specific.elecsInVertexInCalo = elecs.inVertexInCalo_;
specific.elecsInVertexOutCalo = elecs.inVertexOutOfCalo_;
specific.elecsOutVertexInCalo = elecs.outOfVertexInCalo_;
}
// Fill JPT Specific
specific.theCaloJetRef = edm::RefToBase<reco::Jet>(jets_h.refAt(iJet));
specific.mResponseOfChargedWithEff = (float)mJPTalgo->getResponseOfChargedWithEff();
specific.mResponseOfChargedWithoutEff = (float)mJPTalgo->getResponseOfChargedWithoutEff();
specific.mSumPtOfChargedWithEff = (float)mJPTalgo->getSumPtWithEff();
specific.mSumPtOfChargedWithoutEff = (float)mJPTalgo->getSumPtWithoutEff();
specific.mSumEnergyOfChargedWithEff = (float)mJPTalgo->getSumEnergyWithEff();
specific.mSumEnergyOfChargedWithoutEff = (float)mJPTalgo->getSumEnergyWithoutEff();
specific.mChargedHadronEnergy = (float)mJPTalgo->getSumEnergyWithoutEff();
// Fill Charged Jet shape parameters
double deR2Tr = 0.;
double deEta2Tr = 0.;
double dePhi2Tr = 0.;
double Zch = 0.;
double Pout2 = 0.;
double Pout = 0.;
double denominator_tracks = 0.;
int ntracks = 0;
for (reco::TrackRefVector::const_iterator it = pions.inVertexInCalo_.begin(); it != pions.inVertexInCalo_.end();
it++) {
double deR = deltaR((*it)->eta(), (*it)->phi(), p4.eta(), p4.phi());
double deEta = (*it)->eta() - p4.eta();
double dePhi = deltaPhi((*it)->phi(), p4.phi());
if ((**it).ptError() / (**it).pt() < 0.1) {
deR2Tr = deR2Tr + deR * deR * (*it)->pt();
deEta2Tr = deEta2Tr + deEta * deEta * (*it)->pt();
dePhi2Tr = dePhi2Tr + dePhi * dePhi * (*it)->pt();
denominator_tracks = denominator_tracks + (*it)->pt();
Zch = Zch + (*it)->pt();
Pout2 = Pout2 + (**it).p() * (**it).p() - (Zch * p4.P()) * (Zch * p4.P());
ntracks++;
}
}
for (reco::TrackRefVector::const_iterator it = muons.inVertexInCalo_.begin(); it != muons.inVertexInCalo_.end();
it++) {
double deR = deltaR((*it)->eta(), (*it)->phi(), p4.eta(), p4.phi());
double deEta = (*it)->eta() - p4.eta();
double dePhi = deltaPhi((*it)->phi(), p4.phi());
if ((**it).ptError() / (**it).pt() < 0.1) {
deR2Tr = deR2Tr + deR * deR * (*it)->pt();
deEta2Tr = deEta2Tr + deEta * deEta * (*it)->pt();
dePhi2Tr = dePhi2Tr + dePhi * dePhi * (*it)->pt();
denominator_tracks = denominator_tracks + (*it)->pt();
Zch = Zch + (*it)->pt();
Pout2 = Pout2 + (**it).p() * (**it).p() - (Zch * p4.P()) * (Zch * p4.P());
ntracks++;
}
}
for (reco::TrackRefVector::const_iterator it = elecs.inVertexInCalo_.begin(); it != elecs.inVertexInCalo_.end();
it++) {
double deR = deltaR((*it)->eta(), (*it)->phi(), p4.eta(), p4.phi());
double deEta = (*it)->eta() - p4.eta();
double dePhi = deltaPhi((*it)->phi(), p4.phi());
if ((**it).ptError() / (**it).pt() < 0.1) {
deR2Tr = deR2Tr + deR * deR * (*it)->pt();
deEta2Tr = deEta2Tr + deEta * deEta * (*it)->pt();
dePhi2Tr = dePhi2Tr + dePhi * dePhi * (*it)->pt();
denominator_tracks = denominator_tracks + (*it)->pt();
Zch = Zch + (*it)->pt();
Pout2 = Pout2 + (**it).p() * (**it).p() - (Zch * p4.P()) * (Zch * p4.P());
ntracks++;
}
}
for (reco::TrackRefVector::const_iterator it = pions.inVertexOutOfCalo_.begin();
it != pions.inVertexOutOfCalo_.end();
it++) {
Zch = Zch + (*it)->pt();
}
for (reco::TrackRefVector::const_iterator it = muons.inVertexOutOfCalo_.begin();
it != muons.inVertexOutOfCalo_.end();
it++) {
Zch = Zch + (*it)->pt();
}
for (reco::TrackRefVector::const_iterator it = elecs.inVertexOutOfCalo_.begin();
it != elecs.inVertexOutOfCalo_.end();
it++) {
Zch = Zch + (*it)->pt();
}
if (mJPTalgo->getSumPtForBeta() > 0.)
Zch = Zch / mJPTalgo->getSumPtForBeta();
if (ntracks > 0) {
Pout = sqrt(fabs(Pout2)) / ntracks;
}
if (denominator_tracks != 0) {
deR2Tr = deR2Tr / denominator_tracks;
deEta2Tr = deEta2Tr / denominator_tracks;
dePhi2Tr = dePhi2Tr / denominator_tracks;
}
specific.R2momtr = deR2Tr;
specific.Eta2momtr = deEta2Tr;
specific.Phi2momtr = dePhi2Tr;
specific.Pout = Pout;
specific.Zch = Zch;
// Create JPT jet
reco::Particle::Point vertex_ = reco::Jet::Point(0, 0, 0);
// If we add primary vertex
edm::Handle<reco::VertexCollection> pvCollection;
iEvent.getByToken(input_vertex_token_, pvCollection);
if (pvCollection.isValid() && !pvCollection->empty())
vertex_ = pvCollection->begin()->position();
reco::JPTJet fJet(p4, vertex_, specific, corrected.getJetConstituents());
iJet++;
// Output module
if (fJet.pt() > ptCUT_)
pOut->push_back(fJet);
}
std::sort(pOut->begin(), pOut->end(), sort_by_pt);
iEvent.put(std::move(pOut1));
iEvent.put(std::move(pOut));
}
//define this as a plug-in
//DEFINE_FWK_MODULE(JetPlusTrackProducer);
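In `produce` above, each track jet is associated with the *last* calo jet found within `dRcone` (not necessarily the closest one), using the squared angular distance ΔR² = Δη² + Δφ². A stdlib-Python sketch of that matching logic; the `(eta, phi)` tuple representation of a jet is an illustration, not the CMSSW data format:

```python
import math

def delta_phi(p1, p2):
    # Wrap the azimuthal difference into (-pi, pi].
    d = p1 - p2
    while d > math.pi:
        d -= 2.0 * math.pi
    while d <= -math.pi:
        d += 2.0 * math.pi
    return d

def delta_r2(eta1, phi1, eta2, phi2):
    dphi = delta_phi(phi1, phi2)
    deta = eta1 - eta2
    return deta * deta + dphi * dphi

def match_jet(track_jet, calo_jets, dr_cone):
    # Mirrors the C++ loop: the last calo jet inside the cone wins,
    # and -1 means "no match" (the C++ code then skips the track jet).
    icalo = -1
    for i, (eta, phi) in enumerate(calo_jets):
        if delta_r2(track_jet[0], track_jet[1], eta, phi) <= dr_cone * dr_cone:
            icalo = i
    return icalo
```

Comparing ΔR² against `dRcone * dRcone` avoids a square root per jet pair, which is why the C++ code uses `deltaR2` rather than `deltaR` in the hot loop.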
|
package javassist.compiler;
import javassist.bytecode.*;
import javassist.compiler.ast.*;
import javassist.*;
public class JvstCodeGen extends MemberCodeGen
{
String paramArrayName;
String paramListName;
CtClass[] paramTypeList;
private int paramVarBase;
private boolean useParam0;
private String param0Type;
public static final String sigName = "$sig";
public static final String dollarTypeName = "$type";
public static final String clazzName = "$class";
private CtClass dollarType;
CtClass returnType;
String returnCastName;
private String returnVarName;
public static final String wrapperCastName = "$w";
String proceedName;
public static final String cflowName = "$cflow";
ProceedHandler procHandler;
public JvstCodeGen(final Bytecode a1, final CtClass a2, final ClassPool a3) {
super(a1, a2, a3);
this.paramArrayName = null;
this.paramListName = null;
this.paramTypeList = null;
this.paramVarBase = 0;
this.useParam0 = false;
this.param0Type = null;
this.dollarType = null;
this.returnType = null;
this.returnCastName = null;
this.returnVarName = null;
this.proceedName = null;
this.procHandler = null;
this.setTypeChecker(new JvstTypeChecker(a2, a3, this));
}
private int indexOfParam1() {
return this.paramVarBase + (this.useParam0 ? 1 : 0);
}
public void setProceedHandler(final ProceedHandler a1, final String a2) {
this.proceedName = a2;
this.procHandler = a1;
}
public void addNullIfVoid() {
if (this.exprType == 344) {
this.bytecode.addOpcode(1);
this.exprType = 307;
this.arrayDim = 0;
this.className = "java/lang/Object";
}
}
@Override
public void atMember(final Member a1) throws CompileError {
final String v1 = a1.get();
if (v1.equals(this.paramArrayName)) {
compileParameterList(this.bytecode, this.paramTypeList, this.indexOfParam1());
this.exprType = 307;
this.arrayDim = 1;
this.className = "java/lang/Object";
}
else if (v1.equals("$sig")) {
this.bytecode.addLdc(Descriptor.ofMethod(this.returnType, this.paramTypeList));
this.bytecode.addInvokestatic("javassist/runtime/Desc", "getParams", "(Ljava/lang/String;)[Ljava/lang/Class;");
this.exprType = 307;
this.arrayDim = 1;
this.className = "java/lang/Class";
}
else if (v1.equals("$type")) {
if (this.dollarType == null) {
throw new CompileError("$type is not available");
}
this.bytecode.addLdc(Descriptor.of(this.dollarType));
this.callGetType("getType");
}
else if (v1.equals("$class")) {
if (this.param0Type == null) {
throw new CompileError("$class is not available");
}
this.bytecode.addLdc(this.param0Type);
this.callGetType("getClazz");
}
else {
super.atMember(a1);
}
}
private void callGetType(final String a1) {
this.bytecode.addInvokestatic("javassist/runtime/Desc", a1, "(Ljava/lang/String;)Ljava/lang/Class;");
this.exprType = 307;
this.arrayDim = 0;
this.className = "java/lang/Class";
}
@Override
protected void atFieldAssign(final Expr a1, final int a2, final ASTree a3, final ASTree a4, final boolean a5) throws CompileError {
if (a3 instanceof Member && ((Member)a3).get().equals(this.paramArrayName)) {
if (a2 != 61) {
throw new CompileError("bad operator for " + this.paramArrayName);
}
a4.accept(this);
if (this.arrayDim != 1 || this.exprType != 307) {
throw new CompileError("invalid type for " + this.paramArrayName);
}
this.atAssignParamList(this.paramTypeList, this.bytecode);
if (!a5) {
this.bytecode.addOpcode(87);
}
}
else {
super.atFieldAssign(a1, a2, a3, a4, a5);
}
}
protected void atAssignParamList(final CtClass[] v1, final Bytecode v2) throws CompileError {
if (v1 == null) {
return;
}
int v3 = this.indexOfParam1();
for (int v4 = v1.length, a1 = 0; a1 < v4; ++a1) {
v2.addOpcode(89);
v2.addIconst(a1);
v2.addOpcode(50);
this.compileUnwrapValue(v1[a1], v2);
v2.addStore(v3, v1[a1]);
v3 += (CodeGen.is2word(this.exprType, this.arrayDim) ? 2 : 1);
}
}
@Override
public void atCastExpr(final CastExpr v-1) throws CompileError {
final ASTList v0 = v-1.getClassName();
if (v0 != null && v-1.getArrayDim() == 0) {
final ASTree v2 = v0.head();
if (v2 instanceof Symbol && v0.tail() == null) {
final String a1 = ((Symbol)v2).get();
if (a1.equals(this.returnCastName)) {
this.atCastToRtype(v-1);
return;
}
if (a1.equals("$w")) {
this.atCastToWrapper(v-1);
return;
}
}
}
super.atCastExpr(v-1);
}
protected void atCastToRtype(final CastExpr v-1) throws CompileError {
v-1.getOprand().accept(this);
if (this.exprType == 344 || CodeGen.isRefType(this.exprType) || this.arrayDim > 0) {
this.compileUnwrapValue(this.returnType, this.bytecode);
}
else {
if (!(this.returnType instanceof CtPrimitiveType)) {
throw new CompileError("invalid cast");
}
final CtPrimitiveType a1 = (CtPrimitiveType)this.returnType;
final int v1 = MemberResolver.descToType(a1.getDescriptor());
this.atNumCastExpr(this.exprType, v1);
this.exprType = v1;
this.arrayDim = 0;
this.className = null;
}
}
protected void atCastToWrapper(final CastExpr v-2) throws CompileError {
v-2.getOprand().accept(this);
if (CodeGen.isRefType(this.exprType) || this.arrayDim > 0) {
return;
}
final CtClass lookupClass = this.resolver.lookupClass(this.exprType, this.arrayDim, this.className);
if (lookupClass instanceof CtPrimitiveType) {
final CtPrimitiveType a1 = (CtPrimitiveType)lookupClass;
final String v1 = a1.getWrapperName();
this.bytecode.addNew(v1);
this.bytecode.addOpcode(89);
if (a1.getDataSize() > 1) {
this.bytecode.addOpcode(94);
}
else {
this.bytecode.addOpcode(93);
}
this.bytecode.addOpcode(88);
this.bytecode.addInvokespecial(v1, "<init>", "(" + a1.getDescriptor() + ")V");
this.exprType = 307;
this.arrayDim = 0;
this.className = "java/lang/Object";
}
}
@Override
public void atCallExpr(final CallExpr v2) throws CompileError {
final ASTree v3 = v2.oprand1();
if (v3 instanceof Member) {
final String a1 = ((Member)v3).get();
if (this.procHandler != null && a1.equals(this.proceedName)) {
this.procHandler.doit(this, this.bytecode, (ASTList)v2.oprand2());
return;
}
if (a1.equals("$cflow")) {
this.atCflow((ASTList)v2.oprand2());
return;
}
}
super.atCallExpr(v2);
}
protected void atCflow(final ASTList a1) throws CompileError {
final StringBuffer v1 = new StringBuffer();
if (a1 == null || a1.tail() != null) {
throw new CompileError("bad $cflow");
}
makeCflowName(v1, a1.head());
final String v2 = v1.toString();
final Object[] v3 = this.resolver.getClassPool().lookupCflow(v2);
if (v3 == null) {
throw new CompileError("no such $cflow: " + v2);
}
this.bytecode.addGetstatic((String)v3[0], (String)v3[1], "Ljavassist/runtime/Cflow;");
this.bytecode.addInvokevirtual("javassist.runtime.Cflow", "value", "()I");
this.exprType = 324;
this.arrayDim = 0;
this.className = null;
}
private static void makeCflowName(final StringBuffer a2, final ASTree v1) throws CompileError {
if (v1 instanceof Symbol) {
a2.append(((Symbol)v1).get());
return;
}
if (v1 instanceof Expr) {
final Expr a3 = (Expr)v1;
if (a3.getOperator() == 46) {
makeCflowName(a2, a3.oprand1());
a2.append('.');
makeCflowName(a2, a3.oprand2());
return;
}
}
throw new CompileError("bad $cflow");
}
public boolean isParamListName(final ASTList v2) {
if (this.paramTypeList != null && v2 != null && v2.tail() == null) {
final ASTree a1 = v2.head();
return a1 instanceof Member && ((Member)a1).get().equals(this.paramListName);
}
return false;
}
@Override
public int getMethodArgsLength(ASTList v2) {
final String v3 = this.paramListName;
int v4 = 0;
while (v2 != null) {
final ASTree a1 = v2.head();
if (a1 instanceof Member && ((Member)a1).get().equals(v3)) {
if (this.paramTypeList != null) {
v4 += this.paramTypeList.length;
}
}
else {
++v4;
}
v2 = v2.tail();
}
return v4;
}
@Override
public void atMethodArgs(ASTList v-6, final int[] v-5, final int[] v-4, final String[] v-3) throws CompileError {
final CtClass[] paramTypeList = this.paramTypeList;
final String paramListName = this.paramListName;
int v0 = 0;
while (v-6 != null) {
final ASTree v2 = v-6.head();
if (v2 instanceof Member && ((Member)v2).get().equals(paramListName)) {
if (paramTypeList != null) {
final int a3 = paramTypeList.length;
int a4 = this.indexOfParam1();
for (final CtClass a6 : paramTypeList) {
a4 += this.bytecode.addLoad(a4, a6);
this.setType(a6);
v-5[v0] = this.exprType;
v-4[v0] = this.arrayDim;
v-3[v0] = this.className;
++v0;
}
}
}
else {
v2.accept(this);
v-5[v0] = this.exprType;
v-4[v0] = this.arrayDim;
v-3[v0] = this.className;
++v0;
}
v-6 = v-6.tail();
}
}
void compileInvokeSpecial(final ASTree a1, final int a2, final String a3, final ASTList a4) throws CompileError {
a1.accept(this);
final int v1 = this.getMethodArgsLength(a4);
this.atMethodArgs(a4, new int[v1], new int[v1], new String[v1]);
this.bytecode.addInvokespecial(a2, a3);
this.setReturnType(a3, false, false);
this.addNullIfVoid();
}
@Override
protected void atReturnStmnt(final Stmnt a1) throws CompileError {
ASTree v1 = a1.getLeft();
if (v1 != null && this.returnType == CtClass.voidType) {
this.compileExpr(v1);
if (CodeGen.is2word(this.exprType, this.arrayDim)) {
this.bytecode.addOpcode(88);
}
else if (this.exprType != 344) {
this.bytecode.addOpcode(87);
}
v1 = null;
}
this.atReturnStmnt2(v1);
}
public int recordReturnType(final CtClass a4, final String v1, final String v2, final SymbolTable v3) throws CompileError {
this.returnType = a4;
this.returnCastName = v1;
this.returnVarName = v2;
if (v2 == null) {
return -1;
}
final int a5 = this.getMaxLocals();
final int a6 = a5 + this.recordVar(a4, v2, a5, v3);
this.setMaxLocals(a6);
return a5;
}
public void recordType(final CtClass a1) {
this.dollarType = a1;
}
public int recordParams(final CtClass[] a1, final boolean a2, final String a3, final String a4, final String a5, final SymbolTable a6) throws CompileError {
return this.recordParams(a1, a2, a3, a4, a5, !a2, 0, this.getThisName(), a6);
}
public int recordParams(final CtClass[] a5, final boolean a6, final String a7, final String a8, final String a9, final boolean v1, final int v2, final String v3, final SymbolTable v4) throws CompileError {
this.paramTypeList = a5;
this.paramArrayName = a8;
this.paramListName = a9;
this.paramVarBase = v2;
this.useParam0 = v1;
if (v3 != null) {
this.param0Type = MemberResolver.jvmToJavaName(v3);
}
this.inStaticMethod = a6;
int v5 = v2;
if (v1) {
final String a10 = a7 + "0";
final Declarator a11 = new Declarator(307, MemberResolver.javaToJvmName(v3), 0, v5++, new Symbol(a10));
v4.append(a10, a11);
}
for (int a12 = 0; a12 < a5.length; ++a12) {
v5 += this.recordVar(a5[a12], a7 + (a12 + 1), v5, v4);
}
if (this.getMaxLocals() < v5) {
this.setMaxLocals(v5);
}
return v5;
}
public int recordVariable(final CtClass v1, final String v2, final SymbolTable v3) throws CompileError {
if (v2 == null) {
return -1;
}
final int a1 = this.getMaxLocals();
final int a2 = a1 + this.recordVar(v1, v2, a1, v3);
this.setMaxLocals(a2);
return a1;
}
private int recordVar(final CtClass a1, final String a2, final int a3, final SymbolTable a4) throws CompileError {
if (a1 == CtClass.voidType) {
this.exprType = 307;
this.arrayDim = 0;
this.className = "java/lang/Object";
}
else {
this.setType(a1);
}
final Declarator v1 = new Declarator(this.exprType, this.className, this.arrayDim, a3, new Symbol(a2));
a4.append(a2, v1);
return CodeGen.is2word(this.exprType, this.arrayDim) ? 2 : 1;
}
public void recordVariable(final String a1, final String a2, final int a3, final SymbolTable a4) throws CompileError {
int v2;
char v3;
for (v2 = 0; (v3 = a1.charAt(v2)) == '['; ++v2) {}
final int v4 = MemberResolver.descToType(v3);
String v5 = null;
if (v4 == 307) {
if (v2 == 0) {
v5 = a1.substring(1, a1.length() - 1);
}
else {
v5 = a1.substring(v2 + 1, a1.length() - 1);
}
}
final Declarator v6 = new Declarator(v4, v5, v2, a3, new Symbol(a2));
a4.append(a2, v6);
}
public static int compileParameterList(final Bytecode v-4, final CtClass[] v-3, int v-2) {
if (v-3 == null) {
v-4.addIconst(0);
v-4.addAnewarray("java.lang.Object");
return 1;
}
final CtClass[] v3 = { null };
final int v0 = v-3.length;
v-4.addIconst(v0);
v-4.addAnewarray("java.lang.Object");
for (int v2 = 0; v2 < v0; ++v2) {
v-4.addOpcode(89);
v-4.addIconst(v2);
if (v-3[v2].isPrimitive()) {
final CtPrimitiveType a1 = (CtPrimitiveType)v-3[v2];
final String a2 = a1.getWrapperName();
v-4.addNew(a2);
v-4.addOpcode(89);
final int a3 = v-4.addLoad(v-2, a1);
v-2 += a3;
v3[0] = a1;
v-4.addInvokespecial(a2, "<init>", Descriptor.ofMethod(CtClass.voidType, v3));
}
else {
v-4.addAload(v-2);
++v-2;
}
v-4.addOpcode(83);
}
return 8;
}
protected void compileUnwrapValue(final CtClass v2, final Bytecode v3) throws CompileError {
if (v2 == CtClass.voidType) {
this.addNullIfVoid();
return;
}
if (this.exprType == 344) {
throw new CompileError("invalid type for " + this.returnCastName);
}
if (v2 instanceof CtPrimitiveType) {
final CtPrimitiveType a1 = (CtPrimitiveType)v2;
final String a2 = a1.getWrapperName();
v3.addCheckcast(a2);
v3.addInvokevirtual(a2, a1.getGetMethodName(), a1.getGetMethodDescriptor());
this.setType(v2);
}
else {
v3.addCheckcast(v2);
this.setType(v2);
}
}
public void setType(final CtClass a1) throws CompileError {
this.setType(a1, 0);
}
private void setType(final CtClass v2, final int v3) throws CompileError {
if (v2.isPrimitive()) {
final CtPrimitiveType a1 = (CtPrimitiveType)v2;
this.exprType = MemberResolver.descToType(a1.getDescriptor());
this.arrayDim = v3;
this.className = null;
}
else {
if (v2.isArray()) {
try {
this.setType(v2.getComponentType(), v3 + 1);
return;
}
catch (NotFoundException a2) {
throw new CompileError("undefined type: " + v2.getName());
}
}
this.exprType = 307;
this.arrayDim = v3;
this.className = MemberResolver.javaToJvmName(v2.getName());
}
}
public void doNumCast(final CtClass v2) throws CompileError {
if (this.arrayDim == 0 && !CodeGen.isRefType(this.exprType)) {
if (!(v2 instanceof CtPrimitiveType)) {
throw new CompileError("type mismatch");
}
final CtPrimitiveType a1 = (CtPrimitiveType)v2;
this.atNumCastExpr(this.exprType, MemberResolver.descToType(a1.getDescriptor()));
}
}
}
|
def _make_downloader_mock(self):
def _download(url, tmpdir_path, verify):
del verify
self.downloaded_urls.append(url)
filename = self.dl_fnames.get(url, os.path.basename(url))
path = os.path.join(tmpdir_path, filename)
self.fs.add_file(path)
dl_result = downloader.DownloadResult(
path=utils.as_path(path),
url_info=self.dl_results[url],
)
return promise.Promise.resolve(dl_result)
return mock.patch.object(
downloader._Downloader, 'download', side_effect=_download) |
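`_make_downloader_mock` above patches `_Downloader.download` with a `side_effect` callable that records each requested URL and returns a canned result instead of touching the network. A minimal stdlib illustration of the same `mock.patch.object(..., side_effect=...)` pattern; the `Downloader` class here is a stand-in, not the real downloader:

```python
from unittest import mock

class Downloader:
    def download(self, url):
        raise RuntimeError("would hit the network")

downloaded_urls = []

def fake_download(url):
    # Record the call instead of performing real I/O, and return a
    # deterministic local path derived from the URL.
    downloaded_urls.append(url)
    return "/tmp/" + url.rsplit("/", 1)[-1]

dl = Downloader()
with mock.patch.object(Downloader, "download", side_effect=fake_download):
    path = dl.download("http://example.com/data.zip")
```

Patching at the class level means every instance sees the fake during the `with` block, and the original method is restored automatically on exit.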
// LookupLocationContext performs a location lookup. If you want memoisation
// of the results, you should use MaybeLookupLocationContext.
func (s *Session) LookupLocationContext(ctx context.Context) (*geolocate.Results, error) {
task := geolocate.NewTask(geolocate.Config{
Logger: s.Logger(),
Resolver: s.resolver,
UserAgent: s.UserAgent(),
})
return task.Run(ctx)
} |
A fight at Silver Springs Park on Monday evening drew a large crowd of spectators, including Bree Holloman, who said she went only to observe from the sidelines. "I feel like I shouldn't have even shown up, but at the same time, I didn't want to fight and why would they fight me?" she questioned.
As Holloman watched various people brawling, a man punched her in the face.
Reeling from the blow, Holloman jumped up and bolted. Another man, though, caught up with her and dragged her across a road before he started beating her. The entire ordeal was caught on camera and posted on Facebook.
Holloman said they singled her out because she's transgender.
"They want to beat me up because it's like very hard for them to fathom that I am a woman, but they still think I'm a man," Holloman said.
This violent act was the second time in a year Holloman said she was beaten up because of her gender identity.
After Monday's incident, she landed in the hospital with a swollen black eye and scraped knees and an elbow. Aside from her physical injuries, Holloman said she suffered emotional ones as well.
"I'm tired of feeling like I can't be who I am and walk around," she explained. "I really don't feel safe in Springfield. I don't feel safe here. I'm trying to move away."
A Springfield Police Department spokesperson said it's too early in the investigation to determine if gender identity was a factor. However, if that comes up during the course of the investigation, police will pass that information along to the prosecutor. |
It's the breast vegetable we've seen for a long time: Farmer discovers hilarious rude-shaped potato while harvesting his crop
Potato found by amused workers at Farndon Fields farm near Leicester
Farmer says the potato is one of the strangest he has seen in 30 years
Staff keep potato in a locked drawer to preserve it for as long as possible
They are now considering using it to promote a breast cancer charity
A farmer couldn't believe his eyes when he stumbled upon a breast-shaped potato growing in his field.
The spud was found by amused workers at Farndon Fields Farm Shop near Market Harborough, Leicestershire, while they were grading their potatoes.
Kevin Stokes, the shop's owner and an ambassador for the Potato Council, said he is now looking into the best way to preserve the 'hilarious' tuber, which he claims is the strangest he has come across during 30 years in the industry.
Unusual: The farmer who found the potato says it is one of the strangest he has come across during 30 years in the industry. He is currently exploring ways to preserve the 'hilarious' spud
The farm shop's marketing manager Nicola Stokes agreed that the potato is totally out of the ordinary, adding that it was 'the breast example she's seen in a long time.'
Ms Stokes said: 'Kevin the managing director brought it into the office and everyone fell around laughing. He'd been sorting the potatoes which had been harvested and found this. It was hilarious.'
She added: 'We've been growing potatoes here for more than 30 years. We've had some odd shapes before but nothing like this'.
The potato is currently being kept out of daylight to preserve it for as long as possible, with a long-term plan to pickle it expected to be drawn up over the coming days.
Productive: Nicola Stokes says staff at Farndon Fields Farm Shop (pictured) want to do something positive with the potato and are considering using it to promote a breast cancer charity
One idea is that the potato could be auctioned to raise money for charity.
Ms Stokes said: 'It would be nice to do something productive with it. We might use it to promote a Breast Cancer campaign or something'.
In the meantime the potato is being kept away from prying eyes, hidden in a drawer on the farm.
Ms Stokes said: 'Potatoes do have a shelf life so we are exploring ways to preserve it.'
// TestCreateServiceInstanceWithAuthError tests creating a ServiceInstance when
// the secret containing the broker authorization info cannot be found.
func TestCreateServiceInstanceWithAuthError(t *testing.T) {
ct := &controllerTest{
t: t,
broker: func() *v1beta1.ClusterServiceBroker {
b := getTestBroker()
b.Spec.AuthInfo = &v1beta1.ClusterServiceBrokerAuthInfo{
Basic: &v1beta1.ClusterBasicAuthConfig{
SecretRef: &v1beta1.ObjectReference{
Namespace: testNamespace,
Name: "secret-name",
},
},
}
return b
}(),
instance: getTestInstance(),
skipVerifyingInstanceSuccess: true,
preCreateBroker: func(ct *controllerTest) {
prependGetSecretReaction(ct.kubeClient, "secret-name", map[string][]byte{
"username": []byte("user"),
"password": []byte("pass"),
})
},
preCreateInstance: func(ct *controllerTest) {
prependGetSecretNotFoundReaction(ct.kubeClient)
},
}
ct.run(func(ct *controllerTest) {
if err := util.WaitForInstanceCondition(ct.client, testNamespace, testInstanceName, v1beta1.ServiceInstanceCondition{
Type: v1beta1.ServiceInstanceConditionReady,
Status: v1beta1.ConditionFalse,
Reason: "ErrorGettingAuthCredentials",
}); err != nil {
t.Fatalf("error waiting for instance reconciliation to fail: %v", err)
}
})
} |
#include <bits/stdc++.h>
typedef long long int ll;
#define MAX 1000000001
#define fio ios_base::sync_with_stdio(false);
using namespace std;
int main() {
fio;
    int n,m,temp,count=0;
    vector<int> has,take;  // numbers already owned / numbers chosen to take
    cin>>n>>m;
    for(int i=0;i<n;i++) {cin>>temp; has.push_back(temp);}
    sort(has.begin(),has.end());  // sorted so binary_search can be used below
    // Greedily take the cheapest values not yet owned while money remains.
    for(int i=1;i<MAX && m>0;i++)
    {
        if(!binary_search(has.begin(), has.end(), i))
        {
            m -= i;
            if(m>=0) {count++; take.push_back(i);}
        }
    }
cout<<count<<"\n";
for(auto it = take.begin(); it!=take.end(); it++)
cout<<*it<<" ";
cout<<"\n";
return 0;
} |
from snakemake import shell
input, output, params, threads, wildcards, config = snakemake.input, snakemake.output, snakemake.params, snakemake.threads, snakemake.wildcards, snakemake.config
if config['y'][wildcards.yid]['t'][wildcards.sid]['paired']:
shell("""
trimmomatic PE -threads {threads} \
{input.r1} {input.r2} {output.r1} {output.r1u} {output.r2} {output.r2u} \
{params.trimmer}
touch {output.r0}
""")
else:
shell("""
trimmomatic SE -threads {threads} \
{input} {output} \
{params.trimmer}
touch {output.r1} {output.r2} {output.r1u} {output.r2u}
""")
|
{-# LANGUAGE TemplateHaskell #-}
module UnitTest.CallbackParse.ThreadControl where
import Data.Aeson (Value)
import Data.Yaml.TH (decodeFile)
import Test.Tasty as Tasty
import Web.Facebook.Messenger
import UnitTest.Internal
--------------------
-- THREAD CONTROL --
--------------------
threadControlTests :: TestTree
threadControlTests = Tasty.testGroup "Thread Control Callbacks"
[ passThreadCallback
, requestThreadCallback
, takeThreadCallback
]
passThreadVal :: Value
passThreadVal = $$(decodeFile "test/json/callback/pass_thread_control.json")
requestThreadVal :: Value
requestThreadVal = $$(decodeFile "test/json/callback/request_thread_control.json")
takeThreadVal :: Value
takeThreadVal = $$(decodeFile "test/json/callback/take_thread_control.json")
passThreadCallback :: TestTree
passThreadCallback = parseTest "Pass thread" passThreadVal
$ msg $ CMPassThread $ PassThread (AppId "123456789")
$ Just "Additional content that the caller wants to set"
requestThreadCallback :: TestTree
requestThreadCallback = parseTest "Request thread" requestThreadVal
$ msg $ CMRequestThread $ RequestThread (AppId "123456789")
$ Just "additional content that the caller wants to set"
takeThreadCallback :: TestTree
takeThreadCallback = parseTest "Take thread" takeThreadVal
$ msg $ CMTakeThread $ TakeThread (AppId "123456789")
$ Just "additional content that the caller wants to set!"
msg :: CallbackContent -> CallbackMessaging
msg contnt = standardMessaging (Just 1458692752478)
Nothing
contnt
|
/**
 * Class defining an integer type that can be plugged into
 * the type system recognized by the engine.
 * <p>
 * This type stores integer values in an {@link IntegerHolder} object,
 * which can represent:
 * <ul>
 * <li>integers in the range -9223372036854775808 to 9223372036854775807 (64 bit, signed)
 * <li>the null value
 * </ul>
 *
 * @author Przemek Hertel
 * @since 1.0.0
 */
@ParamType(IntegerType.TYPE_NAME)
public class IntegerType implements Type<IntegerHolder> {
public static final String TYPE_NAME = "integer";
    /**
     * Converts the holder object to a String.
     *
     * @param value the holder object
     * @return the string representation of the holder, or null if the holder's value is null
     */
@Override
public String encode(IntegerHolder value) {
Long v = value.getValue();
return v != null ? v.toString() : null;
}
    /**
     * Converts a string to a holder object.
     * May throw an exception if the string does not represent a number
     * that can be stored in an {@link IntegerHolder} object.
     * A string that is null or contains only whitespace is converted
     * to IntegerHolder(null).
     *
     * @param text a string representing an integer
     * @return the holder object
     * @throws NumberFormatException if the string does not represent a {@link Long} value
     */
@Override
public IntegerHolder decode(String text) {
Long value = EngineUtil.hasText(text) ? Long.valueOf(text.trim()) : null;
return new IntegerHolder(value);
}
    /**
     * If the given object (obj) represents an integer type that can be
     * losslessly stored in a variable of type Long, this method converts
     * the object to an {@link IntegerHolder}.
     * <p>
     * If obj is a string that can be losslessly parsed as a long,
     * that string is converted as well. If the string does not parse
     * to a long, the method throws an exception.
     * <p>
     * Java types convertible to IntegerHolder:
     * <ul>
     * <li>Long
     * <li>Integer
     * <li>Short
     * <li>Byte
     * <li>null
     * <li>String, if it can be parsed to a Long
     * </ul>
     * An argument equal to null is converted to an IntegerHolder representing null.
     * <p>
     * For example:
     * <pre>
     * convert( new Long(17) );     // IntegerHolder.getValue() : Long(17)
     * convert( 17 );               // IntegerHolder.getValue() : Long(17)
     * convert( null );             // IntegerHolder.getValue() : null
     * convert( "17" );             // IntegerHolder.getValue() : Long(17)
     * convert( 0.11 );             // throws IllegalArgumentException
     * convert( "9A" );             // throws NumberFormatException
     * </pre>
     *
     * @param obj any Java object or null
     * @return the holder object
     * @throws IllegalArgumentException if the given object cannot be converted to an IntegerHolder
     * @throws NumberFormatException if the object, as a string, cannot be parsed to a Long
     */
@Override
public IntegerHolder convert(Object obj) {
if (obj instanceof Long || obj instanceof Integer || obj instanceof Short || obj instanceof Byte) {
Number n = (Number) obj;
return new IntegerHolder(n.longValue());
}
if (obj == null) {
return new IntegerHolder(null);
}
if (obj instanceof String) {
return decode((String) obj);
}
throw new IllegalArgumentException("conversion not supported for: " + obj.getClass());
}
@Override
public IntegerHolder[] newArray(int size) {
return new IntegerHolder[size];
}
} |
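For readers more comfortable with Python, here is a hypothetical sketch of the conversion rules that `convert` and `decode` implement above; the function name and the use of `ValueError` are illustrative, not part of the Java engine:

```python
# Hypothetical Python sketch of the IntegerType conversion rules:
# ints pass through, strings are trimmed and parsed (blank -> None),
# None stays None, everything else is rejected.
def convert(obj):
    if obj is None:
        return None
    if isinstance(obj, bool):
        # bool is an int subclass in Python; the Java type has no such case
        raise ValueError("conversion not supported for: bool")
    if isinstance(obj, int):
        return obj
    if isinstance(obj, str):
        text = obj.strip()
        return int(text) if text else None  # blank string -> null holder
    raise ValueError("conversion not supported for: %s" % type(obj).__name__)
```

As in the Javadoc examples, `convert(17)` and `convert("17")` both yield 17, `convert(None)` yields the null value, and a float such as 0.11 is rejected.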
package com.rilixtech.themify_icons_typeface;
import android.content.Context;
import android.graphics.Typeface;
import com.rilixtech.materialfancybutton.typeface.IIcon;
import com.rilixtech.materialfancybutton.typeface.ITypeface;
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedList;
public class ThemifyIcons implements ITypeface {
private static final String TTF_FILE = "themify-icons-v0.1.2.ttf";
private static final String MAPPING_FONT_PREFIX = "thei";
private static Typeface typeface = null;
private static HashMap<String, Character> mChars;
@Override public IIcon getIcon(String key) {
return Icon.valueOf(key);
}
@Override public HashMap<String, Character> getCharacters() {
if (mChars == null) {
HashMap<String, Character> aChars = new HashMap<String, Character>();
for (Icon v : Icon.values()) {
aChars.put(v.name(), v.character);
}
mChars = aChars;
}
return mChars;
}
@Override public String getMappingPrefix() {
return MAPPING_FONT_PREFIX;
}
@Override public String getFontName() {
return "Themify Icons";
}
@Override public String getVersion() {
return "0.1.2";
}
  @Override public int getIconCount() {
    return getCharacters().size();
  }
@Override public Collection<String> getIcons() {
Collection<String> icons = new LinkedList<String>();
for (Icon value : Icon.values()) {
icons.add(value.name());
}
return icons;
}
@Override public String getAuthor() {
return "<NAME>";
}
@Override public String getUrl() {
return "http://themify.me/themify-icons";
}
@Override public String getDescription() {
return "Themify Icons is a complete set of icons for use in web design and apps, consisting of 320+ pixel-perfect, hand-crafted icons that draw inspiration from Apple iOS 7.";
}
@Override public String getLicense() {
return "SIL Open Font License (OFL)";
}
@Override public String getLicenseUrl() {
return "http://scripts.sil.org/OFL";
}
@Override public Typeface getTypeface(Context context) {
if (typeface == null) {
try {
typeface = Typeface.createFromAsset(context.getAssets(), "fonts/" + TTF_FILE);
} catch (Exception e) {
return null;
}
}
return typeface;
}
public enum Icon implements IIcon {
thei_wand('\ue600'),
thei_volume('\ue601'),
thei_user('\ue602'),
thei_unlock('\ue603'),
thei_unlink('\ue604'),
thei_trash('\ue605'),
thei_thought('\ue606'),
thei_target('\ue607'),
thei_tag('\ue608'),
thei_tablet('\ue609'),
thei_star('\ue60a'),
thei_spray('\ue60b'),
thei_signal('\ue60c'),
thei_shopping_cart('\ue60d'),
thei_shopping_cart_full('\ue60e'),
thei_settings('\ue60f'),
thei_search('\ue610'),
thei_zoom_in('\ue611'),
thei_zoom_out('\ue612'),
thei_cut('\ue613'),
thei_ruler('\ue614'),
thei_ruler_pencil('\ue615'),
thei_ruler_alt('\ue616'),
thei_bookmark('\ue617'),
thei_bookmark_alt('\ue618'),
thei_reload('\ue619'),
thei_plus('\ue61a'),
thei_pin('\ue61b'),
thei_pencil('\ue61c'),
thei_pencil_alt('\ue61d'),
thei_paint_roller('\ue61e'),
thei_paint_bucket('\ue61f'),
thei_na('\ue620'),
thei_mobile('\ue621'),
thei_minus('\ue622'),
thei_medall('\ue623'),
thei_medall_alt('\ue624'),
thei_marker('\ue625'),
thei_marker_alt('\ue626'),
thei_arrow_up('\ue627'),
thei_arrow_right('\ue628'),
thei_arrow_left('\ue629'),
thei_arrow_down('\ue62a'),
thei_lock('\ue62b'),
thei_location_arrow('\ue62c'),
thei_link('\ue62d'),
thei_layout('\ue62e'),
thei_layers('\ue62f'),
thei_layers_alt('\ue630'),
thei_key('\ue631'),
thei_import('\ue632'),
thei_image('\ue633'),
thei_heart('\ue634'),
thei_heart_broken('\ue635'),
thei_hand_stop('\ue636'),
thei_hand_open('\ue637'),
thei_hand_drag('\ue638'),
thei_folder('\ue639'),
thei_flag('\ue63a'),
thei_flag_alt('\ue63b'),
thei_flag_alt_2('\ue63c'),
thei_eye('\ue63d'),
thei_export('\ue63e'),
thei_exchange_vertical('\ue63f'),
thei_desktop('\ue640'),
thei_cup('\ue641'),
thei_crown('\ue642'),
thei_comments('\ue643'),
thei_comment('\ue644'),
thei_comment_alt('\ue645'),
thei_close('\ue646'),
thei_clip('\ue647'),
thei_angle_up('\ue648'),
thei_angle_right('\ue649'),
thei_angle_left('\ue64a'),
thei_angle_down('\ue64b'),
thei_check('\ue64c'),
thei_check_box('\ue64d'),
thei_camera('\ue64e'),
thei_announcement('\ue64f'),
thei_brush('\ue650'),
thei_briefcase('\ue651'),
thei_bolt('\ue652'),
thei_bolt_alt('\ue653'),
thei_blackboard('\ue654'),
thei_bag('\ue655'),
thei_move('\ue656'),
thei_arrows_vertical('\ue657'),
thei_arrows_horizontal('\ue658'),
thei_fullscreen('\ue659'),
thei_arrow_top_right('\ue65a'),
thei_arrow_top_left('\ue65b'),
thei_arrow_circle_up('\ue65c'),
thei_arrow_circle_right('\ue65d'),
thei_arrow_circle_left('\ue65e'),
thei_arrow_circle_down('\ue65f'),
thei_angle_double_up('\ue660'),
thei_angle_double_right('\ue661'),
thei_angle_double_left('\ue662'),
thei_angle_double_down('\ue663'),
thei_zip('\ue664'),
thei_world('\ue665'),
thei_wheelchair('\ue666'),
thei_view_list('\ue667'),
thei_view_list_alt('\ue668'),
thei_view_grid('\ue669'),
thei_uppercase('\ue66a'),
thei_upload('\ue66b'),
thei_underline('\ue66c'),
thei_truck('\ue66d'),
thei_timer('\ue66e'),
thei_ticket('\ue66f'),
thei_thumb_up('\ue670'),
thei_thumb_down('\ue671'),
thei_text('\ue672'),
thei_stats_up('\ue673'),
thei_stats_down('\ue674'),
thei_split_v('\ue675'),
thei_split_h('\ue676'),
thei_smallcap('\ue677'),
thei_shine('\ue678'),
thei_shift_right('\ue679'),
thei_shift_left('\ue67a'),
thei_shield('\ue67b'),
thei_notepad('\ue67c'),
thei_server('\ue67d'),
thei_quote_right('\ue67e'),
thei_quote_left('\ue67f'),
thei_pulse('\ue680'),
thei_printer('\ue681'),
thei_power_off('\ue682'),
thei_plug('\ue683'),
thei_pie_chart('\ue684'),
thei_paragraph('\ue685'),
thei_panel('\ue686'),
thei_package('\ue687'),
thei_music('\ue688'),
thei_music_alt('\ue689'),
thei_mouse('\ue68a'),
thei_mouse_alt('\ue68b'),
thei_money('\ue68c'),
thei_microphone('\ue68d'),
thei_menu('\ue68e'),
thei_menu_alt('\ue68f'),
thei_map('\ue690'),
thei_map_alt('\ue691'),
thei_loop('\ue692'),
thei_location_pin('\ue693'),
thei_list('\ue694'),
thei_light_bulb('\ue695'),
thei_Italic('\ue696'),
thei_info('\ue697'),
thei_infinite('\ue698'),
thei_id_badge('\ue699'),
thei_hummer('\ue69a'),
thei_home('\ue69b'),
thei_help('\ue69c'),
thei_headphone('\ue69d'),
thei_harddrives('\ue69e'),
thei_harddrive('\ue69f'),
thei_gift('\ue6a0'),
thei_game('\ue6a1'),
thei_filter('\ue6a2'),
thei_files('\ue6a3'),
thei_file('\ue6a4'),
thei_eraser('\ue6a5'),
thei_envelope('\ue6a6'),
thei_download('\ue6a7'),
thei_direction('\ue6a8'),
thei_direction_alt('\ue6a9'),
thei_dashboard('\ue6aa'),
thei_control_stop('\ue6ab'),
thei_control_shuffle('\ue6ac'),
thei_control_play('\ue6ad'),
thei_control_pause('\ue6ae'),
thei_control_forward('\ue6af'),
thei_control_backward('\ue6b0'),
thei_cloud('\ue6b1'),
thei_cloud_up('\ue6b2'),
thei_cloud_down('\ue6b3'),
thei_clipboard('\ue6b4'),
thei_car('\ue6b5'),
thei_calendar('\ue6b6'),
thei_book('\ue6b7'),
thei_bell('\ue6b8'),
thei_basketball('\ue6b9'),
thei_bar_chart('\ue6ba'),
thei_bar_chart_alt('\ue6bb'),
thei_back_right('\ue6bc'),
thei_back_left('\ue6bd'),
thei_arrows_corner('\ue6be'),
thei_archive('\ue6bf'),
thei_anchor('\ue6c0'),
thei_align_right('\ue6c1'),
thei_align_left('\ue6c2'),
thei_align_justify('\ue6c3'),
thei_align_center('\ue6c4'),
thei_alert('\ue6c5'),
thei_alarm_clock('\ue6c6'),
thei_agenda('\ue6c7'),
thei_write('\ue6c8'),
thei_window('\ue6c9'),
thei_widgetized('\ue6ca'),
thei_widget('\ue6cb'),
thei_widget_alt('\ue6cc'),
thei_wallet('\ue6cd'),
thei_video_clapper('\ue6ce'),
thei_video_camera('\ue6cf'),
thei_vector('\ue6d0'),
thei_themify_logo('\ue6d1'),
thei_themify_favicon('\ue6d2'),
thei_themify_favicon_alt('\ue6d3'),
thei_support('\ue6d4'),
thei_stamp('\ue6d5'),
thei_split_v_alt('\ue6d6'),
thei_slice('\ue6d7'),
thei_shortcode('\ue6d8'),
thei_shift_right_alt('\ue6d9'),
thei_shift_left_alt('\ue6da'),
thei_ruler_alt_2('\ue6db'),
thei_receipt('\ue6dc'),
thei_pin2('\ue6dd'),
thei_pin_alt('\ue6de'),
thei_pencil_alt2('\ue6df'),
thei_palette('\ue6e0'),
thei_more('\ue6e1'),
thei_more_alt('\ue6e2'),
thei_microphone_alt('\ue6e3'),
thei_magnet('\ue6e4'),
thei_line_double('\ue6e5'),
thei_line_dotted('\ue6e6'),
thei_line_dashed('\ue6e7'),
thei_layout_width_full('\ue6e8'),
thei_layout_width_default('\ue6e9'),
thei_layout_width_default_alt('\ue6ea'),
thei_layout_tab('\ue6eb'),
thei_layout_tab_window('\ue6ec'),
thei_layout_tab_v('\ue6ed'),
thei_layout_tab_min('\ue6ee'),
thei_layout_slider('\ue6ef'),
thei_layout_slider_alt('\ue6f0'),
thei_layout_sidebar_right('\ue6f1'),
thei_layout_sidebar_none('\ue6f2'),
thei_layout_sidebar_left('\ue6f3'),
thei_layout_placeholder('\ue6f4'),
thei_layout_menu('\ue6f5'),
thei_layout_menu_v('\ue6f6'),
thei_layout_menu_separated('\ue6f7'),
thei_layout_menu_full('\ue6f8'),
thei_layout_media_right_alt('\ue6f9'),
thei_layout_media_right('\ue6fa'),
thei_layout_media_overlay('\ue6fb'),
thei_layout_media_overlay_alt('\ue6fc'),
thei_layout_media_overlay_alt_2('\ue6fd'),
thei_layout_media_left_alt('\ue6fe'),
thei_layout_media_left('\ue6ff'),
thei_layout_media_center_alt('\ue700'),
thei_layout_media_center('\ue701'),
thei_layout_list_thumb('\ue702'),
thei_layout_list_thumb_alt('\ue703'),
thei_layout_list_post('\ue704'),
thei_layout_list_large_image('\ue705'),
thei_layout_line_solid('\ue706'),
thei_layout_grid4('\ue707'),
thei_layout_grid3('\ue708'),
thei_layout_grid2('\ue709'),
thei_layout_grid2_thumb('\ue70a'),
thei_layout_cta_right('\ue70b'),
thei_layout_cta_left('\ue70c'),
thei_layout_cta_center('\ue70d'),
thei_layout_cta_btn_right('\ue70e'),
thei_layout_cta_btn_left('\ue70f'),
thei_layout_column4('\ue710'),
thei_layout_column3('\ue711'),
thei_layout_column2('\ue712'),
thei_layout_accordion_separated('\ue713'),
thei_layout_accordion_merged('\ue714'),
thei_layout_accordion_list('\ue715'),
thei_ink_pen('\ue716'),
thei_info_alt('\ue717'),
thei_help_alt('\ue718'),
thei_headphone_alt('\ue719'),
thei_hand_point_up('\ue71a'),
thei_hand_point_right('\ue71b'),
thei_hand_point_left('\ue71c'),
thei_hand_point_down('\ue71d'),
thei_gallery('\ue71e'),
thei_face_smile('\ue71f'),
thei_face_sad('\ue720'),
thei_credit_card('\ue721'),
thei_control_skip_forward('\ue722'),
thei_control_skip_backward('\ue723'),
thei_control_record('\ue724'),
thei_control_eject('\ue725'),
thei_comments_smiley('\ue726'),
thei_brush_alt('\ue727'),
thei_youtube('\ue728'),
thei_vimeo('\ue729'),
thei_twitter('\ue72a'),
thei_time('\ue72b'),
thei_tumblr('\ue72c'),
thei_skype('\ue72d'),
thei_share('\ue72e'),
thei_share_alt('\ue72f'),
thei_rocket('\ue730'),
thei_pinterest('\ue731'),
thei_new_window('\ue732'),
thei_microsoft('\ue733'),
thei_list_ol('\ue734'),
thei_linkedin('\ue735'),
thei_layout_sidebar_2('\ue736'),
thei_layout_grid4_alt('\ue737'),
thei_layout_grid3_alt('\ue738'),
thei_layout_grid2_alt('\ue739'),
thei_layout_column4_alt('\ue73a'),
thei_layout_column3_alt('\ue73b'),
thei_layout_column2_alt('\ue73c'),
thei_instagram('\ue73d'),
thei_google('\ue73e'),
thei_github('\ue73f'),
thei_flickr('\ue740'),
thei_facebook('\ue741'),
thei_dropbox('\ue742'),
thei_dribbble('\ue743'),
thei_apple('\ue744'),
thei_android('\ue745'),
thei_save('\ue746'),
thei_save_alt('\ue747'),
thei_yahoo('\ue748'),
thei_wordpress('\ue749'),
thei_vimeo_alt('\ue74a'),
thei_twitter_alt('\ue74b'),
thei_tumblr_alt('\ue74c'),
thei_trello('\ue74d'),
thei_stack_overflow('\ue74e'),
thei_soundcloud('\ue74f'),
thei_sharethis('\ue750'),
thei_sharethis_alt('\ue751'),
thei_reddit('\ue752'),
thei_pinterest_alt('\ue753'),
thei_microsoft_alt('\ue754'),
thei_linux('\ue755'),
thei_jsfiddle('\ue756'),
thei_joomla('\ue757'),
thei_html5('\ue758'),
thei_flickr_alt('\ue759'),
thei_email('\ue75a'),
thei_drupal('\ue75b'),
thei_dropbox_alt('\ue75c'),
thei_css3('\ue75d'),
thei_rss('\ue75e'),
thei_rss_alt('\ue75f');
char character;
Icon(char character) {
this.character = character;
}
public String getFormattedName() {
return "{" + name() + "}";
}
public char getCharacter() {
return character;
}
public String getName() {
return name();
}
// remember the typeface so we can use it later
private static ITypeface typeface;
public ITypeface getTypeface() {
if (typeface == null) {
typeface = new ThemifyIcons();
}
return typeface;
}
}
}
|
def display_dynamic_tags(self):
has_dynamic_sections = False
for section in self.elffile.iter_sections():
if not isinstance(section, DynamicSection):
continue
has_dynamic_sections = True
self._emitline("\nDynamic section at offset %s contains %s entries:" % (
self._format_hex(section['sh_offset']),
section.num_tags()))
self._emitline(" Tag Type Name/Value")
padding = 20 + (8 if self.elffile.elfclass == 32 else 0)
for tag in section.iter_tags():
if tag.entry.d_tag == 'DT_NEEDED':
parsed = 'Shared library: [%s]' % tag.needed
elif tag.entry.d_tag == 'DT_RPATH':
parsed = 'Library rpath: [%s]' % tag.rpath
elif tag.entry.d_tag == 'DT_RUNPATH':
parsed = 'Library runpath: [%s]' % tag.runpath
elif tag.entry.d_tag == 'DT_SONAME':
parsed = 'Library soname: [%s]' % tag.soname
elif tag.entry.d_tag.endswith(('SZ', 'ENT')):
parsed = '%i (bytes)' % tag['d_val']
elif tag.entry.d_tag == 'DT_FLAGS':
parsed = describe_dt_flags(tag.entry.d_val)
elif tag.entry.d_tag == 'DT_FLAGS_1':
parsed = 'Flags: %s' % describe_dt_flags_1(tag.entry.d_val)
elif tag.entry.d_tag.endswith(('NUM', 'COUNT')):
parsed = '%i' % tag['d_val']
elif tag.entry.d_tag == 'DT_PLTREL':
s = describe_dyn_tag(tag.entry.d_val)
if s.startswith('DT_'):
s = s[3:]
parsed = '%s' % s
else:
parsed = '%#x' % tag['d_val']
self._emitline(" %s %-*s %s" % (
self._format_hex(ENUM_D_TAG.get(tag.entry.d_tag, tag.entry.d_tag),
fullhex=True, lead0x=True),
padding,
'(%s)' % (tag.entry.d_tag[3:],),
parsed))
if not has_dynamic_sections:
self._emitline("\nThere is no dynamic section in this file.") |
def _calculate_pitch(self, lat_sat, long_sat, alt_sat, lat_drone, long_drone, alt_drone):
    R = 6371000  # mean Earth radius in meters
    lat_sat = math.radians(lat_sat)
    lat_drone = math.radians(lat_drone)
    long_sat = math.radians(long_sat)
    long_drone = math.radians(long_drone)
    delta_long = long_drone - long_sat
    delta_lat = lat_drone - lat_sat
    delta_alt = alt_drone - alt_sat
    # Haversine formula: great-circle ground distance between the two points.
    a = math.pow(math.sin(delta_lat / 2), 2) + math.cos(lat_sat) * \
        math.cos(lat_drone) * math.pow(math.sin(delta_long / 2), 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    d = R * c
    # Elevation (pitch) angle above the horizontal, in degrees.
    pitch_angle = math.atan2(delta_alt, d)
    pitch_angle = math.degrees(pitch_angle)
    return pitch_angle
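The same computation can be exercised in isolation; this standalone sketch repeats the haversine ground-distance and elevation-angle steps outside the class (the function name is illustrative):

```python
import math

def pitch_to_target(lat_sat, long_sat, alt_sat, lat_drone, long_drone, alt_drone):
    R = 6371000  # mean Earth radius in meters
    lat1, lat2 = math.radians(lat_sat), math.radians(lat_drone)
    dlat = lat2 - lat1
    dlong = math.radians(long_drone) - math.radians(long_sat)
    # Haversine formula for the great-circle ground distance.
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlong / 2) ** 2
    d = 2 * R * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    # Elevation angle above the horizontal, in degrees.
    return math.degrees(math.atan2(alt_drone - alt_sat, d))
```

Two sanity checks: a target directly overhead (zero ground distance) gives 90 degrees, and a target at the same altitude gives 0.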
/**
* Decodes the {@link Genotype} to a phenotype by using a SAT/PB solver.
*
* @param genotype
* the genotype
* @return the phenotype
*/
protected Model decodeSATGenotype(Genotype genotype) {
if (!isInit) {
init();
}
return manager.decodeSATGenotype(variables, genotype);
} |
def delete_project(lookoutvision_client, project_name):
    """
    Deletes a Lookout for Vision project.

    :param lookoutvision_client: A Boto3 Lookout for Vision client.
    :param project_name: The name of the project to delete.
    """
try:
logger.info("Deleting project: %s", project_name)
response = lookoutvision_client.delete_project(ProjectName=project_name)
logger.info("Deleted project ARN: %s ", response["ProjectArn"])
except ClientError as err:
logger.exception("Couldn't delete project %s.", project_name)
raise |
/** create img with relative filename.
*
* @param filename (caller is responsible for anchoring this)
* @return
*/
private HtmlImg createHtmlImg(String filename) {
HtmlImg img = new HtmlImg();
img.setSrc(filename);
return img;
} |
def fill_between_curves_uv(blk, crvlist, tr=False, reverse=True, sty={}, eps=1e-24):
style = dict(fc='c', ec='none', lw=0.25, zorder=100)
uv=dict(umin=-np.inf, umax=np.inf, vmin=-np.inf, vmax=np.inf)
uv.update(blk.uvbounds)
umin, umax = uv['umin'], uv['umax']
vmin, vmax = uv['vmin'], uv['vmax']
umin, umax, vmin, vmax = float(umin), float(umax), float(vmin), float(vmax)
    if reverse:
for i in range(len(crvlist)):
if (i % 2) == 1:
crvlist[i] = reverse_curve(crvlist[i])
    if tr:
crvlist = blk.update_curves_from_tr(crvlist)
crvlist = snap_crvlist_to_bounds(crvlist, umin=umin, umax=umax, vmin=vmin, vmax=vmax, eps=eps)
crvlist = blk.apply_masks( blk.update_curves_from_uv( crvlist ) )
U = np.concatenate([ crv.UV[0] for crv in crvlist ])
V = np.concatenate([ crv.UV[1] for crv in crvlist ])
x, y = V-U, V+U
xy = np.dstack([x,y])[0]
style.update(sty)
p = mpl.patches.Polygon(xy, **style)
plt.gca().add_patch(p) |
import torch
from rlpyt.utils.tensor import to_onehot, from_onehot
class DiscreteMixin:
"""Conversions to and from one-hot."""
def __init__(self, dim, dtype=torch.long, onehot_dtype=torch.float):
self._dim = dim
self.dtype = dtype
self.onehot_dtype = onehot_dtype
@property
def dim(self):
return self._dim
def to_onehot(self, indexes, dtype=None):
"""Convert from integer indexes to one-hot, preserving leading dimensions."""
return to_onehot(indexes, self._dim, dtype=dtype or self.onehot_dtype)
def from_onehot(self, onehot, dtype=None):
"""Convert from one-hot to integer indexes, preserving leading dimensions."""
        return from_onehot(onehot, dtype=dtype or self.dtype)
|
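The round trip the mixin performs can be sketched without torch; this pure-Python version illustrates what `to_onehot`/`from_onehot` do for a flat list of indexes, and is not the rlpyt implementation itself:

```python
# Pure-Python sketch of the one-hot round trip (illustrative only).
def to_onehot(indexes, dim):
    """Expand a list of integer indexes into one-hot rows of length dim."""
    return [[1.0 if j == i else 0.0 for j in range(dim)] for i in indexes]

def from_onehot(onehot):
    """Recover integer indexes as the position of the maximum in each row."""
    return [row.index(max(row)) for row in onehot]
```

Encoding and then decoding returns the original indexes, which is the invariant the mixin relies on.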
import * as React from "react";
import { IEmojiProps } from "../../styled";
const SvgAnimalMammal20 = (props: IEmojiProps) => (
<svg viewBox="0 0 72 72" width="1em" height="1em" {...props}>
<path
fill="#A57939"
d="M21.588 15.085l-2.25 3.625-4.25 2-4.5 5.375-1.5 4.5 1.5 3 4.25.375 5.875-1.125s1.46.396 2.041 2.5c.667 2.417-1.541 3.375-1.916 3.5s-4 0-4 0l-5.625.5-.625 3.75.875 1.375s3.5.75 5-.375 5 .875 5 .875l4.897.65s-2.285 4.367-.535 6.617-1.487 2.859-1.487 2.859l-2.378 2.208-.685 1.667 1.063 2 3.375.25 1.564 1.147 1.248.44 1.438.287 7.5-1.875 6.25-.25 3.125-1.625s1.125-2.375 1.25-2.75 2.25-8 2.25-8v-8.25l-1-6.5-2.375-4.25-4.375-4-7-2.625s-5.125-.5-5.625-.75-4-1.75-4-1.75l-1.25-3.75-1.75-2.25h-1.375v.625z"
/>
<path
fill="#A57939"
d="M37.504 23.669v-3.167s2.584-7.667 2.667-7.917c.083-.25 3.667-4.5 3.667-4.5l3.916-1.416 6.25.333 7.084 6.083 2.486 2.813c2.334 10.66-13.653 10.473-11.47 1.62-2.613 6.484 1.41 10.118 6.234 13.65l4.25 5.417.166 4.75-1.583 5.834s-3.417 3.916-4.5 4.333c-1.083.417-7.417 2.833-7.417 2.833l1.25-8.25.084-6.25-1.334-4.583-1.083-3.667-3.167-3.916-4.333-2.917-3.167-1.083z"
/>
<path
fill="#D0CFCE"
d="M33.967 23.205c-.546.38-1.963 3.757-.254 4.527s4.727 2.448 4.727 2.448l4.488 5.02 1.762 4.782v7.873l-1.019 4.606s1.533 5.208 5.058 1.875c3.525-3.334.525 0 .525 0l1.75-9.815-1.75-9.269-1.78-4.75-4.886-4.917-8.621-2.38z"
/>
<g
fill="none"
stroke="#000"
strokeLinecap="round"
strokeLinejoin="round"
strokeMiterlimit={10}
strokeWidth={2}
>
<path d="M20.4 32.493c-3.017 2.23-6.565 1.993-8.831 1.49-1.597-.356-2.615-1.97-2.235-3.56.286-1.2.862-2.866 1.962-5.148 3.118-6.473 8.438-6.488 8.438-6.488.408-1.036 1.866-3.7 3.007-4.195 0 0 7.13 7.371 1.028 11.344M11.678 43.593c-1.804-.749-2.843-6.56 6.743-4.5.58.124 1.599.236 2.172.083 1.795-.477 5.645-.646 5.645-4.583M34.117 35.086c.141.095 4.583 9.875-3.119 10.442s-13.796-2.032-13.796-2.032" />
<path d="M30.123 49.402s-1.125 3.97 1.9 6.993l-2.784.81a4.823 4.823 0 00-3.011 2.57v0a1.89 1.89 0 001.049 2.583c1.608.596 4.311.877 8.12-1.11 0 0 9.347 2.556 12.421-3.908 0 0 5.515-9.213 1.735-22.254-2.675-9.231-10.61-11.326-15.016-11.762-1.626-.16-3.175-.69-4.665-1.36-1.852-.834-3.512-1.005-3.512-1.005M38.614 20.591s.11-13.112 10.127-13.868 13.866 9.616 13.866 9.616" />
<path d="M55.866 16.593c-2.188-1.25-7.437 1.888-3.563 8.125s13.355 10.569 10.304 18.779c-3.59 9.662-13.116 10.72-13.116 10.72M34.053 27.802s15.438 4.603 9.876 23.728M26.36 45.61s-2.098 4.712.485 9.212c0 0-6.917.75-4.735 5.244" />
</g>
</svg>
);
export default SvgAnimalMammal20;
|
Five years ago, a little after midnight as 2012 had just begun, we started to get emails.
Later that morning, we realized that what was happening was significant enough to write a story about it. A woman named Debbie Cook had sent her fellow Scientologists a message for the new year, and it was hitting Scientology like a tidal wave.
For 17 years, Debbie had been “Captain FSO,” the Sea Org official who ran the Flag Service Organization, the outfit that oversees Scientology’s spiritual mecca, the Flag Land Base in Clearwater, Florida. The Captain FSO has to be a hard-as-nails Sea Org commander who runs a small army of similarly dedicated fanatics, but also serves and interacts with the wealthy “publics” who come to Flag for its high-priced counseling. And to have that position for 17 years made Debbie almost legendary. But by the end of 2011, she had been moved from that position and then had quietly left the Sea Org itself, although she was still a Scientologist in good standing. Other church members may have not seen or heard from Debbie in some time, but her name was still one that carried weight. Here’s how Jefferson Hawkins explained to us what Debbie meant to most people in the church…
For many, many years, Debbie was used as the spokesperson for the Flag Land Base. Her picture was featured prominently in every issue of “Source” Magazine, with a “Message from the Captain.” She made many, many promotional videos extolling the virtues of coming to the Flag Land Base. The Church deliberately built her up as an “opinion leader” for Scientologists. There was a lot of work that went into establishing her as a high-profile opinion leader for the top Scientologists.
Advertisement
And now, in a lengthy message that was forwarded to thousands of Scientologists around the world, Debbie used the words of L. Ron Hubbard to take apart David Miscavige.
Five years later, we can say that Debbie Cook’s email is one of the things new defectors from the Church of Scientology most often cite as the reason they began to take seriously the idea of leaving an organization they might have belonged to for decades, or had even grown up in. There’s just no question that Debbie Cook’s message had a devastating effect on the loyalty of many longtime Scientologists, and helped them decide to walk away.
Debbie was sued by the church for writing it, based on draconian agreements she had signed when she left her position in the Sea Org. Debbie was living in the San Antonio area at the time, and she hired a local attorney named Ray Jeffrey to represent her. We were in the courtroom that February day for a preliminary hearing, and we can tell you it was a nervous moment. According to a strict reading of the contracts Debbie had signed, Scientology could potentially convince a judge that she owed them millions of dollars for criticizing Miscavige. On Debbie's side, Jeffrey argued that she had signed those contracts under duress after being abused in the Sea Org. And in order to prove that, he wanted to put Debbie on the witness stand. Scientology's attorneys were against it, but Judge Martha Tanner said she wanted to hear Debbie's testimony.
What Debbie said in the witness box was devastating. She talked about leaving her FSO post and being sent to California, where she was tasked with overseeing the imprisonment of executives in “The Hole,” Miscavige’s bizarre jail for underlings he suspected of sabotage. Before long, Debbie was thrown into The Hole as a prisoner herself, and was subjected to disturbing abuse, such as being stood up in a trash can with water poured over her while the other inmates chanted insults at her and called her a “lesbian.” After seven weeks, Debbie managed to get away from the Hole, and she was happy to sign anything to get away from the Sea Org altogether. At that point, she said on the witness stand, she would have signed a confession to killing babies if it meant getting away from that sick organization.
The press coverage of that day of testimony went national, and it was brutal. And that’s why the Church of Scientology waived the rest of the hearing and then eventually settled with Debbie for an undisclosed amount — but we’re pretty sure it was in the millions. Millions of dollars, that is, paid by Scientology in order to end the lawsuit that it had filed. We always have to remind people of that when they criticize Debbie for accepting the settlement rather than somehow pushing on with a lawsuit she had not filed, and which carried potential liabilities against her in the millions of dollars.
We don’t know the terms of the settlement, but they obviously included the stipulation that Debbie and her husband Wayne Baumgarten leave the country for a few years. They’re back now, in Texas, but they still won’t give interviews about their experiences, which it’s not hard to conclude is a condition of the settlement they signed.
And that’s a shame, because we’d still like to talk to Debbie about the effect her 2012 email has had on the Church of Scientology. We have a feeling that she knows what a big effect it’s had, and she could probably tell us some interesting things about what she’s heard from the people who were affected by it.
Instead, to mark this anniversary, we’re posting her lengthy email again, and with the annotations we wrote for it back in 2012, with some slight edits to update things. We hope you enjoy this examination of what Debbie wrote, and that it becomes plain why it was such a challenge to David Miscavige and his leadership of an organization that is now in serious decline.
DEBBIE COOK’S NEW YEAR’S 2012 MESSAGE TO SCIENTOLOGISTS, ANNOTATED
Dear Friend, I am emailing you as a friend and fellow Scientologist. As we enter a new year, it is hoped that 2012 can be a year of great dissemination and a year of real progress up The Bridge for all Scientologists.
As we explained in our introduction to Scientology, church members are encouraged to take courses and do counseling (“auditing”) so they can complete ever more expensive and esoteric levels of Scientology understanding. This process is known as “The Bridge,” so Debbie is wishing her fellow church members a year of successful advancement.
Although I am not in the Sea Org right now, I served in the Sea Org at Flag for 29 years. 17 of those years were as Captain FSO. I am a trained auditor and C/S as well as an OEC, FEBC and DSEC.
As we wrote earlier, Cook enjoyed a storied career as the captain of the Flag Service Organization, which made her CEO, essentially, of Scientology’s spiritual mecca in Clearwater, Florida. Such positions are only held by executives in the “Sea Organization,” Scientology’s hardcore elite corps that requires its workers to sign billion-year contracts and promise to come back, lifetime after lifetime, and dedicate themselves utterly. Along the way to becoming the commanding officer at Flag, Debbie had become a “case supervisor” (C/S), which means that she was qualified to oversee how auditors handled individual cases. She had also completed work in various executive training packages — the Organization Executive Course (OEC), Flag Executive Briefing Course (FEBC), and the Data Series Evaluator Course (DSEC). In other words, Debbie Cook was trained to the gills, had an illustrious past as an almost legendary executive at Flag, and had immense authority for the average Scientologist who might be receiving this e-mail.
I am completely dedicated to the technology of Dianetics and Scientology and the works of LRH. I have seen some of the most stunning and miraculous results in the application of LRH technology and I absolutely know it is worth fighting to keep it pure and unadulterated. My husband and I are in good standing and we are not connected with anyone who is not in good standing. We have steadfastly refused to speak to any media, even though many have contacted us. But I do have some very serious concerns about out-KSW that I see permeating the Scientology religion.
Another key concept to understand is that to Scientologists, L. Ron Hubbard — who died in 1986 — is still the one and only “Source” for their “technology.” When current church leader David Miscavige in 2007 put out a new set of Hubbard’s books — The Basics — it was based on the notion that Miscavige had found transcription errors that occurred when Hubbard was publishing the books in the 1950s. In other words, these new editions were even more “pure” and “unadulterated.” In her message, Cook is stressing that she too cares about Hubbard’s original words and ideas, but she lets on now that she sees a problem with the current church, that it is not following Hubbard’s program to “Keeping Scientology Working,” and that as a result the church is permeated with “out-KSW” activity.
I have the utmost respect for the thousands of dedicated Scientologists and Sea Org members. Together, we have come through everything this world could throw at us and have some real impingement on the world around us. I am proud of our accomplishments and I know you are too. However there is no question that this new age of continuous fundraising is not our finest moment.
The timing of Cook’s e-mail, New Year’s Eve, was calculated for maximum effect. Not only does a lot of fundraising go on at New Year’s Eve events put on by Scientology, but in November, the St. Petersburg Times published a devastating expose about the church’s current obsession with raising money from its members. And unlike previous exposes about the church’s high management, this was a series that revealed what just about every Scientologist is currently struggling with — management’s mania for donations.
LRH says in HCO PL 9 Jan 51, An Essay on Management, “drop no curtains between the organization and the public about anything.” – LRH Based on this policy I am communicating to you about some situations that we need to do something about within our religion, within our group.
LRH — L. Ron Hubbard — wrote not only many books that Scientologists use, but also thousands of “policy letters” that spell out every facet of Scientology’s processes, staffing, internal justice, and administration. Cook brilliantly relies on direct quotations of Hubbard’s words to make her arguments in this e-mail. It’s not just Debbie Cook who has a problem with the way Scientology is going today — it’s LRH himself who would be appalled. That’s an extremely powerful argument for a group of people who consider Hubbard almost a god.
Actions that are either not covered in policy or directly violate LRH policy and tech include the extreme over-regging and fund-raising activities that have become so much a part of nearly every Sea Org org and Class V org as well as every “OT Committee”. This fundraising is not covered anywhere in LRH policy.
In Scientology parlance, it’s a “registrar” whose job it is to convince church members to give more money for services and for donations. (Particularly since the publication of The Basics books, however, many more executives and ordinary workers have been expected to help out with sales and fundraising. “The role model of Scientologists is supposed to be the auditor. But auditors have been turned into fundraisers,” says ex-Sea Org member Chuck Beatty.) The act of asking for donations has become known as “regging,” and here Debbie complains about “over-regging.” At individual “orgs” — churches — volunteer OTs (high-ranking church members) in “OT Committees” are also raising money for new buildings.
Hardworking Sea Org members and the dedicated staff of orgs around the world aren’t choosing to do these actions. Nor are the OTs. I am sure they would be more than happy if they could just get on with direct dissemination of Scientology as they have done for so many years. But the truth is that this is being driven from the very highest echelons within the Scientology structure and clearly there is a lot of pressure to make targets that are being set.
By feeding on their own — subjecting Scientologists to constant fundraising — the church isn’t spending enough time bringing in new people — “disseminating” Scientology. As we wrote last year, there’s good evidence that the church is shrinking. There may be no more than about 40,000 active Scientologists around the world, and not the 10 million members that the church claims. [And five years later, that number is reportedly much smaller yet.]
The IAS: The [International Association of Scientologists] was created unbeknownst to LRH in 1984 by Marc Yager and David Miscavige. This was supposed to be based on LRH policies on the subject of membership and the HASI, however the IAS is nothing like the membership system described by LRH which only has two memberships and is covered in HCO PL 22 March 1965 “Current Promotion and Org Program Summary, Membership Rundown” and states: “There are two memberships…” – LRH
“She’s absolutely correct. In 1983, I was in the room doing my own project when I witnessed the executives of the CMO Int fire the project to start the IAS,” says Chuck Beatty, a former Sea Org member who is today a sort of unofficial church historian. “They didn’t bother Hubbard with the details of what they were doing.”
Beatty says the IAS was formed as a way to raise funds to help Scientology fight its legal battles, and to do so with a non-profit organization.
“One of its purposes was to sidestep the IRS,” Beatty says, reminding us that in 1983, Scientology did not have tax exempt status (that came in 1993). The executives who started the IAS wanted to take church member donations to create a war chest for Scientology’s battles, Beatty says.
LRH lists there the INTERNATIONAL ANNUAL MEMBERSHIP and gives its cost at 10 pounds sterling or $30 US. He also lists a LIFETIME MEMBERSHIP which is priced at $75 US. There are no other memberships or statuses approved or known to LRH. Furthermore, membership monies are supposed to go directly to the org where the membership is signed up, and the money used for dissemination by that org, in that area. This is covered in HCOPL 1 Sept 1965R Membership Policies. “It all goes into the HCO Book Account in the area where the membership is brought and is not part of the organization’s weekly gross income. Membership monies go to dissemination.” – LRH Currently membership monies are held as Int reserves and have grown to well in excess of a billion dollars. Only a tiny fraction has ever been spent, in violation of the policy above. Only the interest earned from the holdings have been used very sparingly to fund projects through grants. In fact many of the activities you see at IAS events are not actually funded by the IAS, but rather by the Scientologists involved.
Cook’s assertion that Miscavige and the church have amassed a billion dollars in cash reserves has been one of the most-talked about things in her e-mail. Can Scientology really have so much at its disposal? Well, let’s look at what we know.
Just a little more than a month ago, the St. Petersburg Times (now Tampa Bay Times) published its latest blockbuster expose of the church, focusing on how much Scientology has become about fundraising. Journalists Joe Childs and Tom Tobin provided these eye-opening numbers:
“Scientology rings up astonishing sums: $100 million a year just from services sold in Clearwater, a minimum of $250 million since 2006 for the International Association of Scientologists, tens of millions for new church buildings called Ideal Orgs, and untold millions more from selling new volumes of church scripture.”
Our own sources suggest that the St. Pete Times may actually have been conservative in its estimates. Revenue for services at Flag over the last six years has averaged $138 million a year, we’re told, and The Basics — a republication of key Hubbard books which was launched in 2007 — has brought in hundreds of millions more.
But Beatty tells me that Hubbard himself wanted the church to have a lot of cash on hand.
“Hubbard did want the church to amass a big reserve,” he says. “His instruction was that we needed to have enough so that the church could keep going for five years, even if every dime in revenue was cut off. That was our target.”
Cook, however, complains that even with all the money on hand, it isn’t being spent the way Hubbard wanted it to be used…
Think about it, how many ads disseminating Scientology, Dianetics or any Scn affiliated programs have you seen on TV? Heard on the radio? Seen in newspapers? I haven’t seen one in the 4 years I have lived in San Antonio, Texas, the 7th largest city in the US. How many have you seen?
Debbie Cook may have not seen many ads, but our own experience is different. On the Internet, at least, Scientology ads seem ubiquitous. And they show up in places like Hulu, where they get wide exposure.
Beatty acknowledged that Cook might simply not be familiar with the Internet enough to know how much Scientology advertises there. On the other hand, he points out, Scientology doesn’t do the kind of massive television campaigns that it did under the leadership of marketing executive Jefferson Hawkins in the 1980s. [We noticed a huge increase in online and television advertising by Scientology shortly after Debbie’s email came out, suggesting that this criticism had hit home. Scientology’s huge expenditures on advertising have continued over the last five years.]
Donating anything more than a lifetime membership to the IAS is not based on LRH policy. The article “What Your Donations Buy” (The Auditor, The Monthly Journal of Scientology No. 51, 1970) is clearly talking about how the church uses your donations for Dianetics and Scientology services. Next time you are asked to donate outside of services, realize that you are engaged in fundraising and ask to see something in writing from L. Ron Hubbard that this is something he expects from you as a Scientologist.
Now we get to the really difficult part of the e-mail for her fellow church members. They are at this point being asked to think for themselves and to speak up rather than simply accept what they are told by church officials.
As Jason Beghe explained to us the other day, this is a radical suggestion for devoted church members. Beghe is a veteran actor of films and television series, and in the 1990s he was a celebrated member of Scientology. In 2008, he announced his defection from the church and has been criticizing it ever since.
“I know it’s hard to understand how someone can be so dense. But you’re in a trance. When someone of this magnitude speaks up, it has an effect,” Beghe told us.
“These people cannot think for themselves, which is ironic, because they’re told when they get into Scientology that they’ll be trained to do exactly that,” he said. “At some point they’re going to wake up. Hopefully.”
New Org Buildings: LRH also never directed the purchase of opulent buildings or the posh renovations or furnishings for every org. In fact, if you read HCO PL 12 March 75 Issue II, “The Ideal Org”, which is what this program has been called, and nowhere in it will you find 20 million dollar buildings or even any reference to the poshness of org premises at all as part of LRH’s description of an “Ideal Org”. Instead, an Ideal Org was one that delivered and moved people up The Bridge – something that is not part of this “Ideal Org” program. LRH says in the PL that an Ideal Org: “would be clean and attractive enough not to repel its public” – LRH. This is all it says about the state of the building.
Beatty says this is a devastating critique by Cook. For the last decade, Scientologists have been under incredible pressure to raise money for lavish new buildings to replace their current orgs — many of which are not full at all, and really don’t need replacing.
Beatty explains that Cook is showing that Hubbard’s idea of an “Ideal Org” was based on what it produced — not what it looked like.
“Hubbard would have removed Miscavige for that alone. He basically knocked out Hubbard’s final, long-range plans for the movement,” Beatty says. “Hubbard wanted ‘Saint Hill Sized’ organizations. He never pushed for the Ideal Orgs.”
Beatty is referring to the legendary status of Saint Hill Manor — Hubbard’s UK home — which in the 1960s was known for its bustling productivity as it trained auditors and moved church members up the “Bridge.” Ever since, orgs have been measured by whether they are “Saint Hill Sized” (although what, exactly, makes an org Saint Hill Sized seems to be something of an elusive set of criteria).
Cook is charging that Miscavige has thrown out Hubbard’s plan — to produce more busy, bustling and at least clean, if utilitarian, facilities — and has replaced it with a worldwide real estate buying binge for large, opulent buildings that have largely been sitting empty. Members, meanwhile, are constantly hit up for cash for these projects.
As a result of this off-policy alteration of the Ideal Org PL, we have the majority of top OTs, now deemed “OT Ambassadors”, heavily engaged in fund-raising activities that include “bingo”, “pirate dinners”, “knitting classes”, “hay rides”, and many other activities strictly revolving around raising funds for the required multi-millions of dollars to fund their “Ideal Org”. As part of this, people around every org are now asked to donate to their local “Ideal Org” instead of their own services or their own Bridge. LRH says in HCO PL Org Ethics and Tech: “GET RID OF DISTRACTIONS FROM SCIENTOLOGY in your org. Baby-sitting or raffle tickets and such nonsense.” – LRH
Cook really hits home on this one. As we saw last year, not only does David Miscavige have Scientologists around the world participating in extremely embarrassing rituals to convince each other to give until it hurts, Miscavige himself seems to take humor in watching his followers engage in such silliness.
In 2011, we published a video which shows Miscavige describing European Scientologists putting on those “pirate dinners” and similar mummery, and noted the way Miscavige described it, his voice dripping with derision.
Yet these distractions are rampant as they are being used as fund-raisers to get money for the huge quotas being issued to fund the “Ideal Org”. “If the org slumps… don’t engage in ‘fund-raising’ or ‘selling postcards’ or borrowing money. Just make more income with Scientology. It’s a sign of very poor management to seek extraordinary solutions for finance outside Scientology. It has always failed.” “For orgs as for pcs, ‘Solve It With Scientology’. “Every time I myself have sought to solve financial or personnel in other ways than Scientology I have lost out. So I can tell you from experience that org solvency lies in more Scientology, not patented combs or fund-raising barbeques.” HCO PL 24 February 1964, Issue II, Org Programming, (OEC Vol. 7, p. 930) The point is that Scientologists and OT’s need to be training, auditing and disseminating to raw public- not regging each other or holding internal fundraisers.
Many ex-Scientologists tell us that what started their disaffection with the church was the way they tended to get hung up on their journey “up the bridge.” They might get stuck on a particular level for years, spending large amounts of money for auditing but never getting “gains” or “wins.”
The constant focus on donations only exacerbates that situation, as members find that they can’t advance when all of their money is going to things like the IAS or new buildings. For many current church members, then, Cook’s admonition here — that such fundraising is stunting their own spiritual advancement — will likely strike a chord.
Out Tech: Over the last few years we have seen literally hundreds and hundreds of people who were validated as clear using the CCRD as developed by LRH now being told they are not Clear. This included hundreds of OTs who were then put onto NED as a “handling”. LRH clearly forbid any Dianetics to be run on OTs in HCOB “Dianetics Forbidden on OTs”. This is out tech. This entire technical “handling” was directed personally by COB RTC and was done on thousands of OTs. But it was based not on an LRH HCO Bulletin, but rather based on a single C/S instruction where LRH C/Sed one pre-OT who had not achieved the state of clear but was mid OT III and not making it. LRH directed a solo handling that the pre-OT was to do to get himself to achieve the state of Clear. This LRH C/S taken out of context was then used to implement a technical handling that was in direct violation of an LRH HCOB.
That’s a mouthful of Scientology jargon about upper-level teachings (“NED” is New Era Dianetics, “OT III” is Operating Thetan Level Three, etc.), but Beatty helped me understand what it all comes down to:
“She’s giving an example of Miscavige taking a single comment by Hubbard in regards to a single person’s case and applying it to the entire movement as a policy, with tremendously harmful results,” Beatty says. The upshot was that many, many longtime, highly trained Scientologists were told that their training had been faulty and they were expected to redo levels that had cost them tens of thousands of dollars. Famous defectors such as Tory Christman and Jason Beghe say it was this order to redo levels that caused them to question Miscavige and the church.
This and other “technical handlings” done on Solo NOTs auditors created great expense and hardship on Solo NOTs auditors around the world as they were made to do these handlings to continue on the level.
Beatty gave us this description for Solo New Era Dianetics for OTs (Solo NOTs): “It’s high volume solo exorcism,” he says.
Then there are the “fast grades at Flag” that no other org has. How can it be that Flag has been delivering grades differently to the rest of the world for the last 3 years? Whatever the problem is, the fact is that having “fast Grades” at Flag creates a hidden data line and is a HIGH CRIME and the subject of an entire policy letter called HCOPL “TECH DEGRADES” which LRH has placed at the start of every Scientology course. More recently the fad seems to be that nearly everyone needs to “re-do their Purif and do a long objectives program”, including many OTs mid Solo NOTs. There is nothing wrong with doing objectives, but it is a clear violation of HCOB ‘MIXING RUNDOWNS AND REPAIRS” to have a person mid a rundown or OT level be taken off it and placed on an objectives program. Solo NOTs auditors are also being made to get their objectives from a Class IX auditor at great expense as they are not being allowed to co-audit. Flag has made many millions of dollars on the above listed out tech handlings because OTs mid Solo NOTs are forced to get these out-tech actions to be able to get back onto and stay on the level and complete it. Not to mention the spiritual effects of the out tech that this has on each OT. I myself was subject to these out tech “handlings”, including extensive FPRD mid Solo NOTs. It took its toll in many ways, including physical situations I am still dealing with today. So I have some reality of the hardship caused.
We’re just going to have to take Debbie Cook’s word for it that the standard way of becoming an exteriorizing superhuman intergalactic shaman is being screwed up on many different levels, and we can only assume this would outrage your typical Hubbardite.
LRH Command Structure: LRH left us with a complex and balanced command structure, with our orgs led by the Office of ED International. This office was considered so important that LRH created a special management group called the Watch Dog Committee whose only purpose was to see that this office and the other needed layers of management existed. LRH ED 339R speaks of this extensively as the protection for our Church. But these people are missing. And not just some. As of just a few years ago there were no members of the office of ED Int on post, not to mention top execs throughout the International Management structure.
We’re getting back to something interesting here. As we wrote earlier, Cook experienced the vacuum of leadership that occurred in the mid-2000s as David Miscavige purged nearly every high-ranking official around him and sent them to “The Hole,” a hellish sort of prison at Scientology’s secretive desert base in Southern California (imagine getting locked up at your office with your co-workers for a couple of years, and only being able to leave when you are marched out and ordered to jump in a lake together, and you get some idea).
“Her point is absolutely valid,” Beatty tells us. “The top two councils of the movement have been decimated by Miscavige — the Watchdog Committee and Executive Strata.”
If Hubbard had intended that there would be some measure of checks and balances among the many entities he was leaving behind, instead Miscavige has either driven off or disappeared his many top executives at “Int base” in California, including Heber Jentzsch, who is still nominally the “president” of the Church of Scientology, International, but has not been heard from in years.
You may have also wondered… where is Heber, the President of the Church? What about Ray Mitthoff, Senior C/S International, the one that LRH personally turned over the upper OT Levels to? How about Norman Starkey, LRH’s Trustee? What happened to Guillaume – Executive Director International? And Marc Yeager, the WDC Chairman? What happened to the other International Management executives that you have seen at events over the years? The truth is that I spent weeks working in the empty International Management building at Int. Empty because everyone had been removed from post. When I first went up lines I was briefed extensively by David Miscavige about how bad all of them were and how they had done many things that were all very discreditable. This seemed to “explain” the fact that the entirety of the Watchdog Committee no longer existed. The entirety of the Executive Strata, which consisted of ED International and 11 other top International executives that were the top executives in their particular fields, no longer existed. That the Commodore’s Messenger Org International no longer existed. All of these key command structures of Scientology International, put there by LRH, had been removed. There were hundreds and hundreds of unanswered letters and requests for help from org staff, written based on LRH ED 339R where LRH says that staff can write to these top executives in the Exec Strata for help. But this is not possible if all these execs have been removed and no one is there to help them or to get evaluations and programming done to expand Scientology. Well, after that I got to spend some quality time with Heber, Ray Mithoff, Norman Starkey, Guillaume, as well as the entirety of International Management at the time, who were all off post and doing very long and harsh ethics programs. These have gone on for years and to the only result of that they are still off post. There is no denying that these top executives have all gradually disappeared from the scene. 
You don’t see them at the big events anymore or on the ship at Maiden Voyage.
“Quality time,” heh. Cook is being exceedingly cheeky here. The recipients of her e-mail may be so sheltered that they don’t realize that Cook herself was put in “The Hole” in 2007, which is why she got to spend time with Jentzsch and other executives doing time in Miscavige’s prison. As we reported earlier, Cook herself is said to have gone through a harrowing, and homophobic, hazing:
“For the next twelve hours Debbie was made to stand in a large garbage can and face one hundred people screaming at her demanding a confession as to her ‘homosexual tendencies’. While this was going on, water was poured over her head. Signs were put around Debbie’s neck, one marked in magic marker ‘LESBO’ while this torture proceeded. Debbie was repeatedly slapped across the face by other women in the room during the interrogation. Debbie never did break.”
A year after that incident, Debbie was no longer in the Sea Org or a staff member of the church. She claims to have remained in good standing as a public member of the church, but now that she has sent out this e-mail, that will likely change.
David Miscavige has now become the “leader” of the Scientology religion. Yet what LRH left behind was a huge structure to properly manage all aspects of the Scientology religion. He put a complete and brilliant organizational structure there, not one individual. There never was supposed to be a “leader” other than LRH himself as the goal maker for our group. There is a situation here and even if you have not been to the International Management Base you should be able to see that over regging and frequent tech changes are not OK and you have a responsibility to do something to Keep Scientology Working. You should be able to find and read the references on membership in OEC Volume 6. Find and read the HCO PL entitled “The Ideal Org” (Data Series 40). Find and read the references on org buildings, including HCO PL 24 Aug 65 II, Cleanliness of Quarters and Staff, Improve our Image. Also, HCO PL 17 June 69, The Org Image. If you don’t want to make waves or put yourself in danger of being taken off the level or denied eligibility, then there are some simple things you can do. First and foremost, withdraw your support from off policy actions. Stop donating to anything other than your own services and actual Bridge progress. Simply demand to see an LRH reference that says you are required to make other such donations. No one will be able to produce any references because there aren’t any.
Again, Cook returns to the radical notion that Scientologists stand up to management. How is that working? Well, here’s an example of what we saw on Facebook as recipients of this e-mail reacted to its ideas…
“It is an email that maliciously mixes truths and LRH references with half-truths, un-truths and disaffection,” wrote a European Scientologist who sent out a warning to her friends. She then indicated that she had notified the church’s Office of Special Affairs — its intelligence and covert operations wing — and said she was willing to help anyone who needed “dead agenting material” about Cook. In other words, slander that could be used to convince Scientologists that Cook was not to be trusted.
Cook clearly has an uphill battle on her hands.
Stop supporting any of the activities that are being done to forward off-policy fund-raising in your area. LRH says what he expects of a Scientologist – that is what he expects you to do. In fact he put it in HCOB 10 June 1960 Issue I, Keeping Scientology Working Series 33, WHAT WE EXPECT OF A SCIENTOLOGIST. Read it and follow it. The other thing you can do is to send this email to as many others as you can, even if you do it anonymously. Please keep this email among us, the Scientologists. The media have no place in this. You may wonder why I have not written a KR and gone about my business. The answer is, I have. But there is no longer anyone to send that KR to.
Cook was apparently pretty naive about the Internet. Within minutes of the thousands of copies of her email going out, it was being fed to reporters.
But you can and should write reports and bring off-policy to the attention of local org executives and local Sea org members. We are a strong and powerful group and we can affect a change. We have weathered many storms. I am sorry that I am the one telling you, but a new storm is upon us. Its waves are already in the media and the world around us.

The truth is that as a Scientologist you are more able, more perceptive and have a higher integrity. Scientology is supposed to allow you to “think for yourself” and never compromise your own integrity. And most certainly LRH held every Scientologist responsible to KEEP SCIENTOLOGY WORKING.
As we reported earlier, Cook’s own background contains some less than ideal behavior, particularly in the way she participated in the fundraising she now decries. But several prominent ex-Scientologists all said that Cook would only have been following directions, and that she actually has enormous personal integrity.
I am not trying to do anything other than affect a change in serious off policy actions occurring. My husband and I have most of our family and many many good friends who are Scientologists. I have not been real interested in sticking my neck out like this. However, I also know that I dedicated my entire adult life to supporting LRH and the application of LRH technology and if I ever had to look LRH in the eye I wouldn’t be able to say I did everything I could to Keep Scientology Working if I didn’t do something about it now.

We all have a stake in this. It is simply not possible to read the LRH references and not see the alterations and violations that are currently occurring. You have a very simple obligation to LRH. Don’t participate in anything off policy, and let others know they should not either.

If every person who reads this email does nothing more than step back from off-policy actions we would have changed direction. If we took all that energy and directed it into auditing, training and raw public dissemination, we would be winning. And that is what I wish for you and all of us as we ring in this new year. ARC, Debbie Cook
“ARC” — “affinity, reality, communication” — is a standard sign-off for Scientologists, a reference to a bedrock concept by Hubbard that church members use to express warmth for each other. To the end, Debbie is trying her best to reach out to her fellow Scientologists as a Scientologist. Her program is not to criticize Hubbard or Scientology itself, only the leadership of Miscavige and the way he has consolidated power and has worn down his followers with all of the money-grubbing.
We hope these annotations have helped readers understand what Cook intended, and can see now how it might be an enormously effective message for Scientologists who are exhausted by the fundraising, stuck on their journeys up the Bridge, and are doubting the diminutive man at the top of the organization.
[And now, five years later, we’d like to hear how Debbie’s email affected you, whether you were still in Scientology at the time, or you were watching from the outside. And Happy New Year!]
Posted by Tony Ortega on January 1, 2017 at 00:00
import logging

import requests

# Drupal's JSON:API content type (value assumed here; adjust if the site differs)
JSON_CONTENT_TYPE = 'application/vnd.api+json'

log = logging.getLogger(__name__)


class LIIWebClient:
""" A client for the LIIWeb JSON API that makes it easier to work with content in LIIWeb.
"""
timeout = 5 * 60
def __init__(self, url, username, password):
"""
Create a new client.
:param url: LII URL
:param username: LII API user username
:param password: LII API user password
"""
        self.session = requests.Session()
self.session.auth = (username, password)
self.session.headers.update({
'Accept': JSON_CONTENT_TYPE,
})
self.url = url
def find_legislation(self, frbr_uri_prefix, fields=('field_frbr_uri',)):
""" Fetch a single legislation expression, if it exists, by filtering on the FRBR URI.
We do this because there is no GET endpoint for a work FRBR URI, only an expression FRBR URI.
By default, only fetches the id and frbr_uri fields. Specify a list of fields to fetch otherwise.
        :param frbr_uri_prefix: work FRBR URI prefix to match on.
:param fields: a tuple of fields to fetch, in addition to the node id.
"""
params = {
'filter[field_frbr_uri][value]': frbr_uri_prefix,
'filter[field_frbr_uri][operator]': 'STARTS_WITH'
}
if fields:
params['fields[node--legislation]'] = ','.join(fields)
resp = self.session.get(self.url + '/jsonapi/node/legislation', params=params, timeout=self.timeout)
self.check_for_error(resp)
info = resp.json()
if info['data']:
return info['data'][0]
def get_legislation(self, expr_uri, fields=('field_frbr_uri',)):
""" Fetch a single legislation expression, if it exists.
By default, only fetches the id and frbr_uri fields. Specify a list of fields to fetch otherwise.
:param expr_uri: legislation FRBR URI.
:param fields: a tuple of fields to fetch, in addition to the node id.
"""
# the GET request requires this accept header, not the default one
headers = {
'Accept': 'application/json',
}
params = {}
if fields:
params['fields[node--legislation]'] = ','.join(fields)
resp = self.session.get(self.url + expr_uri, params=params, headers=headers, timeout=self.timeout)
if resp.status_code == 404:
return
self.check_for_error(resp)
info = resp.json()
if info['data']:
return info['data']
def list_legislation(self, place_code):
""" List all legislation expressions for a place.
:param place_code: country code such as 'za' or country and locality such as 'za-cpt'.
"""
results = []
params = {
'filter[field_frbr_uri][value]': f'/akn/{place_code}/',
'filter[field_frbr_uri][operator]': 'STARTS_WITH',
'fields[node--legislation]': 'field_frbr_uri',
}
url = self.url + '/jsonapi/node/legislation'
while url:
resp = self.session.get(url, params=params, timeout=self.timeout)
self.check_for_error(resp)
info = resp.json()
results.extend(info['data'])
# pagination
url = info['links'].get('next', {}).get('href')
if url and url.startswith('http://'):
url = 'https://' + url[7:]
# clear these, they're baked into the url now
params = {}
return results
def create_legislation_work(self, info):
""" Create a new legislation work and expression and return the full description.
:param info: full description of the legislation, in Drupal JSON format
"""
resp = self.session.post(
self.url + '/jsonapi/node/legislation',
json=info,
headers={'Content-Type': JSON_CONTENT_TYPE},
timeout=self.timeout)
self.check_for_error(resp)
info = resp.json()['data']
if not info:
log.error(f"Empty response to POST: {resp}: Headers: {resp.headers} -- Body: {resp.text}")
raise Exception(f"LII returned empty response when creating work: {resp.text}")
return info
def create_legislation(self, expr_uri, info):
""" Create a new legislation expression and return the full description.
:param expr_uri: expression URI
:param info: full description of the legislation, in Drupal JSON format
"""
resp = self.session.post(
self.url + expr_uri,
json=info,
headers={'Content-Type': JSON_CONTENT_TYPE},
timeout=self.timeout)
self.check_for_error(resp)
info = resp.json()['data']
if not info:
log.error(f"Empty response to POST: {resp}: Headers: {resp.headers} -- Body: {resp.text}")
            raise Exception(f"LII returned empty response when creating expression: {resp.text}")
return info
    def delete_legislation(self, expr_uri):
        """ Delete a legislation expression.
        :param expr_uri: expression FRBR URI of the legislation to delete
        """
resp = self.session.delete(self.url + expr_uri, timeout=self.timeout)
self.check_for_error(resp)
def update_legislation(self, expr_uri, info):
""" Patch an existing legislation by node id.
:param expr_uri: expression FRBR URI
:param info: updated information, in Drupal JSON format
"""
resp = self.session.patch(
self.url + expr_uri,
json=info,
headers={'Content-Type': JSON_CONTENT_TYPE},
timeout=self.timeout)
self.check_for_error(resp)
return resp.json()['data']
def upload_file(self, node, fname, data, fieldname):
""" Upload a file to the lii and return the node id.
:param node: node type, such as 'legislation'
:param fname: filename to use
:param data: contents of the file as a bytestring
:param fieldname: name of the field on the node
:returns: the id of the uploaded file
"""
resp = self.session.post(
self.url + f'/jsonapi/node/{node}/{fieldname}',
data=data,
headers={
'Content-Type': 'application/octet-stream',
'Content-Disposition': f'attachment; filename="{fname}"',
},
timeout=self.timeout
)
self.check_for_error(resp)
return resp.json()['data']['id']
def list_legislation_files(self, nid, field):
""" List files associated with a legislation node.
:param nid: legislation node id
:param field: file type to list, either 'field_images' or 'field_files'
"""
resp = self.session.get(self.url + f"/jsonapi/node/legislation/{nid}/{field}", timeout=self.timeout)
self.check_for_error(resp)
return resp.json()['data']
    def check_for_error(self, resp):
        try:
            resp.raise_for_status()
        except requests.HTTPError:
            log.error(f"Error from lii: {resp.text}")
            raise
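The same JSON:API query pattern appears in `find_legislation` and `list_legislation`: a `STARTS_WITH` filter on `field_frbr_uri` plus a sparse fieldset on the legislation node type. A standalone sketch of that construction (the helper name is hypothetical, not part of the client):

```python
# Hypothetical helper mirroring the query construction used by
# find_legislation() and list_legislation() above: filter legislation
# nodes whose FRBR URI starts with a prefix, and request only the
# fields we need (a JSON:API "sparse fieldset").
def build_legislation_query(frbr_uri_prefix, fields=('field_frbr_uri',)):
    params = {
        'filter[field_frbr_uri][value]': frbr_uri_prefix,
        'filter[field_frbr_uri][operator]': 'STARTS_WITH',
    }
    if fields:
        params['fields[node--legislation]'] = ','.join(fields)
    return params
```

The resulting dict would be passed as `params` to `session.get(...)`, exactly as the methods above do.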
use ahash::AHashSet;
use crate::api::{Dedup, HasKey, PartitionByKey, Unary};
use crate::stream::Stream;
use crate::tag::tools::map::TidyTagMap;
use crate::{BuildJobError, Data};
impl<D: Data + HasKey> Dedup<D> for Stream<D> {
fn dedup(self) -> Result<Stream<D>, BuildJobError> {
self.partition_by_key().unary("dedup", |info| {
let mut table = TidyTagMap::<AHashSet<D::Target>>::new(info.scope_level);
move |input, output| {
input.for_each_batch(|batch| {
if !batch.is_empty() {
let mut session = output.new_session(&batch.tag)?;
let set = table.get_mut_or_insert(&batch.tag);
for d in batch.drain() {
if !set.contains(d.get_key()) {
set.insert(d.get_key().clone());
session.give(d)?;
}
}
}
if batch.is_last() {
table.remove(&batch.tag);
}
Ok(())
})
}
})
}
}
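The dedup operator above keeps one seen-set per scope tag and frees it when the tag's last batch arrives. The bookkeeping, minus the dataflow plumbing, can be sketched in Python (names are mine, not from the Rust code):

```python
def dedup_batches(batches):
    # batches: iterable of (tag, items, is_last) tuples.
    # One seen-set per tag, freed once the tag's last batch is processed,
    # mirroring the TidyTagMap<AHashSet<...>> bookkeeping above.
    tables = {}
    out = []
    for tag, items, is_last in batches:
        seen = tables.setdefault(tag, set())
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append((tag, item))
        if is_last:
            tables.pop(tag, None)
    return out
```

Dropping the set on the last batch is what keeps memory bounded per scope rather than per job.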
|
import java.util.Scanner;
public class Main {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
int n = sc.nextInt();
        int[] a = new int[n];
        int[] c = new int[9];
        int cn = 0;
        int min = 0;
        int max = 0;
for(int i=0; i<n; i++) {
a[i] = sc.nextInt();
}
for(int i=0; i<9; i++) {
c[i] = 0;
}
sc.close();
for(int i=0; i<n; i++) {
if(1<=a[i]&&a[i]<=399) {c[0]++;}
if(400<=a[i]&&a[i]<=799) {c[1]++;}
if(800<=a[i]&&a[i]<=1199) {c[2]++;}
if(1200<=a[i]&&a[i]<=1599) {c[3]++;}
if(1600<=a[i]&&a[i]<=1999) {c[4]++;}
if(2000<=a[i]&&a[i]<=2399) {c[5]++;}
if(2400<=a[i]&&a[i]<=2799) {c[6]++;}
if(2800<=a[i]&&a[i]<=3199) {c[7]++;}
if(3200<=a[i]) {c[8]++;}
}
for(int i=0; i<8; i++) {
if(c[i] != 0) {cn++;}
}
if(cn!=0) {min=cn;}
if(cn==0) {min=1;}
max=cn+c[8];
System.out.println(min+" "+max);
}
} |
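The counting logic above can be restated compactly: ratings below 3200 map to one of eight fixed 400-point color bands, while ratings of 3200 and above may choose any color, so the minimum is the number of distinct fixed bands (at least 1) and the maximum adds one color per free-choice rating. A Python sketch of the same computation (function name is mine):

```python
def color_range(ratings):
    # Ratings 1-3199 fall into fixed 400-point bands (band = rating // 400);
    # ratings >= 3200 are free to pick any color.
    bands = {r // 400 for r in ratings if r < 3200}
    free = sum(1 for r in ratings if r >= 3200)
    min_colors = len(bands) if bands else 1
    max_colors = len(bands) + free
    return min_colors, max_colors
```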
N, M, K = map(int, raw_input().split())
a = N - M
Mod = 1000000009

def calc(n):
    n += 1
    res = 1
    add = 2
    while n:
        if n & 1:
            res = (res * add) % Mod
        add = (add * add) % Mod
        n >>= 1
    return res - 2

if N <= a * K + K - 1:
    ans = M
else:
    ans = (calc(N / K - a) * K % Mod + M - (N / K - a) * K) % Mod
print ans
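`calc(n)` above is just square-and-multiply exponentiation: it returns (2 ** (n + 1) - 2) mod 1000000009. In Python 3 the same value falls out of the built-in three-argument `pow`, normalized to the range [0, MOD) (helper name is mine):

```python
MOD = 1000000009

def doubled_sum(n):
    # Same result as calc(n) above: (2^(n+1) - 2) mod MOD, but using
    # Python's built-in modular exponentiation instead of a manual
    # square-and-multiply loop.
    return (pow(2, n + 1, MOD) - 2) % MOD
```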
|
public class helloworld{
public static void main(String[] args) {
String nama, kelas, nim;
nama = "Irsyaadul_Ibaad";
kelas = "D3IF4401";
nim = "6706202089";
System.out.println("Nama = " + nama);
System.out.println("NIM = " + nim);
System.out.println("Kelass = " + kelas);
}
}
|
/**
* This uses a ThreadLocal to bind an externalization strategy based on the invoking subsystem. In other
* words, when we know we're serializing for Server-Agent communication then set to AGENT, when we know we're
* serializing for RemoteClient-Server communication set to REMOTEAPI. By keeping this info on the thread
* we avoid having to tag all of the relevant objects that will be serialized.
*
* @author jay shaughnessy
*/
public class ExternalizableStrategy {
public enum Subsystem {
AGENT((char) 1), // set bidirectionally for agent<--->server communication
REFLECTIVE_SERIALIZATION((char) 3); // set unidirectionally for both CLI-->server and WS-->server communication
private char id;
Subsystem(char id) {
this.id = id;
}
public char id() {
return id;
}
}
private static ThreadLocal<Subsystem> strategy = new ThreadLocal<Subsystem>() {
protected ExternalizableStrategy.Subsystem initialValue() {
return Subsystem.AGENT;
}
};
public static Subsystem getStrategy() {
return strategy.get();
}
public static void setStrategy(Subsystem newStrategy) {
strategy.set(newStrategy);
}
} |
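The ThreadLocal pattern above, binding a per-thread serialization strategy that defaults to AGENT, translates directly to other runtimes. A minimal Python analogue using `threading.local` (names are mine):

```python
import threading

# Per-thread strategy holder: a threading.local subclass runs __init__
# once per thread on first access, giving each thread its own default of
# "AGENT", just like the initialValue() override in the Java version.
class _StrategyHolder(threading.local):
    def __init__(self):
        self.value = "AGENT"

_strategy = _StrategyHolder()

def get_strategy():
    return _strategy.value

def set_strategy(new_strategy):
    _strategy.value = new_strategy
```

As in the Java original, setting the strategy on one thread never leaks to another, so serialization code can read it without tagging every object.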
// break.cpp
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n;
    cin >> n;
    while (true)
    {
        n = n - 3;
        if (n < 0)
        {
            break;
        }
        cout << n << endl;
    }
}
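The loop reads a number and prints n-3, n-6, and so on down to the last non-negative value, using `break` to exit as soon as the next step would go negative. The same control flow in Python, for comparison (function name is mine):

```python
def countdown_by_three(n):
    # Mirrors the C++ loop above: repeatedly subtract 3 and stop (break)
    # as soon as the value would go negative.
    out = []
    while True:
        n -= 3
        if n < 0:
            break
        out.append(n)
    return out
```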
|
Abstract
Explanations for the persistence of violence in the eastern part of the Democratic Republic of Congo blame the incendiary actions of domestic and regional leaders, as well as the inefficacy of international peace-building efforts. Based on several years of ethnographic research, this article adds another piece to the puzzle, emphasizing the perverse consequences of well-meaning international efforts. I argue that three narratives dominate the public discourse on Congo and eclipse the numerous alternative framings of the situation. These narratives focus on a primary cause of violence, illegal exploitation of mineral resources; a main consequence, sexual abuse of women and girls; and a central solution, extending state authority. I elucidate why simple narratives are necessary for policy makers, journalists, advocacy groups, and practitioners on the ground, especially those involved in the Congo. I then consider each narrative in turn and explain how they achieved prominence: they provided straightforward explanations for the violence, suggested feasible solutions to it, and resonated with foreign audiences. I demonstrate that the focus on these narratives and on the solutions they recommended has led to results that clash with their intended purposes, notably an increase in human rights violations.
Life conditions in the eastern part of the Democratic Republic of Congo have deteriorated significantly since the end of the transition to peace and democracy in late 2006. Each year, the people of the eastern provinces feel less secure than the year before.1 There were more people internally displaced in 2010 than at the end of 2006.2 Armed groups, including the Congolese army, relentlessly commit horrific violations of human rights. The Congo has dropped twenty places (from 167 to 187) in the Human Development Index, officially becoming the least developed country on earth.3 Overall, current conditions for the populations of the eastern Congo remain among the worst in Africa.
There are many reasons for the deterioration of the situation, notably incendiary actions by domestic and regional leaders, grassroots antagonisms over land and power, and the persistence of corruption at all levels of the political and economic system. A number of recent studies have convincingly analysed these internal dynamics, as well as those at the level of the Great Lakes region, and shown their nefarious effects.4 In addition, a few researchers have explored why the international efforts at building peace and democracy have failed.5 This article takes the analysis one step further and considers how, despite a number of positive results, the international efforts themselves have contributed to the degradation of the situation.
This article focuses on the negative consequences of external efforts that aim to help the Congo build peace and democracy. These include advocacy campaigns in Europe and North America, as well as humanitarian, development, and peace-building initiatives implemented in the eastern Congo by non-governmental organizations (NGOs), the diplomatic representations of various states, and international organizations such as the United Nations (UN) and the African Union.
There is no doubt that these international efforts have achieved many positive results. Re-establishing peace, albeit a precarious one, over most of the Congolese territory would not have been possible without the presence of the UN peacekeeping mission and the work of African and Western diplomats. Likewise, it is mostly thanks to these interveners that the Congo managed to organize its first democratic elections in 2006. As of the time of this writing, the UN mission remains the only military force capable of protecting the population from the exactions of the Congolese army and various other armed groups. Humanitarian agencies are the only ones able to respond to epidemics and, in the eastern provinces, to provide access to drinkable water and basic health care. However, aside from these encouraging aspects, the interventions have also produced a series of detrimental outcomes.
In the past five years, three narratives have dominated the discourse on the Congo and oriented the intervention strategies. These narratives focus on a primary cause of violence, the illegal exploitation of natural resources; a main consequence, sexual abuse against women and girls; and a central solution, reconstructing state authority.6 There is no doubt that the illegal exploitation of Congolese mineral resources is a significant cause of conflict, that sexual violence is a terrible and widespread form of abuse, and that reconstructing state authority is an essential measure. However, we can wonder how the illegal exploitation of resources came to be seen as the main cause of violence, sexual abuse as the worst consequence, and the extension of state authority as the primary solution to the conflict, to the exclusion of other causes, consequences, and solutions.
This article considers three central questions: Why use simple narratives? Why these three in particular, and not any of the numerous alternative framings of the situation? What are the effects of the exclusive focus on this specific cause, consequence, and solution? While my answers to the first two questions demonstrate that interveners had good reasons for adopting dominant, simple narratives, and for focusing on three of them, my answer to the third question demonstrates that this adoption had some positive results, but was damaging overall.
The use of these three narratives has enabled advocates to put the Congo on the agenda of some of the most powerful states and organizations, and thus prompted action to end what remains a ‘forgotten conflict’.7 However, I argue that the well-meaning international efforts have also had unintended ramifications that have prevented the intervention from achieving its stated goals, and that have even, at times, contributed to the deterioration of the situation in the eastern Congo. The international actors' concentration on trafficking of mineral resources as a source of violence has led them to overlook the myriad other causes, such as land conflict, poverty, corruption, local political and social antagonisms, and hostile relationships between state officials, including security forces, and the general population. Interveners have singled out for support one category of victims, sexually injured women and girls, at the expense of others, notably those tortured in a non-sexual manner, child soldiers, and the families of those killed. The dominant narratives have oriented international programmes on the ground toward three main goals – regulating trade of minerals, providing care to victims of sexual violence, and helping the state extend its authority – at the expense of all the other necessary measures, such as resolving land conflict, promoting inter-community reconciliation, jump-starting economic development, ensuring that state authorities respect human rights, and fighting corruption. 
Even worse, because of these exclusive focuses, the international efforts have exacerbated the problems that they aimed to combat: the attempts to control the exploitation of resources have enabled armed groups to strengthen their control over mines; the disproportionate attention to sexual violence has raised the status of sexual abuse to an effective bargaining tool for combatants; and the state reconstruction programmes have boosted the capacity of an authoritarian regime to oppress its population.
To develop this analysis, I first explain why policy makers and practitioners need simple narratives in order to work, and why it is especially important for those involved in the Congo. I then consider each of the three dominant narratives in turn. For each case, I present the narrative, locate its sources, and explain why it has become dominant over competing narratives. I then show how it has oriented international intervention strategies on the ground, acknowledging the positive outcomes and highlighting the main negative consequences.
This article draws on a year of ethnographic research conducted in the eastern Congo from June 2010 to July 2011. During that time, I investigated mainly the situation in North Kivu – the most violent area of the Congo during my fieldwork – but I also gathered data on the other unstable provinces, notably South Kivu, North Katanga, and Oriental Province, as well as in the capital city of Kinshasa. In addition, I completed three short trips to Europe and North America to study the perception of the eastern Congo among interveners based in capitals and headquarters.
Overall, I have gathered data from more than 170 in-depth interviews with international interveners and Congolese stakeholders. I also draw on field observations, analysis of key policy papers, and participant observation. The latter research method consisted of patrolling with military peacekeepers, implementing state reconstruction programmes with UN officials, assisting community reconciliation projects with NGOs, participating in dozens of coordination meetings, and training, briefing, and advising NGOs, diplomatic missions, peacekeeping sections, and other agencies. Furthermore, this article builds on ten years of ethnographic research in the Congo that I conducted between 2001 and 2010 for an earlier project, including more than 330 in-depth interviews and a year and a half of fieldwork.
Virtually all of my contacts asked to remain anonymous in view of the personal and professional risks involved in providing information on the topics analysed in this article. For this reason, I fully reference only the data obtained through on-the-record interviews and from public sources. All the information and quotations that I do not fully reference come from confidential interviews and participant observation.
The power of simple narratives
The study of narratives permeates a number of disciplines, from its dominance in literature to its occasional use by interpretive social scientists.8 Simply put, a narrative is a story that people create to make sense of their lives and environments. For the purpose of this article, the most important feature of narratives is that they help shape the way we perceive the social and material worlds, and thus orient how we act upon our environment.
Narratives include a central frame, or a combination of frames.9 Frames are essential to the social world since problems are not given, but have to be constructed. Frames shape our views on what counts as a problem (for example, the illegal exploitation of mining resources) and what does not (for instance, land conflicts). Frames also affect which events will be noticed (sexual violence) and which will not (non-sexual torture), as well as how they will be interpreted (as worthy of international response or as domestic problems). Thus, frames and narratives do not cause action. Instead, they make action possible: they authorize, enable, and justify specific practices and policies (such as regulation of the mineral trade) while precluding others (such as resolution of land conflicts). These actions in turn reproduce and reinforce both the dominant practices and the meanings, embodied in frames and narratives, upon which they are predicated. Over time, the narratives and the practices they authorize come to be taken as natural, granted, and the only conceivable ones.
The literature on frames is also useful for understanding why certain narratives become dominant. It shows that certain stories resonate more, and thus are more effective at influencing action, when they assign the cause of the problems to ‘the deliberate actions of identifiable individuals’; when they include ‘bodily harm to vulnerable individuals, especially when there is a short and clear causal chain assigning responsibility’; when they suggest a simple solution; and when they can latch on to pre-existing narratives.10
As was evident from my fieldwork, the aspect of ‘simplicity’ – notably an uncomplicated story line, which builds on elements already familiar to the general public, and a straightforward solution – is particularly important in enabling a narrative to achieve and maintain prominence. Media outlets need to find a story that fits in a few pages, or can be told in a few minutes, and that their audience can easily understand and remember. Policy makers based in headquarters, such as desk officers and advisers to foreign and defence ministers, face a similar challenge for internal bureaucratic reasons. They are granted only a few minutes or a short memo to brief their superiors, who decide on the main policy directions, but usually have only a superficial knowledge of various conflict zones – and, for the most part, are not particularly interested in the Congo. They thus have to find a brief and straightforward presentation of the situation, with clear policy recommendations that their superiors can readily grasp and approve. Finally, aid organizations need to raise funds for their programmes, and advocacy agencies need to mobilize followers. As numerous staff members have explained to me, fundraising and advocacy efforts succeed best when they put forward a simple narrative, and the story is most likely to resonate with its target audience if it includes well-defined ‘good’ and ‘evil’ individuals, or clear-cut perpetrators and victims.
The need to find a simple narrative is all the more important in the case of the Congo given that policy makers, and the general public, usually perceive the conflict there as extremely complex and intractable. Virtually all my interviewees complained about the multiplicity of foreign and domestic actors involved in the violence, the seemingly endless character of the conflict, and the blurred lines between victims and perpetrators. Simple narratives are critical to helping deal with such complexity: they identify salient issues, dictate urgent action, and help determine who is worth supporting and who should be challenged.
Simple narratives are also essential given the poor quality of the information on the Congolese conflict. Apart from rare exceptions, international agencies involved in the Congo recruit their staff on the basis of their technical expertise – whether on humanitarian aid, development, peace building, or diplomacy, and not on their knowledge of the country. Before their deployment, newly hired staff members benefit, at best, from a few days of briefing on the country. Most interveners therefore lack contextual knowledge upon arrival in the field. When on the ground, they lack time to read the extensive literature on the conflict. They also lack reliable information on current events, as the material available is strikingly poor for a number of reasons – including an over-reliance on official data from UN and Congolese authorities, poor relationships between international interveners and their Congolese counterparts, lack of access to the most unstable areas, and the staff's inability to speak local languages. To make matters worse, meetings and reports usually provide factual information on security events, but rarely put these facts into a broader context, and almost never infer their meaning for the overall political, social, and economic situation.11 The rapid turnover of most international staff, who usually stay in the Congo for a period ranging between a few months and three years, compounds the lack of in-depth understanding of the conflict. Once again, simple, dominant narratives offer a way out of this predicament. They emphasize a few themes to focus on; interveners can then believe that they have a grasp of the most important features of the situation, instead of feeling lost and deprived of the knowledge necessary to properly accomplish their work.
Dominant narratives, however, are always contested, usually by marginalized voices.12 As a result, competing narratives abound. This article therefore traces the alternative narratives that various local and foreign actors have developed to contest the dominant ones. I show that, even among international interveners, there is rising awareness that the dominant narratives on the Congo are too simplistic and that they obscure understanding.
Because of this unceasing contestation, dominant narratives are inherently unstable. Nevertheless, two mechanisms counteract the effects of contestation and lead most actors to reproduce the dominant narratives. To start with, people usually tend to ‘interpret new information as a confirmation of [their existing] beliefs’.13 In addition, large-scale bureaucracies, such as most international organizations and foreign ministries, are notoriously resistant to change because they rely on routines and stability to function and because change usually ‘threatens entrenched organizational culture and interests’.14 Consequently, change in frames and narratives – and in the practices and policies they enable – usually takes place slowly and incrementally.15 The multiple actors who reproduce the narratives often do so with some degree of variation, which, over time, leads to a gradual evolution of the narratives. This is how sexual violence, once a neglected issue, has in the past ten years progressively become a central feature of the discourse on the Congo. One should note, however, that change can also take place rapidly, for example when marginalized voices suddenly become dominant (such as after decolonization)16 or when marginalized actors find a way to destabilize meanings, for instance by offering a new, more persuasive discourse at a time of crisis.17
Given that narratives orient action, it is important to study their impact on the ground. While most scholarly research has focused on the positive outcomes of various advocacy and norm-promotion efforts, several scholars have studied the negative consequences of dominant narratives.18 My article builds on these analyses and goes one step further: in addition to explaining how narratives orient intervention at a macro-level, in headquarters and national capitals, I also trace their effects at the micro-level, on the ground, where we can observe the actual consequences of the broader discourses.19
The cause: conflict minerals
The first dominant narrative holds that the illegal exploitation of mineral resources is the main source of violence in the Congo.20 Congolese minerals fund local and foreign armed groups who commit atrocities against the population. The solution is straightforward: to end war and violence, we should end the illegal trafficking of resources.
European advocacy NGOs such as Global Witness were the first to put forth this narrative in the late 1990s. Their campaigns led to the creation of a UN Panel of Inquiry that investigated the illegal exploitation of natural resources and other forms of wealth in the Congo. Along with the efforts of the European NGOs, the three reports that the Panel of Inquiry published between 2001 and 2003 put the topic of mineral resources firmly on the policy agenda.21 From then on, media reports multiplied, along with research on the link between mineral resources and violence in the Congo.22 Newly created US advocacy NGOs like Enough adopted the narrative and helped reinforce nascent interest on the subject in North America. By 2011, conflict minerals had become a requisite topic of conferences and writings on the Congo.
Think tanks, academics, Congolese intellectuals, and interveners on the ground regularly emphasize a number of competing narratives. They highlight the presence of foreign armed groups, Rwandan and Ugandan efforts to eradicate these militias, and the violent competition for power among Congolese leaders. Field-based international peace builders emphasize instead local drivers of tensions, such as land issues and grassroots antagonisms over traditional and administrative power.23 Academics and local populations also point to other economic sources of abuse beyond conflict minerals, notably disputes over cattle, charcoal, timber, drugs, and taxation at checkpoints.24 In fact, estimates show that only 8 percent of all conflicts are over natural resources.25
Although these competing narratives do influence the discourse on the Congo, the conflict minerals narrative has become so prominent that it often eclipses the others. The interviews I conducted with foreign interveners were clear in this regard. A number of them presented mineral resource trafficking as the main reason for Rwanda's involvement in the Congo and the only funding source for armed groups, although it is one among many and often not the largest contributor.26 Numerous interveners similarly emphasized that ending violence required first stopping the illegal exploitation of resources – although it is only one of several urgent steps necessary for ending tensions. The only other measure that these interviewees usually mentioned was the reconstruction of state authority, which they saw as necessary for better control over trade in minerals.
The actions of the countries and organizations most involved in the Congo also clearly illustrate how strongly this dominant narrative has influenced international action. Both panels of experts that the UN has created on the Congo have investigated the illegal exploitation of natural resources.27 While countries and organizations outside of the Great Lakes usually pay little attention to the Congo, Germany, the European Union, the OECD, the US, the UN, and the World Bank have all passed legislation or set up projects to reform the mining sector and help prevent the use of Congolese conflict minerals.28
The conflict minerals narrative has reached and maintained prominence in large part because it resonates with non-Congolese audiences. It latches onto a broader narrative on the economic dimensions of violence and on the ‘resource curse’, which has dominated research on civil wars in the 2000s, and has led to high-profile policy initiatives such as the Kimberley Process.29 It assigns the cause of the problem to the deliberate actions of identifiable individuals (soldiers in various armed groups), references bodily harm to vulnerable people (the Congolese population), suggests a simple solution to the complex issue of the Congolese conflict (to end the illegal exploitation of resources) and enables the American and European publics to take action (by boycotting companies suspected of using conflict minerals). It also lets journalists and advocates tell the story of the Congo in a manner that the less-informed public can easily understand and relate to. As a journalist explained, ‘the fact that I say coltan is in cell phones and your cell phone is supporting the conflict in Congo is a simplification of the conflict, but I would say it anyway, because we as journalists are trying to make things less foreign to a foreign audience’.30 The reactions of large parts of the Congolese elite and diaspora further legitimize this story line. Many of them contend that their country is victim of a global conspiracy in which Western powers support neighbouring states and foreign armed groups and fuel conflict on the ground in order to ease their access to Congolese natural resources.31 In this story, conflict minerals are, again, at the heart of the violence.
The illegal exploitation of resources is certainly an important cause of violence in the Congo.32 The advocacy efforts have thus achieved considerable results. They have helped bring international attention to the Congo. They have forced companies doing business there to consider whether their actions fuelled conflict. They have also made it more difficult for neighbouring countries to exploit Congolese minerals illegally. However, by focusing exclusively on one cause of violence, and one solution to it, the proponents of this narrative have inadvertently exacerbated the very problems they were combating. The dominance of this narrative has diverted attention from much-needed policy actions, such as the resolution of grassroots antagonisms, the fight against corruption, and the reform of the state administration. Furthermore, in 2010 and 2011, as the international regulations came into effect and international pressure enticed Kabila to impose a temporary ban on mining operations in the Kivus and Maniema, it became clear that, given the conditions in the eastern Congo, such technical measures could not make any headway on their own.33 Even worse, as these measures were not accompanied by broader political, economic, military, and social reforms, they actually fuelled the problem they purported to combat. 
Since military leaders remained the principal power brokers in rural areas, and since corruption persisted, the application of the technical measures deprived vulnerable populations of their sole means of livelihood while allowing armed groups to continue and even expand their mining operations.34 Furthermore, many experts worried that the regulations would result in a de facto ban on Congolese mineral exports, given the near impossibility of implementing the required supply chain verification in the unstable conditions prevailing in the eastern Congo.35 This would lead either to a permanent loss of revenue for artisanal miners, their families, and the countless small businesses that depend on them, or to the replacement of ethically sound companies with rogue businesses that would ignore due diligence requirements.36
The consequence: sexual violence
Advocates of the conflict minerals narrative overwhelmingly focus on one specific consequence of the illegal exploitation of resources: rape and sexual torture of women and girls.37 More broadly, enormous attention is paid to the problem of sexual abuse in the eastern Congo, more than to any other form of violence.38 Margot Wallström, the UN Special Representative on Sexual Violence in Conflict, has dubbed the eastern Congo the ‘rape capital of the world’ and the ‘most dangerous place on earth to be a woman’, which are labels that journalists, advocates, and aid workers have used ad nauseam ever since. Rape is the main theme of countless media reports on the Congo. According to an insider, since 2009, there has been no interest in the Congo at the UN Security Council except when it discussed incidents of mass rapes and potential responses to them. Similarly, US State Department top officials reportedly pay no attention to the Congo except when sexual violence grabs the headlines. As a result, visiting a hospital treating victims of sexual abuse (notably the Panzi hospital in Bukavu or Heal Africa in Goma) seems to have become an obligatory stop during diplomatic visits to the eastern Congo, to the point that aid workers on the ground find it appalling. Sexual violence has also become a requisite topic of expertise for all people who work on the Congo. The Belgian foreign minister, for instance, feels obliged to react publicly to every case of mass rape in order to meet the expectation of his domestic audience. Finally, according to donors and aid workers, sexual violence is such a buzzword that many foreign and Congolese organizations insert references to it in all kinds of project proposals to increase their chances of obtaining funding.
Sexual violence has not always dominated the discourse on the Congo. During the large-scale fighting that took place between 1994 and 2003, even though sexual violence existed at higher levels than today, few people discussed it. They talked instead about violence in general and only a handful of humanitarian organizations had specific projects to help victims of sexual abuse. The 2002 report by Human Rights Watch on ‘the war within the war’ was the first to draw attention to this specific form of brutality.39 Journalists and news editors started favouring the sexual violence angle when talking about the Congolese conflict. The attention to this issue prompted NGOs to initiate projects on sexual abuse and to launch fundraising campaigns that reinforced interest in the topic. By all accounts, the visit of Hillary Clinton to the eastern Congo in 2009, which focused on victims of sexual violence and resulted in an offer of millions of dollars in aid, and which was followed shortly after by a trip by Margot Wallström, entrenched sexual violence as the frame to use when thinking about the Congo. From then on, eastern Congo and rape became inextricably linked for most foreign audiences.
Congolese populations on the ground, Congolese intellectuals, and field-based interveners are the most vocal challengers of this narrative. They emphasize the many other consequences of violence, such as killings, forced labour, conscription of child soldiers, and non-sexual torture. There are several reasons, however, why the sexual violence narrative has reached and maintained prominence. To start with, the emotional impact of sexual violence is particularly strong, because of several characteristics. It involves intentionally inflicted bodily harm to individuals who are socially constructed as the most vulnerable (women and girls). It is viewed as the ‘ultimate violation of self’.40 The consequences of this form of violence are also worse than others as, in addition to being tortured, victims are often subjected to social stigma and rejection by their communities.41 Most people thus react more strongly to cases of sexual violence than to other forms of abuse. Furthermore, in the Congo, the presence of sexual violence clashes with the image of the country as a pacified, post-conflict environment, which emerged during the transition to peace and became dominant after the 2006 post-war elections.42 At the same time, it fits perfectly with widespread stereotypes of Congolese people as savage and barbaric.43 Moreover, the narrative resonates with audiences from all nationalities, as sexual abuse takes place everywhere. As a journalist explained, stories of rape are another way to make the Congolese conflict less foreign to the audience. The response of the public is unequivocal: this journalist noticed that, of all of his articles on the Congo, his stories on rape get the highest number of hits.44 Finally, the narrative includes a straightforward, feasible answer to the problem – to provide medical care to victims of sexual abuse – and a possibility for people all over the world to get involved by sending money for projects helping rape survivors.
It is indisputable that everything should be done to stop the scourge of sexual violence in the Congo. Tens of thousands of Congolese are sexually assaulted every year; some of the rapes include horrific torture, and they almost always destroy the lives of the victims. The advocacy efforts have thus achieved a crucial outcome, by leading to the provision of much-needed help to the victims. However, this international focus has also led to unintentionally counterproductive results, namely discrimination against other vulnerable populations and, at times, an increase in the use of sexual abuse by combatants.
The overwhelming focus on sexual abuse against women and girls has led to discrimination against vulnerable populations in two ways. First, the concentration on sexual violence diverts attention from other forms of violence that are equally horrific, such as non-sexual torture, killings, and recruitment of child soldiers.45 For instance, the UN Development Programme's support to the reconstruction of the justice system focuses on enabling Congolese officials to respond to sexual abuses, instead of to all kinds of violent crime.46 The police mission of the European Union has only one unit deployed outside of the capital, and this unit focuses exclusively on the fight against sexual abuse, instead of on the fight against all illegal activities. During off-the-record interviews, Congolese and foreign aid workers regularly complained that they cannot draw the attention of the media or donors to horrific events that have no sexual dimension. They also complained that they receive more money than they need to treat victims of sexual abuse, while they lack funding to implement other crucial projects. The focus on sexual violence has actually shaped the provision of health services to such a point that Congolese women know that often the best, and sometimes the only way to obtain care is to claim to have been raped.47
A second problem is that, while there is enormous attention to violence against women and girls, there is little consideration of sexual abuse of men and boys.48 However, at least 4 to 10 percent of all rape victims are male, and their abuse also carries equally terrible psychological and physical consequences. Ignoring men and boys leads to discrimination in the provision of support to rape survivors. Framing sexual violence as a women's issue is also counterproductive, as it prevents constructive engagement with men – whether they are victims or perpetrators, power brokers or powerless – and thus cannot break the cycle of trauma and violence.
Even worse than discrimination against victims of different forms of abuse, the other main perverse consequence of this dominant narrative is that armed groups have started to perceive sexual violence as an effective bargaining tool.49 The singular focus on sexual violence signals that this form of abuse is particularly forbidden and punishable, and thus creates incentives for various groups to exploit it. While this mostly takes the form of threats of rapes in order to push for negotiations or end military operations, there are also examples of such threats being enacted, such as during the August 2010 mass rapes in Luvungi. A local militia called Mai Mai Sheka, which allied with the foreign rebel group the Democratic Forces for the Liberation of Rwanda, gang raped 387 civilians over the course of three days in a remote part of Walikale territory. According to several sources, Sheka ordered his soldiers to systematically rape women, instead of just looting and beating people as they usually do, because he wanted to draw attention to his armed group and to be invited to the negotiating table.50 He knew that using sexual violence was the best way to reach this goal, because it would draw the attention of the international community, and various states and advocacy groups would put pressure on the Congolese government to negotiate with him – which is exactly what happened. Unfortunately, many other rebel leaders have used the same reasoning as Sheka and humanitarian organizations have observed an increase in the use of sexual violence by armed groups that have political claims.
This last unintended consequence would not exist if it were not for the presence of a final problem: there is much more attention, and many more projects, devoted to the consequences of sexual violence than to its causes, such as poverty, land conflict, hostile civil–military relationships, disorganization of the army and the police, weakness of the justice system, physical and economic insecurity, and oppressive gender norms.51 The massive media coverage in the aftermath of the 2010 mass rapes in Luvungi is a case in point: all news items focused on the horrific nature of the violence, and on the UN failure to respond, while virtually none tried to explain why the soldiers decided to rape. The International Security and Stabilization Support Strategy provides a good illustration of how international contributions are used: 72 percent of the funds for sexual violence are devoted to treating victims of rape, and only 27 percent to preventing sexual abuse.52 The UN strategy on sexual violence presents a similar disproportion.53 Regrettably, the millions of dollars spent on this problem will never resolve it if they do not address its causes. Helping women who have been raped is imperative, but there is no doubt that the victims would have preferred an effective prevention programme, which would have spared them from assault in the first place.
The solution: state building
As was evident in my interviews, virtually all interveners saw the reconstruction of state authority in the east as the most effective way to end violence, including sexual abuse, and to stop the illegal exploitation of natural resources.54 Thus, one of the main priorities of the UN peacekeeping mission, as well as of numerous international donors and UN agencies, was to help the Congolese government extend its authority in the unstable eastern provinces.55
The focus on state building as the central solution to the complex problems of the Congo comes from two sources. First, diplomats and the leadership of international organizations are most comfortable with a state-to-state approach.56 They are trained to deal with state officials, and they see such interactions as the best way to respect the global norms of sovereignty and non-interference. It is therefore of utmost importance for these high-ranking interveners to ensure that they have counterparts with whom to interact. Second, from 2009 onward, international interveners believed that they had successfully implemented all the standard post-war solutions, notably general elections as well as national and regional reconciliation. From their point of view, the remaining problems were thus due to criminality and other ‘law and order’ issues, which the Congo would be able to tackle if it were not a ‘failed state’.57 Reconstructing state authority was a way to give the Congolese government the capacity necessary to address these domestic matters. At the same time, many international NGOs and church structures saw themselves as providing services that should be the responsibility of the state, such as health care and education. They therefore considered state building to be a sustainable exit strategy.58
The main problem with this strategy is that the Congolese state remains a predatory structure, as it has been during most of the Congo's history.59 Governmental officials are often preoccupied with using public offices as a means to accumulate personal wealth, even when it conflicts with the pursuit of the public good.60 State officials, including members of the army, the police, and the administration, continue to be responsible for the largest part of all human rights violations.61 Consequently, throughout the eastern Congo, people often experience the state as an oppressive, exploitative, and threatening machine, instead of seeing it as a structure set up for their benefit. Overall, large parts of the population survive in spite of the state rather than with its help.
While policy makers based in headquarters and national capitals often overlooked this problem, field-based interveners were painfully aware of it. There were thus nuances within the dominant narrative, notably different views of which components of the state structure interveners should emphasize. On-the-ground interveners and Congolese activists requested a strengthening of the justice system in order to end impunity and a reorganization of the armed forces in order to halt opportunistic violence. However, these advocacy efforts led to very few concrete results, as national and local authorities who benefited from the status quo met them with resistance. Worse, the dominant narrative insisted so strongly on state building as the leading solution to violence that, despite their failure to promote accountability and respect for human rights, interveners preferred to implement any kind of state reconstruction project possible rather than no project at all. The international efforts thus focused on material reconstruction. Using funding from a number of bilateral and multilateral donors, UN agencies have built roads and administrative buildings, and have transported police and military forces to their new areas of deployment.
Unfortunately, extending the authority of a predatory state merely results in replacing one group of perpetrators (foreign and Congolese rebel groups) with another (state authorities and state security forces). Furthermore, it sometimes actually worsens living conditions for the population. For instance, Jeroen Adam and Koen Vlassenroot have masterfully demonstrated how the international efforts to reconstruct the taxation system became constitutive of the regime of predation.62 The aftermath of the 2010 mass rapes in Luvungi provides another telling illustration. In response to the attack, the Congolese army deployed a battalion to ensure the safety of the population and to dismantle the bases of the armed groups responsible for the assault. This operation caused the displacement of hundreds of people and was marred by numerous human rights violations, including ‘rape, abduction and disappearance, perpetrated by [army] soldiers against civilians’.63 In addition, outraged by the news of the mass rapes, high-level diplomats and UN officials vowed to bring justice to the victims. The UN thus helped deploy Congolese justice officials to conduct the investigation, but the proceedings were so poorly organized that they resulted in perpetrators threatening victims with death to discourage their testifying against them.64 To protect the victims, high-ranking interveners asked the peacekeeping mission to help station 100 Congolese police. Field-based peace builders tried to stall the process, as they knew that these underpaid police would be one more factor of insecurity in the area in the long run, but they eventually had to comply, which created another protection problem for an already sorely affected population.
Interestingly, nobody I met challenged the emphasis on state building as the indispensable response to the ongoing conflict. There was no narrative emphasizing other modes of social organization beyond the state.65 When I asked interveners why they persevered even when there was no evidence that the presence of state authorities would benefit the population, and even when all available data suggested that state officials were likely to commit abuses, they answered that there was no alternative. Likewise, every Congolese I talked to, from poor peasants to high-level policy makers, presented the extension of state presence as an essential measure to end the violence. Even if they did not trust the police and the army, even if they had been victims of abuses in the past, they still hoped that state reconstruction would eventually better their living circumstances.
Conclusion
Policy analysts and academic researchers have paid enormous attention to the national and regional causes for the continuation of violence in the eastern Congo. This focus is legitimate, as domestic and foreign actors who incite fighting are mainly responsible for the ongoing human rights abuses. However, the analysis is incomplete if we overlook the unintended effects of well-intentioned international efforts.
Three related narratives dominate the discourse on the Congo and eclipse numerous competing framings of the situation. They emphasize one central cause (illegal exploitation of natural resources), one main consequence (sexual abuse of women and girls), and one key solution (reconstructing state authority). These dominant narratives have helped bring international attention to the Congo. They have challenged the view of the Congolese conflict as an intractable problem. They have made it possible for international interveners to identify concrete actions that would help improve the situation in the Congo. Indeed, these actions have assuaged some of the sources of violence, notably those linked to the exploitation of Congolese minerals. They have also enabled agencies to raise the funds necessary to provide much-needed help to victims of rape. However, by leading interveners to focus overwhelmingly on these issues, and to neglect other causes, consequences, and solutions, these narratives also have a number of perverse consequences. They obscure most interveners' understanding of the multi-layered problems of the Congo. They orient the intervention toward a series of technical responses and hinder the search for a comprehensive solution. They lead interveners to privilege one category of victims over all the others. Even more disconcertingly, they reinforce the problems that their advocates want to address, notably by legitimizing state-building programmes that reinforce the harassment of the populations by state officials, and by turning sexual violence into an attractive tool for armed groups.
Developing policy recommendations to offset the negative impact of dominant narratives while preserving their positive outcomes would require an entirely new article. This one suggests several pointers, however. Advocacy organizations should emphasize the other causes and consequences of violence. In the short run, this would help raise funds to address these other issues, while, in the long term, it would reinforce the existing contestation of the dominant narratives and thus bolster the process of change. In addition, when reacting to cases of sexual abuse, top policy officials should consider using quiet diplomacy instead of public denunciations that produce results contrary to their intended goals. When funding sexual violence projects, donors should devote more money to addressing its causes. The reconstruction of state authority might help address sexual abuse and conflict minerals, but only if interveners fundamentally review their current strategy. They should halt the material programmes and, with those funds, consider paying the salary of state officials and security forces, as lack of income often leaves them no choice but to harass the population. Interveners should also refocus their efforts on strengthening the justice system and promoting respect for human rights, notably by using performance-based financing. Finally, they should help deploy additional security forces and state authorities only when they are properly trained, paid, and supervised.
© The Author 2012. Published by Oxford University Press on behalf of Royal African Society. All rights reserved |
package main
import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"time"

	"github.com/cosmos/cosmos-sdk/codec"
	crkeys "github.com/cosmos/cosmos-sdk/crypto/keys"
	sdk "github.com/cosmos/cosmos-sdk/types"
	sdkrest "github.com/cosmos/cosmos-sdk/types/rest"
	"github.com/cosmos/cosmos-sdk/version"
	"github.com/cosmos/cosmos-sdk/x/auth"
	authrest "github.com/cosmos/cosmos-sdk/x/auth/client/rest"
	authclient "github.com/cosmos/cosmos-sdk/x/auth/client/utils"
	authexported "github.com/cosmos/cosmos-sdk/x/auth/exported"
	"github.com/cosmos/cosmos-sdk/x/bank"
	"github.com/cosmos/cosmos-sdk/x/gov"
	"github.com/cosmos/cosmos-sdk/x/gov/types"
	"github.com/cosmos/cosmos-sdk/x/staking"
	tmtime "github.com/tendermint/tendermint/types/time"

	"github.com/kava-labs/kava/app"
	"github.com/kava-labs/kava/x/cdp"
	"github.com/kava-labs/kava/x/pricefeed"
)
func init() {
	version.Name = "kava"
	config := sdk.GetConfig()
	app.SetBech32AddressPrefixes(config)
	app.SetBip44CoinType(config)
	config.Seal()
	keybase = getKeybase()
}
var (
	keybase crkeys.Keybase
)
func main() {
	// send setup messages to the blockchain so it is in the correct state for testing
	sendProposal()
	sendDeposit()
	sendVote()
	sendDelegation()
	sendUndelegation()
	sendCoins()
	// create an XRP cdp and send to blockchain
	sendXrpCdp()
	// create a BTC cdp and send to blockchain
	sendBtcCdp()
	// reduce the price of BTC to trigger an auction
	sendMsgPostPrice()
}
// lower the price of btc to trigger an auction
func sendMsgPostPrice() {
	// get the address
	address := getTestAddress()
	// get the keyname and password
	keyname, password := getKeynameAndPassword()
	addr, err := sdk.AccAddressFromBech32(address) // validator address
	if err != nil {
		panic(err)
	}
	price, err := sdk.NewDecFromStr("1")
	if err != nil {
		panic(err)
	}
	// set the expiry time
	expiry := tmtime.Now().Add(time.Second * 100000)
	// create a price message to send to the blockchain
	// from, assetcode, price, expiry
	msg := pricefeed.NewMsgPostPrice(
		addr,
		"btc:usd",
		price,
		expiry,
	)
	// helper methods for transactions
	cdc := app.MakeCodec() // make codec for the app
	// get the keybase
	keybase := getKeybase()
	// cast to the generic msg type
	msgToSend := []sdk.Msg{msg}
	// send the message to the blockchain
	sendMsgToBlockchain(cdc, address, keyname, password, msgToSend, keybase)
}
func sendBtcCdp() {
	// get the address
	address := getTestAddress()
	// get the keyname and password
	keyname, password := getKeynameAndPassword()
	addr, err := sdk.AccAddressFromBech32(address) // validator address
	if err != nil {
		panic(err)
	}
	// create a cdp message to send to the blockchain
	// sender, collateral, principal
	msg := cdp.NewMsgCreateCDP(
		addr,
		sdk.NewInt64Coin("btc", 200000000),
		sdk.NewInt64Coin("usdx", 10000000),
		"btc-a",
	)
	// helper methods for transactions
	cdc := app.MakeCodec() // make codec for the app
	// get the keybase
	keybase := getKeybase()
	// cast to the generic msg type
	msgToSend := []sdk.Msg{msg}
	// send the message to the blockchain
	sendMsgToBlockchain(cdc, address, keyname, password, msgToSend, keybase)
}
func sendXrpCdp() {
	// get the address
	address := getTestAddress()
	// get the keyname and password
	keyname, password := getKeynameAndPassword()
	addr, err := sdk.AccAddressFromBech32(address) // validator address
	if err != nil {
		panic(err)
	}
	// create a cdp message to send to the blockchain
	// sender, collateral, principal
	msg := cdp.NewMsgCreateCDP(
		addr,
		sdk.NewInt64Coin("xrp", 200000000),
		sdk.NewInt64Coin("usdx", 10000000),
		"xrp-a",
	)
	// helper methods for transactions
	cdc := app.MakeCodec() // make codec for the app
	// get the keybase
	keybase := getKeybase()
	// cast to the generic msg type
	msgToSend := []sdk.Msg{msg}
	// send the message to the blockchain
	sendMsgToBlockchain(cdc, address, keyname, password, msgToSend, keybase)
}
func sendProposal() {
// get the address
address := getTestAddress()
// get the keyname and password
keyname, password := getKeynameAndPassword()
proposalContent := gov.ContentFromProposalType("A Test Title", "A test description on this proposal.", gov.ProposalTypeText)
addr, err := sdk.AccAddressFromBech32(address) // validator address
if err != nil {
panic(err)
}
// create a message to send to the blockchain
msg := gov.NewMsgSubmitProposal(
proposalContent,
sdk.NewCoins(sdk.NewInt64Coin("stake", 1000)),
addr,
)
// helper methods for transactions
cdc := app.MakeCodec() // make codec for the app
// get the keybase
keybase := getKeybase()
// SEND THE PROPOSAL
// cast to the generic msg type
msgToSend := []sdk.Msg{msg}
// send the PROPOSAL message to the blockchain
sendMsgToBlockchain(cdc, address, keyname, password, msgToSend, keybase)
}
func sendDeposit() {
// get the address
address := getTestAddress()
// get the keyname and password
keyname, password := getKeynameAndPassword()
addr, err := sdk.AccAddressFromBech32(address) // validator
if err != nil {
panic(err)
}
// helper methods for transactions
cdc := app.MakeCodec() // make codec for the app
// get the keybase
keybase := getKeybase()
// NOW SEND THE DEPOSIT
// create a deposit transaction to send to the proposal
amount := sdk.NewCoins(sdk.NewInt64Coin(sdk.DefaultBondDenom, 10000000))
deposit := gov.NewMsgDeposit(addr, 1, amount) // Note: '1' must match 'x-example' in swagger.yaml
depositToSend := []sdk.Msg{deposit}
sendMsgToBlockchain(cdc, address, keyname, password, depositToSend, keybase)
}
func sendVote() {
// get the address
address := getTestAddress()
// get the keyname and password
keyname, password := getKeynameAndPassword()
addr, err := sdk.AccAddressFromBech32(address) // validator
if err != nil {
panic(err)
}
// helper methods for transactions
cdc := app.MakeCodec() // make codec for the app
// get the keybase
keybase := getKeybase()
// NOW SEND THE VOTE
// create a vote on a proposal to send to the blockchain
vote := gov.NewMsgVote(addr, uint64(1), types.OptionYes) // Note: '1' must match 'x-example' in swagger.yaml
// send a vote to the blockchain
voteToSend := []sdk.Msg{vote}
sendMsgToBlockchain(cdc, address, keyname, password, voteToSend, keybase)
}
// this should send coins from one address to another
func sendCoins() {
// get the address
address := getTestAddress()
// get the keyname and password
keyname, password := getKeynameAndPassword()
addrFrom, err := sdk.AccAddressFromBech32(address) // validator
if err != nil {
panic(err)
}
addrTo, err := sdk.AccAddressFromBech32("kava1ls82zzghsx0exkpr52m8vht5jqs3un0ceysshz") // Note: must match the faucet address
if err != nil {
panic(err)
}
// helper methods for transactions
cdc := app.MakeCodec() // make codec for the app
// get the keybase
keybase := getKeybase()
// create coins
amount := sdk.NewCoins(sdk.NewInt64Coin(sdk.DefaultBondDenom, 2000000))
	coins := bank.NewMsgSend(addrFrom, addrTo, amount)
coinsToSend := []sdk.Msg{coins}
// NOW SEND THE COINS
// send the coin message to the blockchain
sendMsgToBlockchain(cdc, address, keyname, password, coinsToSend, keybase)
}
func getTestAddress() (address string) {
// the test address - Note: this must match with startchain.sh
address = "kava1ffv7nhd3z6sych2qpqkk03ec6hzkmufy0r2s4c"
return address
}
func getKeynameAndPassword() (keyname string, password string) {
keyname = "<PASSWORD>" // Note: this must match the keys in the startchain.sh script
password = "" // Note: this must match the keys in the startchain.sh script
return keyname, password
}
// this should send a delegation
func sendDelegation() {
// get the address
address := getTestAddress()
// get the keyname and password
keyname, password := getKeynameAndPassword()
addrFrom, err := sdk.AccAddressFromBech32(address) // validator
if err != nil {
panic(err)
}
// helper methods for transactions
cdc := app.MakeCodec() // make codec for the app
// get the keybase
keybase := getKeybase()
// get the validator address for delegation
valAddr, err := sdk.ValAddressFromBech32("kavavaloper1ffv7nhd3z6sych2qpqkk03ec6hzkmufyz4scd0") // **FAUCET**
if err != nil {
panic(err)
}
// create delegation amount
delAmount := sdk.NewInt64Coin(sdk.DefaultBondDenom, 1000000)
delegation := staking.NewMsgDelegate(addrFrom, valAddr, delAmount)
delegationToSend := []sdk.Msg{delegation}
// send the delegation to the blockchain
sendMsgToBlockchain(cdc, address, keyname, password, delegationToSend, keybase)
}
// this should send a MsgUndelegate
func sendUndelegation() {
// get the address
address := getTestAddress()
// get the keyname and password
keyname, password := getKeynameAndPassword()
addrFrom, err := sdk.AccAddressFromBech32(address) // validator
if err != nil {
panic(err)
}
// helper methods for transactions
cdc := app.MakeCodec() // make codec for the app
// get the keybase
keybase := getKeybase()
// get the validator address for delegation
valAddr, err := sdk.ValAddressFromBech32("<KEY>") // **FAUCET**
if err != nil {
panic(err)
}
// create delegation amount
undelAmount := sdk.NewInt64Coin(sdk.DefaultBondDenom, 1000000)
undelegation := staking.NewMsgUndelegate(addrFrom, valAddr, undelAmount)
delegationToSend := []sdk.Msg{undelegation}
// send the delegation to the blockchain
sendMsgToBlockchain(cdc, address, keyname, password, delegationToSend, keybase)
}
func getKeybase() crkeys.Keybase {
if keybase != nil {
return keybase
}
// create a keybase
	// IMPORTANT - take this from a command line parameter; note that tilde paths (i.e. ~/) do NOT work
// myKeybase, err := keys.NewKeyBaseFromDir("/tmp/kvcliHome")
	inBuf := strings.NewReader("")
	// note: assign with '=' (not ':='), otherwise the package-level keybase
	// cache checked above is shadowed and never populated
	var err error
	keybase, err = crkeys.NewKeyring(sdk.KeyringServiceName(),
		"test", "/tmp/kvcliHome", inBuf)
if err != nil {
panic(err)
}
return keybase
}
// sendMsgToBlockchain sends a message to the blockchain via the rest api
func sendMsgToBlockchain(cdc *codec.Codec, address string, keyname string,
password string, msg []sdk.Msg, keybase crkeys.Keybase) {
// get the account number and sequence number
accountNumber, sequenceNumber := getAccountNumberAndSequenceNumber(cdc, address)
txBldr := auth.NewTxBuilder(
authclient.GetTxEncoder(cdc), accountNumber, sequenceNumber, 500000, 0,
true, "testing", "memo", sdk.NewCoins(), sdk.NewDecCoins(),
).WithTxEncoder(authclient.GetTxEncoder(cdc)).WithChainID("testing").
WithKeybase(keybase).WithAccountNumber(accountNumber).
WithSequence(sequenceNumber).WithGas(500000)
// build and sign the transaction
// this is the *Amino* encoded version of the transaction
	txBytes, err := txBldr.BuildAndSign("vlad", "", msg) // note: signs with the hard-coded key name "vlad", not the keyname/password arguments
if err != nil {
panic(err)
}
// fmt.Printf("txBytes: %s", txBytes)
// need to convert the Amino encoded version back to an actual go struct
var tx auth.StdTx
	if err := cdc.UnmarshalBinaryLengthPrefixed(txBytes, &tx); err != nil { // might be UnmarshalBinaryBare
		panic(err)
	}
// now we re-marshall it again into json
	jsonBytes, err := cdc.MarshalJSON(
		authrest.BroadcastReq{
			Tx:   tx,
			Mode: "block",
		},
	)
	// check the marshalling error before using jsonBytes
	if err != nil {
		panic(err)
	}
fmt.Println()
fmt.Println("post body: ", string(jsonBytes))
fmt.Println()
resp, err := http.Post("http://localhost:1317/txs", "application/json", bytes.NewBuffer(jsonBytes))
if err != nil {
panic(err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
panic(err)
}
fmt.Printf("\n\nBody:\n\n")
fmt.Println(string(body))
}
// getAccountNumberAndSequenceNumber gets an account number and sequence number from the blockchain
func getAccountNumberAndSequenceNumber(cdc *codec.Codec, address string) (accountNumber uint64, sequenceNumber uint64) {
// we need to setup the account number and sequence in order to have a valid transaction
resp, err := http.Get("http://localhost:1317/auth/accounts/" + address)
if err != nil {
panic(err)
}
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
panic(err)
}
var bodyUnmarshalled sdkrest.ResponseWithHeight
err = cdc.UnmarshalJSON(body, &bodyUnmarshalled)
if err != nil {
panic(err)
}
var account authexported.Account
err = cdc.UnmarshalJSON(bodyUnmarshalled.Result, &account)
if err != nil {
panic(err)
}
return account.GetAccountNumber(), account.GetSequence()
}
|
// Copyright 2017 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#include "components/subresource_filter/content/browser/page_load_statistics.h"
#include "base/logging.h"
#include "base/metrics/histogram_macros.h"
#include "components/subresource_filter/core/common/time_measurements.h"
namespace subresource_filter {
PageLoadStatistics::PageLoadStatistics(const ActivationState& state)
: activation_state_(state) {}
PageLoadStatistics::~PageLoadStatistics() = default;
void PageLoadStatistics::OnDocumentLoadStatistics(
const DocumentLoadStatistics& statistics) {
// Note: Chances of overflow are negligible.
aggregated_document_statistics_.num_loads_total += statistics.num_loads_total;
aggregated_document_statistics_.num_loads_evaluated +=
statistics.num_loads_evaluated;
aggregated_document_statistics_.num_loads_matching_rules +=
statistics.num_loads_matching_rules;
aggregated_document_statistics_.num_loads_disallowed +=
statistics.num_loads_disallowed;
aggregated_document_statistics_.evaluation_total_wall_duration +=
statistics.evaluation_total_wall_duration;
aggregated_document_statistics_.evaluation_total_cpu_duration +=
statistics.evaluation_total_cpu_duration;
}
void PageLoadStatistics::OnDidFinishLoad() {
if (activation_state_.activation_level != ActivationLevel::DISABLED) {
UMA_HISTOGRAM_COUNTS_1000(
"SubresourceFilter.PageLoad.NumSubresourceLoads.Total",
aggregated_document_statistics_.num_loads_total);
UMA_HISTOGRAM_COUNTS_1000(
"SubresourceFilter.PageLoad.NumSubresourceLoads.Evaluated",
aggregated_document_statistics_.num_loads_evaluated);
UMA_HISTOGRAM_COUNTS_1000(
"SubresourceFilter.PageLoad.NumSubresourceLoads.MatchedRules",
aggregated_document_statistics_.num_loads_matching_rules);
UMA_HISTOGRAM_COUNTS_1000(
"SubresourceFilter.PageLoad.NumSubresourceLoads.Disallowed",
aggregated_document_statistics_.num_loads_disallowed);
}
if (activation_state_.measure_performance) {
DCHECK(activation_state_.activation_level != ActivationLevel::DISABLED);
UMA_HISTOGRAM_CUSTOM_MICRO_TIMES(
"SubresourceFilter.PageLoad.SubresourceEvaluation.TotalWallDuration",
aggregated_document_statistics_.evaluation_total_wall_duration,
base::TimeDelta::FromMicroseconds(1), base::TimeDelta::FromSeconds(10),
50);
UMA_HISTOGRAM_CUSTOM_MICRO_TIMES(
"SubresourceFilter.PageLoad.SubresourceEvaluation.TotalCPUDuration",
aggregated_document_statistics_.evaluation_total_cpu_duration,
base::TimeDelta::FromMicroseconds(1), base::TimeDelta::FromSeconds(10),
50);
} else {
DCHECK(aggregated_document_statistics_.evaluation_total_wall_duration
.is_zero());
DCHECK(aggregated_document_statistics_.evaluation_total_cpu_duration
.is_zero());
}
}
} // namespace subresource_filter
|
/*
* drivers/i2c/muxes/i2c-mux-mlxcpld.c
* Copyright (c) 2016 Mellanox Technologies. All rights reserved.
* Copyright (c) 2016 Michael Shych <[email protected]>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the names of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* Alternatively, this software may be distributed under the terms of the
* GNU General Public License ("GPL") version 2 as published by the Free
* Software Foundation.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
#include <linux/device.h>
#include <linux/i2c.h>
#include <linux/i2c-mux.h>
#include <linux/io.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/i2c/mlxcpld.h>
#define CPLD_MUX_MAX_NCHANS 8
/* mlxcpld_mux - mux control structure:
* @last_chan - last register value
* @client - I2C device client
*/
struct mlxcpld_mux {
u8 last_chan;
struct i2c_client *client;
};
/* MUX logic description.
* Driver can support different mux control logic, according to CPLD
* implementation.
*
* Connectivity schema.
*
* i2c-mlxcpld Digital Analog
* driver
* *--------* * -> mux1 (virt bus2) -> mux -> |
* | I2CLPC | i2c physical * -> mux2 (virt bus3) -> mux -> |
* | bridge | bus 1 *---------* |
* | logic |---------------------> * mux reg * |
* | in CPLD| *---------* |
 * *--------* i2c-mux-mlxcpld ^ * -> muxn (virt busn) -> mux -> |
* | driver | |
* | *---------------* | Devices
* | * CPLD (i2c bus)* select |
* | * registers for *--------*
* | * mux selection * deselect
* | *---------------*
* | |
* <--------> <----------->
* i2c cntrl Board cntrl reg
* reg space space (mux select,
* IO, LED, WD, info)
*
*/
static const struct i2c_device_id mlxcpld_mux_id[] = {
{ "mlxcpld_mux_module", 0 },
{ }
};
MODULE_DEVICE_TABLE(i2c, mlxcpld_mux_id);
/* Write to mux register. Don't use i2c_transfer() and i2c_smbus_xfer()
* for this as they will try to lock adapter a second time.
*/
static int mlxcpld_mux_reg_write(struct i2c_adapter *adap,
struct i2c_client *client, u8 val)
{
struct mlxcpld_mux_plat_data *pdata = dev_get_platdata(&client->dev);
int ret = -ENODEV;
if (adap->algo->master_xfer) {
struct i2c_msg msg;
u8 msgbuf[] = {pdata->sel_reg_addr, val};
msg.addr = client->addr;
msg.flags = 0;
msg.len = 2;
msg.buf = msgbuf;
ret = __i2c_transfer(adap, &msg, 1);
if (ret >= 0 && ret != 1)
ret = -EREMOTEIO;
} else if (adap->algo->smbus_xfer) {
union i2c_smbus_data data;
data.byte = val;
ret = adap->algo->smbus_xfer(adap, client->addr,
client->flags, I2C_SMBUS_WRITE,
pdata->sel_reg_addr,
I2C_SMBUS_BYTE_DATA, &data);
}
return ret;
}
static int mlxcpld_mux_select_chan(struct i2c_mux_core *muxc, u32 chan)
{
struct mlxcpld_mux *data = i2c_mux_priv(muxc);
struct i2c_client *client = data->client;
u8 regval = chan + 1;
int err = 0;
	/* Only select the channel if it's different from the last channel */
if (data->last_chan != regval) {
err = mlxcpld_mux_reg_write(muxc->parent, client, regval);
data->last_chan = err < 0 ? 0 : regval;
}
return err;
}
static int mlxcpld_mux_deselect(struct i2c_mux_core *muxc, u32 chan)
{
struct mlxcpld_mux *data = i2c_mux_priv(muxc);
struct i2c_client *client = data->client;
/* Deselect active channel */
data->last_chan = 0;
return mlxcpld_mux_reg_write(muxc->parent, client, data->last_chan);
}
/* Probe/remove functions */
static int mlxcpld_mux_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct i2c_adapter *adap = to_i2c_adapter(client->dev.parent);
struct mlxcpld_mux_plat_data *pdata = dev_get_platdata(&client->dev);
struct i2c_mux_core *muxc;
int num, force;
struct mlxcpld_mux *data;
int err;
if (!pdata)
return -EINVAL;
if (!i2c_check_functionality(adap, I2C_FUNC_SMBUS_WRITE_BYTE_DATA))
return -ENODEV;
muxc = i2c_mux_alloc(adap, &client->dev, CPLD_MUX_MAX_NCHANS,
sizeof(*data), 0, mlxcpld_mux_select_chan,
mlxcpld_mux_deselect);
if (!muxc)
return -ENOMEM;
data = i2c_mux_priv(muxc);
i2c_set_clientdata(client, muxc);
data->client = client;
data->last_chan = 0; /* force the first selection */
/* Create an adapter for each channel. */
for (num = 0; num < CPLD_MUX_MAX_NCHANS; num++) {
if (num >= pdata->num_adaps)
/* discard unconfigured channels */
break;
force = pdata->adap_ids[num];
err = i2c_mux_add_adapter(muxc, force, num, 0);
if (err)
goto virt_reg_failed;
}
return 0;
virt_reg_failed:
i2c_mux_del_adapters(muxc);
return err;
}
static int mlxcpld_mux_remove(struct i2c_client *client)
{
struct i2c_mux_core *muxc = i2c_get_clientdata(client);
i2c_mux_del_adapters(muxc);
return 0;
}
static struct i2c_driver mlxcpld_mux_driver = {
.driver = {
.name = "mlxcpld-mux",
},
.probe = mlxcpld_mux_probe,
.remove = mlxcpld_mux_remove,
.id_table = mlxcpld_mux_id,
};
module_i2c_driver(mlxcpld_mux_driver);
MODULE_AUTHOR("Michael Shych ([email protected])");
MODULE_DESCRIPTION("Mellanox I2C-CPLD-MUX driver");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_ALIAS("platform:i2c-mux-mlxcpld");
|
//! Load configuration values from a JSON file
//! @param[in] jsonFilename file (including path if needed) of config.json
//! @param[out] json object with parameters: T, {x,y}{min,max}, Nx, Ny, cfl
nlohmann::json loadConfig(std::string jsonFilename) {
std::ifstream i(jsonFilename);
assert(i.good() && "config.json not found in current or parent directory");
nlohmann::json j;
i >> j;
return j;
} |
package info.xiaomo.core.network.mina.code;
import java.net.URLDecoder;
import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;
import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;
import org.apache.mina.http.ArrayUtil;
import org.apache.mina.http.HttpRequestImpl;
import org.apache.mina.http.api.HttpMethod;
import org.apache.mina.http.api.HttpVersion;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * HTTP request message decoder
*
*
* @version $Id: $Id
*/
public class HttpServerDecoderImpl extends CumulativeProtocolDecoder {
/**
* Regex to parse HttpRequest Request Line
*/
public static final Pattern REQUEST_LINE_PATTERN = Pattern.compile(" ");
/**
* Regex to parse out QueryString from HttpRequest
*/
public static final Pattern QUERY_STRING_PATTERN = Pattern.compile("\\?");
/**
* Regex to parse out parameters from query string
*/
public static final Pattern PARAM_STRING_PATTERN = Pattern.compile("\\&|;");
/**
* Regex to parse out key/value pairs
*/
public static final Pattern KEY_VALUE_PATTERN = Pattern.compile("=");
/**
* Regex to parse raw headers and body
*/
public static final Pattern RAW_VALUE_PATTERN = Pattern.compile("\\r\\n\\r\\n");
/**
* Regex to parse raw headers from body
*/
public static final Pattern HEADERS_BODY_PATTERN = Pattern.compile("\\r\\n");
/**
* Regex to parse header name and value
*/
public static final Pattern HEADER_VALUE_PATTERN = Pattern.compile(":");
/**
* Regex to split cookie header following RFC6265 Section 5.4
*/
public static final Pattern COOKIE_SEPARATOR_PATTERN = Pattern.compile(";");
/**
* 已解析的HTTP对象
*/
public static final String HTTP_REQUEST = "http.request";
private static final Charset CHARSET = Charset.forName("UTF-8");
private static final Logger LOG = LoggerFactory.getLogger(HttpServerDecoderImpl.class);
/**
* {@inheritDoc}
*/
@Override
protected boolean doDecode(IoSession session, IoBuffer msg, ProtocolDecoderOutput out) throws Exception {
        /**
         * The message has already been parsed.
         * A single Chrome request can arrive as several reads, and Chrome
         * additionally requests the /favicon.ico path.
         */
if (session.containsAttribute(HTTP_REQUEST)) {
return false;
}
msg.mark();
HttpRequestImpl rq = parseHttpRequestHead(msg.buf(), msg);
if (rq != null) {
out.write(rq);
session.setAttribute(HTTP_REQUEST, rq);
// LOG.info("解析成功");
return true;
}
msg.reset();
return false;
}
/**
* {@inheritDoc}
*/
@Override
public void finishDecode(final IoSession session, final ProtocolDecoderOutput out) throws Exception {
}
/**
* {@inheritDoc}
*/
@Override
public void dispose(final IoSession session) throws Exception {
}
private HttpRequestImpl parseHttpRequestHead(final ByteBuffer buffer, IoBuffer msg) throws Exception {
// Java 6 >> String raw = new String(buffer.array(), 0, buffer.limit(),
// Charset.forName("UTF-8"));
final String raw = new String(buffer.array(), 0, buffer.limit());
// LOG.debug(raw);
final String[] headersAndBody = RAW_VALUE_PATTERN.split(raw, -1);
if (headersAndBody.length <= 1) {
return null;
}
String[] headerFields = HEADERS_BODY_PATTERN.split(headersAndBody[0]);
headerFields = ArrayUtil.dropFromEndWhile(headerFields, "");
final String requestLine = headerFields[0];
final Map<String, String> generalHeaders = new HashMap<String, String>();
for (int i = 1; i < headerFields.length; i++) {
final String[] header = HEADER_VALUE_PATTERN.split(headerFields[i]);
generalHeaders.put(header[0].toLowerCase(), header[1].trim());
}
final String[] elements = REQUEST_LINE_PATTERN.split(requestLine);
final HttpMethod method = HttpMethod.valueOf(elements[0]);
final HttpVersion version = HttpVersion.fromString(elements[2]);
final String[] pathFrags = QUERY_STRING_PATTERN.split(elements[1]);
final String requestedPath = pathFrags[0];
String queryString = pathFrags.length >= 2 ? pathFrags[1] : "";
queryString = URLDecoder.decode(queryString, "UTF-8");
// we put the buffer position where we found the beginning of the HTTP
// body
buffer.position(headersAndBody[0].length() + 4);
        // POST request
String contentLen = generalHeaders.get("content-length");
        // POST data
if (contentLen != null && method == HttpMethod.POST) {
LOG.debug("found content len : {}", contentLen);
LOG.debug("decoding BODY: {} bytes", msg.remaining());
int contentLength = Integer.valueOf(contentLen);
if (contentLength <= msg.remaining()) {
byte[] content = new byte[contentLength];
msg.get(content);
String str = new String(content, CHARSET);
queryString = URLDecoder.decode(str, "UTF-8");
}
}
return new HttpRequestImpl(version, method, requestedPath, queryString, generalHeaders);
}
}
|
def vscroll(self, steps=1):
for col in range(self.columns):
bits = self[col::self.columns]
if steps > 0:
self[col::self.columns] = bitops.rotater(bits, steps)
elif steps < 0:
self[col::self.columns] = bitops.rotatel(bits, abs(steps)) |
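The `bitops` module used by `vscroll` is not shown here. Assuming `rotater`/`rotatel` circularly rotate a bit sequence right/left (so pixels scrolled off one edge of a column reappear at the other), a minimal pure-Python stand-in would be:

```python
def rotater(bits, steps):
    """Rotate a sequence right by `steps` positions, circularly (assumed semantics)."""
    bits = list(bits)
    if not bits:
        return bits
    steps %= len(bits)
    # slice off the tail and prepend it; steps == 0 means no change
    return bits[-steps:] + bits[:-steps] if steps else bits


def rotatel(bits, steps):
    """Rotate a sequence left by `steps` positions, circularly (assumed semantics)."""
    # a left rotation is a right rotation by the negated amount
    return rotater(bits, -steps)
```

With these definitions, `rotater([1, 2, 3, 4], 1)` yields `[4, 1, 2, 3]` and `rotatel([1, 2, 3, 4], 1)` yields `[2, 3, 4, 1]`, which matches the wrap-around scrolling behaviour `vscroll` relies on.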
/**
* Set an array of CGI filenames/handler functions
*
* @param cgis an array of CGI filenames/handler functions
* @param num_handlers number of elements in the 'cgis' array
*/
void http_set_cgi_handlers(const tCGI *cgis, int num_handlers)
{
LWIP_ASSERT("no cgis given", cgis != NULL);
LWIP_ASSERT("invalid number of handlers", num_handlers > 0);
g_pCGIs = cgis;
g_iNumCGIs = num_handlers;
} |
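The registration pattern above can be sketched in a self-contained form. The `tCGI` struct and handler signature below mirror lwIP's httpd types but are simplified stand-ins so the sketch compiles without the lwIP headers; `led_handler` and its returned URI are hypothetical examples.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for lwIP's httpd CGI types (illustration only). */
typedef const char *(*tCGIHandler)(int iIndex, int iNumParams,
                                   char *pcParam[], char *pcValue[]);
typedef struct {
    const char *pcCGIName;    /* URI that triggers the handler, e.g. "/leds.cgi" */
    tCGIHandler pfnCGIHandler;
} tCGI;

/* Module state, as in the original file. */
static const tCGI *g_pCGIs = NULL;
static int g_iNumCGIs = 0;

static void http_set_cgi_handlers(const tCGI *cgis, int num_handlers)
{
    assert(cgis != NULL);       /* stand-in for LWIP_ASSERT */
    assert(num_handlers > 0);
    g_pCGIs = cgis;
    g_iNumCGIs = num_handlers;
}

/* A hypothetical handler: returns the URI of the page to send back. */
static const char *led_handler(int iIndex, int iNumParams,
                               char *pcParam[], char *pcValue[])
{
    (void)iIndex; (void)iNumParams; (void)pcParam; (void)pcValue;
    return "/led_state.shtml";
}

static const tCGI cgi_table[] = {
    { "/leds.cgi", led_handler },
};

/* Call once at startup (after httpd_init() in a real lwIP application). */
static void register_cgis(void)
{
    http_set_cgi_handlers(cgi_table, sizeof cgi_table / sizeof cgi_table[0]);
}
```

The table must stay valid for the lifetime of the server, since only the pointer is stored.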
# from DepthEstimation.inference import main
# main()
# print('Depth Estimation Done!')
import os
from Inpainting import main
print('inpainting started')
for i in os.listdir("Input"):
main.inpaint(i)
|
package mux
// RouteFactoryFunc is an interface to bundle related routes under shared
// route settings.
type RouteFactoryFunc func(RouteFactory)
// RouteFactory creates routes.
type RouteFactory interface {
Get(string, HandlerFunc) *Route
Post(string, HandlerFunc) *Route
Put(string, HandlerFunc) *Route
Delete(string, HandlerFunc) *Route
Group(string, RouteFactoryFunc) RouteFactory
}
|
from django.contrib import admin
from apps.portfolio.models import HomelessPortfolio
# Register your models here.
admin.site.register(HomelessPortfolio) |
Passive detection of subpixel obstacles for flight safety
Military aircraft fly below 100 ft. above ground level in support of their missions. These aircraft include fixed and rotary wing and may be manned or unmanned. Flying at these low altitudes presents a safety hazard to the aircrew and aircraft, due to the occurrences of obstacles within the aircraft's flight path. The pilot must rely on eyesight and in some cases, infrared sensors to see obstacles. Many conditions can exacerbate visibility creating a situation in which obstacles are essentially invisible, creating a safety hazard, even to an alerted aircrew. Numerous catastrophic accidents have occurred in which aircraft have collided with undetected obstacles. Accidents of this type continue to be a problem for low flying military and commercial aircraft. Unmanned Aerial Vehicles (UAVs) have the same problem, whether operating autonomously or under control of a ground operator. Boeing-SVS has designed a passive, small, low- cost (under $100k) gimbaled, infrared imaging based system with advanced obstacle detection algorithms. Obstacles are detected in the infrared band, and linear features are analyzed by innovative cellular automata based software. These algorithms perform detection and location of sub-pixel linear features. The detection of the obstacles is performed on a frame by frame basis, in real time. Processed images are presented to the aircrew on their display as color enhanced features. The system has been designed such that the detected obstacles are displayed to the aircrew in sufficient time to react and maneuver the aircraft to safety. A patent for this system is on file with the US patent office, and all material herein should be treated accordingly. |
import { ContextExpression } from './ContextExpression';
import { OclExecutionContext } from '../../OclExecutionContext';
import { OclValidationError } from '../../OclValidationError';
import { PreExpression } from '../PreExpression';
import { PostExpression } from '../PostExpression';
/**
 * The Operation Context Expression allows pre- and/or post-conditions to be defined on functions.
*
* @oclExpression context Person::kill() (pre|post)
* @oclExample
* context Person::setAge(age: number)
* pre: age > 0
*/
export class OperationContextExpression extends ContextExpression {
private fnName: any;
private returnType: any;
private preExpressions: Array<PreExpression>;
private postExpressions: Array<PostExpression>;
private params: Array<string>;
constructor(operationMetaInfo, expressions, registeredTypes) {
super();
const split = operationMetaInfo.pathName.split('::');
this.targetType = split[0];
this.fnName = split[1];
this.params = operationMetaInfo.params;
this.returnType = operationMetaInfo.returnType;
this.preExpressions = expressions.filter(expr => expr instanceof PreExpression);
this.postExpressions = expressions.filter(expr => expr instanceof PostExpression);
const actualType = registeredTypes[this.targetType];
if (actualType && typeof actualType.prototype[this.fnName] === 'function') {
const self = this;
const originalFn = actualType.prototype[this.fnName];
actualType.prototype[this.fnName] = function(...args): any {
const oclExecutionContext = new OclExecutionContext(this);
oclExecutionContext.registerTypes(registeredTypes);
const anies = (self.params || []).reduce((prev, cur, i) => {
prev[cur] = args[i];
return prev;
}, {result: undefined});
self.preExpressions.forEach(preExpression => {
preExpression.variables = anies;
const evaluationResult = preExpression.evaluate(oclExecutionContext);
if (!evaluationResult) {
throw new OclValidationError(`A precondition failed on type ${self.targetType}.`);
}
});
const result = originalFn.call(this, ...args);
anies.result = result;
self.postExpressions.forEach(postExpression => {
postExpression.variables = anies;
const evaluationResult = postExpression.evaluate(oclExecutionContext);
if (!evaluationResult) {
throw new OclValidationError(`A postcondition failed on type ${self.targetType}.`);
}
});
return result;
};
}
}
}
|
package authoring.entities;
import engine.components.presets.BottomCollision;
/**
 * Preset entity class that makes it easier for users to create this entity
 * without needing to manually add components.
* @author <NAME>(hy115)
*
*/
public class BottomLine extends InteractableEntity {
private final static String TYPE = "BottomLine";
/**
* Initialize
* @param ID
* @param name
*/
public BottomLine(int ID, String name) {
super(ID);
addDefaultComponents();
this.setName(name);
this.setPresetType(TYPE);
}
/**
* Add the default components to the player object.
*/
private void addDefaultComponents() {
this.add(new BottomCollision(this.getID()));
}
}
|
def status(self):
if not self.volume:
status = volume_status.NONE
elif self._status and self._last_status_check >= time.time() - MIN_TIME_BETWEEN_STATUS_CHECKS:
status = self._status
else:
try:
self.volume.update()
status = volume_status_map.get(self.volume.status.split(' ')[0], None)
if status == volume_status.IN_USE and self.volume.attachment_state() == 'attached':
status = volume_status.ATTACHED
if not status:
log.error("Unknown volume status: {0}. Setting status to volume_status.NONE"
.format(self.volume.status))
status = volume_status.NONE
self._status = status
self._last_status_check = time.time()
except EC2ResponseError as e:
log.error(
'Cannot retrieve status of current volume. {0}'.format(e))
status = volume_status.NONE
return status |
/*--------------------------------------------------------------*/
/* */
/* execute_redefine_world_event */
/* */
/* execute_redefine_world_event.c - creates a world object */
/* */
/* NAME */
/* execute_redefine_world_event.c - creates a world object */
/* */
/* SYNOPSIS */
/* void execute_redefine_world_event_line(world, &command_line) */
/* */
/* OPTIONS */
/* */
/* DESCRIPTION */
/* */
/* execute_redefines stratum objects based on an input file */
/* input file is a worldfile in standard format - */
/* it will be used to reset all state variable unless state variable */
/* in the input file is -9999; */
/* note, spatial structure cannot change - thus input worldfile must */
/* have the same basin,hill,zone,patch,strata IDs as current world */
/* PROGRAMMER NOTES */
/* */
/* Original code, January 15, 2003. */
/*--------------------------------------------------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rhessys.h"
void sort_patch_layers( struct patch_object *patch);
void execute_redefine_world_thin_event(struct world_object *world,
struct command_line_object *command_line,
struct date current_date,
int thintyp)
{
/*--------------------------------------------------------------*/
/* Local function definition. */
/*--------------------------------------------------------------*/
void input_new_strata_thin( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct patch_object *,
struct canopy_strata_object *,
int);
void input_new_patch_mult( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct patch_object *);
void input_new_zone_mult( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct zone_object *);
void input_new_hillslope_mult( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct hillslope_object *);
void input_new_basin_mult( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct basin_object *);
void skip_strata( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct patch_object *,
struct canopy_strata_object *);
void skip_patch( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct patch_object *);
void skip_zone( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct zone_object *);
void skip_hillslope( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct hillslope_object *);
void skip_basin( struct command_line_object *,
FILE *,
int,
struct base_station_object **,
struct default_object *,
struct basin_object *);
void compute_mean_hillslope_parameters( struct hillslope_object *);
struct canopy_strata_object *find_stratum_in_patch( int,
struct patch_object *);
struct patch_object *find_patch_in_zone( int,
struct zone_object *);
struct zone_object *find_zone_in_hillslope( int,
struct hillslope_object *);
struct hillslope_object *find_hillslope_in_basin( int,
struct basin_object *);
struct basin_object *find_basin( int,
struct world_object *);
/*--------------------------------------------------------------*/
/* Local variable definition. */
/*--------------------------------------------------------------*/
FILE *world_input_file;
int b,h,z,p,c;
int basin_ID, world_ID, hill_ID, zone_ID, patch_ID, stratum_ID;
int num_basin, num_hill, num_zone, num_patch, num_stratum;
char world_input_filename[MAXSTR];
char record[MAXSTR];
char ext[MAXSTR];
struct canopy_strata_object *stratum;
struct patch_object *patch;
struct zone_object *zone;
struct hillslope_object *hillslope;
struct basin_object *basin;
/*--------------------------------------------------------------*/
/* Try to open the world file in read mode. */
/*--------------------------------------------------------------*/
sprintf(ext,".Y%4dM%dD%dH%d",current_date.year,
current_date.month,
current_date.day,
current_date.hour);
strcpy(world_input_filename, command_line[0].world_filename);
strcat(world_input_filename, ext);
if ( (world_input_file = fopen(world_input_filename,"r")) == NULL ){
fprintf(stderr,
"FATAL ERROR: Cannot open world execute_redefine input file %s\n",
world_input_filename);
exit(EXIT_FAILURE);
} /*end if*/
printf("\n Redefine using %s", world_input_filename);
/*--------------------------------------------------------------*/
/* Read in the world ID. */
/*--------------------------------------------------------------*/
fscanf(world_input_file,"%d",&world_ID);
read_record(world_input_file, record);
/*--------------------------------------------------------------*/
/* Read in the number of basin files. */
/*--------------------------------------------------------------*/
fscanf(world_input_file,"%d",&num_basin);
read_record(world_input_file, record);
/*--------------------------------------------------------------*/
/* Construct the basins. */
/*--------------------------------------------------------------*/
for (b=0; b < num_basin; b++ ){
fscanf(world_input_file,"%d",&basin_ID);
read_record(world_input_file, record);
basin = find_basin( basin_ID,
world);
if (basin != NULL) {
input_new_basin_mult(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
basin);
fscanf(world_input_file,"%d",&num_hill);
read_record(world_input_file, record);
for ( h = 0; h < num_hill; h++){
fscanf(world_input_file,"%d",&hill_ID);
read_record(world_input_file, record);
hillslope = find_hillslope_in_basin( hill_ID,
basin);
if (hillslope != NULL) {
input_new_hillslope_mult(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
hillslope);
fscanf(world_input_file,"%d",&num_zone);
read_record(world_input_file, record);
for ( z=0; z < num_zone; z++) {
fscanf(world_input_file,"%d",&zone_ID);
read_record(world_input_file, record);
zone = find_zone_in_hillslope(zone_ID,hillslope);
if (zone != NULL) {
input_new_zone_mult(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
zone);
fscanf(world_input_file, "%d",&num_patch);
read_record(world_input_file, record);
for (p=0; p < num_patch; p++) {
fscanf(world_input_file,"%d",&patch_ID);
read_record(world_input_file, record);
patch = find_patch_in_zone(patch_ID, zone);
if (patch != NULL) {
input_new_patch_mult(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch);
fscanf(world_input_file, "%d",&num_stratum);
read_record(world_input_file, record);
for (c=0; c < num_stratum; c++) {
fscanf(world_input_file,"%d",&stratum_ID);
read_record(world_input_file, record);
stratum = find_stratum_in_patch(stratum_ID,patch);
if (stratum != NULL) {
input_new_strata_thin(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch,
stratum,
thintyp);
} /* end canopy if */
else {
skip_strata(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch,
stratum);
}
} /* end canopy loop */
/*--------------------------------------------------------------*/
/* re-sort patch layers to account for any changes in */
/* height */
/*--------------------------------------------------------------*/
sort_patch_layers(patch);
} /* end patch if */
else {
skip_patch(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch);
fscanf(world_input_file, "%d",&num_stratum);
read_record(world_input_file, record);
for (c=0; c < num_stratum; c++) {
fscanf(world_input_file,"%d",&stratum_ID);
read_record(world_input_file, record);
skip_strata(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch,
stratum);
} /* end NULL canopy loop */
} /* end NULL patch else */
} /* end patch loop */
} /* end zone if */
else {
skip_zone(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
zone);
fscanf(world_input_file, "%d",&num_patch);
read_record(world_input_file, record);
for (p=0; p < num_patch; p++) {
fscanf(world_input_file,"%d",&patch_ID);
read_record(world_input_file, record);
skip_patch(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch);
fscanf(world_input_file, "%d",&num_stratum);
read_record(world_input_file, record);
for (c=0; c < num_stratum; c++) {
fscanf(world_input_file,"%d",&stratum_ID);
read_record(world_input_file, record);
skip_strata(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch,
stratum);
} /* end NULL canopy loop */
} /* end NULL patch loop */
} /* end NULL zone else */
} /* end zone loop */
compute_mean_hillslope_parameters(hillslope);
} /* end hillslope if */
else {
skip_hillslope(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
hillslope);
fscanf(world_input_file,"%d",&num_zone);
read_record(world_input_file, record);
for ( z=0; z < num_zone; z++) {
fscanf(world_input_file,"%d",&zone_ID);
read_record(world_input_file, record);
skip_zone(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
zone);
fscanf(world_input_file, "%d",&num_patch);
read_record(world_input_file, record);
for (p=0; p < num_patch; p++) {
fscanf(world_input_file,"%d",&patch_ID);
read_record(world_input_file, record);
skip_patch(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch);
fscanf(world_input_file, "%d",&num_stratum);
read_record(world_input_file, record);
for (c=0; c < num_stratum; c++) {
fscanf(world_input_file,"%d",&stratum_ID);
read_record(world_input_file, record);
skip_strata(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch,
stratum);
} /* end NULL canopy loop */
} /* end NULL patch loop */
} /* end NULL zone loop */
} /* end NULL hillslope else */
} /* end hillslope loop */
} /* end basin if*/
else {
skip_basin(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
basin);
fscanf(world_input_file,"%d",&num_hill);
read_record(world_input_file, record);
for ( h = 0; h < num_hill; h++){
fscanf(world_input_file,"%d",&hill_ID);
read_record(world_input_file, record);
hillslope = find_hillslope_in_basin( hill_ID,
basin);
skip_hillslope(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
hillslope);
fscanf(world_input_file,"%d",&num_zone);
read_record(world_input_file, record);
for ( z=0; z < num_zone; z++) {
fscanf(world_input_file,"%d",&zone_ID);
read_record(world_input_file, record);
skip_zone(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
zone);
fscanf(world_input_file, "%d",&num_patch);
read_record(world_input_file, record);
for (p=0; p < num_patch; p++) {
fscanf(world_input_file,"%d",&patch_ID);
read_record(world_input_file, record);
skip_patch(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch);
fscanf(world_input_file, "%d",&num_stratum);
read_record(world_input_file, record);
for (c=0; c < num_stratum; c++) {
fscanf(world_input_file,"%d",&stratum_ID);
read_record(world_input_file, record);
skip_strata(command_line, world_input_file,
world[0].num_base_stations,
world[0].base_stations,
world[0].defaults,
patch,
stratum);
} /* end NULL canopy loop */
} /* end NULL patch loop */
} /* end NULL zone loop */
} /* end NULL hillslope loop */
} /* end basin else */
} /*end basin loop */
/*--------------------------------------------------------------*/
/* Close the world_input_file. */
/*--------------------------------------------------------------*/
if ( fclose(world_input_file) != 0 )
exit(EXIT_FAILURE);
return;
} /*end execute_redefine_world_event.c*/
package e2e
import (
"context"
"fmt"
"net/url"
"reflect"
"github.com/onsi/ginkgo"
"github.com/onsi/gomega"
"github.com/open-cluster-management/multicloud-operators-foundation/pkg/utils"
e2eutil "github.com/open-cluster-management/multicloud-operators-foundation/test/e2e/util"
configv1 "github.com/openshift/api/config/v1"
"k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
var _ = ginkgo.Describe("Testing ManagedCluster", func() {
ginkgo.Context("Get ManagedCluster core worker capacity", func() {
ginkgo.It("should get a core_worker successfully in status of managedcluster", func() {
gomega.Eventually(func() error {
cluster, err := clusterClient.ClusterV1().ManagedClusters().Get(context.Background(), defaultManagedCluster, metav1.GetOptions{})
if err != nil {
return err
}
capacity := cluster.Status.Capacity
if _, ok := capacity["core_worker"]; !ok {
return fmt.Errorf("Expect core_worker to be set, but got %v", capacity)
}
return nil
}, eventuallyTimeout, eventuallyInterval).ShouldNot(gomega.HaveOccurred())
})
})
ginkgo.Context("Testing Clusterca sync", func() {
ginkgo.It("Get CA from apiserver", func() {
//Only need to test this case in ocp
if !isOcp {
return
}
//Create a fake secret for apiserver
fakesecretName := "fake-server-secret"
fakeSecret, err := e2eutil.CreateFakeTlsSecret(kubeClient, fakesecretName, utils.OpenshiftConfigNamespace)
gomega.Expect(err).ToNot(gomega.HaveOccurred())
//get apiserveraddress
apiserverAddress, err := utils.GetKubeAPIServerAddress(context.TODO(), ocpClient)
gomega.Expect(err).ToNot(gomega.HaveOccurred())
//add serving secret in apiserver
url, err := url.Parse(apiserverAddress)
gomega.Expect(err).ToNot(gomega.HaveOccurred())
apiserver, err := ocpClient.ConfigV1().APIServers().Get(context.TODO(), utils.ApiserverConfigName, metav1.GetOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
newApiserver := apiserver.DeepCopy()
newApiserver.Spec.ServingCerts.NamedCertificates = []configv1.APIServerNamedServingCert{
{
Names: []string{
url.Hostname(),
},
ServingCertificate: configv1.SecretNameReference{
Name: fakesecretName,
},
},
}
newApiserver, err = ocpClient.ConfigV1().APIServers().Update(context.TODO(), newApiserver, metav1.UpdateOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Eventually(func() bool {
cluster, err := clusterClient.ClusterV1().ManagedClusters().Get(context.Background(), defaultManagedCluster, metav1.GetOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
if len(cluster.Spec.ManagedClusterClientConfigs) == 0 {
return false
}
for _, config := range cluster.Spec.ManagedClusterClientConfigs {
if config.URL != apiserverAddress {
continue
}
if reflect.DeepEqual(config.CABundle, fakeSecret.Data["tls.crt"]) {
return true
}
}
return false
}, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
//rollback apiserver and delete secret
newApiserver.Spec.ServingCerts.NamedCertificates = []configv1.APIServerNamedServingCert{}
_, err = ocpClient.ConfigV1().APIServers().Update(context.TODO(), newApiserver, metav1.UpdateOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
err = kubeClient.CoreV1().Secrets(utils.OpenshiftConfigNamespace).Delete(context.TODO(), fakesecretName, metav1.DeleteOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
})
ginkgo.It("Get CA from configmap", func() {
//Only need to test this case in ocp
if !isOcp {
return
}
configmapCa, err := utils.GetCAFromConfigMap(context.TODO(), kubeClient)
if err != nil {
if errors.IsNotFound(err) {
_, err = e2eutil.CreateFakeRootCaConfigMap(kubeClient, utils.CrtConfigmapName, utils.ConfigmapNamespace)
gomega.Expect(err).ToNot(gomega.HaveOccurred())
} else {
gomega.Expect(err).ToNot(gomega.HaveOccurred())
}
}
configmapCa, err = utils.GetCAFromConfigMap(context.TODO(), kubeClient)
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Eventually(func() bool {
cluster, err := clusterClient.ClusterV1().ManagedClusters().Get(context.Background(), defaultManagedCluster, metav1.GetOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
if len(cluster.Spec.ManagedClusterClientConfigs) == 0 {
return false
}
for _, config := range cluster.Spec.ManagedClusterClientConfigs {
if reflect.DeepEqual(config.CABundle, configmapCa) {
return true
}
}
return false
}, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
//delete configmap
err = kubeClient.CoreV1().ConfigMaps(utils.ConfigmapNamespace).Delete(context.TODO(), utils.CrtConfigmapName, metav1.DeleteOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
})
ginkgo.It("Get CA from service account", func() {
//Only need to test this case in ocp
if !isOcp {
return
}
serviceAccountCa, err := utils.GetCAFromServiceAccount(context.TODO(), kubeClient)
gomega.Expect(err).ToNot(gomega.HaveOccurred())
gomega.Eventually(func() bool {
cluster, err := clusterClient.ClusterV1().ManagedClusters().Get(context.Background(), defaultManagedCluster, metav1.GetOptions{})
gomega.Expect(err).ToNot(gomega.HaveOccurred())
if len(cluster.Spec.ManagedClusterClientConfigs) == 0 {
return false
}
for _, config := range cluster.Spec.ManagedClusterClientConfigs {
if reflect.DeepEqual(config.CABundle, serviceAccountCa) {
return true
}
}
return false
}, eventuallyTimeout, eventuallyInterval).Should(gomega.BeTrue())
})
})
})
#pragma once
#include <cctype>
#include <cstdint>
#include <functional>
#include <istream>
#include <stdexcept>
#include <string>
/**
Check that the first character is a digit and parse it as number
*/
template <typename int_ = size_t>
struct check_uint
{
int_ &n;
check_uint(int_ &n) : n(n) {}
};
template <typename int_ = size_t>
std::istream& operator>>(std::istream &io, check_uint<int_> c)
{
char ch = (char) io.get();
if (isspace(ch))
return io >> c;
if (!isdigit(ch))
throw std::runtime_error(
std::string("Expected number, but got '") + ch + "'");
io.unget();
return io >> c.n;
}
/**
Call function `f(uint64_t i, bool nl)` for every number `i` in the stream `io`
whereby `nl` tells whether it was the last number in that line.
*/
inline void
parse_uints(std::istream &io, std::function<void(uint64_t, bool)> f)
{
uint64_t i = 0;
bool
first = true,
finished = false,
newline = false;
char c;
while (io.get(c)) {
switch (c) {
case ' ':
case ',':
case '\t':
finished = true;
break;
case '\n':
case '\r':
finished = true;
newline = true;
break;
case '_':
continue;
default:
if ('0' <= c && c <= '9') {
if (finished) {
if (!first) f(i, newline);
finished = false;
newline = false;
i = 0;
}
first = false;
i *= 10;
i += decltype(i)(c - '0');
} else {
throw std::runtime_error(
std::string("Unknown character: '") + c + "'");
}
}
}
if (!first) f(i, true);
}
/**
* Creates validation bindings for the controls on this page.
*/
private void bindControls() {
initializeValidators();
usingKeyPairObservable = SWTObservables.observeSelection(usingKeyPair);
bindingContext.bindValue(usingKeyPairObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.USING_KEY_PAIR), null, null);
IViewerObservableValue keyPairSelectionObservable = ViewersObservables.observeSingleSelection(keyPairComposite
.getViewer());
bindingContext.bindValue(keyPairSelectionObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.KEY_PAIR), null, null);
ChainValidator<String> keyPairValidator = new ChainValidator<>(keyPairSelectionObservable,
usingKeyPairObservable, new ValidKeyPairValidator(AwsToolkitCore.getDefault().getCurrentAccountId()));
bindingContext.addValidationStatusProvider(keyPairValidator);
usingCnameObservable = SWTObservables.observeSelection(usingCnameButton);
bindingContext.bindValue(usingCnameObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.USING_CNAME), null, null)
.updateTargetToModel();
bindingContext.bindValue(SWTObservables.observeText(cname, SWT.Modify),
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.CNAME), null, null)
.updateTargetToModel();
bindingContext.bindValue(sslCertObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.SSL_CERTIFICATE_ID));
ChainValidator<String> chainValidator = new ChainValidator<>(
SWTObservables.observeText(cname, SWT.Modify), usingCnameObservable, new NotEmptyValidator(
"CNAME cannot be empty."), new NoInvalidNameCharactersValidator("Invalid characters in CNAME."));
bindingContext.addValidationStatusProvider(chainValidator);
ControlDecoration cnameDecoration = newControlDecoration(cname, "Enter a CNAME to launch your server");
new DecorationChangeListener(cnameDecoration, chainValidator.getValidationStatus());
bindingContext.bindValue(healthCheckURLObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.HEALTH_CHECK_URL));
bindingContext.bindValue(snsTopicObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.SNS_ENDPOINT));
bindingContext.bindValue(SWTObservables.observeSelection(incrementalDeploymentButton),
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.INCREMENTAL_DEPLOYMENT));
bindingContext.bindValue(workerQueueUrlObservable,
PojoObservables.observeValue(wizardDataModel, DeployWizardDataModel.WORKER_QUEUE_URL));
} |
/*
* Creates a new key_returned_t and prepends it to a list.
*
* Side effects:
* - updates *list to point to a new head.
*/
static key_returned_t *
_key_returned_prepend (_mongocrypt_key_broker_t *kb,
key_returned_t **list,
_mongocrypt_key_doc_t *key_doc)
{
key_returned_t *key_returned;
BSON_ASSERT (key_doc);
key_returned = bson_malloc0 (sizeof (*key_returned));
BSON_ASSERT (key_returned);
key_returned->doc = _mongocrypt_key_new ();
_mongocrypt_key_doc_copy_to (key_doc, key_returned->doc);
key_returned->next = *list;
*list = key_returned;
kb->decryptor_iter = kb->keys_returned;
return key_returned;
} |
import styled from 'styled-components';
const Layout = styled.li`
& {
list-style: none !important;
}
.card {
margin-bottom: 1rem;
padding: 1rem 1rem 2rem 2rem;
background: ${props => props.theme.colors.white};
border: 1px solid ${props => props.theme.colors.mostDarkestWhite};
box-shadow: 0 0.5rem 1rem ${props => props.theme.colors.shadow};
border-radius: 0.5rem;
}
.card {
color: ${props => props.theme.colors.lightInk};
}
.card--title {
font-weight: 400;
color: ${props => props.theme.colors.darkerBlue};
}
.card--header a {
text-decoration: none;
}
.card--title > div {
transform: rotate(15deg);
width: 2.4rem;
height: 2.4rem;
margin-left: 0.5rem;
position: relative;
display: inline-block;
}
.card--description {
padding-top: 0.5rem;
}
ul,
li {
list-style: none !important;
}
ul > li {
margin-top: 1rem;
display: inline-block;
}
.card--tag {
padding: 0.4rem 0.7rem 0.5rem;
margin-right: 0.5rem;
border-radius: 0.5rem;
background: ${props => props.theme.colors.blue};
color: ${props => props.theme.colors.darkerWhite};
}
.card--tag-match {
padding: 0.4rem 0.7rem 0.5rem;
margin-right: 0.5rem;
border-radius: 0.5rem;
background: ${props => props.theme.colors.yellow};
color: ${props => props.theme.colors.white};
}
ul > li > span span:first-child {
font-size: 1.4rem;
padding-right: 0.2rem;
}
`;
export default Layout;
/**
* Runs the sanity test for getMinChildIdx() given in the write-up.
*/
@Test
public void testGetMinChildIdxSanity() {
List<Integer> startingList = Arrays.asList(new Integer[]{5,3,4});
sanityIntegerHeap.list = new ArrayList<>(startingList);
assertEquals(1, sanityIntegerHeap.getMinChildIdx(0));
assertEquals(new ArrayList<>(startingList), sanityIntegerHeap.list);
} |
/**
* Appends a string representation of the first argument in the radix specified by the second argument.
*
* @param value the number to append
* @param base the radix that the specified value should be converted to before append
* @return this
* @throws BufferOverflowException if the relative append operation exceeds the underlying buffer's capacity
* @throws IllegalArgumentException if the specified arguments are illegal
* @throws IllegalStateException if the underlying buffer was released
*/
@NotNull
default B appendBase(long value, int base)
throws BufferOverflowException, IllegalArgumentException, IllegalStateException, IndexOutOfBoundsException {
BytesInternal.append(this, value, base);
return (B) this;
} |
A Transfer Model Based on Supervised Multi-Layer Dictionary Learning for Brain Tumor MRI Image Recognition
Artificial intelligence (AI) is an effective technology for automatic brain tumor MRI image recognition. The training of an AI model requires a large number of labeled data, but medical data need to be labeled by professional clinicians, which makes data collection complex and expensive. Moreover, a traditional AI model requires that the training data and the test data be independent and identically distributed. To solve this problem, we propose a transfer model based on supervised multi-layer dictionary learning (TSMDL) for brain tumor MRI image recognition in this paper. With the help of knowledge learned from related domains, the goal of this model is to solve transfer learning tasks in which the target domain has only a small number of labeled samples. Based on the framework of multi-layer dictionary learning, the proposed model learns a common shared dictionary for the source and target domains in each layer to explore the intrinsic connections and shared information between the domains. At the same time, by making full use of the label information of the samples, a Laplacian regularization term is introduced to make the dictionary codings of same-class samples as close as possible and the dictionary codings of different-class samples as different as possible. Recognition experiments on the brain MRI image datasets REMBRANDT and Figshare show that the model performs better than competitive state-of-the-art methods.
INTRODUCTION
Brain tumor is a common neurological disease. Its incidence rate has reached 1.34 per 100,000 in China, and over 200,000 patients are diagnosed with primary or metastatic brain tumors in the United States every year. Among systemic tumors, the incidence of brain tumors is second only to that of tumors of the stomach, uterus, breast, and esophagus, accounting for approximately 2% of systemic tumors, and the proportion of deaths has exceeded 2% (Sun et al., 2019; Sung et al., 2021). According to surveys, the incidence rate of brain tumors is high among children, with incidence peaking in young adults aged 20 to 50. Among childhood malignancies, brain tumors are the second most common, after leukemia. Brain tumors not only cause physical and mental suffering to patients but also place a heavy financial burden on their families. As a standard technique for non-invasive brain tumor diagnosis, magnetic resonance imaging (MRI) is an essential component of medical diagnosis and treatment. It uses the magnetic resonance phenomenon to acquire electromagnetic signals from the brain, from which brain information is reconstructed into a validated anatomical image. MRI thus increases the diagnostic ability of clinicians.
The wide application of MRI mainly benefits from the following characteristics (Amin et al., 2017; Bahadure et al., 2017): (1) no bony artifacts, good soft-tissue resolution, and clear visualization of soft-tissue structures; (2) multi-plane and multi-parameter imaging, facilitating the acquisition of diagnostic information for determining the various characteristics of a lesion; (3) no ionizing radiation damage; (4) different profiles can be selected by adjusting the magnetic field, resulting in three-dimensional images from different angles, which facilitates localization of the lesion site; (5) a flow-void effect that allows direct visualization of vascular structures without an external contrast agent, facilitating observation of the relationship between vessels and the lesion. However, it is time consuming for radiologists to interpret the large number of MRI images and detect early brain tumors. These medical images need to be analyzed by doctors one by one, with the condition determined according to their experience.
Artificial intelligence (AI) technology, in particular medical image processing, is an effective way to address this challenge (Zeng et al., 2018; Sajjad et al., 2019; Mittal et al., 2019; Ge et al., 2020; Hua et al., 2021). In brain disease diagnosis, image features are first extracted, and the extracted features are then classified to complete image classification and recognition. For example, Ismael and Abdel-Qader (2018) used Gabor filters and the discrete wavelet transform to extract statistical features for brain tumor classification; the method took the segmented tumor as input and used a multi-layer perceptron (MLP) as the classifier. Liu et al. (2012) proposed a multi-level classification method for meningiomas, in which meningiomas are divided into three levels according to tumor type and growth rate; in the classification step, the authors used a multiple logistic regression model. Mallick et al. (2019) proposed a brain MRI image classification method based on deep neural networks; using encoding and decoding techniques, it mainly relied on an autoencoder to extract features and classify brain images. To assist radiologists in MRI classification, Sachdeva et al. (2016) proposed a semi-automated multi-stage classification method. The first stage was a content-based outline system for detecting tumor regions, which could also be indicated manually by the physician; these regions are called segmented regions of interest (SROI). Then, 71 texture and intensity features were extracted from the SROI regions and optimized by a genetic algorithm. In the classification stage, a support vector machine (SVM) and an artificial neural network were used. Nikam and Shinde (2013) proposed a brain MRI image classification method based on distance learning.
Firstly, the images were preprocessed with techniques such as gray-level transformation, median filtering, and high-pass filtering to remove noise from the MRI brain images, and threshold segmentation was used to segment them. Features were then extracted using correlation, entropy, contrast, homogeneity, and energy. Finally, a Euclidean distance classifier was used for classification. Ghassemi et al. (2020) proposed a CNN model for multi-class brain tumor classification: the network was first pre-trained as the discriminator of a generative adversarial network to extract image features, and a softmax classifier was then used to distinguish the three kinds of tumors. The model consists of six layers and can be combined with various data augmentation techniques. Kiranmayee et al. (2016) proposed a brain MRI classification method using an SVM: in the data processing stage, an adaptive median filter was used to remove noise, and then the watershed, fuzzy clustering, and threshold methods were used to segment the MRI brain images; a kernel SVM was used as the classifier.
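The "Euclidean distance classifier" mentioned above is essentially a nearest-centroid rule. A minimal sketch follows (illustrative only; the texture features used by Nikam and Shinde are not reproduced here, and the function names are ours):

```python
import numpy as np

def fit_centroids(X, y):
    """Store the mean feature vector (centroid) of each class."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_nearest_centroid(X, classes, centroids):
    """Assign each sample to the class whose centroid is closest
    in Euclidean distance."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

For brain MRI classification, the rows of X would be the per-image feature vectors (correlation, entropy, contrast, homogeneity, energy).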
Dictionary learning methods are widely used to solve various problems in computer vision and image analysis (Ni et al., 2020). Dictionary learning aims to find a suitable dictionary for the input data and transform the data into a sparse representation, so as to mine useful features, simplify the learning task, and reduce model complexity. A kernel sparse representation was developed in Chen et al. (2017). It contained three key steps for multi-label brain tumor segmentation: component analysis-split for dictionary learning initialization, kernel dictionary learning and kernel sparse coding, and a graph-cut method for image segmentation. A system combining an adaptive type-2 fuzzy system and dictionary learning was proposed in Ghasemi et al. (2020), in which the sparse coding and dictionary learning steps were executed alternately, and the fuzzy membership functions of the type-2 fuzzy system were used to represent model uncertainty and improve the sparse representation. A learning method combining discriminative sub-dictionary learning and projective dictionary pair learning was developed for classifying proton magnetic resonance spectroscopy of brain glioma tumors (Adebileje et al., 2017).
AI methods mainly extract brain image features automatically, which requires a large number of labeled datasets to learn the potential connections in the data. But in the medical field, because patient information is confidential and labeling requires professional expertise, medical data need to be labeled by professional clinicians, and data collection is complex and expensive. The lack of labeled training data is one of the bottlenecks that restrict the development of medical image analysis. In addition, traditional AI methods require the training data and test data to be independent and identically distributed. Transfer learning relaxes this restriction (Ni et al., 2018b; Jiang et al., 2020; Jiang et al., 2021): it applies the knowledge or patterns learned in a related domain (the source domain) to another, target domain, utilizes the information shared between source domain samples and the target domain, and finally builds a model adapted to the target domain.
To solve this problem, this paper focuses on reducing the distribution differences between the source and target domains. Through feature mapping of the source and target domain samples, source domain knowledge can be transferred to target domain learning. Because dictionary learning can exploit the essential characteristics of the data, this paper uses multi-layer dictionary learning (MDL) in transfer learning to exploit the shared knowledge between the source and target domains. MDL first learns the dictionary and sparse features of the first layer on the original samples, then learns the dictionary and sparse features of the second layer from the sparse features of the first layer, and so on, layer by layer, finally obtaining deep dictionaries and sparse features. New test data can then be encoded by the multi-layer dictionaries to obtain the final classification results. According to the differences in domain and task, transfer learning is divided into feature transfer, sample transfer, and parameter transfer. In this paper, the target and source domains are images, and the task is to train on the images, extract features, and classify different types of images, so this work belongs to the parameter transfer mode.
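The layer-by-layer procedure can be sketched as follows. This is a minimal illustration with an ISTA-based sparse coding step and a least-squares dictionary update; TSMDL's shared-dictionary and Laplacian terms are omitted, and all function names are ours:

```python
import numpy as np

def soft_threshold(A, t):
    # proximal operator of the l1 norm
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def learn_layer(X, n_atoms, lam=0.2, n_iter=20, seed=0):
    """One layer: alternate ISTA sparse coding of Z with a
    ridge-regularized least-squares update of the dictionary D."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    Z = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_iter):
        step = 1.0 / max(np.linalg.norm(D.T @ D, 2), 1e-12)
        for _ in range(10):  # sparse coding with D fixed
            Z = soft_threshold(Z - step * (D.T @ (D @ Z - X)), lam * step)
        # dictionary update with Z fixed, then renormalize atoms
        D = X @ Z.T @ np.linalg.inv(Z @ Z.T + 1e-6 * np.eye(n_atoms))
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    # final coding pass so Z matches the final dictionary
    step = 1.0 / max(np.linalg.norm(D.T @ D, 2), 1e-12)
    Z = np.zeros_like(Z)
    for _ in range(50):
        Z = soft_threshold(Z - step * (D.T @ (D @ Z - X)), lam * step)
    return D, Z

def multilayer_dictionary(X, layer_sizes, lam=0.2):
    """Greedy layer-wise learning: layer k is trained on the sparse
    codes produced by layer k-1 (layer 1 is trained on the raw data)."""
    dictionaries, codes, inp = [], [], X
    for n_atoms in layer_sizes:
        D, Z = learn_layer(inp, n_atoms, lam)
        dictionaries.append(D)
        codes.append(Z)
        inp = Z  # feed sparse codes to the next layer
    return dictionaries, codes
```

The last layer's codes serve as the deep features on which a classifier is trained; at test time, samples are encoded through the same stack of dictionaries.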
The advantages of this algorithm are as follows: (1) based on multi-layer learning, multi-layer dictionaries are obtained, and the discriminability of the sparse representation coefficients can be enhanced in layer-by-layer dictionary learning; (2) through multilayer shared dictionary learning, the sample reconstructions of the source and target domains are constrained layer by layer, so as to minimize the sample reconstruction error in both domains; (3) by utilizing the label information, a Laplacian regularization term is introduced so that the sparse coding of samples in the same class is as close as possible, while the sparse coding of samples in different classes is as different as possible. At the same time, a classification error term is introduced in the last layer of the proposed model to improve its discriminative performance; (4) the recognition experiments on the brain MRI image datasets REMBRANDT (Clark et al., 2013) and Figshare (Cheng et al., 2016) show that the proposed model achieves satisfactory classification performance in terms of accuracy, precision, F1-score, and recall.
The rest of the paper is organized as follows: the related work is introduced in section "Backgrounds." The proposed method is given in section "Proposed Method", and experiments are performed in section "Experiment." Finally, conclusion and future work are summarized in section "Conclusion."
BACKGROUNDS

Dictionary Learning
Dictionary learning methods can basically be divided into unsupervised and supervised dictionary learning. Unsupervised dictionary learning does not make use of sample label information. Supervised dictionary learning makes use of sample label information and pays more attention to the discriminative ability of the sparse representation coefficients.
LC-KSVD (Jiang et al., 2013) is a famous supervised dictionary learning algorithm based on K-SVD. It introduces the classification error of a linear classifier into the objective function, learning the representation and classification ability of the dictionary at the same time. Its objective function is

min_{D,W,Z} ||X − DZ||_F^2 + β||H − WZ||_F^2  s.t. ||z_i||_0 ≤ T, ∀i  (1)

where Z is the sparse representation coefficient, W is the parameter of the linear classifier, and H is the label matrix of the training data. To solve Eq. (1), the first two terms are combined and Eq. (1) is rewritten as

min_{D,W,Z} ||(X; √β H) − (D; √β W) Z||_F^2  s.t. ||z_i||_0 ≤ T, ∀i  (2)

Eq. (2) can be solved by using an iterative strategy. When W is fixed, the problem over <D,Z> takes the same form as K-SVD, and it can therefore be solved using K-SVD. When D and Z are fixed, Eq. (2) reduces to a simple linear problem that can be solved by linear methods.
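The stacking trick behind Eq. (2) can be illustrated with a short numpy sketch. This is a hedged simplification: ridge-regularized codes replace K-SVD's l0-constrained coding step, and the function name is ours, not the paper's.

```python
import numpy as np

def dksvd_step(X, H, D, W, beta=1.0, lam=0.1):
    """One alternation of the stacking trick from Eq. (2): the
    reconstruction term ||X - DZ|| and classification term beta*||H - WZ||
    are merged into a single least-squares system over Z, after which D
    and W are re-fit with Z fixed. Ridge coding stands in for OMP/K-SVD."""
    # Stack data with labels and dictionary with classifier (Eq. (2) form).
    Xs = np.vstack([X, np.sqrt(beta) * H])
    Ds = np.vstack([D, np.sqrt(beta) * W])
    # Sparse-coding stand-in: ridge regression instead of the l0 step.
    Z = np.linalg.solve(Ds.T @ Ds + lam * np.eye(Ds.shape[1]), Ds.T @ Xs)
    # Least-squares updates of D and W with Z fixed.
    G = np.linalg.pinv(Z @ Z.T)
    D = X @ Z.T @ G
    W = H @ Z.T @ G
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit-norm atoms
    return D, W, Z
```

Iterating `dksvd_step` a few times mimics the alternation between <D,Z> and W described above.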
Multi-Layer Dictionary Learning
With the development of deep learning, researchers have found that the deeper the structure of a neural network, the better and more accurate the representation. MDL (also known as deep dictionary learning) borrows this idea of deep learning and applies a "deep structure" to layer-by-layer dictionary learning (Song et al., 2019; Gu et al., 2020). The dictionary and sparse representation obtained by the traditional single-layer dictionary learning method are shallow, which is not conducive to recognition and classification tasks when the data dimension is too high or the number of samples is too large. Singhal et al. (2017) proposed a deep dictionary learning model, which uses the idea of deep learning to learn a multi-level dictionary and the deep features of the original samples. As an example, two-layer dictionary learning is illustrated in Figure 1. D_1 and D_2 are the dictionaries learned in the first and second layers, and Z_2 is the sparse coefficient learned in the second layer. The sample X can be represented as X = D_1 Z_1 = D_1 D_2 Z_2, where the sparse coding learned in the first layer is Z_1 = D_2 Z_2. Specifically, the first layer is solved as a single layer of dictionary learning to obtain the feature Z_1 on dictionary D_1; Z_1 is then used as the input to the second layer, which is also solved as a single layer of dictionary learning to obtain the feature Z_2. New test data can be encoded by the learned D_1 and D_2. In this way, after completing the L-layer dictionary learning, the final dictionary and sparse representation are obtained as D_L and Z_L. In this case, the sample X can be represented as X = D_1 D_2 · · · D_L Z_L, and the dictionaries in the L layers and the sparse coding can be solved layer by layer, each layer as a single-layer dictionary learning problem.

FIGURE 1 | The schematic diagram of two-layer dictionary learning.

Frontiers in Neuroscience | www.frontiersin.org

PROPOSED METHOD
Objective Function
We assume that there is a corresponding association between the source and target domains in transfer learning. From this point, based on the framework of MDL, we try to learn a common shared dictionary between the source and target domains to exploit the knowledge shared among related domains. At the same time, by making full use of the label information of the samples, a classification error term is introduced in the last layer of the multi-layer dictionary, which makes the sparse representation of the target domain more discriminative. According to this idea, we propose a transfer model based on supervised multi-layer dictionary learning (TSMDL). Its objective function, Eq. (5), sums over all L layers the reconstruction error terms and Laplacian regularization terms of the source and target domains and adds a classification error term in the last layer. The Laplacian weight matrices are defined as

(P_C^(·))_{i,j} = 1, if samples (x_i^(·), x_j^(·)) belong to the same class; 0, otherwise  (6)

(P_M^(·))_{i,j} = 1, if samples (x_i^(·), x_j^(·)) belong to different classes; 0, otherwise  (7)

where (·) means s or t.
We explain the above Eq. (5) as follows: 1. The first two terms ||X^s − D_1 Z_1^s||_F^2 and ||X^t − D_1 Z_1^t||_F^2 are the reconstruction error terms of the source and target domains in the first layer. 2. The third and fourth terms are the Laplacian regularization terms of the source domain in the first layer, which, respectively, constrain the dictionary codes of the same class in the source domain to be as close as possible, and the dictionary codes of different classes to be as different as possible. The fifth and sixth terms are the Laplacian regularization terms of the target domain in the first layer; similarly to the third and fourth terms, their goal is to constrain the dictionary codes of the same class in the target domain to be as close as possible, and the dictionary codes of different classes to be as different as possible. 3. Following the generation rules for the first six terms, the corresponding reconstruction error terms and Laplacian regularization terms for the source and target domains are constructed for layers 2 to L.
4. Σ_{c=1}^{C_s} f(Z_L^s, y_c^s, w_c^s, b_c^s) and Σ_{c=1}^{C_t} f(Z_L^t, y_c^t, w_c^t, b_c^t) are the classification error terms for the last layer of the source domain and target domain, respectively. Their goal is to improve the discriminative ability of the model. In this study, we use an SVM multi-class classifier with parameters w_c^(·) and b_c^(·). Again, we simplify the function above and obtain the compact objective of Eq. (9).
Optimization
We use the alternating optimization approach to solve Eq. (9). The parameters to be solved include D_1, P_1^C, P_1^M, Z_1, ..., D_L, P_L^C, P_L^M, Z_L, w and b. In the following, we divide the solution of these variables into three parts.

a. Update parameters D_1, P_1^C, P_1^M, Z_1, ..., D_L, P_L^C and P_L^M: First, we update the parameters D_1, P_1^C, P_1^M and Z_1 in the first layer. When the other parameters are fixed, the objective function of TSMDL reduces to a subproblem over the first layer. Further, with all parameters except D_1 fixed, the optimal value of D_1 can, following Boyd et al. (2011), be computed by the alternating direction method of multipliers. Then the Laplacian matrices P_1^C and P_1^M can be computed according to Eqs. (6) and (7). The optimal value of Z_1 can be obtained in closed form by setting the derivative of Eq. (8) to zero. After obtaining D_l, the optimal value of Z_l (2 ≤ l ≤ L − 1) can be obtained in the same way.

b. Update parameter Z_L: When the other parameters are fixed, the objective function of TSMDL related to Z_L is given by Eq. (15). Let z_L^i (i = 1, 2, ..., N) be the ith column of Z_L; we rewrite Eq. (15) with respect to z_L^i as Eq. (16). In this study, we use the standard L1-SVM for the term f(z_L^i, y_c^i, w_c, b_c); thus we can set y_c^i = 1 if the class label is c and y_c^i = −1 otherwise. In this case, the optimal value of z_L^i can be computed as a least squares problem.

c. Update parameters w and b: When the other parameters are fixed, the objective function of TSMDL related to w and b, Eq. (17), is a standard SVM training problem. Obviously, Eq. (17) can be solved by various SVM solvers.
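The Laplacian-regularized coding step in part (a) admits a closed form. The sketch below solves a simplified version of that subproblem (single domain, single layer; the function name and the exact regularizer weights are our assumptions, not the paper's):

```python
import numpy as np

def laplacian_code_update(X, D, labels, alpha=0.5, lam=0.1):
    """Closed-form code update for the simplified subproblem
        min_Z ||X - D Z||_F^2 + lam ||Z||_F^2 + alpha tr(Z L Z^T),
    where L is the graph Laplacian of the same-class affinity of Eq. (6).
    Setting the gradient to zero gives the Sylvester equation
        (D^T D + lam I) Z + alpha Z L = D^T X,
    solved here by vectorisation (fine for small problems)."""
    P = (labels[:, None] == labels[None, :]).astype(float)  # Eq. (6) affinity
    L = np.diag(P.sum(axis=1)) - P                          # graph Laplacian
    A = D.T @ D + lam * np.eye(D.shape[1])
    k, n = D.shape[1], X.shape[1]
    # vec(A Z) = (I ⊗ A) vec(Z), vec(Z L) = (L^T ⊗ I) vec(Z), column-major vec.
    M = np.kron(np.eye(n), A) + np.kron(alpha * L.T, np.eye(k))
    z = np.linalg.solve(M, (D.T @ X).reshape(-1, order="F"))
    return z.reshape(k, n, order="F")
```

Pulling same-class codes together is exactly what the tr(Z L Z^T) term does: it penalizes differences between codes of samples connected in the class graph.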
We show the optimization procedure of TSMDL in algorithm 1.
Input: Training data matrix X, parameters α_l, β_l and λ_l, ∀l. 1: Initialize D using the K-SVD algorithm on each class; initialize P using principal component analysis (PCA).
Learning a Classifier
We compute the layer-wise encoding operators for l = 1, 2, ..., L. For a new test sample x_new, we compute its sparse coding z_new by encoding x_new with the learned dictionaries of layers 1 through L in turn. Finally, we can use the following formulation to predict the class label of x_new:

label(x_new) = argmax_c (w_c^T z_new + b_c)
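As a sketch of this prediction rule (with ridge coding per layer standing in for the paper's sparse coding; the function name is ours):

```python
import numpy as np

def predict_label(x_new, dicts, W, b, lam=0.1):
    """Encode a test sample through the learned dictionaries layer by
    layer (ridge coding per layer as a stand-in for sparse coding),
    then apply the last-layer linear classifier:
        label = argmax_c (w_c^T z_new + b_c)."""
    z = x_new
    for D in dicts:
        z = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ z)
    return int(np.argmax(W @ z + b))
```

Here `dicts` is the list [D_1, ..., D_L] learned during training, and `W`, `b` hold the per-class classifier parameters stacked row-wise.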
EXPERIMENT

Experiment Settings
The datasets used in the study are taken from REMBRANDT (Clark et al., 2013) and Figshare (Cheng et al., 2016). We use the wavelet transform and the gray level co-occurrence matrix (GLCM) method for feature extraction (Mohankumar, 2016). Each image is represented by a 540-dimensional feature vector.
In the experiment, we compare our model with LC-KSVD (Jiang et al., 2013), SRC (Wright et al., 2009), CRC (Zhang et al., 2011), HFA (Long et al., 2013), KMA (Tuia and Camps-Valls, 2016), and DDTML (Ni et al., 2018a). Following the authors, all parameters in the comparative methods are kept at their default settings. The parameters β, λ_1, and λ in TSMDL are selected from the grid {0.01, 0.05, 0.1, ..., 2}. The number of layers is set in {3, 4, 5}, and the TSMDL model is accordingly named TSMDL-3, TSMDL-4, and TSMDL-5, respectively. The sizes of the dictionaries are 500, 450, 400, 350, and 300, corresponding to layer 1 to layer 5, respectively. To ensure the stability and effectiveness of the experimental results, we run each task 10 times for the proposed model and all comparative methods. All the methods are implemented in MATLAB, and the environment used in the experiments is a computer with an Intel Core i5-3317U 1.70 GHz CPU and 16 GB RAM.
Experiment Results
In this subsection, we present the effect of TSMDL on the T1 and T2 tasks. We summarize the performance of all comparative methods in terms of accuracy, precision, F1-score, and recall. The experiment results are shown in Figures 3-6, respectively. According to Figures 3-6, we can draw the following conclusions: in terms of accuracy, precision, F1-score, and recall, the proposed TSMDL achieves the best results. In addition, the performance of TSMDL-5 is better than that of TSMDL-3 and TSMDL-4. This indicates that the multi-layer framework of dictionary learning can exploit the intrinsic structure of the data samples and can build a relationship between the source and target domains. Thus, TSMDL is suitable for the application of brain tumor MRI image recognition.
In the experiments, all algorithms except LC-KSVD, SRC, and CRC are transfer learning-based classification methods. Their results show that the transfer learning strategy is helpful for brain tumor MRI image classification in the target domain: the classification knowledge in the source domain can be effectively transferred to the target domain to help it achieve better classification results.
The proposed TSMDL is clearly superior to the other transfer learning methods, which shows that multi-layer transfer dictionary learning can faithfully reconstruct the brain MRI images of the source and target domains and reduce the distribution difference between domains. Thus, it can strengthen the domain adaptation between the source and target domains in the sparse representation space. The reason is that TSMDL is based on MDL; it can learn a more complex and accurate dictionary to represent the original data and obtain more discriminative representation coefficients. In addition, TSMDL is a supervised learning model in which the label information is exploited, so TSMDL can achieve higher discrimination performance.
CONCLUSION
With the popularity of MRI equipment, a large number of new MRI brain images emerge, but obtaining labeled data is very time-consuming and expensive. Therefore, the goal of this paper is to use a large number of labeled data from the source domain to learn a classifier with strong generalization ability, and to classify the target domain with only a small number of labeled samples. To this end, based on the MDL framework, we learn a common dictionary in each layer of the network to minimize the sample reconstruction errors of the source and target domains. At the same time, a Laplacian regularization term is introduced in each layer of the network to make the sparse coding of samples in the same class as close as possible, while the sparse coding of samples in different classes is as different as possible. The experimental results on the brain MRI image datasets REMBRANDT and Figshare show that our model outperforms state-of-the-art methods. Future work will include studying the effect of using unlabeled samples in the target domain during training, and other relevant problems such as large-scale and online adaptation of dictionaries.
DATA AVAILABILITY STATEMENT
Publicly available datasets were analyzed in this study. This data can be found here:
AUTHOR CONTRIBUTIONS
YG developed the theoretical framework and model in this work and drafted the manuscript. YG and KL implemented the algorithm and performed the experiments and result analysis. Both authors contributed to the article and approved the submitted version.
/**
* This class contains a couple of static methods for converting
* between Fahrenheit and Celsius. The methods are mapped to
 * EL functions in the book examples TLD file.
*
* @author Hans Bergsten, Gefion software <[email protected]>
* @version 1.0
*/
public class TempConverter {
/**
* Main method to test the other methods.
*/
public static void main (String[] args) throws Exception {
System.out.println("20 C is " + toFahrenheit(20) + " F");
System.out.println("68 F is " + toCelsius(68) + " C");
}
public static double toCelsius(double fahrenheit) {
return (fahrenheit - 32) * 5 / 9;
}
public static double toFahrenheit(double celsius) {
return celsius * 9 / 5 + 32;
}
} |
Homosexual Activist Admits True Purpose of Battle is to Destroy Marriage
Even knowing that there are radicals in all movements doesn’t lessen the startling admission made recently by lesbian journalist Masha Gessen. On a radio show she actually admits that homosexual activists are lying about their radical political agenda. She says that they don’t want access to the institution of marriage; they want to radically redefine and eventually eliminate it.
Here is what she recently said on a radio interview:
“It’s a no-brainer that (homosexual activists) should have the right to marry, but I also think equally that it’s a no-brainer that the institution of marriage should not exist. …(F)ighting for gay marriage generally involves lying about what we are going to do with marriage when we get there — because we lie that the institution of marriage is not going to change, and that is a lie. The institution of marriage is going to change, and it should change. And again, I don’t think it should exist. And I don’t like taking part in creating fictions about my life. That’s sort of not what I had in mind when I came out thirty years ago. I have three kids who have five parents, more or less, and I don’t see why they shouldn’t have five parents legally… I met my new partner, and she had just had a baby, and that baby’s biological father is my brother, and my daughter’s biological father is a man who lives in Russia, and my adopted son also considers him his father. So the five parents break down into two groups of three… And really, I would like to live in a legal system that is capable of reflecting that reality, and I don’t think that’s compatible with the institution of marriage.” (Source: http://www.abc.net.au/radionational/programs/lifematters/why-get-married/4058506)
For quite some time, the defenders of natural marriage have attempted to point out that the true agenda behind the demands of homosexual activist organizations is not marriage equality; it is the total unraveling of marriage and the uprooting of traditional values from society. (This will ultimately include efforts to silence and punish some churches that openly adhere to their religious teachings about marriage and sexual morality.)
While few have been as vocal as this lesbian activist was in this interview, we do have numerical examples proving her point. When given the opportunity to marry after laws have been struck down, relatively small percentages of homosexuals actually bother to marry compared to their heterosexual counterparts. This raises questions about the true need to unravel marriage for the “fair” extension of its benefits. Only 12 percent of homosexuals in the Netherlands marry compared to 86 percent of their heterosexual peers. Less than 20 percent of same-sex couples already living together in California married when given the chance in 2008. In contrast, 91 percent of heterosexual couples in California who are living together are married.
Clearly this is about cultural change and tearing down the traditional family ethic, since it seems that most homosexuals living together neither need nor desire to marry, though they do desire to radically change marriage.
Gays and lesbians are free to live as they choose, and we live in a society which roundly applauds them doing so like never before in our history, but they do not have the right to rewrite marriage for all of society. |
Context-oriented programming: beyond layers
While many software systems today have to be aware of the context in which they are executing, there is still little support for structuring a program with respect to context. A first step towards better context-orientation was the introduction of method layers. This paper proposes two additional language concepts, namely the implicit activation of method layers, and the introduction of dynamic variables. |
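To make the abstract's notion of method layers concrete, here is a hypothetical Python sketch of explicitly activated layers — the baseline that the paper's two proposals (implicit activation and dynamic variables) extend. All names are illustrative, not from the paper.

```python
from contextlib import contextmanager

# A layer carries partial method definitions that override the base
# behaviour of a function while the layer is active.
_active_layers = []

class Layer:
    def __init__(self, name):
        self.name = name
        self.methods = {}   # method name -> partial definition

@contextmanager
def with_layer(layer):
    """Explicit, dynamically scoped layer activation."""
    _active_layers.append(layer)
    try:
        yield
    finally:
        _active_layers.pop()

def layered(fn):
    """Dispatch to the most recently activated layer that redefines fn."""
    def dispatch(*args, **kwargs):
        for layer in reversed(_active_layers):
            if fn.__name__ in layer.methods:
                return layer.methods[fn.__name__](*args, **kwargs)
        return fn(*args, **kwargs)
    return dispatch

quiet = Layer("quiet")

@layered
def greet(name):
    return f"Hello, {name}!"

quiet.methods["greet"] = lambda name: name   # partial method for the layer
```

With the `quiet` layer active, `greet("Ada")` returns just `"Ada"`; outside it, the base definition runs. Implicit activation would replace the explicit `with_layer` block with a predicate over the current context.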
/**
* Class to run a CallerInfoAsyncQuery in a separate thread, with
* its own Looper. We cannot use the main Looper because on the
 * 1st quit the thread is marked dead, i.e., no further test can use
 * it. Also there is no way to inject a Looper instance in the
* query, so we have to use a thread with its own looper.
*/
private class QueryRunner extends Thread
implements CallerInfoAsyncQuery.OnQueryCompleteListener {
private Looper mLooper;
private String mNumber;
private boolean mAsyncCompleted;
public QueryRunner(String number) {
super();
mNumber = number;
}
// Run the query in the thread, wait for completion.
public void runAndCheckCompletion() throws InterruptedException {
start();
join();
assertTrue(mAsyncCompleted);
}
@Override
public void run() {
Looper.prepare();
mLooper = Looper.myLooper();
mAsyncCompleted = false;
// The query will pick the thread local looper we've just prepared.
CallerInfoAsyncQuery.startQuery(kToken, mContext, mNumber, this, null);
mLooper.loop();
}
// Quit the Looper on the 1st callback
// (EVENT_EMERGENCY_NUMBER). There is another message
// (EVENT_END_OF_QUEUE) that will never be delivered because
// the test has exited. The corresponding stack trace
// "Handler{xxxxx} sending message to a Handler on a dead
// thread" can be ignored.
public void onQueryComplete(int token, Object cookie, CallerInfo info) {
mAsyncCompleted = true;
mInfo = info;
mLooper.quit();
}
} |
package repository_manage
import (
"github.com/anden007/afocus-godf/src/interfaces"
"github.com/anden007/afocus-godf/src/model/model_manage"
"github.com/google/uuid"
)
type DepartmentRepository struct{}
func NewDepartmentRepository() *DepartmentRepository {
instance := new(DepartmentRepository)
return instance
}
func (m *DepartmentRepository) GetByParentId(parentId uuid.UUID) (result []model_manage.Department, err error) {
db := interfaces.DI().GetDataBase()
if parentId == uuid.Nil {
err = db.GetDB().Model(&model_manage.Department{}).Where("parent_id is null or parent_id = ?", uuid.Nil).Preload("Parent").Find(&result).Error
} else {
err = db.GetDB().Model(&model_manage.Department{}).Where("parent_id = ?", parentId).Preload("Parent").Find(&result).Error
}
return
}
func (m *DepartmentRepository) Add(entity model_manage.Department) (err error) {
db := interfaces.DI().GetDataBase()
	if entity.ParentId != uuid.Nil {
		err = db.GetDB().Model(model_manage.Department{}).Where(model_manage.Department{Id: entity.ParentId}).Updates(model_manage.Department{IsParent: true}).Error
		if err != nil {
			// Abort instead of silently overwriting the error below.
			return
		}
	}
	err = db.GetDB().Create(&entity).Error
return
}
func (m *DepartmentRepository) Edit(entity model_manage.Department) (err error) {
db := interfaces.DI().GetDataBase()
	if entity.ParentId != uuid.Nil {
		err = db.GetDB().Model(model_manage.Department{}).Where(model_manage.Department{Id: entity.ParentId}).Updates(model_manage.Department{IsParent: true}).Error
		if err != nil {
			// Abort instead of silently overwriting the error below.
			return
		}
	}
	err = db.GetDB().Save(&entity).Error
return
}
func (m *DepartmentRepository) DelByIds(ids []uuid.UUID) (err error) {
db := interfaces.DI().GetDataBase()
err = db.GetDB().Delete(model_manage.Department{}, "id in (?)", ids).Error
if err == nil {
err = db.GetDB().Exec("CALL func_sync_isparent").Error
}
return
}
|
//
// ISecurityGuardSimulatorDetect.h
// SecurityGuardMain
//
// Created by lifengzhong on 15/11/10.
// Copyright © 2015 <NAME>. All rights reserved.
//
#ifndef ISecurityGuardSimulatorDetect_h
#define ISecurityGuardSimulatorDetect_h
#if TARGET_OS_WATCH
#import <SecurityGuardSDKWatch/SimulatorDetect/ISimulatorDetectComponent.h>
#import <SecurityGuardSDKWatch/Open/IOpenSecurityGuardPlugin.h>
#else
#import <SecurityGuardSDK/SimulatorDetect/ISimulatorDetectComponent.h>
#import <SecurityGuardSDK/Open/IOpenSecurityGuardPlugin.h>
#endif
@protocol ISecurityGuardSimulatorDetect <NSObject, ISimulatorDetectComponent, IOpenSecurityGuardPluginInterface>
@end
#endif /* ISecurityGuardSimulatorDetect_h */
|
/**
* Created by Anton Nashatyrev on 14.12.2018.
*/
public class DaemonChannelHandler implements Closeable, AutoCloseable {
private final Channel channel;
private final boolean isInitiator;
private Queue<ResponseBuilder> respBuildQueue = new ConcurrentLinkedQueue<>();
private StreamHandler<MuxerAdress> streamHandler;
private Stream<MuxerAdress> stream;
private ByteBuf prevDataTail = Unpooled.buffer(0);
public DaemonChannelHandler(Channel channel, boolean isInitiator) {
this.channel = channel;
this.isInitiator = isInitiator;
}
public void setStreamHandler(StreamHandler<MuxerAdress> streamHandler) {
this.streamHandler = streamHandler;
}
void onData(ByteBuf msg) throws InvalidProtocolBufferException {
ByteBuf bytes = prevDataTail.isReadable() ? Unpooled.wrappedBuffer(prevDataTail, msg) : msg;
while (bytes.isReadable()) {
if (stream != null) {
streamHandler.onRead(bytes.nioBuffer());
bytes.clear();
break;
} else {
ResponseBuilder responseBuilder = respBuildQueue.peek();
if (responseBuilder == null) {
throw new RuntimeException("Unexpected response message from p2pDaemon");
}
try {
ByteBuf bbDup = bytes.duplicate();
InputStream is = new ByteBufInputStream(bbDup);
int msgLen = CodedInputStream.readRawVarint32(is.read(), is);
if (msgLen > bbDup.readableBytes()) {
break;
}
} catch (IOException e) {
throw new RuntimeException(e);
}
Action action = responseBuilder.parseNextMessage(bytes);
if (action != Action.ContinueResponse) {
respBuildQueue.poll();
}
if (action == Action.StartStream) {
P2Pd.StreamInfo resp = responseBuilder.getStreamInfo();
MuxerAdress remoteAddr = new MuxerAdress(new Peer(resp.getPeer().toByteArray()), resp.getProto());
MuxerAdress localAddr = MuxerAdress.listenAddress(resp.getProto());
stream = new NettyStream(channel, isInitiator, localAddr, remoteAddr);
streamHandler.onCreate(stream);
channel.closeFuture().addListener((ChannelFutureListener) future -> streamHandler.onClose());
}
}
}
prevDataTail = Unpooled.wrappedBuffer(Util.byteBufToArray(bytes));
}
void onError(Throwable t) {
streamHandler.onError(t);
}
public <TResponse> CompletableFuture<TResponse> expectResponse(
ResponseBuilder<TResponse> responseBuilder) {
respBuildQueue.add(responseBuilder);
return responseBuilder.getResponse();
}
public <TResponse> CompletableFuture<TResponse> call(P2Pd.Request request,
ResponseBuilder<TResponse> responseBuilder) {
ByteArrayOutputStream baos = new ByteArrayOutputStream();
try {
request.writeDelimitedTo(baos);
} catch (IOException e) {
throw new RuntimeException(e);
}
byte[] msgBytes = baos.toByteArray();
ByteBuf buffer = channel.alloc().buffer(msgBytes.length).writeBytes(msgBytes);
CompletableFuture<TResponse> ret = expectResponse(responseBuilder);
ChannelFuture channelFuture = channel.writeAndFlush(buffer);
try {
channelFuture.get();
} catch (InterruptedException | ExecutionException e) {
throw new RuntimeException(e);
}
return ret;
}
public void close() {
channel.close();
}
@FunctionalInterface
public interface FunctionThrowable<A, B> {
B apply(A arg) throws Exception;
}
private enum Action {
EndResponse,
ContinueResponse,
StartStream
}
public static abstract class ResponseBuilder<TResponse> {
protected boolean throwOnResponseError = true;
protected CompletableFuture<TResponse> respFuture = new CompletableFuture<>();
protected Action parseNextMessage(ByteBuf bytes) {
ByteBuf buf = bytes.duplicate();
try {
return parseNextMessage(new ByteBufInputStream(bytes));
} catch (Exception e) {
respFuture.completeExceptionally(new RuntimeException("Error parsing message: "
+ java.util.Arrays.toString(Util.byteBufToArray(buf)), e));
return Action.EndResponse;
}
}
abstract Action parseNextMessage(InputStream is) throws Exception;
CompletableFuture<TResponse> getResponse() {
return respFuture;
}
P2Pd.StreamInfo getStreamInfo() {
try {
TResponse resp = respFuture.get();
if (resp instanceof P2Pd.Response) {
return ((P2Pd.Response) resp).getStreamInfo();
} else {
return (P2Pd.StreamInfo) resp;
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
public static class SingleMsgResponseBuilder<TResponse> extends ResponseBuilder<TResponse>{
FunctionThrowable<InputStream, TResponse> parser;
public SingleMsgResponseBuilder(FunctionThrowable<InputStream, TResponse> parser) {
this.parser = parser;
}
@Override
Action parseNextMessage(InputStream is) {
try {
TResponse response = parser.apply(is);
if (throwOnResponseError && response instanceof P2Pd.Response &&
((P2Pd.Response) response).getType() == P2Pd.Response.Type.ERROR) {
throw new P2PDError(((P2Pd.Response) response).getError().toString());
} else {
respFuture.complete(response);
}
} catch (Exception e) {
respFuture.completeExceptionally(e);
}
return Action.EndResponse;
}
CompletableFuture<TResponse> getResponse() {
return respFuture;
}
}
public static class SimpleResponseBuilder extends SingleMsgResponseBuilder<P2Pd.Response> {
public SimpleResponseBuilder() {
super(P2Pd.Response::parseDelimitedFrom);
}
}
public static class ListenerStreamBuilder extends SingleMsgResponseBuilder<P2Pd.StreamInfo> {
public ListenerStreamBuilder() {
super(P2Pd.StreamInfo::parseDelimitedFrom);
}
@Override
protected Action parseNextMessage(ByteBuf bytes) {
super.parseNextMessage(bytes);
return Action.StartStream;
}
}
public static class SimpleResponseStreamBuilder extends SingleMsgResponseBuilder<P2Pd.Response> {
public SimpleResponseStreamBuilder() {
super(P2Pd.Response::parseDelimitedFrom);
}
@Override
protected Action parseNextMessage(ByteBuf bytes) {
super.parseNextMessage(bytes);
try {
if (getResponse().get().getType() == P2Pd.Response.Type.OK) {
return Action.StartStream;
} else {
return Action.EndResponse;
}
} catch (Exception e) {
throw new RuntimeException(e);
}
}
}
public static class DHTListResponse extends ResponseBuilder<List<P2Pd.DHTResponse>> {
private final List<P2Pd.DHTResponse> items = new ArrayList<>();
private boolean started;
@Override
Action parseNextMessage(InputStream is) throws Exception {
if (!started) {
P2Pd.Response response = P2Pd.Response.parseDelimitedFrom(is);
if (response.getType() == P2Pd.Response.Type.ERROR) {
throw new P2PDError("" + response.getError());
} else {
if (!response.hasDht() || response.getDht().getType() != P2Pd.DHTResponse.Type.BEGIN) {
throw new RuntimeException("Invalid DHT list start message: " + response);
}
started = true;
return Action.ContinueResponse;
}
} else {
P2Pd.DHTResponse response = P2Pd.DHTResponse.parseDelimitedFrom(is);
if (response.getType() == P2Pd.DHTResponse.Type.END) {
respFuture.complete(items);
return Action.EndResponse;
} else if (response.getType() == P2Pd.DHTResponse.Type.VALUE) {
items.add(response);
return Action.ContinueResponse;
} else {
throw new RuntimeException("Invalid DHT list message: " + response);
}
}
}
}
public static class UnboundMessagesResponse<MessageT> extends ResponseBuilder<BlockingQueue<MessageT>> {
private final BlockingQueue<MessageT> items = new LinkedBlockingQueue<>();
private final Function<InputStream, MessageT> decoder;
private boolean started;
public UnboundMessagesResponse(Function<InputStream, MessageT> decoder) {
this.decoder = decoder;
}
@Override
Action parseNextMessage(InputStream is) throws Exception {
if (!started) {
P2Pd.Response response = P2Pd.Response.parseDelimitedFrom(is);
if (response.getType() == P2Pd.Response.Type.ERROR) {
throw new P2PDError("" + response.getError());
} else {
respFuture.complete(items);
started = true;
return Action.ContinueResponse;
}
} else {
MessageT message = decoder.apply(is);
items.add(message);
return Action.ContinueResponse;
}
}
}
} |
import { IncomingMessage } from 'http'
import { parse } from 'url'
import { getBalanceDev } from 'dev-distribution/src/libs'
import { AddressBalance, DistributionTarget } from 'dev-distribution/src/types'
import { get } from 'request'
type DistributionTargets = ReadonlyArray<DistributionTarget>
const proto = 'https'
const getPackageNamePath = (req: IncomingMessage) =>
((parsed) => (parsed.pathname || '').replace(/^\//, ''))(parse(req.url || ''))
const validateAddress = (address: string) =>
address.length === 42 && address.startsWith('0x')
const fetchPackages = async (): Promise<DistributionTargets> =>
new Promise<DistributionTargets>((resolve) =>
get(
`${proto}://dev-distribution.now.sh/config/packages`,
{ json: true },
(_, __, body) => {
// tslint:disable-next-line: no-expression-statement no-unsafe-any
resolve(body)
}
)
)
const fetchDev = async (
address?: string
): Promise<AddressBalance | undefined> =>
address && validateAddress(address) ? getBalanceDev(address) : undefined
const balance = (data?: AddressBalance) => (data ? data.balance : data)
export const fetchBalance = async (req: IncomingMessage) => {
const name = getPackageNamePath(req)
const pkgs = await fetchPackages()
const address = ((pkg) => (pkg ? pkg.address : undefined))(
pkgs.find((pkg) => pkg.package === name)
)
const etherscan = await fetchDev(address)
return balance(etherscan)
}
|
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
# panchang.py -- routines for computing tithi, vara, etc.
#
# Copyright (C) 2013 <NAME> <<EMAIL>>
# Downloaded from https://github.com/bdsatish/drik-panchanga
#
# This file is part of the "drik-panchanga" Python library
# for computing Hindu luni-solar calendar based on the Swiss ephemeris
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Use Swiss ephemeris to calculate tithi, nakshatra, etc.
"""
from __future__ import division
from math import ceil
from collections import namedtuple as struct
import swisseph as swe
Date = struct('Date', ['year', 'month', 'day'])
Place = struct('Place', ['latitude', 'longitude', 'timezone'])
sidereal_year = 365.256360417 # From WolframAlpha
# Hindu sunrise/sunset is calculated w.r.t. middle of the sun's disk
# They are geometric, i.e. "true sunrise/set", so refraction is not considered
_rise_flags = swe.BIT_DISC_CENTER + swe.BIT_NO_REFRACTION
# namah suryaya chandraya mangalaya ... rahuve ketuve namah
swe.RAHU = swe.MEAN_NODE  # Rahu = mean lunar node (alias used below)
swe.KETU = swe.PLUTO  # I've mapped Pluto to Ketu
planet_list = [swe.SUN, swe.MOON, swe.MARS, swe.MERCURY, swe.JUPITER,
swe.VENUS, swe.SATURN, swe.MEAN_NODE, # Rahu = MEAN_NODE
swe.KETU, swe.URANUS, swe.NEPTUNE ]
revati_359_50 = lambda: swe.set_sid_mode(swe.SIDM_USER, 1926892.343164331, 0)
galc_cent_mid_mula = lambda: swe.set_sid_mode(swe.SIDM_USER, 1922011.128853056, 0)
set_ayanamsa_mode = lambda: swe.set_sid_mode(swe.SIDM_LAHIRI)
reset_ayanamsa_mode = lambda: swe.set_sid_mode(swe.SIDM_FAGAN_BRADLEY)
# Temporary function (uses the standard pyswisseph constants; swe.KETU is aliased above)
def get_planet_name(planet):
    names = { swe.SUN: 'Surya', swe.MOON: 'Candra', swe.MARS: 'Mangala',
              swe.MERCURY: 'Budha', swe.JUPITER: 'Guru', swe.VENUS: 'Sukra',
              swe.SATURN: 'Sani', swe.MEAN_NODE: 'Rahu', swe.KETU: 'Ketu' }
    return names[planet]
# Convert 23d 30' 30" to 23.508333 degrees
from_dms = lambda degs, mins, secs: degs + mins/60 + secs/3600
# the inverse
def to_dms_prec(deg):
d = int(deg)
mins = (deg - d) * 60
m = int(mins)
s = round((mins - m) * 60, 6)
return [d, m, s]
def to_dms(deg):
d, m, s = to_dms_prec(deg)
return [d, m, int(s)]
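# Standalone round-trip sketch of the D:M:S helpers above (illustrative only):
# 23°30'30" is 23.508333…°, and splitting it back out recovers [23, 30, 30].
def _to_dms_example(deg):
    d = int(deg)
    mins = (deg - d) * 60
    m = int(mins)
    return [d, m, int(round((mins - m) * 60, 6))]
# e.g. _to_dms_example(from_dms(23, 30, 30)) gives [23, 30, 30]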
def unwrap_angles(angles):
"""Add 360 to those elements in the input list so that
all elements are sorted in ascending order."""
result = angles
for i in range(1, len(angles)):
if result[i] < result[i-1]: result[i] += 360
assert(result == sorted(result))
return result
# Make angle lie between [-180, 180) instead of [0, 360)
norm180 = lambda angle: (angle - 360) if angle >= 180 else angle
# Make angle lie between [0, 360)
# (note: as used below, this receives the coordinate tuple returned by
#  swe.calc_ut and normalizes its first element, the longitude)
norm360 = lambda angle: angle[0] % 360
# Ketu is always 180° after Rahu, so same coordinates but different constellations
# i.e if Rahu is in Pisces, Ketu is in Virgo etc
ketu = lambda rahu: (rahu + 180) % 360
def function(point):
    """Objective for bisection_search: how far the chosen fixed star is
    from its desired sidereal position when the ayanamsa epoch is 'point'."""
    swe.set_sid_mode(swe.SIDM_USER, point, 0.0)
    #swe.set_sid_mode(swe.SIDM_LAHIRI)
# Place Revati at 359°50'
#fval = norm180(swe.fixstar_ut("Revati", point, flag = swe.FLG_SWIEPH | swe.FLG_SIDEREAL)[0]) - ((359 + 49/60 + 59/3600) - 360)
# Place Revati at 0°0'0"
#fval = norm180(swe.fixstar_ut("Revati", point, flag = swe.FLG_SWIEPH | swe.FLG_SIDEREAL)[0])
# Place Citra at 180°
fval = swe.fixstar_ut("Citra", point, flag = swe.FLG_SWIEPH | swe.FLG_SIDEREAL)[0] - (180)
# Place Pushya (delta Cancri) at 106°
# fval = swe.fixstar_ut(",deCnc", point, flag = swe.FLG_SWIEPH | swe.FLG_SIDEREAL)[0] - (106)
return fval
def bisection_search(func, start, stop):
left = start
right = stop
epsilon = 5E-10 # Anything better than this puts the loop below infinite
while True:
middle = (left + right) / 2
midval = func(middle)
rtval = func(right)
if midval * rtval >= 0:
right = middle
else:
left = middle
if (right - left) <= epsilon: break
return (right + left) / 2
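# Standalone sanity sketch of the bisection loop above (illustrative only):
# applied to f(x) = x*x - 2 on [1, 2] it converges to sqrt(2).
def _bisection_example(func, left, right, epsilon=5E-10):
    while (right - left) > epsilon:
        middle = (left + right) / 2
        if func(middle) * func(right) >= 0:
            right = middle
        else:
            left = middle
    return (left + right) / 2
# e.g. _bisection_example(lambda x: x * x - 2, 1.0, 2.0) is ~1.41421356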
def inverse_lagrange(x, y, ya):
"""Given two lists x and y, find the value of x = xa when y = ya, i.e., f(xa) = ya"""
assert(len(x) == len(y))
total = 0
for i in range(len(x)):
numer = 1
denom = 1
for j in range(len(x)):
if j != i:
numer *= (ya - y[j])
denom *= (y[i] - y[j])
total += numer * x[i] / denom
return total
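# Quick standalone check of the inverse interpolation above (illustrative
# only): for the exact linear data y = 2x, asking "where does y reach 3?"
# must give x = 1.5.
def _inverse_lagrange_example(x, y, ya):
    total = 0
    for i in range(len(x)):
        numer = denom = 1
        for j in range(len(x)):
            if j != i:
                numer *= (ya - y[j])
                denom *= (y[i] - y[j])
        total += numer * x[i] / denom
    return total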
# Julian Day number as on (year, month, day) at 00:00 UTC
gregorian_to_jd = lambda date: swe.julday(date.year, date.month, date.day, 0.0)
jd_to_gregorian = lambda jd: swe.revjul(jd, swe.GREG_CAL) # returns (y, m, d, h, min, s)
def local_time_to_jdut1(year, month, day, hour = 0, minutes = 0, seconds = 0, timezone = 0.0):
"""Converts local time to JD(UT1)"""
y, m, d, h, mnt, s = swe.utc_time_zone(year, month, day, hour, minutes, seconds, timezone)
# BUG in pyswisseph: replace 0 by s
jd_et, jd_ut1 = swe.utc_to_jd(y, m, d, h, mnt, 0, flag = swe.GREG_CAL)
return jd_ut1
def nakshatra_pada(longitude):
    """Gives nakshatra (1..27) and paada (1..4) in which given longitude lies"""
    # 27 nakshatras span 360°
    one_star = (360 / 27)  # = 13°20'
    # Each nakshatra has 4 padas, so 27 x 4 = 108 padas in 360°
    one_pada = (360 / 108)  # = 3°20'
    quotient = int(longitude / one_star)
    remainder = (longitude - quotient * one_star)
    pada = int(remainder / one_pada)
    # convert 0..26 to 1..27 and 0..3 to 1..4
    return [1 + quotient, 1 + pada]
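# Worked example of the arithmetic above, standalone (illustrative only):
# 23.5° lies past one full nakshatra (13°20') and three full padas (3°20')
# of the next, so it maps to nakshatra 2 (Bharani), pada 4.
def _nakshatra_pada_example(longitude):
    one_star = 360 / 27
    one_pada = 360 / 108
    quotient = int(longitude / one_star)
    pada = int((longitude - quotient * one_star) / one_pada)
    return [1 + quotient, 1 + pada]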
def sidereal_longitude(jd, planet):
"""Computes nirayana (sidereal) longitude of given planet on jd"""
set_ayanamsa_mode()
longi = swe.calc_ut(jd, planet, flag = swe.FLG_SWIEPH | swe.FLG_SIDEREAL)
reset_ayanamsa_mode()
return norm360(longi[0]) # degrees
solar_longitude = lambda jd: sidereal_longitude(jd, swe.SUN)
lunar_longitude = lambda jd: sidereal_longitude(jd, swe.MOON)
def sunrise(jd, place):
"""Sunrise when centre of disc is at horizon for given date and place"""
lat, lon, tz = place
result = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)
rise = result[1][0] # julian-day number
# Convert to local time
return [rise + tz/24., to_dms((rise - jd) * 24 + tz)]
def sunset(jd, place):
"""Sunset when centre of disc is at horizon for given date and place"""
lat, lon, tz = place
result = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_SET)
setting = result[1][0] # julian-day number
# Convert to local time
return [setting + tz/24., to_dms((setting - jd) * 24 + tz)]
def moonrise(jd, place):
"""Moonrise when centre of disc is at horizon for given date and place"""
lat, lon, tz = place
result = swe.rise_trans(jd - tz/24, swe.MOON, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)
rise = result[1][0] # julian-day number
# Convert to local time
return to_dms((rise - jd) * 24 + tz)
def moonset(jd, place):
"""Moonset when centre of disc is at horizon for given date and place"""
lat, lon, tz = place
result = swe.rise_trans(jd - tz/24, swe.MOON, lon, lat, rsmi = _rise_flags + swe.CALC_SET)
setting = result[1][0] # julian-day number
# Convert to local time
return to_dms((setting - jd) * 24 + tz)
# Tithi doesn't depend on Ayanamsa
def tithi(jd, place):
"""Tithi at sunrise for given date and place. Also returns tithi's end time."""
tz = place.timezone
# 1. Find time of sunrise
rise = sunrise(jd, place)[0] - tz / 24
# 2. Find tithi at this JDN
moon_phase = lunar_phase(rise)
today = ceil(moon_phase / 12)
degrees_left = today * 12 - moon_phase
# 3. Compute longitudinal differences at intervals of 0.25 days from sunrise
offsets = [0.25, 0.5, 0.75, 1.0]
lunar_long_diff = [ (lunar_longitude(rise + t) - lunar_longitude(rise)) % 360 for t in offsets ]
solar_long_diff = [ (solar_longitude(rise + t) - solar_longitude(rise)) % 360 for t in offsets ]
relative_motion = [ moon - sun for (moon, sun) in zip(lunar_long_diff, solar_long_diff) ]
# 4. Find end time by 4-point inverse Lagrange interpolation
y = relative_motion
x = offsets
# compute fraction of day (after sunrise) needed to traverse 'degrees_left'
approx_end = inverse_lagrange(x, y, degrees_left)
    ends = (rise + approx_end - jd) * 24 + tz
answer = [int(today), to_dms(ends)]
# 5. Check for skipped tithi
moon_phase_tmrw = lunar_phase(rise + 1)
tomorrow = ceil(moon_phase_tmrw / 12)
isSkipped = (tomorrow - today) % 30 > 1
if isSkipped:
# interpolate again with same (x,y)
leap_tithi = today + 1
degrees_left = leap_tithi * 12 - moon_phase
approx_end = inverse_lagrange(x, y, degrees_left)
        ends = (rise + approx_end - jd) * 24 + tz
leap_tithi = 1 if today == 30 else leap_tithi
answer += [int(leap_tithi), to_dms(ends)]
return answer
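# The indexing rule from step 2 above, standalone (illustrative only):
# a lunar phase of 100° falls in tithi ceil(100 / 12) = 9 (Navami), with
# 9*12 - 100 = 8 degrees left for that tithi to complete.
from math import ceil as _ceil_demo  # repeated so this sketch stands alone
def _tithi_index_example(moon_phase):
    today = _ceil_demo(moon_phase / 12)
    return int(today), today * 12 - moon_phase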
def nakshatra(jd, place):
"""Current nakshatra as of julian day (jd)
1 = Asvini, 2 = Bharani, ..., 27 = Revati
"""
# 1. Find time of sunrise
lat, lon, tz = place
rise = sunrise(jd, place)[0] - tz / 24. # Sunrise at UT 00:00
offsets = [0.0, 0.25, 0.5, 0.75, 1.0]
longitudes = [ lunar_longitude(rise + t) for t in offsets]
# 2. Today's nakshatra is when offset = 0
# There are 27 Nakshatras spanning 360 degrees
nak = ceil(longitudes[0] * 27 / 360)
# 3. Find end time by 5-point inverse Lagrange interpolation
y = unwrap_angles(longitudes)
x = offsets
approx_end = inverse_lagrange(x, y, nak * 360 / 27)
ends = (rise - jd + approx_end) * 24 + tz
answer = [int(nak), to_dms(ends)]
# 4. Check for skipped nakshatra
nak_tmrw = ceil(longitudes[-1] * 27 / 360)
isSkipped = (nak_tmrw - nak) % 27 > 1
if isSkipped:
leap_nak = nak + 1
approx_end = inverse_lagrange(offsets, longitudes, leap_nak * 360 / 27)
ends = (rise - jd + approx_end) * 24 + tz
leap_nak = 1 if nak == 27 else leap_nak
answer += [int(leap_nak), to_dms(ends)]
return answer
def yoga(jd, place):
"""Yoga at given jd and place.
1 = Vishkambha, 2 = Priti, ..., 27 = Vaidhrti
"""
# 1. Find time of sunrise
lat, lon, tz = place
rise = sunrise(jd, place)[0] - tz / 24. # Sunrise at UT 00:00
# 2. Find the Nirayana longitudes and add them
lunar_long = lunar_longitude(rise)
solar_long = solar_longitude(rise)
total = (lunar_long + solar_long) % 360
# There are 27 Yogas spanning 360 degrees
yog = ceil(total * 27 / 360)
# 3. Find how many longitudes is there left to be swept
degrees_left = yog * (360 / 27) - total
# 3. Compute longitudinal sums at intervals of 0.25 days from sunrise
offsets = [0.25, 0.5, 0.75, 1.0]
lunar_long_diff = [ (lunar_longitude(rise + t) - lunar_longitude(rise)) % 360 for t in offsets ]
solar_long_diff = [ (solar_longitude(rise + t) - solar_longitude(rise)) % 360 for t in offsets ]
total_motion = [ moon + sun for (moon, sun) in zip(lunar_long_diff, solar_long_diff) ]
# 4. Find end time by 4-point inverse Lagrange interpolation
y = total_motion
x = offsets
# compute fraction of day (after sunrise) needed to traverse 'degrees_left'
approx_end = inverse_lagrange(x, y, degrees_left)
ends = (rise + approx_end - jd) * 24 + tz
answer = [int(yog), to_dms(ends)]
# 5. Check for skipped yoga
lunar_long_tmrw = lunar_longitude(rise + 1)
solar_long_tmrw = solar_longitude(rise + 1)
total_tmrw = (lunar_long_tmrw + solar_long_tmrw) % 360
tomorrow = ceil(total_tmrw * 27 / 360)
isSkipped = (tomorrow - yog) % 27 > 1
if isSkipped:
# interpolate again with same (x,y)
leap_yog = yog + 1
degrees_left = leap_yog * (360 / 27) - total
approx_end = inverse_lagrange(x, y, degrees_left)
ends = (rise + approx_end - jd) * 24 + tz
leap_yog = 1 if yog == 27 else leap_yog
answer += [int(leap_yog), to_dms(ends)]
return answer
def karana(jd, place):
    """Karana (1..60) active at sunrise. (Ending time is not computed here.)"""
    # 1. Find time of sunrise
    rise = sunrise(jd, place)[0]
    # 2. Each karana spans 6 degrees of lunar phase
    moon_phase = lunar_phase(rise)
    today = ceil(moon_phase / 6)
    return [int(today)]
def vaara(jd):
"""Weekday for given Julian day. 0 = Sunday, 1 = Monday,..., 6 = Saturday"""
return int(ceil(jd + 1) % 7)
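# Quick standalone check of the weekday rule above (illustrative only):
# JD 2451544.5 is 2000-01-01 00:00 UTC, which was a Saturday (6).
def _vaara_example(jd):
    from math import ceil
    return int(ceil(jd + 1) % 7)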
def masa(jd, place):
"""Returns lunar month and if it is adhika or not.
1 = Chaitra, 2 = Vaisakha, ..., 12 = Phalguna"""
ti = tithi(jd, place)[0]
critical = sunrise(jd, place)[0] # - tz/24 ?
last_new_moon = new_moon(critical, ti, -1)
next_new_moon = new_moon(critical, ti, +1)
this_solar_month = raasi(last_new_moon)
next_solar_month = raasi(next_new_moon)
is_leap_month = (this_solar_month == next_solar_month)
maasa = this_solar_month + 1
if maasa > 12: maasa = (maasa % 12)
return [int(maasa), is_leap_month]
# epoch-midnight to given midnight
# Days elapsed since beginning of Kali Yuga
ahargana = lambda jd: jd - 588465.5
def elapsed_year(jd, maasa_num):
ahar = ahargana(jd) # or (jd + sunrise(jd, place)[0])
kali = int((ahar + (4 - maasa_num) * 30) / sidereal_year)
saka = kali - 3179
vikrama = saka + 135
return kali, saka
# New moon day: sun and moon have same longitude (0 degrees = 360 degrees difference)
# Full moon day: sun and moon are 180 deg apart
def new_moon(jd, tithi_, opt = -1):
"""Returns JDN, where
opt = -1: JDN < jd such that lunar_phase(JDN) = 360 degrees
opt = +1: JDN >= jd such that lunar_phase(JDN) = 360 degrees
"""
if opt == -1: start = jd - tithi_ # previous new moon
if opt == +1: start = jd + (30 - tithi_) # next new moon
# Search within a span of (start +- 2) days
x = [ -2 + offset/4 for offset in range(17) ]
y = [lunar_phase(start + i) for i in x]
y = unwrap_angles(y)
y0 = inverse_lagrange(x, y, 360)
return start + y0
def raasi(jd):
    """Zodiac of given jd. 1 = Mesha, ... 12 = Meena"""
    solar_nirayana = solar_longitude(jd)
    # 12 rasis occupy 360 degrees, so each one is 30 degrees
    return ceil(solar_nirayana / 30.)
def lunar_phase(jd):
solar_long = solar_longitude(jd)
lunar_long = lunar_longitude(jd)
moon_phase = (lunar_long - solar_long) % 360
return moon_phase
def samvatsara(jd, maasa_num):
kali = elapsed_year(jd, maasa_num)[0]
# Change 14 to 0 for North Indian tradition
# See the function "get_Jovian_Year_name_south" in pancanga.pl
if kali >= 4009: kali = (kali - 14) % 60
samvat = (kali + 27 + int((kali * 211 - 108) / 18000)) % 60
return samvat
def ritu(masa_num):
"""0 = Vasanta,...,5 = Shishira"""
return (masa_num - 1) // 2
def day_duration(jd, place):
srise = sunrise(jd, place)[0] # julian day num
sset = sunset(jd, place)[0] # julian day num
diff = (sset - srise) * 24 # In hours
return [diff, to_dms(diff)]
# The day duration is divided into 8 parts
# Similarly night duration
def gauri_chogadiya(jd, place):
lat, lon, tz = place
tz = place.timezone
srise = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)[1][0]
sset = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_SET)[1][0]
day_dur = (sset - srise)
end_times = []
for i in range(1, 9):
end_times.append(to_dms((srise + (i * day_dur) / 8 - jd) * 24 + tz))
# Night duration = time from today's sunset to tomorrow's sunrise
srise = swe.rise_trans((jd + 1) - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)[1][0]
night_dur = (srise - sset)
for i in range(1, 9):
end_times.append(to_dms((sset + (i * night_dur) / 8 - jd) * 24 + tz))
return end_times
def trikalam(jd, place, option='rahu'):
lat, lon, tz = place
tz = place.timezone
srise = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)[1][0]
sset = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_SET)[1][0]
day_dur = (sset - srise)
weekday = vaara(jd)
# value in each array is for given weekday (0 = sunday, etc.)
offsets = { 'rahu': [0.875, 0.125, 0.75, 0.5, 0.625, 0.375, 0.25],
'gulika': [0.75, 0.625, 0.5, 0.375, 0.25, 0.125, 0.0],
'yamaganda': [0.5, 0.375, 0.25, 0.125, 0.0, 0.75, 0.625] }
start_time = srise + day_dur * offsets[option][weekday]
end_time = start_time + 0.125 * day_dur
# to local timezone
start_time = (start_time - jd) * 24 + tz
end_time = (end_time - jd) * 24 + tz
return [to_dms(start_time), to_dms(end_time)] # decimal hours to H:M:S
rahu_kalam = lambda jd, place: trikalam(jd, place, 'rahu')
yamaganda_kalam = lambda jd, place: trikalam(jd, place, 'yamaganda')
gulika_kalam = lambda jd, place: trikalam(jd, place, 'gulika')
def durmuhurtam(jd, place):
lat, lon, tz = place
tz = place.timezone
# Night = today's sunset to tomorrow's sunrise
sset = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_SET)[1][0]
srise = swe.rise_trans((jd + 1) - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)[1][0]
night_dur = (srise - sset)
# Day = today's sunrise to today's sunset
srise = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)[1][0]
day_dur = (sset - srise)
weekday = vaara(jd)
# There is one durmuhurtam on Sun, Wed, Sat; the rest have two
offsets = [[10.4, 0.0], # Sunday
[6.4, 8.8], # Monday
[2.4, 4.8], # Tuesday, [day_duration , night_duration]
[5.6, 0.0], # Wednesday
[4.0, 8.8], # Thursday
[2.4, 6.4], # Friday
[1.6, 0.0]] # Saturday
# second durmuhurtam of tuesday uses night_duration instead of day_duration
dur = [day_dur, day_dur]
base = [srise, srise]
if weekday == 2: dur[1] = night_dur; base[1] = sset
# compute start and end timings
start_times = [0, 0]
end_times = [0, 0]
for i in range(0, 2):
offset = offsets[weekday][i]
if offset != 0.0:
start_times[i] = base[i] + dur[i] * offsets[weekday][i] / 12
end_times[i] = start_times[i] + day_dur * 0.8 / 12
# convert to local time
start_times[i] = (start_times[i] - jd) * 24 + tz
end_times[i] = (end_times[i] - jd) * 24 + tz
return [start_times, end_times] # in decimal hours
def abhijit_muhurta(jd, place):
"""Abhijit muhurta is the 8th muhurta (middle one) of the 15 muhurtas
during the day_duration (~12 hours)"""
lat, lon, tz = place
tz = place.timezone
srise = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_RISE)[1][0]
sset = swe.rise_trans(jd - tz/24, swe.SUN, lon, lat, rsmi = _rise_flags + swe.CALC_SET)[1][0]
day_dur = (sset - srise)
start_time = srise + 7 / 15 * day_dur
end_time = srise + 8 / 15 * day_dur
# to local time
return [(start_time - jd) * 24 + tz, (end_time - jd) * 24 + tz]
# 'jd' can be any time: ex, 2015-09-19 14:20 UTC
# today = swe.julday(2015, 9, 19, 14 + 20./60)
def planetary_positions(jd, place):
"""Computes instantaneous planetary positions
(i.e., which celestial object lies in which constellation)
Also gives the nakshatra-pada division
"""
jd_ut = jd - place.timezone / 24.
positions = []
for planet in planet_list:
if planet != swe.KETU:
nirayana_long = sidereal_longitude(jd_ut, planet)
else: # Ketu
            nirayana_long = ketu(sidereal_longitude(jd_ut, swe.MEAN_NODE))  # Rahu
# 12 zodiac signs span 360°, so each one takes 30°
# 0 = Mesha, 1 = Vrishabha, ..., 11 = Meena
constellation = int(nirayana_long / 30)
coordinates = to_dms(nirayana_long % 30)
positions.append([planet, constellation, coordinates, nakshatra_pada(nirayana_long)])
return positions
def ascendant(jd, place):
"""Lagna (=ascendant) calculation at any given time & place"""
lat, lon, tz = place
jd_utc = jd - (tz / 24.)
set_ayanamsa_mode() # needed for swe.houses_ex()
# returns two arrays, cusps and ascmc, where ascmc[0] = Ascendant
nirayana_lagna = swe.houses_ex(jd_utc, lat, lon, flag = swe.FLG_SIDEREAL)[1][0]
# 12 zodiac signs span 360°, so each one takes 30°
# 0 = Mesha, 1 = Vrishabha, ..., 11 = Meena
constellation = int(nirayana_lagna / 30)
coordinates = to_dms(nirayana_lagna % 30)
reset_ayanamsa_mode()
return [constellation, coordinates, nakshatra_pada(nirayana_lagna)]
# http://www.oocities.org/talk2astrologer/LearnAstrology/Details/Navamsa.html
# Useful for making D9 divisional chart
def navamsa_from_long(longitude):
"""Calculates the navamsa-sign in which given longitude falls
0 = Aries, 1 = Taurus, ..., 11 = Pisces
"""
one_pada = (360 / (12 * 9)) # There are also 108 navamsas
one_sign = 12 * one_pada # = 40 degrees exactly
signs_elapsed = longitude / one_sign
fraction_left = signs_elapsed % 1
return int(fraction_left * 12)
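# Standalone check of the mapping above (illustrative only): each navamsa
# spans 3°20', so a longitude of 5° lies in the second navamsa of Aries,
# giving sign index 1 (Taurus); a longitude of 0° gives 0 (Aries).
def _navamsa_example(longitude):
    one_sign = 12 * (360 / 108)  # = 40°
    return int((longitude / one_sign) % 1 * 12)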
def navamsa(jd, place):
"""Calculates navamsa of all planets"""
jd_utc = jd - place.timezone / 24.
positions = []
for planet in planet_list:
if planet != swe.KETU:
nirayana_long = sidereal_longitude(jd_utc, planet)
else: # Ketu
            nirayana_long = ketu(sidereal_longitude(jd_utc, swe.MEAN_NODE))  # Rahu
positions.append([planet, navamsa_from_long(nirayana_long)])
return positions
# ----- TESTS ------
def all_tests():
print(sys._getframe().f_code.co_name)
print(moonrise(date2, bangalore)) # Expected: 11:32:04
print(moonset(date2, bangalore)) # Expected: 24:8:47
print(sunrise(date2, bangalore)[1]) # Expected: 6:49:47
print(sunset(date2, bangalore)[1]) # Expected: 18:10:25
assert(vaara(date2) == 5)
print(sunrise(date4, shillong)[1]) # On this day, Nakshatra and Yoga are skipped!
assert(karana(date2, helsinki) == [14]) # Expected: 14, Vanija
return
def tithi_tests():
print(sys._getframe().f_code.co_name)
feb3 = gregorian_to_jd(Date(2013, 2, 3))
apr24 = gregorian_to_jd(Date(2010, 4, 24))
apr19 = gregorian_to_jd(Date(2013, 4, 19))
apr20 = gregorian_to_jd(Date(2013, 4, 20))
apr21 = gregorian_to_jd(Date(2013, 4, 21))
print(tithi(date1, bangalore)) # Expected: krishna ashtami (23), ends at 27:07:38
print(tithi(date2, bangalore)) # Expected: Saptami, ends at 16:24:19
print(tithi(date3, bangalore)) # Expected: <NAME>, ends at 25:03:30
print(tithi(date2, helsinki)) # Expected: Shukla saptami until 12:54:19
print(tithi(apr24, bangalore)) # Expected: [10, [6,9,29], 11, [27, 33, 58]]
print(tithi(feb3, bangalore)) # Expected: [22, [8,14,6], 23, [30, 33, 17]]
print(tithi(apr19, helsinki)) # Expected: [9, [28, 45, 0]]
print(tithi(apr20, helsinki)) # Expected: [10, [29, 22, 7]]
print(tithi(apr21, helsinki)) # Expected: [10, [5, 22, 6]]
return
def nakshatra_tests():
print(sys._getframe().f_code.co_name)
print(nakshatra(date1, bangalore)) # Expected: 27 (Revati), ends at 17:06:37
print(nakshatra(date2, bangalore)) # Expected: 27 (Revati), ends at 19:23:09
    print(nakshatra(date3, bangalore)) # Expected: 24 (Shatabhisha) ends at 26:32:43
print(nakshatra(date4, shillong)) # Expected: [3, [5,1,14]] then [4,[26,31,13]]
return
def yoga_tests():
print(sys._getframe().f_code.co_name)
may22 = gregorian_to_jd(Date(2013, 5, 22))
print(yoga(date3, bangalore)) # Expected: Vishkambha (1), ends at 22:59:45
print(yoga(date2, bangalore)) # Expected: Siddha (21), ends at 29:10:56
print(yoga(may22, helsinki)) # [16, [6,20,33], 17, [27,21,58]]
def masa_tests():
print(sys._getframe().f_code.co_name)
jd = gregorian_to_jd(Date(2013, 2, 10))
aug17 = gregorian_to_jd(Date(2012, 8, 17))
aug18 = gregorian_to_jd(Date(2012, 8, 18))
    sep18 = gregorian_to_jd(Date(2012, 9, 18))
    may20 = gregorian_to_jd(Date(2012, 5, 20))
    may21 = gregorian_to_jd(Date(2012, 5, 21))
    print(masa(jd, bangalore)) # Pusya (10)
    print(masa(aug17, bangalore)) # Shravana (5) amavasya
    print(masa(aug18, bangalore)) # <NAME> [6, True]
    print(masa(sep18, bangalore)) # Normal Bhadrapada [6, False]
print(masa(may20, helsinki)) # Vaisakha [2]
print(masa(may21, helsinki)) # Jyestha [3]
def ascendant_tests():
print(sys._getframe().f_code.co_name)
jd = swe.julday(2015, 9, 24, 23 + 38/60.)
assert(ascendant(jd, bangalore) == [2, [4, 37, 10], [5, 4]])
jd = swe.julday(2015, 9, 25, 13 + 29/60. + 13/3600.)
assert(ascendant(jd, bangalore) == [8, [20, 23, 31], [20, 3]])
def navamsa_tests():
print(sys._getframe().f_code.co_name)
jd = swe.julday(2015, 9, 25, 13 + 29/60. + 13/3600.)
nv = navamsa(jd, bangalore)
expected = [[0, 11], [1, 5], [4, 1], [2, 2], [5, 4], [3, 10],
[6, 4], [10, 11], [9, 5], [7, 10], [8, 10]]
assert(nv == expected)
if __name__ == "__main__":
import sys
bangalore = Place(12.972, 77.594, +5.5)
shillong = Place(25.569, 91.883, +5.5)
helsinki = Place(60.17, 24.935, +2.0)
date1 = gregorian_to_jd(Date(2009, 7, 15))
date2 = gregorian_to_jd(Date(2013, 1, 18))
date3 = gregorian_to_jd(Date(1985, 6, 9))
date4 = gregorian_to_jd(Date(2009, 6, 21))
apr_8 = gregorian_to_jd(Date(2010, 4, 8))
apr_10 = gregorian_to_jd(Date(2010, 4, 10))
all_tests()
tithi_tests()
nakshatra_tests()
yoga_tests()
masa_tests()
ascendant_tests()
navamsa_tests()
    new_moon(date1, tithi(date1, bangalore)[0])
|
def handle(self, msgBytes):
    msg = disco_capnp.DiscoReq.from_bytes(msgBytes)
    which = msg.which()
    if which == 'actorReg':
        self.handleActorReg(msg)
    elif which == 'serviceReg':
        self.handleServiceReg(msg)
    elif which == 'serviceLookup':
        self.handleServiceLookup(msg)
    elif which == 'actorUnreg':
        self.handleActorUnreg(msg)
    elif which == 'groupJoin':
        self.handleGroupJoin(msg)
    else:
        pass  # unknown request type: ignore
Social Bonds, Self-Control, and Adult Criminality
Recent modifications to self-control theory suggest that influential factors (bonds) equate to self-control in the calculation of whether or not to engage in deviant behavior. Hirschi argued that self-control should fare better as a theory when it is operationalized as the number and salience of an individual’s social bonds, rather than as a cognitive scale, or count of previous acts, as suggested by the original theory. This study extends the control theory literature by assessing the impact of redefined self-control, as well as attitudinal self-control, on adult criminal behavior. Data analyzed were from Waves 10 and 11 of the National Youth Survey Family Study. Findings suggest that both forms of self-control (new and old) are equivalently predictive of adult crime, yet it is unlikely that they are capturing the same phenomenon during adulthood. Implications for control theory are discussed. |