def f(n):
    if n % 2 == 0:
        return n // 2
    else:
        return 3 * n + 1

s = int(input())
array = []
while s not in array:
    array.append(s)
    s = f(s)
print(len(array) + 1)
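A quick worked example of what this snippet does (assuming a positive-integer input): it iterates the Collatz map until a value repeats, then reports the number of stored values plus one.

# Worked example: entering 6 stores 6, 3, 10, 5, 16, 8, 4, 2, 1;
# the next value, f(1) = 4, is already stored, so the loop stops
# and the script prints 10 (len(array) + 1 = 9 + 1).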
What follows is a journal written by one of the operatives in the second XCOM campaign, Alan Robertson.

Operation Starving Shield – April 20th, 2035

Today's mission sees us heading back to East Africa, where we are tasked with recovering a data vault from an ADVENT facility in the slums district of Cairo. We recently all undertook more training on the Avenger and are now capable of taking five soldiers into the battlefield. So, today I am joined by Sgt. Kelly, Cpl. Ishikawa, Cpl. Girard, and Sq. Yosef.

As soon as we drop into the area, Central advises us that charges have been placed and we only have eight minutes to recover the data before it is destroyed. We're on an elevated highway, so Yosef moves up to the guardrail on the side and spots a sectoid and a trooper. Kelly drops over the guardrail and takes cover behind a barricade. Ishikawa does the same behind a different barricade and spots a stun lancer and another sectoid. Yosef drops down and takes cover behind a bench. Girard drops down and takes cover behind an information kiosk. I prefer to keep my elevation and take cover behind the guardrail.

We watch as the first group of enemies continues its patrol. All of us except for Kelly set into overwatch. She pulls out one of her axes, hurls it into the back of the sectoid, and kills him. Spooked, the trooper tries to run, but as soon as he moves I take him down with one quick sniper shot. The other sectoid and the stun lancer start to move forward. Ishikawa takes a shot and hits the sectoid, and Yosef misses the stun lancer. Kelly rushes from her position to the sectoid and takes him down with one nice axe swing. The stun lancer moves out of the building and discharges his stun baton on Kelly. Ishikawa, Girard, and Yosef move in closer. I line up a shot and take down the stun lancer with my sniper rifle. Central advises us that all hostiles are down, and we can focus on the data vault. Kelly moves in and hacks the vault to disarm the detonator.

We did an excellent job today. Kelly will spend a few days recovering from being hit by the stun baton, but again we're all coming home. Ishikawa is also being promoted to Sergeant and is learning how to Suppress the enemy. This allows him to take reaction fire if the suppressed unit moves, and gives it a penalty to its aim (kinda hard to aim with someone firing a barrage of bullets in your direction).

Author: Heath Markley

I have been an avid gamer for 32 years now. It started with the Atari 2600 my parents bought me as a child (my mom still says she regrets that decision), and I have never looked back. For my gaming, I enjoy RPG and strategy games the most, but I will play almost all games. Besides gaming, I am a family man (beautiful wife and three wonderful children) and a healthcare IT professional. So, gaming time is at a premium for me, but I make the most of every chance I get. Check out my YouTube channel here.
package vergissmeinicht.helper;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.document.*;
import com.amazonaws.services.dynamodbv2.model.*;
import org.json.*;

import java.util.*;

public final class VergissmeinichtDynamoDB {

    private static final Random RANDOM = new Random();
    private static final String USER_TABLE = "vergissmeinicht-user";
    private static final String MEMORY_TABLE = "vergissmeinicht-memories";
    private static final String USER_ID = "userId";
    private static final String JSON_MEMORY = "memory";
    private static final String DATE_ATTRIBUTE = "date";
    private static final String TAGS_ATTRIBUTE = "tags";
    private static final String FILE_ATTRIBUTE = "file";
    private static final String NAME_ATTRIBUTE = "name";

    private VergissmeinichtDynamoDB() {}

    public static String getIdentityId(AmazonDynamoDB db, String userId) {
        Map<String, AttributeValue> key = new HashMap<>();
        key.put(USER_ID, new AttributeValue(userId));
        GetItemRequest getItemRequest = new GetItemRequest()
                .withKey(key)
                .withTableName(USER_TABLE);
        Map<String, AttributeValue> returnItem = db.getItem(getItemRequest).getItem();
        return returnItem.get("identityId").getS();
    }

    public static void storeUserIds(DynamoDB dynamoDb, String userId, String identityId) {
        Item item = new Item()
                .withPrimaryKey(USER_ID, userId)
                .withString("identityId", identityId);
        dynamoDb.getTable(USER_TABLE).putItem(item);
    }

    public static void storeMemory(DynamoDB dynamoDb, JSONObject contents) {
        Item item = new Item()
                .withPrimaryKey(
                        USER_ID, contents.getJSONObject("user").getString("userID"),
                        "memoryId", getNextMemoryId(dynamoDb, contents.getJSONObject("user").getString("userID")))
                .withString(DATE_ATTRIBUTE, contents.getJSONObject(JSON_MEMORY).getString(DATE_ATTRIBUTE))
                .withString(TAGS_ATTRIBUTE, contents.getJSONObject(JSON_MEMORY).getString(TAGS_ATTRIBUTE))
                .withString(FILE_ATTRIBUTE, contents.getJSONObject(JSON_MEMORY).getString(FILE_ATTRIBUTE))
                .withString(NAME_ATTRIBUTE, contents.getJSONObject(JSON_MEMORY).getString(NAME_ATTRIBUTE));
        dynamoDb.getTable(MEMORY_TABLE).putItem(item);
    }

    public static String getFilenameForRandomMemory(String userId, AmazonDynamoDB db) {
        DynamoDB dynamoDB = new DynamoDB(db);
        int size = getNextMemoryId(dynamoDB, userId);
        if (size == 0) return null;
        int memoryId = RANDOM.nextInt(size);
        Map<String, AttributeValue> key = new HashMap<>();
        key.put(USER_ID, new AttributeValue(userId));
        key.put("memoryId", new AttributeValue().withN(String.valueOf(memoryId)));
        GetItemRequest getItemRequest = new GetItemRequest()
                .withKey(key)
                .withTableName(MEMORY_TABLE);
        Map<String, AttributeValue> returnItem = db.getItem(getItemRequest).getItem();
        return returnItem.get("file").getS();
    }

    public static String getFilenameForDate(String datum, String userId, AmazonDynamoDB db) {
        List<String> fileNames = new ArrayList<>();
        DynamoDB dynamoDB = new DynamoDB(db);
        Table table = dynamoDB.getTable(MEMORY_TABLE);
        ItemCollection<QueryOutcome> items = table.query(USER_ID, userId);
        for (Item item : items) {
            if (item.getString("date").equals(datum)) {
                fileNames.add(item.getString("file"));
            }
        }
        if (!fileNames.isEmpty()) {
            int index = RANDOM.nextInt(fileNames.size());
            return fileNames.get(index);
        }
        return null;
    }

    public static int getNextMemoryId(DynamoDB db, String userId) {
        Table table = db.getTable(MEMORY_TABLE);
        ItemCollection<QueryOutcome> items = table.query(USER_ID, userId);
        // no size method for ItemCollection?! Count the items by iterating.
        int count = 0;
        for (Item item : items) count++;
        return count;
    }
}
#include <iostream>
#include "mytime1.h"

Time::Time()
{
    hours = minutes = 0;
}

Time::Time(int h, int m)
{
    hours = h;
    minutes = m;
}

void Time::AddMin(int m)
{
    minutes += m;
    hours += minutes / 60;
    minutes %= 60;
}

void Time::AddHr(int h)
{
    hours += h;
}

void Time::Reset(int h, int m)
{
    hours = h;
    minutes = m;
}

Time Time::operator+(const Time & t) const
{
    Time sum;
    sum.minutes = minutes + t.minutes;
    sum.hours = hours + t.hours + sum.minutes / 60;
    sum.minutes %= 60;
    return sum;
}

void Time::show() const
{
    std::cout << hours << "hours," << minutes << "minutes";
}
Results

The Pentagon created a toll-free hotline to report potential exposure and seek medical screening. As of March 2015, 544 people had called the hotline to report being exposed.

In late 2014, the Pentagon formed a group, led by Brad R. Carson, under secretary of the Army, to identify service members potentially exposed to chemical weapons and screen them for care. The group issued guidelines this month that also cover troops exposed to chlorine. Mr. Carson apologized for the military's mishandling of past cases.

The group acknowledged that the Pentagon had previously been notified that more than 800 American troops believed they were exposed, but had failed to follow up thoroughly.

The services agreed to consider Purple Hearts for those exposed to makeshift bombs made from chemical weapons. On that basis, the Army approved a medal for former Specialist Richard Beasley.
package me.moodcat.core.mappers;

import static javax.ws.rs.core.Response.Status.NOT_FOUND;

import javax.persistence.EntityNotFoundException;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

/**
 * This ExceptionMapper maps {@link EntityNotFoundException EntityNotFoundExceptions} in such a way
 * that the client receives a descriptive JSON response and HTTP status code.
 */
@Provider
public class EntityNotFoundExceptionMapper extends AbstractExceptionMapper<EntityNotFoundException> {

    @Override
    public Response.Status getStatusCode() {
        return NOT_FOUND;
    }
}
from .vq_module import VQModule as VQ
/**
 * Util class for encoding/decoding boolean values for {@link EnumFacing}s at
 * the bit level.
 *
 * @author Oliver Kahrmann
 */
public final class FaceBitmap {

    private FaceBitmap() {
        // Util Class
    }

    public static boolean isSideBitSet(byte bitField, EnumFacing side) {
        int oper = 1 << side.ordinal();
        return (bitField & oper) != 0;
    }

    public static byte setSideBit(byte bitField, EnumFacing side) {
        int oper = 1 << side.ordinal();
        return (byte) (bitField | oper);
    }

    public static byte unsetSideBit(byte bitField, EnumFacing side) {
        // 63 == 0b111111 covers all six faces, so this masks off the side's bit.
        int oper = 63 - (1 << side.ordinal());
        return (byte) (bitField & oper);
    }
}
/// Convert a pyarray object to a Persia tensor.
/// This function will panic when it meets a datatype that numpy::Datatype can't support.
fn convert_pyarray_object_to_tensor_impl(
    pyarray_object: &PyAny,
    dtype: &PyAny,
    name: Option<String>,
    python: Python,
) -> TensorImpl {
    let dtype: &PyArrayDescr = dtype.downcast().expect(
        format!(
            "PersiaBatch datatype parse error {}, check PersiaBatch datatype support list to prevent datatype parse error.",
            name.clone().unwrap_or("unknow_data".to_string())
        )
        .as_str(),
    );
    let datatype = dtype.get_datatype().unwrap();

    unsafe {
        let pyarray_ptr = AsPyPointer::as_ptr(pyarray_object);
        gen_tensor_impl_by_datatype!(
            pyarray_ptr,
            datatype,
            python,
            name,
            (bool, Bool, BOOL),
            (f32, Float32, F32),
            (f64, Float64, F64),
            (i8, Int8, I8),
            (i16, Int16, I16),
            (i32, Int32, I32),
            (i64, Int64, I64),
            (u8, Uint8, U8),
            (u16, Uint16, U16),
            (u32, Uint32, U32),
            (u64, Uint64, U64)
        )
    }
}
// Copyright 2020 The Kubernetes Authors.
// SPDX-License-Identifier: Apache-2.0

package polling

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	cmdutil "k8s.io/kubectl/pkg/cmd/util"
	"k8s.io/kubectl/pkg/scheme"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/clusterreader"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/engine"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/event"
	"sigs.k8s.io/cli-utils/pkg/kstatus/polling/statusreaders"
	"sigs.k8s.io/cli-utils/pkg/kstatus/status"
	"sigs.k8s.io/cli-utils/pkg/object"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// NewStatusPoller creates a new StatusPoller using the given clusterreader and mapper. The StatusPoller
// will use the client for all calls to the cluster.
func NewStatusPoller(reader client.Reader, mapper meta.RESTMapper, o Options) *StatusPoller {
	setDefaults(&o)

	var statusReaders []engine.StatusReader
	statusReaders = append(statusReaders, o.CustomStatusReaders...)
	srs, defaultStatusReader := createStatusReaders(mapper)
	statusReaders = append(statusReaders, srs...)

	return &StatusPoller{
		engine: &engine.PollerEngine{
			Reader:               reader,
			Mapper:               mapper,
			DefaultStatusReader:  defaultStatusReader,
			StatusReaders:        statusReaders,
			ClusterReaderFactory: o.ClusterReaderFactory,
		},
	}
}

// NewStatusPollerFromFactory creates a new StatusPoller instance from the
// passed in factory.
func NewStatusPollerFromFactory(f cmdutil.Factory, o Options) (*StatusPoller, error) {
	config, err := f.ToRESTConfig()
	if err != nil {
		return nil, fmt.Errorf("error getting RESTConfig: %w", err)
	}

	mapper, err := f.ToRESTMapper()
	if err != nil {
		return nil, fmt.Errorf("error getting RESTMapper: %w", err)
	}

	c, err := client.New(config, client.Options{Scheme: scheme.Scheme, Mapper: mapper})
	if err != nil {
		return nil, fmt.Errorf("error creating client: %w", err)
	}

	return NewStatusPoller(c, mapper, o), nil
}

func setDefaults(o *Options) {
	if o.ClusterReaderFactory == nil {
		o.ClusterReaderFactory = engine.ClusterReaderFactoryFunc(clusterreader.NewCachingClusterReader)
	}
}

// Options can be provided when creating a new StatusPoller to customize the
// behavior.
type Options struct {
	// CustomStatusReaders specifies any implementations of the engine.StatusReader interface that will
	// be used to compute reconcile status for resources.
	CustomStatusReaders []engine.StatusReader

	// ClusterReaderFactory allows for custom implementations of the engine.ClusterReader interface
	// in the StatusPoller. The default implementation is the clusterreader.CachingClusterReader.
	ClusterReaderFactory engine.ClusterReaderFactory
}

// StatusPoller provides functionality for polling a cluster for status for a set of resources.
type StatusPoller struct {
	engine *engine.PollerEngine
}

// Poll will create a new statusPollerRunner that will poll all the resources provided and report their status
// back on the event channel returned. The statusPollerRunner can be cancelled at any time by cancelling the
// context passed in.
func (s *StatusPoller) Poll(ctx context.Context, identifiers object.ObjMetadataSet, options PollOptions) <-chan event.Event {
	return s.engine.Poll(ctx, identifiers, engine.Options{
		PollInterval: options.PollInterval,
	})
}

// PollOptions defines the levers available for tuning the behavior of the
// StatusPoller.
type PollOptions struct {
	// PollInterval defines how often the PollerEngine should poll the cluster for the latest
	// state of the resources.
	PollInterval time.Duration
}

// createStatusReaders creates an instance of all the statusreaders. This includes a set of statusreaders for
// a particular GroupKind, and a default status reader used for all resource types that do not have
// a specific statusreader.
// TODO: We should consider making the registration more automatic instead of having to create each of them
// here. Also, it might be worth creating them on demand.
func createStatusReaders(mapper meta.RESTMapper) ([]engine.StatusReader, engine.StatusReader) {
	defaultStatusReader := statusreaders.NewGenericStatusReader(mapper, status.Compute)
	replicaSetStatusReader := statusreaders.NewReplicaSetStatusReader(mapper, defaultStatusReader)
	deploymentStatusReader := statusreaders.NewDeploymentResourceReader(mapper, replicaSetStatusReader)
	statefulSetStatusReader := statusreaders.NewStatefulSetResourceReader(mapper, defaultStatusReader)

	statusReaders := []engine.StatusReader{
		deploymentStatusReader,
		statefulSetStatusReader,
		replicaSetStatusReader,
	}

	return statusReaders, defaultStatusReader
}
/**
 * The <code>DecoderInfo</code> object contains the necessary data to
 * initialize a decoder. A track either contains a <code>DecoderInfo</code> or a
 * byte-Array called the 'DecoderSpecificInfo', which is e.g. used for AAC.
 *
 * The <code>DecoderInfo</code> object received from a track is a subclass of
 * this class depending on the <code>Codec</code>.
 *
 * <code>
 * AudioTrack track = (AudioTrack) movie.getTrack(AudioCodec.AC3);
 * AC3DecoderInfo info = (AC3DecoderInfo) track.getDecoderInfo();
 * </code>
 *
 * @author in-somnia
 */
public abstract class DecoderInfo {

    static DecoderInfo parse(CodecSpecificBox css) {
        final long l = css.getType();
        final DecoderInfo info;
        if (l == BoxTypes.H263_SPECIFIC_BOX) info = new H263DecoderInfo(css);
        else if (l == BoxTypes.AMR_SPECIFIC_BOX) info = new AMRDecoderInfo(css);
        else if (l == BoxTypes.EVRC_SPECIFIC_BOX) info = new EVRCDecoderInfo(css);
        else if (l == BoxTypes.QCELP_SPECIFIC_BOX) info = new QCELPDecoderInfo(css);
        else if (l == BoxTypes.SMV_SPECIFIC_BOX) info = new SMVDecoderInfo(css);
        else if (l == BoxTypes.AVC_SPECIFIC_BOX) info = new AVCDecoderInfo(css);
        else if (l == BoxTypes.AC3_SPECIFIC_BOX) info = new AC3DecoderInfo(css);
        else if (l == BoxTypes.EAC3_SPECIFIC_BOX) info = new EAC3DecoderInfo(css);
        else info = new UnknownDecoderInfo();
        return info;
    }

    private static class UnknownDecoderInfo extends DecoderInfo {
    }
}
Starting a new corporation is not an easy task. Even for veterans of Eve, beginning something new while creating the infrastructure and willing a corporation into existence requires tremendous effort. Combine that with the monumental task of building an alliance from scratch at the same time, and even a veteran is in for an extremely demanding challenge. My name is Kasken, and I have spent the last two months trying to accomplish just that. I began with a month-old corporation and a vision: Eve has become stagnant and toxic. Elitist attitudes and obnoxious alliance chats have become the norm. I started building my own corporation, and then my own alliance, to create a place in which I could once again enjoy the wonders of Eve.

Ignoring even the challenges of growing and building a corporation at the same time, alliance challenges are extremely intense by themselves. For starters, how does one recruit corporations into a small, irrelevant alliance no one has heard of? Alliances face the same recruitment issues corporations do, only on a grander scale. Diplomatic Immunity is my alliance that faced and still faces these issues. We began as a single-corporation alliance with no notoriety or member count to speak of. Even with experienced leadership and contacts around Eve, convincing a corporation to join an alliance that currently has no assets, very little infrastructure, and nothing to actually offer is…beyond challenging.

The question of "how do I grow my alliance?" glares at me on a daily basis. The usual methods were used, of course: post on recruitment forums, spam the recruitment channel, and reach out to any known contacts. With very little to offer, and Eve's current risk-averse attitude, convincing a corporation to take such a huge risk for such little potential reward is almost impossible. Eve has become risk averse in the sense that people tend not to want to risk assets, pilots, and in-game gains on the chance of something potentially being better. Over the last decade Eve has slowly consolidated into larger power blocs. Humanity as a whole groups together for safety and numbers, and Eve is no exception to that instinct. People hoard their treasures in Eve, like Gollum and his precious ring. Convincing someone to part with this treasure to promote a chance of future success feels like an impossible task.

Once a corporation is convinced to join and does so, the long, slow process of building momentum begins. As each corporation joins and the member count goes up, it becomes easier to recruit corporations. Eventually corporations begin to approach the alliance and recruitment becomes almost self-sustaining. This critical mass takes quite a while to reach, however. For example, my alliance is not yet at this stage, as several factors play into self-sustainability. Not only must the member count be there, but relevance in the world of Eve, killboard stats, SRP programs, and general reputation all come into play. The majority of the things a corporation looks for when joining an alliance, especially a PvP corporation, require significant pre-existing membership.

SRP programs are also huge for PvP corporations. In order to fund these programs, an alliance must be able to take and hold "money moons" to support the cost of replacing ships. A group cannot take and hold moons without the member base to do so. Gaining alliance-level income often means managing a substantial number of smaller, less lucrative moons as a temporary measure. No one wants to deal with 100 platinum moons to maintain alliance-level income.

Quality over quantity is also a question that must be answered. Mass-recruiting corporations could turn an alliance into a Brave-style noob fest, where thousands of members are there, yet all flying Maulus and Atron frigates. Being overly picky about quality creates the opposite issue: an alliance that can fly all the high-end ships but lacks the numbers to make them work realistically as doctrine. Deciding where to draw the line between needing member corporations and the level of skill the alliance requires is a tough thing to do. It is a fine line between keeping the current members, growing, and allowing the alliance to progress in a fashion that works for everyone. Diplomatic Immunity is still struggling slightly with recruiting and retaining corporations as a result of trying to keep slightly higher standards in order to further our own goals. This is a problem I personally have yet to solve, as I have not found a reliable way to convince corporations that taking a chance is the best way to go. We have a decent recruitment pitch and a solid plan for achieving our goals, but pushing corporations over the edge into taking the plunge is not an easy process. Each group must find its own approach that works. Diplomatic Immunity is still working on this as well, as there is no easy solution.

Another subject I have personally witnessed cause major issues is communication. As an executor, I have certain ideas, rules, and alliance-level plans. As additional leadership is appointed, if communication is not extremely clear and flowing, people start diverging. It is barely noticeable at first, but as each person continues along their own path and goals, major splits can occur, resulting in bitter arguments that end in a sundering. It is absolutely critical that alliance leadership communicates with each other and with their member corporations. Corporations that don't know or understand the "big picture" start to feel lost, bored, and eventually unhappy. When information doesn't flow downhill, people lose their way, and that is the beginning of the end. That is a death sentence for a young alliance struggling to get its feet under it. Without this critical communication, corporations begin to drift and die. I have learned this first hand. The executor, as well as anyone in leadership, must communicate effectively, clearly, and most importantly… often. Leaders must make a conscious effort to keep everyone in the loop and maintain their understanding of the vision for the alliance.

A fresh alliance starting from scratch faces significant challenges getting off the ground. The odds are stacked against success, and it takes extremely dedicated leadership and players willing to shoulder a huge burden to keep things moving. Burnout is a strong possibility while trying to build and maintain a startup alliance. Without the amazing directors we have helping us out, our alliance would be in significantly worse shape than it currently is.

Commitment is another key to success. Things will not go smoothly. There will be major obstacles and hurdles. The key is not to give up. Unexpected things will happen. Some corporations will not have the fortitude to gut out the rough patches and will leave. The alliance leadership has to have the determination to weather these storms and keep building and progressing regardless of the setbacks. Diplomatic Immunity lost a major corporation and two smaller corporations to Triumvirate. People thought we were "fail cascading" because we lost roughly a third of our member base. I could have closed our doors and said: that is it, DIP is done. Instead, we now own sovereignty in five systems, including two stations. A new corporation is in the process of joining, and we're in discussions with others. Leadership that is in it until the end, working for it knowing how difficult it will be, is completely critical. Regardless of who joins or leaves, I, for example, have put billions of ISK and real-world money into the alliance for IT services. Commitment is key to making sure an alliance succeeds and continues along the planned vision.

Is the end result worth it? That is for each Eve player to decide. For me, as a bittervet, this was my chance to finally and fully make my mark on Eve. It has given me goals and a reason to continue playing again. I absolutely love the challenge, and as hard and frustrating as it is, I currently wouldn't want it any other way. Creating a corporation and alliance from nothing is an extremely difficult but very fulfilling undertaking, and it must be begun knowing the inherent extreme difficulty. To any and all who are willing to go this route, I encourage you, but without the ability to dedicate a significant chunk of personal resources and time, it will fail. However, if you can get it to succeed…
/*#pragma config(Motor, port2, FrontLeft, tmotorVex393HighSpeed_MC29, openLoop)
#pragma config(Motor, port3, FourBar, tmotorVex393HighSpeed_MC29, openLoop)
#pragma config(Motor, port4, BackLeft, tmotorVex393HighSpeed_MC29, openLoop)
#pragma config(Motor, port5, Flipper, tmotorVex393HighSpeed_MC29, openLoop)
#pragma config(Motor, port6, Intake, tmotorVex393HighSpeed_MC29, openLoop)
#pragma config(Motor, port7, BackRight, tmotorVex393HighSpeed_MC29, openLoop)
#pragma config(Motor, port8, Launcher, tmotorVex393_MC29, openLoop)
#pragma config(Motor, port9, FrontRight, tmotorVex393HighSpeed_MC29, openLoop)
//*!!Code generated by Luke */

void driveForward(int x, int power){
    int tickCount = (x*180)/(3.1415926*2);
    SensorValue[leftDriveEnc]=0;
    SensorValue[rightDriveEnc]=0;
    while(SensorValue[leftDriveEnc]>-tickCount||SensorValue[rightDriveEnc]<tickCount){
        motor[FrontLeft]=motor[BackLeft]=power;
        motor[FrontRight]=motor[BackRight]=-power;
    }
    motor[FrontLeft]=motor[BackLeft]=0;
    motor[FrontRight]=motor[BackRight]=0;
}

void driveBackwards(int x, int power){
    int tickCount = (x*180)/(3.1415926*2);
    SensorValue[leftDriveEnc]=0;
    SensorValue[rightDriveEnc]=0;
    while(SensorValue[leftDriveEnc]<tickCount||SensorValue[rightDriveEnc]>-tickCount){
        motor[FrontLeft]=motor[BackLeft]=-power;
        motor[FrontRight]=motor[BackRight]=power;
    }
    motor[FrontLeft]=motor[BackLeft]=0;
    motor[FrontRight]=motor[BackRight]=0;
}

task auton()
{
    motor[FrontLeft]=motor[BackLeft]=0;   // This just makes sure that nothing is running at the start
    motor[FrontRight]=motor[BackRight]=0;

    if(SensorValue[redAuto]==1){ // start RED auton
        driveForward(46, 80);    // First number is how far forward (should be in inches), second number is power!
        driveBackwards(65, 80);
        motor[FrontLeft]=motor[BackLeft]=-80;  // Turn power
        motor[FrontRight]=motor[BackRight]=-80;
        wait1Msec(530);                        // Turn time (ms)
        //motor[FourBar]=15; // Ignore for now
        motor[FrontLeft]=motor[BackLeft]=0;    // Resets turn power to zero
        motor[FrontRight]=motor[BackRight]=0;
        driveBackwards(45, 80);
        motor[FrontLeft]=motor[BackLeft]=0;    // Stops driving
        motor[FrontRight]=motor[BackRight]=0;
        //motor[FourBar]=0;
        wait1Msec(10000);                      // Waits for end
    }

    if(SensorValue[blueAuto]==1){ // start BLUE auton
        driveForward(44, 80);
        driveBackwards(64, 80);
        motor[FrontLeft]=motor[BackLeft]=80;   // Turn power
        motor[FrontRight]=motor[BackRight]=80;
        //motor[FourBar]=70; // Ignore for now
        wait1Msec(555);                        // Turn time (ms)
        //motor[FourBar]=15;
        motor[FrontLeft]=motor[BackLeft]=0;    // Resets turn power
        motor[FrontRight]=motor[BackRight]=0;
        driveBackwards(32, 80);
        wait1Msec(10000);                      // Waits until end
    }

    if(SensorValue[blueAuto]==0&&SensorValue[redAuto]==0){
        // No jumper set: hold the drive motors at zero for 15 seconds
        int time=nSysTime;
        while(nSysTime<time+15000){
            motor[FrontLeft]=motor[BackLeft]=0;
            motor[FrontRight]=motor[BackRight]=0;
        }
    }
}
package com.example;

import io.undertow.Undertow;
import io.undertow.server.handlers.PathHandler;
import io.undertow.servlet.Servlets;
import io.undertow.servlet.api.DeploymentInfo;
import io.undertow.servlet.api.DeploymentManager;
import io.undertow.servlet.api.ServletContainer;
import io.undertow.servlet.api.ServletInfo;
import org.jboss.resteasy.plugins.server.servlet.HttpServlet30Dispatcher;
import org.jboss.resteasy.spi.ResteasyDeployment;

import javax.servlet.ServletException;
import javax.ws.rs.core.Application;

public class Server {

    public static void main(String[] args) throws ServletException {
        System.setProperty("log4j.configurationFile", "log4j2.xml");

        int port = 8080;
        PathHandler rootPathHandler = new PathHandler();
        Undertow server = Undertow.builder()
                .addHttpListener(port, "0.0.0.0") // 0.0.0.0 binds to all interfaces
                .setHandler(rootPathHandler)
                .build();
        server.start();

        Application application = new RestApplication();
        ResteasyDeployment deployment = new ResteasyDeployment();
        deployment.setApplication(application);

        ServletInfo resteasyServlet = Servlets.servlet("ResteasyServlet", HttpServlet30Dispatcher.class)
                .setAsyncSupported(true)
                .setLoadOnStartup(1)
                .addMapping("/");

        DeploymentInfo deploymentInfo = new DeploymentInfo()
                .addServletContextAttribute(ResteasyDeployment.class.getName(), deployment)
                .addServlet(resteasyServlet)
                .setDeploymentName("RestServices")
                .setContextPath("/rest")
                .setClassLoader(Undertow.class.getClassLoader());

        DeploymentManager deploymentManager = ServletContainer.Factory.newInstance().addDeployment(deploymentInfo);
        deploymentManager.deploy();
        rootPathHandler.addPrefixPath(deploymentInfo.getContextPath(), deploymentManager.start());
    }
}
package org.semanticweb.cogExp.core;

import java.util.ArrayList;
import java.util.List;

public class AbstractSequentInferenceRule implements SequentInferenceRule {

    @Override
    public boolean isApplicable(Sequent s) {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public List<SequentPosition> findPositions(Sequent s) {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public List<RuleBinding> findRuleBindings(Sequent s, boolean... saturate) {
        return findRuleBindings(s);
    }

    @Override
    public List<RuleBinding> findRuleBindings(Sequent s) {
        List<SequentPosition> positions = findPositions(s);
        ArrayList<RuleBinding> bindings = new ArrayList<RuleBinding>();
        for (SequentPosition pos : positions) {
            RuleBinding binding = new RuleBinding();
            binding.insertPosition("SINGLEPOS", pos);
            bindings.add(binding);
        }
        return bindings;
    }

    public SequentList computePremises(Sequent s) {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public SequentList computePremises(Sequent sequent, SequentPosition position) throws Exception {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public SequentList computePremises(Sequent sequent, RuleBinding binding) throws Exception {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public String getName() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public String getShortName() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public List<RuleKind> qualifyRule() {
        return null;
    }

    @Override
    public List<RuleApplicationResults> computeRuleApplicationResults(Sequent sequent, RuleBinding binding) throws Exception {
        return null;
    }

    public void clearCaches() {}
}
// TODO: coord on triangle, see Embree's documentation
void Scene::getIntersectionInfo(const Intersection &intersection, IntersectionInfo *info)
{
    info->primitive = makeRef<const Primitive>(&fetchIntersectedPrimitive(intersection));
    info->distance = intersection.hitDistance();
    info->wo = Vec3f(-intersection.rayHit.ray.dir_x,
                     -intersection.rayHit.ray.dir_y,
                     -intersection.rayHit.ray.dir_z);
    // hitpoint = ray origin + direction * distance (wo points back along the ray,
    // so multiplying by the negated hit distance recovers origin + dir * t)
    info->hitpoint = Vec3f(intersection.rayHit.ray.org_x,
                           intersection.rayHit.ray.org_y,
                           intersection.rayHit.ray.org_z)
                     + info->wo * -intersection.hitDistance();
    info->uv = Point2f(intersection.rayHit.hit.u, intersection.rayHit.hit.v);
    info->Ng = info->primitive->Ng;
    // Interpolate the shading normal from the vertex normals at the hit's barycentric uv.
    info->normal = pointOnTriangle(info->primitive->normal[0], info->primitive->normal[1],
                                   info->primitive->normal[2], info->uv.x(), info->uv.y());
    info->normal.normalize();
    info->geomID = intersection.geomID();
    info->primID = intersection.primID();
    info->bsdf = materialList[info->primitive->materialId].get();
}
import { Component, OnInit } from "@angular/core";

import { Comment } from "./comment.model";
import { CommentService } from "./comment.service";

@Component({
    selector: 'app-comment-list',
    template: `
        <div class="col-md-8 col-md-offset-2">
            <app-comment [comment]="comment" *ngFor="let comment of comments"></app-comment>
        </div>
    `
})
export class CommentListComponent implements OnInit {
    comments: Comment[];

    constructor(private commentService: CommentService) {}

    // ngOnInit() {
    //     this.commentService.getComments()
    //         .subscribe(
    //             (comments: Comment[]) => {
    //                 this.comments = comments;
    //             }
    //         );
    // }

    ngOnInit() {
        this.comments = this.commentService.getComments();
    }
}
/// The distinct identification string as required by regulation for a human cell,
/// tissue, or cellular and tissue-based product.
pub fn distinct_identifier(&self) -> Option<&str> {
    if let Some(Value::String(string)) = self.value.get("distinctIdentifier") {
        return Some(string);
    }
    None
}
import {ADTBase} from '../base/base';
import {ADTPriorityQueueChildren} from './priority-queue-children';
import {ADTPriorityQueueComparator} from './priority-queue-comparator';
import {ADTPriorityQueueOptions} from './priority-queue-options';
import {ADTPriorityQueueState} from './priority-queue-state';
import {ADTQueryFilter} from '../query/query-filter';
import {ADTQueryOptions} from '../query/query-options';
import {ADTQueryResult} from '../query/query-result';

export class ADTPriorityQueue<T> implements ADTBase<T> {
	public state: ADTPriorityQueueState<T>;
	public readonly comparator: ADTPriorityQueueComparator<T>;

	constructor(comparator: ADTPriorityQueueComparator<T>, options?: ADTPriorityQueueOptions<T>) {
		if (typeof comparator !== 'function') {
			throw new Error('Must have a comparator function for priority queue to operate properly');
		}

		this.comparator = comparator;
		this.state = this.parseOptions(options);
		this.heapify();
	}

	public parseOptions(options?: ADTPriorityQueueOptions<T>): ADTPriorityQueueState<T> {
		const state = this.parseOptionsState(options);
		const finalState = this.parseOptionsOther(state, options);
		return finalState;
	}

	public parseOptionsState(options?: ADTPriorityQueueOptions<T>): ADTPriorityQueueState<T> {
		const state: ADTPriorityQueueState<T> = this.getDefaultState();

		if (!options) return state;

		let parsed: ADTPriorityQueueState<T> | Array<string> | null = null;
		let result: ADTPriorityQueueState<T> | null = null;

		if (typeof options.serializedState === 'string') {
			parsed = this.parseOptionsStateString(options.serializedState)!;

			if (Array.isArray(parsed)) {
				throw new Error(parsed.join('\n'));
			}

			result = parsed;
		}

		if (result) {
			state.elements = result.elements;
		}

		return state;
	}

	public parseOptionsStateString(data: string): ADTPriorityQueueState<T> | Array<string> | null {
		if (typeof data !== 'string' || data === '') return null;

		let result: ADTPriorityQueueState<T> | Array<string> | null = null;
		let parsed: ADTPriorityQueueState<T> | null = null;
		let errors: Array<string> = [];

		try {
			parsed = JSON.parse(data);

			if (parsed) {
				errors = this.getStateErrors(parsed);
			}

			if (errors.length) {
				throw new Error('state is not a valid ADTPriorityQueueState');
			}

			result = parsed;
		} catch (error) {
			result = [error.message].concat(errors);
		}

		return result;
	}

	public parseOptionsOther(s: ADTPriorityQueueState<T>, options?: ADTPriorityQueueOptions<T>): ADTPriorityQueueState<T> {
		let state: ADTPriorityQueueState<T> | null = s;

		if (!s) {
			state = this.getDefaultState();
		}

		if (!options) return state;

		if (options.elements && Array.isArray(options.elements)) {
			state.elements = options.elements.slice();
		}

		return state;
	}

	public getDefaultState(): ADTPriorityQueueState<T> {
		const state: ADTPriorityQueueState<T> = {
			type: 'pqState',
			elements: []
		};

		return state;
	}

	public getStateErrors(state: ADTPriorityQueueState<T>): Array<string> {
		const errors: Array<string> = [];

		if (!state) {
			errors.push('state is null or undefined');
			return errors;
		}

		if (state.type !== 'pqState') {
			errors.push('state type must be pqState');
		}

		if (!Array.isArray(state.elements)) {
			errors.push('state elements must be an array');
		}

		return errors;
	}

	public isValidState(state: ADTPriorityQueueState<T>): boolean {
		const errors = this.getStateErrors(state);

		if (errors.length) return false;

		return true;
	}

	public queryDelete(query: ADTQueryResult<T>): T | null {
		if (!query || !query.index) return null;

		const index = query.index();

		if (index === null) return null;

		this.swapNodes(index, this.size() - 1);
		this.state.elements.pop();

		if (this.size() > 1) {
			this.fixHeap(index);
		}

		return query.element;
	}

	public queryIndex(query: T): number | null {
		const index = this.state.elements.findIndex((element) => {
			return element === query;
		});

		if (index < 0) return null;

		return index;
	}

	public queryOptions(opts?: ADTQueryOptions): Required<ADTQueryOptions> {
		const options: Required<ADTQueryOptions> = {
			limit: Infinity
		};

		if (opts?.limit && typeof opts.limit === 'number' && opts.limit >= 1) {
			options.limit = Math.round(opts.limit);
		}

		return options;
	}

	public areNodesValidHeap(nodeIndex: number | null, nextIndex: number | null): boolean {
		if (typeof nextIndex !== 'number') return true;
		if (typeof nodeIndex !== 'number') return true;

		const nodeValue = this.state.elements[nodeIndex];
		const nextValue = this.state.elements[nextIndex];
		const startFromTop = nodeIndex < nextIndex;

		if (nodeValue == null) return startFromTop;
		if (nextValue == null) return !startFromTop;

		if (startFromTop) {
			return this.comparator(this.state.elements[nodeIndex], this.state.elements[nextIndex]);
		} else {
			return this.comparator(this.state.elements[nextIndex], this.state.elements[nodeIndex]);
		}
	}

	public fixHeap(nodeIndex: number | null): void {
		if (this.size() <= 1) return;
		if (typeof nodeIndex !== 'number') return;
		if (nodeIndex < 0) return;
		if (nodeIndex >= this.size()) return;
		if (nodeIndex % 1 !== 0) return;

		const startFromTop = nodeIndex < Math.floor(this.size() / 2);
		let nextIndex = this.getNextIndex(startFromTop, nodeIndex);

		while (this.areNodesValidHeap(nodeIndex, nextIndex) === false) {
			this.swapNodes(nodeIndex, nextIndex);
			nodeIndex = nextIndex;
			nextIndex = this.getNextIndex(startFromTop, nodeIndex);
		}
	}

	public getChildNodesIndexes(nodeIndex: number | null): ADTPriorityQueueChildren {
		if (typeof nodeIndex !== 'number') return {left: null, right: null};
		if (nodeIndex < 0) return {left: null, right: null};
		if (nodeIndex >= this.size()) return {left: null, right: null};
		if (nodeIndex % 1 !== 0) return {left: null, right: null};

		const childOneIndex = nodeIndex * 2 + 1;
		const childTwoIndex = nodeIndex * 2 + 2;

		if (childOneIndex >= this.size()) return {left: null, right: null};
		if (childTwoIndex >= this.size()) return {left: childOneIndex, right: null};

		return {left: childOneIndex, right: childTwoIndex};
	}

	public getNextIndex(startFromTop: boolean, nodeIndex: number | null): number | null {
		if (!startFromTop) {
			return this.getParentNodeIndex(nodeIndex);
		}

		const childIndexes = this.getChildNodesIndexes(nodeIndex);

		if (childIndexes.left === null || childIndexes.right === null) return childIndexes.left;
		if (this.state.elements[childIndexes.left] === null) return childIndexes.right;
		if (this.state.elements[childIndexes.right] === null) return childIndexes.left;

		if (this.comparator(this.state.elements[childIndexes.left], this.state.elements[childIndexes.right])) {
			return childIndexes.left;
		} else {
			return childIndexes.right;
		}
	}

	public getParentNodeIndex(nodeIndex: number | null): number | null {
		if (typeof nodeIndex !== 'number') return null;
		if (nodeIndex <= 0) return null;
		if (nodeIndex >= this.size()) return null;
		if (nodeIndex % 1 !== 0) return null;

		return Math.floor((nodeIndex - 1) / 2);
	}

	public isHeapSorted(): boolean {
		let result = true;
		const size = this.getParentNodeIndex(this.size() - 1);

		if (!size) return true;

		for (let i = 0; i <= size; i++) {
			const child = this.getNextIndex(true, i);
			result = result && this.areNodesValidHeap(i, child);
		}

		return result;
	}

	public heapify(): void {
		if (this.size() <= 1) return;
		if (this.isHeapSorted()) return;

		let nodeIndex = this.getParentNodeIndex(this.size() - 1);

		while (nodeIndex !== null && nodeIndex >= 0) {
			this.fixHeap(nodeIndex);
			nodeIndex--;
		}
	}

	public swapNodes(nodeOneIndex: number | null, nodeTwoIndex: number | null): void {
		if (typeof nodeOneIndex !== 'number') return;
		if (typeof nodeTwoIndex !== 'number') return;
		if (nodeOneIndex < 0 || nodeTwoIndex < 0) return;
		if (nodeOneIndex >= this.size() || nodeTwoIndex >= this.size()) return;
		if (nodeOneIndex % 1 !== 0 || nodeTwoIndex % 1 !== 0) return;
		if (nodeOneIndex === nodeTwoIndex) return;

		const nodeOneInfo = this.state.elements[nodeOneIndex];
		this.state.elements[nodeOneIndex] = this.state.elements[nodeTwoIndex];
		this.state.elements[nodeTwoIndex] = nodeOneInfo;
	}

	public clearElements(): ADTPriorityQueue<T> {
		this.state.elements = [];
		return this;
	}

	public forEach(func: (element: T, index: number, arr: T[]) => void, thisArg?: any): ADTPriorityQueue<T> {
		let boundThis = this;

		if (thisArg) {
			boundThis = thisArg;
		}

		this.state.elements.forEach((elem, idx) => {
			func.call(boundThis, elem, idx, this.state.elements);
		}, boundThis);

		return this;
	}

	public front(): T | null {
		if (this.size() === 0) return null;

		return this.state.elements[0];
	}

	public pop(): T | null {
		if (this.size() === 0) return null;
		if (this.size() === 1) {
			return this.state.elements.pop()!;
		}

		const highestPriority = this.front();

		this.swapNodes(0, this.size() - 1);
		this.state.elements.pop();
		this.fixHeap(0);

		return highestPriority;
	}

	public push(element: T): ADTPriorityQueue<T> {
		this.state.elements.push(element);
		this.fixHeap(this.size() - 1);

		return this;
	}

	public query(filters: ADTQueryFilter<T> | ADTQueryFilter<T>[], opts?: ADTQueryOptions): ADTQueryResult<T>[] {
		const resultsArray: ADTQueryResult<T>[] = [];
		const options = this.queryOptions(opts);

		this.forEach((element, index) => {
			let take = false;

			if (resultsArray.length >= options.limit) {
				return false;
			}

			if (Array.isArray(filters)) {
				take =
					!!filters.length &&
					filters.every((filter) => {
						return filter(element);
					});
			} else {
				take = filters(element);
			}

			if (!take) {
				return false;
			}

			const result: ADTQueryResult<T> = {} as ADTQueryResult<T>;
			result.element = element;
			result.key = (): string | null => null;
			result.index = this.queryIndex.bind(this, element);
			result.delete = this.queryDelete.bind(this, result);
			resultsArray.push(result);
		});

		return resultsArray;
	}

	public reset(): ADTPriorityQueue<T> {
		this.clearElements();
		this.state.type = 'pqState';

		return this;
	}

	public size(): number {
		if (!this.isValidState(this.state)) return 0;

		return this.state.elements.length;
	}

	public stringify(): string | null {
		if (!this.isValidState(this.state)) return null;

		return JSON.stringify(this.state);
	}
}
def crossover_chromosomes(
    self, parent_1, parent_2
) -> Tuple[Chromosome, Chromosome]:
    if self.crossover_type == "single_point":
        crossover = self.single_point_crossover
    else:
        # Only single-point crossover is implemented, so it also serves as the fallback.
        crossover = self.single_point_crossover

    if random.random() < self.crossover_probability:
        child_1, child_2 = crossover(parent_1, parent_2)
    else:
        # No crossover: the children are plain copies of the parents.
        child_1 = Chromosome(genes=copy(parent_1.genes))
        child_2 = Chromosome(genes=copy(parent_2.genes))

    return child_1, child_2
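The single_point_crossover method referenced above is not included in this excerpt. A minimal sketch of what such a method on the same class might look like, assuming genes is a list and Chromosome accepts a genes keyword argument:

def single_point_crossover(self, parent_1, parent_2):
    # Pick a cut point strictly inside the gene sequence so both parents contribute.
    point = random.randint(1, len(parent_1.genes) - 1)
    # Each child takes one parent's prefix and the other parent's suffix.
    child_1 = Chromosome(genes=parent_1.genes[:point] + parent_2.genes[point:])
    child_2 = Chromosome(genes=parent_2.genes[:point] + parent_1.genes[point:])
    return child_1, child_2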
/**
 * Checks whether this drain is similar to the one provided.
 * @param drain Drain to compare with
 * @return TRUE if they are similar
 */
private boolean similar(final Temporary drain) {
    return this.work.owner().equals(drain.work.owner())
        && this.work.rule().equals(drain.work.rule())
        && this.marker.equals(drain.marker);
}
"""Convert problem settings.""" import copy from test.core.derivatives.utils import get_available_devices from typing import Any, Iterator, List, Tuple import torch from torch import Tensor from torch.nn.parameter import Parameter from backpack import extend from backpack.utils.subsampling import subsample def make_test_problems(settings): """Creates test problems from settings. Args: settings (list[dict]): raw settings of the problems Returns: list[ExtensionTestProblem] """ problem_dicts = [] for setting in settings: setting = add_missing_defaults(setting) devices = setting["device"] for dev in devices: problem = copy.deepcopy(setting) problem["device"] = dev problem_dicts.append(problem) return [ExtensionsTestProblem(**p) for p in problem_dicts] def add_missing_defaults(setting): """Create full settings from setting. Args: setting (dict): configuration dictionary Returns: dict: full settings. Raises: ValueError: if no proper settings """ required = ["module_fn", "input_fn", "loss_function_fn", "target_fn"] optional = { "id_prefix": "", "seed": 0, "device": get_available_devices(), } for req in required: if req not in setting.keys(): raise ValueError("Missing configuration entry for {}".format(req)) for opt, default in optional.items(): if opt not in setting.keys(): setting[opt] = default for s in setting.keys(): if s not in required and s not in optional.keys(): raise ValueError("Unknown config: {}".format(s)) return setting class ExtensionsTestProblem: """Class providing functions and parameters.""" def __init__( self, input_fn, module_fn, loss_function_fn, target_fn, device, seed, id_prefix, ): """Collection of information required to test extensions. Args: input_fn (callable): Function returning the network input. module_fn (callable): Function returning the network. loss_function_fn (callable): Function returning the loss module. target_fn (callable): Function returning the labels. device (torch.device): Device to run on. seed (int): Random seed. id_prefix (str): Extra string added to test id. """ self.module_fn = module_fn self.input_fn = input_fn self.loss_function_fn = loss_function_fn self.target_fn = target_fn self.device = device self.seed = seed self.id_prefix = id_prefix def set_up(self): """Set up problem from settings.""" torch.manual_seed(self.seed) self.model = self.module_fn().to(self.device) self.input = self.input_fn().to(self.device) self.target = self.target_fn().to(self.device) self.loss_function = self.loss_function_fn().to(self.device) def tear_down(self): """Delete all variables after problem.""" del self.model, self.input, self.target, self.loss_function def make_id(self): """Needs to function without call to `set_up`. Returns: str: id of problem """ prefix = (self.id_prefix + "-") if self.id_prefix != "" else "" return prefix + "dev={}-in={}-model={}-loss={}".format( self.device, tuple(self.input_fn().shape), self.module_fn(), self.loss_function_fn(), ).replace(" ", "") def forward_pass( self, subsampling: List[int] = None ) -> Tuple[Tensor, Tensor, Tensor]: """Do a forward pass. Return input, output, and parameters. If sub-sampling is None, the forward pass is calculated on the whole batch. Args: subsampling: Indices of selected samples. Default: ``None`` (all samples). 
Returns: input, output, and loss of the forward pass """ input = self.input.clone() target = self.target.clone() if subsampling is not None: batch_axis = 0 input = subsample(self.input, dim=batch_axis, subsampling=subsampling) target = subsample(self.target, dim=batch_axis, subsampling=subsampling) output = self.model(input) loss = self.loss_function(output, target) return input, output, loss def extend(self): """Extend module of problem.""" self.model = extend(self.model) self.loss_function = extend(self.loss_function) @staticmethod def __get_reduction_factor(loss: Tensor, unreduced_loss: Tensor) -> float: """Return the factor used to reduce the individual losses. Args: loss: Reduced loss. unreduced_loss: Unreduced loss. Returns: Reduction factor. Raises: RuntimeError: if either mean or sum cannot be determined """ mean_loss = unreduced_loss.flatten().mean() sum_loss = unreduced_loss.flatten().sum() if torch.allclose(mean_loss, sum_loss): if unreduced_loss.numel() == 1 and torch.allclose(loss, sum_loss): factor = 1.0 else: raise RuntimeError( "Cannot determine reduction factor. ", "Results from 'mean' and 'sum' reduction are identical. ", f"'mean': {mean_loss}, 'sum': {sum_loss}", ) elif torch.allclose(loss, mean_loss): factor = 1.0 / unreduced_loss.numel() elif torch.allclose(loss, sum_loss): factor = 1.0 else: raise RuntimeError( "Reductions 'mean' or 'sum' do not match with loss. ", f"'mean': {mean_loss}, 'sum': {sum_loss}, loss: {loss}", ) return factor def trainable_parameters(self) -> Iterator[Parameter]: """Yield the model's trainable parameters. Yields: Model parameter with gradients enabled. """ for p in self.model.parameters(): if p.requires_grad: yield p def collect_data(self, savefield: str) -> List[Any]: """Collect BackPACK attributes from trainable parameters. Args: savefield: Attribute name. Returns: List of attributes saved under the trainable model parameters. Raises: RuntimeError: If a non-differentiable parameter with the attribute is encountered. """ data = [] for p in self.model.parameters(): if p.requires_grad: data.append(getattr(p, savefield)) else: if hasattr(p, savefield): raise RuntimeError( f"Found non-differentiable parameter with attribute '{savefield}'." ) return data def get_batch_size(self) -> int: """Return the mini-batch size. Returns: Mini-batch size. """ return self.input.shape[0] def compute_reduction_factor(self) -> float: """Compute loss function's reduction factor for aggregating per-sample losses. For instance, if ``reduction='mean'`` is used, then the reduction factor is ``1 / N`` where ``N`` is the batch size. With ``reduction='sum'``, it is ``1``. Returns: Reduction factor """ _, _, loss = self.forward_pass() batch_size = self.get_batch_size() loss_list = torch.zeros(batch_size, device=self.device) for n in range(batch_size): _, _, loss_n = self.forward_pass(subsampling=[n]) loss_list[n] = loss_n return self.__get_reduction_factor(loss, loss_list)
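To make the reduction-factor logic above concrete, here is a minimal, self-contained sketch with hypothetical per-sample losses (not part of the original test harness):

import torch

# Four hypothetical per-sample losses (batch size N = 4).
per_sample = torch.tensor([0.5, 1.0, 1.5, 2.0])

# Under reduction='mean', the reduced loss equals the sum scaled by 1 / N,
# so comparing it against the sum recovers the reduction factor.
mean_loss = per_sample.mean()
factor = (mean_loss / per_sample.sum()).item()
print(factor)  # 0.25 == 1 / 4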
package aws

import (
	"fmt"
	"math"
	"strings"
	"sync"
	"time"

	"github.com/hashicorp/go-multierror"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/gruntwork-io/go-commons/errors"

	"github.com/gruntwork-io/cloud-nuke/config"
	"github.com/gruntwork-io/cloud-nuke/logging"
)

// getS3BucketRegion returns the S3 bucket's region.
func getS3BucketRegion(svc *s3.S3, bucketName string) (string, error) {
	input := &s3.GetBucketLocationInput{
		Bucket: aws.String(bucketName),
	}

	result, err := svc.GetBucketLocation(input)
	if err != nil {
		return "", err
	}

	if result.LocationConstraint == nil {
		// GetBucketLocation returns nil for us-east-1
		// https://github.com/aws/aws-sdk-go/issues/1687
		return "us-east-1", nil
	}

	return *result.LocationConstraint, nil
}

// getS3BucketTags returns the S3 bucket's tags.
func getS3BucketTags(svc *s3.S3, bucketName string) ([]map[string]string, error) {
	input := &s3.GetBucketTaggingInput{
		Bucket: aws.String(bucketName),
	}

	tags := []map[string]string{}

	// Please note that the svc argument should be created from a session object which is
	// in the same region as the bucket or GetBucketTagging will fail.
	result, err := svc.GetBucketTagging(input)
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok {
			switch aerr.Code() {
			case "NoSuchTagSet":
				return tags, nil
			}
		}
		return tags, err
	}

	for _, tagSet := range result.TagSet {
		tags = append(tags, map[string]string{"Key": *tagSet.Key, "Value": *tagSet.Value})
	}

	return tags, nil
}

// hasValidTags checks if bucket tags permit it to be in the deletion list.
func hasValidTags(bucketTags []map[string]string) bool {
	// Exclude deletion of any buckets with cloud-nuke-excluded tags
	if len(bucketTags) > 0 {
		for _, tagSet := range bucketTags {
			key := strings.ToLower(tagSet["Key"])
			value := strings.ToLower(tagSet["Value"])
			if key == AwsResourceExclusionTagKey && value == "true" {
				return false
			}
		}
	}

	return true
}

// S3Bucket represents an S3 bucket.
type S3Bucket struct {
	Name          string
	CreationDate  time.Time
	Region        string
	Tags          []map[string]string
	Error         error
	IsValid       bool
	InvalidReason string
}

// getAllS3Buckets returns a map of per region AWS S3 buckets which were created before excludeAfter
func getAllS3Buckets(awsSession *session.Session, excludeAfter time.Time,
	targetRegions []string, bucketNameSubStr string, batchSize int, configObj config.Config) (map[string][]*string, error) {

	if batchSize <= 0 {
		return nil, fmt.Errorf("Invalid batchsize - %d - should be > 0", batchSize)
	}

	svc := s3.New(awsSession)
	input := &s3.ListBucketsInput{}
	output, err := svc.ListBuckets(input)
	if err != nil {
		return nil, errors.WithStackTrace(err)
	}

	regionClients, err := getRegionClients(targetRegions)
	if err != nil {
		return nil, errors.WithStackTrace(err)
	}

	var bucketNamesPerRegion = make(map[string][]*string)
	totalBuckets := len(output.Buckets)
	if totalBuckets == 0 {
		return bucketNamesPerRegion, nil
	}

	totalBatches := int(math.Ceil(float64(totalBuckets) / float64(batchSize)))
	batchCount := 1

	// Batch the get operation
	for batchStart := 0; batchStart < totalBuckets; batchStart += batchSize {
		batchEnd := int(math.Min(float64(batchStart)+float64(batchSize), float64(totalBuckets)))
		logging.Logger.Infof("Getting - %d-%d buckets of batch %d/%d", batchStart+1, batchEnd, batchCount, totalBatches)

		targetBuckets := output.Buckets[batchStart:batchEnd]
		currBucketNamesPerRegion, err := getBucketNamesPerRegion(svc, targetBuckets, excludeAfter, regionClients, bucketNameSubStr, configObj)
		if err != nil {
			return bucketNamesPerRegion, err
		}

		for region, buckets := range currBucketNamesPerRegion {
			if _, ok := bucketNamesPerRegion[region]; !ok {
				bucketNamesPerRegion[region] = []*string{}
			}
			for _, bucket := range buckets {
				bucketNamesPerRegion[region] = append(bucketNamesPerRegion[region], bucket)
			}
		}

		batchCount++
	}

	return bucketNamesPerRegion, nil
}

// getRegionClients creates S3 clients for the target regions.
func getRegionClients(regions []string) (map[string]*s3.S3, error) {
	var regionClients = make(map[string]*s3.S3)

	for _, region := range regions {
		logging.Logger.Debugf("S3 - creating session - region %s", region)
		awsSession, err := newSession(region)
		if err != nil {
			return regionClients, err
		}
		regionClients[region] = s3.New(awsSession)
	}

	return regionClients, nil
}

// getBucketNamesPerRegion gets valid bucket names concurrently from the list of target buckets.
func getBucketNamesPerRegion(svc *s3.S3, targetBuckets []*s3.Bucket, excludeAfter time.Time,
	regionClients map[string]*s3.S3, bucketNameSubStr string, configObj config.Config) (map[string][]*string, error) {

	var bucketNamesPerRegion = make(map[string][]*string)
	var bucketCh = make(chan *S3Bucket, len(targetBuckets))
	var wg sync.WaitGroup

	for _, bucket := range targetBuckets {
		if len(bucketNameSubStr) > 0 && !strings.Contains(*bucket.Name, bucketNameSubStr) {
			logging.Logger.Debugf("Skipping - Bucket %s - failed substring filter - %s", *bucket.Name, bucketNameSubStr)
			continue
		}

		wg.Add(1)
		go func(bucket *s3.Bucket) {
			defer wg.Done()
			getBucketInfo(svc, bucket, excludeAfter, regionClients, bucketCh, configObj)
		}(bucket)
	}

	go func() {
		wg.Wait()
		close(bucketCh)
	}()

	// Start reading from the channel as soon as the data comes in - so that skip
	// messages are shown to the user as soon as possible
	for bucketData := range bucketCh {
		if bucketData.Error != nil {
			logging.Logger.Warnf("Skipping - Bucket %s - region - %s - error: %s", bucketData.Name, bucketData.Region, bucketData.Error)
			continue
		}
		if !bucketData.IsValid {
			logging.Logger.Debugf("Skipping - Bucket %s - region - %s - %s", bucketData.Name, bucketData.Region, bucketData.InvalidReason)
			continue
		}

		if _, ok := bucketNamesPerRegion[bucketData.Region]; !ok {
			bucketNamesPerRegion[bucketData.Region] = []*string{}
		}
		bucketNamesPerRegion[bucketData.Region] = append(bucketNamesPerRegion[bucketData.Region], aws.String(bucketData.Name))
	}

	return bucketNamesPerRegion, nil
}

// getBucketInfo populates the local S3Bucket struct for the passed AWS bucket.
func getBucketInfo(svc *s3.S3, bucket *s3.Bucket, excludeAfter time.Time, regionClients map[string]*s3.S3, bucketCh chan<- *S3Bucket, configObj config.Config) {
	var bucketData S3Bucket
	bucketData.Name = aws.StringValue(bucket.Name)
	bucketData.CreationDate = aws.TimeValue(bucket.CreationDate)

	bucketRegion, err := getS3BucketRegion(svc, bucketData.Name)
	if err != nil {
		bucketData.Error = err
		bucketCh <- &bucketData
		return
	}
	bucketData.Region = bucketRegion

	// Check if the bucket is in a target region
	matchedRegion := false
	for region := range regionClients {
		if region == bucketData.Region {
			matchedRegion = true
			break
		}
	}
	if !matchedRegion {
		bucketData.InvalidReason = "Not in target region"
		bucketCh <- &bucketData
		return
	}

	// Check if the bucket has valid tags
	bucketTags, err := getS3BucketTags(regionClients[bucketData.Region], bucketData.Name)
	if err != nil {
		bucketData.Error = err
		bucketCh <- &bucketData
		return
	}
	bucketData.Tags = bucketTags
	if !hasValidTags(bucketData.Tags) {
		bucketData.InvalidReason = "Matched tag filter"
		bucketCh <- &bucketData
		return
	}

	// Check if the bucket is older than the required time
	if !excludeAfter.After(bucketData.CreationDate) {
		bucketData.InvalidReason = "Matched CreationDate filter"
		bucketCh <- &bucketData
		return
	}

	// Check if the bucket matches config file rules
	if !config.ShouldInclude(bucketData.Name, configObj.S3.IncludeRule.NamesRegExp, configObj.S3.ExcludeRule.NamesRegExp) {
		bucketData.InvalidReason = "Filtered by config file rules"
		bucketCh <- &bucketData
		return
	}

	bucketData.IsValid = true
	bucketCh <- &bucketData
}

// emptyBucket will empty the given S3 bucket by deleting all the objects that are in the bucket. For versioned buckets,
// this includes all the versions and deletion markers in the bucket.
// NOTE: In the progress logs, we deliberately do not report how many pages or objects are left. This is because aws
// does not provide any API for getting the object count, and the only way to do that is to iterate through all the
// objects. For memory and time efficiency, we opted to delete the objects as we retrieve each page, which means we
// don't know how many are left until we complete all the operations.
func emptyBucket(svc *s3.S3, bucketName *string, isVersioned bool, batchSize int) error {
	// Since the error may happen in the inner function handler for the pager, we need a function scoped variable that
	// the inner function can set when there is an error.
	var errOut error
	pageId := 1

	// Handle versioned buckets.
	if isVersioned {
		err := svc.ListObjectVersionsPages(
			&s3.ListObjectVersionsInput{
				Bucket:  bucketName,
				MaxKeys: aws.Int64(int64(batchSize)),
			},
			func(page *s3.ListObjectVersionsOutput, lastPage bool) (shouldContinue bool) {
				logging.Logger.Debugf("Deleting page %d of object versions (%d objects) from bucket %s", pageId, len(page.Versions), aws.StringValue(bucketName))
				if err := deleteObjectVersions(svc, bucketName, page.Versions); err != nil {
					logging.Logger.Errorf("Error deleting object versions for page %d from bucket %s: %s", pageId, aws.StringValue(bucketName), err)
					errOut = err
					return false
				}
				logging.Logger.Infof("[OK] - deleted page %d of object versions (%d objects) from bucket %s", pageId, len(page.Versions), aws.StringValue(bucketName))

				logging.Logger.Debugf("Deleting page %d of deletion markers (%d deletion markers) from bucket %s", pageId, len(page.DeleteMarkers), aws.StringValue(bucketName))
				if err := deleteDeletionMarkers(svc, bucketName, page.DeleteMarkers); err != nil {
					logging.Logger.Errorf("Error deleting deletion markers for page %d from bucket %s: %s", pageId, aws.StringValue(bucketName), err)
					errOut = err
					return false
				}
				logging.Logger.Infof("[OK] - deleted page %d of deletion markers (%d deletion markers) from bucket %s", pageId, len(page.DeleteMarkers), aws.StringValue(bucketName))

				pageId++
				return true
			},
		)
		if err != nil {
			return err
		}
		if errOut != nil {
			return errOut
		}
		return nil
	}

	// Handle non versioned buckets.
	err := svc.ListObjectsV2Pages(
		&s3.ListObjectsV2Input{
			Bucket:  bucketName,
			MaxKeys: aws.Int64(int64(batchSize)),
		},
		func(page *s3.ListObjectsV2Output, lastPage bool) (shouldContinue bool) {
			logging.Logger.Debugf("Deleting object page %d (%d objects) from bucket %s", pageId, len(page.Contents), aws.StringValue(bucketName))
			if err := deleteObjects(svc, bucketName, page.Contents); err != nil {
				logging.Logger.Errorf("Error deleting objects for page %d from bucket %s: %s", pageId, aws.StringValue(bucketName), err)
				errOut = err
				return false
			}
			logging.Logger.Debugf("[OK] - deleted object page %d (%d objects) from bucket %s", pageId, len(page.Contents), aws.StringValue(bucketName))

			pageId++
			return true
		},
	)
	if err != nil {
		return err
	}
	if errOut != nil {
		return errOut
	}
	return nil
}

// deleteObjects will delete the provided objects (unversioned) from the specified bucket.
func deleteObjects(svc *s3.S3, bucketName *string, objects []*s3.Object) error {
	if len(objects) == 0 {
		logging.Logger.Debugf("No objects returned in page")
		return nil
	}

	objectIdentifiers := []*s3.ObjectIdentifier{}
	for _, obj := range objects {
		objectIdentifiers = append(objectIdentifiers, &s3.ObjectIdentifier{
			Key: obj.Key,
		})
	}

	_, err := svc.DeleteObjects(
		&s3.DeleteObjectsInput{
			Bucket: bucketName,
			Delete: &s3.Delete{
				Objects: objectIdentifiers,
				Quiet:   aws.Bool(false),
			},
		},
	)
	return err
}

// deleteObjectVersions will delete the provided object versions from the specified bucket.
func deleteObjectVersions(svc *s3.S3, bucketName *string, objectVersions []*s3.ObjectVersion) error {
	if len(objectVersions) == 0 {
		logging.Logger.Debugf("No object versions returned in page")
		return nil
	}

	objectIdentifiers := []*s3.ObjectIdentifier{}
	for _, obj := range objectVersions {
		objectIdentifiers = append(objectIdentifiers, &s3.ObjectIdentifier{
			Key:       obj.Key,
			VersionId: obj.VersionId,
		})
	}

	_, err := svc.DeleteObjects(
		&s3.DeleteObjectsInput{
			Bucket: bucketName,
			Delete: &s3.Delete{
				Objects: objectIdentifiers,
				Quiet:   aws.Bool(false),
			},
		},
	)
	return err
}

// deleteDeletionMarkers will delete the provided deletion markers from the specified bucket.
func deleteDeletionMarkers(svc *s3.S3, bucketName *string, objectDelMarkers []*s3.DeleteMarkerEntry) error { if len(objectDelMarkers) == 0 { logging.Logger.Debugf("No deletion markers returned in page") return nil } objectIdentifiers := []*s3.ObjectIdentifier{} for _, obj := range objectDelMarkers { objectIdentifiers = append(objectIdentifiers, &s3.ObjectIdentifier{ Key: obj.Key, VersionId: obj.VersionId, }) } _, err := svc.DeleteObjects( &s3.DeleteObjectsInput{ Bucket: bucketName, Delete: &s3.Delete{ Objects: objectIdentifiers, Quiet: aws.Bool(false), }, }, ) return err } // nukeAllS3BucketObjects batch deletes all objects in an S3 bucket func nukeAllS3BucketObjects(svc *s3.S3, bucketName *string, batchSize int) error { versioningResult, err := svc.GetBucketVersioning(&s3.GetBucketVersioningInput{ Bucket: bucketName, }) if err != nil { return err } isVersioned := aws.StringValue(versioningResult.Status) == "Enabled" if batchSize < 1 || batchSize > 1000 { return fmt.Errorf("Invalid batchsize - %d - should be between %d and %d", batchSize, 1, 1000) } logging.Logger.Infof("Emptying bucket %s", aws.StringValue(bucketName)) if err := emptyBucket(svc, bucketName, isVersioned, batchSize); err != nil { return err } logging.Logger.Infof("[OK] - successfully emptied bucket %s", aws.StringValue(bucketName)) return nil } // nukeEmptyS3Bucket deletes an empty S3 bucket func nukeEmptyS3Bucket(svc *s3.S3, bucketName *string, verifyBucketDeletion bool) error { _, err := svc.DeleteBucket(&s3.DeleteBucketInput{ Bucket: bucketName, }) if err != nil { return err } if !verifyBucketDeletion { return err } // The wait routine will try for up to 100 seconds, but that is not long enough for all circumstances of S3. As // such, we retry this routine up to 3 times for a total of 300 seconds. 
const maxRetries = 3 for i := 0; i < maxRetries; i++ { logging.Logger.Infof("Waiting until bucket (%s) deletion is propagated (attempt %d / %d)", aws.StringValue(bucketName), i+1, maxRetries) err = svc.WaitUntilBucketNotExists(&s3.HeadBucketInput{ Bucket: bucketName, }) // Exit early if no error if err == nil { logging.Logger.Info("Successfully detected bucket deletion.") return nil } logging.Logger.Warnf("Error waiting for bucket (%s) deletion propagation (attempt %d / %d)", aws.StringValue(bucketName), i+1, maxRetries) logging.Logger.Warnf("Underlying error was: %s", err) } return err } // nukeAllS3Buckets deletes all S3 buckets passed as input func nukeAllS3Buckets(awsSession *session.Session, bucketNames []*string, objectBatchSize int) (delCount int, err error) { svc := s3.New(awsSession) verifyBucketDeletion := true if len(bucketNames) == 0 { logging.Logger.Infof("No S3 Buckets to nuke in region %s", *awsSession.Config.Region) return 0, nil } totalCount := len(bucketNames) logging.Logger.Infof("Deleting - %d S3 Buckets in region %s", totalCount, *awsSession.Config.Region) multiErr := new(multierror.Error) for bucketIndex := 0; bucketIndex < totalCount; bucketIndex++ { bucketName := bucketNames[bucketIndex] logging.Logger.Debugf("Deleting - %d/%d - Bucket: %s", bucketIndex+1, totalCount, *bucketName) err = nukeAllS3BucketObjects(svc, bucketName, objectBatchSize) if err != nil { logging.Logger.Errorf("[Failed] - %d/%d - Bucket: %s - object deletion error - %s", bucketIndex+1, totalCount, *bucketName, err) multierror.Append(multiErr, err) continue } err = nukeEmptyS3Bucket(svc, bucketName, verifyBucketDeletion) if err != nil { logging.Logger.Errorf("[Failed] - %d/%d - Bucket: %s - bucket deletion error - %s", bucketIndex+1, totalCount, *bucketName, err) multierror.Append(multiErr, err) continue } logging.Logger.Infof("[OK] - %d/%d - Bucket: %s - deleted", bucketIndex+1, totalCount, *bucketName) delCount++ } return delCount, multiErr.ErrorOrNil() }
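// The sketch below shows how the pieces above might compose into a single
// driver. It is illustrative only -- the real entrypoint lives elsewhere in
// cloud-nuke -- and the region list, substring filter, and batch sizes here
// are hypothetical values, not defaults taken from the project.
func nukeMatchingS3Buckets(sess *session.Session, configObj config.Config) error {
	targetRegions := []string{"us-east-1", "eu-west-1"} // assumed target regions
	perRegion, err := getAllS3Buckets(sess, time.Now(), targetRegions, "test-", 100, configObj)
	if err != nil {
		return err
	}
	for region, bucketNames := range perRegion {
		// Bucket deletion has to go through a client in the bucket's own region.
		regionSession, err := newSession(region)
		if err != nil {
			return err
		}
		if _, err := nukeAllS3Buckets(regionSession, bucketNames, 1000); err != nil {
			return err
		}
	}
	return nil
}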
Following the misfire that was Margaret Cho’s 1994 sitcom All-American Girl, it took 20 years before ABC — or any other network — would take a chance on a series led by an Asian-American cast. As Viola Davis noted in her history-making Emmys speech on Sunday, “The only thing that separates women of color from anyone else is opportunity.” Finally, women of color are being afforded some space — and even making space for themselves — in network television, and this more diverse landscape has made for undeniably richer, more informative storytelling. When EW spoke to Fresh Off the Boat showrunner Nahnatchka Khan in advance of the ABC comedy’s first season, she said that a focus-test of the original pilot left white people in the audience feeling “persecuted.” Now, as season 2 goes into full-swing — Fresh returned Tuesday — we caught up with Khan and star Constance Wu to talk representation, stereotyping, strong women, and what’s next for the Huang family. After years of being relegated to token and supporting roles, Fresh Off the Boat can boast the largest Asian-American-led cast on television, which exerts a unique and not insignificant amount of pressure. “We sort of have the burden of an entire group’s representation,” says Khan of having to answer to criticisms from the Asian-American community. “You can’t please everybody.” Indeed, even Eddie Huang — the author of the memoir that inspired the show — has openly derided the series for being too safe and not adequately representative of his experiences. He narrated the series for season 1, but was absent from the season 2 premiere and seems unlikely to return. Khan’s priority, however, is to the characters. “It’s like, ‘Would Jessica do this? Would Louis do this?’ That’s what we try to stay true to, just making sure the characters behave in ways that feel authentic to them and their story,” she explains. “In doing so, I think people see themselves or their families or their growing-up experience in [Fresh Off the Boat] and that makes us happy. But if people don’t see that, that’s okay too. Not everybody can see every moment of their life displayed by one set of people. It’s just not going to happen.” Wu — who plays Huang matriarch Jessica — posits that the initial concerns from the Asian-American community came from a place of fear. “They’re so accustomed to Asian people being the butt of the joke. I think a big part of that is because Asian people traditionally — and even now in entertainment — are supporting characters, and supporting characters are inherently there to support the more important story, which is often the white person’s story,” Wu tells EW. “When you’re a supporting character you don’t have room in the medium to show all your many colors. I understand the reactive response. If you’re a puppy that’s been kicked a million times, you instantly think you’re being kicked again.” Adds Wu, “We are leading and telling our own unique story and giving each of our characters humanity, so they’re not just supporting a white person’s story. It will take some time. 
We’ve been wounded in the past, but if you approach it with an objective mind and an eye for what a full character arc looks like, then I think you can find [Fresh Off the Boat] quite enjoyable and important.” Earlier this summer, Jane the Virgin star Gina Rodriguez told Glam Belleza Latina that she never saw herself onscreen while growing up, but that the current shift toward more diverse television has allowed her and other actors and actresses of color to represent their communities in heretofore unseen ways. “And the reason that’s important is because little kids look at the screen, as we did when we were growing up, and wonder, ‘Where do I fit in?’ And when you see that you fit in everywhere, you know anything is possible.” Khan echoes these statements, telling EW, “So many people come up to me and they’re like, ‘My kids love the show.’ Asian kids seeing themselves on TV, that’s their new normal. If you take a step back, you realize that the TV landscape really has been … There have been some amazing shows in terms of representation, but not for Asian people. It’s not been there at all in comedy. It’s cool to just have this be the new normal.” Moving into season 2, Fresh Off the Boat will see the Huang family continuing to grapple with their desire to fulfill the American dream while staying true to their roots. Now that the family restaurant has taken off, patriarch Louis (Randall Park) will need to figure out “what happens when you achieve that first level of success. Do you keep chasing it? When is enough enough?” asks Khan. Also, look out for Honey (Chelsey Crisp) and Jessica going into business together and continuing to connect through their mutual otherness, as well as, of course, their mutual love of Stephen King. Khan also reveals that Jessica’s ex-boyfriend, Oscar Chow (Rex Lee) will make a reappearance, bringing his boyfriend into town to meet up with the Huangs. As for the family, Eddie (Hudson Yang) will begin to experience that “first blush of rebellion” that comes with early adolescence, and he and Jessica will “butt heads even more.” However, it’s unlikely that Jessica will take that behavior lying down. A memorable scene from season 1 had Jessica repeatedly jamming a stuffed rabbit into Eddie’s face in order to teach him a lesson about date rape. (“No means no! Respect girls!”) “That’s one of my favorite moments. My favorite thing is writing strong women,” says Khan, who was also at the helm of the short-lived B—- in Apartment 23. “Women who don’t apologize is the thing that I love to do the most. With Jessica, she feels strongly. She’s just living her life. She’s not trying to make a statement, she’s just doing what she feels is right and she loves her family and she’ll do anything to protect them. … Certainly on network sitcoms in the past, the wife character has been sweeter or more matronly, and Jessica is not about that. She’s not touchy-feely, but she loves her family, and that strength and confidence is presented in a way that won’t end with a big hug moment. But she’ll attack her son with a giant stuffed animal because she wants him to understand the way of the world and that’s how she shows her love. And for us, that just feels authentic.” Fresh Off the Boat airs Tuesdays at 8:30 p.m. ET on ABC.
package scripting

import (
	"context"
	"time"
)

// TestOutcome reflects the task status.
type TestOutcome string

const (
	TestOutcomeSuccess TestOutcome = "success"
	TestOutcomeFailure TestOutcome = "failure"
	TestOutcomeTimeout TestOutcome = "timeout"
)

type TestOptions struct {
	Name    string        `bson:"name" json:"name" yaml:"name"`
	Args    []string      `bson:"args" json:"args" yaml:"args"`
	Pattern string        `bson:"pattern" json:"pattern" yaml:"pattern"`
	Timeout time.Duration `bson:"timeout" json:"timeout" yaml:"timeout"`
	Count   int           `bson:"count" json:"count" yaml:"count"`
}

// TestResult captures the data about a specific test run.
type TestResult struct {
	Name     string        `bson:"name" json:"name" yaml:"name"`
	StartAt  time.Time     `bson:"start_at" json:"start_at" yaml:"start_at"`
	Duration time.Duration `bson:"duration" json:"duration" yaml:"duration"`
	Outcome  TestOutcome   `bson:"outcome" json:"outcome" yaml:"outcome"`
}

func (opt TestOptions) getResult(ctx context.Context, err error, startAt time.Time) TestResult {
	out := TestResult{
		Name:     opt.Name,
		StartAt:  startAt,
		Duration: time.Since(startAt),
		Outcome:  TestOutcomeSuccess,
	}

	if opt.Timeout > 0 && out.Duration > opt.Timeout {
		out.Outcome = TestOutcomeTimeout
		return out
	}

	if ctx.Err() != nil {
		out.Outcome = TestOutcomeTimeout
		return out
	}

	if err != nil {
		out.Outcome = TestOutcomeFailure
		return out
	}

	return out
}
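// A small illustrative caller for getResult (hypothetical -- not part of the
// original file): run a test function once, time it, and derive its outcome.
func runOnce(ctx context.Context, opt TestOptions, testFn func(context.Context) error) TestResult {
	startAt := time.Now()
	err := testFn(ctx)
	// getResult classifies the run: a timeout takes precedence over a
	// failure, which takes precedence over success.
	return opt.getResult(ctx, err, startAt)
}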
""" Check that CCL works correctly. """ import pytest import numpy as np from cobaya.model import get_model from cobaya.likelihood import Likelihood class CheckLike(Likelihood): """ This is a mock likelihood that simply forces soliket.CCL to calculate a CCL object. """ def logp(self, **params_values): ccl = self.theory.get_CCL() # noqa F841 return -1.0 def get_requirements(self): return {"CCL": None} fiducial_params = { "ombh2": 0.0224, "omch2": 0.122, "cosmomc_theta": 104e-4, "tau": 0.065, "ns": 0.9645, "logA": 3.07, "As": {"value": "lambda logA: 1e-10*np.exp(logA)"} } info_dict = { "params": fiducial_params, "likelihood": { "checkLike": {"external": CheckLike} }, "theory": { "camb": { }, "soliket.CCL": { "kmax": 10.0, "nonlinear": True } } } def test_ccl_import(request): """ Test whether we can import pyCCL. """ import pyccl def test_ccl_cobaya(request): """ Test whether we can call CCL from cobaya. """ model = get_model(info_dict) model.loglikes() def test_ccl_distances(request): """ Test whether the calculated angular diameter distance & luminosity distances in CCL have the correct relation. """ model = get_model(info_dict) model.loglikes({}) cosmo = model.provider.get_CCL()["cosmo"] z = np.linspace(0.0, 10.0, 100) a = 1.0 / (z + 1.0) da = cosmo.angular_diameter_distance(a) dl = cosmo.luminosity_distance(a) assert np.allclose(da * (1.0 + z) ** 2.0, dl) def test_ccl_pk(request): """ Test whether non-linear Pk > linear Pk in expected regimes. """ model = get_model(info_dict) model.loglikes({}) cosmo = model.provider.get_CCL()["cosmo"] k = np.logspace(np.log10(3e-1), 1, 1000) pk_lin = cosmo.linear_matter_power(k, a=0.5) pk_nonlin = cosmo.nonlin_matter_power(k, a=0.5) assert np.all(pk_nonlin > pk_lin)
use crate::command::trove::CommandTrove;
use crate::gui::prompts::{prompt_input, prompt_input_validate};
use serde::{Deserialize, Serialize};

pub trait Parsable {
    fn parse_arguments(matches: &clap::ArgMatches) -> Self;
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct HoardCommand {
    pub name: String,
    pub namespace: String,
    pub tags: Option<Vec<String>>,
    pub command: String,
    pub description: Option<String>,
}

impl HoardCommand {
    pub fn default() -> Self {
        Self {
            name: "".to_string(),
            namespace: "".to_string(),
            tags: None,
            command: "".to_string(),
            description: None,
        }
    }

    #[allow(dead_code)]
    pub fn is_complete(&self) -> bool {
        if self.name.is_empty()
            || self.namespace.is_empty()
            || self.tags.is_none()
            || self.command.is_empty()
            || self.description.is_none()
        {
            return false;
        }
        true
    }

    pub fn tags_as_string(&self) -> String {
        self.tags
            .as_ref()
            .unwrap_or(&vec!["".to_string()])
            .join(",")
    }

    #[allow(dead_code)]
    pub fn with_command_raw(self, command_string: &str) -> Self {
        Self {
            name: self.name,
            namespace: self.namespace,
            tags: self.tags,
            command: command_string.to_string(),
            description: self.description,
        }
    }

    pub fn with_command_string_input(
        self,
        default_value: Option<String>,
        parameter_token: &str,
    ) -> Self {
        let base_prompt = format!(
            "Command to hoard ( Mark unknown parameters with {} )\n",
            parameter_token
        );
        let command_string: String = prompt_input(&base_prompt, false, default_value);
        Self {
            name: self.name,
            namespace: self.namespace,
            tags: self.tags,
            command: command_string,
            description: self.description,
        }
    }

    pub fn with_tags_raw(self, tags: &str) -> Self {
        Self {
            name: self.name,
            namespace: self.namespace,
            tags: Some(
                tags.chars()
                    .filter(|c| !c.is_whitespace())
                    .collect::<String>()
                    .split(',')
                    .map(std::string::ToString::to_string)
                    .collect(),
            ),
            command: self.command,
            description: self.description,
        }
    }

    pub fn with_tags_input(self, default_value: Option<String>) -> Self {
        let tag_validator = move |input: &String| -> Result<(), String> {
            if input.contains(' ') {
                Err("Tags can't contain whitespace".to_string())
            } else {
                Ok(())
            }
        };
        let tags: String = prompt_input_validate(
            "Give your command some optional tags ( comma separated )",
            true,
            default_value,
            Some(tag_validator),
        );
        self.with_tags_raw(&tags)
    }

    pub fn with_namespace_input(self, default_namespace: Option<String>) -> Self {
        let namespace: String = prompt_input("Namespace of the command", false, default_namespace);
        Self {
            name: self.name,
            namespace,
            tags: self.tags,
            command: self.command,
            description: self.description,
        }
    }

    fn with_name_input_prompt(
        self,
        default_value: Option<String>,
        trove: &CommandTrove,
        prompt_string: &str,
    ) -> Self {
        let namespace = self.namespace.clone();
        let command_names = trove.commands.clone();
        let validator = move |input: &String| -> Result<(), String> {
            if input.contains(' ') {
                Err("The name can't contain whitespace".to_string())
            } else if command_names
                .iter()
                .filter(|x| x.namespace == namespace)
                .any(|x| x.name == *input)
            {
                Err(
                    "A command with the same name exists in this namespace.
                     Input a different name"
                        .to_string(),
                )
            } else {
                Ok(())
            }
        };
        let name = prompt_input_validate(prompt_string, false, default_value, Some(validator));
        Self {
            name,
            namespace: self.namespace,
            tags: self.tags,
            command: self.command,
            description: self.description,
        }
    }

    pub fn with_name_input(self, default_value: Option<String>, trove: &CommandTrove) -> Self {
        self.with_name_input_prompt(default_value, trove, "Name your command")
    }

    pub fn with_alt_name_input(self, default_value: Option<String>, trove: &CommandTrove) -> Self {
        let name = self.name.clone();
        let command = self.command.clone();
        let namespace = self.namespace.clone();
        self.with_name_input_prompt(
            default_value,
            trove,
            &format!(
                "A command with the same name already exists in the namespace '{}'. Enter an alternate name for '{}' with command `{}`",
                namespace, name, command
            ),
        )
    }

    pub fn with_description_input(self, default_value: Option<String>) -> Self {
        let description_string: String =
            prompt_input("Describe what the command does", false, default_value);
        Self {
            name: self.name,
            namespace: self.namespace,
            tags: self.tags,
            command: self.command,
            description: Some(description_string),
        }
    }
}

impl Parsable for HoardCommand {
    fn parse_arguments(matches: &clap::ArgMatches) -> Self {
        let mut new_command = Self::default();

        if let Some(n) = matches.value_of("name") {
            new_command.name = n.to_string();
        }
        // Defaults to 'default' namespace
        if let Some(ns) = matches.value_of("namespace") {
            new_command.namespace = ns.to_string();
        }
        // "$ hoard test -t" was run
        // Expects comma separated tags
        if let Some(tags) = matches.value_of("tags") {
            new_command.tags = Some(
                tags.split(',')
                    .map(std::string::ToString::to_string)
                    .collect(),
            );
        }
        if let Some(c) = matches.value_of("command") {
            new_command.command = c.to_string();
        }
        new_command
    }
}

pub trait Parameterized {
    // Check if parameter pointers are present
    fn is_parameterized(&self, token: &str) -> bool;
    // Count number of parameter pointers
    fn get_parameter_count(&self, token: &str) -> usize;
    fn split(&self, token: &str) -> Vec<String>;
    // Get parameterized Stringlike subject including parameter token
    // For example, given subject with parameter token '#1':
    // 'This is a #1 with one parameter token'
    // `get_split_subject("#")` returns
    // Vec['This is a ', '#', ' with one parameter token']
    fn get_split_subject(&self, token: &str) -> Vec<String>;
    // Replaces parameter tokens with content from `parameters`,
    // consuming entries one by one until `parameters` is empty.
fn replace_parameters(self, token: &str, parameters: &[String]) -> HoardCommand; fn with_input_parameters(self, token: &str) -> HoardCommand; } impl Parameterized for HoardCommand { fn is_parameterized(&self, token: &str) -> bool { self.command.contains(token) } fn get_parameter_count(&self, token: &str) -> usize { self.command.matches(token).count() } fn split(&self, token: &str) -> Vec<String> { self.command.split(token).map(ToString::to_string).collect() } fn get_split_subject(&self, token: &str) -> Vec<String> { let split = self.split(token); let mut collected: Vec<String> = Vec::new(); for s in split { collected.push(s.clone()); collected.push(token.to_string()); } collected } fn replace_parameters(self, token: &str, parameters: &[String]) -> HoardCommand { let mut parameter_iter = parameters.iter(); let split = self.split(token); let mut collected: Vec<String> = Vec::new(); for s in split { collected.push(s.clone()); collected.push(parameter_iter.next().unwrap_or(&token.to_string()).clone()); } collected.pop(); Self { name: self.name, namespace: self.namespace, tags: self.tags, command: collected.concat(), description: self.description, } } fn with_input_parameters(self, token: &str) -> HoardCommand { let parameter_count = self.get_parameter_count(token); if parameter_count == 0 { return self; } let mut command_state = self.command.clone(); for i in 0..parameter_count { let prompt_dialoge = format!( "Enter parameter({}) nr {} \n~> {}\n", token, (i + 1), command_state ); let parameter = prompt_input(&prompt_dialoge, false, None); command_state = command_state.replacen(token, &parameter, 1); } Self { name: self.name, namespace: self.namespace, tags: self.tags, command: command_state, description: self.description, } } } #[cfg(test)] mod test_commands { use super::*; #[test] fn one_tag_as_string() { let command = HoardCommand::default().with_tags_raw("foo"); let expected = "foo"; assert_eq!(expected, command.tags_as_string()); } #[test] fn no_tag_as_string() { let command = HoardCommand::default(); let expected = ""; assert_eq!(expected, command.tags_as_string()); } #[test] fn multiple_tags_as_string() { let command = HoardCommand::default().with_tags_raw("foo,bar"); let expected = "foo,bar"; assert_eq!(expected, command.tags_as_string()); } #[test] fn parse_single_tag() { let command = HoardCommand::default().with_tags_raw("foo"); let expected = Some(vec!["foo".to_string()]); assert_eq!(expected, command.tags); } #[test] fn parse_no_tag() { let command = HoardCommand::default(); let expected = None; assert_eq!(expected, command.tags); } #[test] fn parse_multiple_tags() { let command = HoardCommand::default().with_tags_raw("foo,bar"); let expected = Some(vec!["foo".to_string(), "bar".to_string()]); assert_eq!(expected, command.tags); } #[test] fn parse_whitespace_in_tags() { let command = HoardCommand::default().with_tags_raw("foo, bar"); let expected = Some(vec!["foo".to_string(), "bar".to_string()]); assert_eq!(expected, command.tags); } } #[cfg(test)] mod test_parameterized { use super::*; fn command_struct(command: &str) -> HoardCommand { HoardCommand::default().with_command_raw(command) } #[test] fn test_split() { let token = "#".to_string(); let c: HoardCommand = command_struct("test # test"); let expected = vec!["test ".to_string(), " test".to_string()]; assert_eq!(expected, c.split(&token)); } #[test] fn test_split_empty() { let token = "#".to_string(); let c: HoardCommand = command_struct("test test"); let expected = vec!["test test".to_string()]; assert_eq!(expected, 
c.split(&token)); } #[test] fn test_split_multiple() { let token = "#".to_string(); let c: HoardCommand = command_struct("test # test #"); let expected = vec!["test ".to_string(), " test ".to_string(), "".to_string()]; assert_eq!(expected, c.split(&token)); } #[test] fn test_replace_parameters() { let token = "#".to_string(); let c: HoardCommand = command_struct("test # bar"); let to_replace = vec!["foo".to_string()]; let expected = "test foo bar".to_string(); assert_eq!(expected, c.replace_parameters(&token, &to_replace).command); } #[test] fn test_replace_last_parameters() { let token = "#".to_string(); let c: HoardCommand = command_struct("test foo #"); let to_replace = vec!["bar".to_string()]; let expected = "test foo bar".to_string(); assert_eq!(expected, c.replace_parameters(&token, &to_replace).command); } }
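// An extra illustrative test module (a sketch, not in the original suite)
// covering `is_parameterized` and `get_parameter_count` on top of the
// helpers above.
#[cfg(test)]
mod test_parameter_count {
    use super::*;

    fn command_struct(command: &str) -> HoardCommand {
        HoardCommand::default().with_command_raw(command)
    }

    #[test]
    fn test_counts_tokens() {
        let token = "#".to_string();
        let c: HoardCommand = command_struct("curl # | grep #");
        assert!(c.is_parameterized(&token));
        assert_eq!(2, c.get_parameter_count(&token));
    }

    #[test]
    fn test_no_tokens() {
        let token = "#".to_string();
        let c: HoardCommand = command_struct("ls -la");
        assert!(!c.is_parameterized(&token));
        assert_eq!(0, c.get_parameter_count(&token));
    }
}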
/*
 * LiskHQ/lisk-commander
 * Copyright © 2019 Lisk Foundation
 *
 * See the LICENSE file at the top-level directory of this distribution
 * for licensing information.
 *
 * Unless otherwise agreed in a custom licensing agreement with the Lisk Foundation,
 * no part of this software, including this file, may be copied, modified,
 * propagated, or distributed except according to the terms contained in the
 * LICENSE file.
 *
 * Removal or modification of this copyright notice is prohibited.
 *
 */
import { flags as flagParser } from '@oclif/command';
import * as childProcess from 'child_process';
import BaseCommand from '../../base';
import { describeApplication, PM2ProcessInstance } from '../../utils/core/pm2';

interface Args {
	readonly name: string;
}

export default class LogsCommand extends BaseCommand {
	static args = [
		{
			name: 'name',
			description: 'Lisk Core installation directory name.',
			required: true,
		},
	];

	static flags = {
		json: flagParser.boolean({
			...BaseCommand.flags.json,
			hidden: true,
		}),
		pretty: flagParser.boolean({
			...BaseCommand.flags.pretty,
			hidden: true,
		}),
	};

	static description = 'Stream logs of a Lisk Core instance.';

	static examples = ['core:logs mainnet-latest'];

	async run(): Promise<void> {
		const { args } = this.parse(LogsCommand);
		const { name } = args as Args;
		const instance = await describeApplication(name);

		if (!instance) {
			this.log(
				`Lisk Core instance: ${name} doesn't exist. Please install it using lisk core:install`,
			);
			return;
		}

		const { installationPath, network } = instance as PM2ProcessInstance;
		const fileName = `${installationPath}/logs/${network}/lisk.log`;

		const tail = childProcess.spawn('tail', ['-f', fileName]);
		const { stderr, stdout } = tail;

		stdout.on('data', data => {
			this.log(data.toString('utf-8').replace(/\n/, ''));
		});

		// `data` emitted on stderr is a Buffer, not an Error, so log its text
		// rather than a (nonexistent) `message` property.
		stderr.on('data', data => {
			this.log(data.toString('utf-8'));
		});

		tail.on('close', () => {
			tail.removeAllListeners();
		});

		tail.on('error', err => {
			this.log(`Failed to process logs for ${name} with error: ${err.message}`);
			tail.removeAllListeners();
		});
	}
}
-- We deliberately want to ensure the function we add to the rule database -- has the constraints we need on it when we get it out. {-# OPTIONS_GHC -Wno-redundant-constraints #-} {-# LANGUAGE DeriveFunctor #-} {-# LANGUAGE DerivingStrategies #-} {-# LANGUAGE GeneralizedNewtypeDeriving #-} {-# LANGUAGE RankNTypes #-} {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE TupleSections #-} module Development.IDE.Graph.Internal.Database (newDatabase, incDatabase, build, getDirtySet, getKeysAndVisitAge) where import Control.Concurrent.Async import Control.Concurrent.Extra import Control.Concurrent.STM.Stats (STM, atomically, atomicallyNamed, modifyTVar', newTVarIO, readTVarIO) import Control.Exception import Control.Monad import Control.Monad.IO.Class (MonadIO (liftIO)) import Control.Monad.Trans.Class (lift) import Control.Monad.Trans.Reader import qualified Control.Monad.Trans.State.Strict as State import Data.Dynamic import Data.Either import Data.Foldable (for_, traverse_) import Data.HashSet (HashSet) import qualified Data.HashSet as HSet import Data.IORef.Extra import Data.Maybe import Data.Traversable (for) import Data.Tuple.Extra import Debug.Trace (traceM) import Development.IDE.Graph.Classes import Development.IDE.Graph.Internal.Rules import Development.IDE.Graph.Internal.Types import qualified Focus import qualified ListT import qualified StmContainers.Map as SMap import System.Time.Extra (duration, sleep) import System.IO.Unsafe newDatabase :: Dynamic -> TheRules -> IO Database newDatabase databaseExtra databaseRules = do databaseStep <- newTVarIO $ Step 0 databaseValues <- atomically SMap.new pure Database{..} -- | Increment the step and mark dirty. -- Assumes that the database is not running a build incDatabase :: Database -> Maybe [Key] -> IO () -- only some keys are dirty incDatabase db (Just kk) = do atomicallyNamed "incDatabase" $ modifyTVar' (databaseStep db) $ \(Step i) -> Step $ i + 1 transitiveDirtyKeys <- transitiveDirtySet db kk for_ transitiveDirtyKeys $ \k -> -- Updating all the keys atomically is not necessary -- since we assume that no build is mutating the db. -- Therefore run one transaction per key to minimise contention. atomicallyNamed "incDatabase" $ SMap.focus updateDirty k (databaseValues db) -- all keys are dirty incDatabase db Nothing = do atomically $ modifyTVar' (databaseStep db) $ \(Step i) -> Step $ i + 1 let list = SMap.listT (databaseValues db) atomicallyNamed "incDatabase - all " $ flip ListT.traverse_ list $ \(k,_) -> SMap.focus updateDirty k (databaseValues db) updateDirty :: Monad m => Focus.Focus KeyDetails m () updateDirty = Focus.adjust $ \(KeyDetails status rdeps) -> let status' | Running _ _ _ x <- status = Dirty x | Clean x <- status = Dirty (Just x) | otherwise = status in KeyDetails status' rdeps -- | Unwrap and build a list of keys in parallel build :: forall key value . (RuleResult key ~ value, Typeable key, Show key, Hashable key, Eq key, Typeable value) => Database -> Stack -> [key] -> IO ([Key], [value]) -- build _ st k | traceShow ("build", st, k) False = undefined build db stack keys = do (ids, vs) <- runAIO $ fmap unzip $ either return liftIO =<< builder db stack (map Key keys) pure (ids, map (asV . resultValue) vs) where asV :: Value -> value asV (Value x) = unwrapDynamic x -- | Build a list of keys and return their results. -- If none of the keys are dirty, we can return the results immediately. 
-- Otherwise, a blocking computation is returned *which must be evaluated asynchronously* to avoid deadlock. builder :: Database -> Stack -> [Key] -> AIO (Either [(Key, Result)] (IO [(Key, Result)])) -- builder _ st kk | traceShow ("builder", st,kk) False = undefined builder db@Database{..} stack keys = withRunInIO $ \(RunInIO run) -> do -- Things that I need to force before my results are ready toForce <- liftIO $ newTVarIO [] current <- liftIO $ readTVarIO databaseStep results <- liftIO $ for keys $ \id -> -- Updating the status of all the dependencies atomically is not necessary. -- Therefore, run one transaction per dep. to avoid contention atomicallyNamed "builder" $ do -- Spawn the id if needed status <- SMap.lookup id databaseValues val <- case viewDirty current $ maybe (Dirty Nothing) keyStatus status of Clean r -> pure r Running _ force val _ | memberStack id stack -> throw $ StackException stack | otherwise -> do modifyTVar' toForce (Wait force :) pure val Dirty s -> do let act = run (refresh db stack id s) (force, val) = splitIO (join act) SMap.focus (updateStatus $ Running current force val s) id databaseValues modifyTVar' toForce (Spawn force:) pure val pure (id, val) toForceList <- liftIO $ readTVarIO toForce let waitAll = run $ waitConcurrently_ toForceList case toForceList of [] -> return $ Left results _ -> return $ Right $ do waitAll pure results -- | Refresh a key: -- * If no dirty dependencies and we have evaluated the key previously, then we refresh it in the current thread. -- This assumes that the implementation will be a lookup -- * Otherwise, we spawn a new thread to refresh the dirty deps (if any) and the key itself refresh :: Database -> Stack -> Key -> Maybe Result -> AIO (IO Result) -- refresh _ st k _ | traceShow ("refresh", st, k) False = undefined refresh db stack key result = case (addStack key stack, result) of (Left e, _) -> throw e (Right stack, Just me@Result{resultDeps = ResultDeps deps}) -> do res <- builder db stack deps let isDirty = any (\(_,dep) -> resultBuilt me < resultChanged dep) case res of Left res -> if isDirty res then asyncWithCleanUp $ liftIO $ compute db stack key RunDependenciesChanged result else pure $ compute db stack key RunDependenciesSame result Right iores -> asyncWithCleanUp $ liftIO $ do res <- iores let mode = if isDirty res then RunDependenciesChanged else RunDependenciesSame compute db stack key mode result (Right stack, _) -> asyncWithCleanUp $ liftIO $ compute db stack key RunDependenciesChanged result -- | Compute a key. 
compute :: Database -> Stack -> Key -> RunMode -> Maybe Result -> IO Result
-- compute _ st k _ _ | traceShow ("compute", st, k) False = undefined
compute db@Database{..} stack key mode result = do
    let act = runRule databaseRules key (fmap resultData result) mode
    deps <- newIORef UnknownDeps
    (execution, RunResult{..}) <-
        duration $ runReaderT (fromAction act) $ SAction db deps stack
    built <- readTVarIO databaseStep
    deps <- readIORef deps
    let changed = if runChanged == ChangedRecomputeDiff then built else maybe built resultChanged result
        built' = if runChanged /= ChangedNothing then built else changed
        -- only update the deps when the rule ran with changes
        actualDeps = if runChanged /= ChangedNothing then deps else previousDeps
        previousDeps = maybe UnknownDeps resultDeps result
    let res = Result runValue built' changed built actualDeps execution runStore
    case getResultDepsDefault [] actualDeps of
        deps | not (null deps) && runChanged /= ChangedNothing -> do
            -- IMPORTANT: record the reverse deps **before** marking the key Clean.
            -- If an async exception strikes before the deps have been recorded,
            -- we won't be able to accurately propagate dirtiness for this key
            -- on the next build.
            void $
                updateReverseDeps key db
                    (getResultDepsDefault [] previousDeps)
                    (HSet.fromList deps)
        _ -> pure ()
    atomicallyNamed "compute" $ SMap.focus (updateStatus $ Clean res) key databaseValues
    pure res

updateStatus :: Monad m => Status -> Focus.Focus KeyDetails m ()
updateStatus res = Focus.alter
    (Just . maybe (KeyDetails res mempty)
    (\it -> it{keyStatus = res}))

-- | Returns the set of dirty keys annotated with their age (in # of builds)
getDirtySet :: Database -> IO [(Key, Int)]
getDirtySet db = do
    Step curr <- readTVarIO (databaseStep db)
    dbContents <- getDatabaseValues db
    let calcAge Result{resultBuilt = Step x} = curr - x
        calcAgeStatus (Dirty x) = calcAge <$> x
        calcAgeStatus _ = Nothing
    return $ mapMaybe (secondM calcAgeStatus) dbContents

-- | Returns an approximation of the database keys,
--   annotated with how long ago (in # builds) they were visited
getKeysAndVisitAge :: Database -> IO [(Key, Int)]
getKeysAndVisitAge db = do
    values <- getDatabaseValues db
    Step curr <- readTVarIO (databaseStep db)
    let keysWithVisitAge = mapMaybe (secondM (fmap getAge . getResult)) values
        getAge Result{resultVisited = Step s} = curr - s
    return keysWithVisitAge
--------------------------------------------------------------------------------
-- Lazy IO trick

data Box a = Box {fromBox :: a}

-- | Split an IO computation into an unsafe lazy value and a forcing computation
splitIO :: IO a -> (IO (), a)
splitIO act = do
    let act2 = Box <$> act
    let res = unsafePerformIO act2
    (void $ evaluate res, fromBox res)

--------------------------------------------------------------------------------
-- Reverse dependencies

-- | Update the reverse dependencies of an Id
updateReverseDeps
    :: Key        -- ^ Id
    -> Database
    -> [Key] -- ^ Previous direct dependencies of Id
    -> HashSet Key -- ^ Current direct dependencies of Id
    -> IO ()
-- mask to ensure that all the reverse dependencies are updated
updateReverseDeps myId db prev new = do
    forM_ prev $ \d ->
        unless (d `HSet.member` new) $
            doOne (HSet.delete myId) d
    forM_ (HSet.toList new) $
        doOne (HSet.insert myId)
    where
        alterRDeps f =
            Focus.adjust (onKeyReverseDeps f)
        -- updating all the reverse deps atomically is not needed.
-- Therefore, run individual transactions for each update -- in order to avoid contention doOne f id = atomicallyNamed "updateReverseDeps" $ SMap.focus (alterRDeps f) id (databaseValues db) getReverseDependencies :: Database -> Key -> STM (Maybe (HashSet Key)) getReverseDependencies db = (fmap.fmap) keyReverseDeps . flip SMap.lookup (databaseValues db) transitiveDirtySet :: Foldable t => Database -> t Key -> IO (HashSet Key) transitiveDirtySet database = flip State.execStateT HSet.empty . traverse_ loop where loop x = do seen <- State.get if x `HSet.member` seen then pure () else do State.put (HSet.insert x seen) next <- lift $ atomically $ getReverseDependencies database x traverse_ loop (maybe mempty HSet.toList next) -------------------------------------------------------------------------------- -- Asynchronous computations with cancellation -- | A simple monad to implement cancellation on top of 'Async', -- generalizing 'withAsync' to monadic scopes. newtype AIO a = AIO { unAIO :: ReaderT (IORef [Async ()]) IO a } deriving newtype (Applicative, Functor, Monad, MonadIO) -- | Run the monadic computation, cancelling all the spawned asyncs if an exception arises runAIO :: AIO a -> IO a runAIO (AIO act) = do asyncs <- newIORef [] runReaderT act asyncs `onException` cleanupAsync asyncs -- | Like 'async' but with built-in cancellation. -- Returns an IO action to wait on the result. asyncWithCleanUp :: AIO a -> AIO (IO a) asyncWithCleanUp act = do st <- AIO ask io <- unliftAIO act -- mask to make sure we keep track of the spawned async liftIO $ uninterruptibleMask $ \restore -> do a <- async $ restore io atomicModifyIORef'_ st (void a :) return $ wait a unliftAIO :: AIO a -> AIO (IO a) unliftAIO act = do st <- AIO ask return $ runReaderT (unAIO act) st newtype RunInIO = RunInIO (forall a. AIO a -> IO a) withRunInIO :: (RunInIO -> AIO b) -> AIO b withRunInIO k = do st <- AIO ask k $ RunInIO (\aio -> runReaderT (unAIO aio) st) cleanupAsync :: IORef [Async a] -> IO () -- mask to make sure we interrupt all the asyncs cleanupAsync ref = uninterruptibleMask $ \unmask -> do asyncs <- atomicModifyIORef' ref ([],) -- interrupt all the asyncs without waiting mapM_ (\a -> throwTo (asyncThreadId a) AsyncCancelled) asyncs -- Wait until all the asyncs are done -- But if it takes more than 10 seconds, log to stderr unless (null asyncs) $ do let warnIfTakingTooLong = unmask $ forever $ do sleep 10 traceM "cleanupAsync: waiting for asyncs to finish" withAsync warnIfTakingTooLong $ \_ -> mapM_ waitCatch asyncs data Wait = Wait {justWait :: !(IO ())} | Spawn {justWait :: !(IO ())} fmapWait :: (IO () -> IO ()) -> Wait -> Wait fmapWait f (Wait io) = Wait (f io) fmapWait f (Spawn io) = Spawn (f io) waitOrSpawn :: Wait -> IO (Either (IO ()) (Async ())) waitOrSpawn (Wait io) = pure $ Left io waitOrSpawn (Spawn io) = Right <$> async io waitConcurrently_ :: [Wait] -> AIO () waitConcurrently_ [] = pure () waitConcurrently_ [one] = liftIO $ justWait one waitConcurrently_ many = do ref <- AIO ask -- spawn the async computations. -- mask to make sure we keep track of all the asyncs. (asyncs, syncs) <- liftIO $ uninterruptibleMask $ \unmask -> do waits <- liftIO $ traverse (waitOrSpawn . fmapWait unmask) many let (syncs, asyncs) = partitionEithers waits liftIO $ atomicModifyIORef'_ ref (asyncs ++) return (asyncs, syncs) -- work on the sync computations liftIO $ sequence_ syncs -- wait for the async computations before returning liftIO $ traverse_ wait asyncs
import { Component, OnInit, OnDestroy, Input } from '@angular/core' // import { BreakpointObserver } from '@angular/cdk/layout' // import { DomSanitizer } from '@angular/platform-browser' // import { ConfigurationsService } from '../../../../../../../../../library/ws-widget/utils/src/public-api' import { ActivatedRoute } from '@angular/router' import { Subscription } from 'rxjs' import { NSDiscussData } from '../../models/discuss.model' @Component({ selector: 'app-discuss-left-menu', templateUrl: './left-menu.component.html', styleUrls: ['./left-menu.component.scss'], }) export class LeftMenuComponent implements OnInit, OnDestroy { // tabs: any = [] // tabs: any = [] @Input() unseen = 0 tabsData!: NSDiscussData.IDiscussJsonData private tabs: Subscription | null = null constructor( // private breakpointObserver: BreakpointObserver, // private domSanitizer: DomSanitizer, // private configSvc: ConfigurationsService, private activateRoute: ActivatedRoute, ) { } ngOnInit(): void { this.tabs = this.activateRoute.data.subscribe(data => { if (data && data.pageData) { this.tabsData = data.pageData.data.tabs || [] } }) } ngOnDestroy() { if (this.tabs) { this.tabs.unsubscribe() } } }
// DNSRebindFromQueryRandom is a response handler to DNS queries.
// It extracts the two hosts in the DNS query string,
// then returns one of the extracted hosts at random.
func DNSRebindFromQueryRandom(session string, dcss *DNSClientStateStore, q dns.Question) []string {
	dcss.RLock()
	answers := []string{dcss.Sessions[session].ResponseIPAddr}
	dnsCacheFlush := dcss.Sessions[session].DNSCacheFlush
	hosts := []string{dcss.Sessions[session].ResponseIPAddr, dcss.Sessions[session].ResponseReboundIPAddr}
	dcss.RUnlock()

	log.Printf("DNS: in DNSRebindFromQueryRandom\n")

	if !dnsCacheFlush {
		answers[0] = hosts[rand.Intn(len(hosts))]
	}

	return answers
}
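// A toy, standalone illustration of the choice the handler above makes
// (hypothetical helper -- the real state lives in DNSClientStateStore):
// during a cache-flush probe the primary host is always returned; otherwise
// either host is returned at random, which is what makes the rebind land
// eventually.
func pickRebindAnswer(first, rebound string, dnsCacheFlush bool) string {
	if dnsCacheFlush {
		// Keep answering with the primary host while the client's
		// resolver cache is being flushed.
		return first
	}
	hosts := []string{first, rebound}
	return hosts[rand.Intn(len(hosts))]
}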
def initialize(self, system, env): self.system = system self.env = env self.env.process(self.scheduling())
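# A minimal, self-contained sketch of the class this fragment appears to belong
# to, assuming a SimPy-style environment (`env.process`, `env.timeout`). The
# `scheduling` body below is a hypothetical placeholder, not the original logic.
import simpy

class Scheduler(object):
    def initialize(self, system, env):
        self.system = system
        self.env = env
        self.env.process(self.scheduling())

    def scheduling(self):
        while True:
            # Placeholder: wake up once per simulated time unit.
            yield self.env.timeout(1)

env = simpy.Environment()
scheduler = Scheduler()
scheduler.initialize(system=None, env=env)
env.run(until=3)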
/**
 * Created by ietree
 * 2017/5/1
 */
public class ArrayBinTreeTest {

    public static void main(String[] args) {
        ArrayBinTree<String> binTree = new ArrayBinTree<String>(4, "root");
        binTree.add(0, "second-level right child", false);
        binTree.add(2, "third-level right child", false);
        binTree.add(6, "fourth-level right child", false);
        System.out.println(binTree);
    }

}
/**
 * @author diegomauricio
 *
 * Class responsible for representing a team.
 */
package com.br.campanha.mvc.entity;

import java.io.Serializable;
import java.util.List;

import javax.persistence.CascadeType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.Table;
import javax.validation.constraints.NotNull;

import org.springframework.util.StringUtils;

import com.fasterxml.jackson.annotation.JsonIgnore;

@Entity
@Table(name = "TIME")
public class TimeEntity implements Serializable {

	private static final long serialVersionUID = 1L;

	public TimeEntity() {
		super();
	}

	public TimeEntity(String nomeTime) {
		super();
		this.nomeTime = nomeTime;
	}

	/**
	 * @param idTime
	 * @param nomeTime
	 * @param listaCmapanhas
	 */
	public TimeEntity(Long idTime, String nomeTime, List<CampanhaEntity> listaCmapanhas) {
		super();
		this.idTime = idTime;
		this.nomeTime = nomeTime;
		this.listaCmapanhas = listaCmapanhas;
	}

	@Id
	@GeneratedValue(strategy = GenerationType.AUTO)
	@Column(name = "idTime", unique = true, nullable = false)
	private Long idTime;

	// @JsonIgnore
	@Column(name = "nomeTime", unique = true, nullable = false)
	@NotNull(message = "Please fill in the team name.")
	private String nomeTime;

	@OneToMany(fetch = FetchType.EAGER, mappedBy = "timeCoracao", cascade = CascadeType.DETACH)
	private List<CampanhaEntity> listaCmapanhas;

	/**
	 * @return the idTime
	 */
	public Long getIdTime() {
		return idTime;
	}

	/**
	 * @param idTime
	 *            the id to set
	 */
	public void setIdTime(Long idTime) {
		this.idTime = idTime;
	}

	/**
	 * @return the nomeTime
	 */
	public String getNomeTime() {
		return nomeTime;
	}

	/**
	 * @param nomeTime
	 *            the nomeTime to set
	 */
	public void setNomeTime(String nomeTime) {
		this.nomeTime = nomeTime;
	}

	/**
	 * @return the listaCmapanhas
	 */
	public List<CampanhaEntity> getListaCmapanhas() {
		return listaCmapanhas;
	}

	/**
	 * @param listaCmapanhas
	 *            the listaCmapanhas to set
	 */
	public void setListaCmapanhas(List<CampanhaEntity> listaCmapanhas) {
		this.listaCmapanhas = listaCmapanhas;
	}

	/**
	 * Validation of empty fields.
	 *
	 * @return
	 */
	@JsonIgnore
	public boolean isInvalidTime() {
		if (StringUtils.isEmpty(getNomeTime())) {
			return true;
		}
		return false;
	}
}
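// A minimal illustrative driver for the entity above (hypothetical values;
// not part of the original project): a team with a name is valid, one
// without a name is not.
public class TimeEntityDemo {

	public static void main(String[] args) {
		TimeEntity time = new TimeEntity("Example Team");
		System.out.println(time.isInvalidTime()); // false: the name is set
		System.out.println(new TimeEntity().isInvalidTime()); // true: empty name
	}
}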
/// Performs the Boolean `OR` operation against another bitstream and writes the /// result into `self`. If the other bitstream ends before `self` does, it is /// extended with zero, leaving all remaining bits in `self` as they were. impl<E, T, I> BitOrAssign<I> for BitSlice<E, T> where E: crate::Endian, T: crate::Bits, I: IntoIterator<Item=bool> { /// `OR`s a bitstream into a slice. /// /// # Examples /// /// ```rust /// use bitvec::*; /// let lhs: &mut BitSlice = &mut bitvec![0, 1, 0, 1, 0, 1]; /// let rhs = bitvec![0, 0, 1, 1]; /// *lhs |= rhs; /// assert_eq!("011101", &format!("{}", lhs)); /// ``` fn bitor_assign(&mut self, rhs: I) { for (idx, other) in (0 .. self.len()).zip(rhs.into_iter()) { let val = self.get(idx) | other; self.set(idx, val); } } }
def receive(self, receivedQueue = None):
    oldData = "empty"
    while True:
        if self.isReceiver:
            try:
                if self.sub.poll(timeout=0):
                    data = self.sub.recv_multipart()
                    if oldData != data:
                        # Parse a payload of the form "... (lWheel, rWheel)".
                        start = data[1].find("(")
                        comma = data[1].find(",")
                        end = data[1].find(")")
                        # Slice the same (unstripped) string the indices were
                        # computed from; int() tolerates surrounding whitespace.
                        lWheel = int(data[1][start + 1:comma])
                        rWheel = int(data[1][comma + 1:end])
                        info = (lWheel, rWheel)
                        if receivedQueue is not None:
                            receivedQueue.put(info)
                        oldData = data
            except KeyboardInterrupt:
                print "Receiver stopping"
                break
        else:
            break
/** * Called upon child returning and USER saying no. */ public static final SubLObject uiat_sr_rejected_assertion(SubLObject interaction, SubLObject concept) { { final SubLThread thread = SubLProcess.currentSubLThread(); { SubLObject state = user_interaction_agenda.ui_state_lookup(interaction, $AR_STATE, UNPROVIDED); SubLObject v_agenda = user_interaction_agenda.ui_agenda(interaction); SubLObject force = user_interaction_agenda.ui_state_lookup(interaction, $TEXT_TYPE, UNPROVIDED); if (force == $QUESTION) { { SubLObject _prev_bind_0 = rkf_assisted_reader.$rkf_ar_processing_mode$.currentBinding(thread); SubLObject _prev_bind_1 = control_vars.$rkf_mt$.currentBinding(thread); SubLObject _prev_bind_2 = rkf_assisted_reader.$rkf_ar_parsing_mt$.currentBinding(thread); SubLObject _prev_bind_3 = rkf_assisted_reader.$rkf_ar_semantics_mt$.currentBinding(thread); SubLObject _prev_bind_4 = rkf_assisted_reader.$rkf_user$.currentBinding(thread); try { rkf_assisted_reader.$rkf_ar_processing_mode$.bind($QUESTION_PROCESSING, thread); control_vars.$rkf_mt$.bind(user_interaction_agenda.uia_domain_interaction_mt(v_agenda), thread); rkf_assisted_reader.$rkf_ar_parsing_mt$.bind(user_interaction_agenda.uia_parsing_interaction_mt(v_agenda), thread); rkf_assisted_reader.$rkf_ar_semantics_mt$.bind(user_interaction_agenda.uia_domain_interaction_mt(v_agenda), thread); rkf_assisted_reader.$rkf_user$.bind(user_interaction_agenda.uima_state_lookup(user_interaction_agenda.uia_meta_agenda(v_agenda), $USER, UNPROVIDED), thread); { SubLObject address = uia_mumbler.uia_mumble_create_address_for_uia(v_agenda); { SubLObject _prev_bind_0_11 = rkf_mumbler.$rkf_default_mumble_address$.currentBinding(thread); try { rkf_mumbler.$rkf_default_mumble_address$.bind(address, thread); rkf_assisted_reader.rkf_ar_act_reject_assert_concept(state, concept); } finally { rkf_mumbler.$rkf_default_mumble_address$.rebind(_prev_bind_0_11, thread); } } } } finally { rkf_assisted_reader.$rkf_user$.rebind(_prev_bind_4, thread); rkf_assisted_reader.$rkf_ar_semantics_mt$.rebind(_prev_bind_3, thread); rkf_assisted_reader.$rkf_ar_parsing_mt$.rebind(_prev_bind_2, thread); control_vars.$rkf_mt$.rebind(_prev_bind_1, thread); rkf_assisted_reader.$rkf_ar_processing_mode$.rebind(_prev_bind_0, thread); } } } else { { SubLObject _prev_bind_0 = rkf_assisted_reader.$rkf_ar_processing_mode$.currentBinding(thread); SubLObject _prev_bind_1 = control_vars.$rkf_mt$.currentBinding(thread); SubLObject _prev_bind_2 = rkf_assisted_reader.$rkf_ar_parsing_mt$.currentBinding(thread); SubLObject _prev_bind_3 = rkf_assisted_reader.$rkf_ar_semantics_mt$.currentBinding(thread); SubLObject _prev_bind_4 = rkf_assisted_reader.$rkf_user$.currentBinding(thread); try { rkf_assisted_reader.$rkf_ar_processing_mode$.bind($TEXT_PROCESSING, thread); control_vars.$rkf_mt$.bind(user_interaction_agenda.uia_domain_interaction_mt(v_agenda), thread); rkf_assisted_reader.$rkf_ar_parsing_mt$.bind(user_interaction_agenda.uia_parsing_interaction_mt(v_agenda), thread); rkf_assisted_reader.$rkf_ar_semantics_mt$.bind(user_interaction_agenda.uia_domain_interaction_mt(v_agenda), thread); rkf_assisted_reader.$rkf_user$.bind(user_interaction_agenda.uima_state_lookup(user_interaction_agenda.uia_meta_agenda(v_agenda), $USER, UNPROVIDED), thread); { SubLObject address = uia_mumbler.uia_mumble_create_address_for_uia(v_agenda); { SubLObject _prev_bind_0_12 = rkf_mumbler.$rkf_default_mumble_address$.currentBinding(thread); try { rkf_mumbler.$rkf_default_mumble_address$.bind(address, thread); 
rkf_assisted_reader.rkf_ar_act_reject_assert_concept(state, concept); } finally { rkf_mumbler.$rkf_default_mumble_address$.rebind(_prev_bind_0_12, thread); } } } } finally { rkf_assisted_reader.$rkf_user$.rebind(_prev_bind_4, thread); rkf_assisted_reader.$rkf_ar_semantics_mt$.rebind(_prev_bind_3, thread); rkf_assisted_reader.$rkf_ar_parsing_mt$.rebind(_prev_bind_2, thread); control_vars.$rkf_mt$.rebind(_prev_bind_1, thread); rkf_assisted_reader.$rkf_ar_processing_mode$.rebind(_prev_bind_0, thread); } } } } return interaction; } }
Picture: Wikipedia

Here we have some real science fiction. Researchers from the UK have received £228,000 from the Leverhulme Trust to build an amorphous, non-silicon biological robot. The plasmobot is to be built from Physarum polycephalum. This is a mould that lives in forests, gardens and wet places in general. The research should be a step into a completely new field of robotics, one that enables parallel processing with non-silicon parts. Well, why the hell use mould? Professor Andy Adamatzky, who leads the project, proved in an earlier project that the mould has 'computing abilities'! They call it plasmodium. He says:

This mould, or plasmodium, is a naturally occurring substance with its own embedded intelligence. It propagates and searches for sources of nutrients and when it finds such sources it branches out in a series of veins of protoplasm. The plasmodium is capable of solving complex computational tasks, such as the shortest path between points and other logical calculations. Through previous experiments we have already demonstrated the ability of this mould to transport objects. By feeding it oat flakes, it grows tubes which oscillate and make it move in a certain direction carrying objects with it. We can also use light or chemical stimuli to make it grow in a certain direction.

Not enough yet? Then take this:

This new plasmodium robot, called plasmobot, will sense objects, span them in the shortest and best way possible, and transport tiny objects along pre-programmed directions. The robots will have parallel inputs and outputs, a network of sensors and the number crunching power of super computers.

Totally weird. I don't want to quote the whole article here. Read the rest at Science Daily.
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE NoStarIsType #-}

-- | Convention:
--
--   @>@  Vector to the left of operator (mnemonic: v)
--   @<@  Vector to the right of operator (mnemonic: v)
--   @|@  Matrix to side of operator
--   @.@  Last element of vector/matrix.
--
-- The above symbols were chosen to minimize risk of conflict with common
-- operators from other libraries (based on Hoogle search).

module Numeric.Units.Dimensional.LinearAlgebra.Operators where

import Data.Proxy
import GHC.TypeLits hiding (type (*))
import Numeric.Units.Dimensional.LinearAlgebra.Vector
import Numeric.Units.Dimensional.LinearAlgebra.Matrix
import Numeric.Units.Dimensional.Prelude
import qualified Prelude

-- Operator fixity analogous with Prelude.
infixl 9  >!!
infixl 7  *<, >*, >/, >.<, *|, |*, |*|, |*<, >*|
infixl 6  >+<, >-<, |+|, |-|
infixr 5  <:, <:., |:, |:.

-- In these construction operators the @:@ cannot be on the left, so the
-- order of characters is somewhat reversed from the ideal (for consistency
-- with the other operator conventions in this module the @>@ and @|@ should
-- have been on the right side of the operator).
(<:) = vCons
x <:. y = x <: vSing y
(|:) :: Vec d c a -> Mat d r c a -> Mat d (r+1) c a
(|:) = consRow
v1 |:. v2 = v1 |: rowMatrix v2

-- | Vector element querying.
(>!!) :: (KnownNat m, m + 1 <= n) => Vec d n a -> Proxy m -> Quantity d a
v >!! n = vElemAt n v

-- Vectors
(>+<), (>-<) :: Num a => Vec d n a -> Vec d n a -> Vec d n a
(>+<) = elemAdd
(>-<) = elemSub

(*<) :: Num a => Quantity d1 a -> Vec d2 n a -> Vec ((*) d1 d2) n a
(*<) = scaleVec
(>*) :: Num a => Vec d1 n a -> Quantity d2 a -> Vec ((*) d2 d1) n a
(>*) = flip scaleVec
(>/) :: Fractional a => Vec d1 n a -> Quantity d2 a -> Vec ((/) d1 d2) n a
(>/) = scaleVecInv
(>.<) :: Num a => Vec d1 n a -> Vec d2 n a -> Quantity ((*) d1 d2) a
(>.<) = dotProduct

-- Matrices
(|+|), (|-|) :: Num a => Mat d r c a -> Mat d r c a -> Mat d r c a
(|+|) = mElemAdd
(|-|) = mElemSub

(|*|) :: Num a => Mat d1 r n a -> Mat d2 n c a -> Mat ((*) d1 d2) r c a
(|*|) = matMat
(*|) :: Num a => Quantity d1 a -> Mat d2 r c a -> Mat ((*) d1 d2) r c a
(*|) = scaleMat
(|*) :: Num a => Mat d1 r c a -> Quantity d2 a -> Mat ((*) d2 d1) r c a
(|*) = flip scaleMat
(|*<) :: Num a => Mat d1 r c a -> Vec d2 c a -> Vec ((*) d1 d2) r a
(|*<) = matVec
(>*|) :: Num a => Vec d1 r a -> Mat d2 r c a -> Vec ((*) d2 d1) c a
(>*|) v m = transpose m |*< v -- vecMat v m
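-- A small usage sketch (illustrative, not part of the module): building two
-- 2-vectors of lengths with the construction operators and taking their dot
-- product, which yields an area. Assumes the Prelude re-export above provides
-- 'meter', '(*~)', and the 'Area' alias.
exampleDot :: Area Double
exampleDot = v1 >.< v2
  where
    v1 = (1 *~ meter) <:. (2 *~ meter)
    v2 = (3 *~ meter) <:. (4 *~ meter)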
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

/**
 * Created by guilhermeaugusto on 15/08/2014.
 */
public class AlarmReciever extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        LogFiles.writeAlarmTriggerLog(Enums.TriggerType.Active);
        DataBaseHandler dataBaseHandler = new DataBaseHandler(context);
        Intent activityIntent = new Intent(context, VisualizeAnnotationActivity.class);
        Annotations annotation = dataBaseHandler.selectAnnotation(intent.getLongExtra("Annotation_ID", 0));

        if (annotation.getAlarm().getCyclePeriod() != Enums.PeriodTypes.None) {
            // Recurring alarm: advance the trigger time by one cycle and re-arm it.
            Long dateInMillis = Long.parseLong(annotation.getAlarm().getDateInMillis());
            Long periodInMillis = annotation.getAlarm().createCycleTimeInMillis();
            annotation.getAlarm().setDateInMillis(Long.toString(dateInMillis + periodInMillis));
            AlarmEntity.createAlarm(context, annotation);
        } else {
            // One-shot alarm: clear its id so it is not rescheduled.
            annotation.getAlarm().setId(0);
        }
        dataBaseHandler.updateAnnotation(annotation);
        annotation.setOperationType(Enums.OperationType.Triggered);

        activityIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        activityIntent.addFlags(Intent.FLAG_ACTIVITY_MULTIPLE_TASK);
        activityIntent.putExtra("Annotation", annotation);
        context.startActivity(activityIntent);
    }
}
/* * Copyright 2002-2014 iGeek, Inc. * All Rights Reserved * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package com.igeekinc.util.objectcache; import javax.swing.event.ChangeListener; public interface CachableObjectIF<I> { public abstract I getIDObject(); public abstract CachableObjectHandle<I, ? extends CachableObjectIF<I>> getHandle(); /** * CachableObject supports ChangeListeners for convenience. The listeners * are not stored persistently so in order to be sure to receive events, keep * a reference to the object. CachableObjectHandles do not maintain a reference. * The cache and GC may discard unreferenced objects at any time even if they have * ChangeListeners attached and if the object is subsequently reloaded by the cache * the ChangeListeners will not be reattached. * @param newListener */ public abstract void addChangeListener(ChangeListener newListener); public abstract void removeChangeListener(ChangeListener removeListener); public abstract void fireChangeEvent(); public abstract boolean equals(Object obj); @SuppressWarnings("unchecked") public abstract boolean equals(CachableObject obj); public abstract int hashCode(); public abstract boolean isDirty(); public abstract long getDirtyTime(); // Returns the time when the object was last set dirty }
VOL. 130 | NO. 169 | Monday, August 31, 2015 Apple is preparing to move its Germantown store, with a “next generation” design planned for the new location. The retailer has applied for a $1.5 million building permit through the city-county Office of Construction Code Enforcement for a new storefront and interior renovation at 2031 West St., within the Shops of Saddle Creek South. Apple’s current store is at 7615 W. Farmington Blvd., in the Shops of Saddle Creek North. The Germantown Design Review Commission unanimously approved the plans, which include modifying the current brick façade to a three-panel glass front, at its Aug. 25 meeting. Rick Millitello, who presented the plan to the design commission on behalf of Apple, said the store will be one of the first to feature Apple’s new store design. “Our project is the next generation of retail store that we’re rolling out, and that’s the design concept that we have – and we’re really excited because this is going to be one of the first, if it’s approved, that we build,” Millitello told the commission. “So we’re really excited to expand in Germantown and we’re excited to see the result of all the work that we’ve put in to develop this design.” Other aspects of the design, according to Millitello, include a matte granite reinforced panel on the exterior as well as natural oak tables inside. The store will also feature a changeable display that will include living plants at times, TV displays that change and artwork, among other things. Millitello told the commission the new retail design is rolling out in some overseas stores this fall, and the Memphis store will be part of the first rollout in the U.S.
Three. That’s the number of times Senate Majority Leader Harry Reid mentioned his favorite political targets, the billionaire Koch brothers, in a Senate Judiciary Committee hearing held Tuesday.

Reid was testifying at the hearing, which was held to address a constitutional amendment to overturn the controversial 2010 Citizens United ruling. “My Republican colleagues attempt to cloak their defense of the status quo in terms of noble principles,” said Reid, a Democrat from Nevada. “They defend the money pumped into our system by the Koch brothers as free speech.”

Two Democratic senators, Tom Udall of New Mexico and Michael Bennet of Colorado, are sponsoring the amendment, which is backed by 40 other Democrats, to overturn the Citizens United ruling. The Supreme Court’s 5-4 decision in that case allowed organizations such as unions and corporations to donate unlimited amounts of money to nonprofit political groups. Republicans largely favor the ruling, seeing it as a matter of free speech. Democrats believe it undermines the principles of democracy by, they say, allowing money to unfairly influence elections.

Reid testified for 10 minutes before mentioning billionaire libertarian donors Charles and David Koch. “I defy anyone to determine what the Koch brothers are spending money on today politically,” he said. “They have all of these phantom organizations.” “They must have 15 different phony organizations to pump into the system to hide who they are,” he railed, calling the businessmen “the two wealthiest men in America, interested in their bottom line.” “The American people reject the notion that money gives the Koch brothers, corporations or special interest groups a greater voice in government than a mechanic, a lawyer, a doctor, a health care worker,” said Reid.

Reid’s focus on the Kochs, who operate Koch Industries, based in Wichita, Kan., is not new. By one count, the Nevada Democrat has mentioned the duo 134 times on the Senate floor. Earlier this year, he called them “unAmerican” after a group the Kochs support aired ads opposing Obamacare. Reid’s focus led Republican Kan. Sen. Pat Roberts to comment last month that Reid was suffering from a “Koch addiction.”

In his testimony, Reid told a story about his 1998 Senate race against Republican John Ensign. Reid ultimately won the race, but said that he was bothered by the money floating around in it. “I hope that did not corrupt me, but it was corrupting,” he said of the experience. When the McCain-Feingold campaign finance law passed in 2002, Reid said “it was like taking a bath.” “I felt so clean.” And with the Supreme Court’s 5-4 vote in favor of Citizens United, it was “back into the sewer.”

Ted Cruz, the Republican Texas senator, provided the most colorful response to the Democrats’ efforts, saying that the amendment would “muzzle” the free speech rights of all Americans, including both liberal and conservative groups. “This amendment, if adopted, would give Congress the power to ban books and to ban movies,” Cruz continued, while pointing out that the Citizens United case began as an effort to quiet the makers of a movie critical of Hillary Clinton. He also referenced the book “Fahrenheit 451,” the famous dystopian novel in which the American government burned books. “Ray Bradbury would be astonished because we are seeing ‘Fahrenheit 451’ Democrats today,” said Cruz. In the hearing, Cruz announced that he would be proposing two campaign finance bills later Tuesday.
/**
 * Draw string at widget offset.
 *
 * @param gc the GC
 * @param offset the widget offset
 * @param s the string to be drawn
 * @param fg the foreground color
 */
private void draw(GC gc, int offset, String s, Color fg) {
	int baseline= fTextWidget.getBaseline(offset);
	FontMetrics fontMetrics= gc.getFontMetrics();
	int fontBaseline= fontMetrics.getAscent() + fontMetrics.getLeading();
	int baselineDelta= baseline - fontBaseline;
	Point pos= fTextWidget.getLocationAtOffset(offset);
	gc.setForeground(fg);
	gc.drawString(s, pos.x, pos.y + baselineDelta, true);
}
import platform import numpy as np import cv2 import math from skimage.filters import gaussian from skimage.feature import peak_local_max from tflite_runtime.interpreter import Interpreter from tflite_runtime.interpreter import load_delegate # edge tpu delegate names for different hardware EDGETPU_SHARED_LIB = { 'Windows': 'edgetpu.dll', 'Linux': 'libedgetpu.so.1', 'Darwin': 'libedgetpu.1.dylib' }[platform.system()] class Grasp: def __init__(self, center, angle, length, width): self.center = center self.angle = angle self.length = length self.width = width def sqr_crop(img): h = img.shape[0] w = img.shape[1] half_sm_dim = min(h, w)//2 img = img[(h//2)-(half_sm_dim):(h//2)+(half_sm_dim), (w//2)-(half_sm_dim):(w//2)+(half_sm_dim)] return img def detect_grasps(p_img, width_img, ang_img, num_grasps=1, ang_threshold=5): # TODO: check speed local_max = peak_local_max(p_img, min_distance=20, threshold_abs=0.2, num_peaks=num_grasps) grasps = [] for grasp_point_array in local_max: grasp_point = tuple(grasp_point_array) grasp_length = width_img[grasp_point] g_width = grasp_length/2 grasp_angle = ang_img[grasp_point] if ang_threshold > 0: if grasp_angle > 0: grasp_angle = ang_img[grasp_point[0] - ang_threshold:grasp_point[0] + ang_threshold + 1, grasp_point[1] - ang_threshold:grasp_point[1] + ang_threshold + 1].max() else: grasp_angle = ang_img[grasp_point[0] - ang_threshold:grasp_point[0] + ang_threshold + 1, grasp_point[1] - ang_threshold:grasp_point[1] + ang_threshold + 1].min() g = Grasp(grasp_point, grasp_angle, grasp_length, g_width) grasps.append(g) return grasps def main(): # file directories model_dir = 'trained_models/ggcnn_model_edgetpu.tflite' img_dir = 'sample_imgs/img_1.jpeg' # set interpreter and get tensors interpreter = Interpreter(model_dir, experimental_delegates=[load_delegate(EDGETPU_SHARED_LIB)]) interpreter.allocate_tensors() # get input/output tensor information _, input_height, input_width, _ = interpreter.get_input_details()[0]['shape'] input_scale, input_zero_point = interpreter.get_input_details()[0]['quantization'] data_type = interpreter.get_input_details()[0]['dtype'] output_scale_0, output_zero_point_0 = interpreter.get_output_details()[0]['quantization'] output_scale_1, output_zero_point_1 = interpreter.get_output_details()[1]['quantization'] output_scale_2, output_zero_point_2 = interpreter.get_output_details()[2]['quantization'] output_scale_3, output_zero_point_3 = interpreter.get_output_details()[3]['quantization'] # read and manipulate image img = cv2.imread(img_dir) img_sqr = sqr_crop(img) img_resized = cv2.resize(img_sqr, (input_height, input_width)) frame = img_resized img_scaled = (img_resized / input_scale) + input_zero_point # scale input for quantized model img_expanded = np.expand_dims(img_scaled, axis=0).astype(data_type) # run model interpreter.set_tensor(interpreter.get_input_details()[0]['index'], img_expanded) interpreter.invoke() # get model output model_output_data_0 = (interpreter.get_tensor(interpreter.get_output_details()[0]['index'])).astype(np.float32) model_output_data_1 = (interpreter.get_tensor(interpreter.get_output_details()[1]['index'])).astype(np.float32) model_output_data_2 = (interpreter.get_tensor(interpreter.get_output_details()[2]['index'])).astype(np.float32) model_output_data_3 = (interpreter.get_tensor(interpreter.get_output_details()[3]['index'])).astype(np.float32) # scale output from quantized model model_output_data_0 =(model_output_data_0 - output_zero_point_0) * output_scale_0 model_output_data_1 =(model_output_data_1 - 
output_zero_point_1) * output_scale_1 model_output_data_2 =(model_output_data_2 - output_zero_point_2) * output_scale_2 model_output_data_3 =(model_output_data_3 - output_zero_point_3) * output_scale_3 # get postion, angle, and width maps grasp_positions_out = model_output_data_0 grasp_angles_out = np.arctan2(model_output_data_2, model_output_data_1)/2.0 grasp_width_out = model_output_data_3 * 150 # convert maps to images grasp_position_img = grasp_positions_out[0, ].squeeze() grasp_width_img = grasp_width_out[0, ].squeeze() grasp_angles_img = grasp_angles_out[0, ].squeeze() # run each image through a gaussian filter grasp_position_img = gaussian(grasp_position_img, 2.0, preserve_range=True) grasp_width_img = gaussian(grasp_width_img, 1.0, preserve_range=True) # grasp_angles_img = gaussian(grasp_angles_img, 2.0, preserve_range=True) # find grasps from images grasps = detect_grasps(grasp_position_img, grasp_width_img, grasp_angles_img) for g in grasps: line_x1 = int(g.center[1]-(g.width*math.cos(g.angle))) line_y1 = int(g.center[0]+(g.width*math.sin(g.angle))) line_x2 = int(g.center[1]+(g.width*math.cos(g.angle))) line_y2 = int(g.center[0]-(g.width*math.sin(g.angle))) cv2.line(frame, (line_x1,line_y1), (line_x2,line_y2), (0, 0, 255), 2) frame = cv2.resize(frame, (600, 600)) cv2.imshow("Frame", frame) cv2.waitKey(1000) if __name__ == '__main__': main()
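The scale and zero-point handling in this script is ordinary affine quantization: a real value maps to a quantized integer via q = real/scale + zero_point, and back via real = (q - zero_point) * scale. A tiny self-contained sketch of that round trip (the parameter values are illustrative, not taken from the model):

```python
import numpy as np

scale, zero_point = 0.0039, 128  # illustrative quantization parameters
real = np.array([-0.25, 0.0, 0.30], dtype=np.float32)

# Quantize the way the script scales its input tensor ...
q = np.clip(np.round(real / scale + zero_point), 0, 255).astype(np.uint8)

# ... and dequantize the way it rescales the four output tensors.
recovered = (q.astype(np.float32) - zero_point) * scale
print(recovered)  # within one quantization step of the original values
```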
Characterization of a human pathological immunoglobulin fragmenting during purification. A pathological immunoglobulin G1 (Po), briefly characterized in plasma before purification, was found to fragment during salt precipitation into material resembling the Fab and Fc components of papain proteolysis. Termed (Fab)s and (Fc)s fragments, they were studied by a variety of physicochemical and immunological procedures, as a pseudoglobulin mixture and after isolation by ion-exchange chromatography. The (Fab)s fragment was almost identical in composition to the analogous IgG products arising from papain, plasmin, and trypsin attack. Except for a low half-cystine content, comparative amino acid analyses and tryptic peptide mapping indicated that the (Fc)s fragment was closely related to the proteins elaborated in heavy chain disease. Experiments with papain digests showed that the Fab fragment possessed more antigenic determinants than (Fab)s, whereas the Fc and (Fc)s fragments were immunologically indistinguishable. Quantitative end-group determinations revealed that each Po fragment contained a single free N-terminal residue. A tridecapeptide sequence was established for the (Fab)s fragment and a hexapeptide sequence for (Fc)s at their N-termini. Comparisons with known structural data suggested that the (Fab)s sequence was due to L chains. The location of the hexapeptide was tentatively assigned to the N-terminal section of the Fc component of tryptic proteolysis.
/* * Get the MultiXact data to save in a checkpoint record */ void MultiXactGetCheckptMulti(bool is_shutdown, MultiXactId *nextMulti, MultiXactOffset *nextMultiOffset, MultiXactId *oldestMulti, Oid *oldestMultiDB) { LWLockAcquire(MultiXactGenLock, LW_SHARED); *nextMulti = MultiXactState->nextMXact; *nextMultiOffset = MultiXactState->nextOffset; *oldestMulti = MultiXactState->oldestMultiXactId; *oldestMultiDB = MultiXactState->oldestMultiXactDB; LWLockRelease(MultiXactGenLock); debug_elog6(DEBUG2, "MultiXact: checkpoint is nextMulti %u, nextOffset %u, oldestMulti %u in DB %u", *nextMulti, *nextMultiOffset, *oldestMulti, *oldestMultiDB); }
use std::io::Read;

fn main() {
    // Read all of stdin at once.
    let mut s: String = String::new();
    std::io::stdin().read_to_string(&mut s).ok();
    let mut itr = s.trim().split_whitespace();

    // The 3x3 bingo card, read row by row.
    let a: Vec<usize> = (0..9)
        .map(|_| itr.next().unwrap().parse().unwrap())
        .collect();
    let n: usize = itr.next().unwrap().parse().unwrap();

    // c[j] is true once the j-th cell has been called.
    let mut c: Vec<bool> = vec![false; 9];
    for _ in 0..n {
        let q: usize = itr.next().unwrap().parse().unwrap();
        for j in 0..9 {
            if a[j] == q {
                c[j] = true;
            }
        }
    }

    // Check the three rows, three columns and two diagonals.
    if (c[0] && c[1] && c[2])
        || (c[3] && c[4] && c[5])
        || (c[6] && c[7] && c[8])
        || (c[0] && c[3] && c[6])
        || (c[1] && c[4] && c[7])
        || (c[2] && c[5] && c[8])
        || (c[0] && c[4] && c[8])
        || (c[2] && c[4] && c[6])
    {
        println!("Yes");
    } else {
        println!("No");
    }
}
/** * Returns whether there is an active network connection or not. * <p/> * Note that an active network connection does not guarantee that there is a connection to the * internet. * * @param context the context to use to get the {@link ConnectivityManager} * @return whether there is an active network connection or not */ public static boolean isNetworkAvailable(@NonNull Context context) { final ConnectivityManager cm = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE); final NetworkInfo activeNetwork = cm.getActiveNetworkInfo(); return activeNetwork != null && activeNetwork.isConnected(); }
The Gujarat BJP has been posting videos of religious and spiritual gurus praising Prime Minister Narendra Modi, and asking their followers to ‘strengthen his hands’.

New Delhi: In between their sermons on gods and goddesses and the scriptures, spiritual leaders and religious preachers in Gujarat have also been listing the qualities of Prime Minister Narendra Modi, and backing his government’s decisions. And what could be a better opportunity to flaunt them than in the run-up to the state assembly elections? Which is exactly what the ruling party is doing.

Videos of these sermons, shot at different points in time, are now being circulated on social media to garner support for the BJP ahead of next month’s polls. The preachers have praised Modi’s honesty and patriotism, while a few of them have also weighed in positively on decisions like demonetisation and the introduction of the Goods and Services Tax. The underlying message to their followers is to “strengthen his hands”.

Ramesh Bhai Ojha

A spiritual leader from the Saurashtra region of Gujarat, Ojha cited the example of a train changing tracks to support the PM’s decision to implement GST, in a video posted by the Gujarat BJP’s Facebook page on 6 November. “When a train changes track, the speed has to be slowed down, otherwise there would be a derailment. And here we are talking about changing our economy. There would be lot more noises from both those in favour and those against it,” Ojha said. “The most important thing is that the ‘Pradhan Sewak’ of the country is working, and even if a mistake is committed, then people would say ‘it’s okay’, as they know it is for better of the country,” he added.

He compared the situation to cricket icon Sachin Tendulkar failing to score a century. “The crowd will be disappointed, but they will not blame him. No one would doubt his potential and intention,” Ojha said.

Varniswaroopdasji

A child spiritual guru, Varniswaroopdasji spoke about India’s rising stature in the world because of Prime Minister Modi, in a video posted by the Gujarat BJP on its Facebook page on 5 November. “Samarpan maa taakat chhe (there is power in sacrifice). Modi sahib is a statue of sacrifice. During demonetisation, many stood in line and got just Rs 2,000; people bore the hardships. People are 98 per cent sure that (Modi) wants to do something for the country. If there is any shrestha vyaktitva (superior personality) after Gandhi, it is him,” he said.

Mahant Swami Maharaj

Another video, posted on 4 November, showed Mahant Swami Maharaj, the head of the BAPS Swaminarayan Sanstha that runs Akshardham temples, wishing that “the lotus and Modi would bloom”.

Nityaswaroopdas

Posted on 31 October, the video shows Swami Nityaswaroopdas telling his followers: “No one can threaten us anymore; this is how the country has been made now. He [Modi] has no personal interest and greed. He does not have anyone in family. Strengthen his hands.”

Morari Bapu

A guru with a global following, Morari Bapu also spoke in favour of Modi in a video posted a few months ago. “I got a call from Kaushikbhai (Patel, BJP Gujarat co-ordinator) asking if I have any views on the Modi government’s three years in power. I said I maintain my distance from everyone and have nothing to do with politics. All I could say is ‘Is aadmi ki rashtrabhakti par koi ungli nahi utha sakta’ (No one can question his patriotism),” Bapu said.
/** * Writes a PUSH_PROMISE frame to the output stream. * * @param frame the PUSH_PROMISE frame * @param out the output stream * * @throws IOException if an I/O error occurs */ private void writePushPromise(final Frame frame, final OutputStream out) throws IOException { assert frame instanceof PushPromiseFrame : "Non-PUSH_PROMISE frame passed to writePushPromise"; final var pushPromise = (PushPromiseFrame) frame; this.writeUnsignedInt(pushPromise.getPromisedStreamId(), out); out.write(pushPromise.getFragment()); }
""" This utility is called by Github Actions. It bumps the version number, unless the patch part of the major.minor.patch version is 0. This would suggest a manual major or minor release, in which case we probably don't want automatic patch increments. It's expected that setup.py contains a call to setup, where one of the arguments is the version number. This script rewrites setup.py to have a bumped version number. Other Github Action workflows after it could commit this rewritten setup.py back to the repository automatically. The output of this script is always the bumped version number, which could be put into an environment variable for later use in the workflow. """ import ast import black with open("setup.py") as f: parsed_setup = ast.parse(f.read()) for element in parsed_setup.body: if isinstance(element, ast.Expr) and element.value.func.id == "setup": for keyword in element.value.keywords: if keyword.arg == "version": original_version = keyword.value.value major, minor, patch = keyword.value.value.split(".") if int(patch) != 0: # If the last digit is 0, it suggests that a major or minor # release was done manually. In that case, don't alter anything. patch = str(int(patch) + 1) bumped_version = keyword.value.value = f"{major}.{minor}.{patch}" bumped_setup = black.format_str(ast.unparse(parsed_setup), mode=black.FileMode()) print(bumped_version) with open("setup.py", "w") as w: w.write(bumped_setup)
BACKGROUND Alpine skiing and snowboarding are the most popular winter sports. These sports are also associated with a certain injury risk which, however, has steadily decreased during the past decades. The last large survey on ski injuries in Austria was performed during the winter season 2002/2003. Among other factors, modern skiing equipment and optimized slope preparation may have an impact on the injury risk. We hypothesise that these changes may have led to a further decrease in ski injuries during the past decade. METHODS In the winter season 2012/2013, skiing injuries were recorded in 26 Austrian ski areas. Data were collected from rescue personnel on ski slopes and by physicians in the hospital or doctor's practice with the help of a questionnaire. RESULTS A total of 7325 injured skiers and snowboarders (age: 34.8 ± 17.8 years) were recorded (49 % males and 51 % females; 80 % skiers, 14 % snowboarders, 6 % others). The most frequent causes of injury were self-inflicted falls (87 %) and collisions with other skiers/snowboarders (8 %). The most frequently injured body regions among skiers were the knee (41 %; predominantly in female skiers, > 50 %), shoulder/back (18 %) and arms (10 %). Among snowboarders they were the arms (38 %) and shoulder/back (23 %). Head injuries were found at the same frequency (8 %) in skiers and snowboarders. The calculated injury rate was about 0.6 injuries per 1000 skier days and has decreased by more than 50 % during the past decade. CONCLUSIONS Modern skiing equipment and optimised slope preparation may be at least partly responsible for the decreased injury risk on ski slopes, which is supported by the observation of a reduced falling frequency. Future preventive measures should focus on a reduction of knee injuries in female skiers.
type Result = {
  current: [any, any]
}

export function getStateFromResult(result: Result) {
  return result.current[0]
}

export function getHandlersFromResult(result: Result) {
  return result.current[1]
}
// trimPrefix is an implementation of strings.TrimPrefix that uses caseCompare func (b *Kit) trimPrefix(s, prefix string) string { if b.hasPrefix(s, prefix) { return s[len(prefix):] } return s }
/** * The action that allows navigating to the path element */ static class NavigateAction extends DumbAwareAction { /** * The constructor */ NavigateAction() { super("Navigate to ...", "Navigate to place where path element is defined", null); } /** * {@inheritDoc} */ @Override public void actionPerformed(AnActionEvent e) { final Module module = e.getData(LangDataKeys.MODULE); if (module == null) { return; } final ModuleDependenciesAnalyzer.OrderPathElement element = e.getData(ORDER_PATH_ELEMENT_KEY); if (element instanceof ModuleDependenciesAnalyzer.OrderEntryPathElement) { final ModuleDependenciesAnalyzer.OrderEntryPathElement o = (ModuleDependenciesAnalyzer.OrderEntryPathElement)element; final OrderEntry entry = o.entry(); final Module m = entry.getOwnerModule(); ProjectStructureConfigurable.getInstance(module.getProject()).selectOrderEntry(m, entry); } } }
/** * * Agent which starts the whole system core * */ public class Agent_Initiator extends PikaterAgent { private static final long serialVersionUID = -3908734088006529947L; private String fileName = null; /** * Get ontologies which is using this agent */ @Override public List<Ontology> getOntologies() { List<Ontology> ontologies = new ArrayList<Ontology>(); ontologies.add(AgentManagementOntology.getInstance()); return ontologies; } /** * Agent setup */ @Override protected void setup() { initDefault(); registerWithDF(CoreAgents.INITIATOR.getName()); logInfo("Agent " + getName() + " configuration " + fileName); // read agents from configuration try { XmlConfigurationProvider configProvider = new XmlConfigurationProvider(fileName); Configuration configuration = configProvider.getConfiguration(); List<AgentConfiguration> agentConfigurations = configuration.getAgentConfigurations(); for (AgentConfiguration agentConfiguration : agentConfigurations) { // Preimplemented jade agents do not count with named arguments, // convert to string if necessary Object [] argArray = agentConfiguration.getArguments().toArray(); Object[] arguments = processArgs(argArray); Boolean creationSuccessful = this.createAgent( agentConfiguration.getAgentType(), agentConfiguration.getAgentName(), arguments); if (!creationSuccessful) { logSevere("Creation of agent " + agentConfiguration.getAgentName() + " failed."); } } } catch (Exception e) { this.logException("Unexpected error occured:", e); } addBehaviour(new TickerBehaviour(this, 60000) { private static final long serialVersionUID = 2962563585712447816L; Calendar calender; SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); protected void onTick() { calender = Calendar.getInstance(); logInfo("tick=" + getTickCount() + " time=" + sdf.format(calender.getTime()) ); } }); } /** * Creates agent in this container * @param type - agent type = name of class * @param name - agent name * @return - confirms creation */ public Boolean createAgent(String type, String name, Object[] args) { // get a container controller for creating new agents PlatformController container = getContainerController(); String agentName = name; if ((nodeName != null) && (!nodeName.isEmpty())) { agentName = name + "-" + nodeName; } try { AgentController agent = container.createNewAgent( agentName, type, args); agent.start(); // provide agent time to register with DF etc. doWait(300); return true; } catch (ControllerException e) { logException("Exception while adding agent", e); return false; } } /** * Arguments conversion * */ public Object[] processArgs(Object[] arguments) { Object[] toReturn = new Object[arguments.length]; for (int i = 0; i < arguments.length; i++) { Argument arg = (Argument) arguments[i]; if (arg.getSendOnlyValue()) { toReturn[i] = arg.getValue(); } else { toReturn[i] = arguments[i]; } } return toReturn; } /** * Initialization */ @Override public void initDefault() { Object[] args = getArguments(); if (args != null) { if (args.length > 0) { fileName = (String) args[0]; } if (args.length > 1) { nodeName = (String) args[1]; } } if (fileName == null) { fileName = CoreConfiguration .getCoreMasterConfigurationFilepath(); } initLogging(); } }
import { SmoothedMovingAverage } from '../moving-average/smoothed-moving-average';
import { ConfigurableSourceIndicator } from './configurable-source-indicator';
import { IndicatorConfiguration } from './configurable-source';
import { SimpleMovingAverage } from '../moving-average/simple-moving-average';
import { MovingAverage } from '../moving-average/moving-average';
import { ExponentialMovingAverage } from '../moving-average/exponential-moving-average';

export enum RsiAverage {
  /**
   * Wilder originally formulated the calculation of the moving average as
   * using Smoothed Moving Average.
   */
  WILDER = 'WILDER',

  /**
   * A variation called Cutler's RSI is based on a simple moving average of U and D.
   */
  CUTLER = 'CUTLER',

  /**
   * Some commercial packages, like AIQ, use a standard exponential
   * moving average (EMA).
   */
  EMA = 'EMA'
}

export interface RsiIndicatorConfiguration extends IndicatorConfiguration {
  rsiAverage?: RsiAverage;
}

export class RsiIndicator extends ConfigurableSourceIndicator {
  private previous: number;
  private smmaU: MovingAverage;
  private smmaD: MovingAverage;

  constructor(configuration = {} as RsiIndicatorConfiguration) {
    super(configuration);
    let {
      rsiAverage = RsiAverage.WILDER,
      numberOfPeriods = 14
    } = configuration;

    switch(rsiAverage) {
      case RsiAverage.WILDER:
        this.smmaU = new SmoothedMovingAverage(numberOfPeriods);
        this.smmaD = new SmoothedMovingAverage(numberOfPeriods);
        break;
      case RsiAverage.CUTLER:
        this.smmaU = new SimpleMovingAverage(numberOfPeriods);
        this.smmaD = new SimpleMovingAverage(numberOfPeriods);
        break;
      case RsiAverage.EMA:
        this.smmaU = new ExponentialMovingAverage(numberOfPeriods);
        this.smmaD = new ExponentialMovingAverage(numberOfPeriods);
        break;
      default:
        throw new TypeError(rsiAverage + ": Illegal value for rsiAverage");
    }
  }

  compute(instant: Date, value: number): number {
    let u: number, d: number;
    let rsi: number;
    if (this.previous) {
      if (value > this.previous) {
        u = value - this.previous;
        d = 0;
      } else {
        u = 0;
        d = this.previous - value;
      }
      let smmaU: number = this.smmaU.movingAverageOf(u);
      let smmaD: number = this.smmaD.movingAverageOf(d);
      let rs: number = smmaU / smmaD;
      rsi = 100 - 100/(1 + rs);
    }
    this.previous = value;
    return rsi;
  }
}
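For reference, the same Wilder recurrence as a minimal Python sketch (names and structure are mine; this mirrors the WILDER branch above rather than the library's API). The smoothed averages of the up and down moves give RS = SMMA(U)/SMMA(D) and RSI = 100 - 100/(1 + RS):

```python
def wilder_rsi(values, periods=14):
    """RSI of a price series using Wilder's smoothed moving average."""
    avg_u = avg_d = 0.0
    out = []
    for prev, curr in zip(values, values[1:]):
        u = max(curr - prev, 0.0)  # upward move
        d = max(prev - curr, 0.0)  # downward move
        # Smoothed moving average: the old average decays by (n - 1) / n.
        avg_u = (avg_u * (periods - 1) + u) / periods
        avg_d = (avg_d * (periods - 1) + d) / periods
        rs = avg_u / avg_d if avg_d else float("inf")
        out.append(100.0 - 100.0 / (1.0 + rs))
    return out

print(wilder_rsi([44.0, 44.3, 44.1, 44.5, 44.2, 44.6])[-1])  # a value in (0, 100)
```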
import base64


def obfuscate(cls, idStr):
    # base64.b64encode expects bytes in Python 3, so encode the input and
    # decode the result back to str for the caller.
    return base64.b64encode(idStr.encode("utf-8")).decode("ascii")
package wxwork

// Callback is an application callback endpoint; its traffic must be encrypted.
type Callback struct {
	// URL is the protocol and address where the enterprise application receives
	// WeChat Work push requests; both http and https are supported.
	URL string `json:"url,omitempty" xml:"url,omitempty"`
	// Token is used to generate the request signature.
	Token string `json:"token" xml:"token"`
	// EncodingAESKey is used to encrypt the message body; it is the
	// Base64 encoding of the AES key.
	EncodingAESKey string `json:"encodingaeskey" xml:"encodingaeskey"`
}
/** * Method to set up the text area */ private void initTextArea() { textArea = new JTextArea(50, 50); textArea.setLineWrap(true); textArea.addKeyListener(new textKeyListener(this)); textArea.addMouseListener(new textMouseListener(this)); }
/** * <p>Returns the saved {@link HttpResponse} for the given {@link Request}, if any.</p> * @param request The {@link Request} to retrieve from cache * @param <T> Type of object inside request * @param <I> Type of object mapped * @return Cached {@link HttpResponse} if any */ public <T, I> HttpResponse<I> retrieveFromCache(Request<T> request) { for (CachedRequest cachedRequest : cachedRequests.get(request.getUrl())) { if (cachedRequest.getRequest().equals(request)) { return ((CachedRequest<T, I>) cachedRequest).retrieveFromCache(); } } return null; }
import { Entity, KeyGenParams } from '@antjs/ant-js';
import { AntModel } from '@antjs/ant-js/build/model/ant-model';
import { AntSqlReference } from './ref/ant-sql-reference';
import { ApiSqlColumn } from '../api/api-sql-column';
import { SqlColumn } from './sql-column';
import { SqlModel } from './sql-model';
import { SqlReference } from './ref/sql-reference';
import { SqlType } from './sql-type';

export class AntSqlModel<TEntity extends Entity> extends AntModel<TEntity> implements SqlModel<TEntity> {
  /**
   * Model alias
   */
  protected _alias: string;
  /**
   * Auto generated column.
   */
  protected _autoGeneratedColumn: SqlColumn;
  /**
   * Map of table columns, including the id.
   * The key of the map is the alias of the column in the entities managed.
   * The value of the map is the column info.
   */
  protected _columns: Map<string, SqlColumn>;
  /**
   * Map of columns by type
   */
  protected _columnsByType: Map<SqlType, SqlColumn[]>;
  /**
   * Non reference columns.
   */
  protected _nonReferenceColumns: SqlColumn[];
  /**
   * Referenced columns collection.
   */
  protected _referenceColumns: SqlColumn[];
  /**
   * Map of table columns, including the id.
   * The key of the map is the alias of the column in the SQL table.
   * The value of the map is the column info.
   */
  protected _sqlColumns: Map<string, SqlColumn>;
  /**
   * SQL table name.
   */
  protected _tableName: string;

  /**
   * Constructor.
   * @param id Model's id.
   * @param keyGen Key generation config.
   * @param columns Model columns.
   * @param tableName SQL table name.
   * @param alias Model alias.
   */
  public constructor(
    id: string,
    keyGen: KeyGenParams,
    columns: Iterable<ApiSqlColumn>,
    tableName: string,
    alias?: string,
  ) {
    super(id, keyGen);
    this._alias = alias;
    this._initializeColumns(columns);
    this._tableName = tableName;
  }

  /**
   * Model's alias
   */
  public get alias(): string {
    return this._alias;
  }

  /**
   * Table columns.
   */
  public get columns(): Iterable<SqlColumn> {
    return this._columns.values();
  }

  /**
   * SQL table name.
   */
  public get tableName(): string {
    return this._tableName;
  }

  /**
   * Gets a column by its alias
   */
  public columnByAlias(alias: string): SqlColumn {
    return this._columns.get(alias);
  }

  /**
   * Gets a column by an SQL alias.
   * @param alias Alias of the column at the SQL server.
   * @returns Column.
   */
  public columnBySql(alias: string): SqlColumn {
    return this._sqlColumns.get(alias);
  }

  /**
   * Transforms an entity into a primary object.
   * @param entity Entity to process
   * @returns Primary object generated.
   */
  public entityToPrimary(entity: TEntity): any {
    const primary: any = {};
    for (const column of this._nonReferenceColumns) {
      primary[column.entityAlias] = entity[column.entityAlias];
    }
    for (const column of this._referenceColumns) {
      primary[column.entityAlias] = (entity[column.entityAlias] as SqlReference<Entity, number | string>).id;
    }
    return primary;
  }

  /**
   * Transforms an entity into a secondary object.
   * @param entity Entity to process.
   * @returns Secondary object
   */
  public entityToSecondary(entity: TEntity): any {
    const secondary: { [key: string]: any } = {};
    for (const column of this._nonReferenceColumns) {
      const entityValue = entity[column.entityAlias];
      if (undefined !== entityValue) {
        secondary[column.sqlName] = entityValue;
      }
    }
    for (const column of this._referenceColumns) {
      const entityValue = entity[column.entityAlias];
      if (undefined !== entityValue) {
        secondary[column.sqlName] = (entityValue as SqlReference<TEntity, number | string>).id;
      }
    }
    return secondary;
  }

  /**
   * Gets the auto generated column of the model.
* @returns Auto generated column of the model or null if no column is auto generated. */ public getAutoGeneratedColumn(): SqlColumn { return this._autoGeneratedColumn; } /** * Transforms multiple entities into primary objects. * @param entities Entities to process. * @returns Primary objects generated. */ public mEntityToPrimary(entities: TEntity[]): any[] { const primaries = new Array(entities.length); for (let i = 0; i < entities.length; ++i) { primaries[i] = this.entityToPrimary(entities[i]); } return primaries; } /** * Transforms multiple entities into primary objects. * @param entities Entities to process. * @returns Secondary objects generated. */ public mEntityToSecondary(entities: TEntity[]): any[] { const secondaries = new Array(entities.length); for (let i = 0; i < entities.length; ++i) { secondaries[i] = this.entityToSecondary(entities[i]); } return secondaries; } /** * @inheritdoc */ public mPrimaryToEntity(primaries: any[]): TEntity[] { const entities = new Array<TEntity>(primaries.length); for (let i = 0; i < primaries.length; ++i) { entities[i] = { ...primaries[i] }; } const dateColumns = this._columnsByType.get(SqlType.Date); if (undefined !== dateColumns) { for (const entity of entities) { for (const column of dateColumns) { (entity as any)[column.entityAlias] = new Date(entity[column.entityAlias]); } } } for (const column of this._referenceColumns) { for (const entity of entities) { (entity as any)[column.entityAlias] = new AntSqlReference(entity[column.entityAlias], column.refModel); } } return entities; } /** * Process secondary objects and generates entities from them. * @param secondaries Secondary objects to process. * @returns Entities generated. */ public mSecondaryToEntity(secondaries: any[]): TEntity[] { const entities = new Array<TEntity>(secondaries.length); for (let i = 0; i < secondaries.length; ++i) { entities[i] = this.secondaryToEntity(secondaries[i]); } return entities; } /** * @inheritdoc */ public primaryToEntity(primary: any): TEntity { // Spread operator is ridiculously fast const entity = { ...primary }; const dateColumns = this._columnsByType.get(SqlType.Date); if (undefined !== dateColumns) { for (const column of dateColumns) { entity[column.entityAlias] = new Date(entity[column.entityAlias]); } } for (const column of this._referenceColumns) { entity[column.entityAlias] = new AntSqlReference(entity[column.entityAlias], column.refModel); } return entity; } /** * Creates an entity from a secondary object. * @param secondary Secondary entity to transform * @returns Entity generated. */ public secondaryToEntity(secondary: any): TEntity { const entity: { [key: string]: any } = {}; for (const column of this._nonReferenceColumns) { entity[column.entityAlias] = secondary[column.sqlName]; } for (const column of this._referenceColumns) { entity[column.entityAlias] = new AntSqlReference(secondary[column.sqlName], column.refModel); } const booleanColumns = this._columnsByType.get(SqlType.Boolean); if (undefined !== booleanColumns) { for (const column of booleanColumns) { entity[column.entityAlias] = Boolean(entity[column.entityAlias]); } } const dateColumns = this._columnsByType.get(SqlType.Date); if (undefined !== dateColumns) { for (const column of dateColumns) { entity[column.entityAlias] = new Date(entity[column.entityAlias]); } } return entity as TEntity; } /** * Generates a sql column from an API sql column * @param column Column to process. 
*/ private _apiColumnToColumn(column: ApiSqlColumn): SqlColumn { return { autoGenerationStrategy: column.autoGenerationStrategy, entityAlias: column.entityAlias, refAlias: column.refAlias, sqlName: column.sqlName, type: column.type, }; } /** * Initializes the columns map. * @param columns Columns to set. */ private _initializeColumns(columns: Iterable<ApiSqlColumn>): void { this._autoGeneratedColumn = null; this._columns = new Map(); this._columnsByType = new Map(); this._nonReferenceColumns = new Array(); this._referenceColumns = new Array(); this._sqlColumns = new Map(); for (const column of columns) { const modelColumn: SqlColumn = this._apiColumnToColumn(column); this._initializeColumnsSetAutoGeneratedColumn(modelColumn); this._columns.set(modelColumn.entityAlias, modelColumn); this._sqlColumns.set(modelColumn.sqlName, modelColumn); this._initializeColumnsSetColumnsOfType(modelColumn); this._initializeColumnsSetReferenceColumn(column); } } /** * Process a column adding an entry to the columns by type map. * @param column Column to process. */ private _initializeColumnsSetColumnsOfType(column: SqlColumn): void { let columnsOfType = this._columnsByType.get(column.type); if (undefined === columnsOfType) { columnsOfType = new Array(); this._columnsByType.set(column.type, columnsOfType); } columnsOfType.push(column); } /** * Process a column establishing the autogenerated column. * @param column Column to process. */ private _initializeColumnsSetAutoGeneratedColumn(column: SqlColumn): void { if (null != column.autoGenerationStrategy) { if (null === this._autoGeneratedColumn) { this._autoGeneratedColumn = column; } else { throw new Error('Unexpected auto generated column. There is already an auto generated column in this model'); } } } /** * Process a column adding it to the reference columns of the model (if necessary). * @param column Column to process. */ private _initializeColumnsSetReferenceColumn(column: SqlColumn): void { if (null == column.refAlias) { this._nonReferenceColumns.push(column); } else { this._referenceColumns.push(column); } } }
/* eslint-disable react/display-name */ import React, { useState } from 'react'; import { Table, Checkbox, Button, Input } from 'antd'; import { SearchOutlined } from '@ant-design/icons'; import { Link } from 'react-router-dom'; import { Chapter, Series, Languages } from 'houdoku-extension-lib'; import { ipcRenderer } from 'electron'; import { connect, ConnectedProps } from 'react-redux'; import routes from '../../constants/routes.json'; import { sendProgressToTrackers } from '../../features/tracker/utils'; import ChapterTableContextMenu from './ChapterTableContextMenu'; import { getChapterDownloadedSync } from '../../util/filesystem'; import ipcChannels from '../../constants/ipcChannels.json'; import { RootState } from '../../store'; import { toggleChapterRead } from '../../features/library/utils'; const downloadsDir = await ipcRenderer.invoke( ipcChannels.GET_PATH.DOWNLOADS_DIR ); const mapState = (state: RootState) => ({ chapterList: state.library.chapterList, chapterLanguages: state.settings.chapterLanguages, trackerAutoUpdate: state.settings.trackerAutoUpdate, currentTask: state.downloader.currentTask, }); // eslint-disable-next-line @typescript-eslint/no-explicit-any const mapDispatch = (dispatch: any) => ({ toggleChapterRead: (chapter: Chapter, series: Series) => toggleChapterRead(dispatch, chapter, series), }); const connector = connect(mapState, mapDispatch); type PropsFromRedux = ConnectedProps<typeof connector>; type Props = PropsFromRedux & { series: Series; }; const ChapterTable: React.FC<Props> = (props: Props) => { const [showingContextMenu, setShowingContextMenu] = useState(false); const [contextMenuLocation, setContextMenuLocation] = useState<{ x: number; y: number; }>({ x: 0, y: 0, }); const [contextMenuChapter, setContextMenuChapter] = useState<Chapter | undefined>(); const [filterTitle, setFilterTitle] = useState(''); const [filterGroup, setFilterGroup] = useState(''); const getFilteredList = () => { return props.chapterList.filter( (chapter: Chapter) => props.chapterLanguages.includes(chapter.languageKey) && chapter.title.toLowerCase().includes(filterTitle) && chapter.groupName.toLowerCase().includes(filterGroup) ); }; const getNextUnreadChapter = () => { return getFilteredList() .sort( (a: Chapter, b: Chapter) => parseFloat(a.chapterNumber) - parseFloat(b.chapterNumber) ) .find((chapter: Chapter) => !chapter.read); }; const getColumnSearchProps = (dataIndex: string) => ({ filterDropdown: () => ( <div style={{ padding: 8 }}> <Input placeholder={`Filter ${dataIndex}...`} allowClear onChange={(e) => dataIndex === 'title' ? setFilterTitle(e.target.value) : setFilterGroup(e.target.value) } /> </div> ), filterIcon: () => <SearchOutlined />, }); const columns = [ { title: 'Rd', dataIndex: 'read', key: 'read', width: '5%', render: function render(_text: string, record: Chapter) { return ( <Checkbox checked={record.read} onChange={() => { props.toggleChapterRead(record, props.series); if (!record.read && props.trackerAutoUpdate) { sendProgressToTrackers(record, props.series); } }} /> ); }, }, { title: 'DL', dataIndex: 'downloaded', key: 'downloaded', width: '5%', render: function render(_text: string, record: Chapter) { return ( <Checkbox checked={getChapterDownloadedSync( props.series, record, downloadsDir )} disabled /> ); }, }, { title: '', dataIndex: 'language', key: 'language', width: '5%', render: function render(_text: string, record: Chapter) { return Languages[record.languageKey] === undefined ? 
( <></> ) : ( <div className={`flag flag-${Languages[record.languageKey].flagCode}`} /> ); }, }, { title: 'Title', dataIndex: 'title', key: 'title', width: '33%', ...getColumnSearchProps('title'), }, { title: 'Group', dataIndex: 'groupName', key: 'groupName', width: '22%', ...getColumnSearchProps('groupName'), }, { title: 'Vol', dataIndex: 'volumeNumber', key: 'volumeNumber', width: '8%', align: 'center', sorter: { compare: (a: Chapter, b: Chapter) => parseFloat(a.volumeNumber) - parseFloat(b.volumeNumber), multiple: 2, }, }, { title: 'Ch', dataIndex: 'chapterNumber', key: 'chapterNumber', defaultSortOrder: 'descend', width: '7%', align: 'center', sorter: { compare: (a: Chapter, b: Chapter) => parseFloat(a.chapterNumber) - parseFloat(b.chapterNumber), multiple: 1, }, }, { title: () => { const nextChapter: Chapter | undefined = getNextUnreadChapter(); if (nextChapter === undefined) return <></>; return ( <Link to={`${routes.READER}/${nextChapter.id}`}> <Button type="primary">Continue</Button> </Link> ); }, key: 'readButton', width: '15%', align: 'center', render: function render(_text: string, record: Chapter) { return ( <Link to={`${routes.READER}/${record.id}`}> <Button>Read</Button> </Link> ); }, }, ]; const filteredList = getFilteredList(); return ( <> <ChapterTableContextMenu location={contextMenuLocation} visible={showingContextMenu} series={props.series} chapter={contextMenuChapter} chapterList={filteredList} close={() => setShowingContextMenu(false)} /> <Table onRow={(record, _rowIndex) => { return { onClick: () => { setShowingContextMenu(false); }, onContextMenu: (event) => { setContextMenuLocation({ x: event.clientX, y: event.clientY }); setContextMenuChapter(record); setShowingContextMenu(true); }, }; }} dataSource={filteredList} // @ts-expect-error cleanup column render types columns={columns} rowKey="id" size="small" /> </> ); }; export default connector(ChapterTable);
def init_maze(ax, env, state, params):
    # Draw the occupancy map and mark the agent ("A") and the goal ("G").
    ax.imshow(env.occupied_map, cmap="Greys")
    anno_pos = ax.annotate(
        "A",
        fontsize=20,
        xy=(state.pos[1], state.pos[0]),
        xycoords="data",
        xytext=(state.pos[1] - 0.3, state.pos[0] + 0.25),
    )
    ax.annotate(
        "G",
        fontsize=20,
        xy=(state.goal[1], state.goal[0]),
        xycoords="data",
        xytext=(state.goal[1] - 0.3, state.goal[0] + 0.25),
    )
    ax.set_xticks([])
    ax.set_yticks([])
    return anno_pos


def update_maze(anno, env, state):
    # Move the agent annotation (the one returned by init_maze) to the
    # agent's new position.
    xy = (state.pos[1], state.pos[0])
    xytext = (state.pos[1] - 0.3, state.pos[0] + 0.25)
    anno.set_position((xytext[0], xytext[1]))
    anno.xy = (xy[0], xy[1])
    return anno
package com.epul.oeuvre.domains; import javax.persistence.*; import java.sql.Date; import java.util.Objects; @Entity @Table(name = "reservation", schema = "baseoeuvre", catalog = "") @IdClass(EntityReservationPK.class) public class EntityReservation { private Integer idOeuvrevente; private Integer idAdherent; private Date dateReservation; private String statut; private EntityOeuvrevente oeuvreventeByIdOeuvrevente; private EntityAdherent adherentByIdAdherent; private EntityProprietaire proprietaireByIdProprietaire; /*@Id @Column(name = "id_oeuvrevente", nullable = false) public Integer getIdOeuvrevente() { return idOeuvrevente; } public void setIdOeuvrevente(Integer idOeuvrevente) { this.idOeuvrevente = idOeuvrevente; } @Id @Column(name = "id_adherent", nullable = false) public Integer getIdAdherent() { return idAdherent; } public void setIdAdherent(Integer idAdherent) { this.idAdherent = idAdherent; }*/ @Basic @Column(name = "date_reservation", nullable = false) public Date getDateReservation() { return dateReservation; } public void setDateReservation(Date dateReservation) { this.dateReservation = dateReservation; } @Basic @Column(name = "statut", nullable = false, length = 20) public String getStatut() { return statut; } public void setStatut(String statut) { this.statut = statut; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; EntityReservation that = (EntityReservation) o; return Objects.equals(idOeuvrevente, that.idOeuvrevente) && Objects.equals(idAdherent, that.idAdherent) && Objects.equals(dateReservation, that.dateReservation) && Objects.equals(statut, that.statut); } @Override public int hashCode() { return Objects.hash(idOeuvrevente, idAdherent, dateReservation, statut); } @Id @ManyToOne @JoinColumn(name = "id_oeuvrevente", referencedColumnName = "id_oeuvrevente", nullable = false) public EntityOeuvrevente getOeuvreventeByIdOeuvrevente() { return oeuvreventeByIdOeuvrevente; } public void setOeuvreventeByIdOeuvrevente(EntityOeuvrevente oeuvreventeByIdOeuvrevente) { this.oeuvreventeByIdOeuvrevente = oeuvreventeByIdOeuvrevente; } @Id @ManyToOne @JoinColumn(name = "id_adherent", referencedColumnName = "id_adherent", nullable = false) public EntityAdherent getAdherentByIdAdherent() { return adherentByIdAdherent; } public void setAdherentByIdAdherent(EntityAdherent adherentByIdAdherent) { this.adherentByIdAdherent = adherentByIdAdherent; } @ManyToOne @JoinColumn(name = "id_proprietaire", referencedColumnName = "id_proprietaire", nullable = false) public EntityProprietaire getProprietaireByIdProprietaire() { return proprietaireByIdProprietaire; } public void setProprietaireByIdProprietaire(EntityProprietaire proprietaireByIdProprietaire) { this.proprietaireByIdProprietaire = proprietaireByIdProprietaire; } }
package message

import (
	"CommonModule"
	"fmt"
	"time"
)

// DownloadServiceLogMessage is the message used to fetch service component information.
type DownloadServiceLogMessage struct {
	BaseMessageInfo
	Service string // service name
}

// NewDownloadServiceLogMessage creates the message.
func NewDownloadServiceLogMessage(name string, pri common.Priority, tra TransType) (msg *DownloadServiceLogMessage) {
	MessageId++
	return &DownloadServiceLogMessage{BaseMessageInfo: BaseMessageInfo{id: MessageId, priority: pri, trans: tra, birthday: time.Now()}, Service: name}
}

func (s *DownloadServiceLogMessage) String() string {
	return fmt.Sprintf("service:%s, %s", s.Service, s.BaseMessageInfo.String())
}

type DownloadServiceLogResponse struct {
	BaseResponseInfo
	Log string // download path of the log
}

// NewDownloadServiceLogResponse creates the response.
func NewDownloadServiceLogResponse(log string, msg BaseMessage) (response *DownloadServiceLogResponse) {
	resp := DownloadServiceLogResponse{BaseResponseInfo: BaseResponseInfo{message: msg}, Log: log}
	return &resp
}

func (s *DownloadServiceLogResponse) String() (result string) {
	result = fmt.Sprintf("log:%s, %s", s.Log, s.BaseResponseInfo.String())
	return
}
import java.io.*; import java.util.*; import java.lang.Math; public class Problem { public static Cup[] solve(int n, int w, int k){ Bottle[] bottles = new Bottle[n]; Cup[] cups = new Cup[k]; double totalvolume = (double)n*w/(double)k; double lastvolume = 0; double currvolume = 0; double eps = 1e-7; for(int i = 0; i<bottles.length; i++){ Bottle b = new Bottle(); b.w = w; bottles[i] = b; } for(int i = 0; i<cups.length; i++){ cups[i] = new Cup(); } for(Cup c : cups){ if(c.totalvolume < totalvolume){ for(int j = 0; j<bottles.length; j++) { Bottle b = bottles[j]; if(b.w > 0 && (b.c1 == null || b.c2 == null) && b.c1 != c && b.c2 != c){ double newvol = Math.min(totalvolume-c.totalvolume, b.w); if(newvol - eps > 0){ if(b.c1 == null) b.c1 = c; else b.c2 = c; c.bottles.add(j+1); c.volumes.add(newvol); c.totalvolume+=newvol; b.w-=newvol; } } } } } for(Bottle b : bottles){ lastvolume+=b.w; } if(lastvolume>eps) cups = null; return cups; } public static void main(String args[] ){ BufferedReader in = new BufferedReader( new InputStreamReader(System.in) ); try { Cup[] res = null; int n,w,m; StringTokenizer st; String buffer = in.readLine(); st = new StringTokenizer(buffer); n = Integer.valueOf(st.nextToken()); w = Integer.valueOf(st.nextToken()); m = Integer.valueOf(st.nextToken()); res = solve(n,w,m); if(res==null){ System.out.println("NO"); } else { System.out.println("YES"); for(Cup c : res){ System.out.println(c); } } } catch (Exception e) { System.out.println(e); e.printStackTrace(); } } } class Bottle { public Bottle() { w = 0; c1 = null; c2 = null; } public double w; public Cup c1; public Cup c2; } class Cup { public Cup(){ bottles = new LinkedList<Integer>(); volumes = new LinkedList<Double>(); totalvolume = 0; } public String toString(){ StringBuffer res = new StringBuffer(); for(int i = 0; i<bottles.size(); i++){ res.append(bottles.get(i)) .append(" ") .append(volumes.get(i)) .append(" "); } return res.toString(); } public List<Integer> bottles; public List<Double> volumes; public double totalvolume; }
/** Parse a {@link String} representation of a Bitcoin monetary value. If this
 *  object's pattern includes a currency sign, either symbol or code, as by default is true
 *  for instances of {@link BtcAutoFormat} and false for instances of {@link
 *  BtcFixedFormat}, then denominated (i.e., prefixed) currency signs in the parsed String
 *  will be recognized, and the parsed number will be interpreted as a quantity of units
 *  having that recognized denomination.
 *  <p>If the pattern includes a currency sign but no currency sign is detected in the parsed
 *  String, then the number is interpreted as a quantity of bitcoins.
 *  <p>If the pattern contains neither a currency symbol nor sign, then instances of {@link
 *  BtcAutoFormat} will interpret the parsed number as a quantity of bitcoins, and instances
 *  of {@link BtcFixedFormat} will interpret the number as a quantity of that instance's
 *  configured denomination, which can be ascertained by invoking the {@link
 *  BtcFixedFormat#symbol()} or {@link BtcFixedFormat#code()} method.
 *
 *  <p>Consider using the single-argument version of this overloaded method unless you need to
 *  keep track of the current parse position.
 *
 *  @return a Coin object representing the parsed value
 *  @see java.text.ParsePosition
 */
public Coin parse(String source, ParsePosition pos) {
    DecimalFormatSymbols anteSigns = null;
    int parseScale = COIN_SCALE;
    Coin coin = null;
    synchronized (numberFormat) {
        if (numberFormat.toPattern().contains("¤")) {
            for (ScaleMatcher d : denomMatchers()) {
                Matcher matcher = d.pattern.matcher(source);
                if (matcher.find()) {
                    anteSigns = setSymbolAndCode(numberFormat, matcher.group());
                    parseScale = d.scale;
                    break;
                }
            }
            if (parseScale == COIN_SCALE) {
                Matcher matcher = coinPattern.matcher(source);
                matcher.find();
                anteSigns = setSymbolAndCode(numberFormat, matcher.group());
            }
        } else parseScale = scale();

        Number number = numberFormat.parse(source, pos);
        if (number != null) try {
            coin = Coin.valueOf(
                ((BigDecimal)number).movePointRight(offSatoshis(parseScale)).setScale(0, HALF_UP).longValue()
            );
        } catch (IllegalArgumentException e) {
            pos.setIndex(0);
        }
        if (anteSigns != null) numberFormat.setDecimalFormatSymbols(anteSigns);
    }
    return coin;
}
// InstallPath returns unique filename for bitstream relative to given directory func (f *FileAOCX) InstallPath(root string) (ret string) { interfaceID := f.InterfaceUUID() uniqID := f.UniqueUUID() if interfaceID != "" && uniqID != "" { ret = filepath.Join(root, interfaceID, uniqID+fileExtensionAOCX) } return }
from typing import List, Optional

import numpy as np
from scipy.special import softmax  # assuming SciPy's softmax; it supports `axis`


def sample_categorical(X_hat_cat_split: List[np.ndarray],
                       C_cat_split: Optional[List[np.ndarray]]) -> List[np.ndarray]:
    X_ohe_hat_cat = []
    rows = np.arange(X_hat_cat_split[0].shape[0])

    for i in range(len(X_hat_cat_split)):
        # Turn the logits into probabilities, masked by the conditioning
        # array for this categorical head if one is given.
        proba = softmax(X_hat_cat_split[i], axis=1)
        proba = proba * C_cat_split[i] if (C_cat_split is not None) else proba

        # Pick the most probable category per row and one-hot encode it.
        cols = np.argmax(proba, axis=1)
        samples = np.zeros_like(proba)
        samples[rows, cols] = 1
        X_ohe_hat_cat.append(samples)

    return X_ohe_hat_cat
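A quick usage sketch under the assumptions above (the numbers are illustrative): with two categorical heads and no conditioning mask, each row of each head's output is a one-hot vector at the per-row argmax of the softmax.

```python
logits_a = np.array([[2.0, 0.5, 0.1],
                     [0.1, 3.0, 0.2]])  # head with three categories
logits_b = np.array([[1.0, 1.5],
                     [0.3, 0.1]])       # head with two categories

out = sample_categorical([logits_a, logits_b], None)
print(out[0])  # [[1. 0. 0.] [0. 1. 0.]] -- one-hot at the per-row argmax
print(out[1])  # [[0. 1.] [1. 0.]]
```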
package com.mdd.service; import javax.ws.rs.*; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import java.util.List; @Path("/api") public class DevFestResource { private NameGenerator _generator = new NameGenerator(); @GET @Path("hello") @Produces(MediaType.TEXT_PLAIN) public String getHello() { return "Bazinga"; } @GET @Path("name/{gender}") @Produces(MediaType.APPLICATION_JSON) public Person getSingleNameByGender(@PathParam("gender") String gender) { if(gender != null && !gender.isEmpty() && genderValid(gender)) { return _generator.getRandomName(gender); } else { throw new WebApplicationException(Response.Status.BAD_REQUEST); } } private boolean genderValid(String gender){ if(gender.length() != 1) return false; if(gender.equalsIgnoreCase("m")) return true; if(gender.equalsIgnoreCase("f")) return true; return false; } @GET @Path("name") @Produces(MediaType.APPLICATION_JSON) public Person getSingleName() { return _generator.getRandomName(); } @GET @Path("xname") @Produces(MediaType.APPLICATION_XML) public Person getSingleNameXml() { return _generator.getRandomName(); } @GET @Path("names/{count}") @Produces(MediaType.APPLICATION_JSON) public List<Person> getPersonList(@PathParam("count") int count) { if(!countValid(count)){ throw new WebApplicationException(Response.Status.BAD_REQUEST); } return _generator.getRandomPersons(count); } private boolean countValid(int count) { return count > 0 && count < 101; } }
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('pyqlabpath', help='path to PyQLab directory')
parser.add_argument('qubit', help='qubit name')
parser.add_argument('stop', help='longest delay in ns', type=float)
parser.add_argument('step', help='delay step in ns', type=float)
args = parser.parse_args()

from QGL import *

q = QubitFactory(args.qubit)
T1Stop = args.stop
T1Step = args.step
InversionRecovery(q, np.arange(0, T1Stop/1e9, T1Step/1e9), suffix=True)
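For reference, a hypothetical invocation of the script above (the file name and values are illustrative only; both delays are given in nanoseconds, per the argparse help):

# python T1.py ~/PyQLab q1 200000 2000
# sweeps the inversion-recovery delay for qubit "q1" from 0 to 200 us in 2 us steps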
LONDON—Policies that promote gender equality, safeguards against violence and exploitation and access to healthcare make Canada the best place to be a woman among the world’s biggest economies, a global poll of experts showed on Wednesday. Infanticide, child marriage and slavery make India the worst, the same poll concluded.

[Photo caption: A young woman has her hair braided by her mother in Chennai, India, in this 2011 file photo. Philip Brown/Reuters]

Germany, Britain, Australia and France rounded out the top five countries out of the Group of 20 in a perceptions poll of 370 gender specialists conducted by TrustLaw, a legal news service run by the Thomson Reuters Foundation. The United States came in sixth but polarized opinion due to concerns about reproductive rights and affordable healthcare. At the other end of the scale, Saudi Arabia — where women are well educated but are banned from driving and only won the right to vote in 2011 — polled second-worst after India, followed by Indonesia, South Africa and Mexico. “India is incredibly poor, Saudi Arabia is very rich. But there is a commonality and that is that unless you have some special access to privilege, you have a very different future, depending on whether you have an extra X chromosome, or a Y chromosome,” said Nicholas Kristof, journalist and co-author of “Half the Sky: Turning Oppression into Opportunity for Women Worldwide,” commenting on the poll results. The poll, released ahead of a summit of G20 heads of state to be held in Mexico June 18-19, showed the reality for many women in many countries remains grim despite the introduction of laws and treaties on women’s rights, experts said. “In India, women and girls continue to be sold as chattels, married off as young as 10, burned alive as a result of dowry-related disputes and young girls exploited and abused as domestic slave labour,” said Gulshun Rehman, health programme development adviser at Save the Children UK, who was one of those polled. “This is despite a groundbreakingly progressive Domestic Violence Act enacted in 2005 outlawing all forms of violence against women and girls.” TrustLaw asked aid professionals, academics, health workers, policymakers, journalists and development specialists with expertise in gender issues to rank the 19 countries of the G20 in terms of the overall best and worst to be a woman. They also ranked countries in six categories: quality of health, freedom from violence, participation in politics, work place opportunities, access to resources such as education and property rights and freedom from trafficking and slavery. Respondents came from 63 countries on five continents and included experts from United Nations Women, the International Rescue Committee, Plan International, Amnesty USA and Oxfam International, as well as prominent academic institutions and campaigning organizations. Representatives of faith-based organizations were also surveyed. The EU, which is a member of the G20 as an economic grouping along with several of its constituent countries, was not included in the survey. Canada was perceived to be getting most things right in protecting women’s well-being and basic freedoms. 
“While we have much more to do, women have access to healthcare, we place a premium on education, which is the first step toward economic independence and we have laws that protect girls and women and don’t allow for child marriage,” said Farah Mohamed, president and CEO of the Canada-based G(irls) 20 Summit, which organized a youth gathering that took place in Mexico in May, ahead of the G20 leaders’ meeting.

Experts were divided on the situation in the United States. Civil rights and domestic violence laws, access to education, workplace opportunities and freedom of movement and speech were positive. But access to contraception and abortion were being curtailed and women suffered disproportionately from a lack of access to affordable healthcare, some experts said. “Many of the gains of the last 100 years are under attack and the most overt and vicious attack is on reproductive rights,” said Marsha Freeman, director of International Women’s Rights Action Watch.

BARRIERS TO DEVELOPMENT

It is more vital than ever to protect women’s freedoms at a time of political upheaval in several parts of the world, some experts said. “Times of political transition, we’ve learned the hard way, can also be times of fragility, and when rights for women and girls can be rolled back instead of advanced,” said Minky Worden, director of global initiatives at Human Rights Watch. Women’s rights are particularly under attack in G20 host country Mexico, which ranked 15th in the survey. Mexico has a culture of male chauvinism, high rates of physical and sexual violence and pockets of poverty where healthcare and other services are no better than in some of the most marginalized communities of Africa, experts said. Women are also victims of drug-related crime. Some 300 women were killed in 2011 in the violent border town of Ciudad Juarez with almost total impunity, said Amnesty USA. “The violence affects men and women but often women disproportionately,” added Worden. “Mexico is a place where law enforcement remains a challenge, and the government has an obligation to protect women, but often fails in that obligation, as it does to protect men.”

Putting women’s rights on the global agenda is the key to progress and to effective development, said Kristof. Countries that restrict women’s rights and freedoms or fail to protect them from injustices will suffer long-term, socially and economically, he added. While the poll was based on perceptions and not statistics, U.N. data supports the experts’ views. The Gender Inequality Index (GII), which looks at reproductive health, the labour market and empowerment of women through education and politics, named the same three countries as the worst places for women, although Saudi Arabia ranked the absolute worst in the GII, followed by India. The GII, however, does not include gender-based violence or other elements such as the fact that many women carry additional burdens of caregiving and housekeeping. When it came to what country was best, the expert perception did not match U.N. data. The GII ranked Germany, France and South Korea as the top three countries, in that order. Canada came seventh and the United States was in tenth place. Activists were not surprised by the experts’ favourable view of Canada, however. 
“Having an understanding of Canadian culture and tracking the work they’re doing around violence against women and gender equality, I believe that Canada really has been emerging as a model for what most countries should aspire to for a long time,” said Jimmie Briggs, journalist, author and founder of the Man Up Campaign that works to engage youth to stop violence against women and girls.

HOW THEY RANK

1. Canada
2. Germany
3. Britain
4. Australia
5. France
6. United States
7. Japan
8. Italy
9. Argentina
10. South Korea
11. Brazil
12. Turkey
13. Russia
14. China
15. Mexico
16. South Africa
17. Indonesia
18. Saudi Arabia
19. India
import numpy as np


def generate_vectranspose_matrix(n, k):
    """Build the (n*k) x (n*k) vec-transpose (commutation) matrix Tnk.

    Tnk maps the row-major vectorisation of an n-by-k matrix A to the
    row-major vectorisation of its transpose: Tnk @ vec(A) = vec(A.T).
    """
    indices = np.arange(n * k).reshape((n, k))
    ind_vec = np.reshape(indices, (-1, 1))      # vec(A) ordering
    ind_T_vec = np.reshape(indices.T, (-1, 1))  # vec(A.T) ordering
    Tnk = np.zeros((n * k, n * k))
    for i in range(n * k):
        # Row i selects the entry of vec(A) that lands at position i of vec(A.T);
        # np.where(...)[0] takes only the matching row index of the comparison.
        Tnk[i, np.where(ind_vec == ind_T_vec[i])[0]] = 1
    return Tnk
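A quick property check (my own addition): applying the matrix to the row-major vectorisation of a random n-by-k matrix must yield the vectorisation of its transpose.

import numpy as np

n, k = 3, 2
T = generate_vectranspose_matrix(n, k)
A = np.random.rand(n, k)
# T maps vec(A) (row-major) to vec(A.T).
assert np.allclose(T @ A.reshape(-1, 1), A.T.reshape(-1, 1))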
package com.cts.training.lamda;

public class LamdaExDemo {
    public static void main(String[] args) {
        Hello h = (a, b) -> a + b;
        System.out.println(h.add(10, 20));
    }
}

@FunctionalInterface
interface Hello {
    int add(int a, int b);
}
#include "plumber.h" int sporth_comb(sporth_stack *stack, void *ud) { plumber_data *pd = ud; SPFLOAT input; SPFLOAT out; SPFLOAT looptime; SPFLOAT revtime; sp_comb *comb; switch(pd->mode) { case PLUMBER_CREATE: #ifdef DEBUG_MODE fprintf(stderr, "comb: Creating\n"); #endif sp_comb_create(&comb); plumber_add_ugen(pd, SPORTH_COMB, comb); if(sporth_check_args(stack, "fff") != SPORTH_OK) { fprintf(stderr,"Not enough arguments for comb\n"); stack->error++; return PLUMBER_NOTOK; } looptime = sporth_stack_pop_float(stack); revtime = sporth_stack_pop_float(stack); input = sporth_stack_pop_float(stack); sporth_stack_push_float(stack, 0); break; case PLUMBER_INIT: #ifdef DEBUG_MODE fprintf(stderr, "comb: Initialising\n"); #endif looptime = sporth_stack_pop_float(stack); revtime = sporth_stack_pop_float(stack); input = sporth_stack_pop_float(stack); comb = pd->last->ud; sp_comb_init(pd->sp, comb, looptime); sporth_stack_push_float(stack, 0); break; case PLUMBER_COMPUTE: looptime = sporth_stack_pop_float(stack); revtime = sporth_stack_pop_float(stack); input = sporth_stack_pop_float(stack); comb = pd->last->ud; comb->revtime = revtime; sp_comb_compute(pd->sp, comb, &input, &out); sporth_stack_push_float(stack, out); break; case PLUMBER_DESTROY: comb = pd->last->ud; sp_comb_destroy(&comb); break; default: fprintf(stderr, "comb: Unknown mode!\n"); break; } return PLUMBER_OK; }
import json

try:
    import yaml  # optional dependency; assumed to be imported this way in the original module
except ImportError:
    yaml = None


def parse(u):
    if yaml is None:
        # See http://code.google.com/p/simplejson/issues/detail?id=40 and the
        # documentation of simplejson.loads(): "If s is a str then decoded
        # JSON strings that contain only ASCII characters may be parsed as
        # str for performance and memory reasons. If your code expects only
        # unicode the appropriate solution is decode s to unicode prior to
        # calling loads."
        return json.loads(u)

    # Otherwise, use yaml.
    def code_constructor(loader, node):
        value = loader.construct_mapping(node)
        return eval(value['python'], {})

    yaml.add_constructor(u'!code', code_constructor)
    return yaml.load(u)
def priority(self): if self.name.endswith( "_BIT" ): bias = 1 else: bias = 0 if self.category.startswith( "GL_VERSION_" ): priority = 0 elif self.category.startswith( "GL_ARB_" ): priority = 2 elif self.category.startswith( "GL_EXT_" ): priority = 4 else: priority = 6 return priority + bias
import { Context, getUserId, AuthError } from '../../utils' export const note = async (_, { id }, ctx: Context, info) => { const userId = getUserId(ctx) const hasPermission = await ctx.db.exists.Note({ id, owner: { id: userId } }) if (!hasPermission) { throw new AuthError() } return await ctx.db.query.note({ where: { id } }) }
#pragma once

class CDestroyablePhysicsObject :
	public CPhysicObject,
	public CPHDestroyable,
	public CPHCollisionDamageReceiver,
	public CHitImmunity,
	public CDamageManager
{
	typedef CPhysicObject inherited;
	float m_fHealth;
	ref_sound m_destroy_sound;
	shared_str m_destroy_particles;
public:
	CDestroyablePhysicsObject () ;
	virtual ~CDestroyablePhysicsObject () ;
	virtual CPhysicsShellHolder* PPhysicsShellHolder () ;
	virtual BOOL net_Spawn (CSE_Abstract* DC) ;
	virtual void net_Destroy () ;
	virtual void Hit (SHit* pHDS);
	virtual void InitServerObject (CSE_Abstract* D) ;
	virtual CPHCollisionDamageReceiver *PHCollisionDamageReceiver () {return static_cast<CPHCollisionDamageReceiver*>(this);}
	virtual DLL_Pure *_construct () ;
	virtual CPhysicsShellHolder* cast_physics_shell_holder () {return this;}
	virtual CParticlesPlayer* cast_particles_player () {return this;}
	virtual CPHDestroyable* ph_destroyable () {return this;}
	virtual void shedule_Update (u32 dt) ;
	virtual bool CanRemoveObject () ;
	virtual void OnChangeVisual ();
protected:
	void Destroy () ;
private:
};
/**
 * List displaying fragment.
 */
public class ListMenuFragment extends AbstractMenuNavigatorFragment {

    public ListMenuFragment() {
        super();
    }

    private class ListMenuAdapter extends BaseAdapter {

        private final ListMenu listMenu;
        private final LayoutInflater inflater;

        public ListMenuAdapter(final ListMenu listMenu, final LayoutInflater inflater) {
            super();
            this.listMenu = listMenu;
            this.inflater = inflater;
        }

        @Override
        public int getCount() {
            return listMenu.items.length;
        }

        @Override
        public AbstractNavigationMenu getItem(final int position) {
            return listMenu.items[position];
        }

        @Override
        public long getItemId(final int position) {
            return position;
        }

        @Override
        public View getView(final int position, final View convertView, final ViewGroup parent) {
            final View listItemView = inflater.inflate(R.layout.single_list_item_layout, null);
            final AbstractNavigationMenu menu = getItem(position);
            final ImageView imageView = (ImageView) listItemView.findViewById(R.id.list_item_image);
            final TextView textView = (TextView) listItemView.findViewById(R.id.list_item_text);
            if (menu.iconFile == null) {
                imageView.setVisibility(View.GONE);
            } else {
                final Bitmap bitmap = bitmapReader.getBitmap(menu.iconFile);
                if (bitmap == null) {
                    imageView.setVisibility(View.GONE);
                } else {
                    if (menu.isDisabled()) {
                        final Bitmap bitmapGray = bitmapReader.getGrayBitmap(menu.iconFile);
                        imageView.setImageBitmap(bitmapGray);
                    } else {
                        imageView.setImageBitmap(bitmap);
                    }
                }
            }
            textView.setText(menu.name);
            if (menu.isDisabled()) {
                textView.setTextColor(Color.GRAY);
            }
            if (!menu.isDisabled()) {
                listItemView.setOnClickListener(new MenuNavigatorOnClickListener(menu));
            }
            return listItemView;
        }
    }

    private class LatestListAdapter extends BaseAdapter {

        // the most recently chosen transaction menus for this list
        private final List<AbstractTransactionMenu> latestList;
        private final LayoutInflater inflater;

        /**
         * @param name
         *            name of the menu whose latest chosen transactions are loaded
         */
        public LatestListAdapter(final String name, final TextView latestText, final LayoutInflater inflater) {
            this.inflater = inflater;
            final Persistence persistence = new Persistence(getActivity());
            latestList = persistence.getLatestList(name);
            if (latestList.isEmpty()) {
                latestText.setVisibility(View.GONE);
            }
        }

        @Override
        public int getCount() {
            return latestList.size();
        }

        @Override
        public Object getItem(final int position) {
            return latestList.get(position);
        }

        @Override
        public long getItemId(final int position) {
            return position;
        }

        @Override
        public View getView(final int position, final View convertView, final ViewGroup parent) {
            final AbstractTransactionMenu menu = latestList.get(position);
            String shortcut = menu.shortcut;
            if (shortcut == null) {
                shortcut = menu.description;
            }
            if (shortcut == null) {
                shortcut = menu.name;
            }
            final View listItemView = inflater.inflate(R.layout.single_list_item_layout, null);
            final ImageView imageView = (ImageView) listItemView.findViewById(R.id.list_item_image);
            final TextView textView = (TextView) listItemView.findViewById(R.id.list_item_text);
            imageView.setVisibility(View.GONE);
            textView.setText(shortcut);
            listItemView.setOnClickListener(new MenuNavigatorOnClickListener(menu));
            return listItemView;
        }
    }

    @Override
    public ListMenu getNavigationMenu() {
        return (ListMenu) super.getNavigationMenu();
    }

    @Override
    public View onCreateView(final LayoutInflater inflater, final ViewGroup container, final Bundle savedInstanceState) {
        if (getNavigationMenu() == null) {
            return null;
        }
        final ViewGroup listViewGroup = (ViewGroup) 
inflater.inflate(R.layout.list_fragment_layout, container, false); final ListView listView = (ListView) listViewGroup.findViewById(R.id.listView); final TextView latestText = (TextView) listViewGroup.findViewById(R.id.latestChoosenTextView); final ListView latestChoosenListView = (ListView) listViewGroup.findViewById(R.id.latestChoosenListView); final ListMenu listMenu = getNavigationMenu(); listView.setAdapter(new ListMenuAdapter(listMenu, inflater)); AbstractNavigationMenu menu = listMenu; while (menu.parent != null && menu.parent.parent != null) { menu = menu.parent; } latestChoosenListView.setAdapter(new LatestListAdapter(menu.name, latestText, inflater)); return listViewGroup; } }
/**
 *
 * @author Boris Heithecker
 */
@XmlRootElement(name = "NotenAssessmentContext")
@XmlAccessorType(value = XmlAccessType.FIELD)
@XmlType(propOrder = {
    "rangeMaximum",
    "floorValues",
    "marginModel",
    "marginValue",
    "defaultDistribution"
})
public class NotenAssessmentXmlAdapter {

    public static final String LOCALNAME = "NotenAssessmentContext";

    public NotenAssessmentXmlAdapter() {
    }

    public NotenAssessmentXmlAdapter(Int2 rangeMaximum, Int2[] floorValues, String marginModel, Int2 marginValue, String defaultDist) {
        this.rangeMaximum = rangeMaximum;
        this.floorValues = floorValues;
        this.marginModel = marginModel;
        this.marginValue = marginValue;
        this.defaultDistribution = defaultDist;
    }

    public Int2 getRangeMaximum() {
        return rangeMaximum;
    }

    public Int2[] getFloorValues() {
        return floorValues;
    }

    public String getMarginModel() {
        return marginModel;
    }

    public Int2 getMarginValue() {
        return marginValue;
    }

    public String getDefaultDistribtution() {
        return defaultDistribution;
    }

    public void setDefaultDistribtution(String defaultDistribtution) {
        this.defaultDistribution = defaultDistribtution;
    }

    @XmlJavaTypeAdapter(value = Int2Adapter.class)
    private Int2 rangeMaximum;

    @XmlList // produces an error with JAXB
    @XmlJavaTypeAdapter(value = Int2Adapter.class)
    private Int2[] floorValues;

    private String marginModel;

    @XmlJavaTypeAdapter(value = Int2Adapter.class)
    private Int2 marginValue;

    private String defaultDistribution;
}
package solver

import (
	"github.com/mokiat/gomath/sprec"

	"github.com/mokiat/lacking/game/physics"
)

var _ physics.DBConstraintSolver = (*MatchRotation)(nil)

// NewMatchRotation creates a new MatchRotation constraint solver.
func NewMatchRotation() *MatchRotation {
	return &MatchRotation{
		xAxis: NewMatchAxis().
			SetPrimaryAxis(sprec.BasisXVec3()).
			SetSecondaryAxis(sprec.BasisXVec3()),
		yAxis: NewMatchAxis().
			SetPrimaryAxis(sprec.BasisYVec3()).
			SetSecondaryAxis(sprec.BasisYVec3()),
	}
}

// MatchRotation represents the solution for a constraint
// that keeps two bodies oriented in the same direction on
// all axes.
type MatchRotation struct {
	xAxis *MatchAxis
	yAxis *MatchAxis
}

func (r *MatchRotation) Reset(ctx physics.DBSolverContext) {
	r.xAxis.Reset(ctx)
	r.yAxis.Reset(ctx)
}

func (r *MatchRotation) CalculateImpulses(ctx physics.DBSolverContext) physics.DBImpulseSolution {
	xSolution := r.xAxis.CalculateImpulses(ctx)
	ySolution := r.yAxis.CalculateImpulses(ctx)
	return physics.DBImpulseSolution{
		Primary: physics.SBImpulseSolution{
			Impulse:        sprec.Vec3Sum(xSolution.Primary.Impulse, ySolution.Primary.Impulse),
			AngularImpulse: sprec.Vec3Sum(xSolution.Primary.AngularImpulse, ySolution.Primary.AngularImpulse),
		},
		Secondary: physics.SBImpulseSolution{
			Impulse:        sprec.Vec3Sum(xSolution.Secondary.Impulse, ySolution.Secondary.Impulse),
			AngularImpulse: sprec.Vec3Sum(xSolution.Secondary.AngularImpulse, ySolution.Secondary.AngularImpulse),
		},
	}
}

func (r *MatchRotation) CalculateNudges(ctx physics.DBSolverContext) physics.DBNudgeSolution {
	xSolution := r.xAxis.CalculateNudges(ctx)
	ySolution := r.yAxis.CalculateNudges(ctx)
	return physics.DBNudgeSolution{
		Primary: physics.SBNudgeSolution{
			Nudge:        sprec.Vec3Sum(xSolution.Primary.Nudge, ySolution.Primary.Nudge),
			AngularNudge: sprec.Vec3Sum(xSolution.Primary.AngularNudge, ySolution.Primary.AngularNudge),
		},
		Secondary: physics.SBNudgeSolution{
			Nudge:        sprec.Vec3Sum(xSolution.Secondary.Nudge, ySolution.Secondary.Nudge),
			AngularNudge: sprec.Vec3Sum(xSolution.Secondary.AngularNudge, ySolution.Secondary.AngularNudge),
		},
	}
}
#include <stdio.h>
#include <unistd.h>
#include "src/HPSocket4C.h"

En_HP_HandleResult OnReceive(HP_Server pSender, HP_CONNID dwConnID, const BYTE* pData, int iLength)
{
	printf("[ConnID:%d, Length: %d] %s\n", dwConnID, iLength, pData);
	return HR_OK;
}

int main(int argc, char* const argv[])
{
	printf("HP-Socket 5.2.2 for Android\n");

	HP_TcpPackServerListener listener = Create_HP_TcpPackServerListener();
	HP_TcpPackServer svr = Create_HP_TcpPackServer(listener);

	HP_Set_FN_Server_OnReceive(listener, OnReceive);

	HP_TcpPackServer_SetMaxPackSize(svr, 0x40000);
	HP_TcpPackServer_SetPackHeaderFlag(svr, 0x169);

	if (HP_Server_Start(svr, "0.0.0.0", 5555))
	{
		printf("Server started successfully, listening on 0.0.0.0:5555\n");
		while(1)
		{
			sleep(1000);
		}
	}
	else
	{
		printf("Server start failed, error code: %d\n", HP_Server_GetLastError(svr));
	}
	return 0;
}
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <libxml++/libxml++.h>

extern "C" {
#include "lualib.h"
#include "lauxlib.h"
}

#include <lizards.h>

#include "local.h"

/* ----------------------------------------------------------------------
--
-- map
--
---------------------------------------------------------------------- */

map_t* toMap(lua_State *L, int index)
{
  map_t **mapp = (map_t**)lua_touserdata(L, index);
  if (mapp == NULL) luaL_typerror(L, index, MAP);
  return *mapp;
}

int Map_width(lua_State *L)
{
  map_t *map = toMap(L, 1);
  lua_pushnumber(L, map->width);
  return 1;
}

int Map_height(lua_State *L)
{
  map_t *map = toMap(L, 1);
  lua_pushnumber(L, map->height);
  return 1;
}

int Map_hexes(lua_State *L)
{
  map_t *map = toMap(L, 1);
  lua_newtable(L);
  int t = lua_gettop(L);
  int i, j, k = 0;
  for (j = 0; j < map->height; j++)
    for (i = 0; i < map->width; i++)
      {
	lua_pushnumber(L, k);
	hex_t *hex = (*map)(i, j);
	hex_t **mapp = (hex_t**)lua_newuserdata(L, sizeof(hex_t*));
	*mapp = hex;
	luaL_getmetatable(L, HEX);
	lua_setmetatable(L, -2);
	lua_settable(L, t);
	k++;
      }
  return 1;
}

int Map_hex(lua_State *L)
{
  hex_t *hex;
  if (lua_gettop(L) != 3)
    return 0;
  map_t *map = toMap(L, 1);
  int x = lua_tonumber(L, 2);
  int y = lua_tonumber(L, 3);
  if ((hex = (*map)(x,y)))
    {
      hex_t **mapp = (hex_t**)lua_newuserdata(L, sizeof(hex_t*));
      *mapp = hex;
      luaL_getmetatable(L, HEX);
      lua_setmetatable(L, -2);
      return 1;
    }
  return 0;
}

static int Map_tostring (lua_State *L)
{
  map_t *map = toMap(L, 1);
  lua_pushfstring(L, "Map(%d x %d)", map->width, map->height);
  return 1;
}

void Map_register(lua_State *L)
{
  static const luaL_reg Map_methods[] = {
    {"width", Map_width},
    {"height", Map_height},
    {"hexes", Map_hexes},
    {"hex", Map_hex},
    {0, 0},
  };
  static const luaL_reg Map_meta[] = {
    {"__tostring", Map_tostring},
    {0, 0}
  };
  Meta_register(L, MAP, Map_methods, Map_meta, 0);
}
#ifndef GAMEPAD_H
#define GAMEPAD_H

#include <Windows.h>
#include <Xinput.h>

// XInput Button values
static const WORD XINPUT_Buttons[] = {
      XINPUT_GAMEPAD_A,
      XINPUT_GAMEPAD_B,
      XINPUT_GAMEPAD_X,
      XINPUT_GAMEPAD_Y,
      XINPUT_GAMEPAD_DPAD_UP,
      XINPUT_GAMEPAD_DPAD_DOWN,
      XINPUT_GAMEPAD_DPAD_LEFT,
      XINPUT_GAMEPAD_DPAD_RIGHT,
      XINPUT_GAMEPAD_LEFT_SHOULDER,
      XINPUT_GAMEPAD_RIGHT_SHOULDER,
      XINPUT_GAMEPAD_LEFT_THUMB,
      XINPUT_GAMEPAD_RIGHT_THUMB,
      XINPUT_GAMEPAD_START,
      XINPUT_GAMEPAD_BACK
};

// XInput Button IDs
struct XButtonIDs
{
    // Function prototypes
    //---------------------//
    XButtonIDs(); // Default constructor

    // Member variables
    //---------------------//
    int A, B, X, Y; // 'Action' buttons

    // Directional Pad (D-Pad)
    int DPad_Up, DPad_Down, DPad_Left, DPad_Right;

    // Shoulder ('Bumper') buttons
    int L_Shoulder, R_Shoulder;

    // Thumbstick buttons
    int L_Thumbstick, R_Thumbstick;

    int Start; // 'START' button
    int Back;  // 'BACK' button
};

class Gamepad
{
public:
    // Function prototypes
    //---------------------//

    // Constructors
    Gamepad();
    Gamepad(int a_iIndex);

    void Update(); // Update gamepad state

    // Thumbstick functions
    // - Return true if stick is inside deadzone, false if outside
    bool LStick_InDeadzone();
    bool RStick_InDeadzone();

    float LeftStick_X();  // Return X axis of left stick
    float LeftStick_Y();  // Return Y axis of left stick
    float RightStick_X(); // Return X axis of right stick
    float RightStick_Y(); // Return Y axis of right stick

    // Utility functions
    XINPUT_STATE GetState(); // Return gamepad state
    int GetIndex();          // Return gamepad index
    bool Connected();        // Return true if gamepad is connected

    bool GetButtonPressed(int a_iButton);

private:
    // Member variables
    //---------------------//
    XINPUT_STATE m_State;  // Current gamepad state
    int m_iGamepadIndex;   // Gamepad index (eg. 1,2,3,4)
};

#endif // GAMEPAD_H

// Externally define the XButtonIDs struct as XButtons
extern XButtonIDs XButtons;
package graph.trans; import java.io.File; import java.util.ArrayList; import java.util.Arrays; import java.util.List; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.BlockingQueue; import java.util.concurrent.RejectedExecutionHandler; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import org.apache.log4j.BasicConfigurator; import org.apache.log4j.Level; import org.apache.log4j.Logger; import constants.ConstantsGraphs; import constants.ConstantsTrans; import graph.PGraph; public class EntGraphBuilder { public static void main(String[] args) { PGraph.pGraphs = new ArrayList<>(); List<PGraph> pGraphs = PGraph.pGraphs; System.err.println("start!"); ConstantsTrans.setPGraphParams(); BasicConfigurator.configure(); Logger.getRootLogger().setLevel(Level.WARN); // String root = "../../python/gfiles/typedEntGrDir_aida/"; // PGraph pgraph = new PGraph(root+"location#person_sim.txt"); // TODO: be careful List<Float> lmbdas = EntGraphBuilder.getLambdas1(); if (!ConstantsTrans.checkFrgVio) { lmbdas = EntGraphBuilder.getLambdas_HTL(); } // List<Float> lmbdas = new ArrayList<>(); // // lmbdas.add(.04f); // lmbdas.add(.12f);// was .06 // lmbdas.add(.1f); // lmbdas.add(.2f); // List<Float> lmbdas = new ArrayList<>(); // lmbdas.add(.04f); // lmbdas.add(.08f); // lmbdas.add(.12f); File folder = new File(ConstantsGraphs.root); File[] files = folder.listFiles(); Arrays.sort(files); int gc = 0; for (File f : files) { String fname = f.getName(); // if (gc == 50) {//TODO: be careful // break; // } // if (!fname.contains("thing#person")) { // continue; // } // if (!fname.contains("thing#location") && // !fname.contains("location#location")) { // continue; // } // if (fname.startsWith("location#location_sim.txt")) { // seenLoc = true; // } // if (seenLoc) { // break; // } if (!fname.contains(ConstantsGraphs.suffix)) { continue; } System.out.println(fname); // if (gc++==50) { // break; // } String outPath = ""; if (!ConstantsTrans.shouldReplaceOutputs) { String fname2 = ConstantsGraphs.root + fname; int lastDotIdx = fname2.lastIndexOf('.'); outPath = fname2.substring(0, lastDotIdx) + ConstantsTrans.graphPostFix; System.out.println("out: " + outPath); File candF = new File(outPath); if (candF.exists() && candF.length() > 0) { continue; } else { System.out.println("not exist"); } } System.out.println("accepted out:: " + outPath); System.out.println("fname: " + fname); PGraph pgraph = new PGraph(ConstantsGraphs.root + fname); if (pgraph.nodes.size() == 0) { continue; } pGraphs.add(pgraph); gc++; System.out.println("allEdgesRem, allEdges: " + PGraph.allEdgesRemained + " " + PGraph.allEdges); } System.out.println("allEdgesRem, allEdges: " + PGraph.allEdgesRemained + " " + PGraph.allEdges); // if (1==1) { // System.exit(0);//TODO: remove this // } PGraph.setRawPred2PGraphs(pGraphs); for (PGraph pgraph: pGraphs) { pgraph.setSortedEdges(); } final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(ConstantsTrans.numTransThreads); ThreadPoolExecutor threadPool = new ThreadPoolExecutor(ConstantsTrans.numTransThreads, ConstantsTrans.numTransThreads, 600, TimeUnit.HOURS, queue); // to silently discard rejected tasks. 
		// :add new ThreadPoolExecutor.DiscardPolicy()

		threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
			@Override
			public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
				// this will block if the queue is full
				try {
					executor.getQueue().put(r);
				} catch (InterruptedException e) {
					e.printStackTrace();
				}
			}
		});

		for (PGraph pgraph : pGraphs) {
			EntGraphBuilderRunner tnfR = new EntGraphBuilderRunner(pgraph, lmbdas);
			threadPool.execute(tnfR);
		}

		threadPool.shutdown();
		// Wait until, hopefully, all threads are finished. If not, forget about it!
		try {
			threadPool.awaitTermination(200, TimeUnit.HOURS);
		} catch (InterruptedException e1) {
			e1.printStackTrace();
		}

		// Collections.sort(scores, Collections.reverseOrder());
		// System.out.println("highest scoring relations:");
		// for (int i = 0; i < Math.min(1000000, scores.size()); i++) {
		// System.out.println(scores.get(i));
		// }
	}

	public static List<Float> getLambdas1() {
		List<Float> lmbdas = new ArrayList<>();
		float maxLmbda = .05f;
		int numLmbdas = 10;
		float minLambda = maxLmbda / numLmbdas;
		for (float lmbda = minLambda; lmbda <= maxLmbda; lmbda += (maxLmbda - minLambda) / (numLmbdas - 1)) {
			lmbdas.add(lmbda);
		}
		lmbdas.add(.06f);
		lmbdas.add(.1f);
		lmbdas.add(.2f);
		lmbdas.add(.3f);
		lmbdas.add(.4f);
		lmbdas.add(.5f);
		// lmbdas.remove(0);//TODO: remove this
		return lmbdas;
	}

	public static List<Float> getLambdas_HTL() {
		List<Float> lmbdas = new ArrayList<>();
		float maxLmbda = .05f;
		int numLmbdas = 7;
		float minLambda = .02f;
		for (float lmbda = minLambda; lmbda <= maxLmbda; lmbda += (maxLmbda - minLambda) / (numLmbdas - 1)) {
			lmbdas.add(lmbda);
		}
		lmbdas.add(.06f);
		lmbdas.add(.1f);
		lmbdas.add(.2f);
		lmbdas.add(.3f);
		lmbdas.add(.4f);
		lmbdas.add(.5f);
		return lmbdas;
	}

	public static List<Float> getLambdas2() {
		List<Float> lmbdas = new ArrayList<>();
		// lmbdas.add(.025f);
		lmbdas.add(.03f);
		lmbdas.add(.04f);
		lmbdas.add(.05f);
		lmbdas.add(.06f);
		lmbdas.add(.1f);
		lmbdas.add(.2f);
		return lmbdas;
	}

	static List<Float> getLambdas3(float lmbda) {
		List<Float> lmbdas = new ArrayList<>();
		lmbdas.add(lmbda);
		return lmbdas;
	}
}
# `window`, `gl` and `batch` are assumed to be module-level pyglet objects defined elsewhere.
def draw_all_objects():
    """Render the batch nine times, offset by one screen width/height in every
    direction, so objects crossing an edge of the wrap-around playfield are
    visible on both sides at once."""
    window.clear()
    for x_offset in (-window.width, 0, window.width):
        for y_offset in (-window.height, 0, window.height):
            gl.glPushMatrix()
            gl.glTranslatef(x_offset, y_offset, 0)
            batch.draw()
            gl.glPopMatrix()
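The complement of this nine-fold draw is folding each object's position back into the window after it moves; a minimal sketch, assuming each object exposes mutable x/y attributes:

def wrap_position(obj):
    # Fold coordinates into [0, width) x [0, height); together with the
    # nine-fold draw above, crossing an edge looks seamless.
    obj.x %= window.width
    obj.y %= window.height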
import math
from collections import defaultdict


def trial_division(n):
    """Return the prime factors of n with multiplicity (plus a leading 1)."""
    a = [1]
    for i in range(2, int(math.sqrt(n)) + 1):
        while n % i == 0:
            n //= i
            a.append(i)
    a.append(n)
    return a


N = int(input())

# Tally the exponent of each prime in N! = 2 * 3 * ... * N.
d = defaultdict(int)
for i in range(2, N + 1):
    for k in trial_division(i):
        if k != 1:
            d[k] += 1

# Primes with exponent 1 can only contribute a factor of (1 + 1) = 2 to the
# divisor count, and 2 does not divide 75, so they are dropped.
check = []
for v in d.values():
    if v != 1:
        check.append(v)

M = len(check)
ans = 0

# One prime with exponent 74: divisor count (74 + 1) = 75.
for i in check:
    if i >= 74:
        ans += 1

# Two primes with exponents a, b such that (a + 1) * (b + 1) = 75.
for i in range(M):
    for k in range(i + 1, M):
        for a in range(1, check[i] + 1):
            for b in range(1, check[k] + 1):
                if (a + 1) * (b + 1) == 75:
                    ans += 1

# Three primes with exponents a, b, c such that (a + 1) * (b + 1) * (c + 1) = 75.
for i in range(M):
    for k in range(i + 1, M):
        for l in range(k + 1, M):
            for a in range(1, check[i] + 1):
                for b in range(1, check[k] + 1):
                    for c in range(1, check[l] + 1):
                        if (a + 1) * (b + 1) * (c + 1) == 75:
                            ans += 1

print(ans)
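The counting above rests on the identity that m = p1^e1 * ... * pr^er has exactly (e1 + 1) * ... * (er + 1) divisors, and 75 factors only as 75, 3 * 25, 5 * 15 or 3 * 5 * 5. A brute-force sanity check of that identity (my own addition, not part of the original solution):

from math import prod


def divisor_count(m: int) -> int:
    return sum(1 for d in range(1, m + 1) if m % d == 0)


m = 360  # 2^3 * 3^2 * 5^1
exponents = {}
x = m
for p in range(2, m + 1):
    while x % p == 0:
        x //= p
        exponents[p] = exponents.get(p, 0) + 1

# (3 + 1) * (2 + 1) * (1 + 1) == 24
assert divisor_count(m) == prod(e + 1 for e in exponents.values()) == 24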
Ubisoft's new DRM scheme is already causing problems, as the servers required to authenticate games went down today. When Ubisoft announced its new DRM system, which required PC gamers to be connected to the Ubisoft servers at all times while playing, one of the biggest concerns gamers had was what would happen if the servers went down. Well, as it turns out, when the Ubisoft servers go down, no one can play their games and Ubisoft customers get very upset. At around 8am GMT, people began to complain in the Assassin's Creed 2 forum that they couldn't access the Ubisoft servers and were unable to play their games. Fast forward ten hours and it seems that the problem still hasn't been resolved, despite assurances from a Ubisoft representative that the servers are 'constantly monitored'. "I don't have any clear information on what the issue is ... but clearly the extended downtime and lengthy login issues are unacceptable, particularly as I've been told these servers are constantly monitored," said 'Ubi.Vigil', adding, "I'll do what I can to get more information on what the issue is here first thing tomorrow and push for a resolution and assurance this won't happen in the future." It's unclear whether this is a worldwide problem, or just a European one, but gamers are understandably frustrated. This is a disaster that everyone saw coming, but probably didn't expect to see quite so soon. (Thanks to Xanthious for the tip)
/** * Represents a count that was the result of a {@link Prospector} run. */ public class IndexEntry { private final String index; private final String data; private final String dataType; private final String tripleValueType; private final String visibility; private final Long count; private final Long timestamp; /** * Constructs an instance of {@link IndexEntry}. * * @param index - Indicates which {@link IndexWorkPlan} the data came from. * @param data - The information that is being counted. * @param dataType - The data type of {@code data}. * @param tripleValueType - Indicates which parts of the RDF Statement are included in {@code data}. * @param visibility - The visibility of this entry. * @param count - The number of times the {@code data} appeared within Rya. * @param timestamp - Identifies which Prospect run this entry belongs to. */ public IndexEntry( final String index, final String data, final String dataType, final String tripleValueType, final String visibility, final Long count, final Long timestamp) { this.index = index; this.data = data; this.dataType = dataType; this.tripleValueType = tripleValueType; this.visibility = visibility; this.count = count; this.timestamp = timestamp; } /** * @return Indicates which {@link IndexWorkPlan} the data came from. */ public String getIndex() { return index; } /** * @return The information that is being counted. */ public String getData() { return data; } /** * @return The data type of {@code data}. */ public String getDataType() { return dataType; } /** * @return Indicates which parts of the RDF Statement are included in {@code data}. */ public String getTripleValueType() { return tripleValueType; } /** * @return The visibility of this entry. */ public String getVisibility() { return visibility; } /** * @return The number of times the {@code data} appeared within Rya. */ public Long getCount() { return count; } /** * @return Identifies which Prospect run this entry belongs to. */ public Long getTimestamp() { return timestamp; } @Override public String toString() { return "IndexEntry{" + "index='" + index + '\'' + ", data='" + data + '\'' + ", dataType='" + dataType + '\'' + ", tripleValueType=" + tripleValueType + ", visibility='" + visibility + '\'' + ", timestamp='" + timestamp + '\'' + ", count=" + count + '}'; } @Override public int hashCode() { return Objects.hash(index, data, dataType, tripleValueType, visibility, count, timestamp); } @Override public boolean equals(Object o) { if(this == o) { return true; } if(o instanceof IndexEntry) { final IndexEntry entry = (IndexEntry) o; return Objects.equals(index, entry.index) && Objects.equals(data, entry.data) && Objects.equals(dataType, entry.dataType) && Objects.equals(tripleValueType, entry.tripleValueType) && Objects.equals(visibility, entry.visibility) && Objects.equals(count, entry.count) && Objects.equals(timestamp, entry.timestamp); } return false; } /** * @return An empty instance of {@link Builder}. */ public static Builder builder() { return new Builder(); } /** * Builds instances of {@link IndexEntry}. */ public static final class Builder { private String index; private String data; private String dataType; private String tripleValueType; private String visibility; private Long count; private Long timestamp; /** * @param index - Indicates which {@link IndexWorkPlan} the data came from. * @return This {@link Builder} so that method invocations may be chained. 
*/ public Builder setIndex(String index) { this.index = index; return this; } /** * @param data - The information that is being counted. * @return This {@link Builder} so that method invocations may be chained. */ public Builder setData(String data) { this.data = data; return this; } /** * @param dataType - The data type of {@code data}. * @return This {@link Builder} so that method invocations may be chained. */ public Builder setDataType(String dataType) { this.dataType = dataType; return this; } /** * @param tripleValueType - Indicates which parts of the RDF Statement are included in {@code data}. * @return This {@link Builder} so that method invocations may be chained. */ public Builder setTripleValueType(String tripleValueType) { this.tripleValueType = tripleValueType; return this; } /** * @param visibility - The visibility of this entry. * @return This {@link Builder} so that method invocations may be chained. */ public Builder setVisibility(String visibility) { this.visibility = visibility; return this; } /** * @param count - The number of times the {@code data} appeared within Rya. * @return This {@link Builder} so that method invocations may be chained. */ public Builder setCount(Long count) { this.count = count; return this; } /** * @param timestamp - Identifies which Prospect run this entry belongs to. * @return This {@link Builder} so that method invocations may be chained. */ public Builder setTimestamp(Long timestamp) { this.timestamp = timestamp; return this; } /** * @return Constructs an instance of {@link IndexEntry} built using this builder's values. */ public IndexEntry build() { return new IndexEntry(index, data, dataType, tripleValueType, visibility, count, timestamp); } } }
import { FabrixService as Service } from '@fabrix/fabrix/dist/common' /** * @module SchemaMigrationService * @description Schema Migrations */ export class SchemaMigrationService extends Service { /** * Drop collection */ async dropModel(model, connection) { const dialect = connection.dialect.connectionManager.dialectName return model.sequelize.query(dialect === 'sqlite' ? 'PRAGMA foreign_keys = OFF' : 'SET FOREIGN_KEY_CHECKS = 0') .then(() => { return model.sync({force: true}) }) .then(() => { return model.sequelize.query(dialect === 'sqlite' ? 'PRAGMA foreign_keys = ON' : 'SET FOREIGN_KEY_CHECKS = 1') }) .catch(err => { return model.sync({force: true}) }) } /** * Alter an existing schema */ async alterModel(model, connection) { // const dialect = connection.dialect.connectionManager.dialectName // return connection.sync(model) return model.sync() } migrateModels(models, connection) { let promises = [] Object.entries(models).forEach(([ _, model ]: [ any, {[key: string]: any}]) => { if (model.migrate === 'drop') { promises.push(this.dropModel(model, connection)) } else if (model.migrate === 'alter') { promises.push(this.alterModel(model, connection)) } else if (model.migrate === 'none') { return } else { return } }) return promises } /** * Drop collections in current connection * @param connection connection object */ async dropDB(connection) { const dialect = connection.dialect.connectionManager.dialectName return connection.query(dialect === 'sqlite' ? 'PRAGMA foreign_keys = OFF' : 'SET FOREIGN_KEY_CHECKS = 0') .then(() => { return connection.sync({force: true}) }) .then(() => { return connection.query(dialect === 'sqlite' ? 'PRAGMA foreign_keys = ON' : 'SET FOREIGN_KEY_CHECKS = 1') }) .catch(err => { return connection.sync({force: true}) }) } /** * Alter an existing database */ async alterDB(connection) { return connection.sync() } /** * Migrate the DB * Checks the connection level instances first and the reverts to model level migration strategy */ async migrateDB(connections) { let promises = [] Object.entries(connections).forEach(([ _, store ]: [ any, {[key: string]: any}]) => { if (store.migrate === 'drop') { promises.push(this.dropDB(store)) } else if (store.migrate === 'alter') { promises.push(this.alterDB(store)) } else if (store.migrate === 'none') { return } else { promises = [...promises, ...this.migrateModels(store.models, store)] } }) return Promise.all(promises) } }
def compressed_data_files_exist(self): if self['compressed_data_files'] is None: return False fullfile = os.path.join(self['pathname'], self['compressed_data_files'][0]) train = sfiles.CompressedFileTrain(fullfile, "traverse") try: train.open() train.close() return True except sfiles.IoError: return False
def start_modbus_server(self, ip, name=None, timeout=None): Rammbock.start_tcp_server(ip=ip, port=MODBUS_PORT, name=name, timeout=timeout, protocol='modbus', family='ipv4')
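MODBUS_PORT is referenced but not defined in the snippet above. Modbus/TCP conventionally listens on port 502, so a plausible definition elsewhere in the module (an assumption on my part, not taken from the source) would be:

MODBUS_PORT = 502  # assumed; 502 is the standard Modbus/TCP port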
/**
 * Additional test.
 * Tests that getPresentationName() returns whatever the last added edit's
 * getPresentationName() returns, even when that is the empty string.
 */
public void testGetPresentationName02() {
    assertEquals("", ce.getPresentationName());
    TestUndoableEdit.counter = 1;
    ce.addEdit(new TestUndoableEdit(TestUndoableEdit.NAME));
    assertEquals(String.valueOf(1), ce.getPresentationName());
    ce.addEdit(new TestUndoableEdit());
    assertEquals("", ce.getPresentationName());
}
a = int(input())
print(max(i for i in [1, 6, 28, 120, 496, 2016, 8128, 32640, 130816] if a % i == 0))
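The hard-coded candidates are 2^(k-1) * (2^k - 1) for k = 1..9, a family that includes the even perfect numbers 6, 28, 496 and 8128. A quick check that the list matches this closed form (my own verification, not part of the original):

candidates = [2 ** (k - 1) * (2 ** k - 1) for k in range(1, 10)]
assert candidates == [1, 6, 28, 120, 496, 2016, 8128, 32640, 130816]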
/** * Copyright (C) 2011 - present by OpenGamma Inc. and the OpenGamma group of companies * * Please see distribution for license. */ package com.opengamma.financial.interestrate.future.calculator; import org.apache.commons.lang.Validate; import com.opengamma.financial.interestrate.AbstractInterestRateDerivativeVisitor; import com.opengamma.financial.interestrate.YieldCurveBundle; import com.opengamma.financial.interestrate.future.definition.BondFutureSecurity; import com.opengamma.financial.interestrate.future.definition.InterestRateFuture; import com.opengamma.financial.interestrate.future.method.BondFutureSecurityDiscountingMethod; import com.opengamma.financial.interestrate.future.method.InterestRateFutureDiscountingMethod; /** * Calculate security prices for futures (bond and interest rate). */ public final class PriceFromCurvesDiscountingCalculator extends AbstractInterestRateDerivativeVisitor<YieldCurveBundle, Double> { /** * The calculator instance. */ private static final PriceFromCurvesDiscountingCalculator s_instance = new PriceFromCurvesDiscountingCalculator(); /** * The method to compute bond future prices. */ private static final BondFutureSecurityDiscountingMethod METHOD_BOND_FUTURE = BondFutureSecurityDiscountingMethod.getInstance(); /** * The method to compute interest rate future prices. */ private static final InterestRateFutureDiscountingMethod METHOD_RATE_FUTURE = InterestRateFutureDiscountingMethod.getInstance(); /** * Return the calculator instance. * @return The instance. */ public static PriceFromCurvesDiscountingCalculator getInstance() { return s_instance; } /** * Private constructor. */ private PriceFromCurvesDiscountingCalculator() { } @Override public Double visitInterestRateFuture(final InterestRateFuture future, final YieldCurveBundle curves) { Validate.notNull(curves); Validate.notNull(future); return METHOD_RATE_FUTURE.price(future, curves); } @Override public Double visitBondFutureSecurity(final BondFutureSecurity future, final YieldCurveBundle curves) { Validate.notNull(curves); Validate.notNull(future); return METHOD_BOND_FUTURE.price(future, curves); } }
The kinetics of ageing in dry-stored seeds: a comparison of viability loss and RNA degradation in unique legacy seed collections. Background and Aims Determining seed longevity by identifying chemical changes that precede, and may be linked to, seed mortality, is an important but difficult task. The standard assessment, germination proportion, reveals seed longevity by showing that germination proportion declines, but cannot be used to predict when germination will be significantly compromised. Assessment of molecular integrity, such as RNA integrity, may be more informative about changes in seed health that precede viability loss, and has been shown to be useful in soybean. Methods A collection of seeds stored at 5 °C and 35-50 % relative humidity for 1-30 years was used to test how germination proportion and RNA integrity are affected by storage time. Similarly, a collection of seeds stored at temperatures from -12 to +32 °C for 59 years was used to manipulate ageing rate. RNA integrity was calculated using total RNA extracted from one to five seeds per sample, analysed on an Agilent Bioanalyzer. Results Decreased RNA integrity was usually observed before viability loss. Correlation of RNA integrity with storage time or storage temperature was negative and significant for most species tested. Exceptions were watermelon, for which germination proportion and storage time were poorly correlated, and tomato, which showed electropherogram anomalies that affected RNA integrity number calculation. Temperature dependencies of ageing reactions were not significantly different across species or mode of detection. The overall correlation between germination proportion and RNA integrity, across all experiments, was positive and significant. Conclusions Changes in RNA integrity when ageing is asymptomatic can be used to predict onset of viability decline. RNA integrity appears to be a metric of seed ageing that is broadly applicable across species. Time and molecular mobility of the substrate affect both the progress of seed ageing and loss of RNA integrity.
import { useEffect, useState } from 'react';
import 'firebase/storage';
import firebase from 'firebase';
import Image from 'next/image';
import defaultPFP from '../public/assets/defaultPFP.jpg';
import GitHubIcon from '@material-ui/icons/GitHub';
import LinkedInIcon from '@mui/icons-material/LinkedIn';
import PersonIcon from '@mui/icons-material/Person';

/**
 * card for each member of the team
 */
export default function MemberCards(props) {
  const [imageLink, setImageLink] = useState();

  useEffect(() => {
    if (props.fileName !== undefined) {
      const storageRef = firebase.storage().ref();
      storageRef
        .child(`member_images/${props.fileName}`)
        .getDownloadURL()
        .then((url) => {
          setImageLink(url);
        })
        .catch((error) => {
          console.error('Could not find matching image file');
        });
    }
  }, []);

  return (
    <div className="md:w-52 w-44 relative mt-24 md:mx-3 mx-1 shadow-2xl">
      {/* Profile Image */}
      <div className="absolute left-1/2 -translate-x-1/2 -translate-y-1/2 rounded-full drop-shadow-2xl">
        <Image
          className="rounded-full object-cover"
          src={props.fileName !== undefined && imageLink !== undefined ? imageLink : defaultPFP}
          height={120}
          width={120}
          alt="Your profile"
          layout="fixed"
        />
      </div>
      {/* Main Body */}
      <div className="min-h-[4.8rem] bg-[#F2F3FF]"></div>
      <div className="min-h-[7.2rem] bg-[#C1C8FF] p-4">
        <h1 className="text-lg font-black">{props.name}</h1>
        <p>{props.description}</p>
        <div className="flex justify-left space-x-2 > * + *">
          {props.github !== undefined && (
            <a href={props.github} target="_blank" rel="noreferrer">
              <GitHubIcon style={{ fontSize: 'large' }} />
            </a>
          )}
          {props.linkedin !== undefined && (
            <a href={props.linkedin} target="_blank" rel="noreferrer">
              <LinkedInIcon style={{ fontSize: 'x-large' }} />
            </a>
          )}
          {props.personalSite !== undefined && (
            <a href={props.personalSite} target="_blank" rel="noreferrer">
              <PersonIcon style={{ fontSize: 'x-large' }} />
            </a>
          )}
        </div>
      </div>
    </div>
  );
}