package com.company.StackqueuE;

public class StackUsingJavaClient {
    public static void main(String[] args) throws Exception {
        StackUsingArray S1 = new StackUsingArray(5);
        for (int i = 1; i <= 5; i++) {
            S1.push(i * 10);
        }
        S1.display();
        S1.pop();
        S1.pop();
        S1.display();
        S1.push(10);
        S1.push(20);
        S1.display();
    }
}
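The client above exercises a `StackUsingArray` class that is not included in this file. A minimal sketch of what such a class might look like follows; only `push`, `pop`, and `display` are implied by the client, so the backing-array representation, the unchecked exceptions, and the top-to-bottom print order are assumptions.

```java
// Hypothetical sketch of the StackUsingArray class exercised above.
// Only push(), pop(), and display() are implied by the client code;
// the representation and exception behavior here are assumptions.
public class StackUsingArray {
    private final int[] data; // fixed-capacity backing array
    private int top = -1;     // index of the top element; -1 means empty

    public StackUsingArray(int capacity) {
        data = new int[capacity];
    }

    // Push a value, failing loudly when the fixed capacity is exceeded.
    public void push(int value) {
        if (top == data.length - 1) {
            throw new IllegalStateException("Stack overflow");
        }
        data[++top] = value;
    }

    // Pop and return the top value, failing loudly when empty.
    public int pop() {
        if (top < 0) {
            throw new IllegalStateException("Stack underflow");
        }
        return data[top--];
    }

    // Print the stack contents from top to bottom.
    public void display() {
        for (int i = top; i >= 0; i--) {
            System.out.println(data[i]);
        }
    }
}
```

With such a class, the client's first `display()` would print 50 down to 10, and the two `pop()` calls would remove 50 and 40 before 10 and 20 are pushed back on.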
/* This hook runs before the handshake hashes are updated */
static int s2n_conn_pre_handshake_hashes_update(struct s2n_connection *conn)
{
    if (conn->actual_protocol_version < S2N_TLS13) {
        return 0;
    }

    if (s2n_conn_get_current_message_type(conn) != CLIENT_FINISHED) {
        return 0;
    }

    GUARD(s2n_tls13_handle_application_secrets(conn));

    return 0;
}
import autocannon from 'autocannon';
import { writeFile } from 'fs/promises';
import mkdirp from 'mkdirp';
import { resolve } from 'path';
import { requireEnv } from 'require-env-variable';

const { TITLE, SUBTITLE, PORT } = requireEnv('TITLE', 'SUBTITLE', 'PORT');

const instance = autocannon(
  {
    url: `http://localhost:${PORT}/graphql`,
    connections: 10,
    duration: 5,
    title: `${TITLE} | ${SUBTITLE}`,
    method: 'POST',
    headers: {
      'content-type': 'application/json',
    },
    body: '{"query":"{authors{id name md5 books{ id name }}}"}',
  },
  async (err, result) => {
    if (err) console.error(err);

    await mkdirp('raw_results');

    const fileName = resolve('raw_results/' + result.title + '.json');

    writeFile(fileName, JSON.stringify(result, null, 2), {
      encoding: 'utf-8',
    })
      .then(() => {
        console.log('Result written to: ' + fileName);
      })
      .catch(console.error);
  }
);

process.once('SIGINT', () => {
  //@ts-expect-error
  instance.stop();
});

autocannon.track(instance, { renderProgressBar: true });
All photos by Dylan Thuras / Michelle Enemark

Western outsiders first traveled through what is now Las Vegas in 1829, when Mexican scout Rafael Rivera wandered away from the rest of his traveling party and stumbled upon a lush valley. Though today we think of the city’s surrounding landscape as dry, barren, and prickling with cacti, Las Vegas, or “the meadows,” was first known and named for its abundant grasses.

A lot has changed since then. The arrival of a railroad in 1905 quickly transformed Vegas into the place our minds conjure today—a mecca of lavish casinos, sky-high resorts, strip clubs, and shotgun weddings. Over 42 million tourists descend upon the city each year for a taste of gambling and glitz—but there’s a lot more to the former frontier town than what glints the brightest.

Caught up in the flashing lights of Las Vegas’ main drag, many visitors to the city never venture off The Strip. That’s their loss. Hiding around the corner is another version of Sin City—the old, weird Las Vegas. It’s still full of games, neon, and celebrities, but the games are pinball machines, the neon lights have been decommissioned, and the celebrities are 1920s gangsters and 1950s burlesque performers. Here’s a beginner’s guide to these lesser-known Las Vegas landmarks, an itinerary perfect for the curious traveler looking to dig into the city’s rich history and many selves.

THE NEON BONEYARD

The signs lean one on top of another, forming a collage of Las Vegas history. (Photo by Dylan Thuras)

The Neon Boneyard has rightfully become a Las Vegas institution. Its two-acre lot is stacked high with decommissioned signs of Las Vegas’ yesteryear, a physical embodiment of Las Vegas history. Over the years the Young Electric Sign Company, or YESCO, created many of Las Vegas’ most recognizable neon displays. Once retired, they were cast off to the so-called “boneyard.” The signs donated to the Neon Museum formed the core of the Neon Boneyard’s collection.
Over 150 decommissioned neon signs are found here. Some are from famous locations like the Sahara, Caesar’s Palace, the Stardust, and the Golden Nugget. Some are classics; both Binion’s Horseshoe and the original Aladdin’s lamp that once adorned the long-gone Aladdin Casino now sit unlit in the dust. Sunlight glints off of broken glass and rusted metal, beckoning passersby into motel rooms with color TVs, wedding chapels, and smaller casinos now lost to time. The final collection is a loud mix of midcentury modern design, brash showmanship, and moments frozen in thousand-watt bulbs and graceful glass tubing.

A vast variety of typography is found in the Neon Boneyard.

View the Neon Boneyard with Google Maps, and you will find a huge skull smiling up at you. Once part of Treasure Island, his massive bony head now lives in the Neon Boneyard, greeting incoming flights and passing satellites.

Though most are defunct, a few of the signs still work and can be turned on for evening tours. Visitors looking to lose themselves among the decades of Las Vegas iconography on display should aim for the earlier tours of the day to avoid the midday sun (shade is scarce during the hour-long exploration). Tours tend to sell out, so we recommend buying tickets in advance.

THE BURLESQUE HALL OF FAME

A popular act in the 1950s and 1960s, Jennie Lee longed to start a burlesque museum.

Many of the formerly-flashing signs in the Neon Boneyard advertise a particular Las Vegas institution: the showgirl. Visitors looking to learn more about the history of burlesque in Las Vegas and beyond should drop into the Burlesque Hall of Fame to take a look at a collection that started with the remarkable Jennie Lee. Known as “The Burlesque Version of Jayne Mansfield,” “Miss 44 and Plenty More,” and “The Bazoom Girl,” Jennie Lee was a popular stage act in the 1950s and 1960s. A fierce advocate for dancers sticking together, she helped to start The Exotic Dancers’ League of North America, the first union for dancers.
She served as the organization’s first president and often spoke of her dream to start a burlesque museum. The friendly, helpful staff can answer questions about the history of burlesque.

Portrait of a burlesque dancer, complete with her kiss.

In order to get the museum started, Lee gathered memorabilia from her fellow dancers. Unfortunately, she didn’t live to see her dream realized. After Lee’s death in 1990, the endeavor was taken up by her friends and colleagues. Today, the Burlesque Hall of Fame can be found on Fremont Street, although it is slated for a move to the Arts District in the fall of 2016. The collection includes gloves, fans, posters, and pasties of such iconic burlesque stars as Lili St. Cyr, Chesty Morgan, and Gypsy Rose Lee.

THE MOB MUSEUM

The bullet holes in the St. Valentine’s Day Massacre wall have been circled and painted red.

Organized crime and its shadowy connection to the gaming industry is a defining part of Las Vegas lore. However, one of the most infamous pieces in the Mob Museum’s collection is actually an import. In 1929 in Chicago, seven members of Bugs Moran’s gang were apprehended by police officers and ordered to line up facing a brick wall so that they could be cuffed. Unbeknownst to them, the cops were actually members of Al Capone’s gang in disguise. Rather than cuffing the rival gang members, they simply shot them dead.

The wall’s bricks were bought and shipped to Canada by a businessman, George Patey, who exhibited them until 1968 (some report that he did so in a wax museum; others say he toured them around American shopping malls). In 1971, he opened a nightclub with the brick wall assembled behind a sheet of plexiglas in the men’s bathroom, so that patrons could attempt to aim their pee at the bullet holes. (Though Jimmy Stewart and Robert Mitchum were both patrons, their success in hitting the target is a detail lost to history.)
After the nightclub closed, the bricks lived in storage until 1997, when Patey tried to auction them off. Failing to do so, he instead decided to sell the wall brick-by-brick to gangster enthusiasts, until he passed away in 2004. The remaining bricks were left to his niece, who finally sold them to the Mob Museum in 2012. Somewhere along the line, the 85-year-old bloodstains were enhanced with bright red paint (for subtlety’s sake).

Chips from the S.S. Tango.

The Mob Museum is also home to numerous interesting exhibits on illegal gambling operations and bootlegging. One of the most intriguing of these enterprises was the S.S. Tango, a luxury casino on a ship anchored three miles off the California coast, floating in international waters. The State of California eventually managed to circumvent the three-mile limit and sailed out to arrest the casino’s owner. Allegedly, he refused to let them board his ship and instead turned the ship’s fire hoses on the assembled law enforcement, resulting in an eight-day standoff.

LOTUS OF SIAM

Walking amongst neon boneyards, showgirls, and mobsters works up a serious appetite. While most Las Vegas tourists are standing in a buffet line, the hungry Atlas Obscura adventurer should head just past the end of The Strip to the easily overlooked (but unreasonably delicious) Lotus of Siam.

The strip mall is home to a wide variety of cuisines, from Indian to sushi, and one suspects they may all be delicious.

Hidden in a strip mall past the last of The Strip’s mega casinos, the Lotus of Siam would be easy to drive right past. However, this unassuming little spot has deservedly been called the best Thai restaurant in North America. Although it can take weeks to get a dinner reservation, the Atlas team managed to slip in for a Tuesday lunch without a wait.

The Tom Kha Kai (Bangkok Style) soup is profoundly good.

The huge menu of over 150 choices makes it difficult to know where to start, but it is hard to go wrong.
Ranging from the sticky rice dishes of the country’s northern regions to classic noodle dishes, the food is spectacular. Our test crew went for a papaya salad, golden tofu, and a coconut-heavy Bangkok-style soup. The golden tofu should not be thought of as a second choice to the meat dishes. Every bit as good, the tofu is lightly crisp on the outside and almost magically soft and fluffy on the inside. Even just a small sampling of the dishes on the menu will leave you well-sated for the next off-Strip location (we know, because we probably overdid it).

Appropriately feeling like we were about to explode, we headed out toward the desert to experience Las Vegas’ atomic past.

THE NATIONAL ATOMIC TESTING MUSEUM

A selection of Geiger counters at the Atomic Testing Museum, a not-for-profit dedicated to the history of atomic testing, much of which happened within sight of Las Vegas.

The National Atomic Testing Museum tells the story of the nearly 100 nuclear bombs detonated in this area between 1951 and 1963. Although it’s been some time since mushroom clouds billowed against the desert sunset, the National Atomic Testing Museum is dedicated to keeping that history alive. Starting in 1951, the Nevada Test Site, or NTS, located about 65 miles northwest of the museum, was a very busy place, and most of the iconic images and photos from what we think of as the nuclear era come from there.

The museum does a nice job of creating the atmosphere of an underground testing facility. Housing over 12,000 artifacts, the sobering museum showcases not only the history of the Nevada Test Site but also the story of the nation’s nuclear program and its impact on Las Vegas and the surrounding communities. During the ’50s and ’60s, the population of Las Vegas doubled and later tripled as people arrived looking for jobs on the cutting edge of technology.
The museum, affiliated with the Smithsonian Institution, is not just Geiger counters (shown above) and old black-and-white photos — it also highlights the pop culture and sociological trends surrounding atomic testing. The museum also has a charming collection of machinery models hand-carved by a man with the excellent name of Rocky Hardcastle.

THE PINBALL HALL OF FAME

While the outside of the Pinball Hall of Fame doesn’t look like much, the inside is a riot of colors, lights, and sound.

It turns out that some of the most exciting machines in Las Vegas aren’t even close to slot machines. Located not far off The Strip in an unassuming building, the Pinball Hall of Fame is beloved by locals and traveling aficionados alike. Home to over 200 machines, it is one of the best collections in the country, and all of the vintage pinball machines can be played for just 25 cents a pop.

The Pinball Hall of Fame has over 200 vintage pinball machines, all playable. $10 of quarters gets you a long way. The incredible vintage art is almost as delightful as playing the games themselves. There are a few non-pinball machines, like this supremely difficult-to-play basketball game. The hockey game with an adorable goalie graphic is also remarkably difficult.

The Pinball Hall of Fame’s owner, Tim Arnold, was just 16 when he purchased his first pinball game in 1972. He charged neighborhood kids to play it, and the seed for the Pinball Hall of Fame was sown. Arnold went on to operate multiple arcades throughout Michigan and retired to Las Vegas in the 1990s. Having collected over 1,000 pinball machines by that time, he opened the Pinball Hall of Fame in 2009. After operational costs, Arnold donates all of the money earned at the Pinball Hall of Fame to charity. Every 25¢ does a tiny bit of good.

FRANKIE’S TIKI ROOM

No trip to Las Vegas would be complete without a drink at a local watering hole, and the curious traveler seeks out a spot with personality. Enter Frankie’s Tiki Room!
The interior is just as one would hope, complete with pufferfish lighting fixtures.

A kitschy tiki aesthetic has a long history on the Las Vegas scene, but with the closing of tiki classics like Don the Beachcomber and Aku Aku at the Stardust in the 1980s, tiki establishments were becoming an endangered species. A short-lived bar called Taboo Cove opened in The Venetian in 2001 but closed in 2005. Las Vegas was temporarily tiki-less.

That changed in 2008, when owner P. Moss bought the ’50s-era Frankie’s Bar & Cocktail Lounge and brought tiki back to Las Vegas in a big way. Working with Bamboo Ben, the grandson of tiki pioneer Eli Hedley, Moss created custom Frankie’s tiki mugs, lined the interior with thatch and bamboo decorations, and lit it with appropriately dim bulbs inside pufferfish lighting fixtures.

The names of the drinks in the bar are an object lesson in dad jokes, from the Frankiestein to the Thurston Howl. The drinks themselves are terrific blends of classic tiki and creative riffs. The number of skulls on a drink indicates how much booze is in it (after a few “five skullers” you may wake up wondering how many skulls you have).

That a day among bombs and boneyards could end on a tacky tropical island is just one more example of Las Vegas’ unending capacity to provide the unexpected—off The Strip, it’s remarkably easy to hit the jackpot.
//
//  JPGiftShowView.h   Gift display view
//  JPGiftManager
//
//  Created by Keep丶Dream on 2018/3/13.
//  Copyright © 2018 dong. All rights reserved.
//

#import <UIKit/UIKit.h>
#import "JPGiftCountLabel.h"

@class JPGiftModel;

typedef void(^completeShowViewBlock)(BOOL finished, NSString *giftKey);
typedef void(^completeShowViewKeyBlock)(JPGiftModel *giftModel);

static const CGFloat showGiftView_UserIcon_WH = 44;  // avatar width/height
static const CGFloat showGiftView_UserName_W  = 60;  // name width
static const CGFloat showGiftView_UserName_H  = 16;  // name height
static const CGFloat showGiftView_UserName_L  = 2;   // name left margin
static const CGFloat showGiftView_GiftIcon_H  = 50;  // gift image height
//static const CGFloat showGiftView_GiftIcon_W = showGiftView_GiftIcon_H*(70/55.0); // gift image width
static const CGFloat showGiftView_GiftIcon_W  = 50;  // gift image width
static const CGFloat showGiftView_UserIcon_LT = (showGiftView_GiftIcon_H - showGiftView_UserIcon_WH) * 0.5; // avatar top/left inset
static const CGFloat showGiftView_XNum_W = 50;  // gift-count label width
static const CGFloat showGiftView_XNum_H = 30;  // gift-count label height
static const CGFloat showGiftView_XNum_L = 5;   // gift-count label left margin

@interface JPGiftShowView : UIView

/**
 Show the gift animation

 @param giftModel     the gift's data
 @param completeBlock called when the display finishes
 */
- (void)showGiftShowViewWithModel:(JPGiftModel *)giftModel completeBlock:(completeShowViewBlock)completeBlock;

/** Hide the gift view */
- (void)hiddenGiftShowView;

/** background */
@property (nonatomic, strong) UIView *bgView;
/** user icon */
@property (nonatomic, strong) UIImageView *userIconView;
/** user name */
@property (nonatomic, strong) UILabel *userNameLabel;
/** gift name */
@property (nonatomic, strong) UILabel *giftNameLabel;
/** gift image */
@property (nonatomic, strong) UIImageView *giftImageView;
/** count label */
@property (nonatomic, strong) JPGiftCountLabel *countLabel;
/** gift count */
@property (nonatomic, assign) NSInteger giftCount;
/** current total gift count */
@property (nonatomic, assign) NSInteger currentGiftCount;
/** completion block */
@property (nonatomic, copy) completeShowViewBlock showViewFinishBlock;
/** returns the unique key of the current gift */
@property (nonatomic, copy) completeShowViewKeyBlock showViewKeyBlock;
/** model */
@property (nonatomic, strong) JPGiftModel *finishModel;

@end
// example/index.tsx
import 'react-app-polyfill/ie11';
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { Slot, Fill, SlotProvider } from '../.';

const App = () => {
  const [fillSlot1, setFillSlot1] = React.useState(true);
  const [fillSlot2, setFillSlot2] = React.useState(true);

  return (
    <SlotProvider>
      <div className="App">
        <Slot name="title">
          <h1>Title</h1>
        </Slot>
        <Slot name="subtitle" />
        {fillSlot1 && (
          <Fill slot="title">
            <h1>Custom title</h1>
          </Fill>
        )}
        {fillSlot2 && (
          <Fill slot="subtitle">
            <h2>Custom subtitle</h2>
          </Fill>
        )}
        <hr />
        <button onClick={() => setFillSlot1(!fillSlot1)}>
          Filling slot 1: {fillSlot1 ? 'Yes' : 'No'}
        </button>
        <button onClick={() => setFillSlot2(!fillSlot2)}>
          Filling slot 2: {fillSlot2 ? 'Yes' : 'No'}
        </button>
      </div>
    </SlotProvider>
  );
};

ReactDOM.render(<App />, document.getElementById('root'));
/**
 * Deletes an image at the given path.
 */
private void removeImage(JSONArray args, CallbackContext callbackContext)
        throws JSONException {
    String filename = args.optString(0);

    if (filename.equals(EMPTY_STR)) {
        callbackContext.error("Missing filename string");
        return; // don't fall through and report success for a missing argument
    }

    File file = new File(filename);

    if (file.exists()) {
        try {
            file.delete();
        } catch (Exception ex) {
            callbackContext.error(ex.getMessage());
            return; // the callback must not be invoked twice
        }
    }

    callbackContext.success(filename);
}
Oliver Porter created and implemented the public-private partnership (PPP) model for Sandy Springs, Ga.—a city of 100,000 people near Atlanta. He has served as the principal advisor for many other new cities and for cities considering the conversion to the PPP model, both in the United States and Japan. He has authored three books on this subject and has agreed to sit down with The Freeman.

The Freeman: Can you describe in a nutshell what Sandy Springs, Georgia, has been able to do—that is, provide a sketch of your model?

Porter: The Sandy Springs model is a public-private partnership (PPP) in which the city contracts with private industry for all of its basic services other than public safety—that is, police, fire, and courts. The model has been an outstanding success, both financially and in response to citizens’ service needs, over the seven years since the city’s incorporation. Financially: The city has not increased tax rates at all; has paid for a major capital improvement program from savings in the operating budget; has built a $35 million reserve fund despite a recession; and has no long-term liabilities—that is, no loans, no bonds, and of most importance, no unfunded liabilities for pensions and other benefits.

The Freeman: How much money has the model saved taxpayers there?

Porter: Initially about $20 million per year—40 percent of the budget for the “basket” of services being provided. These services include: administration; human resources; finance; accounting; purchasing; information technology; the backroom operations for the police, fire, and courts; parks and recreation; transportation (road and sidewalk maintenance, traffic design and control); community development (planning, zoning, permitting, and enforcement); and management of the capital program. Over the life of the contracts, I am comfortable in saying that over $140 million of the taxpayers’ dollars have been saved.

The Freeman: That is truly staggering.
But what about the quality of the services?

Porter: Services have been substantially improved under the PPP model. Surveys, both internal and national, have generally rated Sandy Springs services as excellent. The best indicator of citizen satisfaction may be that in the first election (held four years after the city was formed), the lowest vote total that any incumbent received was 84 percent. That certainly indicates a high level of voter satisfaction with the efficiency and responsiveness of the model.

The Freeman: The New York Times, not known for its affinity for anything private, wrote a pretty favorable story about your outsourcing work in Sandy Springs. There were certainly grudging admissions. But one worry the author expressed is that it only worked because Sandy Springs is an affluent area and that outsourced government services are not feasible in poorer areas. What do you think about this concern?

Porter: First, let me say that although Sandy Springs is relatively affluent, it is not a rich enclave. Unfortunately, there were areas of the city that were well below the average income of the metropolitan area. Sandy Springs is a melting pot, with a population that includes 30 percent minorities—a growing segment—and over 55 percent apartment dwellers. Five other new cities with varying levels of affluence have been formed, each adopting the PPP model, and all have done well. In my opinion, the model is even more suited to less-affluent communities. These communities need the savings that the model offers even more than richer areas do. By the way, everything that I am saying about city governments applies equally to counties.

The Freeman: Detroit is insolvent. It’s a city that is essentially dying. If you could say anything to the new “emergency” manager there—Kevyn Orr—what would you say?

Porter: I hope to have the opportunity to meet him in the next month.
I would say to him, “If you are in a deep hole, quit digging!” In a crisis, small, incremental steps are not sufficient. Bold initiatives are required. First, look for alternative service methods such as a PPP to produce operating savings; and second, consider the privatization of the city’s assets to raise funds to be applied to the debt.

The Freeman: Some ideological purists who read this publication might not like the idea of public-private partnerships like those you’ve established. But among those purists, some will have reasonable concerns about corrupt relationships between business and government forming over time. Do you worry about the system in Sandy Springs being corrupted?

Porter: No. All governments have shown an Achilles’ heel that allows for corruption. The traditional model for cities is not immune. However, there is less opportunity under the PPP model than would normally be the case. The fact that the elected officials are prohibited from meddling in the day-to-day operations—including the bidding of contracts, the hiring and firing of employees, and the granting of licenses and permits—is a deterrent to improper dealings. All contracts are granted through competitive bidding that is open to public scrutiny. The initial contract bids were thoroughly scrutinized by a citizens’ committee, then by a volunteer group appointed by the governor, and finally by the elected council. On a continuing basis, PPPs diminish the opportunity for such unacceptable behavior. Unlike traditional cities, the private contractors have a profit motive that serves as a natural incentive to reduce costs and operate efficiently. Therefore, behaviors such as preferential hiring of friends and relatives, or palm-greasing, that are sometimes prevalent in traditional governments become a non-issue in the PPP model.

The Freeman: Has anyone copied your model?

Porter: Yes. At least five other cities.
There are thousands of existing cities and counties that could benefit from the model. The only barrier to the adoption of the PPP model is politics. Officials who have been elected under the traditional form of government are scared to consider a new model, even though it offers better service at lower cost. When I interact with such groups, I point out that their principal job is to serve the citizens—not to provide jobs—and that a part of their job description should be to constantly consider alternative methods for providing service.

The Freeman: We hope to publish this conversation in an issue on the subject of power. And as you know, the way political power works, in part, is that it protects entrenched interests, most of whom have a lot to lose from change in the status quo. It seems to us that the biggest obstacle for people adopting your model is that very power and those who benefit from its existence. What does it take to dislodge these special interests so that the people can see the benefits of privatization?

Porter: Unfortunately, it may take a financial crisis: bankruptcy or near-ruin. A number of our cities are near that point. If unfunded liabilities are properly recognized, many more are approaching the crisis state. For the cities not yet in crisis, there are several steps that should be taken to open the door to efficiency. First, it takes a hero: an elected official or prominent citizen who is willing to take the heat that may come from those with vested interests. Such sponsorship should lead to a low-cost study that compares the current operational costs (and, very importantly, the costs for pensions and other benefits) of the traditional city versus the PPP model. There is no risk to the city for such a study. And the cost is quite low compared to the potential payoff. If the study shows the potential for substantial savings, the city should issue RFPs [requests for proposals] for the PPP. Again, there is no risk.
If the bids do not show substantial savings (and in most cases they will), the city has no obligation to proceed.

The Freeman: Can state governments do anything to help municipalities adopt your model?

Porter: To date, the states have done little; however, there is much that can be done. Obviously the most effective step would be a requirement that cities, at least, consider alternative models. Funding of comparative studies would be an even more helpful step. Removal of legal barriers to the PPP model that exist in some states is, of course, necessary and desirable. May I add that not-for-profit organizations and media outlets should also take up the cause of municipal reform. The Freeman is to be commended for opening the subject.

The Freeman: Oliver Porter, it’s been a pleasure to speak with you.

Porter: Thank you. I hope that the conversation will not end with this interview. I welcome contacts from interested citizens across our nation.
#include "BossEmmiter.h"
#include "j1EntityFactory.h"
#include "j1Render.h"

BossEmmiter::BossEmmiter(fPoint pos, uint radius, uint spawnRatio, const j1Entity* owner, uint timeLife)
	: Projectile(pos, { 0.F, 0.F }, 0u, owner, "BossEmmiter", PROJECTILE_TYPE::BOSS_EMMITER)
{
	SetPivot(450, 250);
	size.create(900, 500);
	position -= pivot;

	engine.seed(rd());

	lifeTimer.Start();
	createArrowsTimer.Start();
	dieTimer.Start();

	//currentAnimation = &anim;

	rang.x = radius;
	rang.y = radius * 0.5F;
	lifeTime = timeLife;
	createArrowsSpeed = spawnRatio;
	dieTime = 1u;
	constantHeigth = App->render->camera->h;

	/*App->audio->PlayFx(App->entityFactory->strech_Shoot, 0);*/
}

BossEmmiter::~BossEmmiter()
{
}

bool BossEmmiter::PreUpdate()
{
	if (lifeTimer.Read() > lifeTime)
	{
		to_explode = true;
	}
	return true;
}

bool BossEmmiter::Update(float dt)
{
	if (!to_explode)
	{
		if (createArrowsTimer.Read() > createArrowsSpeed)
		{
			CreateArrow();
			createArrowsTimer.Start();
		}
	}
	return true;
}

bool BossEmmiter::PostUpdate()
{
	if (to_explode)
	{
		if (dieTime < dieTimer.ReadSec())
		{
			to_delete = true;
		}
	}
	return true;
}

void BossEmmiter::CreateArrow()
{
	float posY = RandomValue(-rang.y, rang.y);
	posY += (position.y + size.y / 2);
	float posX = RandomValue(-rang.x, rang.x);
	posX += (position.x + size.x / 2);

	App->entityFactory->CreateArrow({ posX, posY - 350 }, { posX, posY + 100 }, 200, owner, PROJECTILE_TYPE::BOSS_EMMITER_ARROWS, 2);
}

float BossEmmiter::RandomValue(int min, int max)
{
	std::uniform_int_distribution<int> range(min, max);
	// Draw from the seeded engine rather than the raw random_device,
	// so the seed set in the constructor actually takes effect.
	return range(engine);
}
// Decompiled; referenced types: zzcf, zzbl, zzdm
public abstract class zzbt extends zzcf {

    protected zzdm zztp;

    public zzbt(zzbl zzbl1) {
        super(zzbl.zza(zzbl1));
    }

    protected void doExecute(com.google.android.gms.common.api.Api.AnyClient anyclient)
            throws RemoteException {
        execute();
    }

    public abstract void execute();
}
/**
 * @author IDV Development Team
 * @version $Revision: 1.3 $
 */
public class SlackOutputHandler extends OutputHandler {

    /** Property key for the Slack API token */
    public static final String PROP_SLACK_API_TOKEN = "slack.api.token";

    /** The "publish to Slack" output type */
    public static final OutputType OUTPUT_SLACK_PUBLISH =
        new OutputType("Publish to Slack", "slack_publish",
                       OutputType.TYPE_VIEW, "", "/slack/slack.png");

    public SlackOutputHandler() {}

    public SlackOutputHandler(Repository repository, Element element)
            throws Exception {
        super(repository, element);
        addType(OUTPUT_SLACK_PUBLISH);
    }

    /**
     * Add a "publish to Slack" link when the entry is an editable file
     * and a Slack API token has been configured.
     */
    public void getEntryLinks(Request request, State state, List<Link> links)
            throws Exception {
        if ( !request.isAnonymous() && (state.getEntry() != null)
                && state.getEntry().isFile()
                && getAccessManager().canDoAction(request, state.getEntry(),
                                                  Permission.ACTION_EDIT)
                && (getRepository().getProperty(PROP_SLACK_API_TOKEN,
                                                (String) null) != null)) {
            links.add(makeLink(request, state.getEntry(),
                               OUTPUT_SLACK_PUBLISH));
        }
    }

    @Override
    public Result outputEntry(Request request, OutputType outputType,
                              Entry entry)
            throws Exception {
        if ( !getAccessManager().canDoAction(request, entry,
                                             Permission.ACTION_EDIT)) {
            throw new IllegalArgumentException("No access");
        }
        if (getRepository().getProperty(PROP_SLACK_API_TOKEN,
                                        (String) null) == null) {
            return new Result(
                "", new StringBuilder("No Slack API token defined"));
        }
        StringBuilder sb = new StringBuilder("slack publish stuff here");

        return new Result("", sb);
    }
}
<gh_stars>0 package nie.sr2.util; import java.sql.*; import java.util.*; import java.io.IOException; import java.io.LineNumberReader; import java.net.*; import com.sun.org.apache.bcel.internal.generic.FMUL; import nie.config_ui.ConfiguratorException; import nie.core.*; import nie.sn.SearchEngineConfig; import nie.sn.SearchTuningConfig; import nie.sn.SearchTuningConfigFatalException; import nie.sn.SnRequestHandler; import nie.sr2.ReportConstants; import nie.sr2.SearchEngineLink; // SEARCH ACTIVITY POPULATOR // Makes up semi-random looking search activity // Can use with date roller // Driven by csv control file public class Populator // implements nie.sn.CronLiteJob // Runnable { private final static String kClassName = "Populator"; static final long MY_INTERVAL = nie.sn.CronLite.HOUR; // How often to run public long getRunIntervalInMS() { return MY_INTERVAL; } //////////////////////////////////////////////// // // In case we're run as a stand-alone program // instead of being incorporated in another program // ///////////////////////////////////////////////// static public void main( String[] inArgs ) { final String kFName = "main"; Populator util = new Populator(); util.parseCommandLine( inArgs ); try { util.setupConfigFromURI(); } catch( Exception e ) { fatalErrorMsg( kFName, "Error initializing, exiting." + " Error: " + e ); System.exit( 2 ); } util.run(); } /////////////////////////////////////////////////////////// private static final void __Constructors_and_Initialization__() {} //////////////////////////////////////////////// // // Constructors. 
// // All constructors should call commonInit() // //////////////////////////////////////////////// private /*public*/ Populator() // throws UtilException { // If you use this one, you must also then call // .setupConfigFromURI() before running } // NOTE: This constructor may not be called // Instead, they may call the null arg version // and then call .setupConfigFromURI() public Populator( SearchTuningConfig inMainConfig ) throws Exception { this(); setMainConfig( inMainConfig ); // setupSearchEngineConfig(); } public void setupConfigFromURI() throws Exception { final String kFName = "setupConfigFromURI"; final String kExTag = kClassName + '.' + kFName + ": "; try { // fDBConf = new DBConfig( fConfigFileURI ); // mMainConfig = new SearchTuningConfig( fConfigFileURI, null ); mMainConfig = new SearchTuningConfig( getMainConfigURI(), null ); } catch( Exception e ) { throw new UtilException( kExTag + "Error initializing config/data from URI \"" + getMainConfigURI() + "\"" + " REMINDER: This needs to be a FULL SearchTrack config, and NOT just a Database config" + " because we need more than just database info to load search records."
// ^^^ In particular we need search engine info and Site ID, and maybe some patterns + " Error was: " + e ); } // checkDataFile(); // setupSearchEngineConfig(); } public boolean hadError() { return mHadError; } void parseCommandLine( String inArgs[] ) { final String kFName = "parseCommandLine"; boolean haveSeenConfigURI = false; // For each argument on the command line for( int i = 0; i < inArgs.length; i++ ) { // If the argument starts with a dash then it's a switch /////// if( inArgs[i].startsWith( "-" ) ) { String flag = inArgs[i].substring( 1 ).toLowerCase(); // See if it's a verbosity flag // boolean result = getRunLogObject().setVerbosityByString( lFlag, false ); boolean result = getRunLogImplObject().setVerbosityByString( inArgs[i], false ); // // ^^^ Must preserve original case when controlling logging // // and setVerbosityByString can handle optional leading hyphen // lFlag, false // ); // If it's not verbosity, keep checking if( ! result ) { if( flag.startsWith("nuke") ) { if( flag.indexOf("log") > 0 ) { mNukeLog = true; } else if( flag.indexOf("dns") > 0 ) { mNukeDns = true; } else if( flag.indexOf("all")>0 || flag.indexOf("both")>0 ) { mNukeLog = mNukeDns = true; } else { bailOnBadSyntax( inArgs[i], "Must be -nuke_log, -nuke_dns, or -nuke_both" ); } } // Where to read the data from else if( flag.startsWith("data") ) { if( i == inArgs.length-1 ) bailOnBadSyntax( inArgs[i], "requires an argument" ); String arg = inArgs[++i]; mDataURI = NIEUtil.trimmedStringOrNull( arg ); if( null==mDataURI ) bailOnBadSyntax( arg, "data_file requires an argument" ); } // Days / Window of time // mDaysInWindow else if( flag.equals("days") ) { if( i == inArgs.length-1 ) bailOnBadSyntax( inArgs[i], "requires an argument" ); String arg = inArgs[++i]; mDaysInWindow = NIEUtil.stringToIntOrDefaultValue( arg, -1, true, true ); if( mDaysInWindow < 1 ) bailOnBadSyntax( arg , "Days must be positive int" ); } // Site ID else if( flag.equals("site_id") || flag.equals("site-id") ) {
if( i == inArgs.length-1 ) bailOnBadSyntax( inArgs[i], "requires an argument" ); String arg = inArgs[++i]; mSiteId = NIEUtil.stringToIntOrDefaultValue( arg, -1, true, true ); if( mSiteId < 1 ) bailOnBadSyntax( arg , "site_id must be positive int" ); } // Preview Only mode else if( flag.startsWith("preview") ) { // mDoPreviewOnly = true; setDoPreviewOnly(); } // backfill missing records else if( flag.startsWith("back") ) { setDoBackFill( true ); } // No backfill else if( flag.startsWith("no") && flag.indexOf("back") > 0 ) { setDoBackFill( false ); } // roll records forward to current date and time else if( flag.startsWith("roll") ) { setDoRollDates( true ); } else if( flag.startsWith("no") && flag.indexOf("roll") > 0 ) { setDoRollDates( false ); } // Use Google else if( flag.equals("use_google") || flag.equals("use-google") || flag.equals("google") ) { mUseGoogleInstead = true; setDoBackFill( true ); } // Site prefix (for google) else if( flag.equals("site_prefix") || flag.equals("site-prefix") || flag.equals("prefix") ) { if( i == inArgs.length-1 ) bailOnBadSyntax( inArgs[i], "requires an argument" ); String arg = inArgs[++i]; mSitePrefix = NIEUtil.trimmedStringOrNull( arg ); if( null==mSitePrefix ) bailOnBadSyntax( arg, "prefix requires an argument" ); setDoBackFill( true ); } else { // We don't know what it is bailOnBadSyntax( inArgs[i] ); } } } else { // If it's not a switch then the only other thing it // can legally be is the name of an alternate config // file. /////// if( ! haveSeenConfigURI ) { fConfigFileURI = inArgs[i]; haveSeenConfigURI = true; getRunLogObject().debugMsg( kClassName, kFName, "Command line option " + (i+1) + " is config file name \"" + fConfigFileURI + "\"." ); } // Else we've already seen the config! else { getRunLogObject().fatalErrorMsg( kClassName, kFName, "Can only specify one config file" + " on the command line."
); bailOnBadSyntax( inArgs[i] ); } } } // End for each command line option debugMsg( kFName, "haveSeenConfigURI=" + haveSeenConfigURI ); if( ! haveSeenConfigURI ) bailOnBadSyntax( "<path-to-config-file>", "No config file given on command line." ); /*** if( ! haveSeenConfigURI ) { fatalErrorMsg( kFName, "No configuration file given on command line." ); System.exit(1); } ***/ } private void bailOnBadSyntax( String inOpt ) { bailOnBadSyntax( inOpt, null ); } private void bailOnBadSyntax( String inOpt, String optMsg ) { final String kFName = "bailOnBadSyntax"; String msg = "Utility to create sample searches in the logs, usually for a demo." + NIEUtil.NL + "It also takes care of updating DNS cache, populating null match counts," + NIEUtil.NL + "rolling records up to the current date and time, and clearing the report cache." + NIEUtil.NL + NIEUtil.NL ; msg += "Bad Command Line Syntax: "; if( optMsg != null ) msg += optMsg; else msg += "Unknown option \"" + inOpt + "\""; msg += NIEUtil.NL; msg += "Required Parameter:" + NIEUtil.NL + "\tdatabase_config_file_name_or_url.xml" + NIEUtil.NL + NIEUtil.NL ; msg += "REQUIRED Args:" + NIEUtil.NL + NIEUtil.NL + "config_file.xml (full SearchTrack, not just DB)" + NIEUtil.NL + "-data[_file] tabbed_data.txt (REQUIRED!)" + NIEUtil.NL + NIEUtil.NL + "Primary Options:" + NIEUtil.NL + NIEUtil.NL + "-nuke_log" + NIEUtil.NL + "-nuke_dns" + NIEUtil.NL + "-nuke_both" + NIEUtil.NL + NIEUtil.NL + "-site_id int (OVERRIDE the site id in the main config)" + NIEUtil.NL + NIEUtil.NL + "-days int (how many days to spread the searches over, DEFAULT=" + DEFAULT_WINDOW_DAYS + ")" + NIEUtil.NL + NIEUtil.NL + "-preview[_only] (just show what you would do, but don't do it)" + NIEUtil.NL + NIEUtil.NL + "-back[_fill] (Back fill match counts)" + NIEUtil.NL + "-no_back[_fill] turn off, default=" + DEFAULT_DO_BACK_FILL + NIEUtil.NL + "-use_google (instead for adding match counts, IMPLIES -back_fill)" + NIEUtil.NL + "-site_prefix something.com (site
prefix for using with Google back fill)" + NIEUtil.NL + NIEUtil.NL + "-roll[_dates] (move up to current date and time)" + NIEUtil.NL + "-no_roll[_dates] turn off, default=" + DEFAULT_DO_ROLL_DATES + NIEUtil.NL + NIEUtil.NL ; msg = msg + RunLogBasicImpl.getVerbosityLevelDescriptions( true, true, true, true ); getRunLogObject().fatalErrorMsg( kClassName, kFName, msg ); System.exit( 1 ); } /////////////////////////////////////////////////////////// private static final void __Main_Logic__() {} public void run() { final String kFName = "run"; try { mHadError = false; commonInit(); // nukeTablesIfRequested(); // Now process them int count = doProcessRecords( mDataIn ); statusMsg( kFName, "Inserted " + count + " records" ); // Do we have any cleanup work to do? if( count>0 ) { if( getDoPreviewOnly() ) { statusMsg( kFName, "Preview mode, so skipping clearing of report cache and any back filling results." ); } // Yes, we really did something else { // Commit if( getDBConfig().getVendorNeedsCommitByDefault() ) { cConnectionUpdate.commit(); } // Backfill missing data if( getDoBackFill() ) { statusMsg( kFName, "Backfilling any missing results ..." ); mBackFiller.run(); } else { statusMsg( kFName, "Configured to NOT Backfill missing results." ); } // Roll random records right up to the minute // Since they were random, the latest could be a few days old if( getDoRollDates() ) { statusMsg( kFName, "Rolling record dates forward ..." ); mRoller.run(); } else { statusMsg( kFName, "Configured to NOT Roll record dates forward." ); } statusMsg( kFName, "One last time... clearing report cache ..." ); nie.sr2.java_reports.ActivityTrend.clearReportCache( getMainConfigURI() ); } } // Else no sense doing those if no data added else { statusMsg( kFName, "No Records added, so skipping clearing of report cache and any back filling and rolling results."
); } } catch( UtilException de ) { // errorMsg( kFName, "SQL Exception caught: " + se ); stackTrace( kFName, de, "DNS Exception or Init Error" ); mHadError = true; // se.printStackTrace(); } catch( SQLException se ) { // errorMsg( kFName, "SQL Exception caught: " + se ); stackTrace( kFName, se, "SQL Exception" ); mHadError = true; // se.printStackTrace(); } catch( Exception e ) { stackTrace( kFName, e, "General Exception Caught" ); mHadError = true; // fatalErrorMsg( kFName, "General exception caught: " + t ); // e.printStackTrace(); // System.exit(-1); } finally { // quickie cleanup! // lCandidateIPNumbersResultSet = DBConfig.closeResults( lCandidateIPNumbersResultSet, kClassName, kFName, false ); // cStatementRead = DBConfig.closeStatement( cStatementRead, kClassName, kFName, false ); // cConnectionRead = DBConfig.closeConnection( cConnectionRead, kClassName, kFName, false ); // cStatementUpdate = DBConfig.closeStatement( cStatementUpdate, kClassName, kFName, false ); // cConnectionUpdate = DBConfig.closeConnection( cConnectionUpdate, kClassName, kFName, false ); } } // These are things that are done ONCE PER RUN // and this is NOT part of the constructor chain public void commonInit() throws Exception { final String kFName = "commonInit"; final String kExTag = kClassName + '.' + kFName + ": "; initDB(); nukeTablesIfRequested(); initBackFillerIfNeeded(); initRollerNeeded(); openDataFile(); } public void initDB() throws Exception { final String kFName = "initDB"; final String kExTag = kClassName + '.' + kFName + ": "; if( null==getDBConfig() ) throw new Exception( kExTag + "Null DB config." ); statusMsg( kFName, "Opening Database ..." ); // initDatabase(); /*** // TODO: Nice idea from other util method String tableName = getTableName(); if( ! 
getDBConfig().verifyASpecificDBTable( tableName, false, false ) ) throw new DBConfigException( kExTag + "No such table \"" + tableName + "\"" ); ***/ // fOperatingMode = kMinimumMode; // fDriverName = kDefaultDBURL; // cStatementRead = null; // cConnectionRead = null; cStatementUpdate = null; cConnectionUpdate = null; // Configure the database and cache a statement try { // fDBConf = new DBConfig( fConfigFileURI ); // ^^^ moved to setupDBConfigFromURI() // cStatement = getDBConfig().createStatement(); debugMsg( kFName, "getDBConfig()=" + getDBConfig() ); // Object [] objs = getDBConfig().createStatement(); // cStatementRead = (Statement) objs[0]; // cConnectionRead = (Connection) objs[1]; Object [] objs = getDBConfig().createStatement(); cStatementUpdate = (Statement) objs[0]; cConnectionUpdate = (Connection) objs[1]; } catch( Exception e ) { stackTrace( kFName, e, "Exception while caching database statements" ); throw new UtilException( kExTag + "Error caching statement: " + e ); } } void initBackFillerIfNeeded() throws Exception { if( getDoBackFill() ) { mBackFiller = new BackfillMatchCounts( getMainConfig() ); mBackFiller.setUseGoogleInstead( getUseGoogleInstead() ); mBackFiller.setSitePrefix( getSitePrefix() ); // mBackFiller.commonInit(); // ^^^ No, back.run() will take care of this } } void initRollerNeeded() throws Exception { if( getDoRollDates() ) { mRoller = new RollDates( getMainConfig() ); } } void nukeTablesIfRequested() throws Exception { final String kFName = "nukeTablesIfRequested"; if( mNukeLog ) { statusMsg( kFName, "Clearing old log records from " + getLogTableName() ); String sql = "DELETE FROM " + getLogTableName(); int results = cStatementUpdate.executeUpdate( sql ); if( 0 == results ) warningMsg( kFName, "No records deleted from search log " + getLogTableName() ); else statusMsg( kFName, "" + results + " records deleted from search log " + getLogTableName() ); } else { statusMsg( kFName, "Combining with old log records from " + 
getLogTableName() ); } if( mNukeDns ) { statusMsg( kFName, "Clearing old dns records from " + DNSLookup2.getDomainTableName() ); String sql = "DELETE FROM " + DNSLookup2.getDomainTableName(); int results = cStatementUpdate.executeUpdate( sql ); if( 0 == results ) warningMsg( kFName, "No records deleted from DNS cache " + DNSLookup2.getDomainTableName() ); else statusMsg( kFName, "" + results + " records deleted from DNS cache " + DNSLookup2.getDomainTableName() ); } else { statusMsg( kFName, "Combining with old DNS records from " + DNSLookup2.getDomainTableName() ); } } void openDataFile() throws IOException { final String kFName = "openDataFile"; statusMsg( kFName, "Opening data file: " + getDataURI() ); mDataIn = NIEUtil.openURIReadChar( getDataURI() ); // NIEUtil.fetchURIContentsLines( getDataURI() ) // NIEUtil.fetchURIContentsChar( getDataURI() ) } int doProcessRecords( LineNumberReader fin ) throws Exception { final String kFName = "doProcessRecords"; final int kReportInterval = 100; if( null == fin ) { errorMsg( kFName, "Null line reader passed in, exiting method." ); return -1; } // Process the rest of the lines int lineCounter = 1; int recordCounter = 0; int lastCountReported = 0; String currClientIP = null; String currClientName = null; String currRandIPPrefix = null; String currRandNameSuffix = null; boolean randEveryRecord = false; String line = null; // For each line in the file while( null != (line = fin.readLine()) ) { lineCounter = fin.getLineNumber(); // lineCounter++; // line = NIEUtil.trimmedStringOrNull( line ); // DON'T TRIM at this point! // You will lose the initial indenting. 
// and Java doesn't have just a Right Trim rtrim // Some sanity checking if( null==line ) { currClientIP = currClientName = currRandIPPrefix = currRandNameSuffix = null; continue; } // If the line starts with #, then it's a comment if( line.startsWith("#") ) { statusMsg( kFName, "Skipping comment line # " + lineCounter ); continue; } // Get the values // Vector values = NIEUtil.parseCSVLine( line ); Vector values = NIEUtil.parseTabDelimLine( line ); // Some sanity checking if( null==values || values.isEmpty() ) { statusMsg( kFName, "Skipping empty line (1) # " + lineCounter ); currClientIP = currClientName = currRandIPPrefix = currRandNameSuffix = null; randEveryRecord = false; continue; } String val1 = NIEUtil.trimmedStringOrNull( (String) values.get(0) ); // statusMsg( kFName, "val1 = '" + val1 + "'" ); // Is this a client header line? if( null!=val1 ) { // statusMsg( kFName, "Header Line # " + lineCounter ); // Is it a random client directive? if( val1.startsWith("*") || val1.startsWith("+") ) { // Clear out explicit names, which we are no longer using currClientIP = currClientName = null; // And for now, also clear out the random seeds, which we are about to set with new values currRandIPPrefix = currRandNameSuffix = null; // Randomize EVERY record if( val1.startsWith("+") ) randEveryRecord = true; // Assign the batch to a random person else randEveryRecord = false; // Is there a base name? if( val1.length()>1 ) { // If this fails, it'll be null, and that's fine currRandNameSuffix = NIEUtil.trimmedStringOrNull( val1.substring(1) ); } // Else totally random // Is there a seed TLD IP?
// This is the 123 in 123.xx.xx.xx IP address // first part of an IPv4 class C address // 1 - 254 (0 and 255 are not valid either) if( values.size()>1 ) { // Skip trailing comments if( ((String)values.get(1)).trim().startsWith("#") ) continue; if( null==currRandNameSuffix ) { errorMsg( kFName, "Can't assign IP base for a completely wildcarded domain, on line # " + lineCounter + " of file \"" + getDataURI() + "\" (1)" + NIEUtil.NL + " Must be one of:" + NIEUtil.NL + " *.domain.com(tab)123" + NIEUtil.NL + " *.domain.com(end-of-line)" + NIEUtil.NL + " *(end-of-line)" + NIEUtil.NL + " Will continue reading any subsequent lines." + " Line=\"" + line + "\"" ); currClientIP = currClientName = currRandIPPrefix = currRandNameSuffix = null; randEveryRecord = false; continue; } // Get the number if( values.size()>2 ) { String tmpStr = NIEUtil.trimmedStringOrNull( (String) values.get(2) ); if( null!=tmpStr && ! tmpStr.startsWith("#") ) currRandIPPrefix = tmpStr; } if( null!=currRandIPPrefix ) { int tmpInt = NIEUtil.stringToIntOrDefaultValue( currRandIPPrefix, -1, false, true ); if( tmpInt < 1 || tmpInt > 254 ) { currRandIPPrefix = null; errorMsg( kFName, "Invalid IP prefix on line # " + lineCounter + " of file \"" + getDataURI() + "\"" + " Must be in the range of 1 through 254 (the left field of a Class-C IPv4 address)" + " Will just use default values, and will continue to read any subsequent lines." ); currClientIP = currClientName = currRandIPPrefix = currRandNameSuffix = null; randEveryRecord = false; continue; } } } // Else totally random TLD IP prefix // Just leave them all null } // Else it's an explicit client // TODO: explicit clients are never stored!
else { // Clear out the random seeds, which we are no longer using currRandIPPrefix = currRandNameSuffix = null; // And for now clear out explicit names, which we are about to set with new values currClientIP = currClientName = null; randEveryRecord = false; } } // Else it's data else { // statusMsg( kFName, "Data Line # " + lineCounter ); if( values.size() < 2 ) { warningMsg( kFName, "Skipping apparently blank line # " + lineCounter + " of file \"" + getDataURI() + "\" (1)" + " (it had an initial indent, but nothing else, comment out to avoid this warning)" + " Will continue reading any subsequent lines." + " Line=\"" + line + "\"" ); currClientIP = currClientName = currRandIPPrefix = currRandNameSuffix = null; continue; } // Query // ----- // word is get(1) String query = NIEUtil.trimmedStringOrNull( (String) values.get(1) ); if( null == query ) { warningMsg( kFName, "Skipping apparently blank line # " + lineCounter + " of file \"" + getDataURI() + "\" (2)" + " (it had one or more indents, but nothing else, comment out to avoid this warning)" + " Will continue reading any subsequent lines." + " Line=\"" + line + "\"" ); currClientIP = currClientName = currRandIPPrefix = currRandNameSuffix = null; continue; } // Count (how many times to run it) // ----- int count = -1; // count is get(2) if specified if( values.size() > 2 ) { String countStr = NIEUtil.trimmedStringOrNull( (String) values.get(2) ); count = NIEUtil.stringToIntOrDefaultValue( countStr, -1, false, true ); } // No explicit count given, try to calculate one if( count <= 0 ) { count = wordToNumber( query ); // If still negative, complain if( count <= 0 ) { errorMsg( kFName, "No valid explicit nor implicit count for line # " + lineCounter + " of file \"" + getDataURI() + "\"" + " This is the NUMBER OF TIMES the query should be run." + " If you want to show 0 RESULTS, please put it in the NEXT field." + " Will continue reading any subsequent lines."
+ " Line=\"" + line + "\"" ); continue; } } // Matched (how many documents matched) // ------- int matched = -1; // matched is get(3) if specified if( values.size() > 3 ) { String matchedStr = NIEUtil.trimmedStringOrNull( (String) values.get(3) ); matched = NIEUtil.stringToIntOrDefaultValue( matchedStr, -1, false, true ); } // No explicit matched count given, we'll run the query // -1 means automatic // TODO: -2 means use google public, but would need prefix // Searched (how many documents were searched in total, many don't give this anyway) // -------- int searched = -1; // number searched is get(4) if specified if( values.size() > 4 ) { String searchedStr = NIEUtil.trimmedStringOrNull( (String) values.get(4) ); searched = NIEUtil.stringToIntOrDefaultValue( searchedStr, -1, false, true ); } // At this point we have a query and a desired count // Values query and count are set // Now we need to "run" them, and attribute them to a client int tmpCount = generateLogEntries( query, count, matched, searched, currClientIP, currClientName, currRandIPPrefix, currRandNameSuffix, randEveryRecord ); // Tabulate if( tmpCount > 0 ) { recordCounter += tmpCount; if( recordCounter >= lastCountReported + kReportInterval ) { statusMsg( kFName, "Have added " + recordCounter + " so far." ); lastCountReported = recordCounter; } } else { errorMsg( kFName, "No records generated for line # " + lineCounter + " of file \"" + getDataURI() + "\"" + " Will continue reading any subsequent lines." + " Line=\"" + line + "\"" ); continue; } } } // End of While lines in file statusMsg( kFName, "Done."
+ " Final statistics:" + " # lines read = " + lineCounter + ", entries added = " + recordCounter ); return recordCounter; } int generateLogEntries( String inQuery, int inCount, int optMatched, int optSearched, String inClientIP, String inClientName, String inRandIPPrefix, String inRandNameSuffix, boolean inRandomizeEveryRecord // int inWindowDays ) throws Exception { final String kFName = "generateLogEntries"; // Normalize the numeric prefix we will use for IPs String ipPrefix = NIEUtil.trimmedStringOrNull( inRandIPPrefix ); if( null!=ipPrefix ) { if( ipPrefix.endsWith(".*") || ipPrefix.endsWith(".+") ) { if( ipPrefix.length()>2 ) ipPrefix = NIEUtil.trimmedStringOrNull( ipPrefix.substring(0,ipPrefix.length()-2) ); else ipPrefix = null; } else if( ipPrefix.endsWith(".") ) { if( ipPrefix.length()>1 ) ipPrefix = NIEUtil.trimmedStringOrNull( ipPrefix.substring(0,ipPrefix.length()-1) ); else ipPrefix = null; } } if( null==ipPrefix ) ipPrefix = randomIpSegment(); // Normalize the Suffix we will use for names String nameSuffix = NIEUtil.trimmedStringOrNull( inRandNameSuffix ); if( null!=nameSuffix ) { if( nameSuffix.startsWith("*.") || nameSuffix.startsWith("+.") ) { if( nameSuffix.length()>2 ) nameSuffix = NIEUtil.trimmedStringOrNull( nameSuffix.substring(2) ); else nameSuffix = null; } else if( nameSuffix.startsWith(".") ) { if( nameSuffix.length()>1 ) nameSuffix = NIEUtil.trimmedStringOrNull( nameSuffix.substring(1) ); else nameSuffix = null; } } if( null==nameSuffix ) nameSuffix = numStringToWords( ipPrefix ) + ".com"; // Need to get client to attribute this to String clientIPToUse = null; String clientNameToUse = null; // Have both, just use them if( null!=inClientIP && null!=inClientName ) { // Good, just copy over clientIPToUse = inClientIP; clientNameToUse = inClientName; } // Have IP, but need to generate a name else if( null!=inClientIP ) { clientIPToUse = inClientIP; String fragment = ipToNameFragment( inClientIP ); if( null==fragment ) { errorMsg( kFName, 
"Could not get fragment, can't log (1)" ); return -1; } clientNameToUse = fragment + '.' + nameSuffix; } // Else use suffix and prefix to generate IP and name // And those will have been set up top // ipPrefix nameSuffix else { String ipTriple = randomIpTriple(); clientIPToUse = ipPrefix + '.' + ipTriple; String fragment = ipToNameFragment( clientIPToUse ); if( null==fragment ) { errorMsg( kFName, "Could not get fragment, can't log (2)" ); return -1; } clientNameToUse = fragment + '.' + nameSuffix; } // The interval of time in milliseconds we'll spread out // the queries long dateRangeMs = (long) getNumDaysInWindow() * NIEUtil.MS_PER_DAY; int recsAdded = 0; // For the desired number of times... for( int i=1; i<=inCount; i++ ) { // Generate a random date long subtractMs = (long)( Math.random() * (double) dateRangeMs ); long newMs = NIEUtil.getCurrTimeMillis() - subtractMs; java.sql.Timestamp newDate = new java.sql.Timestamp( newMs ); if( inRandomizeEveryRecord ) { // Sanity check if( null!=inClientIP && null!=inClientName ) { warningMsg( kFName, "Can't randomize records when specific client IP and Name passed in, clearing flag." ); inRandomizeEveryRecord = false; } // Have IP, but need to generate a name // TODO: for now line parsing can't get to this state but may do later else if( null!=inClientIP ) { clientIPToUse = inClientIP; String fragment = ipToNameFragment( inClientIP ); if( null==fragment ) { errorMsg( kFName, "Could not get fragment, can't log (1)" ); return -1; } clientNameToUse = fragment + '.' + nameSuffix; } // Else use suffix and prefix to generate IP and name // And those will have been set up top // ipPrefix nameSuffix else { String ipTriple = randomIpTriple(); clientIPToUse = ipPrefix + '.' + ipTriple; String fragment = ipToNameFragment( clientIPToUse ); if( null==fragment ) { errorMsg( kFName, "Could not get fragment, can't log (2)" ); return -1; } clientNameToUse = fragment + '.' 
+ nameSuffix; } } int thisResult = -1; if( getDoPreviewOnly() ) { statusMsg( kFName, "Would add record:" + NIEUtil.NL + "\tQuery='" + inQuery + "'" + NIEUtil.NL + "\tMatched='" + optMatched + "'" + NIEUtil.NL + "\tSearched='" + optSearched + "'" + NIEUtil.NL + "\tDate/time='" + newDate + "'" + NIEUtil.NL + "\tIPaddr='" + clientIPToUse + "'" + NIEUtil.NL + "\tIPname='" + clientNameToUse + "'" ); thisResult = 1; } // Actually do it else { thisResult = insertLogRecord( inQuery, optMatched, optSearched, newDate, clientIPToUse, clientNameToUse ); // Might need to do DNS for each record if( inRandomizeEveryRecord ) { if( thisResult > 0 ) { updateDnsTable( clientIPToUse, clientNameToUse ); } else { warningMsg( kFName, "No records added for query '" + inQuery + "', no DNS to update (1)." ); } } } if( thisResult >= 0 ) recsAdded += thisResult; } // End for requested number of times // Update DNS if we've added any records // And haven't done it already if( ! inRandomizeEveryRecord ) { if( recsAdded > 0 ) { statusMsg( kFName, "Adding DNS for query batch '" + inQuery + "' " + clientIPToUse + '/' + clientNameToUse ); updateDnsTable( clientIPToUse, clientNameToUse ); } else { warningMsg( kFName, "No records added for query '" + inQuery + "', no DNS to update (2)." 
); } } return recsAdded; } // Based loosely on nie.sn.SearchLogger int insertLogRecord( String inQuery, int optMatched, int optSearched, java.sql.Timestamp inTimestamp, String clientIPToUse, String clientNameToUse ) throws Exception { final String kFName = "insertLogRecord"; nie.core.DBUpdateStmt statement = new nie.core.DBUpdateStmt( nie.core.DBConfig.LOG_TABLE, getDBConfig() ); // Basic info // ========== // Site ID statement.setValue( nie.sn.SearchLogger.SITE_ID_DB_FIELD, getSiteIdToUse() ); // Transaction Type statement.setValue( nie.sn.SearchLogger.TRANS_TYPE_DB_FIELD, nie.sn.SearchLogger.TRANS_TYPE_SEARCH ); // Search Info // ============ // We now support sentinels that represent a NULL search // So look for those sentinels, and force to null if( getMainConfig().isNullSearch(inQuery) ) inQuery = null; // Don't whine about nulls String normQuery = nie.sn.SearchLogger.normalizeString( inQuery, false ); // Log them, if present if( null != inQuery ) statement.setValue( nie.sn.SearchLogger.ORIGINAL_SEARCH_DB_FIELD_NAME, inQuery ); if( null != normQuery ) statement.setValue( nie.sn.SearchLogger.NORMALIZED_SEARCH_DB_FIELD_NAME, normQuery ); // Conditionally log them // TODO: If not, backfill if( optMatched >= 0 ) statement.setValue( nie.sn.SearchLogger.NUM_FOUND_DB_FIELD_NAME, optMatched ); if( optSearched >= 0 ) statement.setValue( nie.sn.SearchLogger.NUM_SEARCHED_DB_FIELD_NAME, optSearched ); // No SearchNames Status // User Info // ==================== // CLIENT_HOST if( null != clientIPToUse ) statement.setValue( nie.sn.SearchLogger.CLIENT_IP_DB_FIELD_NAME, clientIPToUse ); // No Advertising // No Click-Through // Date / Time Info // ==================== // java.sql.Timestamp currentTime = new Timestamp( // System.currentTimeMillis() // ); statement.setValue( nie.sn.SearchLogger.START_TIME_DB_FIELD_NAME, inTimestamp ); statement.setValue( nie.sn.SearchLogger.END_TIME_DB_FIELD_NAME, inTimestamp ); // Send the updates // USUALLY, if there's a problem,
we get a _SQL_ Exception // which we let flow upwards, like all other errors. // However, a DBConfigException indicates that the connection // is temporarily down, so we should not freak out as loudly int numRows = -1; boolean badError = false; Exception badException = null; try { // send the update numRows = statement.sendUpdate( true ); } // This exception means we are just not trying right now // typically this would be caught above catch( Exception e ) { errorMsg( kFName, "There was a problem executing the update," + " or perhaps the database connection is down" + "; this may be temporary (2)." + " This particular search will not be logged." + " Error: " + e ); return -1; } return numRows; } boolean updateDnsTable( String inIPNumber, String inHostName ) throws Exception { final String kFName = "updateDnsTable"; // Object [] parts = getDBConfig().createStatement(); // Statement stmt = (Statement) parts[0]; boolean result = false; if( getDoPreviewOnly() ) { statusMsg( kFName, "Would do update for '" + inIPNumber + "'='" + inHostName + "'" ); result = true; } else { result = DNSLookup2.staticUpdateDnsTable( getDBConfig(), cStatementUpdate, // stmt, inIPNumber, inHostName, true ); } // getDBConfig().closeStatement( stmt, kClassName, kFName, true ); return result; } public boolean _updateRecord( String inQueryText, int inCount, int optSearchedCount ) { final String kFName = "updateRecord"; inQueryText = NIEUtil.trimmedLowerStringOrNull( inQueryText ); if( null==inQueryText ) { errorMsg( kFName, "Null search passed in, returning false." ); return false; } if( inCount < 0 ) { errorMsg( kFName, "Negative count passed in, returning false." ); return false; } boolean debug = shouldDoDebugMsg( kFName ); boolean info = shouldDoInfoMsg( kFName ); boolean outSuccess = false; int updateCount = 0; String lastSql = null; try { String updateSql = (null!=inQueryText) ? 
kUpdateSQL_1 : kUpdateSQL_2; if( fOperatingMode == kOverwriteAllMode ) updateSql += kUpdateSQL_Suffix; // Put the actual values in the update statement // TODO: we could use a prepared statement instead updateSql = replaceAll( updateSql, VALUE_MARKER_1, ""+inCount ); if( optSearchedCount >= 0 ) { updateSql = replaceAll( updateSql, VALUE_MARKER_2, ""+optSearchedCount ); } else { updateSql = replaceAll( updateSql, VALUE_MARKER_2, "NULL" ); } if( null!=inQueryText ) updateSql = replaceAll( updateSql, KEY_MARKER, NIEUtil.sqlEscapeString(inQueryText,true) ); if( debug ) debugMsg( kFName, "Trying to update using SQL statement: \"" + updateSql + "\"" ); lastSql = updateSql; cStatementUpdate.execute( updateSql ); updateCount = cStatementUpdate.getUpdateCount(); if( debug ) debugMsg( kFName, "Update count = " + updateCount ); if( updateCount > 0 ) outSuccess = true; /*** else { // If update didn't work, try inserting if( debug ) debugMsg( kFName, "Update failed, will try insert." ); // Work on the INSERT statement // Host name String lSQLInsertText = replaceAll( insertSql, kHOSTNAME, inHostName ); // IP address lSQLInsertText = replaceAll( lSQLInsertText, kIPNUMBER, inIPNumber ); lSQLInsertText = replaceAll( lSQLInsertText, kRESOLVED, kWAS_NOT_RESOLVED ); // Resolved yes / no (actually 0/1) lSQLInsertText = replaceAll( lSQLInsertText, kRESOLVED, ( inWasResolved ? kWAS_RESOLVED : kWAS_NOT_RESOLVED ) ); if( debug ) debugMsg( kFName, "Trying to insert using SQL statement: \"" + lSQLInsertText + "\"" ); lastSql = lSQLInsertText; cStatementUpdate.execute( lSQLInsertText ); updateCount = cStatementUpdate.getUpdateCount(); if( debug ) debugMsg( kFName, "Insert count = " + updateCount ); if( updateCount > 0 ) outSuccess = true; else { outSuccess = false; debugMsg( kFName, "Insert also failed; returning false" ); } } ***/ } catch( SQLException se ) { outSuccess = false; errorMsg( kFName, "Error executing SQL \"" + lastSql + "\"" + "; returning false." 
+ " SQL Exception: " + se ); // fatalErrorMsg( kFName, msg ); //System.exit( -1 ); } return outSuccess; } /////////////////////////////////////////////////////////// private static final void __Utility__() {} static String randomIpSegment() { return "" + ( (int) ( Math.random() * 254.0 ) + 1 ); } static String randomIpTriple() { return randomIpSegment() + '.' + randomIpSegment() + '.' + randomIpSegment(); } public static String ipToNameFragment( String inStr ) { final String kFName = "ipToNameFragment"; final String prefix = "ws-"; inStr = NIEUtil.trimmedStringOrNull( inStr ); if( null==inStr ) { errorMsg( kFName, "Null/empty string passed in, returning null." ); return null; } return prefix + NIEUtil.replaceChars( inStr, '.' , '-' ); } // Convert "123" into "onetwothree" // Doesn't need to be fancy, don't need "onehundredtwentythree" public static String numStringToWords( String inStr ) { final String kFName = "numStringToWords"; final String[] digitNames = { "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine" }; inStr = NIEUtil.trimmedStringOrNull( inStr ); if( null==inStr ) { errorMsg( kFName, "Null/empty string passed in, returning NULL" ); return null; } StringBuffer buff = new StringBuffer(); char [] digits = inStr.toCharArray(); for( int i=0; i<digits.length; i++ ) { char digit = digits[i]; if( digit < '0' || digit > '9' ) { errorMsg( kFName, "Invalid digit at OFFSET " + i + ", string=\""+inStr+'"' + ", returning NULL" ); return null; } int nameOffset = digit - '0'; buff.append( digitNames[nameOffset] ); } // statusMsg( kFName, "Input/output = "+inStr+'/'+new String(buff) ); return new String( buff ); } // TODO: replace with call to standard NIEUtil equivalent static String replaceAll( String inSourceString, String inSearchString, String inReplacementString ) { int lPosition = inSourceString.indexOf( inSearchString ); if( lPosition >= 0 ) { String lPrefixString = inSourceString.substring( 0, lPosition ); String lPostString = 
replaceAll( inSourceString.substring( lPosition + inSearchString.length() ), inSearchString, inReplacementString ); return lPrefixString + inReplacementString + lPostString; } else return inSourceString; } // Return a string in the range of 1 to 254, using modulo math // use the input word as a seed // Edge cases: a return value from wordToNumber of -1, 0 or 1 will // map to the string "1" public static String wordToIPSegmentNumberStr( String inWord ) { int val = wordToNumber( inWord ); val = val<0 ? val * -1 : val; val = 0==val ? 1 : val; val = ((val-1) % 254) + 1; return "" + val; } // Given a word, calculate its sum, giving each letter // its ordinal value from the alphabet, regardless of case, // and ignoring anything else // 0 or -1 indicates a word with no letters // "a" = 1, "b" = 2, "c" = 3, "aa" = 2, "bc" = 5 (2+3) public static int wordToNumber( String inWord ) { final String kFName = "wordToNumber"; inWord = NIEUtil.trimmedLowerStringOrNull( inWord ); if( null==inWord ) { errorMsg( kFName, "Null/empty word passed in, returning -1" ); return -1; } int outVal = -1; char [] symbols = inWord.toCharArray(); for( int i=0; i<symbols.length; i++ ) { char c = symbols[i]; if( c < 'a' || c > 'z' ) continue; outVal = outVal < 0 ?
0 : outVal; outVal += ( c - 'a' + 1 ); } if( outVal < 0 ) errorMsg( kFName, "Word had no chars between a and z, returning -1" ); return outVal; } /////////////////////////////////////////////////////////// private static final void __Simple_Getters_and_Setters__() {} int getNumDaysInWindow() { return mDaysInWindow; } boolean getUseGoogleInstead() { return mUseGoogleInstead; } void setUseGoogleInstead() { setUseGoogleInstead( true ); } void setUseGoogleInstead( boolean inFlag ) { mUseGoogleInstead = inFlag; } String getSitePrefix() { return mSitePrefix; } void setSitePrefix( String inPrefx ) { mSitePrefix = NIEUtil.trimmedStringOrNull( inPrefx ); } boolean getDoBackFill() { return mDoBackFill; } void setDoBackFill() { setDoBackFill( true ); } void setDoBackFill( boolean inFlag ) { mDoBackFill = inFlag; } boolean getDoRollDates() { return mDoRollDates; } void setDoRollDates() { setDoRollDates( true ); } void setDoRollDates( boolean inFlag ) { mDoRollDates = inFlag; } boolean getDoPreviewOnly() { return mDoPreviewOnly; } void setDoPreviewOnly() { setDoPreviewOnly( true ); } void setDoPreviewOnly( boolean inFlag ) { mDoPreviewOnly = inFlag; } String getDataURI() { return mDataURI; } public static String getLogTableName() { return DBConfig.LOG_TABLE; } int getSiteIdToUse() { if( mSiteId > 0 ) return mSiteId; return getMainConfig().getSearchLogger().getConfiguredSiteID(); } /////////////////////////////////////////////////////////// // // Build a tree set from a result set. If the input result // set is null, then we return an initialized, but empty, // tree set. 
// /////////////////////////////////////////////////////////// // private TreeSet buildIPTreeSet( ResultSet inResultSet ) private Hashtable _buildIPTreeSet( ResultSet inResultSet ) { final String kFName = "buildIPTreeSet"; boolean debug = shouldDoDebugMsg( kFName ); // if( gComparator == null ) // gComparator = new DotNotationComparator(); // TreeSet lTreeSet = new TreeSet( gComparator ); Hashtable outHash = new Hashtable(); if( inResultSet != null ) { try { while( inResultSet.next() ) { String lIPNumberString; lIPNumberString = inResultSet.getString( 1 ); if( debug ) debugMsg( kFName, "Converting '" + lIPNumberString + "'" ); // DotNotation lDotNotation = // new DotNotation( lIPNumberString ); // lTreeSet.add( lDotNotation ); // lTreeSet.add( lIPNumberString ); outHash.put( lIPNumberString, lIPNumberString ); } } catch( SQLException se ) { fatalErrorMsg( kFName, "SQL Exception caught: " + se ); se.printStackTrace(); System.exit( -1 ); } } // return lTreeSet; return outHash; } // Needs to operate from command line or as a thread under main app DBConfig getDBConfig() { final String kFName = "getDBConfig"; // if( null!=fDBConf ) // return fDBConf; if( null!=getMainConfig() ) return getMainConfig().getDBConfig(); return null; // NOT return _fDBConf } public void setMainConfig( SearchTuningConfig inMainConfig ) { mMainConfig = inMainConfig; } SearchTuningConfig getMainConfig() { return mMainConfig; } String getMainConfigURI() { if( null!=fConfigFileURI ) return fConfigFileURI; if( null != getMainConfig() ) return getMainConfig().getConfigFileURI(); return null; } void _setDBConfig( DBConfig inDB ) { _fDBConf = inDB; } /////////////////////////////////////////////////////////// private static final void __Logging__() {} // This gets us to the logging object private static RunLogInterface getRunLogObject() { return RunLogBasicImpl.getRunLogObject(); } // This gets us essentially the same thing, but casted // to let us to implementation specific things like parse // 
command line options private static RunLogBasicImpl getRunLogImplObject() { // return RunLogBasicImpl.getRunLogObject(); return RunLogBasicImpl.getRunLogImplObject(); } protected boolean stackTrace( String inFromRoutine, Exception e, String optMessage ) { return getRunLogObject().stackTrace( kClassName, inFromRoutine, e, optMessage ); } private static boolean statusMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().statusMsg( kClassName, inFromRoutine, inMessage ); } private static boolean transactionStatusMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().transactionStatusMsg( kClassName, inFromRoutine, inMessage ); } private static boolean shouldDoTransactionStatusMsg( String inFromRoutine ) { return getRunLogObject().shouldDoTransactionStatusMsg( kClassName, inFromRoutine ); } private static boolean infoMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().infoMsg( kClassName, inFromRoutine, inMessage ); } private static boolean debugMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().debugMsg( kClassName, inFromRoutine, inMessage ); } private static boolean shouldDoDebugMsg( String inFromRoutine ) { return getRunLogObject().shouldDoDebugMsg( kClassName, inFromRoutine ); } private static boolean traceMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().traceMsg( kClassName, inFromRoutine, inMessage ); } private static boolean shouldDoTraceMsg( String inFromRoutine ) { return getRunLogObject().shouldDoTraceMsg( kClassName, inFromRoutine ); } private static boolean shouldDoInfoMsg( String inFromRoutine ) { return getRunLogObject().shouldDoInfoMsg( kClassName, inFromRoutine ); } private static boolean warningMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().warningMsg( kClassName, inFromRoutine, inMessage ); } private static boolean errorMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().errorMsg( kClassName, 
inFromRoutine, inMessage ); } private static boolean fatalErrorMsg( String inFromRoutine, String inMessage ) { return getRunLogObject().fatalErrorMsg( kClassName, inFromRoutine, inMessage ); } /////////////////////////////////////////////////////////// private static final void __Fields_and_Constants__() {} // private JDOMHelper fMainElem; /////////////////////////////////////////////////////////// // // Private members... // /////////////////////////////////////////////////////////// SearchTuningConfig mMainConfig; String mSitePrefix; private String fConfigFileURI; private String mDataURI; private LineNumberReader mDataIn; private String _mErrorMsg; BackfillMatchCounts mBackFiller; RollDates mRoller; boolean mUseGoogleInstead = DEFAULT_USE_GOOGLE_INSTEAD; static final boolean DEFAULT_USE_GOOGLE_INSTEAD = false; boolean mDoPreviewOnly = false; boolean mDoBackFill = DEFAULT_DO_BACK_FILL; static final boolean DEFAULT_DO_BACK_FILL = true; boolean mDoRollDates = DEFAULT_DO_ROLL_DATES; static final boolean DEFAULT_DO_ROLL_DATES = true; int mSiteId = -1; // Usually we use what's in the main config // int mSiteId = DEFAULT_SITE_ID; // static final int DEFAULT_SITE_ID = 10; boolean mHadError; private boolean mNukeDns = DEFAULT_NUKE_DNS; private boolean mNukeLog = DEFAULT_NUKE_LOG; public static final boolean DEFAULT_NUKE_DNS = false; public static final boolean DEFAULT_NUKE_LOG = false; // The query we will use to pull records String _fSQL; // The primary database configuration DBConfig _fDBConf; Statement _cStatementRead; Connection _cConnectionRead; Statement cStatementUpdate; Connection cConnectionUpdate; // Which search engine to use, we do NOT always use the // main configured one, for example we might use Google instead SearchEngineConfig _mTargetSearchEngineConfig; int fOperatingMode = DEFAULT_MODE; // static final String DEFAULT_NAME_SUFFIX = "somedomain.com"; // static final String DEFAULT_IP_PREFIX = "198"; public static final String _GOOGLE_CONFIG_URI = 
AuxIOInfo.SYSTEM_RESOURCE_PREFIX + "static_files/predefined_configs/search_engine_google_public.xml"; public static final String _SYSTEM_RESOURCE_BASE_CLASS = nie.config_ui.Configurator2.kFullClassName; // = "nie.config_ui.Configurator2"; static final int kErrorMode = 0; static final int kNewOnlyMode = 1; static final int _kRetryMode = 2; static final int _kRefreshMode = 3; static final int kOverwriteAllMode = 4; static final int DEFAULT_MODE = kNewOnlyMode; int mDaysInWindow = DEFAULT_WINDOW_DAYS; static final int DEFAULT_WINDOW_DAYS = 95; boolean fOverwriteGoodWithNotResolved = false; //////////////////////////////////////////////////////////// // // SQL Statements... // //////////////////////////////////////////////////////////// static final String SQL_NEW_ONLY = // "SELECT UNIQUE client_host FROM " + getLogTableName() + " log" "SELECT DISTINCT original_query FROM " + getLogTableName() + " log" + " WHERE num_results IS NULL" ; static final String SQL_ALL_RECORDS = "SELECT DISTINCT original_query FROM " + getLogTableName() ; static final String KEY_MARKER = "VAR_KEY"; static final String VALUE_MARKER_1 = "VAR_VALUE_1"; static final String VALUE_MARKER_2 = "VAR_VALUE_2"; // no inserts in this util: // static final String kInsertDomainNameSQL = static final String kUpdateSQL_1 = "UPDATE " + getLogTableName() + " SET num_results = " + VALUE_MARKER_1 + ", num_searched = " + VALUE_MARKER_2 + " WHERE original_query = '" + NIEUtil.sqlEscapeString( KEY_MARKER, true ) + "'" ; static final String kUpdateSQL_2 = "UPDATE " + getLogTableName() + " SET num_results = " + VALUE_MARKER_1 + ", num_searched = " + VALUE_MARKER_2 + " WHERE original_query IS NULL" ; static final String kUpdateSQL_Suffix = " AND num_results IS NULL" ; // public static boolean _gDebug = false; }
Design of Fuzzy Kalman Filter for Air-Gap Disturbance Attenuation of Magnetic Levitation System In this paper, a feedback controller using a fuzzy Kalman filter that attenuates air-gap disturbance in a magnetic levitation system is proposed. One of the core technologies of the magnetic levitation system is levitation control that maintains a constant air-gap. A magnetic levitation system that shows unsatisfactory performance under air-gap disturbances caused by rail irregularities is modeled. A Takagi-Sugeno fuzzy system is used to model the nonlinear magnetic levitation system, and a Kalman filter is utilized to improve the modeling results; combining the two yields the fuzzy Kalman filter. This approach addresses limitations of conventional state-feedback control. Finally, the effectiveness of the proposed method is demonstrated by simulation.
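For readers unfamiliar with the underlying recursion, a minimal scalar Kalman filter illustrates the predict/update cycle that the proposed fuzzy Kalman filter builds on. This is only an illustrative sketch: the random-walk model and the noise values q and r are assumptions, not taken from the paper, whose filter is multivariate and blended across Takagi-Sugeno local models.

```python
# Minimal scalar Kalman filter sketch (illustrative only; the random-walk
# model and noise values q, r are assumptions, not from the paper).
def kalman_step(x, p, z, q=1e-4, r=0.1):
    """One predict/update cycle for a scalar state with measurement z."""
    # Predict: state assumed constant; process noise q inflates the variance.
    x_pred, p_pred = x, p + q
    # Update: Kalman gain k weighs the measurement against the prediction.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Filter a few noisy measurements of a true value near 1.0
x, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, p = kalman_step(x, p, z)
```

A Takagi-Sugeno fuzzy Kalman filter would run several such local filters and blend their estimates by rule firing strengths; the scalar cycle above is only the building block.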
// New will create and return a new YStore. An error will be returned if the // store already exists and contains invalid data. func New(path string) (*YStore, error) { if _, ok := currentStores[path]; !ok { currentStores[path] = &YStore{ path: filepath.Clean(path), data: make(map[string]map[string]string), } } return currentStores[path], currentStores[path].read() }
#include <stdio.h>
#include <limits.h>

/* Read n numbers and print their minimum, maximum and sum.
   Sentinels use LLONG_MAX/LLONG_MIN so values outside the
   original hard-coded +/-1000000 range are handled correctly. */
int main() {
    long long n, x, min = LLONG_MAX, max = LLONG_MIN, sum = 0;
    scanf("%lld", &n);
    for (long long i = 0; i < n; i++) {
        scanf("%lld", &x);
        if (x < min) min = x;
        if (x > max) max = x;
        sum += x;
    }
    printf("%lld %lld %lld\n", min, max, sum);
    return 0;
}
Hundreds of people protested at the Tennessee state capitol today ahead of and during Gov. Bill Haslam's State of the State address. The protest was actually scheduled before President Donald Trump's executive order on refugees last Friday, but the response by the state's GOP leadership — or lack thereof — over the weekend only added fuel to the fire. Protesters swarmed the halls, chanting and singing, forcing legislators to walk a roped-off gauntlet to get to Haslam's address. But don't worry, guys, those protesters aren't really mad at Trump or Haslam or the Legislature — they were paid to be there! That, at least, is the theory of state Sen. Paul Bailey (R-Sparta), who tweeted the following on Monday night: via Twitter Yes, that does really say, "Despite what the media may report several of the protesters admitted that they had been paid to be at the TN Capitol." Who did they admit that to? Bailey did not return tweets or a phone call to answer. This is of course a continuation of a fake news rumor floating around conspiracy-minded conservatives that all Trump protesters are being paid by George Soros, because if you're a liberal billionaire whose party lost the election, that's EXACTLY what you would spend money on. (Insert rolling eyes emoji.) But in this era where facts somehow do not exist, Bailey is not alone — apparently one in three Trump voters think Soros paid all several million women who marched around the world in protest one day after Trump's inauguration. Which means, statistically speaking, Bailey is probably not the only state legislator who believes this. And as Sean Hannity had Newt Gingrich on his show Monday night to talk about this very-much-100-percent-made-up thing, it's only going to get worse. Fact(ual) Lives Matter, anyone? (Sad face emoji.)
I hear you. We have plenty of news curation apps on Android and we don't need yet another one. But despite the countless options, there's still room for an app that does its job well, looks good at it, and doesn't try to reinvent the wheel with algorithms and predictions that inevitably fall short of their promise. Source might be this app. Coming from Jacob Klinker of Klinker Apps, the same guy who brought us the Talon Twitter client, the Blur launcher, and EvolveSMS, Source already has a reputation to live up to. It's out in beta, an extremely early beta if you ask me, and you can join in to try it out and give your feedback to Jacob right away. Source works by checking various, well, sources for news articles. At launch, it only supports your Twitter lists and some sort of Twitter-based feeds in various categories like Architecture, Food, Design, and others. Jacob plans on adding Google+, Feedly, and your own RSS feeds to the mix later, with the possibility of charging for some of them through IAPs. The articles are displayed in a river of images and titles. Tap an item and it expands to reveal the text, tap it again to close it. You can also favorite, share articles, and view their original web pages. There's Android Wear support through notifications, in case you so desperately need your fix that you'll read blocks of text on that tiny screen on your wrist. There's no way to search that I can see, but you can filter your feed by source. As for tablet support, it is on Jacob's roadmap. Source's allure is in its Material-inspired interface. Bright color themes, smooth animations when opening and closing articles, expanding bubbles to activate and deactivate options in the settings, all of these give the app a modern feel that would make it sit comfortably on any L-running device. However, Source isn't without its bugs. 
I've noticed duplicates several times already, an issue with selecting and deselecting topics in the settings, and many articles not displaying their text. There also seem to be a few scrolling issues every now and then. Nevertheless, this is a beta, and that's what testing is for. If you're interested in participating in the beta, you must follow these steps:

1. Join Source's Google+ community
2. Click this link to opt into the beta
3. Download Source from the Play Store (note that the app may take a while to show up after you opt in).

+JacobKlinker
use std::fmt::{self, Formatter}; #[derive(Debug)] pub enum Error { Qr(qr_code::types::QrError), Address(bitcoin::util::address::Error), Secp256k1(bitcoin::secp256k1::Error), Miniscript(miniscript::Error), Bmp(qr_code::bmp_monochrome::BmpError), InvalidAddressType, MissingChecksum, MissingMappedKey(String), OnlyPkh, } impl fmt::Display for Error { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { match self { Error::Qr(e) => write!(f, "{:?}", e), Error::Address(e) => write!(f, "{:?}", e), Error::Miniscript(e) => write!(f, "{:?}", e), Error::Secp256k1(e) => write!(f, "{:?}", e), Error::Bmp(e) => write!(f, "{:?}", e), Error::InvalidAddressType => write!(f, "Valid values: wpkh, wsh, pkh, shwpkh"), Error::MissingMappedKey(s) => write!(f, "Missing mapped key for alias {}", s), Error::OnlyPkh => write!(f, "Only *pkh address: wpkh, pkh, shwpkh"), Error::MissingChecksum => write!(f, "Missing checksum"), } } } macro_rules! impl_error { ( $from:ty, $to:ident ) => { impl std::convert::From<$from> for Error { fn from(err: $from) -> Self { Error::$to(err) } } }; } impl_error!(bitcoin::util::address::Error, Address); impl_error!(miniscript::Error, Miniscript); impl_error!(bitcoin::secp256k1::Error, Secp256k1); impl_error!(qr_code::types::QrError, Qr); impl_error!(qr_code::bmp_monochrome::BmpError, Bmp);
Can Australia Please Stop Being Washington’s Bitch And Help Assange Now? Caitlin Johnstone May 22, 2017 The Wall Street Journal has published an editorial titled “The U.S. Can Get Julian Assange” and subtitled “Avoid extradition and use secret services to airlift him to stand trial in America.” This horrifying article, run by one of America’s major mainstream publications, details how US special forces could technically storm the embassy of a sovereign nation, kidnap an Australian journalist who has broken no laws, and drag him back to the States in a way that the editorial’s author claims has legal precedent in America. The mass media propaganda machine of a government that tortures whistleblowers is openly advocating kidnapping an Australian citizen, from an Ecuadorian embassy, in the UK, in order to stop him from traveling to Ecuador. Because he helped show the American people the truth about their government. In what other nation would such a suggestion be perceived as anything but outrageous and unacceptable? As anything but an endorsement of an act of war? And yet Assange’s own country — my country — lets this kind of dialogue continue to escalate without a word. I’ve been to Ecuador. I stayed for three months in 1999 living with locals, travelling by local transport and hitching around the Andes, the Amazon and the coast. It is a beautiful country, so rich in many things — landscape, culture, art, community and heart. But not a lot of money and not a lot of influence and certainly no leverage in terms of military might or trade. When it comes to doing the right thing, they have nothing but heart and soul, nothing but the courage of the underdog. It is so deeply embarrassing to me that Ecuador, a tiny nation with so little power, had to step in and do the job that Australia should have done for Assange.
It’s so shameful that it was not my country showing the kind of “ticker” as we say here that was required in that crucial moment where one of our most significant citizens needed help. Growing up, I thought it was my country that was the plucky underdog that always helped out a mate and set things right. I thought it was my country that had the moral compass and a sense of natural justice that set us apart from our neurotically litigious cousins in the States. I thought we did the right thing because it was the right thing. Such is life, said Ned Kelly, another famous Australian before he was hanged. Such is life. We aren’t a particularly patriotic lot here in Oz. We don’t generally hang our flag in our front yards or sing our national anthem before a football game like the Americans do. If you tell someone you were in the Australian armed forces you don’t get an enthusiastic “Oh, thank you for your service,” you get an “Oh bonza, and what are you doing with yourself now?” And I wonder if that’s not so much a part of our culture as the natural result of a general collective awareness that we haven’t got a whole lot to be proud of these days. Washington says jump, Canberra asks how high. Washington says invade Iraq, we send out boys to die and kill and come back irreparably traumatized for a plutocratic resource conflict on the other side of the planet. We’re America’s bitch. Well, just once I’d like a chance to feel proud of my country. Assange has committed no crimes, nobody has seen a single shred of evidence that he was ever colluding with Russia, and he has brought the light of truth to many parts of the world with nothing to show for it but five years of wrongful imprisonment while his children grew up hearing his name smeared with lies. Among the many great men and women that our nation has produced, none shine brighter than he. Even if he had committed crimes it shouldn’t have made a difference to Australia; all our favorite national heroes are criminals. 
All our favorite battles are the ones we lost. We’ve got a curious fondness for the marginal, the underdog, the slightly bent-and-battered, the lost. He’s not being forsaken by his country because of his legal controversies, he’s being forsaken by his country because our so-called leaders don’t have the guts to stand up to the evils of the American corruption machine. My readers sometimes ask me why I write so much about the American government. This is why. I write about the American government because my nation has no government of its own. We sit here a cowardly, snivelling vassal state, staring at the floor and doing as we’re told while our midget cousin Ecuador stands up to our bully and fights our battles for us. It’d almost be better if we just asked the US to annex us already so at least we’d get an imaginary vote in their pretend democracy. Again, the US government tortures whistleblowers. Assange needs protection before their ghouls sink their claws into him forever. He’s our man and he will not see justice if they get to him. Canberra’s too crammed full of globalist stooges to do it of their own accord, but if enough of us push for this, they’ll cave. Literally the most Aussie thing that has ever happened. Allow me to speak to my people in my mother tongue: Come on Aussies, this is piss weak, this is as soft as a month-old pav from Woollies, what a pack of soft-cock suck-ups you are. Where’s your animal, where’s your digger spirit, where’s your ticker, son? Give us some Aussie mongrel for Christ’s sake! I know you’ve got it in you. Get your head out your arse and give the Yanks some lip and get our boy back home. Youse are acting like a bunch of woolly woofters and youse all know it too. Wake up Australia, this sheila has had a gutful of your bullshit, it’s time to grow a pair. — — — Thanks for reading! 
If you enjoyed this, please consider helping me out by sharing it around, liking me on Facebook, following me on Twitter, or even tossing me some money on Patreon so I can keep this gig up.
//------------------------------------------------------------------------------ // Vive.h // // Authors: <NAME> // //------------------------------------------------------------------------------ #pragma once #include "_MLHTCViveSystem.h" #include <mlModuleIncludes.h> #include <WEMBase/WEMModuleBase/WEMProcessor.h> #include <thread> #include "Vive/Model.h" #include "Vive/Vive.h" ML_START_NAMESPACE class _MLHTCVIVE_EXPORT HTCVive : public WEMProcessor { public: //! Constructor. HTCVive(std::string type="HTCVive"); virtual ~HTCVive(); //! Initializes module after loading. virtual void activateAttachments(); //! Handles field changes of the field \p field. virtual void handleNotification (Field* field); //! Processes input WEM -> modify nodes. virtual void _process(); //------------------------------------------------------------------------------ // Initializes the vive and renders the current model to it. This method is // passed to a thread. The rendering method runs until _run is set to false. // void vive(); //------------------------------------------------------------------------------ // whenever the user wants to stop a thread, this method is called. It sets // _run to false (-> the render function terminates) and joins the thread (-> // the thread gets shut down); note: member declarations inside the class body // must not be qualified with the class name // void killThread(); //------------------------------------------------------------------------------ // whenever the user starts the vr inside MeVisLab, this method is called // (with _run set to true) // void startThread(); private: // button for starting vr NotifyField *_notifyStartVR; // click this whenever the model has changed in MeVisLab NotifyField *_notifyUpdateVR; // button for stopping vr NotifyField *_notifyStopVR; // members for managing vr thread std::thread _viveThread; bool _run; // Implements interface for the runtime type system of the ML. ML_MODULE_CLASS_HEADER(HTCVive) }; ML_END_NAMESPACE
// SyncNodePools keeps the cluster node pools in state with the model
func (cm *ClusterManager) SyncNodePools(clusterModel *model.Cluster) error {
	cm.oci.GetLogger().Infof("Syncing Node Pools states of Cluster[%s]", clusterModel.Name)
	nodePools := clusterModel.NodePools
	ce, err := cm.oci.NewContainerEngineClient()
	if err != nil {
		return err
	}
	waitForChange := false
	nodepoolNamesToCheck := make(map[string]bool)
	for _, np := range nodePools {
		if !np.Delete {
			nodepoolNamesToCheck[np.Name] = true
			waitForChange = true
		}
		if np.Add {
			if err := cm.AddNodePool(clusterModel, np); err != nil {
				return err
			}
		} else if !np.Delete {
			if err := cm.UpdateNodePool(clusterModel, np); err != nil {
				return err
			}
		}
	}
	if waitForChange {
		if err := ce.WaitingForClusterNodePoolActiveState(&clusterModel.OCID, nodepoolNamesToCheck); err != nil {
			return err
		}
	}
	waitForDelete := false
	for _, np := range nodePools {
		if np.Delete {
			waitForDelete = true
			if err := cm.DeleteNodePool(clusterModel, np); err != nil {
				return err
			}
		}
	}
	if waitForDelete {
		if err := ce.WaitingForClusterNodePoolActiveState(&clusterModel.OCID, make(map[string]bool)); err != nil {
			return err
		}
	}
	return nil
}
// NewUserRepository is the constructor for UserRepository func NewUserRepository(mongoDB *db.MongoDB) *UserRepository { client := mongoDB.GetClient() userCollection := client.Database(os.Getenv("DB_MONGODB_NAME")).Collection("user") return &UserRepository{ mongoDB: mongoDB, userCollection: userCollection, } }
def main():
    t = int(input())
    for _ in range(t):
        n = int(input())
        # collect the digits of n, least significant first
        digits = []
        while n:
            n, d = divmod(n, 10)
            digits.append(d)
        # print the number of non-zero digits
        print(sum(1 for d in digits if d))
        # each non-zero digit contributes d * 10^position
        print(*(d * 10 ** i for i, d in enumerate(digits) if d))

main()
package net.ravendb.client.documents.operations.replication; public class ReplicationHubAccessResult { private DetailedReplicationHubAccess[] results; public DetailedReplicationHubAccess[] getResults() { return results; } public void setResults(DetailedReplicationHubAccess[] results) { this.results = results; } }
package com.tencent.qcloud.suixinbo.presenters; import android.content.Context; import android.os.Handler; import android.os.HandlerThread; import android.os.Looper; import android.os.Message; import android.text.TextUtils; import com.tencent.cos.COSClient; import com.tencent.cos.COSClientConfig; import com.tencent.cos.common.COSEndPoint; import com.tencent.cos.model.COSRequest; import com.tencent.cos.model.COSResult; import com.tencent.cos.model.PutObjectRequest; import com.tencent.cos.model.PutObjectResult; import com.tencent.cos.task.listener.ITaskListener; import com.tencent.cos.task.listener.IUploadTaskListener; import com.tencent.qcloud.suixinbo.model.MySelfInfo; import com.tencent.qcloud.suixinbo.presenters.viewinface.UploadView; import com.tencent.qcloud.suixinbo.utils.SxbLog; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.InputStream; /** * COS image upload helper class */ public class UploadHelper extends Presenter { private final String TAG = "PublishHelper"; private final String bucket = "sxbbucket"; private final String appid = "1253488539"; private final static int THREAD_GET_SIG = 1; private final static int THREAD_UPLAOD = 2; private final static int THREAD_GETSIG_UPLOAD = 3; private final static int MAIN_CALL_BACK = 1; private final static int MAIN_PROCESS = 2; private Context mContext; private UploadView mView; private HandlerThread mThread; private Handler mHandler; private Handler mMainHandler; public UploadHelper(Context context, UploadView view) { mContext = context; mView = view; mThread = new HandlerThread("upload"); mThread.start(); mHandler = new Handler(mThread.getLooper(), new Handler.Callback() { @Override public boolean handleMessage(Message msg) { switch (msg.what) { case THREAD_GET_SIG: doUpdateSig(); break; case THREAD_UPLAOD: doUploadCover((String) msg.obj, true); break; case THREAD_GETSIG_UPLOAD: doUpdateSig(); doUploadCover((String) msg.obj, false); break; default: break; } return false; } });
mMainHandler = new Handler(Looper.getMainLooper(), new Handler.Callback() { @Override public boolean handleMessage(Message msg) { SxbLog.d(TAG, "handleMessage id:" + msg.what); switch (msg.what) { case MAIN_CALL_BACK: if (null != mView) mView.onUploadResult(msg.arg1, (String) msg.obj); break; case MAIN_PROCESS: if (null != mView) mView.onUploadProcess(msg.arg1); break; default: break; } return false; } }); } private String createNetUrl() { return "/" + MySelfInfo.getInstance().getId() + "_" + System.currentTimeMillis() + ".jpg"; } private void doUpdateSig() { String sig = UserServerHelper.getInstance().getCosSig(); MySelfInfo.getInstance().setCosSig(sig); // SxbLog.d(TAG, "doUpdateSig->get sig: " + sig); } /** * Copy a single file * @param oldPath String source file path, e.g. c:/fqf.txt * @param newPath String destination path, e.g. f:/fqf.txt * @return boolean */ public boolean copyFile(String oldPath, String newPath) { try { int bytesum = 0; int byteread = 0; File oldfile = new File(oldPath); if (oldfile.exists()) { // only if the source file exists InputStream inStream = new FileInputStream(oldPath); // read the source file FileOutputStream fs = new FileOutputStream(newPath); byte[] buffer = new byte[1444]; while ( (byteread = inStream.read(buffer)) != -1) { bytesum += byteread; // running byte count (file size) System.out.println(bytesum); fs.write(buffer, 0, byteread); } fs.close(); inStream.close(); } } catch (Exception e) { SxbLog.e(TAG, "copy file failed!"); e.printStackTrace(); return false; } return true; } private void doUploadCover(final String path, boolean bRetry) { String sig = MySelfInfo.getInstance().getCosSig(); if (TextUtils.isEmpty(sig)) { if (bRetry) { Message msg = new Message(); msg.what = THREAD_GETSIG_UPLOAD; msg.obj = path; mHandler.sendMessage(msg); } return; } String tmpPath = path; if ("Xiaomi".equals(android.os.Build.MANUFACTURER)) { // copy to a tmp file before uploading (on Xiaomi Mi 5 devices the original file cannot be used directly) tmpPath = path + "_tmp"; copyFile(path, tmpPath); } // create the COSClientConfig object; adjust the default parameters as needed final COSClientConfig config = new COSClientConfig(); // set the region
config.setEndPoint(COSEndPoint.COS_GZ); //创建COSlient对象,实现对象存储的操作 COSClient cos = new COSClient(mContext, appid, config, null); SxbLog.d(TAG, "upload cover: " + tmpPath); //上传文件 PutObjectRequest putObjectRequest = new PutObjectRequest(); putObjectRequest.setBucket(bucket); putObjectRequest.setCosPath(createNetUrl()); putObjectRequest.setSrcPath(tmpPath); putObjectRequest.setSign(sig); putObjectRequest.setListener(new IUploadTaskListener(){ @Override public void onSuccess(COSRequest cosRequest, COSResult cosResult) { PutObjectResult result = (PutObjectResult) cosResult; if(result != null){ SxbLog.i(TAG, "upload succeed: " + result.url); Message msg = new Message(); msg.what = MAIN_CALL_BACK; msg.arg1 = 0; msg.obj = result.source_url; mMainHandler.sendMessage(msg); } } @Override public void onFailed(COSRequest COSRequest, final COSResult cosResult) { SxbLog.w(TAG, "upload error code: " + cosResult.code + " msg:" + cosResult.msg); if (-96 == cosResult.code) { // 签名过期重试 Message msg = new Message(); msg.what = THREAD_GETSIG_UPLOAD; msg.obj = path; mHandler.sendMessage(msg); } else { Message msg = new Message(); msg.what = MAIN_CALL_BACK; msg.arg1 = cosResult.code; msg.obj = cosResult.msg; mMainHandler.sendMessage(msg); } } @Override public void onProgress(COSRequest cosRequest, final long currentSize, final long totalSize) { SxbLog.d(TAG, "onUploadProgress: " + currentSize + "/" + totalSize); Message msg = new Message(); msg.what = MAIN_PROCESS; msg.arg1 = (int) (currentSize * 100 / totalSize); mMainHandler.sendMessage(msg); } @Override public void onCancel(COSRequest cosRequest, COSResult cosResult) { } }); PutObjectResult putObjectResult = cos.putObject(putObjectRequest); if (0 != putObjectResult.code){ Message msg = new Message(); msg.what = MAIN_CALL_BACK; msg.arg1 = -1; msg.obj = "upload failed"; mMainHandler.sendMessage(msg); } } public void updateSig() { mHandler.sendEmptyMessage(THREAD_GET_SIG); } public void uploadCover(String path) { Message msg = new 
Message(); msg.what = THREAD_UPLAOD; msg.obj = path; mHandler.sendMessage(msg); } @Override public void onDestory() { mView = null; mContext = null; } }
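The retry logic above — try the upload with the cached signature and, on a -96 (expired-signature) failure, fetch a fresh signature and retry once — can be sketched without the Android or COS SDKs. Everything here (`SignRetrySketch`, `SigProvider`, the fake `upload`) is illustrative, not part of the real API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Self-contained sketch of the sign-and-retry pattern used by doUploadCover.
// The COS SDK is replaced by a fake upload that rejects stale signatures.
public class SignRetrySketch {
    static final int ERR_SIG_EXPIRED = -96;

    interface SigProvider { String fetchSig(); }

    // pretend only signatures starting with "fresh" are accepted
    static int upload(String sig) {
        return sig != null && sig.startsWith("fresh") ? 0 : ERR_SIG_EXPIRED;
    }

    /** Tries the cached signature, then retries exactly once with a fresh one. */
    static int uploadWithRetry(String cachedSig, SigProvider provider) {
        int code = upload(cachedSig);
        if (code == ERR_SIG_EXPIRED) {
            // mirrors the THREAD_GETSIG_UPLOAD message: refresh, then re-upload
            code = upload(provider.fetchSig());
        }
        return code;
    }

    public static void main(String[] args) {
        AtomicInteger fetches = new AtomicInteger();
        SigProvider provider = () -> { fetches.incrementAndGet(); return "fresh-sig"; };

        System.out.println(uploadWithRetry("stale-sig", provider)); // 0: ok after refresh
        System.out.println(fetches.get());                          // 1: fetched once
    }
}
```

Note the single retry: because the retried message is sent with `bRetry = false` semantics (THREAD_GETSIG_UPLOAD re-fetches the signature itself), a permanently failing signature cannot loop forever.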
#include "gc.h"

#include <unordered_set>

namespace ploy {
std::unordered_set<const GC*> GC::all;
}
import RecipeStore from "./recipeStore";
import UserStore from "./userStore";
import { createContext } from "react";
import { configure } from "mobx";
import CommonStore from "./commonStore";
import ModalStore from "./modalStore";
import CategoryStore from "./categoryStore";
import ChatStore from "./chatStore";
import CommentStore from "./commentStore";

configure({ enforceActions: 'always' });

export class RootStore {
  recipeStore: RecipeStore;
  userStore: UserStore;
  commonStore: CommonStore;
  modalStore: ModalStore;
  categoryStore: CategoryStore;
  chatStore: ChatStore;
  commentStore: CommentStore;

  constructor() {
    this.recipeStore = new RecipeStore(this);
    this.userStore = new UserStore(this);
    this.commonStore = new CommonStore(this);
    this.modalStore = new ModalStore(this);
    this.categoryStore = new CategoryStore(this);
    this.chatStore = new ChatStore(this);
    this.commentStore = new CommentStore(this);
  }
}

export const RootStoreContext = createContext(new RootStore());
Preliminary profiling of blood transcriptome in a rat model of hemorrhagic shock

Hemorrhagic shock is a leading cause of morbidity and mortality worldwide. Significant blood loss may lead to decreased blood pressure and inadequate tissue perfusion, with resultant organ failure and death even after replacement of the lost blood volume. One reason for this high acuity is that the fundamental mechanisms of shock are poorly understood. Proteomic and metabolomic approaches have been used to investigate the molecular events occurring in hemorrhagic shock but, to our knowledge, a systematic analysis of the transcriptomic profile is missing. Therefore, a pilot analysis using paired-end RNA sequencing was performed to identify changes that occur in the blood transcriptome of rats subjected to hemorrhagic shock after blood reinfusion. Hemorrhagic shock was induced using a Wiggers shock model. Compared with control animals, the whole-blood transcriptome of shocked animals shows modulation of genes related to inflammation and immune response (Tlr13, Il1b, Ccl6, Lgals3), antioxidant functions (Mt2A, Mt1), tissue injury and repair pathways (Gpnmb, Trim72), and lipid mediators (Alox5ap, Ltb4r, Ptger2). These findings are congruent with results obtained by other authors in metabolomic and proteomic analyses of hemorrhagic shock. The analysis of the blood transcriptome may be a valuable tool for understanding the biological changes occurring in hemorrhagic shock and a promising approach for the identification of novel biomarkers and therapeutic targets. Impact statement This study provides the first pilot analysis of the changes occurring in the whole-blood transcriptome of hemorrhagic shock (HS) rats. We showed that the analysis of the blood transcriptome is a useful approach to investigate pathways and functional alterations in this disease condition. This pilot study encourages the possible application of transcriptome analysis in the clinical setting for the molecular profiling of whole blood in HS patients.
#include <iostream>
#include <cstdio>
#include <cstdlib>
using namespace std;

int f[1027][1027], g[11][11], n, m, t, c[1027], ans;

int main() {
    cin >> n >> m >> t;
    for (int i = 1; i <= m; i++) {
        int p, q;
        scanf("%d%d", &p, &q);
        g[p][q] = g[q][p] = 1;
        f[(1 << (p - 1)) | (1 << (q - 1))][(1 << (p - 1)) | (1 << (q - 1))] = 1;
    }
    for (int i = 1; i < (1 << n); i++) c[i] = c[i >> 1] + (i & 1);
    for (int i = 0; i < (1 << n); i++)
        for (int j = 1; j <= n; j++)
            if ((1 << (j - 1)) & i)
                for (int k = 0; k <= i; k++)
                    if ((k & i) == k)
                        for (int l = 1; l <= n; l++)
                            if (!((1 << (l - 1)) & i) && g[j][l]) {
                                int p = 0;
                                if ((1 << (j - 1)) & k)
                                    p = (k ^ (1 << (j - 1))) | (1 << (l - 1));
                                else
                                    p = k | (1 << (l - 1));
                                if (!(((1 << (l - 1)) - 1) & p))
                                    f[i | (1 << (l - 1))][p] += f[i][k];
                            }
    for (int i = 0; i < (1 << n); i++)
        if (c[i] == t) ans += f[(1 << n) - 1][i];
    cout << ans;
    return 0;
}
/**
 * Tests for openAPI info section mapping.
 */
public class OpenAPIInfoTests {
    private static final Path RES_DIR =
            Paths.get("src/test/resources/ballerina-to-openapi/openapi_info").toAbsolutePath();
    private Path tempDir;

    @BeforeMethod
    public void setup() throws IOException {
        this.tempDir = Files.createTempDirectory("bal-to-openapi-test-out-" + System.nanoTime());
    }

    @Test(description = "Generate OpenAPI spec with default values for an empty Ballerina.toml")
    public void defaultOpenAPIInfo() throws IOException {
        Path ballerinaFilePath = RES_DIR.resolve("project01/project04.bal");
        compareWithGeneratedFile(ballerinaFilePath, "openapi_info/project01.yaml");
    }

    @Test(description = "Generate OpenAPI spec with the version taken from Ballerina.toml")
    public void versionOpenAPIInfo() throws IOException {
        Path ballerinaFilePath = RES_DIR.resolve("project02/project02.bal");
        compareWithGeneratedFile(ballerinaFilePath, "openapi_info/project02.yaml");
    }

    @Test(description = "Generate OpenAPI spec with info taken from the OpenAPI annotation")
    public void openAPIAnnotation() throws IOException {
        Path ballerinaFilePath = RES_DIR.resolve("project03/project03.bal");
        compareWithGeneratedFile(ballerinaFilePath, "openapi_info/project02.yaml");
    }
}
/** * TotalVariability class for i-vectors: M = m + T w This class is mostly based on Alize FactorAnalysisStat class. * * @author meignier */ public class TotalVariability { /** The Constant logger. */ private final static Logger logger = Logger.getLogger(TotalVariability.class.getName()); /** The ubm. */ private GMM ubm; /** The feature dimension. */ int featureDimension; /** The nb component. */ int nbComponent; /** The super vector dimension. */ int superVectorDimension; /** The i vector dimension. */ int iVectorDimension; /** The accumulator cluster name. */ private ArrayList<String> accumulatorClusterName; // name of each entry in accumulators (= the name of the cluster) /** The accumulator cluster gender. */ private ArrayList<String> accumulatorClusterGender; // name of each entry in accumulators (= the name of the cluster) /** The zero order statistic. */ private ArrayList<MatrixRowVector> zeroOrderStatistic; // sum of Likelihood for each cluster /** The first order statistic. */ private ArrayList<MatrixRowVector> firstOrderStatistic; // sum of likelihood x feature for each cluster private ArrayList<MatrixRowVector> normalizedFirstOrderStatistic; // sum of likelihood x feature for each cluster minus likelihood x umb mean /** The zero order statistic copy. */ //private ArrayList<MatrixRowVector> zeroOrderStatisticCopy; // copy of sum of Likelihood for each cluster /** The first order statistic copy. */ //private ArrayList<MatrixRowVector> firstOrderStatisticCopy; // copy sum of likelihood x feature for each cluster /** The super mean ubm. */ private MatrixRowVector superMeanUBM; // concatenation of UBM means (m) /** The super inverse covariance ubm. */ private MatrixRowVector superInverseCovarianceUBM; // concatenation of UBM inverse covariance /** The total variability matrix. */ private MatrixRectangular totalVariabilityMatrix; // low rank total variability matrix (T) /** The i vector list. 
*/ // private IVectorArrayList iVectorList; // list of i-vector (w) /** The _l_h_inv. */ // private ArrayList<MatrixSymmetric> _l_h_inv; public void debug() { logger.info("accumulatorClusterName size : " + accumulatorClusterName.size()); logger.info("accumulatorClusterGender size : " + accumulatorClusterGender.size()); logger.info("ubm feature dimesion : " + featureDimension); logger.info("ubm number of components: " + nbComponent); logger.info("super vector size : " + superVectorDimension); logger.info("zeroOrderStatistic size : " + zeroOrderStatistic.size()); if (zeroOrderStatistic.size() > 0) { logger.info("zeroOrderStatistic size of get(0) : " + zeroOrderStatistic.get(0).getSize()); } logger.info("firstOrderStatistic size : " + firstOrderStatistic.size()); if (firstOrderStatistic.size() > 0) { logger.info("firstOrderStatistic size of get(0) : " + firstOrderStatistic.get(0).getSize()); } /* * logger.info("_l_h_inv size : "+_l_h_inv.size()); if (_l_h_inv.size() > 0) { logger.info("_l_h_inv size of get(0) : "+_l_h_inv.get(0).getSize()); } logger.info("iVectorList size : "+iVectorList.size()); if (iVectorList.size() > 0) { * logger.info("iVectorList size of get(0) : "+iVectorList.get(0).getDimension()); } */ } /** * Initialize. 
* * @param _ubm the _ubm * @throws DiarizationException the diarization exception */ protected void initialize(GMM _ubm) throws DiarizationException { ubm = _ubm; featureDimension = ubm.getDimension(); nbComponent = ubm.getNbOfComponents(); superVectorDimension = featureDimension * nbComponent; gmm2SuperVectors(ubm); zeroOrderStatistic = new ArrayList<MatrixRowVector>(0); firstOrderStatistic = new ArrayList<MatrixRowVector>(0); //zeroOrderStatisticCopy = new ArrayList<MatrixRowVector>(0); //firstOrderStatisticCopy = new ArrayList<MatrixRowVector>(0); normalizedFirstOrderStatistic = new ArrayList<MatrixRowVector>(0); accumulatorClusterName = new ArrayList<String>(0); accumulatorClusterGender = new ArrayList<String>(0); // _l_h_inv = new ArrayList<MatrixSymmetric>(0); // accumulatorCluster = new ArrayList<String>(0); // iVectorList = new IVectorArrayList(); } // _gmm contains the model and the accumulator /** * Instantiates a new total variability. * * @param _ubm the _ubm * @param _rang the _rang * @throws DiarizationException the diarization exception */ public TotalVariability(GMM _ubm, int _rang) throws DiarizationException { initialize(_ubm); iVectorDimension = _rang; totalVariabilityMatrix = initializeTotalVariabilityMatrix(superVectorDimension, iVectorDimension, ubm); } /** * Instantiates a new total variability. * * @param _ubm the _ubm * @param _matrixU the _matrix u * @throws DiarizationException the diarization exception */ public TotalVariability(GMM _ubm, MatrixRectangular _matrixU) throws DiarizationException { initialize(_ubm); totalVariabilityMatrix = _matrixU; iVectorDimension = totalVariabilityMatrix.numCols(); if (totalVariabilityMatrix.numRows() != superVectorDimension) { throw new DiarizationException("Total variability matrix row size problem: " + totalVariabilityMatrix.numRows() + " vs. 
" + superVectorDimension); } if (totalVariabilityMatrix.numCols() != iVectorDimension) { throw new DiarizationException("Total variability matrix column size problem: " + totalVariabilityMatrix.numCols() + " vs. " + iVectorDimension); } } /** * Adds the accumulator from gmm. * * @param gmm the gmm * @throws DiarizationException the diarization exception */ protected void addAccumulatorFromGMM(GMM gmm) throws DiarizationException { MatrixRowVector super0Order = new MatrixRowVector(nbComponent); MatrixRowVector super1Order = new MatrixRowVector(superVectorDimension); // DoubleVector super2Order = new DoubleVector(superVectorSize); int j = 0, k = 0; for (Gaussian gaussian : gmm) { // for (int k = 0; k < gmm.componentList.size(); k++) { DiagGaussian dg = (DiagGaussian) gaussian; super0Order.set(k, dg.getStatistic().getZeroOrder()); for (int i = 0; i < featureDimension; i++) { super1Order.set(j, dg.getStatistic().getFirstOrder().get(i)); // super2Order.set(j, dg.getAccumulator().getCovarianceAccumulator().get(i)); j++; } k++; } /* * logger.info("0Order: "); for(int i = 0; i < super0Order.getDimension(); i++) { logger.info("i :"+i+" = "+super0Order.get(i)); } logger.info("1Order: "); for(int i = 0; i < super1Order.getDimension(); i++) { * logger.info("i :"+i+" = "+super1Order.get(i)); } */ zeroOrderStatistic.add(super0Order); firstOrderStatistic.add(super1Order); // accumulator2Order.add(super2Order); } /** * Gmm2 super vectors. * * @param gmm the gmm * @throws DiarizationException the diarization exception */ protected void gmm2SuperVectors(GMM gmm) throws DiarizationException { int dim = gmm.getDimension(); int nbComp = gmm.getNbOfComponents(); int size = dim * nbComp; superMeanUBM = new MatrixRowVector(size); superInverseCovarianceUBM = new MatrixRowVector(size); int j = 0; for (Gaussian g : gmm) { for (int i = 0; i < dim; i++) { superMeanUBM.set(j, g.getMean(i)); superInverseCovarianceUBM.set(j, g.getInvertCovariance(i, i)); j++; } } } /** * Load statistic. 
* * @param fileName the file name * @param statistic the statistic * @throws FileNotFoundException the file not found exception * @throws IOException Signals that an I/O exception has occurred. */ protected void loadStatistic(String fileName, ArrayList<MatrixRowVector> statistic) throws FileNotFoundException, IOException { MatrixRectangular matrix = MatrixIO.readRectMatrix(fileName, false); zeroOrderStatistic.ensureCapacity(matrix.numRows()); firstOrderStatistic.ensureCapacity(matrix.numRows()); normalizedFirstOrderStatistic.ensureCapacity(matrix.numRows()); // zeroOrderStatisticCopy.ensureCapacity(matrix.numRows()); // firstOrderStatisticCopy.ensureCapacity(matrix.numRows()); for (int i = 0; i < matrix.numRows(); i++) { MatrixRowVector vector = new MatrixRowVector(matrix.numCols()); for (int j = 0; j < matrix.numCols(); j++) { vector.set(j, matrix.get(i, j)); } statistic.add(vector); } } /** * Load zero order statistic. * * @param fileName the file name * @throws FileNotFoundException the file not found exception * @throws IOException Signals that an I/O exception has occurred. */ protected void loadZeroOrderStatistic(String fileName) throws FileNotFoundException, IOException { loadStatistic(fileName, zeroOrderStatistic); } /** * Load first order statistic. * * @param fileName the file name * @throws FileNotFoundException the file not found exception * @throws IOException Signals that an I/O exception has occurred. */ protected void loadFirstOrderStatistic(String fileName) throws FileNotFoundException, IOException { loadStatistic(fileName, firstOrderStatistic); } /** * Load statistic. * * @param fileNameZeroOrder the file name zero order * @param fileNameFirstOrder the file name first order * @throws FileNotFoundException the file not found exception * @throws IOException Signals that an I/O exception has occurred. 
*/ public void loadStatistic(String fileNameZeroOrder, String fileNameFirstOrder) throws FileNotFoundException, IOException { loadFirstOrderStatistic(fileNameFirstOrder); loadZeroOrderStatistic(fileNameZeroOrder); if (SpkDiarizationLogger.DEBUG) logger.info("substract speaker statistics"); substractSpeakerStats(); } protected void copyStatistic(ArrayList<MatrixRowVector> src, ArrayList<MatrixRowVector> dest) { dest.clear(); for (MatrixRowVector s : src) { MatrixRowVector d = new MatrixRowVector(s.getSize()); for (int i = 0; i < s.getSize(); i++) { d.set(i, s.get(i)); } dest.add(d); } } /** * Save statistic. * * @param fileName the file name * @param statistic the statistic * @throws FileNotFoundException the file not found exception * @throws IOException Signals that an I/O exception has occurred. */ protected void saveStatistic(String fileName, ArrayList<MatrixRowVector> statistic) throws FileNotFoundException, IOException { MatrixRectangular matrix = new MatrixRectangular(statistic.size(), superVectorDimension); for (int i = 0; i < matrix.numRows(); i++) { MatrixRowVector vector = statistic.get(i); for (int j = 0; j < matrix.numCols(); j++) { matrix.set(i, j, vector.get(j)); } statistic.add(vector); } MatrixIO.writeMatrix(matrix, fileName, false); } /** * Copy statistic. * protected void copyStatistic() { logger.info("copy statistic"); copyStatistic(zeroOrderStatistic, zeroOrderStatisticCopy); copyStatistic(firstOrderStatistic, firstOrderStatisticCopy); // zeroOrderStatisticCopy = (ArrayList<MatrixRowVector>) zeroOrderStatistic.clone(); // firstOrderStatisticCopy = (ArrayList<MatrixRowVector>) firstOrderStatistic.clone(); }*/ /** * Restore statistic. 
* protected void restoreStatistic() { logger.info("restore statistic"); copyStatistic(zeroOrderStatisticCopy, zeroOrderStatistic); copyStatistic(firstOrderStatisticCopy, firstOrderStatistic); // zeroOrderStatistic = (ArrayList<MatrixRowVector>) zeroOrderStatisticCopy.clone(); // firstOrderStatistic = (ArrayList<MatrixRowVector>) firstOrderStatisticCopy.clone(); }*/ /** * Compute statistics. * * @param clusterSet the cluster set * @param featureSet the feature set * @param useSpeechDetection the use speech detection * @throws DiarizationException the diarization exception * @throws IOException Signals that an I/O exception has occurred. */ public void computeStatistics(ClusterSet clusterSet, AudioFeatureSet featureSet, boolean useSpeechDetection) throws DiarizationException, IOException { // calculer les occupations, occ1, occ2 à partir de l'UBM // attention le clusterSet doit être correctement organisé (session), cf ClusterSet.toSpeakerSession() int minCapacity = clusterSet.clusterGetSize(); if (SpkDiarizationLogger.DEBUG) logger.finest("capacity :" + minCapacity); zeroOrderStatistic.ensureCapacity(minCapacity); firstOrderStatistic.ensureCapacity(minCapacity); normalizedFirstOrderStatistic.ensureCapacity(minCapacity); // zeroOrderStatisticCopy.ensureCapacity(minCapacity); // firstOrderStatisticCopy.ensureCapacity(minCapacity); accumulatorClusterName.ensureCapacity(minCapacity); accumulatorClusterGender.ensureCapacity(minCapacity); for (String clusterName : clusterSet) { Cluster cluster = clusterSet.getCluster(clusterName); GMM gmm = (ubm.clone()); GMMFactory.iterationAccumulation(cluster, featureSet, ubm, gmm, useSpeechDetection); addAccumulatorFromGMM(gmm); accumulatorClusterName.add(clusterName); accumulatorClusterGender.add(cluster.getGender()); } if (SpkDiarizationLogger.DEBUG) logger.info("substract speaker statistics"); substractSpeakerStats(); } /** * Estimate l. 
* * @throws DiarizationException the diarization exception */ protected ArrayList<MatrixSymmetric> estimateL() throws DiarizationException { ArrayList<MatrixSymmetric> _l_h_inv = new ArrayList<MatrixSymmetric>(zeroOrderStatistic.size()); MatrixSymmetric matrixL = new MatrixSymmetric(iVectorDimension); // L(_rang) for (int sent = 0; sent < zeroOrderStatistic.size(); sent++) { if ((sent % 1000) == 0) { logger.info("\t sent #: " + sent); } matrixL.fill(0.0); MatrixRowVector sentN = zeroOrderStatistic.get(sent); for (int g = 0; g < nbComponent; g++) { double zeroOrderStatistic4n_g = sentN.get(g); for (int l = 0; l < featureDimension; l++) { int k = (g * featureDimension) + l; double valueK = zeroOrderStatistic4n_g * superInverseCovarianceUBM.get(k); for (int i = 0; i < iVectorDimension; i++) { double valueI = valueK * totalVariabilityMatrix.get(k, i); for (int j = i; j < iVectorDimension; j++) { matrixL.add(i, j, valueI * totalVariabilityMatrix.get(k, j)); } } } } for (int i = 0; i < iVectorDimension; i++) { matrixL.add(i, i, 1.0); } MatrixSymmetric matrixLI = matrixL.invert(); _l_h_inv.add(matrixLI); } return _l_h_inv; } /** * Substract speaker stats. */ protected void substractSpeakerStats() { // Alize AccumulateTV::substractM // C'est constant ? si oui, alors sauvegarder firstOrder normalizedFirstOrderStatistic.clear(); for (int sent = 0; sent < zeroOrderStatistic.size(); sent++) { // for each speaker MatrixRowVector sentN = zeroOrderStatistic.get(sent); MatrixRowVector sentSX = firstOrderStatistic.get(sent); MatrixRowVector sentSX_norm = sentSX.copy(); normalizedFirstOrderStatistic.add(sentSX_norm); for (int i = 0; i < nbComponent; i++) { int iDec = i * featureDimension; for (int j = 0; j < featureDimension; j++) { double value = -(sentN.get(i) * superMeanUBM.get(iDec + j)); sentSX_norm.add(iDec + j, value); } } } } /** * Estimate total variability matrix. 
* * @throws DiarizationException the diarization exception */ protected void estimateTotalVariabilityMatrix(ArrayList<MatrixSymmetric> _l_h_inv, fr.lium.spkDiarization.libModel.ivector.IVectorArrayList iVectorList) throws DiarizationException { MatrixRowVector C = new MatrixRowVector(iVectorDimension); MatrixSymmetric A = new MatrixSymmetric(iVectorDimension); totalVariabilityMatrix.fill(0.0); for (int g = 0; g < nbComponent; g++) { A.fill(0.0); for (int j = 0; j < zeroOrderStatistic.size(); j++) { MatrixSymmetric matrixLInv = _l_h_inv.get(j); fr.lium.spkDiarization.libModel.ivector.IVector iVector = iVectorList.get(j); for (int k = 0; k < iVectorDimension; k++) { for (int l = k; l < iVectorDimension; l++) { double value = (matrixLInv.get(k, l) + (iVector.get(k) * iVector.get(l))) * zeroOrderStatistic.get(j).get(g); A.add(k, l, value); // ok } } } MatrixSymmetric AInvert = A.invert(); // ok for (int i = 0; i < featureDimension; i++) { C.fill(0.0); // ok int gi = (g * featureDimension) + i; // ok for (int j = 0; j < zeroOrderStatistic.size(); j++) { for (int k = 0; k < iVectorDimension; k++) { C.add(k, normalizedFirstOrderStatistic.get(j).get(gi) * iVectorList.get(j).get(k));// ok // R.add(k, firstOrderStatistic.get(j).get(gi) * iVectorList.get(j).get(k));// ok } } for (int j = 0; j < iVectorDimension; j++) { for (int k = 0; k < iVectorDimension; k++) { totalVariabilityMatrix.add(gi, j, AInvert.get(j, k) * C.get(k));// ok } } } } } /** * Estimate i vector. 
*/ protected fr.lium.spkDiarization.libModel.ivector.IVectorArrayList estimateIVector(ArrayList<MatrixSymmetric> listOfLInvert) { // matrice U, occ, order1, L, superCov fr.lium.spkDiarization.libModel.ivector.IVectorArrayList iVectorList = new fr.lium.spkDiarization.libModel.ivector.IVectorArrayList(); iVectorList.ensureCapacity(zeroOrderStatistic.size()); MatrixRowVector tmp = new MatrixRowVector(iVectorDimension); for (int sent = 0; sent < zeroOrderStatistic.size(); sent++) { MatrixRowVector W = new MatrixRowVector(iVectorDimension); W.fill(0.0); tmp.fill(0.0); for (int i = 0; i < iVectorDimension; i++) { for (int k = 0; k < superVectorDimension; k++) { double v = totalVariabilityMatrix.get(k, i); v *= superInverseCovarianceUBM.get(k); //v *= firstOrderStatistic.get(sent).get(k); v *= normalizedFirstOrderStatistic.get(sent).get(k); tmp.add(i, v); } } for (int i = 0; i < iVectorDimension; i++) { for (int k = 0; k < iVectorDimension; k++) { W.add(i, listOfLInvert.get(sent).get(i, k) * tmp.get(k)); } } if (accumulatorClusterName.size() == 0) { iVectorList.add(new fr.lium.spkDiarization.libModel.ivector.IVector(W, "sent_" + sent, Cluster.genderStrings[0])); } else { iVectorList.add(new IVector(W, accumulatorClusterName.get(sent), accumulatorClusterGender.get(sent))); } } return iVectorList; } /** * Train i vector. * * @return the i vector array list * @throws DiarizationException the diarization exception * @throws IOException Signals that an I/O exception has occurred. 
*/ public fr.lium.spkDiarization.libModel.ivector.IVectorArrayList trainIVector() throws DiarizationException, IOException { // computeStatistics(clusterSet, featureSet, false); if (SpkDiarizationLogger.DEBUG) logger.info("trainIVector: estimate L"); ArrayList<MatrixSymmetric> _l_h_inv = estimateL(); //logger.info("trainIVector: substract speaker statistics"); //substractSpeakerStats(); if (SpkDiarizationLogger.DEBUG) logger.info("trainIVector: estimate i-vector"); fr.lium.spkDiarization.libModel.ivector.IVectorArrayList iVectorList = estimateIVector(_l_h_inv); return iVectorList; } /** * Train total variability matrix. * * @param nbIteration the nb iteration * @param fileNameBase the file name base * @return the matrix rectangular * @throws DiarizationException the diarization exception * @throws IOException Signals that an I/O exception has occurred. */ public MatrixRectangular trainTotalVariabilityMatrix(int nbIteration, String fileNameBase) throws DiarizationException, IOException { if (fileNameBase.isEmpty() == false) { String fileName = IOFile.getFilename(fileNameBase, -1); MatrixIO.writeMatrix(totalVariabilityMatrix, fileName, false); } // computeStatistics(clusterSet, featureSet, false); //copyStatistic(); for (int i = 0; i < nbIteration; i++) { logger.info("iteration :" + i); if (SpkDiarizationLogger.DEBUG) debug(); if (SpkDiarizationLogger.DEBUG) logger.info("---->start: " + zeroOrderStatistic.get(0).get(0)); if (SpkDiarizationLogger.DEBUG) logger.info("---->start: " + firstOrderStatistic.get(0).get(0)); if (SpkDiarizationLogger.DEBUG) logger.info("\ttrainTotalVariabilityMatrix: estimate L"); ArrayList<MatrixSymmetric> _l_h_inv = estimateL(); if (SpkDiarizationLogger.DEBUG) logger.info("---->after L: " + zeroOrderStatistic.get(0).get(0)); if (SpkDiarizationLogger.DEBUG) logger.info("---->after L: " + firstOrderStatistic.get(0).get(0)); if (SpkDiarizationLogger.DEBUG) logger.info("\ttrainTotalVariabilityMatrix: estimate i-vector"); IVectorArrayList 
iVectorList = estimateIVector(_l_h_inv); if (SpkDiarizationLogger.DEBUG) logger.info("---->after Iv: " + zeroOrderStatistic.get(0).get(0)); if (SpkDiarizationLogger.DEBUG) logger.info("---->after Iv: " + firstOrderStatistic.get(0).get(0)); if (SpkDiarizationLogger.DEBUG) logger.info("\ttrainTotalVariabilityMatrix: estimate TV matrix"); estimateTotalVariabilityMatrix(_l_h_inv, iVectorList); if (fileNameBase.isEmpty() == false) { String fileName = IOFile.getFilename(fileNameBase, i); MatrixIO.writeMatrix(totalVariabilityMatrix, fileName, false); } //logger.info("---->before: " + zeroOrderStatistic.get(0).get(0)); //logger.info("---->before: " + firstOrderStatistic.get(0).get(0)); //restoreStatistic(); //logger.info("---->after: " + zeroOrderStatistic.get(0).get(0)); //logger.info("---->after: " + firstOrderStatistic.get(0).get(0)); } return totalVariabilityMatrix; } /** * Initialize total variability matrix. * * @param rows the rows * @param cols the cols * @param ubm the ubm * @return the matrix rectangular * @throws DiarizationException the diarization exception */ protected static MatrixRectangular initializeTotalVariabilityMatrix(int rows, int cols, GMM ubm) throws DiarizationException { double sumInvertCovariance = 0.0; for (Gaussian g : ubm) { DiagGaussian dg = (DiagGaussian) g; for (int i = 0; i < dg.getDimension(); i++) { sumInvertCovariance += dg.getCovariance(i, i); } } return MatrixIO.createGaussianRandom(rows, cols, 0, Math.sqrt(sumInvertCovariance / ubm.getDimension())); } }
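The E-step implemented by `estimateL` and `estimateIVector`, and the M-step in `estimateTotalVariabilityMatrix`, follow the standard total-variability EM updates. A sketch (symbols mapped onto the code: T is `totalVariabilityMatrix`, Σ⁻¹ is `superInverseCovarianceUBM`, N_h and F_h the zero- and first-order statistics of session h, m the UBM super-mean; the per-Gaussian block form below is a simplification of the per-feature-row loop in the code):

```latex
% E-step: per-session posterior of the i-vector w_h (estimateL / estimateIVector)
L_h = I + T^{\top} \Sigma^{-1} N_h T, \qquad
w_h = L_h^{-1}\, T^{\top} \Sigma^{-1} \tilde{F}_h,
\qquad \tilde{F}_h = F_h - N_h m \;\; \text{(substractSpeakerStats)}

% M-step: row-block of T for Gaussian component g (estimateTotalVariabilityMatrix)
A_g = \sum_h N_h(g) \left( L_h^{-1} + w_h w_h^{\top} \right), \qquad
T_g = \Big( \sum_h \tilde{F}_h(g)\, w_h^{\top} \Big) A_g^{-1}
```

The identity term added to L_h corresponds to the `matrixL.add(i, i, 1.0)` loop, and the centering of F_h explains why `normalizedFirstOrderStatistic` (not the raw first-order statistics) feeds both the i-vector estimate and the accumulator C.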
/* access modifiers changed from: package-private */
public e.b d(w wVar) {
    e a2 = a(wVar.k());
    e.b e2 = a2 != null ? a2.e() : null;
    e b2 = b(wVar.k());
    return (b2 == null || !(e2 == null || e2 == e.b.NONE)) ? e2 : b2.e();
}
The proponent of a ballot measure to ban circumcision in a California city has dropped the effort following claims of anti-Semitic themes and imagery, including a comic book that featured a "Monster Mohel." Jena Troutman, the Santa Monica woman who submitted the proposal to the Santa Monica city clerk last month, said she has withdrawn the measure to prohibit "Genital Cutting of Male Minors." Under the measure, which needed more than 6,000 signatures to go on the November 2012 ballot, circumcising a child under the age of 18 would have been a misdemeanor offense punishable by a $1,000 fine or a year in jail. Troutman, a lactation consultant and mother of two, told FoxNews.com that her focus was "never about religion" when she submitted the initiative, which was written by a San Diego-based group called MGMbill.org, the same organization that authored the measure to ban circumcision that will appear on the ballot in San Francisco this November. "I don't have the time or the energy to argue with everybody, but you shouldn't go around cutting up your little babies," Troutman said. "Why don't people [expletive] get that? For me, this was never about religion. It was about protecting babies from their parents not knowing that circumcision was started in America to end masturbation." Santa Monica Mayor Richard Bloom confirmed the news on his Facebook page late Monday. Bloom wrote: "The proponent of the Santa Monica ballot initiative to ban circumcision just left a msg 4 me that she is WITHDRAWING the measure!!" During Tuesday's telephone interview, Troutman distanced herself from MGMbill.org, which is led by Matthew Hess, the author of the "Foreskin Man" cartoon, which depicts a blond superhero taking on a character named "Monster Mohel." Several critics, including the Anti-Defamation League, have blasted the publication as "disrespectful and deeply offensive." 
"I respect and appreciate the Jewish religion and people's delicate feelings about their religious customs," she said, adding that she originally sought a religious exemption for the proposed ban but ultimately declined to include one since it would have been unconstitutional. Still, "bodily integrity and genital autonomy" are human rights, she said. "I'm tired of it being about religion," Troutman continued. "For over a million babies a year, they're being cut for no reason." According to the Centers for Disease Control and Prevention, male circumcision has been associated with a lower risk for HIV infection in international observational studies and in three randomized controlled clinical trials. "It is possible, but not yet adequately assessed, that male circumcision could reduce male-to-female transmission of HIV, although probably to a lesser extent than female-to-male transmission," a CDC website reads. "Male circumcision has also been associated with a number of other health benefits. Although there are risks to male circumcision, serious complications are rare." Hess told Fox News that the case has brought attention to his group's message. "Even though there will be no ballot measure in Santa Monica, Jena did help bring additional exposure to the problem of forced circumcision in this country, and that alone is an important accomplishment," he said. Unlike in Santa Monica, Troutman confirmed that a measure banning male circumcision of minors will still appear on the ballot in San Francisco. If passed, the measure would make it a misdemeanor crime punishable by a $1,000 fine or a year in jail. Amanda Susskind, regional director of the Anti-Defamation League's Pacific Southwest region, characterized the news as a "welcome" development. "Everybody is happy that this particular petition isn't going forward," Susskind told FoxNews.com. 
"[But] there's a movement, so our concern is that they'll find someone else in Santa Monica to file [another initiative] on their behalf." Susskind continued, "The main issue for us is the right for a parent to choose the religious upbringing of a child and I think that is a concept that resonates with most people."
// src/ba/video.rs (from foreverbell/BadAppleOS.rs)
use alloc::vec::Vec;
use ba::decompressor::decompress;
use ba::stream::Stream;
use krnl::console;

extern "C" {
  static _binary_build_vdata_bin_start: u8;
  static _binary_build_vdata_bin_end: u8;
}

pub struct Video {
  n_frames: usize,
  cur_frame: usize,
  frames: Vec<console::ConsoleBuf>,
}

// artify the frame to emphasize boundary.
pub fn artify(frame: &mut console::ConsoleBuf) {
  const DXY: [[isize; 2]; 4] = [[-1, 0], [1, 0], [0, -1], [0, 1]];
  const DOT_CHARS: [char; 4] = [',', '.', '\'', '`'];
  const LINE_CHARS: [char; 4] = ['v', '^', '\\', '/'];

  let within = |x: isize, y: isize| -> bool {
    return x >= 0
      && x < console::MAX_ROW as isize
      && y >= 0
      && y < console::MAX_COLUMN as isize;
  };
  let get = |frame: &console::ConsoleBuf, x: isize, y: isize| -> char {
    frame[x as usize][y as usize].ch as char
  };
  let set = |frame: &mut console::ConsoleBuf, x: isize, y: isize, ch: char| {
    frame[x as usize][y as usize].ch = ch as u8;
  };

  for x in 0..console::MAX_ROW as isize {
    for y in 0..console::MAX_COLUMN as isize {
      if get(frame, x, y) == ' ' {
        continue;
      }
      let mut dir: Option<usize> = None;
      let mut n_empty: usize = 0;
      for d in 0..4 {
        if within(x + DXY[d][0], y + DXY[d][1])
          && get(frame, x + DXY[d][0], y + DXY[d][1]) == ' '
        {
          dir = Some(d);
          n_empty += 1;
        }
      }
      if let Some(d) = dir {
        let mut use_line = true;
        if n_empty > 2 {
          use_line = false;
        }
        if within(x + DXY[(3 - d) ^ 1][0], y + DXY[(3 - d) ^ 1][1])
          && get(frame, x + DXY[(3 - d) ^ 1][0], y + DXY[(3 - d) ^ 1][1]) == ' '
        {
          use_line = false;
        }
        if within(x + DXY[3 - d][0], y + DXY[3 - d][1])
          && get(frame, x + DXY[3 - d][0], y + DXY[3 - d][1]) == ' '
        {
          use_line = false;
        }
        let new_char = if use_line { LINE_CHARS[d] } else { DOT_CHARS[d] };
        set(frame, x, y, new_char);
      }
    }
  }
}

impl Video {
  pub fn new() -> Video {
    Video {
      n_frames: 0,
      cur_frame: 0,
      frames: Vec::new(),
    }
  }

  pub fn initialize(&mut self) {
    let vdata_start: *const u8 = unsafe { &_binary_build_vdata_bin_start as *const _ };
let vdata_end: *const u8 = unsafe { &_binary_build_vdata_bin_end as *const _ }; printf!("[video] Decompressing data.\n"); match decompress(vdata_start, vdata_end) { Some(decompressed) => { self.n_frames = decompressed.n_frames; self.frames = Vec::new(); let mut reader = Stream::new(decompressed.buf.as_slice()); self.frames.resize( self.n_frames, [[Default::default(); console::MAX_COLUMN]; console::MAX_ROW], ); for f in 0..self.n_frames { for row in 0..console::MAX_ROW { for col in 0..console::MAX_COLUMN { let ch = if reader.next_byte() == 0 { '%' } else { ' ' }; self.frames[f][row][col] = console::ScreenChar::new(ch as u8, Default::default()); } } artify(&mut self.frames[f]); } printf!("[video] Data loaded.\n"); }, None => printf!("[video] Corrupted Data.\n"), } } pub fn progress(&self) -> usize { (self.cur_frame + 1) * 100 / self.n_frames } pub fn has_next(&self) -> bool { self.cur_frame < self.n_frames } pub fn next(&mut self) { console::CONSOLE.lock().bkcpy(&self.frames[self.cur_frame]); self.cur_frame += 1; } }
Thieves stole a weekend’s worth of beer from a pop-up Sheffield city centre bar. Most of the stock at Bar Stewards in Gibraltar Street was stolen during a break-in on Thursday morning. Thieves also took musical instruments stored in the cellar, along with a laptop. The start-up business currently opens only at weekends but is in the process of applying for a permanent license.

Co-owner Alan Quinlen, who runs Bar Stewards with Charlie Mullen, said the stolen goods were only worth a few hundred pounds, but to a business such as theirs that was a huge amount. "They basically ransacked us," he said. "They came in and took what little we had. We have managed to scrape a few beers from various places, and luckily they haven't taken the cask. We are just powering through and hopefully we will be able to recoup some of our losses this weekend."

Because it is not yet a permanent business Bar Stewards does not have CCTV, although a neighbouring shop has a camera covering the entrance. Alan said he had an idea who might have been responsible. "I reckon they scoped us out," he said. "We can remember someone coming in that was quite dodgy, asking suspicious questions."

Fellow bars and breweries have offered their support on social media, and Bar Stewards will open as normal from 5pm today. "It's a worry for us. We want to operate where we can leave things where we don't have to worry about them," said Alan. "But we are resolute and we are not going anywhere. The responses on social media have been very heartening."
def _prefix_login_path(self, remote_path): if not remote_path.startswith(os.path.sep): remote_path = os.path.join(os.path.sep, remote_path) return os.path.normpath(remote_path)
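The method above can be exercised as a free function (a hypothetical standalone copy, since the original is an instance method): relative paths get anchored at the filesystem root, and `normpath` collapses redundant separators and `..` segments. The separator comes from `os.path.sep`, so the outputs below assume a POSIX system.

```python
import os


def prefix_login_path(remote_path):
    # Standalone copy of _prefix_login_path above (no `self` needed).
    if not remote_path.startswith(os.path.sep):
        remote_path = os.path.join(os.path.sep, remote_path)
    return os.path.normpath(remote_path)


print(prefix_login_path("tmp/uploads"))  # a relative path gains a leading separator
print(prefix_login_path("/a/b/../c/"))   # normpath collapses ".." and the trailing "/"
```

On a POSIX system this prints `/tmp/uploads` and `/a/c`.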
PPG Paints Arena in Pittsburgh, Pennsylvania was home to a fun night of fights, with a remarkable seven (T)KOs, one submission and two decisions, both of them split decisions.

Performances of the Night: Mike Perry and Uriah Hall

Krzysztof Jotko dominated the first round of his fight against the mercurial Uriah Hall, dropping the TUF product and landing dozens of hard shots over the course of the first five minutes. Hall has a reputation for crumbling under pressure, but he survived the onslaught and his perseverance paid off when he landed a clean right hand midway through the second round that knocked Jotko down and out, earning Hall a cool $50,000 bonus.

Alex Reyes stepped in on short notice to replace Thiago Alves against "Platinum" Mike Perry. Reyes is a natural lightweight and was making his UFC debut, which is the sort of scenario that often results in a one-sided showcase fight. Sure enough, that's what we saw as Perry walked through Reyes' shots and destroyed him with a brutal knee in the clinch to get the KO after just 79 seconds. Perry leaves Pittsburgh one step closer to a fight against a top contender and $50K richer.

Fight of the Night: Gregor Gillespie vs Jason Gonzalez

This lightweight scrap had something for everyone. The first round was filled with takedowns, grappling, ground-and-pound and straight-up, rock-'em-sock-'em back-and-forth brawling as both fighters gave it absolutely everything they had. The second round started at the same furious pace as the first, with both guys exchanging frenetic flurries of shots, until Gillespie picked his moment to shoot in, get the takedown and transition into a silky smooth arm triangle from mount. Fights like this truly display the full breadth of mixed martial arts and it's great to see both guys rewarded with $50,000 bonus checks for their roles in showcasing everything the sport has to offer.
// collect collects all accepted runes and returns them as a token. After that, // it sets the start position to the current position. func (i *isolate) collect(t token.Type) token.Token { tk := token.New( t, string(i.input[i.start:i.pos]), i.start, i.pos-i.start, ) i.start = i.pos return tk }
import { Component } from '@angular/core'; import { Router } from '@angular/router'; import { ActionSheetController } from '@ionic/angular'; @Component({ selector: 'app-tabs', templateUrl: 'tabs.page.html', styleUrls: ['tabs.page.scss'] }) export class TabsPage { constructor(public actionSheetController: ActionSheetController, private router: Router) {} async openOptions2Add() { const actionSheet = await this.actionSheetController.create({ header: 'Features', cssClass: 'my-custom-class', buttons: [{ text: 'Event', icon: 'calendar', handler: () => { this.router.navigate(['/add-event']) } }, { text: 'Task', icon: 'create', handler: () => { this.router.navigate(['/add-task']) } }, { text: 'Note', icon: 'clipboard', handler: () => { this.router.navigate(['/add-note']) } }, { text: 'Cancel', icon: 'close', role: 'cancel', handler: () => { console.log('Cancel clicked'); } }] }); await actionSheet.present(); } }
import itertools import math import numpy as np from bsoid_app.bsoid_utilities.likelihoodprocessing import boxcar_center def bsoid_extract(data, fps): """ Extracts features based on (x,y) positions :param data: list, csv data :param fps: scalar, input for camera frame-rate :return f_10fps: 2D array, extracted features """ win_len = np.int(np.round(0.05 / (1 / fps)) * 2 - 1) feats = [] for m in range(len(data)): dataRange = len(data[m]) dxy_r = [] dis_r = [] for r in range(dataRange): if r < dataRange - 1: dis = [] for c in range(0, data[m].shape[1], 2): dis.append(np.linalg.norm(data[m][r + 1, c:c + 2] - data[m][r, c:c + 2])) dis_r.append(dis) dxy = [] for i, j in itertools.combinations(range(0, data[m].shape[1], 2), 2): dxy.append(data[m][r, i:i + 2] - data[m][r, j:j + 2]) dxy_r.append(dxy) dis_r = np.array(dis_r) dxy_r = np.array(dxy_r) dis_smth = [] dxy_eu = np.zeros([dataRange, dxy_r.shape[1]]) ang = np.zeros([dataRange - 1, dxy_r.shape[1]]) dxy_smth = [] ang_smth = [] for l in range(dis_r.shape[1]): dis_smth.append(boxcar_center(dis_r[:, l], win_len)) for k in range(dxy_r.shape[1]): for kk in range(dataRange): dxy_eu[kk, k] = np.linalg.norm(dxy_r[kk, k, :]) if kk < dataRange - 1: b_3d = np.hstack([dxy_r[kk + 1, k, :], 0]) a_3d = np.hstack([dxy_r[kk, k, :], 0]) c = np.cross(b_3d, a_3d) ang[kk, k] = np.dot(np.dot(np.sign(c[2]), 180) / np.pi, math.atan2(np.linalg.norm(c), np.dot(dxy_r[kk, k, :], dxy_r[kk + 1, k, :]))) dxy_smth.append(boxcar_center(dxy_eu[:, k], win_len)) ang_smth.append(boxcar_center(ang[:, k], win_len)) dis_smth = np.array(dis_smth) dxy_smth = np.array(dxy_smth) ang_smth = np.array(ang_smth) feats.append(np.vstack((dxy_smth[:, 1:], ang_smth, dis_smth))) f_10fps = [] for n in range(0, len(feats)): feats1 = np.zeros(len(data[n])) for s in range(math.floor(fps / 10)): for k in range(round(fps / 10) + s, len(feats[n][0]), round(fps / 10)): if k > round(fps / 10) + s: feats1 = np.concatenate((feats1.reshape(feats1.shape[0], feats1.shape[1]), 
np.hstack((np.mean((feats[n][0:dxy_smth.shape[0], range(k - round(fps / 10), k)]), axis=1), np.sum((feats[n][dxy_smth.shape[0]:feats[n].shape[0], range(k - round(fps / 10), k)]), axis=1))).reshape(len(feats[0]), 1)), axis=1) else: feats1 = np.hstack((np.mean((feats[n][0:dxy_smth.shape[0], range(k - round(fps / 10), k)]), axis=1), np.sum((feats[n][dxy_smth.shape[0]:feats[n].shape[0], range(k - round(fps / 10), k)]), axis=1))).reshape(len(feats[0]), 1) f_10fps.append(feats1) return f_10fps def bsoid_predict(feats, clf): """ :param feats: list, multiple feats (original feature space) :param clf: Obj, MLP classifier :return nonfs_labels: list, label/100ms """ labels_fslow = [] for i in range(0, len(feats)): labels = clf.predict(feats[i].T) labels_fslow.append(labels) return labels_fslow
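The core of the extraction loop above is the pairwise-displacement step: columns come in (x, y) pairs, and `itertools.combinations` walks every unordered pair of tracked points. A minimal standalone sketch of just that step, using a made-up three-point frame (the coordinates are illustrative values, not from any real dataset):

```python
import itertools

import numpy as np

# One frame with three (x, y) points flattened into six columns:
# p0 = (0, 0), p1 = (3, 4), p2 = (6, 0).
frame = np.array([0.0, 0.0, 3.0, 4.0, 6.0, 0.0])

# For every unordered pair of points, subtract their (x, y) coordinates,
# mirroring the dxy computation in bsoid_extract above.
dxy = [frame[i:i + 2] - frame[j:j + 2]
       for i, j in itertools.combinations(range(0, frame.shape[0], 2), 2)]

# Euclidean length of each displacement (the dxy_eu step).
dists = [float(np.linalg.norm(v)) for v in dxy]
print(dists)  # pairwise distances p0-p1, p0-p2, p1-p2
```

For these points the distances come out as `[5.0, 6.0, 5.0]` — the familiar 3-4-5 triangle twice, plus the 6-unit base.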
export enum ERowStatus { Committed, New, Draft, Deleted }
/** * Resolve a value from a given enumeration into its name. * @param enumName * @param value * @return */ public String resolve(final String enumName, final int value) { HKXEnum enumContainer = contents.get(enumName); if(enumContainer != null) { return enumContainer.get(value); } return Integer.toString(value); }
// Copyright © 2018 | <NAME> | <EMAIL> //---------------------------------------------------------------------- // This work is free. You can redistribute it and/or modify it under the // terms of the Do What The Fuck You Want To Public License, Version 2, // as published by Sam Hocevar. See the LICENSE file for more details. //---------------------------------------------------------------------- #ifndef EIN_CAMERA_HPP #define EIN_CAMERA_HPP #include <glm/glm.hpp> class Camera { glm::dvec3 pos = glm::dvec3(0.0, 0.0, 0.0); glm::dvec3 front = glm::dvec3(0.0, 0.0, -1.0); glm::dvec3 up = glm::dvec3(0.0, 1.0, 0.0); double pitch = 0.0, yaw = 0.0; const double velocity = 1.0; glm::mat4 proj; public: //TODO: a bit mask or a normalized vector instead of this enum class Direction{forward, backward, left, right, up, down}; Camera(const glm::dvec3& position, const glm::dvec3& direction, const glm::mat4& projection) noexcept; void move(Direction direction, double dt) noexcept; void rotate(double yaw, double pitch) noexcept; glm::vec3 position() const noexcept { return glm::vec3(pos.x, pos.y, pos.z); } glm::mat4 view() const noexcept; glm::mat4 projection() const noexcept { return proj; }; }; #endif //EIN_CAMERA_HPP
// RegisterDevice registers a device to the driver. The driver will // establish connections to the device. func (d *driverImpl) RegisterDevice(device cgra.Device) { d.device = device d.establishConnectionOneSide(d.device, cgra.North) d.establishConnectionOneSide(d.device, cgra.South) d.establishConnectionOneSide(d.device, cgra.East) d.establishConnectionOneSide(d.device, cgra.West) }
/** * Initializes the specified {@link Authenticator} and registers it to * this service. * * @param authenticator Authenticator to initialize and register by type * @param directoryService configuration info to supply to the Authenticator during initialization * @throws javax.naming.Exception if initialization fails. */ private void register( Authenticator authenticator, DirectoryService directoryService ) throws LdapException { authenticator.init( directoryService ); authenticators.add( authenticator ); Collection<Authenticator> authenticatorList = getAuthenticators( authenticator.getAuthenticatorType() ); if ( authenticatorList == null ) { authenticatorList = new ArrayList<>(); authenticatorsMapByType.put( authenticator.getAuthenticatorType(), authenticatorList ); } if ( !authenticatorList.contains( authenticator ) ) { authenticatorList.add( authenticator ); } }
import os


def delete_temp_image(dataset, obj_id):
    # obj_id == 0 is a valid id even though it is falsy, so only bail out
    # when obj_id is empty/None and not the literal zero.
    if not obj_id and str(obj_id) != "0":
        return
    dataset_type = dataset.type  # renamed from `type` to avoid shadowing the built-in
    file_path = "media/" + str(dataset_type).lower() + "_" + str(dataset.id) + "_" + str(obj_id)
    if dataset_type == "HIPE":
        file_path += ".pdf"
    elif dataset_type == "MNIST":
        file_path += ".png"
    try:
        os.remove(file_path)
    except OSError:
        # The file may never have been written or was already cleaned up.
        pass
// Recover []byte to compact struct instance directly without copying // Time is money, yeah! // Memory is money, yeah! func deserialize(t, b, p []byte) *routingTable { tcnt, bcnt, pcnt := len(t), len(b), len(p) bsize, psize := int(unsafe.Sizeof(base_t{})), int(unsafe.Sizeof(pre_t{})) verifyLen(tcnt, 4) verifyLen(bcnt, bsize) verifyLen(pcnt, psize) return &routingTable{ trie: *(*[]uint32)(convertSlice(unsafe.Pointer(&t), tcnt/4)), base: *(*[]base_t)(convertSlice(unsafe.Pointer(&b), bcnt/bsize)), pre: *(*[]pre_t)(convertSlice(unsafe.Pointer(&p), pcnt/psize)), } }
/** * @author Guduru, Thirupathi Reddy * @modified 2/4/16 */ public class SingletonIndexWriter { private static SingletonIndexWriter ourInstance = null; private final IndexWriter indexWriter; public static SingletonIndexWriter getInstance(final LuceneIndexConfig luceneIndexConfig) { if (ourInstance == null) ourInstance = new SingletonIndexWriter(luceneIndexConfig); return ourInstance; } private SingletonIndexWriter(final LuceneIndexConfig luceneIndexConfig) { final Analyzer analyzer = new StandardAnalyzer(); final IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer); indexWriterConfig.setOpenMode(IndexWriterConfig.OpenMode.CREATE); indexWriterConfig.setMergePolicy(new TieredMergePolicy()); // merge policy indexWriterConfig.setMergeScheduler(new ConcurrentMergeScheduler()); // merge scheduler indexWriterConfig.setRAMBufferSizeMB(64);// which is Lucene sweet spot for RAM buffer. indexWriterConfig.setUseCompoundFile(false); indexWriterConfig.setMaxBufferedDeleteTerms(1); try { final Path path = FileSystems.getDefault().getPath(CommandLineConfig.getConfig(ConfigParameters.indexLocation)); final Directory directory = new MMapDirectory(path); indexWriter = new IndexWriter(directory, indexWriterConfig); } catch (IOException e) { throw new RuntimeException(e); } } public IndexWriter getIndexWriter() { return indexWriter; } }
/** * Generic functionality for PIX/PDQ auditing interceptors, * a kind of Template Method. * @author Dmytro Rud */ public class AuditInterceptorUtils { private static final transient Logger LOG = LoggerFactory.getLogger(AuditInterceptorUtils.class); private AuditInterceptorUtils() { throw new IllegalStateException("Helper class"); } /** * Performs ATNA auditing. Both input and output messages * are expected to be {@link MessageAdapter}s. * <p> * Does not produce any own exceptions, only rethrows exceptions * raised during the proper call. */ public static <T extends MllpAuditDataset> void doProcess(AuditInterceptor<T> interceptor, Exchange exchange) throws Exception { MessageAdapter<?> msg = exchange.getIn().getBody(MessageAdapter.class); // pass in case of non-auditable message types if( ! isAuditable(interceptor, msg)) { interceptor.getWrappedProcessor().process(exchange); return; } MllpAuditStrategy<T> strategy = interceptor.getAuditStrategy(); T auditDataset = createAndEnrichAuditDatasetFromRequest(strategy, exchange, msg); determineParticipantsAddresses(interceptor, exchange, auditDataset); boolean failed = false; try { interceptor.getWrappedProcessor().process(exchange); msg = resultMessage(exchange).getBody(MessageAdapter.class); enrichAuditDatasetFromResponse(strategy, auditDataset, msg); failed = ! AuditUtils.isPositiveAck(msg); } catch (Exception e) { failed = true; throw e; } finally { AuditUtils.finalizeAudit( auditDataset, interceptor.getMllpEndpoint().isAllowIncompleteAudit(), strategy, failed); } } /** * Checks whether the given message should be audited. * All exceptions are ignored. 
*/ private static <T extends MllpAuditDataset> boolean isAuditable(AuditInterceptor<T> interceptor, MessageAdapter<?> msg) { try { Message message = msg.getHapiMessage(); Terser terser = new Terser(message); // no audit for fragments 2..n if (ArrayUtils.contains(message.getNames(), "DSC") && StringUtils.isNotEmpty(terser.get("DSC-1"))) { return false; } String messageType = terser.get("MSH-9-1"); return interceptor.getMllpEndpoint().getHl7v2TransactionConfiguration().isAuditable(messageType); } catch (Exception e) { LOG.error("Exception when determining message auditability, no audit will be performed", e); return false; } } /** * Creates a new audit dataset and enriches it with data from the request * message. All exception are ignored. * @return * newly created audit dataset or <code>null</code> when creation failed. */ private static <T extends MllpAuditDataset> T createAndEnrichAuditDatasetFromRequest( MllpAuditStrategy<T> strategy, Exchange exchange, MessageAdapter<?> msg) { try { T auditDataset = strategy.createAuditDataset(); AuditUtils.enrichGenericAuditDatasetFromRequest(auditDataset, msg); strategy.enrichAuditDatasetFromRequest(auditDataset, msg, exchange); return auditDataset; } catch(Exception e) { LOG.error("Exception when enriching audit dataset from request", e); return null; } } /** * Enriches the given audit dataset with data from the response message. * All exception are ignored. */ private static <T extends MllpAuditDataset> void enrichAuditDatasetFromResponse( MllpAuditStrategy<T> strategy, T auditDataset, MessageAdapter<?> msg) { try { strategy.enrichAuditDatasetFromResponse(auditDataset, msg); } catch(Exception e) { LOG.error("Exception when enriching audit dataset from response", e); } } /** * Determines addresses of local and remote participants and stores them * into the audit dataset. All exception are ignored. 
*/ private static <T extends MllpAuditDataset> void determineParticipantsAddresses( AuditInterceptor<T> interceptor, Exchange exchange, T auditDataset) { try { interceptor.determineParticipantsAddresses(exchange, auditDataset); } catch(Exception e) { LOG.error("Exception when determining participants' addresses", e); } } }
package util import ( R "reflect" A "github.com/Foxcapades/Argonaut/v0/pkg/argo" ) var numericKinds = map[R.Kind]bool{ R.Int: true, R.Int8: true, R.Int16: true, R.Int32: true, R.Int64: true, R.Uint: true, R.Uint8: true, R.Uint16: true, R.Uint32: true, R.Uint64: true, R.Float32: true, R.Float64: true, } func GetRootValue(v R.Value) R.Value { // Used for recursion detection c := v haveAddr := false // see json.Unmarshaler indirect() if v.Kind() != R.Ptr && v.Type().Name() != "" && v.CanAddr() { haveAddr = true v = v.Addr() } for { if v.Kind() == R.Interface && !v.IsNil() { tmp := v.Elem() if tmp.Kind() == R.Ptr && !tmp.IsNil() { haveAddr = false v = tmp continue } } if v.Kind() != R.Ptr { break } if v.Elem().Kind() == R.Interface && v.Elem().Elem() == v { v = v.Elem() break } if v.IsNil() { v.Set(R.New(v.Type().Elem())) } if v.Type().AssignableTo(R.TypeOf((*A.Unmarshaler)(nil)).Elem()) { break } if haveAddr { v = c haveAddr = false } else { v = v.Elem() } } return v } var unmarshalerType = R.TypeOf((*A.Unmarshaler)(nil)).Elem() func IsUnmarshaler(t R.Type) bool { return t.AssignableTo(unmarshalerType) } func IsInterface(t R.Type) bool { return t.Kind() == R.Interface } func IsBasicKind(k R.Kind) bool { return k == R.String || k == R.Bool || IsNumericKind(k) } func IsNumericKind(k R.Kind) bool { return numericKinds[k] } func IsByteSlice(t R.Type) bool { return t.Kind() == R.Slice && t.Elem().Kind() == R.Uint8 } //func Compatible(val, test interface{}) bool { // vt := GetRootValue(R.ValueOf(val)).Type() // tt := R.TypeOf(test) // // if tt.Kind() == R.Ptr { // return tt.Elem().AssignableTo(vt) // } // // return tt.AssignableTo(vt) //} func Compatible(val, test *R.Value) bool { return val.Type().AssignableTo(test.Type()) }
#!/usr/local/bin/python
from pathlib import Path
import functools
import subprocess
# import readline
import time
import sys
import os
import importlib

import pegtree
import pegtree.treeconv as treeconv
from pegtree.terminal import DefaultConsole

bold = DefaultConsole.bold
color = DefaultConsole.color

'''
istty = True


def bold(s):
    return '\033[1m' + str(s) + '\033[0m' if istty else str(s)


COLOR = {
    "Black": '0;30', "DarkGray": '1;30',
    "Red": '0;31', "LightRed": '1;31',
    "Green": '0;32', "LightGreen": '1;32',
    "Orange": '0;33', "Yellow": '1;33',
    "Blue": '0;34', "LightBlue": '1;34',
    "Purple": '0;35', "LightPurple": '1;35',
    "Cyan": '0;36', "LightCyan": '1;36',
    "LightGray": '0;37', "White": '1;37',
}


def showing(pos, msg):
    if pos is None:
        print(msg)
    else:
        print(pos.showing(msg))


def log(type, pos, *msg):
    msg = ' '.join(map(str, msg))
    if type.startswith('err'):
        showing(pos, color('Red', '[error] ') + str(msg))
    elif type.startswith('warn'):
        showing(pos, color('Orange', '[warning] ') + str(msg))
    elif type.startswith('info') or type.startswith('notice'):
        showing(pos, color('Cyan', '[info] ') + str(msg))
    else:
        showing(pos, str(msg))
'''


def version():
    print(DefaultConsole.bold(
        'PEGTree - A PEG Parser Generator with Tree Annotation'))


def read_inputs(a):
    path = Path(a)
    if path.exists():
        f = path.open()
        data = f.read()
        f.close()
        return data
    else:
        return a


def readlines(prompt):
    s = input(prompt)
    if s != '':
        return s
    else:
        l = []
        while True:
            prev = s
            s = input()
            l.append(s)
            if prev == '' and s == '':
                break
        return '\n'.join(l)


def parse_options(argv):
    options = {
        '-v': ('verbose', True),
        '-verbose': ('verbose', True),
        '-g': ('grammar', None),
        '--grammar': ('grammar', None),
        '-s': ('start', None),
        '--start': ('start', None),
        '-e': ('expression', None),
        '--expr': ('expression', None),
        '-f': ('format', None),
        '--format': ('format', None),
        '-O0': ('-O', 0),
        '-O1': ('-O', 1),
        '-O2': ('-O', 2),
        '-O3': ('-O', 3),
    }

    def parse_each(a, d):
        first = a[0]
        if first.startswith('-'):
            if first
in options: key, value = options[first] if value is None: if len(a) > 1: d[key] = a[1] return a[2:] else: d[key] = value return a[1:] # d['inputs'].append(a) raise CommandUsageError else: d['inputs'].append(a[0]) return a[1:] d = {'inputs': [], '-O': 2, 'verbose': False} while len(argv) > 0: argv = parse_each(argv, d) #print('OPTION', d) if d['verbose']: DefaultConsole.isverbose = True return d class CommandUsageError(Exception): pass def usage(): print(bold('PEGTree - A PEG Parser Generator with Tree Annotation')) print("Usage: pegtree <command> options inputs") print(" -g | --grammar <file> specify a grammar file") print(" -e | --expr <expression> specify a parsing expression") print(" -s | --start <NAME> specify a starting rule") print(" -f | --format <file> specify an output format") print() print("Example:") print(" pegtree parse -g math.tpeg <inputs>") print(" pegtree example -g math.tpeg <inputs>") print(" pegtree pasm -g math.tpeg") print(" pegtree update") print() print("The most commonly used pegtree commands are:") print(" parse run a generated parser") print(" example test all examples") print(" pasm generate a pasm combinator function") print(" list all sample grammars") print(" update update pegtree (via pip)") showingTPEG = False def load_grammar(options, default=None): global showingTPEG expr = options.get('expression', None) if expr is not None: grammar = 'A = ' + expr options['urn'] = f'(-e {repr(expr)})' return pegtree.grammar(grammar, **options) file = options.get('grammar', default) if file is None: print('Enter a TPEG grammar') sb = [] try: while True: s = input() if s == '' or s is None: break sb.append(s) except: pass data = '\n'.join(sb) options['urn'] = '(stdin)' showingTPEG = False return pegtree.grammar(data, **options) if file == 'stdin.tpeg': data = sys.stdin.read() options['urn'] = file return pegtree.grammar(data, **options) options['urn'] = file return pegtree.grammar(file, **options) def generator(options): # if 'parser' in 
options: # m = importlib.import_module(options['parser']) # return m.generate return pegtree.generate def getstart(peg, options): if 'start' in options: return options['start'] return peg.start() def colorTree(t): if t.isSyntaxError(): return t.message(color('Red', 'Syntax Error')) else: # unconsumed = '' # if epos < len(t.inputs_): # unconsumed = ' + ' + color('Purple', t.inputs_[epos:]) sb = [] t.strOut(sb, token=lambda x: color('Blue', x), tag=lambda x: color('Cyan', x)) return "".join(sb) # parse command def sample(options): files = os.listdir(Path(__file__).parent / 'grammar') files.sort() for file in files: if file.endswith('.tpeg'): print(file) def tpeg(options): peg = load_grammar(options) print(peg) if '@@example' in peg: print() prefix = color('Blue', 'example') quote = color("Red", "'''") for testcase in peg['@@example']: name, doc = testcase name = bold(name) doc = doc.getToken() if doc.find('\n') > 0: print(prefix, name, quote) print(doc, quote, sep='\n') else: print(prefix, name, doc) def peg(options): options['isPurePEG'] = True peg = load_grammar(options) print(peg) def optimize(options): from pegtree.optimizer import prepare peg = load_grammar(options) start, refs, rules, memos = prepare(peg) for ref in refs: uname = ref.uname(peg) print(uname, '=', rules[uname]) print('memo:', ' '.join(memos)) def parse(options, conv=None): peg = load_grammar(options) parser = generator(options)(peg, **options) inputs = options['inputs'] tdump = treeconv.treedump(options, colorTree) if len(inputs) == 0: # Interactive Mode try: start = getstart(peg, options) while True: s = readlines(color('Blue', start) + bold(' <<< ')) tree = parser(s, urn='(stdin)') print(tdump(tree)) except (EOFError, KeyboardInterrupt): pass elif len(inputs) == 1: colorTree(read_inputs(inputs[0])) else: for file in options['inputs']: st = time.time() t = parser(read_inputs(file)) et = time.time() print(file, (et - st) * 1000.0, "[ms]:", t.tag) def example(options): peg = load_grammar(options) 
if '@@example' not in peg: return parsers = {} for testcase in peg['@@example']: name, doc = testcase if not name in peg: continue if not name in parsers: parsers[name] = generator(options)(peg, start=name) res = parsers[name](doc.inputs_, doc.urn_, doc.spos_, doc.epos_) # print() ok = doc.inputs_[doc.spos_:res.epos_] fail = doc.inputs_[res.epos_:doc.epos_] print(bold(f'parsing {name}'), color( 'Green', f'{ok}')+color('Red', f'{fail}'), bold('=> '), end='') print(colorTree(res)) def dumpError(lines, line, s): errs = 0 for t in s: cur = str(t) if t.isSyntaxError(): errs = 1 s = max(0, t.spos_ - 10) prev = t.inputs_[s:t.spos_] # print(line) print(lines, color('Green', f'{prev}') + color('Red', f'{cur}')) return errs def test(options): peg = load_grammar(options) parser = generator(options)(peg, **options) inputs = options['inputs'] st = time.time() lines = 0 fail = 0 try: for file in options['inputs']: with open(file) as f: for line in f: lines += 1 try: t = parser(line) fail += dumpError(lines, line, t) except RecursionError: print(color('Red', line)) fail += 1 except KeyboardInterrupt: pass et = time.time() if lines > 0: print(f'{fail}/{lines} {fail/lines} {(et - st) * 1000.0} ms') def pasm(options): from pegtree.nezcc import parsec peg = load_grammar(options) parsec(peg, **options) def jsonfy(options): from .visitor import JSONfy import json peg = load_grammar(options) parser = generator(options)(peg, **options) for file in options['inputs']: tree = parser(read_inputs(file)) value = JSONfy.convert(tree) print(json.dumps(value)) def pasmcc(options): from pegtree.nezcc import parsec peg = load_grammar(options) parsec(peg, **options) def cjtoken(options, conv=None): import pegtree.cj as cj peg = load_grammar(options) parser = generator(options)(peg, **options) inputs = options['inputs'] for file in options['inputs']: with open(file) as f: for line in f.readlines(): if line.startswith('#'): continue line = line.replace('\n', '') print(line) tree = parser(line) 
print(repr(tree)) for token in cj.tokenize(tree): print(repr(token)) print() def update(options): try: # pip3 install -U git+https://github.com/KuramitsuLab/pegpy.git subprocess.check_call( ['pip3', 'install', '-U', 'pegtree']) except: pass def update_beta(options): try: # pip3 install -U git+https://github.com/KuramitsuLab/pegtree.git subprocess.check_call( ['pip3', 'install', '-U', 'git+https://github.com/KuramitsuLab/pegtree.git']) except: pass def main(argv=sys.argv): names = globals() if len(argv) > 1: cmd = argv[1] options = parse_options(argv[2:]) cs = cmd.split('.') if len(cs) == 2: cmd = cs[0] options['ext'] = cs[1] if cmd in names: names[cmd](options) return usage() if __name__ == "__main__": main(sys.argv)
/**
 * Parses {@code version} and checks whether it is greater than or equal to
 * {@code <otherMajor, otherMinor, otherPatch>}, comparing the corresponding
 * version components in lexicographic order.
 *
 * @param version    a dotted version string, optionally with a suffix, e.g. "1.2.3" or "1.2.3-beta"
 * @param otherMajor the major component to compare against
 * @param otherMinor the minor component to compare against
 * @param otherPatch the patch component to compare against
 * @return true if {@code version} is at least the given version
 */
public static boolean isAtLeastVersion(final String version, final int otherMajor,
        final int otherMinor, final int otherPatch) {
    String[] parts = version.split("-")[0].split("\\.");
    int major = Integer.parseInt(parts[0]);
    int minor = Integer.parseInt(parts[1]);
    int patch = Integer.parseInt(parts[2]);

    int majorComparison = Integer.compare(major, otherMajor);
    if (majorComparison != 0) {
        return majorComparison > 0;
    }
    int minorComparison = Integer.compare(minor, otherMinor);
    if (minorComparison != 0) {
        return minorComparison > 0;
    }
    int patchComparison = Integer.compare(patch, otherPatch);
    if (patchComparison != 0) {
        return patchComparison > 0;
    }
    return true;
}
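Since Python compares tuples lexicographically, the same ordering logic can be sketched in a few lines. This is a hypothetical translation of the Java method above, not part of its source; the `split("-")[0].split(".")` parsing mirrors the Java code, and the `[:3]` slice is an added assumption to tolerate extra components.

```python
def is_at_least_version(version, other_major, other_minor, other_patch):
    # Drop any "-suffix" (e.g. "1.2.3-beta"), then parse the first three
    # dotted components as integers, as the Java method does.
    major, minor, patch = (int(p) for p in version.split("-")[0].split(".")[:3])
    # Tuple comparison is lexicographic: major first, then minor, then patch.
    return (major, minor, patch) >= (other_major, other_minor, other_patch)


print(is_at_least_version("3.4.1-beta", 3, 4, 0))  # True: same major/minor, patch 1 >= 0
print(is_at_least_version("3.4.1", 3, 5, 0))       # False: minor 4 < 5
```

Collapsing the three `Integer.compare` cascades into one tuple comparison is the payoff: both implementations agree on every input that parses.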
/* * check if clicked scrollbar and action it */ bool ThumbManager::clickedScrollbar(const Point& p) { if(_imageCount<THUMBS_PER_PAGE || !_scrollbar.containsPoint(p)) return false; if(p.X<_thumbx) _first-=THUMBS_PER_PAGE; else if(p.X>_thumbx+SCROLLBAR_KNOB_WIDTH) _first+=THUMBS_PER_PAGE; if(_first<0) _first=0; else if(_first>_imageCount-THUMBS_PER_PAGE) _first=_imageCount-THUMBS_PER_PAGE; _last=_first+THUMBS_PER_PAGE-1; drawThumbs(); drawScrollbar(); _touchManager.waitForPenUp(); return true; }
/** * Data computed during drawing. */ static final class DrawingData { private int mRemainderHorizontal; private int mRemainderVertical; private float mHorizontalPatchesSum; private float mVerticalPatchesSum; }
I am deeply disappointed that Senate Republicans have once again refused to do their job and give well-qualified nominees to the federal bench the yes-or-no votes they deserve. The D.C. Circuit, considered the Nation’s second-highest court, has three vacancies. These are judgeships created by Congress. Chief Justice John Roberts and the Judicial Conference of the United States believe that these vacancies should be filled, not removed. And my constitutional duty as President is to nominate highly qualified individuals to fill these vacancies. Patricia Millett, Nina Pillard, and Judge Robert Wilkins have all received the highest possible rating from the non-partisan American Bar Association. They have broad bipartisan support, and no one has questioned their merit. Yet Senate Republicans have blocked all three from receiving a yes-or-no vote. This obstruction is completely unprecedented. Four of my predecessor’s six nominees to the D.C. Circuit were confirmed. Four of my five nominees to this court have been obstructed. When it comes to judicial nominations, I am fulfilling my constitutional responsibility, but Congress is not. Instead, Senate Republicans are standing in the way of a fully-functioning judiciary that serves the American people. The American people and our judicial system deserve better. A majority of the United States Senate supports these three extraordinary nominees, and it is time for simple yes-or-no votes without further obstruction or delay.
/** * @author frekele - Leandro Kersting de Freitas */ @XmlRootElement @XmlAccessorType(XmlAccessType.FIELD) public class Payment implements ClearSaleEntity { private static final long serialVersionUID = 1L; @JsonDeserialize(using = OffsetDateTimeJsonDeserialize.class) @JsonSerialize(using = OffsetDateTimeJsonSerialize.class) @JsonProperty("BirthDate") private OffsetDateTime birthDate; @JsonProperty("Amount") private BigDecimal amount; @JsonProperty("Type") private String type; @JsonProperty("QtyInstallments") private String qtyInstallments; @JsonProperty("CardNumber") private String cardNumber; @JsonProperty("CardBin") private String cardBin; @JsonProperty("CardEndNumber") private String cardEndNumber; @JsonProperty("CardType") private String cardType; @JsonProperty("CardExpirationDate") private String cardExpirationDate; @JsonProperty("CardHolderName") private String cardHolderName; @JsonProperty("Address") private String address; @JsonProperty("Nsu") private String nsu; @JsonProperty("Currency") private String currency; public Payment() { super(); } }
import { Document, Model, Schema } from 'mongoose'; import { IUser } from "../../domain/entities/types"; import { User } from "../../domain/entities/User"; export interface IUserDocument extends Omit<IUser, '_id'>, Document { } export interface IUserModel extends IUser, Model<IUserDocument> { toUser(user: IUser): User; } const UserSchema: Schema<IUserDocument> = new Schema<IUserDocument>( { firstName: { type: Schema.Types.String, required: true, }, lastName: { type: Schema.Types.String, required: true, }, userName: { type: Schema.Types.String, required: true, }, email: { type: Schema.Types.String, unique: true, required: false, }, city: { type: Schema.Types.String, required: false, }, avatar: { type: Schema.Types.String, required: false, default: "#000000", }, description: { type: Schema.Types.String, required: false, } }, { timestamps: true, } ); UserSchema.statics.toUser = (user: IUser) => { return new User({ _id: user._id.toString(), firstName: user.firstName, lastName: user.lastName, userName: user.userName, email: user.email, city: user.city, avatar: user.avatar, description: user.description, }) }; export default UserSchema;
/** * * * <pre> * Indicates that a live agent should be brought in to handle the * interaction with the user. In most cases, when you set this flag to true, * you would also want to set end_interaction to true as well. Default is * false. * </pre> * * <code>bool live_agent_handoff = 7;</code> * * @return This builder for chaining. */ public Builder clearLiveAgentHandoff() { liveAgentHandoff_ = false; onChanged(); return this; }
import { assert } from "chai"
import { parse, parseReply } from "../../src/parser"
import { FinalReply } from "../../src/reply"
import { TokenIter } from "../../src/parser/lexer"
import { S_Exp, RootExpr } from "../../src/s-exps"

// Idris 2 only.
describe("Parsing :generate-def reply", () => {
  it("can parse a success sexp.", () => {
    const sexp = `(:return (:ok "append [] ys = ys\nappend (x :: xs) [] = x :: append xs []\nappend (x :: xs) (y :: ys) = x :: append xs (y :: ys)") 3)`
    const payload: S_Exp.GenerateDef = [
      ":ok",
      "append [] ys = ys\nappend (x :: xs) [] = x :: append xs []\nappend (x :: xs) (y :: ys) = x :: append xs (y :: ys)",
    ]
    const rootExpr: RootExpr = [":return", payload, 3]
    const expected: FinalReply.GenerateDef = {
      def: "append [] ys = ys\nappend (x :: xs) [] = x :: append xs []\nappend (x :: xs) (y :: ys) = x :: append xs (y :: ys)",
      id: 3,
      ok: true,
      type: ":return",
    }
    const tokens = new TokenIter(sexp)
    const exprs = parse(tokens) as RootExpr
    assert.deepEqual(exprs, rootExpr)
    const parsed = parseReply(rootExpr, ":generate-def-next")
    assert.deepEqual(parsed, expected)
  })

  it("can parse a failure sexp.", () => {
    const sexp = `(:return (:error "No more results") 16)`
    const payload: S_Exp.GenerateDef = [":error", "No more results"]
    const rootExpr: RootExpr = [":return", payload, 16]
    const expected: FinalReply.GenerateDef = {
      err: "No more results",
      id: 16,
      ok: false,
      type: ":return",
    }
    const tokens = new TokenIter(sexp)
    const exprs = parse(tokens) as RootExpr
    assert.deepEqual(exprs, rootExpr)
    const parsed = parseReply(rootExpr, ":generate-def-next")
    assert.deepEqual(parsed, expected)
  })
})
{-# LANGUAGE BangPatterns #-} {-# LANGUAGE DuplicateRecordFields #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE NamedFieldPuns #-} {-# LANGUAGE OverloadedStrings #-} module Json.Error ( -- * Types Error(..) -- * Encoding , encode , builderUtf8 ) where import Data.Bytes.Builder (Builder) import Data.Text.Short (ShortText) import Json.Context (Context(..)) import Data.ByteString.Short.Internal (ShortByteString(SBS)) import qualified Data.Bytes.Builder as Builder import qualified Data.Bytes.Chunks as ByteChunks import qualified Data.Primitive as PM import qualified Data.Text.Short.Unsafe as TS import qualified Json.Context as Context -- | A single error message. data Error = Error { message :: !ShortText , context :: !Context } deriving (Eq,Show) ba2st :: PM.ByteArray -> ShortText {-# inline ba2st #-} ba2st (PM.ByteArray x) = TS.fromShortByteStringUnsafe (SBS x) encode :: Error -> ShortText encode p = ba2st (ByteChunks.concatU (Builder.run 128 (builderUtf8 p))) builderUtf8 :: Error -> Builder builderUtf8 Error{message,context} = Context.builderUtf8 context <> Builder.ascii2 ':' ' ' <> Builder.shortTextUtf8 message
package kalman import ( "fmt" "testing" "github.com/konimarti/lti" "gonum.org/v1/gonum/mat" ) // Testing based on example on page 145 in book "Kalman Filter" by <NAME>, 2017 //newContext for Rose Filter func newRoseContext() *Context { // define current context ctx := Context{ X: mat.NewVecDense(1, []float64{0.04280385872149909}), P: mat.NewDense(1, 1, []float64{0}), } return &ctx } //newRoseFilter is a helper functions for tests func newRoseFilter() *roseImpl { // define LTI system rose := NewRoseFilter( lti.Discrete{ Ad: mat.NewDense(1, 1, []float64{1}), Bd: mat.NewDense(1, 1, []float64{0}), C: mat.NewDense(1, 1, []float64{1}), D: mat.NewDense(1, 1, []float64{0}), }, mat.NewDense(1, 1, []float64{1}), // Gd 9.0, // Gamma 0.5, // AlphaR 0.3, // AlphaM ) return rose.(*roseImpl) } func TestRoseFilter(t *testing.T) { ctx := newRoseContext() filter := newRoseFilter() ctrl := mat.NewVecDense(1, nil) // init filter for comparison testing y := filter.Std.Lti.Response(ctx.X, ctrl) filter.Rose.E1 = y filter.Rose.EE1.Mul(y, y.T()) config := []struct { Iter int Input []float64 Expected []float64 }{ { Iter: 1, Input: []float64{ 0.04280385872149909, }, Expected: []float64{ 0.04280385872149909, }, }, { Iter: 2, Input: []float64{ -0.09725182469943415, }, Expected: []float64{ 0.04280385872149909, }, }, { Iter: 3, Input: []float64{ 0.002742478388650294, }, Expected: []float64{ 0.04280385872149909, }, }, } for _, cfg := range config { z := mat.NewVecDense(1, cfg.Input) filteredResult := filter.Apply(ctx, z, ctrl) expectedResult := mat.NewVecDense(1, cfg.Expected) if !mat.EqualApprox(expectedResult, filteredResult, 1e-4) { fmt.Println("actual:", filteredResult) fmt.Println("expected:", expectedResult) t.Error("ApplyFilter:", cfg.Iter) } } }
/// Gathers semantic information about vertex, face and edge elements and /// their respective properties. /// /// Vertices are required, the other elements are not. Furthermore, faces /// have to be stored after vertices and edges have to be stored after /// faces. An error is returned if any of those properties is violated. pub fn new(elements: &[ElementDef]) -> Result<Self, Error> { let vertex_pos = elements.iter() .position(|e| VERTEX_ELEMENT_NAMES.contains(&e.name.as_str())) .ok_or_else(|| invalid_input!("no 'vertex' elements in PLY file"))?; let face_pos = elements.iter().position(|e| FACE_ELEMENT_NAMES.contains(&e.name.as_str())); if let Some(face_pos) = face_pos { // Faces can only be in the file after vertices. if face_pos < vertex_pos { return Err(invalid_input!( "found 'face' elements before 'vertex' elements (that's not allowed)" )); } } let edge_pos = elements.iter().position(|e| EDGE_ELEMENT_NAMES.contains(&e.name.as_str())); if let Some(edge_pos) = edge_pos { // Edges can only be in the file after vertices and faces. if face_pos.is_none() || edge_pos < face_pos.unwrap() { let problem = if face_pos.is_none() { "but no" } else { "before" }; return Err(invalid_input!( "found 'edge' elements {} 'face' elements (that's not allowed as \ LOX can't add edges on their own; edges always need to be part of a face)", problem, )); } } Ok(Self { vertex: VertexInfo::new(&elements[vertex_pos])?, face: face_pos.map(|pos| FaceInfo::new(&elements[pos])).transpose()?, edge: edge_pos.map(|pos| EdgeInfo::new(&elements[pos])).transpose()?, }) }
/** * Show search results map screen - using ShowSearchResultsOnMapEvent event */ private void showSearchResultsMapScreen(final ShowSearchResultsOnMapEvent event) { final Trip trip = singletonComponents.getTripService().getTrip(event.getTrip().getKey()); final int day = event.getTripDay(); final String key = event.getSearchResultsKey(); final SearchType type = (searchService.extractKeyProperty(key, Page.QUERY_TYPE)).equals(SearchType.GOOGLE .toString()) ? SearchType.GOOGLE : SearchType.LP; final String query = searchService.extractKeyProperty(key, Page.SEARCH_QUERY); final HasLatLngBounds searchBounds = searchService.stringToBounds(searchService.extractKeyProperty(key, Page.SEARCH_BOUNDS)); toast.showLoading(messages.searching(type.equals(SearchType.GOOGLE) ? query : POIType.getDisplayString(query))); searchService.search(type, query, searchBounds, trip.getLocation(), new SearchResultsListener() { @Override public void onSuccess(List<SearchItem> results) { toast.hideLoading(); showSearchResultsInMap(trip, day, results, key, event.isHistoryEvent()); } @Override public void onFailure(Throwable caught) { toast.hideLoading(); toast.showToast(messages.searchError()); } }); }
use crate::{ cairo::lang::{ compiler::program::Program, instances::CairoLayout, vm::{ builtin_runner::{BuiltinRunner, Error as BuiltinRunnerError}, memory_dict::{Error as MemoryDictError, MemoryDict}, memory_segments::{Error as MemorySegmentError, MemorySegmentManager}, output_builtin_runner::OutputBuiltinRunner, relocatable::{MaybeRelocatable, RelocatableValue}, utils::RunResources, vm_core::{RunContext, VirtualMachine, VirtualMachineError}, vm_exceptions::VmException, }, }, hint_support::StaticLocals, }; use num_bigint::BigInt; use std::{ cell::RefCell, collections::{HashMap, HashSet}, rc::Rc, }; pub type BuiltinRunnerMap = HashMap<String, Box<dyn BuiltinRunner>>; type BuiltinRunnerFactory = dyn Fn(&str, bool) -> Box<dyn BuiltinRunner>; #[derive(Debug)] pub struct CairoRunner { pub program: Rc<Program>, pub instance: CairoLayout, pub builtin_runners: Rc<RefCell<BuiltinRunnerMap>>, pub original_steps: Option<BigInt>, pub proof_mode: bool, pub allow_missing_builtins: bool, pub memory: Rc<RefCell<MemoryDict>>, pub segments: Rc<RefCell<MemorySegmentManager>>, pub segment_offsets: Option<HashMap<BigInt, BigInt>>, pub final_pc: Option<RelocatableValue>, /// Flag used to ensure a safe use. pub run_ended: bool, /// Flag used to ensure a safe use. pub segments_finalized: bool, /// A set of memory addresses accessed by the VM, after relocation of temporary segments into /// real ones. 
pub accessed_addresses: Option<HashSet<RelocatableValue>>, pub program_base: Option<RelocatableValue>, pub execution_base: Option<RelocatableValue>, pub execution_public_memory: Option<Vec<BigInt>>, pub initial_pc: Option<RelocatableValue>, pub initial_ap: Option<RelocatableValue>, pub initial_fp: Option<RelocatableValue>, pub vm: Option<VirtualMachine>, } #[derive(Debug, thiserror::Error)] pub enum Error { #[error("Builtins {non_existing_builtins:?} are not present in layout \"{layout}\"")] BuiltinsNotPresent { non_existing_builtins: Vec<String>, layout: String, }, #[error("The {name} builtin is not supported.")] BuiltinNotSupported { name: String }, #[error("The builtins specified by the %builtins directive must be subsequence of {supported_builtin_list:?}. Got {program_builtins:?}.")] BuiltinsNotSubsequence { supported_builtin_list: Vec<String>, program_builtins: Vec<String>, }, #[error("Missing builtin.")] MissingBuiltin, #[error("Missing main().")] MissingMain, #[error("Segments not initialized.")] SegmentsNotInitialized, #[error("Function entrypoint not initialized.")] FunctionEntrypointNotInitialized, #[error("State not initialized.")] StateNotInitialized, #[error("VM not initialized.")] VmNotInitialized, #[error(transparent)] MemoryDictError(MemoryDictError), #[error(transparent)] MemorySegmentError(MemorySegmentError), #[error(transparent)] VmError(VmException), #[error(transparent)] VirtualMachineError(VirtualMachineError), #[error(transparent)] BuiltinRunnerError(BuiltinRunnerError), #[error("end_run called twice")] EndRunCalledTwice, #[error("Run must be ended before calling read_return_values.")] RunNotEnded, #[error("The stop pointer of the missing builtin \"{builtin_name}\" must be 0.")] NonZeroMissingBuiltinStopPointer { builtin_name: String }, #[error("Cannot add the return values to the public memory after segment finalization.")] CannotAddReturnValuesAfterSegmentFinalization, #[error("Unexpected builtin type")] UnexpectedBuiltinType, 
#[error("Unexpected None value")] UnexpectedNoneValue, } impl CairoRunner { pub fn new( program: Rc<Program>, instance: CairoLayout, memory: MemoryDict, proof_mode: bool, allow_missing_builtins: bool, ) -> Result<Self, Error> { if !allow_missing_builtins { let mut non_existing_builtins = vec![]; for program_builtin in program.builtins().iter() { if !instance.builtins.contains_key(program_builtin) { non_existing_builtins.push(program_builtin.to_owned()); } } if !non_existing_builtins.is_empty() { return Err(Error::BuiltinsNotPresent { non_existing_builtins, layout: instance.layout_name.to_owned(), }); } } let mut builtin_runners = HashMap::new(); let mut builtin_factories: HashMap<String, Box<BuiltinRunnerFactory>> = HashMap::new(); builtin_factories.insert(String::from("output"), Box::new(output_builtin_factory)); builtin_factories.insert(String::from("pedersen"), Box::new(pedersen_builtin_factory)); builtin_factories.insert( String::from("range_check"), Box::new(range_check_builtin_factory), ); builtin_factories.insert(String::from("ecdsa"), Box::new(ecdsa_builtin_factory)); builtin_factories.insert(String::from("bitwise"), Box::new(bitwise_builtin_factory)); // TODO: implement the following builtin factories // // ```python // builtin_factories = dict( // pedersen=lambda name, included: HashBuiltinRunner( // name=name, // included=included, // ratio=instance.builtins["pedersen"].ratio, // hash_func=pedersen_hash, // ), // range_check=lambda name, included: RangeCheckBuiltinRunner( // included=included, // ratio=instance.builtins["range_check"].ratio, // inner_rc_bound=2 ** 16, // n_parts=instance.builtins["range_check"].n_parts, // ), // ecdsa=lambda name, included: SignatureBuiltinRunner( // name=name, // included=included, // ratio=instance.builtins["ecdsa"].ratio, // process_signature=process_ecdsa, // verify_signature=verify_ecdsa_sig, // ), // bitwise=lambda name, included: BitwiseBuiltinRunner( // included=included, 
bitwise_builtin=instance.builtins["bitwise"] // ), // ) // ``` let supported_builtin_list: Vec<String> = builtin_factories.keys().cloned().collect(); if program .builtins() .iter() .any(|item| !supported_builtin_list.contains(item)) { return Err(Error::BuiltinsNotSubsequence { supported_builtin_list, program_builtins: program.builtins().to_vec(), }); } for (name, _) in instance.builtins.iter() { let factory = builtin_factories .get(name) .ok_or(Error::BuiltinNotSupported { name: name.to_owned(), })?; let included = program.builtins().contains(name); // In proof mode all the builtin_runners are required. if included || proof_mode { builtin_runners.insert(format!("{}_builtin", &name), factory(name, included)); } } let memory = Rc::new(RefCell::new(memory)); let segments = Rc::new(RefCell::new(MemorySegmentManager::new( memory.clone(), program.prime().clone(), ))); Ok(Self { program, instance, builtin_runners: Rc::new(RefCell::new(builtin_runners)), original_steps: None, proof_mode, allow_missing_builtins, memory, segments, segment_offsets: None, final_pc: None, run_ended: false, segments_finalized: false, accessed_addresses: None, program_base: None, execution_base: None, execution_public_memory: None, initial_pc: None, initial_ap: None, initial_fp: None, vm: None, }) } pub fn initialize_segments(&mut self) { // Program segment. self.program_base = Some(self.segments.borrow_mut().add(None)); // Execution segment. self.execution_base = Some(self.segments.borrow_mut().add(None)); // Builtin segments. for builtin_runner in self.builtin_runners.borrow_mut().values_mut() { builtin_runner.initialize_segments(&mut self.segments.borrow_mut()); } } /// Initializes state for running a program from the main() entrypoint. If self.proof_mode == /// True, the execution starts from the start label rather then the main() function. /// /// Returns the value of the program counter after returning from main. 
pub fn initialize_main_entrypoint(&mut self) -> Result<RelocatableValue, Error> { self.execution_public_memory = Some(vec![]); let mut stack: Vec<MaybeRelocatable> = vec![]; for builtin_name in self.program.builtins().iter() { match self .builtin_runners .borrow_mut() .get_mut(&format!("{}_builtin", builtin_name)) { Some(builtin_runner) => { for item in builtin_runner.initial_stack().into_iter() { stack.push(item); } } None => { if !self.allow_missing_builtins { return Err(Error::MissingBuiltin); } else { stack.push(MaybeRelocatable::Int(BigInt::from(0u8))); } } } } if self.proof_mode { // TODO: implement the following Python code // // ```python // # Add the dummy last fp and pc to the public memory, so that the verifier can enforce // # [fp - 2] = fp. // stack_prefix: List[MaybeRelocatable] = [self.execution_base + 2, 0] // stack = stack_prefix + stack // self.execution_public_memory = list(range(len(stack))) // // assert isinstance( // self.program, Program // ), "--proof_mode cannot be used with a StrippedProgram." 
// self.initialize_state(self.program.start, stack) // self.initial_fp = self.initial_ap = self.execution_base + 2 // return self.program_base + self.program.get_label("__end__") // ``` todo!() } else { let return_fp = self.segments.borrow_mut().add(None); match self.program.main() { Some(main) => self.initialize_function_entrypoint(&main, stack, return_fp.into()), None => Err(Error::MissingMain), } } } pub fn initialize_function_entrypoint( &mut self, entrypoint: &BigInt, args: Vec<MaybeRelocatable>, return_fp: MaybeRelocatable, ) -> Result<RelocatableValue, Error> { let end = self.segments.borrow_mut().add(None); let mut stack = args; stack.push(return_fp); stack.push(end.clone().into()); self.initialize_state(entrypoint, &stack)?; self.initial_fp = Some(self.execution_base()?.to_owned() + &BigInt::from(stack.len())); self.initial_ap = self.initial_fp.clone(); self.final_pc = Some(end.clone()); Ok(end) } pub fn initialize_state( &mut self, entrypoint: &BigInt, stack: &[MaybeRelocatable], ) -> Result<(), Error> { self.initial_pc = Some(self.program_base()?.to_owned() + entrypoint); // Load program. self.load_data( self.program_base()?.to_owned().into(), &self .program .data() .iter() .map(|item| item.to_owned().into()) .collect::<Vec<_>>(), ); // Load stack. 
self.load_data( self.execution_base()?.to_owned().into(), &stack.iter().map(|item| item.to_owned()).collect::<Vec<_>>(), ); Ok(()) } pub fn initialize_vm( &mut self, hint_locals: HashMap<String, ()>, _static_locals: (), ) -> Result<(), Error> { let context = RunContext::new( self.memory.clone(), self.initial_pc()?.to_owned().into(), self.initial_ap()?.to_owned().into(), self.initial_fp()?.to_owned().into(), self.program.prime().clone(), ); self.vm = Some(VirtualMachine::new( self.program.clone(), Rc::new(RefCell::new(context)), hint_locals, StaticLocals { segments: self.segments.clone(), }, Some(self.builtin_runners.clone()), Some(self.program_base()?.to_owned().into()), )); // TODO: implement the following Python code // // ```python // for builtin_runner in self.builtin_runners.values(): // builtin_runner.add_validation_rules(self) // builtin_runner.add_auto_deduction_rules(self) // // self.vm.validate_existing_memory() // ``` Ok(()) } /// Runs the VM until pc reaches 'addr', and stop right before that instruction is executed. pub fn run_until_pc( &mut self, addr: MaybeRelocatable, run_resources: Option<RunResources>, ) -> Result<(), Error> { let mut run_resources = run_resources.unwrap_or(RunResources { n_steps: None }); while self.vm()?.run_context.borrow().pc != addr && !run_resources.consumed() { self.vm_step()?; run_resources.consume_step(); } if self.vm()?.run_context.borrow().pc != addr { // TODO: implement `as_vm_exception` on `vm` and switch over // Error: End of program was not reached Err(Error::VmError(VmException {})) } else { Ok(()) } } pub fn vm_step(&mut self) -> Result<(), Error> { if &self.vm()?.run_context.borrow().pc == self.final_pc()? { // TODO: implement `as_vm_exception` on `vm` and switch over // Error: Execution reached the end of the program. 
return Err(Error::VmError(VmException {})); } self.vm_mut()?.step()?; Ok(()) } pub fn end_run( &mut self, disable_trace_padding: bool, disable_finalize_all: bool, ) -> Result<(), Error> { if self.run_ended { return Err(Error::EndRunCalledTwice); } self.accessed_addresses = { let mut vm_memory = self.memory.borrow_mut(); Some( self.vm()? .accessed_addresses .iter() .map(|addr| match vm_memory.relocate_value(addr.to_owned()) { MaybeRelocatable::Int(_) => { panic!("unexpected variant: MaybeRelocatable::Int") } MaybeRelocatable::RelocatableValue(value) => value, }) .collect::<HashSet<_>>(), ) }; self.memory.borrow_mut().relocate_memory()?; self.vm_mut()?.end_run()?; if disable_finalize_all { // For tests. return Ok(()); } // Freeze to enable caching; No changes in memory should be made from now on. self.memory.borrow_mut().freeze(); // Deduce the size of each segment from its usage. self.segments.borrow_mut().compute_effective_sizes(false)?; if self.proof_mode && !disable_trace_padding { // TODO: implement the following Python code // // ```python // self.run_until_next_power_of_2() // while not self.check_used_cells(): // self.run_for_steps(1) // self.run_until_next_power_of_2() // ``` todo!() } self.run_ended = true; Ok(()) } /// Reads builtin return values (end pointers) and adds them to the public memory. /// Note: end_run() must precede a call to this method. pub fn read_return_values(&self) -> Result<(), Error> { if !self.run_ended { return Err(Error::RunNotEnded); } let mut pointer = self.vm()?.run_context.borrow().ap.clone(); for builtin_name in self.program.builtins().iter().rev() { match self .builtin_runners .borrow_mut() .get_mut(&format!("{}_builtin", builtin_name)) { Some(builtin_runner) => { pointer = builtin_runner.final_stack(self, pointer)?; } None => { if !self.allow_missing_builtins { return Err(Error::MissingBuiltin); } pointer = pointer - &BigInt::from(1u32).into(); if self.memory.borrow_mut().index(&pointer)? 
!= MaybeRelocatable::Int(BigInt::from(0u32)) { return Err(Error::NonZeroMissingBuiltinStopPointer { builtin_name: builtin_name.to_owned(), }); } } } } if self.segments_finalized { return Err(Error::CannotAddReturnValuesAfterSegmentFinalization); } // TODO: implement the following Python code // // ```python // # Add return values to public memory. // self.execution_public_memory += list( // range(pointer - self.execution_base, self.vm.run_context.ap - self.execution_base) // ) // ``` Ok(()) } /// Writes data into the memory at address ptr and returns the first address after the data. pub fn load_data( &mut self, ptr: MaybeRelocatable, data: &[MaybeRelocatable], ) -> MaybeRelocatable { self.segments.borrow_mut().load_data(ptr, data) } // TODO: implement `output_callback` pub fn print_output(&self) -> Result<(), Error> { if let Some(output_runner) = self.builtin_runners.borrow().get("output_builtin") { let output_runner = output_runner .as_any() .downcast_ref::<OutputBuiltinRunner>() .ok_or(Error::UnexpectedBuiltinType)?; println!("Program output:"); let (_, size) = output_runner.get_used_cells_and_allocated_size(self)?; let mut i = BigInt::from(0u32); while i < size { match self.memory.borrow_mut().get( &(output_runner .base .clone() .ok_or(Error::UnexpectedNoneValue)? 
+ &i) .into(), None, ) { Some(val) => { println!(" {}", val); } None => { println!(" <missing>"); } } i += BigInt::from(1u32); } println!(); } Ok(()) } fn program_base(&self) -> Result<&RelocatableValue, Error> { self.program_base .as_ref() .ok_or(Error::SegmentsNotInitialized) } fn execution_base(&self) -> Result<&RelocatableValue, Error> { self.execution_base .as_ref() .ok_or(Error::SegmentsNotInitialized) } fn final_pc(&self) -> Result<&RelocatableValue, Error> { self.final_pc .as_ref() .ok_or(Error::FunctionEntrypointNotInitialized) } fn initial_pc(&self) -> Result<&RelocatableValue, Error> { self.initial_pc.as_ref().ok_or(Error::StateNotInitialized) } fn initial_ap(&self) -> Result<&RelocatableValue, Error> { self.initial_ap.as_ref().ok_or(Error::StateNotInitialized) } fn initial_fp(&self) -> Result<&RelocatableValue, Error> { self.initial_fp.as_ref().ok_or(Error::StateNotInitialized) } fn vm(&self) -> Result<&VirtualMachine, Error> { self.vm.as_ref().ok_or(Error::VmNotInitialized) } fn vm_mut(&mut self) -> Result<&mut VirtualMachine, Error> { self.vm.as_mut().ok_or(Error::VmNotInitialized) } } impl From<MemoryDictError> for Error { fn from(value: MemoryDictError) -> Self { Self::MemoryDictError(value) } } impl From<MemorySegmentError> for Error { fn from(value: MemorySegmentError) -> Self { Self::MemorySegmentError(value) } } impl From<VirtualMachineError> for Error { fn from(value: VirtualMachineError) -> Self { Self::VirtualMachineError(value) } } impl From<BuiltinRunnerError> for Error { fn from(value: BuiltinRunnerError) -> Self { Self::BuiltinRunnerError(value) } } fn output_builtin_factory(_name: &str, included: bool) -> Box<dyn BuiltinRunner> { Box::new(OutputBuiltinRunner::new(included)) } fn pedersen_builtin_factory(_name: &str, _included: bool) -> Box<dyn BuiltinRunner> { todo!() } fn range_check_builtin_factory(_name: &str, _included: bool) -> Box<dyn BuiltinRunner> { todo!() } fn ecdsa_builtin_factory(_name: &str, _included: bool) -> Box<dyn 
BuiltinRunner> { todo!() } fn bitwise_builtin_factory(_name: &str, _included: bool) -> Box<dyn BuiltinRunner> { todo!() } #[cfg(test)] mod tests { use super::*; use crate::cairo::lang::compiler::program::FullProgram; #[test] fn test_run_past_end() { let program = serde_json::from_str::<FullProgram>(include_str!( "../../../../test-data/artifacts/run_past_end.json" )) .unwrap(); let mut runner = CairoRunner::new( Rc::new(program.into()), CairoLayout::plain_instance(), MemoryDict::new(), false, false, ) .unwrap(); runner.initialize_segments(); let end = runner.initialize_main_entrypoint().unwrap(); runner.initialize_vm(HashMap::new(), ()).unwrap(); runner.run_until_pc(end.into(), None).unwrap(); runner.end_run(false, false).unwrap(); runner.read_return_values().unwrap(); } #[test] fn test_bad_stop_ptr() { let program = serde_json::from_str::<FullProgram>(include_str!( "../../../../test-data/artifacts/bad_stop_ptr.json" )) .unwrap(); let mut runner = CairoRunner::new( Rc::new(program.into()), CairoLayout::small_instance(), MemoryDict::new(), false, false, ) .unwrap(); runner.initialize_segments(); let end = runner.initialize_main_entrypoint().unwrap(); runner.initialize_vm(HashMap::new(), ()).unwrap(); runner.run_until_pc(end.into(), None).unwrap(); runner.end_run(false, false).unwrap(); match runner.read_return_values() { Err(Error::BuiltinRunnerError(BuiltinRunnerError::InvalidStopPointer { builtin_name, expected, found, })) => { assert_eq!(builtin_name, "output"); assert_eq!( expected, RelocatableValue { segment_index: BigInt::from(2u8), offset: BigInt::from(1u8) } ); assert_eq!( found, RelocatableValue { segment_index: BigInt::from(2u8), offset: BigInt::from(3u8) } ); } _ => panic!("unexpected result"), } } }
def move(self, attacking_character, move: Move): assert len(self.participants) == 2, 'need to have only 2 players battling' attacker, defender = self.participants if attacking_character == defender: defender = attacker attacker = attacking_character self.participants[defender]['hp'] -= move.power self.moves.append(move) if self.participants[defender]['hp'] <= 0: return BattleFinished(victor=attacker) return BattleMoveMade(move, self.participants[defender]['hp'], defender)
def clean_dataframe(df, is_slugify=True, threshold=50, rename_cols=None): if is_slugify: df = df.rename(columns=slugify) df = df.dropna(axis=1, how='all') for column in get_category_cols(df, threshold=threshold): df[column] = df[column].astype('category') for column in get_int_cols(df): df[column] = df[column].astype(int) if rename_cols is not None: df = df.rename(columns=rename_cols) return df
Who’s the best team in the NFL? The answer is as muddled as at any point in the season. According to our Elo ratings, the answer is technically the New England Patriots, who overtook the Denver Broncos after beating them in Week 9. But the Broncos gained ground by beating the Oakland Raiders on Sunday while the Patriots had a bye. Just one Elo point now separates the teams; they’re tied for all intents and purposes.

A case could also be made for the Arizona Cardinals, who have the NFL’s best record at 8-1. But the deeper we go, the less impressive the Cardinals look. Jeff Sagarin’s “pure points” rating, based on margin of victory and strength of schedule, has the Cardinals just eighth in the league. Football Outsiders’ Defense-adjusted Value Over Average (DVOA) ratings, based on an analysis of play-by-play data, have them 15th. The Cardinals’ top quarterback, Carson Palmer, is out for the year. Elo, which tends to strike a middle ground between conventional wisdom and advanced statistics, has the Cardinals fourth.

A credible case could also be made for the Seattle Seahawks on the basis of their longer-term performance. They came into the season ranked No. 1 and are the fourth-best team so far this year on the basis of DVOA. And they looked as good as they have since Week 1 while beating the New York Giants 38-17 last week. The Seahawks’ next six weeks consist of home-and-home games against the Cardinals and San Francisco 49ers, and road games at Kansas City and Philadelphia. Within a couple of weeks, we could be talking about how the Seahawks’ playoff hopes are torpedoed, or how Seattle was the best team in the league all along.

So far, however, this season has been characterized by the lack of any one truly dominant team. It’s also been characterized by a bunch of awful teams, such as the Raiders and Jacksonville Jaguars. The Raiders have a 31 percent chance of finishing at 0-16, according to our simulations.
The Jaguars and the Raiders also have an outside shot at finishing with the lowest Elo rating of all time. If the elite teams aren’t as great as usual, and the worst teams are worse than usual, that means there are more wins (and Elo rating points) to go around among the upper-middle class of the league. There is a glut of 12 teams at an Elo rating between 1532 and 1608 — somewhere between “slightly above average” and “pretty good.”

The Cincinnati Bengals, who lost at home Thursday against the Cleveland Browns, fell out of that group last week. These types of games — a clear underdog (the Bengals were 8.5-point favorites, according to Elo) winning definitively (the Browns won 24-3) — produce the biggest swings in the Elo ratings. In fact, the 53-point shift against the Bengals and toward the Browns is the largest of the season so far. (The largest swing of all time, 77 points, came Sept. 21, 2008, when the Miami Dolphins ended the Patriots’ 21-game regular-season winning streak by beating them 38-13 in Foxborough, Massachusetts.)

Even the result in Cincinnati, however, did nothing to clarify the playoff picture. The Browns, at 6-3, are first in the AFC North; the Bengals, 5-3-1, are second. Both have lower Elo ratings than the Baltimore Ravens and Pittsburgh Steelers, who are tied for third at 6-4. No AFC North team has better than a 31 percent chance or worse than a 20 percent chance of winning the division. Playoff odds for the rest of the league follow:
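The 53-point swing can be roughly reconstructed from the numbers in this paragraph. The sketch below assumes the K-factor of 20, the margin-of-victory multiplier, and the rule of thumb of about 25 Elo points per point of spread that FiveThirtyEight has described in its methodology write-ups; treat those constants as assumptions rather than the model's exact internals:

```python
import math

def expected_score(elo_diff):
    """Win probability implied by an Elo edge (standard 400-point logistic)."""
    return 1.0 / (1.0 + 10 ** (-elo_diff / 400.0))

def mov_multiplier(margin, winner_elo_diff):
    """Margin-of-victory multiplier in the published FiveThirtyEight form:
    grows with the log of the margin, shrinks when the favorite wins big."""
    return math.log(abs(margin) + 1) * (2.2 / (winner_elo_diff * 0.001 + 2.2))

K = 20
elo_edge = 8.5 * 25                   # an 8.5-point spread ~ a 212.5-point Elo edge
p_bengals = expected_score(elo_edge)  # ~0.77 implied win probability for Cincinnati
# The underdog Browns won 24-3, a 21-point margin:
swing = K * p_bengals * mov_multiplier(21, -elo_edge)
print(round(swing))  # → 53, matching the reported shift
```

That the three assumed constants land exactly on the reported 53 points is a useful sanity check, but the real model may differ in detail.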
Still, what division a team plays in makes a huge amount of difference to its playoff odds. To demonstrate this, I extracted some additional data from our Elo simulations this week, estimating a team’s chances of winning its division based on its regular-season record. In the NFC North, for example, a roughly average division, teams finishing the season with 10 wins won the division 30 percent of the time.

The same 10-win finish would produce a drastically different result in other divisions. In the NFC West, 10 wins were good enough for the division title just 4 percent of the time. In the Saints’ NFC South, meanwhile, 10 wins took the division more than 99 percent of the time. Even an eight-win finish would be good enough more often than not in the NFC South; teams finishing with that win total won the division 65 percent of the time. (This includes cases where the Carolina Panthers, who tied a game this year, finish at 8-7-1.) Seven wins might be enough. In one of the 5,000 simulations we ran this week, the Panthers even won the division with a 5-10-1 record.

If anything, this understates the importance of divisional placement. Not only are teams from weaker divisions more likely to make the playoffs with, for instance, a 9-7 record, they’ll also have an easier time achieving that record because they’re facing softer competition. And the NFL protects division winners in the playoffs, giving them a higher seed than wild cards and a home game in the opening round.

These problems will be hard to avoid so long as the NFL insists on having such small divisions and placing such importance upon them. In a four-team division, there’s a 1-in-16 chance that all four teams will be below average — one of them will make the playoffs anyway. Because there are eight divisions, a case like that will come up about once every other season. This is one reason I’d like to see the NFL expand to 36 teams, which would allow for six-team divisions.
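A stripped-down version of that conditioning exercise (estimate the chance of winning the division given a finish with exactly N wins, by simulating the remaining schedule) can be sketched as follows. The team records, per-game win probabilities, and coin-flip tiebreaker are invented for illustration and are much cruder than the actual Elo-driven simulations:

```python
import random

# Hypothetical four-team division: (name, wins so far, per-game win prob, games left).
# These numbers are made up for the sketch, not taken from the article.
TEAMS = [("A", 6, 0.60, 7), ("B", 5, 0.55, 7), ("C", 4, 0.45, 7), ("D", 3, 0.40, 7)]

def division_odds_given_wins(focal="A", target_wins=10, n_sims=5000, seed=0):
    """P(focal team wins the division | it finishes with exactly target_wins)."""
    rng = random.Random(seed)
    reached = won = 0
    for _ in range(n_sims):
        # Play out each team's remaining games as independent coin flips
        finals = {name: wins + sum(rng.random() < p for _ in range(left))
                  for name, wins, p, left in TEAMS}
        if finals[focal] != target_wins:
            continue  # condition on the focal team's exact win total
        reached += 1
        best = max(finals.values())
        tied = [n for n, w in finals.items() if w == best]
        # Crude tiebreak: pick a champion at random among the tied leaders
        if rng.choice(tied) == focal:
            won += 1
    return won / reached if reached else float("nan")
```

The article's point falls out of the same structure: hold a team's final record fixed and its division-title probability is driven entirely by how often its rivals clear that bar.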
Short of expansion, the league could adopt any number of approaches to make division placement less important. The least radical would be to no longer prioritize division winners in setting playoff seedings. (A 7-9 division winner could still make the playoffs, but it wouldn’t get a home game.) More fun: The league could “bump” any division winners that failed to finish with a winning record and replace them with an additional wild-card team.

Elo Point Spreads

Record against point spread: 70-67-3 (6-5 in Week 10)
Straight-up record: 103-43-1 (9-4 in Week 10)

The point spreads implied by Elo ratings have just barely climbed to a winning record against closing Las Vegas lines. Still, we wouldn’t recommend that you bet on them. Indeed, as the season has worn on, there have been fewer and fewer differences between the two sets of spreads, and most of those that exist are easy to explain. Elo is more bullish on the Cardinals and Philadelphia Eagles this week than Las Vegas lines imply, for example, but it doesn’t know that their starting quarterbacks are sidelined.

Still, there’s a difference of opinion in what’s undoubtedly the biggest game of the week. Whereas Vegas has the Indianapolis Colts as 2.5- or three-point favorites at home against the Patriots, Elo has the Patriots just slightly favored. New England has been one of Elo’s favorite teams this year. It’s strange to think Vegas could underrate a team as high-profile as New England, but that’s been the case historically: The Patriots have covered the point spread about 57 percent of the time since Bill Belichick took over as coach.
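Converting Elo gaps into point spreads like the ones graded above is commonly done with a simple linear rule. The sketch below uses the divisor of roughly 25 Elo points per point of spread and a home-field bump of roughly 65 Elo points that FiveThirtyEight has described in its methodology notes; both constants are assumptions here, not taken from this article:

```python
def elo_spread(home_elo, away_elo, home_field=65):
    """Home team's implied point spread; positive means the home team is favored.
    Uses ~25 Elo points per point of spread and a ~65-point home-field bump
    (both assumed constants, not exact model internals)."""
    return (home_elo + home_field - away_elo) / 25.0

# Evenly rated teams make the home side roughly a 2.5-point favorite:
print(elo_spread(1600, 1600))  # → 2.6
```

Note the asymmetry this implies for the Colts-Patriots line: a visitor needs about a 65-point rating edge just to reach a pick'em, so Elo making the Patriots slight road favorites means it rates them well clear of Indianapolis.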
package pe.gob.sunat.tecnologia3.arquitectura.framework.desktop.modulos;

import java.util.logging.Logger;

import org.openide.util.NbBundle;
import org.openide.windows.TopComponent;
import org.openide.windows.WindowManager;
import org.netbeans.api.settings.ConvertAsProperties;
import org.openide.awt.ActionID;
import org.openide.awt.ActionReference;
import org.openide.explorer.ExplorerManager;
import org.openide.nodes.Node;

@ConvertAsProperties(dtd = "-//org.filter//Created//EN", autostore = false)
@TopComponent.Description(preferredID = "ConfigProyectoTopComponent", persistenceType = TopComponent.PERSISTENCE_ALWAYS)
@TopComponent.Registration(mode = "explorer", openAtStartup = false)
@ActionID(category = "Window", id = "pe.gob.sunat.tecnologia3.arquitectura.framework.desktop.modulos.ConfigProyectoTopComponent")
@ActionReference(path = "Menu/Window" /*, position = 333 */)
@TopComponent.OpenActionRegistration(displayName = "#CTL_SelectionAction", preferredID = "ConfigProyectoTopComponent")
public final class ConfigurarProyectoTopComponent extends TopComponent implements ExplorerManager.Provider {

    private static ConfigurarProyectoTopComponent instance;
    private static final String PREFERRED_ID = "ConfigProyectoTopComponent";
    private ExplorerManager em = new ExplorerManager();
    private static final Logger logger = Logger.getLogger(ConfigurarProyectoTopComponent.class.getName());

    public ConfigurarProyectoTopComponent() {
        initComponents();
        setName(NbBundle.getMessage(ConfigurarProyectoTopComponent.class, "CTL_SelectionTopComponent"));
        Node rootNode = new ProyectoNode(new CreatedChildFactory());
        outlineView1.getOutline().setRootVisible(false);
        em.setRootContext(rootNode);
    }

    /**
     * This method is called from within the constructor to initialize the form.
     * WARNING: Do NOT modify this code. The content of this method is
     * always regenerated by the Form Editor.
     */
    // <editor-fold defaultstate="collapsed" desc="Generated Code">//GEN-BEGIN:initComponents
    private void initComponents() {

        outlineView1 = new org.openide.explorer.view.OutlineView("Configuracion de Módulos");

        javax.swing.GroupLayout layout = new javax.swing.GroupLayout(this);
        this.setLayout(layout);
        layout.setHorizontalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addComponent(outlineView1, javax.swing.GroupLayout.DEFAULT_SIZE, 476, Short.MAX_VALUE)
        );
        layout.setVerticalGroup(
            layout.createParallelGroup(javax.swing.GroupLayout.Alignment.LEADING)
            .addComponent(outlineView1, javax.swing.GroupLayout.PREFERRED_SIZE, 276, javax.swing.GroupLayout.PREFERRED_SIZE)
        );
    }// </editor-fold>//GEN-END:initComponents

    // Variables declaration - do not modify//GEN-BEGIN:variables
    private org.openide.explorer.view.OutlineView outlineView1;
    // End of variables declaration//GEN-END:variables

    /**
     * Gets default instance. Do not use directly: reserved for *.settings files only,
     * i.e. deserialization routines; otherwise you could get a non-deserialized instance.
     * To obtain the singleton instance, use {@link #findInstance}.
     */
    public static synchronized ConfigurarProyectoTopComponent getDefault() {
        if (instance == null) {
            instance = new ConfigurarProyectoTopComponent();
        }
        return instance;
    }

    /**
     * Obtains the ConfigurarProyectoTopComponent instance. Never call {@link #getDefault} directly!
*/ public static synchronized ConfigurarProyectoTopComponent findInstance() { TopComponent win = WindowManager.getDefault().findTopComponent(PREFERRED_ID); if (win == null) { logger.warning( "no se puede encontrar al componente: " + PREFERRED_ID + "."); return getDefault(); } if (win instanceof ConfigurarProyectoTopComponent) { return (ConfigurarProyectoTopComponent) win; } logger.warning("Hay multiples componentes con el nombre: '" + PREFERRED_ID + "'."); return getDefault(); } @Override public int getPersistenceType() { return TopComponent.PERSISTENCE_ALWAYS; } void writeProperties(java.util.Properties p) { // better to version settings since initial version as advocated at // http://wiki.apidesign.org/wiki/PropertyFiles p.setProperty("version", "1.0"); // TODO store your settings } Object readProperties(java.util.Properties p) { if (instance == null) { instance = this; } instance.readPropertiesImpl(p); return instance; } private void readPropertiesImpl(java.util.Properties p) { String version = p.getProperty("version"); } @Override protected String preferredID() { return PREFERRED_ID; } public ExplorerManager getExplorerManager() { return em; } }
def parse_translation(self, language, entry, translation):
    # A raw translation string holds one or more translation sets separated
    # by ';'; within each set, individual translations are separated by ','.
    for ts in [ts for ts in map(str.strip, translation.split(';')) if len(ts) > 0]:
        # A set may begin with a word-area marker recognized by the class regex
        area_regex = __class__._regex.match(ts)
        if area_regex is not None:
            area = self._word_areas[area_regex.group(0)]
            translation_set = TranslationSet(entry=entry, area=area, language=language)
        else:
            translation_set = TranslationSet(entry=entry, language=language)
        self.session.add(translation_set)
        for t in [t for t in map(str.strip, ts.split(',')) if len(t) > 0]:
            self.session.add(Translation(translation_set=translation_set, translation=t))
""" *String Whitespacing Protocol* The protocol describing treatment of whitespace character for strings. """ from .._protocol import StringProtocol __all__ = ["StringWhitespacingProtocol"] class StringWhitespacingProtocol( StringProtocol, ): pass
from django_filters.rest_framework import DjangoFilterBackend
from django.shortcuts import get_object_or_404
from django.utils.decorators import method_decorator
from rest_framework import exceptions, filters, generics, status, views
from rest_framework.response import Response

from core import models
from core.models.base import DrawStatus, GameStatus, SurrenderStatus
from service import serializers
from service.decorators import convert_query_params_to_snake_case
from service.mixins import CamelCase
from service.permissions import IsAuthenticated, UserIsNationState


# NOTE this could possibly be replaced by using options
def get_game_filter_choices():
    return {
        'gameStatuses': models.base.GameStatus.CHOICES,
        'nationChoiceModes': models.base.NationChoiceMode.CHOICES,
        'deadlines': models.base.DeadlineFrequency.CHOICES,
        'variants': [(v.id, str(v)) for v in models.Variant.objects.all()],
    }


class GameFilterChoicesView(views.APIView):

    def get(self, request, format=None):
        return Response(get_game_filter_choices())


class BaseMixin:

    def get_game(self):
        return get_object_or_404(
            models.Game.objects,
            slug=self.kwargs['slug'],
            status=GameStatus.ACTIVE,
            participants=self.request.user.id,
        )

    def get_user_nation_state(self):
        game = self.get_game()
        return get_object_or_404(
            models.NationState.objects,
            turn=game.get_current_turn(),
            user=self.request.user.id,
        )


@method_decorator(convert_query_params_to_snake_case, 'dispatch')
class ListGames(CamelCase, generics.ListAPIView):
    permission_classes = [IsAuthenticated]
    queryset = (
        models.Game.objects.all()
        .select_related('variant')
        .prefetch_related(
            'participants',
            'turns__nationstates__user',
            'turns__nationstates__surrenders',
            'turns__turnend',
        )
        .order_by('-created_at')
    )
    serializer_class = serializers.ListGamesSerializer
    filter_backends = [
        DjangoFilterBackend,
        filters.SearchFilter,
        filters.OrderingFilter,
    ]
    search_fields = [
        'name',
        'created_by__username'
    ]
    filterset_fields = [
        'variant',
        'status',
        'num_players',
'nation_choice_mode', 'order_deadline', 'retreat_deadline', 'build_deadline', ] ordering_fields = [ 'created_at', 'initialized_at' ] class ListVariants(CamelCase, generics.ListAPIView): permission_classes = [IsAuthenticated] queryset = ( models.Variant.objects.all() .prefetch_related( 'territories__named_coasts', 'nations', ) ) serializer_class = serializers.ListVariantsSerializer class CreateGameView(CamelCase, generics.CreateAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.CreateGameSerializer def create(self, request, *args, **kwargs): defaults = {'variant': 'standard', 'num_players': 7} request.data.update(defaults) return super().create(request, *args, **kwargs) class GameStateView(CamelCase, generics.RetrieveAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.GameStateSerializer queryset = ( models.Game.objects.all() .select_related('variant') .prefetch_related( 'participants', 'pieces', 'turns__draws__drawresponse_set', 'turns__draws__nations', 'turns__orders', 'turns__nationstates__surrenders', 'turns__nationstates__user', 'turns__piecestates', 'turns__territorystates', 'turns__turnend', ) ) lookup_field = 'slug' class ToggleJoinGame(generics.UpdateAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.GameSerializer queryset = models.Game.objects.all() lookup_field = 'slug' def check_object_permissions(self, request, obj): if request.user not in obj.participants.all(): if obj.participants.count() >= obj.num_players: raise exceptions.PermissionDenied( detail='Game is already full.' ) if obj.status != GameStatus.PENDING: raise exceptions.PermissionDenied( detail='Game is not pending.' ) else: if obj.status != GameStatus.PENDING: raise exceptions.PermissionDenied( detail='Cannot leave game.' 
) class CreateOrderView(CamelCase, BaseMixin, generics.CreateAPIView, generics.DestroyAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.OrderSerializer queryset = models.Order.objects.all() def get_serializer_context(self): context = super().get_serializer_context() context['nation_state'] = self.get_user_nation_state() return context def delete_old_order(self, serializer): """ Delete existing order before creating new order. Return existing order ID so client can update store correctly. """ try: old_order = models.Order.objects.get( source=serializer.validated_data['source'], turn=serializer.validated_data['turn'], nation=serializer.validated_data['nation'], ) old_order_id = old_order.id old_order.delete() return old_order_id except models.Order.DoesNotExist: return None def create(self, request, *args, **kwargs): serializer = self.get_serializer(data=request.data) serializer.is_valid(raise_exception=True) old_order_id = self.delete_old_order(serializer) self.perform_create(serializer) headers = self.get_success_headers(serializer.data) response_data = {**serializer.data, 'old_order': old_order_id} return Response( response_data, status=status.HTTP_201_CREATED, headers=headers ) class DestroyOrderView(CamelCase, generics.DestroyAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.OrderSerializer queryset = models.Order.objects.all() # def check_object_permissions(self, request, order): # user_nation_state = self.get_user_nation_state() # # TODO check if you can delete another order from a different game # if order.nation != user_nation_state.nation: # raise exceptions.PermissionDenied( # detail='Order does not belong to this user.' 
# ) class ListOrdersView(CamelCase, BaseMixin, generics.ListAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.OrderSerializer def get_queryset(self): turn = get_object_or_404( models.Turn, id=self.kwargs['pk'], ) user_nation_state = models.NationState.objects.filter( turn=turn, user=self.request.user.id, ).first() if not user_nation_state: return models.Order.objects.none() return models.Order.objects.filter( turn=turn, nation=user_nation_state.nation, ) class ToggleFinalizeOrdersView(CamelCase, generics.UpdateAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.ToggleFinalizeOrdersSerializer queryset = models.NationState.objects.filter( turn__game__status=GameStatus.ACTIVE ) def check_object_permissions(self, request, obj): if request.user != obj.user: raise exceptions.PermissionDenied( 'Cannot finalize orders for other nation.' ) class ToggleSurrenderView( CamelCase, generics.UpdateAPIView, generics.CreateAPIView ): permission_classes = [IsAuthenticated] serializer_class = serializers.SurrenderSerializer queryset = models.Surrender.objects.filter( nation_state__turn__current_turn=True, nation_state__turn__game__status=GameStatus.ACTIVE, status=SurrenderStatus.PENDING ) def create(self, request, *args, **kwargs): turn = models.Turn.objects.get(id=kwargs['turn']) if not turn.current_turn: raise exceptions.PermissionDenied( 'Cannot surrender on inactive turn.' ) if not turn.game.status == GameStatus.ACTIVE: raise exceptions.PermissionDenied( 'Cannot surrender on inactive game.' ) user_nation_state = get_object_or_404( models.NationState, turn=turn, user=request.user.id, ) defaults = { 'user': request.user.id, 'nation_state': user_nation_state.id } request.data.update(defaults) return super().create(request, *args, **kwargs) def check_object_permissions(self, request, surrender): if request.user != surrender.user: raise exceptions.PermissionDenied( detail='Cannot surrender if not controlling nation.' 
) class ProposeDraw(generics.CreateAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.CreateDrawSerializer queryset = models.Draw.objects.filter( turn__current_turn=True, turn__game__status=GameStatus.ACTIVE, status=DrawStatus.PROPOSED ) def get_user_nation_state(self): turn = self.kwargs['turn'] return get_object_or_404( models.NationState.objects, turn=turn, user=self.request.user.id, ) def get_serializer_context(self): context = super().get_serializer_context() context['user_nation_state'] = self.get_user_nation_state() return context def create(self, request, *args, **kwargs): turn = models.Turn.objects.get(id=kwargs['turn']) nation = models.Nation.objects.get( nationstate__user=request.user, nationstate__turn=turn, ) defaults = { 'turn': turn.id, 'proposed_by': nation.id, 'proposed_by_user': request.user.id, } request.data.update(defaults) return super().create(request, *args, **kwargs) class CancelDraw(generics.UpdateAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.CancelDrawSerializer queryset = models.Draw.objects.filter( turn__current_turn=True, turn__game__status=GameStatus.ACTIVE, status=DrawStatus.PROPOSED ) def check_object_permissions(self, request, draw): turn = self.kwargs['turn'] nation = models.NationState.objects.get( user=request.user, turn=turn ).nation if nation != draw.proposed_by: raise exceptions.PermissionDenied( detail='Cannot cancel another nation\'s draw proposal.' 
) class DrawResponse(CamelCase, generics.CreateAPIView, generics.DestroyAPIView): permission_classes = [IsAuthenticated] serializer_class = serializers.DrawResponseSerializer queryset = models.DrawResponse.objects.filter( draw__turn__current_turn=True, draw__turn__game__status=GameStatus.ACTIVE, draw__status=DrawStatus.PROPOSED, ) def get_user_nation_state(self): draw = models.Draw.objects.get(id=self.kwargs['draw']) return models.NationState.objects.filter( turn=draw.turn, user=self.request.user.id, ).first() def get_serializer_context(self): context = super().get_serializer_context() context['user_nation_state'] = self.get_user_nation_state() return context def create(self, request, *args, **kwargs): draw = models.Draw.objects.get(id=kwargs['draw']) nation = models.Nation.objects.get( nationstate__user=request.user, nationstate__turn=draw.turn, ) defaults = { 'draw': draw.id, 'nation': nation.id, 'user': request.user.id, } request.data.update(defaults) return super().create(request, *args, **kwargs) def check_object_permissions(self, request, draw_response): draw = self.kwargs['draw'] turn = models.Turn.objects.get(draws=draw) nation = models.NationState.objects.get( user=request.user, turn=turn ).nation if nation != draw_response.nation: raise exceptions.PermissionDenied( detail='Cannot cancel another nation\'s draw response.' ) class NationStateFromTurnMixin: permission_classes = [IsAuthenticated, UserIsNationState] queryset = models.NationState.objects.all() lookup_field = 'turn__id' lookup_url_kwarg = 'pk' def get_queryset(self): queryset = super().get_queryset() return queryset.filter(user=self.request.user) class NationStateOrdersFinalized(CamelCase, NationStateFromTurnMixin, generics.RetrieveAPIView): serializer_class = serializers.NationStateOrdersFinalizedSerializer class NationStateOrdersStatus(CamelCase, NationStateFromTurnMixin, generics.RetrieveAPIView): serializer_class = serializers.NationStateOrdersStatusSerializer
/* * dynamic_loader.h * * DSP-BIOS Bridge driver support functions for TI OMAP processors. * * Copyright (C) 2008 Texas Instruments, Inc. * * This package is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as * published by the Free Software Foundation. * * THIS PACKAGE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE. */ #ifndef _DYNAMIC_LOADER_H_ #define _DYNAMIC_LOADER_H_ #include <linux/kernel.h> #include <linux/types.h> /* * Dynamic Loader * * The function of the dynamic loader is to load a "module" containing * instructions for a "target" processor into that processor. In the process * it assigns memory for the module, resolves symbol references made by the * module, and remembers symbols defined by the module. * * The dynamic loader is parameterized for a particular system by 4 classes * that supply the module and system specific functions it requires */ /* The read functions for the module image to be loaded */ struct dynamic_loader_stream; /* This class defines "host" symbol and support functions */ struct dynamic_loader_sym; /* This class defines the allocator for "target" memory */ struct dynamic_loader_allocate; /* This class defines the copy-into-target-memory functions */ struct dynamic_loader_initialize; /* * Option flags to modify the behavior of module loading */ #define DLOAD_INITBSS 0x1 /* initialize BSS sections to zero */ #define DLOAD_BIGEND 0x2 /* require big-endian load module */ #define DLOAD_LITTLE 0x4 /* require little-endian load module */ /***************************************************************************** * Procedure dynamic_load_module * * Parameters: * module The input stream that supplies the module image * syms Host-side symbol table and malloc/free functions * alloc Target-side memory allocation * init Target-side memory 
initialization, or NULL for symbol read only * options Option flags DLOAD_* * mhandle A module handle for use with Dynamic_Unload * * Effect: * The module image is read using *module. Target storage for the new image is * obtained from *alloc. Symbols defined and referenced by the module are * managed using *syms. The image is then relocated and references resolved * as necessary, and the resulting executable bits are placed into target memory * using *init. * * Returns: * On a successful load, a module handle is placed in *mhandle, and zero is * returned. On error, the number of errors detected is returned. Individual * errors are reported during the load process using syms->error_report(). **************************************************************************** */ extern int dynamic_load_module( /* the source for the module image */ struct dynamic_loader_stream *module, /* host support for symbols and storage */ struct dynamic_loader_sym *syms, /* the target memory allocator */ struct dynamic_loader_allocate *alloc, /* the target memory initializer */ struct dynamic_loader_initialize *init, unsigned options, /* option flags */ /* the returned module handle */ void **mhandle); /***************************************************************************** * Procedure dynamic_open_module * * Parameters: * module The input stream that supplies the module image * syms Host-side symbol table and malloc/free functions * alloc Target-side memory allocation * init Target-side memory initialization, or NULL for symbol read only * options Option flags DLOAD_* * mhandle A module handle for use with Dynamic_Unload * * Effect: * The module image is read using *module. Target storage for the new image is * obtained from *alloc. Symbols defined and referenced by the module are * managed using *syms. The image is then relocated and references resolved * as necessary, and the resulting executable bits are placed into target memory * using *init. 
* * Returns: * On a successful load, a module handle is placed in *mhandle, and zero is * returned. On error, the number of errors detected is returned. Individual * errors are reported during the load process using syms->error_report(). **************************************************************************** */ extern int dynamic_open_module( /* the source for the module image */ struct dynamic_loader_stream *module, /* host support for symbols and storage */ struct dynamic_loader_sym *syms, /* the target memory allocator */ struct dynamic_loader_allocate *alloc, /* the target memory initializer */ struct dynamic_loader_initialize *init, unsigned options, /* option flags */ /* the returned module handle */ void **mhandle); /***************************************************************************** * Procedure dynamic_unload_module * * Parameters: * mhandle A module handle from dynamic_load_module * syms Host-side symbol table and malloc/free functions * alloc Target-side memory allocation * * Effect: * The module specified by mhandle is unloaded. Unloading causes all * target memory to be deallocated, all symbols defined by the module to * be purged, and any host-side storage used by the dynamic loader for * this module to be released. * * Returns: * Zero for success. On error, the number of errors detected is returned. * Individual errors are reported using syms->error_report(). 
**************************************************************************** */ extern int dynamic_unload_module(void *mhandle, /* the module * handle */ /* host support for symbols and * storage */ struct dynamic_loader_sym *syms, /* the target memory allocator */ struct dynamic_loader_allocate *alloc, /* the target memory initializer */ struct dynamic_loader_initialize *init); /***************************************************************************** ***************************************************************************** * A class used by the dynamic loader for input of the module image ***************************************************************************** **************************************************************************** */ struct dynamic_loader_stream { /* public: */ /************************************************************************* * read_buffer * * PARAMETERS : * buffer Pointer to the buffer to fill * bufsiz Amount of data desired in sizeof() units * * EFFECT : * Reads the specified amount of data from the module input stream * into the specified buffer. Returns the amount of data read in sizeof() * units (which if less than the specification, represents an error). * * NOTES: * In release 1 increments the file position by the number of bytes read * ************************************************************************ */ int (*read_buffer) (struct dynamic_loader_stream *thisptr, void *buffer, unsigned bufsiz); /************************************************************************* * set_file_posn (release 1 only) * * PARAMETERS : * posn Desired file position relative to start of file in sizeof() units. * * EFFECT : * Adjusts the internal state of the stream object so that the next * read_buffer call will begin to read at the specified offset from * the beginning of the input module. Returns 0 for success, non-zero * for failure. 
* ************************************************************************ */ int (*set_file_posn) (struct dynamic_loader_stream *thisptr, /* to be eliminated in release 2 */ unsigned int posn); }; /***************************************************************************** ***************************************************************************** * A class used by the dynamic loader for symbol table support and * miscellaneous host-side functions ***************************************************************************** **************************************************************************** */ typedef u32 ldr_addr; /* * the structure of a symbol known to the dynamic loader */ struct dynload_symbol { ldr_addr value; }; struct dynamic_loader_sym { /* public: */ /************************************************************************* * find_matching_symbol * * PARAMETERS : * name The name of the desired symbol * * EFFECT : * Locates a symbol matching the name specified. A pointer to the * symbol is returned if it exists; 0 is returned if no such symbol is * found. * ************************************************************************ */ struct dynload_symbol *(*find_matching_symbol) (struct dynamic_loader_sym *thisptr, const char *name); /************************************************************************* * add_to_symbol_table * * PARAMETERS : * nname Pointer to the name of the new symbol * moduleid An opaque module id assigned by the dynamic loader * * EFFECT : * The new symbol is added to the table. A pointer to the symbol is * returned, or NULL is returned for failure. * * NOTES: * It is permissible for this function to return NULL; the effect is that * the named symbol will not be available to resolve references in * subsequent loads. Returning NULL will not cause the current load * to fail. 
	 ************************************************************************ */
	struct dynload_symbol *(*add_to_symbol_table) (struct dynamic_loader_sym *thisptr,
						       const char *nname,
						       unsigned moduleid);

	/*************************************************************************
	 * purge_symbol_table
	 *
	 * PARAMETERS :
	 * moduleid An opaque module id assigned by the dynamic loader
	 *
	 * EFFECT :
	 * Each symbol in the symbol table whose moduleid matches the argument
	 * is removed from the table.
	 ************************************************************************ */
	void (*purge_symbol_table) (struct dynamic_loader_sym *thisptr,
				    unsigned moduleid);

	/*************************************************************************
	 * dload_allocate
	 *
	 * PARAMETERS :
	 * memsiz size of desired memory in sizeof() units
	 *
	 * EFFECT :
	 * Returns a pointer to some "host" memory for use by the dynamic
	 * loader, or NULL for failure.
	 * This function serves as a replaceable form of "malloc" to
	 * allow the user to configure the memory usage of the dynamic loader.
	 ************************************************************************ */
	void *(*dload_allocate) (struct dynamic_loader_sym *thisptr,
				 unsigned memsiz);

	/*************************************************************************
	 * dload_deallocate
	 *
	 * PARAMETERS :
	 * memptr pointer to previously allocated memory
	 *
	 * EFFECT :
	 * Releases the previously allocated "host" memory.
	 ************************************************************************ */
	void (*dload_deallocate) (struct dynamic_loader_sym *thisptr,
				  void *memptr);

	/*************************************************************************
	 * error_report
	 *
	 * PARAMETERS :
	 * errstr pointer to an error string
	 * args additional arguments
	 *
	 * EFFECT :
	 * This function provides an error reporting interface for the dynamic
	 * loader. The error string and arguments are designed as for the
	 * library function vprintf.
************************************************************************ */ void (*error_report) (struct dynamic_loader_sym *thisptr, const char *errstr, va_list args); }; /* class dynamic_loader_sym */ /***************************************************************************** ***************************************************************************** * A class used by the dynamic loader to allocate and deallocate target memory. ***************************************************************************** **************************************************************************** */ struct ldr_section_info { /* Name of the memory section assigned at build time */ const char *name; ldr_addr run_addr; /* execution address of the section */ ldr_addr load_addr; /* load address of the section */ ldr_addr size; /* size of the section in addressable units */ #ifndef _BIG_ENDIAN u16 page; /* memory page or view */ u16 type; /* one of the section types below */ #else u16 type; /* one of the section types below */ u16 page; /* memory page or view */ #endif /* a context field for use by dynamic_loader_allocate; * ignored but maintained by the dynamic loader */ u32 context; }; /* use this macro to extract type of section from ldr_section_info.type field */ #define DLOAD_SECTION_TYPE(typeinfo) (typeinfo & 0xF) /* type of section to be allocated */ #define DLOAD_TEXT 0 #define DLOAD_DATA 1 #define DLOAD_BSS 2 /* internal use only, run-time cinit will be of type DLOAD_DATA */ #define DLOAD_CINIT 3 struct dynamic_loader_allocate { /* public: */ /************************************************************************* * Function allocate * * Parameters: * info A pointer to an information block for the section * align The alignment of the storage in target AUs * * Effect: * Allocates target memory for the specified section and fills in the * load_addr and run_addr fields of the section info structure. Returns TRUE * for success, FALSE for failure. 
* * Notes: * Frequently load_addr and run_addr are the same, but if they are not * load_addr is used with dynamic_loader_initialize, and run_addr is * used for almost all relocations. This function should always initialize * both fields. ************************************************************************ */ int (*dload_allocate) (struct dynamic_loader_allocate *thisptr, struct ldr_section_info *info, unsigned align); /************************************************************************* * Function deallocate * * Parameters: * info A pointer to an information block for the section * * Effect: * Releases the target memory previously allocated. * * Notes: * The content of the info->name field is undefined on call to this function. ************************************************************************ */ void (*dload_deallocate) (struct dynamic_loader_allocate *thisptr, struct ldr_section_info *info); }; /* class dynamic_loader_allocate */ /***************************************************************************** ***************************************************************************** * A class used by the dynamic loader to load data into a target. This class * provides the interface-specific functions needed to load data. ***************************************************************************** **************************************************************************** */ struct dynamic_loader_initialize { /* public: */ /************************************************************************* * Function connect * * Parameters: * none * * Effect: * Connect to the initialization interface. Returns TRUE for success, * FALSE for failure. * * Notes: * This function is called prior to use of any other functions in * this interface. 
************************************************************************ */ int (*connect) (struct dynamic_loader_initialize *thisptr); /************************************************************************* * Function readmem * * Parameters: * bufr Pointer to a word-aligned buffer for the result * locn Target address of first data element * info Section info for the section in which the address resides * bytsiz Size of the data to be read in sizeof() units * * Effect: * Fills the specified buffer with data from the target. Returns TRUE for * success, FALSE for failure. ************************************************************************ */ int (*readmem) (struct dynamic_loader_initialize *thisptr, void *bufr, ldr_addr locn, struct ldr_section_info *info, unsigned bytsiz); /************************************************************************* * Function writemem * * Parameters: * bufr Pointer to a word-aligned buffer of data * locn Target address of first data element to be written * info Section info for the section in which the address resides * bytsiz Size of the data to be written in sizeof() units * * Effect: * Writes the specified buffer to the target. Returns TRUE for success, * FALSE for failure. ************************************************************************ */ int (*writemem) (struct dynamic_loader_initialize *thisptr, void *bufr, ldr_addr locn, struct ldr_section_info *info, unsigned bytsiz); /************************************************************************* * Function fillmem * * Parameters: * locn Target address of first data element to be written * info Section info for the section in which the address resides * bytsiz Size of the data to be written in sizeof() units * val Value to be written in each byte * Effect: * Fills the specified area of target memory. Returns TRUE for success, * FALSE for failure. 
************************************************************************ */ int (*fillmem) (struct dynamic_loader_initialize *thisptr, ldr_addr locn, struct ldr_section_info *info, unsigned bytsiz, unsigned val); /************************************************************************* * Function execute * * Parameters: * start Starting address * * Effect: * The target code at the specified starting address is executed. * * Notes: * This function is called at the end of the dynamic load process * if the input module has specified a starting address. ************************************************************************ */ int (*execute) (struct dynamic_loader_initialize *thisptr, ldr_addr start); /************************************************************************* * Function release * * Parameters: * none * * Effect: * Releases the connection to the load interface. * * Notes: * This function is called at the end of the dynamic load process. ************************************************************************ */ void (*release) (struct dynamic_loader_initialize *thisptr); }; /* class dynamic_loader_initialize */ #endif /* _DYNAMIC_LOADER_H_ */
def gen_true_states(self):
    assert len(self.x0) == self.nD, "Initial state has dimension %s != " \
        "model dimension %s" % (len(self.x0), self.nD)
    self.true_states = odeint(self.df_data_generation, self.x0, self.Tt,
                              args=(self.model.params[self.params_set],))
// this thread announces the room service to the lobby: void ov_server_t::announce_service() { uint32_t cnt(0); char cpost[1024]; while(runsession) { if(!cnt) { if(get_num_clients() == 0) { long int r(random()); secret = r & 0xfffffff; socket.set_secret(secret); } CURLcode res; sprintf(cpost, "?port=%d&name=%s&pin=%d&srvjit=%1.1f&grp=%s", portno, roomname.c_str(), secret, serverjitter, group.c_str()); serverjitter = 0; std::string url(lobbyurl); url += cpost; curl_easy_setopt(curl, CURLOPT_URL, url.c_str()); curl_easy_setopt(curl, CURLOPT_USERPWD, "room:room"); curl_easy_setopt(curl, CURLOPT_USERAGENT, "libcurl-agent/1.0"); curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4); res = curl_easy_perform(curl); if(res == 0) cnt = 6000; else cnt = 500; } --cnt; std::this_thread::sleep_for(std::chrono::milliseconds(PINGPERIODMS)); while(!latfifo.empty()) { latreport_t lr(latfifo.front()); latfifo.pop(); sprintf(cpost, "?latreport=%d&src=%d&dest=%d&lat=%1.1f&jit=%1.1f", portno, lr.src, lr.dest, lr.tmean, lr.jitter); std::string url(lobbyurl); url += cpost; curl_easy_setopt(curl, CURLOPT_URL, url.c_str()); curl_easy_setopt(curl, CURLOPT_USERPWD, "room:room"); curl_easy_setopt(curl, CURLOPT_USERAGENT, "libcurl-agent/1.0"); curl_easy_perform(curl); } } }
Another round of national negative headlines; another round of Paul LePage-generated headaches for the state of Maine.

The most recent episode highlighted a six-year reign in which LePage has been charged as divisive, uncivil, and objectively unpopular. Some might say that Mainers got what they voted for in their current governor, but the majority of the state can accurately say they did not. LePage was first elected in 2010 with less than 38 percent of the vote in a four-way race for governor against a Democrat and two independent candidates. He was elected again in 2014 in a three-way race (just one independent this time) with 48 percent.

This isn't a new phenomenon. In nine of Maine's last 11 gubernatorial elections, the winner received less than 50-percent support. In five of those elections, the governor was elected with less than 40 percent of all the votes.

However, some Mainers think they have the solution—a fundamental change in how individuals literally cast their ballot—both to the deteriorating state of politics and to the strategic predicament in which voters are often put. That change is also quite simple: Let people rank their candidates, rather than choosing only one.

"A lot of voters feel like they are voting for the lesser of two evils, rather than the candidate they like better," said Kyle Bailey, the campaign manager for the Committee for Ranked Choice Voting, which worked to put the reform on the Maine ballot this November.

A model ranked choice ballot. —Image courtesy of the Committee for Ranked Choice Voting

Under a ranked choice system, if no candidate receives 50 percent after voters' first choices are counted, the candidate with the fewest first-choice rankings is eliminated. The voters who picked the now-eliminated candidate then have their votes reallocated according to who they ranked as their next choice.
If necessary, that process would be repeated again and again until a candidate has received the majority of the votes (Bailey's group put together a video last year with visual explainers).

Bailey said voters are too often concerned about "wasting their vote" or which candidate is most "viable," while conversations about policy and substance take a back seat. While some cities and European countries use ranked choice voting, Maine would be the first state in the nation to implement the system. If passed, the reform would apply to both primary and general elections for U.S. Senate, House, and the governorship, as well as the Maine Senate and House of Representatives.

Bailey alluded to research showing that ranked choice voting can result in more civil campaigns—as negative campaigning could potentially backfire. For candidates to win, he says, they would have to broaden their appeal beyond their base.

"Under the old way, when you're knocking on doors and see a yard sign for another candidate, you skip that house and go to the next door," he said. "With ranked choice voting, you couldn't do that. [Candidates] have to go knock on that door, talk to that voter, and ask, if they couldn't be their first choice, could they be their second choice?"

Bailey said ranked choice voting decreases negative campaigning since attacking a voter's preferred candidate could alienate that voter. "Voters have power to not make the attacker their second choice," he said. Or their third, and so on.

Bailey said the hope is that this effect will carry over into governance. He hastened to point out, however, that the initiative, Question 5 on the state's ballot this fall, is not a direct reaction to LePage (who opposes the measure).

In 2008, two years before LePage took office, the League of Women Voters of Maine first convened to study alternative voting systems. They concluded three years later that ranked choice voting was the best choice.
Bailey did acknowledge the "changing climate in Augusta" was something that needs to be addressed. "It's getting worse," he said. Bailey said ranked choice voting, while not a silver bullet, would better accommodate Maine's "rich history" of political independence and competition in a "healthier, more democratic way."

While Maine would be the first state to implement ranked choice voting statewide, Cambridge, Massachusetts, has used the system to elect its city council and school board since 1941 (albeit in combination with a system of proportional representation, ensuring minorities are represented in the city's multi-member districts).

"I love it," said Polyxane Cobb, one of Cambridge's election commissioners.

Cobb says the primary benefit of preferential voting is that it solves the strategic voting dilemma, particularly in elections where viable candidates and likable candidates may not be one and the same. "There are all those people who neither trust [Donald] Trump or Hillary [Clinton]," she said of the two major-party presidential nominees. "But they're gonna vote for one of them."

Cobb said it resolves that issue and allows voters to rank their favorite candidate first, without worrying about having a potential "spoiler" effect. She also said it has resulted in more people politically engaging on the neighborhood level.

Of course, there are detractors to making such a drastic change to how Mainers vote. In a July 20 article, the Bangor Daily News wrote that "a more complex ballot could reduce voter turnout" and criticized the application of ranked choice voting in some U.S. cities. The paper also noted that incorrectly filled out ballots can result in the promise of a "true majority" falling short.
A 2015 study of ranked choice voting in the San Francisco area found that due to "ballot exhaustion"—where ballots can be tossed out in later rounds if voters mistakenly rank or do not rank all the candidates—the winner did not actually receive a majority of all the votes.

In response, Bailey noted that in 68 of 107 Bay Area elections since 2004, a true majority was achieved on the first round of counting. "I think it's a slight to voters to say they don't know more than one candidate," he said.

Cobb said, in her experience, the complexity of ranked choice voting hasn't been a significant problem. "There are a few people who don't quite understand," she said. "But there are a few people who don't quite understand ordinary voting, too."

Bailey said the system reflects the type of decisions people make as consumers on a daily basis, putting ranked choice voting in a context appealing to both New Englanders' hearts and minds.

"I go to Dunkin' Donuts in the morning and if they're out of a multigrain bagel, I get a garlic bagel; that's my second choice," he said, later adding, "We make ranked choices every day of our lives."
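The elimination-and-transfer procedure described in this article is essentially instant-runoff voting, and the "ballot exhaustion" caveat from the San Francisco study falls out of it naturally. A minimal sketch in Python (the function name and the alphabetical last-place tie-break are illustrative assumptions; Maine's actual statute defines its own tie-breaking rules):

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked-choice ballots by repeated elimination.

    Each ballot is a list of candidate names in preference order. In each
    round, if no candidate holds a majority of the still-active ballots,
    the candidate with the fewest first-choice votes is eliminated and
    those ballots transfer to their next-ranked choice. A ballot with no
    remaining ranked candidates is "exhausted" and drops out of the count.
    Returns (winner, number_of_exhausted_ballots).
    """
    active = [list(b) for b in ballots if b]
    while True:
        tally = Counter(b[0] for b in active)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(active):
            return leader, len(ballots) - len(active)
        # Hypothetical tie-break: alphabetical; real statutes define their own.
        loser = min(tally, key=lambda c: (tally[c], c))
        active = [[c for c in b if c != loser] for b in active]
        active = [b for b in active if b]  # exhausted ballots fall away
```

Note that the majority test is taken over the ballots still active, not over all ballots cast: if enough ballots exhaust during elimination, the eventual winner can hold a "majority" that is smaller than half the original electorate, which is precisely the shortfall the San Francisco study reported.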
<gh_stars>1-10 #ifndef VISIONWIDGET_H #define VISIONWIDGET_H #include <QWidget> namespace Ui { class VisionWidget; } class VisionWidget : public QWidget { Q_OBJECT public: explicit VisionWidget(QWidget *parent = 0); ~VisionWidget(); private: Ui::VisionWidget *ui; }; #endif // VISIONWIDGET_H
/** * These functions have been ported from hamcrest, whereas the signature has been customized */ public class CustomHamcrestMatchers { /** * Creates a matcher for {@link List}s that matches when consecutive passes over the examined {@link List} * yield at least one item that is matched by the corresponding matcher from the specified * <code>itemMatchers</code>. Whilst matching, each traversal of the examined {@link List} will stop as * soon as a matching item is found. * <p> * For example: * * <pre> * assertThat(Arrays.asList(&quot;foo&quot;, &quot;bar&quot;, &quot;baz&quot;), hasItems(endsWith(&quot;z&quot;), endsWith(&quot;o&quot;))) * </pre> * * @param <T> * Type of items to be matched * * @param itemMatchers * the matchers to apply to items provided by the examined {@link List} * @return the matcher instance */ @Factory @SuppressWarnings("unchecked") public static <T> Matcher<List<T>> hasItemsInList(Matcher<? super T>... itemMatchers) { List<Matcher<? super List<T>>> all = new ArrayList<>(itemMatchers.length); for (Matcher<? super T> elementMatcher : itemMatchers) { // Doesn't forward to hasItem() method so compiler can sort out generics. all.add(new IsCollectionContaining<>(elementMatcher)); } return allOf(all); } }
package com.lin.learn.cloud.security.model;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.SpringSecurityCoreVersion;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.util.Assert;

import java.io.Serializable;
import java.util.*;

public class User implements UserDetails {

    private String username;
    private String password;
    private Set<GrantedAuthority> authorities;

    public User(String username, String password, Collection<GrantedAuthority> authorities) {
        this.username = username;
        this.password = password;
        this.authorities = Collections.unmodifiableSet(sortAuthorities(authorities));
    }

    @Override
    public Collection<? extends GrantedAuthority> getAuthorities() {
        return this.authorities;
    }

    @Override
    public String getPassword() {
        return this.password;
    }

    @Override
    public String getUsername() {
        return this.username;
    }

    @Override
    public boolean isAccountNonExpired() {
        return true;
    }

    @Override
    public boolean isAccountNonLocked() {
        return true;
    }

    @Override
    public boolean isCredentialsNonExpired() {
        return true;
    }

    @Override
    public boolean isEnabled() {
        return true;
    }

    private static SortedSet<GrantedAuthority> sortAuthorities(
            Collection<? extends GrantedAuthority> authorities) {
        Assert.notNull(authorities, "Cannot pass a null GrantedAuthority collection");
        // Ensure array iteration order is predictable (as per
        // UserDetails.getAuthorities() contract and SEC-717)
        SortedSet<GrantedAuthority> sortedAuthorities = new TreeSet<>(new AuthorityComparator());
        for (GrantedAuthority grantedAuthority : authorities) {
            Assert.notNull(grantedAuthority, "GrantedAuthority list cannot contain any null elements");
            sortedAuthorities.add(grantedAuthority);
        }
        return sortedAuthorities;
    }

    private static class AuthorityComparator implements Comparator<GrantedAuthority>, Serializable {

        private static final long serialVersionUID = SpringSecurityCoreVersion.SERIAL_VERSION_UID;

        public int compare(GrantedAuthority g1, GrantedAuthority g2) {
            // Neither should ever be null as each entry is checked before adding it to the set.
            // If the authority is null, it is a custom authority and should precede others.
            if (g2.getAuthority() == null) {
                return -1;
            }
            if (g1.getAuthority() == null) {
                return 1;
            }
            return g1.getAuthority().compareTo(g2.getAuthority());
        }
    }
}
package v1

import (
	"github.com/hfeng101/niwo/storage/mysql"
)

func GetListByKeywordFromPersonageRecord(key string) (*[]mysql.PersonageRecordList, error) {
	personageRecordList := &[]mysql.PersonageRecordList{}
	dbHandle := mysql.GetMysqlDbHandle()

	// fuzzy search by keyword
	dbHandle.Where("theme like ?", "%"+key+"%").Find(personageRecordList)

	return personageRecordList, nil
}

func GetListByKeywordFromSportRecord(key string) (*[]mysql.SportRecordList, error) {
	sportRecordList := &[]mysql.SportRecordList{}
	dbHandle := mysql.GetMysqlDbHandle()

	// fuzzy search by keyword
	dbHandle.Where("theme like ?", "%"+key+"%").Find(sportRecordList)

	return sportRecordList, nil
}

func GetListByKeywordFromEconomicsRecord(key string) (*[]mysql.EconomicsRecordList, error) {
	economicsRecordList := &[]mysql.EconomicsRecordList{}
	dbHandle := mysql.GetMysqlDbHandle()

	// fuzzy search by keyword
	dbHandle.Where("theme like ?", "%"+key+"%").Find(economicsRecordList)

	return economicsRecordList, nil
}

func GetListByKeywordFromMilitaryRecord(key string) (*[]mysql.MilitaryRecordList, error) {
	militaryRecordList := &[]mysql.MilitaryRecordList{}
	dbHandle := mysql.GetMysqlDbHandle()

	// fuzzy search by keyword
	dbHandle.Where("theme like ?", "%"+key+"%").Find(militaryRecordList)

	return militaryRecordList, nil
}

func GetListByKeywordFromEntertainmentRecord(key string) (*[]mysql.EntertainmentRecordList, error) {
	entertainmentRecordList := &[]mysql.EntertainmentRecordList{}
	dbHandle := mysql.GetMysqlDbHandle()

	// fuzzy search by keyword
	dbHandle.Where("theme like ?", "%"+key+"%").Find(entertainmentRecordList)

	return entertainmentRecordList, nil
}

// TODO
// search across all record tables
func GetListByKeywordFromAllRecord(key string) (interface{}, error) {
	return nil, nil
}
/**
 *
 */
package com.jems.cbd.atores;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import com.jems.cbd.anotations.Cabecalho;

/**
 * The <code>Pessoa</code> class is used to instantiate the <i>Pessoa
 * objects</i> that are persisted to the database via JDBC and also via
 * Hibernate. It has an empty constructor and another taking all the class
 * attributes, an overridden toString to print a person, all the getters and
 * setters, and the Hibernate annotations.
 *
 * @author <NAME>
 * @version 1.0
 * @see com.jems.cbd.hibernate.TestesHibernate
 * @see com.jems.cbd.atores.dao.PessoaDao
 * @see com.jems.cbd.padrois.PersistirAtor
 */
@Cabecalho(dataCriacao = "21/04/2016")
/*
 * @Entity is the main JPA ("Hibernate") annotation. It must appear before the
 * class name and must be defined on every class whose objects will be
 * persisted to the database.
 */
@Entity
public class Pessoa {

	/*
	 * @Id indicates which attribute of a class annotated with @Entity is
	 * mapped to the primary key of the table corresponding to the class.
	 */
	@Id
	/*
	 * @GeneratedValue usually accompanies the @Id annotation. It indicates
	 * that the attribute is generated by the database when a new record is
	 * inserted.
	 */
	@GeneratedValue
	private Long id;

	private String nome;
	private String sobreNome;
	private String email;
	private String cpf;

	public Pessoa() {
	}

	public Pessoa(String nome, String sobreNome, String email, String cpf) {
		this.nome = nome;
		this.sobreNome = sobreNome;
		this.email = email;
		this.cpf = cpf;
	}

	@Override
	public String toString() {
		return "Nome: " + this.nome + " Sobrenome: " + this.sobreNome + " Email: " + this.email + " Cpf: " + this.cpf;
	}

	public String getNome() {
		return nome;
	}

	public void setNome(String nome) {
		this.nome = nome;
	}

	public String getSobreNome() {
		return sobreNome;
	}

	public void setSobreNome(String sobreNome) {
		this.sobreNome = sobreNome;
	}

	public String getEmail() {
		return email;
	}

	public void setEmail(String email) {
		this.email = email;
	}

	public String getCpf() {
		return cpf;
	}

	public void setCpf(String cpf) {
		this.cpf = cpf;
	}
}
// Copyright (c) 2020-2021 C4 Project // // This file is part of c4t. // Licenced under the MIT licence; see `LICENSE`. package compiler import ( "strings" "github.com/c4-project/c4t/internal/id" ) // Named wraps an Instance with its ID. type Named struct { // ID is the ID of the compiler. ID id.ID `toml:"id" json:"id"` Instance } // AddName names this Instance with ID name, lifting it to a Named. func (c Instance) AddName(name id.ID) *Named { return &Named{ID: name, Instance: c} } // FullID gets a fully qualified identifier for this configuration, consisting of the compiler name, followed by // 'oOpt' where 'Opt' is its selected optimisation name, and 'mMopt' where 'Mopt' is its selected machine profile. // // Where Opt or Mopt contain '.', these become '_'. This behaviour may change. func (n Named) FullID() (id.ID, error) { // In case of things like '-march=armv8.1-a'. repl := strings.NewReplacer(".", "_") o := repl.Replace(n.SelectedOptName()) m := repl.Replace(n.SelectedMOpt) // We don't append in the config time, which means that this ID doesn't fully capture the compiler specification; // that said, maybe the config time being a part of the specification is a rare enough case that we needn't worry. return id.New(append(n.ID.Tags(), "o"+o, "m"+m)...) }
def ntp_sync():
    try:
        ntp_client = ntplib.NTPClient()
        ntp_response = ntp_client.request(PypePeer.NTP_SERVER_ADDR)
        time_obj = datetime.datetime.utcfromtimestamp(ntp_response.tx_time)
        # win32api.SetSystemTime expects integer milliseconds,
        # so use floor division rather than true division
        win32api.SetSystemTime(time_obj.year, time_obj.month,
                               (time_obj.weekday() + 1) % 7, time_obj.day,
                               time_obj.hour, time_obj.minute, time_obj.second,
                               time_obj.microsecond // 1000)
        Logger.info('Synced clock with network time: {}.'.format(time_obj))
    except Exception as e:
        Logger.info('Time sync error: ' + str(e))
{-# LANGUAGE OverloadedLists, OverloadedStrings, TypeOperators #-} module Main where import Network.Wai.Handler.Warp (run) import Servant import Twirp import Prelude hiding ((!!)) import System.Random import Control.Monad.IO.Class import Data.List.NonEmpty (NonEmpty, (!!)) import Twirp.Example.Haberdasher.Haberdasher import Proto.Haberdasher as P import Proto.Haberdasher_Fields as P import Data.ProtoLens (defMessage) import Lens.Micro type RequestID = String type API = Header "X-Request-Id" RequestID :> HaberdasherAPI main :: IO () main = run 8003 app app :: Application app = twirpErrorResponses apiApp apiApp :: Application apiApp = serve (Proxy :: Proxy API) server server :: Server API server _requestID = (makeHat :<|> getBill) :<|> checkHealth where makeHat :: Size -> Handler Hat makeHat size = do color' <- choice ["blue", "red", "white"] kind' <- choice ["Setson", "cowboy", "beanie"] pure $ defMessage & P.inches .~ (size^.inches) & P.color .~ color' & P.name .~ kind' getBill :: Hat -> Handler Bill getBill hat = do price' <- case hat^.color of "blue" -> pure $ mkPrice 10 hat "red" -> pure $ mkPrice 20 hat "white" -> pure $ mkPrice 40 hat _ -> throwError (err400 { errBody = "Invalid hat color" }) pure $ defMessage & P.maybe'price ?~ price' & P.status .~ Bill'UN_PAID & P.maybe'extra .~ Nothing where mkPrice m h = defMessage & P.dollars .~ (m * fromIntegral (h^.inches)) & P.cents .~ 0 checkHealth :: Ping -> Handler Pong checkHealth _ = pure $ defMessage & P.status .~ "OK" & P.stuff .~ mempty & P.maybe'extra .~ Nothing & P.id .~ 1 & P.type' .~ "hello" choice :: MonadIO m => NonEmpty a -> m a choice fs = (fs !!) <$> liftIO (randomRIO (0, pred (length fs)))
# !!!!
# Call this script as, for example:
# python3 attach_sweep.py --sweep_id chvie4r1 --config_args_path "configs/svhn.yml"

import wandb
import train
import argparse
from utils import load_args_from_yaml

parser = argparse.ArgumentParser(description='Attaching to a sweep.')
parser.add_argument('--sweep_id', type=str, default="7qifsy12", metavar='N',
                    help="ID of sweep to attach to.")
args, unknown = parser.parse_known_args()

wandb_args = load_args_from_yaml('configs/wandb.yml')

wandb.agent(wandb_args.team_name + '/' + wandb_args.project_name + '/' + args.sweep_id,
            function=train.train)
#include <pddl/detail/normalization/Description.h> #include <pddl/AST.h> #include <pddl/NormalizedAST.h> #include <pddl/detail/normalization/Domain.h> #include <pddl/detail/normalization/Problem.h> namespace pddl { namespace detail { //////////////////////////////////////////////////////////////////////////////////////////////////// // // Description // //////////////////////////////////////////////////////////////////////////////////////////////////// normalizedAST::Description normalize(ast::Description &&description) { normalizedAST::Description normalizedDescription; normalizedDescription.domain = normalize(std::move(description.domain)); if (description.problem) normalizedDescription.problem = normalize(std::move(description.problem.value()), normalizedDescription.domain.get()); return normalizedDescription; } //////////////////////////////////////////////////////////////////////////////////////////////////// } }
import pytest import pandas._testing as tm class TestDataFrameTake: def test_take(self, float_frame): # homogeneous order = [3, 1, 2, 0] for df in [float_frame]: result = df.take(order, axis=0) expected = df.reindex(df.index.take(order)) tm.assert_frame_equal(result, expected) # axis = 1 result = df.take(order, axis=1) expected = df.loc[:, ["D", "B", "C", "A"]] tm.assert_frame_equal(result, expected, check_names=False) # negative indices order = [2, 1, -1] for df in [float_frame]: result = df.take(order, axis=0) expected = df.reindex(df.index.take(order)) tm.assert_frame_equal(result, expected) result = df.take(order, axis=0) tm.assert_frame_equal(result, expected) # axis = 1 result = df.take(order, axis=1) expected = df.loc[:, ["C", "B", "D"]] tm.assert_frame_equal(result, expected, check_names=False) # illegal indices msg = "indices are out-of-bounds" with pytest.raises(IndexError, match=msg): df.take([3, 1, 2, 30], axis=0) with pytest.raises(IndexError, match=msg): df.take([3, 1, 2, -31], axis=0) with pytest.raises(IndexError, match=msg): df.take([3, 1, 2, 5], axis=1) with pytest.raises(IndexError, match=msg): df.take([3, 1, 2, -5], axis=1) def test_take_mixed_type(self, float_string_frame): # mixed-dtype order = [4, 1, 2, 0, 3] for df in [float_string_frame]: result = df.take(order, axis=0) expected = df.reindex(df.index.take(order)) tm.assert_frame_equal(result, expected) # axis = 1 result = df.take(order, axis=1) expected = df.loc[:, ["foo", "B", "C", "A", "D"]] tm.assert_frame_equal(result, expected) # negative indices order = [4, 1, -2] for df in [float_string_frame]: result = df.take(order, axis=0) expected = df.reindex(df.index.take(order)) tm.assert_frame_equal(result, expected) # axis = 1 result = df.take(order, axis=1) expected = df.loc[:, ["foo", "B", "D"]] tm.assert_frame_equal(result, expected) def test_take_mixed_numeric(self, mixed_float_frame, mixed_int_frame): # by dtype order = [1, 2, 0, 3] for df in [mixed_float_frame, mixed_int_frame]: result 
= df.take(order, axis=0) expected = df.reindex(df.index.take(order)) tm.assert_frame_equal(result, expected) # axis = 1 result = df.take(order, axis=1) expected = df.loc[:, ["B", "C", "A", "D"]] tm.assert_frame_equal(result, expected)
/** * Unit test for the Sheet class. * * @author Glen Stampoultzis (glens at apache.org) */ public class TestSheet extends TestCase { public void testCreateSheet() throws Exception { // Check we're adding row and cell aggregates List records = new ArrayList(); records.add( new BOFRecord() ); records.add( new DimensionsRecord() ); records.add( new EOFRecord() ); Sheet sheet = Sheet.createSheet( records, 0, 0 ); int pos = 0; assertTrue( sheet.records.get(pos++) instanceof BOFRecord ); assertTrue( sheet.records.get(pos++) instanceof ColumnInfoRecordsAggregate ); assertTrue( sheet.records.get(pos++) instanceof DimensionsRecord ); assertTrue( sheet.records.get(pos++) instanceof RowRecordsAggregate ); assertTrue( sheet.records.get(pos++) instanceof ValueRecordsAggregate ); assertTrue( sheet.records.get(pos++) instanceof EOFRecord ); } }
#include "hal/hal_flash.h"

bool Flash_eraseSector(uint8_t sector) {
    FLASH_EraseInitTypeDef eraseConfig = {.TypeErase = FLASH_TYPEERASE_SECTORS,
                                          .Sector = sector,
                                          .NbSectors = 1,
                                          .VoltageRange = FLASH_VOLTAGE_RANGE_3};
    uint32_t badSector = 0;

    HAL_FLASH_Unlock();
    bool ret = HAL_FLASHEx_Erase(&eraseConfig, &badSector) == HAL_OK;
    HAL_FLASH_Lock();

    return ret;
}
/** * Wraps a symbol and its corresponding latest price. */ public class TickerPrice { /** * Ticker symbol. */ private String symbol; /** * Latest price. */ private String price; public String getSymbol() { return symbol; } public void setSymbol(String symbol) { this.symbol = symbol; } public String getPrice() { return price; } public void setPrice(String price) { this.price = price; } @Override public String toString() { return new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE) .append("symbol", symbol) .append("price", price) .toString(); } }
Berlin, Oct 22 (ANI): A new study of the landscape around Chesapeake Bay, the largest estuary in the US, has determined that an imbalance in the nitrogen cycle, due to the widespread use of fertilizers, is damaging water quality and fish populations.

According to a report in the Helmholtz Centre for Environmental Research (UFZ), Professor Grace Brush, from Johns Hopkins University in Baltimore, USA, undertook the study of landscape changes around Chesapeake Bay.

Professor Brush studied the organisms and materials preserved in sediments in Chesapeake Bay spanning 1000 to 14,000 years, alongside available historical records covering the past 300 years, to trace the history of changes to nitrogen loading in the estuary.

She highlights how population growth, agricultural expansion, and urbanization have released nitrogen from the land and moved it to Chesapeake Bay, where it has accumulated and degraded both the natural wildlife and water quality.

The combination of the increasing use of fertilizers, deforestation, and the draining of wetlands and floodplains to provide more land for crops has led to an imbalance in the nitrogen cycle, in particular reduced opportunities for the natural removal of nitrogen.

As a result, there is an excess of nitrogen in the estuary, a condition known as eutrophication. This in turn has led to the deterioration of the local ecosystem through reduced concentrations of oxygen in the bay, affecting both the water quality and the fish populations.

Providing food for an increasing population is the main reason for these changes, according to Professor Brush. Although the estuary supplied an abundance of fish species, humans also need plant-based food products in their diets, hence the increase in grasslands and use of fertilizers.

Brush adds that aquatic deterioration is not unique to the Chesapeake but a global phenomenon.
Marine dead zones with low oxygen and/or toxic algae, caused primarily by the run-off of fertilizers from the land as well as a greater reliance on fossil fuels, are on the increase.

Brush concludes her review by looking at the likely implications of this imbalanced nitrogen cycle for future ecosystems, as well as ways to improve water quality. She recommends multiple processes, both natural and engineered, to reduce nitrogen accumulation, and notes that ultimately the decision to proceed will come down to politics.

According to Brush, the future of the Chesapeake and of coastal regions in general will depend very much on the recognition of the importance of nitrogen removal for goals other than restoring the fishery, and on how successful the various tools for nitrogen removal are. (ANI)
// SPDX-License-Identifier: LGPL-2.1-or-later /* * * BlueZ - Bluetooth protocol stack for Linux * * Copyright (C) 2013-2014 Intel Corporation. All rights reserved. * * */ #ifdef HAVE_CONFIG_H #include <config.h> #endif #define _GNU_SOURCE #include <stdint.h> #include <stdbool.h> #include <errno.h> #include <unistd.h> #include <fcntl.h> #include <glib.h> #include <sys/ioctl.h> #include <sys/socket.h> #include <sys/wait.h> #include <sys/types.h> #include <net/if.h> #include <linux/sockios.h> #include <netinet/in.h> #include <netinet/ip6.h> #include <linux/if_bridge.h> #include "btio/btio.h" #include "lib/bluetooth.h" #include "lib/bnep.h" #include "lib/sdp.h" #include "lib/sdp_lib.h" #include "src/uuid-helper.h" #include "profiles/network/bnep.h" #include "src/log.h" #include "hal-msg.h" #include "ipc-common.h" #include "ipc.h" #include "utils.h" #include "bluetooth.h" #include "pan.h" #define SVC_HINT_NETWORKING 0x02 #define BNEP_BRIDGE "bt-pan" #define BNEP_PANU_INTERFACE "bt-pan" #define BNEP_NAP_INTERFACE "bt-pan%d" struct pan_device { char iface[16]; bdaddr_t dst; uint8_t conn_state; uint8_t role; GIOChannel *io; struct bnep *session; guint watch; }; static bdaddr_t adapter_addr; static GSList *devices = NULL; static uint8_t local_role = HAL_PAN_ROLE_NONE; static uint32_t nap_rec_id = 0; static uint32_t panu_rec_id = 0; static GIOChannel *nap_io = NULL; static bool nap_bridge_mode = false; static struct ipc *hal_ipc = NULL; static int set_forward_delay(int sk) { unsigned long args[4] = { BRCTL_SET_BRIDGE_FORWARD_DELAY, 0 , 0, 0 }; struct ifreq ifr; memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, BNEP_BRIDGE, IFNAMSIZ - 1); ifr.ifr_data = (char *) args; if (ioctl(sk, SIOCDEVPRIVATE, &ifr) < 0) { error("pan: setting forward delay failed: %d (%s)", errno, strerror(errno)); return -1; } return 0; } static int nap_create_bridge(void) { int sk, err; DBG("%s", BNEP_BRIDGE); if (nap_bridge_mode) return 0; sk = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0); if (sk < 
0) return -EOPNOTSUPP; if (ioctl(sk, SIOCBRADDBR, BNEP_BRIDGE) < 0) { err = -errno; if (err != -EEXIST) { close(sk); return -EOPNOTSUPP; } } err = set_forward_delay(sk); if (err < 0) ioctl(sk, SIOCBRDELBR, BNEP_BRIDGE); close(sk); nap_bridge_mode = err == 0; return err; } static int bridge_if_down(void) { struct ifreq ifr; int sk, err; sk = socket(AF_INET, SOCK_DGRAM, 0); memset(&ifr, 0, sizeof(ifr)); strncpy(ifr.ifr_name, BNEP_BRIDGE, IF_NAMESIZE - 1); ifr.ifr_flags &= ~IFF_UP; /* Bring down the interface */ err = ioctl(sk, SIOCSIFFLAGS, (caddr_t) &ifr); close(sk); if (err < 0) { error("pan: Could not bring down %s", BNEP_BRIDGE); return err; } return 0; } static int nap_remove_bridge(void) { int sk, err; DBG("%s", BNEP_BRIDGE); if (!nap_bridge_mode) return 0; bridge_if_down(); sk = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0); if (sk < 0) return -EOPNOTSUPP; err = ioctl(sk, SIOCBRDELBR, BNEP_BRIDGE); if (err < 0) err = -errno; close(sk); if (err < 0) return err; nap_bridge_mode = false; return 0; } static int device_cmp(gconstpointer s, gconstpointer user_data) { const struct pan_device *dev = s; const bdaddr_t *dst = user_data; return bacmp(&dev->dst, dst); } static void pan_device_free(void *data) { struct pan_device *dev = data; if (dev->watch > 0) { bnep_server_delete(BNEP_BRIDGE, dev->iface, &dev->dst); g_source_remove(dev->watch); } if (dev->io) { g_io_channel_shutdown(dev->io, FALSE, NULL); g_io_channel_unref(dev->io); } if (dev->session) bnep_free(dev->session); g_free(dev); } static void pan_device_remove(struct pan_device *dev) { devices = g_slist_remove(devices, dev); if (g_slist_length(devices) == 0) { local_role = HAL_PAN_ROLE_NONE; nap_remove_bridge(); } pan_device_free(dev); } static void bt_pan_notify_conn_state(struct pan_device *dev, uint8_t state) { struct hal_ev_pan_conn_state ev; char addr[18]; if (dev->conn_state == state) return; dev->conn_state = state; ba2str(&dev->dst, addr); DBG("device %s state %u", addr, state); 
bdaddr2android(&dev->dst, ev.bdaddr); ev.state = state; ev.local_role = local_role; ev.remote_role = dev->role; ev.status = HAL_STATUS_SUCCESS; ipc_send_notif(hal_ipc, HAL_SERVICE_ID_PAN, HAL_EV_PAN_CONN_STATE, sizeof(ev), &ev); if (dev->conn_state == HAL_PAN_STATE_DISCONNECTED) pan_device_remove(dev); } static void bt_pan_notify_ctrl_state(struct pan_device *dev, uint8_t state, uint8_t status) { struct hal_ev_pan_ctrl_state ev; DBG(""); ev.state = state; ev.local_role = local_role; ev.status = status; memset(ev.name, 0, sizeof(ev.name)); if (local_role == HAL_PAN_ROLE_NAP) memcpy(ev.name, BNEP_BRIDGE, sizeof(BNEP_BRIDGE)); else if (local_role == HAL_PAN_ROLE_PANU) memcpy(ev.name, dev->iface, sizeof(dev->iface)); ipc_send_notif(hal_ipc, HAL_SERVICE_ID_PAN, HAL_EV_PAN_CTRL_STATE, sizeof(ev), &ev); } static void bnep_disconn_cb(void *data) { struct pan_device *dev = data; DBG("%s disconnected", dev->iface); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); } static void bnep_conn_cb(char *iface, int err, void *data) { struct pan_device *dev = data; DBG(""); if (err < 0) { error("bnep connect req failed: %s", strerror(-err)); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); return; } memcpy(dev->iface, iface, sizeof(dev->iface)); DBG("%s connected", dev->iface); bt_pan_notify_ctrl_state(dev, HAL_PAN_CTRL_ENABLED, HAL_STATUS_SUCCESS); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_CONNECTED); } static void connect_cb(GIOChannel *chan, GError *err, gpointer data) { struct pan_device *dev = data; uint16_t l_role, r_role; int perr, sk; DBG(""); if (err) { error("%s", err->message); goto fail; } l_role = (local_role == HAL_PAN_ROLE_NAP) ? BNEP_SVC_NAP : BNEP_SVC_PANU; r_role = (dev->role == HAL_PAN_ROLE_NAP) ? 
BNEP_SVC_NAP : BNEP_SVC_PANU; sk = g_io_channel_unix_get_fd(dev->io); dev->session = bnep_new(sk, l_role, r_role, BNEP_PANU_INTERFACE); if (!dev->session) goto fail; perr = bnep_connect(dev->session, bnep_conn_cb, bnep_disconn_cb, dev, dev); if (perr < 0) { error("bnep connect req failed: %s", strerror(-perr)); goto fail; } if (dev->io) { g_io_channel_unref(dev->io); dev->io = NULL; } return; fail: bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); } static void bt_pan_connect(const void *buf, uint16_t len) { const struct hal_cmd_pan_connect *cmd = buf; struct pan_device *dev; uint8_t status; bdaddr_t dst; char addr[18]; GSList *l; GError *gerr = NULL; DBG(""); switch (cmd->local_role) { case HAL_PAN_ROLE_NAP: if (cmd->remote_role != HAL_PAN_ROLE_PANU) { status = HAL_STATUS_UNSUPPORTED; goto failed; } break; case HAL_PAN_ROLE_PANU: if (cmd->remote_role != HAL_PAN_ROLE_NAP && cmd->remote_role != HAL_PAN_ROLE_PANU) { status = HAL_STATUS_UNSUPPORTED; goto failed; } break; default: status = HAL_STATUS_UNSUPPORTED; goto failed; } android2bdaddr(&cmd->bdaddr, &dst); l = g_slist_find_custom(devices, &dst, device_cmp); if (l) { status = HAL_STATUS_FAILED; goto failed; } dev = g_new0(struct pan_device, 1); bacpy(&dev->dst, &dst); local_role = cmd->local_role; dev->role = cmd->remote_role; ba2str(&dev->dst, addr); DBG("connecting to %s %s", addr, dev->iface); dev->io = bt_io_connect(connect_cb, dev, NULL, &gerr, BT_IO_OPT_SOURCE_BDADDR, &adapter_addr, BT_IO_OPT_DEST_BDADDR, &dev->dst, BT_IO_OPT_PSM, BNEP_PSM, BT_IO_OPT_SEC_LEVEL, BT_IO_SEC_MEDIUM, BT_IO_OPT_OMTU, BNEP_MTU, BT_IO_OPT_IMTU, BNEP_MTU, BT_IO_OPT_INVALID); if (!dev->io) { error("%s", gerr->message); g_error_free(gerr); g_free(dev); status = HAL_STATUS_FAILED; goto failed; } devices = g_slist_append(devices, dev); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_CONNECTING); status = HAL_STATUS_SUCCESS; failed: ipc_send_rsp(hal_ipc, HAL_SERVICE_ID_PAN, HAL_OP_PAN_CONNECT, status); } static void 
bt_pan_disconnect(const void *buf, uint16_t len) { const struct hal_cmd_pan_disconnect *cmd = buf; struct pan_device *dev; uint8_t status; GSList *l; bdaddr_t dst; DBG(""); android2bdaddr(&cmd->bdaddr, &dst); l = g_slist_find_custom(devices, &dst, device_cmp); if (!l) { status = HAL_STATUS_FAILED; goto failed; } dev = l->data; if (dev->conn_state == HAL_PAN_STATE_CONNECTED && dev->session) bnep_disconnect(dev->session); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); status = HAL_STATUS_SUCCESS; failed: ipc_send_rsp(hal_ipc, HAL_SERVICE_ID_PAN, HAL_OP_PAN_DISCONNECT, status); } static gboolean nap_watchdog_cb(GIOChannel *chan, GIOCondition cond, gpointer user_data) { struct pan_device *dev = user_data; DBG("disconnected"); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); return FALSE; } static gboolean nap_setup_cb(GIOChannel *chan, GIOCondition cond, gpointer user_data) { struct pan_device *dev = user_data; uint8_t packet[BNEP_MTU]; int sk, n, err; if (cond & (G_IO_ERR | G_IO_HUP | G_IO_NVAL)) { error("Hangup or error or inval on BNEP socket"); return FALSE; } sk = g_io_channel_unix_get_fd(chan); /* * BNEP_SETUP_CONNECTION_REQUEST_MSG should be read and left in case * of kernel setup connection msg handling. */ n = recv(sk, packet, sizeof(packet), MSG_PEEK); if (n < 0) { error("read(): %s(%d)", strerror(errno), errno); goto failed; } if (n < 3) { error("pan: to few setup connection request data received"); goto failed; } err = nap_create_bridge(); if (err < 0) error("pan: Failed to create bridge: %s (%d)", strerror(-err), -err); if (bnep_server_add(sk, (err < 0) ? 
NULL : BNEP_BRIDGE, dev->iface, &dev->dst, packet, n) < 0) { error("pan: server_connadd failed"); goto failed; } dev->watch = g_io_add_watch(chan, G_IO_HUP | G_IO_ERR | G_IO_NVAL, nap_watchdog_cb, dev); g_io_channel_unref(dev->io); dev->io = NULL; bt_pan_notify_ctrl_state(dev, HAL_PAN_CTRL_ENABLED, HAL_STATUS_SUCCESS); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_CONNECTED); return FALSE; failed: pan_device_remove(dev); return FALSE; } static void nap_connect_cb(GIOChannel *chan, GError *err, gpointer user_data) { struct pan_device *dev = user_data; DBG(""); if (err) { error("%s", err->message); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); return; } g_io_channel_set_close_on_unref(chan, TRUE); dev->watch = g_io_add_watch(chan, G_IO_IN | G_IO_HUP | G_IO_ERR | G_IO_NVAL, nap_setup_cb, dev); } static void nap_confirm_cb(GIOChannel *chan, gpointer data) { struct pan_device *dev; bdaddr_t dst; char address[18]; GError *err = NULL; DBG(""); bt_io_get(chan, &err, BT_IO_OPT_DEST_BDADDR, &dst, BT_IO_OPT_DEST, address, BT_IO_OPT_INVALID); if (err) { error("%s", err->message); g_error_free(err); return; } DBG("incoming connect request from %s", address); dev = g_new0(struct pan_device, 1); bacpy(&dev->dst, &dst); local_role = HAL_PAN_ROLE_NAP; dev->role = HAL_PAN_ROLE_PANU; strncpy(dev->iface, BNEP_NAP_INTERFACE, 16); dev->iface[15] = '\0'; dev->io = g_io_channel_ref(chan); g_io_channel_set_close_on_unref(dev->io, TRUE); if (!bt_io_accept(dev->io, nap_connect_cb, dev, NULL, &err)) { error("bt_io_accept: %s", err->message); g_error_free(err); goto failed; } devices = g_slist_append(devices, dev); bt_pan_notify_conn_state(dev, HAL_PAN_STATE_CONNECTING); return; failed: bt_pan_notify_conn_state(dev, HAL_PAN_STATE_DISCONNECTED); } static void destroy_nap_device(void) { DBG(""); nap_remove_bridge(); if (nap_io) { g_io_channel_shutdown(nap_io, FALSE, NULL); g_io_channel_unref(nap_io); nap_io = NULL; } } static int register_nap_server(void) { GError *gerr = NULL; 
DBG(""); nap_io = bt_io_listen(NULL, nap_confirm_cb, NULL, NULL, &gerr, BT_IO_OPT_SOURCE_BDADDR, &adapter_addr, BT_IO_OPT_PSM, BNEP_PSM, BT_IO_OPT_SEC_LEVEL, BT_IO_SEC_MEDIUM, BT_IO_OPT_OMTU, BNEP_MTU, BT_IO_OPT_IMTU, BNEP_MTU, BT_IO_OPT_INVALID); if (!nap_io) { destroy_nap_device(); error("%s", gerr->message); g_error_free(gerr); return -EIO; } return 0; } static void bt_pan_enable(const void *buf, uint16_t len) { const struct hal_cmd_pan_enable *cmd = buf; uint8_t status, state; int err; DBG(""); if (local_role == cmd->local_role) { status = HAL_STATUS_SUCCESS; goto reply; } /* destroy existing server */ destroy_nap_device(); switch (cmd->local_role) { case HAL_PAN_ROLE_NAP: break; case HAL_PAN_ROLE_NONE: local_role = HAL_PAN_ROLE_NONE; status = HAL_STATUS_SUCCESS; state = HAL_PAN_CTRL_DISABLED; goto notify; default: status = HAL_STATUS_UNSUPPORTED; goto reply; } local_role = cmd->local_role; err = register_nap_server(); if (err < 0) { status = HAL_STATUS_FAILED; destroy_nap_device(); goto reply; } status = HAL_STATUS_SUCCESS; state = HAL_PAN_CTRL_ENABLED; notify: bt_pan_notify_ctrl_state(NULL, state, status); reply: ipc_send_rsp(hal_ipc, HAL_SERVICE_ID_PAN, HAL_OP_PAN_ENABLE, status); } static void bt_pan_get_role(const void *buf, uint16_t len) { struct hal_rsp_pan_get_role rsp; DBG(""); rsp.local_role = local_role; ipc_send_rsp_full(hal_ipc, HAL_SERVICE_ID_PAN, HAL_OP_PAN_GET_ROLE, sizeof(rsp), &rsp, -1); } static const struct ipc_handler cmd_handlers[] = { /* HAL_OP_PAN_ENABLE */ { bt_pan_enable, false, sizeof(struct hal_cmd_pan_enable) }, /* HAL_OP_PAN_GET_ROLE */ { bt_pan_get_role, false, 0 }, /* HAL_OP_PAN_CONNECT */ { bt_pan_connect, false, sizeof(struct hal_cmd_pan_connect) }, /* HAL_OP_PAN_DISCONNECT */ { bt_pan_disconnect, false, sizeof(struct hal_cmd_pan_disconnect) }, }; static sdp_record_t *nap_record(void) { sdp_list_t *svclass, *pfseq, *apseq, *root, *aproto; uuid_t root_uuid, nap, l2cap, bnep; sdp_profile_desc_t profile[1]; sdp_list_t *proto[2]; 
sdp_data_t *v, *p; uint16_t psm = BNEP_PSM, version = 0x0100; uint16_t security = 0x0001, type = 0xfffe; uint32_t rate = 0; const char *desc = "Network Access Point", *name = "Network Service"; sdp_record_t *record; uint16_t ptype[] = { 0x0800, /* IPv4 */ 0x0806, /* ARP */ }; sdp_data_t *head, *pseq, *data; record = sdp_record_alloc(); if (!record) return NULL; record->attrlist = NULL; record->pattern = NULL; sdp_uuid16_create(&nap, NAP_SVCLASS_ID); svclass = sdp_list_append(NULL, &nap); sdp_set_service_classes(record, svclass); sdp_uuid16_create(&profile[0].uuid, NAP_PROFILE_ID); profile[0].version = 0x0100; pfseq = sdp_list_append(NULL, &profile[0]); sdp_set_profile_descs(record, pfseq); sdp_set_info_attr(record, name, NULL, desc); sdp_attr_add_new(record, SDP_ATTR_NET_ACCESS_TYPE, SDP_UINT16, &type); sdp_attr_add_new(record, SDP_ATTR_MAX_NET_ACCESSRATE, SDP_UINT32, &rate); sdp_uuid16_create(&root_uuid, PUBLIC_BROWSE_GROUP); root = sdp_list_append(NULL, &root_uuid); sdp_set_browse_groups(record, root); sdp_uuid16_create(&l2cap, L2CAP_UUID); proto[0] = sdp_list_append(NULL, &l2cap); p = sdp_data_alloc(SDP_UINT16, &psm); proto[0] = sdp_list_append(proto[0], p); apseq = sdp_list_append(NULL, proto[0]); sdp_uuid16_create(&bnep, BNEP_UUID); proto[1] = sdp_list_append(NULL, &bnep); v = sdp_data_alloc(SDP_UINT16, &version); proto[1] = sdp_list_append(proto[1], v); head = sdp_data_alloc(SDP_UINT16, &ptype[0]); data = sdp_data_alloc(SDP_UINT16, &ptype[1]); sdp_seq_append(head, data); pseq = sdp_data_alloc(SDP_SEQ16, head); proto[1] = sdp_list_append(proto[1], pseq); apseq = sdp_list_append(apseq, proto[1]); aproto = sdp_list_append(NULL, apseq); sdp_set_access_protos(record, aproto); sdp_add_lang_attr(record); sdp_attr_add_new(record, SDP_ATTR_SECURITY_DESC, SDP_UINT16, &security); sdp_data_free(p); sdp_data_free(v); sdp_list_free(apseq, NULL); sdp_list_free(root, NULL); sdp_list_free(aproto, NULL); sdp_list_free(proto[0], NULL); sdp_list_free(proto[1], NULL); 
sdp_list_free(svclass, NULL); sdp_list_free(pfseq, NULL); return record; } static sdp_record_t *panu_record(void) { sdp_list_t *svclass, *pfseq, *apseq, *root, *aproto; uuid_t root_uuid, panu, l2cap, bnep; sdp_profile_desc_t profile[1]; sdp_list_t *proto[2]; sdp_data_t *v, *p; uint16_t psm = BNEP_PSM, version = 0x0100; uint16_t security = 0x0001, type = 0xfffe; uint32_t rate = 0; const char *desc = "PAN User", *name = "Network Service"; sdp_record_t *record; uint16_t ptype[] = { 0x0800, /* IPv4 */ 0x0806, /* ARP */ }; sdp_data_t *head, *pseq, *data; record = sdp_record_alloc(); if (!record) return NULL; record->attrlist = NULL; record->pattern = NULL; sdp_uuid16_create(&panu, PANU_SVCLASS_ID); svclass = sdp_list_append(NULL, &panu); sdp_set_service_classes(record, svclass); sdp_uuid16_create(&profile[0].uuid, PANU_PROFILE_ID); profile[0].version = 0x0100; pfseq = sdp_list_append(NULL, &profile[0]); sdp_set_profile_descs(record, pfseq); sdp_set_info_attr(record, name, NULL, desc); sdp_attr_add_new(record, SDP_ATTR_NET_ACCESS_TYPE, SDP_UINT16, &type); sdp_attr_add_new(record, SDP_ATTR_MAX_NET_ACCESSRATE, SDP_UINT32, &rate); sdp_uuid16_create(&root_uuid, PUBLIC_BROWSE_GROUP); root = sdp_list_append(NULL, &root_uuid); sdp_set_browse_groups(record, root); sdp_uuid16_create(&l2cap, L2CAP_UUID); proto[0] = sdp_list_append(NULL, &l2cap); p = sdp_data_alloc(SDP_UINT16, &psm); proto[0] = sdp_list_append(proto[0], p); apseq = sdp_list_append(NULL, proto[0]); sdp_uuid16_create(&bnep, BNEP_UUID); proto[1] = sdp_list_append(NULL, &bnep); v = sdp_data_alloc(SDP_UINT16, &version); proto[1] = sdp_list_append(proto[1], v); head = sdp_data_alloc(SDP_UINT16, &ptype[0]); data = sdp_data_alloc(SDP_UINT16, &ptype[1]); sdp_seq_append(head, data); pseq = sdp_data_alloc(SDP_SEQ16, head); proto[1] = sdp_list_append(proto[1], pseq); apseq = sdp_list_append(apseq, proto[1]); aproto = sdp_list_append(NULL, apseq); sdp_set_access_protos(record, aproto); sdp_add_lang_attr(record); 
sdp_attr_add_new(record, SDP_ATTR_SECURITY_DESC, SDP_UINT16, &security); sdp_data_free(p); sdp_data_free(v); sdp_list_free(apseq, NULL); sdp_list_free(root, NULL); sdp_list_free(aproto, NULL); sdp_list_free(proto[0], NULL); sdp_list_free(proto[1], NULL); sdp_list_free(svclass, NULL); sdp_list_free(pfseq, NULL); return record; } bool bt_pan_register(struct ipc *ipc, const bdaddr_t *addr, uint8_t mode) { sdp_record_t *nap_rec, *panu_rec; int err; DBG(""); bacpy(&adapter_addr, addr); nap_rec = nap_record(); if (bt_adapter_add_record(nap_rec, SVC_HINT_NETWORKING) < 0) { sdp_record_free(nap_rec); error("Failed to allocate PAN-NAP sdp record"); return false; } panu_rec = panu_record(); if (bt_adapter_add_record(panu_rec, SVC_HINT_NETWORKING) < 0) { sdp_record_free(nap_rec); sdp_record_free(panu_rec); error("Failed to allocate PAN-PANU sdp record"); return false; } err = bnep_init(); if (err < 0) { error("Failed to init BNEP"); bt_adapter_remove_record(nap_rec->handle); bt_adapter_remove_record(panu_rec->handle); return false; } err = register_nap_server(); if (err < 0) { error("Failed to register NAP server"); bt_adapter_remove_record(nap_rec->handle); bt_adapter_remove_record(panu_rec->handle); bnep_cleanup(); return false; } nap_rec_id = nap_rec->handle; panu_rec_id = panu_rec->handle; hal_ipc = ipc; ipc_register(hal_ipc, HAL_SERVICE_ID_PAN, cmd_handlers, G_N_ELEMENTS(cmd_handlers)); return true; } void bt_pan_unregister(void) { DBG(""); g_slist_free_full(devices, pan_device_free); devices = NULL; local_role = HAL_PAN_ROLE_NONE; bnep_cleanup(); ipc_unregister(hal_ipc, HAL_SERVICE_ID_PAN); hal_ipc = NULL; bt_adapter_remove_record(nap_rec_id); nap_rec_id = 0; bt_adapter_remove_record(panu_rec_id); panu_rec_id = 0; destroy_nap_device(); }
import { useLocalStorage } from "react-use";

import {
  ReservationUnitByPkType,
  ReservationUnitType,
} from "../modules/gql-types";
import { ReservationUnit } from "../modules/types";

export type ReservationUnitList = {
  reservationUnits:
    | ReservationUnit[]
    | ReservationUnitType[]
    | ReservationUnitByPkType[];
  selectReservationUnit: (
    reservationUnit:
      | ReservationUnit
      | ReservationUnitType
      | ReservationUnitByPkType
  ) => void;
  containsReservationUnit: (
    reservationUnit:
      | ReservationUnit
      | ReservationUnitType
      | ReservationUnitByPkType
  ) => boolean;
  removeReservationUnit: (
    reservationUnit:
      | ReservationUnit
      | ReservationUnitType
      | ReservationUnitByPkType
  ) => void;
  clearSelections: () => void;
};

const useReservationUnitsList = (): ReservationUnitList => {
  const [reservationUnits, setReservationUnits] = useLocalStorage(
    "reservationUnitList",
    [] as ReservationUnit[]
  );

  const selectReservationUnit = (reservationUnit: ReservationUnit) => {
    setReservationUnits([
      ...(reservationUnits as ReservationUnit[]),
      reservationUnit,
    ]);
  };

  const removeReservationUnit = (reservationUnit: ReservationUnit) => {
    if (!reservationUnits) {
      return;
    }
    setReservationUnits(
      reservationUnits.filter((unit) => unit.id !== reservationUnit.id)
    );
  };

  const clearSelections = () => {
    setReservationUnits([]);
  };

  const containsReservationUnit = (reservationUnit: ReservationUnit): boolean =>
    reservationUnits
      ? reservationUnits.some((ru) => ru.id === reservationUnit.id)
      : false;

  return {
    selectReservationUnit,
    containsReservationUnit,
    clearSelections,
    removeReservationUnit,
    reservationUnits: reservationUnits || [],
  };
};

export default useReservationUnitsList;
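The hook's selection logic is independent of React and localStorage. As an illustrative sketch (the `Unit` type below is a hypothetical stand-in carrying only the `id` field the hook actually touches), the same select/contains/remove semantics on a plain array:

```typescript
type Unit = { id: number }; // only the field the list logic relies on

// Plain-array versions of the hook's operations (no React, no persistence)
const select = (list: Unit[], u: Unit): Unit[] => [...list, u];
const contains = (list: Unit[], u: Unit): boolean =>
  list.some((ru) => ru.id === u.id);
const remove = (list: Unit[], u: Unit): Unit[] =>
  list.filter((ru) => ru.id !== u.id);

let list: Unit[] = [];
list = select(list, { id: 1 });
list = select(list, { id: 2 });
console.log(contains(list, { id: 1 })); // true
list = remove(list, { id: 1 });
console.log(contains(list, { id: 1 })); // false
console.log(list.length); // 1
```

Note that, like the hook itself, `contains` and `remove` match on `id` alone, so two objects with the same `id` are treated as the same unit.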
import { RootState } from "../rootReducer";
import { Language } from "../../types/interface";

export const languei18nSelector = (state: RootState): string =>
  state.langue.languei18nCode;

export const allLanguesSelector = (state: RootState): Language[] =>
  state.langue.langues;

export const showLangModalSelector = (state: RootState): boolean =>
  state.langue.showLangModal;
//
//  Perceptron.hpp
//  Moles
//
//  Created by <NAME> on 10/08/16.
//  Copyright © 2016 wizzo. All rights reserved.
//

#ifndef Perceptron_hpp
#define Perceptron_hpp

#include <stdio.h>

namespace molegame {
namespace misc {

template<int Pc>
struct Params {
    int p[Pc];
};

template<int Ic>
class Perceptron {
public:
    void init(Params<Ic> weights, int threshold) {
        for (int wIdx = 0; wIdx < Ic; ++wIdx) {
            this->weights.p[wIdx] = weights.p[wIdx];
        }
        this->threshold = threshold;
    }

    bool output(Params<Ic> inputs) {
        int sum = 0;
        for (int inIdx = 0; inIdx < Ic; ++inIdx) {
            sum += inputs.p[inIdx] * this->weights.p[inIdx];
        }
        return sum > this->threshold;
    }

private:
    Params<Ic> weights;
    int threshold;
};

} // namespace misc
} // namespace molegame

#endif /* Perceptron_hpp */
/**
 * Converts a multi-dimensional object array descriptor, given by descriptor, whose dimensionality
 * is represented by $ characters into the same descriptor but one in which the dimensionality
 * is represented by [ characters.
 *
 * The purpose of this method is to return a descriptor of the format expected
 * by the {@link #getUnifyingArrayWrapperDescriptor} method.
 *
 * @param descriptor The multi-dimensional object array descriptor.
 * @return The unifying multi-dimensional object array descriptor.
 */
private static String prepareObjectArrayForUnification(String descriptor) {
    int numLeadingTokens = 0;
    for (char c : descriptor.toCharArray()) {
        if (c == '$') {
            numLeadingTokens++;
        } else {
            break;
        }
    }
    String transformedTokens = new String(new char[numLeadingTokens]).replace("\0", "[");
    String remainder = descriptor.substring(numLeadingTokens);
    return transformedTokens + remainder;
}
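For illustration, a self-contained sketch of the same transformation (the surrounding class and the `getUnifyingArrayWrapperDescriptor` consumer are not reproduced, and `$$La/B;` is a made-up descriptor, not one from the original codebase):

```java
public class DescriptorDemo {
    // Same logic as prepareObjectArrayForUnification: each leading '$'
    // token is rewritten as a '[' token; the remainder is kept verbatim.
    static String prepare(String descriptor) {
        int numLeadingTokens = 0;
        for (char c : descriptor.toCharArray()) {
            if (c == '$') {
                numLeadingTokens++;
            } else {
                break;
            }
        }
        // new char[n] is n NUL characters; replacing them yields n '[' chars
        String transformed = new String(new char[numLeadingTokens]).replace("\0", "[");
        return transformed + descriptor.substring(numLeadingTokens);
    }

    public static void main(String[] args) {
        System.out.println(prepare("$$La/B;")); // [[La/B;
        System.out.println(prepare("La/B;"));   // unchanged: La/B;
    }
}
```

A descriptor with no leading `$` characters passes through unchanged, since zero tokens are rewritten.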
// Ensure that all dictionary indices are valid and that all values
// are in range.
//
// Note that since we don't have the attribute schema, this doesn't validate
// that a given attribute is being treated as the right type. That is, an
// attribute called 'source.ip' which is of type IP_ADDRESS could be listed as
// a string or an int, and we wouldn't catch it here.
func checkPreconditions(dictionary dictionary, attrs *mixerpb.Attributes) error {
	var e *me.Error

	for k := range attrs.StringAttributes {
		if _, present := dictionary[k]; !present {
			e = me.Append(e, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
	}

	for k := range attrs.Int64Attributes {
		if _, present := dictionary[k]; !present {
			e = me.Append(e, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
	}

	for k := range attrs.DoubleAttributes {
		if _, present := dictionary[k]; !present {
			e = me.Append(e, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
	}

	for k := range attrs.BoolAttributes {
		if _, present := dictionary[k]; !present {
			e = me.Append(e, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
	}

	for k, v := range attrs.TimestampAttributes {
		if _, present := dictionary[k]; !present {
			e = me.Append(e, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
		if _, err := ptypes.Timestamp(v); err != nil {
			e = me.Append(e, err)
		}
	}

	for k := range attrs.BytesAttributes {
		if _, present := dictionary[k]; !present {
			e = me.Append(e, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
	}

	return e.ErrorOrNil()
}
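The pattern here — look up every attribute index in the dictionary and accumulate all failures rather than stopping at the first — can be sketched without the mixer types. The snippet below is a simplified stand-in: the real code uses the `me` (multierror) package and typed attribute maps, whereas this sketch uses a single map and the standard library's `errors.Join` (Go 1.20+):

```go
package main

import (
	"errors"
	"fmt"
)

// A simplified dictionary: attribute index -> attribute name.
type dictionary map[int32]string

// checkIndices accumulates one error per undefined index instead of
// returning on the first failure, mirroring checkPreconditions above.
func checkIndices(dict dictionary, attrs map[int32]string) error {
	var errs []error
	for k := range attrs {
		if _, present := dict[k]; !present {
			errs = append(errs, fmt.Errorf("attribute index %d is not defined in the current dictionary", k))
		}
	}
	return errors.Join(errs...) // nil when every index resolved
}

func main() {
	dict := dictionary{1: "source.ip", 2: "request.size"}
	fmt.Println(checkIndices(dict, map[int32]string{1: "10.0.0.1"}))    // <nil>
	fmt.Println(checkIndices(dict, map[int32]string{9: "oops"}) != nil) // true
}
```

Accumulating errors this way lets a caller see every bad index in a malformed request at once, instead of fixing them one round-trip at a time.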
/*++ Copyright (c) 2004 - 2006, Intel Corporation All rights reserved. This program and the accompanying materials are licensed and made available under the terms and conditions of the BSD License which accompanies this distribution. The full text of the license may be found at http://opensource.org/licenses/bsd-license.php THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED. Module Name: Crc32SectionExtract.c Abstract: Implements GUIDed section extraction protocol interface with a specific GUID: CRC32. Please refer to the Tiano File Image Format Specification, FV spec 0.3.6 --*/ #include "Tiano.h" #include "EfiDriverLib.h" #include "EfiFirmwareFileSystem.h" #include EFI_PROTOCOL_DEFINITION (GuidedSectionExtraction) #include EFI_PROTOCOL_DEFINITION (SecurityPolicy) #include "GuidedSection.h" #include "Crc32SectionExtract.h" EFI_STATUS EFIAPI InitializeCrc32GuidedSectionExtractionProtocol ( IN EFI_HANDLE ImageHandle, IN EFI_SYSTEM_TABLE *SystemTable ); EFI_STATUS EFIAPI InitializeCrc32GuidedSectionExtractionProtocol ( IN EFI_HANDLE ImageHandle, IN EFI_SYSTEM_TABLE *SystemTable ) /*++ Routine Description: Entry point of the CRC32 GUIDed section extraction protocol. Creates and initializes an instance of the GUIDed section extraction protocol with CRC32 GUID. 
Arguments: ImageHandle EFI_HANDLE: A handle for the image that is initializing this driver SystemTable EFI_SYSTEM_TABLE: A pointer to the EFI system table Returns: EFI_SUCCESS: Driver initialized successfully EFI_LOAD_ERROR: Failed to Initialize or has been loaded EFI_OUT_OF_RESOURCES: Could not allocate needed resources --*/ { EFI_STATUS Status; EFI_GUIDED_SECTION_EXTRACTION_PROTOCOL *Crc32GuidedSep; EFI_HANDLE Handle; // // Initialize EFI library // EfiInitializeDriverLib (ImageHandle, SystemTable); // // Call all constructors per produced protocols // Status = GuidedSectionExtractionProtocolConstructor ( &Crc32GuidedSep, (EFI_EXTRACT_GUIDED_SECTION) Crc32ExtractSection ); if (EFI_ERROR (Status)) { if (Crc32GuidedSep != NULL) { gBS->FreePool (Crc32GuidedSep); } return Status; } // // Pass in a NULL to install to a new handle // Handle = NULL; Status = gBS->InstallProtocolInterface ( &Handle, &gEfiCrc32GuidedSectionExtractionProtocolGuid, EFI_NATIVE_INTERFACE, Crc32GuidedSep ); if (EFI_ERROR (Status)) { gBS->FreePool (Crc32GuidedSep); return EFI_LOAD_ERROR; } return EFI_SUCCESS; } STATIC UINT32 EFIAPI GetSectionLength ( IN EFI_COMMON_SECTION_HEADER *CommonHeader ) /*++ Routine Description: Get a length of section. Parameters: CommonHeader - Pointer to the common section header. Return Value: The length of the section, including the section header. --*/ // TODO: function comment is missing 'Arguments:' // TODO: function comment is missing 'Returns:' // TODO: CommonHeader - add argument and description to function comment { UINT32 Size; Size = *(UINT32 *) CommonHeader->Size & 0x00FFFFFF; return Size; } STATIC EFI_STATUS EFIAPI Crc32ExtractSection ( IN EFI_GUIDED_SECTION_EXTRACTION_PROTOCOL *This, IN VOID *InputSection, OUT VOID **OutputBuffer, OUT UINTN *OutputSize, OUT UINT32 *AuthenticationStatus ) /*++ Routine Description: This function reads and extracts contents of a section from an encapsulating section. Parameters: This - Indicates the calling context. 
InputSection - Buffer containing the input GUIDed section to be processed. OutputBuffer - *OutputBuffer is allocated from boot services pool memory and containing the new section stream. The caller is responsible for freeing this buffer. AuthenticationStatus - Pointer to a caller allocated UINT32 that indicates the authentication status of the output buffer Return Value: EFI_SUCCESS EFI_OUT_OF_RESOURCES EFI_INVALID_PARAMETER EFI_NOT_AVAILABLE_YET --*/ // TODO: function comment is missing 'Arguments:' // TODO: function comment is missing 'Returns:' // TODO: This - add argument and description to function comment // TODO: InputSection - add argument and description to function comment // TODO: OutputBuffer - add argument and description to function comment // TODO: OutputSize - add argument and description to function comment // TODO: AuthenticationStatus - add argument and description to function comment // TODO: EFI_INVALID_PARAMETER - add return value to function comment // TODO: EFI_INVALID_PARAMETER - add return value to function comment // TODO: EFI_OUT_OF_RESOURCES - add return value to function comment // TODO: EFI_SUCCESS - add return value to function comment { EFI_STATUS Status; CRC32_SECTION_HEADER *Crc32SectionHeader; EFI_GUID_DEFINED_SECTION *GuidedSectionHeader; UINT8 *Image; UINT32 Crc32Checksum; VOID *DummyInterface; if (OutputBuffer == NULL) { return EFI_INVALID_PARAMETER; } *OutputBuffer = NULL; // // Points to the section header // Crc32SectionHeader = (CRC32_SECTION_HEADER *) InputSection; GuidedSectionHeader = (EFI_GUID_DEFINED_SECTION *) InputSection; // // Check if the GUID is a CRC32 section GUID // if (!EfiCompareGuid ( &(GuidedSectionHeader->SectionDefinitionGuid), &gEfiCrc32GuidedSectionExtractionProtocolGuid )) { return EFI_INVALID_PARAMETER; } Image = (UINT8 *) InputSection + (UINT32) (GuidedSectionHeader->DataOffset); *OutputSize = GetSectionLength ((EFI_COMMON_SECTION_HEADER *) InputSection) - (UINT32) GuidedSectionHeader->DataOffset; 
Status = gBS->AllocatePool (EfiBootServicesData, *OutputSize, OutputBuffer); if (EFI_ERROR (Status)) { return EFI_OUT_OF_RESOURCES; } // // Implictly CRC32 GUIDed section should have STATUS_VALID bit set // ASSERT (GuidedSectionHeader->Attributes & EFI_GUIDED_SECTION_AUTH_STATUS_VALID); *AuthenticationStatus = EFI_LOCAL_AUTH_STATUS_IMAGE_SIGNED | EFI_AGGREGATE_AUTH_STATUS_IMAGE_SIGNED; // // Check whether there exists EFI_SECURITY_POLICY_PROTOCOL_GUID. // Status = gBS->LocateProtocol (&gEfiSecurityPolicyProtocolGuid, NULL, &DummyInterface); if (!EFI_ERROR (Status)) { *AuthenticationStatus |= EFI_LOCAL_AUTH_STATUS_PLATFORM_OVERRIDE | EFI_AGGREGATE_AUTH_STATUS_PLATFORM_OVERRIDE; } else { // // Calculate CRC32 Checksum of Image // gBS->CalculateCrc32 (Image, *OutputSize, &Crc32Checksum); if (Crc32Checksum != Crc32SectionHeader->CRC32Checksum) { *AuthenticationStatus |= EFI_LOCAL_AUTH_STATUS_TEST_FAILED | EFI_AGGREGATE_AUTH_STATUS_TEST_FAILED; } } EfiCopyMem (*OutputBuffer, Image, *OutputSize); return EFI_SUCCESS; }
A new numerical approach to Anderson (de)localization

We develop a new approach for the Anderson localization problem. The implementation of this method yields strong numerical evidence leading to a (surprising to many) conjecture: the two dimensional discrete random Schrödinger operator with small disorder allows states that are dynamically delocalized with positive probability. This approach is based on a recent result by Abakumov-Liaw-Poltoratski which is rooted in the study of spectral behavior under rank-one perturbations, and which states that every non-zero vector is almost surely cyclic for the singular part of the operator. The numerical work presented is rather simplistic compared to other numerical approaches in the field. Further, this method eliminates effects due to boundary conditions. While we carried out the numerical experiment almost exclusively in the case of the two dimensional discrete random Schrödinger operator, we include the setup for the general class of Anderson models called Anderson-type Hamiltonians. We track the location of the energy when a wave packet initially located at the origin is evolved according to the discrete random Schrödinger operator. This method does not provide new insight on the energy regimes for which diffusion occurs.

Introduction

In 1958 P. W. Anderson suggested that sufficiently large impurities in a semiconductor could lead to spatial localization of electrons, called Anderson localization. Although many physicists consider the problem solved, many mathematical questions with striking physical relevance remain open. The field has grown into a rich mathematical theory (see and for the study of different Anderson models; for refined notions of Anderson localization see and ).

We consider the discrete random Schrödinger operator in dimension d, given by the self-adjoint operator

H_ω = −∆ + Σ_{i∈Z^d} ω_i ⟨ · , δ_i⟩ δ_i

on l²(Z^d). Here δ_i ∈ l²(Z^d) assumes the value 1 in the i-th entry, i = (i_1, i_2, . . . , i_d), and zero in all other entries (see equation (4) below for an example in two dimensions). The random variables ω_i are i.i.d. with uniform distribution in [−c, c], i.e. distributed according to the probability measure P = Π_i (2c)^{−1} χ_{[−c,c]}(x_i) dx_i. The Laplacian describes a crystal with atoms located at the integer lattice points Z^d. Adding the random part can be interpreted as having the atoms not perfectly on the lattice points, but randomly displaced.

This paper pertains to one of the "weaker" definitions of localization, which is equivalent to the property that the spectrum of the operator is almost surely purely singular. For d ≥ 2, localization is proved analytically for disorders c above a certain dimension-dependent threshold c_d. The first result of this type can be found in . Simpler proofs and better constants can be found in as well as . Diffusion may hence only occur for small disorder c. This is precisely the question we are addressing: does the two dimensional discrete random Schrödinger operator exhibit Anderson localization for small disorder with non-zero probability?

The main contribution of this paper is the introduction of a new numerical approach, and its implementation for the two dimensional discrete random Schrödinger operator. This application supports the following conjecture, which overthrows the widely spread belief that localization takes place for random disorders of any strength.

Conjecture 1.1 (Delocalization conjecture). For disorder c ≲ 0.7, the two dimensional discrete random Schrödinger operator does not exhibit Anderson localization with positive probability, in the sense that it has non-zero absolutely continuous spectrum with positive probability. In particular, we do not have dynamical localization with positive probability for small disorder.
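To fix notation, here is a minimal sketch that assembles a finite L × L truncation of the operator H_ω just defined as a dense matrix. The function name, the truncation to a finite grid, and the convention −∆ = 4·Id minus the hopping terms are our own choices; the interval [−c, c] for the disorder is inferred from the density (2c)⁻¹ in the text.

```python
import numpy as np

def random_schrodinger_2d(L, c, rng):
    """Finite L x L section of H_w = -Laplacian + random diagonal potential.

    The potential entries are i.i.d. uniform on [-c, c] (an assumption
    matching the density (2c)^{-1} in the text).
    """
    n = L * L
    H = np.zeros((n, n))
    idx = lambda i, j: i * L + j          # flatten the grid point (i, j)
    for i in range(L):
        for j in range(L):
            k = idx(i, j)
            # diagonal of -Laplacian plus the random displacement
            H[k, k] = 4.0 + rng.uniform(-c, c)
            # hopping terms of -Laplacian to the four nearest neighbors
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < L and 0 <= jj < L:
                    H[k, idx(ii, jj)] = -1.0
    return H
```

On the finite section, boundary rows simply lack the neighbors outside the grid; the method of the paper avoids such boundary effects, so this sketch is only meant to make the operator concrete.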
While this conjecture is based on deep analytical results, we include an explicit statement of the main tool, see Corollary 3.3, so that the applied numerical Sections 4 through 7 are essentially self-contained. We would like to point out the important feature of this Corollary which makes a numerical experiment feasible: it suffices to track the evolution (under the random Hamiltonian) of just one vector!

In Section 2, we introduce Anderson-type Hamiltonians, a very general notion of Anderson model which includes many of those studied in the literature. The methods described within this paper can be extended to most Anderson-type Hamiltonians. We further explain the notion of singular and absolutely continuous spectrum, as well as the corresponding parts of the operator. In Section 3 we state an improvement, Theorem 3.1, of a result by Jaksic and Last concerning the cyclicity of vectors for the Anderson-type Hamiltonian. We further explain how analytic results are used to prove the main tool, Corollary 3.3. Section 4 is devoted to a description of the numerical experiment. A summary of numerical results and the conclusions can be found in Section 5. In Section 6 we verify the performance of the method in many examples, e.g. for large disorder, for the free/unperturbed two dimensional discrete Schrödinger operator, and for the one dimensional discrete random Schrödinger operator. We further include an investigation of the distribution of energies of a wave packet initially located at the origin after repeated application of the random operator. We briefly remark on computing and memory requirements in Section 7.

Acknowledgements. The author would like to thank A. Poltoratski for suggesting the initial mathematical idea of the experiment, and G. Berkolaiko for the insightful discussions concerning many aspects of this research as well as for reading and making useful comments on most parts of this paper. Further, she would like to thank J.
Kuehl for running initial experiments using a code of his, as well as for being such a wonderful husband.

Anderson-type Hamiltonians, discrete Schrödinger operator

While the numerical experiment within pertains to the discrete random Schrödinger operator, we define so-called Anderson-type Hamiltonians, which were first introduced in . The advantage of making this general definition is that this notion generalizes many Anderson models discussed in the literature. In particular, the method described within this paper can be applied to many other interesting Anderson models.

For n ∈ N consider the probability space Ω_n = (R, B, µ_n), where B is the Borel sigma-algebra on R and µ_n is a Borel probability measure. Let Ω = Π_{n=0}^∞ Ω_n be a product space with the probability measure P on Ω introduced as the product measure of the corresponding measures on Ω_n on the product sigma-algebra A. The elements of Ω are points in R^∞, ω = (ω_1, ω_2, ...) for ω_n ∈ Ω_n. Let H be a separable Hilbert space and let {ϕ_n}_{n∈N} be a countable collection of unit vectors in H. For each ω ∈ Ω define an Anderson-type Hamiltonian on H as a self-adjoint operator formally given by

H_ω = H + V_ω,   V_ω = Σ_n ω_n ⟨ · , ϕ_n⟩ ϕ_n.    (1)

Except for degenerate cases, the perturbation V_ω is almost surely a non-compact operator. It is hence not possible to apply results from classical perturbation theory to study the spectra of H_ω, see e.g. and . In the case of an orthogonal sequence {ϕ_n}, this operator was studied in and . Probably the most important special case of an Anderson-type Hamiltonian is the discrete random Schrödinger operator on l²(Z^d).

Singular and absolutely continuous parts of normal operators

Recall that an operator in a separable Hilbert space is called normal if T*T = TT*. By the spectral theorem, a normal operator T is unitarily equivalent to M_z, multiplication by the independent variable z, in a direct sum of Hilbert spaces, where µ is a scalar positive measure on C.
The measure µ is called a scalar spectral measure of T. If T is a unitary or self-adjoint operator, its spectral measure µ is supported on the unit circle or on the real line, respectively. Via the Lebesgue-Radon decomposition, µ can be decomposed into singular and absolutely continuous parts, µ = µ_s + µ_ac. The singular component µ_s can be further split into singular continuous and pure point parts. For unitary or self-adjoint T we denote by T_ac the restriction of T to its absolutely continuous part, i.e. T_ac is unitarily equivalent to M_t on ⊕∫ H(t) dµ_ac(t). Similarly, we define the singular, singular continuous and pure point parts of T, denoted by T_s, T_sc and T_pp, respectively.

Theoretical background and the main tool

Let us explain how the main theoretical tool used to indicate delocalization, Corollary 3.3, is deduced from the following theorem. Let us remind the reader that we use the term delocalization to mean the existence of absolutely continuous spectrum almost surely, and that such delocalization implies dynamical delocalization. A sequence {ϕ_n} ⊂ H is called a representing system if every vector ϕ ∈ H can be represented as a series ϕ = Σ a_n ϕ_n that converges with respect to the norm of H. Note that bases are representing systems. However, unlike in the case of a basis, a representation of a vector need not be unique.

Theorem 3.1. Let H_ω be the Anderson-type Hamiltonian introduced in equation (1). Suppose that the probability measure P is a product of absolutely continuous measures and {ϕ_n} is a representing system in H. Assume that there exists a vector ψ ∈ H that is cyclic for H_ω, P-almost surely. Then any non-zero ϕ ∈ H is cyclic for H_ω, P-almost surely.

It is well-known that if an Anderson-type Hamiltonian H_ω is purely singular almost surely, then it is cyclic almost surely. Equivalently, if such an operator is not cyclic with positive probability, then there are energies which are diffusive with non-zero probability.
A proof of almost-sure cyclicity of the singular part (H_ω)_s and almost-sure cyclicity of certain specific vectors can be found in , and for the discrete Schrödinger operator in . Together with the latter theorem and the fact that the δ_n form a basis of l²(Z^d), we obtain the following result. The following two statements were formulated by A. Poltoratski (private communications).

Corollary 3.2. Assume the hypotheses of Theorem 3.1. If Anderson localization occurs, then the orbit of any non-zero vector under the operator H_ω is almost surely dense in the Hilbert space H. In other words, fix 0 ≠ f ∈ H. If H_ω has purely singular spectrum almost surely, then for all v ∈ H with norm ‖v‖_H = 1, the distance from the vector v to the span of the orbit of f under the operator is zero P-almost surely.

More specifically, we will apply the following immediate consequence.

Remark 3.4. The converse of Corollary 3.3 is not true. Hence we cannot draw any conclusions if the distance between a fixed (unit) vector and the subspace generated by the orbit of another vector tends to zero. In particular, we cannot conclude that there must be localization. Even if we show (3) for many or 'all' vectors (instead of just δ_11), it could be possible that the absolutely continuous part has multiplicity one and that δ_00 is cyclic, that is, l²(Z²) = clos span{H^k_ω δ_00 : k ∈ N ∪ {0}}.

Method of numerical experiment

Let us explain the computational approach used to indicate diffusion. Consider the discrete Schrödinger operator given by (1) and (2) with random variable ω distributed according to the hypotheses of Corollary 3.3. Fix the vectors δ_00 ∈ l²(Z²) and δ_11 ∈ l²(Z²), that is, the indicator vectors of the lattice points (0,0) and (1,1). Notice that D^n_{ω,c} := dist(δ_11, span{H^k_ω δ_00 : k = 0, 1, 2, . . . , n}) simply describes the distance between the unit vector δ_11 and the subspace obtained by taking the closure of the span of the vectors δ_00, H_ω δ_00, H²_ω δ_00, . . .
, H^n_ω δ_00. By virtue of Corollary 3.3, we obtain delocalization if we can find c > 0 for which (3) happens with non-zero probability.

In the numerical experiment, we initially fix c and fix one computer-generated realization of the random variable ω (with distribution in accordance with the hypotheses of Corollary 3.3). We then calculate the distances D^n_{ω,c} for n ∈ {0, 1, 2, . . .}. In Subsection 4.2 (below) we describe the numerical approach used to compute D^n_{ω,c}. Assuming that we know D^n_{ω,c} for n = 0, . . . , 4500, let us find a lower estimate for the limit.

While those results at this point looked fairly promising, they were not yet satisfactory. Most of all, they do not provide a reliable estimate for the limit D_{ω,c}. In order to obtain such an estimate for D_{ω,c}, we re-scaled the horizontal axis in Figure 1 by a negative power n^{−a} (a power of the reciprocal, so that the horizontal axis is reversed) and approximated the resulting graph by a line. The re-scaled graph is shown in Figure 2. Subsection 4.3 contains information about the choice of the re-scaling factor and explains why, for appropriately small disorders, the graph does not decay to zero, e.g. logarithmically. The subtleties of choosing the re-scaling parameter a are the reason why we do not expect delocalization with probability one in the Delocalization Conjecture 1.1, but rather with non-zero probability. This decision is explained further in Subsection 4.3.

The value of D_{ω,c} is estimated by the y-intercept y_{ω,c} of the approximating line. Since the re-scaled graphs in Figure 2 were sometimes rather noisy (e.g. a line through the steeper sections of the graph has a lower y-intercept), we decided to include a lower estimate L_{ω,c} for y_{ω,c}, given by the minimum y-intercept of the lines passing through any two consecutive points. Summarizing the last few steps, we have

Finally, we repeat the experiment for many values of c and many computer-generated realizations of the random variable ω.
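The re-scaling and line-fitting step just described can be sketched as follows. This is a hypothetical helper, not the authors' code; it scans a mesh of exponents a (the coarse mesh of Subsection 4.3 by default) and returns the exponent with the smallest least-squares error together with the y-intercept of the fitted line.

```python
import numpy as np

def intercept_estimate(D, a_grid=None):
    """Re-scale the horizontal axis by n**(-a), fit a line, and return
    (best a, y-intercept). The y-intercept estimates the limit D_{w,c}."""
    if a_grid is None:
        a_grid = np.arange(0.05, 0.86, 0.05)   # coarse mesh from the text
    n = np.arange(1, len(D) + 1)
    best = None
    for a in a_grid:
        x = n ** (-a)                           # reversed axis: n -> n^(-a)
        coeffs, res, *_ = np.polyfit(x, D, 1, full=True)
        err = res[0] if res.size else 0.0       # least-squares error of the fit
        if best is None or err < best[0]:
            best = (err, a, coeffs[1])          # coeffs[1] is the y-intercept
    return best[1], best[2]
```

For data that really behave like D_n ≈ D_∞ + const · n^{−a}, the intercept recovers D_∞; the cautious lower estimate L_{ω,c} of the text would additionally scan the intercepts of lines through consecutive points.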
Concerning the different realizations, throughout we took the minimum of y_{ω,c} and L_{ω,c} over all the different computer-generated realizations of ω. Roughly, the goal is to show that for some c > 0, the limits D_{ω,c} are bounded away from zero for many realizations ω.

Figure 2 (caption): The re-scaled graph of Figure 1 using the best exponent a = 0.2. The y-intercept of the approximating line is the estimate y_{ω,c} of the value for D_{ω,c}.

When to fix the realization ω

In the experiments described here, we had fixed c and ω at the beginning. For fixed c = 0.1, we also computed several cases in which we chose a different realization ω each time we applied the random operator. In other words, for c = 0.1 we fix countably many realizations of ω, each independently distributed and each in accordance with Corollary 3.3. Let those realizations be denoted by ω_i, i ∈ N. Then we compute the distance between δ_11 and the closure of the span of the vectors δ_00, H_{ω_1} δ_00, H_{ω_2}(H_{ω_1} δ_00), H_{ω_3}(H_{ω_2} H_{ω_1} δ_00), etc. The results obtained from this setup agreed very well with the ones described in Section 5 below.

Computing the distance D^n_{ω,c}

For fixed n and ω, let us briefly explain the computational approach to obtain D^n_{ω,c}. The main idea is to apply the Gram-Schmidt orthogonalization process in order to recursively compute D^n_{ω,c}. Take m_0 = δ_00 and D^0_{ω,c} = 1 (since δ_00 and δ_11 are orthonormal). In order to compute D^{n+1}_{ω,c}, assume we have an orthonormal basis {m_0, m_1, m_2, . . . , m_n} for the linear subspace X_n := span{H^k_ω δ_00 : k = 0, 1, 2, . . . , n} of l²(Z²). Let us find an orthonormal basis for X_{n+1}. According to the Gram-Schmidt orthogonalization process, we define m_{n+1} to be the unit vector in the direction of H_ω m_n − Σ_{l=0}^{n} ⟨H_ω m_n, m_l⟩ m_l. The following proposition says that all but the last two terms in the sum are zero. We learned this fact and its proof from a conversation with M. Hastings.
This simplification reduces the required memory by a factor of order n (from O(n³) to O(n²)). Although this result seems to be well-known in the physics community, we include the short proof by mathematical induction on n.

Assume that the statement of the proposition is true for some n − 1 ≥ 2. It remains to show that the statement is true for n. Assume that we have computed an orthonormal sequence m_0, m_1, . . . , m_n. For l ≤ n − 2, it suffices to show that ⟨H_ω m_n, m_l⟩ = 0. By following the argument for the base case, we obtain ⟨H_ω m_n, m_l⟩ = ⟨m_n, H_ω m_l⟩ = ⟨m_n, m_{l+1} + ⟨H_ω m_l, m_l⟩ m_l⟩ (up to the normalization of m_{l+1}). The latter expression equals zero, because l ≤ n − 2 and by the orthogonality of m_0, m_1, . . . , m_n.

According to the latter proposition, we take m_{n+1} to be the unit vector in the direction of

H_ω m_n − ⟨H_ω m_n, m_n⟩ m_n − ⟨H_ω m_n, m_{n−1}⟩ m_{n−1}.

Now, the distance D^{n+1}_{ω,c} of the vector δ_11 to the subspace X_{n+1} equals the Euclidean norm D^{n+1}_{ω,c} = ‖e_{n+1}‖_2 with e_{n+1} = δ_11 − P_{n+1} δ_11, where P_{n+1} denotes the orthogonal projection from l²(Z²) onto X_{n+1}. A little more analysis allows us to simplify the latter expression. The following expression is closely related to the dimensionless scaling parameter that occurs in the so-called Thouless criterion.

We use this same recursive definition for e_n in the inner product and the fact that the m_n form an orthonormal sequence to obtain e_{n+1} = e_n − ⟨(e_{n−1} − ⟨e_{n−1}, m_n⟩ m_n), m_{n+1}⟩ m_{n+1} = e_n − ⟨e_{n−1}, m_{n+1}⟩ m_{n+1}. We proceed to replace e_n by its recursive definition and so on, until we obtain e_{n+1} = δ_11 − Σ_{l=2}^{n+1} ⟨δ_11, m_l⟩ m_l. The proposition follows from equation (6) and the Pythagorean Theorem, since the m_l form an orthonormal sequence and all of them are orthogonal to e_{n+1}. Also notice that ‖δ_11‖_2 = 1.
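A finite-dimensional sketch of this recursion follows (hypothetical function names; the matrix H stands in for a finite truncation of H_ω). The three-term recurrence and the Pythagorean identity D² = 1 − Σ_l ⟨δ_11, m_l⟩² are exactly the simplifications derived above.

```python
import numpy as np

def distances(H, start, target, nmax):
    """D^n = dist(target, span{H^k start : k <= n}) for n = 0, ..., nmax.

    Uses the three-term (Lanczos-type) recurrence: only the projections
    onto m_n and m_{n-1} need to be subtracted at each step.
    """
    m_prev = None
    m = start / np.linalg.norm(start)
    proj_sq = np.dot(target, m) ** 2
    D = [np.sqrt(max(1.0 - proj_sq, 0.0))]    # target is assumed to be a unit vector
    for _ in range(nmax):
        v = H @ m
        v -= np.dot(v, m) * m                 # subtract the projection onto m_n
        if m_prev is not None:
            v -= np.dot(v, m_prev) * m_prev   # ... and onto m_{n-1}
        norm = np.linalg.norm(v)
        if norm < 1e-12:                      # Krylov space became invariant
            D.append(D[-1])
            continue
        m_prev, m = m, v / norm
        proj_sq += np.dot(target, m) ** 2     # Pythagorean update of the distance
        D.append(np.sqrt(max(1.0 - proj_sq, 0.0)))
    return D
```

Since each step touches only the two most recent basis vectors, the memory cost is two grid-sized arrays rather than the full orthonormal basis — the O(n³) → O(n²) saving mentioned above.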
Choice of the re-scaling parameter

For each fixed c and ω, the re-scaling exponent a is chosen so that the re-scaled graph of the distance function (see Figure 2) satisfies the least square property; that is, the error when approximating the graph by a line is minimal. With this exponent we then find the corresponding linear approximation for the re-scaled distance function.

We include an extract of the table of best re-scaling exponents a which satisfy the least square property for our data. As the values for y_{ω,c} did not depend very sensitively on the precise value of a, we used a rather coarse mesh a = 0.05 : 0.05 : 0.85 and refined using a = 0.01 : 0.01 : 0.05 if the best re-scaling exponent was below 0.05. Each entry in the table corresponds to a different realization of the random variable ω. The entry N/A indicates that for this particular realization, even the re-scaling parameter a = 0.02 yields a concave graph. For this realization, we do not obtain any information. For values of c ≳ 1.2, many realizations did not yield a reasonable best-fit parameter a. No statement can be made for such disorders. Since we can only investigate finitely many realizations, and one of the realizations for a fairly small value of c = 0.15 yielded an inconclusive result, we decided to conjecture delocalization with non-zero probability in the Delocalization Conjecture 1.1, rather than almost surely.

The existence of a positive re-scaling factor implies that the graph in Figure 1 will not decay to zero, e.g. logarithmically. Indeed, using a re-scaling factor smaller than the one in the table results in a 'globally concave' graph for the distances D^n_{ω,c}. In this case, the y-intercept of the line lies below the value expected for D^∞_{ω,c}.

Conclusions

As mentioned in Section 4, for a fixed c we chose many realizations ω.
We took the minimum of the resulting quantities y_{ω,c} and L_{ω,c} (the y-intercept of the approximating line and the minimum y-intercept of the lines passing through any two consecutive points, respectively). Figure 3 shows L_{ω,c} and y_{ω,c} as functions of c. Being rather cautious, we say that a negative value for y_{ω,c} indicates that the orbit of δ_00 may not span the whole space. Hence the final conclusion of this numerical experiment is precisely the Delocalization Conjecture 1.1. As explained in the remark following Corollary 3.3, we cannot conclude localization even if y_{ω,c} < 0. Therefore, the experiments do not imply localization for larger values of c.

Further supporting the credibility of the method and the numerical experiments

Apart from the usual tests (the program running stably, checks of all subroutines, many verifications for small n), we have also tested the code and its versions on other models: the free/unperturbed two dimensional Schrödinger operator and the one dimensional random Schrödinger operator. We briefly summarize the results in order to provide verification for the correctness of the method and code. Further, we provide information on the energy distribution in terms of the distance from the origin of the evolution of the vector δ_00, describing how a wave packet which was initially located at the origin changes as time progresses.

Free discrete two dimensional Schrödinger operator

When we apply the free discrete Schrödinger operator H = H_0 to the vector δ_00, it immediately becomes clear that Hδ_00, as well as all vectors H^n δ_00, n ∈ N ∪ {0}, are symmetric with respect to the origin. In dimension d = 2, it is not hard to see that the distance between δ_11 and the orbit of δ_00 under H is at least √3/2 ≈ 0.8660.
Indeed, we have dist(δ_11, clos span{H^n δ_00 : n ∈ N ∪ {0}}) > min

In the experiments for the free discrete two dimensional Schrödinger operator, we obtained a y-intercept of the approximating line of approximately 0.8867. The re-scaled graph of distances still had a very convex shape, so the actual distance as n → ∞ would be bigger. In fact, we have extracted from Figure

Verifying localization for the one dimensional random Schrödinger operator

Consider the discrete random Schrödinger operator in one dimension, see e.g. equations (1) and (2) with d = 1. For this operator, it is well known that localization occurs for random disorders of all strengths (in particular, for small values of c) and at all energies. We have adapted and applied this computational approach to the discrete random Schrödinger operator in one dimension. Figure 5 shows a typical re-scaled graph of the distance D^n_{ω,c} = dist(δ_1, span{H^k_ω δ_0 : k = 0, 1, 2, . . . , n}) for n = 3000, 3001, 3002, . . . , 15000 for the disorder c = 0.05. With a re-scaling exponent of a = 0.09, the graph of D^n_{ω,c} is still concave, so that the y-intercept of the approximating line is an upper estimate of the limit D_{ω,c}. Therefore we have D_{ω,c} < y_{ω,c} = −0.0543. While we know by the remark following Corollary 3.3 that this experiment does not allow us to conclude that there is localization, the result still provides support for the credibility of the method at hand as well as of the numerical design.

Diffusion of energy for small values of c

We present the distribution of energies of a wave packet initially located at the origin as the random operator is repeatedly applied. By distribution of energies, we mean how much of the energy is located at which 'distance' from the origin. For example, in order to obtain how much energy of the vector m_k (defined in Subsection 4.2) is at 'distance' 2 from the origin, we use the elements of m_k which are located on the diamond on which |i| + |j| equals 2.
The energy E(2, k) of the vector m_k at 'distance' 2 from the origin is equal to the Euclidean norm over the elements in this diamond. In general, we have

E(l, k) = ( Σ_{|i|+|j|=l} |(m_k)_{i,j}|² )^{1/2}

for the energy E(l, k) of the vector m_k at 'distance' l from the origin. Here (m_k)_{i,j} refers to the (i, j)-entry of the matrix m_k.

By small modifications of our programs, we have extracted the location of the energy as the vector δ_00 evolves under the random Hamiltonian for the disorder values c = 0.1, c = 1 and c = 5; see Figure 8. In accordance with our Delocalization Conjecture 1.1, the energy for small disorder is far away from the origin, whereas it is concentrated close to the origin for large disorder. Figures 6 and 7 show the energy distribution of H^n_ω δ_00 for n = 2999 for values of c ranging from c = 0.1 to c = 1. Again, the fact that the energy for small disorder is far away from the origin, whereas it shifts much closer to the origin as the disorder increases, supports the Delocalization Conjecture 1.1. Both figures are the averages obtained from two realizations for each value of c.

Figure 6 (caption): For c = 0.05 we show E(l, n), i.e. the evolution of the energy distribution of H^n_{ω,c} δ_00 for the diamonds at distance l from the origin. Notice that the energy travels far out from the origin (the diagonal is the farthest possible).

Precision

The results are not an artifact of numerical errors (e.g. round-off errors that accumulate over time). Indeed, we compared our results with those of a double precision computation. The results agreed very well.

On computing and memory requirements

The implementation uses memory rather efficiently, so that the numerical experiments were mainly limited by the length of the computation. On the rather small machines available to us, it took 8 1/2 hours to complete one realization for one value of c.
Since we need to include many realizations of the random variable and many values of c, it took a considerable time to finish all the computations, even on several machines. In order to compute the D^n_ω described in Section 4, our code requires order n² (i.e. O(n²)) memory. Indeed, in order to carry out the Gram-Schmidt orthogonalization process described in Subsection 4.2, we must store matrices of size O(n) × O(n). The corresponding code for the d-dimensional discrete random Schrödinger operator will require memory of order O(n^d). The random Schrödinger operator on the (dyadic) tree uses memory of O(2^n). With the resources available to us, memory restrictions would only allow us to compute up to n ≈ 27 for the tree. In this case, we cannot produce sufficient data to support the fact that the discrete random Schrödinger operator on the tree does indeed exhibit delocalization.
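The energy profile E(l, k) of Section 6 can be sketched as follows. This is a hypothetical helper, not the authors' code; `m` is a finite matrix holding the entries of m_k around the origin, and the 'diamonds' are the level sets of the l¹-distance |i| + |j|.

```python
import numpy as np

def energy_profile(m, center):
    """E(l) = Euclidean norm of the entries of m on the diamond |i|+|j| = l
    around `center` (the origin of the wave packet)."""
    ci, cj = center
    I, J = np.indices(m.shape)
    dist = np.abs(I - ci) + np.abs(J - cj)   # l^1 distance of each entry
    lmax = dist.max()
    return np.array([np.linalg.norm(m[dist == l]) for l in range(lmax + 1)])
```

Since the diamonds partition the grid, the l² norm of the profile equals the l² norm of m_k; for a unit vector m_k, plotting E(·, k) against l for successive k reproduces pictures like Figure 6.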
#![crate_name = "tiny_http"]
#![crate_type = "lib"]
#![forbid(unsafe_code)]

extern crate log;

extern crate ascii;
extern crate chrono;
extern crate chunked_transfer;
extern crate url;

use std::io;
use std::net;
use std::net::{Shutdown, TcpListener, TcpStream};
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering;
use std::sync::Arc;

use client::ClientConnection;
use client::ClientConnectionError;
use util::RefinedTcpStream;

pub use common::{HTTPVersion, Header, HeaderField, Method, StatusCode};
pub use request::{ReadWrite, Request};
pub use response::{Response, ResponseBox};

mod client;
mod common;
mod request;
mod response;
mod util;

/// A listening HTTP server. Cloned handles share the same listener and
/// shutdown flag, so `accept` can be called from several threads.
pub struct Server {
    listener: TcpListener,
    is_shutting_down: Arc<AtomicBool>,
}

impl Server {
    /// Binds a new server to the given address (e.g. `"127.0.0.1:8080"`).
    pub fn new(addr: String) -> Result<Server, io::Error> {
        let listener = net::TcpListener::bind(addr)?;
        Ok(Server {
            listener,
            is_shutting_down: Arc::new(AtomicBool::new(false)),
        })
    }

    /// Returns a second handle to the same server, sharing the shutdown flag.
    pub fn try_clone(&self) -> Result<Server, io::Error> {
        let listener = self.listener.try_clone()?;
        Ok(Server {
            listener,
            is_shutting_down: self.is_shutting_down.clone(),
        })
    }

    /// Blocks until the next client connects, or fails if the server is
    /// shutting down.
    pub fn accept(&self) -> Result<ClientConnection, AcceptError> {
        err_if_false(
            !self.is_shutting_down.load(Ordering::Relaxed),
            AcceptError::ShuttingDown(),
        )?;

        let (socket, _) = self.listener.accept().map_err(AcceptError::Accept)?;
        let (read_closable, write_closable) = RefinedTcpStream::new(socket);
        ClientConnection::new(write_closable, read_closable)
            .map_err(AcceptError::ClientConnection)
    }

    /// Sets the shutdown flag, then unblocks any thread waiting in `accept`.
    fn shutdown(&mut self) -> Result<(), ShutdownError> {
        self.is_shutting_down.store(true, Ordering::Relaxed);

        let addr = self.listener.local_addr().map_err(ShutdownError::LocalAddr)?;

        // Connect briefly to ourselves to unblock the accept thread
        let stream = TcpStream::connect(addr).map_err(ShutdownError::Connect)?;
        stream.shutdown(Shutdown::Both).map_err(ShutdownError::Shutdown)
    }
}

#[derive(Debug)]
pub enum AcceptError {
    Accept(io::Error),
    ClientConnection(ClientConnectionError),
    ShuttingDown(),
}

#[derive(Debug)]
pub enum ShutdownError {
    LocalAddr(io::Error),
    Connect(io::Error),
    Shutdown(io::Error),
}

impl Drop for Server {
    fn drop(&mut self) {
        let _ = self.shutdown();
    }
}

/// Returns `Ok(())` when `value` is true, otherwise the supplied error.
fn err_if_false<E>(value: bool, err: E) -> Result<(), E> {
    if value {
        Ok(())
    } else {
        Err(err)
    }
}
Isolation of Gemmata-Like and Isosphaera-Like Planctomycete Bacteria from Soil and Freshwater ABSTRACT New cultured strains of the planctomycete division (order Planctomycetales) of the domain Bacteria related to species in the genera Gemmata and Isosphaera were isolated from soil, freshwater, and a laboratory ampicillin solution. Phylogenetic analysis of the 16S rRNA gene from eight representative isolates showed that all the isolates were members of the planctomycete division. Six isolates clustered with Gemmata obscuriglobus and related strains, while two isolates clustered with Isosphaera pallida. A double-membrane-bounded nucleoid was observed in Gemmata-related isolates but not in Isosphaera-related isolates, consistent with the ultrastructures of existing species of each genus. Two isolates from this study represent the first planctomycetes successfully cultivated from soil.