Why Peter Daou’s DDoS Claim Is Dubious — and Why It Matters

Anthony Citrano · Sep 9, 2017

Summary: the timing of the crash and the continued glaring vulnerability of Verrit’s web server make poor planning a far more likely explanation.

UPDATE 9/12/2017: I’ve edited this post to reflect new information I received from an engineer yesterday. Verrit’s original IPv4 address was hit with a minor UDP NTP amplification attack shortly after their launch. Peter also shared this email today from his provider, which provides similar information. I remain extremely skeptical that Verrit was the target, and the new information shows with certainty that it was not “sophisticated and persistent”. Quite the contrary; it wasn’t persistent because the site has remained up since it changed IPs, and it wasn’t sophisticated because, among other reasons, the attack bandwidth is such that it could have been mounted by one person with a single 10 megabit connection (a back-of-envelope sketch at the end of this post illustrates the arithmetic). However, given the way NTP amplification works, it still could have been enough to cripple a Linode slice/VPS and possibly even saturate its inbound port(s), especially in combination with the massive flood of legitimate traffic that was inbound at the same time. Without a view into exactly how many clicks were generated toward the Verrit server, it’s hard to determine what percentage of the server’s traffic was legitimate.

It might sound like a dodge to say “we’re both right”, but we really are. Peter is right about the NTP attack. My facts (if not my implication) all remain technically sound and well-founded. This is a textbook example of a bungled site launch, and I think if I got Peter drunk enough he’d admit that. Even if Verrit really was a target that evening, it could easily have been prevented with the most basic digital groundwork. Also, my key point becomes more painfully obvious by the hour: Verrit is not being targeted by malicious forces intent on silencing “the 65 million”, and I continue to believe that mythos forms an unfortunately large part of Peter’s narrative.

However, in my original post, I strongly implied that the Verrit server fell victim entirely to legitimate traffic. It’s now clear that was not the case. The NTP event definitely contributed to the server’s troubles, and I apologize to Peter and Leela for implying otherwise. While my facts and key points were correct, I’d nonetheless like to atone for the implication itself. So, I’ve written a check for $500 to the International Rescue Committee, the non-profit Leela mentioned earlier in the week.

Recently, I’ve earned myself a bit of a reputation for calling out tech BS that emanates from certain political activists on Twitter. For the last few months, I’ve been occasionally pestering Louise Mensch about her wilder claims, especially those based entirely on humorously flawed interpretations of technical “clues”. Her fans — allowing their confirmation biases to feast on any and every morsel of dreck served up about Trump — react with passion, accusing me of being a Russian agent or a “Trump bro”. So I want to start by explaining what motivates me to be a gadfly about this stuff, and hopefully pre-empt a few “omg why do you care so much?” questions in the process.

I view Donald Trump as an existential threat to the Great American Experiment, and when his detractors are provably full of shit, it hands him and his enablers everything they need to bury the terrifying truth under the weight of our own lies.
When these lies are exposed — and most of them will be — he can say to America, “See?! More fake news!” You can accuse me of being politically motivated — and I suppose, in the way I’ve described above, I am — but you can’t call me an enemy of the cause. I’ve criticized Trump for years, worked for Bill Clinton, raised a lot of money for Barack Obama, and voted for Hillary Clinton last year. If you truly care about America’s future, you should want a rigorous public dialogue based entirely upon facts, scrubbed of lies, innuendo, and partisan victimhood narratives. And you should want to see an equally honest and impartial judicial process, with the President and his comrades supported by the best possible defense. It’s precisely because I want America to survive that I find hyperbole so odious.

Now that we have that out of the way, let’s explore the claim that Verrit was targeted by a “sophisticated and persistent” DDoS attack.

The best place to start is with a basic explanation of a DDoS attack. There are many types, and they’re beyond the scope here. Cloudflare has an excellent explainer if you’re so inclined, and their CEO has written extensively about their technical approach. In a nutshell, DDoS means hitting the target server with so many requests that it runs out of resources to handle them. This results in legitimate traffic going unanswered. Anonymous, the hacktivist collective, is well-known for using this technique to cripple target organizations. Their attack on the Daily Stormer is a recent example. You’ll notice they were targeting IP addresses. Remember this for later.

There are myriad ways to attack a site, and not all of them require a direct IP address. But by far the easiest is directly via its IP address. Armed with that, knocking a site offline is fairly trivial. That’s why companies like Fastly boast about their ability to cloak server IPs from even the most sophisticated attackers. With no IP, life gets much harder for the bad guys.

When you type a domain name into your web browser, the browser does a “lookup” with DNS (the Internet’s “phone book”) to find out which IP address to connect to. If a site is behind a protective network like Cloudflare or Fastly (this is called reverse proxying), then — assuming the site owner has carefully configured their DNS — it’s much harder to discover the web server’s IP. That’s because the IP that’s returned for lookups is an IP for the reverse proxy (the service’s network), and not the actual web server. The use of something called Anycast diffuses those special IP addresses across thousands of high-bandwidth servers around the globe. That makes mounting an attack many orders of magnitude more difficult, because in order to attack one customer, an attacker has to take on the service’s entire global network. This disparity is hard to overstate: it’s like the difference between beating up an out-of-shape drunk and taking on an entire U.S. Army brigade.

There are many other reasons why this diffusion is good: perhaps the most impactful is edge caching, which serves content (videos, images, web pages, etc.) from a server close to the end user. A video that’s going viral can be served to users in Tokyo from an edge server in Tokyo, London in London, etc. This dramatically improves the user experience (much faster load times) and substantially reduces the load on the origin server, because each edge server only needs to ask the origin server for the file once, then it will serve the file on its behalf to users in that region.
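To make the lookup mechanics concrete, here is a minimal sketch (mine, not part of the original article) of the browser-style DNS resolution described above. The domain is a placeholder; for a site correctly fronted by Cloudflare or Fastly, the addresses printed belong to the proxy's Anycast network, not to the origin server.

import socket

# Resolve a hostname the same way a browser's DNS lookup does.
# "example.com" is a placeholder domain; for a site behind a reverse
# proxy, the addresses printed here are the proxy's, not the origin's.
for info in socket.getaddrinfo("example.com", 443, proto=socket.IPPROTO_TCP):
    family, _, _, _, sockaddr = info
    print(sockaddr[0])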
There are many ways for attackers to figure out a target server’s IP, but by far the easiest is for the site owner to tell them. And the silliest way to do that is with public DNS entries that point directly at the origin server, such as “mail.mysite.com” or “forum.mysite.com”. Cloudflare specifically warns customers not to make this mistake. As of this writing, Verrit’s DNS zone file is still offering up its origin server’s IP address to anyone who asks. I’m not going to publish it, but it can be discovered in ten seconds by anyone with a shred of technical know-how.

Now let’s look at the timeline. On Sunday afternoon, Hillary tweeted her Verrit endorsement. As it began spreading, the site quickly became unreachable. Many observers didn’t need a blowhard like me to tell them what was going on; all they needed was Occam’s razor, which says Verrit fell victim to bad planning, and the server simply wasn’t prepared for the crush of traffic. “Slashdot Effect” is the original term for this, but I like Reddit’s “hug of death” better because that’s what it really is — there’s so much inbound love that it kills you.

When Hillary tweeted Sunday afternoon, Verrit had absolutely no edge caching or DDoS protection in place. We know that because the IP history of the Verrit domain tells us so: (for you nerds out there, the Amazon IP is for Amazon EC2, not CloudFront)

This means, indisputably, that until sometime Monday, 100% of the clicks from Hillary’s tweet (and anyone else linking to Verrit) from anywhere in the world were all routed directly to one lonely little web server sitting in a rack in Newark, New Jersey. That’s right — every click, every page, every image — all had to be served from that one box. Launching a site that’s expected to get national attention with no DDoS protection or edge caching in place is absolutely crazy.

Peter was likely one of the first people to notice his site had shit the proverbial bed, and presumably kicked his shadowy tech team into action, spending the evening hours frantically signing up for Cloudflare and doing many other things they should have done weeks before. On Monday, he blamed the blackout on a “sophisticated and persistent DDoS attack”. The first I heard of this (and the first I heard of Verrit) was when someone, in the wee hours of Tuesday morning, retweeted this into my timeline:

So I asked, politely, how he knew. His answer was more than a little opaque:

Bored and sleepless, I did a little sniffing around. I noticed their sudden Sunday night transition to Cloudflare — and discovered what I mentioned earlier: their web server’s IP address was still completely exposed. Wide open. Anyone who wanted to DDoS them could do it in a moment. That singular fact is critical to this story: with the server’s direct IP address, if any “sophisticated attacker” wanted the site offline, the site would be offline. If it was a persistent attack targeted at Verrit, then changing the server’s IP without cloaking the new IP would provide, at best, a very brief reprieve.

But publicly pointing out the gaping security hole seemed irresponsible until I gave him a chance to plug it. I immediately asked Peter to DM me. He did, and I gave him step-by-step instructions on how to fix it. He thanked me and told me he’d pass it along to his tech team. My skepticism was noticed by a few journalists, and a couple of them wanted details.
So, half a day later, I messaged Peter again to tell him I wanted to defend my skepticism but didn’t want to provide an attack vector (however obvious this one may be). That’s when things got a little weird. Peter asked if I planned to tell the journalists what I’d figured out. I said yes, because it was the only way to justify my prior public comments. He said he was “working on a lot of issues and [was] underwater” and that I was “free to discuss whatever [I] want”. Then, in what felt like a misguided attempt to quell my meddling, he told me he was engaging “the authorities” and there were things I “may not be aware of”. Then, he blocked me. That bizarre exchange took my confidence level from 98% to 99.9%.

In conclusion, Peter is asking us to believe that “five [unnamed] engineers from two [unnamed] firms ... including [an unnamed] cyber security expert” worked to fend off a “sophisticated and persistent attack”, yet left the most obvious DDoS vectors wide open, even days after it was pointed out to them. He’s asking us to believe Verrit was the victim of a paralyzing digital assault, yet he still hasn’t bothered to lock the front door. He’s asking us to believe that dark forces are out to silence Verrit and “the 65 million”, but these forces suddenly felt guilty Monday afternoon, turned off their ion cannons, and went home.

Either that’s all true, and I’m wrong, or Hillary gave Verrit a “hug of death” and Peter’s just winging it. You be the judge.
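A footnote on the update's bandwidth claim, from me rather than the author: a minimal back-of-envelope sketch of why a single 10 megabit uplink can be enough for an NTP amplification attack. The ~556x figure is the bandwidth amplification commonly cited for NTP's monlist command; it is an assumption here, not a number from the post.

# Rough arithmetic only; 556x is the commonly cited NTP "monlist"
# bandwidth amplification factor (an assumption, not from the post).
attacker_uplink_mbps = 10                  # one consumer-grade connection
amplification_factor = 556                 # approx. monlist amplification
attack_mbps = attacker_uplink_mbps * amplification_factor
print(f"~{attack_mbps / 1000:.1f} Gbit/s of reflected UDP at the victim")
# ~5.6 Gbit/s: more than enough to saturate a small VPS's inbound port.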
// domain/entity/admin/adminnotification_inbox.go
package admin

import (
	"time"

	"github.com/gocql/gocql"
)

// AdminNotificationInbox represents a single notification in an
// administrator's inbox, keyed by a Cassandra UUID.
type AdminNotificationInbox struct {
	NotificationID gocql.UUID `json:"notification_id"`
	Severity       int        `json:"severity"`
	DateAdded      *time.Time `json:"date_added"`
	Title          string     `json:"title"`
	Description    string     `json:"description"`
	URL            string     `json:"url"`
	IsRead         int        `json:"is_read"`
	IsRemove       int        `json:"is_remove"`
}
// inject_main.go

//+build wireinject

package main

import (
	"context"
	"database/sql"
	"net/http"

	"github.com/gilcrest/go-api-basic/app"
	"github.com/gilcrest/go-api-basic/datastore"
	"github.com/gilcrest/go-api-basic/handler"
	"github.com/google/wire"
	"github.com/gorilla/mux"
	"github.com/rs/zerolog"
	"go.opencensus.io/trace"
	"gocloud.dev/server"
	"gocloud.dev/server/driver"
	"gocloud.dev/server/health"
	"gocloud.dev/server/health/sqlhealth"
)

// applicationSet is the Wire provider set for the application
var applicationSet = wire.NewSet(
	app.NewApplication,
	newRouter,
	wire.Bind(new(http.Handler), new(*mux.Router)),
	handler.NewAppHandler,
)

// goCloudServerSet is the Wire provider set for the gocloud.dev server
var goCloudServerSet = wire.NewSet(
	trace.AlwaysSample,
	server.New,
	server.NewDefaultDriver,
	wire.Bind(new(driver.Server), new(*server.DefaultDriver)),
)

// newServer is a Wire injector function that sets up the
// application using a PostgreSQL implementation
func newServer(ctx context.Context, logger zerolog.Logger, dsn datastore.PGDatasourceName) (*server.Server, func(), error) {
	// This will be filled in by Wire with providers from the provider sets in
	// wire.Build.
	wire.Build(
		wire.InterfaceValue(new(trace.Exporter), trace.Exporter(nil)),
		goCloudServerSet,
		applicationSet,
		appHealthChecks,
		wire.Struct(new(server.Options), "HealthChecks", "TraceExporter", "DefaultSamplingPolicy", "Driver"),
		datastore.NewDB,
		wire.Bind(new(datastore.Datastorer), new(*datastore.Datastore)),
		datastore.NewDatastore)
	return nil, nil, nil
}

// appHealthChecks returns a health check for the database. This will signal
// to Kubernetes or other orchestrators that the server should not receive
// traffic until the server is able to connect to its database.
func appHealthChecks(db *sql.DB) ([]health.Checker, func()) {
	dbCheck := sqlhealth.New(db)
	list := []health.Checker{dbCheck}
	return list, func() {
		dbCheck.Stop()
	}
}
OFDR with local spectrum matching method for optical fiber shape sensing

Strain measurement is the basis of shape reconstruction with an optical frequency domain reflectometer (OFDR). We propose a local spectrum matching method that determines the strain-induced wavelength shift by finding the most similar portion of the measurement spectrum. A comparison between conventional cross-correlation and local spectrum matching indicates that the proposed method effectively eliminates the fake peaks and multi-peaks of the cross-correlation. Shape sensing experiments following curvature calibration show that the two-dimensional shape reconstruction error can be as low as 1 cm over a sensing fiber length of 1 m.
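As a rough illustration of the two shift-estimation strategies the abstract contrasts, here is a minimal synthetic sketch (my construction, with made-up spectra, not the paper's data or exact algorithm): a global cross-correlation peak versus picking the most similar local window.

import numpy as np

# Synthetic stand-ins for a reference and a strain-shifted spectrum.
rng = np.random.default_rng(0)
ref = rng.standard_normal(512)                 # reference spectrum
shift_true = 7
meas = np.roll(ref, shift_true) + 0.1 * rng.standard_normal(512)

# Conventional approach: global cross-correlation peak.
xcorr = np.correlate(meas - meas.mean(), ref - ref.mean(), mode="full")
shift_xcorr = int(xcorr.argmax()) - (len(ref) - 1)

# "Local spectrum matching" idea: slide a short reference window over
# the measurement and pick the most similar position (min squared error).
win = ref[200:264]
errs = [np.sum((meas[i:i + 64] - win) ** 2) for i in range(len(meas) - 64)]
shift_local = int(np.argmin(errs)) - 200

print(shift_true, shift_xcorr, shift_local)    # all ~7 on this example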
from cryptography.fernet import Fernet  # import required by the method below


def encrypt(self, serialized):
    # Build a Fernet cipher from the configured key and encrypt the payload
    fernet = Fernet(self.encryption_cipher_key)
    return fernet.encrypt(serialized)
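For context, a hypothetical round-trip built on the same API; Fernet.generate_key() stands in for whatever self.encryption_cipher_key holds in the real class.

from cryptography.fernet import Fernet

# Hypothetical round-trip; the generated key stands in for the
# class's configured encryption_cipher_key.
key = Fernet.generate_key()
fernet = Fernet(key)
token = fernet.encrypt(b"serialized payload")
assert fernet.decrypt(token) == b"serialized payload"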
/**
 * Derive encryption and hashing keys using the input shared secret for
 * DH key exchange mode.
 *
 * @return 0 on success, -1 on failure.
 */
static int32_t kex_kdf(void)
{
	int ret = -1;
	size_t keymat1_size = 0;
	size_t keymat2_size = 0;
	struct sdo_kex_ctx *kex_ctx = getsdo_key_ctx();
	sdo_byte_array_t *shse = get_secret();
	sdo_aes_keyset_t *keyset = get_keyset();
	uint8_t *keymat1 = NULL;
	uint8_t *keymat2a = NULL;
	uint8_t *keymat2b = NULL;
	uint8_t *hmac_buf = NULL;
	uint8_t hmac_key[SHA256_DIGEST_SIZE] = {0};

	if (!shse) {
		LOG(LOG_ERROR, "Failed to get the shared secret\n");
		goto err;
	}

	keymat1_size = 1 + strnlen_s(kex_ctx->kdf_label, SDO_MAX_STR_SIZE) +
		       1 + strnlen_s(kex_ctx->sek_label, SDO_MAX_STR_SIZE) +
		       shse->byte_sz;
	keymat2_size = 1 + strnlen_s(kex_ctx->kdf_label, SDO_MAX_STR_SIZE) +
		       1 + strnlen_s(kex_ctx->svk_label, SDO_MAX_STR_SIZE) +
		       shse->byte_sz;

	keymat1 = sdo_alloc(keymat1_size);
	if (!keymat1) {
		LOG(LOG_ERROR, "Out of memory for key material 1\n");
		goto err;
	}
	keymat2a = sdo_alloc(keymat2_size);
	if (!keymat2a) {
		LOG(LOG_ERROR, "Out of memory for key material 2a\n");
		goto err;
	}
#ifdef KEX_ECDH384_ENABLED
	keymat2b = sdo_alloc(keymat2_size);
	if (!keymat2b) {
		LOG(LOG_ERROR, "Out of memory for key material 2b\n");
		goto err;
	}
#endif

	ret = prep_keymat(keymat1, keymat1_size, shse, false, false);
	if (ret) {
		LOG(LOG_ERROR, "Failed to prepare keymat1\n");
		goto err;
	}
	ret = prep_keymat(keymat2a, keymat2_size, shse, true, false);
	if (ret) {
		LOG(LOG_ERROR, "Failed to prepare keymat2a\n");
		goto err;
	}
#ifdef KEX_ECDH384_ENABLED
	ret = prep_keymat(keymat2b, keymat2_size, shse, true, true);
	if (ret) {
		LOG(LOG_ERROR, "Failed to prepare keymat2b\n");
		goto err;
	}
#endif

	hmac_buf = sdo_alloc(SDO_SHA_DIGEST_SIZE_USED);
	if (!hmac_buf) {
		LOG(LOG_ERROR, "Failed to allocate hmac buffer\n");
		goto err;
	}

	if (crypto_hal_hmac(SDO_CRYPTO_HMAC_TYPE_USED, keymat1, keymat1_size,
			    hmac_buf, SDO_SHA_DIGEST_SIZE_USED, hmac_key,
			    sizeof(hmac_key))) {
		LOG(LOG_ERROR, "Failed to derive key via HMAC\n");
		goto err;
	}
	if (memcpy_s(keyset->sek->bytes, keyset->sek->byte_sz, hmac_buf,
		     keyset->sek->byte_sz)) {
		LOG(LOG_ERROR, "Failed to copy sek key\n");
		goto err;
	}

	if (crypto_hal_hmac(SDO_CRYPTO_HMAC_TYPE_USED, keymat2a, keymat2_size,
			    keyset->svk->bytes, keyset->svk->byte_sz, hmac_key,
			    sizeof(hmac_key))) {
		LOG(LOG_ERROR, "Failed to derive key via HMAC\n");
		goto err;
	}
#ifdef KEX_ECDH384_ENABLED
	if (crypto_hal_hmac(SDO_CRYPTO_HMAC_TYPE_USED, keymat2b, keymat2_size,
			    hmac_buf, SDO_SHA_DIGEST_SIZE_USED, hmac_key,
			    sizeof(hmac_key))) {
		LOG(LOG_ERROR, "Failed to derive key via HMAC\n");
		goto err;
	}
	if (memcpy_s(keyset->svk->bytes + SDO_SHA_DIGEST_SIZE_USED,
		     keyset->svk->byte_sz - SDO_SHA_DIGEST_SIZE_USED, hmac_buf,
		     16)) {
		LOG(LOG_ERROR, "Failed to fill svk\n");
		goto err;
	}
#endif

	ret = 0;
err:
	if (hmac_buf) {
		sdo_free(hmac_buf);
	}
	if (keymat1) {
		sdo_free(keymat1);
	}
	if (keymat2a) {
		sdo_free(keymat2a);
	}
	if (keymat2b) {
		sdo_free(keymat2b);
	}
	sdo_byte_array_free(shse);
	return ret;
}
import threading

from epidag.factory.arguments import *
from epidag.factory.workshop import *

__all__ = ['get_workshop']

WorkshopDict = dict()
# Lock guarding WorkshopDict (implements the original "todo locker" notes)
_workshop_lock = threading.Lock()


def get_workshop(name):
    if not isinstance(name, str):
        raise NameError('A workshop name must be string')

    with _workshop_lock:
        try:
            ws = WorkshopDict[name]
        except KeyError:
            ws = Workshop()
            WorkshopDict[name] = ws
    return ws
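A short usage sketch of the registry above (the workshop name is hypothetical): repeated lookups under the same name return the same cached Workshop instance.

ws_a = get_workshop('demo')
ws_b = get_workshop('demo')
assert ws_a is ws_b  # same registry entry, created only once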
// src/index.ts
export * from "./logger";
export * from "./daemon/index";
export * from "./rpc/index";
export * from "./config/index";
export {PoolState} from "./api/chia/farmer/farmer";
export {TradeRecordInJson} from "./api/chia/wallet/util/trade_utils";
package main

import (
	"bytes"
	"context"
	"fmt"
	"html/template"
	"image"
	"image/jpeg"
	"io"
	"io/ioutil"
	"log"
	"net/http"
	"os"
	"path"

	"github.com/getyoti/yoti-go-sdk/v3"
	"github.com/getyoti/yoti-go-sdk/v3/dynamic"
	_ "github.com/joho/godotenv/autoload"
)

type contextKey string

var (
	sdkID              string
	key                []byte
	client             *yoti.Client
	selfSignedCertName = "yotiSelfSignedCert.pem"
	selfSignedKeyName  = "yotiSelfSignedKey.pem"
	portNumber         = "8080"

	errApplyingTheParsedTemplate = "Error applying the parsed template: "
	errParsingTheTemplate        = "Error parsing the template: "

	profileEndpoint    = "/profile"
	scenarioBuilderErr = "Scenario Builder Error: `%s`"
)

func home(w http.ResponseWriter, req *http.Request) {
	templateVars := map[string]interface{}{
		"yotiScenarioID":  os.Getenv("YOTI_SCENARIO_ID"),
		"yotiClientSdkID": os.Getenv("YOTI_CLIENT_SDK_ID")}

	t, err := template.ParseFiles("login.html")
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf(errParsingTheTemplate+err.Error()),
		)))
		return
	}

	err = t.Execute(w, templateVars)
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf(errApplyingTheParsedTemplate+err.Error()),
		)))
		return
	}
}

func sourceConstraints(w http.ResponseWriter, req *http.Request) {
	constraint, err := (&dynamic.SourceConstraintBuilder{}).WithDrivingLicence("").WithPassport("").Build()
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("Constraint Builder Error: `%s`", err),
		)))
		return
	}

	policy, err := (&dynamic.PolicyBuilder{}).WithFullName(constraint).WithStructuredPostalAddress(constraint).Build()
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("Policy Builder Error: `%s`", err),
		)))
		return
	}

	scenario, err := (&dynamic.ScenarioBuilder{}).WithPolicy(policy).
		WithCallbackEndpoint(profileEndpoint).Build()
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf(scenarioBuilderErr, err),
		)))
		return
	}

	pageFromScenario(w, req, "Source Constraint example", scenario)
}

func dynamicShare(w http.ResponseWriter, req *http.Request) {
	policy, err := (&dynamic.PolicyBuilder{}).WithFullName().WithEmail().Build()
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf(scenarioBuilderErr, err),
		)))
		return
	}

	scenario, err := (&dynamic.ScenarioBuilder{}).WithPolicy(policy).WithCallbackEndpoint(profileEndpoint).Build()
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf(scenarioBuilderErr, err),
		)))
		return
	}

	pageFromScenario(w, req, "Dynamic Share example", scenario)
}

func pageFromScenario(w http.ResponseWriter, req *http.Request, title string, scenario dynamic.Scenario) {
	sdkID := os.Getenv("YOTI_CLIENT_SDK_ID")
	key, err := ioutil.ReadFile(os.Getenv("YOTI_KEY_FILE_PATH"))
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("Unable to retrieve `YOTI_KEY_FILE_PATH`. Error: `%s`", err.Error()),
		)))
		return
	}

	client, err := yoti.NewClient(sdkID, key)
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("%s", err),
		)))
	}

	share, err := client.CreateShareURL(&scenario)
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("%s", err.Error()),
		)))
		return
	}

	templateVars := map[string]interface{}{
		"pageTitle":       title,
		"yotiClientSdkID": sdkID,
		"yotiShareURL":    share.ShareURL,
	}

	var t *template.Template
	t, err = template.ParseFiles("dynamic-share.html")
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("error parsing template: "+err.Error()),
		)))
		return
	}

	err = t.Execute(w, templateVars)
	if err != nil {
		errorPage(w, req.WithContext(context.WithValue(
			req.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("error applying the parsed template: "+err.Error()),
		)))
		return
	}
}

func errorPage(w http.ResponseWriter, r *http.Request) {
	templateVars := map[string]interface{}{
		"yotiError": r.Context().Value(contextKey("yotiError")).(string),
	}
	log.Printf("%s", templateVars["yotiError"])

	t, err := template.ParseFiles("error.html")
	if err != nil {
		panic(errParsingTheTemplate + err.Error())
	}

	err = t.Execute(w, templateVars)
	if err != nil {
		panic(errApplyingTheParsedTemplate + err.Error())
	}
}

func profile(w http.ResponseWriter, r *http.Request) {
	var err error
	key, err = ioutil.ReadFile(os.Getenv("YOTI_KEY_FILE_PATH"))
	sdkID = os.Getenv("YOTI_CLIENT_SDK_ID")
	if err != nil {
		errorPage(w, r.WithContext(context.WithValue(
			r.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("Unable to retrieve `YOTI_KEY_FILE_PATH`. Error: `%s`", err.Error()),
		)))
		return
	}

	client, err = yoti.NewClient(sdkID, key)
	if err != nil {
		errorPage(w, r.WithContext(context.WithValue(
			r.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("%s", err),
		)))
	}

	yotiOneTimeUseToken := r.URL.Query().Get("token")
	activityDetails, err := client.GetActivityDetails(yotiOneTimeUseToken)
	if err != nil {
		errorPage(w, r.WithContext(context.WithValue(
			r.Context(),
			contextKey("yotiError"),
			err.Error(),
		)))
		return
	}

	userProfile := activityDetails.UserProfile

	selfie := userProfile.Selfie()
	var base64URL string
	if selfie != nil {
		base64URL = selfie.Value().Base64URL()
		decodedImage := decodeImage(selfie.Value().Data())
		file := createImage()
		saveImage(decodedImage, file)
	}

	dob, err := userProfile.DateOfBirth()
	if err != nil {
		errorPage(w, r.WithContext(context.WithValue(
			r.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("Error parsing Date of Birth attribute. Error %q", err.Error()),
		)))
		return
	}

	var dateOfBirthString string
	if dob != nil {
		dateOfBirthString = dob.Value().String()
	}

	templateVars := map[string]interface{}{
		"profile":         userProfile,
		"selfieBase64URL": template.URL(base64URL),
		"rememberMeID":    activityDetails.RememberMeID(),
		"dateOfBirth":     dateOfBirthString,
	}

	var t *template.Template
	t, err = template.New("profile.html").
		Funcs(template.FuncMap{
			"escapeURL": func(s string) template.URL {
				return template.URL(s)
			},
			"marshalAttribute": func(name string, icon string, property interface{}, prevalue string) interface{} {
				return struct {
					Name     string
					Icon     string
					Prop     interface{}
					Prevalue string
				}{
					name,
					icon,
					property,
					prevalue,
				}
			},
		}).
		ParseFiles("profile.html")
	if err != nil {
		fmt.Println(err)
		return
	}

	err = t.Execute(w, templateVars)
	if err != nil {
		errorPage(w, r.WithContext(context.WithValue(
			r.Context(),
			contextKey("yotiError"),
			fmt.Sprintf("Error applying the parsed profile template. Error: `%s`", err),
		)))
		return
	}
}

func main() {
	// Check if the cert files are available.
	certificatePresent := certificatePresenceCheck(selfSignedCertName, selfSignedKeyName)
	// If they are not available, generate new ones.
	if !certificatePresent {
		err := generateSelfSignedCertificate(selfSignedCertName, selfSignedKeyName, "127.0.0.1:"+portNumber)
		if err != nil {
			panic("Error when creating https certs: " + err.Error())
		}
	}

	http.HandleFunc("/", home)
	http.HandleFunc(profileEndpoint, profile)
	http.HandleFunc("/dynamic-share", dynamicShare)
	http.HandleFunc("/source-constraints", sourceConstraints)

	rootdir, err := os.Getwd()
	if err != nil {
		log.Fatal("Error: Couldn't get current working directory")
	}
	http.Handle("/images/", http.StripPrefix("/images", http.FileServer(http.Dir(path.Join(rootdir, "images/")))))
	http.Handle("/static/", http.StripPrefix("/static", http.FileServer(http.Dir(path.Join(rootdir, "static/")))))

	log.Printf("About to listen and serve on %[1]s. Go to https://localhost:%[1]s/", portNumber)
	err = http.ListenAndServeTLS(":"+portNumber, selfSignedCertName, selfSignedKeyName, nil)
	if err != nil {
		panic("Error when calling `ListenAndServeTLS`: " + err.Error())
	}
}

func decodeImage(imageBytes []byte) image.Image {
	decodedImage, _, err := image.Decode(bytes.NewReader(imageBytes))
	if err != nil {
		panic("Error when decoding the image: " + err.Error())
	}
	return decodedImage
}

func createImage() (file *os.File) {
	file, err := os.Create("./images/YotiSelfie.jpeg")
	if err != nil {
		panic("Error when creating the image: " + err.Error())
	}
	return
}

func saveImage(img image.Image, file io.Writer) {
	var opt jpeg.Options
	opt.Quality = 100

	err := jpeg.Encode(file, img, &opt)
	if err != nil {
		panic("Error when saving the image: " + err.Error())
	}
}
/**
 * Should be called at least once before the draw method is called.
 *
 * @param display_box
 */
void map::set_display_box(engine::math::box2_t display_box) {
    engine::graphics::box_builder builder({m_size.x * m_tile_size.x, m_size.y * m_tile_size.y});
    builder.to_center(display_box);
    m_dest.reset(new engine::math::box2_t(builder.build()));
}
/**
 * Determine whether a {@link com.helger.quartz.IJob} with the given
 * identifier already exists within the scheduler.
 *
 * @param jobKey
 *        the identifier to check for
 * @return true if a Job exists with the given identifier
 * @throws JobPersistenceException
 *         on error
 */
public boolean checkExists (final JobKey jobKey) throws JobPersistenceException
{
  synchronized (m_aLock)
  {
    return m_aJobsByKey.get (jobKey) != null;
  }
}
#include "join_operator.hpp" #include <algorithm> #include <iostream> using namespace std; namespace SQL_Compiler { JoinIterator::JoinIterator( OperatorIterator&& from_left, OperatorIterator&& from_right, Joiner const& joiner_left_, Joiner const& joiner_right_) : joiner_left(joiner_left_), joiner_right(joiner_right_) { fill_map(left, joiner_left, move(from_left)); fill_map(right, joiner_right, move(from_right)); restart(); } void JoinIterator::fill_map( JoinMapper& target, Joiner const& joiner, OperatorIterator&& from) { for (; !from.is_done(); ++from) { Tuple t = joiner(*from); target[t].push_back(*from); } } void JoinIterator::match_buckets() { if (is_done()) return ; bucket_right = right.find(bucket_left->first); while (bucket_right == end(right)) { ++bucket_left; if (is_done()) return ; bucket_right = right.find(bucket_left->first); } it_left = begin(bucket_left->second); it_right = begin(bucket_right->second); } Tuple const& JoinIterator::dereference() const { return cache; } void JoinIterator::merge_left_right() { cache = *it_left; copy(begin(*it_right), end(*it_right), back_inserter(cache)); } void JoinIterator::increment() { ++it_right; if (it_right == end(bucket_right->second)) { it_right = begin(bucket_right->second); ++it_left; if (it_left == end(bucket_left->second)) { ++bucket_left; match_buckets(); if (is_done()) return ; } } merge_left_right(); } bool JoinIterator::is_done() const { return bucket_left == end(left); } void JoinIterator::restart() { bucket_left = begin(left); match_buckets(); merge_left_right(); } Join::Join( BaseOperator * const left_, BaseOperator * const right_, Relation const& rleft_, Relation const& rright_, std::vector<JoinKey> const& keys_ ) : left(left_), right(right_), rleft(rleft_), rright(rright_), keys(keys_) { compile(); } Joiner Join::compile(Relation const& rel, vector<string> const& names) { vector<int> indices; for (auto const& name : names) { indices.push_back(rel[name]); } return [indices](Tuple const& source){ Tuple t; t.reserve(indices.size()); for (auto const& i : indices) { t.push_back(source[i]); } return t; }; } void Join::compile() { vector<string> names_left; vector<string> names_right; for (auto const& key : keys) { names_left.push_back(key.first); names_right.push_back(key.second); } joiner_left = compile(rleft, names_left); joiner_right = compile(rright, names_right); } OperatorIterator Join::begin() const { return OperatorIterator(new JoinIterator(left->begin(), right->begin(), joiner_left, joiner_right)); } }
/**
 * Base class for all resources
 */
public abstract class BaseLogSvcResource {

    // Constant defines the date/time format for a request parameter.
    public static final String DATE_TIME_FORMAT = "yyyy-MM-dd_HH:mm:ss";

    // non-service logs
    private final static List<String> nonServiceLogFileNames = new ArrayList<String>() {
        {
            add("systemevents");
            add("messages");
            add("nginx_access");
            add("nginx_error");
            add("bkutils");
        }
    };

    // used when no media type is specified
    public final static MediaType DEFAULT_MEDIA_TYPE = MediaType.APPLICATION_XML_TYPE;

    // used as xml tag or json attribute name for error message
    public final static String ERROR_MESSAGE_TAG = "error_message";

    // A reference to the log service configurable properties loader.
    @Autowired
    protected LogSvcPropertiesLoader _logSvcPropertiesLoader;

    @Context
    HttpHeaders header;

    protected MediaType getMediaType() {
        MediaType mediaType = DEFAULT_MEDIA_TYPE;
        if (header != null) {
            List<MediaType> mTypes = header.getAcceptableMediaTypes();
            if (mTypes != null) {
                for (MediaType media : mTypes) {
                    if (LogSvcConstants.ACCEPTED_MEDIA_TYPES.contains(media)) {
                        mediaType = media;
                        break;
                    }
                }
            }
        }
        return mediaType;
    }

    /**
     * Verifies a valid severity level is passed in the request and returns the
     * appropriate LogSeverity enumeration.
     *
     * @param severity The severity passed in the request.
     * @return The corresponding LogSeverity.
     * @throws APIException for an invalid severity.
     */
    protected LogSeverity validateLogSeverity(int severity) {
        if ((severity >= 0) && (severity < LogSeverity.values().length)) {
            return LogSeverity.values()[severity];
        } else {
            throw APIException.badRequests.parameterIsNotValid("severity");
        }
    }

    protected void validateMsgRegex(String msgRegex) {
        // Validate regular message
        if (msgRegex != null && !msgRegex.equals("")) {
            try {
                Pattern.compile(msgRegex);
            } catch (PatternSyntaxException e) {
                throw APIException.badRequests.parameterIsNotValid("regex", e);
            }
        }
    }

    /**
     * Returns list of actual log names.
     * If there are any alias names present in the inputted list they will be replaced
     * with their actual names.
     */
    protected List<String> getLogNamesFromAlias(List<String> logNames) {
        if (logNames == null || logNames.isEmpty()) {
            return logNames;
        }
        List<String> validLogNames = new ArrayList<String>();
        for (String name : logNames) {
            if (LogSvcConstants.logAliasNames.containsKey(name)) {
                validLogNames.add(LogSvcConstants.logAliasNames.get(name));
            } else {
                validLogNames.add(name);
            }
        }
        return validLogNames;
    }

    /**
     * Get set of log names supported by vipr
     */
    protected Set<String> getValidLogNames() {
        Set<String> logNames = new HashSet<String>();
        logNames.addAll(ServicesMetadata.getControlNodeLogNames());
        logNames.addAll(ServicesMetadata.getExtraNodeServiceNames());
        logNames.addAll(nonServiceLogFileNames);
        return logNames;
    }
}
PARTITION OF IMPURITIES WITHIN TITANIA SLAG

The content of gangue impurities in ilmenite and the Ti⁴⁺/Ti³⁺ ratio during smelting control the distribution of impurities between pseudobrookite and glassy silicate. These two factors greatly affect the stability of these two phases in sulfate-type slag. The entire SiO₂ and CaO in the smelter feed solidify as constituents of the glassy silicate. For a Ti⁴⁺/Ti³⁺ ratio above 3.4, 95% of MgO in the melt enters the M₃O₅ phase and the glassy silicate solidifies as a homogeneous phase which is easily devitrified during slag upgrading. Higher degrees of reduction trigger solidification of pyroxene within the glassy silicate. In the pyroxene range, 70-80% of MgO from the melt solidifies within the M₃O₅ phase. Excessive reduction leads to solidification of refractory glassy silicate saturated with forsterite. In the homogeneous range of glassy silicate, 62% of Al₂O₃ in the melt enters the M₃O₅ phase during slag solidification. In the pyroxene and forsterite ranges, the fraction of Al₂O₃ solidified in the M₃O₅ phase decreases to 50% and 40%, respectively. The high stability of glassy silicate at Ti⁴⁺/Ti³⁺ ratios lower than 2.6 prevents the required devitrification and also the transformation of the original M₃O₅ phase into a three-phase combination composed of rutile, MgO-depleted pseudobrookite and MgO-enriched ilmenite during upgrading of Sorel slag to UGS product. For different QMM slag grades, Th and U background levels in the pseudobrookite were found to be in the range of 10-20 ppm and 18 ppm, respectively. About 90% of radioactive isotopes in the smelter feed were entrapped within glassy silicate during slag solidification. The smallest solidus-liquidus gap was measured for QMM slag containing gangue impurities in the range of 1-2%. As expected, the largest expansion of the solidus-liquidus gap occurred for Sorel slag containing 11-12% of gangue impurities. Sharp changes in the viscosity curves for Sorel and QMM slag corresponding to the final melting of slag are in agreement with liquidus temperature measurements.
package controller;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.net.URL;
import java.util.ResourceBundle;
import javafx.event.ActionEvent;
import javafx.fxml.FXML;
import javafx.fxml.Initializable;
import javafx.scene.control.Alert;
import javafx.scene.control.CheckBox;
import javafx.scene.control.ComboBox;
import javafx.scene.control.TableView;
import javafx.scene.input.KeyCode;
import javafx.scene.input.KeyEvent;
import javafx.scene.layout.AnchorPane;

public class FxImpresoraTicketController implements Initializable {

    @FXML
    private AnchorPane window;
    @FXML
    private ComboBox<String> cbImpresoras;
    @FXML
    private CheckBox cbCortarPapel;

    private FxVentaController ventaController;

    private PrinterService printerService;

    @Override
    public void initialize(URL url, ResourceBundle rb) {
        printerService = new PrinterService();
        printerService.getPrinters().forEach(e -> {
            cbImpresoras.getItems().add(e);
        });
        for (int i = 0; i < cbImpresoras.getItems().size(); i++) {
            if (cbImpresoras.getItems().get(i).equalsIgnoreCase(Session.NOMBRE_IMPRESORA)) {
                cbImpresoras.getSelectionModel().select(i);
                break;
            }
        }
        if (Session.CORTAPAPEL_IMPRESORA != null) {
            cbCortarPapel.setSelected(Session.CORTAPAPEL_IMPRESORA.equalsIgnoreCase("1"));
        }
    }

    private void eventGuardarImpresora() {
        if (cbImpresoras.getSelectionModel().getSelectedIndex() >= 0) {
            String ruta = "./archivos/impresoraticket.txt";
            File archivo;
            BufferedWriter bw = null;
            try {
                archivo = new File(ruta);
                if (archivo.exists()) {
                    bw = new BufferedWriter(new FileWriter(archivo));
                    bw.write(cbImpresoras.getSelectionModel().getSelectedItem());
                    bw.newLine();
                    bw.write(cbCortarPapel.isSelected() ? "1" : "0");
                    Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.INFORMATION, "Impresora de ticket", "Se guardo la configuración", false);
                } else {
                    bw = new BufferedWriter(new FileWriter(archivo));
                    bw.write(cbImpresoras.getSelectionModel().getSelectedItem());
                    bw.newLine();
                    bw.write(cbCortarPapel.isSelected() ? "1" : "0");
                    Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.INFORMATION, "Impresora de ticket", "Se guardo la configuración", false);
                }
            } catch (IOException e) {
                Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.WARNING, "Impresora de ticket", "Error al crear el archivo:" + e.getLocalizedMessage(), false);
            } finally {
                try {
                    if (null != bw) {
                        bw.close();
                    }
                    iniciarRutasImpresion();
                } catch (IOException e2) {
                    Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.WARNING, "Impresora de ticket", "Error al finalizar el creado del archivo:" + e2.getLocalizedMessage(), false);
                }
            }
        } else {
            Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.WARNING, "Impresora de ticket", "Seleccione una impresora", false);
        }
    }

    private void iniciarRutasImpresion() {
        File archivo;
        FileReader fr = null;
        BufferedReader br = null;
        try {
            archivo = new File("./archivos/impresoraticket.txt");
            if (archivo.exists()) {
                Session.ESTADO_IMPRESORA = true;
                // Open the file and create a BufferedReader for convenient
                // reading (gives us the readLine() method).
                fr = new FileReader(archivo);
                br = new BufferedReader(fr);
                // Read the file
                Session.NOMBRE_IMPRESORA = br.readLine();
                Session.CORTAPAPEL_IMPRESORA = br.readLine();
                System.out.println(Session.NOMBRE_IMPRESORA);
                System.out.println(Session.CORTAPAPEL_IMPRESORA);
                Tools.Dispose(window);
            } else {
                Session.ESTADO_IMPRESORA = false;
                Tools.Dispose(window);
            }
        } catch (IOException e) {
            Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.WARNING, "Impresora de ticket", "Error al leer el archivo:" + e.getLocalizedMessage(), false);
            Session.ESTADO_IMPRESORA = false;
        } finally {
            // Close the file in the finally block, so it gets closed whether
            // everything went well or an exception was thrown.
            try {
                if (null != fr) {
                    fr.close();
                }
                if (null != br) {
                    br.close();
                }
            } catch (IOException e2) {
                Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.WARNING, "Impresora de ticket", "Error al finalizar la lectura del archivo:" + e2.getLocalizedMessage(), false);
            }
        }
    }

    @FXML
    private void onKeyPressedGuardar(KeyEvent event) {
        if (event.getCode() == KeyCode.ENTER) {
            eventGuardarImpresora();
        }
    }

    @FXML
    private void onActionGuardar(ActionEvent event) {
        eventGuardarImpresora();
    }

    private void eventImprimirPrueba() {
        if (cbImpresoras.getSelectionModel().getSelectedIndex() >= 0) {
            if (cbCortarPapel.isSelected()) {
                String text = "Impresora " + cbImpresoras.getSelectionModel().getSelectedItem()
                        + "\nPara uso de todo tipo de tickets"
                        + "\nCorta papel"
                        + "\n\n\n\n\n\n\n\n\n\n";
                printerService.printString(cbImpresoras.getSelectionModel().getSelectedItem(), text, true);
                ventaController.imprimirVenta("Impresion de prueba", new TableView<>(), "00.00", "00.00", "00.00", "00.00", "00.00", "00.00", "0000-00000000");
            }
            // else {
            //     String text = "Impresora " + cbImpresoras.getSelectionModel().getSelectedItem()
            //             + "\nPara uso de todo tipo de tickets"
            //             + "\nNo corta papel"
            //             + "\n\n\n\n\n\n\n\n\n\n";
            //     printerService.printString(cbImpresoras.getSelectionModel().getSelectedItem(), text, false);
            //     ventaController.imprimirVenta("Impresion de prueba", "00.00", "00.00", "00.00", "00.00", "00.00", "00.00", "0000-00000000");
            // }
        } else {
            Tools.AlertMessage(window.getScene().getWindow(), Alert.AlertType.WARNING, "Impresora de ticket", "Seleccione una impresora", false);
        }
    }

    @FXML
    private void onKeyPressedProbar(KeyEvent event) {
        if (event.getCode() == KeyCode.ENTER) {
            eventImprimirPrueba();
        }
    }

    @FXML
    private void onActionProbar(ActionEvent event) {
        eventImprimirPrueba();
    }

    public void setInitVentasController(FxVentaController ventaController) {
        this.ventaController = ventaController;
    }
}
Golf is struggling, and the game’s most decorated player has identified a culprit: the golf ball. Speaking at the HSBC Golf Business Forum in Ponte Vedra Beach, Florida, on Tuesday, the 18-time major winner turned prolific course designer blamed changes in the golf ball for the recent spate of course closures throughout the United States.

“Fact is, more golf courses have closed in the U.S. in each of the last 10 years than have opened,” Nicklaus said. “This is thanks in great part to changes in the golf ball and the distance it travels. Courses have had to change along with it. It’s now a slower game and more expensive than before, and that can’t be a good thing.”

Nicklaus’ solution is a creative one: create golf balls specifically tailored to each course instead of forcing courses to add length in response to longer-traveling golf balls.

“We don’t want to change the game for the core golfer, but we need to make every effort to offer alternatives to bring more people into the game and keep them in the game,” Nicklaus said. “I think we need to develop a golf ball to suit the golf course, rather than build courses to suit a golf ball. Whether it’s a ball that goes 50%, 75%, or 100%, you play a ball that fits the course and your game.”

According to Nicklaus, it’s an easy fix for a very real problem for the game.

“It’s not that big a deal,” Nicklaus said. “We used to do it when traveling to play the Open and switching from the large ball to the small. It took us only a day to get used to a different ball. But when land is a dear commodity and water is scarce, you need to do something to respond to today’s situation.”
/*
 * check_checksums_and_signatures -- (internal) check if checksums
 *                                   and signatures are correct for parts
 *                                   in a given replica
 */
static int
check_checksums_and_signatures(struct pool_set *set,
		struct poolset_health_status *set_hs)
{
	LOG(3, "set %p, set_hs %p", set, set_hs);

	for (unsigned r = 0; r < set->nreplicas; ++r) {
		struct pool_replica *rep = REP(set, r);
		struct replica_health_status *rep_hs = REP_HEALTH(set_hs, r);

		if (rep->remote)
			continue;

		for (unsigned p = 0; p < rep->nhdrs; ++p) {
			if (replica_is_part_broken(r, p, set_hs))
				continue;

			LOG(4, "checking checksum for part %u, replica %u",
					p, r);

			struct pool_hdr *hdr;
			if (rep->remote) {
				hdr = rep->part[p].remote_hdr;
			} else {
				hdr = HDR(rep, p);
			}

			if (!util_checksum(hdr, sizeof(*hdr), &hdr->checksum,
					0, POOL_HDR_CSUM_END_OFF(hdr))) {
				ERR("invalid checksum of pool header");
				rep_hs->part[p].flags |= IS_BROKEN;
			} else if (util_is_zeroed(hdr, sizeof(*hdr))) {
				rep_hs->part[p].flags |= IS_BROKEN;
			}

			enum pool_type type = pool_hdr_get_type(hdr);
			if (type == POOL_TYPE_UNKNOWN) {
				ERR("invalid signature");
				rep_hs->part[p].flags |= IS_BROKEN;
			}
		}
	}
	return 0;
}
/**
 * Request to set up a WebHook subscription
 */
public class SubscriptionRequestBody implements TamTamSerializable {

    @NotNull
    private final @Valid String url;
    private Set<@Valid String> updateTypes;
    private @Valid String version;

    @JsonCreator
    public SubscriptionRequestBody(@JsonProperty("url") String url) {
        this.url = url;
    }

    /**
     * URL of the HTTP(S) endpoint of your bot. Must start with http(s)://
     * @return url
     **/
    @JsonProperty("url")
    public String getUrl() {
        return url;
    }

    public SubscriptionRequestBody updateTypes(Set<String> updateTypes) {
        this.setUpdateTypes(updateTypes);
        return this;
    }

    /**
     * List of update types your bot wants to receive. See the `Update` object for a complete list of types
     * @return updateTypes
     **/
    @JsonProperty("update_types")
    public Set<String> getUpdateTypes() {
        return updateTypes;
    }

    public void setUpdateTypes(Set<String> updateTypes) {
        this.updateTypes = updateTypes;
    }

    public SubscriptionRequestBody version(String version) {
        this.setVersion(version);
        return this;
    }

    /**
     * Version of the API. Affects model representation
     * @return version
     **/
    @JsonProperty("version")
    public String getVersion() {
        return version;
    }

    public void setVersion(String version) {
        this.version = version;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }

        SubscriptionRequestBody other = (SubscriptionRequestBody) o;
        return Objects.equals(this.url, other.url)
                && Objects.equals(this.updateTypes, other.updateTypes)
                && Objects.equals(this.version, other.version);
    }

    @Override
    public int hashCode() {
        int result = 1;
        result = 31 * result + (url != null ? url.hashCode() : 0);
        result = 31 * result + (updateTypes != null ? updateTypes.hashCode() : 0);
        result = 31 * result + (version != null ? version.hashCode() : 0);
        return result;
    }

    @Override
    public String toString() {
        return "SubscriptionRequestBody{"
                + " url='" + url + '\''
                + " updateTypes='" + updateTypes + '\''
                + " version='" + version + '\''
                + '}';
    }
}
// Fill fills in the function pointers in the Features struct from the
// optional interfaces. It returns the original updated Features
// struct passed in.
func (ft *Features) Fill(f Fs) *Features {
	if do, ok := f.(Purger); ok {
		ft.Purge = do.Purge
	}
	if do, ok := f.(Copier); ok {
		ft.Copy = do.Copy
	}
	if do, ok := f.(Mover); ok {
		ft.Move = do.Move
	}
	if do, ok := f.(DirMover); ok {
		ft.DirMove = do.DirMove
	}
	if do, ok := f.(DirChangeNotifier); ok {
		ft.DirChangeNotify = do.DirChangeNotify
	}
	if do, ok := f.(UnWrapper); ok {
		ft.UnWrap = do.UnWrap
	}
	if do, ok := f.(DirCacheFlusher); ok {
		ft.DirCacheFlush = do.DirCacheFlush
	}
	if do, ok := f.(PutUncheckeder); ok {
		ft.PutUnchecked = do.PutUnchecked
	}
	if do, ok := f.(PutStreamer); ok {
		ft.PutStream = do.PutStream
	}
	if do, ok := f.(MergeDirser); ok {
		ft.MergeDirs = do.MergeDirs
	}
	if do, ok := f.(CleanUpper); ok {
		ft.CleanUp = do.CleanUp
	}
	if do, ok := f.(ListRer); ok {
		ft.ListR = do.ListR
	}
	return ft.DisableList(Config.DisableFeatures)
}
import React, { ReactElement } from 'react'
import ssrPrepass from 'react-ssr-prepass'
import { renderToString } from 'react-dom/server.js'
import { StaticRouter } from 'react-router-dom'
import { HelmetProvider } from 'react-helmet-async'
import { getFullPath, withoutSuffix } from '../utils/route'
import { createRouter } from './utils'
import coreViteSSR from '../core/entry-server.js'
import type { Context, SsrHandler } from './types'
import { provideContext } from './components.js'
export { ClientOnly, useContext } from './components.js'

let render: (element: ReactElement) => string | Promise<string> =
  renderToString

// @ts-ignore
if (__USE_APOLLO_RENDERER__) {
  // Apollo does not support Suspense so it needs its own
  // renderer in order to await for async queries.
  // @ts-ignore
  import('@apollo/client/react/ssr')
    .then(({ renderToStringWithData }) => {
      render = renderToStringWithData
    })
    .catch(() => null)
}

const viteSSR: SsrHandler = function (
  App,
  {
    routes,
    base,
    prepassVisitor,
    PropsProvider,
    pageProps,
    styleCollector,
    ...options
  },
  hook
) {
  return coreViteSSR(options, async (ctx, { isRedirect, ...extra }) => {
    const context = ctx as Context
    context.router = createRouter({
      routes,
      base,
      initialState: (extra.initialState as Record<string, unknown>) || null,
      pagePropsOptions: pageProps,
      PropsProvider,
    })

    if (hook) {
      context.initialState = (await hook(context)) || context.initialState
    }

    if (isRedirect()) return {}

    const routeBase = base && withoutSuffix(base(context), '/')
    const fullPath = getFullPath(context.url, routeBase)

    const helmetContext: Record<string, Record<string, string>> = {}

    let app: ReactElement = React.createElement(
      HelmetProvider,
      { context: helmetContext },
      React.createElement(
        StaticRouter,
        { basename: routeBase, location: fullPath },
        provideContext(React.createElement(App, context), context)
      )
    )

    const styles = styleCollector && (await styleCollector(context))
    if (styles) {
      app = styles.collect(app)
    }

    await ssrPrepass(app, prepassVisitor)
    const body = await render(app)

    if (isRedirect()) {
      styles && styles.cleanup && styles.cleanup()
      return {}
    }

    const currentRoute = context.router.getCurrentRoute()
    if (currentRoute) {
      Object.assign(
        context.initialState || {},
        (currentRoute.meta || {}).state || {}
      )
    }

    const {
      htmlAttributes: htmlAttrs = '',
      bodyAttributes: bodyAttrs = '',
      ...tags
    } = helmetContext.helmet || {}

    const styleTags: string = (styles && styles.toString(body)) || ''
    styles && styles.cleanup && styles.cleanup()

    const headTags =
      Object.keys(tags)
        .map((key) => (tags[key] || '').toString())
        .join('') +
      '\n' +
      styleTags

    return { body, headTags, htmlAttrs, bodyAttrs }
  })
}

export default viteSSR
/**
 * Returns whether the map is currently loading something.
 */
private boolean isLoading() {
    return !loadTimer.isDisposed()
            && (loadExecutor.getActiveCount() > 0
                || downloadExecutor.getActiveCount() > 0
                || displayExecutor.getActiveCount() > 0);
}
Scientists pour a lot of brainpower into understanding how their experimental equipment works. You don’t want to be fooled into thinking you’ve made a great discovery because of some quirk in the apparatus you didn’t know about. Just the other day, a new paper published online suggested that the instruments used to detect gravitational waves exhibited such a quirk, tricking scientists into claiming the detection of waves that maybe weren’t really there. It appears that gravity wave fans can relax, though. A response to the challenge pretty much establishes that the new criticism doesn’t undermine the wave discoveries. Of course, you never know — supposedly well-established results sometimes do fade away. Often that’s because scientists have neglected to understand the most important part of the entire experimental apparatus — their own brains. It’s the brain, after all, that devises experiments and interprets their results. How the brain perceives, how it makes decisions and judgments, and how those judgments can go awry are at least as important to science as knowing the intricacies of nonbiotic experimental machinery. And as any brain scientist will tell you, there’s still a long way to go before understanding the brain will get crossed off science’s to-do list. But there has been progress. A recent special issue of the journal Neuron offers a convenient set of “perspective” papers exploring the current state of understanding of the brain’s inner workings. Those papers show that a lot is known. But at the same time they emphasize that there’s a lot we don’t know. Glancing at the table of contents reveals the first lesson about understanding the brain: It’s a complex problem that needs to be approached from multiple perspectives. On one level, there’s the dynamics of electrical currents that constitute the main signaling method of the brain’s nerve cells. Then on a higher level there’s the need to figure out the rules by which nerve cells make connections (synapses) and create the neural circuitry for processing sensory input, learning and behaving. Another challenge is understanding how nerve cell networks represent memories and how you recall what you’ve learned. And it’s essential to understand how neurobiological processing conducted by molecules and cells and electrical signaling gets translated into behaviors, from simple bodily movements to complex social interactions. Nerve cells in the brain, or neurons, are known to communicate among themselves by transmitting electrical signals, aided by chemical signaling at the synapses connecting the neurons. But there are gaps in understanding how that process takes the brain from perceptions to thoughts to actions. Each of Neuron’s perspective papers both describes what’s already known about how the brain works and offers speculations where scientists lack full knowledge about how the brain does it jobs. Much of the effort to explain the brain involves mapping the electrical signaling throughout the entire network of nerve cell connections. Per Roland of the University of Copenhagen, for instance, discusses how those signals vary in space and time. He emphasizes the important balance between signaling that incites neurons to send signals and the messaging that inhibits signaling, keeping some neurons quiet. Sophie Denève and colleagues of the Ecole Normale Supérieure in Paris also emphasize the balance between excitation and inhibition in neural circuitry. 
That balance is important, they say, for understanding how the whole brain can learn to do things based on changes in the connections between individual neurons. Somehow the rules governing synaptic connections between cells enable such “local” activity to modify the “global” neural circuitry that carries out the brain’s many functions. Excitation-inhibition balance, plus feedback from the global network influencing synapse strength, “can ensure that global functions can be learned with local learning rules,” Denève and colleagues write. Almost all these approaches to figuring out the brain involve how it manipulates information. In a sense, the ultimate key question is how the brain conducts the mysterious process by which it absorbs information in the form of lights and colors, sounds, smells and tactile inputs and transforms them into physical actions — ideally behaviors that are appropriate responses to the inputs. Just (OK, not “just,” but sort of) as in a computer, the brain transforms input into output; information about the external world is manipulated to produce information about how to react to it. But because sensory input has its limits, and some of it is ambiguous, the informational variables of the external world cannot be gauged with certainty, Xaq Pitkow and Dora Angelaki of Baylor College of Medicine and Rice University in Houston point out in their perspective. So the brain’s behavioral choices must be based on some method of computing probabilities to infer the likely state of the world — and then choosing the wisest (probably) actions in response. “It is widely accepted that the brain somehow approximates probabilistic inference,” Pitkow and Angelaki write. But nobody really knows how the brain does it. Pitkow and Angelaki propose that multiple populations of the brain’s neurons perform various computations to make appropriate behavioral decisions. Patterns of electrical signaling by these neurons must represent the original sensory stimuli — that is, the patterns in the stimuli are encoded in the patterns of electrical signaling among the neurons. Those neural signaling patterns, in Pitkow and Angelaki’s description, are then recoded into another set of patterns; that process sorts out the important variables in the environment from those that don’t matter. Those patterns are then decoded in the process of generating behavioral actions. In sum, the brain appears to implement algorithms for collecting and assessing information about the environment and encoding that information in messages that tell the body what to do. Somehow those algorithms allow the brain to conduct statistical computations that combine beliefs about the environment with the expected outcome of different behaviors. Pitkow and Angelaki present sophisticated speculation about the possible ways the brain could accomplish this task. It’s clearly an unimaginably complicated process, and figuring out how the brain does it will require more sophisticated experiments than neuroscientists have so far imagined. Much research on brain function in animals, for instance, offers the animal a choice of two options, given various external conditions. But tasks of that nature are vastly simpler than the jobs that evolution optimized brains for. “The real benefit of complex inferences like weighing uncertainty may not be apparent unless the uncertainty has complex structure,” Pitkow and Angelaki argue. 
“Overly simple tasks” are “ill-suited to expose the inferential computations that make the brain special.” And so truly understanding the brain, it seems, will require better experiments — using apparatus that is more fully understood than the brain now is — of sufficient complexity to be worthy of probing the brain’s abilities. Follow me on Twitter: @tom_siegfried
from torch import nn
import torch


class BiRNN(nn.Module):
    def __init__(self, vocab, embed_size, num_hiddens, num_layers):
        super(BiRNN, self).__init__()
        self.embedding = nn.Embedding(len(vocab), embed_size)
        # Setting bidirectional=True yields a bidirectional recurrent network
        self.encoder = nn.LSTM(input_size=embed_size,
                               hidden_size=num_hiddens,
                               num_layers=num_layers,
                               bidirectional=True)
        # The hidden states of the initial and final time steps are
        # concatenated and used as the input to the fully connected layer
        self.decoder = nn.Linear(4 * num_hiddens, 2)

    def forward(self, inputs):
        embedding = self.embedding(inputs.permute(1, 0))
        outputs, _ = self.encoder(embedding)
        encoding = torch.cat((outputs[0], outputs[-1]), -1)
        outs = self.decoder(encoding)
        return outs
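A minimal usage sketch for the class above. The vocabulary, sizes, and batch shape are illustrative assumptions; the only requirement the code imposes is that `vocab` supports `len()` and that inputs arrive as (batch, seq_len) token ids:

# Hypothetical smoke test for BiRNN; sizes are made up for illustration.
vocab = ["<pad>", "good", "bad", "movie"]                 # stand-in vocabulary
net = BiRNN(vocab, embed_size=100, num_hiddens=100, num_layers=2)

batch = torch.randint(0, len(vocab), (8, 500))            # (batch, seq_len) token ids
logits = net(batch)                                       # (8, 2) two-class scores
print(logits.shape)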
import numpy as np
import pandas as pd
from pystorm.hal.run_control import RunControl
from pystorm.hal import data_utils


class NetBuilder(object):
    def __init__(self, HAL, net=None):
        """Initialize NetBuilder:

        Inputs:
        =======
        HAL (HAL object) :
        net (hal.neuromorph.graph object, default None) :
            User may provide a custom network they constructed.
            If no network is supplied, typically one will be added with a call
            like NetBuilder.create_single_pool_net()
        """
        self.hal = HAL
        self.net = net

    def add_net(self, net):
        self.net = net

    def create_single_pool_net_from_spec(self, ps, decoders=None):
        return self.create_single_pool_net(
            ps.Y, ps.X,
            tap_matrix=ps.TPM,
            decoders=decoders,
            biases=ps.biases,
            gain_divs=ps.gain_divisors,
            loc_yx=ps.loc_yx,
            diffusor_cuts_yx=ps.diffusor_cuts_yx)

    def create_single_pool_net(self, Y, X, tap_matrix=None, decoders=None,
                               biases=0, gain_divs=1, loc_yx=(0, 0),
                               diffusor_cuts_yx=None):
        """Creates a Network with a single Pool

        Inputs:
        =======
        Y (int) : number of rows in the pool
        X (int) : number of columns in the pool
        tap_matrix ((N, dim) array or None (default)) :
            array of tap point/dimension assignments to each neuron
            if provided, Network will have an Input connected to its Pool
            if None, Network will not have an Input
        decoders ((dim, N) array or None (default)) :
            array of each neuron's decoding weight in each dimension
            if provided, Network will have an Output connected to its Pool
            if None, Network will not have an Output
        biases ((N,) int array or int) : bias bits for each neuron
        gain_divs ((N,) int array or int) : gain divisor bits for each neuron

        Returns:
        ========
        Network object
        """
        N = Y * X

        if tap_matrix is None:
            Din = 0
            tap_spec = np.zeros((N, 1))  # have to put something in, (N, [[]]) might work
        else:
            if isinstance(tap_matrix, list):
                Din = len(tap_matrix)
                tap_spec = (N, tap_matrix)
            else:
                Din = tap_matrix.shape[1]
                tap_spec = tap_matrix
                assert tap_spec.shape[0] == N, (
                    "tap matrix has {} entries but Y*X={}".format(tap_spec.shape[0], Y*X))

        if decoders is None:
            Dout = 0
        else:
            Dout = decoders.shape[0]

        from pystorm.hal.neuromorph import graph  # to describe HAL/neuromorph network
        net = graph.Network("net")

        # decoders are initially zero
        # we remap them later (without touching the rest of the network) using HAL.remap_weights()
        net.pool = net.create_pool("p1", tap_spec,
                                   biases=biases, gain_divisors=gain_divs,
                                   xy=(X, Y),
                                   user_xy_loc=(loc_yx[1], loc_yx[0]),
                                   diffusor_cuts_yx=diffusor_cuts_yx)

        if Dout > 0:
            b1 = net.create_bucket("b1", Dout)
            net.output = net.create_output("o1", Dout)
            net.decoder_conn = net.create_connection("c_p1_to_b1", net.pool, b1, decoders)
            net.create_connection("c_b1_to_o1", b1, net.output, None)

        if Din > 0:
            net.input = net.create_input("i1", Din)
            net.create_connection("c_i1_to_p1", net.input, net.pool, None)

        self.net = net
        return net

    @staticmethod
    def to_synspace(nrny, nrnx):
        """converts y, x nrn coordinate to synapse coordinate"""
        return nrny // 2, nrnx // 2

    @staticmethod
    def create_default_yx_taps(SY, SX, D, bad_syn=None):
        """create 'good' (i.e. maximally adjacently orthogonal) arrays of synapses

        Inputs:
        ======
        SY, SX (int, int) : dimensions of grid to create synapses in
        D (int) : dimensionality of representation
        bad_syn (pandas dataframe indexed y,x or (y, x) np array) :
            synapses to avoid (e.g.
            because of high bias or long T_PE)

        Returns:
        =======
        (SY, SX, D)-array of tap points
        can be converted to (Y*X, D), what Pool takes as tap_spec,
        with syn_taps_to_nrn_taps()
        """
        if isinstance(bad_syn, np.ndarray) and bad_syn.shape != (SY, SX):
            raise ValueError("bad_syn should be 2D array-like and shape (SY, SX)")

        if bad_syn is None:
            bad_syn = np.zeros((SY, SX), dtype=bool)

        def get_bad_syn(y, x):
            if isinstance(bad_syn, pd.DataFrame):
                return bad_syn.loc[y, x]
            else:
                return bad_syn[y, x]

        def find_closest_not_bad(y, x):
            # XXX unused
            # search in expanding manhattan radii
            # doing this dumb-ly, O(N**2) instead of O(N)
            # really want to encode an outward spiral
            R = 1
            while True:
                if R == max(SX, SY):
                    assert(False)
                ylo = max(y - R, 0)
                yhi = min(y + R, SY - 1)
                xlo = max(x - R, 0)
                xhi = min(x + R, SX - 1)
                # now pick the first good one
                for y in range(ylo, yhi):
                    for x in range(xlo, xhi):
                        if not get_bad_syn(y, x):
                            return y, x
                R += 1

        def eliminate_projections(base_vect, neighbors):
            """eliminate <neighbors> projections on base_vect"""
            if len(neighbors) == 1:
                proj = np.dot(neighbors[0], np.dot(neighbors[0], base_vect))
                base_vect -= proj
                assert(np.abs(np.dot(neighbors[0], base_vect)) < 1e-10)
            elif len(neighbors) > 1:
                to_elim = np.vstack(neighbors)
                U, S, VT = np.linalg.svd(to_elim)
                VpT = VT[:len(neighbors), :]
                proj = np.dot(VpT.T, np.dot(VpT, base_vect))
                base_vect -= proj
                assert(np.sum(np.abs(np.dot(to_elim, base_vect))) < 1e-10)
            return base_vect

        def get_cartesian_vector_set(D):
            vects = np.zeros((2*D, D))
            for d in range(D):
                vects[2*d, d] = 1
                vects[2*d+1, d] = -1
            return vects

        def get_random_unit_vector(D):
            gaussian = np.random.randn(D)
            return gaussian / np.linalg.norm(gaussian)

        # for D == 1, use on/off halves
        if D == 1:
            tap_matrix = np.zeros((SY, SX))
            for y in range(SY):
                for x in range(SX):
                    if not get_bad_syn(y, x):
                        if x < SX // 2:
                            tap_matrix[y, x] = 1
                        else:
                            tap_matrix[y, x] = -1
        else:
            # can expose these later, I suppose
            use_mean = True
            cartesian = True

            cartesian_vects = get_cartesian_vector_set(D)

            # pick a random standard basis direction for each tap
            # try to keep adjacent vectors orthogonal
            # raster-scan, considering already-set vectors
            # neighborhood under consideration grows with dimensions
            tap_matrix = np.zeros((SY, SX, D), dtype=int)
            for y in range(SY):
                for x in range(SX):
                    if not get_bad_syn(y, x):
                        neighbors = []
                        if D >= 2:
                            if x > 0:
                                if not get_bad_syn(y, x - 1):
                                    neighbors.append('l')
                                elif D == 2 and y > 0:  # helps 2D with few taps
                                    neighbors.append('u')
                            elif D == 2 and y > 0:  # helps 2D with few taps
                                neighbors.append('u')
                        if D >= 3:
                            if y > 0:
                                neighbors.append('u')
                        if D >= 4:
                            if x > 0 and y > 0:
                                neighbors.append('ul')
                        if D >= 5:
                            if x < SX - 1 and y > 0:
                                neighbors.append('ur')

                        elim_vects = []
                        for n in neighbors:
                            if n == 'l':
                                elim_vects.append(tap_matrix[y, x - 1])
                            if n == 'u':
                                elim_vects.append(tap_matrix[y - 1, x])
                            if n == 'ul':
                                elim_vects.append(tap_matrix[y - 1, x - 1])
                            if n == 'ur':
                                elim_vects.append(tap_matrix[y - 1, x + 1])

                        base_vect_norm = 0
                        fails = 0

                        # debugging info
                        base_vect_tries = []
                        base_vect_elims = []
                        base_vect_tries_cart = []

                        while True:
                            # now assign the base_vect to eliminate projections from neighbors
                            # keep trying if we pick the base_vect badly
                            base_vect = get_random_unit_vector(D)
                            base_vect_tries.append(base_vect)

                            # if cartesian, convert the completely random vector into its
                            # nearest standard_basis vector
                            if cartesian:
                                similarities = np.dot(cartesian_vects, base_vect)
                                base_vect = cartesian_vects[np.argmax(similarities)].copy()
                                base_vect_tries_cart.append(base_vect)

                            # eliminate projections:
                            # the base_vect we chose may be in the span of the neighbors
                            # if so, try again, up to some limit
                            try:
                                base_vect = eliminate_projections(base_vect, elim_vects)
                                base_vect_elims.append(base_vect)
                                base_vect_norm = np.linalg.norm(base_vect)

                                # if taking the neighbor's projections out of the
                                # random vector leaves you with anything, break out
                                if base_vect_norm > 1e-10:
                                    candidate_vect = base_vect / base_vect_norm

                                    # for any vector that "works", so does its opposite
                                    # use the one that moves the mean encoder closer to zero
                                    # XXX can also take into account if not orthogonal to
                                    # some neighbors, esp for D == 2
                                    if use_mean:
                                        curr_sum = np.sum(tap_matrix, axis=(0, 1))
                                        pos_norm = np.linalg.norm(curr_sum + candidate_vect)
                                        neg_norm = np.linalg.norm(curr_sum - candidate_vect)
                                        if neg_norm < pos_norm:
                                            candidate_vect *= -1

                                    break  # leave while with candidate_vect

                            # shouldn't happen, but try again if it does
                            except AssertionError:
                                base_vect_norm = 0

                            # print debug info if something goes really wrong
                            fails += 1
                            if fails > 100:
                                print("failed at y,x: ", y, ",", x)
                                print("tap matrix neighborhood")
                                print(tap_matrix[y-1:y+1, x-1:x+1, :])
                                print("last ten tries:")
                                print("random vector candidates")
                                print(np.array(base_vect_tries[-10:]))
                                print("closest cartesian vector")
                                print(np.array(base_vect_tries_cart[-10:]))
                                print("after eliminating neighbor's projections")
                                print(np.array(base_vect_elims[-10:]))
                                raise RuntimeError("failed to get orthogonal vector 100 times; "
                                                   "something is probably wrong with neighborhood logic")

                        tap_matrix[y, x, :] = candidate_vect

        # make the number of taps per dimension even: zero out one straggler tap
        tap_matrix = tap_matrix.reshape((SY, SX, D))
        flat = tap_matrix.reshape((SY * SX, D))  # view: writes apply to tap_matrix
        for d in range(D):
            items = np.nonzero(flat[:, d])[0]
            if len(items) % 2 == 1:
                flat[items[-1], d] = 0

        return tap_matrix

    @staticmethod
    def get_diff_cuts_to_break_pool_in_half(height, width):
        x = width // 2
        cut_yxs = []
        for y in range(0, height, 4):
            cut_yxs.append((y, x + 1, 'left'))
        return cut_yxs

    def break_pool_in_half(self, pool):
        """Opens the diffusor down the middle of a pool.
        Good for 1D pools with default tap points (improves yield).

        Parameters:
        ==========
        pool (Pool object) the pool (in the currently mapped network) to cut
        """
        if self.net is None:
            raise RuntimeError("no Network attached to NetBuilder")
        if self.hal.last_mapped_network != self.net:
            raise RuntimeError("Trying to run un-mapped network. Run map first.")
        if pool not in self.net.get_pools():
            raise ValueError("supplied pool was not in the current network")

        loc_y, loc_x = pool.mapped_yx
        cut_yxs = NetBuilder.get_diff_cuts_to_break_pool_in_half(pool.height, pool.width)
        for y, x, direction in cut_yxs:
            self.hal.set_diffusor(y + loc_y, x + loc_x, direction, 'broken')

    def open_all_diff_cuts(self):
        """Opens all the diffusor cuts (no current passes)
        works on an already-mapped network. Remapping will erase this state.
        """
        # this isn't strictly necessary (the fn doesn't operate on self.net)
        # but it does enforce that the network is already mapped
        if self.net is None:
            raise RuntimeError("no Network attached to NetBuilder")
        if self.hal.last_mapped_network != self.net:
            raise RuntimeError("Trying to run un-mapped network. Run map first.")
Run map first.") CORE_ID = 0 # connect diffusor around pools for tile_id in range(256): self.hal.driver.OpenDiffusorAllCuts(CORE_ID, tile_id) @staticmethod def syn_taps_to_nrn_taps(tap_matrix, spacing=1): SY, SX, D = tap_matrix.shape Y = SY * 2 * spacing X = SX * 2 * spacing nrn_tap_matrix = np.zeros((Y, X, D)) for d in range(D): nrn_tap_matrix[::2*spacing, ::2*spacing, d] = tap_matrix[:, :, d] return nrn_tap_matrix.reshape((Y * X, D)) @staticmethod def make_taps_even(taps): """taking a tap list or tap matrix, make the number of taps per dim even modifies taps, removing taps to meet the eveness condition """ if isinstance(taps, list): for tap_dim in taps: if len(tap_dim) % 2 == 1: tap_dim = tap_dim[:-1] else: dims = taps.shape[1] for d in range(dims): tap_dim = taps[:, d] if int(np.sum(np.abs(tap_dim))) % 2 == 1: nonzero_idxs = np.arange(len(tap_dim))[tap_dim != 0] rand_nonzero_idx = nonzero_idxs[np.random.randint(np.sum(tap_dim != 0))] taps[rand_nonzero_idx, d] = 0
import glob

import numpy as np
from skimage.io import imread
from torch.utils.data import Dataset, DataLoader
from torchvision import datasets, transforms
import torch
import pandas as pd
from PIL import Image


class HandwritingDataset(torch.utils.data.Dataset):
    def __init__(self, csv_path, transform=None, target_transform=None):
        self.df = pd.read_csv(csv_path, header=None)
        self.transform = transform
        self.target_transform = target_transform
        # all columns except column 0 are pixels
        self.x = np.asarray(self.df.iloc[:len(self.df), 1:]).reshape([len(self.df), 28, 28])
        self.x = self.x.astype('uint8')
        # column 0 holds the labels
        self.y = np.asarray(self.df.iloc[:len(self.df), 0]).reshape([len(self.df)])

    def __len__(self):
        return len(self.df)

    def __getitem__(self, index):
        target = self.y[index]
        image = self.x[index]
        PIL_image = Image.fromarray(image)
        if self.transform is not None:
            PIL_image = self.transform(PIL_image)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return PIL_image, target


def get_handwriting_operators_dataloaders(batch_size=128,
                                          path_to_train_csv='/Users/aashishkumar/Documents/notebooks/handwriting_operators_train_temp.csv',
                                          path_to_test_csv='/Users/aashishkumar/Documents/notebooks/handwriting_operators_test_temp.csv'):
    """
    Handwriting Operators dataloader with (32, 32) images
    handwriting_operators_train_temp.csv does not have a bg class, 12 classes in total
    """
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = HandwritingDataset(path_to_train_csv, transform=all_transforms)
    test_data = HandwritingDataset(path_to_test_csv, transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_handwriting_letters_dataloaders(batch_size=128,
                                        path_to_train_csv='/Users/aashishkumar/Documents/Projects/forked_repos/no_cuda/IB-INN/handwriting_letters_train.csv',
                                        path_to_test_csv='/Users/aashishkumar/Documents/Projects/forked_repos/no_cuda/IB-INN/handwriting_letters_test.csv'):
    """
    Handwriting Letters dataloader with (32, 32) images
    handwriting_letters_train.csv has 26 classes only.
    No bg class, no mirror class
    """
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = HandwritingDataset(path_to_train_csv, transform=all_transforms)
    test_data = HandwritingDataset(path_to_test_csv, transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_emnist_uppercase_dataloaders(batch_size=128,
                                     path_to_train_csv='/Users/aashishkumar/Documents/notebooks/emnist_uppercase_train_3rd_May_2021.csv',
                                     path_to_test_csv='/Users/aashishkumar/Documents/notebooks/emnist_uppercase_test_3rd_May_2021.csv'):
    """
    EMNIST uppercase dataloader with (32, 32) images
    emnist_uppercase_train_3rd_May_2021.csv has 26 uppercase classes
    """
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = HandwritingDataset(path_to_train_csv, transform=all_transforms)
    test_data = HandwritingDataset(path_to_test_csv, transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_emnist_lowercase_dataloaders(batch_size=128,
                                     path_to_train_csv='/Users/aashishkumar/Documents/notebooks/emnist_lowercase_train_13th_May.csv',
                                     path_to_test_csv='/Users/aashishkumar/Documents/notebooks/emnist_lowercase_test_13th_May.csv'):
    """
    EMNIST lowercase dataloader with (32, 32) images
    emnist_lowercase_train_13th_May.csv has 26 lowercase classes
    """
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = HandwritingDataset(path_to_train_csv, transform=all_transforms)
    test_data = HandwritingDataset(path_to_test_csv, transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_emnist_uppercase_reduced_dataloaders(batch_size=128,
                                             path_to_train_csv='/Users/aashishkumar/Documents/notebooks/emnist_uppercase_train_11th_May_2021_reduced.csv',
                                             path_to_test_csv='/Users/aashishkumar/Documents/notebooks/emnist_uppercase_test_11th_May_2021_reduced.csv'):
    """
    EMNIST uppercase (reduced) dataloader with (32, 32) images
    emnist_uppercase_train_11th_May_2021_reduced.csv has 10 uppercase classes
    """
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = HandwritingDataset(path_to_train_csv, transform=all_transforms)
    test_data = HandwritingDataset(path_to_test_csv, transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_mnist_dataloaders(batch_size=128,
                          path_to_data='/Users/aashishkumar/Documents/pytorch_datasets'):
    """MNIST dataloader with (32, 32) images."""
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = datasets.MNIST(path_to_data, train=True, download=True,
                                transform=all_transforms)
    test_data = datasets.MNIST(path_to_data, train=False,
                               transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_fashion_mnist_dataloaders(batch_size=128,
                                  path_to_data='/Users/aashishkumar/Documents/pytorch_datasets'):
    """FashionMNIST dataloader with (32, 32) images."""
    all_transforms = transforms.Compose([
        transforms.Resize(32),
        transforms.ToTensor()
    ])
    train_data = datasets.FashionMNIST(path_to_data, train=True, download=True,
                                       transform=all_transforms)
    test_data = datasets.FashionMNIST(path_to_data, train=False,
                                      transform=all_transforms)
    train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
    return train_loader, test_loader


def get_dsprites_dataloader(batch_size=128,
                            path_to_data='../dsprites-data/dsprites_data.npz'):
    """DSprites dataloader."""
    dsprites_data = DSpritesDataset(path_to_data,
                                    transform=transforms.ToTensor())
    dsprites_loader = DataLoader(dsprites_data, batch_size=batch_size,
                                 shuffle=True)
    return dsprites_loader


def get_chairs_dataloader(batch_size=128,
                          path_to_data='../rendered_chairs_64'):
    """Chairs dataloader. Chairs are center cropped and resized to (64, 64)."""
    all_transforms = transforms.Compose([
        transforms.Grayscale(),
        transforms.ToTensor()
    ])
    chairs_data = datasets.ImageFolder(root=path_to_data,
                                       transform=all_transforms)
    chairs_loader = DataLoader(chairs_data, batch_size=batch_size,
                               shuffle=True)
    return chairs_loader


def get_chairs_test_dataloader(batch_size=62,
                               path_to_data='../rendered_chairs_64_test'):
    """There are 62 pictures of each chair, so get batches of data containing
    one chair per batch."""
    all_transforms = transforms.Compose([
        transforms.Grayscale(),
        transforms.ToTensor()
    ])
    chairs_data = datasets.ImageFolder(root=path_to_data,
                                       transform=all_transforms)
    chairs_loader = DataLoader(chairs_data, batch_size=batch_size,
                               shuffle=False)
    return chairs_loader


def get_celeba_dataloader(batch_size=128, path_to_data='../celeba_64'):
    """CelebA dataloader with (64, 64) images."""
    celeba_data = CelebADataset(path_to_data,
                                transform=transforms.ToTensor())
    celeba_loader = DataLoader(celeba_data, batch_size=batch_size,
                               shuffle=True)
    return celeba_loader


class DSpritesDataset(Dataset):
    """D Sprites dataset."""
    def __init__(self, path_to_data, subsample=1, transform=None):
        """
        Parameters
        ----------
        subsample : int
            Only load every |subsample| number of images.
        """
        self.imgs = np.load(path_to_data)['imgs'][::subsample]
        self.transform = transform

    def __len__(self):
        return len(self.imgs)

    def __getitem__(self, idx):
        # Each image in the dataset has binary values so multiply by 255 to get
        # pixel values
        sample = self.imgs[idx] * 255
        # Add extra dimension to turn shape into (H, W) -> (H, W, C)
        sample = sample.reshape(sample.shape + (1,))
        if self.transform:
            sample = self.transform(sample)
        # Since there are no labels, we just return 0 for the "label" here
        return sample, 0


class CelebADataset(Dataset):
    """CelebA dataset with 64 by 64 images."""
    def __init__(self, path_to_data, subsample=1, transform=None):
        """
        Parameters
        ----------
        subsample : int
            Only load every |subsample| number of images.
        """
        self.img_paths = glob.glob(path_to_data + '/*')[::subsample]
        self.transform = transform

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, idx):
        sample_path = self.img_paths[idx]
        sample = imread(sample_path)
        if self.transform:
            sample = self.transform(sample)
        # Since there are no labels, we just return 0 for the "label" here
        return sample, 0
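A minimal usage sketch for these loaders. MNIST is shown because torchvision downloads it automatically; the CSV-backed loaders assume the hard-coded paths above exist. The local directory is an assumption, any writable path works:

# Hypothetical smoke test for the dataloaders above.
train_loader, test_loader = get_mnist_dataloaders(batch_size=64,
                                                  path_to_data='./pytorch_datasets')
images, labels = next(iter(train_loader))
print(images.shape, labels.shape)  # torch.Size([64, 1, 32, 32]) torch.Size([64])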
<filename>clients/cpp/src/RoutedStore.cpp /* -*- C++ -*-; c-basic-offset: 4; indent-tabs-mode: nil */ /* * Implementation for RoutedStore class. * * Copyright (c) 2009 <NAME>, Inc. * * Licensed under the Apache License, Version 2.0 (the "License"); you may not * use this file except in compliance with the License. You may obtain a copy of * the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the * License for the specific language governing permissions and limitations under * the License. */ #include "RoutedStore.h" #include "voldemort/UnreachableStoreException.h" #include "voldemort/InsufficientOperationalNodesException.h" #include <iostream> namespace Voldemort { using namespace boost; using namespace std; static const bool REPAIR_READS = true; RoutedStore::RoutedStore(const std::string& storeName, shared_ptr<ClientConfig>& config, shared_ptr<Cluster>& clust, shared_ptr<std::map<int, shared_ptr<Store> > >& map, shared_ptr<RoutingStrategy>& routingStrat) : name(storeName), clientConfig(config), cluster(clust), clusterMap(map), routingStrategy(routingStrat) { } RoutedStore::~RoutedStore() { close(); } static bool doGetFromStore(const std::string& key, std::list<VersionedValue>** result, Node* node, Store* store) { *result = NULL; try { *result = store->get(key); node->setAvailable(true); return true; } catch (UnreachableStoreException& e) { /* XXX - TODO add real logging */ std::cerr << "WARNING: Could not read node: " << e.what() << std::endl; node->setAvailable(false); } return false; } std::list<VersionedValue>* RoutedStore::get(const std::string& key) const { std::list<VersionedValue>* result = NULL; bool status = false; { /* Start by routing to the preferred list one at a time. */ RoutingStrategy::prefListp prefList = routingStrategy->routeRequest(key); RoutingStrategy::prefList::const_iterator it; for (it = prefList->begin(); it != prefList->end(); ++it) { status = doGetFromStore(key, &result, it->get(), (*clusterMap)[(*it)->getId()].get()); if (status) return result; } } { /* If that fails just try every node in the cluster */ const Cluster::nodeMap* nm = cluster->getNodeMap(); Cluster::nodeMap::const_iterator it; for (it = nm->begin(); it != nm->end(); ++it) { if (it->second.get()->isAvailable(clientConfig->getNodeBannageMs())) { status = doGetFromStore(key, &result, it->second.get(), (*clusterMap)[it->first].get()); } if (status) return result; } } throw InsufficientOperationalNodesException("Could not reach any " "node for get operation"); } static bool doPutFromStore(const std::string& key, const VersionedValue& value, Node* node, Store* store) { try { store->put(key, value); node->setAvailable(true); return true; } catch (UnreachableStoreException& e) { /* XXX - TODO add logging */ cerr << "Setting node " << node->getId() << " unavailable because: " << e.what() << endl; node->setAvailable(false); } return false; } void RoutedStore::put(const std::string& key, const VersionedValue& value) { bool status = false; { /* Start by routing to the preferred list one at a time. 
*/ RoutingStrategy::prefListp prefList = routingStrategy->routeRequest(key); RoutingStrategy::prefList::const_iterator it; for (it = prefList->begin(); it != prefList->end(); ++it) { status = doPutFromStore(key, value, it->get(), (*clusterMap)[(*it)->getId()].get()); if (status) return; } } { /* If that fails just try every node in the cluster */ const Cluster::nodeMap* nm = cluster->getNodeMap(); Cluster::nodeMap::const_iterator it; for (it = nm->begin(); it != nm->end(); ++it) { if (it->second.get()->isAvailable(clientConfig->getNodeBannageMs())) { status = doPutFromStore(key, value, it->second.get(), (*clusterMap)[it->first].get()); } if (status) return; } } throw InsufficientOperationalNodesException("Could not reach any " "node for put operation"); } static bool doDeleteFromStore(const std::string& key, const Version& version, bool* result, Node* node, Store* store) { try { *result = store->deleteKey(key, version); node->setAvailable(true); return true; } catch (UnreachableStoreException& e) { node->setAvailable(false); } return false; } bool RoutedStore::deleteKey(const std::string& key, const Version& version) { bool status = false; bool result = false; { /* Start by routing to the preferred list one at a time. */ RoutingStrategy::prefListp prefList = routingStrategy->routeRequest(key); RoutingStrategy::prefList::const_iterator it; for (it = prefList->begin(); it != prefList->end(); ++it) { status = doDeleteFromStore(key, version, &result, it->get(), (*clusterMap)[(*it)->getId()].get()); if (status) return result; } } { /* If that fails just try every node in the cluster */ const Cluster::nodeMap* nm = cluster->getNodeMap(); Cluster::nodeMap::const_iterator it; for (it = nm->begin(); it != nm->end(); ++it) { if (it->second.get()->isAvailable(clientConfig->getNodeBannageMs())) { status = doDeleteFromStore(key, version, &result, it->second.get(), (*clusterMap)[it->first].get()); } if (status) return result; } } throw InsufficientOperationalNodesException("Could not reach any " "node for delete operation"); } const std::string* RoutedStore::getName() const { return &name; } void RoutedStore::close() { std::map<int, shared_ptr<Store> >::const_iterator it; for (it = clusterMap->begin(); it != clusterMap->end(); ++it) { it->second->close(); } } } /* namespace Voldemort */
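RoutedStore's get, put, and deleteKey all follow the same shape: try the routing strategy's preference list first, then fall back to every available node in the cluster, and raise only when nothing is reachable. A compact Python sketch of that pattern (names and types here are illustrative, not part of the Voldemort client API):

# Illustrative only: preferred-replicas-first routing with cluster-wide fallback.
def routed_get(key, preferred_nodes, all_nodes, fetch):
    """Try preferred replicas, then any available node; fail only if all fail."""
    for node in preferred_nodes:
        try:
            return fetch(node, key)
        except ConnectionError:
            node.available = False        # mirrors node->setAvailable(false)
    for node in all_nodes:
        if node.available:
            try:
                return fetch(node, key)
            except ConnectionError:
                node.available = False
    raise RuntimeError("Could not reach any node for get operation")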
use std::cmp::Ordering; use data::*; use mem_backend::MemBackend; pub fn interpret<M>(inst: Instruction, mem: &mut M, state: State) -> Result<State, M::Error> where M: MemBackend + ?Sized { use data::Function::*; let Instruction(f, reg, op) = inst; Ok(match f { Bad => panic!("bad instruction"), Nop => state, Set => state.register_modify(reg, |_| state.operand(op)), Load => { let val = mem.load(state.operand(op))?; state.register_modify(reg, |_| val) }, Store => { mem.store(state.operand(op), state.register(reg))?; state }, Cmp => State { flags: { let mut new_flags = Flags { cmp_l: false, cmp_g: false, cmp_e: false, ..state.flags }; match state.register(reg).cmp(&state.operand(op)) { Ordering::Less => { new_flags.cmp_l = true; }, Ordering::Greater => { new_flags.cmp_g = true; }, Ordering::Equal => { new_flags.cmp_e = true; } } new_flags }, ..state }, Branch => state.branch(op), BranchL => if state.flags.cmp_l { state.branch(op) } else { state }, BranchG => if state.flags.cmp_g { state.branch(op) } else { state }, BranchE => if state.flags.cmp_e { state.branch(op) } else { state }, BranchNE => if !state.flags.cmp_e { state.branch(op) } else { state }, GetSp => state.register_modify(reg, |_| state.sp), SetSp => State { sp: state.operand(op), ..state }, Push => { let new_sp = state.sp - 1; mem.store(new_sp, state.register(reg))?; State { sp: new_sp, ..state } }, Pop => { let val = mem.load(state.sp)?; State { sp: state.sp + 1, ..state.register_modify(reg, |_| val) } }, Call => { let new_sp = state.sp - 1; mem.store(new_sp, state.ip)?; State { sp: new_sp, ip: state.operand(op), ..state } }, Ret => { let val = mem.load(state.sp)?; State { sp: state.sp + 1, ip: val, ..state } }, Add => state.register_modify(reg, |x| x.wrapping_add(state.operand(op))), Sub => state.register_modify(reg, |x| x.wrapping_sub(state.operand(op))), Mul => state.register_modify(reg, |x| x * state.operand(op)), Div => state.register_modify(reg, |x| x / state.operand(op)), DivMod => { let x = state.register(reg); let y = state.operand(op); // Always puts the result in registers C, D State { c: x / y, d: x % y, ..state } }, Not => state.register_modify(reg, |x| !x), And => state.register_modify(reg, |x| x & state.operand(op)), Or => state.register_modify(reg, |x| x | state.operand(op)), Xor => state.register_modify(reg, |x| x ^ state.operand(op)), Lsh => state.register_modify(reg, |x| x << state.operand(op)), Rsh => state.register_modify(reg, |x| x >> state.operand(op)), Halt => State { halt: true, ..state }, IntSw => handle_interrupt(state.operand(op), mem, state)?, IntHw => State { int_outgoing: Some(state.operand(op)), ..state }, IntPause => State { flags: Flags { int_pause: true, ..state.flags }, ..state }, IntCont => State { flags: Flags { int_pause: false, ..state.flags }, ..state }, IntHGet => state.register_modify(reg, |_| state.inth), IntHSet => State { inth: state.operand(op), ..state }, IntExit => { let a = mem.load(state.sp + 0)?; let ip = mem.load(state.sp + 1)?; let flags = Flags::from(mem.load(state.sp + 2)?); let new_sp = state.sp + 3; State { sp: new_sp, ip: ip, a: a, flags: flags, ..state } }, Trace => { if let Operand::Const(0) = op { info!("trace(ip={:#010x}) {:#?}", state.ip, state); } else { info!("trace(ip={:#010x}) {:?} = {:#010x}", state.ip, op, state.operand(op)); } state }, }) } pub fn handle_interrupt<M>(code: u32, mem: &mut M, state: State) -> Result<State, M::Error> where M: MemBackend + ?Sized { if state.inth == 0 { Ok(state) } else { let new_sp = state.sp - 3; mem.store(new_sp + 0, state.a)?; 
mem.store(new_sp + 1, state.ip)?; mem.store(new_sp + 2, state.flags.into())?; Ok(State { a: code, sp: new_sp, ip: state.inth, halt: false, flags: Flags { int_pause: true, ..Flags::default() }, ..state }) } } #[cfg(test)] mod tests { use data::*; use data::Function::*; use data::Operand::*; use data::Register::*; // Test-friendly boiler-plate free version fn interpret(instruction: Instruction, mem: &mut [u32], state: State) -> State { super::interpret(instruction, mem, state).unwrap_or_else(|addr| { panic!("out of bounds: {:x}", addr); }) } fn interprets(instructions: &[Instruction], mem: &mut [u32], state: State) -> State { instructions.iter().cloned() .fold(state, |state, inst| interpret(inst, mem, state)) } fn memless_test(state0: State, instructions: &[Instruction], state1: State) { let state1_actual = interprets(instructions, &mut vec![], state0); assert_eq!(state1_actual, state1); } #[test] #[should_panic] fn interpret_bad() { memless_test( State::default(), &[ Instruction(Bad, A, Const(0)) ], State::default() ) } #[test] fn interpret_nop() { memless_test( State::default(), &[ Instruction(Nop, A, Const(0)) ], State::default() ); } #[test] fn interpret_set() { let mut mem = vec![]; let n = 0x8381adef; let state0 = State::default(); let state1 = interpret(Instruction(Set, B, Const(n)), &mut mem, state0); let state2 = interpret(Instruction(Set, C, Reg(B)), &mut mem, state1); assert_eq!(state1, State { b: n, ..state0 }); assert_eq!(state2, State { c: n, ..state1 }); } #[test] fn interpret_load() { let mut mem = vec![0x0403_0201, 0x4030_2010]; let mem_orig = mem.clone(); let state0 = State { d: 0x0000_0001, ..State::default() }; let state1 = interprets( &[ Instruction(Load, A, Const(0x0000_0000)), Instruction(Load, B, Reg(D)) ], &mut mem, state0 ); assert_eq!(state1, State { a: 0x0403_0201, b: 0x4030_2010, ..state0 }); assert_eq!(mem, mem_orig); } #[test] fn interpret_store() { let mut mem = vec![0; 2]; let state0 = State { a: 0xaabb_ccdd, b: 0xeeff_1111, d: 0x0000_0001, ..State::default() }; let state1 = interprets( &[ Instruction(Store, A, Const(0x0000_0000)), Instruction(Store, B, Reg(D)), ], &mut mem, state0 ); assert_eq!(state1, state0); assert_eq!(&mem, &[0xaabb_ccdd, 0xeeff_1111]); } #[test] fn interpret_cmp() { let mut mem = vec![]; let state0 = State { a: 1, b: 2, c: 3, d: 2, ..State::default() }; let fd = Flags::default(); let cases = &[ (A, Reg(B), Flags { cmp_l: true, cmp_g: false, cmp_e: false, ..fd }), (C, Reg(B), Flags { cmp_l: false, cmp_g: true, cmp_e: false, ..fd }), (D, Reg(B), Flags { cmp_l: false, cmp_g: false, cmp_e: true, ..fd }), (A, Const(2), Flags { cmp_l: true, cmp_g: false, cmp_e: false, ..fd }), (B, Const(2), Flags { cmp_l: false, cmp_g: false, cmp_e: true, ..fd }), (C, Const(2), Flags { cmp_l: false, cmp_g: true, cmp_e: false, ..fd }), ]; for &(reg, op, flags) in cases { let state1 = interpret(Instruction(Cmp, reg, op), &mut mem, state0); assert_eq!(state1, State { flags: flags, ..state0 }); } } #[test] fn interpret_branch() { let mut mem = vec![]; let state0 = State { b: 0xdead_beef, ..State::default() }; // register parameter shouldn't matter let state1 = interpret(Instruction(Branch, A, Const(0xabab_0202)), &mut mem, state0); let state2 = interpret(Instruction(Branch, C, Reg(B)), &mut mem, state1); assert_eq!(state1, State { ip: 0xabab_0202, ..state0 }); assert_eq!(state2, State { ip: 0xdead_beef, ..state1 }); } fn test_branch_flag<F>(f: Function, mut should_branch: F) where F: FnMut(Flags) -> bool { let mut mem = vec![]; for flag_bits in 0x0..0x8 { let 
flags = Flags { cmp_l: flag_bits & 0x1 != 0, cmp_g: flag_bits & 0x2 != 0, cmp_e: flag_bits & 0x4 != 0, ..Flags::default() }; let state0 = State { b: 0xf3f3_1a1a, flags: flags, ..State::default() }; // register parameter shouldn't matter let state1 = interpret(Instruction(f, D, Const(0xeaea_0000)), &mut mem, state0); let state2 = interpret(Instruction(f, B, Reg(B)), &mut mem, state1); if should_branch(flags) { assert_eq!(state1, State { ip: 0xeaea_0000, ..state0 }); assert_eq!(state2, State { ip: 0xf3f3_1a1a, ..state1 }); } else { assert_eq!(state1, state0); assert_eq!(state2, state1); } } } #[test] fn interpret_branchl() { test_branch_flag(BranchL, |flags| flags.cmp_l); } #[test] fn interpret_branchg() { test_branch_flag(BranchG, |flags| flags.cmp_g); } #[test] fn interpret_branche() { test_branch_flag(BranchE, |flags| flags.cmp_e); } #[test] fn interpret_branchne() { test_branch_flag(BranchNE, |flags| !flags.cmp_e); } #[test] fn interpret_getsp() { let state0 = State { sp: 0x30, ..State::default() }; let state1 = State { b: 0x30, ..state0 }; memless_test( state0, &[ Instruction(GetSp, B, Const(0)) ], state1 ); } #[test] fn interpret_setsp() { let state0 = State { b: 0x100, ..State::default() }; let state1 = State { sp: 0x100, ..state0 }; memless_test( state0, &[ Instruction(SetSp, A, Reg(B)) ], state1 ); memless_test( state0, &[ Instruction(SetSp, A, Const(0x100)) ], state1 ); } #[test] fn interpret_push() { let mut mem = vec![0; 0x04]; let state0 = State { a: 0xaaa, b: 0xf000_baaa, sp: 0x02, ..State::default() }; let state1 = interprets( &[ Instruction(Push, A, Reg(C)), Instruction(Push, B, Const(0xdead_beef)), ], &mut mem, state0 ); assert_eq!(state1, State { sp: 0x00, ..state0 }); assert_eq!(&mem, &[0xf000_baaa, 0xaaa, 0, 0]); } #[test] fn interpret_pop() { let mut mem = vec![0xdeadbeef, 0xaaa, 0, 0]; let mem_orig = mem.clone(); let state0 = State { sp: 0x00, ..State::default() }; let state1 = interprets( &[ Instruction(Pop, A, Const(0)), Instruction(Pop, B, Const(0)), ], &mut mem, state0 ); assert_eq!(state1, State { a: 0xdead_beef, b: 0x0000_0aaa, sp: 0x02, ..state0 }); assert_eq!(mem, mem_orig); } #[test] fn interpret_call() { let mut mem = vec![0; 0x04]; let state0 = State { a: 0xaaa, ip: 0xbbb, sp: 0x02, ..State::default() }; let state1 = interprets( &[ Instruction(Call, C, Reg(A)), Instruction(Call, D, Const(0xdead_beef)), ], &mut mem, state0 ); assert_eq!(state1, State { sp: 0x00, ip: 0xdead_beef, ..state0 }); assert_eq!(&mem, &[0xaaa, 0xbbb, 0x0, 0x0]); } #[test] fn interpret_ret() { let mut mem = vec![0xdeadbeef, 0xaaa, 0, 0]; let mem_orig = mem.clone(); let state0 = State { sp: 0x00, ..State::default() }; let state1 = interprets( &[ Instruction(Ret, A, Const(0)), ], &mut mem, state0 ); assert_eq!(state1, State { ip: 0xdead_beef, sp: 0x01, ..state0 }); assert_eq!(mem, mem_orig); } #[test] fn interpret_add() { let state0 = State::default(); let state1 = State { a: 5, b: 10, ..state0 }; memless_test( state0, &[ Instruction(Add, A, Const(5)), Instruction(Add, B, Reg(A)), Instruction(Add, B, Reg(B)), ], state1 ); } #[test] fn interpret_sub() { let state0 = State { a: 5, b: 10, ..State::default() }; let state1 = State { a: 3, b: 7, ..state0 }; memless_test( state0, &[ Instruction(Sub, A, Const(2)), Instruction(Sub, B, Reg(A)), ], state1 ); } #[test] fn interpret_mul() { let state0 = State { a: 6, ..State::default() }; let state1 = State { a: 144, ..state0 }; memless_test( state0, &[ Instruction(Mul, A, Const(2)), Instruction(Mul, A, Reg(A)), ], state1 ); } #[test] fn 
interpret_div() { let state0 = State { a: 100, b: 100, ..State::default() }; let state1 = State { a: 50, b: 2, ..state0 }; memless_test( state0, &[ Instruction(Div, A, Const(2)), Instruction(Div, B, Reg(A)), ], state1 ); } #[test] fn interpret_divmod() { let state0 = State { a: 28, b: 9, ..State::default() }; let state1 = State { c: 3, d: 1, ..state0 }; memless_test( state0, &[ Instruction(DivMod, A, Reg(B)) ], state1 ); } #[test] fn interpret_not() { let state0 = State { d: 0xff00_ff00, a: 0x0000_0000, ..State::default() }; let state1 = State { d: 0x00ff_00ff, a: 0xffff_ffff, ..state0 }; memless_test( state0, &[ Instruction(Not, A, Reg(B)), // Reg(B) should be ignored Instruction(Not, D, Const(0)), ], state1 ); } #[test] fn interpret_and() { let state0 = State { a: 0x0f0f_0f0f, b: 0x3232_3232, c: 0x3232_3232, ..State::default() }; let state1 = State { b: 0x0202_0202, c: 0x3030_3030, ..state0 }; memless_test( state0, &[ Instruction(And, B, Reg(A)), Instruction(And, C, Const(0xf0f0_f0f0)), ], state1 ); } #[test] fn interpret_or() { let state0 = State { a: 0x1010_0101, b: 0x0101_0000, ..State::default() }; let state1 = State { a: 0x1111_1111, b: 0x0101_1010, ..state0 }; memless_test( state0, &[ Instruction(Or, B, Const(0x0000_1010)), Instruction(Or, A, Reg(B)), ], state1 ); } #[test] fn interpret_xor() { let state0 = State { a: 0x1010_0101, b: 0x1111_1111, ..State::default() }; let state1 = State { a: 0x0101_1010, b: 0x1010_0101, ..state0 }; memless_test( state0, &[ Instruction(Xor, A, Reg(B)), Instruction(Xor, B, Const(0x0101_1010)), ], state1 ); } #[test] fn interpret_lsh() { let state0 = State { a: 0x0000_1111, b: 4, ..State::default() }; let state1 = State { a: 0x1110_0000, ..state0 }; memless_test( state0, &[ Instruction(Lsh, A, Reg(B)), Instruction(Lsh, A, Const(16)), ], state1 ); } #[test] fn interpret_rsh() { let state0 = State { a: 0x1110_0000, b: 4, ..State::default() }; let state1 = State { a: 0x0000_0111, ..state0 }; memless_test( state0, &[ Instruction(Rsh, A, Reg(B)), Instruction(Rsh, A, Const(16)), ], state1 ); } #[test] fn interpret_halt() { let state0 = State { halt: false, ..State::default() }; let state1 = State { halt: true, ..state0 }; memless_test( state0, &[ Instruction(Halt, A, Const(0)) ], state1 ); } }
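The Cmp instruction above stores its verdict in three flags rather than returning a value, and the Branch* instructions consume those flags. A few lines of Python restate that mechanism (an illustration of the semantics, not part of this crate):

# Illustrative restatement of Cmp followed by BranchL from the Rust above.
def cmp_flags(a, b):
    """Compute the cmp_l / cmp_g / cmp_e flags exactly as Cmp does."""
    return {"cmp_l": a < b, "cmp_g": a > b, "cmp_e": a == b}

def branch_l(ip, target, flags):
    """BranchL: jump to target only when the 'less' flag is set."""
    return target if flags["cmp_l"] else ip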
/** * QrtzTriggers generated by hbm2java */ @Entity @Table(name="QRTZ_TRIGGERS") public class QrtzTriggers implements java.io.Serializable { private QrtzTriggersId id; private QrtzJobDetails qrtzJobDetails; private String description; private Long nextFireTime; private Long prevFireTime; private Long priority; private String triggerState; private String triggerType; private long startTime; private Long endTime; private String calendarName; private Byte misfireInstr; private Blob jobData; private QrtzCronTriggers qrtzCronTriggers; /* private QrtzBlobTriggers qrtzBlobTriggers; private QrtzSimpropTriggers qrtzSimpropTriggers; private QrtzSimpleTriggers qrtzSimpleTriggers; */ public QrtzTriggers() { } public QrtzTriggers(QrtzTriggersId id, QrtzJobDetails qrtzJobDetails, String triggerState, String triggerType, long startTime) { this.id = id; this.qrtzJobDetails = qrtzJobDetails; this.triggerState = triggerState; this.triggerType = triggerType; this.startTime = startTime; } /* public QrtzTriggers(QrtzTriggersId id, QrtzJobDetails qrtzJobDetails, String description, Long nextFireTime, Long prevFireTime, Long priority, String triggerState, String triggerType, long startTime, Long endTime, String calendarName, Byte misfireInstr, Blob jobData, QrtzCronTriggers qrtzCronTriggers, QrtzBlobTriggers qrtzBlobTriggers, QrtzSimpropTriggers qrtzSimpropTriggers, QrtzSimpleTriggers qrtzSimpleTriggers) { this.id = id; this.qrtzJobDetails = qrtzJobDetails; this.description = description; this.nextFireTime = nextFireTime; this.prevFireTime = prevFireTime; this.priority = priority; this.triggerState = triggerState; this.triggerType = triggerType; this.startTime = startTime; this.endTime = endTime; this.calendarName = calendarName; this.misfireInstr = misfireInstr; this.jobData = jobData; this.qrtzCronTriggers = qrtzCronTriggers; this.qrtzBlobTriggers = qrtzBlobTriggers; this.qrtzSimpropTriggers = qrtzSimpropTriggers; this.qrtzSimpleTriggers = qrtzSimpleTriggers; }*/ @EmbeddedId @AttributeOverrides( { @AttributeOverride(name="schedName", column=@Column(name="SCHED_NAME", nullable=false, length=120) ), @AttributeOverride(name="triggerName", column=@Column(name="TRIGGER_NAME", nullable=false, length=200) ), @AttributeOverride(name="triggerGroup", column=@Column(name="TRIGGER_GROUP", nullable=false, length=200) ) } ) public QrtzTriggersId getId() { return this.id; } public void setId(QrtzTriggersId id) { this.id = id; } @ManyToOne(fetch=FetchType.LAZY) @JoinColumns( { @JoinColumn(name="SCHED_NAME", referencedColumnName="SCHED_NAME", nullable=false, insertable=false, updatable=false), @JoinColumn(name="JOB_NAME", referencedColumnName="JOB_NAME", nullable=false, insertable=false, updatable=false), @JoinColumn(name="JOB_GROUP", referencedColumnName="JOB_GROUP", nullable=false, insertable=false, updatable=false) } ) public QrtzJobDetails getQrtzJobDetails() { return this.qrtzJobDetails; } public void setQrtzJobDetails(QrtzJobDetails qrtzJobDetails) { this.qrtzJobDetails = qrtzJobDetails; } @Column(name="DESCRIPTION", length=250) public String getDescription() { return this.description; } public void setDescription(String description) { this.description = description; } @Column(name="NEXT_FIRE_TIME", precision=13, scale=0) public Long getNextFireTime() { return this.nextFireTime; } public void setNextFireTime(Long nextFireTime) { this.nextFireTime = nextFireTime; } @Column(name="PREV_FIRE_TIME", precision=13, scale=0) public Long getPrevFireTime() { return this.prevFireTime; } public void setPrevFireTime(Long 
prevFireTime) { this.prevFireTime = prevFireTime; } @Column(name="PRIORITY", precision=13, scale=0) public Long getPriority() { return this.priority; } public void setPriority(Long priority) { this.priority = priority; } @Column(name="TRIGGER_STATE", nullable=false, length=16) public String getTriggerState() { return this.triggerState; } public void setTriggerState(String triggerState) { this.triggerState = triggerState; } @Column(name="TRIGGER_TYPE", nullable=false, length=8) public String getTriggerType() { return this.triggerType; } public void setTriggerType(String triggerType) { this.triggerType = triggerType; } @Column(name="START_TIME", nullable=false, precision=13, scale=0) public long getStartTime() { return this.startTime; } public void setStartTime(long startTime) { this.startTime = startTime; } @Column(name="END_TIME", precision=13, scale=0) public Long getEndTime() { return this.endTime; } public void setEndTime(Long endTime) { this.endTime = endTime; } @Column(name="CALENDAR_NAME", length=200) public String getCalendarName() { return this.calendarName; } public void setCalendarName(String calendarName) { this.calendarName = calendarName; } @Column(name="MISFIRE_INSTR", precision=2, scale=0) public Byte getMisfireInstr() { return this.misfireInstr; } public void setMisfireInstr(Byte misfireInstr) { this.misfireInstr = misfireInstr; } @Column(name="JOB_DATA") public Blob getJobData() { return this.jobData; } public void setJobData(Blob jobData) { this.jobData = jobData; } @OneToOne(fetch=FetchType.LAZY, mappedBy="qrtzTriggers") public QrtzCronTriggers getQrtzCronTriggers() { return this.qrtzCronTriggers; } public void setQrtzCronTriggers(QrtzCronTriggers qrtzCronTriggers) { this.qrtzCronTriggers = qrtzCronTriggers; } /* @OneToOne(fetch=FetchType.LAZY, mappedBy="qrtzTriggers") public QrtzBlobTriggers getQrtzBlobTriggers() { return this.qrtzBlobTriggers; } public void setQrtzBlobTriggers(QrtzBlobTriggers qrtzBlobTriggers) { this.qrtzBlobTriggers = qrtzBlobTriggers; } @OneToOne(fetch=FetchType.LAZY, mappedBy="qrtzTriggers") public QrtzSimpropTriggers getQrtzSimpropTriggers() { return this.qrtzSimpropTriggers; } public void setQrtzSimpropTriggers(QrtzSimpropTriggers qrtzSimpropTriggers) { this.qrtzSimpropTriggers = qrtzSimpropTriggers; } @OneToOne(fetch=FetchType.LAZY, mappedBy="qrtzTriggers") public QrtzSimpleTriggers getQrtzSimpleTriggers() { return this.qrtzSimpleTriggers; } public void setQrtzSimpleTriggers(QrtzSimpleTriggers qrtzSimpleTriggers) { this.qrtzSimpleTriggers = qrtzSimpleTriggers; } */ @Override public String toString() { return "QrtzTriggers{" + "id=" + id + ", qrtzJobDetails=" + qrtzJobDetails + ", description=" + description + ", nextFireTime=" + nextFireTime + ", prevFireTime=" + prevFireTime + ", priority=" + priority + ", triggerState=" + triggerState + ", triggerType=" + triggerType + ", startTime=" + startTime + ", endTime=" + endTime + ", calendarName=" + calendarName + ", misfireInstr=" + misfireInstr + ", jobData=" + jobData + '}'; } }
import { sendCommand } from "./protocol/mod.ts";
import type { Raw, RedisValue } from "./protocol/mod.ts";
import type { Backoff } from "./backoff.ts";
import { exponentialBackoff } from "./backoff.ts";
import { ErrorReplyError } from "./errors.ts";
import {
  BufReader,
  BufWriter,
} from "./vendor/https/deno.land/std/io/buffer.ts";
import { delay } from "./vendor/https/deno.land/std/async/delay.ts";

type Closer = Deno.Closer;

export interface Connection {
  closer: Closer;
  reader: BufReader;
  writer: BufWriter;
  maxRetryCount: number;
  isClosed: boolean;
  isConnected: boolean;
  isRetriable: boolean;
  close(): void;
  connect(): Promise<void>;
  reconnect(): Promise<void>;
}

export interface RedisConnectionOptions {
  tls?: boolean;
  db?: number;
  password?: string;
  username?: string;
  name?: string;
  /**
   * @default 10
   */
  maxRetryCount?: number;
  backoff?: Backoff;
}

export class RedisConnection implements Connection {
  name: string | null = null;
  closer!: Closer;
  reader!: BufReader;
  writer!: BufWriter;
  maxRetryCount = 10;

  private readonly hostname: string;
  private readonly port: number | string;
  private retryCount = 0;
  private _isClosed = false;
  private _isConnected = false;
  private backoff: Backoff;

  get isClosed(): boolean {
    return this._isClosed;
  }

  get isConnected(): boolean {
    return this._isConnected;
  }

  get isRetriable(): boolean {
    return this.maxRetryCount > 0;
  }

  constructor(
    hostname: string,
    port: number | string,
    private options: RedisConnectionOptions,
  ) {
    this.hostname = hostname;
    this.port = port;
    if (options.name) {
      this.name = options.name;
    }
    if (options.maxRetryCount != null) {
      this.maxRetryCount = options.maxRetryCount;
    }
    this.backoff = options.backoff ?? exponentialBackoff();
  }

  private async authenticate(
    username: string | undefined,
    password: string,
  ): Promise<void> {
    try {
      password && username
        ? await this.sendCommand("AUTH", username, password)
        : await this.sendCommand("AUTH", password);
    } catch (error) {
      if (error instanceof ErrorReplyError) {
        throw new AuthenticationError("Authentication failed", {
          cause: error,
        });
      } else {
        throw error;
      }
    }
  }

  private async selectDb(
    db: number | undefined = this.options.db,
  ): Promise<void> {
    if (!db) throw new Error("The database index is undefined.");
    await this.sendCommand("SELECT", db);
  }

  private async sendCommand(
    command: string,
    ...args: Array<RedisValue>
  ): Promise<Raw> {
    const reply = await sendCommand(this.writer, this.reader, command, ...args);
    return reply.value();
  }

  /**
   * Connect to Redis server
   */
  async connect(): Promise<void> {
    try {
      const dialOpts: Deno.ConnectOptions = {
        hostname: this.hostname,
        port: parsePortLike(this.port),
      };
      const conn: Deno.Conn = this.options?.tls
        ? await Deno.connectTls(dialOpts)
        : await Deno.connect(dialOpts);
      this.closer = conn;
      this.reader = new BufReader(conn);
      this.writer = new BufWriter(conn);
      this._isClosed = false;
      this._isConnected = true;
      try {
        if (this.options.password != null) {
          await this.authenticate(this.options.username, this.options.password);
        }
        if (this.options.db) {
          await this.selectDb(this.options.db);
        }
      } catch (error) {
        this.close();
        throw error;
      }
      this.retryCount = 0;
    } catch (error) {
      if (error instanceof AuthenticationError) {
        this.retryCount = 0;
        throw (error.cause ?? error);
      }

      if (this.retryCount++ >= this.maxRetryCount) {
        this.retryCount = 0;
        throw error;
      }

      const backoff = this.backoff(this.retryCount);
      await delay(backoff);
      await this.connect();
    }
  }

  close() {
    this._isClosed = true;
    this._isConnected = false;
    try {
      this.closer!.close();
    } catch (error) {
      if (!(error instanceof Deno.errors.BadResource)) throw error;
    }
  }

  async reconnect(): Promise<void> {
    if (!this.reader.peek(1)) {
      throw new Error("Client is closed.");
    }
    try {
      await this.sendCommand("PING");
      this._isConnected = true;
    } catch (_error) {
      // TODO: Maybe we should log this error.
      this.close();
      await this.connect();
      await this.sendCommand("PING");
    }
  }
}

class AuthenticationError extends Error {}

function parsePortLike(port: string | number | undefined): number {
  let parsedPort: number;
  if (typeof port === "string") {
    parsedPort = parseInt(port);
  } else if (typeof port === "number") {
    parsedPort = port;
  } else {
    parsedPort = 6379;
  }
  if (!Number.isSafeInteger(parsedPort)) {
    throw new Error("Port is invalid");
  }
  return parsedPort;
}
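The connect() retry loop above is plain exponential backoff: each failed attempt waits longer before retrying, up to maxRetryCount. A language-neutral sketch of the same control flow (Python for brevity; the function names, base, and cap are illustrative assumptions, not the values used by exponentialBackoff):

import random
import time

def connect_with_backoff(connect, max_retry_count=10, base=0.5, cap=5.0):
    """Retry `connect` with exponential backoff, as in RedisConnection.connect."""
    for attempt in range(max_retry_count + 1):
        try:
            return connect()
        except OSError:
            if attempt == max_retry_count:
                raise
            # the delay grows exponentially, with jitter, up to a cap
            time.sleep(min(cap, base * 2 ** attempt) * random.uniform(0.5, 1.0))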
package tree

import (
	"container/heap"
	"encoding/json"
	"fmt"
	"sort"
	"strings"
)

var (
	maxHeapSorter    = func(r1, r2 result) bool { return r1.Score > r2.Score }
	increasingSorter = func(r1, r2 result) bool { return r1.Score < r2.Score }
)

type (
	result struct {
		Filename string
		Score    float64
	}

	results []result

	sorter struct {
		r      results
		sortBy func(r1, r2 result) bool
	}
)

func (s sorter) Len() int           { return len(s.r) }
func (s sorter) Less(i, j int) bool { return s.sortBy(s.r[i], s.r[j]) }
func (s sorter) Swap(i, j int)      { s.r[i], s.r[j] = s.r[j], s.r[i] }

func (s *sorter) Push(x interface{}) {
	s.r = append(s.r, x.(result))
}

func (s *sorter) Pop() interface{} {
	old := s.r
	n := len(old)
	x := old[n-1]
	s.r = old[0 : n-1]
	return x
}

func (r results) Top(limit uint) results {
	if len(r) <= int(limit) {
		s := sorter{
			r:      r,
			sortBy: increasingSorter,
		}
		sort.Sort(s)
		return s.r
	}

	top := sorter{
		r:      r[:limit],
		sortBy: maxHeapSorter,
	}
	heap.Init(&top)
	for _, res := range r[limit:] {
		if res.Score < top.r[0].Score {
			heap.Pop(&top)
			heap.Push(&top, res)
		}
	}

	top.sortBy = increasingSorter
	sort.Sort(top)
	return top.r
}

func NewFormatter(results results) *formatter {
	return &formatter{
		results: results,
	}
}

type formatter struct {
	results results
	json    bool
}

func (f *formatter) JSON(value bool) *formatter {
	f.json = value
	return f
}

func (f formatter) String() string {
	if f.json {
		v, _ := json.Marshal(f.results)
		return string(v)
	}

	var builder strings.Builder
	for _, result := range f.results {
		builder.WriteString(fmt.Sprintln(result))
	}
	return builder.String()
}
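Top keeps the `limit` lowest-scoring results using a bounded max-heap (the root is the worst of the current keepers, so a new result only enters if it beats the root), then sorts the survivors ascending. The same idea in Python with heapq (an illustration, not part of this package):

import heapq

def top(results, limit):
    """Keep the `limit` results with the smallest score, ascending, like results.Top."""
    heap = []  # max-heap via negated scores, never larger than `limit`
    for name, score in results:
        if len(heap) < limit:
            heapq.heappush(heap, (-score, name))
        elif score < -heap[0][0]:
            # better (smaller) than the current worst keeper: replace the root
            heapq.heapreplace(heap, (-score, name))
    return sorted(((name, -neg) for neg, name in heap), key=lambda r: r[1])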
/** * This class is used to implement the unimplemented methods of * <b>ViewFolderDAO</b>. This class contains the methods to related to View * Folders which require database access. */ public class PGViewFolderDAO implements ViewFolderDAO { static transient Logger logger = Logger .getLogger("com.ibm.safr.we.internal.data.dao.PGViewFolderDAO"); private static final String TABLE_NAME = "VIEWFOLDER"; private static final String COL_ENVID = "ENVIRONID"; private static final String COL_ID = "VIEWFOLDERID"; private static final String COL_NAME = "NAME"; private static final String COL_COMMENT = "COMMENTS"; private static final String COL_CREATETIME = "CREATEDTIMESTAMP"; private static final String COL_CREATEBY = "CREATEDUSERID"; private static final String COL_MODIFYTIME = "LASTMODTIMESTAMP"; private static final String COL_MODIFYBY = "LASTMODUSERID"; private Connection con; private ConnectionParameters params; private UserSessionParameters safrLogin; private PGSQLGenerator generator = new PGSQLGenerator(); public PGViewFolderDAO(Connection con, ConnectionParameters params, UserSessionParameters safrlogin) { this.con = con; this.params = params; this.safrLogin = safrlogin; } public List<ViewFolderQueryBean> queryAllViewFolders(Integer environmentId, SortType sortType) throws DAOException { List<ViewFolderQueryBean> result = new ArrayList<ViewFolderQueryBean>(); boolean admin = SAFRApplication.getUserSession().isSystemAdministrator(); String orderString = null; if (sortType.equals(SortType.SORT_BY_ID)) { orderString = "VF.VIEWFOLDERID"; } else { orderString = "UPPER(VF.NAME)"; } try { String selectString = ""; if (admin) { selectString = "SELECT VF.VIEWFOLDERID,VF.NAME, " + "VF.CREATEDTIMESTAMP, VF.CREATEDUSERID, VF.LASTMODTIMESTAMP, VF.LASTMODUSERID FROM " + params.getSchema() + ".VIEWFOLDER VF " + "WHERE VF.ENVIRONID = ? " + " ORDER BY " + orderString; } else { selectString = "SELECT VF.VIEWFOLDERID,VF.NAME, L.RIGHTS, " + "VF.CREATEDTIMESTAMP, VF.CREATEDUSERID, VF.LASTMODTIMESTAMP, VF.LASTMODUSERID FROM " + params.getSchema() + ".VIEWFOLDER VF " + "LEFT OUTER JOIN " + params.getSchema() + ".SECVIEWFOLDER L " + "ON VF.ENVIRONID = L.ENVIRONID AND VF.VIEWFOLDERID = L.VIEWFOLDERID " + " AND L.GROUPID=" + SAFRApplication.getUserSession().getGroup().getId() + " " + "WHERE VF.ENVIRONID = ? " + " ORDER BY " + orderString; } PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectString); pst.setInt(1, environmentId); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } while (rs.next()) { ViewFolderQueryBean viewFolderBean = new ViewFolderQueryBean( environmentId, rs.getInt(COL_ID), DataUtilities.trimString(rs.getString(COL_NAME)), admin ? 
EditRights.ReadModifyDelete : SAFRApplication.getUserSession().getEditRights( rs.getInt("RIGHTS"), ComponentType.ViewFolder, environmentId), rs.getDate(COL_CREATETIME), DataUtilities.trimString(rs.getString(COL_CREATEBY)), rs.getDate(COL_MODIFYTIME), DataUtilities.trimString(rs.getString(COL_MODIFYBY))); result.add(viewFolderBean); } pst.close(); rs.close(); return result; } catch (SQLException e) { throw DataUtilities.createDAOException( "Database error occurred while querying all View Folders.",e); } } public List<ViewFolderQueryBean> queryViewFolders(Integer environmentId, Integer groupId, boolean isSystemAdmin, SortType sortType) throws DAOException { List<ViewFolderQueryBean> result = new ArrayList<ViewFolderQueryBean>(); String orderString = null; if (sortType.equals(SortType.SORT_BY_ID)) { orderString = "VF.VIEWFOLDERID"; } else { orderString = "UPPER(VF.NAME)"; } try { String selectString = ""; if (isSystemAdmin) { selectString = "SELECT VF.VIEWFOLDERID,VF.NAME, " + "VF.CREATEDTIMESTAMP, VF.CREATEDUSERID, VF.LASTMODTIMESTAMP, VF.LASTMODUSERID FROM " + params.getSchema() + ".VIEWFOLDER VF " + "WHERE VF.ENVIRONID = ? " + " ORDER BY " + orderString; } else { selectString = "SELECT VF.VIEWFOLDERID,VF.NAME, L.RIGHTS, " + "VF.CREATEDTIMESTAMP, VF.CREATEDUSERID, VF.LASTMODTIMESTAMP, VF.LASTMODUSERID FROM " + params.getSchema() + ".VIEWFOLDER VF " + "LEFT OUTER JOIN " + params.getSchema() + ".SECVIEWFOLDER L " + "ON VF.ENVIRONID = L.ENVIRONID AND VF.VIEWFOLDERID = L.VIEWFOLDERID " + " AND L.GROUPID = ? " + "WHERE VF.ENVIRONID = ? " + " ORDER BY " + orderString; } PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectString); if(isSystemAdmin) { pst.setInt(1, environmentId); } else { pst.setInt(1, groupId); pst.setInt(2, environmentId); } rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } while (rs.next()) { ViewFolderQueryBean viewFolderBean = new ViewFolderQueryBean( environmentId, rs.getInt(COL_ID), DataUtilities.trimString(rs.getString(COL_NAME)), isSystemAdmin ? 
EditRights.ReadModifyDelete : SAFRApplication.getUserSession().getEditRights( rs.getInt("RIGHTS"), ComponentType.ViewFolder, environmentId), rs.getDate(COL_CREATETIME), DataUtilities.trimString(rs.getString(COL_CREATEBY)), rs.getDate(COL_MODIFYTIME), DataUtilities.trimString(rs.getString(COL_MODIFYBY))); result.add(viewFolderBean); } pst.close(); rs.close(); return result; } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while querying View Folders for the specified Group and/or Environment.",e); } } public ViewFolderTransfer getViewFolder(Integer id, Integer environmentId) throws DAOException { ViewFolderTransfer result = null; try { List<String> idNames = new ArrayList<String>(); idNames.add(COL_ID); idNames.add(COL_ENVID); String selectString = generator.getSelectStatement(params .getSchema(), TABLE_NAME, idNames, null); PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectString); pst.setInt(1, id); pst.setInt(2, environmentId); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } if (rs.next()) { result = generateTransfer(rs); } else { logger.info("No such View Folder in Env " + environmentId + " with id : "+ id); } pst.close(); rs.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while retrieving the View Folder with id ["+ id + "]", e); } return result; } /** * This method is used to generate a transfer object for the View Folder * * @param rs * : The resultset of a database query run on VIEWFOLDER table * with which the values for the transfer objects are set. * @return A transfer object for the View Folder with values set according * to the resultset. * @throws SQLException */ private ViewFolderTransfer generateTransfer(ResultSet rs) throws SQLException { ViewFolderTransfer viewFolder = new ViewFolderTransfer(); viewFolder.setEnvironmentId(rs.getInt(COL_ENVID)); viewFolder.setId(rs.getInt(COL_ID)); viewFolder.setName(DataUtilities.trimString(rs.getString(COL_NAME))); viewFolder.setComments(DataUtilities.trimString(rs.getString(COL_COMMENT))); viewFolder.setCreateTime(rs.getDate(COL_CREATETIME)); viewFolder.setCreateBy(DataUtilities.trimString(rs.getString(COL_CREATEBY))); viewFolder.setModifyTime(rs.getDate(COL_MODIFYTIME)); viewFolder.setModifyBy(DataUtilities.trimString(rs.getString(COL_MODIFYBY))); return viewFolder; } public ViewFolderTransfer persistViewFolder(ViewFolderTransfer viewFolder) throws DAOException, SAFRNotFoundException { if (!viewFolder.isPersistent()) { return (createViewFolder(viewFolder)); } else { return (updateViewFolder(viewFolder)); } } /** * This method is used to create a View Folder in VIEWFOLDER table * * @param viewFolder * : The transfer object which contains the values which are to * be set in the columns for the corresponding View Folder which * is being created. * @return The transfer object which contains the values which are received * from the VIEWFOLDER for the View Folder which is created. 
* @throws DAOException */ private ViewFolderTransfer createViewFolder(ViewFolderTransfer viewFolder) throws DAOException { try { String[] columnNames = { COL_ENVID, COL_NAME, COL_COMMENT, COL_CREATETIME, COL_CREATEBY, COL_MODIFYTIME, COL_MODIFYBY }; List<String> names = new ArrayList<String>(Arrays.asList(columnNames)); if (viewFolder.isForImportOrMigration()) { names.add(1, COL_ID); } String statement = generator.getInsertStatement(params.getSchema(), TABLE_NAME, COL_ID, names, !viewFolder.isForImportOrMigration()); PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(statement); int i = 1; pst.setInt(i++, viewFolder.getEnvironmentId()); if (viewFolder.isForImportOrMigration()) { pst.setInt(i++, viewFolder.getId()); } pst.setString(i++, viewFolder.getName()); pst.setString(i++, viewFolder.getComments()); if (viewFolder.isForImportOrMigration()) { pst.setTimestamp(i++, DataUtilities.getTimeStamp(viewFolder.getCreateTime())); } pst.setString(i++, viewFolder.isForImportOrMigration() ? viewFolder.getCreateBy() : safrLogin.getUserId()); if (viewFolder.isForImportOrMigration()) { pst.setTimestamp(i++, DataUtilities.getTimeStamp(viewFolder.getModifyTime())); } pst.setString(i++,viewFolder.isForImportOrMigration() ? viewFolder.getModifyBy() : safrLogin.getUserId()); rs = pst.executeQuery(); rs.next(); int id = rs.getInt(1); viewFolder.setPersistent(true); viewFolder.setId(id); if (!viewFolder.isForImportOrMigration()) { viewFolder.setCreateBy(safrLogin.getUserId()); viewFolder.setCreateTime(rs.getDate(2)); viewFolder.setModifyBy(safrLogin.getUserId()); viewFolder.setModifyTime(rs.getDate(3)); } break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while creating a new View Folder.",e); } return viewFolder; } /** * This method is used to update a View Folder in VIEWFOLDER table * * @param viewFolder * : The transfer object which contains the values which are to * be set in the columns for the corresponding View Folder which * is being updated. * @return The transfer object which contains the values which are received * from the VIEWFOLDER for the View Folder which is updated. * @throws DAOException * @throws SAFRNotFoundException */ private ViewFolderTransfer updateViewFolder(ViewFolderTransfer viewFolder) throws DAOException, SAFRNotFoundException { boolean isImportOrMigrate = viewFolder.isForImport() || viewFolder.isForMigration() ? 
true : false; boolean useCurrentTS = !isImportOrMigrate; try { String[] columnNames = { COL_NAME, COL_COMMENT, COL_MODIFYTIME, COL_MODIFYBY }; List<String> names = new ArrayList<String>(Arrays.asList(columnNames)); if (isImportOrMigrate) { names.add(COL_CREATETIME); names.add(COL_CREATEBY); } List<String> idNames = new ArrayList<String>(); idNames.add(COL_ID); idNames.add(COL_ENVID); String statement = generator.getUpdateStatement(params.getSchema(), TABLE_NAME, names, idNames, useCurrentTS); PreparedStatement pst = null; while (true) { try { pst = con.prepareStatement(statement); int i = 1; pst.setString(i++, viewFolder.getName()); pst.setString(i++, viewFolder.getComments()); if (isImportOrMigrate) { // createby and lastmod set from source component pst.setTimestamp(i++, DataUtilities.getTimeStamp(viewFolder.getCreateTime())); pst.setString(i++, viewFolder.getCreateBy()); pst.setTimestamp(i++, DataUtilities.getTimeStamp(viewFolder.getModifyTime())); pst.setString(i++, viewFolder.getModifyBy()); } else { // createby details are untouched // lastmodtimestamp is CURRENT_TIMESTAMP // lastmoduserid is logged in user pst.setString(i++, safrLogin.getUserId()); } pst.setInt(i++, viewFolder.getId()); pst.setInt(i++, viewFolder.getEnvironmentId()); if (useCurrentTS) { ResultSet rs = pst.executeQuery(); rs.next(); viewFolder.setModifyTime(rs.getDate(1)); viewFolder.setModifyBy(safrLogin.getUserId()); rs.close(); pst.close(); } else { int count = pst.executeUpdate(); if (count == 0) { throw new SAFRNotFoundException("No Rows updated."); } pst.close(); } break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while updating the View Folder.",e); } return viewFolder; } public void removeViewFolder(Integer id, Integer environmentId) throws DAOException { try { List<String> idNames = new ArrayList<String>(); idNames.add(COL_ID); idNames.add(COL_ENVID); // removing its association with any Group String deleteAssocQuery = generator.getDeleteStatement(params .getSchema(), "SECVIEWFOLDER", idNames); PreparedStatement pst = null; while (true) { try { pst = con.prepareStatement(deleteAssocQuery); pst.setInt(1, id); pst.setInt(2, environmentId); pst.execute(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); // removing its association with any Group String deleteViewQuery = generator.getDeleteStatement(params .getSchema(), "VFVASSOC", idNames); while (true) { try { pst = con.prepareStatement(deleteViewQuery); pst.setInt(1, id); pst.setInt(2, environmentId); pst.execute(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); String statement = generator.getDeleteStatement(params.getSchema(), TABLE_NAME, idNames); pst = null; while (true) { try { pst = con.prepareStatement(statement); pst.setInt(1, id); pst.setInt(2, environmentId); pst.execute(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw 
DataUtilities.createDAOException("Database error occurred while deleting the View Folder.",e); } } public ViewFolderTransfer getDuplicateViewFolder(String viewFolderName, Integer viewFolderId, Integer environmentId) throws DAOException { ViewFolderTransfer result = null; try { String statement = generator.getDuplicateComponent(params .getSchema(), TABLE_NAME, COL_ENVID, COL_NAME, COL_ID); PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(statement); int i = 1; pst.setInt(i++, environmentId); pst.setString(i++, viewFolderName.toUpperCase()); pst.setInt(i++, viewFolderId); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } if (rs.next()) { result = generateTransfer(rs); logger.info("Existing View Folder with name '" + viewFolderName + "' found in Environment [" + environmentId + "]"); } pst.close(); rs.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while retrieving a duplicate View Folder.",e); } return result; } public Integer getCountOfViewsInViewFolder(Integer viewFolderId, Integer environmentId) throws DAOException { Integer count = 0; try { String statement = "Select Count(VIEWID) from " + params.getSchema() + ".VFVASSOC where ENVIRONID =? AND VIEWFOLDERID =?"; PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(statement); int i = 1; pst.setInt(i++, environmentId); pst.setInt(i++, viewFolderId); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } if (rs.next()) { count = rs.getInt(1); } pst.close(); rs.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while retrieving count of views in View Folder.",e); } return count; } public List<DependentComponentTransfer> getUserDependencies( Integer viewFolderId) throws DAOException { List<DependentComponentTransfer> userDependencies = new ArrayList<DependentComponentTransfer>(); try { String selectDependentLRs = "Select A.USERID From " + params.getSchema() + ".USER A where A.DEFFOLDERID = ?"; PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectDependentLRs); pst.setInt(1, viewFolderId); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } while (rs.next()) { DependentComponentTransfer depCompTransfer = new DependentComponentTransfer(); depCompTransfer.setId(-1); depCompTransfer.setName(DataUtilities.trimString(rs.getString("USERID"))); userDependencies.add(depCompTransfer); } pst.close(); rs.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while retrieving user dependencies of View Folder.",e); } return userDependencies; } @Override public void deleteAssociatedViews(Integer environmentId, List<Integer> deletionIds) { String placeHolders = generator.getPlaceholders(deletionIds.size()); try { if (deletionIds.isEmpty()) { deletionIds.add(0); } String deleteAssocViews = "DELETE FROM " + params.getSchema() + ".VFVASSOC A " + "WHERE A.ENVIRONID = ? 
" + "AND A.VFVASSOCID IN ( " + placeHolders + " )"; PreparedStatement pst = null; while (true) { try { pst = con.prepareStatement(deleteAssocViews); pst.setInt(1, environmentId); int ndx = 2; for( int i = 0 ; i < deletionIds.size(); i++ ) { pst.setInt(ndx++, deletionIds.get(i)); } pst.execute(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while deleting associated Views of View Folder.",e); } } public void persistAssociatedViews( List<ViewFolderViewAssociationTransfer> componentAssociationTransfers) throws DAOException { List<ViewFolderViewAssociationTransfer> associatedViewCreate = new ArrayList<ViewFolderViewAssociationTransfer>(); List<ViewFolderViewAssociationTransfer> associatedViewUpdate = new ArrayList<ViewFolderViewAssociationTransfer>(); for (ViewFolderViewAssociationTransfer associatedView : componentAssociationTransfers) { if (!associatedView.isPersistent()) { associatedViewCreate.add(associatedView); } else { associatedViewUpdate.add(associatedView); } } if (associatedViewCreate.size() > 0) { createAssociatedViews(associatedViewCreate); } if (associatedViewUpdate.size() > 0) { updateAssociatedViews(associatedViewUpdate); } } private void createAssociatedViews(List<ViewFolderViewAssociationTransfer> associatedViewsCreate) { // data is either all imported or migrated or none of it is boolean isImportOrMigrate = associatedViewsCreate.get(0).isForImport() || associatedViewsCreate.get(0).isForMigration() ? true : false; boolean useCurrentTS = !isImportOrMigrate; try { String[] columnNames = { COL_ENVID, "VIEWFOLDERID", "VIEWID", COL_CREATETIME, COL_CREATEBY, COL_MODIFYTIME, COL_MODIFYBY }; List<String> names = new ArrayList<String>(Arrays.asList(columnNames)); if (isImportOrMigrate) { names.add(1, "VFVASSOCID"); } PreparedStatement pst = null; ResultSet rs = null; while (true) { try { for (ViewFolderViewAssociationTransfer associatedViewtoCreate : associatedViewsCreate) { if (pst == null) { String statement = generator.getInsertStatement( params.getSchema(), "VFVASSOC", "VFVASSOCID", names,useCurrentTS); pst = con.prepareStatement(statement); } int i = 1; pst.setInt(i++, associatedViewtoCreate.getEnvironmentId()); if (isImportOrMigrate) { pst.setInt(i++, associatedViewtoCreate.getAssociationId()); } pst.setInt(i++, associatedViewtoCreate.getAssociatingComponentId()); pst.setInt(i++, associatedViewtoCreate.getAssociatedComponentId()); if (isImportOrMigrate) { pst.setTimestamp(i++, DataUtilities.getTimeStamp(associatedViewtoCreate.getCreateTime())); } pst.setString(i++,isImportOrMigrate ? associatedViewtoCreate.getCreateBy() : safrLogin.getUserId()); if (isImportOrMigrate) { pst.setTimestamp(i++, DataUtilities.getTimeStamp(associatedViewtoCreate.getModifyTime())); } pst.setString(i++,isImportOrMigrate ? 
associatedViewtoCreate.getModifyBy() : safrLogin.getUserId()); rs = pst.executeQuery(); rs.next(); int id = rs.getInt(1); rs.close(); associatedViewtoCreate.setAssociationId(id); associatedViewtoCreate.setPersistent(true); } break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while creating associations of View Folder with Views.",e); } } private void updateAssociatedViews(List<ViewFolderViewAssociationTransfer> associatedViewUpdate) { // data is either all imported or migrated or none of it is boolean isImportOrMigrate = associatedViewUpdate.get(0).isForImport() || associatedViewUpdate.get(0).isForMigration() ? true : false; boolean useCurrentTS = !isImportOrMigrate; try { List<String> names = new ArrayList<String>(); names.add("VIEWFOLDERID"); names.add("VIEWID"); if (isImportOrMigrate) { names.add(COL_CREATETIME); names.add(COL_CREATEBY); names.add(COL_MODIFYTIME); names.add(COL_MODIFYBY); } else { names.add(COL_MODIFYTIME); names.add(COL_MODIFYBY); } List<String> idNames = new ArrayList<String>(); idNames.add("VFVASSOCID"); idNames.add(COL_ENVID); PreparedStatement pst = null; while (true) { try { for (ComponentAssociationTransfer associatedViewtoUpdate : associatedViewUpdate) { if (pst == null) { String statement = generator.getUpdateStatement( params.getSchema(), "VFVASSOC", names, idNames, useCurrentTS); pst = con.prepareStatement(statement); } int i = 1; pst.setInt(i++, associatedViewtoUpdate.getAssociatingComponentId()); pst.setInt(i++, associatedViewtoUpdate.getAssociatedComponentId()); if (isImportOrMigrate) { // created and lastmod details set from source component pst.setTimestamp(i++, DataUtilities.getTimeStamp(associatedViewtoUpdate.getCreateTime())); pst.setString(i++, associatedViewtoUpdate.getCreateBy()); pst.setTimestamp(i++, DataUtilities.getTimeStamp(associatedViewtoUpdate.getModifyTime())); pst.setString(i++, associatedViewtoUpdate.getModifyBy()); } else { pst.setString(i++, safrLogin.getUserId()); } pst.setInt(i++, associatedViewtoUpdate.getAssociationId()); pst.setInt(i++, associatedViewtoUpdate.getEnvironmentId()); if ( useCurrentTS ) { ResultSet rs = pst.executeQuery(); rs.next(); associatedViewtoUpdate.setModifyBy(safrLogin.getUserId()); associatedViewtoUpdate.setModifyTime(rs.getDate(1)); rs.close(); pst.close(); } else { int count = pst.executeUpdate(); if (count == 0) { throw new SAFRNotFoundException("No Rows updated."); } pst.close(); } } break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while updating associations of View Folder with Views.",e); } } @Override public List<ViewFolderViewAssociationTransfer> getVFVAssociation(Integer environmentId, Integer id) { List<ViewFolderViewAssociationTransfer> result = new ArrayList<ViewFolderViewAssociationTransfer>(); try { String schema = params.getSchema(); String selectString = null; boolean admin = SAFRApplication.getUserSession().isSystemAdministrator(); if (admin) { selectString = "Select A.NAME AS VFNAME, A.VIEWFOLDERID, B.VIEWID, C.NAME AS VIEWNAME, B.VFVASSOCID, " + "B.CREATEDTIMESTAMP, B.CREATEDUSERID, B.LASTMODTIMESTAMP, B.LASTMODUSERID 
" + "From " + schema + ".VIEWFOLDER A, " + schema + ".VFVASSOC B, " + schema + ".VIEW C " + "Where A.ENVIRONID = ? " + " AND A.VIEWFOLDERID = ? " + " AND A.ENVIRONID = B.ENVIRONID AND A.VIEWFOLDERID = B.VIEWFOLDERID" + " AND B.ENVIRONID = C.ENVIRONID AND B.VIEWID = C.VIEWID " + "Order By B.VFVASSOCID"; } else { selectString = "Select A.NAME AS VFNAME, A.VIEWFOLDERID, B.VIEWID, C.NAME AS VIEWNAME, B.VFVASSOCID, D.RIGHTS, " + "B.CREATEDTIMESTAMP, B.CREATEDUSERID, B.LASTMODTIMESTAMP, B.LASTMODUSERID " + "From " + schema + ".VIEWFOLDER A INNER JOIN " + schema + ".VFVASSOC B ON A.ENVIRONID = B.ENVIRONID AND A.VIEWFOLDERID = B.VIEWFOLDERID INNER JOIN " + schema + ".VIEW C ON B.ENVIRONID = C.ENVIRONID AND B.VIEWID = C.VIEWID LEFT OUTER JOIN " + schema + ".SECVIEW D ON C.ENVIRONID = D.ENVIRONID AND C.VIEWID = D.VIEWID " + " AND D.GROUPID = " + SAFRApplication.getUserSession().getGroup().getId() + " Where A.ENVIRONID = ? " + " AND A.VIEWFOLDERID = ? " + " Order By B.VFVASSOCID"; } PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectString); pst.setInt(1, environmentId); pst.setInt(2, id); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } while (rs.next()) { ViewFolderViewAssociationTransfer vfvAssociationTransfer = new ViewFolderViewAssociationTransfer(); vfvAssociationTransfer.setEnvironmentId(environmentId); vfvAssociationTransfer.setAssociatingComponentId(rs.getInt("VIEWFOLDERID")); vfvAssociationTransfer.setAssociatingComponentName( DataUtilities.trimString(rs.getString("VFNAME"))); vfvAssociationTransfer.setAssociatedComponentId(rs.getInt("VIEWID")); vfvAssociationTransfer.setAssociatedComponentName( DataUtilities.trimString(rs.getString("VIEWNAME"))); vfvAssociationTransfer.setAssociationId(rs.getInt("VFVASSOCID")); vfvAssociationTransfer.setAssociatedComponentRights( admin ? EditRights.ReadModifyDelete : SAFRApplication.getUserSession().getEditRights( rs.getInt("RIGHTS"), ComponentType.View, environmentId)); vfvAssociationTransfer.setCreateTime(rs.getDate(COL_CREATETIME)); vfvAssociationTransfer.setCreateBy(DataUtilities.trimString(rs.getString(COL_CREATEBY))); vfvAssociationTransfer.setModifyTime(rs.getDate(COL_MODIFYTIME)); vfvAssociationTransfer.setModifyBy(DataUtilities.trimString(rs.getString(COL_MODIFYBY))); result.add(vfvAssociationTransfer); } pst.close(); rs.close(); return result; } catch (SQLException e) { String msg = "Database error occurred while retrieving view associations for Environment ["+ environmentId + "]."; throw DataUtilities.createDAOException(msg, e); } } @Override public List<ViewFolderViewAssociationTransfer> getVFVAssociations(Integer environmentId) { List<ViewFolderViewAssociationTransfer> result = new ArrayList<ViewFolderViewAssociationTransfer>(); try { String schema = params.getSchema(); String selectString = null; boolean admin = SAFRApplication.getUserSession().isSystemAdministrator(); if (admin) { selectString = "Select A.NAME AS VFNAME, A.VIEWFOLDERID, B.VIEWID, C.NAME AS VIEWNAME, B.VFVASSOCID, " + "B.CREATEDTIMESTAMP, B.CREATEDUSERID, B.LASTMODTIMESTAMP, B.LASTMODUSERID " + "From " + schema + ".VIEWFOLDER A, " + schema + ".VFVASSOC B, " + schema + ".VIEW C " + "Where A.ENVIRONID = ? 
" + " AND A.ENVIRONID = B.ENVIRONID AND A.VIEWFOLDERID = B.VIEWFOLDERID" + " AND B.ENVIRONID = C.ENVIRONID AND B.VIEWID = C.VIEWID " + "Order By B.VFVASSOCID"; } else { selectString = "Select A.NAME AS VFNAME, A.VIEWFOLDERID, B.VIEWID, C.NAME AS VIEWNAME, B.VFVASSOCID, D.RIGHTS, " + "B.CREATEDTIMESTAMP, B.CREATEDUSERID, B.LASTMODTIMESTAMP, B.LASTMODUSERID " + "From " + schema + ".VIEWFOLDER A INNER JOIN " + schema + ".VFVASSOC B ON A.ENVIRONID = B.ENVIRONID AND A.VIEWFOLDERID = B.VIEWFOLDERID INNER JOIN " + schema + ".VIEW C ON B.ENVIRONID = C.ENVIRONID AND B.VIEWID = C.VIEWID LEFT OUTER JOIN " + schema + ".SECVIEW D ON C.ENVIRONID = D.ENVIRONID AND C.VIEWID = D.VIEWID " + " AND D.GROUPID = " + SAFRApplication.getUserSession().getGroup().getId() + " Where A.ENVIRONID = ? " + " Order By B.VFVASSOCID"; } PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectString); pst.setInt(1, environmentId); rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } while (rs.next()) { ViewFolderViewAssociationTransfer vfvAssociationTransfer = new ViewFolderViewAssociationTransfer(); vfvAssociationTransfer.setEnvironmentId(environmentId); vfvAssociationTransfer.setAssociatingComponentId(rs.getInt("VIEWFOLDERID")); vfvAssociationTransfer.setAssociatingComponentName( DataUtilities.trimString(rs.getString("VFNAME"))); vfvAssociationTransfer.setAssociatedComponentId(rs.getInt("VIEWID")); vfvAssociationTransfer.setAssociatedComponentName( DataUtilities.trimString(rs.getString("VIEWNAME"))); vfvAssociationTransfer.setAssociationId(rs.getInt("VFVASSOCID")); vfvAssociationTransfer.setAssociatedComponentRights( admin ? EditRights.ReadModifyDelete : SAFRApplication.getUserSession().getEditRights( rs.getInt("RIGHTS"), ComponentType.View, environmentId)); vfvAssociationTransfer.setCreateTime(rs.getDate(COL_CREATETIME)); vfvAssociationTransfer.setCreateBy(DataUtilities.trimString(rs.getString(COL_CREATEBY))); vfvAssociationTransfer.setModifyTime(rs.getDate(COL_MODIFYTIME)); vfvAssociationTransfer.setModifyBy(DataUtilities.trimString(rs.getString(COL_MODIFYBY))); result.add(vfvAssociationTransfer); } pst.close(); rs.close(); return result; } catch (SQLException e) { String msg = "Database error occurred while retrieving view associations for Environment ["+ environmentId + "]."; throw DataUtilities.createDAOException(msg, e); } } @Override public List<ViewQueryBean> queryPossibleViewAssociations(int environmentId, List<Integer> notInParam) { String placeHolders = generator.getPlaceholders(notInParam.size()); List<ViewQueryBean> result = new ArrayList<ViewQueryBean>(); try { boolean admin = SAFRApplication.getUserSession().isSystemAdministrator(); if (notInParam.isEmpty()) notInParam.add(0); String selectString; if (admin) { selectString = "Select VIEWID, NAME From " + params.getSchema() + ".VIEW" + " Where ENVIRONID = ? " + " AND VIEWID > 0" + " AND VIEWID NOT IN ( " + placeHolders + " )" + " Order By VIEWID"; } else { selectString = "Select A.VIEWID, A.NAME,L.RIGHTS From " + params.getSchema() + ".VIEW A " + "LEFT OUTER JOIN " + params.getSchema() + ".SECVIEW L " + "ON A.ENVIRONID = L.ENVIRONID AND A.VIEWID = L.VIEWID" + " AND L.GROUPID = " + SAFRApplication.getUserSession().getGroup().getId() + " Where A.ENVIRONID = ? 
" + " AND A.VIEWID > 0" + " AND A.VIEWID NOT IN ( " + placeHolders + " )" + " Order By A.VIEWID"; } PreparedStatement pst = null; ResultSet rs = null; while (true) { try { pst = con.prepareStatement(selectString); pst.setInt(1, environmentId); int ndx = 2; for( int i = 0 ; i < notInParam.size(); i++ ) { pst.setInt(ndx++, notInParam.get(i)); } rs = pst.executeQuery(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } while (rs.next()) { int i = 1; ViewQueryBean viewQueryBean = new ViewQueryBean( environmentId, rs.getInt(i++), DataUtilities.trimString(rs.getString(i++)), null,null,null, admin ? EditRights.ReadModifyDelete : SAFRApplication.getUserSession().getEditRights( rs.getInt("RIGHTS"), ComponentType.View, environmentId), null, null, null, null, null, null, null); result.add(viewQueryBean); } pst.close(); rs.close(); } catch (SQLException e) { throw DataUtilities.createDAOException( "Database error occurred while retrieving all the possible Views which can be associated with this View Folder.",e); } return result; } @Override public void clearAssociations() { try { String deleteString = "DELETE FROM " + params.getSchema() + ".x_vfvtbl"; PreparedStatement pst = null; while (true) { try { pst = con.prepareStatement(deleteString); pst.execute(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw DataUtilities.createDAOException( "Database error occurred while clearing View Folder associations",e); } } @Override public void addAllViewsAssociation(Integer viewId, Integer environmentId) { try { String[] columnNames = { COL_ENVID, "VIEWFOLDERID", "VIEWID", COL_CREATETIME, COL_CREATEBY, COL_MODIFYTIME, COL_MODIFYBY }; List<String> names = new ArrayList<String>(Arrays.asList(columnNames)); PreparedStatement pst = null; ResultSet rs = null; while (true) { try { String statement = generator.getInsertStatement( params.getSchema(), "VFVASSOC", "VFVASSOCID", names, true); pst = con.prepareStatement(statement); int i = 1; pst.setInt(i++, environmentId); pst.setInt(i++, 0); pst.setInt(i++, viewId); pst.setString(i++, safrLogin.getUserId()); pst.setString(i++, safrLogin.getUserId()); rs = pst.executeQuery(); rs.close(); break; } catch (SQLException se) { if (con.isClosed()) { // lost database connection, so reconnect and retry con = DAOFactoryHolder.getDAOFactory().reconnect(); } else { throw se; } } } pst.close(); } catch (SQLException e) { throw DataUtilities.createDAOException("Database error occurred while creating associations of View Folder with Views.",e); } } }
/** * The subtree rooted by the specified element is serialized to a string. * @param root the root of the subtree to be serialized (this may be any node, even a document) * @return the serialized string */ public String serializeToString(XMLDOMNode root) { if (root == null) { return ""; } if (root instanceof XMLDOMDocument) { root = ((XMLDOMDocument) root).getDocumentElement(); } else if (root instanceof XMLDOMDocumentFragment) { root = root.getFirstChild(); } if (root instanceof XMLDOMElement) { final StringBuilder builder = new StringBuilder(); final DomNode node = root.getDomNodeOrDie(); toXml(1, node, builder); builder.append("\r\n"); return builder.toString(); } return root.getDomNodeOrDie().asXml(); }
Father who beat to death man he caught raping his five-year-old daughter will NOT face charges because of Texas state laws on deadly force

A young father who beat his daughter's rapist to death after walking in on the assault will not face charges, a Texas grand jury has decided
The 23-year-old beat 47-year-old Jesus Flores to death for molesting his five-year-old girl in a secluded barn
The authorities accepted that he attempted to phone 911 to get help for the man after the attack
Under the law in the state of Texas, deadly force is authorized and justified in order to stop an aggravated sexual assault or sexual assault

A Texas father who discovered a man raping his five-year-old daughter and beat him to death with his bare hands will not be charged with homicide under state law.

A Lavaca County grand jury decided not to press charges against the 23-year-old father in the June 9th death of Jesus Mora Flores, 47, who was killed inside a remote shack after he was caught molesting the young girl.

Under Texas state law, deadly force is authorized, and indeed justified, in order to stop an aggravated sexual assault; that fact, coupled with harrowing 911 calls backing the father's claims that he even tried to save the pedophile's life, led to the grand jury's decision.

Lavaca County sheriff's deputies said that the father, whose name has not been released to protect the little girl's identity, sent her and her brother to feed the family's chickens.

The boy rushed back to tell his dad that someone had grabbed his sister and taken her to a small secluded shack, and the father rushed towards his daughter's screams and arrived to find them both with their underwear off.

No Charges: 25th Judicial District Attorney Heather McMinn, left, and Lavaca County Sheriff Micah Harmon appear at a news conference in Halletsville, Texas on Tuesday, June 19, 2012 in the aftermath of the killing

Flying into a rage, the father beat Flores unconscious, then attempted to call 911 for the rapist once he had made sure his daughter was safe.

Sheriff Micah Harmon had said in June that he was not willing to press charges against the father; rather, the case would be presented to a grand jury. At the time, Harmon said that the man was 'very remorseful' and didn't know at the time he had killed Flores.

'You have a right to defend your daughter,' Harmon told CNN at the time. 'The girl's father acted in defense of his third person. Once the investigation is completed we will submit it to the district attorney, who then submits it to the grand jury, who will decide if they will indict him.'

Indeed, the father is heard profanely screaming at a dispatcher who couldn't locate the property. Becoming increasingly frazzled, the father at one point tells the dispatcher he's going to put the man in his truck and drive him to a hospital before sheriff's deputies finally arrive.

V'Anne Huser, the father's attorney, sternly told reporters several times during a news conference at the Lavaca County courthouse that neither the father nor the family will ever give interviews.

'He's a peaceable soul,' Huser said. 'He had no intention to kill anybody that day.'

The attack happened on the family's ranch off a quiet, two-lane county road between the farming towns of Shiner and Yoakum.

Decision: Heather McMinn, district attorney for Guadalupe, Gonzales and Lavaca Counties, speaks at a news conference with Lavaca County Sheriff Micah Harmon, second from right, and V'Anne Huser, right, attorney for the father, in June

Authorities say a witness saw Flores 'forcibly carrying' the girl into a secluded area and then scrambled to find the father. Running toward his daughter's screams, investigators said, the father pulled Flores off his child and 'inflicted several blows to the man's head and neck area.'

Emergency crews found Flores' pants and underwear pulled down on his lifeless body by the time they responded to the 911 call.

The girl was taken to a hospital and examined, and authorities say forensic evidence and witness accounts corroborated the father's story that his daughter was being sexually molested.

'Under the law in the state of Texas deadly force is authorized and justified in order to stop an aggravated sexual assault or sexual assault,' District Attorney Heather McMinn told reporters in June.

Horror: This photo shows a building near Shiner, Texas, where authorities say a Texas father beat to death with his fists a man molesting his 5-year-old daughter on June 9th

'All the evidence provided by the sheriff's department and the Texas Rangers indicated that's what was occurring when the victim's father arrived at the scene,' she said.

Authorities said he expressed regret at the killing at the time, and no evidence so far has led them to doubt his story. The girl's grandfather agreed it had been an accident.

'My son. Sorry,' the grandfather told the Victoria Advocate in broken English. 'It was an accident.'

Lavaca County Sheriff Micah Harmon added: 'He was very remorseful. I don't think it was his intent for the man to die.'

Defense attorney V'Anne Huser speaks to the media during a news conference held by 25th Judicial District Attorney Heather McMinn and Lavaca County Sheriff Micah Harmon in Halletsville, Texas

Residents of the small Lavaca County town were largely in support of the father, saying the victim deserved it. Sonny Jaehne, a Shiner native, told the Victoria Advocate: 'He got what he deserved, big time.'

Friend Mark Harabis reiterated this: 'I agree with him totally. I would probably do worse. The family will have to deal with that the rest of their lives, no matter what happens to the father. Even if they let him go, he and his child will have to deal with that the rest of their lives.'
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { FormsModule } from '@angular/forms';
import { RouterModule } from '@angular/router';
import { NbStepperModule } from '@nebular/theme';
import { NgxEchartsModule } from 'ngx-echarts';
import { Ng2SmartTableModule } from 'ng2-smart-table';

import { ThemeModule } from '../../@theme/theme.module';
import { SmartTableService } from '../../@core/data/smart-table.service';
import { FieldsRoutingModule } from './fields-routing.module';
import { FieldsComponent } from './fields.component';
import { ViewfieldsComponent } from './viewfields/viewfields.component';
import { CreatefieldsComponent } from './createfields/createfields.component';
import { EdittodolisttableComponent } from './createfields/editTodolistTable.component';
import { ManagefieldsComponent } from './managefields/managefields.component';
import { EditfieldsComponent } from './editfields/editfields.component';
import { TodolistComponent } from './managefields/todolist.component';
import { ButtoncomponentComponent } from './managefields/buttoncomponent.component';

@NgModule({
  imports: [
    CommonModule,
    ThemeModule,
    NgxEchartsModule,
    Ng2SmartTableModule,
    FormsModule,
    NbStepperModule,
    RouterModule,
    FieldsRoutingModule,
  ],
  declarations: [
    FieldsComponent,
    ViewfieldsComponent,
    CreatefieldsComponent,
    EdittodolisttableComponent,
    ManagefieldsComponent,
    EditfieldsComponent,
    TodolistComponent,
    ButtoncomponentComponent,
  ],
  providers: [SmartTableService],
  entryComponents: [ButtoncomponentComponent],
})
export class FieldsModule {}
""" For backwards-compatibility. keep this file. (Many people are going to have key bindings that rely on this file.) """ from .app import * __all__ = [ # Old names. "HasArg", "HasCompletions", "HasFocus", "HasSelection", "HasValidationError", "IsDone", "IsReadOnly", "IsMultiline", "RendererHeightIsKnown", "InEditingMode", "InPasteMode", "ViMode", "ViNavigationMode", "ViInsertMode", "ViInsertMultipleMode", "ViReplaceMode", "ViSelectionMode", "ViWaitingForTextObjectMode", "ViDigraphMode", "EmacsMode", "EmacsInsertMode", "EmacsSelectionMode", "IsSearching", "HasSearch", "ControlIsSearchable", ] # Keep the original classnames for backwards compatibility. HasValidationError = lambda: has_validation_error HasArg = lambda: has_arg IsDone = lambda: is_done RendererHeightIsKnown = lambda: renderer_height_is_known ViNavigationMode = lambda: vi_navigation_mode InPasteMode = lambda: in_paste_mode EmacsMode = lambda: emacs_mode EmacsInsertMode = lambda: emacs_insert_mode ViMode = lambda: vi_mode IsSearching = lambda: is_searching HasSearch = lambda: is_searching ControlIsSearchable = lambda: control_is_searchable EmacsSelectionMode = lambda: emacs_selection_mode ViDigraphMode = lambda: vi_digraph_mode ViWaitingForTextObjectMode = lambda: vi_waiting_for_text_object_mode ViSelectionMode = lambda: vi_selection_mode ViReplaceMode = lambda: vi_replace_mode ViInsertMultipleMode = lambda: vi_insert_multiple_mode ViInsertMode = lambda: vi_insert_mode HasSelection = lambda: has_selection HasCompletions = lambda: has_completions IsReadOnly = lambda: is_read_only IsMultiline = lambda: is_multiline HasFocus = has_focus # No lambda here! (Has_focus is callable that returns a callable.) InEditingMode = in_editing_mode
export declare const callSwapCalls: string[];
// import { inject, async, addProviders, fakeAsync, tick } from '@angular/core/testing'; // import { TokenHelper } from '../../app/helpers/tokenHelper'; // import { ConfigService } from '../../app/services/configService' // describe('token helper', () => { // beforeEach(() => { // addProviders([TokenHelper, ConfigService]); // }); // it('should parse the token', inject([TokenHelper], (tokenHelper : TokenHelper) => { // var idToken = 'eyJ0eXAiOiJKV1QiLCJ9.eyJhdWQiOiI2NAif.t_a6R-PMxGoL4Ky_lAmjrgw&state=4854bff1-6ed4-47f8-87f8-8b98c4382178&session_state=854c31c9-e04a-462f-b1a3-bdf78e63a4a0'; // var token = '<KEY>'; // var currentUrl = tokenHelper.getCurrentURL(); // spyOn(tokenHelper, 'getCurrentURL').and.returnValue(`http://www.lunchorder.be/#id_token=${idToken}`); // var actualToken = tokenHelper.getToken(); // expect(actualToken).toBe(token); // })); // });
/*++ /* NAME /* dsn_buf 3 /* SUMMARY /* delivery status buffer /* SYNOPSIS /* #include <dsn_buf.h> /* /* typedef struct { /* .in +4 /* /* Convenience member */ /* DSN dsn; /* light-weight, dsn(3) */ /* /* Formal members... */ /* VSTRING *status; /* RFC 3463 */ /* VSTRING *action; /* RFC 3464 */ /* VSTRING *mtype; /* dns */ /* VSTRING *mname; /* host or domain */ /* VSTRING *dtype; /* smtp, x-unix */ /* VSTRING *dtext; /* RFC 2821, sysexits.h */ /* /* Informal members... */ /* VSTRING *reason; /* informal text */ /* .in -4 /* } DSN_BUF; /* /* DSN_BUF *dsb_create(void) /* /* DSN_BUF *dsb_update(dsb, status, action, mtype, mname, dtype, /* dtext, reason_fmt, ...) /* DSN_BUF *dsb; /* const char *status; /* const char *action; /* const char *mtype; /* const char *mname; /* const char *dtype; /* const char *dtext; /* const char *reason_fmt; /* /* DSN_BUF *dsb_simple(dsb, status, reason_fmt, ...) /* DSN_BUF *dsb; /* const char *status; /* const char *reason_fmt; /* /* DSN_BUF *dsb_unix(dsb, status, dtext, reason_fmt, ...) /* DSN_BUF *dsb; /* const char *status; /* const char *reason_fmt; /* /* DSN_BUF *dsb_formal(dsb, status, action, mtype, mname, dtype, /* dtext) /* DSN_BUF *dsb; /* const char *status; /* const char *action; /* const char *mtype; /* const char *mname; /* const char *dtype; /* const char *dtext; /* /* DSN_BUF *dsb_status(dsb, status) /* DSN_BUF *dsb; /* const char *status; /* /* void dsb_reset(dsb) /* DSN_BUF *dsb; /* /* void dsb_free(dsb) /* DSN_BUF *dsb; /* /* DSN *DSN_FROM_DSN_BUF(dsb) /* DSN_BUF *dsb; /* DESCRIPTION /* This module implements a simple to update delivery status /* buffer for Postfix-internal use. Typically it is filled in /* the course of delivery attempt, and then formatted into a /* DSN structure for external notification. /* /* dsb_create() creates initialized storage for formal RFC 3464 /* attributes, and human-readable informal text. /* /* dsb_update() updates all fields. /* /* dsb_simple() updates the status and informal text, and resets all /* other fields to defaults. /* /* dsb_unix() updates the status, diagnostic code, diagnostic /* text, and informal text, sets the diagnostic type to UNIX, /* and resets all other fields to defaults. /* /* dsb_formal() updates all fields except the informal text. /* /* dsb_status() updates the status field, and resets all /* formal fields to defaults. /* /* dsb_reset() resets all fields in a DSN_BUF structure without /* deallocating memory. /* /* dsb_free() recycles the storage that was allocated by /* dsb_create(), and so on. /* /* DSN_FROM_DSN_BUF() populates the DSN member with a shallow /* copy of the contents of the formal and informal fields, and /* returns a pointer to the DSN member. This is typically used /* for external reporting. /* /* Arguments: /* .IP dsb /* Delivery status buffer. /* .IP status /* RFC 3463 "enhanced" status code. /* .IP action /* RFC 3464 action code; specify DSB_DEF_ACTION to derive the /* action from the status value. The only values that really /* matter here are "expanded" and "relayed"; all other values /* are already implied by the context. /* .IP mtype /* The remote MTA type. /* The only valid type is DSB_MTYPE_DNS. The macro DSB_SKIP_RMTA /* conveniently expands into a null argument list for the /* remote MTA type and name. /* .IP mname /* Remote MTA name. /* .IP dtype /* The reply type. /* DSB_DTYPE_SMTP or DSB_DTYPE_UNIX. The macro DSB_SKIP_REPLY /* conveniently expands into a null argument list for the reply /* type and text. /* .IP dtext /* The reply text. 
The reply text is reset when dtype is /* DSB_SKIP_REPLY. /* .IP reason_fmt /* The informal reason format. /* SEE ALSO /* msg(3) diagnostics interface /* DIAGNOSTICS /* Fatal: out of memory. /* LICENSE /* .ad /* .fi /* The Secure Mailer license must be distributed with this software. /* AUTHOR(S) /* <NAME> /* <NAME> Research /* P.O. Box 704 /* Yorktown Heights, NY 10598, USA /*--*/ /* System library. */ #include <sys_defs.h> #include <stdlib.h> /* 44BSD stdarg.h uses abort() */ #include <stdarg.h> #include <string.h> /* Utility library. */ #include <msg.h> #include <mymalloc.h> #include <vstring.h> /* Global library. */ #include <dsn_buf.h> /* Application-specific. */ #define STR(x) vstring_str(x) /* dsb_create - create delivery status buffer */ DSN_BUF *dsb_create(void) { DSN_BUF *dsb; /* * Some fields aren't needed until we want to report an error. */ dsb = (DSN_BUF *) mymalloc(sizeof(*dsb)); dsb->status = vstring_alloc(10); dsb->action = vstring_alloc(10); dsb->mtype = vstring_alloc(10); dsb->mname = vstring_alloc(100); dsb->dtype = vstring_alloc(10); dsb->dtext = vstring_alloc(100); dsb->reason = vstring_alloc(100); return (dsb); } /* dsb_free - destroy storage */ void dsb_free(DSN_BUF *dsb) { vstring_free(dsb->status); vstring_free(dsb->action); vstring_free(dsb->mtype); vstring_free(dsb->mname); vstring_free(dsb->dtype); vstring_free(dsb->dtext); vstring_free(dsb->reason); myfree((void *) dsb); } /* * Initial versions of this code represented unavailable inputs with null * pointers, which produced fragile and hard to maintain code. The current * code uses empty strings instead of null pointers. * * For safety we keep the test for null pointers in input. It's cheap. */ #define DSB_TRUNCATE(s) \ do { VSTRING_RESET(s); VSTRING_TERMINATE(s); } while (0) #define NULL_OR_EMPTY(s) ((s) == 0 || *(s) == 0) #define DSB_ACTION(dsb, stat, act) \ vstring_strcpy((dsb)->action, !NULL_OR_EMPTY(act) ? (act) : "") #define DSB_MTA(dsb, type, name) do { \ if (NULL_OR_EMPTY(type) || NULL_OR_EMPTY(name)) { \ DSB_TRUNCATE((dsb)->mtype); \ DSB_TRUNCATE((dsb)->mname); \ } else { \ vstring_strcpy((dsb)->mtype, (type)); \ vstring_strcpy((dsb)->mname, (name)); \ } \ } while (0) #define DSB_DIAG(dsb, type, text) do { \ if (NULL_OR_EMPTY(type) || NULL_OR_EMPTY(text)) { \ DSB_TRUNCATE((dsb)->dtype); \ DSB_TRUNCATE((dsb)->dtext); \ } else { \ vstring_strcpy((dsb)->dtype, (type)); \ vstring_strcpy((dsb)->dtext, (text)); \ } \ } while (0) /* dsb_update - update formal attributes and informal text */ DSN_BUF *dsb_update(DSN_BUF *dsb, const char *status, const char *action, const char *mtype, const char *mname, const char *dtype, const char *dtext, const char *format,...) { va_list ap; vstring_strcpy(dsb->status, status); DSB_ACTION(dsb, status, action); DSB_MTA(dsb, mtype, mname); DSB_DIAG(dsb, dtype, dtext); va_start(ap, format); vstring_vsprintf(dsb->reason, format, ap); va_end(ap); return (dsb); } /* vdsb_simple - update status and informal text, va_list form */ DSN_BUF *vdsb_simple(DSN_BUF *dsb, const char *status, const char *format, va_list ap) { vstring_strcpy(dsb->status, status); DSB_TRUNCATE(dsb->action); DSB_TRUNCATE(dsb->mtype); DSB_TRUNCATE(dsb->mname); DSB_TRUNCATE(dsb->dtype); DSB_TRUNCATE(dsb->dtext); vstring_vsprintf(dsb->reason, format, ap); return (dsb); } /* dsb_simple - update status and informal text */ DSN_BUF *dsb_simple(DSN_BUF *dsb, const char *status, const char *format,...) 
{ va_list ap; va_start(ap, format); (void) vdsb_simple(dsb, status, format, ap); va_end(ap); return (dsb); } /* dsb_unix - update status, UNIX diagnostic and informal text */ DSN_BUF *dsb_unix(DSN_BUF *dsb, const char *status, const char *dtext, const char *format,...) { va_list ap; vstring_strcpy(dsb->status, status); DSB_TRUNCATE(dsb->action); DSB_TRUNCATE(dsb->mtype); DSB_TRUNCATE(dsb->mname); vstring_strcpy(dsb->dtype, DSB_DTYPE_UNIX); vstring_strcpy(dsb->dtext, dtext); va_start(ap, format); vstring_vsprintf(dsb->reason, format, ap); va_end(ap); return (dsb); } /* dsb_formal - update the formal fields */ DSN_BUF *dsb_formal(DSN_BUF *dsb, const char *status, const char *action, const char *mtype, const char *mname, const char *dtype, const char *dtext) { vstring_strcpy(dsb->status, status); DSB_ACTION(dsb, status, action); DSB_MTA(dsb, mtype, mname); DSB_DIAG(dsb, dtype, dtext); return (dsb); } /* dsb_status - update the status, reset other formal fields */ DSN_BUF *dsb_status(DSN_BUF *dsb, const char *status) { vstring_strcpy(dsb->status, status); DSB_TRUNCATE(dsb->action); DSB_TRUNCATE(dsb->mtype); DSB_TRUNCATE(dsb->mname); DSB_TRUNCATE(dsb->dtype); DSB_TRUNCATE(dsb->dtext); return (dsb); } /* dsb_reset - reset all fields */ void dsb_reset(DSN_BUF *dsb) { DSB_TRUNCATE(dsb->status); DSB_TRUNCATE(dsb->action); DSB_TRUNCATE(dsb->mtype); DSB_TRUNCATE(dsb->mname); DSB_TRUNCATE(dsb->dtype); DSB_TRUNCATE(dsb->dtext); DSB_TRUNCATE(dsb->reason); }
Trump: I’m ‘so confident’ health care bill will pass the Senate

President Donald Trump claimed victory Thursday after the House passed legislation to repeal and replace Obamacare. And he predicted more wins will follow.

“We won, and we’re gonna finish it off, and we’re gonna go on to a lot of other things,” Trump said Thursday in an address inside the White House Rose Garden.

Trump took a victory lap at the White House, celebrating the most significant legislative win to date of his administration and predicting similar success in the Senate as he spoke with a contingent of administration officials and House Republicans behind him — both literally and figuratively.

“We’re gonna get this passed through the Senate,” Trump said. “I feel so confident.”

The American Health Care Act passed the House on Thursday afternoon on a 217-213 vote and now heads to the Senate, where its fate is far less certain than Trump suggests.

Trump hailed the “great group” of people behind him who supported the legislation and credited the health care bill with uniting the party, although 20 House Republicans voted “no” on the bill.

“We have a lot of groups, but they all came together,” Trump said, shouting out the far-right House Freedom Caucus and the moderate Tuesday Group. “We have just developed a bond. This has really brought the Republican Party together.”

Trump declared Obamacare dead and made big promises for the future of health care. He offered broad praise of the bill — using vague descriptors such as “great” and “very, very incredibly well crafted” — but called on Republican leaders to “brag” about the bill in more detail.

“Yes, premiums will be coming down. Yes, deductibles will be coming down,” Trump pledged. “But very importantly, it’s a great plan, and ultimately that’s what it’s all about.”

“We knew that wasn’t going to work,” Trump said of Obamacare. “I predicted it a long time ago. I said it’s failing, and now it’s obvious that it’s failing. It’s dead. It’s essentially dead. If we don’t pay lots of ransom money over to the insurance companies it would die immediately. So what we have is something very, very incredibly well crafted.”

When Trump invited Republican leaders to tout the plan, he encouraged them to “say how good this plan is.”

“We don’t have to talk about this unbelievable victory — wasn’t it unbelievable? — so we don’t have to say it again,” he advised. “But it’s gonna be an unbelievable victory, actually, when we get it through the Senate, and there’s so much spirit there.”

Despite Trump’s celebratory declarations, Thursday’s vote was not without controversy. House Republicans are under fire for voting on a new version of the bill that hadn’t been scored by the nonpartisan Congressional Budget Office, whose projection of an earlier version of the bill estimated 24 million fewer Americans would be insured over the next decade.

But Trump looked back to his Republican allies for reassurance.

“Coming from a different world and only being a politician for a short period of time, how am I doing? Am I doing OK?” he asked. “I’m president. Hey, I’m president. Can you believe it? Right. I thought you need a little bit more time, they always told me, more time. But we didn’t.”

“And we are going to have a tremendous four years and, maybe even more importantly, we’re gonna have a tremendous eight years,” Trump added. “But we’re gonna start off with just a great first year.”
export interface IStorageService {
  // Interface for storage services
  uploadFile(imgData: string): Promise<string>;
}
Characterising Single and Two-Phase Homogeneous Isotropic Turbulence with Stagnation Points

Abstract: It has been shown that, for dense, sub-Kolmogorov particles advected in a turbulent flow, carrier phase properties can be reconstructed from the particles' velocity field. For that, the instantaneous particles' velocity field can be used to detect the stagnation points of the carrier phase. The Rice theorem can therefore be used, implying that the Taylor length is proportional to the mean distance between such stagnation points. As this model has only been tested for one-dimensional time signals, this work discusses whether it can be applied to two-phase, three-dimensional flows. We use direct numerical simulations with turbulent Reynolds numbers Re_λ between 40 and 520 and study particle-laden flows with a Stokes number of St = 0.5. We confirm that for the carrier phase, the Taylor length is proportional to the mean distance between stagnation points, with a proportionality coefficient that depends weakly on Re_λ. Then, we propose an interpolation scheme to reconstruct the stagnation points of the particles' velocity field. The results indicate that the Rice theorem cannot be applied in practice to two-phase three-dimensional turbulent flows, as the clustering of stagnation points forms very dense structures that require a very large number of particles to accurately sample the flow stagnation points.

Introduction

Turbulent flows laden with inertial particles are widely encountered in nature, playing a preeminent role in particle dispersion in the atmosphere, rain formation and marine snow sedimentation, among others. They are also relevant for several industrial flows, such as fuel or coal combustion, fluidized bed reactors and separation techniques. One of the main challenges in characterizing these flows is the need to simultaneously resolve the particle positions and velocities and the flow velocity field at their scale. All these configurations involve highly turbulent three-dimensional flows, which can be highly inhomogeneous and unsteady, and in which finite-size effects from the particles may also be present. In this work we focus on a simplified case: homogeneous isotropic turbulent flows (HIT) laden with point-like inertial particles.

The stagnation points of velocity fields in turbulent flows present several relevant characteristics that can be used to gain further understanding of these systems. For instance, the zero-crossings of fluctuating one-dimensional velocity signals have been extensively studied, as they can be used to quantify the Taylor microscale λ of homogeneous isotropic turbulence via the Rice theorem. As a consequence, these structures have been intensively studied over the last years, in works that cover the energy cascade of turbulence and atmospheric flows, among others. They present several advantages; for example, the zero-crossings of a velocity signal remain robust when the flow is unsteady and/or the calibration of probes is not guaranteed. Furthermore, it has recently been shown that they can also be used to quantify the integral length scale L. While most works focus on zero crossings, others have considered the case of stagnation points (STPS), defined as the set of velocity nulls satisfying v(x_n) = 0, where v is the fluid velocity field.
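As a minimal numerical illustration of the 1D Rice-theorem idea described above (this is our sketch, not code from the paper), the snippet below generates a band-limited, approximately Gaussian signal, counts its zero crossings, and compares the mean crossing distance with πλ, where λ is the Taylor microscale computed from the signal's variance and derivative variance. All names and the toy spectrum are ours.

```python
# Sketch (ours): Rice's theorem on a synthetic 1D signal. For a Gaussian
# signal u(x), the mean distance between zero crossings is pi * lambda,
# with lambda = sqrt(<u^2> / <(du/dx)^2>) (the Taylor microscale).
import numpy as np

rng = np.random.default_rng(0)
n, dx = 2**20, 1.0

# Band-limited signal: random phases on a prescribed smooth toy spectrum.
k = np.fft.rfftfreq(n, d=dx)
amp = np.where(k > 0, k**2 * np.exp(-(k / 0.02)**2), 0.0)
phases = np.exp(2j * np.pi * rng.random(k.size))
u = np.fft.irfft(np.sqrt(amp) * phases, n=n)
u -= u.mean()

# Taylor microscale from the variances.
dudx = np.gradient(u, dx)
lam = np.sqrt(np.mean(u**2) / np.mean(dudx**2))

# Zero-crossing count and mean crossing distance.
crossings = np.count_nonzero(np.diff(np.sign(u)) != 0)
ell = (n * dx) / crossings

print(f"pi * lambda = {np.pi * lam:.3f}, mean crossing distance = {ell:.3f}")
```

For a signal this long the two estimates should agree approximately; for non-Gaussian turbulence signals the proportionality constant deviates from π, which is precisely the B(Re_λ) dependence discussed next.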
In particular, Goto and Vassilicos generalized the Rice theorem, finding a relation between the number density of STPS, n_s, and λ,

n_s = B λ^{-3},    (1)

with B a constant that may vary with Re_λ due to the dependency of the non-Gaussianity of velocity derivatives on this parameter. While the study from Goto and Vassilicos confirmed the validity of this theorem, it did not explore a sufficiently large range of Reynolds numbers based on the Taylor scale, Re_λ, to report on the dependency B(Re_λ).

Although these studies concern single-phase turbulent flows, it has recently been proposed that the Rice theorem can be applied to particle-laden turbulent flows. The work by Mora et al. developed an experimental method to estimate the carrier-flow turbulent kinetic energy dissipation rate ε in the presence of inertial sub-Kolmogorov particles at moderate Re_λ. Its foundations rely on the unladen-flow dissipation calculation using the Rice theorem and the density of zero crossings n_s. Moreover, the results from such a model apply, in principle, also to three-dimensional particle velocities, given the simplified equation of motion

dv_p/dt = (u(x_p, t) - v_p) / τ_p,    (2)

with v_p the particle velocity, u(x_p, t) the carrier flow velocity evaluated at the particle's location x_p, and τ_p the particle viscous response time (defined in the next section). This simplified model relies on two conditions: the diameter of the particles must be smaller than the Kolmogorov lengthscale of turbulence η, and their density must be much larger than the carrier flow density. The Fourier transform of Equation (2) yields

v̂_p = û / (iωτ_p + 1).    (3)

As a consequence, the particle velocity field is a low-pass filtered version of the carrier-phase one, with a cut-off frequency of f_c = τ_p^{-1}/(2π), or f_c τ_η = (2πSt)^{-1}. The cut-off frequency therefore depends on the Stokes number of the inclusions, defined as St = τ_p/τ_η, with τ_η = (ν/ε)^{1/2} the flow Kolmogorov time scale (ν is the fluid kinematic viscosity). We can then deduce from Equation (3) that if the cut-off frequency f_c is large enough to resolve the dissipation scales, n_s should be recovered. Thus, it is possible to deduce the value of λ from the particles' velocities.

As stated above, while this model has been developed for 1D signals and zero crossings, Equation (3) is already defined for three-dimensional velocities, and the zero-crossing number density can also be redefined as the stagnation-point number density. We can therefore conclude, in principle, that the generalized Rice theorem and the model from Mora et al. can be combined to deduce the carrier-phase value of λ using inertial particles. This rationale can also give access to other small-scale quantities, such as ε and η. Beyond its fundamental interest, this could also be used to quantify the carrier-phase properties in experiments on two-phase turbulent flows. Indeed, resolving the carrier phase simultaneously with the inclusions' velocities in such conditions is beyond the possibilities of current experimental techniques. Finally, quantifying these properties of the carrier phase would also help to detect the presence of two-way coupling between the inertial particles and the carrier flow.
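To make the filtering in Equation (3) concrete, here is a small numeric sketch (ours, not the paper's) that evaluates the first-order low-pass gain |v̂_p/û| and the normalized cut-off frequency f_c τ_η = (2πSt)^{-1} for a few Stokes numbers; all variable names are ours.

```python
# Sketch (ours): gain of the first-order filter in Equation (3) and the
# normalized cut-off frequency f_c * tau_eta = 1 / (2 * pi * St).
import numpy as np

def particle_gain(omega_tau_p):
    """|v_hat_p / u_hat| for the Stokes-drag model, Equation (3)."""
    return 1.0 / np.sqrt(1.0 + omega_tau_p**2)

for St in (0.1, 0.5, 2.0):
    fc_tau_eta = 1.0 / (2.0 * np.pi * St)     # cut-off in Kolmogorov units
    # gain at the Kolmogorov frequency f = 1/tau_eta, i.e. omega*tau_p = 2*pi*St
    g_eta = particle_gain(2.0 * np.pi * St)
    print(f"St={St:4.1f}: f_c*tau_eta={fc_tau_eta:.3f}, gain at f=1/tau_eta: {g_eta:.2f}")
```

For the St = 0.5 particles used below, this gives f_c τ_η ≈ 0.32, comfortably above the 10^{-2} threshold stated in condition (ii) that follows.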
Condition (ii) is proposed based on the finding by Vassilicos and collaborators that, when velocity records are low-pass filtered with cut-off scales up to one order of magnitude larger than the Kolmogorov length scale, the zero crossings of such low-pass filtered records are still able to resolve the value of λ. The last condition, (iii), is to have enough particles to sample all the stagnation points present in the flow. This latter condition carries the added constraint that stagnation points are known to form dense clusters, thus making the sampling of these points more difficult.

This work aims to study the applicability of the aforementioned model for stagnation points. We use direct numerical simulations (DNSs) with random forcing, which avoid the experimental errors that may contaminate the counting of stagnation points. The results could be extended to experimental fields of particles advected by turbulent flows. We will first focus on verifying the applicability of the generalized Rice theorem (Equation (1)) to our DNSs, which span a wide range of Re_λ, between 40 and 520. We will particularly focus on the dependency of B on Re_λ, not discussed in previous works, and on verifying the applicability of Equation (1) to instantaneous velocity fields. Then, in a second part, we will use the method of Mora et al. to study DNSs of two-phase flows with Re_λ = 240, using up to 10^7 tracers (i.e., St = 0) that follow the streamlines of the flow, and inertial particles (St = 0.5) that evolve according to Equation (2). Our results indicate that the model may not apply unless an extremely large number of particles is injected, as the clusters of stagnation points require a very large spatial resolution (or particle density) to be resolved.

Numerical Simulations

Our study was conducted using DNSs at five different values of Re_λ with the GHOST code. These simulations follow standard practices regarding their temporal integration and dealiasing procedures, and have an adequate spatial resolution of the smallest scales, i.e., κη ≈ 1 (where κ = N/3 is the maximum resolved wavenumber in Fourier space and N the linear spatial resolution). The Kolmogorov length scale η is defined as η = (ν^3/ε)^(1/4). Fully dealiased pseudospectral methods with second-order Runge-Kutta time stepping are used. The 3D simulation domain for all datasets has dimensions 2π × 2π × 2π. All relevant parameters can be found in Table 1.

Table 1. Relevant parameters of the DNSs used in this study. N is the number of points of the DNS along one axis, such that N^3 is the total number of grid points in the simulation domain. L/(2π) is the integral length scale in units of the domain linear size 2π. η is the Kolmogorov dissipation scale. Re_λ is the Reynolds number based on the Taylor microscale λ. "# snapshots" is the number of snapshots of the vector fields used for the analysis, and "# STPS" the number of STPS (i.e., stagnation points) averaged over all snapshots.

Numerical simulations solve the incompressible Navier-Stokes equations for the velocity u with a random solenoidal forcing f,

∂u/∂t + u·∇u = −∇P + ν∇²u + f,    (Equation (4))

where P = p/ρ (with p the pressure and ρ a uniform mass density), which is obtained from the incompressibility condition ∇·u = 0. In Equation (4), Du/Dt = a is the Lagrangian acceleration of the fluid elements.
We define the r.m.s. velocity as u = ⟨|u_i|²⟩^(1/2) (where u_i is a Cartesian component of the velocity and Einstein notation is used), the Taylor microscale as λ = (15νu²/ε)^(1/2), and the integral scale as L = [π/(2u²)] ∫ E(k)/k dk (where E(k) is the isotropic energy spectrum). The solenoidal forcing f is given by a superposition of Fourier modes with random phases in the shell with wavenumber k = 1. A new random forcing was generated every 0.5 large-scale turnover times, and the forcing was linearly evolved from its previous state to the next state along this period of time. This results in a continuous and slowly evolving random forcing with a correlation time of 0.5 turnover times, which at the largest resolution considered yields an integral scale L/(2π) ≈ 0.309, and which will be useful for the simulations with inertial particles, as discussed below. The simulations also use the largest Reynolds number attainable at their spatial resolution, with κη ≈ 1 (see Table 1).

We use five numerical datasets, labelled in the following as "DNS-N", where N is the linear resolution of each dataset. The Taylor-based Reynolds number, Re_λ = uλ/ν, spans more than one decade: we have Re_λ ∈ [40, 520] for spatial resolutions of 64^3, 128^3, 256^3, 512^3, and 1024^3 grid points (see Figure 1). We stored enough snapshots of the vector fields to have adequate global statistics.

For all datasets, we applied the method proposed by Haynes and collaborators to compute the stagnation points. This method goes through each cell of the DNS domain and uses the velocity values at the cell's eight corners. If there is a change of sign in all three velocity components, a local trilinear interpolation function is created from the corner velocity values, and Newton's method is then used to find any velocity nulls within the cell. If any one of the three velocity components does not change sign, then no velocity nulls should be contained in the cell for a well-resolved DNS. Both elliptic and hyperbolic stagnation points were considered. More details about this method can be found in the references.

For DNS-512 we also have data for tracers and inertial point particles without gravity. Particles are integrated following Equation (2), which can be written as dx_p/dt = v_p and dv_p/dt = (u(x_p, t) − v_p)/τ_p. These equations are integrated with a high-order Runge-Kutta method to evolve the particles in time, and a high-order three-dimensional spatial spline interpolation to estimate the fluid velocity u(x_p) at the particle position (see the references for details). Simulations with particles are conducted as follows: first, a DNS of the Eulerian flow is run without particles until a turbulent steady state is reached. Then, particles are injected with a uniform random distribution in space and with the same initial velocity as the fluid element at the particle position. Particles are integrated for several turnover times (in the case of tracers) or for several particle relaxation times (in the case of inertial particles) before data start to be collected for the analysis.

In order to apply the method described in the introduction, the particles' velocities v_p were interpolated using the 'griddata' function in the interpolate module of the SciPy library. The particles' velocities were thus interpolated onto the DNS Eulerian grid points with linear interpolation. For points outside the range of the particles' positions, on the boundaries of the DNS domain, the 'nearest' method was employed, yielding a synthetic velocity field from the particles for each snapshot (a minimal version of this pipeline is sketched below).
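The reconstruction pipeline just described can be sketched as follows. The ABC flow below stands in for a DNS velocity field purely for illustration, the particle counts and grid size are arbitrary, and the trilinear/Newton refinement of the Haynes method is omitted, so the sketch only flags candidate cells:

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

def velocity(p):
    # ABC flow (A = B = C = 1), a steady field with known stagnation points.
    x, y, z = p[..., 0], p[..., 1], p[..., 2]
    return np.stack([np.sin(z) + np.cos(y),
                     np.sin(x) + np.cos(z),
                     np.sin(y) + np.cos(x)], axis=-1)

pts = rng.uniform(0.0, 2 * np.pi, (20000, 3))   # "particle" positions
vel = velocity(pts)                              # their (exact) velocities

# Eulerian grid onto which the particle velocities are interpolated:
# 'linear' inside the particle cloud, 'nearest' as a fallback at the
# domain boundaries, as described above.
n = 16
axis = np.linspace(0.0, 2 * np.pi, n)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

u = np.empty((grid.shape[0], 3))
for d in range(3):
    lin = griddata(pts, vel[:, d], grid, method="linear")
    nn = griddata(pts, vel[:, d], grid, method="nearest")
    u[:, d] = np.where(np.isnan(lin), nn, lin)
u = u.reshape(n, n, n, 3)

def changes_sign(c):
    # c holds one velocity component at the 8 corners of a cell.
    return c.min() < 0.0 < c.max()

# Only cells whose corners change sign in all three components can
# contain a stagnation point.
candidates = 0
for i in range(n - 1):
    for j in range(n - 1):
        for k in range(n - 1):
            cell = u[i:i + 2, j:j + 2, k:k + 2, :]
            if all(changes_sign(cell[..., d]) for d in range(3)):
                candidates += 1
print("cells that may contain a stagnation point:", candidates)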
We therefore proceeded to apply the method of Haynes and collaborators here too, to detect the stagnation points. As discussed above, we used two types of particles: inertial particles with St = 0.5 and tracers with St = 0. While tracers are expected to sample the flow uniformly, the former have been reported to cluster. For each type of particle, two DNSs were run, one with 10^6 particles and another with 10^7. This will allow us in the next sections to analyse the influence of the number of particles on the convergence of the Rice theorem, and to verify whether it actually applies to our datasets (we remind the reader that particles are injected only in the N^3 = 512^3 run, i.e., with ≈ 1.3 × 10^8 Eulerian grid points). In the following, our datasets with particles are labelled analogously, according to the type and number of particles injected. As each run was performed independently, and given that we used random forcing, the temporal evolution of the single-phase flow is not expected to be identical across datasets.

Validation of the Generalized Rice Theorem in Single-Phase Turbulent Flows

We first verify the validity of the generalized Rice theorem for our DNSs. To this end, in Figure 2 we plot the prefactor B in the Rice theorem (defined in Equation (1)) as a function of Re_λ. Figure 2a is deduced for each instantaneous Eulerian velocity field from the DNSs at N = 512, with Re_λ also computed instantaneously for each snapshot (n_sf denotes the density of stagnation points of the Eulerian fluid velocity). Figure 2b, on the other hand, shows the value of B for all resolutions N in the DNSs, averaged over all snapshots corresponding to each run as detailed in Table 1 (Re_λ is also computed from the averaged characteristic quantities). Our results are in good agreement with the generalized Rice theorem, as we find that B is of order unity and has a weak dependence on Re_λ. Furthermore, our results are consistent with the study of Goto and Vassilicos on a similar flow. Note that while that study focused on a narrower range of Re_λ, in Figure 2b we extend the validity of the theorem to Re_λ ∈ [40, 520]. Our DNSs also show that B is a slowly decreasing function of Re_λ, and therefore presents the opposite trend to the one found for zero-crossings of the fluctuating velocity (but is consistent with the constant value found in earlier work where the same flow was studied over a large range of Re_λ). This result is surprising, as it suggests that the contribution of small-scale intermittency effects decreases when Re_λ is increased.

The good collapse of all fields shown in Figure 2a,b suggests that the generalized Rice theorem is valid not only when averaging Eulerian fields in time, but also for instantaneous realisations. To confirm this feature, Figure 3a shows the temporal evolution of n_sf^(-1/3) and λ for DNS-512-1 and DNS-512-2 for the Eulerian fluid velocity field. It can be observed that both curves present almost identical trends, with values of B that remain almost constant (see Figure 3b). We therefore conclude that the generalized Rice theorem applies to our datasets, over a wide range of Re_λ as well as for instantaneous velocity fields.

Validation of the Rice Theorem in Two-Phase Turbulent Flows

We now proceed to study the applicability of the generalized Rice theorem to particle-laden flows. It can easily be seen that for both St = 0 and St = 0.5 the condition f_c τ_η = (2πSt)^(-1) > 10^(-2) holds. Additionally, Equation (2) is trivially valid in our case, as the particles evolve according to it in the simulations.
Figure 4 shows the reconstructed (i.e., interpolated) x velocity-field component using 10^7 particles with St = 0 or St = 0.5, compared to the actual Eulerian flow velocity component. The fields are very similar, although not identical. The squared point-wise differences between the three velocity fields are presented in Figure 5. Discrepancies between the flow field and the inertial particles are expected (Figures 4c and 5b), as Equation (3) implies that the STPS may be preserved but not the velocity values elsewhere. Furthermore, this is also expected when comparing the flow velocity and the tracers (Figure 4a,b respectively, or see Figure 5a), as the latter are also expected to preserve the null points but, depending on the number of particles present in the flow, could yield a coarse-grained reconstruction of the former. Moreover, as expected, the tracers' reconstructed field shows fewer differences with the flow field than the inertial particles' (see Figure 5a,b).

We can therefore study the validity of Equation (1) in our DNSs. A first test is to compare the number of STPS detected for the carrier phase and for the interpolated (tracer or inertial) particle velocity fields (Figure 6a). Surprisingly, we see that for any of the particle sets a smaller number of STPS is detected; even when injecting 10^7 tracers we have 30% fewer STPS in the interpolated field (Figure 6a). This result points towards the inapplicability of the model of Mora et al. to our datasets. This is confirmed when comparing n_sp^(-1/3) for the tracers' interpolated field with λ in Figure 6b, as we find values of B always larger than those found for the Eulerian flow velocity in Figure 3b. Nevertheless, Figure 6b suggests that, while the generalized Rice theorem does not apply exactly, some trends are still recovered. In other words, a calibrated value of B(St, N_part, ...) (where N_part is the total number of particles) could be used if data are available from, e.g., numerical simulations. (Figure caption: black and red circles correspond to STPS detected from the tracer and inertial-particle fields, respectively. As the number of STPS in a 2D slice can be small, all panels show all STPS in a slice of 6 grid points in x, i.e., the STPS detected in x ∈ π/2 ± 6π/512.)

As the only condition that may be violated is having enough particles to sample all stagnation points in the flow, we now analyse this hypothesis. For all N = 512 DNSs, we find that the carrier phase has values of n_sf^(-1/3) of around 0.3. This implies that we have around 30 stagnation points per unit volume (with a total volume of (2π)^3 in our DNSs). Conversely, we have 4 × 10^3 and 4 × 10^4 (inertial or tracer) particles per unit volume when injecting, respectively, 10^6 and 10^7 inclusions; these figures are checked in the short sketch below. These densities imply that, in principle, all stagnation points should be resolved by the interpolated fields. However, as discussed in the introduction, this consideration does not take into account the clustering of stagnation points. Indeed, Figure 7a,b shows the presence of strong clusters of STPS in the flow. Using similar DNSs, a previous work showed that stagnation points form very dense clusters, and that clustering increases significantly with Re_λ (as shown in Figure 7c).
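The number densities quoted above follow from a quick order-of-magnitude check:

import numpy as np

mean_dist = 0.3                       # n_sf^(-1/3) for the carrier phase
n_s = mean_dist ** -3                 # stagnation points per unit volume
volume = (2 * np.pi) ** 3             # DNS domain volume
print(f"STPS per unit volume : {n_s:.0f}")          # ~37, i.e., a few tens
print(f"STPS in whole domain : {n_s * volume:.0f}")
for n_part in (1e6, 1e7):
    print(f"{n_part:.0e} particles -> {n_part / volume:.1e} per unit volume")

With 10^6 and 10^7 inclusions this yields roughly 4 × 10^3 and 4 × 10^4 particles per unit volume, i.e., two to three orders of magnitude more particles than stagnation points, which is why a uniform distribution would naively be expected to resolve them all.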
Such clustering (as opposed to, e.g., a homogeneous spatial distribution of points) implies that resolving all stagnation points would require injecting many more particles, as dense regions of stagnation points also require larger densities of particles to be resolved by our interpolation scheme. This can also be seen in Figure 4, where the interpolated fields for both tracers and inertial particles do not recover all stagnation points in dense regions. We remark that tracers are instead distributed homogeneously in space, while inertial particles are also known to form clusters in turbulent flows; however, the positions of such clusters are not directly related to the positions of those formed by the stagnation points.

Besides this, another important effect that can explain why the particles see fewer zeros than the number of STPS in the Eulerian field is associated with the stability of stagnation points in 3D flows. Even in the simpler 2D case, most instantaneous stagnation points can be classified into elliptic or hyperbolic types, depending on their local stability. Tracers and inertial particles can be expected to spend longer times around elliptic points, whereas hyperbolic points should quickly push nearby tracers and inertial particles along their unstable manifolds. In 3D, the possible topologies of velocity-field nulls are more complex, but any 3D stagnation point with an unstable manifold should have the same effect. As a result, STPS are sampled differently depending on their topology, and some STPS will be sampled less than others. This can be another important reason behind the lower number of zeros detected from the 3D particle fields.

Conclusions

Throughout this work we used DNS data to study the applicability of the generalized Rice theorem to single- and two-phase flows. Our results can be summarised as follows:

• We verified the validity of the generalized Rice theorem for our dataset, which covers the range Re_λ ∈ [40, 520]. Furthermore, we showed that the prefactor B in Equation (1), which quantifies the non-Gaussianity of velocity derivatives, has a weak dependence on Re_λ, and that it tends to decrease when this parameter is increased.

• We also showed that the generalized Rice theorem applies not only to time-averaged three-dimensional velocity fields, but also to instantaneous realizations.

• We proposed an interpolation scheme to reconstruct the stagnation points using the particles' velocity field. Our results indicate that the Rice theorem cannot be applied in practice to two-phase three-dimensional turbulent flows, as the clustering of stagnation points forms very dense structures that require a very large number of particles to accurately sample the flow stagnation points. Even with 10^7 tracers or inertial particles, we did not manage to apply the Rice theorem satisfactorily.

• We find that this lack of resolution of stagnation points is consistent with the strong clustering of STPS, as it implies the presence of very dense regions of these points, which require the injection of a very high number density of particles to be resolved. Another possible explanation for the lower number of STPS detected with the particles' velocity field is the local stability of 3D STPS with unstable manifolds.

• While the number of carrier-phase STPS is always larger than that obtained with the interpolation scheme proposed here, we do find that they evolve over time following similar trends. This feature requires further study to be validated.
In conclusion, our study suggests that the generalized Rice theorem and the rationale of Mora et al. cannot be used in a practical way to reconstruct the carrier phase from particle measurements in turbulent two-phase flows. Their application would require a number of particles that would make such a study extremely hard with modern experimental techniques.
#!/usr/bin/env python3
import math


class Int9:
    """
    An integer in Z/9Z (the ring of integers modulo 9).
    """
    def __init__(self, n: int):
        self.n = n % 9

    def __add__(self, other):
        return Int9(self.n + other.n)

    def __sub__(self, other):
        return Int9(self.n - other.n)

    def __mul__(self, other):
        return Int9(self.n * other.n)

    def __int__(self):
        return self.n

    def __str__(self):
        return str(self.n)

    def __repr__(self):
        return 'Int9({})'.format(self.n)


class Mat9:
    """
    A matrix over Z/9Z (integers modulo 9).
    Several limitations apply to the matrix structure:
    * Matrix must be a square matrix
    * Matrix size must be a power of two
    """
    def __init__(self, m: list):
        if type(m[0][0]) != Int9:
            raise Exception("Mat9 constructor must be provided with a 2-dimensional list of Int9")
        if len(m) != len(m[0]):
            raise Exception("Non-square matrices are not supported. Provided matrix: {}".format(m))
        self.m = m
        self.L = len(m)

    def __add__(self, other):
        if self.L != other.L:
            raise Exception(
                "__add__ is called on matrices of different size. Arguments: {}; {}".format(self.m, other.m)
            )
        return Mat9([[self.m[i1][i2] + other.m[i1][i2] for i2 in range(self.L)] for i1 in range(self.L)])

    def __sub__(self, other):
        if self.L != other.L:
            raise Exception(
                "__sub__ is called on matrices of different size. Arguments: {}; {}".format(self.m, other.m)
            )
        return Mat9([[self.m[i1][i2] - other.m[i1][i2] for i2 in range(self.L)] for i1 in range(self.L)])

    @staticmethod
    def _mul2(a, b):
        """
        Calculate __mul__ for matrices of size 2 using Strassen's seven products.
        :return: a Mat9 -- the product of a and b
        """
        a11 = a.m[0][0]
        a12 = a.m[0][1]
        a21 = a.m[1][0]
        a22 = a.m[1][1]
        b11 = b.m[0][0]
        b12 = b.m[0][1]
        b21 = b.m[1][0]
        b22 = b.m[1][1]
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        c11 = m1 + m4 - m5 + m7
        c12 = m3 + m5
        c21 = m2 + m4
        c22 = m1 - m2 + m3 + m6
        return Mat9([[c11, c12], [c21, c22]])

    @staticmethod
    def _mul(a, b):
        """
        Calculate __mul__ recursively using the Strassen algorithm.
        :return: a Mat9 -- the product of a and b
        """
        l_div = a.L // 2
        # Split both operands into four half-size quadrants.
        a11 = Mat9([a.m[i][:l_div] for i in range(0, l_div)])
        a12 = Mat9([a.m[i][l_div:] for i in range(0, l_div)])
        a21 = Mat9([a.m[i][:l_div] for i in range(l_div, a.L)])
        a22 = Mat9([a.m[i][l_div:] for i in range(l_div, a.L)])
        b11 = Mat9([b.m[i][:l_div] for i in range(0, l_div)])
        b12 = Mat9([b.m[i][l_div:] for i in range(0, l_div)])
        b21 = Mat9([b.m[i][:l_div] for i in range(l_div, b.L)])
        b22 = Mat9([b.m[i][l_div:] for i in range(l_div, b.L)])
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        c11 = m1 + m4 - m5 + m7
        c12 = m3 + m5
        c21 = m2 + m4
        c22 = m1 - m2 + m3 + m6
        # Reassemble the four result quadrants into a single matrix.
        for i in range(l_div):
            c11.m[i].extend(c12.m[i])
            c21.m[i].extend(c22.m[i])
        c11.m.extend(c21.m)
        c11.L = c11.L * 2
        return c11

    def __mul__(self, other):
        if self.L != other.L:
            raise Exception(
                "__mul__ is called on matrices of different size. Arguments: {}; {}".format(self.m, other.m)
            )
        if self.L == 1:
            return Mat9([[self.m[0][0] * other.m[0][0]]])
        if self.L == 2:
            return Mat9._mul2(self, other)
        return Mat9._mul(self, other)

    def __pow__(self, power, modulo=None):
        if power == 0:
            # Identity matrix.
            return Mat9([[Int9(1) if i == j else Int9(0) for i in range(self.L)] for j in range(self.L)])
        if power == 1:
            return self
        if power % 2 != 0:
            return (self ** (power - 1)) * self
        # Integer division keeps the exponent an int (power / 2 would yield a float).
        self_halfpow = self ** (power // 2)
        return self_halfpow * self_halfpow


def main():
    # Read a square matrix of integers, one row per line of input.
    m = [list(map(lambda n: Int9(n), list(map(int, input().split()))))]
    for i in range(len(m[0]) - 1):
        m.append(list(map(lambda n: Int9(n), list(map(int, input().split())))))
    m_size = len(m)
    # Pad with zeros up to the next power of two, as required by the Strassen recursion.
    m_size_pow_2 = 2 ** int(math.ceil(math.log2(len(m))))
    if m_size_pow_2 != m_size:
        for i in range(m_size):
            m[i].extend([Int9(0) for _ in range(m_size_pow_2 - m_size)])
        m.extend([[Int9(0) for _ in range(m_size_pow_2)] for _ in range(m_size_pow_2 - m_size)])
    m9 = Mat9(m)
    result_noncut = m9 ** m_size
    # Print only the original (unpadded) top-left block of the result.
    result_printable = '\n'.join(
        [' '.join([
            str(result_noncut.m[i][j]) for j in range(m_size)
        ]) for i in range(m_size)]
    )
    print(result_printable)


if __name__ == '__main__':
    main()
/**
 * Terminate the LineReader and close its source file.
 * (Java has no destructors; this must be called explicitly.
 * Any exception raised while closing is logged, not propagated.)
 */
public void close() {
    try {
        reader.close();
    } catch (Exception exc) {
        System.err.println(exc.getMessage());
        exc.printStackTrace();
    }
}
/* * SPDX-License-Identifier: Apache-2.0 * * The OpenSearch Contributors require contributions made to * this file be licensed under the Apache-2.0 license or a * compatible open source license. */ /* * Licensed to Elasticsearch under one or more contributor * license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright * ownership. Elasticsearch licenses this file to you under * the Apache License, Version 2.0 (the "License"); you may * not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ /* * Modifications Copyright OpenSearch Contributors. See * GitHub history for details. */ package org.opensearch.gradle.test.rest; import org.opensearch.gradle.OpenSearchJavaPlugin; import org.opensearch.gradle.test.RestIntegTestTask; import org.opensearch.gradle.test.RestTestBasePlugin; import org.opensearch.gradle.testclusters.TestClustersPlugin; import org.opensearch.gradle.util.GradleUtils; import org.gradle.api.Plugin; import org.gradle.api.Project; import org.gradle.api.plugins.JavaBasePlugin; import org.gradle.api.provider.Provider; import org.gradle.api.tasks.SourceSet; import org.gradle.api.tasks.SourceSetContainer; /** * Apply this plugin to run the YAML based REST tests. */ public class YamlRestTestPlugin implements Plugin<Project> { public static final String SOURCE_SET_NAME = "yamlRestTest"; @Override public void apply(Project project) { project.getPluginManager().apply(OpenSearchJavaPlugin.class); project.getPluginManager().apply(TestClustersPlugin.class); project.getPluginManager().apply(RestTestBasePlugin.class); project.getPluginManager().apply(RestResourcesPlugin.class); // create source set SourceSetContainer sourceSets = project.getExtensions().getByType(SourceSetContainer.class); SourceSet yamlTestSourceSet = sourceSets.create(SOURCE_SET_NAME); // create the test cluster container RestTestUtil.createTestCluster(project, yamlTestSourceSet); // setup the yamlRestTest task Provider<RestIntegTestTask> yamlRestTestTask = RestTestUtil.registerTask(project, yamlTestSourceSet); // setup the dependencies RestTestUtil.setupDependencies(project, yamlTestSourceSet); // setup the copy for the rest resources project.getTasks().withType(CopyRestApiTask.class, copyRestApiTask -> { copyRestApiTask.sourceSetName = SOURCE_SET_NAME; project.getTasks().named(yamlTestSourceSet.getProcessResourcesTaskName()).configure(t -> t.dependsOn(copyRestApiTask)); }); project.getTasks().withType(CopyRestTestsTask.class, copyRestTestTask -> { copyRestTestTask.sourceSetName = SOURCE_SET_NAME; }); // setup IDE GradleUtils.setupIdeForTestSourceSet(project, yamlTestSourceSet); // wire this task into check project.getTasks().named(JavaBasePlugin.CHECK_TASK_NAME).configure(check -> check.dependsOn(yamlRestTestTask)); } }
from django.contrib.gis.admin import * class GeoTaggedModelAdmin(OSMGeoAdmin): pass
// CloneOrOpenDefaultGitHubRepoSSH clones the default Kubernetes GitHub // repository via SSH if the repoPath is empty, otherwise updates it at the // expected repoPath. func CloneOrOpenDefaultGitHubRepoSSH(repoPath string) (*Repo, error) { return CloneOrOpenGitHubRepo( repoPath, DefaultGithubOrg, DefaultGithubRepo, true, ) }
export const cellIterator = replacer => {
  const activeSheet = SpreadsheetApp.getActiveSheet();
  const selection = activeSheet.getSelection();
  const ranges = selection.getActiveRangeList().getRanges();
  ranges.forEach(range => {
    const values = range.getValues();
    values.forEach(row => {
      for (let i = 0; i < row.length; i++) {
        row[i] = replacer(row[i]);
      }
    });
    range.setValues(values);
  });
};
/**
 * Checks that two directories contain the same number of entries and that
 * every file in d1 (ignoring .DS_Store) has a name-matching file in d2.
 * @param d1 the first directory
 * @param d2 the second directory
 * @return true if the directories match by entry count and file names
 */
private boolean checkLengthAndName(File d1, File d2){
    if(d1.listFiles().length != d2.listFiles().length){
        return false;
    }
    for(File fd1 : d1.listFiles()){
        if(fd1.getName().equals(".DS_Store")) continue;
        if(getFileByName(d2, fd1.getName()) == null){
            return false;
        }
    }
    return true;
}
def push(self, image, ids, object_points, image_points): self.images_array.append(copy.deepcopy(image)) self.ids_arrays.append(copy.deepcopy(ids)) self.object_points_arrays.append(copy.deepcopy(object_points)) self.image_points_arrays.append(copy.deepcopy(image_points))
The Wrb antigen, a receptor for Plasmodium falciparum malaria, is located on a helical region of the major membrane sialoglycoprotein of human red blood cells. 1. Immunoprecipitation of periodate/NaB3H4-labelled human erythrocytes using anti-Wrightb (Wrb) monoclonal antibodies showed that these antibodies specifically react with the major erythrocyte sialoglycoprotein alpha (glycophorin A). 2. Similar experiments on erythrocytes from the only known individual lacking the Wrb antigen but with otherwise normal sialoglycoproteins did not result in the immunoprecipitation of any sialoglycoprotein. 3. We suggest that the Wrb antigen is located on an alpha-helical region between residues 55 and 70 of sialoglycoprotein alpha.
// Deck returns a new deck for the type.
func (typ Type) Deck() *Deck {
	if typ == ShortDeck {
		return NewShortDeck()
	}
	return NewDeck()
}
// clone returns a clone of this utxoDiff func (d *UTXODiff) clone() *UTXODiff { clone := &UTXODiff{ toAdd: d.toAdd.clone(), toRemove: d.toRemove.clone(), } return clone }
#include <iostream>
#include <cmath>
#include <algorithm>
#include <vector>

using namespace std;

int main() {
    long long N, K;
    cin >> N >> K;
    // Distance from N to the nearest multiple of K: either go down by N % K
    // or up by K - N % K, whichever is smaller.
    cout << min(N % K, abs(N % K - K)) << endl;
}
import { Component, Inject } from '@angular/core';
import { Http } from '@angular/http';

@Component({
  selector: 'app-bar-chart',
  templateUrl: './bar-chart.component.html',
})
export class BarChartComponent {
  public weatherForecast: WeatherForecast;
  public chartLegend: boolean = true;
  public chartType: string = 'bar';
  public chartOptions: any = {
    responsive: true,
    legend: { position: 'bottom' }
  };

  constructor(http: Http, @Inject('BASE_URL') baseUrl: string) {
    http.get(baseUrl + 'api/LineChart/GetWeatherForecast').subscribe(result => {
      this.weatherForecast = result.json() as WeatherForecast;
    }, error => console.error(error));
  }
}

interface Weather {
  data: Array<number>;
  label: string;
}

interface WeatherForecast {
  weatherList: Weather[];
  chartLabels: string[];
}
/**
 * For reading through the write connection when in transaction,
 * populate the clone directly from the database row.
 * Should not be called unless (this.descriptor.hasSerializedObjectPolicy() && query.shouldUseSerializedObjectPolicy()).
 * This method populates the object only if some mappings should potentially be read using the sopObject
 * and other mappings without it.
 * That happens when the row has just been read from the database and may still hold the serialized
 * object as raw (not yet deserialized) bits in a field value.
 * Note that clone == sopObject is the same case, but (because the clone has to be set into the cache
 * beforehand) extraction of the sopObject from bits was done right before this method is called.
 * If the attempt to deserialize the sopObject from bits has failed, but SOP was set up to allow recovery
 * (all fields/values mapped to the object were read, not just those excluded from SOP),
 * then fall through to buildAttributesIntoWorkingCopyClone.
 * Nothing should be done if sopObject is not null but clone != sopObject:
 * the only way to get into this case should be with the original query not maintaining the cache,
 * through a back reference to the original object, which is already being built (or has been built).
 * @return whether the object has been populated with attributes; if not, then buildAttributesIntoWorkingCopyClone should be called.
 */
protected boolean buildAttributesIntoWorkingCopyCloneSOP(Object clone, CacheKey sharedCacheKey, ObjectBuildingQuery query,
        JoinedAttributeManager joinManager, AbstractRecord databaseRow, UnitOfWorkImpl unitOfWork, boolean forRefresh)
        throws DatabaseException {
    Object sopObject = databaseRow.getSopObject();
    if (clone == sopObject) {
        boolean readAllMappings = query.shouldReadAllMappings();
        FetchGroup executionFetchGroup = query.getExecutionFetchGroup(this.descriptor);
        for (DatabaseMapping mapping : this.descriptor.getMappings()) {
            if (readAllMappings || query.shouldReadMapping(mapping, executionFetchGroup)) {
                if (mapping.hasNestedIdentityReference() || mapping.isOutOnlySopObject()) {
                    if (mapping.isOutSopObject()) {
                        databaseRow.setSopObject(null);
                        mapping.buildCloneFromRow(databaseRow, joinManager, clone, sharedCacheKey, query, unitOfWork, unitOfWork);
                    } else {
                        databaseRow.setSopObject(sopObject);
                        mapping.buildCloneFromRow(databaseRow, joinManager, clone, sharedCacheKey, query, unitOfWork, unitOfWork);
                    }
                }
            }
        }
        if (this.descriptor.hasEventManager()) {
            postBuildAttributesIntoWorkingCopyCloneEvent(clone, databaseRow, query, unitOfWork, forRefresh);
        }
        databaseRow.setSopObject(null);
        return true;
    } else {
        if (sopObject == null) {
            sopObject = this.descriptor.getSerializedObjectPolicy().getObjectFromRow(databaseRow, unitOfWork, (ObjectLevelReadQuery)query);
            if (sopObject != null) {
                boolean readAllMappings = query.shouldReadAllMappings();
                FetchGroup executionFetchGroup = query.getExecutionFetchGroup(this.descriptor);
                for (DatabaseMapping mapping : this.descriptor.getMappings()) {
                    if (readAllMappings || query.shouldReadMapping(mapping, executionFetchGroup)) {
                        if (mapping.isOutSopObject()) {
                            databaseRow.setSopObject(null);
                            mapping.buildCloneFromRow(databaseRow, joinManager, clone, sharedCacheKey, query, unitOfWork, unitOfWork);
                        } else {
                            databaseRow.setSopObject(sopObject);
                            mapping.buildCloneFromRow(databaseRow, joinManager, clone, sharedCacheKey, query, unitOfWork, unitOfWork);
                        }
                    }
                }
                if (this.descriptor.hasEventManager()) {
                    postBuildAttributesIntoWorkingCopyCloneEvent(clone, databaseRow, query, unitOfWork, forRefresh);
                }
                databaseRow.setSopObject(null);
                return true;
            } else {
                return false;
            }
        } else {
            return true;
        }
    }
}
Paul Ryan, King of Pork. We learned during last night’s vice presidential debate that Paul Ryan thinks the President’s stimulus package was a total waste of money. They passed the stimulus. The idea that we could borrow $831 billion, spend it on all of these special interest groups, and that it would work out just fine. Ryan went on to call the stimulus bill “Crony capitalism and corporate welfare.” VP Biden responded, of course, that Paul Ryan tried to carve out some of that crony capitalism and corporate welfare for his own constituents in two letters he penned to the vice president seeking stimulus pork for his own voters. And AP reports today, having gone through thousands of pages of Ryan’s congressional correspondence, that Ryan also sought stimulus funds for environmental projects, even though he criticized “green” stimulus funds in last night’s debate. Ryan also wrote to the EPA in 2009 on behalf of a small town trying to secure $550,000 in stimulus money for utility repairs. Ryan, whose staff requested meetings with the EPA about the matter, said the rescinding of the grant “would be economically devastating” to Sharon, Wis., since it already began spending the money. (The EPA said project costs were incurred before October 2008, making the project ineligible for stimulus cash.) Ryan has also voiced support for millions in EPA grant money to clean up abandoned building sites in Wisconsin towns. Even better, AP found that Ryan sought funds under the evil Obamacare, another program he wants to repeal because it’s allegedly so “bad.” A Kenosha health center’s request to use money under Obama’s new health care law to help meet health care needs of “thousands of new patients” who lack coverage. Ryan’s December 2010 letter to the Department of Health and Human Services, first reported by the Nation magazine and also obtained by the AP, appears at odds with his pledge to repeal “Obamacare.” Interestingly, during the debate Ryan tried to pass all of this off as something that every congressional office does. Well, yes, and it’s called “pork.” But of course, it’s really not something that every politician does. You’ll recall that GOP Governor Rick Perry didn’t try to carve out stimulus pork for his constituents, he turned the money down. Sarah Palin turned the money down too, as Governor of Alaska. And so did the Republican governor of South Carolina, Mark Sanford. When faced with the decision as to whether they should try to carve out stimulus “pork” for their constituents, a lot of Republican politicians said “no.” Paul Ryan, however, said YES! Is Paul Ryan now saying that Sarah Palin, Rick Perry, and Mark Sanford went too far? If those politicians were willing to forgo stimulus monies for their constituents, why couldn’t Paul Ryan?
package br.com.casadocodigo.loja.configuration;

import org.springframework.security.web.context.AbstractSecurityWebApplicationInitializer;

public class SpringSecurityFilterConfiguration extends AbstractSecurityWebApplicationInitializer {

}
Since the duo arrived in the Summer of ‘13, Jrue Holiday and Tyreke Evans have consistently been compared, contrasted or downright disparaged in New Orleans. Similar to a neck-and-neck race at the Fair Grounds, the two point guards have often alternated as momentary holders of the title of Pelicans’ best point guard. Three and a half years later, a clear winner has yet to solidly cement himself atop all hearts and minds.

Holiday was restricted at the outset for a second straight season, this time indefinitely as family issues demanded his full attention. Meanwhile, Evans, plagued by further leg complications, wasn’t able to step onto the hardwood until the 27th game of the season. Both players have yet to sustain the level of performance seen in years past, so understandably there are some voices calling to bring back neither guard.

If only beggars could be choosers…

In case you missed it, scoring premium talent in unrestricted free agency has not been a route available to New Orleans. The biggest acquisition during Dell Demps’ tenure has been the combination of Solomon Hill, E’Twaun Moore and Langston Galloway. No, seriously. Have you ever taken the time to look at the entire list of signings since July 21, 2010 (the current general manager’s first day on the job)? Then, when you also factor in that both guards litter several top-10 franchise categories, it just seems prudent to keep at least one of them, because free agency isn’t going to replace this level of talent.

So the question is: if forced to choose between the two, should the front office prioritize keeping Evans or Holiday in contract negotiations?

Kevin: I do not think I’m surprising anyone familiar with my writing when I say I’d go with Tyreke Evans over Jrue Holiday. I favor evidence based on fit. Potential and individual stats are great, but proof of actual team success combined with intangibles like energy, chemistry and attitude is more important to me — it’s where a purely analytical approach to roster building can cause problems. I’ve seen what a healthy Tyreke Evans and Anthony Davis pairing can bring. I’ve seen this tandem put together a nice playoff run. I’ve seen them do this with a coach, Monty Williams, who was better than Alvin Gentry but still not ready to be a head coach — often winning games in spite of him. Monty was canned tuna, Alvin is Vienna sausage — tuna is clearly the better option, but it’s all still canned meat. Monty did know, however, that the pick-and-roll was the key to unlocking this duo — and though he insisted on waiting until 14 seconds had run off of the clock to initiate the play, he parlayed this into a really fun season.

Davis has grown as an offensive threat under Alvin Gentry; some of this is natural progression, but we also need to give some credit to Gentry and his staff for expanding his game. However, the current model has a lot of Anthony Davis in isolation: this season 63% of his 2-point field goals have been assisted, down from 71.5% in 2014-15. Davis is getting much better at creating his own shot with increased usage, but he remains more devastating when involved in a pick and roll and set up by teammates.

Oleh: Ahh, the once-reliable pick-and-roll strategy so often utilized by Monty Williams. Despite the fact that Tyreke has totaled just 174 minutes, the Pelicans have actually run the play more than most (3rd highest frequency in both PnR ball handler AND roll man plays) and remain rather effective, as Davis often winds up the beneficiary.
Here's a look at the most productive interior scorers in the @NBA by playtype. Cody Zeller, Anthony Davis, Gortat, Whiteside take top spots. pic.twitter.com/8QbSrb9ZLW — Synergy Sports Tech (@SynergySST) January 11, 2017

However, Jrue Holiday, who derives 53.4% of his offense from PnR ball-handler plays, is struggling to the tune of .73 PPP. Conversely, Evans (.97 PPP) and Tim Frazier (.85 PPP) are not.

Kevin: Yep, I’m aware Evans still thrives in the pick and roll as a ball handler. He gets to the rim at an elite level. He also looks to create off of penetration more than he is given credit for. He’s very good at getting the ball to Davis when he rolls or pops, and he is exceptionally good at passing out to open shooters on the wing. He’s still getting comfortable with his repaired knee and his new teammates, but we’ve already seen some incredible kickouts from him this season, even if the shots didn’t always drop. This downhill penetrate-and-dish ethos creates a lot of open looks — we’ve seen this when Tim Frazier was running point as well. However, Tyreke is more of a threat to finish at the rim, causing defenses to send a second defender more often. Tyreke’s size also gives him better court vision to find open teammates. Holiday is slower and more methodical in his penetration — which is effective in short doses, but he doesn’t get to the rim like Evans or even Frazier. Holiday is also much more passive on the offensive end, which allows the defense to play him solo if his man slips the screen. I believe the reason he’s only averaging .73 PPP is purely because he doesn’t drive hard.

Oleh: Okay, so we’ve established that Holiday is probably the last guy Gentry should rely on to execute a pick and roll, but hey, there’s a heck of a lot more to an offense than just this single strategy. For instance, Jrue has proven more adept at finishing at the rim over the last two seasons combined and is the bigger threat over their careers from the perimeter. And interestingly, Holiday is getting to the free throw line with greater regularity for the first time this season. Is this just a small-sample-size blip or indicative of a trend going forward?

Then there’s the whole aspect of defense — the part of the process that has improved immensely for the Pelicans — and Holiday absolutely runs, or should I say defends, in circles around Evans. In isolation, pick and rolls and spot-up situations, Holiday is the vastly superior defender. I’m of the opinion that Holiday is as important as Davis in Darren Erman’s schemes, as he has the strength, speed and IQ to cover anyone on the wing. Evans, throughout his time in New Orleans, has had trouble with merely being average on defense.

Furthermore, not having suffered any setbacks with that once-problematic fibula for quite some time, it’s safe to start operating under the assumption that Holiday has moved past his leg issues. The same cannot be said of Tyreke, as he’s currently under heavy minute restrictions, and the Pelicans may not discover whether he’s past his knee problems before his contract runs out at the end of the season. After how many injuries we’ve been forced to endure, Kevin, shouldn’t it be hard to advocate that New Orleans choose the mystery door over the safer health track record?

Kevin: Well, Holiday has not played more than 65 games in any season as a Pelican, and that 65-game season was the most games he’s played by a whopping 25 contests.
I understand Evans hasn’t been the model of good health either; however, he did play 72 games in 2013-14 and 79 games in 2014-15. His last two seasons have been derailed by possibly botched surgeries, playing through injury or rushing himself back too soon. The closing stretch should give us a decent estimation of how much that knee is going to hamper him going forward.

I must also mention now that Jrue has changed agents, leaving Thad Foucher, who also represents Anthony Davis. This might be a telltale sign that Jrue isn’t interested in giving Dell Demps a hometown discount. Recently, reports have circulated that he could be seeking a contract in the $20-25m range annually. Holiday is a good player — he is not $20m/year good. In this era of good point guard play, you can’t cripple your cap space with a guard that is probably right around average at his position offensively — even if he is above average (perhaps even elite) defensively — while he also has leadership and durability concerns. Conversely, Tyreke Evans has seen his value drop due to his recent knee issues and playing in an offense that doesn’t maximize his abilities. If the offense can be adjusted or if a new staff is brought in, I could see Tyreke playing for a good deal less than Holiday is reportedly seeking.

Oleh: Is a yearly salary of 15 million — yes, that’s the minimum I think Evans will command if his knee doesn’t flare up again — the better option if it means handing the keys to a one-way player who is yet to prove there’s enough explosion left in those legs to sustain the rigors of a full season?

Dude, I absolutely love Tyreke Evans, and for a lot of the same reasons as you. I’ve defended him a countless number of times on this site, including last season when “he didn’t fit Alvin Gentry’s system”. His heart and willingness to tough out injury are readily apparent; I mean, he persevered through so many knee drainings one year ago just to get on the floor for one of the worst teams in basketball. And who could ever forget the line of all lines: "We ain't stoppin' at no stores. Straight gas."

Kevin: See, there are factors beyond play style that have me favoring Tyreke. Anthony Davis is a dominating superstar, but while he’s made strides in being the team leader, he isn’t that fiery general that motivates outside of his own play. Jrue is downright monkish on the court. He doesn’t get rattled, doesn’t show cracks, but he doesn’t exude confidence or instill fire in his teammates. Tyreke has many detractors in the media, but I think even they would have to agree that when Evans is fired up, you see that energy and charisma flow through his teammates and the arena. He has the swagger and emotion you want from your lead guard. He sets the tone.

Last season was Holiday’s best season in New Orleans, yet it was dismal in terms of win totals. He showed greatness in stretches, but there were also spans where he looked very pedestrian. In his best year as a Pelican, Holiday posted a 1.7 VORP. In contrast, 2014-15 was Evans’ best season in New Orleans, which resulted in a playoff run and a 2.6 VORP — of course, in fairness to Holiday, the PG rotation was much thinner, but it’s still a pretty sizeable margin.

Oleh: Monkish, huh? Holiday may show very little emotion in games, but I have to disagree about the moving-the-needle-with-teammates part. Ever hear of Kawhi Leonard? In a recent matchup I attended, I watched him closely throughout warmups.
Holiday went up to Davis, Evans and Buddy Hield and had words, a celebratory handshake or some personal interaction. Evans did nothing of the sort. Tyreke may have more visible leadership qualities once the game is underway, but it doesn’t mean Holiday is any less effective in that department. Have you noticed that since Lance Stephenson departed, Hield suddenly references Holiday a lot more in the media — no doubt Jrue has taken the rookie under his wing.

Oh, and speaking of fairness to Holiday, VORP is a box-score-based statistic which fails to give a player’s defense its proper due. During that 2014-15 campaign, many NBA play-by-play regression models indicated Holiday was one of the most instrumental players in the league, rating well ahead of Evans.

Kevin: In those warmups, was Holiday shooting threes from the Crescent City Basketball logo out of bounds and behind the rim? Just kidding, though; that’s always been a pet peeve of mine — take a shot you are actually going to take in the game. Fair enough on the rest, but tell me how many of these point guards you’d need a really good argument for to make you stomach them replacing Jrue Holiday: Dennis Schröder, Isaiah Thomas, Marcus Smart, Kemba Walker, Kyrie Irving, Reggie Jackson, Goran Dragic, Kyle Lowry, John Wall, Stephen Curry, James Harden, Patrick Beverley, Chris Paul, D’Angelo Russell, Jordan Clarkson, Mike Conley, Ricky Rubio, Zach LaVine, Kris Dunn, Russell Westbrook, Eric Bledsoe, Brandon Knight, Damian Lillard, Tony Parker, Patty Mills and George Hill?

There are 23 players on that list for whom I’d either jump at the switch, or at least give it a lot of consideration. This isn’t as much of a knock on Jrue as it seems — it’s just the era we are in. There is a higher level of point guard play than I can ever remember. There may have been eras that had better point guards, but the quality of play from your average starting point guard is much higher. Then factor in that you are about to have a draft class with several highly touted point guard prospects. Can you really give a guy who is maybe not clearly better than 23 other players at his position $20-25m a year? Especially with his durability concerns.

Oleh: Actually, I’d keep Holiday over about half the names on your list without any hesitation — he still sits 16th in Real Plus-Minus. That’s a pretty good predictive mark considering his data fields are filled with tons of missed time over the last three years, where he wasn’t 100% himself in a lot of contests. Meanwhile, Evans is listed 62nd among shooting guards, a weaker position no less. Evans has suffered through injury too, but it’s a bit disconcerting yet telling that the model favors the future of Holiday so much more.

That said, I think it’s time to call it quits on this long-winded discussion; however, judging by all the words, it should be apparent that the Pelicans need to keep one of the players. To be honest, though, it wouldn’t take much for me to lean towards both. They complement each other’s strengths, and unless Demps can swing a beneficial trade for one of the two point guards, the smartest play may be to retain both assets, because New Orleans is most certainly not going to attract even half the free agents everyone hopes to see. Seriously, go click on that link located in the introduction again. The reason Dell Demps is seemingly attached to the roster year after year is that there isn’t a treasure trove of replacements knocking on his door.
If both Holiday and Evans start passing some tests, like last night in Brooklyn where the Pelicans had a nice come-from-behind win against the Nets, keeping both players may be the surest path to future postseason berths. I still fondly recall how well both Holiday and Evans played against the Golden State Warriors over two years ago. Curry’s team was on a 15-game winning streak, yet the Pelicans almost pulled out the win without Anthony Davis, and OMG, did you look at the rest of the lineup?

Kevin: So, you would take 13 or 14 guys from that list over Holiday? That’s my point. He’s not top 10 at his position, so at that potentially expensive contract, with the other concerns, keeping him doesn’t seem prudent to me. However, I agree with much of what you said there. I, too, like when both get to play with each other, which I (and, I’m almost certain, you) think is what Dell had always envisioned. I would be fine with keeping both if we had a coach that did feature them together, and it would help if Holiday’s contract was more in the $16m/year range. That’s not what’s been reported, however.

Also, Holiday is the best tradeable asset we have that isn’t our first-round pick. I don’t think his value is as high as others believe it is, but he can fetch a decent return if sent to a team needing guard depth or a stopgap perimeter defender for the playoffs. I made a fictitious plan for sending him away early this month. Moving him now allows you to gain an asset or two, possibly improve lottery odds and really evaluate where Evans’ knee is. When I initially wrote that trade piece, I was expecting Evans to be done playing for bad coaches and looking to finally play for someone who understands him. I have heard whispers that he does love the players he plays with and the city, so maybe he would return on a $12-15m a year deal even if Gentry is still the man in charge. If you make the deal above and keep Evans in that range, you could have Bledsoe and Tyreke at just $2-3m a year over what you are paying just to have Holiday. All three players come with injury concerns, so I’d rather get two for the price of one — and I think both are better than the one anyway.

Also, Tyreke could start at the three and then, with a smart staggering plan, slide to the one while Bledsoe (or Jrue, if that’s the route we go) rests, and stagger the minutes in a way that one of them is on the court 95% of the time, but also that they share the court a good 65-70% of the time as well. Everyone has been clamoring for a 6’6”-plus playmaking wing that can drive, finish around the rim, get to the line, rebound and hit the catch-and-shoot three. He’s already on the team. I like him best as a point guard, but in this era positions are becoming increasingly unimportant. Tyreke can just defend the best matchup for his size and be a point guard/forward.

Oleh: Another good point: the dimension so many seek for this roster — a playmaking wing — may realistically best be filled by Evans. He’ll never be the lockdown defender one hopes for, but the Pelicans aren’t in need of that aspect with Hill, Moore and Dante Cunningham rostered and a team defense that is outpacing the production of the offense. Kevin, I know you’re big on results, so we’re going to bring this to a close with this: does everyone remember how deadly the Davis-Evans-Holiday combination has been in the past, especially last season under Gentry? They were absolutely sensational in a sea of muck that was the Pelicans one season ago.
That’s the best support, by a long country mile, that Anthony Davis has seen around him since arriving in New Orleans. So, barring the outcome of some incredibly fortunate trade, the Pelicans best bet at point guard going forward is to gamble on one of Jrue Holiday or Tyreke Evans. You side with Reke, Kevin, and I, Jrue, but we both agree keeping the duo together could potentially wind up being the best option in the long run.
package io.quarkus.qson.deserializer; public interface AnySetter { void setAny(Object target, String key, Object value); }
#include <universal_adaptor.h>
#include <iostream>
#include <memory>

#include "../../help_lib.h"

template<typename Self>
struct from_str;

template<std::integral Self>
struct from_str<Self> {
    Self operator()(const std::string& str) const {
        return static_cast<Self>(std::stoll(str));
    }
};

template<std::floating_point Self>
struct from_str<Self> {
    Self operator()(const std::string& str) const {
        return static_cast<Self>(std::stold(str));
    }
};

// 'declare' is presumably a macro provided by universal_adaptor.h that
// introduces a pipeable adaptor object.
template<typename Self>
declare parse = from_str<Self>{};

template<>
struct from_str<bool> {
    bool operator()(const std::string& str) const {
        if (str == "true") {
            return true;
        }
        if (str == "false") {
            return false;
        }
        throw std::runtime_error("bool cannot parse from \"" + str + "\"");
    }
};

int main() {
    namespace boost = std;
    std::cout << ("true" | parse<bool>()) << std::endl;
    std::cout << ("12.0" | parse<int>()) << std::endl;   // stoll stops at '.', yielding 12
    std::cout << ("12.2" | parse<int>()) << std::endl;
    std::cout << ("12.2" | parse<float>()) << std::endl;
}
A Force-directed Approach for Fast Generation of Efficient Multi-Port NoC Architectures

Networks-on-chip (NoC) are an emerging style of system design introduced to overcome the communication and performance bottlenecks of a shared-bus design. Moving away from the traditional NoC mesh design, the multi-local-port router (MLPR) has been introduced as a design alternative to improve bandwidth, reduce network area (36% average area savings) and, eventually, improve the overall performance of the NoC system. In this research, we present a fast mapping tool (cMap) for generating NoC architectures using MLPRs. The algorithm exploits the advantages offered by MLPRs and starts with a minimum-dimension mesh. After an initial bandwidth-communication-cost-based nearest-neighbor placement, it uses a force-directed approach to iteratively expand the mesh as the cost gets reduced. The algorithm introduces the concept of folding to improve the NoC design. Unlike the earlier exhaustive-search-based optiMap algorithm, cMap can handle any size of task graph, producing near-optimal results (average cost difference between 3% and 10%) in a couple of seconds. We experiment with a rich set of 22 benchmarks and report the results.
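As a rough illustration of the force-directed idea described in this abstract (and only of that idea: cMap's handling of MLPRs, mesh expansion and folding is not reproduced here), a toy placement loop might look as follows, with all task-graph parameters invented for the example:

import itertools
import random

# Toy force-directed placement on a fixed mesh: each iteration pulls a task
# toward the bandwidth-weighted centroid of its communication partners and
# accepts the tile swap only if it reduces
# total cost = sum(bandwidth * Manhattan hop distance).
random.seed(0)
n_tasks, mesh = 9, 3                       # 9 tasks on a 3x3 mesh
bw = {}                                    # random sparse bandwidth demands
for a, b in itertools.combinations(range(n_tasks), 2):
    if random.random() < 0.4:
        bw[(a, b)] = random.randint(1, 10)

pos = {t: (t // mesh, t % mesh) for t in range(n_tasks)}   # task -> tile
tile = {v: k for k, v in pos.items()}                      # tile -> task

def cost():
    return sum(w * (abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]))
               for (a, b), w in bw.items())

for it in range(50):
    improved = False
    for t in range(n_tasks):
        # Bandwidth-weighted centroid of t's partners acts as the "force".
        ws = [(w, pos[b if a == t else a])
              for (a, b), w in bw.items() if t in (a, b)]
        if not ws:
            continue
        W = sum(w for w, _ in ws)
        cx = round(sum(w * p[0] for w, p in ws) / W)
        cy = round(sum(w * p[1] for w, p in ws) / W)
        target = (min(max(cx, 0), mesh - 1), min(max(cy, 0), mesh - 1))
        if target == pos[t]:
            continue
        other, old = tile[target], pos[t]
        before = cost()
        pos[t], pos[other] = target, old       # tentative swap
        tile[target], tile[old] = t, other
        if cost() >= before:                   # revert if not better
            pos[t], pos[other] = old, target
            tile[target], tile[old] = other, t
        else:
            improved = True
    if not improved:
        break

print("final cost:", cost(), "placement:", pos)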
// Copyright (c) 2016, <NAME> // Copyright (c) 2017, The University of Texas at Austin // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are met: // // 1. Redistributions of source code must retain the above copyright notice, // this list of conditions and the following disclaimer. // // 2. Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in the // documentation and/or other materials provided with the distribution. // // 3. Neither the name of the copyright holder nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS // IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, // THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR // PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR // CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, // EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, // PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; // OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, // WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR // OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF // ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #include <eigen_conversions/eigen_msg.h> #include <pcl_conversions/pcl_conversions.h> #include <opencv2/highgui/highgui.hpp> #include "orp/core/orp_utils.h" #include "orp/classifier/rgb_classifier.h" #include <sstream> int main(int argc, char **argv) { // Start up the name and handle command-line arguments. srand (static_cast <unsigned> (time(0))); ros::init(argc, argv, "rgb_classifier"); ROS_INFO("Starting RGB Classifier"); RGBClassifier v; v.init(); // for cluster visualization //cv::namedWindow( "RGBCluster", cv::WINDOW_NORMAL ); ros::AsyncSpinner spinner(2); spinner.start(); ros::waitForShutdown(); //cv::destroyAllWindows(); return 1; } RGBClassifier::RGBClassifier(): Classifier3D() { } void RGBClassifier::cb_classify(const sensor_msgs::PointCloud2& cloud) { ROS_DEBUG_NAMED("RGB Classfiier", "Received point cloud to classify as R/G/B"); orp::ClassificationResult classRes; classRes.method = "rgb"; orp::Segmentation seg_srv; seg_srv.request.scene = cloud; bool segmentation_succeeded = segmentation_client_.call(seg_srv); if(!segmentation_succeeded) { ROS_ERROR_STREAM_THROTTLE_NAMED(5, "RGB Classifier", "Could not call segmentation service at " << segmentation_service_); } std::vector<sensor_msgs::PointCloud2> clouds = seg_srv.response.clusters; if(!clouds.empty()) { for(auto eachCloud = clouds.begin(); eachCloud != clouds.end(); eachCloud++) { if(eachCloud->width < 3) { continue; } pcl::PointCloud<ORPPoint>::Ptr thisCluster( new pcl::PointCloud<ORPPoint>); pcl::fromROSMsg(*eachCloud, *thisCluster); std::string color = "unknown"; M.release(); M = cv::Mat(thisCluster->points.size(), 1, CV_8UC3, cv::Scalar(0,0,0)); int i = 0; pcl::PointCloud<ORPPoint>::iterator point; float r=0, g=0, b=0; for(point = thisCluster->points.begin(); point < thisCluster->points.end(); ++point, ++i) { r += point->r; g += point->g; b += point->b; } // std::cout << "M has " << eachCloud->width << " elements." 
      color = getColor(r, g, b);

      orp::WorldObject thisObject;
      thisObject.label = "obj_" + color;
      thisObject.pose.header.frame_id = eachCloud->header.frame_id;

      Eigen::Vector4f clusterCentroid;
      pcl::compute3DCentroid(*thisCluster, clusterCentroid);

      // Initialize to identity so poseEigenToMsg does not read an
      // uninitialized rotation block below.
      Eigen::Affine3d finalPose = Eigen::Affine3d::Identity();
      finalPose(0,3) = clusterCentroid(0);
      finalPose(1,3) = clusterCentroid(1);
      finalPose(2,3) = clusterCentroid(2);
      tf::poseEigenToMsg(finalPose, thisObject.pose.pose);

      thisObject.pose.pose.orientation.x = 0;
      thisObject.pose.pose.orientation.y = 0;
      thisObject.pose.pose.orientation.z = 0;
      thisObject.pose.pose.orientation.w = 1;

      thisObject.probability = 0.75;  // fixed confidence for this simple classifier
      classRes.result.push_back(thisObject);
    }
  }
  classification_pub_.publish(classRes);
}

// Color reduction adapted from:
// http://stackoverflow.com/questions/5906693/how-to-reduce-the-number-of-colors-in-an-image-with-opencv-in-python
inline uchar reduceVal(const uchar val)
{
  if (val < 64) return 0;
  if (val < 128) return 64;
  return 255;
}

void RGBClassifier::processColors(cv::Mat& img)
{
  uchar* pixelPtr = img.data;
  for (int i = 0; i < img.rows; i++)
  {
    for (int j = 0; j < img.cols; j++)
    {
      const int pi = i*img.cols*3 + j*3;
      pixelPtr[pi + 0] = reduceVal(pixelPtr[pi + 0]); // B
      pixelPtr[pi + 1] = reduceVal(pixelPtr[pi + 1]); // G
      pixelPtr[pi + 2] = reduceVal(pixelPtr[pi + 2]); // R

      if(pixelPtr[pi+0] == 64) pixelPtr[pi+0] = 127;
      if(pixelPtr[pi+1] == 64) pixelPtr[pi+1] = 127;
      if(pixelPtr[pi+2] == 64) pixelPtr[pi+2] = 127;
    }
  }
}

std::string RGBClassifier::getColor(float r, float g, float b)
{
  // TODO(Kukanani):
  // This function used to take a cv::Mat and use the cv::sum function to
  // calculate the r, g, and b values in one line each. However, this began
  // giving me segfaults after migrating to a new machine. I don't know the
  // exact issue, but I suspect a conflict between OpenCV versions 2 and 3.
  // Whatever the reason, you now have to pass the r/g/b values directly to
  // this function.

  // Which channel sum is greatest?
  if(r > g && r > b)
  {
    return "red";
  }
  else if(g > r && g > b)
  {
    return "green";
  }
  return "blue";  // ties and blue-dominant sums fall through to blue
}

///////////////////////////////

// Example of OpenCV-based point cloud filtering
// pcl::PointCloud<ORPPoint>::Ptr pclCloud = pcl::PointCloud<ORPPoint>::Ptr(new pcl::PointCloud<ORPPoint>());
// pcl::fromROSMsg(cloud, *pclCloud);
//
// uint8_t* pixelPtr = (uint8_t*)cv_ptr->image.data;
// int cn = cv_ptr->image.channels();
// cv::Scalar_<uint8_t> bgrPixel;
// int i = 0;
// for (size_t u = 0; u < cloud.height; ++u) // rows
// {
//   for (size_t v = 0; v < cloud.width; ++v, ++i) // cols
//   {
//     if(cv_ptr->image.at<cv::Vec3b>(u,v).val[0] > 128) { // blue channel check
//       pclCloud->points[i].x = std::numeric_limits<float>::quiet_NaN();
//       pclCloud->points[i].y = std::numeric_limits<float>::quiet_NaN();
//       pclCloud->points[i].z = std::numeric_limits<float>::quiet_NaN();
//     }
//   }
// }
// pcl::toROSMsg(*pclCloud, cloud);
// filterPub.publish(cloud);
// } //classify
Mauricio Pochettino says he is very happy at Tottenham and is looking forward to leading Spurs in their hunt for trophies in their new stadium.

Mauricio Pochettino says winning the Premier League is "our dream" and Tottenham is a "fantastic club to achieve things with".

The north London club came within touching distance of lifting the top flight trophy last term, but a poor run of form heading into the home straight saw them eventually finish in third - 11 points behind eventual winners Leicester.

The Argentine manager - now in his third year at White Hart Lane - says he hopes to be able to win the Premier League with Spurs and believes they are in a good position to make progress despite last season's disappointment.

"It is our dream to win the Premier League. It is our premier competition. For us, it is our first step," Pochettino told Sky Sports.

"I am very happy here - me and my staff as well as my family. I think we find that we are in a very good place to ensure we can work hard. It is a big club with lots of supporters and it is a fantastic club to achieve big things with.

"We finished third in the league last season. It was a tough summer because the way that we finished the season was bad. We showed some weakness at the end of the season that we need to try and work out, which is always important.

"When you compete with a big side, it is always difficult, but it shows we are a better team than last season. We have had some problems in the last months after Man City.

"There were more things that maybe made it difficult for us to compete at our best and I think now we are in a good position to try to achieve good results.

"We need to work hard because teams like Chelsea, Liverpool, Man City, they have all improved. It is true that the Premier League is tougher than last season because all the big teams are improving their squads but we are confident in ours."

Tottenham are awaiting the completion of their new stadium, which is scheduled to be ready by the start of the 2018/19 campaign, and Pochettino believes the ground will help them compete for silverware.

"I think for us, our first challenge is to finish the new stadium. We have brilliant facilities on the training ground and I think the new stadium can help a lot for us to fight for the titles.

"We have to wait now and it's all about time. It's difficult to ask for time in football but I think we are in a good process. A good challenge is to arrive at the new stadium and at that moment, be in the position to fight for titles."
/**
 * Builds a list of Date objects containing every date between the first and
 * second dates, inclusive of both endpoints. Returns null if the first date
 * is after the second.
 *
 * @param primeira the start date (inclusive)
 * @param segunda  the end date (inclusive)
 * @return the list of dates, or null if the range is invalid
 */
public ArrayList<Date> VetorDeDatasEntreDuasData(Date primeira, Date segunda) {
    if (this.diferencaEmDias(primeira, segunda) < 0) {
        return null;
    }
    ArrayList<Date> vetor = new ArrayList<>();
    Date endDate = primeira;
    while (this.diferencaEmDias(endDate, segunda) != 0) {
        vetor.add(endDate);
        // Advance one calendar day (DAY_OF_MONTH is a static field of Calendar).
        Calendar cal = Calendar.getInstance();
        cal.setTime(endDate);
        cal.add(Calendar.DAY_OF_MONTH, 1);
        endDate = cal.getTime();
    }
    vetor.add(segunda);
    return vetor;
}
from pysubs2 import SSAFile


def remove_tiny_subs(subs, duration_millis=1000, left_millis=2000,
                     right_millis=2000, style=None):
    """Drop very short subtitle events unless they sit close to a neighbour.

    An event shorter than ``duration_millis`` is kept only if the gap to the
    previous event is under ``left_millis`` or the gap to the next event is
    under ``right_millis``. If ``style`` is given, only events of that style
    are candidates for removal; all other events pass through untouched.
    """
    copy_subs = SSAFile()
    new_subs = SSAFile()
    # Collect the events we are allowed to filter.
    for sub in subs:
        if style is None or sub.style == style:
            copy_subs.append(sub)
    for i, sub in enumerate(copy_subs):
        if sub.duration >= duration_millis:
            new_subs.append(sub)
            continue
        if left_millis is None and right_millis is None:
            continue  # tiny event with no proximity rule: drop it
        # Guard the neighbour lookups so a single-event file cannot raise an
        # IndexError (the original indexed i + 1 unconditionally at i == 0).
        near_prev = (i > 0 and left_millis is not None
                     and sub.start - copy_subs[i - 1].end < left_millis)
        near_next = (i < len(copy_subs) - 1 and right_millis is not None
                     and copy_subs[i + 1].start - sub.end < right_millis)
        if near_prev or near_next:
            new_subs.append(sub)
    # Re-add the events of other styles that were never candidates.
    if style is not None:
        for sub in subs:
            if sub.style != style:
                new_subs.append(sub)
    new_subs.sort()
    return new_subs
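A quick usage sketch for the function above, assuming pysubs2 (the library that provides the SSAFile type it builds) and a subtitle file on disk; the path and style name are placeholders:

from pysubs2 import SSAFile

subs = SSAFile.load("episode.ass")  # placeholder path
# Drop blink-and-miss events in the "Default" style unless they hug a neighbour.
cleaned = remove_tiny_subs(subs, duration_millis=800, style="Default")
cleaned.save("episode.cleaned.ass")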
Solvent effects in esterification of phthalic anhydride on sulfated titania-based solid acid

Esterification of phthalic anhydride with 2-octanol has been studied in the presence of sulfated titania. This study shows that sulfated titania has a strong catalytic effect on esterification reactions. The catalyst was characterized by X-ray diffraction, FT-IR, Brunauer-Emmett-Teller (BET) surface-area analysis, SEM, and thermogravimetric methods. The effect of the type and volume of solvent was investigated using three solvents: acetonitrile, dimethyl sulfoxide (DMSO), and an ionic liquid, 2-hydroxyethylammonium formate. The results show that the addition of these solvents has a significant effect on the reaction rate of phthalic anhydride: DMSO and 2-hydroxyethylammonium formate enhance the reaction rate, while acetonitrile reduces it.
def write(self, data):
    """Send all of ``data`` on the underlying non-blocking socket.

    send() may transmit only part of the buffer, so loop until every byte
    has gone out, backing off briefly whenever the socket would block.
    (Requires ``import time`` at module level.)
    """
    with self._lock:  # serialize concurrent writers
        view = memoryview(data)  # lets us slice without copying
        total_sent, total = 0, len(data)
        while total_sent < total:
            try:
                sent = self._socket.send(view[total_sent:])
                if sent == 0:
                    raise ConnectionResetError(
                        'The server has closed the connection.')
                total_sent += sent
            except BlockingIOError:
                # Kernel send buffer is full; wait briefly and retry.
                time.sleep(self.delay)
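For context, here is a minimal self-contained sketch of the same partial-send pattern, exercised over a local socketpair. The NonBlockingWriter harness, the delay value, and the payload size are illustrative assumptions, not part of the original module:

import socket
import threading
import time


class NonBlockingWriter:
    """Toy harness exercising the partial-send loop above."""

    def __init__(self, sock, delay=0.005):
        sock.setblocking(False)  # send() now raises BlockingIOError when full
        self._socket = sock
        self._lock = threading.Lock()
        self.delay = delay

    def write(self, data):
        with self._lock:
            view = memoryview(data)
            total_sent, total = 0, len(data)
            while total_sent < total:
                try:
                    sent = self._socket.send(view[total_sent:])
                    total_sent += sent
                except BlockingIOError:
                    time.sleep(self.delay)


a, b = socket.socketpair()
writer = NonBlockingWriter(a)
payload = b"x" * 1_000_000  # large enough to overflow the send buffer
threading.Thread(target=writer.write, args=(payload,)).start()

received = 0
while received < len(payload):  # drain the other end so the writer can finish
    received += len(b.recv(65536))
print("received", received, "bytes")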
// cpp/SpeculativeValues/FunctionalSources/SpeculativeLiteral.h
#ifndef SpeculativeLiteral_h
#define SpeculativeLiteral_h

#include <ostream>
#include <sstream>
#include <string>

#include "object_ptr.h"
#include "SpeculativeNode.h"

// A leaf node in a speculative-value expression tree: it wraps a concrete
// value of type T and simply returns it when resolved.
template <typename T>
class SpeculativeLiteral : public SpeculativeNode<T> {
public:
  SpeculativeLiteral( const T& inT ):
    SpeculativeNode<T>(),
    mValue(inT)
  { }

  // Forwarding constructor: build the wrapped value in place.
  template <typename... ArgTypes>
  SpeculativeLiteral( ArgTypes... inArgs ):
    SpeculativeNode<T>(),
    mValue(inArgs...)
  { }

  virtual ~SpeculativeLiteral() {}

  virtual T resolve() { return mValue; }

  virtual void makeConcrete() { mValue.makeConcrete(); }

  virtual std::string to_string() {
    std::stringstream ss;
    ss << mValue;
    return ss.str();
  }

  friend std::ostream& operator<<(std::ostream& os, SpeculativeLiteral<T> l) {
    return os << l.to_string();
  }

private:
  T mValue;
};

#endif
/**
 * Copyright 2013 Google Inc. All Rights Reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package co.uk.gauntface.devicelab.appengine;

import co.uk.gauntface.devicelab.appengine.controller.DeviceUserController;
import co.uk.gauntface.devicelab.appengine.controller.DevicesController;
import co.uk.gauntface.devicelab.appengine.model.Device;
import co.uk.gauntface.devicelab.appengine.model.DeviceUserPair;
import co.uk.gauntface.devicelab.appengine.utils.GPlusTokenInfo;
import co.uk.gauntface.devicelab.appengine.utils.Utils;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.*;

@SuppressWarnings("serial")
public class DevicesServlet extends HttpServlet {

    private DevicesController mDevicesController;
    private DeviceUserController mDeviceUserController;

    public DevicesServlet() {
        mDevicesController = new DevicesController();
        mDeviceUserController = new DeviceUserController();
    }

    public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        System.out.println("DevicesServlet: doPost() req.getRequestURI() = " + req.getRequestURI());

        String requestUri = req.getRequestURI();
        String[] uriParts = requestUri.split("/");
        if (uriParts.length < 4) {
            // The action segment is missing; reject with a proper status code.
            System.out.println("The URI parts length is less than 4 => " + uriParts.length);
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Malformed request URI");
            return;
        }

        String action = uriParts[3];
        if (action.equals("get")) {
            handleGetEndpoint(req, resp);
        } else if (action.equals("register")) {
            handleRegisterEndpoint(req, resp);
        }
    }

    public void handleGetEndpoint(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        System.out.println("DevicesServlet: handleGetEndpoint()");

        String idToken = Utils.getPostParameter("id_token", req);
        String userId = GPlusTokenInfo.getUserId(idToken);

        List<Device> devices = new ArrayList<Device>();
        if (userId != null) {
            System.out.println("DevicesServlet: UserId = " + userId);
            List<DeviceUserPair> devicePairs = mDeviceUserController.getDeviceIds(userId);
            devices = mDevicesController.getDevices(devicePairs);
        } else {
            System.out.println("DevicesServlet: No UserId Set");
        }

        // Assemble the device list as a JSON array by hand.
        String jsonResponse = "{\"devices\": [";
        for (int i = 0; i < devices.size(); i++) {
            jsonResponse += devices.get(i).getJsonString();
            if (i + 1 < devices.size()) {
                jsonResponse += ", ";
            }
        }
        jsonResponse += "]}";

        resp.addHeader("Access-Control-Allow-Origin", "*");
        resp.setContentType("application/json");
        resp.getWriter().println(jsonResponse);
    }

    public void handleRegisterEndpoint(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String idToken = Utils.getPostParameter("id_token", req);

        String deviceIdString = Utils.getPostParameter("device_id", req);
        Long deviceId = null;
        if (deviceIdString != null) {
            deviceId = Long.valueOf(deviceIdString);
        }

        String gcmId = Utils.getPostParameter("gcm_id", req);
        String deviceName = Utils.getPostParameter("device_name", req);
        String deviceNickname = Utils.getPostParameter("device_nickname", req);
        int platformId = Integer.parseInt(Utils.getPostParameter("platform_id", req));
        String platformVersion = Utils.getPostParameter("platform_version", req);

        String userId = GPlusTokenInfo.getUserId(idToken);

        Device device = new Device(deviceId, gcmId, deviceName, deviceNickname, platformId, platformVersion);
        deviceId = mDevicesController.registerDevice(device);
        if (deviceId != null) {
            mDeviceUserController.registerUserDeviceParing(new DeviceUserPair(userId, deviceId));
        } else {
            // Registration failed; report the error instead of dereferencing null below.
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Unable to register device");
            return;
        }

        String jsonResponse = "{";
        jsonResponse += "\"device_id\": " + Long.toString(deviceId);
        jsonResponse += "}";

        resp.addHeader("Access-Control-Allow-Origin", "*");
        resp.setContentType("application/json");
        resp.getWriter().println(jsonResponse);
    }
}
extern crate serde;
extern crate reqwest;

use serde::{Serialize, Deserialize};
use reqwest::StatusCode;

/// Response returned after sending a single SMS.
#[derive(Serialize, Deserialize, Debug)]
pub struct SMSResponse {
    pub status: String,
    pub message_id: String,
    pub credit_used: u32,
}

/// Remaining SMS credits on the account.
#[derive(Serialize, Deserialize, Debug)]
pub struct SMSCreditResponse {
    pub sms_credits: String,
}

/// Payload for an outgoing SMS request.
#[derive(Serialize, Deserialize, Debug)]
pub struct SMSRequestPayload<'a> {
    pub to: &'a str,
    pub from: &'a str,
    pub message: &'a str,
}

/// HTTP methods supported by the client.
pub enum RequestMethods {
    Get,
    Post,
}

/// Errors the client can surface to callers.
#[derive(Debug)]
pub enum JusibeError {
    InvalidCredentialError,
    BadRequestError,
    NoError,
    RequestError,
}

/// Delivery status for a previously sent message.
#[derive(Serialize, Deserialize, Debug)]
pub struct DeliveryStatusResponse {
    pub message_id: String,
    pub status: String,
    pub date_sent: String,
    pub date_delivered: String,
}

/// Response returned after submitting a bulk SMS job.
#[derive(Serialize, Deserialize, Debug)]
pub struct BulkSMSResponse {
    pub status: String,
    pub bulk_message_id: String,
}

/// Status breakdown for a bulk SMS job.
#[derive(Serialize, Deserialize, Debug)]
pub struct BulkStatusResponse {
    pub bulk_message_id: String,
    pub status: String,
    pub created: String,
    pub processed: String,
    pub total_numbers: String,
    pub total_unique_numbers: String,
    pub total_valid_numbers: String,
    pub total_invalid_numbers: String,
}

impl From<reqwest::Error> for JusibeError {
    /// Map HTTP status codes from reqwest onto client error variants.
    fn from(err: reqwest::Error) -> JusibeError {
        match err.status() {
            Some(StatusCode::BAD_REQUEST) => JusibeError::BadRequestError,
            Some(StatusCode::UNAUTHORIZED) => JusibeError::InvalidCredentialError,
            // Errors without a status (e.g. connection failures) carry no code.
            None => JusibeError::NoError,
            _ => JusibeError::RequestError,
        }
    }
}
package template var VarEmailNewOrderAdminHtmlFile = []byte(`<html> <head> <title>{{$.Else.Subject}}</title> </head> <body> <h2>Client</h2> <table border="1"> <tbody> <tr> <td><b>Last&nbsp;name</b>&nbsp;&nbsp;&nbsp;</td> <td> {{if ne $.Client.LastName "" }} {{$.Client.LastName}} {{else}} - {{end}} </td> </tr> <tr> <td><b>First&nbsp;name</b>&nbsp;&nbsp;&nbsp;</td> <td> {{if ne $.Client.FirstName "" }} {{$.Client.FirstName}} {{else}} - {{end}} </td> </tr> <tr> <td><b>Middle&nbsp;name</b>&nbsp;&nbsp;&nbsp;</td> <td> {{if ne $.Client.MiddleName "" }} {{$.Client.MiddleName}} {{else}} - {{end}} </td> </tr> <tr> <td><b>Phone</b>&nbsp;&nbsp;&nbsp;</td> <td> {{if ne $.Client.Phone "" }} {{$.Client.Phone}} {{else}} - {{end}} </td> </tr> <tr> <td><b>Email</b>&nbsp;&nbsp;&nbsp;</td> <td> {{if ne $.Client.Email "" }} {{$.Client.Email}} {{else}} - {{end}} </td> </tr> </tbody> </table> <div>&nbsp;</div> <h2>Delivery</h2> <div> {{if ne $.Client.DeliveryComment "" }} {{$.Client.DeliveryComment}} {{else}} - {{end}} </div> <div>&nbsp;</div> <h2>Order comment</h2> <div> {{if ne $.Client.OrderComment "" }} {{$.Client.OrderComment}} {{else}} - {{end}} </div> <div>&nbsp;</div> <h2>Order products</h2> <div> <table border="1" width="100%"> <tbody> {{range $.Basket.Products}} <tr> <td> {{.RenderName}} </td> <td> {{.RenderPrice}}&nbsp;{{$.Basket.Currency.Code}}&nbsp;x&nbsp;{{.RenderQuantity}} </td> <td> {{.RenderSum}} {{$.Basket.Currency.Code}} </td> </tr> {{end}} </tbody> </table> </div> <h2>Total: {{$.Basket.RenderTotalSum}} {{$.Basket.Currency.Code}}</h2> <div>&nbsp;</div> <div><a href="{{$.Else.CpOrderLink}}" target="_blank">{{$.Else.CpOrderLink}}</a></div> </body> </html>`)
// clears an object, making it appropriate for reuse static void object_reset(topazScript_t * s, topazScript_Object_t * o) { o->type = topazScript_Object_Type_Undefined; o->api = NULL; o->nativeData = NULL; o->context = s; o->apiData = NULL; }
DENVER – It's hard to complain about Colorado's music scene, and harder still when the music is free of charge. Here are several concerts taking place across Colorado you may want to take the whole family to:

1. Arapahoe Philharmonic Summer Concert

The Arapahoe Philharmonic is launching its 2017-18 season with a free Americana/pop performance featuring music by Bernstein, Copland, Gershwin, Sousa, Williams and others. The concert takes place Sunday, July 2 at 3 p.m. at the Fisher Auditorium on The Englewood Campus. The campus is located at 3800 South Logan Street in Englewood.

2. ‘Rock Your Summer’ at the Outlets at Castle Rock

The Outlets at Castle Rock are kicking off their annual “Rock Your Summer” concert series on Friday, June 30 from 6 p.m. to 8 p.m. Headlining the summer concert series will be the Tunisia Band, a Denver-based group that has garnered attention for their “insanely high-energy live show.” The Outlets at Castle Rock are located at 5050 Factory Shops Boulevard, Suite 437. The Country Music Project will play on July 28 from 6 p.m. to 8 p.m. and The Midnight Club will perform on August 25 from 6 p.m. to 8 p.m.

3. Broomfield’s Summer Nights Concert Series

The annual Summer Nights Concert Series in Broomfield is at it again with live music, dancing, drinks, food and fun at the AMC Theaters Court. There’s also a beer garden that opens at 5 p.m. every Thursday. To see a full lineup of the bands, click here. The concert series goes through August 10 from 6:30 p.m. to 8:30 p.m.

4. Concerts in Clement Park

Foothills Park & Recreation District's “Concerts in Clement Park” incorporates community collaborations and grant-funded performances which are free, open to the public and family friendly. The performances are held at the Grant Family Amphitheater in Clement Park starting at 7 p.m. To see a list of the performances, click here.

5. Denver’s Union Station Summer Concert Series

Denver’s Union Station knows how to work a crowd: the iconic landmark is hosting a summer concert series until Aug. 25, featuring local bands outside the Terminal Bar patio. The concert series will be held on the last Friday of each month from 5 p.m. to 8 p.m. on an extended Terminal Bar Plaza beer patio. There is no cover charge, but you must be 21 to consume alcohol.

6. Concert in the Park series in Frisco

Frisco’s Concert in the Park series kicked off last week and will run for nine Thursdays through Aug. 17 at the Frisco Historic Park. The series will feature music ranging from bluegrass to country to jazz to rock. The concerts start at 5:30 p.m. each Thursday and are free to the public. Guests are invited to bring lawn chairs and well-behaved pets to join in on the best family-friendly happy hour in Summit County.
import os
import threading
from tkinter import Tk, Label, Button, IntVar, Radiobutton
from tkinter import messagebox as tkMessageBox
from tkinter.filedialog import askopenfile, askdirectory

from gps.exif.exif_tool import ExifOps
from gps.exif.search_geotag import GeoTag, AccuracyTolerance, LocationPriority, LocationOverWrite, OperationType
from gps.pasing.google_parser import Location, DataModel


class MyFirstGUI():
    working = False
    time_accuracy = None
    time_priority = None
    overwrite_policy = None
    op_path = None
    operation = None

    def __init__(self, master):
        self.master = master
        master.title("Welcome to Exif Tool")
        master.minsize(width=600, height=600)

        # Step 1: load the Google location-history JSON file
        self.load_location_file = Button(master, text="Load Json", command=self.load_json_file)
        self.load_location_file.grid(row=1, columnspan=7, ipady=10)

        # Step 2: load the images folder for EXIF manipulation
        self.load_image_folder_btn = Button(master, text="Load Image Folder", command=self.load_image_folder)
        self.load_image_folder_btn.grid(row=2, columnspan=7, ipady=10)

        # Step 3: location time accuracy
        self.time_accuracy = IntVar()
        Label(master, text="Select location time accuracy ").grid(row=3, columnspan=7, ipady=10)
        Radiobutton(master, text="01 Min", variable=self.time_accuracy, value=1).grid(row=4, column=0, ipady=10)
        Radiobutton(master, text="05 Min", variable=self.time_accuracy, value=5).grid(row=4, column=1, ipady=10)
        Radiobutton(master, text="15 Min", variable=self.time_accuracy, value=15).grid(row=4, column=2, ipady=10)
        Radiobutton(master, text="30 Min", variable=self.time_accuracy, value=30).grid(row=4, column=3, ipady=10)
        Radiobutton(master, text="01 hr", variable=self.time_accuracy, value=60).grid(row=4, column=4, ipady=10)
        Radiobutton(master, text="02 hr", variable=self.time_accuracy, value=120).grid(row=4, column=5, ipady=10)
        Radiobutton(master, text="Day", variable=self.time_accuracy, value=0).grid(row=4, column=6, ipady=10)
        self.time_accuracy.set(15)

        # Step 4: location time priority
        self.time_priority = IntVar()
        Label(master, text="Select location time priority ").grid(row=5, columnspan=7, ipady=10)
        Radiobutton(master, text="PAST", variable=self.time_priority, value=1).grid(row=6, column=0, ipady=10)
        Radiobutton(master, text="FUTURE", variable=self.time_priority, value=-1).grid(row=6, column=1, ipady=10)
        Radiobutton(master, text="ANY", variable=self.time_priority, value=0).grid(row=6, column=2, ipady=10)
        self.time_priority.set(0)

        # Step 5: overwrite policy
        self.overwrite_policy = IntVar()
        Label(master, text="Select Location Overwrite policy ").grid(row=7, columnspan=7, ipady=10)
        Radiobutton(master, text="Skip", variable=self.overwrite_policy, value=1).grid(row=8, column=0, ipady=10)
        Radiobutton(master, text="OverWrite", variable=self.overwrite_policy, value=0).grid(row=8, column=1, ipady=10)
        self.overwrite_policy.set(1)

        # Step 6: operation type
        self.operation = IntVar()
        Label(master, text="Operation Type ").grid(row=9, columnspan=7, ipady=10)
        Radiobutton(master, text="Search", variable=self.operation, value=1).grid(row=10, column=0, ipady=10)
        Radiobutton(master, text="Execute", variable=self.operation, value=0).grid(row=10, column=1, ipady=10)
        self.operation.set(0)

        # Step 7: start processing. Note: Button(...).grid(...) returns None,
        # so the widget is created and gridded in two statements.
        self.process = Button(master, text="Start Processing", command=self.process_images)
        self.process.grid(row=11, columnspan=7, ipady=10)

        self.remove_geo_tags = Button(master, text="Remove Geo Tags", command=self.remove_geo_tag)
        self.remove_geo_tags.grid(row=12, column=1)
        # Bug fix: this button previously invoked load_json_file, and the
        # attribute name shadowed the remove_all_tags method below.
        self.remove_all_tags_btn = Button(master, text="Remove All Exif Tags", command=self.remove_all_tags)
        self.remove_all_tags_btn.grid(row=12, column=5)

    def is_number(self, s):
        try:
            float(s)
            return True
        except ValueError:
            pass
        try:
            import unicodedata
            unicodedata.numeric(s)
            return True
        except (TypeError, ValueError):
            pass
        return False

    def load_image_folder(self):
        if self.working:
            tkMessageBox.showinfo("Exif Tool", "Wait till the current task gets finished!")
            return
        self.op_path = askdirectory()

    def load_json_file(self):
        if self.working:
            tkMessageBox.showinfo("Exif Tool", "Wait till the current task gets finished!")
            return
        chosen = askopenfile()
        if chosen is None:  # user cancelled the dialog
            return
        filename = chosen.name

        def load_data():
            self.working = True
            DataModel(filename).load_data_map()
            self.working = False
            tkMessageBox.showinfo("Exif Tool", "Data successfully Loaded from {}".format(filename))

        threading.Thread(target=load_data, daemon=True).start()

    def remove_geo_tag(self):
        if not self.check_files():
            return

        def remove():
            self.working = True
            ExifOps.remove_gps_tags(self.op_path)
            self.working = False

        threading.Thread(target=remove, daemon=True).start()

    def remove_all_tags(self):
        if not self.check_files():
            return

        def remove():
            self.working = True
            ExifOps.remove_all_tags(self.op_path)
            self.working = False

        threading.Thread(target=remove, daemon=True).start()

    def process_images(self):
        if not self.check_files():
            return

        def execute():
            self.working = True
            data = ExifOps.batch_job(self.op_path,
                                     AccuracyTolerance(self.time_accuracy.get()),
                                     LocationPriority(self.time_priority.get()),
                                     LocationOverWrite(self.overwrite_policy.get()),
                                     OperationType(self.operation.get()))
            tkMessageBox.showinfo("Exif Tool", "Total images : {}, found Location for : {}".format(data[0], data[1]))
            self.working = False

        threading.Thread(target=execute, daemon=True).start()

    def do(self, loc):
        # NOTE: assumes a self.lable_location Label exists elsewhere in the UI.
        self.working = True
        l = Location(loc)
        data = GeoTag().find(l)
        print(data)
        self.lable_location['text'] = str(data)
        diff = int(loc["timestampMs"]) - data.milisec
        diff /= 60000
        tkMessageBox.showinfo("Say Hello", "{} - \n diff is {} min".format(str(data), diff))
        self.working = False

    def check_files(self):
        """Return True when it is safe to start a new background task."""
        if self.working:
            tkMessageBox.showinfo("Exif Tool", "Wait till the current task gets finished!")
            return False
        if self.op_path is None:
            tkMessageBox.showinfo("Exif Tool", "Please select Images to process!")
            return False
        return True


root = Tk()
my_gui = MyFirstGUI(root)
root.mainloop()
""" Mixins for fields. """ from bok_choy.promise import EmptyPromise from common.test.acceptance.tests.helpers import get_selected_option_text, select_option_by_text class FieldsMixin: """ Methods for testing fields in pages. """ def field(self, field_id): """ Return field with field_id. """ query = self.q(css=f'.u-field-{field_id}') return query.text[0] if query.present else None def wait_for_field(self, field_id): """ Wait for a field to appear in DOM. """ EmptyPromise( lambda: self.field(field_id) is not None, f"Field with id \"{field_id}\" is in DOM." ).fulfill() def mode_for_field(self, field_id): """ Extract current field mode. Returns: `placeholder`/`edit`/`display` """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id}') if not query.present: return None field_classes = query.attrs('class')[0].split() if 'mode-placeholder' in field_classes: return 'placeholder' if 'mode-display' in field_classes: return 'display' if 'mode-edit' in field_classes: return 'edit' def icon_for_field(self, field_id, icon_id): """ Check if field icon is present. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} .u-field-icon') return query.present and icon_id in query.attrs('class')[0].split() def title_for_field(self, field_id): """ Return the title of a field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} .u-field-title') return query.text[0] if query.present else None def message_for_field(self, field_id): """ Return the current message in a field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} .u-field-message') return query.text[0] if query.present else None def message_for_textarea_field(self, field_id): """ Return the current message for textarea field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} .u-field-message-help') return query.text[0] if query.present else None def wait_for_message(self, field_id, message): """ Wait for a message to appear in a field. """ EmptyPromise( lambda: message in (self.message_for_field(field_id) or ''), f"Messsage \"{message}\" is visible." ).fulfill() def indicator_for_field(self, field_id): """ Return the name of the current indicator in a field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} .u-field-message .fa') return [ class_name for class_name in query.attrs('class')[0].split(' ') if class_name.startswith('message') ][0].partition('-')[2] if query.present else None def wait_for_indicator(self, field_id, indicator): """ Wait for an indicator to appear in a field. """ EmptyPromise( lambda: indicator == self.indicator_for_field(field_id), f"Indicator \"{self.indicator_for_field(field_id)}\" is visible." ).fulfill() def make_field_editable(self, field_id): """ Make a field editable. """ query = self.q(css=f'.u-field-{field_id}') if not query.present: return None field_classes = query.attrs('class')[0].split() if 'mode-placeholder' in field_classes or 'mode-display' in field_classes: if field_id == 'bio': bio_field_selector = '.u-field-bio > .wrapper-u-field' self.wait_for_element_visibility(bio_field_selector, 'Bio field is visible') self.browser.execute_script("$('" + bio_field_selector + "').click();") else: self.q(css=f'.u-field-{field_id}').first.click() def value_for_readonly_field(self, field_id): """ Return the value in a readonly field. 
""" self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} .u-field-value') if not query.present: return None return query.text[0] def value_for_text_field(self, field_id, value=None, press_enter=True): """ Get or set the value of a text field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} input') if not query.present: return None if value is not None: current_value = query.attrs('value')[0] query.results[0].send_keys('\ue003' * len(current_value)) # Delete existing value. query.results[0].send_keys(value) # Input new value if press_enter: query.results[0].send_keys('\ue007') # Press Enter return query.attrs('value')[0] def set_value_for_textarea_field(self, field_id, value): """ Set the value of a textarea field. """ self.wait_for_field(field_id) self.make_field_editable(field_id) field_selector = f'.u-field-{field_id} textarea' self.wait_for_element_presence(field_selector, 'Editable textarea is present.') query = self.q(css=field_selector) query.fill(value) query.results[0].send_keys('\ue007') # Press Enter def get_non_editable_mode_value(self, field_id): """ Return value of field in `display` or `placeholder` mode. """ self.wait_for_field(field_id) self.wait_for_ajax() return self.q(css=f'.u-field-{field_id} .u-field-value .u-field-value-readonly').text[0] def value_for_dropdown_field(self, field_id, value=None, focus_out=False): """ Get or set the value in a dropdown field. """ self.wait_for_field(field_id) self.make_field_editable(field_id) query = self.q(css=f'.u-field-{field_id} select') if not query.present: return None if value is not None: select_option_by_text(query, value, focus_out) if self.mode_for_field(field_id) == 'edit': return get_selected_option_text(query) else: return self.get_non_editable_mode_value(field_id) def link_title_for_link_field(self, field_id): """ Return the title of the link in a link field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-link-title-{field_id}') return query.text[0] if query.present else None def wait_for_link_title_for_link_field(self, field_id, expected_title): """ Wait until the title of the specified link field equals expected_title. """ return EmptyPromise( lambda: self.link_title_for_link_field(field_id) == expected_title, f"Link field with link title \"{expected_title}\" is visible." ).fulfill() def click_on_link_in_link_field(self, field_id, field_type='a'): """ Click the link in a link field. """ self.wait_for_field(field_id) query = self.q(css=f'.u-field-{field_id} {field_type}') if query.present: query.first.click() def error_for_field(self, field_id): """ Returns bool based on the highlighted border for field. """ query = self.q(css=f'.u-field-{field_id}.error') return True if query.present else False # lint-amnesty, pylint: disable=simplifiable-if-expression def get_social_first_element(self): """ Returns the title of first social media link. """ query = self.q(css='.u-field-social_links > .field > .field-label') return query[0].text
// Licensed to the Apache Software Foundation (ASF) under one // or more contributor license agreements. See the NOTICE file // distributed with this work for additional information // regarding copyright ownership. The ASF licenses this file // to you under the Apache License, Version 2.0 (the // "License"); you may not use this file except in compliance // with the License. You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, // software distributed under the License is distributed on an // "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY // KIND, either express or implied. See the License for the // specific language governing permissions and limitations // under the License. import { Vector } from './vector'; import { Type, DataType, Dictionary } from './type'; import { Utf8, Binary, Decimal, FixedSizeBinary } from './type'; import { List, FixedSizeList, Union, Map_, Struct } from './type'; import { Bool, Null, Int, Float, Date_, Time, Interval, Timestamp } from './type'; export interface VisitorNode { acceptTypeVisitor(visitor: TypeVisitor): any; acceptVectorVisitor(visitor: VectorVisitor): any; // acceptMessageVisitor(visitor: MessageVisitor): any; } export abstract class TypeVisitor { visit(node: Partial<VisitorNode>): any { return node.acceptTypeVisitor!(this); } visitMany(nodes: Partial<VisitorNode>[]): any[] { return nodes.map((node) => this.visit(node)); } abstract visitNull(node: Null): any; abstract visitBool(node: Bool): any; abstract visitInt(node: Int): any; abstract visitFloat(node: Float): any; abstract visitUtf8(node: Utf8): any; abstract visitBinary(node: Binary): any; abstract visitFixedSizeBinary(node: FixedSizeBinary): any; abstract visitDate(node: Date_): any; abstract visitTimestamp(node: Timestamp): any; abstract visitTime(node: Time): any; abstract visitDecimal(node: Decimal): any; abstract visitList(node: List): any; abstract visitStruct(node: Struct): any; abstract visitUnion(node: Union<any>): any; abstract visitDictionary(node: Dictionary): any; abstract visitInterval(node: Interval): any; abstract visitFixedSizeList(node: FixedSizeList): any; abstract visitMap(node: Map_): any; static visitTypeInline<T extends DataType>(visitor: TypeVisitor, type: T): any { switch (type.TType) { case Type.Null: return visitor.visitNull(type as any as Null); case Type.Int: return visitor.visitInt(type as any as Int); case Type.Float: return visitor.visitFloat(type as any as Float); case Type.Binary: return visitor.visitBinary(type as any as Binary); case Type.Utf8: return visitor.visitUtf8(type as any as Utf8); case Type.Bool: return visitor.visitBool(type as any as Bool); case Type.Decimal: return visitor.visitDecimal(type as any as Decimal); case Type.Date: return visitor.visitDate(type as any as Date_); case Type.Time: return visitor.visitTime(type as any as Time); case Type.Timestamp: return visitor.visitTimestamp(type as any as Timestamp); case Type.Interval: return visitor.visitInterval(type as any as Interval); case Type.List: return visitor.visitList(type as any as List<T>); case Type.Struct: return visitor.visitStruct(type as any as Struct); case Type.Union: return visitor.visitUnion(type as any as Union); case Type.FixedSizeBinary: return visitor.visitFixedSizeBinary(type as any as FixedSizeBinary); case Type.FixedSizeList: return visitor.visitFixedSizeList(type as any as FixedSizeList); case Type.Map: return visitor.visitMap(type
as any as Map_); case Type.Dictionary: return visitor.visitDictionary(type as any as Dictionary); default: return null; } } } export abstract class VectorVisitor { visit(node: Partial<VisitorNode>): any { return node.acceptVectorVisitor!(this); } visitMany(nodes: Partial<VisitorNode>[]): any[] { return nodes.map((node) => this.visit(node)); } abstract visitNullVector(node: Vector<Null>): any; abstract visitBoolVector(node: Vector<Bool>): any; abstract visitIntVector(node: Vector<Int>): any; abstract visitFloatVector(node: Vector<Float>): any; abstract visitUtf8Vector(node: Vector<Utf8>): any; abstract visitBinaryVector(node: Vector<Binary>): any; abstract visitFixedSizeBinaryVector(node: Vector<FixedSizeBinary>): any; abstract visitDateVector(node: Vector<Date_>): any; abstract visitTimestampVector(node: Vector<Timestamp>): any; abstract visitTimeVector(node: Vector<Time>): any; abstract visitDecimalVector(node: Vector<Decimal>): any; abstract visitListVector(node: Vector<List>): any; abstract visitStructVector(node: Vector<Struct>): any; abstract visitUnionVector(node: Vector<Union<any>>): any; abstract visitDictionaryVector(node: Vector<Dictionary>): any; abstract visitIntervalVector(node: Vector<Interval>): any; abstract visitFixedSizeListVector(node: Vector<FixedSizeList>): any; abstract visitMapVector(node: Vector<Map_>): any; static visitTypeInline<T extends DataType>(visitor: VectorVisitor, type: T, vector: Vector<T>): any { switch (type.TType) { case Type.Null: return visitor.visitNullVector(vector as any as Vector<Null>); case Type.Int: return visitor.visitIntVector(vector as any as Vector<Int>); case Type.Float: return visitor.visitFloatVector(vector as any as Vector<Float>); case Type.Binary: return visitor.visitBinaryVector(vector as any as Vector<Binary>); case Type.Utf8: return visitor.visitUtf8Vector(vector as any as Vector<Utf8>); case Type.Bool: return visitor.visitBoolVector(vector as any as Vector<Bool>); case Type.Decimal: return visitor.visitDecimalVector(vector as any as Vector<Decimal>); case Type.Date: return visitor.visitDateVector(vector as any as Vector<Date_>); case Type.Time: return visitor.visitTimeVector(vector as any as Vector<Time>); case Type.Timestamp: return visitor.visitTimestampVector(vector as any as Vector<Timestamp>); case Type.Interval: return visitor.visitIntervalVector(vector as any as Vector<Interval>); case Type.List: return visitor.visitListVector(vector as any as Vector<List<T>>); case Type.Struct: return visitor.visitStructVector(vector as any as Vector<Struct>); case Type.Union: return visitor.visitUnionVector(vector as any as Vector<Union>); case Type.FixedSizeBinary: return visitor.visitFixedSizeBinaryVector(vector as any as Vector<FixedSizeBinary>); case Type.FixedSizeList: return visitor.visitFixedSizeListVector(vector as any as Vector<FixedSizeList>); case Type.Map: return visitor.visitMapVector(vector as any as Vector<Map_>); case Type.Dictionary: return visitor.visitDictionaryVector(vector as any as Vector<Dictionary>); default: return null; } } } // import { Footer, Block } from './ipc/message'; // import { Field, FieldNode, Buffer } from './ipc/message'; // import { Message, Schema, RecordBatch, DictionaryBatch } from './ipc/message'; // export abstract class MessageVisitor { // visit(node: VisitorNode): any { // return node.acceptMessageVisitor(this); // } // visitMany(nodes: VisitorNode[]): any[] { // return nodes.map((node) => this.visit(node)); // } // abstract visitFooter(node: Footer): any; // abstract visitBlock(node: 
Block): any; // abstract visitMessage(node: Message): any; // abstract visitSchema(node: Schema): any; // abstract visitField<T extends DataType>(node: Field<T>): any; // abstract visitBuffer(node: Buffer): any; // abstract visitFieldNode(node: FieldNode): any; // abstract visitDataType<T extends Type>(node: DataType<T>): any; // abstract visitDictionary(node: Dictionary): any; // abstract visitRecordBatch(node: RecordBatch): any; // abstract visitDictionaryBatch(node: DictionaryBatch): any; // }
/** * @author Dang Duy Hieu * @version $Id$ */ public class HibernateLocalDataValueStore implements LocalDataValueStore { // ------------------------------------------------------------------------- // Dependencies // ------------------------------------------------------------------------- private SessionFactory sessionFactory; public void setSessionFactory( SessionFactory sessionFactory ) { this.sessionFactory = sessionFactory; } private PeriodStore periodStore; public void setPeriodStore( PeriodStore periodStore ) { this.periodStore = periodStore; } // private JdbcTemplate jdbcTemplate; // // public void setJdbcTemplate( JdbcTemplate jdbcTemplate ) // { // this.jdbcTemplate = jdbcTemplate; // } // ------------------------------------------------------------------------- // Basic DataValue // ------------------------------------------------------------------------- @SuppressWarnings( "unchecked" ) public Collection<DataValue> getDataValues( OrganisationUnit source, Collection<DataElement> dataElements, Collection<Period> periods ) { Collection<Period> storedPeriods = new ArrayList<Period>(); for ( Period period : periods ) { Period storedPeriod = periodStore.reloadPeriod( period ); if ( storedPeriod != null ) { storedPeriods.add( storedPeriod ); } } if ( storedPeriods.isEmpty() || source == null || dataElements == null || dataElements.isEmpty() ) { return new HashSet<DataValue>(); } Session session = sessionFactory.getCurrentSession(); Criteria criteria = session.createCriteria( DataValue.class ); criteria.add( Restrictions.eq( "source", source ) ); criteria.add( Restrictions.in( "dataElement", dataElements ) ); criteria.add( Restrictions.in( "period", storedPeriods ) ); criteria.addOrder( Order.asc( "dataElement" ) ); criteria.addOrder( Order.asc( "optionCombo" ) ); criteria.addOrder( Order.asc( "timestamp" ) ); return criteria.list(); } }
// shell/platform/linux/fl_text_input_plugin_test.cc // Copyright 2013 The Flutter Authors. All rights reserved. // Use of this source code is governed by a BSD-style license that can be // found in the LICENSE file. #include "flutter/shell/platform/linux/fl_text_input_plugin.h" #include "flutter/shell/platform/linux/fl_method_codec_private.h" #include "flutter/shell/platform/linux/public/flutter_linux/fl_binary_messenger.h" #include "flutter/shell/platform/linux/public/flutter_linux/fl_json_method_codec.h" #include "flutter/shell/platform/linux/public/flutter_linux/fl_value.h" #include "flutter/shell/platform/linux/testing/fl_test.h" #include "flutter/shell/platform/linux/testing/mock_binary_messenger.h" #include "flutter/shell/platform/linux/testing/mock_binary_messenger_response_handle.h" #include "flutter/shell/platform/linux/testing/mock_im_context.h" #include "flutter/testing/testing.h" #include "gmock/gmock.h" #include "gtest/gtest.h" void printTo(FlMethodResponse* response, ::std::ostream* os) { *os << ::testing::PrintToString( fl_method_response_get_result(response, nullptr)); } MATCHER_P(SuccessResponse, result, "") { g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(FlMethodResponse) response = fl_method_codec_decode_response(FL_METHOD_CODEC(codec), arg, nullptr); if (fl_value_equal(fl_method_response_get_result(response, nullptr), result)) { return true; } *result_listener << ::testing::PrintToString(response); return false; } MATCHER_P(FlValueEq, value, "equal to " + ::testing::PrintToString(value)) { return fl_value_equal(arg, value); } class MethodCallMatcher { public: using is_gtest_matcher = void; explicit MethodCallMatcher(::testing::Matcher<std::string> name, ::testing::Matcher<FlValue*> args) : name_(name), args_(args) {} bool MatchAndExplain(GBytes* method_call, ::testing::MatchResultListener* result_listener) const { g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GError) error = nullptr; g_autofree gchar* name = nullptr; g_autoptr(FlValue) args = nullptr; gboolean result = fl_method_codec_decode_method_call( FL_METHOD_CODEC(codec), method_call, &name, &args, &error); if (!result) { *result_listener << ::testing::PrintToString(error->message); return false; } if (!name_.MatchAndExplain(name, result_listener)) { *result_listener << " where the name doesn't match: \"" << name << "\""; return false; } if (!args_.MatchAndExplain(args, result_listener)) { *result_listener << " where the args don't match: " << ::testing::PrintToString(args); return false; } return true; } void DescribeTo(std::ostream* os) const { *os << "method name "; name_.DescribeTo(os); *os << " and args "; args_.DescribeTo(os); } void DescribeNegationTo(std::ostream* os) const { *os << "method name "; name_.DescribeNegationTo(os); *os << " or args "; args_.DescribeNegationTo(os); } private: ::testing::Matcher<std::string> name_; ::testing::Matcher<FlValue*> args_; }; ::testing::Matcher<GBytes*> MethodCall(std::string name, ::testing::Matcher<FlValue*> args) { return MethodCallMatcher(::testing::StrEq(name), args); } static FlValue* build_map(std::map<const gchar*, FlValue*> args) { FlValue* value = fl_value_new_map(); for (auto it = args.begin(); it != args.end(); ++it) { fl_value_set_string_take(value, it->first, it->second); } return value; } static FlValue* build_list(std::vector<FlValue*> args) { FlValue* value = fl_value_new_list(); for (auto it = args.begin(); it != args.end(); ++it) { fl_value_append_take(value, *it); } return value; }
struct InputConfig { int64_t client_id = -1; const gchar* input_type = "TextInputType.text"; const gchar* input_action = "TextInputAction.none"; gboolean enable_delta_model = false; }; static FlValue* build_input_config(InputConfig config) { return build_list({ fl_value_new_int(config.client_id), build_map({ {"inputAction", fl_value_new_string(config.input_action)}, {"inputType", build_map({ {"name", fl_value_new_string(config.input_type)}, })}, {"enableDeltaModel", fl_value_new_bool(config.enable_delta_model)}, }), }); } struct EditingState { const gchar* text = ""; int selection_base = -1; int selection_extent = -1; int composing_base = -1; int composing_extent = -1; }; static FlValue* build_editing_state(EditingState state) { return build_map({ {"text", fl_value_new_string(state.text)}, {"selectionBase", fl_value_new_int(state.selection_base)}, {"selectionExtent", fl_value_new_int(state.selection_extent)}, {"selectionAffinity", fl_value_new_string("TextAffinity.downstream")}, {"selectionIsDirectional", fl_value_new_bool(false)}, {"composingBase", fl_value_new_int(state.composing_base)}, {"composingExtent", fl_value_new_int(state.composing_extent)}, }); } struct EditingDelta { const gchar* old_text = ""; const gchar* delta_text = ""; int delta_start = -1; int delta_end = -1; int selection_base = -1; int selection_extent = -1; int composing_base = -1; int composing_extent = -1; }; static FlValue* build_editing_delta(EditingDelta delta) { return build_map({ {"oldText", fl_value_new_string(delta.old_text)}, {"deltaText", fl_value_new_string(delta.delta_text)}, {"deltaStart", fl_value_new_int(delta.delta_start)}, {"deltaEnd", fl_value_new_int(delta.delta_end)}, {"selectionBase", fl_value_new_int(delta.selection_base)}, {"selectionExtent", fl_value_new_int(delta.selection_extent)}, {"selectionAffinity", fl_value_new_string("TextAffinity.downstream")}, {"selectionIsDirectional", fl_value_new_bool(false)}, {"composingBase", fl_value_new_int(delta.composing_base)}, {"composingExtent", fl_value_new_int(delta.composing_extent)}, }); } static void send_key_event(FlTextInputPlugin* plugin, gint keyval, gint state = 0) { GdkEvent* gdk_event = gdk_event_new(GDK_KEY_PRESS); gdk_event->key.keyval = keyval; gdk_event->key.state = state; FlKeyEvent* key_event = fl_key_event_new_from_gdk_event(gdk_event); fl_text_input_plugin_filter_keypress(plugin, key_event); fl_key_event_dispose(key_event); } TEST(FlTextInputPluginTest, MessageHandler) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); EXPECT_TRUE(messenger.HasMessageHandler("flutter/textinput")); } TEST(FlTextInputPluginTest, SetClient) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); g_autoptr(FlValue) args = build_input_config({.client_id = 1}); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) message = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", args, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) 
.WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", message); } TEST(FlTextInputPluginTest, Show) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); EXPECT_CALL(context, gtk_im_context_focus_in(::testing::Eq<GtkIMContext*>(context))); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) message = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.show", nullptr, nullptr); messenger.ReceiveMessage("flutter/textinput", message); } TEST(FlTextInputPluginTest, Hide) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); EXPECT_CALL(context, gtk_im_context_focus_out(::testing::Eq<GtkIMContext*>(context))); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) message = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.hide", nullptr, nullptr); messenger.ReceiveMessage("flutter/textinput", message); } TEST(FlTextInputPluginTest, ClearClient) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) message = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.clearClient", nullptr, nullptr); messenger.ReceiveMessage("flutter/textinput", message); } TEST(FlTextInputPluginTest, PerformAction) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); // set input config g_autoptr(FlValue) config = build_input_config({ .client_id = 1, .input_type = "TextInputType.multiline", .input_action = "TextInputAction.newline", }); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", config, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); // set editing 
state g_autoptr(FlValue) state = build_editing_state({ .text = "Flutter", .selection_base = 7, .selection_extent = 7, }); g_autoptr(GBytes) set_state = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setEditingState", state, nullptr); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_state); // update editing state g_autoptr(FlValue) new_state = build_list({ fl_value_new_int(1), // client_id build_editing_state({ .text = "Flutter\n", .selection_base = 8, .selection_extent = 8, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(new_state)), ::testing::_, ::testing::_, ::testing::_)); // perform action g_autoptr(FlValue) action = build_list({ fl_value_new_int(1), // client_id fl_value_new_string("TextInputAction.newline"), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.performAction", FlValueEq(action)), ::testing::_, ::testing::_, ::testing::_)); send_key_event(plugin, GDK_KEY_Return); } TEST(FlTextInputPluginTest, MoveCursor) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); // set input config g_autoptr(FlValue) config = build_input_config({.client_id = 1}); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", config, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); // set editing state g_autoptr(FlValue) state = build_editing_state({ .text = "Flutter", .selection_base = 4, .selection_extent = 4, }); g_autoptr(GBytes) set_state = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setEditingState", state, nullptr); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_state); // move cursor to beginning g_autoptr(FlValue) beginning = build_list({ fl_value_new_int(1), // client_id build_editing_state({ .text = "Flutter", .selection_base = 0, .selection_extent = 0, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(beginning)), ::testing::_, ::testing::_, ::testing::_)); send_key_event(plugin, GDK_KEY_Home); // move cursor to end g_autoptr(FlValue) end = build_list({ fl_value_new_int(1), // client_id build_editing_state({ .text = "Flutter", .selection_base = 7, .selection_extent = 7, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( 
::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(end)), ::testing::_, ::testing::_, ::testing::_)); send_key_event(plugin, GDK_KEY_End); } TEST(FlTextInputPluginTest, Select) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); // set input config g_autoptr(FlValue) config = build_input_config({.client_id = 1}); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", config, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); // set editing state g_autoptr(FlValue) state = build_editing_state({ .text = "Flutter", .selection_base = 4, .selection_extent = 4, }); g_autoptr(GBytes) set_state = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setEditingState", state, nullptr); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_state); // select to end g_autoptr(FlValue) select_to_end = build_list({ fl_value_new_int(1), // client_id build_editing_state({ .text = "Flutter", .selection_base = 4, .selection_extent = 7, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(select_to_end)), ::testing::_, ::testing::_, ::testing::_)); send_key_event(plugin, GDK_KEY_End, GDK_SHIFT_MASK); // select to beginning g_autoptr(FlValue) select_to_beginning = build_list({ fl_value_new_int(1), // client_id build_editing_state({ .text = "Flutter", .selection_base = 4, .selection_extent = 0, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(select_to_beginning)), ::testing::_, ::testing::_, ::testing::_)); send_key_event(plugin, GDK_KEY_Home, GDK_SHIFT_MASK); } TEST(FlTextInputPluginTest, Composing) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); g_signal_emit_by_name(context, "preedit-start", nullptr); // update EXPECT_CALL(context, gtk_im_context_get_preedit_string( ::testing::Eq<GtkIMContext*>(context), ::testing::A<gchar**>(), ::testing::_, ::testing::A<gint*>())) .WillOnce( ::testing::DoAll(::testing::SetArgPointee<1>(g_strdup("Flutter")), ::testing::SetArgPointee<3>(0))); g_autoptr(FlValue) state = build_list({ fl_value_new_int(-1), // client_id build_editing_state({ .text = "Flutter", .selection_base = 0, .selection_extent = 0, .composing_base = 0, .composing_extent = 7, }), }); EXPECT_CALL(messenger, 
fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(state)), ::testing::_, ::testing::_, ::testing::_)); g_signal_emit_by_name(context, "preedit-changed", nullptr); // commit g_autoptr(FlValue) commit = build_list({ fl_value_new_int(-1), // client_id build_editing_state({ .text = "engine", .selection_base = 6, .selection_extent = 6, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(commit)), ::testing::_, ::testing::_, ::testing::_)); g_signal_emit_by_name(context, "commit", "engine", nullptr); // end EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", ::testing::_), ::testing::_, ::testing::_, ::testing::_)); g_signal_emit_by_name(context, "preedit-end", nullptr); } TEST(FlTextInputPluginTest, SurroundingText) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); // set input config g_autoptr(FlValue) config = build_input_config({.client_id = 1}); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", config, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); // set editing state g_autoptr(FlValue) state = build_editing_state({ .text = "Flutter", .selection_base = 3, .selection_extent = 3, }); g_autoptr(GBytes) set_state = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setEditingState", state, nullptr); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_state); // retrieve EXPECT_CALL(context, gtk_im_context_set_surrounding( ::testing::Eq<GtkIMContext*>(context), ::testing::StrEq("Flutter"), 7, 3)); gboolean retrieved = false; g_signal_emit_by_name(context, "retrieve-surrounding", &retrieved, nullptr); EXPECT_TRUE(retrieved); // delete g_autoptr(FlValue) update = build_list({ fl_value_new_int(1), // client_id build_editing_state({ .text = "Flutr", .selection_base = 3, .selection_extent = 3, }), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingState", FlValueEq(update)), ::testing::_, ::testing::_, ::testing::_)); gboolean deleted = false; g_signal_emit_by_name(context, "delete-surrounding", 1, 2, &deleted, nullptr); EXPECT_TRUE(deleted); } TEST(FlTextInputPluginTest, SetMarkedTextRect) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin 
= fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); g_signal_emit_by_name(context, "preedit-start", nullptr); // set editable size and transform g_autoptr(FlValue) size_and_transform = build_map({ { "transform", build_list({ fl_value_new_float(1), fl_value_new_float(2), fl_value_new_float(3), fl_value_new_float(4), fl_value_new_float(5), fl_value_new_float(6), fl_value_new_float(7), fl_value_new_float(8), fl_value_new_float(9), fl_value_new_float(10), fl_value_new_float(11), fl_value_new_float(12), fl_value_new_float(13), fl_value_new_float(14), fl_value_new_float(15), fl_value_new_float(16), }), }, }); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_editable_size_and_transform = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setEditableSizeAndTransform", size_and_transform, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_editable_size_and_transform); // set marked text rect g_autoptr(FlValue) rect = build_map({ {"x", fl_value_new_float(1)}, {"y", fl_value_new_float(2)}, {"width", fl_value_new_float(3)}, {"height", fl_value_new_float(4)}, }); g_autoptr(GBytes) set_marked_text_rect = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setMarkedTextRect", rect, nullptr); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); EXPECT_CALL(context, gtk_im_context_set_cursor_location( ::testing::Eq<GtkIMContext*>(context), ::testing::Pointee(::testing::AllOf( ::testing::Field(&GdkRectangle::x, 27), ::testing::Field(&GdkRectangle::y, 32), ::testing::Field(&GdkRectangle::width, 0), ::testing::Field(&GdkRectangle::height, 0))))); messenger.ReceiveMessage("flutter/textinput", set_marked_text_rect); } TEST(FlTextInputPluginTest, TextInputTypeNone) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); g_autoptr(FlValue) args = build_input_config({ .client_id = 1, .input_type = "TextInputType.none", }); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", args, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::A<FlBinaryMessengerResponseHandle*>(), SuccessResponse(null), ::testing::A<GError**>())) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); EXPECT_CALL(context, gtk_im_context_focus_in(::testing::Eq<GtkIMContext*>(context))) .Times(0); EXPECT_CALL(context, gtk_im_context_focus_out(::testing::Eq<GtkIMContext*>(context))); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); g_autoptr(GBytes) show = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.show", nullptr, nullptr); 
messenger.ReceiveMessage("flutter/textinput", show); } TEST(FlTextInputPluginTest, TextEditingDelta) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); // set config g_autoptr(FlValue) args = build_input_config({ .client_id = 1, .enable_delta_model = true, }); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", args, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::A<FlBinaryMessengerResponseHandle*>(), SuccessResponse(null), ::testing::A<GError**>())) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); // set editing state g_autoptr(FlValue) state = build_editing_state({ .text = "Flutter", .selection_base = 7, .selection_extent = 7, }); g_autoptr(GBytes) set_state = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setEditingState", state, nullptr); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::_, SuccessResponse(null), ::testing::_)) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_state); // update editing state with deltas g_autoptr(FlValue) deltas = build_list({ fl_value_new_int(1), // client_id build_map({{ "deltas", build_list({ build_editing_delta({ .old_text = "Flutter", .delta_text = "Flutter", .delta_start = 7, .delta_end = 7, .selection_base = 0, .selection_extent = 0, }), }), }}), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingStateWithDeltas", FlValueEq(deltas)), ::testing::_, ::testing::_, ::testing::_)); send_key_event(plugin, GDK_KEY_Home); } TEST(FlTextInputPluginTest, ComposingDelta) { ::testing::NiceMock<flutter::testing::MockBinaryMessenger> messenger; ::testing::NiceMock<flutter::testing::MockIMContext> context; g_autoptr(FlTextInputPlugin) plugin = fl_text_input_plugin_new(messenger, context); EXPECT_NE(plugin, nullptr); // set config g_autoptr(FlValue) args = build_input_config({ .client_id = 1, .enable_delta_model = true, }); g_autoptr(FlJsonMethodCodec) codec = fl_json_method_codec_new(); g_autoptr(GBytes) set_client = fl_method_codec_encode_method_call( FL_METHOD_CODEC(codec), "TextInput.setClient", args, nullptr); g_autoptr(FlValue) null = fl_value_new_null(); EXPECT_CALL(messenger, fl_binary_messenger_send_response( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::A<FlBinaryMessengerResponseHandle*>(), SuccessResponse(null), ::testing::A<GError**>())) .WillOnce(::testing::Return(true)); messenger.ReceiveMessage("flutter/textinput", set_client); // update EXPECT_CALL(context, gtk_im_context_get_preedit_string( ::testing::Eq<GtkIMContext*>(context), ::testing::A<gchar**>(), ::testing::_, ::testing::A<gint*>())) .WillOnce( ::testing::DoAll(::testing::SetArgPointee<1>(g_strdup("Flutter ")), ::testing::SetArgPointee<3>(8))); g_autoptr(FlValue) update = build_list({ fl_value_new_int(1), // client_id build_map({{ "deltas", build_list({ build_editing_delta({ .old_text = "", .delta_text = "Flutter ", .delta_start = 0, 
.delta_end = 8, .selection_base = 8, .selection_extent = 8, .composing_base = 0, .composing_extent = 8, }), }), }}), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingStateWithDeltas", FlValueEq(update)), ::testing::_, ::testing::_, ::testing::_)); g_signal_emit_by_name(context, "preedit-changed", nullptr); // commit g_autoptr(FlValue) commit = build_list({ fl_value_new_int(1), // client_id build_map({{ "deltas", build_list({ build_editing_delta({ .old_text = "Flutter ", .delta_text = "engine", .delta_start = 8, .delta_end = 8, .selection_base = 14, .selection_extent = 14, .composing_base = 0, .composing_extent = 8, }), }), }}), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingStateWithDeltas", FlValueEq(commit)), ::testing::_, ::testing::_, ::testing::_)); g_signal_emit_by_name(context, "commit", "engine", nullptr); // end g_autoptr(FlValue) end = build_list({ fl_value_new_int(1), // client_id build_map({{ "deltas", build_list({ build_editing_delta({ .delta_text = "Flutter engine", .selection_base = 14, .selection_extent = 14, }), }), }}), }); EXPECT_CALL(messenger, fl_binary_messenger_send_on_channel( ::testing::Eq<FlBinaryMessenger*>(messenger), ::testing::StrEq("flutter/textinput"), MethodCall("TextInputClient.updateEditingStateWithDeltas", FlValueEq(end)), ::testing::_, ::testing::_, ::testing::_)); g_signal_emit_by_name(context, "preedit-end", nullptr); }
/* ** Constructor for a new ZipfileCsr object. */ static int zipfileOpen(sqlite3_vtab *p, sqlite3_vtab_cursor **ppCsr){ ZipfileTab *pTab = (ZipfileTab*)p; ZipfileCsr *pCsr; pCsr = sqlite3_malloc(sizeof(*pCsr)); *ppCsr = (sqlite3_vtab_cursor*)pCsr; if( pCsr==0 ){ return SQLITE_NOMEM; } memset(pCsr, 0, sizeof(*pCsr)); pCsr->iId = ++pTab->iNextCsrid; pCsr->pCsrNext = pTab->pCsrList; pTab->pCsrList = pCsr; return SQLITE_OK; }
def Log(self, entry, condition=True):
    # Forward the entry to the configured logger, if any and if the condition holds.
    if self._logger is not None and condition:
        return self._logger.Write(entry)
    return False
// NewLimitedResolver returns a resolver which will pass up to lookupLimit calls to r.
// In addition to that limit, the evaluation of each "MX" record will be limited
// to mxQueriesLimit.
// All calls over the limit will return ErrDNSLimitExceeded.
func NewLimitedResolver(r Resolver, lookupLimit, mxQueriesLimit uint16) Resolver {
	return &LimitedResolver{
		lookupLimit:    int32(lookupLimit),
		mxQueriesLimit: mxQueriesLimit,
		resolver:       r,
	}
}
/*
 * Academic License - for use in teaching, academic research, and meeting
 * course requirements at degree granting institutions only. Not for
 * government, commercial, or other organizational use.
 *
 * ExtractDistanceTrajectory_emxutil.c
 *
 * Code generation for function 'ExtractDistanceTrajectory_emxutil'
 *
 */

/* Include files */
#include <string.h>
#include "rt_nonfinite.h"
#include "ExtractDistanceTrajectory.h"
#include "ExtractDistanceTrajectory_emxutil.h"

/* Function Definitions */
void emxEnsureCapacity_int32_T(const emlrtStack *sp, emxArray_int32_T *emxArray,
  int32_T oldNumel, const emlrtRTEInfo *srcLocation)
{
  int32_T newNumel;
  int32_T i;
  void *newData;
  if (oldNumel < 0) {
    oldNumel = 0;
  }

  newNumel = 1;
  for (i = 0; i < emxArray->numDimensions; i++) {
    newNumel = (int32_T)emlrtSizeMulR2012b((uint32_T)newNumel, (uint32_T)
      emxArray->size[i], srcLocation, sp);
  }

  if (newNumel > emxArray->allocatedSize) {
    i = emxArray->allocatedSize;
    if (i < 16) {
      i = 16;
    }

    while (i < newNumel) {
      if (i > 1073741823) {
        i = MAX_int32_T;
      } else {
        i <<= 1;
      }
    }

    newData = emlrtCallocMex((uint32_T)i, sizeof(int32_T));
    if (newData == NULL) {
      emlrtHeapAllocationErrorR2012b(srcLocation, sp);
    }

    if (emxArray->data != NULL) {
      memcpy(newData, (void *)emxArray->data, sizeof(int32_T) * oldNumel);
      if (emxArray->canFreeData) {
        emlrtFreeMex2018a(sp, (void *)emxArray->data);
      }
    }

    emxArray->data = (int32_T *)newData;
    emxArray->allocatedSize = i;
    emxArray->canFreeData = true;
  }
}

void emxEnsureCapacity_real_T(const emlrtStack *sp, emxArray_real_T *emxArray,
  int32_T oldNumel, const emlrtRTEInfo *srcLocation)
{
  int32_T newNumel;
  int32_T i;
  void *newData;
  if (oldNumel < 0) {
    oldNumel = 0;
  }

  newNumel = 1;
  for (i = 0; i < emxArray->numDimensions; i++) {
    newNumel = (int32_T)emlrtSizeMulR2012b((uint32_T)newNumel, (uint32_T)
      emxArray->size[i], srcLocation, sp);
  }

  if (newNumel > emxArray->allocatedSize) {
    i = emxArray->allocatedSize;
    if (i < 16) {
      i = 16;
    }

    while (i < newNumel) {
      if (i > 1073741823) {
        i = MAX_int32_T;
      } else {
        i <<= 1;
      }
    }

    newData = emlrtCallocMex((uint32_T)i, sizeof(real_T));
    if (newData == NULL) {
      emlrtHeapAllocationErrorR2012b(srcLocation, sp);
    }

    if (emxArray->data != NULL) {
      memcpy(newData, (void *)emxArray->data, sizeof(real_T) * oldNumel);
      if (emxArray->canFreeData) {
        emlrtFreeMex2018a(sp, (void *)emxArray->data);
      }
    }

    emxArray->data = (real_T *)newData;
    emxArray->allocatedSize = i;
    emxArray->canFreeData = true;
  }
}

void emxEnsureCapacity_real_T1(const emlrtStack *sp, emxArray_real_T *emxArray,
  int32_T oldNumel, const emlrtRTEInfo *srcLocation)
{
  int32_T newNumel;
  int32_T i;
  void *newData;
  if (oldNumel < 0) {
    oldNumel = 0;
  }

  newNumel = 1;
  for (i = 0; i < emxArray->numDimensions; i++) {
    newNumel = (int32_T)emlrtSizeMulR2012b((uint32_T)newNumel, (uint32_T)
      emxArray->size[i], srcLocation, sp);
  }

  if (newNumel > emxArray->allocatedSize) {
    i = emxArray->allocatedSize;
    if (i < 16) {
      i = 16;
    }

    while (i < newNumel) {
      if (i > 1073741823) {
        i = MAX_int32_T;
      } else {
        i <<= 1;
      }
    }

    newData = emlrtCallocMex((uint32_T)i, sizeof(real_T));
    if (newData == NULL) {
      emlrtHeapAllocationErrorR2012b(srcLocation, sp);
    }

    if (emxArray->data != NULL) {
      memcpy(newData, (void *)emxArray->data, sizeof(real_T) * oldNumel);
      if (emxArray->canFreeData) {
        emlrtFreeMex2018a(sp, (void *)emxArray->data);
      }
    }

    emxArray->data = (real_T *)newData;
    emxArray->allocatedSize = i;
    emxArray->canFreeData = true;
  }
}

void emxFree_int32_T(const emlrtStack *sp, emxArray_int32_T **pEmxArray)
{
  if (*pEmxArray != (emxArray_int32_T *)NULL) {
    if (((*pEmxArray)->data != (int32_T *)NULL) && (*pEmxArray)->canFreeData) {
      emlrtFreeMex2018a(sp, (void *)(*pEmxArray)->data);
    }

    emlrtFreeMex2018a(sp, (void *)(*pEmxArray)->size);
    emlrtFreeMex2018a(sp, (void *)*pEmxArray);
    *pEmxArray = (emxArray_int32_T *)NULL;
  }
}

void emxFree_real_T(const emlrtStack *sp, emxArray_real_T **pEmxArray)
{
  if (*pEmxArray != (emxArray_real_T *)NULL) {
    if (((*pEmxArray)->data != (real_T *)NULL) && (*pEmxArray)->canFreeData) {
      emlrtFreeMex2018a(sp, (void *)(*pEmxArray)->data);
    }

    emlrtFreeMex2018a(sp, (void *)(*pEmxArray)->size);
    emlrtFreeMex2018a(sp, (void *)*pEmxArray);
    *pEmxArray = (emxArray_real_T *)NULL;
  }
}

void emxInit_int32_T(const emlrtStack *sp, emxArray_int32_T **pEmxArray,
  int32_T numDimensions, const emlrtRTEInfo *srcLocation, boolean_T doPush)
{
  emxArray_int32_T *emxArray;
  int32_T i;
  *pEmxArray = (emxArray_int32_T *)emlrtMallocMex(sizeof(emxArray_int32_T));
  if ((void *)*pEmxArray == NULL) {
    emlrtHeapAllocationErrorR2012b(srcLocation, sp);
  }

  if (doPush) {
    emlrtPushHeapReferenceStackR2018a(sp, (void *)pEmxArray, (void (*)(const
      void *, void *))emxFree_int32_T);
  }

  emxArray = *pEmxArray;
  emxArray->data = (int32_T *)NULL;
  emxArray->numDimensions = numDimensions;
  emxArray->size = (int32_T *)emlrtMallocMex(sizeof(int32_T) * numDimensions);
  if ((void *)emxArray->size == NULL) {
    emlrtHeapAllocationErrorR2012b(srcLocation, sp);
  }

  emxArray->allocatedSize = 0;
  emxArray->canFreeData = true;
  for (i = 0; i < numDimensions; i++) {
    emxArray->size[i] = 0;
  }
}

void emxInit_real_T(const emlrtStack *sp, emxArray_real_T **pEmxArray, int32_T
  numDimensions, const emlrtRTEInfo *srcLocation, boolean_T doPush)
{
  emxArray_real_T *emxArray;
  int32_T i;
  *pEmxArray = (emxArray_real_T *)emlrtMallocMex(sizeof(emxArray_real_T));
  if ((void *)*pEmxArray == NULL) {
    emlrtHeapAllocationErrorR2012b(srcLocation, sp);
  }

  if (doPush) {
    emlrtPushHeapReferenceStackR2018a(sp, (void *)pEmxArray, (void (*)(const
      void *, void *))emxFree_real_T);
  }

  emxArray = *pEmxArray;
  emxArray->data = (real_T *)NULL;
  emxArray->numDimensions = numDimensions;
  emxArray->size = (int32_T *)emlrtMallocMex(sizeof(int32_T) * numDimensions);
  if ((void *)emxArray->size == NULL) {
    emlrtHeapAllocationErrorR2012b(srcLocation, sp);
  }

  emxArray->allocatedSize = 0;
  emxArray->canFreeData = true;
  for (i = 0; i < numDimensions; i++) {
    emxArray->size[i] = 0;
  }
}

void emxInit_real_T1(const emlrtStack *sp, emxArray_real_T **pEmxArray, int32_T
  numDimensions, const emlrtRTEInfo *srcLocation, boolean_T doPush)
{
  emxArray_real_T *emxArray;
  int32_T i;
  *pEmxArray = (emxArray_real_T *)emlrtMallocMex(sizeof(emxArray_real_T));
  if ((void *)*pEmxArray == NULL) {
    emlrtHeapAllocationErrorR2012b(srcLocation, sp);
  }

  if (doPush) {
    emlrtPushHeapReferenceStackR2018a(sp, (void *)pEmxArray, (void (*)(const
      void *, void *))emxFree_real_T);
  }

  emxArray = *pEmxArray;
  emxArray->data = (real_T *)NULL;
  emxArray->numDimensions = numDimensions;
  emxArray->size = (int32_T *)emlrtMallocMex(sizeof(int32_T) * numDimensions);
  if ((void *)emxArray->size == NULL) {
    emlrtHeapAllocationErrorR2012b(srcLocation, sp);
  }

  emxArray->allocatedSize = 0;
  emxArray->canFreeData = true;
  for (i = 0; i < numDimensions; i++) {
    emxArray->size[i] = 0;
  }
}

/* End of code generation (ExtractDistanceTrajectory_emxutil.c) */
/**
 * Copyright 2021 Expedia, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import logger from '@iex/shared/logger';
import { AWSError, Request, S3 } from 'aws-sdk';
import { HeadObjectOutput } from 'aws-sdk/clients/s3';
import { ReadStream } from 'fs-extra';

const defaultOptions: S3.Types.ClientConfiguration = {
  region: process.env.S3_REGION,
  maxRetries: 3
};

export function createS3Client(options: S3.Types.ClientConfiguration = defaultOptions): S3 {
  return new S3({ ...defaultOptions, ...options });
}

export const defaultS3Client = createS3Client();

/**
 * Writes data to the Insights Explorer bucket.
 *
 * @note The function executes a putObject() request
 * @param {Buffer | string} body file content to write to S3
 * @param {string} key S3 bucket key to write to
 * @returns {string} S3 bucket URI to the uploaded file
 * @throws {AWSError} If the putObject request fails
 */
export async function writeToS3(body: Buffer | string, key: string): Promise<string> {
  const bucket = process.env.S3_BUCKET!;
  const response = await defaultS3Client.putObject({ Body: body, Bucket: bucket, Key: key }).promise();
  const uri = `s3://${bucket}/${key}`;
  logger.info(`[STORAGE] S3 file successfully uploaded with Etag: ${response.ETag} and URI: ${uri}`);
  return uri;
}

/**
 * Writes a stream to the Insights Explorer bucket.
 *
 * @note The function executes a putObject() request
 * @param {ReadStream} stream stream content to write to S3
 * @param {number} fileSize Size of the file
 * @param {string} key S3 bucket key to write to
 * @returns {string} S3 bucket URI to the uploaded file
 * @throws {AWSError} If the putObject request fails
 */
export async function streamToS3(stream: ReadStream, fileSize: number, key: string): Promise<string> {
  const bucket = process.env.S3_BUCKET!;
  const response = await defaultS3Client
    .putObject({ Body: stream, Bucket: bucket, ContentLength: fileSize, Key: key })
    .promise();
  const uri = `s3://${bucket}/${key}`;
  logger.info(`[STORAGE] S3 file successfully uploaded with Etag: ${response.ETag} and URI: ${uri}`);
  return uri;
}

/**
 * Reads data from the Insights Explorer S3 bucket.
 *
 * @note The function executes a getObject() request
 * @param {string} key Key to get file from bucket
 * @returns {Promise<Buffer>} Returns requested S3 buffer
 */
export async function readFromS3(key: string): Promise<Buffer> {
  const bucket = process.env.S3_BUCKET!;
  logger.info(`[STORAGE] Reading from s3://${bucket}/${key}`);
  const file = await defaultS3Client.getObject({ Bucket: bucket, Key: key }).promise();
  return Buffer.from(file.Body as Buffer);
}

/**
 * Streams data from the Insights Explorer S3 bucket.
 *
 * @note The function executes a getObject() request
 * @param {string} key Key to get file from bucket
 * @param {string} range Optional byte range to retrieve
 * @returns {Promise<ReadStream>} Returns requested S3 file stream
 */
export async function streamFromS3(key: string, range?: string): Promise<ReadStream> {
  const bucket = process.env.S3_BUCKET!;
  logger.info(`[STORAGE] Streaming from s3://${bucket}/${key}`);
  const response = defaultS3Client.getObject({ Bucket: bucket, Key: key, Range: range });
  return response.createReadStream() as ReadStream;
}

/**
 * Gets an Object from the Insights Explorer S3 bucket.
 *
 * @note The function executes a getObject() request
 * @param {string} key Key to get file from bucket
 * @param {string} range Optional byte range to retrieve
 * @returns {Request<S3.Types.GetObjectOutput, AWSError>} Returns requested S3 GetObject request
 */
export function getFromS3(key: string, range?: string): Request<S3.Types.GetObjectOutput, AWSError> {
  const bucket = process.env.S3_BUCKET!;
  logger.info(`[STORAGE] Streaming from s3://${bucket}/${key}`);
  return defaultS3Client.getObject({ Bucket: bucket, Key: key, Range: range });
}

/**
 * Gets the head of an object in the Insights Explorer S3 bucket, if it exists.
 *
 * @note The function executes a headObject() request
 * @param {string} key Key of file to check in bucket
 * @returns {Promise<HeadObjectOutput | undefined>} The headObject output, or undefined if the key does not exist
 */
export async function headFromS3(key: string): Promise<HeadObjectOutput | undefined> {
  const bucket = process.env.S3_BUCKET!;
  logger.info(`[STORAGE] Checking existence of s3://${bucket}/${key}`);
  try {
    return await defaultS3Client.headObject({ Bucket: bucket, Key: key }).promise();
  } catch (error: any) {
    if (error.code === 'NotFound') return undefined;
    throw error;
  }
}

/**
 * Checks if a file exists in S3 and returns true/false
 *
 * @note The function executes a headObject() request
 * @param {string} key Key of file to check in bucket
 * @returns {Promise<boolean>} Returns true if the file exists, else false
 */
export async function existsInS3(key: string): Promise<boolean> {
  const bucket = process.env.S3_BUCKET!;
  logger.info(`[STORAGE] Checking existence of s3://${bucket}/${key}`);
  try {
    await defaultS3Client.headObject({ Bucket: bucket, Key: key }).promise();
    return true;
  } catch (error: any) {
    if (error.code === 'NotFound') return false;
    throw error;
  }
}
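To show how these helpers compose, here is a minimal usage sketch; the import path and object key are hypothetical, and it assumes S3_BUCKET and S3_REGION are set in the environment.

```typescript
// Hypothetical usage of the storage helpers above; the import path is assumed.
import { writeToS3, existsInS3, readFromS3 } from './storage';

async function roundTrip(): Promise<void> {
  const key = 'insights/example/readme.md'; // hypothetical key layout

  // Upload a small text body and log the resulting s3:// URI.
  const uri = await writeToS3('# Hello Insights Explorer', key);
  console.log(`uploaded to ${uri}`);

  // headObject-based existence check (returns false on NotFound).
  if (await existsInS3(key)) {
    const buffer = await readFromS3(key);
    console.log(buffer.toString('utf-8'));
  }
}

roundTrip().catch(console.error);
```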
Clopidogrel resistance: a new chapter in a fast-moving story. Although platelets lack nuclei and are the smallest circulating human cells, they play an integral and complex role in the process of thrombosis, both physiological and pathophysiological. Activation and aggregation of platelets play a central role in the propagation of intracoronary thrombi after (1) spontaneous atherosclerotic plaque disruption that results in myocardial ischemia or infarction in the acute coronary syndromes (ACS), or (2) the mechanical disruption that results from percutaneous coronary intervention (PCI). Platelets initially adhere to collagen and von Willebrand factor at the site of the disrupted plaque, resulting in an initial platelet monolayer. After activation, platelets release secondary agonists such as thromboxane A2 and adenosine diphosphate (ADP), which in combination with thrombin generated by the coagulation cascade result in stimulation and recruitment of additional platelets.1,2 With this pathophysiological background, it is not surprising that antiplatelet therapy is a cornerstone of the management of patients with ACS, especially those undergoing PCI.3–5 See p 3171 Aspirin inhibits cyclooxygenase (COX) by irreversible acetylation, which prevents the production of thromboxane A2. The antithrombotic effect of aspirin results from the decreased production of this prothrombotic, vasoconstrictive substance. Aspirin is effective in the short- and long-term prevention of adverse vascular events in high-risk patient groups, including those with ACS, stroke and peripheral arterial disease.6 Aspirin also has been shown to reduce the frequency of ischemic complications after PCI.7,8 Despite the impressive and consistent effects of aspirin in reducing adverse events in a variety of ischemic heart disease states, a significant rate of such events persists, and more potent antiplatelet agents, glycoprotein IIb/IIIa inhibitors, and thienopyridines have been developed. The thienopyridines irreversibly inhibit ADP binding to the P2Y12 receptor on the platelet surface. By blocking this receptor, these agents interfere with platelet activation, degranulation, and—by inhibiting the …
package com.tricks.math_tricks.fragmentItems;

import android.content.Context;
import android.os.Build;
import android.os.Bundle;
import android.os.VibrationEffect;
import android.os.Vibrator;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageButton;
import android.widget.Toast;

import androidx.annotation.NonNull;
import androidx.fragment.app.Fragment;

import com.github.barteksc.pdfviewer.PDFView;
import com.github.barteksc.pdfviewer.util.FitPolicy;
import com.tricks.math_tricks.R;

import java.io.File;

public class ContentFragmentPdf extends Fragment {
    private static final String TAG = "ContentFragment";

    private String topicPath = null;
    private boolean isNightMode = false;
    private ImageButton nightMode;
    private PDFView pdfView;
    private Vibrator vibrator;

    public ContentFragmentPdf() {
        super(); // empty constructor required for fragment re-creation
    }

    public ContentFragmentPdf(String topicPath) {
        this.topicPath = topicPath;
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            topicPath = savedInstanceState.getString("url");
            isNightMode = savedInstanceState.getBoolean("night_mode");
        }
    }

    @Override
    public void onSaveInstanceState(@NonNull Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString("url", topicPath);
        outState.putBoolean("night_mode", isNightMode);
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        // Inflate the layout for this fragment
        View view = inflater.inflate(R.layout.main_content_fragment_pdf, container, false);
        vibrator = (Vibrator) getContext().getSystemService(Context.VIBRATOR_SERVICE);
        nightMode = view.findViewById(R.id.night_mode_btn);
        pdfView = view.findViewById(R.id.pdfView);
        Log.d(TAG, "onCreateView: " + topicPath);
        loadPdf();
        nightMode.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                toggleView();
            }
        });
        return view;
    }

    // Loads (or reloads) the PDF with the current night-mode setting.
    private void loadPdf() {
        pdfView.fromFile(new File(topicPath))
                .nightMode(isNightMode)
                .fitEachPage(true)
                .pageFitPolicy(FitPolicy.WIDTH)
                .spacing(0)
                .defaultPage(0)
                .enableSwipe(true)
                .swipeHorizontal(false)
                .load();
    }

    public void toggleView() {
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            vibrator.vibrate(VibrationEffect.createOneShot(50, VibrationEffect.DEFAULT_AMPLITUDE));
        } else {
            // deprecated in API 26
            vibrator.vibrate(50);
        }
        isNightMode = !isNightMode;
        Toast.makeText(getContext(), isNightMode ? "Night mode On" : "Night mode Off", Toast.LENGTH_SHORT).show();
        loadPdf();
    }
}
Spectroscopy of blue supergiants in NGC 300

We have obtained VLT low-resolution (~5 Å) multi-object spectroscopy in the 4,000-5,000 Å spectral range of about 70 blue supergiant candidates in the Sculptor Group spiral galaxy NGC 300. We present a detailed spectral catalog containing identification, magnitudes, colors and spectral types. We employ synthetic spectra to determine metal abundances for two A0 supergiants of the sample. In agreement with the expectations, the star closer to the galactic center is found to be more metal rich than the object at a larger galactocentric distance. Using the Balmer Hbeta line profile we have estimated the mass-loss rate for one of the brightest A2 supergiants in the sample. We determined the wind momentum of the star and compared it to the value expected from the empirical wind momentum-luminosity relationship (WLR) for A-type supergiants of Kudritzki et al. (1999). Good agreement is obtained.

Introduction

With the recent advent of 8-10 meter-class telescopes, it has become possible for the first time to carry out quantitative spectroscopic analyses of stellar objects in distant galaxies, opening a new dimension in the study of the stellar content of such galaxies. These studies will not only allow us to investigate the stellar properties, but will also greatly contribute to an understanding of the host galaxies themselves in terms of star formation properties, and their chemical and dynamical evolution. In order to explore this new opportunity, a few years ago we started a program to carry out spectroscopy of blue supergiant stars in a number of nearby galaxies. Blue supergiants are especially well-suited for spectroscopic analysis in the visual part of the spectrum, because at visual wavelengths they belong to the brightest objects in a galaxy, attaining absolute visual magnitudes as bright as M_V = -10, thus pushing the limit for quantitative spectroscopic analysis with 8m-class telescopes out to distance moduli m - M ≃ 30. Indeed, we have recently performed the first quantitative spectroscopic analysis of a blue supergiant in the galaxy NGC 3621, at a distance of 6.7 Mpc (Bresolin et al. 2001), more than 100 times the distance to the LMC, and the most distant galaxy in which quantitative stellar spectroscopy has yet been carried out. Other galaxies in which blue supergiants have been studied so far include the LMC and SMC (Kudritzki et al. 1989; Lennon et al. 1991; Puls et al. 1996; Venn 1999; de Koter et al. 1998), the Inter-Cloud population (Rolleston et al. 1999), M31 (McCarthy et al. 1997; Venn et al. 2000; Smartt et al. 2001), M33 (McCarthy et al. 1995; Monteverde et al. 1997), and NGC 6822 (Muschielok et al. 1999; Venn et al. 2001), all of them belonging to the Local Group. Going beyond the Local Group, the main scientific driver behind this work is the possibility to derive relatively accurate chemical abundances for these stars together with more accurate estimates of extinction and reddening, even from low resolution (R ≃ 1000) optical spectra (Kudritzki 1998), and to use their wind properties, which can be determined from the optical spectra, to get an independent estimate of the host galaxies' distances from the wind momentum-luminosity relationship (WLR; Kudritzki et al. 1999).
Indeed, in the latter paper it was shown that there is evidence that the WLR for blue supergiants, once properly calibrated and tested for systematic effects, might yield a standard candle similar in accuracy to Cepheid variables, and reaching out to the distance of the Fornax and Virgo clusters. Before we can fully use this new instrument of distance measurement, we must thoroughly test its dependence on a number of parameters, most importantly the stellar spectral type and metallicity. An ideal place to carry out such a test, and significantly enhance the number of calibrating objects for the WLR, is the Sculptor Group galaxy NGC 300. At a distance of ~2.0 Mpc, as derived from Cepheid variables (Freedman et al. 2001), NGC 300 is close enough to allow quantitative spectroscopy of its blue supergiant population with multi-object spectroscopy at the VLT. Furthermore, NGC 300 shows clear signs of recent massive star forming activity, so a considerable number of blue supergiants can be expected in this galaxy. Indeed, a recent wide-field photometric survey of the galaxy carried out by some of us (Pietrzyński et al. 2001) has identified more than 100 OB associations. This same survey is currently discovering a large number of new Cepheids, and the blue supergiant abundances which we will derive will allow us to determine the abundance gradient in the disk of NGC 300, from which we hope to obtain the first accurate empirical determination of the effect of metallicity on the Cepheid Period-Luminosity (PL) relation, currently only poorly constrained by observations. The effects of reddening will also be investigated, by comparing observed and synthetic colors of individual blue supergiants. The purpose of this paper is to present our blue supergiant spectroscopy in NGC 300. In Sec. 2 we describe the target selection, and the spectroscopic observations and reductions. In Sec. 3 we present the spectral classification of our targets. In Sec. 4 we discuss some first results regarding metallicities and wind momenta, and our conclusions will be presented in Sec. 5.

Blue supergiant selection and spectroscopy

A large set of multi-epoch, broad-band images have been obtained with the Wide Field Imager (WFI) at the ESO/MPI 2.2m telescope on La Silla, as part of a long-term project aiming at the discovery and monitoring of Cepheids in NGC 300. As mentioned in the Introduction, these data have already been used by Pietrzyński et al. (2001) to identify OB associations in NGC 300. Improved BVI stellar photometry has been measured for the current work with DAOPHOT/ALLSTAR on a subset (about 20 nights) of the entire dataset, leading to a zero-point accuracy of ~0.03 mag. For further details on the stellar photometry the reader is referred to Pietrzyński et al. (2001). For a preliminary catalog of blue supergiant candidates we selected all stars brighter than V = 20, corresponding to an absolute magnitude M_V = -6.5 (luminosity class Iab or brighter for B- and A-type stars) for an adopted distance modulus m - M = 26.53 (Freedman et al. 2001), and having color index in the range -0.3 < B - V < 0.3. At this high galactic latitude (b = -79°) the foreground reddening is low, around E(B - V) = 0.02 (Burstein & Heiles 1984). This, combined with low internal reddening, means the observed B - V corresponds closely to the intrinsic stellar color, and our criterion is therefore optimal for isolating late B- and early A-type supergiants.
The final list of candidates for the spectroscopic follow-up, containing 167 objects, was set up by carefully examining the original WFI frames, rejecting objects on the basis of broad profiles and the presence of nearby companions. An Hα image of the galaxy was also inspected in order to avoid overlap with emission line nebulae. Spectroscopy of a subset of our candidate list was obtained with Antu and FORS1 at the Very Large Telescope (Paranal) on September 25 and 26, 2000. Two FORS1 fields were observed each night in multi-object spectroscopy mode, allowing simultaneous spectroscopy of 19 objects, for a total of four different pointings, chosen to allow a good coverage of the radial extent of the galaxy. Sky and seeing conditions were excellent on both nights, with typical 0.7 arcsec seeing, but with long spells of 0.4-0.5 arcsec seeing. Five exposures, each lasting 45 min, were secured at every pointing with a 600 gr/mm grating, which provides approximately a 5 Å spectral resolution. The spectral coverage with this setup is about 1,000 Å wide, centered around 4,500 Å (dependent on an object's position in the focal field along the dispersion axis), including in most cases the range from the H and K calcium lines up to the Balmer Hβ line. Due to the positioning limitations of the FORS slitlets and the uneven distribution of blue supergiants in NGC 300, a few additional objects, not included in our candidate list, were also observed. Among these were H II regions, blue stars somewhat fainter than our original magnitude limit, and a handful of late-type stars. Central coordinates of the fields observed are reported in Table 1, while Fig. 1 shows the location of these fields on a wide-field image of NGC 300. The individual fields, together with the identification of the spectroscopic targets, are shown in Fig. 2 through 5, reproducing V-band, 5-min FORS1 exposures. Table 2 summarizes the positions, BVI magnitudes and additional information for all the objects for which a spectrum was obtained. In this Table and for the rest of this paper, individual stars will be identified with the letter corresponding to the galaxy field (A through D) and the progressive FORS slitlet number (1 through 19). The generic image processing tasks within IRAF were used for bias and flat field corrections. After adding all the images of a given field, each individual slitlet spectrum was treated as a long-slit spectrum, and independently wavelength calibrated and extracted. Finally, the 1-D spectra were normalized with a low-order polynomial. Our targets are mostly located in uncrowded regions, so that sky subtraction did not pose particular problems. The only difficult cases were represented by stars in the proximity of or within emission nebulae. Despite our efforts to avoid such occurrences by using the Hα image of NGC 300, the spectra of several stars were found to be contaminated by nebular emission. At this spectral resolution a complete and satisfactory subtraction of this contamination is not possible, and we marked the affected objects in Table 2. The average S/N for most of the spectra is close to 50, but for the brightest stars it goes up to ≃ 100. Only very few spectra are underexposed (S/N < 25).

Spectral classification

The spectral classification of our targets was carried out by a visual comparison of the observed spectra with template spectra of B- and A-type Galactic supergiants, and B-type supergiants in the SMC, degraded to FORS resolution.
While the SMC data have been taken from the literature (Lennon 1997), the Galactic data are part of an ongoing project which aims at obtaining high-resolution spectra of nearby blue supergiants for accurate stellar atmospheric analysis. The overall appearance of the available spectra compared with the template spectra was used for the spectral classification, with special attention to widely-used diagnostics of blue supergiants. As pointed out by Lennon (1997), ambiguities in the spectral classification of extragalactic B supergiants can arise if the classification scheme does not account for possible significant deviations from galactic metallicity. We have therefore applied his classification criteria, which are largely independent of metallicity, and have used both sets of templates, Galactic and SMC, for B-type spectra. However, in a few cases ambiguities remained, which we will be able to eliminate only after a detailed quantitative analysis of the spectra has been carried out. Those cases are marked in Table 2, and the comparison shows a relatively small effect (a shift of one or two subclasses) due to the different metallicity of the template spectra. The spectral types thus determined are presented in column 7 of Table 2. In several cases we provide a range of spectral classes, reflecting the uncertainty due to the S/N of the spectra and the possible abundance effects. Additional comments pertaining to the appearance of the spectra or the spectral classification are given in column 8. Objects which show a possibly composite spectrum were not assigned a spectral classification. The nebular contamination has been identified as such when the 2-D spectra indicated the presence of emission lines extending over and beyond the stellar position. We present in Figs. 6-12 the spectra of the NGC 300 supergiants, grouped by spectral type as follows: late-O and early-B (B0 through B5), late-B, early-A (A0 through A5), late-A and F stars. We exclude from this spectral atlas the H II region spectra, and some additional objects, which include a foreground white dwarf and a WN11 star, which will be discussed in a future paper. The position of the supergiant sample in the HRD is shown in Fig. 13. The stellar luminosity was derived after correcting for reddening (assuming A_V = 3.1 E(B - V)). The latter was obtained from the observed B - V and the expected value of this color index for a given spectral type, as given by Fitzgerald (1970). Bolometric corrections and effective temperatures as a function of spectral type were taken from Humphreys & McElroy (1984). Theoretical stellar tracks at metallicity Z = 0.008 from Schaerer et al. (1993) are also shown for comparison (the new tracks which include stellar rotation by the Geneva group, used in Sect. 4.1 to estimate the stellar parameters of two bright supergiants, are not yet available for this metallicity). The apparent gap centered around log T_eff = 4.1 is the result of a selection effect, due to the lack of template spectra between types B5 and B8 in our classification program.

Stellar parameters and abundances

While at the resolution currently attainable in multi-object spectroscopy of targets as faint as our blue supergiants a detailed chemical abundance analysis is difficult (though not impossible), we can in any case estimate, within roughly 0.2 dex, the abundances of several elements by a comparison of the observed spectra to synthetic ones generated by models of blue supergiant atmospheres, calculated for a variety of metal abundances.
We leave a full abundance analysis of our blue supergiant sample to a forthcoming paper, but we want to show here some first results illustrating the power of our technique to estimate the metal abundances of these stars. The photospheric analyses are performed on the basis of hydrostatic LTE line-blanketed model atmospheres (Kurucz 1991) and subsequent non-LTE/LTE spectrum synthesis. Effective temperatures T_eff are estimated from the spectral classification, and surface gravities log g are determined from the Balmer line strengths; the microturbulence ξ is assumed to be the same as in the Galactic comparisons. The helium content y (by number) and the stellar metallicity are deduced from the comparison of the observed and the synthetic spectra for varying elemental abundances. At present, the elements with strong lines in A-type stars, e.g. Mg II, Ti II and Fe II, are treated in non-LTE, while Si II and Cr II are treated in LTE. In total, several tens of thousands of lines from these and some 20 other chemical species, comprising almost all spectral features, are included in the spectrum synthesis (with the CNO and S lines also in non-LTE). The results from the analysis of two of the NGC 300 supergiants, A-8 and D-13, are summarized in Table 3. Radial velocities v_rad can be determined from cross-correlation of the observed spectra with the synthetic ones. In Fig. 14 we show the observed spectrum of the A0 Ia star A-8, together with model fits which were calculated for three different metal abundances: 0.2, 0.5 and 1.0 solar. A comparison of the predicted and observed line intensities, particularly for lines of the elements Fe and Cr, suggests a low abundance for this star, of the order of 0.2 solar. This is consistent with the star's position in the outskirts of the galaxy, where a low metal abundance would be expected. In Fig. 15 we show the same model fits to the spectrum of another A0 supergiant, star D-13. This object has a clearly higher metal abundance, of the order of 0.5 solar. Again this is consistent with the expected higher metallicity for this star, which is considerably closer to the center of NGC 300. In both figures the estimated noise level in the continuum (1%) is shown by vertical bars at the lower left. An additional 1% uncertainty is estimated for the placement of the continuum. The latter was measured from low-order polynomial fits to relatively line-free regions of the observed spectra, where the theoretical normalized fluxes predicted for different metallicities reach unity simultaneously (e.g. around 4160 Å, 4200 Å, 4610 Å, 4690 Å). This is in contrast with the situation in the UV part of the spectrum, where the 'true' continuum is never observed (Haser et al. 1998). Applying the technique described above to the complete sample of supergiants, we expect to delineate the radial metallicity gradient in the disk of NGC 300 rather accurately. It will be of considerable interest to compare the stellar abundance gradient to the one derived from H II region abundance studies, which in extragalactic work has been assumed to reflect the stellar abundances, e.g. in the work of the HST Key Project team on M101 to measure the metallicity effect on the Cepheid PL relation (Kennicutt et al. 1998). Having determined the atmospheric parameters, the physical properties of the stars are derived, cf. Table 3. The reddening E(B - V) is found from the comparison of the photometry with the synthetic colors from the model fluxes.
Absolute visual magnitudes M_V are obtained after correcting for extinction, assuming A_V = 3.1 E(B - V). Applying a bolometric correction B.C. leads to the stellar bolometric magnitude M_bol. From this and the atmospheric parameters T_eff and g, the stellar luminosity L, the radius R and the spectroscopic mass M_spec are determined. Zero-age main sequence masses M_ZAMS are derived from comparison with stellar evolution tracks accounting for mass-loss and rotation (Meynet & Maeder 2000; Maeder & Meynet 2001). The supergiant A-8 is situated in a region of the HRD where partial blue loops may be found in stellar evolution calculations, depending on the detailed physics accounted for (see discussion in Maeder & Meynet 2001). An enhanced He abundance supports such an interpretation for the evolutionary status of this object (cf. the 12 M_⊙ track of Maeder & Meynet 2001). The more massive supergiant D-13 has either developed directly from the main sequence (in the case of an initially fast rotator) or has reached the post red supergiant phase (as an initially slow rotator), accounting for its marked helium enhancement and its spectroscopic mass (cf. the 20 M_⊙ track of Meynet & Maeder 2000). Note that for this metallicity sophisticated stellar evolution tracks accounting for rotation are not available yet, but those for solar metallicity are expected to be sufficiently similar. Unfortunately, information on the N/C ratio, the most sensitive indicator of the evolutionary status, cannot be derived from the available spectra. To conclude, the wealth of data obtained on just two sample supergiants clearly demonstrates the versatility of quantitative spectroscopy for stellar and, through the observation of a larger ensemble of objects, galactic studies.

Wind Momentum-Luminosity Relationship (WLR)

The existence of a relationship between the stellar wind momentum and the luminosity of hot massive stars is a sound prediction of the theory of radiatively driven winds (Kudritzki 1998; Kudritzki & Puls 2000). The relationship has the form Ṁ v_∞ (R/R_⊙)^{1/2} ∝ L^{1/α}, where the product of the mass-loss rate (Ṁ) and the wind terminal velocity (v_∞) gives the mechanical momentum flow carried away by the stellar wind; R is the stellar radius, L the luminosity, and α is the exponent of the power-law line strength distribution function of the metal lines driving the wind. As the ionizing properties in the stellar atmosphere change with effective temperature, so do the line strengths of the metal lines most effective at driving the wind, and as a consequence α is expected to vary with stellar spectral type; values from ~0.65 (O-type) to ~0.38 (A-type) are found in the solar neighborhood (Kudritzki & Puls 2000). The validity of the WLR has been demonstrated empirically by Puls et al. (1996) for O-type stars in the Galaxy and the Magellanic Clouds, as well as for Galactic supergiants of type B and A, confirming the expected dependence of the relation on spectral type, roughly in agreement with the slopes predicted by theory. Our new, large sample of blue supergiants in NGC 300, all at a given distance (which will eventually be improved with the Cepheids we are currently detecting in a parallel program), provides an extremely valuable dataset to improve the calibration of the WLR and establish its dependence on spectral type and metallicity, with a much higher accuracy than hitherto possible.
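Written out in the logarithmic form commonly used for the empirical calibration (e.g. Kudritzki & Puls 2000), with the offset log D_0 and the slope 1/α fitted separately for each spectral type:

$$\log\!\left(\dot{M}\,v_\infty\,(R/R_\odot)^{1/2}\right) \;=\; \log D_0 \;+\; \frac{1}{\alpha}\,\log\!\left(L/L_\odot\right).$$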
The most accurate results for the wind momentum of the blue supergiants can be obtained from a modeling of the Balmer Hα line profile (for a review, see Kudritzki & Puls 2000), and we are looking forward to getting red spectra covering Hα for our complete sample of blue supergiants with the VLT later this year. However, for the brightest stars it is possible to derive the wind momentum also from the Hβ line, albeit with lower accuracy. We have done this for the A2 supergiant D-12. This star has an absolute visual magnitude M_V = -8.35, and is one of the brightest supergiants in our sample. From a fit to the Hβ line profile, the mass-loss rate was obtained (1.8 (±0.2) × 10^-6 M_⊙/yr), which, together with an assumed value for v_∞ (150 km/s, a typical value for A2 supergiants, cf. Lamers et al. 1995 and Kudritzki & Puls 2000) and radius (210 R_⊙, derived from E(B - V) = 0.15, M_V = -8.35 and the model atmosphere flux), yielded the position in the diagnostic diagram shown in Fig. 16. It is seen that this 'preliminary' datapoint from this one supergiant fits relatively well into the existing WLR for A supergiants, making us optimistic that we can achieve a significant improvement on the WLR with the results based on the Hα profiles.

Conclusions

We have presented a spectral catalog of about 70 blue supergiant candidates in NGC 300, observed at a resolution R ≃ 1000 and S/N ≃ 50 in the 4,000-5,000 Å spectral range at the VLT. Of the observed targets, 62 are spectroscopically confirmed as supergiants with spectral types between late-O and F. Most of these supergiants are of types B and A. In our survey, we also found several different, interesting objects, including a WN11 star and a foreground white dwarf, which will be analyzed in detail in forthcoming studies. The spectral classification of the blue supergiants determined in this paper will be essential for a thorough investigation of the dependence on spectral type of the wind momentum-luminosity relationship, a new, purely spectroscopic and far-reaching distance indicator, as predicted by theory and seen in preliminary empirical results. A model atmosphere technique was employed to obtain first results for the metal abundances of two A0 supergiants in our sample. A comparison of synthetic spectra, calculated for different metal abundances, with the observed spectra of these stars yields metal abundances differing by approximately 0.3 dex, in the expected sense that the star closer to the center of NGC 300 is more metal-rich than its counterpart, which is located at a larger galactocentric distance. In a forthcoming paper we will use this technique on the whole sample of blue supergiants in NGC 300 to determine their metal abundances. While the individual values of these abundances will be of a modest accuracy (about ±0.2 dex per star), the large number of stars in our sample and the wide range in galactocentric distance they span should allow us to determine the abundance gradient in the disk of NGC 300 with an accuracy which is unprecedented in the study of spiral galaxies beyond the Local Group. We also report on a first determination of the wind momentum for one of the brightest A-type supergiants in our sample, based on its mass-loss rate as determined from the Balmer Hβ line profile, and find that it fits relatively well into the empirical WLR determined from A supergiants in the Galaxy and M31 by Kudritzki et al. (1999).
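As a rough numerical cross-check of the D-12 measurement recalled above, the modified wind momentum can be evaluated in cgs units (the unit conversions below are ours, added for illustration):

$$\dot{M} = 1.8\times10^{-6}\ M_\odot\,\mathrm{yr}^{-1} \approx 1.1\times10^{20}\ \mathrm{g\,s}^{-1},\qquad v_\infty = 150\ \mathrm{km\,s}^{-1} = 1.5\times10^{7}\ \mathrm{cm\,s}^{-1},$$
$$\dot{M}\,v_\infty\,(R/R_\odot)^{1/2} \approx 1.1\times10^{20} \times 1.5\times10^{7} \times \sqrt{210} \approx 2.5\times10^{28}\ \mathrm{g\,cm\,s}^{-2},$$

i.e. log D_mom ≈ 28.4, the kind of value that can be placed directly on the WLR diagnostic diagram.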
We will be able to obtain the wind parameters with higher accuracy from Hα profiles which we expect to have at our disposal, for all stars in our sample, by the end of 2001. We will then be able to perform a thorough empirical check of the usefulness of the WLR for distance determinations, including a calibration of its dependence on spectral type and metallicity, and of its intrinsic dispersion for a given spectral type and metallicity.

FB acknowledges DLR grant 50 OR 9909 for support while working in Munich. WG gratefully acknowledges financial support received from Fondecyt grants 1000330 and 8000002 for this project. Part of this work was done while he was a scientific visitor at ESO Garching. WG is grateful for the support received from ESO. We also thank the referee, A. de Koter, for positive and constructive comments.

Fig. 1.-The four FORS1 fields observed in NGC 300 are marked on a montage of eight B-band ESO/MPI 2.2m + WFI frames. North is at the top, east to the left. Field size is approximately 34′ × 33′.

Fig. 2.-Field A from a 5-min, V-band FORS1 exposure. The field of view is approximately 6.8′ × 6.8′. The multi-object spectroscopy targets are marked by the circles. Their identification numbers correspond to those in Table 2.

Fig. 13 (caption fragment).-Evolutionary tracks of Schaerer et al. (1993) for different initial stellar masses are shown, as indicated by the numbers to the left of the Main Sequence. The gap around log T_eff = 4.1 is due to the unavailability of spectral templates between types B5 and B8 in our spectral classification program. The two stars described in Sec. 4.1 are shown with different symbols.

Fig. 16 (caption fragment).-[...] improved photometry and distance modulus were used for M31 (see Bresolin et al. 2001). Also plotted is the A0 supergiant examined by Bresolin et al. (2001) in NGC 3621, and star D-12 in NGC 300 analyzed in this paper.

Table 3. A-8 and D-13: basic properties and stellar parameters.
/**
 * Enables the Firestore cache.
 * Enabling the cache disables network access: while the cache is enabled,
 * all snapshot listeners and document requests retrieve their results from
 * the cache, and write operations are queued until network access is
 * re-enabled.
 *
 * See <a href="https://firebase.google.com/docs/firestore/manage-data/enable-offline">the
 * Firestore offline data documentation</a> for more information on Firestore caching.
 *
 * @param handler The completion handler called once the cache has been enabled.
 */
public void enableCache(final FirestoreCompletionHandler<Void> handler) {
    // `database` is the underlying FirebaseFirestore instance held by this class.
    database.disableNetwork().addOnCompleteListener(new OnCompleteListener<Void>() {
        @Override
        public void onComplete(@NonNull Task<Void> task) {
            // Note: the task result is not inspected, so the handler is
            // invoked even if disabling the network failed.
            handler.completed(null);
        }
    });
}
import * as React from 'react';
import { Button, COLOR_REDISH, Split, Child, Tooltip, COLOR_GREENISH } from '@lunchpad/base';
import { settingsLabels } from '@lunchpad/types';
import { NotificationContext } from '@lunchpad/contexts';

const { remote } = window.require('electron');

interface IErrorBoundary {
  hasError: boolean
}

export class ErrorBoundary extends React.Component<{}, IErrorBoundary> {
  static contextType = NotificationContext.Context

  constructor(props: {}) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError(error) {
    // Update state so the next render will show the fallback UI.
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    // You can also log the error to an error reporting service
    console.log(error, errorInfo);
  }

  copyConfiguration() {
    const { addNotification } = this.context
    const config = localStorage.getItem(settingsLabels.layout.config);
    remote.clipboard.writeText(config);
    addNotification("Your current configuration has been copied to your clipboard.", 2500)
  }

  resetConfiguration() {
    localStorage.clear();
    window.location.reload();
  }

  tryBackup() {
    const old = localStorage.getItem(settingsLabels.layout.old);
    localStorage.setItem(settingsLabels.layout.config, old);
    localStorage.setItem(settingsLabels.layout.active, "default");
    window.location.reload();
  }

  render() {
    if (this.state.hasError) {
      // You can render any custom fallback UI
      return (
        <>
          <Split justify="center">
            <Child text="center" padding="0 0 3rem 0" align="center">
              <h1>OH NO! Something went wrong.</h1>
              <p>You may have a look at the developer console.<br/>To open it press Ctrl+Shift+I or Cmd+Option+I</p>
            </Child>
            <Child padding="0 0 5rem 0" width="auto">
              <Tooltip delay={50} title="Copy your current configuration to your clipboard. Helpful for error finding or manual debugging.">
                <Button onClick={this.copyConfiguration.bind(this)}>Copy configuration to clipboard</Button>
              </Tooltip>
            </Child>
            <Child padding="0 0 5rem 0" width="auto" align="center">
              <Tooltip delay={50} type="error" title="Try to restore the configuration from an automatic backup">
                <Button color={COLOR_GREENISH} onClick={this.tryBackup.bind(this)}>Try last configuration</Button>
              </Tooltip>
            </Child>
            <Child width="auto" align="center">
              <Tooltip delay={50} type="error" title="Absolute danger zone! Will reset the whole application to factory defaults">
                <Button color={COLOR_REDISH} onClick={this.resetConfiguration.bind(this)}>Reset everything</Button>
              </Tooltip>
            </Child>
          </Split>
        </>
      )
    }
    return this.props.children;
  }
}
At the volunteer meet on Thursday. (Source: Express photo by Oinam Anand)

Following the drubbing it received in the Lok Sabha polls, AAP chief Arvind Kejriwal on Thursday asked party volunteers to apologise to people for resigning from the Delhi government and to request them to grant the party a second chance. The party is also set to rebuild its organisation, Kejriwal said. In his first address to volunteers after the general elections, Kejriwal said, "When you go to the people, I request you to not argue with them. Apologise for our mistakes and ask them for one more chance. Tell them that we will stay in the government for five years and won't run away this time." In an attempt to boost volunteers' morale, Kejriwal said, "We expect elections to take place in October. The Delhi Assembly will be dissolved soon. We all hope that the AAP comes to power with a majority this time." Acknowledging complaints from volunteers, Kejriwal said, "Many of you feel that no one listens to you. We apologise for this. I am told that you are not treated properly when you go to the party office. Now, the entire system in Delhi is going to be monitored by me. Another complaint is that some members of the PAC (Political Affairs Committee) decide matters behind closed doors." During the hour-long discussion, the topic of government formation was brought up. Kejriwal maintained that the party would not ally with the Congress again, and explained why the AAP could not form the government immediately. "When I met the Lieutenant-Governor, I was informed about technical details which do not allow us to form the government again. That is what I told mediapersons after meeting the L-G," he said. Discussing Shazia Ilmi's resignation over lack of inner-party democracy, he said, "When Shazia Ilmi left the party, I was in jail. From what I have heard, she has said it had become difficult to approach Arvind. This is completely wrong. I have been meeting people all the time."
package com.izofar.bygonenether.world.feature;

import com.izofar.bygonenether.init.ModBlocks;
import net.minecraft.data.worldgen.ProcessorLists;
import net.minecraft.data.worldgen.features.NetherFeatures;
import net.minecraft.world.level.block.Blocks;
import net.minecraft.world.level.levelgen.structure.templatesystem.RandomBlockMatchTest;

public class ModFeatureUtils {

    /** Makes the vanilla blackstone-blob feature place cobbled blackstone instead. */
    public static void replaceBlackstoneBlobs() {
        NetherFeatures.BLACKSTONE_BLOBS.config.replaceState =
                ModBlocks.COBBLED_BLACKSTONE.get().defaultBlockState();
    }

    /** Rewrites the bastion structure processor so all blackstone (match probability 1.0) becomes cobbled blackstone. */
    public static void replaceBlackstoneInBastion() {
        ProcessorLists.REMOVE_GILDED_BLACKSTONE.inputPredicate = new RandomBlockMatchTest(Blocks.BLACKSTONE, 1.0F);
        ProcessorLists.REMOVE_GILDED_BLACKSTONE.outputState = ModBlocks.COBBLED_BLACKSTONE.get().defaultBlockState();
    }
}
package com.merlin.network.internal.exception;

/**
 * User: Simon
 * Date: 2016/1/18
 * Desc: Thrown when an HTTP message cannot be converted to or from its
 * transport representation.
 */
public class HttpMessageConversionException extends RestClientException {

    public HttpMessageConversionException(String exceptionMessage) {
        super(exceptionMessage);
    }
}
import java.util.EventObject;

/**
 * A Prolog event consisting of a subject and subject-specific data.
 */
public class PrologEvent extends EventObject {

    private static final long serialVersionUID = 1L;

    private String subject;
    private String data;

    /**
     * Constructs a Prolog event.
     *
     * @param source
     *            the {@link PrologEventDispatcher}
     * @param subject
     *            the subject for which a {@link PrologEventListener} can be
     *            registered
     * @param data
     *            subject-specific data of the event
     */
    public PrologEvent(Object source, String subject, String data) {
        super(source);
        this.subject = subject;
        this.data = data;
    }

    /**
     * @return subject-specific data of the event
     */
    public String getData() {
        return data;
    }

    /**
     * Returns the subject for which a {@link PrologEventListener} can be
     * registered.
     *
     * @return the subject
     */
    public String getSubject() {
        return subject;
    }
}
def Hamiltonian(J, K, spins):
    """Total energy of a spin configuration: a nearest-neighbor term with
    coupling J plus an optional four-body term with coupling K."""
    energ0 = -J * first_NN_interaction(spins)
    if K == 0:
        energ1 = 0  # skip the four-body sum when it cannot contribute
    else:
        energ1 = -K * four_body_sum(spins)
    return energ0 + energ1
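The helpers first_NN_interaction and four_body_sum are not defined in this snippet. A minimal sketch of plausible implementations, assuming spins is a 2D numpy array of ±1 values on a square lattice with periodic boundaries (hypothetical; the original definitions may differ):

import numpy as np

def first_NN_interaction(spins):
    # Sum of s_i * s_j over nearest-neighbor pairs, counting each pair once.
    return np.sum(spins * np.roll(spins, 1, axis=0)) + \
           np.sum(spins * np.roll(spins, 1, axis=1))

def four_body_sum(spins):
    # Sum of products of the four spins around each elementary plaquette.
    return np.sum(spins
                  * np.roll(spins, 1, axis=0)
                  * np.roll(spins, 1, axis=1)
                  * np.roll(np.roll(spins, 1, axis=0), 1, axis=1))

spins = np.random.choice([-1, 1], size=(8, 8))
print(Hamiltonian(J=1.0, K=0.1, spins=spins))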
use crate::geoq::{fgb::index, geojson::fvec};

use super::columns;
use super::hilbert::BBox;
use super::hilbert::BoundedFeature;
use flatbuffers::FlatBufferBuilder;
use flatgeobuf::{ColumnType, GeometryType, HeaderArgs, HeaderBuilder};
use serde_json::Value;
use std::collections::{HashMap, HashSet};
use std::convert::TryInto;
use std::iter::Map;

// table Header {
//   name: string;                 // Dataset name
//   envelope: [double];           // Bounds
//   geometry_type: GeometryType;  // Geometry type (should be set to Unknown if per feature geometry type)
//   has_z: bool = false;          // Does geometry have Z dimension?
//   has_m: bool = false;          // Does geometry have M dimension?
//   has_t: bool = false;          // Does geometry have T dimension?
//   has_tm: bool = false;         // Does geometry have TM dimension?
//   columns: [Column];            // Attribute columns schema (can be omitted if per feature schema)
//   features_count: ulong;        // Number of features in the dataset (0 = unknown)
//   index_node_size: ushort = 16; // Index node size (0 = no index)
//   crs: Crs;                     // Spatial Reference System
//   title: string;                // Dataset title
//   description: string;          // Dataset description (intended for free form long text)
//   metadata: string;             // Dataset metadata (intended to be application specific and suggested to be structured fx. JSON)
// }

fn geometry_type(features: &Vec<BoundedFeature>) -> GeometryType {
    let mut types = HashSet::new();
    let mut last_gtype = GeometryType::Unknown;
    for bf in features {
        let f = &bf.feature;
        if let Some(geom) = &f.geometry {
            let gtype = match geom.value {
                geojson::Value::Point(_) => GeometryType::Point,
                geojson::Value::LineString(_) => GeometryType::LineString,
                geojson::Value::Polygon(_) => GeometryType::Polygon,
                geojson::Value::MultiPoint(_) => GeometryType::MultiPoint,
                geojson::Value::MultiLineString(_) => GeometryType::MultiLineString,
                geojson::Value::MultiPolygon(_) => GeometryType::MultiPolygon,
                geojson::Value::GeometryCollection(_) => GeometryType::GeometryCollection,
            };
            types.insert(gtype);
            last_gtype = gtype;
        }
    }
    if types.len() == 1 {
        last_gtype
    } else {
        GeometryType::Unknown
    }
}

#[derive(Clone, Debug)]
pub struct ColSpec {
    pub name: String,
    pub type_: ColumnType,
}

#[derive(PartialEq, Debug)]
enum PropType {
    Boolean,
    String,
    Long,
    Double,
    JsonVal,
}

fn schema<'a>(features: impl Iterator<Item = &'a geojson::Feature>) -> HashMap<String, PropType> {
    let mut schema = HashMap::<String, PropType>::new();
    for f in features {
        if f.properties.is_none() {
            continue;
        }
        for (k, v) in f.properties.as_ref().unwrap() {
            let jsont_o = match v {
                Value::Bool(_) => Some(PropType::Boolean),
                Value::String(_) => Some(PropType::String),
                Value::Number(num) => {
                    if num.is_f64() {
                        Some(PropType::Double)
                    } else if num.is_i64() {
                        Some(PropType::Long)
                    } else {
                        // Is this possible? I think is_f64 or is_i64 should cover all
                        None
                    }
                }
                Value::Array(_) => Some(PropType::JsonVal),
                Value::Object(_) => Some(PropType::JsonVal),
                Value::Null => None,
            };
            if jsont_o.is_none() {
                continue;
            }
            let jsont = jsont_o.unwrap();
            if !schema.contains_key(k) {
                schema.insert(k.to_string(), jsont);
            } else {
                let current = schema.get(k).unwrap();
                if *current == jsont {
                    continue;
                } else {
                    // Schemas diverge for a key. 2 cases of widening:
                    //   number: from Long -> Double
                    //   any other (e.g. string vs array, string vs JSON): -> JsonVal
                    if *current == PropType::JsonVal {
                        // Already using Json, most generic schema type, so leave as is
                        continue;
                    } else if jsont == PropType::Long && *current == PropType::Double {
                        // Already have Double and found a Long. Leave schema as is
                        // to "widen" from Long to Double
                        continue;
                    } else {
                        // Widen from current specific type to more generic Json type
                        schema.insert(k.to_string(), PropType::JsonVal);
                    }
                }
            }
        }
    }
    schema
}

fn col_type(prop_type: &PropType) -> ColumnType {
    match *prop_type {
        PropType::Boolean => ColumnType::Bool,
        PropType::Long => ColumnType::Long,
        PropType::Double => ColumnType::Double,
        PropType::String => ColumnType::String,
        PropType::JsonVal => ColumnType::Json,
    }
}

fn col_specs(features: &Vec<BoundedFeature>) -> Vec<ColSpec> {
    let schema = schema(features.iter().map(|f| &f.feature));
    schema
        .iter()
        .map(|(k, v)| ColSpec {
            name: k.to_string(),
            type_: col_type(v),
        })
        .collect()
}

pub fn write<'a>(
    features: &Vec<BoundedFeature>,
    bounds: &BBox,
) -> (FlatBufferBuilder<'a>, Vec<ColSpec>) {
    let mut bldr = FlatBufferBuilder::new();
    // https://github.com/flatgeobuf/flatgeobuf/blob/master/src/fbs/header.fbs
    // https://github.com/flatgeobuf/flatgeobuf/blob/master/src/ts/generic/featurecollection.ts#L158-L182
    let name = bldr.create_string("L1"); // Dataset name
    let col_specs: Vec<ColSpec> = col_specs(features);
    let cols_vec = Some(columns::build(&mut bldr, &col_specs));
    let bounds_vec = bldr.create_vector(&bounds.to_vec());

    let args = HeaderArgs {
        name: Some(name),
        features_count: features.len().try_into().unwrap(), // not sure when this would fail...i guess 128bit system?
        geometry_type: geometry_type(features),
        index_node_size: index::NODE_SIZE,
        columns: cols_vec,
        envelope: Some(bounds_vec),
        ..Default::default()
    };
    let header = flatgeobuf::Header::create(&mut bldr, &args);
    bldr.finish_size_prefixed(header, None);
    (bldr, col_specs)
}

#[test]
fn test_schema_inference() {
    let gj = r#"{"type":"Feature","properties": {"name": "pizza", "age": 123},"geometry": {"type": "Point", "coordinates": [-118, 34]}}"#;
    let feats = fvec(gj);
    let sch = schema(feats.iter());
    assert_eq!(2, sch.len());
    assert_eq!(Some(&PropType::Long), sch.get("age"));
    assert_eq!(Some(&PropType::String), sch.get("name"));
}

#[test]
fn test_schema_inference_mixed() {
    let gj = r#"
    {"type": "FeatureCollection", "features": [
      {"type":"Feature","properties": {"name": "pizza", "n": "null"},"geometry": {"type": "Point", "coordinates": [-118, 34]}},
      {"type":"Feature","properties": {"foo": ["pizza"], "n": 123},"geometry": {"type": "Point", "coordinates": [-118, 34]}},
      {"type":"Feature","properties": {"foo": {"a":"b"}},"geometry": {"type": "Point", "coordinates": [-118, 34]}},
      {"type":"Feature","properties": {"name": 1.0},"geometry": {"type": "Point", "coordinates": [-118, 34]}},
      {"type":"Feature","properties": {"name": 1},"geometry": {"type": "Point", "coordinates": [-118, 34]}}
    ]}"#;
    let feats = fvec(gj);
    let sch = schema(feats.iter());
    assert_eq!(3, sch.len());
    assert_eq!(Some(&PropType::JsonVal), sch.get("name"));
    assert_eq!(Some(&PropType::JsonVal), sch.get("foo"));
    assert_eq!(Some(&PropType::JsonVal), sch.get("n"));
}
import os


class CustomTargetIndex:

    """A special opaque object returned by indexing a CustomTarget.

    This object exists in meson, but acts as a proxy in the backends,
    making targets depend on the CustomTarget it's derived from, but only
    adding one source file to the sources.
    """

    def __init__(self, target, output):
        self.typename = 'custom'
        self.target = target
        self.output = output
        self.for_machine = target.for_machine

    def __repr__(self):
        return '<CustomTargetIndex: {!r}[{}]>'.format(
            self.target, self.target.get_outputs().index(self.output))

    def get_outputs(self):
        return [self.output]

    def get_subdir(self):
        return self.target.get_subdir()

    def get_filename(self):
        return self.output

    def get_id(self):
        return self.target.get_id()

    def get_all_link_deps(self):
        return self.target.get_all_link_deps()

    def get_link_deps_mapping(self, prefix, environment):
        return self.target.get_link_deps_mapping(prefix, environment)

    def get_link_dep_subdirs(self):
        return self.target.get_link_dep_subdirs()

    def is_linkable_target(self):
        # Treat static/shared library artifacts as linkable based on suffix.
        suf = os.path.splitext(self.output)[-1]
        return suf in ('.a', '.dll', '.lib', '.so')
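A small usage sketch of the proxy above, with a hypothetical stand-in for the real CustomTarget (which carries much more state in meson):

class FakeTarget:
    # Hypothetical minimal stand-in exposing only what the proxy touches.
    for_machine = "host"
    def get_outputs(self): return ["gen.c", "gen.h"]
    def get_subdir(self): return "sub"
    def get_id(self): return "custom_tgt"

idx = CustomTargetIndex(FakeTarget(), "gen.h")
print(repr(idx), idx.get_outputs(), idx.is_linkable_target())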
//
//  XLibrary.h
//  XLibrary
//
//  Created by <NAME> on 16.10.13.
//  Copyright (c) 2013 <NAME>. All rights reserved.
//

#ifdef __OBJC__
    #import "XMacros.h"
    #import "ExtendNSLogFunctionality.h"
    #import "XCategories.h"
    #import "XUI.h"
#endif
Fractal dimensions of self-avoiding walks and Ising high-temperature graphs in 3D conformal bootstrap

The fractal dimensions of polymer chains and high-temperature graphs in the Ising model, both in three dimensions, are determined using the conformal bootstrap applied to the continuation of the $O(N)$ models from $N=1$ (Ising model) to $N=0$ (polymer). The unitarity bound below $N=1$ of the scaling dimension for the $O(N)$-symmetric tensor develops a kink as a function of the dimension of the fundamental field, as in the case of the energy operator dimension in the Ising model. Although this kink structure becomes less pronounced as $N$ tends to zero, an emerging asymmetric minimum in the current central charge $C_J$ can be used to locate the CFT. It is pointed out that certain level degeneracies at the $O(N)$ CFT should induce these singular shapes of the unitarity bounds. As an application to quantum and classical spin systems, we also predict critical exponents associated with the $\mathcal{N}=1$ supersymmetry, which could be relevant for locating the corresponding fixed point in the phase diagram.

Introduction

Conformal field theory (CFT) is an indispensable framework for deepening our understanding of the universality classes of critical phenomena, going beyond the renormalization group (RG). Despite its incomparable success in 2D, clues for 3D CFT had been scarce until recently. The recent breakthrough came from the numerical determination of the 3D Ising exponents using the crossing-symmetry sum rule for the Z_2-symmetric intermediate states in the four-point function of the same scalar field (the fundamental field φ in the λφ^4 theory). The key empirical observation, which has become a cornerstone of this so-called conformal bootstrap approach, was that the scaling dimensions of the spin and energy operators in the Ising model correspond to a "kink" that emerges along the unitarity upper-bound curve for the dimension ∆_{φ^2} of the leading non-trivial Z_2-symmetric operator ε = :φ^2: as a function of the dimension ∆_φ of the fundamental field. This singular shape, the kink in the unitarity bound, is shared also by the sum rule for the O(N) symmetry, which can be used to map the critical O(N) models with N = 2, 3, ..., ∞ on the ∆_φ-∆_S plane, where ∆_φ and ∆_S respectively stand for the dimension of the fundamental field φ^a ("a" is an O(N) label) and the dimension of the energy operator ε = Σ_a :(φ^a)^2:, which is the leading non-trivial operator in the O(N)-singlet sector S. Towards an analytic understanding of the consequences of 3D conformal symmetry, it would be important to aim at a representation theory of the spectrum-generating algebra analogous to the degenerate representations of the Virasoro algebra. Another outstanding direction would be to generalize the ideas of the stochastic Loewner evolution (SLE) so as to describe critical geometry in 3D. In this respect, the importance of the continuous family of the critical O(N) models below N = 2 could be emphasized more, since in 2D they precisely represent the continuous family of models described by SLE_κ with 2 ≤ κ ≤ 4 via the trigonometric relation

κ = 4π / arccos(−N/2). (1)

Mathematicians proved that the Hausdorff dimension of SLE_κ curves is given by

d_F = 1 + κ/8 (2)

(Beffara's theorem).
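As a quick numerical illustration of (1) and (2) (our own sketch, not part of the original paper), one can tabulate κ and the 2D fractal dimension for the special values of N discussed below; the arccos branch is fixed so that the Coulomb gas coupling g = 4/κ lies in [1, 2]:

import numpy as np

def coulomb_gas_g(N):
    # g in [1, 2] solves N = -2 cos(pi g); this is eq. (1) on the branch
    # arccos(-N/2) in [pi, 2*pi].
    return (2 * np.pi - np.arccos(-N / 2.0)) / np.pi

def sle_kappa(N):
    return 4.0 / coulomb_gas_g(N)

def fractal_dim_2d(N):
    # Beffara's theorem, eq. (2): d_F = 1 + kappa/8
    return 1.0 + sle_kappa(N) / 8.0

for N, label in [(-2, "LERW"), (0, "SAW/polymer"), (1, "Ising interface"), (2, "XY")]:
    print(f"N={N:+d} ({label}): kappa={sle_kappa(N):.4f}, d_F={fractal_dim_2d(N):.4f}")

This reproduces the 2D values quoted later in the text: d_F = 5/4 (κ = 2), 4/3 (κ = 8/3), and 11/8 (κ = 3).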
In physics, the same fractal dimension can be computed from the dimension of the 2-leg operator (a special case of the watermelon operator for an arbitrary number of legs), represented by an O(N) symmetric tensor operator ϕ^{ab}, which behaves as a scalar under spatial O(2) rotations. In this paper, we study the 3D O(N) model below N = 2 with a focus on the fractal dimension of the loops in the high-temperature expansion. The fractal dimension may be given by d_F = D − ∆_T, where ∆_T is the scaling dimension of the most relevant operator ϕ^{ab} in the O(N) symmetric tensor sector T. Apart from the models with N ≥ 2, where ∆_T has been estimated, there are several important physical cases in 3D where an understanding based on the conformal symmetry, in particular the determination of the fractal dimensions, may be interesting.

(a) Polymer (N = 0). The N → 0 limit of the O(N) symmetry, where the degeneracy of ∆_T and ∆_S occurs, describes polymer chains in the excluded-volume limit (a self-avoiding walk), as shown in the celebrated work by de Gennes. A direct approach to this polymer limit makes various OPE coefficients singular and thus makes the current bootstrap method, which hinges on the positivity of the squared OPE coefficients (a main part of the unitarity), difficult. For instance, the square of the OPE coefficient λ_{φφT} for the stress-energy tensor may have a simple pole at N = 0, since the Ward identity tells us that it is inversely proportional to the central charge C_T, which is essentially proportional to the number of components N. Practically, as N tends to zero, this pole seems to result in an effective slowdown of the convergence to the optimal unitarity bound, meaning that the number of derivatives necessary to attain a given precision increases more rapidly. Accordingly, the detection of the kink at N = 0.1 within a limited computational cost becomes much more difficult than for finite N (e.g. N = 2). We circumvent this difficulty by assuming that a clear change of the slope ∂C_J/∂∆_φ of the current central charge C_J, defined through the conserved current J_µ^{ab}, corresponds to the dimension ∆_φ of the CFT. This analysis leads to the estimate for the fractal dimension d_F = 3 − ∆_T(0) ∼ 1.701.

(b) The Ising model (N = 1) and the N = 1 SUSY fixed point. The operator content of the O(N) model at N = 1 contains that of the Ising model as its singlet sub-sector, and its thermodynamical exponents can be determined from the well-studied dimensions ∆_φ of the spin operator and ∆_{φ^2} = ∆_S of the energy operator. Since the O(1) model contains only a one-component scalar, it is less noticeable that the dimension ∆_T of the symmetric "tensor" ϕ^{ab} may carry important information on the critical exponents. It is, however, natural to consider that ∆_T is one of the geometric exponents and determines the fractal dimension d_F = 3 − ∆_T(1) ∼ 1.734 of the high-temperature graphs, also measured by a Monte Carlo (MC) simulation of the 3D Ising model. As a natural extension of this analysis, we also give the fractal dimension which could be realized by the magnetic flux loops in an effective gauge theory, which appears, for instance, in the Kitaev model plus a local exchange interaction of the Ising type.
An interesting possibility is that the phase diagram of this model, "magnetic three-state of matter" (or a slight extension thereof), may contain the fixed point of the 3D N = 1 superconformal field theory (SCFT), whose 2D counterpart is a well-established SCFT, which may explain the Majorana-fermion nature of the 2D Ising model through the Nambu-Goldstone fermions associated with a spontaneous breaking of the supersymmetry. Unlike in 2D, where the N = 1 SCFT corresponds to the universality class of the Ising tricritical point, our view is that the SCFT and the Ising tricritical point are distinct fixed points in 3D. This 3D SCFT has also been proposed as a boundary effective theory for the topological superconductor.

(c) The model at N = −2 and its possible relation to the loop-erased random walk. The O(N) model at N = −2 may be considered as an endpoint of the continuous family of O(N) models, in the sense that the dimensions of the fundamental field and the energy operator reduce to the mean-field values (∆_φ, ∆_S) = (1/2, 1); it may be paired with the other endpoint N = ∞, where the mean-field value (∆_φ, ∆_T) = (1/2, 1) and the spherical-model value ∆_S = 2 are realized. Among these operators at N = −2 and N = ∞, the only nontrivial dimension is ∆_T(−2); as in 2D, it would be natural to conjecture that d_F = 3 − ∆_T(−2) ∼ 1.614 is the fractal dimension of the loop-erased random walk.

Apart from the MC simulations already mentioned, there is a vast body of related simulation work, notable examples being sophisticated tests of the conformal invariance in the 3D self-avoiding walk (N → 0), the worm algorithm that can be applied for continuous values of N ≥ 0, and clever algorithms, with accompanying analyses that remove the correction-to-scaling, attaining ever-improving precision on the self-avoiding and loop-erased random walks. Our emphasis is not on the precision of the critical exponents, though some of them, including perhaps the anomalous dimensions slightly above N = 0 and the fractal dimension for the Ising model, may already be more accurate than existing MC simulations. Instead, it is our purpose here to consider how conformal invariance may be used to determine the fractal dimension, without any use of machine-generated random numbers, and to help open a way to understand more theoretical aspects (such as the kink formation, the representation theory, the 3D SLE, and so on) of the 3D O(N) CFT in general.

This paper is organized as follows. We consider the O(N) model in the global range −2 ≤ N ≤ ∞ in Section 2.1, and show that the fractal dimension d_F can be regarded as a geometric RG eigenvalue given by the dimension ∆_T of the traceless symmetric tensor ϕ^{ab}. Section 2.2 is a quantitative discussion of how the gap between ∆_T and ∆_S closes in the polymer limit N → 0, using a simple 6-loop RG analysis and leaving the details to the Appendix. In Section 3.1, the intermediate states in the four-point function are classified into three sectors (S: singlet, T: traceless symmetric tensor, A: antisymmetric tensor) using the operator product expansion (OPE) φ^a × φ^b of the fundamental fields. The key equation in the O(N) conformal bootstrap, namely the crossing-symmetry sum rule, is reviewed with a brief discussion of the solution manifold with regard to the unitarity bound. In Section 3.2, the definitions and useful 1/N-expansions of the current central charge C_J, as well as those of the standard central charge C_T, are given.
The implication of the unitarity and the corresponding implementation of the bootstrap, though standard, are given in Section 3.3. We present our main results in Section 4. We give a qualitative description of how an effective smoothing of the kink in ∆_T occurs in the polymer limit N → 0 (we identify it as a severe unitarity wall, across which the continuation of the unitarity-saturating solution is interrupted) and discuss how certain level degeneracies at the O(N) CFT would be related to various singular shapes (the kinks in ∆_T, C_T, and in particular C_J) of the unitarity bounds in Section 4.1 and Section 4.2, respectively. We determine the fractal dimensions by the conformal bootstrap for the polymers (N → 0) in Section 4.3 and for the 3D Ising high-temperature graphs (N = 1) in Section 4.5. We compute the fractal dimension for N = −2 in RG and conjecture that it corresponds to that of the loop-erased random walk in Section 4.4. In Section 4.6, we estimate the set of the scaling dimensions (∆_φ, ∆_{φ^2}) for the N = 1 SCFT and discuss the relation to the critical exponent ν as well as the fractal dimension d_F|_SUSY of the corresponding excitation. We conclude with selected future directions in Section 5.

We start with a discussion of the two relevant operators ε and ϕ^{ab} in the O(N) model, which respectively belong to the O(N)-singlet sector (S) and the O(N)-symmetric tensor sector (T). These operators are formed as bilinears of the fundamental field φ^a, with scaling dimension ∆_φ, which transforms in the fundamental representation of the O(N) group:

S: ε = Σ_a :(φ^a)^2:, (3)
T: ϕ^{ab} = :φ^a φ^b: − (δ^{ab}/N) Σ_c :(φ^c)^2:. (4)

The energy operator ε is present already in the single-component model and plays an essential role in the initial formulation of the conformal bootstrap for the Ising model, which has the Z_2 = O(1) symmetry. The most relevant operator ϕ^{ab} in the T sector is responsible for the crossover phenomena with respect to the symmetry breaking O(N) → O(M) × O(N − M) with an arbitrary M. As in general statistical models, the two-point function ⟨φ^a(x) φ^b(y)⟩ can be expressed as a sum over self-interacting random walks between x and y. The Hausdorff dimension of this random walk is given by

d_F = φ_2/ν, (5)

where ν and φ_2 are respectively the correlation length exponent and the crossover exponent of the O(N) model. Since these two independent exponents are related to the scaling dimensions ∆_S of ε and ∆_T of ϕ^{ab} by 1/ν = D − ∆_S and φ_2/ν = D − ∆_T, one has a simpler expression for the Hausdorff dimension,

d_F = D − ∆_T, (6)

which may be viewed as a geometric RG eigenvalue y_G, in light of the fact that the magnetic (thermal) RG eigenvalues are determined by the analogous relations y_H = D − ∆_φ (y_T = D − ∆_S). In the range −2 ≤ N ≤ ∞, the dimension ∆_T decreases monotonically from a certain value ∆_T(−2) (FIG. 4; see below for its meaning) to the trivial value ∆_T(∞) = D − 2, and at N = 0 it crosses the dimension ∆_S of the energy operator ε (in the O(N)-singlet sector S, as mentioned), which in turn increases monotonically from ∆_S(−2) = D − 2 to ∆_S(∞) = 2 in the same range of N (one may also notice that the asymptotic slopes computed in the 1/N-expansions are symmetric in 3D). Actually, this somewhat dual behavior of ∆_T and ∆_S in the global range of N is almost shared by the 2D O(N) model, though the range N ∈ [−2, ∞] should be replaced by N ∈ [−2, 2], where the model has a critical point and exact results are available from the Coulomb gas, SLE, and the CFT torus partition function (as described just below) for continuous values of N; it is also likely to be a generic feature of the O(N) CFT in 2 ≤ D < 4 from the RG point of view.
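A trivial numerical restatement of relation (6) with the 3D estimates quoted in this paper (the ∆_T values are read off from the d_F estimates in the text):

# d_F = D - Delta_T (eq. 6) with the Delta_T values implied by the
# estimates quoted in this paper (D = 3).
D = 3
delta_T = {
    "polymer, N=0": 1.2988,             # Section 4.3
    "Ising graphs, N=1": 1.266,         # d_F ~ 1.734 quoted in Section 1
    "LERW (conjectured), N=-2": 1.386,  # d_F ~ 1.614 quoted in Section 1
}
for label, dT in delta_T.items():
    print(f"{label}: d_F = {D - dT:.4f}")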
The operator content of the 2D O(N) model with −2 ≤ N ≤ 2 can be studied exactly via the torus partition function. Using the Coulomb gas coupling g (1 ≤ g ≤ 2) determined by the relation N = −2 cos(πg), the central charge (a 2D counterpart of C_T in Section 3.2) and the scaling dimensions are given by

c = 1 − 6(1 − g)^2/g,  ∆_S = 4/g − 2,  ∆_T = g/2 − (1 − g)^2/(2g). (7)

Since the SLE parameter κ is related to g by κ = 4/g, the last relation in (7), combined with (6), is equivalent to the formula (2) of Beffara's theorem. In the 2D torus partition function, the multiplicity N(N + 1)/2 − 1 = (N − 1)(N + 2)/2 of the traceless symmetric tensor ϕ^{ab} tends to zero as N → 1, in accordance with the observation that the expression in (4) apparently vanishes at N = 1. It is, however, instructive to note that the dimension ∆_T = 5/8 in the N = 1 model (g = 4/3) is of physical relevance. Namely, it corresponds via (6) to the fractal dimension d_F = 11/8 of the Ising interfaces, which are SLE_{κ=3} curves. The relevance of this tensor ϕ^{ab} for generic D ≥ 2 in the O(N) sum rule at N = 1 will be discussed at the end of Section 3.1, and will be used to determine its scaling dimension in D = 3 in Section 4.5. Similarly, for the 2D N = −2 model, the dimension ∆_T = 3/4 (g = 2) leads to the fractal dimension d_F = 5/4 of the loop-erased random walks (SLE_{κ=2} curves). We will give in Section 4.4 a simple estimate for d_F in the N = −2 model using (6) by a pseudo-ǫ expansion in the 6-loop RG, which agrees with the numerical simulation results obtained for the 3D loop-erased random walk. We use the conformal bootstrap to determine ∆_T in the O(1) model and the fractal dimension d_F of the high-temperature graphs in the 3D Ising model in Section 4.5.

Fig. 1 (caption). The scaling dimensions of the singlet scalar ε (∆_S: solid red) and the traceless symmetric tensor ϕ^{ab} (∆_T: dashed blue) as functions of ∆_φ in 2D (left: eq. (7)) and in 3D (right). The right branch of the unitarity bound (dashed gray) for the Z_2 case is shown for 2D as a guide to the eye. The 3D curve is obtained as the Padé approximant for N ∈ [−2, 7], continued by the curve from the pseudo-ǫ series (Appendix) for N > 7, and should be regarded as schematic, as the anomalous dimension tends to be smaller than the genuine value. The N → ∞ asymptotics (dotted) are shown for both scaling dimensions.

2.2 The degeneracy of the relevant operators from the S and T sectors in the limit N → 0

In addition to the above two important cases, we are especially interested in the N → 0 limit of the 3D O(N) model, which describes dilute solutions of polymers, where the random walk becomes self-avoiding. Besides such physical relevance, the limit N → 0 is theoretically special for the following two reasons. First, some of the squared OPE coefficients may become negative for N < 0 due to simple poles at N = 0, which makes it difficult to use approaches based on the unitarity, on which most of the present conformal bootstrap schemes depend. Second, as mentioned above, N = 0 is precisely the point where the degeneracy of the two scaling dimensions ∆_S and ∆_T takes place. As a quick example using (7) in 2D, ∆_S = ∆_T = 2/3 (d_F = 4/3) follows from g = 3/2 for N = 0, and the gap opens with the following asymmetric N-derivatives:

∂(∆_S, ∆_T)/∂N |_{N=0} = (8/(9π), −1/(9π)) ≈ (0.283, −0.035). (8)

It is notable that the magnitude of this derivative of ∆_T in 2D is almost unchanged in 3D, as we will see below. The leading term in the ǫ-expansion may be compared with (8): for D = 4 − ǫ, each contribution to the derivative is

∂(∆_S, ∆_T)/∂N = (3ǫ/32, −ǫ/32). (9)
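A numerical check (our own sketch) of the formulas (7) at the couplings used in the text, together with a finite-difference evaluation of the N-derivative of ∆_T at N = 0 entering (8):

import numpy as np

def g_of_N(N):
    return (2 * np.pi - np.arccos(-N / 2.0)) / np.pi   # branch with g in [1, 2]

def dims(N):
    g = g_of_N(N)
    c  = 1 - 6 * (1 - g)**2 / g           # central charge
    dS = 4 / g - 2                        # energy operator dimension
    dT = g / 2 - (1 - g)**2 / (2 * g)     # 2-leg (symmetric tensor) dimension
    return c, dS, dT

for N in (-2, 0, 1):
    c, dS, dT = dims(N)
    print(f"N={N:+d}: c={c:+.3f}  Delta_S={dS:.4f}  Delta_T={dT:.4f}  d_F={2 - dT:.4f}")

h = 1e-6
dTdN = (dims(h)[2] - dims(-h)[2]) / (2 * h)
print(f"d Delta_T / dN at N=0 (2D): {dTdN:.4f}")   # ~ -1/(9 pi) ~ -0.0354

This reproduces c = −2, 0, 1/2 and (∆_S, ∆_T) = (0, 3/4), (2/3, 2/3), (1, 5/8) at N = −2, 0, 1, respectively.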
In the Appendix, we compute a pseudo-ǫ series using the input of the 6-loop D = 3 RG calculations and present reasonable estimates from a simple Padé analysis, together with the best-known results of the ǫ-expansion up to ǫ^5. As a simple estimate, we take the average of the six- and five-loop results, with the maximum deviation as an error. This gives

(∂∆_T/∂N)|_{N=0} = −0.036(7). (10)

The same analysis for the derivatives at N = 1 (the Ising point) yields

(∂∆_T/∂N)|_{N=1} = −0.032(5). (11)

The derivatives of ∆_T, which are the ones useful in this paper, change only slightly (∼10%) in the interval N ∈ [0, 1]. Nevertheless, in order to get better estimates, one may find it more useful to keep both (10) and (11) than to choose one of the two. More concretely, the variation ∆_T(N_2) − ∆_T(N_1) with 0 ≤ N_1 < N_2 ≤ 1 can be better approximated by (N_2 − N_1) times the derivative at the midpoint (N_2 + N_1)/2, obtained as a linear interpolation between (10) and (11). For instance, the roughest estimate for the variation between N = 1 and N = 0 may be obtained as ∆_T(1) − ∆_T(0) = (+1) × (−0.032(5) − 0.036(7))/2 = −0.034(4), where the errors in (10) and (11) are assumed to be independent. Although we do not use this last example, which would maximize the uncertainty, one may check that this estimate reasonably connects the results obtained independently by the conformal bootstrap in Section 4.3 (N = 0) and in Section 4.5 (N = 1).

3 Operator product expansion of the fundamental fields in the O(N) CFT

3.1 Crossing symmetry sum rule

The crossing symmetry sum rule used in this paper is the most basic one (in the sense that it does not involve mixed correlators) in the conformal bootstrap for a CFT with a global O(N) symmetry, as described briefly below. The fundamental field in this theory is a scalar operator φ^a, transforming as an O(N) vector, with dimension ∆_φ. Crucially, the OPE of φ^a with itself may be decomposed into three sectors,

φ^a × φ^b ∼ Σ_{O∈S} δ^{ab} λ_O O + Σ_{O∈T} λ_O O^{(ab)} + Σ_{O∈A} λ_O O^{[ab]}, (12)

where S, T, and A denote the O(N)-singlet sector of even spin, the O(N)-symmetric tensor sector of even spin, and the O(N)-antisymmetric tensor sector of odd spin, respectively. Note that there is an infinite tower of scaling dimensions {∆_ℓ, ...} for the states with fixed ℓ in each sector. The dependence on x is omitted on the right-hand side. The sets of OPE coefficients λ^X_{∆,ℓ} = λ_{φφ O_{X,∆,ℓ}} (with X = S, T, A; the tensor labels are omitted) encode important dynamical information on the O(N) CFT and satisfy highly nontrivial constraints due to the associativity of the operator algebra. These constraints can be expressed as a sum rule that follows from the equivalence (the crossing symmetry) of two different expansions of a single four-point function ⟨φ^a(x_1)φ^b(x_2)φ^c(x_3)φ^d(x_4)⟩ in the two distinct degeneration limits (x_1 → x_2 and x_1 → x_4), where the contribution from the identity operator, which belongs to the singlet (S) sector, becomes dominant. Let us write the contribution from each sector of the OPE (12) in the channel x_1 → x_2 as in (13): schematically, a sum of (λ^X_{∆,ℓ})^2 times conformal blocks over the spectrum of each sector X. Here, in the first sum, the set S′ includes all the operators of the S-sector except the identity operator (∆, ℓ) = (0, 0), whose contribution to the four-point function is simply (x^2_{12} x^2_{34})^{−∆_φ} δ^{ab} δ^{cd}, which is usually the dominant contribution in the limit x_1 → x_2.
For concreteness, we note that the conformal partial wave (global conformal block) G_{∆,ℓ}(u, v) in D dimensions is a function of the two cross-ratios

u = (x^2_{12} x^2_{34})/(x^2_{13} x^2_{24}),  v = (x^2_{14} x^2_{23})/(x^2_{13} x^2_{24}), (14)

and has the following form in terms of the radial coordinates r e^{iθ} = z/(1 + √(1 − z))^2, where (u, v) = (z z̄, (1 − z)(1 − z̄)):

G_{∆,ℓ}(u, v) = Σ_{n,j} B_{n,j}(∆, ℓ) r^{∆+n} C^ν_j(cos θ), (15)

where the coefficients B_{n,j}(∆, ℓ) can be iteratively fixed by the Casimir differential equation for the D-dimensional conformal group, and C^ν_j with ν = (D − 2)/2 is the Gegenbauer polynomial. The crossing symmetry can be nicely seen using the OPE (12) and the notation (13), as in (16) and (17), where the three possible tensor structures after the contractions are represented as )( = δ^{ab}δ^{cd}, /\ = δ^{ac}δ^{bd}, ≍ = δ^{ad}δ^{bc}, and the tilde notation is used to represent the quantities with u and v interchanged. The sum rule then follows by comparing the terms for each tensor structure ≍, )(, and /\, which yields the three equations (18)-(20). For each N, the solution manifold of the crossing symmetry (18)-(20), consisting of the points represented by the effective CFT data (the possible set of scaling dimensions and spins (∆, ℓ) in the X-sector with the associated OPE coefficients λ^X_{∆,ℓ}), may be infinite-dimensional. An important one-parameter-family solution, conveniently parametrized by ∆_φ, can be singled out along the boundary of unitarity (dictated by the lower bounds (30) and the positivity (31)), whose projection onto the ∆_φ-∆_T plane is shown for each N in FIG. 2. As is well known, the search for this unitarity-saturating solution can be formulated as a linear optimization problem (see Section 3.3 for more details) and can be solved with the aid of knowledge of the global conformal blocks G_{∆,ℓ}(u, v), such as (15).

In order to clear up a common source of confusion, it is worth making a careful distinction between the spectrum of the O(1) model (N = 1) and that of the Ising model in our formulation. The Ising (Z_2) sum rule, used for instance in the original Ising bootstrap studies, follows from the crossing symmetry of the four-point function of a single scalar, ⟨φ(x_1)φ(x_2)φ(x_3)φ(x_4)⟩. Since only singlet fields appear in the Ising OPE φ × φ, the whole contribution in the channel x_1 → x_2, except that from the identity operator, may be denoted as S′. Then the crossing symmetry leads to 1 + S′ = (u/v)^{∆_φ} (1 + S̃′), which further simplifies to the sum rule (21). Now the logic is as follows. A proper subset, (18) and (20), of the O(N) sum rule for N = 1 implies the Ising sum rule (21), which, along with the requirements of saturating unitarity ((30) and (31)), is sufficient for a given ∆_φ to single out a unique solution for S′. Thus, in particular, the unitarity-saturating solution of the Ising sum rule may be embedded into the solution of the O(1) sum rule as its S-sector. In this case, one may still generalize this Ising spectrum to a solution of the O(1) sum rule which also admits non-empty T ⊕ A sectors, in addition to the S-sector, determined in turn by solving −3T_+ − A_+ = −1_+ − S′_+ and T_− − A_− = 0, where the Ising contribution S′_+ may be regarded as a seed generating these sectors. In particular, the T-sector in this solution contains the rank-2 symmetric tensor operator ϕ^{ab} of (4) with a non-vanishing squared OPE coefficient also for N = 1, which actually determines the fractal dimension of the Ising high-temperature graphs, as shown in Section 4.5.

3.2 The central charges C_T and C_J

If we omit RG-irrelevant operators and keep only the most important ones, the OPE (12) becomes, schematically,

φ^a × φ^b ⊃ δ^{ab} (1 + ε + ... + T_{µν}) + ϕ^{ab} + ... + J_µ^{ab} + ..., (22)

where T_{µν} and J_µ^{ab} are the stress-energy tensor and the conserved vector current, respectively.
As the notation in (22) indicates, T_{µν} is a spin-2 O(N) singlet with dimension D, and J_µ^{ab} is a spin-1 antisymmetric vector (J_µ^{ab} = −J_µ^{ba}) with dimension D − 1, transforming in the O(N)-adjoint representation. The conformally invariant two-point functions of T_{µν}(x) and J_µ^{ab}(x) are fixed up to the overall constants C_T and C_J (23), with the normalization given by the surface of the unit (D − 1)-sphere, S_D = 2π^{D/2}/Γ(D/2), and with the standard tensor structure I_{µν}(x) = δ_{µν} − 2 x_µ x_ν/x^2 (24). The Ward identities for the stress-energy tensor T and the conserved current J lead to relations of the form

(λ^S_{D,2})^2 ∝ ∆_φ^2/C_T,  (λ^A_{D−1,1})^2 ∝ 1/C_J, (25)

where C_{T,free} = ND/(D − 1) and C_{J,free} = 2/(D − 2) are the free-field values. For the O(N) CFT, some useful results are known at the IR fixed point. These include the ǫ = 4 − D expansions (26) and the 1/N-expansion in D = 3 [5], of which the latter reads, for the current central charge,

C_J/C_{J,free} = 1 − (64/(9π^2)) (1/N) + O(1/N^2). (27)

Regarding (26) and (27), it is an open direction to study under which conditions C_T and C_J in 3D decrease monotonically along the RG flow, as the central charge does in 2D unitary systems. Together with the leading 1/N correction, the large-N asymptotics of the central charges C_T and C_J may be written as functions of ∆_φ, as in (28) and (29); the slope ∂C_T/∂∆_φ has been considered before. On the other hand, we observe that the other slope, ∂C_J/∂∆_φ, shows interesting behavior (FIG. 3) in the case N ∼ 0 ≪ 1 of our main interest [6]. The formation of an effective minimum near N ∼ 0 is discussed in Section 4.2 and is used to estimate the fractal dimension d_F of polymer chains in Section 4.3.

[5] For the 1/N coefficient, there is a mismatch by a factor of 2 between (4.25) and (6.8) of the original reference. Our results on the slope (29) in FIG. 3 support the value −64/9π^2 in (27), reproduced from (4.25). Also, by using a Padé analysis on (28), the universal curves ∆_φ-∆_S and ∆_φ-∆_T in FIG. 1 can be drawn, which will be discussed elsewhere.

[6] We thank Tomoki Ohtsuki for pointing out that similar kinks in C_J can be observed via the direct C_J minimization in D = 3 for N ≥ 2, and Yu Nakayama for further discussions. For C_T in 3D, it is known that the direct C_T minimization reproduces C_T along the unitarity bound (via ∆_S-maximization) for N = 1, but not for generic N > 2. It is possible to check that similar characteristics are shared by C_J in 3D. It would also be interesting to study the implication of these phenomena for the solution space of the crossing symmetry.

3.3 Conformal bootstrap, the unitarity bound, and the search space

A particularly important one-parameter-family solution of the crossing symmetry (18)-(20) lies along the boundary of unitarity and may connect the whole spectrum of the free theory and the O(N) CFT. This solution can be singled out by a linear optimization, as mentioned at the end of Section 3.1. The unitarity consists of the following two conditions. First, the scaling dimensions in a D-dimensional theory must satisfy the lower bounds

∆ ≥ (D − 2)/2 (ℓ = 0),  ∆ ≥ ℓ + D − 2 (ℓ > 0), (30)

which correspond to the requirement that the anomalous dimensions be positive; the inequalities are saturated by conserved currents such as T_{µν} and J_µ^{ab} (ℓ > 0) and by free scalars such as the fundamental field φ^a in the free theory (ℓ = 0). Second, the squared OPE coefficients must be positive:

(λ^X_{∆,ℓ})^2 ≥ 0. (31)

These two requirements of unitarity enable one to solve the crossing symmetry (18)-(20) along the unitarity bound via the simplex algorithm or semi-definite programming. We use the standard simplex algorithm (Sec. 6 of the reference) with our particular implementation based on existing code.
As usual, the simplex algorithm is used to try to determine whether there exists a solution of the crossing symmetry (18)-(20) that satisfies the lower bounds (30) and the positivity (31) for a given ∆_φ, in the region ∆_X > ∆_{X0} for the dimension ∆_{X0} of some low-lying operator; here we use it for the symmetric tensor ϕ^{ab} (∆_X = ∆_T), mainly for the reason given in Section 4.1. If solutions do not exist (do exist), the next search region ∆_T > ∆_{T1} can be chosen narrower, such that ∆_{T1} < ∆_{T0} (∆_{T1} > ∆_{T0}). One may thus take a bisection procedure starting from some finite initial interval of ∆_T and narrow the search region with each trial. If the initial interval is taken wide enough, the iteration eventually reaches the upper bound ∆_{T∞} of ∆_T, for which the solution with ∆_T ≤ ∆_{T∞} is expected to be unique (in particular, ∆_T = ∆_{T∞}). The value of ∆_{T∞} is measured numerically by setting the bisection accuracy goal δ(∆_T), which we typically take to be less than 10^−4. In practice, the crossing symmetry constraints are extracted by a truncated Taylor expansion around the symmetric point u = v = 1/4 of the sum rule (18)-(20), also with a truncated number ℓ_max of spin sectors. The simplex algorithm (at the j-th step of the bisection) searches the spectrum region bounded from below by (30) (with ∆_T ≥ ∆_{T,j−1}) and from above by an appropriate upper bound ∆_max, which should be taken large enough. The derivatives of (18)-(20) are computed with respect to the coordinates (a, b) defined by (z, z̄) = ((a + √b)/2, (a − √b)/2), which are related to the cross-ratios by (u, v) = (z z̄, (1 − z)(1 − z̄)). Following the standard convention, they reduce to a finite set of derivatives ∂_a^m ∂_b^n G_{∆,ℓ}(a, b) of the global conformal block in (15), selected by some (m_max, n_max), which consists of K = (m_max + n_max + 1)(n_max + 1) derivatives (32). In general, the unitarity bound becomes stricter for a larger number K of derivatives, although the exact form of the convergence to the optimal bound is not well understood so far. Also, for a given K_0, the bound usually depends only weakly on the particular choice of (m_max, n_max) with K ∼ K_0. The number of spins ℓ_max should be taken large enough with respect to the choice of (m_max, n_max) so that the resulting bound does not depend on ℓ_max. Our default choice for measuring the critical exponents is (m_max, n_max) = (8, 8) with K = 153, ∆_max = 70, and ℓ_max ∼ 50.

The prototypical singular shape in the conformal bootstrap is the kink in the unitarity bound of the scaling dimension ∆_{φ^2} = ∆_{φ^2}(∆_φ) in the case of the Z_2 symmetry studied for the Ising model. Actually, the appearance of singular shapes in the conformal bootstrap seems more ubiquitous, as we will also see in this work. Below we use the sum rule (18)-(20) to obtain the unitarity bound for ∆_T, along which an analogous kink appears. Remarkably, however, the kink becomes less and less pronounced as N tends to zero (the polymer limit), thus practically making the determination of ∆_φ more difficult. Even in such circumstances, other singular shapes may remain in other universal quantities along the unitarity bound. This is indeed the case: we observe a very clear change of the slope ∂C_J/∂∆_φ of the current central charge C_J, as discussed in Section 4.2, and use this in the limit N → 0 in Section 4.3.
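The slope-change and kink detection just described amounts to locating a discontinuity in a numerically sampled slope; a toy sketch with placeholder data (the functional form below is purely illustrative, not a bootstrap output):

import numpy as np

def locate_kink(x, y):
    # Given samples (x = Delta_phi, y = bound or C_J) along the unitarity
    # bound, return the x where the finite-difference slope changes most.
    slopes = np.diff(y) / np.diff(x)
    i = np.argmax(np.abs(np.diff(slopes))) + 1
    return x[i]

x = np.linspace(0.510, 0.520, 41)
y = np.where(x < 0.5145, 1.0 - 0.2 * (x - 0.5145), 1.0 + 2.0 * (x - 0.5145))
print(locate_kink(x, y))   # ~0.5145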
Now two important remarks are in order.

Convergence to the optimal bound. The first remark concerns the convergence of the bound to its optimal shape with respect to the truncation (32) of the derivative orders (m, n) of the conformal block G_{∆,ℓ}(a, b), which is currently unavoidable in numerics. Although the upper bound obtained at a finite truncation is rigorous, a larger number K of derivatives leads to a more restrictive bound (i.e. a lower upper bound) and actually makes the kink sharper; near the kink, the convergence to the optimal bound tends to be faster than elsewhere. In addition, the convergence becomes much slower in the polymer limit N → 0. Thus a brute-force approach to the limit is to take K large enough with respect to a given small N. In practice, the unitarity bound for ∆_T looks smooth for N ≲ 0.2, which makes the detection of the kink (a discontinuity in the slope) very hard within a reasonably large number of derivatives (K = 153). Here it is also worth recording our present guess on the optimal shape (K = ∞) in the limit N → 0. On the right of the kink, the convergence of the slope ∂C_T/∂∆_φ to an almost constant value, as in the curves for N ≳ 0.5, seems plausible. On the left, the finite-K curves are convex upward, as in FIG. 2. It seems, however, that there is no strong indication excluding the possibility that the optimal shape on the left is also almost straight in 3D, while there is a 2D example where the bound is likely to be convex downward. In general, it would be interesting to consider whether there is a principle that, under some reasonable assumptions, forbids the optimal unitarity bound for these scaling dimensions from being convex upward. We will give a qualitative argument in Section 4.2 on the enhanced slope ∂∆_T/∂∆_φ on the left of the kink, in view of the level dynamics. The slowdown of the convergence is probably related to the degeneracy of the two levels ∆_T = ∆_S and the severe unitarity wall at N = 0, where some squared OPE coefficients [8], including (λ^S_{D,2})^2 for the stress-energy tensor, have a pole; this can be seen from the relation (25) following from the Ward identity, together with our observation that the ratio C_T/C_{T,free} remains finite, ∼ 0.955. A detailed analysis of this special limit N → 0, and quantitative knowledge of the order of convergence in the conformal bootstrap in general, would be useful and deserve further investigation.

Gap assumptions. The second remark is especially relevant if one tries to find the upper bound for ∆_S by the ∆_S-bisection in the case N < 1. Previously, it was found useful to complement the unitarity lower bounds (30) by an extra gap assumption that scalar fields (ℓ = 0) on the right-hand side of the OPE (12) be bounded below by the RG canonical dimension:

∆_{ℓ=0} ≥ D − 2. (33)

This had previously been used just to improve numerical stability; in particular, the condition ∆_T ≥ 1 in D = 3 was not supposed to change the resulting solution of (18)-(20) obtained through the ∆_S-bisection. Indeed, it can be checked that this gap assumption makes no difference in the resulting spectrum for N ≥ 2. In the bootstrap for N < 1, however, we found that the ∆_S-bisection using the pure unitarity conditions (30) and (31) may yield a solution that violates this additional gap assumption (33), namely a solution with 1/2 < ∆_T < 1 < ∆_S, which satisfies (30) but cannot correspond to the O(N) CFT from the RG point of view. This phenomenon is reminiscent of a level repulsion between ∆_T and ∆_S in the solution space that becomes stronger as N → 0. Accordingly, if one insists on keeping the extra condition ∆_T ≥ 1, the ∆_S-bisection yields another unphysical solution [9], which contains ∆_T = 1 < ∆_S and thus saturates the extra gap assumption set by hand.
In contrast to the ∆_S-bisection, our bisection for ∆_T, which should be the lowest-dimension scalar contained in the product φ^a × φ^b in the O(N) CFT with N > 0, is free from such problems. In particular, the extra assumption ∆_S ≥ 1 does not change the resulting solution. As we have just seen, besides its direct role in determining the fractal dimension (6), the property that ∆_T is the lowest dimension (like a ground state in quantum mechanics) on the right-hand side of the OPE (12) lends the study of ∆_T a special importance.

[8] Another important example of a singular OPE coefficient in the limit N → 0 may be λ^ε_{εε} for three energy operators, which would play a role in the mixed-correlator bootstrap. The physical origin of the divergence of λ^ε_{εε} can be traced back to the strong repulsion between the loop segments ε in the O(N → 0) loop model.

[9] Around N = 1 the saturation of (33) may not be so serious, as the kink in ∆_S appears around the expected Ising position; this is consistent with our observation that the OPE coefficient λ^T_{1,0} of the (unphysical) level ∆_T = 1 is negligible compared with those of other operators. However, below N = 1 this makes much difference: for instance, a kink in ∆_S emerges even at N = 0.1, which was smoothed out in the solution with the pure unitarity conditions. Again, it is obvious that this solution with ∆_T = 1 cannot represent a physical spectrum.

4.2 The slope change in C_J and the level degeneracy in the A-sector

The singular shape is not restricted to the unitarity bounds for scaling dimensions; it may also appear in the unitarity bounds for the OPE coefficients of the conserved currents such as T_{µν} and J_µ^{ab}, which, via the Ward identities, are reflected in sudden changes of the slopes of the central charge C_T and the current central charge C_J defined in (23). We show C_T/(N C_free) and C_J for 0.1 ≤ N ≤ 2 in FIG. 3, most of which have just one point where the slope change occurs. As the interaction in the O(N) CFT becomes infinitesimal in the N → ∞ limit, the changes in both (∆_φ, C_T) and (∆_φ, C_J) from the free-field values tend to zero. This corresponds to the asymptotics of these central charges given by (29); for finite N, not too small, these slopes are actually shared as the initial slopes for small ∆_φ − 1/2, where the effective interaction may be weak. In the case of our interest (N → 0), the change of the slope ∂C_J/∂∆_φ is enhanced, and an effective minimum is formed for N < 0.5. This effective minimum is used to estimate the dimension ∆_φ of the fundamental field in the limit N → 0 in Section 4.3. Before this application, we present a preliminary analysis of the mechanism behind the formation of these kinks: the level dynamics along the unitarity-saturating solution that connects the free field theory and the O(N) CFT, where the levels are essentially the eigenvalues of the infinite-dimensional matrix obtained by linearizing the RG flow around the fixed point in theory space. As ∆_φ = 1/2 corresponds to the free field theory, the effective anomalous dimension η = 2(∆_φ − 1/2) may be considered as an effective interaction parameter. If one traces the spectrum of scaling dimensions along the unitarity bound, one meets a reorganization of the spectrum when ∆_φ crosses the value ∆_φ* at the kink, which is expected to be the ∆_φ of the O(N) CFT, as a similar phenomenon has been observed in the Ising model.
Along the unitarity-saturating solution of the crossing symmetry of the O(N) sum rule (18)-(20), this reorganization may be qualitatively different for N > 0.5 and N ≪ 0.5, as described below. Suppose we superpose the curves for the sub-leading scaling dimensions on FIG. 2, and a certain dimension bifurcates to the right (left) as we move to larger (smaller) ∆_φ; let us then call this an R-bifurcation (L-bifurcation). Before describing more details of the O(N) spectrum, let us mention that the usage of the word "bifurcation" here does not necessarily mean a bifurcation of the common square-root type. We take the following important case to illustrate this. For the Ising spectrum obtained via the Z_2 sum rule (21), an extensive description is given in the literature, where the recombination is shown to occur à la Hilbert's infinite hotel. As both sides of the IR fixed point (∆_φ = ∆_φ*) have infinitely many operators, the effective correspondence between the two spectra can be non-trivial, depending on the version of the infinite hotel; the versions "∞ = ∞ + 1" and "∞ = 2 · ∞" are described below. In particular, if one temporarily denotes the scalar operators (ℓ = 0) on the right (∆_φ > ∆_φ*) by E, E′, E″, ... and those on the left (∆_φ < ∆_φ*) by E, χ, E′, E″, ..., in ascending order of the scaling dimensions, the recombination of the spectrum is observed as in (34), where the symbol "d→" stands for a connection of the nearest levels by a sudden descent from the right of ∆_φ = ∆_φ* to the left, and χ is a decoupling (null) operator, which appears numerically only on the left side (∆_φ < ∆_φ*), with a small (ideally vanishing) squared OPE coefficient. Although the nearest levels in this case (e.g. E″ and E′) never touch each other within a finite numerical bootstrap, it is also suggested that the recombination transition becomes sharper and that the nearest levels would eventually be connected in the limit of an infinite number of derivatives (K → ∞). In this regard, the connection between the two sides for a large but finite K would rather look as in (35), where the levels on the right bifurcate (or possibly multifurcate) to the left, as represented by the symbol "b→". Here, in the connection E″ b→ E″ + E′, for instance, the lower branch E″ → E′ causes a sudden descent by a finite gap in ∆, while the upper branch does not meet such a large jump if K is large enough. It is also worth mentioning that if the decoupling operator χ on the left indeed disappears in the ideal limit K → ∞, the connection is then more like (36), where all the connections of the levels may be continuous (with no gap left) and may have smoother (or even no) kinks compared with that of the lowest one (E). In the scalar (ℓ = 0, S-) sector along the unitarity-saturating solutions of the O(N) sum rule (18)-(20) around N = 1, we observe a similar recombination as in (34), as expected. Including the more common-looking bifurcations, let us below simply call the processes in (35), e.g. E″ b→ E″ + E′, an L-bifurcation of E″. For N > 0.5, we observe that an R-bifurcation of the dimension ∆^(2)_{A,ℓ=1} of the sub-leading spin-1 antisymmetric tensor in the A-sector [10] (just above the conserved current J_µ^{ab}) and an L-bifurcation of the sub-leading singlet dimension ∆^(2)_S [11] occur at almost the same ∆_φ (here the superscript (i) denotes the i-th lowest level with the given spin, for the sectors S, T, and A).
After the L-bifurcation, the lower branch of ∆^(2)_S flows into the free-theory value (∆_φ, ∆) = (1/2, 2) (being RG-unstable, the free theory at ∆_φ = 1/2 tends to have rapidly varying subleading dimensions; it is thus numerically subtle to see lim_{∆_φ→1/2} ∆^(2)_S = 2) and seems to contribute to the larger slopes ∂∆_S/∂∆_φ and ∂∆_T/∂∆_φ in ∆_φ < ∆_φ* via the level repulsion. For N ≪ 0.5, on the other hand, the R-bifurcation no longer coincides with the L-bifurcation; the latter may be observed at much larger ∆_φ. The lower branch after the R-bifurcation of the subleading spin-1 dimension flows into the level ∆ = 2 of the conserved current J_µ^{ab} just below it in the same A-sector. This isolated R-bifurcation and the confluent behavior in the A-sector should lead, respectively, to the enhanced change of the slope ∂C_J/∂∆_φ (i.e. the effective asymmetric minimum) at ∆_φ = ∆_φ* of the O(N) CFT and to the subsequent divergent behavior of C_J in ∆_φ > ∆_φ*. The change in the slope ∂C_T/∂∆_φ seems to be mainly due to the L-bifurcation in the S-sector; the R-bifurcation in the A-sector may also change ∂C_T/∂∆_φ, but this effect is rather weak, as in the curve for N = 0.1 in FIG. 3. The discussion above is obviously not enough to fully describe the level dynamics as one moves along the unitarity bound, nor to understand how it may lead to the formation of the kink. On the other hand, the degeneracy of the levels at ∆_φ = ∆_φ* might play a role in constructing the putative representation theory for 3D CFTs. Therefore, the bifurcations and the confluent behavior observed here may deserve further study.

4.3 Determination of ∆_φ and the fractal dimension in the limit N → 0

There are, in practice, at least three ways to estimate the ∆_φ (or, equivalently, the anomalous dimension η) of the O(N) CFT, each with some reasonable assumptions:

1. Calculate the ∆_φ that gives the asymmetric minimum of the current central charge C_J (FIG. 3).
2. Calculate the ∆_φ where the R-bifurcation of the subleading spin-1 dimension ∆^(2)_{A,ℓ=1} occurs.
3. Locate the kink (∆_φ, ∆_T) in the unitarity upper bound of ∆_T (FIG. 2).

As discussed in the previous section, methods 1 and 2 are essentially equivalent and should give consistent estimates. Within our derivative truncation (32) with K ∼ 153, method 1 is applicable for N ≲ 0.4. Although method 3, which will be used for N = 1 in Section 4.5, has the advantage of giving simultaneous estimates for (∆_φ, ∆_T), it may not be so accurate for N ≲ 0.2, as the smoothing of the kink inevitably occurs (Section 4.1).

[10] An analogous R-bifurcation of a spin-1 operator is also observed in the N = 2 supersymmetric (SUSY) Ising model, where the decoupling operator of the lower branch never touches the level of J_µ^{ab} at ∆ = 2. It is also remarkable that the N = 0 model has a twisted N = 2 SUSY in 2D, whose origin, the presence of an underlying Osp(2M, 2M) symmetry for any M at N = 0, is actually independent of the space dimension D.

[11] This L-bifurcation of ∆^(2)_S becomes ∆_{φ^4} ∼ 3.8, which gives the correction-to-scaling exponent ω ∼ 0.8. In the XY model (N = 2), we reproduce ∆^(2)_T ∼ 3.65. A more detailed study of the subleading spectrum is beyond the scope of this work.
Taking these into account and using criteria 1 and 2 on zoomed-in data, one may obtain the estimate (37) by the conformal bootstrap, which amounts, by an ad-hoc linear extrapolation from ∆_φ = 0.518151(6) for the Ising model (N = 1) , to ∆_φ|_{N=0} = 0.5141 ± 0.0002 for N = 0, with the uncertainty simply copied from (37) as we will not use it later (the notation ±x, instead of (x), is temporarily used to indicate uncertainties for ease of comparison). There are various RG estimates for the anomalous dimension η = 2∆_φ + 2 − D (see for a review), which in general have relatively larger uncertainties than those for the exponent ν^{-1} = D − ∆_S. Nicely, the estimate for N = 0 above lies almost at the center of the result ∆_φ|_{N=0} = 0.5142 ± 0.0013 obtained from the updated and most accurate RG computation . Note also that MC simulations are rarely able to measure this quantity, with the exception of ∆_φ|_{N=0} = 0.5125 ± 0.0007 , which is slightly smaller than our bootstrap result.

Now, we estimate the optimal unitarity bound for ∆_T at N = 0.1 with ∆_φ as in (37) and extrapolate it to N = 0. The unitarity upper bound ∆_T* for ∆_T approaches the optimal value from above as the number of derivatives K tends to infinity, as shown in Table 1. The downward uncertainty in the last digit (10^{-5}) of each ∆_T*(K) due to the choice of the bisection accuracy is shown as a subscript. Also note that these last digits may be subject to change by the choice of the cut-off for spins. As already mentioned, the convergence for this small value of N is much slower than for generic N such as N = 1. Note also that ∆_T*(K) shows a minor variation on top of the overall tendency to decrease (the upper bound must decrease as the constraints get stronger). This is expected, since ∆_T* depends on the precise choice of the derivatives (32), which carries more information than the single number K. The uncertainty induced by this variation, however, does not become dominant in the analysis below. The estimate for the optimal bound ∆_T*(∞) in Table 1 is obtained by a phenomenological fit, which is presumably better than the raw bound ∆_T*(K_max) with K_max = 153 obtained by the bisection; a sketch of this extrapolation is given below. We adopt this value ∆_T*(∞) = 1.2948(36) as optimal for N = 0.1 with this conservative error bar, which includes the entire residual |∆_T*(K_max) − ∆_T*(∞)|. The error due to the extrapolation to N → 0 is estimated as the uncertainty in (10) multiplied by 0.1, giving 7 × 10^{-4}, which is negligible compared to the other uncertainties. We note that extrapolations from other small values of N would give consistent estimates, meaning that the RG extrapolation by ∂∆_T/∂N is correct and actually avoidable in principle. With the degeneracy ∆_S = ∆_T at N = 0 understood (Section 2.2), this value (38) is consistent with ∆_S = 1.2999(32) (ν = 0.5882(11)) from the RG and with ∆_S = 1.29815(2) (ν = 0.587597(11)) from the most accurate MC, which is far ahead of the other simulations in accuracy .

Table 1. The unitarity bound for ∆_T with N = 0.1 at ∆_φ = 0.5145 and with N = 1 at ∆_φ = 0.51815, obtained with various choices (n_max, m_max) = (7,2)_80, (8,2)_99, (8,4)_117, (8,6)_135, (8,8)_153 of the derivatives, where the number K of derivatives is given as a subscript. The data are not ideally smooth, as they depend weakly on the precise choice of derivatives (32), which is not completely specified by K. For fixed K, the bound ∆_T* may be smaller by up to y × 10^{-5}, where y is shown as the subscript of the last digit.
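The K → ∞ extrapolation just described may be sketched as follows, assuming the fit form ∆_T*(K) = ∆_T*(∞) + c/K^p with p = 2 as in the text. The sample points are illustrative placeholders rather than the actual Table 1 entries, and the scipy-based implementation is only one possible choice.

import numpy as np
from scipy.optimize import curve_fit

# Numbers of derivative components K (as in Table 1) and hypothetical
# unitarity bounds Delta_T*(K); replace with the actual Table 1 data.
K = np.array([80.0, 99.0, 117.0, 135.0, 153.0])
bound = np.array([1.3021, 1.2997, 1.2983, 1.2973, 1.2967])

def model(K, optimal, c):
    # Phenomenological form with p = 2 fixed; p = 1 or a floating p
    # could be tested in the same way.
    return optimal + c / K**2

(optimal, c), _ = curve_fit(model, K, bound)
residual = abs(bound[-1] - optimal)   # |Delta_T*(K_max) - Delta_T*(inf)|
print(f"Delta_T*(inf) ~ {optimal:.4f}, conservative error ~ {residual:.4f}")

A conservative error bar, as in the text, then quotes the entire residual |∆_T*(K_max) − ∆_T*(∞)| rather than the formal fit uncertainty.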
Using the relation (6), d_F = 3 − ∆_T, the symmetric tensor dimension (38) leads to the fractal dimension for N = 0. Here let us just mention that this is much larger than the Flory value d_F = (D+2)/3 = 5/3 = 1.6666..., and should be more precise. The comparison with the corresponding results for ∆_S (= ∆_T) from the more modern literature is given just below (38).

Now, two remarks are in order. First, we note that using the other choice p = 1 in the fit would lead to a larger residual |∆_T*(K_max) − ∆_T*(∞)| ∼ 0.0085, resulting in ∆_T = 1.293(9) for N = 0. Although this value from p = 1 is still consistent with the other estimates, one clearly needs to increase K_max further from K_max = 153 in order to obtain a better estimate, which is computationally time-consuming. Although there seems to be no decisive difference between p = 1 and p = 2 regarding the quality of the fits, the fit using the function with the coexisting powers p = 1 and p = 2 yields |∆_T*(K_max) − ∆_T*(∞)| ∼ 0.0024, giving ∆_T = 1.2988(32) for N = 0, which is effectively the same result as (38) obtained with p = 2 only.

Second, we comment on the subtlety in the analysis for non-integer N. Before doing so, let us briefly summarize the three quantities, all for N = 0.1: ∆_T*(K_max), ∆_T*(∞), and ∆_T, for which the level of rigor decreases in this order. The unitarity bound ∆_T*(K_max) is a rigorous upper bound, albeit not optimal. The extrapolation to the optimal bound ∆_T*(∞) involves the phenomenological fit, for which the entire residual |∆_T*(K_max) − ∆_T*(∞)| is included as an error; the resulting estimate is not rigorous, but may be called conservative. Last but not least, we use ∆_T*(∞) as an estimate for ∆_T in the O(N) CFT. This is justified if the unitarity bound is saturated at the O(N) CFT, which is empirically expected at N ∈ N, as will be exploited for N = 1 in Section 4.5. On the other hand, one should not expect an exact saturation for N = 0.1, as suggested by the unitarity violation in the free O(N) model for non-integer N. We nevertheless expect that |∆_T − ∆_T*(∞)|/∆_T ≪ 1, which means that the unitarity-saturating solution still passes very close to the location of the O(N) CFT, as happens in the bootstrap for the Ising model in non-integer dimensions D ∉ N, which is shown to be non-unitary . This point is corroborated in Section 5. Our purpose here is not the pursuit of numerical precision, which is presently below that of the MC , but to give a new perspective on how the conformal bootstrap can be used to determine the fractal dimension. Although more sophisticated approaches to the limit N = 0 deserve further consideration, as in Section 5, the results here may already be encouraging enough to let us believe that gaining a deeper understanding of the 3D self-avoiding walk based on conformal invariance is promising.

4.4 The end point N = −2 and the loop-erased random walk

The present form of the conformal bootstrap, which depends on unitarity, cannot be directly applied to the O(−2) model, despite its importance as an end point of the continuous O(N) family that would be paired with the spherical-model limit N = ∞ (Section 2.1). Instead, we compute the pseudo-ε expansion (τ-series) for the symmetric tensor dimension ∆_T in the 3D O(−2) model from the result of the fixed-dimension 6-loop RG . More details can be found in the Appendix, where our parallel analysis for the N-derivatives of ∆_T and ∆_S is performed.

Table 2. The Padé table for d_F|_{N=−2} = 3 − ∆_T. The positive real pole closest to 1 is shown in the bracket.
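For the Padé analysis referred to here and in the Appendix, a minimal sketch of the construction of an [L/M] approximant from a truncated series, evaluated at τ = 1 as in the pseudo-ε expansion, is given below. The coefficients are illustrative placeholders, not the actual 6-loop τ-series for ∆_T at N = −2.

import numpy as np

def pade(a, L, M):
    """[L/M] Pade approximant of sum_k a[k] x^k; needs len(a) >= L+M+1."""
    # Denominator q with q[0] = 1: solve sum_{j=1..M} q[j]*a[L+i-j] = -a[L+i].
    A = np.array([[a[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -np.array([a[L + i] for i in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator p follows from the low-order coefficients.
    p = [sum(a[k - j] * q[j] for j in range(min(k, M) + 1)) for k in range(L + 1)]
    return np.array(p), q

a = [1.667, -0.052, 0.014, -0.006, 0.003, -0.002, 0.001]  # hypothetical tau-series
p, q = pade(a, 3, 3)
print(np.polyval(p[::-1], 1.0) / np.polyval(q[::-1], 1.0))  # estimate at tau = 1

The poles of the denominator q should be monitored; as in Table 2, an approximant with a positive real pole close to τ = 1 is unreliable and is discarded.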
The result is the τ-series for ∆_T. The 5- and 6-loop simple Padé analysis parallel to that in the Appendix, using Table 2, then yields the estimate (41), numerically d_F|_{N=−2} ∼ 1.614 (as quoted in the Conclusion), for which the relatively large uncertainty comes from the oscillating data along the boundary of Table 2 (i.e. the direct series M = 0 and its dual L = 0) and is estimated as a root-mean-square deviation. As expected, the estimate (41) is consistent with the known numerical results for the LERW. As a remark on the analysis of the Padé table, one may note that field theories tend to prefer slightly smaller central values compared with the numerical predictions. In our case, omission of the four boundary data points (indicated by * in Table 2) would lead to d_F|_{N=−2} = 1.6162(23), which would agree with the functional RG and older simulations , but would be definitely smaller than the most recent numerical results . It would be interesting to improve the present conformal bootstrap so as to analyze N < 0 across the severe unitarity wall, to obtain a better estimate of d_F|_{N=−2}, and to determine the dimensions of the sub-leading operators responsible for the correction to scaling. Such a study may contribute to a deeper understanding of the LERW from conformal invariance in 3D.

4.5 Fractal dimension of the high-temperature graphs in the 3D Ising model

The fractal dimension of the critical excitation in the O(1) model is most straightforwardly and rigorously accessible by the conformal bootstrap, since the bisection for ∆_T yields a clear kink along the unitarity bound just as in the conventional cases N ≥ 2, and since the O(1) model is unitary, so that the O(1) IR fixed point may be expected to saturate the optimal unitarity bound as in the Ising case. By a brief inspection of the zoomed-in view of FIG. 2, one obtains ∆_T ∼ 1.266 and ∆_φ ∼ 0.5181, the latter of which is consistent with the known result for the Ising spin operator ∆_φ = 0.518151(6) . It is also empirically interesting, though not rigorous at all, that if we adopt an ad-hoc criterion that the kink is located around the ∆_φ where the bisection takes longer than a certain period (∼ 2 weeks, for instance), we obtain another precise estimate ∆_φ = 0.518149(6) from the O(N) sum rule (18)-(20) with N = 1; near the kink, the optimization indeed takes a longer time than at other generic points, while the convergence to the optimal bound with respect to the number of derivatives K becomes much faster. The unitarity upper bounds ∆_T* for N = 1 at ∆_φ = 0.51815 for various choices of the derivatives, labelled by the number of derivative components K, are shown in Table 1. This leads via (6) to the estimate (42) for the fractal dimension of the high-temperature graphs in the 3D Ising model, which agrees well with d_F = 1.7349(65) from the 3D Ising plaquette-update MC simulation and with the worm-algorithm simulation d_F = 1.734(4) . In contrast to the case with N = 0.1 in Section 4.3, it would be natural to assume that the O(1) CFT saturates the optimal bound. Note also that the uncertainty 6 × 10^{-6} in ∆_φ, when multiplied by (∂∆_T/∂∆_φ)|_{∆_φ=0.51815}, may induce an error of less than 10^{-4} in d_F. The source of uncertainty is then virtually restricted to the extrapolation of ∆_T*(K) to K → ∞. We keep this conservative estimate, while the uncertainty is roughly halved if one uses the fit with p = 2, or becomes even smaller if one uses an exponential fit, the quality of which seems decent for this particular case.
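The error budget just described can be summarized, via the relation (6), by the following estimate, in which the two terms are respectively the uncertainty propagated from ∆_φ and the extrapolation residual:

d_F = 3 - \Delta_T \quad\Longrightarrow\quad
\delta d_F = \delta \Delta_T \;\lesssim\;
\left|\frac{\partial \Delta_T}{\partial \Delta_\phi}\right| \delta \Delta_\phi
\;+\; \left|\Delta_T^*(K_{\max}) - \Delta_T^*(\infty)\right| ,

where, as noted above, the first term stays below 10^{-4} for δ∆_φ = 6 × 10^{-6}, so that the second term dominates.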
It would be very useful if one had a general theory for the scaling of the residual with respect to the optimal bound as a function of K. In any case, the conformal bootstrap for this particular fractal dimension seems to give an order of magnitude more precise result than the MC simulations .

4.6 The tricritical Ising fixed point and the N = 1 SCFT

In 2D, the CFT for the tricritical Ising model (c = 7/10) is well known to be the first member of the series of minimal N = 1 superconformal CFTs (SCFTs) . In condensed matter, the N = 1 SCFT in 3D has been proposed to describe boundary excitations in the topological superconductor . The action of the one-component Gross-Neveu-Yukawa model in D dimensions is given by (43), where the Yukawa coupling mixes the boson φ and the fermion ψ. At the N = 1 supersymmetric IR fixed point, the interaction is described in terms of the superpotential W = Σ³ and a real supermultiplet Σ = φ + θψ + θ²φ², where θ is a fermionic coordinate (in superspace) of dimension −1/2 regardless of the space dimension D and θ² = ε_{αβ}θ^α θ^β (α, β = 1, 2). It has thus been pointed out that the intersection of the extra constraint (44) with the unitarity bound curve ∆_{φ²} = ∆_{φ²}(∆_φ) may be used to locate the SCFT . In contrast to the branch of this curve to the left of the kink, the right branch in the relevant region, which intersects with (44), is observed to be almost linear both in 2D and 3D . Let us then define the coefficients a and b of the linear fit (45) in 3D. For each choice of the set of derivatives (n_max, m_max) = (2k, 1) (k = 4, 5, ..., 10), with the cut-off K for the number of derivatives given in Section 3.3 and the spin cut-off at L = 50 for k = 9, 10 and at L = 40 otherwise, the linear fit is performed for the right branch of the unitarity upper bound of ∆_{φ²} in the range of ∆_φ obtained through the Z_2 sum rule. Table 3 shows the position of the intersection ∆^cross_φ as well as the coefficients of the fit^13 as functions of K. Although the convergence of the data might be improved by choosing cut-offs (and a fit range) that consume much more computation time, this would not seriously change the qualitative argument below. Table 3 leads to the estimate (46), which is slightly larger than the one-loop RG result 1/2 + 1/14 = 0.571... and satisfies the rough lower bound ∆^{N=1}_φ > 0.565^14. The geometric exponent for the N = 1 fixed point may then be obtained by the conformal bootstrap, giving the symmetric tensor dimension ∆_T ∼ 1.43 in the O(1) model at ∆_φ = ∆^{N=1}_φ. The derivation of (6) does not seem to depend on whether a fixed point admits a supersymmetric description or not, although a more detailed analysis would be useful. Assuming that (6) holds here as well, one obtains (47). In order to find the fractal object with the dimension (47) in a lattice model, one natural candidate would be the Kitaev model augmented by a local Ising exchange interaction at finite temperature, in which the non-supersymmetric fractal dimension has already been realized by the magnetic flux loops .

Table 3. The cut-off K of derivatives and the corresponding position ∆^cross_φ of the intersection between the SUSY relation (44) and the linear fit (45) for the unitarity bound of ∆_{φ²}, as well as the coefficients a and b. A minimal sketch of this intersection computation is given below.
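The sketch assumes that the SUSY relation (44) takes the form ∆_{φ²} = ∆_φ + 1, as the superspace counting above suggests (θ has dimension −1/2, so the θ²φ² component of Σ sits exactly one unit above φ); the fit coefficients a and b are hypothetical placeholders, not the Table 3 values.

# Intersection of the linear fit Delta_{phi^2} = a*Delta_phi + b, Eq. (45),
# with the SUSY line Delta_{phi^2} = Delta_phi + 1 (assumed form of (44)).
a, b = 5.0, -1.32                      # hypothetical fit coefficients
delta_cross = (1.0 - b) / (a - 1.0)    # solves a*x + b = x + 1
print(f"Delta_phi at the crossing: {delta_cross:.4f}")   # 0.5800 with these values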
In particular, the critical temperature along the paramagnetic to quantum-spin-liquid transition was determined by identifying the magnetic flux loops in this extended Kitaev model (an effective Z_2 gauge system) with the 3D Ising high-temperature graphs and by using the knowledge of the fractal dimension of the latter, for which our independent estimate is in (42). Similarly, it would be interesting if the fractal dimension (47) were realized by the flux loops at some point in the phase diagram (or a suitable extension thereof) and could be used to locate the emergent N = 1 fixed point by simulation. Indeed, the phase diagram is already very rich and has been studied for understanding the effect of thermal agitations on the topological order and, more specifically, the thermal fractionalization of the quantum spins into Majorana fermions. It is also worthwhile to note that if the N = 1 SCFT is realized in the vicinity of the "tricritical" point^15 in , the corresponding RG fixed point may be different from that of the 3D Ising tricritical point, which is widely believed to be described by the mean-field exponents in view of the RG argument including the φ⁶ interaction .

Another interesting model which would be relevant to the N = 1 supersymmetry is the Blume-Capel model . In 2D, the tricritical point of this model belongs to the tricritical Ising universality class, which can be identified with the celebrated N = 1 fixed point . In the latter, the set of scaling dimensions obtained as the intersection between the relation (44) and the unitarity bound (which is expected to be saturated by the analytic solution to the crossing symmetry) is actually (∆_ε, ∆_ε′) = (1/5, 6/5), where the energy operator ε and the sub-leading energy operator ε′, both in the Z_2-even sector, together form one operator in the Neveu-Schwarz sector (while the spin operator, which is Z_2-odd, independently belongs to the Ramond sector with dimension ∆_σ = 3/40 ). Thus the correspondence (∆_φ, ∆_{φ²}) → (∆_σ, ∆_ε) in the Ising model should be replaced by (∆_φ, ∆_{φ²}) → (∆_ε, ∆_ε′). Regarding the crossing symmetry, this can happen in 2D because σ in the Ising model and ε in the N = 1 model have the same fusion rule, since they belong to the same position (r, s) = (1, 2) in the Kac table of the Virasoro representation. If we use the same identification ∆^{N=1}_φ → ∆_ε as in 2D, this gives the exponents (48) in the thermal sector^16, which are in reasonable agreement with the exponent 2 − α = 1.213 found by the variational RG method in the phase diagram of the 3D Blume-Capel model . As noticed in , the latter value, which was aimed at the 3D tricritical exponent, deviates considerably from the standard Ising tricritical exponent, which is believed to take the mean-field value 2 − α = 3/2 as already mentioned, while their results agree impressively with the 2D exact results for both the tricritical and the critical point, and with the modern estimate for the 3D critical point. This seems to leave open the possibility that their method actually detects an additional N = 1 fixed point with the exponent (48) in the Blume-Capel model, besides the possibility that their value quoted above is simply a very poor estimate of the 3D Ising tricritical point. In any case, in view of the RG, it is likely that only in D = 2 can the Ising tricritical point and the N = 1 fixed point be identified with each other.
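As a consistency check of the 2D identification, the numbers quoted in footnote 16 follow from the standard hyperscaling relations:

\nu = \frac{1}{D - \Delta_\varepsilon}, \qquad 2 - \alpha = D\,\nu
\quad\stackrel{D = 2,\; \Delta_\varepsilon = 1/5}{\longrightarrow}\quad
\nu = \frac{5}{9}, \qquad 2 - \alpha = \frac{10}{9} \simeq 1.11 ,

and the 3D exponents (48) are obtained in the same way, with D = 3 and ∆_ε replaced by ∆^{N=1}_φ under the identification above.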
It would be interesting to study how these two fixed points deviate from each other in D = 2 + ε and evolve all the way to distinct universality classes in D = 3 by the conformal bootstrap.

5 Conclusion

We have taken the simplest bootstrap approach, based only on the crossing symmetry of the four-point function ⟨φ_a φ_b φ_c φ_d⟩ of the fundamental fields, for the one-parameter family of 3D O(N) models, with a special focus on the fractal dimension d_F in the range 0 ≤ N ≤ 1. Besides the property of lying exactly at the severe unitarity wall (Section 4.1), the limit N → 0 is characterized by the degeneracy of the two operator dimensions ∆_S and ∆_T in any space dimensionality D. Accordingly, a more elaborate approach to such limits would need to deal with the possible logarithms that could appear in the four-point functions . Also, if one tries to perform the mixed-correlator bootstrap including the energy field ε, for which the logarithms appear already at the level of two-point functions, it would be inevitable to face these logarithms. In that case, the smooth continuation by the conformal bootstrap to the 2D problem from D = 2 + ε would also be interesting, since the correlation functions including ε in the 2D O(N) model can be dealt with both in the integral representation and via the differential equation using the degenerate representation at integer level 3 in the Virasoro algebra. This is a subject for further research.

The issue of unitarity violation is certainly important and non-trivial, and it is currently not obvious to what extent it affects the estimates in Section 4.3 for non-integer N obtained under the assumption of unitarity, which is actually weakly violated as outlined below. Nevertheless, there seem to be various possible improvements for the bootstrap study of non-unitary systems (e.g. the determinant method or the extremal-function flow method ) suggested by the present work. To elaborate on some of these, let us first recall that positivity is violated in the free O(N) models at any non-integer value of N, which can be shown by computing the norms^17 of a certain class of composite operators . Although this argument applies only to the free model, a recent study of the unitarity violation in the single-scalar φ⁴-theory (N = 1 fixed) in the fractional dimensions D = 4 − ε points to a quantitatively similar behavior, both for the free (UV) theory and for the interacting (IR) theory: the positivity violation indeed occurs, but generically only for operators with very high scaling dimensions ∆ ≫ D , which would explain why the bootstrap for the Ising model in non-integer dimensions works decently . Our exact computation of the squared OPE coefficients in the 2D O(N) model for non-integer N also shows a similar pattern of positivity violation.
In view of these analyses, we conclude that the set of scaling dimensions (∆_φ, ∆_T) of the IR O(N) FP for non-integer N should not saturate the unitarity upper bound ∆_T ≤ ∆_T*(∆_φ) in general, but hopefully lies in the vicinity of the bound (it could appear on either side of it), for N not too close to N = 0, at which the squared OPE coefficient of the low-lying operator, namely the stress-energy tensor T_{µν} (∆ = D), changes its sign due to the pole discussed in Section 4.1.1. For N ∼ 0, on the other hand, if the family of unitarity-saturating solutions obtained here along ∆_T = ∆_T*(∆_φ) indeed passes nearby the IR O(N) FP, the inclusion of one more spin-1 operator just above the conserved current J^µ in the truncated spectrum of the determinant method should qualitatively improve the behavior of the non-unitary solution according to the mechanism of Section 4.2 for the emergence of the kink in the current central charge C_J. It would be nice to quantify the effect of the unitarity violation in more detail and to clarify how (via the flow method, for instance) the putative non-unitary IR O(N) FP and the unitary solution found here could be connected with each other.

In the long run, it would be interesting to generalize the ideas of the SLE so as to describe critical geometry embedded in 3D, though this task severely confronts our limited understanding of 3D geometry, since the SLE is intrinsically based on the Riemann mapping (uniformization) theorem for 2D conformal maps. In this respect, the 3D O(N) model with −2 ≤ N ≤ ∞ offers a natural one-parameter family of loop ensembles (see for extensive simulations), which is likely to have a conformally invariant measure (see for a simulation at N = 0). We have focused on the region 0 ≤ N ≤ 1, which contains the most interesting cases^18 as its two boundary points (namely, the self-avoiding walk in the N → 0 limit and the high-temperature graphs of the 3D Ising model at N = 1), and estimated the fractal dimension d_F = 3 − ∆_T by the conformal bootstrap using the unitarity conditions. Although the positivity (31) prevents us from applying the conformal bootstrap across the severe unitarity wall at N = 0, we computed the fractal dimension for N = −2 by the 6-loop RG, giving d_F ∼ 1.614, which is encouraging for the conjecture that the 3D N = −2 model may also describe the loop-erased random walk as in 2D, although some elaborate operator correspondence may be necessary in view of the logarithmic corrections . It is of interest to see whether some generalization of Beffara's theorem (2) exists in 3D and whether the loop ensemble in the 3D O(N) model with −2 ≤ N ≤ ∞, with fractal dimension 1.614 ≲ d_F ≤ 2 (parallel to 5/4 ≤ d_F ≤ 3/2 in 2D), has a natural parametrization in terms of the inverse-trigonometric function (1), or some other transcendental function of N, as κ does for the 2D SLE_κ.

16 Another identification, ∆^{N=1}_{φ²} → ∆_ε, yields ν ∼ 0.701, which seems to agree with the estimates for ν (∼ 0.71) by the functional RG for the N = 1 UV Lagrangian. Note that the latter ν is not meant for a physical realization (e.g. the Blume-Capel model) and is just an indication that ∆^{N=1}_{φ²} computed from the mass renormalization may agree with the conformal bootstrap. At the 2D N = 1 fixed point, for instance, the observed value 2 − α = dν = 10/9 ∼ 1.11 follows from ∆^{N=1}_φ = 1/5, but not from ∆^{N=1}_{φ²}.

Table 5. The Padé table for ∂∆_T(0)/∂N. The positive real pole closest to 1 is shown in the bracket.

The table collects the six-loop (L + M = 6) and five-loop (L + M = 5) order approximants. For each derivative, the data occurring with a pole (indicated by *) are omitted, since the pole is rather close to τ = 1, where the series is to be evaluated. As a simple estimate, we take the average of the six- and five-loop values and the maximum deviation as the error. This gives the values quoted in (10) and (11) in the text.
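The simple averaging prescription just described can be sketched as follows; the approximant values are illustrative placeholders, not the actual Table 5 entries.

# Average the pole-free six-loop (L+M = 6) and five-loop (L+M = 5) Pade
# approximants and quote the maximum deviation as the error, as in the text.
six_loop = [0.143, 0.147, 0.145]   # hypothetical [L/M] values with L+M = 6
five_loop = [0.141, 0.148]         # hypothetical [L/M] values with L+M = 5
values = six_loop + five_loop
center = sum(values) / len(values)
error = max(abs(v - center) for v in values)
print(f"estimate: {center:.4f} +/- {error:.4f}")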
/**
 * Returns a provider of empty tiles filled with the given value in all bands.
 * A value of {@code null} is interpreted as 0 for integer types or NaN for
 * floating point types.
 *
 * @param  model      sample model of the empty tiles.
 * @param  fillValue  the value to use for filling empty spaces in rasters,
 *                    or {@code null} for the type-dependent default (0 or NaN).
 * @return provider of filled tiles.
 */
public static TilePlaceholder filled(final SampleModel model, final Number fillValue) {
    if (fillValue == null) {
        // Type-dependent default: 0 for integer types, NaN for floating point.
        return empty(model);
    }
    // Use the same fill value for every band of the sample model.
    final Number[] values = new Number[model.getNumBands()];
    Arrays.fill(values, fillValue);
    return filled(model, new FillValues(model, values, true));
}
/**
 * For storing a CASKS14b Record.
 *
 * @author Andy Turner
 * @version 1.0.0
 */
public class Census_CASKS14bRecord extends Census_AreaRecord {

    /*
     * Table KS014b National Statistics - Socio Economic Classification - males
     * aged 16-74. Footnotes and Comments for Table KS014b: 1. For long-term
     * unemployed, year last worked is 1999 or earlier. 2. In the NS-SeC
     * classification, all full-time students are recorded in the 'full-time
     * students' category regardless of whether they are economically active or
     * not. 3. 'Not classifiable for other reasons' includes people whose
     * occupation has not been coded.
     */

    /** KS014b0001 = malesAged16to74 */
    protected int malesAged16to74;

    /** KS014b0002 = malesAged16to74LargeEmployersAndHigherManagerialOccupations */
    protected int malesAged16to74LargeEmployersAndHigherManagerialOccupations;

    /** KS014b0003 = malesAged16to74HigherProfessionalOccupations */
    protected int malesAged16to74HigherProfessionalOccupations;

    /** KS014b0004 = malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate */
    protected int malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate;

    /** KS014b0005 = malesAged16to74IntermediateOccupations */
    protected int malesAged16to74IntermediateOccupations;

    /** KS014b0006 = malesAged16to74SmallEmployersAndOwnAccountWorkers */
    protected int malesAged16to74SmallEmployersAndOwnAccountWorkers;

    /** KS014b0007 = malesAged16to74LowerSupervisoryAndTechnicalOccupations */
    protected int malesAged16to74LowerSupervisoryAndTechnicalOccupations;

    /** KS014b0008 = malesAged16to74SemiRoutineOccupations */
    protected int malesAged16to74SemiRoutineOccupations;

    /** KS014b0009 = malesAged16to74RoutineOccupations */
    protected int malesAged16to74RoutineOccupations;

    /** KS014b0010 = malesAged16to74NeverWorked */
    protected int malesAged16to74NeverWorked;

    /** KS014b0011 = malesAged16to74LongTermUnemployed */
    protected int malesAged16to74LongTermUnemployed;

    /** KS014b0012 = malesAged16to74FullTimeStudents */
    protected int malesAged16to74FullTimeStudents;

    /**
     * Creates a new record with all numerical fields set to 0.
     *
     * @param rID What {@link #ID} is set to.
     */
    public Census_CASKS14bRecord(Census_RecordID rID) {
        super(rID);
    }

    /**
     * Creates a new record with fields set from line.
     *
     * @param rID What {@link #ID} is set to.
     * @param line A line for a CSV file.
     */
    public Census_CASKS14bRecord(Census_RecordID rID, String line) {
        super(rID);
        String[] fieldsDummy = line.split(",");
        // Pad to the expected 13 fields so that a short line does not cause
        // an ArrayIndexOutOfBoundsException.
        String[] fields = new String[13];
        for (int i = 0; i < fields.length; i++) {
            fields[i] = "";
        }
        System.arraycopy(fieldsDummy, 0, fields, 0,
                Math.min(fieldsDummy.length, fields.length));
        zoneCode = fields[0].substring(1, 11);
        // From Table KS14b
        this.malesAged16to74 = Math_Integer.parseInt(fields[1]);
        this.malesAged16to74LargeEmployersAndHigherManagerialOccupations = Math_Integer.parseInt(fields[2]);
        this.malesAged16to74HigherProfessionalOccupations = Math_Integer.parseInt(fields[3]);
        this.malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate = Math_Integer.parseInt(fields[4]);
        this.malesAged16to74IntermediateOccupations = Math_Integer.parseInt(fields[5]);
        this.malesAged16to74SmallEmployersAndOwnAccountWorkers = Math_Integer.parseInt(fields[6]);
        this.malesAged16to74LowerSupervisoryAndTechnicalOccupations = Math_Integer.parseInt(fields[7]);
        this.malesAged16to74SemiRoutineOccupations = Math_Integer.parseInt(fields[8]);
        this.malesAged16to74RoutineOccupations = Math_Integer.parseInt(fields[9]);
        this.malesAged16to74NeverWorked = Math_Integer.parseInt(fields[10]);
        this.malesAged16to74LongTermUnemployed = Math_Integer.parseInt(fields[11]);
        this.malesAged16to74FullTimeStudents = Math_Integer.parseInt(fields[12]);
    }

    /**
     * @return a string description of this.
     */
    @Override
    public String toString() {
        return super.toString()
                + ", malesAged16to74 " + malesAged16to74
                + ", malesAged16to74LargeEmployersAndHigherManagerialOccupations " + malesAged16to74LargeEmployersAndHigherManagerialOccupations
                + ", malesAged16to74HigherProfessionalOccupations " + malesAged16to74HigherProfessionalOccupations
                + ", malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate " + malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate
                + ", malesAged16to74IntermediateOccupations " + malesAged16to74IntermediateOccupations
                + ", malesAged16to74SmallEmployersAndOwnAccountWorkers " + malesAged16to74SmallEmployersAndOwnAccountWorkers
                + ", malesAged16to74LowerSupervisoryAndTechnicalOccupations " + malesAged16to74LowerSupervisoryAndTechnicalOccupations
                + ", malesAged16to74SemiRoutineOccupations " + malesAged16to74SemiRoutineOccupations
                + ", malesAged16to74RoutineOccupations " + malesAged16to74RoutineOccupations
                + ", malesAged16to74NeverWorked " + malesAged16to74NeverWorked
                + ", malesAged16to74LongTermUnemployed " + malesAged16to74LongTermUnemployed
                + ", malesAged16to74FullTimeStudents " + malesAged16to74FullTimeStudents;
    }

    /**
     * @return A Comma Separated Values (CSV) representation of this.
     */
    @Override
    public String toCSV() {
        return super.toCSV()
                + "," + malesAged16to74
                + "," + malesAged16to74LargeEmployersAndHigherManagerialOccupations
                + "," + malesAged16to74HigherProfessionalOccupations
                + "," + malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate
                + "," + malesAged16to74IntermediateOccupations
                + "," + malesAged16to74SmallEmployersAndOwnAccountWorkers
                + "," + malesAged16to74LowerSupervisoryAndTechnicalOccupations
                + "," + malesAged16to74SemiRoutineOccupations
                + "," + malesAged16to74RoutineOccupations
                + "," + malesAged16to74NeverWorked
                + "," + malesAged16to74LongTermUnemployed
                + "," + malesAged16to74FullTimeStudents;
    }

    /**
     * @return A Comma Separated Values (CSV) representation of the names of
     * the fields/variables.
     */
    @Override
    public String toCSVHeader() {
        return super.toCSVHeader()
                + ",malesAged16to74"
                + ",malesAged16to74LargeEmployersAndHigherManagerialOccupations"
                + ",malesAged16to74HigherProfessionalOccupations"
                + ",malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate"
                + ",malesAged16to74IntermediateOccupations"
                + ",malesAged16to74SmallEmployersAndOwnAccountWorkers"
                + ",malesAged16to74LowerSupervisoryAndTechnicalOccupations"
                + ",malesAged16to74SemiRoutineOccupations"
                + ",malesAged16to74RoutineOccupations"
                + ",malesAged16to74NeverWorked"
                + ",malesAged16to74LongTermUnemployed"
                + ",malesAged16to74FullTimeStudents";
    }

    /**
     * @return a copy of {@link #malesAged16to74}.
     */
    public int getMalesAged16to74() {
        return this.malesAged16to74;
    }

    /**
     * @return a copy of {@link #malesAged16to74LargeEmployersAndHigherManagerialOccupations}.
     */
    public int getMalesAged16to74LargeEmployersAndHigherManagerialOccupations() {
        return this.malesAged16to74LargeEmployersAndHigherManagerialOccupations;
    }

    /**
     * @return a copy of {@link #malesAged16to74HigherProfessionalOccupations}.
     */
    public int getMalesAged16to74HigherProfessionalOccupations() {
        return this.malesAged16to74HigherProfessionalOccupations;
    }

    /**
     * @return a copy of {@link #malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate}.
     */
    public int getMalesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate() {
        return this.malesAged16to74LowerManagerialAndProfessionalOccupationsIntermediate;
    }

    /**
     * @return a copy of {@link #malesAged16to74IntermediateOccupations}.
     */
    public int getMalesAged16to74IntermediateOccupations() {
        return this.malesAged16to74IntermediateOccupations;
    }

    /**
     * @return a copy of {@link #malesAged16to74SmallEmployersAndOwnAccountWorkers}.
     */
    public int getMalesAged16to74SmallEmployersAndOwnAccountWorkers() {
        return this.malesAged16to74SmallEmployersAndOwnAccountWorkers;
    }

    /**
     * @return a copy of {@link #malesAged16to74LowerSupervisoryAndTechnicalOccupations}.
     */
    public int getMalesAged16to74LowerSupervisoryAndTechnicalOccupations() {
        return this.malesAged16to74LowerSupervisoryAndTechnicalOccupations;
    }

    /**
     * @return a copy of {@link #malesAged16to74SemiRoutineOccupations}.
     */
    public int getMalesAged16to74SemiRoutineOccupations() {
        return this.malesAged16to74SemiRoutineOccupations;
    }

    /**
     * @return a copy of {@link #malesAged16to74RoutineOccupations}.
     */
    public int getMalesAged16to74RoutineOccupations() {
        return this.malesAged16to74RoutineOccupations;
    }

    /**
     * @return a copy of {@link #malesAged16to74NeverWorked}.
     */
    public int getMalesAged16to74NeverWorked() {
        return this.malesAged16to74NeverWorked;
    }

    /**
     * @return a copy of {@link #malesAged16to74LongTermUnemployed}.
     */
    public int getMalesAged16to74LongTermUnemployed() {
        return this.malesAged16to74LongTermUnemployed;
    }

    /**
     * @return a copy of {@link #malesAged16to74FullTimeStudents}.
     */
    public int getMalesAged16to74FullTimeStudents() {
        return this.malesAged16to74FullTimeStudents;
    }
}
package json_template

import (
    "testing"
)

func TestTokenize1(t *testing.T) {
    data := `
x = args["str"]
`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 6 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[0].token != tokenWord {
        t.Fatal("token 0 type")
    }
    if string(tokens[0].data) != "x" {
        t.Fatal("token 0 content")
    }
    if tokens[1].token != tokenEqual {
        t.Fatal("token 1 type")
    }
    if tokens[2].token != tokenWord {
        t.Fatal("token 2 type")
    }
    if string(tokens[2].data) != "args" {
        t.Fatal("token 2 content")
    }
    if tokens[3].token != tokenBracketSO {
        t.Fatal("token 3 type")
    }
    if tokens[4].token != tokenString {
        t.Fatal("token 4 type")
    }
    if string(tokens[4].data) != `"str"` {
        t.Fatal("token 4 content")
    }
    if tokens[5].token != tokenBracketSC {
        t.Fatal("token 5 type")
    }
}

func TestTokenize2(t *testing.T) {
    data := `%%{dddddd}%%-1.34`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 2 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[0].token != tokenObject {
        t.Fatal("token 0 type")
    }
    if string(tokens[0].data) != "{dddddd}" {
        t.Fatal("token 0 content")
    }
    if tokens[1].token != tokenNum {
        t.Fatal("token 1 type")
    }
    if string(tokens[1].data) != "-1.34" {
        t.Fatal("token 1 content")
    }
}

func TestTokenize3(t *testing.T) {
    data := `32 xx()%x%{"s":"%%"}%x%`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 5 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[0].token != tokenNum {
        t.Fatal("token 0 type")
    }
    if string(tokens[0].data) != "32" {
        t.Fatal("token 0 content")
    }
    if tokens[1].token != tokenWord {
        t.Fatal("token 1 type")
    }
    if string(tokens[1].data) != "xx" {
        t.Fatal("token 1 content")
    }
    if tokens[2].token != tokenBracketRO {
        t.Fatal("token 2 type")
    }
    if tokens[3].token != tokenBracketRC {
        t.Fatal("token 3 type")
    }
    if tokens[4].token != tokenObject {
        t.Fatal("token 4 type")
    }
    if string(tokens[4].data) != "{\"s\":\"%%\"}" {
        t.Fatal("token 4 content")
    }
}

func TestTokenize4(t *testing.T) {
    data := `ss.dd("x\"x")`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 6 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[1].token != tokenDot {
        t.Fatal("token 1 type")
    }
    if tokens[4].token != tokenString {
        t.Fatal("token 4 type")
    }
    if string(tokens[4].data) != `"x\"x"` {
        t.Fatal("token 4 content")
    }
}

func TestTokenize5(t *testing.T) {
    // The data spans three lines so that '@' sits at line 3, column 2.
    data := `res = xx
if x()
 @ end`
    _, err := tokenize([]byte(data))
    if err == nil {
        t.Fatal("err is nil")
    }
    pErr, ok := err.(ParseError)
    if !ok {
        t.Fatalf("err type %T != ParseError", err)
    }
    if pErr.Msg != ErrUnexpectedSymbol {
        t.Fatal("err msg:", pErr.Msg)
    }
    if pErr.Pos.line != 3 || pErr.Pos.column != 2 {
        t.Fatal("err position:", pErr.Pos)
    }
}

func TestTokenize6(t *testing.T) {
    data := `12..34`
    _, err := tokenize([]byte(data))
    if err == nil {
        t.Fatal("err is nil")
    }
    pErr, ok := err.(ParseError)
    if !ok {
        t.Fatalf("err type %T != ParseError", err)
    }
    if pErr.Msg != ErrParseNumber {
        t.Fatal("err msg:", pErr.Msg)
    }
}

func TestTokenize7(t *testing.T) {
    data := "xx = %y"
    _, err := tokenize([]byte(data))
    if err == nil {
        t.Fatal("err is nil")
    }
    pErr, ok := err.(ParseError)
    if !ok {
        t.Fatalf("err type %T != ParseError", err)
    }
    if pErr.Msg != ErrUnexpectedObjEnd {
        t.Fatal("err msg:", pErr.Msg)
    }
}

func TestTokenize8(t *testing.T) {
    data := "xx = %%{}"
    _, err := tokenize([]byte(data))
    if err == nil {
        t.Fatal("err is nil")
    }
    pErr, ok := err.(ParseError)
    if !ok {
        t.Fatalf("err type %T != ParseError", err)
    }
    if pErr.Msg != ErrUnexpectedObjEnd {
        t.Fatal("err msg:", pErr.Msg)
    }
}

func TestTokenize9(t *testing.T) {
    data := `d="xx`
    _, err := tokenize([]byte(data))
    if err == nil {
        t.Fatal("err is nil")
    }
    pErr, ok := err.(ParseError)
    if !ok {
        t.Fatalf("err type %T != ParseError", err)
    }
    if pErr.Msg != ErrUnexpectedStrEnd {
        t.Fatal("err msg:", pErr.Msg)
    }
}

func TestTokenize10(t *testing.T) {
    data := "%@%{}%@%"
    _, err := tokenize([]byte(data))
    if err == nil {
        t.Fatal("err is nil")
    }
    pErr, ok := err.(ParseError)
    if !ok {
        t.Fatalf("err type %T != ParseError", err)
    }
    if pErr.Msg != ErrIllegalObjQuote {
        t.Fatal("err msg:", pErr.Msg)
    }
}

func TestTokenize11(t *testing.T) {
    data := `if x y=x else y=z end`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 10 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[0].token != tokenKwIf {
        t.Fatal("token 0 type")
    }
    if tokens[5].token != tokenKwElse {
        t.Fatal("token 5 type")
    }
    if tokens[9].token != tokenKwEnd {
        t.Fatal("token 9 type")
    }
}

func TestTokenize12(t *testing.T) {
    data := `for _ x in args`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 5 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[0].token != tokenKwFor {
        t.Fatal("token 0 type")
    }
    if tokens[1].token != tokenWord {
        t.Fatal("token 1 type")
    }
    if string(tokens[1].data) != "_" {
        t.Fatal("token 1 content")
    }
    if tokens[3].token != tokenKwIn {
        t.Fatal("token 3 type")
    }
}

func TestTokenize13(t *testing.T) {
    data := `fn(x,y)`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 6 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    if tokens[0].token != tokenWord {
        t.Fatal("token 0 type")
    }
    if string(tokens[0].data) != "fn" {
        t.Fatal("token 0 content")
    }
    if tokens[1].token != tokenBracketRO {
        t.Fatal("token 1 type")
    }
    if tokens[2].token != tokenWord {
        t.Fatal("token 2 type")
    }
    if string(tokens[2].data) != "x" {
        t.Fatal("token 2 content")
    }
    if tokens[3].token != tokenComa {
        t.Fatal("token 3 type")
    }
    if tokens[4].token != tokenWord {
        t.Fatal("token 4 type")
    }
    if string(tokens[4].data) != "y" {
        t.Fatal("token 4 content")
    }
    if tokens[5].token != tokenBracketRC {
        t.Fatal("token 5 type")
    }
}

func TestTokenize14(t *testing.T) {
    // The layout is chosen so that the object token starts at line 2,
    // column 13 and the word "x" starts at line 7, column 4, matching the
    // position assertions below.
    data := `
   result = %%{
     "obj":{},
     "arr":[],
     "info": "test template"
   }%%
   x
`
    tokens, err := tokenize([]byte(data))
    if err != nil {
        t.Fatalf("err=%v", err)
    }
    if len(tokens) != 4 {
        t.Fatalf("len(tokens) = %d ", len(tokens))
    }
    pos := tokens[2].start
    if pos.line != 2 || pos.column != 13 {
        t.Fatal("Incorrect token position", pos)
    }
    pos = tokens[3].start
    if pos.line != 7 || pos.column != 4 {
        t.Fatal("Incorrect token position", pos)
    }
}