import 'mocha'
import '../env'

import { expect } from 'chai'

import { MessageParameters, SdkOptions } from '../../../src/dto'
import { TestmailInbox } from '../../../src/test-helpers'
import { generateCredentials } from '../../factory/signedCredentials'
import { getBasicOptionsForEnvironment } from '../../helpers'
import { AffinidiWalletV6WithEncryption as AffinityWallet } from '../../helpers/AffinidiWallet'

const { TEST_SECRETS } = process.env
const { COGNITO_PASSWORD } = JSON.parse(TEST_SECRETS)

const options: SdkOptions = getBasicOptionsForEnvironment()
const { env } = options

const messageParameters: MessageParameters = {
  message: `Your verification code is: {{CODE}}`,
  subject: `Verification code`,
}

const waitForOtpCode = async (inbox: TestmailInbox): Promise<string> => {
  const { body } = await inbox.waitForNewEmail()
  return body.replace('Your verification code is: ', '')
}

const createInbox = () => new TestmailInbox({ prefix: env, suffix: 'otp.wallet' })

const getCredentialIds = (credentials: any[]) => new Set(credentials.map((credential) => credential.id))

function checkIsString(value: string | unknown): asserts value is string {
  expect(value).to.be.a('string')
}

describe('WalletStorageService [OTP]', () => {
  it.skip('full flow with 100+ credentials', async () => {
    const inbox = createInbox()
    // The password value was redacted in this copy; COGNITO_PASSWORD is
    // destructured from TEST_SECRETS above and otherwise unused, so it is
    // restored here.
    const password = COGNITO_PASSWORD

    const signUpToken = await AffinityWallet.initiateSignUpByEmail(options, inbox.email, password, messageParameters)
    checkIsString(signUpToken)
    const signUpCode = await waitForOtpCode(inbox)

    const commonNetworkMember = await AffinityWallet.completeSignUp(options, signUpToken, signUpCode)
    console.log('signed up')

    const credentialsToSave = generateCredentials(220)
    await commonNetworkMember.saveCredentials(credentialsToSave.slice(0, 55))
    console.log('saved 55 credentials')
    await commonNetworkMember.saveCredentials(credentialsToSave.slice(55, 110))
    console.log('saved 110 credentials')
    await commonNetworkMember.saveCredentials(credentialsToSave.slice(110, 165))
    console.log('saved 165 credentials')
    await commonNetworkMember.saveCredentials(credentialsToSave.slice(165, 220))
    console.log('saved 220 credentials')

    {
      const credentials = await commonNetworkMember.getCredentials()
      console.log(`retrieved ${credentials.length} credentials`)

      expect(credentials).to.have.length(220)
      expect(getCredentialIds(credentials)).to.deep.equal(getCredentialIds(credentialsToSave))
    }

    {
      const credentialIdsToDelete = [credentialsToSave[90].id, credentialsToSave[150].id, credentialsToSave[210].id]
      const expectedIds = getCredentialIds(credentialsToSave)

      for (const id of credentialIdsToDelete) {
        expectedIds.delete(id)
        await commonNetworkMember.deleteCredentialById(id)
        console.log('deleted credential')

        const remaining = await commonNetworkMember.getCredentials()
        console.log(`There are ${remaining.length} credentials left`)
      }

      const credentials = await commonNetworkMember.getCredentials()
      console.log('retrieved credentials')

      expect(credentials).to.have.length(217)
      expect(getCredentialIds(credentials)).to.deep.equal(expectedIds)
    }
  }).timeout(600000)
})
1. Field of the Invention

The present invention relates to thin film multilayered structures applicable to ferroelectric thin film elements using Si substrates, for example, capacitors for DRAMs and ferroelectric RAMs (FeRAM), pyroelectric elements, microactuators, thin film capacitors, small piezoelectric elements, etc., and to manufacturing methods thereof. More specifically, it relates to metallic thin films epitaxially grown on Si substrates with interposed buffer layers, and to manufacturing methods thereof.

2. Description of the Related Art

In recent years, techniques for forming thin films of dielectrics and ferroelectrics, for example, BaTiO3, SrTiO3, (Ba,Sr)TiO3 (hereafter abbreviated as BST), PbTiO3, (Pb,La)TiO3, Pb(Zr,Ti)O3 (hereafter abbreviated as PZT), (Pb,La)(Zr,Ti)O3 (hereafter abbreviated as PLZT) and Pb(Mg,Nb)O3, on Si substrates have been researched extensively. In particular, if Pb perovskite-type ferroelectrics having a large residual dielectric polarization, for example, PZT and PLZT, could be epitaxially grown, the spontaneous polarization could be oriented in one direction, and larger polarization values and better switching characteristics could then be realized. This would greatly increase their applicability to high-density recording media, so the demand for methods of forming ferroelectric thin films with excellent crystalline properties on Si substrates has intensified.

To orient the spontaneous polarization in one direction (the film thickness direction) as described above, a so-called MFM (metal-ferroelectric-metal) structure, in which a ferroelectric thin film is interposed between upper and lower metallic thin films (electrode layers) on a Si substrate, has generally been used. However, it is difficult to improve the crystalline properties of ferroelectric thin films in this structure for the reasons described below, and ferroelectric thin films having fully satisfactory crystalline properties have not been obtained until now.

That is, when a metallic material such as Al, Cu, Ag or Au is used as the metallic thin film (lower electrode) formed on a Si substrate, a metallic oxide film forms at the interface between the metallic thin film and the ferroelectric thin film during the formation of the ferroelectric thin film on said lower electrode. Mutual diffusion is also likely to occur between the aforementioned metallic material and the Si substrate, so that when a semiconductor element, etc., is formed on the Si substrate, its characteristics may be changed.

A method using Pt as the metallic thin film has also been considered. Pt has advantages in that it is difficult to oxidize in air and its lattice matches well with PZT and with ferroelectrics such as PLZT and BST. However, because Pt inherently tends to form compounds with elements such as Si and Pb, it was feared that the characteristics of semiconductor elements formed on the Si substrate might be changed, and that compounds formed at the interface with Pb-containing ferroelectrics might degrade the crystalline properties of ferroelectric thin films formed thereon. A phenomenon wherein oxygen diffuses into the lower layer through grain boundaries of the Pt thin film was also observed, and it was feared that, although Pt itself is difficult to oxidize, the characteristics of elements or films positioned under the Pt thin film, for example, semiconductor elements, might be adversely affected.
In this regard, Ir and Rh, which have a face-centered cubic structure, have, similarly to Pt, a high conductivity, are easier to process than Pt, and furthermore act as a diffusion barrier against oxygen, so that the phenomenon wherein oxygen diffuses into the lower layer through the Ir thin film does not occur. Ir is also not likely to react with other elements, so the problems that accompany the use of Pt, such as changes in the characteristics of semiconductor elements and degradation of the crystalline properties of ferroelectric thin films, can be suppressed. As described above, Ir and Rh are considered suitable electrode-layer materials for manufacturing ferroelectric thin films with excellent crystalline properties.

However, it has been difficult to epitaxially grow an Ir or Rh thin film on a Si substrate by conventional methods. For example, although Nakamura et al. formed an Ir thin film, as the electrode of a PZT thin film capacitor, on a SiO2/Si substrate by the RF magnetron sputtering method (Jpn. J. Appl. Phys. Vol. 34 (1995), 5184), the obtained Ir thin film was not an epitaxial film but a film having the (111) preferred orientation. And although Horii et al. formed an Ir thin film on a YSZ/Si substrate by the sputtering method (The Japan Society of Applied Physics and Related Societies, Extended Abstracts, The 45th Meeting (1998)), only a film in which the (100) and (111) orientations are mixed was obtained.

Accordingly, the object of the present invention is to provide a substrate metallic thin film on which epitaxial ferroelectric thin films of excellent crystalline properties can be formed on Si substrates, and to provide a manufacturing method thereof.

The thin film multilayered structure comprises: a single crystal Si substrate; a MgO buffer layer epitaxially grown on said single crystal Si substrate; and a metallic thin film made of Ir or Rh epitaxially grown on said MgO buffer layer. The single crystal Si substrate and the MgO buffer layer preferably fulfill the crystallographic relations: MgO (001) // Si (001) and MgO [100] // Si [100]. It is more preferable that said single crystal Si substrate, said MgO buffer layer and said metallic thin film fulfill the crystallographic relations: metallic thin film (001) // MgO (001) // Si (001), and metallic thin film [100] // MgO [100] // Si [100]. The MgO buffer layer preferably has a mean surface roughness of about 1.5 nm or less, and said metallic thin film preferably has a mean surface roughness of about 1.5 nm or less.

The ferroelectric thin film element comprises, in addition to the above-explained thin film multilayered structure, a ferroelectric thin film orientationally grown on said thin film multilayered structure and an upper electrode formed on said ferroelectric thin film.

The manufacturing method of a thin film multilayered structure comprises the steps of epitaxially growing a MgO buffer layer on a single crystal Si substrate, and epitaxially growing a metallic thin film made of Ir or Rh on said MgO buffer layer. The MgO buffer layer is preferably formed at a temperature of about 350 to 900°C and at a growth rate of 1.0 to 2.0 nm/min, and more preferably at a temperature of about 500 to 900°C.
The manufacturing method of a ferroelectric thin film element comprises, in addition to the above-explained method, the steps of orientationally growing a ferroelectric thin film on said thin film multilayered structure and forming an upper electrode on said ferroelectric thin film.

By interposing the epitaxial MgO layer as the buffer layer on the Si substrate, and by adopting the thin film multilayered structure in which the metallic thin film of Ir or Rh, having a face-centered cubic structure, is formed on the MgO layer, an epitaxial metallic thin film excellent in crystalline properties and surface flatness can be formed on the Si substrate. Furthermore, functional thin films of ferroelectrics, etc., having high orientational properties along one or more axes can be formed on the epitaxial metallic thin film. Ir and Rh are unlikely to permit oxygen diffusion or to react with other elements. Therefore, when a ferroelectric thin film element is formed on the Si substrate, using the epitaxially grown Ir or Rh thin film as the lower electrode allows a ferroelectric thin film excellent in crystalline properties to be formed without changing the characteristics of the semiconductor element, etc., formed on the Si substrate.

For the purpose of illustrating the invention, there are shown in the drawings several forms which are presently preferred, it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
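To put rough numbers on the epitaxial relationships claimed above, here is a minimal sketch (not part of the patent) of the cube-on-cube lattice-mismatch arithmetic for this layer stack. The lattice constants are common literature values and should be treated as assumptions: Si ≈ 5.431 Å, MgO ≈ 4.212 Å, Ir ≈ 3.839 Å, Rh ≈ 3.803 Å. The point it illustrates is that Ir and Rh sit within roughly 9 to 10 percent of MgO, whereas the much larger MgO/Si misfit is accommodated through the stated MgO (001) // Si (001), MgO [100] // Si [100] relations.

```python
# Illustrative only: cube-on-cube lattice mismatch for the layer stack
# described above. Lattice constants are assumed literature values (angstroms).
SI, MGO, IR, RH = 5.431, 4.212, 3.839, 3.803

def mismatch(film_a: float, substrate_a: float) -> float:
    """Relative lattice mismatch, (film - substrate) / substrate."""
    return (film_a - substrate_a) / substrate_a

for label, film, sub in [("MgO on Si", MGO, SI),
                         ("Ir on MgO", IR, MGO),
                         ("Rh on MgO", RH, MGO)]:
    print(f"{label}: {mismatch(film, sub):+.1%}")
# Prints roughly: MgO on Si: -22.4%, Ir on MgO: -8.9%, Rh on MgO: -9.7%
```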
Calling humans the weakest link in computer security is dangerous. Don’t touch that computer, human—you can’t be trusted. The idea that humans are the “weakest link” in computer security is very popular among computer scientists and people who work on the technical elements of cybersecurity. After all, if your job is to secure computer systems, it’s satisfying to feel that the problems lie not in the computers but in everyone else. Of course, it’s completely true that many computer security incidents involve human users making bad decisions—opening emails or downloading files despite warning signs; using obvious, easily guessable passwords; ignoring warning signals from their browser or operating system. But that’s no reason for technologists to feel smug about their accomplishments. In fact, just the opposite: these sorts of mistakes are evidence that the technology is failing its human users, not the other way around. Eliminate the humans! Why didn’t we think of that before? It’s an attitude strangely reminiscent of a certain type of hostile librarian who gives the impression that she would much prefer you not touch, or even breathe on, any of the precious books in her care. The whole point of computers—and libraries, for that matter—is that they’re supposed to improve the lives of people, and yet, strangely, it’s the people who end up being painted as the problem. Mims makes plenty of sensible points in his piece about the role of social engineering in computer security incidents, about how susceptible most of us are to phishing attempts, and about how hard it is to educate people on computer security—a topic I’ve also grown increasingly demoralized about. And, in fairness to him, he probably didn’t have any say over that headline. The best parts of his article hint subtly toward encouraging better human-centered design for security, though it can be hard to tell given how dismissive the language is toward humans in general. It’s hard to educate people about what SSL is and how it works, so a human-centered design approach for security would suggest that, say, we create technologies that make it easier for people to tell when they’re being deceived online and that limit the resulting damage—for instance, tools that flag when they’re dealing with emails from people they haven’t interacted with before, or that isolate newly downloaded programs from accessing the rest of a machine and test them for any ill effects. And this approach is not totally unrelated to the philosophy of “assume that humans will fail and automate around them” that Mims cites. The difference lies in whether you assume humans will fail or, instead, assume that their opinions and ideas and instincts should help you design the tools that they use. These may seem like subtle distinctions—and in some ways they probably are. If we end up with better email filtering and security technologies that block more phishing emails from landing in recipients’ inboxes or monitor systems for malware—the technologies Mims specifically advocates for in his piece—does it really matter whether they’re developed by someone raging about the idiocy of humans? I’m not totally certain. I think a healthy cynicism about how easily people are deceived is probably a good thing for a security engineer.
At the same time, I worry that an engineer who is focused on the need to “patch” human behavior, or who is overly inclined to think of humans in the same terms as coding errors and technical glitches, runs the risk of understanding—and respecting—human behavior too little to be able to effectively support it. The first sentence is dead on: there’s no point in building systems that cause problems and then demanding that everyone figure out how to use them better. On the other hand, “locking down” systems so that people can’t make “dumb mistakes” isn’t the right mindset for developing technical tools that make it harder for people to deceive each other or extract information from one another under false pretenses. At some point, computer security technology may reach the point where we can confidently blame breaches on the stupidity of the people who get hacked, but first we have to be sure that technology isn’t also tripping up reasonably bright, competent people—as it seems to still do. But we don’t get to that point by upgrading, or patching, the human brain; we get there by accepting that the onus is on the designers of technology to support—not bypass—people’s decisions and to provide them with the right signals and nudges to make better ones.
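As a concrete illustration of the human-centered controls sketched above, here is a minimal, hypothetical example (mine, not the author's or Mims's) of a first-contact flag for incoming mail: instead of blaming the user, the interface surfaces the one fact a phisher depends on the user missing. The Email type and the way known_senders would be built are assumptions.

```python
# Hypothetical sketch: flag mail from senders the user has never interacted
# with, so deception attempts stand out without requiring security expertise.
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str

def first_contact_banner(email: Email, known_senders: set) -> str:
    """Return a warning banner for first-time senders, or an empty string."""
    if email.sender.lower() not in known_senders:
        return ("You have never received mail from {} before. "
                "Be cautious with links and attachments.".format(email.sender))
    return ""

# Usage: known_senders would be built from the user's past correspondence.
known = {"alice@example.com", "bob@example.com"}
msg = Email(sender="payroll@examp1e.com", subject="Urgent: update your details")
banner = first_contact_banner(msg, known)
if banner:
    print(banner)
```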
/*****************************************************************************

  Licensed to Accellera Systems Initiative Inc. (Accellera) under one or
  more contributor license agreements.  See the NOTICE file distributed
  with this work for additional information regarding copyright ownership.
  Accellera licenses this file to you under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with the
  License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.

 *****************************************************************************/

#ifndef __TLM_CORE_1_REQ_RSP_CHANNELS_FIFO_FIFO_PEEK_HH__
#define __TLM_CORE_1_REQ_RSP_CHANNELS_FIFO_FIFO_PEEK_HH__

namespace tlm {

template <typename T>
inline T tlm_fifo<T>::peek(tlm_tag<T> *) const
{
    while (is_empty()) {
        // call free-standing sc_core::wait(),
        // since sc_prim_channel::wait(.) is not const
        sc_core::wait(m_data_written_event);
    }
    return buffer.read_data();
}

template <typename T>
inline bool tlm_fifo<T>::nb_peek(T &t) const
{
    if (used() < 1) {
        return false;
    }
    t = buffer.peek_data(0);
    return true;
}

// Peek at the n'th item in the FIFO; n == -1 peeks at the newest item.
template <typename T>
inline bool tlm_fifo<T>::nb_peek(T &t, int n) const
{
    if (n >= used() || n < -1) {
        return false;
    }
    if (n == -1) {
        n = used() - 1;
    }
    t = buffer.peek_data(n);
    return true;
}

template <typename T>
inline bool tlm_fifo<T>::nb_can_peek(tlm_tag<T> *) const
{
    return !is_empty();
}

template <typename T>
inline bool tlm_fifo<T>::nb_poke(const T &t, int n)
{
    if (n >= used() || n < 0) {
        return false;
    }
    buffer.poke_data(n) = t;
    return true;
}

} // namespace tlm

#endif /* __TLM_CORE_1_REQ_RSP_CHANNELS_FIFO_FIFO_PEEK_HH__ */
package br.com.arvoreavl.teste;

public class ArvoreAVLTeste {

}
Palestinians and human rights campaigners are marking International Day Against the Wall, but while those oppressed by the barrier hold tours to show their plight, some Israeli companies see it as a chance to cash in. Hani Amer lives on a hot piece of real estate, but not because of any beautiful view. Seven years ago the Israeli army built its wall in front of his house, and also behind it and to the sides, so that he is now completely closed in. A gate in the wall is his only contact with the outside world, and he is the only one who has the key. Construction of the wall began eight years ago along the 1949 armistice line between Israel and the Palestinian West Bank. Israel insists the barrier is for security, but Palestinians call it apartheid. The International Court of Justice has also said it breaches international law. As well as keeping people out, the wall is also bringing some in. Businessman Abu Hasan is making money from Hani’s misery. He runs what he calls “alternative tours,” bringing tourists to see Israeli settlements, checkpoints and border fences. Abu Hasan says many tourists are shocked to see the wall. “I have a lot of people crying on the tour and I have Jewish people coming with me often, Jewish Americans,” he said. Jewish Israelis are not allowed to sign up because some of the areas are closed to Israeli citizens. Israel insists the wall has kept suicide bombers out. Israeli Defense Forces spokesperson Avital Liebovich says most of the nearly 500-kilometer barrier is fence, and less than 4 per cent is concrete. However, there is nothing regular about caging people in, say an Israeli couple who are walking the wall on the tour. Gal Lugassi is leaving for America, where she has arranged to give talks and raise money to help Palestinians. Still, the wall means money for some travel agencies, which win business from tours showcasing, or perhaps reveling in, Israeli security measures. If you are planning a trip to the Holy Land, gone are the days when a visit meant only the traditional holy sites: go online and you will find tours like “the ultimate counter-terrorism mission,” which offers a tour with the army, or “terror tourism,” a tour that includes a Palestinian raid. However, it is no holiday for Hani: while tourists can come and go, he is stuck on his piece of land, hemmed in by the Israeli wall.
/**
 * Key class which executes VFA automation.
 *
 * @author Bob Marks
 */
public class VfaTestRunner {

    public void executeAll(VfaFeature feature) {
        logFeature(feature);
        for (VfaScenario scenario : feature.getScenarios()) {
            executeScenario(scenario);
        }
    }

    // Feature methods

    public void logFeature(VfaFeature feature) { // FIXME MOVE
        print("Feature: ", CliColour.feature);
        println(feature.getName(), CliColour.strong);
        println();
        if (feature.getDescription() != null) {
            println(" " + feature.getDescription().trim(), CliColour.description);
            println();
        }
    }

    // Scenario methods

    public void logScenario(VfaScenario scenario) { // FIXME MOVE
        print(" Scenario: ", CliColour.scenario);
        println(scenario.getName(), CliColour.strong);
        println();
    }

    public void executeScenario(VfaScenario scenario) {
        logScenario(scenario);
        for (VfaStep step : scenario.getSteps()) {
            executeStep(scenario, step);
        }
    }

    // Step methods

    public void executeStep(VfaScenario scenario, VfaStep step) {
        print(VfaUtil.padLeft(step.getType().getName(), 9) + " ", CliColour.step); // TODO improve
        print(step.getName());
        print(VfaUtil.pad(60 - step.getName().length())); // TODO improve
        if (step.getActions() == null) {
            return;
        }
        for (int i = 0; i < step.getActions().size(); i++) {
            VfaAction vfaAction = step.getActions().get(i);
            if (i > 0) { // new line
                print(VfaUtil.pad(10 + 60)); // TODO improve
            }
            executeAction(vfaAction);
        }
    }

    public void executeAction(VfaAction action) {
        print(": ", CliColour.actionSquare);
        print(action.getCommand(), CliColour.actionCommand);
        VfaResult result = action.execute();
        println(" " + result.getArguments(), CliColour.actionArguments);
    }

    // Print methods

    private void print(String input, CliColour cliColour) {
        System.out.print(colorize(input, cliColour.getAttribute()));
        System.out.flush();
    }

    private void print(String input) {
        print(input, CliColour.normal);
    }

    private void println(String input, CliColour cliColour) {
        print(input, cliColour);
        System.out.println();
    }

    private void println(String input) {
        println(input, CliColour.normal);
    }

    private void println() {
        System.out.println();
    }
}
import { Component, OnInit } from '@angular/core';
import { NetworkService } from '../../common/services/network.service';
import { FactoryService } from '../../common/services/factory.service';

@Component({
  selector: 'app-ping',
  templateUrl: './ping.component.html',
  styleUrls: ['./ping.component.css']
})
export class PingComponent implements OnInit {

  ipAddress: string;

  constructor(private networkService: NetworkService, private factory: FactoryService) { }

  ngOnInit() {
  }

  ping() {
    this.networkService.pingAddress(this.ipAddress);
  }
}
Austrian judicial system of 1867
The article examines the Austrian judicial system formed on the basis of the Basic Constitutional Law of Austria on Judicial Power of December 27, 1867, and the requirements for individuals who wished to become judges. A judge could be any male Austrian citizen who had a university degree in law and practical experience of at least three years, and who had successfully passed the written and oral exams. Examination commissions were set up annually by the Minister of Justice at each higher regional court. They included law professors and skilled practitioners. Thus, the professionalism of judges was ensured. Judges were appointed for life by the emperor or by relevant officials on his behalf. At the time of their appointment, they took an official oath and swore to strictly abide by the constitution and laws of Austria-Hungary. All decisions were made on behalf of the emperor. Judges were recognized as free and independent in their decisions. In 1908, in Eastern Galicia, 63.8% of judges were of Polish nationality and 31.8% were Ukrainians. From 1870 Eastern Galicia had one higher regional court in Lviv and 5 district courts, and from the beginning of the XX century 10 district courts. The functions and powers of the Supreme Judicial and Cassation Tribunal in Vienna (the State Tribunal), which was the highest court in Austria, are highlighted. The competence of cases in which the State Tribunal made decisions as a court of first instance and the procedure for their consideration are analyzed. The procedure of formation of the composition of the State Tribunal is covered. Along with the State Tribunal, the Administrative Tribunal functioned in Austria, created on the basis of a law adopted by the Austrian Parliament in 1875. The structure, powers and functions of the High Regional Courts, District Courts and County Courts are analyzed. The peculiarities of the functioning of the Austrian judicial system in Galicia in 1867–1918 are highlighted.
Rice Prices and Poverty in Liberia
When assessing the impact of changes in food prices on poverty, it is important to consider food producers (who may benefit from an increase in prices) as well as consumers (who lose out when the price increases), with a focus on poor consumers and producers. In the case of rice in Liberia, however, the impact of a change in price is not ambiguous, because a large share of the rice consumed is imported, while locally produced rice is used mostly for the producers' own consumption. An increase in the price of rice will result in higher poverty in the country as a whole (even if some local producers will gain from this increase), while a reduction in price will reduce poverty. Furthermore, because rice represents a large share of food consumption, any change in its price is likely to have a large impact on poverty. Using data from the 2007 CWIQ survey, the paper finds that an increase or decrease of 20 percent in the price of rice could lead to an increase or decrease of three to four percentage points in the share of the population in poverty.
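The first-order arithmetic behind estimates of this kind can be made concrete. Below is a minimal sketch (not from the paper; the household numbers are invented for illustration) of the standard net-benefit approximation, in which the welfare impact of a price change, as a share of total expenditure, is roughly the household's production share minus its consumption share, times the proportional price change.

```python
# Hypothetical illustration of the standard first-order welfare approximation
# for a food price change. The paper's actual estimates come from the 2007
# CWIQ survey; the shares below are invented.

def welfare_impact(consumption_share: float, production_share: float,
                   price_change: float) -> float:
    """First-order change in real income as a share of total expenditure."""
    return -(consumption_share - production_share) * price_change

# A net-consuming household spending 25% of its budget on rice, producing none:
print(welfare_impact(0.25, 0.00, 0.20))   # -0.05 -> about 5% worse off after a 20% rise

# A net-producing household: the same price rise makes it better off.
print(welfare_impact(0.10, 0.30, 0.20))   # +0.04 -> about 4% better off
```

Aggregating such household-level impacts against the poverty line is what yields the three-to-four-percentage-point movement in the poverty headcount reported above.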
Michael Gerson is a nationally syndicated columnist who appears twice weekly in The Post. He is the author of “Heroic Conservatism” (HarperOne, 2007) and co-author of “City of Man: Religion and Politics in a New Era” (Moody, 2010). He appears regularly on the “PBS NewsHour,” “Face the Nation” and other programs. Gerson serves as senior adviser at One, a bipartisan organization dedicated to the fight against extreme poverty and preventable diseases. Until 2006, Gerson was a top aide to President George W. Bush, serving as assistant to the president for policy and strategic planning. Prior to that appointment, he served in the White House as deputy assistant to the president and director of presidential speechwriting, and as assistant to the president for speechwriting and policy adviser.

The GOP wants America to move on. But we can’t ignore this corruption. Congress can’t look past evidence of apparent intended obstruction and conspiracy.
What if some future president views evangelical Protestantism as incompatible with American democracy?
Politics based on team loyalty ceases to serve political purposes.
His appeal to inner demons above better angels proved easier than many of us hoped.
The system that continues to oppress black Americans must be consciously shut down.
The president’s threat to cut off assistance to Central American countries proves the administration’s lack of vision.
It is still a legal judgment call whether the president is a crook.
If we live in the future, all of us are eventually dead.
We are headed into a time of political testing, when the right words from a responsible conservative might turn some crucial tide.
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.camel.quarkus.component.rss.deployment;

import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import io.quarkus.deployment.annotations.BuildProducer;
import io.quarkus.deployment.annotations.BuildStep;
import io.quarkus.deployment.builditem.FeatureBuildItem;
import io.quarkus.deployment.builditem.IndexDependencyBuildItem;
import io.quarkus.deployment.builditem.nativeimage.NativeImageResourceBuildItem;
import io.quarkus.deployment.builditem.nativeimage.ReflectiveClassBuildItem;

class RssProcessor {

    private static final String FEATURE = "camel-rss";

    @BuildStep
    FeatureBuildItem feature() {
        return new FeatureBuildItem(FEATURE);
    }

    @BuildStep
    NativeImageResourceBuildItem nativeImageResources() {
        return new NativeImageResourceBuildItem("com/rometools/rome/rome.properties");
    }

    @BuildStep
    IndexDependencyBuildItem indexDependencies() {
        return new IndexDependencyBuildItem("com.rometools", "rome");
    }

    @BuildStep
    void registerForReflection(BuildProducer<ReflectiveClassBuildItem> reflectiveClass) {
        // Register for reflection feed parser / generator classes from rome.properties
        try (InputStream stream = Thread.currentThread().getContextClassLoader()
                .getResourceAsStream("com/rometools/rome/rome.properties")) {
            Properties properties = new Properties();
            properties.load(stream);

            List<String> parserGenerators = new ArrayList<>();
            for (Map.Entry<Object, Object> entry : properties.entrySet()) {
                for (String className : entry.getValue().toString().split(" ")) {
                    parserGenerators.add(className);
                }
            }

            reflectiveClass.produce(
                    new ReflectiveClassBuildItem(false, false,
                            parserGenerators.toArray(new String[parserGenerators.size()])));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }

        // Rome does some reflective work on classes that can be cloned
        String[] clonableClasses = new String[] {
                "com.rometools.rome.feed.module.DCModuleImpl",
                "com.rometools.rome.feed.module.SyModuleImpl",
                "com.rometools.rome.feed.module.ModuleImpl",
                "java.util.Date",
        };
        reflectiveClass.produce(new ReflectiveClassBuildItem(true, false, clonableClasses));
    }
}
Band-Offset Engineering for GeSn-SiGeSn Hetero Tunnel FETs and the Role of Strain
In this paper a simulation study of the effect of conduction and valence band offsets on the subthreshold swing (SS) of a double-gate tunnel field-effect transistor (TFET) with gate-overlapped source is presented. The simulations show that if the pn-junction and the hetero-junction coincide, the band offsets can significantly improve the SS by suppressing the so-called point tunneling at the pn-junction. It turns out that the performance of an n-channel TFET is determined by the direct conduction band offset, whereas that of a p-channel TFET is mainly affected by the energy difference between the light-hole bands of the two materials. Thus, the performance of the hetero-junction TFET can be improved by selecting material systems with high conduction or valence band offsets. Misalignment between the pn-junction and the hetero-junction is shown to degrade the SS. The above-described band-offset engineering has been applied to the GeSn/SiGeSn hetero-structure system with and without strain. Simulations of GeSn/SiGeSn hetero-TFETs with band-to-band-tunneling parameters determined from pseudopotential calculations show that compressive strain in GeSn widens the design space for TFET application while tensile strain reduces it.
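For readers outside the device-physics community, the figure of merit being optimized here has a standard textbook definition (this is general background, not specific to the paper):

```latex
% Subthreshold swing: the gate-voltage increase needed for a tenfold
% increase in drain current.
\[
  \mathrm{SS} = \left( \frac{\partial \log_{10} I_{\mathrm{D}}}{\partial V_{\mathrm{GS}}} \right)^{-1}
  \qquad \text{(mV/dec)}
\]
% A conventional MOSFET is thermally limited to
% $\ln(10)\, k_{\mathrm{B}} T / q \approx 60$ mV/dec at room temperature.
% TFETs inject carriers by band-to-band tunneling rather than thermionic
% emission, so they can beat this limit; that is why suppressing the
% parasitic point tunneling at the pn-junction matters for SS.
```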
// Copyright 2016 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "media/gpu/windows/d3d11_h264_accelerator.h"

#include <windows.h>

#include "base/memory/ptr_util.h"
#include "base/trace_event/trace_event.h"
#include "media/gpu/h264_decoder.h"
#include "media/gpu/h264_dpb.h"
#include "media/gpu/windows/d3d11_picture_buffer.h"
#include "media/gpu/windows/return_on_failure.h"
#include "third_party/angle/include/EGL/egl.h"
#include "third_party/angle/include/EGL/eglext.h"
#include "ui/gfx/color_space.h"
#include "ui/gl/gl_bindings.h"
#include "ui/gl/gl_context.h"
#include "ui/gl/gl_image_dxgi.h"
#include "ui/gl/gl_surface_egl.h"
#include "ui/gl/scoped_binders.h"

namespace media {

class D3D11H264Picture : public H264Picture {
 public:
  D3D11H264Picture(D3D11PictureBuffer* picture)
      : picture(picture), level_(picture->level()) {
    picture->set_in_picture_use(true);
  }

  D3D11PictureBuffer* picture;
  size_t level_;

 protected:
  ~D3D11H264Picture() override;
};

D3D11H264Picture::~D3D11H264Picture() {
  picture->set_in_picture_use(false);
}

D3D11H264Accelerator::D3D11H264Accelerator(
    D3D11VideoDecoderClient* client,
    Microsoft::WRL::ComPtr<ID3D11VideoDecoder> video_decoder,
    Microsoft::WRL::ComPtr<ID3D11VideoDevice> video_device,
    Microsoft::WRL::ComPtr<ID3D11VideoContext> video_context)
    : client_(client),
      video_decoder_(video_decoder),
      video_device_(video_device),
      video_context_(video_context) {}

D3D11H264Accelerator::~D3D11H264Accelerator() {}

scoped_refptr<H264Picture> D3D11H264Accelerator::CreateH264Picture() {
  D3D11PictureBuffer* picture = client_->GetPicture();
  if (!picture) {
    return nullptr;
  }
  return base::MakeRefCounted<D3D11H264Picture>(picture);
}

bool D3D11H264Accelerator::SubmitFrameMetadata(
    const H264SPS* sps,
    const H264PPS* pps,
    const H264DPB& dpb,
    const H264Picture::Vector& ref_pic_listp0,
    const H264Picture::Vector& ref_pic_listb0,
    const H264Picture::Vector& ref_pic_listb1,
    const scoped_refptr<H264Picture>& pic) {
  scoped_refptr<D3D11H264Picture> our_pic(
      static_cast<D3D11H264Picture*>(pic.get()));

  HRESULT hr;
  for (;;) {
    hr = video_context_->DecoderBeginFrame(
        video_decoder_.Get(), our_pic->picture->output_view().Get(), 0,
        nullptr);

    if (hr == E_PENDING || hr == D3DERR_WASSTILLDRAWING) {
      // Hardware is busy.  We should make the call again.
      // TODO(liberato): For now, just busy wait.
      ;
    } else if (!SUCCEEDED(hr)) {
      LOG(ERROR) << "DecoderBeginFrame failed";
      return false;
    } else {
      break;
    }
  }

  sps_ = *sps;
  for (size_t i = 0; i < 16; i++) {
    ref_frame_list_[i].bPicEntry = 0xFF;
    field_order_cnt_list_[i][0] = 0;
    field_order_cnt_list_[i][1] = 0;
    frame_num_list_[i] = 0;
  }
  used_for_reference_flags_ = 0;
  non_existing_frame_flags_ = 0;

  int i = 0;
  // TODO(liberato): this is similar to H264Accelerator.  can they share code?
  for (auto it = dpb.begin(); it != dpb.end(); it++) {
    scoped_refptr<D3D11H264Picture> our_ref_pic(
        static_cast<D3D11H264Picture*>(it->get()));
    if (!our_ref_pic->ref) {
      i++;
      continue;
    }
    ref_frame_list_[i].Index7Bits = our_ref_pic->level_;
    ref_frame_list_[i].AssociatedFlag = our_ref_pic->long_term;
    field_order_cnt_list_[i][0] = our_ref_pic->top_field_order_cnt;
    field_order_cnt_list_[i][1] = our_ref_pic->bottom_field_order_cnt;
    frame_num_list_[i] = ref_frame_list_[i].AssociatedFlag
                             ? our_ref_pic->long_term_pic_num
                             : our_ref_pic->frame_num;
    int ref = 3;
    used_for_reference_flags_ |= ref << (2 * i);
    non_existing_frame_flags_ |= (our_ref_pic->nonexisting) << i;
    i++;
  }
  slice_info_.clear();
  return RetrieveBitstreamBuffer();
}

bool D3D11H264Accelerator::RetrieveBitstreamBuffer() {
  DCHECK(!bitstream_buffer_bytes_);
  DCHECK(!bitstream_buffer_size_);
  current_offset_ = 0;

  void* buffer;
  UINT buffer_size;
  HRESULT hr = video_context_->GetDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_BITSTREAM, &buffer_size,
      &buffer);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "GetDecoderBuffer (Bitstream) failed";
    return false;
  }
  bitstream_buffer_bytes_ = (uint8_t*)buffer;
  bitstream_buffer_size_ = buffer_size;
  return true;
}

bool D3D11H264Accelerator::SubmitSlice(const H264PPS* pps,
                                       const H264SliceHeader* slice_hdr,
                                       const H264Picture::Vector& ref_pic_list0,
                                       const H264Picture::Vector& ref_pic_list1,
                                       const scoped_refptr<H264Picture>& pic,
                                       const uint8_t* data,
                                       size_t size) {
  scoped_refptr<D3D11H264Picture> our_pic(
      static_cast<D3D11H264Picture*>(pic.get()));
  DXVA_PicParams_H264 pic_param = {};

#define FROM_SPS_TO_PP(a) pic_param.a = sps_.a
#define FROM_SPS_TO_PP2(a, b) pic_param.a = sps_.b
#define FROM_PPS_TO_PP(a) pic_param.a = pps->a
#define FROM_PPS_TO_PP2(a, b) pic_param.a = pps->b
#define FROM_SLICE_TO_PP(a) pic_param.a = slice_hdr->a
#define FROM_SLICE_TO_PP2(a, b) pic_param.a = slice_hdr->b

  FROM_SPS_TO_PP2(wFrameWidthInMbsMinus1, pic_width_in_mbs_minus1);
  FROM_SPS_TO_PP2(wFrameHeightInMbsMinus1, pic_height_in_map_units_minus1);
  pic_param.CurrPic.Index7Bits = our_pic->level_;
  pic_param.CurrPic.AssociatedFlag = slice_hdr->bottom_field_flag;
  FROM_SPS_TO_PP2(num_ref_frames, max_num_ref_frames);

  FROM_SLICE_TO_PP(field_pic_flag);
  pic_param.MbaffFrameFlag =
      sps_.mb_adaptive_frame_field_flag && pic_param.field_pic_flag;
  FROM_SPS_TO_PP2(residual_colour_transform_flag, separate_colour_plane_flag);
  FROM_SLICE_TO_PP(sp_for_switch_flag);
  FROM_SPS_TO_PP(chroma_format_idc);
  pic_param.RefPicFlag = pic->ref;
  FROM_PPS_TO_PP(constrained_intra_pred_flag);
  FROM_PPS_TO_PP(weighted_pred_flag);
  FROM_PPS_TO_PP(weighted_bipred_idc);
  pic_param.MbsConsecutiveFlag = 1;
  FROM_SPS_TO_PP(frame_mbs_only_flag);
  FROM_PPS_TO_PP(transform_8x8_mode_flag);
  // TODO(liberato): sandersd@ believes that this should only be set for level
  // >= 3.1 .  verify this and fix as needed.
  pic_param.MinLumaBipredSize8x8Flag = 1;
  pic_param.IntraPicFlag = slice_hdr->IsISlice();
  FROM_SPS_TO_PP(bit_depth_luma_minus8);
  FROM_SPS_TO_PP(bit_depth_chroma_minus8);
  // The latest DXVA decoding guide says to set this to 3 if the software
  // decoder (this class) is following the guide.
  pic_param.Reserved16Bits = 3;
  memcpy(pic_param.RefFrameList, ref_frame_list_,
         sizeof pic_param.RefFrameList);
  if (pic_param.field_pic_flag && pic_param.CurrPic.AssociatedFlag) {
    pic_param.CurrFieldOrderCnt[1] = pic->bottom_field_order_cnt;
    pic_param.CurrFieldOrderCnt[0] = 0;
  } else if (pic_param.field_pic_flag && !pic_param.CurrPic.AssociatedFlag) {
    pic_param.CurrFieldOrderCnt[0] = pic->top_field_order_cnt;
    pic_param.CurrFieldOrderCnt[1] = 0;
  } else {
    pic_param.CurrFieldOrderCnt[0] = pic->top_field_order_cnt;
    pic_param.CurrFieldOrderCnt[1] = pic->bottom_field_order_cnt;
  }
  memcpy(pic_param.FieldOrderCntList, field_order_cnt_list_,
         sizeof pic_param.FieldOrderCntList);
  FROM_PPS_TO_PP(pic_init_qs_minus26);
  FROM_PPS_TO_PP(chroma_qp_index_offset);
  FROM_PPS_TO_PP(second_chroma_qp_index_offset);
  pic_param.ContinuationFlag = 1;
  FROM_PPS_TO_PP(pic_init_qp_minus26);
  FROM_PPS_TO_PP2(num_ref_idx_l0_active_minus1,
                  num_ref_idx_l0_default_active_minus1);
  FROM_PPS_TO_PP2(num_ref_idx_l1_active_minus1,
                  num_ref_idx_l1_default_active_minus1);
  // UNUSED: Reserved8BitsA
  memcpy(pic_param.FrameNumList, frame_num_list_,
         sizeof pic_param.FrameNumList);
  pic_param.UsedForReferenceFlags = used_for_reference_flags_;
  pic_param.NonExistingFrameFlags = non_existing_frame_flags_;
  pic_param.frame_num = pic->frame_num;
  FROM_SPS_TO_PP(log2_max_frame_num_minus4);
  FROM_SPS_TO_PP(pic_order_cnt_type);
  FROM_SPS_TO_PP(log2_max_pic_order_cnt_lsb_minus4);
  FROM_SPS_TO_PP(delta_pic_order_always_zero_flag);
  FROM_SPS_TO_PP(direct_8x8_inference_flag);
  FROM_PPS_TO_PP(entropy_coding_mode_flag);
  FROM_PPS_TO_PP2(pic_order_present_flag,
                  bottom_field_pic_order_in_frame_present_flag);
  FROM_PPS_TO_PP(num_slice_groups_minus1);
  CHECK_EQ(0u, pic_param.num_slice_groups_minus1);
  // UNUSED: slice_group_map_type
  FROM_PPS_TO_PP(deblocking_filter_control_present_flag);
  FROM_PPS_TO_PP(redundant_pic_cnt_present_flag);
  // UNUSED: Reserved8BitsB
  // UNUSED: slice_group_change_rate
  pic_param.StatusReportFeedbackNumber = 1;

  UINT buffer_size;
  void* buffer;
  HRESULT hr = video_context_->GetDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS,
      &buffer_size, &buffer);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "GetDecoderBuffer (PictureParams) failed";
    return false;
  }
  memcpy(buffer, &pic_param, sizeof(pic_param));
  hr = video_context_->ReleaseDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "ReleaseDecoderBuffer (PictureParams) failed";
    return false;
  }

  DXVA_Qmatrix_H264 iq_matrix_buf = {};
  if (pps->pic_scaling_matrix_present_flag) {
    for (int i = 0; i < 6; ++i) {
      for (int j = 0; j < 16; ++j)
        iq_matrix_buf.bScalingLists4x4[i][j] = pps->scaling_list4x4[i][j];
    }
    for (int i = 0; i < 2; ++i) {
      for (int j = 0; j < 64; ++j)
        iq_matrix_buf.bScalingLists8x8[i][j] = pps->scaling_list8x8[i][j];
    }
  } else {
    for (int i = 0; i < 6; ++i) {
      for (int j = 0; j < 16; ++j)
        iq_matrix_buf.bScalingLists4x4[i][j] = sps_.scaling_list4x4[i][j];
    }
    for (int i = 0; i < 2; ++i) {
      for (int j = 0; j < 64; ++j)
        iq_matrix_buf.bScalingLists8x8[i][j] = sps_.scaling_list8x8[i][j];
    }
  }
  hr = video_context_->GetDecoderBuffer(
      video_decoder_.Get(),
      D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX, &buffer_size,
      &buffer);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "GetDecoderBuffer (QuantMatrix) failed";
    return false;
  }
  memcpy(buffer, &iq_matrix_buf, sizeof(iq_matrix_buf));
  hr = video_context_->ReleaseDecoderBuffer(
      video_decoder_.Get(),
      D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "ReleaseDecoderBuffer (QuantMatrix) failed";
    return false;
  }

  // Ideally all slices in a frame are put in the same bitstream buffer.
  // However the bitstream buffer may not fit all the data, so split on the
  // necessary boundaries.  The +3 reserves room for the NALU start code
  // (00 00 01) written below.
  size_t out_bitstream_size = size + 3;

  size_t remaining_bitstream = out_bitstream_size;
  size_t start_location = 0;

  while (remaining_bitstream > 0) {
    if (bitstream_buffer_size_ < remaining_bitstream &&
        slice_info_.size() > 0) {
      if (!SubmitSliceData()) {
        LOG(ERROR) << "SubmitSliceData failed";
        return false;
      }

      if (!RetrieveBitstreamBuffer()) {
        LOG(ERROR) << "RetrieveBitstreamBuffer failed";
        return false;
      }
    }

    size_t bytes_to_copy = remaining_bitstream;
    bool contains_end = true;
    if (bytes_to_copy > bitstream_buffer_size_) {
      bytes_to_copy = bitstream_buffer_size_;
      contains_end = false;
    }
    size_t real_bytes_to_copy = bytes_to_copy;
    // TODO(jbauman): fix hack
    uint8_t* out_start = bitstream_buffer_bytes_;
    if (bytes_to_copy >= 3 && start_location == 0) {
      // Prepend the three-byte NALU start code before the slice data.
      *(out_start++) = 0;
      *(out_start++) = 0;
      *(out_start++) = 1;
      real_bytes_to_copy -= 3;
    }
    memcpy(out_start, data + start_location, real_bytes_to_copy);

    DXVA_Slice_H264_Short slice_info = {};
    slice_info.BSNALunitDataLocation = (UINT)current_offset_;
    slice_info.SliceBytesInBuffer = (UINT)bytes_to_copy;
    if (contains_end && start_location == 0)
      slice_info.wBadSliceChopping = 0;
    else if (!contains_end && start_location == 0)
      slice_info.wBadSliceChopping = 1;
    else if (contains_end && start_location != 0)
      slice_info.wBadSliceChopping = 2;
    else
      slice_info.wBadSliceChopping = 3;

    slice_info_.push_back(slice_info);
    bitstream_buffer_size_ -= bytes_to_copy;
    current_offset_ += bytes_to_copy;
    start_location += bytes_to_copy;
    remaining_bitstream -= bytes_to_copy;
    bitstream_buffer_bytes_ += bytes_to_copy;
  }

  return true;
}

bool D3D11H264Accelerator::SubmitSliceData() {
  CHECK(slice_info_.size() > 0);
  UINT buffer_size;
  void* buffer;

  // TODO(liberato): Should we release the other buffers on failure?
  HRESULT hr = video_context_->GetDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL,
      &buffer_size, &buffer);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "GetDecoderBuffer (SliceControl) failed";
    return false;
  }

  CHECK_LE(sizeof(slice_info_[0]) * slice_info_.size(), buffer_size);
  memcpy(buffer, &slice_info_[0], sizeof(slice_info_[0]) * slice_info_.size());
  hr = video_context_->ReleaseDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "ReleaseDecoderBuffer (SliceControl) failed";
    return false;
  }

  hr = video_context_->ReleaseDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_BITSTREAM);
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "ReleaseDecoderBuffer (BitStream) failed";
    return false;
  }

  D3D11_VIDEO_DECODER_BUFFER_DESC buffers[4] = {};
  buffers[0].BufferType = D3D11_VIDEO_DECODER_BUFFER_PICTURE_PARAMETERS;
  buffers[0].DataOffset = 0;
  buffers[0].DataSize = sizeof(DXVA_PicParams_H264);
  buffers[1].BufferType =
      D3D11_VIDEO_DECODER_BUFFER_INVERSE_QUANTIZATION_MATRIX;
  buffers[1].DataOffset = 0;
  buffers[1].DataSize = sizeof(DXVA_Qmatrix_H264);
  buffers[2].BufferType = D3D11_VIDEO_DECODER_BUFFER_SLICE_CONTROL;
  buffers[2].DataOffset = 0;
  buffers[2].DataSize = (UINT)(sizeof(slice_info_[0]) * slice_info_.size());
  buffers[3].BufferType = D3D11_VIDEO_DECODER_BUFFER_BITSTREAM;
  buffers[3].DataOffset = 0;
  buffers[3].DataSize = (UINT)current_offset_;

  hr = video_context_->SubmitDecoderBuffers(video_decoder_.Get(), 4, buffers);
  current_offset_ = 0;
  slice_info_.clear();
  bitstream_buffer_bytes_ = nullptr;
  bitstream_buffer_size_ = 0;
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "SubmitDecoderBuffers failed";
    return false;
  }

  return true;
}

bool D3D11H264Accelerator::SubmitDecode(const scoped_refptr<H264Picture>& pic) {
  if (!SubmitSliceData()) {
    LOG(ERROR) << "SubmitSliceData failed";
    return false;
  }

  HRESULT hr = video_context_->DecoderEndFrame(video_decoder_.Get());
  if (!SUCCEEDED(hr)) {
    LOG(ERROR) << "DecoderEndFrame failed";
    return false;
  }

  return true;
}

void D3D11H264Accelerator::Reset() {
  if (!bitstream_buffer_bytes_)
    return;

  HRESULT hr = video_context_->ReleaseDecoderBuffer(
      video_decoder_.Get(), D3D11_VIDEO_DECODER_BUFFER_BITSTREAM);

  bitstream_buffer_bytes_ = nullptr;
  bitstream_buffer_size_ = 0;
  current_offset_ = 0;
  CHECK(SUCCEEDED(hr));
}

bool D3D11H264Accelerator::OutputPicture(
    const scoped_refptr<H264Picture>& pic) {
  scoped_refptr<D3D11H264Picture> our_pic(
      static_cast<D3D11H264Picture*>(pic.get()));
  client_->OutputResult(our_pic->picture);
  return true;
}

}  // namespace media
Quebec's Director General of Elections acknowledged that "an incident" occurred on Aug. 24 but said it was limited to one computer, not the computer system. Quebec’s director general of elections is denying a news report that says confidential information contained in the provincial voter list was hacked during a computer security breach at the beginning of the provincial election campaign. In a statement issued on Friday, the provincial elections office rejected a Journal de Montréal report that said the chief electoral officer’s computer system was the target of an attack on Aug. 24, and that the evidence of it was erased without informing the police. However, Quebec’s director general of elections, Pierre Reid, acknowledged that “an incident” occurred on Aug. 24. But a “serious and rigorous” examination showed that the “attempted attack” on the network was limited to one computer and that the computer system wasn’t compromised, the statement said. A person posing as a computer technician managed to convince an employee of a returning officer to provide remote access to the employee’s computer under the pretence that the fake technician wanted to protect the computer from a virus. When the impostor asked to be paid for the services, the employee realized that it was an attempt at phishing, the statement said. The fake technician’s takeover of the computer ended there, according to Reid. The computer contained spreadsheets that weren’t connected to the Director General of Elections’ systems and that contained information concerning 50 people. Some personal information, including the curricula vitae and addresses of two of the 50, was in the spreadsheets. The two people, who worked for the elections office, were advised of the incident, Reid added. The statement says the director general of elections has asked its units to review procedures to avoid any similar incident in the future.
BDNF and extracellular matrix regulate differentiation of mice neurosphere-derived cells into a GABAergic neuronal phenotype
Differentiation of neurosphere-derived cells is regulated by extracellular cues, namely, growth factors and proteins of the extracellular matrix (ECM). In this study we analyzed the influence of nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), retinoic acid plus potassium chloride (RA+KCl), and the nonsynthetic ECMs laminin (LN) and fibronectin (FN) versus the synthetic adhesion substrate poly-L-lysine (PLL) on the in vitro differentiation of postnatal neurosphere cells. BDNF increased the number of differentiated neurons and decreased the number of neuronal precursors (nestin-positive cells) compared with NGF or RA+KCl. Moreover, cells treated with BDNF plus B27 supplement acquired a γ-aminobutyric acid (GABA)ergic phenotype and showed increased survival. No significant differences were found in the number of differentiated neurons in the presence of the ECMs alone. Nevertheless, FN or PLL in combination with BDNF promoted the acquisition of a GABAergic phenotype. The results obtained in this study highlight the importance of growth factors and ECM proteins for the potential of neurosphere cells to differentiate into neurons. © 2009 Wiley-Liss, Inc.
# Copyright (c) 2020, Frappe Technologies Pvt. Ltd. and Contributors
# MIT License. See license.txt
from __future__ import unicode_literals

import frappe, requests
from werkzeug.useragents import UserAgent


def get_context(context):
    user_agent = UserAgent(frappe.request.headers.get("User-Agent"))
    platform = user_agent.platform

    context.download_links = get_download_links()
    context.default_download_link = "https://github.com/frappe/books/releases/latest"

    if platform == "macos":
        context.platform = "macOS"
    elif platform == "windows":
        context.platform = "Windows"
    elif platform == "linux":
        context.platform = "Linux"

    context.developer_mode = frappe.conf.developer_mode


def get_download_links():
    def get_from_github():
        # find the download links from the latest release
        try:
            response = requests.get(
                "https://api.github.com/repos/frappe/books/releases/latest"
            )
        except Exception:
            return {}

        if response.ok:
            data = response.json()
            assets = data["assets"]
            platform_download_links = {}

            for asset in assets:
                browser_download_url = asset["browser_download_url"]
                extension = browser_download_url.rsplit(".", 1)[1]

                if extension == "dmg":
                    platform_download_links["macOS"] = browser_download_url
                elif extension == "exe":
                    platform_download_links["Windows"] = browser_download_url
                elif extension == "AppImage":
                    platform_download_links["Linux"] = browser_download_url

            frappe.cache().hset(
                "books_download_links", data["name"], platform_download_links
            )
            return platform_download_links

        return {}

    links = frappe.cache().hgetall("books_download_links")
    if not links:
        return get_from_github()

    # list.sort() sorts in place and returns None, so the original
    # `list(links.keys()).sort()` always yielded None; use sorted() instead.
    versions = sorted(links.keys())
    if not versions:
        return get_from_github()

    latest_version = versions[-1]
    return links.get(latest_version, {})
// NewHTTPServer - returns a new *http.Server instance bootstrapped with our own GraphQL server.
func NewHTTPServer(port string) (*http.Server, error) {
	s := &server{}

	mux := mux.NewRouter()
	mux.Handle("/", handler.Playground("LINGVA Playground", "/graphql"))

	config := gql.Config{Resolvers: s}
	mux.Handle("/graphql", auth.ValidateMiddleware(handler.GraphQL(
		gql.NewExecutableSchema(config),
		handler.ErrorPresenter(func(ctx context.Context, err error) *gqlerror.Error {
			return &gqlerror.Error{
				Message: err.Error(),
				Path:    graphql.GetResolverContext(ctx).Path(),
			}
		}),
	)))

	mux.HandleFunc("/login", Login).Methods("POST")
	mux.HandleFunc("/admin-login", AdminLogin).Methods("POST")
	mux.HandleFunc("/image/{imageName}", ImageHandler).Methods("GET")

	return &http.Server{
		Addr:    ":" + port,
		Handler: cors.AllowAll().Handler(mux),
	}, nil
}
/**
 * Called before every write to ensure we are ready to write. <br/>
 * This method also checks if there is a current table lock, and increments
 * the {@link #writesInProgress} counter.
 * <p/>
 * If the engine is spinning down then we throw because engines are read-only
 * when spinning down.
 */
protected Transaction ensureWriteReady() throws TransactionStateException {
    // Wait until any table lock held by another thread is released.
    long tblLock = tableLock.get();
    if (tblLock != -1 && tblLock != Thread.currentThread().getId()) {
        synchronized (tableLockSync) {
            tblLock = tableLock.get();
            while (tblLock != -1 && tblLock != Thread.currentThread().getId()) {
                try {
                    tableLockSync.wait();
                } catch (InterruptedException ex) {
                    // Interrupted while waiting; loop and re-check the lock.
                }
                tblLock = tableLock.get();
            }
        }
    }

    Transaction tx = this.transactionManager.getTransaction();
    if (tx != null) {
        if (tx.isReadOnly()) {
            throw new TransactionStateException("Cannot write in a read-only transaction");
        }
        if (tx.isCommitted()) {
            throw new TransactionStateException("Cannot write in an already committed transaction");
        }
        if (tx.isReverted()) {
            throw new TransactionStateException("Cannot write in an already reverted transaction");
        }
    }

    EngineState state = this.state.get();
    if (state == EngineState.SpunDown || state == EngineState.SpinningDown) {
        throw new EngineStateException("Write operations not supported on an engine that is spinning down", state);
    }

    this.transactionManager.bindEngineToCurrentTransaction(this);

    int inprog = this.writesInProgress.incrementAndGet();
    if (inprog < 1) {
        // The counter somehow went below zero; repair it to a sane value.
        this.writesInProgress.compareAndSet(inprog, 1);
        if (log.isTraceEnabled())
            log.trace(String.format("Writes in progress was less than 1: %d", inprog));
    }
    return tx;
}
package net.mky.rl;

import java.awt.*;
import javax.swing.*;
import java.util.*;

public class boardPanel extends JPanel {

    static final boolean USEVEC = true; // multiple painting per square (assume user paints each default)

    int xdim, ydim;
    int wdim, hdim, sqw, sqh;
    final Color background = Color.white;
    boardObject[][] board; // objects on board, listed as array of boardObject elements
    Vector boardVec;       // objects on board to paint, listed as vector in order of painting
    boardObject def;
    Dimension preferredSize;

    // d is a boardObject for the background.
    public boardPanel(boardObject d, int x, int y) {
        this(d, 1, 1, x, y);
    }

    public boardPanel(boardObject d, int x, int y, int w, int h) {
        //bObjects = b;
        def = d;
        xdim = x;
        ydim = y;
        wdim = w;
        hdim = h;
        sqw = wdim / xdim;
        sqh = hdim / ydim;
        //System.out.println("sqx:"+sqw+" sqy:"+sqh);
        preferredSize = new Dimension(wdim, hdim);
        board = new boardObject[xdim][ydim];
        boardVec = new Vector(); // this instance will probably get replaced,
                                 // but allows setting before clearing
    }

    public boolean setSquare(boardObject b, Dimension d) {
        return setSquare(b, d.width, d.height);
    }

    public boolean setSquare(boardObject b, int x, int y) {
        if ((x < 0) || (x >= xdim) || (y < 0) || (y >= ydim)) return false; // out of range
        if (USEVEC) {
            boardContainer thisSquare = new boardContainer(new Dimension(x, y), b);
            boardVec.addElement(thisSquare);
        } else
            board[x][y] = b;
        return true;
    }

    public void clearBoard() {
        if (USEVEC) boardVec = new Vector();
        else {
            for (int i = 0; i < xdim; i++) {
                for (int j = 0; j < ydim; j++) board[i][j] = null;
            }
        }
    }

    public boolean clearSquare(int x, int y) {
        // The original check used y > ydim, which let y == ydim through and
        // would throw ArrayIndexOutOfBoundsException; use y >= ydim as in setSquare.
        if ((x < 0) || (x >= xdim) || (y < 0) || (y >= ydim)) return false; // out of range
        board[x][y] = null;
        return true;
    }

    public void paintComponent(Graphics g) {
        //super.paintComponent(g);
        drawboard(g);
    }

    public Dimension getPreferredSize() { return preferredSize; }

    public Dimension getMaximumSize() { return preferredSize; }

    public Dimension getMinimumSize() { return preferredSize; }

    void drawboard(Graphics g) {
        g.setColor(Color.black);
        g.fillRect(0, 0, getWidth(), getHeight());
        sqw = getWidth() / xdim;
        sqh = getHeight() / ydim;

        // draw background panels
        if (def != null) {
            for (int y = 0; y < ydim; y++) {
                for (int x = 0; x < xdim; x++)
                    def.drawObject(g, x, y, sqw, sqh, this);
            }
        }
        if (USEVEC) {
            // draw each element in the vector
            for (Enumeration e = boardVec.elements(); e.hasMoreElements();) {
                boardContainer c = (boardContainer) e.nextElement();
                c.o.drawObject(g, c.d.width, c.d.height, sqw, sqh, this);
            }
        } else {
            // draw from grid
            for (int y = 0; y < ydim; y++) {
                for (int x = 0; x < xdim; x++)
                    if (board[x][y] != null) board[x][y].drawObject(g, x, y, sqw, sqh, this);
            }
        }
    }

    public void setDimensions(int x, int y) {
        this.xdim = x;
        this.ydim = y;
        this.sqw = wdim / xdim;
        this.sqh = hdim / ydim;
        if (!USEVEC) board = new boardObject[xdim][ydim];
    }

    public String toString() {
        String retString = "";
        for (int j = 0; j < ydim; j++) {
            retString += "[";
            if (board[0][j] != null) retString += board[0][j].toString();
            else retString += "x";
            for (int i = 1; i < xdim; i++) {
                retString += ",";
                if (board[i][j] != null) retString += board[i][j].toString();
                else retString += "x";
            }
            retString += ("]\n");
        }
        return retString;
    }
}

class boardContainer {
    public Dimension d;
    public boardObject o;

    public boardContainer(Dimension d, boardObject o) {
        this.d = d;
        this.o = o;
    }
}
Today I love the first dawn of February and the very positive way it has arrived for us here in my little city of snow. I love that I spent part of the last day of January making bread without using a bread maker, and that's something I haven't done in ten or more years. I love that there was fresh bread in the house as February came to be, and that if I had been making bread all along, these loaves I made yesterday would not have been disappointments in any way. I love that when it became obvious that my yeast was a bit old and not as active as it should be, I did not panic but simply waited it out, and the dough did indeed rise to the occasion eventually. I love that February's first order of business is to put an end to this cold snap, easing up through today to a balmy 11 Celsius degrees below freezing, which is, of course, 12° Fahrenheit. I love that new plans for the weekend will take us above freezing and give us a bit of the January thaw we missed out on this year, so we'll have to call it a February thaw. I love February most of all for being just four weeks long, but being the transition between January, the first full month of winter, and March, the month in which spring arrives. I love February's firm yet cheerful attitude toward getting its four weeks of winter taken care of. Today I love that it is Friday. I love that it's the Friday that I am the host of the Open Mic at the Bleeding Carrot. I love that I am able to play the guitar again, though not for long stretches. I love that my left hand is getting better, even though it has taken so long. I love that the only way anyone can tell that my hand is bothering me is to listen to me tell them; it looks fine and has nearly full range of motion again. I love how great it is to be able to play again, even if I can tell that I'm not at my best yet. Today I love jam on fresh bread, and dang-it-all, I'm going to get me some jam today or tomorrow, I promise. I love honey on fresh bread, and I'm going to have some of that with my next cup of coffee. I love making bread from scratch because it is as close to magic as anything in this world will ever be. Alchemy, baby, pure alchemy! I love how excited I am about having made bread again and how amazed I am that I had forgotten how wonderful it can be. Today I love drinking coffee while I ponder the biggest problem in my world today, trying to remember the spell that stops me from eating all the bread.
# src/dssat/__init__.py
"""Class definition for the DSSAT model interface

.. module:: dssat
   :synopsis: Definition of the DSSAT model class

.. moduleauthor:: <NAME> <<EMAIL>>

"""

import logging
import tempfile
import decimal
import dbio
import rpath
import sys
import os
import shutil
import distutils.dir_util  # dir_util (not distutils.core) is what copyModelFiles uses
import numpy as np
import subprocess
from datetime import date, timedelta


class DSSAT(object):

    def __init__(self, dbname, name, resolution, startyear, startmonth, startday,
                 endyear, endmonth, endday, nens, vicopts, shapefile=None, assimilate=True):
        log = logging.getLogger(__name__)
        self.path = tempfile.mkdtemp(dir=".")
        self.startyear = startyear
        self.startmonth = startmonth
        self.startday = startday
        self.endyear = endyear
        self.endmonth = endmonth
        self.endday = endday
        self.crop = None
        self.cultivars = {}
        self.lat = []
        self.lon = []
        self.elev = []
        self.depths = []
        self.dbname = dbname
        self.name = name
        self.res = resolution
        self.nens = nens
        self.shapefile = shapefile
        self.assimilate = assimilate
        self.modelpaths = {}
        self.modelstart = {}
        self.grid_decimal = - (decimal.Decimal(str(self.res)).as_tuple().exponent - 1)
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        if 'lai' in vicopts or ('save' in vicopts and vicopts['save'].find("lai") >= 0):
            self.lai = "vic"
        else:
            self.lai = None
        if 'save to' in vicopts:
            self.datafrom = vicopts['save to']
        else:
            self.datafrom = "db"
        cur.execute(
            "select * from information_schema.tables where table_name='basin' and table_schema=%s", (name,))
        if not bool(cur.rowcount):
            log.error("No simulation named {0} exists in database. You might have to run VIC.".format(name))
            sys.exit()
        cur.execute(
            'select basefile from vic.input where resolution=%f;' % self.res)
        self.basefile = "{0}/{1}".format(rpath.data, cur.fetchone()[0])
        cur.close()
        db.close()

    def readVICSoil(self):
        """Extract information from VIC database table on latitude, longitude,
        elevation and soil depths."""
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        sql = "select st_y(geom), st_x(geom), elev, depths from {0}.basin".format(self.name)
        cur.execute(sql)
        pixels = cur.fetchall()
        self.lat, self.lon, self.elev, self.depths = zip(*pixels)
        self.lat = np.array(self.lat)
        self.lon = np.array(self.lon)
        self.elev = np.array(self.elev)
        self.depths = list(self.depths)
        cur.close()
        db.close()

    def writeWeatherFiles(self, modelpath, name, year, month, day, weather, elev, lat, lon, ts=None, te=None):
        """Writes ensemble weather files for specific pixel."""
        if isinstance(weather, list):
            # recycle the available weather realizations to fill the ensemble
            data = (weather * (int(self.nens / len(weather)) + 1))[:self.nens]
        else:
            data = [weather] * self.nens
        for ens in range(self.nens):
            filename = "{0}/WEATH{1:03d}.WTH".format(modelpath, ens + 1)
            fout = open(filename, 'w')
            fout.write("*WEATHER DATA : {0}\r\n".format(name[:5].upper()))
            fout.write("\r\n")
            fout.write("@ INSI LAT LONG ELEV TAV AMP REFHT WNDHT\r\n")
            tavg = np.mean(data[ens][:, 1:3])
            fout.write("{0:6s} {1} {2} {3:.0f} {4:.1f} {5:.1f} {6:.1f} {7:.1f} \r\n".format(
                name[:5].upper(), lat, lon, elev, tavg, -99.0, -99.0, -99.0))
            fout.write("@DATE SRAD TMAX TMIN RAIN DEWP WIND PAR\r\n")
            if ts is None or te is None:
                ts = 0
                te = len(data[ens])
            for p in range(ts, te):
                datestr = str(int(year[p]))[-2:] + date(int(year[p]), int(month[p]), int(day[p])).strftime("%j")
                # convert shortwave from W/m2 to MJ/m2/day (86400 s / 1e6)
                fout.write("{0} {1:4.1f} {2:4.1f} {3:4.1f} {4:4.1f}\r\n".format(
                    datestr, data[ens][p, 0] * 0.086400, data[ens][p, 1], data[ens][p, 2], data[ens][p, 3]))
            fout.close()

    def readVICOutputFromFile(self, lat, lon, depths, filespath):
        """Read DSSAT inputs from VIC output files for a specific pixel."""
        startdate = date(self.startyear, self.startmonth, self.startday)
        enddate = date(self.endyear, self.endmonth, self.endday)
        filename = "{0}/output/eb_{1:.{3}f}_{2:.{3}f}".format(filespath, lat, lon, self.grid_decimal)
        viceb = np.loadtxt(filename)
        filename = "{0}/output/sub_{1:.{3}f}_{2:.{3}f}".format(filespath, lat, lon, self.grid_decimal)
        vicsm = np.loadtxt(filename)
        filename = "{0}/output/sur_{1:.{3}f}_{2:.{3}f}".format(filespath, lat, lon, self.grid_decimal)
        vicsr = np.loadtxt(filename)
        filename = "{0}/forcings/data_{1:.{3}f}_{2:.{3}f}".format(filespath, lat, lon, self.grid_decimal)
        met = np.loadtxt(filename)
        sm = vicsm[:, 3:len(depths) + 3]
        weather = np.vstack(
            (viceb[:, 3] + viceb[:, 4], met[:, 1], met[:, 2], met[:, 0])).T
        year = vicsm[:, 0].astype(int)
        month = vicsm[:, 1].astype(int)
        day = vicsm[:, 2].astype(int)
        tidx = [i for i in range(len(year)) if date(year[i], month[i], day[i]) >= startdate and
                date(year[i], month[i], day[i]) <= enddate]
        lai = dict(zip([date(year[i], month[i], day[i])
                        for i in range(len(year)) if i in tidx], vicsr[:, 12]))
        return year[tidx], month[tidx], day[tidx], weather[tidx, :], sm[tidx, :], lai

    def readVICOutputFromDB(self, gid, depths):
        """Read DSSAT inputs from database."""
        startdate = date(self.startyear, self.startmonth, self.startday)
        enddate = date(self.endyear, self.endmonth, self.endday)
        ndays = (enddate - startdate).days + 1
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        date_sql = "fdate>=date '{0}-{1}-{2}' and fdate<=date '{3}-{4}-{5}'".format(
            self.startyear, self.startmonth, self.startday, self.endyear, self.endmonth, self.endday)
        data = {}
        varnames = ["net_short", "net_long", "soil_moist", "rainf", "tmax", "tmin"]
        if self.lai is not None:
            varnames.append("lai")
        else:
            lai = None
        for varname in varnames:
            sqlvars = ["fdate"]
            sql = "select column_name from information_schema.columns where table_schema='{0}' and table_name='{1}' and column_name='ensemble'".format(
                self.name, varname)
            cur.execute(sql)
            if bool(cur.rowcount):
                sqlvars += ["ensemble"]
            sql = "select column_name from information_schema.columns where table_schema='{0}' and table_name='{1}' and column_name='layer'".format(
                self.name, varname)
            cur.execute(sql)
            if bool(cur.rowcount):
                sqlvars += ["layer"]
            sql = "select {0}, avg((st_summarystats(rast)).mean) from {1}.{2}, {1}.agareas where st_intersects(rast,geom) and gid={3} and {4} group by gid,{0} order by fdate".format(
                ",".join(sqlvars), self.name, varname, gid, date_sql)
            cur.execute(sql)
            if bool(cur.rowcount):
                results = cur.fetchall()
                if "ensemble" in sqlvars:
                    vicnens = np.max([r[1] for r in results])
                    data[varname] = [np.array(
                        [r[-1] for r in results if r[1] == ens + 1]) for ens in range(vicnens)]
                    if "layer" in sqlvars:
                        layers = np.array([r[2] for r in results if r[1] == 1])
                        nlayers = np.max(layers)
                    else:
                        year = np.array([r[0].year for r in results if r[1] == 1])
                        month = np.array([r[0].month for r in results if r[1] == 1])
                        day = np.array([r[0].day for r in results if r[1] == 1])
                else:
                    data[varname] = np.array([r[-1] for r in results])
                    if "layer" in sqlvars:
                        layers = np.array([r[1] for r in results])
                        nlayers = np.max(layers)
                    else:
                        year = np.array([r[0].year for r in results])
                        month = np.array([r[0].month for r in results])
                        day = np.array([r[0].day for r in results])
        assert len(year) == ndays and len(month) == ndays and len(day) == ndays
        cur.close()
        db.close()
        if "ensemble" in sqlvars:
            weather = [np.vstack((data["net_short"][e] - data["net_long"][e], data["tmax"][e],
                                  data["tmin"][e], data["rainf"][e])).T
                       for e in range(len(data["net_short"]))]
            # build independent arrays; a repeated-list literal would alias a single array
            sm = [np.zeros((len(year), nlayers)) for _ in range(len(data["soil_moist"]))]
            if self.lai is not None:
                lai = dict(zip([date(year[i], month[i], day[i]) for i in range(len(year))],
                               np.mean(np.array(data["lai"]).T, axis=1)))
            for e in range(len(sm)):
                for l in range(nlayers):
                    sm[e][:, l] = [m for mi, m in enumerate(data["soil_moist"][e]) if layers[mi] == l + 1]
        else:
            weather = np.vstack(
                (data["net_short"] - data["net_long"], data["tmax"], data["tmin"], data["rainf"])).T
            if self.lai is not None:
                lai = dict(zip([date(year[i], month[i], day[i]) for i in range(len(year))],
                               np.array(data["lai"]).T))
            sm = np.zeros((len(year), nlayers))
            for l in range(nlayers):
                sm[:, l] = [m for mi, m in enumerate(data["soil_moist"]) if layers[mi] == l + 1]
        return year, month, day, weather, sm, lai

    def readVICOutput(self, gid, depths):
        """Reads DSSAT time-varying inputs by reading either from files or a database."""
        log = logging.getLogger(__name__)
        if isinstance(self.datafrom, list):
            inputs = []
            while len(inputs) < self.nens:
                inputs += self.datafrom
            inputs = inputs[:self.nens]
        if self.datafrom == 'db':
            year, month, day, weather, sm, lai = self.readVICOutputFromDB(gid, depths)
        else:
            log.error("VIC output was not saved in the database. Cannot proceed with the DSSAT simulation.")
            sys.exit()
        return year, month, day, weather, sm, lai

    def writeLAI(self, modelpath, gid, viclai=None, tablename="lai.modis"):
        """Writes LAI file for DSSAT."""
        fout = open("{0}/LAI.txt".format(modelpath), 'w')
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        cur.execute("select * from information_schema.tables where table_name=%s and table_schema='lai'",
                    (tablename.split(".")[1],))
        if bool(cur.rowcount) and self.lai != "vic":
            sql = "select fdate,avg((st_summarystats(st_clip(rast,geom))).mean) from {0},{1}.agareas where st_intersects(rast,geom) and fdate>=date '{2}-{3}-{4}' and fdate<=date '{5}-{6}-{7}' and gid={8} group by fdate".format(
                tablename, self.name, self.startyear, self.startmonth, self.startday,
                self.endyear, self.endmonth, self.endday, gid)
            cur.execute(sql)
            if bool(cur.rowcount):
                results = cur.fetchall()
                lai = {}
                for r in results:
                    if r[1] is None:
                        lai[r[0]] = -9999.0
                    else:
                        lai[r[0]] = r[1] / 10.0
            else:
                lai = {}
        else:
            lai = viclai
        enddate = date(self.endyear, 12, 31)
        startdate = date(self.startyear, 1, 1)
        for t in range((enddate - startdate).days + 1):
            dt = startdate + timedelta(t)
            if lai is not None and dt in lai:
                fout.write("{0:.1f}\n".format(lai[dt]))
            else:
                fout.write("-9999.0\n")
        fout.close()
        cur.close()
        db.close()

    def writeSoilMoist(self, modelpath, year, month, day, smi, dz):
        """Writes soil moisture information file."""
        filename = "{0}/SOIL_MOISTURE.ASC".format(modelpath)
        fout = open(filename, 'w')
        ndays = (date(year[0] + 1, 1, 1) - date(year[0], 1, 1)).days
        tv = 0
        for t in range(ndays):
            dt = date(year[0], 1, 1) + timedelta(t)
            doy = int(dt.strftime("%j"))
            fout.write("{0:.0f} {1:.0f} {2:.0f} ".format(dt.year, dt.month, dt.day))
            if tv < len(year) and dt == date(int(year[tv]), int(month[tv]), int(day[tv])):
                for lyr in range(len(dz)):
                    fout.write("{0:.3f} ".format(smi[tv, lyr]))
                tv += 1
            else:
                for lyr in range(len(dz)):
                    fout.write("{0:.0f} ".format(-9999.0))
            fout.write("{0}\n".format(doy))
        fout.close()

    def sampleSoilProfiles(self, gid):
        """Samples soil profiles from database to be used in DSSAT control file."""
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        sql = "with f as (select st_envelope(geom) as geom from {0}.agareas where gid={1}) select props from dssat.soils as s,f where st_intersects(s.geom,f.geom)".format(self.name, gid)
        cur.execute(sql)
        # if crop area is too small, look for nearest soil profiles
        dist = 0.1
        while not bool(cur.rowcount):
            sql = "with a as (select st_buffer(geom,{2}) as geom from {0}.agareas where gid={1}) select props from dssat.soils as s,a where st_intersects(s.geom,a.geom)".format(
                self.name, gid, dist)
            dist += 0.1
            cur.execute(sql)
        profiles = cur.fetchall()
        ens = np.random.choice(range(len(profiles)), self.nens)
        cur.close()
        db.close()
        return [profiles[e] for e in ens]

    def writeConfigFile(self, modelpath, nlayers, startdate, enddate):
        """Write DSSAT-ENKF config file."""
        configfilename = "ENKF_CONFIG.TXT"
        fout = open("{0}/{1}".format(modelpath, configfilename), 'w')
        fout.write("!Start_DOY_of_Simulation:\n{0}\n".format(int(startdate.strftime("%j"))))
        fout.write("!End_DOY_of_Simulation\n{0}\n".format(int(enddate.strftime("%j"))))
        fout.write("!Year_of_Simulation:\n{0}\n".format(startdate.year))
        fout.write("!Ensemble_members\n{0}\n".format(self.nens))
        fout.write("!Number_of_soil_layers\n{0}\n".format(nlayers))
        ndays = (date(self.endyear, 12, 31) - date(self.startyear, 1, 1)).days
        fout.write("!Number_of_RS_data\n{0}".format(ndays))
        fout.close()
        return configfilename

    def calcCroplandFract(self):
        """Calculate fraction of cropland for specific pixel."""
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        sql = "select gid,avg((st_summarystats(st_clip(rast,geom))).mean) from dssat.cropland,{0}.agareas where st_intersects(rast,geom) group by gid order by gid".format(self.name)
        cur.execute(sql)
        fract = dict((r[0], r[1]) for r in cur.fetchall())
        cur.close()
        db.close()
        return fract

    def readShapefile(self):
        """Read areas from shapefile where DSSAT will be run."""
        log = logging.getLogger(__name__)
        try:
            cmd = "{0}/shp2pgsql -s 4326 -d -I -g geom {1} {2}.agareas | {0}/psql -d {3}".format(
                rpath.bins, self.shapefile, self.name, self.dbname)
            subprocess.call(cmd, shell=True)
            db = dbio.connect(self.dbname)
            cur = db.cursor()
            sql = "select gid, st_x(st_centroid(geom)), st_y(st_centroid(geom)) from {0}.agareas".format(self.name)
            cur.execute(sql)
            geoms = cur.fetchall()
            return geoms
        except IOError:
            log.error("Shapefile {0} for DSSAT simulation does not exist. Exiting...".format(self.shapefile))
            sys.exit()

    def planting(self, lat, lon, fromShapefile=False):
        """Retrieve planting dates for pixel."""
        if self.crop is None:
            self.crop = "maize"
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        sql = "select st_value(rast,st_geomfromtext('POINT({0} {1})',4326)) as doy from crops.plantstart where type like '{2}' and st_intersects(rast,st_geomfromtext('POINT({0} {1})',4326)) order by doy".format(
            lon, lat, self.crop)
        cur.execute(sql)
        results = cur.fetchall()
        plantdates = [date(self.startyear, 1, 1) + timedelta(r[0] - 1) for r in results if r[0] is not None]
        cur.close()
        db.close()
        startdt = date(self.startyear, self.startmonth, self.startday)
        planting = [p for p in plantdates if p >= startdt and p <= date(self.endyear, self.endmonth, self.endday)]
        if not planting:  # the original test `planting is []` was always False
            # fall back to the most recent planting date before the simulation start
            past = [p for p in plantdates if p < startdt]
            if past:
                planting = [max(past)]
        return planting

    def interpolateSoilMoist(self, sm, depths, dz):
        """Estimate soil moisture at DSSAT depths."""
        sm_i = []
        if len(sm.shape) < 2:
            sm = np.reshape(sm, (1, len(sm)))
        for t in range(sm.shape[0]):
            u = sm[t, :] / np.array(depths * 1000.0)
            z = [100.0 * depths[0] / 2.0]
            for lyr in range(1, len(u)):
                # midpoint of each layer in cm
                z.append(100.0 * (depths[lyr - 1] + depths[lyr] / 2.0))
            dz1 = [0.0] + list(dz)
            znew = np.array([dz1[i] + (dz1[i + 1] - dz1[i]) / 2.0 for i in range(len(dz1) - 1)])
            unew = np.interp(znew, z, u)
            sm_i.append(unew)
        return np.array(sm_i)

    def copyModelFiles(self, geom, pi, dssatexe):
        """Copy DSSAT model files to instance's directory."""
        gid, lon, lat = geom  # geom is (gid, centroid longitude, centroid latitude) from readShapefile
        modelpath = os.path.abspath("{0}/{1}_{2}_{3}".format(self.path, lat, lon, pi))
        self.modelpaths[(gid, pi)] = modelpath
        os.mkdir(modelpath)
        os.mkdir(modelpath + "/ENKF_Results")
        shutil.copyfile("{0}/{1}".format(rpath.bins, dssatexe), "{0}/{1}".format(modelpath, dssatexe))
        distutils.dir_util.copy_tree("{0}/dssat".format(rpath.data), modelpath)

    def setupModelInstance(self, geom, dssatexe):
        """Setup parameters and write input files for a DSSAT model instance
        over a specific geometry."""
        log = logging.getLogger(__name__)
        gid, lon, lat = geom
        # use the soil depths from the nearest VIC pixel to the centroid
        c = np.argmin(np.sqrt((lat - self.lat) ** 2 + (lon - self.lon) ** 2))
        depths = np.array(self.depths[c])
        year, month, day, weather, sm, vlai = self.readVICOutput(gid, depths)
        vicstartdt = date(year[0], month[0], day[0])
        planting = self.planting(lat, lon)
        for pi, pdt in enumerate(planting[:1]):
            self.copyModelFiles(geom, pi, dssatexe)
            try:
                if pdt > date(pdt.year, 1, 8):
                    simstartdt = pdt - timedelta(7)
                else:
                    simstartdt = pdt
                assert simstartdt >= vicstartdt
                modelpath = self.modelpaths[(gid, pi)]
                self.modelstart[(gid, pi)] = simstartdt
                dz, smi = self.writeControlFile(modelpath, sm, depths, simstartdt, gid,
                                                self.lat[c], self.lon[c], pdt, None, None)
                ti0 = [i for i in range(len(year)) if simstartdt == date(year[i], month[i], day[i])][0]
                if pi + 1 < len(planting):
                    ti1 = [i for i in range(len(year))
                           if (planting[pi + 1] - timedelta(10)) == date(year[i], month[i], day[i])][0]
                else:
                    ti1 = [i for i in range(len(year))
                           if (planting[pi] + timedelta(min(180, len(year) - (planting[pi] - date(self.startyear - 1, 12, 31)).days))) == date(year[i], month[i], day[i])][0]
                self.writeWeatherFiles(modelpath, self.name, year, month, day, weather,
                                       self.elev[c], self.lat[c], self.lon[c])  # , ti0, ti1)
                self.writeSoilMoist(modelpath, year, month, day, smi, dz)
                self.writeLAI(modelpath, gid, viclai=vlai)
                self.writeConfigFile(modelpath, smi.shape[1], simstartdt,
                                     date(year[ti1], month[ti1], day[ti1]))
                log.info("Wrote DSSAT inputs for planting date {0}".format(pdt.strftime("%Y-%m-%d")))
            except AssertionError:
                log.error("No input data for DSSAT corresponding to starting date {0}. Need to run VIC for these dates. Exiting...".format(
                    simstartdt.strftime('%Y-%m-%d')))

    def runModelInstance(self, modelpath, dssatexe):
        """Runs DSSAT model instance."""
        log = logging.getLogger(__name__)
        os.chdir(modelpath)
        if bool(self.assimilate):
            if str(self.assimilate).lower() == "sm":  # the original used `is`, an identity (not equality) test
                sm_assim = "Y"
                lai_assim = "N"
            elif str(self.assimilate).lower() == "lai":
                sm_assim = "N"
                lai_assim = "Y"
            else:
                sm_assim = lai_assim = "Y"
        else:
            sm_assim = lai_assim = "N"
        # capture output so that communicate() returns it for logging
        proc = subprocess.Popen(["wine", dssatexe, "SOIL_MOISTURE.ASC", "LAI.txt",
                                 "SM{0}".format(sm_assim), "LAI{0}".format(lai_assim)],
                                stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        out, err = proc.communicate()
        log.debug(out)

    def save(self):
        """Saves DSSAT output to database."""
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        cur.execute(
            "select * from information_schema.tables where table_name='dssat' and table_schema='{0}'".format(self.name))
        if not bool(cur.rowcount):
            cur.execute("create table {0}.dssat (id serial primary key, gid int, ensemble int, fdate date, wsgd real, lai real, gwad real, geom geometry, CONSTRAINT enforce_dims_geom CHECK (st_ndims(geom) = 2), CONSTRAINT enforce_geotype_geom CHECK (geometrytype(geom) = 'POLYGON'::text OR geometrytype(geom) = 'MULTIPOLYGON'::text OR geom IS NULL))".format(self.name))
            db.commit()
        # overwrite overlapping dates
        cur.execute("delete from {0}.dssat where fdate>=date'{1}-{2}-{3}' and fdate<=date'{4}-{5}-{6}'".format(
            self.name, self.startyear, self.startmonth, self.startday, self.endyear, self.endmonth, self.endday))
        sql = "insert into {0}.dssat (fdate, gid, ensemble, gwad, wsgd, lai) values (%(dt)s, %(gid)s, %(ens)s, %(gwad)s, %(wsgd)s, %(lai)s)".format(self.name)
        for gid, pi in self.modelpaths:
            modelpath = self.modelpaths[(gid, pi)]
            startdt = self.modelstart[(gid, pi)]
            for e in range(self.nens):
                with open("{0}/PLANTGRO{1:03d}.OUT".format(modelpath, e + 1)) as fin:
                    line = fin.readline()
                    while line.find("YEAR") < 0:
                        line = fin.readline()
                    for line in fin:
                        data = line.split()
                        dt = date(startdt.year, 1, 1) + timedelta(int(data[1]) - 1)
                        dts = "{0}-{1}-{2}".format(dt.year, dt.month, dt.day)
                        if self.cultivars[gid][e] is None:
                            cultivar = ""
                        else:
                            cultivar = self.cultivars[gid][e]
                        if float(data[9]) > 0.0:
                            # extra dict keys without a matching placeholder (e.g. cultivar) are ignored
                            cur.execute(sql, {'dt': dts, 'ens': e + 1, 'gwad': float(data[9]),
                                              'wsgd': float(data[18]), 'lai': float(data[6]),
                                              'gid': gid, 'cultivar': cultivar})
        cur.execute(
            "update {0}.dssat as d set geom = a.geom from {0}.agareas as a where a.gid=d.gid".format(self.name))
        db.commit()
        cur.execute("drop index if exists {0}.d_t".format(self.name))
        cur.execute("drop index if exists {0}.d_s".format(self.name))
        cur.execute("create index d_t on {0}.dssat(fdate)".format(self.name))
        cur.execute("create index d_s on {0}.dssat using gist(geom)".format(self.name))
        db.commit()
        cur.close()
        db.close()
        self.yieldTable()

    def yieldTable(self):
        """Create table for crop yield statistics."""
        fsql = "with f as (select gid,geom,gwad,ensemble,fdate from (select gid,geom,gwad,ensemble,fdate,row_number() over (partition by gid,ensemble order by gwad desc) as rn from {0}.dssat) gwadtable where rn=1)".format(self.name)
        db = dbio.connect(self.dbname)
        cur = db.cursor()
        cur.execute(
            "select * from information_schema.tables where table_name='yield' and table_schema='{0}'".format(self.name))
        if not bool(cur.rowcount):
            sql = "create table {0}.yield as ({1} select gid,geom,max(gwad) as max_yield,avg(gwad) as avg_yield,stddev(gwad) as std_yield,max(fdate) as fdate from f group by gid,geom)".format(self.name, fsql)
            cur.execute(sql)
            cur.execute("alter table {0}.yield add column crop text".format(self.name))
            cur.execute("alter table {0}.yield add primary key (gid)".format(self.name))
        else:
            cur.execute("delete from {0}.yield where fdate>='{1}-{2}-{3}' and fdate<='{4}-{5}-{6}'".format(
                self.name, self.startyear, self.startmonth, self.startday, self.endyear, self.endmonth, self.endday))
            sql = "insert into {0}.yield ({1} select gid,geom,max(gwad) as max_yield,avg(gwad) as avg_yield,stddev(gwad) as std_yield,max(fdate) as fdate from f group by gid,geom)".format(self.name, fsql)
            cur.execute(sql)
        db.commit()
        cur.execute("update {0}.yield set std_yield = 0 where std_yield is null".format(self.name))
        cur.execute("drop index if exists {0}.yield_s".format(self.name))
        db.commit()
        cur.execute("create index yield_s on {0}.yield using gist(geom)".format(self.name))
        cur.close()
        db.close()

    def run(self, dssatexe="DSSAT_EnKF.exe", crop_threshold=0.1):
        """Runs DSSAT simulation."""
        self.readVICSoil()
        geoms = self.readShapefile()
        cropfract = self.calcCroplandFract()
        for geom in geoms:
            gid = geom[0]
            if cropfract[gid] >= crop_threshold:
                self.setupModelInstance(geom, dssatexe)
        for k in self.modelpaths:
            modelpath = self.modelpaths[k]
            self.runModelInstance(modelpath, dssatexe)
        self.save()
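A minimal driver sketch for this class follows. The database name, simulation name, dates, shapefile path, and vicopts values are illustrative assumptions; none of them come from the module itself.

# Hypothetical driver for the DSSAT class above.
from dssat import DSSAT

model = DSSAT(dbname="rheas", name="basin_sim", resolution=0.25,
              startyear=2010, startmonth=1, startday=1,
              endyear=2010, endmonth=12, endday=31,
              nens=20, vicopts={"save to": "db", "save": "lai"},
              shapefile="data/cropland.shp", assimilate="sm")
model.crop = "maize"
model.run(dssatexe="DSSAT_EnKF.exe", crop_threshold=0.1)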
Electronic structure and optical properties of (BeTe)n/(ZnSe)m superlattices Abstract The structural, electronic and optical properties of (BeTe)n/(ZnSe)m superlattices have been computationally evaluated for different configurations with m = n and m ≠ n using the full-potential linear muffin-tin orbital method. The exchange and correlation potentials are treated within the local density approximation (LDA). The ground state properties of the BeTe and ZnSe binary compounds are determined and compared with the available data. It is found that the superlattice band gaps vary depending on the layers used. The optical constants, including the dielectric function ε(ω), the refractive index n(ω) and the reflectivity R(ω), are calculated for radiation energies up to 35 eV.
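The optical constants named above are conventionally derived from the complex dielectric function; a standard set of relations (assumed here, since the abstract does not state them) is:

\varepsilon(\omega) = \varepsilon_1(\omega) + i\,\varepsilon_2(\omega), \qquad
n(\omega) = \left[ \frac{\sqrt{\varepsilon_1^2(\omega) + \varepsilon_2^2(\omega)} + \varepsilon_1(\omega)}{2} \right]^{1/2}, \qquad
R(\omega) = \left| \frac{\sqrt{\varepsilon(\omega)} - 1}{\sqrt{\varepsilon(\omega)} + 1} \right|^2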
Inverse Multiobjective Optimization Through Online Learning

We study the problem of learning the objective functions or constraints of a multiobjective decision making model, based on a set of sequentially arrived decisions. In particular, these decisions might not be exact and may carry measurement noise or be generated with the bounded rationality of decision makers. In this paper, we propose a general online learning framework to deal with this learning problem using inverse multiobjective optimization. More precisely, we develop two online learning algorithms with implicit update rules which can handle noisy data. Numerical results show that both algorithms can learn the parameters with great accuracy and are robust to noise.

Introduction

Understanding human participants' preferences and desires is critical for an organization in designing and providing services or products. Nevertheless, in most scenarios we can only observe their decisions or behaviors, and cannot directly access their decision making schemes. Indeed, participants probably do not have exact information regarding their own decision making process. To bridge this discrepancy, one idea has been proposed and has received significant research attention: to infer or learn the missing information of the underlying decision models from observed data, assuming that human decision makers are making optimal decisions. This idea carries the data-driven concept and becomes more applicable as large amounts of data are generated and become readily available, especially those from digital devices and online transactions.

Inferring the unknown parameters of an optimization model from observed decisions is often cast as an inverse optimization problem. It seeks particular values for those parameters such that the difference between the actual observation and the expected solution to the optimization model (populated with those inferred values) is minimized. Although complicated, an inverse optimization model can often be simplified for computation by using the KKT conditions or strong duality of the decision making model, provided that it is convex. Nowadays, extending from its initial form that only considers a single observation, inverse optimization has been further developed and applied to handle many observations. Nevertheless, a particular challenge, which is almost unavoidable for any large data set, is that the data could be inconsistent due to measurement errors or decision makers' sub-optimality. To address this challenge, the assumption of the observations' optimality is weakened to integrate noisy data, and the KKT conditions or strong duality are relaxed to incorporate inexactness.

Different from the majority of the existing literature, another perspective can be taken to explain the so-called "data inconsistency": decision makers are driven by multiple criteria, and different people have different preferences or weights over those criteria, which leads them to make a variety of responses or choices. It can then be anticipated that once we remove the variance caused by such multi-criteria decision making from the data, their quality or consistency can be greatly improved. We note that this explanation matches a real situation where data are collected from multiple participants of different backgrounds or personalities. Indeed, it is not rare that the same customer, even when facing the same set of products, makes different purchases over time, reflecting that her preferences over multiple criteria might shift.
Moreover, we would like to point out that for a service provider or a product supplier, it is more critical to correctly understand the whole customer population, their decision making criteria, and the distribution of their preferences, rather than to have a precise estimation of every single customer's utility function. In fact, the latter is practically infeasible or unnecessary when the customer population is large. In this paper, we aim to learn the constraints and a set of objective functions of a decision making problem with multiple objectives, instead of inferring the parameters of a decision making problem with a single objective. In particular, we consider this learning problem in an online fashion, noting that in many practical scenarios observations are unveiled sequentially. Specifically, we study the learning problem as an inverse multiobjective optimization problem (IMOP) dealing with noisy data, develop online learning algorithms to derive the parameters of each objective function and constraint, and finally output an estimation of the distribution of weights (which, together with those objective functions, define individuals' utility functions) among human subjects.

Related work

Our work is most related to the subject of inverse multiobjective optimization. The goal is to find multiple objective functions or constraints that explain the observed efficient solutions well. This subject carries the data-driven concept and becomes more applicable as large amounts of data are generated and become readily available, especially those from digital devices and online transactions. There are several recent studies related to the presented research. One considers a single observation that is assumed to be an exact optimal solution; given a set of well-defined linear functions, an inverse optimization is formulated to learn their weights. Two others propose a batch learning framework to infer the utility functions or constraints from multiple noisy decisions through inverse multiobjective optimization. These works can be categorized as doing inverse multiobjective optimization in a batch setting. In contrast, we do inverse multiobjective optimization in an online setting, and the proposed online learning algorithms significantly accelerate the learning process with performance guarantees, allowing us to deal with more realistic and complex preference inference problems.

Also related to our work is a line of research that develops online learning methods to infer the utility function or constraints from sequentially arrived observations. However, that approach can only handle inverse optimization with a single objective. More specifically, those methods apply to situations where observations are generated by decision making problems with only one objective function. In contrast, our approach does not make the single-objective assumption and only requires the convexity of the underlying decision making problem with multiple objectives. Hence, we believe that our work generalizes those methods and extends the applicability of online learning from solving inverse optimization problems to inverse multiobjective optimization problems.

Our contributions

To the best of the authors' knowledge, we propose the first general framework of online learning for inferring decision makers' objective functions or constraints using inverse multiobjective optimization.
This framework can learn the parameters of any convex decision making problem, and can explicitly handle noisy decisions. Moreover, we show that the online learning approach, which adopts an implicit update rule, has an O(\sqrt{T}) regret under suitable regularity conditions when using the ideal loss function. We finally illustrate the performance of the two algorithms on a multiobjective quadratic programming problem and a portfolio optimization problem. Results show that both algorithms can learn the parameters with great accuracy and are robust to noise, while the second algorithm significantly accelerates the learning process over the first one.

Problem setting

In this section, we review basic concepts of multiobjective decision making problems and introduce the framework for solving inverse multiobjective optimization problems in the batch setting.

Decision making problem with multiple objectives

We consider a family of parametrized multiobjective decision making problems of the form

(DMP)   min_x  f(x, \theta) = (f_1(x, \theta), ..., f_p(x, \theta))^T   s.t.  x \in X(\theta),

where \theta \in \Theta is the parameter vector and X(\theta) is the feasible region. In the study of multiobjective optimization, the set of all efficient solutions is denoted by X_E(\theta) and called the efficient set. The weighting method is commonly used to obtain an efficient solution through computing the problem of weighted sum (PWS):

(PWS)   min_x  w^T f(x, \theta)   s.t.  x \in X(\theta),

where w = (w_1, ..., w_p)^T. Without loss of generality, all possible weights are restricted to a simplex, which is denoted by W_p = {w \in R^p_+ : 1^T w = 1}. Next, we denote the set of optimal solutions of the PWS by

S(w, \theta) = arg min_x { w^T f(x, \theta) : x \in X(\theta) }.

Let W^+_p = {w \in R^p_{++} : 1^T w = 1}. Following from a standard result (Theorem 3.1.2) in multiobjective optimization theory, we have:

Proposition 2.1. If x \in S(w, \theta) and w \in W^+_p, then x \in X_E(\theta).

The next result (Theorem 3.1.4 in the same theory) states that all efficient solutions can be found by the weighting method for a convex DMP.

Proposition 2.2. Assume that the DMP is convex. If x \in X(\theta) is an efficient solution, then there exists a weighting vector w \in W_p such that x is an optimal solution of the PWS.

By Propositions 2.1 and 2.2, we can summarize the relationship between S(w, \theta) and X_E(\theta) as follows.

Corollary 2.2.1. For a convex DMP, \bigcup_{w \in W^+_p} S(w, \theta) \subseteq X_E(\theta) \subseteq \bigcup_{w \in W_p} S(w, \theta).

In the following, we make a few assumptions to simplify the presentation; they are mild and appear often in the literature.

Assumption 2.1. The set \Theta is a convex compact set, and there exists D > 0 such that ||\theta||_2 <= D for all \theta \in \Theta. In addition, for each \theta \in \Theta, both f(x, \theta) and g(x, \theta) are convex in x.

Inverse multiobjective optimization

Consider a learner who has access to decision makers' decisions, but does not know their objective functions or constraints. In the inverse multiobjective optimization model, the learner aims to learn the decision makers' multiple objective functions from observed noisy decisions only; no information regarding the decision makers' preferences over the multiple objective functions is available. We denote by y the observed noisy decision, which might carry measurement error or be generated with bounded rationality of the decision maker, i.e., be suboptimal. Throughout the paper we assume that y is a random variable distributed according to an unknown distribution P_y supported on Y. As y is a noisy observation, it does not necessarily belong to X(\theta), i.e., it might be either feasible or infeasible with respect to X(\theta).

Loss function and surrogate loss function

Ideally, the learner would aim to learn \theta by finding parameter values that minimize the distance between the noisy decision and the predicted decision derived with those values.
Without knowing the decision makers' preferences over the multiple objective functions, however, the learner cannot predict a desired decision even when \theta is given. Hence, the traditional loss function of online learning, the distance between the observation and the prediction, is not applicable. To address this challenge, we begin with a discussion of the construction of an appropriate loss function for the inverse multiobjective optimization problem. Given a noisy decision y and a hypothesis \theta, the loss function can be defined as the minimum (squared) distance between y and the efficient set X_E(\theta):

(loss function)   l(y, \theta) = min_{x \in X_E(\theta)} ||y - x||_2^2.

For a general DMP, however, there might exist no explicit way to characterize the efficient set X_E(\theta). Hence, an approximation approach can be adopted to practically describe this set. Following from Corollary 2.2.1 and its following remarks, a sampling approach can be adopted to generate w^k \in W_p for each k \in [K] and approximate X_E(\theta) by \bigcup_{k \in [K]} S(w^k, \theta). Then, the surrogate loss function is defined as

(surrogate loss)   l_K(y, \theta) = min_{k \in [K]}  min_{x \in S(w^k, \theta)} ||y - x||_2^2.

By using binary variables, this surrogate loss function can be converted into the surrogate loss problem

min { ||y - \sum_{k \in [K]} z_k x^k||_2^2 : x^k \in S(w^k, \theta),  \sum_{k \in [K]} z_k = 1,  z_k \in {0, 1} }.

The constraint \sum_{k \in [K]} z_k = 1 ensures that exactly one of the efficient solutions will be chosen to measure the distance to y. Hence, solving this optimization problem identifies some w^k with k \in [K] such that the corresponding efficient solution S(w^k, \theta) is closest to y.

Remark 2.1. It is guaranteed that no efficient solution will be excluded if all weight vectors in W_p are enumerated. As this is practically infeasible due to computational intractability, we can control the number of sampled weights K to balance the tradeoff between approximation accuracy and computational efficacy. Certainly, if the computational power is strong, we suggest drawing a large number of weights evenly in W_p to avoid any bias. In practice, for a general convex DMP, we evenly sample {w^k}_{k \in [K]} from W^+_p to ensure that S(w^k, \theta) \in X_E(\theta). If f(x, \theta) is known to be strictly convex, we can evenly sample {w^k}_{k \in [K]} from W_p, as S(w^k, \theta) \in X_E(\theta) by Proposition 2.1.

Online learning for IMOP

In our online learning setting, noisy decisions become available to the learner one by one. Hence, the learning algorithm produces a sequence of hypotheses (\theta_1, ..., \theta_{T+1}). Here, T is the total number of rounds; \theta_1 is an arbitrary initial hypothesis, and \theta_t for t > 1 is the hypothesis chosen after seeing the (t-1)th decision. Let l(y_t, \theta_t) denote the loss the learning algorithm suffers when it tries to predict y_t based on the previously observed decisions {y_1, ..., y_{t-1}}. The goal of the learner is to minimize the regret, which is the cumulative loss \sum_{t=1}^T l(y_t, \theta_t) against the best possible loss when the whole batch of decisions is available. Formally, the regret is defined as

R_T = \sum_{t=1}^T l(y_t, \theta_t) - min_{\theta \in \Theta} \sum_{t=1}^T l(y_t, \theta).

Unlike most online learning problems that assume the loss function to be smooth, in this study l(y, \theta) and l_K(y, \theta) are not necessarily smooth, due to the structures of X_E(\theta) and \bigcup_{k \in [K]} S(w^k, \theta). Thus, popular gradient-based online learning algorithms fail, and our problem is significantly more difficult than most online learning problems. To address this challenge, two online learning algorithms are developed in the next subsection.

Online implicit updates

Once the tth noisy decision y_t is received, the ideal way to update \theta_{t+1} is by solving the following optimization problem using the ideal loss function:

\theta_{t+1} = arg min_{\theta \in \Theta}  (1/2) ||\theta - \theta_t||_2^2 + \eta_t l(y_t, \theta),

where \eta_t is the learning rate in each round and l(y_t, \theta) is defined in (loss function).
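A minimal sketch of this implicit update follows, implemented with the surrogate loss (anticipating the approximation formalized next) and a generic derivative-free solver in place of the exact KKT/MIP reformulation developed in the paper. The helper solve_weighted, which returns an element of S(w, \theta), and all other names are illustrative assumptions.

# Sketch of the online implicit update with the surrogate loss l_K.
# `solve_weighted(w, theta)` is an assumed black-box returning S(w, theta);
# Nelder-Mead stands in for the exact reformulation, and the projection
# onto the feasible set Theta is omitted for brevity.
import numpy as np
from scipy.optimize import minimize

def surrogate_loss(y, theta, weights, solve_weighted):
    # l_K(y, theta): squared distance from y to the sampled efficient points
    return min(np.sum((y - solve_weighted(w, theta)) ** 2) for w in weights)

def implicit_update(theta_t, y_t, eta_t, weights, solve_weighted):
    # theta_{t+1} = argmin_theta 0.5*||theta - theta_t||^2 + eta_t * l_K(y_t, theta)
    def objective(theta):
        return 0.5 * np.sum((theta - theta_t) ** 2) + \
               eta_t * surrogate_loss(y_t, theta, weights, solve_weighted)
    return minimize(objective, theta_t, method="Nelder-Mead").x

def online_imop(theta0, decisions, weights, solve_weighted):
    theta = np.asarray(theta0, dtype=float)
    for t, y_t in enumerate(decisions, start=1):
        eta_t = 5.0 / np.sqrt(t)  # the learning rate schedule used in the experiments
        if surrogate_loss(y_t, theta, weights, solve_weighted) > 0:
            theta = implicit_update(theta, y_t, eta_t, weights, solve_weighted)
    return theta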
As explained in the previous section, l(y_t, \theta) might not be computable due to the non-existence of a closed form of the efficient set X_E(\theta). Thus, we approximate the update by

\theta_{t+1} = arg min_{\theta \in \Theta}  (1/2) ||\theta - \theta_t||_2^2 + \eta_t l_K(y_t, \theta),

where \eta_t is the learning rate in each round and l_K(y_t, \theta) is defined in (surrogate loss). This update approximates the ideal one and seeks to balance the tradeoff between "conservativeness" and "correctiveness": the first term characterizes how conservative we are in maintaining the current estimation, while the second term indicates how correctively we modify it with the new estimation. As no closed form exists for \theta_{t+1} in general, this update method is an implicit approach. To solve the update, we can replace x^k \in S(w^k, \theta) by KKT conditions for each k \in [K]; see supplementary material Section A for details. Alternatively, solving the update is equivalent to solving the K independent programs

min_{\theta \in \Theta, x \in S(w^k, \theta)}  (1/2) ||\theta - \theta_t||_2^2 + \eta_t ||y_t - x||_2^2,   k \in [K],

and taking the one with the least optimal value (breaking ties arbitrarily). Our application of the implicit update rule to learn the parameter of a DMP proceeds as outlined in Algorithm 1.

Algorithm 1 (implicit online updates): initialize \theta_1; for t = 1, ..., T: receive y_t and suffer the loss l_K(y_t, \theta_t); if l_K(y_t, \theta_t) = 0, set \theta_{t+1} = \theta_t; otherwise set the learning rate \eta_t \propto 1/\sqrt{t} and update \theta_{t+1} by solving the update directly (or, equivalently, by solving the K subproblems).

Remark 3.1. (i) When updating \theta_{t+1}, the K independent subproblems can be computed in parallel, which dramatically improves the computational efficiency. (ii) After the completion of Algorithm 1, we can allocate every y_t to the w^k that minimizes l_K(y_t, \theta_{T+1}), which provides an inference of the distribution of the weights of the component functions f_l(x, \theta) over the human subjects.

Acceleration of Algorithm 1: Note that the update determines \theta and the weight sample assigned to y_t simultaneously, meaning that both \theta and the weight sample index k are variables when solving the update. In other words, one needs to solve K subproblems to obtain an optimal solution. However, the increment of \theta in each update is typically small. Consequently, the weight sample assigned to y_t using \theta_{t+1} is roughly the same as the one assigned using the previous guess \theta_t. Hence, it is reasonable to solve the update approximately by first assigning a weight sample to y_t based on the previous iterate; then, instead of computing K problems, we simply compute the single one associated with the selected weight sample. Through this procedure, we significantly ease the computational burden of the update. The accelerated implicit update rule proceeds as outlined in Algorithm 2.

Algorithm 2 (accelerated implicit online updates): initialize \theta_1; for t = 1, ..., T: receive y_t and suffer the loss l_K(y_t, \theta_t); if l_K(y_t, \theta_t) = 0, set \theta_{t+1} = \theta_t; otherwise pick k* = arg min_{k \in [K]} min_{x \in S(w^k, \theta_t)} ||y_t - x||_2^2 and update \theta_{t+1} by solving the subproblem with k = k*.

Mini-batches

One technique to enhance online learning is to consider multiple observations per update. In online IMOP, this means computing \theta_{t+1} using |N_t| > 1 noisy decisions:

\theta_{t+1} = arg min_{\theta \in \Theta}  (1/2) ||\theta - \theta_t||_2^2 + \eta_t \sum_{i \in N_t} l_K(y_i, \theta).

However, we should point out that applying mini-batches might not be suitable here, as the update is drastically more difficult to compute even for |N_t| = 2 than the update with a single observation.

Analysis of convergence

Note that the proposed online learning algorithms are generally applicable to learn the parameter of any convex DMP. In this section, we show that the average regret converges at a rate of O(1/\sqrt{T}) under certain regularity conditions based on the ideal loss function l(y, \theta); namely, we consider the regret bound when using the ideal implicit update rule. We first introduce a few assumptions that are standard in the literature.

Assumption 3.1. (a) X(\theta) is closed and has a nonempty relative interior.
X(\theta) is also bounded; namely, there exists B > 0 such that ||x||_2 <= B for all x \in X(\theta). The support Y of the noisy decisions y is contained within a ball of radius R almost surely, where R < \infty; in other words, P(||y||_2 <= R) = 1. (b) Each component function f_l(x, \theta) is strongly convex in x with parameter \lambda_l > 0.

Regarding Assumption 3.1(a), assuming that the feasible region is closed and bounded is very common in inverse optimization. The finite support of the observations is needed since we do not want outliers to have too great an impact on our learning. Let \lambda = min_{l \in [p]} {\lambda_l}. It follows that w^T f(x, \theta) is strongly convex with parameter \lambda for w \in W_p. Therefore, Assumption 3.1(b) ensures that S(w, \theta) is a single-valued set for each w.

The performance of the algorithm also depends on how changes of \theta affect the objective values. For all w \in W_p and \theta_1, \theta_2 \in \Theta, we consider the gap between the weighted objectives under \theta_1 and \theta_2, and assume it grows at most proportionally to ||\theta_1 - \theta_2||_2. Basically, this assumption says that the objective functions will not change much when either the parameter \theta or the variable x is perturbed. It actually holds in many common situations, including the multiobjective linear program and the multiobjective quadratic program.

Let \theta* be an optimal inference for min_{\theta \in \Theta} \sum_{t \in [T]} l(y_t, \theta), i.e., an inference derived with the whole batch of observations available. Then, under the above assumptions, the regret R_T = \sum_{t \in [T]} (l(y_t, \theta_t) - l(y_t, \theta*)) of the online learning algorithm is of O(\sqrt{T}). We establish this regret bound by extending an existing result (Theorem 3.2 in the work cited above). Our extension involves several critical and complicated analyses of the structure of the optimal solution set S(w, \theta) as well as of the loss function, which is essential to our theoretical understanding. Moreover, we relax the requirement of smoothness of the loss function to Lipschitz continuity through an argument similar to existing lemmas.

Remark 3.2. The above regret bound applies to the ideal case where the loss function l(y, \theta) is used for the online learning. The regret bound for the surrogate loss is currently under investigation, as it requires more complicated analyses of the structure of \bigcup_{k \in [K]} S(w^k, \theta) and the corresponding l_K(y, \theta). Nonetheless, we numerically demonstrate that l_K(y, \theta), the approximation to l(y, \theta), indeed works well in learning the parameters of the DMP under various environments.

Experiments

In this section, we provide a multiobjective quadratic program (MQP) and a portfolio optimization problem to illustrate the performance of the proposed online learning Algorithms 1 and 2. The mixed integer second order conic problems, which are derived using the KKT conditions, are solved by Gurobi. All the algorithms are programmed in Julia. The experiments have been run on an Intel(R) Xeon(R) E5-1620 processor with a 3.60 GHz CPU and 32 GB RAM.

Learning the preferences and restrictions for an MQP

Consider a multiobjective quadratic optimization problem with two convex quadratic objectives f_l(x) = (1/2) x^T Q_l x + c_l^T x, l = 1, 2, over a polyhedral feasible region A x <= b, where the objective parameters (Q_l, c_l) and the feasible-region parameters (A, b) are fixed numerically. We suppose there are T decision makers. In each round, the learner receives one noisy decision; her goal is to learn the objective functions or restrictions of these decision makers. The noisy decision in each round t \in [T] is generated as follows. In round t, we suppose that the decision maker derives an efficient solution x_t by solving the PWS with weight w_t, which is uniformly chosen from W_2. Next, the learner receives the noisy decision y_t = x_t + \epsilon_t, corrupted by noise with a jointly uniform distribution, i.e., each element of \epsilon_t ~ U(-0.5, 0.5).
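A sketch of this data-generation scheme follows. The quadratic parameters Q1, Q2, c1, c2 and the polyhedron (A, b) are made-up stand-ins, not the paper's actual values.

# Illustrative generator of the noisy MQP decisions described above.
import numpy as np
from scipy.optimize import minimize

Q1 = np.array([[2.0, 0.5], [0.5, 1.0]]); c1 = np.array([1.0, 2.0])
Q2 = np.array([[1.0, 0.0], [0.0, 3.0]]); c2 = np.array([-2.0, 1.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]); b = np.array([10.0, 0.0, 0.0])

def efficient_solution(w1):
    # Solve the PWS for weight (w1, 1 - w1): an element of S(w, theta).
    Q = w1 * Q1 + (1.0 - w1) * Q2
    c = w1 * c1 + (1.0 - w1) * c2
    cons = {"type": "ineq", "fun": lambda x: b - A @ x}  # A x <= b
    res = minimize(lambda x: 0.5 * x @ Q @ x + c @ x, np.zeros(2),
                   constraints=[cons], method="SLSQP")
    return res.x

rng = np.random.default_rng(0)
decisions = []
for t in range(1000):                       # T = 1000 rounds
    w1 = rng.uniform()                      # weight drawn uniformly from W_2
    x_t = efficient_solution(w1)
    y_t = x_t + rng.uniform(-0.5, 0.5, 2)   # noise, each element ~ U(-0.5, 0.5)
    decisions.append(y_t)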
Learning the Objective Functions

In the first set of experiments, the learner seeks to learn c_1 and c_2 given the noisy decisions that arrive sequentially over T rounds. We assume that c_1 and c_2 each lie within a known box, T = 1000 rounds of noisy decisions are generated, and K = 41 weights from W_2 are evenly sampled. The learning rate is set to \eta_t = 5/\sqrt{t}. Then, we implement Algorithms 1 and 2. At each round t, we solve the update subproblems in parallel with 6 workers. To illustrate the performance of the algorithms statistically, we run 100 repetitions of the experiments. Figure 1a shows the total estimation errors of c_1 and c_2 in each round over the 100 repetitions for the two algorithms. We also plot the average estimation error over the 100 repetitions. As can be seen in this figure, convergence for both algorithms is quite fast. Also, the estimation errors over rounds for different repetitions concentrate around the average, indicating that our algorithm is quite robust to noise. The estimation error in the last round is not zero because we use a finite K to approximate the efficient set. We see in Figure 1b that Algorithm 2 is much faster than Algorithm 1, especially when K is large. To further illustrate the performance of the algorithms, we randomly pick one repetition using Algorithm 1 and plot the estimated efficient set in Figure 1c. We can see clearly that the estimated efficient set almost coincides with the real efficient set. We also plot our prediction of the distribution of the preferences for f_1(x) and f_2(x) among the 1000 decision makers. Since there are only two objective functions, it is sufficient to draw the distribution of the weight for f_1(x) (given that the weights of f_1(x) and f_2(x) sum to 1). As shown in Figure 1d, except in two endpoint areas, the distribution is roughly uniform, which matches our uniformly sampled weights. Indeed, comparing Figures 1c and 1d, we would like to point out that a boundary effect probably occurs in these two endpoint areas. As can be seen, although different weights are imposed on the component functions, the noiseless optimal solutions, as well as the observed decisions, are likely to merge together due to the limited feasible space in those areas. We believe this reflects an essential challenge in learning multiple objective functions in practice and it definitely deserves further study.

Learning the Right-hand Side

In the second set of experiments, the learner seeks to learn b given the noisy decisions that arrive sequentially over T rounds. We assume that b lies within a known box. T = 1000 rounds of noisy decisions are generated, and K = 81 weights from W_2 are evenly sampled. The learning rate is set to \eta_t = 5/\sqrt{t}. Then, we apply Algorithms 1 and 2. To illustrate the performance of the two algorithms, we run 100 repetitions of the experiments. Figure 3a shows the estimation error of b in each round over the 100 repetitions for the two algorithms, together with the average estimation error over the 100 repetitions. As can be seen in the figure, convergence for both algorithms is quite fast. In addition, we see in Figure 3b that Algorithm 2 is much faster than Algorithm 1.

Learning the expected returns in portfolio optimization

In this example, we consider various noisy decisions arising from different investors in a stock market.
More precisely, we consider a portfolio selection problem, where investors need to determine the fraction of their wealth to invest in each security in order to maximize the total return and minimize the total risk. The portfolio selection process typically involves cooperation between an investor and a portfolio analyst: the analyst provides an efficient frontier on a certain set of securities to the investor, and the investor then selects a portfolio according to her preference over returns and risks. Analysts use the classical Markowitz mean-variance portfolio selection model

min_x  ( -r^T x,  x^T Q x )   s.t.  1^T x = 1,  0 <= x_i <= b_i  for all i \in [n],

where r \in R^n_+ is a vector of individual security expected returns, Q \in R^{n x n} is the covariance matrix of security returns, x is a portfolio specifying the proportions of capital to be invested in the different securities, and b_i is an upper bound on the proportion of security i, for all i \in [n].

Dataset: The dataset is derived from monthly total returns of 30 stocks from a blue-chip index which tracks the performance of the top 30 stocks in the market when the total investment universe consists of thousands of assets. The true expected returns and the true return covariance matrix for the first 8 securities are given in the Appendix. Suppose a learner seeks to learn the expected returns of the first five securities that an analyst uses. The noisy decisions are generated as follows. We set the upper bounds on the proportions of the 8 securities to b_i = 1.0 for all i. Then, we sample T = 1000 weights such that the first element of w_i, ranging from 0 to 1, follows a truncated normal distribution derived from a normal distribution with mean 0.5 and standard deviation 0.1. In what follows, we do not distinguish the truncated normal distribution from the normal distribution because their difference is negligible. These weights are then used to generate optimal portfolios on the efficient frontier, which is plotted in Figure 4a. Subsequently, each component of these portfolios is rounded to the nearest thousandth, which can be seen as measurement error. The learning rate is set to \eta_t = 5/\sqrt{t}. At each round t, we solve the update using parallel computing.

Table 1: Estimation error of the expected returns for different numbers of sampled weights K.

K                       6        11       21       41
||r_hat - r_true||_2    0.1270   0.1270   0.0420   0.0091

In Table 1 we list the estimation error of the estimated expected returns for different K \in {6, 11, 21, 41}. As shown in the table, the estimation error becomes smaller as K increases, indicating a better approximation accuracy of the efficient set when using a larger K. We also plot the efficient frontier estimated with r_hat for K = 41 in Figure 4a. We can see that the estimated efficient frontier is very close to the real one, showing that our algorithm works quite well in learning expected returns in portfolio optimization. We also plot our estimate of the distribution of the weight of f_1(x) among the 1000 decision makers. As shown in Figure 4b, the distribution is roughly normal; the result of a chi-square goodness-of-fit test supports this hypothesis.

Second, suppose Assumption 3.3(b) holds. Then

l(y, \theta_1) + l(y, \theta_2) - l(y, \theta_1 + \theta_2) = ||y - x(\theta_1)||_2^2 + ||y - x(\theta_2)||_2^2 - ||y - x(\theta_1 + \theta_2)||_2^2.

C Omitted Examples
Predicting rarity and decline in animals, plants, and mushrooms based on species attributes and indicator groups

In decisions on nature conservation measures, we depend largely on knowledge of the relationship between threats and environmental factors for a very limited number of species groups, with relevant environmental factors often being deduced from the relationship between threat and species traits. But can relationships between traits and levels of threat be identified across species from completely different taxonomic groups, and how accurately do well-known taxonomic groups indicate levels of threat in other species groups? To answer these questions, we first made a list of 152 species attributes covering morphological and demographic traits and habitat requirements. Based on these attributes we then grew random forests of decision trees for 1183 species in the 18 different taxonomic groups for which we had Red Lists available in the Netherlands, using these to classify animals, plants, and mushrooms according to their rarity and decline. Finally, we grew random forests for four species groups often used as indicator groups to study how well the relationship between attributes and decline within these groups reflected that relationship within the larger taxonomic group to which these groups belong. Correct classification of rarity based on all attributes was as high as 88% in animals, 85% in plants, and 94% in mushrooms, and correct classification of decline was 78% in animals, 69% in plants, and 70% in mushrooms. Vertebrates indicated decline in all animals well, as did birds for all vertebrates and vascular plants for all plants. However, butterflies poorly indicated decline in all insects. Random forests are a useful tool to relate rarity and decline to species attributes, thereby making it possible to generalize rarity and decline to a wider set of species groups. Random forests can be used to estimate the level of threat to complete faunas and floras of countries or regions. In regions like the Netherlands, conservation policy based on attributes known to be relevant for the decline of birds, vertebrates or plants will probably also impact all aboveground terrestrial and freshwater macrofauna or macrophytes.

Introduction

Many countries have ratified the Convention on Biological Diversity, thereby agreeing to protect their biodiversity and to prevent extinction of their native species (http://www.cbd.int/convention/parties/list/). Globally, the number of known species is estimated to be around 1.24 million, with many more species as yet unknown. This vast number of species makes it difficult to inform policy-makers and the general public about changes in biodiversity, because it is virtually impossible to monitor all species. As a result, it is difficult to ascertain which species are in most urgent need of conservation measures, and whether conservation measures are sufficient to prevent extinctions. Out of sheer necessity, policy-makers and nature managers generally focus on a limited number of selected species, thereby assuming that these are representative of all native species. The European Union, for instance, focuses on the protection of birds through the Birds Directive and on a scatter of other species through the Habitats Directive (http://bd.eionet.europa.eu/activities/Natura_2000/reference_portal). In this selection, vertebrates and butterflies are clearly overrepresented.
In global assessments of biodiversity change, vertebrates are also overrepresented (Millennium Ecosystem Assessment 2005). For example, the most widely used indicator of global biodiversity change, the Living Planet Index, is an aggregated statistic composed of trends in vertebrate species only, in which birds and mammals are currently overrepresented. Although there may be good reason to focus on vertebrates and other more visible species, such as butterflies, rather than selecting species hardly recognized by the general public, a key question remains whether the species selected for use in nature policy and management may be regarded as representing all species. In an effort to test the issue of representativeness, some authors have examined whether trends in one species group are similar to those in other species groups. Others have compared the overlap of diversity hotspots between species groups (Reid 1998; Heino 2002). If trends or hotspots coincided, one could thin out the number of species groups that need to be taken into account for information on the need for and progress of conservation actions. However, even if trends and hotspots coincided across species groups (and often they do not), it remains unclear whether the environmental factors determining trends and hotspots are the same. Consequently, generalization of findings beyond the species groups studied is difficult. A more fruitful and often applied approach is to examine which traits make species vulnerable to threats, because traits may be linked to the environmental factors causing the species to decline. The advantage of this approach is that it makes it easier to generalize findings beyond the particular species studied. Most studies on the relationship between traits and threat status are within-species-group studies, for example in birds, fowl, bats, butterflies, moths, and beetles. Several other studies have involved cross-taxon analyses, mainly to assess common or different traits across a handful of species groups, such as mammals, four groups of invertebrates (Kleukers and Reemer 2003), three groups of vertebrates, mammals and arthropods (Jennings and Pocock 2009), tropical forest species, and three invertebrate groups together with birds. Here, we aim to find a method that is universal in the sense that it predicts whether a species is rare or in decline across multiple taxonomic groups based solely on traits and other species attributes. This would enable us to estimate the level of threat of complete faunas and floras of countries and regions. To do so, we examined the relationship between rarity and decline in 1183 Dutch species of 18 different taxonomic groups and 61 species traits, transformed into 152 attributes. We use a broad definition of "trait", including morphological and demographic traits as well as habitat requirements of species. To study the relevance of traits for threats, regression analysis is a favored approach. However, regression analysis has considerable drawbacks, in particular because it is difficult to treat many different traits, to include nonadditive and nonlinear relationships between trends and traits, and to handle nonadditivity, nonlinearity, collinearity, and interactions between traits. Decision trees are an alternative method. These may perform better in categorizing species than regression-based approaches, as they suffer less from all the aforementioned difficulties.
They treat nonlinearity and interactions without the need to incorporate these features explicitly a priori in a model. Besides, there is no a priori need for trait selection in the case of a high number of traits or collinearity. Another advantage over regression approaches is that pseudoreplication due to phylogenetic relationships between species is no longer an issue. A recent development in decision tree analysis is the use of random forests, a well-established technique in machine learning but relatively new to ecology (Breiman 2001; Boyer 2009). Random forests consist of a large number of decision trees, each based on a random sample of species and traits to prevent overfitting (Breiman 2001). They have the advantage that there is no need to omit part of the data set from the training data set for use in validation, because each decision tree of the forest is grown based on a subset of the species, with the classification error of the tree being monitored on the other species (out-of-bag approach, Breiman 2001). Our first main research question is focused on the performance of random forests: How well do random forests of decision trees grown from data of taxonomically very different species predict rarity and decline in the species? For our second main question we use random forests to gain insight into the indicative value of specific taxonomic groups: How well does the relationship between attributes and decline within frequently used indicator groups reflect that relationship within the larger taxonomic groups to which they belong? Materials and Methods Rarity and decline in the studied species were derived from the existing Red Lists of the Netherlands. The Dutch species groups evaluated for the Red Lists are not a representative sample of all the species groups occurring on Dutch territory. A preliminary comparison of the species evaluated for Red Lists and all known Dutch species showed that small species, marine species, and soil species are underrepresented. Therefore, our data set could be regarded as representative for aboveground terrestrial and freshwater macrofauna, macrophytes, and macrofungi. We believe our results can be regarded as indicative for areas like the Netherlands, that is, temperate areas with a high degree of urbanization and intensive land use. Selection of species In the Netherlands, Red Lists are available for 18 taxonomic groups (Table S1). All Dutch species within these groups are evaluated for the Red Lists, except those for which rarity or decline is insufficiently known. A total of 6097 evaluated species are available, from which 1183 species were selected for further analysis (Table S1). A random selection from all evaluated species would yield virtually only plant and fungus species. To achieve a better distribution across the taxonomic groups, we randomly selected a number of species within each group. This number was proportional to the natural logarithm of the number of species per group or the number based on equal numbers of species per group, whichever was higher. In addition, from groups with only a few species, for example, reptiles, we selected all species. As no validation data set is needed when applying Random Forest analyses, all the latter species could be included in our analyses. Because for some species the experts were not able to find all the information needed, the actual numbers of species analyzed were 622 animals, 222 plants, and 248 mushrooms. According to the Dutch Red List criteria, a species' threat status is determined by the trend since 1950 and its current range or abundance within the Netherlands.
Rarity and decline categories of the Dutch Red Lists are based on information on past and present distribution, corrected for known biases due to differences in research effort between species, but may differ slightly among species groups. They are generally accepted by experts as the best estimates of the actual rarity and decline in the species. Analyses were performed on rarity and decline, and not on Red List status, because these two criteria are the ecological features of a species that might be causally related to traits. "Rarity" was defined as a binary variable that states whether the species is rare in the Netherlands ("rare" and higher categories in the Dutch Red Lists), "decline" as a binary variable indicating whether the species range is declining in the Netherlands ("moderately declining" and higher categories in the Dutch Red Lists) (for details see de Iongh and Bal 2007). These definitions were chosen so that the prevalence of rarity and decline, that is, the number of rare and declining species divided by the total number of species within our data set, was as close as possible to 0.5, as random forests perform best when class membership is approximately equal. Overall prevalence was 0.57 for rarity and 0.46 for decline. In nature conservation policy-making and management, groups that are often implicitly regarded as indicator groups are the vertebrates, birds, butterflies (e.g., Thomas 2005), and vascular plants (Vamosi and Vamosi 2008). For studying the indicative value of these groups for the higher taxonomic group to which they belong, we grew random forests for decline in our sample of the species in the indicator group. We then used these random forests to classify our sample of the higher taxonomic group into either declining or nondeclining species. In the case of the vertebrates, the higher taxonomic group included all animal species; with the birds, all vertebrates; with the butterflies, all insects; and with the vascular plants, all plants. Selection of traits To identify traits predicting rarity and decline in species across taxonomic groups, we had to find traits that are shared by as many species as possible and that are ecologically relevant. We distinguished four main categories of traits that are known to be relevant for the range or abundance of a species (rarity) and its change in range or abundance (decline). The first category is formed by traits related to the niche of the species; these traits are connected to abiotic or biotic factors or susceptibility to isolation (Pulliam 2000; Silvertown 2004; Soberón 2007). Traits connected to abiotic factors include habitat and climatic requirements. Traits connected to biotic factors include trophic level and competitive strength. Among the factors reinforcing isolation are poor dispersal capacity and occurrence in isolated habitat types. The second category contains traits related to direct human influence. Species may, for example, be harvested or protected by humans. Certain traits may make species more vulnerable to stochastic processes than others. Traits related to stochastic processes are therefore regarded as a third category. Stochastic processes are usually subdivided into genetic, population-dynamic, and environmental stochastic processes. Examples include the number of eggs per female and life span.
While the first three categories treat traits as fixed characteristics, the fourth category contains traits related to the flexibility of traits, that is, trait evolvability and trait plasticity (Forsman and Hagman 2009). All four categories were further subdivided, leading to a list of 61 traits intended to be as complete as possible (Table 1). To examine whether important traits may have been missed, we compared our list with traits cited in the literature. Traits marked with an X in Table 1 were also indicated as relevant in at least one of 32 recent articles (Supporting Information). Certain traits found in the literature could not easily be included in our categorization. These traits appeared to be either species group specific (e.g., nest site in birds) or covered by a combination of traits in our list (e.g., habitat disturbance in plants). The traits listed in Table 1 were used to design a questionnaire that was sent to species group experts, who were asked to fill in the traits for the species within "their" species group. Apart from the answers to these questionnaires, for most of the species the distribution within the Netherlands was also known, both in the past and at present (e.g., Hustings and Vergeer 2002; Creemers and van Delft 2009). This information was used to assess the range of the species in the Netherlands between 1950 and 1990. It was also used to assess the species' preference for certain land-use categories (LUC) and physical-geographical regions (PGR). The answers to the questionnaires, together with information on the distribution of the species, were our independent variables. In accordance with the decision tree literature, we call these variables the attributes of the species. In several cases more than one attribute could be regarded as reflecting a certain trait. Some traits turned out to be irrelevant for the Dutch species (e.g., altitude). In some cases, we did not succeed in collecting information on the trait (e.g., intake of oxygen or nutrients through the skin) or assumed that the trait was correlated with other traits (e.g., body mass and body length) (Table 1). All attributes were transformed into categorical variables in order to avoid the influence of cardinality on the importance of attributes. In the case of existing categorical variables of no relevance for certain species (whether a species prefers stagnant or running water is of no relevance for nonaquatic species), the variable was transformed into dummy attributes (species of stagnant water; species of running water) so that the species for which the variable had no relevance had zeros for all the dummy attributes. In the case of scale variables, the scale was divided into five equal parts, leading to a five-point ordinal attribute. When needed, the raw values of the scale were log transformed to approach a normal distribution before this transformation into an ordinal attribute. For the preferences of species for a LUC or a PGR, the group-equalized phi coefficient was calculated (De Cáceres and Legendre 2009). LUC and PGR "specialization" is the square root of the sum of squares of the phi coefficients of, respectively, all LUC and PGR categories. "Commonness 1950-1990" is the logit transformation of the number of all Dutch grid cells in which the species was observed at least once over the complete period, divided by the number of grid cells in which the species group was observed in the period 1950-1990. This attribute was not included in the analyses of rarity.
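As an illustration of these attribute transformations, the following Python sketch (not the authors' code; the study itself worked in R, and the group-equalized phi coefficient is omitted here) computes the logit-transformed commonness and a five-point ordinal attribute from a scale variable:

import numpy as np

def logit_commonness(cells_species, cells_group):
    # "Commonness 1950-1990": logit of the fraction of grid cells in which
    # the species was seen, relative to cells in which its group was seen.
    # Assumes 0 < cells_species < cells_group.
    p = cells_species / cells_group
    return np.log(p / (1 - p))

def five_point_ordinal(values, log_transform=True):
    # Scale variables: optionally log-transform (assumes positive values),
    # then cut the observed range into five equal parts, yielding an
    # ordinal attribute with levels 1-5.
    v = np.log(values) if log_transform else np.asarray(values, dtype=float)
    edges = np.linspace(v.min(), v.max(), 6)      # 5 equal-width bins
    return np.clip(np.digitize(v, edges[1:-1]) + 1, 1, 5)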
The maximum number of categories of an attribute is 10, the minimum number two, but most attributes have either two or five categories (Table S2). Attributes that turned out to have more than 99% of the species in one category were omitted from the analyses because these attributes were deemed to be uninformative. Attributes that had more than 25% of their values missing were also omitted, because it was feared that replacing missing values by imputed values might introduce a bias in attributes with a large number of missing values. Attribute availability All the species in our analyses have been evaluated for Red List status and are therefore well studied. However, if the results of our analyses are to be used to estimate the threat to all the species of the Netherlands, one of our ultimate goals, due consideration should be given to the fact that for most of the nonevaluated species much less information is available on traits and distribution. To be able to study the effect of this possible lack of information on species classification, we drew up three groups of attributes based on the expected availability of information. All our attributes are attributes known for the "evaluated species". Of these, a subset is known for the "well-known species". These attributes cover ecological and behavioral information, but not distribution. The attributes known for the "poorly known species" are again a subset of these attributes. They include morphological and taxonomical information. This classification of attributes was based on expert judgment and can be found in our overview of attributes in Table S2. Random forests We applied Random Forest analysis, using the package "randomForest" of R 2.12.2 (version 4.6-6; Breiman and Cutler 2012). The random forests were evaluated by means of correctness of classification, that is, the proportion of species correctly classified. We tested whether the rare or declining species were classified differently from not rare or not declining species, and whether two classifications differed significantly from each other, with the Pearson Chi-square test of R. For more insight, we also give the risks of classification errors, that is, the false-positive rate or probability of Type I errors (the probability of a species that is common or not declining being classified as rare or declining) and the false-negative rate or probability of Type II errors (the probability of a species that is rare or declining being classified as common or not declining) (Fig. S1). The probability of Type I errors should be low in order to minimize the risk of limited resources for conservation policy being used for nonthreatened species, whereas the probability of Type II errors should be low to minimize the risk of a threatened species not being recognized as such. For proper classification, then, both Type I and Type II error probabilities should be low. [Table 1 note: Column R indicates analysis of the trait in one or more references; the last column indicates which attributes were used in this study as a proxy for the trait. Attribute numbers are specified in Table S2.] In our discussion, we regard probabilities of errors less than 0.2 as "low". For each random forest grown we followed the same procedure. First, the missing values in the data set were replaced by imputed values, which were based on the values of proximate species according to 1000 decision trees. Imputing was iterated 10 times. Then, a random forest of 10,000 trees was grown from the imputed data set.
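The procedure above was carried out with R's randomForest; the sketch below is a rough Python analogue using scikit-learn, not the authors' code. A k-nearest-neighbour imputer stands in for the proximity-based imputation ("values of proximate species"), and X and y are a hypothetical attribute matrix and 0/1 decline labels:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer

def grow_forest(X, y, n_trees=10_000):
    y = np.asarray(y)
    # Rough stand-in for randomForest's proximity-based imputation.
    X_imp = KNNImputer(n_neighbors=5).fit_transform(X)
    # 10,000 trees, default subsampling of species and attributes,
    # as described in the text.
    forest = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                    random_state=0).fit(X_imp, y)
    # Out-of-bag predictions play the role of a validation set.
    y_oob = np.argmax(forest.oob_decision_function_, axis=1)
    correct = np.mean(y_oob == y)
    type1 = np.mean(y_oob[y == 0] == 1)  # stable species classified as declining
    type2 = np.mean(y_oob[y == 1] == 0)  # declining species classified as stable
    # For the indicator-group analyses, the same fitted forest is simply
    # applied to the (imputed) attributes of the higher taxonomic group:
    # forest.predict(X_higher_imp)
    return forest, X_imp, correct, type1, type2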
The defaults of the randomForest package were kept for the number of species that were randomly selected for each tree, as well as for the number of attributes that were randomly tested at each node as candidates for the split (Breiman and Cutler 2012). Stability of the classification error was always checked visually. In the case of the analyses of the indicative value of specific species groups, the influence of the random parts of the procedure on the outcome was checked by growing the random forest, including the imputation, 10 times. In the results, these analyses can be recognized by the error bars. The importance of an attribute for the classification of the species can be estimated by comparing the correct classification of the random forest with that of a random forest in which the values of the attribute are randomly permuted (Breiman and Cutler 2012). The larger the decrease in correct classification, the more important the attribute is. As we have no formal way of making a distinction between the "really" important attributes and the others, we arbitrarily give only the 10 most important attributes, in order of importance, in the results. When classifying the species of a higher taxonomic group using a random forest of an indicator group, we only classified decline in the species, assuming that it is most relevant for conservation. We used the imputed data of these species because we did not want the evaluation of the indicative value of the indicator group to be affected by an unbalanced lack of information. Results Classification of animals, plants, and mushrooms by attributes When random forests were grown based on all available attributes in our data set, these classified 87.9% of the animals, 84.7% of the plants, and 94.3% of the mushrooms to the correct rarity class (Table 2). The probabilities of Type I and Type II errors were small in all cases, except for the Type I error in plants, which was over 0.2. In all three random forests almost all important attributes were preferences for certain LUC and PGR (Table 3). If these random forests were to be used to classify species not included in our learning set, not all attributes would be known for all species. The random forests correctly classify the rarity of 66.6% of the animals when based on attributes expected to be known for poorly known species, 73.0% of the plants, and 65.3% of the mushrooms (Table 2). In mushrooms, this classification does not differ from a random classification (Chi-square test, P = 0.2239; Table 2). With the lower rate of correct classifications due to fewer attributes being available, the risk of error obviously increases (Table 2). In all three cases, Type I errors have a higher probability than Type II errors. Attributes of well-known species do not significantly improve correct classification, but the improvement from attributes of well-known species to those of evaluated species is significant in all three groups (Chi-square test animals: P < 0.001; plants: P = 0.005; mushrooms: P < 0.001; Fig. 1). Random forests grown based on all available attributes correctly classified the decline in 76.9% of the animals, 68.9% of the plants, and 70.2% of the mushrooms (Table 2). Only for animals is the probability of a Type I error below 0.2. In all three cases the probability of a Type II error is higher than that of a Type I error and over 0.3. The attribute "Commonness in 1950-1990" is the only attribute that was important in animals, plants as well as mushrooms (Table 3).
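The permutation-based importance measure described above can be sketched as follows, continuing from the forest of the previous sketch (scikit-learn permutes on whatever data it is given, whereas R's randomForest permutes within the out-of-bag samples; names is a hypothetical list of attribute labels):

import numpy as np
from sklearn.inspection import permutation_importance

def top_attributes(forest, X_imp, y, names, k=10):
    # Importance = drop in correct classification when one attribute's
    # values are randomly permuted, averaged over repeats.
    result = permutation_importance(forest, X_imp, y, n_repeats=10,
                                    random_state=0)
    order = np.argsort(result.importances_mean)[::-1][:k]
    return [(names[i], result.importances_mean[i]) for i in order]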
Classification of decline, when based solely on the attributes of poorly known species, is correct in 64.3% of the animals, 61.3% of the plants, and 46.0% of the mushrooms. Again, in mushrooms this classification does not differ from a random classification (Chi-square test, P = 0.182; Table 2). In all three cases error probabilities are high, with Type II errors more probable than Type I errors (Table 2). Now, in mushrooms, classification by attributes of well-known species leads to a marked improvement (Chi-square test mushrooms: P = 0.003). The improvement from attributes of well-known species to those of evaluated species is significant in animals and mushrooms, but not in plants (Chi-square test animals: P = 0.003; plants: P = 0.091; mushrooms: P = 0.011; Fig. 1). Classification by indicator groups The forests were able to classify the decline in 65.7% of the vertebrate species correctly, 62.3% of the birds, 87.8% of the butterflies, and 75.2% of the vascular plants (Table 4). The bird classification was no different from a random classification (Chi-square test, P = 0.573). In the case of vertebrates, birds, and vascular plants, the risk of Type II errors was high: around 0.5 or higher. In butterflies, the probability of Type I errors was high (0.3), but that of Type II errors extremely low (0.05). The question now is whether these generally high risks of errors affect the indicative value of the indicator groups. This was examined by applying the random forests found to the higher taxonomic group of the species concerned. When the random forests of vertebrates, birds, butterflies, and vascular plants are used to classify the decline in all animals, vertebrates, insects, and plants, respectively, 73.0%, 75.4%, 57.1%, and 79.7% of the latter are correctly classified (Table 5). So, the random forests of vertebrates, birds, and vascular plants appear to classify the species in the group to be indicated better than the indicator group itself, but with the butterfly random forest that classification is much worse (compare Table 4 with Table 5). Before any conclusions are drawn from this, consideration needs to be given to two possible reasons for the difference between the classification of the indicator group and that of the group to be indicated, even in the case of exactly the same probabilities of Type I and Type II errors. First, prevalence in the indicator group might differ from that in the group to be indicated. The expected percentage of correct classification in Table 5 is the percentage of correct classification of the indicators (Table 4), corrected for differences in prevalence between the indicator group and the higher taxonomic group using equation (S1) (Fig. S2). Second, as the separate decision trees of a random forest are based on random selections of species and attributes, the effect of these random procedures needs to be duly considered. Figure 2 shows the difference between the expected percentage of correct classification and the actual percentage, including the effect of the random parts of the random forest procedure. The actual percentages are clearly higher in vertebrates indicating all animals, in birds indicating all vertebrates, and in vascular plants indicating all plants, but lower in butterflies indicating all insects.
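Equation (S1) itself is only given in the Supporting Information; presumably it follows from decomposing correct classification over the two true classes, which is enough to see how a shift in prevalence alone changes the expected percentage correct. A minimal sketch:

def expected_correct(prevalence, type1, type2):
    # Nondeclining species (share 1 - prevalence) are classified correctly
    # with probability 1 - type1; declining species (share prevalence) with
    # probability 1 - type2. Keeping the indicator group's error rates but
    # swapping in the higher taxon's prevalence gives the expected value
    # against which Table 5 is compared.
    return (1 - prevalence) * (1 - type1) + prevalence * (1 - type2)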
The fact that the butterflies perform poorly as an indicator group for insects is also obvious from the probabilities of errors: although the probability is zero for Type II errors, it is almost 0.9 for Type I errors (Table 5). [Table note: Definitions of attributes are given in Table S2; LUC, land-use category; PGR, physical-geographical region; NL, Netherlands.] Based on the probability of errors, the vascular plants perform best, with no probability of errors over 0.23. In the case of vertebrates, the risk of a Type I error is high, while that of a Type II error is low. With birds this pattern is inverted, with a low probability of a Type I error and a high probability of a Type II error (Table 5). When indicator groups of species are used to design conservation measures intended to be effective for species outside the indicator group too, the relationships between attribute categories and decline in the indicator group are used under the assumption that those relationships are also valid in the group to be indicated. For an initial check on this, we compared the 10 most important attributes of the random forests of the indicator group with the 10 most important attributes of the random forests of the higher taxonomic groups (Table 6). Of the 10 most important attributes of the all-animal random forest, five are also important in the vertebrate forest. The vertebrate random forest has six most important attributes in common with the bird forest. Of the 10 most important attributes of the insect random forest, only three are also important in the butterfly forest. And the all-plant random forest again has six most important attributes in common with the vascular plant forest. Discussion Classification of animals, plants, and mushrooms by attributes Random forests using traits and species from highly different groups prove to be a powerful tool for classifying species rarity when all available attributes are used. Understandably, knowledge of the preference of the species for certain LUC and PGR is a crucial factor in this high predictability of rarity. When this information is lacking, correct classification drops, with a sharp increase in the probabilities of errors. Decline is more difficult to predict than rarity. Knowledge of preferences for LUC and PGR seems to be less vital to correct classification of decline than in the case of rarity, but knowledge of the commonness of the species in the past always contributes to the classification, which is consistent with the results of previous studies showing a negative relationship between range size and decline (e.g., Walker and Preston 2006). Our results on classification of all species are better than or lie in the same range as those of other studies using decision trees (Bekker and Kwak 2005), or a little lower. However, these other studies all concerned very limited species groups with group-specific attributes, in one case including known extrinsic threats. Our analyses seem to show that the amount of information used for growing a forest may be crucial. If there are only a few attributes available, as in the case of poorly known mushrooms, classification is no better than random classification. However, other studies that have applied decision tree approaches show highly correct classifications with a limited number of traits (Bekker and Kwak 2005). [Table 5 note: The expected correct classification is the correct classification of Table 4 applied to the higher taxonomic group, that is, corrected for the difference in prevalence between the indicator group and the higher taxonomic group (see Supporting Information). Prev.: prevalence, the number of declining species divided by all species.]
This may be due to the fact that in these studies on specific groups, ecological knowledge was used to select specifically tailored traits. In our study, we explicitly selected nongroup-specific traits. Classification by indicator groups In three of our four tests of the indicative performance of a species group, correct classifications were found to be higher than expected. In these cases, then, the random forest actually performed better on the nonlearning data set than on the learning data set. This seems counterintuitive, so how can this result be explained? It would appear that the random forests of the indicator group use attributes and decision criteria that are indeed relevant for the complete higher taxonomic groups, but that the relationship between the attributes/criteria and the decline in the higher taxonomic groups is stronger than in the indicator group; in other words, the species of the higher taxonomic group show fewer exceptions to the decision rules of the random forests. This could be a consequence of the fact that the indicator group is more often the focus of human attention than the other groups, resulting in focused human activities like protection and control that may blur the relationships between ecological traits and decline. Although we endeavored to incorporate these mechanisms in our trait list, we may not have succeeded in capturing all subtleties. Formally, it may be argued that the fact that the species of the higher taxonomic group are classified better than those of the indicator group shows that the indicative value of the indicator groups is limited. Using these indicator groups for estimating the decline in nonevaluated species will result in overestimation of the probabilities of Type I and Type II errors, that is, in overestimation of uncertainty. From a nature conservation point of view, however, this is actually very good news: when the attributes of these indicator random forests are used to find environmental factors for conserving species, these factors may work even better for species outside the indicator group. Of course, our results also show that this might not always be the case. It must be concluded that butterflies are not a good indicator group for insects. Based on ecological reasoning, Thomas (2005) concluded that butterflies may be adequate indicators for terrestrial insects. Our insects included freshwater species and many other groups with ecological requirements very different from those of butterflies. Hence, insects are probably too heterogeneous a group to be indicated solely by butterflies. Applications Our results give confidence that the random forests of all animals, plants, and mushrooms are well able to classify the aboveground terrestrial and freshwater macrofauna, macrophytes, and macrofungi according to their rarity and decline. [Table 6 note: Attributes in bold are also among the ten most important attributes in the indicator group. Definitions of attributes are given in Table S2; LUC, land-use category; PGR, physical-geographical region.] Combining these classifications per species can then be used to estimate the Red List status of all species of which the attributes are known. This could lead to an estimation of the overall level of threat to Dutch biodiversity.
However, while this means that a smaller number of groups and species can be used to predict the Red List status of a much wider group, it does not render distribution data on the wider group redundant, as LUC and PGR preferences are used as attributes. Furthermore, the probabilities of Type I and Type II errors are too high to have great confidence in the prediction of the status of a single species, and the method should therefore best be restricted to predicting the conservation status of groups of species. The information the model provides on the causes of threats is limited to the traits used in the model, which means that more specific questions pertaining to conservation policy cannot be addressed in any detail. Knowing that birds, vertebrates, and vascular plants are good indicator groups for all vertebrates, all animals, and all plants, respectively, raises the question whether some of the groups now included in our data set might be redundant for getting an overall picture of levels of threat to biodiversity. Which taxonomic groups could possibly be omitted without changing the performance of our random forest significantly? A follow-up study to answer this question could result in lists of groups, or even a limited list of species, that could most efficiently deliver information on the relationship between attributes and decline in aboveground terrestrial and freshwater macrospecies. Also, nature management may be more cost-effective when resources can be devoted to a limited number of well-selected species or groups, with empirically assessed and known uncertainty that other species will be protected as well. The fact that the number of attributes may be important for finding random forests that yield good classification does not mean that attributes may not be redundant. Future studies could seek to identify those attributes that are not required for good classification. Methods for doing so are available. Given the advantages of random forest techniques over regression-based techniques, cited earlier, we consider random forests to be a promising technique for studying relationships between traits and a wide variety of species characteristics relevant for policy-making. Future avenues for employing the methodology might be the study of invasiveness, pathogenicity, and range shift. Conclusions Random forests are found to be powerful analysis instruments. Using traits and requirements, they are able to correctly classify species in highly different taxonomic groups into categories of rarity and decline. They may therefore be helpful in finding efficient indicator sets of species and attributes. In designing nature conservation measures we depend largely on knowledge of the relationship between threats and environmental factors for a very limited number of species. Generally speaking, well-known species groups are implicitly used as an indicator group for other species. We found that three of four test analyses of the indicative performance of a species group proved to perform well, while one indicator species group indicated the species of the higher taxonomic group poorly. The matching importance of some attributes between taxonomic groups shows that these attributes are of key importance and may help to focus conservation policy. We should emphasize, however, that this does not necessarily mean that the different taxonomic groups will show identical responses to conservation measures based on these attributes.
Conservation measures for butterflies may not be effective for other insects, though. Given that insects are by far the most species-rich class of eukaryotes, this is a conclusion of great concern. Our study shows that it is possible to construct models based on limited taxonomic groups that predict threats to all species based on their traits and requirements. More importantly, it is possible to check the indicative value of species groups, provided sufficient information is available on the Red List status of species from different groups. As this type of information is becoming increasingly available, such checks should become standard procedure. Table S1. Dutch Red Lists used in this study. N spec.: number of Dutch indigenous and reproducing species; Eval.: number of species evaluated for the Red List; Sel.: number of species selected for this study. The number of Dutch species is based on the most recent available list. Table S2. List of attributes. Availability: Ev, evaluated species; We, well-known species; Po, poorly known species. Species group: An, animals; Pl, plants; Mu, mushrooms. Figure S1. Definitions of prevalence, correct classification, Type I error and Type II error probability. Figure S2. Theoretical effect of the prevalence of the species group, that is, the number of declining species divided by the total number of species, and of Type I and Type II error probabilities on correct classifications. The effects of three example combinations of Type I and Type II error probabilities are shown.
Sensing of carboxylate drugs in urine by a supramolecular sensor array. A supramolecular sensor array consisting of eight chemosensors embedded in a hydrogel matrix was used to sense carboxylate drugs. The discriminatory power of the array has been evaluated using principal component analysis and linear discriminant analysis. The eight-member sensor array has been shown to accurately identify 14 carboxylates in water with 100% classification accuracy. To demonstrate the potential for practical utility in the physiological environment, analysis of carboxylate drugs in human urine was also performed achieving 100% correct classification. In addition, the array performance in semiquantitative identification of nonsteroidal anti-inflammatory drugs has been investigated, and the results show that the sensor array is able to differentiate six typical nonsteroidal anti-inflammatory drugs at concentrations of 0.5-100 ppm. This illustrates the potential utility of the designed sensor array for diagnostic and environmental monitoring applications.
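A minimal sketch of the chemometric workflow named in this abstract (not the authors' code): classifying analytes from eight-channel sensor-array fingerprints with linear discriminant analysis and inspecting variance structure with PCA. fingerprints (n_samples x 8 channels) and analyte_labels are hypothetical measurements:

from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def evaluate_array(fingerprints, analyte_labels):
    # Two PCA scores are typically kept for visualising class separation.
    scores = PCA(n_components=2).fit_transform(fingerprints)
    # Cross-validated LDA accuracy is the usual basis for the
    # "% correct classification" figures quoted in sensor-array papers.
    lda = LinearDiscriminantAnalysis()
    accuracy = cross_val_score(lda, fingerprints, analyte_labels, cv=5).mean()
    return scores, accuracy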
Self-Blame and Moral Responsibility Self-blame is an integral part of our lives. We often blame ourselves for our failings and experience familiar unpleasant emotions such as guilt, shame, regret, or remorse. Self-blame is also what we often aim for when we blame others: we want the people we blame to recognize their wrongdoing and blame themselves for it. Moreover, self-blame is typically considered a necessary condition for forgiveness. However, until now, self-blame has not been an integral part of the theoretical debate on moral responsibility. This volume presents twelve new essays by leading moral philosophers, who set out bold new theories of the nature and ethics of self-blame, and the interconnection between self-blame and moral responsibility. The essays cast new light on traditional problems in the debate on moral responsibility and open new, exciting avenues for research in moral philosophy, moral psychology and the philosophy of punishment.
# Problem Solving/Algorithms/Implementation/Drawing Book.py

# First draft, kept for reference; (p - n) // 2 is wrong (it is never
# positive, since p <= n), which is presumably why it was commented out:
# def pageCount(n, p):
#     a = (p - n) // 2
#     b = n // 2
#     return a if a < b else b
# p = int(input())
# n = int(input())
# print(pageCount(n, p))

def solve(n, p):
    # Turning from the front takes p // 2 page turns; turning from the back
    # takes n // 2 - p // 2. The answer is the smaller of the two. The
    # original `if p <= n` guard silently returned None otherwise, so it is
    # dropped here (the problem guarantees 1 <= p <= n).
    return min(p // 2, n // 2 - p // 2)

n = int(input().strip())
p = int(input().strip())
result = solve(n, p)
print(result)
# tusharnankani/socli
"""
Urwid-based class hierarchy that forms the front end of the SoCLI application.
"""

import sys
import subprocess

import urwid

import socli.printer as pr

question_post = None
question_page = None
display_header = None
MAIN_LOOP = None


class UnicodeText(urwid.Text):
    """ encode all text to utf-8 """

    def __init__(self, text):
        # As we were encoding all text to utf-8 in output before with dispstr, do it automatically for all input
        text = UnicodeText.to_unicode(text)
        urwid.Text.__init__(self, text)

    @classmethod
    def to_unicode(cls, markup):
        """convert urwid text markup object to utf-8"""
        try:
            return pr.display_str(markup)
        except AttributeError:
            mapped = [cls.to_unicode(i) for i in markup]
            if isinstance(markup, tuple):
                return tuple(mapped)
            else:
                return mapped


class Header(UnicodeText):
    """
    Header of the question page. Event messages are recorded here.
    """

    def __init__(self):
        self.current_event = None
        UnicodeText.__init__(self, '')

    def event(self, event, message):
        self.current_event = event
        self.set_text(message)

    def clear(self, event):
        if self.current_event == event:
            self.set_text('')


class EditedMainLoop(urwid.MainLoop):

    def process_input(self, keys):
        super(EditedMainLoop, self).process_input(keys)
        global question_post
        if question_post is not None:
            if 'window resize' in keys:
                question_post.keypress(question_post, 'window resize')


class QuestionPage(urwid.WidgetWrap):
    """
    Main container for urwid interactive mode.
    """

    def __init__(self, data):
        """
        Construct the Question Page.
        :param data: tuple of (question_url, question_title, question_desc,
                     question_stats, answers, comments, dup_url, dup_link)
        """
        question_url, question_title, question_desc, question_stats, answers, comments, dup_url, dup_link = data
        self.dup_url = dup_url
        self.dup_link = dup_link
        self.question_title = question_title
        self.question_desc = question_desc
        self.question_stats = question_stats
        self.url = question_url
        self.answer_text = AnswerText(answers, comments)
        answer_frame = self.make_frame()
        urwid.WidgetWrap.__init__(self, answer_frame)

    def make_frame(self):
        """
        Returns a new frame that is formatted correctly with respect to the window's dimensions.
        :return: a new urwid.Frame object
        """
        self.screenHeight, screenWidth = subprocess.check_output(['stty', 'size']).split()
        self.question_text = urwid.BoxAdapter(QuestionDescription(self.question_desc),
                                              int(max(1, (int(self.screenHeight) - 9) / 2)))
        if self.dup_url:
            answer_frame = urwid.Frame(
                header=urwid.Pile([
                    display_header,
                    QuestionTitle(self.question_title),
                    self.question_text,
                    QuestionStats(self.question_stats),
                    urwid.Divider('-')
                ]),
                body=self.answer_text,
                footer=urwid.Pile([
                    QuestionURL(self.url),
                    UnicodeText(u'\u2191: previous answer, \u2193: next answer, c:comments, o: open in browser, \u2190: back, '
                                u'd: visit duplicated question, q: quit')
                ])
            )
        elif self.dup_link:
            answer_frame = urwid.Frame(
                header=urwid.Pile([
                    display_header,
                    QuestionTitle(self.question_title),
                    self.question_text,
                    QuestionStats(self.question_stats),
                    urwid.Divider('-')
                ]),
                body=self.answer_text,
                footer=urwid.Pile([
                    QuestionURL(self.url),
                    UnicodeText(u'\u2191: previous answer, \u2193: next answer, c:comments, o: open in browser, \u2190: back, '
                                u'd: back to original question, q: quit')
                ])
            )
        else:
            answer_frame = urwid.Frame(
                header=urwid.Pile([
                    display_header,
                    QuestionTitle(self.question_title),
                    self.question_text,
                    QuestionStats(self.question_stats),
                    urwid.Divider('-')
                ]),
                body=self.answer_text,
                footer=urwid.Pile([
                    QuestionURL(self.url),
                    UnicodeText(u'\u2191: previous answer, \u2193: next answer, c: comments, o: open in browser, '
                                u'\u2190: back, q: quit')
                ])
            )
        return answer_frame

    def make_comment_frame(self):
        """
        Returns a new frame that is formatted correctly with respect to the window's dimensions.
        :return: a new urwid.Frame object
        """
        self.screenHeight, screenWidth = subprocess.check_output(['stty', 'size']).split()
        self.question_text = urwid.BoxAdapter(QuestionDescription(self.question_desc),
                                              int(max(1, (int(self.screenHeight) - 9) / 2)))
        comment_frame = urwid.Frame(
            header=urwid.Pile([
                display_header,
                QuestionTitle(self.question_title),
                self.question_text,
                QuestionStats(self.question_stats),
                urwid.Divider('-')
            ]),
            body=self.answer_text,
            footer=urwid.Pile([
                QuestionURL(self.url),
                UnicodeText('o: open in browser, v: back to answer, \u2190: back, q: quit')
            ])
        )
        return comment_frame

    def keypress(self, size, key):
        """ Overrides keypress in superclass, so don't fall for the trap!
        size parameter is needed!
        """
        if key in {'down', 'n', 'N'} and not self.answer_text.comments_toggled:
            # bool comparison is necessary to disable up down buttons when comments are being shown
            self.answer_text.next_ans()
        elif key in {'up', 'b', 'B'} and not self.answer_text.comments_toggled:
            self.answer_text.prev_ans()
        elif key in {'c', 'C'}:
            self.answer_text.show_comments()
            self._invalidate()
            comment_frame = self.make_comment_frame()
            urwid.WidgetWrap.__init__(self, comment_frame)
            self.answer_text.comments_toggled = True
        elif key in {'v', 'V'}:
            self.answer_text.set_content()
            self._invalidate()
            answer_frame = self.make_frame()
            urwid.WidgetWrap.__init__(self, answer_frame)
            self.answer_text.comments_toggled = False
        elif key in {'o', 'O'}:
            import webbrowser
            display_header.event('browser', "Opening in your browser...")
            webbrowser.open(self.url)
        elif key == 'left':
            global question_post
            global question_page
            question_post = None
            if question_page is None:
                sys.exit(0)
            else:
                MAIN_LOOP.widget = question_page
        elif key == 'window resize':
            screen_height, screen_width = subprocess.check_output(['stty', 'size']).split()
            if self.screenHeight != screen_height and not self.answer_text.comments_toggled:
                self._invalidate()
                answer_frame = self.make_frame()
                urwid.WidgetWrap.__init__(self, answer_frame)
            elif self.screenHeight != screen_height and self.answer_text.comments_toggled:
                self._invalidate()
                comment_frame = self.make_comment_frame()
                urwid.WidgetWrap.__init__(self, comment_frame)
        elif key in {'q', 'Q'}:
            sys.exit(0)
        elif key in {'d', 'D'}:
            if self.dup_url:
                pr.display_results(self.dup_url, self.url)
            elif self.dup_link:
                pr.display_results(self.dup_link)


class AnswerText(urwid.WidgetWrap):
    """Answers to the question.

    Long answers can be navigated up or down using the mouse.
    """

    def __init__(self, answers, comments):
        urwid.WidgetWrap.__init__(self, UnicodeText(''))
        self._selectable = True  # so that we receive keyboard input
        self.answers = answers
        self.comments_list = comments
        # if the comments are being shown then comments_toggled will be True, else when answers are being
        # shown then comments_toggled will be False
        # This Bool is necessary to disable up/down arrow keys when comments are being shown
        self.comments_toggled = False
        self.index = 0
        self.set_content()

    def set_content(self):
        """
        We must use a box adapter to get the text to scroll when this widget is already in
        a Pile from the main question page. Scrolling is necessary for long answers which
        are longer than the length of the terminal.
        """
        self.content = [('less-important', 'Answer: ')] + self.answers[self.index].split("\n")
        self._w = ScrollableTextBox(self.content)

    def prev_ans(self):
        """go to previous answer."""
        self.index -= 1
        if self.index < 0:
            self.index = 0
            display_header.event('answer-bounds', "No previous answers.")
        else:
            display_header.clear('answer-bounds')
        self.set_content()

    def next_ans(self):
        """go to next answer."""
        self.index += 1
        if self.index > len(self.answers) - 1:
            self.index = len(self.answers) - 1
            display_header.event('answer-bounds', "No more answers.")
        else:
            display_header.clear('answer-bounds')
        self.set_content()

    def show_comments(self):
        """Shows comments by loading a new frame named QuestionPage.make_comment_frame()"""
        self.content = [('less-important', 'Comments: \n')] + self.comments_list[self.index]
        self._w = ScrollableTextBox(self.content)

    def __len__(self):
        """ return number of rows in this widget """
        return len(self.content)


class ScrollableTextBox(urwid.ListBox):
    """ Display input text, scrolling through when there is not enough room.

    Scrolling through text takes a little work to support on Urwid.
    """

    def __init__(self, content):
        """
        :param content: text string to be displayed
        """
        lines = [UnicodeText(line) for line in content]
        body = urwid.SimpleFocusListWalker(lines)
        urwid.ListBox.__init__(self, body)

    def mouse_event(self, size, event, button, col, row, focus):
        SCROLL_WHEEL_UP = 4
        SCROLL_WHEEL_DOWN = 5
        if button == SCROLL_WHEEL_DOWN:
            self.keypress(size, 'down')
        elif button == SCROLL_WHEEL_UP:
            self.keypress(size, 'up')
        else:
            return False
        return True


class QuestionTitle(UnicodeText):
    """ Title of the question """

    def __init__(self, title):
        text = ["Question: ", ('title', title), "\n"]
        UnicodeText.__init__(self, text)


# Must convert to BoxAdapter object if used as a flow widget.
class QuestionDescription(urwid.WidgetWrap):
    """ Description of the question """

    def __init__(self, description):
        urwid.WidgetWrap.__init__(self, UnicodeText(''))
        self.description = description
        self.set_description()

    def set_description(self):
        """
        We must use a box adapter to get the text to scroll when this widget is already in
        a Pile from the main question page. Scrolling is necessary for long questions which
        are longer than the length of the terminal.
        """
        self.content = self.description.strip("\n").split("\n")
        self._w = ScrollableTextBox(self.content)

    def __len__(self):
        """ return number of rows in this widget """
        return len(self.content)


class QuestionStats(UnicodeText):
    """ Stats of the question """

    def __init__(self, stats):
        text = ["\n", ('metadata', stats)]
        UnicodeText.__init__(self, text)


class QuestionURL(UnicodeText):
    """ url of the question """

    def __init__(self, url):
        text = ["\n", ('heading', 'Question URL: '), url]
        UnicodeText.__init__(self, text)
import unittest
from importlib.util import spec_from_file_location, module_from_spec

# Load the Lambda handler module directly from its file path
# (the module name keeps the original's spelling).
spec = spec_from_file_location('lamdba_function', 'src/hello-world/lambda_function.py')
lamdba_function = module_from_spec(spec)
spec.loader.exec_module(lamdba_function)


class TestHelloWorld(unittest.TestCase):

    def test_LambdaHandler(self):
        self.assertEqual(lamdba_function.lambda_handler('', ''), {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': '{"Hello": "World"}'
        })


if __name__ == '__main__':
    unittest.main()
The Co-crystal structure of staphylococcal enterotoxin type A with Zn2+ at 2.7 Å resolution. Implications for major histocompatibility complex class II binding. Superantigens form complexes with major histocompatibility complex (MHC) class II molecules and T-cell receptors, resulting in extremely strong immunostimulatory properties. Staphylococcus aureus enterotoxin A (SEA) belongs to a subgroup of the staphylococcal superantigens that utilizes Zn2+ in the high affinity interaction with MHC class II molecules. A high affinity metal binding site was described previously in SEA co-crystallized with Cd2+, in which the metal ion was octahedrally co-ordinated, involving the N-terminal serine. We have now co-crystallized SEA with its native co-factor Zn2+ and determined its crystal structure at 2.7 Å resolution. As expected for a Zn2+ ion, the co-ordination was found to be tetrahedral. Three of the ligands are located on the SEA surface on a C-terminal domain β-sheet, while the fourth varies with the conditions. Further analysis of the zinc binding event was performed using titration microcalorimetry, which showed that SEA binds Zn2+ with an affinity of KD = 0.3 μM in an entropy-driven process. The differential Zn2+ co-ordination observed here has implications for the mechanism of the SEA-MHC class II interaction. Superantigens bind as nonprocessed proteins to major histocompatibility complex (MHC)1 class II molecules on antigen presenting cells and subsequently activate T-lymphocytes by interactions with T-cell receptors. Superantigen-activated T-cells proliferate vigorously, and subsequently T-cell and monocyte derived cytokines are produced in large amounts. The released cytokines contribute to the development of toxin-induced disease processes (for a review see Ref. 1). The best characterized superantigens are the staphylococcal enterotoxins. Based on sequence similarity, these may be divided into two subgroups: the first consists of staphylococcal enterotoxins A, D, E, and H (SEA, SED, SEE, and SEH) and the second of staphylococcal enterotoxins B and C1-C3 (SEB, SEC1, SEC2, and SEC3) (reviewed in Ref. 2). The sequence identity of SEA to other staphylococcal enterotoxins ranges from 25% (SEC1) to 83% (SEE).
In addition, SEA, SED, and SEE are all dependent on Zn2+ for high affinity binding to MHC class II molecules, in contrast to SEB and SEC1-3, which bind MHC class II molecules independently of metal ions. Recently solved crystal structures of the free forms of SEA, SEB, SEC2, and toxic shock syndrome toxin 1, as well as of the SEB-MHC class II complex, have created an understanding of the structural constraints by which superantigens interact with their target receptors. The structure of SEB, bound to a MHC class II molecule, confirmed that the superantigen binds to the α-chain of the MHC class II molecule, outside the peptide antigen-binding groove. The more distantly related superantigen, toxic shock syndrome toxin 1, binds in a fashion similar to that of SEB, although it covers a larger area on the receptor and in addition utilizes a bound peptide antigen in the interactions. Site-directed mutagenesis of SEA confirmed that co-ordination of Zn2+ is required for high affinity binding to MHC class II molecules. It was also shown that SEA most likely binds bivalently to both the α- and the β-chain of two separate MHC class II molecules, utilizing a surface corresponding to the site previously defined in SEB in the first case and the Zn2+ binding site in the latter. SEB, in contrast, binds monovalently to only the α-chain. The recently determined crystal structures of the free forms of SEA, co-crystallized with Cd2+ (SEA-Cd2+), and SEC2 revealed a metal binding site in each protein. An octahedrally co-ordinated Cd2+ ion in SEA was located on the surface of the β-sheet of the C-terminal domain, whereas a tetrahedral co-ordination of a Zn2+ ion in SEC2 is observed at the interface between the N- and the C-terminal domains. In this study, the crystal structure of SEA, co-crystallized with its native "co-factor" Zn2+, at 2.7 Å resolution is presented and compared with the previously described SEA-Cd2+ structure. Further, the Zn2+ binding is analyzed using titration microcalorimetry. The biological implications of the mode of Zn2+ co-ordination in SEA are discussed with emphasis on metal ion assisted SEA-MHC class II interactions. MATERIALS AND METHODS Chemicals and Equipment - If otherwise not stated, all chemicals were purchased from Sigma or Fluka. All protein purification equipment and material were from Pharmacia Biotech Inc. Cloning, in Vitro Mutagenesis, Expression, and Purification of SEA - SEA used in the protein crystallographic work as well as in the microcalorimetric titration was expressed and purified as described previously. Calorimetric Titration - The titration microcalorimetric experiments were performed at 30 °C using a titration microcalorimetric 2-ml stainless steel vessel for the multiple channel microcalorimetric system TAM (Thermometric AB, Sweden). Another 2-ml vessel lacking stirring facilities, containing 0.8 ml of water, was used as a calorimetric reference in the twin microcalorimetric unit. The noise level was estimated to be 10 nW. Electrical calibration was performed in connection with each experiment, with regard to both energy and time constants of the instrument. The calorimetric vessel was loaded with 900 μl of 30-60 μM SEA.
At each titration, 10-15 aliquots of 5.5 μl of 1.2 mM ZnCl2 were added, with a 6-min interval between each injection, using a Hamilton syringe fitted with a hypodermic needle, mounted on an automated motor-driven pump. To correct for dilution enthalpies, additional dilution experiments of ZnCl2 in the buffer solution were performed. The SEA solutions were prepared by exhaustive dialysis of the protein solutions against 20 mM HEPES, adjusted to pH 6.91 at 22 °C, giving pH 6.80 at 30 °C. The ZnCl2 solutions were dissolved in the same buffer solution in which the SEA solutions had been prepared. The concentrations of the proteins were obtained from amino acid analysis. Thermodynamic Analysis - The contributions to the entropy that can be identified are ΔS° = ΔS_hydr-prot + ΔS_hydr-Zn + ΔS_conf + ΔS_no-part + ΔS_ion. ΔS_hydr-prot is the change in hydration of the protein when Zn2+ is bound, which is proportional to the change in solvent accessible surface area and is related to the change in heat capacity by ΔS_hydr-prot = ΔC_p ln(303/T_s), where T_s is the reference temperature (385.15 K). ΔS_hydr-prot is dominated by the change in hydrophobic hydration and is in this temperature range positive upon dehydration. ΔS_hydr-Zn is the change in entropy for transferring the fully hydrated Zn2+ ion to the protein binding site, giving an additional positive contribution to the total entropy change. This process is analogous to the transfer of Zn2+ from water to a nonaqueous solvent, where the change in entropy ranges between 30 and 60 J (K mol)-1. Thus, the transfer of the Zn2+ will contribute significantly to the total entropy change. ΔS_conf is the entropy contribution due to the change in the degree of conformational freedom. A reduction in conformational degrees of freedom will give a negative contribution to the total entropy change. ΔS_no-part correlates with the change in the number of particles in the system (for 1:1 binding, R ln(1/2) = -5.8 J (K mol)-1, where R is the gas constant, 8.314 J (K mol)-1). The last term, ΔS_ion, arises from entropy changes where there is proton linkage and subsequent proton exchange upon binding of a ligand to a protein. The sign of this contribution depends on whether there is a positive or a negative proton linkage in the reaction. Crystallization of SEA - SEA was crystallized using vapor diffusion. Crystals in the space group P3121 were grown by mixing 3 μl of protein solution (10 mg/ml) containing 100 μM ZnSO4 with 3 μl of 15% (w/v) polyethylene glycol 6000 in 0.3 M ammonium sulfate and 0.1 M MES buffer at pH 6.25 in a sealed tissue culture 24-well plate (Falcon). The crystallization droplets were equilibrated at 18 °C with 1 ml of the mother liquor for 1-2 weeks to obtain crystals of optimal diffraction quality. Crystals were 0.3 x 0.3 x 0.3 mm in size and diffracted to 2.5 Å with a conventional x-ray source. The crystals obtained were difficult to handle using conventional capillary mounting. However, when cryogenic conditions were applied, the crystals could easily be mounted in cryo-loops after stabilization in the mother liquor with 30% (v/v) glycerol added, and frozen directly in the N2 beam (Oxford Cryosystems). Data Collection and Processing - Data were collected using a MAR image plate system and processed with MOSFLM, using the refix algorithm for indexing and point group determination, and then further reduced and scaled using the CCP4 program package. For crystallographic data see Table I. Structure Determination - At the time this work was initiated, the only superantigen co-ordinates available were those for the SEB-HLA-DR1 complex.
A modified search molecule for molecular replacement was created in which SEB was converted to a polyalanine model, except for those residues that were identical in SEA and SEB. The AMORE molecular replacement solution obtained with this modified SEB model was later verified using the SEA-Cd2+ model when these co-ordinates became available. The highest scoring solution in the resolution interval 4-8 Å was found in space group P3121, with two SEA molecules in the asymmetric unit. A rigid body refinement in X-plor preceded a cyclic process of model building in the program O, making corrections for main and side chain differences, and POWELL minimizations in X-plor using data between 18 and 2.7 Å. In addition, NCS restraints as well as simulated annealing refinement steps in X-plor were included at the end of the refinement. At this point, solvent molecules were manually introduced into persistent Fo - Fc densities above 3.0 σ. After three cycles, 132 solvent molecules had been introduced. A final POWELL minimization, followed by a dynamics run from 2500 to 300 K in 50-ps steps, including data between 10 and 2.7 Å, was performed. B-value refinement was added as the final step, and solvent molecules with high temperature factors (>40 Å²) as well as those with absent 2Fo - Fc electron densities at 1 σ above the mean were removed, leaving 92 solvent molecules in the final model. The free R-value was used to validate the progress of the entire refinement. The quality of the model was assessed using PROCHECK, and structural alignments were performed using the least squares fit procedure in the program O. The co-ordinates of the SEA-Zn2+ structure will be deposited in the Protein Data Bank (Brookhaven National Laboratory, Chemistry Department, Upton NY 11973). RESULTS Structure Determination - The three-dimensional structure of SEA was determined using data from crystals in the space group P3121 grown at pH 6.25. The initial structure was solved using a modified structure of SEB (co-ordinates kindly provided by Professor D. Wiley) as a search molecule in a molecular replacement procedure, and in the final steps of refinement the SEA-Cd2+ co-ordinates were used. The asymmetric unit contained two SEA molecules, and their structures were refined at 2.7 Å resolution. At the present stage of refinement, the SEA model consists of residues 10-233 for both molecules in the asymmetric unit. Furthermore, two Zn2+ ions and 92 well-ordered solvent molecules have been included. The bound Zn2+ ions in the asymmetric unit were easily identified as pronounced Fo - Fc densities in the electron density maps. The refined structure at 2.7 Å resolution shows a well-defined electron density map (Fig. 1). The crystallographic R-factor is 20.6% (R-free 30.2%) in the resolution interval 10-2.7 Å. The SEA Monomer - The SEA molecule consists of two closely packed domains and shows a topology similar to that observed in other staphylococcal enterotoxin structures. [FIG. 1. A representative part of the 2Fo - Fc electron density map of SEA-Zn2+ in the core of the superantigen molecule. Observe the interactions between tyrosine residues that possibly contribute to the resistance of SEA to environmental factors such as high temperature or protease degradation. The 2Fo - Fc electron density map is contoured at 1.2 σ above the mean.] A β-barrel is comprised in the N-terminal domain (residues 31-116), and a β-grasp motif constitutes the major part of the C-terminal domain (residues 117-233).
Nine residues at the N terminus of each SEA molecule lack electron density in this crystal form; in the SEA-Cd²⁺ crystal structure, this segment packs against the C-terminal domain, where it forms a one-turn helix (residues 4–7) and covers a partly hydrophobic area on the C-terminal β-sheet around residues Tyr229 and Tyr231. A detailed description of the SEA structure has been published previously, and we will therefore focus this description on the major differences between the two crystal forms (Fig. 2). The refined structure reveals an expected close similarity to other structurally determined staphylococcal superantigens, with overall root mean square deviations of 0.74 Å to SEA-Cd²⁺, 0.89 Å to SED,2 1.53 Å to SEB, and 1.87 Å to toxic shock syndrome toxin 1, comparing 220, 201, 181, and 147 structurally equivalent Cα positions, respectively. The major difference between the previous SEA-Cd²⁺ structure and the current structure lies in the N terminus; neither molecule in the asymmetric unit shows electron density corresponding to the first nine residues, so this segment appears to be unordered. Furthermore, we observe a tetrahedral metal ion co-ordination, in contrast to the octahedral geometry previously found for Cd²⁺. As a consequence of a zinc-mediated protein-protein interaction between the two SEA molecules in the asymmetric unit, the loop 59–63, unordered in the SEA-Cd²⁺ structure, is now clearly visible in one of the molecules and could be included as a polyalanine loop in the other.

Differential Zn²⁺ Co-ordination-The Zn²⁺ ion co-ordination differs between the two molecules in the asymmetric unit, although both bound zinc ions are tetrahedrally co-ordinated. His187, His225, and Asp227 are conserved zinc ligands, but the fourth ligand differs: in one SEA molecule, His61 from the neighboring molecule in the asymmetric unit is used, whereas a water molecule is used in the other (Fig. 3). Thus, a tetrahedral binding of two Zn²⁺ ions in the asymmetric unit is seen. His187, His225, and Asp227 as Zn²⁺ ligands have previously been defined by mutagenesis experiments and in the crystal structure of SEA-Cd²⁺. The Cd²⁺ ion in the first SEA structure was octahedrally co-ordinated, involving the N-terminal Ser1 amino nitrogen and γ-oxygen and a water molecule, in addition to the three high affinity ligands discussed above.

Biochemical Characterization of Zn²⁺ Binding-Because we observed that the two SEA molecules in the asymmetric unit are bridged by a Zn²⁺ ion, it appeared that SEA might form Zn²⁺-dependent dimers similar to what has been observed for SED.2 To assess the significance of this observation, gel permeation chromatography of SEA was performed in the presence and in the absence of Zn²⁺. The protein eluted at an apparent size corresponding to the monomer, irrespective of the addition of Zn²⁺ or EDTA (data not shown).

FIG. 2. A superposition of the SEA-Cd²⁺ structure (orange) and the current SEA-Zn²⁺ structure (yellow) is shown. Major differences in the main chain of the two protein structures relate to ordered and disordered regions in the N terminus (residues 1-9, ordered in the SEA-Cd²⁺ structure) as well as the loop 59–63 (ordered in the present study). The figure was drawn using Molscript and Raster3D.

FIG. 3. The zinc binding site of SEA co-crystallized with Zn²⁺. A, tetrahedral zinc co-ordination in molecule one (yellow) in the asymmetric unit. Note that the use of His61 from the neighboring molecule (cyan) as zinc ligand leads to the loop 59–63, absent in the SEA-Cd²⁺ structure, here becoming ordered. B, tetrahedral zinc co-ordination in the second molecule of the asymmetric unit. The three high affinity SEA ligands are used, and in addition a water molecule (H₂O) serves as the fourth Zn²⁺ ligand. The figures were drawn using Molscript and Raster3D.

When the same experiment was repeated with SED, dimers were observed at a Zn²⁺ concentration of 1 μM.2 Even at a Zn²⁺ concentration as high as 100 μM, no dimerization tendency could be observed with SEA. Microcalorimetric titration was used to study the interaction between Zn²⁺ and SEA at 30 °C, pH 6.8. In the concentration range of the experiments, 30–60 μM SEA, the stoichiometry of the Zn²⁺-SEA binding process is 1:1 (Fig. 4). The enthalpy is small and endothermic, ΔH° = 5.21 kJ mol⁻¹, and the affinity was calculated to be KD = 0.3 μM. The small and endothermic enthalpy shows that the process is strongly entropy-driven under these conditions, ΔS° = 146 J (K mol)⁻¹. The TΔS° term is 44.2 kJ mol⁻¹, which should be compared with ΔH° = 5.21 kJ mol⁻¹.

DISCUSSION

Zinc ions are essential for the activity of many enzymes and serve as a structural component in many protein-DNA and protein-protein interactions. A requirement for Zn²⁺ to obtain the strong affinity between SEA and MHC class II molecules was first shown by Fraser and co-workers. The amino acid residues involved in the co-ordination of a Zn²⁺ ion on the SEA surface were identified by site-directed mutagenesis. When the residues Phe47, Asn128, His187, His225, or Asp227 are substituted with alanine, the ability to induce MHC class II-dependent T-cell proliferation is markedly reduced. Because histidines and aspartates are preferred zinc ligands, it was speculated that the lowered bioactivity was due to impaired zinc binding and metal ion-dependent SEA-MHC class II interactions for His187, His225, and Asp227 (and possibly also for Asn128). A disruption of an SEB-like interaction with the MHC class II molecule α-chain was expected for the Phe47-to-alanine substitution. As shown in the SEA-Cd²⁺ structure, the effects of the substitutions described above could be explained by disruption of the metal binding site: His187, His225, and Asp227 were shown to be direct high affinity zinc ligands, and Asn128 was shown to stabilize the conformation of Asp227 via a strong hydrogen bond. However, a comparison between the SEA-Cd²⁺ structure and the SEA-Zn²⁺ structure presented here reveals that the two refined structures are virtually identical, with one important exception: the co-ordination of the bound metal ion. In the present structure, the metal ion is co-ordinated without involvement of the N-terminal serine. We clearly observe a tetrahedral Zn²⁺ co-ordination in both molecules of the asymmetric unit. The two independent molecules in the crystal asymmetric unit have their respective metal binding sites in different environments. In both molecules, the three high affinity Zn²⁺ ligands are His187, His225, and Asp227. The fourth ligand, however, is His61 of the neighboring molecule in one case and a water molecule in the second. Thus, neither of the molecules in the asymmetric unit utilizes the N terminus in Zn²⁺ co-ordination as observed for SEA co-crystallized with Cd²⁺. In fact, the N terminus (residues 1-9) is unordered in each of the molecules.
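As a quick consistency check on the calorimetric values reported above (this check is ours, not part of the original paper), the reported ΔH°, ΔS°, and KD can be compared through ΔG° = ΔH° − TΔS° and ΔG° = RT ln KD; the two estimates agree to within rounding of the published figures. A minimal sketch:

// Back-of-envelope check (illustrative, not from the paper) that the reported
// ITC values for Zn2+ binding to SEA are internally consistent.
public class SeaZincThermoCheck {
    public static void main(String[] args) {
        double R = 8.314;       // gas constant, J K^-1 mol^-1
        double T = 303.15;      // 30 degC, in kelvin
        double dH = 5.21e3;     // reported enthalpy, J mol^-1 (endothermic)
        double dS = 146.0;      // reported entropy, J K^-1 mol^-1
        double Kd = 0.3e-6;     // reported dissociation constant, mol/L

        double TdS = T * dS;                     // ~44.3 kJ mol^-1 (paper: 44.2)
        double dGfromHS = dH - TdS;              // ~-39.0 kJ mol^-1
        double dGfromKd = R * T * Math.log(Kd);  // ~-37.9 kJ mol^-1

        System.out.printf("T*dS              = %.1f kJ/mol%n", TdS / 1e3);
        System.out.printf("dG from dH - T*dS = %.1f kJ/mol%n", dGfromHS / 1e3);
        System.out.printf("dG from RT ln Kd  = %.1f kJ/mol%n", dGfromKd / 1e3);
    }
}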
If a Zn²⁺ ion could be octahedrally co-ordinated in the manner of the Cd²⁺ ion in the SEA-Cd²⁺ structure, one would expect this situation also in the present SEA-Zn²⁺ structure, at least in the second molecule, where no symmetry-molecule interactions are observed; instead, a water molecule is used as the fourth ligand in tetrahedral co-ordination. One possible explanation for this discrepancy lies in the different crystallization conditions. Although both Zn²⁺ and Cd²⁺ ions can adopt tetrahedral as well as octahedral co-ordination in nonprotein environments, the norm for protein-bound Zn²⁺ ions is tetrahedral (reviewed in Ref. 13).

The thermodynamic properties of the SEA-Zn²⁺ 1:1 complex are dominated by the large and positive entropy, ΔS° = 146 J (K mol)⁻¹. The possibility of performing a rigorous thermodynamic analysis of the process, in terms of dissecting the different contributions to the thermodynamic properties, depends on heat capacity data, which are not available at the moment. However, the most likely dominating contributions to the positive entropy change are ΔS_hydr-prot, the change in hydration of the protein when Zn²⁺ is bound, and ΔS_hydr-Zn, the change in entropy for transferring the fully hydrated Zn²⁺ ion to the protein binding site. In addition, either positive or negative contributions can arise from ΔS_ion, the entropy change upon proton exchange when a ligand binds to a protein; the sign of this latter contribution depends on whether there is positive or negative proton linkage in the reaction. One possible interpretation of these thermodynamic properties is that a reduction in conformational degrees of freedom occurs upon metal binding, with concomitant dehydration of hydrophobic surface residues. An ordering of the N terminus upon Zn²⁺ binding could possibly explain the thermodynamic properties discussed above, although this is not observed in either of the two molecules in the asymmetric unit of the present crystal structure. In this context, it should be stressed that the form of SEA used here is the product of the predicted signal peptide processing, whereas SEA purified from its native host Staphylococcus aureus is a mixture of this form and two truncated forms lacking three or five N-terminal residues, with the latter two as the major forms.2 Thus, neither of the shorter forms could co-ordinate the metal ion as observed in the SEA-Cd²⁺ structure. A thermodynamic analysis of such truncated variants of SEA would be invaluable for interpreting the biological significance of the differential metal ion binding modes observed in the SEA-Cd²⁺ structure and the present structure.

The crystal structure of SEC2 revealed a bound zinc ion in the domain interface region, a metal binding site distinct from the one observed in SEA. The Zn²⁺-co-ordinating residues were Asp83, His118, and His122 from one molecule and Asp9 from a neighboring molecule in the crystal lattice. In contrast to SEA, SED, and SEE, SEB/SEC-MHC class II molecule interactions do not require zinc ions. Thus, the most likely explanation for the function of the zinc ion bound to SEC2 is that it serves a structural role, possibly by stabilizing the domain-domain interactions. From a crystallographic point of view, however, this situation resembles the case for SEA-Zn²⁺ observed in this study. As described above, one of the molecules in the asymmetric unit utilizes His61 of the neighboring molecule as the fourth Zn²⁺ ligand.
Thus, the loop 59–63, which normally is highly mobile, becomes ordered through the zinc-mediated protein-protein interaction. Interestingly, in the formation of the SEA-MHC class II complex, three Zn²⁺ ligands are postulated to be derived from the superantigen and the fourth from the receptor. The postulated co-ordinating residue in the MHC class II β-chain is an exposed histidine residue, His81. Thus, the N terminus as oriented in the SEA-Cd²⁺ structure would have to disengage from the metal ion in order to allow the ligand function of the MHC class II residue. The ligand function of His61 from the neighboring SEA molecule in the asymmetric unit observed in the current structure may thus mimic the Zn²⁺-dependent SEA-MHC class II β-chain interaction. Judging from the previous SEA-Cd²⁺ structure and the present SEA-Zn²⁺ co-crystal structure, a model for this interaction can be envisaged in which the N terminus of SEA can be utilized in the co-ordination of zinc but is released upon MHC class II molecule interaction. A second option would be that the N terminus is not involved in zinc binding at all. The latter case will almost certainly exist in vivo, where naturally occurring SEA can lack three or five N-terminal residues compared with the material used in this study and by Schad and co-workers. However, a full understanding of the interactions in the SEA-MHC class II molecule complex will have to await the crystal structure of such a protein complex.
// src/com/sun/demo/chuangjianxing/yuanxing/Manage.java
package com.sun.demo.chuangjianxing.yuanxing;

import java.util.HashMap;

/**
 * Prototype manager: a registry of pre-built prototype objects that hands out
 * copies on demand.
 */
public class Manage {
    private final HashMap<String, Person> ht = new HashMap<>();
    private static final Manage manage = new Manage();

    // Private constructor: the manager is a singleton, obtained via getManage().
    private Manage() {
        ht.put("write", new WritePerson());
        ht.put("black", new BlackPerson());
    }

    // Register a new prototype object
    public void addPerson(String key, Person person) {
        ht.put(key, person);
    }

    // Obtain a new object via a shallow clone of the registered prototype
    public Person getPerson(String key) {
        return ht.get(key).clone();
    }

    public static Manage getManage() {
        return manage;
    }
}
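A minimal usage sketch for the manager above. This demo class is hypothetical (not part of the original file) and assumes that Person declares a public clone() method returning Person, as getPerson() requires:

// Hypothetical demo, same package; assumes Person#clone() is public and
// returns Person (covariant clone).
public class ManageDemo {
    public static void main(String[] args) {
        Manage manage = Manage.getManage();    // the singleton registry
        Person a = manage.getPerson("write");  // fresh shallow copy of the prototype
        Person b = manage.getPerson("write");
        System.out.println(a != b);            // true: each call clones anew
    }
}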
Austin’s bike sharing system continues to expand. This week, B-cycle Austin is installing three new stations – one at Cesar Chavez Street and Congress Avenue, a second at Sterzing Street and Barton Springs Road, and a third on Henderson Street between Sixth and Ninth streets. The three new stations are part of a larger 18-station, 125-bicycle expansion taking place during the next 18 months. The system, which launched in December 2013, already does brisk business with its 51 stations and 380 bright red, basket-adorned bikes. To use the system, users must either buy an annual membership for $80 or use a credit card to pay for a $12 day pass, a $15 three-day pass or an $11 monthly pass (plus one-time $15 enrollment fee). Once they’ve done that, they can take as many trips as they want for no additional cost — as long as they check the bike in at one of the stations every 60 minutes. If they keep a bike out longer, they’re charged $4 for each 30 minutes. So far in 2017, 23,016 riders have logged 104,000 trips and 366,000 miles on the bikes. Officials have not yet decided where the other 15 stations will be installed. The expansion is funded in part by the Federal Highway Administration’s Transportation Alternatives Program and administered by the Texas Department of Transportation. Austin B-cycle is a public-private partnership between the City of Austin, the system owner, and Bike Share of Austin, the local non-profit operator.
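For readers doing the fare math, the rules above reduce to a small calculation. The sketch below is a toy illustration, not official B-cycle code; it assumes the $4 overage is charged per started 30-minute block beyond the free first hour:

// Toy fare helper based on the prices quoted above (assumption: the $4 fee
// applies to each started 30-minute block after the first free 60 minutes).
public class BcycleOverage {
    static int overageCents(int tripMinutes) {
        if (tripMinutes <= 60) return 0;            // checked in within the hour
        int blocks = (tripMinutes - 60 + 29) / 30;  // round up to started blocks
        return blocks * 400;                        // $4.00 per block, in cents
    }

    public static void main(String[] args) {
        System.out.println(overageCents(45));   // 0: no overage
        System.out.println(overageCents(75));   // 400: 15 minutes over -> one block
        System.out.println(overageCents(130));  // 1200: 70 minutes over -> three blocks
    }
}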
Lipases (triacylglycerol acylhydrolases, E.C. 3.1.1.3) consist of a genetically diverse and distinctive grouping of water-soluble hydrolytic enzymes that typically act on the ester bonds of lipid substrates. Lipids can include fats, waxes, sterols, fat-soluble vitamins, monoglycerides, diglycerides, fatty acyls, polyketides, and fatty acids, and exist as a number of variations containing different additional chemical structures such as phospholipids, glycerolipids, glycerophospholipids, sphingolipids, sterol lipids, prenol lipids, saccharolipids, etc. Lipases have been used in ester and amide synthesis, kinetic resolutions or asymmetric synthesis to obtain optically pure compounds, and lipid modifications (Bornscheuer and Kazlauskas, 2005). Lipases play an essential role in: (1) the metabolism of dietary lipids, (2) injury and inflammation, (3) cell biology, (4) fermentation, (5) biocatalysis, (6) vitamin absorption, (7) laundering, and (8) the synthesis of pharmaceuticals, as well as many other biological and chemical processes. Such wide and varying roles have been attributed to lipase stability in organic solvents, high specificity, high enantio-selectivity and regio-selectivity, and a general lack of need for cofactors. Genes encoding lipases have been found in most, if not all, types of organisms.

Typically, the tertiary structure of lipases includes the alpha/beta (α/β) hydrolase fold pattern (Ollis et al., 1992), also common in peptidases and esterases (Holmquist, 2000), and can be composed of a core of up to eight beta strands, connected and surrounded by alpha helices. The active sites of lipases are usually formed by at least a catalytic triad consisting of a serine residue as the nucleophile, a histidine residue, and an aspartic or glutamic acid residue. The active site residues are located in a hydrophobic pocket that is covered by a flap or lid structure, usually composed of amphiphilic α helices (Anthonsen et al., 1995). Lipases typically act at the interface generated by a hydrophobic lipid substrate in a hydrophilic aqueous medium.

There are typically four basic steps in lipase hydrolysis and/or alcoholysis (e.g., ethanolysis), which involve a conformational change of the lipase itself. First, the lipase is adsorbed and activated by the opening of the hydrophobic pocket through displacement of the lid structure, the so-called interfacial activation. Once the pocket is opened, the ester bond of the lipid substrate is able to reach and bind to the lipase active site. Second, the nucleophilic oxygen of the serine side chain binds the carbonyl carbon of the ester bond, forming a tetrahedral intermediate stabilized by hydrogen bonding with the amide nitrogen atoms of nearby amino acid residues. Third, the ester bond is cleaved, which frees an alcohol and produces an acyl-enzyme complex. Last, the acyl-enzyme is hydrolyzed upon entry of a water molecule or alcohol into the active site. This frees the fatty acid (in the case of water as the nucleophile) or the ester (in the case of an alcohol as the nucleophile), and the lipase is regenerated.

Due, in part, to their diverse functioning and structure, as revealed by sequence analysis and crystallography, lipases belong to different enzyme subclasses or families. Pseudozyma (formerly Candida) antarctica is a basidiomycetous yeast strain isolated from Lake Vanda in Antarctica that produces two differently functioning lipases: lipase A (CAL-A) and lipase B (CAL-B) (Ericsson et al., 2008).
These two lipases have been previously characterized, and the amino acid and DNA sequences encoding these lipases have been determined (Novo Nordisk A/S, by Hoegh et al., 1995). CAL-B is a widely used enzyme in organic synthesis on both the laboratory and commercial scale, especially in the resolution of racemic mixtures. CAL-A is one representative of a new class of lipases and, due to its properties, including thermostability, has been used as a catalyst in the paper, wax, food, flavor, and biopharmaceutical industries. CAL-A has an unusual lid structure and C-terminal flap, which can accept very bulky substrates like highly branched acyl groups and sterically hindered alcohols and amines (Kirk and Christensen, 2002; Krishna et al., 2002; Schmidt et al., 2005). CAL-A also shows a higher homology to peptidase structures than to typical lipase structures (Ericsson et al., 2008).

Mono- or poly-unsaturated fats with trans-isomer fatty acid(s) are commonly called "trans fats." Trans-isomers contain chains where the carbon atoms next to the double bond are located on geometrically opposite sides, whereas in cis-isomers the carbon atoms next to the double bond are geometrically on the same side. In the cis configuration, the naturally occurring unsaturated fatty acids have lower melting points than those of saturated fatty acids, and thus are found in liquid form. Typically, trans-fatty acids are found in food products as a result of a partial hydrogenation process. Trans-fatty acids have higher melting points than those of the cis-unsaturated fatty acids and are less susceptible to auto-oxidation, and so can form a more stable solid or semi-solid fat. Dietary intake of trans-fatty acids has been linked to an increased risk for heart disease, diabetes, obesity, metabolic syndrome, Alzheimer's disease, cancer, liver dysfunction and infertility. For these reasons, attempts have been made to reduce the trans-fatty acid content in dietary products (Ratnayake and Cruz-Hernandez, 2009).

Lipases have gained significant commercial importance; however, the expression levels in native organisms are too low to meet these increasing needs. Therefore, numerous attempts have been made to optimize the activity, selectivity, sensitivity and stability of lipases. These include immobilizing the lipase on solid supports and using non-aqueous solvents, as well as recombinant DNA techniques and protein engineering. Understanding the mechanisms underlying gene expression, protein folding and secretion of lipases enables higher-level production of these biocatalysts (Napolitano and Giuffrida, 2009).

Numerous lipase assay methods have been used to determine lipase activity, including, but not limited to, using colored or fluorescent substrates, which allow spectroscopic and fluorimetric detection of lipase activity; chromatography techniques including high-performance liquid chromatography (HPLC), silver ion chromatography, gas chromatography and thin layer chromatography; titration of fatty acids released from the substrate; mass spectrometry; and controlled surface pressure or oil drop tensiometry.

Due to the central importance of lipase function in lipid metabolism and transport, and its implication in serious diseases and conditions such as heart disease, diabetes, obesity, metabolic syndrome, Alzheimer's disease, cancer, liver dysfunction and infertility, it is imperative to know not only how lipases work, but also how to improve the activity, selectivity, sensitivity and stability of lipases.
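As a concrete illustration of the titration assay mentioned above, lipase activity is commonly expressed in units (U, μmol of fatty acid released per minute) computed from the volume of base needed to neutralize the released fatty acids. The sketch below uses invented numbers for illustration only; it is not taken from the specification:

// Hypothetical worked example of a titrimetric lipase assay (illustrative
// numbers, not from the specification): 1 mol NaOH neutralizes 1 mol FFA.
public class LipaseTitrationExample {
    public static void main(String[] args) {
        double naohMolar = 0.05;   // titrant concentration, mol/L (assumed)
        double naohMl = 1.2;       // titrant volume consumed, mL (assumed)
        double minutes = 10.0;     // incubation time (assumed)
        double enzymeMg = 0.5;     // amount of lipase added, mg (assumed)

        double umolFfa = naohMolar * naohMl * 1000.0;            // 60 umol FFA released
        double specificActivity = umolFfa / minutes / enzymeMg;  // 12 U/mg
        System.out.printf("Specific activity: %.1f U/mg%n", specificActivity);
    }
}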
Desirable, therefore, are compositions and methods for producing a novel lipase variant, for increasing the preference of a lipase for long fatty acid chains, for increasing the range and number of fatty acid chains that a lipase is able to act upon, and for increasing the trans-selectivity of lipases and reducing or eliminating trans-fatty acids from lipid substrates. Such compositions and methods find particular utility in a variety of analytical assays and dietary regimens.
import mongoose from "mongoose";
import { MONGODB_URI } from "./utils/secrets";
import logger from "./utils/logger";
import * as Models from "./models";
import * as Repositories from "./repositories";

mongoose.set("useFindAndModify", false);

// Connect to MongoDB
mongoose.connect(MONGODB_URI, { useNewUrlParser: true, useUnifiedTopology: true }).then(
    () => { /** Ready to use. The `mongoose.connect()` promise resolves to undefined. */
        logger.debug(`Connected to database at ${MONGODB_URI}`);
    },
).catch(err => {
    // Log at error level and abort: the app cannot run without a database connection.
    logger.error("MongoDB connection error. Please make sure MongoDB is running. " + err);
    process.exit(1);
});

export { mongoose, Models, Repositories };
Total resection of inferiorly located sacral chordoma with posterior-only approach: case report and review of the literature. Chordoma is a primary sacral neoplasm of ectodermal origin that makes up 1–4% of all primary bone tumors. It usually arises along the midline cerebrospinal axis, and the most common locations are the spheno-clival region and the sacrum. The treatment of primary sacral tumors represents a challenge because of the large tumor mass at presentation and the risk of hemorrhage in surgery. Sacral tumors may present a difficult problem to the surgeon who wishes to obtain a clear margin of excision. Using the retrorectal fat tissue as a cleavage plane in the posterior approach guides the neurosurgeon in resecting the tumor totally and reduces hemorrhage in sacral chordomas. In this case report, we discuss the advantages of using the retrorectal fat tissue as a cleavage plane in sacral chordoma surgery in light of the literature.
/*
  DHTController.h - Library for handling DHT11 sensor measurements
  Created by <NAME>, July, 14, 2021.
  Released into the public domain.
*/
#ifndef __DHTController_h
#define __DHTController_h

#include <Arduino.h>
#include "SocketIO.h"
#include "DHT.h"
#include "Controller.h"
#include <map>
#include <string>

class DHTController : public Controller {
public:
    DHTController() : Controller() {};
    void init(SocketIO* t_socket, const int t_pin, const std::string& t_name, const std::string& t_actuator);
    void loop();
    void sense();

protected:
    DHT* dht;
    uint64_t lastSense = millis();
    uint64_t senseInterval = 10000; // read the sensor every 10 s
};

// Poll the sensor once per senseInterval.
void DHTController::loop() {
    uint64_t now = millis();
    if (now - lastSense > senseInterval) {
        lastSense = now;
        sense();
    }
}

// Read humidity and temperature, compute the heat index, and publish via the socket.
void DHTController::sense() {
    float h = dht->readHumidity();
    float t = dht->readTemperature();               // Celsius
    float hic = dht->computeHeatIndex(t, h, false); // false = use Celsius

    std::map<std::string, std::string> payload;
    payload["id"] = id;
    payload["type"] = "dht";
    payload["temperature"] = std::to_string(t).substr(0, 4);
    payload["humidity"] = std::to_string(h).substr(0, 4);
    payload["hic"] = std::to_string(hic).substr(0, 4);
    socket->send("board:data", payload);
}

void DHTController::init(SocketIO* t_socket, const int t_pin, const std::string& t_name, const std::string& t_actuator) {
    Controller::init(t_socket, t_pin, t_name, t_actuator);
    dht = new DHT(pin, DHT11);
    dht->begin();
}

#endif // __DHTController_h
There comes a time in every child's life when simple acceptance of facts ceases to be enough. When that they had taken for granted becomes curious, then bizarre, then vexing. Children are born innocent and trusting, but children age, and what is taken for granted in youth must be challenged in maturity. Statements which had once carried the weight of truth by mere virtue of being spoken by an authority must be challenged, questioned, and tested. And so, the day comes for every child when they must turn to those they most look up to, and demand the question which inevitably plagues every inquisitive young mind: Why the hell are Rock-types vulnerable to water? It is easy, and tempting, for an adult who senses the impending loss of their unquestioned hegemony over a young mind, to respond with sombre words and sage gazes. The rock stands mighty, my child. Grand and proud, unchallenged. All gaze upon the mountain in wonder, for it is boastful and proud. But water - water is something greater than grand. It is relentless. It does not quit, it does not surrender. It fights, day after day, year after year. And as time passes, that which once seemed immutable is muted. That which once towered over all is worn down to sand, for even the grandest mountain can be weathered to naught by the tireless efforts of a humble stream. And through persistence, and dedication, and unflinching determination, it is the water that endures. When confronted with this, there are three responses a child might venture. The first child will say "okay," and go on to a career in middle management. The second child will nod, thank the elder for their words of wisdom, then get high and write poetry about how queen Beedrill are the real slaves. But the third child will frown, and point out that the natural phenomenon of erosion is not a satisfactory answer in the context of Pokémon battles. The adult will implore them to consider the deeper meaning of their words. The child will counter that the Geodude wasn't held under a Water Gun for eight hundred years before tapping out. The adult will shake their head and tell the child that they'll understand when they're older. The child then goes on the Internet and discovers peer-reviewed studies on natural selection, as well as a powerful and deeply confusing fascination with Miltank lactation videos. There is also a fourth child, who does not have this conversation at all. For they ask this question of their father, who is a world-renowned Pokémon Professor, and he explains that the term 'Rock-type' is a misnomer stemming from a shared etymological origin in Old Kanton. He teaches the child that so-called 'Rock-types' are actually defined by a rigid exoskeleton which possesses a superficial resemblance to common forms of stone. He elaborates that the ancestors of these modern 'Rock-types' evolved primarily in certain underground locations with limited access to liquid water, and that these bygone creatures developed the ability to absorb ambient moisture from the local atmosphere to provide themselves with adequate hydration, allowing them to exist in a very specific ecological niche. However, this adaptation causes them to fare poorly in the face of liquid water, which their exoskeletons attempt to absorb. This results in cells expanding until they physically burst, causing tremendous pain, disablement, and the temporary exposure of their delicate organ systems. 
The child nods, appreciative of the knowledge but unable to grasp how lucky he was that he didn't get saddled with some berk talking about the universe's intrinsic need for balance. These were the thoughts I occupied my mind with while I sat in Pewter City's Pokémon Center, waiting for their assessment of Cubone. Admiral had also been taken in to be checked for any lingering effects from the paralysis, and the Venonats for a general examination, but I wasn't particularly concerned about them. Despite my exhaustion, sleep had made only passing acquaintance last night. The presence of death down the tunnel had unsettled me, worries about Cubone's state had plagued my mind, and every time my eyes had begun to close, I was jolted back to wakefulness by the distant echo of a phantom tap. I had drifted off eventually, but morning came far too quickly, and the fatigue had settled deep into my bones. No longer preoccupied with thoughts of vanity, I hitched a lift on the first convoy to pass. The passengers had eyed me with sympathy, their only ventures at conversation being are you okay and offers of food and water. I dozed off on the ride, but the rest of the trip couldn't have taken more than an hour. First stop was the Pokémon Center. I should have been preparing for my battle with Brock, running through potential strategies and contingencies, anticipating what commands I'd need Admiral to know and what instructions I should give him beforehand. But I was drained. There was nothing left in me. The verdict came, and it was the best I could have expected. Minor malnourishment, fixable with a few days of proper feeding - if I could get him to eat. No physical trauma - the Kangaskhan had protected her child to the last - but the psychological damage was severe. Cubone was only a few months old, and the bond between a Kangaskhan and her child was the strongest in all of nature. No other Pokémon had an entire evolutionary branch induced purely by grief. He had made no response throughout the examination. No acknowledgment of the nurses, no reaction to their words of reassurance, no reply to the Chansey that had tried to communicate with him, nothing but a reflexive flinch when they drew some blood. The Venonats - a pair of sisters - were fine, if quite wild. When they had been released into the perspex box - standard procedure for any fresh capture, until their aggressiveness could be determined - one had bared her teeth and spit acid at the nurses, while the other cowered behind her. Admiral, meanwhile, had amused the staff with a little tap-dance routine. One of them had recorded it and, with my permission, uploaded it to the PokéCenter social media page. Once they were released back into my care, I headed straight for the nearest hotel. It was barely past noon, but I knew I wasn't getting anything done today. I did manage to force myself to pull out Admiral for a short while, to teach him a few more key codewords. Just stuff I expected we'd need for tomorrow - things like rock, water, enemy, win, lose. Yes and no. I would definitely need to expand his vocabulary, and soon, but it was all I could manage to make sure he'd have what he needed to handle the Pewter Gym. Leader Brock had acquired a reputation as a first-ring adversary who could be relied upon to be tough, but fair. He typically deployed a Geodude against first-time challengers, providing sufficient challenge without brutalizing them. 
Occasionally he fielded an Onix, but did not require Trainers to actually defeat it so much as demonstrate their capability at handling oversize threats. And while Onix would certainly represent a tough opponent, I'd seen videos of Trainers with far weaker starters find triumph against him. It may sound cocky, but the truth is that I wasn't really worried about Brock. First-ring battles were tests of command and control, an opportunity for a qualified Gym Leader to assess the challenger's ability to handle minor contests before progressing to greater challenges. A Geodude - honestly, even an Onix - would struggle to mount a serious fight against Admiral. He was well-trained, he obeyed commands, and the type advantage would be overwhelming. As Leader of the Rock Gym, Brock would be obligated to field only Kanton Rock-types, and Kanto had few that the League would recognise as suitable for a first-ring challenger. In short, the problem space was small, far worse Trainers than me had gone through unscathed, and I was confident Admiral could handily deal with any opponent he could legally be faced with.

I was exhausted, and I figured I'd be better off getting a full night's sleep so I could stay sharp and react to unexpected developments, rather than spending hours preparing for obscure, niche scenarios that were highly unlikely to materialize. So I went to bed, checking my Pokédex only for direct messages and high-priority news alerts. With none present, I slammed my face into the pillow.

The Gym floor was stone, coated in a thick layer of gravel. Ridges, boulders, and sloping elevation changes abounded, the entire surface a chaotic jumble of rock. At the base of a long, sloping stone lay a fissure, likely struck into the foundations by some spectacular display of power. There were no plants, no life, no hint of anything but stern, unyielding resilience.

I mounted the dais, controlling my breathing as I approached the railing. There should have been more fanfare, more ritual. This is it. My first Gym battle. I should have been a gladiator, stepping upon the sands to the roar of the crowd. But I was just a boy with a stained jacket and a cap, in an empty room of stone.

Nearly empty. Across the room, on the opposite podium, stood Brock. Arms crossed, face still, wearing a khaki t-shirt with an open, grey padded vest. He made no noise and gave no reaction as I reached the railing. A moment passed as he surveyed me, considering, watching. Total silence. When he spoke, it was without inflection or emotion. Simple, deep, and clear. Even his speech was unadorned.

"Red Oak, of Pallet. You wish to challenge me?"

I steeled myself, projecting my words as best I could, trying to keep my voice low and masculine. "I do."

"This is your first contest. Do you affirm that you understand all relevant League regulations, and are aware of the consequences of misconduct?"

"I affirm it."

"Have you selected your Pokémon?"

"I have."

He unclipped a Pokéball from his belt, not breaking eye contact, and pointed it towards the ground before him. "You may deploy when ready."

Admiral's Pokéball was already in my hand. I raised it as Brock had, and whispered the words I had fantasized a thousand times. I was here. It was time.

"Admiral. I choose you."

I pressed the button, and a flash of blue light struck out from the Pokéball as Admiral formed upon the ground. As I did so, Brock did the same, his own streak of energy coalescing in the form of… Kabuto.

I had read about them.
The Professor had a beautiful fossilized one mounted in a glass case at home, but live specimens were *incredibly* rare. I'd never seen one in person before. It wasn't surprising that Brock had one. His Kabutops was a legendary warrior, and had been a key player in his challenge of the Elite Four. But I had never heard of a Kabuto being presented against a first-ring challenger, and while it probably fell within the challenge rating mandate of the League, it was certainly on the borderline. I was not prepared for this. And it was huge. Difficult to gauge the exact size over that distance, but it wouldn't have been much smaller than a metre long. This was not a fresh hatch. This was a grown, trained Pokémon, and Admiral could not rely on simple water attacks to overcome it. I gave no command. Instead my mind raced, running through everything I'd read about Kabuto to try and construct a viable strategy. Aquatic Pokémon. Nearly immune to water attacks. Hard outer shell, direct physical attacks ineffective. Underside is fleshy and vulnerable - potential weak point, but has deadly-sharp claws. Four eyes. Two on top have poor vision, mostly just detects light for avoiding predation. Lower eyes under the shell have much greater acuity. It'll have to expose them to see properly - narrow jets of high-pressure water, directed to the eyes. That could work. Five seconds passed. Ten. I had no idea how many people were watching, but I could feel the pressure of their anticipation. I'm not doing anything. I'm supposed to do something. I'm just standing here, I have to give a command, I have to look decisive and— The pressure was building. I took in a short breath, trying to suppress the urge to act rashly. Stop. Don't worry about them. Focus. Think. Weak points, what does it have? More seconds. More phantom stares. Brock made no move. Kabuto are built for water. They don't have tails, if they're flipped upside-down they can't get back up. Get it on its back and you win. But how can Admiral possibly flip that thing? It must weigh five times what he does, he'd need some insane sort of leverage to— "Trainer Red," said Brock, shattering the silence. "You have not given order to attack. Is there a problem?" "No, Leader," I replied. "I'm just…" I trailed off. I didn't know what to say. "Do you object to my selection?" "No, Leader." "Then proceed." I opened my mouth, but nothing came out. I needed to give an order, but I had no idea what. Admiral turned to look at me, tilting his head. Either he needed advice, or he just didn't see the problem. I need to give him a command. I need to tell him what to do, but what? If he runs in firing water blasts and trying his usual showboating wrestling malarky, he's going to get annihilated. He doesn't have the mass or finesse to toss that thing over. Leverage. He needs leverage. If he can get the Kabuto to— "Trainer Red," said Brock, his voice firmer, reverberating through the speakers dotted around the Gym. "You are the challenger. The onus is upon you to achieve victory. You must deliver a command." "I apologise, Leader. I am considering." He gave the barest incline of his head. "You have that right." With that tacit permission to stop and think, the pressure lessened. The desperation to look quick and clever diminished, and my thoughts became that little bit clearer. It wouldn't look nearly so bad to be hesitant with a Gym Leader's endorsement. Leverage. Use the environment. Push the Kabuto over a ledge. Get it jammed in that fissure, down next to that sloping boulder. 
Kabuto's entire structure is rigid, it has nearly no mobility. Get it wedged in somewhere, doesn't even need to be completely flipped. Get its claws off the ground, and that should be enough. Now, how the hell do I communicate that to Admiral? I ran through what limited vocabulary Admiral had learned, trying to formulate the instructions without having to resort to plain Kanton. Working out what would hopefully be enough to convey my intent, I began relaying my plan in our code. I spoke slowly, taking care to enunciate clearly and ensure he had time to process each individual part, joining phrases to try and connect concepts. "Admiral. Attack, water. Front, not rock. Attack front water, not-rock. Move forward, move back. Move. Move. Move. Enemy up-down. Enemy up-down, win. No physical attack. Enemy rock. Physical attack, enemy win. Move enemy down low-rock fast, enemy up-down." He stared at me, frowning in concentration, trying to process everything I was saying. Once I'd gone through it once, I repeated it. Take your time, make it clear. He needs to be on the same page as you. I finished the second repetition and waited, silently willing him to understand. Was this too complex for our simple tongue? Could he grasp that awkward jumbling of words? He turned around, fully presenting his back to his opponent to face me with a questioning look. Then, slowly, he raised his hands to his waist and, with small, subtle movements, taking care to obscure Brock's view, placed one hand over the other. With a flipping motion, he inverted them. I broke out into a grin. He got it. I nodded, and he returned the nod with a shark's smile, baring his teeth. "Admiral. Go." He charged, breaking into a sprint towards his opponent. Gravel crunched beneath his feet, kicking pieces up as he ran. He bounded over the fissure that stood between them, closing the distance. The floor as a whole was covered with large rocks, but the path between the two was relatively clear. Kabuto held its ground as Admiral approached, until the gap was down to about fifteen metres. Brock delivered a short series of instructions in his own language, and Kabuto began its own approach. Its claws unsuited to bare rock, it moved slowly, shifting gravel as it progressed. Admiral stopped roughly five metres from Kabuto, opening his mouth to dispense a stream of pressurized water. It struck one of Kabuto's lower eyes, causing it to flinch and lower its shell, presenting Admiral with only exoskeleton and continuing its slow advance. "Retreat, down-rock," I called out. Admiral took a single step back, firing off another shot of water. This one skidded harmlessly off Kabuto's shell. It was followed by another, this one lower, trying to take one of Kabuto's claws from under it. Gravel went flying and Kabuto wobbled a bit, but held itself up and continued advancing. Ten metres to the fissure. It would take a minute at this slow pace, but this was the Rock Gym. Patience was a virtue here. Admiral fired a few more experimental shots. At first, it seemed pointless - strikes landing on the shell were entirely disregarded, blasts to the claws and underside caused no more than momentary instability - it was creeping, keeping itself low to the ground, it wouldn't be undone by them. But then a shot struck one of the recessed upper eyes, and Kabuto flinched. It wasn't much - the eyes were small, difficult to hit, and the strike seemed to cause no more than a flash of minor pain - but it was something. 
With Admiral next to the fissure, Brock called out a command. Kabuto stopped. Well, it's not like the plan wasn't obvious. I could hardly expect Kabuto to just wander up to the precipice and wait.

A second series of commands from Brock, longer and more intricate this time. I would have seized on the moment to take action, but I had no idea what would be effective. I wanted to interrupt, to make Admiral initiate some sort of attack before Brock's plan could be conveyed, but the only thing I could think of - beyond taking more potshots at the eyes - was to charge the giant rock with the fearsome claws, and that did not seem a wise notion.

The instructions conveyed, Kabuto waited until another Water Gun had slid off its shell, reared up, and fired a stream of high-velocity bubbles at Admiral. He dodged to the side, catching only a few and not so much as wincing as they connected. He returned fire, but Kabuto hunkered down as he did so, presenting nothing but shell.

Think. What's Brock's plan? Those orders were too long and detailed to just be 'fire bubbles', and they'll never cause real harm to Admiral. Brock knows this. He's planning something. But what?

Nothing came to mind, beyond forcing Admiral to manoeuvre. But that didn't seem enough, and it would take a more powerful water attack to achieve even that much. And yet, Kabuto made no follow-up play. When the bubbles ceased, it hunkered back down in anticipation of Admiral's return fire, which duly came. These exchanges continued for perhaps a minute, five or six volleys each way. In the context of a Gym battle it felt like forever. Kabuto had no hesitation in cutting its attacks short when Admiral opened his mouth, so he was unable to land any meaningful hits. Another blow struck the upper eye, but any damage it inflicted was negligible.

What IS his plan? Brock's all about patience. Is he planning to just wear Admiral down? Bubbles take far less water to produce than full jets of water, and Admiral's water glands can't replenish indefinitely. Plus, Kabuto has far greater mass than Admiral, and presumably greater storage capacity. If he can keep this going long enough, Admiral will run out well before Kabuto does. And even if that isn't his plan, whatever's happening now is what Brock wants to happen. Is he expecting me to reach that conclusion? Kabuto isn't going to move any closer to the ledge without prodding, I can't take it out with just water attacks - is he hoping I'll take my chances with a closer engagement? Do I have a choice?

I gave the command to Admiral to stop firing. He was just going to drain himself - whatever minor pain he was inflicting with the eye strikes, it wouldn't be enough. Admiral's mouth closed, and he stuck to just dodging bubbles. That'll wear him down, too. It won't take him out, but he'll get tired if he has to keep evading for too long.

I gritted my teeth. Think. THINK!

More moments passed. This is a Gym battle. First ring. Brock's not going to present you with an impossible situation, he's obligated to make sure the challenge is beatable. There IS a path to victory.

Admiral's movements were slowing. He wasn't getting exhausted, but the impetus to dodge was diminishing. He started letting bubbles hit him - it wasn't like they could deal any damage. And all the while, my mind raced.
But it wasn't racing usefully anymore, just churning through the same stable of ideas I'd already had, falsifying them again and again, periodically interrupted by vague, frustrated mental screams shouting THINK OF AN IDEA.

And after Admiral had taken a few bubble hits without concern, Kabuto reared up again, the same way it had a dozen times. But this time there was no stream of bubbles - rather, a red glow emanated from its mouth, and a dull maroon beam struck out at great speed. Admiral, caught off-guard, was too slow to dodge, and the beam connected. In a flash, droplets of water beaded across his flesh and were pulled away, speeding back towards Kabuto's glowing maw.

Absorb. Kabuto can learn that?

Admiral threw himself to the side, shuddering, rolling as he landed, staying just ahead of the beam as it moved. As he rose from the roll, he wobbled, visibly drained. The beam traveled, hitting Admiral again, tremors running through him. He was running, sprinting, taking a route around the sloped boulder to position it between him and the Kabuto. It pursued as it lost line of sight - slower than Admiral, but inexorable. It reached the bottom of the boulder's slope, keeping to the edge of the gravel - as the boulder ramped upwards, the gravel gave way to clear stone. But just a few feet from the gravel line lay the fissure.

Opportunity? Hold for a few seconds. Wait until Kabuto's at the narrowest point - in about fifteen seconds, it'll reach the spot where there's only a narrow path of gravel. It'll be right next to the fissure. If Admiral charges him there, he might be able to knock it into the crevice. Not great, but it's the best shot we're going to get.

"Admiral. No move. Stop. Physical attack enemy. Enemy up-down. Stop. Stop."

I really needed to teach him a wait command, but he seemed to grasp it. He looked at me, leaning against the rock, breathing heavily and perspiring, and nodded.

Kabuto neared the bottleneck, still staying as far away from the fissure as it could…and went off the gravel, traversing the bare rock of the ramp instead. It knows its weakness. Its progress was slow, claws unsuited to walking on stone sans an intermediary to provide more resistance. Its movements were hesitant, finding a small crack or crevice for its next step before raising a claw from an established point. But it was moving, and it wasn't getting any closer to where I needed it to go.

Out of time. Take the shot.

"ATTACK!"

Admiral charged, racing around the boulder at full pelt. The Kabuto seemed to intuit the meaning of my command and stopped moving, bracing itself on the patch of bare rock it stood on and raising its head, mouth glowing. Admiral entered its sight, and it fired, beam streaking out towards him. Admiral made no attempt to dodge, taking the hit. He barreled towards his foe and struck it full-force, thick skull connecting right in its mouth. Kabuto lost its footing, claws scratching exposed stone as it struggled to hold ground. Unable to withstand the tackle, it skidded back, legs digging into gravel as it was driven towards the edge, desperately trying to stop…

…and succeeding. Not enough momentum. It held, right at the edge of the precipice.

The red light was gone, whatever trauma Admiral had inflicted upon Kabuto's mouth cutting off the attack. But Admiral, now beneath the edge of Kabuto's shell, was finally exposed to its deadly claws. The front pair swung around, one catching him by the shell, the other digging into the flesh of his arm.
The shell's front descended, semi-enclosing Admiral, and from beneath I could see that familiar maroon glow again. I couldn't see exactly what was happening - Kabuto was facing away from me - but noise told the tale. Admiral was shouting, roaring in his croaking little way. Kabuto's shell was bucking as Admiral rained blows from below, the red light cutting off abruptly, getting knocked tantalizingly close to the ledge. But Kabuto's savage claws drew back and struck again, and again, and the shakes from Admiral's strikes grew subtler, smaller.

This was not what I had wanted. My first Gym battle. But Kabuto was no longer moving closer to the precipice, and I would not allow Admiral to become another Golduck. There was only one option left. Concede.

I opened my mouth, the words catching in my throat. It took a moment, but I found them, and my tongue began to move.

And then, I saw Admiral. He'd thrown himself to the side, trying to get out from under Kabuto's shell. He was on his front, clawing at gravel, crawling away. His arms were shining red, what little of his face I could see cut and smeared with blood. A claw caught his leg as he struggled out, opening a new wound, but he jerked his leg away and pulled free, staggering to his feet and breaking into a ragged run up the sloping boulder, Kabuto's scything claws falling out of reach. It reared up and fired another maroon beam, catching Admiral by the leg and making him tremble, but then he was over the boulder's crest and upon its summit. The boulder flattened at the top, presenting a space the linear Absorb could not reach.

Admiral collapsed. Bleeding, exhausted, but safe.

"Rakka?" I called out.

His tail flopped from one side to the other, once. Hurt, but can continue.

Great, but continue to what? Ranged attacks don't work. He doesn't have the strength for another melee. Kabuto's already repositioning, moving to a space with more gravel behind it. It's got the fissure behind it, but if the first charge didn't do the trick, then a second definitely won't. Not in Admiral's state, not with that much ground to cover. Sure, Admiral can keep going, but we've got no win condition. It's over.

"Huk," I called. "Varra-krinn." Stop. Enemy win.

Admiral lay there for a moment, without response or acknowledgement. Then, he turned his head to look at me, face dark, and uttered a short bark of contempt.

I shook my head. "Huk," I said again, more insistent this time. "Varra-KRINN."

This time, he shouted. A loud, furious bark of rejection. Then again, and again, and again.

Perhaps I should have ignored him. Said the words, raised his Pokéball, returned him. Perhaps I should have lost, then and there. It would have been the wise thing to do. But that look in his eyes - that glare, that fury, that utter refusal to accept defeat - I couldn't bring myself to deny it. He would have taken it as a betrayal, as a shame I'd forced upon him. We had accepted defeat before, in the face of Venomoth, and we would go on to do it again. Sometimes, every path leads to defeat. Admiral knew that, and while he hated losing, he would never begrudge me for throwing in when the last hope had faded. But to make that decision for him, against his wishes, to call him vanquished when his eyes still saw triumph - that, I could not do.

I nodded. Faced with no option but victory, I bowed my head and began to think.

Brock has not declared the challenge lost.
That means he still thinks there's a way for Admiral to win - Leaders don't let first-ring battles drag on pointlessly, he'd stop it if there was no hope. He's given me a much greater challenge than he usually does to first-timers - why? Doesn't matter. Focus. He's given me a greater challenge, but it's still a first-ring battle. If there's a way to win, he's probably keeping it open for me. He's fought hundreds of these kinds of fights, he knows he's supposed to let capable Trainers succeed. He doesn't have a reputation for being unfair. He can see a win condition. It's there. I just need to find it.

I looked up at him. He stood stalwart, unmoving. He gave no words, made no attempt to hurry or pressure me.

He's letting me see it. Where is it? What's he doing that's allowing me to win?

I shook my head. No. We've tried that line of thought. Wrong question. What's he not doing that would cause me to lose?

I surveyed the battlefield, trying to see it from Brock's point of view. If we were reversed, and I was commanding Kabuto, I wouldn't keep it where it is. There's no need to be anywhere near that fissure. It can move to any open space on the field and wait, and there's nothing I'd be able to do about it. He's keeping Kabuto near the precipice. He thinks that's how I can win. So I'm on the right track, I'm just missing something. What would make Admiral able to knock Kabuto over, that didn't work the first time? The angle's a bit better, the fissure is right behind it now, rather than parallel to Kabuto. Is that enough? Probably not. Kabuto's given itself a good five feet of space, and the ground there is thick with gravel. It'll be able to dig in, the rocks provide too much resistance. If I can get Kabuto to a space of clear stone floor, it might work. But Kabuto's not moving. No attack is going to force a manoeuvre. Water attacks are useless, it doesn't need to evade them. It can just stay there.

Oh. The last piece of the puzzle fell into place. It was so obvious, now that I saw it.

"Admiral. Water attack enemy. Water attack rock. Water attack, no physical attack. Water attack enemy, water attack rock."

He frowned at me. I nodded. He considered it for a moment, and shrugged. Crawling along the flat top of the boulder, he moved to the crest where it began to slope. As his head emerged over the top, Kabuto reared up and fired another red beam - Admiral ducked back, avoiding it. But as soon as the beam faded, he leaned back over and fired a shot of water. It skidded over the top of Kabuto, ineffective. Another stream of maroon light. Admiral ducked back, waited, and returned fire. The second shot hit Kabuto's upper eye, causing a flinch but little more. Another beam, another retreat, another attack. The third missed Kabuto entirely, striking only the earth beneath its claws, sending a small portion of gravel flying into the air. As he ducked back to avoid the fourth Absorb, Admiral's eyes met mine.

This time, he nodded back.

The exchange continued for a few minutes. Kabuto would open its mouth and send a beam of dull red light towards Admiral, and Admiral would hide himself behind the boulder's ridge as it came. Then, he would emerge with open mouth and send a stream of water glancing off Kabuto's shell, or hitting one of its upper eyes. Every time Kabuto flinched at one such eye connection, I would shout out in an approving voice, huge smile on my face, "Varrakh-ohm!" Attack rock.
And on every third or fourth shot, when the water struck only gravel, I adopted a grave tone and admonished him, shaking my head and grumbling "Tare. Tare." Yes. Yes. Credit to the little guy, he worked it out quickly. After a few of these cycles, he began cheering at the phrase "Varrakh-ohm", and wincing when he heard a disapproving tare. And little by little, piece by piece, the gravel around Kabuto was washed away. There's no way Brock didn't notice. The deception was a nice touch - in retrospect, we might have oversold it - but he would have worked our plan out long before the stone was cleared. But a Gym Leader's role is to challenge, not to vanquish. He gave no command to Kabuto, no instruction that it should move. It stayed, and when the ground beneath it was bare, and the stone slick with water, I shouted out a new command to Admiral. "Physical attack. Enemy up-down. Win, Admiral. Win." Another suppressing beam faded over Admiral's head, and he launched himself from the summit, barreling down upon his enemy. Charging a rock. No. Not a rock. Just a shell. The red light struck out once more, but Admiral was already there. Crashing into Kabuto, he sent his foe skidding across the stone floor, claws scraping and flailing in desperate attempt to find purchase. But none was present, and the rear of Kabuto's shell collided with the far edge of the fissure, scraping down the crevice wall, head rising inevitably into the air. It wasn't on its back - there hadn't been enough momentum for that. Nor was it helpless - it was wedged somewhat upright, at about a 45° angle. The fissure wasn't tight enough to fit Kabuto snugly. Its rear claws pressed themselves against the shelf nearest Admiral, pushing, beginning to rock back and forth, moving its centre of balance. Admiral wasted no time. He hopped over the crevice as the swings grew larger, got behind Kabuto and, at the peak of its backward swing, jumped up - catching the top ridge of Kabuto's shell and seizing tightly upon it. With all his weight now on Kabuto's back, it overbalanced, shell scratching hideously against stone - and Kabuto fell backward. Claws flailed in the air. Helpless. Admiral released his hold, taking a step back. Watching. Kabuto struggled. And struggled. And, finally, stopped. When Brock's inflectionless voice rang out through the PA, it was all I could do to keep myself composed. "Congratulations, Trainer Red. You have earned the Boulder Badge." Admiral raised his arms to the sky, eyes shining, and roared. I'd known for a long time that when I won my first Badge, some sort of celebration would be in order. But I'd never really considered what that celebration might be, and had I been left to my own devices, I'd probably have ended up just spending the night in the hotel with my Pokémon and some ramen. Fortunately, when Brock clasped my hand at his podium after giving me the Badge - smiling, no less - he gave me an opening, speaking in a low voice. "Free for a drink later?" I sat at a table of rough-hewn wood, alone amongst a bustle of noise and rowdiness. Talking, shouting, uproarious laughter, the evening chill of Pewter held at bay by the fire burning at the hearth. I was by myself, striking a deliberately nonchalant pose, nursing a beer, and pretending I didn't hate it. Is it egotistical to admit I was hoping someone would recognise me and act impressed? Probably. But that day, I'd accomplished the single greatest feat of my life to date, and what I really wanted was validation. For somebody to notice. And nobody did. 
There hadn't even been any answer when I'd called home.

I didn't need to wait too long, though. I saw Brock the moment he walked through the door - I'd been keeping an eye on the entrance since I'd arrived - and gave the most restrained, casual wave I could. He spotted me, waved back, and raised a finger as he walked up to the bar. A moment as he ordered a drink, engaged in some small talk with the bartender, and made his way over to me.

In contrast to his stony demeanour at the Gym, he was now quite casual and cheery. He took my hand with a smile, said "congratulations," and swung himself down onto a seat. He took a long pull of his beer - if his expression was anything to go by, actually enjoying it - and, apropos of nothing, gave a laugh.

"Really enjoyed our fight today, Red. Good stuff."

A swell of pride filled my chest. "Thank you, Leader."

He waved me away over another pull from his drink. "Hah, Leader. Brock."

"Brock," I said with a smile of my own, taking a sip from my own beer and not even grimacing. "I'm glad you liked it. I had a great time, too."

"I'm sure," he said wryly.

"I mean, it was hard," I said. "Like, really hard. I really thought we'd lost there."

"So did I. By the time your Squirtle crawled out from under Skellig, I was ready to call an end to it."

"Me too. Why didn't you?"

"Oh," said Brock, leaning back, amusement playing across his face. "I was hoping you'd have the self-discipline to admit defeat. See if you were the kind of idiot who'd keep fighting a battle you'd obviously lost."

Well, we couldn't have misread that one any worse.

"Huh," I said, formulating a response. "Guess I'm that idiot, then."

He shrugged. "Well, you found a way out of it. Eventually. That play with the crack in the ground was pretty clever. Kinda convoluted, but still good."

"I thought you'd figured out what I was doing."

"Oh, yeah. Your Squirtle was dancing around that crack most of the battle, it wasn't difficult. I just didn't think you'd make it work."

I smiled, bowing my head.

"It was good," he continued, "in a ridiculous sort of way."

My gaze came back up. "How do you mean?"

"I mean, you went about formulating this complex series of gambits to get Skellig over to a crevice, clear out the terrain, disrupt her footing, bet it all on a mad ploy to physically overpower a Pokémon far larger and heavier than your Squirtle, and go to the effort of trying to disguise all that through a transparent deception. And in all that, you missed the obvious solution."

My heart sank. "Oh?"

"She's a Kabuto. Hit her from the sides."

I wished I'd conjured a more eloquent response than "What?", but there we go.

"Kabuto's completely forward-facing. Claws, mouth, everything - can only attack what's in front of them. No manoeuvrability on land. Stay behind them, they can't turn around fast enough to fight." He shrugged. "Easy win."

I let his words wash over me for a moment, elation giving way to embarrassment. "Uh," I said, "can we chalk that one up to crowd-pleasing?"

"Sure."

"Thanks." I took another drink. "See, I thought you were giving me a really hard fight, and I've spent half the afternoon trying to figure out why. And now I'm finding out it was supposed to be an easy one, and I feel kinda stupid now."

He shook his head. "Not easy. You made it harder than it needed to be, but it wasn't meant as a cakewalk. Even if you'd worked out the orientation part, there's still a lot of danger there. Your Squirtle gets overzealous, starts trying to land blows too deep under the shell, a claw can catch him and ruin his day."
I nodded. "Makes sense."

We spent a minute in silence, sipping at our drinks. The lack of conversation weighed awkwardly on me, but Brock seemed quite at ease with it, gazing around the room, seemingly happy to wander through his own thoughts. Eventually, I felt the need to speak.

"I do have a question, Lea-Brock."

"Mm?"

"Why did you choose Kabuto for this fight? I mean, I've looked through all the public info on your first-ring fights, it's always a Geodude, maybe a small Onix. It just seemed…" I tried to find a word less petulant than unfair. "…different," I ended.

He exhaled, settling down to face me fully. The contented expression he'd been wearing faded, and suddenly I was facing Brock the Leader again. "You want to know why I went harder on you than the others."

"Well, yes," I said. "I'm not complaining, mind. Just curious."

"Red," he said, pushing his glass to the side, "you know the words we say, when a Trainer wins a Badge?"

"Yeah. Congratulations, Trainer. You've earned the - well, Boulder Badge, or what have you."

He nodded. "That's right. Earned. Not 'won', not 'received'. Earned. Most of the first-timers I get, they're kids. They bring a Pidgey or an Oddish, a Krabby, maybe a Pinsir every once in a while. You can't imagine the number of times I've had a kid come in with his pet Nidoran and try to make it work. They don't have anyone giving them Association starters, they've never had any special mentoring, they come in under-equipped with a dream and a plan and none of the resources to make it work.

"And then there's you. You spend your whole life with one of the world's most accomplished Pokémon Professors for a dad, you get an incredible starter that any of these kids would cut off their right arm to have. Then it gets stolen, you make a joke about it, and within a week you've got another one lined up for you. You step out the gate, and there's paparazzi waiting to take your photo and hand you endorsements. Most Trainers, when they show up to a Gym and it's closed for repairs? They're shit out of luck. They swallow it, and go do something else until the sign on the door says 'Open', and pray they've got enough money to tide them over that long. You throw a fit at a receptionist - word travels, Red - and a Gym Leader buys you a steak dinner to apologise.

"So yes, I went harder on you. I didn't throw a Rhydon down against your Squirtle, but I gave you a proper challenge. Because that's what I do for every kid who shows up with a Rattata, and I wasn't about to let you cruise through to your first Badge on type advantage. You've had it really, really easy, Red. I know it might not seem like it, because I don't doubt you've had to face plenty of your own challenges. But you're coming into this with every advantage, you've had opportunities rained down on you like most of these kids could never even dream of, and it wouldn't have been fair of me to just toss out the same Geodude I do for everyone else. I can't just give you a Badge. You have to earn it."

I didn't say anything. Couldn't think of what to say. My cheeks were burning, and I couldn't look him right in the eye. I knew he wasn't trying to be cruel, but it felt like he was just scolding me.

His tone softened. "Don't get me wrong, Red. You did earn it. But you can't come out of this feeling like you've been unfairly victimized. You're going to have a lot of people telling you how great you are, and you need to remember you're starting out with a hell of a leg-up. Don't let it get to your head." He paused. "Like your brother."
Now, I looked up at him. "Blue? Have you fought him yet?"

This time, it was Brock's turn to look confused. "You haven't heard?"

I shook my head. "Had no signal going through Viridian, and I didn't think to check after I arrived. I was kind of tired."

He laughed, his Leader persona fading away. "Oh, man. I'm so glad I get to be the one to tell you the story."

I groaned. "What'd he do?"

"Well, it's a good thing you challenged me when you did. You got here just in time."

Oh, Arceus.

"He came through three days ago. Showed up all gung-ho, casual as you like. He sent out his Eevee, I put down a Geodude. Don't worry, a nice big one - taking a challenge from an Eevee is very different from a Squirtle - and you know what he does?"

My face was buried in my hands. "What?"

"He laughs. Faked a yawn. I ask him if there's a problem, and he tells me that he thought Gyms were supposed to be a challenge. Just showboating for the cameras. I ask if he'd like a tougher opponent, and he says 'yes, please.'"

"Mew. So, what'd you do?"

Brock's grin was so wide, I was afraid his cheeks might split open. "Called his bluff. Withdrew Geodude and sent out a Graveler."

"And?"

"And have you ever seen an Eevee try to fight a Graveler?"

Oh, no.

He raised a hand. "Don't worry, I didn't hurt it. Your brother tried all kinds of fancy stuff, but Giliath weighs over 200 kilos. In the end, I had him hold Eevee by the scruff of the neck for a while. It kicked around for a bit, gave up, and your brother had to concede."

Holy shit. "I'm guessing he didn't take it well."

Brock shook his head. "On the contrary. He laughed, said he'd learned a lesson in humility, and thanked me for my time. When the media contacted him, he told them he had no problem with how I'd handled it, that he'd been arrogant and unmannerly, and that it was an important lesson he was grateful to learn. Said he looked forward to challenging me again soon, and 'to show proper respect due to a Leader who has inspired us all.'" He rolled his eyes.

"Yeah. He's good like that," I said. "And?"

"The League didn't take it so well. Sanctioned me, said I'd shown poor decorum and had 'behaved in a manner unbecoming the dignity of a Gym Leader.' Brought the League into disrepute, apparently. They offered your brother the Boulder Badge as an apology, and you know what he did?"

"What?"

"He turned it down. Said he wouldn't accept anything he hadn't earned, and would only take the Badge by winning it, fair and square. Media went wild at that one. So, next day, he reapplies - I recused myself, my brother Forrest took his challenge - faces a Geodude, wins handily, and goes on his way. I've never seen a Trainer show such disrespect to a Leader, and he's come out of the whole thing looking better than he came in."

It's just classic fucking Blue, isn't it? Acts like a complete dick, and comes out smelling like roses.

"Anyway, word came through from the League this afternoon. I've been suspended from Gym duties, pending an inquiry. Forrest will be taking over in my stead. Guess I offended the wrong people."

I gazed up at the ceiling, astonished and yet only barely surprised. "Dammit. I'm sorry, Brock."

He shrugged. "Don't be sorry. You didn't do anything, and I did overstep. Shouldn't have risen to his bait like that."

"He does have a way of getting under people's skin."

"Amen to that," said Brock, lifting his glass. We clinked glasses, and he finished his drink. I managed to hit the halfway point of mine.

"Anyway," he said, "I'm not too cut up about it.
I've been thinking of getting away from it all for a while, anyway. Which brings me to why I asked to meet you tonight."

"Oh?" I asked.

"Well, I'd like to take some time away from everything. League inquiries tend to take a while, and I could really do with a bit of time away from the city. Getting back to roots, that sort of thing. I was thinking of heading into the mountain, do some exploring, some meditating, all that good stuff. And if your next stop is Cerulean, then I'm guessing you're headed the same way.

"I was wondering if you'd like to come along."

Obviously, I said yes.

It was only when I'd gotten back to the hotel, head slightly spinning from the alcohol, that I saw the blinking light on my Pokédex signalling a message. I opened it up to see a waiting voicemail, from Daisy.

"Hey Red!" her voice crackled through the tinny speaker. "Sorry we missed your call, things got a bit hectic. Grandpa got a message from Dr. Fuji this morning, got real agitated. He flew off on Ozone a few hours ago. Something going down at the lab in Cinnabar."

This concludes Arc I of Pokémon: The Line.

I can be contacted on Twitter, under the username 'RadHominin'. The Line will return in February, with Arc II: "Undertow".
Frequency Hopping Based Wireless Metering in Smart Grid: Code Design and Performance Analysis

To address the security and performance challenges, a novel smart grid communication system based on frequency-hopping (FH) technology is investigated in this paper. A new FH sequence set with various levels of Hamming cross-correlation is proposed for this system, in order to provide power users with differentiated qualities of service (QoS). Using real measurements, we propose a statistical model of the data collected from the power grid. Based on the proposed FH sequence set and the data model, an analytic expression for the bit error rate (BER) of such a system with binary frequency shift keying (BFSK) modulation is derived under a slow Rayleigh fading channel model. This expression reveals, for the first time, the general relation among the BER, the properties of the deterministic FH sequence, and the other system parameters, and it provides guidance for sequence design from the viewpoint of system performance. Simulation results obtained with the real measurement data coincide with the analytical conclusions.
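As a concrete illustration of the Hamming cross-correlation property the abstract refers to, the following is a minimal, self-contained sketch (in Go) of counting the frequency "hits" between two FH sequences at each cyclic shift. The sequences are toy values, not the sequence set proposed in the paper.

package main

import "fmt"

// hammingCrossCorrelation returns the periodic Hamming cross-correlation of
// two equal-length frequency-hopping sequences at cyclic shift tau: the
// number of time slots in which both sequences occupy the same frequency
// (a potential collision between two users).
func hammingCrossCorrelation(x, y []int, tau int) int {
	n := len(x)
	hits := 0
	for t := 0; t < n; t++ {
		if x[t] == y[(t+tau)%n] {
			hits++
		}
	}
	return hits
}

func main() {
	// Toy sequences over a 5-frequency alphabet, for illustration only.
	x := []int{0, 2, 4, 1, 3}
	y := []int{1, 3, 0, 2, 4}
	for tau := 0; tau < len(x); tau++ {
		fmt.Printf("tau=%d hits=%d\n", tau, hammingCrossCorrelation(x, y, tau))
	}
}

Keeping these hit counts low (and assigning sequence pairs with different correlation levels to different users) is what allows such a set to offer graded QoS levels.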
Financial Analysis as a Franchising Tool in Modern Conditions of Digitalization of the Economy

Pandemic conditions and the business constraints related to the spread of coronavirus infection have forced adjustments to the basic principles of franchise agreements. The accompanying transition to digitalization has likewise reshaped the existing forms of doing business under franchising. New ways of communication between franchisor and franchisee have required an updating of franchising principles. Financial analysis tools allow a timely reaction to changes in economic conditions, in particular within the framework of a franchise agreement.
Use the iPad to make video calls with the Skype app.

The built-in digital camera on the iPad makes it easy to use Skype for videoconferencing. With the Skype app, the portable device allows you to make voice and video calls from almost anywhere, and the app is a free download for iPad owners. Skype video calling works with the iPad 2 and newer models; Skype voice calling works on all iPad models.

In the Skype app, you can make standard voice calls free to other Skype users. Tap the "Contacts" list to see your full list of contacts. Choose a contact and tap his name or picture to load his profile. Tap the "Voice Chat" icon and Skype places the call. If your contact uses multiple numbers, select the preferred number of the person you want to call. Once connected, talk normally and then press the red "End Call" icon to hang up.

If the person you want to call does not have a Skype account, you can still call him by purchasing Skype Credit. These credits allow you to call any number, including landline and mobile phones. Tap the "Buy Skype Credit" icon to load the options screen. You can buy credits by the month or by the minute. After choosing a payment option, follow the standard call method. During the call, the Skype Credit amount appears towards the top of the screen.

After selecting a contact, you can choose to make a video call instead. Tap the "Video" call icon, and Skype attempts to connect to the contact. After you are connected, the other person's video feed appears on the iPad screen, and a small view of your own feed appears in the bottom right corner. Talk normally into the iPad to progress through the call. Tap the "End Call" icon to exit the video call.

When making calls, errors can occur. If you do not have a Wi-Fi or 3G connection on the iPad, your calls cannot go through. If the other person is not currently available, the call cannot connect, and a message is displayed indicating this. If you run out of Skype Credit while speaking with a non-Skype user, the call automatically ends; you can set the Auto-Recharge option in the Skype app to make sure your calls do not end suddenly.
Retail pharmacy prescription medicines availability, prices and affordability in Eswatini

Background: Limited availability of medicines in public facilities and unaffordable prices in the private sector act as barriers to medicines access. Patients in Eswatini may be forced to buy medicine from the private sector as a result of chronic medicine shortages in public health facilities. The extent to which they can afford to do so is unknown.
Aim: To determine the availability, price and affordability of medicines in retail pharmacies in Eswatini, and to compare the results regionally and internationally.
Setting: The retail pharmacy sector in the four administrative regions of Eswatini.
Methods: Data on availability, price and affordability to patients for 50 medicines, in the originator brand (OB) and the lowest priced generic (LPG) equivalent, were collated using the standardised World Health Organization/Health Action International methodology from 32 retail pharmacies in the four regions of Eswatini. Prices were then compared with those in selected countries.
Results: The overall mean availability of all medicines in the selected retail pharmacies was 38.5% (s.d. = 20.4%) for OBs and 80.9% (s.d. = 19.0%) for LPGs. The overall median price ratio (MPR) in the surveyed pharmacies was 18.61 for OBs and 4.67 for LPGs. Most standard treatments with LPGs cost less than a day's wages, whilst those with OBs cost more than a day's wages. The differences between Eswatini and South African prices were statistically significant.
Conclusion: Drug pricing policies and price monitoring tools are needed for the whole pharmaceutical chain in Eswatini to monitor the availability, affordability and accessibility of medicines to the general populace.

Introduction
Although medicines are crucial in the healthcare system, they are generally not affordable to many people globally. It is well documented in the literature that prohibitive pricing is one of the major barriers to access to essential medicines. 1,2,3 There is significant underuse of costly medicines in populations that do not have any medical insurance, and thus even the smallest price changes in drugs will significantly affect adherence amongst the poor. 4 More than 30% of the world population does not have reliable access to essential medicines. 5 In some of the poorest countries in Asia and Africa, the proportion is as high as 50%. Unavailability and/or low availability of essential medicines in public health outlets may become a major barrier to medicines access, especially when coupled with unaffordably high prices in the private sector. 6 It is thus imperative that prices in the private sector be affordable to ensure equitable access. 7,8 Medicines expenditure constitutes between 20% and 60% of health expenditure in low- and middle-income nations, compared to just 18% in European nations. In developing countries, a mere 10% of people have health insurance; the rest buy medication through out-of-pocket payments, which makes medication the biggest family expenditure item after food. 9 France and Italy use drug price controls to manage drug prices directly. In Germany and Japan, prices are indirectly controlled via reimbursement under social insurance schemes. The National Pharmaceutical Price Authority in the United Kingdom (UK) monitors prices by regulating the profits that companies make on sales of branded prescription medicines.
11 It is widely believed that drug prices are generally higher in countries with less stringent price regulation (like the UK) or no regulation at all (the United States) than in countries with strict price regulation. 12 The market share of originator brands (OBs) falls significantly after the introduction of generics upon the expiration of patent protection, and competition between the generic options lowers the prices of the branded products further. 12 Free-pricing systems may lower medicines' prices when optimum conditions are created, and although regulation in price-controlled systems may reduce the prices of both generics and brands, it may also remove incentives to lower prices below the listed ones. 10 Studies in the US of price variability in commodities other than pharmaceuticals found that poorer individuals usually find themselves paying more for similar goods and services than their richer counterparts. 13 Grocery stores in poorer locations are usually smaller and more expensive than those in the wealthier suburbs, mainly because large chain stores operate in the more affluent areas while mainly independents serve the poorer areas. 13,14 Not much is known about price regulation or affordability in low- and middle-income countries, especially across the African region. Eswatini is classified as a lower-middle-income country, with the majority of the population living below the upper poverty line. 15 Although Mhlanga et al. 16 recommended the implementation of a pharmaceutical pricing policy in 2016, there is still no price regulation of pharmaceuticals in Eswatini. 17 Mhlanga et al. 16 highlighted the importance of reliable evidence on medicines' prices in ascertaining the type of challenges in a system before deciding on solutions to ensure the availability of essential medicines at the lowest possible price to the consumer. Prices of prescription medicines are a significant obstacle to appropriate medicine use. 18 It is not easy to find reliable information on medicines' prices in developing countries, 19 including Eswatini. The World Health Organization/Health Action International (WHO/HAI) has set a benchmark of 80% medicine availability as 'high' to ensure the supply of essential medicines. 20 Although generic medicines are far cheaper than OBs, they are still relatively unaffordable in many parts of the developing world. 21 The Medicines Regulatory Unit in the Ministry of Health is currently responsible for medicines' registration; however, a medicines regulatory authority and a pharmacy council are still to be established in Eswatini. 22 All prescription medicines (defined as any medicine supplied on a valid doctor's prescription) attract value added tax (VAT) at 0%; where there is no prescription, VAT is levied at 15%. 17 Cross-national differences in pharmaceutical prices are of great importance as they help governments come up with appropriate domestic pricing policies. 23 South Africa, Eswatini's neighbour, is considered an upper-middle-income country and has introduced price controls for medicines. The single exit price (SEP) mechanism lists the price at which a medicine can be sold by a manufacturer to an end dispenser. The South African Medicines and Related Substances Act (as amended) regulates the maximum additional dispensing fee that can be charged by people licensed to dispense and by retail pharmacists, based on a tier structure directly tied to the SEP. All retail medicines attract 15% VAT.
24 The Act has a provision that prohibits the use of bonuses, rebates or any incentives in the supply of medicines, to avoid undermining the SEP. 25 The aim of this study was to investigate the availability, affordability and prices that people pay for medicines in different parts of Eswatini. In addition, this study sought to ascertain how medicines' prices in Eswatini compare to the prices of the same medicines in South Africa and internationally.

Study design
A quantitative study using a cross-sectional descriptive design was employed.

Setting
Eswatini is a very small country, at just over 17 000 km², with a population of just over a million people and a population density of 66.1 people/km². It is located in southern Africa and is bordered by South Africa and Mozambique (see Figure 1). The country is divided into four regions: Hhohho (north-west: 28.5% of the population), Manzini (central: 30.5% of the population), Lubombo (east: 21% of the population) and Shiselweni (south: 20.5% of the population). 27 The urban population in Eswatini is 24.2%, and the unemployment rate was estimated at 23.4% in 2020. 27 The consultation fee is levied at $1.41 in the public sector, and medicines are free. 16 The lowest paid government worker in Eswatini earns a daily wage of $8.86 ($1.00 = 17.103 Swazi lilangeni at the exchange rate of 15 June 2020). 28 Less than 20% of the population can afford medical insurance, and patients in Eswatini may be forced to buy medicines from the private sector because of the chronic medicines' shortages in Eswatini public health facilities. 16 Pricing surveys have been used to determine the extent of price differentiation and affordability in countries. Thus, for Eswatini, the WHO/HAI medicine price survey was used to identify medicine pricing and affordability in the country, in order to provide policymakers and all stakeholders with evidence to draft policies that may improve the availability, affordability and accessibility of medicines. 6,20

Study population and sampling strategy
A list of the 60 retail pharmacies registered in Eswatini at the time of the study was obtained from the office of the Deputy Director of Pharmaceutical Services in the Ministry of Health, with 14 geographically situated in Hhohho, 30 in Manzini, 10 in Lubombo and six in the Shiselweni region. Stratified sampling was used to group the pharmacies; the primary criterion was the region in which a pharmacy was located. A total of 32 pharmacies were included in the study. Random disproportionate stratified sampling was used to ensure that the sample was representative of the Eswatini retail pharmacy population. The sample comprised 12 pharmacies from Manzini, eight from Hhohho, six from Lubombo and six from Shiselweni. Simple random sampling was used to select the pharmacies in each region (a short illustration of this selection step is sketched below). To analyse the effect of the size of the city/town on prescription prices, pharmacies were categorised as located either in a city or in a town/rural area. Categorisation was based on the population of the location:
- city: population ≥ 50 000
- town/rural: population < 50 000

Data collection
The survey included a total of 50 medicines (Appendix Table 1-A1). The list comprised the WHO/HAI Global Core List of 14 medicines and a supplementary list of 36 medicines selected on the basis of the disease burden and local relevance in Eswatini. Expert advice was sought from pharmacists, medical doctors, academics and Ministry of Health professionals on the relevance of the selected medicines.
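As a minimal sketch of the disproportionate stratified sampling described above, selection within each stratum can be done with a random permutation. The regional quotas and register sizes are taken from the text; the pharmacy identifiers are placeholder indices, not the surveyed facilities.

package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Registered retail pharmacies per region (from the text).
	registered := map[string]int{"Hhohho": 14, "Manzini": 30, "Lubombo": 10, "Shiselweni": 6}
	// Disproportionate quotas per stratum (from the text); note that all six
	// Shiselweni pharmacies end up selected.
	quota := map[string]int{"Hhohho": 8, "Manzini": 12, "Lubombo": 6, "Shiselweni": 6}
	for region, n := range registered {
		// Simple random sampling within the stratum: take the first
		// quota[region] indices of a random permutation of 0..n-1.
		sampled := rand.Perm(n)[:quota[region]]
		fmt.Println(region, sampled)
	}
}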
For analysis, medicines were stratified by use (i.e. anti-infectives and non-communicable disease medicines) and also by whether or not they were on the essential medicines list (EML). The managers and/or responsible pharmacists of the selected facilities were contacted via email and telephonically, explaining the objectives of the study and the information that would be collected from their pharmacies. The responsible persons were informed that they were not obliged to participate in the study and that all identifying information would be kept confidential. Data collection took place between 15 June 2020 and 13 August 2020. Data on price, availability and affordability to the patient were collected for two products, namely the OB and the LPG, using the standardised WHO/HAI methodology. 29 The following specific data were collected for 50 essential medicines as per the WHO listing (14 from the Global Core List and 36 selected on the basis of local relevance), 30 using the WHO/HAI workbook, 31 during visits to the retail pharmacies:
- the name of the pharmacy, the administrative region where it was located, and whether it was in a city or a town/rural area;
- the brand/product name and manufacturer of the LPG found at the site;
- the availability of the OB and the LPG;
- the pack size and price of the pack found for the OB and the LPG;
- any other comments regarding a given product.
During the retail pharmacy visits, data were recorded on hard-copy medicine price data collection forms. The data collector made sure the data collection forms were complete and legible before leaving an outlet. As per the standardised methodology, data collection forms were reviewed every day after completion of the fieldwork to ensure data quality. The data were then entered from the hard-copy forms into the electronic survey workbook, the double-entry programme was run, and any mistakes were corrected. Any questionable data identified after running the data checker were investigated and corrected.

Data analysis
Data were entered in the pre-programmed WHO/HAI Microsoft Excel workbook. 31 Once complete, the workbook automatically generated an analysis of the data entered, giving summary tables of percentage availability and median price ratios (MPRs). 31 Median local prices were expressed as ratios to international reference prices (IRPs), using the formula:

MPR = median local unit price / international reference unit price

Management Sciences for Health (MSH)'s 2015 prices were used as the default IRPs. 32 The IRPs used are prices offered to international not-for-profit agencies for the purchase of generics. The availability of a medicine was determined as the percentage of outlets where the medicine was found on the day of data collection. Mean availability of the 50 medicines was also reported. The differences in average percentage availability of OBs and LPGs determined whether there was any variance in the availability of the two product types. To describe availability, the following reference ranges were used: 19
- < 30%: extremely low
- 30% - 49%: low
- 50% - 80%: fairly high
- > 80%: high
The MPR pharmacy prices for each drug were compared across region categories using analysis of variance (ANOVA). The MPR pharmacy prices for each drug were compared between the two categories of location size using the t-test.
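To make the MPR formula and the availability ranges above concrete, here is a minimal sketch. The $0.20 and $0.04 unit prices are hypothetical placeholders; the 38.5% and 80.9% availability figures are the overall means reported in the abstract.

package main

import "fmt"

// mpr returns the WHO/HAI median price ratio: the median local unit price
// divided by the international reference unit price (formula above).
func mpr(medianLocalUnitPrice, referenceUnitPrice float64) float64 {
	return medianLocalUnitPrice / referenceUnitPrice
}

// classifyAvailability maps a percentage availability onto the reference
// ranges listed above.
func classifyAvailability(pct float64) string {
	switch {
	case pct < 30:
		return "extremely low"
	case pct < 50:
		return "low"
	case pct <= 80:
		return "fairly high"
	default:
		return "high"
	}
}

func main() {
	// Hypothetical generic: $0.20 per tablet locally against a $0.04
	// MSH reference unit price, giving an MPR of 5.00.
	fmt.Printf("MPR = %.2f\n", mpr(0.20, 0.04))
	// Overall mean availabilities from the abstract.
	fmt.Println("OB availability (38.5%):", classifyAvailability(38.5))
	fmt.Println("LPG availability (80.9%):", classifyAvailability(80.9))
}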
The lowest and highest prices in Eswatini were compared to the lowest and highest permissible retail prices in South Africa (based on the dispensing fee guide current in South Africa at the time), 33,34 and the differences were expressed as percentage price differences for similar products and analysed for statistical significance using the t-test. Affordability was calculated as the number of days' wages that the lowest paid government worker needed to spend to pay for a treatment, based on the median local price of a medicine prescribed at a standard dose. All analysis was carried out using the Statistical Package for the Social Sciences (SPSS) version 27 (University of KwaZulu-Natal, School of Health Sciences).

Distribution of pharmacies
A total of 32 pharmacies were included in the study: 12 from the Manzini region, eight from the Hhohho region, six from the Lubombo region and six from the Shiselweni region. Of the 32 surveyed pharmacies, 13 were located in cities and 19 in towns and rural areas.

Medicines' availability on the day of data collection
The overall mean availability of all medicines in the surveyed retail pharmacies was 38.5% for OBs and 80.9% for LPGs; the difference in availability between the two location types was statistically significant (p = 0.001). The availability of LPGs was high in the city-located and fairly high in the town-located pharmacies, with pharmacies in the cities registering 87.4% availability and the outlets in towns and rural areas recording 76.5% availability; this difference was statistically significant (p = 0.001). The overall availability of all the different groups analysed was generally high for the LPGs and low for the OBs. The global core medicines had the highest LPG mean availability (89.2%; s.d. = 9.0%), followed by the anti-infectives (AI) at 86.7%, then the EML medicines with 85.1%, followed by 82.3% for the non-communicable disease (NCD) medicines and then 77.7% overall mean availability for the supplementary medicines (Table 1). The differences between the regions in terms of availability of LPGs of the different classes were not statistically significant. A similar trend was observed with respect to the availability of the different medicine groups in both the cities and the smaller towns and rural areas.

Consolidated private retail sector patient price ratios
Of the 50 medicines surveyed in the 32 outlets, price ratios were calculated for 33 OBs and 48 LPGs (where medicines were found in four or more outlets).

Regional patient price comparison
Overall patient prices in Shiselweni were approximately 4% and 8% more than patient prices in the Manzini region, whilst the Lubombo prices were approximately 42% and 9% more in comparison with Manzini, for OBs and LPGs respectively. The overall MPR patient prices in Shiselweni were approximately 7% cheaper than the prices in Hhohho for the LPGs. Analysis of variance showed there was no significant difference between the regions in terms of LPG prices (p = 0.719). A few of the LPGs had large price differentials across the different regions. The LPG omeprazole 20 mg cap/tab was sold to patients at an MPR of 3.18 in Manzini, but was available at more than four times that price in the Lubombo and Shiselweni regions; comparatively, in the Hhohho region patients had to part with almost 10 times the Manzini price. The LPG furosemide was available at an MPR of 2.12 in Manzini, required more than three times this in Lubombo and Shiselweni, and was available at an MPR of 9.38 in the Hhohho region.
Atenolol 50 mg, diclofenac 50 mg and fluoxetine 20 mg also showed large price differentials across the regions. Overall patient prices in the cities were 3% more and 5% less than patient prices in the smaller towns for OBs and LPGs, respectively. The LPG diazepam 5 mg was available at an MPR of 1.12 to patients in the cities, whilst it was sold at an MPR of 6.58 in the smaller towns. OB price ratios were only analysed for the Hhohho and Manzini regions, as the other two regions did not have sufficient OB availability for any meaningful comparison. Overall, patient prices in Manzini were approximately 3% and 1% more than patient prices in the Hhohho region for OBs and LPGs, respectively. The difference between Manzini and Hhohho OB medicines' prices was not statistically significant (p = 0.166).

Comparison of retail prescription drug prices in Eswatini and South Africa
Of the 32 OBs found in more than four outlets in Eswatini, only two products, diclofenac 50 mg capsule/tablet (cap/tab) and amlodipine 5 mg cap/tab, had their highest prices equal to the maximum permissible South African patient price; eight products had Eswatini highest prices lower than the maximum permissible South African patient prices, and the rest had Eswatini highest prices above the maximum permissible South African patient prices (see Table 3). The differences were statistically significant (p = 0.004). Only one product, carvedilol 12.5 mg cap/tab, had its lowest Eswatini unit price lower than the cheapest South African patient price, whilst the lowest prices for all the other OBs were higher than the lowest South African prices of the same products. The differences were statistically significant (p = 0.14). Of the 48 LPGs analysed, one product, tamsulosin, had its Eswatini lowest price equal to South Africa's lowest generic patient price, 26 products had Eswatini lowest prices higher than South Africa's lowest prices, and 18 products had lowest prices lower than those of the corresponding South African LPGs of the same molecules. The lowest generic patient prices in South Africa for simvastatin 20 mg and omeprazole 20 mg were more than 300% higher than the corresponding lowest prices in Eswatini. The lowest and highest LPG prices for South Africa were calculated using the MRPs from the Mediscor platform. 34 Of all the LPGs analysed, only 11 products had Eswatini highest prices lower than the South African maximum permissible patient prices in the retail pharmacies; the other 37 products had highest prices above the highest generic patient prices for similar molecules in South Africa. The difference in prices was statistically significant (p = 0.014).

Treatment affordability
The LPGs of the most common anti-infectives were generally affordable in all the regions, with an adult 7-day course of amoxicillin, a paediatric 7-day course of co-trimoxazole and a paediatric 7-day course of amoxicillin + clavulanic acid all requiring less than a day's wages of the lowest paid government unskilled worker (see Table 4). The first-line treatment for type 2 diabetes mellitus (DM II) was affordable for both the OBs and the LPGs in all four regions of Eswatini, with a monthly course of 60 tablets of glibenclamide 5 mg requiring less than 0.5 days' wages. It is worth noting that less than 0.8 days' wages were required to purchase a course of either the OB or the LPG of metformin 500 mg in the four regions.
Gliclazide was less affordable, requiring more than a single day's wage of the lowest paid unskilled government worker to purchase a 1-month course of the LPGs. No LPG was found for human insulin (30% regular/70% isophane) in any of the surveyed areas, and the OB required 5.20 days' wages in Manzini, 5.36 days' wages in Hhohho and 7.37 days' wages in the Lubombo region. Table 5 illustrates the affordability of a 3-drug regimen of OBs and LPGs when purchased from the retail pharmacy sector in Eswatini. The lowest paid government worker would have to work 2 days to afford the LPG regimen and, should the LPGs be unavailable, 4.8 days to be able to afford the OBs.

Discussion
The study showed that the overall availability of the LPG medicines in all the regions was higher than the recommended minimum availability benchmark of 80% 20 set by WHO/HAI. It can be deduced from the results that dispensing of generics is promoted over the branded products in all the regions. The more affluent settlements are in the Hhohho and Manzini regions, and the availability of OBs was comparably higher in these two regions. Guan et al.'s 35 findings highlighted the challenges associated with regional disparity in essential medicines. Another study looked at the undiscounted prices of both the OB and the LPG for 25 essential medicines from 17 private pharmacies in Shaanxi Province, western China. It noted that generics were more available than OBs and that prices varied across different discount programmes; the study concluded that price transparency of pharmaceuticals helps consumers identify potential savings. 36 The overall availability of LPGs did not differ across the regions. However, the pharmacies located in cities had higher availability than those in smaller towns and rural areas, for both LPGs and OBs. A survey conducted in Peru using the WHO/HAI medicines' prices and availability methodology did not find any significant differences in overall availability or prices of the medicines under study by retail location. 15 It is commendable that the availability of LPGs on the EML was more than 90% in all regions; as such, in the event of medicines being unavailable at government hospitals, patients can access them at retail pharmacies. It is noteworthy that the LPGs' prices were comparable in all four regions and the differences were not statistically significant, although there is currently no price regulation administered by the government. 37 A careful analysis of the molecules that had large price differentials, for example omeprazole 20 mg, showed that overseas parallel generic imports were available to patients at a significantly lower price than similar generic molecules sourced regionally. Clients in the more affluent locations are generally brand-sensitive compared to patients in rural settings, where outlets can afford to keep the most affordable non-branded import generics. There is a need for the authorities to ascertain that these low-priced molecules also meet the stipulated quality standards.
The study showed that OB patient prices were generally higher in Eswatini than the prices of the same prescription molecules in South Africa. This is in line with expectations, as the literature suggests that medicines' prices in regulated environments are generally lower than in countries with no price regulation. 23 Eswatini procures its OBs from South Africa, where pricing is regulated; as such, the best (lowest) cost to the retail outlets for these would be the SEP, and it is therefore expected that retail prices in the Eswatini outlets will be higher than the SEP. Eswatini prescription medicines' prices carry VAT at 0%, 4 whereas in South Africa all medicines attract VAT at 15%. If the VAT rate in Eswatini were to change from 0% to the standard 15%, 24 Eswatini prescription medicines would become more expensive to end users. The Eswatini market is not limited to the South African generic molecules only, as it also has access to parallel import generics from overseas, which were found to be generally cheaper than the equivalent South African molecules. The difference in the gross domestic product (GDP) per capita per month between South Africa and Eswatini needs to be considered, 8 as a small price difference may turn out to be substantial with respect to affordability in the Eswatini context. Most first-line treatment regimens for NCDs were generally affordable, requiring less than a day's wages. It is worth noting that more than 60% of Eswatini's population lives below the upper poverty line of $8.21 per capita per month, and as such, 8 treatment regimens calculated as affordable may still be well out of reach of the general populace. 16 There is usually more than one family member requiring chronic medication, and even though the individual courses may be affordable, the combined regimens may be unaffordable. Anti-epileptic medications and most second-line management regimens required more than a day's wages; hence, unavailability at the public hospitals could lead to patients defaulting on their treatments.

Limitations
Affordability was calculated on the basis of the daily wage of the lowest paid unskilled government worker, but a large portion of the labour force is not employed by the government and minimum wages are considerably lower than the salary of the lowest paid government worker; thus, the data may not be a true reflection of affordability in Eswatini. Availability refers to the day of data collection at each facility and might not be indicative of average availability over time. Availability and price data were collected during a level 2 lockdown (as a result of the COVID-19 pandemic), when supply chains were disrupted; hence, these findings may not be a true reflection of availability throughout the year (Eswatini depends entirely on imports, as no pharmaceutical manufacturing takes place in the kingdom).

Recommendations
Future research should focus on comprehensive national surveys in all public and private entities to determine medicines' prices, including from wholesalers. Price components throughout the entire pharmaceutical supply chain should be studied. Focus on these will assist in developing policies that work towards improving affordability and availability.

Conclusion
Drug pricing control by the government is one of the factors responsible for lower retail prices in South Africa. The concept of a 'free market economy' in Eswatini may not be enough to regulate the prices of medicines.
There is a need to develop drug pricing policies that govern the whole supply chain. 16 However, for that to happen, all the necessary data on the current pricing structure in the whole pharmaceutical supply chain of Eswatini should be gathered. The Medicines and Related Substances Act of 2016 allows for the implementation of a pricing system for medicines in Eswatini. 17
// Code generated by go-swagger; DO NOT EDIT. package hosts // This file was generated by the swagger tool. // Editing this file might prove futile when you re-run the swagger generate command import ( "fmt" "io" "github.com/go-openapi/runtime" "github.com/go-openapi/strfmt" "github.dev.purestorage.com/FlashArray/terraform-provider-cbs/cbs/internal/array/faclient/2.4/models" ) // DeleteAPI24HostsHostGroupsReader is a Reader for the DeleteAPI24HostsHostGroups structure. type DeleteAPI24HostsHostGroupsReader struct { formats strfmt.Registry } // ReadResponse reads a server response into the received o. func (o *DeleteAPI24HostsHostGroupsReader) ReadResponse(response runtime.ClientResponse, consumer runtime.Consumer) (interface{}, error) { switch response.Code() { case 200: result := NewDeleteApi24HostsHostGroupsOK() if err := result.readResponse(response, consumer, o.formats); err != nil { return nil, err } return result, nil case 400: result := NewDeleteApi24HostsHostGroupsBadRequest() if err := result.readResponse(response, consumer, o.formats); err != nil { return nil, err } return nil, result default: return nil, runtime.NewAPIError("unknown error", response, response.Code()) } } // NewDeleteApi24HostsHostGroupsOK creates a DeleteApi24HostsHostGroupsOK with default headers values func NewDeleteApi24HostsHostGroupsOK() *DeleteApi24HostsHostGroupsOK { return &DeleteApi24HostsHostGroupsOK{} } /*DeleteApi24HostsHostGroupsOK handles this case with default header values. OK */ type DeleteApi24HostsHostGroupsOK struct { } func (o *DeleteApi24HostsHostGroupsOK) Error() string { return fmt.Sprintf("[DELETE /api/2.4/hosts/host-groups][%d] deleteApi24HostsHostGroupsOK ", 200) } func (o *DeleteApi24HostsHostGroupsOK) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error { return nil } // NewDeleteApi24HostsHostGroupsBadRequest creates a DeleteApi24HostsHostGroupsBadRequest with default headers values func NewDeleteApi24HostsHostGroupsBadRequest() *DeleteApi24HostsHostGroupsBadRequest { return &DeleteApi24HostsHostGroupsBadRequest{} } /*DeleteApi24HostsHostGroupsBadRequest handles this case with default header values. BadRequest */ type DeleteApi24HostsHostGroupsBadRequest struct { Payload *models.Error } func (o *DeleteApi24HostsHostGroupsBadRequest) Error() string { return fmt.Sprintf("[DELETE /api/2.4/hosts/host-groups][%d] deleteApi24HostsHostGroupsBadRequest %+v", 400, o.Payload) } func (o *DeleteApi24HostsHostGroupsBadRequest) GetPayload() *models.Error { return o.Payload } func (o *DeleteApi24HostsHostGroupsBadRequest) readResponse(response runtime.ClientResponse, consumer runtime.Consumer, formats strfmt.Registry) error { o.Payload = new(models.Error) // response payload if err := consumer.Consume(response.Body(), o.Payload); err != nil && err != io.EOF { return err } return nil }
Satiety value of groats in healthy women as affected by selected physicochemical parameters

ABSTRACT
The aim of the study was to investigate satiety levels following the consumption of groats and to analyze the relationships between satiety and selected nutrients found in the groats. A total of 54 women were enrolled in a crossover, single-blind study. The participants tested five types of groats (a 240-kcal portion, with ratings recorded over 180 min). The highest satiety was determined for oat groats and barley groats. The correlation analysis indicated that the satiety of the examined groats was most strongly correlated with dietary fiber content and degree of hydration. The results suggest that a key role in the regulation of satiety may be played by soluble dietary fiber (SDF). The study indicated that all the examined types of groats are high-satiety foods. Owing to their high satiety value, groats should be used in the prevention and dietary treatment of excess body weight.

Introduction
Energy balance is regulated by the frequency of meal intake, portion size, method of culinary preparation, and nutritional value, in relation to simultaneous energy expenditure. A positive energy balance leads to the development of obesity. In the context of overweight prevention and the treatment of obesity, various elimination diets, as well as diets reducing the calorific value of meals (in particular the percentage of simple sugars), are applied. These frequently applied restrictive diets are, in the longer term, ineffective. One way to limit the development of the obesity epidemic is to promote meals that provide the proper amount of essential nutrients, in servings of a size sufficient to rapidly induce the feeling of satiety and to satiate for a long time, thus extending the period until the next meal and limiting snacking between meals. Satiety is defined as a feeling of fullness after a meal, which suppresses the sensation of hunger. It is the opposite of appetite and hunger, in both physiological and psychological aspects. Among the many food products with the greatest satiating potential, groats produced from various cereal species should be mentioned. Groats are either whole or crushed cereal grains from which non-digestible components have been eliminated. The technological process of groats production involves pre-treatment and proper processing. After cleaning and sorting by size, the raw material is hulled, sorted, rolled, broken up, and polished in order to give the groats an attractive appearance and to increase their market range. At the same time, these operations bring about changes, sometimes significant, to the nutritional value of groats and their functional properties in relation to the human body. The consumption of groats is typical of the cuisine of Central and Eastern European countries. Groats prepared from barley, buckwheat, millet, oats, and wheat are most commonly used. The nutritional value of groats is very high. They are a rich source of starch, protein, and dietary fiber, as well as B vitamins, phosphorus, iron, zinc, and other essential ingredients. They also contain flavonoids, which exhibit antioxidant properties. Thanks to their wide range and the numerous preparation techniques that can be applied, groats are widely used. For a long time, groats were forgotten, yet recently their share in the diet has been steadily increasing, mainly owing to their nutritional value.
Groats are characterized by a significant percentage of dietary fiber. The water-soluble dietary fiber found in groats exhibits all the functional properties of viscous and gel-forming food hydrocolloids. Thanks to these properties, it can increase the filling of the stomach, delay gastric emptying, and reduce the levels of intestinal hormones involved in inducing the feelings of appetite and satiety. A significant content of raw fiber reduces the glycemic index of food, which prolongs the feeling of satiety and reduces hunger after consumption. Moreover, studies have demonstrated that food with a low glycemic index is characterized by higher diet-induced thermogenesis, that is, the energy necessary for the absorption and metabolism of food; this is usually reported to account for 10% of the total energy expenditure associated with digestion. The main aim of the study was to investigate hunger and satiety levels after the consumption of the five most popular types of groats and to analyze the relationships between selected nutrients found in the groats and their satiating properties.

Participants
The study included 64 women living in northern Poland. The criteria for inclusion in the study were as follows: BMI 18.5-25 kg/m², age 20-28 years, and female gender. The participants qualified for the study were healthy, in good nutritional condition, and not using any medicines, dietary supplements, or special diets. All participants voluntarily signed their consent to participate in the study. The study was approved by the Institutional Ethics Committee at the Medical University of Gdańsk (no. NKBBN/356/2013). The women participating in the study completed a questionnaire with questions concerning their age, body weight, height, pregnancy status, cigarette smoking, and physical activity. After an additional analysis, four women who smoked more than 10 cigarettes a day and five who were not able to complete the study were excluded. In total, 54 women aged between 20 and 28 were enrolled in the study (mean age 23.3 years; SD = 3.5).

Study products
In the study, five types of groats produced by the Cenos company were used. The quality of the groats was in accordance with the applicable standards for pearl barley, millet groats, buckwheat groats, grits, and oat groats (industry standard). All groats intended for analyses were cooked according to the instructions on the package and were tested at a temperature of 65°C.

Study design
This was a crossover, single-blind study. Each participant tested five breakfasts on five separate days; the order of the test breakfasts was randomized. The study was carried out in the morning, between 8 a.m. and 9 a.m., with the participants fasted at the beginning of the study. Each participant assessed the level of hunger and satiety felt prior to consuming the product and after consuming it, at 1-h intervals over the subsequent 180 min, on test days spaced two days apart over a 10-day period.

Satiety ratings
An unstructured 100-mm visual analogue scale (VAS), with 0 mm as the "very hungry" end point and 100 mm as the "very satiated" end point, was used. Participants consumed the entire sample; the duration of intake was kept as short as possible and did not exceed 5 min. A serving had a calorific value of 240 kcal. Participants recorded their levels on the VAS over 180 min in the morning.
The first measurement was carried out in the fasted state, and the next immediately after consuming a 240-kcal serving of the selected groats. Subsequent measurements were carried out after 60 min, 120 min, and 180 min. Each participant tested each type of groats at 2-day intervals.

Physicochemical research
The basic nutrients were determined in the analyzed groats: protein content (P-142 ed. I of 14.05.2012), fat content, and carbohydrate content (PN-A-82100:1985). Then, additional parameters likely to affect the feelings of satiety and hunger, i.e. starch content, soluble dietary fiber (SDF) and insoluble dietary fiber (IDF) contents, and water content, were determined. Starch content was assayed using method PB-265 ed. I of 30.06.2014; the analysis involved hydrolysis of starch with DMSO and HCl solution, and the amount of generated NADH was assayed spectrophotometrically, using a Starch-Boehringer 10 207 748 035 enzymatic test. The contents of SDF and IDF were assayed using the gravimetric-enzymatic AOAC 991.43:1994 method. The water content of the groats was assayed using the oven-drying method in accordance with standard AOAC 925.10.

Statistical analysis
The obtained results were verified statistically with a single-factor analysis of variance (ANOVA), using STATISTICA 12.0 software. The parameters of multiple regression, taking into account the concept of shared variance, were estimated using the REGLINP function (the Polish-language equivalent of LINEST) in the Excel 2010 PL spreadsheet. In order to determine the levels of satiety and hunger, the area under the curve (AUC) was computed using the trapezoid method, with measurements taken every hour over 180 min. The correlations between the tested parameters were then calculated to determine the general trends and relationships. A significance level of p < 0.05 was adopted.

Results
Table 1 presents the contents of basic nutrients found in the selected cooked groats. The high water content of cooked grits (84.7%) and millet groats (73.9%) meant that, for the fixed calorific value used in this study, their servings were significantly heavier than those of the other tested groats (Table 2). The intake of such a large serving during a short period of time was probably responsible for triggering one of the mechanisms behind the rapid postprandial increase in perceived satiety, namely the effect of stomach distension. At the same time, the high water content of these groats meant a low calorific density, which probably contributed to the significantly faster decrease in perceived satiety compared to the other, more calorific types of groats. It was assumed that the water content of the tested groats was primarily determined by the presence of hydrophilic components capable of binding and retaining water. To this end, the parameters of a multiple regression were estimated to determine the effects of protein, starch, and dietary fiber contents on the water content of cooked groats; the simultaneous use of several explanatory variables increases the accuracy of forecasting compared to the use of only one variable. The obtained equation took the following form: y = 2.42x₁ − 1.33x₂ − 2.80x₃ + 91.42, where y = water content, x₁ = protein content, x₂ = starch content, and x₃ = dietary fiber content.
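As a quick plausibility check of the fitted equation, the sketch below simply evaluates it for an illustrative composition. The coefficients are taken from the text; the input values are hypothetical, not data from Table 1.

package main

import "fmt"

// predictedWaterContent evaluates the fitted multiple regression
// y = 2.42*x1 - 1.33*x2 - 2.80*x3 + 91.42 (coefficients from the text),
// where x1 = protein, x2 = starch and x3 = dietary fiber contents (%).
func predictedWaterContent(protein, starch, fiber float64) float64 {
	return 2.42*protein - 1.33*starch - 2.80*fiber + 91.42
}

func main() {
	// Hypothetical composition, roughly in the range reported for cooked groats.
	y := predictedWaterContent(4.0, 20.0, 3.0)
	fmt.Printf("predicted water content: %.1f%%\n", y) // ≈ 66.1%
}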
For this model, the value of the squared coefficient of correlation between the response variable and the best combination of its predictors, R², was 0.9819, meaning that this combination of predictors accounts for the dominant part of the variance shared with the response variable. In turn, the standard error of the estimation, which describes the residual dispersion (the residuals being the differences between the actual and forecasted values), was 2.2241, and the calculated value of the F-statistic amounted to 18.0596. The obtained results indicate that the degree of hydration of the dry matrix of particles of the tested types of groats, which determines the weight of a meal and its calorific value, was a resultant of the predictors taken into account in the regression equation, of which protein content was of the greatest significance. These results only partially explain the varied degree of hydration of particular types of groats; in-depth research should also take into account the differences in the geometrical characteristics of the particles of the tested types of groats.

Pearl barley (128 kcal/100 g) and oat groats (125 kcal/100 g) were the most calorific. The high calorific value of these types of groats was primarily determined by the high protein content (4.4%), fat content (3.2%), and very high starch content (22.3%) for oat groats, and by the high protein content (4.03%) and starch content (19.7%) for pearl barley. The lowest content of starch was found for cornmeal (grits). It was statistically proven that starch content differs significantly between the types of groats (p = 0.0004). The protein content of three groats (buckwheat, barley, and oat) was similar, and the lowest protein content was found in cornmeal. Oat groats also contained a significant amount (3.4 g/100 g) of dietary fiber, which could have affected the results concerning the determination of hunger and satiety levels. The highest content of total dietary fiber (TDF) was found in oat and barley groats; low amounts of total fiber were found in cornmeal and millet. While analyzing the IDF and SDF separately, it was found that the content of SDF in each examined product was below 1 g per 100 g of boiled groats. For oat and barley groats, the content of SDF was threefold lower than that of IDF. The technical parameters of the cooking process and, consequently, the degree of hydration of groats can play a decisive role in shaping the satiating properties of the selected groats.

Hunger and satiety

The estimated level of satiety experienced on an empty stomach by participants of the study ranged from 25 to 35 mm (Table 2, Figure 1). Immediately after the intake of isocaloric and isothermal servings of cooked groats, i.e. after a maximum of 8 min from the beginning of the study, the experienced satiety level reached the highest values for grits (83 ± 7) and millet groats (82 ± 7). However, after 3 h (the end of the test), the level of satiety experienced after the intake of these groats fell significantly to the lowest noted values (16 ± 8 for grits and 23 ± 8 for millet groats) (Table 2, Figure 1). This means that the intake of grits caused a rapid, yet short-term, increase in the level of satiety being experienced, which was equally rapidly replaced by a growing feeling of hunger (Table 3, Figure 2).
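To make the AUC quantification used in this study concrete, a short sketch of the trapezoid computation over the four hourly VAS readings follows. The end points 83 and 16 echo the grits trajectory reported above; the intermediate ratings are hypothetical.

# VAS satiety ratings (mm) at 0, 60, 120 and 180 min after intake.
# The 83 and 16 end points echo the grits results above; 55 and 30 are hypothetical.
times = [0, 60, 120, 180]
vas = [83, 55, 30, 16]

def trapezoid_auc(t, y):
    # Sum the trapezoid areas between consecutive measurement points.
    return sum((y[i] + y[i + 1]) / 2 * (t[i + 1] - t[i]) for i in range(len(t) - 1))

print(f"satiety AUC = {trapezoid_auc(times, vas):.0f} mm*min")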
In turn, immediately after the intake of the other tested types of groats, the level of satiety being experienced reached similar values (71-74) which, after 3 h, decreased to between 35 ± 10 (white buckwheat groats) and 50 ± 8 (oat groats) (Table 2, Figure 1). This means that the intake of oat groats and pearl barley resulted in a slower (compared to grits and millet groats), yet relatively longer-term, feeling of satiety, which was relatively slowly replaced by a growing feeling of hunger (Table 3, Figure 2). An analysis of the mean values of the level of satiety subjectively experienced after the intake of the tested groats, which gradually decreased during the test, indicated that the highest level of satiety was associated with the intake of oat groats and the lowest with the intake of grits (Table 2). At the same time, having analyzed the mean value of the level of hunger experienced after the intake of the tested groats, it can be concluded that the lowest level of hunger was ensured by the intake of oat groats, and the highest by the intake of grits (Table 3). Based on the obtained results, it can be concluded that the intake of oat groats and pearl barley resulted in the highest satiety level (Figure 1) and the lowest hunger level (Figure 2) expressed using the VAS score. In order to quantify the subjective primary data, the AUC values for satiety were estimated and compared, which indicated that the lowest level of satiety was observed for the intake of grits and millet groats, as the AUC for these types of groats was the smallest. The largest AUC was determined for oat groats and pearl barley. A statistical analysis demonstrated that the satiety index is determined by the type of groats (α = 0.05; F = 65.22, exceeding the critical value). For the determination of hunger level, the aim was to verify a similar assumption concerning the effect of the type of groats on the level of hunger being experienced. As in the case of determining the satiety level, the value F = 68.35 determined for the hunger level at the significance level α = 0.05 indicated that the hunger level was determined by the type of groats. Based on an analysis of the results, it was concluded that the highest hunger level was observed following the intake of grits and was only slightly lower for millet groats. On the other hand, the lowest level of hunger was recorded following the intake of oat groats. For grits, hunger appeared the quickest: as early as after 2 h, some participants exhibited severe symptoms of hunger at a VAS level of 68 ± 10 mm. For millet groats, the level of hunger after 2 h was high as well (Table 3). The AUC of hunger had the highest values for grits and millet groats. The intake of unroasted buckwheat groats was characterized by moderate levels of hunger and satiety; the AUC amounted to 212 for satiety level and 168.31 for hunger level.

The dependence of hunger and satiety on selected physicochemical parameters

Based on the values of Pearson's correlation coefficient, it was determined that the highest level of correlation occurs between hunger and satiety levels and protein and dietary fiber contents. Based on an analysis of linear regression describing the relationship between nutrient contents and the feeling of satiety, it was concluded that satiety was determined, to the greatest extent, by the dietary fiber content of the tested types of groats.
An increase in the percentage of dietary fiber by one percentage point may lead to an increase in the satiety level by 10.6 points according to the VAS score, with other features remaining unchanged (Table 4). In turn, an increase in the percentage of protein by one percentage point results in an increase in the satiety level by 5.5 points according to the VAS score, under the same assumption (Table 4). The analysis of the IDF and SDF fractions showed that, although neither variable was found to be statistically significant due to the low number of observations, the strong correlations of these variables indicate that a relationship exists. A considerably greater effect on the satiety of groats was recorded for the content of soluble fiber compared to the effect of insoluble fiber. Moreover, while adopting the approach taken previously (assuming that the simultaneous use of many explanatory variables would help to increase the accuracy of forecasting), the parameters of the equation describing the effect of water, starch, and dietary fiber contents in the tested types of groats on the level of satiety experienced after their intake (expressed in the AUC values of satiety), and the parameters of the equation describing the effect of the same variables on the level of hunger experienced (expressed in the AUC values of hunger), were estimated. The obtained multiple equation of satiety took the following form: y = -0.78x1 + 0.36x2 + 3.39x3 + 253.66, where y = AUC of satiety, x1 = water content, x2 = starch content, x3 = dietary fiber content. The value of the squared correlation coefficient R² was 0.9807, which means that the best combination of predictors expresses the dominant part of the variance shared with the response variable. In turn, the standard error of estimation, describing the residual dispersion, took a value of 3.6509, and the value of the F-statistic amounted to 16.9004. The obtained results indicate that the level of satiety experienced following the intake of groats was a resultant of the predictors taken into account in the regression equation, of which dietary fiber content was of the greatest significance for inducing the feeling of satiety. In the long term, the presence of water contributed to a decrease in the satiating potential of groats, most probably because it only serves the function of a mechanical filler of the stomach and is not, by itself, capable of staying within it. The multiple equation of hunger took the following form: y = 1.68x1 + 1.55x2 - 9.46x3 + 35.12, where y = AUC of hunger, x1 = water content, x2 = starch content, x3 = dietary fiber content. The value of the squared correlation coefficient R² was 0.9720, again expressing the dominant part of the shared variance; the standard error of estimation was 6.2340 and the value of the F-statistic amounted to 11.5860. The collected results indicate that the level of hunger experienced following the intake of groats was a resultant of the predictors taken into account in the regression equation, of which, similarly to the effect of the same parameters on the satiating potential of the tested types of groats, dietary fiber content was of the greatest significance for suppressing the feeling of hunger, as proven by the negative value of this parameter (counteracting the tested phenomenon). On the other hand, water and starch contents determined, to a similar extent, the emergence of the feeling of hunger.
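The opposite roles of fiber versus water and starch can be checked by plugging illustrative compositions into the reported hunger equation. The two input profiles below are hypothetical, chosen only to contrast a fiber-rich product with a watery, low-fiber one.

def hunger_auc(water, starch, fiber):
    # The paper's fitted hunger model: y = 1.68*x1 + 1.55*x2 - 9.46*x3 + 35.12.
    return 1.68 * water + 1.55 * starch - 9.46 * fiber + 35.12

# Hypothetical compositions (per 100 g of cooked groats):
print("fiber-rich, oat-like:         ", round(hunger_auc(water=66.0, starch=22.3, fiber=3.4), 1))
print("watery, low-fiber, grits-like:", round(hunger_auc(water=84.7, starch=13.0, fiber=0.1), 1))

The fiber-rich profile yields the lower hunger AUC, mirroring the sign pattern discussed next.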
Both water and starch had positive values in the determined equation, which suggests that their presence was conducive to promoting the feeling of hunger (cooperating with the tested phenomenon). Water, as a component lacking the ability to provide energy, does not counteract the feeling of hunger. Starch, on the other hand, as a result of its pasting, was easily digested, triggering an increase in glycaemia, which was conducive to another episode of hunger. Considering the regression results for the combined effect of water, starch, and fiber on the levels of satiety and hunger, it was found that fiber induced the feeling of satiety to the highest degree and, at the same time, counteracted the feeling of hunger to the highest degree. A multiple regression analysis was also carried out taking account of the different types of fiber: a multiple equation was constructed for the correlation between the feeling of satiety and the presence of both fiber fractions together with the water with which they interact. Based on a comparative analysis of the parameters of both regression equations, it can be concluded that these results may be treated as confirmation of the diverse role of the soluble dietary fiber fraction (SDF) compared to the insoluble dietary fiber fraction (IDF) in affecting the capacity of food to induce the feeling of satiety and counteract the feeling of hunger. Both IDF and SDF had negative values in the constructed equation, which suggests that their presence limited the feeling of hunger. However, based on our studies it is difficult to conclusively determine the effect of the particular fractions on satiety, because this is not a model study and it covers a variety of variables with an effect on satiety. Nevertheless, all of the examined groats were characterized by a high degree of satiety.

Discussion

The current study concerns the novel issue of satiety and is the first study to investigate the satiating properties of groats. The results of the study demonstrate that groats can be considered a product with high satiating properties. The satiating potential of buckwheat groats, oat groats, and pearl barley is particularly noteworthy. The satiating properties of groats are probably a resultant of many factors. The basic determinant shaping the satiety value of groats is their chemical composition, which determines the nutritional value. The chemical composition differs depending on the species of cereal used to produce the groats and on the type of groats, which is determined by the adopted method of technological processing. In addition, the duration and method of cooking may increase the degree of water absorption and affect the size of a portion. The effect of the intake of groats on the regulation of hunger and satiety is relatively poorly known. The literature remains ambiguous as to which macronutrients trigger the most satisfactory effect of satiety. It is suggested that protein satiates hunger most effectively compared to fats and carbohydrates. Protein-rich diets have been the subject of numerous studies which have demonstrated that the protein contained in foods may effectively contribute to a favorable energy balance and help to control body weight. Groats, however, are not a rich source of protein. The protein content of grits is negligible and amounts to 0.78 g/100 g, while in other types of groats, the protein content amounts to a maximum of 4.5 g/100 g. Therefore, this is not a value which could determine the satiating properties of groats.
Neither protein nor fats are components to which the satiating properties of groats could be attributed. It appears that the most important factor affecting the satiating potential of groats is the content and type of carbohydrates. Their role is manifold, depending on their type and structure. The effects of carbohydrates on metabolism are associated with the occurrence of hormonal effects, the properties of the particular carbohydrates contained in food products, and the ability of certain carbohydrates to ferment in the large intestine. Properly composed meals based on complex carbohydrates can help control body weight due to their satiating properties. In the current study, carbohydrate contents in groats varied from 22.7 to 27.8 g/100 g, with the exception of grits (13 g/100 g). Carbohydrates are the dominant component determining the energy value of cooked groats, and the greatest share is that of starch. Starch properties depend mainly on the amylose-to-amylopectin proportion, which can vary to a large extent even within one cereal species. Regarding the types of groats under study, the amylose-to-amylopectin ratio was as follows: millet grains, 1:3; buckwheat groats, 1:4.8; pearl barley, 2.1:1; oat groats, 1:2.3; grits, 1.85:1. Starch content is linked to the GI: a higher amylose content relative to amylopectin ensures lower postprandial glycaemia and insulinemia, as the amylose contained in starch is less sensitive to the action of enzymes. The properties of dietary fiber also contribute to the feeling of satiety. Its presence mainly contributes to the effect of stomach distention and an increase in the viscosity of stomach contents, which also delays stomach emptying. In turn, the presence of dietary fiber in the intestine slows down the rate of digestion of carbohydrates and lipids. Based on studies, apart from starch, the most important component of grain products is dietary fiber, and in recent years there has been interest in the possibilities of using dietary fiber in the regulation of satiety. In the present study, the total fiber content in the various groats ranged from 0.1 g to 3.4 g per 100 g. The highest TDF content and the highest satiety degree were characteristic of oat groats and pearl barley. On the other hand, H et al. reported that following hydrothermal treatment, the content of TDF was considerably higher for buckwheat groats (16.45 g/100 g) compared to barley groats (7.99 g/100 g), which was not confirmed by the current study. Moreover, Górecka et al. found a higher TDF content in boiled barley groats than in buckwheat groats. Presumably, these differences in TDF content resulted from the technological treatment methods applied, grain variety, and species. Nevertheless, oat groats were characterized by the highest content of dietary fiber. Geliebter et al. studied the effect of oat and corn flakes on the feeling of satiety: satiety was higher, and the consumption of the tested ad libitum meal was lower, following the consumption of oats than following the consumption of corn flakes, which was confirmed in the present study. Schroeder et al. studied the effect of fiber on satiety and reported that the consumption of whole-grain barley with a high fiber content (12 g of fiber per 56 g barley portion) induced a greater increase in the feeling of satiety before an ad libitum meal at dinner time than whole-grain wheat (5 g of fiber per 56 g portion) and refined rice (1 g of fiber per 56 g portion).
This confirms that oat products are characterized by a very high satiety value. On the other hand, Korczak et al. did not confirm that correlation: they studied the effect of oat and barley bran on satiety (10 g of oat bran, 10 g of barley bran, and a low-fiber control) and did not find any differences between bran type and satiety. A comparison of study results concerning satiating properties is very difficult due to differences in the type of fiber used (SDF versus IDF) and in the doses used in the preload. An extensive account of model studies using different fiber types can be found in the literature, whose authors reported that the addition of dietary fiber to food considerably increased the feeling of satiety. Bajerska et al. examined the effect of adding cherry pomace (CP), a by-product of fruit processing, to muffin production as a substitute for wheat flour, at a variety of concentrations. They found that substitution of wheat flour with cherry pomace at both the 20% CP and 30% CP levels improved satiety and resulted in lower energy consumption after 3 h (a 40% CP addition level, although tested as well, was not acceptable to consumers due to its sensory properties). Lyly et al. found some benefits when measuring satiety after providing study participants with beverages enriched with a variety of fibers in comparison to a fiber-free beverage. It was found that not all of the added fiber produced a desirable effect: only the beverage enriched with guar gum ensured a statistically significant increase in the feeling of satiety compared to the fiber-free beverage. For many years, researchers have been studying the enrichment of bread with fiber to increase its satiety. Touyarou et al. studied the satiety response to two types of bread enriched with fiber compared to white bread. Although the soluble-to-insoluble fiber ratio was similar in the fortified breads, one of them resembled a multi-grain bread while the other resembled a traditional sandwich bread. The researchers found that a weaker feeling of hunger was achieved following the consumption of both breakfasts compared to the control breakfast, while the appearance and taste did not affect the subconscious decisions of the consumers. In the current study, since all of the participants liked groats and did not reveal an aversion to any of them, consumer preferences did not have a significant effect. Other studies found that food enrichment with psyllium, lupin, or flax may considerably regulate the consumption of a meal and reduce the energy of a daily diet, while a predominance of fiber resistant to digestion does not produce the desirable effect. Recently, researchers have increasingly been analyzing the effect of fiber content on satiety depending on its fraction and proportion. It is assumed that the effect of the insoluble fraction on the promotion of satiety is mainly determined by the reduction of the energy value of a diet. Other studies have indicated that the presence of hemicelluloses may contribute to a decrease in hunger. Many studies, however, underline the role of soluble fiber which, even in small amounts, may play a key role in satiety regulation. The presence of soluble fiber promotes satiety through slowing down glucose absorption: in the small intestine's environment, with the required amount of water, fiber swells and forms viscous gels.
IDF-to-SDF ratios in grain products are similar; however, it is underlined that the highest share of water-soluble fiber can be found in oat products, due to the content of β-glucans and other soluble components. In the current study, the highest satiety was obtained following the consumption of oat groats (with the highest SDF content). Other researchers have underlined the role of arabinoxylans and β-glucans in satiety regulation. Foschia et al. found that, among the cereals, the highest content of β-glucans was reported for barley (2-20 g per 100 g dry matter) and for oats (3-8 g). Other cereals also contain these compounds, although in considerably lower amounts. On the other hand, reports suggest that cereal grain processing and the use of hulless grain substantially deplete a product of these components. The amount of these compounds in boiled groats is insufficient to ensure a significant reduction of the feeling of hunger. It was observed that, although the content of β-glucans is presumably the highest in barley and oat groats compared to other cereal products, this alone would not produce the desired satiety effect; it could, however, reduce energy consumption from the next meal. Water-soluble and water-insoluble fibers reveal different physicochemical properties, and it may be expected that they have varied effects on satiety signals after a meal. The results of multiple studies show that different types of fiber modulate the appetite control time and may cause changes in consumption motivation and patterns, not necessarily affecting total energy consumption. Dietary fiber and starch contents affect the glycemic index and satiety level. An increase in short-term satiety with the intake of low-glycemic products has been proven in many studies, while the effect of the application of a low-glycemic diet on the feeling of satiety and body weight in the long term still remains controversial. Regarding the groats in the current study, the determined satiety levels correlate with the typical GI values: the types of groats which exhibited high satiating potential in the current study are assigned low GI values in the literature (oat groats GI = 47, pearl barley GI = 45, millet groats GI = 70). All of the examined groats were characterized by high satiety compared to other food products; further research to verify this relationship is required. Given the capacity of groats for inducing and maintaining the feeling of satiety, it is also necessary in further research to take into account (in addition to the mentioned dietary fiber, protein, and starch) the water content (the degree of hydration), which determines the degree of starch pasting and the viscosity of food. The results of the current study indicate that the water present in groats affects the feeling of satiety within a short time after consumption. Depending on the method of technological processing and the degree of overcooking, groats differ not only in water content but also in GI and satiating properties. The water content of cooked groats is determined by the geometrical features of the groats' granules and the degree of their hydration. Grits and millet groats were characterized by the highest water content; they were also characterized by the smallest granules, which could, as a result, determine the satiety value of the groats. Groats with large, hard granules after cooking (pearl barley, oat groats, buckwheat groats) were characterized by relatively high satiating potential, which is in line with numerous studies.
This suggests that not only the chemical properties of groats but also their physical features, mainly those associated with the structure of the groats' granules and the rheological properties of the stomach contents after their intake, may have an effect on the occurrence of the feeling of satiety. Therefore, while determining the satiating potential of food products, it is necessary to take into account not only the nutrients but also the physical features of the product. An assessment of the satiating properties of groats should be extended in the future with additional tests focused on the physical parameters of foods, which may have an effect on the duration of the feeling of satiety and on the time of the appearance of another episode of hunger, with constant chemical parameters of the foods and hormonal parameters of the body.

Conclusion

The satiety potential of groats is mostly determined by the dietary fiber content and the degree of hydration of the groats after cooking. Oat groats and pearl barley were the groats that satiated most efficiently, and they also contained relatively the highest levels of protein. A study of the correlation between satiety and the IDF and SDF fractions did not produce clear conclusions. In the course of the study it was found that all the examined groats were characterized by high satiating properties. The consumption of groats, given their relatively low GI, should be part of any rational diet, not only for people looking after their health and controlling their body weight but also for obese people and patients with type 2 diabetes.
Tyrosine kinase and CD45 tyrosine phosphatase activity mediate p21ras activation in B cells stimulated through the antigen receptor.

Cross-linking of the Ag receptor (AgR) induces intracellular signaling events in B cells, such as p21ras activation, that lead to their proliferation and differentiation. This event is accompanied by the tyrosine phosphorylation of the p21ras-associated GTPase-activating protein p120 ras-GAP, raising the possibility that AgR-stimulated p21ras activity is regulated by protein tyrosine kinases (PTKs) and protein tyrosine phosphatases (PTPases) in B cells. To test this possibility, we examined the effects of PTK and PTPase inhibitors on protein tyrosine phosphorylation and p21ras activation induced by AgR cross-linking in TNP-specific TA3 7.9 murine B lymphoma cells. Although AgR-induced protein tyrosine phosphorylation was inhibited by the PTK inhibitors genistein and herbimycin A, it was enhanced by exposure to the PTPase inhibitor phenylarsine oxide (PAO). Cross-linking of the AgR by Ag or F(ab')2 anti-IgM induced a rapid (within 5 min) two- to threefold increase in p21ras activation in 7.9 B cells. Interestingly, a second peak of p21ras activation was evident at approximately 40 min after stimulation. Genistein, herbimycin A, and PAO each blocked AgR-stimulated p21ras activation. Similarly, Ag-induced p21ras activation was inhibited by pretreatment of 7.9 B cells with an anti-CD45 mAb (which detects the 220-kDa B cell isoform of CD45). Moreover, p21ras activation was induced by Ag and F(ab')2 anti-IgM in CD45+ but not CD45- J558Lμm3 B cells. These data indicate that p21ras activation induced by AgR cross-linking in B cells is regulated by both PTK and CD45 PTPase activities.
# Source: npetrangelo/AdventOfCode
# Part 1 sums the scores of the first illegal closing character on each corrupted
# line; part 2 scores the autocompletion of incomplete lines and reports the
# middle score.

filename = "input.txt"

# Opening brackets map to their expected closers; illegal-character scores for part 1.
chunks = {'{': '}', '(': ')', '<': '>', '[': ']'}
scores = {')': 3, ']': 57, '}': 1197, '>': 25137}
score_list = []

def parse(line):
    """Return the closers needed to complete `line`, or [] if the line is corrupted."""
    stack = []
    for char in line:
        if char in chunks:
            stack.append(chunks[char])
            continue
        if char != stack.pop():
            return []
    return list(reversed(stack))

# Part 1: score the first illegal closing character on each corrupted line.
with open(filename) as f:
    for line in f:
        stack = []
        for char in line.strip():
            if char in chunks:
                stack.append(chunks[char])
                continue
            close = stack.pop()
            if char != close:
                score_list.append(scores[char])
                break

print(sum(score_list))

# Part 2: score the completion string of each incomplete line.
scores = {')': 1, ']': 2, '}': 3, '>': 4}
stacks = []
score_list = []

with open(filename) as f:
    for line in f:
        stack = parse(line.strip())
        if len(stack) != 0:
            stacks.append(stack)
            total_score = 0
            for c in stack:
                # Multiply the running total by 5, then add the value of this closer.
                total_score = 5 * total_score + scores[c]
            score_list.append(total_score)
            print("".join(stack), total_score)

print(f"Middle score: {sorted(score_list)[len(score_list)//2]}")
import mpi.MPI;
import mpi.Status;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
// Project-local types (Job, Task, Result, Optimizable, InputPrimitives,
// OutputPrimitives, ManipulationPrimitives) are assumed to be on the classpath.

/**
 * A generic class for MPI based parallelization of our optimization problems.
 * Please note: you must call MPI_INIT from OUTSIDE this and decide already,
 * if the process is master or slave!
 * @author Johannes Dieterich
 * @version 2014-04-02
 */
public class GenericMPIOptimization<E, T extends Optimizable<E>> {

    private static final Logger log = LoggerFactory.getLogger(GenericMPIOptimization.class);

    // message tags for the master/slave protocol
    private static final int NEXTTASK = 0;
    private static final int EXITDONE = 1;
    private static final int WAITFOR = 2;
    private static final int KICKOFF = 42; // unused: MPI broadcasts carry no tag, the root rank is used instead
    private static final long TIMEOUT = 5000; // 5 seconds

    private GenericMPIOptimization() {}

    @SuppressWarnings("unchecked")
    public static <T> void runAsMaster(final Job<T> job) throws Exception {

        log.debug("Entering generic MPI globopt as master. BRACE YOURSELF BIG TIMES!");

        final int noProcs = MPI.COMM_WORLD.Size();
        if (noProcs <= 1) {
            throw new RuntimeException("Trying to actually work all by myself is not in my nature. Bye.");
        }

        /*
         * initial broadcast (root 0, i.e. this master process)
         */
        final char[] initMessage = "Hello from MPI master, all is well!".toCharArray();
        MPI.COMM_WORLD.Bcast(initMessage, 0, 35, MPI.CHAR, 0);

        /*
         * fill up all slaves once
         */
        int taskCounter = 1;
        for (int proc = 1; proc < noProcs; proc++) {
            final Task<T> task = job.nextTask();
            final String outFile = "task" + taskCounter + ".dat";
            OutputPrimitives.writeObjToBinFile(outFile, task);
            final char[] message = new char[50];
            final char[] outFM = outFile.toCharArray();
            System.arraycopy(outFM, 0, message, 0, outFM.length);
            MPI.COMM_WORLD.Send(message, 0, 50, MPI.CHAR, proc, NEXTTASK);
            taskCounter++;
        }

        while (!job.jobFinished()) {

            // receive something
            final char[] answer = new char[50];
            final Status rcvStat = MPI.COMM_WORLD.Recv(answer, 0, 50, MPI.CHAR, MPI.ANY_SOURCE, MPI.ANY_TAG);
            final String resPath = String.copyValueOf(answer).trim();
            final Result<T> result = (Result<T>) InputPrimitives.readBinInput(resPath);

            // delete the result file
            ManipulationPrimitives.remove(resPath);

            // submit it
            job.submitResult(result);

            // and give something out
            final Task<T> task = job.nextTask();
            if (task == null) {
                final char[] message = new char[50];
                MPI.COMM_WORLD.Send(message, 0, 50, MPI.CHAR, rcvStat.source, WAITFOR);
            } else {
                final String taskPath = "task" + taskCounter + ".dat";
                OutputPrimitives.writeObjToBinFile(taskPath, task);
                final char[] message = new char[50];
                final char[] outFM = taskPath.toCharArray();
                System.arraycopy(outFM, 0, message, 0, outFM.length);
                MPI.COMM_WORLD.Send(message, 0, 50, MPI.CHAR, rcvStat.source, NEXTTASK);
                taskCounter++;
            }
        }

        /*
         * tell everybody to go kill themselves
         */
        final char[] finalMessage = new char[50];
        for (int proc = 1; proc < noProcs; proc++) {
            MPI.COMM_WORLD.Send(finalMessage, 0, 50, MPI.CHAR, proc, EXITDONE);
        }
    }

    @SuppressWarnings("unchecked")
    public static <X, Y extends Optimizable<X>> void runAsSlave(final long timeout) throws Exception {

        log.debug("Entering generic MPI globopt as slave. BRACE YOURSELF BIG TIMES!");

        final int myRank = MPI.COMM_WORLD.Rank();

        // try to receive the initial broadcast
        final char[] initMessage = new char[35];
        MPI.COMM_WORLD.Bcast(initMessage, 0, 35, MPI.CHAR, 0);
        final String sInit = new String(initMessage).trim();
        if (!sInit.equalsIgnoreCase("Hello from MPI master, all is well!")) {
            throw new RuntimeException("Initial message from master was not what " + myRank
                    + " expected. Exiting. Received message: " + sInit);
        }

        // do stuff as long as time permits
        final long startTime = System.currentTimeMillis();
        int taskCounter = 0;
        BigLoop:
        while ((System.currentTimeMillis() - startTime) < timeout) {

            // always the same idea: ask the master for a new task
            final char[] message = new char[50];
            final Status status = MPI.COMM_WORLD.Recv(message, 0, 50, MPI.CHAR, 0, MPI.ANY_TAG);
            final int messTag = status.tag;
            if (messTag == EXITDONE || (messTag != WAITFOR && messTag != NEXTTASK)) {
                log.debug("Master tells me to quit: doing so! Tag was " + messTag);
                break;
            } else if (messTag == WAITFOR) {
                Thread.sleep(TIMEOUT);
                continue BigLoop;
            }

            // there is a task: execute it
            log.debug("There is a new task!");
            taskCounter++;
            final Task<Y> task = (Task<Y>) InputPrimitives.readBinInput(new String(message).trim());
            final Result<Y> result = task.executeTask(myRank);

            // delete task
            ManipulationPrimitives.remove(new String(message).trim());

            final String outputFile = "result" + taskCounter + ".bin";
            OutputPrimitives.writeObjToBinFile(outputFile, result);

            // report back (send only the 50-char buffer, not 77 as before: that would read past its end)
            final char[] filePath = outputFile.toCharArray();
            final char[] answer = new char[50];
            assert (filePath.length <= answer.length);
            System.arraycopy(filePath, 0, answer, 0, filePath.length);
            MPI.COMM_WORLD.Send(answer, 0, 50, MPI.CHAR, 0, myRank);
        }
    }
}
Feasibility of using comparative judgement and student judges to assess writing performance of English language learners

This study aims to identify how feasible it is to use comparative judgement (CJ) and student judges to assess the writing performance of English language learners. For this purpose, 35 paragraphs written by students enrolled in a freshman Academic Writing course at a semi-private university located in the Turkish Republic of Northern Cyprus were selected and uploaded to the http://www.nomoremarking.com website. Ten instructors of the Academic Writing course and 112 students taking the course volunteered to participate in the study. The students were divided into 5 groups according to their writing performance level. In total, around 350 comparisons were done by each group. The results suggested that it could be feasible to use CJ to assess short pieces of writing such as paragraphs, provided the instructors are experienced and trained. Moreover, the instructors liked CJ and described it as a more practical, easier, fairer, faster, and more enjoyable way of marking student papers. The students also liked CJ, and it was found that students who were high achievers in paragraph writing might be used to mark student papers through comparative judgement as long as they were trained.

Introduction

Assessing student performance effectively is one of the prerequisites of language teaching, because language learners are expected to move their learning to the performance level in order to demonstrate it. At this stage, assessing student performance accurately is critical for pinpointing student performance validly and reliably and for informing teachers and/or administrators about how their program and/or students are doing in terms of performance-level English proficiency. In English language teaching, performance is mostly measured through two productive skills: speaking and writing. Assessing speaking accurately is difficult because the level of anxiety a student experiences may confound the construct being tested () and cause construct-irrelevant variance in the student scores. Apart from this, making students speak during the exam, creating context and finding authentic situations, preparing a rubric, and maintaining intra- and inter-rater reliability are just a few of the problematic issues to consider while assessing speaking. Assessing writing is similarly problematic in nature, as it necessitates the use of subjective annotations () and the use of multiple judges for marking as the number of students increases. More importantly, it requires double marking if the test results are to be used for high-stakes purposes. Additionally, as the range of possible responses is open-ended, the marking of essays is a truly complex process. Due to this somewhat problematic nature, writing assessment is handled with particular care in educational organizations and is paid the utmost attention. If the reliability of marking cannot be maintained, and if multiple raters assess papers of similar quality in totally different ways, this may decrease the reliability of the scores attained, and objections to the results may be received. As can be seen, assessing writing poses challenges that are difficult to solve in terms of reliability and validity.

Problems of Traditional Rubric-based Writing Assessment

There are some concerns related to validity raised about writing assessment (van ).
First of all, Humphry and Heldsinger state that the matrix system used in analytic rubrics may even constitute a threat to validity, as it may cause "pronounced rating tendencies" which can produce a halo effect (p. 253). Moreover, what constitutes a good piece of writing is an arbitrary decision developed uniquely by each rater throughout their service as raters or teachers; research likewise shows that judges differ in their views of good writing. This problem can be reduced using marking schemes. However, research indicates that using marking schemes can also increase reliability concerns (), because the recent trend of making marking schemes more specific to increase reliability turns out to be counterproductive: it negatively affects instruction, which comes down to narrowly following what is assessed in the marking schemes. Furthermore, although using marking schemes or rubrics helps to obtain absolute scores, raters often make comparisons between the paper being scored and previously scored papers. Apart from validity and reliability concerns, using marking schemes or rubrics to assess writing requires training the raters, monitoring them, and standardization (), which can be tiring. Due to these problems, alternatives have been sought for a long time. The strongest alternative to traditional rubric-based marking is comparative judgement (CJ).

What is Comparative Judgement?

CJ is simply based on a judge's comparing two stimuli, that is, two responses to a certain task, and choosing the better one. It is not based on the decision of a single judge, though: there are multiple judges, and therefore multiple comparisons in which each stimulus is taken into account. After the repeated comparisons, the judges' decisions are tallied, the score each stimulus gets is calculated based on these tallies, and the stimuli are rank-ordered according to their standard scores. This calculation is based on the law of comparative judgement, which was first introduced by Thurstone. Thurstone based the law of comparative judgement on people's being better at comparing two objects with each other than at comparing them to a preset list of criteria. Thurstone's law of CJ has been reformulated in the Rasch model by Brogden and Andrich, and it was introduced to educational assessment by Pollitt. A detailed review of the mathematical foundations of CJ can be found in Pollitt. CJ is well known for its high reliability (Bramley & Vitelio, 2018): one can reach an overall score reliability of up to .96 () and .98 (Humphry & McGrane, 2015) via CJ. This may prove that CJ reaches reliability scores which are difficult to obtain in traditional rubric-based marking. CJ is also beneficial because it relieves the stress of marking and makes the judges focus more on their expertise, which is thought to increase validity during the marking of, for example, essays (van ).

Challenges of CJ

Apart from the advantages it presents in terms of validity and reliability, CJ can be time-consuming and tiring for judges (). For this reason, adaptive comparative judgement (ACJ) has been developed. In CJ, each stimulus should be paired with all other stimuli to obtain a ranking score. In ACJ, however, not every stimulus has to be paired with every other stimulus: with the help of computer systems, if stimulus 1 is judged to be better than stimulus 2, and stimulus 2 is better than stimulus 3, then it is known that stimulus 1 is better than stimulus 3, so ACJ does not pair stimulus 1 and stimulus 3 at all.
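To make the tally-and-score step concrete, here is a minimal Python sketch that fits a Bradley-Terry-style strength to each stimulus from a set of pairwise decisions and rank-orders the stimuli. This only illustrates the general family of models behind CJ scoring (Thurstone- and Rasch-type formulations are close relatives); the comparison data are made up, and this is not the exact algorithm of any particular platform.

import math

# Hypothetical judging data: (winner, loser) index pairs from repeated comparisons.
comparisons = [(0, 1), (0, 2), (1, 2), (2, 3), (1, 3), (0, 3), (2, 1), (0, 2)]
n_items = 4

# Fit Bradley-Terry strengths by gradient ascent on the log-likelihood,
# where P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)).
s = [0.0] * n_items
for _ in range(500):
    grad = [0.0] * n_items
    for w, l in comparisons:
        p_w = math.exp(s[w]) / (math.exp(s[w]) + math.exp(s[l]))
        grad[w] += 1.0 - p_w    # winner pulled up by the "surprise" of the win
        grad[l] -= 1.0 - p_w    # loser pushed down symmetrically
    s = [si + 0.05 * gi for si, gi in zip(s, grad)]
    mean = sum(s) / n_items     # anchor the scale at zero
    s = [si - mean for si in s]

# Rank-order the stimuli by their estimated scores.
for rank, i in enumerate(sorted(range(n_items), key=lambda i: -s[i]), 1):
    print(f"rank {rank}: stimulus {i} (score {s[i]:.2f})")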
This adaptive selection brings efficiency to traditional CJ. Pollitt asserts that ACJ also reaches a level of reliability that cannot be attained by any other marking method. However, a problem stated by Pollitt was that expanding CJ, even in the form of ACJ, to very large-scale assessments involving thousands of students and raters raises serious practicality concerns. Due to the nature of CJ (and also of ACJ), the number of paired comparisons increases as the number of stimuli (e.g. paragraphs) involved in the judging process increases. This means that more raters and more time are needed as the number of students increases, and it may not always be easy or cost-effective to find more raters. However, including students in the marking process may be a solution, and this could be advantageous in many ways. First of all, as students are plentiful, if they can be used for marking, a huge hurdle in front of CJ could be overcome. Secondly, it has lately become highly common to include students in assessment mechanisms in the form of peer feedback or self-reflection. Using students in the marking of writing performance via CJ could be a step further along this trend without harming the reliability of the scores, because it is practically impossible, or at least rather difficult, to favor one single student paper in CJ. Moreover, successful integration of students into the marking process via CJ may work in favor of performance assessment, considering the high reliability CJ promises. This may mean that if the scalability hurdle of CJ could be overcome by using students as judges, CJ might also be implemented widely for large-scale assessment.

The Present Study

The motivation behind this research was the scalability hurdle of CJ. It is thought that this study will help to overcome this issue by addressing the following research questions:

- Is it feasible to use CJ to mark paragraphs written by English language learners?
- Is it feasible to integrate students into the marking process of paragraphs using CJ?
- What do students think about their CJ experience?

Method

In this part of the paper, the methodology is presented under the headings of research design, data collection tools, participants of the study, data collection procedure, and data analysis.

Research Design

In this study, both qualitative and quantitative data were collected through Likert-type and open-ended items in the data collection tools. According to Model 2 of Steckler et al., in mixed-method studies the qualitative data are collected to further explain the quantitative data. In this study, the qualitative data were likewise gathered to support and explain in detail the data collected quantitatively. Therefore, it can be said that this study is a mixed-method study based on Model 2 of Steckler et al.

Participants

Ten instructors from the Modern Languages Department of Middle East Technical University Northern Cyprus Campus (METU NCC) and 112 students from METU NCC who took the ENGL 101 course during the spring semester of the 2018-2019 academic year participated. The students and instructors were informed about the aims of the study and asked for their voluntary contribution. Then, the students were divided into 5 groups. Three classes (sections) participated in the study; they were labeled as Whole Class 1, 2, and 3 (WC1, WC2, WC3). There was a total of 80 students in these classes.
Moreover, students from different sections of the ENGL 101 course with high paragraph writing scores (over 8 out of 10) were also invited to the study; 32 students agreed to participate, and they were divided into two groups labeled Skilled Raters 1 and 2 (SR1, SR2). Descriptive information about the raters can be found in Table 1.

Data Collection Tools

Data collection was done using paragraphs as stimuli because they were thought to be less time-consuming for the raters and easier for the student raters to evaluate. In addition, the www.nomoremarking.com website (NMM) was used as the platform for marking. Ten comparisons were set for each student paper, as suggested by Pollitt; thus, a total of around 350 comparisons (35 papers × 10 comparisons) was reached as the minimum number of comparisons per group. NMM was a free-to-use platform for CJ at the time this study was conducted, and it is still free for personal and research use. The paragraphs were scanned and uploaded to the system. Judge names and contact information were entered into the website, and judges received a unique link for their judgements. The system allows the judges to take a break at any time and continue even days later. When the judges start their comparisons, they see two papers on the screen at the same time; it is possible to zoom in on part of a paper or read the papers one by one on the screen for a larger view. The only action required of the judges was to choose the better of the two paragraphs presented on the screen. A survey was prepared for both the students and the instructors who participated in the study to get their in-depth views on CJ. The survey for instructors contained 3 background questions and 10 open-ended questions (see Appendix 1). Open-ended items were preferred in the instructor survey in order to collect meaningful qualitative data from a very limited number of participants (n = 10). The students were administered a 7-item survey with a 5-point Likert scale. The student survey items were Likert-type because the number of students was adequate to obtain meaningful results and the students' expertise in marking was low. A background survey (which included age, gender, name and surname, email address, course section, and department) was administered to the student judges separately. The surveys were deliberately kept short to foster a high response rate, as this study was based solely on the voluntary efforts of both students and instructors, and the task asked of them was rather challenging and time-consuming for a study with no incentives other than the mere contribution to research. The survey items were first written by the researcher and given to a measurement and evaluation specialist to be reviewed in terms of psychometric qualities; modifications were made based on their feedback. Then, two English instructors were asked to review the items in terms of linguistic quality and conciseness, and the items were modified based on their feedback as well. Finally, both surveys were given to two individuals from the instructor and student groups, who were asked to read them and comment on what they understood. At this stage, all items were found to be concise and clear; therefore, no further changes were made to the surveys.

Data Collection Procedure

For the study, 35 paragraphs were selected from those written during the midterm exam by students who took the ENGL 101 course in the fall semester of the 2018-2019 academic year.
The paragraphs were around 150 words long and on "The causes of sleeping disorders". The student names were removed from the papers, and the papers were uploaded to the NMM site after each was given a code number. Upon their consent, the student and instructor judges' names, surnames, and email information were entered into the NMM website, and each rater received an email from the website with a unique link so that they could start marking. No training was given to the instructors or the students, in order to see their bare performance, as is usual in CJ studies and as CJ requires no training at all (Jones & Wheadon, 2015). However, all judges (including the student ones) were given a short talk on the aims of the study and how its findings could help the language assessment field; their questions were answered afterwards. The prompt given to the raters during the comparisons was "Which paragraph do you think is better?". The surveys were given to the students and the teachers immediately after the completion of their judging tasks; 6 instructors and 82 students responded to the surveys.

Data Analysis

Descriptive statistics such as the total number of comparisons, the number of comparisons per judge, and the average time spent per judge and per comparison were taken from the NMM website. Similarly, Scale Separation Reliability (SSR) coefficients were taken from the NMM website. According to the NMM website, SSR is calculated from the standard deviation (SD) of the estimated scores and the root mean squared error (RMSE) of the estimates as SSR = (SD² − RMSE²) / SD². The scores obtained from NMM for each paper were compared to the original scores assigned to the papers by the course instructor (using an analytic rubric) during the academic term in which they were written. For this comparison, Pearson product-moment correlation coefficients were calculated (Jones & Alcock, 2014; Jones & Wheadon, 2015) using SPSS 22 (IBM Corp, 2013), and Spearman rank-order correlations (rho) were also calculated. The original scores were deliberately obtained from the marking of the course instructor alone, without the intervention of a second marker, in order to see what the result would be if CJ were used instead of the course instructor.

Results

When the comparisons were completed, the descriptive details obtained from the NMM website were put into a table to compare the rater groups along many different dimensions; Table 2 shows these descriptive statistics. As illustrated in Table 2, the number of judges varies between 10 and 30, and the number of comparisons per judge varies between 12 and 35. Moreover, it can also be seen in Table 2 that the total number of comparisons is nearly the same for each judge group except WC3. This was deliberately set up this way to see whether increasing the total number of comparisons would be more or less effective when student judges are involved; however, it didn't yield any positive or negative results apart from a larger SSR. An interesting piece of information in Table 2 is the average time spent on each comparison by the judges in each judge group. According to this information, the student judges mostly spent around 2 minutes on each comparison, except for SR1 and the instructors, who spent 4.17 and 5.20 minutes on average per comparison, respectively. It is interesting to note that these two groups spent nearly twice as much time as the other judge groups.
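A minimal sketch of these computations follows, with made-up score vectors standing in for one judge group's NMM estimates and the instructor's original rubric scores; the RMSE value is likewise hypothetical, and the SSR formula used is the reconstruction given above.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical data: CJ-estimated scores and rubric-based original scores
# for a handful of papers (the real study used 35 papers per judge group).
cj_scores = np.array([-1.2, -0.4, 0.1, 0.6, 1.3, 0.9, -0.8, 0.2])
rubric_scores = np.array([4.0, 6.0, 6.5, 7.0, 9.0, 8.5, 5.0, 6.0])
rmse_of_estimates = 0.35   # hypothetical RMSE of the CJ score estimates

# Scale separation reliability, following SSR = (SD^2 - RMSE^2) / SD^2:
sd = cj_scores.std(ddof=1)
ssr = (sd**2 - rmse_of_estimates**2) / sd**2
print(f"SSR = {ssr:.2f}")

# Agreement with the original rubric-based scores:
r, _ = pearsonr(cj_scores, rubric_scores)
rho, _ = spearmanr(cj_scores, rubric_scores)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")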
As mentioned earlier, the original scores assigned to each paper by the course instructors in the previous semester, when the task was set, were compared to the scores given by the student and instructor judges through the NMM website. The correlation coefficients obtained from this analysis can be found in Table 3. Note. ** Correlation is significant at the 0.01 level; * correlation is significant at the 0.05 level. As can be seen, the highest correlation with the original paper scores (r = .65, rho = .51) was obtained by the NMM scores from the instructor judges. The correlations obtained from the comparative judgements of the Skilled Rater judge groups (SR1: r = .43, rho = .31; SR2: r = .39, rho = .45) were relatively higher than those of the whole-class groups (WC1: r = .19, rho = .11; WC2: r = .19, rho = .21; WC3: r = .19, rho = .22). Moreover, there is a high correlation between the WC1, WC2, and WC3 scores and the SR2 scores for both r and rho; however, this practically means nothing, as these score distributions diverge from the original score distribution. It is also interesting to note that the SR1 and SR2 scores both correlate with the instructor scores (SR1: r = .44, rho = .51; SR2: r = .47, rho = .48) relatively more highly than the WC1, WC2, and WC3 scores do (WC1-2-3: r = .19; WC1: rho = .11; WC2: rho = .21; WC3: rho = .22). Another interesting piece of information in Table 3 is the SSR reliability coefficients obtained from each judge group. As can be seen, the instructors and the SR1 group got the highest reliability scores (SSR = .72 and .64, respectively). The SR2, WC1, and WC3 judge groups had similar reliability coefficients: .52, .53, and .58, respectively. However, it is interesting to note that although the SR2 scores had a higher correlation with the original scores, their reliability coefficient was .52, whereas the WC2 scores had a relatively lower correlation with the original scores but an SSR of .65. This may be because SSR is an internal consistency measure. WC3's getting a higher SSR than SR2 can be explained by their having more comparisons than the other judge groups. Additionally, all the judges in WC3 may have been marking equally badly, which can yield higher SSR rates; this may be misleading, as their scores may be arbitrary and inaccurate. Similarly, a judge group's SSR can be low simply because of a few judges who mark consistently differently, or with a large difference, from the rest of the group; this may be the case in the current situation as well.

Instructors' Perspective

Out of 10 instructors, six responded to the survey. The age of the instructors ranged between 36 and 55. The average language teaching experience of the instructors was 23 years, and the average university-level teaching experience was 21 years. The instructor responses to the 10 open-ended questions were analyzed qualitatively, and the findings are presented under themes in the following sections.

Preference for holistic or analytic scoring

Most of the instructors who participated in the study were found to be fans of holistic scoring. The participant instructors stated that they found holistic scoring more practical, time-saving, and easier. One of the instructors also stated that "s/he could see the whole picture after years of experience". This was an expected finding given the average years of experience (23 years) of the participants. Experienced teachers may not want to deal with the details of analytic rubrics and may want to grade holistically.
More importantly, teachers prefer holistic rubrics (). Another instructor stated that although s/he was a fan of holistic scoring, s/he believed that analytic scoring contributes more to the process of standardization; this was a response from a relatively younger member of the instructor judges.

Views on CJ as an alternative

All the instructors who responded to the open-ended questions indicated that they enjoyed their CJ experience. Only one stated that "it was hard to concentrate, and it was odd to see the same paper again and again". Other than that, the instructors described it as an "interesting experience" and found it "suitable for experienced teachers" like themselves. One also stated that "it was not so hard, and it went smoothly"; however, "marking papers on the computer screen was a sort of challenge" for one, which may be an expected outcome given the average age of the instructors. All in all, it can be concluded that the instructors enjoyed their CJ experience. Most of the instructors who responded to the open-ended questions endorsed the idea of CJ being a sound alternative to traditional rubric-based scoring. One of the instructors stated that this was a nice scoring method and that it was not necessary to deal with the "nitty-gritty details of overly detailed grading criteria". Another instructor found it useful and stated that "comparative judgements seem to be an effective tool to support rubric-based scoring"; s/he also stated that "comparative markings can be used as a reference point when reliability is affected because of delays and interruptions in traditional marking". Another one endorsed CJ as an alternative method but also stated that a second stage was necessary to decide what to assign to the best and the worst papers, calling for a second stage in which a further assessment criterion could be used. Last but not least, an instructor who endorsed CJ stated that it could be used as long as the instructors' expectations were similar. There was also some criticism of CJ. One of the instructors didn't find CJ useful, because s/he thought it was not practical and s/he experienced concentration problems. Another instructor stated that this method is feasible only when the instructors have adequate experience in marking writing. All in all, it may be concluded that although the instructors have some concerns regarding the practical uses of CJ, they see it as an alternative to traditional rubric-based scoring.

Challenges, advantages, and disadvantages of CJ

Some challenges were stated by the instructors concerning the difficulty of deciding which paper was better; the instructors stated that more guidance was needed. One stated that although s/he didn't have to, s/he "kept thinking about the different aspects of scoring like content, organization, language, and their percentages." Another instructor stated that s/he marked the papers at different times and believed marking all of them at once would have been better. A further challenge put forward was that some papers reappeared continuously, giving the judge the feeling of not being able to rate them accurately. The advantages of CJ are plentiful according to the instructors. The majority of the instructors think that CJ is practical and faster to use. One of the instructors defined it as "a healthier approach to marking" and "more enjoyable for the teachers".
Another instructor stated that CJ "helps eliminate the problem of fairness especially if you are assessing too many papers". Another instructor endorsed this view by saying "it looks fairer while comparing two different levels of paragraphs in terms of weak and strong students". One also stated that "it is easier and quicker than traditional marking". Another instructor described it as "time-saving". Only one instructor stated that s/he "can't think of any". All in all, it can be concluded that CJ was seen by the instructors as a fairer, more practical, more enjoyable, easier, and quicker way of marking student papers.

There were also some disadvantages stated by the instructors. One of the instructors stated that "you can't give feedback to students. In other words, they won't know their strengths and weaknesses". Another criticized the idea of comparing two student performances with each other by saying "We've always been taught that we shouldn't be comparing student work. This may be wrong". Another instructor, who assumed that the paper with the highest number of wins would receive the full score, stated that "the best paper may still lack some aspects and shouldn't be given full mark". Another based his/her criticism on the cut-off point and standardization by saying "I don't know how the cut-off point is determined. Which papers are below and above the threshold level? How can the judges be standardized?". These concerns indicate that although the instructors see CJ as advantageous in terms of marking time and effort, they put forward some disadvantages related to post-marking procedures. It is clear from their statements that the teachers have concerns about giving feedback to the students and justifying the assigned scores to them. As they do not know the technical calculations behind the system, their concern about the paper with the highest number of wins getting full marks can be set aside, because there is no such rule: the paper with the highest number of wins does not automatically get the full mark in CJ.

The time CJ requires
The instructors think that CJ is faster than rubric-based marking. For example, one of the instructors said: "traditional marking is more time consuming". Another instructor pointed out that "as long as you have at least one perfect paper or a perfect sample at hand already, it can be more practical". Another mentioned that "rubrics tend to take too much time". Similarly, two instructors stated that it "depends on the task and the criteria but it's time-consuming with a rubric as the details and specific parts will slow me down" and "I would spend more time with a rubric". These responses indicated that CJ was clearly seen as less time-consuming than traditional rubric-based marking.

The tasks CJ is suitable for
The instructors agreed that CJ can be used with short writing tasks. They stated that CJ was more suitable for "paragraphs", "interviews", "presentations", "very short, focused texts", and "short paragraphs". In addition, one of the instructors stated that "it is good for summative tasks". Moreover, the instructors agreed that CJ is not appropriate for "full essays", "longer texts", "research papers", or "more argumentative papers" because "there is too much to consider". One of the instructors stated that "this is not appropriate when giving feedback to students, especially in process writing".
These responses indicate that the instructors see CJ as a suitable marking tool for short pieces of student work. This is an expected outcome, as the comparison task becomes much more complicated as the length of the stimulus increases.

CJ and student judges
It is obvious from their responses that the instructors did not like the idea of asking students to mark student papers via CJ. However, although they were against it, two of them stated that they would endorse the idea "only if they were given the specifications and guidelines beforehand", and two other instructors stated that "I would be doubtful. They should be trained beforehand carefully" and "the students who mark can identify those who wrote them."

Students' Perspective
As mentioned earlier, the students were given a 7-item survey (n = 82) with five-point Likert-type items. The frequencies and percentages of their responses can be found in Table 4. According to the responses to the first survey item, it can be said that most of the students felt positive about their CJ experience (when the "I agree" and "I totally agree" responses are combined). Although around one third of the students were unsure about their experience, it is important to note that this was the item with the highest agreement rate among all items. The responses to this item may indicate that, just like the instructors, the students enjoyed their CJ experience.

By looking at the figures in Table 4, it can be said that around half of the students endorsed the idea that CJ can be used instead of rubric-based marking. It can also be said that although a large group of students agreed with the statement, one third of the students still disagreed with it (when the "I disagree" and "I totally disagree" responses are combined). This may indicate that the students are unsure about the use of CJ instead of rubric-based marking. Their concern may be about not being able to get feedback from their teachers, and they may also be unsure because they have little idea of the distinction between rubric-based marking and CJ in the first place.

According to the responses to the third item in the survey, it can be said that the majority of the students did not find the CJ task difficult. Only around one fourth of the students stated that completing the CJ task was difficult for them. This may indicate that the students, just like their instructors, did not find the CJ task particularly difficult.

According to the student responses to the fourth item, it can be said that most of the students felt that the time they had spent completing the CJ task was not long. However, it should not be ignored that around one third of the students thought it took them a long time to complete the task. It is interesting to note that this fourth item received the highest rate of unsure responses. This may be because students have no frame of reference for what counts as a long time in marking. Still, it is important to note that the students, just like their instructors, did not find the CJ task time-consuming.

According to the responses to the fifth item, around half of the students think that the CJ system has the potential to be used for marking papers in the future. Although around one third of the students disagreed, the responses obtained for this item indicated that the students, just like their instructors, endorsed the use of CJ to mark student papers in the future.
The responses to the sixth item in the survey indicate that around half of the students think that using CJ to mark a paragraph takes less time than using a rubric. This result indicates that the students, like the instructors, think that CJ takes less time than traditional rubric-based marking. The responses to the last item in the student survey demonstrate that around half of the students think that students can be used to mark student papers. This may mean that the students endorse the idea of having student judges mark papers in CJ, contrary to their instructors, who had some concerns about it.

Discussion
In this part, the results regarding the research questions will be discussed under themes. Moreover, the limitations of the study and suggestions for further research will be presented.

Feasibility of putting CJ into use
The first research question was "Is it feasible to use CJ to mark paragraphs written by English language learners?" Although this study cannot give a definitive answer to this question, the medium-level correlation coefficients between the original scores and the CJ scores indicate that CJ has the potential to be used by departments in place of classroom assessment if the judges are trained. In addition, CJ was liked by the instructors, probably due to its holistic nature. It is important to note that the instructors described CJ as a fairer, more practical, more enjoyable, easier, and quicker way of marking student papers. This finding conflicts with those of Bramley et al. and McGrane et al., in which the judges found the CJ task overwhelming and rather time-consuming. This conflict may be partly because the current participants mostly favored holistic scoring and partly due to their age and experience level.

Another important finding concerned the tasks for which CJ could be put to use in English language teaching. The participating instructors stated that CJ was more suitable for short and focused performance excerpts like "paragraphs", "interviews", and "presentations", and unsuitable for long pieces of writing performance like essays. When the average time spent by each instructor per judgement (4.17 mins) is considered, this is justifiable. If essays rather than paragraphs were marked through CJ, this time would be doubled or even tripled, as essays are longer and more complex pieces of writing performance. This may limit the use of CJ to paragraphs or other short pieces of written or oral performance, unless one has extraordinarily patient and focused raters.

The results of the instructor survey also indicated that there might be a need to train the instructors in CJ and to convince them that the method works, because it was obvious from the survey that the instructors did not know much about how CJ worked and how the student scores were calculated.

Some concerns over the implementation of CJ were identified. First of all, the instructors had concerns about justifying, and giving feedback on, the score assigned to a student paper (Jones & Wheadon, 2015), as such feedback can only be derived from analytic assessment. These may be the primary challenges to overcome before CJ can be put into use at an institution, because an important part of assessing writing is giving feedback to the students after marking and justifying the score assigned to the paper.
If this cannot be done, the instructors may come under pressure, and the reliability of the scores can seriously be questioned by the students. Sometimes even a student with 97 out of 100 points objects to her score, asking why she did not get 100. This may seriously increase the burden on the instructor and can damage the trust between the students and their instructors.

Another concern over the implementation of CJ is the number of instructors necessary to implement it at an institution. CJ necessitates a larger number of instructors than traditional rubric-based assessment does. Although CJ makes writing assessment faster, easier, and more enjoyable, it should be noted that only 35 papers were marked by ten instructors here. If the number of paragraphs were 350, this would obviously overburden 10 instructors. This is not an improbable scenario, considering that an institution with 10 instructors may easily have 200 students if each instructor teaches a single class of 20 students. This may mean that CJ has the potential to increase the burden on the instructors (a back-of-envelope illustration of this scaling is given at the end of this section).

Another concern may emerge if CJ is used for marking essays or longer pieces of student performance. Essays have multiple paragraphs, and many aspects must be considered while comparing two essays. This may slow down the comparison process and increase the decision time between pairs. Moreover, it may require extensive training of the instructors, because, if not instructed, they will probably focus on different qualities of the essays and may use different strategies to choose the better one. This would decrease the reliability of the judgements. Although increasing the minimum number of comparisons from 10 to 20, as suggested by Verhavert et al., can help to increase reliability, at least a common comparison strategy should be communicated to the raters before CJ is applied to essays. Otherwise, the task would be burdensome for the judges, as they might spend more time and effort to complete it.

The Feasibility of having Student Judges
The correlation coefficients obtained from WC1, WC2, and WC3 indicated that using student judges as a whole, without a selection criterion, may not be feasible. There may be some reasons for this. First of all, it was somewhat difficult to control the whole group of students while they were performing the comparisons, and some may not have taken the activity seriously. Lower reliability coefficients were already expected from novice assessors than from expert judges (Jones & Alcock, 2014). In order to reach higher levels of reliability, novice assessors might need to complete more comparisons ().

However, the correlation coefficients obtained from the SR1 and SR2 student judge groups revealed some promising results. As mentioned earlier, these judge groups consisted of students who had high paragraph writing scores and thus better paragraph writing skills. As expected, they could distinguish between a good and a bad paragraph better than the other students. The correlation coefficients obtained from their judgements may seem inadequate at first glance; however, they were promising, as these judges had no previous marking experience. Moreover, the descriptive statistics showed that they already spent as much time as the instructors on each pair of comparisons.
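As flagged above, the workload concern can be made concrete with a rough, illustrative calculation. Apart from the 4.17-minute average decision time reported in this study, the figures below are assumptions made for the sake of the sketch, not reported results.

# Back-of-envelope CJ workload estimate (illustrative assumptions only).
papers = 350                  # the scaled-up scenario discussed above
comparisons_per_paper = 10    # an often-suggested minimum per paper
minutes_per_judgement = 4.17  # average instructor decision time reported above
judges = 10

# Each judgement compares two papers, so the total number of judgements
# needed is papers * comparisons_per_paper / 2.
total_judgements = papers * comparisons_per_paper / 2
hours_per_judge = total_judgements * minutes_per_judgement / judges / 60

print(f"{total_judgements:.0f} judgements, ~{hours_per_judge:.1f} hours per judge")
# -> 1750 judgements, ~12.2 hours per judge

Under these assumptions, each of the 10 instructors would face roughly 12 hours of judging, which supports the point that CJ's speed per decision does not stop the total burden from growing linearly with the number of papers.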
It may be thought that much better results could have been obtained if short training had been given to the SR1 and SR2 groups, as it would have helped these students better understand why one paragraph was better than another. In this way, they would also benefit from the reduced cognitive demand that CJ places on expert raters (Liu & Li, 2012). This finding of the present study also concurs with the findings of Jones and Alcock (2014), in which student judges likewise performed close to expert judges. All in all, choosing students who are known to be high scorers in writing and who are aware of what a good paragraph is may be feasible in CJ and deserves to be investigated further.

The concerns of the instructors over the student judges center on students giving high scores to each other. However, they think so because they do not know how the scoring algorithms behind the comparisons work (a sketch of the kind of model typically used is given at the end of this section). It is not possible to favor a single paper in CJ unless all judges are reached and asked to favor that paper. In a class of 20 students whose papers are marked by the same student judges, this would not be possible, because no one would want to favor another person's paper so systematically when their own paper would get a lower score in return. The only thing a student judge could do would be to choose his or her own paper as the stronger one every time it appeared in a comparison. Even this would not be enough to increase that paper's score, though, unless at least half of the judges did the same for the same paper. Therefore, this is a concern with a very low probability and may be ignored.

As mentioned earlier, the last item in the student survey was about the feasibility of using students instead of the instructors to score student paragraphs through CJ. 45.1% of the students endorsed this statement. It should be noted that this score was obtained without informing the students in detail about how the CJ algorithm works and how the scores are calculated. The endorsement of this idea might have been higher if the students had been better informed and knew how the scores were calculated. In addition, it should be noted that this endorsement could increase if the students were trained and given more opportunities to practice marking through CJ. Therefore, the responses to this item could be taken as a clear endorsement of using students instead of instructors in marking paragraphs.

Student Perspectives on the Use of CJ
It is important to first note that the students mostly took the marking activity seriously, without any reward other than supporting scientific research; this is the desired principle in the design of similar studies. Moreover, the student survey indicated that the students mostly had positive impressions of their CJ experience. One indication of this is that 43.9% of the students stated that CJ could be used instead of rubric-based marking. It was also found that the students did not find their task in CJ difficult (59.7%) and that it did not take up much of their time (49.7%). When the average time spent on marking by the student judges is analyzed, it can be seen that the time spent by a student ranged from 24 minutes to 114 minutes. This may mean that some students could easily complete the whole task in less than 30 minutes. However, around one fifth (23.2%) of the students thought the task was difficult for them. This is understandable, as some students spent around four times the average on the task.
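As flagged above, much of the unease about CJ scoring comes from not knowing how pairwise wins are turned into scores. CJ platforms typically fit a Bradley-Terry-style model, in which every paper receives a latent quality estimate from all comparisons jointly, so a paper's score is not simply its win count. The following is a minimal sketch of that general technique with toy data; it illustrates the idea and is not the NMM website's actual implementation.

import numpy as np

# wins[i][j] = number of times paper i was judged better than paper j (toy data).
wins = np.array([
    [0, 3, 4],
    [1, 0, 3],
    [0, 1, 0],
], dtype=float)

n = wins.shape[0]
strength = np.ones(n)  # latent quality parameters, initialised equal

# Classic minorization-maximization updates for the Bradley-Terry model.
for _ in range(100):
    total_wins = wins.sum(axis=1)
    denom = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                games = wins[i, j] + wins[j, i]
                denom[i] += games / (strength[i] + strength[j])
    strength = total_wins / denom
    strength = strength / strength.sum()  # fix the arbitrary scale

print(np.round(strength, 3))  # relative quality estimates, not raw win counts

Reported scores are then usually a scaled transform of these latent estimates, which is why the paper with the most wins need not receive full marks and why a single judge favoring one paper has little effect on its final score.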
Given these longer completion times, it was to be expected that some of the students would have concerns in this regard. There are other indicators of student support for CJ in the responses to the student survey. 46.3% of the students stated that they saw potential in CJ for marking papers in the future, although around one third (30.5%) had concerns as well. This may be due to their not knowing how the algorithm behind CJ works and how their scores are calculated. Therefore, students should be informed about these issues in detail if CJ is to be implemented officially. Around half of the students (48.7%) thought that rubric-based marking would take a longer time. This finding is similar to the responses obtained from the teachers. It may indicate that although the students are not fully aware of how long marking papers with a rubric would take, they may have guessed, knowing how rubrics are structured. In addition, these students may have participated in peer feedback activities at the university previously, and they may know what it takes to use a rubric while marking a text. This may be why they were in favor of CJ. All in all, around one third of the students had concerns regarding the use of CJ in the marking of student papers, and there was around 50% endorsement of each statement in the survey. Therefore, it can be said that the students endorse the future use of CJ.

Conclusion
This research aimed to investigate the feasibility of using comparative judgement and of integrating students as judges into the CJ system in order to overcome the need for more judges as the number of papers to mark increases. It was found that the teachers and the students liked the idea of CJ and stated that they would like CJ to be used in the future. In addition, it was found that although the students in the present study could not reach correlation coefficients as high as the experts', as in previous studies in the field (Jones & Alcock, 2014; Jones & Wheadon, 2015), students with high paragraph writing scores may, if trained, be used for marking papers in CJ. It was also found that both the teachers and the students thought CJ could be used for marking paragraphs in English language teaching. However, the main concern of the instructors was that CJ might not be feasible for longer stimuli like essays; rather, it would be more feasible for short stimuli like paragraphs. Moreover, the instructors also had concerns about giving feedback to the students: they thought it would be difficult to justify the assigned scores and to give the students feedback on them. In addition, one fourth of the students who had concerns were "unsure" in many of the questions. This was an expected outcome, as CJ was a totally new technique to the students and was embraced with some reservations. All in all, it can be said that the use of CJ in marking paragraphs was endorsed by the instructors and the students who participated in the study, and that students who are high scorers and who are aware of what a good paragraph looks like may be used in CJ to mark student papers.

The instructors who participated in this study were highly experienced, which may be an advantage in CJ. Therefore, it should be noted that the findings of the study regarding the instructor perspectives could be biased due to the homogeneity of the participants' experience level in marking.
In addition, the original paragraph scores used for comparison with the CJ scores were obtained from a single rater (the classroom teacher) and were accepted as the true scores of the paragraphs. No double marking was carried out; this was deliberate, in order to see what the situation would be if CJ were used instead of rubric-based marking in that course. However, this may have introduced errors into the scoring of the paragraphs, and the correlations between the scores may therefore be misleading. The findings should be considered with this limitation in mind as well.

The findings of this study point to the need for further research. Firstly, the idea of using high-scoring students who are aware of what a good paragraph is to mark paragraphs via CJ should be investigated further. In the present study, the students were deliberately left untrained in order to observe their baseline performance; the key point in such a follow-up study would be to train the student judges before marking. Secondly, a study that includes less experienced instructors could be conducted to see whether their performance and perceptions are similar to those of the experienced instructors in this study. Last but not least, further studies investigating the feasibility of using CJ in essay marking would also be desirable.
import React, { Component, ReactNode } from 'react';
import { Formik, FormikProps } from 'formik';
import { GroupFormViewComponent } from 'app/components/presentational/group/details/form/view';
import { GroupInternal } from 'app/data/models/internal/group';
import { groupFormValidationSchema } from 'app/components/presentational/group/details/form/data';
import { i18n } from 'app/utilities/i18n';
import { ConfirmAlert } from 'app/components/presentational/generic/confirm-alert';

/**
 * Presentational component that handles the Formik wrapper component for the group form
 */
export class GroupFormComponent extends Component<GroupFormComponentInput & GroupFormComponentOutput> {

	private formikProps?: FormikProps<GroupInternal>;

	/**
	 * @override
	 */
	public componentDidUpdate(): void {

		const { sameNameConfirmationRequested, saveGroup } = this.props;

		// If we need to ask for same-name confirmation...
		if(sameNameConfirmationRequested && this.formikProps) {

			const title = i18n.t('group.common.alert.addSameName.title');
			const message = i18n.t('group.common.alert.addSameName.message');
			ConfirmAlert.alert(title, message, () => {

				if(this.formikProps) {

					saveGroup(this.formikProps.values, true);
				}
			});
		}
	}

	/**
	 * @override
	 */
	public render(): ReactNode {

		return (
			<Formik<GroupInternal>
				onSubmit={(result) => {
					this.props.saveGroup(result, false);
				}}
				initialValues={this.props.initialValues}
				validationSchema={groupFormValidationSchema}
				validateOnMount={true}>
				{(formikProps: FormikProps<GroupInternal>) => {

					this.formikProps = formikProps;

					return (
						<GroupFormViewComponent
							{...formikProps}
							saveRequested={this.props.saveRequested}
							notifyFormStatus={this.props.notifyFormStatus}
						/>
					);
				}}
			</Formik>
		);
	}
}

/**
 * GroupFormComponent's input props
 */
export type GroupFormComponentInput = {

	/**
	 * The initial group values for the form inputs
	 */
	initialValues: GroupInternal;

	/**
	 * If an external component requests the form submission. Triggers form validation and, if OK, its submission.
	 */
	saveRequested: boolean;

	/**
	 * If an external component requests confirmation to save the group even if there's already one with the same name
	 */
	sameNameConfirmationRequested: boolean;
}

/**
 * GroupFormComponent's output props
 */
export type GroupFormComponentOutput = {

	/**
	 * Callback to notify the current status of the form
	 * @param valid true if the form is valid, i.e. no validation error occurred
	 * @param dirty true if the form is dirty, i.e. one or more fields are different from initial values
	 */
	notifyFormStatus: (valid: boolean, dirty: boolean) => void;

	/**
	 * Callback to save the group, after form validation is successful
	 * @param group the group to be saved
	 * @param confirmSameName if the user confirmed to create a group with the same name as an existing one
	 */
	saveGroup: (group: GroupInternal, confirmSameName: boolean) => void;
}
Functional fibers having high mechanical strength, a high modulus of elasticity and high heat resistance, for example, para-aromatic polyamide fibers, are widely employed in industrial practice as reinforcing materials for resin composites containing, as a matrix, rubbers, epoxy resins or phenol resins. These functional fibers are disadvantageous in that their bonding property to the matrix resins is low, because the surfaces of the functional fibers exhibit a high smoothness and the polymeric materials from which the functional fibers are formed exhibit poor chemical activity; thus the reinforcing effect of the functional fibers is not as high as expected from the high mechanical strength, for example, the high tensile strength, of the functional fibers. It is known that spun yarns and stretch-broken fiber yarns produced from the functional fibers have an excellent bonding property to various types of matrix resins, due to an anchor effect provided by the fluffs present on or close to the peripheries of the yarns, and exhibit a good reinforcing effect on the matrix resins. However, the spun yarns and stretch-broken fiber yarns are formed from short-length fibers produced by cutting or stretch-breaking continuous filaments, and thus the resultant spun yarns and stretch-broken fiber yarns exhibit a significantly lower mechanical strength, for example, tensile strength, than that expected from the mechanical strength of the original filaments. Therefore, the spun yarns and the stretch-broken fiber yarns do not exhibit as high a reinforcing effect as that expected from the mechanical strength of the original filaments. For the purpose of solving the above-mentioned problems of the conventional reinforcing yarns, a method in which functional groups for enhancing the bonding property to various types of matrix resins are introduced into the chemical structures of the polymers forming the reinforcing fibers, and a method in which the stretch-breaking length of the stretch-broken fibers is increased to enhance the contribution of the mechanical strength of the original filaments to the mechanical strength of the resultant stretch-broken fiber yarn, have been provided. In the former method, however, the types of functional groups to be introduced must be varied in response to the types of the matrix resins, and this necessity causes not only the productivity of the reinforcing yarns to be degraded but also the cost of the reinforcing yarns to be increased. In the latter method, the contribution of the mechanical strength of the original filaments to that of the resultant stretch-broken fiber yarn can be enhanced. However, since the fibers forming the yarn are still short-length fibers prepared by stretch-breaking continuous filaments, a reinforcing effect as high as that expected from the mechanical strength (tensile strength) of the original filaments cannot be realized. Accordingly, a new type of reinforcing yarn in which the mechanical strength of a multifilament yarn is utilized with high efficiency, the bonding effect to the matrix resins is satisfactory, and the production cost is relatively low, is strongly desired.
Gray scale and power Doppler ultrasonography in evaluation of early rheumatoid arthritis.
INTRODUCTION Ultrasonography provides information regarding synovial membrane proliferation and its vascularization. AIM The aim of our study was to evaluate the role of gray scale and power Doppler ultrasonography in assessing early rheumatoid arthritis by analyzing the scores determined by the evaluation of synovial proliferation, joint effusion, erosion, and soft tissue swelling. MATERIAL AND METHODS The study was prospective, comprising 34 patients (31 women, 3 men) with a mean age of 45.68 years and clinical and biochemical changes of early rheumatoid arthritis. The wrist, II-V metacarpophalangeal, and proximal interphalangeal joints were evaluated bilaterally by dorsal and palmar scans. RESULTS The mean duration from the onset of symptoms was 3.46 months. Based on the clinical, biochemical, and US scores, the patients in our study presented with early stages of RA. Statistically significant correlations were also observed between the time elapsed from onset, the changes highlighted by ultrasound, and the stage of the disease (stage 0 or 1). CONCLUSIONS Our study confirms that US evaluation of changes in the joints of the hand offers useful information for staging the diagnosis of RA.
#include "EGame.h"
#include "CFourCC.h"
#include "Common/Serialization/IArchive.h"

#include <unordered_map>

TString GetGameName(EGame Game)
{
    static const TString skGameNames[int(EGame::Max)] =
    {
        "Metroid Prime Demo",
        "Metroid Prime",
        "Metroid Prime 2: Echoes Demo",
        "Metroid Prime 2: Echoes",
        "Metroid Prime 3: Corruption E3 2006 Prototype",
        "Metroid Prime 3: Corruption",
        "Donkey Kong Country Returns"
    };

    int GameIdx = (int) Game;
    return (GameIdx >= 0 && GameIdx < (int) EGame::Max) ? skGameNames[GameIdx] : "Unknown Game";
}

TString GetGameShortName(EGame Game)
{
    static const TString skGameNames[int(EGame::Max)] =
        { "MP1Demo", "MP1", "MP2Demo", "MP2", "MP3Proto", "MP3", "DKCR" };

    int GameIdx = (int) Game;
    return (GameIdx >= 0 && GameIdx < int(EGame::Max)) ? skGameNames[GameIdx] : "Unknown";
}

CFourCC GameTo4CC(EGame Game)
{
    static const CFourCC skGame4CCs[int(EGame::Max)] =
    {
        FOURCC('MP1D'), FOURCC('MPRM'),
        FOURCC('MP2D'), FOURCC('MP2E'),
        FOURCC('MP3P'), FOURCC('MP3C'),
        FOURCC('DKCR')
    };

    int GameIdx = (int) Game;
    return (GameIdx >= 0 && GameIdx < (int) EGame::Max) ? skGame4CCs[GameIdx] : FOURCC('UNKN');
}

EGame GameFrom4CC(CFourCC GameId)
{
    static const std::unordered_map<uint32, EGame> skIdToGame =
    {
        { FOURCC('MP1D'), EGame::PrimeDemo },
        { FOURCC('MPRM'), EGame::Prime },
        { FOURCC('MP2D'), EGame::EchoesDemo },
        { FOURCC('MP2E'), EGame::Echoes },
        { FOURCC('MP3P'), EGame::CorruptionProto },
        { FOURCC('MP3C'), EGame::Corruption },
        { FOURCC('DKCR'), EGame::DKCReturns }
    };

    auto MapFindIter = skIdToGame.find(GameId.ToLong());
    return (MapFindIter != skIdToGame.end() ? MapFindIter->second : EGame::Invalid);
}

void Serialize(IArchive& rArc, EGame& rGame)
{
    // We serialize EGame as a fourCC in binary formats as a future-proofing measure.
    // Additionally, older versions of IArchive always serialized EGame as a fourCC.
    if (rArc.ArchiveVersion() < IArchive::eArVer_GameEnumClass || rArc.IsBinaryFormat())
    {
        CFourCC GameId;

        if (rArc.IsWriter())
        {
            GameId = GameTo4CC(rGame);
        }

        rArc.SerializePrimitive(GameId, 0);

        if (rArc.IsReader())
        {
            rGame = GameFrom4CC(GameId);
        }
    }
    else
    {
        DefaultEnumSerialize<EGame>(rArc, rGame);
    }
}
The quality of epidural anesthesia is crucial in the assessment of perioperative outcome. experience with this technique. Data from patients with previously inserted thoracic epidural catheters undergoing cardiopulmonary bypass while fully heparinized suggest that epidural anesthesia can be safe even under these conditions. Finally, the coagulation tests in our donors confirm that this management is feasible, considering that epidural catheters were inserted exclusively in donors with normal pre-anesthesia coagulation tests and removed postoperatively while coagulation tests were within normal limits (see Figure 1). Obviously, there is no sound rationale to withhold an established therapy, epidural anesthesia, from living donors when it is considered worthwhile for non-donor patients undergoing liver resection. Needless to say, we are used to administering epidural bupivacaine with respect to hemodynamic stability and perfusion pressures. We congratulate the authors for not even being dependent on blood salvage techniques, but we have not seen these data published. However, considering that maximum safety of the donor is clearly the goal, we do not understand why these techniques are not used by Takaoka et al.
/**
 * Called periodically by the Job Actor to purge old resubmit records.
 *
 * @param currentTime the current time in milliseconds, used to decide which records have expired
 */
public void expireResubmitRecords(long currentTime) {
    Iterator<ResubmitRecord> it = resubmitRecords.values().iterator();
    while (it.hasNext()) {
        ResubmitRecord record = it.next();
        // A record has expired if its original submission time (resubmitAt minus the
        // delay it was given) is older than the configured expiry window.
        if (record.getResubmitAt() - record.getDelayedBy() < (currentTime - this.expireResubmitDelaySecs * 1000)) {
            it.remove();
        }
    }
}
import {Address, TxId} from "@ergolabs/ergo-sdk"

export type RefundParams = {
  txId: TxId // txId the operation request to refund was submitted in
  recipientAddress: Address
}
Prime Minister Benjamin Netanyahu and Defense Minister Ehud Barak are trading jabs over deteriorating relations with the US. Venezuelan President Hugo Chavez says if he were American he would vote for Barack Obama in the US presidential election. As both candidates debate financial relations with China, Mitt Romney's latest tax returns reveal substantial investments in several lucrative Chinese companies. While Romney benefits from Prime Minister Benjamin Netanyahu’s barely disguised support, backers in Israel worry his campaign is on the fritz. Romney and Ryan have invoked the specter of China to hammer home points about the US economy. But where does Romney really stand?
package com.example.smartsend.smartsendapp;

import android.app.IntentService;
import android.app.Notification;
import android.app.NotificationManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Color;
import android.media.RingtoneManager;
import android.net.Uri;
import android.os.Bundle;
import android.support.v7.app.NotificationCompat;
import android.util.Log;
import android.widget.Toast;

import com.google.android.gms.gcm.GcmReceiver;
import com.google.android.gms.gcm.GoogleCloudMessaging;

/**
 * Created by <NAME> on 1/6/2016.
 */
public class GCMMessageHandler extends IntentService {

    private NotificationManager mNotificationManager, cNotificationManager;
    NotificationCompat.Builder builder;

    public GCMMessageHandler() {
        super("GCMMessageHandler");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        Bundle extras = intent.getExtras();
        GoogleCloudMessaging gcm = GoogleCloudMessaging.getInstance(this);
        String messageType = gcm.getMessageType(intent);
        String actionIntent = intent.getAction();
        String notificationFor = extras.getString("notification_for");

        // Check if this is a GCM receive action. Note: string contents must be
        // compared with equals(), not ==, which only checks reference identity.
        if ("com.google.android.c2dm.intent.RECEIVE".equals(actionIntent)) {
            if (notificationFor.equals("rider")) {
                // Get data from server and show dialog
                String successMessage = extras.getString("success_message");
                String clientId = extras.getString("client_id");
                String outletId = extras.getString("outlet_id");
                String outletName = extras.getString("outlet_name");
                String outletType = extras.getString("outlet_type");
                String pickupDatetime = extras.getString("pickup_datetime");
                String deliverDatetime = extras.getString("deliver_datetime");
                String mobileNumber = extras.getString("mobile_number");
                String customerName = extras.getString("customer_name");
                String postalCode = extras.getString("postal_code");
                String address = extras.getString("address");
                String unitNumberFirst = extras.getString("unit_number_first");
                String unitNumberLast = extras.getString("unit_number_last");
                String foodCost = extras.getString("food_cost");
                String receiptNumber = extras.getString("receipt_number");

                // Getting current rider and available rider index
                int uniqueNotificationId = (int) (System.currentTimeMillis() & 0xfffffff);
                int totalRiderIndex = 0, currentRiderIndex = 0;
                try {
                    currentRiderIndex = Integer.parseInt(extras.getString("current_rider_index"));
                    totalRiderIndex = Integer.parseInt(extras.getString("total_rider_index"));
                } catch (Exception e) {
                }

                // Getting the available rider list
                int[] riderId = new int[totalRiderIndex];
                String[] riderName = new String[totalRiderIndex];
                String[] riderLat = new String[totalRiderIndex];
                String[] riderLng = new String[totalRiderIndex];
                String[] riderGCMRegId = new String[totalRiderIndex];
                try {
                    for (int i = 0; i < totalRiderIndex; i++) {
                        riderId[i] = Integer.parseInt(extras.getString("rider_id_" + i));
                        riderName[i] = extras.getString("rider_name_" + i);
                        riderLat[i] = extras.getString("rider_lat_" + i);
                        riderLng[i] = extras.getString("rider_lng_" + i);
                        riderGCMRegId[i] = extras.getString("rider_gcm_reg_id_" + i);
                    }
                } catch (Exception e) {
                }

                Toast.makeText(getApplicationContext(), "totalRiderIndex in Rec: " + totalRiderIndex, Toast.LENGTH_LONG).show();

                // Notification
                mNotificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
                PendingIntent contentIntent;
                Intent nIntent = new Intent(this, OrderDetailsForRiderActivity.class);
                nIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_SINGLE_TOP);

                // Bind intent data
                nIntent.putExtra("success_message", successMessage);
                nIntent.putExtra("client_id", clientId);
                nIntent.putExtra("outlet_id", outletId);
                nIntent.putExtra("outlet_name", outletName);
                nIntent.putExtra("outlet_type", outletType);
                nIntent.putExtra("pickup_datetime", pickupDatetime);
                nIntent.putExtra("deliver_datetime", deliverDatetime);
                nIntent.putExtra("mobile_number", mobileNumber);
                nIntent.putExtra("customer_name", customerName);
                nIntent.putExtra("postal_code", postalCode);
                nIntent.putExtra("address", address);
                nIntent.putExtra("unit_number_first", unitNumberFirst);
                nIntent.putExtra("unit_number_last", unitNumberLast);
                nIntent.putExtra("food_cost", foodCost);
                nIntent.putExtra("receipt_number", receiptNumber);
                nIntent.putExtra("current_rider_index", currentRiderIndex);
                nIntent.putExtra("total_rider_index", totalRiderIndex);
                nIntent.putExtra("unique_notification_id", uniqueNotificationId);

                // Binding rider data
                for (int i = 0; i < totalRiderIndex; i++) {
                    nIntent.putExtra("rider_id_" + i, riderId[i]);
                    nIntent.putExtra("rider_name_" + i, riderName[i]);
                    nIntent.putExtra("rider_lat_" + i, riderLat[i]);
                    nIntent.putExtra("rider_lng_" + i, riderLng[i]);
                    nIntent.putExtra("rider_gcm_reg_id_" + i, riderGCMRegId[i]);
                }

                contentIntent = PendingIntent.getActivity(this, uniqueNotificationId, nIntent, PendingIntent.FLAG_UPDATE_CURRENT);

                NotificationCompat.Builder mBuilder = (NotificationCompat.Builder) new NotificationCompat.Builder(this)
                        .setSmallIcon(R.drawable.icon_notification)
                        .setContentTitle("SmartSend Notification")
                        .setStyle(new NotificationCompat.BigTextStyle().bigText("New Order"))
                        .setContentText("New Order")
                        .setWhen(System.currentTimeMillis());

                // Set large icon
                Bitmap largeIcon = BitmapFactory.decodeResource(getResources(), R.drawable.icon_notification_white);
                mBuilder.setLargeIcon(largeIcon);

                // Set notification sound (the original passed Notification.DEFAULT_SOUND,
                // which is a defaults bitmask, not a ringtone type)
                Uri nSound = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
                mBuilder.setSound(nSound);

                mBuilder.setVibrate(new long[]{1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000});
                mBuilder.setLights(Color.RED, 3000, 3000);
                mBuilder.setContentIntent(contentIntent);

                mNotificationManager.notify(uniqueNotificationId, mBuilder.build());

            // Check if the notification is for the client
            } else if (notificationFor.equals("client")) {
                // Get data from server and show dialog
                String successMessage = extras.getString("success_message");
                String acceptedRiderId = extras.getString("rider_id");
                String acceptedRiderDeviceRegId = extras.getString("rider_device_reg_id");
                String acceptedRiderName = extras.getString("rider_name");
                String acceptedRiderContactNumber = extras.getString("rider_contact_number");
                String acceptedRiderProfilePicture = extras.getString("rider_profile_picture");

                int uniqueNotificationIdForClient = (int) (System.currentTimeMillis() & 0xfffffff);

                Toast.makeText(getApplicationContext(), "Accepted Rider ID : " + acceptedRiderId, Toast.LENGTH_LONG).show();

                // Notification
                cNotificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
                PendingIntent contentIntentForClient;
                Intent cIntent = new Intent(this, AcceptedOrderActivity.class);
                cIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_SINGLE_TOP);

                // Bind intent data
                cIntent.putExtra("success_message", successMessage);
                cIntent.putExtra("rider_id", acceptedRiderId);
                cIntent.putExtra("rider_name", acceptedRiderName);
                cIntent.putExtra("rider_contact_number", acceptedRiderContactNumber);
                cIntent.putExtra("rider_profile_picture", acceptedRiderProfilePicture);
                cIntent.putExtra("unique_notification_id_for_client", uniqueNotificationIdForClient);

                contentIntentForClient = PendingIntent.getActivity(this, uniqueNotificationIdForClient, cIntent, PendingIntent.FLAG_UPDATE_CURRENT);

                NotificationCompat.Builder cBuilder = (NotificationCompat.Builder) new NotificationCompat.Builder(this)
                        .setSmallIcon(R.drawable.icon_notification)
                        .setContentTitle("SmartSend Notification")
                        .setStyle(new NotificationCompat.BigTextStyle().bigText("Order Accepted"))
                        .setContentText("Your order has been accepted just now")
                        .setWhen(System.currentTimeMillis());

                // Set large icon
                Bitmap cLargeIcon = BitmapFactory.decodeResource(getResources(), R.drawable.icon_notification_white);
                cBuilder.setLargeIcon(cLargeIcon);

                // Set notification sound
                Uri cSound = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
                cBuilder.setSound(cSound);

                cBuilder.setVibrate(new long[]{1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000});
                cBuilder.setLights(Color.RED, 3000, 3000);
                cBuilder.setContentIntent(contentIntentForClient);

                cNotificationManager.notify(uniqueNotificationIdForClient, cBuilder.build());

            // Check if this is a failure notification (no rider available)
            } else if (notificationFor.equals("no_rider")) {
                String message = extras.getString("message");

                int uniqueNotificationIdForClient = (int) (System.currentTimeMillis() & 0xfffffff);

                Toast.makeText(getApplicationContext(), "Message : " + message, Toast.LENGTH_LONG).show();

                // Notification
                cNotificationManager = (NotificationManager) this.getSystemService(Context.NOTIFICATION_SERVICE);
                PendingIntent contentIntentForClient;
                Intent cIntent = new Intent(this, OrderFailedActivity.class);
                cIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_SINGLE_TOP);

                // Bind intent data
                cIntent.putExtra("message", message);
                cIntent.putExtra("unique_notification_id_for_client", uniqueNotificationIdForClient);

                contentIntentForClient = PendingIntent.getActivity(this, uniqueNotificationIdForClient, cIntent, PendingIntent.FLAG_UPDATE_CURRENT);

                NotificationCompat.Builder cBuilder = (NotificationCompat.Builder) new NotificationCompat.Builder(this)
                        .setSmallIcon(R.drawable.icon_notification)
                        .setContentTitle("SmartSend Notification")
                        .setStyle(new NotificationCompat.BigTextStyle().bigText("No rider found"))
                        .setContentText("All riders are busy now. Please try again.")
                        .setWhen(System.currentTimeMillis());

                // Set large icon
                Bitmap cLargeIcon = BitmapFactory.decodeResource(getResources(), R.drawable.icon_notification_white);
                cBuilder.setLargeIcon(cLargeIcon);

                // Set notification sound
                Uri cSound = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
                cBuilder.setSound(cSound);

                cBuilder.setVibrate(new long[]{1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000});
                cBuilder.setLights(Color.RED, 3000, 3000);
                cBuilder.setContentIntent(contentIntentForClient);

                cNotificationManager.notify(uniqueNotificationIdForClient, cBuilder.build());
            }

            GcmReceiver.completeWakefulIntent(intent);
        } // End of receive-action check
    }

    private void sendNotification(String msg) {
        Log.d("Notification To Rider", "Notification sent successfully.");
    }
}
/**
 * Factory method to create a class wrapping a new Bucketize operation.
 *
 * @param scope current scope
 * @param input A Tensor of any shape, containing int or float values.
 * @param boundaries A sorted list of floats giving the boundaries of the buckets.
 * @return a new instance of Bucketize
 */
@Endpoint(
    describeByClass = true
)
public static Bucketize create(Scope scope, Operand<? extends TNumber> input, List<Float> boundaries) {
  OperationBuilder opBuilder = scope.opBuilder(OP_NAME, "Bucketize");
  opBuilder.addInput(input.asOutput());
  // Unbox the boundary list into the primitive float[] attribute the op expects.
  float[] boundariesArray = new float[boundaries.size()];
  for (int i = 0 ; i < boundariesArray.length ; i++) {
    boundariesArray[i] = boundaries.get(i);
  }
  opBuilder.setAttr("boundaries", boundariesArray);
  return new Bucketize(opBuilder.build());
}
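For readers unfamiliar with the op this factory wraps, the bucketization rule itself can be sketched independently of TensorFlow. The following illustrative snippet mimics the usual semantics (each value is mapped to the index of the first boundary that exceeds it) rather than calling the real kernel; it is a sketch of the rule, not the library's implementation.

import bisect

def bucketize(values, boundaries):
    """Mimic a Bucketize op: map each value to the index of its bucket.

    boundaries must be sorted. A value below the first boundary falls into
    bucket 0, a value in [boundaries[i-1], boundaries[i]) into bucket i, and
    a value >= the last boundary into bucket len(boundaries).
    """
    return [bisect.bisect_right(boundaries, v) for v in values]

print(bucketize([-5, 0.5, 10, 100], [0, 10, 100]))  # -> [0, 1, 2, 3]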
The Hawai‘i County Department of Parks and Recreation announces that Kahalu`u Beach Park in Kailua-Kona is closed until further notice due to plumbing issues. The Department of Parks & Recreation apologizes for any inconvenience the closure may cause, and thanks the public for their understanding. For more information please contact Charmaine Kamaka at 961-8311. The Hawaiʻi Police Department reminds firearms owners that after obtaining the proper permit to purchase a firearm (both pistol and long gun) and after the firearm has been purchased, it is the owner’s responsibility to register it within five days. Likewise, a firearm brought into the State of Hawaiʻi must be registered within five days. The 61g flow was still active, with lava entering the ocean near Kamokuna and surface breakouts downslope of Pu‘u ‘Ō‘ō and on the coastal plain about 730 m (about 0.5 mi) inland of the ocean. A remote sensing instrument that can determine the temperature of distant objects based on their incandescent color is called an optical pyrometer. “Remote sensing” refers to the use of imaging technology that acts as extensions of our eyes. Due to damage done to the pathway (trail) by fallen trees, the pathway at Akaka Falls on Hawaii Island will be temporarily closed today and be re-opened tomorrow Friday (Feb 24). Average retail gasoline prices in Hawaii have risen 1.9 cents per gallon in the past week, averaging $3.11/g yesterday, according to GasBuddy’s daily survey of 355 gas outlets in Hawaii. This compares with the national average that has fallen 0.3 cents per gallon in the last week to $2.27/g, according to gasoline price website GasBuddy.com.
Republican presidential candidate asks interviewer to ‘show me somebody who is 100% accurate’ in accounts of decades-old events The Republican presidential candidate Ben Carson sought on Sunday to brush away growing questions about the accuracy of his autobiographical statements, implying it was not possible to be “100% accurate” when recalling events from recent history and instead blaming journalists for what he called a “political hit job”. Carson, a retired pediatric neurosurgeon who is neck and neck with Donald Trump at the top of the polls, has experienced a torrid week as numerous inconsistencies in his 1992 autobiography, Gifted Hands, have been uncovered by reporters. Asked on ABC if he believed he needed to be more precise in documenting his past, Carson said: “Show me somebody … who is 100% accurate in everything that they say happened 40 or 50 years ago. Please show me that person, because I will sit at their knees and I will learn from them.” The appearance followed revelations published by the Wall Street Journal on Saturday and relating to Carson’s time at Yale University, where he has claimed to have been recognised by a psychology professor as “the most honest student in class”. Carson wrote in Gifted Hands that the professor, who taught a class called Perceptions 301, had pulled a hoax on his 150 students by pretending they all had to re-sit a final exam because their papers had “inadvertently burned”. According to Carson, every student apart from him refused to retake the exam. Once the prank was revealed, he wrote, the professor presented Carson with $10 as a reward for his honesty, and an article about the incident was published in the Yale Daily News. According to the Wall Street Journal, no such article appears in the archives of the campus newspaper and Yale has no record of any such class being taught. Carson rejected this on Sunday, saying he had a copy of the newspaper article he planned to publish in the coming days. He did, however, concede that the name of the class may have been recorded incorrectly. “I wonder why, with all their investigative abilities, [the Wall Street Journal] can’t find it,” Carson said of the article. Carson also disputed the Journal’s reporting of an incident relating to his time in high school in Detroit, in which he has claimed to have offered shelter to a group of white students inside the school’s biology lab during race riots following the death of Martin Luther King in 1968. The Journal interviewed six of Carson’s fellow students at Southwestern high, along with his physics teacher from the time. None could remember being offered sanctuary in the lab, although they did recall the riots. Carson told NBC on Sunday none of the students interviewed had been those offered sanctuary. “Why would they know about that, unless they were one of those students?” he asked, adding that he believed others might come forward to confirm the story. The Journal’s report was the latest in a damaging set of revelations, tied to Carson’s past, which have allowed political rivals to question his integrity. He told NBC such media attention was applied to him because he represented “a threat to the secular progressive movement”. On Sunday, Trump argued that Carson was “going to have to explain a lot of things away”, relating to his claim to have been offered a scholarship to attend the United States Military Academy at West Point, New York.
Carson was forced on to the defensive on Friday, after a report was published by the website Politico pointing out that West Point does not offer scholarships per se. In Gifted Hands, Carson wrote that he was offered a scholarship to West Point by General William Westmoreland in 1969, following an impressive performance in a reserve officers training corps programme run by his high school. On ABC, Carson dismissed Trump’s comments and maintained he had been offered the scholarship but had ended up not applying for entry. “None of the things are lies,” he said. “What does it say about people who immediately jump on the bandwagon if they hear something bad? Rather than waiting and finding out what the truth is.” Carson’s personal journey, from poverty in Detroit to prominence as a pioneering neurosurgeon and recipient of the presidential medal of freedom, forms the backbone of his campaign as a candidate with no formal political record to draw on. Asked if any of the revelations this week had affected his campaign plans, Carson said: “Our campaign is the same: We tell the truth, we deal with the issues and I’m not a politician. “You’re not going to find me acting like a politician. I don’t do that.”
/* * Copyright 2020-2021 the original author or authors. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * https://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.springframework.data.jdbc.core.convert; import static org.assertj.core.api.Assertions.*; import static org.assertj.core.api.SoftAssertions.*; import static org.mockito.Mockito.*; import lombok.Data; import java.sql.Array; import java.sql.Timestamp; import java.time.Instant; import java.time.LocalDate; import java.time.LocalDateTime; import java.time.LocalTime; import java.time.OffsetDateTime; import java.time.ZoneOffset; import java.time.ZonedDateTime; import java.util.Date; import java.util.List; import java.util.UUID; import org.assertj.core.api.SoftAssertions; import org.junit.jupiter.api.Test; import org.springframework.data.annotation.Id; import org.springframework.data.jdbc.core.mapping.AggregateReference; import org.springframework.data.jdbc.core.mapping.JdbcMappingContext; import org.springframework.data.jdbc.core.mapping.JdbcValue; import org.springframework.data.jdbc.support.JdbcUtil; import org.springframework.data.relational.core.mapping.RelationalPersistentEntity; import org.springframework.data.relational.core.mapping.RelationalPersistentProperty; import org.springframework.data.relational.core.sql.IdentifierProcessing; import org.springframework.data.util.ClassTypeInformation; /** * Unit tests for {@link BasicJdbcConverter}. 
* * @author <NAME> */ public class BasicJdbcConverterUnitTests { JdbcMappingContext context = new JdbcMappingContext(); StubbedJdbcTypeFactory typeFactory = new StubbedJdbcTypeFactory(); BasicJdbcConverter converter = new BasicJdbcConverter( // context, // (identifier, path) -> { throw new UnsupportedOperationException(); }, // new JdbcCustomConversions(), // typeFactory, IdentifierProcessing.ANSI // ); @Test // DATAJDBC-104, DATAJDBC-1384 public void testTargetTypesForPropertyType() { RelationalPersistentEntity<?> entity = context.getRequiredPersistentEntity(DummyEntity.class); SoftAssertions softly = new SoftAssertions(); checkTargetType(softly, entity, "someEnum", String.class); checkTargetType(softly, entity, "localDateTime", LocalDateTime.class); checkTargetType(softly, entity, "localDate", Timestamp.class); checkTargetType(softly, entity, "localTime", Timestamp.class); checkTargetType(softly, entity, "zonedDateTime", String.class); checkTargetType(softly, entity, "offsetDateTime", OffsetDateTime.class); checkTargetType(softly, entity, "instant", Timestamp.class); checkTargetType(softly, entity, "date", Date.class); checkTargetType(softly, entity, "timestamp", Timestamp.class); checkTargetType(softly, entity, "uuid", UUID.class); softly.assertAll(); } @Test // DATAJDBC-259 public void classificationOfCollectionLikeProperties() { RelationalPersistentEntity<?> entity = context.getRequiredPersistentEntity(DummyEntity.class); RelationalPersistentProperty listOfString = entity.getRequiredPersistentProperty("listOfString"); RelationalPersistentProperty arrayOfString = entity.getRequiredPersistentProperty("arrayOfString"); SoftAssertions softly = new SoftAssertions(); softly.assertThat(converter.getColumnType(arrayOfString)).isEqualTo(String[].class); softly.assertThat(converter.getColumnType(listOfString)).isEqualTo(String[].class); softly.assertAll(); } @Test // DATAJDBC-221 public void referencesAreNotEntitiesAndGetStoredAsTheirId() { RelationalPersistentEntity<?> entity = context.getRequiredPersistentEntity(DummyEntity.class); SoftAssertions softly = new SoftAssertions(); RelationalPersistentProperty reference = entity.getRequiredPersistentProperty("reference"); softly.assertThat(reference.isEntity()).isFalse(); softly.assertThat(converter.getColumnType(reference)).isEqualTo(Long.class); softly.assertAll(); } @Test // DATAJDBC-637 void conversionOfDateLikeValueAndBackYieldsOriginalValue() { RelationalPersistentEntity<?> persistentEntity = context.getRequiredPersistentEntity(DummyEntity.class); assertSoftly(softly -> { LocalDateTime testLocalDateTime = LocalDateTime.of(2001, 2, 3, 4, 5, 6, 123456789); checkConversionToTimestampAndBack(softly, persistentEntity, "localDateTime", testLocalDateTime); checkConversionToTimestampAndBack(softly, persistentEntity, "localDate", LocalDate.of(2001, 2, 3)); checkConversionToTimestampAndBack(softly, persistentEntity, "localTime", LocalTime.of(1, 2, 3, 123456789)); checkConversionToTimestampAndBack(softly, persistentEntity, "instant", testLocalDateTime.toInstant(ZoneOffset.UTC)); }); } @Test // GH-945 void conversionOfPrimitiveArrays() { int[] ints = { 1, 2, 3, 4, 5 }; JdbcValue converted = converter.writeJdbcValue(ints, ints.getClass(), JdbcUtil.sqlTypeFor(ints.getClass())); assertThat(converted.getValue()).isInstanceOf(Array.class); assertThat(typeFactory.arraySource).containsExactly(1, 2, 3, 4, 5); } private void checkConversionToTimestampAndBack(SoftAssertions softly, RelationalPersistentEntity<?> persistentEntity, String propertyName, Object value) 
{ RelationalPersistentProperty property = persistentEntity.getRequiredPersistentProperty(propertyName); Object converted = converter.writeValue(value, ClassTypeInformation.from(converter.getColumnType(property))); Object convertedBack = converter.readValue(converted, property.getTypeInformation()); softly.assertThat(convertedBack).describedAs(propertyName).isEqualTo(value); } private void checkTargetType(SoftAssertions softly, RelationalPersistentEntity<?> persistentEntity, String propertyName, Class<?> expected) { RelationalPersistentProperty property = persistentEntity.getRequiredPersistentProperty(propertyName); softly.assertThat(converter.getColumnType(property)).describedAs(propertyName).isEqualTo(expected); } @Data @SuppressWarnings("unused") private static class DummyEntity { @Id private final Long id; private final SomeEnum someEnum; private final LocalDateTime localDateTime; private final LocalDate localDate; private final LocalTime localTime; private final ZonedDateTime zonedDateTime; private final OffsetDateTime offsetDateTime; private final Instant instant; private final Date date; private final Timestamp timestamp; private final AggregateReference<DummyEntity, Long> reference; private final UUID uuid; // DATAJDBC-259 private final List<String> listOfString; private final String[] arrayOfString; private final List<OtherEntity> listOfEntity; private final OtherEntity[] arrayOfEntity; } @SuppressWarnings("unused") private enum SomeEnum { ALPHA } @SuppressWarnings("unused") private static class OtherEntity {} private static class StubbedJdbcTypeFactory implements JdbcTypeFactory { public Object[] arraySource; @Override public Array createArray(Object[] value) { arraySource = value; return mock(Array.class); } } }
package me.Allt.Tracker;

import org.bukkit.entity.Player;

/** Pairs a hunter with the runner they are currently tracking. */
public class RunnerInstance {

    private Player runner;
    private int runnerID;
    private final Player hunter;

    RunnerInstance(Player hunter) {
        this.hunter = hunter;
        this.runnerID = 0;
    }

    public void setRunner(Player runner) {
        this.runner = runner;
    }

    public void setRunnerID(int id) {
        this.runnerID = id;
    }

    public Player getRunner() {
        return runner;
    }

    public Player getHunter() {
        return hunter;
    }

    public int getRunnerID() {
        return runnerID;
    }
}
import numpy as np

def neighbours(self, x, radius):
    """Return the indices of all stored nodes whose state lies within `radius` of x."""
    distances = self.state_distance(self.node_state, x)
    in_sphere_flag = distances <= radius
    indices = np.arange(self.node_state.shape[0])[in_sphere_flag]
    return list(indices)
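For illustration, a minimal, self-contained usage sketch; the container class and its attributes are hypothetical stand-ins, assuming state_distance returns per-node Euclidean distances and node_state is an (N, d) array:

import numpy as np

class NodeSet:
    # Hypothetical container mirroring the attributes the method expects.
    def __init__(self, states):
        self.node_state = np.asarray(states, dtype=float)  # shape (N, d)

    def state_distance(self, states, x):
        # Euclidean distance from each stored state to the query state x.
        return np.linalg.norm(states - x, axis=1)

    neighbours = neighbours  # reuse the function defined above as a method

nodes = NodeSet([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
print(nodes.neighbours(np.array([0.0, 0.0]), radius=1.5))  # -> [0, 1]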
Owen Dyer, Montreal Charles Denham, one of the best-known patient safety advocates in the United States and a former editor of the Journal of Patient Safety, has paid $1m (£0.7m; €0.9m) to settle US government civil allegations that he solicited and accepted kickbacks to influence infection prevention guidelines in a way that favored his sponsor’s product. While serving as co-chair of the National Quality Forum’s Safe Practices Committee, Denham was also receiving payments through his consultancy company Health Care Concepts Inc and his research organization Texas Medical Institute of Technology, both of which are parties to the settlement.1 He did not declare these payments to the committee, the government alleged. Minutes of committee meetings show how Denham used his position to advocate for a recommendation specifying 2% chlorhexidine topical formulations in the prevention of surgical site infections.2 This strength of chlorhexidine was found only in the ChloraPrep products made by CareFusion, whose parent company, Cardinal Health, was paying Denham’s companies. These payments totaled $11.6m and, while they ostensibly covered contracted services, far exceeded market value and violated the federal Anti-Kickback Statute and the False Claims Act, the government alleged. “Quality and patient safety must drive medical recommendations,” said Daniel Levinson, inspector general of the US Department of Health and Human Services. “Doctors that put profits ahead of this core value must be held accountable. Dr Denham and his two businesses will be excluded from Medicare, Medicaid, and all federal health programs as part of this settlement.” The settlement does not include an admission of liability. Denham, who trained as a radiation oncologist, was not a practicing physician but made himself indispensable to the patient safety movement with his connections, energy, and seemingly limitless resources. The National Quality Forum, whose safety guidelines carry great weight in US hospitals, received $725 000 in donations from Denham’s institute together with extensive administrative and research services provided free by Denham’s staff to support the work of the Safe Practices Committee. Denham told the committee about an upcoming study in a major journal that would show the superiority of chlorhexidine and alcohol solutions in preventing surgical site infections. That study, which appeared five months later in the New England Journal of Medicine,3 was funded by CareFusion, and a company medical director was one of its coauthors. “The specifics,” Denham told the committee, “were the 2%.” A draft of the guidelines recommended 2% chlorhexidine solutions for surgical site infection prevention, but this triggered a complaint from CareFusion’s competitor 3M and was removed after review. A guideline for preventing central line associated bloodstream infections recommended chlorhexidine generically in the draft but was altered, without the committee’s approval, to specify 2% chlorhexidine in the final 2010 guidelines. This insertion “is likely to reflect improper commercial influence,” a committee member, Patrick Romano, told an investigation by the non-profit news organization ProPublica. Romano, a professor at the University of California’s Davis School of Medicine, told how suspicion of Denham’s motives first arose in 2009. “It was all a bit of a mystery to us that Chuck Denham was so generous with his time and his staff time to support this process,” he said.
Eventually National Quality Forum staff became alarmed by his “inordinate interest” in recommending a particular strength of chlorhexidine and severed ties in March 2010. But Denham remained a major figure in the patient safety movement, becoming editor of the Journal of Patient Safety in 2011. The journal’s current editors, reviewing Denham’s time there in a recent issue, wrote that it remained a mystery how a man with such sparse academic credentials had been chosen for the post.4 In January 2014 the Department of Justice announced that CareFusion had paid $40m to settle kickback allegations and named Denham as a recipient of those kickbacks. The suit settled by CareFusion also accused Rabih Darouiche, the lead author of the New England Journal of Medicine paper cited by Denham, of taking and distributing kickbacks from CareFusion.5 Denham and Darouiche had co-presented webinars marketing ChloraPrep as a product endorsed by the National Quality Forum. Denham, who is based in Laguna Beach, California, has declined media comment. Darouiche, of Baylor College of Medicine in Houston, Texas, did not respond to a BMJ request for comment. Julia Agresto of the New England Journal of Medicine told The BMJ, “After we were made aware of the allegations in the legal complaint, we asked Dr Rabih Darouiche to provide evidence of institutional review board [IRB] approval and patient consent, and we asked if there were any undisclosed conflicts of interest. We received adequate documentation of IRB approval and patient consent and received no evidence of undisclosed conflicts of interest. The authors disclosed relevant financial relationships at the time of submission, and they are listed at the end of the published article. We are satisfied that these matters were handled appropriately.” The journal declined to comment on how Denham might have known of the study’s results five months before publication.
// File: First Year (2012-2013)/AP2/TP5/SavingsAccount.java
package bankap;

public class SavingsAccount extends BankAccount {

    // Attributes
    private double interestRate; // Interest rate, in percent

    // Constructor
    public SavingsAccount(int accountNumber, double rate) {
        super(accountNumber);
        this.interestRate = rate;
    }

    // Other methods

    /** Applies one month of interest and returns the updated balance. */
    public double monthBalance() {
        this.addInterest();
        return this.balance;
    }

    /** Adds one month of interest to the balance. */
    public void addInterest() {
        this.balance = this.balance + (this.balance * (this.interestRate / 100));
    }
}
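For a quick sanity check of the compound-interest arithmetic, here is a standalone sketch in Python (the starting balance and rate are made-up values, not part of the Java exercise):

balance, rate = 1000.0, 1.5              # $1000 at 1.5% interest per month
for month in range(3):
    balance += balance * (rate / 100)    # same update as addInterest()
print(round(balance, 2))                 # 1045.68 after three months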
1. Field of the Invention
The present invention relates to a liquid crystal display (LCD) device, and more particularly, to an LCD device having a polarizing plate for improving luminance and preventing the backlight Mura phenomenon.
2. Discussion of the Related Art
With the development of the information society, the demand for various display devices has increased dramatically. Accordingly, much effort has been expended to research and develop various flat display devices, such as liquid crystal display (LCD), plasma display panel (PDP), electroluminescent display (ELD), and vacuum fluorescent display (VFD) devices. Some of these flat display devices are already used as displays in various equipment. Among the various flat display devices, the liquid crystal display (LCD) device has been most widely used due to its numerous advantages. LCD devices are thin, lightweight, and have a relatively low power consumption compared with other types of displays, most notably cathode ray tubes (CRTs). This allows the LCD to substitute for the CRT in most devices. In addition to mobile applications, such as displays for notebook computers or personal data assistants (PDAs), LCD devices have been developed for stationary electronic devices, such as computer monitors and televisions, that receive and display broadcast signals. Despite these technical developments and the application of LCD technology in different fields, research into enhancing the picture quality of the LCD device has in some respects been lacking compared to research into its other features and advantages. In order to use the LCD device in various fields as a general display, the key to its development lies in whether it can implement a high-quality picture, with high resolution and high luminance on a large-sized screen, while still maintaining light weight, thinness, and low power consumption. The LCD device includes an LCD panel for displaying a picture image, and a driving part for applying a driving signal to the LCD panel. The LCD panel includes lower and upper glass substrates bonded to each other at a predetermined interval, and a liquid crystal layer injected between the lower and upper glass substrates. The liquid crystal layer is driven by an electric field between the lower and upper substrates, thereby controlling light transmittance. As a result, the picture image is displayed on the LCD panel. Hereinafter, a related art LCD device will be described with reference to the accompanying drawings. FIG. 1 is a cross-sectional view illustrating the related art LCD device. As shown in FIG. 1, a lower substrate 10 of the related art LCD device includes a plurality of pixel regions (not shown) arranged in a matrix, with a thin film transistor (not shown) and a pixel electrode 11 formed in each pixel region. An upper substrate 1 includes a color filter layer 3 for displaying various colors, and a common electrode 5. A liquid crystal layer 13 is formed between the lower and upper substrates 10 and 1. First and second polarizing plates 14a and 14b, which linearly polarize visible light, are respectively formed on the upper substrate 1 and under the lower substrate 10, and a backlight unit 15 is formed under the second polarizing plate 14b.
Although not shown, a plurality of gate lines are formed on the lower substrate (TFT array substrate) 10 at fixed intervals, and a plurality of data lines are formed perpendicular to the gate lines at fixed intervals, thereby defining the plurality of pixel regions. The plurality of pixel electrodes 11 are respectively formed in the pixel regions in a matrix arrangement, and the plurality of thin film transistors are switched in response to signals on the respective gate lines to transmit signals from the respective data lines to the respective pixel electrodes 11. A first alignment layer 12 is then formed to determine the alignment direction of the liquid crystal. The upper substrate (color filter substrate) 1 includes a black matrix layer 2 for excluding light from portions of the lower substrate except the pixel regions, a red/green/blue color filter layer 3 for displaying the various colors, the common electrode 5 on an entire surface of the upper substrate 1 for obtaining a picture image, and a second alignment layer 6 on the common electrode 5 for determining the alignment direction of the liquid crystal. An overcoat layer 4 protects the color filter layer 3 and flattens the upper substrate 1. FIG. 2 is a cross-sectional view taken along line I-I of FIG. 1, which illustrates the cross-sectional structure of the second polarizing plate 14b. Referring to FIG. 2, the second polarizing plate 14b sequentially includes a first adhesive layer 20, a first passivation layer 21, a polarizer 22, a second passivation layer 23, a second adhesive layer 24, a λ/4 phase shift plate 25, a third adhesive layer 26, a cholesteric liquid crystal (CLC) layer 27, and a third passivation layer 28. An upper surface of the first adhesive layer 20 is in contact with the lower substrate 10, and a lower surface of the third passivation layer 28 is in contact with the backlight unit 15. The first, second, and third passivation layers 21, 23, and 28 are formed of tri-acetyl cellulose (TAC). In the LCD device having the aforementioned structure, in order to obtain the thinness and lightness required of an LCD module for a notebook PC, the light-scattering means formed on a light-guiding plate of the backlight unit 15 is formed of three sheets. The light-scattering means receives the light emitted from the backlight and uniformly scatters the received light over the entire surface of the LCD panel. Generally, the light-scattering means is composed of four sheets: a lower light-diffusion plate, first and second prism sheets, and an upper light-diffusion plate. Recently, a three-sheet light-scattering means has been adopted, in which the upper light-diffusion plate is removed to decrease the thickness of the LCD device. The light-scattering means is thus formed from a lower light-diffusion plate 15a and first and second prism sheets 15b. However, as compared to the light-scattering means using four sheets, the light-scattering means using only three sheets has problems with a backlight Mura phenomenon (also referred to as Newton's rings or wet-out). In the backlight Mura phenomenon, rainbow-like spots are generated on the screen when two glass substrates come into contact with each other, thereby generating spots on the screen while the picture image is displayed. As a result, the picture image is not smoothly displayed on the screen. A key quality requirement for photomasks used in fabricating the display is the absence of Mura.
Mura is caused by systematic deviations in the photomask and can be visible as stripes. Mura compromises the image quality of the finished display. Usually the deviations causing the Mura are very small, below a few hundred nanometers. While deviations of that size spread over a large area can be difficult to detect by measurement, the human eye can still see them because of its high sensitivity to systematic changes in gray scale. Laser repairs are often performed to correct such deviations; however, such repairs are difficult, time consuming, and costly, as they require specialized equipment. In order to solve the problem of the backlight Mura phenomenon in the related art LCD device shown in FIG. 2, a diffusion process is performed on the polarizing plate by adding beads to the third adhesive layer 26 between the CLC layer 27 and the λ/4 phase shift plate 25. This diffusion process decreases the backlight Mura phenomenon. However, the luminance of the resulting LCD is lower than that of an LCD having a polarizing plate on which the diffusion process is not performed. Also, if laser repair is to be performed, it is extremely difficult to focus on the layer of the LCD panel to be repaired when viewing the lower substrate 10 under the microscope, because of the large density of beads added to the third adhesive layer 26 for the diffusion process. Specifically, haze, one of the characteristics of a polarizing plate, indicates the light-scattering intensity of transmitted and reflected light. If the haze value is small, the brightness of the screen changes greatly at portions such as the black matrix layer (i.e., the area excluding the light), so that it is hard to display the picture image smoothly. Meanwhile, if the haze value is large, the resolution deteriorates. If the diffusion process is not performed on the polarizing plate, the haze of the standard polarizing plate is about 0% and the luminance is about 30%. With these values, the polarizing plate is susceptible to the backlight Mura phenomenon. However, if the conventional diffusion process is performed on the third adhesive layer, the haze of the polarizing plate is about 80% and the luminance is only about 20%. While such a haze prevents the backlight Mura phenomenon, this is an unacceptable decrease in luminance. In practice, a haze of at least 40% permits sufficient scattering to reduce the backlight Mura phenomenon to a negligible amount.
Power Management System With Integrated Maximum Power Extraction Algorithm for Microbial Fuel Cells Microbial fuel cells (MFCs) are alternative renewable power sources that can directly produce electricity from biodegradable substances. However, due to their low power and voltage production, power management systems (PMS) are required to process the MFC power to a more readily usable level. For this application, a monolithic PMS with an integrated maximum power extraction algorithm (MPEA) is proposed. The MPEA allows quick and accurate pin-pointing of the matching conditions for maximum power transfer from the MFC to the PMS. From the low, fluctuating voltages produced by the MFC at the maximum power point, the PMS delivers a regulated fixed voltage to a supercapacitor, from which electronic devices such as wireless sensors can be directly powered for extended operation times. Along with the MPEA system, the PMS is composed of a dc-dc boost converter operating in discontinuous conduction mode to maximize efficiency. In addition, a zero-current-switching tracking loop is proposed to improve overall system efficiency and minimize losses in the PMS through accurate on/off timing control of the p-type metal-oxide-semiconductor field-effect transistor. The PMS circuit was fabricated in 0.5-µm CMOS technology. The maximum dynamic efficiency measured was ~58% for a load of ~250 µW.
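The paper's MPEA is implemented on-chip, but the general idea of hill-climbing to the maximum power point can be sketched in software. Below is a generic perturb-and-observe loop; all names are hypothetical, the read_power and set_duty callables stand in for hardware, and a real MFC controller would also have to respect the cell's slow electrochemical dynamics:

def perturb_and_observe(read_power, set_duty, duty=0.5, step=0.01, iterations=200):
    # Perturb the converter duty cycle and keep the direction whenever
    # the measured power increases; reverse it whenever the power drops.
    direction = 1.0
    set_duty(duty)
    p_prev = read_power()
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        set_duty(duty)
        p = read_power()
        if p < p_prev:
            direction = -direction
        p_prev = p
    return duty  # duty cycle near the maximum power point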
(Newser) – Studies have suggested that the time of day one receives a flu shot can actually affect how effective it is, and now University of Cambridge researchers are reporting in the Proceedings of the National Academy of Sciences that our immune systems are better at fighting off viruses and pathogens at certain times of day as well. The short version is that we are as much as 10 times more susceptible to illness after nightfall than in the morning—in other words, we are at our most vulnerable during our commute home, as the Telegraph reports. This is because our circadian rhythms are constantly switching various things (from sleep patterns and body temperature to hormones and immune system activity) on and off, meaning the "resources" available, such as immune system response, vary throughout the day based on our body clock.
Determinants, characteristics and treatment of neck pain in a tertiary care hospital in Kerala
INTRODUCTION
Neck pain is one of the major causes of disability, and its prevalence rate is more than 30% per year. The majority of acute attacks of neck pain will subside with or without medication, but half of the patients will have persistent symptoms or frequent recurrences. Most studies conducted so far have observed that neck pain is not a self-limited problem and that many patients will have long-term symptoms which may be moderately disabling. Hence it should be diagnosed and treated properly. 1 Risk factors for neck pain include genetics, poor psychological health, mentally and physically stressful jobs, and exposure to tobacco. Important information regarding the pain, such as whether it is neuropathic or mechanical, can be revealed by detailed history taking and physical examination. Based on aetiology, neck pain can be classified as specific or non-specific. Specific neck pain results from an identifiable pain-generating mechanism such as the intervertebral disc, cervical facet joints, or nerve root dura. Non-specific neck pain has no identifiable aetiology, although it should be aggravated by neck movements. Imaging studies help to identify the underlying pathology, and magnetic resonance imaging is usually considered for patients with focal neurological deficits, pain not responding to conventional treatment, or when an interventional treatment is indicated. 2-4 Only a few clinical studies have evaluated different treatment modalities for neck pain. Skeletal muscle relaxants along with analgesics have been found to be beneficial in neck pain presenting with or without radicular symptoms. There is some evidence to support exercise and heat or cold compresses. This study was done to evaluate the sociodemographic profile, diagnostic approaches, and treatment options for neck pain.
METHODS
This was a cross-sectional study conducted in the outpatient Department of Physical Medicine and Rehabilitation in a tertiary care hospital during the period October 2010 to March 2011.
A total of 170 patients satisfying the inclusion criteria were included in the study. The inclusion criteria were pain at any time within the last 7 days and age between 18 and 65 years. Pregnant and lactating women, patients with a history of epilepsy, and those with cognitive deficits were excluded from the study. A structured proforma validated by the statistician was used for data collection. Research committee and ethics committee approvals were obtained. Patients satisfying the inclusion criteria were enrolled in the study, and written informed consent was obtained from each patient. The study comprised a single patient visit, and information regarding patient demographics, past history, concomitant diseases, and current medications was obtained from the patient or caregiver. All the information collected from each patient was recorded in the pre-prepared proforma. Data regarding the symptoms of the disease and the clinical diagnosis were also recorded, as were the drugs prescribed to each patient. The clinician's opinion of the patient's cervical spine X-ray and the data from baseline investigations, such as routine blood examination and fasting blood sugar, were also recorded in the proforma. Data analysis was done with the help of Excel 2007 and SPSS 16 statistical software.
RESULTS
170 patients with neck pain registered in the PMR outpatient clinic at Government Medical College Hospital between October 2010 and March 2011 were selected for the study. The data related to each drug were evaluated and statistical analysis was done. The age range of patients included in the study was 18 to 65, with a mean age of 47.41±9.76 years and a median age of 48 (Table 1). The majority of patients (74.1%) included in the study were females, and 61.8% of the patients were manual labourers (Figure 1). 14.1% of patients had sustained a neck injury in the past; the remaining patients gave no history of neck injury. 98.8% of patients experienced radiation of neck pain to one or both arms. Patients also gave a history of numbness in the arm, forearm, or hand (Figure 2). 99.4% of patients complained of associated neck stiffness, and 87.1% of patients had shoulder stiffness. On examination, 98.2% of patients had tenderness in the neck, and only 6.5% of the patients had a spinal deformity. 50% of the study population were found to have a neurological deficit associated with neck pain. 98.2% of patients had positive radiological evidence for neck pain. The majority of patients (85.9%) were diagnosed with cervical spondylosis (Figure 3). The most common comorbidity associated with neck pain was hypertension (23.5%), followed by diabetes mellitus (18.8%), coronary artery disease (12.9%), and dyslipidemia (11.8%). Osteoarthritis was another comorbidity, occurring in 10.6% of patients. 8.2% of patients in the study population were observed to have thyroid disorders, 4.7% were found to be asthmatic, and 3.5% were observed to have renal diseases. Periarthritis of the shoulder was seen in 3.5% of patients, and 2.4% of patients had migraine as a comorbidity. 1.2% of patients were found to have COPD. All the patients received NSAIDs and skeletal muscle relaxants; diclofenac was the most frequently prescribed NSAID.
DISCUSSION
Neck pain, or cervicobrachialgia, is a common problem seen in all age groups. The physical, psychological, and socioeconomic impact of neck pain is underappreciated.
According to the Global Burden of Disease 2010 Study, neck pain is the fourth leading cause of years lost to disability, ranking behind back pain, depression, and arthralgias. 5 Approximately half of all individuals will experience a clinically important neck pain episode over the course of their lifetime. 1 There is substantial heterogeneity in the reported prevalence rates of neck pain; however, most epidemiological studies report an annual prevalence ranging between 15% and 50%, with one systematic review reporting a mean rate of 37.2%. 1,2 The prevalence of neck pain is higher in females and peaks in middle age. 1,2 In the present study, too, the majority of patients belonged to the age group of 40-49 years, the mean age was 47.41 years, and the majority of patients were females (74.1%). The study by Zhen PC, Zhu LG et al also showed a female predominance (69%). 6 In contrast, in the studies done by Furlan JC, Kalsi-Ryan S and by Wang MC, Kreuter W, Wolfla CE, et al, more than 50% of patients were males. 7,8 Considering socioeconomic status, the majority of the patients in the present study were manual labourers in both treatment groups (61.8%). These results agree with observations in previous studies, which showed that a mentally and physically stressful job is a major risk factor for cervicobrachialgia. 9 Although certain occupations, such as office and computer workers, manual labourers, and health care workers, have been found in some studies to have a higher incidence of neck pain, the major workplace factors associated with the condition are low job satisfaction and a perceived poor workplace environment. 10 Unique risk factors for neck pain include trauma (e.g., traumatic brain and whiplash injuries) and certain sports injuries (e.g., wrestling, ice hockey, football). In our study, 14.1% of the patients gave a history of neck injury. In a study conducted by Donald R Murphy et al, it was found that individuals who have experienced trauma to the neck are at increased risk of developing neck pain; 20% of patients in that study had a neck injury. 11 The present study also revealed that 98.8% of patients experienced radiation of neck pain, which is a characteristic feature of cervical radiculopathy caused by compression of the nerve root. 6,9 This agrees with the study done by Zhen PC, Zhu LG et al, in which it was also observed that the majority of conditions causing cervicobrachialgia were associated with radiculopathy. 6 Numbness and paresthesia are other common features of cervical radiculopathy, and in this study most patients had numbness in the arm (82.4%), forearm (61.8%), and hand (59%). 12 Neck pain is associated with several comorbidities, including headache, back pain, arthralgias, and depression. Several studies have shown that degenerative changes and disease processes in the cervical spine can lead to reflex spasm of related muscles. 9 99.4% of patients experienced neck stiffness and 87.1% experienced shoulder stiffness; 98.2% of the total population had tenderness due to spasm of the paraspinal muscles. Spinal deformities such as kyphosis, scoliosis, and congenital abnormalities of the cervical spine can predispose to neurologic injury and chronic neck pain. The study done by Anthony P. Trenga et al showed that 8% of patients who presented with cervicobrachialgia had a spinal deformity. 13 In the present study, the incidence of spinal deformity was found to be 6.5%. Myelopathy is invariably caused by compression of the spinal cord.
The spinal cord must be subjected to at least 40% compression to produce neurological deficits. 7,9 50% of patients included in the study were found to have neurological deficits on clinical examination. In a previous study done by Okada E, Matsumoto M, et al, the incidence of neurological deficit was 42%. 14 Most of the conditions causing cervicobrachialgia can be diagnosed on the basis of radiological evidence. In a study done by Kieran Michael et al, the diagnosis of cervicobrachialgia was supported by radiological evidence in the majority of patients: 85% of patients had radiological evidence for cervicobrachialgia. 9 In the present study, too, 98.2% of the patients had relevant radiological findings at the time of inclusion into the study. In the present study the majority of patients (85.9%) were diagnosed with cervical spondylosis. According to many previous studies in this context, cervical spondylosis is the commonest cause of cervicobrachialgia, and age is the major predisposing factor for developing cervical spondylosis. 9,12,14 The study done by Okada E, Matsumoto M, et al in Japanese patients showed that the majority of patients with cervicobrachialgia had cervical spondylosis. 14 As per the present study, other major causes of cervicobrachialgia are cervical disc prolapse (5.9%), osteoporosis (4.1%), cervical rib (2.4%), rheumatoid arthritis (1.2%), and fibromyalgia (0.6%). The most common comorbidity in the current study was hypertension (23.5%), followed by diabetes mellitus (18.8%), coronary artery disease (12.9%), and dyslipidemia (11.8%). A study conducted in the US by Lad SP, Patil CG et al also showed a similar incidence of comorbidities: hypertension (30%), diabetes mellitus (28%), and others (37%). 15 However, the frequency of comorbidities varies in different populations. Few high-quality studies have evaluated pharmacotherapy for neck pain. Systemic non-steroidal anti-inflammatory drugs (NSAIDs) have been found to be beneficial for spinal pain in general but have not been formally studied in neck pain. Although NSAIDs are more efficacious than acetaminophen, the American College of Rheumatology recommends acetaminophen as a first-line treatment, even for arthritis, because of its more favourable adverse effect profile. 16 In the present study, drug therapy for pain relief mainly included NSAIDs, muscle relaxants, and tricyclic antidepressants, which agrees with the findings of previous studies. Previous studies have also shown the effectiveness of alternative and complementary treatments for neck pain, such as spinal manipulation and physiotherapy. 19,20
CONCLUSION
Neck pain is one of the leading causes of disability in the world, yet the amount of research devoted to treatment is relatively low in comparison to the other leading causes. For acute neck pain, most cases will resolve spontaneously over a period of weeks to months, but a substantial proportion of individuals will be left with residual or recurrent symptoms. Treatment appears to have little effect on the course of acute neck pain. History and physical examination may provide important clues as to whether the pain is neuropathic or mechanical. NSAIDs and central skeletal muscle relaxants are the most commonly prescribed medications, and alternative treatments such as spinal manipulation and physiotherapy appear to be beneficial in patients with neck pain.
Anderson quarterback David Thompson threw for 339 yards and three touchdowns in the Redskins’ victory over West Clermont. Receiver Eric Curless caught nine passes for 196 yards and a touchdown. Running back Owen Koelle added 96 total yards and three total touchdowns. Records: A 5-2 (2-2 ECC), WC 3-4 (2-2 ECC). Quarterback Brayden Sipple threw for 268 yards and two touchdowns as the Wildcats knocked off Williamsburg in convincing fashion. Sipple’s two top targets were Brent Hopkins, who hauled in three catches for 124 yards and a touchdown, and Clayton Schirmer, who added 115 yards and a touchdown on five receptions. Colerain’s Syncere Jones took the first play from scrimmage 54 yards to the end zone Friday night. From there the Cardinals never looked back as they rolled to their 72nd-consecutive GMC win. Ivan Pace Jr. finished with seven rushes for 67 yards and a TD to lead the Colerain ground attack. He also caught one of Deante Smith-Moore’s three TD passes. Smith-Moore finished with 78 yards passing on 5 of 7 attempts. He also ran for 61 yards. Jones ended the night with 64 yards on two carries. In all, Colerain outgained Mason 329 to 134. Of those 329 yards, 251 came on the ground. Records: C 7-0 (5-0 GMC), M 5-2 (4-1 GMC). Running back Dae’von Bryant ran in four scores, while standout tight end Joe Hocker hauled in a 78-yard touchdown catch. The Deer Park defense allowed Finneytown just one offensive play past the 50-yard line. Records: DP 6-1 (3-1 CHL), F 2-5 (0-4 CHL). Fairfield scored 40 first-half points and nearly pulled off a shutout as the Indians beat Sycamore 40-7 to improve to 5-0 in the GMC. Fairfield forced four turnovers and outgained the Aviators 345-49. JuTahn McClain ran for 112 yards and two TDs on 11 carries while Jeff Tyus ran for a TD and threw another in the win. Fairfield’s defense got into the backfield frequently on Friday night, finishing the evening with six sacks. Senior linebacker Del Thomas led the way with seven solo tackles, two sacks and a pick. Records: F 6-1 (5-0 GMC), S 4-3 (2-3 GMC). Fenwick made a major statement Friday evening, knocking off the top-ranked team in Ohio Division III. The Falcons rolled past Chaminade Julienne thanks to a monster rushing attack paced by senior Jack Fessler. Fenwick put up 533 yards of offense with 445 coming on the ground. Fessler led the way with a school-record 317 yards on 19 carries. He finished with three TDs, all of them coming from long distance. He scored on runs of 62, 68 and 68 yards. Logan Miller added a 65-yard TD run and had 93 yards on seven carries. Sully Janeck threw for 88 yards and a score in the victory. CJ-Jones 1 run (Staub kick). Records: F 5-2 (3-1 GCL-C), CJ 5-2 (3-1 GCL-C). With 9:22 remaining in Friday night’s game against Middletown, Kaleb Johnson found some space and broke a 61-yard TD run. He was stopped on the ensuing 2-point try but Hamilton’s defense was able to hold on for the one-point victory and the program’s first win of 2018. The Big Blue ran for 345 yards; Keyshawn Stephens accounted for 257 of those yards and a score. Josh Bryant led the Middies with 150 rushing yards and both of Middletown’s touchdowns. Records: H 1-6 (1-4 GMC), M 1-6 (0-5 GMC). Indian Hill running back Dimetrius Baylor led the way for the Braves, rushing for 132 yards and three scores. Baylor also amassed 69 yards receiving and a touchdown. Quarterback Cole Dein threw for 147 yards and three touchdowns, also adding 105 yards and a score on the ground. Records: IH 6-1 (4-0 CHL), T 2-5 (1-3 CHL).
Ashton Koller ran for three TDs and 189 yards and Kings remained undefeated in the ECC. Koller had first-quarter TD runs of 85 and 55 yards. He also added a 10-yard TD run in the third quarter. Walnut Hills out-gained Kings 245 to 234 but the Knights held a 180 to 98 edge on the ground. Hunter Henry had a 20-yard interception return for a TD in the win. Tyrese Dorn ran for 69 yards and a score in the loss. Ryan Mickens threw for 147 yards and a TD for Walnut Hills. Records: K 6-1 (4-0 ECC), WH 1-6 (0-4 ECC). Lakota West opened the scoring against Lakota East with a 42-yard field goal from Nicklas Hjort Friday night. From that point on it was all Thunderhawks as Lakota East won its third-consecutive game in the rivalry. Jack Dobrozsi finished with 240 rushing yards and three TDs on 13 carries. East outgained West 366 to 214 including a 270 to 133 edge on the ground. Sean Church threw a pair of TD passes to Evan Yablonsky. East’s defense recovered three fumbles. Records: E 5-2 (4-1 GMC), W 2-5 (1-4 GMC). Running back Evan Crim ran for 131 of Middletown Madison’s 366 rushing yards. Crim also scored two touchdowns in the big victory. Records: M 7-0 (4-0 SWBL), C 0-7 (0-3 SWBL). Milford running back Cameron Kells had another impressive performance, running for 118 yards and two touchdowns. Quarterback Hunter Johnson added 206 yards and three scores through the air. Records: M 6-1 (4-0 ECC), W 1-6 (0-4 ECC). Mt. Healthy rattled off 56 unanswered points to stomp SWOC-rival Talawanda and move to 4-3 on the year. The Owls did it with their ground game, racking up 410 rushing yards and seven touchdowns on 37 attempts. Ty Mincy led the charge, running for a game-high 173 yards and a pair of touchdowns on nine carries. Quarterback Michael Crawford only threw three passes (including a 22-yard TD), but ran for 108 yards and three scores on nine attempts. Senior Jamal Kelly had a 22-yard TD catch and a 31-yard TD run in the second quarter. For Talawanda, quarterback Tyler Teeters threw for a touchdown and ran for another. Tyler Prater ran for 47 yards for the Braves and had a rushing TD for the third straight game. Northwest pulled out the 21-20 victory over Ross despite gaining just 90 yards of total offense, 76 of which came on one play: a touchdown run from quarterback Dae’Mon Cherry. Northwest also scored on a 65-yard kickoff return and a 79-yard fumble return. Ross was led by running back Dylan Caldwell, who ran for 228 yards and two touchdowns on 35 carries. Records: N 2-5 (1-2 SWOC), R 3-4 (1-3 SWOC). Receiver Jermaine Wimpye hauled in six receptions for 165 yards and two touchdowns and running back Trey Key ran for 103 yards and a touchdown in the Vikings’ big win over the Highlanders. Records: P 2-5 (2-3 GMC), OH 3-4 (2-3 GMC). Records: R 2-5 (1-3 CHL), M 4-3 (1-3 CHL). Justin Silverstein threw for 345 yards and four TDs as Turpin used a huge second half to down Loveland. Silverstein finished 24 of 32 through the air. Kaidan Naughton led the ground game for the Spartans. He carried the ball 18 times for 80 yards and a score. He also caught two passes for 37 yards and a TD. Cody Kidd led Turpin in receiving, catching 10 balls for 77 yards. Liam Hamill stood out in the loss, running the ball 19 times for a game-high 144 yards and a TD. He also led the Tigers with three receptions for 76 yards and a TD. Records: T 5-2 (3-1 ECC), L 1-6 (1-3 ECC).
Kayvon Britten ran in three fourth-quarter TDs and scored 20 of Western Hills’ 22 fourth-quarter points to help the Mustangs complete a come-from-behind win at Woodward. Britten finished with 197 yards on 21 carries with the three TDs and a two-point conversion run. West High’s defense was paced by senior twins Jason and Jonathan Harrison. Jason had four solo tackles, two assisted tackles and two fumble recoveries. Jonathan had four solo tackles and an assisted tackle. Reggie Taylor-Benton had two forced fumbles and Jakobe Scott had two interceptions on back-to-back drives to close out the win. Daniel Ingram finished with 18 rushes for 114 yards and two TDs in the loss. He also had 140 yards passing and a 54-yard punt return TD for Woodward. Records: WH 5-2 (4-0 CMAC), W 2-5 (1-2 CMAC). Running back Pierson Rogers ran for 80 yards and three touchdowns on just nine carries. Quarterback Evan Prater threw nine times for 196 yards and a touchdown, also adding 63 yards and two touchdowns on the ground. Defensively, the Cowboys took two interceptions back for touchdowns. In total, they held the Mustangs to just 72 total yards. Records: W 7-0 (4-0 CHL), M 4-3 (2-2 CHL). Bishop Brossart senior quarterback Tyler MacDonald threw for one touchdown and ran in another. The Mustang defense held Bracken County scoreless after the first quarter. Covington Catholic extended its winning streak to 22 games, surviving a second-quarter surge from Bishop Chatard before cruising in the second half. Caleb Jacob was 19-for-30 for 385 yards and three touchdowns. Jack Coldiron caught four passes for 145 yards and one touchdown, while tight end Michael Mayer had four catches for 129 yards and two scores. Mayer also had a pick-six. Running back Casey McGinness (26 carries, 158 yards) helped put the game away with two third-quarter TD runs. Tayquan Calloway ran for a touchdown and caught another as Holmes recovered from a 9-point second-half deficit to outlast Bourbon County and move to 3-4. Calloway ran five times for 40 yards and a touchdown, caught three passes for 33 yards and a score and had an 89-yard kickoff return. Holmes quarterback James Walker only completed 5 of 17 passes, but his two-yard TD run in the fourth quarter ended up being the game-winner. Team results: 1. St. Xavier 291, 2. Taylor County 294, T3. Ballard 300, T3. Madison Central 300, 5. Lexington Christian Academy 302, 6. Covington Catholic 307, 7. Montgomery County 313, 8. Daviess County 318, 9. Highlands 320, 10. Marshall County 323, 11. Grant County 324, 12. Wayne County 325. MISSED CUT (locals) Ryle 347. Individual top performer: Drew Doyle (St. Xavier) shot a four-under-par 68. Remaining top five (and top two locals): 2. Butler (Trinity) 70, T3. Coyle (Taylor County) 71, T3. Webb (Scott County) 71, T3. Mixon (South Warren). T11. Sweeten (Covington Catholic) 73, T11. Kennedy (Covington Catholic) 73. Goals: G-Utter 3, Carter, Williams, Benjamin, Bradford, Johnson. Shutout: Clifton. Franklin County d. South Dearborn 25-8, 25-15, 18-25, 25-12. Roger Bacon d. Seton 17-25, 21-25, 26-24, 26-24, 15-11. Notre Dame Academy d. Ryle 25-21, 16-25, 38-36, 25-19.
Well-Tolerated Amphotericin B Derivatives That Effectively Treat Visceral Leishmaniasis. Chemotherapy against the neglected tropical disease visceral leishmaniasis (VL) is suboptimal with only four licensed drugs. Amphotericin B (AmB), despite its toxicity, remained a second line drug for a long time. However, the demonstration that liposomal AmB is highly effective against VL propelled it, despite its cost, to a first line drug in many countries. While several ongoing efforts are aiming at finding cheaper and stable AmB-formulations, an alternative strategy is the development of less-toxic AmB derivatives. We show here that two less-toxic AmB derivatives with the carboxylate at position 16 of AmB derivatized to a methyl urea (AmB-MU) or amino urea (AmB-AU) are active in vitro against Leishmania donovani, both as free-living parasites as well as their intracellular form. Both less-toxic derivatives, similarly to AmB, target the ergosterol pathway of L. donovani. While the AmB-AU derivative showed female-specific liver toxicity in vivo, the AmB-MU derivative was well-tolerated and more effective than AmB against experimental VL. These studies are an important step for improving AmB-based therapy against a prevalent parasitic disease.
// This is an implementation of PlannerCost.
// It takes a polyline and computes a smooth distance from a point to the polyline, then it
// computes a quadratic cost from it: cost = 0.5 * gain * dist * dist.
// In addition, if a speed reward is provided, a linear cost will be added.
// By default it is assumed that the position (x, y) corresponds to the first and second
// dimensions of the state vector, while the speed is the fourth. If this is not the case,
// you can set the correct indices of (x, y, speed) using setIndices.
class PolylineDistanceQuadraticCost : public PlannerCost {
 public:
  PolylineDistanceQuadraticCost() = default;

  double evaluate(double time, const VectorXd& state) override;
  void addGradient(double time, const VectorXd& state, Eigen::Ref<VectorXd> gradient) override;
  void addHessian(double time, const VectorXd& state, Eigen::Ref<MatrixXd> hessian) override;

  void setPolyline(geometry::Polyline2d polyline) { polyline_ = std::move(polyline); }
  const geometry::Polyline2d& getPolyline() const { return polyline_; }
  void setGain(double gain) { gain_ = gain; }
  void setSpeedReward(double speed_reward) { speed_reward_ = speed_reward; }
  void setIndices(const Vector3i& indices) { indices_ = indices; }

 private:
  Vector3i indices_ = Vector3i(0, 1, 3);  // Indices of (x, y, speed) in the state vector.
  geometry::Polyline2d polyline_;
  double gain_ = 1.0;
  double speed_reward_ = 0.0;
};
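As a rough illustration of the cost this class computes, here is a sketch in Python; the exact smooth-distance computation and the sign of the speed term are assumptions, since the comments above only specify cost = 0.5 * gain * dist * dist plus a linear speed term:

import numpy as np

def polyline_distance(point, polyline):
    # Minimum distance from a 2-D point to a polyline given as a list of
    # 2-D numpy vertices (assumes consecutive vertices are distinct).
    best = np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(point - (a + t * ab)))
    return best

def evaluate_cost(state, polyline, gain=1.0, speed_reward=0.0, indices=(0, 1, 3)):
    # Mirrors evaluate(): quadratic distance cost minus a linear speed reward.
    ix, iy, iv = indices
    d = polyline_distance(np.array([state[ix], state[iy]]), polyline)
    return 0.5 * gain * d * d - speed_reward * state[iv]

line = [np.array([0.0, 0.0]), np.array([10.0, 0.0])]
state = np.array([0.0, 2.0, 0.0, 5.0])  # (x, y, heading, speed)
print(evaluate_cost(state, line, gain=2.0, speed_reward=0.1))  # 4.0 - 0.5 = 3.5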
Sumitomo Bank Ltd., Japan's second-largest lender, said yesterday that it lost 251.3 billion yen, or $1.85 billion, in the year ended March 31 as it wrote off a record amount of bad debt. And Daiwa Bank Ltd., the nation's ninth-largest commercial lender, had net income of 12.65 billion yen, or $93 million, after tax benefits were factored in. Write-offs drove the bank to a pretax loss of 142.5 billion yen, or $1.05 billion, its third straight loss.
Earlier this year, the Bulletin of the Atomic Scientists’ Science and Security Board announced their decision to move the Doomsday Clock closer to midnight. Around the same time, the US Department of Defense released the 2018 Nuclear Posture Review (NPR) – these two developments draw from the same global security environment but are in stark contrast to each other in the faith they place in nuclear weapons. Amongst a slew of modernisation measures that the new NPR proposes, two stand out: the development of low-yield nuclear warheads and an expansion of the scope of nuclear weapons use by the US against an adversary. According to the new NPR, the US plans to deploy “low-yield sea-launched ballistic missile nuclear (SLBM) warheads” and re-commission a “modern nuclear armed sea-launched cruise missile (SLCM),” again with a low-yield nuclear option. Introducing a low-yield nuclear option is a form of ‘tailored deterrence’ that the US has been contemplating for quite some time, especially vis-à-vis Russia. Russia has allegedly lowered the nuclear threshold by increasing its repository of non-strategic nuclear weapons, also commonly known as tactical nuclear weapons, thus reviving US concerns about limited nuclear use. Russia currently possesses 4,500 nuclear warheads, of which 2,000 are believed to be non-strategic. The NPR cites Russia’s modernisation drive as one of the primary reasons for the US policy change, and the move is likely to close the gap between Russian and US capabilities in non-strategic nuclear weapons. However, it can be considered militarily inexplicable for one important reason: Russia’s decision to modernise its military with non-strategic nuclear weapons was meant to compensate for its lagging conventional military strength, a domain in which the US is far superior. The rationale behind the US deployment of a low-yield nuclear option, as stated in the NPR, hinges on its supposed ability to contain a possible limited nuclear escalation by countries like Russia. However, escalation control makes for a credible theoretical framework only; its chances of practical implementation are slim. A low-yield, non-strategic nuclear warhead can cause enough damage to invite a full-scale nuclear response. On the one hand, the NPR document emphasises measures like “decreasing misperception and miscalculation and avoiding destabilising nuclear arms competition,” while at the same time proposing the re-introduction of a nuclear-tipped SLCM (which was decommissioned in 2011) in the long term. This is likely to increase the chances of an accidental, cataclysmic nuclear exchange, precisely because of a cruise missile’s ability to evade interception or detection, leading to potential miscommunication between two adversaries. Employing ambiguity in a country’s nuclear doctrine is not new; the French nuclear doctrine, for example, remains vague as to what constitutes ‘national interests’. Nonetheless, a lack of transparency in explicitly stating the circumstances that would justify a nuclear strike against an adversary, especially if the US uses non-strategic nuclear weapons, can create space for a pre-emptive strike from the adversary due to uncertainty about a possible first strike. The NPR uses the “uncertain international security environment” and Russia’s muscle-flexing as justifications for significantly modernising the existing US nuclear arsenal.
Although it claims to be responding to changing strategic necessities, it is also true that, as the Doomsday Clock nears midnight, the NPR in many ways carries the potential to do further disservice to nuclear non-proliferation and to make the global security environment more perilous.
// File: src/main/java/org/dsaw/poker/engine/gui/PlayerPanel.java
//
// This file is part of the 'texasholdem' project, an open source
// Texas Hold'em poker application written in Java.
//
// Copyright 2009 <NAME>
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package org.dsaw.poker.engine.gui;

import org.dsaw.poker.engine.Card;
import org.dsaw.poker.engine.Player;
import org.dsaw.poker.engine.actions.Action;

import java.awt.*;
import java.math.BigDecimal;

import javax.swing.*;
import javax.swing.border.Border;
import javax.swing.border.EmptyBorder;

/**
 * Panel representing a player at the table.
 *
 * @author <NAME>
 */
public class PlayerPanel extends JPanel {

    /** The serial version UID. */
    private static final long serialVersionUID = 5851738752943098606L;

    /** Filled dealer button image when player is dealer. */
    private static final Icon BUTTON_PRESENT_ICON = ResourceManager.getIcon("/images/button_present.png");

    /** Empty dealer button image when player is not dealer. */
    private static final Icon BUTTON_ABSENT_ICON = ResourceManager.getIcon("/images/button_absent.png");

    private static final Icon CARD_PLACEHOLDER_ICON = ResourceManager.getIcon("/images/card_placeholder.png");

    private static final Icon CARD_BACK_ICON = ResourceManager.getIcon("/images/card_back.png");

    /** The border. */
    private static final Border BORDER = new EmptyBorder(10, 10, 10, 10);

    /** The label with the player's name. */
    private JLabel nameLabel;

    /** The label with the player's amount of cash. */
    private JLabel cashLabel;

    /** The label with the last action performed. */
    private JLabel actionLabel;

    /** The label with the player's current bet. */
    private JLabel betLabel;

    /** The label for the first hole card. */
    private JLabel card1Label;

    /** The label for the second hole card. */
    private JLabel card2Label;

    /** The label for the dealer button image. */
    private JLabel dealerButton;

    /**
     * Constructor.
*/ public PlayerPanel() { setBorder(BORDER); setBackground(UIConstants.TABLE_COLOR); setLayout(new GridBagLayout()); GridBagConstraints gc = new GridBagConstraints(); nameLabel = new MyLabel(); cashLabel = new MyLabel(); actionLabel = new MyLabel(); betLabel = new MyLabel(); card1Label = new JLabel(CARD_PLACEHOLDER_ICON); card2Label = new JLabel(CARD_PLACEHOLDER_ICON); dealerButton = new JLabel(BUTTON_ABSENT_ICON); gc.gridx = 0; gc.gridy = 0; gc.gridwidth = 2; gc.gridheight = 1; gc.weightx = 1.0; gc.weighty = 1.0; gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.NONE; add(dealerButton, gc); gc.gridx = 0; gc.gridy = 1; gc.gridwidth = 1; gc.gridheight = 1; gc.insets = new Insets(1, 1, 1, 1); gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.HORIZONTAL; gc.weightx = 1.0; gc.weighty = 1.0; add(nameLabel, gc); gc.gridx = 1; gc.gridy = 1; gc.gridwidth = 1; gc.gridheight = 1; gc.weightx = 1.0; gc.weighty = 1.0; gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.HORIZONTAL; add(cashLabel, gc); gc.gridx = 0; gc.gridy = 2; gc.gridwidth = 1; gc.gridheight = 1; gc.weightx = 1.0; gc.weighty = 1.0; gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.HORIZONTAL; add(actionLabel, gc); gc.gridx = 1; gc.gridy = 2; gc.gridwidth = 1; gc.gridheight = 1; gc.weightx = 1.0; gc.weighty = 1.0; gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.HORIZONTAL; add(betLabel, gc); gc.gridx = 0; gc.gridy = 3; gc.gridwidth = 1; gc.gridheight = 1; gc.weightx = 1.0; gc.weighty = 1.0; gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.NONE; add(card1Label, gc); gc.gridx = 1; gc.gridy = 3; gc.gridwidth = 1; gc.gridheight = 1; gc.weightx = 1.0; gc.weighty = 1.0; gc.anchor = GridBagConstraints.CENTER; gc.fill = GridBagConstraints.NONE; add(card2Label, gc); setInTurn(false); setDealer(false); } /** * Updates the panel. * * @param player * The player. */ public void update(Player player) { nameLabel.setText(player.getName()); cashLabel.setText("$ " + player.getCash()); BigDecimal bet = player.getBet(); if (bet.equals(BigDecimal.ZERO)) { betLabel.setText(" "); } else { betLabel.setText("$ " + bet); } Action action = player.getAction(); if (action != null) { actionLabel.setText(action.getName()); } else { actionLabel.setText(" "); } if (player.hasCards()) { Card[] cards = player.getCards(); if (cards.length == 2) { // Visible cards. card1Label.setIcon(ResourceManager.getCardImage(cards[0])); card2Label.setIcon(ResourceManager.getCardImage(cards[1])); } else { // Hidden cards (face-down). card1Label.setIcon(CARD_BACK_ICON); card2Label.setIcon(CARD_BACK_ICON); } } else { // No cards. card1Label.setIcon(CARD_PLACEHOLDER_ICON); card2Label.setIcon(CARD_PLACEHOLDER_ICON); } } /** * Sets whether the player is the dealer. * * @param isDealer * True if the dealer, otherwise false. */ public void setDealer(boolean isDealer) { if (isDealer) { dealerButton.setIcon(BUTTON_PRESENT_ICON); } else { dealerButton.setIcon(BUTTON_ABSENT_ICON); } } /** * Sets whether it's this player's turn to act. * * @param inTurn * True if it's the player's turn, otherwise false. */ public void setInTurn(boolean inTurn) { if (inTurn) { nameLabel.setForeground(Color.YELLOW); } else { nameLabel.setForeground(Color.GREEN); } } /** * Custom label for a player panel. * * @author <NAME> */ private static class MyLabel extends JLabel { /** Serial version UID. */ private static final long serialVersionUID = 3607645928062082095L; /** * Constructor. 
*/ public MyLabel() { setBorder(UIConstants.LABEL_BORDER); setForeground(UIConstants.TEXT_COLOR); setHorizontalAlignment(SwingConstants.CENTER); setText(" "); } } }
def maior(*lista):
    # Returns the largest of the given values ("maior" = "largest").
    # Start from the first element instead of 0 so the function is also
    # correct for all-negative inputs; an empty call returns None.
    if not lista:
        return None
    maior = lista[0]
    for elemento in lista[1:]:
        if elemento > maior:
            maior = elemento
    return maior
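A minimal usage sketch for the function above; the sample values are illustrative only:

print(maior(3, -1, 7))   # 7
print(maior(-5, -2))     # -2 (would wrongly be 0 with the old initialisation)
print(maior())           # None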
Two-loop massless QCD corrections to the $g+g \rightarrow H+H$ four-point amplitude

We compute the two-loop massless QCD corrections to the four-point amplitude $g+g \rightarrow H+H$ resulting from effective operator insertions that describe the interaction of a Higgs boson with gluons in the infinite top quark mass limit. This amplitude is an essential ingredient to the third-order QCD corrections to Higgs boson pair production. We have implemented our results in a numerical code that can be used for further phenomenological studies.

Introduction

The discovery of the Higgs boson at the Large Hadron Collider is an important milestone in particle physics. It puts the Standard Model (SM) in a firm position to describe the dynamics of all the known elementary particles. Of course, there are several shortcomings in the SM which lead physicists to explore physics beyond the SM. There have been tremendous efforts to construct models that address these shortcomings and at the same time demonstrate rich phenomenology that can be explored at present and future colliders. All of this has culminated in dedicated experimental searches for hints of new physics, which in turn constrain the parameters of beyond-the-SM scenarios.

By measuring the mass of the Higgs boson, one can predict the trilinear self-coupling in the Higgs sector of the SM. This is a crucial parameter that describes the shape of the Higgs potential. In order to better understand the Higgs sector and the nature of the electroweak symmetry breaking mechanism, it is important to measure this self-coupling independently. At hadron colliders, one of the potential channels that can probe this self-coupling is the production of a pair of Higgs bosons. The dominant production channel in the SM is through the loop-induced gluon fusion subprocess. At leading order (LO), this process involves two mechanisms, with the scattering amplitude for one of them depending on the trilinear Higgs boson coupling. Since both mechanisms are loop-induced through heavy quarks and there is destructive interference between their respective amplitudes, the SM production cross section at LHC energies is only a few tens of femtobarns. In addition, a large and irreducible background makes its detection an experimentally demanding task. Double Higgs boson production can receive substantial contributions from physics processes beyond the SM, and there are already several detailed studies indicating scenarios for a substantial increase in its production rate (see the references therein).

Theoretically, it is a challenging task to compute higher order QCD effects when taking into account the exact top quark mass dependence, since the Born-level contribution appears only at one loop. The first computation of next-to-leading order (NLO) QCD corrections was performed in the infinite top quark mass limit. In this limit, the top quark is integrated out, resulting in a field theory that contains effective operators coupling the Higgs field to the gluon field. These early results were then improved upon by considering various NLO contributions from finite top quark mass effects. Recently, the full NLO corrections with exact top quark mass dependence could be completed, owing to technical progress in the numerical evaluation of two-loop integrals and amplitudes with internal masses. At next-to-next-to-leading order (NNLO) level, results are available only in the heavy top limit.
At NNLO, predictions in the soft plus virtual (SV) approximation were obtained first, the leading top quark mass corrections were then included, and the impact of the remaining hard contributions was studied subsequently. The relevant Wilson coefficients are known to NNLO. Fully differential results are also available at NNLO level. By using a re-weighting approach, these fixed-order NNLO results for infinite top quark mass can be combined with the exact NLO top quark mass dependence to quantify the top quark mass effects at NNLO. Effects of threshold resummation at next-to-next-to-leading logarithm (NNLL) level have also been obtained using soft-collinear effective theory.

Going beyond NNLO level in QCD is a challenging task owing to the technical difficulties involved in computing the loop integrals for the virtual subprocesses and the phase space integrals when there are real emissions. In this article we make a first step towards computing the third-order correction to the production of a pair of Higgs bosons in the gluon initiated channels. In particular we compute virtual amplitudes for the subprocess g + g → H + H, resulting from two effective operator insertions, at the two-loop level.

The paper is structured as follows. In Section 2, we introduce the notation, describe the effective field theory that results in the limit of an infinite top quark mass, and discuss the different purely virtual contributions to Higgs boson pair production up to next-to-next-to-next-to-leading order (N^3LO). Section 3 describes in detail the calculation of the two-loop amplitude for g + g → H + H, and the numerical evaluation of the results is discussed in Section 4. We conclude with an outlook on future applications in Section 5.

Higgs effective field theory

We compute the relevant amplitudes in an effective theory where the top quark degrees of freedom are integrated out. The effective Lagrangian that describes the coupling of one and two Higgs bosons to gluons is given by

L_eff = −(1/4) G^{a,μν} G^a_{μν} ( C_H H/v − C_HH H²/v² ),

where G^a_{μν} denotes the gluon field strength tensor, H the Higgs boson field, and v = 246 GeV is the vacuum expectation value of the Higgs field. Note that we have taken only those terms in L_eff into account that are relevant for the production of two Higgs bosons in a gluon-gluon initiated process. The constants C_H and C_HH are the Wilson coefficients, determined by matching the effective theory to the full theory, and they can be expanded in powers of the renormalized strong coupling constant a_s = g_s²/(16π²). In these expansions, n_f is the number of light flavors, m_t is the MS-bar top quark mass at the scale μ_R, and N = 3 is fixed for QCD.

Kinematics

Consider the production of a pair of Higgs bosons in the gluon fusion subprocess, g(p_1) + g(p_2) → H(p_3) + H(p_4), where p_1 and p_2 are the momenta of the incoming gluons, and p_3 and p_4 the momenta of the outgoing Higgs bosons, respectively. The Mandelstam variables for the above process are given by

s = (p_1 + p_2)², t = (p_1 − p_3)², u = (p_2 − p_3)².

They satisfy s + t + u = 2 m_h², where m_h is the mass of the Higgs boson. In the following, we describe the computation of the one- and two-loop QCD corrections to the amplitude given in Eq. (2.4). We find that it is convenient to express this amplitude in terms of the dimensionless variables x, y and z.

Tensors and projectors

Using gauge invariance, the amplitude can be decomposed in terms of two second-rank Lorentz tensors T_i^{μν} with i = 1, 2, built from the external momenta, with p_T² = (tu − m_h⁴)/s. In color space, the amplitude is diagonal in the indices (a, b) of the incoming gluons.
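As a quick numerical sanity check of the kinematic relations above, the following short Python sketch (illustrative only, not taken from the paper; the phase-space point is arbitrary) verifies s + t + u = 2 m_h² and evaluates p_T² = (tu − m_h⁴)/s:

import math

m_h = 125.0            # GeV, Higgs boson mass (illustrative value)
sqrt_s = 500.0         # GeV, arbitrary point above the production threshold
s = sqrt_s ** 2
p = math.sqrt(s / 4.0 - m_h ** 2)   # Higgs three-momentum in the c.m. frame
cos_t = 0.3
sin_t = math.sqrt(1.0 - cos_t ** 2)

t = m_h ** 2 - s / 2.0 + sqrt_s * p * cos_t   # t = (p1 - p3)^2
u = m_h ** 2 - s / 2.0 - sqrt_s * p * cos_t   # u = (p2 - p3)^2

assert abs(s + t + u - 2.0 * m_h ** 2) < 1e-4   # Mandelstam identity
pT2 = (t * u - m_h ** 4) / s                    # p_T^2 = (tu - m_h^4)/s
assert abs(pT2 - (p * sin_t) ** 2) < 1e-4       # equals (|p| sin(theta))^2
print(t, u, pT2)

The second assertion makes explicit that p_T is simply the transverse momentum of either Higgs boson in the partonic center-of-mass frame.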
The scalar functions M_i can be obtained from M^{ab} by applying appropriate projectors in d dimensions.

2.4 Diagrams to O(a_s^4)

When considering higher order massless QCD corrections to the g + g → H + H amplitudes in the effective theory, we encounter two topologically distinct classes of subprocesses, which we call Class-A and Class-B hereafter. We perform an expansion in a_s to include all the contributing diagrams. Class-A, see Fig. 1, contains diagrams where two Higgs bosons couple to each other and to gluons. They either couple to the gluons directly through a C_HH Wilson coefficient (left-hand column of Fig. 1), or through a Higgs boson propagator and the C_H Wilson coefficient (right-hand column of Fig. 1). The latter diagrams are linearly proportional to the triple Higgs coupling. Class-B, see Fig. 2, contains diagrams where Higgs bosons couple to two gluons through the effective vertices proportional to C_H, but do not couple to each other.

Both Wilson coefficients C_H and C_HH start at order a_s. Consequently, to LO in a_s only Class-A diagrams contribute. Beyond LO, that is from order a_s² onwards, the class-A diagrams are only of form factor type, and the results for class-A to a_s⁴ can be readily obtained from the three-loop form factor that appears in purely virtual contributions to single Higgs boson production. The class-B diagrams start contributing from order a_s², with results previously available only up to order a_s³. In the following, we complete the a_s⁴ contributions to the g + g → H + H amplitude by computing the class-B diagrams to this order, which amounts to their two-loop corrections.

In general, the scalar amplitudes M_i can be written as a sum of amplitudes resulting from the two classes A and B, M_i = M^A_i + M^B_i. Since the M^A_i are proportional to the Higgs boson form factor, they can be expressed as in Eq. (2.14). The amplitude M^A_2 is identically zero to all orders in perturbation theory due to the choice of the tensorial basis. The form factors F^{(j)}(d) for j = 1, 2, 3 are known in the literature.

In this article, the amplitudes of class-B are presented up to the two-loop level in perturbative QCD. At each order, the amplitude contains a pair of vertices resulting from the first term of the effective Lagrangian L_eff and hence is proportional to the square of the Wilson coefficient C_H(a_s), expanded to the desired accuracy in a_s. Beyond leading order, the one- and two-loop diagrams are not only ultraviolet (UV) divergent but also infrared (IR) divergent, resulting from the soft and collinear regions of the loop momenta. We use dimensional regularization to treat both UV and IR divergences, and all the divergences show up as poles in ε, where the space-time dimension is d = 4 − 2ε.

Ultraviolet renormalization and operator mixing

The bare strong coupling constant in the regularized theory is denoted by â_s; it is related to its renormalized counterpart a_s through the coupling renormalization constant Z_{a_s}, with S_ε = exp[(γ_E − ln 4π) ε/2] and γ_E ≈ 0.5772... the Euler-Mascheroni constant. The beta function coefficients β_0 and β_1 are given by

β_0 = (11/3) C_A − (4/3) T_F n_f ,
β_1 = (34/3) C_A² − (20/3) C_A T_F n_f − 4 C_F T_F n_f ,

where for the SU(N) color factors we have C_A = N, C_F = (N² − 1)/(2N), T_F = 1/2.

Besides coupling constant renormalisation, the amplitudes also require the renormalisation of the effective operators in the effective Lagrangian, Eq. (2.1). Both composite operators that appear in our one- and two-loop amplitudes can develop UV divergences and thus have to undergo renormalisation, as derived in detail in the literature.
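The color-factor bookkeeping entering the beta function coefficients quoted above is easy to check mechanically. The following short Python sketch (illustrative only, not code from the paper) evaluates them in exact rational arithmetic:

from fractions import Fraction

def beta_coefficients(N=3, nf=5):
    # SU(N) color factors as quoted in the text.
    CA = Fraction(N)
    CF = Fraction(N * N - 1, 2 * N)
    TF = Fraction(1, 2)
    beta0 = Fraction(11, 3) * CA - Fraction(4, 3) * TF * nf
    beta1 = (Fraction(34, 3) * CA ** 2
             - Fraction(20, 3) * CA * TF * nf
             - 4 * CF * TF * nf)
    return beta0, beta1

print(beta_coefficients())  # (Fraction(23, 3), Fraction(116, 3)) for N=3, n_f=5

For N = 3 and n_f = 5 this reproduces the familiar values β_0 = 23/3 and β_1 = 116/3.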
In particular, a new renormalisation constant Z^L_{11} is needed as a counter term proportional to G_{μν}G^{μν} to renormalize the additional UV divergence resulting from amplitudes involving two insertions of the G_{μν}G^{μν}-type operator, which first appears at two-loop order in the class-B amplitudes. If we denote the amplitudes computed in the bare theory by M̂^B_i, the UV renormalized amplitudes M^B_i are obtained from them through an overall renormalisation constant composed of Z_{a_s}, the operator renormalisation constant Z_O and the contact renormalisation constant Z^L_{11}. The UV renormalized amplitude M^B_i can then be expanded in powers of a_s up to the two-loop level. In summary, the UV divergences that appear at the one- and two-loop level can be removed using coupling constant renormalisation through Z_{a_s}, together with the overall operator and contact renormalisation constants, Z_O and Z^L_{11} respectively.

Infrared factorization

The resulting UV finite amplitudes still contain divergences of infrared origin, which remain as poles in the dimensional regularization parameter ε. These will cancel when combined with the real emission processes in the computation of observables. While these divergences disappear in physical observables, the amplitudes beyond leading order exhibit a very rich universal structure in the IR region. Catani predicted the IR divergences of n-point two-loop amplitudes in terms of certain universal IR anomalous dimensions, exploiting the iterative structure of the IR singular parts of UV renormalized amplitudes in QCD. These could be related to the factorization and resummation properties of QCD amplitudes, and were subsequently generalized to higher loop orders. Following this approach, the IR poles of our renormalized amplitudes are captured by the universal IR singularity operators I_g^{(1)}(ε) and I_g^{(2)}(ε) for external gluons.

It is known that the terms which remain finite or vanish as ε goes to zero, i.e. of O(ε^k) with k ≥ 0, in the subtraction operators I_g^{(1)} and I_g^{(2)} are arbitrary; they define the scheme in which these IR divergences are subtracted to obtain the IR finite parts of the amplitudes, M_i^{B,(j),fin}. These scheme-dependent terms in the finite part of the virtual contributions will cancel against those coming from the soft gluon emission subprocesses at the level of observables. The only scheme dependence that is then left in a physical subprocess coefficient function is due to the subtraction of collinear initial state divergences through mass factorization, parametrized by a factorization scale μ_F.

Calculation of the Amplitude

For the amplitudes of class-B, we needed to consider only those diagrams which involve a pair of vertices resulting from the first term of the effective Lagrangian; hence all these amplitudes are proportional to C_H². The Feynman diagrams up to two-loop level were obtained with the help of the package QGRAF. There are 2 diagrams at tree level, 37 at one loop and 865 at two-loop order in perturbation theory. The output from QGRAF was then used for further algebraic manipulations involving traces of Dirac matrices and the contraction of Lorentz and color indices, using two independent sets of in-house routines based on the symbolic manipulation package FORM. The entire manipulation was performed in d = 4 − 2ε dimensions, and most of the algebraic simplifications were done at this stage. We used the Feynman gauge throughout and hence allowed ghost particles in the loops. External ghosts are not required due to the transversal nature of the tensorial projectors, Eq. (2.11). At this stage, we obtain a large number of Feynman integrals with different sets of propagators, each containing scalar products of the independent external and internal momenta.
Using the REDUZE2 package, we identify the momentum shifts that are required to express each diagram in terms of a standard set of propagators (called an auxiliary topology). The auxiliary topologies in the two-loop corrections to the class-B process are identical to those in equal-mass on-shell vector boson pair production at this loop order; they were used to compute the two-loop corrections to q q̄ → V V and were subsequently extended towards non-equal gauge boson masses.

It is well known that the resulting Feynman integrals are not all independent: they can be expressed in terms of a smaller set of scalar integrals, called master integrals (MIs), by using integration-by-parts (IBP) identities. Further simplifications can be made by exploiting the Lorentz invariance of the integrands, resulting in Lorentz invariance (LI) identities. These identities can be solved systematically using a lexicographic ordering (the Laporta algorithm) to express any Feynman integral in terms of the master integrals. Several specialized computer algebra packages, for example AIR, FIRE, REDUZE2 and LiteRed, implement such reductions so that one ends up with only MIs. We performed two independent reductions of the integrals in the two-loop class-B amplitude, one based on the Mathematica package LiteRed and the other based on REDUZE2.

Counting kinematical crossings as independent integrals, we can express the one-loop amplitude in terms of 10 master integrals, while the two-loop amplitude contains 149 master integrals. These master integrals are two-loop four-point functions with internal massless propagators and two massive external legs of equal mass. They were computed analytically as Laurent series expansions in ε and expressed in terms of generalized harmonic polylogarithms. An alternative functional basis can be obtained in terms of logarithms, classical polylogarithms Li_{n≤4} and the multiple polylogarithm Li_{2,2} by matching the original expression at the symbol level. We use the master integrals in this latter representation. Substituting the MIs, we obtain the bare amplitudes M̂^B_i. The finite parts of the two-loop amplitudes show stable numerical behavior and display a non-trivial dependence on the process kinematics, also close to the production threshold, x = 0.

In the numerical evaluation, the large rational coefficients of the classical polylogarithms can introduce numerical instabilities if we do not demand high enough precision. In particular, there are large cancellations between the numerators and denominators of rational functions. Therefore, we evaluate the polylogarithms at double precision, and the rational coefficients at even higher precision.
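The precision issue described above can be mimicked with a toy example: a difference of large, nearly cancelling coefficients multiplying a polylogarithm loses all digits at fixed precision, which an adjustable working precision (here via the mpmath library; the snippet is illustrative, not the paper's code) avoids:

import mpmath as mp

def difference(dps):
    # Evaluate c1*Li4(z) - c2*Li4(z) with c1 - c2 = 1 at the given precision.
    mp.mp.dps = dps
    z = mp.mpf('0.3')
    c1 = mp.mpf(10) ** 20 + 1   # large, nearly cancelling coefficients
    c2 = mp.mpf(10) ** 20
    return c1 * mp.polylog(4, z) - c2 * mp.polylog(4, z)

print(difference(15))   # 0.0 -- the '+1' is lost at ~double precision
print(difference(50))   # ~0.30599 = Li4(0.3), recovered at 50 digits

The same pattern, fixed precision for the transcendental functions and higher precision for the rational prefactors, mirrors the evaluation strategy described above.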
Discussion and Conclusions

The two-loop massless corrections to the g + g → H + H amplitude derived above complete the set of purely virtual amplitudes required for the prediction of the N^3LO corrections to Higgs boson pair production in gluon fusion, in the infinite top quark mass limit. All other amplitudes relevant at this order are either (class-A) known already from the calculation of inclusive gluon fusion Higgs boson production at N^3LO, or (class-B) amount to one-loop and tree-level amplitudes that can be computed using automated tools.

The combination of these amplitudes into a fully differential N^3LO calculation of Higgs boson pair production still requires substantial advances in the techniques for handling infrared singular real radiation configurations at this order, with first steps having been taken most recently. More imminent applications of the newly derived results to Higgs boson pair production are the computation of fixed-order soft-virtual corrections to the total cross section, or of the hard matching coefficients in the resummation of corrections at low pair transverse momentum.

In this paper, we have computed all virtual amplitudes that contribute to the production of a pair of Higgs bosons from the gluon-gluon initiated partonic processes at order a_s⁴. The calculation is performed in an effective field theory where the top quark is integrated out, and all other quarks are massless. The exact calculation of top quark mass effects is currently out of reach at this order, but reweighting procedures allow one to reliably quantify these effects. We deal with two classes of amplitudes separately, named class-A (one effective operator insertion) and class-B (two effective operator insertions). The amplitudes of class-A can be related to the gluon form factor, which is already known up to three-loop order, while the amplitudes of class-B were previously known only up to one loop. Our explicit computation of the two-loop corrections to the class-B amplitudes now completes the perturbative expansion of the g + g → H + H amplitude to order a_s⁴. We observe that the pole structure of the amplitude is in agreement with predictions from infrared factorization, and we provide (as ancillary files with the arXiv submission of this article) a numerical code to evaluate its finite remainder. The newly derived amplitudes open up opportunities for a new level of precision phenomenology in Higgs boson pair production.
package Data.Verticaltype;

import Data.judgement.change_direction;

import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.LinkedList;

import com.sun.istack.internal.NotNull;

import m_Exception.runtime.insertException;

/**
 * @author rkppo
 *
 * set: on deletion, entries are cleared from the array automatically, without
 * any complex re-packing of the array.
 * The Integer values in the hash map record the array position of each element.
 * The elements array handles memory allocation and is the main storage.
 * Each node in elements stores its own insertion order in index_label.
 * The nodes are chained into a list so that auto_changed_indexlocal can be used.
 * empty_array_local records the empty slots in the array, so that realloc does
 * not have to rearrange the existing nodes.
 *
 * Because of this structure, every position value passed in is a linked-list
 * position, not an array index; a conversion between the two is always required.
 */
public class Vertical_Set<E> extends Vertical_Column<E> {

    Vertical_Node<E> head, bottom;
    LinkedList<Integer> empty_array_local;
    HashMap<E, Integer> singal;             // element -> array_local
    HashMap<Integer, Integer> Addr_mapping; // insert_local -> array_local; hard to keep up to date in real time
    HashMap<E, Integer> Addr_element;       // element -> insert_local; trades space for time

    public Vertical_Set(String Vertical_name, String vertical_type) {
        super(Vertical_name, vertical_type);
        head = null;
        bottom = null;
        singal = new HashMap<>();
        empty_array_local = new LinkedList<>();
        empty_array_local.add(0);
        Addr_mapping = new HashMap<>();
    }

    // Initialization method, not yet finished.
    public Vertical_Set(LinkedList<Vertical_Node> newelement, String Vertical_name,
                        String vertical_type, boolean index) throws insertException {
        this(Vertical_name, vertical_type);
        realloc(newelement.size() + 1);
        Iterator<Vertical_Node> initele = newelement.listIterator();
        // The loop count here should match the length of newelement exactly.
        for (int loop = 1; loop < mem; loop++) {
            E e = (E) initele.next().getelement();
            empty_array_local.add(loop);
            insert(e); // if an insert fails here, loading of the whole table should fail hard
        }
    }

    @Override
    public Vertical_Node getindex_element(int index) {
        if (index > Size)
            return null;
        return elements[this.linklocal2arraylocal(index)];
    }

    @Override
    public LinkedList<Vertical_Node> getindex_elements(Integer... index) {
        // Arrays.sort(index); // incoming indices should already be ascending; disabled for now
        LinkedList<Vertical_Node> Return = new LinkedList<>();
        for (int loop = 0; loop < index.length; loop++) {
            Return.add(getindex_element(index[loop]));
        }
        return Return;
    }

    @Override
    public LinkedList<Vertical_Node> getAll() {
        Vertical_Node<E> node = head;
        LinkedList<Vertical_Node> Return = new LinkedList<>();
        // Iterate up to and including the last node (checking node itself,
        // not node.getNextNode(), so the tail element is not skipped).
        while (node != null) {
            Return.add(node);
            node = node.getNextNode();
        }
        return Return;
    }

    @Override
    protected Vertical_Node<E> insert(E element) throws insertException {
        int local = this.empty_array_local.removeFirst();
        Vertical_Node<E> insert = null;
        try {
            if (singal.get(element) != null)
                throw new Exception();
            singal.put(element, local);
            insert = new Vertical_Node<>(element, Size, null);
            elements[local] = insert;
            bottom.setNextNode(insert);
            insert.setpreviousNode(bottom);
            bottom = insert;
        } catch (NullPointerException e) {
            // First element: the list is still empty.
            if (bottom == null) {
                head = insert;
                bottom = head;
            }
        } catch (Exception e) {
            this.empty_array_local.addFirst(local);
            throw new insertException("unique constraint violated");
        }
        Addr_mapping.put(this.Size, local);
        Size++;
        return insert;
    }

    @Override
    public LinkedList<Vertical_Node<E>> insert(E... element) throws insertException {
        LinkedList<Vertical_Node<E>> Return = new LinkedList<>();
        for (int loop = 0; loop < element.length; loop++) {
            Return.add(insert(element[loop]));
        }
        return Return;
    }

    @Override
    public LinkedList<Vertical_Node<E>> delete(Integer... line) {
        LinkedList<Vertical_Node<E>> Return = new LinkedList<>();
        for (int loop = 0; loop < line.length; loop++) {
            int local = this.linklocal2arraylocal(line[loop]);
            // Decrement the positions of the elements behind the deleted one.
            this.elements[local].auto_changed_indexlocal(change_direction.del);
            try {
                this.elements[local].getpreviousNode().setNextNode(this.elements[local].getNextNode());
            } catch (Exception e) {
                // Node at the head of the list: a NullPointerException lands here.
                head = this.elements[local].getNextNode();
            }
            try {
                this.elements[local].getNextNode().setpreviousNode(this.elements[local].getpreviousNode());
            } catch (Exception e) {
                // Node at the tail of the list: a NullPointerException lands here.
                bottom = this.elements[local].getpreviousNode();
            }
            // Keep the uniqueness index and the size in sync with the deletion.
            singal.remove(this.elements[local].getelement());
            this.empty_array_local.add(local);
            Size--;
            Return.addLast(this.elements[local]);
        }
        return Return;
    }

    @Override
    public LinkedList<Vertical_Node> update(E element, Integer... line) throws insertException {
        LinkedList<Vertical_Node> Return = new LinkedList<>();
        E oldelement;
        int loop;
        try {
            if (singal.get(element) != null || line.length != 1)
                throw new Exception();
            for (loop = 0; loop < line.length; loop++) {
                // Convert the list position to an array slot before touching elements.
                int local = this.linklocal2arraylocal(line[loop]);
                singal.put(element, local);
                Return.add(this.elements[local].CopyNode());
                oldelement = (E) this.elements[local].getelement();
                this.elements[local].updateelement(element);
                singal.remove(oldelement);
            }
        } catch (Exception e) {
            throw new insertException("unique constraint violated");
        }
        return Return;
    }

    @Override
    public LinkedList<Integer> Pick_Condition(String conditionSymbol, E conditionValue) throws Exception {
        LinkedList<Integer> Return = new LinkedList<>();
        Iterator<E> temp = this.singal.keySet().iterator();
        Vertical_Node<E> tnode = new Vertical_Node(conditionValue, null, null);
        switch (conditionSymbol) {
            case "is": case "=": {
                int linklocal = elements[singal.get(conditionValue)].get_indexlabel();
                Return.add(linklocal);
                break;
            }
            case "isnot": case "!=": {
                while (temp.hasNext()) {
                    E telement = temp.next();
                    if (tnode.compare(telement) == 0) // skip the matching element, keep all others
                        continue;
                    int linklocal = elements[singal.get(telement)].get_indexlabel();
                    Return.add(linklocal);
                }
                break;
            }
            case ">": {
                while (temp.hasNext()) {
                    E telement = temp.next();
                    if (tnode.compare(telement) != 1)
                        continue;
                    int linklocal = elements[singal.get(telement)].get_indexlabel();
                    Return.add(linklocal);
                }
                break;
            }
            case "<": {
                while (temp.hasNext()) {
                    E telement = temp.next();
                    if (tnode.compare(telement) != -1)
                        continue;
                    int linklocal = elements[singal.get(telement)].get_indexlabel();
                    Return.add(linklocal);
                }
                break;
            }
            case ">=": {
                while (temp.hasNext()) {
                    E telement = temp.next();
                    if (tnode.compare(telement) == -1)
                        continue;
                    int linklocal = elements[singal.get(telement)].get_indexlabel();
                    Return.add(linklocal);
                }
                break;
            }
            case "<=": {
                while (temp.hasNext()) {
                    E telement = temp.next();
                    if (tnode.compare(telement) == 1)
                        continue;
                    int linklocal = elements[singal.get(telement)].get_indexlabel();
                    Return.add(linklocal);
                }
                break;
            }
        }
        return Return;
    }

    @Override
    public Vertical_Column<E> checkout(@NotNull String newVerticalName, Integer... p_c) {
        Vertical_Array<E> Return;
        try {
            Return = new Vertical_Array<>(getindex_elements(p_c), Vertical_name, Vertical_type, false);
        } catch (insertException ex) {
            Return = new Vertical_Array<>(Vertical_name, this.Vertical_type);
        }
        return Return;
    }

    @Override
    public void initIndex() {}

    @Override
    public void dropIndex() {}

    @Override
    public void run() {
        throw new UnsupportedOperationException("Not supported yet."); // To change body of generated methods, choose Tools | Templates.
    }

    private int linklocal2arraylocal(int linklocal) {
        Vertical_Node n = head;
        for (int loop = 0; loop < linklocal; loop++) {
            n = n.getNextNode();
        }
        return this.singal.get(n.getelement());
    }
}
With over 1.4 million apps available on Google Play and over 1.2 million on the App Store, to say that smartphone users are spoilt for choice is an understatement. And it’s not just free apps that are crowding the stores. January 2015 was the App Store’s most profitable month ever, with users spending about half a billion dollars on apps and in-app purchases in the first week alone.

Think about that figure for a moment. Half a billion. That’s more than the entire budgets of Titanic and Avengers: Age of Ultron combined. And that’s what people like you and I spent on our favorite yoga app or must-have game in just one week.

There’s a je ne sais quoi about apps that go viral that every app developer would like to lay their hands on. I don’t claim to have all the answers, but an analysis of the flavors of the season – Dubsmash, Meerkat and Periscope – might help us distill the secrets behind building apps that drive fanatic downloads and sky-high engagement rates.

Dubsmash enjoys pride of place at number five on both the App Store and Google Play charts. This wildly popular lip syncing app, officially launched in November last year, raced to the top of the app heap in less than a week in its home turf, Germany. Since then, it’s been launched across 192 countries and has over 50 million users and counting.

Dubsmash’s premise is simple. Pick a sound – a song, a movie dialogue or anything from Dubsmash’s seemingly endless (though copyright-suspect) library – shoot a selfie video of yourself lip syncing to it, add filters to jazz it up and voila, your very own Dubsmash video is ready to go! Though it’s not a social network, the videos made with Dubsmash have endless social sharing potential.

Meerkat was hands-down the most talked about app at SXSW in Austin this year. A live video streaming app that allows you to record and stream whatever you want to your entire Twitter following has virality hardwired into it. As on Snapchat, videos streamed on Meerkat are transitory and can only be saved by the creator, to be uploaded to other video sharing sites later.

It became such a sensation that Twitter cut off Meerkat’s direct access to its social graph just before SXSW was to begin. Twitter then went on to buy Periscope, a Meerkat lookalike, and put its entire marketing muscle behind it. The result? Both Meerkat and Periscope have been hugely successful. Anyone with a smartphone can now stream video live and have millions of people tune in simultaneously. The recent Mayweather vs. Pacquiao fight stood testimony to the burgeoning popularity of live video.

However, now that it has the Twitter name (and clout) behind it, Periscope has left Meerkat far behind in sheer numbers (#9 vs. #60 in App Annie’s App Store rankings for May 2015).

But what can we learn from these three?

If nothing else, Dubsmash is mind-bendingly simple. The ease of use and extremely basic UI make it an app that even kids can use. In the case of Meerkat and Periscope, live video streaming is not new. Qik had live video way back in 2008. It never enjoyed even a fraction of the success these two apps are riding right now. The reason is simple – live video streaming was never so easily executable. Make it simple and the world will lap it up.

Dubsmash is the third iteration of a lip syncing, selfie video app that the creators dreamt up.
The first two versions didn’t do well. But that didn’t stop the team from tweaking it continuously till they had a real winner on their hands. The original apps were pared down to the basics and surprise, surprise, the new version simply works!

Take a cue from this viral phenomenon and test out your app ideas on a pre-launch review/PR service such as PreApps. What I like about this particular site is that it offers developers direct access to early adopters and beta testers – people whose inputs are hugely influential in driving awareness and ironing out bugs before you go all out.

Just lip synced to ABBA’s “Dancing Queen” in full costume? Of course you want to share that with your friends! Watching U2 live, in concert? By all means, you’d want to share the experience live with anyone who’d care to listen! All three apps recognize the new role of the end user as a publisher and help them tailor content with almost no effort at all. Dubsmash and its premise of lip syncing to funny music or movie dialogue is fun and extremely addictive, making it a downloader’s darling. Periscope and Meerkat open up new possibilities for live journalism, event collaborations and more.

Most apps try to let users share their content with friends via social media sharing features. In the case of Periscope and Meerkat, social is built right into their DNA. Both platforms are dependent on Twitter. Every time a user uploads a new video stream to either app, their entire Twitter following becomes a captive audience. Dubsmash does not have a social angle yet, but its videos are light and easy to share on any platform, making them super-popular on social media. The more people see what the app produces, the more line up to download it.

Many apps make the mistake of being way too far ahead of their time. While they get the privilege of being trailblazers, they barely get downloads. The key thing about apps that go viral is that they arrive at the right time. If still selfies were all the rage in 2014, video selfies are the next big thing, with better cameras on the iPhone 6, 6+ and the Samsung Galaxy Edge. Dubsmash rode the trend superbly.

Live streaming, as I noted before, is nothing new. But with 4G networks now common across the world, the ubiquity of WiFi connections, and smartphone cameras able to record 1080p video, all the key ingredients for a perfect viral storm are in place.

Rihanna established Dubsmash firmly on the world stage when she famously launched her new single on Dubsmash instead of YouTube, Vine or Instagram. Meerkat had some big name early adopters like Jimmy Fallon, Julia Louis-Dreyfus and Jeb Bush to thank, while Periscope has users like Ringo Starr and David Blaine among its ranks. The fact that these apps are highly social, combined with the “me-too” behavior of their fans, makes celebrity endorsement a huge trigger in taking apps viral.

This is a factor very few app makers even consider. Promoting your app online and on social media is fine. But how about doing the same at live events, in front of the right target audience? This is one thing Meerkat got right, hitting event marketing at SXSW right out of the park. The great word of mouth publicity that both Meerkat and Periscope generated at SXSW meant huge press mentions, thousands of downloads and, more importantly, active engagement from the best brains in the digital business.
Thanks to the SXSW fillip, anyone who had anything to do with digital media found out about the apps and probably downloaded them too.

What this goes to show is that there’s a lot more to app marketing than building a decent app, promoting it on the app stores, advertising it across social media and so on. Maybe it’s time to look at the fundamentals of your app first. You might even double down on your PR strategy while you’re at it.

The bottom line is, if you do what everyone else is doing, you’ll get the same results as everyone else. While you spend some time rethinking the box, I’m going to go create my latest avatar on MyIdol.

Read Next: Why is live video app Meerkat suddenly popular, and can it last?
def resample(y, samples=10000, replacement=False, unit=None):
    # Yield `samples` resampled copies of y (a Var, or an NDVar over cases).
    if isinstance(y, Var):
        pass
    elif isinstance(y, NDVar):
        if not y.has_case:
            raise ValueError("Need NDVar with cases")
    else:
        raise TypeError("Need Var or NDVar")

    out = y.copy(f'{y.name}_resampled')
    for index in permute_order(len(out), samples, replacement, unit):
        # Write the permuted data into the shared output object and yield it.
        out.x[index] = y.x
        yield out
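A hypothetical usage sketch for the generator above. The names Var and permute_order come from the surrounding library; the constructor and attribute access shown here are assumed, not verified:

# y = Var(data, name='reaction_time')                       # assumed constructor
# means = [float(r.x.mean()) for r in resample(y, samples=100)]

Note that the generator yields the same object on every iteration, so each result must be consumed (copied or reduced, as above) before advancing to the next sample.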
package resolver

import (
	"context"
	"strings"

	internalContext "github.com/dipdup-net/metadata/cmd/metadata/context"
)

// prefixes
const (
	PrefixTezosStorage = "tezos-storage:"
)

// TezosStorage -
type TezosStorage struct {
	ctx *internalContext.Context
}

// NewTezosStorage -
func NewTezosStorage(ctx *internalContext.Context) TezosStorage {
	return TezosStorage{ctx}
}

// Resolve -
func (s TezosStorage) Resolve(ctx context.Context, network, address, value string) ([]byte, error) {
	var uri TezosURI
	if err := uri.Parse(value); err != nil {
		return nil, newResolvingError(0, ErrorTypeTezosURIParsing, err)
	}

	// Fall back to the caller's network and address when the URI omits them.
	if uri.Network == "" {
		uri.Network = network
	}
	if uri.Address == "" {
		uri.Address = address
	}

	item, ok := s.ctx.Get(uri.Network, uri.Address, uri.Key)
	if !ok {
		return nil, newResolvingError(0, ErrorTypeKeyTezosNotFond, ErrTezosStorageKeyNotFound)
	}

	return item.Value, nil
}

// Is -
func (s TezosStorage) Is(link string) bool {
	return strings.HasPrefix(link, PrefixTezosStorage)
}
// Copyright (C) 2018-2021 Intel Corporation // SPDX-License-Identifier: Apache-2.0 // #include "gtest/gtest.h" #include "ngraph/ngraph.hpp" #include "ngraph/runtime/tensor.hpp" #include "ngraph/type/bfloat16.hpp" #include "ngraph/type/float16.hpp" #include "runtime/backend.hpp" #include "util/all_close.hpp" #include "util/all_close_f.hpp" #include "util/engine/test_engines.hpp" #include "util/known_element_types.hpp" #include "util/ndarray.hpp" #include "util/test_case.hpp" #include "util/test_control.hpp" #include "util/test_tools.hpp" using namespace std; using namespace ngraph; static string s_manifest = "${MANIFEST}"; using TestEngine = test::ENGINE_CLASS_NAME(${BACKEND_NAME}); static std::vector<float16> from_float_vector(const std::vector<float>& v_f32) { if (v_f32.empty()) { return std::vector<float16>(); } size_t num_of_elems = v_f32.size(); std::vector<float16> v_f16(num_of_elems); for (size_t i = 0; i < num_of_elems; ++i) { v_f16[i] = float16(v_f32[i]); } return v_f16; } static std::vector<float> to_float_vector(const std::vector<float16>& v_f16) { if (v_f16.empty()) { return std::vector<float>(); } size_t num_of_elems = v_f16.size(); std::vector<float> v_f32(num_of_elems); for (size_t i = 0; i < num_of_elems; ++i) { v_f32[i] = float(v_f16[i]); } return v_f32; } static const std::vector<float> input_data = { 0.85943836, 0.009941814, 0.004292889, 0.54598427, 0.8270831, 0.49770153, 0.9035636, 0.19274887, 0.8589833, 0.88759327, 0.72343576, 0.057539318, 0.915801, 0.63455844, 0.25069925, 0.045601673, 0.29793364, 0.8492151, 0.6885839, 0.57419384, 0.009737609, 0.68192583, 0.7614807, 0.37603703, 0.51804876, 0.033039097, 0.63702065, 0.78960556, 0.5007368, 0.7248742, 0.2040932, 0.1211606, 0.76035476, 0.44004318, 0.95635134, 0.82913375, 0.225465, 0.009166263, 0.05445403, 0.5885675, 0.87822133, 0.14324947, 0.68606305, 0.3274419, 0.9169595, 0.732179, 0.04614906, 0.03505424, 0.84526163, 0.9972937, 0.89781004, 0.9987864, 0.24641308, 0.34678686, 0.22731997, 0.95805293, 0.595993, 0.8537836, 0.9174756, 0.17441267, 0.86681056, 0.15913424, 0.6638066, 0.522398, 0.51548326, 0.024979044, 0.1731268, 0.068090245, 0.6125645, 0.4865482, 0.2873719, 0.35936728, 0.64452374, 0.27963468, 0.59981745, 0.6309508, 0.507604, 0.23389837, 0.77500635, 0.4462004, 0.53165394, 0.6535075, 0.4306448, 0.21468966, 0.6925882, 0.11183031, 0.25347117, 0.2209481, 0.8060583, 0.34712377, 0.78980505, 0.16110454, 0.6376819, 0.78736854, 0.909368, 0.6915289, 0.24747796, 0.32442623, 0.22714981, 0.23976989, 0.25199527, 0.28412706, 0.32461873, 0.51917267, 0.8394496, 0.6324911, 0.28498915, 0.8887276, 0.90213394, 0.16050571, 0.32190812, 0.67677563, 0.8594967, 0.28917953, 0.1931407, 0.8282108, 0.14881423, 0.18073067, 0.8490643, 0.2356146, 0.86200285, 0.57409924, 0.94718546, 0.092213534, 0.34502912, 0.4719212, 0.60031396, 0.22602181, 0.3067876, 0.49529344, 0.11133887, 0.47633907, 0.13236542, 0.69677263, 0.8490109, 0.6685073, 0.24199674, 0.7983137, 0.37593383, 0.74520975, 0.16743147, 0.84144354, 0.93073046, 0.55940866, 0.67484015, 0.077098235, 0.69045097, 0.06949082, 0.6804774, 0.79804176, 0.49027568, 0.8843709, 0.5665486, 0.91798306, 0.47884017, 0.94707423, 0.98279756, 0.62054926, 0.8134105, 0.01336217, 0.78324115, 0.9938295, 0.99227554, 0.66681916, 0.38842493, 0.3835454, 0.120395586, 0.5478275, 0.13309076, 0.9468553, 0.24595714, 0.0057277656, 0.14570542, 0.31220108, 0.41687667, 0.679465, 0.5731583, 0.7383743, 0.013198466, 0.34619793, 0.9278514, 0.48510832, 0.46039802, 0.8171329, 0.5041023, 0.37600085, 0.124404594, 0.4201713, 0.7470036, 
0.7340853, 0.8449047, 0.137517, 0.14771219, 0.99655616, 0.2178388, 0.4121613, 0.8655656, 0.32849622, 0.7574791, 0.95230037, 0.5806251, 0.9598742, 0.7183528, 0.042957753, 0.2926446, 0.5882527, 0.05208914, 0.3216481, 0.5205192, 0.5095992, 0.011508227, 0.5209922, 0.78207654, 0.34570032, 0.7968098, 0.4619513, 0.0047925604, 0.029039407, 0.7673424, 0.571703, 0.44400942, 0.82529145, 0.29335254, 0.34418115, 0.48119327, 0.38321403, 0.31083322, 0.7179562, 0.41055596, 0.06207573, 0.8747831, 0.6018095, 0.4483476, 0.16189687, 0.8611539, 0.79723805, 0.42178747, 0.95597315, 0.5534858, 0.33466807, 0.36827618, 0.60728735, 0.6582703, 0.6790265, 0.870856, 0.8868432, 0.43860948, 0.32468447, 0.77624434, 0.3403061, 0.14144918, 0.23022941, 0.07176102, 0.06941459, 0.37346482, 0.9120822, 0.65890974, 0.77746564, 0.4515671, 0.45455948, 0.15909587, 0.8017096, 0.6259673, 0.6117355, 0.77020043, 0.08495594, 0.30376136, 0.55266386, 0.8497134, 0.91790336, 0.86088765, 0.88179666, 0.9009849, 0.97200614, 0.94119, 0.77911216, 0.8057816, 0.14040896, 0.66522235, 0.6649202, 0.048396785, 0.75035393, 0.4520953, 0.9877601, 0.46115568, 0.2167145, 0.9271302, 0.39395386, 0.68578094, 0.576275, 0.20754486, 0.5408786, 0.46040633, 0.18199016, 0.66303253, 0.6288556, 0.14313427, 0.91675115, 0.36198065, 0.51337945, 0.84241706, 0.22333568, 0.38011634, 0.024615016, 0.19370414, 0.23593484, 0.32207185, 0.47971123, 0.6202779, 0.6944977, 0.43612957, 0.07961436, 0.57468814, 0.100025274, 0.42476946, 0.95338464, 0.666547, 0.8683905, 0.52689695, 0.6284723, 0.85813546, 0.4865953, 0.8269112, 0.08833949, 0.69269264, 0.41784903, 0.5969149, 0.07599888, 0.14184453, 0.49042618, 0.44027725, 0.6256328, 0.2716237, 0.0999099, 0.09831784, 0.92469853, 0.24196884, 0.9073526, 0.7523511, 0.7761173, 0.28489882, 0.96349007, 0.5884645, 0.74933976, 0.06400105, 0.4376275, 0.34752035, 0.6006149, 0.034923803, 0.066874385, 0.9790322, 0.5558188, 0.97579825, 0.025802653, 0.537738, 0.24921915, 0.111012295, 0.85987717, 0.781183, 0.69588315, 0.94621634, 0.74946797, 0.6949375, 0.009165181, 0.91075164, 0.72913235, 0.25934777, 0.19463088, 0.5283304, 0.9241759, 0.0563183, 0.74323857, 0.43722472, 0.2958358, 0.85980684, 0.029655656, 0.362904, 0.19682994, 0.37778872, 0.09406928, 0.23010127, 0.44393733, 0.420214, 0.39723217, 0.13777487, 0.06385251, 0.9535715, 0.89861375, 0.2463547, 0.673834, 0.8008994, 0.0861585, 0.6613363, 0.79498637, 0.79322547, 0.083214305, 0.577025, 0.58655965, 0.119723536, 0.0012204717}; static const std::vector<float> expected_dft1d_results = { 6.329814, 4.2950764, -0.8105316, -0.7187835, -0.059136264, 0.2709784, 0.82793635, 0.57937646, 0.5997731, -1.3291739, 1.188664, 1.462941, -0.01811248, -1.8314927, 0.16004556, -2.219835, 1.0620322, -1.0679832, -0.68610185, 0.658314, 4.627743, 4.5935497, -0.78950775, -0.32600924, -1.4706655, -1.1615934, 0.708719, 1.4568751, -1.0970218, -0.39268675, -0.5990571, -0.81545514, -0.39174145, -0.420258, 0.55279106, 2.339339, -0.59915966, 1.3964193, -0.8447231, 0.14907542, 6.2576666, 5.5670385, 0.25636938, -1.7026355, 1.161571, 0.12042561, 0.19768336, -1.3421875, -0.90698814, 1.4111948, 0.70803046, 0.5795436, 1.2021728, -0.5199567, -2.558736, -0.80762154, 1.1657354, -0.8685272, 1.2987087, -1.0047817, 5.6461143, 3.2111988, 0.2361581, 0.3099669, 0.6179653, 0.099535145, 1.0438079, -0.016701937, -0.88529384, -0.12907594, 0.64785606, -0.8428119, -0.058392793, -1.0488291, -0.4019828, 0.20333555, 0.45051938, 0.45967662, 1.3713523, -0.6549525, 5.5258985, 3.7522945, -1.8860855, -0.2230255, 0.8160669, -0.46607828, 0.123957604, 0.61024696, 
0.26978388, 0.9723815, 0.3050212, 0.69621503, 0.27244493, -1.0805726, 0.20593566, 1.5653824, -0.27690098, 0.8950307, -0.039584313, -0.18680441, 4.975611, 4.6955333, 0.19031112, -0.8860659, 0.91665065, -0.5264673, -0.4547393, 1.1623507, -1.4774656, 1.671129, 1.028168, -1.6014669, -1.2178835, -0.13447604, -0.14712845, -0.6739672, -0.3273949, -0.9012072, -0.9661755, 0.03590688, 4.771964, 5.244689, -0.03415192, -0.37281254, -0.49070793, -0.65789306, 0.8143984, -0.8913989, -0.19890547, 0.17876014, -0.9956009, 0.82810897, 0.55270624, -0.023768127, 1.5358362, 0.6981953, 0.23165298, 0.51040155, 2.4328363, 0.2267083, 6.4758024, 5.72882, -0.8707881, -1.110683, 0.12478554, 1.3484334, 0.3689712, 0.29180524, -0.8149491, -0.0922713, -0.33161288, 0.78140867, -0.9623072, 0.8999919, -2.1120539, 0.84492886, -1.5347936, 0.7440938, 1.3312622, -1.0220959, 3.8123238, 5.62084, 1.3551373, 0.6460793, -0.21639234, -1.2077228, 1.1639122, -0.05263084, 0.48105645, -0.5892652, 0.2349168, 1.128768, 0.42568994, 0.36398163, -1.2250046, 2.3513904, 0.64331245, 0.8099514, 1.1574583, 0.8668997, 5.59726, 5.659527, 0.48095328, 0.59446967, 1.1849049, 1.4709316, -1.2589264, -0.11577609, 0.6299068, -1.4621243, 0.7872094, 0.18096408, 0.5553762, -2.0060503, -0.4373122, 0.9938256, 0.89633095, -0.5491595, 0.8428093, 0.084472984, 4.52676, 4.351716, 0.73079205, 0.8098516, 0.27877963, -0.0073297992, 0.36545974, 0.6745955, -2.3818088, 1.5816333, -0.16544427, 0.51321346, -0.23699868, -0.13254744, 1.551896, 0.62098134, 0.7739359, 1.6108581, 0.36288044, -0.42423314, 5.0995026, 5.1843014, -1.1968713, 1.1790991, -0.018864498, -0.7500831, 0.0879575, 0.22010106, 1.1136081, 2.2893274, -0.6877146, -0.40740123, 0.046427906, 0.8681825, -0.50678635, 0.23051873, 0.35328788, -0.45622703, 0.1495475, -0.104907334, 4.8094087, 5.2818966, 0.49697292, 0.29568392, -0.4144543, -0.64546454, 0.31737912, -0.8962374, -1.0404948, 0.91764164, 0.6826862, 0.08073502, 0.33942595, 0.053232975, -1.1867946, 0.51120156, -1.1452568, -1.4197243, 0.82389224, 1.8939058, 6.882805, 6.4072084, -1.3024135, -0.22483894, -0.22082287, 1.0370905, -0.7639439, 0.6950346, -0.731326, 0.16821115, 0.0887468, -0.5732441, -0.40715322, -0.96244293, -0.89126545, 1.3140129, -0.42358512, 1.7674587, -0.6400819, -1.6113993, 4.4106574, 5.706909, -1.1110737, 0.10560027, -1.1108764, 0.34190884, 2.1167603, -0.067495525, -0.16237324, 0.2604496, -0.8129095, -0.42274237, -1.1412699, -0.0011268258, -0.63462454, -0.15172139, -0.7164279, 0.14801888, -0.3538928, 1.583736, 4.9876184, 4.2879796, -0.8491325, 0.5345522, -0.60507995, -0.9020085, 1.0447598, 0.21135187, -0.4787205, -0.3230412, 0.8076494, -0.04361339, 0.62797767, 0.15487206, -0.23772183, 0.69546384, 1.8609382, -1.7030516, 1.2658813, -0.6791475, 4.921037, 4.8929176, -0.0124401, -0.6873918, -0.21879943, -0.48610657, 0.36776963, 0.12423802, -0.7854952, 0.48838156, -0.5085067, -0.08865434, 1.1653454, 0.81965554, -0.6399579, -1.0967884, 1.4099771, -0.15370974, 2.8824244, 1.0534087, 4.7045717, 5.2045445, -0.6350576, 2.5321684, 0.6987691, -0.53839976, -0.09889791, 0.5662097, 0.4088725, 0.635128, -1.763303, -0.49720347, -1.0772469, 1.2422445, -0.3619956, -1.311133, 1.5846866, 1.0530244, -0.61141044, 0.74831486, 5.433625, 3.9661994, 2.006918, -0.8703619, -0.7658511, 0.0811044, 0.83877516, -0.63553256, -0.67563355, 1.7368636, 0.9372277, 1.8246815, 0.8615329, -0.18161502, 0.62479717, 0.2028623, 0.159001, 1.860977, 0.04177074, -0.49050322, 4.9402246, 4.0296063, -0.74729615, -0.27802998, -0.8077982, -0.5414143, 0.467114, 0.9016136, 2.1971147, -1.466963, 
-1.2350414, 1.0967304, -0.95607626, 0.51462483, 0.28838068, 1.0117096, -0.21846394, 0.114624545, -1.627146, -0.9431294}; static const std::vector<float> expected_dft2d_results = { 54.020195, 48.368538, -1.8721353, -3.7894967, 2.5850394, -0.7094516, 3.5357249, 1.6819549, -3.4001002, 0.23887074, 2.9735894, 2.3982158, 0.3599546, -5.801426, -4.427606, 5.2949734, 1.7113355, 1.428697, 5.8978443, -0.8472582, -3.288164, -0.099487126, -0.33851182, 2.614974, -2.766882, 0.18681616, 0.34976268, -0.2601711, 4.998401, -2.9831958, -1.6652081, 0.53361464, -0.9001389, -3.6454318, -3.7148805, -0.68562484, 2.0633714, -2.2154818, -3.3697965, 3.5273929, 1.5474558, -1.6305131, -5.3327236, 0.54002213, -1.6671672, 2.4493377, -2.2604918, 1.4117424, 2.1797671, 2.5013056, 0.8525213, 1.6570821, 1.717532, -2.101283, 4.6570606, -3.6786642, 0.8912736, -0.4010569, -5.9480867, 1.441097, 2.1150498, -1.4524796, -3.5035098, 3.0815587, -3.3185432, 4.7882123, 5.64722, -1.1192517, 1.8302126, -2.5760055, -0.41363025, 3.2350469, 1.4296081, 0.8722873, 6.1752787, -1.7328868, 2.312786, 4.4069357, 1.7721124, 3.3802934, -0.53283703, 3.7646027, 4.440572, -4.353462, -2.7639425, 3.6855025, 1.8912748, -2.5849285, -2.9895856, 1.1341677, 1.4818796, 0.7482485, -1.3077981, 1.0669674, -0.76039124, -10.8791685, 2.998129, -4.2489543, 0.41752052, -0.45298803, -0.62486386, 0.5913104, -0.36638862, -0.9528576, -0.16223967, -3.171127, 2.7200532, -3.8751457, 3.8895426, 1.0489256, -0.091531515, 6.992935, 4.5098467, -0.38218838, 0.6637606, -2.1199496, 3.9403267, -0.870952, 2.4287906, 1.9679271, 3.652341, -4.4909067, -1.4710087, 0.5256169, 5.4580984, -2.6554706, -0.058261395, 3.6613276, 0.5612789, 1.0594783, 4.5429516, -1.447232, -2.388829, 0.52541757, -6.1111097, -2.3621864, -1.4885365, -2.6265867, -4.4030347, 0.27728367, 3.9584684, -3.7618577, -3.128574, -2.8671994, 1.4171265, 0.02298975, -2.0790722, 1.6526843, 0.59488124, -3.2548752, -0.82249254, 1.3645289, -2.9066925, -3.4377484, -2.501403, -2.821631, -4.427053, -2.3529994, 0.6670886, -4.7455816, -2.160026, -1.0587022, 1.1341916, -0.9469211, 0.67554307, -4.0473633, -1.2422556, 4.538533, -0.739814, -3.22405, 1.2332113, -4.0489397, -4.560828, -3.5195189, 6.7066355, -2.8439593, -0.43901098, -3.9980454, -4.2256207, 3.0529652, 4.6105156, 2.720234, 2.3327744, -1.0400636, -0.048398018, 2.1603358, -0.22459112, 0.6870126, -0.926849, -7.2363615, 3.7953386, 3.195907, 3.8662248, -1.8919971, 0.91311014, -0.36923724, 3.0576966, 0.19861764, -0.09782998, -1.0179963, 50.71621, 49.313248, -2.6195984, 3.396334, -3.1849973, -2.4107025, 4.7431326, 1.7938776, -2.5362587, 6.287631, -2.656609, 1.4825039, -0.77803206, 2.3750808, -1.9940716, 2.0271082, 3.6380908, 2.822246, 2.2938647, 1.0260472, 3.248794, -3.05949, 2.0979533, 3.565119, 1.9497933, 0.2390036, -2.255065, 0.7849397, 1.9622431, 4.2382064, -3.2529292, 0.78572094, -2.9386084, 0.66875017, 5.743927, 4.850876, -4.8014383, 6.371132, -2.6618924, -1.8847032, -1.7702236, -1.1031301, 1.4129921, -0.080709964, -2.7634878, -3.6456683, 1.4174454, 3.4085226, 3.10102, 0.114031196, -2.4092412, 0.27725983, 2.8974152, -1.866328, -0.68216217, 2.249536, -0.42875588, -5.8182187, 5.347006, -6.2936745, 0.8000201, 3.651592, 1.3155181, 2.3413098, 2.1600244, 1.8733575, -2.4694557, 0.39358342, 2.020084, -0.062472403, -4.131041, -1.5137839, -2.0354557, 1.1957052, -0.6644075, -2.0442688, 2.0753646, 4.874056, -0.090800405, 1.3911223, 0.68129027, -4.0028048, -0.8021738, 0.43866205, 2.7812133, 0.4525791, -0.87565154, 1.2364697, -2.725146, 2.7965212, 4.148448, -1.9204504, -0.61004305, -4.790703, 
3.1498234, 0.79403657, 5.305445, 0.2803253, -3.67164, -4.3974924, -2.5132315, -0.9139994, 6.841936, -4.089568, -1.2774054, 0.9789283, 3.269153, -3.3947415, -7.5553513, 3.682307, 2.9227152, 2.3319635, 2.754105, -1.2598821, 1.4247041, -1.8540356, -2.675635, 1.2705915, 5.2202816, 6.206577, 0.4957786, 2.1150033, 5.8791704, 2.8043785, -0.37886655, 0.011162788, -1.0408137, -1.5385519, -8.079001, -0.68451786, 2.3513699, 3.0877895, 2.6497078, 1.3670976, 0.77233493, 2.2921152, -1.2679763, 2.113087, 4.990262, -0.046566606, 0.015865922, 1.1569002, -4.8347507, 1.9560149, 1.979923, 2.34512, -0.9634773, 4.3939066, -6.2031984, 0.8311275, -2.7740612, -2.9296994, -3.4624243, -1.4588313, 2.4724, -0.79477566, -0.4295609, 5.8110385, -2.6649034, -2.270977, -2.5511568, -3.1238616, -4.46209, 0.16335368, 1.9146351, 1.0459399, 2.8069792, -0.4705832, -4.0632596, -2.220704, 1.7770543, -0.5791014, -2.2041528, 3.026476, 5.324942, -0.7805673, 5.9275556, 0.14159381, -0.81569004, 4.1947803, -3.8557377, -0.5163199, 2.478963, -2.396379, -0.3930376, -0.96302, -0.9776549, 0.13852966, 0.26078847, 0.8342015, 2.3698487, 4.109933, 1.3575013, -0.5828376, -0.028537825, -0.53020877, 0.39626116, -1.7572733, -4.31769, -2.1674476}; static const std::vector<float> expected_dft3d_results = { 104.7364, 97.68179, -4.491728, -0.39316452, -0.59995466, -3.1201572, 8.278858, 3.4758341, -5.9363585, 6.5265055, 0.3169801, 3.8807175, -0.418082, -3.4263492, -6.4216776, 7.3220854, 5.3494234, 4.2509427, 8.191702, 0.17879319, -0.03937006, -3.1589758, 1.7594413, 6.180092, -0.8170867, 0.42582142, -1.9053001, 0.52476853, 6.9606423, 1.255014, -4.9181366, 1.319335, -3.838747, -2.9766817, 2.0290484, 4.16525, -2.7380676, 4.155652, -6.0316873, 1.6426877, -0.2227689, -2.7336447, -3.919732, 0.45931256, -4.4306555, -1.1963288, -0.8430467, 4.8202653, 5.280785, 2.6153364, -1.556721, 1.9343407, 4.614946, -3.96761, 3.9748988, -1.4291265, 0.46251905, -6.2192726, -0.60107887, -4.852579, 2.9150705, 2.1991146, -2.1879911, 5.4228687, -1.158518, 6.661569, 3.1777658, -0.7256692, 3.8502965, -2.6384768, -4.544671, 1.721262, -0.6058461, 2.067991, 5.5108714, -3.7771575, 4.388153, 9.280992, 1.681312, 4.7714148, 0.14845347, -0.23820269, 3.6383984, -3.9147997, 0.017270446, 4.138083, 1.0156215, -1.3484575, -5.7147317, 3.9306912, 5.630328, -1.1722009, -1.9178381, -3.7237349, 2.3894331, -10.085134, 8.303572, -3.9686286, -3.2541199, -4.850478, -3.1380959, -0.32268947, 6.475547, -5.0424256, -1.4396465, -2.1921992, 5.9892044, -7.269888, -3.665809, 4.7312326, 2.8311844, 9.324896, 7.2639513, -1.6420703, 2.0884657, -3.9739842, 1.2646922, 0.39964193, 7.649071, 8.174507, 4.148118, -2.3759027, 4.4081597, 3.3299959, 5.0792284, -2.6443086, -1.0990746, 2.1227744, -7.517721, 0.3749615, 6.894322, 1.6405574, 0.26087707, 1.8925169, -5.3387756, -0.07007182, -2.7565134, -0.51350284, 0.5872268, 0.23071745, 3.9743357, -2.6049578, -7.963324, -0.9111862, 3.3970497, 2.368112, -3.0425484, 6.0465913, -5.608317, -2.4237492, -3.5965526, -1.5651696, -6.369116, -4.896579, -0.029001951, -3.616405, -4.8566127, 3.4580388, -1.9978137, -7.016559, -4.71118, -4.1825647, -3.3278992, -0.7835678, 2.5901778, -3.0014238, 1.5647203, 4.06795, -4.803074, -5.444754, 3.0102665, -4.6280394, -6.764982, -0.49304247, 12.031577, -3.6245267, 5.488541, -3.8564541, -5.04131, 7.2477474, 0.7547778, 2.2039144, 4.8117356, -3.4364424, -0.44143593, 1.1973162, -1.2022457, 0.8255428, -0.66605973, -6.4021583, 6.1651874, 7.3058405, 5.2237253, -2.4748354, 0.88457155, -0.89944726, 3.453958, -1.558656, -4.4155188, -3.1854444, 3.303988, -0.9447114, 
0.7474582, -7.185831, 5.770039, 1.7012511, -1.2074116, -0.11192033, -0.86384296, -6.048759, 5.6302013, 0.9157127, 1.1379871, -8.176507, -2.433535, 3.2678652, -1.9267552, -1.393548, 3.6039736, -1.873306, -6.536957, 2.9600024, -2.4364662, -0.95014465, -4.716674, -0.052186966, 2.6048284, -1.0451086, 3.036159, -7.221403, 1.5877211, -0.25210607, 2.0384693, -4.3141813, -9.458808, -5.5365014, 6.8648105, -8.586614, -0.7079052, 5.412094, 3.3176801, -0.5273831, -6.745717, 0.62073076, 1.0963198, 6.0950055, -3.677938, -1.9967818, -0.921252, 2.387275, 3.261763, 1.3798212, -1.1798835, -0.23495495, 5.339221, -5.928199, 1.3200281, 5.417163, -11.295093, 7.7347717, 1.3150296, -5.1040716, -4.8190293, 0.74024755, -5.4785676, 2.914854, 8.116676, -1.5128357, -0.1898706, -2.5135324, 3.7174103, 4.7488313, 3.4650638, -0.32341766, 6.8396864, 0.31138325, 0.2374219, -0.46712062, 1.8629129, 1.9891711, -1.2141278, 7.7674093, 5.2427464, -4.792124, -5.5451555, 3.2329237, 2.766926, -3.8213987, -0.26443875, -1.6623533, -2.6665692, 2.6686997, -0.6977545, 5.85767, -3.9102163, -11.673204, -2.3073153, -4.529278, 4.0891604, 3.9445055, 1.8883687, 1.50531, -7.2083244, 3.1367111, 1.1151649, -4.1500554, -0.54910004, -0.48040384, 11.444895, -2.6333811, -3.0142484, 4.6609726, 1.755743, 0.87769306, -0.7609439, -0.26591438, 6.615961, -2.141545, -2.7914915, -4.2386503, 3.1565619, -6.6059103, -7.35018, -2.2787585, 5.836963, -2.6666338, 0.98255026, 5.199881, 8.640279, 1.7439961, 2.191582, -4.535021, -5.038538, -0.841679, -6.8834453, -4.654301, -0.220559, -4.7396717, -9.393296, 0.32385087, 3.9426038, -4.9187584, 1.7061774, -4.8232145, -0.5627973, -2.3221302, -1.1155958, -2.7412212, 6.798079, -4.0860014, 1.9515686, 4.2942266, 0.5557329, -1.9789174, -4.973804, -2.0268555, -3.9974911, -8.164038, 3.3319929, -2.474605, 0.39113098, 2.0651584, 5.5962815, -1.1102749, -1.2390921, -5.0933027, -4.0492353, 5.009116, 3.323446, -1.0033474, -0.54384375, -3.4698372, -2.3566747, -6.545992, 1.3816929, -2.0633929, -6.3665648, -4.13964, -3.4099324, -1.1418146, 8.466255, 3.2365537, -0.14618888, 1.3563147, 0.3446387, 3.1233552, 0.7530624, 0.548483, -1.1876376, -8.070564, 1.4254899, -0.9140264, 2.5087235, -1.3091599, 0.9416502, 0.16097029, 2.6614356, 1.9558911, 4.219861, 1.1494511}; static const std::vector<float> expected_dft1d_signal_size_results = { 6.138384, 4.8263664, 6.2014966, 4.641298, 6.2220087, 3.340786, 3.8338857, 3.458686, 6.393098, 6.578215, 4.9169006, 3.8786886, 5.0566025, 5.701084, 5.099263, 6.690686, 4.686806, 4.9369535, 5.471756, 4.315829, 3.6622288, -4.547995, 2.3657713, -4.4210963, 3.3341353, -3.560755, 3.0613456, -2.0019536, 4.9098253, -3.27648, 3.6913419, -2.365142, 5.2827687, -3.2966752, 5.633893, -2.990755, 2.4099903, -2.5792742, 3.009334, -3.318112, -0.8632047, -0.302661, -0.9658433, 1.3884914, -0.12056512, -0.5264965, 0.053616166, 0.5239285, -0.37204745, 0.6682581, 0.88452375, -1.4486976, -0.9331777, -0.6865864, -0.32639223, -1.2646291, -0.187691, 1.0645473, -0.45738214, 0.48435384, 1.2216191, -0.61395854, 2.4932637, -1.6152657, 0.99030006, -0.45764852, 2.4245698, 0.31936115, 2.9254415, 0.4994774, 0.2869299, -0.82977176, 1.759331, 0.66088116, 2.0010936, -0.18261093, 1.5729225, -0.6416664, 1.2219726, -0.4185537, -0.33628678, 0.21890742, -2.2292616, -0.9053817, 0.53581333, 0.36471185, 0.90989465, -0.067255855, 0.9978778, -0.6023144, 1.2700583, -0.055348396, 0.7769584, 0.20883593, 0.68046755, -1.3861976, -0.7743764, -0.17685926, -0.28369236, 0.7703819, 0.88469267, 0.7845876, 0.4245007, 1.0558772, 1.5855613, -0.88230014, 2.5918908, 
0.5176118, 0.9809585, -0.16784734, 0.44176394, -1.8440124, 1.8485708, 0.13407728, 0.99209386, -0.49863797, -0.05547434, 0.51047516, 0.95244277, 0.16795856, 1.4046597, 1.2883723, -0.4217211, -0.30820233, -0.94360316, -1.0276735, 1.8597782, -1.7973311, 0.17352016, 0.14103556, -0.53083634, -0.08058083, 0.58797073, -0.1623843, 1.0300912, -1.594127, -0.37183756, 0.6519355, -0.67296886, 1.4364773, 2.9115105, -0.62121296, 0.10118961, 0.4055282, -0.765203, 1.1095873, 0.25468233, -0.8044969, 0.37809694, 0.47051764, -0.5070367, -0.69294405, 1.678687, -0.05850029, -0.15289319, -2.1158576, -0.28707075, 0.64672077, 2.1430318, 1.8936268, 0.287481, -1.212002, -0.8066146, -0.024840236, 0.4002909, 1.5536453, 0.90662, -0.1200099, 0.2907222, 1.3641009, -1.2066911, 2.2924597, -0.10727149, -0.90693283, -1.7303126, -0.9309965, -0.39670166, 1.4576144, 1.8296418, 0.29156286, 0.914652, 0.48379737, 0.35427743, 1.0552206, 1.0729686, 0.66110367, 1.1590368, -0.883232, 1.5702324, 0.37410414, 2.7553983, 1.3418052, 0.4280968, 0.43797877, -0.42501903, 0.6896758, 0.17888534, 0.7881189, 1.906157, -0.893877, 1.6907314, -0.07711154, -0.08057277, -0.94700074, 0.118160814, 1.0535691, 0.013901293, -1.0134851, -0.49273467, 0.77010435, 0.61979324, -0.4796943, -0.9006692, -0.14570916, 0.20728627, -0.6043751, -0.77368677, 2.1912723, -1.0270727, -0.15626097, 1.6779256, -1.3633705, -1.419299, 0.4458414, 1.8119955, 1.3894738, -0.0533064, -0.2651497, 2.156881, 1.774823, 1.6229321, 0.83354133, 0.6217755, 2.7520647, -0.8899409, -0.5549828, 2.2334838, 1.866106, 2.2245884, 1.6349602, -0.17061183, -0.75332606, -0.7192313, 1.011065, 1.1424315, -0.14698744, -0.5063292, 0.047560766, 0.8158023, -0.99613833, 1.3294827, -0.884288, 1.9334476, -0.82400334, -1.0622213, 0.45689362, 0.3765804, -0.2486183, 0.5129931, -2.1530728, 1.6927868, -0.4909035, 0.07076289, 1.1461573, 1.2108444, 0.5570269, 0.57290983, 1.0781552, 0.2522127, 0.9315722, 0.82063264, -0.27504963, -1.298832, 0.5996642, 0.035723105, 0.9061513, 0.7085681, 1.3870897, -0.33059365, 1.4296496, -0.9227723, -2.0201705, -0.25814092, -0.044265598, 0.74225616, -0.7740435, 0.56227565, -0.7865786, 0.16598742, -0.13509351, 0.65011877, -0.5367288, 0.7644322, 1.754046, 0.14904708, 0.0060333014, 0.81808805, -0.023402452, 1.2871823, -1.2016544, -0.016474128, 1.0952724, -0.83657134, 0.959798, -0.29334623, 0.46025404, -1.329956, 0.88328505, 0.311208, 1.5458176, 1.058334, -0.65749556, 0.7922486, 1.2470598, 0.009132266, 0.07870856, 0.6166347, 0.009361565, -1.6813973, 0.3131196, -0.3617983, -1.6096013, -0.80183095, 0.60364366, 0.032118678, 0.53880775, 0.79869264, 2.0884013, 0.30808622, -1.1033678, -1.0830308, -1.5599371, 1.2167512, 0.439706, -0.76799685, -0.46132383, -1.6585693, -0.8193617, 0.15754253, 0.82434106, -1.4365332, 2.5602462, -0.59798455, 2.2706695, 0.094361365, 1.5161843, 1.576273, 0.8282173, -2.615784, 2.0659475, -0.70808023, 1.8205551, -0.23570198, 1.0002637, -0.84214133, 1.1558707, -0.8486479, 3.3955946, -0.9163475, 1.2400286, 1.7278013, -0.2593556, 0.12464893, 0.045035288, 0.14191893, 0.60069644, 0.6033013, -0.40642756, 0.30952126, 2.1911335, 0.38403878, -0.5504798, 0.7629653, 0.96752846, -0.77223957, -0.45594752, 1.2607243, -0.5419304, 0.06783953, 1.1299804, -2.9180245, 2.812955, -2.912982, 4.157113, -0.7707863, 4.184089, -1.2218096, 2.2556906, -2.2792397, 4.6580005, -2.2278588, 3.2439072, -1.7189349, 2.8687704, -3.8549495, 3.9684548, -3.5499556, 3.1096249, -1.6433489, 3.6931372, 4.762172, 6.8113427, 5.6586823, 3.9343526, 4.874974, 4.044377, 4.5118494, 4.560476, 4.814545, 5.255967, 4.8088293, 
4.8661695, 5.5842476, 3.047568, 6.3495092, 5.8194113, 3.9938629, 6.2386484, 5.357541, 4.734993, 4.009847, -1.85078, 3.257053, -2.863433, 3.2807407, -2.4543116, 2.3266344, -2.7742774, 5.0006027, -3.1107163, 3.1461582, -2.4130437, 1.9839633, -3.2893786, 4.9680586, -1.5064957, 4.93627, -2.2536325, 4.4328127, -2.371289, -0.09072271, 1.6559569, 0.9273602, -0.16627279, -0.15694867, -0.16094846, -0.30682713, 0.62848985, -0.16314042, -1.793005, -0.025120497, 0.035565466, 0.4509227, 1.029233, 2.6076002, -1.3557681, -0.042177677, -1.8681216, 0.047852248, -1.0646176, 3.5719476, 0.61097944, 1.9404714, -1.8996478, 1.4736449, -0.3556636, 0.7955406, -0.70645106, 0.106241465, -1.4871876, 1.4906516, -0.5542434, 1.8523693, -1.8795702, 0.20563364, -1.7517232, -0.2156514, -0.42685848, 1.2532125, 0.29982674, 0.6122022, -1.2758396, -0.7942943, -1.2208992, -0.28703833, -0.6799724, -0.22529805, 0.88721895, -1.6784416, 0.6357958, -0.40500844, -1.1880269, -1.3736604, 0.27873987, 0.9415196, 1.5691454, 0.637203, 0.6283375, 0.8025869, -0.73763883, 0.07619148, 0.29328048, -0.21484284, 0.39326593, 0.2614212, 0.25093663, 1.1460452, -0.42564535, 1.2621714, 0.7867665, -0.9763881, 0.67735475, 1.3954227, 0.8466128, 2.6533723, -1.3165393, 1.0205896, -1.2907634, -0.09324902, 0.19477385, -0.10201472, 1.2100208, 0.8927874, 1.1437554, -0.27520463, -0.18907726, -0.1451918, 0.3773734, -1.0439434, -0.35780138, 1.1060231, 1.0964282, 0.2501399, 0.31307727, -0.13760762, -0.86377877, -0.49448854, 0.09268577, 0.74952096, 0.82891256, 1.9546115, 1.2895672, 2.1169534, -1.0286292, 0.0854094, 0.63047266, 1.0325564, -1.0493125, 0.31182784, 2.3592472, 0.69874203, -0.92576516, 1.5970948, 0.7720525, 0.9282105, -0.13239098, 1.5795128, -0.7387508, 0.9950645, 0.11649346, 0.7381579, -0.9112861, -1.0479062, -0.9858517, -0.31902313, -0.43754727, -1.9271741, 0.41418013, 1.5709126, 0.12488553, 0.34839654, -0.14153089, 1.2988961, -0.17553245, 0.36363417, -0.4992725, -0.87406987, -1.5621511, 0.52947193, 0.17129752, -0.19691896, 0.88363117, 0.5983897, 1.4228462, -1.309372, 1.6419725, 2.096242, 1.3451272, 0.21495643, 0.16032922, -0.21870668, -2.3123596, 1.511457, -1.2067473, 0.30244982, -0.5896039, -0.20020528, -0.17678946, 0.646347, 0.12540624, 0.8411275, 0.29581466, 1.0424525, -0.3198546, 1.5812268, 1.633207, 0.036333233, -1.9386438, 0.4908937, 0.4255972, -3.0946343, 0.4557737, -1.538063, -1.0618666, -0.766645, 0.09507492, -1.1704439, -0.58377063, 0.06451824, 0.084664315, -0.33639127, 0.43388176, 0.7445558, 0.56547284, 0.20360313, -0.52746487, -0.22392502, 0.10510802, 0.2932141, 0.13039428, 0.2339833, 1.1078603, 0.07111454, 1.674398, 0.24977088, 0.7773813, 0.10618341, 1.3232847, 0.07770634, 0.8410483, 0.6371973, 1.1520991, 1.6076822, -0.553284, 0.0399023, 1.6575105, -1.002435, -1.153806, -0.338341, 0.75674164, -1.9532704, -0.16773497, -0.09083623, -0.09499304, -0.15297657, 0.6092089, 1.1756519, -0.8699633, 0.57320195, 0.77921844, 0.38325477, -0.4647501, 0.16934802, -0.9541189, 1.8387299, -0.2722485, -0.9011179, 1.2189366, 1.0526755, 1.2198145, -0.66335034, 2.4358046, -0.0044496655, 2.4705029, 0.7398137, 1.1581391, -0.08892931, -1.3800118, 0.39516693, 0.7783501, -1.6864198, 0.90398335, -0.09139767, 0.18852606, -0.7292757, -0.7595531, -0.30982962, -0.37637365, 0.27046034, -0.2601264, 0.06654024, 0.83308995, 2.1443768, 0.7846114, 0.72724646, 0.43702295, -1.3782393, -1.555314, 1.0024056, 0.96103704, 0.62146187, 2.4383464, 0.97525114, 0.1517681, -0.05941461, 0.20787807, -0.7399595, 1.4447442, 0.370912, 1.5718691, 0.36367816, 1.2211394, 1.2772232, 0.46179056, 
1.0423609, -0.1160976, -1.8006848, 0.2063675, 0.699636, -0.2978242, 0.36548108, 0.13973325, 0.06818205, -0.8364538, -1.8770711, -0.46342957, 0.5138623, -0.7012725, 1.3353106, -0.7529058, -0.5607584, -0.3658438, 1.3651763, -0.8271546, -1.3937892, -0.4218138, -1.5759501, 0.052277893, -0.79160595, 1.0530065, -0.25257057, 1.7259041, -0.09510864, 0.31564656, -1.4286227, 2.806394, -2.0088015, 0.6337404, -1.4553217, 0.3904129, -0.8321003, 2.0365574, -0.47588396, 0.03407097, 0.08727789, 2.440409, -1.3018095, 1.9136591, 1.5979958, 1.496789, -0.2709299, -0.38308293, -1.0800201, -0.7544405, 0.074904405, 1.2379612, -0.62439823, 0.5188385, -0.05306366, 1.060843, -0.17591527, -0.21396813, -0.27043432, 0.16332026, -0.57039225, -0.76971775, -0.22342275, -0.28223512, -0.66207, -1.0938429, -4.0251827, 4.238682, -2.3085427, 4.3264065, -1.419694, 3.9545622, -3.0023227, 3.424511, -1.9520879, 3.0750623, -3.127586, 3.9366179, -1.3875456, 3.5732715, -3.2088501, 5.656434, -3.9873497, 3.1138892, -2.331269, 4.533456}; static const std::vector<float> expected_dft1d_bfloat16_results = { 6.3125, 4.28125, -0.804688, -0.722656, -0.0551758, 0.271484, 0.832031, 0.578125, 0.601563, -1.32031, 1.1875, 1.45313, -0.0197754, -1.82813, 0.15332, -2.21875, 1.05469, -1.0625, -0.6875, 0.660156, 4.625, 4.5625, -0.785156, -0.332031, -1.46875, -1.15625, 0.710938, 1.45313, -1.09375, -0.394531, -0.59375, -0.8125, -0.388672, -0.419922, 0.546875, 2.32813, -0.59375, 1.39063, -0.84375, 0.143555, 6.25, 5.5625, 0.251953, -1.70313, 1.16406, 0.120117, 0.195313, -1.34375, -0.90625, 1.40625, 0.699219, 0.574219, 1.19531, -0.515625, -2.5625, -0.804688, 1.15625, -0.859375, 1.28906, -1, 5.625, 3.21875, 0.240234, 0.308594, 0.617188, 0.0947266, 1.04688, -0.0205078, -0.875, -0.126953, 0.640625, -0.84375, -0.050293, -1.04688, -0.40625, 0.207031, 0.443359, 0.458984, 1.375, -0.65625, 5.5, 3.75, -1.88281, -0.226563, 0.816406, -0.464844, 0.121582, 0.609375, 0.269531, 0.960938, 0.304688, 0.695313, 0.273438, -1.07813, 0.207031, 1.5625, -0.277344, 0.890625, -0.0373535, -0.185547, 4.9375, 4.6875, 0.191406, -0.882813, 0.914063, -0.53125, -0.455078, 1.16406, -1.46875, 1.66406, 1.01563, -1.59375, -1.21875, -0.126953, -0.137695, -0.671875, -0.324219, -0.902344, -0.960938, 0.0281982, 4.75, 5.25, -0.034668, -0.378906, -0.492188, -0.65625, 0.816406, -0.890625, -0.201172, 0.173828, -0.996094, 0.828125, 0.554688, -0.020752, 1.53125, 0.691406, 0.227539, 0.503906, 2.42188, 0.220703, 6.4375, 5.6875, -0.867188, -1.10156, 0.128906, 1.34375, 0.363281, 0.289063, -0.8125, -0.0976563, -0.328125, 0.78125, -0.960938, 0.898438, -2.09375, 0.847656, -1.53125, 0.742188, 1.32813, -1.03125, 3.79688, 5.625, 1.34375, 0.640625, -0.213867, -1.20313, 1.15625, -0.0522461, 0.476563, -0.585938, 0.228516, 1.125, 0.421875, 0.363281, -1.21875, 2.34375, 0.644531, 0.804688, 1.15625, 0.863281, 5.59375, 5.65625, 0.484375, 0.59375, 1.17969, 1.46875, -1.25781, -0.115723, 0.628906, -1.46875, 0.789063, 0.179688, 0.554688, -2, -0.435547, 0.992188, 0.898438, -0.546875, 0.847656, 0.0820313, 4.5, 4.3125, 0.726563, 0.8125, 0.273438, -0.00793457, 0.365234, 0.671875, -2.375, 1.57813, -0.167969, 0.511719, -0.239258, -0.128906, 1.54688, 0.625, 0.769531, 1.60938, 0.363281, -0.417969, 5.09375, 5.1875, -1.1875, 1.17188, -0.0154419, -0.746094, 0.0834961, 0.225586, 1.10938, 2.28125, -0.6875, -0.410156, 0.0449219, 0.867188, -0.507813, 0.229492, 0.353516, -0.457031, 0.145508, -0.108887, 4.78125, 5.25, 0.498047, 0.296875, -0.410156, -0.644531, 0.320313, -0.898438, -1.03125, 0.914063, 0.675781, 0.0810547, 0.335938, 0.0527344, 
-1.1875, 0.503906, -1.14063, -1.42188, 0.820313, 1.89063, 6.875, 6.375, -1.29688, -0.229492, -0.220703, 1.04688, -0.765625, 0.6875, -0.734375, 0.173828, 0.0917969, -0.574219, -0.408203, -0.953125, -0.890625, 1.3125, -0.421875, 1.75781, -0.640625, -1.59375, 4.40625, 5.6875, -1.10938, 0.103516, -1.10938, 0.34375, 2.10938, -0.0664063, -0.164063, 0.261719, -0.808594, -0.414063, -1.14063, -0.00567627, -0.625, -0.146484, -0.710938, 0.149414, -0.363281, 1.57813, 4.96875, 4.28125, -0.84375, 0.53125, -0.601563, -0.90625, 1.04688, 0.213867, -0.472656, -0.320313, 0.808594, -0.0415039, 0.632813, 0.15625, -0.238281, 0.695313, 1.85938, -1.69531, 1.25781, -0.679688, 4.90625, 4.875, -0.00488281, -0.6875, -0.213867, -0.488281, 0.367188, 0.118164, -0.78125, 0.488281, -0.5, -0.0839844, 1.15625, 0.820313, -0.640625, -1.09375, 1.40625, -0.148438, 2.875, 1.04688, 4.6875, 5.1875, -0.632813, 2.53125, 0.695313, -0.539063, -0.09375, 0.566406, 0.410156, 0.632813, -1.75781, -0.5, -1.07813, 1.23438, -0.355469, -1.3125, 1.57813, 1.04688, -0.613281, 0.742188, 5.4375, 3.95313, 2, -0.863281, -0.765625, 0.0791016, 0.835938, -0.632813, -0.671875, 1.73438, 0.9375, 1.82031, 0.855469, -0.178711, 0.621094, 0.206055, 0.15918, 1.85938, 0.0454102, -0.488281, 4.9375, 4, -0.746094, -0.277344, -0.804688, -0.539063, 0.460938, 0.898438, 2.1875, -1.46875, -1.23438, 1.09375, -0.953125, 0.515625, 0.291016, 1.01563, -0.22168, 0.113281, -1.625, -0.945313}; static const std::vector<float> expected_dft2d_bfloat16_results = { 54, 48.25, -1.85938, -3.8125, 2.59375, -0.714844, 3.53125, 1.67188, -3.39062, 0.214844, 2.95312, 2.39062, 0.369141, -5.78125, -4.4375, 5.3125, 1.70312, 1.41406, 5.875, -0.875, -3.25, -0.0917969, -0.34375, 2.59375, -2.75, 0.199219, 0.355469, -0.271484, 5, -2.96875, -1.65625, 0.539062, -0.90625, -3.65625, -3.71875, -0.671875, 2.0625, -2.1875, -3.34375, 3.53125, 1.53125, -1.60938, -5.3125, 0.53125, -1.66406, 2.4375, -2.25, 1.42188, 2.17188, 2.5, 0.867188, 1.65625, 1.71875, -2.09375, 4.625, -3.67188, 0.890625, -0.412109, -5.9375, 1.46875, 2.125, -1.4375, -3.48438, 3.09375, -3.29688, 4.78125, 5.65625, -1.11719, 1.82812, -2.5625, -0.386719, 3.21875, 1.42969, 0.859375, 6.125, -1.73438, 2.28125, 4.375, 1.76562, 3.375, -0.535156, 3.75, 4.4375, -4.3125, -2.76562, 3.67188, 1.89062, -2.59375, -2.96875, 1.14062, 1.46875, 0.75, -1.3125, 1.0625, -0.765625, -10.875, 2.96875, -4.21875, 0.417969, -0.457031, -0.625, 0.585938, -0.388672, -0.980469, -0.147461, -3.15625, 2.71875, -3.875, 3.875, 1.04688, -0.0986328, 7, 4.5, -0.378906, 0.648438, -2.125, 3.9375, -0.859375, 2.40625, 1.98438, 3.65625, -4.5, -1.45312, 0.53125, 5.4375, -2.67188, -0.0605469, 3.67188, 0.546875, 1.07812, 4.5, -1.46094, -2.39062, 0.539062, -6.0625, -2.34375, -1.46875, -2.60938, -4.375, 0.283203, 3.96875, -3.78125, -3.10938, -2.85938, 1.40625, 0.0375977, -2.07812, 1.64062, 0.601562, -3.25, -0.820312, 1.35938, -2.89062, -3.4375, -2.51562, -2.8125, -4.4375, -2.34375, 0.664062, -4.75, -2.125, -1.07812, 1.15625, -0.953125, 0.65625, -4.03125, -1.21875, 4.5625, -0.734375, -3.21875, 1.25, -4.03125, -4.5625, -3.51562, 6.6875, -2.84375, -0.429688, -4, -4.1875, 3.01562, 4.59375, 2.6875, 2.34375, -1.03906, -0.0419922, 2.17188, -0.214844, 0.695312, -0.921875, -7.1875, 3.79688, 3.1875, 3.84375, -1.89062, 0.898438, -0.371094, 3.04688, 0.197266, -0.102539, -1, 50.5, 49, -2.59375, 3.39062, -3.17188, -2.40625, 4.75, 1.78906, -2.51562, 6.28125, -2.64062, 1.48438, -0.789062, 2.375, -1.98438, 2.03125, 3.625, 2.8125, 2.28125, 1.01562, 3.25, -3.03125, 2.0625, 3.5625, 1.96094, 0.248047, 
-2.26562, 0.792969, 1.96094, 4.25, -3.25, 0.78125, -2.9375, 0.667969, 5.71875, 4.84375, -4.8125, 6.34375, -2.64062, -1.85938, -1.75781, -1.09375, 1.42188, -0.0986328, -2.76562, -3.65625, 1.42188, 3.40625, 3.09375, 0.113281, -2.40625, 0.291016, 2.90625, -1.85938, -0.695312, 2.26562, -0.425781, -5.8125, 5.3125, -6.28125, 0.8125, 3.625, 1.3125, 2.34375, 2.14062, 1.89062, -2.4375, 0.382812, 2, -0.0454102, -4.125, -1.51562, -2.04688, 1.19531, -0.65625, -2.03125, 2.0625, 4.875, -0.0996094, 1.42188, 0.648438, -4, -0.8125, 0.445312, 2.78125, 0.4375, -0.867188, 1.25, -2.70312, 2.8125, 4.125, -1.9375, -0.585938, -4.75, 3.14062, 0.796875, 5.3125, 0.277344, -3.64062, -4.375, -2.51562, -0.925781, 6.8125, -4.0625, -1.28125, 0.972656, 3.26562, -3.40625, -7.5625, 3.6875, 2.90625, 2.34375, 2.73438, -1.26562, 1.41406, -1.83594, -2.65625, 1.29688, 5.1875, 6.1875, 0.484375, 2.10938, 5.875, 2.79688, -0.386719, 0.00540161, -1.01562, -1.54688, -8.0625, -0.679688, 2.34375, 3.07812, 2.625, 1.375, 0.75, 2.26562, -1.28125, 2.125, 4.96875, -0.0222168, 0.0286865, 1.15625, -4.8125, 1.95312, 1.96094, 2.34375, -0.984375, 4.375, -6.1875, 0.828125, -2.75, -2.92188, -3.45312, -1.45312, 2.46875, -0.789062, -0.433594, 5.8125, -2.65625, -2.26562, -2.54688, -3.125, -4.4375, 0.167969, 1.92188, 1.04688, 2.79688, -0.453125, -4.0625, -2.21875, 1.78125, -0.570312, -2.1875, 3.01562, 5.3125, -0.765625, 5.9375, 0.157227, -0.8125, 4.1875, -3.84375, -0.523438, 2.46875, -2.375, -0.408203, -0.953125, -0.984375, 0.144531, 0.253906, 0.816406, 2.34375, 4.0625, 1.34375, -0.574219, -0.0258789, -0.53125, 0.390625, -1.75, -4.3125, -2.15625}; static const std::vector<float> expected_dft3d_bfloat16_results = { 104.5, 97.5, -4.4375, -0.414063, -0.582031, -3.125, 8.25, 3.46875, -5.90625, 6.5, 0.3125, 3.875, -0.419922, -3.40625, -6.40625, 7.3125, 5.34375, 4.25, 8.125, 0.138672, -0.0197754, -3.125, 1.71875, 6.1875, -0.796875, 0.445313, -1.90625, 0.523438, 6.9375, 1.25, -4.9375, 1.32031, -3.84375, -2.98438, 2, 4.15625, -2.75, 4.125, -6, 1.66406, -0.223633, -2.70313, -3.90625, 0.435547, -4.4375, -1.21875, -0.832031, 4.8125, 5.25, 2.625, -1.54688, 1.95313, 4.625, -3.95313, 3.9375, -1.40625, 0.464844, -6.1875, -0.632813, -4.8125, 2.9375, 2.1875, -2.17188, 5.4375, -1.15625, 6.6875, 3.20313, -0.734375, 3.84375, -2.60938, -4.5, 1.70313, -0.617188, 2.0625, 5.5, -3.78125, 4.34375, 9.25, 1.67188, 4.8125, 0.115234, -0.242188, 3.60938, -3.90625, 0.0158691, 4.125, 1.01563, -1.35156, -5.6875, 3.95313, 5.625, -1.1875, -1.89063, -3.71875, 2.375, -10, 8.25, -3.9375, -3.21875, -4.84375, -3.14063, -0.341797, 6.4375, -5.0625, -1.42969, -2.1875, 6, -7.25, -3.67188, 4.71875, 2.8125, 9.3125, 7.21875, -1.64063, 2.0625, -3.96875, 1.26563, 0.4375, 7.625, 8.1875, 4.125, -2.375, 4.40625, 3.32813, 5.0625, -2.65625, -1.07813, 2.125, -7.5, 0.394531, 6.875, 1.625, 0.240234, 1.90625, -5.3125, -0.0849609, -2.75, -0.492188, 0.574219, 0.261719, 4, -2.625, -7.9375, -0.90625, 3.375, 2.375, -3.0625, 6, -5.59375, -2.40625, -3.57813, -1.5625, -6.34375, -4.90625, -0.0458984, -3.59375, -4.875, 3.45313, -1.98438, -7, -4.6875, -4.1875, -3.29688, -0.78125, 2.57813, -3, 1.57031, 4.09375, -4.78125, -5.4375, 3.03125, -4.59375, -6.75, -0.492188, 12, -3.625, 5.5, -3.84375, -5, 7.1875, 0.761719, 2.17188, 4.8125, -3.42188, -0.449219, 1.21875, -1.20313, 0.835938, -0.664063, -6.375, 6.125, 7.25, 5.1875, -2.46875, 0.871094, -0.902344, 3.4375, -1.54688, -4.40625, -3.17188, 3.28125, -0.925781, 0.742188, -7.1875, 5.75, 1.70313, -1.20313, -0.112305, -0.867188, -6.0625, 5.59375, 0.90625, 1.15625, -8.125, 
-2.4375, 3.26563, -1.92969, -1.40625, 3.60938, -1.89063, -6.5, 2.9375, -2.40625, -0.96875, -4.71875, -0.0493164, 2.625, -1.0625, 3.03125, -7.21875, 1.60156, -0.238281, 2.03125, -4.3125, -9.4375, -5.5, 6.875, -8.5, -0.710938, 5.375, 3.28125, -0.523438, -6.75, 0.632813, 1.10156, 6.125, -3.65625, -1.98438, -0.914063, 2.39063, 3.28125, 1.375, -1.17969, -0.226563, 5.34375, -5.9375, 1.3125, 5.375, -11.25, 7.75, 1.3125, -5.0625, -4.8125, 0.746094, -5.4375, 2.89063, 8.125, -1.5, -0.171875, -2.51563, 3.73438, 4.75, 3.48438, -0.335938, 6.8125, 0.308594, 0.214844, -0.46875, 1.86719, 1.96094, -1.1875, 7.75, 5.25, -4.78125, -5.5625, 3.23438, 2.75, -3.84375, -0.257813, -1.65625, -2.65625, 2.6875, -0.726563, 5.84375, -3.90625, -11.625, -2.3125, -4.5, 4.0625, 3.9375, 1.89063, 1.50781, -7.21875, 3.09375, 1.13281, -4.125, -0.550781, -0.472656, 11.375, -2.625, -3, 4.625, 1.74219, 0.882813, -0.761719, -0.291016, 6.59375, -2.15625, -2.8125, -4.1875, 3.15625, -6.625, -7.3125, -2.26563, 5.84375, -2.6875, 0.953125, 5.21875, 8.625, 1.75781, 2.17188, -4.53125, -5, -0.832031, -6.84375, -4.625, -0.195313, -4.75, -9.375, 0.304688, 3.9375, -4.9375, 1.72656, -4.8125, -0.550781, -2.3125, -1.09375, -2.75, 6.8125, -4.0625, 1.94531, 4.28125, 0.5625, -1.99219, -5, -2.01563, -4, -8.125, 3.3125, -2.46875, 0.414063, 2.04688, 5.59375, -1.11719, -1.26563, -5.0625, -4, 5, 3.3125, -1, -0.539063, -3.46875, -2.375, -6.53125, 1.39063, -2.09375, -6.34375, -4.15625, -3.40625, -1.15625, 8.375, 3.21875, -0.126953, 1.34375, 0.367188, 3.125, 0.773438, 0.546875, -1.17188, -8, 1.45313, -0.902344, 2.51563, -1.3125, 0.921875, 0.157227, 2.65625, 1.94531, 4.1875, 1.17188}; static const std::vector<float> expected_dft1d_float16_results = { 6.32812, 4.29688, -0.811035, -0.71875, -0.0587158, 0.270508, 0.827148, 0.57959, 0.599609, -1.3291, 1.18848, 1.46289, -0.0178833, -1.83203, 0.160034, -2.2207, 1.0625, -1.06738, -0.686523, 0.657715, 4.62891, 4.59375, -0.789062, -0.326416, -1.4707, -1.16113, 0.709473, 1.45703, -1.09766, -0.392334, -0.599609, -0.814941, -0.391846, -0.420166, 0.552734, 2.33984, -0.599121, 1.39648, -0.845215, 0.149414, 6.25781, 5.56641, 0.256348, -1.70215, 1.16211, 0.120178, 0.197266, -1.34277, -0.906738, 1.41113, 0.708496, 0.57959, 1.20215, -0.519043, -2.55859, -0.808594, 1.16602, -0.868652, 1.29883, -1.00488, 5.64453, 3.21094, 0.235962, 0.310059, 0.617676, 0.100159, 1.04297, -0.0167999, -0.885254, -0.12915, 0.648926, -0.842773, -0.0584106, -1.04883, -0.402344, 0.203247, 0.450439, 0.459229, 1.37109, -0.654785, 5.52734, 3.75195, -1.88672, -0.223389, 0.816895, -0.46582, 0.123657, 0.609863, 0.27002, 0.97168, 0.304932, 0.696289, 0.272217, -1.08105, 0.205933, 1.56543, -0.276367, 0.89502, -0.0397949, -0.187134, 4.97656, 4.69531, 0.190063, -0.88623, 0.916504, -0.525879, -0.454834, 1.16211, -1.47754, 1.67188, 1.02832, -1.60156, -1.21777, -0.134521, -0.147461, -0.67334, -0.32666, -0.901367, -0.966309, 0.0360413, 4.77344, 5.24609, -0.0340881, -0.372559, -0.490723, -0.657715, 0.814453, -0.891602, -0.199219, 0.179199, -0.996094, 0.828613, 0.552246, -0.0240173, 1.53516, 0.69873, 0.231689, 0.510742, 2.43164, 0.226562, 6.47656, 5.73047, -0.871094, -1.11035, 0.125244, 1.34863, 0.369141, 0.291504, -0.814941, -0.0922852, -0.331299, 0.780762, -0.962402, 0.899902, -2.11133, 0.845215, -1.53516, 0.743164, 1.33203, -1.02148, 3.8125, 5.62109, 1.35449, 0.645996, -0.216187, -1.20801, 1.16406, -0.0525208, 0.480957, -0.589355, 0.234863, 1.12793, 0.425781, 0.36377, -1.22461, 2.35156, 0.642578, 0.80957, 1.15723, 0.866699, 5.59766, 5.66016, 0.480469, 0.594727, 
1.18457, 1.4707, -1.25879, -0.116394, 0.629883, -1.46191, 0.787109, 0.181274, 0.554688, -2.00586, -0.437012, 0.994141, 0.895996, -0.549316, 0.842773, 0.0840454, 4.52734, 4.35156, 0.730957, 0.810059, 0.278564, -0.00706482, 0.365479, 0.674805, -2.38281, 1.58203, -0.165527, 0.513672, -0.236938, -0.132935, 1.55176, 0.620605, 0.773926, 1.61133, 0.362793, -0.424316, 5.10156, 5.18359, -1.19727, 1.17871, -0.0184937, -0.75, 0.0879517, 0.219238, 1.11328, 2.28906, -0.688477, -0.407715, 0.0468445, 0.868164, -0.507324, 0.230225, 0.353271, -0.456299, 0.149902, -0.105286, 4.80859, 5.28125, 0.496826, 0.29541, -0.414551, -0.64502, 0.317627, -0.895508, -1.04102, 0.917969, 0.682617, 0.0803833, 0.339111, 0.0525818, -1.18652, 0.51123, -1.14551, -1.41895, 0.82373, 1.89453, 6.88281, 6.40625, -1.30273, -0.224731, -0.220825, 1.03711, -0.763672, 0.694824, -0.731445, 0.168091, 0.0882568, -0.573242, -0.406982, -0.962891, -0.890625, 1.31445, -0.423828, 1.76758, -0.640137, -1.61133, 4.41016, 5.70703, -1.11133, 0.105713, -1.11133, 0.341553, 2.11719, -0.0671997, -0.161743, 0.260742, -0.813477, -0.422607, -1.1416, -0.000315189, -0.634766, -0.1521, -0.716797, 0.148071, -0.353516, 1.58398, 4.98828, 4.28906, -0.849121, 0.534668, -0.60498, -0.902344, 1.04492, 0.211914, -0.479004, -0.322754, 0.807617, -0.0440674, 0.628418, 0.155029, -0.237671, 0.695801, 1.86035, -1.70312, 1.26562, -0.679199, 4.92188, 4.89453, -0.012001, -0.6875, -0.218506, -0.486816, 0.367432, 0.124451, -0.786621, 0.48877, -0.508301, -0.0883179, 1.16504, 0.819336, -0.640625, -1.09668, 1.40918, -0.153564, 2.88281, 1.05371, 4.70312, 5.20312, -0.634766, 2.53125, 0.699219, -0.538574, -0.098938, 0.565918, 0.408936, 0.634766, -1.76367, -0.49707, -1.07715, 1.24219, -0.362549, -1.31152, 1.58398, 1.05273, -0.611328, 0.748535, 5.43359, 3.9668, 2.00586, -0.870117, -0.765625, 0.0811768, 0.839355, -0.635254, -0.675293, 1.73633, 0.9375, 1.8252, 0.861816, -0.182007, 0.625, 0.203125, 0.159302, 1.86133, 0.041687, -0.490723, 4.94141, 4.02734, -0.74707, -0.277832, -0.808105, -0.541016, 0.467285, 0.901367, 2.19727, -1.4668, -1.23535, 1.09668, -0.955566, 0.51416, 0.288574, 1.01172, -0.219116, 0.114624, -1.62695, -0.943359}; static const std::vector<float> expected_dft2d_float16_results = { 54.0312, 48.375, -1.87305, -3.78906, 2.58594, -0.708984, 3.53516, 1.67969, -3.40039, 0.239502, 2.97461, 2.39844, 0.358887, -5.80078, -4.42969, 5.29688, 1.71191, 1.42676, 5.89844, -0.848145, -3.28711, -0.10022, -0.339111, 2.61523, -2.76562, 0.185791, 0.349365, -0.259277, 5, -2.98438, -1.66406, 0.533203, -0.898438, -3.64453, -3.71484, -0.685547, 2.0625, -2.2168, -3.36914, 3.52734, 1.54785, -1.63281, -5.33203, 0.538086, -1.66699, 2.44922, -2.26172, 1.41113, 2.17969, 2.50195, 0.851562, 1.66016, 1.7168, -2.10156, 4.65625, -3.67578, 0.893555, -0.398926, -5.94922, 1.44043, 2.11523, -1.45312, -3.50391, 3.08203, -3.31836, 4.78516, 5.64844, -1.11914, 1.8291, -2.57617, -0.412842, 3.23633, 1.42773, 0.870117, 6.17578, -1.73242, 2.3125, 4.40625, 1.77148, 3.38086, -0.532715, 3.76562, 4.44141, -4.35156, -2.76367, 3.68555, 1.8916, -2.58398, -2.99023, 1.13477, 1.4834, 0.74707, -1.30762, 1.06641, -0.759766, -10.8828, 3, -4.25, 0.417969, -0.451416, -0.625, 0.592773, -0.366211, -0.952637, -0.15979, -3.17383, 2.71875, -3.875, 3.89062, 1.04785, -0.0922241, 6.99219, 4.51172, -0.383301, 0.664062, -2.12305, 3.94141, -0.869629, 2.42773, 1.96582, 3.65234, -4.49219, -1.47168, 0.526367, 5.45703, -2.65625, -0.0605774, 3.66211, 0.561523, 1.05859, 4.54297, -1.44824, -2.38672, 0.524902, -6.11328, -2.36328, -1.48926, -2.62695, 
-4.40234, 0.276123, 3.95703, -3.75977, -3.12695, -2.86719, 1.41699, 0.0242462, -2.08008, 1.6543, 0.595703, -3.25586, -0.820801, 1.36328, -2.90625, -3.43945, -2.50195, -2.82227, -4.42578, -2.35352, 0.666016, -4.74609, -2.16016, -1.05859, 1.13379, -0.946777, 0.675781, -4.04688, -1.24219, 4.53906, -0.742188, -3.22461, 1.23145, -4.04688, -4.5625, -3.51953, 6.70703, -2.84375, -0.4375, -3.99609, -4.22656, 3.05273, 4.60938, 2.7207, 2.33398, -1.04004, -0.0499268, 2.1582, -0.223877, 0.686523, -0.927246, -7.23438, 3.79492, 3.19727, 3.86719, -1.89062, 0.915039, -0.369629, 3.05664, 0.201294, -0.0986938, -1.01953, 50.7188, 49.3125, -2.61914, 3.39648, -3.18359, -2.41016, 4.74219, 1.79492, -2.53711, 6.28906, -2.6582, 1.48242, -0.777344, 2.37305, -1.99512, 2.02734, 3.63477, 2.82227, 2.29492, 1.02539, 3.25195, -3.06055, 2.09766, 3.56641, 1.94922, 0.241577, -2.25391, 0.783203, 1.96289, 4.23828, -3.25391, 0.787109, -2.9375, 0.66748, 5.74219, 4.84766, -4.80078, 6.37109, -2.66211, -1.88477, -1.76953, -1.10449, 1.41309, -0.0806274, -2.76367, -3.64648, 1.41602, 3.41016, 3.09961, 0.116394, -2.41016, 0.277344, 2.89648, -1.86426, -0.683105, 2.25, -0.430176, -5.81641, 5.34766, -6.29297, 0.800781, 3.65234, 1.31641, 2.3418, 2.16016, 1.87207, -2.4707, 0.392822, 2.01953, -0.0645142, -4.13281, -1.5127, -2.03516, 1.19531, -0.666016, -2.04492, 2.07422, 4.87109, -0.0910034, 1.38965, 0.680664, -4, -0.802734, 0.439697, 2.78125, 0.452393, -0.876465, 1.23828, -2.72656, 2.79688, 4.14844, -1.9209, -0.609863, -4.79297, 3.15234, 0.793457, 5.30859, 0.279785, -3.67383, -4.39844, -2.51172, -0.914062, 6.83984, -4.08984, -1.27832, 0.978516, 3.26953, -3.39258, -7.55469, 3.68359, 2.92383, 2.33398, 2.75195, -1.25977, 1.42383, -1.85449, -2.67383, 1.27148, 5.21875, 6.20703, 0.493652, 2.11523, 5.87891, 2.80469, -0.37793, 0.0119629, -1.04004, -1.53711, -8.07812, -0.686035, 2.35156, 3.08789, 2.65039, 1.36719, 0.771973, 2.29297, -1.26855, 2.11523, 4.99219, -0.0458984, 0.0148392, 1.1582, -4.83594, 1.95605, 1.97949, 2.3457, -0.963379, 4.39453, -6.20312, 0.832031, -2.77344, -2.92773, -3.46484, -1.45898, 2.47266, -0.795898, -0.428467, 5.8125, -2.66406, -2.27148, -2.54883, -3.12305, -4.46094, 0.164429, 1.91504, 1.04688, 2.80664, -0.47168, -4.06641, -2.22266, 1.77734, -0.578613, -2.20312, 3.02734, 5.32422, -0.783203, 5.92578, 0.141968, -0.815918, 4.19531, -3.85547, -0.515137, 2.47852, -2.39648, -0.394043, -0.960938, -0.977051, 0.137573, 0.259766, 0.834961, 2.37109, 4.10938, 1.3584, -0.584473, -0.0278931, -0.530762, 0.396484, -1.75781, -4.31641, -2.16797}; static const std::vector<float> expected_dft3d_float16_results = { 104.75, 97.6875, -4.49219, -0.392578, -0.598633, -3.11914, 8.28125, 3.47461, -5.9375, 6.52734, 0.316162, 3.88086, -0.418213, -3.42773, -6.42578, 7.32422, 5.34766, 4.25, 8.1875, 0.17688, -0.0348816, -3.16016, 1.75781, 6.17969, -0.817383, 0.427246, -1.90527, 0.523926, 6.96094, 1.25391, -4.91797, 1.32031, -3.83594, -2.97852, 2.0293, 4.16406, -2.73633, 4.15625, -6.03125, 1.64258, -0.221558, -2.73633, -3.91992, 0.45752, -4.42969, -1.19727, -0.844238, 4.82031, 5.28125, 2.61914, -1.55762, 1.9375, 4.61328, -3.9668, 3.97461, -1.42676, 0.463135, -6.21875, -0.599609, -4.85547, 2.91406, 2.19922, -2.1875, 5.42188, -1.15723, 6.66016, 3.17578, -0.726562, 3.84961, -2.64062, -4.54297, 1.72363, -0.606445, 2.06641, 5.50781, -3.77734, 4.38672, 9.28125, 1.68066, 4.76953, 0.148071, -0.237183, 3.63672, -3.91406, 0.0173492, 4.14062, 1.01465, -1.34668, -5.71484, 3.93164, 5.63281, -1.17285, -1.91797, -3.72656, 2.39258, -10.0859, 8.30469, -3.96875, -3.25586, 
-4.84766, -3.13672, -0.321045, 6.47656, -5.04297, -1.4375, -2.19531, 5.98828, -7.26562, -3.66602, 4.73047, 2.83203, 9.32812, 7.26172, -1.64258, 2.08789, -3.97656, 1.26562, 0.402588, 7.64844, 8.17188, 4.14453, -2.375, 4.40625, 3.33008, 5.07812, -2.64453, -1.10059, 2.125, -7.51953, 0.372559, 6.89453, 1.63867, 0.263672, 1.8916, -5.33984, -0.071228, -2.75781, -0.512695, 0.587402, 0.230225, 3.97266, -2.60156, -7.96094, -0.911133, 3.39648, 2.36914, -3.04297, 6.04688, -5.60547, -2.42383, -3.5957, -1.56543, -6.37109, -4.89844, -0.0288696, -3.61719, -4.85547, 3.45898, -1.99805, -7.01562, -4.71094, -4.17969, -3.32812, -0.782227, 2.58984, -3.00195, 1.56348, 4.06641, -4.80859, -5.44531, 3.00781, -4.625, -6.76562, -0.492432, 12.0312, -3.62695, 5.48828, -3.85352, -5.04297, 7.24609, 0.754883, 2.20703, 4.8125, -3.4375, -0.443848, 1.19727, -1.20117, 0.824219, -0.66748, -6.40234, 6.16406, 7.30469, 5.22266, -2.47461, 0.887207, -0.900391, 3.45312, -1.55566, -4.41797, -3.1875, 3.30469, -0.945801, 0.745605, -7.1875, 5.76953, 1.70215, -1.20898, -0.115417, -0.862305, -6.05078, 5.63281, 0.916504, 1.13574, -8.17188, -2.43359, 3.26953, -1.92383, -1.39551, 3.60156, -1.87305, -6.53906, 2.96094, -2.43555, -0.952148, -4.71484, -0.0558167, 2.60352, -1.04297, 3.03516, -7.22266, 1.58984, -0.253906, 2.03906, -4.3125, -9.46094, -5.53516, 6.86328, -8.58594, -0.70752, 5.41406, 3.31641, -0.527832, -6.74609, 0.618652, 1.09668, 6.09766, -3.67773, -1.99902, -0.919434, 2.38672, 3.26172, 1.38281, -1.17969, -0.237305, 5.33984, -5.92578, 1.32324, 5.41797, -11.2969, 7.73438, 1.31348, -5.10547, -4.82031, 0.739258, -5.47656, 2.91406, 8.11719, -1.51172, -0.191284, -2.51172, 3.71875, 4.75, 3.46289, -0.325439, 6.83984, 0.311768, 0.237549, -0.465088, 1.8623, 1.99023, -1.21289, 7.76562, 5.24219, -4.79297, -5.54297, 3.23438, 2.76758, -3.82227, -0.263428, -1.66113, -2.66602, 2.66797, -0.698242, 5.85938, -3.91211, -11.6719, -2.30664, -4.52734, 4.09375, 3.94531, 1.88672, 1.50684, -7.20703, 3.13672, 1.11816, -4.15234, -0.55127, -0.482422, 11.4453, -2.63477, -3.01562, 4.66016, 1.75879, 0.876465, -0.759766, -0.267334, 6.61719, -2.14062, -2.79102, -4.24219, 3.1582, -6.60547, -7.35156, -2.27734, 5.83594, -2.66992, 0.979492, 5.19922, 8.64062, 1.74414, 2.19141, -4.53516, -5.03516, -0.841797, -6.88672, -4.65625, -0.220825, -4.74219, -9.39062, 0.322021, 3.94336, -4.91797, 1.70703, -4.82422, -0.562012, -2.32031, -1.11719, -2.73828, 6.80078, -4.08594, 1.95312, 4.29297, 0.558105, -1.97949, -4.97656, -2.02539, -3.99805, -8.16406, 3.33008, -2.47461, 0.388672, 2.06445, 5.59375, -1.11133, -1.23828, -5.09375, -4.04688, 5.01172, 3.32227, -1.00293, -0.545898, -3.46875, -2.35938, -6.54688, 1.38086, -2.06055, -6.36328, -4.13672, -3.41211, -1.14062, 8.46875, 3.23633, -0.145142, 1.35742, 0.343994, 3.11914, 0.753418, 0.548828, -1.18652, -8.07031, 1.42383, -0.911621, 2.50781, -1.30566, 0.942871, 0.161255, 2.66016, 1.95898, 4.21875, 1.14844}; static const std::vector<float> input_data_1 = { 0.9795938, 0.14046684, 0.9819369, 0.51320475, 0.9390985, 0.06454252, 0.48972926, 0.042538255, 0.3341647, 0.14752749, 0.44628578, 0.8509109, 0.6611515, 0.5711897, 0.10807402, 0.67733586, 0.4091941, 0.23590194, 0.4385734, 0.40270114, 0.75568604, 0.9842337, 0.82819414, 0.49742407, 0.7397849, 0.6104118, 0.019504193, 0.7756829, 0.9271429, 0.6423316, 0.3300541, 0.8688829, 0.21220064, 0.76539195, 0.8143789, 0.70724595, 0.54020476, 0.29437974, 0.19398275, 0.20308031, 0.30458412, 0.040420562, 0.36627868, 0.61882246, 0.3416973, 0.5482437, 0.68851316, 0.5670022, 0.58812225, 0.6487681, 0.88266903, 
0.07287276, 0.7716641, 0.12443388, 0.4170407, 0.8380076, 0.17115247, 0.8118648, 0.7704737, 0.5179812, 0.9407177, 0.7311383, 0.4538601, 0.01992845, 0.4758718, 0.25867644, 0.55573237, 0.89606065, 0.8505143, 0.47349417, 0.3970769, 0.3293097, 0.7601557, 0.24247961, 0.8102311, 0.7387785, 0.15742134, 0.8387721, 0.100493915, 0.3733577, 0.4904671, 0.9106489, 0.0049963384, 0.89285916, 0.24380954, 0.7329451, 0.9373891, 0.52886724, 0.65965563, 0.7307209, 0.5160155, 0.97944415, 0.43991584, 0.9839402, 0.6350642, 0.16712844, 0.40538687, 0.80509937, 0.4988989, 0.02185218, 0.74142575, 0.8026278, 0.28912508, 0.50405765, 0.7768013, 0.9817653, 0.9995751, 0.74799776, 0.8615655, 0.058107413, 0.27611437, 0.76372087, 0.93234706, 0.7603203, 0.30816016, 0.80595773, 0.8843074, 0.46457228, 0.43644127, 0.6553406, 0.9050378, 0.5044161, 0.49364874, 0.59174323, 0.2650881, 0.78394204, 0.57706285, 0.33071348, 0.7140054, 0.5885716, 0.60252094, 0.92644346, 0.91704935, 0.64020824, 0.99652874, 0.8375778, 0.45622328, 0.3755286, 0.8324417, 0.77270067, 0.50742614, 0.7814994, 0.30720684, 0.36613366, 0.9426107, 0.12557131, 0.87243265, 0.002567238, 0.8350289, 0.1262151, 0.35253504, 0.07578735, 0.34082502, 0.9211622, 0.38055828, 0.3247621, 0.5061271, 0.87862396, 0.1869049, 0.7774487, 0.030804915, 0.25322768, 0.06073754, 0.27092665, 0.9209875, 0.86690956, 0.74456835, 0.42403135, 0.61839956, 0.9004572, 0.94674456, 0.17315134, 0.74403226, 0.30930993, 0.23992635, 0.9080931, 0.4886828, 0.9973065, 0.32888287, 0.32976696, 0.09137513, 0.1410893, 0.4248779, 0.019689998, 0.6828394, 0.47350892, 0.02358055, 0.94660497, 0.9253734, 0.1509718, 0.540138, 0.7050524, 0.20855357, 0.9753569, 0.0044813985, 0.5063834, 0.6836877, 0.2784342, 0.0139586115, 0.8785785, 0.4754602, 0.38955635, 0.151705, 0.5694773, 0.14548586, 0.6798502, 0.057361145, 0.031760257, 0.91168743, 0.5762714, 0.54128575, 0.53421247, 0.5860678, 0.97197753, 0.940639, 0.18688098, 0.54635745, 0.513619, 0.5645304, 0.91558236, 0.8496063, 0.6258071, 0.31261826, 0.20282389, 0.2723365, 0.5039135, 0.6405068, 0.65471405, 0.5857442, 0.57205665, 0.23835625, 0.32288164, 0.068663165, 0.43674967, 0.049117915, 0.78065604, 0.98437595, 0.60793245, 0.38907775, 0.6610265, 0.5587009, 0.89418066, 0.6170649, 0.1305905, 0.5760506, 0.10885323, 0.5303117, 0.16950679, 0.9630447, 0.9476875, 0.22327174, 0.87473476, 0.917824, 0.44959846, 0.055421904, 0.22361691, 0.9334828, 0.16427046, 0.5914317, 0.81789917, 0.48431975, 0.3067152, 0.53250873, 0.19298424, 0.23529118, 0.4841604, 0.24943262, 0.41821656, 0.59484303, 0.4535004, 0.50373393, 0.6057788, 0.6799498, 0.21368006, 0.17095569, 0.97966874, 0.3839032, 0.48672524, 0.9375583, 0.84598905, 0.049092207, 0.47682214, 0.56488436, 0.7817405, 0.8975917, 0.75874424, 0.43242812, 0.8459973, 0.7138231, 0.9834999, 0.7273379, 0.05828699, 0.6884237, 0.07486352, 0.4326547, 0.78577167, 0.8844588, 0.9474644, 0.542272, 0.49642876, 0.48886803, 0.11854455, 0.01492267, 0.22648218, 0.7607531, 0.5930919, 0.9450968, 0.02894685, 0.67735505, 0.46363172, 0.18415985, 0.66824925, 0.6137258, 0.6086626, 0.6422855, 0.7637218, 0.56419605, 0.74026155, 0.18709394, 0.14683136, 0.32289994, 0.15482259, 0.11222768, 0.9085655, 0.43263617, 0.32097924, 0.29690787, 0.77809244, 0.2413839, 0.8267769, 0.82795614, 0.018312717, 0.9958108, 0.769578, 0.13144562, 0.45904484, 0.38071582, 0.24182741, 0.7200288, 0.20737973, 0.5285696, 0.3680287, 0.46252182, 0.89153767, 0.13071166, 0.84319293, 0.10841641, 0.40668696, 0.7636801, 0.42153865, 0.65055484, 0.86845386, 0.6452055, 0.6112245, 0.84526664, 0.15358071, 0.7889171, 
0.6356269, 0.2515375, 0.86599886, 0.20071381, 0.20584217, 0.24220705, 0.049883988, 0.77259976, 0.49566683, 0.8112268, 0.49028614, 0.2187354, 0.70172536, 0.47309682, 0.12539592, 0.13696012, 0.33588144, 0.98134226, 0.537496, 0.9999663, 0.13245043, 0.5659106, 0.39207155, 0.48483336, 0.49371332, 0.12930158, 0.103645995}; static const std::vector<float> expected_dft1d_results_1 = { 4.940035, 3.0077164, -0.43999052, -0.72356576, 0.35775006, -1.1781573, 0.35552078, -0.5878226, 0.8879826, -1.1602633, 0.71755445, 0.15355057, -0.9307331, 0.48268145, 1.9486318, 1.1295953, 4.4481335, 5.01757, -0.57099926, -0.85269475, -0.7217729, -0.08008081, -1.1429803, -1.1934547, 1.2154821, -0.07181215, 0.59362185, 0.44658875, -0.345927, -1.480422, -0.20200539, 0.10152125, 3.4618404, 3.744587, 0.12548631, 0.8791516, 0.19086862, -0.33497274, -0.69986683, 0.6364535, -0.6644666, -0.44771492, -0.8179812, 0.17377639, -0.92110324, 0.26135075, 1.0228279, 1.2105042, 4.9957, 3.764995, 0.17936486, -0.9405504, -1.2201893, -0.17718112, 1.1820351, 0.5077594, -0.052387, 0.86741495, -0.55883414, 0.9524643, -0.68602496, 1.3873026, 0.8653134, -1.17206, 4.107497, 4.150929, -0.95916677, -0.56429225, 1.1602635, -1.679503, 0.5507363, 0.53716975, 0.38042903, -0.5240841, -0.33995685, -0.78949994, -0.7040798, 0.05728233, -0.38874817, 0.8814098, 3.9273133, 5.9265537, -0.80074155, 0.20659067, 1.642705, 0.9759259, 0.85149074, 0.44840366, -0.25961697, 0.78995633, -0.039625674, 0.545478, -0.70991015, -1.1269569, -0.68787766, -0.48076022, 4.848893, 4.6852283, -0.6871975, -0.041547477, -0.91873163, -0.0071051717, -1.4497755, 0.3778788, 0.7214658, 0.6099715, 1.4334095, -0.07150489, 0.07712549, 1.859364, -0.78209424, -0.97149, 4.8020935, 4.897006, 0.05723229, -0.21079391, 1.0996364, 0.22791737, 0.7594234, 1.1837918, 1.1714673, 0.12949562, -0.64135337, -0.5158363, 0.2763425, -0.19547313, -0.06606534, 0.56645525, 5.3334026, 5.288664, -0.09143779, -0.7460747, 0.2411859, -0.5888344, 1.4911498, 0.52246934, -0.1439941, -0.51704764, 0.32441977, 0.35291424, -0.7496793, -0.32638037, -0.6930033, 0.72286314, 4.4170227, 3.232138, -0.64390934, -1.3210952, -0.58362705, -0.6716566, 0.39952934, -1.1999383, 0.83216095, 0.8710072, 0.34266293, -0.92789006, 0.46818644, 0.7554455, 2.3088598, 0.26656008, 4.306201, 4.1061068, -1.286478, -0.14309806, -1.9038618, -0.045521975, -0.43500268, -0.6120295, 0.3222475, 0.5537019, 1.2264881, -1.5052714, -0.12776875, 0.00045275688, -1.8553859, -0.32851917, 3.50575, 3.7639909, -0.8274512, 1.2718699, 0.7064032, 1.7913067, -1.4024514, -0.49303484, 0.8707912, -0.23823786, 0.41937304, 1.443722, -0.396856, 0.56620187, 1.0339032, -0.12736642, 1.7406936, 4.309397, -0.18755847, -0.46101326, 0.020362198, 0.3217622, 0.7620988, 1.9022003, 1.2856812, 0.3369981, -1.149087, 0.5562107, -0.31068176, 0.4914955, -0.49307993, 0.34580433, 5.2527924, 4.527175, -0.029956281, -0.35984623, 1.0824606, -0.360453, 0.19873115, -0.3701315, 0.53464556, 0.8481753, 1.4529572, 1.012228, -1.037719, -0.6553353, -0.16041204, -0.03164065, 3.2281785, 4.5399303, 0.3643899, 0.30424678, -0.7776585, -0.3015166, -0.61336106, -0.7931169, 0.5940609, -0.29862595, -0.02879478, 0.6273444, -1.6805825, -0.17713517, 1.0924593, 0.1301811, 4.4416904, 3.7987688, -1.3668841, -0.81391287, 0.64007276, 1.0288135, -0.57070565, -0.52160406, 1.58955, 1.0018709, -0.123293996, 1.390446, -0.5843305, 1.5380195, 0.44350854, -0.26895642, 4.125044, 3.443525, 0.7636179, 0.10296479, 0.52696383, 0.08359367, 0.6142223, -1.2670981, 0.3708297, -0.6262324, 0.339195, -0.5216981, -0.34774148, -0.30716318, 
1.0757314, 0.4062716, 4.1163635, 5.389367, -0.1369414, 0.3118773, -0.48302984, 0.07917905, 1.6785579, -0.9954241, -0.09528947, -1.517424, 0.85461855, 0.18921553, -0.62187576, -1.1891136, 0.12719542, -0.558237, 4.492761, 3.6913419, -0.29317212, -1.2950531, -0.03654802, 0.91552365, 0.123229444, 0.514639, 1.0583864, 0.5574026, -0.13546133, 0.9680127, 0.87852824, 2.559589, -0.3771388, -0.043456703, 4.574666, 4.013397, -0.06427416, -1.2290373, 0.11051571, -1.2182673, 0.05659631, 0.77380556, 0.65739393, 0.7978984, -0.19493088, 0.9715779, 0.1553396, 1.2139899, 0.79071796, -0.57862896, 3.361268, 4.236172, -0.13507411, 0.6842204, -1.1744223, -0.62078804, 2.008315, -1.2499349, 0.62419355, -0.091858864, -0.5990913, -0.90177983, -0.55390406, 0.40287262, -0.94808567, -1.2203228, 3.745199, 4.248646, 0.63732016, -0.82302505, -1.9267471, 0.58008444, -0.38652933, -0.9787377, -0.1378448, -0.4994706, -0.24433172, 0.09051508, 0.3651026, 0.010821462, 0.9935576, -0.69421434, 4.5576744, 3.50811, 1.745633, 0.16605312, -1.8684177, -0.33893645, -0.17866233, 0.5833766, 0.2571981, 0.38861072, -0.5767294, 0.61207676, 0.43722266, -0.28951776, 0.78772557, 0.26002276, 3.9901466, 2.82238, -1.4889656, -0.1150527, 0.47323376, 0.07621753, 0.16292411, 0.17989358, -0.30915606, 0.50516117, -0.38916004, 1.9493489, 0.72058266, -0.067055345, -1.4097221, 0.26290974}; static const std::vector<float> expected_dft2d_results_1 = { 25.880518, 25.61235, -2.4660468, -1.9953613, 1.409625, -2.473969, 1.0969357, 0.34850854, 1.5074215, -0.546504, -0.44522142, 1.482357, -4.297778, -0.41876173, 2.5581412, 1.6702101, -0.79171646, 0.87513673, -0.5556948, -1.4017968, 1.6127079, 3.341402, -2.2336023, 0.7553977, 0.8801464, -1.5552741, 2.8809369, -0.12944597, -0.08941656, -2.4948978, 1.1106122, -0.5771601, 1.5280423, -3.6573076, -1.325342, -0.75811887, -4.0773964, 0.41215408, 0.24999249, 0.3498589, -0.31276864, -2.3484, -0.4591713, -0.04454562, -0.7590859, 2.5111904, 3.1611128, -0.09711918, -0.8617741, -3.8058863, -0.0812951, 1.1779473, 2.0081396, -3.9112964, -0.6841551, 0.82309175, -0.2995335, -3.7176208, -0.43554613, -2.4067037, -0.81405425, 2.0213914, 2.6072812, 4.772808, 2.3986423, -1.6369095, 3.009512, -2.2388682, 0.08045465, -2.0042, 3.2657382, -0.93855727, 1.3121321, 2.0163581, 1.3805538, 1.8802332, 0.20659024, 3.5175233, 2.7225797, -1.7004844, 1.4864945, 0.6589138, -1.221076, 0.8748032, 1.1129706, -2.4330344, 0.43821555, -4.865236, 2.2404962, -0.81013864, 1.3837745, 0.13940874, 0.16934663, -2.240356, -0.46793693, 2.7093167, 27.21336, 25.973133, -3.4792416, -1.1907393, -1.358995, 0.70610523, -0.63712704, -0.22086221, 3.7741385, 1.4088898, 3.1050003, -1.2238663, -0.45265055, 2.6596098, -0.053786665, 0.12850314, 1.7713342, -0.92604506, -1.5456572, 0.4535787, -0.4252041, -0.20687354, -0.26421398, -1.5723603, 0.21247786, -0.19034994, 0.116511285, 3.5963366, -0.9552689, 1.4078308, 0.17855054, -1.2697299, 0.24928832, -1.3436013, -1.018871, -1.1798176, -2.4574528, 0.14592099, -0.7871367, -1.3267987, 1.6891341, 0.8528522, -2.194655, -0.7497572, 0.66770875, 1.4708114, 2.0073843, 0.8376069, 1.7636304, 2.1868649, -0.65098536, -0.6707011, -3.8038197, -1.9890289, -0.15012956, 0.7975005, -1.9746995, -0.11563957, 2.8636346, -1.2238576, -1.1479954, 0.40726233, -6.6071806, -1.2827955, 0.335096, -0.8774332, 0.5047921, -1.7173706, -0.6906272, -2.8883119, -1.7264752, -0.91851616, -0.8023921, 2.1811929, 4.4178715, -1.0245608, 1.4208769, 3.714005, 2.626697, -3.0808997, -2.2393522, 3.0984519, 2.0667777, 4.0557647, 3.22371, 4.1895566, -5.1335697, 5.5083103, 
1.4301378, -0.47711706, 0.29209352, 0.19667566, 0.9300822, 1.4966636, -2.8442304, -1.1616251, 22.90476, 26.008162, -0.59333247, -0.9156835, 1.009171, 0.85137844, 2.0695426, -2.0451744, 4.279478, -0.2552395, 1.3455946, 3.2537463, -4.582932, -0.29923248, 2.0854027, 0.023423433, -1.4901955, 1.2697036, 0.12445855, 0.37839913, -0.90889513, -0.96464497, 3.2230172, 5.11582, 1.7657483, -1.2759314, 1.6806445, -0.48582482, 1.0328861, -0.21219438, -1.8203479, 0.28618455, -3.8749995, -2.6027172, -2.7910428, -1.8929406, 0.43884945, -0.8854169, -0.6166424, 3.3119302, 3.9380612, 1.783706, -2.8637185, 0.45624626, 1.298605, 2.399745, -0.42191154, 0.3671223, -4.7169294, -1.4224572, 2.4742305, 0.80807984, -1.4698355, -0.64370054, -0.54362357, 1.729145, 0.2216661, -0.920482, -3.022968, -1.9300321, -0.09508008, 0.31362647, 1.264819, 1.741091, -0.48260987, 0.91905135, -1.2789521, -1.0161536, 0.53328425, 4.0857644, -0.8787215, 2.8750324, 0.4081546, 2.4881384, -2.2990177, 2.1299765, 0.59928864, 3.988031, -1.8122058, -0.16000175, -1.8958641, 1.6846397, 0.9392875, -0.12778088, 0.51960033, -0.5128077, 1.3190198, 0.42644808, -2.8990207, 0.20179635, -1.7350545, -0.08684918, -0.11685985, -3.241004, -2.2542362, -0.18299285, 24.721714, 22.520046, 0.40146637, -2.611894, -4.422385, -0.6061659, 1.7858734, -0.17695832, 2.1501722, 1.6577435, -2.1397042, 3.6897519, 2.0028722, 3.830699, -0.16294527, -2.0136907, 2.7324684, -0.48164713, -3.0283842, -1.1742884, 2.3383465, -0.04261756, -1.3686588, 0.50161046, 0.76707345, 0.40514386, -1.7530769, 2.333971, 2.7187724, 4.413412, -3.610829, 0.57066756, 0.3970142, -0.89236856, -1.0979964, -4.7337284, -1.6107149, 3.461636, 0.8141506, 1.3783914, 0.97534364, -1.261203, 0.9644269, -0.4446571, 1.3737998, 1.5714393, 1.5593243, -3.5085554, 0.10169166, 0.3512014, 2.2333064, 1.7223357, -1.7363904, 0.5177647, 2.1198907, -0.12688133, 1.7293842, 0.05056551, -0.4828595, -2.333132, -0.4791782, 1.5151871, -0.91205263, 0.0061766207, -0.4048485, 2.1922839, 1.728973, 0.9913887, 0.14321594, 1.6313545, -3.389923, -2.5937288, -0.36389086, -0.2227447, 0.03589952, -0.069511056, 0.3542207, 2.3090918, 0.45287704, 3.309232, -0.59147376, -1.541465, -1.9963981, -1.9641305, 5.0686407, 0.53117156, 0.77804404, 4.1053996, 1.0922346, 2.7149107, 2.5625482, 2.6316533, -0.69931746, 1.7177012, 0.4107918, 1.375428787}; static const std::vector<float> expected_dft3d_results_1 = { 100.72035, 100.11369, -6.1371527, -6.7136793, -3.3625832, -1.5226498, 4.315223, -2.0944858, 11.711212, 2.2648964, 1.8656702, 7.201989, -7.3304863, 5.772318, 4.4268136, -0.1915536, 2.2218904, 0.7371478, -5.005279, -1.7441093, 2.6169558, 2.1272666, -0.64345765, 4.800469, 3.6254454, -2.6164103, 2.9250154, 5.315038, 2.706975, 3.1141493, -4.1420155, -0.99003804, -1.7006547, -8.495995, -6.2332516, -8.564608, -7.7067156, 3.134292, -0.33963704, 3.7133825, 6.2897673, -0.9730439, -4.5531178, -0.7827141, 2.581028, 7.953187, 6.305909, -2.4009464, -3.7133813, -2.690277, 3.9752584, 3.0376637, -5.0019054, -6.0262628, 0.7419828, 3.2228546, -0.32318294, -4.7031775, -1.0777391, -7.8937283, -2.5363095, 4.257466, -3.6471338, 5.237282, 1.8462799, 0.5969913, 3.9643247, -3.981004, 0.0663265, 0.82460785, -2.7293837, -1.5757694, 0.55400586, 6.462948, 3.5353048, 2.9161394, 2.580977, 13.528652, 3.98995, -1.632153, -3.240194, 3.900541, -0.21140909, 2.8386583, 9.924921, 1.7748868, -2.5982907, 5.174925, 1.8638494, 1.6294506, 2.5033612, 2.8808913, 0.2832532, -2.2669961, -5.155612, 2.7401254, 6.428844, -2.8874602, -0.45156026, 2.8010314, 1.7127267, -6.3887377, -1.0165125, 4.816684, 
-3.0209088, -1.9152341, -6.7044344, -7.0160933, -0.8859364, 2.3359919, 2.614932, 1.5376289, 0.2540813, 0.56656873, 0.947714, -3.2629232, 2.3573487, 7.069599, -7.53059, -5.4648676, -1.4810953, 0.27525342, 2.4626575, -1.5132098, -4.127886, 1.3913381, 1.090563, -4.6527243, 4.9518104, -0.906865, 5.0196123, 1.055696, -7.831962, 2.144308, -1.838556, -1.3607846, -2.1367745, -4.8458967, 2.0994475, 2.6582882, -2.158319, 0.8175374, 7.929186, -0.9123031, 5.690818, -4.0453672, -4.948562, 3.2541595, 0.9711809, -1.2001665, 0.78384995, 1.3639677, -0.6874037, 0.9069457, 3.6966968, -3.823165, -1.826899, 2.3765814, 0.0534904, 8.726845, -0.18846548, -3.2959056, 1.5797036, 0.0014669895, -4.9724956, -5.2561207, 5.819672, -5.477039, 3.3079143, -0.033277154, 2.7245224, -4.631716, 1.0122153, -1.5371637, -1.8553452, -3.7143025, 8.022276, 0.62215286, 3.8595328, -3.060592, 4.2517557, -0.075296044, 0.5221062, 0.6199312, 1.9474881, -1.3498385, 0.6838516, 2.4967105, 0.06516862, -0.6287519, -0.7507546, 6.147333, -3.149796, 3.1273334, 0.018394649, 0.8915896, 8.200176, -1.7225304, 2.0177326, -1.2988436, -0.13740933, -3.868376, -0.06492156, 2.2702193, -10.430931, -7.2083035, 4.860276, 3.578821, -6.7857146, 3.5525331, 4.142806, -0.3026886, -1.20933, 2.6262493, 2.6222875, 6.941968, 1.6663432, -3.0459986, 6.198147, -6.5455766, -0.8200346, -8.528335, 2.722542, 0.4080863, -2.993259, -4.024056, -1.999518, 3.2624865, 0.42962015, -4.08082, -0.39366418, 3.6101956, 0.9608154, -0.15634394, -2.0926623, 1.6061159, -1.5019901, 1.8686844, -0.8275065, 2.9409513, -7.4440265, -7.7664104, 0.8106141, 0.9343933, 6.078513, -3.0837321, -3.1975398, 1.8816166, 0.16744804, -4.573029, -5.839288, -0.7797469, 0.71803975, 0.41256714, 11.391333, 7.790516, 1.9857845, -2.0327086, -0.5032053, -2.5290394, 1.16115, 3.3385215, 7.5034156, 5.4487205, 2.886569, 2.5460477, -5.3722363, 5.1042805, -0.9692185, 1.4824567, -2.1692014, -2.0888186, 2.4214573, 0.78656745, -0.3521694, -1.3446121, -6.659781, -7.66657, 6.1127615, -14.052498, -3.1808968, -2.8461368, -3.2059226, -2.7757711, -0.17827892, -8.695724, -0.2887354, 2.312519, -0.4773283, 2.095835, -3.293869, -4.960386, -0.9118179, -0.2619573, -0.92870337, -0.029317379, -2.5232022, 1.3327014, 3.1228013, 3.4733155, 1.4562413, -2.5750513, -1.6694541, 1.7559463, 1.142877, -1.3557005, -2.30802, -0.29746848, 2.6858592, 1.5424967, -3.3826494, -3.2559767, -0.2901088, -0.83393717, -0.06207335, 2.225967, 1.8832793, -5.9567456, 4.7713566, 2.9260354, 5.854274, -1.2023156, -2.0882115, 1.2139479, -1.2005312, 0.4508332, 3.571826, -4.5633574, -6.3648844, -3.4183156, 2.7096481, -3.659875, -1.957063, -0.5946456, -0.76313734, -0.016180754, 2.0194921, -0.72149086, -0.16249031, -2.5144238, 5.9847684, -5.335026, -1.0649127, -3.176074, -0.3549943, -6.501223, 1.4781482, 2.8698225, 0.3889513, 1.0389466, 2.6314335, -2.6634102, 5.950971, -1.8160157, 6.9972243, -2.4468954, 4.066836, -6.923808, 2.4692469, -2.1501422, -1.4999585, -0.91028214, 4.634622, 4.132228, -1.7976125, 0.59614825, 10.924917, 0.63333595, -1.2575581, -2.6736045, -8.180259, 5.0657587, -3.065015, -3.7651565, -2.2837136, -11.203299, 8.331546, -0.6740327, 5.5538063, -2.0441968, 0.5072439, 2.630047, 4.323353, -0.3627143}; static const std::vector<float> expected_dft2d_signal_size_results_1 = { 16.022383, 15.693684, -0.5664709, -5.672549, 1.6033735, -1.197921, 1.7188175, 2.0131826, 3.5833678, 5.2242, 0.46792942, 1.134562, -0.67151153, 1.9223167, 0.8663217, 6.124037, -2.2579927, 0.89829487, 19.077686, 18.455149, 0.22091317, -6.1652884, 3.7212925, -2.246737, 2.152333, -0.35627568, 
2.3711212, 3.8686695, 2.6396408, 0.9601658, 2.798329, 2.7985961, -3.0813444, 2.1557612, -2.7741354, 0.020231158, 15.38469, 17.809353, -0.18956879, -1.9654348, 2.1695504, -4.6052723, 3.9439118, 1.2099034, 2.0613155, 0.06963302, -0.44236425, 0.388348, -1.0990168, 1.5654994, -5.6095753, 3.5276392, -2.5259602, 0.11462754, 17.599613, 15.044548, -0.6420444, -7.04445, -1.3374522, -2.994011, 4.8180413, 0.4241563, -0.16446547, 3.4031193, 1.6924896, 1.1558795, -0.33493042, 3.6295547, -0.43396032, 6.2375803, 1.0340569, -2.6632495, 17.096416, 15.208672, -0.25167805, -3.4815063, 1.0696993, -5.2873764, 3.1979918, -0.007851064, 2.4643226, 1.5116049, 2.2652526, 1.3904041, -1.4545498, 0.3900972, -2.9886885, 0.8594112, -2.6304812, 1.0031368, 15.539574, 17.902292, -3.2155356, -2.029044, 3.1789296, 1.7891771, 5.1005206, -0.47131428, 2.3852003, 1.3723673, -1.1129482, 2.6773098, 3.0800571, 3.9973392, -3.7624567, 1.2616894, -4.292824, -1.0895696, 6.214655, -5.0111494, -1.411555, 0.7537558, 0.16763747, -2.843416, 1.3127775, -1.9945961, 1.9270117, -1.8245299, 0.14755654, 0.119710535, 0.4844484, -2.400567, -0.7895191, 0.11959961, 2.022987, 1.5247601, 2.940802, -5.3442945, -1.244915, 0.3108465, -1.0918118, -1.5366958, 0.7903812, -2.396515, 0.7696649, -0.8931735, -0.37421072, -2.557737, -0.7477829, -1.0872394, -1.3062975, -1.1412712, 0.78160894, 1.5905662, 4.9873405, -6.715282, -1.6470392, -0.42781863, 0.28193593, 0.7067425, -2.6839638, -0.57462484, 1.5813833, -1.840177, 0.5190145, 2.371542, 0.2732622, 0.15681863, 0.59334445, 2.0408263, 2.5408773, 2.1669984, 2.546817, -6.3569403, -2.1067922, 1.5111246, 0.5735901, -2.7920475, -1.0018231, 0.92297626, 1.4706146, -1.4367175, 0.91449654, -1.5520309, 0.19974053, -0.20228307, 2.309344, 0.36271697, 0.69444144, -1.5499384, 2.2808971, -4.045352, -3.9713645, 0.18369448, 1.7138574, 0.91558576, -2.5425308, -1.1829549, 0.2302165, -0.30635485, 0.61019206, -1.425853, -3.5264308, -1.3583288, -0.19054441, 0.04516536, -1.9273582, 2.771572, 3.5409505, -2.9621074, 1.0486665, 1.2725343, 0.65368336, 2.2211008, -2.333774, -3.4962246, 1.3083582, -0.9263934, 2.2978144, 0.9176514, -0.6095743, -0.518545, 0.30718052, 2.4780507, -1.0288026, -1.3045118, 5.109543, -3.7778285, -1.1729475, -1.3083767, 1.1134523, 0.1949772, -0.097847104, -0.55074453, 3.2532492, -0.6656364, 1.5148337, -1.4027967, 0.028162748, 1.0119103, 0.8516027, -1.6230793, 2.1586115, 2.5022216, 5.989767, 1.5173278, -1.8896376, -2.3683255, -1.0292325, -2.0638425, 2.086112, -0.5161866, -0.77660584, 0.7603841, -1.6347449, -0.068522215, 3.9827218, 1.4527302, -0.121331334, -1.3804352, -0.89542794, -1.1340978, 4.0029855, -1.0835376, -1.1934209, 1.8954589, -0.8401254, -1.9993141, 2.1870675, -2.8295417, -2.0395064, 0.68151194, -1.0094807, -0.923614, -1.7888981, 1.7585123, -1.8216331, 0.37374687, 0.9950139, 2.351936, 6.2798033, 1.7030699, -0.47087464, -1.6360345, -4.468318, 1.9200281, -0.09482679, -0.043346405, -0.9715949, -0.34371126, 0.09224248, -0.1786593, 0.65415484, 3.1819649, -1.7768605, 2.832635, -3.1208003, -2.8149338, 5.781748, 0.034614563, 0.8586121, -0.7064364, 0.77345586, 1.713623, 1.2533236, 0.9109094, 3.2923024, -1.0421813, 1.0721587, -1.610004, 0.881869, 1.9586189, -1.6461011, 0.88225365, 1.3470546, 2.804223, 3.3672202, 3.4784017, -0.15211546, 0.04246044, 3.7608738, -0.26021224, -0.023485124, 3.0048823, 2.4691834, -1.2352742, 1.824518, 0.97959256, 3.5056772, 0.37062028, -0.5023904, -2.280041, -1.4408232, 0.74235415, 0.77733827, 7.157131, 2.0188837, -2.4676952, 2.2076206, -0.9512329, 2.4511347, 1.5196023, -0.66184056, 
-1.0361224, -2.452705, -1.6693871, 0.67556, 0.7263571, -3.562127, 3.6935072, 2.6473353, 0.34725356, 1.2102544, 5.872654, 0.78656757, -1.734544, 0.10659918, -1.5254192, -0.4401021, -2.3011737, -1.5704286, 3.2446222, -0.09058446, 1.2970262, 0.03513348, -2.1620555, -2.3477712, 1.0844729, -0.31028935, -0.2960415, -1.6364298, 5.4394045, 0.4011662, 1.0095837, 0.6786694, -0.16024262, -2.0813947, 0.82111263, 2.0470252, 1.2589197, 0.16413942, 1.1889694, -1.1947216, 0.09492111, -1.2447665, 2.2591739, 1.107242, -3.266813, 1.62448, 5.5707865, 1.4339288, 0.5752156, 2.1131587, -1.9095923, 0.44389912, 2.6914995, 1.3760473, 0.59850556, -1.7392722, 0.6407434, 0.7244801, 0.6499002, 0.8710161, 2.361711, -0.88572574, 0.36826122, 0.83188176, 5.91977, 1.3777022, -1.1016716, 5.073045, -2.9808154, -1.9151478, -1.0893023, -0.19272709, -1.2990279, -1.1223007, -0.48658985, -1.1106746, 1.1981947, -1.0553399, 1.1354556, 1.3973879, -0.95949686, 3.825083, 7.3595786, -0.042000458, -3.2074027, -0.54039013, 0.91688734, 1.3646243, 0.059570312, -2.4480567, -3.120977, 0.3088463, 0.019118637, -1.9716125, 1.1828063, -2.605321, 0.0031490326, -3.989009, -1.0951542, -3.423747, 0.9767442, -0.703821, -0.53229225, 1.8792986, -2.6091163, 0.073565304, -3.0182452, -1.9846222, -0.05088371, -1.6033902, 1.1012542, 1.2818389, -2.9548755, -2.4526162, -1.0030507, 3.313137, -1.8719615, -6.977839, 4.587015, -0.20812428, 0.6176362, -2.5226798, 1.6537584, 1.0535446, 0.1687131, 0.12539, 2.351955, -0.62948394, 0.5153229, -1.483371, -0.05357921, -0.6367198, -4.435566, -1.0412523, -0.90299845, -5.4293838, 3.2729964, 1.8316314, 0.65426755, 1.3051615, 2.5043561, -2.6727161, -0.51436025, 0.97311413, 0.7939715, -1.4862274, 1.005588, -0.9745222, 2.339565, -1.0618652, -2.5664845, 0.19176674, 2.0832286, -3.072213, 2.863511, 3.0667965, -1.0378871, 0.41042465, 0.49165142, -0.92485595, 3.3757823, -1.9955772, -2.7065873, 1.1462497, 1.7917634, -0.27417037, -0.1864685, -1.9386586, 1.0884712, 0.13079014, 1.2462993, -5.453458, 3.6369412, -1.1260786, -3.395249, 2.7768104, 1.427433, -1.2371898, 2.7506876, -0.9577626, -0.4295229, -0.8855097, 0.2148636, 1.3714015, -2.9233427, 0.51835805, -0.8493191, -1.9358122, -1.2579556, -6.63626, 3.854603, -0.1117461, 0.36959448, 0.46802208, 2.4610949, 0.4212538, 1.9248259, -1.3794608, 2.7059088, 1.091836, -0.37234998, -0.15367316, 0.66511256, -2.183856, -0.85531116, 1.759064, 0.17355615}; static const std::vector<float> expected_dft2d_signal_size_results_2 = { 13.276131, 13.103815, 12.738701, 10.948082, 15.043786, 10.639425, 12.309946, 12.0796795, 12.158493, 13.553247, 11.337328, 14.506093, 15.737373, 13.892806, 8.118599, 11.390546, 0.5381909, 1.3429388, -0.95012784, -1.1195304, 0.09536344, -0.55193377, 0.8235052, 1.1744928, 2.2546248, -0.5024011, 0.8459603, 1.935416, 0.035488486, -1.2282364, -1.4211154, -0.3135983, -0.66958404, -0.80205476, 0.18983603, -4.1904936, 2.5468197, -4.6636457, -1.8926563, 0.83797, 0.5354397, 1.2066656, -0.9827197, -0.26218012, -0.11811975, -0.47548425, -1.3096738, -0.14677164, -1.4478016, -0.6947719, 0.13221443, 1.9222442, 2.1022313, 0.022250533, -2.556076, 0.25012577, -1.4458936, -1.5957909, 1.5911676, -2.0354197, -1.226819, -1.4284164, -0.8624064, 0.8694984, 1.7259736, 2.142551, 2.9860144, -2.218176, -0.29875967, -0.13178241, -1.4536587, -0.3602928, -0.46407622, 3.210749, 0.51306593, 0.12302576, 0.23700514, -1.691548, -1.3992835, -0.47753286, 0.42123353, 2.3340597, -1.8207074, 1.2562377, -1.1736273, 2.0084949, 0.1217463, -0.71629894, 1.7867224, -1.07459, -0.46577477, 1.7219527, -1.7225032, -0.5029696, 
-0.38728696, -1.1263466, -0.1653564, -0.8395238, 1.9879192, 1.0128975, 1.6438143, -1.188045, -1.9405808, -0.7925463, 1.2240379, -1.3743367, 2.5557013, -0.6062887, -0.99852824, 1.0005589, 2.1218362, -0.10017598, -0.75340086, -0.6988709, 0.36165902, -0.13171434, -0.5997418, 0.2789104, 0.52501214, -1.5872824, 0.0040130913, 3.0245605, -1.34743, 1.8258516, 0.73562264, -2.1836894, 1.3283473, 0.038802683, 0.7543056, -0.16875261, 2.0229807, -0.9240825, 0.85228086, -0.9607723, 0.111747265, 0.69834816, -2.5481172, -0.5289763, -0.5825273, 0.49042946, 2.3490484, -1.2178793, 1.9920914, 1.7048208, 0.4666462, 0.944975, 0.44091535, 0.3074813, 2.064869, -0.8005053, 1.2041193, -2.94473, 0.5702777, -1.4354767, 2.0478277, -0.8460398, -0.6000862, -0.27820373, -0.5037507, 1.0071316, 0.92846537, -2.9930074, -1.0054629, -0.013398886, 0.31482947, 0.4755001, -0.37606037, 0.35212934, -1.1386731, 0.46239042, 1.0611072, -2.298436, 1.4551028, 0.3905257, -1.427774, 0.32839155, 2.325178, 0.5964565, 1.8381126, 1.1603787, 1.6229985, 0.5935496, 1.6401024, 0.11901784, 1.2464942, -0.9543896, -0.45777965, 0.37926614, -0.20978808, -0.599459, 0.016958952, -0.672667, 0.16418982, -0.5287609, -0.25067472, 3.268743, -0.24228573, 0.15815735, 0.60354567, -2.0075817, -1.5436802, -1.8892074, -1.8738651, 3.1379614, -0.02182722, 1.8892908, 0.014802456, -0.90126705, 1.0671308, -0.61172044, -3.7529547, 0.5599045, 1.3885388, 1.425593, -0.61869496, 0.70866895, -2.8543148, -0.7371676, -1.1487055, 0.9924042, -2.1610408, 0.17656213, 1.2943268, 1.038288, -1.0522705, 0.62840223, 0.013757467, -0.108183146, 0.090308785, -0.60490894, -1.4133914, -0.7655864, 0.27606705, -2.2265353, 0.48509985, -0.37584236, -0.33032784, 0.112843275, -1.0625036, -0.6842445, 0.33563662, -0.64797795, 1.7336315, 0.24296224, -0.5694554, -2.861343, -2.8949642, -2.4447322, -0.45564282, -3.1046734, -2.7624254, 0.2988112, -2.9488277, 0.44427407, 0.11802065, 0.30626833, 0.5653825, 1.4086826, -0.7607212, -0.36166418, 0.68313515, -1.9396621, 0.552457, 1.787599, 0.700689, -0.075103045, 0.07150927, -0.2504335, 0.4869701, 0.8627523, -0.31363714, -3.4648805, -0.66644526, -4.2852783, 1.5780505, 2.0349314, -0.19262195, 0.6150762, 2.462497, 0.50935733, -1.2329292, -0.32018808, -0.43475008, -2.747131, 1.7122769, 2.9606051, -0.80461985, 2.0191946, -0.65316653, 0.10388613, 0.23720229, -1.2899456, -0.04140687, 1.0865793, 2.0807414, 0.47735834, 0.039139986, 0.043996334, -0.30400705, 0.8887136, -0.8448317, 0.4798069, -0.9909992, 0.3054396, 0.5550651, -0.62566614, 0.5333818, -0.56075704, -0.7468512, -0.19177368, 1.3637853, -1.408153, 0.8004116, -1.0253057, -2.049946, 0.6309613, -0.18224144, 0.7479264, 0.8692721, 1.0770676, -0.029760003, -1.4737389, 0.5606033, 1.1633584, 0.064171284, -0.8330089, 2.4364436, -0.6911875, -0.88729143, 0.2826275, 2.2328167, -0.4202252, 0.5975744, -0.28619608, 0.87971634, 1.0560545, 1.3650498, -2.12536, 1.7964764, 0.5196252, -1.3630133, -0.39718735, -0.16041839, -0.018146515, 0.644505, -0.7543384, 0.39063865, 2.3242626, 0.18570781, -0.33447194, -0.8394531, 0.06412578, 3.9682813, -1.1106789, 3.655115, -0.25420773, -0.12754375, 0.22988068, -0.13729298, -1.0493382, -2.288222, -1.1528109, 0.0876067, 1.4938865, 0.6729909, 0.6704601, 0.12003565, -0.64320755, 0.4911754, -1.6283998, -0.6994369, -0.9077794, -0.61947733, -0.8598275, -1.0383508, -3.1879523, 0.6378788, 0.5077131, -0.099037886, -0.48517847, 3.027417, 0.30669552, -2.9577267, 3.5811238}; static const std::vector<float> expected_dft2d_signal_size_results_3 = { 6.235876, 5.533142, -1.1544118, 0.082814306, 1.6796162, 
0.599913, -0.38694367, 1.4678228, 7.972584, 7.075794, 0.43039304, 0.068965316, 0.9859961, 0.9107602, -0.3760583, -1.7655449, 6.727815, 7.016465, 0.08040795, -0.27730647, -0.24473351, -0.28742158, -1.7693194, 0.9797714, 7.944232, 3.3173547, -0.970581, 0.2367388, 0.6001234, 0.73869, 0.78396153, 2.381297, 6.578037, 5.254505, 0.22289926, -0.50469196, 0.81392545, -0.6079184, -1.8542231, -1.4371969, 5.182848, 7.0792365, 1.7477604, 2.5558662, -0.4264726, -0.1144461, 0.13226332, -1.0341146, 1.5301563, -2.5123873, 0.55265933, -0.6910203, 0.8812098, -0.38004306, -0.86332023, -0.09744531, 0.022181302, 0.0048598824, -0.51295733, -0.1617982, -1.5669001, -0.89008003, 0.6439479, -0.7541246, 0.09064455, -1.8462008, 0.9278267, 0.029414916, -0.8102952, 0.32216373, -1.0387933, 0.8412336, -0.8255059, 0.24349298, -1.9935097, -0.10879634, -0.66851157, -1.0654241, 0.17485179, 0.15639359, 0.4183568, 0.9885812, 0.41185328, 1.5258284, 0.9891837, -0.30820954, -1.5363256, 1.1204776, -0.9755474, 1.7845668, 0.43930066, 0.14675245, 1.5588113, 0.501778, 1.3166, -0.54990625, 2.405043, -0.7384981, 2.135238, -0.6406439, -1.2197473, -1.2720709, -0.040249567, 0.3340183, -0.69982177, -0.71987194, -1.2410803, -0.98330003, 0.27524203, 0.9393509, -1.0231966, -0.89418787, -1.5361586, 0.7400294, -0.4797501, -0.20026025, 0.28715983, 0.4135941, 0.31160337, 1.45322, 0.8597624, 1.4913996, 0.118060485, 0.04817532, 0.24861759, -0.086301126, 0.7859658, 0.43219715, -0.15880768, -0.37046334, -0.058427185, -2.141556, -0.6823787, -0.56347156, 0.5663684, 0.14823253, 0.8226858, 0.33215827, -0.35511267, 0.6276708, -1.7566651, 0.27827078, -1.8008664, -0.68004596}; static const std::vector<float> expected_dft2d_signal_size_results_4 = { 16.022383, 15.693684, -1.6079187, -2.5211797, -0.5771674, 0.05202335, -0.20892644, 2.2068956, 3.953516, 0.34410894, 0.86641574, 1.6062691, -0.28576106, 5.39313, 0.29631865, 0.4604528, 19.077686, 18.455147, -0.6079974, -2.6523724, 1.5708399, -1.4308838, -0.12822962, 0.39401102, 3.578989, 1.7037572, 1.2102947, 1.9145584, -0.9519639, -1.1172407, 0.3622353, 0.057706878, 15.384689, 17.809353, 0.26336426, 1.1215441, -1.5200262, -1.8461118, 2.1862373, -0.8841291, 0.40979373, -1.3552474, -1.1214476, 0.25225526, -3.9052691, 0.16070783, 0.47419822, 0.84322566, 17.599613, 15.044547, -1.1941085, -3.8985834, -3.0904906, 0.7600602, 0.62432945, -2.1925209, 2.2314792, 2.2408223, -0.58379686, 1.5055354, -0.43706644, 3.6915889, 4.6112394, -1.8686706, 17.096416, 15.208671, 0.26360607, -0.43837237, -2.085052, -1.9803678, 0.5512935, -0.7585812, 1.3307043, -0.20800388, 0.64899683, -2.204393, -0.7423673, -0.53894585, -0.3806771, 1.2191849, 15.539573, 17.902292, -3.2540998, 1.6752851, 2.339312, 2.9226294, 1.2905213, -0.86016166, 0.2067287, -0.46054435, 0.845206, 4.1277647, -1.0080593, -1.8169241, -0.9365012, -0.9034538, 4.193228, -1.6578128, 1.0010736, 0.13147289, -0.58524096, -0.6177359, -0.5433382, -0.91701794, -0.34512973, -1.1603408, 0.82712364, -1.9715309, -1.3202763, 0.7925887, 1.5136781, 1.1887463, 0.07894993, 0.2629676, 0.47720033, -0.6143549, -0.35804892, -0.7087485, -0.9317253, -1.5261502, 0.012433767, -1.4340608, -2.3467493, -0.11921686, -0.7176709, -0.9460896, 1.1034908, 0.98994523, 1.2861536, -2.767478, -1.6691985, 0.5312684, 1.0004808, -1.4490643, 1.6858988, 1.9467357, -1.6837163, 0.61909866, 0.46550757, -1.377079, 0.03022629, 0.6342612, 1.8735541, 0.8252406, -0.46249843, -0.7055974, 1.0481787, 1.154592, -3.1120033, -2.5491147, 1.5315402, 0.24330461, -0.27145922, -1.1044617, -1.4539453, -1.0249764, 0.64292955, -0.25380075, 
1.3825793, -2.2184052, 0.58045006, 0.9588773, -2.0319357, 2.364854, 0.9267142, -1.7276525, -1.258892, 2.0606081, 0.17469049, 0.037098885, -2.7965002, -2.0710194, -0.066367805, 0.9294369, -2.0530214, 3.1182497, 0.7525606, 1.0215833, 0.7231224, -0.76680106, 3.8408241, 0.66357744, -1.4999955, 3.0092032, -0.9077265, 1.127433, -1.3998711, -0.45227063, 0.5452228, 1.1795954, -1.2053492, -2.3661485, -2.6609254, -1.0594559, 0.35282075, 0.15202153, 1.3333919, -1.7648137, 2.4441655, 0.42185998, 0.39381158, -1.9906393, -1.7294805, -0.1867466, -2.1970685, -3.444776, 2.6147847, 2.4903462, 0.3241663, 0.63434124, -0.5939137, 0.22729027, -0.8494644, 0.54981613, -1.7602688, -3.5211835, -0.07873356, -0.1510309, 2.882863, 1.0030751, -1.8153281, -3.1542742, -1.0870701, 0.0820543, -2.004652, -1.2403183, 0.71638805, 1.2452526, 0.34644645, 0.57313305, -4.812693, 0.5708023, -0.5506052, -0.13743436, -0.5721044, 1.3499863, -1.2981024, 0.0077233315, 3.7563763, 1.8381448, 1.2751684, 0.082980156, -1.1809301, 0.38965702, 1.9302576, 0.9432045, 0.5983294, 2.1648314, 0.8428469, 1.4977492, -0.7804593, 3.1802852, -2.1036444, 2.1590552, -1.9935954, -1.013362, -0.6313343, -0.019762993, -0.65470386, -0.48428255, 5.459507, -1.2114509, 1.7786237, -0.7012754, 0.17181313, -2.092629, -0.65052056, -0.41800314, -1.3612752, 0.039184153, 1.7546436, 1.3561778, 0.54778004, 4.72955, 1.3787336, -0.63834924, -0.019961834, -0.8124193, 3.769576, -0.23387921, -0.9165416, -0.99439096, 0.7847799, -2.6583774, -1.6555126, -2.815217, -0.18486327, -1.1745405, 2.2054548, -0.9455488, -1.5059378, -0.6565779, 1.2600167, -2.3821032, -0.26981768, -4.063028, -0.45026755, -1.834182, 2.906159, 1.1662107, 0.08017355, -0.8102168, 3.3697448, 0.37883544, -1.6882677, 0.71782255, -1.5592864, -0.37134206, -3.2504182, 1.2694929, -1.7516978, -0.12049621, 1.3492393, -0.40591407, 0.6280788, -1.0120616, 2.101255, -0.7040838, -1.1866775, -0.7236214, -0.8188298, 1.1767912, 1.1913915, 0.6185412, 0.9365735, 1.3821521, -1.8589101, 0.9124049, -0.83333874, -0.9172766, -2.0438805, 0.469943, 1.4887323, 0.24271065, -2.0128174, 1.3354056, 1.5705173, 0.6380501, 2.0443192, -1.407867, -0.6085211, 0.13712549, 1.9739413, 1.8154222, -3.012415, 0.83554983, 0.5828651, 0.14901292, -0.8463185, -0.047633052, -0.5389695, 0.41219842, -0.61554337, 0.45593047, -1.4136335, -3.6993682, 0.33988523, -1.7985406, 1.1319201, 1.5479275, -0.15549183, 0.1671977, 1.4381963, 1.5354156, -0.64630884, -0.2005459, -0.8759377, -2.1679733, -1.130661, 0.052789927, -2.0507228, 0.55622774, 0.4106456, 1.1299163, -0.15413862, -0.1215477, 0.5790715, 3.4873276, -0.38861734, 1.1647955, -0.7212916, -1.055282, -0.4247969, 2.521102}; static const std::vector<float> expected_dft2d_signal_size_results_5 = { 8.798115, 7.435564, -1.856497, 0.6967675, 1.9218704, 0.8142178, 0.36594042, 2.6711423, 10.539949, 8.829714, -0.12551129, -0.56251144, 1.2948477, 1.2702878, 0.34664208, -0.8751477, 7.8924866, 9.178925, 0.10114479, -0.25878292, 0.10166702, -1.4982777, -2.0095286, 0.6289345, 10.036068, 5.0261283, -1.8025928, -0.1469695, 0.32685816, 0.24962878, 1.3202658, 2.5126028, 9.108963, 7.5209365, -0.6332305, -1.1228236, 1.1512439, -0.38064647, -1.2855166, -0.8678701, 6.9929824, 8.779736, 1.4174356, 3.4299555, -1.5252162, 0.26026887, 0.6261386, -1.176516, 3.6406178, -3.1231828, 1.3689959, -0.68496245, 0.48079437, -0.796133, -1.0121856, -0.5319141, 1.5931433, -2.9828, 0.30207276, -0.6309908, 0.016751591, -1.5604393, -1.6819947, -0.6282208, 3.1937852, -3.9242973, 0.14749089, 0.43185335, -0.2489954, 1.0694493, -0.22732289, 1.4830055, 
0.12429348, -3.6180806, -0.078341216, -0.84553397, 1.3277338, -0.93593657, 1.1153932, 0.46926653, -0.01660335, -2.19146, 0.0050742924, 0.16887031, -0.13835368, -0.22266653, -3.1048126, 0.28844416, 0.7672101, -1.3757203, 1.4307625, -0.8216129, 0.5319145, 0.6184877, -0.42566353, 0.5484251, 3.4520624, -2.2700894, 0.48456866, 0.50225604, 1.4660833, -0.8105061, 0.26780176, 0.08084947, 3.162387, 0.07700956, -0.92056286, 0.15617561, -1.4525607, -0.030844629, 1.7492342, -1.8916597, 1.9068949, -0.5026501, 0.9467689, -0.5010593, -1.7409866, -0.12933461, -1.7828977, 1.2331147, 2.161806, 0.7638128, -2.586247, 1.2947125, -1.5071061, -0.28886533, 0.20840368, 0.28412366, 4.238263, 0.624877, 0.7483618, 1.7655015, 1.4296482, -1.0612158, -0.36541831, 0.8688911, 1.9176203, 2.7282064, 0.99068373, 1.069428, 1.4693688, 0.456266, 1.3150638, -2.101551, 2.3444064, 3.1458972, 0.9041112, -0.18337753, -0.4628029, -1.5899987, -0.96295846, 1.8805511, 0.9849721, 2.3036761, -0.21096146, -1.9477112, -0.28516757, 1.7867885, -1.6535637, -0.59629035, -1.8184824, 2.2060392, -0.3854983, -0.6308861, 1.3785319, 0.6067129, 0.0436399, 1.6603687, 1.0021243, 3.9316323, 0.0843097, -0.1404491, 0.55632716, -0.1349278, 1.0070219, 1.4757582, 0.6121656, 2.6698961, 1.4586289, -3.1324296, -0.5294236, -0.8063517, -0.08189219, 1.4184481, 1.7388614, 3.607197, -0.8525236, 0.45530546, -2.311192, -1.7529185, -1.2852949, -1.3684388, -1.2834095, -1.3844275, 1.6546304, -2.4121, -1.1708138, 0.62875193, -0.80945396, -1.2599692, -4.1222115, 2.373704, -1.2511115, 1.191483, -0.0833078, 0.1342597, -0.019162834, -1.6984439, -2.3708487, 2.8924727, 0.07090135, 0.21195525, -0.7699983, 0.69867724, -0.18473946, 0.4516183, -0.02681084, 2.3169198, -0.36051208, 0.13176969, -0.40343064, 0.42170894, -0.7431194, 0.20806183, -2.546812, 1.1634552, -0.6182922, 0.45351535, -0.045230594, 0.0048814267, 0.13067257, -1.9887246, -3.0333633, 1.5871837, 0.06688923, 1.4174063, 0.79458106, 1.5272335, -0.8169159, 0.32463634}; static const std::vector<float> expected_dft3d_signal_size_results = { 49.728928, 45.265354, -1.5653555, 1.8258603, 1.1586399, 1.1093023, 3.4622312, -3.068374, -1.6144468, 3.955975, 0.7007327, -12.262681, -1.6978629, 0.005776912, 2.1834314, 4.4191337, -0.41734934, 1.0642323, 2.1575608, -1.6815078, 13.372203, -10.360054, -1.2915184, -4.100011, -2.7976396, 2.0324564, -0.87680733, -5.934458, -0.45711768, 6.1547985, 7.1691613, 1.8739722, 3.5988164, 2.9202309, 1.6726661, 1.2348467, -0.100646675, -1.0087874, -0.26146126, -2.555842, -0.054751158, 11.550103, -3.3944821, -1.6603465, 1.6404665, 1.0531695, -3.6879756, 2.841142, -0.9843147, 8.387482, -5.754649, 6.6857386, -5.0392313, -0.09710765, -1.665895, 3.3698869, 2.4361796, -1.1781744, 9.256103, -2.0982075, -10.405502, 6.6115084, 4.9817243, -0.2607507, -1.7066463, 2.1667802, -0.124430835, 0.9151207, -3.847056, -1.9442594, 0.9573264, -3.7053752, 1.6387817, 0.4028574, 1.9858243, 1.7182893, -3.1588082, -0.20052435, -2.4657276, -0.34011212, -1.8019962, -3.6929393, -0.52881837, 0.70309484, -4.9345665, -0.51714915, 2.9993763, 1.8249977, 1.7067848, -0.3447387, 3.3535786, -3.3786156, -0.066281304, -3.208472, 2.4551795, -1.3978454, -0.6575866, -1.5296084, -2.9372633, 2.5082207, 4.9071603, -1.0131404, -0.35466444, 3.4279532, -0.5829768, 1.2354829, 2.7378159, -2.277992, -3.9665284, -1.6454957, -2.1138058, -6.9444704, 6.667962, -2.4544113, 0.8878386, 1.802904, 1.0337299, -0.83422416, 3.2563305, 4.0551376, 0.9553633, -0.1692334, -3.562861, 1.3307956, -0.59709, 1.4318806, -1.5622483, -2.975128, 0.5372238, 0.39611292, 5.9997783, 
-4.276651, 2.7130606, -4.0972834, 3.100973, -4.163089, 0.6286693, -2.0351622, -0.85036594, -1.843902, 0.81661844, 0.86445946, 0.66823524, -3.632821, -2.0656922, -2.796157, -1.0641904, 1.0206027, -1.1506183, 1.9784743, 0.97015285, 0.84469545, -1.6403551, -0.3596968, 3.9319327, -3.472964, 1.9130713, -1.3176007, -0.56489336, -0.21743466, 1.0296725, -2.3882782, 3.0200386, -1.5742422, 2.092337, 0.9063153, -2.189577, 1.3231093, -0.4461195, 1.6234826, 3.447591, -1.9438455, 2.598078, 2.8626804, 0.66956127, -4.3974814, -0.47485933, -0.22436841, 0.06777835, 5.5086756, 0.47679606, 2.6178322, -0.02077597, -0.9915889, 2.167689, -2.8916657, -1.9383656, 3.241434, -0.69579893, -0.63952047, 3.3214207, -2.407516, 5.2349954, -6.0218415, 5.819358, -7.4365478, 2.8206537, 2.7585952, 1.7629972, 0.044816617, 1.1392056, -4.696983, 0.45275614, 1.9134089, -3.8572056, -2.009159, 1.6307822, -0.9646755, -1.2407924, 2.6003554}; NGRAPH_TEST(${BACKEND_NAME}, dft1d_eval) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{1}, {2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft1d_results[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft1d_eval_1) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{1}, {2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft1d_results_1[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft1d_eval_float16) { auto data = std::make_shared<op::Parameter>(element::f16, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{1}, {2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f16, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f16, Shape{2, 10, 10, 2}); copy_data(backend_data, from_float_vector(input_data)); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = to_float_vector(read_vector<float16>(dft_output)); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft1d_float16_results[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, 
dft1d_eval_bfloat16) { auto data = std::make_shared<op::Parameter>(element::bf16, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{1}, {2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::bf16, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::bf16, Shape{2, 10, 10, 2}); copy_data(backend_data, bfloat16::from_float_vector(input_data)); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = bfloat16::to_float_vector(read_vector<bfloat16>(dft_output)); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft1d_bfloat16_results[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft1d_eval_i32) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i32, Shape{1}, {2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft1d_results[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_eval_1) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_results_1[j], 0.000062); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_eval) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_results[j], 0.000062); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_eval_float16) { auto data = 
std::make_shared<op::Parameter>(element::f16, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f16, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f16, Shape{2, 10, 10, 2}); copy_data(backend_data, from_float_vector(input_data)); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = to_float_vector(read_vector<float16>(dft_output)); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_float16_results[j], 0.002); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_eval_bfloat16) { auto data = std::make_shared<op::Parameter>(element::bf16, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::bf16, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::bf16, Shape{2, 10, 10, 2}); copy_data(backend_data, bfloat16::from_float_vector(input_data)); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = bfloat16::to_float_vector(read_vector<bfloat16>(dft_output)); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_bfloat16_results[j], 0.0003); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_eval_i32) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i32, Shape{2}, {1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_results[j], 0.000062); } } NGRAPH_TEST(${BACKEND_NAME}, dft3d_eval_1) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{3}, {0, 1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft3d_results_1[j], 0.0002); } } NGRAPH_TEST(${BACKEND_NAME}, dft3d_eval) { auto data = 
std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{3}, {0, 1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft3d_results[j], 0.0002); } } NGRAPH_TEST(${BACKEND_NAME}, dft3d_eval_float16) { auto data = std::make_shared<op::Parameter>(element::f16, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{3}, {0, 1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f16, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f16, Shape{2, 10, 10, 2}); copy_data(backend_data, from_float_vector(input_data)); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = to_float_vector(read_vector<float16>(dft_output)); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft3d_float16_results[j], 0.0005); } } NGRAPH_TEST(${BACKEND_NAME}, dft3d_eval_bfloat16) { auto data = std::make_shared<op::Parameter>(element::bf16, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{3}, {0, 1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::bf16, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::bf16, Shape{2, 10, 10, 2}); copy_data(backend_data, bfloat16::from_float_vector(input_data)); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = bfloat16::to_float_vector(read_vector<bfloat16>(dft_output)); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft3d_bfloat16_results[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft3d_eval_i32) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i32, Shape{3}, {0, 1, 2}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft3d_results[j], 0.0002); } } NGRAPH_TEST(${BACKEND_NAME}, dft1d_signal_size_eval) { 
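// signal_size pads or truncates each transformed axis: here the single axis -2
// (axes are counted without the trailing real/imaginary dimension, so -2
// resolves to axis 1) is zero-padded from 10 to 20 samples, which is why the
// output tensor below has Shape{2, 20, 10, 2}.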
auto data = std::make_shared<op::Parameter>(element::f32, Shape{2, 10, 10, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{1}, {-2}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{1}, {20}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{2, 20, 10, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{2, 10, 10, 2}); copy_data(backend_data, input_data); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft1d_signal_size_results[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_signal_size_eval_1) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {0, 2}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {5, 9}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{5, 6, 9, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_signal_size_results_1[j], 0.000021); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_signal_size_eval_2) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {0, 1}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {4, 6}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_signal_size_results_2[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_signal_size_eval_3) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {0, 2}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {3, 4}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{3, 6, 4, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); 
copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_signal_size_results_3[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_signal_size_eval_4) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {0, 2}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {4, 8}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_signal_size_results_4[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft2d_signal_size_eval_5) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {0, 2}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{2}, {5, 4}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{5, 6, 4, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft2d_signal_size_results_5[j], 0.00001); } } NGRAPH_TEST(${BACKEND_NAME}, dft3d_signal_size_eval) { auto data = std::make_shared<op::Parameter>(element::f32, Shape{4, 6, 8, 2}); auto axes_input = op::Constant::create<int64_t>(element::i64, Shape{3}, {0, 1, 2}); auto signal_size_input = op::Constant::create<int64_t>(element::i64, Shape{3}, {3, 7, 5}); auto dft = std::make_shared<op::v7::DFT>(data, axes_input, signal_size_input); auto f = make_shared<Function>(dft, ParameterVector{data}); auto backend = runtime::Backend::create("${BACKEND_NAME}"); auto dft_output = backend->create_tensor(element::f32, Shape{3, 7, 5, 2}); auto backend_data = backend->create_tensor(element::f32, Shape{4, 6, 8, 2}); copy_data(backend_data, input_data_1); auto handle = backend->compile(f); handle->call({dft_output}, {backend_data}); auto result = read_vector<float>(dft_output); size_t num_of_elems = result.size(); for (std::size_t j = 0; j < num_of_elems; ++j) { EXPECT_NEAR(result[j], expected_dft3d_signal_size_results[j], 0.00002); } }
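// Reference definition assumed by the expected values above: along each
// transformed axis of length N,
//   X[k] = sum_{n=0}^{N-1} x[n] * exp(-2*pi*i*n*k / N),
// with the innermost dimension of size 2 holding the (real, imaginary) parts.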
Fundamentals of aggregation in concentrated dispersions: fiber-optic quasielastic light scattering and linear viscoelastic measurements. Fiber-optic quasielastic light scattering and oscillatory shear rheology are employed to monitor structure formation in concentrated, aqueous dispersions of model, electrostatically stabilized polymer colloids. A regime of fractal scaling is observed in the vicinity of the gel point, as signified by a power-law decay of the autocorrelation function with delay time and a power-law dependence of the shear moduli on frequency. The power-law exponents and gel times extracted from the two techniques are compared for the first time on the same dispersion. The details of structure formation and aggregation kinetics in these concentrated dispersions are compared to the universal behavior of DLCA and RLCA aggregation observed at dilute concentrations in these and other dispersions, as well as to polymer gelation.
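Both probes reduce to extracting a power-law exponent from data that is linear on log-log axes. A minimal sketch of that extraction step is below (a generic least-squares fit on log-transformed samples; the helper name and interface are illustrative, not from the study):

#include <cmath>
#include <cstddef>
#include <vector>

// Estimate the exponent p in y ~ A * x^p by ordinary least squares on
// (log x, log y): the fitted slope is the power-law exponent.
// Assumes all x[i], y[i] > 0 and x.size() == y.size() >= 2.
double powerLawExponent(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        const double lx = std::log(x[i]);
        const double ly = std::log(y[i]);
        sx += lx;
        sy += ly;
        sxx += lx * lx;
        sxy += lx * ly;
    }
    return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}

Applied to the autocorrelation function versus delay time, this yields the scattering exponent; applied to the shear moduli versus frequency, it yields the rheological one. The comparison made in the study is between these two estimates near the gel point.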
The present invention pertains to a process for suppressing the influence of roll eccentricities on the strip thickness of the rolled material in a roll stand.

Eccentricities that influence the quality of the strip to be rolled are often found in roll stands due to unevenly machined backup rolls or inaccurate bearing alignment. These eccentricities are manifested in the strip at the rotational frequency of the roll affected by the eccentricity, usually the backup roll, depending on the rigidity of the roll stand and of the material to be rolled. The frequency spectrum of the eccentricities and of their negative influence on the strip consists essentially of the fundamental frequencies of the upper and lower backup rolls; higher harmonic frequencies also appear, but only with reduced amplitudes. Due to the slightly different diameters and rotational speeds of the upper and lower backup rolls, the frequencies of these backup rolls may differ.

In a process described in European Patent B-0 170 016, the roll eccentricities of the upper and lower backup rolls are simulated through the sum of the output signals of two oscillators connected in a feedback loop, and supplied to a position or thickness control for the roll stand to suppress the influence of roll eccentricities on the exit thickness of the rolled material. The oscillators work by the monitor principle: the frequencies of their output signals are set according to the measured rotational speed of the rolls, while the amplitude and phase of the output signals are corrected according to the difference between the summed output signal of the two oscillators and another sum signal, obtained from the measured rolling force multiplied by the sum of the inverse values of the rigidities of the roll stand and of the rolled material, plus the measured actual value of the roll screw-down. The oscillators can be implemented as digital filters connected to the otherwise analog position or thickness control of the roll stand through analog/digital and digital/analog converters. Assuming that the dynamics of the position control (i.e., the dynamics of the control circuits and actuators used for regulating the screw-down position of the rolls) are negligible, the process in this European patent provides proper compensation for roll eccentricity. The measurement of the rolling force, and thus the compensation for roll eccentricity, can however be influenced by friction in the roll stand.

In a process described in U.S. Pat. No. 4,648,257 for compensating for roll eccentricities, the thickness of the rolled material is measured after its exit from the roll stand and used, together with the measured instantaneous rotation angle of at least one roll, for the ongoing calculation of estimated values for thickness changes in the rolled material. These estimated values are corrected on the basis of the measurement delay resulting from the distance of the thickness measurement point from the roll gap (i.e., the point of thickness change of the rolled material), converted into the corresponding rotation angle of the roll. The corrected estimated values, referenced to the rotation angle, are then supplied to the position or thickness control to compensate for eccentricities. The exact determination of the instantaneous rotation angle of the rolls is, however, considered relatively difficult, especially in the rough environment around the roll stand.
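For illustration only, one way the oscillator-based compensation described above might be realized digitally is sketched below; the LMS-style correction and all names here are assumptions, not the patented algorithm. The oscillator frequency tracks the measured roll speed, while the amplitude and phase, represented as sine and cosine coefficients, are slowly corrected from the residual error between the simulated and the measured eccentricity-related signal.

#include <cmath>

// Illustrative sketch of one adaptive eccentricity oscillator (assumed
// LMS-style correction; not the patented algorithm).
struct EccentricityOscillator {
    double a = 0.0;   // in-phase (sine) coefficient, encodes amplitude/phase
    double b = 0.0;   // quadrature (cosine) coefficient
    double phi = 0.0; // accumulated roll angle in radians
    double mu;        // small adaptation gain, e.g. 0.01

    explicit EccentricityOscillator(double gain) : mu(gain) {}

    // rollSpeed: measured roll speed in rad/s; dt: sample time in s;
    // error: residual between the measured eccentricity-related signal and
    // the oscillator's previous output.
    double update(double rollSpeed, double dt, double error) {
        phi += rollSpeed * dt;        // frequency follows the roll speed
        const double s = std::sin(phi);
        const double c = std::cos(phi);
        a += mu * error * s;          // correct amplitude and phase via
        b += mu * error * c;          // projections of the residual error
        return a * s + b * c;         // simulated eccentricity signal
    }
};

The summed outputs of two such oscillators, one per backup roll, would form the compensation signal supplied to the position or thickness control.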
An object of the present invention is to provide a process for compensating for roll eccentricities without the need for measuring the rolling force or the instantaneous rotation angle of the rolls.
NEW YORK — Powerful and influential women from all walks of life smiled and hugged at the 2018 Glamour Women of the Year Awards on Monday, but the devastating wildfires on the other side of the country were on a lot of minds. Actress Alicia Silverstone had first-hand knowledge of how devastating the fires are. “I know a lot of people who were affected and evacuated and some are still holding ground and not leaving. It’s really bad,” she said. The awards — celebrating “game changers, rule breakers and trailblazers” — included honorees such as Chrissy Teigen, Janelle Monae, Emma Gonzalez and Aly Raisman, as well as presenters such as Claire Danes, John Legend, Lupita Nyong’o and Padma Lakshmi. But the wildfires came up again and again, with some attendees saying the flames are part of a larger issue — global warming.
PM shares global outlook during fireside chat with Alibaba boss Jack Ma. E-commerce giant will court third-party sellers at a New York event, to try to prevent them from defecting to its competitors. China’s biggest e-commerce company tries to appeal to a growing middle class demanding premium products. China's biggest e-commerce company increased its projection for fiscal 2017 revenue growth. It appears that Trump’s threats on Twitter of heavy tariffs and taxes are having some effect on global corporations. If its $4.8 billion deal with Verizon closes, the company will get a rename, and new faces on its board of directors. The e-commerce giant has beaten analyst estimates for five straight quarters. China may become the largest cross-border business-to-consumer e-market before the decade is out. What are the barriers to tapping that market? Justin Trudeau joined Jack Ma, the founder of e-commerce giant Alibaba, to launch an online storefront to sell Canadian products to Chinese consumers. Prime Minister Justin Trudeau and Alibaba founder Jack Ma announced the launch of a ‘pavilion’ that will brand Canadian goods and services for Chinese consumers. New media company "Thrive" is expected to provide lifestyle content contributed by celebrities and bloggers, and will be backed by Alibaba’s Jack Ma. In a move reminiscent of Jeff Bezos' acquisition of The Washington Post, e-commerce giant Alibaba is buying an English-language newspaper. But why? The e-commerce giant’s purchase of the city’s most prominent English-language publication helps it diversify away from Internet shopping. Alibaba said in a statement on Friday that the purchase will include the flagship newspaper and other related businesses, including magazines and recruitment. Yahoo says it won't spin off its $31 billion stake in e-commerce giant Alibaba. Experts say taxes likely influenced the decision. The co-founder of Microsoft has recruited billionaires such as Amazon's Jeff Bezos and Facebook's Mark Zuckerberg to boost R&D in clean energy.
/**
 * Generates a random test serviceability response.
 * @param address Location postal address.
 * @return Array of serviceability results from cable providers.
 * @throws KyrioException returned by the server.
 */
private ServiceabilityResult[] mockDetermineBusinessServiceability(Address address)
        throws KyrioException {
    // Simulate server latency.
    try {
        Thread.sleep(1500);
    } catch (InterruptedException ex) {
        Thread.currentThread().interrupt(); // Restore the interrupt status.
    }

    // Optionally simulate a random server error when test errors are enabled.
    if (this._account.getEnableTestError() && RandomData.chance(1, 100))
        throw RandomData.nextError();

    // Generate a random (possibly empty) set of results.
    int resultCount = RandomData.nextInteger(0, 2);
    ServiceabilityResult[] results = new ServiceabilityResult[resultCount];

    for (int index = 0; index < resultCount; index++) {
        Provider provider = RandomData.nextProvider();

        ServiceabilityResult result = new ServiceabilityResult();
        result.setLocationId("" + RandomData.nextInteger(99999));
        result.setLocationType(RandomData.pick(new LocationType[] {
            LocationType.Unknown, LocationType.Residential, LocationType.Business
        }));
        result.setProviderId(provider.getId());
        result.setProvider(provider.getName());
        result.setSiteStatus(RandomData.pick(new SiteStatus[] {
            SiteStatus.OnNet, SiteStatus.OffNet, SiteStatus.NearNet,
            SiteStatus.SurveyRequired, SiteStatus.Proximity
        }));

        results[index] = result;
    }

    return results;
}
Retrospective comparison of surgical techniques to prevent secondary opacification in pediatric cataracts. PURPOSE To evaluate the effect of different surgical methods for management of the posterior capsule and anterior vitreous on the rate of posterior capsule opacification in pediatric cataracts. METHODS Charts of 34 children (47 eyes) aged 40 days to 18 years (mean: 8.5 years) who had primary cataract surgery with or without posterior chamber intraocular lens (IOL) implantation during the past 5 years were reviewed. In 26 eyes, cataracts were managed with a posterior continuous curvilinear capsulorhexis, and in 21 eyes, the posterior capsule was left intact. Follow-up averaged 10 months (range: 6.5 months to 5 years). RESULTS Visually significant secondary cataract developed in nine eyes with intact posterior capsules, and seven eyes required Nd:YAG laser posterior capsulotomy. The average time from cataract removal to YAG capsulotomy in the intact-capsule group was 4 months. The visual axis remained clear in all eyes that had posterior continuous curvilinear capsulorhexis, with or without a posterior chamber IOL. Complications such as fibrinoid membrane, stromal edema, posterior synechiae, updrawn pupil, and transient glaucoma occurred in both groups at a similar rate. CONCLUSION Primary posterior continuous curvilinear capsulorhexis is an effective method for preventing secondary cataract formation in pediatric cataracts.
Published: Feb. 2, 2014 at 03:03 p.m. Updated: Feb. 2, 2014 at 05:15 p.m. EAST RUTHERFORD, N.J. -- Derrick Brooks had just been announced as a member of the Pro Football Hall of Fame Class of 2014. Warren Sapp couldn't wait to congratulate his ex-teammate with the Tampa Bay Buccaneers. I was backstage at "NFL Honors" as a production assistant led Sapp down a dark corridor of Radio City Music Hall in search of Brooks. It was there that Sapp encountered Michael Strahan, another new Hall of Famer, and the man Sapp had been verbally sparring with all week. As you probably know, Sapp has openly disparaged Strahan's career achievements for some time. The criticisms of Strahan's game apparently evaporated the moment the former Giants star got the nod into Canton. "And if he tells the story any different than that, he lying to you, America, because I sure did apologize. I'm going to beg for forgiveness because there's a party in Canton and I promise you I'm not going to miss it." I didn't hear what Sapp said, but I certainly saw what appeared to be a genuine moment between the two men. Sapp didn't want to let go of Strahan, holding him by his shoulders for close to a minute as they spoke closely. Strahan appeared receptive to the pleas for forgiveness before Sapp bounded away. It was a nice moment between two Hall of Famers who shouldn't have been taking shots at each other in the first place.
package edu.cmu.hcii.whyline.source;

import java.util.List;

import edu.cmu.hcii.whyline.bytecode.Branch;

/**
 * @author <NAME>
 */
public class BranchExpression extends ConsumerExpression<Branch> {

    public BranchExpression(Decompiler decompiler, Branch consumer) {
        super(decompiler, consumer);
    }

    public String getJavaName() {
        return code.getReadableDescription();
    }

    public boolean alwaysAppearsInSource() {
        return true;
    }

    public boolean mayAppearInSource() {
        return true;
    }

    protected Token parseHelper(List<Token> tokens) {
        // Parse the first operand of the branch, then the branch itself.
        parseArgument(tokens, 0);
        Token last = parseThis(tokens);
        // Two-operand comparisons (e.g., if_icmpeq) also consume a second argument.
        if (code.getNumberOfOperandsConsumed() > 1)
            return parseArgument(tokens, 1);
        else
            return last;
    }
}
ENDOPHYTIC FUNGI FROM THE BRAZILIAN FLORA AND THEIR EMPLOYMENT IN BIOTRANSFORMATION REACTIONS Biotransformation reactions are used as a successful alternative to derivatization by traditional chemistry because they lead to uncommon reactions that would hardly be achieved by chemical synthesis. A wide diversity of compounds may be metabolized by fungi, yielding chemical derivatives through selective reactions that work under ecofriendly conditions. Endophytic fungi live in symbiosis with the healthy tissues of plants. The employment of endophytic fungi as enzymatic sources for biotransformation reactions is very promising, since these microorganisms come from uncommon and underexplored habitats. Environmental conditions directly influence the composition of the endophytic microbiota and its genetic expression, which can also change the production of enzymes. The extraordinary richness of Brazilian biodiversity is a key factor in the diversification of the endophytic community. The present review presents a mapping of the biotransformation reactions catalyzed by Brazilian endophytic fungi. Our findings contribute both to the appreciation of these fungi in chemical derivatization and to the preservation of Brazilian species.
ASND earnings call for the period ending June 30, 2018.

Good day, ladies and gentlemen, and welcome to the Ascendis Pharma second-quarter 2018 earnings conference call. [Operator instructions] As a reminder, this conference call is being recorded. I would now like to introduce your host for today's conference, Scott Smith, chief financial officer. Sir, you may begin.

Thank you, operator. Thank you, everyone, for joining our second-quarter 2018 financial results conference call today. I'm Scott Smith, chief financial officer of Ascendis. Joining me on today's call are Jan Mikkelsen, president and chief executive officer; and Dr. Jonathan Leff, chief medical officer. Before we begin, I would like to remind you that this conference call will contain forward-looking statements that are intended to be covered under the safe harbor provided by the Private Securities Litigation Reform Act. Examples of such statements may include, but are not limited to, our progress on our pipeline candidates and our expectations with respect to their continued progress, statements regarding our strategic plans, our goals regarding our clinical pipeline of rare disease endocrinology programs, statements regarding the market potential of our pipeline candidates and statements regarding the planned regulatory filings. These statements are based on information that is available to us today. Actual results or events could differ materially from those in the forward-looking statements, and we may not achieve our goals, carry out our plans or intentions or meet the expectations or projections disclosed in our forward-looking statements, and you should not place undue reliance on these statements. Our forward-looking statements do not reflect the potential impact of any licensing agreements, acquisitions, mergers, dispositions, joint ventures or investments that we may enter into or terminate. We assume no obligation to update these statements as circumstances change, except as required by law. For additional information concerning the factors that could cause actual results to differ materially, please see the Forward-looking Statements section in today's press release and the Risk Factors section of our annual report on Form 20-F filed on March 28, 2018. On today's call, we will discuss our second-quarter 2018 financial results and provide a business update. Following some prepared remarks, we will then open up the call to questions. I will now turn the call over to Jan Mikkelsen, our chief executive officer. Jan?

Thanks, Scott, and good afternoon. In this quarter, we continued to execute on our strategic goals, advancing toward our vision of building a fully integrated biopharma company. For TransCon Growth Hormone, we are completing enrollment in the fliGHt Trial, and our heiGHt Trial is progressing as planned toward top-line Phase 3 results expected in Q1 2019. For TransCon PTH, we have completed the Phase 1 study and recently outlined a change in the development program. This involves a Phase 2 trial with a planned long-term extension and the expansion of the Phase 3 program to a global pivotal trial incorporating trial sites in Japan and possibly other Asian countries. Our updated plan is based on an analysis of how to strengthen the product profile, initial feedback from the FDA on how to reduce development risk, and an evaluation of the market potential for TransCon PTH in Asia.
In the Phase 2 trial, we plan to measure not only the PK and PD of different fixed doses of TransCon PTH in adult patients with hypoparathyroidism but also a titration schedule designed to completely discontinue activated vitamin D and calcium supplements. We believe that TransCon PTH also has the potential to address a large market in the Asian countries. In Japan alone, our research indicates that there are more than 30,000 patients with hypoparathyroidism. Therefore, we plan to conduct a global Phase 3 trial incorporating sites in Japan and possibly other Asian countries. We believe this new approach will accelerate the regulatory filings in the Asian markets by several years, thereby broadening the worldwide market potential, while only moving back our U.S. filing by less than one year. We believe our new plan also reduces development risk for the TransCon PTH program by testing the proposed titration protocol before Phase 3 initiation and providing additional guidance for our Phase 3 power calculation. With this updated development strategy, we believe we are strengthening the commercial product: we can collect and learn from long-term extension data, potentially improve the label, decrease development risk and potentially broaden the commercial opportunity via geographic expansion.

For our last product opportunity, TransCon CNP, we are moving through the Phase 1 study, which we plan to complete as planned in the fourth quarter of this year.

Now I would like to focus on our core strategies and reflect on how Ascendis plans to create sustainable growth as our company matures. Our vision is to create a biopharma company with several therapeutic areas, each containing multiple independent products created by our technology platform, and not to establish Ascendis as a one-trick pony with a single product. We have built a pipeline of three independent product opportunities in endocrinology rare disease. All three of these wholly owned product candidates have the opportunity to provide sustainable growth through label expansion. We're also planning to create further growth by building new therapeutic areas outside of endocrinology rare disease. Each of these new therapeutic areas will contain multiple independent product opportunities.

Why do we believe we have strong fundamentals for sustainable growth? We have a unique technology platform, which can continue to deliver innovation and does not currently face significant competition. We continue to advance the TransCon technology and also to expand the platform to new areas such as localized delivery. Over the last several years, our platform has evolved from systemic delivery of an unmodified parent drug, such as with TransCon Growth Hormone, TransCon PTH and TransCon CNP, to also include localized delivery of an unmodified parent drug, tailor-made for specific unmet medical needs. Our TransCon platform's localized delivery capabilities have been developed through both internal efforts and our collaborations. I believe our localized delivery platform has now reached a stage where it can be applied across different therapeutic areas, and we are investigating these as we continue expanding our pipeline. In addition, we have a strong culture of innovation, one that values science and drives our product development efforts. We have already applied this mindset and expertise to build our pipeline of three independent product candidates in endocrinology rare disease.
By combining these essential fundamentals, our unique technology platform and our culture of innovation, we expect to grow through a consistent stream of high-value, differentiated product opportunities as our company matures into a leading biopharma company. Another driver of sustainable growth and value creation at Ascendis is our plan to become fully integrated. It is logical that we forward integrate in endocrinology rare disease as a commercial company because we can realize strong synergies by having multiple products in a single therapeutic area. We are already making important progress toward this objective by establishing an initial commercial team, led by Tom Larson. They are undertaking important projects to gain a deep understanding of the market dynamics of each of our product candidates, including physician, patient and market access research that will help us define future product positioning. Our goal is to build multiple independent therapeutic areas, each with a diversified pipeline. We have built our first pipeline in endocrinology rare disease, a pipeline that is diverse and presents multiple potential label expansion opportunities to support further growth. Now we intend to put in place an additional source of growth with our pipeline in a new therapeutic area. We expect to disclose this next therapeutic area at the beginning of next year. We believe this approach, with our technology platform and a culture of innovation at the core, can successfully create sustainable long-term growth.

Now Jonathan will review our clinical progress.

Thanks, Jan. I am pleased to provide an update today on recent pipeline developments, starting with TransCon Growth Hormone. We are completing enrollment in our fliGHt, or switch, trial in the coming weeks. As a reminder, fliGHt is enrolling subjects who switch from daily growth hormone to weekly TransCon Growth Hormone, with a follow-up of six months. Results from the trial will strengthen our safety database and provide guidance for switching from daily to weekly growth hormone. Subjects are then provided the opportunity to roll into enliGHten, our long-term extension trial. In fliGHt, we have enrolled some subjects below three years of age. This will provide information on the utilization of TransCon Growth Hormone in patients younger than those enrolled in the heiGHt Trial. In parallel, our ongoing Phase 3 heiGHt Trial continues as planned. To date, all subjects completing the 12-month heiGHt Trial have chosen to enter the enliGHten long-term extension. We now have many dozens of subjects in enliGHten, data from which will be a key component of our filing package. Finally, we continue to work toward building awareness of our program among the pediatric endocrine community. Next month, we are participating in two medical conferences. We are a proud supporter of the Growth Hormone Research Society Conference, a gathering of top opinion leaders and scientists in the field, and we plan to present two abstracts on our program at the European Society for Paediatric Endocrinology meeting. We are happy with the progress of our TransCon Growth Hormone program and increasingly excited about the prospects of a once-weekly product that, importantly, can provide the same efficacy, safety and tolerability as daily growth hormone.

Turning now to TransCon PTH. We are making progress on both the clinical and regulatory fronts.
We have recently provided an update on the PTH development program, following our ongoing review of the clinical, commercial and regulatory landscape. We now plan to conduct a randomized, placebo-controlled Phase 2 trial to be initiated in the first quarter of 2019. The trial will follow subjects for approximately four weeks of treatment. We will evaluate pharmacokinetics as well as serum and urinary calcium levels in subjects with hypoparathyroidism treated with different fixed dosing regimens of TransCon PTH. We will also assess the ability to down-titrate calcium and active vitamin D supplementation. Following the trial, subjects may then enter into a long-term extension trial. We will also investigate patient-reported outcomes, with a goal to incorporate those measures into our pivotal trial. We expect top-line data from the Phase 2 trial in early 2020. With this updated development strategy, we believe we are strengthening the clinical program and paving the way for a broader commercial opportunity and potential benefit to more patients with hypoparathyroidism. Finally, we continue to communicate the potential of TransCon PTH through our ongoing communications to the medical community. We intend to present data at the American Society for Bone and Mineral Research Conference in late September. This will include a poster presentation on the final results of the Phase 1 trial, summarizing data from the full set of MAD cohorts, as well as a second poster on bone turnover markers from the Phase 1 trial.

Our TransCon CNP program is also progressing, and recruitment continues for the ongoing Phase 1 trial. We are now dosing the second-to-last cohort, and we are on target with our plan to complete the trial during the fourth quarter of 2018. As a reminder, this trial is evaluating the safety, tolerability and pharmacokinetics of TransCon CNP in healthy volunteers. Our goal is to demonstrate that we can achieve continuous exposure to CNP at levels designed to optimize efficacy, without adverse cardiovascular effects, with a convenient once-weekly dose. In particular, we are looking to evaluate the cardiovascular risk profile. Our planning for the initiation of a natural history study in achondroplasia is also well under way, with a goal of trial initiation this year. We believe this trial, which will take place both in the U.S. and Europe, could help to enhance enrollment of future proof-of-concept studies in subjects with achondroplasia. We are excited about all three of our wholly owned product candidates for rare endocrine diseases. Each one continues to advance as we move forward with our clinical programs, paving the way to offer patients and physicians new and differentiated therapies.

Now Scott will provide a financial update.

Thank you, Jonathan. Turning to our financial results for the three months ended June 30, 2018, let me review some highlights. For the second quarter, we reported a net loss of EUR 22.8 million, or EUR 0.55 per basic and diluted share, compared to a net loss of EUR 30.7 million, or EUR 0.94 per basic and diluted share, during the same period in 2017. The second-quarter 2018 results reflect financial income of EUR 22.6 million due to foreign currency exchange rate fluctuations on our cash holdings. Research and development costs for the second quarter were EUR 40.2 million compared to EUR 21.9 million during the 2017 quarter.
The higher costs were primarily attributable to, for TransCon Growth Hormone: continued execution and expansion of our Phase 3 clinical program, including the heiGHt, fliGHt and enliGHten trials, and the ongoing development of the auto-injector; costs associated with our Phase 3 program clinical supply; and ongoing preparation for the manufacturing of TransCon Growth Hormone validation batches. These batches are required as part of the regulatory approval process and will be recognized as R&D costs when incurred; however, they go into inventory and may be used either for clinical trial supply or, upon approval, for commercial sale. For TransCon PTH, costs related to the Phase 1 clinical trial and Phase 3 enabling activities, including manufacturing and device development. And for TransCon CNP, costs associated with the execution of the Phase 1 trial and ongoing Phase 2 enabling activities.

General and administrative expenses for the second quarter of 2018 were EUR 5.2 million compared to EUR 3.2 million during the second quarter of 2017. We ended the second quarter with cash and cash equivalents of EUR 352.6 million and 41,841,590 ordinary shares outstanding.

We expect the increase in R&D costs to continue throughout the remainder of 2018 as we advance our wholly owned internal pipeline programs and invest in the TransCon technology platforms. R&D costs are expected to include, for TransCon Growth Hormone, costs associated with our Phase 3 program and the manufacturing of validation batches, as well as development and manufacturing of the auto-injector, which will be used for administration. For TransCon PTH, ongoing IND-enabling activities, including nonclinical tox studies, manufacturing of clinical supply and validation batches, and regulatory and device development activities, as well as preparation for the initiation of our Phase 2 clinical trial. And finally, for TransCon CNP, costs associated with the ongoing Phase 1 trial and Phase 2 enabling activities, including nonclinical tox studies and manufacturing.

Our pipeline continues to advance, with three wholly owned, independent rare disease endocrinology product candidates in clinical development, each representing a potential worldwide market opportunity greater than $1 billion. We plan to continue to create long-term sustainable growth as we apply our innovative TransCon technology and product development algorithm in other therapeutic areas. Operator, we are now ready to take questions.

Thank you. [Operator instructions] And our first question comes from the line of Jessica Fye with JPMorgan. Your line is now open.

Hey, guys, good afternoon, and thanks for taking my questions. A couple from me. First, with respect to the new therapeutic area that'll be disclosed at the beginning of next year, can you provide a bit of a framework about what that disclosure might look like and how many preclinical programs we might hear about with that initial disclosure? I'm also wondering if you could elaborate on your comments around leveraging the TransCon platform for localized delivery. And lastly, can you talk about the status of completing the manufacturing of validation batches for TransCon GH?

Thanks, Jess. Some really good questions. I hope I can answer most of them. One of the questions was, how do we really select a new therapeutic area? One of the elements of Ascendis Pharma is always to have a strong focus on the patient. So we start by really looking at where we have a major unmet medical need as the fundamentals.
Then we see how we can build up a pipeline of multiple product opportunities where we can balance the risk profile, high risk and low risk, meaning that we also like to use parent drugs with already established, proven efficacy and safety. Then we integrate these elements together and see if we can make a highly differentiated product opportunity that is basically nearly impossible for anyone else to develop. And the uniqueness we have now is that you have seen how we have enabled the soluble products; these are products like TransCon Growth Hormone, TransCon PTH and TransCon CNP. And now we are adding one more arm of our technology platform, where we can make localized delivery of proteins, peptides and small molecules, and we have now started to establish data out to even more than half a year, where we can see these release profiles of the active drug. And in doing that, we have actually been through a lot of different therapeutic areas, and we have now selected one where we're starting to execute, making what I call a proof of concept of the entire use of TransCon technology in this specific area. And what we would like to disclose when we come to the early part of next year is the rationale for why we selected this area, how we see a huge opportunity to address major unmet medical need, and how we at Ascendis can really develop a pipeline of highly differentiated product opportunities that will be nearly impossible for anyone else to develop. That is the data we would like to show you from the beginning. That was one question. The other question you asked me was related to the validation batches for growth hormone, and the validation batches for growth hormone are progressing according to our plan. There is basically nothing more to say than that. It's an activity that we initiated 18 months ago now, so we're really just progressing through the entire supply chain, doing it right in each single step with every process control. Beyond this, any questions? Thanks, Jan. So just to clarify, is the new therapeutic area that you'll disclose going to be using this localized delivery approach? Or are those two kind of distinct priorities? I think it is likely that we're building up a pipeline where we apply the two arms of our TransCon technology, potentially both in synergy but also in a position that just gives us a unique opportunity to apply and develop completely new profiles of product opportunities that we didn't have an opportunity for before. OK. So is that to say that for some products within the new therapeutic vertical, they might use the local delivery and some might use systemic delivery? Yes, that will potentially be the case. Thank you. And our next question comes from the line of Michelle Gilson with Canaccord Genuity. Your line is now open. Hi, thanks for taking my question. Regarding the data that we're going to see in the fourth quarter for TransCon CNP, can you talk a little bit about the biomarker data that you'll be reporting? And specifically, on cGMP, should we expect a growth response? And then what data are you looking for that would help you select the dosage for the next study in achondroplasia patients? And is there a cutoff in free CNP or cGMP that you're looking for? OK. Thanks. This is Jonathan. Thanks for the question, Michelle. So, in order: the CNP Phase 1 trial will tell us a lot. Remember, this is in healthy volunteers.
So first and foremost, it will evaluate the safety of the drug, which we assume is going to be very safe. And to date, it's been very safe. But in particular, we're going to evaluate the cardiovascular risk profile, paying attention to hypotension and tachycardia, since a major tenet of the target product profile is that we can give a once-weekly drug that is safe and that has elevated levels of CNP throughout the dosing interval for the possibility of enhanced efficacy. So it all begins with the cardiovascular risk profile, which we're carefully evaluating. Secondly, we're evaluating whether the PK profile, in fact, supports the once-weekly dosing. We anticipate that it will, as it has in animals, and I'll remind you that our animal data has always been exquisitely predictive of human data. And then finally, we'll evaluate the free CNP levels, so not just the prodrug but how much CNP, the active moiety, is actually available, free, during 24 hours at levels that we think can lead to appropriate efficacy. So those are the main deliverables for the Phase 1 study. We will be looking at some other biomarkers, cyclic GMP among them, but those are not really the critical factors. Since these are healthy volunteers, we, of course, do not expect to see any growth in these subjects, and it's short-term single dosing. And they're adults, so they've finished growing. In terms of the dose, the dose ranges that we are evaluating will be very broad, which will provide enough support for us to choose our initial dose selection for our Phase 2 proof-of-concept study. OK. And then could you maybe talk about some of your hypotheses for resistance mechanisms in achondroplasia patients to CNP therapy? And how an increased therapeutic window and ability to dose higher with TransCon CNP might overcome some of those challenges? Sure. So we don't believe in the hypothesis of resistance, and we believe that the scientific data that's been generated in preclinical models argues strongly against that. You can take mice, for example, that are replete in CNP and give them more CNP and cause growth. And the more CNP you give them, the more growth you see. So we think that you can overcome it; there is no evidence of resistance, actually. That remains purely a theoretical concern. And consider how we're treating diabetes, specifically Type 2 diabetes; people say you have insulin resistance there. But basically, the main therapy for Type 2 diabetes is still insulin. And I think it's well known you can have some kind of low level of resistance, as people have talked about. But the basic treatment, as we have seen, is to overcome this resistance with a higher level. And the reason you have a level of resistance in diabetes is that the organ is in some way responding with a lack of efficacy, and this is why the body compensates for this hormone; you just give more, and then you get the right effect. Thank you. And our next question comes from the line of Adam Walsh with Stifel. Your line is now open. Hi, guys. This is Neil Carnahan on for Adam. On TransCon PTH, can you share some of the feedback from the post-Phase 1 meeting with the FDA? What's changed? And then can you discuss the rationale behind running a Phase 2 study now instead of going from Phase 1 directly into Phase 3?
I do think that if you go back to what we explained before, we integrated a lot of different elements into the change of the development plan, and we made this change because we feel that we can get a much stronger product. We can be in a position to really ensure that this product reaches many more patients. So it's done from the assumption that we're going from something to something that is much better. This is the reason for our change. The feedback from FDA was what I call extremely positive feedback, and Jonathan can comment further on that. We came into a position where they basically had no comments on our preclinical safety, our CMC package or device development. And they also gave us some opportunities, potentially, to strengthen the product profile with additional data in the label. And this is some of the things we're now exploring, and we're feeling really, really strongly supported by FDA. And you can also just go back and see that their understanding of this disease area is really, really high from NATPARA. We were actually one of the first to really come in and propose a better product profile to really treat patients with hypoparathyroidism, and I think they recognize that we have a profile that really, potentially, can fulfill the criteria of an optimal target profile for a product for this patient group. So we're feeling pretty confident in what we're doing. It's really for the benefit of the product opportunities, and we're feeling that we will have a much, much stronger product opportunity now with our current development plan than we had before. OK. Then I just have one follow-up on the same topic. Just on study design, any details on how many patients you plan to enroll, whether it will be a global study -- you said, I understand, maybe a placebo-controlled study? Would the control arm be supplemented with calcium and Vitamin D? Yes, OK. Yes, so probably in the 40 to 50 size range. Does that answer your question? Thank you. [Operator instructions] And your next question comes from the line of Jim Birchenough with Wells Fargo Securities. Your line is now open. Good afternoon. It's Nick on for Jim this afternoon. Thanks for taking our questions. First, a couple on the Growth Hormone program. I'm wondering if you have patient-reported outcomes or quality-of-life tools in the heiGHt and fliGHt studies, and whether there is a validated tool. And you also mentioned a PRO for hypoparathyroidism, so a similar question there in terms of what's available and validated. Sure. So in heiGHt, we do not. But in fliGHt, we do. We have some preference and satisfaction questionnaires in the fliGHt trial, specifically comparing their experience on the daily growth hormone they're coming off of to the new weekly TransCon Growth Hormone they're going onto. Thank you. And then as a follow-up from that: if you see improved compliance and persistence driven by this increased level of satisfaction, how much bigger do you think the Growth Hormone market can become? Well, any increase in compliance rates, even one percentage point, increases the market, so it's proportional to the amount of increase that we'll see. Now, we won't know what their previous compliance was going into the fliGHt trial. We'll, of course, know what it is in the fliGHt trial. But you also asked about the PTH trial.
So we are developing our own patient-reported outcome symptom score for the PTH program, and having the Phase 2 trial now to test it will allow us to hopefully validate it in the Phase 2 trial and utilize it in its full form in the Phase 3 program, which could, of course, potentially lead to inclusion in the label. OK. And I think that's a segue to my next question, which is: is there a recognized strategy for down titration of calcium and Vitamin D that's acceptable to regulators and that could also result in a label claim? So there's some experience from the recent trials that have gone on, and I don't think anyone really cares specifically how you do it or how fast you do it. What matters is that you do it safely and where they end up. So it's about whether you end up off calcium and Vitamin D; maybe you could talk about that. And we will be guided by previous experience in this area. But no one is really too picky about exactly how you do it, as long as it's done safely and you lower the treatment burden. So there's no target for the number of patients being off; it will be what it is at the end. We feel that a large proportion of patients will ultimately be free of calcium and Vitamin D because the PTH will appropriately manage them. But what exactly that number is, we'll determine in the Phase 3 trial. Yes. We have a strong belief that potentially most of the patients can get off because, if you look at the continuous infusion studies that have been conducted in both the pediatric and adult patient populations, you are eliminating all supplements, meaning activated Vitamin D and calcium supplements, in this patient group when you start the infusion or pump studies. So from that perspective, it makes sense that we should end in the same situation, where it should be possible to basically eliminate the activated Vitamin D and calcium supplements compared to a standard population at that stage. OK. Thank you. And then on the Phase 3, do you think that would need to be an active-controlled study, say, with NATPARA, for example? Background calcium and Vitamin D, the standard of care; that's the control. And then just one more: I know you're going to wait until next year to elaborate on the new therapeutic area, but it seems as if you have the possibility of localized delivery, which might be good for maintenance therapy, while systemic delivery might be very good for acute therapy. Is oncology something that might fit the bill? I know you're extremely excited. We're really excited ourselves, but you need to wait until we disclose it. Fair enough. Thanks for taking the questions. Thank you. And I am not showing any further questions at this time. Thanks, everybody. Have a great day. Bye.
Informed Non-Negative Matrix Factorization for Source Apportionment (Factorisation informée de matrices pour la séparation de sources non-négatives)
Source apportionment for air pollution may be formulated as an NMF problem by decomposing the data matrix X into a matrix product of two factors G and F, respectively the contribution matrix and the profile matrix. Usually, chemical data are corrupted by a significant proportion of abnormal samples. Despite the interest of the community in NMF methods, they lack robustness to even a few abnormal data points, are sensitive to initial conditions, and generally admit multiple local minima. To this end, this thesis is oriented on the one hand towards robust NMF methods and on the other hand towards informed NMF that uses specific prior knowledge. Two types of knowledge are introduced on the profile matrix F. The first assumption is exact knowledge of some components of the matrix F, and the second is a sum-to-1 constraint on each row of F. A parametrization able to handle both pieces of information is developed, and update rules that stay within the constraint space at each iteration are proposed. These formulations have been applied to two kinds of robust cost functions, namely the weighted Huber cost function and the weighted divergence. The target application, namely identifying the sources of particulate matter in the air of the coastal area of northern France, shows the relevance of the proposed methods. In the numerous experiments conducted on both synthetic and real data, the effect and the relevance of the different pieces of information are highlighted, making the factorization results more reliable.
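To make the setup concrete, here is a minimal NumPy sketch of NMF with the two kinds of prior knowledge described above. It is our illustration, not the thesis algorithm: it uses plain multiplicative updates for the Frobenius cost, whereas the thesis works with robust weighted Huber and weighted divergence costs; the function name, argument names, and the projection step are ours.

import numpy as np

def informed_nmf(X, k, known_rows=None, n_iter=500, eps=1e-12):
    """X ~ G @ F with G >= 0, F >= 0, and rows of F summing to 1.

    known_rows: optional dict {row_index: profile_vector} of profiles of F
    that are known exactly (assumed already normalized to sum to 1).
    """
    n, m = X.shape
    rng = np.random.default_rng(0)
    G = rng.random((n, k))
    F = rng.random((k, m))
    known_rows = known_rows or {}
    for i, prof in known_rows.items():
        F[i] = prof
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates for the Frobenius cost ||X - GF||^2.
        G *= (X @ F.T) / (G @ F @ F.T + eps)
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        # Project back onto the constraint set at each iteration:
        F /= F.sum(axis=1, keepdims=True) + eps   # sum-to-1 rows of F
        for i, prof in known_rows.items():        # re-impose known profiles
            F[i] = prof
    return G, F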
incompetent father level-grinding to unlock his final form of ‘responsible and intuitive parent’ and the precocious daughter growing to understand and trust her new dad. I came into Otaku no Musume-san expecting this same progression I’d witnessed so many times before to unfold, and if truth be told it didn’t really disappoint me on that front. In terms of plot, Otaku no Musume-san has a lot in common with its contemporaries; whilst it develops in a more roundabout sort of way, overall it basically unfolds the way you would expect it to. But that doesn’t necessarily mean that it’s not worth the read, especially if, like me, you’re a sap for this kind of thing and are looking for something more after reading the other better-known series. However, what Otaku no Musume-san lacks in concept originality it more than makes up for in execution. While ticking all the ‘sudden daughter appearance’ boxes of cathartic protagonist clichés, the series has a completely different tone from its predecessors. Otaku no Musume-san is markedly more absurd and light-hearted, in both art and story, with a greater focus on ridiculous character-driven comedy than slice of life. Therefore, unlike a more serious series like My Girl, you’re more likely to laugh than to be left with warm fuzzy feelings of endearment. Kanau (the daughter) is also a fair bit older than the standard, which changes the dynamic between the protagonists quite a bit. There are a number of areas where Otaku no Musume-san stands out from the crowd, and for the sake of brevity (for those who can’t be bothered to finish reading my review) they can be summed up as the ‘Kramer Effect’ of memorable supporting characters and the plethora of otaku insider jokes which, for some, will perhaps strike a little too close to home (making you both laugh and feel awkward at the same time).

The story of Otaku no Musume-san follows the character of Kanau who, after meeting her father for the first time at nine years of age, discovers that he is not the responsible, well-adjusted, suit-toting corporate salary-man she’d hoped for, but rather a doujinshi artist with an otaku power level well over 9000. Due to complicated circumstances Kanau must live with her father, forcing both of them to be initiated into the otaku and real worlds.

The art style for the series isn’t really spectacular, but it definitely comes together well to suit the script. Given the greater emphasis on comedy, expect constant departures from the base art style to a more simplistic but exaggerated range of character expressions and designs. Background art is, however, spartan, with the characters largely dominating the panels.

However, where this series really shines is in its creation and generous use of supporting characters. Many ‘sudden daughter appearance’ series only focus on the relationship between the father and daughter; however, by setting the series in a small apartment block (which actually has a greater resemblance to a share-house) occupied by a range of supporting characters with big personalities, a far wider breadth of character interactions occurs. From hyperbolic representations of fandom devotees to a “villain” with all the hallmarks of a nineteenth-century-esque melodrama, these supporting characters receive ample development and interact well with both Kanau and her father.

As suggested before, the comedy of the series relies quite heavily on an insider knowledge of otaku sub-culture.
Many of the series’ long-running jokes are references to other well-known manga or anime series, and for those who haven’t seen much from the 90s and 00s they may go over your head. Additionally, the occupations and interests of the protagonist and many of the supporting characters require at least a cursory knowledge of manga tropes and clichés for many of the situations and jokes to be fully appreciated. Basically, this series, like Genshiken, is best read by those with a well-stocked database of otaku references, or at least those who won’t freak out over things like bishoujo figurines, crossplay, hug pillows featuring your waifu or the very existence of lo**cons. That said, if you’re new to anime and manga you may very well relate to the non-otaku Kanau, giving you a different angle on the story.

Therefore, although it doesn’t break any new ground in terms of story concept, the execution Otaku no Musume-san brings to the by-now well-worn ‘sudden daughter appearance’ genre definitely makes it worth the read.
def update_veh_state(self, current_time, next_time):
    """Advance the vehicle state from current_time to next_time.

    The assigned route is processed leg by leg: driving legs move the
    vehicle along the network, while stationary legs (boarding, charging,
    waiting, ...) count down their remaining duration and update the state
    of charge. Returns, keyed by request id, the boardings, alightings and
    started alightings (with event time and position) plus the list of
    vehicle route legs (VRLs) completed within this time step.
    """
    LOG.debug(f"update veh state {current_time} -> {next_time} : {self}")
    dict_boarding_requests = {}
    dict_start_alighting = {}
    dict_alighting_requests = {}
    list_passed_VRL = []
    c_time = current_time
    remaining_step_time = next_time - current_time
    # A leg scheduled at the end of the previous step may still need to be started.
    if self.start_next_leg_first:
        add_boarding_rids, start_alighting_rids = self.start_next_leg(c_time)
        for rid in add_boarding_rids:
            dict_boarding_requests[rid] = (c_time, self.pos)
        for rid in start_alighting_rids:
            dict_start_alighting[rid] = (c_time, self.pos)
        self.start_next_leg_first = False
    while remaining_step_time > 0:
        if self.status in G_DRIVING_STATUS:
            # Driving leg: move until arrival or until the step time is used up.
            arrival_in_time_step = self._move(c_time, remaining_step_time, current_time)
            if arrival_in_time_step == -1:
                # Destination not reached within this time step.
                remaining_step_time = 0
            else:
                # Arrived: close the current leg and start the next one, if any.
                remaining_step_time -= (arrival_in_time_step - c_time)
                c_time = arrival_in_time_step
                add_alighting_rq, passed_VRL = self.end_current_leg(c_time)
                for rid in add_alighting_rq:
                    dict_alighting_requests[rid] = c_time
                if isinstance(passed_VRL, list):
                    list_passed_VRL.extend(passed_VRL)
                else:
                    list_passed_VRL.append(passed_VRL)
                if self.assigned_route:
                    add_boarding_rids, start_alighting_rids = self.start_next_leg(c_time)
                    for rid in add_boarding_rids:
                        dict_boarding_requests[rid] = (c_time, self.pos)
                    for rid in start_alighting_rids:
                        dict_start_alighting[rid] = (c_time, self.pos)
        elif self.status != VRL_STATES.IDLE:
            # Stationary leg (boarding, charging, ...): count down its duration.
            if remaining_step_time < self.cl_remaining_time:
                # Leg lasts beyond this step; only update the SOC if charging.
                self.cl_remaining_time -= remaining_step_time
                if self.assigned_route[0].power > 0:
                    self.soc += self.compute_soc_charging(self.assigned_route[0].power, remaining_step_time)
                    self.soc = min(self.soc, 1.0)
                remaining_step_time = 0
            else:
                # Leg finishes within this step; complete it and start the next.
                c_time += self.cl_remaining_time
                remaining_step_time -= self.cl_remaining_time
                if self.assigned_route[0].power > 0:
                    self.soc += self.compute_soc_charging(self.assigned_route[0].power, self.cl_remaining_time)
                    self.soc = min(self.soc, 1.0)
                add_alighting_rq, passed_VRL = self.end_current_leg(c_time)
                for rid in add_alighting_rq:
                    dict_alighting_requests[rid] = c_time
                list_passed_VRL.append(passed_VRL)
                if self.assigned_route:
                    add_boarding_rids, start_alighting_rids = self.start_next_leg(c_time)
                    for rid in add_boarding_rids:
                        dict_boarding_requests[rid] = (c_time, self.pos)
                    for rid in start_alighting_rids:
                        dict_start_alighting[rid] = (c_time, self.pos)
        else:
            # Idle vehicle: nothing to simulate for the rest of this step.
            remaining_step_time = 0
    return dict_boarding_requests, dict_alighting_requests, list_passed_VRL, dict_start_alighting
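A note on the design, as we read it: the method is a small event loop over the remainder of the simulation step. Each iteration either consumes all of the remaining step time (vehicle still driving, still mid-leg, or idle) or advances c_time to the next leg boundary, so the loop terminates once remaining_step_time reaches zero. Returning event dictionaries keyed by request id lets the caller batch-process boardings and alightings after each step instead of reacting to them inside the loop.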
#!/usr/bin/env python
#
# -----------------------------------------------------------------------------
# Copyright (c) 2015-2016 <NAME> <<EMAIL>>
# Copyright (c) 2015-2016 Indiana University
#
# This file is part of genhub (http://github.com/standage/genhub) and is
# licensed under the BSD 3-clause license: see LICENSE.txt.
# -----------------------------------------------------------------------------

"""
Class for managing a genome database.

By "genome database" we mean a collection of sequence, annotation, and
ancillary data files for an annotated genome assembly. This "superclass"
defines many default characteristics and behaviors that are shared across
different genome databases. Subclasses implement additional specifics for
managing data from a particular source.
"""

from __future__ import print_function
import glob
import gzip
import hashlib
import os
import subprocess
import sys
import tempfile
import genhub

try:
    FileNotFoundError
except NameError:  # pragma: no cover
    FileNotFoundError = IOError


class GenomeDB(object):

    def __init__(self, label, conf, workdir='.'):
        self.label = label
        self.config = conf
        self.workdir = workdir
        assert 'source' in conf, 'data source unconfigured'

    # ----------
    # Filenames for unprocessed data from the primary source.
    # ----------

    def file_path(self, filename, check=False, message=None):
        """
        Resolve a file's complete path, optionally checking if the file exists.
        """
        filepath = '%s/%s/%s' % (self.workdir, self.label, filename)
        if check:  # pragma: no cover
            if not os.path.exists(filepath):
                msg = 'file "%s" not found' % filepath
                if message is not None:
                    msg += '; %s' % message
                raise FileNotFoundError(msg)
        return filepath

    @property
    def gdnafilename(self):
        if 'scaffolds' in self.config:
            return self.config['scaffolds']
        return None  # pragma: no cover

    @property
    def gff3filename(self):
        return self.config['annotation']

    @property
    def protfilename(self):
        return self.config['proteins']

    # ----------
    # Complete file paths for unprocessed data.
    # ----------

    @property
    def gdnapath(self):
        return self.file_path(self.gdnafilename)

    @property
    def gff3path(self):
        return self.file_path(self.gff3filename)

    @property
    def protpath(self):
        return self.file_path(self.protfilename)

    # ----------
    # File paths for processed data.
    # ----------

    @property
    def gdnafile(self):
        filename = '%s.gdna.fa' % self.label
        return self.file_path(filename)

    @property
    def gff3file(self):
        filename = '%s.gff3' % self.label
        return self.file_path(filename)

    @property
    def protfile(self):
        filename = '%s.all.prot.fa' % self.label
        return self.file_path(filename)

    @property
    def ilocusfile(self):
        filename = '%s.iloci.gff3' % self.label
        return self.file_path(filename)

    @property
    def milocusfile(self):
        filename = '%s.miloci.gff3' % self.label
        return self.file_path(filename)

    @property
    def ilocustable(self):
        filename = '%s.iloci.tsv' % self.label
        return self.file_path(filename)

    @property
    def milocustable(self):
        filename = '%s.miloci.tsv' % self.label
        return self.file_path(filename)

    @property
    def ilocustableshuf(self):  # pragma: no cover
        filename = '%s.iloci.shuffled.tsv' % self.label
        return self.file_path(filename)

    @property
    def milocustableshuf(self):  # pragma: no cover
        filename = '%s.miloci.shuffled.tsv' % self.label
        return self.file_path(filename)

    @property
    def premrnatable(self):
        filename = '%s.pre-mrnas.tsv' % self.label
        return self.file_path(filename)

    # ----------
    # Determine whether raw data files need to be compressed during download.
# ---------- @property def compress_gdna(self): if 'compress' in self.config and 'gdna' in self.config['compress']: return True return False @property def compress_gff3(self): if 'compress' in self.config and 'gff3' in self.config['compress']: return True return False @property def compress_prot(self): if 'compress' in self.config and 'prot' in self.config['compress']: return True return False # ---------- # Miscellaneous properties. # ---------- @property def source(self): """The institutional source of the data.""" return self.config['source'] @property def dbdir(self): """Dedicated directory for this genome database.""" return '%s/%s' % (self.workdir, self.label) # ---------- # Build task method implementations. # ---------- def download_gdna(self, logstream=sys.stderr): # pragma: no cover """Download genomic DNA sequence.""" subprocess.call(['mkdir', '-p', self.dbdir]) if logstream is not None: logmsg = '[GenHub: %s] ' % self.config['species'] logmsg += 'download genome sequence from %r' % self print(logmsg, file=logstream) genhub.download.url_download(self.gdnaurl, self.gdnapath, compress=self.compress_gdna) def download_gff3(self, logstream=sys.stderr): # pragma: no cover """Download genome annotation.""" subprocess.call(['mkdir', '-p', self.dbdir]) if logstream is not None: logmsg = '[GenHub: %s] ' % self.config['species'] logmsg += 'download genome annotation from %r' % self print(logmsg, file=logstream) genhub.download.url_download(self.gff3url, self.gff3path, compress=self.compress_gff3) def download_prot(self, logstream=sys.stderr): # pragma: no cover """Download protein sequences.""" subprocess.call(['mkdir', '-p', self.dbdir]) if logstream is not None: logmsg = '[GenHub: %s] ' % self.config['species'] logmsg += 'download protein sequences from %r' % self print(logmsg, file=logstream) genhub.download.url_download(self.proturl, self.protpath, compress=self.compress_prot) def download(self, logstream=sys.stderr): # pragma: no cover """Run download task.""" subprocess.call(['mkdir', '-p', self.dbdir]) self.download_gdna(logstream) self.download_gff3(logstream) self.download_prot(logstream) def prep(self, logstream=sys.stderr, verify=True, strict=True): # pragma: no cover """Run prep task""" self.preprocess_gdna(logstream=logstream, verify=verify, strict=strict) self.preprocess_gff3(logstream=logstream, verify=verify, strict=strict) self.preprocess_prot(logstream=logstream, verify=verify, strict=strict) def preprocess(self, datatype, logstream=sys.stderr, verify=True, strict=True): """ Preprocess genome data files. Set `verify` to False to skip shasum checks for pre-processed data. Set `strict` to False to proceed in case of failed verification. Note that this is a wrapper function: each subclass must implement 3 methods (`format_gdna`, `format_gff3`, and `format_prot`) to do the actual formatting. 
""" datatypes = {'gdna': 'genome sequence file', 'gff3': 'annotation file', 'prot': 'protein sequence file'} assert datatype in datatypes if logstream is not None: # pragma: no cover logmsg = '[GenHub: %s] ' % self.config['species'] logmsg += 'preprocess %s' % datatypes[datatype] print(logmsg, file=logstream) infile = {'gdna': self.gdnapath, 'gff3': self.gff3path, 'prot': self.protpath}[datatype] outfile = {'gdna': self.gdnafile, 'gff3': self.gff3file, 'prot': self.protfile}[datatype] if datatype != 'gff3': if infile.endswith('.gz'): instream = gzip.open(infile, 'rt') else: instream = open(infile, 'r') outstream = open(outfile, 'w') if datatype == 'gdna': self.format_gdna(instream, outstream, logstream) elif datatype == 'prot': self.format_prot(instream, outstream, logstream) else: self.format_gff3(logstream) if datatype != 'gff3': instream.close() outstream.close() if verify is False: return if 'checksums' in self.config and datatype in self.config['checksums']: sha1 = self.config['checksums'][datatype] testsha1 = self.file_sha1(outfile) passed = testsha1 == sha1 if not passed: message = '{} {} integrity check failed\n{}\n{}'.format( self.label, datatypes[datatype], testsha1, sha1 ) if strict: message += ('\n\nTo proceed in spite of this failure, re-' 'run with the `--relax` option enabled.') raise Exception(message) else: # pragma: no cover if logstream is not None: message += ', proceeding anyway' print('Warning:', message, file=logstream) else: # pragma: no cover if logstream is not None: message = 'Cannot verify integrity of %s ' % self.label message += '%s without a checksum' % datatypes[datatype] print(message, file=logstream) def preprocess_gdna(self, logstream=sys.stderr, verify=True, strict=True): self.preprocess('gdna', logstream, verify, strict) def preprocess_gff3(self, logstream=sys.stderr, verify=True, strict=True): self.preprocess('gff3', logstream, verify, strict) def preprocess_prot(self, logstream=sys.stderr, verify=True, strict=True): self.preprocess('prot', logstream, verify, strict) def filter_file(self): """ Write exclusion filter to a temporary file and return. Data configurations may include an optional `annotfilter` with patterns to discard from the input annotation a la `grep -v`. This function retrieves the pattern(s) from the genome configuration and writes them to a temporary file that can be used in a `grep -vf` command. The calling function is responsible for unlinking the temporary file from the operating system. """ if 'annotfilter' not in self.config: return None excludefile = tempfile.NamedTemporaryFile(mode='wt', delete=False) if isinstance(self.config['annotfilter'], str): print(self.config['annotfilter'], file=excludefile) else: for exclusion in self.config['annotfilter']: print(exclusion, file=excludefile) excludefile.close() return excludefile def file_sha1(self, filepath): """ Stolen shamelessly from http://stackoverflow.com/a/19711609/459780. """ sha = hashlib.sha1() with open(filepath, 'rb') as f: while True: block = f.read(2**10) if not block: break sha.update(block) return sha.hexdigest() def cleanup(self, patterns_to_keep=None, fullclean=False, dryrun=False): """ Clean up the DB working directory. By default, the files to be kept are the following. - *.iloci.fa - *.iloci.gff3 - *.miloci.gff3 - *.tsv - original (downloaded) data files All other files are deleted. If `fullclean` is true, the original data files are deleted as well. 
If `patterns_to_keep` is declared, each file to be deleted is checked to see if it contains any of the specified strings. If so, it is spared deletion. The `dryrun` parameter is just for unit testing. """ dbfiles = glob.glob(self.dbdir + '/*') files_deleted = list() suffixes = ['.iloci.fa', '.iloci.gff3', '.miloci.gff3', '.tsv'] for dbfile in dbfiles: tokeep = False for suffix in suffixes: if dbfile.endswith(suffix): tokeep = True break if tokeep: continue if dbfile in [self.gdnapath, self.gff3path, self.protpath]: if not fullclean: continue if patterns_to_keep: pattern_match = False for pattern in patterns_to_keep: if pattern in dbfile: pattern_match = True break if pattern_match: continue files_deleted.append(dbfile) if not dryrun: # pragma: no cover os.unlink(dbfile) return files_deleted def get_prot_map(self): mapfile = '%s/%s.protein2ilocus.tsv' % (self.dbdir, self.label) with open(mapfile, 'r') as instream: next(instream) for line in instream: if line.strip() == '': # pragma: no cover continue protid, locid = line.strip().split() yield protid, locid # ----------------------------------------------------------------------------- # Unit tests # ----------------------------------------------------------------------------- def test_file_path(): """GenomeDB: file name resolution""" db = genhub.test_registry.genome('Bimp') assert db.file_path('bogus.txt') == './Bimp/bogus.txt' db = genhub.test_registry.genome('Bimp', workdir='wd') assert db.file_path('Bimp.gff3') == 'wd/Bimp/Bimp.gff3' assert db.ilocusfile == 'wd/Bimp/Bimp.iloci.gff3' assert db.milocusfile == 'wd/Bimp/Bimp.miloci.gff3' assert db.ilocustable == 'wd/Bimp/Bimp.iloci.tsv' assert db.milocustable == 'wd/Bimp/Bimp.miloci.tsv' assert db.premrnatable == 'wd/Bimp/Bimp.pre-mrnas.tsv' checkfailed = False try: db = genhub.test_registry.genome('Amel') path = db.file_path('Amel.iloci.gff3', check=True) except FileNotFoundError as e: checkfailed = True assert e.args[0] == 'file "./Amel/Amel.iloci.gff3" not found' assert checkfailed def test_props(): """GenomeDB: properties""" db = genhub.test_registry.genome('Bimp') assert db.dbdir == './Bimp' assert db.gdnafile == './Bimp/Bimp.gdna.fa' assert db.gff3file == './Bimp/Bimp.gff3' assert db.protfile == './Bimp/Bimp.all.prot.fa' assert db.source == 'refseq' db = genhub.test_registry.genome('Dqcr', workdir='/opt/data/genomes') assert db.dbdir == '/opt/data/genomes/Dqcr' assert db.gdnafile == '/opt/data/genomes/Dqcr/Dqcr.gdna.fa' assert db.gff3file == '/opt/data/genomes/Dqcr/Dqcr.gff3' assert db.protfile == '/opt/data/genomes/Dqcr/Dqcr.all.prot.fa' assert db.source == 'crg' def test_filter_file(): """GenomeDB: filter file""" db = genhub.test_registry.genome('Lalb') assert db.filter_file() is None db = genhub.test_registry.genome('Drer') ff = db.filter_file() with open(ff.name, 'r') as infile: excludestr = infile.read() assert excludestr.strip() == 'NC_002333.2' os.unlink(ff.name) def test_compress(): """GenomeDB: download compression""" db = genhub.test_registry.genome('Emex') assert db.compress_gdna is False assert db.compress_gff3 is False assert db.compress_prot is False db.config['compress'] = ['gdna', 'prot', 'gff3'] assert db.compress_gdna is True assert db.compress_gff3 is True assert db.compress_prot is True
#ifndef JSON_PARSER_H
#define JSON_PARSER_H

#include <string>

#include "json.hpp"

using nlohmann::json;

// Thin wrapper around nlohmann::json for reading and persisting user records.
class JsonParser {
private:
    json users_json;

public:
    // Parse the JSON file at the given path and return its contents.
    json processJSON(const std::string& path);
    // Write the given JSON document to the file at the given path.
    void saveJSON(json data, const std::string& path);
};

#endif
Mona Eltahawy is one of the foremost female Arab journalists. The 43-year-old New York-based, Egypt-born speaker regularly appears on US television and in newspapers around the globe. Eltahawy is a columnist for Canada’s Toronto Star, Israel’s The Jerusalem Report and Denmark’s Politiken, and writes often for The Washington Post and the International Herald Tribune. Previously she was Cairo and Israel correspondent for Reuters and reported from regions as diverse as Saudi Arabia and China. She was the first Egyptian journalist working for a Western agency in Israel. In 2010, the Anna Lindh Foundation awarded her its Special Prize for Outstanding Contribution to Journalism, and the Estlow International Center for Journalism and New Media at the University of Denver gave her its Anvil of Freedom Award. An active participant in last year’s Egyptian revolution, Eltahawy was shockingly assaulted at the Interior Ministry. “When a woman who took part wrote to tell me I’d helped to inspire the march because I’d spoken out on Egyptian TV about my beating and assault, I was finally able to cry. They were the tears of a survivor, not a victim,” she said.
Ethanol Elicits Inhibitory Effect on the Growth and Proliferation of Tongue Carcinoma Cells by Inducing Cell Cycle Arrest
Cellular effects of ethanol in YD-15 tongue carcinoma cells were assessed by MTT assay, caspase activity assay, Western blotting and flow cytometry. Ethanol inhibited the growth and proliferation of YD-15 cells in a dose- and time-dependent manner in an MTT assay. The effects of ethanol on cell cycle control at the low range of ethanol concentrations (0 to 1.5%), a condition not inducing YD-15 cell death, were investigated after exposing cells to alcohol for a set period of time. Western blotting of the expression of cell cycle inhibitors showed that p21 and p27 were up-regulated as the ethanol concentration increased from 0 to 1.5%, whilst the cell cycle regulators cdk1, cdk2 and cdk4, as well as Cyclin A, Cyclin B1 and Cyclin E1, were gradually down-regulated. Flow cytometric analysis of cell cycle distribution revealed that YD-15 cells exposed to 1.5% ethanol for 24 h were mainly arrested at G2/M phase. However, ethanol induced apoptosis in YD-15 cells exposed to 2.5% or higher concentrations of ethanol. Cleaved PARP, a marker of caspase-3 mediated apoptosis, and the activation of caspase-3 and -7 were detected by caspase activity assay or Western blotting. Our results suggest that ethanol elicits an inhibitory effect on the growth and proliferation of YD-15 tongue carcinoma cells by mediating cell cycle arrest at G2/M at the low concentration range and ultimately induces apoptosis at high concentrations.

INTRODUCTION
Cancer of the oral cavity is an aggressive disease with a high mortality rate when arising in the lip, tongue, floor of the mouth, gingivae, palate, buccal mucosa/vestibule and salivary glands. It is generally believed that ethanol (alcohol or ethyl alcohol) and tobacco are the main risk factors in the development of oral cancer. However, despite the potential association between ethanol and oral cancer, no evidence conclusively establishes ethanol as a promoter of tumorigenesis, and ethanol itself is not mutagenic or clastogenic. Diverse cellular effects of ethanol in the oral cavity region have been reported. Chronic exposure of the oral mucosa to ethanol increased the penetration of carcinogens across the oral mucosa, either by enhancing their solubility or the permeability of the oral mucosa. Morphological changes in the oral mucosa and a decrease in basal cell size of the esophageal mucosa were observed after exposure to ethanol. In addition, ethanol induced cellular death via apoptosis in certain cell types such as macrophages, human mast cells and HL-60 promyelocytic leukemia cells. Thus, ethanol effects are apparently cell-type dependent, and tumor cells are no exception. In this context, it is surprising to find that few studies have been undertaken on the effects of ethanol against various tumors originating in the oral cavity. Recently, our preliminary study with ethanol showed that tumor cell types have different sensitivities of cell growth and proliferation to acute ethanol treatment, prompting us to study the underlying molecular basis further. In this study, we investigated the cellular effects of ethanol on YD-15 tongue carcinoma cells, mucoepidermoid carcinoma cells originating in the oral tongue, by exposing cells to various concentrations of ethanol. YD-15 cells showed a high sensitivity to ethanol which was distinct from that of other oral cavity cell lines.
Further investigation of ethanol effects on YD-15 cells revealed that ethanol elicits an inhibitory effect on the growth and proliferation of YD-15 tongue carcinoma cells by mediating cell cycle arrest at the low concentration range and induces cellular death via apoptosis at high concentration.

Cell preparation
YD-15 cells were purchased from the Korean cell line bank (KCLB No 60504, Seoul, Korea) and maintained in RPMI 1640 medium supplemented with 10% FBS and antibiotics in a 37°C incubator at 5% CO2. Other cell lines used in this study include YD-38 (gingival carcinoma), KB (mouth epidermal carcinoma), FaDu (pharyngeal carcinoma), MCF-7 (breast cancer) and HeLa (cervical cancer). All cell lines were obtained from the Korean cell line bank.

MTT assay
To investigate the effects of ethanol on different human cancer cell lines, cells (4×10^4/well) in 96-well plates were treated with various concentrations of ethanol and incubated for 24 h. The cells were placed in medium containing 0.5 mg/ml of MTT-1 (Sigma, St Louis, MO). After further incubation for 3 h, media containing MTT-1 were replaced with MTT-2 solution (10% SDS in 0.1 N HCl) by adding 200 μl into each well, and the plates were further incubated for 2 h. The UV absorbance of each sample was then measured at 540 nm. The data were analyzed by single-factor ANOVA. Ethanol-untreated cells served as the negative control and were used for the determination of the percentage of live cells.

Cell growth and proliferation assay
Cells in growth phase were transferred into 24-well plates at 3×10^4 cells/well. After incubation for 24 h, cells were treated with various quantities of ethanol (0, 0.25, 0.5, 0.75, 1.0 and 1.5%) in the culture medium. Cell growth and proliferation were evaluated by an MTT assay, and medium replacements contained the same concentrations of ethanol. The OD value of the initial measurement was used as the control.

Western blotting
Cells were treated with various concentrations of ethanol for 24 h and then lysed in a buffer containing freshly added PMSF (1 mM). The cell lysate was homogenized by pipetting for 15 minutes on ice and sonicated for 1 minute. The supernatant of each lysate was collected into new tubes after centrifugation at 20,000 g for 15 minutes at 4°C, and the total protein concentration was measured using a BCA kit (Pierce, Rockford, IL). Antibodies were used in accordance with the manufacturer's instructions. The protein bands were normalized to the beta-actin expression level using the Bio-Rad Image Master Program (Bio-Rad, Hercules, CA).

Caspase activity assay
Activated caspase-3 and -7 were detected using the Caspase-Glo® 3/7 Assay kit (Promega, Madison, WI). Cells in 24-well plates (10^4 cells/well) were exposed to 0, 1.5 or 2.5% ethanol in medium for 24 h. After ethanol treatment, cells were harvested using Reporter Lysis buffer (Promega, Madison, WI). Cell lysate was centrifuged at 20,000 g and the supernatant was used for the detection of caspase activities in duplicate. Total protein concentration was measured using a BCA assay kit. Activities of caspases in 1 μg of total protein were normalized to those of control samples.

DNA content analysis by Fluorescence Activated Cell Sorting (FACS)
Cells were treated with 1.5% or 2.5% ethanol in medium for 24 h. The cells were then harvested by trypsinization and centrifugation at 300 g.
After washing the cell pellets with cold PBS, the cells were fixed in 70% ethanol for 2 h at −20°C, washed again in ice-cold PBS and stained with PI solution (20 μg/ml propidium iodide, 200 μg/ml DNase-free RNase-A in PBS) for 15 minutes at 37°C. DNA contents were analyzed using a Beckman Coulter system.

Comparison of the effects of ethanol on different cell types within the oral cavity
The effects of ethanol on epithelial cancer cells were investigated using cell types originating from different regions of the oral cavity and also in HeLa and MCF-7 cells. Oral carcinoma cell lines such as YD-15 (tongue carcinoma), YD-38 (gingival carcinoma), KB (mouth epidermal carcinoma) and FaDu (pharyngeal carcinoma) were selected, and the effects of ethanol on cell viability were analyzed. YD-15 cells showed the highest growth sensitivity to acute ethanol treatment among this panel (Fig. 1), with markedly stronger growth inhibition than the other lines.

Ethanol effects on YD-15 cell proliferation
The effects of ethanol on proliferation were investigated by treating YD-15 cells for 24, 48, 72, 96 and 120 h with various quantities of ethanol (0, 0.25, 0.5, 0.75, 1.0 and 1.5%) in the culture medium. Ethanol was found to inhibit YD-15 cell growth in both a dose- and time-dependent manner (Fig. 2A). A substantial reduction in YD-15 cell survival was observed at ethanol concentrations above 0.75%, but no effects were evident at exposures to less than 0.5% ethanol. In contrast, FaDu squamous carcinoma cells were unaffected by the same alcohol concentrations (Fig. 2B).

Ethanol effects on cell cycle protein inhibitors
To further understand the molecular basis of ethanol effects on cell cycle control, we investigated the expression of cell cycle inhibitors at the lower percent range of ethanol, the condition in which ethanol affects cell cycle control without inducing cell death, by treating YD-15 cells with a series of ethanol concentrations for 24 h. The cell cycle inhibitor proteins p21 and p27 were found to be induced in a dose-dependent manner (Fig. 3) and were significantly enhanced as the ethanol concentration increased from 0% to 1.5%. The largest induction of p27 and p21 in YD-15 cells was observed at 0.75% and 1.0% ethanol exposures, respectively.

Ethanol effects on cell cycle regulators
Since cell division is strictly regulated by cell cycle regulators, we further investigated the effects of ethanol on cell cycle regulators such as cyclins and cyclin-dependent protein kinases. The cells were treated with various doses of ethanol (0, 0.25, 0.5, 0.75, 1.0 and 1.5%) for 24 h and lysed for Western blotting. Cyclin A, Cyclin E1 and Cyclin B1 were suppressed as the dose of ethanol in the medium increased (Fig. 4A). Band intensities of Cyclin A, Cyclin E1 and Cyclin B1 gradually decreased over the range of ethanol concentrations up to 1.5%. Similarly, ethanol treatment suppressed the expression of cdk-1, -2 and -4 (Fig. 4B). The band intensities of cdk-1 and cdk-2 gradually decreased as the ethanol concentration in the medium increased from 0.25% to 1.5%. Interestingly, the band intensity of cdk-4 decreased rapidly up to 0.5% ethanol exposure and remained stable without further decrease at higher ethanol concentrations. This may indicate that cdk-4 is more sensitive to ethanol than cdk-1 or -2, and is associated with cell cycle inhibition at an early stage.

Flow cytometry analysis
To further evaluate the nature of the cell cycle arrest caused by ethanol, FACS analysis was carried out. As shown in Fig.
5, exposure of YD-15 cells to ethanol for 24 h resulted in the accumulation of cells at the G2/M phases. Cells treated with a 1.5% dose of ethanol showed a marginal increase in cell distribution at G1/S phase (61.4%) compared to the ethanol-untreated cells (56.4%). In contrast, a moderate increase, from 22.9% to 31.1%, in the cell distribution of ethanol-treated cells at G2/M phase was observed, with a simultaneous decrease in S phase.

Ethanol effects on the apoptotic response
To determine whether the cell loss caused by ethanol treatment was due to apoptosis or not, we further analyzed the expression of apoptosis-related proteins (Bcl-2, Bad and Bax) by Western blot after exposing cells to the low concentration range of ethanol for 24 h. Ethanol effects on apoptosis-associated proteins were minimal at ethanol concentrations of less than 1.5%. Ratios of Bax/Bcl-2 and Bad/Bcl-2 were relatively constant, and the cleaved form of PARP (p85), a major hydrolysis product of executioner caspases, was only weakly detected, suggesting that ethanol did not induce cell death at this range of ethanol concentrations (Fig. 6A). However, at 2.5% ethanol, ethanol induced YD-15 cell death via apoptosis. Strong bands of the cleaved form of PARP (p85) and the activated form of caspase-7 (20 kDa) were detected in Western blotting (Fig. 6B). In addition, a dramatic increase of caspase-3 and -7 activities was detected in the caspase activity assay (Fig. 6C). The results indicate that ethanol suppressed cell cycle progress in the low range of ethanol concentrations without inducing apoptosis.

DISCUSSION
In this study we investigated the cellular effects of ethanol on YD-15 tongue carcinoma cells by exposing cells to various concentrations of ethanol. YD-15 cells showed a high sensitivity to ethanol which clearly distinguished them from other cell lines of the oral cavity (Fig. 1, 2). Further investigation of ethanol effects on YD-15 cells revealed that ethanol elicits an inhibitory effect on the growth and proliferation of YD-15 tongue carcinoma cells by mediating cell cycle arrest at G2/M at the low concentration range and induces cellular death via apoptosis at high concentration. The inhibitory ethanol effects on the growth and proliferation of YD-15 cells were confirmed by the altered expression of both cell cycle inhibitors and cell cycle regulators. The p27 and p21 cell cycle inhibitors were up-regulated dose-dependently by ethanol (Fig. 3), whilst the expression of the cell cycle regulators, cdks and cyclins, was down-regulated (Fig. 4). p27 blocks G1 progression by binding and inactivating the cyclin E/cdk2 and cyclin D/cdk4 complexes, and p21 is also known to inhibit a broad range of the cell cycle proteins, including G1 cyclin/cdk complexes and cyclin B1/cdk1 complexes. Thus, the up-regulation of p27 and p21 by ethanol in YD-15 cells may cause the reduced expression of cdk2 and cdk4. In addition, the expression of both cyclin B1 and cdk1 was suppressed by ethanol treatment, limiting the supply of the cdk1/cyclin B complex required for the G2 to M phase transition and thereby delaying the phase transition. FACS analysis further confirmed the cell cycle arrest at G2/M phase in cells exposed to 1.5% ethanol (Fig. 5). Despite the cell cycle arrest at a low percentage of ethanol, ethanol did not promote the apoptosis of YD-15 cells, suggesting that the cell cycle inhibition by ethanol was not sufficient to induce apoptosis at a low percentage of ethanol (less than 1.5%) (Fig. 6).
Conversely, exposure of YD-15 cells to a high percentage of ethanol (2.5%) for 24 h induced significant programmed cell death, with the appearance of cleaved PARP (the marker of caspase-3 mediated apoptosis) and the activated forms of caspase-3 and -7. The lateral border of the tongue, one of the non-keratinized tissues, is known to be more permeable than keratinized tissues such as the palate and gingivae. Thus the unusual sensitivity of YD-15 cells to ethanol could be due to enhanced cellular uptake of ethanol through increased lipid membrane permeability. The fast passage cycle of tongue carcinoma cells may also contribute to the ethanol sensitivity of YD-15 cells. Interestingly, previous reports also showed an inhibitory effect of ethanol on the proliferation of cancer or non-cancer cells such as fibroblast growth factor 1-induced human smooth muscle cells and smooth muscle cells in the postprandial state [11]. In addition, a recent study by Clemens et al. demonstrated that ethanol treatment in recombinant Hep-G2 cancer cells caused a G2/M cell-cycle arrest. To the best of our knowledge, this is the first report that ethanol specifically suppresses the growth and proliferation of tongue carcinoma cells among various tumor cell types originating in the oral cavity. The questions of how ethanol induces the cell cycle arrest and what signal transduction mechanism underlies the cell cycle arrest remain to be answered in future studies. Overall, our results suggest that the cellular inhibitory effects of ethanol may function as a tumor suppressor in oral tongue cancer.
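For readers who want the normalizations in the Methods spelled out, here is a minimal Python sketch of the two calculations described above: percent viability from MTT OD540 readings relative to the untreated control, and caspase activity per microgram of total protein as fold over control. This is our illustration of the stated arithmetic, not the authors' analysis script; all function and variable names are ours, and the example numbers are invented.

import numpy as np

def percent_viability(od_treated, od_untreated):
    # Percent live cells: treated OD540 relative to the mean of the
    # ethanol-untreated (negative control) wells.
    return 100.0 * np.asarray(od_treated) / np.mean(od_untreated)

def caspase_fold_over_control(lum_sample, protein_ug_sample, lum_control, protein_ug_control):
    # Caspase-Glo luminescence normalized to 1 ug of total protein,
    # expressed as fold change over the untreated control.
    return (lum_sample / protein_ug_sample) / (lum_control / protein_ug_control)

# Hypothetical OD540 readings: wells treated with 1.5% ethanol vs. untreated wells.
print(percent_viability([0.41, 0.38, 0.40], [0.52, 0.55, 0.50]))  # roughly 73-78% viability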
Upper degree-constrained partial orientations
We wish to orient as many edges as possible in an undirected graph (or multigraph), subject to upper bounds on the in-degree and out-degree of each vertex. Frank and Gyárfás solve this problem in polynomial time when there are no in-degree bounds, and when every edge can be oriented within the given bounds. However, we show that in general the problem is MAXSNP-hard. When viewed as a 3-dimensional matching problem, the local improvement algorithm of Hurkens and Schrijver achieves approximation ratio 2/3 − ε; we believe this is the best previous bound for our problem. We give an LP-rounding algorithm that achieves approximation ratio 3/4.
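To make the LP concrete, a natural relaxation (our sketch from the problem statement above; the paper's exact formulation may differ) introduces, for each edge e = {u, v}, variables x_{e,u} and x_{e,v} for the fraction of e oriented into u and into v, with in-degree bounds b_in and out-degree bounds b_out:

\begin{align*}
\max\quad & \sum_{e \in E} \sum_{v \in e} x_{e,v} \\
\text{s.t.}\quad & x_{e,u} + x_{e,v} \le 1 && \forall\, e = \{u,v\} \in E \\
& \sum_{e \ni v} x_{e,v} \le b_{\mathrm{in}}(v) && \forall\, v \in V \quad \text{(in-degree bound)} \\
& \sum_{e = \{u,v\} \ni v} x_{e,u} \le b_{\mathrm{out}}(v) && \forall\, v \in V \quad \text{(out-degree bound)} \\
& x_{e,v} \ge 0 .
\end{align*}

In this framing, the abstract's 3/4 guarantee says the rounded integral orientation retains at least three quarters of the LP optimum, which itself upper-bounds the optimal number of orientable edges.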