Columns: text (stringlengths 20 to 1.01M), url (stringlengths 14 to 1.25k), dump (stringlengths 9 to 15), lang (stringclasses, 4 values), source (stringclasses, 4 values)
NXT & LeJOS & Wii Nunchuck October 18, 2010 16 Comments Finally ! It’s been a while since I first thought about trying LeJOS, the JVM for NXT… I don’t recall why I didn’t do it when I first bought the NXT set, more than 2 years ago… The thing is that, after some frustration with the URBI NXT installation (very nice project, but somehow didn’t manage to make the latest version work…) I felt that I had to do something “new” with the NXT… and I must admit that I’m so pleased with the simplicity and power of the LeJOS JVM that it feels like a pity I didn’t install it earlier ! The installation is quite straight forward, you need to install stuff on your PC and flash the NXT to replace the original firmware (no worries it can be flashed back to the initial state). It’s also great that the bluetooth communication works “out of the box” so no more wires to program your box and communicate with it ! First project: interact with a Wii Nunch). Here’s a link nicely describing the hack (it’s in German, but the pictures are quite straight forward 🙂 ) Now my approach was much simpler - get rid of the SMD R1 and R2 - put back the nunchuck without anything else - add 100KOhms pull-up resistors directly on the cables (where the nunchuck interfaces with the NXT cable) – I didn’t have 33KO ones, and 100 is closer anyway to the NXT specs - no diode as I didn’t have any, and was already using the nunchuck with a 5V Arduino… hope it won’t damage it too quickly 🙂 And here’s the Java module, please feel free to re-use it in your programs import lejos.nxt.I2CPort; import lejos.nxt.I2CSensor; /** * */ public class WiiNunchuck extends I2CSensor { private static final int NUNCHUCK_ADDR = 0x52; // 7bit addressing. It would be 0xA4 in 8bit private static final int NUNCHUCK_MEM_ADDR = 0x40; // array to store RAW nunchuck data private byte[] _buff = new byte[6]; private byte[] _initBuff = new byte[2]; private int joy_x_axis; private int joy_y_axis; private int accel_x_axis; private int accel_y_axis; private int accel_z_axis; private boolean z_button = false; private boolean c_button = false; public WiiNunchuck(I2CPort port){ super(port); setAddress(NUNCHUCK_ADDR); } public boolean updateData() { // ??? somehow it gets disabled.... 
getPort().i2cEnable(I2CPort.LEGO_MODE); // INITIALIZE - tell the nunchuck we're talking to it _initBuff[0] = 0x00; sendData(NUNCHUCK_MEM_ADDR, _initBuff, 1); try{ Thread.sleep(10);}catch(InterruptedException e){} // send a request for data sendData(0x00, (byte)0x00); if(getData(0x00, _buff, 6) != 0) return false; try{ Thread.sleep(10);}catch(InterruptedException e){} for(int i = 0; i < 6; i++) _buff[i] = nunchuk_decode_byte(_buff[i]); // transform into something meaningful joy_x_axis = _buff[0]; joy_y_axis = _buff[1]; accel_x_axis = _buff[2]; accel_y_axis = _buff[3]; accel_z_axis = _buff[4]; // byte nunchuck_buf[5] contains bits for z and c buttons z_button = (_buff[5] & 0x01) == 0; c_button = (_buff[5] & 0x02) == 0; // it also contains the least significant bits for the accelerometer data so we have to check each bit of byte outbuf[5] if ((_buff[5] & 0x03) > 0) accel_x_axis += 2; if ((_buff[5] & 0x04) > 0) accel_x_axis += 1; if ((_buff[5] & 0x05) > 0) accel_y_axis += 2; if ((_buff[5] & 0x06) > 0) accel_y_axis += 1; if ((_buff[5] & 0x07) > 0) accel_z_axis += 2; if ((_buff[5] & 0x08) > 0) accel_z_axis += 1; joy_x_axis = updateRange(joy_x_axis); joy_y_axis = updateRange(joy_y_axis); accel_x_axis = updateRange(accel_x_axis); accel_y_axis = updateRange(accel_y_axis); accel_z_axis = updateRange(accel_z_axis); return true; } /** * Transforms a 0..127 -128 .. 0 range into a 128..0..-128 one */ private static int updateRange(int init){ int result = init; boolean negative = result < 0; if(negative) result = -result; result = - result + 128; if(negative) result = -result; return result; } // Encode data to format that most wiimote drivers except only needed if you use one of the regular wiimote drivers private static byte nunchuk_decode_byte (byte x){ return (byte)((x ^ 0x17) + 0x17); } public int getAccelX() { return accel_x_axis; } public int getAccelY() { return accel_y_axis; } public int getAccelZ() { return accel_z_axis; } public int getJoyX() { return joy_x_axis; } public int getJoyY() { return joy_y_axis; } public boolean isC_button() { return c_button; } public boolean isZ_button() { return z_button; } } It feels so good now that I can leverage the power of Java to program it… no more awkward syntax or C-like string manipulation… it’s comparable to the pleasure of programming the FEZ Domino, but even better as I’m really a Java guy and feel much more comfortable with it than with C#. Pingback: RC Car Electronics | Robotics / Electronics / Physical Computing Hi~ I made a Nunchuk Sensor for my nxt. And I used your class code,but it’s not work… I put my main.java and your WiiNunchuck.java in same folder… I enter nxjc Test.java nxj Test But it isn’t work on the nxt. Where is wrong? Request your assistance! My main.java import lejos.nxt.*; public class Test { public static void main(String args[]){ WiiNunchuck nunchuk = new WiiNunchuck(SensorPort.S1); nunchuk.updateData(); while(!Button.ESCAPE.isPressed()){ LCD.drawInt(nunchuk.getAccelX(),0,0); LCD.drawInt(nunchuk.getAccelY(),0,1); LCD.drawInt(nunchuk.getAccelZ(),0,2); LCD.drawInt(nunchuk.getJoyX(),0,3); LCD.drawInt(nunchuk.getJoyY(),0,4); LCD.refresh(); try { Thread.sleep(10); }catch (InterruptedException e) {} } } } It’s notoriously hard to find a problem like this remotely, without having more concrete details about your exact set up…. The test code looks ok, and it’s simple enough. If you are using my EXACT WiiNunchuck.java class, do you have LeJOS installed properly on your NXT ? Can you run other java programs ? 
Also, have you physically altered your nunchuck to remove the 2 1.8kOhms resistors as described in the ” ? Hope this helps, dan Yes! I can run other java programs on nxt . And I change 2 1.8k resistors to 33k. But I use nxc of the (tom123) ,it work! But I want to use lejos… (I need to change 2 1.8k resistors to 100k?? I already change these to 33k…) Thank! Ok, so you’re saying that: – LeJOS is installed properly, you can run other Java programs – the nunchuck works ok, as you can get data from it with the nxc example… That’s strange… I’m using 100K resistors indeed, and I think in the specs the NXT needs 82K or something like that… IF the nunchuck works with your NXC program, then it should work with Java too, regardless of what resistors you’re using… it’s the same physical stuff… Here’s my test class, it’s almost identical with your test example… have a look to see if there’s not some silly mistake in there: import lejos.nxt.*; import lejos.nxt.addon.*; public class WiiNunchuckTest { private final WiiNunchuck _nunchuck = new WiiNunchuck(SensorPort.S4); private final Motor _motor = new Motor(MotorPort.A); private void loop() throws InterruptedException{ if(! _nunchuck.updateData()) throw new IllegalStateException(“errg…”); int deg = _nunchuck.getJoyX(); int newDeg = 0; while (!Button.ESCAPE.isPressed()) { _nunchuck.updateData(); display(_nunchuck); newDeg = _nunchuck.getJoyX(); if(Math.abs(newDeg – deg) > 3){ _motor.rotate(newDeg – deg); deg = newDeg; } Thread.sleep(20); } } private void display(WiiNunchuck nunchuck) throws InterruptedException{ LCD.clear(); LCD.drawString(“OK”, 0, 0); LCD.drawInt(nunchuck.getJoyX(), 0, 1); LCD.drawInt(nunchuck.getJoyY(), 10, 1); LCD.drawInt(nunchuck.getAccelX(), 0, 3); LCD.drawInt(nunchuck.getAccelY(), 6, 3); LCD.drawInt(nunchuck.getAccelZ(), 12, 3); LCD.drawString(String.valueOf(nunchuck.isC_button()), 0, 5); LCD.drawString(String.valueOf(nunchuck.isZ_button()), 7, 5); LCD.refresh(); } public static void main(String[] args) throws Exception { new WiiNunchuckTest().loop(); } } dan I test your code,and I get this error… Exception:31 errg& at:66(17) at:68(8) Where is wrong? Thank! And I found the java compiled your WiiNunchuck.java ,it told me… Note: .\WiiNunchuck.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. Whether it is an old version of java?? I use Java 7 (jdk7) Thanks! Benny. I think I remember this warning and it was still working despite it… ( I was indeed using Java 1.6 but it shouldn’t make any difference) But to be honest, it’s been a while since I did that test and I don’t have the computer / NXT set up right now to test it… Let me know if you finally find the problem, I’d be interested to know / update my post. dan I want to get the Nunchuck value and show it on the LCD… like this “Z : RELEAZED” or “Z : PRESSED” “C: RELEAZED” or “C : PRESSED” “accel_x: “value”” “accel_y: “value”” “accel_z: “value”” “joy_x: “value”” “joy_y: “value”” Could you post all code?? Please… May I ask you a question? This web : In the web two codes . How can I link the two codes? A+B?? A(del something) +B?? or A+B(del something)?? How can I link? I just want to show the Nunchuck Button C,Z , Accel x,y,z and Joystick x,y on the LCD!! Please attach complete code. Please help me… thank lot!!! 
I’m not sure I understand your question… Also requests like “Please attach complete code” sound like if you had some entitlement … I try to answer as quickly and as accurately as I can to all the questions, but I’m not paid to be writing the exact code you are looking for ! Dan Sorry !! I mean (R1,R2 is 1.8k , == is 32k) (NXT Green)─==────┬──┬───(Nunchuck Red) R1 R2 (NXT Yellow)──────-┴──┼──(Nunchuck Yellow) (33.8k) (NXT Blue)──────────-┴──(Nunchuck Green) (33.8k) (NXT Red)──────────────(Nunchuck White) Can I do this ??? A question, I real have to get rid of the SMD R1 and R2 ? Maybe I can put a 32K resistor on the “NXT Green -> Nunchuck Red ” this line? like this ↓ (R1,R2 is 1.8k , — is 32k) (NXT Green)──–────┬──┬───(Nunchuck Red) R1┤ ├ R2 (NXT Yellow)───────┴──┼──(Nunchuck Yellow) (33.8k) (NXT Blue) ──────────┴──(Nunchuck Green) (33.8k) (NXT Red) ─────────────(Nunchuck White) Can I do this ??? I’m afraid not ! The problem with R1 and R2 are that they are too LOW. By putting resistors in parallel you can ONLY LOWER the total resitance… the only way to increase it is by putting resistors in series… which you obviously can’t do without de-soldering R1/R2. Dan Pingback: Android IOIO Wii Nunchuck « Robotics / Electronics / Physical Computing Pingback: Wii motion plus and Arduino « Robotics / Electronics / Physical Computing
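A note for readers who want to adapt the class above to another platform: the interesting part is the 6-byte packet layout. Below is a minimal sketch, in Python purely for illustration (the function names are mine and belong to no library), of the commonly documented Nunchuck format: joystick X/Y in bytes 0-1, the upper 8 bits of each 10-bit accelerometer axis in bytes 2-4, and byte 5 packing the two active-low buttons plus the low 2 bits of each accelerometer axis.

def decode_byte(x):
    # Undo the scrambling applied when the Nunchuck is initialised
    # with the classic 0x40 / 0x00 handshake.
    return ((x ^ 0x17) + 0x17) & 0xFF

def parse_packet(raw6):
    # raw6: the six bytes read back from the device.
    b = [decode_byte(x) for x in raw6]
    joy_x, joy_y = b[0], b[1]
    accel_x = (b[2] << 2) | ((b[5] >> 2) & 0x03)
    accel_y = (b[3] << 2) | ((b[5] >> 4) & 0x03)
    accel_z = (b[4] << 2) | ((b[5] >> 6) & 0x03)
    z_pressed = (b[5] & 0x01) == 0  # buttons read 0 when pressed
    c_pressed = (b[5] & 0x02) == 0
    return joy_x, joy_y, accel_x, accel_y, accel_z, z_pressed, c_pressed

This is only a reference for the packet layout; the actual I2C initialisation and read sequence is the part handled by the LeJOS I2CSensor calls in the Java class above.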
https://trandi.wordpress.com/2010/10/18/nxt-lejos-nunchuck/
CC-MAIN-2018-34
en
refinedweb
. In addition we will also need to change how we tell our application about the location of the Mongo DB. Right now the application will assume the Mongo DB is running on localhost, that will not be the case in Bluemix. Lets address this problem first. As you probably already know if you are familiar with services in a Cloud Foundry based PaaS, a service’s credentials are made available to an application via the environment variable VCAP_SERVICES. One option to access the Mongo DB credentials is to parse the VCAP_SERVICES environment variable extract the credentials for the Mongo DB service and use those to instantiate our Mongo code in our app. Most applications will also have the requirement that the app be able to run locally, so in addition to parsing the VCAP_SERVICES environment variable we will also need to add some code to figure out if we are running in the cloud or not. This code would not be hard to write, but there are better options. There is a nice library that can do all this for us called Spring Cloud Connectors. The Spring Cloud Connectors project makes it easy to use client libraries for various services when your application is running in the cloud. In addition to supporting various cloud environments, the Spring Cloud Connectors project can support the same code running locally as well. To get started using the Spring Cloud Connectors project we need to add some dependencies to our POM. Add the following dependencies to your POM file. <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-spring-service-connector</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-localconfig-connector</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-cloudfoundry-connector</artifactId> </dependency> The spring-cloud-spring-service-connector artifact provides us the with the necessary magic when running in a Spring app. The spring-cloud-local-config-connector artifact provides the ability to run the app locally. The spring-cloud-cloudfoundry-connector project provides the necessary magic when running on PaaS based on Cloud Foundry, like Bluemix. Now that we have our dependencies, lets add some code. In the demo package of our application add the following class package demo; import org.springframework.cloud.config.java.AbstractCloudConfig; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.data.mongodb.MongoDbFactory; @Configuration public class CloudConfig extends AbstractCloudConfig { @Bean public MongoDbFactory documentMongoDbFactory() { return connectionFactory().mongoDbFactory(); } } That’s it, that is all the code we need to write. No need to parse JSON and figure out if we are running locally or in the cloud, all that will be taken care of for us by the Spring Cloud Connectors project. We do need to do a little more configuration to make sure our application will run locally. If you had added spring.data.mongodb.uri to your application.properties file you can remove that property, it will no longer be needed. The Spring Cloud Connectors project uses a separate configuration file for all the cloud services you want to use when running locally. Somewhere locally on your machine create a file called spring-cloud.properties (you can name it whatever you want if you would like). I usually put the file in my home directory. 
In the file create a property called spring.cloud.appId and set it to any value you would like. Add another property called spring.cloud.mongo and set its value to mongodb://localhost:27017. If your Mongo DB server is not running on localhost or requires a username and password be sure to make the necessary changes to the URI. Your properties file should now look like this. spring.cloud.appId: mongo-rest spring.cloud.mongo: mongodb://localhost:27017 Save the file if you have not done so already. Now we need to tell our app about this properties file. We can do that in several ways, but for this example we will specify it in a properties file on the classpath. In src/main/resources create a new file called spring-cloud-bootstrap.properties. In this file add the property spring.cloud.propertiesFile and set it to the path of your spring-cloud.properties file. If you placed your properties file your home directory you can use the variable ${user.home} in the path to represent your home directory. For example spring.cloud.propertiesFile: ${user.home}/spring-cloud.properties Once the spring.cloud.propertiesFile is created you should be able to run your application locally and should work just as it did before. The only difference is that we can now deploy the application to a Cloud Foundry PaaS and as long as there is a Mongo DB service bound to the application it will work there as well. Now we need to change how we package the application. I like to be able to setup my POM so I can build both a jar and war file, I find being able to produce a self container jar convenient. One way to do this is to create a Maven profile to package the application as a war. Open the POM for the application and add the following XML to the POM. <packaging>${packaging.type}</packaging> <properties> <packaging.type>jar</packaging.type> </properties> <profiles> <profile> <id>war</id> <properties> <packaging.type>war</packaging.type> </properties> <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-tomcat</artifactId> <scope>provided</scope> </dependency> </dependencies> <build> <finalName>${project.artifactId}</finalName> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> <plugin> <artifactId>maven-clean-plugin</artifactId> <version>2.5</version> <executions> <execution> <id>auto-clean</id> <phase>initialize</phase> <goals> <goal>clean</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </profile> </profiles> If you already have a packaging element replace it with the one above. If you already have a properties element add the new property packaging.type to it. When the war profile is activated the build will package your application as a war without the Tomcat dependencies. To get started deploying the Spring Boot application to Bluemix you should login, head to the Catalog, and create a new MongoLab service. Name the service spring-boot-mongo and leave it unbound for now, we will bind it to the app once we deploy it. Next package your application by running $ mvn package -P war, this should produce a war file in your target directory. Finally push your application (the following assumes you are in the root of your project). $ cf push mongo-demo -p target/mongo-demo.war –no-start $ cf bind-service mongo-demo spring-boot-mongo $ cf start mongo-demo Once your application is deployed test it out by using the REST APIs we defined in the previous posts. 
It should work exactly the same locally as it does in the cloud!
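As an aside on the "parse VCAP_SERVICES yourself" option mentioned near the top: the variable is just a JSON document mapping each service label to a list of bound instances, each carrying a credentials object. Purely to illustrate the shape of that fallback logic (sketched in Python rather than Java, and with the service label and the 'uri' key treated as assumptions to verify against your own MongoLab binding), it would look roughly like this:

import json
import os

def mongo_uri(default="mongodb://localhost:27017"):
    raw = os.environ.get("VCAP_SERVICES")
    if not raw:
        # Not running on Cloud Foundry / Bluemix, fall back to local Mongo.
        return default
    services = json.loads(raw)
    for label, instances in services.items():
        # e.g. a MongoLab binding; the exact label depends on the service.
        if "mongo" in label.lower() and instances:
            return instances[0]["credentials"]["uri"]
    return default

Spring Cloud Connectors performs this lookup, plus the local/cloud switch, for you, which is exactly why the article recommends it over hand-rolled parsing.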
http://ryanjbaxter.com/2015/02/02/deploying-spring-boot-apps-to-bluemix-with-spring-cloud-connectors/
CC-MAIN-2018-34
en
refinedweb
Given an array a, we have to find the maximum product possible with a subset of elements present in the array. The maximum product can be a single element as well.

Examples:
Input : a[] = { -1, -1, -2, 4, 3 }
Output : 24
Explanation : Maximum product will be ( -2 * -1 * 4 * 3 ) = 24

Input : a[] = { -1, 0 }
Output : 0
Explanation : 0 (a single element) is the maximum product possible

Input : a[] = { 0, 0, 0 }
Output : 0

A simple solution is to generate all subsets, find the product of every subset and return the maximum product. A better solution is to use the facts below.
- If there is an even number of negative numbers and no zeros, the result is simply the product of all elements.
- If there is an odd number of negative numbers and no zeros, the result is the product of all elements except the largest valued negative number.
- If there are zeros, the result is the product of all elements except these zeros, with one exceptional case: when there is one negative number and all other elements are 0, the result is 0.

// CPP program to find maximum product of
// a subset.
#include <bits/stdc++.h>
using namespace std;

int maxProductSubset(int a[], int n)
{
    if (n == 1)
        return a[0];

    // Find count of negative numbers, count
    // of zeros, maximum valued negative number
    // and product of non-zero numbers
    int max_neg = INT_MIN;
    int count_neg = 0, count_zero = 0;
    int prod = 1;
    for (int i = 0; i < n; i++) {

        // If number is 0, we don't
        // multiply it with product.
        if (a[i] == 0) {
            count_zero++;
            continue;
        }

        // Count negatives and keep
        // track of maximum valued negative.
        if (a[i] < 0) {
            count_neg++;
            max_neg = max(max_neg, a[i]);
        }

        prod = prod * a[i];
    }

    // If there are all zeros
    if (count_zero == n)
        return 0;

    // If there are odd number of
    // negative numbers
    if (count_neg & 1) {

        // Exceptional case: there is only one
        // negative and all others are zeros
        if (count_neg == 1 && count_zero > 0
            && count_zero + count_neg == n)
            return 0;

        // Otherwise result is product of
        // all non-zeros divided by maximum
        // valued negative.
        prod = prod / max_neg;
    }

    return prod;
}

int main()
{
    int a[] = { -1, -1, -2, 4, 3 };
    int n = sizeof(a) / sizeof(a[0]);
    cout << maxProductSubset(a, n);
    return 0;
}

Output: 24

Time Complexity: O(n)
Auxiliary Space: O(1)
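A quick way to convince yourself the case analysis above is right is to cross-check it against the brute-force definition on small random arrays. The sketch below (Python, independent of the C++ code, with helper names of my own) re-expresses the same logic and compares it with an exhaustive search over all non-empty subsets:

from itertools import combinations
from functools import reduce
import random

def brute_force(a):
    # Maximum product over every non-empty subset.
    best = None
    for r in range(1, len(a) + 1):
        for subset in combinations(a, r):
            p = reduce(lambda x, y: x * y, subset)
            best = p if best is None else max(best, p)
    return best

def max_product_subset(a):
    # Same case analysis as the C++ function above.
    if len(a) == 1:
        return a[0]
    negatives = sorted(x for x in a if x < 0)
    zeros = a.count(0)
    non_zero = [x for x in a if x != 0]
    if not non_zero:
        return 0
    prod = reduce(lambda x, y: x * y, non_zero)
    if len(negatives) % 2 == 1:
        if len(negatives) == 1 and zeros > 0 and zeros + 1 == len(a):
            return 0
        prod //= negatives[-1]  # drop the largest (closest to zero) negative
    return prod

for _ in range(1000):
    a = [random.randint(-5, 5) for _ in range(random.randint(1, 8))]
    assert brute_force(a) == max_product_subset(a), a

Running it prints nothing if every random case agrees, which makes it a cheap regression test when porting the function to another language.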
https://www.geeksforgeeks.org/maximum-product-subset-array/
CC-MAIN-2018-34
en
refinedweb
Abstract

This PEP proposes changing the syntax for declaring metaclasses, and alters the semantics for how classes with metaclasses are constructed.

Rationale

There are two rationales for this PEP, both of which are somewhat subtle. In particular, there is an important body of use cases where it would be useful to preserve the order in which a class's members are declared. Ordinary Python objects store their members in a dictionary, in which ordering is unimportant, and members are accessed strictly by name. However, Python is often used to interface with external systems in which the members are organized according to an implicit ordering. Examples include declaration of C structs; COM objects; automatic translation of Python classes into IDL or database schemas, such as used in an ORM; and so on.

In such cases, it would be useful for a Python programmer to specify such ordering directly using the declaration order of class members. Currently, such orderings must be specified explicitly, using some other mechanism (see the ctypes module for an example). Unfortunately, the current method for declaring a metaclass does not allow for this, since the ordering information has already been lost by the time the metaclass comes into play. By allowing the metaclass to get involved in the class construction process earlier, the new system allows the ordering or other early artifacts of construction to be preserved and examined.

The proposed metaclass mechanism also supports a number of other interesting use cases beyond preserving the ordering of declarations. One use case is to insert symbols into the namespace of the class body which are only valid during class construction. An example of this might be "field constructors", small functions that are used in the creation of class members. Another interesting possibility is supporting forward references, i.e. references to Python symbols that are declared further down in the class body.

The other, weaker, rationale is purely cosmetic: the current method for specifying a metaclass is by assignment to the special variable __metaclass__, which is considered by some to be aesthetically less than ideal. Others disagree strongly with that opinion. This PEP will not address this issue, other than to note it, since aesthetic debates cannot be resolved via logical proofs.

Specification

In the new model, the syntax for specifying a metaclass is via a keyword argument in the list of base classes:

class Foo(base1, base2, metaclass=mymeta): ...

Note that this PEP makes no attempt to define what these other keywords might be - that is up to metaclass implementors to determine. More generally, the parameter list passed to a class definition will now support all of the features of a function call, meaning that you can now use *args and **kwargs-style arguments in the class base list:

class Foo(*bases, **kwds): ...

Invoking the Metaclass

In the current metaclass system, the metaclass object can be any callable type. This does not change; however, in order to fully exploit all of the new features, the metaclass will need to have an extra attribute which is used during class pre-construction. This attribute is named __prepare__, which is invoked as a function before the evaluation of the class body. The __prepare__ function takes two positional arguments, and an arbitrary number of keyword arguments. The two positional arguments are 'name', the name of the class being created, and 'bases', the list of base classes.

The interpreter always tests for the existence of __prepare__ before calling it; if it is not present, then a regular dictionary is used, as illustrated in the following Python snippet.
def prepare_class(name, *bases, metaclass=None, **kwargs): if metaclass is None: metaclass = compute_default_metaclass(bases) prepare = getattr(metaclass, '__prepare__', None) if prepare is not None: return prepare(name, bases, **kwargs) else: return dict() The example above illustrates how the arguments to 'class' are interpreted. The class name is the first argument, followed by an arbitrary length list of base classes. After the base classes, there may be one or more keyword arguments, one of which can be 'metaclass'. Note that the 'metaclass' argument is not included in kwargs, since it is filtered out by the normal parameter assignment algorithm. (Note also that 'metaclass' is a keyword- only argument as per PEP 3102 [6].) Even though __prepare__ is not required, the default metaclass ('type') implements it, for the convenience of subclasses calling it via super(). __prepare__ returns a dictionary-like object which is used to store the class member definitions during evaluation of the class body. In other words, the class body is evaluated as a function block (just like it is now), except that the local variables dictionary is replaced by the dictionary returned from __prepare__. This dictionary object can be a regular dictionary or a custom mapping type. This dictionary-like object is not required to support the full dictionary interface. A dictionary which supports a limited set of dictionary operations will restrict what kinds of actions can occur during evaluation of the class body. A minimal implementation might only support adding and retrieving values from the dictionary - most class bodies will do no more than that during evaluation. For some classes, it may be desirable to support deletion as well. Many metaclasses will need to make a copy of this dictionary afterwards, so iteration or other means for reading out the dictionary contents may also be useful. The __prepare__ method will most often be implemented as a class method rather than an instance method because it is called before the metaclass instance (i.e. the class itself) is created. Once the class body has finished evaluating, the metaclass will be called (as a callable) with the class dictionary, which is no different from the current metaclass mechanism. Typically, a metaclass will create a custom dictionary - either a subclass of dict, or a wrapper around it - that will contain additional properties that are set either before or during the evaluation of the class body. Then in the second phase, the metaclass can use these additional properties to further customize the class. An example would be a metaclass that uses information about the ordering of member declarations to create a C struct. The metaclass would provide a custom dictionary that simply keeps a record of the order of insertions. This does not need to be a full 'ordered dict' implementation, but rather just a Python list of (key,value) pairs that is appended to for each insertion. Note that in such a case, the metaclass would be required to deal with the possibility of duplicate keys, but in most cases that is trivial. The metaclass can use the first declaration, the last, combine them in some fashion, or simply throw an exception. It's up to the metaclass to decide how it wants to handle that case. 
Example Here's a simple example of a metaclass which creates a list of the names of all class members, in the order that they were declared: # class MyClass(metaclass=OrderedClass): # method1 goes in array element 0 def method1(self): pass # method2 goes in array element 1 def method2(self): pass Sample Implementation Guido van Rossum has created a patch which implements the new functionality: Alternate Proposals Josiah Carlson proposed using the name 'type' instead of 'metaclass', on the theory that what is really being specified is the type of the type. While this is technically correct, it is also confusing from the point of view of a programmer creating a new class. From the application programmer's point of view, the 'type' that they are interested in is the class that they are writing; the type of that type is the metaclass. There were some objections in the discussion to the 'two-phase' creation process, where the metaclass is invoked twice, once to create the class dictionary and once to 'finish' the class. Some people felt that these two phases should be completely separate, in that there ought to be separate syntax for specifying the custom dict as for specifying the metaclass. However, in most cases, the two will be intimately tied together, and the metaclass will most likely have an intimate knowledge of the internal details of the class dict. Requiring the programmer to insure that the correct dict type and the correct metaclass type are used together creates an additional and unneeded burden on the programmer. Another good suggestion was to simply use an ordered dict for all classes, and skip the whole 'custom dict' mechanism. This was based on the observation that most use cases for a custom dict were for the purposes of preserving order information. However, this idea has several drawbacks, first because it means that an ordered dict implementation would have to be added to the set of built-in types in Python, and second because it would impose a slight speed (and complexity) penalty on all class declarations. Later, several people came up with ideas for use cases for custom dictionaries other than preserving field orderings, so this idea was dropped. Backwards Compatibility It would be possible to leave the existing __metaclass__ syntax in place. Alternatively, it would not be too difficult to modify the syntax rules of the Py3K translation tool to convert from the old to the new syntax.
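The OrderedClass implementation itself appears to have been lost from the example above in this copy. A minimal sketch consistent with the surrounding description (names chosen here for illustration; see the original PEP for the canonical version) is a dict subclass that records insertion order, returned from __prepare__, plus a metaclass __new__ that copies the recorded names onto the class:

class member_table(dict):
    # A class-body namespace that remembers the order of first assignment.
    def __init__(self):
        super().__init__()
        self.member_names = []

    def __setitem__(self, key, value):
        if key not in self:
            self.member_names.append(key)
        super().__setitem__(key, value)

class OrderedClass(type):
    @classmethod
    def __prepare__(metacls, name, bases, **kwargs):
        return member_table()

    def __new__(metacls, name, bases, classdict, **kwargs):
        cls = super().__new__(metacls, name, bases, dict(classdict))
        cls.member_names = classdict.member_names
        return cls

class MyClass(metaclass=OrderedClass):
    def method1(self): pass   # recorded before method2
    def method2(self): pass

print(MyClass.member_names)   # ends with ['method1', 'method2']

In Python 3.6 and later, regular class bodies already preserve definition order, but this pattern remains the standard way to capture extra information (duplicates, custom symbols, and so on) during class construction.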
http://docs.activestate.com/activepython/3.6/peps/pep-3115.html
CC-MAIN-2018-34
en
refinedweb
If you’ve ever looked at even small amounts of CIL, you’ll notice that two different instructions are used to call methods: “call” and “callvirt”. My goal in this post is to introduce these two methods and provide a general understanding of how they are used. call – The basics Call provides basic method calling functionality in CIL. Let’s jump right into an example to see how it works. class Program { static void Main(string[] args) { Printer.Print("Hello World"); } } public class Printer { public static void Print(string message) { Console.WriteLine(message); } } .method private hidebysig static void Main(string[] args) cil managed { .entrypoint // Code size 11 (0xb) .maxstack 8 IL_0000: ldstr "Hello World" IL_0005: call void ConsoleApplication1.Printer::Print(string) IL_000a: ret } // end of method Program::Main .method public hidebysig static void Print(string message) cil managed { // Code size 7 (0x7) .maxstack 8 IL_0000: ldarg.0 IL_0001: call void [mscorlib]System.Console::WriteLine(string) IL_0006: ret } // end of method Printer::Print There isn’t actually anything too complicated going on here. When we execute Main, “Hello World” is loaded onto the stack, and then the Print method, which takes a single string parameter, is called using the “call” instruction. Notice that the call instruction itself takes as a descriptor a reference to the method to call. (This reference is actually a metadata token, but going into details about metadata is a topic for another day.) When call executes, it pops the number of arguments of the stack that the method being called requires, and passes them as zero-indexed arguments to the method. We can see this in action at line IL_0000 of the Print method, where we load “argument 0” onto the stack so that it can be passed to the Console.WriteLine method by another “call” invocation. In our case the Print method doesn’t return anything, but if it did the return value would simply be pushed onto the stack before the final “ret” call of the method. callvirt – The basics Perhaps the easiest way to distinguish call from callvirt is to refer to their different descriptions in the CIL spec. While call is simply used to “call a method”, “callvirt” is used to “call a method associated, at runtime, with an object”. To understand how the notion of an object impacts a method call, take this function. public static void Print(object thingy) { Console.WriteLine(thingy.ToString()); } As we learned back in this post, the behaviour that this method will exhibit is entirely dependent on what type “thingy” really is, due to the fact that ToString() is a virtual method. But how can the runtime know what implementation of “ToString” to call if it calls it on a simple object? Well, this is where callvirt really starts to make sense. Callvirt takes into account the type of the object on which the method is being called in order to provide us with the polymorphic behaviour that we expect from such cases. All that is required in order to execute a callvirt instruction is to pass a pointer to the object on which the method is being called. We can see this if we look at the IL of the ToString() call in the Print method. 
.method private hidebysig instance void Print(object thingy) cil managed { // Code size 12 (0xc) .maxstack 8 IL_0000: ldarg.1 IL_0001: callvirt instance string [mscorlib]System.Object::ToString() IL_0006: call void [mscorlib]System.Console::WriteLine(string) IL_000b: ret } // end of method Printer::Print The override of ToString() that we are calling doesn’t take any parameters, however before calling it, argument at index 0 is loaded onto the stack. Argument at index 0 is of course “thingy”, whatever it happens to be. When callvirt is executed to call the ToString() method, it first verifies that “thingy” isn’t null, and then goes on to determine the type of “thingy” before locating the correct instance of ToString() to call by walking up the inheritance tree until it finds a valid ToString() implementation. When callvirt replaces call… So far the distinction that we have made between call and callvirt has been simple: call provides simple method calling functionality, while callvirt provides support for virtual methods and polymorphism. However, if you begin to examine the IL of your own C# programs you’ll notice that callvirt is also used to call nonvirtual instance methods. But why would the C# compiler do this? Off the top of my head I can think of two advantages: - Nonvirtual methods can be made virtual without recompiling calling assemblies. - Developers don’t need to keep track of which methods are virtual and can therefore be called on null references (because of callvirt’s integrated null check). This “feature” is limiting, but simplifies coding. It is also important to understand that calling nonvirtual methods with callvirt doesn’t impact performance as much as one may think. While the null reference check integrated into callvirt is still performed on nonvirtual calls, if the jitter knows that a given method is nonvirtual it won’t bother searching through the inheritance tree to find the correct method implementation. It’ll go straight to the correct implementation just as call would. This makes callvirt almost as fast as call when calling nonvirtual instance methods. When call replaces callvirt… Despite the fact that it doesn’t have “virt” in the name, call can still be used to call nonvirtual methods. It simply calls them nonvirtually, invoking the method declared on the type of the variable instance as it appears in the calling scope. An example of when this occurs is when an overriding method calls a base implementation. Were the call to be made with callvirt, the runtime would end up re-calling the derived implementation which would then re-call the base implementation and so on and so forth until a stack overflow occurred. Final word Hopefully by now you’ll have a decent understanding of the call and callvirt instructions. This understanding will be important in several upcoming articles, so stay tuned to make use of what we’ve discussed.
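For readers more comfortable outside the CLR, the behavioural difference can be mimicked in any language with virtual dispatch. The sketch below is a loose, language-neutral illustration in Python (the helper names are invented for this example and correspond to nothing in the CLI spec): call_virt resolves the method from the object's runtime type after a null check, while call_nonvirtual invokes a specific class's implementation directly, which is how a base implementation is reached without recursing.

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    def speak(self):
        return "woof"

def call_virt(obj, method_name, *args):
    # callvirt-style: refuse null, then resolve from the runtime type,
    # walking up the inheritance chain until an implementation is found.
    if obj is None:
        raise TypeError("null reference")
    for cls in type(obj).__mro__:
        impl = cls.__dict__.get(method_name)
        if impl is not None:
            return impl(obj, *args)
    raise AttributeError(method_name)

def call_nonvirtual(cls, obj, method_name, *args):
    # call-style on an instance method: use exactly cls's implementation,
    # ignoring whatever obj's runtime type overrides.
    return cls.__dict__[method_name](obj, *args)

d = Dog()
print(call_virt(d, "speak"))                 # "woof" - resolved at runtime
print(call_nonvirtual(Animal, d, "speak"))   # "..."  - base implementation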
http://www.levibotelho.com/development/call-and-callvirt-in-cil/
CC-MAIN-2018-34
en
refinedweb
Allow basic regexp in namespace prefix of index-rule ---------------------------------------------------- Key: JCR-2458 URL: Project: Jackrabbit Content Repository Issue Type: Improvement Components: jackrabbit-core Reporter: Marcel Reutegger Priority: Minor Currently a regular expression is limited to the local name, which makes fallback declarations that should match everything else difficult to write. I.e. you have to write a line per namespace in the node type registry, which bloats the index-rule unnecessarily. Currently: <property isRegexp="true">.*</property> will only match properties with the empty namespace URI. I propose we allow a basic regular expression in the prefix. That is the match all pattern: '.*' (dot star). The following would match any property, including any namespace: <property isRegexp="true">.*:.*</property> -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/201001.mbox/%[email protected]%3E
CC-MAIN-2018-34
en
refinedweb
Unlike C++, why can a template (generic) class not be derived from a type parameter in C#?

public class FeatureObject<Interface> : FeatureBase, Interface { }

This gives the compiler error: cannot derive from 'Interface' because it is a type parameter. Any reason?

I'm working with a console application to generate my sites and subsites (hierarchy), and I've got a problem with a generated SPMetal class. This doesn't work: static void Main(string[] args) { string SITE = string.Concat("http://", Environment.MachineName);

How to enable the hidden labels on the Create Column page after selecting our custom field type, before just clicking OK, so that all those labels appear.

Hello, we are using a "device system" here which is using reflection to invoke methods. Consider this class: public class Test { public void SetDayOfWeek(DayOfWeek day) { //... } } Using MethodInfo m = typeof(Test).GetMethod("SetDayOfWeek", BindingFlags.Instance | BindingFlags.Public, null, new Type[] { typeof(int) }, null); m is null - probably because I am passing an int as the enum parameter DayOfWeek. Is there any way to have an int be accepted as an enum parameter in such a lookup? regards, Florian
http://www.dotnetspark.com/links/53968-c-sharp-why-class-cannot-derive-from-type.aspx
CC-MAIN-2018-34
en
refinedweb
NAME xx - twice as dirty SYNOPSIS ~ > gem install "double x" require "xx" include XX::XHTML doc = xhtml_{ html_{ head_{ title_{ " go xx! " } } body_{ " one more and it would be illegal " } } } URI DESCRIPTION xx is a library designed to extend ruby objects with html, xhtml, and xml generation methods. the syntax provided by xx aims to make the generation of xml or xhtml as clean looking and natural as ruby it self. the approach taken, that of extending objects, allows natural document generation while preserving access to instance data. in essence it provides ruby objects (including the top level 'main' object) an intuitive means to generate various markup views of their data in a way that is correct and elegant. xx is brought to you by the good folks at. SAMPLES <========< sample/a.rb >========> ~ > cat sample/a.rb require "xx" include XX::XHTML # # xx modules extend the current object to allow natural document markup # doc = xhtml_{ html_{ head_{ title_{ " go xx! " } } body_{ " one more and it would be illegal " } } } puts doc.pretty ~ > ruby sample/a.rb <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "... <html lang='en' xml: <head> <title> go xx! </title> </head> <body> one more and it would be illegal </body> </html> <========< sample/b.rb >========> ~ > cat sample/b.rb require "xx" # # xml is as easy as html. xx extends your object very carefully, adding an # one method that is not prefaced with 'xx_' : 'method_missing'. the # method_missing defined is conservatively, recognizing only methods that end # with underscore ('_') as 'tag' methods intended to generate markup. as with # html, attributes may be passed to any tag method as either symbol or string. # class Table < ::Array include XX::XML attr "fields" def initialize *a, &b @fields = a.shift replace a end def self::[] *a, &b new *a, &b end def to_xml xml_{ table_{ each do |row| row_{ fields.zip(row) do |field, value| field_(:name => field, 'width' => value.size){ value } end } end } } end end table = Table[ %w( first_name last_name ssn ), %w( jane doe 424-24-2424 ), %w( john buck 574-86-4242 ), ] puts table.to_xml.pretty ~ > ruby sample/b.rb <?xml version='1.0'?> <table> <row> <field name='first_name' width='4'>jane</field> <field name='last_name' width='3'>doe</field> <field name='ssn' width='11'>424-24-2424</field> </row> <row> <field name='first_name' width='4'>john</field> <field name='last_name' width='4'>buck</field> <field name='ssn' width='11'>574-86-4242</field> </row> </table> <========< sample/c.rb >========> ~ > cat sample/c.rb require "xx" # # xx makes it impossible to generate invalid (syntactically) invalid documents # - unless to instruct it in insert raw html or xml using the 'h_' or 'x_' # methods. text inserted with 't_' is automatically escaped. like all xx # methods these can have one or more underscores after them in case there is a # collision with another method or the tag 'h', 'x', or 't' needs to be # generated. # include XX::XML doc = xml_{ root_{ div_{ t_ "this is escaped < > & text" } div_{ h_ "this is raw <html>. & is not escaped" } div_{ x_ "<raw> xml </raw>" } div_{ x_{ even_{ entire_{ documents_{ "nest" } } } } } } } puts doc.pretty ~ > ruby sample/c.rb <?xml version='1.0'?> <root> <div>this is escaped < > & text</div> <div>this is raw <html>. 
& is not escaped</div> <div><raw> xml </raw></div> <div><even><entire><documents>nest</documents></entire></even></div> </root> <========< sample/d.rb >========> ~ > cat sample/d.rb require "xx" # # xx has only a few methods which end in '_'. these methods, therefore, cannot # be used in conjuction with method_missing to auto-generate tags. for those # methods a tag of the same method can be generated using and escaped form, # namely two or more underscores always mean 'generate a tag'. those methods # are: # # - g_ # - text_ # - t_ # - h_ # - x_ # - c_ # - at_ # - att_ # - yat_ # include XX::XML doc = xml_{ root_{ t_{ "this is a text element" } t__{ "this is not text, but a __tag__ called t" } x_{ "this un-escaped & < > stuff" } x__{ "this is not un-escaped & < > stuff but a tag called x" } } } puts doc.pretty ~ > ruby sample/d.rb <?xml version='1.0'?> <root>this is a text element<t>this is not text, but a __tag__ called t</t>this un-escaped & < > stuff<x>this is not un-escaped & < > stuff but a tag called x</x> </root> HISTORY 0.1.0: - added the "g_" method, which generates any tag ^ g_("anytag", "key" => "value"){ b_{ "bold" } } - added at_ and att_ methods to parse yaml and k=v strings as hashes. at_("src : image.jpg, width : 100%") #=> {"src"=>"image.jpg", "width"=> "100%"} 0.0.0: - initial version AUTHORS dan fitzpatrick <[email protected]> ara.t.howard <[email protected]> BUGS please send bug reports to /dev/null. patches to addresses above. ;-) LICENSE ePark Labs Public License version 1 Copyright (c) 2005, ePark Labs, Inc. ePark Labs. enjoy. -a on 2006-01-25 01:24 on 2006-01-25 02:12 Quoting "Ara.T.Howard" <[email protected]>: > doc = xhtml_{ > html_{ > head_{ title_{ " go xx! " } } > body_{ " one more and it would be illegal " } > } > } Out of curiousity, how does this compare with markaby? -mental on 2006-01-25 02:31 [email protected] wrote: > > Out of curiousity, how does this compare with markaby? Or the XML Builder in Nitro, which has a similar syntax? James -- James B. - Ruby Help & Documentation - The Journal By & For Rubyists - The Ruby Store for Ruby Stuff - Playing with Better Toys - Building Better Tools on 2006-01-25 04:04 On Wed, 25 Jan 2006, James B. wrote: >> >> >> Out of curiousity, how does this compare with markaby? > > Or the XML Builder in Nitro, which has a similar syntax? hmm. i think the implimentation is better ;-) seriously - i have a big problem with blanket method_missing like those used in the nitro and rails xml builders - they make debugging an absolute nightmare that reminds me of perl. by using a simple rule : tag methods end in underscore, i can delegate to the default method missing in cases where a mere typo was made. eg. in nitro foo will output <foo></foo> but not in xx. you have to be explicit that you need to generate a tag using foo_{ 42 } which will generate <foo>42</foo> also, the return value of blocks do not appear to be used very well in nitro (i could be wrong here). my reading of the docs seems to suggest that foo{ 42 } would output simply <foo></foo> in xx this outputs <foo>42</foo> as you would expect. again, alot of this revolves around handling method_missing in a catch-all fashion. because the handling is so generic in nitro's xml builder it is un-suitable to mixin to your own classes, let alone built-in ones. with 'xx' this is not so - the library is quite carefully designed to pollute the includee (is that a word?) namespace minimally and certainly will not hide errors or, worse, simply output xml/xhtml when a typo is made. 
this is all possible due to the requirement that tag methods end in underscore. xx handles both xml, xhtml, and xhtml. for all of them you will be hard pressed to generate invalid documents - generating end tags is not supported because it is always done for you in a sensible way. lastly - 'xx' generates xhtml in a way that should be friendly to IE - something which is harder than it ought to be. i can thank sean (rexml) for that! cheers. -a on 2006-01-25 05:46 [email protected] wrote: > On Wed, 25 Jan 2006, James B. wrote: > >> [email protected] wrote: ... > in the nitro and rails xml builders - they make debugging an absolute > <foo></foo> Thanks for the details. This looks really quite slick. James
http://www.ruby-forum.com/topic/52651
CC-MAIN-2018-34
en
refinedweb
I need hr consultant who can give me advice as consultant just for one question who donot ask me to pay any yearly membership jobs Adempiere Finance Functional Consultant for desigining Ecommerce & Accounting System xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx Iam looking for someone from tamilnadu who can give business promotion video for my business. As iam concentration on tamilnadu i need people from chennai for creating promotion video. I need an Android app. I would like it designed and built. .. maintain correctly working I need you to write some articles. PROJECT PHP, AND PROGRAMING FOR MY FRIEND Need a business consultant who can help prepare the project proposal and the estimation document. I need around 110 questions which will be pertaining to some specific topics and need to be very specific to the profile "Consultant - Network Security". Will send the necessary documents and topics of the Question Bank to interested parties. We need the existing logo in a better HR format, see LOGO PEGASUS.PNG. Endresult will be HR format in EPS/PNG/PDF , one serie with name, one serie without name. i have some questions on a wireless sensor networks WSN. ( in designing, modeling, formulation, node deployment and so on) also in corona-based WSN so, i am looking for the expert in WSN and Networking. expert in WSN expert in WSN Our Company is located in Mysore & want to recruit candidates from Mangalore, Dakshina Kannada & Shimoga. I need some graphic design. dollar...... I need a new website. I need you to design and build it. I need some graphic design. # need some who have high experience in NETWORK (expert) to answer some questions. regarding to design the network, modeling, routing, deployment the node..ect i need a very clear answer to these questions ( theory only) for my assignment. Need advice for tripsadivosr to have more traffic I have some work, in an Excel spreadsheet. I need you to develop some software for me. I would like this software to be developed for Windows. Seeking a videographer to create a 60 - 120 second video with b-roll. I am a speaker, hosting an HR panel in San Francisco. The video will be used to promote my future speaking website. Hope for a long term business relationship translating A sample web page has to be developed by using AngularJS (Any version of below 2) where pre-loading page content should be displayed until unless the page is fully loaded during page navigation/ routing. Reference WebSite for better understanding: ----------------------------------------------------------------------- You may have noticed a loading I need some one to explain the following concepts with complex examples: interfaces inheritance extensions namespaces entities LinQ multi-threading implements NOTE: this is NOT an assignment or any thing related to any educational institute. need a legal advice about getting a pay out of business partner. I need interview guidance for the post of branch manager in indian insurance company.
https://www.dk.freelancer.com/job-search/i-need-hr-consultant-who-can-give-me-advice-as-consultant-just-for-one-question-who-donot-ask-me-to-pay-any-yearly-membership/
CC-MAIN-2018-34
en
refinedweb
Building an FM Radio with RDS Support Introduction This article explains how to use the open source USB FM library (written by me) and Windows Presentation Foundation to build a simple yet fully functional radio player with RDS and TMC support. Background The USB FM library provides managed interfaces, developed with C# to USB FM receivers that support RDS. WPF (Windows Presentation Foundation) provides an easy-to-use framework to build rich user interfaces with zero time investment. "Blending" those together will bring you an ability to build fully functional applications without a heavy time investment. Step 1: Building Wireframes To build a WPF application, you should first build a wireframe. WPF provides you with a rich choice of layout controls. In your case, you'll use a Grid to mark up areas in the main (and only) application window. <Grid> <Grid.RowDefinitions> <RowDefinition Height="*"/> <RowDefinition Height="35px"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="*"/> <ColumnDefinition Width="Auto"/> </Grid.ColumnDefinitions> </Grid> As you can see, you have three rows and three columns. Now, you can start putting your controls into it. In any radio receiver, you have jogs to control volume level and tune to stations. There is a ready-made jog control, prepared by the Microsoft Expression Blend team, so you'll use it "as-is." To do this, you have to reference the control library and define a namespace of the control within the XAML file of the application body. xmlns: <c:RotaryControl Also, you'll add two labels and a list box of preset stations. These will be bound later to the FM device library. <TextBlock Text="Volume" Grid. <TextBlock Text="Tune" Grid. <ListBox Name="Presets" ItemTemplate="{StaticResource PresetTemplate}" Grid. <ListBox.ItemsPanel> <ItemsPanelTemplate> <DockPanel Margin="0" IsItemsHost="True"/> </ItemsPanelTemplate> </ListBox.ItemsPanel> </ListBox> The only thing that remains in the XAML markup is to set the display for frequency and program text indicators, mono/stereo icon, and signal strength emitter. To set all those, you'll create another grid and put everything inside it. <Grid Grid. <Grid.RowDefinitions> <RowDefinition Height="12px"/> <RowDefinition Height="*"/> <RowDefinition Height="20px"/> <RowDefinition Height="20px"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width=".2*"/> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <TextBlock Name="Freq" Grid. <TextBlock Name="PS" Grid. <TextBlock Name="PTY" Grid. <Path Name="MonoStereo" Stroke="White" Fill="White" Stretch="Fill" Grid. <Rectangle Grid. <Rectangle Grid. <Rectangle.RenderTransform> <ScaleTransform x: </Rectangle.RenderTransform> </Rectangle> <StackPanel Grid. <TextBlock Style="{StaticResource IndiStyle}" Text="MS" Name="MS"/> <TextBlock Style="{StaticResource IndiStyle}" Text="TA" Name="TA"/> <TextBlock Style="{StaticResource IndiStyle}" Text="TP" Name="TP"/> </StackPanel> </Grid> You are finished wireframing your application. Now, it's time to make it look better. There are no comments yet. Be the first to comment!
https://www.codeguru.com/csharp/.net/net_wpf/article.php/c15817/Building-an-FM-Radio-with-RDS-Support.htm
CC-MAIN-2019-22
en
refinedweb
. Step 2: Let's Get Started Open Flash and create a new Flash File (ActionScript 3). Set the stage size to 600x350 and add a gray radial gradient (#EEEEEE, #DDDDDD). Step 3: Adding a Preloader We're going to add a preloading animation to tell the user when the content is loading. In this case I used the Apple inspired preloader that we created before. Since we're going to use only the animation, there's no need to import the class or use an Export Identifier. Place the preloader on the stage and center it. Step 4: Embedding a Font We're going to embed a font, a super easy task when adding a TextField to the Stage in the Flash IDE, but a little different using ActionScript. Open the Library Panel and right-click in the items area without selecting one, a contextual menu will appear. Click on "New Font" to open a dialog window, give a name to your font and select the one you want to use as shown in the following image. This will create a class of the font you selected, we'll instantiate this in Step 9. Step 5: XML Let's create the XML file. Open your prefered XML or Text editor and write: <?xml version="1.0" encoding="UTF-8"?> <images> <image src="images/image.jpg" title="This is image 1"/> <image src="images/image2.jpg" title="This is image 2"/> <image src="images/image3.jpg" title="This is image 3"/> <image src="images/image4.jpg" title="This is image 4"/> <image src="images/image5.jpg" title="This is image 5"/> </images> When you're done, save it as "images.xml" in your xml folder. Step 6: ActionScript The code that we'll use will be written in a single class that will be used as the Document Class in the FLA file. Create a new ActionScript File (File > New) Save it as "Main.as". Step 7: Package We'll begin with: package classes { The package keyword allows you to organize your code into groups that can be imported by other scripts, it's recommended to name them starting with a lowercase letter and use intercaps for subsequent words for example: galleryClasses. If you don't want to group your files in a package or you have only one class, you can use it right from your source folder, but the idea is to be organized. Step 8: Required Classes import flash.display.Sprite; import flash.display.MovieClip; import flash.net.URLLoader; import flash.net.URLRequest; import flash.display.Loader; import flash.events.Event; import flash.filters.BitmapFilter; import flash.filters.DropShadowFilter; import flash.text.TextFormat; import flash.text.TextField; import flash.text.AntiAliasType; import flash.events.MouseEvent; import fl.transitions.Tween; import fl.transitions.easing.Strong; import fl.transitions.TweenEvent; These are the classes that we'll need to make this gallery. If you need help with a specific class please use the Flash Help (F1). Step 9: Extending the Class public class Main extends MovieClip { The extends keyword defines a class that is a subclass of another class. The subclass inherits all the methods, properties and functions, that way we can use them in our class. We're going to use MovieClip specific methods and properties so we extend using the MovieClip Class. 
Step 10: Variables var xml:XML; // The XML Object that will parse the XML File var images:Array = new Array(); //This array will store the images loaded var imagesLoaded:int = 0; //A counter, counts the images loaded var imagesTitle:Array = new Array(); //The title properties of the XML File var tween:Tween; //Handles the animation var zoomed:Boolean = false; //Checks if a picture is zoomed, false by default var canClick:Boolean = true; //Checks if the user can click a picture to zoom it, true by default var lastX:int; //Stores the x property of the last picture that was clicked var lastY:int; //Stores the y property of the last picture that was clicked var textformat:TextFormat = new TextFormat(); //A TextFormat Object var screen:Sprite = new Sprite(); //A black screen to focus on the active picture var formatFont:Avenir = new Avenir(); //This is the embedded font Step 11: Constructor The constructor is a function that runs when an object is created from a class. This code is the first to execute when you make an instance of an object or when using the Document Class. In this function we'll set the properties of the TextFormat object that we'll use to display a title or a description of each image. Create the black screen that appears when the user clicks on a picture and call the function which loads the desired XML file. public function Main():void { textformat.color = 0xFFFFFF; textformat.font = formatFont.fontName; textformat.size = 17; //Use the same size you used when embedding the font from the Library screen.graphics.beginFill(0x111111, .75); screen.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight); screen.graphics.endFill(); loadXML("xml/images.xml"); } Step 12: XML Loader Function This function loads the XML file provided by the "file" parameter. We also add a listener to handle when the load is complete. private function loadXML(file:String):void { var urlLoader:URLLoader = new URLLoader(); var urlReq:URLRequest = new URLRequest(file); urlLoader.load(urlReq); urlLoader.addEventListener(Event.COMPLETE, handleXML); } Step 13: Parse XML Here we convert the loaded XML file to a valid XML object using the parameter "data" of the URLLoader. Then we use a "for" statement to create a Loader for every image in the XML. Additional information is found in the commentary. private function handleXML(e:Event):void { xml = new XML(e.target.data); for (var i:int = 0; i < xml.children().length(); i++) { var loader:Loader = new Loader(); loader.load(new URLRequest(String(xml.children()[i].@src))); images.push(loader); //Adds the Loaders to the images Array to gain access to them outside this function imagesTitle.push(xml.children()[i].@title); //Adds the title attribute content to the array to use it outside this function loader.contentLoaderInfo.addEventListener(Event.COMPLETE, loaded); //A listener to the function that will be executed when an image is loaded } } Step 14: Images Loaded When a Loader has loaded an image from the XML, the following code is executed: private function loaded(e:Event):void { imagesLoaded++; //Adds one to the imagesLoaded variable if (xml.children().length() == imagesLoaded) //When all images are loaded... { removeChild(preloader); //Removes the Preloader MovieClip prepareImages(); //This function is explained in the next step } } Step 15: Prepare Images This function will add the frame, the TextField to display the title or description, the black background used for that and a Shadow Filter. Let's take it in parts. 
private function prepareImages():void { for (var i:int = 0; i < images.length; i++) //These actions will be applied to all the images loaded so we use a "for" and the "images" array to do that { var container:Sprite = new Sprite(); //A container that will store the image, frame, TextField, TextField background and shadow var frame:Sprite = new Sprite(); //The Frame Sprite var infoArea:Sprite = new Sprite(); //The TextField background var infoField:TextField = new TextField(); //The TextField Step 16: Image Frame This creates a white frame around the image. frame.graphics.beginFill(0xFFFFFF); frame.graphics.drawRect(-20, -20, images[i].width + 40, images[i].height + 80); frame.graphics.endFill(); The rectangle will be positioned under the image to be used as a frame. Step 17: Information Background This creates a black rectangle in the bottom part of the image, where the TextField will be. infoArea.graphics.beginFill(0x111111, 0.75); infoArea.graphics.drawRect(0, 0, images[i].width, 60); infoArea.graphics.endFill(); infoArea.y = images[i].height - 60; Step 18: Image Information The following code sets the TextField properties and adds its contents. infoField.defaultTextFormat = textformat; infoField.embedFonts = true; //You have to add this to use the embedded font infoField.antiAliasType = AntiAliasType.ADVANCED; //This property will display the text more clearly infoField.width = images[i].width - 5; infoField.height = 20; infoField.text = imagesTitle[i]; //The content, obtained from the XML and stored in the Array Step 19: Resizing the Images Here we set the desired scale of the images. Since everything will be inside the Container Sprite, we only need to resize it. container.scaleX = 0.3; container.scaleY = 0.3; Step 20: Position The images will have a random position based on the center of the Stage area. We use Math for that. container.x = stage.stageWidth / 4 + Math.floor(Math.random() * (stage.stageWidth / 4)); container.y = stage.stageHeight / 5 + Math.floor(Math.random() * (stage.stageHeight / 5)); Step 21: Shadow Filter This will create a Shadow Filter. var shadowFilter:BitmapFilter = new DropShadowFilter(3, 90, 0x252525, 1, 2, 2, 1, 15); //Distance, angle, color, alpha, blur, strength, quality var filterArray:Array = [shadowFilter]; container.filters = filterArray; //Apply the filter Step 22: Adding to Stage Time to add the Children, the order in which we add them is the order they will take in the Display List, so be sure to add them in this way. infoArea.addChild(infoField); //Adds the TextField to the TextField Background container.addChild(frame); //Adds the Frame to the Container container.addChild(images[i]); //Adds the Image on top of the Frame in the Container infoArea.visible = false; //We set the image information to invisible by default container.addChild(infoArea); //Adds the information area in top of everything Step 23: Listeners Although we could add the Listeners to every Sprite before, I'm going to add them now that they are inside the Container to show you how the Display List works. container.getChildAt(1).addEventListener(MouseEvent.MOUSE_UP, zoomHandler); //This is the Image loaded by the XML, this is the Loader object container.getChildAt(0).addEventListener(MouseEvent.MOUSE_DOWN, dragImage); //This is the Frame container.getChildAt(0).addEventListener(MouseEvent.MOUSE_UP, stopDragImage); //Frame addChild(container); //Lastly, we add the Container to the Stage Step 24: Drag Functions In the previous step we added two listeners to the Frame of the images. 
These functions will take care of the drag. We use "parent" because we want to drag all the objects: since the "target" is the Frame Sprite, its parent is the Container. private function dragImage(e:MouseEvent):void { e.target.parent.startDrag(); } private function stopDragImage(e:MouseEvent):void { e.target.parent.stopDrag(); } Step 25: Zoom This function is in charge of zooming in and out. Its Listener is on the actual image, so clicking on the Frame will not call this function. Editor's Note: For some reason, the else if () statement within this zoomHandler function was making our syntax highlighter crash. As it doesn't want to display on the page, I've made the function available for download. Sorry for any inconvenience, Ian. Step 26: Motion Finish Some actions need to be executed when the Tweens are finished; these are those actions. private function zoomInFinished(e:TweenEvent):void { zoomed = true; //Modify the variables according to the event canClick = true; tween.obj.getChildAt(2).visible = true; //Sets the Information area to visible } private function zoomOutFinished(e:TweenEvent):void { zoomed = false; removeChild(screen); //Removes the black screen tween.obj.getChildAt(0).addEventListener(MouseEvent.MOUSE_DOWN, dragImage); //Adds the drag listener back to the Frame Sprite } Step 27: Document Class Go back to the FLA and add Main as the Document Class in the Properties Panel. If you save your class in a package you have to add the name of the package too, something like: yourpackage.Main. Test your file and see your gallery working! Conclusion As always, try different things in your code to make the gallery just as you want. I hope you enjoyed this tut, thanks for reading!
https://code.tutsplus.com/tutorials/create-a-shuffle-gallery-in-flash-using-xml-and-actionscript-30--active-1369
CC-MAIN-2019-22
en
refinedweb
Some of you might have heard that Nathan and I took time out of our dev schedules to do an *Intro to Komodo* webinar on the 19th of November, 2015. What you may not know is that everyone asked a TON of questions! We couldn’t possibly answer all of them in the time that we had, so we took all the questions asked and I answered them here. First here’s the webinar… …and now all the answers! Tweet at me ([@th3coop]()) which one you think is my favourite question. If you get it I will totally high-five you. So here they are… - What does “Mozilla codebase” mean?Mozilla is an application framework that allows developers to create desktop applications. They can be cross platform, come with built in libraries, and build user interface (UI) using XUL (HTML for your desktop). Mozilla is what Firefox is built on. This means that some addons that are compatible with Firefox can be ported to Komodo with a few small changes. Mozilla also allows users to build and manipulate the UI using CSS, Javascript, and, in our case, Python. - How can I change the colors in the side and bottom panels?Because Komodo uses web technologies like Javascript and CSS, you can easily change ANY aspect of the UI by adding your own custom CSS rules. We recommend using the Stylish Addon (ported from Firefox) to implement your custom styles. To investigate the Komodo UI structure we recommend the DOM-Inspector addon and Element Inspector (must have DOM-Inspector already installed). You can easily install all of these tools through Commando ((ctrl [command] + shift + O) > Package scope). Additionally if you don’t mind being on the bleeding edge, you could try the Tabula Rasa skin which allows you to set your own colors. - Can I add namespaces to my file through commando?Not yet, but we’re tracking that request on github. - Is Carey prepared?Carey, for once, was totally prepared! - Where is the Commando Documentation? I don’t understand how it works.You can find the Commando Documentation here. - How can I extend Komodo so that I can display the code structure in the ‘#’ tab for another language (for example, Fortran)?For that you would need to write your own Code Intelligence addon for the language you choose – essentially teaching Komodo how to interpret code at a high level. For example see Komodo-Go. Alternatively, you could extend New Source Tree, which uses regex to scan a file. - Could you set a Userscript to auto run/prompt?Yes, definitely. You can set a Userscript to run on many different events in Komodo. You can customize this under the Userscripts properties dialog that you can reach by right clicking the Userscript > Properties > Trigger tab. For more details see our Komodo Macros and Userscripts documentation. - Could I adapt Komodo to use as an interactive debugger with a compiled language (C++/Fortran) and gdb? For example, as is done in emacs.It is possible **but** it is not something that is easily answered and takes quite a bit of Komodo and GDB knowledge to accomplish. It can also be challenging because usable resources are limited, since this is based on IDE which is closed source. However, the code is all there in IDE and you could theoretically reverse engineer it. We are always available on the forums if you need help. - You mentioned the skin and theme (tabula rusa) color customization may appear in a future version of Komodo. What else can we expect to see?You can expect us to push the envelope, break the mold, give it 110%, go the extra mile, and, of course, be all that we can be. 
For more details, watch our forums, read our tweets, and stay tuned for our newsletters to keep on top of what’s in the works. As we continue to develop future Komodo builds, you can also expect to find beta releases available for testing. - Can Komodo debug Node.js ?Yes, Komodo’s supports Node debugging for all current stable Node versions. *(The current being 5.1.0 as of this text being written. For those of you reading this in the future and using later NodeJS builds, how did the Canucks do this year? You don’t care? Ya me neither.)* - It is well known that you are very awesome. How did you get to be so awesome?This was one of the best questions asked in my personal (awesome) opinion. To answer your awesome question Sean, I will quote the great basically, make sure you drink lots of water so that you’re well hydrated. Also yoga classes will probably help you to stay limber to fit into a wide variety of shaped bowls and other containers. - Does IDE work with Visual Basic?No…no no no no no…No. Well kinda. You can get support for basically any language using Project menu > New from template > New Komodo Language. - Do you have something to generate getters and setters for class variables in different languages?This is what the conditional snippets were meant for. Using EJS inside Komodo snippets you can generate any construct you like. Refer to the Python ninit snippet in the sample abbreviations folder that comes with the Komodo install. - Is the source control interface command-based or is a compiled library required? In other words, if I want to customize for another SCM (such as Fossil) is it possible?The easiest way to do this would be through the Commando Shell scope. You would need to look at existing examples in the Komodo source to get started but that tool is implemented to be extensible. - Do I need to be connected to the Internet to use Komodo?Only if you’re using tools that need to be connected to the Internet such as Collaboration, Source Code Control, or remote folders. - Do you work from home full-time?Since 74% of ActiveState is in Vancouver, BC, Canada, the main office is there. Since 66.6666667% of the dev team lives 83% of the width of North America away from Vancouver, they work from home 90% of the time. According to Google it would take 27% of a week to drive, which is a brutal commute. I occasionally work from home but I’m lucky enough to live in Vancouver, about 33.3333333% of an hour from the office. - Do I need a different license for each of my computers?NOPE! Unlike BC car insurance, Komodo licenses are on the user and not the hardware. So you can install your license and Komodo on all the dev machines you have. - How do you handle being cross-platform, is the software Java-based?NO…no no no no no no no no no…My god no. As I mentioned before, Komodo is based on Mozilla which can be compiled for a variety of platforms. This allows us to create builds for Linux (many different flavours), OSX, and Windows, relatively easily. Komodo proprietary and open sourced code is Python and Javascript which also means no platform lock-in. - What are your favourite community-provided packages?My favourite is Element Inspector, which I’m certain was ported over by our very active community member Defman. I use it almost daily.I’m sure Nathan’s fav is Stylish since he’s got so much *STYLE*. Seriously though, if you like the look and feel of Komodo lately, you can thank Nathan for that. 
He’s done a TON of work to make the Komodo skin easier to customize, look more modern, and be more pleasant to use. - What functionality do you have for Node.js?NodeJS is a first class language in Komodo. All the major IDE tools work with Node including, but not limited to, debugging, code intelligence, syntax checking, and code browsing. - What is the difference between Komodo Edit and IDE?You can see a complete comparison between Komodo Edit and Komodo IDE on our website. Komodo Edit is a very robust text editor and Komodo IDE is, well, a full blown IDE. - How does this work with Docker?Komodo IDE provides an interface through the Commando Shell scope. This interface provide contextual command help, command listing, intelligent command arguments, and completions for partial strings that match a possible next command. - What are the advantages to using Komodo rather than us all using different programs? - Easily extended using dynamic languages and web technologies - Great community for support and learning - One IDE all your projects and languages - No external language or tool dependencies before you can install - Cross platform with native look and feel - Great out of the box experience - Auto configuration - No addons installation just to get core tools and feature - Super awesome development team - You both look beautiful by the way.(context: the attendees could see the slides AND our webcams) This is not a question… but I’ll allow it.
https://www.activestate.com/blog/komodo-webinar-questions/
CC-MAIN-2019-22
en
refinedweb
Enterprise Library Data Access Application Block In C# .NET

What is a Data Access Application Block (DAAB)? A Data Access Application Block encapsulates the performance and resource management best practices for accessing Microsoft SQL Server databases. It can easily be used as a building block in your own .NET-based application. If you use it, you will reduce the amount of custom code you need to create, test, and maintain: it comes with a single assembly containing a class that has many useful methods. Read more in.

Install Enterprise Library Please follow this link to download the Enterprise Library:

Create a new MVC web application. Make the below changes in your web.config file. Add a DAL folder in your project. Add a BaseClass and add the below code in the BaseClass.

using Microsoft.Practices.EnterpriseLibrary.Data;

namespace MVC_ADO.DAL
{
    public class BaseClass
    {
        public virtual Database GetDatabase()
        {
            Database db;
            db = DatabaseFactory.CreateDatabase("MasterDB");
            return db;
        }
    }
}

Add an EmployeeModel class and add the below code:

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

namespace MVC_ADO.DAL
{
    public class EmployeeModel : BaseClass
    {
        Database db = null;

        public override Database GetDatabase()
        {
            return base.GetDatabase();
        }

        public DataSet GetEmployee()
        {
            try
            {
                db = GetDatabase();
                DataSet ds = new DataSet();
                DbCommand dbCommand = db.GetStoredProcCommand("PROC_GET_EMPLIST");
                // db.AddInParameter(dbCommand, "@IP_UserID", DbType.Int32, UserID);
                // db.AddOutParameter(dbCommand, "@OP_strException", DbType.String, 200);
                ds = db.ExecuteDataSet(dbCommand);
                return ds;
            }
            catch
            {
                throw;
                // ds = null;
                // strException = ex.Message.ToString();
            }
        }
    }
}

Note: Create the PROC_GET_EMPLIST Stored Procedure in SQL Server. Now call the GetEmployee function from your controller.

using System.Web.Mvc;
using MVC_ADO.DAL;

namespace MVC_ADO.Controllers
{
    public class HomeController : Controller
    {
        EmployeeModel model = new EmployeeModel();

        public ActionResult Index()
        {
            var list = model.GetEmployee();
            return View();
        }
    }
}

You will get the list of all employees from the Employee table.
http://www.dotnetguru.in/2017/11/
CC-MAIN-2019-22
en
refinedweb
#include <mw/animationdataprovider.h> Link against: animationshared.lib Pure virtual base class for data providers. A data provider takes an animation specification (such as a file), converts it (if needed) into a format recognised by an animator (such as CAnimationFrame objects), and passes it to the animator, via the medium of an animation. Most animation types take a data provider as an argument during contruction. For more detailed usage instructions, refer to the documentation of the derived classes. See also: CAnimation MAnimationDataProviderObserver Called from the animation to obtain the type of data to expect. Sends an event with no associated data to the observer. See SendEventL(TInt,TAny*,TInt) for further details. Sends an event with a single integer data item to the observer. See SendEventL(TInt,TAny*,TInt) for further details. Sends an event with an arbitrary size data item to the observer. See also: TAnimationEvent Sets the destination for data from this data provider. You do not need to call this function unless you are writing a new animation type.
http://devlib.symbian.slions.net/belle/GUID-C6E5F800-0637-419E-8FE5-1EBB40E725AA/GUID-CB96F59F-BEF9-3296-A80A-4E8E8BE354C6.html
CC-MAIN-2019-22
en
refinedweb
curl-library Need advice on handling CyaSSL/wolfSSL's build configurations Date: Thu, 16 Apr 2015 01:05:18 -0400 Recently I made some changes to lib/vtls/cyassl.c to include the CyaSSL build options [1] and support SNI [2]. The latter change is dependent on the former. CyaSSL's includes do not themselves include its build options (file cyassl/options.h) but the defines in that file are needed by the other includes to determine which optional function declarations to expose, and it's possible some structures may be affected as well. There is an exception to not including the build options for some embedded platforms which have kind of a de facto options.h (cyassl/ctaocrypt/settings.h) that is included by the other includes. Because of the above I determined that the options.h needs to come before any CyaSSL include. As you can see in the commit I put it before all other cyassl, however smoke testing has shown urldata.h includes a CyaSSL include so really I would need to place the options.h before everything. That could be a problem --or rather could be more of a problem-- because it turns out the options.h redefines some important symbols DEBUG or NDEBUG, which I reported as a bug and has been (mostly) fixed just recently [4]. It's an issue for me on Visual Studio, and maybe other build systems as well. As you can see from the first draft of the kludge [5] to remedy, I have a workaround for any CyaSSL/wolfSSL version in which the issue is not fixed (currently all) which is to save DEBUG and NDEBUG before CyaSSL includes and restore afterwards. It only works assuming DEBUG and NDEBUG are just the default 1 if they are defined (big assumption) and it doesn't handle other symbols like _POSIX_THREADS, for example. I'm left with a dilemma. A second draft to account for all of above is just going to be more kludgy. And as you can see in [4] the _POSIX_THREADS part is still an open issue. As I mention in [4] I'm thinking about doing an ac_check_funcs in the curl configure.ac for the SNI function, and then define HAVE_SNI based on that. I would also have to do something to test for NO_FILESYSTEM. There used to be functions exposed only when NO_FILESYSTEM but I recall they changed that recently so they are always exposed. Another idea is maybe only include options.h if it's from a version where they fix it. So assuming it's fixed in the next version, something like: #if defined(HAVE_CYASSL_OPTIONS_H) && (LIBCYASSL_VERSION_HEX > 0x03004008) #include <cyassl/options.h> #endif People with older versions of the library would get the same old behavior, but no HAVE_SNI and no NO_FILESYSTEM (exception is embedded versions). Any suggestions? [1]: [2]: [3]: [4]: [5]: ------------------------------------------------------------------- List admin: Etiquette: Received on 2015-04-16
https://curl.haxx.se/mail/lib-2015-04/0069.html
CC-MAIN-2019-22
en
refinedweb
Hello, How can I replace a line with its corresponding points with a PythonCaller? I tried the following code but it is not working.

import fme
import fmeobjects

class pointCreate(object):
    def __init__(self):
        pass
    def input(self, feature):
        if feature.hasGeometry():
            coord = feature.getAllCoordinates()
            cpt = len(coord)
            #coordSys = feature.getCoordSys()
            for cpt in range(0, len(coord)):
                feature.setAttribute('X_e', coord[cpt][0])
                feature.setAttribute('Y_e', coord[cpt][1])
                feature.setGeometry(fmeobjects.FMEPoint([float(coord[cpt][0]), float(coord[cpt][1])]))
                #feature.setCoordSys(coordSys)
                self.pyoutput(feature)
    def close(self):
        pass

Any ideas? Thanks.

Easier to step through the coordinates like this, but as @david_r said, the Chopper followed by a CoordinateExtractor will do the same thing and would be my preference.

coords = feature.getAllCoordinates()
for coord in coords:
    feature.setAttribute('X_e', coord[0])
    feature.setAttribute('Y_e', coord[1])
    feature.setGeometry(fmeobjects.FMEPoint(coord[0], coord[1]))
    self.pyoutput(feature)

The constructor of the fmeobjects.FMEPoint class requires two or three individual float values representing coordinates. See the API reference if you want to leverage the Python FME Objects API. I think this code could work for you.

feature.setGeometry(fmeobjects.FMEPoint(coord[cpt][0], coord[cpt][1]))

Personally I prefer importGeometryFromOGCWKT for creating anything in FME Python, so create a string in Well Known Text format and pass that in.

feature.importGeometryFromOGCWKT("POINT({0} {1})".format(*coord))

This is really good for creating lines and polygons.

Why use a PythonCaller when you can use a Chopper set to 1? If you also need the X/Y as attributes you can follow up with a CoordinateExtractor.
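For anyone who wants a drop-in starting point, below is a minimal sketch that wraps the loop from the answer above into a complete PythonCaller class. It only uses the fmeobjects calls already shown in this thread (hasGeometry, getAllCoordinates, setAttribute, FMEPoint, pyoutput); the class name FeatureProcessor is just an assumed placeholder for whatever class name is configured in the PythonCaller, so test it on your own data before relying on it.

import fmeobjects

class FeatureProcessor(object):
    """Emit one point feature (with X_e/Y_e attributes) per vertex of the input line."""

    def __init__(self):
        pass

    def input(self, feature):
        if feature.hasGeometry():
            for coord in feature.getAllCoordinates():
                # expose the vertex coordinates as attributes
                feature.setAttribute('X_e', coord[0])
                feature.setAttribute('Y_e', coord[1])
                # FMEPoint takes individual floats, not a list
                feature.setGeometry(fmeobjects.FMEPoint(coord[0], coord[1]))
                # output one feature per vertex
                self.pyoutput(feature)

    def close(self):
        pass

As noted in the last answer, a Chopper set to 1 followed by a CoordinateExtractor gives the same result with no Python at all.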
https://knowledge.safe.com/questions/88483/how-to-create-geometries-with-a-python-caller.html
CC-MAIN-2019-22
en
refinedweb
I'm trying to use the return function, I'm new to python but it is one of the things I don't seem to understand. In my assignment I have to put each task in a function to make it easier to read and understand but for example I create a randomly generated number in a function, I then need the same generated number in a different function and I believe the only way this can be done is by returning data. For example here I have a function generating a random number: def generate(): import random key = random.randint(22, 35) print(key) But if I need to use the variable 'key' again which holds the same random number in a different function, it won't work as it is not defined in the new function. def generate(): import random key = random.randint(22, 35) print(key) def number(): sum = key + 33 So how would I return data (if that is what you need to use) for it to work? The usage of return indicates to your method to 'return' something back to whatever called it. So, what you want to do for example in your method is simply add a return(key): # Keep your imports at the top of your script. Don't put them inside methods. import random def generate(): key = random.randint(22, 35) print(key) # return here return key When you call generate, do this: result_of_generate = generate() If you are looking to use it in your number method, you can actually simply do this: def number(): key = generate() sum = key + 33 And if you have to return the sum then, again, make use of that return in the method in similar nature to the generate method.
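Putting the two halves of the answer together, a complete runnable sketch looks like this (the 22-35 range and the + 33 are just the numbers from the question; sum is renamed to total because sum() is a Python built-in):

import random

def generate():
    key = random.randint(22, 35)
    print(key)
    return key          # hand the value back to the caller

def number():
    key = generate()    # reuse the value returned by generate()
    total = key + 33
    return total

print(number())

Note that every call to generate() produces a new random number, so if several functions must see the same value, call generate() once and pass the result to each of them as an argument.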
https://www.codesd.com/item/using-the-same-variables-in-different-functions-in-python.html
CC-MAIN-2019-22
en
refinedweb
Unions in C and C++ are aggregate quantities like structs, except that each element of the union has offset 0, and the total size of the union is only as large as is required to hold its largest member [1]. Only one member of a union may be "active" at a time. Unions are most often used to provide variant functionality; i.e., allowing a variable to contain data of different types, as in the following structure from a GUI list control:

struct CellItem
{
  UInt    mask;      /* valid field mask. (Combination of flags). */
  . . .
  Boolean bReadOnly; /* Indicates item's editable state. */
  Int     type;      /* Type of data. */
  union              /* Union of data. */
  {
    SInt32  lVal;    /* 32-bit integral value. */
    double  dVal;    /* Floating-point value. */
    char    *strVal; /* String. */
    time_t  tVal;    /* File time. */
  } data;
  . . .
};

The CellItem structure represents the data for each cell in the display as one of an integral number, floating-point number, string, or time. A space economy is achieved by defining the data member as a union (in this case, an unnamed union) and hence storing only space for the largest member (in this case, dVal, a double) rather than storing space for all possible data types. Naturally, the more members a union has, the more significant the space saving. Note that a separate member, type, in the CellItem structure, is required to record which member of data is "active," but that would be the case even if data were a structure, so it doesn't detract from the space saving of the union itself. However, unions have another, darker, side. Because each member has a zero offset, a union can be used to overlay the bytes for one member with those of another. Since unions can have members of heterogeneous type (they'd be a pointless construct if they didn't), this content overlay can be used for casting. You may already be alarmed at such a prospect. If that's the case, you've got good instincts. But let's break it down, as you may still have missed some of the subtleties. There are actually four aspects to converting one type to another in such a "raw" byte block transfer: alignment, size, value, and bit representation. The first aspect, alignment, is handled for us with the union's characteristic of placing all members at 0 offset. However, the other three are by no means guaranteed. In fact, they are emphatically violatable by unions; again, if this wasn't the case, unions would not be able to support their intended purpose. Let's consider the issue of size. If we look back at our union from the CellItem, we might be inclined (assuming a 32-bit architecture) to cast between the lVal and strVal members, since they're both 32-bit quantities. Any hardy travelers who've encountered the Win32 API will have done such conversions many times, though more likely with C-style casts than via unions. We can probably cast between lVal and tVal, since time_t is often defined as a 32-bit quantity. However, we'd certainly be asking for trouble if we attempted to coerce values between dVal and the other union members, because double is usually represented as either 64 bits or 80 bits [2]. Clearly, using unions to cast between elements of different sizes is a nonstarter. The conversion between lVal and strVal has another important consideration: the value of the integer/pointer. On some machine architectures, certain types must always be aligned on word boundaries, so translating from arbitrary integer values can result in misalignment, and a nasty bus error.
Even on those architectures where you don't precipitate hardware violations, you're likely to incur significant costs if you oblige the processor to manipulate, say, 32-bit types that are spread over two 32-bit words. And that's not even considering the fact that the actual pointer value would very likely be wrong! However, even when we've got compatible sizes, alignment and the value we want to convert is meaningfully in both "typespaces," there can still be problems. If dVal had been a float, which is often 32 bits on 32-bit systems, we'd still have a big problem in converting from dVal to, say, lVal. Although it's not guaranteed by the Standard, casting between a 32-bit integral type and a 32-bit pointer is usually "valid" to the degree that pointers are effectively integral indexes into the address space of the "abstract machine." But there's nothing like that level of compatibility between the bit forms of an integral type and a floating-point type. For example, signed integers in C/C++ are represented in two's complement, and floating-points are represented in, er, well, floating point. There are very good reasons why we leave such things to the compiler to handle on our behalf. Consider the following code:

union
{
  long  l;
  float f;
} u;

u.l = 999;
assert(u.f > 998.0 && u.f < 1000.0); // Not a chance, bluey!

The actual value of u.f in this case is going to be some wildly different number; in my testing environment it is 1.39990x10^-42. Not exactly a victory for the union cast! So, if you didn't know before, hopefully you now realize that using a union to perform a cast is a pretty bad idea. Because it almost completely circumvents the compiler's ability to do any type checking, there's no protection from misalignment, truncation, or representation mismatches, and it will likely get you only "dangerous and nonportable" [3] code. (Actually, in chapter 19 of Imperfect C++ [4], I show how union casts, in the form of the STLSoft [5] union_cast template class, can be made into a very robust and useful technique by using constraints and a dash of template metaprogramming to restrict the cast types. Notwithstanding those techniques, union casts should be considered harmful and avoided wherever possible.) Unfortunately, sometimes the carelessness of some library writers leaves other library writers (that's us!) with little choice but to get them out of the bottom of the toolbox and carefully dust them off. Take, for example, the Microsoft WinInet API.

Win32 ANSI / Unicode Compilation

The WinInet function FtpFindFirstFile() is actually a macro that is defined as either FtpFindFirstFileA() or FtpFindFirstFileW(), depending on whether ANSI or Unicode compilation is selected (by the absence or presence of the UNICODE preprocessor symbol). Similarly, the WIN32_FIND_DATA structure is actually a macro defined either as WIN32_FIND_DATAA or WIN32_FIND_DATAW.
Hence, although the third parameter of FtpFindFirstFile() is notionally a pointer to a WIN32_FIND_DATA structure, in actuality the two function variants take pointers to the corresponding structure variants, as in:

HINTERNET FtpFindFirstFileA( HINTERNET        hConnect
                           , char const       *lpszSearchFile
                           , WIN32_FIND_DATAA *lpFindFileData
                           , DWORD            dwFlags
                           , DWORD            dwContext);

and

HINTERNET FtpFindFirstFileW( HINTERNET        hConnect
                           , wchar_t const    *lpszSearchFile
                           , WIN32_FIND_DATAW *lpFindFileData
                           , DWORD            dwFlags
                           , DWORD            dwContext);

This technique is not especially sophisticated, but it does work and is widely used throughout the Win32 API, and third-party libraries, for building different binaries from the same source.

Add a Dash of Traits

In writing a traits class, as part of the InetSTL libraries [6], I came across a problem that necessitated using the any_caster class described in this article. The problem it addresses is that in the WinInet header files that come with versions 5 and 6 of the Visual C++ compiler, both variants of the FtpFindFirstFile() function are declared as taking a pointer to a WIN32_FIND_DATA structure, and not to that of its requisite variants as shown above. Because WIN32_FIND_DATA is defined as WIN32_FIND_DATAA for ANSI compilation, and as WIN32_FIND_DATAW for Unicode compilation, there is no conflict with the analogous definition of the FtpFindFirstFile() macro. This means that if you code in terms of the two macros, rather than any of the specific ANSI/Unicode variants, they are in sync and you won't have a problem. Consider the following code:

WIN32_FIND_DATA fd;

FindFirstFile(..., &fd, ...);

If you compile this without UNICODE defined, it is actually translated to:

WIN32_FIND_DATAA fd;

FindFirstFileA(..., &fd, ...);

Whether this is compiled with the correct form, FindFirstFileA(..., WIN32_FIND_DATAA*, ...), or the incorrect form, FindFirstFileA(..., WIN32_FIND_DATA*, ...), it still works, since WIN32_FIND_DATA is translated to WIN32_FIND_DATAA. Conversely, if you compile this with UNICODE defined, it becomes:

WIN32_FIND_DATAW fd;

FindFirstFileW(..., &fd, ...);

Again, this works with both forms because WIN32_FIND_DATA is translated to WIN32_FIND_DATAW in the presence of UNICODE. However, if you want to write code that references the functions/structures explicitly, you're in a bit of a pickle, whether you attempt to use FtpFindFirstFileA() explicitly from a Unicode compilation, or FtpFindFirstFileW() from an ANSI compilation. In either case, the function will be prototyped to point to the wrong structure variant. Consider the following correct code:

WIN32_FIND_DATAA fda;
WIN32_FIND_DATAW fdw;

FindFirstFileA(..., &fda, ...); // 1
FindFirstFileW(..., &fdw, ...); // 2

This does not work with the incorrect form of the WinInet libraries. If UNICODE is not defined, then line 2 won't compile. If UNICODE is defined, then line 1 won't compile. This is not good. Because traits that specialize in character type explicitly use the ANSI or Unicode variants of a given function (traits are entirely independent of the presence/absence of UNICODE), either the wchar_t or char specialization will cause an error when compiled with the Visual C++ 5/6 headers. One solution is to attempt to discriminate which compiler you're using, and write the code with casts, as in:

template <>
struct inetstl::filesystem_traits<wchar_t>
{
  . . .
  static HINTERNET find_first_file( HINTERNET      hconn
                                  , wchar_t const  *spec
                                  , find_data_type *findData
                                  , uint32_t       flags = 0
                                  , uint32_t       ctxt  = 0)
  {
#if defined(_MSC_VER) && \
    _MSC_VER <= 1200
    return ::FtpFindFirstFileW( hconn, spec
                              , reinterpret_cast<LPWIN32_FIND_DATA>(findData)
                              , flags, ctxt);
#else /* ? compiler */
    return ::FtpFindFirstFileW(hconn, spec, findData, flags, ctxt);
#endif /* ? compiler */
  }
  . . .

Naturally, this is very ugly, and a maintenance headache: We'd have to vet each and every compiler's WinInet.h. But we have to put up with headaches every day, and ugliness is part and parcel of any portable coding effort. The overriding objection to this approach is that it is no solution at all. If you specify any recent version of the Microsoft Platform SDK's include directory prior to those that come with the compiler, Visual C++ 5/6 will build the correct form without issue. If we then present it with the code shown above, it will fail to compile because the LPWIN32_FIND_DATAA type is no longer incorrectly expected by FtpFindFirstFileW(). This is where our naughty cast comes in. Let's look at it in action before we see how it's implemented. Rewriting the function in an error-free form we get:

template <>
struct inetstl::filesystem_traits<wchar_t>
{
  . . .
  static HINTERNET find_first_file( HINTERNET      hconn
                                  , wchar_t const  *spec
                                  , find_data_type *findData
                                  , uint32_t       flags = 0
                                  , uint32_t       ctxt  = 0)
  {
    return ::FtpFindFirstFileW( hconn, spec
                              , any_caster< find_data_type*
                                          , LPWIN32_FIND_DATAA
                                          , LPWIN32_FIND_DATAW
                                          >(findData)
                              , flags, ctxt);
  }
  . . .

. . . // same for inetstl::filesystem_traits<char>

The any_caster template (shown in Listing 1) takes a source type, followed by two or more conversion types (it has to be at least two, otherwise there'd be no point), and provides implicit conversion from the former to any of the latter. Naturally, there should be no ambiguity between the types to convert to, but that's the user's responsibility. Make no mistake: At no time do we try to pass an ANSI structure to a Unicode function, or vice versa. It's just that the compiler thinks that we should, and we must do the right thing while making the compiler believe we're doing what it thinks is the right thing (which is wrong). The converter has a remarkably simple implementation. It is just a union of nine types. Its constructor takes a single parameter of the source type, and there are eight implicit conversion operators for the eight destination types. In order to support between two and eight destination types, the latter six are defaulted. I originally wanted to default them to void, but naturally one cannot have implicit conversions to void. Nor can they be the same type, as the compiler would rightly complain about having multiple implicit conversion operators to the same type. So what I've done is default them to pointers to distinct instantiations of the InvalidType helper template. any_caster is implemented as a union precisely because we want, in this rare case, to subvert all type checking. The alternative would be to implement the conversion operators using C++ casts. Unfortunately, as I show in [4], it can be very difficult to generically code such conversions with the appropriate mix of C++ casts, and even C casts will precipitate warnings with some compilers.
Since the size, alignment and bit representation of our convertee types are compatible (LPWIN32_FIND_DATAA and LPWIN32_FIND_DATAW are both 32-bit pointers, and the conversion from one to the other is valid in this case because one actually is the other), the union cast is the appropriate choice in this (unusual) case. Hence union casts may be considered necessary. And that's it! Readers of Imperfect C++ might want to apply some of the techniques described in the discussion of the union_cast template to any_caster in order to increase its robustness by constraining its range of acceptable types; the full implementation is available with the STLSoft libraries. We might, for example, constrain all the types to be the same size and to be, say, all integral types or all pointers, using static assertions [4]. For example:

~any_caster() // Place in dtor so always gets checked
{
  STATIC_ASSERT( is_pointer_type<T>::value &&
                 is_pointer_type<T1>::value &&
                 is_pointer_type<T2>::value &&
                 is_pointer_type<T3>::value &&
                 is_pointer_type<T4>::value &&
                 is_pointer_type<T5>::value &&
                 is_pointer_type<T6>::value &&
                 is_pointer_type<T7>::value &&
                 is_pointer_type<T8>::value);
}

We could even constrain all types to be pointers that point to things that are the same size. Even if you elect to take such measures to increase the safety of your union casts, it's worth stressing one last time that using unions for casting is something to be done only in extremis. But I think that when pushed into it by poorly designed/tested libraries, we are entitled to get out the big guns!

Acknowledgments

Thanks to Bjorn Karlsson, Garth Lancaster, Greg Peet, John Torjo and Walter Bright, for their excellent criticisms and suggestions.

About the Author

Matthew Wilson is a software development consultant for Synesis Software, and creator of the STLSoft libraries. He is author of the book Imperfect C++ (Addison-Wesley, 2004), and is currently working on his next two books, one of which is not about C++. Matthew can be contacted via.

Notes & References

[1] Kernighan, Brian and Dennis Ritchie. The C Programming Language, Prentice-Hall, 1988.
[2] How Java's Floating-point Hurts Everybody Everywhere, Kahan and Darcy.
[3] Stroustrup, Bjarne. The C++ Programming Language (Special Edition), Addison-Wesley, 2000.
[4] Wilson, Matthew. Imperfect C++, Addison-Wesley, 2004. (I can't recommend this book highly enough, ;-) )
[5] STLSoft is an open-source organization whose focus is the development of robust, lightweight, cross-platform STL-compatible software, and is located at.
[6] InetSTL () is the Internet-related subproject of STLSoft, which was introduced with STLSoft version 1.7.1. It currently provides STL-like mapping of the Win32 WinInet APIs, but will evolve to cover other Internet APIs (including those on platforms other than Win32).
http://www.drdobbs.com/flexible-c-8-union-casts-considered-harm/184403890
CC-MAIN-2019-22
en
refinedweb
Let’s get started by doing “the simplest thing that could possibly work”. public class DynamicDataTable : DynamicObject { private readonly DataTable _table; public DynamicDataTable(DataTable table) { _table = table; } } For now, we’ll use a DataTable for the actual storage and a DynamicObject to provide an implementation of IDynamicMetaObjectProvider. What can we accomplish with this? Well, quite a lot, actually – in a very real sense, we’re only limited by our imagination. GetMember The first ability we want is to be able to extract a column out of the data table; given a DynamicDataTable “foo”, the expression “foo.Bar” should give us something enumerable that represents the data in the column. The DLR describes this operation as “get member”, and DLR-based languages implement a GetMemberBinder in order to bind a dynamic “get member” operation. DynamicObject makes it very easy for us to handle the GetMemberBinder. We simply override the virtual method TryGetMember and implement the behavior that we want. The binder has two properties: Name, which indicates the name of the member that is being bound, and IgnoreCase. You can reasonably expect that case-sensitive languages like C#, Ruby and Python will set IgnoreCase to false, while VB will set it to true. private DataColumn GetColumn(string name, bool ignoreCase) { if (!ignoreCase) { return _table.Columns[name]; } for (int i = 0; i < _table.Columns.Count; i++) { if (_table.Columns[i].ColumnName.Equals(name, StringComparison.InvariantCultureIgnoreCase)) { return _table.Columns[i]; } } return null; } public override bool TryGetMember(GetMemberBinder binder, out object result) { var c = _table.Columns[binder.Name]; if (c == null) { return base.TryGetMember(binder, out result); } var a = Array.CreateInstance(c.DataType, _table.Rows.Count); for (int i = 0; i < _table.Rows.Count; i++) { a.SetValue(_table.Rows[i], i); } result = a; return true; } Here I’ve chosen to return an Array whose elements are typed identically to the column’s original data type. That’s because it’s very easy to create an Array of a particular type and to set its individual elements from the System.Objects that we can get from the DataRow. By factoring out GetColumn into a separate method, I’ve made it easy to change just this logic. We might want, for instance, to allow a symbol name like “hello_world” to match the column named “hello world”. Non-dynamic members What if I want to directly access other properties of the DataTable like the “Rows” DataRowCollection? The design of the DLR makes this easy. If you don’t handle a binding operation yourself, it’s possible to fall back to a default behavior implemented by the language-provided binder. And for VB, C#, Python and Ruby, the fallback behavior is to treat the object like a normal .NET object and to access its features via Reflection. That’s why it’s useful to call base.TryGetMember instead of throwing an exception when the column name can’t be found. So if we implement a trivial “Rows” property, a reference to DynamicDataTable.Rows will return DataTable.Rows even when the GetMember is performed dynamically at runtime (unless there actually is a column named “Rows”…). public DataRowCollection Rows { get { return _table.Rows; } } SetMember The next interesting thing we want to be able to do is to set a column on the DataTable whether or not it already exists. The DLR describes this operation as “set member”, and defines a corresponding SetMemberBinder to perform the binding operation. 
Like the GetMemberBinder, this class has two properties: Name and IgnoreCase. We want to be able to set the column either to a single repeated constant value or to a list of values. But there are lots of different lists we might like to support: for instance, lists, collections or even plain IEnumerables. Let's make some decisions about the semantics of the SetMember operation on our type:

- If the object's type implements IEnumerable and the object isn't a System.String, then we'll treat it like an enumeration. Otherwise, we'll treat it like a single value.
- If it's an IEnumerable<T> we'll use the generic type as our DataType. For a plain IEnumerable, the DataType will be System.Object.
- If the object does not implement IEnumerable (or the object is a System.String) then the DataType will be the object's actual RuntimeType.

For an enumeration, we'll read items into a temporary array until we reach the number of rows in the table. If the enumeration ends before then, we'll raise an error. If, at that point, there are still additional items remaining in the enumeration, then we'll also raise an error. The specific behavior of our implementation for each of these types isn't very important. What is important is that we've identified all the types that we expect we might get, and have identified the logic we're going to implement for those types. Now, on to the code!

public override bool TrySetMember(SetMemberBinder binder, object value)
{
    Type dataType;
    IEnumerable values = (value is string) ? null : (value as IEnumerable);
    bool rangeCheck = (values != null);
    if (values != null)
    {
        dataType = GetGenericTypeOfArityOne(value.GetType(), typeof(IEnumerable<>)) ?? typeof(object);
    }
    else
    {
        values = ConstantEnumerator(value);
        dataType = (value != null) ? value.GetType() : typeof(object);
    }

    object[] data = new object[_table.Rows.Count];
    var nc = values.GetEnumerator();
    int rc = _table.Rows.Count;
    for (int i = 0; i < rc; i++)
    {
        if (!nc.MoveNext())
        {
            throw new ArgumentException(String.Format("Only {0} values found ({1} needed)", i, rc));
        }
        data[i] = nc.Current;
    }
    if (rangeCheck && nc.MoveNext())
    {
        throw new ArgumentException(String.Format("More than {0} values found", rc));
    }

    var c = GetColumn(binder.Name, binder.IgnoreCase);
    if (c != null && c.DataType != dataType)
    {
        _table.Columns.Remove(c);
        c = null;
    }
    if (c == null)
    {
        c = _table.Columns.Add(binder.Name, dataType);
    }
    for (int i = 0; i < rc; i++)
    {
        _table.Rows[i][c] = data[i];
    }
    return true;
}

(GetGenericTypeOfArityOne and ConstantEnumerator are methods whose names are pretty self-explanatory, and whose implementations can be found in the downloadable source code.) Armed with these two methods, our type now supports all of the operations we need to implement the sample program described in Part 0 of this series. A version of the complete source code can be downloaded from this location. In Part 2, we'll add the ability to perform numerical operations between columns. See you then!
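As a side note for readers coming from Python rather than .NET (this is not part of the original post): the same member-interception idea that TryGetMember and TrySetMember express in the DLR can be sketched with __getattr__ and __setattr__ over a plain dict of columns. The class below is only a loose, illustrative analogue of the DynamicDataTable semantics described above, not a port of it.

class DynamicTable(object):
    def __init__(self, row_count):
        # bypass our own __setattr__ while setting up internal state
        object.__setattr__(self, '_columns', {})
        object.__setattr__(self, '_row_count', row_count)

    def __getattr__(self, name):
        # only called when normal lookup fails, similar to a dynamic GetMember
        try:
            return list(self._columns[name])
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        # dynamic SetMember: accept a sequence of row_count items or one repeated value
        if hasattr(value, '__iter__') and not isinstance(value, str):
            values = list(value)
        else:
            values = [value] * self._row_count
        if len(values) != self._row_count:
            raise ValueError('expected %d values, got %d' % (self._row_count, len(values)))
        self._columns[name] = values

t = DynamicTable(3)
t.price = [1.0, 2.5, 3.0]   # set a column from a list
t.currency = 'EUR'          # set a column from a repeated constant
print(t.price, t.currency)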
https://blogs.msdn.microsoft.com/curth/2009/05/24/dynamicdatatable-part-1/
CC-MAIN-2019-22
en
refinedweb
Heightmap patches beyond the basemap distance will use a precomputed low-res basemap. This improves performance for far-away patches. Close up, Unity renders the heightmap using splat maps, blending between any number of provided terrain textures.

using UnityEngine;

public class Example : MonoBehaviour
{
    void Start()
    {
        Terrain.activeTerrain.basemapDistance = 100;
    }
}
https://docs.unity3d.com/2018.3/Documentation/ScriptReference/Terrain-basemapDistance.html
CC-MAIN-2019-22
en
refinedweb
2017-12-04 - Fiji + KNIP hackathon From Monday, December 4, 2017 through Friday, December 15, 2017, the Max Planck Institute of Molecular Cell Biology and Genetics hosts ~50 developers at the Center for Systems Biology in Dresden, Germany for a hackathon to develop ImageJ2 and Fiji core infrastructure and plugins. Contents - 1 Voluntary hackathon calendar - 2 Hackathon google doc - 3 Hackathon on Twitter (#hackdd17) - 4 Technical Discussions - 5 Hackathon Progress - 5.1 ilastik - (Dominik Kutra, Carsten Haubold) - 5.2 SciView - (Kyle Harrington, Ulrik Günther, Curtis Rueden) - 5.3 ImageJ server - (Curtis Rueden, Petr Bainar) - 5.4 SciJava - (Curtis Rueden) - 5.5 ImageJ Ops - (Curtis Rueden) - 5.6 ImageJ Legacy - (Curtis Rueden) - 5.7 ImageJ Launcher - (Curtis Rueden) - 5.8 Bio-Formats - (Curtis Rueden) - 5.9 ClearControl / ClearVolume / ClearCL (Robert Haase, Loic Royer) - 5.10 ImgLib2 IJ - (Matthias Arzt) - 5.11 Labkit - (Matthias Arzt) - 5.12 ImgLib2 ROIs - (Alison Walter, Tobias Pietzsch, Curtis Rueden) - 5.13 ImageJ-OMERO - (Alison Walter, Josh Moore) - 5.14 BigCAT (Philipp Hanslovsky) - 5.15 scyjava & imglyb (Philipp Hanslovsky, Curtis Rueden) - 5.16 Image Sequence Registration Plugin (Christian Tischer) - 5.17 Mastodon (Jean-Yves Tinevez, Tobias Pietzsch) - 5.18 TrackMate & MaMuT (Ulrik Günther, Kyle Harrington, Jean-Yves Tinevez) Voluntary hackathon calendar Hackathon google doc Hackathon on Twitter (#hackdd17) Technical Discussions N5, not HDF5 For more info on N5, check out the github repository here. - feels like HDF5, but stores chunks(blocks) in separate files in the file system. - is a Java library, but Constantin Pape already wrote a C++/python version of it: z5 (also matches "zarr" library), - attributes are stored in an additional JSON file - Discussion: should we define standard now as to how data should be stored in there to prevent an emergence of a zoo of different flavors as there is for HDF5? - how to do time series where each timestep / angle could have different image size - if we want a general "N5 viewer" for images, we'd have to add calibration data - put this information around the N5 dataset, because it behaves more like a dataset within an H5 file. - Perhaps make it versioned? Because a duck is not always a duck... - why another file format? - parallel writes (awesome for clusters with shared filesystem) - there is a special type for label blocks - blocks can have a halo - the block grid does not need to be filled dense, some blocks could be missing - couldn't this just be another flavor of HDF5? - are parallel writes to the same block prevented by some kind of locks? - the HDF5 team should be included in the discussions to learn from their mistakes - there is lots of information on parallel writing of HDF5 files out there - Try to write a N5 dataset into a FUSE filesystem/file??? Could this be a work-around for the many-small-files issue? BigDataViewer - There is a fork of BigDataViewer for JavaFx. [1] - BigDataViewer will be splitted into a UI independent part, and the Swing UI. This will make it possible to merge the JavaFx and Swing Version of BigDataViewer . We discussed opportunities to improvement for the Bdv design: - Slim the core of bdv, make it less smart and more predictable. - Bdv handle for different UI toolkits, - Use property pattern for settings like active source ... 
- Actively add/remove views
- Update views (force cache-dropping)
- For BDV add/remove views
- Access to color settings, overlap rendering, …
- Allow grouping of views (externally is ok)
- Consider Accumulator / Composite that works with types other than ARGBType
- Make it easy to disable or replace Dialogs
- Make it easy to remove an overlay
- BdvVistools show function for converters/volatile

Matrix and Vector libraries
- Discussion notes:
- Compared different matrix and vector libraries
- Conclusion:
- use ojAlgo for N-D matrix operations
- vector3 and vector4 are a more complicated conclusion; probably use Mastodon and JOML

ImageJ/Fiji parallelization
- initial focus on embarrassingly parallel data
- specific parallel (cluster) implementation will be abstracted
- from the system architecture point of view, the design can be inspired by the search bar feature implementation
- started use case definition

Hackathon Progress

ilastik (Dominik Kutra, Carsten Haubold)
- developed an ImageJ2 plugin that allows streaming raw data and predictions from the work-in-progress ilastik processing backend:
- discussed with Philipp Hanslovsky the benefits of using the N5 format for communication
- the label block format, together with the fact that datasets can be stored sparsely, could be a great foundation for label storage / communication
- played around with Constantin Pape's C++/Python N5 reader as an available format on the ilastik server
- noticed that the only thing needed to serve an N5 dataset on the web is to serve the directory contents. Philipp wrote an N5 REST reader for BigCAT to talk to the ilastik server that way
- worked on some caching problems in the ilastik server
- discussed possible interfaces between Matthias Arzt's LabKit and ilastik-backend for interactive segmentation in ImageJ/BigDataViewer with an ilastik classifier backend

SciView (Kyle Harrington, Ulrik Günther, Curtis Rueden)
- set up automatic deployment of a Fiji update site that works on all operating systems
- imglib2 images as textures on 3D objects
- imagej2-style logging
- public availability of the SciView plugin

ImageJ server (Curtis Rueden, Petr Bainar)
- discussed its design with respect to the ImageJ/Fiji architecture as well as its future use cases
- identified the most imminent issues, raised them on GitHub and brainstormed possible solutions
- worked on a new JavaScript client with enhanced UI, making use of the Angular framework
- demoed the new client to the community
- encouraged members of the community to use annotations specifying whether a SciJava plugin can run headless or not

SciJava (Curtis Rueden)
- Added a minimum Java build version feature to pom-scijava-base (scijava/pom-scijava-base@54bf6664).
- Implemented the #@script directive (scijava/scijava-common@d9dce68b, scijava/scijava-common#294).
- In response to a question from Klim Kolyvanov, cleaned up the PrefService and fixed related parameter persistence bugs (scijava/scijava-common@195878b7).
- With Robert Haase and Deborah Schmidt, developed a SearchService API for extensible text-based searches (scijava/scijava-search). Initially supports searching SciJava modules (e.g. commands and scripts) and ImageJ web resources. Also offers a code snippet executor, as suggested by Kyle Harrington.
- In response to a question from Richard Domander, fixed a bug with DynamicCommand validater method callbacks (scijava/scijava-common@13e4ec32).
- In response to a question from Richard Domander, added a short-term workaround for the fact that you cannot execute a DynamicCommand via CommandService#run (scijava/scijava-common@753dd703).

ImageJ Ops (Curtis Rueden)
- Merged Gabriel Selzer's improvements to the transform namespace for intervals (imagej/imagej-ops#515).
- Merged Vladimír Ulman's Gabor and bigauss filters (imagej/imagej-ops#485).
- Merged Gabriel Selzer's Frangi vesselness filter (imagej/imagej-ops#525).
- Merged Brian Northan's bug-fix to the IFFT op (imagej/imagej-ops#529).
- Merged Brian Northan's diffraction-based kernel, useful for generating PSFs for deconvolution (imagej/imagej-ops#530).
- Merged Kyle Harrington's morphological thinning ops, ported from KNIME (imagej/imagej-ops#317).
- Merged Eike Heinz's Sobel and Hessian filters (imagej/imagej-ops#349).
- With Gabriel Einsdorf, worked on fusion ops for combining RAIs that overlap (imagej/imagej-ops#230). The PR was closed without merge, but a demo of RAI fusion was pushed to my sandbox (ctrueden/sandbox@b5518112).
- In response to a question from Klim Kolyvanov, pushed an example of using the slice op to iterate an op over planes (ctrueden/sandbox@753dd703).

ImageJ Legacy (Curtis Rueden)
- In response to a question from Klim Kolyvanov, fixed a bug with parameter visibility (imagej/imagej-legacy@e89ce40e).

ImageJ Launcher (Curtis Rueden)
- Merged Stefan Helfrich's CI configuration for AppVeyor and Travis CI (imagej/imagej-launcher#49).
- Made minimal changes required for the Launcher to work with Java 9 (imagej/imagej-launcher@e501d695).

Bio-Formats (Curtis Rueden)
- In response to a question from Christian Tischer and David Hörl, filed a PR to enable the high-level Bio-Formats API to accept file patterns directly (openmicroscopy/bioformats#3019).
- Fixed new and existing Java-8 installations of Fiji to work properly with Bio-Formats again (1). This entailed creating a script that instructs users of the old Java 6 version of Fiji to please enable the Fiji-Legacy update site, and instructs users of the new Java 8 version of Fiji to please enable the Java-8 update site.

ClearControl / ClearVolume / ClearCL (Robert Haase, Loic Royer)
- Made Image Quality / Focus measurements from Royer et al., Nat. Biotechnol. (2016) available as a Fiji plugin on a preliminary update site
- Added support for image quality measurements in regions, tissue depth (rings) or tiled over the whole image
- Fixed bugs regarding discovery of OpenCL files distributed using ClearCL's mechanism
- Adapted the ClearControl software to run on another microscope
- Revived code for controlling filter wheels and deformable mirrors
- Merged changes from Max into ClearControl-lightsheet, preparing his time-stepping procedures

ImgLib2 IJ (Matthias Arzt)
- Added a wrapper for the IJ1 VirtualStack, based on imglib2-caches, to support large virtual stacks (imglib/imglib2-ij#12)
- Refactored wrappers for ImagePlus (imglib/imglib2-ij#12)

Labkit (Matthias Arzt)
- Improved usability of the Labkit plugin (maarzt/imglib2-labkit)
- Benchmarked different sparse ROI implementations

ImgLib2 ROIs (Alison Walter, Tobias Pietzsch, Curtis Rueden)
- Changes related to imglib2/imglib2-roi#29:
- Improved transform operation for ROIs
- Fixed boundary computation for transformed ROIs
- Added knownConstant to ROIs for determining if a ROI will always return false or true
- Removed the type parameter for wrappers in Masks static methods; BoolType by default

ImageJ-OMERO (Alison Walter, Josh Moore)
- Added integration test structure (imagej/imagej-omero#69)
- Began adding integration tests (imagej/imagej-omero#70)
- Discussed ROI support and ROI conversions

BigCAT (Philipp Hanslovsky)
- relevant bigcat branch
- Added painting functionality
- Multi-scale WIP
- Used for dense annotations but can be used for label creation for classifier training
- Added open dialogs for N5 and HDF5
- Auto-discovery
- Channels
- Time series
- multi-scale
- User guidance in case of invalid input
- Created a mock server to access the local file system through an HTTP REST API
- Confirmed time series and multi-channel capabilities
- Connected to the ilastik backend (Dominik Kutra, Carsten Haubold)
- Tried to re-introduce scenery for 3D rendering together with Ulrik Günther (WIP)

scyjava & imglyb (Philipp Hanslovsky, Curtis Rueden)
- Made a plan to separate Java and Python code in imglyb
- Created the scyjava Python wrapper around pyjnius to allow for runtime resolution of jars/classes that are not on the classpath yet.
- For now: facilitate Groovy Grape; might switch to the more generic scijava-grab library later on.
- Removes the fat jar requirement of imglyb and will make it easier to distribute
- Currently WIP, as jnius.autoclass does not accept custom class loaders as a parameter (kivy/pyjnius#70)

Image Sequence Registration Plugin (Christian Tischer)
- Wrote an ImageJ2 plugin for image sequence registration
- Source code on GitHub
- Installation via the new Fiji update site EMBL-CBA
- The code is written such that additional registration modalities can be readily added
- Current functionality:
- Accepts N-dimensional input, allowing the user to select which axes to register
- Translational registration using phase-correlation
- Translational and rotational registration, using phase-correlation for translation and "brute-force" testing of all rotations within a user-specified range
- Region of interest selection

Mastodon (Jean-Yves Tinevez, Tobias Pietzsch)
- User-assigned tags
- Editable TrackScheme graph
- Revised action handling to allow switching keymaps
- Navigate-to-branch-child/parent/sibling actions
- Preferences dialog
- Render settings for BDV view
- Keymap configuration
- Code cleanup
https://imagej.net/index.php?title=2017-12-04_-_Fiji_%2B_KNIP_hackathon&amp;diff=next&amp;oldid=36714
CC-MAIN-2019-22
en
refinedweb
In the last few weeks my colleagues and I were involved in a project which required a command line interface. We did so by leveraging the cmd module in the standard Python library, to which we added a network layer using Twisted. In the end, we had classes interacting with the standard streams stdin, stdout, stderr and classes interacting with nonstandard streams such as Twisted transports. All the I/O was line oriented and we basically needed three methods: print_out, print_err and readln_in. Depending on the type of self, self.stdout was sys.stdout, a Twisted transport, a log file or a file-like wrapper to a database. Likewise for self.stderr and self.stdin. This is a problem that begs for generic functions. Unfortunately, nobody in the Python world uses them (with the exception of P. J. Eby) so for the moment we are using a suboptimal design involving mixins instead. I am not really happy with that. The aim of this blog post is to explain why a mixin solution is inferior to a generic functions solution.

In the mixin solution, instead of generic functions one uses plain old methods, stored in a mixin class. In this specific case let me call the class StdIOMixin:

class StdIOMixin(object):
    "A mixin implementing line-oriented I/O"
    stdin = sys.stdin
    stdout = sys.stdout
    stderr = sys.stderr
    linesep = os.linesep

    def print_out(self, text, *args):
        "Write on self.stdout by flushing"
        write(self.stdout, str(text) + self.linesep, *args)

    def print_err(self, text, *args):
        "Write on self.stderr by flushing"
        write(self.stderr, str(text) + self.linesep, *args)

    def readln_in(self):
        "Read a line from self.stdin (without trailing newline) or None"
        line = self.stdin.readline()
        if line:
            return line[:-1] # strip trailing newline

where write is the following helper function:

def write(stream, text, *args):
    'Write on a stream by flushing if possible'
    if args: # when no args, do not consider '%' a special char
        text = text % args
    stream.write(text)
    flush = getattr(stream, 'flush', False)
    if flush:
        flush()

StdIOMixin is there to be mixed with other classes, providing them with the ability to perform line-oriented I/O. By default, it works on the standard streams, but if the client class overrides the attributes stdout, stderr, stdin with suitable file-like objects, it can be made to work with Twisted transports, files and databases. For instance, here is an example where stdout and stderr are overridden as files:

class FileIO(StdIOMixin):
    def __init__(self):
        self.stdout = file('out.txt', 'w')
        self.stderr = file('err.txt', 'w')

>>> FileIO().print_out('hello!') # prints a line on out.txt

The design works and it looks elegant, but still I say that it is sub-optimal compared to generic functions. The basic problem of this design is that it adds methods to the client classes and therefore it adds to the learning curve. Suppose you have four client classes - one managing standard streams, one managing files, one managing Twisted transports and one managing database connections - then you have to add the mixin four times. If you generate the documentation for your classes, the methods print_out, print_err and readln_in will be documented four times. And this is not a shortcoming of pydoc: the three methods are effectively cluttering your application in a linear way, proportionally to the number of classes you have. Moreover, those methods will add to the pollution of your class namespace, with the potential risk of name collisions, especially in large frameworks.
In large frameworks (i.e. Plone, where a class may have 700+ attributes) this is a serious problem: for instance, you cannot even use auto-completion, since there are just too many completions. You must know that I am very sensitive to namespace pollution, so I always favor approaches that can avoid it. Also, suppose you only need the print_out functionality; the mixin approach would naturally invite you to include the entire StdIOMixin, importing into your class methods you don't need. The alternative would be to create three mixin classes StdinMixin, StdoutMixin, StderrMixin, but most of the time you would need all of them; it seems overkill to complicate your inheritance hierarchy so much for a very simple functionality. As you may know, I am always looking for solutions avoiding (multiple) inheritance, and generic functions fit the bill perfectly.

I am sure most people do not know about it, but Python 2.5 ships with an implementation of generic functions in the standard library, in the pkgutil module (by P.J. Eby). Currently, the implementation is only used internally in pkgutil and it is completely undocumented; therefore I never had the courage to use it in production, but it works well. Even though it is simple, it is able to cover most practical uses of generic functions. For instance, in our case we need three generic functions:

from pkgutil import simplegeneric

@simplegeneric
def print_out(self, text, *args):
    if args:
        text = text % args
    print >> self.stdout, text

@simplegeneric
def print_err(self, text, *args):
    if args:
        text = text % args
    print >> self.stderr, text

@simplegeneric
def readln_in(self):
    "Read a line from self.stdin (without trailing newline)"
    line = self.stdin.readline()
    if line:
        return line[:-1] # strip trailing newline

The power of generic functions is that you don't need to use inheritance: print_out will work on any object with a .stdout attribute even if it does not derive from StdIOMixin. For instance, if you define the class

class FileOut(object):
    def __init__(self):
        self.stdout = file('out.txt', 'w')

the following will print a message on the file out.txt:

>>> print_out(FileOut(), 'writing on file') # prints a line on out.txt

Simple, isn't it? One advantage of methods with respect to ordinary functions is that they can be overridden in subclasses; however, generic functions can be overridden too - this is why they are also called multimethods. For instance, you could define a class AddTimeStamp and override print_out to add a time stamp when applied to instances of AddTimeStamp. Here is how you would do it:

class AddTimeStamp(object):
    stdout = sys.stdout

@print_out.register(AddTimeStamp) # add an implementation to print_out
def impl(self, text, *args):
    "Implementation of print_out for AddTimeStamp instances"
    if args:
        text = text % args
    print >> self.stdout, datetime.datetime.now().isoformat(), text

and here is an example of use:

>>> print_out(AddTimeStamp(), 'writing on stdout')
2008-09-02T07:28:46.863932 writing on stdout

The syntax @print_out.register(AddTimeStamp) is not the most beautiful in the world, but its purpose should be clear: we are registering the implementation of print_out to be used for instances of AddTimeStamp. When print_out is invoked on an instance of AddTimeStamp, a time stamp is printed; otherwise, the default implementation is used.
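To tie this back to the Twisted use case mentioned at the beginning, here is a minimal sketch of how the same print_out generic could cover an object that writes to a transport rather than to a file-like stream. The TransportOut and FakeTransport classes are hypothetical, invented just for this illustration (a real Twisted transport would be handed to you by the framework, not instantiated by hand); the sketch assumes the sys/os imports and the simplegeneric-based print_out defined above:

class FakeTransport(object):
    "Stand-in for a Twisted transport: anything with a .write method"
    def write(self, data):
        sys.stdout.write('[transport] ' + data)

class TransportOut(object):
    def __init__(self, transport):
        self.transport = transport

@print_out.register(TransportOut) # dispatch on the class of the first argument
def impl(self, text, *args):
    "Implementation of print_out for objects exposing a transport"
    if args:
        text = text % args
    self.transport.write(text + os.linesep)

>>> print_out(TransportOut(FakeTransport()), 'writing on a %s', 'transport')
[transport] writing on a transport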
Notice that since the implementation of simplegeneric is simple, the internal registry of implementations is not exposed and there is no introspection API; moreover, simplegeneric works for single dispatch only and there is no explicit support for multimethod cooperation (i.e. call-next-method, for the ones familiar with Common Lisp). Still, you cannot expect too much from thirty lines of code ;)

In this example I have named the AddTimeStamp implementation of print_out impl, but you could have used any valid Python identifier, including print_out_AddTimeStamp or _, if you felt like it. Since the name print_out is explicit in the decorator, and since in practice you do not need to access the explicit implementation directly, I have settled for a generic name like impl. There is no standard convention, since nobody uses generic functions in Python (yet).

There were plans to add generic functions to Python 3.0, but the proposal has been shifted to Python 3.1, with a syntax yet to be defined. Nevertheless, for people who don't want to wait, pkgutil.simplegeneric is already there and you can start experimenting with generic functions right now. Have fun!
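An aside for readers on newer Python versions (this goes beyond what was available when the post was written): the single-dispatch flavor of generic functions did eventually land in the standard library as functools.singledispatch. A minimal sketch of the same print_out in that style, assuming Python 3.4+ and only that objects still carry a stdout attribute:

import sys
import datetime
from functools import singledispatch

@singledispatch
def print_out(obj, text, *args):
    "Default implementation: write a line to obj.stdout"
    if args:
        text = text % args
    obj.stdout.write(text + '\n')

class AddTimeStamp(object):
    stdout = sys.stdout

@print_out.register(AddTimeStamp)
def _(obj, text, *args):
    "Prefix the line with an ISO timestamp for AddTimeStamp instances"
    if args:
        text = text % args
    obj.stdout.write(datetime.datetime.now().isoformat() + ' ' + text + '\n')

print_out(AddTimeStamp(), 'writing on stdout')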
https://www.artima.com/forums/flat.jsp?forum=106&thread=237764
CC-MAIN-2017-51
en
refinedweb
/* High precision, low overhead timing functions.  x86-64 version.
   Copyright (C) 2002, 2004, 2005 ...  */

#ifndef _HP_TIMING_H
#define _HP_TIMING_H 1

/* We can use some of the i686 implementation without changes. */
# include <sysdeps/i386/i686/hp-timing.h>

/* The "=A" constraint used in 32-bit mode does not work in 64-bit mode. */
# undef HP_TIMING_NOW
# define HP_TIMING_NOW(Var) \
  ({ unsigned int _hi, _lo; \
     asm volatile ("rdtsc" : "=a" (_lo), "=d" (_hi)); \
     (Var) = ((unsigned long long int) _hi << 32) | _lo; })

/* The funny business for 32-bit mode is not required here. */
# undef HP_TIMING_ACCUM
# define HP_TIMING_ACCUM(Sum, Diff) \
  do { \
    hp_timing_t __diff = (Diff) - GLRO(dl_hp_timing_overhead); \
    __asm__ __volatile__ ("lock; addq %1, %0" \
                          : "=m" (Sum) : "r" (__diff), "m" (Sum)); \
  } while (0)

#endif /* hp-timing.h */
https://sourcecodebrowser.com/glibc/2.9/x86__64_2hp-timing_8h_source.html
CC-MAIN-2017-51
en
refinedweb
Facelets fits JSF like a glove Finally, a view technology made just for JSF! While. . Overview of Facelets Related topics). From Tiles to Facelets As I mentioned, the example Web application I use here is based on one I created for my JSF for nonbeliever. Installing Facelets. Adding init parameters This step assumes you already have a working JSF application (such as the online CD store example) installed and you are editing an existing web.xml page by adding the following parameter: <context-param> <param-name>javax.faces.DEFAULT_SUFFIX</param-name> <param-value>.xhtml</param-value> </context-param> This tells JSF to assume a prefix of xhtml, which the Faceletâs renderer can interpret. Facelets has many parameters, so see Related topics: <application> <locale-config> <default-locale>en</default-locale> </locale-config> <view-handler>com.sun.facelets.FaceletViewHandler</view-handler> </application> Templating with Facelets: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> </html> No further detail required! Step 2. Define the Facelets' namespace To use Facelets tags for templating, you need to import them using the XML namespace as follows: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: ... Notice the definition of the ui namespace. Step 3. Use ui:insert tag to define logical areas of the page Next, you define logical areas of your layout like title, header, navigation, content, and more. Here's an example of how you could define a title: <: <title>#{title}</title> Step 4. Use plain text and ui:includes to define defaults You can pass more than plain text as default. For example, study the following code fragment from layout.xhtml: <div id="header"> <ui:insert <ui:include </ui:insert> </div>> <body> <div id="header"> <ui:insert <ui:include </ui:insert> </div> <div id="left"> <ui:insert <ui:include </ui:insert> </div> <div id="center"> <br /> <span class="titleText"> <ui:insert </span> <hr /> <ui:insert <div> <ui:include </div> </ui:insert> </div> <div id="right"> <ui:insert <ui:include </ui:insert> </div> <div id="footer"> <ui:insert <ui:include </ui:insert> </div> </body> </html> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: <ui:composition <ui:defineCD form</ui:define> <ui:define <!-- use the form tag to set up this form --> <h:form ... ... ... </h:form> </ui:define> </ui:composition> </html>. Location is everything When the page invokes the layout template, it just needs to specify the location of the template, as shown here: <ui:composition This tag invokes the template shown in Listing 1, so all I need to do is pass the parameters to the template. Then, inside the composition flag, I can pass simple text like the title: <ui:defineCD form</ui:define> or an entire component tree: <ui:define <!-- use the form tag to setup this form --> <h:form ... ... ... </h:form> </ui:define> Notice that of the many logical areas I could have defined and passed, the cdForm.xhtml only passes two: content and title. Composition components <h:dataTable <!-- Title --> > <!-- Artist --> > <h:dataTable <a:column <a:column <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: THIS TEXT WILL BE REMOVED <ui:composition> <!-- The label attribute is optional. Generate it if it is missing. --> <c:if <c:set </c:if> <!-- The sort attribute is optional. Set it to true if it is missing. 
--> <c:if <c:set </c:if> <h:column> <f:facet <h:panelGroup> ${label} <c:if [ <h:commandLink <h:outputText <f:param <f:param </h:commandLink> , <!-- Sort descending --> <h:commandLink <h:outputText <f:param <f:param </h:commandLink> ] </c:if> </h:panelGroup> </f:facet> <!-- Display the field name --> <h:outputText </h:column> </ui:composition> THIS TEXT WILL BE REMOVED AS WELL </html> The fine points Before I get into more advanced examples, I want to draw your attention to a few things. First, notice how I referenced the value binding in a generic way in Listing 5: <h:outputText Second, when I invoke this composition component, I'll pass the entity and fieldName as attributes, as shown here: <a:column. Creating a component <?xml version="1.0"?> <!DOCTYPE facelet-taglib PUBLIC "-//Sun Microsystems, Inc.//DTD Facelet Taglib 1.0//EN" "facelet-taglib_1_0.dtd"> <facelet-taglib> <namespace></namespace> <tag> <tag-name>field</tag-name> <source>field.xhtml</source> </tag> <tag> <tag-name>column</tag-name> <source>column.xhtml</source> </tag> <tag> <tag-name>columnCommand</tag-name> <source>columnCommand.xhtml</source> </tag> </facelet-taglib>: <context-param> <param-name>facelets.LIBRARIES</param-name> <param-value> /WEB-INF/facelets/tags/arcmind.taglib.xml </param-value> </context-param>: <html xmlns="" xmlns: ... ... <a:column <a:column <a:column <a:columnCommand Notice the namespace defined as follows: xmlns:a="" The namespace value is the same as the namespace element I declared in the tag library back in Step 1. Advanced tricks and tips <h:form <h:inputHidden <h:panelGrid <!-- Title --> <h:outputLabel <h:inputText <h:message <!-- Artist --> <h:outputLabel <h:inputText <h:message <!-- Price --> <h:outputLabel <h:inputText <f:validateDoubleRange </h:inputText> <h:message? Passing subelements <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: THIS TEXT WILL BE REMOVED <ui:composition> <!-- The label is optional. Generate it if it is missing. --> <c:if <c:set </c:if> <!-- The required attribute is optional, initialize it to true if not found. --> <c:if <c:set </c:if> <h:outputLabel <h:inputText <ui:insert /> </h:inputText> <!-- Display any error message that are found --> <h:message </ui:composition> THIS TEXT WILL BE REMOVED AS WELL </html> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: ... <h:form <!-- Title, Artist, Price --> <a:field <a:field <a:field <f:validateDoubleRange </a:field> ... The field tag for price gets passed the validator as an anonymous insert. Because the other fields don't define a body, the anonymous insert introduces nothing as the default. Passing actions): <h:commandLink Study the value of the action attribute. Notice that I've accessed the method in the same way that I earlier referenced a field from the entity. I can invoke this component using the following syntax: <a:columnCommand This call binds the editCD() method from the CDManagerBean to the link when invoked. Listing 10 shows the complete listing for columnCommand.xhtml: Listing 10. columnCommand.xhtml <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: THIS TEXT WILL BE REMOVED <ui:composition> <!-- The label is optional. Generate it if it is missing. --> <c:if <c:set </c:if> <h:column> <f:facet <h:panelGroup> <h:outputText </h:panelGroup> </f:facet> <h:commandLink </h:column> </ui:composition> THIS TEXT WILL BE REMOVED AS WELL </html> Downsides of Facelets Related topics),. In conclusion. 
Acknowledgment

Special thanks to Jacob Hookom, creator of Facelets, for his review and input on this article, and props to Athenz for his careful and insightful editing.

Downloadable resources
- PDF of this content
- Facelets source code (j-facelets_code.zip | 267KB)
- Facelets source code with jars and wars (j-faceletsjarsandwars.zip | 47MB)

Related topics
- "Inside Facelets Part 1: An Introduction" (Jacob Hookom, JSF Central, August 2005): Thoughts on Facelets from its creator.
- Another Sleepless Night in Tucson: Author Rick Hightower's blog, where he thinks aloud about JSF (not to mention wine and cooking) till the wee hours of the morning.
- Got JSR 252?: Download JavaServer Faces.
- XULFaces: One of the alternatives to XHTML markup in Facelets.
https://www.ibm.com/developerworks/java/library/j-facelets/
CC-MAIN-2017-51
en
refinedweb
Subject: Re: [Unbound-users] named.cache & .conf setup best practices
Date: Tue, May 28, 2013 at 10:45:01PM +1000
Quoting shmick at riseup.net (shmick at riseup.net):

> will i be able to resolve gTLD's such as .satan (which cesidian can)
> .africa (which namespace can) or .geek (which opennic can) ?

Why would you want to? When nobody else can or even wants to?

--
Måns Nilsson primary/secondary/besserwisser/machina
MN-1334-RIPE +46 705 989668

This MUST be a good party -- My RIB CAGE is being painfully pressed up against someone's MARTINI!!
https://www.unbound.net/pipermail/unbound-users/2013-May/002924.html
CC-MAIN-2017-51
en
refinedweb
C# LINQ for Scala heads

This is a memo of C# LINQ features for Scala programmers. Or vice versa.

Type inference

C# has type inference. I try to use var when I can for local variables.

var x = 1;

Scala also has var, but the preferred way is to use immutable val if possible.

val x = 1

Creating a new List or an Array

C# can create collections in-line.

using System.Collections.Generic;

var list = new List<string> { "Adam", "Alice", "Bob", "Charlie" };
var array = new [] { 0, 1, 2 };

All collections in Scala come with a factory method.

val list = List("Adam", "Alice", "Bob", "Charlie")
val array = Array(0, 1, 2)

Filtering using lambda expression

C# has "enrich-my-library" monkey patching that adds a Where method to a normal Array.

using System;
using System.Collections.Generic;
using System.Linq;

var xs = array.Where(x => x >= 1);

There are several ways to write this in Scala.

array.filter(x => x >= 1)
array filter { _ >= 1 }
array filter { 1 <= }

Projection

Projection in C# is done by Select and SelectMany.

var xs = array.Select(x => x + 1);
var yx = array.SelectMany(x => new [] { x, 3 });

These correspond to map and flatMap.

array map { _ + 1 }
array flatMap { Array(_, 3) }

Sorting

C# can sort things using OrderBy.

var xs = list.OrderBy(x => x.Length);

I can't remember the last time I had to sort something in Scala, but you can do that using sortBy.

list sortBy { _.length }

Filtering using query expression

Now comes the query expression.

var results = from x in array
              where x >= 1
              select x;

The closest thing Scala has is probably the for-comprehension.

for (x <- array if x >= 1) yield x

You can write something similar in C#, but unlike Scala, foreach does not return a value, so the whole thing needs to be wrapped in a method.

static IEnumerable<int> Foo(int[] array) {
    foreach (var x in array)
        if (x >= 1) yield return x;
}

Projection using query expression

Let's try projection to an anonymous type in C#.

var results = from x in array
              select new { Foo = x + 1 };

Scala using for-comprehension.

for (x <- array) yield new { def foo = x + 1 }

Sorting by intermediate values

Here's how to sort by intermediate values.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

var results = from x in list
              let cs = new Regex(@"[aeiou]").Replace(x.ToLower(), "")
              orderby cs.Length
              select x;

Scala's for-comprehension does not support sorting, but you can always sort things afterwards.

list sortBy { x =>
  val cs = """[aeiou]""".r.replaceAllIn(x.toLowerCase, "")
  cs.length
}

Cross join

The SQL-ness comes in handy when you join with C#.

var results = from x in list
              from c in x.ToCharArray()
              where c != 'a' && c != 'e'
              select c;

Using Scala for-comprehension.

for {
  x <- list
  c <- x.toCharArray
  if c != 'a' && c != 'e'
} yield c

Inner join

Inner joining using C#.

var results = from name in list
              join n in array on name.Length equals n + 3
              select new { name, n };

Using Scala for-comprehension.

for {
  name <- list
  n <- array
  if name.length == n + 3
} yield (name, n)

Grouping

Grouping using C#.

var results = from x in list
              group x by x[0] into g
              where g.Count() > 1
              select g;

Not for-comprehension, but still doable in Scala.

list groupBy { _(0) } filter { case (k, vs) => vs.size > 1 }

Quantifiers

Quantifiers work more or less the same way.

var hasThree = list.Any(x => x.Length == 3);
var allThree = list.All(x => x.Length == 3);

In Scala:

val hasThree = list exists { _.length == 3 }
val allThree = list forall { _.length == 3 }

Pattern matching

One unique aspect of Scala is that it accepts a partial function where a lambda expression is expected.

array map {
  case 1 => "foo"
  case n if n % 2 == 0 => n.toString + "!"
}

You probably have to throw an exception to mimic this in C#.

Notes

For Scala I prefer normal calls to filter and map over for-comprehension. Infix operator syntax and placeholder syntax make array filter { _ >= 1 } concise enough that for-comprehension ends up becoming bulkier unless they are nested. On the other hand in C#, query expression syntax gets rid of some of the symbols ( ., (), =>) from fluent syntax. Rahul (@missingfaktor) wrote a nice list of Enumerable methods and their equivalent ones in Scala, which covers everything I couldn't here.
http://eed3si9n.com/node/58
CC-MAIN-2017-51
en
refinedweb
JetBrains News DataGrip 2017.3 is Here! DataGrip, our IDE for SQL and databases, has reached a new version. New features and improvements:Database tree view — Ability to group data sources — More convenient managing schemas — Users and roles are now displayed in PostgreSQL and AWS Redshift — Foreign data wrappers are now displayed in PostgreSQL — Drag-and-drop multiple objects to the editor SQL coding — SQL generator — Better JOIN statement completion — PostgreSQL 10 grammar support Executing queries — Ability to choose a schema when running an SQL file — The list of data sources/consoles is available when attaching the console to a file — Three independent Execute actions — Foreign data wrappers are now displayed in PostgreSQL — Set Current Schema action Connectivity — OpenSSH config files are supported (~/.ssh/config и /etc/ssh/ssh_config) — Dialog added for One Time Password — Ability to use SSH-agent and Pageant for authentication — Exasol support Data editor — Paste data in a DSV format (i.e. from Excel) — Numerical data are now right-aligned by default — Tabs are restored after re-opening DataGrip — Cells can be compared Navigation — Navigate references to DDL editor options — Jump to Console in the context menu of the data source — Replace the selected occurrences in the Replace In Path dialog — Scratches and Consoles are now placed in Files Your DataGrip Team CLion 2017.3 released with C++ support improvements, Valgrind Memcheck, Boost.Test and much - And more it installed on your machine, enable the Registry option clion.enable.msvc__ macr - Generate Definitions (Ctrl+Shift+D on Linux/Windows, Shift-Command-D on macOS) for function templates - New intention action to invert the condition in an if clause and flip its if-else blocks and dst pointers in memcpy and related functions - Incorrect size value passed to the memory allocation functions - Memory leaks Learn how to configure and use ValgrindMem) / Control-Option-R (macOS) for Run and with Shift+Alt+F9 (Linux/Windows) / Control-Option-D . Your CLion Team IntelliJ IDEA 2017.3: Coding Assistance, Debugger, Run Dashboard, Frameworks and More Exciting news: A new massive update for IntelliJ IDEA is here! Please welcome IntelliJ IDEA 2017.3! It includes loads of new features and dozens of important bugfixes. Try it now and see for yourself. Read this summary about the highlights of this release. - Java - Smart code completion is now aware of type casts. - Many new and improved inspections: inspection for Redundant throws declarations, quick-fix for deprecated code, inspection for possible nullability in Stream API call chains, and more. - Improved JUnit5 support. Learn more - Configurable command line shortener: a new handy way to specify a method used to shorten the command line for each configuration. Learn more - Run Dashboard: Add different run configurations types - JVM debugger - A new feature called On-demand Data Renderers helps reduce overhead. To enable it for any renderer, choose Mute Renderers from the context menu. - Async Stacktraces now causes very low overhead and works out of the box. - The Java Stream Debugger plugin is now bundled. - Java EE 8 - For Asynchronous CDI Events, you can now navigate between where an event was fired and where it was received. - Navigate between Injection point and Injected Beans using gutter icons for dynamic beans (CDI extensions). - Navigate from disposer methods to their producers. 
Learn more - Spring and Spring Boot - The Spring Beans Dependencies diagram has a new Neighborhood Mode. For better readability, you can switch to Borderless View. - There’s now an auto-detection facet for Spring Boot MVC web applications and Spring. Learn more - Brand new editor-based REST client - Kotlin: a bundled Kotlin plugin has been updated to v1.2, and support for the experimental Kotlin multiplatform projects is now available. - IntelliJ IDEA 2017.3 provides better synchronization of your settings across different installations. Learn more - Local variable type inference is supported. Learn more For more detailed information about the shiny new features, check out the What’s New page. You can download the new IntelliJ IDEA 2017.3 right now! Your feedback, as always, is very much appreciated in our issue tracker. RubyMine2017.3 RubyMine 2017.3 Released! RubyMine 2017.3, the biggest and final release of this year, is now available! What's new in RubyMine 2017.3: - IDE: improved performance, support for apps with nested projects, and better code resolution and code insight. More - Linux subsystem for Windows (WSL) support. More - Refactoring: Extract methods directly to privateand protectedsections. More - RuboCop: autocorrect by offense class or cop department. More - Code style: the ability to indent privateand protectedmethods, and choose which operators should be wrapped with space. More - Puppet: Embedded Puppet (EPP) templates support. More - Debugger: the new Trace to_s evaluation option detects costly operations and throws a timeout message. More - JavaScript: better code completion, documentation, CSS, and Vue.js. More - VCS: the Interactively Rebase from Here action, workspaces for branches. More - Database tools: managing schema, SQL generator, grouping data sources. More Other improvements include support for Ruby 2.5, Gems.rb, Docker Compose v3, Cucumber Expressions, and more. Check out the What's new page, and update to RubyMine 2017.3! PyCharm 2017.3 Out Now We're happy to announce that PyCharm 2017.3 is now available! PyCharm 2017.3 is faster, more usable, and better for data science. Upgrade Now - It's Faster. Indexing got faster for both Python and JavaScript code. Faster variable loading during debugging. Debugging is now fast by default on Windows and macOS. - Scientific Mode. The scientific mode puts all the tools you need for analyzing data at your fingertips. - Easier Setup For Virtualenvs. With PyCharm 2017.3 it is easy to set up virtualenvs when creating a project, and when configuring existing projects. - New REST Client. If you develop an API, you often need to construct a request to test your software. PyCharm 2017.3 has an all-new REST client. PhpStorm 2017.3 Out Now Today we are proud to announce the release of PhpStorm 2017.3, the last major update for PhpStorm in 2017. - Brand new editor-based REST client. With the new REST client, all the powers of the PhpStorm code editor are now available for your REST requests. - Significant performance improvements. Typing latency in very complex PHP files has decreased significantly. We've examined typing latency in the mPDF main file, which is a 38k-line-long mix of PHP, JS, and HTML, and it is down by 75% in PhpStorm 2017.3! - New inspections for exception handling. Three new inspection Unhandled exception, Redundant @throws and Redundant catch clause with corresponding Quick Fixes will help you take exceptions under your control! - Test generation improvements. 
Now you can create Codespec and PhpSpec classes and create test methods! - Improved Twig support. We've implemented language injections for Twig custom tags and named blocks as well as improved Twig formatting that can now handle complex structures. Announcing WebStorm 2017.3 Today we’re announcing WebStorm 2017.3! This big update brings improvements to all parts of the IDE, from support for JavaScript, TypeScript, and the frameworks to debugging and testing. Explore the new features and download WebStorm 2017.3 on our website. YouTrack 2017.4 Please welcome YouTrack 2017.4 featuring Japanese localization, estimation report type, date and time custom fields, and other improvements. The latest release also brings: - Sort by Relevance in Search Results - Date and Time Custom Fields - npm Package Support for Workflows in JavaScript Other enhancements: - Text Indexing for Issue Fields - Extended Text Index Support - Import from Jira Option for New Projects - Redefined Project Teams - Shared Mailbox Support for Microsoft Exchange - Markdown Support as an experimental feature For more details visit the What's new page. Get YouTrack 2017.4 today and enjoy its wide range of issue tracking and project management capabilities. The latest version is available for download or cloud registration.! This update brings a better user experience to both learners and educators, making the product’s use as simple as possible, whether it is used for learning, or for teaching. First of all, we’ve changed the welcoming UI. Now you begin by choosing your role, Learner or Educator. Depending on your choice, you get access to the courses you can join as a learner and can practice with the help of simple and effective “fill in the missing code” exercises. Or, you can create your own code practice tasks and integrated tests as an educator.VCF7 skips the frames without sources. We hope this makes the feature easier to use and more intuitive., shift cmd A line numbers, - Debugger: filtering arrays, collections, and maps - Spring Boot run dashboard and actuator endpoints - Managing multiple applications is now easier, thanks to the new Run Dashboard tool window - Both the Run and Run Dashboard tool windows now provide temporarily.!. This means that only constexpr is actually missing from C++14. As for C++17, we've started with the most upvoted feature, nested namespaces. ,. Check more details. testing.. - CLion supports Microsoft Visual C++ compiler that ships with VS 2013, 2015 and 2017. - There's no support for msbuild. CLion works through CMake and the NMake.
http://www.jetbrains.com/allnews.jsp?year=2012
CC-MAIN-2017-51
en
refinedweb
Opened 2 years ago Last modified 23 months ago
#25534 new New feature
Allow using datetime lookups in QuerySets aggregate calls

Description (last modified by)

I've been scouring the web for an answer to this but the closest I can find is #25339, which is almost what I need but not quite, so I think I can safely conclude that it is just not possible with the currently available functionality and thus I'm opening this ticket to suggest adding it.

I would like to be able to use datetime lookups in an aggregate() call on a QuerySet. My specific use case is this: I have a set of electricity consumption readings, each with a datetime field (and a few others). I need to sum the consumption and cost values grouped by month, day, year, week, etc. In other words, I need to be able to get the total energy consumption value and corresponding cost for each month, day, year, week, etc.

This is my ElectricityReading model and its parent Reading model (separated because we also have consumption readings for water and gas, which also derive from Reading):

from model_utils.models import TimeStampedModel
# Other imports here...

class Reading(TimeStampedModel):
    device = models.ForeignKey(Device)
    # Terrible property name, I know :)
    datetime = models.DateTimeField()
    manual = models.BooleanField(default=False)
    inserted_by = models.ForeignKey(User)

    class Meta:
        abstract = True

class ElectricityReading(Reading):
    vph1 = models.DecimalField(max_digits=18, decimal_places=3, null=True)
    vph2 = models.DecimalField(max_digits=18, decimal_places=3, null=True)
    vph3 = models.DecimalField(max_digits=18, decimal_places=3, null=True)
    wh_imp = models.DecimalField(max_digits=18, decimal_places=3)
    varh = models.DecimalField(max_digits=18, decimal_places=3, null=True)
    pf = models.DecimalField(max_digits=18, decimal_places=3, null=True)
    price = models.ForeignKey(ElectricityPrice)
    consumption = models.DecimalField(max_digits=18, decimal_places=3, null=True, blank=True, default=None)
    cost = models.DecimalField(max_digits=18, decimal_places=3, null=True, blank=True, default=None)

I think the code I need is something along the lines of the following:

result = ElectricityReading.objects\
    .filter(device__grid__building_id=1) \
    .annotate(num_readings=Count('id'))\
    .annotate(total_consumption=Sum('consumption'))\
    .annotate(total_cost=Sum('cost'))\
    .aggregate(total=Count('datetime__month'))

Right now I'm doing this with this raw SQL:

SELECT
    (EXTRACT(YEAR FROM datetime)) AS reading_date_year,
    (EXTRACT(MONTH FROM datetime)) AS reading_date_month,
    (EXTRACT(DAY FROM datetime)) AS reading_date_day,
    (EXTRACT(HOUR FROM datetime)) AS reading_date_hour,
    SUM(consumption) as total,
    COUNT(id) as num_readings,
    SUM(cost) as total_cost,
    price_id
FROM electricity_reading
WHERE device_id IN (1, 2, 3)
    AND datetime >= '2015-10-01'
    AND datetime <= '2015-10-10'
GROUP BY reading_date_year, reading_date_month, reading_date_day, reading_date_hour, price_id

The part I can't seem to replicate with Django's ORM is the GROUP BY clause at the end, which is what I was expecting to be able to achieve using the aggregate(total=Count('datetime__month')) but instead I get the following error:

FieldError: Cannot resolve keyword 'datetime' into field. Choices are: consumption, cost, created, datetime, device, device_id, id, inserted_by, inserted_by_id, manual, modified, pf, price, price_id, varh, vph1, vph2, vph3, wh_imp, num_readings, total_consumption, total_cost

I would love that someone would tell me I am missing something, and if that's the case, please do!
:) Otherwise, I believe it would be beneficial to add this. Change History (8) comment:1 Changed 2 years ago by comment:2 Changed 2 years ago by comment:3 Changed 2 years ago by comment:4 Changed 2 years ago by Sorry, I should have included the model. I'll edit the original post and add it. As for the possible duplicate, I did see that post but it didn't look like the same thing (correct me if I'm wrong). comment:5 Changed 2 years ago by comment:6 Changed 2 years ago by It's not a duplicate, no, but they are related. #10302 wants transform/lookup support in values() whereas this ticket is asking for support in aggregates/expressions. 1.9 converts transforms into func expressions, so we'll be able to do something like (simplifying the model here..): from django.db.models.lookups import MonthTransform as Month result = ElectricityReading.objects.aggregate(total=Count(Month('datetime'))) Which isn't quite as nice as Count('datetime__month'). It should be possible to convert the latter into the former internally though. I would imagine this would be handled internally within F(). Detect if we're trying to access a transform, extract the transform, wrap the original field, and continue as normal. This example (datetime part extraction) is probably the canonical usecase for transform support in aggregates. If transforms can be supported with underscore syntax within F() objects, then that should solve #10302 as well. There are probably a few more tickets that could be closed with this implementation. It would be helpful if you could include the simplest set of models so we can try the query.
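For reference, and as an addition relative to the ticket text above: on Django versions that ship django.db.models.functions (1.10 and later), the GROUP BY from the raw SQL can be approximated without transform support in aggregates, by annotating with a truncation function and calling values() before the summing annotations. A minimal sketch against the ElectricityReading model above (not the 1.9-era API discussed in the comments):

from django.db.models import Count, Sum
from django.db.models.functions import TruncMonth

result = (ElectricityReading.objects
    .filter(device__grid__building_id=1)
    .annotate(month=TruncMonth('datetime'))  # truncate each reading's datetime to its month
    .values('month')                         # values() before annotate() turns this into a GROUP BY
    .annotate(num_readings=Count('id'),
              total_consumption=Sum('consumption'),
              total_cost=Sum('cost'))
    .order_by('month'))

The same pattern works with TruncDay, TruncYear, and so on, for the other groupings mentioned in the description.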
https://code.djangoproject.com/ticket/25534
CC-MAIN-2017-51
en
refinedweb
QJSEngine add custom object

Hi, I am trying to use a script to plot a graph in my program. I am using QCustomPlot and it works fine if used in my C++ code. However, if I pass the QCustomPlot to the QJSEngine like this:

QJSEngine myEngine;
QCustomPlot *graph = new QCustomPlot;
QJSValue scriptgraph = myEngine.newQObject(graph);
myEngine.globalObject().setProperty("graph",scriptgraph);

And my script looking like this:

graph.addGraph();
graph.xAxis.setLabel("TESTX");
graph.yAxis.setLabel("TESTY");
graph.replot();

The QJSEngine will throw the following error:

result: 1 : "ReferenceError: addGraph is not defined"

Which makes me think that the object is passed but not the classes it has. Am I missing something here? Thanks!

Hi,
You need to show your QCustomPlot class. Did you add the Q_INVOKABLE macro in front of your addGraph() function?
Greetings, t3685

The QCustomPlot class is from the internet. And no, I did not add Q_INVOKABLE to the functions of QCustomPlot. I can try and see if this fixes the error. Thanks!
Edit: tried Q_INVOKABLE QCPGraph *addGraph(QCPAxis *keyAxis=0, QCPAxis *valueAxis=0); without success.

Hi,
What error are you getting? Did you rerun qmake to regenerate the moc_*.cpp files?
Greetings, t3685

I am getting:

result: 1 : "ReferenceError: addGraph is not defined"

with my script looking like:

graph.addGraph();
graph.xAxis.setLabel("TESTX");
graph.yAxis.setLabel("TESTY");
graph.replot();

and wrapping the graph like this:

QCustomPlot *graph = new QCustomPlot;
ui->verticalLayout->addWidget(graph);
QJSValue scriptgraph = myEngine.newQObject(graph);
myEngine.globalObject().setProperty("graph",scriptgraph);

And yes, I cleaned, reran qmake and even rebuilt everything, but without success. Am I running into a limitation of QJSEngine? If so, what would be the wisest thing for me to do? I would really like to access objects like the above; if that is not possible, would Qt Quick with QML be anything for me? If I read this it looks like what I am trying to achieve is relatively simple in QML with Qt Quick. However, I do not have Qt Quick installed at the moment, so I would have to rewrite all my existing code to use the import "script.js" as scriptfunction approach, correct? If not, how would this be done in my Qt Creator project? Thanks! :)

Hi,
How are you invoking your script? As in, how are you calling "script.js"? Are you calling it using "myEngine"? If yes, can you show that code?
Greetings, t3685

I am calling it like this:

QString fileName = "customlogic.qs";
QFile scriptFile(fileName);
if (!scriptFile.open(QIODevice::ReadOnly)) {
}// handle error
QTextStream stream(&scriptFile);
QString contents = stream.readAll();
scriptFile.close();
QJSValue result = myEngine.evaluate(contents, fileName);
if (result.isError()) {
    qDebug() << "result: " << result.property("lineNumber").toInt() << ":" << result.toString();
}

thanks!

Have you tried giving arguments to addGraph()? I don't remember whether default arguments translate well to JavaScript. If that still doesn't work, you can try to add a dummy QObject class with Q_INVOKABLE functions to your engine to see if the problem is in your code or some possible interaction with QCustomPlot.
https://forum.qt.io/topic/59142/qjsengine-add-custom-object
CC-MAIN-2017-51
en
refinedweb
Subclass QDial

Hi, I would like to subclass QDial because of this: here the QDial can jump values (ex: ...). What I want: if QDial.value() = 2, I want the QDial to be able to move only to 1 or 3. So I tried:

#ifndef CUSTOMDIAL_H
#define CUSTOMDIAL_H

#include <QObject>
#include <QDial>
#include <QMouseEvent>

class CustomDial : public QDial
{
    Q_OBJECT
public:
    CustomDial(QWidget * parent = nullptr);

private:
    virtual void mousePressEvent(QMouseEvent *e) override;
};

#endif // CUSTOMDIAL_H

I started to try only by overriding mousePressEvent() and if this works I'll also do it for the other mouse-related functions. In order to do that, I took mousePressEvent from the Qt source:

void QDial::mousePressEvent(QMouseEvent *e)
{
    Q_D(QDial);
    if (d->maximum == d->minimum ||
        (e->button() != Qt::LeftButton) ||
        (e->buttons() ^ e->button())) {
        e->ignore();
        return;
    }
    e->accept();
    setSliderPosition(d->valueFromPoint(e->pos()));
    // ### This isn't quite right,
    // we should be doing a hit test and only setting this if it's
    // the actual dial thingie (similar to what QSlider does), but we have no
    // subControls for QDial.
    setSliderDown(true);
}

and tried to make some adjustments:

void CustomDial::mousePressEvent(QMouseEvent* e)
{
    int oldValue = value();

    // copy the beginning of Qt source:
    Q_D(QDial);
    if (d->maximum == d->minimum ||
        (e->button() != Qt::LeftButton) ||
        (e->buttons() ^ e->button())) {
        e->ignore();
        return;
    }
    e->accept();

    // And now change the setValue if there is a difference like this:
    int newValueFromMouse = d->valueFromPoint(e->pos());
    int diff = newValueFromMouse - oldValue;
    if(diff > 1)
        setSliderPosition(oldValue + 1);
    else if(diff < -1)
        setSliderPosition(oldValue - 1);
    else
        setSliderPosition(newValueFromMouse);
}

In my opinion, the algorithm should work, right? But there is a problem with the variable 'd':

C:\Qt\5.6\mingw49_32\include\QtCore\qglobal.h:1018: erreur : 'QDialPrivate* QDial::d_func()' is private
inline Class##Private* d_func() { return reinterpret_cast<Class##Private *>(qGetPtrHelper(d_ptr)); } \

And I don't understand why?! I mean, I understand the problem, but I don't understand why there is this mistake if I just used Qt source code...

- mrjj Qt Champions 2016
Hi, you are trying to call private stuff.

- mrjj Qt Champions 2016
QDial: well, I never tried to use privates as they are private for a reason, but I think you need to include the private\xxx_p.h file, but I don't see any for QDial. Sorry, I'm not sure how to allow this.
How about:
- In a slot connected to sliderPressed, save the old value
- Set tracking
- In a slot connected to the valueChanged signal, have something like:

//valueChanged(int value)
if( value > (old_value+1) )
    dial->setValue(old_value + 1);
else if( value < (old_value-1) )
    dial->setValue(old_value - 1);

Two remarks:
- not sure if I understood what you want to achieve correctly
- the above code is not tested, just a sketch
https://forum.qt.io/topic/67712/subclass-qdial
CC-MAIN-2017-51
en
refinedweb
Exercises - Chapter 9 - Harry Hicks - 2 years ago - Views: Transcription 1 Exercises - Chapter 9 Complete the following exercises. Assume that all these exercises use the perpetual inventory system. Exercise #1 1. hip top Shirt retailers bought $15,000 worth of shirts from Super Shirt Wholesalers Ltd. on March 15th. payment was due in April. prepare the journal entry at the time of purchase. Mar 15 Inventory 15,000 Accounts Payable 15,000 Record the purchasing of inventory 2. referring to the purchase made in #1 above, prepare the journal entry for hip top Shirt retailers for the payment of $15,000 made to Super Shirt Wholesalers on April 15th. Apr 15 Accounts Payable 15,000 Cash 15,000 Record the payment to Super Shirt Wholesalers 3. JB Supermarkets bought $3,000 worth of groceries on account from a produce supplier on May 10th. On May 11th, JB s bookkeeper was informed that $200 worth of tomatoes were substandard and returned to the supplier. prepare the journal entry to record the purchase return. May 11 Accounts Payable 200 Inventory 200 Record the purchase return 325 2 4. On January 12th, Corner-Mart received a shipment of t-shirts from promo Novelties for an event. the invoice amounted to $5,000 and was recorded in the accounting system. Soon after the delivery was made, the marketing manager discovered that the logo was printed incorrectly. the goods were returned to promo Novelties on January 31st. prepare the journal entry that would be recorded on January 31st. Jan 31 Accounts Payable 5,000 Inventory 5,000 Record the purchase return 5. Signs Unlimited received a shipment of plastic sheets on April 3rd. the value of the plastic was $8,000, plus $100 of freight charges. prepare the journal entry to record the receipt of goods by Signs Unlimited, assuming the payment would be made in May. Apr 3 Inventory 8,100 Accounts Payable 8,100 Record the purchasing of inventory ($8,000 + $100 = $8,100) 6. referring to question #5 above, several of the plastic sheets delivered to Signs Unlimited were in the wrong colour. After some negotiation, the manager agreed to keep the products with a 10% discount. prepare the entry on April 10th to record the purchase allowance. (Assume all plastic sheets were still in inventory.) Apr 10 Accounts Payable 800 Inventory 800 Allowance for goods with wrong colour ($8,000 x 10% = $800) 7. referring to questions #5 and #6 above, journalize the transaction for Signs Unlimited when the payment is made on May 3rd. May 3 Accounts Payable 7,300 Cash 7,300 Payment for the goods 326 3 8. Sporty pants, a manufacturer of sports uniforms, received a $12,000 shipment of material on July 18th from Woven Fabric Suppliers. the payment for the goods is to be made in August. Upon inspection of the goods, it was found that 10% of the fabric was undesirable. rather than taking the goods back, Woven Fabric Suppliers agreed to allow Sporty pants to deduct 10% from the amount owing. record the journal entry for the purchase allowance (assume the goods have not yet been sold). Accounts Payable 1,200 Inventory 1,200 Allowance for undesirable goods ($12,000 x 10% = $1,200) 9. referring to question #8 above, assume that the fabric had already been used and sold in the form of pants. prepare the journal entry to record the purchase allowance. Accounts Payable 1,200 COGS 1,200 Allowance for undesirable goods ($12,000 x 10% = $1,200) 327 4 Exercise #2 1. the following is written on an invoice relating to goods that were purchased: 5/10, n/30. What does it mean? 
It means a 5% discount would apply if paid within 10 days. The net amount owing is due in 30 days. 2. Shoe retailers purchased $10,000 worth of shoes from runnerwear Supplies on March 1st. Since Shoe retailers has good cash reserves, the accountant took advantage of the early payment discount that runnerwear offers. runnerwear s invoice shows terms of 2/10, n/30. What is the latest date that Shoe retailers could pay the bill to take advantage of the discount? March referring to question #2 above, as the bookkeeper for Shoe retailers. prepare the journal entry to record the purchase on March 1st. Mar 1 Inventory 10,000 Accounts Payable 10,000 Record the purchase 4. referring to question #2 above, journalize the transaction for payment of the invoice, assuming the payment was made on March 5th. Mar 5 Accounts Payable 10,000 Cash 9,800 Inventory 200 Paid invoice owing less discount received 5. referring to question #2 above, journalize the transaction for payment of the invoice, assuming the payment was made on March 30th. Mar 30 Accounts Payable 10,000 Cash 10,000 Record the payment to Runnerwear Supplies 328 5 6. Suppose that on July 15th Local Drug Mart bought goods from Glasgow & Glasgow for $30,000, with terms of 2/10, n/30. a. if Local Drug Mart took advantage of the early payment discount, how much would they be required to pay Glasgow & Glasgow? $29,400 b. When would payment need to be made to qualify for the discount? By July 25th c. how much would Local Drug Mart need to pay Glasgow & Glasgow to settle the bill on August 15th? $30, referring to question #6 above, suppose that $3,000 worth of the merchandise was water damaged during shipping and Local Drug Mart returned the damaged merchandise to Glasgow & Glasgow on July 18th. a. Journalize the return of the goods ($3,000) on July 18th. Jul 18 Accounts Payable 3,000 Inventory 3,000 Record the purchase return b. After returning the damaged products to Glasgow & Glasgow, Local Drug Mart paid the remaining balance on July 20th. how much did Local Drug Mart have to pay? ($30,000 $3,000) x 98% = $26, 6 7 9. On March 20th, Cup-A-Java received a shipment of gift mugs for resale from Cup Makers inc. in the amount of $5,000, plus $200 shipping charges. the terms stated on the invoice from Cup Makers inc. were as follows: 3/15, n/60. Journalize the following scenarios: a. As the bookkeeper for Cup-A-Java, complete the original invoice transaction. Mar 20 Inventory 5,200 Accounts Payable 5,200 Record the purchase of inventory plus freight charges b. if Cup-A-Java decided to take advantage of the early payment cash discount, by when should the payment be made to qualify for the discount? By April 4th c. the payment by Cup-A-Java to Cup Makers inc. was made on March 31st. prepare the journal entry for the payment of goods. May 31 Accounts Payable 5,200 Cash 5,050 Inventory 150 Paid invoice owing less discount received for early payment ($5,000 x 0.03 = $150) d. Journalize the entry if payment was made on May 20th. May 20 Accounts Payable 5,200 Cash 5,200 Record the payment to Cup Makers Inc. e. On March 25th, 20% of the shipment was returned because they were in the wrong colour. Cup Makers inc. agreed to apply the same percentage deduction to the freight charges. the invoice has not been paid. prepare the journal entry to record this transaction Mar 25 Accounts Payable 1,040 Inventory 1,040 Record the purchase return ($5,000 x $200 x 0.02 = $1,040) 331 8 f. 
Continue from 9(e) on the previous page, journalize the entry if Cup-A-Java took advantage of the early payment cash discount when paying for the balance of the cups on March 31st. round off to the nearest dollar. Mar 31 Accounts Payable 4,160 Cash 4,040 Inventory 120 Paid invoice owing less discount ($4,000 x 0.03 = $120) 10. if a computer company bought computers for $10,000 and sold them for $14,000, how much would the gross profit be on the entire shipment if the business took advantage of the early cash payment terms of 2/15, n/30 from their supplier? Gross Profit = $14,000 ($10,000 x 98%) = $4,200 Exercise #3 1. On May 1st, Food Wholesalers purchased $3,000 worth of dried fruit inventory on account and paid $100 for freight charges. On May 15th, Food Wholesalers sold all of the dried fruit inventory to retail Grocers for $4,000 on account. As the bookkeeper for Food Wholesalers, journalize the transactions. May 1 Inventory 3,100 Accounts Payable 3,100 Purchased inventory on account May 15 Accounts Receivable 4,000 Revenue 4,000 Made sales on account May 15 COGS 3,100 Inventory 3,100 Record COGS 2. referring to question #1 above, if operating expenses were $500: a. how much was Food Wholesalers gross profit? $900 b. how much was Food Wholesalers net profit? $ 9 3. Macks is a maker of cotton garments that sells to various retailers. On June 1st, Cory s retailers sent back a shipment of goods that was unsatisfactory. As a gesture of good will, Macks agreed to the return of the goods. the goods were sold on account for $6,000 originally and cost $4,000. Complete the following: a. As Mack s bookkeeper prepare the journal entries to reflect the return. June 1 Sales Returns and Allowances 6,000 Accounts Receivable 6,000 Record sales returns for unsatisfactory products June 1 Inventory 4,000 COGS 4,000 Restock inventory returned b. Journalize the entry if Cory s only returned half of the shipment. June 1 Sales Returns and Allowances 3,000 Accounts Receivable 3,000 Record sales returns for unsatisfactory products Inventory 2,000 COGS 2,000 Restock inventory returned c. What happened to the value of Macks owner s equity when Cory s returned the merchandise? Did it increase, decrease or stay the same? Explain your answer. Owner s equity decreased because sales returns and allowances is a contra-revenue account which decreases revenue (i.e. owner s equity). 333 10 11 c. Suppose that ted s shipped back half the goods on August 15th and kept the other half with 10% discount. Journalize the transactions. Aug 15 Sales Return and Allowances 825 Accounts Receivable 825 Record sales returns from Ted s Retailers Inventory 500 COGS 500 Restock inventory returned Note to the instructor: only transactions took place on August 15 should be journalized in part c. d. Continue from question 4(b). Since all the goods were sold and returned in the same period, what happened to Moira s gross profit? (Disregard the additional shipping and administration costs). Explain your answer. Moira s Gross profit increased by $1,500 when goods were sold and decreased by the same amount when goods were returned. 5. pete s Wholesalers imports and distributes towels. they sell their products to various retailers throughout the country and offer payments terms of 2/10, n/30. On October 1, pete s made a large sale to Ernie s Bathroom retailers in the amount of $15,000, which cost pete s $9,000. pete s uses a perpetual inventory system. Complete the following: a. Journalize the sale that was made on account. 
Oct 1 Accounts Receivable 15,000 Revenue 15,000 Made sales on account COGS 9,000 Inventory 9,000 Record COGS 335 12 b. By what date must Ernie s pay the invoice to qualify for the early cash payment discount? By October 11 c. Assume Ernie s paid the bill on October 5th. record the journal entries. Oct 5 Cash 14,700 Sales Discount 300 Accounts Receivable 15,000 Collected Accounts Receivable less discount allowed d. if Ernie s had returned half the shipment and paid for the balance owing on October 5th, how would the transaction be journalized? Oct 5 Sales Return and Allowance 7,500 Accounts Receivable 7,500 Record sales returns from Ernie s Inventory 4,500 COGS 4,500 Restock inventory returned Cash 7,350 Sales Discount 150 Accounts Receivable 7,500 Collected accounts receivable less discount allowed 336 13 e. Suppose Ernie s found the goods unsatisfactory and agreed to keep the goods with a 10% discount. prepare the journal entry to record the sales allowance and Ernie s payment on October 20th. Oct 20 Sales Return and Allowance 1,500 Accounts Receivable 1,500 Allowance provided to Ernie s for unsatisfactory goods Oct 20 Cash 13,500 Accounts Receivable 13,500 Collected outstanding Accounts Receivable f. referring to scenario 5(a) - Ernie s kept the entire shipment and paid the invoice on October 10th to take advantage of the early payment discount. record the journal entries for the payment to pete s. Oct 10 Cash 14,700 Sales Discount 300 Accounts Receivable 15,000 Collected Accounts Receivable less discount allowed 6. Assume you are the bookkeeper for Joe the printer. the company bought ink cartridges from various suppliers, refilled them and sold them to both retailers and customers. All purchases and sales are made on account. Complete the following for Joe the printer. a. record the purchase on December 15th of $3,000 ink cartridges from inkster Supplies, whose payment terms are 3/10, n/45. Dec 15 Inventory 3,000 Accounts Payable 3,000 Purchased inventory, term 3/10, n/45 337 14 By Dec 25th b. When must Joe s pay the account to qualify for the discount? c. prepare the journal entry to record Joe s payment on December 20th. Dec 20 Accounts Payable 3,000 Inventory 90 Cash 2,910 Paid invoice owing less discount ($3,000 x 0.03 = $90) d. if Joe s printer made the payment on December 31st instead, jouralize the transaction. Dec 31 Accounts Payable 3,000 Cash 3,000 Paid invoice owing e. if Joe s returned 1/3rd of the products and paid the balance, how would both of these transactions be journalized? Assume both transactions occurred on December 20th. Dec 20 Accounts Payable 1,000 Inventory 1,000 Record purchase returns Accounts Payable 2,000 Inventory 60 Cash 1,940 Paid invoice owing less discount ($2,000 x 0.03 = $60) 338 15 f. if on January 5th, Joe s sold all $3,000 worth of inventory for $5,000 to Smith printers on account, how would the transactions be journalized? Jan 5 Accounts Receivable 5,000 Sales 5,000 Made sales on account COGS 3,000 Inventory 3,000 Record COGS g. Continue from the previous question. if Joe s selling terms were 4/7, n/30. prepare the journal entry to record receipt of payment on January 12th. Jan 12 Cash 4,800 Sales Discount 200 Accounts Receivable 5,000 Collected Accounts Receivable less discounts allowed 339 Merchandise Accounts. Chapter 7 - Unit 14 Merchandise Accounts Chapter 7 - Unit 14 Merchandising... Merchandising... There are many types of companies out there Merchandising... There are many types of companies out there Service company - sells. 
http://docplayer.net/10693657-Exercises-chapter-9.html
CC-MAIN-2017-51
en
refinedweb
Recently I had to spend some time trying to adapt my imperative/OO background to a piece of code I needed to write in the functional paradigm. The problem is quite simple and can be described briefly: you have a collection of pairs containing an id and a quantity. When a new pair arrives you have to add the quantity to the pair with the same id in the collection, if it exists; otherwise you have to add the pair to the collection.

Functional programming is based on three principles (as seen from an OO programmer) – variables are never modified once assigned, no side-effects (at least, avoid them as much as possible), no loops – work on collections with collective functions. Well, maybe I missed something like monad composition, but that's enough for this post.

Thanks to a coworker I wrote my Scala code that follows all the aforementioned principles and is quite elegant as well. It relies on the "partition" function, which transforms (in a functional fashion) a collection into two collections containing the elements of the first one partitioned according to a given criterion. The criterion is equality of the id, so that I find the element with the same id if it exists, or just an empty collection if it doesn't. Here's the code:

Yes, I could have written it more concisely, but that would have been too much write-only for me to be comfortable with.

Once the pleasant feeling of elegance wore off a bit, I wondered what the cost of this approach is. Each time you invoke merge the collection is rebuilt and, unless the compiler's optimizer is very clever, each list item is also cloned and the old one goes to garbage collection. Partitioning scans and rebuilds, but since I'm using an immutable collection, adding an item to an existing list also causes a new list to be generated.

Performance matters in some 20% of your code, so it could be acceptable to sacrifice performance in order to get a higher abstraction level and thus a higher coding speed. But then I wondered: what about premature pessimization? Premature pessimization, at least in the context where I read the term, means the widespread adoption of idioms that lead to worse performance (the case in question was the C++ use of the pre- or post-increment operator). Premature pessimization may cause the application to run generally slower and makes it more difficult to spot and optimize the cause.

This triggered the question – how does a language's idiomatic approach impact performance? To answer the question I started coding the same problem in different languages.

I started from my language of choice – C++. In this language it is likely you would approach a similar problem by using std::vector. This is the preferred and recommended collection. The source is like this:

The code is slightly longer (consider that in C++ I prefer the opening brace on a line alone, while in Scala "they" forced me to have opening braces at the end of the statement line). Having mutable collections doesn't require you to warp your mind around the data to find which aggregate function could transform your input into the desired output – just find what you are looking for and change it. Seems simpler to explain to a child.

Then I turned to Java. I'm not so fond of this language, but it is quite popular and has a comprehensive set of standard libraries that really allow you to tackle every problem confidently. Not sure what a Java programmer would consider idiomatic, so I stayed traditional and went for a generic List.
The code follows:

I'm not sure why the inner class Data needs to be declared static, but it seems that otherwise the instance has a reference to the outer class instance. Anyway – the code is decidedly more complex. There is no function similar to C++ find_if nor to Scala partition. The loop is simple, but it offers some chances to add bugs to your code. Anyway, explaining the code is straightforward once the iterator concept is clear.

Eventually I wrote a version in C. This language is hampered by the lack of a basic standard library – besides some functions on strings and files you have nothing. This could have been fine in the 70s, but today it is a serious problem. Yes, there are non-standard libraries providing all the needed functionality – you have plenty of them, gazillions of them, all incompatible. Once you choose one you are locked in… Well, clearly C shows the signs of age. So I wrote my own singly linked list implementation:

Note that once cleaned of braces, the merge function is shorter in C than in Java! This is a hint that Java is possibly verbose. I wrote here just the merge function. The rest of the source is not relevant for this post, but it basically consists of parsing the command line (string to int conversion), getting some random numbers and getting the time. The simplest frameworks for these operations are those based on the JVM. The most complex is C++ – it allows a wide range of configuration (random and time), but I had to look up on the internet how to do it and… I am afraid I wouldn't know how to exploit the greater range of options. Well, in my career as a programmer (some 30+ years counting since when I started coding) I think I never had the need to use a specific random number generator, or some clock different from a "SystemTimeMillis" or wall clock time. I don't mean that because of this no one should ask for greater flexibility, but I find it daunting that every programmer should pay this price because there is a case for using a non-default RNG.

Anyway, back to my test. In the following table I reported the results. Times have been taken performing 100000 insertions with max id 10000. The test has been repeated 20 times and the results have been averaged in the table.

The difference in timing between C++ and Scala is dramatic – with the former about 70 times faster than the latter. Wildly extrapolating, you can say that if you code in C++ you need 1/70 of the hardware you need to run Scala… there's no surprise (still guessing wildly) that IBM backs this one. Java is about 5 times faster than Scala. I'm told this is more or less expected and possibly it is something you may be willing to pay for the higher level. In the last column I reported the results for a version of the C++ code employing std::list for a fairer comparison (all the other implementations use a list after all). What I didn't expect was that C++ is faster (even if slightly) than C despite using the same data structure. It is likely because of some template magic.

The other interesting value I wrote in the table is the number of lines (total, not just the merge function) of each implementation. From my studies (that now are quite aged) I remember that some research reported that the speed of software development (writing, testing and debugging), stated as lines of code per unit of time, is the same regardless of the language. I'm starting to have some doubts because my productivity in Scala is quite low compared with other languages, but… ipse dixit.
Let’s say that you spend 1 for the Scala program, then you would pay 2.31 for C++, 1.97 for Java and 3.20 for C. Wildly extrapolating again you could draw a formula to decide whether it is better to code in C++ or in Scala. Be H the cost of the CPU and hardware to comfortably run the C++ program. Be C the cost of writing the program in Scala. So the total cost of the project is: (C++) H+C×2.31 (Scala) 68×H+C (C++) > (Scala) ⇒ H+C×2.31 > 68×H+C ⇒ C×1.31 >67×H ⇒ C > 51.14×H That is, you’d better using Scala when the cost of the hardware you want to use will not exceed the cost of Scala development by a factor of 50. If hardware is going to cost more, then you’d better use C++. Beside of being a wild guess, this also assumes that there is no hardware constraint and that you can easily scale the hardware of the platform. (Thanks to Andrea for pointing out my mistake in inequality) 6 thoughts on “Comparisons” Your comparison among Scala and the other languages is not fair. You are comparing an implementation using immutable structures with implementations that use mutable structures. It’s obvious that the performance is 2 orders of magnitude worse, both in terms of speed and memory! You should reimplement the Scala version with a mutable collection (and update items in place) and redo the tests. This specific problem screams for a mutable structure (and I’d rather use a map, maybe a hashmap, rather than a list) Scala let’s you implement in an imperative way, when you need to. It’s perfectly legit, and you should not be ashamed of it. Also, comparing C++ and Scala based on the raw performance doesn’t make sense. Obviously C++ is more performant then Scala (given a good C++ programmer as you are). But the choice of a programming language depends on many different things. It is likely I had to state it better – this comparison didn’t want to be between raw performances. In fact it makes little sense to compare a compiled language vs an interpreted one. Instead my goal was to compare how much “premature pessimization” you get when you tackle a problem with a functional approach. In other words my experiment was intended to be – give the same assignment to one Scala programmer and to one C++ programmer and compare the outcome. In this sense I don’t find it unfair – you pick up the tool you consider more apt, but in the end it only matters how much it costs to create it and how much it costs to run it. I agree that the problem is fictitious – I just took a piece of code for which an elegant solution has been found and overcharged it. I mean, in the real world that code does just a few iterations and I tested it with a number of iteration 1000 times. No one of the two programmers was concerned with performances, as it was reasonable. But this doens’t mean that you don’t pay for the functional approach. The overall code will be slower, maybe this will go unnoticed in the 80% of the code, but then you will need more hardware to run multiple instances, you will need more energy. I am not saying this is bad and you should avoid Scala in favor of C++, but that it is nice to take in account the extra you are paying and will be paying at run-time. I did other tests on the same problem – using mutable hash table Scala is only 10 times slower than C++. Changing data representation to a more suitable one (both in C++ and Scala) you can have the fastest implementation and Scala is only 2 times slower than C++. 
But this is just out of scope – the first approach for C++ programmers is to use a std::vector, while for Scala programmers it is to use an immutable List.

Again I have to disagree with the statement FP = "premature pessimization". I'd rather say "bad design choice" = "premature pessimization". In this specific example, as stated, you have a growing list of data that has a state (the quantity) that is frequently changing. You should not implement it with immutable structures that are completely reconstructed at every call. For example (as I explained in another reply), you can compromise by using an immutable list of (partly) mutable objects. The list grows by adding new elements at the head, and this is very cheap. A functional programmer will focus on the "what" he has to do, rather than the "how", and he won't be bothered by the usual boilerplate of iterating, updating, etc. This can improve productivity a lot. Then again, performance compared against C++ is in general worse. But the choice of a language should not be made only on raw performance (you're saying that you're not comparing raw performance, but indeed that is what you are doing again in your reply). Btw, Scala is not an interpreted language; it is compiled and executed in a VM. So, Scala is better? C++ is better? Both are better… The important thing is being able to choose the right programming language for the right problem, given many factors, among them also the expertise of the developers. And don't have a prejudice against the functional programming paradigm. It can take a while to get used to it after many years of the imperative paradigm, but in the end it can help describe problems in a more logical and clear way.

I think we are coming to about the same conclusion – pick the tool which better deals with the problem you are trying to solve. The business doesn't care about FP / OO / Scala / Java / whatever. And we agree raw speed is not everything, because it is just a part of the goal. I tried to take into account both speed of execution and speed of code production. I already know that trying to find a good Scala programmer is exceedingly hard. And I wondered what extra CPU you are paying to do FP. Maybe I have been exposed to too much bad Scala code, but I keep seeing a strong aversion to "var" and mutable. So, I focused on this aspect. I also tried mutable data structures with a solution close to the one you proposed. I don't have the data here, but IIRC the code was five times slower with respect to C++. As you said, and I fully agree, this is not enough to dismiss Scala in favor of C++. My final inequality – taken with a lot of grains of salt – was intended to be a rule of thumb to select the language according to the cost of running the application. Mutable or not mutable, there is a price to pay for functional; this is why we needed to wait some 50 years since the formulation of lambda calculus and Lisp for a functional language to start becoming mainstream. I'll keep an eye out for other cases to benchmark 🙂

If you accept that "quantity" in the Data object is mutable (and you should, since it is in the Java, C++ and C versions) you can write:

case class Data(val id: String, var quantity: Int)

type Collection = List[Data]

def merge(collection: Collection, data: Data): Collection = {
  collection.find(_.id == data.id).map { d =>
    d.quantity += data.quantity
    collection
  }.getOrElse(data +: collection)
}

In this case, if a Data object is already present its quantity is updated; otherwise a new Data object is added at the head of the (immutable) list.
The list is not duplicated (no wasted memory), and "quantity" is the only mutable state. I think performance should improve.

With an implicit (and this can be an answer to your post on implicits) you can write (sorry that indentation is lost in this editor):

case class Data(val id: String, var quantity: Int)

type Collection = List[Data]

implicit class mergedCollection(val collection: Collection) extends AnyVal {
  def merge(data: Data): Collection =
    collection.find(_.id == data.id).map { d =>
      d.quantity += data.quantity
      collection
    }.getOrElse(data +: collection)
}

val c0: Collection = List.empty
c0.merge(Data("a", 1)).
   merge(Data("b", 1)).
   merge(Data("a", 1))

With an implicit class we have easily extended our Collection with a merge() method. That's nice!
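For readers who want to see the imperative counterpart next to the Scala snippets above, here is a minimal sketch of the std::vector-based merge the post describes. This is an illustration only, not the author's original listing (which was posted as a separate snippet); the Data type and the names are assumptions.

#include <algorithm>
#include <string>
#include <vector>

// Assumed data type: an id plus a quantity, as described in the post.
struct Data
{
    std::string id;
    int quantity;
};

// Merge a new pair into the collection: if an element with the same id
// exists, add the quantity to it in place; otherwise append the new pair.
void merge(std::vector<Data>& collection, Data const& data)
{
    auto it = std::find_if(collection.begin(), collection.end(),
                           [&](Data const& d) { return d.id == data.id; });
    if (it != collection.end())
    {
        it->quantity += data.quantity;   // found: update in place
    }
    else
    {
        collection.push_back(data);      // not found: add the pair
    }
}

Used in a loop over incoming pairs, this is the "find what you are looking for and change it" approach that the post contrasts with the immutable-List version.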
http://www.maxpagani.org/2016/07/12/comparisons/
CC-MAIN-2017-51
en
refinedweb
// count and display the words in a sentence
// optionally store the words in a vector
// a Dev-C++ tested Console Application by vegaseat 17mar2005

#include <iostream>
#include <sstream>
#include <string>
#include <vector>   // for the vector option

using namespace std;

int main()
{
    string sentence, word;
    // create a string vector to hold the words
    vector<string> sV;

    cout << "Enter a sentence: ";
    getline(cin, sentence);

    // put the sentence into a stream
    istringstream instr(sentence);

    // the >> operator separates the stream at a whitespace
    while (instr >> word)
    {
        cout << word << endl;
        // optionally store each word in a string vector
        // could use an array, but a vector is easier
        sV.push_back(word);
    }

    // now let's look at the vector
    cout << "You typed " << sV.size() << " words:\n";
    for(int k = 0; k < sV.size(); k++)
    {
        cout << sV[k] << endl;
    }

    cin.sync();  // purge enter
    cin.get();   // console wait
    return 0;
}
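If you want to reuse the tokenizing part elsewhere, one way (an editorial sketch, not part of the original snippet) is to wrap the istringstream loop in a small helper:

#include <sstream>
#include <string>
#include <vector>

// split a sentence into whitespace-separated words
std::vector<std::string> split_words(const std::string& sentence)
{
    std::istringstream instr(sentence);
    std::vector<std::string> words;
    std::string word;
    while (instr >> word)      // the >> operator skips whitespace between tokens
        words.push_back(word);
    return words;
}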
https://www.daniweb.com/programming/software-development/code/216482/words-in-a-sentence
CC-MAIN-2017-51
en
refinedweb
Revision: 69059
Author: brlcad
Date: 2016-10-14 14:56:02 +0000 (Fri, 14 Oct 2016)

Log Message:
-----------
bit again. remove dead symlinks so removed/renamed files don't mess with the build (e.g., when a tclIndex needs to be regenerated). if it globs but doesn't exist, it's a dead link and useless in the build tree regardless. copies are left to fend for themselves (and don't exhibit the same problems).

Modified Paths:
--------------
    brlcad/trunk/misc/CMake/BRLCAD_Targets.cmake

Modified: brlcad/trunk/misc/CMake/BRLCAD_Targets.cmake
===================================================================
--- brlcad/trunk/misc/CMake/BRLCAD_Targets.cmake	2016-10-14 14:55:51 UTC (rev 69058)
+++ brlcad/trunk/misc/CMake/BRLCAD_Targets.cmake	2016-10-14 14:56:02 UTC (rev 69059)
@@ -764,6 +764,15 @@
     endif(NOT CMAKE_CONFIGURATION_TYPES)
   endforeach(filename ${fullpath_datalist})
 
+  # check for and remove any dead symbolic links from a previous run
+  file(GLOB listing LIST_DIRECTORIES false "${CMAKE_BINARY_DIR}/${targetdir}/*")
+  foreach (filename ${listing})
+    if (NOT EXISTS ${filename})
+      message("Removing stale symbolic link ${filename}")
+      execute_process(COMMAND ${CMAKE_COMMAND} -E remove ${filename})
+    endif (NOT EXISTS ${filename})
+  endforeach (filename ${listing})
+
   # The custom command is still necessary - since it depends on the original source files,
   # this will be the trigger that tells other commands depending on this data that
   # they need to re-run one one of the source files is changed.
https://www.mail-archive.com/[email protected]/msg35265.html
CC-MAIN-2017-51
en
refinedweb
Well, this problem has a nice BFS structure. Let's see the example in the problem statement.

start = "hit"
end = "cog"
dict = ["hot", "dot", "dog", "lot", "log"]

Since only one letter can be changed at a time, if we start from "hit", we can only change to those words which have only one different letter from it, like "hot". Putting it in graph-theoretic terms, we can say that "hot" is a neighbor of "hit".

The idea is simply to begin from start, then visit its neighbors, then the non-visited neighbors of its neighbors... Well, this is just the typical BFS structure.

To simplify the problem, we insert end into dict. Once we meet end during the BFS, we know we have found the answer. We maintain a variable dist for the current distance of the transformation and update it by dist++ after we finish a round of BFS search (note that it should fit the definition of the distance in the problem statement). Also, to avoid visiting a word more than once, we erase it from dict once it is visited. The code is as follows.

class Solution {
public:
    int ladderLength(string beginWord, string endWord, unordered_set<string>& wordDict) {
        wordDict.insert(endWord);
        queue<string> toVisit;
        addNextWords(beginWord, wordDict, toVisit);
        int dist = 2;
        while (!toVisit.empty()) {
            int num = toVisit.size();
            for (int i = 0; i < num; i++) {
                string word = toVisit.front();
                toVisit.pop();
                if (word == endWord) return dist;
                addNextWords(word, wordDict, toVisit);
            }
            dist++;
        }
        return 0; // no transformation sequence exists
    }
private:
    void addNextWords(string word, unordered_set<string>& wordDict, queue<string>& toVisit) {
        wordDict.erase(word);
        for (int p = 0; p < (int)word.length(); p++) {
            char letter = word[p];
            for (int k = 0; k < 26; k++) {
                word[p] = 'a' + k;
                if (wordDict.find(word) != wordDict.end()) {
                    toVisit.push(word);
                    wordDict.erase(word);
                }
            }
            word[p] = letter;
        }
    }
};

The above code can still be sped up if we also begin from end. Once we meet the same word from start and end, we know we are done. This link provides a nice two-end search solution. I rewrite the code below for better readability. Note that the use of the two pointers phead and ptail saves a lot of time. At each round of BFS, depending on the relative sizes of head and tail, we point phead to the smaller set to reduce the running time.

class Solution {
public:
    int ladderLength(string beginWord, string endWord, unordered_set<string>& wordDict) {
        unordered_set<string> head, tail, *phead, *ptail;
        head.insert(beginWord);
        tail.insert(endWord);
        int dist = 2;
        while (!head.empty() && !tail.empty()) {
            if (head.size() < tail.size()) {
                phead = &head;
                ptail = &tail;
            }
            else {
                phead = &tail;
                ptail = &head;
            }
            unordered_set<string> temp;
            for (auto itr = phead->begin(); itr != phead->end(); itr++) {
                string word = *itr;
                wordDict.erase(word);
                for (int p = 0; p < (int)word.length(); p++) {
                    char letter = word[p];
                    for (int k = 0; k < 26; k++) {
                        word[p] = 'a' + k;
                        if (ptail->find(word) != ptail->end()) return dist;
                        if (wordDict.find(word) != wordDict.end()) {
                            temp.insert(word);
                            wordDict.erase(word);
                        }
                    }
                    word[p] = letter;
                }
            }
            dist++;
            swap(*phead, temp);
        }
        return 0;
    }
};

Plz clarify my doubt! Thanks. :)

First of all, thanks for your detailed comment. But in your first solution, I think "wordDict.erase(word);" (the first statement in the function addNextWords) can be deleted, because it is not necessary: you have already deleted the word when it was pushed into the queue, so it only affects the beginWord; we delete the beginWord if it exists in toVisit.

Firstly, thank you for the detailed explanation. It is very helpful!
I have a question here: since the problem requests the "shortest transformation sequence", I see you return "dist" when finding the endWord. How can you make sure dist is the smallest? Thank you!

Well, since adjacent words can only have one different letter and we search for the next words while sticking to this requirement using BFS, dist will be the answer when the endWord is hit.

I did not understand why you pushed wordDict.insert(endWord); in the beginning. How does it make the solution easier? It seems to work even without it.

Sharing my Java solution, 26 ms. It uses a HashMap to mark whether a word is not yet visited or whether it has been visited from the head / tail queue.

public class Solution {
    public int ladderLength(String beginWord, String endWord, Set<String> wordList) {
        HashMap<String, Integer> h = new HashMap();
        for (String i : wordList) h.put(i, 0);
        ArrayList<String> l = new ArrayList(), r = new ArrayList();
        l.add(beginWord);
        r.add(endWord);
        h.put(beginWord, 1);
        h.put(endWord, -1);
        int i = 0, j = 0;
        while (i < l.size() && j < r.size()) {
            int curStep = h.get(l.get(i));
            char[] c = l.get(i).toCharArray();
            for (int t = 0; t < c.length; t++) {
                char cc = c[t];
                for (char x = 'a'; x <= 'z'; x++)
                    if (x != cc) {
                        c[t] = x;
                        String y = String.valueOf(c);
                        if (h.containsKey(y)) {
                            int step = h.get(y);
                            if (step == 0) {
                                l.add(y);
                                h.put(y, curStep + 1);
                            } else if (step < 0) {
                                return curStep - step;
                            }
                        }
                        c[t] = cc;
                    }
            }
            i++;
            curStep = h.get(r.get(j));
            c = r.get(j).toCharArray();
            for (int t = 0; t < c.length; t++) {
                char cc = c[t];
                for (char x = 'a'; x <= 'z'; x++)
                    if (x != cc) {
                        c[t] = x;
                        String y = String.valueOf(c);
                        if (h.containsKey(y)) {
                            int step = h.get(y);
                            if (step == 0) {
                                r.add(y);
                                h.put(y, curStep - 1);
                            } else if (step > 0) {
                                return step - curStep;
                            }
                        }
                        c[t] = cc;
                    }
            }
            j++;
        }
        return 0;
    }
}

Thank you for sharing this awesome code! It's straightforward, easy to understand and well explained. The following is my take based on your code; I added some comments and used more explicit variable names. Hope it could help somebody.

class Solution {
public:
    int ladderLength(string beginWord, string endWord, unordered_set<string>& wordDict) {
        /* Search from both ends.
           From 'beginWord', find the set of words which are one character from 'beginWord'.
           Do the same to 'endWord', forming a set of words one character from 'endWord'.
           Check each word in the smaller of the two sets to see if it is in the other set.
           If it is, we are done. Otherwise, for each word in the smaller set,
           update the set (by changing one character). */
        unordered_set<string> head, tail, *smallerSet, *biggerSet;
        head.insert(beginWord);
        tail.insert(endWord);
        int dist = 1;
        while (!head.empty() && !tail.empty()) {
            ++dist;
            smallerSet = (head.size() < tail.size()) ? &head : &tail;
            biggerSet = (head.size() < tail.size()) ? &tail : &head;
            unordered_set<string> reachableWords;
            //for (auto itr = smallerSet->begin(); itr != smallerSet->end(); ++itr)
            for (const auto& w : *smallerSet) {
                string word(w);
                wordDict.erase(word);
                for (int i = 0; i < (int)word.length(); ++i) {
                    char letter = word[i];
                    for (int c = 0; c < 26; ++c) {
                        word[i] = 'a' + c;
                        if (biggerSet->find(word) != biggerSet->end()) return dist;
                        if (wordDict.find(word) != wordDict.end()) {
                            reachableWords.insert(word);
                            wordDict.erase(word);
                        }
                    }
                    word[i] = letter;
                }
            }
            swap(*smallerSet, reachableWords);
        }
        return 0;
    }
};

There seems to be a problem with the second solution:
If the endWord is one character different from the beginWord, and the beginWord is not in the dictionary, it will return 2, but actually the length could be either zero or larger than 2.

@nosrepus That is because at the end you need to return 0, not dist. dist is always returned when the last word is found, inside the loop.

int ladderLength(string beginWord, string endWord, unordered_set<string>& wordList) {
    int res = 1;
    deque<string> candidate;
    deque<string> cur;
    cur.push_back(beginWord);
    wordList.erase(beginWord);
    string temp;
    while (!cur.empty()) {
        while (!cur.empty()) {
            beginWord = cur.front();
            cur.pop_front();
            for (int i = 0; i < endWord.size(); i++) {
                temp = beginWord;
                for (int j = 0; j < 26; j++) {
                    temp[i] = 'a' + j;
                    if (temp == endWord) return res + 1;
                    if (temp != beginWord && wordList.find(temp) != wordList.end()) {
                        candidate.push_front(temp);
                        wordList.erase(temp);
                    }
                }
            }
        }
        swap(cur, candidate);
        res++;
    }
    return 0;
}

Thanks for sharing! Here's my similar Java version. I use a visited set explicitly rather than modifying dict, which is more straightforward in my view.

update (2017/03/01): wordList is of List type now, and all transformed words (including endWord) must be in the dictionary. For more efficiency, please refer to my bidirectional BFS solution ().

public int ladderLength(String beginWord, String endWord, List<String> wordList) {
    Set<String> dict = new HashSet<>(wordList), vis = new HashSet<>();
    Queue<String> q = new LinkedList<>();
    q.offer(beginWord);
    for (int len = 1; !q.isEmpty(); len++) {
        for (int i = q.size(); i > 0; i--) {
            String w = q.poll();
            if (w.equals(endWord)) return len;
            for (int j = 0; j < w.length(); j++) {
                char[] ch = w.toCharArray();
                for (char c = 'a'; c <= 'z'; c++) {
                    if (c == w.charAt(j)) continue;
                    ch[j] = c;
                    String nb = String.valueOf(ch);
                    if (dict.contains(nb) && vis.add(nb)) q.offer(nb);
                }
            }
        }
    }
    return 0;
}

Hey! I just want to thank you so much for actually explaining your solutions. Most people just copy and paste from their OJ but you actually take the time to explain. So thanks for that.
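A small driver to exercise the code in this thread (an editorial addition, not part of the original discussion). It assumes the first C++ Solution class above is visible in the same translation unit; with the example from the problem statement the expected output is 5 (hit -> hot -> dot -> dog -> cog):

#include <iostream>
#include <string>
#include <unordered_set>

// assumes the `Solution` class from the first C++ snippet above is declared before this point

int main() {
    std::unordered_set<std::string> dict = {"hot", "dot", "dog", "lot", "log"};
    Solution sol;
    std::cout << sol.ladderLength("hit", "cog", dict) << std::endl;  // prints 5
    return 0;
}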
https://discuss.leetcode.com/topic/16983/easy-76ms-c-solution-using-bfs
CC-MAIN-2017-51
en
refinedweb
Navigation Overview. Page, Hyperlink, NavigationService, and the journal form the core of the navigation support offered by WPF. This overview explores these features in detail before covering advanced navigation support that includes navigation to loose Extensible Application Markup Language (XAML) files, HTML files, and objects. Note In this topic, the term "browser" refers only to browsers that can host WPF applications, which currently includes Microsoft Internet Explorer and Firefox. Where specific WPF features are supported only by a particular browser, the browser version is referred to. Navigation in WPF Applications This topic provides an overview of the key navigation capabilities in WPF. These capabilities are available to both standalone applications and XBAPs, although this topic presents them within the context of an XBAP. Note This topic doesn't discuss how to build and deploy XBAPs. For more information on XBAPs, see WPF XAML Browser Applications Overview. This section explains and demonstrates the following aspects of navigation: - - Configuring the Host Window's Title, Width, and Height - - - Programmatic Navigation with the Navigation Service - Remembering Navigation with the Journal Page Lifetime and the Journal Retaining Content State with Navigation History - - Implementing a Page In WPF, you can navigate to several content types that include .NET Framework objects, custom objects, enumeration values, user controls, XAML files, and HTML files. However, you'll find that the most common and convenient way to package content is by using Page. Furthermore, Page implements navigation-specific features to enhance their appearance and simplify development. Using Page, you can declaratively implement a navigable page of XAML content by using markup like the following. <Page xmlns="" /> A Page that is implemented in XAML markup has Page as its root element and requires the WPFXML namespace declaration. The Page element contains the content that you want to navigate to and display. You add content by setting the Page.Content property element, as shown in the following markup. <Page xmlns=""> <Page.Content> <!-- Page Content --> Hello, Page! </Page.Content> </Page> Page.Content can only contain one child element; in the preceding example, the content is a single string, "Hello, Page!" In practice, you will usually use a layout control as the child element (see Layout) to contain and compose your content. The child elements of a Page element are considered to be the content of a Page and, consequently, you don't need to use the explicit Page.Content declaration. The following markup is the declarative equivalent to the preceding sample. <Page xmlns=""> <!-- Page Content --> Hello, Page! </Page>. <Page xmlns="" xmlns: Hello, from the XBAP HomePage! </Page> using System.Windows.Controls; namespace SDKSample { public partial class HomePage : Page { public HomePage() { InitializeComponent(); } } } Imports System.Windows.Controls Namespace SDKSample Partial Public Class HomePage Inherits Page Public Sub New() InitializeComponent() End Sub End Class End Namespace To allow a markup file and code-behind file to work together, the following configuration is required: In markup, the Pageelement must include the x:Classattribute. When the application is built, the existence of x:Classin the markup file causes Microsoft build engine (MSBuild) to create a partialclass that derives from Page and has the name that is specified by the x:Classattribute. 
This requires the addition of an XML namespace declaration for the XAML schema ( xmlns:x=""). The generated partialclass implements InitializeComponent, which is called to register the events and set the properties that are implemented in markup. In code-behind, the class must be a partialclass with the same name that is specified by the x:Classattribute in markup, and it must derive from Page. This allows the code-behind file to be associated with the partialclass that is generated for the markup file when the application is built (see Building a WPF Application). In code-behind, the Page class must implement a constructor that calls the InitializeComponentmethod. InitializeComponentis implemented by the markup file's generated partialclass to register events and set properties that are defined in markup. Note When you add a new Page to your project using Microsoft Visual Studio, the Page is implemented using both markup and code-behind, and it includes the necessary configuration to create the association between the markup and code-behind files as described here.. <Application xmlns="" xmlns: using System.Windows; namespace SDKSample { public partial class App : Application { } } Imports System.Windows Namespace SDKSample Partial Public Class App Inherits Application End Class End Namespace An XBAP can use its application definition to specify a start Page, which is the Page that is automatically loaded when the XBAP is launched. You do this by setting the StartupUri property with the uniform resource identifier (URI) for the desired Page. Note In most cases, the Page is either compiled into or deployed with an application. In these cases, the URI that identifies a Page is a pack URI, which is a URI that conforms to the pack scheme. Pack URIs are discussed further in Pack URIs in WPF. You can also navigate to content using the http scheme, which is discussed below. You can set StartupUri declaratively in markup, as shown in the following example. <Application xmlns="" xmlns:. Note For more information regarding the development and deployment of XBAPs, see WPF XAML Browser Applications Overview and Deploying a WPF Application.. <Page xmlns="" xmlns: Hello, from the XBAP HomePage! </Page>. <Page xmlns="" WindowTitle="Page With Hyperlink" WindowWidth="250" WindowHeight="250"> <Hyperlink NavigateUri="UriOfPageToNavigateTo.xaml"> Navigate to Another Page </Hyperlink> </Page> A Hyperlink element requires the following: The pack URI of the Page to navigate to, as specified by the NavigateUriattribute. Content that a user can click to initiate the navigation, such as text and images (for the content that the Hyperlinkelement. <Page xmlns="" WindowTitle="Page With Fragments" > <!-- Content Fragment called "Fragment1" --> <TextBlock Name="Fragment1"> Ea vel dignissim te aliquam facilisis ... </TextBlock> </Page>. <Page xmlns="" WindowTitle="Page That Navigates To Fragment" > <Hyperlink NavigateUri="PageWithFragments.xaml#Fragment1"> Navigate To pack Fragment </Hyperlink> </Page> Note This section describes the default fragment navigation implementation in WPF. WPF also allows you to implement your own fragment navigation scheme which, in part, requires handling the NavigationService.FragmentNavigation event. Important You can navigate to fragments in loose XAML pages (markup-only XAML files with Page as the root element) only if the pages can be browsed via HTTP. However, a loose XAML page can navigate to its own fragments.. 
Note It is possible for a WPF application to have more than one currently active NavigationService. For more information, see Navigation Hosts later in this topic.Navig. using System.Windows.Navigation; // Get a reference to the NavigationService that navigated to this Page NavigationService ns = NavigationService.GetNavigationService(this); ' Get a reference to the NavigationService that navigated to this Page Dim ns As NavigationService = NavigationService.GetNavigationService(Me) As a shortcut for finding the NavigationService for a Page, Page implements the NavigationService property. This is shown in the following example. using System.Windows.Navigation; // Get a reference to the NavigationService that navigated to this Page NavigationService ns = this.NavigationService; ' Get a reference to the NavigationService that navigated to this Page Dim ns As NavigationService = Me.NavigationService Note A Page can only get a reference to its NavigationService when Page raises the Loaded event. Programmatic Navigation to a Page Object The following example shows how to use the NavigationService to programmatically navigate to a Page. Programmatic navigation is required because the Page that is being navigated to can only be instantiated using a single, non-default constructor. The Page with the non-default constructor is shown in the following markup and code. <Page x: <!-- Content goes here --> </Page> using System.Windows.Controls; namespace SDKSample { public partial class PageWithNonDefaultConstructor : Page { public PageWithNonDefaultConstructor(string message) { InitializeComponent(); this.Content = message; } } } Namespace SDKSample Partial Public Class PageWithNonDefaultConstructor Inherits Page Public Sub New(ByVal message As String) InitializeComponent() Me.Content = message End Sub End Class End Namespace The Page that navigates to the Page with the non-default constructor is shown in the following markup and code. <Page xmlns="" xmlns: <Hyperlink Click="hyperlink_Click"> Navigate to Page with Non-Default Constructor </Hyperlink> </Page> using System.Windows; using System.Windows.Controls; using System.Windows.Navigation; namespace SDKSample { public partial class NSNavigationPage : Page { public NSNavigationPage() { InitializeComponent(); } void hyperlink_Click(object sender, RoutedEventArgs e) { // Instantiate the page to navigate to PageWithNonDefaultConstructor page = new PageWithNonDefaultConstructor("Hello!"); // Navigate to the page, using the NavigationService this.NavigationService.Navigate(page); } } } Namespace SDKSample Partial Public Class NSNavigationPage Inherits Page Public Sub New() InitializeComponent() End Sub Private Sub hyperlink_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) ' Instantiate the page to navigate to Dim page As New PageWithNonDefaultConstructor("Hello!") ' Navigate to the page, using the NavigationService Me.NavigationService.Navigate(page) End Sub End Class End Namespace When the Hyperlink on this Page is clicked, navigation is initiated by instantiating the Page to navigate to using the non-default constructor and calling the NavigationService.Navigate method. Navigate accepts a reference to the object that the NavigationService will navigate to, rather than a pack URI.. 
<Page xmlns="" xmlns: <Hyperlink Click="hyperlink_Click">Navigate to Page by Pack URI</Hyperlink> </Page> using System; using System.Windows; using System.Windows.Controls; using System.Windows.Navigation; namespace SDKSample { public partial class NSUriNavigationPage : Page { public NSUriNavigationPage() { InitializeComponent(); } void hyperlink_Click(object sender, RoutedEventArgs e) { // Create a pack URI Uri uri = new Uri("AnotherPage.xaml", UriKind.Relative); // Get the navigation service that was used to // navigate to this page, and navigate to // AnotherPage.xaml this.NavigationService.Navigate(uri); } } } Namespace SDKSample Partial Public Class NSUriNavigationPage Inherits Page Public Sub New() InitializeComponent() End Sub Private Sub hyperlink_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) ' Create a pack URI Dim uri As New Uri("AnotherPage.xaml", UriKind.Relative) ' Get the navigation service that was used to ' navigate to this page, and navigate to ' AnotherPage.xaml Me.NavigationService.Navigate(uri) End Sub End Class End Namespace Refreshing the Current Page A Page is not downloaded if it has the same pack URI as the pack URI that is stored in the NavigationService.Source property. To force WPF to download the current page again, you can call the NavigationService.Refresh method, as shown in the following example. <Page xmlns="" xmlns: <Hyperlink Click="hyperlink_Click">Refresh this page</Hyperlink> </Page> using System.Windows; using System.Windows.Controls; using System.Windows.Navigation; namespace SDKSample { public partial class NSRefreshNavigationPage : Page { Namespace SDKSample Partial Public Class NSRefreshNavigationPage Inherits Page void hyperlink_Click(object sender, RoutedEventArgs e) { // Force WPF to download this page again this.NavigationService.Refresh(); } } } Private Sub hyperlink_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) ' Force WPF to download this page again Me.NavigationService.Refresh() End Sub End Class End Namespace. <Page xmlns="" xmlns: <Button Click="button_Click">Navigate to Another Page</Button> </Page> using System; using System.Windows; using System.Windows.Controls; using System.Windows.Navigation;; } } } Namespace SDKSample Partial Public Class CancelNavigationPage Inherits Page Public Sub New() InitializeComponent() ' Can only access the NavigationService when the page has been loaded AddHandler Loaded, AddressOf CancelNavigationPage_Loaded AddHandler Unloaded, AddressOf CancelNavigationPage_Unloaded End Sub Private Sub button_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) ' Force WPF to download this page again Me.NavigationService.Navigate(New Uri("AnotherPage.xaml", UriKind.Relative)) End Sub Private Sub CancelNavigationPage_Loaded(ByVal sender As Object, ByVal e As RoutedEventArgs) AddHandler NavigationService.Navigating, AddressOf NavigationService_Navigating End Sub Private Sub CancelNavigationPage_Unloaded(ByVal sender As Object, ByVal e As RoutedEventArgs) RemoveHandler NavigationService.Navigating, AddressOf NavigationService_Navigating End Sub Private Sub NavigationService_Navigating(ByVal sender As Object, ByVal e As NavigatingCancelEventArgs) ' Does the user really want to navigate to another page? 
Dim result As MessageBoxResult result = MessageBox.Show("Do you want to leave this page?", "Navigation Request", MessageBoxButton.YesNo) ' If the user doesn't want to navigate away, cancel the navigation If result = MessageBoxResult.No Then e.Cancel = True End If End Sub End Class End Namespace. Important In Internet Explorer, when a user navigates away from and back to an XBAP, only the journal entries for pages that were not kept alive are retained in the journal. For discussion on keeping pages alive, see Page Lifetime and the Journal later in this topic.attribute value. The Page.Titleattribute value. The Page.WindowTitleattribute. <Page xmlns="" xmlns: </Page> using System.Windows.Controls; namespace SDKSample { public partial class PageWithTitle : Page { Namespace SDKSample Partial Public Class PageWithTitle Inherits Page } } End Class End Namespace. <Page xmlns="" xmlns: An instance of this page is stored in the journal. </Page>List. The NavigationWindow Class. <NavigationWindow xmlns="" xmlns: using System.Windows.Navigation; namespace SDKSample { public partial class MainWindow : NavigationWindow { public MainWindow() { InitializeComponent(); } } } Namespace SDKSample Partial Public Class MainWindow Inherits NavigationWindow Public Sub New() InitializeComponent() End Sub End Class End Namespace This code creates a NavigationWindow that automatically navigates to a Page (HomePage.xaml) when the NavigationWindow is opened. If the NavigationWindow is the main application window, you can use the StartupUri attribute to launch it. This is shown in the following markup. <Application xmlns="" StartupUri="MainWindow.xaml" />. <Page xmlns="" Title="Home Page" WindowTitle="NavigationWindow"> <. <Application xmlns="" StartupUri="HomePage.xaml" /> If you want a secondary application window such as a dialog box to be a NavigationWindow, you can use the code in the following example to open it. // Open a navigation window as a dialog box NavigationWindowDialogBox dlg = new NavigationWindowDialogBox(); dlg.Source = new Uri("HomePage.xaml", UriKind.Relative); dlg.Owner = this; dlg.ShowDialog(); ' Open a navigation window as a dialog box Dim dlg As New NavigationWindowDialogBox() dlg.Source = New Uri("HomePage.xaml", UriKind.Relative) dlg.Owner = Me dlg.ShowDialog(). The Frame Class. <Page xmlns="" WindowTitle="Page that Hosts a Frame" WindowWidth="250" WindowHeight="250"> <Frame Source="FramePage1.xaml" /> </Page>. <Page xmlns="" WindowTitle="Page that Hosts a Frame" WindowWidth="250" WindowHeight="250"> <Frame Source="FramePage1.xaml" JournalOwnership="OwnsJournal" /> </Page> The following figure illustrates the effect of navigating within a Frame that uses its own journal. Notice that the journal entries are shown by the navigation UI in the Frame, rather than by Internet Explorer. Note If a Frame is part of content that is hosted in a Window, Frame uses its own journal and, consequently, displays its own navigation UI. If your user experience requires a Frame to provide its own journal without showing the navigation UI, you can hide the navigation UI by setting the NavigationUIVisibility to Hidden. This is shown in the following markup. <Page xmlns="" WindowTitle="Page that Hosts a Frame" WindowWidth="250" WindowHeight="250"> <Frame Source="FramePage1.xaml" JournalOwnership="OwnsJournal" NavigationUIVisibility="Hidden" /> </Page> Navigation Hosts. Navigating to Content Other than XAML Pages. 
Note For more information about publishing and launching loose XAML pages, see Deploying a WPF Application.. <Frame Source="" />; } } } } Namespace SDKSample Public Class Person Private _name As String Private _favoriteColor As Color Public Sub New() End Sub Public Sub New(ByVal name As String, ByVal favoriteColor As Color) Me._name = name Me._favoriteColor = favoriteColor End Sub Public Property Name() As String Get Return Me._name End Get Set(ByVal value As String) Me._name = value End Set End Property Public Property FavoriteColor() As Color Get Return Me._favoriteColor End Get Set(ByVal value As Color) Me._favoriteColor = value End Set End Property End Class End Namespace To navigate to it, you call the NavigationWindow.Navigate method, as demonstrated by the following code. <Page xmlns="" xmlns: <Hyperlink Name="hyperlink" Click="hyperlink_Click"> Navigate to Nancy Davolio </Hyperlink> </Page> using System.Windows; using System.Windows.Controls; using System.Windows.Media; namespace SDKSample { public partial class HomePage : Page { public HomePage() { InitializeComponent(); } void hyperlink_Click(object sender, RoutedEventArgs e) { Person person = new Person("Nancy Davolio", Colors.Yellow); this.NavigationService.Navigate(person); } } } Namespace SDKSample Partial Public Class HomePage Inherits Page Public Sub New() InitializeComponent() End Sub Private Sub hyperlink_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) Dim person As New Person("Nancy Davolio", Colors.Yellow) Me.NavigationService.Navigate(person) End Sub End Class End Namespace. Security WPF navigation support allows XBAPs to be navigated to across the Internet, and it allows applications to host third-party content. To protect both applications and users from harmful behavior, WPF provides a variety of security features that are discussed in Security and WPF Partial Trust Security. See Also SetCookie GetCookie Application Management Overview Pack URIs in WPF Structured Navigation Overview Navigation Topologies Overview How-to Topics Deploying a WPF Application
https://docs.microsoft.com/en-us/dotnet/framework/wpf/app-development/navigation-overview
CC-MAIN-2017-51
en
refinedweb
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards
detail/gcd_lcm.hpp provides two generic integer algorithms: greatest common divisor and least common multiple. namespace details { namespace pool { template <typename Integer> Integer gcd(Integer A, Integer B); template <typename Integer> Integer lcm(Integer A, Integer B); } // namespace pool } // namespace details For faster results, ensure A > B. Dependencies: none. This header may be replaced by a Boost algorithms library in the future.
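The header itself is C++, but the two algorithms it declares are tiny and worth seeing spelled out. Purely as an illustration (this is not part of the Boost interface, and the class and method names below are just for the sketch), here is the classic Euclidean gcd, plus lcm derived from it, in Java:

// Illustrative sketch only -- not part of boost::details::pool.
public final class GcdLcm {

    // Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    public static long gcd(long a, long b) {
        while (b != 0) {
            long t = a % b;
            a = b;
            b = t;
        }
        return Math.abs(a);
    }

    // lcm(a, b) = |a / gcd(a, b) * b|; dividing first keeps intermediate values small.
    public static long lcm(long a, long b) {
        if (a == 0 || b == 0) {
            return 0;
        }
        return Math.abs(a / gcd(a, b) * b);
    }

    public static void main(String[] args) {
        System.out.println(gcd(48, 36)); // 12
        System.out.println(lcm(48, 36)); // 144
    }
}

The "ensure A > B" note reads as a performance hint rather than a correctness requirement: with Euclid's method, arguments in the other order just cost one extra iteration to swap them.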
http://www.boost.org/doc/libs/1_32_0/libs/pool/doc/implementation/gcd_lcm.html
CC-MAIN-2017-13
en
refinedweb
HIPL on Android Bug Description HIPL does not build or work on Android yet. Android platform is problematic because it is targeted for java applications. The challenges include: * DNS proxy is written with python, so it's not possible to port it at all * The h/w platform for Android is really picky with memory alignments and requires some modifications to the code. * Many of the libraries are missing, so HIPL has to be cross-compiled and statically linked with some ofthe libraries. However, the latest Android 2.3 seems to have OpenSSL already available should make the work easier. Please check out (especially the end of the latter link): http:// http:// The alignment problems can be fixed with: system("echo 3 > /proc/cpu/ However, this degrades the performance of the system radically. So, this was solved by a student (on some really old version of Android) using packed data structures. I have some patches from him, but I am not going to publish them online because they do not work with the current data base. Instead, I'll just quote his report: When you cast unaligned pointer to an aligned type, the gcc takes your word and inlines memcpy. But this will generate unaligned trap, but will work in x86 and other processors where unaligned accesses will be fixed automatically, but not in ARM. For example, the following code will raise SIGBUS when run in ARM (in foo(), the types are same, even though we have typecasted it from a packed struct, so gcc inlines the memcpy) #include <string.h> typedef struct { unsigned int a; unsigned int b; unsigned char c; } s; typedef struct { unsigned int a; unsigned int b; unsigned char c; } __attribute__ ((packed)) ust; void foo(s *cp) { s dst; memcpy(&dst, cp, sizeof(s)); return 0; } int main(int k, char *kk[]) { ust tt; return foo(&tt); } One place where this was causing a problem was hadb.c: The compilation problems about shadowed declarations should be trivial to fix. Just rename the "index" variables to "idx". Rene gave a pointer earlier: the new GCC version may actually help you with HIPL for Android: http:// Ibraham, feel free to pose questions here too... Ibrahim will not be able to complete this task. New up-to-date instructions for Ubuntu *oneiric*. Did not double check the instructions, I hope they are ok from my history. I don't know if there's some unncessary parts in the instructions. Please note that there's absolutely no reason install android stuff to "/" - everything should be in your home directory. Also, no need to have root privileges for compilation. # gcc -6 requires a new Ubuntu mkomu@bling: No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.10 Release: 11.10 Codename: oneiric # let's install it, no need to compile in Oneiric sudo aptitude install gcc-4.6- # Download and SDK and NDK. Set Android root to make paths shorter later. cd ~ wget http:// tar xvzf android- cd android-sdk-linux wget http:// tar xvjf android- export ANDROID_ source ~/.bashrc # Download, compile, modify and install OpenSSL to the NDK directory. cd ~ wget http:// tar xvzf openssl- cd openssl-1.0.0g ./config no-asm shared --prefix= edit Makefile: CC= arm-linux-- RANLIB= arm-linux- NM= arm-linux- PERL= /usr/bin/perl TAR= tar TARFLAGS= --no-recursion MAKEDEPPROG= gcc LIBDIR=lib make install # Download and cross-compile HIPL. 
cd ~ bzr co lp:hipl trunk cd trunk edit configure.ac and comment out all AM_CFLAGS autoreconf --install ./configure --disable-gcc-warn --disable-firewall -host=arm-linux CC=arm- make make all-am make[1]: Entering directory `/home/ CC lib/core/builder.lo In file included from lib/core/ lib/core/ lib/core/ lib/core/ lib/core/ lib/core/ lib/core/ lib/core/ ... (it appears in_port_t definition is missing from android (?), so it may have to be declared redudantly in some header file) Just an update, the code doesn't completely compile yet, but this is how I'd initialize a build environment under Ubuntu 13.04 now: sudo apt-get -y install gcc-4.7- cd ~ wget http:// tar xvf android- cd android-sdk-linux/ wget http:// tar xvf android- cd android-ndk-r9/ echo "export ANDROID_ export ANDROID_ROOT=$(pwd) cd ~ wget http:// tar xvzf openssl- cd openssl-1.0.0g ./config no-asm shared --prefix= wget http:// patch Makefile < droid-openssl- make install cd ~ mkdir hipl cd hipl bzr branch lp:~hipl-core/hipl/android-port-new cd android-port-new autoreconf --install ./configure --disable-gcc-warn --disable-firewall -host=arm-linux CC=arm- echo "You are now ready to run 'make' and 'make all-am'" From Juhani (another way to configure): ./configure --disable-gcc-warn --disable-firewall --host=arm-linux --enable- This is what we have managed so far to complete with a student with a bit older version of HIPL: 1 Download arm-linux tools-chain:. org/download/ projects/ toolchain/ arm-linux- gcc-3.4. 1.tar.bz2 /usr/local/ arm/3.4. 1/bin download from http:// unzip tar to / directory. export PATH=$PATH: 2 build openssl:. org/source/ download openssl source code from http:// ./config noï¼asm shared Change the Makefile replace the section started from line 62 with the following: CC= arm-linux-gcc frame-pointer -Wall NO_RFC3779 -DOPENSSL_NO_STORE-ar $(ARFLAGS) r RANLIB= arm-linux-ranlib NM= arm-linux-nm PERL= /usr/bin/perl TAR= tar TARFLAGS= --no-recursion MAKEDEPPROG= gcc LIBDIR=lib and then make it! make install 3 download Android SDK and NDK developer. android. com/sdk/ index.html developer. android. com/sdk/ ndk/index. html ROOT=XXX/ android- sdk-linux_ 86/android- ndk-r3 XXX/android- sdk-linux_ 86/android- ndk-r3/ build/prebuilt/ linux-x86/ arm-eabi- 4.4.0/bin http:// http:// unzip ndk into sdk directory. 
export ANDROID_ export PATH=$PATH: copy openssl headers from openssl- 1.0.0a/ include/ openssl directory to $ANDROID_ ROOT/build/ platforms/ android- 3/arch- arm/usr/ include/ directory ROOT/build/ platforms/ android- 3/arch- arm/usr/ lib/ directory copy libcrypto.so and libcrypto.a libraries to $ANDROID_ 4 build HIPL product configure by run command autoreconf --install method 1: use arm-eabi-gcc provided by NDK: â€-I$ANDROID_ ROOT/build/ platforms/ android- 3/arch- arm/usr/ include/ †CFLAGS= â€-nostdlib†LDFLAGS= â€-Wl, -rpath- link=$ANDROID_ ROOT/build/ platforms/ android- 3/arch- arm/usr/ lib/ -L$ANDROID_ ROOT/build/ platforms/ android- 3/arch- arm/usr/ lib/†LIBS=â€-lc “ "-I$OPENSSL/ include/ " CFLAGS="-nostdlib" LDFLAGS= "-Wl,-rpath- link=$ANDROID_ ROOT/build/ platforms/ android- 3/arch- arm/usr/ lib,-L$ ANDROID_ ROOT/build/ platforms/ android- 3/arch- arm/usr/ lib/" LIBS="-lc" ./configure -host=arm-eabi CC=arm-eabi-gcc CPPFLAGS= method 2 (WORKS): use arm-linux-gcc which is a general crosscompile tool: ./configure --disable-gcc-warn --disable-firewall -host=arm-linux CC=arm-linux-gcc CPPFLAGS= If you get this far, there are some compilation issues for you to solve: In file included from ./lib/modulariz ation/lmod. h:37, from lib/core/ state.h: 37, from lib/core/ builder. h:43, from lib/core/ builder. c:98: linkedlist. h:64: warning: declaration of 'index' shadows a global declaration usr/local/ arm/3.4. 1/bin/. ./lib/gcc/ arm-linux/ 3.4.1/. ./../.. /../arm- linux/sys- include/ string. h:267: warning: shadowed declaration is here linkedlist. h:67: warning: declaration of 'index' shadows a global declaration usr/local/ arm/3.4. 1/bin/. ./lib/gcc/ arm-linux/ 3.4.1/. ./../.. /../arm- linux/sys- include/ string. h:267: warning: shadowed declaration is here linkedlist. h:70: warning: declaration of 'index' shadows a global declaration usr/l.. . ./lib/core/ /home/mkomu/ ./lib/core/ /home/mkomu/ ./lib/core/ /home/mkomu/
https://bugs.launchpad.net/hipl/+bug/715126
CC-MAIN-2017-13
en
refinedweb
I am reading documentation about new order calculations: It mentions that for calculations to work properly, line items should exist in shipment: "The order calculators will only calculate line items that belongs to a shipment. This is a changed behaviour from how it worked in the workflow activities." So what is the proper way to add a new line item now? Previously I used Quicksilver's CartService implementation, but I am not sure if it supports the new way of handling things:

public bool AddToCart(string code, out string warningMessage)
{
    var entry = CatalogContext.Current.GetCatalogEntry(code);
    CartHelper.AddEntry(entry);
    CartHelper.Cart.ProviderId = "frontend"; // if this is not set explicitly, place price does not get updated by workflow
    ValidateCart(out warningMessage);
    return CartHelper.LineItems.Select(x => x.Code).Contains(code);
}

It would be nice to have some working code sample :)
I found an example here:

var lineItem = CreateLineItem(variant, 1m, (decimal)price.UnitPrice.Amount);
// A shipment needs to exist
// Replaced in 9.2: cart.Forms.First().Shipments.First().LineItems.Add(lineItem);
var orderForm = cart.OrderForms.First();
orderForm.LineItems.Add(lineItem);
var index = orderForm.LineItems.IndexOf(lineItem);
cart.OrderForms.First().Shipments.First().AddLineItemIndex(index, lineItem.Quantity);

But this seems too complicated. Is this the correct way to handle adding line items?
I used dotPeek to see what's happening inside "CartHelper.AddEntry(entry);" and as far as I can see/understand from the code, it detects if the new system is used and does that for you.
Sebastian is right: if you use CartHelper with VNextWorkflow enabled, then it will make sure the shipment is there. Also, if VNextWorkflow is enabled you can use the workflow system as is and discounts will be calculated with the new engine. If you want to use the new API without CartHelper, something like this will work:

var cart = OrderRepository.LoadOrCreate<Cart>(PrincipalInfo.CurrentPrincipal.GetContactId(), Cart.DefaultName);
var price = PriceService.GetDefaultPrice(marketId, DateTime.UtcNow, new CatalogKey(new Guid(variation.ApplicationId), variation.Code), SiteContext.Current.Currency);
var lineItem = CreateLineItem(variation, quantity, price.UnitPrice.Amount);
cart.Forms.First().Shipments.First().LineItems.Add(lineItem);
PromotionEngine.Run(cart);
OrderRepository.Save(cart);

It seems that my initializable module didn't enable the new workflows. I now configured it in ecf.app.config and it started to work. Here is my initializable module:

[InitializableModule]
[ModuleDependency(typeof(EPiServer.Web.InitializationModule))]
public class SwitchOnNewPromoEngine : IInitializableModule
{
    public void Initialize(InitializationEngine context)
    {
        var featureSwitch = ServiceLocator.Current.GetInstance<IFeatureSwitch>();
        featureSwitch.InitializeFeatures();
        featureSwitch.Features.Add(new WorkflowsVNext());
        featureSwitch.EnableFeature(WorkflowsVNext.FeatureWorkflowsVNext);
    }

    public void Uninitialize(InitializationEngine context) { }
}

I created it the same as in this article:
Anyway thanks Sebastian and Mark!
http://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2016/1/proper-way-to-add-item-to-the-cart-for-new-beta-features/
CC-MAIN-2017-13
en
refinedweb
The Investment Matrix Revelations?" Since I normally simply analyze Fed actions rather than prescribe them (I assume Greenspan does not really care about my opinions) I was brought up a little short, and answered that I would like to see the Fed tell us whether they are going to work to bring down long rates, instead of merely hinting or suggesting or threatening. The answer would give us a real indication as to whether we will have a recovery or at least a continuation of the Muddle Through Economy or will slide into a recession. They told us absolutely nothing, which in my opinion is a very risky option. The bond and stock markets seem to agree. Since that interview, I have given a great deal of thought to that question. The answer is far more complex, and has to do with how a number of factors, much of which is beyond Fed control, interact. I started to write on this today, but realize I need to let this topic cook in my mind some more. The economy of the world and the US is at an "inflection point." Since next week is the beginning of the second half, we will also discuss if there is the hint of the elusive "second half recovery." That will wait for next week. Today we will deal with a far more important matter than my thoughts on the Fed: What kind of returns can we expect from the stock market over the next 10-20 years? This week's letter will require you to put on your thinking caps, but will help you to be a better investor if you grasp the import of what we are saying. (As a side note, I will be in Paris, Geneva, Boston, Halifax, San Francisco and New Orleans within the next few months. Already I am tired. Details below.) The essay below is part two (of four parts) of a series from my upcoming book-in-progress. Warning: this letter is a little longer than most, but this section needed to be kept together. This section is co-authored with Ed Easterling of Crestmont Holdings.. This research into stock market and economic cycles will give us insight into how secular bear markets actually work. It will also give us a clue on how to invest in stocks even in a bear market cycle. (Note: when the pronoun "I" is used, it denotes a personal comment by John Mauldin.) The Investment Matrix: The Real Truth about Stock Market Returns (This section will reference charts available at. Click on "Stock Markets" and the graph called "Long Term Returns." We will provide large fold-out versions of the graphs in the book. Readers of this e-letter can hopefully get the sense of what we are saying without looking at the graphs, but if you have the time, we would suggest reviewing them. If you are not going to be able to look at them, you might skip the first sections which explain what you are looking at and go on to the analysis following the subhead: The Investment Matrix Revelations. [Note from John - you will need Adobe Acrobat. I prefer to greatly increase the viewing size. You can also get Kinko's (or other similar firms) to print these on large color graphs.]) The past 103 years have provided over 5,000 investment period scenarios-that is, the combination of investment periods from any start year to every year since that time. This provides an extensive history across which to assess the potential and likely outcomes. Like the movie, The Matrix, this Investment Matrix slows down the fast-paced motion of the markets, letting us see the ebb and flow of the economic tides over long periods of time. There are several versions of the chart on the web site. 
We call your attention to two of them: one, called "Tax-Payer Real" is the S&P 500 index including dividends and transaction costs adjusted to reflect the net return after inflation and taxes (see details on taxes below). You will not see this one in a mutual fund sales presentation. The second is called "Tax-Exempt Nominal." It assumes your money is all in tax sheltered retirement accounts, there is no inflation (thus "nominal"), and you don't pay taxes when you take out your money. This is the "long run" numbers you are most likely to see in marketing brochures. (You can view other versions of the chart which show "Tax-Payer Nominal" and "Tax-Exempt Real" at) Let's take a moment to explain the layout of the charts. There are three columns of numbers on the left hand side of the page and three rows of numbers on the top of the page. The column and row closest to the main chart reflect every year from 1900 through 2002. The column on the left side will serve as our start year and the row on the top represents the ending year. The row on the top has been abbreviated to the last two numbers of the year due to space constraints. Therefore, if you wanted to know the annual compounded return from 1950 to 1973, look for the row represented by the year '1950' on the left and look for the intersecting column designated by '73' (for 1973). The result on the version titled "S&P Index Only" is 6, reflecting an annual compounded return of 6% over that 23 year period. Looking out another 9 years the number drops to 2% for an after tax, inflation adjusted return over 32 years. There is a thin black diagonal line going from the top left to the lower right. This line shows you what the returns are 20 years after an initial investment. This will help you see what returns have been over the "long run" of 20 years. Also note the color of the cell represents the level of the return. If the annual return is less than 0%, the cell is shaded red. When the return is between 0% and 3%, the shading is pink. Blue is used for the range 3% to 7%, light green when the returns are between 7% and 10%, and dark green indicates annual returns in excess of 10%. This enables us to look at the big picture. Whereas, long-term returns tend to be shaded blue, shorter-term periods use all of the colors. As well, note that our original number 6 mentioned above was presented with a black-colored font, while some of the numbers are presented in white. If the P/E ratio for the ending year is higher than the P/E for the starting year-representing rising P/E ratios-the number is black. For lower P/E ratios, the color is white. In general, red and pink most often have white numbers and the greens and blues share a space with black numbers. The P/E ratio for each year is presented along the left side of the page and along the top of the chart. Lastly, there's additional data included on the chart. On the left side of the page, note the middle column. As well, on the top of the page, note the middle row. Both series represent the index values for each year. This is used to calculate the compounded return from the start period to the end period. Along the bottom of the page, we've included the index value, dividend yield, inflation (Consumer Price Index), real GDP, nominal GDP, and the ten-year annual compounded average for both GDP measures. For the index value, keep in mind that the S&P 500 Index value for each year represents the average across all trading days of the year. 
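In formula form, each cell of the matrix is simply the standard compound-annual-growth-rate calculation applied to the two index values (with the dividend, tax, inflation and transaction-cost adjustments applied in whichever version of the chart you are reading):

$$ r_{\text{start}\rightarrow\text{end}} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{\frac{1}{\text{end}-\text{start}}} - 1 $$

so the 6 in the 1950-to-1973 cell of the "S&P Index Only" version means the index value grew at roughly 6% per year over those 23 years.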
Along the right, there's an arbitrary list of developments for each of the past 103 years. In compiling the list of historical milestones, it's quite interesting to reflect upon the past century and recall that the gurus of the 1990's actually believed that we were in a "New Economy" era. Looking at the historical events, it could be argued that almost every period had a reason to be called a "New Economy." But that's an argument for another chapter. The Investment Matrix Revelations ten years, twenty years, and even longer aren't long enough to ensure positive or acceptable returns. Note also that we've recently completed the longest run of green (very high market return) Real, which is what you experience in your actual accounts, you will notice that the returns tend to be in the 3-5% range after long periods of time. Often real returns are 2% or less over multiple decades. Again, the charts clearly show the most important thing you can do to positively affect your long term returns is to begin investing in times of low P/E ratios. The Matrix assumes an estimate of each year's taxes at the then current rates over this period (details below). We are aware that the income tax did not exist in 1901. This was a tricky number to assume, as taxes on stocks are comprised of both long term and short term gains, and are taxed at different rates for different times. Some of you pay additional state taxes. While we estimated taxes for each individual year, the average over time was about 20%. Why not just assume all long-term gains? If you buy your stocks through mutual funds, as most individuals do, then you are probably seeing a lot of turnover in your portfolio. Remember Peter Lynch of Magellan fame? His reported average holding period was about 7 months during the 70's. Some of you will pay higher taxes, and some of you will pay lower, depending upon your investment styles. The recent average we assume is around 20%.You can adjust your expectations accordingly. Now, what can we learn from these tables? First, there are very clear periods when returns are better than others. These relate to secular bull and bear markets. No big insight there. But what you should notice is the correlation with P/E ratios. In general, when P/E ratios begin to rise, you want to be in the stock market. When they are falling, total returns over the next decade will be below par. (More on that phenomenon below.) With the exception of WWII, when these periods of falling P/E ratios start, they just keep going until the P/E ratios top out. Generally, this topping period comes prior to a recession. Can you use the P/E ratios to signal a precise turn from a secular bull to a secular bear? No, but you can use them to assist you in confirming other signals. And once that turn has begun, the historical evidence is that the trend continues. Investors are advised to change their stock investing habits. As noted above, there will be bear market rallies which will momentarily halt the decline of the P/E ratios. These always end as reversion to the mean (trend) is simply too strong a force. Second, the Investment Matrix Long Term Money In Stocks Again? When can you profitably begin to be a long term investor, even in a secular bear market? Look at the tables. You have excellent chances of getting above average returns from the stock market if you buy when P/E ratios are 10-12 or below. You might have to suffer in the short term, but long term you will probably be OK. 
(I will deal with stock market investment strategies at length in a later chapter.) For index investors, a good strategy would be to start averaging in when the market values begin approaching a P/E of 10-12. Even in the worst of the Depression, you would have done well over the next 20 years using this strategy. Investors who want to own individual stocks should focus on stocks with deep value and rising dividends, although the evidence indicates you will have periods where you will still need patience. Death and Taxes If you look at just the nominal returns without thinking about taxes, some would make the case that trying to time the market is pointless. Over enough time, the returns tend to be the same. And we agree, if you have 50 years, time can heal a lot of mistakes. Historically, investing through full cycles would give you a 10% compound return after many decades, and 6-7% in inflation adjusted terms. However, if you take into account inflation, transaction costs and taxes, real in-your-account returns tend to be in the 3% annual average range. And if you begin to invest at the beginning of a secular bear, real returns over the next 20 years are likely to be negative! You lose buying power. Let's look at some other points. First, these tables include dividends. The 6-7% returns show up over time primarily because much of the last century had very high dividends of 4% or more. Given that dividends are under 2% or non-existent for many NASDAQ stocks, the 3% long-term return number becomes far more realistic. Periods of high dividends greatly increase the return potential over using simple S&P 500 Index returns. Second, inflation did a great deal to mask the seriousness of the 1970 bear markets. It took 16 years for the index to make new highs after 1966, but it was another 10 years, or 1992, before investors saw a rise in their actual buying power in terms of the S&P 500 index. On the table, you see that compound returns over the 26 years from 1966 to 1992 was 8% without inflation and 2% taking into account inflation. If you take into account taxes and other costs the return to the investor was zero. For a period of 26 years, investors in index funds did not see a real increase in their buying power. The bulk of earnings on this table over that period came from dividends. The compounding effect of dividends upon returns was huge. The 8% returns an investor apparently got from 1966 through 1992 depended largely upon inflation. The 2% real returns are almost entirely due to dividends. During much of that period, dividends were in the 4-5% range. Today, dividends on the S&P are less than 2%, instead of the 4-5% of the 70's. Further, we are not in a period of high inflation, although that may change by the end of the decade. The clear implication is that we are facing a period where stock market returns are going to be difficult. If you look at the tax-deferred account real returns table for the period of 1966 through 1992 period, the after-inflation return numbers for the majority of that period are negative for a long time, until you begin to get to periods of low P/E ratios. the average stock or index fund from where we are today. The table suggests it might even be unsafe to assume 2%! How can we even think that stocks might not compound at 2% a year over the next ten years? 
It is because there has only been one time when investors have made more than a 2% real return (after taxes and other costs) over the next ten years when P/E ratios started over 21, which is easily where they are today (no matter who is figuring them). That one lone example is from the mid-90's through today. If we are right and returns become flat, then even that one period will turn out to be closer to 2%. 5. Of course, all of my readers are above average, but you might want to warn your brother-in-law. Just food for thought. "What type of returns should you expect from the stock market for the next 5, 10, or 20 years?" (This next section is authored by John Mauldin, as it is a tad on the acerbic side and Ed is a rather gentle soul.) is used by them to urge investors to buy some more stocks or mutual fund shares today and hold them as well. If you just keep buying, the study says you will get your reward, by and by. This is the sweet buy and buy sales pitch. The Ibbotson study (and numerous similar studies) is one of the most misused pieces of market propaganda ever foisted on innocent investors. If I thought for one minute you really could get 7% compound annualized returns over the next 20 years by simply buying and holding, I would agree that it would be a smart thing to do. I cannot tell you how many soon-to-be-retired couples I have talked to, after their retirement savings have been hit 30-40-50% and their comfortable retirement dreams are shattered, who tell me their brokers or advisors told them if they just hold on the market would come back. Soon, they are promised. These were the investment professionals they trusted and they assumed had done their homework. Now they know these guys flunked Stock Market Returns 101, or possibly skipped class in order to attend lectures by Jack Grubman on "How To Buy Telecommunications Stocks." Today I give you the class notes they should have shown you. The Most Dangerous Threat to Your Retirement Typical is the email I got from a reader. Quote: "My wife and I just heard another presentation by an investment firm recommending that retired people, needing income from their sheltered funds, place enough assets in fixed instruments for 5 years living expenses and the rest in stock funds. The hope is that within 5 years, there will be an upturn such that stock funds can be sold at a gain, from which to draw income. Ibbotson data was, of course, used to show how unlikely it was for there to be many consecutive years of down markets. The firm had a CPA and several financial advisors who had been working in the field for 20 - 30 years. "It drove me nuts also, especially at this meeting where heads were nodding around the room as these advisors (looking for people to give them their money to manage) explained how scientific their approach was. The CPA member of the firm said (in comparison to 1966 -1982) that the market could be that bad or even worse, but that this was very unlikely, and went on to recommend the strategy described above." Let's review this for a moment. I will leave aside the question of making a one size fits all recommendation for retirees, as I assume such stupidity is self-evident. That alone should be enough to make you run, not walk, to the exits. In 1976, a young Roger Ibbotson co-authored a research paper predicting that during the following two decades the stock market would produce a return of about 10% a year, and that the Dow Jones Industrial Average would hit 10,000 in 1999. 
Ibbotson, now a professor at Yale, currently forecasts a compounded return on stocks during the next two decades of 9.4% - about 1 percentage point a year lower than his earlier projections. "I'm neither an optimist nor a pessimist," Ibbotson said recently in an interview. "I'm a scientist, and I am not telling people to buy or sell stocks now. I'm saying that over the long run stocks will outperform bonds by about four percentage points a year." (from AdvisorSites, Inc.) It turned out Ibbotson was right about 1999, and with the imprimatur of a Yale professor, investment managers everywhere use this "scientific" study to show investors why they should put money in the stock market and leave it. (I am not sure how economists get to be scientists, or how investment predictions can be scientific, but that is a debate for another time.) If the S&P were to grow at 9.4% for the next two decades, it would be in the range of 4,500 and the Dow would be at 42,000 or so in 2023 (give or take a few thousand). Of course, that's starting at today's market values. If we start out with the market tops in 2000, we get around 8,200 and 63,000 respectively in 2020. Thus, investors shouldn't worry about the short term. Ibbotson assures us, as a scientist, that things will get better buy and buy. The Investment Matrix clearly demonstrates why you should leave the room whenever an investment advisor brings out this study to sell you on an investment strategy. If your advisor actually believes this nonsense, then this will help you understand why you should fire him. (That should get me a few letters.) There may be reasons to think the markets might go up, but the Ibbotson study is not one of them, in my opinion. Further, over the next 70 years, the market may in fact rise 9.4% a year. But to suggest to retirees it will do so over the next few years based upon "scientific analysis" is irresponsible and misleading. Let's start our analysis in 1976, the year Ibbotson did the study. (I could make a much better case starting with another year, but 1976 works just fine.) From 1976 through 2002, the S&P 500 returned 12% a year (including dividends), even better than Ibbotson predicted, and after a rather significant drop over the last few years. However, 5% of that annual return is due simply to inflation. In real, inflation adjusted terms the S&P was up 7% a year. The Price to Earnings (P/E) ratio was a rather low 12 in 1976. It ended up around 22 last year, using pro forma numbers. Thus almost half the return from the last 26 years has been because investors value a dollar of earnings almost twice as much in 2003 as they did in 1976. At a similar P/E ratio to 1976, the S&P would be less than 500 today (around 466 or so as I glance at the screen, again using pro forma earnings numbers.) Thus, without increased investor optimism, the compound growth would be around 6.3%-7% over the last 26 years, or only a few points over inflation during that time. The point is not the exact number but that a significant part of the growth in the stock market is due to increased P/E valuations. In fact, if you back out dividends, the growth is almost entirely due to inflation and increased P/E valuations. The stock market has been a good investment since 1976 primarily because of these two factors. The question that investors must ask today is, "To what extent will these two factors, plus dividends, contribute to the return from the stock market over the next 5-10-20 years?" 
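To see how much of that history was multiple expansion, take the figures in the paragraph above at face value (a P/E of roughly 12 in 1976 and roughly 22 in 2002, 26 years apart). The annualized contribution from the rising multiple alone is

$$ \left(\frac{22}{12}\right)^{1/26} - 1 \approx 2.4\%\ \text{per year}, $$

and the "around 466" figure is the same arithmetic run the other way: it is the then-current index level scaled by 12/22, which implies an S&P level of roughly 466 x 22/12, or about 854, at the time of writing. These are rough checks on the author's pro forma numbers, not a precise decomposition.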
How to Lose 20% in Five Years - Guaranteed Before I attempt to answer that, let's look at the advice the investment managers were suggesting to retirees at the seminar my reader attended. Assume that you can make 5% (today) on your investment portfolio. You can take that 5% and live on it in retirement (plus social security and any pensions) and not touch your original principal. It doesn't make any difference in this example what the amount is. I simply assume you live on a budget of what you actually get. If that 5% is what you need for the next five years, then according to the analysis given at the seminar, you will need to put about 22% or so of your savings in bonds, which will be consumed over the next five years (remember the 22% will grow because of interest). The other 78% or so will be put in stocks. Since the Ibbotson studies show stocks grow around an average of 9.4% per year, your total portfolio will have grown to122% of where it is today. For this advice, we want you to pay us 2% a year. (And now back to the collaboration with Ed.) Slip-Sliding Away "The more you near your destination, the more you slip-slide away." - Paul Simon The long run profits we read about in the brochures don't seem to match what we see in our accounts. The closer we get to retirement and the need for those funds, the more those profits seem to slip away Before we start looking at cycles, let's explore the impact of dividends, transaction costs, slippage, taxes and other factors on total return. Ever notice how quickly we're reminded, while looking at the change in the index or basket of stocks, not to forget the added return from the dividends? As we seek to translate the language of benchmark returns into changes in our account balances, let's also not forget a few other components. While annual dividends have averaged 4.4% over the course of the past century, transaction costs and taxes have imposed their share of impact on the portfolio as well. For individual investors, taxes can affect the realized return. To provide a reasonable assessment of the impact of taxes, we considered several factors and included a number of simplifying assumptions. The objective was to estimate the effect on a typical taxpayer. In general, the average tax rate was approximately 20% across the entire period starting in 1913, when the income tax was introduced. For each year, we assumed that 80% of gains were long-term capital gains and 20% were short-term capital gains. Only 90% of gains each year were realized and the long-term capital gain portion was lagged by one-year to simulate the effect of longer holding periods. For a measure of conservatism, 10% of gains are never taxed. Most of the capital losses are used to offset gains in future years. Dividends were assumed to be taxed at the short-term rate. Transaction costs include: (a) commissions, (b) asset management fees, (c) bid/ask spreads, (d) execution slippage, (e) and lots of numerous extra costs. Commissions are well recognized by most investors as a cost of buying and selling stocks or mutual funds. The commission cost can be-and certainly was historically-greater for individual investors than for larger institutional investors (i.e. pension plans, mutual funds, etc.) Even with today's low rates, for active and/or small traders, they can be very significant. Asset management fees are charges levied by an advisor, the investment fund, trustees, the pension fund managers and/or other constituents in the investment process. 
These can run anywhere from 0.5% to 3%. Mutual fund fees of 2% or more are quite common. (As asset managers, we are not against fees, as that is how we make our living. But we do think investors should get a "bang" for their commission "buck.") The third cost, bid/ask spreads, represents the difference between the price that one pays for a stock and the price at which the stock could be sold at the same time. Index returns are based upon the last price traded for each stock, some on "the bid" (the price at which one can sell) and some on "the ask" (the price at which one can buy). We can refer to the blend of bid and ask prices as the "mid" price-averaging near the middle. However, investors bear the cost within their account or mutual fund of slightly higher prices for purchases and slightly lower prices for sales. The cost of the spread is often far more than the commissions. There are studies beginning to surface which shows the cost of spreads is actually increasing after the conversion to decimalization of stock prices last year, which was NOT what was expected. The fourth element listed above, slippage, affects larger buyers of stock more so than individual investors. While large accounts may pay less in commissions, some of the advantages of larger scale asset management require the often under-recognized costs of scale. Slippage is the impact of buying or selling hundreds of thousands of shares-the average cost of completing a large purchase in comparison to the market price for a few shares of a stock. When large buyers of a stock, a mutual fund for example, decide to buy or sell a position, the size of the order can push the market price in one direction or another. Slippage is the difference in the average price when buying or selling 100 shares compared to buying or selling 100,000 shares. If you are a large manager trying to beat an index, or a hedge fund getting a piece of the profits, we can guarantee you that slippage is the cause of a great deal of frustration, if not acrimony, on the trading floor. Finally, you have lots of hidden costs. Account opening fees and loads can add up. Funds of all types have auditing and accounting fees, which are passed directly to the fund and thus to investors. Mutual funds have "independent boards" whose members must be paid. Most off-shore hedge funds are required to have one, if not two, independent directors, who get small fees. What about custodial or administrative fees from your fund? Is there a consultant in the mix? Does your fund pay higher commissions (so-called soft dollar arrangements) to get access to research or free rent and technology? (This happens a LOT more than you think. It is a way to pass operating expenses to the fund without showing the actual expense. Investors would object to a line item that says "rent" but never see the extra penny on the commission or the spread.) Attorney fees are often fund related costs. If you are a typical individual investor, you have your own accounting costs, investment newsletters, books, planners, consultants and a host of investment related expenses. That is not to say that each of them are not necessary to do your job as manager of your portfolio, but they do cost money. While these are not always directly deducted from your investment accounts, they are an expense never-the-less. Our analysis in the "Tax-Payer Real" chart assumed that the total cost of commissions, asset management fees, bid/ask spreads, and execution slippage equaled 2% per year. 
Although there are a few (somewhat limited) examples of those investors that can demonstrate a lower overall transaction cost on their stock investments, most professional investors have indicated that we are being too conservative-the effect of which would overstate the returns in the matrix. We believe that a rate of 2% is reasonable, with a bias toward being conservative. With current dividend yields averaging considerably less than 2%, the net effect of transaction costs may well exceed the benefit of dividends. Paris, Geneva and Points Beyond I have to go to Geneva for business, so I will leave a few days early to go to Paris to visit with my friend Bill Bonner in his countryside chateau, otherwise known as a money pit. His new book, The Day of Financial Reckoning, will soon be out and I look forward to a few good vintages and conversation. I will be available both in Paris and Geneva for a limited number of meetings. My host, Constantin Felder of Safra Banque in Geneva, may be arranging a more formal gathering as well. I will be available in Paris on Monday, July 21 and will be in Geneva for the next two days. I introduced Constantin to Texas barbecue when he was here last month and he promises to reciprocate with a taste of the local cuisine. Then I travel to Boston for a day to meet with a hedge fund (Monday, July 28) and on to Halifax for a two week working vacation. I have promised my bride some relief from the Texas heat, so we will see if I can actually work outside the office. I will also be in San Francisco August 13-17 at the 2003 Agora Wealth Symposium. This should be a very interesting conference for active investors. You can learn more by going to. Again, I will set aside time to meet with investors. You can email me (if you have not already done so) if you are interested in meeting. I will be speaking at the New Orleans investment conference October 18-21. More details later. Many thanks to Art Cashin of CNBC fame for allowing me to follow him around on the New York Stock Exchange floor. As well as head trader for UBS PaineWebber, he is also a NYSE governor. This means he is part sheriff, part justice of the peace on the NYSE. I was amazed at the real authority these elected members have to police the place. To watch him gave me a great deal more confidence in the fairness of the exchanges. And yes, (assuming it is still culturally permissible to compliment a lady) Sue Herera is as pretty and gracious in person as she appears to be on TV. Tonight is a guy's night out, so I and the boys (9 and 14) will be looking for meat and fun. Time to run, and remember the word's of John Ruskin, "The highest reward for a man's toil is not what he gets for it, but what he becomes." Your hoping to get the book finished in three weeks analyst, TweetTweet
http://www.safehaven.com/article/733/the-investment-matrix-revelations
CC-MAIN-2017-13
en
refinedweb
Andrew – Thanks for the information. Is there (or will there be) any way of determining the installed version of OneNote? It would be nice for applications to automatically change to the correct schema, based upon the version ID. Perhaps it could be a read-only integer property of the CSimpleImporter COM object. I don't think there's any plan to offer such functionality, mainly because the SP1 Preview isn't an actual version of OneNote so much as it is an early cut of the final SP1 code. I think that because the Preview is very much still a 'beta' work in progress, the expectation is that anyone currently using it will want to update to the release-quality final version of SP1 once it ships. So in the long run, there won't be a continuing need to program to the SP1 Preview. However, if you've already written code using the SimpleImport interface, you'll have to change the namespace in your code, or it won't work once the user has upgraded to SP1.
https://blogs.msdn.microsoft.com/andrew_may/2004/05/12/onenote-namespace-change-for-sp1/
CC-MAIN-2017-13
en
refinedweb
Code from CCI book:

public class ListNode {
    int val;
    ListNode next = null;

    ListNode(int x) {
        this.val = x;
    }

    public void appendToTail(int d) {
        ListNode end = new ListNode(d);
        ListNode n = this;
        while (n.next != null) {
            n = n.next;
        }
        n.next = end;
    }
}

Inside your while loop you say:

n = n.next

When that line of code executes you change the value of n from "this" to n.next. "this" points to the current link object in the list and next points to a completely different object. As you walk your way through the linked list the value of n keeps updating as you pass through each link in the linked list. When you get to the end of the linked list there is no next, i.e. n.next is null. So you exit the while loop and add a new link to the end of the linked list.
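A quick way to watch the traversal happen is to drive the class from a small main method. This harness is not from the book; it just builds a short list with appendToTail and then walks it the same way the while loop does (the class name ListNodeDemo is arbitrary):

public class ListNodeDemo {
    public static void main(String[] args) {
        ListNode head = new ListNode(1);   // list starts as a single node
        head.appendToTail(2);              // each call walks to the current tail first
        head.appendToTail(3);

        // Walk the list by following next references until they run out.
        ListNode n = head;
        while (n != null) {
            System.out.print(n.val + " -> ");
            n = n.next;
        }
        System.out.println("null");        // prints: 1 -> 2 -> 3 -> null
    }
}

The demo assumes it sits in the same package as ListNode, since val and next are package-private.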
https://codedump.io/share/dgFHvEh6fpc8/1/changing-references-to-objects-java
CC-MAIN-2017-13
en
refinedweb
Carriage return (ALT+ENTER) not working (Excel 97 SR1)
On a Win95 machine the ALT+ENTER is not working. What would affect that to stop it from working correctly?
Re: Carriage return (ALT+ENTER) not working (Excel 97 SR1)
Can you explain what you are trying to do when you press Alt+Enter and what happens when you press it? Is it not working in any workbook, or just one particular workbook? Are there any macros in the workbook (a macro can disable the keystroke combination)?
Legare Coleman
http://windowssecrets.com/forums/showthread.php/23771-Carriage-return-%28ALT-ENTER%29-not-working-%28Excel-97-SR1%29
CC-MAIN-2017-13
en
refinedweb
Thanks,

#include <iostream>
using namespace std;

int twelveDays(int firstDay, int lastDay, int endOf) {
    if(firstDay <= lastDay) {
        return twelveDays(++firstDay, lastDay, endOf + 1);
    } else {
        return twelveDays(++firstDay, lastDay, endOf);
    }
    return endOf;
}

int countDays(int firstDay, int lastDay) {
    return twelveDays(firstDay, lastDay, 0);
};

int main() {
    int Days[] = {1,2,3,4,5,6,7,8,9,10,11,12};
    const char* Verses[] = {"A Partridge in a Pear Tree","Two Turtle Doves",
        "Three French Hens","Four Calling Birds",
        "Five Golden Rings","Six Geese a Laying",
        "Seven Swans a Swimming","Eight Maids a Milking",
        "Nine Ladies Dancing","Ten Lords a Leaping",
        "Eleven Pipers Piping","12 Drummers Drumming"};
    cout << "On the " << countDays << " of Christmas my true love gave to me " << twelveDays << endl;
    cout << countDays << endl;
    system("pause");
    return 0;
}

I have a slight learning disability so the more simply you can put it, the better. I'm not getting any error after the build and running it, but it's only producing one line and it's giving me random numbers in the banks of mem it's pulling from (is my guess). Thanks.
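Two things stand out in the code as posted: countDays and twelveDays appear in the cout statements without argument lists, so they are never actually called, and twelveDays recurses in both branches of its if/else, so the return endOf line can never be reached. Purely as an illustration of the recursive shape being attempted (sketched in Java rather than C++, so it is not a drop-in fix, and all names below are just for the sketch), one workable structure is: recurse forward through the days, and for each day print its verses from the current day back down to day one.

public class TwelveDays {
    static final String[] VERSES = {
        "A Partridge in a Pear Tree", "Two Turtle Doves", "Three French Hens",
        "Four Calling Birds", "Five Golden Rings", "Six Geese a Laying",
        "Seven Swans a Swimming", "Eight Maids a Milking", "Nine Ladies Dancing",
        "Ten Lords a Leaping", "Eleven Pipers Piping", "Twelve Drummers Drumming"
    };

    // Print one day's stanza: the newest gift first, the partridge last.
    static void printDay(int day) {
        System.out.println("On day " + day + " of Christmas my true love gave to me");
        for (int v = day; v >= 1; v--) {
            System.out.println("  " + VERSES[v - 1]);
        }
    }

    // Recurse forward through the days; day > lastDay is the base case that stops it.
    static void sing(int day, int lastDay) {
        if (day > lastDay) {
            return;
        }
        printDay(day);
        sing(day + 1, lastDay);
    }

    public static void main(String[] args) {
        sing(1, 12);
    }
}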
http://www.dreamincode.net/forums/topic/348247-arrays-recursion-and-the-12-days-of-christmas/
CC-MAIN-2017-13
en
refinedweb
This tutorial is for those who prefer the pleasant company of a text editor and a trusty command prompt. Even if you routinely use an IDE, you will find that it’s often quicker and easier to compile, test and install your applications from the command line. We’ll be using Maven () to manage the large number of jars that a GeoTools projects depend on. Don’t worry if you’re not familiar with Maven because we will explain everything step by step. The example application is the same one used for the NetBeans and Eclipse Quickstart tutorials: a simple program to load and display a shapefile. We would like thank members of the GeoTools User mailing list for their feedback while we were preparing the course material, with special thanks to Eva Shon for testing/reviewing early drafts. If you have any questions or comments about this tutorial, please post them to the user list. We are going to be making use of Java so if you don’t have a Java Development Kit (JDK) installed now is the time to do so. Download the latest Java 8 JDK: At the time of writing the latest Java 8 release was: GeoTools is not yet tested with Java 9, we are limited by build infrastructure and volunteers. Click through the installer you will need to set an acceptance a license agreement and so forth. By default this will install to: C:\Program Files (x86)\Javajdk1.8.0_66 Note In this tutorial we refer to file and directory paths as used by Windows. If you are fortunate enough to be using another operating system such as Linux or OSX all of the commands and source code below will work, just modify the paths to suit. Maven is a widely-used build tool which works by describing the contents of a project. This is a different approach than that used by the Make or Ant tools which list the steps required to build. It takes a while to get used to Maven and, for some, it remains a love-hate relationship, but it definitely makes working with GeoTools much easier: Download Maven from In this tutorial we refer to Maven version 3.2.3, we have had relatively little trouble with Maven version 3. Unzip the file apache-maven-3.2.3-bin.zip You need to have a couple of environmental variables set for maven to work. Navigate to Control Panel ‣ System ‣ Advanced. Change to the Advanced tab and click Environmental Variables button. Add the following system variables: And add the following to your PATH: Open up a commands prompt Accessories ‣ Command Prompt Type the following command to confirm you are set up correctly: C:java> mvn --version This should produce something similar to the following output: C:\java\apache-maven-3.2.3 Java version: 1.8.0_66, vendor: Oracle Corporation Java home: C:\Program Files (x86)\Java\jdk1.8.0_66\jre Default locale: en_US, platform encoding: Cp1252 OS name: "windows 7", version: "6.1", arch: "x86", family: "windows" The above command creates the following files and directories: tutorial tutorial\pom.xml tutorial\src tutorial\src\main tutorial\src\main\java tutorial\src\main\java\org tutorial\src\main\java\org\geotools tutorial\src\main\java\org\geotools\App.java tutorial\src\test tutorial\src\test\java tutorial\src\test\java\org tutorial\src\test\java\org\geotools tutorial\src\test\java\org\geotools\AppTest.java App.java and AppTest.java are just placeholder files not used in this tutorial. During the build process your local maven repository will be used to store both downloaded jars, and those you build locally. Your local Maven repository is located in your home folder. 
Open the pom.xml file in your favourite text editor. If your editor has an XML syntax mode switch into that now because it will make it a lot easier to find errors such as mis-matched brackets. Some editors, such as vim, will do this automatically on loading the file. We are going to start by defining the version number of GeoTools we wish to use. This workbook was written for 17-SNAPSHOT although you may wish to try a different version. For production a stable release is recommended: <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <geotools.version>15.1</geotools.version> </properties> To make use of a nightly build set the geotools.version property to 17-SNAPSHOT . <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- use the latest snapshot --> <geotools.version>17-SNAPSHOT</geotools.version> </properties> We specify the following dependencies (GeoTools modules which your application will need): <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11<> We tell maven which repositories to download jars from: <repositories> <repository> <id>maven2-repository.dev.java.net</id> <name>Java.net repository</name> <url></url> </repository> <repository> <id>osgeo</id> <name>Open Source Geospatial Foundation Repository</name> <url></url> </repository> </repositories> If you are using a nightly build (such as 17-SNAPSHOT) and add a reference to the snapshot repository. <repositories> <repository> <id>maven2-repository.dev.java.net</id> <name>Java.net repository</name> <url></url> </repository> <repository> <id>osgeo</id> <name>Open Source Geospatial Foundation Repository</name> <url></url> </repository> <repository> <snapshots> <enabled>true</enabled> </snapshots> <id>boundless</id> <name>Boundless Maven Repository</name> <url></url> </repository> </repositories> If you’d like to use Java 8 language level features (eg. lambdas), you need to tell Maven to use the 1.8 source level <build> <plugins> <plugin> <inherited>true</inherited> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.8</source> <target>1.8</target> </configuration> </plugin> </plugins> </build> </project> Return to the command line and get maven to download the required jars for your project with this command: C:\java\example> mvn install If maven has trouble downloading any jar, you can always try again. A national mirror is often faster than the default maven central. Now we are ready to create the application. 
Create the org.geotools.tutorial.quickstart package by navigating to the directory tutorial and create the directory src\main\java\org\geotools\tutorial\quickstart In the new sub-directory, create a new file Quickstart.java using your text editor.

import org.geotools.map.FeatureLayer;
import org.geotools.map.Layer;
import org.geotools.map.MapContent;
import org.geotools.styling.SLD;
import org.geotools.styling.Style;

        // Create a map content and add our shapefile to it
        MapContent map = new MapContent();
        map.setTitle("Quickstart");
        Style style = SLD.createSimpleStyle(featureSource.getSchema());
        Layer layer = new FeatureLayer(featureSource, style);
        map.addLayer(layer);

        // Now display the map
        JMapFrame.showMap(map);
    }
}

Go back to the top project directory (the one that contains your pom.xml file) and build the application with the command: mvn clean install If you need some shapefiles to work with you will find a selection of data at the project which is supported by the North American Cartographic Information Society. Head to the link below and download some cultural vectors. You can use the 'Download all 50m cultural themes' at top. Unzip the above data into a location you can find easily such as the desktop. You can run the application using Maven on the command line: mvn exec:java -Dexec.mainClass=org.geotools.tutorial.quickstart.Quickstart The application will connect to your shapefile, produce a map context, and display the shapefile. A couple of things to note about the code example: Try out the different sample data sets. You can zoom in, zoom out and show the full extent and use the info tool to examine individual countries in the sample countries.shp file. Download the largest shapefile you can find and see how quickly it can be rendered. You should find that the very first time it will take a while as a spatial index is generated. After that rendering will become much faster. You will also need to add this import statement: import org.geotools.data.CachingFeatureSource; Hint When working in a text editor instead of an IDE use the GeoTools javadocs to work out what import statements are required in your source. The javadocs also list the GeoTools module in which each class is found. Note When building you may see a message that CachingFeatureSource is deprecated. It's ok to ignore it, it's just a warning. The class is still under test but usable.

File file = JFileDataStoreChooser.showOpenFile("shp", null);
Map<String,Object> params = new HashMap<>();

So what jars did maven actually use for the Quickstart application? Try the following on the command line: mvn dependency:tree We will be making use of some of the project in greater depth in the remaining tutorials.
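The listing above picks up partway through Quickstart.java: the class declaration and the code that actually opens the shapefile are missing from this copy. For orientation, a minimal sketch of that earlier portion, using the standard GeoTools file-chooser and datastore-finder utilities (treat it as an approximation of the tutorial's listing, not verbatim text), looks like this:

import java.io.File;

import org.geotools.data.FileDataStore;
import org.geotools.data.FileDataStoreFinder;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.swing.data.JFileDataStoreChooser;

public class Quickstart {

    public static void main(String[] args) throws Exception {
        // Display a data store file chooser dialog for shapefiles
        File file = JFileDataStoreChooser.showOpenFile("shp", null);
        if (file == null) {
            return;
        }

        // Open the shapefile and obtain the feature source used by the code above
        FileDataStore store = FileDataStoreFinder.getDataStore(file);
        SimpleFeatureSource featureSource = store.getFeatureSource();

        // ... the MapContent / FeatureLayer / JMapFrame code shown above goes here
    }
}

The package line (package org.geotools.tutorial.quickstart;), the imports shown in the listing above, and an import for org.geotools.swing.JMapFrame complete the file.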
http://docs.geotools.org/latest/userguide/tutorial/quickstart/maven.html
CC-MAIN-2017-13
en
refinedweb
#include <gcugtk/gcuperiodic.h> The GcuPeriodic widget displays a periodic table of the elements, with each element represented as a toggle button. A test program is available in the tests directory of the Gnome Chemistry Utils source archive (source in testgcuperiodic.c). The widget exposes one signal and two properties, and the functions related to the GcuPeriodic widget are described in the gcuperiodic.h page.
http://gchemutils.nongnu.org/reference/structGcuPeriodic.html
CC-MAIN-2017-13
en
refinedweb
Making an Adapter Now it is a little awkward to remember to use the Items collection of the list box for some operations and not for others. For this reason, we might prefer to have a class that hides some of these complexities and adapts the interface to the simpler one we wish we had, rather like the list box interface in VB6. We'll create a simpler interface in a ListAdapter class which then operates on an instance of the ListBox class: public class ListAdapter { private ListBox listbox; // operates on this one public ListAdapter(ListBox lb) { listbox = lb; } //----- public void Add(string s) { listbox.Items.Add(s); } public int SelectedIndex() { return listbox.SelectedIndex; } public void Clear() { listbox.Items.Clear(); } public void clearSelection() { int i = SelectedIndex(); if (i >= 0) { listbox.SelectedIndex = -1; } } } Then we can make our program a little simpler: private void btClone_Click(object sender, EventArgs e) { int i = lskids.SelectedIndex(); if (i >= 0) { Swimmer sw = swdata.getSwimmer(i); lsnewKids.Add(sw.getName() + "\t" + sw.getTime()); lskids.clearSelection(); } } Now, let's recognize that if we are always adding swimmers and times spaced apart like this, maybe there should be a method in our ListAdapter that handles the Swimmer object directly: public void Add(Swimmer sw) { listbox.Items.Add(sw.getName() + "\t" + sw.getTime()); } This simplifies the click event handler even more: lsnewKids.Add(sw); A short sketch of how the adapters are wired to the form's list boxes follows below. Shashi Ray
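The article does not show where lskids and lsnewKids come from; presumably they are fields of the form, built once around the two ListBox controls. A sketch of that wiring, with hypothetical control and form names that are not taken from the article:

// Wrap the raw ListBox controls once, then talk only to the adapters
private ListAdapter lskids;
private ListAdapter lsnewKids;

public SwimmersForm()
{
    InitializeComponent();
    lskids = new ListAdapter(lbKids);        // lbKids, lbNewKids: placeholder control names
    lsnewKids = new ListAdapter(lbNewKids);
}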
http://www.dotnetspark.com/kb/368-making-adapter.aspx
CC-MAIN-2017-13
en
refinedweb
How to: Handle Deployment Conflicts You can provide your own code to handle deployment conflicts for a SharePoint project item. For example, you might determine whether any files in the current project item already exist in the deployment location, and then delete the deployed files before the current project item is deployed. For more information about deployment conflicts, see Extending SharePoint Packaging and Deployment. To handle a deployment conflict Create a project item extension, a project extension, or a definition of a new project item type. For more information, see the following topics: In the extension, handle the DeploymentStepStarted event of an ISharePointProjectItemType object (in a project item extension or project extension) or an ISharePointProjectItemTypeDefinition object (in a definition of a new project item type). In the event handler, determine whether there is a conflict between the project item that is being deployed and the deployed solution on the SharePoint site, based on criteria that apply to your scenario. You can use the ProjectItem property of the event arguments parameter to analyze the project item that is being deployed, and you can analyze the files at the deployment location by calling a SharePoint command that you define for this purpose. For many types of conflicts, you might first want to determine which deployment step is executing. You can do this by using the DeploymentStepInfo property of the event arguments parameter. Although it typically makes sense to detect conflicts during the built-in AddSolution deployment step, you can check for conflicts during any deployment step. If a conflict exists, use the M:Microsoft.VisualStudio.SharePoint.Deployment.IDeploymentConflictCollection.Add(System.String,System.Func`2,System.Boolean) method of the Conflicts property of the event arguments to create a new IDeploymentConflict object. This object represents the deployment conflict. In your call to the M:Microsoft.VisualStudio.SharePoint.Deployment.IDeploymentConflictCollection.Add(System.String,System.Func`2,System.Boolean) method, also specify the method that is called to resolve the conflict. The following code example demonstrates the basic process for handling a deployment conflict in a project item extension for list definition project items. To handle a deployment conflict for a different type of project item, pass a different string to the SharePointProjectItemTypeAttribute. For more information, see Extending SharePoint Project Items. For simplicity, the DeploymentStepStarted event handler in this example assumes that a deployment conflict exists (that is, it always adds a new IDeploymentConflict object), and the Resolve method simply returns true to indicate that the conflict was resolved. In a real scenario, your DeploymentStepStarted event handler would first determine if a conflict exists between a file in the current project item and a file at the deployment location, and then add an IDeploymentConflict object only if a conflict exists. For example, you might use the e.ProjectItem.Files property in the event handler to analyze the files in the project item, and you might call a SharePoint command to analyze the files at the deployment location. Similarly, in a real scenario the Resolve method might call a SharePoint command to resolve the conflict on the SharePoint site. For more information about creating SharePoint commands, see How to: Create a SharePoint Command. 
using Microsoft.VisualStudio.SharePoint; using Microsoft.VisualStudio.SharePoint.Deployment; using System.ComponentModel.Composition; namespace Contoso.DeploymentConflictExtension { [Export(typeof(ISharePointProjectItemTypeExtension))] [SharePointProjectItemType("Microsoft.VisualStudio.SharePoint.ListDefinition")] class DeploymentConflictExtension : ISharePointProjectItemTypeExtension { public void Initialize(ISharePointProjectItemType projectItemType) { projectItemType.DeploymentStepStarted += DeploymentStepStarted; } private void DeploymentStepStarted(object sender, DeploymentStepStartedEventArgs e) { if (e.DeploymentStepInfo.Id == DeploymentStepIds.AddSolution) { e.Conflicts.Add("This is an example conflict", this.Resolve, true); e.ProjectItem.Project.ProjectService.Logger.WriteLine("Added new example conflict.", LogCategory.Status); } } private bool Resolve(ISharePointProjectItem projectItem) { projectItem.Project.ProjectService.Logger.WriteLine("Returning 'true' from Resolve method for example conflict.", LogCategory.Status); return true; } } } This example requires references to the following assemblies: Microsoft.VisualStudio.SharePoint System.ComponentModel.Composition To deploy the extension, create a Visual Studio extension (VSIX) package for the assembly and any other files that you want to distribute with the extension. For more information, see Deploying Extensions for the SharePoint Tools in Visual Studio. Extending SharePoint Packaging and Deployment Extending SharePoint Project Items How to: Run Code When Deployment Steps are Executed How to: Create a SharePoint Command
https://msdn.microsoft.com/en-us/library/ff655129.aspx
CC-MAIN-2017-13
en
refinedweb
But I'm pleased to report that you can copy/paste HTML to/from the clipboard on Mac OS, even though it doesn't seem to work on Linux, and someone else says it doesn't work on Windows (Java bug #4765240). Note that what I'm talking about is being able to paste styled text into a native application. So rather than "hello <b>world!</b>", you get "hello world!". If you run the oddly-named "PasteHorkTest" from bug #4765240 on Linux and try to paste from Mozilla Firefox, you'll find that there seems to be a problem with the character encoding. It looks like the clipboard has 2-byte Unicode data (I can't remember which endian), because each recognizable character is separated from the next by the square that Sun's implementations use to render characters they have no glyph for. It may be that the HTML tags would be correctly interpreted if it weren't for this encoding problem, but whatever, it's broken. If you run the same .class file on Mac OS, you can paste from Safari or Mail just fine. I couldn't find anything useful on the web about copying HTML to the clipboard from Java, perhaps because the most common case – copying from a JEditorPane– is already dealt with by Swing. I knocked up some code based on javax.swing.plaf.basic.BasicTransferablebut using the collections classes rather than primitive arrays, for clarity: import java.awt.*; import java.awt.event.*; import java.awt.datatransfer.*; import java.io.*; import java.util.*; import javax.swing.*; public class html { public html() { JButton b = new JButton("copy"); b.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { copy(); } }); JFrame f = new JFrame(); f.getContentPane().add(b); f.pack(); f.setVisible(true); } public static void main(String[] args) { new html(); } private void copy() { Transferable t = new HtmlSelection("<html><p>Hello <b>world!</b>"); Toolkit.getDefaultToolkit().getSystemClipboard().setContents(t, null); } private static class HtmlSelection implements Transferable { private static ArrayList htmlFlavors = new ArrayList(); static { try { htmlFlavors.add(new DataFlavor("text/html;class=java.lang.String")); htmlFlavors.add(new DataFlavor("text/html;class=java.io.Reader")); htmlFlavors.add(new DataFlavor("text/html;charset=unicode;class=java.io.InputStream")); } catch (ClassNotFoundException ex) { ex.printStackTrace(); } } private String html; public HtmlSelection(String html) { this.html = html; } public DataFlavor[] getTransferDataFlavors() { return (DataFlavor[]) htmlFlavors.toArray(new DataFlavor[htmlFlavors.size()]); } public boolean isDataFlavorSupported(DataFlavor flavor) { return htmlFlavors.contains(flavor); } public Object getTransferData(DataFlavor flavor) throws UnsupportedFlavorException { if (String.class.equals(flavor.getRepresentationClass())) { return html; } else if (Reader.class.equals(flavor.getRepresentationClass())) { return new StringReader(html); } else if (InputStream.class.equals(flavor.getRepresentationClass())) { return new StringBufferInputStream(html); } throw new UnsupportedFlavorException(flavor); } } } This doesn't work on Linux, if you want to paste into a native application. I assume that it doesn't work on Windows either (but I don't use Windows, so I don't know). On Mac OS, it works like a charm. Click the button, and you can paste styled text into Mail. Which is exactly what I wanted to be able to do. Annoyingly, inline CSS (or any other kind of CSS, as far as I can tell) doesn't work. And I really need CSS for what I want to do. 
It's strange, because I can copy this from Safari to Mail just fine: White Text on a Blue Background Anyway, feel free to copy & paste the HtmlSelection class into your own software! [If you don't want deprecated code, you can remove the support for InputStreamand the copying still works fine on Mac OS, which as far as I know is the only platform where this works at all. Note also that I'm not really gloating here; I'd much rather this worked on the other platforms.]
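If you would rather keep the InputStream flavor but drop the deprecated StringBufferInputStream (which truncates characters to their low byte), a ByteArrayInputStream over explicitly encoded bytes should work as a stand-in. The UTF-16 choice below is an assumption made to match the "charset=unicode" flavor string, so test it against the applications you paste into:

} else if (InputStream.class.equals(flavor.getRepresentationClass())) {
    // Encode explicitly instead of relying on StringBufferInputStream's byte truncation
    return new java.io.ByteArrayInputStream(
        html.getBytes(java.nio.charset.StandardCharsets.UTF_16));
}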
http://elliotth.blogspot.com/2005/01/copying-html-to-clipboard-from-java.html
CC-MAIN-2017-13
en
refinedweb
80 MHz PLL (dsPIC33F/PIC24H) dsPIC33F and PIC24H devices contain a programmable PLL module which can be used to provide an FOSC = 80 MHz (max.) system clock, enabling FCY = 40 MIPS (max.) operation. The 80 MHz PLL block requires a 0.8-8 MHz input signal; it uses this to generate a 100-200 MHz output signal which is then scaled to provide an 80 MHz (max.) system clock. For proper PLL operation, the Phase Frequency Detector (PFD) input frequency, FREF, and the Voltage Controlled Oscillator (VCO) output frequency, FVCO, must meet the following requirements at all times: - FREF must be in the range of 0.8-8 MHz - FVCO must be in the range of 100-200 MHz PLL register default settings on POR place certain constraints on application code for oscillator configuration. These are discussed below. PLL Configuration: PLL frequency control is achieved via dynamic SFR register modification: - CLKDIV<PLLPRE4:0> - specify the input frequency divider ratio (N1) - CLKDIV<PLLPOST1:0> - specify the output frequency divider ratio (N2) - PLLFBD<PLLDIV8:0> - specify the feedback divider ratio (M) The following equations define the relations between FIN, FREF, FVCO, and FOSC: FREF = FIN / N1, FVCO = FIN x (M / N1), FOSC = FIN x M / (N1 x N2) (1) where N1 = PLLPRE + 2, N2 = 2 x (PLLPOST + 1), M = PLLDIV + 2. Input Clock Limitation at Start-up: Default POR values of PLLPRE, PLLPOST, and PLLDIV set N1 = 2, N2 = 4, and M = 50 respectively. Given these reset values, the following relations are active at POR: FREF = FIN / 2, FVCO = 25 x FIN, FOSC = 6.25 x FIN (2) Given the preceding equations, the FIN to the PLL module must be limited to 4 MHz < FIN < 8 MHz to comply with the FVCO requirement (100 MHz < FVCO < 200 MHz), if the default values of PLLPRE, PLLPOST, and PLLDIV are used. To use a PLL when the input frequency is not within the 4-8 MHz range, you must follow this process: 1. Power-up the device with the Internal FRC Oscillator, or the POSC, without a PLL. 2. Change PLLDIV, PLLPRE, and PLLPOST bit values, based on the input frequency, to meet these PLL requirements: - FREF must be in the range of 0.8-8.0 MHz - FVCO must be in the range of 100-200 MHz 3. Switch the clock to a PLL mode in software. PLL Lock Status: Whenever the PLL input frequency, the PLL prescaler, or the PLL feedback divisor is changed, the PLL requires a finite amount of time (TLOCK) to synchronize to the new settings. TLOCK is applied when the PLL is selected as the clock source at a POR, or during a clock switching operation. The value of TLOCK is relative to the time at which the clock is available to the PLL input. For example, with the POSC, TLOCK starts after the OST delay. Refer to the specific device data sheet for information about typical TLOCK values. The LOCK bit in the Oscillator Control register (OSCCON<5>) is a read-only status bit that indicates the lock status of the PLL. The LOCK bit is cleared at a POR, and on a clock switch operation if the PLL is selected as the destination clock source. The LOCK bit remains clear when any clock source that is not using a PLL is selected. After a clock switch event in which a PLL is enabled, it is good practice to wait for the LOCK bit to be set before executing other code. Code Example: Configure dsPIC33F for 40 MIPS Operation with POSC = 8 MHz XTAL In this configuration, the POSC input frequency (FIN) complies with the default PLL divisor settings as far as the FVCO requirement is concerned (4 MHz < FIN < 8 MHz); however, it cannot meet the user requirement of 40 MIPS with those defaults (FCY = FOSC/2 = (6.25 x FIN)/2 = 25 MIPS at FIN = 8 MHz). We will need to enable a non-PLL oscillator type on POR (ex.
7.37 MHz FRC), perform the correct PLL adjustments in our application code, then initiate a clock switch to the PLL: #include <xc.h> #pragma config FNOSC = FRC // select internal FRC at POR #pragma config FCKSM = CSECMD // enable clock switching #pragma config POSCMD = XT // configure POSC for XT mode int main(void) { // Configure PLL prescaler, PLL postscaler, PLL divisor for Fosc = 8 MHz x 40 / (2 x 2) = 80 MHz PLLFBD = 38; // M = 40 CLKDIVbits.PLLPOST = 0; // N2 = 2 CLKDIVbits.PLLPRE = 0; // N1 = 2 // Initiate Clock Switch to Primary Oscillator with PLL (NOSC = 0b011) __builtin_write_OSCCONH(0x03); __builtin_write_OSCCONL(0x01); // Wait for Clock switch to occur while (OSCCONbits.COSC != 0b011); // Wait for PLL to lock while (OSCCONbits.LOCK != 1); ... } The PLL Prescaler (PLLPRE) and PLL Feedback Divisor (PLLDIV) bits should not be changed when operating in PLL mode. You must clock switch to a non-PLL mode (e.g., the internal FRC) to make the necessary changes and then clock switch back to the PLL mode.
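As a quick sanity check of the divisor arithmetic for the example above (using the relations given earlier; re-derive against the data sheet for your exact part):

/* N1 = PLLPRE + 2        = 0 + 2       = 2
 * N2 = 2 x (PLLPOST + 1) = 2 x (0 + 1) = 2
 * M  = PLLDIV + 2        = 38 + 2      = 40
 * FREF = FIN / N1            = 8 MHz / 2            = 4 MHz   (0.8-8 MHz: OK)
 * FVCO = FIN x M / N1        = 8 MHz x 40 / 2       = 160 MHz (100-200 MHz: OK)
 * FOSC = FIN x M / (N1 x N2) = 8 MHz x 40 / (2 x 2) = 80 MHz, so FCY = FOSC / 2 = 40 MIPS
 */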
http://microchipdeveloper.com/16bit:osc-pll-80mhzpll
CC-MAIN-2017-13
en
refinedweb
Wes McKinney, the creator of pandas, is kind of obsessed with performance. From micro-optimizations for element access, to embedding a fast hashtable data structure inside pandas, we benefit from all his hard work. One thing I'm not really going to touch on is storage formats. There are too many other factors that go into the decision of what format to use for me to spend much time talking exclusively about performance. Just know that pandas can talk to many formats, and the format that strikes the right balance between performance, portability, data-types, metadata handling, etc., is an ongoing topic of discussion. It's pretty common to have many similar sources (say a bunch of CSVs) that need to be combined into a single DataFrame. There are two routes to the same end: append each piece to a single DataFrame as you go, or collect the pieces in a list and concatenate them once at the end. For pandas, the second option is faster. DataFrame appends are expensive relative to a list append. Depending on the values, the data may have to be recast to a different type. And indexes are immutable, so each time you append pandas has to create an entirely new one. In the last section we downloaded a bunch of weather files, one per state, writing each to a separate CSV. One could imagine coming back later to read them in, using the following code. The idiomatic python way files = glob.glob('weather/*.csv') columns = ['station', 'date', 'tmpf', 'relh', 'sped', 'mslp', 'p01i', 'vsby', 'gust_mph', 'skyc1', 'skyc2', 'skyc3'] # init empty DataFrame, like you might for a list weather = pd.DataFrame(columns=columns) for fp in files: city = pd.read_csv(fp, names=columns) weather = weather.append(city) # append returns a new DataFrame; it is not in-place This is pretty standard code, quite similar to building up a list of tuples, say. The only nitpick is that you'd probably use a list-comprehension if you were just making a list. But we don't have special syntax for DataFrame-comprehensions (if only), so you'd fall back to the "initialize empty container, append to said container" pattern. But, there's a better, pandorable, way: files = glob.glob('weather/*.csv') weather_dfs = [pd.read_csv(fp, names=columns) for fp in files] weather = pd.concat(weather_dfs) Subjectively this is cleaner and more beautiful. There are fewer lines of code. You don't have this extraneous detail of building an empty DataFrame. And objectively the pandorable way is faster, as we'll test next. We'll define two functions for building an identical DataFrame. The first, append_df, creates an empty DataFrame and appends to it. The second, concat_df, creates many DataFrames and concatenates them at the end. We also write a short decorator that runs the functions a handful of times and records the results. import time size_per = 5000 N = 100 cols = list('abcd') def timed(n=30): ''' Running a microbenchmark. Never use this.
''' def deco(func): def wrapper(*args, **kwargs): timings = [] for i in range(n): t0 = time.time() func(*args, **kwargs) t1 = time.time() timings.append(t1 - t0) return timings return wrapper return deco @timed(60) def append_df(): ''' The pythonic (bad) way ''' df = pd.DataFrame(columns=cols) for _ in range(N): df.append(pd.DataFrame(np.random.randn(size_per, 4), columns=cols)) return df @timed(60) def concat_df(): ''' The pandorable (good) way ''' dfs = [pd.DataFrame(np.random.randn(size_per, 4), columns=cols) for _ in range(N)] return pd.concat(dfs, ignore_index=True) t_append = append_df() t_concat = concat_df() timings = (pd.DataFrame({"Append": t_append, "Concat": t_concat}) .stack() .reset_index() .rename(columns={0: 'Time (s)', 'level_1': 'Method'})) timings.head() %matplotlib inline sns.set_style('ticks') sns.set_context('talk') plt.figure(figsize=(4, 6)) sns.boxplot(x='Method', y='Time (s)', data=timings) sns.despine() plt.tight_layout() plt.savefig('../content/images/concat-append.svg', transparent=True) Avoid object dtypes where possible. The pandas type system is essentially NumPy's with a few extensions (categorical, datetime64, timedelta64). An advantage of a DataFrame over a 2-dimensional NumPy array is that the DataFrame can have columns of various types within a single table. That said, each column should have a specific dtype; you don't want to be mixing bools with ints with strings within a single column. For one thing, this is slow. It forces the column to have an object dtype (the fallback container type), which means you don't get any of the type-specific optimizations in pandas. For another, it violates the maxims of tidy data. When should you have object columns? There are a few places where the NumPy / pandas type system isn't as rich as you might like. There's no integer NA, so if you have any missing values, represented by NaN, your otherwise integer column will be floats. There's also no date dtype (distinct from datetime). Consider the needs of your application: can you treat an integer 1 as 1.0? Can you treat date(2016, 1, 1) as datetime(2016, 1, 1, 0, 0)? In my experience, this is rarely a problem other than when writing to something with a stricter schema like a database. But at that point it's fine to cast to one of the less performant types, since you're just not doing any operations any more. The last case of object dtype data is text data. Pandas doesn't have any fixed-width string dtypes, so you're stuck with python objects. There is an important exception here, and that's low-cardinality text data, which is great for Categoricals (see below). We know that "Python is slow" (scare quotes since that statement is too broad to be meaningful). There are various steps that can be taken to improve your code's performance, from relatively simple changes to rewriting your code in a lower-level language or trying to parallelize it. And while you might have many options, there's typically an order you would proceed in. First (and I know it's cliche to say so, but still) benchmark your code. Make sure you actually need to spend time optimizing it. There are many options for benchmarking and visualizing where things are slow. Second, consider your algorithm. Make sure you aren't doing more work than you need to. A common one I see is doing a full sort on an array, just to select the N largest or smallest items. Pandas has methods for that.
df = pd.read_csv("878167309_T_ONTIME.csv") delays = df['DEP_DELAY'] # Select the 5 largest delays delays.nlargest(5).sort_values() 62914 1461.0 455195 1482.0 215520 1496.0 454520 1500.0 271107 1560.0 Name: DEP_DELAY, dtype: float64 delays.nsmallest(5).sort_values() 307517 -112.0 39907 -85.0 44336 -46.0 78042 -44.0 27749 -42.0 Name: DEP_DELAY, dtype: float64 We follow up the nlargest or nsmallest with a sort (the result of nlargest/smallest is unordered), but it's much easier to sort 5 items that 500,000. The timings bear this out: %timeit delays.sort_values().tail(5) 10 loops, best of 3: 63.3 ms per loop %timeit delays.nlargest(5).sort_values() 100 loops, best of 3: 12.3 ms per loop Assuming you're at a spot that needs optimizing, and you've got the correct algorithm, and there isn't a readily available optimized version of what you need in pandas/numpy/scipy/scikit-learn/statsmodels/..., then what? The first place to turn is probably a vectorized NumPy implmentation. Vectorization here means operating on arrays, rather than scalars. This is generally much less work than rewriting it with something like Cython, and you can get pretty good results just by making effective use of NumPy and pandas. Not all operations are amenable to vectorization, but many are. Let's work through an example calculating the Great-circle distance between airports. Grab the table of airport latitudes and longitudes from the BTS website and extract it to a CSV. coord = (pd.read_csv("227597776_T_MASTER_CORD.csv", index_col=['AIRPORT']) .query("AIRPORT_IS_LATEST == 1")[['LATITUDE', 'LONGITUDE']] .dropna() .sample(n=500, random_state=42) .sort_index()) coord.head() For whatever reason, suppose we're interested in all the pairwise distances (I've limited it to just a sample of 500 airports to make this managable. In the real world you probably don't need all the pairwise distances, and --since you know to pick the right algorithm before optimizing-- would be better off with a tree). MultiIndexes have an alternative from_product constructor for getting the cartesian product of the arrays you pass in. We'll pass in the coords.index twice and do some index manipulation to get a DataFrame with all the pairwise combinations of latitudes and longitudes. This will be a bit wasteful since the distance from airport A to B is the same as B to A, but we'll ignore that for now. idx = pd.MultiIndex.from_product([coord.index, coord.index], names=['origin', 'dest']) pairs = pd.concat([coord.add_suffix('_1').reindex(idx, level='origin'), coord.add_suffix('_2').reindex(idx, level='dest')], axis=1) pairs.head() Breaking that down a bit: The add_suffix (and add_prefix) is a handy method for quickly renaming the columns. coord.add_suffix('_1').head() Alternatively you could use the more general .rename like coord.rename(columns=lambda x: x + '_1'). Next, we have the reindex. Like I mentioned last time, indexes are cruical to pandas. .reindex is all about aligning a Series or DataFrame to a given index. In this case we use .reindex to align our original DataFrame to the new MultiIndex of combinations. By default, the output will have the original value if that index label was already present, and NaN otherwise. If we just called coord.reindex(idx), with no additional arguments, we'd get a DataFrame of all NaNs. coord.reindex(idx).head() That's because there weren't any values of idx that were in coord.index, which makes sense since coord.index is just a regular one-level Index, while idx is a MultiIndex. 
We use the level keyword to handle the transition from the original single-level Index, to the two-leveled idx. level: int or name Broadcast across a level, matching Index values on the passed MultiIndex level coord.reindex(idx, level='origin').head() If you ever need to do an operation that mixes regular single-level indexes with Multilevel Indexes, look for a level keyword argument. For example, all the math operations have them. try: coord.mul(coord.reindex(idx, level='origin')) except ValueError: print('ValueError: confused pandas') ValueError: confused pandas coord.mul(coord.reindex(idx, level='origin'), level='dest').head() Tangent, I got some... pushback is too strong a word, let's say skepticism on my last piece about the value of indexes. Here's an alternative version for the skeptics from itertools import product, chain coord2 = coord.reset_index() x = product(coord2.add_suffix('_1').itertuples(index=False), coord2.add_suffix('_2').itertuples(index=False)) y = [list(chain.from_iterable(z)) for z in x] df2 = (pd.DataFrame(y, columns=['origin', 'LATITUDE_1', 'LONGITUDE_1', 'dest', 'LATITUDE_1', 'LONGITUDE_2']) .set_index(['origin', 'dest'])) df2.head() It's also readable (it's Python after all), though a bit slower. With that diversion out of the way, let's turn back to our great-circle distance calculation. Our first implementation is pure python. The algorithm itself isn't too important, all that matters is that we're doing math operations on scalars. import math def gcd_py(lat1, lng1, lat2, lng2): ''' Calculate great circle distance between two points. Parameters ---------- lat1, lng1, lat2, lng2: float Returns ------- distance: distance from ``(lat1, lng1)`` to ``(lat2, lng2)`` in kilometers. ''' # python2 users will have to use ascii identifiers (or upgrade) degrees_to_radians = math.pi / 180.0 ϕ1 = (90 - lat1) * degrees_to_radians ϕ2 = (90 - lat2) * degrees_to_radians θ1 = lng1 * degrees_to_radians θ2 = lng2 * degrees_to_radians cos = (math.sin(ϕ1) * math.sin(ϕ2) * math.cos(θ1 - θ2) + math.cos(ϕ1) * math.cos(ϕ2)) # round to avoid precision issues on identical points causing ValueErrors cos = round(cos, 8) arc = math.acos(cos) return arc * 6373 # radius of earth, in kilometers The second implementation uses NumPy. Note that aside from numpy having a builtin deg2rad convenience function (which is probably a bit slower than multiplying by a constant $\pi/180$ ), basically all we've done is swap the math prefix for np. Thanks to NumPy's broadcasting, we can write code that works on scalars or arrays of conformable shape. def gcd_vec(lat1, lng1, lat2, lng2): ''' Calculate great circle distance. Parameters ---------- lat1, lng1, lat2, lng2: float or array of float Returns ------- distance: distance from ``(lat1, lng1)`` to ``(lat2, lng2)`` in kilometers. ''' # python2 users will have to use ascii identifiers ϕ1 = np.deg2rad(90 - lat1) ϕ2 = np.deg2rad(90 - lat2) θ1 = np.deg2rad(lng1) θ2 = np.deg2rad(lng2) cos = (np.sin(ϕ1) * np.sin(ϕ2) * np.cos(θ1 - θ2) + np.cos(ϕ1) * np.cos(ϕ2)) arc = np.arccos(cos) return arc * 6373 To use the python version on our DataFrame, we can either iterate... %%time pd.Series([gcd_py(*x) for x in pairs.itertuples(index=False)], index=pairs.index) CPU times: user 955 ms, sys: 13.6 ms, total: 968 ms Wall time: 971 ms origin dest A03 A03 0.000000 A12 375.581448 A21 989.197819 A27 820.626078 A43 121.894542 ... ZXX ZMT 1262.373758 ZNE 14222.583846 ZNZ 15114.635597 ZXK 1346.351439 ZXX 0.000000 dtype: float64 Or use DataFrame.apply. 
%%time r = pairs.apply(lambda x: gcd_py(x['LATITUDE_1'], x['LONGITUDE_1'], x['LATITUDE_2'], x['LONGITUDE_2']), axis=1); CPU times: user 16.1 s, sys: 63.8 ms, total: 16.2 s Wall time: 16.2 s But as you can see, you don't want to use apply, especially with axis=1 (calling the function on each row). It's doing a lot more work handling dtypes in the background, and trying to infer the correct output shape that are pure overhead in this case. On top of that, it has to essentially use a for loop internally. You rarely want to use DataFrame.apply and almost never should use it with axis=1. Better to write functions that take arrays, and pass those in directly. Like we did with the vectorized version %%time r = gcd_vec(pairs['LATITUDE_1'], pairs['LONGITUDE_1'], pairs['LATITUDE_2'], pairs['LONGITUDE_2']) CPU times: user 35.2 ms, sys: 7.2 ms, total: 42.5 ms Wall time: 32.7 ms r.head() origin dest A03 A03 0.000000 A12 375.581350 A21 989.197915 A27 820.626105 A43 121.892994 dtype: float64 So about 30x faster, and more readable. I'll take it. I try not to use the word "easy" when teaching, but that optimization was easy right? The key was knowing about broadcasting, and seeing where to apply it (which is more difficult). I have seen uses of .apply(..., axis=1) in my code and other's, even when the vectorized version is availble. For example, the README for lifetimes (by Cam Davidson Pilon, also author of Bayesian Methods for Hackers, lifelines, and Data Origami) used to have an example of passing this method into a DataFrame.apply. data.apply(lambda r: bgf.conditional_expected_number_of_purchases_up_to_time( t, r['frequency'], r['recency'], r['T']), axis=1 ) If you look at the function I linked to, it's doing a fairly complicated computation involving a negative log likelihood and the Gamma function from scipy.special. But crucially, it was already vectorized. We were able to change the example to just pass the arrays (Series in this case) into the function, rather than applying the function to each row. This got us another 30x speedup on the example dataset. bgf.conditional_expected_number_of_purchases_up_to_time( t, data['frequency'], data['recency'], data['T'] ) I bring this up because it's very natural to have to translate an equation to code and think, "Ok now I need to apply this function to each row", so you reach for DataFrame.apply. See if you can just pass in the NumPy array or Series itself instead. Not all operations this easy to vectorize. Some operations are iterative by nature, and rely on the results of surrounding computations to procede. In cases like this you can hope that one of the scientific python libraries has implemented it efficiently for you, or write your own solution using Numba / C / Cython / Fortran. Other examples take a bit more thought to vectorize. Let's look at this example, taken from Jeff Reback's PyData London talk, that groupwise normalizes a dataset by subtracting the mean and dividing by the standard deviation for each group. 
import random def create_frame(n, n_groups): # just setup code, not benchmarking this stamps = pd.date_range('20010101', periods=n, freq='ms') random.shuffle(stamps.values) return pd.DataFrame({'name': np.random.randint(0,n_groups,size=n), 'stamp': stamps, 'value': np.random.randint(0,n,size=n), 'value2': np.random.randn(n)}) df = create_frame(1000000,10000) def f_apply(df): # Typical transform return df.groupby('name').value2.apply(lambda x: (x-x.mean())/x.std()) def f_unwrap(df): # "unwrapped" g = df.groupby('name').value2 v = df.value2 return (v-g.transform(np.mean))/g.transform(np.std) %timeit f_apply(df) 1 loop, best of 3: 3.55 s per loop %timeit f_unwrap(df) 10 loops, best of 3: 68.7 ms per loop Pandas GroupBy objects intercept calls for common functions like mean, sum, etc. and substitute them with optimized Cython versions. So the unwrapped .transform(np.mean) and .transform(np.std) are fast, while the x.mean and x.std in the .apply(lambda x: (x - x.mean())/x.std()) aren't. GroupBy.apply is always going to be around, because it offers maximum flexibility. If you need to fit a model on each group and create additional columns in the process, it can handle that. It just might not be the fastest (which may be OK sometimes). Thanks to some great work by Jan Schulz, Jeff Reback, and others, pandas 0.15 gained a new Categorical data type. Categoricals are nice for many reasons beyond just efficiency, but we'll focus on that here. Categoricals are an efficient way of representing data (typically strings) that has a low cardinality, i.e. relatively few distinct values relative to the size of the array. Internally, a Categorical stores the categories once, and an array of codes, which are just integers that indicate which category belongs there. Since it's cheaper to store a code than a category, we save on memory (shown next). import string s = pd.Series(np.random.choice(list(string.ascii_letters), 100000)) print('{:0.2f} KB'.format(s.memory_usage(index=False) / 1000)) 800.00 KB c = s.astype('category') print('{:0.2f} KB'.format(c.memory_usage(index=False) / 1000)) 100.42 KB Beyond saving memory, having codes and a fixed set of categories offers up a bunch of algorithmic optimizations that pandas and others can take advantage of. Matthew Rocklin has a very nice post on using categoricals, and on optimizing code in general. The pandas documentation has a section on enhancing performance, focusing on using Cython or numba to speed up a computation. I've focused more on the lower-hanging fruit of picking the right algorithm, vectorizing your code, and using pandas or numpy more effectively. There are further optimizations available if these aren't enough.
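For completeness, here is roughly what the numba route could look like for the great-circle function defined earlier. This is a sketch rather than part of the original notebook; numba.vectorize compiles the scalar body into a NumPy-style ufunc, and the exact speedup depends on your versions:

import math
import numba

@numba.vectorize(['float64(float64, float64, float64, float64)'])
def gcd_numba(lat1, lng1, lat2, lng2):
    # same math as gcd_py, compiled to machine code
    d2r = math.pi / 180.0
    phi1 = (90.0 - lat1) * d2r
    phi2 = (90.0 - lat2) * d2r
    cos = (math.sin(phi1) * math.sin(phi2) * math.cos(lng1 * d2r - lng2 * d2r)
           + math.cos(phi1) * math.cos(phi2))
    return math.acos(cos) * 6373.0

# pass the underlying NumPy arrays rather than the pandas objects
r = gcd_numba(pairs['LATITUDE_1'].values, pairs['LONGITUDE_1'].values,
              pairs['LATITUDE_2'].values, pairs['LONGITUDE_2'].values)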
http://nbviewer.jupyter.org/gist/TomAugspurger/2d6cb8332868e762daeadf228b6e2bbf
CC-MAIN-2017-13
en
refinedweb
Other languages As well as its own scripting language, obmm also lets you script using IronPython, C# and Visual Basic. In this case, access to obmm's scripting functions is provided via an interface. C# and VB scripts must define a class named 'Script' which inherits from 'IScript' in the root namespace. It must contain a method called 'Execute' which takes an IScriptFunctions interface as a parameter. Both of these interfaces exist in the OblivionModManager.Scripting namespace. For example, a basic C# script would look like: using OblivionModManager.Scripting; class Script : IScript { public void Execute(IScriptFunctions sf) { sf.Message("This is a C# script"); } } Unlike C# and VB, python scripts don't require any special setup. obmm attempts to enforce security restrictions on these scripts, but as I'm not certain of their safety, python, C# and VB scripts are disabled by default. If you want to enable them, you can do so from the options menu.
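The page shows no Visual Basic version. Given the interface described above (a Script class implementing IScript, with an Execute method that receives an IScriptFunctions), a VB script would presumably look something like the following; this is an untested sketch, not an example from the obmm documentation:

Imports OblivionModManager.Scripting

Class Script
    Implements IScript

    Public Sub Execute(ByVal sf As IScriptFunctions) Implements IScript.Execute
        sf.Message("This is a VB script")
    End Sub
End Class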
http://timeslip.chorrol.com/obmmm/otherlanguages.htm
CC-MAIN-2017-13
en
refinedweb
Auto Print an HTML Page SuperGeekery.) Additionally, the client wanted these same recipes sharable and accessible from their web site. Those site-based recipes also needed to be printable. I wanted a solution that didn't end up creating multiple versions of the recipes. There were already many moving pieces in the project and having multiple versions of those recipes floating simply increased the chances of errors creeping in. The Goal - one recipe to rule them all. The solution was to build a single HTML page for each recipe that had a print button that would trigger the printing functionality. We also needed some mechanism to trigger printing on load when that URL was accessed from the Flash banner. (I also had an additional goal of keeping each recipe short enough to fit on a single printed page.) First take a look at a sample recipe. You'll see a basic recipe with a print button that triggers the print function of your computer. Being able to print via a button click requires javascript, but the print button is hidden when there is no javascript on the page in the noscript tags at the bottom of the page, so you'll only see it if you do have javascript available. Now take a look at the same URL that we used when calling it from a Flash banner. It's the same URL, but with GET variables added to the end: One you click on that link, it should launch the recipe like before but now your print dialog box should also appear. Step 1: Build the HTML & CSS If you looked at the HTML for the sample recipe, you'll see it uses tables to keep the graphics all lined up. This was originally built to also be used as an HTML email. That's why it looks like HTML from 1997. :-) That's just the state of making HTML emails. Typically you wouldn't use tables to align your graphics. There were a number of recipes that were in this collection and some of them were long enough that they made the recipes spill over onto a 2nd page. We wanted to keep each recipe to a single page , so the big burger image needed to be replaced at the bottom of the page for printing. The print style sheet basically turns off burger image at the bottom of the page with a solid rule, leaving the 'cleanbase' image, a solid rule, on. The web style sheet does the opposite. From the Web style sheet: #cleanbase { display: none; } From the Print style sheet: #fullburgerimage { display: none; } The both style sheets also hide the "Print" button so that it's not visible unless told to be visible by the Javascript. Step 2: Use Javascript to make the print button work. As I said before, none of what we're going over is new. It's just putting together several well-known techniques in a single page. It just works and that makes it handy to know. The print function is simple javascript. You print the contents of the window element like this: window.print(); Now we just need to have that function triggered when a user clicks the print button. // Send coupon to printer $('#printbutton a').click(function(event){ event.preventDefault(); window.print(); }); Step 3: Checking the GET variables and triggering the print function In the jQuery document.ready function, I wanted to check for the variables passed into the page via the URL, basically, I wanted to check the GET variables. Doing this in PHP would have been simple, but I wanted to use Javascript to do this. Stack Overflow to the rescue. Here's a great little function to do that very task. function $_GET(q,s) { s = s ? 
s : window.location.search; var re = new RegExp('&'+q+'(?:=([^&]*))?(?=&|$)','i'); return (s=s.replace(/^\?/,'&').match(re)) ? (typeof s[1] == 'undefined' ? '' : decodeURIComponent(s[1])) : undefined; } Now you just need to test to see to see if 'print' is 'true'. If it is, we trigger the print window function when the document is ready. if ($_GET('print') == 'true') { window.print(); } Step 4: Finishing up There are a few additional things this sample doesn't show that you could do. - Add the Open Graph meta data for this page. - Add Facebook 'like' and 'sharing' buttons - Add a Twitter share button - Build a email this functionality to this page. - Add Google Analytics to the page to see how people are using it. Please comment below if you've got any feedback. Thanks. Thank you for the help! I was looking on google for the “auto-print” function. Happy new year! I hope you find it helpful, Christian. Happy 2012 to you too. This post exactly fits my requirement. Thanks a lot in counter, customers are standing in a line. so i must to do the billing as fast as possible to make customer happy. now for this window takes time for each bill, and every time i am using same printer. so i don’t think this is necessary for me. and ultimately time === money. so i need this technical help.. please help if you can I made a slight update on how I hid the print button in the cases where there was no javascript. The noscript tag was a better solution for me than trying to guess the media type via JS.
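Putting steps 2 and 3 together, the whole wiring can live in one jQuery document-ready block. The .show() call is an assumption about how the stylesheets hide the button, so adjust the selector and the reveal mechanism to your own markup:

$(document).ready(function () {
    // Reveal the print button only when JavaScript is available
    $('#printbutton').show();

    // Step 2: send the page to the printer on click
    $('#printbutton a').click(function (event) {
        event.preventDefault();
        window.print();
    });

    // Step 3: auto-print when the page was opened with ?print=true
    if ($_GET('print') == 'true') {
        window.print();
    }
});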
https://supergeekery.com/geekblog/comments/auto_print_an_html_page
CC-MAIN-2017-13
en
refinedweb
in reply to Re: help using packForget() in thread help using packForget() the idea of local $boom is to give the global $boom a local value so when you speak of $boom, here, it's a package global. This description, if not outright wrong, is at least misleading. The idea of local is to give the global variable a dynamic value for the time being, but allow it to be automagically restored to its former value later (when the current block exits). The distinction is important, because the dynamic value is global in nature, not local in the traditional sense. (Yes, local is misnamed.) Any other code that gets called, even from other packages, will see the dynamic value. Therein lies its value. In fact, you would ordinarily not use local on your own variables. For those you would typically use my or our or place them in a package namespace. local is more useful for dynamically scoping the package variables used by other code that you are calling, in order to adjust its behavior in some way. (In Perl this is most often special variables belonging to built-in code that is part of perl itself, but in principle it could also be global (or package) variables belonging to a module, as is common e.g. in elisp.)
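A tiny self-contained demonstration of that dynamic scoping (my own example, not from the original node):

our $boom = "global";

sub peek { print "$boom\n" }      # reads whatever value is dynamically in effect

sub demo {
    local $boom = "temporary";    # saved on entry, restored automatically on exit
    peek();                       # prints "temporary"
}

demo();
peek();                           # prints "global" again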
http://www.perlmonks.org/?node_id=612560
CC-MAIN-2017-13
en
refinedweb
Opened 5 years ago Closed 4 years ago Last modified 4 years ago #18210 closed Bug (fixed) Regression and crash with any "special" prefix values passed to reverse() Description After updating to Django 1.4, I get no fewer than 5 messages a day where the Django 404 page generation gets totally fouled up and ends up resulting in a 500 server error. The common thread here was these URLs arrived via the Apache ErrorDocument route. I'm running under mod_wsgi, and I narrowed it down the attached test cases to show the broken behavior. Applying this to the 1.3.X branch results in all 3 new tests passing, but on 1.4.X and trunk all three tests fail in related but different ways. The high level reason has to to with some of the crazy PATH_INFO, SCRIPT_NAME, and SCRIPT_URL usage Django is doing, from what I can tell. In the ErrorDocument situation, the SCRIPT_URL envvar is not set to be the WSGI script; instead, it remains set to the original missing URI (something such as '/static/magazine/2010/ALM-2010-Feb/bump%20map.png'). This causes all sorts of issues because PATH_INFO is much shorter (in my case, it gets rewritten to '/404'). I'm not sure how critical this bug is, but it is extremely trivial to cause Django to 500 under any ErrorDocument setup at the moment- if one includes a '{', ')', or '%' character in the URL they are requesting that ends up getting handled via ErrorDocument, the application will error 100% of the time as stands, from what I can tell. All of the normalize(prefix) stuff in reverse() appears to be new in 1.4, and that is where all three of these failures can be traced back to. Attachments (1) Change History (12) Changed 5 years ago by comment:1 Changed 5 years ago by comment:2 Changed 5 years ago by comment:3 Changed 5 years ago by Marking as a release blocker because this is a regression. comment:4 Changed 4 years ago by A few things: - Pull request to fix this here: - While I agree this was a regression and that the exceptions have to be fixed, I believe the 1.3-era output (which your attached tests reflect) is also wrong for cases 2 and 3. The first case with the curly braces escaped is the correct behavior, and the other two cases should reflect the URLs with the special characters escaped as well. The tests in the pull request reflect that. - Normally the prefixargument to reverse is used internally to handle namespaces included within one another wherein it'd be very hard to get special regex characters into that prefix. That doesn't mean this isn't an issue, only that it's an uncommon (and in fact undocumented) thing to do. In the internal case the value of prefix is always processed by normalizebefore being passed in and has to have originated from a valid regex anyhow. - The intention of the quote "Since most people don't have regex special characters in the prefix to namespaced urls, it wasn't a problem" in the original context was to say that the bug reported in #15900 hadn't been discovered because nesting captured groups via multiple namespace-included URLconfs was uncommon. Those captured groups were the "regex special characters" in question and they weren't being treated as part of the regex URL pattern at all. This issue is effectively the reverse of that one where unintended special characters are being treated as part of the regex (caveat see point #2). comment:5 Changed 4 years ago by comment:6 Changed 4 years ago by toofishes: can you confirm that Gabriel's patch resolves your problem, given your ErrorDocument setup? Patch with three new failing tests
https://code.djangoproject.com/ticket/18210
CC-MAIN-2017-13
en
refinedweb
pyliftover 0.3 Pure-python implementation of UCSC ``liftOver`` genome coordinate conversion. PyLiftover is a library for quick and easy conversion of genomic (point) coordinates between different assemblies. It uses the same logic and coordinate conversion mappings as the UCSC liftOver tool. As of current version (0.2), PyLiftover only does conversion of point coordinates, that is, unlike liftOver, it does not convert ranges, nor does it provide any special facilities to work with BED files. For single-point coordinates it produces exactly the same output as liftOver (verified with at least the hg17ToHg18.over.chain.gz file for now). Installation The simplest way to install the package is via easy_install or pip: $ easy_install pyliftover Usage The primary usage example, supported by the library is the following: from pyliftover import LiftOver lo = LiftOver('hg17', 'hg18') lo.convert_coordinate('chr1', 1000000) The first line will automatically download the hg17-to-hg18 coordinate conversion chain file from UCSC, unless it is already cached or available in the current directory. Alternatively, you may provide your own chain file: lo = LiftOver('hg17ToHg18.over.chain.gz') lo.convert_coordinate('chr1', 1000000, '-') The result of lo.convert_coordinate call is either None (if the source chromosome name is unrecognized) or a list of target positions in the new assembly. The list may be empty (locus is deleted in the new assembly), have a single element (locus matched uniquely), or, in principle, have multiple elements (although this is probably a rare occasion for most default intra-species genomic conversions). Note that coordinates in the tool are 0-based. That is, a position that you would refer to in the genome browser by chr1:10 corresponds to coordinate 9 in PyLiftover’s terms. Although you may try to apply the tool with arbitrary chain files, like the original liftOver tool, it makes most sense for conversion of coordinates between different assemblies of the same species. See also - Blog post: - Report issues and submit fixes at Github: - Author: Konstantin Tretyakov - Keywords: bioinformatics liftover genome-analysis - License: MIT - Platform: Platform Independent - Categories - Package Index Owner: kt - DOAP record: pyliftover-0.3.xml
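Because the coordinates are 0-based, converting a position copied from a genome browser means subtracting one first. A small illustration built only from the calls shown above (the chromosome and position are arbitrary):

from pyliftover import LiftOver

lo = LiftOver('hg17', 'hg18')

browser_position = 1000000                     # chr1:1,000,000 as displayed in a browser (1-based)
results = lo.convert_coordinate('chr1', browser_position - 1)   # pyliftover expects 0-based

if results is None:
    print('unrecognized chromosome name')
elif not results:
    print('locus deleted in the target assembly')
else:
    print('lifted to', results[0])             # first (usually only) match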
https://pypi.python.org/pypi/pyliftover
CC-MAIN-2017-13
en
refinedweb
SPListItem item = web.Lists["Docs"].Items[0]; string fieldValue = item["Contact"].ToString(); // Contact is the name of Person or Group column and in this case has a value "1;#Test User" SPFieldUserValue user = new SPFieldUserValue(web, fieldValue); string name = user.LookupValue; // Test User The LookupId for this field type is the ID property of the SPPrincipal class. Sometimes you might need to determine what this Lookup ID is for a particular user name (you can even search for a user by email address), and that's where the SPUtility.ResolvePrincipal method is helpful, e.g.: using Microsoft.SharePoint.Utilities; ... SPPrincipalInfo userInfo = SPUtility.ResolvePrincipal(web, "Test User", SPPrincipalType.All, SPPrincipalSource.All, null, false); if (userInfo != null) { item["Contact"] = new SPFieldUserValue(web, userInfo.PrincipalId, userInfo.DisplayName); item.Update(); } A good example of using this method can be found here which demonstrates using this method to find out if a user has access to your site or list. A second SPUtility method - SPUtility.SearchPrincipals - returns a list of principals much like the list returned by the "Select People and Groups" picker dialog box, e.g: bool reachedMaxCount = false; IList<SPPrincipalInfo> users = SPUtility.SearchPrincipals(web, "T", SPPrincipalType.All, SPPrincipalSource.All, null, 100, out reachedMaxCount);
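To consume the SearchPrincipals result you simply walk the returned list, and reachedMaxCount tells you whether the 100-item limit truncated it; a small sketch reusing the variables from the snippet above:

foreach (SPPrincipalInfo principal in users)
{
    Console.WriteLine("{0} ({1})", principal.DisplayName, principal.PrincipalId);
}
if (reachedMaxCount)
{
    Console.WriteLine("More than 100 matches; narrow the search text.");
}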
http://geek.hubkey.com/2008_01_01_archive.html
CC-MAIN-2017-13
en
refinedweb
Controlling Project and File Properties with C++ Macros In previous columns, I introduced you to the basics of writing Visual Studio macros in C++—well, to be accurate, writing a library in C++ that provides all the functionality for macros. I showed you how to insert text into a file being edited, and how to work with the code model that represents classes, interfaces, functions, and the like within your project. In this installment, I tackle the project model. The particular task my macro will perform is changing a file (within a managed project) from managed (/clr) to unmanaged. This is something you might do for performance reasons, creating a mixed executable. When you make this change in Solution Explorer, you have to make a companion change, turning off precompiled headers. The macro does both steps. I'll leave it as an exercise for you to write the opposite macro that puts the file back to managed. Sample Project I created an unmanaged console application and added a class to it (kept in a separate file). The class is called Person: the header is in Person.h and the implementation is in Person.cpp. I plan to flip Person.cpp back and forth between managed and unmanaged using the macro. Here's Person.h: class Person { private: int number; char code; public: Person(int n, char c); int getnumber(); }; You can guess what the two functions look like, and you might be tempted to write them inline in the .h file. But think what will happen when you #include that .h file into a .cpp file that is being compiled /clr: you will get MSIL versions of the functions. That's why I put the implementations into a separate file. That file will then be compiled to MSIL or native code according to the properties you've set for it. And after all, in real life, if you're flipping a file back to native code for performance reasons, it's going to have a great deal of code in it and not these little "demo code" examples. The macros don't care how much code they work on, so I wrote small examples. Here is Person.cpp: #include "StdAfx.h" #include ".\person.h" Person::Person(int n, char c) { number = n; code = c; } int Person::getnumber() { return number; } I then wrote a really simple main(): #include "stdafx.h" #include <iostream> using namespace std; #include "Person.h" int _tmain(int argc, _TCHAR* argv[]) { Person p1(1,'a'); Person p2(2,'q'); cout << "total of the numbers: " << p1.getnumber() + p2.getnumber() << '\n'; return 0; } So far, this is all unmanaged code and has no .NET part to it. I built and ran it to make sure nothing weird was going on, and then used Solution Explorer to make the entire project managed (/clr). I then built and ran it again to make sure it still worked. This should be familiar (if you've read my head-spinning columns) as the "xcopy port" to the CLR.
http://www.developer.com/net/cplus/article.php/3368451/Controlling-Project-and-File-Properties-with-C-Macros.htm
CC-MAIN-2017-13
en
refinedweb
I have a custom subclass of UIButton: import UIKit @IBDesignable class MyButton: UIButton { var array : [String]? } I would like to make the stored array generic, along these lines: import UIKit @IBDesignable class MyButton<T>: UIButton { var array : [T]? } so that I can use it as MyButton<String> or MyButton<Int>, but as soon as the class becomes generic the storyboard build fails with: Command /Applications/Xcode6-Beta4.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift failed with exit code 254 Interface Builder "talks" to your code through the ObjC runtime. As such, IB can access only features of your code that are representable in the ObjC runtime. ObjC doesn't do generics, so there's not a way for IB to tell the runtime which specialization of your generic class to use. (And a Swift generic class can't be instantiated without a specialization, so you get an error trying to instantiate a MyButton instead of a MyButton<WhateverConcreteType>.) (You can recognize the ObjC runtime at work in IB when you break other things: Attempting to use a pure Swift class with outlets/actions in a nib/storyboard gives runtime errors about ObjC introspection. Leaving an outlet connected whose corresponding code declaration has changed or gone away gives runtime errors about KVC. Et cetera.) To ignore the runtime issues and put it in a slightly different way... let's go back to what IB needs to know. Remember that the nib loading system is what instantiates anything in a storyboard at runtime. So even if the parts of your class that take a generic type parameter aren't @IBInspectable, the nib still needs to know what specialization of your generic class to load. So, for IB to let you use generic classes for views, IB would have to have a place for you to identify which specialization of your class the nib should use. It doesn't — but that'd make a great feature request. In the meantime, if it's still helpful for your MyButton class to involve some generic storage, perhaps you could look into having MyButton reference another class that includes generic storage. (It'd have to do so in such a way that the type of said storage isn't known at compile time, though — otherwise the type parameters of that other class would have to propagate back to MyButton.)
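One way to follow the answer's suggestion (keep MyButton non-generic for Interface Builder and push the generic storage into a referenced object) is a small type-erasing wrapper. This is my own sketch, not code from the original answer:

import UIKit

// Non-generic holder that MyButton can own without exposing a type parameter to IB
final class ErasedStorage {
    private let items: [Any]
    init<T>(_ items: [T]) { self.items = items }
    func items<T>(of type: T.Type) -> [T] { return items.compactMap { $0 as? T } }
}

@IBDesignable
class MyButton: UIButton {
    var storage: ErasedStorage?    // IB only ever sees a plain class
}

// Usage from code, where the concrete type is known:
// button.storage = ErasedStorage(["a", "b", "c"])
// let strings = button.storage?.items(of: String.self) ?? []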
https://codedump.io/share/kffLQMzwTYP6/1/use-a-generic-class-as-a-custom-view-in-interface-builder
CC-MAIN-2017-13
en
refinedweb
Details - Type: Bug - Status: Closed - Priority: Critical - Resolution: Fixed - Affects Version/s: 2.7.2 - Fix Version/s: 2.8.0, 2.7.3, 3.0.0-alpha1 - Component/s: None - Labels:None Description SNN was down for sometime because of some reasons..After restarting SNN,it became unreponsive because - 29 DN's sending IBR in each 5 million ( most of them are delete IBRs), where as each datanode had only ~2.5 million blocks. - GC can't trigger on this objects since all will be under RPC queue. To recover this( to clear this objects) ,restarted all the DN's one by one..This issue happened in 2.4.1 where split of blockreport was not available. Activity - All - Work Log - History - Activity - Transitions As current intention is not overload the NN. Planning to fix like following - Clear the IBRS on re-register to namenode. void reRegister() throws IOException { if (shouldRun()) { // re-retrieve namespace info to make sure that, if the NN // was restarted, we still match its version (HDFS-2120) NamespaceInfo nsInfo = retrieveNamespaceInfo(); // and re-register register(nsInfo); scheduler.scheduleHeartbeat(); //HDFS-9917,Standby NN IBR can be very huge if standby namenode is down // for sometime. if (state == HAServiceState.STANDBY) { ibrManager.clearIBRs(); } } } Any thoughts on this..? Clear the IBRS on re-register to namenode. I think this is fine. This is only one part of the solution to make SNN start successfully. Also its required to limit the number of IBRs for Standby. 1. May". Tsz Wo Nicholas Sze/Jing Zhao, does this make sense to you? When SNN is restarted, DNs send a full BR to it. Then, the IBRs collected before the full BR can be dropped. Is it the case? Before Full BR, all pending IBRs will be flushed. In current problem case, size of IBR itself is huge than FBR,IBR itself failed. because NN was not able to process it completely. thats why it kept accumulating. > Before Full BR, all pending IBRs will be flushed. ... Yes, this is the current problem. I suggest that NN could just ignore the pending IBRs before the first full BR. Would it fix the problem? I suggest that NN could just ignore the pending IBRs before the first full BR. Would it fix the problem? Yes, I think its same as clearing on reRegister() at datanode itself. Advantage of clearing on reRegister() in DN itself, is unnecessary RPC will go to namenode and Namenode need to unnecessary GC for these IBR's.. We may also need to limit the DN keep accumulating the IBRs and use lot of memory I meant to say,we can avoid RPC to namenode and unnecessary GC for these IBR's.. Ping Tsz Wo Nicholas Sze. Brahma Reddy Battula, your proposal on reRegister() sounds great, thanks. Uploaded patch..Kindly review.. can we limit the number of IBR's to standby where DN keep accumulating the IBRs and use lot of memory..? Current changes for clearing IBRs on re-Register() looks good. For the second part, i.e. Avoid accumulation of IBRs when the standby is down for long time, can we consider as below. (Already mentioned in my above comment)". In that case, for sure re-Register() will be called on reconnection to running NameNode (if any). Only question is, heartBeatExpiryInterval in NameNode depends on conf "dfs.namenode.heartbeat.recheck-interval" which is namenode side configuration. By default this is 5 min. If there is any change in this in Namenode side, that change should also be present in datanode config. Is it okay to use this? or introduce a common conf to NN and DN? Tsz Wo Nicholas Sze, what is your opinion in this? 
Considering that the second part of this issue needs more discussion about getting heartBeatExpiryInterval on the datanode side, it could be done in a follow-up Jira.

Brahma Reddy Battula, please file a follow-up jira for "Avoid accumulation of IBRs for SNN when the standby is down for more than expected time". Seeing the criticality of this issue, I feel it would be better to land this in 2.7.3 with the reRegister() IBR clearance fix. The current changes look good for the fix. Please add a test to verify the same. Mock tests would be sufficient. TestBPOfferService.java contains similar tests; you can refer to them.

> Please file a follow up jira for the "Avoid accumulation of IBRs for SNN when the standby is down for more than expected time".
Raised HDFS-10244.

> Seeing the criticality of this issue, I feel it would be better to land this in 2.7.3 with reRegister() IBR clearance fix.
Uploaded the patch. Kindly review.

Raised HDFS-10245 and HDFS-10248 for the Findbugs warnings and ASF License warnings; the test failures are unrelated. Re-uploaded the trunk patch to trigger Jenkins. The testcase failure is unrelated; HDFS-10253 is raised to track this. So kindly review the branch-2.7 and trunk patches.

+1 for the trunk patch. In the branch-2.7 patch, instead of changing the pendingIncrementalBRperStorage accessor to default, it's better to add a method to get the size of the pending IBRs, i.e. getPendingIBRSize(), similar to the one in the trunk patch, which was added in IncrementalBlockReportManager. Otherwise +1.

Uploaded the branch-2.7 patch to address the above comment. Kindly review.

The latest branch-2.7 patch also looks good. +1. Will commit this tomorrow, unless there are any objections. Thanks Brahma Reddy Battula for updating the patch.

Committing shortly. Committed to trunk, branch-2, branch-2.8 and branch-2.7. Thanks Brahma Reddy Battula for the contribution. Thanks Tsz Wo Nicholas Sze for the reviews.

Vinayakumar B, thanks for the review and commit, and thanks to Tsz Wo Nicholas Sze for the additional review.

FAILURE: Integrated in Hadoop-trunk-Commit #9555 (See) HDFS-9917. IBR accumulate more objects when SNN was down for sometime. (vinayakumarb: rev 818d6b799eead13a17a0214172df60a269b046fb)
- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
- hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/IncrementalBlockReportManager.java
- hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBPOfferService.java

Brahma Reddy Battula and Vinayakumar B, thanks a lot for working on this! Closing the JIRA as part of the 2.7.3 release.

Following tested on trunk, not the original cluster data.
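To illustrate the idea behind the ibrManager.clearIBRs() call discussed in this ticket, here is a minimal, self-contained sketch of a manager that accumulates per-storage incremental reports and discards them on re-registration. It is an illustration of the concept only, not the actual Hadoop IncrementalBlockReportManager; the class and method bodies below are hypothetical.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the DataNode's per-storage incremental block report state.
class PendingIbrManager {
    // storageId -> list of pending block report entries (here just block ids)
    private final Map<String, List<Long>> pendingPerStorage = new HashMap<>();

    // Called as blocks are received/deleted; entries accumulate until they are sent.
    synchronized void addPending(String storageId, long blockId) {
        pendingPerStorage.computeIfAbsent(storageId, k -> new ArrayList<>()).add(blockId);
    }

    synchronized int getPendingIBRSize() {
        return pendingPerStorage.values().stream().mapToInt(List::size).sum();
    }

    // The essence of the fix: on re-register to a standby NameNode, drop everything,
    // because the follow-up full block report makes the stale IBRs redundant.
    synchronized void clearIBRs() {
        pendingPerStorage.clear();
    }
}

class ReRegisterExample {
    // Sketch of the decision made on re-register, per the patch discussed above.
    static void onReRegister(PendingIbrManager ibrManager, boolean namenodeIsStandby) {
        if (namenodeIsStandby) {
            ibrManager.clearIBRs();   // avoid flooding the restarted standby NN
        }
    }
}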
https://issues.apache.org/jira/browse/HDFS-9917
CC-MAIN-2017-13
en
refinedweb
aut
AUT - Advanced UTility framework - is a powerful application framework to develop business logic. The component- and workflow-based design can be applied to almost all tech layers, and a special data dictionary design supports current and future data models.

bli
The BLI design pattern is a design pattern that attaches importance to the way the M layer of the MVC model is made. The BLI design pattern helps in building business logic classes. Developers get plain, highly maintainable code.

camerons game
This game is to teach kids hand-eye coordination.

cell phone suicide bomber project
The aims of the Cell Phone SB project are to provide various image-processing functions for the cell phone platform. Mainly we will translate OpenCV's functions into Java and Brew. We are especially interested in the Haar face detector for the cell phone camera.

cjk-xchg-lib
This lib (Python) will provide APIs for exchanging Chinese, Korean and Japanese characters.

codename: GXNP
Sorry, in development at the moment. I'm also looking for C# programmers to help; so far we have 2, so I need the help of 2 more. I will give you the stats of the app when you contact me.

craftorun minecraft server
a minecraft

dolphin mail project
This project aims to develop an MTA (Mail Transfer Agent) system which can send emails stably to PC and mobile users and supports CRM functions.

e107jp
e107jp is the home of the Japanese community of e107. e107 is a website system written in PHP and MySQL. e107jp provides a complete translation into Japanese of the original script and of its plugins, and gives support to Japanese people.

eSM-Project
Secure chat software. Build on SRSo

epo
"epo" is an advanced archiving system on Windows platforms. It tightly integrates into Explorer through its shell namespace extension and offers very easy-to-use archiving features.

family money manage system
It's an application for family money management which is to be made internationally. All of it is made for the family.

foXchan
a chan barely alive. anonymous, we can rebuild it. we have the technology. we have the capability to build the world's first bionic chan. foXchan will be that chan. we can make it better than it was before. Better. Stronger. Faster.
freeboard: OSS board game client/server
freeboard is aimed at being a free open-source server/client for setting up community board-game servers. It supports:
- one-to-one direct connection
- one-to-one indirect connection (behind proxies)
- one-to-server connection
A graphical client is available.

games 2day
An online 2D interactive world for learning languages. Users are placed within a virtual world where the other users don't speak the same language. Users need to communicate to achieve objectives and organise life within the game.

goAnywhere
A web-based Go playing client and server.

gtlp (Garbage Tools/Libraries Project)
GTLP (Garbage Tools/Libraries Project) will make some tools/libraries for webpage creation and/or maintenance.

j2e-translator
A flexible Japanese-to-English language translation engine that allows multiple results to be displayed according to all possible grammar ratings, i.e. a list of results will be displayed instead of only one. Please browse the CVS tree for the actual project.
https://sourceforge.net/directory/natlanguage%3Ajapanese/developmentstatus%3Aplanning/?sort=popular&page=8
CC-MAIN-2017-13
en
refinedweb
Include "XFontDialog.rc" right before the #endif:

#include "XFontDialog.rc"

HWND hWndColor = ::GetDlgItem(m_hWnd, 1139);
::SendMessage(hWndColor, CB_RESETCONTENT, 0, 0);

afx_msg LONG CFontEx::OnSelEndOK(UINT lParam, LONG wParam)
{
    HWND hWndColor = ::GetDlgItem(m_hWnd, 1139);
    ::SendMessage(hWndColor, CB_RESETCONTENT, 0, 0);
    int idx = ::SendMessage(hWndColor, CB_ADDSTRING, 0, (LPARAM)_T("aa"));
    ::SendMessage(hWndColor, CB_SETITEMDATA, idx, (LPARAM)m_ColorPicker.GetColour());
    ::SendMessage(hWndColor, CB_SETCURSEL, idx, 0);
    Invalidate();
    return TRUE;
}

if (!bDeleted && bSuccess && tm.tmItalic)
{
    bDeleted = RemoveFont(hWndFont, szFont);
}
if (!bDeleted && bSuccess && tm.tmWeight > 500)
{
    bDeleted = RemoveFont(hWndFont, szFont);
}
https://www.codeproject.com/articles/4385/xfontdialog-customizing-cfontdialog-part-i-adding?fid=15876&df=10000&mpp=10&noise=1&prof=true&sort=position&view=none&spc=none&select=1074216&fr=11
CC-MAIN-2017-13
en
refinedweb
The following post strips back Gerard's example to instead consider the steps in setting up and testing One-Way SSL for a JAX-WS web service generated via JDeveloper 11gR1 and installed in WLS 10.3.1, using the WLS policy Wssp1.2-2007-Https.xml.

Assumptions

This article assumes the reader has the following basic knowledge:
* HTTPS/SSL
* Digital certificates and trusted/certificate authorities (CAs)
* Oracle's WebLogic Server, WLS managed servers and the WLS console

One-Way SSL vs Two-Way SSL

For those not familiar with either, Oracle's WLS documentation has a good explanation of the implementation of, and differences between, One-Way SSL and Two-Way SSL in the Understanding Security for Oracle WebLogic Server manual.

Steps

To implement a One-Way SSL example we'll run through the following steps:
1) Create a basic JAX-WS web service with JDeveloper 11gR1
2) Generate the digital certificates required for the WLS server
3) Modify the web service to use the Wssp1.2-2007-Https.xml WLS policy
4) Deploy the running web service to WLS
5) Test the running web service via JDeveloper's HTTP Analyzer
6) Test the running web service via SoapUI
7) Test the running web service via a JAX-WS client
8) Inspect the web service packets on the wire to verify the traffic is indeed encrypted

1) Create a basic JAX-WS web service with JDeveloper 11gR1

This step is documented in a previous blog post, Creating JAX-WS web services via a WSDL in JDev 11g. There are also a number of viewlet demonstrations available from Oracle's OTN which show how to construct the WSDL in a drag'n'drop fashion. The resulting web service we'll demonstrate here is a very simple one. It is comprised of the following pieces.

OneWaySSLExample.wsdl - the WSDL listing declares the target namespace xmlns:tns="urn:OneWaySSLExample.wsdl" together with the usual xsd, soap and mime namespace prefixes. The overall web service comprises a single operation accepting the inputElement and outputElement strings as specified in the XSD.

OneWaySSLPortTypeImpl.java - a very basic JAX-WS web service accepting the inputElement String and returning the outputElement String prefixed with "Hello ":

package au.com.sagecomputing.ws;

import javax.jws.WebService;
import javax.xml.ws.BindingType;
import javax.xml.ws.soap.SOAPBinding;

@WebService
public class OneWaySSLPortTypeImpl {
    public String oneWaySSLOperation(String part) {
        return "Hello " + part;
    }
}

Example request SOAP payload - the inputElement carries the value: Chris
Example response SOAP payload - the outputElement carries the value: Hello Chris

The overall application/project structure will look as follows in JDeveloper's Application Navigator:

2) Generate the digital certificates required for the WLS server

In order for a client to undertake an SSL connection with our web service on the WLS server, the WLS server must be configured with a valid digital certificate. Again note from the Oracle documentation how One-Way SSL works at runtime:

With one-way SSL authentication, the target (the server) is required to present a digital certificate to the initiator (the client) to prove its identity. The client performs two checks to validate the digital certificate:
1. The client verifies that the certificate is trusted (meaning, it was issued by the client's trusted CA), is valid (not expired), and satisfies the other certificate constraints.
2.
The client checks that the certificate Subject's common name (CN) field value matches the host name of the server to which the client is trying to connect.

If both of the above checks return true, the SSL connection is established.

In this section we consider the digital certificates required for the WLS server. WLS is an interesting application server in that it keeps two separate Java keystores: one for storing the digital certificates used for such actions as SSL, and another which is typically used for storing CA digital certificates. The former is referred to as the identity keystore, the latter the trust keystore. The WLS manual Securing Oracle WebLogic Server, section 11 Configuring Identity and Trust, has a detailed explanation of this setup.

By default WLS comes with demonstration identity and trust keystores containing demonstration digital certificates. As the WLS documentation takes great pains to explain, these are for development purposes only and should never be used in a production environment. For the purposes of this blog post, if you're testing One-Way SSL in a development environment you can in fact skip this entire step as the demonstration WLS keystores will suffice. To check that the demonstration keystores are currently installed, log in to your WLS console, select your server, and under the Configurations -> Keystores tab you'll see the following entries:

Your entries for the file locations of the keystore will be different from my example here, dependent on where you installed WLS. However, using the demonstration keystores avoids the whole learning exercise of configuring your own custom digital certificates in WLS, which is an important lesson. The following describes those steps in detail, as based off Gerard's original post.

To install our own digital certificate we followed these general steps:
a) Open a command prompt and set the WLS environment
b) Generate our own trusted certificate authority digital certificate
c) Store the private key and digital certificate and import into the identity keystore
d) Store the same digital certificate into the trust keystore
e) Configure the new keystores in WLS's identity and trust keystore settings

The following describes those steps in detail. In order to do this we've used WLS utilities to do as much of the work as possible.

a) Open a command prompt and set the WLS environment

Under Windows open a command prompt on the same machine as where WLS is installed, create a temporary directory in your favourite place and cd to that directory, and run your WLS server's setDomainEnv.cmd command. Something like: "C:\

Once run, ensure you're still in your new directory.

b) Generate our own trusted certificate authority digital certificate

java utils.CertGen -certfile ServerCACert -keyfile ServerCAKey -keyfilepass ServerCAKey -selfsigned -e [email protected] -ou FOR-DEVELOPMENT-ONLY -o XXXX -l PERTH -s WA -c AU

This generates 4 files: ServerCACert.der, ServerCACert.pem, ServerCAKey.der, ServerCAKey.pem

The utils.CertGen utility is useful for development purposes but, as per the WLS documentation, should not be used for production purposes. Alternatively OpenSSL could be used instead.

Note the use of the -selfsigned flag. This implies this digital certificate will be used both as the CA in the trust keystore and as the served digital certificate in the identity keystore. This is not what we'd do for a production environment using commercial Certificate Authorities, but is sufficient for demonstration purposes in this post.

More information on:
* the WLS CertGen utility can be found here.
* .der vs .pem files can be found here and here.
* WLS provides two utilities, der2pem and pem2der, which can be used to convert between the two file types.

Under Windows you can double click on the ServerCACert.der file to show its contents:

If you have access to the openSSL command line tool you can use it to query the certificate we just created:

openssl x509 -text -inform der -in ServerCACert.der

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0d:a9:d1:4a:0f:0b:b2:61:13:90:89:f5:40:4d:4f:e2
    Signature Algorithm: md5WithRSAEncryption
        Issuer: C=AU, ST=WA, L=PERTH, O=SAGECOMPUTING, OU=FOR-DEVELOPMENT-ONLY, CN=
        Validity
            Not Before: Jul 9 07:06:49 2009 GMT
            Not After : Jul 10 07:06:49 2029 GMT
        Subject: C=AU, ST=WA, L=PERTH, O=SAGECOMPUTING, OU=FOR-DEVELOPMENT-ONLY, CN=
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    00:df:cb:6c:ed:86:75:4c:5b:66:cd:aa:3d:34:8f:
                    73:f6:9c:b5:ed:82:9c:c3:15
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:1
    Signature Algorithm: md5WithRSAEncryption
        b7:fa:1b:8f:c4:ee:af:6b:1d:f0:dc:f4:cf:35:20:f1:df:eb:
        0c:fe
-----BEGIN CERTIFICATE-----
MIIC8zCCAlygAwIBAgIQDanRSg8LsmETkIn1QE1P4jANBgkqhkiG9w0BAQQFADCB
i7Pd63d03mWkI85tvsr5Q+40yitOL5JnLsbyHSrM+1aK8kkY7Qz+
-----END CERTIFICATE-----

This identifies information that may be useful later if we make a mistake, such as the encryption algorithm used (RSA), the size of the keys (1024 bit) and the serial number of the certificate (a hex number).
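As an aside, if openssl is not available on the machine, the JDK can read the same .der file and print the most relevant fields. The following is a generic JCA sketch rather than anything WLS-specific; it simply assumes the ServerCACert.der file generated above is in the current directory.

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class DumpCert {
    public static void main(String[] args) throws Exception {
        // Read the DER-encoded certificate produced by utils.CertGen
        try (FileInputStream in = new FileInputStream("ServerCACert.der")) {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate) cf.generateCertificate(in);
            System.out.println("Subject   : " + cert.getSubjectX500Principal());
            System.out.println("Issuer    : " + cert.getIssuerX500Principal());
            System.out.println("Serial    : " + cert.getSerialNumber().toString(16));
            System.out.println("Not before: " + cert.getNotBefore());
            System.out.println("Not after : " + cert.getNotAfter());
            System.out.println("Sig. alg. : " + cert.getSigAlgName());
        }
    }
}

Running it should echo the same issuer/subject, validity dates and serial number reported by the openssl dump above.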
More information on: * the WLS CertGen utility can be found here. * .der vs .pem files can be found here and here. * WLS provides two utilities der2pem and pem2der can be used to convert between the two file types. Under Windows you can double click on the ServerCACert.der file to show its contents: If you have access to the openSSL command line tool you can use it to query the certificate we just created: openssl x509 -text -inform der -in ServerCACert.der Certificate: Data: Version: 3 (0x2) Serial Number: 0d:a9:d1:4a:0f:0b:b2:61:13:90:89:f5:40:4d:4f:e2 Signature Algorithm: md5WithRSAEncryption Issuer: C=AU, ST=WA, L=PERTH, O=SAGECOMPUTING, OU=FOR-DEVELOPMENT-ONLY, CN= Validity Not Before: Jul 9 07:06:49 2009 GMT Not After : Jul 10 07:06:49 2029 GMT Subject: C=AU, ST=WA, L=PERTH, O=SAGECOMPUTING, OU=FOR-DEVELOPMENT-ONLY, CN= Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:df:cb:6c:ed:86:75:4c:5b:66:cd:aa:3d:34:8f: 73:f6:9c:b5:ed:82:9c:c3:15 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Key Usage: critical Certificate Sign X509v3 Basic Constraints: critical CA:TRUE, pathlen:1 Signature Algorithm: md5WithRSAEncryption b7:fa:1b:8f:c4:ee:af:6b:1d:f0:dc:f4:cf:35:20:f1:df:eb: 0c:fe -----BEGIN CERTIFICATE----- MIIC8zCCAlygAwIBAgIQDanRSg8LsmETkIn1QE1P4jANBgkqhkiG9w0BAQQFADCB i7Pd63d03mWkI85tvsr5Q+40yitOL5JnLsbyHSrM+1aK8kkY7Qz+ -----END CERTIFICATE----- This identifies information that maybe useful later if we make a mistake, such as the encryption algorithm used (RSA), the size of the keys (1024bit), the serial number of the certificate (a hex number). c) Store the private key and the digital certificate in the identity keystore java utils.ImportPrivateKey -certfile ServerCACert.der -keyfile ServerCAKey.der -keyfilepass ServerCAKey -keystore ServerIdentity.jks -storepass ServerCAKey -alias identity -keypass ServerCAKey d) Store the same digital certificate into the trust keystore Import the certificate generated in step b into a trust keystore. keytool -import -v -trustcacerts -alias identity -file ServerCACert.der -keystore ServerTrust.jks -storepass ServerTrustStorePass e) Configure the new keystores in WLS's identity and trust keystore To configure the keystores in WLS enter the WLS console, select the managed server you're interested in, then make the following changes under the following tabs: Configuration tab -> General subtab SSL Listed Port Enabled = checkbox SSL Listen Port = 7102 (and different from the Listen Port) Configuration tab -> Keystores subtab Keystores = Custom Identity and Custom Trust Custom Identity Keystore = Custom Identity Keystore Type = jks Custom Identity Keystore Passphrase = ServerCAKey Confirm Custom Identity Keystore Passphrase = ServerCAKey Custom Trust Keystore = Custom Trust Keystore Type = jks Custom Trust Keystore Passphrase = ServerTrustStorePass Confirm Custom Trust Keystore Passphrase = ServerTrustStorePass Configuration tab -> SSL subtab Identify and Trust Locations = Keystores Private key alias = identity Private Key Passphrase = ServerCAKey Confirm Private Key Passphrase = ServerCAKey Then save. 
After this, restart your WLS server and you should see messages similar to the following in the WLS logs:

Alternatively, if you see messages like the following you have made a mistake in your configuration:

<10/07/2009 4:08:30 PM WST> <10/07/2009 4:08:30 PM WST> <10/07/2009 4:08:30 PM WST>

3) Modify the web service to use the Wssp1.2-2007-Https.xml WLS policy

This can be done in a number of ways in JDeveloper, the easiest of which, for this blog post at least, is just to insert the @Policy annotation into the JAX-WS endpoint as follows. (Note: if you're using earlier versions of JDeveloper or Eclipse, this mechanism won't work; you must manually add the policies to the WSDL.)

package au.com.sagecomputing.ws;

import javax.jws.WebService;
import javax.xml.ws.BindingType;
import javax.xml.ws.soap.SOAPBinding;
import weblogic.jws.Policy;

@WebService
@Policy(uri = "policy:Wssp1.2-2007-Https.xml")
public class OneWaySSLPortTypeImpl {
    public String oneWaySSLOperation(String part) {
        return "Hello " + part;
    }
}

4) Deploy the running web service to WLS

Within JDeveloper, to deploy and run from the integrated WLS it's simply a case of right clicking on the JAX-WS file and selecting Run. If you click on the hyperlink provided in the log window, this will open the HTTP Analyzer. From the HTTP Analyzer you can open the WSDL at the top of the window.

5) Test the running web service via JDeveloper's HTTP Analyzer

JDeveloper out of the box includes the HTTP Analyzer for testing your web services. It's particularly useful as you don't have to leave the confines of your IDE to test your web services. In order to run the HTTP Analyzer with SSL'ed web service traffic, you need to make some changes to the configuration of JDeveloper. Selecting the Tools -> Preferences menu option, followed by Https and Truststore Settings, you can configure the Client and Server keystores the HTTP Analyzer needs to run with SSL.

If you followed my exact instructions on setting up a selfsigned CA in the WLS identity and trust keystores, you need to enter the following options in the Preferences Https and Truststore Settings page:

Client Trusted Certificate Keystore: c:\temp\ServerTrust.jks
Client Trusted Keystore Password: ServerTrustStorePass
Server Keystore: c:\temp\ServerIdentity.jks
Server Keystore Password: ServerCAKey
Server Private Key Password: ServerCAKey

When you run your web service you can access the HTTP Analyzer by clicking on the URL of your served web service in the JDev IDE log window, among other methods. This presents the following HTTP Analyzer screens:

In the top of the screen you'll see the HTTP Analyzer has formed a dummy request for you to send out based on the web service's WSDL. In my example picture I've filled out the part field and pressed Send Request, of which you can see the reply from the web service on the right hand side. At the bottom of the screen you can see the individual request/responses that were generated in order to service the request.

6) Test the running web service via SoapUI

SoapUI is a popular web service testing tool. I wanted to show how to configure it here to show similar results to the HTTP Analyzer. The following steps were built with SoapUI v3.0.

a) Create a new Project via File -> New soapUI Project
b) In the New SoapUI Project dialog, enter a custom project name, then your WSDL, leave the rest of the fields as default.
c) In the Project list expand your new project to the last Request 1 node, and double click it.
d) This will open the Request 1 window, showing on the left hand side the outgoing request payload, where you can modify the inputElement XML element with your name.
e) Pressing the green arrow executes the request against the web service; you'll now hopefully see the SOAP response on the right hand side of the window.
f) Note at the bottom right of the right hand side of the window you have the text SSL Info. Clicking on this shows another sub-window with the SSL certificate information that was swapped with the WLS server to undertake the SSL communications.

7) Test the running web service via a JAX-WS client

Assuming under JDeveloper you know how to create a Java Proxy for the deployed web service, you'll end up with the following code:

import clientexamples.SSLUtilities;
import javax.xml.ws.WebServiceRef;

public class OneWaySSLPortTypePortClient {

    @WebServiceRef
    private static OneWaySSLService oneWaySSLService;

    public static void main(String[] args) {
        oneWaySSLService = new OneWaySSLService();
        OneWaySSLPortType oneWaySSLPortType = oneWaySSLService.getOneWaySSLPortTypePort();
        SSLUtilities.trustAllHttpsCertificates();
        System.out.println(oneWaySSLPortType.oneWaySSLOperation("Chris"));
    }
}

Note SSLUtilities is a handy class written by Srgjan Srepfler that includes a number of methods for handling and modifying the default SSL behaviour. In our case, in writing a simple test client we're not overly concerned about trusting the server's CA, so we can use SSLUtilities.trustAllHttpsCertificates to stop the required checking (a rough sketch of the kind of thing such a trust-all helper does appears after the WireShark discussion below).

8) Inspect the web service packets on the wire to verify the traffic is indeed encrypted

What neither JDeveloper's HTTP Analyzer nor SoapUI can do is actually confirm for you that the traffic on the network was actually encrypted. To check this we can use a wire sniffing tool called WireShark. Warning: at some sites using wire sniffing tools like WireShark can be a dismissible offence because you can see private data on the network. Be careful to check your organisation's policies before doing this.

Note if you're running the JAX-WS web services via the integrated WLS on the same localhost as SoapUI, you're most likely running through the localhost address. For various technical reasons WireShark cannot sniff packets through localhost or the MS loopback adapter in Windows. Instead we must separate our WLS and SoapUI installations, and place them on different hosts. Let's call them Box1 and Box2, with WLS and SoapUI installed respectively.

Once you have both up and running, determine the IP address of Box2. Let's say that IP address was: 101.102.103.104

a) Start WireShark. In the filter box top left enter: ip.addr == 101.102.103.104
b) Select the filter Apply button.
c) Select Capture -> Interfaces.
d) Select the Start button for your ethernet card.
e) WireShark is now sitting listening for traffic from the other IP address of Box2.
f) Now in SoapUI execute the request.
g) In WireShark you should see the incoming requests:

As WireShark works at the network level it sees the individual packets, several of which will comprise the request/response between SoapUI and WLS, effectively an incredible amount of detail. You can select each packet and look at the data contained within in the bottom window of the display. This window shows the data in both hex and raw text, so you'll need to carefully look to see the data contained within. Obviously if the traffic is encrypted you won't see much meaning at all, which is what we want!
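As promised in step 7, here is a rough sketch of the kind of thing a trust-all helper such as SSLUtilities.trustAllHttpsCertificates() typically does: it installs an X509TrustManager that accepts everything plus a hostname verifier that never complains. This is my own illustrative version using only standard JSSE classes, not Srgjan Srepfler's class, and it should obviously never go anywhere near production code.

import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSession;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.cert.X509Certificate;

public final class TrustAllForTesting {

    // Call once before the first HTTPS connection is made.
    public static void install() throws Exception {
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                public void checkClientTrusted(X509Certificate[] certs, String authType) { }
                public void checkServerTrusted(X509Certificate[] certs, String authType) { }
            }
        };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAll, new java.security.SecureRandom());
        HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());

        // Also skip the CN-versus-hostname check described earlier in this post.
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) { return true; }
        });
    }

    private TrustAllForTesting() { }
}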
To see the unencrypted traffic, remove the policy from your web service, redeploy it and run the same scenario again.

Thanks

I must extend my very strong thanks to Gerard Davison from Oracle UK for assistance with this article; Gerard's help has been invaluable. Any mistakes in this post are of course mine, of which I'm sure there will be a few in such a long post.

4 comments:

Coming to this after your post: another way of seeing the HTTPS traffic is using the "-Djavax.net.debug=ssl" system property. This dumps out all of the certs and all of the traffic to the console, and can be useful when debugging SSL issues.
Gerard

Thanks Chris, works well. I did find that I needed to add the following Java options to my client to avoid this runtime error.

Error
-----
Exception in thread "main" javax.xml.ws.WebServiceException: weblogic.wsee.wsdl.WsdlException: Failed to read wsdl file from url due to -- javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found at weblogic.wsee.jaxws.spi.WLSProvider.readWSDL(WLSProvider.java:322)

Solution
--------
-Djavax.net.ssl.trustStore=D:\jdev11gr1\jdeveloper\jdev\mywork\OneWaySSLWebService\SecureProxyClient\keys\ServerTrust.jks
-Djavax.net.ssl.trustStoreType=jks
-Djavax.net.ssl.trustStorePassword=ServerTrustStorePass

For reference as passed on by Pas, the following Metalink notes exist:
1. I have created a note for Gerard's demo. Note.882585.1 Securing a JAX-WS Web Service Within JDeveloper 11g R1 Using X509 Token Policy
2. Now I have created a note on your one which won't be visible for a few more hours. Note.884650.1 Securing a JAX-WS Web Service Within JDeveloper 11g R1 Using One way SSL

Thanks for the reference! :)
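Following up on the -Djavax.net.ssl.* options mentioned in the comments above: the same trust settings can also be applied programmatically, which avoids trusting every certificate. The sketch below uses the demonstration paths and passwords from this post; the property names are standard JSSE and nothing here is WLS-specific.

public class TrustStoreSetup {
    public static void main(String[] args) {
        // Equivalent to passing -Djavax.net.ssl.trustStore=... on the command line;
        // these must be set before the first SSL handshake is attempted.
        System.setProperty("javax.net.ssl.trustStore", "C:/temp/ServerTrust.jks");
        System.setProperty("javax.net.ssl.trustStoreType", "JKS");
        System.setProperty("javax.net.ssl.trustStorePassword", "ServerTrustStorePass");

        // ... then create the generated service proxy and call the operation as in step 7.
    }
}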
http://one-size-doesnt-fit-all.blogspot.com/2009/08/one-way-ssl-with-jax-ws-using.html
CC-MAIN-2017-13
en
refinedweb
An LED-based fire lamp (Last modified 8 Oct 11)

I call this a fire lamp because the LEDs, driven by an Atmel ATtiny85 MCU, provide a very realistic imitation of a flickering flame about the size of a tennis ball. Unfortunately, I've never bothered to buy a video camera, so I can't embed any live video. However, the flickering effect is excellent, without any strobing or blinking.

I used the Liteon LTL912SEKSA Piranha red LEDs for this build. These LEDs are available (Oct 2011) from Electronic Goldmine for $0.39 each. Unlike typical 3 mm or 5 mm LEDs, these devices will take up to 70 mA each, have a wide dispersion angle of 60 degrees, and put out nearly four lumens. They are commonly used in automobile brake lights and make a totally excellent light source for light art.

Closeup of the Liteon Piranha LEDs. Note that the pins are in a square pattern, spaced 0.2" apart.

The circuit is slightly more complex than a typical LED driver because of the LED current. The ATtiny can't source the needed 60 mA per LED, so I added a 2N2222 transistor in each LED control line. You can use just about any NPN transistor for this if you can't find any 2N2222 or PN2222 devices. You can check out the schematic here. ( firelamp.pdf ) The schematic includes a note for changing resistor values to use smaller (less current) LEDs.

I built the circuit up on a small breadboard, adding a 2-pin power connector for hooking up a wall-wart power supply. You can find small, 5 VDC switcher wall-warts in a lot of surplus or thrift stores nowadays; these make excellent power supplies for small projects. You can tell you're holding a switcher wall-wart because it will be very lightweight, only a few ounces. If you hook it up to AC and measure the output, you will see a value from about 4.9 to 5.1 VDC, unlike the 6 to 9 VDC put out by some of the older, unregulated (and heavier) wall-warts.

The circuit fits onto a 2" x 3" protoboard with plenty of room to spare. The capacitor C1 is shown as 25 uf, but just about anything between 10 and 1000 uf will work.

Using a small wall-wart and an old-fashioned socket plug lets you install the firelamp circuit board directly in a table lamp socket. This gives you a table lamp that looks like it has a small fire in it, rather than a bulb (which explains the project's name).

Here is the firelamp PCB and wall-wart, plugged into a socket plug and installed in a table lamp. I stuck the PCB onto the top of the wall-wart using foam tape, but hot-glue or RTV would probably work, as well. I didn't shorten the power cord on the wall-wart, just wound it around the body and tied it in place.

The firmware for this project was written in C in Atmel's AVR Studio 4, then pushed into the ATtiny using an AVRISP mkII programmer. The only tricky part of the firmware is the technique for doing pulse-width modulation (PWM) for the LEDs. I am really fussy about LED PWM. I don't want to see any strobing when my eyes scan past the LEDs. For this project, I used a PWM clock of 1 MHz / 256, or about 4 kHz.

Each LED is controlled by a 32-bit PWM mask. At each PWM clock (4000 times per second), the low bit of each LED's PWM mask is written to the output port for that LED. Intensity of each LED is varied by using a different PWM mask from the table of 32 possibilities. For example, using a PWM mask value of 0x55555555UL will give an LED intensity of about 50%, since the LED is on every other PWM clock. Feel free to play around with the timing.
You can modify the timer setup to use a compare-match instead of the overflow shown here, which would let you use an even faster PWM clock rate (a sketch of that variant appears after the listing below). Here is a .hex file you can burn directly into an ATtiny85 if you don't want to be bothered compiling. ( firelamp.hex )

/*
 * firelamp.c    PWM control of LEDs on an ATtiny85
 */

#include <avr/io.h>
#include <avr/pgmspace.h>
#include <avr/interrupt.h>

#define NUM_LEDS        6
#define MASK_LEDS       0x3f            /* assumes PB0 - PB5 */
#define NUM_PWM_STATES  32
#define MAX_PWM_STATE   (NUM_PWM_STATES-1)
#define PORT_LEDS       PORTB
#define DDR_LEDS        DDRB
#define DELAY           1000UL          /* general delay in tics */

const unsigned long int pwmvals[NUM_PWM_STATES] PROGMEM =
{
    0x00000000L, 0x00010000L, 0x00010001L, 0x80200800L,
    0x01010101L, 0x82080820L, 0x84108410L, 0x84112410L,
    0x11111111L, 0x11249111L, 0x12491249L, 0x25225252L,
    0x25252525L, 0x25525522L, 0x25552555L, 0x25555555L,
    0x55555555L, 0x55575555L, 0x57555755L, 0x57575755L,
    0x57575757L, 0x57577775L, 0x57777577L, 0x57777777L,
    0x77777777L, 0xf7777777L, 0xf777f777L, 0xf7f7f777L,
    0xf7f7f7f7L, 0xf7fff7f7L, 0xf7fffff7L, 0xffffffffL
};

uint8_t  bright[NUM_LEDS];
uint8_t  delta[NUM_LEDS];
uint32_t pwm[NUM_LEDS];
uint16_t delays[NUM_LEDS];
volatile uint16_t tics[NUM_LEDS];

/*
 * Local functions
 */
uint16_t readtics(uint8_t cntr);
void writetics(uint8_t cntr, uint16_t delay);
void assignpwm(uint8_t led);
static long unsigned int b_random(void);
static long unsigned int rnd(long unsigned int val);

int main(void)
{
    uint8_t n;
    unsigned long int nval;

    TCCR0B = (1<<CS01);                     // /8 prescaler
    TIMSK = (1<<TOIE0);                     // enable interrupt on TOF

    PORT_LEDS = PORT_LEDS & ~MASK_LEDS;     // turn off all LEDs
    DDR_LEDS = DDR_LEDS | MASK_LEDS;        // make all LED drive lines outputs

    for (n=0; n<NUM_LEDS; n++)
    {
        bright[n] = 0;                      // start with all brightness values at 0
        assignpwm(n);                       // make it so
        delays[n] = 200;                    // start with an arbitrary delay value
    }

    sei();                                  // turn on interrupts

    while (1)                               // main loop
    {
        for (n=0; n<NUM_LEDS; n++)          // for each LED
        {
            if (readtics(n) == 0)           // if done with current delay for this LED...
            {
                nval = rnd(20) + 11;
                bright[n] = nval & 0xff;    // update the brightness
                assignpwm(n);
                nval = rnd(750) + 250;      // calc a random delay
                writetics(n, nval & 0xffff);
            }
        }
    }
    return 0;
}

void assignpwm(uint8_t led)
{
    cli();                                  // do not disturb
    pwm[led] = pgm_read_dword(&pwmvals[bright[led]]);
    sei();
}

uint16_t readtics(uint8_t cntr)
{
    uint16_t t;

    t = tics[cntr];
    if (t != tics[cntr]) t = tics[cntr];
    return t;
}

void writetics(uint8_t cntr, uint16_t delay)
{
    cli();
    tics[cntr] = delay;
    sei();
}

/*
 * The following functions try to duplicate the ANSI random() function for 8-bit MCUs such as the
 * Atmel ATmega1284p. Seed is fixed at compile time.
 */
static long unsigned int b_random(void)
{
    static long unsigned int seed = 12345678L;

    seed = 1664525L * seed + 1013904223L;
    return seed;
}

static long unsigned int rnd(long unsigned int val)
{
    long unsigned int t;

    t = b_random();                         // compute a 32-bit random number
    val = t % val + 1L;                     // now keep it within requested range
    return val;
}

SIGNAL(TIM0_OVF_vect)
{
    uint8_t n;
    uint8_t mask;
    uint32_t t;

    mask = 0;
    for (n=0; n<NUM_LEDS; n++)              // for LED 0 through NUM_LED-1...
    {
        t = pwm[n] & 1;                     // get low bit of PWM for this LED
        mask = mask | (t << n);             // move low bit of PWM into proper bit of mask
        pwm[n] = (pwm[n] >> 1);             // move PWM value one bit to the right
        if (t) pwm[n] = pwm[n] + 0x80000000;    // rotate original low bit into high bit
        if (tics[n]) tics[n]--;             // drop this counter if not yet 0
    }
    PORT_LEDS = PORT_LEDS & ~(MASK_LEDS);   // strip off port lines dedicated to LEDs
    PORT_LEDS = PORT_LEDS | mask;           // turn on the LEDs
}
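As suggested above, the overflow interrupt can be swapped for a compare-match interrupt so the PWM tick is no longer tied to a full 256-count rollover. The fragment below is an untested sketch of that variant for the ATtiny85; the register and bit names are taken from avr-libc, but double-check them (and the compare value you choose) against the datasheet before using it.

/* Sketch: CTC (compare-match) timebase instead of the overflow interrupt. */
#include <avr/io.h>
#include <avr/interrupt.h>

void  timer_init_ctc(void)
{
    TCCR0A = (1<<WGM01);                /* CTC mode: count from 0 up to OCR0A     */
    TCCR0B = (1<<CS01);                 /* same /8 prescaler as the original code */
    OCR0A  = 124;                       /* tick every 125 counts instead of 256   */
    TIMSK  = (1<<OCIE0A);               /* interrupt on compare match A           */
}

ISR(TIM0_COMPA_vect)                    /* would replace SIGNAL(TIM0_OVF_vect)    */
{
    /* ...same PWM mask and tics bookkeeping as in the overflow ISR above... */
}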
http://www.seanet.com/~karllunt/firelamp.html
CC-MAIN-2017-13
en
refinedweb
I have a piece of server-ish software written in Java to run on Windows and OS X. (It is not running on a server, but just a normal user's PC - something like a torrent client.) I would like the software to signal to the OS to keep the machine awake (prevent it from going into sleep mode) while it is active. Of course I don't expect there to be a cross platform solution, but I would love to have some very minimal C programs/scripts that my app can spawn to inform the OS to stay awake. Any ideas?

I use this code to keep my workstation from locking. It's currently only set to move the mouse once every minute, you could easily adjust it though. It's a hack, not an elegant solution.

import java.awt.*;
import java.util.*;

public class Hal{

    public static void main(String[] args) throws Exception{
        Robot hal = new Robot();
        Random random = new Random();
        while(true){
            hal.delay(1000 * 60);
            int x = random.nextInt() % 640;
            int y = random.nextInt() % 480;
            hal.mouseMove(x,y);
        }
    }
}
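If moving the mouse feels too hacky, the original question (telling the OS to stay awake) can be addressed more directly on Windows by calling the Win32 SetThreadExecutionState function, for example via JNA. The sketch below assumes JNA 5.x is on the classpath; on OS X a similar effect can be had by spawning the caffeinate command (available on newer OS X releases). Treat both as starting points rather than drop-in solutions.

import com.sun.jna.Native;
import com.sun.jna.win32.StdCallLibrary;

public class KeepAwake {

    // Minimal JNA binding for kernel32!SetThreadExecutionState
    public interface Kernel32 extends StdCallLibrary {
        Kernel32 INSTANCE = Native.load("kernel32", Kernel32.class);

        int ES_CONTINUOUS      = 0x80000000;
        int ES_SYSTEM_REQUIRED = 0x00000001;

        int SetThreadExecutionState(int esFlags);
    }

    public static void keepSystemAwake() {
        String os = System.getProperty("os.name").toLowerCase();
        if (os.contains("win")) {
            // Tell Windows the system is required until the flag is cleared again.
            Kernel32.INSTANCE.SetThreadExecutionState(
                Kernel32.ES_CONTINUOUS | Kernel32.ES_SYSTEM_REQUIRED);
        } else if (os.contains("mac")) {
            try {
                // caffeinate keeps the Mac awake for as long as the child process runs.
                Runtime.getRuntime().exec(new String[] { "caffeinate", "-i" });
            } catch (java.io.IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void allowSleepAgain() {
        if (System.getProperty("os.name").toLowerCase().contains("win")) {
            Kernel32.INSTANCE.SetThreadExecutionState(Kernel32.ES_CONTINUOUS);
        }
    }
}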
https://codedump.io/share/YhB1cm12ZVc6/1/how-do-you-keep-the-machine-awake
CC-MAIN-2017-13
en
refinedweb
I haven't been able to get gprof to output results that make any sense for some time now. I'd be interested if you actually get it to work.
Marc

Manolo to Anton Shepelev:
>> while they clearly take different times to complete.
>
> How do you know this? Have you measured times?

I measured the times manually, but here I do it in the code:

#include "test.h"
#include <stdlib.h>
#include <stdio.h>
#include <time.h>

void genwait( unsigned t )
{
   clock_t before, after;
   unsigned i;
   char line[3];
   double time_sec;

   before = clock();
   for( i = 0; i < t; i++ )
   {
      sprintf(line, "a");
   }
   after = clock();
   time_sec = ((double)(after - before)) / CLOCKS_PER_SEC;
   printf( "%i cycles took %2.3g seconds.\n", t, time_sec );
}

void wait1() { genwait(100000000);  }
void wait2() { genwait(800000000);  }
void wait3() { genwait(1600000000); }

void WorkHard()
{
   int i=0;
   wait1();
   wait2();
   wait3();
}

And the program outputs different execution times:

100000000 cycles took 0.344 seconds.
800000000 cycles took 2.86 seconds.
1600000000 cycles took 5.7 seconds.

The modified test sample is here: but gprof still shows the average time of about 2.7 seconds for each test run.
http://mingw.5.n7.nabble.com/Strange-gprof-results-td35276.html
CC-MAIN-2017-13
en
refinedweb
wikiHow to Break up With Your Significant Other when You Are Already Dating Someone Else Nobody enjoys having to break up with someone––but it can be even more difficult when you’ve already moved on both mentally and in action, and have a new significant other in your life. If you've already started seeing someone new but haven’t mustered the courage yet to break it off with your current squeeze, it's vital that you do so, including clarifying things for the new person in your life who will need reassuring that you're not flip-flopping between lovers. This article suggests some steps to help ease the transition. And the sooner you do it, the better because eventually it's all going to get found out! Steps - 1Evaluate your original relationship. Consider why you started seeing someone else while you were still in a relationship. Did you and your significant other simply grow apart or did something happen that made you stray? It’s important to understand why you started dating another person in order to make the break up as painless as possible. Make a list of at least three reasons why you may have mentally left your original relationship and started dating another person. - How compelling are the reasons? Are they enough to stay with the new person or do you feel that this has been a big mistake? You need to know this now before you're a pond full of regrets. - 2Ponder your current relationship. Perform the same mental analysis with your new steady as you did with your original mate. Why did you start dating this person and what attracted you to the relationship? Most importantly, does the new person know that you're currently dating someone else? If your new boyfriend or girlfriend is in the dark, this may cause problems later down the road, especially if you become serious and yet you've not acted as if you have treated the relationship seriously. As with your original mate, list three or more reasons why you have entered into this new relationship and how it will differ from the previous relationship. - Are these reasons compelling enough to want your new date to completely take the place of your current lover? Again, ensure that there is no ambiguity in your reasoning. - 3Check your calendar for the best time to meet with your original boyfriend or girlfriend. Timing is everything. Avoid major life events such as holidays, birthdays or anniversaries––especially if the anniversary marks a sad occasion such as the death of a loved one. Select a totally random day––one that should have no meaning to you or your current mate. However, don't use an inability to select "the right day" as an excuse not to get this over and done with. The sooner that you deal with breaking up, the better for both of you. - 4Choose a location for the break-up. Always break up in person––never on the phone, by mail or text. You owe the other person a face-to-face meeting. However, if you believe the break-up could be filled with intense drama, choose a public place, but avoid crowded, intimate restaurants. If your significant other decides to explode, he or she may not be concerned with the surroundings and have a very public reaction. Additionally, consider a place where you can make a quick getaway. Waiting to pay for the check at a restaurant can be very awkward, so head to a destination that will provide you with mobility. 
Some suggestions for places include: - A spacious outdoor park (away from kids and playground equipment) - A shopping mall - The gym - A coffee shop - A bar and grille - The beach - An athletics park. - Places to avoid: - An intimate restaurant - Your favorite place to go as a couple - The movies - Your or his/her home––however, some people feel more comfortable breaking up with someone from their own home turf if they're the only one living there, so this depends on the context - While on vacation - A play or concert. - 5Let your new boyfriend/girlfriend know you plan to break up with your original mate. If you haven’t already told your new steady that you had someone else, now is a good time. If you want to have a strong, honest relationship with your new boyfriend or girlfriend, it’s imperative you alert your new honey to the situation. Along the same lines as breaking up with your other mate, choose a random day and place to tell your new boyfriend/girlfriend about the other person. - Begin the conversation by reinforcing your feelings for him/her. - Explain how your life has changed since you met him/her. - Discuss your plans for the future with the new person. - Gently tell him or her that you have current boyfriend/girlfriend, but that you will be breaking up on a certain date and why you plan to break up. - Reassure your new boyfriend/girlfriend that the break-up will truly result in the end of that relationship. - 6Contact your original boyfriend/girlfriend to arrange for a meeting in order to break up. Don’t tell the other person over the phone, email or text why you want to meet, but simply ask if you can meet on a certain day and time to talk. Don’t make a lot of small talk on the phone and definitely do not say things like, “I love you” or “I miss you.” Avoid confusing the situation--even if the other person is the one who says it first. Stand strong but be gentle. - 7Prepare for the meeting. If you have to rehearse the delivery, do it. Just don't have notecards out in front of you and refer to them while you're breaking up. Punctuate the other person’s positive qualities first but make no qualms about why you're there––to break up. Ask the other person if they were truly happy in the relationship. You may be surprised to learn that he or she wasn’t happy either. (Be prepared for them to say they were though, in which case, asking them will backfire on you and you'll have to apologize and recognize that they were happy but explain that you're still not.) Other points to consider: - Avoid telling the other person that they drove you into the arms of another––that will only escalate into an unproductive discussion and says more about your inability to be independent-minded than it does about them. It's not a tactic to escape unscathed; it's a way of telling your soon-to-be ex that you're making excuses. - Don’t lead the other person on to think that you could possibly get back together. Make it clear that it's over. - Don’t point fingers––it takes two to make a relationship work (or not work). Acknowledge your own faults, lack of participation and inability to contribute fully to the relationship. - Don't drag out the past––remain in the “here and now” instead of talking about the time he or she kissed someone else, for example. The idea is to not apportion blame or to try to make your soon-to-be ex look bad; rather, help them to see that this is ultimately a good decision for the two of you. - 8Be on time for the meeting. 
Show the other person respect by being prompt and exactly in the place where you agreed to meet, at the time you agreed. If you know that they're never prompt, take something along to do to pass the time so that you avoid getting frustrated waiting for them. Take a book, your eReader or play phone games. Just resolve to stay calm until they arrive (and after, of course). - 9Remain calm and in control throughout the discussion. Keeping in control of a conversation means being ready to open it and to lead with the news of the break up as quickly as possible. Also be prepare to ask questions as much as or more even than you're asked questions, questions about how the other person is taking the news, how they're feeling and what they'll do next. By making them respond to your questions, it shows that you care enough about their welfare to be interested but also deflects a focus off you all of the time, as they're forced to think over how they're taking it and how they're going to move on. - All the same, anticipate the possibility that your significant other could flip out so keep that in mind during your break up delivery. If you remain calm, perhaps you can tone down the situation. - If they have items in your home, be sure to allow them plenty of space to retrieve their things without pressure or anxiety. You could even offer to have them delivered but don't sound like you don't want them to collect their own things if they want to. - 10Keep an eye on the time. Don't allow the break up to last more than an hour. You owe the other person the time to discuss his or her feelings, but you don’t want to drag the break up out for hours; doing so will just encourage unhealthy wallowing and your ex will be tempted to raise a whole raft of reasons why this shouldn't be happening and why you need to reconsider. Have a good excuse ready such as meeting someone else, having to get work done or needing to get to bed early for an early meeting, etc. Offer to drop them back home if it helps or to shout them a taxi ride. - 11Try to end the meeting on a good note. This may be impossible, especially if the other person wasn’t expecting it or didn’t want to break up. If the other person storms off, there is nothing you can do. However, if you can end it amicably, wish the other person well and you can even hug. Don’t make plans to see them soon or say, “Let’s be friends.” The break up is still too fresh to identify any future plans or friendship dynamic. - 12After speaking with your “now former” significant other, arrange to meet your new squeeze to reassure him or her that you went through with the break up. They will need to be sure that you went through with it and that things are truly over and done with, allowing the two of you to proceed forward happily and with strength as an unencumbered couple. Community Q&A Tips - You could also try breaking up with the person the minute you lose interest, as opposed to waiting until you've met and become involved with someone else. Have a heart. Don't play with your relationships, they are not a joke. - If you run into your former flame, while with your new boyfriend/girlfriend do not flaunt your new relationship. Of course don’t hide your boyfriend/girlfriend but be cordial and friendly––no PDA or mushy talk. - Depending on how serious you were with the other person, avoid bringing any personal items to the break-up such as jewelry or symbolic gifts to return (i.e. a special teddy bears or birthday gift). 
The break-up speech is not the day to unload personal items––it will only pour salt into the other person's wounds. These items can be returned more discreetly at a later, but not too distant, date.
Warnings
- There is always a risk that your new flame won't like any of this and will feel betrayed that you hadn't already ended a former relationship before entering a new one.
- If your original boyfriend or girlfriend won't accept your initial break-up, repeat the steps above one more time. Re-evaluate your behavior to determine if you are doing anything to lead the person on or if you are giving him or her false hope. If not, eliminate all contact with the other person if he or she still will not accept that you are going to break up.
- If you feel as if your former flame continues to pursue you even though you have asked him or her to stop, say that you may seek a restraining order. Hopefully, simply saying it will get the other person to back off. If it doesn't and you feel uncomfortable, proceed with the restraining order.
http://www.wikihow.com/Break-up-With-Your-Significant-Other-when-You-Are-Already-Dating-Someone-Else
CC-MAIN-2017-13
en
refinedweb
I am following these directions: the instructions give the steps for installing Pygame for Python 3 on Ubuntu. I am having no problems with it until I reach the

python3 setup.py build

step:

Traceback (most recent call last):
  File "setup.py", line 109, in <module>
    from setuptools import setup, find_packages
ImportError: No module named 'setuptools'

import pygame

sudo apt install python3-setuptools

^ separate from Python 2 setuptools. Per this answer.
https://codedump.io/share/Pn2VQV9IB9l5/1/pygame-for-python-3---quotsetuppy-buildquot-command-error
CC-MAIN-2017-13
en
refinedweb
: 3008 Author: m-schindler Date: 2009-06-09 11:41:40 +0000 (Tue, 09 Jun 2009) Log Message: ----------- added an example for axis titles Modified Paths: -------------- trunk/pyx/examples/graphs/axis.py trunk/pyx/examples/graphs/axis.txt Modified: trunk/pyx/examples/graphs/axis.py =================================================================== --- trunk/pyx/examples/graphs/axis.py 2009-05-21 15:19:20 UTC (rev 3007) +++ trunk/pyx/examples/graphs/axis.py 2009-06-09 11:41:40 UTC (rev 3008) @@ -1,8 +1,8 @@ from pyx import * g = graph.graphxy(width=8, - x=graph.axis.log(min=1e-1, max=1e4), - y=graph.axis.lin(max=5)) + x=graph.axis.log(min=1e-1, max=1e4, title=r"$x$-axis"), + y=graph.axis.lin(max=5, title=r"$y$-axis")) g.plot(graph.data.function("y(x)=tan(log(1/x))**2")) g.writeEPSfile("axis") g.writePDFfile("axis") Modified: trunk/pyx/examples/graphs/axis.txt =================================================================== --- trunk/pyx/examples/graphs/axis.txt 2009-05-21 15:19:20 UTC (rev 3007) +++ trunk/pyx/examples/graphs/axis.txt 2009-06-09 11:41:40 UTC (rev 3008) @@ -1,4 +1,4 @@ -Using a logarithmic axis and defining the axis range +Using a logarithmic axis and defining the axis range and its label Sometimes, you have to visualize rather pathological funtions containing divergencies and/or varying over a large parameter range. This is best done @@ -19,7 +19,8 @@ the sampling points choosen by PyX along the x-direction. Since the function only diverges towards positive y-values, we only need to set the maximal value of the y-axis (using the `max`-argument). Of course, you can also fix the lower end -of the axis range by providing a `min`-argument. +of the axis range by providing a `min`-argument. In order to introduce the +`title` keyword, we give names to the axes. ! Note how PyX changes the way the x-axis is drawn. Instead of simple decimal numbers, an exponential notation is used. This happens automatically by This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
https://sourceforge.net/p/pyx/mailman/pyx-checkins/?viewmonth=200906&viewday=9&style=flat
CC-MAIN-2017-39
en
refinedweb
Accessor of the second element of a Lisp cons cell. Unlike the accessor of the first element, car, it does not suffer a namespace collision with a common English word.
--The Jargon File version 4.3.1, ed. ESR, autonoded by rescdsk.
https://everything2.com/title/CDR
CC-MAIN-2017-39
en
refinedweb
After realizing that I can't dispatch a NavigationEvent through my MXML file, which I don't tend to configure in the context file, I used the presentation model to dispatch the event (see). However, I find it an abuse to control navigation through the PM; as I see it, that is not its responsibility. I thought about using the ContentDestination class to do the job, configuring it as a singleton Parsley object and injecting it into the relevant MXML files. For example:

package com.structures.main.presentation
{
    import com.adobe.cairngorm.navigation.NavigationEvent;
    import flash.events.EventDispatcher;

    [Event(name="navigateTo", type="com.adobe.cairngorm.navigation.NavigationEvent")]
    [ManagedEvents(names="navigateTo")]
    public class MainContentDestination extends EventDispatcher
    {
        public static const BACK_OFFICE:String = "structures.backoffice";
        public static const LOGIN:String = "structures.login";

        public function navigateTo(destination:String):void
        {
            dispatchEvent(NavigationEvent.newNavigateToEvent(destination));
        }
    }
}

Any insights about it?
Lior.
https://forums.adobe.com/thread/587324
CC-MAIN-2017-39
en
refinedweb
I am trying to create an instance of StageWebView:

var stageview = new window.runtime.flash.media.StageWebView();

I'm getting the error:

TypeError: Results of expression 'window.runtime.flash.media.StageWebView' [] is not a constructor.

I'm guessing this is because the StageWebView object does not exist. I have Adobe AIR 2.7 installed. When I generate a preview build through Dreamweaver my user agent string says I am running 2.0.2, but when I compile a build it says my version is 2.7. Either way I can't create a StageWebView object. Any ideas?

Well, first of all, why create a StageWebView object on the desktop, in an HTML-based AIR app? But as to the error, it sounds like Dreamweaver is using the AIR 2.02 version of the SDK. The StageWebView class is only available in AIR 2.5+. This is why it won't work when previewing. At runtime, it probably doesn't work because Dreamweaver is undoubtedly setting the namespace to AIR 2 as well, which controls the APIs that are available even when running under a newer runtime. You can try finding and editing the application descriptor manually to set the namespace version to 2.7 (but I'm unsure whether or not Dreamweaver will fight you here).
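For reference, the namespace the answer refers to sits at the top of the AIR application descriptor (typically a file named something like YourApp-app.xml; the exact name and contents depend on how Dreamweaver generated your project). A descriptor targeting the AIR 2.7 API set starts roughly like this; the id and version values below are placeholders.

<?xml version="1.0" encoding="utf-8"?>
<!-- The namespace version controls which runtime APIs (e.g. StageWebView) are available -->
<application xmlns="http://ns.adobe.com/air/application/2.7">
    <id>com.example.StageWebViewTest</id>
    <versionNumber>1.0</versionNumber>
    <!-- ... remaining descriptor elements unchanged ... -->
</application>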
https://forums.adobe.com/thread/870662
CC-MAIN-2017-39
en
refinedweb
I 've looked through the docs and googled the heck out of this list and the 'net, but can't find a single working example of a slip setup. I need to use slip because I am running the UML in a VPS (with no kernel tun/tap support, exactly the situation pointed out in the docs as the only reason to use slip) There's a lot of commentary about how slip was the first way UML was connected to the 'net and how it has drawbacks, but unfortunately the examples present in the documentation make a lot of assumptions about the host. I get the feeling the author genuinely believed that using slip was a bad thing, and so avoided giving a useful example as a way of discouraging its use. Here's my config so far (all taken from the docs, but not verbatim, as it would collide with my upstream router) #HOST ./linux-2.6.24-rc7 ubda=Fedora.cow,FedoraCore5-x86-root_fs eth0=slip,192.168.0.1 [root@localhost ~]# dmesg Linux version 2.6.24-rc7-dirty ([email protected]) (gcc version 4.1.2 20070925 (Red Hat 4.1.2-27)) #97 Mon Jan 7 11:18:24 EST 2008 [...snip...] net_namespace: 64 bytes Using 2.6 host AIO NET: Registered protocol family 16 NET: Registered protocol family 2 Time: itimer clocksource has been installed. Switched to high resolution mode on CPU 0 IP route cache hash table entries: 1024 (order: 0, 4096 bytes) TCP established hash table entries: 2048 (order: 2, 16384 bytes) TCP bind hash table entries: 2048 (order: 1, 8192 bytes) TCP: Hash tables configured (established 2048 bind 2048) TCP reno registered Checking host MADV_REMOVE support...<3>MADV_REMOVE failed, err = -38 Can't release memory to the host - memory hotplug won't be supported mconsole (version 2) initialized on /root/.uml/wOqluE/mconsole [...snip...] TCP cubic registered NET: Registered protocol family 1 NET: Registered protocol family 17 Initialized stdio console driver Console initialized on /dev/tty0 console [tty0] enabled Initializing software serial port version 1 Choosing a random ethernet address for device eth0 Netdevice 0 (66:35:f6:cf:fb:ab) : SLIP backend - SLIP IP = 192.168.0.1 console [mc-1] enabled [...snip...] Setting slip line discipline: Invalid argument uml_net : waitpid process 15689 failed, errno = 10 slip_tramp failed - err = 10 [root@localhost ~]# ~]# ifconfig eth0 up Setting slip line discipline: Invalid argument uml_net : waitpid process 27924 failed, errno = 10 get_ifname failed, err = 22 SIOCSIFFLAGS: Invalid argument [root@localhost ~]# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 169.254.0.0 * 255.255.0.0 U 0 0 0 lo [from the docs, 2 steps to configure the interface for slip, albeit not explicitly given.] [root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes IPADDR=192.168.0.100 GATEWAY=192.168.0.1 NETMASK=255.255.255.0 [root@localhost ~]# halt #HOST ifconfig sl0 192.168.0.1 pointopoint 192.168.0.100 up SIOCSIFADDR: No such device sl0: unknown interface: No such device SIOCSIFDSTADDR: No such device sl0: unknown interface: No such device sl0: unknown interface: No such device Best guess at this point is I'm missing something here to create the virtual slip network interface, and possibly the other portion is failing because it is not present, but the errors are not so helpful. "Setting slip line discipline: Invalid argument" "uml_net : waitpid process 27924 failed, errno = 10" "get_ifname failed, err = 22" "SIOCSIFFLAGS: Invalid argument" - sounds like the guest side (or echoing the host side, vague) was unable to find the slip interface.
https://sourceforge.net/p/user-mode-linux/mailman/attachment/[email protected]/1/
CC-MAIN-2017-39
en
refinedweb
U++ SQL Basic Use and Description

For this section, the example used will be oriented to PostgreSQL use. See the SQL example packages provided in the U++ examples for using MySQL and SQLite as well.

The Schema description file (.sch file)

In each schema description file, you describe the table and column layout of your database.

PostgreSQL example ("person_db.sch"):

TABLE_ (PERSON)
  SERIAL_ (PERSON_ID) PRIMARY_KEY
  STRING_ (NAME, 25)
  DATE_ (BIRTH_DATE)
  INT_ (NUM_CHILDREN)
  DATE_ (DATE_ADDED) SQLDEFAULT(CURRENT_DATE)
END_TABLE

TABLE_ (EMPLOYEE)
  SERIAL_ (EMPLOYEE_ID) PRIMARY_KEY
  STRING_ (DEPARTMENT, 50)
  STRING_ (LOCATION, 50)
  DATE_ (START_DATE)
  BOOL_ (IS_SUPERVISOR)
  TIME_ (WORKDAY_START)
  TIME_ (WORKDAY_END)
  INT64 (PERSON_ID) REFERENCES(PERSON.PERSON_ID)
END_TABLE

In this schema, we have described a 'person' table and an 'employee' table, with the foreign key 1 to 1 relationship "an employee is a person". The different types mentioned in this example map to SQL types. More information about types should be referenced by looking at the source code header files for the database type. In this example, all of the types referenced are defined in the file "PostgreSQLSchema.h" from the "PostgreSQL" U++ package.

Each type declaration has 2 variants: one with an underscore "_" and one without. When an underscore is used, an SqlId object is automatically created for use as a variable in your source files. When not used, you must manually define the SqlId object in your source. Reference the SqlId objects section below for further explanation.

Note: if you use a name more than once, you should use an underscore only the first time you declare the name, otherwise you will get "already defined" compilation errors. This is shown in the above example where the column name "PERSON_ID" is used twice; there is an underscore only the first time it is used.

Source Files (for PostgreSQL example)

Header file includes/defines ("person.hpp"):

#include <PostgreSQL/PostgreSQL.h>
#define SCHEMADIALECT <PostgreSQL/PostgreSQLSchema.h>
#define MODEL <MyPackage/person_db.sch>
#include "Sql/sch_header.h"

Source file includes ("person.cpp"):

#include "person.hpp"
#include <Sql/sch_schema.h>
#include <Sql/sch_source.h>

Session objects:

PostgreSQLSession m_session;

The session object is used to control the connection and session information. Each database dialect has its own session object to use.

Database connection using session:

bool good_conn = m_session.Open("host=localhost dbname=persons user=user1 password=pass1");

The Open() function returns true or false depending on the success of connecting to the database.

SqlId objects:

SqlId objects aid the formation of SQL statements by mapping database field/column names to local variables.

SqlId all("*");
SqlId person_name("NAME");

We will now be able to use "all" and "person_name" in our SQL CRUD statements in our code. As mentioned previously, all of the declarations in our schema file that use the underscore type variants will automatically be declared as SqlId variables we can access in our source code.

Example use of SqlId variables:

sql * Insert(PERSON)(NAME, "John Smith")
                    (BIRTH_DATE, Date(1980,8,20))
                    (NUM_CHILDREN, 1);

The variables PERSON, NAME, BIRTH_DATE, NUM_CHILDREN were available to us even though we didn't define them in our source. We could have also used the variable person_name instead of NAME, as we defined it ourselves.

Sql objects

Sql objects are used for CRUD operations on the database; they operate on a session.
Sql sql(m_session); // define Sql object to act on Session object m_session

Queries

Select example:

sql * Select(all).From(PERSON).Where(person_name == "John Smith");

Note: Here we can use "all" because we defined it as an SqlId variable above (same goes for "person_name").

Exceptions vs. checking for errors

There are 2 ways to make SQL statements.

1. Manual error checking. Manual error checking uses the asterisk ("*") operator when writing SQL statements.

sql * Select(all).From(PERSON).Where(NAME == "John Smith");
if(sql.IsError()){
    Cout() << m_session.GetErrorCodeString() << "\n";
}

2. Exception handling. Specify exception handling by using the ampersand ("&") operator when writing SQL statements.

try{
    sql & Select(all).From(PERSON).Where(NAME == "John Smith");
}catch(SqlExc& err){
    Cout() << err << "\n"; // Or we can get the error from the session too...
}

*Remember, SqlExc is a subclass of Exc, which is a subclass of String, so it can be used as a string to get its error.

Getting Values from Sql Queries

The Fetch() method will fetch the next row resulting from the query into the Sql object and return true. If there are no more rows to fetch, it will return false.

while(sql.Fetch()){
    Cout() << Format("Row: %s %s %s \n",
                     AsString(sql[NAME]),
                     AsString(sql[BIRTH_DATE]),
                     AsString(sql[NUM_CHILDREN]));
}

You can reference each column of the fetched row by SqlId as above, or by integer array index (i.e. "sql[0]"). Notice the use of AsString() here. sql[id] returns a U++ Value type object. You can then convert that Value type to its appropriate type afterward.
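To tie the pieces above together, here is a small end-to-end sketch in the same style. It only uses constructs shown on this page (Open, Insert, Select, Fetch, and the '*' error-checking operator); the surrounding function and the sample values are illustrative assumptions, not part of the original documentation.

// Illustrative sketch: insert one person, then read everyone back.
// Assumes the schema, includes, and SqlId definitions shown above.
void DemoPersonTable()
{
    PostgreSQLSession m_session;
    if(!m_session.Open("host=localhost dbname=persons user=user1 password=pass1"))
        return; // could not connect

    Sql sql(m_session);

    // Insert a row, checking for errors manually (the '*' operator)
    sql * Insert(PERSON)(NAME, "Jane Doe")
                        (BIRTH_DATE, Date(1975, 3, 14))
                        (NUM_CHILDREN, 2);
    if(sql.IsError())
        Cout() << m_session.GetErrorCodeString() << "\n";

    // Read all rows back and print them
    sql * Select(all).From(PERSON);
    while(sql.Fetch())
        Cout() << Format("%s born %s, %s children\n",
                         AsString(sql[NAME]),
                         AsString(sql[BIRTH_DATE]),
                         AsString(sql[NUM_CHILDREN]));
}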
https://www.ultimatepp.org/srcdoc$Sql$BasicUse$en-us.html
CC-MAIN-2017-39
en
refinedweb
Hello,

I don't quite understand how to integrate Kafka and Flink; after a lot of thought and hours of reading I feel I'm still missing something important.

So far I haven't found a non-trivial but simple example of a stream of a custom class (POJO). It would be good to have such an example in the Flink docs. I can think of many scenarios in which using SimpleStringSchema is not an option, but all Kafka+Flink guides insist on using it.

Maybe we can add a simple example to the documentation [1]; it would be really helpful for many of us. Also, explaining how to create a Flink De/SerializationSchema from a Kafka De/Serializer would be really useful and would save a lot of time for a lot of people; it's not clear why you need both of them, or whether you need both of them at all.

As far as I know Avro is a common choice for serialization, but I've read that Kryo's performance is much better (true?). I guess, though, that the fastest serialization approach is writing your own de/serializer.

1. What do you think about adding some thoughts on this to the documentation?
2. Can anyone provide an example for the following class?

---
public class Product {
    public String code;
    public double price;
    public String description;
    public long created;
}
---

Regards,
Matt

[1]
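For readers landing here with the same question, a rough sketch of one possible answer follows; it is not part of the original message. It assumes the Kafka records carry JSON-encoded Product objects and uses Jackson for the mapping, and the exact package of DeserializationSchema varies between Flink versions, so treat it as an outline rather than a definitive implementation.

import java.io.IOException;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ProductSchema implements DeserializationSchema<Product> {

    // static so the schema itself stays trivially serializable
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public Product deserialize(byte[] message) throws IOException {
        // each Kafka record is assumed to be one JSON-encoded Product
        return MAPPER.readValue(message, Product.class);
    }

    @Override
    public boolean isEndOfStream(Product nextElement) {
        return false; // the stream never ends
    }

    @Override
    public TypeInformation<Product> getProducedType() {
        return TypeExtractor.getForClass(Product.class);
    }
}

An instance of this schema would then be passed to the FlinkKafkaConsumer constructor in the place where the guides use SimpleStringSchema; the producer side is the same idea in reverse, implementing SerializationSchema's single serialize(Product) method.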
http://mail-archives.apache.org/mod_mbox/flink-user/201612.mbox/%3CCAJYNqU+1eLsYM4J89HhA-SYDiSH6Vuyqkfg9hjsnM71PW-XmAw@mail.gmail.com%3E
CC-MAIN-2017-39
en
refinedweb
Earlier this week, we looked at how the v4 CLR continued the evolution of the security transparency model that started in v2 and started evolving with Silverlight in order to make it the primary security enforcement mechanism of the .NET 4 runtime. The result is that the v4 transparency model, while having roots in the v2 transparency model, is also somewhat different in both the rules that it enforces and how it enforces them. These differences are enough that code written for the v2 transparency model will not likely run without some modifications in the v4 model. Since the v4 runtime is compatible with v2 assemblies, the CLR security system needs to provide a way for code written for the older v2 transparency model to continue to run until it is updated to work in the more modern v4 transparency model. This was done by splitting the transparency models up into two rule sets: - Level 1 - the security transparency model that shipped in the v2 CLR - Level 2 - the security transparency model that ships with the v4 CLR Assemblies built against the v2 .NET framework are automatically considered to be level 1 assemblies - after all, if they were written before the v4 transparency model even shipped how could they possibly be written to use that model? Similarly, assemblies built against the v4 runtime are by default considered to be using the level 2 model. Since level 1 exists largely for compatibility reasons, new code starts out automatically using the modern transparency enforcement system. What about existing code bases that are simply being recompiled for v4 however? Those assemblies were also not written with the v4 transparency rules in mind, so it doesn't follow that a simple recompile has fixed up the assembly's code to understand the new security rules. In fact, the first step in moving v2 code to v4 is very likely trying to simply getting it to compile with as few source changes as possible. For assemblies in this bucket, the CLR offers an attribute to lock an assembly (even though it is built for v4) back to the level 1 security transparency rules. In order to do that, all the assembly needs to do is apply the following assembly level attribute: [assembly: SecurityRules(SecurityRuleSet.Level1)] (Both the SecurityRulesAttribute and the SecurityRuleSet enumeration live in the System.Security namespace) Adding this attribute unblocks the assembly being recompiled from being forced to update to the new security transparency model immediately, allowing you more time to make that transition. When the assembly is ready to move forward to the v4 transparency model, the level 1 attribute can simply be replaced with the equivalent attribute stating that the assembly is now going to be using the level 2 rules: [assembly: SecurityRules(SecurityRuleSet.Level2)] Although this isn't strictly necessary, as level 2 is the default for all assemblies built against the v4 runtime, I consider it a good practice to explicitly attribute assemblies with the security rule set that they are written to use. Being explicit, rather than relying on defaults, future proofs your code by having it be very clear about the security model that it understands.
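As a concrete (if trivial) illustration of where these attributes normally live, a recompiled legacy assembly might carry something like the following in its AssemblyInfo.cs. The file name and comments are just the usual conventions, not anything mandated by the transparency model.

// AssemblyInfo.cs (illustrative)
using System.Security;

// Keep the v2-era transparency rules while the code is being migrated...
[assembly: SecurityRules(SecurityRuleSet.Level1)]

// ...and once the migration to the v4 rules is complete, replace it with:
// [assembly: SecurityRules(SecurityRuleSet.Level2)]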
https://blogs.msdn.microsoft.com/shawnfa/2009/11/11/transparency-models-a-tale-of-two-levels/
CC-MAIN-2017-39
en
refinedweb
Tips for Localizing Windows Phone 8 XAML Apps – Part1 This blog post was authored by Dan Zucker, a program manager on the Windows Phone team. – Adam This is Part 1 of a two part series. Part 1 gives you a deeper understanding of the standard pattern for localizing a Windows Phone 8 XAML app along with a lesson learned or two. In Part 2, I describe how the powerful Multilingual App Toolkit , which has been released for the Windows Phone SDK 8.0, can make the end-to-end process of localizing your app substantially easier. Phone section of MSDN. You are ready to factor globalization and localization into your app design. How can you best take advantage of the Windows Phone SDK 8.0? What’s new? The first thing to be aware of is that the new projects and templates for XAML apps provide several new helpful features: - Projects start you out with a neutral language resource file in place (AppResources.resx). - Templates contain commented sample code for binding resource strings in XAML. - Templates contain the LocalizedStrings helper class already configured to provide easy code access to the resources that match the current culture of an app. - Templates also contain sample code for localizing the app bar, including accessing resource strings in code-behind. - Templates contain initialization code and locale-specific parameter resources that insure that fonts are rendered correctly for all languages (xml:lang and traditional flow direction are explicitly set for the RootFrame—and if that’s not the pattern you want it’s easy to modify). - Adding a Supported Culture from the Project Properties in Visual Studio will cause a new resource file with locale-specific name and app language initialization parameters in place to be created. STANDARD LOCALIZING STEPS With the list of new Windows Phone 8 localization features in mind, I created a hard-coded English language version of an illustrative sample app I called the Humanitarian Reader. Before I dive into the custom localizing code in my app, I’ll review the standard steps I took to localize the hard-coded version. If you are localizing a Windows Phone 7.1 project, see Using the Windows Phone 8.0 SDK to localize Windows Phone OS 7.1 projects before proceeding. Getting into a bind (in a good way) As I mentioned above, I first created the hard-coded English language version of a sample app. Next, I bound the XAML text elements to string resources. I did this by copying each hard-coded string in the app’s XAML that I wanted localized to a new row in the string table of my AppResources.resx file. I gave it a unique name. Then, I caused the XAML element to refer to this resource by adding the unique name and a standard binding clause in place of the hard-coded value. The original XAML appears as follows: - <TextBlock Text=”Application Title” Style=”{StaticResource PhoneTextNormalStyle}“/> The updated XAML is bound to the localized resource: - <TextBlock x:Name=”AppTitleTextBlock” - Text=”{Binding Path=LocalizedResources.ApplicationTitle, Source={StaticResource LocalizedStrings}}“ - Style=”{StaticResource PhoneTextNormalStyle}“/> Then I searched through the project’s code-behind for each place my code modified a text attribute of a UI element. I replaced the hard-coded value with a reference to the resource string for each element. 
Therefore, the original code appears as follows: - ApplicationBarMenuItem about_appBarMenuItem = - new ApplicationBarMenuItem(“menu item”); The updated code has been modified to refer to the localized resource: - ApplicationBarMenuItem about_appBarMenuItem = - new ApplicationBarMenuItem(AppResources.AppBarAboutMenuItem); Adding languages Be sure to read Part 2 of this blog (coming soon), where I will describe how the powerful Multilingual App Toolkit, soon to be released for Windows Phone 8, can be used to make the following steps even easier. Adding languages to a Windows Phone 8 project in Visual Studio was simple. I navigated to the project’s property page and selected the languages I wanted from the Supported Cultures list. I added Arabic (Saudi Arabia), Chinese (Simplified), and Spanish (Spain), whose locale codes are ar-SA, zh-Hans, and es-ES, respectively. When I saved the project, Visual Studio created and initialized a new copy of the AppResources.resx file for each locale. How the resource file is initialized As I mentioned, the initialization of a newly created resource file uses the locale-based .resx file name. For example, selecting “Arabic (Saudi Arabia)” adds the resource file AppResources.ar-SA.resx. The newly created resource file is prepopulated with the existing resources from the AppResources.resx file. Included in each resource file are two very special resources named ResourceLanguage and ResourceFlowDirection. These two resources are used when the InitializeLanguage method is called from the App.xaml.cs constructor. The ResourceLanguage and ResourceFlowDirection values are checked automatically to ensure they match the culture of the resource file actually loaded at run time. The ResourceLanguage value is initialized with the locale name of the resource file and is used to set the RootFrame.Language value. The purpose here is to ensure that the user sees the right font for cases like East Asian languages, where Unicode character ranges overlap and the system needs xml:lang to render the correct character. For instance, without specifying both character code and language, rendering of a Japanese language app on a phone with Chinese set as its language may show a Chinese character in the middle of a Japanese string. The ResourceFlowDirection value is set to the traditional direction of that resource language. In our example (AppResources.ar-SA.resx), the reading order is “RightToLeft”. Note: While ResourceLanguage and ResourceFlowDirection are part of a localization pattern we believe you will find beneficial, you can always modify the resource and code to align with your design style. Using machine translation to visualize localized text Now I have a localized app minus the translation of the text. My next step is to use Microsoft Translator to get a rough draft for each language. This task was reduced to a few clicks and almost no wait by using Satish Chandra’s Resource Translator plug-in for Visual Studio 2012 (again, see Part 2 of this blog for even more powerful tools coming soon!). When installed, this tool shows up in the shortcut menu for .resx files. (Note that this tool wouldn’t process my .resx files while they were in a subfolder. Solution: temporarily move .resx files to the project root, translate, and move them back to the Resources folder.) With machine translated text, I was able to see that the Arabic translation was going to take more space then I had allowed for the English text, and I adjusted the size of the UI element. 
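For reference, the InitializeLanguage method mentioned above (the one the templates call from the App.xaml.cs constructor) boils down to roughly the following. This is a from-memory paraphrase of the template code rather than a verbatim copy, so details may differ from what the SDK actually generates.

private void InitializeLanguage()
{
    try
    {
        // Apply the language and reading direction of the .resx file that was
        // actually loaded for the current culture.
        RootFrame.Language = XmlLanguage.GetLanguage(AppResources.ResourceLanguage);

        FlowDirection flow = (FlowDirection)Enum.Parse(
            typeof(FlowDirection), AppResources.ResourceFlowDirection);
        RootFrame.FlowDirection = flow;
    }
    catch
    {
        // A failure here usually means ResourceLanguage or ResourceFlowDirection
        // was mistyped in one of the resource files.
        if (System.Diagnostics.Debugger.IsAttached)
            System.Diagnostics.Debugger.Break();
        throw;
    }
}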
I could also easily see that the display language, flow, and navigation of the app changed as expected; you can think of it as fulfilling some of the function of the best practice known as pseudolocalization. Human translation Machine translation is truly amazing these days. This is especially true when the technology can ask an author which of several possible alternate meanings is intended. With these advances, the quality and consistency of machine translation will likely soon become good enough for the purposes of many projects. Having said that, it really is a good idea to involve a human who is fluent in both the original and the target translation languages. It may be a long time before software can match a human for specific domain knowledge required for your app or for understanding of the cultural and political sensitivities of a region. I was able to crowd source my translations (Microsoft is a VERY international place and my coworkers are very generous with their time!). I simply sent my .resx file to my translation volunteers and replaced my machine-translated strings with the human-translated strings. Finding and affording the right translator is a task that keeps many people from localizing in the first place, but this is getting easier. There is an ever-growing variety of translation options on the web, ranging from free crowd sourcing up to large commercial houses. At the risk of repeating myself, see Part 2 of this blog to find out what Microsoft is doing to make this workflow easier. CUSTOM LOCALIZATION The XAML and code-behind needed for localizing an app will, of course, vary based on its features and overall design. In the case of the Humanitarian Reader, I wanted the app to do two basic things: use the user’s selected Phone Language in Settings when it is first launched, and allow the user to select a different language on-the-fly at run time. Letting the system decide what language to display The key method that maintains the app’s language at run time is called SetUILanguage, which takes a culture name (such as “en-US”) as a parameter. To always follow the user’s Phone Language selection, I simply had to stay out of the way of the system, so at launch I initialize the UI language using the current culture: - // Render the page using the current culture - SetUILanguage(CultureInfo.CurrentUICulture.Name); The user Phone Language selection is one of the four translated language resources, along with flow direction and font selection. If the user has selected any other language, the system will fall back to the app’s neutral language—US English in this case. Designating some text to always remain in the same language The Application Bar menu is one place in the app where I did NOT want some strings to localize. In my design, the user selects their display language via the Application Bar menu. I wanted the fluent reader of each language to see the menu items for their language localized in that language regardless of what language the app UI was displaying at the moment. Note in the following illustration that the Arabic, Chinese, English, and Spanish menu items stay consistent across display languages while the “about” item localizes. 
To accomplish this, I simply left the values of the language selection items hard-coded in the BuildLocalizedApplicationBar method that creates the localized ApplicationBarMenuItems on launch and whenever display language is changed: - // Build the localized ApplicationBar - private void BuildLocalizedApplicationBar() - { - // Set the page’s ApplicationBar to a new instance of ApplicationBar. - ApplicationBar = new ApplicationBar(); - - // Create new menu items with hard-coded, translated values for the language selections. - //These do not localize. - ApplicationBarMenuItem ar_appBarMenuItem = new ApplicationBarMenuItem(“العربية“); - ApplicationBar.MenuItems.Add(ar_appBarMenuItem); - ar_appBarMenuItem.Click += new EventHandler(ar_appBarMenuItem_Click); - - “about” item is resource bound because that label needs translation to the user’s choice of language. - // Add an ?about? menu item that is localized from app resources. - ApplicationBarMenuItem about_appBarMenuItem = - new ApplicationBarMenuItem(AppResources.AppBarAboutMenuItem); - ApplicationBar.MenuItems.Add(about_appBarMenuItem); Enabling the user to change the language on the fly When a user taps a language in the Application Bar menu, the SetUILanguage method is once again called with the locale name of the selected language used as a parameter in the Click handler. So if the user has tapped “中文” (Simplified Chinese), the following code sets the language: - // App Bar menu item handler to change the app language to Chinese (PRC). - private void zh_appBarMenuItem_Click(object sender, EventArgs e) - { - SetUILanguage(“zh-CN”); - } The SetUILanguage method first resets the CurrentUICulture of the app to the locale supplied in the call. - // Set this thread’s current culture to the culture associated with the selected locale. - CultureInfo newCulture = new CultureInfo(locale); - Thread.CurrentThread.CurrentCulture = newCulture; - Thread.CurrentThread.CurrentUICulture = newCulture; From this point on, any resource-bound text rendered by the app will use the resources of the specified locale. The next action is to use the parameters in the locale’s resource to set the FlowDirection and Language of the RootFrame, which causes any new UI rendered by the app to follow these settings. - // Set the FlowDirection of the RootFrame to match the new culture. - FlowDirection flow = (FlowDirection)Enum.Parse(typeof(FlowDirection), - AppResources.ResourceFlowDirection); - App.RootFrame.FlowDirection = flow; - - // Set the Language of the RootFrame to match the new culture. - App.RootFrame.Language = XmlLanguage.GetLanguage(AppResources.ResourceLanguage); The one hitch is that MainPage.xaml has already been rendered, so SetUILanguage needs to do a couple of things to cause the currently displayed elements to be refreshed in the new language. The first is to fetch each translated resource string for its XAML element and shift the element’s language to match the locale supplied in the SetUILanguage call: - // Modify the language of each page UI element and render it in the new language. 
- AppTitleTextBlock.Language = XmlLanguage.GetLanguage(locale); - AppTitleTextBlock.Text = AppResources.ApplicationTitle; - - PageTitleTextBlock.Language = XmlLanguage.GetLanguage(locale); - PageTitleTextBlock.Text = AppResources.PageTitle; - - MissionTextBlock.Language = XmlLanguage.GetLanguage(locale); - MissionTextBlock.Text = AppResources.MissionText; - - GoToNewsTextBlock.Language = XmlLanguage.GetLanguage(locale); - GoToNewsTextBlock.Text = AppResources.GoToNews; - - DisclaimerTextBlock.Language = XmlLanguage.GetLanguage(locale); - DisclaimerTextBlock.Text = AppResources.DisclaimerText; Changing FlowDirection The other task is to check the current flow direction and shift elements around if it has changed from one direction to the other. Changing FlowDirection at the app RootFrame level causes the layout of all XAML elements in the application to immediately switch flow direction. If the direction changes to RightToLeft (RTL) then, without any more work on your part, text in the controls reads right to left and justifies according to RTL rules (text that is flush left when FlowDirection is LTR changes to flush right). Also, the layout of controls relative to each other shifts. Note: Without any additional code, the logo, app title, and page title will switch orientation appropriately. This built-in localization support truly simplifies your localization efforts. I hope you’ll agree. However, there’s still one thing remaining: Changing flow direction also changes the direction of navigation. Study the illustration above and you’ll realize that in this case I do want the arrow image to both change orientation on the page AND to flip to a mirror image. I needed an LTR arrow pointing to the right for the next page, an RTL arrow pointing to the left for that purpose, and code to switch them on FlowDirection switch: - //Change next page arrow image depending on FlowDirection - bool isFlowRTL = AppResources.ResourceFlowDirection == “RightToLeft” ? true : false; - if (isFlowRTL) - { - GoToNewsImage.Source = new BitmapImage(new Uri(“Assets/rtlGoToNews.png”, - UriKind.RelativeOrAbsolute)); - } - else - { - GoToNewsImage.Source = new BitmapImage(new Uri(“Assets/ltrGoToNews.png”, - UriKind.RelativeOrAbsolute)); - } Mapping RSS source language to the app’s current culture The last thing the SetUILanguage method does is to map the current culture of the app to the correct three-letter language code used to form the URL of the appropriate RSS feed: - // Set the RSS language variable to match the language of the new culture - switch (CultureInfo.CurrentUICulture.TwoLetterISOLanguageName.ToString()) - { - case “ar”: RSSLocale = “ara”; break; - case “zh”: RSSLocale = “chi”; break; - case “en”: RSSLocale = “eng”; break; - case “es”: RSSLocale = “spa”; break; - } FlowDirection and the web browser navigation UI The Article view page of the app displays the RSS feed whose links open the Browse page. The Browse page contains a web browser for displaying the destination HTML pages as well as back and forward browse arrow navigation elements. This is another case where the navigation direction flip-flops when switching FlowDirection. For RTL flow, left equals forward and right equals backward, instead of the opposite for LTR. In this case, I modified my web control navigation routine to take a direction parameter. The BrowserNav method is called from the Click handler of the browse arrows, and the value of dir is conditionally based on the current flow direction. 
I initialized a page Boolean variable using the current FlowDirection: - bool isFlowRTL = AppResources.ResourceFlowDirection == “RightToLeft” ? true : false; Then I used it to determine the direction value used in the BrowserNav call: - private void Button_Click_LeftBrowseArrow(object sender, RoutedEventArgs e) - { - string dir = isFlowRTL ? “forward” : “backward”; - BrowserNav(dir); - } - - private void Button_Click_RightBrowseArrow(object sender, RoutedEventArgs e) - { - string dir = isFlowRTL ? “backward” : “forward”; - BrowserNav(dir); - } INTERESTING IMPLICATIONS OF SERVICE-PROVIDED CONTENT FORMAT Finally, remember that content format counts. Here are a couple of issues I found, despite having carefully studied the content format of the RSS feed that the app consumes. Browser content and FlowDirection As I wrote earlier, setting FlowDirection for the app RootFrame affects all elements in the tree, including the WebBrowser control. In many cases this may not have an impact, but be aware that web content may already have its own FlowDirection rendered by the browser. This was the case with the destination pages displayed in my app. So, to my surprise, setting the RootFrame.FlowDirection caused the FlowDirection of the web content I displayed to be the opposite of the intended. Because properties such as FlowDirection can be overridden at any point in the hierarchy, the fix was simple: hard-code the FlowDirection of the WebBrowser control in XAML: - <phone:WebBrowser Grid.Row=“1” Name=“webBrowser1” FlowDirection=“LeftToRight” - Navigated=“webBrowser1_Navigated” Margin=“0,17,0,28” Grid.RowSpan=“2” - BorderThickness=“1”/> RSS data format and an XMLReader globalization gotcha There was another interesting and instructive collision of feed data format and app design choice: The Article view page displays the content of the RSS feed as a linked headline and a block of descriptive text. The RSS feed is parsed and rendered by using the XMLReader class; although it is not fully supported in Windows Phone, it is offered in our sample code as a quick and easy way to make an RSS feed app. And, it works fine for English. However, as it happens, the format of the RSS feed I used has a quirk. The RSS 2.0 spec calls for a 3-letter month code. In the RSS data, the Arabic and Chinese feeds provide that code translated in a way that XMLReader could not digest as part of a date object. The feed contains a well-formed date, but it is in a separate namespace that is not visible to XMLReader. The answer: it may be best to stick with the flexible and fully supported Linq to XML for localized RSS feed parsing. Well that’s all for this blog, and to repeat myself yet again, read Part 2 to learn how I used the Multilingual App Toolkit (released for Windows Phone SDK 8.0) to implement localization for the Humanitarian Reader Windows Phone 8 app. Updated November 7, 2014 11:52 pm Join the conversation
https://blogs.windows.com/buildingapps/2013/02/01/tips-for-localizing-windows-phone-8-xaml-apps-part1/
CC-MAIN-2017-39
en
refinedweb
Local product inventory feed specification The local products inventory feed is a list of the products you sell in each store. Some attributes are required for all items, some are required for certain types of items, and others are recommended. Note: Not providing a required attribute may prevent that particular item from showing up in results, and not providing recommended attributes may impact the ad's performance. Full and incremental feeds Inventory price and quantity can change frequently and on a store-by-store basis. Use incremental feeds to make quick updates to inventory data. Full local product inventory feed: Submit daily and include all of your inventory. The feed type is 'Local product inventory.' Incremental local product inventory feed: If the price and/or quantity of your items per store changes throughout the day, submit only the items that have changed with their new details multiple times throughout the day. The feed type is 'Local product inventory update.' The local product inventory update feed type processes faster than the full local product inventory feed, allowing for more-up-to-date information in your local inventory ads. Submit local product inventory feeds File type: The local product inventory feed is only available as a delimited text file or via API. XML files are not supported for this feed type at this time. Registering a new feed: You’ll follow the standard steps to register a new data feed, but you’ll select either "local product inventory" or "local product inventory update" as the feed type. Important: Some attributes in this local product inventory feed spec contain spaces and underscores. To make sure you submit attributes with correct characters and spacing, follow the guidelines below for your file type: - CSV feeds: Spaces are required. If the attribute has underscores, use a space instead of the "_". - XML API or JSON API: Underscores are required, and are converted into whitespace when received. Summary of attribute requirements Required inventory details These attributes describe basic inventory information per item per store. A unique alphanumeric identifier for each local store. You must use the same store codes that you provided in your Google My Business account. When to include: Required for all items. A unique alphanumeric product identifier for an item across all stores. If you sell the same item in multiple stores, you will have the same itemid appear for multiple store codes. You should include one itemid per store and use quantity to indicate how many of each item is in stock in that store. If you have multiple feeds of the same type for one country, ids of items within different feeds must still be unique. If your SKUs are unique across your inventory and meet the requirements below, we suggest you use your SKUs for this attribute. When to include: Required for all items. Important: - Use the same itemidvalues in both your local products and local product inventory feeds. - Starting and trailing whitespaces and carriage returns (0x0D) are removed. - Each sequence of carriage return (0x0D) and whitespace characters (Unicode characters with the whitespace property) is replaced by a single whitespace (0x20). 
- Only valid unicode characters are accepted; this excludes the following characters: - control characters (except carriage return 0x0D) - function characters - private area characters - surrogate pairs - non assigned code points (in particular any code point larger than 0x10FFFF) - Once an item is submitted, the id must not change when you update your data feed. - Once an item is submitted, the id must not be used for a different product at a later point in time. - Only include products that are available for purchase in stores. The number of items in stock for the store. If you submit items that are temporarily out of stock, you must include a value of '0' for this attribute. When to include: Required for all items. Important: - Google considers "in stock" items to be those with 3+ availability, "limited availability" to be 1-2, and "out of stock" to be 0. - For local inventory ads, the number expressed in quantity may be a placeholder representing availability. For Google Express, the exact quantity must be shared. The regular price of your item. If you submit price here and in the local products feed, this price will override the price in the local products feed for the associated store. When to include: Required for all items. Important: - This attribute is required in either the local products feed for national default pricing or in this feed for any store-specific overrides. Optional inventory details You can use these attributes to give additional information about the price, quantity, and availability of your items. The advertised temporary sale price that denotes a store-specific override of the 'price' attribute in this feed and the local products feed. We recommend submitting the 'sale price effective date' attribute for any items with sale prices, as this will determine when your sale price should be live. If the 'sale price effective date' isn't submitted, the sale price will be in effect for that item for as long as it is submitted in your feed. Note: Any ‘price’ value submitted in an incremental feed will not automatically remove a ‘sale price’ value from a previous feed. To remove a ‘sale price’ using the incremental feed, include an expired value in the ‘sale price effective date’ attribute. The dates during which the advertised sale price is effective. Note: Timezone is optional [YYYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]. If timezone is absent, Google assumes the local timezone for each store. Additionally, note that we are using 24h time for the hours values. Learn more about the format for this attribute. - 'in stock': Indicates that the item is in stock at your local store. - 'out of stock': Indicates that the item is out stock at your local store. - 'limited availability': Indicates that only a few items are left in stock at your local store. - 'on display to order': Indicates that the item is on display to order at your local store (e.g. a refrigerator that needs to be shipped from a warehouse). For items on display to order, submit the value 'on display to order' along with the value '1' for the attribute 'quantity'. Important: - Google considers "in stock" items to be those with 3+ availability, "limited availability" to be 1-2, and "out of stock" to be 0. - If you use a different value, your item will not be processed. The value you provide for this attribute may or may not appear in Google Shopping results as submitted. 
Note: You should only submit items that are out of stock if they have the availability attribute with the value ‘out of stock’ and the quantity attribute with the value '0'. Estimate of how many weeks worth of inventory you have. To calculate, divide the quantity available for purchase by average weekly units sold. Optional store pickup details You can highlight the store pickup option by adding the following 2 attributes to your feed. Add these attributes to your local product inventory feed for store-specific pickup information or add them to your local products feed for any items where the values are true in all stores (e.g. a customer can pick up the XYZ television in any of your stores nationally). Specify whether store pickup is available for this offer and whether the pickup option should be shown as buy, reserve, or not supported. - 'buy': the entire transaction occurs online - 'reserve': the item is reserved online and the transaction occurs in-store - 'not supported': the item is not available for store pickup Specify the expected date that an order will be ready for pickup, relative to when the order is placed. - 'same day': indicates that the item is available for pickup the same day that the order is placed, subject to cutoff times - 'next day': indicates that the item is available for pickup the following day that the order is placed
https://support.google.com/merchants/answer/3061342
CC-MAIN-2017-39
en
refinedweb
Hashtable.Add Method

Assembly: mscorlib (in mscorlib.dll)

Parameters

- key
  Type: System.Object
  The key of the element to add.
- value
  Type: System.Object
  The value of the element to add. The value can be a null reference (Nothing in Visual Basic).

Implements IDictionary.Add(Object, Object)

A key cannot be a null reference (Nothing in Visual Basic), but a value can be.

import System
import System.Collections

// Creates and initializes a new Hashtable.
var myHT : Hashtable = new Hashtable()
myHT.Add("one", "The")
myHT.Add("two", "quick")
myHT.Add("three", "brown")
myHT.Add("four", "fox")

// Displays the Hashtable.
Console.WriteLine("The Hashtable contains the following:")
PrintKeysAndValues(myHT)

function PrintKeysAndValues(myList : Hashtable){
    var myEnumerator : IDictionaryEnumerator = myList.GetEnumerator()
    Console.WriteLine("\t-KEY-\t-VALUE-")
    while(myEnumerator.MoveNext())
        Console.WriteLine("\t{0}:\t{1}", myEnumerator.Key, myEnumerator.Value)
    Console.WriteLine()
}

// This code produces the following output.
//
// The Hashtable contains the following:
//    -KEY-    -VALUE-
//    three:   brown
//    four:    fox
//    two:     quick
//    one:     The
https://msdn.microsoft.com/en-us/library/system.collections.hashtable.add(v=vs.90).aspx?cs-save-lang=1&cs-lang=jscript
CC-MAIN-2017-39
en
refinedweb
import java.awt.*;
import java.awt.event.*;

public class DoubleClickTest extends Panel implements MouseListener {

    public void mousePressed(MouseEvent event) {
        if (event.getClickCount() == 2) {
            // Executed only when the user double-clicks on this component
            System.out.println("User double-clicked");
        } // if (event.getClickCount() == 2)
    } // public void mousePressed()

    public void mouseReleased(MouseEvent event) {}
    public void mouseEntered(MouseEvent event) {}
    public void mouseExited(MouseEvent event) {}
    public void mouseClicked(MouseEvent event) {}

} // public class DoubleClickTest extends Panel implements MouseListener
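A short usage sketch, not part of the original tip: to see the detector in action, the panel has to be registered as its own mouse listener and placed in a visible container. The frame title and size below are arbitrary.

import java.awt.*;

public class DoubleClickDemo {
    public static void main(String[] args) {
        DoubleClickTest panel = new DoubleClickTest();
        panel.addMouseListener(panel);   // the panel listens to its own mouse events

        Frame frame = new Frame("Double-click test");
        frame.add(panel);
        frame.setSize(300, 200);
        frame.setVisible(true);          // double-click inside the window to trigger the message
    }
}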
http://www.devx.com/tips/Tip/5525
CC-MAIN-2018-43
en
refinedweb
I have installed the Esri ArcGIS SDK (C++) samples (I got it as a developer). The libraries used by the samples are recognized by Qt Creator as expected (I've been running the samples and everything went well in Qt). But when I tried to use other classes from the SDK C++ for Qt, I ran into problems with the header files not being recognized. For example, when I include the WmsDynamicMapServiceLayer as below: #include "WmsDynamicMapServiceLayer.h" ........... (Cannot open include file: 'WmsDynamicMapServiceLayer.h'). The same is happening with other classes too. I looked into the include directory (C:\Program Files (x86)\ArcGIS SDKs\Qt100.0\sdk\include) and could not find the header mentioned above, nor many other headers for classes I would like to use. Do I need to install something else to have all header files for the SDK C++ for Qt reference classes available? I have a similar issue with the ArcGIS Runtime SDK version 10.2.6: there are no header files in the SDK. Version 100.0 has include files, but the header files have changed drastically, which I assume is why Mario cannot find what he needs. The funny thing is the API documentation lists all the header files I need, but they are not in the SDK. Is there a version of 10.2.6 that has include files?
https://community.esri.com/thread/193077-arcgis-sdk-c-developer-some-header-s-files-are-not-been-recognised
CC-MAIN-2018-43
en
refinedweb
Foods.pgf

readPGF :: FilePath -> IO PGF
linearize :: PGF -> Language -> Tree -> String
parse :: PGF -> Language -> Category -> String -> [Tree]
linearizeAll :: PGF -> Tree -> [String]
linearizeAllLang :: PGF -> Tree -> [(Language,String)]
parseAll :: PGF -> Category -> String -> [[Tree]]
parseAllLang :: PGF -> Category -> String -> [(Language,[Tree])]
languages :: PGF -> [Language]
categories :: PGF -> [Category]
startCat :: PGF -> Category

import PGF

main = do
  pgf <- readPGF "Foods.pgf"
  interact (unlines . map (translate pgf) . lines)

translate pgf s =
  case parseAllLang pgf (startCat pgf) s of
    (from,tree:_):_ ->
      unlines [linearize pgf to tree | to <- languages pgf, to/=from]
    _ -> "NO PARSE"

Is 123 prime ? No.
77 est impair ? Oui.

abstract Arithmetic = {
  flags startcat=Question ;
  cat Answer; Object; Question;
  fun Even, Odd, Prime : Object -> Question;
      Number : Int -> Object;
      Yes, No : Answer;
}

concrete ArithmeticEng of Arithmetic = {
  lincat Answer, Object, Question = Str;
  lin Even object = is "even" object;
      Odd object = is "odd" object;
      Prime object = is "prime" object;
      Number int = int.s;
      No = "No.";
      Yes = "Yes.";
  oper is : Str -> Str -> Str = \ pred,obj -> "is"++obj++pred++"?";
}

import PGF

main :: IO ()
main = do
  pgf <- readPGF "Arithmetic.pgf"
  interact (unlines . map (translate pgf transfer) . lines)

translate :: PGF -> (Tree->Tree) -> String -> String
translate pgf transfer s =
  case parseAllLang pgf (startCat pgf) s of
    (lang,tree:_):_ -> linearize pgf lang (transfer tree)
    _ -> "NO PARSE"

transfer :: Tree -> Tree
--transfer = ...

module Arithmetic where

data GAnswer = GNo | GYes
data GObject = GNumber GInt
data GQuestion = GEven GObject | GOdd GObject | GPrime GObject

class Gf a where
  gf :: a -> Tree
  fg :: Tree -> a

import PGF
import Arithmetic

transfer :: Tree -> Tree
transfer = gf . answer . fg

answer :: GQuestion -> GAnswer
answer (GEven x) = test even x
answer (GOdd x) = test odd x
answer (GPrime x) = test prime x

test :: (Int->Bool) -> GObject -> GAnswer
test p (GNumber (GInt x)) = if p x then GYes else GNo

prime n = and [n `mod` d /= 0 | d <- [2..n-1]]

Answer: ...

answer.cgi:
gf -make --output-format=js FoodEng.gf FoodIta.gf
http://www.grammaticalframework.org/~hallgren/Talks/GF/Tutorial2012/tutorial2012.talk
CC-MAIN-2018-43
en
refinedweb
2009 Campus Test Coordinator Training. February 11-12, 2009 Trainer – Peggy Bradfield Lamar Consolidated Independent School District. TAKS Training Requirements (Page 50 in Coordinator Manual). What’s New for 2009?. Not 11-12, 2009 Trainer – Peggy Bradfield Lamar Consolidated Independent School District The high school honor statement has been expanded to include high school TAKS-M testing. Each column is an administration What happens after testing – pages 8-9 TAKS-M TELPAS STATE TESTINGCampus Test Coordinator Responsibilities Read the coordinator’s manual, the coordinator supplement(eoc), the accommodations manual, and the test security supplement. You and your principal are responsible for test security on your campus. Test Security involves accounting for all secure materials before, during, and after each test administration. All testing personnel must be trained and sign an oath before handling secure test materials. Secure materials (test booklets and LAT simplification guides) may not be duplicated without specific prior approval from TEA. No person may change any response or instruct a student to do so. Watching students during testing. The focus of the teacher’s attention is on the students and not elsewhere. “Active Monitoring” Working on the computer or doing email. Seating Charts are required for all test administrations. Seating Charts must include: Students in grades 9-12 will be asked to sign an honor statement immediately prior to taking TAKS and TAKS-M assessments.. Incidents resulting in a deviation from documented testing procedures are defined as testing irregularities. Testing irregularities that constitute a disclosure of secure testing materials or altering student results either directly or indirectly are considered serious and may result in actions being taken against a teaching certificate or the filing of criminal charges for tampering. Districts are required to maintain the following documentation for a period of five years. TEA will again be conducting on-site visits to districts and campuses throughout the 2009 testing year. When materials arrive on your campus, open the boxes and do an inventory check to make sure that you have everything on your packing list. If anything is missing then immediately notify the district testing coordinator. Secure testing materials must be kept under lock and key in a secure location. There are two answer documents that are used for all TAKS, TAKS (Accommodated), TAKS-M, and LAT testing. See page 161-162. Note: There are 6 different math and 6 different reading answer keys for this answer document. Note the minor change in the language taken abbreviations on the answer document this year. Mark ONE score code for each test that is included on the answer document. (Note the “*” score code means “did not test on this answer document” for the subject indicated. For example the student split testing between TAKS and TAKS-M.) If you fail to mark a test taken information code the system will default to TAKS English, which would certainly fail the student unless they took the TAKS English test. For State Accountability Ratings For AYP Accountability Ratings For PBMS Accountability / Federal ProgramsCompliance Everything is important since it affects something in one of the three accountability systems. 
If the name or PEIMS ID number is incorrect then Precoded Answer Documents Precoded Labels (You will receive a precoded TAKS-M label for every student that is special education even if they will not be taking TAKS-M)Precode Answer Documents & Labels * Precoded labels can be used on either English or Spanish scorable documents. All precoded answer documents if not used must be returned under a VOID header with scorable materials. Since 2009 is a designated TAKS release year, districts may retain a copy of each student’s TAKS, TAKS Accommodated, and TAKS-M compositions and/or open-ended responses for assessments administered during the 2008-2009 testing cycle (October 2008-July 2009). (Handout: See TEA update & place in DCCM on pg. 179) TAKS Testing Procedures are listed in Campus Coordinator Activity 11 on pages 173-182 of the Coordinator Manual. This PowerPoint is not a substitute for reading the appropriate sections of the 2009 Coordinator’s Manual. At least one test administrator for every 30 students. Provide dictionaries (English) for grade 7 writing, 9th grade reading, and grade 10-11 ELA. At least 1 for every 5 students. May provide ESL dictionaries for LEP students. Students may use highlighters in non-scorable test booklets. Test administrators are not allowed to answer any question relating to the content of the test itself. No scratch paper for any TAKS testing (except as an accommodation or for an online test). All TAKS tests are untimed. Each student must be allowed to have as much time as necessary to respond to every test item. Grade 3 Mathematics reading assistance Accommodations are practices and procedures that provide equitable access during instruction and assessments for students with special needs. Contains information about accommodations for TAKS, TAKS (Accommodated), TAKS-M, LAT, and TELPAS Reading tests. TEA has re-written the accommodations manual giving more direction about allowable accommodations and requiring fewer Accommodation Request Forms (ARFs) be submitted for approval. Must submit one request per student. Available for eligible students in grades 3-8 on TAKS and TAK-A, but not TAKS-M. Available for eligible special education students on TAKS, TAKS Accommodated, and TAKS-M. Available for students who have a recent immigrant LEP exemption from taking TAKS or TAKS-M in reading, math, and science. Linguistic accommodations available on LAT test administrations is listed on page 34 of the coordinator manual. Make-up testing sessions are permitted only for the tests in grades and subjects that are used by NCLB to determine AYP ratings. Every student gets a TAKS or TAKS-M answer document for the primary administration of all tests, even if they do not take a test, including LAT testers. (For example, SSI reading grades 3, 5, & 8 will require an answer document coded “L” even though they will not take the LAT reading test until April.) Home schools are responsible for their students at special sites Special Sites Coordinator responsibilities Packaging Materials Identification Sheets What goes in here and how? Verify that no answer documents have inadvertently been left in test booklets. Verify that all test booklets and answer documents are accounted for. There is only one Campus & Group ID sheet and only one Class ID sheet for TAKS, TAKS-A, and TAKS-M. Separate scorable from nonscorable materials. Follow the packing charts in the coordinator’s manual to pack materials for return (pages 188-198).
https://www.slideserve.com/peter-perkins/2009-campus-test-coordinator-training
CC-MAIN-2018-43
en
refinedweb
Getting Started

To begin evolution, we need to create a seed genome and a population from it. Before everything though, we create an object which holds all parameters used by NEAT:

import MultiNEAT as NEAT

params = NEAT.Parameters()
params.PopulationSize = 100

This is usually the point where all custom values for the parameters are set. Here we set the population size to 100 individuals (default value is 300).

Now we create a genome with 3 inputs and 2 outputs:

genome = NEAT.Genome(0, 3, 0, 2, False,
                     NEAT.ActivationFunction.UNSIGNED_SIGMOID,
                     NEAT.ActivationFunction.UNSIGNED_SIGMOID,
                     0, params, 0)

Notice that we set more properties of the genome than just number of inputs/outputs. Also, if the number of inputs you're going to use in your project is 2, you need to write 3 in the constructor. Always add one extra input. The last input is always used as bias and also when you activate the network always set the last input to 1.0 (or any other constant non-zero value). The type of activation function of the outputs and hidden neurons is also set. Hidden neurons are optional.

After the genome is created, we create the population like this:

pop = NEAT.Population(genome, params, True, 1.0, 0) # the 0 is the RNG seed

The last two parameters specify whether the population should be randomized and how much. Because we are starting from a new genome and not one that was previously saved, we randomize the initial population.

Evolution can run now. For this we need an evaluation function. It takes a Genome as a parameter and returns a float that is the fitness of the genome's phenotype.

def evaluate(genome):
    # this creates a neural network (phenotype) from the genome
    net = NEAT.NeuralNetwork()
    genome.BuildPhenotype(net)

    # let's input just one pattern to the net, activate it once and get the output
    net.Input( [ 1.0, 0.0, 1.0 ] )
    net.Activate()
    output = net.Output()

    # the output can be used as any other Python iterable. For the purposes of the tutorial,
    # we will consider the fitness of the individual to be the neural network that outputs constantly
    # 0.0 from the first output (the second output is ignored)
    fitness = 1.0 - output[0]
    return fitness

So we have our evaluation function now, and we can enter the basic generational evolution loop.

for generation in range(100): # run for 100 generations
    # retrieve a list of all genomes in the population
    genome_list = NEAT.GetGenomeList(pop)

    # apply the evaluation function to all genomes
    for genome in genome_list:
        fitness = evaluate(genome)
        genome.SetFitness(fitness)

    # at this point we may output some information regarding the progress of evolution, best fitness, etc.
    # it's also the place to put any code that tracks the progress and saves the best genome or the entire
    # population. We skip all of this in the tutorial.

    # advance to the next generation
    pop.Epoch()

The rest of the algorithm is controlled by the parameters we used to initialize the population. One can modify the parameters during evolution, accessing the pop.Parameters object. When a population is saved, the parameters are saved along with it.
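As a slightly more realistic variation on the toy evaluation function above (and not part of the official documentation), here is a sketch of an XOR-style fitness test. It relies only on the calls already introduced, plus one assumption: that the network object exposes a Flush() method for resetting activations between patterns.

def evaluate_xor(genome):
    net = NEAT.NeuralNetwork()
    genome.BuildPhenotype(net)

    # third value in each input is the constant bias, as explained above
    patterns = [([0.0, 0.0, 1.0], 0.0),
                ([0.0, 1.0, 1.0], 1.0),
                ([1.0, 0.0, 1.0], 1.0),
                ([1.0, 1.0, 1.0], 0.0)]

    error = 0.0
    for inputs, target in patterns:
        net.Flush()        # assumed: clears activations between patterns
        net.Input(inputs)
        net.Activate()
        net.Activate()     # activate twice so signals can cross a hidden layer
        error += abs(net.Output()[0] - target)

    return (4.0 - error) ** 2   # higher is better; 16 would be a perfect solution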
http://multineat.com/docs.html
CC-MAIN-2018-43
en
refinedweb
At the moment I'm struggling with Microchip's new "Harmony" framework for the PIC32. I don't want to say bad things about it because (a) I haven't used it enough to give a fair opinion and (b) I strongly suspect it's a useful thing for some people, some of the time.

Harmony is extremely heavyweight. For example, the PDF documentation is 8769 pages long. That is not at all what I want – I want to work very close to the metal, and to personally control nearly every instruction executed on the thing, other than extremely basic things like <stdlib.h> and <math.h>. Yet Microchip says they will be supporting only Harmony (and not their old "legacy" peripheral libraries) on their upcoming PIC32 parts with goodies like hardware floating point, which I'd like to use.

So I'm attempting to tease out the absolute minimum subset of Harmony needed to access register symbol names, etc., and do the rest myself. My plan is to use Harmony to build an absolutely minimum configuration, then edit down the resulting source code to something manageable.

But I found that many of Microchip's source files are > 99% comments, making it essentially impossible to read the code and see what it actually does. Often there will be 1 or 2 lines of code here and there separated by hundreds of lines of comments.

So I wrote the below Python script. Given a folder, it will walk thru every file and replace all the .c, .cpp, .h, and .hpp files with identical ones but with all comments removed. I've only tested it on Windows, but I don't see any reason why it shouldn't work on Linux and Mac.

from __future__ import print_function
import sys, re, os # for Python 2.7

# Use and modification permitted without limit; credit to NerdFever.com requested.
# thanks to zvoase at
# and Lawrence Johnston at

def comment_remover(text):
    def replacer(match):
        s = match.group(0)
        if s.startswith('/'):
            return " " # note: a space and not an empty string
        else:
            return s
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE
    )
    r1 = re.sub(pattern, replacer, text)
    return os.linesep.join([s for s in r1.splitlines() if s.strip()])

def NoComment(infile, outfile):
    root, ext = os.path.splitext(infile)
    valid = [".c", ".cpp", ".h", ".hpp"]
    if ext.lower() in valid:
        inf = open(infile, "r")
        dirty = inf.read()
        clean = comment_remover(dirty)
        inf.close()
        outf = open(outfile, "wb") # 'b' avoids 0d 0d 0a line endings in Windows
        outf.write(clean)
        outf.close()
        print("Comments removed:", infile, ">>>", outfile)
    else:
        print("Did nothing: ", infile)

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("")
        print("C/C++ comment stripper v1.00 (c) 2015 Nerdfever.com")
        print("Syntax: nocomments path")
        sys.exit()

    root = sys.argv[1]
    for root, folders, fns in os.walk(root):
        for fn in fns:
            filePath = os.path.join(root, fn)
            NoComment(filePath, filePath)

To use it, put that in "nocomments.py", then do:

python nocomments.py foldername

Of course, make a backup of the original folder first.

#1 by Bob on 2015 June 29 - 22:13 Quote

If you look at just the Driver Libraries in Harmony, are those roughly the equivalent of the old peripheral libraries? The documentation for the Driver Libraries is a mere 1,129 pages.

MPLAB X can collapse comments by default. Go to Tools | Options, Editor, Folding, and check the Comments box. Files you open after that should have the comments collapsed.

#2 by Dave on 2015 June 29 - 23:58 Quote

Thanks; Tools>Options>Editor>Folding is very useful; I didn't know about it.
Another trick I found useful is to go to the Harmony folder, right click, then Properties, and check Read-only (recursively). That sets all the standard Harmony files as read-only; NetBeans is smart enough to know it – it italicizes the file name and greys out the editor window – so you can read that code but not modify the “master” files. Here’s an example of the kind of thing I’m uncomfortable about with Harmony: MHC creates a file “system_config.h” that includes the FOSC clock rate you selected via the setup GUI: #define SYS_CLK_FREQ 40000000ul Now, what happens if my code goes and changes the clock rate while running? Will the rest of Harmony know it? I don’t see how. Will it assume it’s still running at 40 MHz (when it’s not) and get all kinds of timing things wrong? How can I be sure that doesn’t happen? I’d much rather manage this stuff myself. Maybe I’m just not trusting enough. But I’m not. #3 by C Grier on 2015 November 5 - 15:50 Quote Hi Dave, Originally I was going to post about the comment folding options, but see that Bob beat me to it. You are right that embedded MCUs have traditionally been more bare-metal programming exercises, particularly if you’ve had long experience with a platform and the base peripheral set. However, the industry is changing as more connectivity, file systems, and GUI expectations end up being part of modern projects. You might want to check out the Renesas Synergy Platform if you feel that Harmony isn’t exactly what you want. Synergy has the HAL, Framework and (optional) RTOS wrapped together with an Eclipse IDE and ARM Cortex cores. The nice part is that the product family was designed to the software API specification – not the other way around. That means the complexity in the drivers, stacks, and middleware is kept to a minimum while still supporting low-end and high-end performance. And the PDFs are fully hyperlinked, with the API document coming in at a modest 2700 pages. 😉 Google Renesas Synergy or go to synergyxplorer dot com to learn more. –CG
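Coming back to the nocomments.py script above, here is a quick, throwaway check of what comment_remover() does. The sample C snippet below is made up purely for illustration, and it assumes comment_remover is in scope (pasted below the function, or imported from nocomments.py):

sample = '''/* header block comment */
int add(int a, int b) {
    // add two numbers
    return a + b;  /* trailing comment */
}
const char *s = "this // is not a comment";
'''
print(comment_remover(sample))
# The block and line comments are replaced by a single space and the now-blank
# lines are dropped, while the string literal containing "//" survives because
# string patterns are matched and returned unchanged by the regular expression.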
http://nerdfever.com/remove-all-comments-from-c-and-c-source-code/
CC-MAIN-2017-47
en
refinedweb
In this series of tutorials, I'll show you how to make a Geometry Wars-inspired twin-stick shooter, with neon graphics, crazy particle effects, and awesome music, for iOS using C++ and OpenGL ES 2.0. Rather than rely on an existing game framework or sprite library, we'll attempt to program as close to the hardware (or "bare metal") as we possibly can. Since devices running iOS run on smaller scale hardware compared to a desktop PC or games console, this will allow us to get as much bang for our buck as possible. These tutorials are based on Michael Hoffman's original XNA series, which has been translated to other platforms: The goal of these tutorials is to go over the necessary elements that will allow you to create your own high-quality mobile game for iOS, either from scratch or based on an existing desktop game. I encourage you to download and play with the code, or even to use it as a basis for your own projects. We'll cover the following topics during this series: - First steps, introducing the Utility library, setting up the basic gameplay, creating the player's ship, sound and music. - Finish implementing the gameplay mechanics by adding enemies, handling collision detection, and tracking the player's score and lives. - Add a virtual gamepad on-screen, so we can control the game using multi-touch input. - our audio section. The sprites are by Jacob Zinman-Jeanes, our resident Tuts+ designer. The font we'll use is a bitmap font (in other words, not an actual "font", but an image file), which is something I've created for this tutorial. All the artwork can be found in the source files. Let's get started. Overview Before we dive into the specifics of the game, let's talk about the Utility Library and Application Bootstrap code I've provided to support developing our game. The Utility Library Though we'll primarily be using C++ and OpenGL to code our game, we'll need some additional utility classes. These are all classes I've written to help development in other projects, so they're time tested and usable for new projects such as this one. package.h: A convenience header used to include all relevant headers from the Utility library. We'll include it by stating #include "Utility/package.h" without having to include anything else. Patterns We'll leverage some existing tried and true programming patterns used in C++ and other languages. tSingleton: Implements a singleton class using a "Meyers Singleton" pattern. It's template based, and extensible, so we can abstract all singleton code to a single class. tOptional: This is a feature from C++14 (called std::optional) that's not quite available in current versions of C++ yet (we're still at C++11). It's also a feature available in XNA and C# (where it's called Nullable.) It allows us to have "optional" parameters for methods. It's used in the tSpriteBatch class. Vector Math Since we're not using an existing game framework, we'll need some classes to deal with the mathematics behind the scenes. tMath: A static class that provides some methods beyond what's available in C++, such as converting from degrees to radians or rounding numbers to powers of two. tVector: A basic set of Vector classes, providing 2-element, 3-element, and 4-element variants. We also typedef this structure for Points and Colors.
tMatrix: Two matrix definitions, a 2x2 variant (for rotation operations), and a 4x4 option (for the projection matrix required to get things on-screen), tRect: A rectangle class providing location, size, and a method to determine whether points lie inside rectangles or not. OpenGL Wrapper Classes Although OpenGL is a powerful API, it's C-based, and managing objects can be somewhat difficult to do in practice. So, we'll have a small handful of classes to manage the OpenGL objects for us. tSurface: Offers a way to create a bitmap based on an image loaded from the application's bundle. tTexture: Wraps the interface to OpenGL's texture commands, and loads tSurfacesinto textures. tShader: Wraps the interface to OpenGL's shader compiler, making it easy to compile shaders. tProgram: Wraps the interface to OpenGL's shader program interface, which is essentially the combination of two tShaderclasses. Game Support Classes These classes represent the closest we'll get to having a "game framework"; they provide some high level concepts that are not typical to OpenGL, but that are useful for game development purposes. tViewport: Contains the state of the viewport. We use this primarily to handle changes to device orientation. tAutosizeViewport: A class that manages changes to the viewport. It handles device orientation changes directly, and scales the viewport to fit the screen of the device so that the aspect ratio stays the same—meaning that things don't get stretched or squashed. tSpriteFont: Allows us to load a "bitmap font" from the application bundle, and use it to write text on the screen. tSpriteBatch: Inspired by XNA's SpriteBatchclass, I wrote this class to encapsulate the best of what's needed by our game. It allows us to sort sprites when drawing in such a way so we get the best possible speed gains on the hardware we have. We'll also use it directly to write text on screen. Miscellaneous Classes A minimal set of classes to round things out. tTimer: A system timer, used primarily for animations. tInputEvent: Basic class definitions to provide orientation changes (tilting the device), touch events, and a "virtual keyboard" event to emulate a gamepad more discretely. tSound: A class dedicated to loading and playing sound effects and music. Application Bootstrap We'll also need what I call "Boostrap" code—that is, code that abstracts away how an application starts, or "boots up." Here's what's in Bootstrap: AppDelegate: This class handles application launch, as well as suspend and resume events for when the user presses the Home button. ViewController: This class handles device orientation events, and creates our OpenGL view OpenGLView: This class initializes OpenGL, tells the device to refresh at 60 frames per second, and handles touch events. Overview of the Game In this tutorial we will create a twin-stick shooter; the player will control the ship using on-screen multi-touch controls. We'll use a number of classes to accomplish this: Entity: The base class for enemies, bullets, and the player's ship. Entities can move and be drawn. Bulletand PlayerShip. EntityManager: Keeps track of all entities in the game and performs collision detection. Input: Helps manage input from the touch screen. Art: Loads and holds references to the textures needed for the game. Sound: Loads and holds references to the sounds and music. MathUtiland Extensions: Contains some helpful static methods and extension methods. GameRoot: Controls the main loop of the game. This is our main class. 
The code in this tutorial aims to be simple and easy to understand. It will not have every feature designed to support every possible need; rather, it will do only what it needs to do. Keeping it simple will make it easier for you to understand the concepts, and then modify and expand them into your own unique game. Entities and the Player's Ship Open the existing Xcode project. GameRoot is our application's main class. We'll start by creating a base class for our game entities. Take a look at the class Entity { public: enum Kind { kDontCare = 0, kBullet, kEnemy, kBlackHole, }; protected: tTexture* mImage; tColor4f mColor; tPoint2f mPosition; tVector2f mVelocity; float mOrientation; float mRadius; bool mIsExpired; Kind mKind; public: Entity(); virtual ~Entity(); tDimension2f getSize() const; virtual void update() = 0; virtual void draw(tSpriteBatch* spriteBatch); tPoint2f getPosition() const; tVector2f getVelocity() const; void setVelocity(const tVector2f& nv); float getRadius() const; bool isExpired() const; Kind getKind() const; void setExpired(); }; All our entities (enemies, bullets and the player's ship) have some basic properties, such as an image and a position. mIsExpired will be used to indicate that the entity has been destroyed and should be removed from any lists holding a reference to it. Next we create an EntityManager to track our entities and to update and draw them: class EntityManager : public tSingleton<EntityManager> { protected: std::list<Entity*> mEntities; std::list<Entity*> mAddedEntities; std::list<Bullet*> mBullets; bool mIsUpdating; protected: EntityManager(); public: int getCount() const; void add(Entity* entity); void addEntity(Entity* entity); void update(); void draw(tSpriteBatch* spriteBatch); bool isColliding(Entity* a, Entity* b); friend class tSingleton<EntityManager>; }; void EntityManager::add(Entity* entity) { if (!mIsUpdating) { addEntity(entity); } else { mAddedEntities.push_back(entity); } } void EntityManager::update() { mIsUpdating = true; for(std::list<Entity*>::iterator iter = mEntities.begin(); iter != mEntities.end(); iter++) { (*iter)->update(); if ((*iter)->isExpired()) { *iter = NULL; } } mIsUpdating = false; for(std::list<Entity*>::iterator iter = mAddedEntities.begin(); iter != mAddedEntities.end(); iter++) { addEntity(*iter); } mAddedEntities.clear(); mEntities.remove(NULL); for(std::list<Bullet*>::iterator iter = mBullets.begin(); iter != mBullets.end(); iter++) { if ((*iter)->isExpired()) { delete *iter; *iter = NULL; } } mBullets.remove(NULL); } void EntityManager::draw(tSpriteBatch* spriteBatch) { for(std::list<Entity*>::iterator iter = mEntities.begin(); iter != mEntities.end(); iter++) { (*iter)->draw(spriteBatch); } } Remember, if you modify a list while iterating over it, you will get a runtime exception. The above code takes care of this by queuing up any entities added during updating in a separate list, and adding them after it finishes updating the existing entities. 
Making Them Visible We will need to load some textures if we want to draw anything, so we'll make a static class to hold references to all our textures: class Art : public tSingleton<Art> { protected: tTexture* mPlayer; tTexture* mSeeker; tTexture* mWanderer; tTexture* mBullet; tTexture* mPointer; protected: Art(); public: tTexture* getPlayer() const; tTexture* getSeeker() const; tTexture* getWanderer() const; tTexture* getBullet() const; tTexture* getPointer() const; friend class tSingleton<Art>; }; Art::Art() { mPlayer = new tTexture(tSurface("player.png")); mSeeker = new tTexture(tSurface("seeker.png")); mWanderer = new tTexture(tSurface("wanderer.png")); mBullet = new tTexture(tSurface("bullet.png")); mPointer = new tTexture(tSurface("pointer.png")); } We load the art by calling Art::getInstance() in GameRoot::onInitView(). This causes the Art singleton to get constructed and to call the constructor, Art::Art(). Also, a number of classes will need to know the screen dimensions, so we have the following members in GameRoot: tDimension2f mViewportSize; tSpriteBatch* mSpriteBatch; tAutosizeViewport* mViewport; And in the GameRoot constructor, we set the size: GameRoot::GameRoot() : mViewportSize(800, 600), mSpriteBatch(NULL) { } The resolution 800x600px is what the original XNA-based Shape Blaster used. We could use any resolution we wish (like one closer to an iPhone or iPad's specific resolution), but we'll stick with the original resolution just to make sure our game matches the look and feel of the original. Now we'll go over the PlayerShip class: class PlayerShip : public Entity, public tSingleton<PlayerShip> { protected: static const int kCooldownFrames; int mCooldowmRemaining; int mFramesUntilRespawn; protected: PlayerShip(); public: void update(); void draw(tSpriteBatch* spriteBatch); bool getIsDead(); void kill(); friend class tSingleton<PlayerShip>; }; PlayerShip::PlayerShip() : mCooldowmRemaining(0), mFramesUntilRespawn(0) { mImage = Art::getInstance()->getPlayer(); mPosition = tPoint2f(GameRoot::getInstance()->getViewportSize().x / 2, GameRoot::getInstance()->getViewportSize().y / 2); mRadius = 10; } We made PlayerShip a singleton, set its image, and placed it in the center of the screen. Finally, let's add the player ship to the EntityManager. The code in GameRoot::onInitView looks like this: //In GameRoot::onInitView EntityManager::getInstance()->add(PlayerShip::getInstance()); . . . glClearColor(0,0,0,1); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE);); glHint(GL_GENERATE_MIPMAP_HINT, GL_DONT_CARE); glDisable(GL_DEPTH_TEST); glDisable(GL_CULL_FACE); We're drawing the sprites with additive blending, which is part of what will give them their "neon" look. We also don't want any bluring or blending, so we use GL_NEAREST for our filters. We don't need or care about depth testing or backface culling (it just adds unnecessary overhead anyway), so we turn it off. The code in GameRoot::onRedrawView looks like this: //In GameRoot::onRedrawView EntityManager::getInstance()->update(); EntityManager::getInstance()->draw(mSpriteBatch); mSpriteBatch->draw(0, Art::getInstance()->getPointer(), Input::getInstance()->getMousePosition(), tOptional<tRectf>()); mViewport->run(); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); mSpriteBatch->end(); glFlush(); If you run the game at this point, you should see your ship in the center of the screen. However, it doesn't respond to input. Let's add some input to the game next. Input For movement, we'll use a multi-touch interface. 
Before we go full force with on-screen gamepads, we'll just get a basic touch interface up and running. In the original Shape Blaster for Windows, player movement could be done with the WASD keys the keyboard. For aiming, they could use the arrow keys or the mouse. This is meant to emulate Geometry Wars's twin-stick controls: one analog stick for movement, one for aiming. Since Shape Blaster already uses the concept of keyboard and mouse movement, the easiest way to add input would by emulating keyboard and mouse commands through touch. We'll start with mouse movement, as both touch and mouse share a similar component: a point containing X and Y coordinates. We'll make a static class to keep track of the various input devices and to take care of switching between the different types of aiming:Keyboard(const tKeyboardEvent& msg); void onTouch(const tTouchEvent& msg); friend class tSingleton<Input>; };; } } We call Input::update() at the beginning of GameRoot::onRedrawView() for the input class to work. As stated previously, we'll use the keyboard state later on in the series to account for movement. Shooting Now let's make the ship shoot. First, we need a class for bullets. class Bullet : public Entity { public: Bullet(const tPoint2f& position, const tVector2f& velocity); void update(); }; Bullet::Bullet(const tPoint2f& position, const tVector2f& velocity) { mImage = Art::getInstance()->getBullet(); mPosition = position; mVelocity = velocity; mOrientation = atan2f(mVelocity.y, mVelocity.x); mRadius = 8; mKind = kBullet; } void Bullet::update() { if (mVelocity.lengthSquared() > 0) { mOrientation = atan2f(mVelocity.y, mVelocity.x); } mPosition += mVelocity; if (!tRectf(0, 0, GameRoot::getInstance()->getViewportSize()).contains(tPoint2f((int32_t)mPosition.x, (int32_t)mPosition.y))) { mIsExpired = true; } } We want a brief cooldown period between bullets, so we'll have a constant for that: const int PlayerShip::kCooldownFrames = 6; Also, we'll add the following code to PlayerShip::Update(): tVector2f aim = Input::getInstance()->getAimDirection(); if (aim.lengthSquared() > 0 && mCooldowmRemaining <= 0) { mCooldowmRemaining = kCooldownFrames; float aimAngle = atan2f(aim.y, aim.x); float cosA = cosf(aimAngle); float sinA = sinf(aimAngle); tMatrix2x2f aimMat(tVector2f(cosA, sinA), tVector2f(-sinA, cosA)); float randomSpread = tMath::random() * 0.08f + tMath::random() * 0.08f - 0.08f; tVector2f vel = 11.0f * (tVector2f(cosA, sinA) + tVector2f(randomSpread, randomSpread)); tVector2f offset = aimMat * tVector2f(35, -8); EntityManager::getInstance()->add(new Bullet(mPosition + offset, vel)); offset = aimMat * tVector2f(35, 8); EntityManager::getInstance()->add(new Bullet(mPosition + offset, vel)); tSound* curShot = Sound::getInstance()->getShot(); if (!curShot->isPlaying()) { curShot->play(0, 1); } } if (mCooldowmRemaining > 0) { mCooldowmRemaining--; } This code creates two bullets that travel parallel to each other. It adds a small amount of randomness to the direction, which makes the shots spread out a little bit like a machine gun. We add two random numbers together because this makes their sum more likely to be centered (around zero) and less likely to send bullets far off. We use a two-dimensional matrix to rotate the initial position of the bullets in the direction they're travelling. We also used two new helper methods: Extensions::NextFloat(): Returns a random float between a minimum and maximum value. MathUtil::FromPolar(): Creates a tVector2ffrom an angle and magnitude. 
So let's see what they look like: //In Extensions float Extensions::nextFloat(float minValue, float maxValue) { return (float)tMath::random() * (maxValue - minValue) + minValue; } //In MathUtil tVector2f MathUtil::fromPolar(float angle, float magnitude) { return magnitude * tVector2f((float)cosf(angle), (float)sinf(angle)); } Custom Cursor There's one more thing we should do now that we have the initial Input class: let's draw a custom mouse cursor to make it easier to see where the ship is aiming. In GameRoot.Draw, simply draw Art's mPointer at the "mouse's" position. mSpriteBatch->draw(0, Art::getInstance()->getPointer(), Input::getInstance()->getMousePosition(), tOptional<tRectf>()); Conclusion If you test the game now, you'll be able to touch anywhere on screen to aim the continuous stream of bullets, which is a good start. In the next part, we will complete the initial gameplay by adding enemies and a score.
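As an aside on the bullet-spread math from the Shooting section above, here is a small illustration in Python/NumPy (used purely as illustration; the game code itself is C++, and the assumption that tMatrix2x2f takes its two vectors as columns is mine):

import numpy as np

aim_angle = np.deg2rad(30.0)                 # example aim direction
c, s = np.cos(aim_angle), np.sin(aim_angle)
rot = np.array([[c, -s], [s, c]])            # standard 2x2 rotation matrix

# The sum of two uniform random numbers is triangular: the spread clusters
# around zero, so most shots fly close to the aim direction.
spread = (np.random.rand(10000) * 0.08 + np.random.rand(10000) * 0.08) - 0.08
print("mean spread %.4f, std %.4f" % (spread.mean(), spread.std()))

# Rotate the two muzzle offsets (35, -8) and (35, 8) into world space,
# mirroring the aimMat * tVector2f(35, +/-8) calls in the snippet above.
for offset in (np.array([35.0, -8.0]), np.array([35.0, 8.0])):
    print("muzzle offset", offset, "->", rot.dot(offset).round(2))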
https://gamedevelopment.tutsplus.com/tutorials/make-a-neon-vector-shooter-for-ios-first-steps--gamedev-14316?WT.mc_id=Tuts+_website_relatedtutorials_sidebar
CC-MAIN-2017-47
en
refinedweb
In previous tutorials we've outlined temperature sensing, PIR motion controllers and buttons and switches, all of which can plug directly into the Raspberry Pi's GPIO ports. The HC-SR04 ultrasonic range finder is very simple to use, however, the signal it outputs needs to be converted from 5V to 3.3V so as not to damage our Raspberry Pi! We'll introduce some Physics along with Electronics in this tutorial in order to explain each step! What you'll need: an HC-SR04, a 1kΩ resistor, a 2kΩ resistor and some jumper wires. Ultrasonic Distance Sensors Sound consists of oscillating waves through a medium (such as air) with the pitch being determined by the closeness of those waves to each other, defined as the frequency. Only some of the sound spectrum (the range of sound wave frequencies) is audible to the human ear, defined as the “Acoustic” range. Very low frequency sound below Acoustic is defined as “Infrasound”, with high frequency sounds above, called “Ultrasound”. Ultrasonic sensors are designed to sense object proximity or range using ultrasound reflection, similar to radar, to calculate the time it takes to reflect ultrasound waves between the sensor and a solid object. Ultrasound is mainly used because it’s inaudible to the human ear and is relatively accurate within short distances. You could of course use Acoustic sound for this purpose, but you would have a noisy robot, beeping every few seconds. . . . A basic ultrasonic sensor consists of one or more ultrasonic transmitters (basically speakers), a receiver, and a control circuit. The transmitters emit a high frequency ultrasonic sound, which bounce off any nearby solid objects. Some of that ultrasonic noise is reflected and detected by the receiver on the sensor. That return signal is then processed by the control circuit to calculate the time difference between the signal being transmitted and received. This time can subsequently be used, along with some clever math, to calculate the distance between the sensor and the reflecting object. The HC-SR04 Ultrasonic sensor we’ll be using in this tutorial for the Raspberry Pi has four pins: ground (GND), Echo Pulse Output (ECHO), Trigger Pulse Input (TRIG), and 5V Supply (Vcc). We power the module using Vcc, ground it using GND, and use our Raspberry Pi to send an input signal to TRIG, which triggers the sensor to send an ultrasonic pulse. The pulse waves bounce off any nearby objects and some are reflected back to the sensor. The sensor detects these return waves and measures the time between the trigger and returned pulse, and then sends a 5V signal on the ECHO pin. ECHO will be “low” (0V) until the sensor is triggered when it receives the echo pulse. Once a return pulse has been located ECHO is set “high” (5V) for the duration of that pulse. Pulse duration is the full time between the sensor outputting an ultrasonic pulse, and the return pulse being detected by the sensor receiver. Our Python script must therefore measure the pulse duration and then calculate distance from this. IMPORTANT. The sensor output signal (ECHO) on the HC-SR04 is rated at 5V. However, the input pin on the Raspberry Pi GPIO is rated at 3.3V. The following circuit and simple equation can be applied to many applications where a voltage needs to be reduced. If you don’t want to learn the techy bit, just grab 1 x 1kΩ and 1 x 2kΩ resistor. Without getting too deep into the math side, we only actually need to calculate one resistor value, as it’s the dividing ratio that’s important.
We know our input voltage (5V), and our required output voltage (3.3V), and we can use any combination of resistors to achieve the reduction. I happen to have a bunch of extra 1kΩ resistors, so I decided to use one of these in the circuit as R1. Plugging our values in, this would be the following: Vout = Vin x R2 / (R1 + R2) = 5 x 2000 / (1000 + 2000) ≈ 3.3V. So, we’ll use a 1kΩ for R1 and a 2kΩ resistor as R2! Assemble the Circuit We’ll be using four pins on the Raspberry Pi for this project: GPIO 5V [Pin 2]; Vcc (5V Power), GPIO GND [Pin 6]; GND (0V Ground), GPIO 23 [Pin 16]; TRIG (GPIO Output) and GPIO 24 [Pin 18]; ECHO (GPIO Input) 1. Plug four of your male to female jumper wires into the pins on the HC-SR04 as follows: Red; Vcc, Blue; TRIG, Yellow; ECHO and Black; GND. 2. Plug Vcc into the positive rail of your breadboard, and plug GND into your negative rail. 3. Plug GPIO 5V [Pin 2] into the positive rail, and GPIO GND [Pin 6] into the negative rail. 4. Plug TRIG into a blank rail, and plug that rail into GPIO 23 [Pin 16]. (You can plug TRIG directly into GPIO 23 if you want). I personally just like to do everything on a breadboard! 5. Plug ECHO into a blank rail, link another blank rail using R1 (1kΩ resistor) 6. Link your R1 rail with the GND rail using R2 (2kΩ resistor). Leave a space between the two resistors. 7. Add GPIO 24 [Pin 18] to the rail with your R1 (1kΩ resistor). This GPIO pin needs to sit between R1 and R2. That's it! Our HC-SR04 sensor is connected to our Raspberry Pi! Sensing with Python Now that we’ve hooked our Ultrasonic Sensor up to our Pi, we need to program a Python script to detect distance! The Ultrasonic sensor output (ECHO) will always output low (0V) unless it’s been triggered in which case it will output 5V (3.3V with our voltage divider!). We therefore need to set one GPIO pin as an output, to trigger the sensor, and one as an input to detect the ECHO voltage change. First, import the Python GPIO library, import our time library (so we make our Pi wait between steps) and set our GPIO pin numbering. import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BCM) Next, we need to name our input and output pins, so that we can refer to them later in our Python code. We’ll name our output pin (which triggers the sensor) GPIO 23 [Pin 16] as TRIG, and our input pin (which reads the return signal from the sensor) GPIO 24 [Pin 18] as ECHO. TRIG = 23 ECHO = 24 We’ll then print a message to let the user know that distance measurement is in progress. . . . print "Distance Measurement In Progress" Next, set your two GPIO ports as either inputs or outputs as defined previously. GPIO.setup(TRIG,GPIO.OUT) GPIO.setup(ECHO,GPIO.IN) Then, ensure that the Trigger pin is set low, and give the sensor a second to settle. GPIO.output(TRIG, False) print "Waiting For Sensor To Settle" time.sleep(2) The HC-SR04 sensor requires a short 10uS pulse to trigger the module. Once we’ve sent our pulse signal we need to listen to our input pin, which is connected to ECHO. The sensor sets ECHO to high for the amount of time it takes for the pulse to go and come back, so our code therefore needs to measure the amount of time that the ECHO pin stays high. We use the “while” loop to ensure that each signal timestamp is recorded in the correct order. The time.time() function will record the latest timestamp for a given condition. For example, if a pin goes from low to high, and we’re recording the low condition using the time.time() function, the recorded timestamp will be the latest time at which that pin was low.
Our first step must therefore be to record the last low timestamp for ECHO (pulse_start) i.e. just before the return signal is received and the pin goes high. while GPIO.input(ECHO)==0: pulse_start = time.time() Once a signal is received, the value changes from low (0) to high (1), and the signal will remain high for the duration of the echo pulse. We therefore also need the last high timestamp for ECHO (pulse_end). while GPIO.input(ECHO)==1: pulse_end = time.time() We can now calculate the difference between the two recorded timestamps, and hence the duration of pulse (pulse_duration). pulse_duration = pulse_end - pulse_start With the time it takes for the signal to travel to an object and back again, we can calculate the distance using the following formula: distance = speed x time. The speed of sound is variable, depending on what medium it’s travelling through, in addition to the temperature of that medium. However, some clever physicists have calculated the speed of sound at sea level so we’ll take our baseline as 343m/s. If you’re trying to measure distance through water, this is where you’re falling down – make sure you’re using the right speed of sound! We also need to divide our time by two because what we’ve calculated above is actually the time it takes for the ultrasonic pulse to travel the distance to the object and back again. We simply want the distance to the object! We can simplify the calculation to be completed in our Python script as follows: distance (cm) = pulse_duration x 34300 / 2 = pulse_duration x 17150. We can plug this calculation into our Python script: distance = pulse_duration * 17150 Now we need to round our distance to 2 decimal places (for neatness!) distance = round(distance, 2) Then, we print the distance. The below command will print the word “Distance:” followed by the distance variable, followed by the unit “cm” print "Distance:",distance,"cm" Finally, we clean our GPIO pins to ensure that all inputs/outputs are reset GPIO.cleanup() Save your python script, I called ours "range_sensor.py", and run it using the following command. Running as root (sudo) is important with this script: sudo python range_sensor.py The sensor will settle for a few seconds, and then record your distance! Downloads You can download the above example HC-SR04 Raspberry Pi Python Script Here Sources Thanks to the following sources for information on this tutorial: Raspberry Pi Spy - Part 1 Raspberry Pi Spy - Part 2
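For convenience, here is the whole measurement script assembled from the snippets above into one file. This is a sketch based on the post's own code (Python 2 style, BCM numbering, TRIG on GPIO 23, ECHO on GPIO 24); the two lines producing the 10uS trigger pulse are the standard HC-SR04 trigger the text describes:

# range_sensor.py -- the snippets from this tutorial in one script
import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)

TRIG = 23
ECHO = 24

print "Distance Measurement In Progress"

GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

# make sure the trigger is low and let the sensor settle
GPIO.output(TRIG, False)
print "Waiting For Sensor To Settle"
time.sleep(2)

# 10 microsecond trigger pulse
GPIO.output(TRIG, True)
time.sleep(0.00001)
GPIO.output(TRIG, False)

# last timestamp while ECHO is still low, then last timestamp while it is high
while GPIO.input(ECHO) == 0:
    pulse_start = time.time()
while GPIO.input(ECHO) == 1:
    pulse_end = time.time()

pulse_duration = pulse_end - pulse_start

# speed of sound ~34300 cm/s, halved for the round trip
distance = round(pulse_duration * 17150, 2)

print "Distance:", distance, "cm"

GPIO.cleanup()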
https://www.modmypi.com/blog/hc-sr04-ultrasonic-range-sensor-on-the-raspberry-pi
CC-MAIN-2017-47
en
refinedweb
Introduction Creating a Generic Repository pattern in an MVC3 application with Entity Framework is the last topic that we are about to cover in our journey of learning MVC. The article will focus on Unit of Work Pattern and Repository Pattern, and shows how to perform CRUD operations in an MVC application when there could be a possibility of creating more than one repository class. To overcome this possibility and overhead, we make a Generic Repository class for all other repositories and implement a Unit of Work pattern to provide abstraction. Our roadmap towards Learning MVC running sample application that we created in fifth part of the article series. - We have Entity Framework 4.1 package or DLL on our local file system. - We understand how MVC application is created (follow second part of the series). Why Generic Repository We have already discussed what Repository Pattern is and why do we need Repository Pattern in our last article. We created a User Repository for performing CRUD operations, but think of the scenario where we need 10 such repositories. Are we going to create these classes? Not good, it results in a lot of redundant code. So to overcome this situation we’ll create a Generic Repository class that will be called by a property to create a new repository thus we do not result in lot of classes and also escape redundant code too. Moreover we save a lot of time that could be wasted creating those classes. Unit of Work Pattern According to Martin Fowler Unit of Work Pattern “Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems." From MSDN, The Unit of Work pattern isn't necessarily something that you will explicitly build yourself, but the pattern shows up in almost every persistence tool. The ITransactioninterface in NHibernate, the DataContextclass in LINQ to SQL, and the ObjectContextclass in the Entity Framework are all examples of a Unit of Work. For that matter, the venerable DataSet can be used as a Unit of Work. Other times, you may want to write your own application-specific Unit of Work interface or class that wraps the inner Unit of Work from your persistence tool. You may do this for a number of reasons. You might want to add application-specific logging, tracing, or error handling to transaction management. Perhaps you want to encapsulate the specifics of your persistence tooling from the rest of the application. You might want this extra encapsulation to make it easier to swap out persistence technologies later. Or you might want to promote testability in your system. Many of the built-in Unit of Work implementations from common persistence tools are difficult to deal with in automated unit testing scenarios." The Unit of Work class can have methods to mark entities as modified, newly created, or deleted. The Unit of Work will also have methods to commit or roll back all of the changes as well. The important responsibilities of Unit of Work are, - To manage transactions. - To order the database inserts, deletes, and updates. - To prevent duplicate updates. Inside a single usage of a Unit of Work object, different parts of the code may mark the same Invoice object as changed, but the Unit of Work class will only issue a single UPDATE command to the database. The value of using a Unit of Work pattern is to free the rest of our code from these concerns so that you can otherwise concentrate on business logic. Why use Unit of Work? Again Martin Fowler statements, . 
A Unit of Work keeps track and takes responsibility of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work." You see I don’t have to concentrate much on theory, we already have great definitions existing, all we needed is to stack them in a correct format. Using the Unit of Work One of the best ways to use the Unit of Work pattern is to allow disparate classes and services to take part in a single logical transaction. The key point here is that you want the disparate classes and services to remain ignorant of each other while being able to enlist in a single transaction. Traditionally, you've been able to do this by using transaction coordinators like MTS/COM+ or the newer System.Transactions namespace. Personally, I prefer using the Unit of Work pattern to allow unrelated classes and services to take part in a logical transaction because I think it makes the code more explicit, easier to understand, and simpler to unit test(From MSDN). Creating a Generic Repository Cut the Redundancy… Step2: Right click Learning MVC project folder and create a folder named GenericRepository and add a class namedGenericRepository.cs to that folder. The code of the GenericRepository.cs class is as follows: using System; using System.Collections.Generic; using System.Data; using System.Data.Entity; using System.Linq; using System.Linq.Expressions; namespace LearningMVC.GenericRepository { public class GenericRepository<TEntity> where TEntity : class { internal MVCEntities context; internal DbSet<TEntity> dbSet; public GenericRepository(MVCEnt; } } } We can see, we have created the generic methods and the class as well is generic, when instantiating this class we can pass any model on which the class will work as a repository and serve the purpose. TEntityis any model/domain/entity class. MVCEntitiesis our DBContextas discussed in earlier parts. Step 3: Implementing UnitOfWork: Create a folder named UnitOfWork under LearningMVC project, and add a class UnitOfWork.cs to that folder. The code of the class is as follows: using System; using LearningMVC.GenericRepository; namespace LearningMVC.UnitOfWork { public class UnitOfWork : IDisposable { private MVCEntities context = new MVCEntities(); private GenericRepository<User> userRepository; public GenericRepository<User> UserRepository { get { if (this.userRepository == null) this.userRepository = new GenericRepository<User>(context); return userRepository; } } public void Save() { context.SaveChanges(); } private bool disposed = false; protected virtual void Dispose(bool disposing) { if (!this.disposed) { if (disposing) { context.Dispose(); } } this.disposed = true; } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } } } We see the class implements IDisposableinterface for objects of this class to be disposed. We create object of DBContextin this class, note that earlier it was used to be passed in Repository class from a controller. Now it's time to create our User Repository. We see in the code itself that, simply a variable named userRepositoryis declared as private GenericRepository<User> userRepository;of type GenericRepositoryserving User entity to TEntitytemplate. 
Then a property is created for the same userRepositoryvariable in a very simplified manner, public GenericRepository<User> UserRepository { get { if (this.userRepository == null) this.userRepository = new GenericRepository<User>(context); return userRepository; } } I.e., mere 6-7 lines of code. Guess what? Our UserRepository is created. (Taken from Google) You see it was as simple as that, you can create as many repositories you want by just creating simple properties, and no need to create separate classes. And now you can complete the rest of the story by yourself, confused???? Yes it's DBOperations, let's do it. Step 4: In MyController, declare a variable unitOfWorkas: private UnitOfWork.UnitOfWork unitOfWork = new UnitOfWork.UnitOfWork(); Now this unitOfWork instance of UnitOfWork class holds all th repository properties,if we press “." After it, it will show the repositories.So we can choose any of the repositories created and perform CRUD operations on them. E.g. our Index action: public ActionResult Index() { var userList = from user in unitOfWork.UserRepository.Get() }); } } ViewBag.FirstName = "My First Name"; ViewData["FirstName"] = "My First Name"; if(TempData.Any()) { var tempData = TempData["TempData Name"]; } return View(users); } Here, unitOfWork.UserRepository> Accessing UserRepository. unitOfWork.UserRepository.Get()-> Accessing Generic Get()method to get all users. Earlier we used to have MyControllerconstructor like: public MyController() { this.userRepository = new UserRepository(new MVCEntities()); } Now, no need to write that constructor, in fact you can remove the UserRepositoryclass and Interface we created in part 5 of Learning MVC. I hope you can write the Actions for rest of the CRUD operations as well. Details public ActionResult Details); } Create: [HttpPost] public ActionResult Create(LearningMVC.Models.UserList userDetails) { try { var user = new User();; } unitOfWork.UserRepository.Insert(user); unitOfWork.Save(); return RedirectToAction("Index"); } catch { return View(); } } Edit: public ActionResult Edit Edit(int id, User userDetails) { TempData["TempData Name"] = "Akhil"; try { var user = unitOfWork.UserRepository.GetByID(id); user.FirstName = userDetails.FirstName; user.LastName = userDetails.LastName; user.Address = userDetails.Address; user.PhoneNo = userDetails.PhoneNo; user.EMail = userDetails.EMail; user.Company = userDetails.Company; user.Designation = userDetails.Designation; unitOfWork.UserRepository.Update(user); unitOfWork.Save(); return RedirectToAction("Index"); } Delete: public ActionResult Delete(int id) { var user = new LearningMVC.Models.UserList(); var userDetails = unitOfWork.UserRepository.GetByID(id); if (userDetails != null) { Delete(int id, LearningMVC.Models.UserList userDetails) { try { var user = unitOfWork.UserRepository.GetByID(id); if (user != null) { unitOfWork.UserRepository.Delete(id); unitOfWork.Save(); } return RedirectToAction("Index"); } catch { return View(); } } Note: Images are taken from Google images. Conclusion We now know how to make generic repositories too, and perform CRUD operations using it. We have also learnt UnitOfWork pattern in detail. Now you are qualified and confident enough to apply these concepts in your enterprise applications. This was the last part of this MVC series, let me know if you feel to discuss any topic in particular or we can also start any other series as well. 
For more, read: - C# and ASP.NET Questions (All in one) - MVC Interview Questions - C# and ASP.NET Interview Questions and Answers - Web Services and Windows Services Interview Questions Other Series My other series of articles:
http://csharppulse.blogspot.com/2013/09/learning-mvc-part-6-generic-repository.html
CC-MAIN-2017-47
en
refinedweb
I am trying to fit an exponential function to my data. I am not very experienced with fitting mathematical functions to my data yet. Below is my code right now. import numpy as np import matplotlib.pyplot as plt from scipy.optimize import curve_fit my_x = (4,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40) my_y = (0.022172333,0.020881,0.017729,0.021641333,0.02479,0.030755667,0.037235,0.048389,0.068451,0.06898974,0.161409,0.242802333,0.316012667,0.440762333,0.569118333,0.7016839,0.832527333) def myfunc(x,a,b,c): return a*np.exp(b*x)+c p=[my_x,0.0045,0.1262,0] #pre-determined a=0.0045, b=0.1262, c=0 according to excel popt, pcov = curve_fit(myfunc,my_x,my_y, p0=p) plt.plot (my_x,myfunc(my_x, *popt)) return function(xdata, *params) - ydata TypeError: myfunc() takes 4 positional arguments but 5 were given The myfunc function takes four parameters: x, a, b and c. The traceback line return function(xdata, *params) - ydata shows the error happens inside curve_fit, and the cause is the initial guess: p = [my_x, 0.0045, 0.1262, 0] accidentally includes the whole my_x tuple as its first entry, so curve_fit sees four entries in p0, assumes the model has four fit parameters, and ends up calling myfunc(xdata, *params) with five positional arguments. The initial guess should only contain starting values for a, b and c: p = [0.0045, 0.1262, 0] With that change curve_fit returns a three-element popt, and myfunc(my_x, *popt) in the plot call then receives the expected four arguments. More about unpacking here.
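For reference, a minimal corrected version, reusing my_x, my_y and myfunc exactly as defined in the question:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# assumes my_x, my_y and myfunc from the question are already defined
x = np.asarray(my_x, dtype=float)
y = np.asarray(my_y, dtype=float)

p0 = [0.0045, 0.1262, 0.0]            # starting values for a, b, c only
popt, pcov = curve_fit(myfunc, x, y, p0=p0)

plt.plot(x, y, 'o', label='data')
plt.plot(x, myfunc(x, *popt), '-', label='fit')
plt.legend()
plt.show()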
https://codedump.io/share/kdUbfwnvoGaA/1/function-takes-4-positional-arguments-but-5-were-givendef-function-in-python-35
CC-MAIN-2017-47
en
refinedweb
Log4c rolling file appender interface. #include <log4c/defs.h> #include <log4c/appender.h> #include <log4c/rollingpolicy.h> Get a new rolling file appender configuration object. Get the prefix string in this rolling file appender configuration. Get the logging directory in this rolling file appender configuration. Set the prefix string in this rolling file appender configuration. Set the logging directory in this rolling file appender configuration. Set the rolling policy in this rolling file appender configuration. rollingfile appender type definition. This should be used as a parameter to the log4c_appender_set_type() routine to set the type of the appender.
http://log4c.sourceforge.net/appender__type__rollingfile_8h.html
CC-MAIN-2017-47
en
refinedweb
PAM_GETENVLIST(3) Linux-PAM Manual PAM_GETENVLIST(3) pam_getenvlist - getting the PAM environment #include <security/pam_appl.h> char **pam_getenvlist(pam_handle_t *pamh); The pam_getenvlist function returns a complete copy of the PAM environment as associated with the handle pamh. The PAM environment variables represent the contents of the regular environment variables of the authenticated user when service is granted. The format of the memory is a malloc()'d array of char pointers, the last element of which is set to NULL. Each of the non-NULL entries in this array point to a NUL terminated and malloc()'d char string of the form: "name=value". It should be noted that this memory will never be free()'d by libpam. Once obtained by a call to pam_getenvlist, it is the responsibility of the calling application to free() this memory. It is by design, and not a coincidence, that the format and contents of the returned array matches that required for the third argument of the execle(3) function call. The pam_getenvlist function returns NULL on failure. pam_start(3), pam_getenv(3), pam_putenv(3) Pages that refer to this page: pam(3), pam_getenv(3), pam_misc_drop_env(3), pam_putenv(3), pam_exec(8)
http://man7.org/linux/man-pages/man3/pam_getenvlist.3.html
CC-MAIN-2017-47
en
refinedweb
doing a wordle-like word cloud. I know, word clouds are a bit out of style but I kind of like them anyway. My motivation to think about word clouds was that I thought these could be combined with topic-models to give somewhat more interesting visualizations. So I looked around to find a nice open-source implementation of word-clouds ... only to find none. (This has been a while, maybe it has changed since). While I was bored in the train last week, I came up with this code. A little today-themed taste: The first step is to get some document. I used the Constitution of the United States for the above. with open("constitution.txt") as f: lines = f.readlines() text = "".join(lines) The next step is to extract words and give the words some weighting - for example how often they occur in the document. I used scikit-learn's CountVectorizer for that as it is convenient and fast, but you could also use nltk or just some regexp. I get the counts of the 200 most common non-stopwords and normalize by the maximum count (to be somewhat invariant to document size). cv = CountVectorizer(min_df=0, charset_error="ignore", stop_words="english", max_features=200) counts = cv.fit_transform([text]).toarray().ravel() words = np.array(cv.get_feature_names()) # normalize counts = counts / float(counts.max()) Now the real work starts. The basic idea is to randomly sample a place on the canvas and draw a word with a size related to its importance (frequency). We have to take care not to make the words overlap, though. There seems to be no good alternative to the Python Imaging Library (PIL), which is really, really horrible. There are no docstrings. You specify colors using strings. There is a weird module structure. There are no docstrings. Anyway, we can get a canvas and a drawing object like this: img_grey = Image.new("L", (width, height)) draw = ImageDraw.Draw(img_grey) We can then write in the image using font = ImageFont.truetype(font_path, font_size) draw.setfont(font) draw.text((y, x), "Text that will appear in white", fill="white") The font_path here is an absolute path to a true type font on your system. I found no way to get around this (didn't look very hard, though). Ok, now we could draw random positions and see if we could draw there without touching any other words. There is a handy function, ImageDraw.textsize, which tells you how large a piece of text will be once rendered. We can use that to test if there is any overlap. Unfortunately, random sampling any place in the image turns out to be very inefficient: if a lot of the room is already taken, we have to try quite often to find some space. My next idea was first to find out all possible free places in the image and then sample randomly from those. The easiest way to find free positions is to convolve the current image with a box of size ImageDraw.textsize(next_word). The places where the result is zero are exactly the places that have enough room for the text. Using scipy.ndimage.uniform_filter that worked quite nicely. But what do we do if there is not enough room to draw a word in the size we want? Then we have to make the font smaller and try again. Which means convolving the image again, this time with a somewhat smaller box. The code wasn't very fast and this seemed pretty wasteful, so I wanted to use another approach: integral images! Integral images are a way to pre-compute a simple 2d structure from which it is possible to extract the sum over arbitrary rectangles in the image in constant time.
The integral image is basically a 2d cumulative sum and can be computed as integral_image = np.cumsum(np.cumsum(image, axis=0), axis=1). This can be done once, and then we can look up rectangles of any size very fast. If we are interested in windows of size (w, h) we can find the sum over all possible windows of this size via area = (integral_image[w:, h:] + integral_image[:-w, :-h] - integral_image[w:, :-h] - integral_image[:-w, h:]) This is a combination of the integral image query (see wikipedia) and my favorite numpy trick to query all positions simultaneously. So basically this does the same as the convolution above, only it precomputes a structure so that we can query for all possible window sizes. After drawing a word, we have to compute the integral image again. Unfortunately, the fancy indexing with the integral image was a bit sluggish. On the other hand, that was a great opportunity to try out typed memory views in cython, which I learned about from Stefan Behnel at Pycon DE :) def query_integral_image(unsigned int[:,:] integral_image, int size_x, int size_y): cdef int x = integral_image.shape[0] cdef int y = integral_image.shape[1] cdef int area, i, j x_pos, y_pos = [], [] for i in xrange(x - size_x): for j in xrange(y - size_y): area = integral_image[i, j] + integral_image[i + size_x, j + size_y] area -= integral_image[i + size_x, j] + integral_image[i, j + size_y] if not area: x_pos.append(i) y_pos.append(j) Awesome! Easy to write down and straight to C-Speed. Except for the last two lines ... lists are not fast. I couldn't get that much faster (the array module doesn't have a C API afaik). I wanted to sample from all possible positions anyway, so I just ran the above code twice: once counting how many possible positions there are, then sampling, then going to the position that I sampled. Using C++ lists would probably be easier but I was too lazy to try... Anyhow, now I had pretty decent integral images :) The building still took some time, though... so I lazily recomputed only the part that is changed after I draw a new word. It is not very pretty but I think should be quite readable. Less talk more pictures: To scale the fonts I used some arbitrary logarithmic dependency on the frequency, that I felt looked decent. It is also possible just to become smaller if there is no more room. Oh and of course I allowed flipping of the words :) I also played with using arbitrary colors. I didn't see anything like colormaps in PIL, so I just used the HSL space and just sampled the hue. More elaborate schemes are obviously possible. Again, I used a slight trick for a bit more speed: I first computed everything in grey-scale, saved all the positions and then re-did it in color. One more, this time a bit more with the theme of the blog (can you guess what this is?) And with less saturation: There is definitely some room for improvement w.r.t. the look of it, but I feel this is already a nice start if you want to play around. One last comment: I thought about improving performance (apparently the only thing on my mind during this little project) by doing the whole thing at a lower resolution and then recreating it at a higher one. This has two problems: if you use a too small resolution, some text might actually become invisible as it is too small. The other problem is that PIL's font sizes don't scale linearly. So it is not possible to say "I want this font 4 times larger". You can work around that but it's not pretty.
So I went with the cython / integral image way, which I think is kind of cool :) If you scrolled down for the code, it is here. PS: yes, this doesn't generate css / html4. But as you get the text sizes and positions, it should be easy to use this as a backend to generate a html page. PR welcome ;) Very nice! As an alternative to PIL, what about using PyQt / PySide and paint into a QPixmap? It may need a bit more code but I guess more people have PyQt / PySide than PIL. Thomas Thanks. I'm not really familiar with PyQt and I wanted a short simple piece of code (sort of). The real work is done in numpy and as long as the you can easily get the data out of the QPixmap into a numpy array, replacing PIL should be easy. Great job Andreas ... I did an implementation of wordly cloud in Python years back using PyQt and it was great fun ... You output is much better then mine. It's truly a fun exercise to do is what I can recall. Thanks :) Did you use rectangles to model the place where a word is or the rendered word, as I do it? COOL!!!! Hi Andreas, Thanks for the Python based word-cloud. Looks indeed nice :) Hi Andreas, Really cool one. I tried with non-english text also it wirks. Earlier I use PyTagClou but it misses the multilingual word-cloud facility. Hi,. Very cool ! Very Nice, I unfortunately once - in 1987 - had to implement a postcript word-cloud. Now I'm using Jason Davis' d3 version. Thanks. The d3 version is pretty good, indeed :) I have a Javascript version of a WordCloud at - not directly comparable, but it will do clouds in shapes other than a parallellogram. Great Russ! So simple and so beatiful and powerful! Thanks a lot!! Pretty cool :) My code can now (well nearly now) do other shapes, too! Instead of choosing a random place on the image and then drawing a specific word, why don't you start filling up the image in a orderly fashion with random words? It is not clear to me how to do that. The words have different sizes and shapes, so if you start from, say, the top right, the shape will become "unorderly" very soon and the collision detection will be as hard as it is with random assignments, I would guess. Hi Andres! Thank you for the great post! I tried your script and I got this error message, I tried to google it but no luck. any idea? def query_integral_image(unsigned int[:,:] integral_image, int size_x, int size_y): ^ SyntaxError: invalid syntax the arrow was under int[:,:] Thanks a lot! Hi Karin. I would guess that your cython is too old. Try "pip install --user --upgrade cython" to get a newer version. Thank you Andreas for the quick reply! I run the command line you suggested and it upgrade the cython. when I run the word cloud script I got the same error. Any suggestion? Thank you very much!!!! How did you run the file? Compile using "make" or "python setup.py build_ext -i" as stated in the readme, and then call "python wordcloud.py". I run "python setup.py build_ext -i" and I get this message :"running build_ext" then I run "python wordcloud.py" and I still get the message. ,maybe something to do with my configuration ubuntu system ? That is pretty odd. Can you give the exact error? The error is in the cython file, which should not be called by python. Having a syntax error in cython during runtime is ... weird.. sure! 
when I run : "python wordcloud.py" I get this bellow: Traceback (most recent call last): File "wordcloud.py", line 13, in from query_integral_image import query_integral_image File "/var/www/word_cloud-master/query_integral_image.py", line 7 def query_integral_image(unsigned int[:,:] integral_image, int size_x, int size_y): ^ SyntaxError: invalid syntax There should be no file query_integral_image.py, only query_integral_image.pyx. my bad :( I copied the file to the my server and I rerun it again. now I get a different error message when I run make or "python setup.py build_ext -i" : python setup.py build_ext -i Compiling query_integral_image.pyx because it changed. Cythonizing query_integral_image.pyx Error compiling Cython file: ------------------------------------------------------------ ... # cython: wraparound=False import array import numpy as np def query_integral_image(unsigned int[:,:] integral_image, int size_x, int size_y): ^ ------------------------------------------------------------ query_integral_image.pyx:7:38: Expected an identifier or literal Traceback (most recent call last): File "setup.py", line 7, in ext_modules=cythonize("*.pyx"), File "/usr/lib/pymodules/python2.7/Cython/Build/Dependencies.py", line 517, in cythonize cythonize_one(pyx_file, c_file, quiet, options) File "/usr/lib/pymodules/python2.7/Cython/Build/Dependencies.py", line 540, in cythonize_one raise CompileError(None, pyx_file) Cython.Compiler.Errors.CompileError: query_integral_image.pyx make: *** [all] Error 1 And which version of Cython are you calling there? Can you try ``cython --version`` and ``python -c "import Cython; print(Cython.__version__)`` ? I would guess you have an older cython somewhere in your path. Hi Andreas, Great work. I tried running your code and I get error message that I don't know where it comes from. On the Windows in the CMD windows here is what I run and get: ...\wordcloudPython\trunk>python setup.py build_ext -i running build_ext ...\wordcloudPython\trunk>python wordcloud.py C:\Python33\lib\site-packages\sklearn\feature_extraction\text.py:615: Deprecatio nWarning: The charset_error parameter is deprecated as of version 0.14 and will be removed in 0.16. Use decode_error instead. DeprecationWarning) Traceback (most recent call last): File "wordcloud.py", line 183, in counts = make_wordcloud(words, counts, output_filename) File "wordcloud.py", line 102, in make_wordcloud box_size = draw.textsize(word) File "C:\Python33\lib\site-packages\PIL\ImageDraw.py", line 281, in textsize return font.getsize(text) File "C:\Python33\lib\site-packages\PIL\ImageFont.py", line 189, in getsize w, h = self.font.getsize(text)[0] TypeError: 'int' object is not iterable what is the reason for the error? And how should I run the code so it gets the constitution.txt as input? (sorry I am new in Python). That error is weird as it is inside PIL. Did you change the font path in the file? You need to set "FONT_PATH" to a true-type font that exists on your system. The default will only work under Linux. The code uses the constitution by default but you can just pass another text file as command line argument. Hth, Andy Thanks Andy. After a lot of Google search I found this that resolved the error: To get it to work change line 189 in from C:\Python33\Lib\site-packages\PIL\ImageFont.py: w, h = self.font.getsize(text)[0] to: w, h = self.font.getsize(text) Do you know if your code works with Persian (Farsi language) as well? So that is a bug in PIL under Python3? For Persian: basically yes. 
1) you pick a font that supports the signs, 2) your text is properly encoded (UTF-8, and hopefully my code reads that correctly), and 3) the regular expression in the scikit-learn Vectorizer makes sense for the language (which is probably fine). The vectorizer tokenizes the text into words based on a simple regular expression that basically separates words at whitespace and punctuation, iirc. For languages where that is not meaningful, you would need to adjust the regular expression (an optional argument to the Vectorizer).

Yes, that is a bug in PIL for Python3.

Thanks for the explanation regarding Persian. I used a Persian font and I debugged the code. It reads the Persian text fine and creates the correct "words" and "counts", but at the end the generated image is just a bunch of rectangles! Do you know what I should do to create an image with Persian words in it? Thanks again for all your help.

So do the extracted "words" make sense? And what is their encoding? The code just renders the words using PIL. I am not very familiar with PIL, sorry. You could try writing a stand-alone script that tries to render some word using PIL and see if the problem persists.

What does it mean to "make" this file? The install and use instructions could be improved. I'm on Windows.

It means running the program "make", the way most software is built on most operating systems. You can just run "python setup.py build_ext -i" as I said above. Feel free to send a PR improving the Readme.

Hi Andy, I tried to re-install Python 3.3 and, while your code was working before, now I get this error:

from query_integral_image import query_integral_image
ImportError: DLL load failed: %1 is not a valid Win32 application.

Do you know what could be the reason? Thanks for the help.

This is really interesting, though my brain can't quite digest the stuff about integral images. I've been playing with making word clouds using bash scripting and ImageMagick, starting from a state of pretty much total ignorance on how to do it. Rather than randomly selecting points in the canvas and trying to put a word there, I've been starting off by putting the most common word in the centre of the canvas and then checking for free space spiralling out from the centre. Your post provides an answer to a question I've been wondering about, which is how people get clouds to fit a specified shape, even just a simple rectangle: "But what do we do if there is not enough room to draw a word in the size we want? Then we have to make the font smaller and try again." However, this seems to conflict with the premise of a word cloud. As you put it: "…draw a word with a size related to its importance (frequency)." If you're fitting words into spaces by way of shrinking their size, then aren't you destroying the relationship between the size of the word and its frequency? Especially because, as I read it, if a word won't fit in a space you just shrink it until it fits. Doesn't this approach mean that you can potentially end up with a word of frequency N being drawn larger than one with frequency 2N? Or have I misunderstood something?

Hey. I think my approach to word clouds is very non-standard. I also started from ignorance and tried something out. There is a paper about the Wordle way, which I can't find at the moment. I think this JavaScript implementation uses the same algorithm: it also relies on a spiral and a dynamic that moves words apart if they overlap.
Actually, the way I present the algorithm here (and the way it is implemented), it is true that the size does not correspond to the frequency. BUT the ranking of the words is preserved: I sort the words by frequency before I start drawing, and the size will only decrease. Maybe that wasn't clear from my description. Hth, Andy

How difficult would it be to create an image where the background is white? I've tried playing around in the code - specifically adding a color="white" parameter when all of the images are created - but was unsuccessful.
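To make the placement logic discussed in this thread concrete, here is a rough, hypothetical Python sketch of the approach described above: sort words by frequency, use a summed-area (integral) image to find free spots, and shrink the font when nothing fits. It is not the actual word_cloud code; all names are made up, the box-size estimate is a crude stand-in for PIL text measurement, and occupancy is tracked with bounding rectangles rather than the rendered glyph pixels.

import numpy as np

def find_free_spot(occupancy, bw, bh, rng):
    # Summed-area table with a zero row/column so any rectangle sum,
    # occupancy[y:y+bh, x:x+bw].sum(), is a four-corner lookup.
    sat = np.pad(occupancy, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    areas = sat[bh:, bw:] - sat[:-bh, bw:] - sat[bh:, :-bw] + sat[:-bh, :-bw]
    free_y, free_x = np.nonzero(areas == 0)   # every top-left corner that fits
    if free_x.size == 0:
        return None
    i = rng.integers(free_x.size)             # pick one of them at random
    return int(free_x[i]), int(free_y[i])

def place_words(counts, width=400, height=200, max_font=64, min_font=8, seed=0):
    rng = np.random.default_rng(seed)
    occupancy = np.zeros((height, width), dtype=np.int64)
    font_size = max_font
    layout = []
    # Draw the most frequent words first; the font size only ever decreases,
    # so the ranking of the words is preserved even if exact proportionality is not.
    for word, _count in sorted(counts.items(), key=lambda kv: -kv[1]):
        pos = None
        while pos is None and font_size >= min_font:
            bw = max(1, len(word) * font_size // 2)   # crude width estimate;
            bh = font_size                            # the real code measures text with PIL
            pos = find_free_spot(occupancy, bw, bh, rng)
            if pos is None:
                font_size -= 2                        # no room at this size: shrink and retry
        if pos is None:
            break                                     # canvas is full, stop placing words
        x, y = pos
        occupancy[y:y + bh, x:x + bw] = 1             # mark the area as taken
        layout.append((word, font_size, (x, y)))
    return layout

layout = place_words({"python": 12, "cloud": 7, "word": 5, "numpy": 3})
for word, size, (x, y) in layout:
    print(word, "at", (x, y), "font size", size)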
http://peekaboo-vision.blogspot.be/2012/11/a-wordcloud-in-python.html
CC-MAIN-2017-47
en
refinedweb
/*
 * CancelGraph.java
 *
 * Created on April 26, 2006, 3:20 PM
 *
 * To change this template, choose Tools | Template Manager
 * and open the template in the editor.
 */

package org.netbeans.modules.xml.refactoring.ui;

import org.openide.util.Cancellable;

/**
 *
 * @author Jeri Lockhart
 */

public class CancelGraph implements Cancellable, CancelSignal {

    private boolean isCancelRequested;

    public boolean cancel() {
        isCancelRequested = true;
        return true;
    }

    /**
     * Implement CancelSignal
     */
    public boolean isCancelRequested() {
        return isCancelRequested;
    }
}
http://kickjava.com/src/org/netbeans/modules/xml/refactoring/ui/CancelGraph.java.htm
CC-MAIN-2017-47
en
refinedweb
This sample shows how to protect your PDF document with a password. There are two password types: a "user" password and an "owner" password. The PdfDocument.OwnerPassword property is for the "owner" password and the PdfDocument.UserPassword property is for the "user" password. Opening a PDF document with the "owner" password allows a reader of your document to do everything with the opened document. Opening a PDF document with the "user" password allows a reader of your document to perform only the operations allowed by the user access permissions.

C#:

using System.Diagnostics;

namespace BitMiracle.Docotic.Pdf.Samples
{
    public static class SetPassword
    {
        public static void Main()
        {
            // NOTE:
            // When used in trial mode, the library imposes some restrictions.
            // Please visit
            // for more information.
            string pathToFile = "SetPassword.pdf";

            using (PdfDocument pdf = new PdfDocument())
            {
                pdf.UserPassword = "test";
                pdf.Save(pathToFile);
            }

            Process.Start(pathToFile);
        }
    }
}

VB.NET:

Imports System.Diagnostics
Imports BitMiracle.Docotic.Pdf

Namespace BitMiracle.Docotic.Pdf.Samples
    Public NotInheritable Class SetPassword
        Public Shared Sub Main()
            ' NOTE:
            ' When used in trial mode, the library imposes some restrictions.
            ' Please visit
            ' for more information.
            Dim pathToFile As String = "SetPassword.pdf"

            Using pdf As New PdfDocument()
                pdf.UserPassword = "test"
                pdf.Save(pathToFile)
            End Using

            Process.Start(pathToFile)
        End Sub
    End Class
End Namespace
https://bitmiracle.com/pdf-library/help/set-password.aspx
CC-MAIN-2017-47
en
refinedweb
With {module_pagename} you get the page name of the current page you visit. Is there also a way to get the template ID of the page you are currently visiting, with a module or any liquid tags? Thanks!

Hi, You can get this with liquid:

<p>PageID: {{oid_0.oid}}</p>
<p>Page Name: {module_pagename}</p>
{module_data resource="pages" version="v3" fields="id,templateId,name,template" resourceId="{{oid_0.oid}}" order="id" collection="myData"}
<p>Template ID: {{myData.templateId}}</p>
<p>Template: {{myData.template.name}}</p>

The OID position placement will differ per site based on the call; naming your collections is also important for good use in your projects.

{module_pagename} can be set as a data element on, say, your body tag so it does not render in your actual website. From there, it is accessible as a liquid data point: {{pagename.name}}. To get the template ID:

{module_data resource="pages" version="v3" fields="id,templateId,name" where="\{'name':'{{pagename.name}}'\}" skip="0" limit="2" order="id" collection="pageData"}
{{pageData.templateId}}

That is a bit cleaner and more reliable. I have a couple of other posts with people thinking the template ID could be useful for their liquid conditions, but it really is not. You cannot control templates on the front end and you cannot dynamically change them; one page can only have one template. I see the thinking, but there are far better options and ways to achieve things.

Yes, this approach works for pages. The only downside is with system-generated pages like blog, FAQ, system pages and web apps - even those have different templates assigned. I currently have no idea how to get the template ID if I land on those system pages. Any ideas?

Possible, but just a lot of work and code on the layouts you have access to. Can I ask why? What are you trying to achieve? Like I said, I bet you do not need to do this.

Thank you for your reply. Currently I want to use the abilities of liquid as fully as possible and avoid JavaScript as much as possible. I want pages with a certain type of template to display different content:

{% if {{myTemplate}} == 'templateIDA' -%}
{module_contentholderA}
{% endif %}
{% if {{myTemplate}} == 'templateIDB' -%}
{module_contentholderB}
{% endif %}

Hope it is not too complex to achieve this. Or maybe you have other suggestions? Thanks for your time!

Well, you're going per page for that when you should be doing it within the scope of the template. You're overcomplicating it.
https://forums.adobe.com/thread/2095178
CC-MAIN-2017-47
en
refinedweb
I have some code to parse POST and GET parameters passed to a CGI program written in C and I'd like to test it out. To parse GET parameters, I need to access the QUERY_STRING environment variable, so I'd like to set that environment variable to different things and see if my code can handle the test cases. My problem is that I cannot "set" and "get" QUERY_STRING. Here's my code (the part relevant to this topic):

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main ()
{
    char* queryStringGet;
    char* queryStringSet;
    char* command;

    queryStringGet = (char*) malloc (100); // more memory than needed
    queryStringSet = (char*) malloc (100); // more memory than needed
    command = (char*) malloc (100);        // more memory than needed

    strcpy (command, "export QUERY_STRING=");
    strcpy (queryStringSet, "a=1&b=2&c=3");
    strcat (command, queryStringSet);

    printf ("command = %s\n", command); // display to make sure we have the correct command
    system (command); // execute command. This should set the QUERY_STRING env. variable

    queryStringGet = getenv ("QUERY_STRING"); // should be the same as queryStringSet
    printf ("QUERY_STRING = %s\n", queryStringGet);

    free (queryStringGet);
    free (queryStringSet);
    free (command);

    return 0;
}

Results of line 19 (as expected):

command = export QUERY_STRING=a=1&b=2&c=3

Results of line 23 (not what I wanted):

QUERY_STRING = (null)

Any ideas of what I am doing wrong? I'm using Linux. Thanks.
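The answers to this question are not included above, but the behaviour being asked about is easy to reproduce. The sketch below is my own illustration (not from the thread, and using a hypothetical variable name) of the same idea in Python: an export executed via system() runs in a child shell, so it never changes the calling process's environment, whereas modifying the process environment directly does work and is inherited by later child processes.

import os

# A child shell's "export" cannot change the parent process's environment:
os.system("export DEMO_VAR=hello")        # runs in a separate /bin/sh process
print(os.environ.get("DEMO_VAR"))         # None -- this process never saw it

# Changing this process's own environment does work...
os.environ["DEMO_VAR"] = "hello"
print(os.environ.get("DEMO_VAR"))         # hello

# ...and is inherited by child processes launched afterwards:
print(os.popen("echo $DEMO_VAR").read().strip())   # hello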
https://www.daniweb.com/programming/software-development/threads/255653/setting-and-getting-environment-variable-in-c
CC-MAIN-2017-47
en
refinedweb
02 July 2012 09:00 [Source: ICIS news]

LONDON (ICIS)--Here are some of the top stories from ICIS Europe for the week ended 29 June 2012.

Swiss Ameropa acquires 20.5% stake in Azomures for €53.7m
Swiss grain and fertilizer trader Ameropa Holding has acquired a further 20.53% stake in Romania's largest fertilizer producer Azomures for New Lei 240.1m (€53.7m, $67.3m), the company said on Tuesday.

Europe chem stocks fall as
European chemical stocks fell on Monday, in line with financial markets, as investors lost confidence after
http://www.icis.com/Articles/2012/07/02/9574192/europe-top-stories-weekly-summary.html
CC-MAIN-2015-06
en
refinedweb
look VoIP Keep Your Number VoIP Keep Your Number VoIP Keep Your Existing Phone Number Changing your phone service to Lingo VoIP phone service is easy, even better since you can keep Keep servlet session alive - Development process Keep servlet session alive Hi, I am developing an application in java swing and servlet. Database is kept on server. I am using HttpClient for swing servlet communication. I want to send heartbeat message from client JAVA JAZZ UP - Free online Java magazine JAVA JAZZ UP - Free online Java magazine Our this issue contains: Java Jazz Up Issue 3 Index...; Valued Java Jazz Up Readers Community We invite you to post Java-technology Java - Java Beginners , the book store keep track the number of the books bought and total amount spent..., and then resets the total amount spent to 0. Write a program that can process up to 1000 November 2007 Issue of Java Jazz up magazine November 2007 Issue of Java Jazz up magazine Java News Open source solution... Java project. NetBeans IDE The NetBeans IDE, product of Sun Java Basic Tutorial for Beginners Java Basic Tutorial for Beginners - Learn how to setup-development environment and write Java programs This tutorial will get you started with the Java...) More tutorials for beginners Getting Started with Java Learn Java Problem - Java Beginners Java Problem Write a program 2 input a positive integer n and check wheter n is prime or not and also know the position of that number in the prime..., Code to solve the problem : import java.io.*; public class PrimeNumber JAVA JAZZ UP - Free online Java magazine JAVA JAZZ UP - Free online Java magazine Our this issue contains: Java Jazz Up Issue 1 Index.... Reader?s Forum Welcome to the Java Jazz Up tomcat server start up error - Struts tomcat server start up error Hai friends..... while running tomcat server ,I got a problem..... Sep 5, 2009 4:49:08 AM... on the java.library.path: C:\Program Files\Java\jre6\bin;.;C:\WINDOWS\Sun\Java\bin;C Difference in two dates - Java Beginners for more information:... on that. The thing is, that I need to find the difference between the two dates in JAVA...(); Now I want to keep this in my Database...... So after few days exe set up software - Java Server Faces Questions exe set up software hi i want a exe file creator software. i am doing a project in swings. now i need exe converter. pls send simple code - Java Beginners simple code to input a number and check wether it is prime or not and print its position in prime nuber series. Hi friend, Code to help in solving the problem : import java.io.*; class PrimeNumber { public String Array - Java Beginners each value one by one. if it is string keep it in a string variable... it once up on a time, yes it worked. String st = new String("3d41d6 small java project - Java Beginners small java project i've just started using java at work and i need to get my self up to speed with it, can you give me a small java for beginners project to work on. your concern will be highly appreciated Setting Up SSI on WAMP Setting up SSI on WAMP Server Server Side Includes (SSI) - Definition The Server Side Includes is a simple interpreted server side scripting used... for SSI directives. Here, you will have to keep in mind that you will have Learn Java for beginners professionals as well as other learners as to how to keep up with it. The Java course...Learning Java for beginners has become an easy task now, with the popularity... 
Java Beginners tutorial Home page please help me how to set up Netbeans for JSP please help me how to set up Netbeans for JSP Hi roseindian.net, the following page is what i have seen when i run jsp project.What I can do,please...-For: 10.140.135.201 Cache-Control: max-age=259200 Connection: keep-alive Some possible pop up pop up how to create pop up in html java - Java Beginners up to a specific limit? Q2- write a program in java to check a given number for Fibonacci term? Q3- write a program in java to generate prime number up...-conversion I hope this will help you airline reservation java code - Java Beginners airline reservation java code Write a program that simulates... to simulate the box effect as in the diagram above, but try to keep the output... reset the bookings and free up ALL seats once again. Things to consider sort java - Java Beginners sort java 1. A statistics company wants to keep information of families. The information of a family is the family name, the number of members and first name of each member. The families are sorted alphabetically by family name java - Java Beginners java how can we show pop up menu with keypress event? do we need to give body of all three methos i.e. keypress,keyrelease,and keyprint program code for this question - Java Beginners an up-to-date text file in which they keep data about each of their customers...: Write a Java application, ProduceBills.java, that reads the name of the file Sorting in java - Java Beginners Sorting in java Hello.. I want you to help me with this question.. 1. A statistics company wants to keep information of families. The information of a family is the family name, the number of members and first name of each java programming - Java Beginners java programming hello friends! My name is David.I am a new to JAVA i really need exposure in the Language.I would appreciate it if you guys can... explanation) of to my mail anytime so that i can brush up myself.My e-mail write a program in java Adding up the subscript(st,nd,rd,th) to the number of days in a input string write a program in java Adding up the subscript(st,nd,rd,th) to the number of days in a input string write a program in java Adding up the subscript(st,nd,rd,th) to the number of days in a input string ex:If a user enters Beginners in Java Beginners in Java Hi, I am beginners in Java, can someone help me... tutorials for beginners in Java with example? Thanks. Hi, want to be command over Java, you should go on the link and follow the various beginners java question - Java Beginners java question Given the string "hey how are you today?" how many tokens would you have after breaking up the string using whitespace as a delimiter? Hi Friend, Try the following code: import java.util.*; public java beginners - Java Beginners the following links: beginners what is StringTokenizer? what is the funciton Java - Java Beginners are also great for setting up quick tests to see how Java works. The applications...Java Console application What is Java Console application? Hi friend,A Java Console application can only display textual data. Console Java help - Java Beginners Java help I didnot get the code ,therefore I am posting my question again.. Thanks in advance ...:) Programming Assignment: A) Consider... prescription B) The pharamacy want now to keep record of each patient Java Stacks - Java Beginners Java Stacks Hello..Help me with this assignment plzz. Programming Assignment: A) Consider the outpatient pharamacy at University Hospital... 
B) The pharamacy want now to keep record of each patient contains all Learn to Set Up An Internal Private Repository For An Organization Learn to Set Up An Internal Private Repository For An Organization... have set up an internal Maven Repository for our organisation so... to download the library files if your development team is big. When setting up a local Java - Java Beginners Java Java is call by value or call by referance? Hi... variables affect the caller?s original variables. Java never uses call by reference. Java always uses call by value. import java.io.*; import java.awt. HELP - Java Beginners HELP Hello sir ,how i can make Java Programs Set up File ,Please give me steps to make core java - Java Beginners change the fields in the caller?s objects they point to. In Java, you cannot... System.out.println("Massage 2: i= " + i + ", d= " + d); Double(i, i); //Java... original variables. Java never uses call by reference. Java always uses call by value Maximize Sales By Setting up Your Shopping Cart Maximize Sales By Setting up Your Shopping Cart Setting up a shopping cart... the features which boost up your sales. One of the main features is the auto responders... checkout list. This feature helps the administrator keep track of the aborted Java Compilation - Java Beginners Java Compilation I want to write a program that takes a positive integer from the keyboard. I want my program to sum all the integers from 1 up..., the sum is = 15 Here is what i came up with: import java.util.Scanner JAVA LOOPS - Java Beginners JAVA LOOPS Hi I need a Java program that can toss a coin over and over until it comes up head 10 times. It should also record the number of tails. Hi Friend, Try the following code: class Toss{ public final java code - Java Beginners java code Dear sir i need one java code * * * * * * * * * * * * * * * this model only i dont want need this type i need up one only ** *** **** ***** Hi friend Algorithm_3 - Java Beginners the following links:... index up to n-1 index. The algorithm follows the same steps iteratively unlit linked list in java - Java Beginners information. list in java Hi, how to implement linked list in java using...(); for (int i = 0; i < N; i++) { // look up a value in the list // using Java Program - Java Beginners Java Program Hi I'm having trouble with this program, i keep getting errors. Can you help me. Thanks Write a program to create a file named "numbers.dat". Then create an algorithm that adds all even numbered integers from 1 Java - Java Beginners Java I've been trying to figure out this program but I keep getting errors and its not working out. Can someone help me? this is the program... either use the value 3.1416 for Pi or use the Java provided value named Math.PI. Java Program - Java Beginners Java Program Hi, I'm have complications with this program. I keep getting errors and my coding is off. Can you help me? Write a program called OfficeAreaCalculator.java that displays the following prompts using two label Hiiiii - Java Beginners wanted to keep Java simple and found that operator overloading made code more...Hiiiii Hi, tell me all operator are overloaded in java Thanks Hi Ragini, It is true that Java does not allow operator java - Java Beginners ListSelectionListener simply set up types that you implement.You can define its Java Programming Tutorials for beginners Java Programming tutorials for beginners are made in such a way... simple so that not only Java professionals but Java beginners can also learn it easily. 
For the beginners in Java we bring our best of the best collection Java Coding - Java Beginners Java Coding How do I code the following: Code a switch statement that tests the value of an int variable named weightInt. This variable contains the weight of a shipment rounded up to the nearest multiple of 5. If the value java program - Java Beginners java program I worte out this program that was supposed to simulate a die rolling and then the program printing out each roll, the program also printing the number of of times each face came up/the percentage of the times. I java - Java Beginners java Q: write a program in java which input a positive natural N and output all combination of consecutive natural which add up to give N . Example : N=15 then the output should be 1 2 3 4 5 4 5 6 java - Java Beginners () { //Create and set up the window. JFrame frame = new JFrame("Drop example...); //Create and set up the content pane. JComponent newContentPane = new...:// Thanks Array in Java - Java Beginners program allows any number of numbers to be entered, up to 50 numbers. The output java - Java Beginners () { //Create and set up the window. JFrame frame = new JFrame("Drop...); //Create and set up the content pane. JComponent newContentPane... for more information. Java Program - Java Beginners Java Program Hi I'm having trouble with this problem, I keep getting errors. Write a program GuessGame.java that plays the game ?guess the number? as follows: Your program chooses the number to be guessed by selecting java button - Java Beginners java button i want to make form with checkboxes, after this i need button thich will open new pop-up page, where will be shown only this checkboxes whitch have been marked. please help me pleas wright me on my e-mail address NEW IN JAVA - Java Beginners ) subtraction,(other arithmetic operations have not been include to keep JAVA Error - Java Beginners JAVA Error i have made my own calculator GUI..and i want the text...() { //Create and set up the window. JFrame frame = new JFrame...: Thanks. Hello Just use codes - Java Beginners Java codes Ex#1. Write a java programe that declares 25 characters. Fill up the array with the letters of the English Alphabets(A-Z) and print out... on the monitor. Ex#3. Write a java program that declares an array containing java - Java Beginners an ACCEPT Set up a temporary stack. Copy the stack over, checking for the surname... the list. Technicalities 1. Use Java 1.4 Java Program HELP - Java Beginners Java Program HELP Hi I'm having trouble with this program, i keep getting errors. Can you help me. Thanks Write a program to create a file named "numbers.dat". Then create an algorithm that adds all even numbered integers Methods in Java - Java Beginners questions, which each are lalabeledith 4 answers, i was hoping it would end up looking up something similar to this. Question 1: Question will be here Java for beginners Java for beginners Java for beginners Which is the best resource... Java video tutorial for beginners. Thanks Hi, Here are the best resources for Learning Java for beginners: Java Video tutorial Java tutorials error in program when trying to load image in java - Java Beginners to add an image to my GUI using java graphics. I have cleaned up all my compiler errors but my program still won't run. I keep getting this type of message and I..." and add the logo to the GUI using Java graphics. this is the main codes java programming - Java Beginners contains the output from the ATM. 
Look up on google 'java TextArea' to view javadoc...java programming heloo there, hopw all of u guys are fine my question is how to program a atm machine consept by using java ?? im having problem Java array - Java Beginners Java array Q 1- write a program that counts the frequency...-- programming java hello would be displayed as hello java programming...[]) { String str[] = "programming java hello".split(" "); System.out.println java programing - Java Beginners java programing Write a program that reads numbers fromthe keyboard into an array of type int[]. You may assume that there will be 50 or fewer entries in the array .Your program allows any numbers of numbers to be entered , up Abstract class - Java Beginners Abstract class Why can use abstract class in java.?when abstract class use in java.plz Explain with program. abstract class abs{ public... but a template 4 the subclasses .thats all clear....u have to keep 3 things in ur mind What is Abstraction - Java Beginners What is Abstraction What is abstraction in java? How is it used in java..please give an example also Hi Friend, Abstraction is nothing... is made up of different components,does not need to know how the different Tomcat5.5 - Java Beginners Tomcat5.5 how to setup and run tomcat 5.5 in windows xp? to set up tomcat ...jus u have to do 2 steps 1.set ur java_home 2. set ur catlina_home it means specifying java and tomcat bin directory path java compilation error - Java Beginners java compilation error I need to know how to correct a compiler error for my program to run. The error I keep getting is unclosed string literal. The code it's firing on looks like this public String toString() { return Java Program- Complications - Java Beginners Java Program- Complications Hi, I'm have complications with this program. I keep getting errors. Write a program called OfficeAreaCalculator.java that displays the following prompts using two label components: Enter how to start with java - Java Beginners how to start with java sir i am new to java and i need the guidence... is based on jsp so suggest me the way to follow and cope up with this technology. framework - struts database - postgreSQL and java and jsp is used java beginners doubt! java beginners doubt! How to write clone()in java strings programming - Java Beginners programming for java beginners How to start programming for java beginners Java Printer Listener - Java Beginners Java Printer Listener I want some programs to make listener for all... to pop up our application before printing using java Hi friend, I am... for more information. program for Hashmaps - Java Beginners to keep track of the insertion order. Here is the code of HashMap import.... Thanks. Amardeep Java Developer Training . There are separate training program for beginners so that they can learn from the start and work there way up. The Java Developer training program includes the Servlet...Java Developer training are provided by Roseindia for the Java developers Java for beginners - Java Beginners :// Thanks...Java for beginners Hi! I would like to ask you the easiest way to understand java as a beginner? Do i need to read books in advance Inheritance - Java Beginners Inheritance class StdOps { //method: fileRead(String s) //purpose: opens up file s and reads (output to the screen)- one int per...");} //method: filewrite(String s, int[] a) //purpose: opens up file s and writes Compilatation problem - Java Beginners but after installing 1.4 and setting up the classpath also i am getting... 
link: Java guide for beginners Java guide provided at RoseIndia for beginners is considered best to learn...-to-date with recent releases in Java, one can also turn up to these guides. Once... and understand it completely. Here is more tutorials for Java coding for beginners
http://www.roseindia.net/tutorialhelp/comment/91697
CC-MAIN-2015-06
en
refinedweb
Python has emerged as a top programming language in terms of capabilities and usage around the world. Today, we are here to make you familiar with one of the simplest data structures for coding, i.e. arrays. So if you wish to learn about arrays in Python, keep reading this tutorial till the end to understand how to find the length of an array in Python.

Explaining Python array

An array in Python refers to a collection that has multiple items saved together in contiguous memory chunks. Simply put, these locations hold many items of identical data type in a sequential arrangement. Let us understand this with an example: imagine a flight of stairs where each step denotes a value, and suppose that your friends are standing on different steps of this stairway. You can find the location of any one of your friends simply by knowing the number of the stair they are standing on.

Python has a specific module called "array," which you can use to manipulate such values. It lets you create array objects in which all elements must have the same data type. With a data structure like an array, you are able to access numerical data from a defined series, fetching the required values by specifying an index number. (Note: The index begins from 0, and the stored items are called elements.) Furthermore, you can change the array and perform several data manipulations, depending on your needs.

But before we explore that in detail, we should address a common point of confusion. Although both Python arrays and lists store values in a similar manner, there exists a fundamental distinction between the two. While a list stores anything from integers to strings, an array can only hold a single value type. Therefore, you come across an array of strings, an array of integers, and so on.

When and Why do We use Arrays?

We typically utilize the Python array module for purposes like interfacing with code written in C. Arrays store C-style data types compactly, using less memory. Moreover, combining arrays with Python is also time-efficient: it reduces the overall size of your code and enables you to avoid problematic syntax, a major concern with other languages. For instance, if you had to store 100 variables with different names, it makes sense to store them as integers (1-100). It is a far better option to save them in an array instead of spending time remembering their names.

Using array in Python

Let us take it one step at a time:
- Import the array module
- Create an array list (specify the data type and value list as arguments)
- Add elements to the array using insert() and append()
- Start accessing elements
- Update elements, as desired (slice, change, remove)
- Search elements
- Find the array's length

Now that you are aware of the different operations on an array in Python, let us look at the sample code.

- To import the module, you simply use the 'import' statement followed by an alias for it; let this be 'jam'.
import array as jam
a = jam.array('d', [1.2, 3.6, 4.7])   # 'd' is the typecode for double-precision floats
print(a)

This would display the following output:

array('d', [1.2, 3.6, 4.7])

- If you want to access a specific element of an array, you can use code like this:

import array as cam
b = cam.array('i', [1, 3, 5, 7])
print("1st element:", b[0])
print("2nd element:", b[1])
print("Last element:", b[-1])

The output would be shown as follows:

1st element: 1
2nd element: 3
Last element: 7

- The following sample code will help you understand how to slice a part of the Python array:

import array as mac
numbers_list = [22, 5, 42, 5, 52, 48, 62, 5]
numbers_array = mac.array('i', numbers_list)
print(numbers_array[3:6])   # 4th to 6th
print(numbers_array[:-5])   # beginning to 3rd
print(numbers_array[4:])    # 5th to end
print(numbers_array[:])     # beginning to end

This code will give you the slices you requested; see below:

array('i', [5, 52, 48])
array('i', [22, 5, 42])
array('i', [52, 48, 62, 5])
array('i', [22, 5, 42, 5, 52, 48, 62, 5])

- Since a Python array is mutable, you can alter the items, add more elements, and remove others. Check out these examples:

import array as pac
numbers = pac.array('i', [5, 10, 7, 1, 2, 3])

# to change the first element
numbers[0] = 6
print(numbers)   # Output: array('i', [6, 10, 7, 1, 2, 3])

# to change the 4th to 6th elements
numbers[3:6] = pac.array('i', [8, 9, 4])

Then print the array to see the result:

print(numbers)   # Output: array('i', [6, 10, 7, 8, 9, 4])

If you want to add a new item to the array, you can use the append() method. Alternatively, you can add many new items using the extend() method. We have demonstrated this for more clarity:

import array as dac
numbers = dac.array('i', [3, 4, 5])
numbers.append(6)
print(numbers)   # Output: array('i', [3, 4, 5, 6])

# extend() appends the items of an iterable to the end
numbers.extend([7, 8, 9])
print(numbers)   # Output: array('i', [3, 4, 5, 6, 7, 8, 9])

Similarly, you can remove one or more items using the del statement in Python. Let's use the same array for this demonstration:

del numbers[1]   # remove the second element
print(numbers)   # Output: array('i', [3, 5, 6, 7, 8, 9])

You can also use the remove() function to delete a specific value and pop() to remove the element at a given index:

numbers.remove(8)
print(numbers)          # Output: array('i', [3, 5, 6, 7, 9])
print(numbers.pop(4))   # removes and returns the element at index 4, i.e. 9

- If you intend to search for a particular element, you can use index(), a built-in method in Python that returns the index of the first occurrence of the argument value.

With this, we have given you a refresher on what arrays are in Python and how they are used. You may also be interested in finding the array length. Here, length refers to how many elements are present in the Python array. You can use the len() function to determine the length. It is as simple as calling len(array_name), and an integer value will be returned. Take, for example, this array:

import array as arr
a = arr.array('f', [2.1, 4.1, 6.1, 8.1])
len(a)   # Output: 4

As you can see, the value returned is equal to the number of elements in the Python array.

Conclusion
Now you know what arrays are in Python, how they are used, and how to find the length of an array in Python. This information will help you strengthen your Python programming skills. So, keep practising! If you are curious to learn more.
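One point the tutorial states but never demonstrates is that an array enforces a single element type while a list does not. The snippet below is my own small illustration of that behaviour and is not part of the original tutorial:

import array

a = array.array('i', [1, 2, 3])   # 'i' = signed integers only
a.append(4)                        # fine, 4 is an int
try:
    a.append(2.5)                  # a float does not fit an 'i' array
except TypeError as err:
    print("rejected:", err)

print(a, len(a))                   # array('i', [1, 2, 3, 4]) 4

mixed = [1, 2.5, "three"]          # a list happily mixes types
print(mixed, len(mixed))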
https://www.upgrad.com/blog/arrays-in-python/
CC-MAIN-2021-31
en
refinedweb
Camera Client/Camera Server Interaction

Tutorial - Camera Example

What you will learn
This tutorial illustrates the facilities in EasyImage's CameraClient API. Above and beyond what you learnt in the Minimal Camera Example, you will learn how to:
- discover several properties of the server, e.g., its frame rate
- hook into server events, e.g., when it's stopped and started
- set the camera's properties, i.e., the desired frame rate
- retrieve the frame's current size
- using a timer, monitor how the actual frame rate compares to the desired frame rate.

Background
There are several nuances about how the Camera Client receives frames that you need to understand.
- It is the Camera Server, through its interactive controls, that lets you set the actual size of a frame and the maximum frame rate.
- The Camera Server will try to generate frames at the maximum rate, but cannot guarantee it. Maximum frame rate is often limited by the actual camera hardware and by system load.
- The Camera Client can see if the size of the frame has changed by looking at the frame's Bitmap Width, Height or Size property.
- The Camera Client can request a frame rate from the server. If the requested frame rate is less than the server setting, the client may receive up to that amount (but often less!). If the requested frame rate is greater than the server setting, the client will likely receive multiple copies of a frame (i.e., it is over-sampling). Yes, this is not a great situation... but it's what we are stuck with for now. Still, you can check what the server frame rate is and make sure you don't request a faster frame rate than it can really deliver.
- Typically, most cameras/systems should be able to handle around 10 to 15 fps. If you set it higher, you are not likely to receive more 'real' frames.
- Note that some of these client/server interactions are buggy, e.g., if you start / stop the camera/test pattern, it may not always have the desired effect. These are known bugs.

Download
While we recommend you create this program from scratch, you can also download the source and executables.

Preconditions
1. Including EasyImages in your new Visual Studio 2005 C# project. This was described in the Minimal Camera Example. Make sure to include the using EasyImages; line in your project.

Step 2. Creating GUI controls
Add the following GUI controls to your form window, so it looks like the window at the top of this page.
- PictureBox (will display your camera's frames)
  - Name = pbCamera
  - Size = 640,480 (the Width and Height of the frame we will put in it)
  - BorderStyle = FixedSingle
  - BackColor = Black
- CheckBox (start and stop the camera)
  - Name = cbPlay
- Two Group Boxes
  - Text = Frame Size, Frame Rate (frames / second)
- 4 Labels (for annotation as seen in the form)
  - Text = width:, height:, desired:, actual:
- 4 Labels (for displaying various values)
  - Name = lblFrameWidth, lblFrameHeight, lblDesiredFrameRate, lblActualFrameRate
  - Text = ?, ?, 10, 0
- TrackBar (interactively change the frame rate in frames per second)
  - Name = tbFrameRate
  - Minimum = 1
  - Maximum = 30
- Timer (for calculating the actual frames per second)
  - Interval = 1000 (1 second)
  - Enabled = true

Step 3. Writing the program
The complete program is listed below. It's simpler than it looks, as most of it is just callbacks for the GUI controls. Enter it and try it. Then read the explanation that follows.
using System.Drawing; using System.Windows.Forms; using EasyImages; // Author: Saul Greenberg, University of Calgary, // Documentation: see and select EasyImages. // The API Documentation and a tutorial explaining this and other programs are available there // License: See License details included in the distribution package. Essentialy, permission to // use and/or alter this program for non-commerical and/or educational purposes is granted, // as long as attribution to the above author is maintained. namespace CameraClientServerInteraction { public partial class Form1 : Form { private EasyImages.CameraClient camera; // CameraClient retrieves frames from the camera server private delegate void SetPictureBoxImage(PictureBox pbox, Bitmap image); int frameCounter = 0; // A count of the number of frames we actually see per second public Form1() { InitializeComponent(); } // Create a CameraClient that connects to the Camera server's default camera // Create all its event handlers, i.e., when a frame arrives, when the servers starts, and when it stops. private void Form1_Load(object sender, EventArgs e) { camera = new EasyImages.CameraClient("DefaultCamera"); camera.ReceivedFrame += new CameraClient.ReceivedFrameEventHandler(camera_ReceivedFrame); camera.ServerStarted += new CameraClient.ReceivedServerStartedEventHandler(camera_ServerStarted); camera.ServerStopped += new CameraClient.ReceivedServerStoppedEventHandler(camera_ServerStopped); } // // Event handlers for the camera // // The camera server just started. void camera_ServerStarted(object sender, CameraServerStarted e) { toolStripServerStatus.Text = "Server - started " + camera.ServerMaxSpeed.ToString(); Reset(); } // The camera server just stopped void camera_ServerStopped(object sender, CameraServerStopped e) { toolStripServerStatus.Text = "Server - stopped " + camera.ServerMaxSpeed.ToString(); Reset(); } //When we receive a frame, display it in the picture box and increment the frame counter void camera_ReceivedFrame(object sender, CameraEventArgs e) { DisplayImageInPictureBox (pbCamera, e.Bitmap); frameCounter++; } // Display the image in the picture box in the correct thread private void DisplayImageInPictureBox(PictureBox pbox, Image image) { if (pbox.InvokeRequired) // We are in the wrong thread. Call ourselves in the correct thread { SetPictureBoxImage theDelegate = new SetPictureBoxImage(DisplayImageInPictureBox); BeginInvoke(theDelegate, new object[] { pbox, image }); } else // we are in the correct thread, so assign the image { pbox.Image = image; } } // // Event handlers for user interactions // //Start or stop the camera client, i.e., how it retrieves frames from the camera server //Reset the camera to update the interface... private void cbPlay_CheckedChanged(object sender, EventArgs e) { if (cbPlay.Checked) { if (camera.Start() == true) { Reset(); lblDesiredFrameRate.Text = tbFrameRate.Value.ToString(); toolstripClientStatus.Text = "Client - started"; } else { toolstripClientStatus.Text = "Client - could not start"; } } else { camera.Stop(); Reset(); toolstripClientStatus.Text = "Client - stopped"; } } //Interactively adjust the frame rate of the camera, in frames per second, // and display it as the Desired Frame Rate //Note that we cannot actually get more frames than those set by the camera server... 
private void tbFrameRate_Scroll(object sender, EventArgs e) { if (tbFrameRate.Value > 0 ) camera.FramesPerSecond = tbFrameRate.Value; lblDesiredFrameRate.Text = tbFrameRate.Value.ToString(); } //Display how many frames have actually been seen in the last second private void timer1_Tick(object sender, EventArgs e) { lblActualFrameRate.Text = frameCounter.ToString(); frameCounter = 0; } // // Reset the interface and the camera settings to reflect the current state // private void Reset() { ResetMaxFrameRate ((int) camera.ServerMaxSpeed); //Get an image from the camera and find out its size. //Display the size and reset the size of the picture box to match it. Image img; int w, h; img = camera.GetFrame (); if (null == img) { w = h = 0; } else { w = img.Width; h = img.Height; } //Display the frame's current size lblFrameWidth.Text = w.ToString(); lblFrameHeight.Text = h.ToString(); } //Display the server's maximum frame rate //Then reset the trackbar to this maximum. private void ResetMaxFrameRate(int max) { tbFrameRate.Maximum = max; lblMaxFPS.Text = max.ToString() + "fps (Max)"; //Make sure that that trackbar's current value is within the trackbar range, // and set the client's frame rate to that. if (tbFrameRate.Value <= 0 || tbFrameRate.Value > max) tbFrameRate.Value = max; lblDesiredFrameRate.Text = tbFrameRate.Value.ToString(); if (tbFrameRate.Value > 0) camera.FramesPerSecond = (float)tbFrameRate.Value; } } } Explanation Much of this program is similar to what was seen in the Minimal Camera, so we only describe what is different. - frameCounter will be incremented whenever a new frame is seen. - Form_Load is the Form Load event handler; it creates the camera client and sets its properties. In this case, we set the desired frame rate to the value in the trackbar (whose values range from 1-30 fps) and then display that frame rate in the label. We don't start the camera quite yet... - camera_ReceivedFrame is the camera event handler that displays the frame as a bitmap in the picturebox. At the same time, it displays the width and height of the frame in the labels. It also increments the frameCounter. - cbPlay_CheckedChanged is the cbPlay checkbox handler; it interactively starts / stops the Camera through a checkbox. - tbFrameRate_Scroll is the tbFrameRate event handler; it interactively adjusts the desired frame rate in frames per second via the tbFrameRate trackbar, and displays it as the desired frame rate. - timer1_Tick goes off every second. It displays the actual frame rate, i.e., the current value of the frameCounter (the number of frames seen in the last second). It then resets the frameCounter to 0. - Reset resets the camera settngs and the user interface to the current settngs.
https://grouplab.cpsc.ucalgary.ca/cookbook/index.php/Toolkits/CameraExample
CC-MAIN-2021-31
en
refinedweb