There are primarily two ways of achieving multi-tenancy.
- Hypervisor model – In this model, a separate copy of the same software stack runs for each customer on a hypervisor such as VMware. Each customer has their own virtual machine and is therefore isolated from the rest of the tenants. This approach is easy to implement but not very cost-efficient.
- Database model – The alternative is to run a single instance of the application but manage multi-tenancy at the database level. At the database level, there are again two ways to manage it.
- By adding a tenant id to all (or most of) the tables in the database, so that queries fetch only the data belonging to the tenant who has logged in.
- By having individual tables for each tenant, so that data is fetched only from the region that belongs to that tenant. Incidentally, Google App Engine took a similar approach to multi-tenancy with its Namespaces API, released as part of the 1.3.6 SDK.
For more information please refer to my presentation on multi-tenancy.
So, coming back to the question: how do we make an existing application multi-tenant without changing a lot of code?
The answer lies in the power of Aspects and Namespaces.
Let us assume that you have an @EnhanceWithTenant annotation which can be applied to your DAO. Your DAO method would then look something like this:
[sourcecode language="java"]
@EnhanceWithTenant
public ProjectAssignment findProjectAssignmentByKey(String key) {
ProjectAssignment projectAssignment = (ProjectAssignment) getEntityManager()
    .createQuery("SELECT p FROM ProjectAssignment p WHERE p.encodedKey = :key")
    .setParameter("key", key)
    .getSingleResult();
return projectAssignment;
}
[/sourcecode]
Now, as soon as this method is called, the aspect associated with the annotation reworks the query so that it is fired against the namespace that has been earmarked for the logged-in tenant.
Scenario One
This is what would happen:
- The aspect would fetch the tenant_id from the logged-in user's information.
- From the tenant_id, it would look up the namespace to which this tenant belongs.
- It would then enhance the query being executed so that it goes to the namespace relevant for this tenant.
Thus, just by adding an annotation and making use of aspects, you can make your application multi-tenant.
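To make the mechanics concrete, here is a minimal, framework-free sketch of what such an aspect would do before the DAO query runs. Everything here is an illustrative assumption, not a real API: the ThreadLocal tenant holder, the tenant-to-namespace map, and the method names. In a real application this logic would live in AspectJ or Spring AOP around-advice bound to @EnhanceWithTenant.

```java
import java.util.Map;

public class TenantNamespaceSketch {
    // Stand-in for the "logged in information" (assumed to be set at login time).
    static final ThreadLocal<String> currentTenant = new ThreadLocal<>();

    // Stand-in for wherever tenant-to-namespace mappings are stored.
    static final Map<String, String> namespaceByTenant =
            Map.of("acme", "ns_acme", "globex", "ns_globex");

    // What the advice would do: tenant_id -> namespace, before the query fires.
    static String resolveNamespace() {
        String tenantId = currentTenant.get();       // step 1: fetch the tenant_id
        return namespaceByTenant.get(tenantId);      // step 2: fetch its namespace
    }

    public static void main(String[] args) {
        currentTenant.set("acme");
        // step 3: the enhanced query would now be routed to this namespace
        System.out.println("Routing query to namespace: " + resolveNamespace());
    }
}
```

The ThreadLocal keeps the lookup safe when many tenants' requests are served concurrently by the same application instance.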
Scenario Two
In this scenario, assume that creating a separate namespace for each tenant is not possible. The query, however, still needs to fetch data for the logged-in tenant only.
In this case, you would have to add the tenant_id as a column to all the relevant tables of the database. When the method is executed and the annotation is encountered, the aspect applies a where clause to the statement, matching on the tenant_id of the logged-in user.
Thus, here, instead of pointing to a namespace, the query gets an additional where clause that matches the tenant_id.
It may well be that the additional where clause has to be constructed intelligently, based on some logic; even so, with this approach you can achieve multi-tenancy with very few changes to existing legacy code.
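A hedged sketch of the query rewrite for this second scenario. The string-append approach and the tenantId property name are assumptions for illustration only; a production aspect would more likely use JPA Criteria predicates or Hibernate filters than string surgery.

```java
public class TenantFilterSketch {
    // Append a tenant predicate to a JPQL-style query string.
    static String enhanceWithTenant(String query, String tenantId) {
        // If the query already has a WHERE clause, AND the predicate on;
        // otherwise start a WHERE clause.
        String joiner = query.toLowerCase().contains(" where ") ? " AND " : " WHERE ";
        return query + joiner + "tenantId = '" + tenantId + "'";
    }

    public static void main(String[] args) {
        String q = "SELECT p FROM ProjectAssignment p WHERE p.encodedKey = :key";
        System.out.println(enhanceWithTenant(q, "acme"));
        // SELECT p FROM ProjectAssignment p WHERE p.encodedKey = :key AND tenantId = 'acme'
    }
}
```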
How to Keep Method Size Under Control
Do you ever open a source code file and see a method that starts at the top of your screen and kind of oozes its way to the bottom with no end in sight? When you find yourself in that situation, imagine that you’re reading a ticker tape and try to guess at where the method actually ends. Is it a foot below the monitor? Three feet? Does it plummet through the floor and into the basement, perhaps down past the water table and into the earth’s mantle?
Visualized like this, I think everyone might agree that there’s some point at which the drop is too far, though there’s likely some disagreement on where exactly this is. Personally, I used to subscribe to the “fits on a screen” heuristic and would only start looking to pull out methods if it got beyond that. But in more recent years, I think even smaller. How small? I dunno–five or six lines, max. Small enough that you’ll only ever see one try-catch or control flow statement in there. Yeah, seriously, that small. If you’re thinking it sounds kind of crazy, I get that, but give it a try for a while. I can almost guarantee that you’ll lose your patience for looking at methods that cause you to think, “wait, where was loopCounter declared again–before the second or third while loop?”
If you accept the premise that this is a good way to do things or that it might at least be worth a try, the first thing you’ll probably wonder is how to go about doing this from a practical standpoint. I’ve definitely encountered people and even whole groups who considered method sizes like this to be impractical. The first thing you have to do is let go of the notion that classes are in some kind of limited supply and you have to be careful not to use too many. Same with modules, if your project gets big enough. The reason I say this is that having small methods means that you’re going to have a lot of them. This in turn means that they’re going to need to be spread to multiple classes, and those classes will occupy more namespaces and modules. But that’s okay. If you encounter a large application that’s well designed and factored, it’s that way because the application is actually a series of small, focused components working together. Monolithic doesn’t scale well.
Getting Down to Business
If you’ve prepared yourself for the reality of needing more classes organized into more namespaces and modules, you’ve really overcome the biggest obstacle to being a small-method coder. Now it’s just a question of mechanics and practice. And this is actually important–it’s not sufficient to just say, “I’m going to write a lot of methods by stopping at the fifth line, no matter what.” I guarantee you that this is going to create a lot of weird cross-coupling, unnecessary state, and ugly things like out parameters. Nobody wants that. So it’s time to look to the art of creating abstractions.
As a brief digression, I’ve recently picked up a copy of Uncle Bob Martin’s Clean Code: A Handbook of Agile Software Craftsmanship and been tearing my way through it pretty quickly. I’d already seen most of the Clean Coder video series, which covers some similar ground, but the book is both a good review and a source of new and different information. To be blunt, if you’re ever going to invest thirty or forty bucks in getting better at your craft, this is the thing to buy. It’s opinionated, sometimes controversial, incredibly specific, and absolute mandatory reading. It will change your outlook on writing code and make you better at what you do, even if you don’t agree with every single point in it (though I don’t find much with which to take issue, personally).
The reason I mention this book and series is that there is an entire section in the book about functions/methods, and two of its fundamental points are that (1) functions should do one thing and one thing only, and (2) that functions should have one level of abstraction. To keep those methods under control, this is a great place to start. I’d like to dive a little deeper, however, because “do one thing” and “one level of abstraction per function” are general instructions. It may seem a bit like hand-waving without examples and more concrete heuristics.
Extract Finer-Grained Details
What Uncle Bob is saying about mixed abstractions can be demonstrated in this code snippet:
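The snippet referenced here was an image in the original post and did not survive extraction. A reconstruction in Java of the shape the text describes (the method names are guesses):

```java
public class DoorMixedLevels {
    static int steps = 0;

    public static void openDoor() {
        // Person-level actions:
        walkToDoor();
        graspDoorKnob();
        // ...and suddenly, muscle/joint-level mechanics:
        tightenFingersAroundKnob();
        rotateWristCounterClockwise();
        contractBicepToPullDoor();
    }

    static void walkToDoor()                  { steps++; }
    static void graspDoorKnob()               { steps++; }
    static void tightenFingersAroundKnob()    { steps++; }
    static void rotateWristCounterClockwise() { steps++; }
    static void contractBicepToPullDoor()     { steps++; }

    public static void main(String[] args) {
        openDoor();
        System.out.println("steps performed: " + steps); // steps performed: 5
    }
}
```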
Do you see what the issue is? We have a method here that describes (via sub-methods that are not pictured) how to open a door. The first two calls talk in terms of actions between you and the door, but the next three calls suddenly dive into the specifics of how to pull the door open in terms of actions taken by your muscles, joints, tendons, etc. These are two different layers of abstractions: one about a person interacting with his or her surroundings and the other detailing the mechanics of body movement. To make it consistent, we could get more detailed in the first two actions in terms of extending arms and tightening fingers. But we’re trying to keep methods small and focused, so what we really want is to do this:
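Again reconstructing the lost snippet (a sketch, not the author's exact code): the three body-mechanics calls get pulled down into their own method, so that openDoor reads at a single level of abstraction.

```java
public class DoorSingleLevel {
    static int mechanics = 0;

    public static void openDoor() {
        walkToDoor();
        graspDoorKnob();
        pullDoorOpen();   // now at the same level of abstraction as its siblings
    }

    // The body mechanics live one level down:
    static void pullDoorOpen() {
        tightenFingersAroundKnob();
        rotateWristCounterClockwise();
        contractBicepToPullDoor();
    }

    static void walkToDoor()                  {}
    static void graspDoorKnob()               {}
    static void tightenFingersAroundKnob()    { mechanics++; }
    static void rotateWristCounterClockwise() { mechanics++; }
    static void contractBicepToPullDoor()     { mechanics++; }

    public static void main(String[] args) {
        openDoor();
        System.out.println("door opened");
    }
}
```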
Create Coarser Grained Categories
What about a different problem? Let’s say that you have a method that’s long, but it isn’t because you are mixing abstraction levels:
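The original snippet was an image; as a stand-in, here is a hedged reconstruction with thirteen same-level steps (the step names are invented; the prose's C#-style PascalCase names appear in Java casing):

```java
public class CookQuesadillasFlat {
    static int steps = 0;

    // Thirteen children, all at the same level of abstraction: too many.
    public static void cookQuesadillas() {
        getTortillas();
        getCheese();
        shredCheese();
        getPan();
        heatPan();
        oilPan();
        placeTortillaInPan();
        sprinkleCheeseOnTortilla();
        foldTortilla();
        cookUntilCheeseMelts();
        removeQuesadillaFromPan();
        cutIntoWedges();
        plateQuesadilla();
    }

    static void getTortillas()             { steps++; }
    static void getCheese()                { steps++; }
    static void shredCheese()              { steps++; }
    static void getPan()                   { steps++; }
    static void heatPan()                  { steps++; }
    static void oilPan()                   { steps++; }
    static void placeTortillaInPan()       { steps++; }
    static void sprinkleCheeseOnTortilla() { steps++; }
    static void foldTortilla()             { steps++; }
    static void cookUntilCheeseMelts()     { steps++; }
    static void removeQuesadillaFromPan()  { steps++; }
    static void cutIntoWedges()            { steps++; }
    static void plateQuesadilla()          { steps++; }

    public static void main(String[] args) {
        cookQuesadillas();
        System.out.println(steps + " steps in one method"); // 13 steps in one method
    }
}
```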
These items are all at the same level of abstraction, but there are an awful lot of them. In the previous example, we were able to tighten up the method by making the abstraction levels consistent, but here we’re going to actually need to add a layer of abstraction. This winds up looking a little better:
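A reconstruction of the refactored shape (again a sketch; the original code was an image). The root now has four children, each with between two and five children of its own:

```java
public class CookQuesadillasGrouped {
    static int steps = 0;

    // Four children, each a category at one level of abstraction.
    public static void cookQuesadillas() {
        prepareIngredients();
        prepareEquipment();
        performCooking();
        serve();
    }

    static void prepareIngredients() { getTortillas(); getCheese(); shredCheese(); }
    static void prepareEquipment()   { getPan(); heatPan(); oilPan(); }
    static void performCooking() {
        placeTortillaInPan();
        sprinkleCheeseOnTortilla();
        foldTortilla();
        cookUntilCheeseMelts();
        removeQuesadillaFromPan();
    }
    static void serve()              { cutIntoWedges(); plateQuesadilla(); }

    static void getTortillas()             { steps++; }
    static void getCheese()                { steps++; }
    static void shredCheese()              { steps++; }
    static void getPan()                   { steps++; }
    static void heatPan()                  { steps++; }
    static void oilPan()                   { steps++; }
    static void placeTortillaInPan()       { steps++; }
    static void sprinkleCheeseOnTortilla() { steps++; }
    static void foldTortilla()             { steps++; }
    static void cookUntilCheeseMelts()     { steps++; }
    static void removeQuesadillaFromPan()  { steps++; }
    static void cutIntoWedges()            { steps++; }
    static void plateQuesadilla()          { steps++; }

    public static void main(String[] args) {
        cookQuesadillas();
        System.out.println("same " + steps + " steps, now in categories");
    }
}
```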
In essence, we’ve created categories and put the actions from the long method into them. What we’ve really done here is create (or add to) a tree-like structure of methods. The public method is the root, and it had thirteen children. We gave it instead four children, and each of those children has between two and five children of its own. To tighten up methods, it’s perfectly viable to add “nodes” to the “tree” of your call stack. While “do one thing” is still a little elusive, this seems to be carrying us in that direction. There’s no individual method that you look at and think, “boy, that’s a lot of stuff going on.” Certainly it’s a matter of some art and taste, but this is probably a good way to think of it–organize stuff into hierarchical method categories until you look at each method and think, “I could probably memorize what that does if I needed to.”
Recognize that Control Flow Uses Up an Abstraction
So far we’ve been conceptually figuring out how to organize families of methods into well-balanced tree structures, and that’s taken us through some pretty mundane code. This code has involved none of the usual stuff that sends apps careening off the rails into bug land, such as conditionals, loops, assignment, etc. Let’s correct that. Looking at the code above, think of how you’d modify this to allow for the preparation of an arbitrary number of quesadillas. Would it be this?
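Reconstructing the snippet the question refers to (hypothetical code): the loop is dropped straight into the top-level method.

```java
public class CookQuesadillasLoop {
    static int cooked = 0;

    public static void cookQuesadillas(int howMany) {
        prepareIngredients();
        prepareEquipment();
        for (int index = 0; index < howMany; index++) {
            cookQuesadilla();   // control flow now sits among the domain steps
        }
    }

    static void prepareIngredients() {}
    static void prepareEquipment()   {}
    static void cookQuesadilla()     { cooked++; }

    public static void main(String[] args) {
        cookQuesadillas(3);
        System.out.println("cooked " + cooked); // cooked 3
    }
}
```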
Well, that makes sense, right? Just like the last version, this is something you could read conversationally while in the kitchen just as easily as you do in the code. Prep your ingredients, then prep your equipment, then for some integer index equal to zero and less than the number of quesadillas you want to cook, increment the integer by one. Each time you do that, cook the quesadilla. Oh, wait. I think we just went careening into the nerdiest kitchen narrative ever. If Gordon Ramsay were in charge, he’d have strangled you with your apron for that. Hmm… how ’bout this?
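The improved version being discussed (reconstructed; names follow the prose): the loop moves out of the top-level method into performActualCooking, but that method still mixes a for-loop with cooking steps.

```java
public class CookQuesadillasV2 {
    static int cooked = 0;

    public static void cookQuesadillas(int howMany) {
        prepareIngredients();
        prepareEquipment();
        performActualCooking(howMany);
    }

    // Still two stories at once: for-loops AND cooking.
    static void performActualCooking(int howMany) {
        for (int index = 0; index < howMany; index++) {
            addTortillaAndCheese();
            foldAndCook();
            plateQuesadilla();
            cooked++;
        }
    }

    static void prepareIngredients()   {}
    static void prepareEquipment()     {}
    static void addTortillaAndCheese() {}
    static void foldAndCook()          {}
    static void plateQuesadilla()      {}

    public static void main(String[] args) {
        cookQuesadillas(2);
        System.out.println("cooked " + cooked); // cooked 2
    }
}
```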
Well, I’d say that the CookQuesadillas method looks a lot better, but do we like “PerformActualCooking?” The whole situation is an improvement, but I’m not a huge fan, personally. I’m still mixing control flow with a series of domain concepts. PerformActualCooking is still both a story about for-loops and about cooking. Let’s try something else:
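The final shape (reconstructed): two methods contain nothing but cooking-domain abstractions, and one small bridging method isolates the language-level detail of looping.

```java
public class CookQuesadillasV3 {
    static int cooked = 0;

    // Pure domain story #1.
    public static void cookQuesadillas(int howMany) {
        prepareIngredients();
        prepareEquipment();
        cookAll(howMany);
    }

    // Bridging node: knows only about control flow.
    static void cookAll(int howMany) {
        for (int index = 0; index < howMany; index++) {
            cookOneQuesadilla();
        }
    }

    // Pure domain story #2.
    static void cookOneQuesadilla() {
        addTortillaAndCheese();
        foldAndCook();
        plateQuesadilla();
        cooked++;
    }

    static void prepareIngredients()   {}
    static void prepareEquipment()     {}
    static void addTortillaAndCheese() {}
    static void foldAndCook()          {}
    static void plateQuesadilla()      {}

    public static void main(String[] args) {
        cookQuesadillas(5);
        System.out.println("cooked " + cooked); // cooked 5
    }
}
```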
We’ve added a node to the tree that some might say is one too many, but I disagree. What I like is the fact that we have two methods that contain nothing but abstractions about the domain knowledge of cooking and we have a bridging method that brings in the detailed realities of the programming language. We’re isolating things like looping, counting, conditionals, etc. from the actual problem solving and story telling that we want to do here. So when you have a method that does a few things and you think about adding some kind of control flow to it, remember that you’re introducing a detail to the method that is at a lower level of abstraction and should probably have its own node in the tree.
Adrift in a Sea of Tiny Methods
If you’re looking at this cooking example, it probably strikes you that there are no fewer than eighteen methods in this class, not counting any additional sub-methods or elided properties (which are really just methods in C# anyway). That’s a lot for a class, and you may think that I’m encouraging you to write classes with dozens of methods. That isn’t the case. So far what we’ve done is started to create trees of many small methods with a public method and then a ton of private methods, which is a code smell called “Iceberg Class.” What’s the cure for iceberg classes? Extracting classes from them. Maybe you turn the first two methods that prepare ingredients and equipment into a “Preparer” class with two public methods, “PrepareIngredients” and “PrepareEquipment.” Or maybe you extract a quesadilla cooking class.
It’s really going to vary based on your situation, but the point is that you take this opportunity to pick nodes in your growing tree of methods and sub-methods and convert them into roots by turning them into classes. And if doing this leads you to having what seems to be too many classes in your namespace? Create more namespaces. Too many of those in a module? Create more modules. Too many modules/projects in a solution? More solutions.
Here’s the thing: the complexity exists no matter how many or few methods/classes/namespaces/modules/solutions you have. Slamming them all into monolithic constructs together doesn’t eliminate or even hide that complexity, though many seem to take the ostrich approach and pretend that it does. Your code isn’t somehow ‘simpler’ because you have one solution with one project that has ten classes, each with 300 methods of 7,000 lines. Sure, things look simple when you fire up the IDE, but they sure won’t be simple when you try to debug. In fact, they’ll be much more complicated because your functionality will be hopelessly interwoven with weird temporal couplings, ad-hoc references, and hidden dependencies.
If you create large trees of functionality, you have the luxury of making the structure of the tree the representative of the application’s complexity, with each node an island of simplicity. It is in these node-methods that the business logic takes place and that getting things right is most important. And by managing your abstractions, you keep these nodes easy to reason about. If you structure the tree correctly and follow good OOP design and practice, you’ll find that even the structure of the tree is not especially complicated since each node provides a good representative abstraction for its sub-tree.
Having small, readable, self-documenting methods is no pipe dream. Really, with a bit of practice, it’s not even very hard. It just requires you to see code a little bit differently. See it as a series of hierarchical stories and abstractions rather than as a bunch of loops, counters, pointers, and control flow statements, and the people that maintain what you write, including yourself, will thank you for it.
Great post, Erik. As you’ve noted, organizing code this way aids understanding, testing, maintaining, and refactoring. I’ve even used it as a low-level design technique – outlining the flow via method stubs, then filling in the blanks.
I’ve done the same from time to time. I usually find that this sort of skeleton-ing is good for helping me decide what goes in which class.
Good write-up. In my experience, the problem with methods that are too long is pretty common in practice. One of the benefits of “many small methods” is that for each method you create you get to pick a name that describes what it does. If it is hard to pick a good name, maybe the method does too much. The name also works as a form of documentation. When reading the code, once I’ve read through a method like PrepareEquipment(), I trust it and know what it does. Then the method name becomes a handy short-hand so I don’t have…
The naming aspect is a great point. Hand in hand with gravitating toward smaller methods, I’ve also taken to the practice of anytime I feel like something needs a comment or explaining, I just pull out a method. Especially with conditionals. As soon as I’m doing something like if(isReady && x == 5 && y == 9) I now immediately think “why don’t I pull that into a method so that a maintenance programmer has a chance of understanding what it is that means.”
I finally got around to blogging about my take on the use of short methods, “7 Ways More Methods Can Improve Your Program”
I like that post — especially the bullet about grouping like functionality together. I think that one of the real problems with sprawling methods tends to be that functionality gets spread around haphazardly with the components of a given action being strewn all over the method.
While I agree that you should break down code into methods and that you get closer to the goal of maintainability and readability, the approach is still not the best. All the above is still highly procedural programming; you might not have one huge method, but refactoring the above is still complicated… Examples: what if all of a sudden the tortilla shouldn’t be warmed up until firm, and it’s a flour tortilla that needs to be warm, but if it’s a corn tortilla it should be firm? In short words, it should be less procedural programming and more OO using…
Heh, yeah, that code is definitely “command and control” and procedural as it gets. The reason for this is that I wanted to create a very focused and contrived example to focus on the subject of abstractions, and the easiest thing to reason about is a method with no parameters and no return values (especially when you can’t see what it’s actually doing). I think in this narrow context focusing on inverting control and polymorphism would confuse the issue. If I were writing this out into a book or a series of posts, what I’d probably do is go through…
I’m actually going to argue that this style of coding is appropriate for the complexity of the example (I’d hate to see it go much bigger though). I see a lot of people that are highly motivated to make everything object oriented, and I consider that a fault. This code is great the way it is at the time it was written. It works on one type of tortilla very well, and it’s easy to read. Implementing a strategy pattern at this stage in the game would only add unnecessary places where the code could break, and would be a…
I found your blog through reddit and I love the way you present ideas. Please keep writing such awesome posts. You’re a godsend for a still-learning programmer like me.
Thanks for the kind words! I’m glad you enjoy the posts, and I’m happy if they help. I’m pretty much always happy to go on about code, so they’ll definitely keep coming 🙂
If your function/method body is too long, I find it a good idea to compress to base64. The pre-processor when must uncompress it before compilation (which takes a little bit of time). You get the best of all worlds – long methods in a compact form.
Oh my… :0
It compiles fine, but when I go to run the program the only output is "Cut Shampoo" and the program ends. Any help will be greatly appreciated; I need to have this finished soon.
Description
Create a class for services offered by a hair-styling salon. Data fields include a String to hold the service description (for example, “Cut”, “Shampoo”, or “Manicure”), a double to hold the price, and an integer to hold the average number of minutes it takes to perform the service. The class name is Service. Include a constructor that requires arguments for all three data fields and three get methods that each return one of the data fields’ values.
Write an application named SalonReport that contains an array to hold six Service objects and fill it with the data from the above table. Include methods to sort the array in ascending order by price of service, time it takes to perform the service, and in alphabetical order by service description.
Prompt the user for the preferred sorting method, and offer three choices; sort by description, price, or time. Depending on the user’s input, display the results.
Save the files as Service.java and SalonReport.java.
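The Service class source was garbled in the original post; here is a reconstruction matching the assignment's description (three fields, a three-argument constructor, three getters; the field names are guesses consistent with the SalonReport code below):

```java
public class Service {
    private String serviceType;  // e.g. "Cut", "Shampoo", "Manicure"
    private double price;
    private int minutes;         // average time to perform the service

    public Service(String serviceType, double price, int minutes) {
        this.serviceType = serviceType;
        this.price = price;
        this.minutes = minutes;
    }

    public String getServiceType() { return serviceType; }
    public double getPrice()       { return price; }
    public int getMinutes()        { return minutes; }
}
```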
public class SalonReport {
    public static void main(String[] args) {
        Service[] myService = new Service[6];
        myService[0] = new Service("Cut", 8.00, 15);
        myService[1] = new Service("Shampoo", 4.00, 10);
        myService[2] = new Service("Manicure", 18.00, 30);
        myService[3] = new Service("Style", 48.00, 55);
        myService[4] = new Service("Permanent", 18.00, 35);
        myService[5] = new Service("Trim", 6.00, 5);
        SortDescription(myService, myService.length);
        System.out.println(myService[0].getServiceType() + " " + myService[1].getServiceType());
    }
}
I am making a calculator for 8th grade geometry class. We are learning about trigonometric ratios. My problem is, I got the whole program written, and then I realized that my sines and cosines should be getting decimals. I realize the problem is because I'm using the wrong variable type. I checked it, and I'm using floats. I think the problem is my function is returning ints. Criticism welcome as long as it has to do with my problem. I'm new at this programming thing, just started 25 minutes ago in fact, so it's gonna be sloppy.
Edit-----> Lol I just figured it out. I was using an int type function lol! I'm stupid!I'm stupid!I'm stupid!I'm stupid!
Source Code ------->
Code:
// Trigonometric Ratios Helper.cpp : Defines the entry point for the console application.
// By { A nanny Mouse! }
#include "stdafx.h"
#include <iostream> //Input Output
using namespace std; //No idea but it looks cool =/
float sine ( float b, float c );
int main()
{
float Leglength1;
float Leglength2;
float hypotenuse;
float sinea;
float cosa;
float tana;
float tanb;
cout<<" {Blocked for forum use}'s Amazing Trigonometric Ratio Helper!"<<"\n";
cout<<" B" <<"\n";
cout<<" /|" <<"\n";
cout<<" / |" <<"\n";
cout<<" Hyp / |Leg2" <<"\n";
cout<<" / |" <<"\n";
cout<<" A----C" <<"\n";
cout<<" Leg1" <<"\n";
cout<<"Please enter the length of a leg: ";
cin>> Leglength1;
cin.ignore();
cout<<"Please enter the length of the other leg: ";
cin>> Leglength2;
cin.ignore();
cout<<"Now the tricky part, enter the hypotenuse: ";
cin>> hypotenuse;
cin.ignore();
sinea = sine(Leglength2, hypotenuse);
cout<<" B" <<"\n";
cout<<" /|" <<"\n";
cout<<" / |" <<"\n";
cout<<" " << hypotenuse << " / |"<< Leglength2 <<"\n";
cout<<" / |" <<"\n";
cout<<" A----C" <<"\n";
cout<<" "<< Leglength1 <<" " << "\n";
cout<<"Sin A: "<< sinea <<"\n";
cin.get();
}
float sine ( float b, float c )
{
	return b / c; // plain division; /= needlessly modified b
}
SKEY(3) BSD Programmer's Manual SKEY(3)
NAME
     atob8, backspace, btoa8, btoe, etob, f, htoi, keycrunch, put8, readpass,
     readskey, rip, sevenbit, skey_authenticate, skey_get_algorithm,
     skey_haskey, skey_keyinfo, skey_passcheck, skey_set_algorithm,
     skey_unlock, skeychallenge, skeychallenge2, skeygetnext, skeylookup,
     skeyverify, skipspace - S/Key library functions

SYNOPSIS
     #include <skey.h>

     int atob8(char *out, char *in);
     void backspace(char *buf);
     int btoa8(char *out, char *in);
     char * btoe(char *engout, char *c);
     int etob(char *out, char *e);
     void f(char *x);
     int htoi(int h);
     int keycrunch(char *result, char *seed, char *passwd);
     char * put8(char *out, char *s);
     char * readpass(char *buf, int n);
     char * readskey(char *buf, int n);
     void rip(char *buf);
     void sevenbit(char *s);
     int skey_authenticate(char *user);
     const char * skey_get_algorithm(void);
     int skey_haskey(char *user);
     char * skey_keyinfo(char *user);
     int skey_passcheck(char *user, char *passwd);
     char * skey_set_algorithm(char *new);
     int skey_unlock(struct skey *rec);
     int skeychallenge(struct skey *rec, char *user, char *buf);
     int skeychallenge2(int fd, struct skey *rec, char *user, char *buf);
     int skeygetnext(struct skey *rec);
     int skeylookup(struct skey *rec, char *user);
     int skeyverify(struct skey *rec, char *response);
     char * skipspace(char *);
DESCRIPTION
     These functions implement the S/Key one time password authentication mechanism.

     The atob8() function converts the 16-byte hex string in to an 8-byte binary array stored in out. The atob8() function returns 0 on success and -1 if an invalid hex character is encountered.

     The backspace() function removes backspaced over characters from buf. Note that backspace() assumes the actual backspace character is 0x8 (^H).

     The btoa8() function converts the 8-byte binary array in to a 16-byte string of hex digits stored in out; the caller must supply enough space (17 bytes including the final NUL). The btoa8() function returns 0 on success and -1 if an error occurred.

     The btoe() function encodes the 8 bytes in c into a string of 6 English words, stored in engout. The caller must supply enough space (30 bytes including the final NUL) to store the words. The btoe() function returns engout.

     The etob() function converts the 6 English words in e into an 8-byte binary representation. The etob() function returns 1 if the words are all in the database and parity is correct, 0 if a word is not in the database, -1 if the number of words is incorrect, or -2 if there is a parity error.

     The f() function is a one-way hash that overwrites the 8-byte input buffer x with the hashed result.

     The htoi() function converts a single hex digit h to an integer. The htoi() function returns the converted integer on success or -1 if h is not a valid hex digit.

     The keycrunch() function concatenates the seed and passwd, runs them through a hash function and collapses the result to 64 bits. The keycrunch() function returns 0 on success or -1 if there is a memory allocation failure.

     The put8() function converts the 8 bytes stored in s into a series of 4 16-bit hex digits stored in out. There must be at least 20 bytes (including the NUL) in the output buffer, out. The put8() function returns out.
     The readpass() function reads up to n characters from standard input with echo turned off, converting the resulting string to 7 bits, storing the result in buf. The readpass() function returns buf.

     The readskey() function reads up to n characters from standard input with echo turned on, converting the resulting string to 7 bits, storing the result in buf. The readskey() function returns buf.

     The rip() function strips trailing linefeeds and carriage returns from buf.

     The sevenbit() function strips the high bit from each character in s, converting the characters to seven bit ASCII.

     The skey_authenticate() function presents the user with an S/Key challenge and authenticates the response. The skey_authenticate() function returns 0 if authentication is successful or -1 if not.

     The skey_get_algorithm() function returns a string corresponding to the hash algorithm for the current user. The default algorithm is "md5".

     The skey_haskey() function returns 0 if the user exists in the S/Key database, 1 if the user does not exist, or -1 if there was an error reading the database.

     The skey_keyinfo() function returns a string containing the current sequence number and seed for user. The returned string points to internal static storage that will be overwritten by subsequent calls to skey_keyinfo().

     The skey_passcheck() function checks a user and passwd pair against the S/Key database. It returns 0 on successful authentication or -1 on failure.

     The skey_set_algorithm() function sets the user's hash algorithm based on the string new. The skey_set_algorithm() function returns the specified algorithm if it is supported, or the null pointer if the hash algorithm is not supported.

     The skey_unlock() function unlocks the record in the S/Key database specified by rec. The skey_unlock() function returns 0 on success or -1 on failure. Either way, the S/Key database is not closed nor is the database file pointer affected.
     The skeychallenge() function stores the (potentially fake) S/Key challenge for user in buf, which is at least SKEY_MAX_CHALLENGE bytes long. It also fills in the skey struct rec and locks the user's record in the S/Key database. The skeychallenge() function returns 0 on success or -1 on failure. On success the S/Key database remains open and the read/write file pointer is set to the beginning of the record.

     The skeychallenge2() function is identical to skeychallenge() except that instead of opening the user's entry in the S/Key database, the open file referenced by fd is used instead. When fd is -1, the behavior is equivalent to skeychallenge().

     The skeygetnext() function stores the next record in the S/Key database in rec and locks that record in the S/Key database. The skeygetnext() function returns 0 on success, 1 if there are no more entries, or -1 if there was an error accessing the S/Key database. The S/Key database remains open after a call to skeygetnext(). If no error was encountered accessing the S/Key database, the read/write file pointer is set to the beginning of the record or at EOF if there are no more records. Because it exposes other users' S/Key records, only the superuser may use skeygetnext().

     The skeylookup() function looks up the specified user in the S/Key database then fills in the skey struct rec and locks the user's record in the database. The skeylookup() function returns 0 on success, 1 if user was not found, or -1 if there was an error accessing the S/Key database. If no error was encountered accessing the S/Key database, the read/write file pointer is set to the beginning of the record.

     The skeyverify() function verifies the user's response based on the S/Key record rec. It returns 0 on success (updating the database), 1 on failure, or -1 if there was an error accessing the database. The database is always closed by a call to skeyverify().
SEE ALSO
     skey(1), skeyinit(1)
STANDARDS
     There is no standard API for S/Key. The de facto standard is the free
     S/Key distribution released by Bellcore. The following functions are
     extensions and do not appear in the original Bellcore S/Key distribution:
     readskey(), skey_authenticate(), skey_get_algorithm(), skey_haskey(),
     skey_keyinfo(), skey_passcheck(), skey_set_algorithm(), skey_unlock().

     S/Key is a Trademark of Bellcore.

MirOS BSD #10-current                                              June 21, 2001
Java Generics in layman's language
Generics is one of the most challenging concepts to get your head around when you work with it for the first time, but it's also one of the most used concepts.
So let's understand what exactly generics is. As per one of the definitions, “Java Generics is a language feature that allows for definition and use of generic types and methods.”
If you did not get it, don't worry; we will discuss it with the help of examples.
Why Generics, what were the issues?
So let's first discuss why we need generics.
1. Type safety
So before generics, this is how we used to create a list:
List integers = new ArrayList();
integers.add(1);
integers.add(2);
Here we are adding integers to the list, but we can add any other value without getting a compile-time error:
integers.add("three");
So when we extract this data, we are not 100% sure that we will get back an integer:
for (int i = 0; i < integers.size(); i++) {
    Object value = integers.get(i);
    if (value instanceof Integer) {
        System.out.println(2 * (Integer) value);
    } else {
        throw new IllegalArgumentException("Value is not an integer: " + value);
    }
}
2. Heterogeneous values
As we mentioned in the 1st point, we can add heterogeneous values. So when we pass our collection to a 3rd-party library, our collection is not type-safe: inside that library, anyone can add any kind of data.
public static void main(String[] args) {
Set data = new HashSet();
Set updatedData = getData(data);
}
public static Set getData(Set data) {
data.add(1);
data.add("Two");
data.add(new ArrayList<>());
return data;
}
That's where Generics comes into the picture. Generics adds compile-time checks, which solves both issues. The compile-time checks prevent heterogeneous data from being added to a collection, and they give the end user confidence that the list contains only the one type of data mentioned in its signature.
List<Integer> integers = new ArrayList<>();
integers.add(1);
integers.add(2);
integers.add("three"); // compile time exception
Type Erasure
Generics has a special property called type erasure, which means all the extra type information added using generics is removed during compilation, when the byte code is generated. It is also required for backward compatibility.
So after compilation, the byte code of these 2 statements will be the same:
List<Integer> list1 = new ArrayList<>();
List list2 = new ArrayList<>();
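Type erasure is easy to observe at runtime. The sketch below is my own illustration, not from the original article: because the type parameter is erased, two differently parameterized lists share one and the same runtime class object.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Returns true: after erasure both lists are plain ArrayList instances,
    // so getClass() yields the identical Class object for both.
    public static boolean sameRuntimeClass() {
        List<Integer> integers = new ArrayList<>();
        List<String> strings = new ArrayList<>();
        return integers.getClass() == strings.getClass();
    }
}
```

This is also why the compiler rejects checks such as `list instanceof List<Integer>`: the type argument simply no longer exists at runtime.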
Generic class
A class is generic if it declares 1 or more type parameters. The type parameter itself is not a data type but acts as a placeholder for any other data type.
public class GenericClass<T,E> {
private T key;
private E value;
}
In this, we have 2 type parameters, T and E.
Now we can use this class with any data types:
GenericClass<Integer,Integer> integers = new GenericClass<>();
GenericClass<String,Integer> strings = new GenericClass<>();
Generic Interface
The same rules apply to interfaces as well:
public interface GenericInterface<T,E> {
T firstMethod();
E secondMethod();
}
The above code declares a generic interface with 2 type parameters. The first type parameter is the return type of the first method, and the second is the return type of the second method. We can implement it as:
public class SampleClass implements GenericInterface<Integer,String> {

    @Override
    public Integer firstMethod() {
        return null;
    }

    @Override
    public String secondMethod() {
        return null;
    }
}
Here we used Integer and String, but we can use any other data type as well.
Generic methods
In the previous examples we saw classes that are generic as a whole, but we can also define a single generic method inside a non-generic class. In that case, the scope of the type variable is the method itself.
public <T,E> void genericMethod(T key,E value) {
System.out.println(key);
System.out.println(value);
}
If you notice, we have an extra piece of code in this method: <T, E>. This is the same indicator used in a class definition to declare which type parameters the method or class will use. We have to add this indicator to a generic method only when it is part of a non-generic class.
Both static and non-static methods follow the same general rules:
public static <T> Map<T,T> staticGenericMethod(T val1, T val2) {
    Map<T,T> map = new HashMap<>();
    map.put(val1, val2);
    return map;
}

public <T> Map<T,T> instanceGenericMethod(T val1, T val2) {
    Map<T,T> map = new HashMap<>();
    map.put(val1, val2);
    return map;
}
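A related feature worth knowing (my addition, not covered above): a type parameter can itself be bounded with extends, which lets the method body call the methods declared by the bound.

```java
public class BoundedTypeDemo {
    // T is bounded: it must be Comparable to itself, so compareTo is available
    public static <T extends Comparable<T>> T max(T first, T second) {
        return first.compareTo(second) >= 0 ? first : second;
    }
}
```

Without the `extends Comparable<T>` bound, the call to compareTo would not compile, because an unbounded T only exposes the methods of Object.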
Generic Constructor
Generic constructors follow the same rules as other methods. They can appear in a generic class or in any other class.
public class ClassWithGenericConstructor<T> {
    private T key;
    private T value;

    public ClassWithGenericConstructor(T key, T value) {
        this.key = key;
        this.value = value;
    }
}
In the above example, we have a generic constructor in a generic class.
public class ClassWithGenericConstructor {
public <T> ClassWithGenericConstructor(T key,T value) {
System.out.println(key);
}
}
In this example, we have a generic constructor in a non-generic class. As we discussed above, we need <T> in the method declaration because we are in a non-generic class.
Generics in Arrays
Generics and arrays contradict each other. An array preserves its type information at runtime, so it throws an error if we try to store an element of the wrong type, while generics rely on type erasure. The two are incompatible, so we cannot instantiate a generic array in Java.
public class GenericArray<T> {
    // this one is fine: it is only a declaration
    public T[] notYetInstantiatedArray;

    // causes compiler error: Cannot create a generic array of T
    public T[] array = new T[5];
}
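Two common workarounds exist (my sketch, not from the article): back the structure with an Object[] and cast on the way out (this is what java.util.ArrayList does internally), or create a genuine T[] through java.lang.reflect.Array when a Class<T> token is available.

```java
import java.lang.reflect.Array;

public class GenericContainer<T> {
    // Workaround 1: store elements in an Object[] and cast on read
    private final Object[] data;

    public GenericContainer(int size) {
        data = new Object[size];
    }

    public void set(int index, T value) {
        data[index] = value;
    }

    @SuppressWarnings("unchecked")
    public T get(int index) {
        return (T) data[index];
    }

    // Workaround 2: reflectively create a real T[] from a class token
    @SuppressWarnings("unchecked")
    public static <T> T[] newArray(Class<T> type, int size) {
        return (T[]) Array.newInstance(type, size);
    }
}
```

The unchecked casts are safe here because set() is the only way data is populated, so every element really is a T.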
WildCards
Wildcards denote unknown data types in generics. Used together with super and extends, they restrict the types accepted by a generic class or method.
Declarations:
Collection<?> coll = new ArrayList<String>();
List<? extends Number> list = new ArrayList<Long>();
Pair<String,?> pair = new Pair<String,Integer>();
Wildcards are of 2 types: bounded and unbounded.
Unbounded
With an unbounded wildcard (?), any data type can be used.
Collection<?> coll = new ArrayList<String>();
Here we used String on the right side, but any other data type would work as well. There are no restrictions.
Bounded
With bounded wildcards, we restrict the data types using extends and super.
extends
With extends, we can use the given class or any class that extends it:
List<? extends Number> list = new ArrayList<Long>();
Here we can use any class that extends Number, such as Long, Integer, or Double.
super
With super, we can use the given class or any of its superclasses:
List<? super Integer> list = new ArrayList<Number>();
Here we can use any class that is a superclass of Integer (or Integer itself).
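The two bounds combine in the well-known PECS mnemonic, producer-extends, consumer-super (the mnemonic is my addition, not from the article): read from a "? extends" source, write into a "? super" destination.

```java
import java.util.List;

public class PecsDemo {
    // src is a producer of Integers (extends); dst is a consumer of Integers (super)
    public static void copyIntegers(List<? extends Integer> src, List<? super Integer> dst) {
        for (Integer value : src) {
            dst.add(value);
        }
    }
}
```

This signature lets the caller copy a List<Integer> into a List<Number> or a List<Object>, which plain List<Integer> parameters would forbid.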
Limitations of Generics
- Static fields of parameterized type are not allowed
private static T member; //This is not allowed
- We cannot create an instance of a type parameter directly
new T(); // not allowed
- Not compatible with primitive types
List<int> ids = new ArrayList<>(); //Not allowed
- A generic exception class is not allowed
public class GenericException<T> extends Exception {} // not allowed
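Since new T() is illegal, the usual workaround (my sketch, not from the article) is to inject a factory, for example a java.util.function.Supplier<T>, and call it in place of the constructor.

```java
import java.util.function.Supplier;

public class Builder<T> {
    private final Supplier<T> factory;

    public Builder(Supplier<T> factory) {
        this.factory = factory;
    }

    // Stands in for the forbidden "new T()"
    public T create() {
        return factory.get();
    }
}
```

For example, `new Builder<>(StringBuilder::new).create()` hands back a fresh, empty StringBuilder.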
I hope now you have a good understanding of generics.
For more information about Generics, refer to the official documentation.
Hi there,
Trying to resolve an issue I am currently having.
I'd like an image (with a link) to be displayed on a dynamic (member) page, dependent on the logged-in user.
The link will then direct to a store page specifically made for the client. It would be preferred that the particular products are displayed to the logged-in user right away; however, from what I've read so far this is not possible...
The code I've got so far is as follows:
import wixUsers from 'wix-users';
import wixData from 'wix-data';
import wixLocation from 'wix-location';
$w.onReady(function () {
    $w("#dataset1").onReady(() => {
        wixUsers.currentUser.getEmail()
            .then((email) => {
                $w("#dataset1").setFilter(wixData.filter()
                    .eq("email", email)
                );
            });
    });
});
Any help would be much appreciated. | https://www.wix.com/corvid/forum/community-discussion/dynamic-content-filtered-by-logged-in-user-not-owner | CC-MAIN-2019-47 | refinedweb | 126 | 53.21 |
Before you get started
Take a few minutes to peruse and get familiar with the Project Zero Web site. You can join the Project Zero community, contribute to the project, or join the discussion forum, where you can comment on the project through each stage of its development. This article assumes that you have a suitable Java Development Kit (JDK) installed on your machine. You also need to be familiar with the concepts of PHP.
Get started with Project Zero, WebSphere sMash, and PHP is recommended reading. It shows how to download WebSphere sMash and create a PHP application. This article assumes you have a working version of WebSphere sMash with PHP. Note it is only necessary to work through the steps in that article up to the "Running the application" section.
Editor's note: IBM® WebSphere
The article shows how to access Java classes from PHP using the Java Bridge. It discusses calling Java methods and accessing fields, both instance and static. It also covers exception handling and type conversion between the PHP and Java worlds.
ZSL, WebSphere sMash, and Apache Lucene
For a real world example, this article steps through the creation of a simple search engine written in PHP that can index and search files using Apache Lucene. Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for many applications that require full-text search.
ZSL used Apache Lucene in a WebSphere sMash application they wrote. ZSL® wanted to improve information sharing between their developers. To solve this problem, they put together a mashup to index their source code and documentation library (PDF, PowerPoint, Word, Excel, and many others). The application provides quick and easy access to code snippets from across the company.
Creating an application in WebSphere sMash
The first step to get started is to create a new project in Eclipse:
- Select File -> New -> Project... and expand the Zero category in the dialog.
- Select WebSphere sMash PHP Application and click Next as shown in Figure 1.
- Give your project a name (for example,
MyJavaProject) and click Finish. Your project is now created.
Figure 1. Create a new WebSphere sMash project dialog
Creating and calling Java objects
Next, write a PHP script that creates and calls a Java object:
- Right-click on the public folder and select New -> File.
- Give your file a name (for example,
Java.php) and click Finish.
- Add the following code into the file:
<?php $file = new Java("java.io.File", __FILE__, FALSE); var_dump($file); var_dump($file->isDirectory()); ?>
- Run the sample code by right-clicking on the project name in Eclipse and select Run As -> WebSphere sMash Application.
- A Web server is started on port 8080 of your localhost.
- You can now go to a browser and direct it to, and you see the following output as shown in Figure 2.
Figure 2. Web browser output from calling Java objects
This sample code shows a PHP script using the built-in Java class. The Java class creates an instance of the named Java class and calls the best matching constructor, passing along any arguments from the script; in this example, those arguments are the path of the current script (__FILE__) and FALSE. The script stores the resulting object in a PHP variable called $file and then calls methods on the object just as if it was a normal PHP object, so in this example we call the isDirectory method.

This capability is powerful and gives PHP scripts access to any Java class. Note that the Java class must be on the application class path; java.io.File is part of the core Java class library and so it is always available.
Using the Java collection classes
Java has a rich set of collection classes, including maps, sets, lists, and
queues. This sample code shows how a PHP script can leverage those classes. As
before, create a new PHP script (for example,
MoreJava.php) and add the following code:
<?php $map = new Java("java.util.HashMap"); $map->put("title", "Java Bridge!"); $array = array(1, 2, 3, 4, 5); $map->put("stuff", $array); var_dump($map->get("stuff")); echo $map->get("title"); ?>
You can now go to a browser and direct it to, and you see the
following output as shown in Figure 3.
Figure 3. Web browser output from using the Java collection classes
The PHP script:
- Creates an instance of a Java HashMap class.
- Stores a string containing "Java Bridge!" in the map.
- Highlights interoperability between Java and PHP types.
- Creates a PHP array and stores it in the Java map as shown in the code below.
$array = array(1, 2, 3, 4, 5); $map->put("stuff", $array);
When the put call is invoked on the map, the PHP array is converted to its closest equivalent Java type, which is a Java Map. Likewise, when the get call reads the value back from $map, it is converted back to a regular PHP array. This is possible without any copying because PHP arrays have two personalities: PHP arrays and Java maps.
Iterating over Java collections
Try replacing the
MoreJava.php script with the
following code:
<?php $list = new Java("java.util.ArrayList"); var_dump($list); $date = new Java("java.util.Date", 70, 9, 4); echo "<br/>"; $list->add("Java Bridge!"); $list->add($date); $list->add(array(1, 2, 3, 4, 5)); $iterator = $list->iterator(); while ($iterator->hasNext() == TRUE) { var_dump($iterator->next()); echo "<br/>"; } ?>
You can now go to a browser and direct it to, and you see the
following output as shown in Figure 4.
Figure 4. Web browser output from iterating over Java collections
This example shows PHP using a Java
ArrayList class.
Furthermore, it also gets an iterator from the
ArrayList and scans through the collection from start
to finish. The contents of the iterator are written in order, starting with
the string
Java Bridge!, then the Java
Date object, and finishing with the PHP array
containing five numbers.
Accessing static methods and fields
Static methods and fields are accessed using
JavaClass. This is a little different to Java, where
static methods and fields are accessed directly using the class name. The
following code shows how to call
currentTimeMillis on
java.lang.System:
<?php $system = new JavaClass("java.lang.System"); var_dump($system); echo("</br>Current time: ". $system->currentTimeMillis()."</br>"); ?>
Figure 5 shows the ouput from running this script in a browser.
Figure 5. Web browser output from accessing static methods
Accessing static fields is similar. The following code displays the
MIN_VALUE static field in the
java.lang.Integer class:
<?php $integerClass = new JavaClass("java.lang.Integer"); var_dump($integerClass->MIN_VALUE); ?>
Figure 6 shows the ouput from running this script in a browser.
Figure 6. Web browser output from accessing static fields
Catching Java exceptions in PHP
The Java Bridge converts Java exceptions into instances of
JavaException. This is a generic PHP exception class
that is caught in PHP scripts. The following code snippet shows an invalid call to
getProperty on
java.lang.System:
<?php try { $system = new JavaClass("java.lang.System"); $system->getProperty(FALSE); } catch (JavaException $exception) { echo "Cause: ".$exception->getCause(); } ?>
Figure 7 shows the ouput from running this script in a browser.
Figure 7. Web browser output from catching Java exceptions
Note that in WebSphere sMash 1.0, the
getCause method
returns the class name of the underlying Java exception, not the causing
exception itself. In the latest Project Zero builds, this oddity has been fixed to
return the actual Java exception.
Type conversion from Java to PHP
Table 1 shows how Java types are converted to PHP types. The general approach is to convert to a type that minimizes any potential loss (for example, a Java byte is widened to a PHP integer rather than narrowed). Note also that the conversions apply equally for boxed and unboxed Java types, such as Integer and int.
Table 1. Type conversion from Java to PHP
More information about type conversion is available at the Project Zero Web site.
Java Bridge limitations
The Java Bridge is intended to be a simple way for PHP scripts to use Java classes. With that in mind, there are several more advanced features that it does not contain. The most significant of these is calling overloaded methods reliably.
The Java Bridge selects a method or constructor based solely on the number of arguments supplied. If more than one possibility exists, then the Java Bridge selects the first one and tries that. This is extremely simplistic and leads to an exception being thrown when a constructor or method is called with the wrong argument types.
Selecting overloads with signatures
The problem of selecting a suitable overload has been solved in the latest Project Zero builds (it is not available in WebSphere sMash 1.0) with the addition of a new JavaSignature class. A JavaSignature allows a script to specify exactly which constructor or method is invoked by defining the argument types to look for:
<?php $signature = new JavaSignature(JAVA_STRING); $string = new Java("java.lang.String", $signature, "Hello World!"); var_dump($string->toLowerCase()); var_dump($string->split(" ")); var_dump($string->toUpperCase()); ?>
The arguments for
JavaSignature are drawn from the
following PHP constants:
- JAVA_BOOLEAN
- JAVA_BYTE
- JAVA_CHAR
- JAVA_SHORT
- JAVA_INT
- JAVA_LONG
- JAVA_FLOAT
- JAVA_DOUBLE
- JAVA_STRING
- JAVA_OBJECT
In the previous example, the script selects a constructor on java.lang.String that takes a single Java String as its argument (JAVA_STRING). Multiple arguments are comma separated, for example, new JavaSignature(JAVA_STRING, JAVA_INT). You can specify arrays of Java types using the JAVA_ARRAY modifier. For example, the following selects an array of strings: new JavaSignature(JAVA_STRING | JAVA_ARRAY).
The following snippet shows a
JavaSignature selecting
an overload of the
valueOf method on
java.lang.String. Note how the signature is passed as
the first argument to the method call. The Java Bridge knows to check there for
signatures.
<?php $class = new JavaClass("java.lang.String"); $signature = new JavaSignature(JAVA_INT); var_dump($class->valueOf($signature, 1234567890)); ?>
Case-sensitive method names
Methods in PHP are not case-sensitive, while Java is case-sensitive. The Java Bridge is case-sensitive and so the PHP method name must match the Java method name exactly.
Static methods and fields
Java developers are used to invoking static methods and fields using the class
name (for example,
Integer.MAX_VALUE). This is not yet
possible in PHP and so you must use the
JavaClass. A
script creates an instance of
JavaClass and uses that
to call static methods and to access static fields. This is unusual because it
requires a developer to create an instance of an object just to access
non-instance (static) methods and fields!
Iterating over collections
The sample code earlier showed how to iterate over a Java collection. This is
fairly long-winded and less expressive than a PHP
foreach statement. At the moment,
the Java Bridge does not integrate Java iterators and PHP
foreach statements. The following code
shows how to use Java iterators in PHP:
$iterator = $list->iterator(); while ($iterator->hasNext() == TRUE) { var_dump($iterator->next()); echo "<br/>"; }
Putting it all together in a real world example
The next section pulls together the previous sections to describe a real world use of the Java Bridge. The example creates a simple search engine written in PHP that can index and search files using Apache Lucene. Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is suitable for nearly any application that requires full-text search, especially cross-platform. For more information, see the Apache Lucene site.
Creating an index
The first step is to get Lucene. We are not going to use the most recent version of Lucene (although it will work perfectly well) because we want to make comparisons with the PHP implementation of Lucene, which is based on Lucene 2.2.0.
- Download lucene-2.2.0.tar.gz, for example from one of the Apache mirror sites.
- Unzip the file (or run tar -xvzf lucene-2.2.0.tar.gz).
- Find the two JAR files, lucene-core-2.2.0.jar and lucene-demos-2.2.0.jar.
The next step is to write a PHP script that creates a Lucene search index:
- In the Java perspective, create a new application by selecting File -> New -> Other. Select WebSphere sMash PHP Application and call it Lucene.
- Right-click on the public folder and select New -> File.
- Give your file the name index.php and click Finish.
- Copy the two Lucene JAR files from earlier into the Lucene/lib directory.
- To make sure that WebSphere sMash uses the Lucene Java libraries, right-click on the project name, Lucene, and select WebSphere sMash Tools -> Resolve.
- Add the following code into the file:
<html> <head> <title>Search Index</title> </head> <body> <form name="input" action="/index.php" method="POST"> <label for="directory">Directory:</label> <input type="text" name="directory"> <label for="extension">File Extension:</label> <input type="text" name="extension"> <input type="submit" name="action" value="Index!"> </form> </body> </html>
- Run the application by right-clicking on the project name Lucene and selecting WebSphere sMash Application -> Run. Point the Web browser at the local server, such as. It looks similar to Figure 8.
Figure 8. Selecting a directory and file extension page
- Do not try and index anything yet because there is more code to add. Eventually when the form is submitted, the PHP script will create a Lucene search index and populate it with all the files in the directory that have a matching extension. It will also recurse down from the starting directory, adding files as it goes.
- Now add the following PHP code into
index.php:
<?php $directory = dirname(__FILE__)."/../index"; if (file_exists($directory) === FALSE) { mkdir($directory); } define("INDEX_DIRECTORY", $directory); try { $extension = zget('/request/params/extension'); if (strlen($extension) > 0) { $directory = zget('/request/params/directory'); if (strlen($directory) > 0) { index_directory($directory, $extension); } } } catch (JavaException $exception) { echo "Index creation failed [". $exception->getMessage()."]</br>"; } ?>
- Do not run it yet because it is not finished! The code gets the form variables from the Global Context and checks whether they have been filled out. If they have, it calls the index_directory function. This function is explained next and is responsible for adding any matching files to the Lucene search index.
- Now add the following PHP code into
index.php:
/** * This creates an index from scratch and adds all the documents * by recursing from the directory passed in. It also checks * each candidate file to see if it matches the file extension. */ function index_directory($path, $extension) { echo "Indexing! [".$path.",".$extension."]</br>"; // Uses the SimpleAnalyzer because we will do a performance comparison with the PHP // implementation of Lucene in the Zend Framework and it is the closest match $analyser = new Java("org.apache.lucene.analysis.SimpleAnalyzer"); $policy = new Java("org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy"); $file = new Java("java.io.File", INDEX_DIRECTORY, FALSE); $file_directory = new JavaClass("org.apache.lucene.store.FSDirectory"); $directory = $file_directory->getDirectory($file); $writer = new Java("org.apache.lucene.index.IndexWriter", $directory, TRUE, $analyser, TRUE, $policy); $writer->setUseCompoundFile(FALSE); // Insert some calls to microtime() for comparison $start_time = get_microtime(); recursive_index_directory($writer, $path, $extension); $count = $writer->docCount(); // Lucene only matches the first 10,000 tokens by default $writer->setMaxFieldLength(1000000); $end_index_time = get_microtime(); $writer->optimize(); $end_time = get_microtime(); $writer->close(); echo "Finished indexing [".$count." documents]</br>"; $t1 = $end_index_time - $start_time; $t2 = $end_time - $end_index_time; echo "Time to index = $t1 </br>"; echo "Time to optimize = $t2 </br>"; }
Explaining the details of the Java Lucene API is beyond the scope of this article. In a nutshell, the code creates an IndexWriter object. This is the key indexing object to which files will be added as the script recurses through directories. Note that an index can be backed by many different kinds of storage, for example a RAM disk. In this example, the files are being read from a regular file system and so it uses the FSDirectory class.

Once the IndexWriter is set up, the script calls recursive_index_directory to actually do the indexing. This function is passed the IndexWriter, the directory to start from, and the file extension to match candidate files against.
The following section of code completes the indexing script. Most of it is general-purpose PHP that enumerates all the files in a directory and processes each in turn. Once it determines that a file should be indexed, it creates a FileDocument, sets it up with the fully-qualified path to the file, and adds it to the IndexWriter.
/** * Processes a file by adding it to the indexer. */ function index_file($writer, $path) { echo "Indexing file [".$path."]</br>"; try { // A few of the files we indexed in the examples have non // UTF-8 characters so we just skip indexing those files! $file = new Java("java.io.File", $path, FALSE); $file_document = new JavaClass("org.apache.lucene.demo.FileDocument"); $document = $file_document->Document($file); $writer->addDocument($document); } catch (JavaException $exception) { echo "Invalid characters in file!\n"; } } function get_microtime(){ list($part_one,$part_two) = explode(' ',microtime()); return ((float) $part_one + (float) $part_two); } /** * Indexes all matching files (by extension) in the directory tree. */ function recursive_index_directory($writer, $path, $extension) { echo "Indexing directory [".$path."]</br>"; // Remove any trailing slash first if (substr($path, -1) == '/') { $path = substr($path, 0, -1); } // Make sure the directory is valid if (is_dir($path) == TRUE) { if (is_readable($path) == TRUE) { $handle = opendir($path); // Scan through the directory contents $extension_length = strlen($extension); while (FALSE !== ($item = readdir($handle))) { if ($item != '.') { if ($item != '..') { $index_path = ($path.'/'.$item); if (is_dir($index_path) == TRUE) { recursive_index_directory( $writer, $index_path, $extension); } else { $position = strpos(strtolower($index_path), $extension); // Very rough and ready way to check for trailing extension! if ($position == (strlen($index_path)-$extension_length)) { index_file($writer, $index_path, $extension); } } } } } closedir($handle); } } return TRUE; }
- Point the Web browser at the script and fill out the form variables as shown in Figure 9.
Figure 9. Web browser output from indexing a directory
- Click Index! and the script indexes the files selected. In the example above, the script pointed to some C source code and it indexed five source files. If you refresh your Eclipse project, you have a new directory called Index. This directory contains the search index files produced by the Lucene search engine as shown in Figure 10.
Figure 10. Directory structure of a WebSphere sMash application
Adding search queries to the application
The final step is to write a form that allows a user to run searches against the index:
- Right-click on the public folder and select New -> File.
- Give your file the name search.php and click Finish.
- Add the following code into the file:
<html> <head> <title>Query</title> </head> <body> <form name="input" action="/search.php" method="POST"> <label for="query">Search Query:</label> <input type="text" name="query"> <input type="submit" name="action" value="Search!"> </form> </body> </html>
- Run this script and the Web browser looks like Figure 11.
Figure 11. Search query page
- Now add the following PHP code into
search.php:
<?php /** * This runs a search through an index already created. */ function search_index($path, $query) { echo "Searching for [".$query."]</br>"; $file = new Java("java.io.File", $path, FALSE); $file_directory = new JavaClass("org.apache.lucene.store.FSDirectory"); $directory = $file_directory->getDirectory($file); $searcher = new Java("org.apache.lucene.search.IndexSearcher", $directory); $analyser = new Java("org.apache.lucene.analysis.SimpleAnalyzer"); $parser = new Java("org.apache.lucene.queryParser.QueryParser", "contents", $analyser); $parsed_query = $parser->parse($query); $hits = $searcher->search($parsed_query); $count = $hits->length(); for ($index = 0; $index < $count; $index++) { $document = $hits->doc($index); echo $index.") ".$document->get("path")."</br>"; } echo "</br>Finished searching [".$count." hits]</br>"; } try { $directory = dirname(__FILE__)."/../index"; define("INDEX_DIRECTORY", $directory); $query = zget('/request/params/query'); if (strlen($query) > 0) { search_index($directory, $query); } } catch (JavaException $exception) { echo "Index search failed [".$exception->getMessage()."]</br>"; } ?>
As before, this script makes use of several Lucene classes. The essence is that, instead of using the IndexWriter class like index.php, it uses an IndexSearcher. This is configured with the same directory where the index files were created earlier. The string entered by the user in the form is then used to create a query object; the Lucene QueryParser provides an easy way to parse query strings.

With the query parsed, the script is ready to run the search on the IndexSearcher. This returns a list of hits, which the script enumerates, displaying the path for each item.
- Point a Web browser at search.php and enter some search terms as shown in Figure 12.
Figure 12. Web browser output from running a search query
In this example, five hits were found matching the keywords "TSRM" and "int". Lucene has a powerful query syntax that can support a wide variety of search terms. More information about the possible search queries is available from the Apache Lucene site.
Performance comparisons
If you were looking carefully at the source code that we added to index.php, you may have noticed some calls to microtime and a few comments indicating that we would want to check the performance.
The checks that we performed are simple timing checks. We were interested in comparing the time it takes to create an index using three different pieces of software:
- The Java implementation of Lucene called via the WebSphere sMash Java Bridge.
- Java Lucene called from a Java application.
- The PHP implementation of Lucene in the Zend Framework.
To make this a fair comparison, we used Lucene Version 2.2.0, which is what the Zend implementation is based on. We also used the Lucene SimpleAnalyzer. A detailed discussion of the Zend implementation is beyond the scope of this article. However, it is a faithful port of the Lucene code, and it generates indexes that have an identical format to those generated by the Java version.
The performance comparison was to index all of the PHP test scripts (*.phpt files) under the PHP 5.3 source tree. The times taken to create and optimize the index are shown in Table 2.
Table 2. Performance comparison for Lucene search
This gives a quick idea of how the timings compare using these technologies out of the box. The Java JIT is switched on in these timings, and in an application like Lucene it makes a considerable difference to execution times.
None of this is a reason not to use the Zend implementation. In fact, if you are not using Java and your principal development language is PHP, using a search engine that is also written in PHP has many advantages: considerations such as understanding and modifying the code easily may outweigh those of raw performance.
What is more interesting is the comparison between using PHP and the Java Bridge and using a Java application. The fact that the timings are close tells us that we are not wasting too much time in the Java Bridge, or in fact, in running PHP on the Java VM.
There are, of course, other PHP to Java Bridges. For example, there is a commercial implementation in the Zend Platform and an open source implementation available from sourceforge.net. While we have not used either of these implementations, the fact that they exist lend support to the argument for using Java for what it is good for (algorithmic performance) and taking advantage of the fact that PHP is easy to use.
If you repeat these experiments, you may notice slight differences in the indexes that are created. One of the useful features of the Zend implementation is that it creates indexes of exactly the same format as the Java implementation, which means that you can check them with standard Java tools (for example, Luke, which you can download from the Luke site). These differences are all relatively easy to explain and do not affect the timing comparison. For example, there are slight differences between the PHP and Java analyzers.
Conclusion
In this article, you:
- Created an application in PHP and WebSphere sMash.
- Used the Java Bridge to create and invoke Java objects.
- Explored using Java collections from PHP scripts.
- Learned how the Java Bridge does type coercion and exception handling.
- Developed a search engine based on the Java Lucene libraries.
- Looked at the performance of the Java Lucene libraries.
Now that you have completed this article, you can expand your use of Java libraries and PHP scripting. Why not combine some more Java libraries with PHP in WebSphere sMash? Let us know how you are doing in the Project Zero forums. If you want to learn more about the Zero Global Context and other relevant topics, see the WebSphere sMash Developer's Guide listed in the Resources section below.
Acknowledgement
The authors would like to thank Naveen Noel Jakkamsetti at ZSL Inc. for his help with this article.
Resources
Learn
- Read Get started with Project Zero, WebSphere sMash, and PHP to download WebSphere sMash and to create a PHP application.
- If you'd like to join the Project Zero community or just keep up with what's happening, visit the Project Zero Web site, where you'll find blogs, forums, and a community of fellow developers.
- Refer to WebSphere sMash documentation for all the documents associated with this project, including FAQs, tutorials, and demos.
- Listen to a podcast interview and discussion with IBM Fellow and WebSphere CTO Jerry Cuomo, who provides his insight into WebSphere sMash and how to start developing Ajax applications today.
Get products and technologies
- You can find full instructions for downloading and installing WebSphere sMash for PHP at the Project Zero site.
Discuss
- Participate in the Project Zero discussion. | http://www.ibm.com/developerworks/websphere/library/techarticles/0809_phillips/0809_phillips.html | CC-MAIN-2014-41 | refinedweb | 4,212 | 56.86 |
Weight of Evidence is logistic coefficients
Friday September 17, 2021
“Weight of Evidence” (WoE) is a good idea for decision-making, but, especially in the financial risk modeling world, it's also a specific feature processing method based on target statistics. Weight of Evidence changes a categorical variable into the log odds coefficients corresponding to a logistic regression with intercept at the overall rate.
Say you have a categorical predictor, or a continuous predictor that you're going to bin into categories in order to make it easier to model nonlinear relationships, and a binary outcome like “defaulted on loan or not”. Then for each category, the WoE score is:
\[ \text{WoE} = \log \left( \frac{\text{count of positives for this category} / \text{count of all positives}} {\text{count of negatives for this category} / \text{count of all negatives}} \right) \]
```python
import numpy as np
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({'x': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']})
y = [1, 1, 0, 0, 1, 0, 0, 0]

woe = ce.woe.WOEEncoder(regularization=0)
woe.fit(X, y)
woe.transform(pd.DataFrame({'x': ['a', 'b']}))
## (0.5108256237659906, -0.587786664902119)

np.log((2/3)/(2/5)), np.log((1/3)/(3/5))
## (0.5108256237659906, -0.587786664902119)
```
It's usually written like that, sometimes with additive smoothing (add some small numbers to the counts) to avoid zeros and reduce variance. To make it even clearer that this is just log odds, rearrange to:
\[ \text{WoE} = \log \left( \frac{\text{count of positives for this category}} {\text{count of negatives for this category}} \right) - \log \left( \frac{\text{count of all positives}} {\text{count of all negatives}} \right) \]
```python
np.log(2/2) - np.log(3/5), np.log(1/3) - np.log(3/5)
## (0.5108256237659907, -0.5877866649021191)
```
The WoE values are exactly the coefficients you'd get if you made indicator columns for your categorical variable, added an intercept column, and did a logistic regression. That design matrix isn't full rank, so there isn't a unique solution—unless you set the intercept coefficient to represent the overall log odds (as in the subtracted term above).
These WoE scores are monotonic with the category-specific percent positive, for example, so if you're going to use a tree-based model where spacing doesn't matter, the more immediately interpretable value might be preferable. If you're doing a simple logistic regression with a WoE predictor, the coefficient will be one. In combination with other features, it isn't obvious to me that WoE will always be an absolutely optimal transform, but it seems like a fine choice for multiple logistic regression as well. Using WoE instead of a categorical feature can prevent learning interactions with the affected categories, etc., which could be a consideration.
Like other transforms based on target statistics, WoE can leak label information into training data. Even in large datasets, if some categories appear rarely, this may be a problem. Cranking up the additive smoothing ("regularization") a bit might help, or consider alternatives as in the CatBoost paper etc.
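A sketch of that smoothing knob in plain numpy (this is my own toy function, not category_encoders' exact formula):

```python
import numpy as np

def smoothed_woe(pos, neg, total_pos, total_neg, alpha=0.5):
    """WoE with additive smoothing so rare/empty categories stay finite."""
    return np.log(((pos + alpha) / (total_pos + 2 * alpha)) /
                  ((neg + alpha) / (total_neg + 2 * alpha)))

# same toy data as above: category 'a' has 2 of the 3 positives, 2 of the 5 negatives
print(smoothed_woe(2, 2, 3, 5, alpha=0))    # 0.5108... (matches the unsmoothed value)
print(smoothed_woe(0, 4, 3, 5, alpha=0.5))  # finite even with zero positives
```

With alpha at zero you recover the raw log odds ratio; turning it up shrinks every category toward the overall rate, most strongly for the small-count ones.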
I think WoE is interesting in part because it's sort of on the threshold between a simple pre-processing step and what might be considered model stacking. It's a reminder that even “fancy” models are just statistics with more steps; the mean is a model too.
If you're working on credit scoring, maybe you'll also do feature selection using “Information Value” (IV). I don't know... It reminds me a little bit of MIC, almost? That's not quite right. I'm not so interested in IV right now. | https://planspace.org/20210917-weight_of_evidence_is_logistic_coefficients/ | CC-MAIN-2022-21 | refinedweb | 609 | 50.67 |
My request is fairly simple, although I'm sure the answer is somewhere in one of the thousands of threads I just haven't located yet, but here goes. I am designing a program a bit like Notepad except with a few more capabilities (i.e. deleting/renaming entire programs and such). All of the major functions except for "writing to file" have been completed. I can write to files, but I can only write one string to a file. For example, if I were to write "Hello_World" to a file it displays the entire thing in the text file; however, when I input it as "Hello World" it only writes "Hello" to the file. Here is my code for the "write to file" function:
------------------------------------------------------------------------------------------------------------------------------------------------------------------Code:
#include <stdio.h>
main()
{
FILE *file;
char input[80];
char filename[]="test1.txt";
printf ("Opening %s.\n",filename);
if ((file=fopen(filename,"r+"))==NULL)
{
printf ("Unable to open specified file.\n");
return 0;
}
else
{
scanf ("%s",input);
fputs (input,file);
}
fclose(file);
}
Granted this isn't the entire code for the function, just the important parts. The part in red is where I am having trouble. I know that fputs() is used to write one string at a time, but I have seen it done where you could input what I suppose you would call two strings, or rather I have seen the opposite, where you can read multiple strings at a time from a file such as "Hello" "World". I have failed to bring about the reverse outcome, so I am asking for help. :confused:
1) Make a function:

void SetString(CString* str, double dNum, int nNum)
{
    str->Format("%.2f x %d", dNum, nNum);
}
// You can add if statements for numbers under 10 and over 10, and add spaces or zeroes
// to the format string accordingly, to make sure the string is always 8 chars long.

2) You can inherit from CString and add a member function to format the string; this will require more code and will add complexity to your code. I would personally just make the function and call it whenever I need it.
Reference:
One tiny correction though, the %d should be %3d or %4d, where the number between % and 'd' is the number of digits you want for the integer.
By the way, the examples you have given are bigger than 8 characters.
"1.50 x 1500"
1.50 = 4 chars
space x space = 3 chars
1500 = 4 chars
// Put this below #include "stdafx.h"
#include <sstream>
#include <iomanip>
using namespace std;

double d1 = 1.5;
int i2 = 1000;
ostringstream oss;
// fixed + setprecision(2) gives the two decimals ("1.50"), matching the %.2f above
oss << fixed << setprecision(2)
    << setw(4) << right << d1 << " x " << setw(4) << right << i2;
CString A = oss.str().c_str();
setw(4) makes the next output field 4 characters wide.
right makes the next output right-justified.
oss.str() gets the std::string from the ostringstream.
oss.str().c_str() gets a 'const char*' from the std::string, which can be assigned to a CString.
Regards, Alex
Best regards from
Thomas | https://www.experts-exchange.com/questions/22907663/Need-some-help-with-a-CString.html | CC-MAIN-2018-26 | refinedweb | 283 | 80.01 |
Author: Les Cottrell. Created: Jan 30 '02
There were about 10 attendees. There was wireless connectivity on the first day. The meeting was at the Arizona State University Memorial Union.
There was wireless access but the signal strength was too poor to be usable. Some people attended remotely via VRVS. The working group charter was approved.
See
Original bandwidth estimates in 1998, though at the time considered to be very aggressive, were found to be underestimates in 2001. Big issue was trans Atlantic bandwidth requirements which ICFA was instrumental in recommending improvements. Reviewed current and planned connectivity performance within and to/from regions of the world. A concern is the planned slow growth of ESnet capacity looks like it will not meet requirements. Summarized report to be given to ICFA in February: networking advancing rapidly, big changes coming; Grid projects attracted much funding; TCP/IP 25 years old built for 56Mbps, Ethernet 20 years old built for 10Mbps; increased bandwidth has changed viewpoints; China, India, Pakistan, FSU, S. America, Africa have poor connectivity, need to assistance.
Performance on high latency*bandwidth networks. Looked at slow start and then congestion avoidance, then went over fast recovery; showed a tcptrace illustration. Used UDP to find the max bandwidth without loss on the CERN-Caltech link. Linux TCP estimates the initial ssthresh from the previous connection. Showed the effect of overestimating the bw*window size. Worked on reducing slow start time by modifying the slow start increment; did not help much, then modified the congestion avoidance increment. Looked at it with a simulator with 1/10K loss. Need to limit the max cwnd size. Set the initial ssthresh to an appropriate value for the delay & bw of the link; the initial ssthresh has to be larger than the delay*bw product but not too large. Looked at QBSS: does it use all bandwidth available, does it back off? Showed QBSS limits itself OK quickly and does not affect other traffic. But QBSS was unable to use the maximum 120Mbps bandwidth even with no other traffic, probably due to the small queue size for the QBSS stream. Could use QBSS with UDP to measure unused bandwidth without affecting production traffic. Tried 2 ways in which Cisco implements load balancing (CEF). Found per-packet load balancing works well; per-destination does not work well for one pair. But per-packet load balancing resulted in 50% of packets being received out of order. Reached 192Mbits/s; 99.8% of ACKs are SACKs. Decided not to use load balancing since it impacts operational traffic.
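The ssthresh sizing point above comes down to the bandwidth-delay product; a back-of-envelope sketch (the 120 Mbit/s figure is from the talk, but the 170 ms transatlantic RTT is an assumed illustrative value):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes 'in flight' needed to keep the pipe full.

    The initial ssthresh (and the max TCP window) should be at least this large.
    """
    return bandwidth_bps * rtt_s / 8

# e.g. a 120 Mbit/s path at an assumed 170 ms RTT
print(f"{bdp_bytes(120e6, 0.170) / 1e6:.2f} MB")  # prints: 2.55 MB
```

A default 64 KB window is two orders of magnitude short of that, which is why untuned TCP crawls on such paths.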
A practical distributed authorization system for GARA. The idea is to investigate the signaling required to set up QoS. So far QoS implementation is done by hand on the Michigan campus. Security is vital, which is difficult due to cross-domain issues. GARA is PKI/GSI Globus based. Many sites lack a PKI, but have installed a Kerberos base. KX509 translates Kerberos (v4/5) credentials into short-lived (10 hours at Michigan) X.509 credentials, or "junk keys". Junk keys can be used by browsers for mutual SSL communication or by GSI for Globus authentication. Short-term certificates avoid revocation problems and can be used for mobile user support. Not good for signing anything, but useful for identity. The cross-domain distributed authorization design allows authorization decisions when the requester and resource reside in separate domains. A policy engine applies a set of input attributes to a set of policies. A design goal is to avoid replication, i.e. use existing group information. Use shared group names to avoid user/group data replication in a central database; local groups can manage local databases. Performed tests between the CERN and UMich machine rooms, and moved GARA services onto a subset of the UMich GigE backbone in the physics department. Demonstration laptop with all services (Linux); next, will schedule more tests between CERN and UMich.
Grid monitoring is not all kinds of monitoring; the focus is on grids, e.g. farms. It means having information services that users can communicate with. Group formed in Oct 2001. Initial foci: gathering use cases & requirements; evaluate an initial set of sensors to deploy as part of the Virtual Data Toolkit 2.0. Define schema for interfacing sensors to information infrastructures (MDS, GMA, etc.). Deploy an initial set of sensors on 1-3 experiment testbeds, evaluate & update. Implement monitoring/sensors/archives for 1-3 projects.
19 use cases from ~9 groups fall into 4 categories: health of system, system upgrade evaluation, resource selection, and application-specific progress.
From the use cases, gathered requirements, split by type: network, CPU, storage system, other. Host sensor: CPU load, available memory, disk; network bandwidth & latency; storage system: available free storage. Next steps: what tools should we deploy?
Contact information
Harvey raised issue of route changes and characterizing the new route or having a history of the new route.
Challenges: managing storage resources in an unreliable environment; heterogeneity (MSS, HPSS, Castor, Enstore, various disk systems and attachments, system-attached disks, NAS, parallel ...); optimization issues (avoid extra file transfers, caches, multi-tier storage system organization). They are modifying GridFTP to use HRM in blocking mode.
Areas of the initiative: applications performance tuning ... Applications: work with specific application communities (HEP, human genome); chose video conferencing and FTP as the first applications. Host/OS issues: Web100 & host tunings, performance packages from computer vendors; provide packages for various vendor OS's to check/validate configuration. Measurement infrastructure: establish common measurement parameters for all portions of the end-to-end path, develop analysis techniques to determine capabilities, make info available to a wide range of users. H.323 and FTP beacons (Ohio State is doing H.323 beacons and claims it could do FTP), so one can do FTP tests from your site to a beacon. Are these the right tools, how to control access, where to deploy? Projects: packet reflector (is this useful? location & access control). A packet goes to a gateway, goes via tunnel to the remote reflector, and the remote reflector sends it back over the regular network. Collection of experiences and tools: contribute to the pie, use the pie; is it useful? Internet wants to glue
Discussion for each objective: are there existing projects, who will lead, how, when, existing work.
Need to expand/broaden membership. In particular to address areas where we are weak.
Will poll people via email list for technical roles. Also will arrange next meeting via email.
Deploy testing & monitoring programs, link & site instrumentation, and a standard methodology, in association with the I2 E2E initiative, so all of HENP's apps are supported.
Provide advice on the configuration of routers, switches, PCs & net interfaces, net testing, and problem resolution to achieve
Showed how loss in slow start gives a very slow (linear) ramp up in throughput. Showed fractal behavior of jitter in message transfers. As the competing UDP load is increased, TCP behavior becomes chaotic. Showed how, with netlets, one can remove the end-to-end jitter of TCP, which should be useful for realtime applications.
Commodity high-performance distributed computing relies on the Internet. Will develop inference & analysis tools. Want fast, dynamic inference of available bandwidth, the location of bottlenecks, and the available bandwidth along a path. Internal network measurements are not available. Want an end-to-end model. Develop lightweight chirp and fatboy path probing. Will be both active and passive. Want to do tomography to infer what is going on in the net cloud. Create a new generation of bandwidth protocols. One question is the probability that new protocols will be deployed.
Two tools: pathrate (capacity estimation) and pathload (available bandwidth estimation). Many attempts, starting with pathchar. Early ones did not work at current link speeds. Use variable-length packet streams, pairs, and packet trains. Will develop a better GUI.
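The packet-pair dispersion idea behind pathrate-style capacity estimation fits in a few lines (idealized: no cross traffic, and the numbers below are made up for illustration):

```python
def capacity_bps(packet_bytes: int, dispersion_s: float) -> float:
    """Packet-pair estimate: two back-to-back packets leave the narrow link
    spaced L/C seconds apart, so the capacity is C = L / dispersion."""
    return packet_bytes * 8 / dispersion_s

# 1500-byte packets arriving 0.12 ms apart imply a ~100 Mbit/s narrow link
print(capacity_bps(1500, 120e-6) / 1e6)  # prints: 100.0
```

In practice cross traffic both stretches and compresses dispersions, which is why the real tools send many pairs/trains and look at the modes of the resulting distribution rather than a single sample.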
Goal is to develop a network aware operating system. Develop/deploy network probe/sensors, develop a network metrics data base, develop transport protocol optimizations, develop a network-tuning daemon. Will develop network tools analysis framework. Auto-tuning gets close to hand tuning. Concerned about overall impact of active probing.
Infrastructure for passive monitoring. Want to look into the interior of the network. Want to minimize impact on the network. Use a fiber splitter. Based on libpcap for packet capture and bro (used for hacker signature capture). Can only monitor own traffic. Want to put a monitoring box close to each router in ESnet. Activation is sent by UDP to all monitors along the path. Focus on capture tools, not on analysis. Have a prototype setup at LBL and NERSC. The monitor host system is installed and maintained by the net administrator.
Re-examine protocol-stack issues and interactions in the context of cluster & grid computing. Adaptive flow-control to give dynamic right-sizing. Not TCP-unfriendly. Improved throughput by a factor of 7-8x at SC2001 from Denver to LANL. Applies to bulk-data transfers where the bandwidth-delay fluctuates. Have a kernel mod. Will develop an application-layer/user-space version. An alpha in Linux 2.4 kernel space is available. Have a packet spacing algorithm. Wu does not have an RFC.
Interactive QoS across heterogeneous hardware and software.
See
Motivation: there is no terascale test network, so develop a simulator. SSFnet is the portable simulator, written in Java. It has a domain modeling language, network elements, and protocols. Renesys SSFnet is shared memory, proprietary, not 64-bit clean, and has no scheduler. Will add POS and MPLS (NIST is doing ATM), the Web100 MIB, JavaNPI and namespace extensions; add hinting to DML; build examples of ESnet and Internet2.
The Web100 work went into an IETF RFC at the last meeting. TCP will need to evolve (maybe via eXperimental TCP implementations (XTP)), e.g. new startup algorithms, and addressing new technologies such as lambda switching. N.b. the Internet only works due to the commons sharing concept and fairness. It is a major activity to develop and deploy high-quality TCP into standard operating systems; it can take years. There can be problems with research not evolving to an operational infrastructure (the "throw it over the wall" concept). Thomas asked how to maintain communication among ongoing projects. A bigger problem is the tie back to the middleware community. Middleware folks want to know what one can do with the monitoring tools, so we need to identify deliverables from network research to middleware. Need to continue the dialogue between applications and networking. The objective is to advance science.
Mailing lists will be set up: measurement & analysis focus group; transport protocols focus group; interacting with applications communities focus group.
Need common CP (Certificate Policy) and CPS. Trust management is at the resource end. A certificate is like a DMV license or passport in that it gives reasonable identification that someone is who they say they are; it does not say what they are entitled to (e.g. whether they can pay for something).
They are looking for production systems, with long term support for software to be put in hands of users, need heavy lifting (may take days/weeks to move data), large heterogeneity in OS, protocols, applications, mass storage systems. Meta data description is a challenge. Error propagation is a problem i.e. how is one told something did not work, and what does one do about it, how to tell the user, how is the error passed up the hierarchy.
I met with Rolf Riedi of Rice to discuss INCITE, how to proceed with automated chirp measurements, and arranging visits for a student to SLAC and Jiri Navratil to Rice. It appears the best time for Jiri to go to Rice will be end of March (after March 25th when Rolf returns from vacation). The student is Hong Kong Chinese so I will work on seeing what is needed to prepare for her visit. She would like to come as soon as possible since this quarter she has a light load. We agreed we need a C/perl analysis program that can be called from an application. This will be led by Rice, since they understand the analysis needed. Rolf does not feel this is very hard. This analysis code will be used to reduce the data so it is easy to report on (e.g. in a time series graph) or in comparison with something else. Rolf will assist with coming up with reasonable parameters with which to call chirp. Typical optimum chirp sizes (# of packets sent in a chirp) are in the range 6-10. We should also save the results from one chirp run to use as input parameters to the next chirp run. SLAC will keep the raw chirp data for up to a month (about 30 MBytes), and make it available (e.g. via FTP or HTTP) for Rice to pick up and keep a permanent copy. Some handshaking will be needed so SLAC will know Rice has got the data, and it can be deleted. Rolf encouraged SLAC to make contact with Vinny while he is in the Bay Area (working as an intern for Sprint).
I met with Brian Tierney of LBL and Micah Beck of UTK to arrange visits to SLAC later this month. I had a long discussion with Constantinos Dovrolis of U Delaware about pathrate (for capacity measurements) and pathload (for available bandwidth measurements). I resolved some questions on pathrate, and we discussed how it should be used for automated long term measurements. We will get an early beta release of pathload. I had shorter discussions with kc claffy of CAIDA and Matt Mathis of PSC. I worked with Guojun Jin of LBL to make progress in getting him an account at SLAC for assistance with Pipechar testing. Thomas Dunigan of ORNL and I talked about Web100, in particular porting webd to SLAC and setting up an appropriate host at SLAC with GE access to run it on. I had brief separate discussions with Thomas Ndousse, George Seweryniak and Mary Anne Scott, all of DoE, concerning funding. I talked to Jim Leighton about the needs to get higher bandwidth to Renater. Jim and George Seweryniak are trying to get funding to upgrade the ESnet backbone which is getting close to saturation as more sites get OC12 connections (the backbone at best is currently 2*OC12).
SciDAC wants to put together a monthly newsletter. SciDAC will have a booth at SC2002. | http://www.slac.stanford.edu/grp/scs/trip/cottrell-notes-henp-i2-jan02.html | crawl-002 | refinedweb | 2,317 | 58.69 |
I created two models:
class Country(models.Model):
    name = models.CharField(max_length=50)

class City(models.Model):
    name = models.CharField(max_length=50)
    country = models.ManyToManyField(Country)
    image = models.ImageField('New photos', upload_to='img/newphotos', blank=True)
I want to add new cities through a template, so I created:
views.py:
def newcity(request):
    if request.method == "POST":
        form = CityForm(request.POST)
        if form.is_valid():
            city = form.save(commit=False)
            city.save()
            form.save_m2m()
            return redirect('city.views.detailcity', pk=city.pk)
    else:
        form = CityForm()
    return render(request, 'city/editcity.html', {'form': form})
forms.py:
class CityForm(forms.ModelForm):
    class Meta:
        model = City
        fields = ('name', 'country', 'image',)
Everything is OK, but when I add an image nothing happens: the image is chosen, yet when I click the save button the new city is added without the image (in the admin panel it works). What must I add to my code? Also, how can I make it possible to add several images to one city? After the first image is added, a button should appear for adding a second one, and so on. Right now there is room for only one.
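A likely cause, offered as a hedged guess since the template isn't shown: the view binds the form with only request.POST. A Django form never sees uploaded files unless request.FILES is passed as the second constructor argument (and the template's form tag carries enctype="multipart/form-data"). The toy class below (plain Python, not Django) mimics that behavior:

```python
# Toy stand-in for a Django ModelForm: uploads reach the form only
# when a files mapping is passed alongside the POST data.
class ToyForm:
    def __init__(self, data, files=None):
        self.data = data
        self.files = files or {}

    def saved_image(self):
        # Mirrors how a form bound without FILES leaves the ImageField empty.
        return self.files.get("image")

post = {"name": "Krakow"}
files = {"image": "krakow.jpg"}

print(ToyForm(post).saved_image())         # None: the image is silently dropped
print(ToyForm(post, files).saved_image())  # krakow.jpg
```

In the real view that would correspond to form = CityForm(request.POST, request.FILES).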
Types.

(>>=) :: Monad m => m a -> (a -> m b) -> m b
(1+)  :: Num a => a -> a

So, the typechecker deduces that
1) "a" is the same as "m b", and
2) "a" (and "m b", therefore) must be of class "Num"

Now,

Just 3 :: Num t => Maybe t

and the typechecker learns from that that "m a" must be the same as "Maybe t",
with "t" being of class "Num". This leads to two observations:
3) "m" is "Maybe", and
4) "a" is of class "Num" - the same as (2) above

Now, from (1) and (3) it follows that "a" is the same as "Maybe b". (2) leads
then to "Maybe b" being of class "Num" - but GHCi doesn't have this instance,
and complains.

What you've probably meant is something like

    Just 3 >>= \x -> return (x + 1)

or, equivalently,

    liftM (+1) $ Just 3

On 9 May 2009, at 23:31, michael rice wrote:

> Why doesn't this work?
>
> Michael
>
> ================
>
> data Maybe a = Nothing | Just a
>
> instance Monad Maybe where
>     return = Just
>     fail = Nothing
>     Nothing >>= f = Nothing
>     (Just x) >>= f = f x
>
> instance MonadPlus Maybe where
>     mzero = Nothing
>     Nothing `mplus` x = x
>     x `mplus` _ = x
>
> ================
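The shape mismatch the reply describes, that (>>=) expects a function returning a monadic value while (1+) returns a bare number, can be mimicked in Python by modeling Maybe as a tagged tuple (an analogy only, not Haskell semantics):

```python
# Maybe modeled as tagged tuples: ("Just", x) or ("Nothing",).
NOTHING = ("Nothing",)

def just(x):
    return ("Just", x)

def bind(m, f):
    # f must itself return a Maybe value; handing bind a bare
    # increment (the analogue of (1+)) would break the chain,
    # which is what the typechecker complains about.
    return m if m == NOTHING else f(m[1])

print(bind(just(3), lambda x: just(x + 1)))   # ('Just', 4)
print(bind(NOTHING, lambda x: just(x + 1)))   # ('Nothing',)
```

The first call mirrors the email's suggested fix, Just 3 >>= \x -> return (x + 1).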
Flatten A PyTorch Tensor
Flatten A PyTorch Tensor by using the PyTorch view operation
Code:
Transcript:
This video will show you how to flatten a PyTorch tensor by using the PyTorch view operation.
First, we start by importing PyTorch.
import torch
Then we print the PyTorch version we are using.
print(torch.__version__)
We are using PyTorch 0.3.1.post2.
Let's now create an initial PyTorch tensor for our example.
pt_initial_tensor_ex = torch.Tensor(
    [
        [[ 1,  2,  3,  4],
         [ 5,  6,  7,  8]],
        [[ 9, 10, 11, 12],
         [13, 14, 15, 16]],
        [[17, 18, 19, 20],
         [21, 22, 23, 24]]
    ])
We use torch.Tensor, and we pass in our data structure.
We can see that it has one, two, three matrices, and then each matrix has two rows and four columns.
The numbers go from 1 to 24.
We assign that result to the Python variable pt_initial_tensor_ex.
Let's print the pt_initial_tensor_ex Python variable to see what we have.
print(pt_initial_tensor_ex)
We see that it's a PyTorch FloatTensor of size 3x2x4, we see the three matrices and each matrix has two rows and four columns, and all the values are between 1 and 24, inclusive.
When we flatten this PyTorch tensor, we'd like to end up with a list of 24 elements that goes from 1 to 24.
To flatten our tensor, we're going to use the PyTorch view operation and the special case of negative number one.
pt_flattened_tensor_ex = pt_initial_tensor_ex.view(-1)
So when we say whatever our tensor is, .view(-1), that means we want to flatten it completely.
So when we pass in our Python variable pt_initial_tensor_ex then we say .view(-1), we're going to have a flattened tensor and we're going to assign that to the Python variable pt_flattened_tensor_ex.
Let's print out the pt_flattened_tensor_ex Python variable to see what we have.
print(pt_flattened_tensor_ex)
We see that it's a PyTorch FloatTensor of size 24, and we see that it has all our numbers, 1 all the way to 24.
So before, it was 3x2x4, now it's just size 24.
Just to double check that our original tensor didn't change, we're going to print our original tensor to make sure that the .view(-1) didn't do an in-place reshaping of the original tensor.
print(pt_initial_tensor_ex)
When we print it, we see that pt_initial_tensor_ex is still a 3x2x4 PyTorch FloatTensor that has internal matrices where each one has two rows and four columns and we see our original 24 numbers.
So it's still the same after the dot view operation.
Perfect! We were able to flatten a PyTorch tensor by using the PyTorch view operation and the negative one. | https://aiworkbox.com/lessons/flatten-a-pytorch-tensor | CC-MAIN-2020-40 | refinedweb | 451 | 71.95 |
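The row-major order that view(-1) walks (last dimension fastest) can be mimicked with plain nested Python lists; this is an analogy only and does not use PyTorch:

```python
# 3x2x4 nested lists standing in for the example tensor.
tensor = [
    [[1, 2, 3, 4], [5, 6, 7, 8]],
    [[9, 10, 11, 12], [13, 14, 15, 16]],
    [[17, 18, 19, 20], [21, 22, 23, 24]],
]

# Flatten in the same order view(-1) uses: matrix, then row, then column.
flat = [x for matrix in tensor for row in matrix for x in row]

print(len(flat))                    # 24
print(flat == list(range(1, 25)))   # True
```

As in the lesson, the source structure is left untouched; only a new flat list is produced.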
The New CEFs on the Block
But you can only take your money out step by step.
PIMCO made headlines last month when it launched an interval fund (PIMCO Flexible Credit Income) and registered to raise $1 billion in assets. The last closed-end fund, or CEF, to raise more than $1 billion in assets was Goldman Sachs MLP and Energy Renaissance Fund (GER) in the fall of 2014. Interval funds have gained popularity in recent months as investors continue to search for income and are increasingly willing to invest in riskier fare to gain a bit more yield. These funds have some distinct advantages over CEFs, though there are plenty of reasons to avoid them. This month, we highlight interval funds and outline some important pros and cons of investing in them.
Some in the financial press have asserted that, with all the challenges facing CEFs, particularly the potential imposition of the fiduciary standard on advisors and brokers, interval funds could be the next big thing. Asset managers seem to think so, too. As of the end of February, there were about 20 interval funds in registration, which would almost double the number of interval funds available. For comparison, just nine new CEFs launched in 2016, raising a paltry $3 billion in total; for the year-to-date 2017, there have been two CEF IPOs, raising just $375 million in assets.
While growing in popularity, interval funds remain a very small part of the overall market. As of February 2017, there were about 30 interval funds in existence, accounting for a total of around $9 billion in assets. By comparison, there are around 530 CEFs with nearly $400 billion in gross assets ($250 billion in net assets) in existence.
What Is an Interval Fund?
Interval funds are a type of CEF but with one important distinction: Shares are not traded on an exchange in the secondary market. Instead, shareholders can participate in periodic (quarterly, for example) repurchase offers by the fund (the amount repurchased must be at least 5% but no more than 25% of assets each period). The funds can continuously offer shares to the public based on current net asset value, though not all interval funds strike a daily NAV (most CEFs do). By contrast, shareholders in CEFs buy and sell shares on the secondary market, which creates both a NAV (the value of the fund’s underlying holdings) and a share price (the price at which investors transact); this gives rise to discounts and premiums. More on this later.
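The discount or premium just described is the gap between market price and NAV, conventionally quoted as a percentage of NAV. A quick sketch with illustrative numbers:

```python
def discount_premium(price, nav):
    """Negative = discount, positive = premium, as a percent of NAV."""
    return (price - nav) / nav * 100

print(round(discount_premium(9.50, 10.00), 1))   # -5.0  (a 5% discount)
print(round(discount_premium(10.40, 10.00), 1))  # 4.0   (a 4% premium)
```

Interval funds sidestep this arithmetic entirely, since shares transact at NAV.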
Interval funds’ unique redemption process requires a bit more detail. Investors are only able to withdraw money during predetermined redemption periods, which usually occur once per quarter but could be as infrequently as semiannually or annually. During this “redemption window” (which usually lasts two or three weeks), investors submit requests to sell shares back to the fund company, but orders are not processed immediately. Rather, once the redemption window closes, the fund has additional time (typically around one week) to sell assets to raise cash to meet the redemptions.
Investors should also note that sell orders will not be processed using the fund’s NAV on the day the order is submitted. Rather, depending on the length of the redemption window and when the order was submitted, sales might not be processed until two, three, or even four weeks after submission, which means investors do not know the NAV at which shares will be repurchased. This gives rise to a considerable con for interval funds over CEFs: the inability to sell shares immediately and the risk that the market will turn south between order placement and fulfillment, leaving investors with a lot less cash than expected.
Finally, because interval fund managers do not have to worry about meeting daily investor outflows, they tend to invest in very illiquid securities. While CEFs can and do invest in illiquid securities, some interval funds invest in highly illiquid assets, including hedge funds, catastrophe bonds, real estate securities, and small business loans. Traditional CEFs tend to shy away from hedge funds and those exotic asset classes, even though, in theory, the structure does allow them to purchase similarly illiquid investments. In fact, since traditional CEFs don't have to meet regular redemptions, those products arguably have a greater ability to own illiquid securities.
From a fund firm or management perspective, an interval fund is an appealing investment wrapper. The managers can invest without too much concern about meeting outflows except during specified time periods, which provides them the freedom to invest in more esoteric and illiquid fare. From the fund company's perspective, the ability for investors to purchase shares at any time allows for the prospect of growing assets in the fund and increasing management fees. With CEFs, an asset manager would need to launch another fund in order to significantly increase assets under management for a specific strategy.
Like CEFs, interval funds can also use leverage and are subject to similar regulatory restrictions.
Pros and Cons
Interval funds offer investors a different set of pros and cons from traditional CEFs. Below, we highlight some of the most important issues to consider.
Continuously Offered Shares
One particular pro for interval funds over CEFs is that investors do not have to contend with discounts and premiums because they can buy shares directly from the fund company at the fund's prevailing NAV. (Note that there are likely brokerage fees or transaction costs associated with purchasing shares; however, this is similar to buying shares of a stock, exchange-traded fund, or CEF.) This will also likely lead to a steadier return profile for an interval fund versus a CEF, as interval funds are not subject to the whims of investor sentiment on a daily basis. For example, the average CEF lost 33% based on share price (with many losing over 50%) in 2008, then gained over 55%, on average, in 2009. Such high volatility can be difficult for investors to stomach.
Because interval funds offer shares on a continuous basis, there is no concern over how a fund is brought to market. Like an open-end fund, shares of an interval fund are available for sale at inception and investors can buy additional shares over time. This is a considerable pro for interval funds as they avoid the unsavory CEF IPO process and the resulting "IPO premium." We've discussed our distaste for CEF IPOs in detail in this column many times. Briefly, we urge all investors to avoid investing in CEF IPOs.
Access to Illiquid and Higher-Yielding Securities
As discussed previously, interval funds tend to invest in highly illiquid securities to earn the so-called "liquidity premium." A portfolio chock-full of illiquid securities can earn a substantially higher yield than a typical portfolio of more-liquid assets. This is the primary reason to own interval funds--to gain access to these illiquid markets and earn a high payout.
That said, unlike a traditional CEF, interval funds do have to meet redemptions (even if only periodically), so these illiquid securities may need to be sold at some point. This leads to the question of how quickly and at what price the fund can sell its holdings.
Redemption Window
A unique and potential benefit of interval funds is that the periodic redemption window might promote better long-term investor behavior. Because investors can’t sell at the first sign of market turbulence, they may be more likely to ride out the ups and downs and achieve a better result over the long term. This is, at least in theory, a benefit over CEFs. Long-term CEF investors must weather the daily fluctuations of share prices as investors buy and sell at will, sometimes creating dramatic swings in investment value.
In practice, however, it is equally likely that this merely condenses outflows into large chunks during the redemption periods. Should this happen, it is possible that investors won’t be able to redeem the full amount they requested. There is often a limit (usually between 5% and 25%) on the amount of overall fund assets that can be withdrawn during redemption periods. As long as the total redemption amount is below the limit, investors can withdraw as much as they want, but if the total redemption amount rises above the limit, shares are redeemed on a pro rata basis. Investors wishing to sell more will have to wait until the next redemption window. It's highly likely that many investors will want to sell shares at the same time for similar reasons (that is, bad performance), which means that, in theory, it could take years to liquidate a position.
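The pro-rata mechanics can be made concrete with a small sketch: when total requests exceed the window's cap, each order is filled in proportion to its size. The numbers below are illustrative, not from any fund:

```python
def fill_redemptions(requests, fund_assets, cap_pct):
    """Scale every request down pro rata when the total breaches the cap."""
    cap = fund_assets * cap_pct
    total = sum(requests.values())
    scale = min(1.0, cap / total)
    return {who: amt * scale for who, amt in requests.items()}

# $50,000 requested against a 25% cap on a $100,000 fund: only half fills.
orders = {"A": 30_000, "B": 15_000, "C": 5_000}
filled = fill_redemptions(orders, fund_assets=100_000, cap_pct=0.25)
print(filled)   # {'A': 15000.0, 'B': 7500.0, 'C': 2500.0}
```

Each investor here gets half of what was requested and must wait for the next window for the rest, which is how a liquidation can stretch across many quarters.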
Finally, if the fund's shareholders decide to withdraw en masse, while the total amount is limited, it could still be upward of 25% of the fund's assets. This is a large chunk of any fund to sell within a few weeks' time and could harm shareholders who do not sell, either by choice or because they simply aren't allowed to sell.
Fees
An important con for interval funds are their high fees. These funds tend to be small and charge high management fees. On top of that, funds often charge front end loads and redemption fees, plus any costs of leverage. Of the interval funds that Morningstar tracks, gross expense ratios (inclusive of leverage) ranged from 1.5% to 5% and management fees ranged from 0.50% to more than 2%.
Transparency
A final, but significant negative for interval funds is the lack of transparency. CEFs have their own issues with transparency, though the large fund companies offering CEFs (PIMCO, BlackRock, and Nuveen, for example) have greatly improved transparency over the years. Interval funds do have to meet minimum reporting standards, but most don't go beyond that bare minimum. Take ABS Long/Short Strategies Fund, which invests in hedge funds using long-short strategies across sectors. In the most recent quarterly filing of holdings, the interval fund simply lists the hedge funds and amounts invested in each. While this is typical of mutual funds investing in hedge funds, this does not provide much transparency into the portfolio. That said, the redemption practices of an interval fund would be a substantial benefit over a traditional open-end mutual fund in this case because hedge funds are illiquid, a considerable mismatch for the daily liquidity promised by mutual funds.
Importantly, we found that a fund's status as an interval fund is not always readily apparent on fact sheets or fund company websites. The information is there, but it can be difficult to find. For example, funds are often referred to as "continuously offered CEFs," which, while correct, is not at all illuminating for investors. Voya does a nice job of noting that its fund, Voya Senior Income (XSIAX), is an interval fund, and it prominently displays the fund's tender offer schedule.
What's more, we found that some brokerage platforms do not make it abundantly clear that these funds have redemption restrictions when purchasing them.
Comparing Investment Wrappers
Data is sparse for interval funds (Morningstar does not provide data for these funds on its website, though we do track the funds in our software products), so it can be difficult to analyze characteristics like expenses and payouts in practice. In an effort to highlight how different investment wrappers may affect a fund's expenses, performance, and payout, we looked at the suite of bank-loan products offered by Invesco (one ETF, one open-end mutual fund, two CEFs, and one interval fund).
Exhibit 1 lists the funds' yields (here, the CEFs' NAV distribution rates are in line with their 12-month yields because they distribute only income), expenses, and trailing three-year return through February 2017. Except for PowerShares Senior Loan BKLN, these funds are managed by the same team, though holdings will differ based on the strategy's wrapper.
Unsurprisingly, the ETF offers the cheapest exposure (0.64% expense ratio) but with a lower yield (4.41%) than both CEFs (6.65% and 5.93%) and the interval fund (4.87%). The interval fund's fees (1.68%) are lower than those of one of the CEFs, though in line with the fees of the other. Compared with the open-end fund, the interval fund's fees were 58 basis points higher. What's more, purchases of the interval fund are subject to a 3.25% front-end load. Trailing three-year returns through February 2017 are highest for the CEFs (gains of 5.6% and 4.4%), though the interval fund did beat both the ETF and the open-end fund over the same period.
In this specific case, the interval fund carries high fees relative to three of the four alternative investment wrappers (plus a front-end load); its yield is lower than both CEFs, and its trailing three-year returns are worse. It doesn’t look as if investors are being fully compensated for the limited ability to sell shares of this fund.
Bottom Line
Given the potential illiquidity of their underlying holdings, high fees, and the complexity associated with the redemption process, interval funds should probably be thought of as niche investments, if they're thought of at all. Investors considering interval funds should pay careful attention to overall fees (including front-end loads, brokerage fees, redemption fees, as well as annual management fees) and fully understand the redemption process and policies. Understanding the fund's investment universe and selection process as well as the experience of the management is of special importance for these funds as it could take years to liquidate an entire holding.
CEF Discount Trends
February was short but eventful for CEF investors. A flurry of mixed economic data jostled CEF discounts in the middle of the month, and news from the Hill added to this uncertainty. The spread between taxable-bond and municipal CEF discounts converged within 1 basis point of each other at the end of the month, precipitated by a fall in taxable-bond CEF discounts around mid-February. This dip came shortly after January’s Consumer Price Index data was released, showing the index had made its largest gains in almost four years. Not long after, the Fed signaled a March rate hike, and taxable-bond CEFs lost their short-lived premium over municipal bonds. Exhibit 2 shows the average discounts for taxable-bond, municipal-bond, and equity CEFs over the trailing three years.
Valuations
We use a z-statistic to measure whether a fund is "cheap" or "expensive." As background, the z-statistic measures how many standard deviations a fund's discount/premium is from its three-year average discount/premium. For instance, a fund with a z-statistic of negative 2 would be two standard deviations below its three-year average discount/premium. Funds with the lowest z-statistics are classified as relatively inexpensive, while those with the highest z-statistics are relatively expensive. We consider funds with a z-statistic of negative 2 or lower to be "statistically undervalued" and those with a z-statistic of 2 or higher to be "statistically overvalued." Typically, we prefer to use the three-year z-statistic, which shows the funds that are most heavily discounted relative to their prices over the past three years. Exhibit 3 shows the most undervalued CEFs as of February 2017.
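The z-statistic described above is an ordinary standardization of the current discount against its own history. A sketch with made-up discount data:

```python
from statistics import mean, pstdev

def z_statistic(history, current):
    """How many standard deviations the current discount sits from its
    historical average (negative = unusually cheap)."""
    return (current - mean(history)) / pstdev(history)

# Three years of month-end discounts, in percent (made-up data).
history = [-6, -5, -7, -6, -5, -6, -7, -5, -6, -5, -7, -6]
print(round(z_statistic(history, current=-9), 2))   # -4.06
```

A value of -4.06 would be well past the -2 threshold the article uses for "statistically undervalued."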
Three CEFs were considered undervalued by their three-year z-statistic as of the end of February. Neuberger Berman NY Intermediate Muni (NBO) is also the most heavily discounted in its category. That discount was reflected in the fund's trailing three-year share price return, which was the category's lowest at 2.6%. Pioneer Muni High Income Advantage (MAV) was close behind, with a three-year z-statistic of negative 2.1. The fund also posted a low three-year share price return (negative 1.2%), despite having the highest NAV return in its category (7.62%).
Best- and Worst-Performing CEF Categories
February’s best-performing CEF categories were concentrated equity sectors, piggybacking on the record gains seen by the Dow Jones Industrial Average and S&P 500. Investors’ sentiment toward precious metals, financials, and natural resources was more lukewarm. Financials’ average price return was a slight loss of negative 0.62%, despite the average NAV returning 4.06%. Had the categories been ranked by NAV returns, financials would have been the third-best-performing category. This unbalanced relationship could reflect investors’ uncertainty toward the rate-sensitive sector, as the difference between the two- and 10-year Treasury yields narrowed toward the end of the month. Exhibit 4 below lists the best- and worst-performing CEF categories, by share price, for February.
Alaina Bompiedi and Brian Moriarty contributed to this article.
Cara Esser does not own (actual or beneficial) shares in any of the securities mentioned above. Find out about Morningstar’s editorial policies. | https://www.morningstar.com/articles/797179/the-new-cefs-on-the-block | CC-MAIN-2022-27 | refinedweb | 2,836 | 50.77 |
hi,
As you can see, when the value 'a' is to be entered we have to press Enter with it too. Is there any way that it can happen by just entering the digit, without pressing Enter after it?
#include <iostream.h>
#include <conio.h>
#include <dos.h>
#include <stdlib.h>
#include <time.h>

void main()
{
    clrscr();
    int reflex;
    int a, ran;
    cout << "Press enter to calculate your reflex action";
    cout << "When the test starts press enter";
    cout << "A number will be displayed [0-9] you need to type the number\n";
    cout << "to see your result";
    getch();
    clrscr();
    cout << "3";
    delay(1000);
    clrscr();
    cout << "2";
    delay(1000);
    clrscr();
    cout << "1";
    delay(1000);
    clrscr();
    randomize();
    ran = random(9) + 1;
    cout << "Enter the number " << ran << endl;
    clock_t start, end;
    start = clock();
    cin >> a;
    end = clock();
    clrscr();
    //cout << (end - start) / (CLK_TCK);
    reflex = (end - start) * 1000 / (CLK_TCK);
    cout << "\n\n\n";
    if (ran == a)
    {
        cout << "Your reflex action = " << reflex;
    }
    else
    {
        cout << "Wrong value entered";
    }
    getch();
}
I had the opportunity to give a webcast for O’Reilly Media during which I encountered a presenter’s nightmare: a broken demo. Worse than that it was a test failure in a presentation about testing. Is there any way to salvage such an epic failure?
What Happened
It was my second webcast and I chose to use the same format for both. I started with some brief introductory slides but most of the time was spent as a screen share, going through the code as well as running some commands in the terminal. Since this webcast was about testing this was mostly writing more tests and then running them. I had git branches setup for each phase of the process and for the first forty minutes this was going along great. Then it came to the grand finale. Integrate the server and client tests all together and run one last time. And it failed.
I quickly abandoned the idea of attempting to live debug this error, and since I was at the end anyway I just went into my wrap-up. Completely humbled and embarrassed, I tried to answer the questions from the audience as gracefully as I could while inside I wanted to just curl up and hide.
Tracing the Error
The webcast was the end of the working day for me so when I was done I packed up and headed home. I had dinner with my family and tried not to obsess about what had just happened. The next morning with a clearer head I decided to dig into the problem. I had done much of the setup on my personal laptop but ran the webcast on my work laptop. Maybe there was something different about the machine setups. I ran the test again on my personal laptop. Still failed. I was sure I had tested this. Was I losing my mind?
I looked through my terminal history. There it was and I ran it again.
It passed! I’m not crazy! But what does that mean? I had run the test in isolation and it passed but when run in the full suite it failed. This points to some global shared state between tests. I took another look at the test.
import os

from django.conf import settings
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test.utils import override_settings
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.ui import WebDriverWait


@override_settings(STATICFILES_DIRS=(
    os.path.join(os.path.dirname(__file__), 'static'),
))
class QunitTests(StaticLiveServerTestCase):
    """Interactive tests with selenium."""

    @classmethod
    def setUpClass(cls):
        cls.browser = webdriver.PhantomJS()
        super().setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super().tearDownClass()

    def test_qunit(self):
        """Load the QUnit tests and check for failures."""
        self.browser.get(self.live_server_url + settings.STATIC_URL + 'index.html')
        results = WebDriverWait(self.browser, 5).until(
            expected_conditions.visibility_of_element_located(
                (By.ID, 'qunit-testresult')))
        total = int(results.find_element_by_class_name('total').text)
        failed = int(results.find_element_by_class_name('failed').text)
        self.assertTrue(total and not failed, results.text)
It seemed pretty isolated to me. The test gets its own webdriver instance. There is no file system manipulation. There is no interaction with the database and even if it did Django runs each test in its own transaction and rolls it back. Maybe this shared state wasn’t in my code.
Finding a Fix
I’ll admit when people on IRC or Stackoverflow claim to have found a bug in Django my first instinct is to laugh. However, Django does have some shared state in its settings configuration. The test is using the override_settings decorator but perhaps there was something preventing it from working. I started to dig into the staticfiles code and that’s where I found it. Django was using the lru_cache decorator for the construction of the staticfiles finders. This means they were being cached after their first access. Since this test was running last in the suite it meant that the change to STATICFILES_DIRS was not taking effect. To fix my test meant that I simply needed to bust this cache at the start of my test.
...
from django.contrib.staticfiles import finders, storage
...
from django.utils.functional import empty
...

class QunitTests(StaticLiveServerTestCase):
    ...
    def setUp(self):
        # Clear the cached versions of the staticfiles finders and storage
        storage.staticfiles_storage._wrapped = empty
        finders.get_finder.cache_clear()
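The failure mode generalizes beyond Django: any lru_cache-wrapped factory that reads global configuration will keep serving stale objects after that configuration is patched. A minimal standard-library reproduction (no Django involved):

```python
from functools import lru_cache

SETTINGS = {"static_dirs": ["/app/static"]}

@lru_cache(maxsize=None)
def get_finder():
    # Captures whatever the settings held on the *first* call.
    return tuple(SETTINGS["static_dirs"])

print(get_finder())                           # ('/app/static',)

SETTINGS["static_dirs"] = ["/tests/static"]   # an "override_settings" stand-in
print(get_finder())                           # still ('/app/static',) -- stale

get_finder.cache_clear()                      # bust the cache, as in setUp above
print(get_finder())                           # ('/tests/static',)
```

The cache_clear() call is exactly the trick the setUp fix relies on.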
Fixing at the Source
Digging into this problem, it became clear that this wasn’t just a problem with the STATICFILES_DIRS setting but was a problem with using override_settings with most of the contrib.staticfiles related settings. In fact I found the easiest fix for my test case by looking at Django’s own test suite. I decided this really needed to be fixed in Django so that this issue wouldn’t bite any other developers. I opened a ticket and a few days later I created a pull request with the fix. After some helpful review from Tim Graham it was merged and was included in the recent 1.8 release.
What’s Next
Having a test which passes alone and fails when running in the suite is a very frustrating problem. It wasn’t something that I planned to demonstrate when I started with this webcast but that’s where I ended up. The problem I experienced was entirely preventable if I had prepared for the webcast better. However, my own failing lead to a great example of tracking down global state in a test suite and ultimately helped to improve my favorite web framework in just the slightest amount. All together I think it makes the webcast better than I could have planned it. | https://www.caktusgroup.com/blog/2015/06/08/testing-client-side-applications-django-post-mortem/ | CC-MAIN-2017-13 | refinedweb | 954 | 58.99 |
By Anusha M. (Intel), Paul F. (Intel), SWATI S. (Intel)
- How do I implement In-App purchasing in my app?
- How do I install custom fonts on devices?
- How do I access the device's file storage?
- Why isn't AppMobi* push notification services working?
- How do I configure an app to run as a service when it is closed?
- How do I dynamically play videos in my app?
- How do I design my Cordova* built Android* app for tablets?
- How do I resolve icon related issues with Cordova* CLI build system?
- Is there a plugin I can use in my App to share content on social media?
- Iframe does not load in my app. Is there an alternative?
- Why are intel.xdk.istablet and intel.xdk.isphone not working?
- How do I enable security in my app?
- Why does my build fail with Admob plugins? Is there an alternative?
- Why does the intel.xdk.camera plugin fail? Is there an alternative?
- How do I resolve Geolocation issues with Cordova?
- Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?
- How do I display a webpage in my app without leaving my app?
- Does Cordova* media have callbacks in the emulator?
- Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?
- How do I add a third party plugin?
- How do I make an AJAX call that works in my browser work in my app?
- I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?
- How do I target my app for use only on an iPad or only on an iPhone?
- Why does my build fail when I try to use the Cordova* Capture Plugin?
- How can I pinch and zoom in my Cordova* app?
- How do I make my Android application use the fullscreen so that the status and navigation bars disappear?
- How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?
- Which plugin is the best to use with my app?
- What are the rules for my App ID?
- iOS /usr/bin/codesign error: certificate issue for iOS app?
- iOS Code Sign error: bundle ID does not match app ID?
- iOS build error?
- What are plugin variables used for? Why do I need to supply plugin variables?
- What happened to the Intel XDK "legacy" build options?
- Which build files do I submit to the Windows Store and which do I use for testing my app on a device?
- How do I implement local storage or SQL in my app?
- How do I prevent my app from auto-completing passwords?
- Why does my PHP script not run in my Intel XDK Cordova app?
- Why doesn’t my Cocos2D game work on iOS?
- How do I change the alias of my Intel XDK Android keystore certificate?
- What causes "The connection to the server was unsuccessful. ()" error?
- How do I manually sign my Android or Crosswalk APK file with the Intel XDK?
- Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?
- How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?
- Why do I get a "build failed: the plugin contains gradle scripts" error message?
- How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?
- There is no Entitlements.plist file, how do I add Universal Links to my iOS app?
- Why do I get a "signed with different certificate" error when I update my Android app in the Google Play Store?
- How do I add [image, audio, etc.] resources to the platform section of my Cordova project with the Intel XDK?
How do I set app orientation?
You set the orientation under the Build Settings section of the Projects tab.
To control the orientation of an iPad you may need to create a simple plugin that contains a single
plugin.xml file like the following:
<config-file platform="ios" parent="UIInterfaceOrientation">
    <string></string>
</config-file>
<config-file platform="ios" parent="UISupportedInterfaceOrientations~ipad">
    <array>
        <string>UIInterfaceOrientationPortrait</string>
    </array>
</config-file>
Then add the plugin as a local plugin using the plugin manager on the Projects tab.
HINT: to import the plugin.xml file you created above, you must select the folder that contains the plugin.xml file; you cannot select the plugin.xml file itself in the import dialog, because a typical plugin consists of many files, not just a single plugin.xml. The plugin you created based on the instructions above only requires a single file; it is an atypical plugin.
Alternatively, you can use this plugin. Import it as a third-party Cordova* plugin using the plugin manager with the following information:
- cordova-plugin-screen-orientation
- specify a version (e.g. 1.4.0) or leave blank for the "latest" version
Or, you can reference it directly from its GitHub repo:
- github.com/yoik/cordova-yoik-screenorientation.git
- and specify "tag" v1.4.0
To use the screen orientation plugin referenced above you must add some JavaScript code to your app to manipulate the additional JavaScript API that is provided by this plugin. Simply adding the plugin will not automatically fix your orientation, you must add some code to your app that takes care of this. See the plugin's GitHub repo for details on how to use that API.
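In code, that JavaScript glue might look like the following sketch. It assumes the lockOrientation/unlockOrientation names documented in the plugin's README; verify them against the plugin version you actually import:

```javascript
// A minimal sketch: lock to portrait once Cordova reports the device ready.
// The screen.lockOrientation name is an assumption based on the plugin docs.
function lockPortrait() {
  if (typeof window !== 'undefined' &&
      window.screen &&
      typeof window.screen.lockOrientation === 'function') {
    window.screen.lockOrientation('portrait');
    return true;
  }
  return false; // plugin not available (e.g. desktop browser or emulator)
}

if (typeof document !== 'undefined') {
  // Do not call plugin APIs before the 'deviceready' event fires.
  document.addEventListener('deviceready', lockPortrait, false);
}
```

Returning false when the API is missing lets the same code run harmlessly in the Test tab or a desktop browser.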
Is it possible to create a background service using Intel XDK?
Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the creation of such plugins; however, if you can find an existing Cordova* plugin that provides the background service you need (or write one yourself), Intel XDK's build system will work with it.
How do I send an email from my App?
You can use the Cordova* email plugin or use web intent - PhoneGap* and Cordova* 3.X.
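As a plugin-free fallback, you can also hand composition to the device's default mail client with a mailto: URL. This is a sketch; the '_system' target in the usage comment assumes the inAppBrowser plugin is installed:

```javascript
// Build a mailto: URL; the OS mail client handles the actual sending.
function buildMailtoUrl(to, subject, body) {
  return 'mailto:' + encodeURIComponent(to) +
         '?subject=' + encodeURIComponent(subject) +
         '&body=' + encodeURIComponent(body);
}

// In a Cordova app (with the inAppBrowser plugin providing '_system'):
// window.open(buildMailtoUrl('someone@example.com', 'Hello', 'Hi!'), '_system');
```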
How do you create an offline application?
You can use the technique described here by creating an offline.appcache file and then setting it up to store the files that are needed to run the program offline. Note that offline applications need to be built using the Cordova* or Legacy Hybrid build options.
How do I work with alarms and timed notifications?
Unfortunately, alarms and notifications are advanced subjects that require a background service. This cannot be implemented in HTML5 and can only be done in native code by using a plugin. Background services require the use of specialized Cordova* plugins that need to be created specifically for your needs. Intel XDK does not support the creation of such plugins; however, if you can find (or write) a suitable Cordova* plugin, the Intel XDK's build system will work with it.
How do I get a reliable device ID?
You can use the Phonegap/Cordova* Unique Device ID (UUID) plugin for Android*, iOS* and Windows* Phone 8.
How do I implement In-App purchasing in my app?
There is a Cordova* plugin for this. A tutorial on its implementation can be found here. There is also a sample in Intel XDK called 'In App Purchase' which can be downloaded here.
How do I install custom fonts on devices?
Fonts can be considered as an asset that is included with your app, not shared among other apps on the device just like images and CSS files that are private to the app and not shared. It is possible to share some files between apps using, for example, the SD card space on an Android* device. If you include the font files as assets in your application then there is no download time to consider. They are part of your app and already exist on the device after installation.
How do I access the device's file storage?
You can use HTML5 local storage and this is a good article to get started with. Alternatively, there is a Cordova* file plugin for that.
Why isn't AppMobi* push notification services working?
This seems to be an issue on AppMobi's end and can only be addressed by them. PushMobi is only available in the "legacy" container. AppMobi* has not developed a Cordova* plugin, so it cannot be used in the Cordova* build containers. Thus, it is not available with the default build system. We recommend that you consider using the Cordova* push notification plugin instead.
How do I configure an app to run as a service when it is closed?
If you want a service to run in the background you'll have to write a service, either by creating a custom plugin or writing a separate service using standard Android* development tools. The Cordova* system does not facilitate writing services.
How do I dynamically play videos in my app?
- Download the Javascript and CSS files from and include them in your project file.
- Add references to them in your index.html file.
- Add a panel 'main1' that will be playing the video. This panel will be launched when the user clicks on the video in the main panel.
<div class="panel" id="main1" data- <video id="example_video_1" class="video-js vjs-default-skin" controls="controls" preload="auto" width="200" poster="camera.png" data- <source src="JAIL.mp4" type="video/mp4"> <p class="vjs-no-js">To view this video please enable JavaScript*, and consider upgrading to a web browser that <a href=supports HTML5 video</a></p> </video> <a onclick="runVid3()" href="#" class="button" data-Back</a> </div>
- When the user clicks on the video, the click event sets the 'src' attribute of the video element to what the user wants to watch.
function runVid2() {
    document.getElementsByTagName("video")[0].setAttribute("src", "appdes.mp4");
    $.ui.loadContent("#main1", true, false, "pop");
}
- The 'main1' panel opens waiting for the user to click the play button.
NOTE: The video does not play in the emulator and so you will have to test using a real device. The user also has to stop the video using the video controls. Clicking on the back button results in the video playing in the background.
How do I design my Cordova* built Android* app for tablets?
This page lists a set of guidelines to follow to make your app of tablet quality. If your app fulfills the criteria for tablet app quality, it can be featured in Google* Play's "Designed for tablets" section.
How do I resolve icon related issues with Cordova* CLI build system?
Ensure icon sizes are properly specified in the intelxdk.config.additions.xml file. For example, if you are targeting iOS* 6, you need to manually specify the icon sizes that iOS* 6 uses:

<icon platform="ios" src="images/ios/72x72.icon.png" width="72" height="72" />
<icon platform="ios" src="images/ios/57x57.icon.png" width="57" height="57" />
These are not required in the build system and so you will have to include them in the additions file.
For more information on adding build options using intelxdk.config.additions.xml, visit: /en-us/html5/articles/adding-special-build-options-to-your-xdk-cordova-app-with-the-intelxdk-config-additions-xml-file
Is there a plugin I can use in my App to share content on social media?
Yes, you can use the PhoneGap Social Sharing plugin for Android*, iOS* and Windows* Phone.
Iframe does not load in my app. Is there an alternative?
Yes, you can use the inAppBrowser plugin instead.
Why are intel.xdk.istablet and intel.xdk.isphone not working?
Those properties are quite old and are based on the legacy AppMobi* system. An alternative is to detect the viewport size instead. You can get the user's screen size using the screen.width and screen.height properties (refer to this article for more information) and control the actual view of the webview by using the viewport meta tag (this page has several examples). You can also look through this forum thread for a detailed discussion on the same.
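A minimal sketch of the screen-size approach follows. The 600-pixel cutoff is an assumption (a common tablet heuristic), not an official rule; tune it for your app:

```javascript
// Classify a device as tablet-like by its smaller screen dimension.
// The 600px threshold is an assumption, not an official cutoff.
function isTablet(width, height) {
  var smallerDimension = Math.min(width, height);
  return smallerDimension >= 600;
}

// In a Cordova app you would pass the real values:
// var tablet = isTablet(screen.width, screen.height);
```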
How do I enable security in my app?
We recommend using the App Security API. App Security API is a collection of JavaScript API for Hybrid HTML5 application developers. It enables developers, even those who are not security experts, to take advantage of the security properties and capabilities supported by the platform. The API collection is available to developers in the form of a Cordova plugin (JavaScript API and middleware), supported on the following operating systems: Windows, Android & iOS.
For more details please visit:.
For enabling it, please select the App Security plugin on the plugins list of the Project tab and build your app as a Cordova Hybrid app. After adding the plugin, you can start using it simply by calling its API. For more details about how to get started with the App Security API plugin, please see the relevant sample app articles at: and.
Why does my build fail with Admob plugins? Is there an alternative?
Intel XDK does not support the library project that was newly introduced in the latest version of the admob plugin. Admob plugins depend on "com.google.playservices", which adds the Google* play services jar to the project. The older version of that dependency is a simple jar file that works quite well, but the newer version uses a new feature to include a whole library project. It works if built locally with the Cordova CLI, but fails when using the Intel XDK.

To stay compatible with the Intel XDK, the admob plugin's dependency should be pinned to the older (jar-only) version of "com.google.playservices".
Why does the intel.xdk.camera plugin fail? Is there an alternative?
There seem to be some general issues with the camera plugin on iOS*. An alternative is to use the Cordova camera plugin, instead and change the version to 0.3.3.
How do I resolve Geolocation issues with Cordova?
Give this app a try, it contains lots of useful comments and console log messages. However, use Cordova 0.3.10 version of the geo plugin instead of the Intel XDK geo plugin. Intel XDK buttons on the sample app will not work in a built app because the Intel XDK geo plugin is not included. However, they will partially work in the Emulator and Debug. If you test it on a real device, without the Intel XDK geo plugin selected, you should be able to see what is working and what is not on your device. There is a problem with the Intel XDK geo plugin. It cannot be used in the same build with the Cordova geo plugin. Do not use the Intel XDK geo plugin as it will be discontinued.
Geo fine might not work because of the following reasons:
- Your device does not have a GPS chip
- It is taking a long time to get a GPS lock (if you are indoors)
- The GPS on your device has been disabled in the settings
Geo coarse is the safest bet to quickly get an initial reading. It will get a reading based on a variety of inputs, but is usually not as accurate as geo fine but generally accurate enough to know what town you are located in and your approximate location in that town. Geo coarse will also prime the geo cache so there is something to read when you try to get a geo fine reading. Ensure your code can handle situations where you might not be getting any geo data as there is no guarantee you'll be able to get a geo fine reading at all or in a reasonable period of time. Success with geo fine is highly dependent on a lot of parameters that are typically outside of your control.
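The coarse-then-fine strategy above can be sketched against the standard W3C Geolocation API (which the Cordova geo plugin implements). The geo object is injected as a parameter here purely so the logic is easy to test; in an app you would pass navigator.geolocation:

```javascript
// A hedged sketch: take a quick coarse reading first (primes the geo cache),
// then attempt a more accurate fix; a failure on the fine pass is non-fatal.
function getPositionCoarseThenFine(geo, onReading, onError) {
  var coarse = { enableHighAccuracy: false, timeout: 5000, maximumAge: 60000 };
  var fine = { enableHighAccuracy: true, timeout: 20000, maximumAge: 0 };
  geo.getCurrentPosition(function (pos) {
    onReading(pos, 'coarse');
    geo.getCurrentPosition(
      function (finePos) { onReading(finePos, 'fine'); },
      function () { /* keep the coarse reading */ },
      fine);
  }, onError, coarse);
}

// In an app:
// getPositionCoarseThenFine(navigator.geolocation, showPosition, showError);
```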
Is there an equivalent Cordova* plugin for intel.xdk.player.playPodcast? If so, how can I use it?
Yes, there is and you can find the one that best fits the bill from the Cordova* plugin registry.
To make this work you will need to do the following:
- Detect your platform (you can use uaparser.js or you can do it yourself by inspecting the user agent string)
- Include the plugin only on the Android* platform and use <video> on iOS*.
- Create conditional code to do what is appropriate for the platform detected
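Step 1 can be as small as a naive user-agent check (a sketch only; uaparser.js is considerably more robust):

```javascript
// Naive platform detection from the user-agent string.
function detectPlatform(userAgent) {
  if (/android/i.test(userAgent)) return 'android';
  if (/iphone|ipad|ipod/i.test(userAgent)) return 'ios';
  return 'other';
}

// Example branch: use the podcast plugin's API on Android,
// and fall back to an HTML5 <video>/<audio> element on iOS.
// var platform = detectPlatform(navigator.userAgent);
```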
You can force a plugin to be part of an Android* build by adding it manually into the additions file. To see what the basic directives are to include a plugin manually:
- Include it using the "import plugin" dialog, perform a build and inspect the resulting intelxdk.config.android.xml file.
- Then remove it from your Project tab settings, copy the directive from that config file and paste it into the intelxdk.config.additions.xml file. Prefix that directive with <!-- +Android* -->.
More information is available here and this is what an additions file can look like:
<preference name="debuggable" value="true" /> <preference name="StatusBarOverlaysWebView" value="false" /> <preference name="StatusBarBackgroundColor" value="#000000" /> <preference name="StatusBarStyle" value="lightcontent" /> <!-- -iOS* --><intelxdk:plugin intelxdk: <!-- -Windows*8 --><intelxdk:plugin intelxdk: <!-- -Windows*8 --><intelxdk:plugin intelxdk: <!-- -Windows*8 --><intelxdk:plugin intelxdk:
This sample forces a plugin included with the "import plugin" dialog to be excluded from the platforms shown. You can include it only in the Android* platform by using conditional code and one or more appropriate plugins.
How do I display a webpage in my app without leaving my app?
The most effective way to do so is by using inAppBrowser.
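A minimal sketch, assuming the inAppBrowser plugin is installed (the plugin routes window.open through itself):

```javascript
// Open a page without leaving the app.
function showPage(url) {
  // '_blank'  -> in-app browser overlay (user stays inside your app)
  // '_self'   -> loads into the Cordova webview itself
  // '_system' -> hands the URL to the device's default browser
  return window.open(url, '_blank', 'location=yes');
}
```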
Does Cordova* media have callbacks in the emulator?
While Cordova* media objects have proper callbacks when using the debug tab on a device, the emulator doesn't report state changes back to the Media object. This functionality has not been implemented yet. Under emulation, the Media object is implemented by creating an <audio> tag in the program under test. The <audio> tag emits a bunch of events, and these could be captured and turned into status callbacks on the Media object.
Why does the Cordova version number not match the Projects tab's Build Settings CLI version number, the Emulate tab, App Preview and my built app?
This is due to the difficulty in keeping different components in sync and is compounded by the version numbering convention that the Cordova project uses to distinguish build tool versions (the CLI version) from platform versions (the Cordova target-specific framework version) and plugin versions.
The CLI version you specify in the Projects tab's Build Settings section is the "Cordova CLI" version that the build system uses to build your app. Each version of the Cordova CLI tools come with a set of "pinned" Cordova platform framework versions, which are tied to the target platform.
NOTE: the specific Cordova platform framework versions shown below are subject to change without notice.
Our Cordova CLI 4.1.2 build system was "pinned" to:
- cordova-android@3.6.4 (the Android Cordova platform framework)
- cordova-ios@3.7.0 (the iOS Cordova platform framework)
- cordova-windows@3.7.0 (the Windows Cordova platform framework)
Our Cordova CLI 5.1.1 build system is "pinned" to:
- [email protected] (as of March 23, 2016)
- [email protected]
- [email protected]
Our Cordova CLI 5.4.1 build system is "pinned" to:
- [email protected]
- [email protected]
- [email protected]
Our Cordova CLI 6.2.0 build system is "pinned" to:
- [email protected]
- [email protected]
- [email protected]
Our CLI 6.2.0 build system is nearly identical to a standard Cordova CLI 6.2.0 installation. A standard 6.2.0 installation differs slightly from our build system because it specifies slightly different cordova-ios and cordova-windows platform versions. There are no differences in the cordova-android platform versions.
Our CLI 5.4.1 build system really should be called "CLI 5.4.1+" because the platform versions it uses are closer to the "pinned" versions in the Cordova CLI 6.0.0 release than those "pinned" in the original CLI 5.4.1 release.
Our CLI 5.1.1 build system has been deprecated as of August 2, 2016 and will be retired with an upcoming fall 2016 release of the Intel XDK. It is highly recommended that you upgrade your apps to build with Cordova CLI 6.2.0 as soon as possible.
The Cordova platform framework version you get when you build an app does not equal the CLI version number in the Build Settings section of the Projects tab; it equals the Cordova platform framework version that is "pinned" to our build system's CLI version (see the list of pinned versions, above).
Technically, the target-specific Cordova platform frameworks can be updated [independently] for a given version of CLI tools. In some cases, our build system may use a Cordova platform version that is later than the version that was "pinned" to that version of the CLI when it was originally released by the Cordova project (that is, the Cordova platform versions originally specified by the Cordova CLI x.y.z links above).
You may see Cordova platform version differences in the Simulate tab, App Preview and your built app due to:
The Simulate tab uses one specific Cordova framework version. We try to make sure that the version of the Cordova platform it uses closely matches the current default Intel XDK version of Cordova CLI.
App Preview is released independently of the Intel XDK and, therefore, may use a different platform version than what you will see reported by the Simulate tab or your built app. Again, we try to release App Preview so it matches the version of the Cordova framework that is considered to be the default version for the Intel XDK at the time App Preview is released; but since the various tools are not always released in perfect sync, that is not always possible.
Your app is built with a "pinned" Cordova platform version, which is determined by the Cordova CLI version you specified in the Projects tab's Build Settings section. There are always at least two different CLI versions available in the Intel XDK build system.
For those versions of Crosswalk that were built with the Intel XDK CLI 4.1.2 build system, the cordova-android framework version was determined by the Crosswalk project, not by the Intel XDK build system.
When building an Android-Crosswalk app with Intel XDK CLI 5.1.1 and later, the cordova-android framework version equals the "pinned" cordova-android platform version for that CLI version (see lists above).
Do these Cordova platform framework version numbers matter? Occasionally, yes, but normally, not that much. There are some issues that come up that are related to the Cordova platform version, but they tend to be rare. The majority of the bugs and compatibility issues you will experience in your app have more to do with the versions and mix of Cordova plugins you choose to use and the HTML5 webview runtime on your test devices. See When is an HTML5 Web App a WebView App? for more details about what a webview is and how the webview affects your app.
The "default version" of CLI that the Intel XDK build system uses is rarely the most recent version of the Cordova CLI tools distributed by the Cordova project. There is always a lag between Cordova project releases and our ability to incorporate those releases into our build system and other Intel XDK components. In addition, we are not able to provide every CLI release that is made available by the Cordova project.
How do I add a third party plugin?
Please follow the instructions on this doc page to add a third-party plugin: Adding Plugins to Your Intel® XDK Cordova* App. If you do not see the plugin listed in the build log, it is not being included as part of your app; the build log confirms whether it was successfully added to your build.
How do I make an AJAX call that works in my browser work in my app?
Please follow the instructions in this article: Cordova CLI 4.1.2 Domain Whitelisting with Intel XDK for AJAX and Launching External Apps.
I get an "intel is not defined" error, but my app works in Test tab, App Preview and Debug tab. What's wrong?
When your app runs in the Test tab, App Preview or the Debug tab the intel.xdk and core Cordova functions are automatically included for easy debug. That is, the plugins required to implement those APIs on a real device are already included in the corresponding debug modules.
When you build your app you must include the plugins that correspond to the APIs you are using in your build settings. This means you must enable the Cordova and/or XDK plugins that correspond to the APIs you are using. Go to the Projects tab and insure that the plugins you need are selected in your project's plugin settings. See Adding Plugins to Your Intel® XDK Cordova* App for additional details.
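A defensive pattern that makes a missing plugin fail with a clear message instead of an "intel is not defined" exception (a sketch; the hasIntelXdk helper is hypothetical):

```javascript
// Feature-detect the intel.xdk namespace before using it.
function hasIntelXdk() {
  return typeof window !== 'undefined' &&
         typeof window.intel !== 'undefined' &&
         typeof window.intel.xdk !== 'undefined';
}

function onDeviceReady() {
  if (!hasIntelXdk()) {
    console.log('intel.xdk plugins are not included in this build');
    return;
  }
  // Safe to call intel.xdk APIs from here on.
}

if (typeof document !== 'undefined') {
  document.addEventListener('deviceready', onDeviceReady, false);
}
```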
How do I target my app for use only on an iPad or only on an iPhone?
There is an undocumented feature in Cordova that should help you (the Cordova project provided this feature but failed to document it for the rest of the world). If you use the appropriate preference in the
intelxdk.config.additions.xml file you should get what you need:
<preference name="target-device" value="tablet" /> <!-- Installs on iPad, not on iPhone --> <preference name="target-device" value="handset" /> <!-- Installs on iPhone, iPad installs in a zoomed view and doesn't fill the entire screen --> <preference name="target-device" value="universal" /> <!-- Installs on iPhone and iPad correctly -->
If you need info regarding the additions.xml file, see the blank template or this doc file: Adding Intel® XDK Cordova Build Options Using the Additions File.
Why does my build fail when I try to use the Cordova* Capture Plugin?
The Cordova* Capture plugin has a dependency on the File plugin. Please make sure you have both plugins selected on the Projects tab.
How can I pinch and zoom in my Cordova* app?
For now, using the viewport meta tag is the only option to enable pinch and zoom. However, its behavior is unpredictable in different webviews. Testing a few samples apps has led us to believe that this feature is better on Crosswalk for Android. You can test this by building the Hello Cordova sample app for Android and Crosswalk for Android. Pinch and zoom will work on the latter only though they both have:
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=yes, minimum-scale=1, maximum-scale=2">.
Please visit the following pages to get a better understanding of when to build with Crosswalk for Android:
Another device oriented approach is to enable it by turning on Android accessibility gestures.
How do I make my Android application use the fullscreen so that the status and navigation bars disappear?
The Cordova* fullscreen plugin can be used to do this. For example, in your initialization code, include this function call:

AndroidFullScreen.immersiveMode(null, null);

You can get this third-party plugin from here
How do I add XXHDPI and XXXHDPI icons to my Android or Crosswalk application?
The Cordova CLI 4.1.2 build system supports this feature, but our 4.1.2 build system (and the 2170 version of the Intel XDK) does not handle the XX and XXX sizes directly. Use this workaround until these sizes are supported directly:
- copy your XX and XXX icons into your source directory (usually named www)
- add the following lines to your intelxdk.config.additions.xml file
- see this Cordova doc page for some more details
Assuming your icons and splash screen images are stored in the "pkg" directory inside your source directory (your source directory is usually named www), add lines similar to these into your intelxdk.config.additions.xml file (the precise names of your png files may be different than what is shown here):
<!-- for adding xxhdpi and xxxhdpi icons on Android -->
<icon platform="android" src="pkg/xxhdpi.png" density="xxhdpi" />
<icon platform="android" src="pkg/xxxhdpi.png" density="xxxhdpi" />
<splash platform="android" src="pkg/splash-port-xhdpi.png" density="port-xhdpi"/>
<splash platform="android" src="pkg/splash-land-xhdpi.png" density="land-xhdpi"/>
The precise names of your PNG files are not important, but the "density" designations are very important and, of course, the respective resolutions of your PNG files must be consistent with Android requirements. Those density parameters specify the respective "res-drawable-*dpi" directories that will be created in your APK for use by the Android system. NOTE: splash screen references have been added for reference, you do not need to use this technique for splash screens.
You can continue to insert the other icons into your app using the Intel XDK Projects tab.
Which plugin is the best to use with my app?
We are not able to track all the plugins out there, so we generally cannot give you a "this is better than that" evaluation of plugins. Check the Cordova plugin registry to see which plugins are most popular and check Stack Overflow to see which are best supported; also, check the individual plugin repos to see how well the plugin is supported and how frequently it is updated. Since the Cordova platform and the mobile platforms continue to evolve, those that are well-supported are likely to be those that have good activity in their repo.
Keep in mind that the XDK builds Cordova apps, so whichever plugins you find being supported and working best with other Cordova (or PhoneGap) apps would likely be your "best" choice.
See Adding Plugins to Your Intel® XDK Cordova* App for instructions on how to include third-party plugins with your app.
What are the rules for my App ID?
The precise App ID naming rules vary as a function of the target platform (eg., Android, iOS, Windows, etc.). Unfortunately, the App ID naming rules are further restricted by the Apache Cordova project and sometimes change with updates to the Cordova project. The Cordova project is the underlying technology that your Intel XDK app is based upon; when you build an Intel XDK app you are building an Apache Cordova app.
CLI 5.1.1 has more restrictive App ID requirements than previous versions of Apache Cordova (the CLI version refers to Apache Cordova CLI release versions). In this case, the Apache Cordova project decided to set limits on acceptable App IDs to equal the minimum set for all platforms. We hope to eliminate this restriction in a future release of the build system, but for now (as of the 2496 release of the Intel XDK), the current requirements for CLI 5.1.1 are:
- Each section of the App ID must start with a letter
- Each section can only consist of letters, numbers, and the underscore character
- Each section cannot be a Java keyword
- The App ID must consist of at least 2 sections (each section separated by a period ".").
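The four rules can be sketched as a small validator (a hedged illustration; the Java keyword list here is deliberately abbreviated, the real list is longer):

```javascript
// Abbreviated keyword list for illustration only.
var JAVA_KEYWORDS = ['abstract', 'class', 'default', 'int', 'new',
                     'package', 'switch'];

function isValidAppId(appId) {
  var sections = appId.split('.');
  if (sections.length < 2) return false;            // rule 4: at least 2 sections
  return sections.every(function (section) {
    return /^[A-Za-z][A-Za-z0-9_]*$/.test(section)  // rules 1-2: letter first, then letters/digits/underscore
        && JAVA_KEYWORDS.indexOf(section) === -1;   // rule 3: not a Java keyword
  });
}

console.log(isValidAppId('com.example.myapp')); // true
console.log(isValidAppId('com.1example.app'));  // false: section starts with a digit
console.log(isValidAppId('myapp'));             // false: only one section
console.log(isValidAppId('com.new.app'));       // false: "new" is a Java keyword
```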
iOS /usr/bin/codesign error: certificate issue for iOS app?
If you are getting an iOS build fail message in your detailed build log that includes a reference to a signing identity error you probably have a bad or inconsistent provisioning file. The "no identity found" message in the build log excerpt, below, means that the provisioning profile does not match the distribution certificate that was uploaded with your application during the build phase.
Signing Identity: "iPhone Distribution: XXXXXXXXXX LTD (Z2xxxxxx45)"
Provisioning Profile: "MyProvisioningFile" (b5xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxe1)

/usr/bin/codesign --force --sign 9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6 --resource-rules=.../MyApp/platforms/ios/build/device/MyApp.app/ResourceRules.plist --entitlements .../MyApp/platforms/ios/build/MyApp.build/Release-iphoneos/MyApp.build/MyApp.app.xcent .../MyApp/platforms/ios/build/device/MyApp.app

9AxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxA6: no identity found
Command /usr/bin/codesign failed with exit code 1

** BUILD FAILED **

The following build commands failed:
    CodeSign build/device/MyApp.app
(1 failure)
The excerpt shown above will appear near the very end of the detailed build log. The unique number patterns in this example have been replaced with "xxxx" strings for security reasons. Your actual build log will contain hexadecimal strings.
iOS Code Sign error: bundle ID does not match app ID?
If you are getting an iOS build fail message in your detailed build log that includes a reference to a "Code Sign error" you may have a bad or inconsistent provisioning file. The "Code Sign" message in the build log excerpt, below, means that the bundle ID you specified in your Apple provisioning profile does not match the app ID you provided to the Intel XDK to upload with your application during the build phase.
Code Sign error: Provisioning profile does not match bundle identifier: The provisioning profile specified in your build settings (MyBuildSettings) has an AppID of my.app.id which does not match your bundle identifier my.bundleidentifier.
CodeSign error: code signing is required for product type 'Application' in SDK 'iOS 8.0'

** BUILD FAILED **

The following build commands failed:
    Check dependencies
(1 failure)
Error code 65 for command: xcodebuild with args: -xcconfig,...
The message above translates into "the bundle ID you entered in the project settings of the XDK does not match the bundle ID (app ID) that you created on Apples developer portal and then used to create a provisioning profile."
iOS build error?
If your iOS build is failing with Error code 65 with Xcodebuild in the error log, most likely there are issues with certificate and provisioning profile. Sometimes Xcode gives specific errors as “Provisioning profile does not match bundle identifier” and other times something like "Code Sign error: No codesigning identities found: No code signing identities". The root of the issues come from not providing the correct certificate (P12 file) and/or provisioning profile or mismatch between P12 and provisioning profile. You have to make sure your P12 and provisioning profile are correct. The provisioning profile has to be generated using the certificate you used to create the P12 file. Also, your app ID you provide in XDK build settings has to match the app ID created on the Apple Developer portal and the same App ID has to be used when creating a provisioning profile.
Please follow these steps to generate the P12 file.
- Create a .csr file from Intel XDK (do not close the dialog box to upload .cer file)
- Click on the link Apple Developer Portal from the dialog box (do not close the dialog box in XDK)
- Upload .csr on Apple Developer Portal
- Generate certificate on Apple developer portal
- Download .cer file from the Developer portal
- Come back to XDK dialog box where you left off from step 1, press Next. Select .cer file that you got from step 5 and generate .P12 file
- Create an appID on Apple Developer Portal
- Generate a Provisioning Profile on Apple Developer Portal using the certificate you generated in step 4 and appID created in step 7
- Provide the same appID (step 7), P12 (step 6) and Provisioning profile (step 8) in Intel XDK Build Settings
Few things to check before you build:
- Make sure your certificate has not expired
- The appID you created on Apple developer portal matches with the appID you provided in the XDK build settings
- You are using provisioning profile that is associated with the certificate you are using to build the app
- Apple allows only 3 active certificate, if you need to create a new one, revoke one of the older certificate and create a new one.
This App Certificate Management video shows how to create a P12 and provisioning profile , the P12 creation part is at 16:45 min. Please follow the process for creating a P12 and generating Provisioning profile as shown in the video. Or follow this Certificate Management document.
What are plugin variables used for? Why do I need to supply plugin variables?
Some plugins require details that are specific to your app or your developer account. For example, to authorize your app as an app that belongs to you, the developer, so services can be properly routed to the service provider. The precise reasons are dependent on the specific plugin and its function.
What happened to the Intel XDK "legacy" build options?
On December 14, 2015 the Intel XDK legacy build options were retired and are no longer available to build apps. The legacy build option is based on three year old technology that predates the current Cordova project. All Intel XDK development efforts for the past two years have been directed at building standard Apache Cordova apps.
Many of the
intel.xdk legacy APIs that were supported by the legacy build options have been migrated to standard Apache Cordova plugins and published as open source plugins. The API details for these plugins are available in the
README.md files in the respective 01.org GitHub repos. Additional details regarding the new Cordova implementations of the
intel.xdk legacy APIs are available in the doc page titled Intel XDK Legacy APIs.
Standard Cordova builds do not require the use of the "intelxdk.js" and "xhr.js" phantom scripts. Only the "cordova.js" phantom script is required to successfully build Cordova apps. If you have been including "intelxdk.js" and "xhr.js" in your Cordova builds they have been quietly ignored. You should remove references to these files from your "index.html" file; leaving them in will do no harm, it simply results in a warning that the respective script file cannot be found at runtime.
The Emulate tab will continue to support some legacy intel.xdk APIs that are NOT supported in the Cordova builds (only those intel.xdk APIs that are supported by the open source plugins are available to a Cordova built app, and only if you have included the respective intel.xdk plugins). This Emulate tab discrepancy will be addressed in a future release of the Intel XDK.
More information can be found in this forum post >.
Which build files do I submit to the Windows Store and which do I use for testing my app on a device?
There are two things you can do with the build files generated by the Intel XDK Windows build options: side-load your app onto a real device (for testing) or publish your app in the Windows Store (for distribution). Microsoft has changed the files you use for these purposes with each release of a new platform. As of December, 2015, the packages you might see in a build, and their uses, are:
- appx works best for side-loading, and can also be used to publish your app.
- appxupload is preferred for publishing your app, it will not work for side-loading.
- appxbundle will work for both publishing and side-loading, but is not preferred.
- xap is for legacy Windows Phone; works for both publishing and side-loading.
In essence: XAP (WP7) was superseded by APPXBUNDLE (Win8 and WP8.0), which was superseded by APPX (Win8/WP8.1/UAP), which has been supplemented with APPXUPLOAD. APPX and APPXUPLOAD are the preferred formats. For more information regarding these file formats, see Upload app packages on the Microsoft developer site.
Side-loading a Windows Phone app onto a real device, over USB, requires a Windows 8+ development system (see Side-Loading Windows* Phone Apps for complete instructions). If you do not have a physical Windows development machine you can use a virtual Windows machine or use the Window Store Beta testing and targeted distribution technique to get your app onto real test devices.
Side-loading a Windows tablet app onto a Windows 8 or Windows 10 laptop or tablet is simpler. Extract the contents of the ZIP file that you downloaded from the Intel XDK build system, open the "*_Test" folder inside the extracted folder, and run the PowerShell script (ps1 file) contained within that folder on the test machine (the machine that will run your app). The ps1 script file may need to request a "developer certificate" from Microsoft before it will install your test app onto your Windows test system, so your test machine may require a network connection to successfully side-load your Windows app.
The side-loading process may not over-write an existing side-loaded app with the same ID. To be sure your test app properly side-loads, it is best to uninstall the old version of your app before side-loading a new version on your test system.
How do I implement local storage or SQL in my app?
See this summary of local storage options for Cordova apps written by Josh Morony, A Summary of Local Storage Options for PhoneGap Applications.
How do I prevent my app from auto-completing passwords?
Use the Ionic Keyboard plugin and set the spellcheck attribute to false.
Why does my PHP script not run in my Intel XDK Cordova app?
Your XDK app is not a page on a web server; you cannot use dynamic web server techniques because there is no web server associated with your app to which you can pass off PHP scripts and similar actions. When you build an Intel XDK app you are building a standalone Cordova client web app, not a dynamic server web app. You need to create a RESTful API on your server that you can then call from your client (the Intel XDK Cordova app) and pass and return data between the client and server through that RESTful API (usually in the form of a JSON payload).
Please see this StackOverflow post and this article by Ray Camden, a longtime developer of the Cordova development environment and Cordova apps, for some useful background.
Following is a lightly edited recommendation from an Intel XDK user:
I came from php+mysql web development. My first attempt at an Intel XDK Cordova app was to create a set of php files to query the database and give me the JSON. It was a simple job, but totally insecure.
Then I found dreamfactory.com, an open source software that automatically creates the REST API functions from several databases, SQL and NoSQL. I use it a lot. You can start with a free account to develop and test and then install it in your server. Another possibility is phprestsql.sourceforge.net, this is a library that does what I tried to develop by myself. I did not try it, but perhaps it will help you.
And finally, I'm using PouchDB and CouchDB "A database for the web." It is not SQL, but is very useful and easy if you need to develop a mobile app with only a few tables. It will also work with a lot of tables, but for a simple database it is an easy place to start.
I strongly recommend that you start to learn these new ways to interact with databases, you will need to invest some time but is the way to go. Do not try to use MySQL and PHP the old fashioned way, you can get it work but at some point you may get stuck.
Why doesn’t my Cocos2D game work on iOS?
This is an issue with Cocos2D and is not a reflection of our build system. As an interim solution, we have modified the CCBoot.js file for compatibility with iOS and App Preview. You can view an example of this modification in this CCBoot.js file from the Cocos2d-js 3.1 Scene GUI sample. The update has been applied to all cocos2D templates and samples that ship with Intel XDK.
The fix involves two lines changes (for generic cocos2D fix) and one additional line (for it to work on App Preview on iOS devices):
Generic cocos2D fix -
1. Inside the loadTxt function, xhr.onload should be defined as
xhr.onload = function () { if(xhr.readyState == 4) xhr.responseText != "" ? cb(null, xhr.responseText) : cb(errInfo); };
instead of
xhr.onload = function () { if(xhr.readyState == 4) xhr.status == 200 ? cb(null, xhr.responseText) : cb(errInfo); };
2. The condition inside _loadTxtSync function should be changed to
if (!xhr.readyState == 4 || (xhr.status != 200 || xhr.responseText != "")) {
instead of
if (!xhr.readyState == 4 || xhr.status != 200) {
App Preview fix -
Add this line inside of loadTxtSync after _xhr.open:
xhr.setRequestHeader("iap_isSyncXHR", "true");
How do I change the alias of my Intel XDK Android keystore certificate?
You cannot change the alias name of your Android keystore within the Intel XDK, but you can download the existing keystore, change the alias on that keystore and upload a new copy of the same keystore with a new alias.
Use the following procedure:
Download the converted legacy keystore from the Intel XDK (the one with the bad alias).
Locate the keytool app on your system (this assumes that you have a Java runtime installed on your system). On Windows, this is likely to be located at
%ProgramFiles%\Java\jre8\bin(you might have to adjust the value of
jre8in the path to match the version of Java installed on your system). On Mac and Linux systems it is probably located in your path (in
/usr/bin).
Change the alias of the keystore using this command (see the
keytool -changealias -helpcommand for additional details):
keytool -changealias -alias "existing-alias" -destalias "new-alias" -keypass keypass -keystore /path/to/keystore -storepass storepass
Import this new keystore into the Intel XDK using the "Import Existing Keystore" option in the "Developer Certificates" section of the "person icon" located in the upper right corner of the Intel XDK.
What causes "The connection to the server was unsuccessful. ()" error?
See this forum thread for some help with this issue. This error is most likely due to errors retrieving assets over the network or long delays associated with retrieving those assets.
How do I manually sign my Android or Crosswalk APK file with the Intel XDK?
To sign an app manually, you must build your app by "deselecting" the "Signed" box in the Build Settings section of the Android tab on the Projects tab:
Follow these Android developer instructions to manually sign your app. The instructions assume you have Java installed on your system (for the jarsigner and keytool utilities). You may have to locate and install the zipalign tool separately (it is not part of Java) or download and install Android Studio.
These two sections of the Android developer Signing Your Applications article are also worth reading:
Why should I avoid using the additions.xml file? Why should I use the Plugin Management Tool in the Intel XDK?
Intel XDK (2496 and up) now includes:
It can now manage plugins from all sources. Popular plugins have been added to the the Featured plugins list. Third party plugins can be added from the Cordova Plugin Registry, Git Repo and your file system.
Consistency: Unlike previous versions of the Intel XDK, plugins you add are now stored as a part of your project on your development system after they are retrieved by the Intel XDK and copied to your plugins directory. These plugin files are delivered, along with your source code files, to the Intel XDK cloud-based build server. This change ensures greater consistency between builds, because you always build with the plugin version that was retrieved by the Intel XDK into your project. It also provides better documentation of the components that make up your Cordova app, because the plugins are now part of your project directory. This is also more consistent with the way a standard Cordova CLI project works.
Convenience: In the past, the only way to add a third party plugin that required parameters was to include it in the intelxdk.config.additions.xml file. This plugin would then be added to your project by the build system. This is no longer recommended. With the new Plugin Management Tool, it automatically parses the plugin.xml file and prompts to add any plugin variables from within the XDK.
When a plugin is added via the Plugin Management Tool, a plugin entry is added to the project file and the plugin source is downloaded to the plugins directory making a more stable project. After a build, the build system automatically generates config xml files in your project directory that includes a complete summary of plugins and variable values.
Correctness of Debug Module: Intel XDK now provides remote on-device debugging for projects with third party plugins by building a custom debug module from your project plugins directory. It does not write or read from the intelxdk.config.additions.xml and the only time this file is used is during a build. This means the debug module is not aware of your plugin added via the intelxdk.config.additions.xml file and so adding plugins via intelxdk.config.additions.xml file should be avoided. Here is a useful article for understanding Intel XDK Build Files.
Editing Plugin Sources: There are a few cases where you may want to modify plugin code to fix a bug in a plugin, or add console.log messages to a plugin's sources to help debug your application's interaction with the plugin. To accomplish these goals you can edit the plugin sources in the plugins directory. Your modifications will be uploaded along with your app sources when you build your app using the Intel XDK build server and when a custom debug module is created by the Debug tab.
How do I fix this "unknown error: cannot find plugin.xml" when I try to remove or change a plugin?
Removing a plugin from your project generates the following error:
Sometimes you may see this error:
This is not a common problem, but if it does happen it means a file in your plugin directory is probably corrupt (usually one of the json files found inside the plugins folder at the root of your project folder).
The simplest fix is to:
- make a list of ALL of your plugins (esp. the plugin ID and version number, see image below)
- exit the Intel XDK
- delete the entire plugins directory inside your project
- restart the Intel XDK
The XDK should detect that all of your plugins are missing and attempt to reinstall them. If it does not automatically re-install all or some of your plugins, then reinstall them manually from the list you saved in step one (see the image below for the important data that documents your plugins).
NOTE: if you re-install your plugins manually, you can use the third-party plugin add feature of the plugin management system to specify the plugin id to get your plugins from the Cordova plugin registry. If you leave the version number blank the latest version of the plugin that is available in the registry will be retrieved by the Intel XDK.
Why do I get a "build failed: the plugin contains gradle scripts" error message?
You will see this error message in your Android build log summary whenever you include a Cordova plugin that includes a gradle script in your project. Gradle scripts add extra Android build instructions that are needed by the plugin.
The current Intel XDK build system does not allow the use of plugins that contain gradle scripts because they present a security risk to the build system and your Intel XDK account. An unscrupulous user could use a gradle-enabled plugin to do harmful things with the build server. We are working on a build system that will insure the necessary level of security to allow for gradle scripts in plugins, but until that time, we cannot support those plugins that include gradle scripts.
The error message in your build summary log will look like the following:
In some cases the plugin gradle script can be removed, but only if you manually modify the plugin to implement whatever the gradle script was doing automatically. In some cases this can be done easily (for example, the gradle script may be building a JAR library file for the plugin), but sometimes the plugin is not easily modified to remove the need for the gradle script. Exactly what needs to be done to the plugin depends on the plugin and the gradle script.
You can find out more about Cordova plugins and gradle scripts by reading this section of the Cordova documentation. In essence, if a Cordova plugin includes a
build-extras.gradle file in the plugin's root folder, or if it contains one or more lines similar to the following, inside the
plugin.xml file:
<framework src="some.gradle" custom="true" type="gradleReference" />
it means that the plugin contains gradle scripts and will be rejected by the Intel XDK build system.
How does one remove gradle dependencies for plugins that use Google Play Services (esp. push plugins)?
Our Android (and Crosswalk) CLI 5.1.1 and CLI 5.4.1 build systems include a fix for an issue in the standard Cordova build system that allows some Cordova plugins to be used with the Intel XDK build system without their included gradle script!
This fix only works with those Cordova plugins that include a gradle script for one and only one purpose: to set the value of
applicationID in the Android build project files (such a gradle script copies the value of the App ID from your project's Build Settings, on the Projects tab, to this special project build variable).
Using the
phonegap-plugin-push as an example, this Cordova plugin contains a gradle script named push.gradle, that has been added to the plugin and looks like this:
import java.util.regex.Pattern def doExtractStringFromManifest(name) { def manifestFile = file(android.sourceSets.main.manifest.srcFile) def pattern = Pattern.compile(name + "=\"(.*?)\"") def matcher = pattern.matcher(manifestFile.getText()) matcher.find() return matcher.group(1) } android { sourceSets { main { manifest.srcFile 'AndroidManifest.xml' } } defaultConfig { applicationId = doExtractStringFromManifest("package") } }
All this gradle script is doing is inserting your app's "package ID" (the "App ID" in your app's Build Settings) into a variable called
applicationID for use by the build system. It is needed, in this example, by the Google Play Services library to insure that calls through the Google Play Services API can be matched to your app. Without the proper App ID the Google Play Services library cannot distinguish between multiple apps on an end user's device that are using the Google Play Services library, for example.
The
phonegap-plugin-push is being used as an example for this article. Other Cordova plugins exist that can also be used by applying the same technique (e.g., the
pushwoosh-phonegap-plugin will also work using this technique). It is important that you first determine that only one gradle script is being used by the plugin of interest and that this one gradle script is used for only one purpose: to set the
applicationID variable.
How does this help you and what do you do?
To use a plugin with the Intel XDK build system that includes a single gradle script designed to set the
applicationID variable:
Download a ZIP of the plugin version you want to use from that plugin's git repo.
IMPORTANT: be sure to download a released version of the plugin, the "head" of the git repo may be "under construction" -- some plugin authors make it easy to identify a specific version, some do not, be aware and choose carefully when you clone a git repo!
Unzip that plugin onto your local hard drive.
Remove the <framework> line that references the gradle script from the
plugin.xmlfile.
Add the modified plugin into your project as a "local" plugin (see the image below).
In this example, you will be prompted to define a variable that the plugin also needs. If you know that variable's name (it's called SENDER_ID for this plugin), you can add it using the "+" icon in the image above, and avoid the prompt. If the plugin add was successful, you'll find something like this in the Projects tab:
If you are curious, you can inspect the
AndroidManifest.xml file that is included inside your built APK file (you'll have to use a tool like
apktool to extract and reconstruct it from you APK file). You should see something like the following highlighted line, which should match your App ID, in this example, the App ID was
io.cordova.hellocordova:
If you see the following App ID, it means something went wrong. This is the default App ID for the Google Play Services library that will cause collisions on end-user devices when multiple apps that are using Google Play Services use this same default App ID:
There is no Entitlements.plist file, how do I add Universal Links to my iOS app?
The Intel XDK project does not provide access to an Entitlements.plist file. If you are using Cordova CLI locally you would have the ability to add such a file into the CLI platform build directories located in the CLI project folder. Because the Intel XDK build system is cloud-based, your Intel XDK project folders do not include these build directories.
A workaround has been identified by an Intel XDK customer (Keith T.) and is detailed in this forum post.
Why do I get a "signed with different certificate" error when I update my Android app in the Google Play Store?
If you submitted an app to the Google Play Store using a version of the Intel XDK prior to version 3088 (prior to March of 2016), you need to use your "converted legacy" certificate when you build your app in order for the Google Play Store to accept an update to your app. The error message you receive will look something like the following:
When using version 3088 (or later) of the Intel XDK, you are given the option to convert your existing Android certificate, that was automatically created for your Android builds with an older version of the Intel XDK, into a certificate for use with the new version of the Intel XDK. This conversion process is a one-time event. After you've successfully converted your "legacy Android certificate" you will never have to do this again.
Please see the following links for more details.
-
-
How do I add [image, audio, etc.] resources to the platform section of my Cordova project with the Intel XDK?
See this forum thread for a specific example, which is summarized below.
If you are using a Cordova plugin that suggests that you "add a file to the resource directory" or "make a modification to the manifest file or plist file" you may need to add a small custom plugin to your application. This is because the Cordova project that builds and packages your app is located in the the Intel XDK cloud-based build system. Your development system contains only a partial, prototype Cordova project. The real Cordova project is created on demand, when you build your application with the Intel XDK build system. Parts of your local prototype Cordova project are sent to the cloud to build your application: your source files (normally located in the "www" folder), your plugins folder, your build configuration files, your provisioning files, and your icon and splash screen files (located in the package-assets folder). Any other folders and files located in your project folder are strictly used for local simulation and testing tasks and are not used by the cloud-based build system.
To modify a manifest or plist file, see this FAQ. To create a local plugin that you can use to add resources to your Cordova cloud-based project, see the following instructions and the forum post mentioned at the beginning of this FAQ.
Create a folder to hold your special plugin, either in the root of your project or outside of your project. Do NOT create this folder inside of the plugins folder of your project, that is a destination folder, not a source folder.
Create a plugin.xml file in the root of your special plugin. This file will contain the instructions that are needed to add the resources needed for the build system.
Add the resources to the appropriate location in your new plugin folder.
See these Cordova plugin.xml instructions may also be helpful. | https://software.intel.com/ru-ru/xdk/faqs/cordova | CC-MAIN-2016-44 | refinedweb | 10,052 | 63.59 |
Proposed features/Fire Hydrant Extensions
Contents
- 1 Rationale
- 2 Proposal
- 3 New Tagging
- 4 Values to be replaced
- 5 Examples
- 6 Voting
Rationale
Projects are in progress which aim to use OpenStreetMap to support First Responders. These tags are intended to support those projects by providing details about hydrants above and beyond those captured by the existing tagging scheme. Some of this data may be discerned from a physical survey of individual hydrants; other data may be available from public databases. The most likely scenario for First Responder use of OSM data is a "mutual assistance" call, where a responder travels outside their normal area of responsibility and therefore knows less about local conditions than usual.
Proposal
This proposal describes several subtags in support of fire_hydrant. These tags capture details about fire hydrants that are of considerable value to First Responders. The proposal additionally deprecates one value of fire_hydrant:type and adds two new values. Some new variations on fire_hydrant are included, as are new variations of colour. The meaning of colour as applied to fire hydrants is formally defined, and additional colour subtags are proposed to cover the common case of hydrants painted in more than one colour.
New Tagging
Here is the new proposed tagging for a fire hydrant. For numeric values with units, see the guidelines: Map Features/Units
American Water Works Association colour scheme
Wet hydrants are generally fed by water mains in urban and suburban areas. The dimensions of the mains vary greatly, and so the flow capacity of the hydrants can vary substantially. The location of hydrants is the first requirement in a mutual assistance call; the second most important piece of information is the flow capacity. Some jurisdictions in the United States have adopted a colour scheme specified by the American Water Works Association. In these jurisdictions, flow capacity may be determined simply by examining the colour of the bonnet and caps of a fire hydrant. In other cases, finding out the capacity may be more difficult.
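The bonnet-colour lookup above can be sketched in code. A minimal example, assuming the commonly cited AWWA/NFPA 291 class boundaries (in US gallons per minute); the dictionary and function names are illustrative, and local authorities may deviate from this scheme:

```python
# Commonly cited AWWA/NFPA 291 bonnet-colour flow classes.
# Thresholds are in US gallons per minute; verify against the
# local water authority before relying on them operationally.
AWWA_CLASSES = {
    "light_blue": ("AA", "1500 gpm or more"),
    "green": ("A", "1000-1499 gpm"),
    "orange": ("B", "500-999 gpm"),
    "red": ("C", "below 500 gpm"),
}

def flow_class(bonnet_colour):
    """Return (class, flow range) for a bonnet colour, or None if unknown."""
    return AWWA_CLASSES.get(bonnet_colour)
```

A responder tool could use such a lookup to annotate hydrants whose bonnet colour has been surveyed but whose flow rate has not been measured directly.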
Adding pillar:type=* for pillar type hydrants
To better describe pillar hydrants, you can add pillar:type=* in addition to fire_hydrant=pillar.
Values to be replaced
Some widely used tags should be carefully replaced as suggested below.
It is recommended to check the values of such tags and not to mass edit.
Migration from fire_hydrant:position=* to location=*
The key location=* is already used in other contexts and indicates the same concept. Therefore current values of fire_hydrant:position=* should go in location=*.
A new value, location=tunnel should be used for hydrants inside a tunnel.
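The key migration above can be sketched as a transformation on a hydrant's tag dictionary. This is a hedged illustration only (the function name is made up here, and per the proposal such edits should be checked individually, not mass-applied):

```python
# Sketch of the proposed key migration for a single hydrant's tag dict.
# Moves fire_hydrant:position to location without overwriting an
# existing location value; returns a new dict, leaving input untouched.
def migrate_position(tags):
    tags = dict(tags)
    old = tags.pop("fire_hydrant:position", None)
    if old is not None and "location" not in tags:
        tags["location"] = old
    return tags
```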
Migration from in_service=yes/no to disused:emergency=fire_hydrant
The prefix disused: is already used in other contexts and indicates the same concept. Therefore there is no need for the tag in_service=yes/no.
Hydrants are supposed to be in service by default.
If not, use the tag disused:emergency=fire_hydrant.
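The lifecycle-prefix replacement can likewise be sketched as a tag transformation. A minimal illustration, with an invented function name; it drops a redundant in_service=yes (in service is the default) and rewrites in_service=no as the disused: prefix:

```python
# Sketch: replace in_service=yes/no with the disused: lifecycle prefix.
# in_service=yes is redundant (hydrants are in service by default) and
# is simply dropped; in_service=no moves the feature key into disused:.
def migrate_in_service(tags):
    tags = dict(tags)
    in_service = tags.pop("in_service", None)
    if in_service == "no" and tags.get("emergency") == "fire_hydrant":
        del tags["emergency"]
        tags["disused:emergency"] = "fire_hydrant"
    return tags
```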
Revisions to fire_hydrant:type=* and migration to fire_hydrant=*
fire_hydrant:type=* in its current form improperly conflates two concepts: the water supply and the physical delivery mechanism.
fire_hydrant:type=pond will be deprecated because it is replaced by water_source=pond.
For the sake of simplicity, fire_hydrant:type=* will become just fire_hydrant=*.
A new value, fire_hydrant=pipe, will be added.
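The split of the old key into the two concepts can be sketched as follows. The set of supply values and the function name are illustrative assumptions, not part of the proposal text:

```python
# Sketch of splitting the old fire_hydrant:type key into the two
# concepts the proposal separates: water_source (the supply) and
# fire_hydrant (the physical delivery mechanism).
SUPPLY_VALUES = {"pond"}  # values describing the water supply (illustrative)

def migrate_type(tags):
    tags = dict(tags)
    value = tags.pop("fire_hydrant:type", None)
    if value is None:
        return tags
    if value in SUPPLY_VALUES:
        tags["water_source"] = value
    else:
        tags["fire_hydrant"] = value  # e.g. pillar, underground, wall, pipe
    return tags
```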
Examples
Surface fire hydrants
Voting
Voting on this proposal has been closed.
It was rejected with 28 votes for and 21 votes against.
A post-vote revision of the proposal is here: Fire Hydrant Extensions (part 2)
I approve this proposal. --Andrea Lattmann (talk) 10:26, 11 October 2017 (UTC)
I approve this proposal. --Viking81 (talk) 09:58, 1 October 2017 (UTC)
I oppose this proposal. There are already millions of tags within the fire_hydrant:* name space. These have been used by hundreds of mappers. What we need is not a completely new scheme for hydrant tagging, but a well-written documentation of the existing tagging. Despite having more general keys not constrained to fire_hydrants, I can not see any improvement introduced with this proposal. Additionally, this proposal covers only a small subset of these existing tags - many will remain in the fire_hydrant name space causing a lot of confusion. --Mueschel (talk) 10:14, 1 October 2017 (UTC)
- This is not a completely new scheme. We discussed for months to refine the existing scheme. This proposal covers the tags documented in the original fire hydrant page: Fanfouer already replied to you on the discussion page. Further improvements can be discussed in the second part of the proposal. We split the proposal because it was becoming too big to handle all in one. --Viking81 (talk) 20:05, 2 October 2017 (UTC)
I approve this proposal. Moving out of the fire_hydrant: namespace is a good step to make tags more usable and readable. This proposal is intended to stop the confusion of type/sources in the fire_hydrant=* key. The new key water_source=* is provided for this purpose too. Fanfouer (talk) 11:01, 1 October 2017 (UTC)
I oppose this proposal. I appreciate the proposed replacement of in_use=no by disused:emergency=*. Unfortunately there are some unnecessary tag changes which are a dealbreaker for my personal decision.
- This proposal tries to deprecate fire_hydrant:position=* in favour of location=* which is much in use. I don't see any benefit from changing the key. You can already use the generic tunnel=yes to indicate that a hydrant is in a tunnel.
- The proposal does not give reasons why it wants to deprecate fire_hydrant:pressure=* in favour of pressure=* and fire_hydrant:diameter=* in favour of diameter=*.
- The suggested changes pollute the main namespace and will lead to a location=* which is used in different fields of interest.
- Such changes of keys (1) require to change all software which works with hydrant data from OSM and (2) invite mappers to mechanical or pseudo-mechanical edits although your proposal asks to check the values of such tags and to not mass edit.
- I appreciate the proposed replacement of in_use=no by disued:emergency=*. --Nakaner (talk) 20:45, 1 October 2017 (UTC)
- We discussed for months on discussion page and on tagging mailing list: why didn't you partecipate? You would have found many responses to your doubts.
- Since some tags already exist without fire_hydrant: namespace but indicates the same concept, we have choosen the simplest ones (location=*, diameter=*). tunnel=yes according to the wiki applies to ways, not nodes. pressure=* is a tag that can be used also for other features, there is no reason to use it only on hydrant in the form fire_hydrant:pressure=*
- A change in software is quite easy, once the tags are documented.
- --Viking81 (talk) 20:46, 2 October 2017 (UTC)
- See my user diary entry for my response and even more. --Nakaner (talk) 22:19, 12 October 2017 (UTC)
I approve this proposal. --wambacher (talk) 23:53, 1 October 2017 (UTC)
I approve this proposal.Crochet.david (talk) 05:35, 2 October 2017 (UTC)
I oppose this proposal. see comment Nakaner and also Mueschel User 5359 (talk) 05:42, 2 October 2017 (UTC)
I oppose this proposal. The proposal is changing widely used tags (like fire_hydrant:position) without benefit --chris66 (talk)
- We discussed for months on discussion page and on tagging mailing list: why didn't you partecipate? The benefits are more universal and readable tags, some new tags and at the end of the work a detailed wiki page. People who discussed for months reached an agreement on this. --Viking81 (talk) 21:03, 2 October 2017 (UTC)
I oppose this proposal. Changing fire_hydrant:type=* to fire_hydrant=* when the usage numbers are 630000:107 without real benefit is not a good idea in my eyes. See reasons from Nakaner and Mueschel too. --Klumbumbus (talk) 08:58, 2 October 2017 (UTC)
- We discussed for months on discussion page and on tagging mailing list: why didn't you partecipate? The benefits are more universal and readable tags, some new tags and at the end of the work a detailed wiki page. --Viking81 (talk) 21:03, 2 October 2017 (UTC)
- I didn't notice the previous discussions. --Klumbumbus (talk) 15:09, 6 October 2017 (UTC)
I oppose this proposal. Sorry for coming up with this late, but: I also dislike the movement into global namespace, which may be even wrong at some points, e.g. the diameter of the hydrant is usually a type dependent fixed value for pillars, and may be much smaller than the underlying pipe (at least for Germany). Maybe this should be split into the additions and in_use replacement, which should easily get the required vote, and the remaining part which obviously needs further discussion. --Dakon (talk) 09:36, 2 October 2017 (UTC)
- fire_hydrant: namespace is useless for tags that already exist and/or can be applied to other features. Then for consistency, after months of discussions, we have choosen to remove completely fire_hydrant: namespace from all documented tags.
- The nominal diameter of an hydrant is printed on it, and indicates the diameter of its flanged connection to the underlying pipe.
- --Viking81 (talk) 21:30, 2 October 2017 (UTC)
I oppose this proposal. First: Hydrants and suction points are different things and should be given different emergency-Tags. It doesn't matter whether the suction point has a connecting tube or not. Second: See comment from mueschel and Nakaner --streckenkundler (talk) 10:43, 2 October 2017 (UTC)
- We discussed for months on discussion page and on tagging mailing list: why didn't you partecipate? For months we tried to have two different tags for hydrants and suction points, but we concluded that it is not possible because in some countries hydrants and "suction hydrants" are visually indistinguishable. So the shared solution is to tag as hydrant and then distinguish with pressure when more detailed data is available. Suction points are, as in the current definition on wiki page, preferred PLACES to take water, they are not the DEVICE to which connect the fire engine.--Viking81 (talk) 21:30, 2 October 2017 (UTC)
- Forget this definition, this is not a definition. That does not correspond to reality! Functionally, suction points and hydrants are completely different. A pump is always required for a suction point. Whether the suction point has a connecting pipe or not is completely secondary. Suction points can be rivers, lakes or artificial ponds. Some artificial ponds have a connecting pipe, but not always! A suction point can also be a simple ground water extraction, then it is a simple connection pipe... You don't need a pump for fire hydrants. Hydrants and suction points have to be differentiated in the main key!! --streckenkundler (talk) 23:05, 2 October 2017 (UTC)
- I was of your opinion too. If you had followed the discussion on tagging mailing list, you would understand why it hasn't been possible to do that. The solution found is to tag suction points the palces where you can take water. If there is also a a connecting pipe, tag it as hydrant with pressure=suction. This is a good compromise according to many, including me. --Viking81 (talk) 22:34, 2 October 2017 (UTC)
- For this topic, the discussion between RFC and vote was far too short. I particularly missed the idea of the proposal in other active communities. For me it is important: 1. Clear separation of basic and already existing properties as well as new properties to be added. 2. No change of keys (this is very important: change of Keys brings chaos!!!) 3. Clear separation of hydrants and suction points in the base key (first key level!!) (whether they have a connecting pipe or not) --streckenkundler (talk) 20:54, 12 October 2017 (UTC)
I approve this proposal. --Miche101 (talk) 11:02, 2 October 2017 (UTC)
I approve this proposal. --Martin minheim (talk) 11:04, 2 October 2017 (UTC)
I approve this proposal. --JB (talk) 11:52, 2 October 2017 (UTC) Although I really don't like the diameter key, that does not represent the diameter of the object, but of the underground pipe.
- Yes, but the same was with the tag fire_hydrant:diameter=*. --Viking81 (talk) 22:48, 2 October 2017 (UTC)
I oppose this proposal. see comment Nakaner and also Mueschel --ToniE (talk) 16:07, 2 October 2017 (UTC)
I approve this proposal. these improvements are useful, the number of objects concerned must not be a brake on improvement Marc CH (talk) 17:23, 2 October 2017 (UTC)
I oppose this proposal. The tagging scheme is too complex and in parts only applicable by specialists (with insider knowledge). Please simplify the scheme and divide it into a basic and an advanced tagging section. Furthermore is a clear (and intuitive) tag for suction points missing.--Toc-rox (talk) 06:47, 3 October 2017 (UTC)
- Yes, in the final wiki page we can point out basic and advanced tags. You don't need to use all of them if you do not know the data. emergency=fire_hydrant + fire_hydrant=pillar is enough to tag an hydrant for a casual mapper. Then firefighters as me can add other tags when known. The advanced tags are very useful for firefighters. --Viking81 (talk) 09:59, 5 October 2017 (UTC)
- fire hydrant mapping is generally a specialist field, I don't take issue with the tags being in part made for specialists (if they describe what they want to tag). You can still apply less detail if you don't know or are not interested. IMHO this is not an argument to reject the proposal. --Dieterdreist (talk) 11:44, 5 October 2017 (UTC)
- "is too complex and in parts only applicable by specialists (with insider knowledge)" There are plenty of things in OSM which are for specialists, like tagging of sea marks, lighthouses etc, and railing signals. Let the fire fighters have their details. Rorym (talk) 08:20, 13 October 2017 (UTC)
I approve this proposal. --MoritzM (talk) 11:46, 5 October 2017 (UTC)
I approve this proposal. --Władysław Komorek (talk) 12:53, 5 October 2017 (UTC)
I approve this proposal. --Geri-oc (talk) 15:00, 5 October 2017 (UTC)
I oppose this proposal. see comments above! --Geodreieck4711 (talk) 15:55, 5 October 2017 (UTC)
I approve this proposal. I'm not sure if I'm allowed to vote, but the proposal makes much sense to me. --SelfishSeahorse (talk) 17:47, 5 October 2017 (UTC)
I approve this proposal. An excellent proposal. Although it makes it possible to map hydrants in greater detail than I need, I'm using portions of the new tagging structure already. AlaskaDave (talk) 01:51, 6 October 2017 (UTC)
I oppose this proposal. because fire_hydrant:class=* is a americacentric tag. you should make it fire_hydrant:AWWA_class=*, so people that from Not-America could do something similar with their local classification. Also, this proposal does not really convince me that this make over is a good thing. I tent to agree with the reasons from Nakaner and Mueschel on this subject --De vries (talk) 09:24, 6 October 2017 (UTC)
- Sure we can develop your idea to solve the americancentric problem. fire_hydrant:class=* was proposed years ago, and since no one opposed till now, it remained unchanged. You could have joined the discussions during RFC period. Anyway we can still change it, if there is general consensus. But if you and many other oppose this proposal it will be stopped together with all other improvements and we will go nowhere. Please be cooperative, not obstructionist. --Viking81 (talk) 21:08, 13 October 2017 (UTC)
I approve this proposal. It is an improvement. Philip.jacobs (talk) 13:05, 6 October 2017 (UTC)
I oppose this proposal. overcomplicates and fractures the namespace resulting in confusion --Brianboru (talk) 14:34, 6 October 2017 (UTC)
I oppose this proposal. Replacing well-used tags is a bad idea, as time as shown multiple times. This is particularly true in this case where I don't see a valid reason for replacing these values. In my opinion namespaces are a good thing, we should use them even more. --scai (talk) 16:10, 6 October 2017 (UTC)
I approve this proposal. But as said by De vries the fire_hydrant:class=* is too americacentric --R2d (talk) 18:52, 7 October 2017 (UTC)
I oppose this proposal. --Soldier Boy (talk) 20:55, 7 October 2017 (UTC)
I approve this proposal. --Vincent 95 (talk) 09:03, 9 October 2017 (UTC)
I approve this proposal. --Exe (talk) 15:02, 9 October 2017 (UTC)
I approve this proposal. it's very useful --SMStuff (talk) 17:49, 9 October 2017 (UTC)
I oppose this proposal. Replacing well-used tags is unnecessary and it seems to be very US-centric --Zartbitter (talk) 13:03, 10 October 2017 (UTC)
- diameter=* already exists and it is used for pipelines. pressure=* already exists and it is used for pipelines, location=* already exists and it's used in other contexts. For us-centric, we can solve, BUT we must change one of the "well-used tag" fire_hydrant:class=*--Viking81 (talk) 20:46, 13 October 2017 (UTC)
I approve this proposal. --Zonta72 (talk) 20:32, 10 October 2017 (UTC)
I approve this proposal. Good luck!--niubii (talk) 20:37, 10 October 2017 (UTC)
I approve this proposal. --damjang (talk) 21:54, 10 October 2017 (UTC)
I approve this proposal. --NonnEmilia (talk) 05:30, 11 October 2017 (UTC)
I approve this proposal. --Jrachi (talk) 9:48, 11 October 2017 (UTC)
I approve this proposal. --Bubix (talk) 11:42, 12 October 2017 (UTC)
I approve this proposal. --Nospam2005 (talk) 19:06, 12 October 2017 (UTC)
I oppose this proposal. Replacing well-used tags is unnecessary, keep using of namespace is a good thing. --Roman H (talk) 22:12, 12 October 2017 (UTC)
I oppose this proposal. There is no reason to replace fire_hydrant namespace tags by generic tags. Using generic tags could trigger side effects. --Christopher (talk) 06:01, 13 October 2017 (UTC)
- All the existing tools for fire_hydrant needs an update if the tags will changes. The is a lot of work for all developers.--Christopher (talk) 06:06, 13 October 2017 (UTC)
I oppose this proposal., because of 2 issues which can be easily corrected. The fire_hydrant:class=* seems too american, why not use the
US:prefix on the value, e.g. fire_hydrant:class=US:AA, making the value clearer and (I presume) easier to use on a global level. Country prefixes like this are used in other places. The other problem is more serious, and easily fixed. "Gallon" is an ambiguous unit in English! A US and UK gallon are very different. 1 gallon is 1.2 US gallons.? I suggest using
usgalpmor
usgal/minor similar to make this clear and unambiguous. Rorym (talk) 08:14, 13 October 2017 (UTC)
- Sure, we can develop your idea. Simply we didn't think to it. But if you and many other vote against this proposal, it will be stopped together with all other improvements. Please be cooperative, not obstructionist. --Viking81 (talk) 20:32, 13 October 2017 (UTC) | https://wiki.openstreetmap.org/wiki/Proposed_features/Fire_Hydrant_Extensions | CC-MAIN-2020-24 | refinedweb | 3,157 | 64.41 |
Top: Multithreading: trigger
#include <pasync.h> class trigger { trigger(bool autoreset, bool initstate); void wait(); void post(); void signal(); // alias for post() void reset(); }
Trigger is a simple synchronization object typically used to notify one or more threads about some event. Trigger can be viewed as a simplified semaphore, which has only two states and does not count the number of wait's and post's. Multiple threads can wait for an event to occur; either one thread or all threads waiting on a trigger can be released as soon as some other thread signals the trigger object. Auto-reset triggers release only one thread each time post() is called, and manual-reset triggers release all waiting threads at once. Trigger mimics the Win32 Event object.
trigger::trigger(bool autoreset, bool initstate) creates a trigger object with the initial state initstate. The autoreset feature defines whether the trigger object will automatically reset its state back to non-signaled when post() is called.
void trigger::wait() waits until the state of the trigger object becomes signaled, or returns immediately if the object is in signaled state already.
void trigger::post() signals the trigger object. If this is an auto-reset trigger, only one thread will be released and the state of the object will be set to non-signaled. If this is a manual-reset trigger, the state of the object is set to signaled and all threads waiting on the object are being released. Subsequent calls to wait() from any number of concurrent threads will return immediately.
void trigger::signal() is an alias for post().
void trigger::reset() resets the state of the trigger object to non-signaled.
See also: thread, mutex, rwlock, semaphore, Examples | http://www.melikyan.com/ptypes/doc/async.trigger.html | crawl-001 | refinedweb | 285 | 62.88 |
C
Qt Quick Enterprise Controls Styles
The Qt Quick Enterprise Controls Styles module allows custom styling for Qt Quick Enterprise Controls.
The submodule requires Qt Quick 2.2.
Getting started
The QML types can be imported into your application using the following import statement in your
.qml file.
import QtQuick.Enterprise.Controls.Styles 1.3
Styles
Base Style
The Base Style is the default style used when none is specified. It is also used as a fallback when the specified style cannot be found.
The Base Style Tumbler.
Flat Style
The Flat Style is designed for touch devices. It was introduced in Qt Quick Enterprise Controls 1.3 and requires Qt 5.4.
The Flat Style Tumbler.
Selecting Styles
Qt Quick Enterprise Controls follow Qt Quick Controls' styling system. You can apply a different style to the controls by setting the QT_QUICK_CONTROLS_STYLE environment variable to the name of the style. For example, to use the Flat style, you can do the following:
QT_QUICK_CONTROLS_STYLE=Flat ./app
This can also be done in C++, using qputenv():
qputenv("QT_QUICK_CONTROLS_STYLE", "Flat");
Control Styles
Styling Tutorials
Related information
Available under certain Qt licenses.
Find out more. | https://doc.qt.io/QtQuickEnterpriseControls/qtquickenterprisecontrolsstyles-index.html | CC-MAIN-2019-13 | refinedweb | 191 | 59.19 |
Thinking about haskell functors in .net
I’ve been teaching myself haskell lately and came across an interesting language feature called functors. Functors are a way of describing a transformation when you have a boxed container. They have a generic signature of
('a -> 'b) -> f 'a -> f 'b
Where
f isn’t a “function”, it’s a type that contains the type of
'a.
The idea is you can write custom map functions for types that act as generic containers. Generic containers are things like lists, an option type, or other things that hold something. By itself a
list is nothing, it has to be a list OF something. Not to get sidetracked too much, but these kinds of boxes are called Monads.
Anyways, let’s do this in C# by assuming that we have a box type that holds something.
public class Box<T> { public T Data { get; set; } } var boxes = new List<Box<string>>(); IEnumerable<string> boxNames = boxes.Select(box => box.Data);
We have a type
Box and a list of
boxes. Then we
Select (or map) a box’s inner data into another list. We could extract the projection into a separate function too:
public string BoxString(Box<string> p) { return p.Data; }
The type signature of this function is
Box-> string
But wouldn’t it be nice to be able to do work on a boxes data without having to explicity project it out? Like, maybe define a way so that if you pass in a box, and a function that works on a string, it’ll automatically unbox the data and apply the function to its data.
For example something like this (but this won’t compile obviously)
public String AddExclamation(String input){ return input + "!"; } IEnumerable<Box<string>> boxes = new List<Box<string>>(); IEnumerable<string> boxStringsExclamation = boxes.Select(AddExclamation);
In C# we have to add the projection step (which in this case is overloaded):
public String AddExclamation(Box<String> p){ return AddExclamation(p.Data); }
In F# you have to do basically the same thing:
type Box<'T> = { Data: 'T } let boxes = List.init 10 (fun i -> { Data= i.ToString() }) let boxStrings = List.map (fun i -> i.Data) boxes
But in Haskell, you can define this projection as part of the type by saying it is an instance of the
Functor type class. When you make a generic type an instance of the functor type class you can define how maps work on the insides of that class.
data Box a = Data a deriving (Show) instance Functor Box where fmap f (Data inside) = Data(f inside) main = print $ fmap (++"... your name!") (Data "my name")
This outputs
Data "my name... your name!"
Here I have a box that contains a value, and it has a value. Then I can define how a box behaves when someone maps over it. As long as the type of the box contents matches the type of the projection, the call to
fmap works.
One comment | http://onoffswitch.net/thinking-haskell-functors-net/ | CC-MAIN-2018-13 | refinedweb | 494 | 72.36 |
Name
Template::Declare::Bricolage - Perlish XML Generation for Bricolage's SOAP API
Synopsis
use Template::Declare::Bricolage; say bricolage { workflow { attr { id => 1027 }; name { 'Blogs' } description { 'Blog Entries' } site { 'Main Site' } type { 'Story' } active { 1 } desks { desk { attr { start => 1 }; 'Blog Edit' } desk { attr { publish => 1 }; 'Blog Publish' } } } };
Description
It can be a lot of work generating XML for passing to the Bricolage SOAP interface. After experimenting with a number of XML-generating libraries, I got fed up and created this module to simplify things. It's a very simple subclass of Template::Declare that supplies a functional interface to templating your XML. All the XML elements understood by the Bricolage SOAP interface are exported from Template::Declare::TagSet::Bricolage, which you can use independent of this module if you require a bit more control over the output.
But the advantage to using Template::Declare::Bricolage is that it sets up a bunch of stuff for you, so that the usual infrastructure of setting up the templating environment, outputting the top-level
<assets> element and the XML namespace, is just handled. You can just focus on generating the XML you need to send to Bricolage.
And the nice thing about Template::Declare's syntax is that it's, well, declarative. Just use the elements you need and it will do the rest. For example, the code from the Synopsis returns:
<assets xmlns=""> <workflow id="1027"> <name>Blogs</name> <description>Blog Entries</description> <site>Main Site</site> <type>Story</type> <active>1</active> <desks> <desk start="1">Blog Edit</desk> <desk publish="1">Blog Publish</desk> </desks> </workflow> </assets>
bricolage {}
In addition to all of the templating functions exported by Template::Declare::TagSet::Bricolage, Template::Declare::Bricolage exports one more function,
bricolage. This is the main function that you should use to generate your XML. It starts the XML document with the XML declaration and the top-level
<assets> element required by the the Bricolage SOAP API. Otherwise, it simply executes the block passed to it. That block should simply use the formatting functions to generate the XML you need for your assets. That's it.
Support
This module is stored in an open GitHub repository,. Feel free to fork and contribute!
Please file bug reports at.
Author
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | https://metacpan.org/pod/Template::Declare::Bricolage | CC-MAIN-2017-09 | refinedweb | 397 | 50.26 |
Working with XML and Information Systems
- Representing data digitally
- XML and digital data
- Information systems
- XML and information systems
This book describes a method for representing data inside computers. As information flows through the processes that operate on it, its forms and representations change in subtle ways. These transformations are governed by patterns of rules usually called programs. Computers are information processing machines, and programs are essentially servants created to serve the needs of the information stored and processed in these machines. Programs exist to display data, to transform data, to move data from one location to another, and to let humans interact with data.
When creating information-centric applications, the many methods of representing data, XML being one among many, must be considered in relation to other methods and the needs of the information itself. Often, the information will be best served by flowing from one representation to another, as each representation best serves the purpose of one part of the system.
In this chapter we will consider how XML compares to other important methods of data representation, such as relational databases and object-oriented databases. This provides a basis for understanding how XML can be used profitably and at which points in a larger application data is best represented as XML. Later, we will look at how to write applications that read, process, and generate XML, and the various methods for doing this. Finally, we will consider how to use XML together with other information technologies in order to create useful applications.
1.1 Representing data digitally
Today's computers are digital machines, which means that any information that is to be processed by them must be represented as a sequence of binary digits (zeroes and ones). This is slightly problematic because such sequences do not have any obvious meaning. To take one example, it is impossible to tell what the string 010010000110100100100001 actually means without knowing what rules were used to produce it.
To represent information digitally we use rules that define how to convert the information from the human understanding of it into strings of bits. A collection of such rules is known as a notation in this book, but often called a data format in ordinary computer terminology. Knowing the notation also allows us to go the other way and interpret the string back into human terms. A very common interpretation for binary strings is as numbers written in base 2, i.e. in the binary system. If this interpretation were applied to the binary string above it would yield the number 4745505. This might well be the correct interpretation, but it doesn't really tell us much or seem like a very useful interpretation without a context. One context might be: the number is the population of Denmark.1 Another common representation of digital information is the ASCII character encoding, where text is represented by assigning a number to each character that may occur in text, and every character is represented as its number written out in base 2 with 8 bits (or binary digits) per character. If we interpret the string above according to this ASCII notation,2 we find that it spells out characters number 72, 105, and 33, in that order. These three characters together form the string Hi!. In other words, it is a greeting.
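Both interpretations of the example string can be reproduced in a few lines of Python:

```python
bits = "010010000110100100100001"

# Interpretation 1: the entire string as one number written in base 2
as_number = int(bits, 2)
print(as_number)                 # 4745505

# Interpretation 2: ASCII text, eight bits per character
as_text = "".join(chr(int(bits[i:i + 8], 2))
                  for i in range(0, len(bits), 8))
print(as_text)                   # Hi!
```

The same 24 bits yield a population figure or a greeting depending entirely on which rules are applied to them.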
1.1.1 Notations
So far we have only considered the encoding of individual values or data items, such as strings and numbers, without any context for these to be interpreted in. In computing such values hardly ever appear in isolation, but are usually found in a larger context, a structured collection of data items. Imagine that a digital data stream is received by an application somehow, disregarding the transmission method for the moment. This means that a stream of binary digits will be pouring into the application, which must then somehow make sense of this stream of information. Doing so requires not only the ability to decode individual data items, but also to locate the boundaries of each item and put the items together into a coherent structure. The rules for how to interpret the stream in this higher-level sense are called a notation.3 Notations can be made to represent very nearly anything at all, be it documents, databases, sound, images, or any other kind of data. Note that there are two main kinds of notations: character based and bit based. The first consisting of characters, just like text, the structure of the second being defined in terms of bits and bytes.
One notation is the textual notation, which applies the ASCII character encoding to entire data streams. This character based notation is simple and convenient and can be used to represent anything at all, from novels through laundry lists to payroll information. However, its conceptual structure is not apparent in the text and so it cannot be processed automatically by software for purposes other than editing and display. To be able to perform most other tasks, a less general and more application-specific notation is needed.
An example may serve to make this discussion of data encoding and data formats clearer. Shown in Example 11 are the first 200 bytes of a digital data stream, with each byte in the stream interpreted as a base 2 number and displayed as a hexadecimal number, which is a common way of displaying raw binary data.
Example 11. An example data stream
46 72 6f 6d 3a 20 59 6f 75 72 20 66 72 69 65 6e 64 20 3c 66
72 69 65 6e 64 40 70 75 62 6c 69 63 2e 63 6f 6d 3e 0a 54 6f
3a 20 4c 61 72 73 20 4d 61 72 69 75 73 20 47 61 72 73 68 6f
6c 20 3c 6c 61 72 73 67 61 40 67 61 72 73 68 6f 6c 2e 70 72
69 76 2e 6e 6f 3e 0a 53 75 62 6a 65 63 74 3a 20 41 20 66 75
6e 6e 79 20 70 69 63 74 75 72 65 0a 4d 65 73 73 61 67 65 2d
49 44 3a 20 3c 35 30 33 32 35 42 41 32 38 42 30 39 33 34 38
32 31 41 35 37 46 30 30 38 30 35 46 42 37 46 43 32 35 30 31
45 36 36 42 35 45 40 6d 61 69 6c 2e 70 75 62 6c 69 63 2e 63
6f 6d 3e 0a 44 61 74 65 3a 20 46 72 69 2c 20 38 20 4f 63 74
This binary dump doesn't make a lot of sense in the form it is shown here, but if we are told that it is a character based notation, things become much clearer. Interpreted as ASCII text, the first 200 bytes of the data stream look like Example 12.
Example 12. The data stream as ASCII
From: Your friend <[email protected]>
To: Lars Marius Garshol <[email protected]>
Subject: A funny picture
Message-ID: <[email protected]>
Date: Fri, 8 Oct
Suddenly, we see that the data stream is not just a text stream, but an email. Emails have a stricter and less general notation than plain text files, which is defined in Internet specifications, the relevant ones being RFCs 822 and 2045 to 2049. RFC stands for Request For Comments and RFCs are official Internet documents that can be found at and also at a huge number of mirror sites world-wide.
The email notation starts with a list of headers and continues with a body that holds the actual email contents. Example 12 shows the beginning of the headers. Each header is placed on a separate line, lines being separated by newline characters.4 On each line, the name of the header field appears first, followed by a colon and a space and then the value of the header field. This enables us to locate individual data items in the email headers, and also to put them together into a larger structure where each data item has a name. Knowing the name of each header field, together with detailed knowledge of the email notation, also tells us how to decode the value in each field. This can sometimes be rather complex, such as in the case of the date.
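A simplified sketch of this decoding step in Python — it ignores complications such as continuation lines and repeated header fields:

```python
raw_headers = """From: Your friend <[email protected]>
To: Lars Marius Garshol <[email protected]>
Subject: A funny picture"""

headers = {}
for line in raw_headers.splitlines():
    # Split each line at the first ": " into a field name and a value
    name, sep, value = line.partition(": ")
    if sep:                      # skip anything that is not "name: value"
        headers[name] = value

print(headers["Subject"])        # A funny picture
```

Once the items have been located and named like this, each value can be decoded further according to the rules for its particular field.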
Example 13 shows the entire set of headers for the email, together with an abbreviated body.
In order to be able to decode the body of the email we have to look at the Content-type header field, which tells us what data notation is used in the body. In this case, the field says multipart/mixed. This particular notation is defined by the Internet mail standard known as MIME (Multipurpose Internet Mail Extensions), defined in RFCs 2045 to 2049. It is used for emails that consist of several parts, called attachments. This means that the body consists of several data streams, each making up one attachment, separated by the boundary string also given in the Content-type field.
Example 13. The entire email

From: Your friend <[email protected]>
To: Lars Marius Garshol <[email protected]>
Subject: A funny picture
Message-ID: <[email protected]>
Date: Fri, 8 Oct 1999 11:26:22 +0200
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.2448.0)
Content-Type: multipart/mixed; boundary="----_=_NextPart_000_01116F"
X-UIDL: 37ef28060000035b

This is a MIME-encoded message. Parts or all of it may be
unreadable if your software does not understand MIME. See
RFC 2045 for a definition of MIME.

----_=_NextPart_000_01116F
Content-type: text/plain

Hi Lars, here is a funny picture.

----_=_NextPart_000_01116F
Content-type: image/gif; name="funny.gif"
Content-transfer-encoding: base64
Content-disposition: attachment; filename="funny.gif"

...

----_=_NextPart_000_01116F

If we look closely at the body, we will see that it contains first a message to users using mail readers that are not MIME-aware, outside of the first attachment. The first attachment has a form similar to the email itself, with headers and a body. In this case, the body is plain ASCII text, and requires no special treatment.
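The splitting itself is mechanical. Below is a sketch of how an application might use the boundary string to cut the body into attachments, using a shortened stand-in body:

```python
boundary = "----_=_NextPart_000_01116F"

body = """This text is shown to readers without MIME support.
----_=_NextPart_000_01116F
Content-type: text/plain

Hi Lars, here is a funny picture.
----_=_NextPart_000_01116F
Content-type: image/gif; name="funny.gif"

...
----_=_NextPart_000_01116F
"""

pieces = body.split(boundary + "\n")
# pieces[0] is the preamble for non-MIME readers; the final piece
# (after the last boundary) is empty; everything between the
# boundaries is an attachment.
for attachment in pieces[1:-1]:
    # Each attachment is itself headers + blank line + content
    headers, _, content = attachment.partition("\n\n")
    print(headers.splitlines()[0])
```

Each piece then has the same headers-plus-body shape as the email itself, so the same header decoding can be applied to it recursively.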
The second attachment, however, is a different matter. It contains a GIF image, encoded with the base64 encoding. This is a common encoding much used on the Internet for encoding binary data as text, so that it may be safely used with applications that only expect ordinary text.5 In this case, after decoding the base64 data the application will have another stream of digital information, this time in the GIF notation.
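Python's standard library can demonstrate the base64 round trip; the bytes below merely stand in for a real GIF stream:

```python
import base64

data = b"GIF89a..."                  # stand-in for the real image bytes
encoded = base64.b64encode(data)
print(encoded)                       # b'R0lGODlhLi4u' -- plain, mail-safe ASCII
print(base64.b64decode(encoded))     # b'GIF89a...'    -- the original bytes again
```

The encoded form is slightly longer (four output characters for every three input bytes), which is the price paid for surviving transport through text-only channels.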
To be able to interpret and display the GIF image, the application must start from scratch again and locate the various fields inside the stream that makes up the image, decode them and use them to decode the rest of the stream. Exactly how this is done is not really relevant to this example, so we will skip this for now. Note that the GIF notation is a binary notation, which is both more efficient and harder to decode and understand than a text notation.
What we have just examined is a notation for email messages. It tells us how to decode a stream of digital information into a coherent data structure that makes sense to a human being. Inside the stream appear various data items and also new data streams, which are the contents of the two attachments. The individual data items have their own notations specified by the larger notation, as do the data streams.
1.1.2 Data representation
So far, we have only discussed the notation itself, but not what the application should do with the represented in it. The application needs to somehow store the information in the working memory, and to do this it must choose some data representation. The working memory of a computer is nothing but a huge array of bytes, just like the data stream, which means that the notation could well be used to represent the information inside a running program by simply storing the stream as-is in memory. However, notations are generally very awkward to use as the actual data representation in a program, since they are completely flat (being sequences of binary digits) and programs generally need to be able to traverse and modify the data. It is of course possible to do this using the external notation, but it is rather awkward, as Example 14 shows.
This implementation of the Email class uses the external email notation as the internal representation of emails inside the program. This is done by keeping the email as a string, so that values can be
Example 14. Using the external notation as internal representation
import string class Email: """A class for encapsulating email messages and providing access to them.""" def __init__(self, email): self._email = email def get_header(self, name): """Returns a list of the values of all instances of the header with the given name.""" values = [] pos = string.find(self._email, "\n" + name + ": ") while pos != -1: end = string.find(self._email, "\n", pos + 1) values.append(self._email[pos + len("\n" + name + ": ") : end]) pos = string.find(self._email, "\n" + name + ":", pos + 1) return values def add_header(self, name, value): "Inserts a header with the given name and value." pos = string.find(self._email, "\n\n") assert pos != -1 self._email = self._email[ : pos + 1] + name + ": " + value
+ "\n" + self._email[pos + 1 : ] # ...
extracted from the string and the entire email can be modified by modifying the string. As should be obvious, this is both awkward and inefficient.
A much more natural representation would be to have a dictionary keyed on header names that maps to a list of values to represent the headers. The attachments could be represented as a list of attachment objects, where each attachment object holds a dictionary of header fields and a file-like object to represent the attachment contents. Further classes could also be defined to represent the values in the various fields (email addresses, dates, etc.). Such an implementation is shown in Example 15.
Example 15. Using a more natural representation
class Email: """A class for encapsulating email messages and providing access to them.""" def __init__(self): self._headers = {} self._attach = [] def get_header(self, name): """Returns a list of the values of all instances of the header with the given name.""" return self._headers[name] def add_header(self, name, value): "Inserts a header with the given name and value." try: self._headers[name].append(value) except KeyError: self._headers[name] = [value] # . . . class Attachment: """A class for encapsulating attachments in an email and providing access to them.""" def __init__(self): self._headers = {} self._contents = None # . . .
What we have done now is to design an internal data structure that is optimized for storing the information from the email in the working memory of a program.n Both the data stream and the data structure are digital, but they have very different properties. The data stream is a sequential stream of bytes6 (defined by a notation), while the data structure is not necessarily contiguous in memory, has no specific order and is highly granular rather than flat as the data stream.
One thing that is important to understand is that while the data structure represents the original email data stream it does not do so fully. The data structure keeps only the information we consider essential (what is called the logical information), and throws away much information about what the original data stream looked like. One of the pieces of information we have lost is what boundary string was used between each attachment, or what the warning before the first attachment was. We can no longer recreate the original email!
This means that although the second representation is much more usable than the first, it carries a hidden cost: the loss of information that may at times be necessary. As we will see later, central XML specifications do the same, and this has both benefits and costs that one must be aware of. For if you do need to recreate the original data stream, you will need to solve this problem somehow, and the XML specifications and established practice will offer little or no help.
1.1.3 Serialization and deserialization
The problem with having the data in the working memory of the application is that once the application is shut down or the power to the machine is turned off, the contents of the working memory are lost. Also, the application cannot communicate its internal structures directly to other programs, since they are not allowed to access its memory.7 Programs running on other computers will not be able to access the data at all.
Using a notation solves this problem, however, because it gives us a well-defined way of representing our data as a data stream. It does leave us with two problems, however, which are those of moving data back and forth between the notation and the internal data structure. The technical term for the process of writing a data structure out as such a binary stream is serialization. It is so called because the structure is turned into a flat stream, or series, of bytes. Once we have this stream of bytes, we can store it into a file on disk where it will persist even if the application is shut down or the power is turned off. The file can then be read by other applications. We can also transmit the stream across the network to another machine where other applications can access it.
In the example of the email program, for example, the email program will receive the email from a mail server and store it in memory in its internal data structures. It will then write this internal structure out to its database of emails, which can be organized in many different ways. Some programs simply put each email (using the original notation) in a separate file, while others use more sophisticated database-like approaches.
In general, we can say that data has two states: live and suspended. Live data is in the internal structure used by program and is being accessed and used by that program. Suspended data is serialized data in some notation that is either stored in a file or being transmitted across a network. Suspended data must be deserialized to be turned into live data so that it can actually be used by programs. The deserial-ization of character based notations is usually known as parsing, and a substantial branch of computer science is dedicated to the various methods of parsing7a The vaguer term loading is also at times used as a synonym for deserialization.
It is not necessarily the case that each notation has a single data structure, and vice versa. In fact, usually each application supporting a notation will have its own data structure that is specific to it. In many cases applications will also support many notations.
Note that serialized (suspended) data need not be written to a file when it is stored. It can also be stored in a database (most database systems support storage of uninterpreted binary large objects, also known as blobs), as part of another file (as the email example showed) or in some other way. In fact, serialized data doesn't need to be stored at all, but can instead be transmitted across the network or to another process on the same machine.
1.1.4 Data models
Over the years, certain methods for structuring data have established themselves as useful general approaches to building data structures. When such a method is formalized by a specification of some kind it becomes a data model. A data model is perhaps easiest explained as a set of basic building blocks for creating data structures and a set of rules for how these can be combined.
One of the most widely used and best-defined data models is the relational model where data is organized into a table with horizontal rows, each containing a record, and vertical columns, representing fields. Each record contains information about a distinct entity, with individual values in each field. This is the data model used in comma-separated files and in relational databases. In relational databases some fields can also be references into other tables.
Another common data model is the object-oriented one, where data consists of individual objects, each of which has a number of attributes associated with it. Attributes have a name and a value and can be primitive values or references to other objects. This model is used by object-oriented programming languages and databases.
Defining a data model that states how data must be structured has several benefits. First, it gives a framework for thinking about information design that can be very helpful for developers by providing a set of stereotypes or templates which can be applied to the problem at hand to yield a solution. Secondly, it allows general data processing frameworks (that is, databases) to be created that can be used to create many different kinds of applications. The prime example of such frameworks are relational databases.
At this point you may be wondering what the data model used by emails is, and the answer is that email specifications do not use any particular data model. Instead, they use a well-known formalism known as EBNF (Extended Backus-Naur Form) to formally specify the notation of emails, and leave the conceptual structure undefined. People tend to agree on what the structure is anyway, although they can occasionally disagree on details, some of which may be important.
To be able to use a data model, the application developer must represent the information in the application in terms of that data model. Doing so lets the application use the notations and data processing frameworks that are based on the data model. For example, to be able to represent the structure of emails in relational databases, the application must express the structure of the emails using the tabular data model. Table 11 shows the result of this translation.
As you can see, it was a relatively simple translation. The only real problem was how to represent the attachments. The solution used here was a bit simplistic, since the attachment headers are just strings. This means that their structure is not represented using the data model at all, so this isn't really a very good solution. The attachments should have their own (almost identical) tables, but for simplicity I did not do that here.
Table 11 Email as table
Representing information in the application using the data model of the underlying framework is usually easy, but sometimes awkward or even quite difficult. The relational model is especially strict and inflexible, which made it possible to describe it very precisely mathematically and develop a powerful set of mathematical abstractions and techniques for working with relational data. Due to this work, relational databases today are well understood, extremely reliable and scalable and may perhaps in fairness be called the greatest success of computer science so far. For all their power, however, they are not suitable for all applications, and this is one of the facts that motivated the development of alternative models, such as the object-oriented one.
Restricting the possible forms of data to a specific data model has another benefit: formal languages can be defined to describe the structure of the data in terms of the underlying data model. Using such languages, the data structure of an application can be described formally and precisely. Such a description is known as a schema and the languages as schema languages.8 In the relational model, for example, a schema will define the tables used by an application, the type of each column in each table, and any cross-references between the tables.
Defining a schema for an application has the benefit that the framework can use it to automatically validate the data against the schema to ensure no invalid data is entered. With relational databases this means that you cannot put text in numeric columns, enter postal codes that are too long or too short, or insert a reference to a row in a table that does not exist (nor can you remove a row from one table if there are references to it from other tables).
1.1.5 Summary
Figure 11 shows how a live data structure inside an application can be serialized into a suspended sequential data stream which can then be sent over the network, passed to another application or written to application.
Figure 11 Summary of data representation terms
disk. It also shows how the stream can be read back into the application to rebuild the internal data representation. Today, the representation will usually be defined as a set of classes, but programming languages that are not object-oriented have other ways of representing data. The internal data representation will be defined in terms of a data model, such as the relational or the object-oriented. The data stream will be written according to a notation of some kind, and the notation will also be based on a data model.
Initially, we discussed the notations of individual values and data items. It is worth noting that the notation of values is often shared between the external notations and the internal data representations. These mainly differ in the way they compose larger structures from collections of values and data items, and not so much in the notation of individual values. | https://www.informit.com/articles/article.aspx?p=29263&seqNum=2 | CC-MAIN-2020-50 | refinedweb | 4,269 | 50.26 |
29 July 2013 22:25 [Source: ICIS news]
HOUSTON (ICIS)--While ?xml:namespace>
Much has been made of the Chinese government working to cool down its economy, with Q2 GDP recently clocking in at 7.5%, the lowest figure in four years.
Mike Shannon, global leader of chemicals and performance technologies practice for KPMG, said the Chinese economy’s deceleration should not be viewed as the beginning of the end for making business in-roads in the country.
“I don’t think the engine is drying up,”
The analyst sees
“The market is starting to see more specialty products take hold,” he said. “If you have a higher value product, companies now are more willing to pay for that.”
And when it comes to making specialty chemicals for the Chinese market, Western producers have the advantage there, he added.
Fellow analyst Paul Harnick, KPMG’s global chief operating officer for the company’s chemicals and performance technologies practice, offered that many companies are “almost too focused on
Harnick sees countries such as the
Shannon agreed, but maintained that
“It’s still the main region of | http://www.icis.com/Articles/2013/07/29/9691992/china-asia-1.2-wave-will-be-engines-of-growth-kpmg.html | CC-MAIN-2014-41 | refinedweb | 185 | 57.71 |
As a contribution back to the Jython community, I wrote an article that
describes the various options Jython users have to write threaded
applications. The complete and formatted article is available on the
PushToTest Web site at:
I have also copied the text of the article below. I am open to
feedback, corrections, and additions. Hopefully this will be a living
document as Jython grows.
My thanks goes to Clark Updike ([email protected]), Jeff Emanuel
([email protected]), Fred Sells ([email protected]) for providing
feedback, comments, and help.
---
Writing Threaded Applications in Jython
Abstract:
Jython is a popular object oriented scripting language among software
developers, QA technicians, and IT managers. It is also the scripting
language in TestMaker and TestNetwork. In this article, Frank Cohen
looks at Jythons ability to construct threaded multi-tasking software,
shows the best practice to build scalable and thread-safe code, and
points out how to avoid common mistakes and misunderstandings
Feel free to share this document in its entirety with your
friends and associates; However, this document remains
Jython and Threading
Jython is an object oriented scripting language that is popular with
software developers, QA technicians, and IT managers. Jython is a 100%
Java application. At runtime Jython scripts compile into Java bytecodes
and run in the Java virtual machine. Jython classes are first class
Java objects, so Jython can import any Java object on the classpath and
call its methods. Jython gives Java developers the best of both worlds.
Consequently, more and more test automation software, installation
scripts, system monitoring code, and utility script code is being
written in Jython.
Jython provides an easy environment to build objects. One of my first
Jython scripts looked like this:
class myclass:
def setMyparam( self, myparam ):
self.storeit = myparam
def getMyparam( self ):
return self.storeit
a = myclass()
a.setMyparam( "frank" )
b = myclass()
b.setMyparam( "lorette" )
print "a.storeit =", a.getMyparam()
print "b.storeit =", b.getMyparam()
This script implements a class name myclass. It has two methods, one to
set a parameter and the second to get the stored value. Here is the
output when I run the script:
a = frank
b = lorette
While this is straightforward enough, I envision using an object like
myclass in a threaded application. These questions come to mind:
Which dictionary is the storeit variable stored?
Do I have to worry that some other call to another instance of myclass
will get the storeit value from the wrong instance?
Is myclass thread safe?
Jython stores variables in dictionaries. Each new class gets its own
dictionary when Jython instantiates the class. In myclass, self.storeit
refers to the instance of storeit in the dictionary for the instance of
myclass. As long as the script uses self.storeit then no other instance
of myclass will get the self.storeit value. However, imagine the script
includes a bug such as:
def getMyparam ( self ):
return storeit
In this example, I forgot to use self.storeit in the getMyparam method.
Jython implements the equivalent of a Java Static class when myclass is
defined in the script. This faulty print method retrieves the storeit
value from the static class version of myclass and not from the
instance of myclass referred to by a or b.
When multiple threads concurrently call setMyparam on the same instance
of myclass, then it is anyones guess which thread uses the setMyparam
method last and actually sets the final value of Myparam. This is
commonly referred to as a race condition. Consider the following
example program:
import thread
class myclass:
def setMyparam( self, myparam ):
self.storeit = myparam
def getMyparam( self ):
return self.storeit",) )
This script defines myclass with two methods: one to set a value and
one to get a value. Then it defines a runthenumbers method that gets
the value from a myclass object, prints it to the screen, and stores a
new myclass value. The script then instantiates a myclass that will be
referred to by a and sets the initial value to frank. Lastly, the
script instantiates two concurrently running threads that operate on
the instance of myclass.
When the script runs, both threads use the setMyparam and getMyparam
method of myclass. It is likely that eventually one thread will
interrupt the other when using setMyparam. In this case it anyones
guess which threads call to setMyparam stores the final value since
threads are meant to run concurrently by timesharing the system
resources. In summary, this approach to coding a threaded application
has these problems:
You have no way of telling the conditions of the threads: Have they
started? Have they finished?
Multiple threads may try to call setMyparam concurrently. In the
ensuing race condition, the last thread to call setMyparam wins. And
there is no way to tell.
This is not to say that Jython cannot produce thread safe code. Jython
does! However, there are multiple designs to create thread safe classes
that avoid these problems.
The Many Ways To Thread An Application in Jython
Jython's ability to use Java objects introduces a variety of options
when it comes to building threaded applications. This section describes
four options and examines their relative merits and problems.
Python Threads
This example uses the Jython thread library. However, to overcome
possible race conditions the script uses Jython's synchronized library
to guarantee that only one thread can call a method at a time:
import thread, synchronize
class myclass:
def setMyparam( self, myparam ):
self.storeit = myparam
setMyparam=synchronize.make_synchronized( setMyparam )
def getMyparam( self ):
return self.storeit
getMyparam=synchronize.make_synchronized( getMyparam )",) )
In this example, make_synchonized uses the same technique as Java to
synchronize method calls. Jython implements the synchronize library
using this Java code:
public static PyObject make_synchronized(PyObject callable)
{
return new SynchronizedCallable(callable);
}
and SynchronizedCallable has a __call__ operator to call the argument
callable's __call__ method in a synchronized block like this:
synchronized(synchronize._getSync(arg))
{
return callable.__call__(arg);
}
Python Threads provides an easy way in a Jython script to create a
threaded application and synchronized thread safe methods within a
class object. A newer Python technique uses the Threading library. Here
is an example:
import threading
def greet( name ):
print "greetings", name
count = 0
t = threading.Thread(
target=greet,
name="MyThread %d" % count,
args=( "threading.Thread", )
)
t.start()
The new Python technique provides a slightly more Java-like feel to the
syntax to create threads and provides a simple way to name a thread.
Aside from those advantages I observe no performance or functional
difference from the older Python technique.
Java Threads
This example uses the Java Thread library to implement a threaded
example:
from java.lang import Thread, Runnable
class GreetJob( Runnable ):
def __init__( self, name ):
self.name = name
def run( self ):
print self.name
count = 1
t = Thread( GreetJob( "Runnable" ), "MyThread %d" % count )
t.start()
Jython can also implement threads by extending the Java Thread class.
Between these two techniques I have observed no differences in
performance or functionality:
from java.lang import Thread
class GreetThread( Thread ):
def __init__( self, name, count ):
Thread.__init__( self, "MyThread %d" % count )
self._name = name # Thread has a 'name' attribute
def run( self ):
print self._name
count = 2
t = GreetThread( "Thread subclass", count )
t.start()
I find it very unusual in a Python environment to have so many
different ways to accomplish the same goal. Especially considering
Python has a "one obvious way to do it" design principle. Therefore,
next I describe what I believe to be the best practice to design Jython
scripts that implement threads.
The Best Practice
Based on my experience writing threaded applications in Jython, using
Java Threads and the Runnable interface is the best practice. The
following Jython script implements the best practice for building
threaded applications in Jython:
from java.lang import Thread, Runnable
import synchronize
class myclass( Runnable ):
def __init__( self, myparam ):
self.storeit = myparam
def setMyparam( self, myparam ):
self.storeit = myparam
setMyparam=synchronize.make_synchronized( setMyparam )
def printMyparam( self ):
print "myclass: myparam =",self.storeit
printMyparam=synchronize.make_synchronized( printMyparam )
def run( self ):
for self.i in range(5):
self.printMyparam()
count = 2
a = myclass()
a.setMyparam( "frank" )
t = Thread( a, "MyThread %d" % count )
t.start()
In summary, the best practice makes these points:
The above code example defines myclass to implement the Runnable
interface from the Java Thread object. Runnable works best because it
offers the thread management APIs to check status, set daemon thread
status, and kill a thread.
I use the make_synchronized method of the synchronize library to make
certain that only only one call to the method is possible at any given
time.
The __init__ method creates the storeit object and sets the initial
value. When the class is instantiated Jython calls the __init__ method
on the instance of the new class so there is no need to synchronize
__init__ because only the new instantiation of the class has access to
it. __init__ is thread safe.
Joining Threads
An additional technique supported by the Java Thread technique is that
threads may be joined. Your scripts use the current thread and one new
thread and then "join" the threads so the current thread doesn't
proceed until the new one finishes. Here's an example of that:
import threading
import time
def pause(threadName, sleepSeconds):
# create an attribute
threading.currentThread.isDone = 0
print "Thread %s is sleeping for %s seconds." % (threadName, sleepSeconds)
time.sleep(sleepSeconds)
print "Thread %s is waking up." % threadName
threading.currentThread().isDone = 1
newThread = threading.Thread(name='newThread', target=pause,args=(.
Where To Find Additional Information
Try these URLs for information that helped me write this article:
About The Author
Frank Cohen is the "go to" guy for enterprises needing to test and
solve problems in complex interoperating information systems,
especially Web Services. Frank is founder of PushToTest, a test
automation solutions business and author of Java Testing and Design:
From Unit Tests to Automated Web Tests (Prentice Hall.) Frank maintains
TestMaker, a free open-source utility that uses Jython to build
intelligent test agents to check Web Services for scalability,
performance and functionality. PushToTest Global Services customizes
TestMaker to an enterprise's specific needs, conducts scalability and
performance tests, and trains enterprise developers, QA technicians and
IT managers on how to use the test environment for themselves. Details
are at. Contact Frank at fcohen@[...].com.
-------------------------------------------------------
This SF.Net email is sponsored by: IBM Linux Tutorials
Free Linux tutorial presented by Daniel Robbins, President and CEO of
GenToo technologies. Learn everything from fundamentals to system
administration.Ìk
_______________________________________________
Jython-users mailing list
Jython-users@[...].net | http://aspn.activestate.com/ASPN/Mail/Message/Jython-users/2034708 | crawl-002 | refinedweb | 1,748 | 57.06 |
The SVG train is starting to gain momentum. Internet Explorer, as the last of the major browsers, still lacks support for displaying SVG images, but its developers have hinted that this may change with IE 9.
This article demonstrates a different use of the XML-based vector graphics format. I will show you how to use SVG to create wireframes for a new website design, and the advantages of this method over the classical Photoshop → text editor → browser approach. The article is not about using SVG inside web pages, although I will briefly touch on this topic, but about designing pages before their conversion to HTML.
In the remainder of this article I will use Inkscape as a stand-in for any SVG-enabled drawing program; it is an open source vector graphics editor included in most current Linux desktop distributions and available for Windows and Mac. If you own a copy of Adobe Illustrator or CorelDraw, it is perfectly fine and straightforward to go with that, since they also support SVG. One thing you will miss there, however, is the embedded XML editor that I will mention below.
The Artistʼs Perspective
Consider a customer ordering a new website. The first thing that usually happens is the designer firing up Photoshop and creating a draft of the siteʼs look. Afterwards the web developer looks at the draft and tells the designer what can and cannot be done with the HTML/CSS/JavaScript trinity. Then the wireframe enters the next iteration round. Meanwhile, the developer creates a prototype of the siteʼs functionality in the hope that later on it can easily be included in the designerʼs concept.
Using SVG instead of PSD files doesnʼt help with the need for communication between designer and developer. But it has a very immediate advantage:
- SVG files can be viewed directly in the browser.
This is not to be underestimated. It is highly improbable that the designer works with something as uncomfortable as a GIF, PNG or JPEG. That means every time a tiny bit of the original draft changes, the file has to be exported to one of those formats before it can be viewed in a browser. With SVG, previewing the new website is as simple as hitting Ctrl+S in Inkscape and F5 in your SVG-enabled browser.
However, using vector graphics for a website mockup is not always simple.
If the page design relies heavily on pixel art or pictures, working with lines and boxes tends to get a bit rough. Nevertheless, many designs are merely a combination of geometric figures, sometimes with gradients or patterns, and for those you will find vectors much more comfortable than pixels. SVG also offers an image element to embed other images, raster graphics included. This, together with its masking and filter possibilities, can recreate many of Photoshopʼs effects. And then, changing the header image in a design is as simple as changing the URL reference on the image tag.
The advantage of vector graphics becomes apparent, too, if your interface elements consist mostly of icons of some sort. Since good icons are delivered as vector images (as long as they are not especially optimized for tiny sizes), you will find it dead easy to embed, adapt and scale such icons to fit your design.
Taking Advantage of XML
Weʼre switching back now from the designerʼs workbench to what can be done with a ready-to-work-with design template in the hands of a web developer.
As said above, SVG is an XML language. That has the nice side effect that SVG files are plain text files. If your workflow involves some kind of version control system, you can easily check for differences between two states of a wireframe from your terminal and have a good chance of understanding what is going on. No need for Photoshop on both the designerʼs and the developerʼs machines… That said, SVG is also an open standard: any program that supports the features used in a particular SVG file will render the file the same way.
Also, since SVG is XML, you can use your whole XML toolbox with the wireframe. Imagine the draft lies on your testing server and youʼre familiar with, say, Pythonʼs XML modules. Then it is a very simple task to read the design draft, apply some DOM manipulation to it and serve the result to your testing browser instead of the original SVG file.
- SVG is a plain text format and an open XML language.
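The testing-server idea above can be sketched in a few lines with Pythonʼs standard xml.etree.ElementTree module. The draft snippet and the slogan id here are invented for illustration:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep the default namespace on output

# A tiny stand-in for the designer's draft file.
draft = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="40">
  <text id="slogan" x="10" y="25">Lorem ipsum</text>
</svg>"""

root = ET.fromstring(draft)

# Swap the placeholder slogan for real copy before serving the preview.
for text in root.iter("{%s}text" % SVG_NS):
    if text.get("id") == "slogan":
        text.text = "Our real slogan"

preview = ET.tostring(root, encoding="unicode")
print(preview)
```

In a real setup the manipulated tree would of course be served with the image/svg+xml media type rather than printed.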
An Example Setup
To give you a grip on what Iʼm talking about, Iʼll sketch a short example SVG file. This could be used as a template by both the designer and the developer.
<svg version="1.1" xmlns="http://www.w3.org/2000/svg"
     width="1000px" height="1000px">
  <title>Website Template</title>
  <defs/>
  <g id="page">
    <g id="header">
      <rect x="0" y="0" width="1000" height="120" fill="green"/>
      <text x="20" y="70"
            style="font: bold 3em Helvetica,Arial,sans-serif;">
        A Heading
      </text>
    </g>
    <g id="navigation">
      …
    </g>
    <g id="content">
      …
    </g>
  </g>
</svg>
View this example [SVG enabled browser only].
The
g element is used for grouping design elements. Think of it as like
groups in Photoshop.
defs contains definitions for gradients,
clipping paths, masks, filters and so on. The
rect paints a
simple rectangular at the top left,
text draws text. Elements that
come later in the document will be painted on top of their ancestors. There
is no such thing as z-index. You can set many properties of SVG elements with CSS declarations,
if you like.
- SVG uses an optional extended CSS syntax for styling.
To explain this advantage a little further, you can also embed external stylesheets (not yet supported in Inkscape) or use
style elements like in HTML for document-wide declarations. If the designer and the developer share some
preliminary considerations and agree on a base design,
they can both use an embedded default stylesheet that already contains many style properties of
the final HTML version, like font definitions.
SVG allows both attributes
id and
class on any element. They have the same meaning as in HTML and the same
effect on CSS declarations, that is, you can address SVG elements with the #id/.classname notation.
Perhaps youʼve noticed, that although I used pixel units in the
svg root
element, the measures of the rectangular are unitless. This is a recommended practice
to make SVG images scale more easily. The content of the SVG root element is then fitted into
the viewport spanned by its size. Although this is a great feature of SVG,
it has to be handled with care when trying to accomplish pixel-perfect designs.
Creating a Prototype
Now comes the fun part. SVG not only allows CSS but also JavaScript. And, even better,
it has an
a element to mark up links. This is exciting. Have you
ever dreamed of an JPEG you can interact with? No? Iʼll show you, what you missed. First
a bit code (shortened by the positioning attributes to increase readability):
<a xlink: <rect fill="yellow" /> <text style=" font: bold 1em Helvetica,Arial,sans-serif; fill: blue;"> A link </text> </a>
View this example [SVG enabled browser only].
The needed XLink namespace
xmlns:xlink="" is usually declared on the root element. Then you have a link
element enveloping two design elements. The link works as expected with the target given
in the
xlink:href attribute. If you click on either
the
rect or the
text the browser opens the
link target just like any HTML link.
- SVG links allow single wireframes to be connected via hyperlinks.
Unfortunately Inkscape has no simple user interface to create links. But it is equipped with a very nice XML editor that you can use to create any element (even non-SVG ones). In this editor you can create a new link element and just drag and drop the elements, that should be clickable, inside it.
With SVG links you can create a series of wireframes, that an evaluation user can follow directly in the browser. This is our first step to transform our wireframe into a working prototype.
In this context a prototype is meant to be a viewable web content a user can interact with. Neither is it meant to become part of the finished work nor has it to be based on HTML (think of Flash prototypes). All it should do is demonstrate how a finished page would work and react.
With this interpretion of a prototype in mind we can start adding behaviour to the SVG file. As we all have learned in Usability 101 styling is done with CSS and behaviour, strictly separated, with JavaScript. And exactly so we want to keep it with SVG, that is, with one small exception.
As an example, we will try to add a smooth hover effect to what will become a navigation bar.
The JavaScript for this task is stored in an external file and embedded via SVGʼs
script element, that works quite like the one in HTML. (Note that
you have to use
xlink:href instead of
src.) We begin with
the SVG source:
<?xml-stylesheet <title>Navigation</title> <defs> <script type="text/javascript" xlink: </defs> <g id="page"> … <g id="navigation" class="normal"> <rect x="0" y="10" width="100" height="80" /> <rect x="110" y="10" width="100" height="80" /> <rect x="220" y="10" width="100" height="80" /> </g> … </g> </svg>
View this example [SVG enabled browser only].
At the very first line you see an XML processing instruction. This is a standardized way to embed external CSS or XSLT stylesheets into an XML file. SVG re-uses this syntax. Letʼs take a look at navigation.css.
@namespace url(); #navigation rect { fill: blue; } #navigation rect:hover { fill: red; }
The
:hover, this is the small exception mentioned above, will just like that add nice effects to the SVG file, like
we know them from HTML. With this technique you could also change, e.g., gradients for an object
as easy as
fill: url(svgfile.svg#my_gradient); and so simulate what will
later become a classical CSS sprite. However, this cross-document referencing of style properties is not yet implemented
in most browsers (but Firefox 3.5 and Opera 9.6). It will thus only work for embedded stylesheets and declarations of
the form
fill: url(#my_gradient);.
The
:hover pseudo-property, together with the
a element,
gives us already the power to quite efficiently simulate an actual HTML page (if you
leave forms aside). But if there are interactions that require some kind of animation,
we still have the JavaScript file referenced within the
script element,
and animations are, what we do now.
SVG has support for SMIL animations, but since the browser support is not really good there, and Inkscape has no interface whatsoever for this, we use this as a learning example for JavaScript.
/* on window loading */ window.addEventListener("load", function() { var rects = document .getElementById("navigation") .getElementsByTagNameNS("", "rect"); /* loop through all rects in the navigation */ for (var i = 0; i < rects.length; i++) { /* do the hovering */ rects[i].addEventListener("mouseover", function (event) { blue2red(event.target, 0); }, false); rects[i].addEventListener("mouseout", function (event) { red2blue(event.target, 0); }, false); } }, false); /* change the color from blue to red */ function blue2red(element, red) { red = Number(red) || 0; red += 5; element.style.setProperty("fill", "rgb("+ String(red)+", 0, "+String(255-red)+")", null); if (red < 255) { window.setTimeout(function () { blue2red(element, red); }, 10); } }; function red2blue(element, blue) { /* likewise */ };
View difference to example 3 [SVG enabled browser only].
Now instead of the boring instant switch to red we have a nicely animated fading into the hover color. The script is intentionally kept simple and straight-forward, just two things should be mentioned:
- We can safely use
addEventListener()since the files will not be viewable in IE anyways.
- Instead of directly setting
element.style.fillwe take the detour over
element.style.setProperty(). This is due to limitations in browser support for the former.
So we keep as result:
- Behaviour can be added to SVG drafts with CSSʼs
:hoverand JavaScript.
From SVG to HTML
The goal of all this work is to create a fully functional HTML page that is viewable also in Internet Explorer.
I have already mentioned, that a savvy planning of CSS can save you typing work, if you consistantly use IDs
and classes in SVG and HTML. You can also achieve quite a lot of savings, if you take care in your JavaScript
files, that DOM selection and manipulation are separated. For those of you using CSS selector engines like
Sizzle (in jQuery or Dojo),
$("#page") yields an element in both SVG and HTML.
Next we will need rasterized versions of the vector art. You have several possibilities here. The simplest is, to use Inkscapeʼs export feature via its GUI. You can select a region to be rasterized, and Inkscape will save the result as a PNG file.
My favourite for repeating tasks is scripting Inkscape or Illustrator. Adobe uses JavaScript, so you can put together a small JS file that does the rastering. Inkscape has several possibilities. You can use its command line interface to write a Bash script, or create a Perl, Python or XSLT extension and install that.
Another possibility is using one of the lots of libraries out there handling SVG like rsvg (used in MediaWiki) or ImageMagick. They have usually a clear interface and are designed for this kind of tasks.
SVG as Part of the Final Site
I threatened you, that I will loose a few words about including SVG in the final HTML page as well. Iʼll make it snappy.
In Firefox, Opera, Safari, Chrome and descendants using SVG inside XHTML is as simple as copy-and-paste. You only have to serve the XHTML with the correct MIME type, that is, with application/xhtml+xml, and embed your SVG inside the HTML code. If you are concerned about validation, try out this official W3C doctype declaration:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0 plus SVG 1.1//EN" "">
It gets more complicated, if you want to respect IE. Luckily it also knows a vector graphics format, VML. This is, ironically, one of the two parents of SVG. So the way to go is serving VML to Internet Explorer and SVG to all others.
Here comes JavaScript into play, once again. It serves as an abstraction layer between browsers. If you want to read more on this, I suggest you to take a look at the Raphaël JavaScript library, which is tailored for exactly this task.
Disadvantages of this Approach
Now Iʼve been long enough crazy about SVG as basis for web design. Itʼs about time to get back on our feet. Where lie the problems?
If you have used other vector graphics formats already, or if youʼre using Photoshopʼs vector masks or paths, you will know what Iʼm talking about here: Aliasing. The final web page is rendered on a pixel based display. Imagine, that in your vector graphics program you place a line, 1px wide, not on the pixel raster, but exactly in between. It will land on the final screen as a 2px wide, half-opaque something looking quite nasty. There is no global switch to remedy this behaviour. The only thing that helps is self discipline. Use a grid and force yourself to obey its constrictions.
In SVG it is as simple as adding
rx="10" to a
rect to create
rounded corners around a box. This simplicity however yields a problem to the poor
developer having to create all these rounded corners in IE or Opera. The designer can get
very fast into the nice possibilities within Inkscape and forget about the implementation. But
then again, this also applies to PSD based designs.
There are some minor points where the SVG spec is not completely clear about how the rendering will look like. One example is the use of font sizes without a unit. If you experience weird font sizes, try to give the text elements a definitive size, preferably in pixel.
And finally, what I call the “Lorem ipsum problem”, running text in SVG 1.1 is an issue.
It would have been addressed in the not yet finished SVG 1.2 with a specific element, and
this even is implemented in Inkscape. Unfortunatelly browsers donʼt yet understand it. To
the rescue comes once again the “XML-ness” of SVG. You can change, on server or client side,
the
flowRoot elements into HTML elements, and that the browsers can display
correctly. The next two listings show the product of Inkscape and how a converted snippet
could look like.
<flowRoot style="font-size:12px;"> <flowRegion> <!-- defines the region of the running text --> <rect width="200" height="200" x="0" y="100" /> </flowRegion> <flowPara>Lorem ipsum dolor sit amet.</flowPara> </flowRoot>
View this example [SVG enabled browser only]. It will show a black rectangular.
<foreignObject x="0" y="100" width="200" height="200"> <!-- foreignObject allows for embedding non-SVG content --> <div xmlns=""> <p>Lorem ipsum dolor sit amet.</p> </div> </foreignObject>
View this example [SVG enabled browser only].
As a small gimmick take a look at example 7. This file uses the above feature of embedding XHTML in SVG in a quite nice way and demonstrates the power of the combination of both languages. I found the original idea at Mark Finkleʼs blog.
Summary
In this article I used SVG to create a website design in a vector drawing program. This SVG file was then modified with straight-forward CSS and DOM manipulation to create an instant prototype out of the wireframe.
This approach was successfully used to develop a design for Planet SVG and implement it as a Drupal theme.
If you already use, say, Adobe Illustator to do your web design, you can quickly try the suggestions in this article by exporting one of your designs as SVG and experimenting with that. However, if you are on the raster graphics train, all I suggest is, give vectors a chance. | http://www.manuel-strehl.de/dev/wireframes_with_svg.en.html | CC-MAIN-2016-44 | refinedweb | 3,020 | 63.9 |
Summary: Add details on ipc facilities to linux PMDA Product: pcp Version: unspecified Platform: All OS/Version: Linux Status: NEW Severity: normal Priority: P5 Component: pcp AssignedTo: mort at sgi.com ReportedBy: mort at sgi.com CC: pcp at oss.sgi.com Estimated Hours: 0.0 Classification: Unclassified We provide a summary of the global statistics related to IPC facilities in the ipc.* namespace, but we don't provide any details about the individual semaphores, shm and message queues that are in use in the system. This can be seen via "ipcs -s". I'm not sure what the backing source of the information is. -- Configure bugmail: ------- You are receiving this mail because: ------- You are on the CC list for the bug. | http://oss.sgi.com/pipermail/pcp/2010-March/000847.html | CC-MAIN-2017-17 | refinedweb | 122 | 50.12 |
Advanced code editingcommunity help Jun 24, 2012 1:40 PM
This question was posted in response to the following article:. html
1. Re: Advanced code editingOmita Jun 24, 2012 1:40 PM (in response to community help)
Are there plans to support adding New File Templates in FB 4.6? It seems impossible currently to add a New File Template to FD. Even if you export a simple 'ActionScript Class' template to XML, changing the 'id', 'context' in the XML and import the file you don't get a new File Template in Flash Builder. The documentation has the word 'Add Template', but right now only altering existing File Templates exist.
2. Re: Advanced code editingGuriya Kalyani Jun 24, 2012 11:40 PM (in response to Omita)
As mentioned in the documentation, 'Add Template' is available only for Code Templates.
When it comes to file templates, they can only be edited. Same can then be exported as xml.
When you create a new ActionScript class, the template is picked up from here. However, if you have 2 templates for the same, which template would you want your newly created file to have?
Can you please mention the usecase you need this for? Thanks.
3. Re: Advanced code editingOmita Jun 25, 2012 12:00 AM (in response to Guriya Kalyani)
Ideally the ability to Add and Remove File templates is the functionality I'd like to see. I guess it's a feature request and not a bug.
As far as a use case, often you want to create a specific Action Script class that has certain pre determined code. IE a Singleton class is just a class but it has a GetInstance method and static variable. Or a RobotLegs command is just a class with an execute method. Right now I could just create a new class and use a Code Template, however, custom file templates would speed up the process and minimize clicks. With a framework like RobotLegs you might be adding 6 or 7 commands at a time. File Templates would minimize risk of human error when setting up the classes.
Over all it's a feature I have enjoyed inFlashDevelop that is very useful and I was surprised that it wasn't included in Flash Builder 4.5 considering how strong the Code Templates are.
Cheers,
-Hays
4. Re: Advanced code editingGuriya Kalyani Jun 25, 2012 3:20 AM (in response to Omita)
This indeed looks like a Feature Request
Kindly raise one at and the team would look into this.
5. Re: Advanced code editingcosmacol May 8, 2013 4:29 AM (in response to community help)
The doc for $ says that it "Resolves to the namespace definition, based on the project's Flex SDK type and the namespace prefix defined in Preferences."
I can't find any preference items that allow to customize namespace prefixes (e.g. adding my own). | https://forums.adobe.com/message/4518145 | CC-MAIN-2016-44 | refinedweb | 484 | 70.13 |
10 May 2012 11:27 [Source: ICIS news]
SINGAPORE (ICIS) --?xml:namespace>
The prices of partially oriented yarn (POY) 150D/48F were at yuan (CNY) 11,150-11,250/tonne ($1,767-1,783/tonne) on 10 May, according to Chemease, an ICIS service in China.
The inventories of PFY factories are at 13-26 days’ worth, according to Chemease data. This is higher than the 15 days’ worth that most producers deem as a safe level.
PFY prices are not likely to increase because of the high inventories, despite demand from downstream factories picking up slightly in late May for restocking activity, which typically occurs at the end of each month, an industry source said.
The supply from PFY factories remains stable in May as the operating rates of PFY factories are being maintained at 76% to meet demand from downstream factories, according to Chemease data.
There are no signs that demand from textile factories will increase in May, according to industry sources.
The average operating rate of downstream textile factories is at about 70% currently, according to Chemease data, and there are no plans to increase production, said downstream producers.
At the 111th Canton Trade Fair, which was held on 15 April-5 May in Guangzhou, the total amount of transacted business was $36bn (€28bn), down by $1.05bn from $37.9bn at the previous fair on 15 October-4 November 2011. This is the first decrease since 2009, according to the website, indicating that demand for Chinese products in general from abroad has weakened.
($1 = €0 | http://www.icis.com/Articles/2012/05/10/9558141/chinas-pfy-prices-to-remain-largely-stable-to-end-may.html | CC-MAIN-2015-14 | refinedweb | 259 | 59.33 |
The program below loads and saves text files locally in several browsers, though sadly it requires a little bit of custom code for each.
To see it in action, copy it into an .html file and open that file in a web browser that runs JavaScript. Or, for an online version, visit.
UPDATE 2016/06/13 – I have updated this code to reflect the fact that the window.URL object is no longer experimental, and thus is not invoked using different names in different browsers. The code is a little cleaner as a result.
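For readers without the full listing handy, the save half boils down to something like the following sketch. The function and element names here are illustrative, not necessarily the ones used in the full program:

```javascript
// Pure helper: wrap a string of text in a plain-text Blob.
function makeTextBlob(text) {
  return new Blob([text], { type: "text/plain" });
}

// Browser-only: build a temporary download link and click it programmatically.
function saveTextAsFile(text, fileName) {
  var textFileAsBlob = makeTextBlob(text);
  var downloadLink = document.createElement("a");
  downloadLink.download = fileName;
  downloadLink.href = window.URL.createObjectURL(textFileAsBlob);
  document.body.appendChild(downloadLink);
  downloadLink.click();
  document.body.removeChild(downloadLink);
}
```

As a commenter points out further down, the object URL should eventually be released with URL.revokeObjectURL() once the download has started.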
This script is just what i have been looking for, thanks a lot.
And as written there: This Could Be Better,
i am now working on it.
Regards,
Hadi.
Thank you very much for this script…
Thanks, very simple solution…
The saving file function is not working, need help
The “download” attribute of the “a” tag only seems to work on the Google Chrome browser. Since “download” wasn’t supported on other browsers, when I originally wrote this script I didn’t even bother to make sure it worked on other browsers. I’ve just updated it so that the save code actually gets executed in Firefox, but it still doesn’t work, because Firefox still doesn’t seem to support the download attribute. Sorry for the inconvenience.
Yes sir I’ve used firefox browser thanks a lot
Actually, now that I’ve looked at it some more, it appears that Firefox actually is supporting the download attribute now. It’s hard for me to figure out from their bug tracker when exactly this happened, but it’s possible that it was just last week. In any event, what was actually causing my recent update to fail on Firefox was that calling “.click()” on my “a” link wasn’t doing anything. I have added a workaround in the code for Firefox that will append the download link to the DOM after the Save button is clicked. If the user then manually clicks the link, the download will proceed, and the link will be removed. Repeat as necessary. It’s not perfect, but maybe it will help…
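In code, the fallback described in this reply might look roughly like the following sketch (the visible link removes itself once the user has clicked it):

```javascript
// Firefox fallback sketch: append a visible download link to the page and
// let the user click it manually; the click handler then removes the link.
function offerDownloadLink(textFileAsBlob, fileName) {
  var downloadLink = document.createElement("a");
  downloadLink.download = fileName;
  downloadLink.innerHTML = "Download File";
  downloadLink.href = window.URL.createObjectURL(textFileAsBlob);
  downloadLink.onclick = function (event) {
    document.body.removeChild(event.target);
  };
  document.body.appendChild(downloadLink);
}
```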
this is a fantastic. just one thing I i am truing to change the output position of the download link. I would like it to appear by a button but its forming at the bottom of the page is there anyway to change this ?
Sure. Try setting the “id” attribute on the element you want to insert it before to “elementToInsertBefore”, then replace the call to “document.body.appendChild(downloadLink)” with something like:
document.body.insertBefore(document.getElementById(“elementToInsertBefore”), downloadLink);
Irritatingly, there is no “insertAfter()” method, but according to the Internet, there’s ways to fake it.
Sigh… actually, it looks like the arguments in that call to “insertBefore” in my previous comment are reversed. Sorry about that.
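Putting the correction together, a version of that call with the arguments the right way round might look like this. The ID "elementToInsertBefore" is whatever you assigned; going through parentNode also works when the reference element is not a direct child of body:

```javascript
// insertBefore(newNode, referenceNode): the new node comes first,
// the reference node second.
function placeLinkBefore(downloadLink, referenceId) {
  var reference = document.getElementById(referenceId);
  reference.parentNode.insertBefore(downloadLink, reference);
}
```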
Absolutely awesome. Thx for this… With a bit of correction it validates strict, too ,-)
Helped a lot to improve my clientside SourceCodeComposer. Nice piece of JavaScript.
Hopefully it will be supported by Opera and Safari soon… Cheers, Guerteltier (Pango)
sir i need a code using javascript by clicking the menu open window want to display and by clicking the open the contents in the file want to display in text area …….. plss help me sirr.. its urgent .
pls anyone help me…… for cut copy paste inside textarea using javascript by clicking on the link
Thank you…
This is exactly what I was looking for!! 🙂
hi mira
First off thanks for this post… really awesome! Can anyone identify exactly what I have to change so I can use the download feature for a phone? Works on home computer, but it fails to grab the title and it also downloads the html instead of text.
I will keep looking around for a fix.
Major minor victory. I have discovered: for whatever reason when you download the file and try to use the phone native app to views the file, which is unfortunately in my case a code editor. By the way I am using my optimus G as my testing environment. It just shows the code and not the files contents nor the file name. But, if you use the dropbox editor it will show your note and the file name. I don’t know why dropbox will view the saved file the way it is supposed to be viewed and the androids stock file editor shows the back end code.
Still trying to find an alternative that doesn’t rely on a text editor to view the file, but thought I would mention it. You can at least us the file now whereas before you couldn’t. So, yay!
I have been testing out a php solution, which is thus far working perfectly. When I perfect it I plan on sharing.
My linter is telling me I’m an idiot for not closing my button tag, which makes me ask. Why do you open a “button” and close it as an “anchor”? Seems logical that this should not work.
Well, it USED to be an anchor, if it makes you feel any better. I’ll fix this when I get a moment. Thanks for your conscientious linting. You’re a better person than I am.
Firefox hack for auto click:
function saveTextAsFile() {
var textToWrite = document.getElementById("inputTextToSave").value,
textFileAsBlob = new Blob(
[textToWrite],
{ type : 'text/plain' }
),
fileNameToSaveAs = document.getElementById('inputFileNameToSaveAs').value,
downloadLink = document.createElement("a");
downloadLink.download = fileNameToSaveAs;
downloadLink.innerHTML = 'Download File';
if ( window.webkitURL != null ) {
// Chrome allows the link to be clicked programmatically.
downloadLink.href = window.webkitURL.createObjectURL( textFileAsBlob );
downloadLink.click();
} else {
// Firefox requires the user to actually click the link.
downloadLink.href = window.URL.createObjectURL( textFileAsBlob );
// this hides it
downloadLink.style.display = 'none';
downloadLink.onclick = destroyClickedElement;
document.body.appendChild( downloadLink );
// now it can be auto clicked like in chrome
downloadLink.click();
}
}
Thanks! I have incorporated your fix into the code and updated this post accordingly. I appreciate the help.
this is my coding. after saving my file the blank space and new lines are filled with (null) character .. while running in java compiler pls.. help me. how to solve this problem
function SaveVarAsFile(te)
{
SaveFrame.document.open("text","write")
SaveFrame.document.write();
var s = document.forms['te'].elements['T'].value;
SaveFrame.document.write(s);
SaveFrame.document.close();
SaveFrame.focus(te)
SaveFrame.document.execCommand('SaveAs', true, 'shape.txt');
}
Save
Hi, i´m needing so much your help. I used your code and i thought it amazing and perfect. I´m needing your help to solve a doubt:
How can i save directly to a specific file? For example, i don´t want the textfield where the person puts the name of the file, i want that when he finish to write a text, he save in a defined(by me) directory and file, instead of he choose where he wants to save.
Pls help me with this, i´m needing your help.
Since now thanks
While it would probably be easy, even trivial, to set the filename from code rather than from the text box, I’m pretty sure there’s no way to save the file directly to the user’s filesystem without going through the download dialog. That’s intentional, because it prevents people from just installing random viruses on your computer when you visit a webpage. Sorry!
Just found this page, as it is a problem I have long looked for answers to. I entered a path/filename, but the file saved to W10 Downloads.
To answer the point above, why not create a function containing the filename, and have a click box to add it to the filename textarea?
I want to keep a lot of simple data in TXT files, and have instant access to them to make changes.
I break my information down into very small ‘chunks’, so I never have to wade through loads of stuff to find what I want to alter.
These files are then assembled by a script into an HTML page.
I hate having to click through Windows Explorer to find files.
The whole POINT OF COMPUTING, is to prevent having to re-invent the wheel, when one piece of code does it for you.
Excellent
1
Hi sir,
Actually i am try to little bit more. i am trying to read the text file and display in table form. but i am facing problem that i can’t edit the text in the file using your this script. so what should i do for edit the the files.
thanks if you can help.just let me know how this script allow to do modification.
means what i am doing that. i am scanning roll no name and marks from html page and try to insert it in my text file. but using this script it is replacing my older text.
if you can help.
Thanks for such a good help. Could you please tell how can we run it on safari(mac)?
Hi. I’m quite newbie with js and I can’t understand, how I’ll do the following.
I have made few small games and I want to save hiscores to .txt file locally.
I can’t get this great example working with paths like c:\temp. How can I change reading / writing path? Now it saves to downloads-folder. And when I try to read, I can’t set path in code to downloads folder.
Thanks. Lots of thanks. I wasted so much time reading other convoluted, poorly explained and badly coded examples that ended up not working anyway. This was the one piece missing from the javascript puzzle.
And also remember to call URL.revokeObjectURL()
i dont understand where ur using browse button in html and no event for browse button in javascript
Well, technically, the “Browse” button isn’t really a button. Yeah, I can see how that might be a bit confusing. It’s part of the input named “fileToLoad”, because its type is “file”:
<input type="file" id="fileToLoad">
When you click it, the browser itself handles that event, not any JavaScript that I’ve written.
what does these lines represent
1. document.createElement(“a”); what is this does it create a anchor tag??
2.document.getElementById(“fileToLoad”).files[0]; what is tat files[0]????
3.fileReader.readAsText(fileToLoad, “UTF-8”);
1. Yes, it creates an anchor tag.
2. When you click the Browse button and choose a file, it goes in the “files” array of the “fileToLoad” input element.
3. This instructs the fileReader object to load the contents of the file as text in the UTF-8 format.
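Assembled into one function, the load half sketched in this exchange looks roughly like this (browser-only; names are illustrative):

```javascript
// Read the first file chosen in the given <input type="file"> element as
// UTF-8 text, then hand the resulting string to a callback.
function loadFileAsText(fileInputId, onTextLoaded) {
  var fileToLoad = document.getElementById(fileInputId).files[0];
  var fileReader = new FileReader();
  fileReader.onload = function (event) {
    onTextLoaded(event.target.result);
  };
  fileReader.readAsText(fileToLoad, "UTF-8");
}
```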
ok i tried to do like this .document.getElementById(“fileToLoad”).files[1]; its not inserting anything?will it load only in Zero th index???
Yes, presumably the files[1] would only be available if you somehow selected two or more files in the file picker dialog that appears when you click Browse. There may be some setting you can set on that input to allow you to select multiple files, or maybe you can just Ctrl-click multiple files or something. I’m not sure.
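For what it's worth, adding the standard "multiple" attribute to the input (that is, <input type="file" id="fileToLoad" multiple>) does let the user Ctrl-click several files, at which point files[1] and beyond become meaningful. They can then be read in a loop, roughly like this:

```javascript
// Read every chosen file as UTF-8 text; the callback fires once per file.
function loadAllFilesAsText(fileInputId, onTextLoaded) {
  var files = document.getElementById(fileInputId).files;
  for (var i = 0; i < files.length; i++) {
    var reader = new FileReader();
    reader.onload = function (event) {
      onTextLoaded(event.target.result);
    };
    reader.readAsText(files[i], "UTF-8");
  }
}
```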
I love this! Thanks Alots! 😀
instead of user selecting the file, i want to directly pick a file which is in the same folder as my html and display its content, how to do that, pls help.
Hi San. Although thousands want universal browser read privileges to return (what you describe), it has been eliminated due to security concerns. Maybe someday, a localFile web page () will be able to read another localfile IF it is from the same domain/baseURL… without needing to select it with a filePicker. I hope. So do you. Keep your fingers crossed. 🙂
“TypeError: window.URL.createObjectURL is not a function”
I got this error with Firefox, but it is working smoothly in Chrome, pls let me know what is the reason I am getting this above error as soon as possible.
I use your code to save a html file (change type text/plain to text/html and use .html extension), when I try to open locally with internet explorer styles files don’t load (local style css file) this hapens just with explorer, with other browser like chrome is ok. If I open the saved html file in an editor and save’s (not change anything) and open again with explorer work ok, I thing there is a problem with the api file to blob the code in html type, maybe is malformed o currupted. If you o someone know how to fix this problem y apreciated to mucho if is shared, thank. (English is not my language sorry by the mistakes)
Fantastic… thanks so much.
I have used/modified your code… I throw a popup window to the user to provide a filename to read (or write), then pass the results back to my read/write routines. When they provide a filename (or select from the “file” dialogue), I tell it to close the window. This works great.
The only issue I had was (using Firefox) I had to remove the “destroyclickedElement” command or it would not work.
Epic, many thanks for that!
Hi everybody, i release the code only allow us to save 1 data into a text file. if i want to save multiple data, how do i code it?
Thank you so much! I have been searching for hours and your code works beautifully
Hi
Thanks for this wonderful code. Can it be customised to work on safari(mac) as well?
Thanks
How can I give it a path to where the file will go?
I don’t think you can. That would require the program to have knowledge of the user’s filesystem, which is probably a security no-no for the browser. I could be wrong, though.
This input statement that fetches the file before it is read is causing me a problem. I can write the file fine, but I do not seem to be able to get the name of the file =”fileToLoad” from the input function to my read the data function. Is there any way to invoke this input function from inside javascript so I can just create a global variable that the input function can share with the read function? In other words can this be written as a javascript function that can be called? Thanks. The file it writes is wunderbar.
Thanks for any suggestions.
can we specify the path in here, when I click save button, it download the file in download folder.
I want to change the path
thank for the code. how to set the downloaded file path so that i can easily know? thank you in advance..
Searching for the same thing…did you get an answer?
How to save this file onto a particular directory?
This has been a popular question lately. I’ve tried to respond no fewer than two times, but for some reason my response keeps getting eaten by the comment system.
At any rate, I don’t know any way to specify a particular directory to save your file. I tried to get that working when I first wrote this program, but I was never able to figure anything out. My feeling is that leaving this functionality out is probably a security feature, since otherwise they’d have to give the browser knowledge of the user’s filesystem, which may be considered a no-no. But that’s what they always used to say about allowing web pages to save files to the local system in the first place, so who knows.
Anyway, if you do figure this out, please let me know how…
Hi
Just copy and pasted your code it’s working fine. Thanks for the code. I was looking something like this. Basically am a designer, just have little bit of programming knowledge only. I want save user input on the same page rather creating a file and downloading it. Is it possible? Can you help me
Well, this is probably not what you really want, but it technically does what you requested. You’d probably be better off just learning to program, though.
<html>
<body>
<input id=”inputTextToSave” />
<button onclick=”saveText();”>Save</button>
<p id=”pTextSaved” />
<script type=’text/javascript’>
function saveText()
{
var inputTextToSave = document.getElementById(“inputTextToSave”);
var textToSave = inputTextToSave.value;
var pTextSaved = document.getElementById(“pTextSaved”);
pTextSaved.innerHTML = textToSave;
}
</script>
</body>
</html>
Hi This was very useful for me. It was working fine in Chrome and in IE i was getting access denied. When i google’d further i got this soultion. for IE you need to add few lines of code in between, please find the modified function below which works on IE, Chrome & Firefox. Hope somebody finds it useful.
function saveTextAsFile()
{
var textToWrite = document.getElementById(“inputTextToSave”).value;
var textFileAsBlob = new Blob([textToWrite], {type:’text/plain’});
var fileNameToSaveAs = document.getElementById(“inputFileNameToSaveAs”).value;
var browserName=navigator.appName;
if (browserName==”Microsoft Internet Explorer”)
{
window.navigator.msSaveBlob(textFileAsBlob, fileNameToSaveAs );
}
else
{();
}
}
The URL for IE Blob is
Vignesh you little beauty….
Thanks a lot for the solution in IE…
1..After Editing how to override the browse file with same path in our PC.
2..Can you change this code to typescript.
Advanced thanks
Hi there, thank you very much for such a wonderful code. I just want to ask you a question, is there a way to change the “document.getElementByid” to “document.getElementByClassName”. The reason I want to do that is because I want to add other textareas and save all of them in one file. Thanks in advance 🙂
That doesn’t sound too hard. You might be able to just replace the first line of saveTextAsFile() with something like this:
var textToWrite = “”;
var inputsContainingTextToSave = document.getElementByClassName(“GeorgesTextareaClass”);
for (var i = 0; i < textareasToSave.length; i++)
{
var textarea = textareasToSave[i];
textToWrite += textarea.value; // .innerHTML?
}
Something like that, anyway.
when you save the file it loses the basic formatting of line breaks. The text will then save if a line break has been added without any space at all. Anyone know how to fix this?
The line breaks are still there, but they’re Unix-formatted rather than Windows-formatted. Unix uses just the “linefeed” control character to encode line breaks, which is commonly encoded as “\n”. Windows, on the other hand, uses a carriage return AND a line feed to encode line breaks, which is represented like this: “\r\n”. So in order to save the file with Windows-style line breaks, you’ll need to add something like:
textToWrite = textToWrite.replace(“\n”, “\r\n”);
Alternately, you could open the saved text file with a program that understands Linux-style line breaks, like Notepad++.
I am trying to have multiple text boxes that i can save. As it is using document.getElementById it will save only the first one. When changing to document.getElementByClassName it doesn’t work
how can I do it so I have more than one on a page?
thanks
This is kind of Programming 101, but since getElementsByClassName() returns an array (list) of values rather than a single one, you’ll need to loop through each value in the array and perform whatever operation you want to do on each of them. Like:
var elements = document.getElementsByClassName(“someClassName”);
for (var i = 0; i < elements.length; i++)
{
var element = elements[i];
// do something with the element
}
Or, now that I look at it, you might just go look at my response to George a few comments back, on 04/21. I completely forgot I wrote that.
Hi
Firstly thanks a lot for a wonderful solution.Worked like charm for my current application’s requirements.!!
I would in addition to the text being saved as .txt file needs to save it as PDF too. Would really appreciate, if you can guide me for the same.
Thanks
Keshav
Thanks for the article! However, there’s something I don’t understand: why isn’t “reader” or “this” used instead of “fileLoadedEvent.target” while they all refer to the “FileReader” object:
Awesome! Thank you…
I was looking for such a simple solution and you did it!
Cheers
Pascal
Any way for save textarea file upload into dropbox api?
()
Hi Sir, I have added learned a lot from this blog and this was very much help full in learning to make a bookmarklet. thanks.
Which is the license for code of the functions?
There’s no license. You can use it as you like.
There’s nothing particularly clever or novel about this code, I’m just demonstrating some of the bare functionality of HTML5 in the simplest way I can. I don’t believe in software patents, and I consider this too simple and generic to even be copyrightable. Have fun.
Hi there,
Can you please help me find solution for dynamically creating a html file, consisted from textarea value, and pass it as source to an iframe
hello guys, help me for this script…
how i can save as .txt file
I was try this script is working, not .txt file but windows file.
For button is:
tsk tsk tsk!, thats why i love html5, thanks a lot for this code, , cool men, , it helps a lot. 😀
The Code is fantastic, no doubt.
But I am facing one problem when I insert multiline string in the text area and save it.. It generates the text file with no line break. May be the blob is not saving newline in the file..
so is there any way to save it the way we want.
For Eg: if text area contains the following.
“Hello Everyone
This is JavaScript”
Then it generates the resulted file containing text like:
“Hello EveryoneThis is JavaScript”
Please help if anyone can….
It’s been a long time since I’ve looked at this code, but I think probably what’s happening is that the text is being saved using just a linefeed as the line break character, which is how they do it in Unix, rather than with a carriage return character AND a linefeed character, which is what Windows is expecting. Try downloading Notepad++ and opening the file there, and see if it looks right. To fix the problem for Windows, you’d probably use something like “textToSave.split(“\n”).join(“\r\n”) to replace the linefeeds with carriage return/linefeed pairs.
Hi,
This code work perfect in Chrome and firefox. I want this same implementation for IE 9+ browser also. is it possible ?
Sorry,I have just one question,and please for answer,because I just need that solution.How to add an option which adds the text in the already exciting .txt file?
It sounds like you want to do that automatically, without the user having to mess with the Save dialog. If so, I don’t think that’s currently possible. All you can do is load the file, add to the end of it, click Save, and then choose to overwrite the same file in the dialog. Sorry!
No,Save dialog is totally ok,I just need to text be saved to same file.But,what happend-when I want to overwrite the same file,that is impossible,because computer automatically made same .txt file with extension of (1),or (2)..
For example:I have test.txt,when I load it,write something at the end,and click save,he made test(1).txt..
Is there any solution?
Hi,
Thank you for posting this code. It really helps me. Now, My requirement is The text file will automatically save into the drive,without asking file name and location to save.
If it possible please solve this issue.
Thank You…!!
No, I don’t believe this is possible, as such functionality is widely considered a security risk. Maybe someday web browser developers will decide on some compromise to make this feasible, but I wouldn’t hold my breath. Sorry.
Wow an awsome codes that helped me in projects
I want to use just the upload part of the code, as I already have a save function that saves the txt as a word doc. Using the code as it is currently breaks my system.
Put your code in a gist at GitHub.
Are you using Internet Explorer? Because this code doesn’t work in my current version of IE. But it works in my versions of Chrome and Firefox, which I believe are more-or-less up to date.
No, I’m using Chrome and FF.
Superb code. I’ve been looking for something like this for two weeks. I have a program that outlines and manipulates text and then codes the text into javascript variables and saves the file locally with a .js extension. Then I edit that file in a different program using your code to open the .js file, read the js variables, and save the outside file to the inside file (the one inside the directory where the editor is located) so that in the editor program can read each of the variables. Works beautifully offline (as long as I remember to refresh the browser). My question is about what happens when I put the editor online.
If I have five or six students using the editor simultaneously (each one of them rewriting the current.js variable file on the server), what happens (if anything) to their variable text? I assume their text remains theirs for the rest of the browser session, but what happens on the server if two or three people rewrite the current.js file simultaneously and wait too long to refresh the browser (I’ve included a refresh button below your code)? Will they load a different person’s variables?
I’d love to bypass the altogether and just read the variables from the external file without having to save it on the server, but for the life of me I have no idea how to do it. Sorry for being so long winded. Any suggestions would be appreciated.
And again, thanks for the elegant code.
Not sure whether this is what you need, but you can use the eval() function to execute the contents of a string as JavaScript. For example:
var thingToSay = “If you’re seeing this, it worked!”;
var functionAsString =
“function saySomething(whatToSay)”
+ “{”
+ ” alert(whatToSay);”
+ “}”;
eval(functionAsString);
saySomething(thingToSay);
it helps me thanks alot man!!
After days searching the Internet – Eureka!! I found your solution.
It works!! and it does ‘what it says on the tin’. Many thanks.
Hi, Interesting I want to set a fixed txt file and display its content too. I read above that it’s not possible due to security reasons. Is there anyway to get using a library like JSON or something like that? Thanks
Thanks for such a woderful code But each time when we execute the code it will create copy of file how to overwrite file at each time of execution
i want the code to modify and save to the same text file
Pingback: Como manter indentação e quebras de linha ao salvar um arquivo texto via javascript | DL-UAT
Thanks,
It’s works.
thanks and the code works better…
Can anyone temme, how to process the text file that gets download with java and to show the result again in webpage??
@Hello sir,, thanks alot for posting this. this is what exactly am searching..!!! but i need this code with some slight modification. i want to upload an image file and save it in local machine… could you pls help. well let me explain clearly..!!!!
-i want to upload an image file using button. i did it sir. but it only shows the file name next to the button only.
-Now i want to save the uploaded file in my”c:\” location with separate folder – named as “uploaded image files” after clicking submit button with “your file uploaded successfully” message.
then i want to store the same data in server database as well.
. finally, i need to download that same file from server database.
hope you understand and help me to accomplish my university project sir,
Hi All,
I have a requirement like i have to save the browse selected file to downloads with the same file type extension (example a .apk or .ipa files should be downloaded as same file type)
Hi,
You could use the following syntax:
See also
Cheers
Solution has been removed during the process.
Use the ‘accept’ attribute in the input tag
i.e. input type=”file” id=”fileToLoad” accept=”.apk”
Reblogged this on In My Words.
Fantastic … Exactly what I wanted … Thank you …
Why is this code not saving line breaks and indentations? Do we have any solutions to that? Thanks again.
I don’t know about indentations… maybe pressing tab is simply moving focus to the next control, rather than inserting a tab character? But as for line breaks, it IS saving them. It’s just saving them as Unix-style line breaks, rather than Windows-style ones. See previous comments for details.
Thanks . This was very helpful and exactly what I needed
Thanks!
Could this be used to save data input from a simple HTML form that I created?
I created a form that I want to save the data from as a txt document would the above posted code be used for this or is that not possible?
Thank you for your code example 😀 Its exactly the kind of simple file save/loading I wanted.
Your whole site is a treasure trove of good example code, much appreciated!
Simply great
Pingback: Saving text in a local file in Internet Explorer 10 - BlogoSfera
Pingback: rotatengine.js | bthj
hello,
This is really cool, but can you help me and tell me how to apply it for multiple inputs? which i guess will use CLASS instead of ID, but when ever i use “getFileByClassName()” in this script it saves me a file named “undefined” and with the word “undefined” inside the text file.. so how can I save all (more then a 100) text field using this script. And is it even possible?
Well, for starters, I don’t think there is a JavaScript function named “getFileByClassName()”. Maybe you want to try “getElementsByClassName()” instead? After you have all the the fields, you’ll probably just loop through them and concatenate each to the end of a string, then save that to a file.
um yeah I’m sorry, I meant “getElementByClassName()”……ahaa okay can you please provide more help? with my 3weeks experience of Javascript I don’t believe that i fully understand how to do what you said! although to mention I know the “for” and “while” loops… but can you provide more? and sorry for any inconvenience sir. but I really need it and it would be appreciated..
Okay, so it’s really hard to give HTML code examples in these comments, because WordPress handles them out unpredictably. But basically, replace the single textarea element from the code in the article with a bunch of inputs that all have the same class, for example, “inputTextToSave”. Then replace the first line of the saveTextAsFile() function with these lines:
var textToWrite = “”;
var inputsTextToSave = document.getElementsByClassName(“inputTextToSave”);
for (var i = 0; i < inputsTextToSave.length; i++)
{
var inputTextToSave = inputsTextToSave[i];
var textFromInput = inputTextToSave.value;
textToWrite += textFromInput;
}
Incidentally, if you’re a beginning programmer, I did post a .pdf booklet last year that teaches the rudiments of programming JavaScript, at least the way I do it. I realize nobody reads books to learn coding anymore, but if you’re interested anyway, see.
Thanks a lot.. but it didn’t work.. when ever I press the “save text to file”.. nothing happens.. what do you think ?
Thanks a lot for the pdf tho, it seems really help full.
Thanks a lot. You killed it man 😀
When i give save it should ask where to save to user instead of saving automatically how its possible..
If the file is being saved without asking where you want it, it’s likely because your browser is set up to just automatically download everything to your specified downloads folder. So if you want to choose the save location, you’ll probably need to change your browser’s settings. As far as I know, there’s no way to override that behavior with HTML5 or JavaScript. Sorry.
How to make it such that I should ask where to save instead of auto saving…
Please some one help!!!…What should i modify in the code so that instead of .txt file,.zip file can be uploaded and downloaded locally….
Instead of .txt file, can I use .zip file….If so please can you give me the modifications of the above code…
As far as I know, there’s no really simple way to change this code to save a ZIP file. You might want to check out JSZip, which is a JavaScript library for working with ZIP files. I make use of this library in this post. That code only reads from a .ZIP file, rather than creating one, but I think JSZip supports creating ZIP files as well.
Thank you so much! I’ve been looking for this–trying to replace an ActiveX FSO because Edge doesn’t do ActiveX (neither does FF or Chrome or Safari or Opera). Two weeks of searching, and your code just works!
Hi ,
My requirement – Had a “test.txt” text file in local directory.
Also had a button “Save” in HTML . When clicked it , some text need to be append to the “test.txt” file. Can we achieve this using Javascript.
Note- I am fetching the file path and filename before click.
Thanks in advance . Let me know if anyone can help me.
Krishna.
No, as far as I know you cannot automatically write to a file on your local filesystem without going through some sort of dialog. Sorry.
Thank you. I have started to work on an xml converter, and almost found no way to save the converted file on the local HDD.
Everything is fine. But it is not working for Unicode text, for example : Unicode based Hindi Language text is not displaying properly.
Pingback: TF / Modulos: guardar archivo – almendro
Hello Sir,
This code really very helpful for us but sir I cant save my image , please will you give the solution.
Thank you.
I’m not quite sure exactly what you’re asking, but if you want to load, edit and save images you might take a look at one of my other posts, which includes the code for a simple image-editing program:.
Hey,
Like a bunch of others here, I want to save some text to an existing file that would server as a simple, local ‘database’. So instead of creating and downloading a new file, edit an existing one.
I’ve read all your answers to the other questions, and you mention that it’s not possible due to security reasons.
However, according to various sources, it should be (at least now, anno 2016) possible under certain conditions. From what I’ve read it’s possible if:
– you use a ‘blob’;
– Chrome is used, opened with –allow-file-access-from-files as a parameter.
Since this post is from 2012, and the most recent comment is from 2014, I’d figure things may have changed by now, and perhaps also your knowledge about this. You seem to have a good know-how about this stuff, so would you mind giving it another look? (You’d make me very happy 🙂 )
Cheers
Thank you so much. This is exactly what I was looking for. I wanted to use this method to save (export) the Data of an AnuglarJS form into local text file (json format), and allow the user later to import the file. I am sure that this approach will work.
I am very new to HTML and all web technologies… This content gave a shape to my thoughts. Now I can achieve my logic with this. Thanks A lot…
Do not ask me what was my thought to implement.. 🙂 😛 😀
Hello, I want to fetch contents of text file and store it to variable of javascript and I tried the following code, by using iframe in html and store it in one variable for login purpose. When I login first time in all browser its work perfect but, after login first time the issue is when I update the files i.e. username.txt and password.txt then the all browsers shows alert of error message of “username or password is wrong” I noticed that all browsers stores older value, its not update latest changed value from files. I tried all methods like removing cache. Plz can any one help me.
when i refresh firefox,IE it works but chrome needs 20-30 refresh.
YouFi Router
Welcome to Internet Service
Username :
function check(form)/*function to check userid & password*/
{
var id=document.getElementById(‘username’).contentDocument.body.firstChild.innerHTML;
var pass=document.getElementById(‘password’).contentDocument.body.firstChild.innerHTML;
window.location.replace(“index.html”);
if(form.userid.value == id && form.pswrd.value == pass)
{
window.location.replace(“index.html”);
window.open(‘/cgi-bin/shell’,’_self’);
}
else
{
location.reload(true);
window.location.replace(“index.html”);
alert(“Error Username or Password is wrong”);
}
}
Thank You,
Vishal.
can you please tell me the code for saving form data to a text file using javascript
I used shell script to store data in txt file.@palak
Hero of the day, Thanks
I am probably missing the obvious but.. how do you modify this code to work as a bookmarklet that I can put in the browser toolbar and then, whenever I visit a webpage… I select some text, click on the bookmarklet button, and I get a dialog window that lets me save that selected text into a local file? Thanks!
Hi Thiscouldbebaetter,
Awesome script, thank you very much.
I’m hoping you can help me if it’s not too much to ask.
I have a form with several checkboxes in it.
What I’m trying to do is for each checkbox ticked I’d like to send the value to a text file.
Is this possible?
Sure, it should be easy. It’s a little difficult to put HTML in a WordPress comment, though, so bear with me.
If your checkbox is declared like this: “[input id=”checkboxYesOrNo” type=”checkbox”][/input]” (with angle brackets rather than square brackets), then you can get its value in the JavaScript with “var yesOrNo = document.getElementById(‘checkboxYesOrNo’).checked;” Then you can use that value to build a string, and save the string to file with the script from this post. Good luck!
That works great thank you.
I have one more problem if you don’t mind helping (I’m not too good with javascript yet. Still learning). I want to use this script to make a playlist file.
The problem I’m having is the link is formatted as [track][location]path to file[/location][/track]
I can’t get the [/track] to show up.
Sorry the problem is with code I’ve added. Yours works fine
Reblogged this on COFFEE | PAPER.
Thank you very much
Got this working great now, thank you. Is there any way to clear the text area after the file has been written? Automatically would be great, but a button would be ok too.
just put a line before document.body.removeChild(event.target);
document.getElementById(“inputTextToSave”).value = ”;
This is a great code a piece of cake script..thank you very much..very helpful..
it’s really superb
but can i load edit and save the same file instead of downloading the separate file every time, please give me the code for this, if possible 🙂
If you want to save to the same file, you can manually select that file in the dialog. There’s still no way to do it automatically that I know of.
Manually selecting
a file you use frequently
renders useless
the whole point of computing.
Surely
the function of computing
is to save time
by never needing to reinvent the wheel
each time you want to make a journey?
The best solution I have yet found
is in Notepad++
( settings / preferences / MISC / Clickable link settings / Enable ),
where one can use links within a .TXT
to open text and HTML files.
One can also open folders in Windows Explorer
from within a .TXT file.
This function with WE
removes the necessity
to go to the drive root,
and up to the folder you want,
replacing it with one click.
🙂 Zen
Great! Could you please improve it to be able to load file by drag and drop a file over textarea ?
And one question – do you know why this code is not working on JSFiddle ?
hii,
thanks for the code. Can you please help me to load more than one files, I can select multiple files right now but I couldn’t load. I request you to help over this as soon as possible since i am new to this coding.:-) | https://thiscouldbebetter.wordpress.com/2012/12/18/loading-editing-and-saving-a-text-file-in-html5-using-javascrip/ | CC-MAIN-2017-13 | refinedweb | 6,849 | 73.98 |
Not line program into a GUI with very little effort.
The idea is pretty simple. Nearly all command line Python programs use
argparse to simplify picking options and arguments off the command line as well as providing some help. The Gooey decorator picks up all your options and arguments and creates a GUI for it. You can make it more complicated if you want to change specific things, but if you are happy with the defaults, there’s not much else to it.
At first, this article might seem like a Python Fu and not a Linux Fu, since — at first — we are going to focus on Python. But just stand by and you’ll see how this can do a lot of things on many operating systems, including Linux.
Hands On
We had to try it. Here’s the code from the
argparse manual page, extended to live inside a main function:
import argparse def main(): parser = argparse)) main()
You can run this at the command line (we called it iprocess.py):
python iprocess.py 4 33 2 python iprocess.py --sum 10 20 30
In the first case, the program will select and print the largest number. In the second case, it will add all the arguments together.
Creating a GUI took exactly two steps (apart from installing Gooey): First, you import Gooey at the top of the file:
from gooey import Gooey
Then add the decorator @Gooey on the line before the main definition (and, yes, it really needs to be on the line before, not on the same line):
@Gooey def main():
The result looks like this:
You might want to tweak the results and you can also add validation pretty easily so some fields are required or have to contain particular types of data.
Sure That Works on Linux, But…
Python, of course, runs on many different platforms. So why is this part of Linux Fu? Because you can easily use it to launch any command line program. True, that also should work on other operating systems, but it is especially useful on Linux where there are so many command line programs.
We first saw this done on Chris Kiehl’s blog where he does a GUI — or Gooey, I suppose — for ffmpeg which has a lot of command line options. The idea is to write a simple argparse set up for the program and then tell GUI what executable to actually launch after assembling the command line.
Here’s Chris’ code:
from gooey import Gooey, GooeyParser @Gooey(target="ffmpeg", program_name='Frame Extraction v1.0', suppress_gooey_flag=True) def main(): parser = GooeyParser(description="Extracting frames from a movie using FFMPEG") ffmpeg = parser.add_argument_group('Frame Extraction Util') ffmpeg.add_argument('-i', metavar='Input Movie', help='The movie for which you want to extract frames', widget='FileChooser') ffmpeg.add_argument('output', metavar='Output Image', help='Where to save the extracted frame', widget='FileSaver') ffmpeg.add_argument('-ss', metavar='Timestamp', help='Timestamp of snapshot (in seconds)') ffmpeg.add_argument('-frames:v', metavar='Timestamp', default=1, gooey_options={'visible': False}) parser.parse_args() if __name__ == '__main__': main()
You even have the option of creating a JSON file that Gooey can read if you don’t want to write Python. The utility of this is easy to see, but I’d love to hear some concrete examples of where you think it will come in handy. If you’re already using Gooey, or plan to give it a shot after reading this article, let us know in the comments below.
Of course, not all Python GUIs are created equal. Neither are all Python graphics.
7 thoughts on “Linux Fu: Python GUIs For Command Line Programs (Almost) Instantly”
Sort of a DESQview view of the CL world.
I did something very similar with my box generator Boxes.py (). Instead of a GUI it generates a web front end from the argparse.argparser object. It is limited to a small set of types but the common stuff works. See for a more complex example. Implementation is a bit wacky though.
I’ve written a little python utility to be used in an office environment. Tried Gooey to present it as a simple desktop program. I like the concept, but you’ll end up replacing argparse with gooey in order to get access to the really useful parts of gooey. For example the browse buttons to select files. Not being able to keep using argparse is a big drawback. Also, when using gooey instead of argparse, the gui is started by default and you ‘loose’ the cli (there’s a flag to tell gooey to use the cli, but I can’t and don’t want to remember).
For my use case, it would be very nice if all gooey configuration was done with decorators and the gui optional with a – – gui flag. The utility would be started from a start menu or desktop icon anyway. There must be ways to detect the code is called from the gui or cli.
Conditional import statements based on custom flags?
different-named start/launch files for GUI and CLI versions?
would it run on a framebuffer?
Python always looks like “ALMOST) INSTANTLY” in carefully selected bookworm examples. But when you try to implement something useful in real world it became ugly after first 200 lines of code.
Python funboys don’t get frustrated though, hence you can see “python software” which drops a traceback, when Ctrl+C is pressed.
Sure, let’s blame the language :D
Please be kind and respectful to help make the comments section excellent. (Comment Policy) | https://hackaday.com/2019/10/14/linux-fu-python-guis-for-command-line-programs-almost-instantly/ | CC-MAIN-2021-39 | refinedweb | 928 | 71.95 |
10.12: About the Star Pusher Map File Format
We need the level text file to be in a specific format. Which characters represent walls, or stars, or the player’s starting position? If we have the maps for multiple levels, how can we tell when one level’s map ends and the next one begins?
Fortunately, the map file format we will use is already defined for us. There are many Sokoban games out there, and they all use the same map file format. If you download the levels file and open it in a text editor, you’ll see something like this:
; Star Pusher (Sokoban clone)
;
; By Al Sweigart [email protected]
;
; Everything after the ; is a comment and will be ignored by the game that
; reads in this file.
;
; The format is described at:
;
; @ - The starting position of the player.
; $ - The starting position for a pushable star.
; . - A goal where a star needs to be pushed.
; + - Player & goal
; * - Star & goal
; (space) - an empty open space.
; # - A wall.
;
; Level maps are separated by a blank line (I like to use a ; at the start
; of the line since it is more visible.)
;
; I tried to use the same format as other people use for their Sokoban games,
; so that loading new levels is easy. Just place the levels in a text file
; and name it "starPusherLevels.txt" (after renaming this file, of course).

; Starting demo level:

 ########
##      #
#   .   #
#   $   #
# .$@$. #
####$   #
   #.   #
   #   ##
   #####
;
;
;
; These Sokoban levels come from David W. Skinner, who has many more puzzles at:
;
; Sasquatch Set I
; 1

   ###
  ## # ####
 ##  ###  #
## $      #
#   @$ #  #
### $###  #
  #  #..  #
 ## ##.# ##
  #      ##
  #     ##
  #######
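As a quick illustration of the legend, here is a short sketch (the three-row map is made up for this example, not taken from the levels file) that tallies the format’s characters in a map string. The parser’s sanity checks later require exactly this kind of bookkeeping: a start point, at least one goal, and at least as many stars as goals.

```python
# Hypothetical example map using the legend above (not from the levels file).
rows = [
    "#######",
    "#@$ . #",
    "#######",
]
text = "".join(rows)

starts = text.count('@') + text.count('+')  # '@' or '+' marks the player start
goals = text.count('.') + text.count('+') + text.count('*')   # goal squares
stars = text.count('$') + text.count('*')   # pushable stars

print(starts, goals, stars)  # 1 1 1
```

A level like this one passes the solvability rule, since it has as many stars as goals.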
The comments at the top of the file explain the file’s format. When you load the first level, it looks like this:
def readLevelsFile(filename):
    assert os.path.exists(filename), 'Cannot find the level file: %s' % (filename)
    mapFile = open(filename, 'r')
    # Each level must end with a blank line
    content = mapFile.readlines() + ['\r\n']
    mapFile.close()

    levels = [] # Will contain a list of level objects.
    levelNum = 0
    mapTextLines = [] # contains the lines for a single level's map.
    mapObj = [] # the map object made from the data in mapTextLines
    for lineNum in range(len(content)):
        # Process each line that was in the level file.
        line = content[lineNum].rstrip('\r\n')

        if ';' in line:
            # Ignore the ; lines, they're comments in the level file.
            line = line[:line.find(';')]

        if line != '':
            # This line is part of the map.
            mapTextLines.append(line)
        elif line == '' and len(mapTextLines) > 0:
            # A blank line indicates the end of a level's map in the file.
            # Convert the text in mapTextLines into a level object.

            # Find the longest row in the map.
            maxWidth = -1
            for i in range(len(mapTextLines)):
                if len(mapTextLines[i]) > maxWidth:
                    maxWidth = len(mapTextLines[i])
            # Add spaces to the ends of the shorter rows. This
            # ensures the map will be rectangular.
            for i in range(len(mapTextLines)):
                mapTextLines[i] += ' ' * (maxWidth - len(mapTextLines[i]))

            # Convert mapTextLines to a map object.
            for x in range(len(mapTextLines[0])):
                mapObj.append([])
            for y in range(len(mapTextLines)):
                for x in range(maxWidth):
                    mapObj[x].append(mapTextLines[y][x])

            # Loop through the spaces in the map and find the @, ., and $
            # characters for the starting game state.
            startx = None # The x and y for the player's starting position
            starty = None
            goals = [] # list of (x, y) tuples for each goal.
            stars = [] # list of (x, y) for each star's starting position.
            for x in range(maxWidth):
                for y in range(len(mapObj[x])):
                    if mapObj[x][y] in ('@', '+'):
                        # '@' is player, '+' is player & goal
                        startx = x
                        starty = y
                    if mapObj[x][y] in ('.', '+', '*'):
                        # '.' is goal, '*' is star & goal
                        goals.append((x, y))
                    if mapObj[x][y] in ('$', '*'):
                        # '$' is star
                        stars.append((x, y))

            # Basic level design sanity checks:
            assert startx != None and starty != None, 'Level %s (around line %s) in %s is missing a "@" or "+" to mark the start point.' % (levelNum+1, lineNum, filename)
            assert len(goals) > 0, 'Level %s (around line %s) in %s must have at least one goal.' % (levelNum+1, lineNum, filename)
            assert len(stars) >= len(goals), 'Level %s (around line %s) in %s is impossible to solve. It has %s goals but only %s stars.' % (levelNum+1, lineNum, filename, len(goals), len(stars))

            # Create level object and starting game state object.
            gameStateObj = {'player': (startx, starty),
                            'stepCounter': 0,
                            'stars': stars}
            levelObj = {'width': maxWidth,
                        'height': len(mapObj),
                        'mapObj': mapObj,
                        'goals': goals,
                        'startState': gameStateObj}

            levels.append(levelObj)

            # Reset the variables for reading the next map.
            mapTextLines = []
            mapObj = []
            gameStateObj = {}
            levelNum += 1
    return levels
The os.path.exists() function will return True if the file specified by the string passed to the function exists. If it does not exist, os.path.exists() returns False.
The file object for the level file that is opened for reading is stored in mapFile. All of the text from the level file is stored as a list of strings in the content variable, with a blank line added to the end. (The reason that this is done is explained later.)
After the level objects are created, they will be stored in the levels list. The levelNum variable will keep track of how many levels are found inside the level file. The mapTextLines list will be a list of strings from the content list for a single map (as opposed to how content stores the strings of all maps in the level file). The mapObj variable will be a 2D list.
The for loop on line 12 [437] will go through each line that was read from the level file one line at a time. The line number will be stored in lineNum and the string of text for the line will be stored in line. Any newline characters at the end of the string will be stripped off.
Any text that exists after a semicolon in the map file is treated like a comment and is ignored. This is just like the # sign for Python comments. To make sure that our code does not accidentally think the comment is part of the map, the line variable is modified so that it only consists of the text up to (but not including) the semicolon character. (Remember that this is only changing the string in the content list. It is not changing the level file on the hard drive.)
There can be maps for multiple levels in the map file. The mapTextLines list will contain the lines of text from the map file for the current level being loaded. As long as the current line is not blank, the line will be appended to the end of mapTextLines.
When there is a blank line in the map file, that indicates that the map for the current level has ended, and future lines of text will be for later levels. Note, however, that there must be at least one line in mapTextLines so that multiple blank lines together are not counted as the start and stop of multiple levels.
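The blank-line-delimited parsing loop described above can be sketched standalone. This is a toy model, not the book's code; the two tiny maps below are made up:

```python
# Toy sketch of the blank-line-delimited level parsing described above.
content = ['; a comment\n', '###\n', '#@#\n', '\n', '##\n', '\n']

levels = []        # one entry per finished level
mapTextLines = []  # lines of the level currently being read
for rawline in content:
    line = rawline.rstrip('\r\n')
    if ';' in line:
        line = line[:line.find(';')]   # strip comments
    if line != '':
        mapTextLines.append(line)      # still inside a level's map
    elif mapTextLines:                 # blank line ends the current level
        levels.append(mapTextLines)
        mapTextLines = []

print(len(levels))  # 2
```

Note how the `elif mapTextLines:` guard makes consecutive blank lines harmless, exactly as the paragraph above explains.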
All of the strings in mapTextLines need to be the same length (so that they form a rectangle), so they should be padded with extra blank spaces until they are all as long as the longest string. The for loop goes through each of the strings in mapTextLines and updates maxWidth when it finds a new longest string. After this loop finishes executing, the maxWidth variable will be set to the length of the longest string in mapTextLines.
The for loop on line 34 [459] goes through the strings in mapTextLines again, this time to add enough space characters to pad each to be as long as maxWidth.
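The find-the-longest-row and padding steps can be run on their own with toy data (the rows below are made up, not from an actual level file):

```python
# Standalone sketch of the max-width search and padding steps described above.
mapTextLines = ['##', '#####', '###']

maxWidth = -1
for i in range(len(mapTextLines)):
    if len(mapTextLines[i]) > maxWidth:
        maxWidth = len(mapTextLines[i])   # remember the longest row

for i in range(len(mapTextLines)):
    # pad shorter rows with spaces so the map is rectangular
    mapTextLines[i] += ' ' * (maxWidth - len(mapTextLines[i]))

print(mapTextLines)  # ['##   ', '#####', '###  ']
```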
The mapTextLines variable just stores a list of strings. (Each string in the list represents a row, and each character in the string represents a character at a different column. This is why line 42 [467] has the Y and X indexes reversed, just like the SHAPES data structure in the Tetromino game.) But the map object will have to be a list of lists of single-character strings such that mapObj[x][y] refers to the tile at the XY coordinates. The for loop on line 38 [463] adds an empty list to mapObj for each column in mapTextLines.
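The row-strings-to-mapObj conversion described above can be demonstrated in isolation. This is a minimal sketch with a made-up 3x3 map, not the full readLevelsFile():

```python
# Converting a list of row strings into a column-major mapObj,
# so that mapObj[x][y] is the tile at column x, row y.
mapTextLines = ['##@',
                '#.$',
                '###']
maxWidth = max(len(row) for row in mapTextLines)

mapObj = [[] for _ in range(maxWidth)]   # one inner list per column
for y in range(len(mapTextLines)):
    for x in range(maxWidth):
        mapObj[x].append(mapTextLines[y][x])

print(mapObj[2][0])  # '@' -> the character at x=2, y=0 (note the swapped indexes)
```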
The nested for loops on lines 465 and 466 will fill these lists with single-character strings to represent each tile on the map. This creates the map object that Star Pusher uses.
After creating the map object, the nested for loops on lines 40 [475] and 41 [476] will go through each space to find the XY coordinates of three things:

- The player's starting position. This will be stored in the startx and starty variables, which will then be stored in the game state object later on line 69 [494].
- The starting position of all the stars. These will be stored in the stars list, which is later stored in the game state object on line 71 [496].
- The position of all the goals. These will be stored in the goals list, which is later stored in the level object on line 75 [500].
Remember, the game state object contains all the things that can change. This is why the player’s position is stored in it (because the player can move around) and why the stars are stored in it (because the stars can be pushed around by the player). But the goals are stored in the level object, since they will never move around.
At this point, the level has been read in and processed. To be sure that this level will work properly, a few assertions must pass. If any of the conditions for these assertions are False, then Python will produce an error (using the string from the assert statement) saying what is wrong with the level file.
The first assertion on line 64 [489] checks to make sure that there is a player starting point listed somewhere on the map. The second assertion on line 65 [490] checks to make sure there is at least one goal (or more) somewhere on the map. And the third assertion on line 66 [491] checks to make sure that there is at least one star for each goal (but having more stars than goals is allowed).
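The assert-with-message pattern used by these sanity checks can be illustrated on its own. The level data below is made up just to trigger the third check:

```python
# Tiny illustration of assert-with-message: one goal but zero stars
# makes the "impossible to solve" check fail.
goals = [(1, 1)]
stars = []
try:
    assert len(stars) >= len(goals), \
        'Level 1 is impossible to solve. It has %s goals but only %s stars.' % (len(goals), len(stars))
    message = None
except AssertionError as e:
    message = str(e)   # the string from the assert statement

print(message)  # Level 1 is impossible to solve. It has 1 goals but only 0 stars.
```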
Finally, these objects are stored in the game state object, which itself is stored in the level object. The level object is added to a list of level objects on line 78 [503]. It is this levels list that will be returned by the readLevelsFile() function when all of the maps have been processed.
Now that this level is done processing, the variables for mapTextLines, mapObj, and gameStateObj should be reset to blank values for the next level that will be read in from the level file. The levelNum variable is also incremented by 1 for the next level's level number.
In scala def is used to define a method and val, var are used for defining variables.
Consider the following code:
scala> def i = 3
i: Int
scala> i.getClass()
res0: Class[Int] = int
scala> val v = 2
v: Int = 2
scala> v.getClass()
res1: Class[Int] = int
scala> println(v)
2
scala> println(i)
3
scala> i+v
res4: Int = 5
scala> def o = () => 2+3
o: () => Int
scala> o.getClass()
res5: Class[_ <: () => Int] = class $$Lambda$1139/1753607449
Unlike a val or var declaration, def i = 3 is not a variable declaration. You are defining a method/function which returns a constant 3, and i does not take any parameters.
Declarations using val and var are evaluated immediately, but in the case of lazy val and def, evaluation happens only when they are called explicitly.
i is a no-argument function. In order to get rid of confusion you could declare it using empty parentheses as well:

def i() = 3
The difference between lazy val and def is that a lazy val is lazily evaluated and the result is cached, so further accesses return the cached value. A def declaration is evaluated every time you call the method name.
Example using Scala REPL
scala> lazy val a = { println("a evaluated"); 1}
a: Int = <lazy>

scala> def i = { println("i function evaluated"); 2}
i: Int

scala> a
a evaluated
res0: Int = 1

scala> a
res1: Int = 1

scala> a
res2: Int = 1

scala> i
i function evaluated
res3: Int = 2

scala> i
i function evaluated
res4: Int = 2

scala> i
i function evaluated
res5: Int = 2
Notice that a is evaluated only once and further invocations of a return the cached result, i.e. a lazy val is evaluated once when it is first called and the result is stored forever. So you see the println output once.
Notice the function is evaluated every time it is invoked. In this case you see the println output every time you invoke the function.
General Convention
There's a convention of using an empty parameter list when the method has side effects and leaving them off when it's pure.
scala> def i = 1
i: Int

scala> :type i
Int

scala> :type i _
() => Int
Originally posted by JavaJMan: If this doesn't help please let me know. A volatile variable is supposed to be loaded directly from memory every time it is used. For example, if you are using a value that is wider than 32 bits, such as a double or long, writing it is not an atomic operation, meaning the thread can be swapped off the processor in the middle of altering the data. So, if 2 threads were working on a value such as double x = 33333333.333333, one thread can change part of the number, get swapped out, and the 2nd thread can then alter the data again, so each thread reads totally wrong values and ends up with a corrupted number. Volatile is supposed to prevent this from happening by making the operation atomic. For object references I believe it forces a lookup of the reference each time before that object is accessed. If I am mistaken please correct me. I hope that helps
Originally posted by marilyn murphy: >4. If i am not wrong you have said that volatile make operations atomic "volatile" does not make changes atomic. [...]
>4. If i am not wrong you have said that volatile make operations atomic "volatile" does not make changes atomic. [...]Seconded what CLG states above: this is incorrect. Section 17.4 of the language specification states that load, store, read, and write actions on volatile variables are atomic, even if the type of the variable is double or long.
Originally posted by CL Gilbert: Yes, volatile does make operations on longs and doubles atomic, whereas they are not ordinarily mandated to be. It's true that a load action is indivisible and thus atomic. But variables such as long and double may require 2 load actions since they are 64 bits. In this case, volatile will make sure those 2 actions are not divided. [This message has been edited by CL Gilbert (edited October 22, 2001).]
Originally posted by Peter Haggar: This is how volatile is supposed to work for 64 bit variables (double and long) on a 32 bit machine. However, most JVMs don't implement volatile correctly for 64-bit variables. They wind up being non-atomic even if declared volatile. Therefore, you must use synchronization to ensure correct values in a multithreaded environment. Peter Haggar
Originally posted by CL Gilbert: Where did you get that information? In any event, I disagree with it.
First of all most processors can already handle 64bits of data at once.
Originally posted by CL Gilbert: Where did you get that information?...So even when you dont declare volatile chances are very high that the operations will be atomic anyway.
Originally posted by Brett Delia: Why not just syncronize on the variable in question? Would this not garuntee atomic behavior?
Originally posted by Peter den Haan: You mean they have wide databuses - which is irrelevant. The core x86 execution units (as opposed to the FP, MMX and SSE units) are still 32-bit
Originally posted by CL Gilbert: I still highly doubt most JVMs would be violating the spec. This does not mean I am wrong. It just means I doubt it and would be speechless if it were true.
In any event, the information I gave was correct. It also seems that the information I am receiving is correct.
What I wrote is technically the proper way to write your code, but since there are violations, the things which have been pointed out to me sadly, must be respected...
Originally posted by CL Gilbert: Indeed the databus is irrelevant. So is the execution unit. The only relevant thing is that the software will see. So even if the processor breaks it down into 4 bit instructions, as long asn the end result is a single operation with respect to the software, its completely ok.
Originally posted by Peter Haggar:
volatile boolean stop = false;
volatile int num = 0;
num = 100; //This can happen second
stop = true; //This can happen first
if (stop)
num += num; //num can == 0!
Running code like the above on multiple threads, stop can be set to true BEFORE num is set to 100. Volatile is supposed to ensure these statements don't get reordered. Again, most VMs don't implement this.
Using volatile fields can make sense when it is somehow known that only one thread can change a field, but many others are allowed to read it at any time.
Originally posted by Jose Botella: AtomicLong runs ok under jsdk 1.4.0_01 in WNT
Originally posted by Peter den Haan: But it isn't. That was the entire point of my reply - x86 load/store instructions to/from general-purpose registers are all 32 bit, hence no atomicity, no "single operation with respect to the software". - Peter
Originally posted by Jose Botella: Good news. AtomicLong runs ok under jsdk 1.4.0_01 in WNT. Still, the following code prints something, and I cannot figure out why.
public class LockInvestigation2
{
    public LockInvestigation2()
    {
        AssignmentThread operator1 = new AssignmentThread(5222222222222222225L);
        AssignmentThread operator2 = new AssignmentThread(5111111111111111115L);
        operator1.start();
        operator2.start();
        System.out.println(" LockInvestigation Completed.");
    }

    public static void main(String[] args)
    {
        new LockInvestigation2();
    }
}

class AssignmentThread extends Thread
{
    private long assignmentValue;
    private static volatile long testVariable;

    public AssignmentThread(long value)
    {
        assignmentValue = value;
    }

    public void run()
    {
        for (int count = 0; count < 10000000; count++)
        {
            testVariable = assignmentValue;
            synchronized (AssignmentThread.class)
            {
                long testValue1 = testVariable; // declared locally; 'testValue1' was undeclared in the original post
                if (testVariable == 5222222222222222225L || testVariable == 5111111111111111115L)
                    continue;
                System.out.println(testVariable);
            }
        }
    }
} //example modified from another post.
Maybe the answer was given by Doug Lea on page 97 of Concurrent Programming in Java: _______________________________________ I agree compilers or VMs don't comply with the JLS 100%. You can find some discrepancies in the Bug Database at Sun. [ September 16, 2002: Message edited by: Jose Botella ]
Date: Fri, 3 Nov 2000 16:17:11 -0600
From: "John W. Eaton" <address@hidden>

On 3-Nov-2000, Jim Blandy <address@hidden> wrote:

Things like (/ 1 0) or (* 1e200 1e200) should produce Inf, (/ 0 0) should produce NaNs, these sorts of operations should (optionally) raise exceptions, we should be able to test for Inf (isinf x) and NaN (isnan x), etc. (though maybe the spelling of isinf and isnan should be inf? and nan?).

Control over what exceptions are raised is system dependent, but that's what configure is for. Octave has some tests for simple things like isinf and isnan, and the GSL provides some code for controlling exceptions.

Actually, I have never understood why hardware manufacturers insist on disabling traps by default. MIT Scheme enables traps by default so that you will get an error if something like this happens.

FYI, here is some gcc code that initializes the x86 architecture in what I consider a sane manner:

/* Code to initialize the ix87 FP coprocessor control word.
   This code must be run once before starting a computation.

   Bit(s)  Description
   ------  -----------
   0       invalid operation (FP stack overflow or IEEE invalid arithmetic op)
   1       denormalized operand
   2       zero divide
   3       overflow
   4       underflow
   5       precision (indicates that precision was lost; happens frequently)

   The first 6 bits control how the chip responds to various exceptions.
   If a given mask bit is 0, then that exception will generate a processor
   trap.  If the mask bit is 1, then that exception will not trap but is
   handled by a "default action", usually substituting an infinity or NaN.
   Default is all masks set to 1.

   8/9     precision control
             00  IEEE single precision
             01  (reserved)
             10  IEEE double precision
             11  non-IEEE extended precision
           Default is non-IEEE extended precision.

   10/11   rounding control
             00  round to nearest or even
             01  round toward negative infinity
             10  round toward positive infinity
             11  truncate toward zero
           Default is round to nearest or even.

   This code (0x0220) sets these bits as follows:

   1. Precision mask 1, all others 0.
   2. Precision control: IEEE double precision.
   3. Rounding control: round to nearest or even. */

#ifdef __GNUC__
#if #cpu (i386)
void
initialize_387_to_ieee (void)
{
  unsigned short control_word;
  asm ("fclex" : : );
  asm ("fnstcw %0" : "=m" (control_word) : );
  asm ("andw %2,%0" : "=m" (control_word)
       : "0" (control_word), "n" (0xf0e0));
  asm ("orw %2,%0" : "=m" (control_word)
       : "0" (control_word), "n" (0x0220));
  asm ("fldcw %0" : : "m" (control_word));
}
#endif /* #cpu (i386) */
#endif /* __GNUC__ */
Jasperserver openerp 7, The simplest way How-to
Hello,
I would like to share this way to call a jasper report, from openerp 7 using http:
1- Execute the report that you want to include in openerp, from jasperserver and copy the url show in your browser.
2- Create a button or function in openerp code:
def report1(self, cr, uid, ids, context=None):
    if context is None:
        context = {}
    # here you can change your username and password
    url = "<urlcopiedfromexecute>&j_username=jasperadmin&j_password=jasperadmin"
    return {
        'type': 'ir.actions.act_url',
        'url': url,
        'target': 'new',
    }
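As a variation on the string concatenation above, the credentials can be appended with urllib instead. This is a hedged Python 3 sketch (OpenERP 7 itself ran on Python 2); the base URL below is a made-up placeholder, not a real JasperServer path:

```python
# Building the JasperServer URL with urlencode instead of manual
# string concatenation. The host/report path is a placeholder.
from urllib.parse import urlencode

base = "http://localhost:8080/jasperserver/flow.html"
params = {
    'j_username': 'jasperadmin',   # change to your username
    'j_password': 'jasperadmin',   # change to your password
}
url = base + '?' + urlencode(params)
print(url)
```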
Hope this helps!!
Is there a way to close the window without confirming the save?
When I open many files, I just want to keep one and close the other files without saving.
I think this would be a popular feature.
Thanks for reading. ^-^
- PeterJones last edited by
@piglingcn said:
keep one and close other files without save.
I think this would be a popular feature.
Funny, I would think that most people would be horrified that if they chose “close all but this file”, it would throw away all the changes they made in all the other files. But maybe you have a use-case I cannot imagine.
If you really want to request this feature, this faq explains where to make feature requests so that they can be properly tracked. But be prepared to have to explain exactly what you want, justify why you think it’s a good idea, have people disagree with you, and potentially have the request rejected by the developer or ignored for years to come. (The same potential scenario is true for any feature request, not just this particular request, so I’m not trying to pick on you.)
- Alan Kilborn last edited by Alan Kilborn
I think from observing some of the Notepad++ source code commits on github recently, that a feature that will help out the OP is coming soon to Notepad++.
What the OP would do is right-click the tab of the one file they are interested in keeping, then choose Close All BUT This. A dialog box will come up and will have a No to All option (this is the recently committed code) when N++ asks to save files, which the OP would select.
Indeed, as @Alan-Kilborn already said there is a feature in the pipeline that will help you out. But since we are in holiday times and there are problems with the Notepad++ download server it may take a while until the next release. In the meanwhile you could use the NppExec scripting plugin and the following script as a workaround:
npp_console keep
npp_console disable
npe_console -- m-
set local MainViewId ~ MAIN_VIEW
set local SubViewId ~ SUB_VIEW
set local PrimaryView ~ PRIMARY_VIEW
set local SecondView ~ SECOND_VIEW
npp_sendmsg NPPM_GETCURRENTBUFFERID
set local CurBufId = $(MSG_RESULT)
npp_sendmsg NPPM_GETCURRENTVIEW
set local CurViewId = $(MSG_RESULT)
if $(CurViewId) == $(MainViewId) then
  set local CurView = $(PrimaryView)
  set local OtherView = $(SecondView)
  set local OtherViewId = $(SubViewId)
else if $(CurViewId) == $(SubViewId) then
  set local CurView = $(SecondView)
  set local OtherView = $(PrimaryView)
  set local OtherViewId = $(MainViewId)
else
  exit
endif
set local Cnt = 0
:RepeatLoop
npp_sendmsg NPPM_GETNBOPENFILES 0 $(CurView)
set local Idx ~ $(MSG_RESULT) - 1
:ForHead
if $(Idx) < 0 goto ForEnd
npp_sendmsg NPPM_ACTIVATEDOC $(CurViewId) $(Idx)
npp_sendmsg NPPM_GETBUFFERIDFROMPOS $(Idx) $(CurViewId)
set local BufId = $(MSG_RESULT)
if $(BufId) != $(CurBufId) then
  npp_sendmsg NPPM_RELOADBUFFERID $(BufId) 0
  if $(MSG_RESULT) == 1 then
    npp_sendmsg NPPM_MENUCOMMAND 0 IDM_FILE_CLOSE
  endif
endif
set local Idx ~ $(Idx) - 1
goto ForHead
:ForEnd
set local CurViewId = $(OtherViewId)
set local CurView = $(OtherView)
set local Cnt ~ $(Cnt) + 1
if $(Cnt) <= 1 goto RepeatLoop
The script will close all unsaved files if they are real files on disk. Newly created files (whose names are like new 1, new 2 and so on) will stay open.
- Munsen Tidoco last edited by
Here you have the answer.
It is very useful to close everything from time to time. | https://community.notepad-plus-plus.org/topic/17988/is-there-a-way-to-close-the-window-without-confirming-the-save | CC-MAIN-2021-31 | refinedweb | 539 | 61.29 |
interface to external SRAM (init, check)
#include <stdlib.h>
#include "xpal_board.h"
#include "xpal_power.h"
#include "usb-fifo.h"
Go to the source code of this file.
interface to external SRAM (init, check)
test external memory
Tests the external memory. The level argument can be used to select how good the memory will be tested. Better testing means the test will take longer.
At the moment, level is not used
We use ((address+i) % 0xFF) as the fill byte; this should detect the most common aliasing problems as well as dead cells.
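The fill-and-verify idea can be modeled in a few lines of Python (hedged: this is a toy model of the pattern, not the firmware's C code; a plain list stands in for the SRAM, and on real hardware any mismatches would show up in the bad list):

```python
# Toy model of the RAM test: fill each cell with (address + i) % 0xFF,
# then read every cell back and collect mismatches.
ram = [0] * 16                      # stands in for the external SRAM
i = 7 * 31                          # one of the start values used by the test loop

for addr in range(len(ram)):
    ram[addr] = (addr + i) % 0xFF   # fill phase

bad = [addr for addr in range(len(ram)) if ram[addr] != (addr + i) % 0xFF]
print(bad)  # [] -> every cell held its pattern
```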
{
    uint8_t * p;
    uint8_t * end = xmem.xmem_end;
    uint8_t i; /* start byte */
    uint8_t b; /* fill byte */

    /* testing all possible byte values takes too long,
     * this takes approx. 0.5 seconds */
    for (i=7*31; i; i-=31)
    {
        uint8_t good_found = 0;

        /* fill memory */
        for (p = xmem.xmem_start, b = test_baseval (p, i);
             p != end+1;
             p++, b = fold_ff(b+1))
        {
            *p = b;
        }

        /* check it */
        for (p = xmem.xmem_start, b = test_baseval (p, i);
             p != end+1;
             p++, b = fold_ff(b+1))
        {
            if (*p == b)
            {
                good_found = 1;
            }
            else
            {
                if (good_found)
                {
                    xmem.xmem_end = end = p;
                    break; /* abort test */
                }
                else
                {
                    xmem.xmem_start = p+1;
                }
            }
        }
    }
    return (xmem.xmem_start <= xmem.xmem_end) ? &xmem : NULL;
}
Disable and deactivate external RAM to save power.
When disabled, the external RAM cannot be accessed before enable is called again. This function also switches the RAM chip select to high. It is required to deselect the RAM before power down; otherwise the data retention is not guaranteed.
{
    // disable RAM to enter data retention mode
    XMCRA &= ~(_BV(SRE));
};
Enable the external RAM, e.g. after power save.

The memory controller will get enabled by this function.
{
    // enable RAM to allow access
    XMCRA |= (_BV(SRE));
};
Deselect the external RAM - Deprecated, function has no effect.

Deactivation by a read outside the RAM address range has no effect. The high address will get zero again after access. The bus keeper works for port A only.
{
    // Check if USB is active
    //if( ! (HL_PWR_USB_ACTIVE & (hl_pwr_GetState())) ){
    //    // USB is not active
    //    volatile char x = *((char *)HL_USB_BASE_ADR);
    //    // silence compiler warning "unused variable 'x'"
    //    __asm__ ("" : : "r" (x));
    //}
};
A friend once asked, why would one prefer microk8s over minikube?… We never spoke since. True story!
That was a hard question, especially for an engineer. The answer is not so obvious largely because it has to do with personal preferences. Let me show you why.
Microk8s-wise this is what you have to do to have a local Kubernetes cluster with a registry:
sudo snap install microk8s --edge --classic
microk8s.enable registry
How is this great?
- It is super fast! A couple of hundreds of MB over the internet tubes and you are all set.
- You skip the pain of going through the docs for setting up and configuring Kubernetes with persistent storage and the registry.
So why is this bad?
- As a Kubernetes engineer you may want to know what happens under the hood. What got deployed? What images? Where?
- As a Kubernetes user you may want to configure the registry. Where are the images stored? Can you change any access credentials?
Do you see why this is a matter of preference? Minikube is a mature solution for setting up a Kubernetes in a VM. It runs everywhere (even on windows) and it does only one thing, sets up a Kubernetes cluster.
On the other hand, microk8s offers Kubernetes as an application. It is opinionated and it takes a step towards automating common development workflows. Speaking of development workflows...
The full story with the registry
The registry shipped with microk8s is available on port 32000 of the localhost. It is an insecure registry because, let’s be honest, who cares about security when doing local development :) .
And it’s getting better, check this out! The docker daemon used by microk8s is configured to trust this insecure registry. It is this daemon we talk to when we want to upload images. The easiest way to do so is by using the microk8s.docker command coming with microk8s:
# Let's get a Dockerfile first
wget
# And build it
microk8s.docker build -t localhost:32000/nginx:testlocal .
microk8s.docker push localhost:32000/nginx:testlocal
If you prefer to use an external docker client you should point it to the socket dockerd is listening on:
docker -H unix:///var/snap/microk8s/docker.sock ps
To use an image from the local registry just reference it in your manifests:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: localhost:32000/nginx:testlocal
  restartPolicy: Always
And deploy with:
microk8s.kubectl create -f the-above-awesome-manifest.yaml
What to keep from this post?
You want Kubernetes? We deliver it as a (sn)app!
You want to see your tool-chain in microk8s? Drop us a line. Send us a PR!
We are pleased to see happy Kubernauts!
Those of you who are here for the gossip. He was not that good of a friend (obviously!). We only met in a meetup :) !
References
- Microk8s main site:
- Microk8s repo:
- Microk8s registry:
Microk8s Docker Registry was originally published in ITNEXT on Medium, where people are continuing the conversation by highlighting and responding to this story.Read more | http://voices.canonical.com/tag/kubernetes/ | CC-MAIN-2018-39 | refinedweb | 516 | 68.77 |
Big Idea: Given any n, we make a guess k. Then we break the interval [1,n] into [1,k-1] and [k+1,n]. The min of the worst-case cost can be calculated recursively as

cost[1,n] = min over all k in [1,n] of { k + max( cost[1,k-1], cost[k+1,n] ) }

Also, it took a while for me to wrap my head around "min of max cost". My understanding is that: your strategy is the best, but your luck is the worst. You only guess right when there is no possibility of guessing wrong.
public class Solution {
    public int getMoneyAmount(int n) {
        // all intervals are inclusive
        // uninitialized cells are assured to be zero
        // the zero column and row will be uninitialized
        // the illegal cells will also be uninitialized
        // add 1 to the length just to make the index the same as numbers used
        int[][] dp = new int[n + 1][n + 1];
        // dp[i][j] means the min cost in the worst case for numbers (i...j)
        // iterate the lengths of the intervals since the calculations of longer intervals rely on shorter ones
        for (int l = 2; l <= n; l++) {
            // iterate all the intervals with length l, the start of which is i. Hence the interval will be [i, i + (l - 1)]
            for (int i = 1; i <= n - (l - 1); i++) {
                dp[i][i + (l - 1)] = Integer.MAX_VALUE;
                // iterate all the first guesses g
                for (int g = i; g <= i + (l - 1); g++) {
                    int costForThisGuess;
                    // since if g is the last integer, g + 1 does not exist, we have to separate this case
                    // cost for [i, i + (l - 1)]: g (first guess) + max{the cost of left part [i, g - 1], the cost of right part [g + 1, i + (l - 1)]}
                    if (g == n) {
                        costForThisGuess = dp[i][g - 1] + g;
                    } else {
                        costForThisGuess = g + Math.max(dp[i][g - 1], dp[g + 1][i + (l - 1)]);
                    }
                    // keep track of the min cost among all first guesses
                    dp[i][i + (l - 1)] = Math.min(dp[i][i + (l - 1)], costForThisGuess);
                }
            }
        }
        return dp[1][n];
    }
}
Any questions, suggestions & criticism welcomed!
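For intuition, the same recurrence can be sketched top-down with memoization. This is not from the thread (the thread's solutions are bottom-up Java); it is just a compact Python restatement of the idea:

```python
# Top-down memoized version of the min-of-worst-case recurrence:
# cost(lo, hi) = min over k of ( k + max(cost(lo, k-1), cost(k+1, hi)) )
from functools import lru_cache

def get_money_amount(n):
    @lru_cache(maxsize=None)
    def cost(lo, hi):
        if lo >= hi:           # zero or one number left: nothing to pay
            return 0
        return min(k + max(cost(lo, k - 1), cost(k + 1, hi))
                   for k in range(lo, hi + 1))
    return cost(1, n)

print(get_money_amount(10))  # 16
```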
a little improvement :)
public int getMoneyAmount(int n) {
    int[][] dp = new int[n+1][n+1];
    for(int len=1;len<n;len++){
        for(int i=1;i+len<=n;i++){
            int j=i+len;
            int min = Integer.MAX_VALUE;
            for(int k=i;k<j;k++){
                int tmp = k+Math.max(dp[i][k-1],dp[k+1][j]);
                min = Math.min(min,tmp);
            }
            dp[i][j] = min;
        }
    }
    return dp[1][n];
}
@juanren Awesome! Much Shorter and Neater than mine! You know what, I compared my code and yours and I laughed at myself when I saw I kept using i + (l - 1) instead of assigning it to a variable. How dumb was it! Haha!
said in Java commented DP solution:

your strategy is the best, but your luck is the worst
"you strategy is the best, but your luck is the worst" This is the best explanation :)
Hi, @juanren your code is concise and neat, but I think some of it could be a bit misleading:

1. The minimum valid length should be 2, because we all know we don't have to spend a single penny to guarantee a win when i==j; since that case cannot be expressed by the loop, we'd better leave it as a corner case;

2. In the for loop "for(int i=1;i+len<=n;i++)", we'd better let it be "for(int i=1;i+length-1<=n;i++)" and "int j=i+len-1".

3. In the for loop "for(int k=i;k<j;k++)", it should be "for(int k=i;k<=j;k++)", because it's necessary to be able to pick j between i......j;
so I made some modification:
public int getMoneyAmount(int n) {
    int[][] dp=new int[n+2][n+2];
    for(int length=2;length<=n;length++)
    {
        for(int i=1;i+length-1<=n;i++)
        {
            int j=i+length-1;
            int min=Integer.MAX_VALUE;
            for(int k=i;k<=j;k++)
            {
                int temp=k+Math.max(dp[i][k-1],dp[k+1][j]);
                min=Math.min(min, temp);
            }
            dp[i][j]=min;
        }
    }
    return dp[1][n];
}
Though your version runs well, the code above makes more sense to me;

again, your code is brilliant, thanks for sharing.
@mustangigem Can you explain any more? What's our strategy? I think the strategy is explicit: when we get "higher", go to the higher half.
The comments within your code really help me a lot in understanding this problem! Thank you for such a good post!
Details
- Type:
New Feature
- Status: In Progress
- Priority:
Major
- Resolution: Unresolved
- Affects Version/s: None
- Fix Version/s: None
- Component/s: None
- Labels:None
Description.
Activity
> 2. Override the getSplits() method to read each file's InputStream
I think getSplits() should construct a split for each element of java.util.zip.ZipFile#entries().
> 3. Create FileSplits [ ... ]
We should probably extend FileSplit or InputSplit specifically for zip files. The fields needed per split are the archive file's path and the path of the file within the archive. I don't think there's much point in supporting splits smaller than a file within the zip archive, so start and end offsets are not required here.
> 4. Implement class ZipRecordReader to read each zip entry in its split
Using LineRecordReader.
We should be able to use LineRecordReader directly, passing its constructor the result of ZipFile#getInputStream().
- This patch does not modify any existing source file and adds 3 new files
1. ZipInputFormat.java
2. ZipSplit.java
3. TestZipInputFormat.java
- The ZipInputFormat simply creates one split for each zip entry in an input zip file.
- Each split is of type ZipSplit and is read using a LineRecordReader.
- TestZipInputFormat is the unit test code that tests the ZipInputFormat with different zip files
having different number of entries.
- More information is available in the javadoc
-1 overall. Here are the results of testing the latest attachment
against trunk revision r614192.
@author +1. The patch does not contain any @author tags.
javadoc -1. The javadoc tool appears to have generated messages.
javac +1. The applied patch does not generate any new compiler warnings.
findbugs -1. The patch appears to introduce 2 new Findbugs warnings.
core tests +1. The patch passed core unit tests.
contrib tests -1. The patch failed contrib unit tests.
Test results:
Findbugs warnings:
Checkstyle results:
Console output:
This message is automatically generated.
Following issues reported by QA were fixed
1. Findbugs errors in ZipInputFormat.java were fixed. The streams are now closed properly in isSplitable() and getSplits() methods.
2. Javadoc comments fixed and verified that no new javadoc warnings are generated after applying the patch.
3. Fixed formatting in the code.
4. core-tests and contrib-tests are now passing after the above changes.
Kindly verify.
Some comments:
- isSplittable throws an exception when an empty zip archive is passed. Instead, an empty zip file should just provide no keys and values, but not throw exceptions.
- in getSplits, there's no need to explicitly test that each file exists. Instead, we can rely on open() throwing an exception if a file does not exist.
- getRecordReader should not loop calling getNextEntry(), but instead just call getEntry(String).
Oh, wait. On that last point, it looks like getEntry() is only available on ZipFile, and we cannot create a ZipFile except from a File. Wih an InputStream we must use ZipInputStream, which does not support getEntry(), since InputStream doesn't support random access. Sigh. This considerably reduces the utility of this InputFormat. GNU Classpath's implementation of java.io.zip.ZipFile use a RandomAccessFile, which we could implement, but, alas, we can't use GNU's code at Apache because it is under the GPL.
Zlib includes a zip file parser (minizip) that's under a BSD-like license and that permits random access to zip file entries from a user-supplied input stream. So we could do it in C. Sigh.
So, while implementing a zip InputFormat based on native code would be a lot more work, it would also have some distinct advantages:
- it could handle archives greater than 2GB;
- it would know where within the archive each split resides, so that splits could be properly localized;
- once HDFS implements append, it could provide appendable archives.
None of these are possible with java.io.zip.
Some questions.
1. How is a java.io.InputStream passed and used in native code. The header file represents it as a jobject which I tried casting to FILE * and reading, it did not work as expected.
2. Can a native method call return structures that can be converted to java objects ? If so how ?
Basically I want to be able to return an array of C structure where each element holds the following information
- The path of the entry
- The number of the entry
- Offset of the entry in the zip file
So that this info can be converted to an array of ZipSplit.
I am new to JNI so things are less than obvious for me, a little help will be greatly appreciated on JNI.
How is a java.io.InputStream passed and used in native code. The header file represents it as a jobject which I tried casting to FILE * and reading, it did not work as expected.
I'm not sure what exactly you are trying, but the way I implemented the native codecs was to read data from the InputStream in the Java layer, put the data into a direct-buffer and then pass it to the native zlib library.
The stream you are talking about is the handle to the zlib stream, which is zlib specific. That just represents the state of the zlib stream.
Can a native method call return structures that can be converted to java objects ? If so how ?
I'm sure that can be done via some hoops, but would be quite involved (I think).
Some details here:
JNI Documentation from Sun:
Hope that helps.
> I'm not sure what exactly you are trying [ ... ]
The need here is to read from an FSInputStream, returned from an arbitrary FileSystem implementation, from C. In particular, we need to be able to make callbacks from C to Java for read() and seek(). (I think open() and close() can be handled entirely in Java, and tell() can be implemented entirely in C.)
> The need here is to ...
Callback from C to Java is fine for read(). But seek() might be an issue since for true random access we need to be able to seek forward and backwards from
1. start of the stream
2. current pos of the stream
3. end of the stream
After taking a deep dive into the minizip code and implementing some POC code I am not sure how a seek() callback from C to java might be implemented in way that can be leveraged from existing minizip parser code. Any suggestions ?
Just to give an idea, here is a some sample code for read() that I implemented.
// including zlib & minizip libraries
#include "unzip.h"
// including java library
#include <jni.h>
#include "ZipInputFormat.h"
//defining read() and seek() IO APIs
uLong ZCALLBACK fread_file_func
( voidpf opaque, voidpf stream, void* buf, uLong size)
{
jlong bytesRead;
JNIEnv *env = (JNIEnv *) opaque;
jobject javaStream = (jobject) stream;
jclass dataInputStream = (*env)->GetObjectClass(env, stream);
jmethodID MID_read = (*env)->GetMethodID(env, dataInputStream, "read", "([BII)I");
if(MID_read == NULL)
else{ jbyteArray byteArray = (*env)->NewByteArray(env, size); bytesRead = (*env)->CallIntMethod(env, javaStream, MID_read, byteArray, 0, size); (*env)->GetByteArrayRegion(env, byteArray, 0, bytesRead, buf); printf("\nNumber of bytes read: %u\n", bytesRead ); }
return bytesRead;
}
// the native function exposed to Java, declared as a static method
// dataStream is of type java.io.DataInputStream.
// zipClass is of type ZipInputformat
JNIEXPORT void JNICALL Java_ZipInputFormat_display
(JNIEnv *env, jclass zipClass, jobject dataStream)
Right, you can't seek a DataInputStream. Instead use FSDataInputStream, which is seekable.
Ok, But I should be able to change the offset to the end of the stream since central directory structure of zip file is at the end.
Presently the FSDataInputStream.seek() throws IOExeption and doe not change the stream position if I try to position it past the
end of stream which is unlike fseek() which positions the offset to end of stream.
Is there a workaround to this or is it a functionality that needs to be added ?
Since the file is being accessed read-only, we can call FileSystem#getStatus(Path).getLen() and pass the file length from Java to C with the FSInputStream when we open the archive. Would that work?
Arguably we should add a method to Seekable that returns the length, or perhaps adopt the convention that attempts to seek past EOF leave the pointer at EOF, but I don't think that's required for this issue.
Here is what I did
1. Implemented JNI callbacks in C that callback Java for open, read, close, seek and tell on a FSDataInputStream.
2. Implemented some JAVA test code to verify that callbacks work correctly.
3. Made changes to existing Makefile of minizip to compile and build my C code as shared object.
4. Placed the ".so" file in $LD_LIBRARY_PATH directory.
The integration was successful and worked beautifully. The callbacks worked perfectly to ensure that zip file opened
as FSDataInputStream was opened ad read correclty
However, Sigh. I found that the minizip parsing code did'nt work correctly for Zip file > 2 GB.
The code uses uLong (unsigned long 4 bytes) instead of jlong (signed long long 8 bytes).
Replacing uLong with jlong would'nt work as code performs a lot of bit shifting operations. (I tried this.)
Also the parsing code relies on directory structure entries being in 32 - bit format and will require RE-WORK
based upon knowledge of 64-bit entries keeping in mind backward compatibility with 32 bit entries.
Note:- The IO callback APIs implemented by me make use of jlong in read(), seek() and tell().
QUESTION: Is the RE-WORK really required or is there a workaround that I am missing ???
Small correction:
=============
The parsing code works for zip archives UPTO 4 GB (Not 2 GB)
It fails to process zip files of SIZE > 4 GB correctly.
After a little more research I figured that the Zip64 format support (for files > 4 GB)
is not implemented presently in the minizip code.
So looks like if we need support for files > 4GB, then minizip parsing and reading code
would definitely require re-work. In other words the minizip code would need to be "extended"
to support Zip64 format.
This in turn further increases the scope of work.
Any suggestion or recommendations ?
It looks like minizip is out, then. The unzip code is based on a file descriptor, but there are only 35 lines that touch that file descriptor, so it might not be too hard to modify it to read from something else. But then we have to maintain a branched version of that. Sigh.
> ...so it might not be too hard to modify it to read from something else.
Actually! I already spent sufficient time setting things up, adding new code (I/O apis that can be plugged to the unzip code) to make it work in a manner that a Zip file name is passed to a native C call from Java which then uses unzip APIs to do open/read/seek operations on it.
What's different is that my custom implemented I/O APIs are used to construct the I/O function pointer structure and this structure is passed to unzip APIs. The custom I/O APIs are responsible for making Java callbacks whenever unzip APIs request an I/O operation via them.
My concern is not that part, but the APIs of unzip.c that are ZIP format agnostic and does all the low level bit shifting operations, directory parsing, reading and uncompressing stuff since it is that part which fails for file > 4GB. Now modifying that part would mean 2 things.
1. We would be extending unzip code in minizip to support ZIP64 format for our needs and we would be required to maintain it.
2. Any modification would require decent knowledge of the format and would need to ensure backward compatibility with older ZIP format.
So the question here is, do we go ahead and extend the minizip code for ZIP64 format? (This would be quite involved I think)
Or do we stick with the present limitation of 4GB and schedule it for later ?
Sorry, I wasn't clear. I was thinking we might try using, instead of minizip, the source code for the unzip command line executable,, which uses file-io directly, but not in too many places.
Thanks for clarifying
. But even Unzip in its present release 5.52 does not serve our purpose of supporting large files ( > 4GB) since it does not take care of extra headers in Zip64 format that are used specifically for supporting large archives.
This is well documented and clearly stated in the FAQ,.
Given below is an excerpt from the page :-
"Also note that in August 2001, PKWARE released PKZIP 4.50 with support for large files and archives via a pair of new header types, "PK\x06\x06" and "PK\x06\x07". So far these headers are undocumented, but most of their fields are fairly obvious. We don't yet know when Zip and UnZip will support this extension to the format. In the short term, it is possible to improve Zip and UnZip's capabilities slightly on certain Linux systems (and probably other Unix-like systems) by recompiling with the -DLARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 options. This will allow the utilities to handle uncompressed data files greater than 2 GB in size, as long as the total size of the archive containing them is less than 2 GB."
=======================================================
This leaves us with little options. Either we look for something else that implements zip64 extension and whose license is such that we can include it in our code or we ourselves implement these extensions in Minizip code which we will have to test extensively and maintain. Sigh.
Or as another option we can have our implementation of ZipInputStream purely in Java (no native code) that is based upon Sun's Java.io.zip.ZipInputStream with some additions and modifications to :-
1. Work with a Seekable stream (like FSDataInputStream).
2. Read only central directory structure to obtain file information instead of sequentially
reading the whole archive (Sun's implementation).
3. Make sure Zip64 headers are processed correctly.
This way we will have the following advantages.
1. A pure Java Zip stream parser supporting Zip64 format (No native code).
2. Support for Random as well as Sequential access.
3. No dependency on any external components.
4. Ease of modification for adding append when HDFS provides this facility.
5. Possibility of donating our parser as a Zip64 compliant java zip parser to open source in future.
The above of course require a lot of work but looking at the advantages I feel its worth it.
Your opinion ?
One of the major attractions of the zip format for Hadoop is that it provides interoperability with standard tools. But if we generate >4GB archives that shell tools cannot access, interoperability is broken. Folks might as well then use SequenceFile or some other Hadoop-specific format. So, until standard shell tools support access to >4GB zip archives, I see little motivation for Hadoop to support this.
So do we wait for standard tools to support files > 4 GB before making a Zip InputFormat available in HADOOP ?
Also it would be nice and I shall be thankful if you can recommend other bugs/issues that I can fix to make useful contributions
Cancelling the patch since this work is not complete yet and maybe requires further discussions..
I have use for this canceled patch as of present. Are there bits of the code that need to modified in order for it to run properly on hadoop 0.17, or should I be able to pop them into the mapred directory and go?
There are 2 problems with this patch.
1. It does not split the zip files efficiently. This is because there is no way in Java to construct a zip input stream that permits random seeks given a zip entry name.
2. Java's handling of large zip file is not robust.
The plan was to modify the code to make use of an external zip parsing library that is compatible with Apache license. It was decided to use zip/unzip (standard shell tools) code via JNI but support for large zip files if still missing from unzip (Zip 3.0 is out with large zip file support).
So at the moment, just waiting for Unzip 6.0 to come out and modify the code accrodingly.
The most tested/stable Apache-licensed Java unzip code is in Ant's codebase; you can either take/fork that or try and get the changes back in, which, with suitable tests, I am sure will be happily accepted.
Any updates on this issue? What's the current thinking on shell tools + JNI versus Ant's unzip code? Anything I can do to contribute? Regards...
Proposed Implementation Approach
--------------------------------------------------
1. Implement class ZipInputFormat to extend FileInputFormat.
2. Override the getSplits() method to read each file's
InputStream and construct a ZipInputStream out of it.
3. Create FileSplits in a way that each file split has the following
properties
For e.g. start = 3, length = 6 signifies that zip entries 3 to 6
will be read from the zip file of this split.
4. Implement class ZipRecordReader to read each zip entry in its split
Using LineRecordReader.
5. Each zip entry will be treated as a text file.
6. Implement the necessary unit test case classes.
Questions:
=========
1. Is there a need to implement a ZipCodec (like GzipCodec and DefaultCodec) ?
2. Should the ZipRecordReader be flexible enough to treat the individual zip entries in a
FileSplit as being a text file or a sequence file ?
Please feel free to comment on anything that I missed which might be required.
Also any suggestions/recommendation to make the implementation better will be greatly
appreciated.
-Ankur | https://issues.apache.org/jira/browse/MAPREDUCE-210 | CC-MAIN-2014-15 | refinedweb | 2,942 | 63.9 |
Name | Synopsis | Description | Return Values | Errors | Attributes | See Also | Notes
#include <sys/types.h> #include <unistd.h> pid_t fork(void);
pid_t fork1(void);
pid_t forkall(void);
#include <sys/fork.h> pid_t forkx(int flags);
pid_t forkallx(int flags);
The fork(), fork1(), forkall(), forkx(), and forkallx()C)() or forkallx() replicates in the child process all of the threads (see thr_create(3C) and pthread_create(3C)) in the parent process. A call to fork1() or forkx() replicates only the calling thread in the child process.
A call to fork() is identical to a call to fork1(); only the calling thread is replicated in the child process. This is the POSIX-specified behavior for fork().
In releases of Solaris prior to Solaris 10,().
Prior to Solaris 10, either -lthread or -lpthread was required for multithreaded applications. This is no longer the case. The standard C library provides all threading support for both sets of application programming interfaces. Applications that require replicate-all fork semantics must call forkall() or forkallx()..
Do not allow wait-for-multiple-pids by the parent, as in wait(), waitid(P_ALL), or waitid(P_PGID), to reap the child and do not allow the child to be reaped automatically due the disposition of the SIGCHLD signal being set to be ignored in the parent. Only a specific wait for the child, as in waitid(P_PID, pid), is allowed and it is required, else when the child exits it will remain a zombie until the parent exits.
If the flags argument is 0 forkx() is identical to fork() and forkallx() is identical to forkall().
If a multithreaded application calls fork(), fork1(), or fork(), fork1(), or forkx(). See “MT-Level of Libraries” on the attributes(5) manual page.
The pthread_atfork() mechanism is used to protect the locks that libc(3LIB) uses to implement interfaces such as malloc(3C). All interfaces provided by libc are safe to use in a child process following a fork(), except when fork() is executed within a signal handler.
The POSIX standard (see standards(5)) requires fork to be Async-Signal-Safe (see attributes(5)). This cannot be made to happen with fork handlers in place, because they acquire locks. To be in nominal compliance, no fork handlers are called when fork() is executed within a signal context. This leaves the child process in a questionable state with respect to its locks, but at least the calling thread will not deadlock itself attempting to acquire a lock that it already owns. In this situation, the application should strictly adhere to the advice given in the POSIX specification: “To avoid errors, the child process may only execute Async-Signal-Safe operations until such time as one of the exec(2) functions is called.”
Upon successful completion, fork(), fork1(), forkall(), forkx(), and forkallx() return 0 to the child process and return the process ID of the child process to the parent process. Otherwise, (pid_t)-1 is returned to the parent process, no child process is created, and errno is set to indicate the error.
The fork(), fork1(), forkall(), forkx(), and forkallx() functions.
The forkx() and forkallx() functions will fail if:
The flags argument is invalid.
See attributes(5) for descriptions of the following attributes:
For fork(), see standards(5).
alarm(2), exec(2), exit(2), fcntl(2), getitimer(2), getrlimit(2), memcntl(2), mmap(2), nice(2), priocntl(2), semop(2), shmop(2), times(2), umask(2), waitid(2), door_create(3C), exit(3C), plock(3C), pthread_atfork(3C), pthread_create(3C), signal(3C), system(3C), thr_create(3C) timer_create(3C), wait(3C), contract(4), process(4), attributes(5), privileges(5), standards(5)
An application(), fork1(), or fork1 | http://docs.oracle.com/cd/E19082-01/819-2241/forkall-2/index.html | CC-MAIN-2017-17 | refinedweb | 602 | 61.77 |
August 13, 2010 Leave a comment
This was a small part of a project that was itself about 1/3 of my graduate project. I used it to collect certain information. Here is the excerpt from the paper.
Website Email Spider Program
In order to automatically process publicly available email addresses, a simple tool was developed, with source code available in Appendix A. An automated tool is able to process web pages in a way that is less error prone than manual methods, and it also makes processing the sheer number of websites possible (or at least less tedious).
This tool begins at a few root pages, which can be comma delimited. From these, it searches for all unique links by keeping track of a queue so that pages are not usually revisited (although revisiting a page is still possible in case the server is case insensitive or equivalent pages are dynamically generated with unique URLs). In addition, the base class is passed a website scope so that pages outside of that scope are not spidered. By default, the scope is simply a regular expression including the top domain name of the organization.
Each page requested searches the contents for the following regular expression to identify common email formats:
[w_.-]{3,}@[w_.-]{6,}
The 3 and 6 repeaters were necessary because of false positives otherwise obtained due to various encodings. This regular expression will not obtain all email addresses. However, it will obtain the most common addresses with a minimum of false positives. In addition, the obtained email addresses are run against a blacklist of uninteresting generic form addresses (such as [email protected], [email protected], or [email protected]).
These email addresses are saved in memory and reported when the program completes or is interrupted. Note because of the dynamic nature of some pages, these can potentially spider infinitely and must be interrupted (for example, a calendar application that uses links to go back in time indefinitely). Most emails seemed to be obtained in the first 1,000 pages crawled. A limit of 10,000 pages was chosen as a reasonable scope. Although this limit was reached several times, the spider program uses a breadth search method. It was observed that most unique addresses were obtained early in the spidering process, and extending the number of pages tended to have a diminishing return. Despite this, websites with more pages also tended to correlate with greater email addresses returned (see analysis section).
Much of the logic in the spidering tool is dedicated to correctly parsing html. By their nature, web pages vary widely with links, with many sites using a mix of directory traversal, absolute URLs, and partial URLs. It is no surprise there are so many security vulnerabilities related to browsers parsing this complex data.
There is also an effort made to make the software somewhat more efficient by ignoring superfluous links to objects such as documents, executables, etc. Although if such a file is encountered an exception will catch the processing error, these files consume resources.
Using this tool is straightforward, but a certain familiarity is expected – it was not developed for an end user but for this specific experiment. For example, a URL is best processed in the format since in its current state it would use example.com to verify that spidered addresses are within a reasonable scope. It prints debugging messages constantly because every site seemed to have unique parsing quirks. Although other formats and usages may work, there was little effort to make this software easy to use.
#!/usr/bin/python import HTMLParser import urllib2 import re import sys import signal import socket socket.setdefaulttimeout(20) #spider is meant for a single url #proto can be http, https, or any class PageSpider(HTMLParser.HTMLParser): def __init__(self, url, scope, searchList=[], emailList=[], errorDict={}): HTMLParser.HTMLParser.__init__(self) self.url = url self.scope = scope self.searchList = searchList self.emailList = emailList try: urlre = re.search(r"(w+):[/]+([^/]+).*", self.url) self.baseurl = urlre.group(2) self.proto = urlre.group(1) except AttributeError: raise Exception("URLFormat", "URL passed is invalid") if self.scope == None: self.scope = self.baseurl try: req = urllib2.urlopen(self.url) htmlstuff = req.read() except KeyboardInterrupt: raise except urllib2.HTTPError: #not able to fetch a url eg 404 errorDict["link"] += 1 print "Warning: link error" return except urllib2.URLError: errorDict["link"] += 1 print "Warning: URLError" return except ValueError: errorDict["link"] += 1 print "Warning link error" return except: print "Unknown Error", self.url errorDict["link"] += 1 return emailre = re.compile(r"[w_.-]{3,}@[w_.-]{2,}.[w_.-]{2,}") nemail = re.findall(emailre, htmlstuff) for i in nemail: if i not in self.emailList: self.emailList.append(i) try: self.feed(htmlstuff) except HTMLParser.HTMLParseError: errorDict["parse"] += 1 print "Warning: HTML Parse Error" pass except UnicodeDecodeError: errorDict["decoding"] += 1 print "Warning: Unicode Decode Error" pass def handle_starttag(self, tag, attrs): if (tag == "a" or tag =="link") and attrs: #process the url formats, make sure the base is in scope for k, v in attrs: #check it's an htref and that it's within scope if (k == "href" and ((("http" in v) and (re.search(self.scope, v))) or ("http" not in v)) and (not (v.endswith(".pdf") or v.endswith(".exe") or v.endswith(".doc") or v.endswith(".docx") or v.endswith(".jpg") or v.endswith(".jpeg") or v.endswith(".png") or v.endswith(".css") or 
v.endswith(".gif") or v.endswith(".GIF") or v.endswith(".mp3") or v.endswith(".mp4") or v.endswith(".mov") or v.endswith(".MOV") or v.endswith(".avi") or v.endswith(".flv") or v.endswith(".wmv") or v.endswith(".wav") or v.endswith(".ogg") or v.endswith(".odt") or v.endswith(".zip") or v.endswith(".gz") or v.endswith(".bz") or v.endswith(".tar") or v.endswith(".xls") or v.endswith(".xlsx") or v.endswith(".qt") or v.endswith(".divx") or v.endswith(".JPG") or v.endswith(".JPEG")))): #Also todo - modify regex so that >= 3 chars in front >= 7 chars in back url = self.urlProcess(v) #TODO 10000 is completely arbitrary if (url not in self.searchList) and (url != None) and len(self.searchList) < 10000: self.searchList.append(url) #returns complete url in the form #as input handles (./url,, //stuff/bleh/url) def urlProcess(self, link): link = link.strip() if "http" in link: return (link) elif link.startswith("//"): return self.proto + "://" + link[2:] elif link.startswith("/"): return self.proto + "://" + self.baseurl + link elif link.startswith("#"): return None elif ":" not in link and " " not in link: while link.startswith("../"): link = link[3:] #TODO [8:-1] is just a heuristic, but too many misses shouldn't be bad... maybe? 
if self.url.endswith("/") and ("/" in self.url[8:-1]): self.url = self.url[:self.url.rfind("/", 0, -1)] + "/" dir = self.url[:self.url.rfind("/")] + "/" return dir + link return None class SiteSpider: def __init__(self, searchList, scope=None, verbocity=True, maxDepth=4): #TODO maxDepth logic #necessary to add to this list to avoid infinite loops self.searchList = searchList self.emailList = [] self.errors = {"decoding":0, "link":0, "parse":0, "connection":0, "unknown":0} if scope == None: try: urlre = re.search(r"(w+):[/]+([^/]+).*", self.searchList[0]) self.scope = urlre.group(2) except AttributeError: raise Exception("URLFormat", "URL passed is invalid") else: self.scope = scope index = 0 threshhold = 0 while 1: try: PageSpider(self.searchList[index], self.scope, self.searchList, self.emailList, self.errors) if verbocity: print self.searchList[index] print " Total Emails:", len(self.emailList) print " Pages Processed:", index print " Pages Found:", len(self.searchList) index += 1 except IndexError: break except KeyboardInterrupt: break except: threshhold += 1 print "Warning: unknown error" self.errors["unknown"] += 1 if threshhold >= 40: break pass garbageEmails = [ "help", "webmaster", "contact", "sales" ] print "REPORT" print "----------" for email in self.emailList: if email not in garbageEmails: print email print "nTotal Emails:", len(self.emailList) print "Pages Processed:", index print "Errors:", self.errors if __name__ == "__main__": SiteSpider(sys.argv[1].split(",")) | https://webstersprodigy.net/2010/08/13/email_spider/ | CC-MAIN-2017-43 | refinedweb | 1,314 | 52.97 |
Contributing Code - Submitting Bugfixes and Enhancements
SilverStripe will never be finished, and we need your help to keep making it better. If you're a developer a great way to get involved is to contribute patches to our modules and core codebase, fixing bugs or adding features.
The SilverStripe core modules (`framework` and `cms`), as well as some of the more popular modules, are in git version control. SilverStripe hosts its modules on github.com/silverstripe. After installing git and creating a free github.com account, you can "fork" a module, which creates a copy that you can commit to (see github's guide to "forking").

For other modules, our add-ons site lists the repository locations, typically using a version control system like git.
If you are modifying CSS or JavaScript files in core modules, you'll need to regenerate some files. Please check out our client-side build tooling guide for details.
We ask for this so that the ownership in the license is clear and unambiguous, and so that community involvement doesn't stop us from being able to continue supporting these projects. By releasing this code under a permissive license, this copyright assignment won't prevent you from using the code in any way you see fit.
Step-by-step: From forking to sending the pull request
- Create a fork of the module you want to contribute to (listed on github.com/silverstripe/).
Install the project through composer. The process is described in detail in "Installation through Composer".
composer create-project --keep-vcs silverstripe/installer ./your-website-folder 4.0.x-dev
Add a new "upstream" remote to the module you want to contribute to (e.g. `cms`). This allows you to track the original repository for changes, and rebase/merge your fork as required. Use the GitHub user name under which you created the fork in step 1.
cd framework git remote rename origin upstream git branch --set-upstream-to upstream git remote add -f origin git://github.com/<your-github-user>/silverstripe-framework.git
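If you want to see what remote layout this produces before running it against GitHub, the same rename/add sequence can be replayed against throwaway local repositories (the local bare repos below stand in for the GitHub URLs, so nothing touches the network):

```shell
# Throwaway demo of the remote setup above; local bare repos stand in
# for the github.com URLs.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare upstream.git   # plays the role of silverstripe/silverstripe-framework
git init -q --bare fork.git       # plays the role of <your-github-user>/silverstripe-framework
git clone -q upstream.git framework
cd framework
git remote rename origin upstream       # track the original repository as "upstream"
git remote add origin "$tmp/fork.git"   # your fork becomes "origin"
git remote -v                           # lists both remotes with their fetch/push URLs
```

Afterwards, `upstream` points at the repository you forked from and `origin` at your own fork, which is the layout the following steps assume.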
Branch for new issue and develop on issue branch
# verify current branch 'base' then branch and switch cd framework git status git checkout -b <your-branch-name>
As time passes, the upstream repository accumulates new commits. Keep your working copy's branch and issue branch up to date by periodically running a `composer update`. As a first step, make sure you have committed all your work, then temporarily switch over to the `master` branch while updating. Alternatively, you can use composer "repositories", but we've found that dramatically slows down any updates. You may need to resolve conflicts.
(cd framework && git checkout master) composer update (cd framework && git checkout <your-branch-name>) (cd framework && git rebase upstream/master)
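If the rebase step feels risky the first time, here is a self-contained sketch of what it does, run in a scratch repository (branch names like `my-issue-branch` are illustrative): `git rebase` replays your issue-branch commits on top of newer upstream work.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email "[email protected]" && git config user.name "You"
trunk=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on git version
echo a > upstream.txt && git add . && git commit -qm "upstream work"
git checkout -qb my-issue-branch
echo b > fix.txt && git add . && git commit -qm "my fix"
git checkout -q "$trunk"                 # meanwhile, upstream moved on...
echo c >> upstream.txt && git add . && git commit -qm "more upstream work"
git checkout -q my-issue-branch
git rebase -q "$trunk"                   # replay "my fix" on top of the new trunk commit
git log --oneline                        # the fix now sits above both upstream commits
```

Because the two branches touched different files, the rebase completes cleanly; in real work this is the point where you may need to resolve conflicts.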
- When development is complete, run another update, and consider squashing your commits
Push your branch to your GitHub fork
cd framework git push origin <your-branch-name>
- Issue pull request on GitHub. Visit your forked repository on gitHub.com and click the "Create Pull Request" button next to the new branch.
Please read collaborating with pull requests on github.com for more details.
The core team is then responsible for reviewing patches and deciding if they will make it into core. If there are any problems they will follow up with you, so please ensure they have a way to contact you!
Picking the right version
The SilverStripe project follows the Semantic Versioning (SemVer) specification for releases. It clarifies what to expect from different releases, and also guides you in choosing the right branch to base your pull request on.
If you are unsure what branch your pull request should go to, consider asking in the GitHub issue that you address with your patch, or simply choose the "default branch" of the repository where you want to contribute to. That would usually target the next minor release of the module.
If you are changing existing APIs, introducing new APIs or major new features, please review our guidance on supported versions. You will need to choose the branch for your pull request accordingly.
As we follow SemVer, we name the branches in repositories accordingly (using BNF rules defined by semver.org):
"master"branch contains the next major and yet unreleased version
<positive digit>branches contain released major versions and all changes for yet unreleased minor versions
<positive digit> "." <digits>branches contain released minor versions and all changes for yet to be released patch versions
Silverstripe CMS public APIs explicitly include:
- namespaces, classes, interfaces and traits
- public and protected scope (including methods, properties and constants)
- global functions, variables
- yml configuration file structure and value types
- private static class properties (considered to be configuration variables)
Silverstripe CMS public APIs explicitly exclude:
- private scope (methods and properties with the exception for
private static)
- entities marked as
@internal
- yml configuration file default values
- HTML, CSS, JavaScript, TypeScript, SQL and anything else that is not PHP
Other entities might be considered to be included or excluded from the public APIs on case-by-case basis.
The Pull Request Process
Once your pull request is issued, it's not the end of the road. A core committer will most likely have some questions for you and may ask you to make some changes depending on discussions you have. If you've been naughty and not adhered to the coding conventions, expect a few requests to make changes so your code is in-line.
If your change is particularly significant, it may be referred to the forum for further community discussion.
A core committer will also "label" your PR using the labels defined in GitHub, these are to correctly classify and help find your work at a later date.
GitHub Labels
The current GitHub labels are grouped into five sections:
- Changes - These are designed to signal what kind of change they are and how they fit into the Semantic Versioning schema
- Impact - What impact does this bug/issue/fix have, does it break a feature completely, is it just a side effect or is it trivial and not a bit problem (but a bit annoying)
- Effort - How much effort is required to fix this issue?
- Type - What aspect of the system the PR/issue covers
- Feedback - Are we waiting on feedback, if so who from? Typically used for issues that are likely to take a while to have feedback given
Quickfire Do's and Don't's
If you aren't familiar with git and GitHub, try reading the "GitHub bootcamp documentation". We also found the free online git book and the git crash course useful. If you're familiar with it, here's the short version of what you need to know. Once you fork and download the code:
- Don't develop on the master branch. Always create a development branch specific to "the issue" you're working on (on our GitHub repository's issues). Name it by issue number and description. For example, if you're working on Issue #100, a
DataObject::get_one()bugfix, your development branch should be called 100-dataobject-get-one. If you decide to work on another issue mid-stream, create a new branch for that issue--don't work on both in one branch.
- Do not merge the upstream master with your development branch; rebase your branch on top of the upstream master.
- A single development branch should represent changes related to a single issue. If you decide to work on another issue, create another branch.
- Squash your commits, so that each commit addresses a single issue. After you rebase your work on top of the upstream master, you can squash multiple commits into one. Say, for instance, you've got three commits in related to Issue #100. Squash all three into one with the message "Description of the issue here (fixes #100)" We won't accept pull requests for multiple commits related to a single issue; it's up to you to squash and clean your commit tree. (Remember, if you squash commits you've already pushed to GitHub, you won't be able to push that same branch again. Create a new local branch, squash, and push the new squashed branch.)
- Choose the correct branch: see Picking the right version.
Editing files directly on GitHub.com
If you see a typo or another small fix that needs to be made, and you don't have an installation set up for contributions, you can edit files directly in the github.com web interface. Every file view has an "edit this file" link.
After you have edited the file, GitHub will offer to create a pull request for you. This pull request will be reviewed along with other pull requests.
Check List
- Adhere to our coding conventions
- If your patch is extensive, discuss it first on the SilverStripe Forums (ideally before doing any serious coding)
- When working on existing tickets, provide status updates through ticket comments
- Check your patches against the "master" branch, as well as the latest release branch
- Write unit tests
- Write Behat integration tests for any interface changes
- Describe specifics on how to test the effects of the patch
- It's better to submit multiple patches with separate bits of functionality than a big patch containing lots of changes
- Only submit a pull request for work you expect to be ready to merge. Work in progress is best discussed in an issue, or on your own repository fork.
- Document your code inline through PHPDoc syntax. See our API documentation for good examples.
- Check and update documentation on docs.silverstripe.org. Check for any references to functionality deprecated or extended through your patch. Documentation changes should be included in the patch.
- When introducing something "noteworthy" (new feature, API change), update the release changelog for the next release this commit will be included in.
- If you get stuck, please post to the forum
- When working with the CMS, please read the "CMS Architecture Guide" first
- Try to respond to feedback in a timely manner. PRs that go more than a month without a response from the author are considered stale, and will be politely chased up. If a response still isn't received, the PR will be closed two weeks after that.
Commit Messages
We try to maintain a consistent record of descriptive commit messages. Most importantly: Keep the first line short, and add more detail below. This ensures commits are easy to browse, and look nice on github.com (more info about proper git commit messages).
Our changelog generation tool relies upon commit prefixes (tags) to categorize the patches accordingly and produce more readable output. Prefixes are usually a single case-insensitive word, at the beginning of the commit message. Although prefixing is optional, noteworthy patches should have them to fall into correct categories.
If you can't find the correct prefix for your commit, it is alright to leave it untagged, then it will fall into "Other" category.
Further guidelines:
- Each commit should form a logical unit - if you fix two unrelated bugs, commit each one separately
- If you are fixing a issue from our bugtracker (see Reporting Bugs), please append
(fixes #<ticketnumber>)
- When fixing issues across repos (e.g. a commit to
frameworkfixes an issue raised in the
cmsbugtracker), use
(fixes silverstripe/silverstripe-cms#<issue-number>)(details)
- If your change is related to another commit, reference it with its abbreviated commit hash.
- Mention important changed classes and methods in the commit summary.
Example: Bad commit message
finally fixed this dumb rendering bug that Joe talked about ... LOL also added another form field for password validation
Example: Good commit message
FIX Formatting through prepValueForDB() Added prepValueForDB() which is called on DBField->writeToManipulation() to ensure formatting of value before insertion to DB on a per-DBField type basis (fixes #1234). Added documentation for DBField->writeToManipulation() (related to a4bd42fd). | https://docs.silverstripe.org/en/4/contributing/code/ | CC-MAIN-2021-04 | refinedweb | 1,966 | 51.78 |
Anyone tinkering with Python long enough has been bitten (or torn to pieces) by the following issue:
def foo(a=[]):
a.append(5)
return a
[5]
>>> foo()
[5]
>>> foo()
[5, 5]
>>> foo()
[5, 5, 5]
>>> foo()
[5, 5, 5, 5]
>>> foo()
>>> def a():
... print "a executed"
... return []
...
>>>
>>> def b(x=a()):
... x.append(5)
... print x
...
a executed
>>> b()
[5]
>>> b()
[5, 5]
x
def
Actually, this is not a design flaw, and it is not because of internals, or performance.
It comes simply from the fact that functions in Python are first-class objects, and not only a piece of code.
As soon as you get to think into this way, then it completely makes sense: a function is an object being evaluated on its definition; default parameters are kind of "member data" and therefore their state may change from one call to the other - exactly as in any other object.
In any case, Effbot has a very nice explanation of the reasons for this behavior in Default Parameter Values in Python.
I found it very clear, and I really suggest reading it for a better knowledge of how function objects work. | https://codedump.io/share/bqtNEK65Mudd/1/quotleast-astonishmentquot-and-the-mutable-default-argument | CC-MAIN-2016-50 | refinedweb | 193 | 64.95 |
I have opencv-core installed without the full opencv package. Unfortunately when I include
#include <opencv2/imgproc.hpp>
It fails:
In file included from /usr/local/include/opencv2/imgproc.hpp:46:
In file included from /usr/local/include/opencv2/core.hpp:54:
/usr/local/include/opencv2/core/base.hpp:52:10: fatal error: 'opencv2/opencv_modules.hpp' file not found
#include "opencv2/opencv_modules.hpp"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shouldn't it be possible to use opencv-core without opencv?
This may be my fault from ports r469420. There are updates in progress in bug 234147 and bug 237847 so those may fix it, I'll take a look.
I did not realize this problem when making the patch in in bug 234147 and bug 237847 .
I will rethink which file to leave in opencv-core.
At a glance, opencv2/opencv_modules.hpp retains information of which modules are built. I do not know if it is OK to have HAVE_OPENCV_* line for modules in opencv, if it is moved to opencv-core.
(In reply to Hiroo Ono from comment #2)
bug #237135 and bug #237847 I meant. | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237933 | CC-MAIN-2020-45 | refinedweb | 181 | 69.99 |
Data Structures Using C
M. Campbell
© 1993 Deakin University
Module 815
Data Structures Using C
Aim
After working through this module you should be able to create and use new and complex data types within C programs.
Learning objectives
After working through this module you should be able to: 1. Manipulate character strings in C programs. 2. Declare and manipulate single and multi-dimensional arrays of the C data types. 3. Create, manipulate and manage C pointers to data elements. 4. Create and manage complex data types in C. 5. Use unions to define alternate data sets for use in C programs. 6. Allocate memory to variables dynamically. 7. Manipulate characters and bits.
Content
Strings
Arrays
Pointers
Data definitions – Structures
Data definitions – Unions
Dynamic allocation of data
Character and bit manipulation
Learning Strategy
Read the printed module and the assigned readings and complete the exercises as requested.
Assessment
Completion of exercises and the CML test at the end of the module.
Page 813-1
References & resources
The C Programming Language. 2nd. edition Brian W. Kernighan and Dennis M. Ritchie Prentice-Hall, 1988 Turbo C/C++ Manuals. Turbo C/C++ MS DOS compiler.
Objective 1
After working through this module you should be able to manipulate character strings in C programs.
Strings
A string is a group of characters, usually letters of the alphabet. In order to format your printout so that it looks nice, has meaningful titles and names, and is aesthetically pleasing to you and the people using the output of your program, you need the ability to output text data.

We have used strings extensively already, without actually defining them. A complete definition of a string is ‘a sequence of char type data terminated by a NULL character’. When C is going to use a string of data in some way, whether to compare it with another, output it, copy it to another string, or whatever, the functions are set up to do what they are called to do until a NULL character (which is usually a character with a zero ASCII code number) is detected. You should also recall (from Module 813: Fundamental Programming Structures in C) that the char type is really a special form of integer, one that stores the ASCII code numbers which represent characters and symbols.

An array (as we shall discover shortly) is a series of homogeneous pieces of data that are all identical in type. The data type can be quite complex, as we will see when we get to the section of this module discussing structures. A string is simply a special case of an array: an array of char type data. The best way to see these principles is by use of an example [CHRSTRG.C].
#include "stdio.h"
void main( )
{
char name[5];
name[0] = 'D';
name[1] = 'a';
name[2] = 'v';
name[3] = 'e';
name[4] = 0;                     /* NULL character - end of string */
printf("The name is %s\n", name);
}

The data declaration for the string appears on line 4. We have used this declaration in previous examples but have not indicated its meaning. The data declaration defines a string called name which has, at most, 5 characters in it. Not only does it define the length of the string, but it also states, through implication, that the characters of the string will be numbered from 0 to 4. In the C language, all subscripts start at 0 and increase by 1 each step up to the maximum, which in this case is 4. We have therefore named 5 char type variables: name[0], name[1], name[2], name[3], and name[4]. You must keep in mind that in C the subscripts actually go from 0 to one less than the number defined in the definition statement. This is a property of the original definition of C, and the base limit of the string (or array), i.e. that it always starts at zero, cannot be changed or redefined by the programmer.
Using strings
The variable name is therefore a string which can hold up to 5 characters, but since we need room for the NULL terminating character, there are really only four useful character positions. To load something useful into the string, each of the assignment statements puts one character into one element, and the last place in the string is filled with the numeral 0 as the end indicator, making the string complete. (A #define statement which sets NULL equal to zero would allow us to use NULL instead of an actual zero, and this would add greatly to the clarity of the program. It would be very obvious that this was a NULL and not simply a zero for some other purpose.)

Now that we have the string, we will simply print it out with some other string data in the output statement. You will, by now, be familiar with %s: it is the output specification for a string, and the system will output characters starting with the first one in name until it comes to the NULL character; it will then quit. Notice that in the printf statement only the variable name needs to be given, with no subscript, since we are interested in starting at the beginning. (There is actually another reason that only the variable name is given without brackets. The discussion of that topic will be found in the next section.)

This example may make you feel that strings are rather cumbersome to use since you have to set up each character one at a time. That is an incorrect conclusion, because strings are very easy to use, as we will see in the next example program.
Some string subroutines
The next example [STRINGS.C] illustrates a few of the more common string handling functions. These functions are found in the C standard library and are defined in the header file string.h.
#include "stdio.h"
#include "string.h"
void main( )
{
char name1[12], name2[12], mixed[25];
char title[20];
strcpy(name1, "Rosalinda");
strcpy(name2, "Zeke");
strcpy(title, "This is the title.");
printf("     %s\n\n", title);
printf("Name 1 is %s\n", name1);
printf("Name 2 is %s\n", name2);
if (strcmp(name1, name2) > 0)    /* returns 1 if name1 > name2 */
   strcpy(mixed, name1);
else
   strcpy(mixed, name2);
printf("The biggest name alphabetically is %s\n", mixed);
strcpy(mixed, name1);
strcat(mixed, " ");
strcat(mixed, name2);
printf("Both names are %s\n", mixed);
}
First, four strings are defined. Next, a new function that is commonly found in C programs, the strcpy function, or string copy function, is used. It copies from one string to another until it comes to the NULL character. Remember that the NULL is actually a 0 and is added to the character string by the system. It is easy to remember which one gets copied to which if you think of the function as an assignment statement. Thus if you were to say, for example, ‘x = 23;’, the data is copied from the right entity to the left one. In the strcpy function the data are also copied from the right entity to the left, so that after execution of the first statement, name1 will contain the string Rosalinda, but without the double quotes. The quotes define a literal string and are an indication to the compiler that the programmer is defining a string. Similarly, Zeke is copied into name2 by the second statement, then the title is copied. Finally, the title and both names are printed out. Note that it is not necessary for the defined string to be exactly the same size as the string it will be called upon to store, only that it is at least as long as the string plus one more character for the NULL.
Alphabetical sorting of strings
The next function to be considered is strcmp, the string compare function. It will return 1 if the first string is lexicographically larger than the second, zero if the two strings are identical, and -1 if the first string is lexicographically smaller than the second. (Strictly, the C library only guarantees a positive, zero, or negative result, so portable code should test the sign rather than compare against 1 or -1.) A lexicographical comparison uses the ASCII codes of the characters as the basis for comparison. Therefore, ‘A’ will be “smaller” than ‘Z’ because the ASCII code for ‘A’ is 65 whilst the code for ‘Z’ is 90. Sometimes, however, strange results occur. A string that begins with the letter ‘a’ is larger than a string that begins with the letter ‘Z’, since the ASCII code for ‘a’ is 97 whilst the code for ‘Z’ is 90.
One of the strings, depending on the result of the compare, is copied into the variable mixed, and therefore the largest name alphabetically is printed out. It should come as no surprise to you that Zeke wins because it is alphabetically larger: length doesn’t matter, only the ASCII code values of the characters.
Combining strings
The last four statements in the [STRINGS.C] example have another new feature, the strcat, or string concatenation function. This function simply adds the characters from one string onto the end of another string taking care to adjust the NULL so a single string is produced. In this case, name1 is copied into mixed, then two blanks are concatenated to mixed, and finally name2 is concatenated to the combination. The result is printed out which shows both names stored in the one variable called mixed.
Exercise 1
Write a program with three short strings, about 6 characters each, and use strcpy to copy one, two, and three into them. Concatenate the three strings into one string and print the result out 10 times.
Write a program that will output the characters of a string backwards. For example, given the string “computer”, the program will produce “retupmoc”.
Objective 2
After working through this module you should be able to declare and manipulate single and multi-dimensional arrays of the C data types.
Arrays
The last objective discussed the structure of strings, which are really a special case of an array. Arrays are a data type used to represent a large number of homogeneous values, that is, values that are all of one data type. The data type could be of type char, in which case we have a string. The data type could just as easily be of type int, float or even another array.
An array of integers
The next program [INTARRAY.C] is a short example of using an array of integers.
#include "stdio.h"
void main( )
{
int values[12];
int index;
for (index = 0; index < 12; index++)
   values[index] = 2 * (index + 4);
for (index = 0; index < 12; index++)
   printf("The value at index = %2d is %3d\n", index, values[index]);
}

The data declaration defines an array of 12 integers named values; the first loop fills the array and the second prints it out. Note carefully that each element of the array is simply an int type variable capable of storing an integer. The only difference between the variables index and values[2], for example, is in the way that you address them. You should have no trouble following this program, but be sure you understand it. Compile and run it to see if it does what you expect it to do.
An array of floating point data
Now for an example of a program [BIGARRAY.C] with an array of float type data. This program also has an extra feature to illustrate how strings can be initialised.
#include "stdio.h"
#include "string.h"
char name1[ ] = "First Program Title";
void main( )
{
int index;
int stuff[12];
float weird[12];
static char name2[ ] = "Second Program Title";
for (index = 0; index < 12; index++) {
   stuff[index] = index + 10;
   weird[index] = 12.0 * (index + 7);
}
printf("%s\n", name1);
printf("%s\n\n", name2);
for (index = 0; index < 12; index++)
   printf("%5d %5d %10.3f\n", index, stuff[index], weird[index]);
}
The first line of the program illustrates how to initialise a string of characters. Notice that the square brackets are empty, leaving it up to the compiler to count the characters and allocate enough space for our string, including the terminating NULL. Another string is initialised in the body of the program, but it must be declared static here. This prevents it from being allocated as an automatic variable and allows it to retain the string once the program is started. You can think of a static declaration as a local constant.
Getting data back from a function
The next program [PASSBACK.C] illustrates how a function can manipulate an array as a variable parameter.
#include "stdio.h"
void main( )
{
int index;
int matrix[20];
for (index = 0; index < 20; index++)      /* generate data */
   matrix[index] = index + 1;
for (index = 0; index < 5; index++)       /* print original data */
   printf("Start matrix[%d] = %d\n", index, matrix[index]);
dosome(matrix);                           /* go to a function & modify matrix */
for (index = 0; index < 5; index++)       /* print modified matrix */
   printf("Back matrix[%d] = %d\n", index, matrix[index]);
}

dosome(list)              /* This will illustrate returning data */
int list[ ];
{
int i;
for (i = 0; i < 5; i++)                   /* print original matrix */
   printf("Before matrix[%d] = %d\n", i, list[i]);
for (i = 0; i < 20; i++)                  /* add 10 to all values */
   list[i] += 10;
for (i = 0; i < 5; i++)                   /* print modified matrix */
   printf("After matrix[%d] = %d\n", i, list[i]);
}
An array of 20 variables named matrix is defined, some nonsense data is assigned to the variables, and the first five values are printed out. Then we call the function dosome, taking along the entire array as a parameter. The function dosome has a name in its parentheses also, but it uses the local name ‘list’ for the array. The function needs to be told that it is really getting an array passed to it and that the array is of type int; the declaration ‘int list[ ];’ does exactly that. Notice that the function is not told how many elements the array contains. Many functions simply keep working until some previously defined data marker is found, such as a NULL for a string, or some other previously defined data or pattern. Many times, another piece of data is passed to the function with a count of how many elements to work with.
Arrays pass data both ways
It was stated during our study of functions that when data is passed to a function, the system makes a copy for the function to use, so that the caller's original cannot be changed. That is still true for single point data, but arrays are handled differently. The reason is that when an array is passed to a function, the address of (or a pointer to) the array is the piece of data that is used, so the function operates directly on the caller's array. This is analogous to the Pascal construct of placing a VAR in front of a parameter in a procedure definition.
Multiple-dimensional arrays
Arrays need not be one-dimensional as the last three examples have shown, but can have any number of dimensions. Most arrays usually have either one or two dimensions. Higher dimensions have applications in mathematics but are rarely used in data processing environments. The use of doubly-dimensioned arrays is illustrated in the next program [MULTIARAY.C].
#include "stdio.h"
void main( )
{
int i, j;
int big[8][8], large[25][12];
for (i = 0; i < 8; i++)           /* Create big as a multiplication table */
   for (j = 0; j < 8; j++)
      big[i][j] = i * j;
for (i = 0; i < 25; i++)          /* Create large as an addition table */
   for (j = 0; j < 12; j++)
      large[i][j] = i + j;
big[2][6] = large[24][10];
big[2][2] = big[2][6] + big[2][2];
for (i = 0; i < 8; i++) {         /* print big out in a square form */
   for (j = 0; j < 8; j++)
      printf("%5d ", big[i][j]);
   printf("\n");
}
}

The variable big is an 8 by 8 array, 64 elements in total. The first element is big[0][0], and the last is big[7][7]. Another array named large is 25 by 12, 300 elements in total. The two assignment statements involving big[2][6] and big[2][2] demonstrate that any valid expression can be used for a subscript. It must only meet two conditions: it must be an integer (although a char will work just as well), and it must be within the range of the subscript for which it is being used. The entire matrix variable big is printed out in a square form so you can check the values to see if they were set the way you expected them to be. You will find many opportunities to use arrays, so do not underestimate the importance of the material in this section.
Exercise 2
Define two integer arrays, called array1 and array2, each 10 elements long. Using a loop, put some kind of data in each, then add the two arrays term by term into another 10-element array named arrays. Finally, print all results in a table, for example:
1    2 + 10 = 12
2    4 + 20 = 24
3    6 + 30 = 36
etc.
Hint: the print statement will be similar to this: printf("%4d %4d + %4d = %4d\n", index, array1[index], array2[index], arrays[index]);
Objective 3
After working through this module you should be able to create, manipulate and manage C pointers to data elements.
Pointers
Simply stated, a pointer is an address. Instead of being a variable, it is a pointer to a variable stored somewhere in the address space of the program. It is always best to use an example, so examine the next program [POINTER.C] which has some pointers in it.
#include "stdio.h"
/* illustration of pointer use */
void main( )
{
int index, *pt1, *pt2;
index = 39;                  /* any numerical value */
pt1 = &index;                /* the address of index */
pt2 = pt1;
printf("The value is %d %d %d\n", index, *pt1, *pt2);
*pt1 = 13;                   /* this changes the value of index */
printf("The value is %d %d %d\n", index, *pt1, *pt2);
}
Ignore for the moment the data declaration statement where we define index and two other fields beginning with a star. (It is properly called an asterisk, but for reasons we will see later, let’s agree to call it a star.) If you observe the first statement, it should be clear that we assign the value of 39 to the variable index; this is no surprise. The next statement, however, says to assign to pt1 a strange looking value, namely the variable index with an ampersand (&) in front of it. In this example, pt1 and pt2 are pointers, and the variable index is a simple variable. Now we have a problem. We need to learn how to use pointers in a program, but to do so requires that first we define the means of using the pointers in the program!
Two very important rules
The following two rules are very important when using pointers and must be thoroughly understood. They may be somewhat confusing to you at first but we need to state the definitions before we can use them. Take your time, and the whole thing will clear up very quickly.
1. A variable name with an ampersand (&) in front of it defines the address of the variable and therefore points to the variable. You can therefore read line 7 as ‘pt1 is assigned the value of the address of index’.
2. A pointer with a star in front of it refers to the value of the variable pointed to by the pointer. Line 10 of the program can be read as ‘The stored (starred) value to which the pointer pt1 points is assigned the value 13’. Now you can see why it is perhaps convenient to think of the asterisk as a star: it sort of sounds like the word ‘store’!
Memory aids
1. Think of & as an address.
2. Think of * as a star referring to stored.
Assume for the moment that pt1 and pt2 are pointers (we will see how to define them shortly). As pointers they do not contain a variable value but an address of a variable, and can be used to point to a variable. Line 7 of the program assigns the pointer pt1 to point to the variable we have already defined as index, because we have assigned the address of index to pt1. Since we have a pointer to index, we can manipulate the value of index by using either the variable name itself, or the pointer. Line 10 modifies the value by using the pointer. Since the pointer pt1 points to the variable index, putting a star in front of the pointer name refers to the memory location to which it is pointing. Line 10 therefore assigns to index the value of 13. Anywhere in the program where it is permissible to use the variable name index, it is also permissible to use the name *pt1, since they are identical in meaning until the pointer is reassigned to some other variable.
Just to add a little intrigue to the system, we have another pointer defined in this program, pt2. Since pt2 has not been assigned a value prior to line 8, it doesn’t point to anything; it contains garbage. Of course, that is also true of any variable until a value is assigned to it. Line 8 assigns pt2 the same address as pt1, so that now pt2 also points to the variable index. So to continue the definition from the last paragraph, anywhere in the program where it is permissible to use the variable index, it is also permissible to use the name *pt2, because they are identical in meaning. This fact is illustrated in the first printf statement, since this statement uses the three means of identifying the same variable to print out the same variable three times.
Note carefully that, even though it appears that there are three variables, there is really only one variable. The two pointers point to the single variable. This is illustrated in the next statement which assigns the value of 13 to the variable index, because that is where the pointer pt1 is pointing. The next printf statement causes the new value of 13 to be printed out three times. Keep in mind that there is really only one variable to be changed, not three. This is admittedly a difficult concept, but since it is used extensively in all but the most trivial C programs, it is well worth your time to stay with this material until you understand it thoroughly.
Declaring a pointer
Refer to the fifth line of the program and you will see the familiar way of defining the variable index, followed by two more definitions. The second definition can be read as ‘the storage location to which pt1 points will be an int type variable’. Therefore, pt1 is a pointer to an int type variable. Similarly, pt2 is another pointer to an int type variable. A pointer must be defined to point to some type of variable. Following a proper definition, it cannot be used to point to any other type of variable or it will result in a type incompatibility error. Just as a float type variable is a different, incompatible thing from an int type variable, a pointer to a float variable cannot be used to point to an integer variable. Compile and run this program and observe that there is only one variable and the single statement in line 10 changes the one variable, which is displayed three times.
A second pointer program
Consider the next program [POINTER2.C], which illustrates some of the more complex operations that are used in C programs:
#include "stdio.h"
#include "string.h"                  /* for strcpy */
void main( )
{
   char strg[40], *there, one, two;
   int *pt, list[100], index;

   strcpy(strg, "This is a character string.");

   one = strg[0];                    /* one and two are identical */
   two = *strg;
   printf("The first output is %c %c\n", one, two);

   one = strg[8];                    /* one and two are identical */
   two = *(strg + 8);
   printf("The second output is %c %c\n", one, two);

   there = strg + 10;                /* strg+10 is identical to strg[10] */
   printf("The third output is %c\n", strg[10]);
   printf("The fourth output is %c\n", *there);

   for (index = 0; index < 100; index++)
      list[index] = index + 100;

   pt = list + 27;
   printf("The fifth output is %d\n", list[27]);
   printf("The sixth output is %d\n", *pt);
}
In this program we have defined several variables and two pointers. The first pointer named there is a pointer to a char type variable and the second named pt points to an int type variable. Notice also that we have defined two array variables named strg and list. We will use them to show the correspondence between pointers and array names.
String variables as pointers
In C a string variable is defined to be simply a pointer to the beginning of a string – this will take some explaining. You will notice that first we assign a string constant to the string variable named strg, so we will have some data to work with. Next, we assign the value of the first element to the variable one, a simple char variable. Then, since the string name is a pointer by definition, we can assign the same value to two by using the star and the string name. The result of the two assignments is that one now has the same value as two, and both contain the character 'T', the first character in the string. Note that it would be incorrect to write the ninth line as two = *strg[0]; because the star takes the place of the square brackets.

For all practical purposes, strg is a pointer. It does, however, have one restriction that a true pointer does not have: it cannot be changed like a variable, but must always contain its initial value and therefore always points to its string. It could be thought of as a pointer constant, and in some applications you may desire a pointer that cannot be corrupted in any way. Even though it cannot be changed, it can be used to refer to values other than the one it is defined to point to, as we will see in the next section of the program.

Moving ahead to line 9, the variable one is assigned the value of the ninth element (since the indexing starts at zero) and two is assigned the same value, because we are allowed to index a pointer to get to values farther ahead in the string. Both variables now contain the character 'a'. The C compiler takes care of indexing for us by automatically adjusting the indexing for the type of variable the pointer is pointing to. (This is why the data type of a variable must be declared before the variable is used.) In this case, the index of 8 is simply added to the pointer value before looking up the desired result, because a char type variable is one byte long.

If we were using a pointer to an int type variable, the index would be doubled and added to the pointer before looking up the value, because an int type variable uses two bytes per value stored. When we get to the section on structures, we will see that a variable can have many, even into the hundreds or thousands, of bytes per variable, but the indexing will be handled automatically for us by the system. The variable there is a true pointer, so it can be assigned the address of the eleventh element of strg by the statement in line 12 of the program. Remember that since there is a true pointer, it can be assigned any value as long as that value represents a char type of address. It should be clear that the pointers must be typed in order to allow the pointer arithmetic described in the last paragraph to be done properly. The third and fourth outputs will be the same, namely the letter 'c'.
Pointer arithmetic
Pointer arithmetic
Not all forms of arithmetic are permissible on a pointer: only those things that make sense. Considering that a pointer is an address somewhere in the computer, it would make sense to add a constant to an address, thereby moving it ahead in memory that number of places. Similarly, subtraction is permissible, moving it back some number of locations. Adding two pointers together would not make sense because absolute memory addresses are not additive. Pointer multiplication is also not allowed, as that would be a ‘funny’ number. If you think about what you are actually doing, it will make sense to you what is allowed, and what is not.
Integer pointers
The array named list is assigned a series of values from 100 to 199 in order to have some data to work with. Next we assign the pointer pt the address of the 28th element of the list and print out the same value both ways, to illustrate that the system truly will adjust the index for the int type variable. You should spend some time with this program until you feel you understand these lessons on pointers fairly well.
Function data return with a pointer
You may recall that back in the objective dealing with functions it was mentioned that a function could use variable data if the parameter was declared as an array. This works because an array name is really a pointer to the array elements. Functions can manipulate variable data if that data is passed to the function as a pointer. The following program [TWOWAY.C] illustrates the general approach used to manipulate variable data in a function.
#include "stdio.h"

void fixup(int nuts, int *fruit);     /* prototype the function */

void main( )
{
   int pecans, apples;

   pecans = 100;
   apples = 101;
   printf("The starting values are %d %d\n", pecans, apples);
   fixup(pecans, &apples);     /* when we call "fixup" we send the */
                               /* value of pecans and the address  */
                               /* of apples                        */
   printf("The ending values are %d %d\n", pecans, apples);
}

void fixup(int nuts, int *fruit)   /* nuts is an integer value   */
{                                  /* fruit points to an integer */
   printf("The values are %d %d\n", nuts, *fruit);
   nuts = 135;
   *fruit = 172;
   printf("The values are %d %d\n", nuts, *fruit);
}
There are two variables defined in the main program: pecans and apples; notice that neither of these is defined as a pointer. We assign values to both of these and print them out, then call the function fixup, taking with us both of these values. The variable pecans is simply sent to the function, but the address of the variable apples is sent to the function. Now we have a problem. The two arguments are not the same; the second is a pointer to a variable. We must somehow alert the function to the fact that it is supposed to receive an integer variable and a pointer to an integer variable. This turns out to be very simple. Notice that the parameter definitions in the function define nuts as an integer, and fruit as a pointer to an integer. The call in the main program is therefore in agreement with the function heading and the program interface will work just fine.

In the body of the function, we print the two values sent to the function, then modify them and print the new values out. The surprise occurs when we return to the main program and print out the two values again. We will find that the value of pecans is restored to its value before the function call, because the C language makes a copy of the variable pecans and takes the copy to the called function, leaving the original intact. In the case of the variable apples, we made a copy of a pointer to the variable and took the copy of the pointer to the function. Since we had a pointer to the original variable, even though the pointer was a copy, we had access to the original variable and could change it in the function. When we returned to the main program, we found a changed value in apples when we printed it out. By using a pointer in a function call, we can have access to the data in the function and change it in such a way that when we return to the calling program, we have a changed value for the data.
It must be pointed out however, that if you modify the value of the pointer itself in the function, you will have a restored pointer when you return because the pointer you use in the function is a copy of the original. In this example, there was no pointer in the main program because we simply sent the address to the function, but in many programs you will use pointers in function calls. Compile and run the program and observe the output.
Programming Exercises
Define a character array and use strcpy to copy a string into it. Print the string out by using a loop with a pointer to print out one character at a time. Initialise the pointer to the first element and use the double plus sign to increment the pointer. Use a separate integer variable to count the characters to print.
Modify the program to print out the string backwards by pointing to the end and using a decrementing pointer.
Objective 4

After working through this module you should be able to create and manage complex data types in C.
STRUCTURES
A structure is a user-defined data type. You have the ability to define a new type of data considerably more complex than the types we have been using. A structure is a collection of one or more variables, possibly of different types, grouped together under a single name for convenient handling. Structures are called “records” in some languages, notably Pascal. Structures help organise complicated data, particularly in large programs, because they permit a group of related variables to be treated as a unit instead of as separate entities. The best way to understand a structure is to look at an example [STRUCT1.C].
#include "stdio.h"
void main( )
{
   struct {
      char initial;            /* last name initial       */
      int  age;                /* child's age             */
      int  grade;              /* child's grade in school */
   } boy, girl;

   boy.initial = 'R';
   boy.age = 15;
   boy.grade = 75;
   girl.age = boy.age - 1;     /* she is one year younger */
   girl.grade = 82;
   girl.initial = 'H';
   printf("%c is %d years old and got a grade of %d\n",
          girl.initial, girl.age, girl.grade);
   printf("%c is %d years old and got a grade of %d\n",
          boy.initial, boy.age, boy.grade);
}
The program begins with a structure definition. The key word struct is followed by some simple variables between the braces, which are the components of the structure. After the closing brace, you will find two variables listed, namely boy and girl. According to the definition of a structure, boy is now a variable composed of three elements: initial, age, and grade. Each of the three fields is associated with boy, and each can store a variable of its respective type. The variable girl is also a variable containing three fields with the same names as those of boy, but they are actually different variables. We have therefore defined 6 simple variables.
A single compound variable
Let’s examine the variable boy more closely. As stated above, each of the three elements of boy is a simple variable and can be used anywhere in a C program where a variable of its type can be used. For example, the age element is an integer variable and can therefore be used anywhere in a C program where it is legal to use an integer variable: in calculations, as a counter, in I/O operations, etc. The only problem we have is defining how to use the simple variable age, which is a part of the compound variable boy. We do this by giving the name of the compound variable, a period, and then the field name: boy.age refers to the age field of boy.
Assigning values to the variables
Using the above definition, we can assign a value to each of the three fields of boy and each of the three fields of girl. Note carefully that boy.initial is actually a char type variable, because it was assigned that type in the structure, so it must be assigned a character of data, whereas the age and grade fields are assigned integer values.
Using structure data
An array of structures
The next program [STRUCT2.C] contains the same structure definition as before but this time we define an array of 12 variables named kids. This program therefore contains 12 times 3 = 36 simple variables, each of which can store one item of data provided that it is of the correct type. We also define a simple variable named index for use in the for loops.
#include "stdio.h"
void main( )
{
   struct {
      char initial;
      int  age;
      int  grade;
   } kids[12];
   int index;

   for (index = 0; index < 12; index++) {
      kids[index].initial = 'A' + index;
      kids[index].age = 16;
      kids[index].grade = 84;
   }

   kids[3].age = kids[5].age = 17;
   kids[2].grade = kids[6].grade = 92;
   kids[4].grade = 57;

   kids[10] = kids[4];            /* structure assignment */

   for (index = 0; index < 12; index++)
      printf("%c is %d years old and got a grade of %d\n",
             kids[index].initial, kids[index].age, kids[index].grade);
}
To assign each of the fields a value we use a for loop and each pass through the loop results in assigning a value to three of the fields. One pass through the loop assigns all of the values for one of the kids. This would not be a very useful way to assign data in a real situation, but a loop could read the data in from a file and store it in the correct fields. You might consider this the crude beginning of a data base – which, of course, it is. In the next few instructions of the program we assign new values to some of the fields to illustrate the method used to accomplish this. It should be self explanatory, so no additional comments will be given.
Copying structures
C allows you to copy an entire structure with one statement. Line 17 is an example of using a structure assignment. In this statement, all 3 fields of kids[4] are copied into their respective fields of kids[10]. The last few statements contain a for loop in which all of the generated values are displayed in a formatted list. Compile and run the program to see if it does what you expect it to do.
Using pointers and structures together
The next program [STRUCT3.C] is an example of using pointers with structures; it is identical to the last program except that it uses pointers for some of the operations.
#include "stdio.h"
void main( )
{
   struct {
      char initial;
      int  age;
      int  grade;
   } kids[12], *point, extra;
   int index;

   for (index = 0; index < 12; index++) {
      point = kids + index;
      point->initial = 'A' + index;
      point->age = 16;
      point->grade = 84;
   }

   kids[3].age = kids[5].age = 17;
   kids[2].grade = kids[6].grade = 92;
   kids[4].grade = 57;

   for (index = 0; index < 12; index++) {
      point = kids + index;
      printf("%c is %d years old and got a grade of %d\n",
             (*point).initial, kids[index].age, point->grade);
   }

   extra = kids[2];   /* structure assignment */
   extra = *point;    /* structure assignment */
}

Since the array name kids acts as a pointer to the structure array, we can define point in terms of kids. The name kids is a constant, so it cannot be changed in value, but point is a pointer variable and can be assigned any value consistent with its being required to point to the structure. If we assign the value of kids to point, it should be clear that point will point to the first element of the array, a structure containing three fields. Returning to the program, it should be clear from the previous discussion that as we go through the loop, the pointer will point to the beginning of one of the array elements each time. We can therefore use
the pointer to reference the various elements of the structure. Referring to the elements of a structure with a pointer occurs so often in C that a special notation was devised for doing it: point->initial means the same thing as (*point).initial. Since the pointer points to the structure, we must once again define which of the elements we wish to refer to each time we use one of the elements of the structure. There are, as we have seen, several different methods of referring to the members of the structure, and in the for loop used for output at the end of the program, we use three different methods: (*point).initial, kids[index].age, and point->grade.
Nested and named structures
The structures we have seen so far have been useful, but very simple. It is possible to define structures containing dozens and even hundreds or thousands of elements but it would be to the programmer’s advantage not to define all of the elements at one pass but rather to use a hierarchical structure of definition. This will be illustrated with the next program [NESTED.C], which shows a nested structure.
#include "stdio.h"
#include "string.h"                /* for strcpy */
void main( )
{
   struct person {
      char name[25];
      int  age;
      char status;                 /* M = married, S = single */
   };

   struct alldat {
      int grade;
      struct person descrip;
      char lunch[25];
   } student[53];

   struct alldat teacher, sub;

   teacher.grade = 94;
   teacher.descrip.age = 34;
   teacher.descrip.status = 'M';
   strcpy(teacher.descrip.name, "Mary Smith");
   strcpy(teacher.lunch, "Baloney sandwich");

   sub.descrip.age = 87;
   sub.descrip.status = 'M';
   strcpy(sub.descrip.name, "Old Lady Brown");
   sub.grade = 73;
   strcpy(sub.lunch, "Yogurt and toast");

   student[1].descrip.age = 15;
   student[1].descrip.status = 'S';
   strcpy(student[1].descrip.name, "Billy Boston");
   strcpy(student[1].lunch, "Peanut Butter");
   student[1].grade = 77;

   student[7].descrip.age = 14;
   student[12].grade = 87;
}

The first structure is given the name person, and this name can now be used in nearly the same way we use int, char, or any other types that exist in C. The only restriction is that this new name must always be associated with the reserved word struct. The next structure definition contains three fields, with the middle field being the previously defined structure which we named person. The variable which has the type of person is named descrip. So the new structure contains two simple variables, grade and a string named lunch[25], and the structure named descrip. Since descrip contains three variables, the new structure actually contains 5 variables. This structure is also given a name, alldat, which is another type definition. At the same time we define an array of 53 variables, named student, each with the structure defined by alldat. We then define teacher and sub to be variables of the type alldat, so that each of these two variables contains 5 fields which can store data.
Using fields
In the next five lines of the program we assign values to the fields of teacher: the grade, the age, the marital status, the name, and the lunch preference. Notice that each nested field is reached by chaining field names with the dot operator, so teacher.descrip.age refers to the age field of the descrip structure within teacher, and each reference is made up of several parts. The variable sub is assigned nonsense values in much the same way, but in a different order, since the assignments do not have to occur in any required order. Finally, a few of the student variables are assigned values for illustrative purposes and the program ends. None of the values are printed for illustration, since several were printed in the last examples. It is possible to continue nesting structures until you get totally confused. If you define them properly, the computer will not get confused, because there is no stated limit as to how many levels of nesting are allowed. There is probably a practical limit of three levels, beyond which you will find the definitions hard to follow. Structures can contain arrays of other structures which in turn can contain arrays of simple types or other structures. It can go on and on until you lose all reason to continue. Be conservative at first, and get bolder as you gain experience.
Exercise 4
1. Define a named structure containing a string field for a name, an integer for legs, and another for arms. Use the new type to define an array of about 6 items. Fill the fields with data and print them out as follows:
   A human being has 2 legs and 2 arms.
   A dog has 4 legs and 0 arms.
   A television set has 4 legs and 0 arms.
   A chair has 4 legs and 2 arms.
   etc.
2. Rewrite the previous exercise using a pointer to print the data out.
Objective 5

After working through this module you should be able to use unions to define alternate data sets for use in C programs.
UNIONS
A union is a variable that may hold (at different times) data of different types and sizes. For example, a programmer may wish to define a structure that will record the citation details about a book, and another that records details about a journal. Since a library item cannot be both a book and a journal simultaneously, the programmer would declare a library item to be a union of book and journal structures. Thus, on one occasion item might be used to manipulate book details, and on another occasion, item might be used to manipulate journal details. It is up to the programmer, of course, to remember the type of item with which they are dealing. Examine the next program [UNION1.C] for examples.
#include "stdio.h"
void main( )
{
   union {
      int value;               /* this is the first part of the union  */
      struct {
         char first;           /* these two values are the second part */
         char second;
      } half;
   } number;
   long index;

   for (index = 12; index < 300000; index += 35231) {
      number.value = index;
      printf("%8x %6x %6x\n", number.value,
             number.half.first, number.half.second);
   }
}
In this example we have two elements to the union, the first part being the integer named value, which is stored as a two byte variable somewhere in the computer's memory. The second element is the structure named half, made up of two char variables, first and second. Because they are members of a union, the structure half occupies the same storage locations as value. Accessing the fields of the union is very similar to accessing the fields of a structure and will be left to you to determine by studying the example. One additional note must be given here about the program. When it is run using the C compiler, some of the data may be displayed with leading f's; this is the result of sign extension when the char values are promoted for the %x display, and it will also come up in a few of the later examples. Compile and run this program and observe that the data is read out as an int and as two char variables. The char variables are reversed in order because of the way an int variable is stored internally in your computer. Don't worry about this. It is not a problem, but it can be a very interesting area of study if you are so inclined.

The next program [UNION2.C] contains a more elaborate example, in which we define a complete structure, then decide which of the various types can go into it.
#include "stdio.h"
#define AUTO  1
#define BOAT  2
#define PLANE 3
#define SHIP  4

void main( )
{
   struct automobile {           /* structure for an automobile  */
      int tires;
      int fenders;
      int doors;
   };

   typedef struct {              /* structure for a boat or ship */
      int displacement;
      char length;
   } BOATDEF;

   struct {
      char vehicle;              /* what type of vehicle?        */
      int  weight;               /* gross weight of vehicle      */
      union {                    /* type-dependent data          */
         struct automobile car;        /* part 1 of the union    */
         BOATDEF boat;                 /* part 2 of the union    */
         struct {
            char engines;
            int  wingspan;
         } airplane;                   /* part 3 of the union    */
         BOATDEF ship;                 /* part 4 of the union    */
      } vehicle_type;
      int  value;                /* value of vehicle in dollars  */
      char owner[32];            /* owner's name                 */
   } ford, sun_fish, piper_cub;  /* three variable structures    */

   /* define a few of the fields as an illustration */
   ford.vehicle = AUTO;
   ford.weight = 2742;                 /* with a full petrol tank */
   ford.vehicle_type.car.tires = 5;    /* including the spare     */
   ford.vehicle_type.car.doors = 2;
   sun_fish.value = 3742;              /* trailer not included    */
   sun_fish.vehicle_type.boat.length = 20;
   piper_cub.vehicle = PLANE;
   piper_cub.vehicle_type.airplane.wingspan = 27;

   if (ford.vehicle == AUTO)           /* which it is in this case */
      printf("The ford has %d tires.\n", ford.vehicle_type.car.tires);
   if (piper_cub.vehicle == AUTO)      /* which it is not */
      printf("The plane has %d tires.\n",
             piper_cub.vehicle_type.car.tires);
}
First, we define a few constants with the #defines, and begin the program itself. We define a structure named automobile containing several fields which you should have no trouble recognising, but we define no variables at this time. Next we use a typedef to define BOATDEF as a new type; this does not define any variables, only a new type definition, which can then be used to declare variables anywhere we would like to. Capitalising the name is a common preference used by programmers and is not a C standard. It should be stated that all four parts of the union could have been defined in any of the three ways shown, but the three different methods were used to show you that any could be used. In practice, the clearest definition would probably have occurred by using the typedef for each of the parts.
The variable vehicle was designed into this structure to keep track of the type of vehicle stored here. The four defines at the top of the program were designed to be used as indicators to be stored in the variable vehicle. A few examples of how to use the resulting structure are given in the next few lines of the program. Some of the variables are defined and a few of them are printed out for illustrative purposes. The union is not used frequently, and almost never by novice programmers. You will encounter it occasionally, so it is worth your effort to at least know what it is.
Bitfields
The bitfield is a relatively new addition to the C programming language. In the next program [BITFIELD.C] we have a union made up of a single int type variable in line 5 and the structure defined in lines 6 to 10. The single-bit field x occupies the lowest-order bit, y forms the next two bits, and z forms the two high-order bits of the group.
#include "stdio.h"
void main( )
{
   union {
      int index;
      struct {
         unsigned int x : 1;
         unsigned int y : 2;
         unsigned int z : 2;
      } bits;
   } number;

   for (number.index = 0; number.index < 20; number.index++) {
      printf("index = %3d, bits = %3d%3d%3d\n", number.index,
             number.bits.z, number.bits.y, number.bits.x);
   }
}
Compile and run the program and you will see that, as the variable index is incremented by 1 each time through the loop, the bitfields of the union count in step, each according to its location within the int. One thing must be pointed out: the bitfields must be defined as parts of an unsigned int or your compiler will issue an error message. The bitfield is very useful if you have a lot of data to separate into individual bits or groups of bits. Many systems use some sort of a packed format to get lots of data stored in a few bytes. Your imagination is your only limitation to effective use of this feature of C.
Exercise 5
Objective 6

After working through this module you should be able to allocate memory to variables dynamically.
DYNAMIC ALLOCATION
Dynamic allocation is very intimidating the first time you come across it, but it need not be. All of the programs up to this point have used static variables as far as we are concerned. (Actually, some of them have been automatic and were dynamically allocated for you by the system, but it was transparent to you.) This section discusses how C uses dynamically allocated variables. They are variables that do not exist when the program is loaded, but are created dynamically as they are needed. It is possible, using these techniques, to create as many variables as needed, use them, and deallocate their space so that it can be used by other dynamic variables. As usual, the concept is best presented by an example [DYNLIST.C].
#include "stdio.h"
#include "stdlib.h"
#include "string.h"                /* for strcpy */
void main( )
{
   struct animal {
      char name[25];
      char breed[25];
      int  age;
   } *pet1, *pet2, *pet3;

   pet1 = (struct animal *) malloc (sizeof(struct animal));
   strcpy(pet1->name, "General");
   strcpy(pet1->breed, "Mixed Breed");
   pet1->age = 1;
   pet2 = pet1;     /* pet2 now points to the above data structure */

   pet1 = (struct animal *) malloc (sizeof(struct animal));
   strcpy(pet1->name, "Frank");
   strcpy(pet1->breed, "Labrador Retriever");
   pet1->age = 3;

   pet3 = (struct animal *) malloc (sizeof(struct animal));
   strcpy(pet3->name, "Krystal");
   strcpy(pet3->breed, "German Shepherd");
   pet3->age = 4;

   /* now print out the data described above */
   printf("%s is a %s, and is %d years old.\n",
          pet1->name, pet1->breed, pet1->age);
   printf("%s is a %s, and is %d years old.\n",
          pet2->name, pet2->breed, pet2->age);
   printf("%s is a %s, and is %d years old.\n",
          pet3->name, pet3->breed, pet3->age);

   pet1 = pet3;     /* pet1 now points to the same structure that */
                    /* pet3 points to                             */
   free(pet3);      /* this frees up one structure                */
   free(pet2);      /* this frees up one more structure           */
/* free(pet1);         this cannot be done, see explanation in text */
}
We begin by defining a named structure – animal – with a few fields pertaining to dogs. We do not define any variables of this type, only three pointers. If you search through the remainder of the program, you will find no variables defined, so we have nothing to store data in. All we have to work with are three pointers, each of which points to the defined structure. In order to do anything we need some variables, so we will create some dynamically.
Dynamic variable creation
The first program statement is line 11, which assigns something to the pointer pet1; it will create a dynamic structure containing three variables. The heart of the statement is the malloc function buried in the middle. This is a memory allocation function that needs the other things to completely define it. The malloc function, by default, will allocate a piece of memory on a heap that is n characters in length and will be of type character. The n must be specified as the only argument to the function. We will discuss n shortly, but first we need to define a heap.

The heap

Every compiler has a set of limitations on it that define how big the executable file can be, how many variables can be used, how long the source file can be, etc. One limitation placed on users by the C compiler is a limit of 64K for the executable code if you happen to be in the small memory model. This is because the IBM-PC uses a microprocessor with a 64K segment size, and it requires special calls to use data outside of a single segment. In order to keep the program small and efficient, these calls are not used, and the memory space is limited but still adequate for most programs.

In this model C defines two heaps, the first being called the heap, and the second being called the far heap. The heap is an area within the 64K boundary that can store dynamically allocated data, and the far heap is an area outside of this 64K boundary which can be accessed by the program to store data and variables. The data and variables are put on the heap by the system as calls to malloc are made. The system keeps track of where the data is stored. Data and variables can be deallocated as desired, leading to holes in the heap. The system knows where the holes are and will use them for additional data storage as more malloc calls are made. The structure of the heap is therefore a very dynamic entity, changing constantly. The data and variables are put on the far heap by utilising calls to farmalloc, farcalloc, etc., and removed through use of the function farfree. Study your C compiler's Reference Guide for details of how to use these features.
Segment

C compilers give the user a choice of memory models to use. The user has a choice of using a model with a 64K limitation for either program or data, leading to a small fast program, or selecting a 640K limitation and requiring longer address calls, leading to less efficient addressing. Using the larger address space requires inter-segment addressing, resulting in slightly slower running time. The time is probably insignificant in most programs, but there are other considerations. If a program uses no more than 64K bytes for the total of its code and memory, and if it doesn't use a stack, it can be made into a .COM file. With C this is only possible by using the tiny memory model. Since a .COM file is already in a memory image format, it can be loaded very quickly, whereas a file in an .EXE format must have its addresses relocated as it is loaded. Therefore a tiny memory model can generate a program that loads faster than one generated with a larger memory model. Don't let this worry you; it is a fine point that few programmers worry about.

Even more important than the need to stay within the small memory model is the need to stay within the memory available in the computer. If you had a program that used several large data storage areas, but not at the same time, you could load one block, storing it dynamically, then get rid of it and reuse the space for the next large block of data. Dynamically storing each block of data in succession, and using the same storage for each block, may allow you to run your entire program in the computer without breaking it up into smaller programs.

The malloc function

Hopefully the above description of the heap and the overall plan for dynamic allocation helped you to understand what we are doing with the malloc function. The malloc function forms part of the standard library. Its prototype is defined in the stdlib.h file, hence this file is included using the statement on line 2.
malloc simply asks the system for a block of memory of the size specified, and gets the block with the pointer pointing to the first element of the block. The only argument in the parentheses is the size of the block desired and in our present case, we desire a block that will hold one of the structures we defined at the beginning of the program. The sizeof is a new function that returns the size in bytes of the argument within its parentheses. It therefore returns the size of the structure named animal, in bytes, and that number is sent to the system with the malloc call. At the completion of that call we have a block on the heap allocated to us, with pet1 pointing to the block of data.
Page 813-33
Module 815
Data Structures Using C Casting We still have a funny looking construct at the beginning of the malloc function call that is called a cast. The malloc function returns a block with the pointer pointing to it being a pointer of type char by default. Many (perhaps most) times you do not want a pointer to a char type variable, but to some other type. You can define the pointer type with the construct given on the example line. In this case we want the pointer to point to a structure of type animal, so we tell the compiler with this strange looking construct. Even if you omit the cast, most compilers will return a pointer correctly, give you a warning, and go on to produce a working program. It is better programming practice to provide the compiler with the cast to prevent getting the warning message.
Using the dynamically allocated memory block
If you remember our studies of structures and pointers, you will recall that if we have a structure with a pointer pointing to it, we can access any of the variables within the structure. In the next three lines of the program we assign some silly data to the structure for illustration. It should come as no surprise to you that these assignment statements look just like assignments to statically defined variables. In the next statement, we assign the value of pet1 to pet2 also. This creates no new data; we simply have two pointers to the same object. Since pet2 is pointing to the structure we created above, pet1 can be reused to get another dynamically allocated structure which is just what we do next. Keep in mind that pet2 could have just as easily been used for the new allocation. The new structure is filled with silly data for illustration. Finally, we allocate another block on the heap using the pointer pet3, and fill its block with illustrative data. Printing the data out should pose no problem to you since there is nothing new in the three print statements. It is left for you to study. Even though it is not illustrated in this example, you can dynamically allocate and use simple variables such as a single char type variable. This should be used wisely however, since it sometimes is very inefficient. It is only mentioned to point out that there is nothing magic about a data structure that would allow it to be dynamically allocated while simple types could not.
Getting rid of dynamically allocated data
Another new function is used to get rid of the data and free up the space on the heap for reuse. This function is called free. To use it, you Page 813-34
Module 815
Data Structures Using C simply call it with the pointer to the block as the only argument, and the block is deallocated. In order to illustrate another aspect of the dynamic allocation and deallocation of data, an additional step is included in the program on your monitor. The pointer pet1 is assigned the value of pet3 in line 31. In doing this, the block that pet1 was pointing to is effectively lost since there is no pointer that is now pointing to that block. It can therefore never again be referred to, changed, or disposed of. That memory, which is a block on the heap, is wasted from this point on. This is not something that you would ever purposely do in a program. It is only done here for illustration. The first free function call removes the block of data that pet1 and pet3 were pointing to, and the second free call removes the block of data that pet2 was pointing to. We therefore have lost access to all of our data generated earlier. There is still one block of data that is on the heap but there is no pointer to it since we lost the address to it. Trying to free the data pointed to by pet1 would result in an error because it has already been freed by the use of pet3. There is no need to worry, when we return to DOS, the entire heap will be disposed of with no regard to what we have put on it. The point does need to made that, if you lose a pointer to a block on the heap, it forever removes that block of data storage from our use and we may need that storage later. Compile and run the program to see if it does what you think it should do based on this discussion. Our discussion of the last program has taken a lot of time – but it was time well spent. It should be somewhat exciting to you to know that there is nothing else to learn about dynamic allocation, the last few pages covered it all. Of course, there is a lot to learn about the technique of using dynamic allocation, and for that reason, there are two more files to study. 
But the fact remains, there is nothing more to learn about dynamic allocation than what was given so far in this section.
An array of pointers
Our next example [BIGDYNL.C] is very similar to the last, since we use the same structure, but this time we define an array of pointers to illustrate the means by which you could build a large database using an array of pointers rather than a single pointer to each element. To keep it simple we define 12 elements in the array and another working pointer named point.
#include "stdio.h" #include "stdlib.h" void main( ) {
Page 813-35
Module 815
Data Structures Using C
struct animal { char name[25]; char breed[25]; int age; } *pet[12], *point; /* this defines 13 pointers, no variables */ int index; /* first, fill the dynamic structures with nonsense */ for (index = 0; index < 12; index++) { pet[index] = (struct animal *) malloc (sizeof(struct animal)); strcpy(pet[index]->name, "General"); strcpy(pet[index]->breed, "Mixed Breed"); pet[index]->age = 4; } pet[4]->age = 12; /* these lines are simply to */ pet[5]->age = 15; /* put some nonsense data into */ pet[6]->age = 10; /* a few of the fields. */ /* now print out the data described above */ for (index = 0; index <12 index++) { point = pet[index]; printf("%s is a %s, and is %d years old.\n", point->name, point->breed, point->age); } /* good programming practice dictates that we free up the */ /* dynamically allocated space before we quit. */ for (index = 0; index < 12; index++) free(pet[index]); }
The *pet[12] might seem strange to you so a few words of explanation are in order. What we have defined is an array of 12 pointers, the first being pet[0], and the last pet[11]. Actually, since an array is itself a pointer, the name pet by itself is a pointer to a pointer. This is valid in C, and in fact you can go farther if needed but you will get quickly confused. A definition such as int ****pt is legal as a pointer to a pointer to a pointer to a pointer to an integer type variable. Twelve pointers have now been defined which can be used like any other pointer, it is a simple matter to write a loop to allocate a data block dynamically for each and to fill the respective fields with any data desirable. In this case, the fields are filled with simple data for illustrative purposes, but we could be reading in a database, reading from some test equipment, or any other source of data. A few fields are randomly picked to receive other data to illustrate that simple assignments can be used, and the data is printed out to the monitor. The pointer point is used in the printout loop only to serve as an illustration, the data could have been easily printed using the pet[n] notation. Finally, all 12 blocks of data are release with the free function before terminating the program. Compile and run this program to aid in understanding this technique. As stated earlier, there was nothing new here about dynamic allocation, only about an array of pointers. Page 813-36
Module 815
Data Structures Using C
A linked list
The final example is what some programmers find the most intimidating of all techniques: the dynamically allocated linked list. Examine the next program [DYNLINK.C] to start our examination of lists.
#include "stdio.h" #include "stdlib.h" #define RECORDS 6 void main( ) { struct animal { char name[25]; /* The animal's name */ char breed[25]; /* The type of animal */ int age; /* The animal's age */ struct animal *next; /* a pointer to another record of this type */ } *point, *start, *prior; /* this defines 3 pointers, no variables */ int index; /* the first record is always a special case */ start = (struct animal *) malloc (sizeof(struct animal)); strcpy(start->name, "General"); strcpy(start->breed, "Mixed Breed"); start->age = 4; start->next = NULL; prior = start; /* a loop can be used to fill in the rest once it is started */ for (index = 0; index < RECORDS; index++) { point = (struct animal *) malloc (sizeof(struct animal)); strcpy(point->name, "Frank"); strcpy(point->breed, "Labrador Retriever"); point->age = 3; prior->next = point; /* point last "next" to this record */ point->next = NULL; /* point this "next" to NULL */ prior = point; /* this is now the prior record */ } /* now print out the data described above */ point = start; do { prior = point->next; printf("%s is a %s, and is %d years old.\n", point->name, point->breed, point->age); point = point->next; } while (prior != NULL); /* good programming practice dictates that we free up the */ /* dynamically allocated space before we quit. */ point = start; /* first block of group */ do { prior = point->next; /* next block of data */ free(point); /* free present block */ point = prior; /* point to next */ } while (prior != NULL); /* quit when next is NULL */ }
The program starts in a similar manner to the previous two, with the addition of the definition of a constant to be used later. The structure is nearly the same as that used in the last two programs except for the addition of another field within the structure in line 10, the pointer. This pointer is a pointer to another structure of this same type and will be Page 813-37
Module 815
Data Structures Using C used to point to the next structure in order. To continue the above analogy, this pointer will point to the next note, which in turn will contain a pointer to the next note after that. We define three pointers to this structure for use in the program, and one integer to be used as a counter, and we are ready to begin using the defined structure for whatever purpose we desire. In this case, we will once again generate nonsense data for illustrative purposes. Using the malloc function, we request a block of storage on the heap and fill it with data. The additional field in this example, the pointer, is assigned the value of NULL, which is only used to indicate that this is the end of the list. We will leave the pointer start at this structure, so that it will always point to the first structure of the list. We also assign prior the value of start for reasons we will see soon. Keep in mind that the end points of a linked list will always have to be handled differently than those in the middle of a list. We have a single element of our list now and it is filled with representative data. Filling additional structures The next group of assignments and control statements are included within a for loop so we can build our list fast once it is defined. We will go through the loop a number of times equal to the constant RECORDS defined at the beginning of our program. Each time through, we allocate memory, fill the first three fields with nonsense, and fill the pointers. The pointer in the last record is given the address of this new record because the prior pointer is pointing to the prior record. Thus prior->next is given the address of the new record we have just filled. The pointer in the new record is assigned the value NULL, and the pointer prior is given the address of this new record because the next time we create a record, this one will be the prior one at that time. 
That may sound confusing but it really does make sense if you spend some time studying it. When we have gone through the for loop 6 times, we will have a list of 7 structures including the one we generated prior to the loop. The list will have the following characteristics: 1. start points to the first structure in the list. 2. Each structure contains a pointer to the next structure. 3. The last structure has a pointer that points to NULL and can be used to detect the end of the list. The following diagram may help you to understand the structure of the data at this point.
Page 813-38
Module 815
Data Structures Using C start→ struct1 namef breed age point→ struct2 name breed age point→ struct3 name breed age point→ … … struct7 name breed age point→ NULL It should be clear (if you understand the above structure) that it is not possible to simply jump into the middle of the structure and change a few values. The only way to get to the third structure is by starting at the beginning and working your way down through the structure one record at a time. Although this may seem like a large price to pay for the convenience of putting so much data outside of the program area, it is actually a very good way to store some kinds of data. A word processor would be a good application for this type of data structure because you would never need to have random access to the data. In actual practice, this is the basic type of storage used for the text in a word processor with one line of text per record. Actually, a program with any degree of sophistication would use a doubly linked list. This would be a list with two pointers per record, one pointing down to the next record, and the other pointing up to the record just prior to the one in question. Using this kind of a record structure would allow traversing the data in either direction. Printing the data out A method similar to the data generation process is used to print the data out. The pointers are initialised and are then used to go from record to record reading and displaying each record one at a time. Printing is terminated when the NULL on the last record is found, so the program doesn’t even need to know how many records are in the list. Finally, the entire list is deleted to make room in memory for any additional data that may be needed, in this case, none. Care must be taken to ensure that the last record is not deleted before the NULL is checked; once the data are gone, it is impossible to know if you are finished yet. Page 813-39
Module 815
Data Structures Using C It is not difficult, but it is not trivial, to add elements into the middle of a linked lists. It is necessary to create the new record, fill it with data, and point its pointer to the record it is desired to precede. If the new record is to be installed between the 3rd and 4th, for example, it is necessary for the new record to point to the 4th record, and the pointer in the 3rd record must point to the new one. Adding a new record to the beginning or end of a list are each special cases. Consider what must be done to add a new record in a doubly linked list. Entire books are written describing different types of linked lists and how to use them, so no further detail will be given. The amount of detail given should be sufficient for a beginning understanding of C and its capabilities.
Calloc
One more function, the calloc function, must be mentioned before closing. This function allocates a block of memory and clears it to all zeros – which may be useful in some circumstances. It is similar to malloc in many ways. We will leave you to read about calloc as an exercise, and use it if you desire.
Exercise 6
Rewrite STRUCT1.C to dynamically allocate the two structures. Rewrite STRUCT2.C to dynamically allocate the 12 structures.
Page 813-40
Module 815 Objective 7
Data Structures Using C After working through this module you should be able to manipulate characters and bits.
Upper and lower case
The first example program [UPLOW.C] in this section does character manipulation. More specifically, it changes the case of alphabetic characters. It illustrates the use of four functions that have to do with case. Each of these functions is part of the standard library. The functions are prototyped in the ctype.h file, hence its inclusion in line 2 of the program.
#include "stdio.h" #include "ctype.h" void mix_up_the_chars(line); void main( ) { char line[80]; char *c; do { /* keep getting lines of text until an empty line is found c = gets(line); /* get a line of text if (c != NULL) { mix_up_the_chars(line); } } while (c != NULL); } void mix_up_the_chars(line) /* this function turns all upper case characters into lower /* case, and all lower case to upper case. It ignores all /* other characters. char line[ ]; { int index; for (index = 0;line[index] != 0;index++) { if (isupper(line[index])) /* 1 if upper case line[index] = tolower(line[index]); else { if (islower(line[index])) /* 1 if lower case line[index] = toupper(line[index]); } } printf("%s",line); }
*/ */
*/ */ */
*/ */
It should be no problem for you to study this program on your own and understand how it works. The four functions on display in this program are all within the user written function, mix_up_the_chars. Compile and run the program with data of your choice. The four functions are: isupper( ); islower( ); toupper( ); tolower( ); Is the character upper case? Is the character lower case? Make the character upper case. Make the character lower case.
Many more classification and conversion routines are listed in your C compiler’s Reference Guide. Page 813-41
Module 815
Data Structures Using C
Classification of characters
We have repeatedly used the backslash n (\n) character for representing a new line. Such indicators are called escape sequences, and some of the more commonly used are defined in the following table: \n \t \b \ \\ \0 Newline Tab Backspace Double quote Backslash NULL (zero)
A complete list of escape sequences available with your C compiler are listed in your C Reference Guide. By preceding each of the above characters with the backslash character, the character can be included in a line of text for display, or printing. In the same way that it is perfectly all right to use the letter n in a line of text as a part of someone's name, and as an end-of-line, the other characters can be used as parts of text or for their particular functions. The next program [CHARCLAS.C] uses the functions that can determine the class of a character, and counts the characters in each class.
#include "stdio.h" #include "ctype.h" void count_the_data(line) void main( ) { char line[80] char *c; do { c = gets(line); if (c != NULL) { count_the_data(line); } } while (c != NULL); }
/* get a line of text
*/
void count_the_data(line) char line[]; { int whites, chars, digits; int index; whites = chars = digits = 0; for (index = 0; line[index] != 0; index++) { if (isalpha(line[index])) /* 1 if line[ is alphabetic */ chars++; if (isdigit(line[index])) /* 1 if line[ ] is a digit */ digits++; if (isspace(line[index]))
Page 813-42
Module 815
Data Structures Using C
/* 1 if line[ ] is blank, tab, or newline */ whites++;
} /* end of counting loop printf("%3d%3d%3d %s", whites, chars, digits, line); }
*/
The number of each class is displayed along with the line itself. The three functions are as follows: isalpha( ); isdigit( ); isspace( ); Is the character alphabetic? Is the character a numeral? Is the character any of \n, \t, or blank?
This program should be simple for you to find your way through so no explanation will be given. It was necessary to give an example with these functions used. Compile and run this program with suitable input data.
Logical functions
The functions in this group are used to do bitwise operations, meaning that the operations are performed on the bits as though they were individual bits. No carry from bit to bit is performed as would be done with a binary addition. Even though the operations are performed on a single bit basis, an entire byte or integer variable can be operated on in one instruction. The operators and the operations they perform are given in the following table: & | ^ ~ Logical AND, if both bits are 1, the result is 1. Logical OR, if either bit is one, the result is 1. Logical XOR, (exclusive OR), if one and only one bit is 1, the result is 1. Logical invert, if the bit is 1, the result is 0, and if the bit is 0, the result is 1.
The following example program [BITOPS.C] uses several fields that are combined in each of the ways given above.
#include "stdio.h" void main( ) { char mask; char number[6]; char and,or,xor,inv,index; number[0] = 0X00; number[1] = 0X11; number[2] = 0X22; number[3] = 0X44; number[4] = 0X88; number[5] = 0XFF; printf(" nmbr mask and or xor mask = 0X0F; for (index = 0; index <= 5; index++) {
inv\n");
Page 813-43
Module 815
Data Structures Using C
and = mask & number[index]; or = mask | number[index]; xor = mask ^ number[index]; inv = ~number[index]; printf("%5x %5x %5x %5x %5x %5x\n", number[index], mask, and, or, xor, inv); } printf("\n"); mask = 0X22; for (index = 0; index <= 5; index++) { and = mask & number[index]; or = mask | number[index]; xor = mask ^ number[index]; inv = ~number[index]; printf("%5x %5x %5x %5x %5x %5x\n", number[index], mask, and, or, xor inv); } }
The data are in hexadecimal format (it is assumed that you already know hexadecimal format if you need to use these operations). Run the program and observe the output.
Shift instructions
The last two operations to be covered in this section are the left shift and the right shift instructions; their use is illustrated in the next example program [SHIFTER.C].
#include "stdio.h" void main( ) { int small, big, index, count; printf(" shift left shift right\n\n"); small = 1; big = 0x4000; for(index = 0; index < 17; index++) { printf("%8d %8x %8d %8x\n", small, small, big, big); small = small << 1; big = big >> 1; } printf("\n"); count = 2; small = 1; big = 0x4000; for(index = 0; index < 9; index++) { printf("%8d %8x %8d %8x\n", small, small, big, big); small = small << count; big = big >> count; } }
The two operations use the following operators: << n >> n Left shift n places. Right shift n places.
Once again the operations are carried out and displayed using the hexadecimal format. The program should be simple for you to understand on your own as there is no tricky or novel code.
Page 813-44
Module 815
Data Structures Using C
Exercise 7
Using the reference manual of your C compiler, describe any operations on characters and bits that it provides that are not described in this section, particularly those that are described in the ctype.h header file.
Page 813-45 | https://www.scribd.com/document/3904392/Data-Structures-in-c | CC-MAIN-2018-22 | refinedweb | 13,641 | 68.2 |
explains things to look out for, which are the types of things you are concerned about. This is from the lit, so it's not "just another" link.
So this class as written today:
Open in new window
is actually immutable - simply because there is no way to modify the state (myTestProperty) after construction.
(I added a constructor and the keyword 'private' to the member - so nothing else in the package can sneak in and access it).
However, while it's immutable, it's usually considered poor practice to make a class immutable without making the members final, because I could easily come along tomorrow and add a method:
setProperty(Integer newValue) { myTestProperty = newValue ; }
and now the class is no longer immutable.
If the member variables are marked as 'final', that would produce a compile time error and it'll be clearer that my original intention was to make this class immutable.
So this is the real rule:
"1. An immutable class can not be modified after construction, any modification would result in
a new immutable class. created."
and these are some helpful tips you can use to create an immutable class, but it's not the only valid way:
"2. All fields of Immutable class must be final.
3. class must be properly constructed i.e. class reference must not leak during construction process.
4. class must be final in order to restrict sub-class for altering immutability of parent class."
To answer your question about #3. This is code that leaks a reference during construction:
Open in new window
This is considered bad because the Java compiler is allowed to produce code that is incorrect in this situation, especially in multi-threaded environments. You're not meant to access an object until its fully constructed and this code violates that rule.
Hope that helps explain things,
Doug
Experts Exchange Solution brought to you by
Facing a tech roadblock? Get the help and guidance you need from experienced professionals who care. Ask your question anytime, anywhere, with no hassle.Start your 7-day free trial
public static void updateLeaker(Leaker leaker) {
to
public static synchronized void updateLeaker(Leaker leaker) {
// The leaker object passed here is accessible before the constructor has completed execution
// That's a no-no ???
}
Still thank you for that explanation. It was very clear and helpful.
The issue is that this thing about "not leaking access to an object before the constructor completes" is part of the rules written right into the Java memory model - which defines what the guarantees are about when objects are visible to other threads (on other processors). The issues arise because the compiler is allowed to make certain optimizations around the behavior of the processor cache (which if you think about isn't shared memory - 2 CPUs each have their own unique caches, unlike main memory which is shared). I'm not someone who caries around a copy of the Java language spec in my back pocket - but the people who do could no doubt point you to the right section :)
Now in practice if you add a "synchronized" statement it's hard to imagine any compiler will actually produce code that's invalid. That's because "synchronized" itself triggers a requirement that instructions executed before it must be visible to all threads executing code after it - so in some sense the objects that exist before it should be 'published' to all threads. But doing this is playing with fire. You're still violating a rule and if your code malfunctions in odd ways in a multi-processor environment and you submit a bug on it, the JVM guys can say "well...that's because your code is illegal".
Simple answer is that's it's best to avoid this. And honestly code that waits to call methods until after the constructor is finished is better code anyway.
Incidentally easily the most common examples of violating this is the old example of how to make a thread:
public class MyThread extends Thread {
public MyThread() {
...
// This is technically wrong, because it starts the thread before the constructor finishes
start() ;
}
}
You can probably find that code in 100 old tutorials on threads.
Anyway, these days we use Executors, so no reason to ever be newing up Thread objects directly...but just thought I'd mention it :)
Doug | https://www.experts-exchange.com/questions/28493642/I-need-verfication-Java-Immutable-class-Is-this-class-Immutable.html | CC-MAIN-2018-30 | refinedweb | 724 | 59.74 |
Sometimes, while coding, we come across a problem where we require to convert a variable from integer to string. In this article we will learn how to covert integer to string in C++. Following pointers will be discussed in this article,
- Using String Streams
- Using To String Function
- Using Boost Lexical Cast
So let us get started with this article
Convert Integer To String In C++
Using String Streams
The first way is by using string streams, that is, the input or output string streams.
Convert Integer To String In C++: Sample Code
#include<iostream> #include <sstream> #include <string> using namespace std; int main() { int num; cout<<"Enter the number: "<<endl; cin>>num; ostringstream strg; strg<< num; string s1 = strg.str(); cout <<endl<< "The newly formed string from number is : "; cout << s1 << endl; return 0; }
Output
Explanation
In the above program, we take an integer input from the user and convert it to string. We do this by using string streams method. We have two header files that are included, one is for string and the other is for string input/output streams. We have declared two variables. The first one is of type int. It is used to hold the number entered by the user.
We create a variable of output string stream class and create a variable str. We then pass the number stored in num to the output stream. Then we call the str() function to convert the number to string and print it.
Let us move on to the next topic of this article on ‘convert integer to string in C++’
Using To String Function
The next method is using the string function to_string. Any data type can be converted to a string by using to_string.
Sample Code
#include<iostream> #include<string> using namespace std; int main() { int intVal; cout << "Enter a Integer : "<<endl; cin>>intVal; string str = to_string(intVal); cout << "The integer in string is : "; cout << str << endl; return 0; }
Output
data-src=
Explanation
In the above program, we take an integer input from the user and convert it to string. We do this by using to_string function. We have one header file that is included, it is for strings and the to_string function.
We accept a value from the user and store it in intVal. We then create a string variable str and assign the value of to_string() function, when intVal is passed.
We then print str and the integer is converted to a string.
Let us move on to the next topic of this article on ‘convert integer to string in C++’
Using Boost Lexical Cast
Boost.LexicalCast can convert a number from strings to numeric and vice versa. It is present in “boost/lexical_cast.hpp” library. Here, we will convert a number to string using Boost.LexicalCast.
Sample Code
#include<iostream> #include <boost/lexical_cast.hpp> #include <string> using namespace std; int main() { int m = 9487; string str = boost::lexical_cast<string>(m); cout << "The string value is : "; cout << str << endl; return 0; }
Output
The string value is: 9487
Explanation
In the above program, we take an integer input from the user and convert it to string. We do this by using Boost.LexicalCast. We have two header file that is included, one is for strings and the other is for Boost.LexicalCast.
We assign m the value 9487. We create the string variable str and assign the value we get from Boost.LexicalCast to it. The conversion process takes place here. Then we print the output.
Thus we have come to an end of this article on ‘ ‘convert integer to string. | https://www.edureka.co/blog/convert-integer-to-string-cpp/ | CC-MAIN-2019-35 | refinedweb | 598 | 71.75 |
Red Hat Bugzilla – Bug 680367
sssd not thread-safe
Last modified: 2017-02-06 09:35:58 EST
Description of problem:
The version of the sssd client included in RHEL 6 "Santiago" is not thread-safe for some calls. In particular, under heavy concurrency of getpwuid_r on a system slaved to LDAP, the process making the calls eventually crashes with an invalid free.
The relevant Fedora bug tracks the same crash; it was fixed in SSSD 1.5.0.
Version-Release number of selected component (if applicable):
RHEL6.0 includes sssd-client 1.2.1
How reproducible:
Very.
Steps to Reproduce:
1. Call getpwuid_r at a high rate from several threads
2. Eventually process will crash
I can reproduce this reliably when load testing a piece of software that relies on these calls. If necessary I can probably write a simple C test case to reproduce the problem, but the upstream bug should be sufficient.
Actual results:
getpwuid_r returns strange error codes, and soon memory corruption results, for example due to invalid frees:
***]
Expected results:
Should function correctly, since getpwuid_r is a reentrant-safe call.
1. Configure sssd 1.2 running on RHEL6.0
2. run ./nss-thread-test (compiled as "gcc -lpthread nss-thread-test.c -o nss-thread-test")
The script doesn't finish executing and crashes with following error:
<snip>
7f9b317fd000-7f9b321fd000 rw-p 00000000 00:00 0
7f9b321fd000-7f9b321fe000 ---p 00000000 00:00 0
7f9b321fe000-7f9b32bfe000 rw-p 00000000 00:00 0
7f9b32bfe000-7f9b32bff000 ---p 00000000 00:00 0
7f9b32bff000-7f9b335ff000 rw-p 00000000 00:00 0
7f9b335ff000-7f9b33600000 ---p 00000000 00:00 0 Aborted (core dumped)
</snip>
3. On another RHEL6.1 machine with sssd 1.5 installed and configured.
4. Repeat step 2.
Result:
The multi-threaded script executes successfully.
<snip>
Completed join with thread 43 status= 0
Completed join with thread 44 status= 0
Completed join with thread 45 status= 0
Completed join with thread 46 status= 0
Completed join with thread 47 status= 0
Completed join with thread 48 status= 0
Completed join with thread 49 status= 0
</snip>
Version: rpm -qi sssd | head
Name : sssd Relocations: (not relocatable)
Version : 1.5.1 Vendor: Red Hat, Inc.
Release : 14.el6 Build Date: Thu 10 Mar 2011 01:00:12 AM IST
Install Date: Fri 11 Mar 2011 02:36:04 AM
Marking bug as 'VERIFIED'
----------nss-thread-test.c-----------
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <pwd.h>
#define NUM_THREADS 50
#define BUFLEN 4096
struct thread_data {
int tid;
int min;
int max;
};
struct thread_data td[NUM_THREADS];
void *thread_main(void *arg)
{
struct thread_data *data = (struct thread_data *) arg;
int i;
struct passwd pw, *pwp;
char buf[BUFLEN];
int num = 0;
unsigned long long uid;
for (i=0; i<NUM_THREADS; i++) {
uid = random() % (data->max - data->min) + data->min;
printf("calling getpwuid from %d for %llu (%d/%d)\n",
data->tid, uid, i, NUM_THREADS);
getpwuid_r(uid, &pw, buf, BUFLEN, &pwp);
}
printf("Thread %d done\n", data->tid);
pthread_exit((void *) NULL);
}
int main(int argc, char *argv[])
{
pthread_t thread[NUM_THREADS];
pthread_attr_t attr;
int ret, t;
void *status;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
srandom(time(NULL));
for(t=0; t < NUM_THREADS; t++) {
printf("Creating thread %d\n", t);
td[t].tid = t;
td[t].min = 501;
td[t].max = 9999;
ret = pthread_create(&thread[t], &attr, thread_main, &td[t]);
if (ret) {
printf("ERROR; return code from pthread_create() is %d\n", ret);
goto done;
}
}
for(t=0; t<NUM_THREADS; t++) {
ret = pthread_join(thread[t], &status);
if (ret) {
printf("ERROR return code from pthread_join() is %d\n", ret);
goto done;
}
printf("Completed join with thread %d status= %ld\n", t, (long) status);
}
ret = 0;
done:
pthread_attr_destroy(&attr);
pthread_exit(NULL);
return. | https://bugzilla.redhat.com/show_bug.cgi?id=680367 | CC-MAIN-2017-34 | refinedweb | 611 | 54.83 |
Logging is an essential tool in every developer's arsenal. It helps the developer to
identify problems faster by showing the state of an application at any given
point. It is important after deployment, when all that the poor system admins have are the logs that are generated by your application.
So it is absolutely necessary to be equipped with a logging framework which
is easy to set up, easy to use, and extensible. With this in mind, we will
be discussing log4net, an open source logging and tracing framework. The only
prerequisite for this article is to know how to program in .NET using C#, although
the concepts are applicable to programmers using VB.NET or any other .NET language.
log4net
log4net, as I said earlier, is an open source project and is the port of the
famous log4j project for Java. It is an excellent piece of work, started by a
team at, but it
would not have been possible without the contributions made by the community. log4net provides many advantages over other logging
systems, which makes it a perfect choice for use in any type of application,
from a simple single-user application to a complex multiple-threaded
distributed application using remoting. The complete features list can be viewed
here.
It can be downloaded from the web site
under the Apache license. The latest version at this writing is 1.2.0 beta
7, upon which this article is based. The changes in this release are listed here.
log4j
You can see from the feature document that this framework is released for
four different platforms. There are separate builds for Microsoft .NET Framework,
Microsoft .NET Compact Framework, Mono 0.23, and SSCLI 1.0. There are different
levels of support provided with each framework, the details of which are documented here.
This version of log4net is provided with NAnt
build scripts. To compile the framework, you can execute the build.cmd file
from the root directory where you extracted the zipped file. The log4net.sln
file in the <log4net-folder>\src directory is the solution file for log4net
source, whereas the examples are provided in a separate solution file in
<log4net-folder>\examples\net\1.0. The samples are provided
in C#, VB.NET, VC++.NET, and even in JScript.NET. Some of the samples have their
configuration files in the project's root folder, so in order to run those samples
you need to manually move them with project's executable file. The API documentation
is provided in the <log4net-folder>\doc\sdk\net directory.
log4net is built using the layered approach, with four main components inside
of the framework. These are Logger, Repository, Appender, and Layout.
The Logger is the main component with which your application interacts. It is
also the component that generates the log messages.
Generating a log message is different than actually showing the final output.
The output is showed by the Layout component, as we will see later.
The logger provides you with different methods to log any message. You can
create multiple loggers inside of your application. Each logger that you instantiate
in your class is maintained as a "named entity" inside of the log4net framework.
That means that you don't need to pass around the Logger instance between
different classes or objects to reuse it. Instead, you can call it with the
name anywhere in the application. The loggers maintained inside of the framework
follow a certain organization. Currently, the log4net framework uses the hierarchical
organization. This hierarchy is similar to the way we define namespaces in .NET.
For example, say there are two loggers, defined as a.b.c and a.b.
In this case, the logger a.b is said to be the ancestor of the logger
a.b.c. Each logger inherits properties from its parent logger.
At the top of the hierarchy is the default logger, which is also called the
root logger, from which all loggers are inherited. Although this namespace-naming scheme is preferred in most scenarios, you are allowed to name your
logger as you would like.
a.b.c
a.b
The log4net framework defines an interface, ILog, which is necessary for all
loggers to implement. If you want to implement a custom logger, this
is the first thing that you should do. There are a few examples in the /extension
directory to get you started.
ILog
The skeleton of the ILog interface is shown below:
public interface ILog
{
void Debug(object message);
void Info(object message);
void Warn(object message);
void Error(object message);
void Fatal(object message);
// There are overloads for all of the above methods which
// supports exceptions. Each overload in that case takes an
// addition parameter of type Exception like the one below.
void Debug(object message, Exception ex);
// ...
// ...
// ...
// The Boolean properties are used to check the Logger's
// level (as we'll see Logging Levels in the next section)
bool isDebugEnabled;
bool isInfoEnabled;
// other boolean properties for each method
}
From this layer, the framework exposes a class called LogManager, which manages
all loggers. It has a GetLogger() method that retrieves the logger for us against
the name we provided as a parameter. It will also create the logger for us if
it is not already present inside of the framework.
LogManager
GetLogger()
log4net.ILog log = log4net.LogManager.GetLogger("logger-name");
Most often, we define the class type as the parameter to track the name of
the class in which we are logging. The name that is passed is prefixed with
all of the log messages generated with that logger. The type of class can be passed
in by name using the typeof(Classname) method, or it can be retrieved through
reflection by the following statement:
typeof(Classname)
System.Reflection.MethodBase.GetCurrentMethod().DeclaringType
Despite the long syntax, the latter is used in the samples for its portability,
as you can copy the same statement anywhere to get the class in which it is
used.
As you can see in the ILog interface, there are five different methods for
tracing an application. Why do we need all of these different methods? Actually,
these five methods operate on different levels of priorities set for the logger.
These different levels are defined as constants in the log4net.spi.Level class.
log4net.spi.Level
You can use any of the methods in your application, as appropriate.
But after using all of those logging statements, you don't want to have all of
that code waste CPU cycles in the final version that is deployed. Therefore,
the framework provides seven levels and their respective Boolean properties
to save a lot of CPU cycles. The value of Level can be one of the following:
Level
Table 1. Different Levels of a Logger
OFF
Highest
FATAL
void Fatal(...);
bool IsFatalEnabled;
ERROR
void Error(...);
bool IsErrorEnabled;
WARN
void Warn(...);
bool IsWarnEnabled;
INFO
void Info(...);
bool IsInfoEnabled;
DEBUG
void Debug(...);
bool IsDebugEnabled;
ALL
Lowest
In the log4net framework, each logger is assigned a priority level (which is
one of the values from the table above) through the configuration settings.
If a logger is not assigned a Level, then it will try to inherit the Level value
from its ancestor, according the hierarchy.
Also, each method in the ILog interface has a predefined value of its level.
As you can see in Table 1, the Info() method of the ILog interface has the
INFO level. Similarly, the Error() method has the ERROR level, and so on. When we
use any of these methods, the log4net framework checks the method level against
the level of the logger. The logging request is said to be enabled if the logger's
level is greater than or equal to the level of the
logging method.
Info()
Error()
For example, let's say you create a logger object and set it to the level of
INFO. The framework then sets the individual Boolean properties for that logger.
The level checking is performed when you call any of the logging methods.
Logger.Info("message");
Logger.Debug("message");
Logger.Warn("message");
For the first method, the level of method Info() is equal to the level set
on the logger (INFO), so the request passes through and we get the output,
For the second method, the level of the method Debug() is less than that of the logger (see Table 1). There, the request is disabled or refused and you get
no output.
Debug()
Similarly, you can easily conclude what would have happened in the third line.
There are two special Levels defined in Table 1. One is ALL, which enables
all requests, and the other is OFF, which disables all requests.
You can also explicitly check the level of the logger object through the Boolean
properties.
if (logger.IsDebugEnabled)
{
Logger.Debug(. | http://www.onjava.com/pub/a/dotnet/2003/06/16/log4net.html | CC-MAIN-2016-40 | refinedweb | 1,468 | 65.01 |
Python generator is a simple way of creating iterator.
There is a lot of overhead in building an iterator in python. We have to implement a class with
__iter__() and
__next__() method, keep track of internal states, raise
StopIteration when there was no values to be returned etc.
Iterator in Python is an object that can be iterated upon. An object which will return data, one element at a time. Iterator in python is any python type that can be used with a
for in loop.
Python lists, tuples, dicts and sets are all examples of inbuilt iterators.
Python iterator object must implement two special methods,
__iter__() and
__next__(). The
__iter__() method returns the iterator object itself. We use the
next() function to manually iterate through all the items of an iterator. When we reach the end and there is no more data to be returned, it will raise
StopIteration.
Example:
# define a list
my_list = [4, 7, 0, 3]
# get an iterator using iter()
my_iter = iter(my_list)
# iterate through it using next()
# prints 4
print(next(my_iter))
# prints 7
print(next(my_iter))
# next(obj) is same as obj.__next__()
# prints 0
print(my_iter.__next__())
# prints 3
print(my_iter.__next__())
# This will raise error, no items left
next(my_iter)
In the next example we will implement a function which
obj = PowTwo(4)
iter = iter(obj)
print(next(iter)) # print 1
print(next(iter)) # print 2
print(next(iter)) # print 4
All the overhead we mentioned above are automatically handled by generators in Python. Simply speaking, a generator is a function that returns an object (iterator) which we can iterate over, one value at a time.
It is fairly simple to create a generator in Python. It is as easy as defining a normal function with yield statement instead of a return statement. If a function contains at least one yield statement, it becomes a generator function.
Both yield and return will return some value from a function. The difference is that, while a return statement terminates a function entirely, yield statement pauses the function saving all its states and later continues from there on successive calls.
Example:
We have a generator function named
my_gen() with several yield statements.
# Generator function
def my_gen():
n = 1
print('print 1')
# Generator function contains yield statements
yield n
n += 1
print('print 2')
yield n
n += 1
print('print 3 - last')
yield n
# create iterator object
obj = my_gen()
# iterate through the items using next() function
print(next(obj))
Normally, generator functions are implemented with a loop having a suitable terminating condition.
Example:
def rev_str(my_str):
length = len(my_str)
for i in range(length-1, -1, -1):
yield my_str[i]
for char in rev_str("hello"):
print(char)
- Easy to Implement
Since generators keep track of details automatically, they can be implemented in a clear and concise way as compared to their iterator class counterpart.
- Memory Efficient
A normal function to return a sequence will create. | https://pythoncircle.com/post/648/iterator-and-generators-in-python-explained-with-example/ | CC-MAIN-2021-43 | refinedweb | 489 | 52.9 |
Want to share your content on R-bloggers? click here if you have a blog, or here if you don: If you are looking to access your data in Amazon Redshift and PostgreSQL with Python and R
Python
Connect to BigQuery with Python
In order to pull data out of BigQuery, or any other database, we first need to connect to our instance. To do so, we need a cloud client library for the Google BigQuery API. Although the options are quite many, we are going to work with the Google Cloud Bigquery library which is Google-supported.
However, feel free to explore other third-party options like the BigQuery-Python library by tylertreat which is also widely used, well-documented and frequently maintained.
So, in order to be able to install the Google Cloud Library, we assume that you have already setup a Python Development Environment. If this is not the case you can refer to Python Development Environment Setup Guide.
The next step is to install the library and this can be easily achieved by executing the following command:
pip install --upgrade google-cloud-bigquery
In order to be able to connect to the database, you need to download locally the .json file which contains the necessary credentials from within your service account. In case you didn’t already have a service account, you would have to create one and then download in our local machine the previously mentioned JSON.
Now that everything is set we can move on and write some Python code in order to initialize the connection.
from google.cloud import bigquery from google.oauth2 import service_account credentials = service_account.Credentials.from_service_account_file( 'path/to/file.json') project_id = 'my-bq' client = bigquery.Client(credentials= credentials,project=project_id)
As you can see, the parameters you need to specify are the project_id and the location of the json key file. In Bigquery, a project is the top-level container and provides you default access control across all datasets.
Executing Queries with Python
With the BigQuery client, we can execute raw queries on a dataset using the query method which actually inserts a query job into the BigQuery queue.
These queries are executed asynchronously in the sense that there is no timeout specified and that the program will wait for the job to complete.
As long as the job is completed the query method returns a QueryJob instance according to documentation which, among others, contains the produced results.
For more details you can always refer to the official documentation here
The Python code required follows:
query_job = client.query(""" SELECT * FROM dataset.my_table LIMIT 1000""") results = query_job.result() # Waits for job to complete.
Note that the above query uses standard SQL syntax as the client library default to this. If you wish you can change the SQL dialect to legacy as follows:
job_config.use_legacy_sql = True query_job = client.query(""" SELECT * FROM dataset.my_table LIMIT 1000""", job_config=job_config) results = query_job.result() # Waits for job to complete.
R
If Python is not your cup of tea and you prefer R instead, you are still covered. Getting your data from Google BigQuery is equally easy as in Python – or even easier.
Connect to BigQuery with R
Again we are going to use an open source library called BigrQuery, which is created and maintained by Hadley Wickham, Chief Scientist at RStudio. In order to install we simply have to run the following command from within R console:
install.packages(“bigrquery”)
Executing Queries with R
According to the library documentation, the first time we’ll need to authorize the application to access Google Cloud services. As documented on the Authentication section of the bigrquery GitHub page, we’ll follow the prompts within R to open the authorization URL and later copy the authorization code back to R. You’ll only need to authorize the library once, requests performed after the first authorized one will refresh access credentials.
So the overall procedure that we are going to follow can be summarized as:
- Import ‘BigrQuery’ library.
- Specify the project ID from the Google Cloud Console like we did with Python.
- Form your query string
- Call query_exec with your project ID and query string.
#import library library(bigrquery) # Use your project ID here project_id <- "your-project-id" # put your project ID here # Example query sql_string <- "SELECT * FROM dataset.my_table LIMIT 1000" # Execute the query and store the result query_results <- query_exec(sql_string, project = project_id, useLegacySql = FALSE)
Again, if you wish you can execute queries using legacy sql by changing the value of the useLegacySql to true in the query_exec function.
Conclusion
As you can see, getting your data from BigQuery for further analysis in Python and R is really easy. The true power of a database that stores your data in comparison with CSV files etc. is that you have SQL as an additional tool. Invest some time learning how to work with SQL and you will not regret it, having structured data in a database and using SQL to pre-process your data before you start building your statistical models will save you time and resources.
Although in this article we focused mainly on BigQuery, using any other database is equally easy. The main difference will be the selection of the appropriate library for Python and for R.
Would love to hear your thoughts in the comments below.
The post Access your data in Google BigQuery with Python and R appeared first on Blend. | https://www.r-bloggers.com/access-your-data-in-google-bigquery-with-python-and-r/ | CC-MAIN-2019-51 | refinedweb | 904 | 62.07 |
I'm using app studio and I would like to add the buttons to the map for GPS/zoomin/zoom out (see attachment). The screen shot was taken from an app built using the mapveiwer template. I've been scanning the code for that template but I don't see any particular lines that would be inserting these buttons. Is this built into the map api? Anyone have any idea?
Thanks,
David
Hi David
Do a global search in the project for the text "ZoomButtons" and you should find the object. I believe it is a custom object created by the AppStudio team, so you can go into it and customize it a little, e.g. change icons etc.
If you want to use the object in your own app, make sure you have this import statement:
import ArcGIS.AppFramework.Runtime.Controls 1.0
then just add the object and go from there...
Cheers,
-Paul | https://community.esri.com/thread/181239-zoominzoomoutlocation-buttons-in-qml | CC-MAIN-2019-39 | refinedweb | 155 | 82.14 |
Gleam has reached v0.4! This version has some new features, bug fixes, and some breaking changes. Let’s take a look.
Structs
Gleam’s map and tuple type has been removed and replaced with structs, which have many of the benefits of both.
Like maps structs are collections of named values, each with their own type. Unlike maps they are named, so two structs are not the same type just because they have the same fields by coincidence, making Gleam’s type system a little stricter by default.
Like tuples structs have constant access time, they do not need to be linearly scanned in order to find a field to access it.
Unlike either struct types are declared up-front by the
struct keyword.
pub struct Cat { name: String cuteness: Int }
Once declared the newly defined constructor can be used to create instances of the struct.
pub fn cats() { let cat1 = Cat(name: "Nubi", cuteness: 2001) let cat2 = Cat(cuteness: 1805, name: "Biffy") // Alternatively fields can be given without labels let cat3 = Cat("Ginny", 1950) [cat1, cat2, cat3] }
One downside to structs is that they are less convenient for returning more
than one value from a function than tuples were, as struct types need to be
pre-declared. To negate this inconvenience the standard library declares
Pair
and
Triple types in the
gleam/pair and
gleam/triple modules. If you wish
to return more than 3 values from a function it is recommended to create a
struct and name the values to make it more clear what they are.
Comparison operators
The next breaking change is to how the comparison operators
>,
>=,
<,
and
<= work. Previously these operators would take any two values of the
same type and determine which is larger according to Erlang’s ordering of
values. This is convenience but may result in surprising behaviour when used
with custom types such as enums.
For example, with the
Order enum we would expect
Gt (greater than) to be
larger than
Lt (less than), but according to Erlang’s value ordering this is
not the case.
enum Order = | Gt | Eq | Lt
Gt > Lt // => False
From this version on
>,
>=,
<, and
<= only accept Ints as arguments
to avoid this surprising behaviour, and the
>.
>=.
<. and
<=.
comparison operators have been added for comparing Floats.
For ordering other types comparison functions can be found in the standard
library, such as
order.compare and
bool.compare.
Second class modules
Modules are no longer first class values in Gleam, meaning they can no longer be assigned to variables or used as arguments to functions.
First class modules were inspired by OCaml and intended to be a way to work
with Erlang behaviours such as
gen_stage from within Gleam. However after
several months of using modules in Gleam it became clear that taking OCaml’s
first class modules but not their functor module system resulted in Gleam’s
modules being far less useful.
First class functions combined with Gleam’s other data structures is sufficient to use Erlang behaviours, while being easier to construct and compose, so first class modules were determined not to be the ideal solution here either.
With first class modules removed we no longer need a special syntax for
accessing a field on a module, so the
: operator has been removed and
replaced with the
. operator, making it the universal operator for accessing
a named field on a module or container data type in Gleam.
import gleam/list pub fn main() { list.reverse([1, 2, 3]) }
With the removal of modules and maps Gleam’s type system has become simpler, less structural and more nominal in style. This puts us in a good position to do research into new forms of polymorphism such as type classes or traits! want to recommend to me good things to do on a weekend in Berlin) say Hello!
Thanks
Lastly, a huge thank you to the code contributors to this release! | https://gleam.run/news/gleam-v0.4-released/ | CC-MAIN-2022-33 | refinedweb | 653 | 67.69 |
#include <fieldBase.h>
Base class for field primitives.
Definition at line 56 of file fieldBase.h.
Construct a UsdVolFieldBase on UsdPrim
prim . Equivalent to UsdVolFieldBase::Get(prim.GetStage(), prim.GetPath()) for a valid
prim, but will not immediately throw an error for an invalid
prim
Definition at line 68 of file fieldBase.h.
Construct a UsdVolFieldBase on the prim held by
schemaObj . Should be preferred over UsdVolFieldBase(schemaObj.GetPrim()), as it preserves SchemaBase state.
Definition at line 76 of file fieldBase.h.
Destructor.
Returns the type of schema this class belongs to.
Reimplemented from UsdGeomBoundable.
Reimplemented in UsdVolOpenVDBAsset.
Return a UsdVolFieldBase 115 of file fieldBase.h.
Compile time constant representing what kind of schema this class is.
Definition at line 62 of file fieldBase.h. | https://www.sidefx.com/docs/hdk/class_usd_vol_field_base.html | CC-MAIN-2021-10 | refinedweb | 125 | 55.4 |
I have created database in external storage and i have stored image path and audio paths for each step in tables. Now from a new android application i want to access the table and extract in custom listview with image and text. Image are stored in a
This is my JSON response: { MyResult: { Apple=[ "iPhone 4S", "iPhone 6" ];Microsoft=[ "Lumia 535", "Lumia 640" ];Samsung=[ "Galaxy A3", "Galaxy A5", "Galaxy A7" ]; } } I would like to d
I have a problem with my listview. My listview has a custom adapter. It fetches records from server. I use AsyncTask to fetch the records and display in the listview. But when I click the listview item to display the detail and go back to the list ac
Hello all android developers.. i have a listview which gets initially populated with some list (which is not empty) in onCreate() method. and as user type in search box,the listview gets updated.i use custom adapter to make this work. Everything work
I have a ListView displaying some text on my android activity. The problem I am facing is that some text is bigger than the other so the ListView rows don't have the same size. I would like to have all the rows the size of the biggest. That way all t
For understanding the structure I will be talking about, first see this example picture: I need to show multiple ListViews in one fragment. The problem is when opening the given fragment, inflating the views takes quite a lot of time, because: Each L
I want to show listView divider all over in the listView even no data to show in the listView I am working on an Application where I have to fetch data from data base and show in the listView If there are only two records in the data base the divider
I am working on Android ListView. I implemented pull to refresh through XListView, But now I also want to implement Swipe Left to right to show buttons List Item on this ListView. How can I do it? Or How to add 2 libs as same on ListView. My ListView
When I finish my activity I'm adding an value to my array that displayed over the Listview. after first intent i refresh the game (back to main_activity) and start game agian. when I finish that game i add another value to the array but it displayes
I'm following this library to add drag - drop functionality my listview. Everything is fine when moving first item to second place. But when I move the listview first element to third place app crashes. In order to explain my situation I've added ima
Getting nullpointer exception while setting the switch on. The XML definition <RelativeLayout xmlns:android="" android:id="@+id/list1" android:layout_width="fill_parent" android:la
I have a ListView that is within a Fragment. In the onCreateView section I have set a onItemClickListener for the list, which highlights the selected item in the ListView. I have set two ImageButtons that navigate up and down the list. On selection a
I've a listview with 30 to 40 items. Each item has one image view and many other views. When the user clicks on the image view(sunrise image), it should be toggled with another image view(sunset image). When I tap the sunrise image in one item, many
I'm trying to inflate a fragment with listview in one of the two tabs in my application. But, what am getting is all this: (this goes on and on forever.. ) This is what I'm trying to do. ListFragment.Java public class ListFragment extends Fragment {
I have a list with notifications, content of the notifications is displayed in a webview. I have set a contextmenu on the list but for some reason only the webviews respond to a longpress and show the contextmenu, the rest of the cells doensn't. anyo
I have custom ListView. I'm using array adapter to fill my ListViews rows. It worked perfectly. When I select the ListView row, my checkedtextview become visible. After scrolling my checked situation disapper. How can i remember my checks ? Thanks in
I have a ListView in which i load an image from the server inside an ImageView object and then set my layout object background to be this imageview (the row list background). However, as i scroll the list my background for each row changes and is not
I have created a list of 7 days and want the list to take the full screen. But the list is having a gap in the bottom. I want to remove the gap. My code is as follows : package com.example.collegehack; import android.app.ListActivity; import android.
I am new in android programming. so i need help on the issue which i am facing. Actually i am getting data from magento in android app using eclipse. i am using simple adapter for displaying text and images in listview. text data is perfectly display
This question already has an answer here: What is a Null Pointer Exception, and how do I fix it? 12 answers I'm trying to create a List View with BaseAdapter and i keep getting a Null Object Reference Error at the point of adding an object to an Arra | http://www.dskims.com/tag/android-listview/ | CC-MAIN-2018-22 | refinedweb | 893 | 72.05 |
SQL Server Data Services: Microsoft's Answer to Amazon S3
Microsoft has announced SQL Server Data Services (SSDS) at MIX08. Being a storage service on the web, SSDS is Microsoft's Amazon S3 competitor.
SSDS is another Microsoft service running on the web in addition to BizTalk Services. Neil Hutson offers a concise summary of what SSDS is all about:
SSDS you can think of as a structured data store in the cloud(building block service), which is accessed using Internet protocols using a basic data manipulation language. SSDS is for developers and businesses that need scalable, easily programmable, and cost-effective data storage with robust database query capabilities.
The SQL Server Data Services offer a flexible data model, which is structured as follows: Customer > Account > Authority > Container > Entity. Customers are companies or individuals that use SSDS. Each customer might open an arbitrary amount of accounts, which is connected with a unique Windows Live Id. Authorities are a concept analogous to namespaces, and are a in the context of billing and geo-location. Containers are a unit of consistency, defining boundaries for search and update operations. The smallest and fundamental data unit is the entity.
Neil Hutson describes this fundamental data unit as "a Flexible Entity Model, where no schema required and you can update name/value pairs (which is the smallest unit of storage)". The name/value pairs represent properties, whose type information can be changed on the fly. Properties may be added at any time. SSDS supports "simple types such as decimal, string, bool, etc and all the properties are indexed".
Data can be accessed and altered in many ways:
- Microsoft Sync Framework (offline access)
- ADO.NET Data Services
- REST
- SOAP
Data can be manipulated by CRUD operations on authorities, containers, and entities. Queries can be executed using a text-based query language whose syntax follows the LINQ pattern for C#.
Regarding the predominance of Amazon S3 on the web storage market Robert Scoble said:
It’s almost too late for the others to get into the game [of data storage on the web]. It’s amazing (or maybe it should be “amazoning”) to me that Ray Ozzie over at Microsoft has let Amazon have so much runway.
According to Jamie Thomson nothing's carved in stone, yet:
Can [Microsoft] stop Amazon? Who knows, it might slow them down a bit (when SSDS finally gets released - it hasn't even reached beta yet) but Amazon are already miles and miles ahead with this. Having said that, its difficult to know how far Amazon have got into the enterprise data storage market and that will be Microsoft's key battleground.
Further information can be found on the product web site and the SSDS whitepaper.
AWS computer cloud and S3 surpassed the usage of all of Amazon.com's global Web sites
by
Jean-Jacques Dubray
An interesting piece of information came out yesterday in the Seattle Times. Brier Dudley reported that:
Bandwidth usage by the AWS computer cloud and S3 storage services during the fourth quarter of 2007 surpassed the usage of all of Amazon.com's global Web sites. That's during the holiday shopping season.
If anyone doubts that this is real, this is yet another piece of evidence.
Good to have some competition
by
Ted Slusser
I really like these features of SSDS that SimpleDB is lacking
- Support for simple types: string, numeric, datetime, boolean
- Query language supports the retrieval of complete entities
- Use the same service interfaces for your storage needs at any scale (SimpleDB vs S3)
AWS outage notwithstanding the future is very "cloudy" | http://www.infoq.com/news/2008/03/ssds | CC-MAIN-2014-42 | refinedweb | 598 | 52.49 |
Hey all,
I'm trying to write a program which takes in a file called original.dat, which is a source code file, and separates the comments (denoted by // and /**/) from the regular code. The program writes the comments to a file called comment.dat and the code to a file called source.dat.
Just a little background, I had a God-awful programming I teacher before, and my programming II teacher expects us to be intimately familiar with all aspects of coding.
I know I have some unused identifiers in there, they are just remnants of other attempts I made.
I am using the Bloodshed Dev C++ compiler, and I was trying to test this program with some .dat files written in Dev, and all I get out is a bunch of system trash, with lots of file paths included.
I could use some advice; to my knowledge, this works as I expect. It stores the whole input file into a string, and then goes character by character through the string and checks for '/' and then if it encounters that, checks for another '/' or a '*' to denote comment tags, and then writes them out to the comment.dat file. It also echoes the data written to the screen. But I was unable to test this to my satisfaction.
Code:
#include <iostream>
#include <stdio.h>
#include <fstream>
#include <conio.h>
#include <string>

#define ENDFILE "CTRL-Z"
#define infile "original.dat"
#define outfile1 "source.dat"
#define outfile2 "comment.dat"

using namespace std;

int outcomments(ifstream &xx);

int main()
{
    int line1;
    ifstream originalin;
    ofstream sourceout;
    ofstream commentout;

    originalin.open(infile);
    sourceout.open(outfile1);
    commentout.open(outfile2);

    line1 = 0;
    line1 = outcomments(originalin);

    system("pause");
    return 0;
}

int outcomments(ifstream &xx)
{ //begin function
    ifstream originalin;
    ofstream commentout;
    ofstream sourceout;

    originalin.open(infile);
    sourceout.open(outfile1);
    commentout.open(outfile2);

    string file;
    string comment_slash;
    string comment_slash_star;
    char ch;
    int i, line1;
    line1 = 0;
    i = 0;

    while(!xx.eof())
    { //begin outer while
        originalin >> file; //stores entire file in string
        if(file[i] == '/')
        { //begin outer if
            i++;
            if(file[i] == '/')
            { //begin inner if
                commentout << '/';
                while(file[i] != '\n')
                { //begin inner while
                    commentout << file[i];
                    cout << '/' << file[i]; //echo
                    i++;
                } //end inner while
                commentout << endl;
            } //end inner if
            else
            { //begin inner else
                if(file[i] == '*')
                { //begin inner inner if
                    commentout << '/';
                    while(file[i] != '/')
                    { //begin inner while
                        commentout << file[i];
                        cout << file[i]; //echo
                        i++;
                    } //end outer while
                } //end inner inner if
            } //end inner else
        } //end outer if
        else
        {
            sourceout << file[i]; //if characters are not // or /* then the code
        } //is written to the source file, without comments
    } //end outer while
    return 0;
} //end function
| http://cboard.cprogramming.com/cplusplus-programming/135025-basic-file-read-file-write-program.html | CC-MAIN-2016-07 | refinedweb | 435 | 54.83
Thanks for the answer. I should have explained why I wanted to know. I've been away from Zope 3 or 4 years (so I missed those emails) and I am now tasked with preparing a workshop for next year (as I re-learn everything too). I know that is one of the questions I'll get. So, I wanted to be able to answer it.
Advertising
Though they carry no semantic significance, namespaces do always have a root.

Cheers,
Tim

On Thu, 2007-07-19 at 11:28 -0400, Benji York wrote:
> Tim Cook wrote:
> > Does z3c stand for Zope 3 Component?
>
> It stands for "Zope 3 community", but (IMO), name space package names,
> while mnemonic, don't actually confer meaning. In other words, just
> because Zope Corp uses zc. as our name space doesn't mean anything other
> than we were the ones that created the project.
>
> Summary: name space packages are just for uniquifying package names.

--
Timothy Cook, MSc
Health Informatics Research Services
01-904-322-8582
_______________________________________________ Zope3-users mailing list [email protected] | https://www.mail-archive.com/[email protected]/msg06137.html | CC-MAIN-2016-44 | refinedweb | 189 | 74.9 |
Get To Know All About Library Functions In C++ With Examples.
Library functions, which are also called "built-in" functions, are functions that are already available and implemented in C++.
We can directly call these functions in our program as per our requirements. Library functions in C++ are declared and defined in special files called “Header Files” which we can reference in our C++ programs using the “include” directive.
=> Visit Here For The Complete C++ Course From Experts.
What You Will Learn:
Overview
For example, to include all the built-in functions related to math, we should include the <cmath> header as follows:
#include <cmath>
Some of the standard library header files used in C++ are tabularized below. These headers replace their respective counterparts with the ".h" extension.
For example, <iostream> replaces the <iostream.h> header file.
The header files are briefed along with their descriptions below.
We have already used most of these headers throughout our tutorial so far. Notable are the <iostream>, <string> and <ctime> headers that we have used from time to time.
In our STL tutorials, we will be making use of all the container headers, as well as the <algorithm> and <iterator> headers. Similarly, when we learn file I/O and exception handling, we will be using the respective headers.
In this tutorial, we will mostly deal with <cmath> and <cctype> headers and discuss the various function prototypes that they support. The function prototypes from these headers are extensively used in C++ programming.
<cmath> Header
This header contains various function prototypes related to mathematical functions. Some of the prototypes that are used extensively are listed here.
<cctype> Header
This header contains function prototypes that are mainly used for converting the character to upper/lower case or to check if a character is a digit etc.
Function prototypes included in <cctype> header are listed as below:
<stdlib> Header
We also have another header <stdlib> that includes various useful library functions that are used extensively in C++ programming.
We have listed some of the popular functions in <stdlib> below:
Conclusion
In this tutorial, we have gone through some of the header files supported by the C++ standard library.
We also discussed some popular library functions that are used by programmers. This list of functions is not exhaustive as each header of the C++ standard library contains too many functions for the benefit of programmers.
In our upcoming C++ tutorials, we will come across more library functions.
=> Visit Here For The Exclusive C++ Training Tutorial Series. | https://www.softwaretestinghelp.com/library-functions-in-cpp/ | CC-MAIN-2021-17 | refinedweb | 416 | 54.83 |
Developing Your First Compiled JavaFX Script Program - CompiledHelloJavaFX
As I said in this post a couple of days ago, the next JavaFX Puzzler will be posted at 18:00 GMT (1:00 pm EST) on Wednesday, November 28, 2007. It will be a compiled JavaFX Script puzzler, so you'll need to build the JavaFX Script Compiler in order to participate in the Puzzler.
Important Note: If you'd like to save some time and disk space by downloading the latest build of the JavaFX Script Compiler rather than building it on your machine, see the next post (Obtaining the OpenJFX Script Compiler Just Got Easier).
To help prepare you for the Puzzler, I'd like to you to develop a very basic "Hello World" style program. After you've built the compiler, please copy and paste the following code into a file with the FX extension. I chose CompiledHelloJavaFX.fx for my filename:
/*
* CompiledHelloJavaFX.fx - A Hello World style compiled JavaFX Script program
*
* Developed 2007 by James L. Weaver (jim.weaver at lat-inc.com)
*/
import java.lang.System;
class CompiledHelloJavaFX {
attribute textToPrint:String = "Hello Compiled JavaFX Script!";
}
var chjfx =
CompiledHelloJavaFX {
};
System.out.println(chjfx.textToPrint);
This simple compiled JavaFX Script program is almost identical to what it would be if it were written in interpreted JavaFX Script. The difference (attribute initialization) is noted in the Converting to the New Syntax document that I've referred you to previously.
To compile the program, set your PATH environment variable to the dist/bin directory of where you installed the JavaFX compiler. Then execute the proper javafxc command file for your operating system, passing your FX file as an argument. On my Windows machine, I navigate to the directory in which my FX file is located, and enter the following from the command prompt:
javafxc CompiledHelloJavaFX.fx
To run your program, set the CLASSPATH environment variable to the javafxc.jar file in the dist directory as well as the current directory, and pass in the name of the JavaFX Script program without the FX extension. On my Windows machine, I enter the following from the command prompt:
java -cp C:\openjfx-compiler\dist\javafxc.jar;. CompiledHelloJavaFX
Note: If there is a package statement in your JavaFX Script program, then navigate to the directory in which the top node of the package is located. See the Develop and Run Your First JavaFX Script Program in the Next Few Minutes post for a discussion on the package statement.
For a review, or to catch up if you're new to this Learn JavaFX blog, see the posts in the JavaFX Script category to learn the syntax of interpreted JavaFX Script programs. As I've noted before, JavaFX Script is migrating from being an interpreted language to becoming a compiled language. When compiled JavaFX Script is mature it will eventually replace interpreted JavaFX Script. Please post any questions that you have as you build the JavaFX Script Compiler, and as you compile and run this example.
Enjoy,
Jim Weaver
JavaFX Script: Dynamic Java Scripting for Rich Internet/Client-side Applications
Immediate eBook (PDF) download available at the book's Apress site | http://learnjavafx.typepad.com/weblog/2007/11/developing-your.html | crawl-002 | refinedweb | 526 | 52.9 |
Take the following code:
public class phptest{
public static String testString = "look for ^ in this string";
public static void main(String[] args)
{
System.out.println("Found ^ at: "+testString.indexOf('^'));
}
}
Which when run in Java produces this output:
Found ^ at: 9
Now, consider the following php code:
<?php
$phptest = new Java("phptest");
printf("Found ^ at: %d\n", $phptest->$testString->indexOf('^'));
?>
This *almost* works, and produces the following output:
X-Powered-By: PHP/4.0.6
Content-type: text/html
<br>
<b>Warning</b>: java.lang.NoSuchMethodException: indexof in <b>phptest.php</b> on line <b>5</b><br>
Found ^ at: 0
As you can see, the indexof method has been lowercased from indexOf. This is a problem in the case sensitive environment of Java.
Please test with PHP 4.1.1+JDK 1.2 and report the result back
Please do not forget updating PHP version. Thanks.
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
Because php is case insensitive this is the reason for this bug.. Can't do anything about it unless php goes case sensitive. | https://bugs.php.net/bug.php?id=13276&edit=3 | CC-MAIN-2022-05 | refinedweb | 226 | 58.38 |
Problem Statement
In the “Print all Possible Ways to Break a String in Bracket Form” problem, we are given a string “s”. Find all possible ways to break the given string in bracket form. Enclose all substrings within brackets ().
Input Format
The first and only one line containing a string “s”.
Output Format
Print all possible ways to break the given string in bracket form. Every line contains only one string.
Constraints
- 1<=|s|<=10^3
- s[i] must be a lowercase English letter
Example
abcd
(a)(b)(c)(d)
(a)(b)(cd)
(a)(bc)(d)
(ab)(c)(d)
(ab)(cd)
(a)(bcd)
(abc)(d)
(abcd)
Algorithm
Here we use recursion to solve this problem. We maintain two parameters: the index of the next character to be processed and the output string so far. Now, start from the index of the next character to be processed. Append the substring formed by the unprocessed string to the output string and recurse on the remaining string until we process the whole string. We use std::substr to form the output string. substr(pos, n) returns a substring of length n that starts at position pos of the current string.
- We start from the index of the next character to be processed.
- Append substring formed by unprocessed string to the output string and recur on remaining until we process the entire string.
- We use substr(pos, n) to form the output string; this returns the substring of length n that starts at position pos of the current string.
Implementation
C++ Program to Print all Possible Ways to Break a String in Bracket Form
#include <bits/stdc++.h>
using namespace std;

void find_next(string s, int in, string t)
{
    if(in == s.length())
    {
        cout << t << endl;
    }
    for(int i = in; i < s.length(); i++)
    {
        string temp = t;
        temp += "(";
        temp += s.substr(in, i+1-in);
        temp += ")";
        find_next(s, i+1, temp);
    }
}

int main()
{
    string s;
    cin >> s;
    find_next(s, 0, "");
    return 0;
}
Java Program to Print all Possible Ways to Break a String in Bracket Form
import java.util.Scanner;

class sum
{
    public static void find_next(String s, int in, String t)
    {
        if(in == s.length())
        {
            System.out.println(t);
        }
        for(int i = in; i < s.length(); i++)
        {
            String temp = t;
            temp += "(";
            temp += s.substring(in, i+1);
            temp += ")";
            find_next(s, i+1, temp);
        }
    }

    public static void main(String[] args)
    {
        Scanner sr = new Scanner(System.in);
        String s = sr.next();
        find_next(s, 0, "");
    }
}
tutcup
(t)(u)(t)(c)(u)(p)
(t)(u)(t)(c)(up)
(t)(u)(t)(cu)(p)
(t)(u)(t)(cup)
(t)(u)(tc)(u)(p)
(t)(u)(tc)(up)
(t)(u)(tcu)(p)
(t)(u)(tcup)
(t)(ut)(c)(u)(p)
(t)(ut)(c)(up)
(t)(ut)(cu)(p)
(t)(ut)(cup)
(t)(utc)(u)(p)
(t)(utc)(up)
(t)(utcu)(p)
(t)(utcup)
(tu)(t)(c)(u)(p)
(tu)(t)(c)(up)
(tu)(t)(cu)(p)
(tu)(t)(cup)
(tu)(tc)(u)(p)
(tu)(tc)(up)
(tu)(tcu)(p)
(tu)(tcup)
(tut)(c)(u)(p)
(tut)(c)(up)
(tut)(cu)(p)
(tut)(cup)
(tutc)(u)(p)
(tutc)(up)
(tutcu)(p)
(tutcup)
Complexity Analysis to Print all Possible Ways to Break a String in Bracket Form
Time Complexity
O(n · 2^n), where n is the size of the string “s”: there are 2^(n-1) ways to break the string, and printing each one takes O(n).
Space Complexity
O(n^2), where n is the size of the string: the recursion goes up to n levels deep and each level keeps its own copy of the output string built so far. | https://www.tutorialcup.com/interview/string/print-all-possible-ways-to-break-a-string-in-bracket-form.htm | CC-MAIN-2021-49 | refinedweb | 573 | 73.17
Introduction
Hello! I'm starting the “Good to know” series.
I'm so excited to show you a very young but simple, fast, safe and compiled programming language called V (or vlang, for Google Search bots).
Objectives of the article
- Story of V;
- Hello World;
- Main features of language;
- How to install and update;
What's "V" mean?
No, Alexander Medvednikov (the author of the V programming language) is not a fan of the "V for Vendetta" movie or of Vue.js! But it's a very interesting story:
Initially the language had the same name as the product it was created for: Volt. The extension was ".v", I didn't want to mess up git history, so I decided to name it V :)
It's a simple name that reflects the simplicity of the language, and it's easy to pronounce for everyone in the world.
— Alexander Medvednikov
The V's "Hello World"!
// hello_world.v
fn main() {
    w := 'World'
    println('Hello $w!')
}
Clear and simple, huh? What if I told you that this code can be written in an even shorter form? If your program is only a single file (all code in one file), V allows you to drop fn main() {...}. Like this:
// hello_world.v
w := 'World'
println('Hello $w!')
Yes, this is valid V code too!
Main features
Please note: this article was written when the V version was 0.1.24.
V is written in V
The entire language and its standard library are less than 1 MB and can be built in less than 0.6 seconds.
V compiles between ≈100k and 1.2 million lines of code per second per CPU core (without hardware optimization).
As fast as C
Minimal amount of allocations, plus built-in serialization without runtime reflection. Compiles to native binaries without any dependencies.
Is V still fast? Take a look at the compilation speed monitoring table.
Safety
- NO 👎
- null (nil, None, ...)
- global variables
- variable shadowing
- undefined values + behavior
- YES 👍
- bounds checking
- option/result types
- generics
- By default 👌
- immutable variables + structs
- pure functions
Similar to other languages
If you work with C, Go, Rust or JavaScript, you're ready to write code in V! Don't trust me? Read the program below and answer "what does each line do?":
import time
import http

fn main() {
    resp := http.get('') or {
        println('failed to fetch data from the server')
        return
    }
    t := time.unix(resp.text.int())
    println(t.format())
}
This program goes to an external HTTP server and returns a UNIX timestamp, or prints "failed to fetch data from the server". OK! We set the variable t to the date and time normalized by the standard V library time, and print the result.
Simple to read, easy to write and can be learned in less than an hour!
Build-in package manager
The V Package Manager (vpm) is a package management tool similar to NPM for Node.js, Cargo for Rust, Go Modules and many more.
$ v install [module]
Cross-platform UI library
UI is a cross-platform UI toolkit written in V for Windows, macOS, Linux, and soon Android, iOS and the web (JS/WASM).
V UI uses native widgets on Windows and macOS, on all other platforms the widgets are drawn by V UI.
Plugins for V syntax on popular code editors
Okay, time to try V on your computer!
- First, go to the console and clone the V repository:
$ git clone
- Next, go to the v folder and run make:
$ cd v && make
On Windows, make means running make.bat, so make sure you use cmd.exe.
- Third, let's create a symlink for it:
$ sudo ./v symlink
- That's all! 😉 V was installed to /usr/local/bin/v and is available to call as v.
Update V
To update V to the latest version, simply type in the console:
$ v up
More about V language
- GitHub repository —
- Official docs —
Fresh news
- Twitter —
- Telegram — EN, RU, IT, ZH
Exercises
- Find the comparison to other languages on the official V web site;
- Go to the official V Playground and write some code;
- Install V on your computer, go to the example folder in the GitHub repository and run "Game of life";
Photo by Annie Spratt
P.S.
If you want more — write a comment below & follow me. Thx! 😘
Discussion (12)
If Go and Rust had a baby it would be this language. Grandpa C should be proud.
Haha 😂
The most important thing about vlang, is it has (it should) only "One way to do thing", so the produced code is predictable, hence the possibility of translation from/to C (or possible C++ in the future).
I’ve seen a couple other articles about it and am intrigued. It seems to borrow heavily from golang and address several long-standing gripes: null, generics, etc.
They talk about memory safety and management akin to rust, seemingly without the complication of lifetimes. That’s something I’ve wanted to learn more about.
No aliens. :(
Looks interesting. I likely don't have a use for it at the moment, but I like some of the features/goals listed in the introduction.
Yes, V is very interesting! When I first read their site, I was pleasantly surprised by everything that was described. No to nil, yes to generics and many more 🎁
Then I tried to build some app, but decided not to rush and postponed this article for a couple of months... but after the release of the V UI kit, I understood: the V language is not going to be someone else's "pet project"! He's growing!
And I need to help him with this... 😅
Thanks for introducing
Thanks for the introduction
You're welcome 🙂
Really like the idea, and its simplicity. I can't wrap my head around writing code without any form of classes.
Good to know :) thanks.
Yep, this is the name of the article series. Hope it helps you learn something new! 🙂 | https://practicaldev-herokuapp-com.global.ssl.fastly.net/koddr/good-to-know-the-v-programming-language-k5b | CC-MAIN-2021-25 | refinedweb | 948 | 73.47
I have a question on web service deployment as it pertains to multi-client SAP systems.
I created and deployed a web service in the client that is used for the development of client-independent objects. There are no issues when calling the web service following deployment.
I am trying to use the same web service on the same SAP system, but on a different client. What is the proper way for accomplishing this? As of right now, the only way I've been able to do this is to open up the desired client and deploy the service again. I tried to take a transport and copy it to the other client using SCC1, but it failed since all the objects referred to client-independent objects.
Below is the URL of the WSDL that was generated via SOAMANAGER.
Client 120 is the client used for development and is where the web service was created and deployed. I would like to use the same web service, but in a different client.
I tried changing the logon parameters of the web service using SICF, but the invocation failed. It only works if the client mentioned in the logon data matches the client specified in the URL.
I am assuming that SAP builds the namespace of the web service using the client specified in the above URL. If this is the case, then I will have no choice but to deploy this service manually in the other client.
Am I doing something wrong, or do I have to do the same deployment in the other client?
Regards,
Vince Castello | https://answers.sap.com/questions/7105491/question-concerning-web-service-deployment.html | CC-MAIN-2021-49 | refinedweb | 268 | 71.65 |
Query Parameters are part of the Query String - a section of the URL that contains key-value pairs of parameters. Typically, parameters are sent alongside GET requests to further specify filters on the operation:
The parameters are defined after the ? character, and each key-value pair is separated with a &. Spaces are represented as %20 and can also be represented as +. These map to a key-value set of:
name=John
location=Miami
It's easy to manipulate the URL with JavaScript - so more often than not, query parameters are added as filters to searches. Additionally, by adjusting the content on the screen based on reproducible parameters, instead of the body of a request, results are shareable between users just by sending a link with the parameters! For instance, on AirBnB - if you're fully flexible on the location, dates and wish for an auto-suggest, a click of a button leads you to:
There are a few parameters here, such as refinement_paths, date_picker_type, search_mode, and search_type, each with a value.
In this short guide, we'll take a look at how to get a GET Request's Query Parameters in Flask.
Get Query Parameters in Flask
from flask import Flask, request

# ...

@app.route('/search', methods=['GET'])
def search():
    args = request.args

    return args
The request.args field is an ImmutableMultiDict:
print(type(args)) # <class 'werkzeug.datastructures.ImmutableMultiDict'>
It can easily be converted into a regular dictionary via:
print(type(args.to_dict())) # <class 'dict'>
Additionally, you can look up a specific key in the dictionary via the get() method; it returns None when the key isn't present in the argument list:
print(args.get("name")) # John
You can additionally cast the value to a different type, such as int or str, while getting it. You can also set a default value if the value isn't present already. For instance, a name parameter will probably be a string, but the price parameter might be an integer:
args.get("name", default="", type=str)
args.get("price", default=0, type=int)
If you search for a non-existent key - a None is returned. This way, you can check whether a query parameter is missing, and act accordingly - assigning a default value or just not using it.
Let's send a GET request to the endpoint with a name and location:
$ curl "localhost:5000/search?name=John&location=Miami"
This results in:
{"location":"Miami","name":"John"}
Check if Query Parameters are Not None
When operating on parameters, you'll typically want to check if they're None and act accordingly. This can thankfully easily be done by getting an expected key and checking if it's present in the dictionary!
Let's create a mock database - just a dictionary of users and their locations. Then, based on the parameters passed in the URL, we'll filter this dictionary and return the matching users that fit the criteria defined by the parameters:
from flask import Flask, request

# ...

db_users = {
    "John" : "Miami",
    "David" : "Miami",
    "Jane" : "London",
    "Gabriella" : "Paris",
    "Tanaka" : "Tokyo"
}

@app.route('/search', methods=['GET'])
def search():
    args = request.args

    name = args.get('name')
    location = args.get('location')

    # result = db_users
    if None not in (name, location):
        result = {key: value for key, value in db_users.items() if key == name and value == location}
    elif name is not None:
        result = {key: value for key, value in db_users.items() if key == name}
    elif location is not None:
        result = {key: value for key, value in db_users.items() if value == location}

    return result
Here - we extract the name and location from the parameter list. If none are present - you may wish to return all of the users, or none at all. If you'd wish to return all of them - uncomment the result = db_users line. (As written, if both parameters are missing, the function raises an error, because result is never assigned.)
If both the name and location are present, we filter the db_users by both parameters. If only one is present - we filter it only using the present parameter.
Now, if we send a GET Request with both or a single parameter, we'll be greeted with:
$ curl "localhost:5000/search?name=John&location=Miami"
{"John":"Miami"}
$ curl "localhost:5000/search?name=John"
{"John":"Miami"}
$ curl "localhost:5000/search?location=Miami"
{"David":"Miami","John":"Miami"}
Two people are located in Miami, so just using the single search parameter nets us two users. There's only one John in the dictionary, so only John is returned for the first two queries.
Conclusion
In this guide, we've taken a look at how to get the query parameters of an HTTP GET Request in Flask.
We've also taken a look at how to check whether the parameters are None and how to handle the lack thereof with a mock database. | https://www.codevelop.art/get-request-query-parameters-with-flask.html | CC-MAIN-2022-40 | refinedweb | 784 | 55.34
There are several categories of operators in C – sign, assignment, arithmetic, relational, boolean, bitwise and more. Let's take a look at them!
At this point of the tutorial, we need to cover the most basic operators – sign, assignment and arithmetic. Later, when we talk about conditional constructions we will talk about relational and boolean operations, too.
Operands are the arguments that the operators accept. Different operators accept a different number of operands – 1, 2 or 3. For instance, the sum operator + accepts two operands. They are the numbers that will be summed:
8 + 12
Here, the digit 8 is the left argument, because it is on the left of the operator and 12 is the right.
The sign operators change a number value to negative or positive. By default all numbers are positive, so the plus sign is not used in practice as a sign operator.
We use the - to indicate that a given value is negative. Usually we don't really think of it as an operator, but that's exactly what it is.
Minus is one of the few unary operators in C. Unary means that it takes only one operand – the number. Here are several obvious examples:
-1, -15.04f, -20.55
2 + 2 = 4, not a big deal.
We cannot put two operators next to each other. If for some reason you decide to sum negative numbers, use parentheses to separate the operators:
float total = sum + (-5);
Of course normally we will not use this form, but just use the next...
2 - 2 = 0
No surprise here, either. The rule for parenthesis remains if you want to do something weird:
-5 - (-4-(-3)) = ? This is your first homework task.
In C, the symbol used for multiplication is the asterisk *.
3 * 3 = 9
Nothing fancy to add here.
Don't skip this! The division operator is the forward slash.
4 / 2 = 2
BUT!
5 / 2 = 2, too.
If all operands of the division are integers, the result will always be an integer. If you want to get a fractional result, at least one of the arguments needs to be a fraction:
5.0 / 2 = 2.5, also 5 / 2.0 = 2.5, (double) 5 / 2 = 2.5 and 5 / (float) 2 = 2.5
To take the remainder of a division in C, use the % operator. It accepts two arguments and returns only the remainder of their division.
5 % 2 = 1
4 % 2 = 0
This operation can be performed only with integer numbers.
In programming very often we need to increment or decrement a variable by 1. For this reason there are two operators that make it even easier to do that:
They both accept only one operand, on the left or on the right. Here are examples:
i++;
--count;
When the operator is used before the variable, it is a prefix. When it is after the name of the variable it is a suffix.
The prefix increment/decrement changes the variable before its value is used in the current statement.
int count = 5;
printf("The total count is %d", ++count);
This will print “The total count is 6”.
When used as suffix, the operators change the variable after its value is used in the current statement.
char letter = 'G';
printf("Your letter is %c", letter--);
Two things: first, the program will print "Your letter is G", because with the suffix form the original value is used before the change. Second, after this statement letter will contain 'F'; the increment and decrement operators also work with char variables, because characters are in fact small integer numbers.
You have already seen the C operator for value assignment: =. It accepts two arguments, left and right. Sometimes we refer to them as the l-value and the r-value. The left operand must be modifiable, because it will accept the value of the right side. The right side will not change its value.
Let's create an int variable and assign it with the value of the constant 1.
int recordId = 1;
We can also assign the result of a calculation:
float average = sum / count;
Since the right side is not a constant, first it must be calculated and then the result will be assigned to the left side. Of course, at this moment the two variables sum and count must be initialized with correct values.
Sometimes we just need to modify the value of a variable, by adding to it, dividing it etc. In these cases we will write something like:
count = count + 5;
average = average / 2;
In this case we write the name of the same variable two times. For such situations we can use the short version of the assignment operator:
count += 5; does exactly the same thing as count = count + 5;
average /= 2; does the same as average = average / 2;
This rule works for any of the operators +, -, /, *, %, >>, <<, |, &, ^. The last four operators are called bitwise operators. They change the numbers on bit level and we will look at them later.
Write a program that asks the user for the fuel usage of three trips and then finds the average. Find the average both as an int and as a float.
#include <stdio.h>

int main(void)
{
    int trip1, trip2, trip3, sum, averageInt;
    float averageFloat;
    sum = 0;

    printf("Input the fuel usage for the first trip:");
    scanf("%d", &trip1);
    sum += trip1;

    printf("Input the fuel usage for the second trip:");
    scanf("%d", &trip2);
    sum += trip2;

    printf("Input the fuel usage for the third trip:");
    scanf("%d", &trip3);
    sum += trip3;

    averageInt = sum / 3;
    averageFloat = (float) sum / 3;

    printf("The average fuel consumption is %.2f\n", averageFloat);
    printf("The average fuel consumption, rounded down is %d\n", averageInt);

    return 0;
}
We use the relational operators in C to compare values. We can check for equality or if a given value is greater or smaller than the other.
All these operations give a result of logical evaluation. Since C does not have a boolean data type, the result is an integer number. A zero means that the condition that we checked is false. Any result different from 0 is true, and the most common truth value is 1.
The operator to check for equality is ==. It is the = sign doubled. Beginners often forget that and try to compare with a single equals symbol, which is the operator for assignment.
int equals = 5 == 5;
printf("Is 5 equal to 5? : %d", equals);
Let's see how this works. First, the comparison on the right side is done. The numbers are equal, so the operation returns 1. This result is assigned to the variable equals and then we print that result.
int notEquals = 5 != 5;
printf("Are the numbers not equal? : %d", notEquals);
printf("Second check : %d", 3 != 2);
The first printf will print 0 and the second will output 1.
Less than ( < ) will return 1 if the left operand is less than the right.
int lessThan = 5 < 9;
printf("is 5 less than 9? : %d", lessThan);
The less than or equal ( <= ) to operator works like the previous, but it also returns true when the two operands are equal.
int check1 = 5 <= 6;
int check2 = 5 <= 5;
Both these comparisons return 1("true").
No surprises here:
printf("%d", 5 > 6);
will print 0.
printf("%d %d %d", 5 >= 4, 5 >= 5, 5 >= 6);
will print 1 1 0. | http://www.c-programming-simple-steps.com/operators-in-c.html | CC-MAIN-2017-39 | refinedweb | 1,176 | 74.49 |
I am doing a project in class for computer science. I have to make a hangman game that consists of the following: Get the words from a file and pull in the next word after each game, display the word as dashes for the user to guess, prompt the user to guess a letter, they get 6 incorrect guesses, gather all the letters the user guess and display them throughout the game, replace the correct guess letters with the dashes, then lastly display if they won or lose and ask if the user wants to play again.
I got stuck at the dashes part: the correct number of dashes for the letters in the word is not coming up, it keeps showing one dash. The rest I do not know how to do. Can someone please help?
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;
import java.util.regex.Pattern;

public class Lab3 {
    /**
     * @param args
     * @throws FileNotFoundException
     */
    public static void main(String[] args) throws FileNotFoundException {
        //creating scanners
        Scanner input = new Scanner(System.in);
        Scanner game = new Scanner(new File("c:/words.txt"));
        String wordBlanks = " ";

        //welcoming user and giving description of the game
        System.out.println("Welcome to a game of Hangman!");
        System.out.println("Directions: Try to guess the word one letter at a time. "
                + "\r\t if you get 6 wrong letters game is over and you have the choice to play again.");

        //making a while loop to display each word one game at a time like "_ _ _ ..."
        while (game.hasNext()) {
            String word = game.next();
            System.out.println();
            System.out.println("Let's begin...");

            //conducting for loop for each word being played with
            for (int i = 0; i < word.length(); i++) {
                System.out.print("_ ");

                //prompting user to guess a letter
                System.out.println();
                System.out.println("Guess a Letter: ");

                //checking each guess with the letters inside the word
                char guess = input.next().charAt(i);
                if (guess == word.charAt(i)) {
                    System.out.println();
                } else if (guess != word.charAt(i)) {
                    for (int j = 0; j < 6; j++) {
                    }
                }
            }
        }
    }
}
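One way to approach the dash display, sketched here with a hypothetical helper rather than a fix of the code above: keep a string of the guessed letters, and rebuild the masked word from it each turn, printing one underscore per not-yet-guessed letter.

```java
public class HangmanSketch {

    // Build the display string for one word: every letter that has been
    // guessed is shown, everything else becomes an underscore.
    static String maskWord(String word, String guessedLetters) {
        StringBuilder shown = new StringBuilder();
        for (int i = 0; i < word.length(); i++) {
            char c = word.charAt(i);
            shown.append(guessedLetters.indexOf(c) >= 0 ? c : '_');
        }
        return shown.toString();
    }

    public static void main(String[] args) {
        System.out.println(maskWord("hangman", ""));    // _______
        System.out.println(maskWord("hangman", "an"));  // _an__an
    }
}
```

Calling maskWord once per turn (with the growing guess string) also gives the "gather all the letters the user guessed" behaviour the assignment asks for.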
Scanner is skipping nextLine() after using next() or nextFoo()
I am using the Scanner methods nextInt() and nextLine() for reading input, but nextLine() seems to be skipped. I tested my application and it looks like the problem lies in using input.nextInt(). If I delete it, then both string1 = input.nextLine() and string2 = input.nextLine() are executed as I want them to be.
use input.nextLine(); after your nextInt() function
for example:-
input.nextInt();
input.nextLine();
and then run your code
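To make the suggestion concrete, here is a small self-contained sketch (the class name, helper method and sample input are illustrative): nextInt() consumes only the number, so the extra nextLine() eats the newline left behind before the real line is read.

```java
import java.util.Scanner;

public class ScannerFix {
    // Reads an int followed by a whole line, consuming the leftover
    // newline that nextInt() leaves in the buffer.
    static String readIntThenLine(Scanner input) {
        int number = input.nextInt();
        input.nextLine();               // eat the rest of the current line
        String line = input.nextLine(); // now this really is the next line
        return number + "/" + line;
    }

    public static void main(String[] args) {
        // A Scanner over a String stands in for System.in here.
        Scanner demo = new Scanner("42\nhello world\n");
        System.out.println(readIntThenLine(demo));  // 42/hello world
    }
}
```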
Initiates the copy of an AMI from the specified source region to the current region.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
copy-image
[--client-token <value>]
[--description <value>]
[--encrypted | --no-encrypted]
[--kms-key-id <value>]
--name <value>
--source-image-id <value>
--source-region <value>
[--dry-run | --no-dry-run]
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
--client-token (string)
Unique, case-sensitive identifier you provide to ensure idempotency of the request. For more information, see How to Ensure Idempotency in the Amazon Elastic Compute Cloud User Guide .
--description (string)
A description for the new AMI in the destination region.
--encrypted | --no-encrypted (boolean)
Specifies whether the destination snapshots of the copied image should be encrypted. You can encrypt a copy of an unencrypted snapshot, but you cannot create an unencrypted copy of an encrypted snapshot.
--kms-key-id (string)
An identifier for the AWS Key Management Service (AWS KMS) customer master key (CMK) to use when encrypting the snapshots of an image. If a KmsKeyId is specified, the Encrypted flag must also be set.
The CMK identifier may be provided in any of the following formats:
- Key ID
- Key alias, in the form ``alias/ExampleAlias ``
- ARN using key ID. The ID ARN contains the arn:aws:kms namespace, followed by the region of the CMK, the AWS account ID of the CMK owner, the key namespace, and then the CMK ID.
- ARN using key alias. The alias ARN contains the arn:aws:kms namespace, followed by the region of the CMK, the AWS account ID of the CMK owner, the alias namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1 :012345678910 :alias/ExampleAlias .
AWS parses KmsKeyId asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the region that the snapshot is being copied to.
--name (string)
The name of the new AMI in the destination region.
--source-image-id (string)
The ID of the AMI to copy.
--source-region (string)
The name of the region that contains the AMI to copy.
- copy an AMI to another region
This example copies the specified AMI from the us-east-1 region to the ap-northeast-1 region.
Command:
aws ec2 copy-image --source-image-id ami-5731123e --source-region us-east-1 --region ap-northeast-1 --name "My server"
Output:
{ "ImageId": "ami-438bea42" } | https://docs.aws.amazon.com/cli/latest/reference/ec2/copy-image.html | CC-MAIN-2019-09 | refinedweb | 322 | 56.76 |
Hello :)
This is my first post on daniweb forum and I hope this site will help me improve. I'm a first year college student in computer science, and I've only recently started C++. This means that code posted by me might seem very lousy and /or unprofessional.
I have got the program to work, however, there is 1 issue I need help with.
Purpose of the program: Sorting an array of n variables in ascending order, by using cocktail sorting algorithm.
How my program works:
1. User enters the n value
2. array with n elements is generated
3. array is shown on the screen
4. array is sorted
5. sorted array is shown to the user.
The problem: when the already-sorted array is shown on the screen, every element appears one index later than it should; the last array element is not shown, and 0 is shown in place of the first element.
Example:
Array1 = 1 7 5 6 3 4 2
Array2 = 0 1 2 3 4 5 6
#include <conio.h>
#include <iostream.h>
#include <stdlib.h>

void swap(int& a, int& b);
int input();
void generation();
void show();

int i, j, a[50], n;

void main(){
    randomize();
    input();
    generation();
    show();
    for(i=0; i<n; i++){
        if(a[i] > a[i+1]){
            swap(a[i], a[i+1]);
            for(j=i; j>0; j--){
                if(a[j-1] > a[j]){
                    swap(a[j-1], a[j]);
                }else{break;}
            }
        }
    }
    show();
    getch();
}

int input(){
    do{
        cout<<"Please enter the number of elements (1-30): ";
        cin>>n;
    }while(n<1 || n>30);
    return n;
}

void generation(){
    for(i=0;i<n;i++){
        a[i]=random(99)+1;
    }
}

void show(){
    for(i=0;i<n;i++){
        cout<<a[i]<<"\t";
    }
    cout<<"\n\n\n";
}

void swap(int& a, int& b){
    int pag;
    pag=a;
    a=b;
    b=pag;
}
Well, I bumped into an interesting C++ problem last night while I was aimlessly surfing the Internet, and, of course, it caught my attention. It said something like this:
"From a file it's read a text that contains a phrase, on a line. The program rearranges the letters in each phrase, in alphabetical order, keeping the words' place. only letters from 'A' to 'Z' are affected. The other chars remain untouched. Print on the screen the result."
Too bad I don't remember the link, but I'm sure this was the requirement.
It looked quite simple, but, what I got from my ideas was..uhh NULL...
I thought I could find the number of words and then somehow, using bubble sort, put them in alphabetical order. Of course, I got nothing but a broken mind :|
If you have any ideas, I'd like to know how this thing it's solved :)
PS: I'm posting what I tried to make by myself, too :P
#include <iostream.h>
#include <string.h>

int main () {
    char text[100];
    int i, ok;
    cout<<"text ";
    cin.get (text, 100);
    do {
        ok=0;
        for (i=0; i<strlen (text)-1; i++)
            if (text[i] > text[i+1]) {
                char x=text[i];
                text[i]=text[i+1];
                text[i]=x;
                ok=1;
            }
    } while (ok==1);
    for (i=0; i<strlen (text); i++)
        cout<<text[i]<<" ";
    cout<<endl;
    return 0;
}
Everything posted by icekomo
liquid stage: Targeting a new X position?
icekomo posted a topic in TransformManager (Flash)

Hello, I have a question, and it may be a simple one as I am no AS3 expert. I have an object that is hooked up to a mouseOver event. When the mouse rolls over this object it moves it, say, 10 pixels to the left, and when it rolls off, it moves that object back. The problem I ran into is that when I resize the window, it's still using the old x coords. So my question is: what can I do to get the new x position of that object each time the mouseEvent is called? Do I use the update function for that info? And if so, how exactly? Thanks!!
accessing cue points
icekomo posted a topic in Loading (Flash)I can't seem to figure out how to access cue points in a flv being called in from xml. I feel like I'm sure close to figuring this one out, but seem to be stuck in this area. I am able trace when the cue point hits in the video but i'm not sure how to advance the the movie to another cue point. Here is the method that the cue point triggers. private function rabiVideo(event:LoaderEvent):void { trace("cue point"); if (event.data.name=="Loop") { trace("Loop"); //_currentVideo.seekToNavCuePoint("NoStart"); //_currentVideo.play(); } } the trace loop works, but not the seekToNavCuePoint.... the commented out code isn't working.. any suggestions? Thanks!
menu tweening..tweening the whole _mc
icekomo posted a topic in GSAP (Flash)Hello, I'm missing something from my code but for the life of me can't seem to figure it out. I created a class to handle my button actions, MouseOver,Off and will also handle the click function. I got it working but I'm lost as to how I target just the button that I am rolling on to and not the movieClip the button lives in. Here is my code: package classes.menu { import flash.display.*; import flash.text.TextField; import flash.events.*; import com.greensock.*; import com.greensock.easing.*; public class MenuButtons extends MovieClip { public function MenuButtons() { this.mouseChildren = false; this.buttonMode = true; addButtonTitles(); } public function addButtonTitles():void { trace("menu buttons"); this.addEventListener(MouseEvent.MOUSE_OVER, rollOverBtn); homeBtn.white.txt.text = "HOME"; homeBtn.blue.txt.text = "HOME"; } public function rollOverBtn(event:MouseEvent):void { TweenLite.to(event.target,1,{alpha:.5}); this.addEventListener(MouseEvent.MOUSE_OUT, rollOffBtn); trace("rollOver"); } public function rollOffBtn(event:MouseEvent):void { TweenLite.to(event.target,1,{alpha:1}); trace("rollOver"); } } } As I said before when I roll over one of the buttons I just want that button to alpha fade, not the whole _mc that it lives int. I tried to use event.target but that didn't seem to do what I was looking for. Any help on this would be very grateful! Thanks
- I get these 2 errors:

1180: Call to a possibly undefined method ImageLoaderVars.
1061: Call to a possibly undefined method prependURLs through a reference with static type com.greensock.loading:ImageLoader.

I thought I had the correct stuff imported.
- could I also add this to direct where all my images for this loader are living? iLoad.prependURLs("images/aSpot/");
- Thanks for your help on this. Where are you getting this from? new ImageLoaderVars()
adding an array of images to my preloader?
icekomo posted a topic in Loading (Flash)

Hello, I have a preloader that I created using LoaderMax. I got the xml and the swf file to load. But now I would like to add an array of images, say 10, to this file, and I'm not sure what the best way to go about this would be. Any helpful hints in the right direction would be great! Here is what I have so far:

public class Preloader extends MovieClip {

    public var queue:LoaderMax = new LoaderMax({name:"mainQueue", onComplete:completeHandler, onError:errorHandler});
    public var progressBar:MovieClip = new progBar_mc();

    public function Preloader():void {
        addChild(progressBar);

        //starts the load for the XML
        queue.append( new XMLLoader("xml/medallion.xml", {name:"myXML", onProgress:progressHandler, onComplete:loadXml}) );

        //starts the load for the swf file, and adds it to this container
        queue.append( new SWFLoader("medallion.swf", {onProgress:progressHandler, estimatedBytes:126000, container:this}) );

        queue.load();
        progressBar.loadText.text = "loading xml";
    }

Thanks!
self loader not working in my project?
icekomo replied to timaging's topic in Loading (Flash)try this.gotoAndStop(2);
tracing css data
icekomo posted a topic in Loading (Flash)

Ok, I'm slowly working my way through this process, but one thing I seem to be stumped on: how do I trace the actual data of a css stylesheet that I loaded? Right now when I trace:

trace(LoaderMax.getContent("myCSS"));

I get returned: [object StyleSheet]. I'm having a tough time figuring out how to apply the style sheet to my text. Thanks
- I should also say, that I'm not getting any error any more, but the text is not being styled either....
- Here is the preloader code that I am using. I altered it a bit so that I have a simple textfield on the stage (myTextField) that gets filled with some xml, and I am trying to apply a style sheet to that (figure start small and go from there), but even that's not working: I get the xml to show up, but it's not styled.

package com.main {

    import flash.display.*;
    import com.greensock.events.LoaderEvent;
    import flash.events.*;
    import flash.display.MovieClip;
    import flash.text.TextField;
    import flash.text.StyleSheet;
    import flash.events.Event;
    import com.greensock.easing.*;
    import com.greensock.*;
    import com.greensock.plugins.*;
    import com.greensock.layout.*;
    import com.greensock.loading.*;
    import com.greensock.loading.display.*;

    public class Preloader extends MovieClip {

        var queue:LoaderMax = new LoaderMax({name:"mainQueue", onComplete:completeHandler, onError:errorHandler});
        var progressBar:MovieClip = new progBar_mc();

        public function Preloader() {
            addChild(progressBar);

            //starts the load for the XML
            queue.append( new XMLLoader("xml/elReyMenus.xml", {name:"myXML", onProgress:progressHandler, onComplete:loadXml}) );
            queue.append( new CSSLoader("xml/styles/styles.css", {name:"myCSS", onProgress:progressHandler}) );

            //starts the load for the swf file, and adds it to this container
            queue.append( new SWFLoader("elRey.swf", {onProgress:progressHandler, estimatedBytes:126000, container:this}) );

            //loads the data
            queue.load();
            progressBar.loadText.text = "loading xml";
        }

        public function loadXml(event:LoaderEvent):void {
            progressBar.loadText.text = "loading data";
        }

        public function progressHandler(event:LoaderEvent):void {
            //trace("progress: " + event.target.progress);
            var perc:Number = event.target.bytesLoaded / event.target.bytesTotal;
            progressBar.prog_txt.text = Math.ceil(perc * 100) + "%".toString();
            progressBar.loadBar_mc.width = perc * 271;
        }

        public function completeHandler(event:LoaderEvent):void {
            var xmlData:XML = LoaderMax.getContent("myXML");
            myTextField.htmlText = xmlData.Brunch;
            myTextField.styleSheet = LoaderMax.getContent("myCSS");
            trace(event.target + " is complete!");
            TweenLite.to(progressBar, 2, {alpha:0});
        }

        public function errorHandler(event:LoaderEvent):void {
            trace("error occured with " + event.target + ": " + event.text);
        }
    }
}

CSS:

boldtitle {
    font-weight: bold;
    display: inline;
    font-size: 16px;
}
boldText {
    font-weight: bold;
    display: inline;
}
smallText {
    display: inline;
    font-size: 11px;
}

XML:

<boldtitle>Soft Drinks</boldtitle>

<boldText>Agua Frescas</boldText>
Tamarindo
Jamaica
Horchata

<boldText>Jarritos</boldText>
Pineapple
Grapefruit
Mandarin

<boldText>Refrescos</boldText>
Sidral Mundet Apple Soda
Mexican Coca-cola
- That didn't seem to work. With the xml data I had to do this:

var xmlData:XML = LoaderMax.getContent("myXML");
brunchMenu.Menu_mc.content_mc.menuTxt.htmlText = xmlData.Brunch;

I had to typecast the data into XML for it to be able to use it. Do I have to do something similar with the css data? Thanks for your help on this!
- So after more looking into it I think my problem lies here:

var cssData:StyleSheet = LoaderMax.getContent("myCSS");
var sheet:StyleSheet = new StyleSheet();
sheet.parseCSS(cssData.data);
trace(cssData.data);
//trace("css loaded");
brunchMenu.Menu_mc.content_mc.menuTxt.styleSheet = sheet;

As I am not sure how to attach the stylesheet to my text field once loaded... any help would be grateful! Thanks
error while loading css
icekomo posted a topic in Loading (Flash)

Any idea why this line of code here:

queue.append( new CSSLoader("xml/styles/styles.css", {name:"myCSS", onProgress:progressHandler}) );

would throw this error?

TypeError: Error #1034: Type Coercion failed: cannot convert flash.text::StyleSheet@261fd699 to XML.

If I comment this out, it works fine, but I'm trying to get a style sheet loaded into this project. Thanks!
Using loaded xml data from a preloader class file
icekomo posted a topic in Loading (Flash)

Round 2: Ok, so I'm just starting to get into the OOP side of AS3, but as a designer it's hard for me to wrap my head around some of these concepts. I have a site built out already (which I will go back and re-set up as a model-view-controller, but for now I just need to get this last aspect of it working). Here is what I have:

preloader.fla
Preloader.as
main.fla
Main.as

So I have a preloader fla linked up to a Preloader.as class, which is using LoaderMax to load 2 things: the main.fla (main.swf) and an xml file that has data for the main.swf. I need to make sure the xml data loads first, before the main.swf file, which it does. I am able to trace the xml data when it loads, so I know it's been loaded. Here is where I get lost. Once the main swf is loaded, it's using the Main.as file to function as a website. How am I able to use the xml var from the Preloader.as file in the Main.as file, so that I can populate my textFields with the data that my Preloader.as file loaded? I hope this makes sense, and if there is any code I need to post just let me know; I tried to explain it in the most basic way I know how. Thanks
using XML data from the loaderMax file.
icekomo replied to icekomo's topic in Loading (Flash)Thanks but that still didn't fix the problem, I can declare the var out side of the function but now do I get the .as file to see that var in my preloader file? So i load the xml and the swf file into the preloader file, and that swf file has an external .as file where i keep all my code, so I just need to know how to link that .as file to my preloader file so they can share vars.
using XML data from the loaderMax file.
icekomo posted a topic in Loading (Flash)

Hello, I would like to start out by saying that I'm a flash designer and not a developer, so the questions I have may seem simple. This is the basics of what I'm trying to do. I have a preloader file that I am using LoaderMax on, an fla file that contains all my assets for the site, and a .as file that I am using to control all of those assets. I have my preloader file loading in the swf file as well as the xml file. This all is working great, as the swf file shows up and works, and I am able to trace the xml file, so I know that works too. What I can't figure out is how to populate my textFields in my .as file with the xml data I loaded from the preloader. Here is my preloader code:

queue.append( new XMLLoader(".../testMenus.xml", {name:"myXML", estimatedBytes:1200}) );
queue.append( new SWFLoader("test.swf", {name:"contentClip", estimatedBytes:270000, container:this, visible:true}) );

//start loading
queue.load();
//xmlLoader.load();

function progressHandler(event:LoaderEvent):void {
    trace("progress: " + event.target.progress);
}

function completeHandler(event:LoaderEvent):void {
    trace(event.target + " is complete!");
    var data_xml:XML = LoaderMax.getContent("myXML");
    trace(data_xml);
}

function errorHandler(event:LoaderEvent):void {
    trace("error occured with " + event.target + ": " + event.text);
}

My as code that doesn't work, because it can't find the data_xml var used in the preloader:

brunchMenu.Menu_mc.content_mc.menuTxt.htmlText = String(data_xml.Brunch).replace("\n", "");

Thank you in advance for looking at this.
Zipper monad/TravelBTree
From HaskellWiki
TravelBTree is a library based on the Zipper monad which is used for traversing B-trees: trees where each node has an arbitrary number of branches. Read the documentation for the Zipper monad if you haven't already.
1 Definition
data BTree a = Leaf a
             | Branch [BTree a]
             deriving (Show, Eq)

data Cxt a = Top
           | Child { parent :: Cxt a,     -- parent's context
                     lefts  :: [BTree a], -- siblings to the left
                     rights :: [BTree a]  -- siblings to the right
                   }
           deriving (Show, Eq)

type BTreeLoc a = Loc (Cxt a) (BTree a)

type TravelBTree a = Travel (BTreeLoc a) (BTree a)

The BTree type is fairly self-explanatory. A Branch must be given a list of its children.

Cxt is the type used for storing the context of a subtree. A BTreeLoc represents completely a subtree, said subtree's position within the entire tree, and the entire tree itself. See Zipper for an explanation of such concepts.
2 Functions
2.1 Moving around

There are five main functions for stringing together TravelBTree computations:
down   -- moves down to the nth child (0-indexed)
  :: Int -> TravelBTree a

left,  -- moves left a sibling
right, -- moves right a sibling
up,    -- moves to the node's parent
top    -- moves to the top node
  :: TravelTree a

All five return the subtree at the new location. Note that down uses 0-indexed children, i.e. down 0 goes down to the first child. This is consistent with the list-access operator, (!!).
2.2 Mutation

You get the three functions provided by the generic Zipper monad (modifyStruct, getStruct and putStruct), but there's also a load of TravelBTree-specific mutation functions:
insertLeft,  -- insert a tree to the left of the current node
insertRight, -- insert a tree to the right of the current node
insertDown   -- insert a tree as the last child of the current node
  :: BTree a -> TravelBTree a

insertDownAt -- insert a tree as the nth child of the current node
  :: BTree a -> Int -> TravelBTree a

-- delete the current node. If we're the last node of our siblings, move left.
-- If not, move right. If we're an only child move up.
delete :: TravelBTree a

isFirst, -- is the location the first of its siblings?
isRight  -- is the location the last of its siblings?
  :: TreeLoc a -> Bool
The BTreeLoc pointing to the current node is stored as the state in a TravelBTree computation. Thus to call these functions within a do block, use liftM:
do top <- liftM isTop get
   when top $ down 3 >> return ()
3 Examples
Watch this space.
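In the meantime, here is a small illustrative sketch (not taken from the library's documentation; it assumes the Zipper monad library is in scope and that the functions listed above behave as described):

```haskell
-- Hypothetical example tree.
example :: BTree Int
example = Branch [Leaf 1, Branch [Leaf 2, Leaf 3], Leaf 4]

-- Move into the middle Branch, then to its second child (0-indexed),
-- and replace that child with a new leaf.
edit :: TravelBTree Int
edit = do down 1              -- into  Branch [Leaf 2, Leaf 3]
          down 1              -- to    Leaf 3
          putStruct (Leaf 30)

-- Running edit over example with the Zipper monad's runner should yield:
--   Branch [Leaf 1, Branch [Leaf 2, Leaf 30], Leaf 4]
```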
4 Code
The code of this file is quite lengthy, so you can just download it. Alternatively, download the entire zipper library.
Hi,
can_), etc.
It takes (currently) 2 parameters:
1.) Topic that should be displayed
2.) File extension
The rest is already inside the .ini file
So the plugin should execute a line like this:"" "" ""
Everything else is handled inside the wrapper.
It should take the word that is currently under the cursor or if no [a-zA-Z] is found at the current position it should find & take the first letter left from it and use that word instead.
An example ("|" = current cursor position):
Something like this perhaps?
Menu item Tools->New Plugin, and paste this code:
import sublime, sublime_plugin
import os
class HighendCommand(sublime_plugin.TextCommand):
def run(self, edit, command):
word = self.view.substr(self.view.word(self.view.sel()[0].a))
cmd = "\"%s\" \"%s\" \"%s\"" % (command, word, self.view.file_name()[self.view.file_name().rfind(".")+1:])
print "Running command %s" % cmd
os.system(cmd)
Then in your user keybindings:
{ "keys": "/"], "command": "highend", "args": {"command": "/path/to/wrapper.exe"}},
Thank you quarnster!
I've added this line to my user keymap:

{ "keys": ["alt+p"], "command": "highend", "args": {"command": "D:/NavigateCHM.exe"} },
The NavigateCHM.exe is the compiled AutoIt script file. The Navigate.chm contains a MsgBox line at the beginning to show me when it's executed.
The interesting thing is: The .exe is only executed (after pressing alt+p) under "weird circumstances":
This is the line I use (it's in a file named Test.xys)foreach($task, $processTasks, "") { sub "_" . $task; }
Three different cursor positions on this line lead to different results when using alt+p:"|" is the current position
1.foreach($|task, $processTasks, "") { sub "_" . $task; }Console: Running command "D:/NavigateCHM.exe" "task" "xys"I don't get the message box from NavigateCHM.exe
2.foreach|($task, $processTasks, "") { sub "_" . $task; }Console: Running command "D:/NavigateCHM.exe" "foreach" "xys"I don't get the message box from NavigateCHM.exe
3.foreach(|$task, $processTasks, "") { sub "_" . $task; }Console: Running command "D:/NavigateCHM.exe" "($" "xys"NavigateCHM.exe is invoked...
This really doesn't make sense, does it?
P.s.: My last pm hasn't passed the outbox for over 1,5 hours, so I'll continue the discussion in this thread.
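For reference, the "walk left until a letter is found" rule described at the top of the thread is not what view.word() does, which explains the "($" result when the cursor sits on punctuation. A pure-Python sketch of that rule (a hypothetical helper, not part of the plugin above) could look like this:

```python
def word_at(text, pos):
    """Return the alphabetic word at pos; if pos is not on a letter,
    walk left until a letter is found and use that word instead."""
    i = min(pos, len(text) - 1)
    while i > 0 and not text[i].isalpha():
        i -= 1                      # scan left for the nearest letter
    if not text[i].isalpha():
        return ""                   # no letter at or left of pos
    start = i
    while start > 0 and text[start - 1].isalpha():
        start -= 1                  # extend to the word's first letter
    end = i + 1
    while end < len(text) and text[end].isalpha():
        end += 1                    # extend to the word's last letter
    return text[start:end]

# In the plugin, one might call it with the current line's text and the
# cursor's offset into that line, instead of using view.word():
print(word_at("foreach($task, $processTasks", 8))  # -> foreach
```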
I don't know what's wrong, but try this code instead which will print out the program's stdout and stderr which should make it easier to debug if it doesn't work:
import sublime, sublime_plugin
import subprocess
class HighendCommand(sublime_plugin.TextCommand):
def run(self, edit, command):
word = self.view.substr(self.view.word(self.view.sel()[0].a))
args = (command, word, self.view.file_name()[self.view.file_name().rfind(".")+1:])
cmd = "\"%s\" \"%s\" \"%s\"" % args
print "Running command %s" % cmd
p = subprocess.Popen(args, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
stdout,stderr = p.communicate()
print stdout
print stderr
The print stdout / print stderr lines only produce blank lines, so it seems there aren't any errors.
With the new code, NavigateCHM.exe is always started correctly and displays the correct keyword from the associated .chm file.
Thanks a lot for coding this for me, an invaluable help for looking up keywords.
stm32plus 2.1.0
stm32plus version 2.1.0 has now been released and is available from my downloads page. This article will present a brief overview of the following new features.
- LGDP453x TFT driver
- SSD1289 TFT driver
- SSD1963 TFT driver
- ST7783 TFT driver
- AT24C32/64 serial EEPROM support
As mentioned in the banner at the top of this page you will need to ensure that you are using at least version 4.7.0 of gcc. I’m currently working on a driver for one of the big on-chip peripherals and it just couldn’t be done cleanly without real variadic templates so I took the opportunity to migrate all the template ‘feature’ mix-in classes to variadics. I also replaced lots of subclass types that were there as a workaround for the lack of template typedefs with much cleaner template aliases.
I use the free ‘arm-2012.09’ arm-none-eabi gcc release supplied by CodeSourcery (aka. Mentor Graphics) on Windows 7 x64 and Ubuntu Linux and I recommend that you do too. Other gcc toolchains may also work but are not tested. non-gcc compilers will certainly not work.
The installation and usage instructions have not changed since version 2.0.0. Documentation can be found in this previous article.
LGDP453x TFT driver
The LGDP4531/2 is a 320×240 (QVGA) TFT panel from LG. The stm32plus driver for this panel was contributed by Andy Franz and gratefully accepted by myself into this release.
64K and 262K colour modes are supported in landscape and portrait orientations. A full list of driver declarations are:
LGDP453x_Portrait_64K LGDP453x_Landscape_64K LGDP453x_Portrait_262K LGDP453x_Landscape_262K
Andy created a corresponding example demo that you can find in the ‘examples/lgdp453x’ directory.
SSD1289 TFT driver
Experimental support is now provided for the Solomon Systech 1289 QVGA TFT driver with 64K and 262K colours and in landscape and portrait mode. The driver names are:
- SSD1289_Portrait_64K
- SSD1289_Landscape_64K
- SSD1289_Portrait_262K
- SSD1289_Landscape_262K
An example demo program is supplied in the ‘examples/ssd1289’ directory.
I have labelled this driver as experimental because I have not been able to verify that it works with the cheap ebay SSD1289 panel that I have because my cheap panel appears to be hardwired into an interlaced mode.
That is, if you set a display window to cover the whole screen and then fill it with pixels then the pixels will fill the rows in this order: 1,0,3,2,5,4… This makes it impossible to support with my graphics driver.
The offending driver input line is ‘GD’ and of course it’s the only one you can’t set in software using the ‘driver output control (r01h)’ register. Hopefully one of you will have a panel with ‘GD’ set to the non-interlaced mode!
SSD1963 TFT driver
The SSD1963 is another one from Solomon Systech. It’s slightly unusual in that it’s not hard-wired to any particular resolution. Instead it allows you to program it to support any resolution up to 800×480 as long as you know the timings for the panel that you’re going to use.
The stm32plus driver for the SSD1963 calls upon small ‘traits’ classes to supply the timing and size information for the panel being controlled. My test panel is a 4.3″ 480×272 device obtained on ebay and I have supplied traits classes for it. The driver names are:
- SSD1963_480x272_Portrait_262K
- SSD1963_480x272_Landscape_262K
- SSD1963_480x272_Portrait_16M
- SSD1963_480x272_Landscape_16M
I’ve put together a short video of this panel in action connected to the STM32F103.
ST7783 TFT driver
I’ve got a cool new docking expansion board for my STM32F4DISCOVERY that adds some common peripherals to the system including an ethernet PHY, an RS232 socket, an SD cage and of course a QVGA LCD with a touch panel.
The driver IC for this panel is the ST7783 and I am pleased to announce support for it with 64K and 262K colours in portrait and landscape modes. The driver names are:
- ST7783_Portrait_64K
- ST7783_Landscape_64K
- ST7783_Portrait_262K
- ST7783_Landscape_262K
I also put together a short video that shows it in action. The F4 drives this panel very quickly indeed.
AT24C32/64 serial EEPROM support
Unlike some other devices, such as the 8-bit AVRs, the STM32 doesn't have any EEPROM memory included on-chip, although it is possible to emulate it to some extent by reading and writing the internal flash memory at runtime.
If you need real EEPROM support then you have to purchase and wire up an external IC.
The Atmel AT24C32 and AT24C64 are 32/64Kbit serial EEPROM devices that are controllable via an I2C bus. The STM32 I2C on-chip peripheral is ideally suited to communicating with these memories.
stm32plus provides two drivers named ‘AT24C32’ and ‘AT24C64’ to manage communication with these devices. Both drivers inherit from InputStream and OutputStream so you can use all the methods that the stream interfaces provide.
The driver class is templated with the I2C interface that you are going to use to communicate with it. An example declaration might be:
#include "config/stm32plus.h" #include "config/i2c.h" #include "config/eeprom.h" typedef AT24C32< I2C2_Default<I2CTwoByteMasterPollingFeature> > MyEeprom; I2C::Parameters params; MyEeprom eeprom(params);
A full example is included in the ‘examples/i2c_at24c32’ directory.
Changelog
Here’s the changelog for version 2.1.0 in.
Update: bug notification
A bug has come to light that affects optimised builds. It’s too small for me to do a full release so I’ll explain here what you need to do to fix it.
The problem is that the counter used by the MillisecondTimer class should be declared volatile. You can fix it like this:
In stm32plus/include/timing/MillisecondTimer.h change the counter declaration to include ‘volatile’:
public:
    volatile static uint32_t _counter;
Also change it in the source file: stm32plus/src/timing/MillisecondTimer.cpp:
volatile uint32_t MillisecondTimer::_counter; | http://andybrown.me.uk/2013/03/30/stm32plus-2-1-0/ | CC-MAIN-2017-17 | refinedweb | 972 | 54.22 |
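To see why volatile matters here, consider a self-contained sketch of the pattern (the names follow the MillisecondTimer class above, but this is illustrative code, not the actual stm32plus source): an interrupt handler increments the counter, while delay loops poll it. Declaring the counter volatile forces the compiler to reload it from memory on every read, so an optimised busy-wait loop cannot cache the value in a register and spin forever.

```cpp
#include <cstdint>

// Sketch of the pattern: an interrupt bumps _counter once per millisecond,
// while delay loops poll it. 'volatile' makes every read go to memory.
struct MillisecondTimer {
    static volatile uint32_t _counter;

    // In the real driver this would be called from the SysTick interrupt.
    static void onTick() { _counter = _counter + 1; }

    static uint32_t millis() { return _counter; }
};

volatile uint32_t MillisecondTimer::_counter = 0;
```

Without the qualifier, a loop such as `while (millis() - start < timeout) {}` is exactly the kind of code an optimising build will break.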
Welcome to Step 5 of my DCOM tutorial. In this series, I will strip away the mystique, the headache, and the confusion surrounding DCOM. You can pick up the code for any step using the link at the top of each step. There's also an archive (coming soon) of the files for all the steps at the Questions and Answers page (coming soon) for this tutorial. I still recommend that you follow along with me as we go; this way, you can learn while you code. If you ever get lost, the downloads will bring you back up to speed. So far, we have:
- Created the server project, HelloWorldServ.NET, using the ATL Project Wizard.
- Added a COM object, HelloWorld, to the server, to expose our functionality.
- Added a method, SayHello(), to the server, which fires the event the client handles.
We're currently on Step 5 of the tutorial. This is the step where we complete the basic design of the server, as illustrated above in Figure 1. We've already defined and implemented the IHelloWorld::SayHello() method, which still doesn't do what it's supposed to -- say "Hello, World!" back to the client. Now is the time to make it do this. But first, how about a little primer -- or refresher? -- on what Connection Points are, before we begin with Step 5?
Before we plunge in with Step 5 of our tutorial, let's just take a moment for me to rip the shrouds of mystery off of Connection Points. Figure 2 below shows a generic scenario which is true for COM, DCOM, and even function callbacks, for goodness' sake. Figure 3 is almost exactly like Figure 2, but puts the client in place of the "source" and the server in place of the "sink", with the network in between. Figure 3 can also be thought of in reverse:
Connection points come in when you have the following happening:
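Stripped of all the COM machinery, the scenario can be sketched in plain C++ (the names here are illustrative, not the tutorial's generated ATL code): the client hands the server a "sink" interface, the client calls in through SayHello(), and the server calls back out through OnSayHello().

```cpp
#include <string>
#include <vector>

// The "sink": an interface the client implements to receive events.
struct IHelloEvents {
    virtual void OnSayHello(const std::string& hostName) = 0;
    virtual ~IHelloEvents() = default;
};

// A client-side sink that just remembers the last greeting it received.
struct GreetingSink : IHelloEvents {
    std::string last;
    void OnSayHello(const std::string& hostName) override { last = hostName; }
};

// The "source": the server keeps the advised sinks (its connection point)
// and fires the event back through each one.
class HelloSource {
    std::vector<IHelloEvents*> sinks_;
public:
    void Advise(IHelloEvents* sink) { sinks_.push_back(sink); }  // client connects
    void SayHello() {                                            // client calls in...
        for (IHelloEvents* sink : sinks_)
            sink->OnSayHello("myserver");                        // ...server calls back
    }
};
```

The ATL wizardry we use below generates the COM equivalent of HelloSource's bookkeeping for us.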
As you can see, this is a round-trip. A method call goes from the client to the server, and then an event call goes from the server, to the client, as seen in Figure 5. The interface the server uses to call the client back is exposed through a so-called connection point. Now, we're ready to begin Step 5.
Let's plunge in, shall we? Visual C++ .NET 2003 makes it a trivial task to add an event to the source. Just use the Visual C++ Wizards! However, you have to know where to click. To add a method that actually makes our server say 'hello', open Class View, and click the plus sign to expand the HelloWorldServNETLib icon, as shown in Figure 6, below:
HelloWorldServNETLib, and then adding a method to DHelloWorldEvents.
Right-click the DHelloWorldEvents interface icon, point to Add, and then click Add Method, as shown above. When you do so, the Add Method Wizard appears, as shown below in Figure 7:
Adding the OnSayHello() method to the DHelloWorldEvents dispinterface.
We want to call our new method OnSayHello. Leave the Return Type field set to HRESULT, and type OnSayHello in the Method Name box, as shown.
Next, we're going to add a parameter to our method. This will be the name of the computer the server is running on. This will help us make sure -- when testing -- that DCOM is really working and we're in fact running on a different computer than the one on which the client is installed. To do this, complete the following steps:
1. Type BSTR in the Parameter Type box. (BSTR is a special COM-compatible string type.)
2. We'll name the parameter bstrHostName: type bstrHostName in the Parameter Name box, and then click Add.
You may notice another tab in the Wizard dialog box, IDL Attributes. This contains higher-level settings; for our purposes, the settings' default values suit us, so leave the defaults on that tab. Click Finish to close the Wizard, and add the OnSayHello() method to DHelloWorldEvents. Visual C++ then opens the HelloWorldServNET.idl file, which we don't care about editing any more, so this file can be closed.
To check your work, go back to Class View. Click the plus sign to expand the HelloWorldServNETLib icon, and do the same in order to expand the DHelloWorldEvents icon, as shown below in Figure 8:
The OnSayHello() event method.
Now it's time to complete our implementation of this simple COM server. Switch to Solution Explorer. Next, right-click the HelloWorldServNET.idl file, and click Compile. Make sure this step is done.
DHelloWorldEvents, now that it has the OnSayHello() event.
Now, in Class View, right-click the CHelloWorld class, point to Add, and then click Add Connection Point, as shown above in Figure 9. I know we already have a connection point, but doing the process over again -- like we are now -- serves to refresh the project to give us access to fire the OnSayHello() event from the implementation of IHelloWorld::SayHello().
The DHelloWorldEvents connection point.
Once you've chosen the Add Connection Point command, the Implement Connection Point Wizard appears, as shown in Figure 10. Click the > button to move the DHelloWorldEvents dispinterface name from the Source Interfaces to the Implement Connection Points box, and then click Finish. After the wizard is done, there is actually some duplicate code we now must remove. Once the Wizard finishes, the DHelloWorldEvents_CP.h file will open, as shown below in Figure 11. The top of the file will look as shown below:
Remember, back in Step 2 (go ahead and click the Step name if you want a refresher), when we replaced all occurrences of _IHelloWorldEvents with DHelloWorldEvents throughout the project with the new features of Replace?
_IHelloWorldEvents was defined in a file called _IHelloWorldEvents_CP.h, a file also #include'd in HelloWorld.h. So, we have to clean up the mess. To do so, do the following:
1. Remove any duplicate definition of the CProxyDHelloWorldEvents<T> class which you may find. Only one class should be defined in this file. If this is already the case, then skip this step. The correct definition looks like that shown above, in Figure 11.
2. In HelloWorld.h, remove the line: #include "_IHelloWorldEvents_CP.h"
Finally, click the File menu, and then click Save All. When you're finished, the Class View should resemble what is shown in Figure 12, below:
Now, as we have things, in order to say "hello" to the client, we simply call the Fire_OnSayHello() member function from within CHelloWorld::SayHello(). Since CProxyDHelloWorldEvents<T> is in fact a base class of the CHelloWorld class, this works. In the Class View, double-click the icon for the CHelloWorld::SayHello() member function, to open the function definition in the editor. Now, add the code shown in bold in Listing 1, below:

STDMETHODIMP CHelloWorld::SayHello(void)
{
    TCHAR szComputerName[MAX_COMPUTERNAME_LENGTH + 1];
    DWORD dwSize = sizeof(szComputerName) / sizeof(TCHAR);
    ::GetComputerName(szComputerName, &dwSize);

    // Fire the OnSayHello event back to the client, passing the
    // name of the computer the server is running on.
    Fire_OnSayHello(CComBSTR(szComputerName));

    return S_OK;
}
Listing 1: the CHelloWorld::SayHello() member function.
Now we've finished Step 5 of the tutorial. This is the end of the coding of the server. The next step, which is Step 6, guides you through the final build process, and how to set up your new server (and the client) on the server and client computers. To go to the previous step, Step 4, click << Back below. Or click Next >> to go on to Step 6 (coming soon!). If you have questions, try clicking Questions and Answers (coming soon!) to go to a page which might help you.
<< Back | Next >> - coming soon!
Questions and Answers - coming soon!
Tip: If you're having trouble or can't understand something, it's often the case that you just went ahead as far as you could in this tutorial without following through and downloaded the code for the latest Step that was done. Perhaps if you go back to previous Steps, and work through the tutorial in the places where it wasn't clear, this may help. Also, it could be because there are still more Steps yet to come. Let me know about the places that aren't clear, so I can make these articles better for everyone.
From: Giovanni Bajo (giovannibajo_at_[hidden])
Date: 2004-02-13 19:06:47
David Abrahams wrote:
>> struct A {
>> typedef int M;
>>
>> template <class M>
>> void foo(void) {
>> M m; // which M is this?
>> }
>> };
>>
>> I know the C++ committe is discussing this issue at this moment. The
>> argument would be that "M" names the typedef because it's "more
>> stable" than the template parameter (which could get renamed in an
>> out-of-class definition). See also for a
>> detailed discussion.
>
> I'm sorry, but that's insane from a usability POV. C++ already has
> too many places where something far away can be chosen instead of the
> "obvious" alternative close by (see ADL).
I'm not advocating that, I'm just saying that it's how GCC currently works and
it seems to be a gray area of the standard. My personal opinion is that GCC is
wrong: I agree with you that the template parameter should be found on name
lookup.
> Introducing a typedef in an
> enclosing namespace should not affect the meaning or well-formedness
> of a use of a template parameter, especially because this sort of
> thing is liable to happen due to changes in #includes.
We're not speaking of namespace scope but class scope, though. It's a bit
harder to change things at class scope. Anyway, we both agree that it's insane
for C++ to behave like this, so never mind :)
-- Giovanni Bajo
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/02/61031.php | CC-MAIN-2021-25 | refinedweb | 260 | 72.46 |
I recently took a vacation the same week as the 4th of July and had lots of time to reflect upon my career to date. It was a little shocking to realize I’ve been writing code for nearly 30 years now! I decided to take advantage of some of the extra time off to author this nostalgic post and explore all of the languages I’ve worked with for the past 30 years. So this is my tribute to 30 years of learning new languages starting with “Hello, World.”
The first programming language I learned was TI BASIC, a special flavor of BASIC written specifically for the TI 99/4A microcomputer by Microsoft. BASIC, which stands for Beginner’s All-purpose Symbolic Instruction Code, was the perfect language for a 7-year old to learn while stuck at home with no games. The language organized lines of codes with line numbers and to display something on the screen you “printed it” like this:
1981 – TI BASIC
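The original listing appeared as a screenshot; a minimal TI BASIC program of the kind described (the exact lines are a reconstruction) would be:

```
10 PRINT "HELLO, WORLD."
```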
I spent several months writing “choose your own adventure” games using this flavor of BASIC, and even more time listening to the whistles, crackles, and hisses of a black tape cassette recorder used to save and restore data. Probably the most exciting and pivotal moment of my young life was a few years later when my parents brought home a Commodore 64. This machine provided Commodore BASIC, or PET BASIC, right out of the box. This, too, was written by Microsoft based on the 6502 Microsoft BASIC written specifically for that line of chips that also happened to service Apple machines at the time.
1984 – Commodore BASIC
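The screenshot is gone, but based on the description that follows, the screen would have shown something like this (RUN stands in for the shifted-key abbreviation, which has no plain-text equivalent):

```
10 ?"HELLO, WORLD."
RUN

HELLO, WORLD.

READY.
```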
The question mark was shorthand for the PRINT command, and the weird characters afterwards were the abbreviated way to type the RUN command (R SHIFT+U - on the Commodore 64 keyboard the SHIFT characters provided cool little graphics snippets you could use to make rudimentary pictures).
I quickly discovered that BASIC didn’t do all of the things I wanted it to. The “demo scene” was thriving at the time and crews were making amazing programs that would defy the limits of the machine. They would do things like trick the video chip into drawing graphics that shouldn’t be possible or scroll content or even move data into the “off-limits” border section of the screen. Achieving these feats required exact timing that was only possible through the use of direct machine language code. So, I fired up my machine monitor (the name for the software that would allow you to type machine codes directly into memory) and wrote this little program:
1985 – 6502 Machine Code
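The machine-monitor screenshot is described in the paragraph below; a plausible reconstruction of the bytes entered at $C000 (mnemonics added for readability; the exact encoding is an assumption) is:

```
C000: A0 00      LDY #$00       ; Y = 0
C002: B9 00 C1   LDA $C100,Y    ; load the next character of the text
C005: 20 D2 FF   JSR $FFD2      ; call the ROM CHROUT routine to print it
C008: C8         INY            ; Y++
C009: C0 0E      CPY #$0E       ; printed all the characters?
C00B: D0 F5      BNE $C002      ; not yet: loop
C00D: 60         RTS            ; done: return to BASIC
```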
This little app loaded the “Y-accumulator” with an index, then spun through memory starting at $C100, sending the characters one at a time to a ROM subroutine that would print them to the display. This is the equivalent of a for loop (for y = 0; y <= 0x0d, y++) in machine code. The RTS returns from the subroutine. In order to execute the program, you had to use the built-in SYS command that would call out to the memory address (unfortunately, you had to convert from hexadecimal $C000 to decimal 49152, but otherwise it worked like a charm). I had the PETSCII characters for “HELLO, WORLD” stored at memory address $C100 (yes, the Commodore 64 had it’s own special character page). Here is the result:
Of course life got a little easier when I moved from raw machine code to assembly. With assembly, I could pre-plan my software, and use labels to mark areas of memory without having to memorize memory addresses. The exact same program shown above could be written like this:
1986 – 6502 Assembly
* = $C000 ;set the initial memory address
CHROUT = $FFD2 ;set the address for the character out subroutine
LDY #$00
LOOP LDA HELLO, Y
CMP #$00
BEQ END
JSR CHROUT
INY
BNE LOOP
END RTS
HELLO ASC 'HELLO, WORLD.' ; PETSCII
HELLOEND DFB 0 ; zero byte to mark the end of the string
About that time I realized I really loved writing software. I took some courses in high school, but all they taught was a silly little Pascal language designed to make it “easy” to learn how to program. Really? Easy? After hand-coding complex programs using a machine monitor, Pascal felt like a lot of overkill. I do have to admit the syntax for “Hello, World” is straightforward.
1989 – Pascal
program HelloWorld;
begin
writeln('Hello, World.');
end.
I thought the cool kids at the time were working with C. This was a fairly flexible language and felt more like a set of functional macros over assembly than an entirely new language. I taught myself C on the side, but used it only for a short while.
1990 – C
#include <stdio.h>
main()
{
printf("Hello World");
}
The little program includes a library that handles Standard Input/Output and then sends the text on its way. Libraries were how C allowed us to develop cross-platform – the function was called the same thing whether you were on Windows or Linux, but the library itself implemented all of the low-level routines needed to make it work on the target machine. The above code was something I would tinker with on my Linux machine a few years later. It’s hard to describe if you weren’t into computers during this time, but it felt like you weren’t a true programmer unless you built your own custom Linux installation. By “built your own” I mean literally walked through the source and customized it to match the specific set of hardware you owned. The most fun was dealing with video cards and learning about “dot clocks” and all of the nuances of making the motherboard play nicely with the graphics chip. Anyway, I diverge.
C was not really a challenge for me to learn, but I quickly figured out the cool kids were doing something different and following this paradigm known as “object-oriented programming.” Machine code and assembly are probably the farthest you can get from OO, so the shift from procedural to object-oriented was a challenge I was ready to tackle. At the time you couldn’t simply search online for content (you could, but it was using different mechanisms with far fewer hits) so I went out and bought myself a stack of C++ books. It turns out C++ supports the idea of “objects.” It even used objects to represent streams and pipes to manipulate them. This object-oriented stuff also introduced the idea of namespaces to better manage partitions of code. All said, “Hello, World” becomes:
1992 – C++
#include <iostream>
using namespace std;
int main()
{
cout << "Hello World";
return 0;
}
I headed off to college and was disappointed that the place I went did not have courses that covered the “modern” languages I was interested in like C and C++. Instead, I had to muddle through a course where homework was performed on the mainframe we called “Cypher” using an interesting language called Fortran that actually cares about what column you put your code in! That’s right, the flavor of the language at the time designated column 1 for comments, columns 1 – 5 for statement labels, column 6 to mark a continuation, and only at column 7 could you begin to write real code. I learned enough of Fortran to know I never wanted to use it.
1993 – Fortran
PROGRAM HELLOWORLD
PRINT *, 'Hello, World!'
END
Because I wasn’t much into the main courses I spent most of the evenings down in the computer lab logging onto the massive Unix machines the college had. There I discovered the Internet and learned about the “old school” way of installing software: you pull down the source, build it, inspect the errors, tweak it, fix it, and get a working client. Honestly, I don’t know how you could use Unix without learning how to program based on the way things ran back then so I was constantly hacking and exploring and learning my way around the system. One fairly common thing to do would be execute commands that would dump out enormous wads of information that you then had to parse through using “handy” command line tools. One of the coolest languages I learned during that time was PERL. It doesn’t do the language justice to treat it with such a simple example, but here goes:
1993 – PERL
$welcome = "Hello World";
print "$welcome\n";
At the same time I quickly discovered the massive World Wide Web (yes, that’s what we called it back then … the Internet was what all of those fun programs like Gopher and Archie ran on, and the World Wide Web was just a set of documents that sat on it). HTML was yet another leap for me because it was the first time I encountered creating a declarative UI. Instead of loading up variables or literals and calling some keyword or subroutine, I could literally just organize the content on the page. You’d be surprised that 20 years later, the basic syntax of an HTML page hasn’t really changed at all.
1993 – HTML
<html>
<head><title>Hello, World</title></head>
<body><h1>Hello, World</h1></body>
</html>
This was an interesting time for me. I had moved from my personal computers (TI-99/4A and Commodore 64 with a brief period spent on the Amiga) to mainframes, and suddenly my PC was really just a terminal for me to connect to Unix mainframes. I also ran a Linux OS on my PC because that was the fastest way to connect to the Internet and network at the time – the TCP/IP stack was built-in to the OS rather than having to sit on top like it did in the old Windows versions (remember NETCOM anyone?) . Most of my work was on mainframes.
I did realize that I was losing touch with the PC world. At that time it was fairly obvious that the wild days of personal computing were over and the dust had settled around two machines: the PC, running Windows, for most of us, and the Mac for designers. That’s really what I believed. I had a roommate at the the time who was all over the Mac and at the time he designed coupons. He had all of these neat graphics design programs and would often pull out Quark and ask me, “What do you have on the PC that could do this?” I would shrug and remind him that I can’t even draw a circle or square so what on earth would I do with that graphics software? I liked my PC, because I understood software and I understood math so even if I couldn’t draw I could certainly use math to create fractal graphics or particle storms. Of course, doing this required having a graphics card and wasn’t really practical from a TELNET session to a Unix box, so I began learning how to code on the PC. At the time, it was Win32 and C++ that did the trick. You can still create boilerplate for the stack in Visual Studio 2012 today. I won’t bore you with the details of the original “HELLO.C” for Win32 that spanned 150 lines of code.
1994 – Win32 / C++ (Example is a bit more recent)
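The listing here was an image; as a stand-in, a minimal modern Win32 console program (a Windows-only sketch, not the article's actual code) looks like this:

```cpp
// Minimal Win32 console "Hello, World" using the Windows API directly.
#include <windows.h>

int main()
{
    HANDLE out = ::GetStdHandle(STD_OUTPUT_HANDLE);
    const char message[] = "Hello, World.\r\n";
    DWORD written = 0;
    ::WriteConsoleA(out, message, sizeof(message) - 1, &written, NULL);
    return 0;
}
```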
Dropping to the command line and executing this prints the familiar greeting.
My particle streams and Mandelbrot sets weren’t doing anything for employment, however, so I had to take a different approach. Ironically, my professional start didn’t have anything to do with computers at all. I started working for an insurance company taking claims over the phone in Spanish. That’s right. In the interview for a lower wage job that I was “settling for” to pay the bills while I stayed up nights and hacked on my PC I happened to mention I spoke Spanish. They brought in their bilingual representative to interview me and I passed the test, and within a week I was in a higher paid position learning more Spanish in a few short calls than I had in all my years in high school.
I was young and competitive and we were ranked based on how many claims we successfully closed in a day. I was not about to fall behind just because the software I was using tended to crash every once in awhile. It was a completely new system to me – the AS/400 (now called the iSeries) – but I figured it out anyway and learned how to at least restart the claims software after a crash. The IT department quickly caught on and pulled me aside. I was afraid I was in trouble, but instead they extended me an offer to move into IT. I started doing third shift operations which meant basically maintaining the AS/400 systems and swapping print cartridges on the massive printers that would print out policy forms and claims.
When I went to operations the process for swapping printing cartridges took most of the shift. This is because certain forms were black ink only, but other forms had green or red highlights. The printers could only handle one ink profile so whenever a different type of form was encountered, we’d get an alert and go swap everything out. I decided this was ridiculous so I took the time to teach myself RPG. I wrote a program that would match print jobs to the ink color and then sort the print queue so all black came together, all green, etc. This turned an 8 hour job into about a 2 hour one and gave me lots of time to study RPG. The original versions – RPG II and RPG III – were crude languages originally designed to simply mimic punch card systems and generate reports (the name stands for Report Generator). Like Fortran, RPG was a positional language.
1995 – RPG
I 'HELLO, WORLD' C HELO
C HELO DSPLY
C SETON LR
Note the different types of lines indicated by the first character (actually it would have been several columns over but I purposefully omitted some of the margin code). This defines a constant, displays it, then sets an indicator to cause the program to finish.
After working in operations I landed a second gig. The month-end accounting required quite a bit of time and effort. The original system was a Honeywell mainframe that read punch cards. A COBOL program was written that read in a file that emulated a punch card and output another file that was then pumped into the AS/400 and processed. After this, the various accounting figures had to match. Due to rounding errors, unsupported transactions, and any other number of issues the figures almost never matched so the job was to investigate the process and find out where it broke, then update the code to fix it. We also had an “emergency switch” for the 11th hour that would read in the output data and generate accounting adjustments to balance the books if we were unable to find the issues. Although I didn’t do a lot of COBOL coding, I had to understand it well enough to read the Honeywell source to troubleshoot issues on the AS/400 side.
1995 – COBOL
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WELCOME-MESSAGE PIC X(12).
PROCEDURE DIVISION.
PROGRAM-BEGIN.
MOVE "Hello World" TO WELCOME-MESSAGE.
DISPLAY WELCOME-MESSAGE.
PROGRAM-DONE.
STOP RUN.
It was only a short time later that the top RPG guru came to our company to give us a three day class because the coolest thing was happening in the AS/400 world. Not only were the AS/400 machines moving to 64-bit (and everyone knows that double bits is twice as nice, right?) but the RPG language was getting a facelift and with version IV would embrace more procedural and almost object-oriented principles than ever before. How cool was that? We jumped into training and I laughed because all of the old RPG developers were scratching their heads trying to muddle through this “new style of programming” while I was relieved I could finally get back to the more familiar procedural style I was used to with C and C++ rather than the tight, constricted, indicator and column-based language RPG had been.
Some developers may get a kick out of one of the '”features” that really knocked the socks off everyone. The language required the instructions to begin at a certain column, and inputs into the instructions would precede them. This was a very limited space so you could really only load constants of a few characters, otherwise you had to specify them as constants or data structures and read them in. The new language moved the keyword column to the right so there was more room for the “factor one” position. That meant we could now do “Hello, world” in just a few lines. The language was also more “procedural” so you could end a program by returning instead of setting on the indicator (although if I remember correctly, a return on the main program really just set that indicator under the covers).
1996 – RPG/ILE
C 'HELLO, WORLD' DSPLY
C RETURN
The AS/400 featured a database built into the operating system called DB2. For the longest time the database only supported direct interaction via RPG or other software and did not support the SQL syntax. It was rolled out as a special package called SQL/400 but the underlying support was there. I wrote one of my first published (print) articles about tapping into SQL for the AS/400 in 1998 (Create an Interactive SQL Utility). There are probably a million ways to do “Hello, World” in SQL but perhaps the easiest is this:
1998 – SQL
SELECT 'HELLO, WORLD' AS HELLO
I apologize for stepping out of chronological order but the SQL seemed to make sense as part of my “main” or “paid” job. At the same time I had been doing a lot of heavy gaming, starting with DOOM (the first game I was so impressed with, I actually sent in the money to purchase the full version), continuing with DOOM II and HEXEN and culminating in Quake. If you’re no familiar with the history of first-person shooters, Quake was the game that changed the history of gaming. It offered one of the first “true” 3D worlds (the predecessors would simulate 3D with 2D maps that allowed for varying floor and ceiling heights) and revolutionized the death match by supporting TCP/IP and using advanced code that allowed for more gamers in the same map than ever before.
It also was extremely customizable. Although I am aesthetically challenged and never caught on to creating my own models or maps, I jumped right into programming. Quake offered a C-based language called QuakeC that you would literally compile into a special cross-platform byte code that could run on all of the target platforms that Quake did. I quickly wrote a number of modifications to do things like allow players to catch fire or cause spikes to ricochet realistically from walls. Someone in a chat room asked me to program an idea that I became famous for which was called “MidnightCTF” and essentially took any existing map and turned off all of the lights but equipped the players with their own flashlight. Quake was one of the first games to support true 3D sound so this added an interesting dimension to game play.
Someone even included a code snippet from one of my modifications in the “Dictionary of Programming Languages” under the QuakeC entry. Nikodemos was the nickname I used when I played Quake. The “Hello, World” for QuakeC is really just a broadcast message that gets sent to all players currently in the game.
1996 – QuakeC
bprint("Hello World\n");
By this time I realized the Internet was really taking off. I had been frustrated in 1993 when I discovered it at college and no one really knew what I was talking about, but just a few years later everyone was scrambling to get access (a few companies like AOL and Microsoft with MSN actually thought they could build their own version … both ended up giving in and plugging into THE Internet). I realized that my work on mainframes was going to become obsolete or at best I’d be that developer hidden in the back corner hacking on “the old system.” I wanted to get into the new stuff.
I transferred to a department that was working on the new stuff – an application that was designed to provide visibility across suppliers by connecting several different systems in an application written with VB6 (COM+) and ASP.
1998 – VB6 (COM) w/ ASP
Public Class HelloWorld
    Shared Public Function GetText() As String
        return "Hello World"
    End Function
End Class
<%@ Page Language="VB" %>
<OBJECT RUNAT=SERVER SCOPE=Session ID=MyGreeting>
</OBJECT>
<HTML>
<HEAD><TITLE><%= MyGreeting.GetText() %></TITLE></HEAD>
<BODY><H1><%= MyGreeting.GetText() %></H1></BODY>
</HTML>
At the time I had the opportunity to work with a gifted architect who engineered a system that at the time was pretty amazing. Our COM+ components all accepted a single string parameter in the interface because incoming information was passed as XML. This enabled us to have components that could just as easily work with messages from the web site as they could incoming data from a third-party system. It was a true “web service” before I really understood what the term meant. On the client, forms were parsed by JavaScript and packaged into XML and posted down so a “post” from the web page was no different than a post directly from the service. The services would return data as XML as well. This would be combined with a template for the UI (called PXML for presentation XML) and then an XSLT template would transform it for display. This enabled us to tweak the UI without changing the underlying code and was almost like an inefficient XAML engine. This was before the .NET days.
JavaScript of course was our nemesis because we had to tackle how to handle the various browsers at the time. Yes, the same problems existed 15 years ago that exist today when it comes to JavaScript and cross-browser compatibility. Fortunately, all browsers agree on the way to send a dialog to the end user.
1998 – JavaScript
alert('Hello, World.');
A lot of our time was spent working with the Microsoft XML DLLs (yes, if you programmed back then you remember registering the MSXML parsers). MSXML3.DLL quickly became my best friend. Here’s an example of transforming XML to HTML using XSLT.
1998 – XML/XSLT to HTML
<?xml version="1.0"?>
<hello>Hello, World!</hello>
<?xml version='1.0'?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <head><title><xsl:value-of select="hello"/></title></head>
      <body><h1><xsl:value-of select="hello"/></h1></body>
    </html>
  </xsl:template>
</xsl:stylesheet>
<%
Const MSXMLClass = "MSXML2.DOMDocument"
Set XSLT = Server.CreateObject(MSXMLClass)
Set XDoc = Server.CreateObject(MSXMLClass)
XDoc.load(Server.MapPath("hello.xml"))
XSLT.load(Server.MapPath("hello.xsl"))
Response.Clear
Response.Charset = "utf-8"
Response.Write XDoc.transformNode(XSLT)
%>
I spent several years working with that paradigm. Around that time I underwent a personal transformation and shed almost 70 pounds to drop from a 44” waist down to 32” and became very passionate about fitness. I started my own company “on the side” and eventually left the company I was at to become Director of IT for a smaller company that was providing translation services to hospitals and had a Spanish-language online diet program. Once again I was able to tap into my Spanish-speaking ability because the translations were from English to Spanish and vice versa. I learned quite a bit about the differences between various dialects and the importance of having targeted translations. I also rewrote an entire application that was using ASP with embedded SQL calls and was hard-coded to Spanish to be a completely database-driven, white-labeled (for branding) localized app (the company was looking to branch into other languages like French). It was an exciting time and while I used the Microsoft stack at my job, the cost of tools and servers led me to the open source community for my own company. That’s when I learned all about the LAMP stack … Linux OS, Apache HTTP Server, MySQL Database, and PHP for development. Ironically this experience later landed me one of my first consulting gigs working for Microsoft as they attempted to reach out to the open source community for them to embrace Silverlight … but that’s a different story.
2002 – PHP
<?php
$hello = 'Hello, World.';
echo "$hello";
?>
Several years passed working on those particular platforms when I had the opportunity to move into yet another position to build the software department for a new company. I was the third employee at a small start-up that was providing wireless hotspots before the term became popular. If you’ve ever eaten at a Panera or Chick-fil-A or grabbed a cup of coffee at a Caribou Coffee then you’ve used either the software I helped write, or a more recent version of it, to drive the hotspot experience. When I joined the company the initial platform was written on Java. This was a language I’d done quite a bit of “tinkering” with so it wasn’t a gigantic leap to combine my C++ and Microsoft stack skills to pick it up quickly.
2004 – Java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, World");
    }
}
I’ve got nothing against Java as a language, but the particular flavor we were using involved the Microsoft JVM that was about to get shelved, and a custom server that just didn’t want to scale. I migrated the platform over to .NET and it was amazing to see a single IIS server handling more requests than several of the dedicated Java servers could. I say, “migration” but it was really building a new platform. We looked into migrating the J++ code over to C# but it just wasn’t practical. Fortunately C# is very close to Java so most of the team was able to transition easily and we simply used the existing system as the “spec” for the new system to run on Windows machines and move from MySQL to SQL Server 2005. Note how similar “Hello, World” is in C# compared to Java.
2005 – C#
public class Hello {
    public static void Main() {
        System.Console.WriteLine("Hello, World!");
    }
}
Part of what made our company so successful at the time was a “control panel” that allowed us to manage all of our hotspots and access points from a central location. We could reboot them remotely, apply firmware updates, and monitor them with a heart beat and store history to diagnose issues. This software quickly evolved to become a mobile device management (MDM) platform that is the flagship product for the company today. They rebranded their name and came out with the product but our challenge was providing an extremely interactive experience in HTML that was cross-browser compatible (the prior solution used Microsoft’s custom Java applets). We succeeded in building a fairly impressive system using AJAX and HTML but our team struggled with complex, rich UIs when they had to test across so many browsers and platforms. While we needed to maintain this for the hotspot login experience, the management side was more flexible so I researched some alternative solutions.
When I discovered Silverlight, I was intrigued but decided to pilot it first. I was able to stand up a POC of our monitoring dashboard in a few weeks and everyone loved it so we decided to go all-in. At my best guess our team was able to go from concept to delivery of code about 4 times faster using Silverlight compared to the JavaScript and HTML stack. This was while HTML5 was still a pipe dream. We built quite a bit of Silverlight functionality before I left. By that time we were working with Apple on the MDM side and they of course did not want Silverlight anywhere near their software, and HTML5 was slowly gaining momentum, so I know the company transitioned back, but I was able to enjoy several more years building rich line of business applications in a language that brought the power of a declarative UI through XAML to as many browsers and platforms as were willing to allow plugins (I hear those aren't popular anymore).
2008 – Silverlight (C# and XAML)
<UserControl x:Class="MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <Grid x:Name="LayoutRoot">
        <TextBlock x:Name="Greeting"></TextBlock>
    </Grid>
</UserControl>
public partial class MainPage : UserControl {
public MainPage()
{
InitializeComponent();
Loaded += MainPage_Loaded;
}
void MainPage_Loaded(object sender, RoutedEventArgs e)
{
Greeting.Text = "Hello, World.";
    }
}
Silverlight of course went down like a bad stock. It was still a really useful, viable technology, but once people realized Microsoft wasn’t placing much stock in it (pardon the pun) it was dead on arrival – had really nothing to do with whether it was the right tool at the time, and everything to do with the perception of it being obsolete. HTML5 also did a fine job of marketing itself as “write once, run everywhere” and hundreds of companies dove in head first before they realized their mistake (it’s really “write once, suck everywhere, then write it again for every target device”).
The parts we loved about Silverlight live on, however, in Windows 8.1 with the XAML and C# stack. For kicks and giggles here’s a version of “Hello, World” that does what the cool kids do and uses the Model-View-ViewModel (MVVM) pattern.
2011 – WinRT / C#
public class ViewModel
{
public string Greeting
{
get
{
return "Hello, World";
}
}
}
<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
<Grid.DataContext>
<local:ViewModel/>
</Grid.DataContext>
<TextBlock Text="{Binding Greeting}"/>
</Grid>
While Windows 8.1 has kept me occupied through my writing and side projects, it’s still something new to most companies and they want a web-based solution. That means HTML and JavaScript, so that’s what I spend most of my time working with. That’s right, once I thought I got out, they pulled me back in. After taking a serious look at what I hate about web development using HTML and JavaScript, I decided there had to be a better way. Our team got together and looked at potential ways and found a pretty cool solution. Recently a new language was released called TypeScript that is a superset of JavaScript. This doesn’t try to change the syntax and any valid JavaScript is also valid TypeScript. The language, however, provides some development-time features such as interfaces that help shape API calls and provide rich discovery (without ever appearing in the generated code) while also giving us constructs like classes with inheritance, strongly typed variables, and static modifiers that all compile to perfectly valid, cross-browser JavaScript.
Using TypeScript was an easy decision. Even though it is in beta, it produces 100% production ready JavaScript, so if we found it wouldn’t work well we knew we could yank the plug and just move forward with the JavaScript. It turns out it was incredibly useful – even a few skeptics on the team who were JavaScript purists and hated any attempt to “modify the language” agree that TypeScript gives us an additional level of control, ability to refactor, and supports parallel development and has accelerated our ability to deliver quality web-based code.
2012 – TypeScript
class Greeter {
public static greeting: string = "Hello, World";
public setGreeting(element: HTMLElement): void {
element.innerText = Greeter.greeting;
}
}
var greeter: Greeter = new Greeter();
var div: HTMLElement = document.createElement("div");
greeter.setGreeting(div);
document.body.appendChild(div);
TypeScript wasn’t the only change we made. We also wanted to remove some of the ritual and ceremony we had around setting up objects for data-binding. We were using Knockout which is a great framework but it also required more work than we wanted. Someone on our team investigated a few alternatives and settled on AngularJS. I was a skeptic at first but quickly realized this was really like XAML for the web. It gave us a way to keep the UI declarative while isolating our imperative logic and solved yet another problem. Our team has been happily using a stack with TypeScript and AngularJS for months now and absolutely loves it. I’m working on a module for WintellectNOW because I believe this is a big thing. However, if 30 years have taught me anything, it’s this: here today, gone tomorrow. I’m not a C# developer, or a JavaScript developer, or an AngularJS wizard. Nope. I’m a coder. A programmer. Pure, plain, and simple. Languages are just a tool and I happen to speak many of them. So, “Hello, World” and I hope you enjoyed the journey … here’s to the latest.
2013 – AngularJS
<div ng-app>
<div ng-init="greeting = 'Hello, World'">
{{greeting}}
</div>
</div>
“Goodbye, reader.” | http://csharperimage.jeremylikness.com/2013/07/30-years-of-hello-world.html | CC-MAIN-2015-32 | refinedweb | 5,532 | 57.3 |
my program is not asking me what my fav color is
i get no error but my program should be asking me what my favorite color is
Code:
import java.util.Scanner;

public class Assignment2 {

    public static void main(String[] args) {
        String firstName, middleName, lastName;
        int age, luckyNumber;
        String color;

        Scanner keyboard = new Scanner(System.in);

        System.out.println("What is your first name?");
        firstName = keyboard.nextLine();

        System.out.println("What is your middle name?");
        middleName = keyboard.nextLine();

        System.out.println("What is your last name?");
        lastName = keyboard.nextLine();

        System.out.println("How old are you?");
        age = keyboard.nextInt();

        System.out.println("What is your lucky number?");
        luckyNumber = keyboard.nextInt();

        System.out.println("What is your favorite color?");
        color = keyboard.nextLine();

        String fullName = firstName + " " + middleName + " " + lastName;
        System.out.println("A story about " + fullName + ":");

        String fullNameCaps = fullName.toUpperCase();
        char firstInitial = firstName.charAt(0);
        char middleInitial = middleName.charAt(0);
        char lastInitial = lastName.charAt(0);

        System.out.println("\t" + fullNameCaps + " is " + firstInitial + middleInitial + lastInitial);
        System.out.println("\t" + firstInitial + middleInitial + lastInitial + "'s favorite color is " + color + ", and " + firstName + " " + lastInitial + ". is " + luckyNumber);
    }
}
This tends to be a poorly documented part of scanner.
It's been a while since I've used command-line Java, but I believe you simply need to clear out the leftover whitespace on the buffer after the nextInt calls. This is because nextInt stops at the whitespace separator, which is left on the scanner buffer. When your call to nextLine processes, it finds an available whitespace character, which is removed and taken as your input line. By 'removed', I mean it's still on the scanner stack, but the scanner pointer has stepped to a location past the whitespace as opposed to before it.

After your lucky-number nextInt call, add a keyboard.nextLine(); this will remove the trailing whitespace separator and allow your next call to keyboard.next* to request input.
This is why I always stick with nextLine and use parseInt where necessary. I hate having to worry about the scanner buffer.
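To see the buffer behavior concretely, you can feed a Scanner from a String instead of System.in; the mechanics are identical to the keyboard case. (The class name below is just for illustration.)

```java
import java.util.Scanner;

public class ScannerBufferDemo {
    public static void main(String[] args) {
        // Simulates typing "7<Enter>blue<Enter>" at the prompts.
        Scanner broken = new Scanner("7\nblue\n");
        int lucky = broken.nextInt();      // reads 7, leaves "\n" in the buffer
        String color = broken.nextLine();  // consumes only the leftover "\n"
        System.out.println("without fix: [" + color + "]");  // prints: without fix: []

        Scanner fixed = new Scanner("7\nblue\n");
        int lucky2 = fixed.nextInt();      // reads 7, leaves "\n" in the buffer
        fixed.nextLine();                  // throw away the trailing newline
        String color2 = fixed.nextLine();  // now actually reads the next line
        System.out.println("with fix: [" + color2 + "]");    // prints: with fix: [blue]
    }
}
```

The empty brackets in the first print are exactly the symptom in the question: the color prompt appears to be "skipped" because nextLine returns the leftover end-of-line immediately.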
Today we'll be building our own speed-testing service in Python. For those who don't know, there are Speedtest websites to test our ping, upload, and download speed.
For today's article, I was looking to automate this since I check it regularly.
I chose Python as the language, seeing as I'm trying it out a bit.
Installing the speedtest-cli in Python
Before we can use this package, we have to install it so that it's available for us to use.
Use the following command to install it:
pip install speedtest-cli
Now open your python file and start by importing the speed test module.
import speedtest
Then we create a new speed test. In my case, I'm assigning it to the st variable.
st = speedtest.Speedtest()
Note: be aware running the speed test takes a while, so be patient 🙈
Now let's try our download speed and print it out:
print(st.download())
When we run this, we get a long number like this:
55775374.79559286
Making a full Python speed test script
Now that we know the basics of the speed test, we want to receive three elements:
- ping
- download
- upload
I'll be showing you how to get this data and format it nicely.
Starting with the ping, for this to work, we need to define a server to ping. In our case let's choose the best one.
st.get_best_server()
After this, we can get the ping to this server by using the following:
print(f"Your ping is: {st.results.ping} ms")
Let's go on to download. We have already seen we can get this by calling the download() function, but it's unformatted.
Below I'll show you how to format it to Mbit/s.
print(f"Your download speed: {round(st.download() / 1000 / 1000, 1)} Mbit/s")
We can take the same approach for the upload but use the upload() function.
print(f"Your upload speed: {round(st.upload() / 1000 / 1000, 1)} Mbit/s")
The full script will look like this:
import speedtest

st = speedtest.Speedtest()
st.get_best_server()

print(f"Your ping is: {st.results.ping} ms")
print(f"Your download speed: {round(st.download() / 1000 / 1000, 1)} Mbit/s")
print(f"Your upload speed: {round(st.upload() / 1000 / 1000, 1)} Mbit/s")
And when we run this, it outputs:
Your ping is: 30.97 ms
Your download speed: 64.4 Mbit/s
Your upload speed: 29.2 Mbit/s
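The conversion in the script is just bits-per-second divided by one million. If you want the numbers for further processing instead of printing, you can factor that into a tiny helper (the function name here is my own, not part of the speedtest package):

```python
def to_mbits(bits_per_second):
    """Convert a raw speedtest result (bits/s) to Mbit/s, rounded to one decimal."""
    return round(bits_per_second / 1_000_000, 1)

# The raw download figure from earlier in the article:
print(to_mbits(55775374.79559286))  # → 55.8
```

The same helper works for both the download() and upload() results, since both return bits per second.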
Thank you for reading, and let's connect!
Thank you for reading my blog. Feel free to subscribe to my email newsletter and connect on Facebook or Twitter
Google tells us about the landmarks nearby. The following is a sample URL to retrieve the geocoding information for Ann Arbor, MI:

http://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=Ann+Arbor%2C+MI
Make sure to un-wrap the URL and remove any spaces from the URL before pasting it into your browser.
The following is a simple application to prompt the user for a search string and call the Google geocoding API and extract information from the returned JSON.
The program imports urllib to contact the service and json to parse the response, and then extracts the information that we are looking for.
The output of the program is as follows (some of the returned JSON has been removed):
$ python geojson.py
Enter location: Ann Arbor, MI
Retrieving http://maps.googleapis.com/maps/api/geocode/json?sensor=false&address=Ann+Arbor%2C+MI
Retrieved:
You can download geojson.py and geoxml.py to explore the JSON and XML variants of the Google geocoding API.
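If you want to experiment with the extraction step without calling the service, you can paste a response into a string and parse it the same way. The response below is a hand-made, heavily abbreviated stand-in with the same shape as a real geocoding reply:

```python
import json

# Hand-made, abbreviated stand-in for a geocoding response;
# the real service returns many more fields.
data = '''
{
  "status": "OK",
  "results": [
    {
      "formatted_address": "Ann Arbor, MI, USA",
      "geometry": {
        "location": {"lat": 42.2808256, "lng": -83.7430378}
      }
    }
  ]
}
'''

js = json.loads(data)
if js.get("status") == "OK":
    first = js["results"][0]
    location = first["geometry"]["location"]
    print(first["formatted_address"])
    print("lat", location["lat"], "lng", location["lng"])
```

Walking the nested dictionaries with successive subscripts is exactly what the real program does once the JSON has been retrieved over the network.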
If I created a dynamic array inside a function and insert values in it, and I want to use this array in other function, but I want its values not to be destroyed, should I declare it as a static and as an extern? If so how do I do this?
For example:
void func1(void)
{
char *file_name;
file_name = (char *) malloc((SIZE) * sizeof(char));
strcpy(file_name, other_file_name);
file_name[N-1] = '\0';
file_name[N-2] = 'x';
bla bla
}
void func2(void)
{
operations on file_name
}
#include <stdio.h>
MORE INCLUDES HERE
#include "a.h"
#include "b.h"
int main()
{
bla bla ...
return 0;
}
This is the purpose, but should I declare inside func1() static extern char *filename;? And if it's the right way, what else should I do in order for it to work?
DO NOT NOT NOT declare that as an extern. Using a global variable in this context is very poor style. What you want to do is pass a pointer to the filename array as an argument to func2. When you call malloc, the OS allocates for you memory on the heap which is independent of your call stack. Therefore, even when func1 returns it is still there (until you call free).
for example
void func1(void) {
    char * filename;
    filename = (char*)malloc((SIZE)*sizeof(char));
    //do stuff
    func2(filename);
}

void func2(char * filename) {
    //do stuff to filename
}
If you allocate memory via malloc you can simply return the pointer from your function. The allocated memory won't be released automatically. You have to use free to do that.
You can only use one storage class at a time - so you cannot use both static and extern to qualify a single variable.
With dynamically allocated arrays, it is crucial to know which code will release the allocated space. If you don't, you will have a memory leak. In small-scale programs, it may not 'matter' in the sense that the program will run despite leaking memory. However, in large programs, especially long-running programs (word processors, DBMS, etc), it matters crucially.
You can pass dynamically allocated arrays - or the pointer to the dynamically allocated array - to another function. If you do not want the other function to modify it, you should write the other function so it takes const SomeType *arg as the argument. The compiler will then ensure that your code does not modify the array.
Hence:
extern void func2(const char *filename); extern void func1(void);
#include "header.h" #include <stdlib.h> #include <string.h> extern const char *other_file_name; // Should be in a header! void func1(void) { char *filename; size_t N = strlen(other_file_name) + 1; filename = (char *)malloc(N); strcpy(filename, other_file_name); file_name[N-1] = '\0'; file_name[N-2] = 'x'; func2(filename); free(filename); }
#include "header.h" void func2(const char *filename) { ...operations on filename... }
#include "header.h" int main(void) { ... func1(); ... func2("/etc/passwd"); return 0; }
Alternatively, but less desirably, you can make filename into a global variable. In that case, you should declare it in header.h. However, you cannot then have the compiler enforce the constraint that func2() should treat the variable as a constant - one more reason not to use global variables.
See also SO 1433204 for a discussion of extern variables in C.
Numpy Basics
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
- a powerful N-dimensional array object
- sophisticated (broadcasting) functions
- useful linear algebra, Fourier transform, and random number capabilities
The NumPy array object is the common interface for working with typed arrays of data across a wide-variety of scientific Python packages. NumPy also features a C-API, which enables interfacing existing Fortran/C/C++ libraries with Python and NumPy.
# Convention for import to get shortened namespace
import numpy as np
# Create a simple array from a list of integers
a = np.array([1, 2, 3])
a
array([1, 2, 3])
# See how many dimensions the array has
a.ndim
1
# Print out the shape attribute
a.shape
(3,)
# Print out the data type attribute
a.dtype
dtype('int64')
# This time use a nested list of floats
a = np.array([[1., 2., 3., 4., 5.]])
a
array([[1., 2., 3., 4., 5.]])
# See how many dimensions the array has
a.ndim
2
# Print out the shape attribute
a.shape
(1, 5)
# Print out the data type attribute
a.dtype
dtype('float64')
NumPy also provides helper functions for generating arrays of data to save you typing for regularly spaced data.
- arange(start, stop, interval) creates a range of values in the interval [start, stop) with step spacing.
- linspace(start, stop, num) creates a range of num evenly spaced values over the range [start, stop].
a = np.arange(5)
print(a)
[0 1 2 3 4]
a = np.arange(3, 11)
print(a)
[ 3 4 5 6 7 8 9 10]
a = np.arange(1, 10, 2)
print(a)
[1 3 5 7 9]
b = np.linspace(5, 15, 5)
print(b)
[ 5. 7.5 10. 12.5 15. ]
b = np.linspace(2.5, 10.25, 11)
print(b)
[ 2.5 3.275 4.05 4.825 5.6 6.375 7.15 7.925 8.7 9.475 10.25 ]
a = range(5, 10)
b = [3 + i * 1.5/4 for i in range(5)]
result = []
for x, y in zip(a, b):
    result.append(x + y)
print(result)
[8.0, 9.375, 10.75, 12.125, 13.5]
That is very verbose and not very intuitive. Using NumPy this becomes:
a = np.arange(5, 10)
b = np.linspace(3, 4.5, 5)
a + b
array([ 8. , 9.375, 10.75 , 12.125, 13.5 ])
The four major mathematical operations operate in the same way. They perform an element-by-element calculation of the two arrays. The two must be the same shape though!
a * b
array([15. , 20.25, 26.25, 33. , 40.5 ])
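For instance, subtraction and division work the same element-by-element way, and combining arrays of mismatched shapes raises an error. A quick check, reusing the same a and b as above (NumPy required):

```python
import numpy as np

a = np.arange(5, 10)        # [5 6 7 8 9]
b = np.linspace(3, 4.5, 5)  # [3. 3.375 3.75 4.125 4.5]

print(a - b)  # element-by-element difference: 2., 2.625, 3.25, 3.875, 4.5
print(a / b)  # element-by-element quotient

# The shapes must match; otherwise NumPy refuses:
try:
    a + np.arange(3)
except ValueError as err:
    print("shape mismatch:", err)
```

Note that NumPy does allow certain compatible-shape combinations through broadcasting, but two one-dimensional arrays of different lengths, as here, are rejected.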
np.pi
3.141592653589793
np.e
2.718281828459045
# This makes working with radians effortless!
t = np.arange(0, 2 * np.pi + np.pi / 4, np.pi / 4)
t
array([0. , 0.78539816, 1.57079633, 2.35619449, 3.14159265, 3.92699082, 4.71238898, 5.49778714, 6.28318531])
# Calculate the sine function
sin_t = np.sin(t)
print(sin_t)
[ 0.00000000e+00 7.07106781e-01 1.00000000e+00 7.07106781e-01 1.22464680e-16 -7.07106781e-01 -1.00000000e+00 -7.07106781e-01 -2.44929360e-16]
# Round to three decimal places
print(np.round(sin_t, 3))
[ 0. 0.707 1. 0.707 0. -0.707 -1. -0.707 -0. ]
# Calculate the cosine function
cos_t = np.cos(t)
print(cos_t)
[ 1.00000000e+00 7.07106781e-01 6.12323400e-17 -7.07106781e-01 -1.00000000e+00 -7.07106781e-01 -1.83697020e-16 7.07106781e-01 1.00000000e+00]
# Convert radians to degrees
degrees = np.rad2deg(t)
print(degrees)
[ 0. 45. 90. 135. 180. 225. 270. 315. 360.]
# Integrate the sine function with the trapezoidal rule
sine_integral = np.trapz(sin_t, t)
print(np.round(sine_integral, 3))
-0.0
# Sum the values of the cosine
cos_sum = np.sum(cos_t)
print(cos_sum)
0.9999999999999996
# Calculate the cumulative sum of the cosine
cos_csum = np.cumsum(cos_t)
print(cos_csum)
[ 1.00000000e+00 1.70710678e+00 1.70710678e+00 1.00000000e+00 0.00000000e+00 -7.07106781e-01 -7.07106781e-01 -5.55111512e-16 1.00000000e+00]
# Create an array for testing
a = np.arange(12).reshape(3, 4)
a
array([[ 0, 1, 2, 3], [ 4, 5, 6, 7], [ 8, 9, 10, 11]])
Indexing in Python is 0-based, so the command below looks for the 2nd item along the first dimension (row) and the 3rd along the second dimension (column).
a[1, 2]
6
Can also just index on one dimension
a[2]
array([ 8, 9, 10, 11])
Negative indices are also allowed, which permit indexing relative to the end of the array.
a[0, -1]
3
Slicing syntax is written as start:stop[:step], where all numbers are optional.
- defaults:
  - start = 0
  - stop = len(dim)
  - step = 1
- The second colon is also optional if no step is used.
It should be noted that end represents one past the last item; one can also think of it as a half open interval:
[start, end)
# Get the 2nd and 3rd rows
a[1:3]
array([[ 4, 5, 6, 7], [ 8, 9, 10, 11]])
# All rows and 3rd column
a[:, 2]
array([ 2, 6, 10])
# ... can be used to replace one or more full slices
a[..., 2]
array([ 2, 6, 10])
# Slice every other row
a[::2]
array([[ 0, 1, 2, 3], [ 8, 9, 10, 11]])
- The code below calculates a two point average using a Python list and loop. Convert it do obtain the same results using NumPy slicing
- Bonus points: Can you extend the NumPy version to do a 3 point (running) average?
data = [1, 3, 5, 7, 9, 11]
out = []

# Look carefully at the loop. Think carefully about the sequence of values
# that data[i] takes--is there some way to get those values as a numpy slice?
# What about for data[i + 1]?
for i in range(len(data) - 1):
    out.append((data[i] + data[i + 1]) / 2)
print(out)
[2.0, 4.0, 6.0, 8.0, 10.0]
# YOUR CODE GOES HERE
# %load solutions/slice.py
# Cell content replaced by load magic replacement.
data = np.array([1, 3, 5, 7, 9, 11])
out = (data[:-1] + data[1:]) / 2
print(out)
[ 2. 4. 6. 8. 10.]
# YOUR BONUS CODE GOES HERE
# %load solutions/slice_bonus.py
# Cell content replaced by load magic replacement.
data = np.array([1, 3, 5, 7, 9, 11])
out = (data[2:] + data[1:-1] + data[:-2]) / 3
print(out)
[3. 5. 7. 9.]
- Given the array of data below, calculate the total of each of the columns (i.e. add each of the three rows together):
data = np.arange(12).reshape(3, 4)

# YOUR CODE GOES HERE
# total = ?
# %load solutions/sum_row.py
# Cell content replaced by load magic replacement.
print(data[0] + data[1] + data[2])
# Or we can use numpy's sum and use the "axis" argument
print(np.sum(data, axis=0))
[12 15 18 21] [12 15 18 21]
Resources
The goal of this tutorial is to provide an overview of the use of the NumPy library. It tries to hit all of the important parts, but it is by no means comprehensive. For more information, try looking at the official NumPy documentation.
Chapter 16 demonstrated that higher-order functions such as map, flatMap, and filter provide powerful constructions for dealing with lists. But sometimes the level of abstraction required by these functions makes a program a bit hard to understand. Here's an example. Say you are given a list of persons, persons, where each person has a name, a boolean isMale flag, and a list of children. To find the names of all pairs of mothers and their children in that list, you could use the following query:

scala> persons filter (p => !p.isMale) flatMap (p =>
         p.children map (c => (p.name, c.name)))
res5: List[(String, String)] = List((Julie,Lara), (Julie,Bob))

The query does its job, but it's not exactly trivial to write or understand. Is there a simpler way? In fact, there is. Remember the for expressions in Section 7.3? Using a for expression, the same example can be written as follows:
scala> for (p <- persons; if !p.isMale; c <- p.children) yield (p.name, c.name)
res6: List[(String, String)] = List((Julie,Lara), (Julie,Bob))

The result of this expression is exactly the same as the result of the previous expression. What's more, most readers of the code would likely find the for expression much clearer than the previous query, which used the higher-order functions, map, flatMap, and filter.
However, the two queries are not as dissimilar as it might seem. In fact, it turns out that the Scala compiler will translate the second query into the first one. More generally, all for expressions that yield a result are translated by the compiler into combinations of invocations of the higher-order methods map, flatMap, and filter. All for loops without yield are translated into a smaller set of higher-order functions: just filter and foreach.
In this chapter, you'll find out first about the precise rules of writing for expressions. After that, you'll see how they can make combinatorial problems easier to solve. Finally, you'll learn how for expressions are translated, and how as a result, for expressions can help you "grow" the Scala language into new application domains.
Generally, a for expression is of the form:
  for ( seq ) yield expr

Here, seq is a sequence of generators, definitions and filters, with semicolons between successive elements. An example is the for expression:
  for (p <- persons; n = p.name; if (n startsWith "To")) yield n

The for expression above contains one generator, one definition, and one filter. As mentioned in Section 7.3, you can also enclose the sequence in braces instead of parentheses, then the semicolons become optional:
  for {
    p <- persons              // a generator
    n = p.name                // a definition
    if (n startsWith "To")    // a filter
  } yield n

A generator is of the form:
  pat <- expr

The expression expr typically returns a list, even though you will see later that this can be generalized. The pattern pat gets matched one-by-one against all elements of that list. If the match succeeds, the variables in the pattern get bound to the corresponding parts of the element, just the way it is described in Chapter 15. But if the match fails, no MatchError is thrown. Instead, the element is simply discarded from the iteration.
In the most common case, the pattern pat is just a variable x, as in x <- expr. In that case, the variable x simply iterates over all elements returned by expr.
A definition is of the form:
  pat = expr

This definition binds the pattern pat to the value of expr. So it has the same effect as a val definition:
  val x = expr

The most common case is again where the pattern is a simple variable x, e.g., x = expr. This defines x as a name for the value expr.
A filter is of the form:
  if expr

Here, expr is an expression of type Boolean. The filter drops from the iteration all elements for which expr returns false.
Every for expression starts with a generator. If there are several generators in a for expression, later generators vary more rapidly than earlier ones. You can verify this easily with the following simple test:
scala> for (x <- List(1, 2); y <- List("one", "two")) yield (x, y)
res0: List[(Int, java.lang.String)] = List((1,one), (1,two), (2,one), (2,two))
A particularly suitable application area of for expressions are combinatorial puzzles. An example of such a puzzle is the 8-queens problem: Given a standard chess-board, place eight queens such that no queen is in check from any other (a queen can check another piece if they are on the same column, row, or diagonal). To find a solution to this problem, it's actually simpler to generalize it to chess-boards of arbitrary size. Hence, the problem is to place N queens on a chess-board of N \times N squares, where the size N is arbitrary. We'll start numbering cells at one, so the upper-left cell of an N \times N board has coordinate (1, 1), and the lower-right cell has coordinate (N, N).
To solve the N-queens problem, note that you need to place a queen in each row. So you could place queens in successive rows, each time checking that a newly placed queen is not in check from any other queens that have already been placed. In the course of this search, it might arrive that a queen that needs to be placed in row k would be in check in all fields of that row from queens in row 1 to k-1. In that case, you need to abort that part of the search in order to continue with a different configuration of queens in columns 1 to k-1.
An imperative solution to this problem would place queens one by one, moving them around on the board. But it looks difficult to come up with a scheme that really tries all possibilities.
A more functional approach represents a solution directly, as a value. A solution consists of a list of coordinates, one for each queen placed on the board.
Note, however, that a full solution can not be found in a single step. It needs to be built up gradually, by occupying successive rows with queens.
This suggests a recursive algorithm. Assume you have already generated all solutions of placing k queens on a board of size N \times N, where k is less than N. Each such solution can be presented by a list of length k of coordinates (row, column), where both row and column numbers range from 1 to N. It's convenient to treat these partial solution lists as stacks, where the coordinates of the queen in row k come first in the list, followed by the coordinates of the queen in row k-1, and so on. The bottom of the stack is the coordinate of the queen placed in the first row of the board. All solutions together are represented as a list of lists, with one element for each solution.
Now, to place the next queen in row k+1, generate all possible extensions of each previous solution by one more queen. This yields another list of solution lists, this time of length k+1. Continue the process until you have obtained all solutions of the size of the chess-board N. This algorithmic idea is embodied in function placeQueens below:
def queens(n: Int): List[List[(Int, Int)]] = { def placeQueens(k: Int): List[List[(Int, Int)]] = if (k == 0) List(List()) else for { queens <- placeQueens(k - 1) column <- 1 to n queen = (k, column) if isSafe(queen, queens) } yield queen :: queensThe outer function queens in the program above simply calls placeQueens with the size of the board n as its argument. The task of the function application placeQueens(k) is to generate all partial solutions of length k in a list. Every element of the list is one solution, represented by a list of length k. So placeQueens returns a list of lists.
placeQueens(n) }
If the parameter k to placeQueens is 0, this means that it needs to generate all solutions of placing zero queens on zero rows. There is exactly one such solution: place no queen at all. This is represented as a solution by the empty list. So if k is zero, placeQueens returns List(List()), a list consisting of a single element that is the empty list. Note that this is quite different from the empty list List(). If placeQueens returns List(), this means no solutions, instead of a single solution consisting of no placed queens.
In the other case, where k is not zero, all the work of placeQueens is done in a for expression. The first generator of that for expression iterates through all solutions of placing k - 1 queens on the board. The second generator iterates through all possible columns on which the k'th queen might be placed. The third part of the for expression defines the newly considered queen position to be the pair consisting of row k and each produced column. The fourth part of the for expression is a filter which checks with isSafe whether the new queen is safe from check of all previous queens (the definition of isSafe will be discussed a bit later).
If the new queen is not in check from any other queens, it can form part of a partial solution, so placeQueens generates with queen :: queens a new solution. If the new queen is not safe from check, the filter returns false, so no solution is generated.
The only remaining bit is the isSafe method, which is used to check whether a given queen is in check from any other element in a list of queens. Here is its definition:
def isSafe(queen: (Int, Int), queens: List[(Int, Int)]) = queens forall (q => !inCheck(queen, q))The isSafe method expresses that a queen is safe with respect to some other queens if it is not in check from any other queen. The inCheck method expresses that queens q1 and q2 are mutually in check. It returns true in one of three cases:
def inCheck(q1: (Int, Int), q2: (Int, Int)) = q1._1 == q2._1 || // same row q1._2 == q2._2 || // same column (q1._1 - q2._1).abs == (q1._2 - q2._2).abs // on diagonal
The for notation is essentially equivalent to common operations of database query languages. For instance, say you are given a database named books, represented as a list of books, where Book is defined as follows:
case class Book(title: String, authors: String*)Here is a small example database, represented as an in-memory list:
val books: List[Book] = List( Book( "Structure and Interpretation of Computer Programs", "Abelson, Harold", "Sussman, Gerald J." ), Book( "Principles of Compiler Design", "Aho, Alfred", "Ullman, Jeffrey" ), Book( "Programming in Modula-2", "Wirth, Niklaus" ), Book( "Elements of ML Programming", "Ullman, Jeffrey" ), Book( "The Java Language Specification", "Gosling, James", "Joy, Bill", "Steele, Guy", "Bracha, Gilad" ) )Then, to find the titles of all books whose author's last name is "Gosling":
scala> for (b <- books; a <- b.authors if a startsWith "Gosling") yield b.title res0: List[String] = List(The Java Language Specification)Or, to find the titles of all books that have the string "Program" in their title:
scala> for (b <- books if (b.title indexOf "Program") >= 0) yield b.title res4: List[String] = List(Structure and Interpretation of Computer Programs, Programming in Modula-2, Elements of ML Programming)Or, to find the names of all authors that have written at least two books in the database:
scala> for (b1 <- books; b2 <- books if b1 != b2; a1 <- b1.authors; a2 <- b2.authors if a1 == a2) yield a1 res5: List[String] = List(Ullman, Jeffrey, Ullman, Jeffrey)The last solution is not yet perfect, because authors will appear several times in the list of results. You still need to remove duplicate authors from result lists. This can be achieved with the following function:
scala> def removeDuplicates[A](xs: List[A]): List[A] = { if (xs.isEmpty) xs else xs.head :: removeDuplicates( xs.tail filter (x => x != xs.head) ) } removeDuplicates: [A](List[A])List[A]It's worth noting that the last expression in method removeDuplicates can be equivalently expressed using a for expression:
scala> removeDuplicates(res5) res6: List[java.lang.String] = List(Ullman, Jeffrey)
xs.head :: removeDuplicates( for (x <- xs.tail if x != xs.head) yield x )
Every for expression can be expressed in terms of the three higher-order functions map, flatMap and filter. This section describes the translation scheme, which is also used by the Scala compiler.
First, assume you have a simple for expression:
for (x <- expr_1) yield expr_2where x is a variable. Such an expression is translated to:
expr_1.map(x => expr_2)
Now, consider for expressions that combine a leading generator with some other elements. A for expression of the form:
for (x <- expr_1 if expr_2) yield expr_3is translated to:
for (x <- expr_1 filter (x => expr_2)) yield expr_3This translation gives another for expression that is shorter by one element than the original, because an if element is transformed into an application of filter on the first generator expression. The translation then continues with this second expression, so in the end you obtain:
expr_1 filter (x => expr_2) map (x => expr_3)The same translation scheme also applies if there are further elements following the filter. If seq is an arbitrary sequence of generators, definitions and filters, then:
for (x <- expr_1 if expr_2; seq) yield expr_3is translated to:
for (x <- expr_1 filter expr_2; seq) yield expr_3Then translation continues with the second expression, which is again shorter by one element than the original one.
The next case handles for expressions that start with two filters, as in:
for (x <- expr_1; y <- expr_2; seq) yield expr_3Again, assume that seq is an arbitrary sequence of generators, definitions and filters. In fact, seq might also be empty, and in that case there would not be a semicolon after expr_2. The translation scheme stays the same in each case. The for expression above is translated to an application of flatMap:
expr_1.flatMap(x => for (y <- expr_2; seq) yield expr_3)This time, there is another for expression in the function value passed to flatMap. That for expression (which is again simpler by one element than the original) is in turn translated with the same rules.
The three translation schemes given so far are sufficient to translate all for expressions that contain just generators and filters, and where generators bind only simple variables. Take for instance the query, "find all authors who have published at least two books," from Section 23.3:
for (b1 <- books; b2 <- books if b1 != b2; a1 <- b1.authors; a2 <- b2.authors if a1 == a2) yield a1This query translates to the following map/flatMap/filter combination:
books flatMap (b1 => books filter (b2 => b1 != b2) flatMap (b2 => b1.authors flatMap (a1 => b2.authors filter (a2 => a1 == a2) map (a2 => a1))))The translation scheme presented so far does not yet handle generators that that bind whole patterns instead of simple variables. It also does not yet cover definitions. These two aspects will be explained in the next two sub-sections.
The translation scheme becomes more complicated if the left hand side of generator is a pattern, pat, other than a simple variable. Still relatively easy to handle is the case where the for expression binds a tuple of variables. In that case, almost the same scheme as for single variables applies. A for expression of the form:
for ((x_1, ..., x_n) <- expr_1) yield expr_2translates to:
expr_1.map { case (x_1, ..., x_n) => expr_2 }Things become a bit more involved if the left hand side of the generator is an arbitrary pattern pat instead of a single variable or a tuple. In this case:
for (pat <- expr_1) yield expr_2translates to:
expr_1 filter { case pat => true case _ => false } map { case pat => expr_2 }That is, the generated items are first filtered and only those that match pat are mapped. Therefore, it's guaranteed that a pattern-matching generator will never throw a MatchError
The scheme above only treated the case where the for expression contains a single pattern-matching generator. Analogous rules apply if the for expression contains other generators, filters, or definitions. Because these additional rules don't add much new insight, they are omitted from discussion here. If you are interested, you can look them up in the Scala Language Specification sls.
The last missing situation is where a for expression contains embedded definitions. Here's a typical case:
for (x <- expr_1; y = expr_2; seq) yield expr_3Assume again that seq is a (possibly empty) sequence of generators, definitions, and filters. This expression is translated to the following one:
for ((x, y) <- for (x <- expr_1) yield (x, expr_2); seq) yield expr_3So you see that expr_2 is evaluated each time there is a new x value being generated. This re-evaluation is necessary, because expr_2 might refer to x and so needs to be re-evaluated for changing values of x. For you as a programmer the conclusion is that it's probably not a good idea to have definitions embedded in for expressions that do not refer to variables bound by some preceding generator, because re-evaluating such expressions would be wasteful. For instance, instead of:
for (x <- 1 to 1000; y = expensiveComputationNotInvolvingX) yield x * yit's usually better to write:
val y = expensiveComputationNotInvolvingX for (x <- 1 to 1000) yield x * y
The previous subsections showed how for expressions that contain a yield are translated. What about for loops that simply perform a side effect without returning anything? Their translation is similar, but simpler than for expressions. In principle, wherever the previous translation scheme used a map or a flatMap in the translation, the translation scheme for for loops uses just a foreach. For instance, the expression:
for (x <- expr_1) bodytranslates to:
expr_1 foreach (x => body)A larger example is the expression:
for (x <- expr_1; if expr_2; y <- expr_3) bodyThis expression translates to:
expr_1 filter (x => expr_2) foreach (x => expr_3 foreach (y => body))For example, the following expression sums up all elements of a matrix represented as a list of lists:
var sum = 0 for (xs <- xss; x <- xs) sum += xThis loop is translated into two nested foreach applications:
var sum = 0 xss foreach (xs => xs foreach (x => sum += x))
The previous section showed that for expressions can be translated into applications of the higher-order functions map, flatMap, and filter. In fact, you could equally well go the other way: every application of a map, flatMap, or filter can be represented as a for expression. Here are implementations of the three methods in terms of for expressions. The methods are contained in an object Demo, to distinguish them from the standard operations on Lists. To be concrete, the three functions all take a List as parameter, but the translation scheme would work just as well with other collection types:
object Demo { def map[A, B](xs: List[A], f: A => B): List[B] = for (x <- xs) yield f(x)Not surprisingly, the translation of the for expression used in the body of Demo.map will produce a call to map in class List. Similarly, Demo.flatMap and Demo.filter translate to flatMap and filter in class List.
def flatMap[A, B](xs: List[A], f: A => List[B]): List[B] = for (x <- xs; y <- f(x)) yield y
def filter[A](xs: List[A], p: A => Boolean): List[A] = for (x <- xs if p(x)) yield x }
So this little demonstration has shown that for expressions really are equivalent in their expressiveness to applications of the three functions map, flatMap, and filter.
Because the translation of for expressions only relies on the presence of methods map, flatMap, and filter, it is possible to apply the for notation to a large class of data types.
You have already seen for expressions over lists and arrays. These are supported because lists, as well as arrays, define operations map, flatMap, and filter. Because they define a foreach method as well, for loops over these data types are also possible.
Besides lists and arrays, there are also many other types in the Scala standard library that support the same four methods and therefore allow for expressions. Examples are ranges, iterators, streams, and all implementations of sets. It's also perfectly possible for your own data types to support for expressions by defining the necessary methods. To support the full range of for expressions and for loops, you need to define map, flatMap, filter, and foreach as methods of your data type. But it's also possible to define a subset of these methods, and thereby support a subset of all possible for expressions or loops. Here are the precise rules:
Nevertheless, there is a typical setup that captures the most common intention of the higher order methods to which for expressions translate. Say you have a parameterized class, C, which typically would stand for some sort of collection. Then it's quite natural to pick the following type signatures for map, flatMap, filter, and foreach:
abstract class C[A] { def map[B](f: A => B): C[B] def flatMap[B](f: A => C[B]): C[B] def filter(p: A => Boolean): C[A] def foreach(b: A => Unit): Unit }That is, the map function takes a function from the collection's element type A to some other type B. It produces a new collection of the same kind C, but with B as the element type. The flatMap method takes a function f from A to some C-collection of Bs and produces a C-collection of Bs. The filter method takes a predicate function from the collection's element type A to Boolean. It produces a collection of the same type as the one on which it is invoked. Finally, the foreach method takes a function from A to Unit, and produces a Unit result.
Concentrating on just the first three functions, the following facts are noteworthy. In functional programming, there's a general concept called a monad, which can explain a large number of types with computations, ranging from collections, to computations with state and I/O, backtracking computations, and transactions, to name but a few. You can formulate functions map, flatMap, and filter on a monad, and, if you do, they end up having exactly the types given above. Furthermore, you can characterize every monad by map, flatMap, and filter, plus a "unit" constructor that produces a monad from an element value. In an object-oriented language, this "unit" constructor is simply an instance constructor or a factory method. Therefore, map, flatMap and filter can be seen as an object-oriented version of the functional concept of monad. Because for expressions are equivalent to applications of these three methods, they can be seen as syntax for monads.
All this suggests that the concept of for expression is something more general than just iteration over a collection, and indeed it is. For instance, for expressions also play an important role in asynchronous I/O, or as an alternative notation for optional values. Watch out in the Scala libraries for occurrences of map, flatMap, and filter—wherever they are present, for expressions suggest themselves as a concise way of manipulating elements of the type.
In this chapter, you were given a peek under the hood of for expressions and for loops. You learned that they translate into applications of a standard set of higher-order methods. As a consequence of this, you saw that for expressions are really much more general than mere iterations over collections, and that you can design your own classes to support them. | http://www.artima.com/pins1ed/for-expressions-revisitedP.html | CC-MAIN-2016-26 | refinedweb | 3,901 | 60.24 |
Using HyperFiles to save data internally
On 08/01/2013 at 12:29, xxxxxxxx wrote:
User Information:
Cinema 4D Version: 13
Platform: Windows ;
Language(s) : C++ ;
---------
It's become painfully obvious to me over the past few days that I simply don't understand how HyperFile works regarding to saving data to a scene file.
There's plenty of examples in the forums about using HF like a BaseFile. And saving things to HD files. But not for using HF for saving data into the scene itself.
There is only one example in the SDK that uses HF like this. The RoundedTube example.
However...It doesn't seem to actually use the HF data ("test") it reads and writes. So that's teaches me nothing.
Trying to learn how HF works. I wrote this very basic tag plugin example that overrides the Read() & Write() methods to save and read a variable called "text" into a HF. But I have no idea what to do next.
#include "c4d.h" #include "c4d_symbols.h" #include "tsimpletag.h" // be sure to use a unique ID obtained from #define PLUGIN_ID 1000010 class SimpleTag : public TagData { INSTANCEOF(SimpleTag,TagData) public: String text; virtual Bool Read(GeListNode *node, HyperFile *hf, LONG level); virtual Bool Write(GeListNode *node, HyperFile *hf); virtual Bool Message(GeListNode *node, LONG type, void *t_data); virtual Bool Init(GeListNode *node); virtual EXECUTIONRESULT Execute(BaseTag *tag, BaseDocument *doc, BaseObject *op, BaseThread *bt, LONG priority, EXECUTIONFLAGS flags); static NodeData *Alloc(void) { return gNew SimpleTag; } }; Bool SimpleTag::Read(GeListNode *node, HyperFile *hf, LONG level) { return hf->ReadString(&text); } Bool SimpleTag::Write(GeListNode *node, HyperFile *hf) { return hf->WriteString(text); } Bool SimpleTag::Message(GeListNode *node, LONG type, void *data) { BaseTag *tag = (BaseTag* )node; //Get the tag and assign it to a variable BaseContainer *bc = ((BaseList2D* )node)->GetDataInstance(); //Get the container for the tag switch (type) { case MSG_DESCRIPTION_COMMAND: { DescriptionCommand *dc = (DescriptionCommand* )data; // data contains the description ID of the button LONG button = dc->id[0].id; // Get the ID of the button switch (button) { case BUTTON1: GePrint("Button1 was pushed"); break; } } } tag->SetDirty(DIRTYFLAGS_DATA); //Used to update a Tag's AM GUI items return TRUE; } Bool SimpleTag::Init(GeListNode *node) { text = String("Hello"); return TRUE; } EXECUTIONRESULT SimpleTag::Execute(BaseTag *tag, BaseDocument *doc, BaseObject *op, BaseThread *bt, LONG priority, EXECUTIONFLAGS flags) { BaseContainer *bc = tag->GetDataInstance(); return EXECUTIONRESULT_OK; } Bool RegisterSimpleTag(void) { String path = GeLoadString(IDS_SIMPLETAG); if (!path.Content()) return TRUE; return RegisterTagPlugin(PLUGIN_ID,path,TAG_EXPRESSION|TAG_VISIBLE,SimpleTag::Alloc,"tsimpletag",AutoBitmap("myicon.tif"),0); }
I don't know how to access and use the string data that I've just written into the HF.
If I have written this correctly. I should have a string with the value of "Hello" somewhere saved in the default HF of the scene right?
When I look in the tag with: BaseContainer *tagdata = tag->GetDataInstance();
When I look in the doc with: BaseContainer *docdata = doc->GetDataInstance();
I can't find it. And since I did not allocate a HF, I don't know where to find the HF this data has supposedly been saved to.
I'm completely lost how we're supposed to use HF in this manner.
-ScottA
On 08/01/2013 at 15:34, xxxxxxxx wrote:
Hi,
a plugin object has two locations where to store data. One is in the BaseContainer which you can access via tag->GetDataInstance(). All data you put in there will automatically be saved by C4D and if you open your scene your object has the saved data.
If you have data that does not fit in the BaseContainer or because of some other circumstances you can store your data in a member of your plugin object (in your example its 'text').
When you create an instance of SimpleTag then its initialized with "Hello". So far so good. Imagine you initialize your member with "Hello" and at a certain point you change that string to "World!". When the tag is saved to disk the word "World!" is saved to disk, and not "Hello". Therefore when you load the scene document again your plugin object reads the string again from the HyperFile and you can access the string with this->text in SimpleTag::Execute with the content "World!".
Hope that helped a little bit.
Cheers, Seb
On 08/01/2013 at 16:22, xxxxxxxx wrote:
I think I get it.
If I change the value of the class member variable after the tag has been added. By doing something like this: text = "World" inside of my button code. And then save the file.
When I open the saved file. The variable will still have a value of "World".
So even though the tag's Init() method has code in it telling the text class variable to be "Hello".
The HyperFile methods Read()&Write() will basically take control of telling my class variable what value it should be?
Now I just have see if I can make it work with a BaseBitmap instead of a String variable.
Thanks Sebastien | https://plugincafe.maxon.net/topic/6853/7644_using-hyperfiles-to-save-data-internally | CC-MAIN-2020-40 | refinedweb | 829 | 53.31 |
Application startup performance matters to users, and there's plenty you can do to improve it. Here's a look at where to begin.
MSDN Magazine March 2008
Learn about enhanced TimeSpan formatting and parsing features coming in the .NET Framework 4, and some helpful tips for working with TimeSpan values.
Ron Petrusha
MSDN Magazine February 2010
Jack Gudenkauf and Jesse Kaplan
MSDN Magazine March 2007
This month the CLR team introduces the new System.AddIn namespace in the Base Class Library, which will be available in the next release of Visual Studio.
MSDN Magazine February 2007.
i am creating a grid view at run time.now i want to create controls inside the gridview at run time itself,and how to Bind/Eval it.
plz help me
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/2393-clr-inside-out-improving-application-startup.aspx | CC-MAIN-2018-13 | refinedweb | 147 | 65.62 |
public class StreamingQueryProgress extends Object implements scala.Serializable
Information about progress made in the execution of a StreamingQuery during a trigger. Each event relates to processing done for a single trigger of the streaming query. Events are emitted even when no new data is available to be processed.
param: id A unique query id that persists across restarts. See
StreamingQuery.id().
param: runId A query id that is unique for every start/restart. See
StreamingQuery.runId().
param: name User-specified name of the query, null if not specified.
param: timestamp Beginning time of the trigger in ISO8601 format, i.e. UTC timestamps.
param: batchId A unique id for the current batch of data being processed. Note that in the
case of retries after a failure a given batchId may be executed more than once.
Similarly, when there is no data to be processed, the batchId will not be
incremented.
param: eventTime Statistics of event time seen in this batch. It may contain the following keys:
"max" -> "2016-12-05T20:54:20.827Z" // maximum event time seen in this trigger "min" -> "2016-12-05T20:54:20.827Z" // minimum event time seen in this trigger "avg" -> "2016-12-05T20:54:20.827Z" // average event time seen in this trigger "watermark" -> "2016-12-05T20:54:20.827Z" // watermark used in this trigger | https://spark.apache.org/docs/2.2.1/api/java/org/apache/spark/sql/streaming/StreamingQueryProgress.html | CC-MAIN-2022-27 | refinedweb | 193 | 68.67 |
Writing DTYPE_FILENAME keys not working?
On 05/08/2016 at 09:28, xxxxxxxx wrote:
I'm trying to write a texture path key into a simple bitmap shader. Setting the value directly works fine and without problems, but writing an actual key fails. The key is being generated and also written onto the curve and into the track, but it holds no value. However, calling GetGeData() (not just in the print statement in the code, but also when I manually call it via the Console after creation) on that apparently empty key _does_ show the actual value as expected. It just never makes it into the "live" system.
The key always shows up as Orange and not as Red, so apparently something is amiss - I just can't figure out what.
The track is defined as such:
desc = c4d.DescID(c4d.DescLevel(c4d.BITMAPSHADER_FILENAME, c4d.DTYPE_FILENAME, c4d.Xbitmap))
def setKeyValueFileName(self, op, desc, fileName):
    if not os.path.exists(fileName):
        print "Filename does not exist"
        return

    doc = op.GetDocument()
    track = op.FindCTrack(desc)
    if not track:
        track = c4d.CTrack(op, desc)
        op.InsertTrackSorted(track)

    curve = track.GetCurve()
    key = c4d.CKey()
    key.SetTime(curve, doc.GetTime())
    key.SetGeData(curve, fileName)
    print key.GetGeData()
    curve.InsertKey(key)
Any ideas??
Cheers
Michael
On 05/08/2016 at 10:20, xxxxxxxx wrote:
Hi,
I did not check your code for errors. But I have encountered the yellow dot problem.
The way I've solved it is to use the DrawViews() message after adding the new keys.
#Update the viewport changes (makes the animation dot for the attribute red)
#If you don't use this...the changes will not be set and the dot will turn yellow
c4d.DrawViews(c4d.DRAWFLAGS_FORCEFULLREDRAW)
-ScottA
On 05/08/2016 at 12:47, xxxxxxxx wrote:
Cool, I'll give that a try, cheers!
On 08/08/2016 at 01:11, xxxxxxxx wrote:
No, unfortunately that didn't do it either. Keys stay blank, even though the data is there internally.
On 08/08/2016 at 01:59, xxxxxxxx wrote:
So I assume this has something to do with the data being written into the key not being accepted as valid? But from what I gather from the docs, DTYPE_FILENAME is expecting a str. And that's what it gets. For example if I write this into the key:
fileName = r'Z:\PROJECTS\06_PRODUCTION_3D\MA ex\DefaultMaterial_Normal.jpg'
and read that key back out through curve.FindKey(), it reads exactly that value. Yet the key still shows up orange.
If instead I manually set that key in the GUI with that path, FindKey() shows me the exact same value, but of course the key works now (red).
I'm confused.
On 08/08/2016 at 03:13, xxxxxxxx wrote:
Edit: never mind. Thought it worked now, but still did not. :(
On 08/08/2016 at 05:44, xxxxxxxx wrote:
Hi,
the issue is caused by the difference between the String and Filename types, or rather by the absence of a Filename type in Python. This has to be fixed in our Python module; I've notified the developer.
Unfortunately there's no workaround.
On 08/08/2016 at 05:45, xxxxxxxx wrote:
I've feared as much :( | https://plugincafe.maxon.net/topic/9641/12947_writing-dtypefilename-keys-not-working | CC-MAIN-2020-34 | refinedweb | 534 | 67.96 |
09 July 2007 15:32 [Source: ICIS news]
LONDON (ICIS news)--Crude prices continued to rise on Monday to take Brent crude on London’s ICE Futures back above $76.00/bbl to a new 11-month high after Nigerian militants vowed to continue their attacks against the country’s oil infrastructure.
Earlier in the day, reports that the kidnapped British three-year-old girl had been released had seen prices ease a little but this proved short-lived.
By 14:15 GMT, August Brent crude had hit a high of $76.24/bbl, a gain of 62 cents over Friday’s close of $75.62, before easing back to about $76.05.
At the same time, August NYMEX crude was trading at about $72.80/bbl, having hit a high of $73.00, a gain of 19 cents over the previous close and the highest level for the front month contract since late Aug | http://www.icis.com/Articles/2007/07/09/9043611/brent-hits-11-month-high-due-to-nigeria-concerns.html | CC-MAIN-2015-11 | refinedweb | 157 | 81.43 |
Eh, can anyone help me with how to code the C++ AI tic tac toe? I'm completely stuck.
This is a discussion on How to code in c++ programming for tic tac toe (player vs computer) AI using arrays within the C++ Programming forums, part of the General Programming Boards category.
This is my code. I completely stuck.

Code:
#include <iostream>
using namespace std;

int main()
{
    char square[3][3] = {{' ',' ',' '},{' ',' ',' '},{' ',' ',' '}};
    bool validrow, validcol, validmove;
    int row, col;
    char winner = ' ';
    char turn = 'X';
    cout << "Welcome to tic tac toe game" << endl;
    while (winner == ' ')
    {
        cout << " 1 2 3" << endl;
        cout << " ---------------" << endl;
        for(int i = 1; i < 4; i++)
        {
            cout << " " << i << " | " << square[i-1][0] << " | " << square[i-1][1]
                 << " | " << square[i-1][2] << " |" << endl << " ---------------" << endl;
        }
        validrow = false;
        validmove = false;
        validcol = false;
        while(!validmove)
        {
            validrow = false;
            //Loop until the player selects a valid row
            while(!validrow)
            {
                cout << "Row: ";
                cin >> row;
                if (row == 1 || row == 2 || row == 3)
                {
                    validrow = true;
                }
                else
                {
                    cout << endl << "Invalid row!" << endl;
                }
            }
            //Loop until the player selects a valid column
            while(!validcol)
            {
                cout << "Column: ";
                cin >> col;
                if(col == 1 || col == 2 || col == 3)
                {
                    validcol = true;
                }
                else
                {
                    cout << endl << "Invalid column!" << endl;
                }
            }
            validmove = true;
            bool validturn;
            validturn = false;
            while (!validturn)
            {
                if (square[row-1][col-1] == ' ')
                {
                    square[row-1][col-1] = turn;
                    if (turn == 'X')
                    {
                        turn = 'O';
                    }
                    else
                    {
                        turn = 'X';
                    }
                    if (row == 1 || row == 3 || col == 1 || col == 3 )
                    {
                        square[1][1] = turn;
                        cout << "Your next move" << endl;
                        if (row == 1 || col == 1)
                        {
                            square[2][2] = turn;
                            cout << "Your next move" << endl;
                            validturn = true;
                        }
                        validturn = true;
                    }
                    else if (row == 2 || col == 2)
                    {
                        square[2][1] = turn;
                        cout << "Your next move" << endl;
                        validturn = true;
                    }
                    else
                    {
                        cout << "The selected square is occupied!" << endl;
                        cout << "Select again:" << endl;
                    }
                }
            }
        }
    }
    system ("pause");
    return 0;
}
Can anyone give a reply on how to start the tic tac toe game first? Any steps before I try coding? I need help badly. Thank you.
I am not sure what you are asking, but as I see it, there is no AI implemented at all yet?
- Have you tried meta-planning?
- How does the computer win the game? 3 in-a-row, of course, so check if it can do that first, and make the computer do it.
- What is the first step to getting to 3 in-a-row? 2 in-a-row. Check to see if the computer can get two in a row, with the possibility to make it three in the next turn (no sense in going for two in a row where the third square is already occupied by the enemy!).
- I'd also say, if the center wasn't occupied, go for that one first.
A little backwards, but I'm sure you can make sense of it
As my tutor said, there are no big problems in programming... only a lot of smaller ones! Break the big problems into small chunks and it will be soo much easier.
PS. I'm sorry I can't give you any examples, I'm new at this too
You need a checkWin function or piece of code before you start making any AI.
This topic has been covered several times before on this forum if you care to do a search.
How do I start off? Any method on how I should start first?
Player against computer:
The current state of the board should be displayed at all times. The program should be able to detect a winning combination and stop the game at
that point. (This is my school question)
I just need a simple tic tac toe AI using only arrays, control selection (if/else) and loops, and no graphics.
Google Apps Email Migration API.
Note: This API is only available to Google Apps Premier, Education, and Partner Edition domains, and cannot be used for migration into Google Apps Standard Edition email or Gmail accounts.
This document is intended for programmers who want to write client applications that can migrate email into Google Apps mailboxes.
It's a reference document; it assumes that you understand the concepts presented in the developer's guide, and the general ideas behind the Google data APIs protocol.
The Email Migration API defines only one type of feed: the mail item feed. In order to access this feed, your client must first authenticate to your Google Apps domain using ClientLogin (Authentication for Installed Apps).
The mail item feed is used to insert mail messages into hosted Gmail accounts associated with a Google Apps domain. Its feed URL is:
where yourDomain.com is your Google Apps domain name, and username is the username that will own the message after the migration. The username is only a username, not a full email address; for example, if you're migrating messages to be owned by liz@yourDomain.com, the username to use is liz. The Content-Type of the POST request must be application/atom+xml or the server will reply with a 415 Unsupported Media Type status code.
The above feed only allows you to insert messages one at a time. In other words, you must make one HTTP request for each mail message you wish to insert. It is recommended instead that you access the batch mail item feed, which allows you to insert many messages in a single HTTP request. The batch feed has the URL:
Both of these feeds are write-only; that is, the only request method they support is HTTP POST.
Note: Only domain administrators can migrate mail to accounts other than their own (by specifying a username other than their own to be used in the above URLs). When an end user is migrating mail, the username in the above URLs must be the same as the currently authenticated username.
In addition to the standard Google data API elements, the Email Migration API uses the following elements.
For information about the standard data API elements, see the Atom specification and the Common Elements document.
A Gmail label to be applied to an inserted mail message.
<apps:label xmlns:apps="..." labelName="..." />
namespace apps = ""
start = label
label = element apps:label {
    attribute labelName { xsd:string }
}
A special Gmail property to be applied to an inserted mail message.
<apps:mailItemProperty xmlns:apps="..." value="IS_INBOX" />
namespace apps = ""
start = mailItemProperty
mailItemProperty = element apps:mailItemProperty {
    attribute value { "IS_DRAFT" | "IS_INBOX" | "IS_SENT" | "IS_STARRED" |
                      "IS_TRASH" | "IS_UNREAD" }
}
The RFC 822 content of the mail message to be migrated.
<apps:rfc822Msg xmlns:apps="..." encoding="none">...</apps:rfc822Msg>
namespace apps = ""
start = rfc822Msg
rfc822Msg = element apps:rfc822Msg {
    attribute encoding { "base64" | "none" }?,
    xsd:string
}
- NAME
- SYNOPSIS
- VERSION 5.66 (Released with 6.51)
- VERSION 5.65 (Released with 6.37)
- VERSION 5.64 (Released with 6.32)
- VERSION 5.63 (Released with 6.26)
- VERSION 5.62 (Released with 6.21)
- VERSION 5.61 (Released with 6.20)
- VERSION 5.60 (Released with 6.14)
- VERSION 5.59 (Released with 6.12)
- VERSION 5.58 (Released with 6.11)
- VERSION 5.57 (Released with 6.10)
- VERSION 5.56 (2010-02-24)
- VERSION 5.55 (2010-02-22)
- VERSION 5.54 (2008-05-09)
- VERSION 5.53 (DEVELOPMENT)
- VERSION 5.52 (2008-05-08)
- VERSION 5.51 (DEVELOPMENT)
- VERSION 5.50 (2008-05-05)
- VERSION 5.49 (DEVELOPMENT)
- VERSION 5.48 (2007-11-27)
- VERSION 5.47 (DEVELOPMENT)
- VERSION 5.46 (2007-02-21)
- VERSION 5.45 (DEVELOPMENT)
- VERSION 5.44 (2005-06-02)
- VERSION 5.43 (DEVELOPMENT)
- VERSION 5.42a 2003-07-03
- VERSION 5.42 (2003-07-02)
- VERSION 5.41 (DEVELOPMENT)
- VERSION 5.40 (2001-06-07)
- VERSION 5.39 (2000-06-27)
- VERSION 5.38 (2000-05-23)
- VERSION 5.37 (2000-02-14)
- VERSION 5.36 (2000-01-21)
- VERSION 5.35 (1999-07-06)
- VERSION 5.34 (1999-04-13)
- VERSION 5.33 (1998-08-20)
- VERSION 5.32 (1998-08-17)
- VERSION 5.31 (1998-04-08)
- VERSION 5.30 (1998-01-21)
- VERSION 5.21 (1998-01-15)
- VERSION 5.20 (1997-10-12)
- VERSION 5.11 (1997-08-07)
- VERSION 5.10 (1997-03-19)
- VERSION 5.09 (1997-01-28)
- VERSION 5.08 (1997-01-24)
- VERSION 5.07p2 1997-01-03
- VERSION 5.07 (1996-12-10)
- VERSION 5.06 (1996-10-25)
- VERSION 5.05 (1996-10-11)
- VERSION 5.04 (1996-08-01)
- VERSION 5.03 (1996-07-17)
- VERSION 5.02 (1996-07-15)
- VERSION 5.01 (1996-06-24)
- VERSION 5.00 (1996-06-21)
- VERSION 4.3 (1995-10-26)
- VERSION 4.2 (1995-10-23)
- VERSION 4.1 (1995-10-18)
- VERSION 4.0 (1995-08-13)
- VERSION 3.0 (1995-05-03)
- VERSION 2.0 (1995-04-17)
- VERSION 1.2 (1995-03-31)
- VERSION 1.1 (1995-02-08)
- VERSION 1.0 (1995-01-20)
- BUGS AND QUESTIONS
- SEE ALSO
- LICENSE
- AUTHOR
NAME
Date::Manip::Changes5 - changes in Date::Manip 5.xx
SYNOPSIS
This describes the changes made to the Date::Manip module up to the time that 6.00 was released. Because 6.00 required a newer version of perl (5.10 or higher), the old version was maintained.
When Date::Manip 6.10 was released, both versions were bundled together (though how it was bundled changed when 6.14 was released).
This document describes all changes made to the old version of Date::Manip.
For the most part, Date::Manip has remained backward compatible at every release, but occasionally, a change is made which is backward incompatible. These are marked with an exclamation mark (!).
VERSION 5.66 (Released with 6.51)
- Fixed a bug in Date_ConvTZ
Applied a patch supplied by Zhenyi Zhou which fixes a bug in Date_ConvTZ where passing in an empty string did not work.
VERSION 5.65 (Released with 6.37)
As of December 2012, Version 5 of Date::Manip will no longer be modified. The 5.xx version was updated to 5.65 (but no changes were made), and this version is expected to be the final release in the 5.xx series.
The one exception is that if someone submits a patch that applies cleanly and causes zero failures in the test suite, I will consider adding it on a case-by-case basis.
Please use Date::Manip 6.xx instead.
VERSION 5.64 (Released with 6.32)
- Better handling of '0000' timezone
Applied a patch supplied by Ed Avis that improves handling of the '0000' timezone.
VERSION 5.63 (Released with 6.26)
- Fixed business mode calculation
Applied a patch that I received some time ago to fix a business mode calculation. Steve Tempest
VERSION 5.62 (Released with 6.21)
No changes
VERSION 5.61 (Released with 6.20)
No changes
VERSION 5.60 (Released with 6.14)
- Fully integrated with 6.xx
As of Date::Manip 6.14, the 5.xx release is fully integrated into the distribution. Both will be installed automatically and you can switch between them (if you have a recent version of perl). This simplifies the package management process considerably. The downside is that Date::Manip 6.xx will be installed, even if you do not have a recent version of perl and cannot use it.
VERSION 5.59 (Released with 6.12)
VERSION 5.58 (Released with 6.11)
- Test fixes
Fixed a bug in some of the tests that were causing two tests to fail. JD
Explicitly set TZ in all tests to avoid some failures (it got left out of a few when it was bundled with 6.10).
VERSION 5.57 (Released with 6.10)
- (*) Combined 5.xx and 6.xx
As of 6.10, Date-Manip-6.xx will contain both the Date::Manip 5.xx and 6.xx modules. If perl 5.10 or higher is available, the 6.xx version will be installed. For older versions of perl, the 5.xx version will be installed.
This will allow all of the automatic module tools to work correctly.
- Bug fixes
Fixed a bug where years earlier than 1000 AD failed in calculations. John
- Time zone fixes
Improved time zone detection. Stepan Kasal
- Documentation fixes
Minor improvements. Josef Kreulich
VERSION 5.56 (2010-02-24)
- Bug fixes
Date_PrevWorkDay and documentation fix. RT #17005
I accidentally included a require 5.10 which made Date::Manip not work with earlier versions of perl. Nicholas Riley
VERSION 5.55 (2010-02-22)
- (*) Added time zone abbreviations
Date::Manip 5.xx now includes all of the time zone abbreviations from version 6.xx (i.e. all of the abbreviations from the Olsen database).
- Documentation fixes
Typo fix. ddascalescu
VERSION 5.54 (2008-05-09)
- Released
-
VERSION 5.53 (DEVELOPMENT)
- Bug fixes
Fix so it won't fail with "Too early to specify a build action"
- CPANTS changes
Final changes to meet requirements on
VERSION 5.52 (2008-05-08)
- Released
-
VERSION 5.51 (DEVELOPMENT)
- Bug fixes
Fixed bug where the wrong version was in Build.PL
- CPANTS changes
Additional changes to meet requirements on
VERSION 5.50 (2008-05-05)
- Released
-
VERSION 5.49 (DEVELOPMENT)
- New features
Added "ereyesterday". Ed Avis
- Time zone fixes
Added time zones. Damyan Ivanov, Ernesto Hernandez-Novich, Gregor Herrmann, Nicholas Riley, Enrique Verdes, Alexander Litvinov
- Documentation fixes
Corrected typo in %G and %L format descriptions. Troy A. Bollinger
- CPANTS changes
Added Build.PL and several other things to meet requirements on
VERSION 5.48 (2007-11-27)
- Released
-
VERSION 5.47 (DEVELOPMENT)
- Bug fixes
Fixed the version number. John R. Daily
Fixed a warning when the date command not present. Daniel Hahler
Fixed a bug where recurrences of the form 0:1*, 0:0:1*, etc., incorrectly required a base date. Gerry Lawrence
Fixed a bug where "substring" was used instead of "substr".
- Time zone fixes
Fixed a problem in the WEST time zone. Cristina Nunes
Added time zone. Kimmo R. M. Hovi
- Documentation fixes
Revised some of the documentation about Y2K (given that it's in the past) and the 2007 US daylight saving time rule changes.
VERSION 5.46 (2007-02-21)
- Released
-
VERSION 5.45 (DEVELOPMENT)
- New features
Added "overmorrow". Ed Avis
- Bug fixes
Fixed bug in parsing ISO 8601 dates. Paul Schinder
Fixed a bug in UnixDate for years before 1000 AD. Joaquin Ferrero
Fixed a bug where "today" wasn't case insensitive. Pedro Rodrigues
Fixed a bug where business/approximate mode wasn't correctly used in DateCalc. Mark T. Kennedy
Bug in DateCalc where you couldn't pass undef as the errref. Alex Howansky
Bug where cygwin wasn't using the date command. Rafael Kitover
- Time zone fixes
New time zones. Khairil Yusof, Andy Spiegel, Ernesto Rapetti
New time zones. Robin Norwood
Fixed Russian time zones. Yuri Kovalenko
- Language fixes
Language fix for Danish. Claus Rasmussen
Language fix for German. Andreas Dembach
- Documentation fixes
Minor documentation improvement. Caminati Carlo
Lots of spelling fixes. Asaf Bartov
VERSION 5.44 (2005-06-02)
- Released
-
VERSION 5.43 (DEVELOPMENT)
- (!) (*).
Y-0-WOY-DOW now refers to the WOY'th occurrence of DOW instead of the ISO 8601 date Y-W(WOY)-DOY. Also, changed Y-0-WOY-0 to refer to the WOY'th occurrence of FirstDay, and got rid of the MW and MD flags. Many other similar changes.
- (!) Changed %x format in UnixDate
The %x format used to be equivalent to %D (%m/%d/%y), but it has been modified to use the DateFormat config variable, so it may return %d/%m/%y if a non-US DateFormat is specified.
- New features
Added TodayIsMidnight. Reuben Thomas
Added "approx" mode to Delta_Format and reversed change to default Delta_Format behavior to the one from version 5.40. Based on discussion with Adam Spiers.
Added %O UnixDate format. Martin Thurn
- Bug fixes
Fixed a bug is ParseRecur where values passed in were no overriding old values in the recurrence. Scott Barker (reported to the Debian bugs list).
Fix for a potential problem in the "0000" time zone. Ed Avis
Changed taint check to be the one in perlsec(1). Max Kalika
Minor fix so DateInit("VAR=") will work. Thomas Bohme
Fixed a bug where business mode was kept operative even after the calculation was over. Emiliano Bruni
Minor change to run under cygwin. Niel Markwick
Minor VMS fix. Martin P.J. Zinser
Small fix to taint checking. David Worenklein
Fixed a problem where deltas were getting misinterpreted as dates. Harry Zhu
Fixed a bug in ParseRecur where "last day of every March" couldn't be done. Andras Karacsony
Fixed a bug in business mode calculations. Tracy L Sanders
Sorted all events and dates returned by Events_List. This fixes problems with tests on some versions of perl. Tulan
Modified %x UnixDate format to use DateFormat config variable. Matt Lyons
- Time zone fixes
Fixed a problem with single character military time zones (T and W) conflicting with ISO 8601 T and W dates. Hugo Cornelis
Small correction to Brazil time zones. John McDonald
Added time zones. Michael Wood-Vasey, Don Robertson, Michael D. Setzer II, Andres Tarallo
- Language fixes
Fixed German translations. Oliver Scheit
Minor corrections to Italian. Nicola Pedrozzi
Added the language Catalan. Xavi Drudis
- Documentation fixes
Minor doc fixes Reuben Thomas, Ed Avis, Thomas Winzig
Clarified documentation on %W/%G/%U/%L formats. Joel Savignon
VERSION 5.42a 2003-07-03
VERSION 5.42 (2003-07-02)
- Released
Number changed to distinguish between the development release (5.41) and the official release.
VERSION 5.41 (DEVELOPMENT)
As of 5.41, odd numbered releases are development (and appear only on my page). Even number releases are official releases submitted to CPAN.
- (!)). Due to discussion with Tim Turner.
- Bug fixes
Small patch for OpenVMS. Martin P.J. Zinser
Minor enhancement to ParseRecur. Randy Harmon
Fixed a bug involving business deltas with negative hours. Ludovic Dubost
Added some support for NetWare. Chris Shields
Applied some robustness patches. Ed Avis
Fixed a bug with years <1000. Jonathan Callahan
Patch to make Manip.pm -Mstrict clean and better VMS support. Peter Prymmer
Fixed a bug in "1st Saturday of 2005" format. Maurice Aubrey
Taint check insecure $ENV{PATH} fix. Ed Avis
Patch to allow deltas of the form "+ -1 day" to work. Ed Avis
Removed ampersands from function calls in documentation to fit new perl coding standards. Bill Pollock
Fixed a bug where spaces in a date caused problems in German (due to the number 1st, 2nd, etc. being 1., 2., etc.). Erik Roderwald
- Time zone fixes
Minor bug fix where /etc/timezone was not correctly read. Jacek Nowacki
Made the UnixDate %Z format work with numeric time zones. Michael Isard
Fixed bug where -HH:MM and +HH:MM were not being accepted as valid time zones. Hank Barta
Fixed a bug where time zones -HH:MM weren't handled in ISO 8601 dates. Ed Avis
Added some help for VMS time zones. Don Slutz
Added some checking to the time zone determination. Ed Avis
Added time zones. David Coppit, Daniel Serodio, Fabian Mandelbaum, Raul Dias, Pedro Melo CUNHA, Roman Y Malakhov, David Whiting, Khaled Mostaguir, Jason King
- Language fixes
(*) Replaced all non-ASCII characters with hex representations to avoid the malformed UTF-8 character warnings. Ed Avis
Added Russian translation. Dapi
Additions to Dutch translation. Willem
Patch to French support. Patrick Turmel
Added Tues/Thur abbreviations. Martin Thurn
Added Turkish. Giray
Added Danish. Jesper Dalberg A patch for Danish was send by Jorgen Norgaard previously, and I somehow overlooked it. I apologize for that.
- Test fixes
Added runtests.bat contributed by Lon Amick
- Documentation fixes
Minor doc fix. Jeremy Tietsort
Fixed spelling of Veteran's day. Dirk Eddelbuettel
Documentation improvements. James Olsen
VERSION 5.40 (2001-06-07)
- New features
Added support for negative values is "epoch SECS" type dates. Larry Warner
Added NWD/PWD/DWD flags to ParseRecur. Peter Wyngaard
- Bug fixes
Fixed a warning. Edward Avis
Fixed a bug where the date wasn't rolling over when parsing dates containing only times. James L. Gordon
Fixed a bug where some times were defaulting to the current time instead of 00:00:00. Edward Avis
Fixed a bug in Date_NthDayOfYear with decimal days. Olga Polyakov
Fixed a bug where ParseDateDelta returned a delta if nothing was passed in. Jim Hranicky
Fixed a bug where noon was case sensitive. Bion Pohl
Fixed a bug where dateTtime wasn't parsed. Jeremy Brinkley
Fixed a bug in holiday parsing involving recurrences. Jerry Wilcox
Fixed a bug where an invalid date passed to Date_IsWorkDay produced an error message. Mark Rejhon
Fixed a bug where EraseHolidays wasn't taking affect correctly. Chateauvieux Martial
Fixed a bug where the list produced by Date_Init couldn't be passed back in to Date_Init. James Elson
- Time zone fixes
Added `date +%Z` support in Date_TimeZone. Mike Bristow
Fixed a warning if the time zone is supplied as a +HHMM format. Viola Mauro
Fixed South African time zone. David Sieborger
- Documentation fixes
Added an example. Philip Jones
VERSION 5.39 (2000-06-27)
- Bug fixes
`date` uses the user's path unless taint checking is on.
@::DatePath used instead of @Date::Manip::DatePath incorrectly. Fixed by John Labovitz.
Fixed a bug where times such as "5 seconds ago" were not changing over time. Matthew R. Sheahan
- Time zone fixes
Added /etc/timezone support to &Date_TimeZone. Dirk Eddelbuettel
Added time zones. Dirk Eddelbuettel, Eli Selinger
VERSION 5.38 (2000-05-23)
- (*) Added Events
Added Events section to config file and Events_List routine. Prompted by Greg Schiedler and paid for by Alan Cezar.
- (!) Removed Date_DaysSince999
The Date_DaysSince999 function (deprecated in 5.35) has been removed.
- New features
Added support for ISO8601 dates of the format dateTtime. Jason Pierce
Got rid of the "use Cwd" and ENV{PATH} lines which means no more taint problems.
- Bug fixes
Fixed "dofw" format to return the day of the current week as documented instead of next week. Dennis Ingram
Fixed a bug where dates in years 1900, 1800, etc. (but not 2000 or other 400th years) were off by one day in DayOfWeek. Noble Thomas
Fixed a bug in ParseRecur (2-digit years not treated correctly). Brian Rectanus
- Time zone fixes
Added time zones. Nelson Ferreira, David Harper
- Documentation fixes
Fixed some typos. Thanks to Alex Kapranoff
Typo fixed. Jim Hranicky
VERSION 5.37 (2000-02-14)
- Bug fixes
Set ENV{PATH} to help with taint checking. Joe Lipson
Fixed a serious bug where HH:24:00 was broken due to support from 24:00:00. Scott Egashira
- Time zone fixes
Fixed the sign on the military time zones. John Scott
VERSION 5.36 (2000-01-21)
- New features
Added support for 24:00:00 to ParseDate. William H Asquith
- Bug fixes
Fixed a bug in ParseRecur. Lewis Tsao
Fixed a bug is UnixDate (%l format). Jon Hedley
Fixed a bug in Date_GetNext/Prev. Christoph Haas
Fixed a bug in Date_IsHoliday. Report and patch by Rolf Beutner
Fixed a bug in UnixDate. Patch by Kurtis D. Rader
Rewrote IsInt routine based on discussion with Sean Hunter (approximately 30% faster on a SPARC).
- Time zone fixes
Added time zone. Paul Arzul
- Documentation fixes
Fixed a documentation problem with Date_ConvTZ. Diab Jerius
VERSION 5.35 (1999-07-06)
- (!) Deprecated Date_DaysSince999
In fixing support for the years 0001.
- (*) Recurrences now support flags
Flags for modifying recurrence dates are now supported.
- (*) Improved holiday support considerably
Added support for recurrences and one-year-only holidays (the latter requested first by Vishal Bhatia.
- (*) Date_Init improved
Date_Init can now return a list of config variables if called in array context. Based on a suggestion by Matt Tuttle.
- New features
Modified Date_GetPrev/Date_GetNext to take $curr=2.
Now parses the Apache log file format "dd/mmm/yyyy:hh:mm:ss (time zone)". Mark Ferguson
Added OS/2 support. Michael B. Babakov
Added Date_IsHoliday routine. Joe Pepin
Added recurrence support for Easter (first suggested by Abigail).
- Bug fixes
Made "epoch" not case sensitive and fixed a bug where it would fail in some languages. Caught because of Iosif's thorough Romanian test file.
Fixed a problem where "in 5 days/weeks/months" sometimes wouldn't get correctly parsed in other languages. Caught because of Iosif's thorough Romanian test file.
Fixed a weakness in ParseDateDelta brought out by the Romanian delta test file.
Fixed a bug causing warnings in the beta version of perl. Patch by Paul Johnson.
Fixed support for years 0000-0999. Requested by Chris Vaughan
Several recurrence bug fixes.
Put all the my'ed global variables in a couple hashes to clean up the namespace and to make a few future enhancements easier to do.
Fixed a bug where business weeks weren't being used correctly. Qian Miao
Fixed a serious typo in the DaysSince1BC routine. Qian Miao
Fixed Veteran's day, added Good Friday (off by default). Peter Chen
Cleaned up holiday variables and re-did holiday routines.
- Time zone fixes
Added time zones. Oded Cohen
- Language fixes
Added Romanian support (including 2 test files). Iosif Fettich
Corrected Swedish translations. Danne Solli
Some fixes to German translations. Peter Ehrenberg
Added Italian. Nicola Pedrozzi
- Test fixes
Added recurrence test suite
- Documentation fixes
Several documentation updates.
New recurrence documentation.
VERSION 5.34 (1999-04-13)
- (!) (*) All Date::Manip variables are no longer accessible
Previously, Date::Manip variables were declared using a full package name. Now, they are declared with the my() function. This means that internal variables are no longer accessible outside of the module. Based on suggestion by Tom Christiansen
- (!). Suggested by Tuc.
- (!)).
Added now in response to a question by Al Sorrell (I should have added it earlier).
- New features
Added exact business mode. Ian Duplisse
Added "mmmYYYY" and "YYYYmmm" formats. As a result, "DDYYmmm" and "mmmDDYY" formats changed to "DD/YYmmm" and "mmmDD/YY" as described above. David Twomey
- Bug fixes
Fixed a bug where a date passed in as an array wasn't getting the date removed from the array. Rick Wise
Added tests for MPE/iX OS. John Testa
Fixed a bug where WorkDayBeg=8:00 went into an infinite loop. Mark Martinec
Changed a business week to be the same as an exact week. Abigail
Fixed a bug where "Sunday week 0" didn't work (only affected week 0) Gerald Rinske
Minor bug (my variable declared twice). Paul J. Schinder
Fixed a bug where "epoch SECONDS" was getting parsed wrong (for SECONDS which could be interpreted as an ISO-8601 date). N. Thomas
Fixed a problem where init files were not being read. Mike Reetz
- Time zone fixes
At the request of the UN, I added the SAT time zone. :-) Howard Hendler
Fixed a bug where time zones were converted multiple times if ConvTZ was set and DateCalc called. Steven Hartland
- Language fixes
Added Portuguese. Rui Pedro da Silva Leite Pereira
- Documentation fixes
A number of typos fixed. Ron Pero
VERSION 5.33 (1998-08-20)
- Bug fixes
Fixed a bug where "1 month ago" was no longer working (and added it to the test cases). This broke when I fixed the "-1second" bug in the previous version. A result of this is that a number of "deltas" can be parsed as dates (i.e. &ParseDate("1 hour 20 minutes ago") is equivalent to &DateCalc("now","1 hour 20 minutes ago")). Only text deltas can be used in this way (i.e. &ParseDate("-0:0:0:0:1:20:0") will not work).
- Language fixes
Added Spanish support. Bautista Jasso Javier
VERSION 5.32 (1998-08-17)
- (!) (Windows, VMS, Mac). For all Unix platforms, it's still .DateManip.cnf . It will only look in the user's home directory on VMS and Unix.
- New features
Added "in N days" and "N days ago" formats. Tony Bowden.
Added cYYYY format to YYtoYYYY variable. Mark Rejhon.
Added 2 days/weeks/months later in both ParseDate and ParseDelta (for Dutch support). Abigail.
Added "Y:M:0*-DOM:0:0:0" to ParseRecur. Jeff Yoak.
- Bug fixes
Fixed a bug where the deltas could be off by up to a couple minutes in some rare cases. Herman Horsten.
Fixed an "uninitialized symbol" warning. Mark D. Anderson.
Fixed a bug where holidays weren't erased. Jonathan Wright.
Applied a bug fix from Joe Chapman where the %W/%U UnixDate formats were frequently wrong.
Several minor fixes and improvements. Abigail.
Added some VMS support. Charles Lane.
Fixed a bug which caused a test to fail on some systems. Charles Lane.
Fixed a bug where "-1second" was treated as a date rather than a delta in DateCalc. Kenneth Ingham
Added a bit to the Makefile.PL (as it was distributed in the Win32 Perl Resource Kit). Murray Nesbitt
- Time zone fixes
Allowed time zones of the format STD-#DST-#. Peter Gordon.
Added time zone support for "+0500 (EST)". Tom Christiansen.
Restricted time zones parsing to 0000-2359 instead of 0000-9999. Frank Cusack
Added time zones. W. Phillip Moore, Michael Smith, Samuli Karkkainen
- Language fixes
Added Polish support. Ian Wojtowicz.
Added Dutch support. Abigail.
Added A.M. and P.M. parsing (not just AM and PM). William W. Arnold.
Fixed a German initialization problem. Thomas Horster-Moller and Christian Reithmaier
- Documentation fixes
Documentation fix. Peter Gordon.
Minor documentation changes. Yamamoto Hiroshi.
Added info about the RCS problem. Supplied by Kipp E. Howard.
VERSION 5.31 (1998-04-08)
- New features
Added "epoch SECS" format to ParseDateString. Thanks to: Joshua M. Burgin.
Added a patch by Blair Zajac to make Date_NthDayOfYear work with decimal days.
- Bug fixes
Fixed a bug in ParseDateDelta (seems to appear only in 5.005 pre-releases). Found by Larry W. Virden.
Missed one form in ParseDate. Noted by Tuc.
Fixed a bug where "15:00:00" couldn't be parsed. Michael Pizolato.
Split Manip.pm. New files are HISTORY, TODO, Manip.pod.
Fixed a bug in ParseDateDelta. Antonio Rosella.
Removed the only occurrence of $& (which may speed some things up). Fix by Ken Williams. First suggested by Abigail.
Fixed an overflow bug in doing date calculations with 2 dates more than 70 years apart. Fix by Vishal Bhatia.
Fixed a bug where "5:00pm" wasn't always parsed correctly. Thanks to Jim Trocki.
Fixed a bug in UnixDate (it wouldn't return the correct string for a format who's last character was '0') noted by Ramin V.
- Time zone fixes
Relaxed some restrictions on time zones so ISO-8601 dates can use non-ISO-8601 time zones. Noted by John Chambers.
Fixed a bug in converting time zones with a minutes field (+1030). Found by Paul O.
- Language fixes
Some fixes to the French translations by Emmanuel Bataille.
Added German support. Thanks to Andreas C. Poszvek.
- Documentation fixes
Minor documentation fixes. Will Linden.
Fixed a documentation problem with Date_GetPrev. It was still 0-6 instead of 1-7. Thanks to Robert Klep.
VERSION 5.30 (1998-01-21)
- (!) (*) Delta format changed
A week field has been added to the internal format of the delta. It now reads "Y:M:W:D:H:MN:S" instead of "Y:M:D:H:MN:S".
- (*) Now handles recurring events
Added ParseRecur. First suggested by Chris Jackson.
- New features
All routines can now take either a 2- or 4-digit year.
Added Delta_Format. First suggested by Alan Burlison.
Added Date_SetDateField. Thanks to Martin Thurn.
- Bug fixes
Made the $err argument to DateCalc optional.
Changed the name of several of the library routines (not the callable ones) to standardize naming.
VERSION 5.21 (1998-01-15)
- (!).
- New features
Added YYtoYYYY variable. Suggested by Michel van der List.
Added the UpdateCurrTZ variable to increase speed at the cost of being wrong on the time zone.
Added British date formats ("Monday week", "today week") as well as some US formats ("in 2 months", "next month"). Thanks to Piran Montford.
Time can now be written 5pm. Piran Montford.
Added the TomorrowFirst variable and Date_NearestWorkDay function.
Added UnixDate formats %G and %L to correctly handle the year. Thanks to Samuli Karkkainen.
Added ForceDate variable. Based on a suggestion by Christian Campbell.
- Bug fixes
Now passes Taint checks. Thanks to Mike Fuhr, Ron E. Nelson, and Jason L Tibbitts III.
Put everything in a "use integer" pragma.
Added a missing space in the %g UnixDate format. Thanks to Mike Booth.
Removed all mandatory call to Date_Init (only called when current time is required). Significantly faster.
Fixed a bug in Date_ConvTZ. Thanks to Patrick K Malone.
Fixed a bug in Date_IsWorkDay.
- Time zone fixes
Fixed some Australian time zones. Kim Davies.
- Language fixes
Cleaned up multi-lingual initialization and added the IntCharSet variable.
Improved French translations. Thanks to Emmanuel Bataille.
Added "Sept" as a recognized abbreviation. Thanks to Martin Thurn.
Typo in the French initialization. Thanks to Michel Minsoul.
- Test fixes
Fixed the tests to not fail in 1998.
- Documentation fixes
Documented how to get around Micro$oft problem. Based on a mail by Patrick Stepp.
VERSION 5.20 (1997-10-12)
- (*) ISO 8601 support
ISO 8601 dates are now parsed. This resulted in several other changes specified below.
- (!) (*) ParseDate formats removed
As a result of adding ISO 8601 support, some previously accepted formats were removed (they conflicted with ISO 8601 formats).
- New features
Several new parsing formats added, including:
  "Friday" (suggested by Rob Perelman)
  "12th" (suggested by Rob Perelman)
  "last day of MONTH" (suggested by Chadd Westhoff)
Added ParseDateString for speed (and simplicity for modifying ParseDate)
Added %J and %K formats to UnixDate.
Added Date_DaysInMonth.
- Bug fixes
Reorganized ParseDate more efficiently.
Fixed some incorrect uses of $in instead of $future in ParseDate. Thanks to Erik Corry.
Added some speedups (more to come).
- Test fixes
Cleaned up testing mechanism a bit and added tests for ISO 8601 formats.
VERSION 5.11 (1997-08-07)
Version 5.11 was never released to CPAN.
- Bug fixes
Added one more check for NT perl. Thanks to Rodney Haywood.
Added some comments to help me keep my personal libraries up-to-date with respect to Date::Manip and vice-versa.
Fixed a bug which showed up in French dates (though it could happen in other languages as well). Thanks to Georges Martin.
Fixed a bug in DateCalc. Thanks to Thomas Winzig.
Removed the "eval" statement from CheckFilePath which causes a suid c wrapper program to die when it calls a Date::Manip script. Thanks to Hank Hughes.
Fixed a bug in business mode calculations. Thanks to Sterling Swartwout.
Fixed a bug in which "1997023100:00:00" was accepted as valid. Thanks to Doug Emerald.
Fixed a bug in which ConvTZ was not used correctly in ParseDate. Re-did portions of Date_ConvTZ. Thanks to Vivek Khera.
Fixed a bug in business mode calculations. Thanks to Ian Duplisse.
Added $^X check for Win95 perl. Thanks to Walter Soldierer.
Missed one call to NormalizeDelta so the output was wrong. Thanks to Brad A. Buikema.
- Time zone fixes
Added time zones. Thanks to Paul Gillingwater, Rosella Antonio, and Kang Taewook.
VERSION 5.10 (1997-03-19)
- Bug fixes
Cleaned up In, At, and On regexps.
Added 2 checks for MSWin32 (date command and getpw* didn't work). Thanks to Alan Humphrey.
Fixed two bugs in the DateCalc routines. Pointed out by Kevin Baker.
Added a check for Windows_95. Thanks to Charlie W.
Cleaned up checks for MacOS and Microsoft OS's. Hopefully I'm catching everything. Thanks to Charlie Wu for one more check.
Fixed a typo which broke Time%Date (Date=dd%mmm%yy) format. Thanks to Timothy Kimball.
- Time zone fixes
Fixed some problems with how "US/Eastern" type time zones were used. Thanks to Marvin Solomon.
- Test fixes
Tests will now run regardless of the time zone you are in.
Test will always read the DateManip.cnf file in t/ now.
A failed test will now give slightly more information.
DateManip.cnf file in t/ now sets ALL options to override any changes made in the Manip.pm file.
- Documentation fixes
Added documentation for backwards incompatibilities to POD.
Fixed some problems in POD documentation. Thanks to Marvin Solomon.
Fixed minor POD error pointed out by John Perkins.
Changed documentation for Date_IsWorkDay (it was quite confusing using a variable named $time). Thanks to Erik M. Schwartz.
Fixed typo in documentation (midnight misspelled). Thanks to Timothy Kimball.
VERSION 5.09 (1997-01-28)
- Bug fixes
Upgraded to 5.003_23 and fixed one problem associated with it.
Used carp and changed all die's to confess.
Replaced some UNIX commands with perl equivalents (date with localtime in the tests, pwd with cwd in the path routines).
Cleaned up all routines working with the path.
- Test fixes
Tests work again (broke in 5.08). Thanks to Alex Lewin and Michael Fuhr for running debugging tests.
VERSION 5.08 (1997-01-24)
- Bug fixes
(*) Fixed serious bug in ConvTZ pointed out by David Hall.
(*) Modified Date_ConvTZ (and documented it).
VERSION 5.07p2 (1997-01-03)
Released two patches for 5.07.
- Bug fixes
Fixed a bug where a delta component of "-0" would mess things up. Reported by Nigel Chapman.
- Time zone fixes
(*) Can now understand PST8PDT type zones (but only in Date_TimeZone).
Added lots of time zone abbreviations.
- Test fixes
Fixed some tests (good for another year).
VERSION 5.07 (1996-12-10)
- (!) .
- (*) Added weeks to ParseDateDelta.
Suggested by Mike Bassman. Note that since this is a late addition, I did not change the internal format of a delta. Instead, it is added to the days field.
- (*) Now reads a config file.
Refer to the Date_Init documentation for details.
- (*) Added business mode.
See documentation. Suggested by Mike Bassman.
- New features
(*) Modified how deltas are normalized and added the DeltaSigns config variable.
Added %q format "YYYYMMDDHHMMSS" to UnixDate. Requested by Rob Perelman. Also added %P format "YYYYMMDDHH:MM:SS".
Added a new config variable to allow you to work with multiple internal formats (with and without colons). Requested by Rob Perelman. See Date_Init documentation.
Added the following formats suggested by Andreas Johansson: "Sunday week 22 [in 1996] [at 12:00]", "22nd Sunday [in 1996] [at 12:00]", and "Sunday 22nd week [in 1996] [at 12:00]".
Added a new config variable to allow you to define the first day of the week. See Date_Init documentation.
Added the following formats to ParseDate for convenience (some were suggested by Mike Bassman): "next/last Friday [at time]", "next/last week [at time]", "in 2 weeks [at time]", "2 weeks ago [at time]", "Friday in 2 weeks", "in 2 weeks on Friday", "Friday 2 weeks ago", and "2 weeks ago Friday".
Added Date_SecsSince1970GMT, moved the %s format to %o (secs since 1/1/70) and added %s format (secs since 1/1/70 GMT). Based on suggestions by Mark Osbourne. Note this introduces a minor backward incompatibility described above.
Date_SetTime now works with international time separators.
Added the %g format (%a, %d %b %Y %H:%M:%S %z) for an RFC 1123 date. Suggested by Are Bryne.
Added options to delete existing holidays and ignore global config file.
Date_GetNext and Date_GetPrev now return the next/prev occurrence of a time as well as a day. Suggested by Are Bryne.
In approximate mode, deltas now come out completely normalized (only 1 sign). Suggested by Rob Perelman.
Added Date::Manip::InitDone so initialization isn't duplicated.
Added a 3rd internal format to store YYYY-MM-DD HH:MN:SS (iso 8601).
Added a config variable to allow you to work with 24 hour business days. Suggested by Mike Bassman.
ParseDateDelta now returns "" rather than "+0:0:0:0:0:0" when there is an error.
- Bug fixes
(*) The d:h:mn:s of ALL deltas are normalized.
Huge number of code changes to clean things up.
Subroutines now check to see if 4 digit years are entered. Suggested by Are Bryne.
Added local($_) to all routines which use $_. Suggested by Rob Perelman.
Complete rewrite of DateCalc.
Fixed a bug where UnixDate %E format didn't work with single digit dates. Patch supplied by Jyrgen Nyrgaard.
Fixed a bug where "today" was not converted to the correct time zone.
- Time zone fixes
Fixed bug in Date_TimeZone where it didn't recognize +HHMN type time zones. Thanks to Are Bryne.
Added WindowsNT check to Date_TimeZone to get around NT's weird date command. Thanks to Are Bryne.
Fixed typo (CSD instead of CST).
Fixed sign in military time zones making Date::Manip RFC 1123 compliant (except that time zone information is not stored in any format).
- Test fixes
(*) Added test suite!
VERSION 5.06 (1996-10-25)
- New features
Added "today at time" formats.
ParseDateDelta now normalizes the delta as well as DateCalc.
Added %Q format "YYYYMMDD" to UnixDate. Requested by Rob Perelman.
- Bug fixes
Fixed another two places where a variable was declared twice using my (thanks to Ric Steinberger).
Fixed a bug where fractional seconds weren't parsed correctly.
Fixed a bug where "noon" and other special times were not parsed in the "which day of month" formats.
Fixed a minor bug where a few matches were case sensitive.
The command "date +%Z" doesn't work on SunOS machines (and perhaps others) so 5.05 is effectively broken. 5.06 released to fix this. Reported by Rob Perelman.
VERSION 5.05 (1996-10-11)
- New features
Changed deltas to be all positive or all negative when produced by DateCalc. Suggested by Steve Braun
Added DateManipVersion routine.
(*) Parses RFC 822 dates (thanks to J.B. Nicholson-Owens for suggestion).
Parses ctime() date formats (suggested by Matthew R. Sheahan).
Now supports times like "noon" and "midnight".
- Bug fixes
Fixed bug introduced in 5.04 when default day set to 1. When no date given, have day default to today rather than 1. It only defaults to one if a partial date is given.
Fixed bug where Date_DaysSince999 returned the wrong value (the error did not affect any other functions in Date::Manip due to the way it was called and the nature of the error). Pointed out by Jason Baker.
Dates with commas in them are now read properly.
Fixed two places where a variable was declared twice using my (thanks to Ric Steinberger).
Hopefully fixed installation problems.
Got rid of the last (I think) couple of US specific strings.
Fixed bug in Date_SetTime (didn't work with $hr,$min,$sec < 10).
Added ModuloAddition routine and simplified DateCalc.
- Time zone fixes
(*) Now supports time zones.
(*) Added Date_ConvTZ routine for time zone support.
Date_TimeZone will now also check `date '+%Z'` suggested by Aharon Schkolnik.
- Language fixes
Added Swedish translation (thanks to Andreas Johansson).
The time separators are now language specific so the French can write "10h30" and the Swedes can write "10.30". Suggested by Andreas Johansson.
- Documentation fixes
Fixed bad mistake in documentation (use Date::Manip instead of use DateManip) pointed out by [email protected]
Minor improvements to documentation.
Documented the 'sort within a sort' bug.
Fixed typo in documentation/README pointed out by James K. Bence.
VERSION 5.04 (1996-08-01)
- New features
Added support for fractional seconds (as generated by Sybase). They are parsed and ignored. Added by Kurt Stephens
- Bug fixes
Fixed bugs reported by J.B. Nicholson-Owens: "Tue Jun 25 1996" wasn't parsed correctly (regexp was case sensitive); full day names were not parsed correctly; the default day in ErrorCheck should be 1, NOT currd, since when currd>28 it may not be a valid date for the month.
VERSION 5.03 (1996-07-17)
- Bug fixes
Fixed a couple of bugs in UnixDate.
Declared package variables to avoid warning "Identifier XXX used only once". Thanks to Peter Bray for the suggestion.
VERSION 5.02 (1996-07-15)
- New features
(*) Added some internationalization (most of the routines had to be modified at least slightly)
- Bug fixes
Fixed a bug where the date was not reset between repeated calls to ParseDate("today")
Replaced the %Date::Manip::Date variable with a large number of other, more flexible variables
Rewrote the Init routine
VERSION 5.01 (1996-06-24)
- New features
Added %F format to UnixDate. Rob Perelman
Added "Date at Time" types
Weekdays can be entered and checked
Two digit years fall in the range CurrYear-89 to CurrYear+10
- Bug fixes
Reworked a number of the ParseDate regular expressions to make them more flexible
- Documentation fixes
Fixed a typo (Friday misspelled Fridat). Rob Perelman
Documentation problem for \$err in DateCalc. Rob Perelman
VERSION 5.00 (1996-06-21)
- (*) Switched to a package.
Patch supplied by Peter Bray: renamed to Date::Manip, changed version number to 2 decimal places, and added POD documentation.
Thanks to Peter Bray, Randal Schwartz, Andreas Koenig for suggestions
- Bug fixes
Fixed a bug pointed out by Peter Bray where it was complaining of an uninitialized variable.
VERSION 4.3 (1995-10-26)
- New features
Added "which dofw in mmm" formats to ParseDate. Mark Dedlow
- Bug fixes
Added a bugfix of Adam Nevins where "12:xx pm" used to be parsed as "24:xx:00".
VERSION 4.2 (1995-10-23)
- New features
UnixDate will now return a scalar or list depending on context
ParseDate/ParseDateDelta will now take a scalar, a reference to a scalar, or a reference to an array
(*) Simple time zone handling
(*) Added Date_SetTime, Date_GetPrev, Date_GetNext
- Bug fixes
Added copyright notice (requested by Tim Bunce)
VERSION 4.1 (1995-10-18)
- New features
(*) Added DateCalc
- Bug fixes
Changed %DATE_ to %DateManip::Date
(*) Rewrote ParseDateDelta
VERSION 4.0 (1995-08-13)
(*) First public release
- New features
Added time first formats to ParseDate
- Bug fixes
(*) Switched to perl 5
Cleaned up ParseDate, ParseDateDelta
VERSION 3.0 (1995-05-03)
- New features
Added today/tomorrows/etc. formats
(*) Added UnixDate
(*) Added ParseDateDelta
- Bug fixes
Added %DATE_ global variable to clean some stuff up
Simplified several routines
VERSION 2.0 (1995-04-17)
- New features
Included ideas from Time::ParseDate (David Muir Sharnoff)
Included ideas from date.pl 3.2 (Terry McGonigal)
(*) Added seconds to ParseDate
- Bug fixes
Made error checking much nicer
VERSION 1.2 (1995-03-31)
VERSION 1.1 (1995-02-08)
VERSION 1.0 (1995-01-20)
- (*) Initial release
Though not released to the public, the initial release combined routines from several scripts into one library.
Now that .NET Core 2.1 Preview 1 has released it is time for a preview of what you can expect to see in Preview 2 and beyond in the System.IO namespace. The 2.1 release changes are the largest changes in quite some time.
Goals
Cross platform
Code should run consistently as possible between platforms. You should also be able to successfully write code that works in mixed environments. Accessing a Unix volume from Windows should just work, for example.
Light touch
System.IO's aggressive handholding blocked a number of scenarios, including mixed environments as described above. Historically this was driven partially due to Code Access Security (CAS) in NetFX (e.g. 4.x). It was also a more plausible approach when .NET was a Windows only solution to aggressively attempt to predict OS behavior. .NET judging the validity of paths is the key example of this well-intentioned, but heavy hand. We now strive to not replicate or predict the outcome of OS API calls.
Performant
This is always important to us. We want to put as little overhead on your platform and enable building performant solutions.
Flexible
We don't want to have an API for every single conceivable scenario, but neither do we want to make it impossible/difficult to build solutions. Part of addressing this is just being flexible and forward-thinking in the API set in general. Another part of this can be addressed by providing extension points that allow you to build complicated solutions in a performant way without resorting to P/Invoking directly into the underlying platform.
Overview
What are you getting from this release? Here are the highlights:
- More consistent behavior cross platform
- Fixed edge cases, particularly on Windows
- New Span<char> APIs in System.IO.Path
- Path.Join() APIs that don't have root checking behavior like Path.Combine()
- Path.GetFullPath() overload that avoids using the current working directory
- A number of new directory enumeration options
- Filtering out specific attributes
- Simple matching behavior
- Ignoring inaccessible files
- Specifying buffer size (notably for Windows UNC scenarios)
- A low-level enumeration extensibility API
- Significantly faster directory enumeration
- Typically 2-4x faster on Windows (Unix 1.3x - 1.4x)
- Significantly lower allocation counts (2x - 40x lower, GC collections cut 8x+)
- Faster Path APIs (GetFullPath() is now 2x as fast as .NET Core 2.0)
Key behavior changes
We've changed behavior to fix issues. Here are the key impacts:
- Exceptions for "bad" paths are thrown when used, not when normalizing (as we can't know what is "valid" without using them)
- Cross plat scenarios are no longer blocked (mounting Unix volumes/shares on Windows for one example)
- Performance is significantly increased
- Behavior is more consistent when running code across platforms
In more detail this means:
- We've made matching consistent with legacy behavior on non-Windows platforms (????.txt matches the same on all platforms)
- Match expressions no longer match 8.3 file names on Windows (*.htm will match only *.htm, not *.html or *.htmz, if the volume has 8.3 filename generation on)
- We've fixed enumeration of unusual Windows files - you can now get *Info classes out successfully and use the methods on files that end in spaces or periods
- We do very little validation of paths up front as there is very little we can accurately predict (unblocking numerous xplat and some existing Windows scenarios)
- We only check for embedded nulls, no other chars are rejected, including wildcards (as nulls are never supported and OS APIs almost universally take null terminated strings)
- We don't check for "proper" colon placement
- We don't check for segment length or total path length
- We don't check for "proper" UNCs
- We still throw for null or empty paths on all platforms or paths of all spaces on Windows (as they throw from the Win32 GetFullPathName)
- We don't trim leading spaces on any paths on Windows anymore (we did for some, not others)
- We don't trim whitespace characters from the end of paths on Windows (such as nbsp)
- GetPathRoot, GetDirectoryName, etc. don't throw for empty anymore, they return null (like they do for null).
- GetPathRoot now works consistently with various Windows prefixes (e.g. \\?\, \\.\)
New System.IO.Path APIs
The majority of what we have here are new ReadOnlySpan<char> overloads, which allow you to avoid unnecessary string allocations. Spans are a key new feature in .NET 2.1. You can pass strings as spans using the .AsSpan() extension (there is an implicit conversion as well). ReadOnlySpan<char> has a number of extensions that allow you to evaluate it as you would a string.
Join() APIs allow putting path segments together without analysis of segment rooting. Combine("C:\Foo\", "\Bar") gives you "\Bar". Join gives you "C:\Foo\Bar".
Path.GetFullPath(path, basePath) is an important addition. Normalizing paths that aren't fully qualified (e.g. relative to the current directory) is dangerous and discouraged. The reason is that the current directory is a process-wide setting. Getting into a state where a separate thread unexpectedly changes the working directory is common enough that you should make every attempt to not use the existing GetFullPath().
New EnumerationOptions
There have been numerous asks for directory enumeration options over the years. To fulfill some of those requests and allow addressing more we're introducing the EnumerationOptions class. The existing Directory and DirectoryInfo enumeration methods now have overloads that take this new class.
This class has some defaults that are different than historical behavior:
- Matching is simple: '?' is always one character, '*' is always 0+, and "*.*" is any filename with a period
- Hidden and system files are skipped by default: typically one doesn't want the .vs, .git, $RECYCLE.BIN, Thumbs.db, etc. folders in results.
You can, of course, choose any setting you want.
New enumeration extensibility model
Those options not enough? Now you can write your own fast, low level enumerators. Want your own matching algorithm? Want to total file sizes? Want to match all files with a set of extensions? Almost anything is possible with the new API that live in System.IO.Enumeration. It is a large topic, so I'll address it in the next post.
What exceptions do I get from bad paths now?
Whatever the OS tells us, we'll tell you. This will always be some sort of IOException for a bad path. Sometimes it might be a File/DirectoryNotFound. The key thing to remember is that you get exceptions when you try to use the path. GetFullPath() doesn't throw for these as it has no practical way of checking.
What if I still want to check invalid characters?
You can do this manually using GetInvalidPathChars(). It isn't recommended as it isn't always correct on any platform. You may have NTFS/FAT volumes mounted in Unix or vice-versa.
Where did the speed improvements come from?
Primarily from reducing allocations. Keeping work on the stack and out of the heap can have a dramatic impact on large sets. Some wins come from smarter use of available OS APIs. A chunk comes from not validating paths on every single API call.
When does AttributesToSkip get applied?
First thing. One effect of this is that filtering out FileAttributes.Directory makes the RecurseSubdirectories option meaningless.
What about other file matching types?
We definitely want to add more in the future. Things like globstar (**), POSIX pattern matching notation (glob), possibly regex. With the extensibility APIs writing a custom matcher is easy. If you're passionate about any of these we're always open to contributions.
Why did you change enumeration defaults?
Note that we only changed them when you use the new EnumerationOptions class. Existing APIs should behave as they did (modulo the matching consistency fixes mentioned in the post). We picked new defaults based on OS defaults (shell & command line) and what one would expect when enumerating end-user files (e.g. to present to users). These obviously won't be right for all scenarios, but changing the settings is easy.
What about the more obscure Windows matching characters?
'<', '>', and '"' were unblocked in 2.0. They're still supported, but only on Windows through the normal APIs. You can, if you want, use them on Unix if you use the extensibility points described in the next post. | https://blogs.msdn.microsoft.com/jeremykuhne/2018/03/08/system-io-in-net-core-2-1-sneak-peek/ | CC-MAIN-2018-13 | refinedweb | 1,386 | 57.37 |
Copy Qt resource file into filesystem
Hi all,
I want to copy a resource file into the current directory containing the executable. The copied file should go into a folder abc (which does not exist yet) inside my current directory.
I am using the QFile::copy() method like this:
#include <QCoreApplication>
#include <QFile>
#include <QString>
#include <QDebug>
#include <QTextStream>
#include <iostream>

using namespace std;

void read(QString filename)
{
    QFile file(filename);
    if(!file.open(QFile::ReadOnly | QFile::Text)) {
        qDebug() << " Could not open the file for reading";
        return;
    }
    QTextStream in(&file);
    QString myText = in.readAll();
    // put QString into qDebug stream
    qDebug() << myText;
    file.close();
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    read(":/resources/hello.pro");
    bool status = QFile::copy(":/resources/hello.pro", "./abc/hel.pro");
    if(status) {
        cout << "Success" << endl;
    } else {
        cout << "Failed" << endl;
    }
    return 0;
}
The problem is that when I run the executable, it prints Failed.
I expected it to create the new directory abc inside my current folder and then copy the resource file into it.
Can anyone help me?
Thanks and best regards.
Thanh C. Tran
- jsulm Moderators
@thanhxp said in Copy QT resource file into filesystem:
I am expecting that it will create new directory abc inside my current folder
That is a wrong assumption: copy() will not create any directories. You have to create them yourself first (e.g. with QDir::mkpath()); see the QDir documentation.
Automating My Projects With Python
So, I saw this fine video on youtube about, building stuff within one day. It was made by KalleHallden and I really enjoyed this video.
But during the video I wondered if his code would work on every workstation. Now I have a Mac, and Kalle does too, but this
path = "/Users/kalle/Documents/Projects/MyProjects/"
Will not work on a Windows machine. Home folders are different on many platforms.
So, why not use a home-finding feature of Python itself, like:
from pathlib import Path
home = str(Path.home())
So, I was thinking, can I improve this script?
Yes!
So let’s Go!
What’s to be done
So there is this to-do list Kalle created.
- [ ] Navigate to MyProjects
- [ ] Create folder with project name
- [ ] Navigate into folder
- [ ] Git init
- [ ] Go to GitHub and create new repository
- [ ] Copy the remote
- [ ] Add remote to my local folder
- [ ] Create readme file
- [ ] Git add
- [ ] Git commit
- [ ] Git push
- [ ] Code . (open IDE)
Now I think this list can be shorter and easier.
So let’s start by navigating to the projects folder.
Navigate to folder
This piece of code lets you print the home folder. On windows, linux or Mac os
from pathlib import Path
home_folder = str(Path.home())
print(home_folder)
The output on my MacBook is:
/Users/theovandersluijs
Let’s say we want to create the new projects in:
[home]/Documents/MyProjects
We can concatenate the home folder easily with the Documents and MyProject folder with the
os.path.join statement.
from pathlib import Path
import os
home_folder = str(Path.home()) # this is the users home folder on any OS
my_project = os.path.join(home_folder, "Documents", "MyProjects")
print(my_project)
This gives us:
/Users/theovandersluijs/Documents/MyProjects
Now let’s say we want to create “New_project” into this folder structure.
Easy we are going to use
os.makedirs
from pathlib import Path
import os
home_folder = str(Path.home()) # this is the users home folder on any OS
my_project = os.path.join(home_folder, "Documents", "MyProjects", "New_project")
os.makedirs(my_project, exist_ok=True)
And we are done!
- [x] Navigate to MyProjects
- [x] Create folder with project name
The first step, we did not have to do. The second step, will also be obsolete in one of the next chapters.
Creating the github repository
Creating a github repository is very easy!
First you need to install the GitHub package for Python.
pip install PyGithub
Then you need a Token to gain access to your GitHub account using python. You can find information about generating a GitHub Token here.
So now that you have your token you can start using the script below.
from github import Github
token = "[YOUR TOKEN]"
user = Github(token).get_user()
name = "New_project"
auto_init = True  # creates the Readme file
description = "This is a nice description about this project"
homepage = ""  # optional URL with more information about the project
private = False
license_template = "cc-by-sa-4.0"
repo = user.create_repo(
    name,
    auto_init=auto_init,
    homepage=homepage,
    description=description,
    private=private,
    license_template=license_template
)
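A side note on the token = "[YOUR TOKEN]" line: hard-coding a token is easy to leak (for example by pushing it to the very repo you just created). A common alternative is to read it from an environment variable; the GITHUB_TOKEN name below is just a convention I'm assuming, not something the post prescribes:

```python
import os

def get_token(var: str = "GITHUB_TOKEN") -> str:
    """Read the GitHub token from the environment instead of the source file."""
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"Set the {var} environment variable first")
    return token
```

Then token = get_token() replaces the hard-coded string.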
So what do all these vars mean?
- Name: The name of the repository (Required).
- auto_init: Pass
trueto create an initial commit with empty README. Default:
false.
- homepage: A URL with more information about the repository.
- description: A short description of the repository.
- private: Either
trueto create a private repository or
falseto create a public one. Creating private repositories requires a paid GitHub account. Default:
false.
- license_template: Choose an open source license template that best suits your needs, and then use the license keyword as the
license_templatestring. For example, "mit" or "mpl-2.0".
There are various licenses you can choose from. You will find them all here
More information about creating a repository and all the possible variables on GitHub can be found here
If you like to see some output after the script use these:
print(repo.full_name)
print(repo.html_url)
print(repo.ssh_url)
The first will show you the full name of the newly created repository including your username
tvdsluijs/New_project
The second shows the HTML url, which you can use either to browse to your repo or to clone it.
The last is the SSH url to clone your repo
[email protected]:tvdsluijs/New_project.git

And this last value will come in handy when we want to clone our repo to our hard drive.
We do not need any Selenium or BeautifulSoup to get any of the needed data from the GitHub page.
So what steps did we do here?
- [x] Go to GitHub and create new repository
- [x] Create readme file
We actually do not need these with the code I've created.

~~- [ ] Copy the remote~~
~~- [ ] Git add~~
~~- [ ] Git commit~~
~~- [ ] Git push~~
The Clone wars
Well… not really wars, but I just wanted to put a Star Wars item within this article :-)
But it is about cloning. Because we want to Clone the Repository to our harddrive.
Unfortunately there is no way (yet) to clone with pygithub. So we are going to do this with the good old os package that's already part of Python.
home_folder = str(Path.home())
my_projects_folder = os.path.join(home_folder, "Documents", "MyProjects")
clone = "git clone {}".format(repo.ssh_url)
os.chdir(my_projects_folder)
os.system(clone)
With clone = "git clone {}".format(repo.ssh_url) you specify the ssh_url from GitHub where your repository is.
Specifying the path where the cloned project needs to be copied is done by os.chdir(my_projects_folder). Do NOT specify the name of your project within this statement. The cloning will create the folder auto-magically!
And clone the whole shebang with:
os.system(clone)
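os.system works, but it hands the whole string to a shell, so a repository name containing a space or a shell metacharacter would break it. A slightly more robust sketch (my variation, not from the original post) uses subprocess.run with a list argument; the run parameter only exists so the command can be inspected without actually invoking git:

```python
import subprocess
from pathlib import Path

def clone_repo(ssh_url: str, projects_folder: Path, run=subprocess.run):
    """Clone ssh_url into projects_folder without going through a shell."""
    projects_folder.mkdir(parents=True, exist_ok=True)
    cmd = ["git", "clone", ssh_url]
    run(cmd, cwd=projects_folder, check=True)  # check=True raises if git fails
    return cmd
```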
So what steps did we do here?
- [x] Create folder with project name
- [x] Navigate into folder
- [x] Git init
- [x] Copy the remote
- [x] Add remote to my local folder
So, as we did all of this within a small piece of code, a lot of items on the original to-do list became obsolete.

So what is left of the original list?
- [x] Create new repository
- [x] Copy the remote to local folder
DONE!
Well, except for the automated opening of the IDE, and of course the bash scripts for Mac and Windows.
Bash scripting
For Windows you need to create a .bat file and place it in your Windows system folder, or add its location to your PATH variable so you can run it anywhere.
You also need to know where python.exe is located, and finally where the Python script is located.
Your bat script could look like this:
"C:\Users\Theo\AppData\Local\Programs\Python\Python37-32\python.exe" "C:\Users\Theo\Documents\MyProjects\New_project\create.py"
pause
For macOS you should create a .sh file, something like
.my_commands.sh
With the following code
#!/bin/bash
function create() {
python /Users/theovandersluijs/Documents/MyProjects/New_project/create.py
}
If you source ~/.my_commands.sh you will be able to start the Python script from anywhere on your system in a terminal.
Wrapping things up
Now, if you add some input variables to make the script more intuitive and dynamic, put in some try/except and logging, and slam it into a class with objects, you will get something like my project, which you can find on GitHub!
Go to my GitHub Page for all the code.
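For the "input variables" part, here is a minimal argparse sketch; the argument names are illustrative guesses, see the GitHub repo for the real interface:

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Create and clone a new GitHub project")
    parser.add_argument("name", help="name of the new repository")
    parser.add_argument("--description", default="", help="short repo description")
    parser.add_argument("--private", action="store_true", help="create a private repo")
    return parser.parse_args(argv)

# e.g. on the command line: python create.py New_project --private
```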
Like the script? Please buy me a coffee for my work. Thank you!!!
Kalle Hallden’s Video
Please watch the video of Kalle below; it's really nice to see a passionate developer working.

Also take a look at his other videos, or go to his GitHub account.