text
stringlengths 454
608k
| url
stringlengths 17
896
| dump
stringlengths 9
15
⌀ | source
stringclasses 1
value | word_count
int64 101
114k
| flesch_reading_ease
float64 50
104
|
---|---|---|---|---|---|
08 January 2010 15:23 [Source: ICIS news]
TORONTO (ICIS news)--Shell will convert its 130,000 bbl/day refinery in ?xml:namespace>
The refinery no longer fit in with Shell’s long-term strategies, the company said.
Shell announced last July it planned to either sell or close the refinery, which employs a staff of around 500.
The company did not announce a firm timeline for the conversion. “For the time being, it remains business as usual for our refinery operations,” it said.
In March last year, Shell’s Canadian affiliate PTT Poly
That facility was later bought by Portuguese firm IMATOSGIL GROUP, which will convert it to polyethylene terephthalate (PET) production.
Meanwhile, Shell’s is also reviewing downstream operations in
In 2008, Shell cancelled plans for a possible grassroots refinery | http://www.icis.com/Articles/2010/01/08/9324167/Shell-to-convert-Montreal-refinery-to-fuel-terminal.html | CC-MAIN-2014-15 | refinedweb | 132 | 53.61 |
I know this is fairly basic stuff, but please, bare with me. It's quite hot in here (35C at the moment), so my brain is probably in its melting phase.
Long story short, let's say we have a table in SQL represented in C# like this:
public class Product { public int ID { get; set; } public string Name { get; set; } public string Picture { get; set; } // filename of the picture, e.g. apple.jpg public int CategoryID { get; set; } }
Now we would query the database and retrieve the object, let's say with values like this:
ID = 1 Name = Yellow apple Picture = apple.jpg CategoryID = 25
All perfectly normal. The thing I'm meditating about at the moment is this: if I want to show a product, I need some additional info that wasn't queried from the database, like exact file path to the image, all we have is
apple.jpg
, but we need maybe something like
~/images/apple.jpg
So, I was thinking of 3 possibilities:
1.) add a new property to the class Product
public string PictureUrl { get { return "~/images/apple.jpg"; } }
2.) specify the full url during performing of the presentation logic, let's say:
public void ShowProductDetails() { Product p = ProductRepo.GetProduct(id); txtName.Text = p.Name; imgPicture.ImageUrl = "~/images/" + p.Picture; }
3.) use Decorator pattern
First approach seems wrong to me (even though I have been using it for quite a long time), because I'm trying to have a layered web application. I'm not sure hard-coding this is a good way to go.
Second approach is better, but worse in the sense it can't be easily reused. If I have multiple places where I'm doing the same thing and something changes, ... Maybe it would work if I specify some static constants holding the paths...
Third possibility seems quite complicated in terms of maintainability. The number of my classes would probably have to double. If I have 30 classes now, it would suddenly become 60 :/
What is the best/recommended way of doing things like this? If I add properties to my POCOs that aren't included in the db schema, I'm unable to use Dapper.Contrib or Rainbow and similar libraries, because even though "selects" work fine, I can't "insert" nor "delete". I have to hard-code the sql strings for every command which becomes really tedious after some time, when you're doing all the time the same stuff.
EDIT:
OK, maybe I should have been more specific... The solution from Govind KamalaPrakash Malviya is great, but can't be used every time. I need a way to solve this for any type of properties, even those more complex ones - for instance the number of photos of some album. It's a good idea to query the count of photos along with albums, but assign it to what? Create a decorated class using a Decorator pattern?
How do YOU solve this kind of architecture problems?
I normally solve this by leaving the entity object as it is and creating an extra data container, which will either hold a reference to the corresponding entity or implement the corresponding properties from the entity object itself. In the latter case I use a mapping library (AutoMapper) to copy data from an entity to a the enhanced container.
The logic for filling the extra properties normally lies in a factory (or factory method). It's up to you, where you want to place this in your architecture. In a current project we are including them in our data access facade on client side, because we don't want to clutter the data access layer with too many DTO's. This of course means, that the data access layer still needs to support retrieving the extra properties. In your case an operation like
int GetNumberOfPhotosForAlbum(Album album).
We found that the benefits outweigh the risk of an ever-growing contract of the data access layer, which of course might need to support many different calls like the example above instead of just
EnhancedAlbum GetEnhancedAlbumWithAllKindsOfExtraProperties(long albumId). This might also become a performance problem in some scenarios, because of the overhead of an increased frequency of service calls. In the end you need to decide, what's best for your project.
I like this approach, because my entities (
Album) stay untouched and I retain a clear separation of concerns between persistence, client logic and mapping.
Example:
class Album { string Name { get; set; } } class EnhancedAlbum { Album Album { get; set; } int NumberOfPhotos { get; set; } } class EnhancedAlbumFactory { private MyDataService _dataService; //include some means of constructing or (better) injecting the data service EnhancedAlbum GetEnhancedAlbum(Album album) { return new EnhancedAlbum { Album = Album, NumberOfPhotos = _dataService.GetNumberOfPhotosForAlbum(album); }; } }
I think you should manipulate it in presentation layer because image path for presentation layer only. so use third one but make it easy using utility method
public class PathUtility { public static string ImageUrl(string imageName) { if(string.IsNullOrEmpty(imageName)) { throw new Exception("Image name not valid!!"); } else { return "YourImageDirectroyUrl" + imageName; } } }
and use it easily
PathUtility.ImageUrl("apple.jpg"); | https://dapper-tutorial.net/knowledge-base/11275183/csharp-database-access--dapper--sql-and-pocos---programming-design | CC-MAIN-2019-04 | refinedweb | 847 | 53.1 |
I from math import log def bench3(n): for i in xrange(n): for j in xrange(1000): # m=j+1 z=log(j+1) z1=log(j+2) z2=log(j+3) z3=log(j+4) z4=log(j+5) z5=log(j+6) z6=log(j+7) z7=log(j+8) z8=log(j+9) z9=log(j+10) return z9 Here is the result: >>> t6=timeit.Timer("bench1.bench6(10)", "import bench1") >>> t6.repeat(1,1) [0.73878858905254674] >>> t3=timeit.Timer("bench1.bench3(10)", "import bench1") >>> t3.repeat(1,1) [0.056632337350038142] Anyone know why? Thanks Frank >From: "Kurt Smith" <kwmsmith at gmail.com> >To: "wang frank" <fw3 at hotmail.co.jp> >Subject: Re: Speed of Python >Date: Fri, 7 Sep 2007 16:49:05 -0500 > >On 9/7/07, wang frank <fw3 at hotmail.co.jp> wrote: > > Hi, > > Here is the matlab code: > > function [z]=bench1(n) > > for i=1:n, > > for j=1:1000, > > z=log(j); > > z1=log(j+1); > > z2=log(j+2); > > z3=log(j+3); > > z4=log(j+4); > > z5=log(j+5); > > z6=log(j+6); > > z7=log(j+7); > > z8=log(j+8); > > z9=log(j+9); > > end > > end > > z = z9; > > > > I am not familiar with python, so I just simply try to reproduce the same > > code in python. > > If you think that my python script is not efficient, could you tell me how > > to make it more efficient? > >One thing you can do is bind math.log to the function's namespace thusly: > >import math >def bench1_opt(n): > log = math.log > for i in range(n): > for j in range(1000): > m=j+1 > z=log(m) > z1=log(m+1) > z2=log(m+2) > z3=log(m+3) > z4=log(m+4) > z5=log(m+5) > z6=log(m+6) > z7=log(m+7) > z8=log(m+8) > z9=log(m+9) > return z9 > >On my system I get about a 20% speedup over the 'unoptimized' version >(even though this optimization is rather trivial and may even help >readability). Still not matlab speed, but better. You might be able >to do better using xrange instead of range, but the loop overhead >isn't the main slowdown (only about 1% ). > >For comparisons in real-world usage (if you are doing numerical work), >I urge you to take a look at a specifically numerical package -- >numpy/scipy or their equivalents: Python is a >*very* general language not suited for heavy numerical work out of the >box -- dedicated numerical packages adapt python to this specialized >envirornment, and are becoming more and more competitive with Matlab. >The best part is you can put your time-critical code in FORTRAN or C >and wrap it with pyrex, f2py, weave, etc. pretty easily, and still >have the beauty of Python gluing everything together. > >Kurt _________________________________________________________________ メッセンジャー用アイコンに大人気オンラインゲームの萌えなキャラが登場! | https://mail.python.org/pipermail/python-list/2007-September/439864.html | CC-MAIN-2017-30 | refinedweb | 473 | 69.52 |
sp_mergemetadataretentioncleanup not cleaning merge metadata during regular sync causing stale data in metadata tables
I recently came across an interesting situation in merge replication and thought I should write about it:
Description:
> There is a high volume replication setup with thousands of subscriber and a very volatile(lot of transactions per second) publisher database
> Started observing huge blocking when multiple merge agents starts synchronizing
> Even if merge agents are stopped, the time taken for a single merge agent is huge
> There are millions of rows in merge metadata tables (msmerge_genhistory, msmerge_contents and msmerge_tombstone) and they are not getting cleaned up
> Running update stats every hour does not help
Cause:
> The huge backlog in the merge metadata tables was causing the merge queries to take long times. These long times were resulting in blocking when multiple merge agents were sync'ing.
> This is because the msmerge_genhistory metadata table is referred everywhere during merge sync. Any query that joins this table (and there are a lot of them) had to get blocked since some other queries had taken locks on the same tables rows.
> The root cause of this problem was the millions of rows those metadata tables were not just getting cleaned up.
What I found out:
> Once I found that the root cause of the issue is the backlog in the metadata tables, ran sp_mergemetadataretentioncleanup manually to force a cleanup and it returned immediately with 0 rows cleaned.
> Ran it repeatedly and that did not help.
> So, its obvious that the merge agents (Note: merge agents run retention based metadata cleanup as the first step during regular sync) were also not cleaning anything from those tables.
> Further looking into the code of this proc (by running sp_helptext sp_mergemetadataretentioncleanup) found that it checks if any other agent is running sp_mergemetadataretentioncleanup (of it this SP is being manually on the server). If yes, it just skips execusting any code and returns.
<snip>
-- if somebody else is already cleaning up in this database, we simply return
set @applockname= 'MS_sp_mergemetadataretentioncleanup' + convert(nvarchar(11), db_id())
exec @retcode= sp_getapplock @Resource= @applockname, @LockMode= 'Exclusive', @LockOwner= 'Session', @LockTimeout= 0, @DbPrincipal = @DbPrincipal
if @@error <> 0 or @retcode < 0 return (0)
</snip>
> From DBCC OPENTRAN, found that there was one merge agent which was running sp_mergemetadataretentioncleanup and taking long time but the user might have hit cancel and it was hung while rolling back.
> This was causing other executions of sp_mergemetadataretentioncleanup to not work and also causing the metadata to keep getting stale causing further performance (and blocking issues)
> Killed that spid, and stopped all the merge syncs and ran sp_mergemetadataretentioncleanup manually.
> This took long time as it always cleans the rows from metadata tables in a batch of 5000 rows. Also it checks expired subscriptions (using sp_MSmark_expired_subscriptions) and this can take some time when you have thousands of subscribers.
> The complete execution took long time but it cleaned millions of rows from metadata tables. There were only a few hundred rows left in the metadata tables after this.
> After this, the merge queries ran instantly and there was a almost no blocking.
If you come across a situation where you find sp_mergemetadataretentioncleanup is not cleaning up rows from metadata tables, it will be worth to: stop all merge agents, confirm that no spids that might be executing any merge metadata commands are lying around.
After this, run sp_mergemetadataretentioncleanup manually and wait till it finishes completely. Once it finishes completely, check the table count. After this, start the merge agents again. | https://docs.microsoft.com/en-us/archive/blogs/mangeshd/sp_mergemetadataretentioncleanup-not-cleaning-merge-metadata-during-regular-sync-causing-stale-data-in-metadata-tables | CC-MAIN-2020-24 | refinedweb | 581 | 52.02 |
Step-by-Step: A Simple Vue.js App
Let’s build an app! It’s going to be simple and I’ll do a lot of explaining — a kind of Vue.js show-and-tell, for newbies. If all goes well, you’ll learn how Vue sees the world, feel inspired to learn more about Vue, and have the confidence to start implementing your ideas. We’ll build a single-user voting app, feature by tiny feature, beginning with an empty text editor.
Step 1: A Blank Page
Create a new text file and name it index.html (or whatever). Inside, add the basic boilerplate code for an HTML page:
There’s nothing special here yet — no content, no JavaScript, just an HTML5 skeleton to build on.
Step 2: Proof That Vue.js Works
The next goal is to prove that Vue.js is running properly in the page. You’ll pull in the Vue library, create a Vue instance, and have the instance render a message. Create a new empty file called index.js (you can choose another name, but be sure to change the script tag at the bottom of index.html so that the
src attribute points to the name you chose). Then, modify your HTML and JavaScript files to match the code shown here:
So, what changed since the last version? For starters, you now have a JavaScript file, where you created a new Vue instance and passed it a configuration object. Let’s take a look at the contents of that object.
The
el field tells Vue where to bind itself on the page — in this case, you created a new div with id
app and bound Vue to it. Vue then treats this div as the boundaries where it will do its work — nothing outside of that div can affect (or be affected by) your code.
The
data field stores the initial values for the state (or data) you want Vue to keep track of. As your app grows, it will read and write the contents of this data in various ways. At the moment your
data contains just one value,
“Vue is working!”, which you’ve named
Besides
el and
data, there are lots of other useful fields you could add. To learn all about them, I recommend you browse the Vue.js guide and API documentation …eventually. For this little app, though, we’ll only need a few and I’ll walk you through them as we go.
Turning to your HTML file, take a look at the curly brace “mustache” syntax inside the app div, where it says
{{message}}. Those mustaches are one way to grab values from Vue’s
data and show it on the page. Depending on your background, it could help to think of the app div as a template that guides what Vue does with its
data.
Step 3: Data-Driven HTML Elements
Showing a string was a great first step. Let’s try something a little more complicated: creating new HTML elements on the fly, based on an array of values in
data. In our case, the array will hold information about JavaScript frameworks. Not much information, mind you, just a name and a vote count. Here’s the code… take a look, modify your files to match, and then we’ll dissect it.
Okay, so first check out the JavaScript and notice what you’ve replaced
message with. Instead, you now have an array called
frameworks, with three objects inside. Each object has a string
name and a number
votes.
Then in the HTML, you’ve added a
ul element with a
li element inside. This
li has an unfamiliar
v-for attribute on it, and some text contents (including some text in mustaches,
{{f.name}} and
{{f.votes}}).
One way to read what’s going on in with the
v-for is: “Let each item in the
frameworks array create its own
li element on the page, one by one. Inside each
li, the particular item that created it will be known as
f.” So, given your current data, this
v-for causes three
li elements to be generated. In the first one,
f is
{name: 'Vue.js', votes: 0}. In the second,
f is
{name: 'React', votes: 0} and so on. You use the mustache syntax here just like you did before with
{{message}}, as a way to render
name and
votes within each
li.
Step 4: Modifying Data
Right now, if you’ve followed along, you have a three-way tie of zero votes. Let’s add some voting buttons we can use to increase the
votes count for each framework. After this step, the contents of your files should match this:
Looking at the JavaScript first, you’ve added a new entry to the Vue instance’s configuration object:
methods. This is an object whose keys are function names, and whose values are function implementations. You’ve also defined a new function inside it, called
voteFor. This function expects a framework object, whose
votes count should be increased. It increases
votes by one, and then exits.
In the HTML, you added a
button element, and set up a Vue event listener with the
v-on attribute. You can read
v-on:click="voteFor(f)" as “when this
button gets clicked, send
f to the
voteFor method.”
Notice that after clicking the button, the
li shows the new vote count immediately without any further hassle on your part. Since
votes is inside the
data object, Vue monitors it for changes and refreshes the relevant parts of the page when necessary.
Step 5: Creating Data
At present, you’re limited to voting on the three frameworks we hard-coded into our JavaScript. Let’s lift that restriction and let users add new ones to the list. As you modify your files to match mine, you’ll add a text input box for the new name, and have Vue listen for the
Enter key. You’ll also define a method that the event listener can call, which will grab the input’s text from the event, build a new framework object, add it to the list, then clear the input. Take a look:
This time let’s check out the HTML first. You’ve seen most of this stuff already, except the
v-on:keyup.enter event listener. Note that we don’t seem to send any parameter to the
addNew function, so how will it know what the user entered? Hold that thought, then flip over to your JavaScript file.
Over in the JavaScript, the
addNew function does accept a parameter, even though we didn’t explicitly send one in the event listener. That’s because, by default, Vue event listeners send the event itself to event handlers. This is handy, since the event object has useful attributes on it.
Here, in particular,
event.target.value points to the text the user typed into the
input. So, you can use
event.target.value as the framework
name when you build the new object for pushing onto the frameworks list, and you can set
event.target.value to the empty string when you’re done, to start fresh for the next entry.
Note also the use of
this.frameworks when pushing the new framework onto the list, instead of using
this.data.frameworks or just
frameworks. In the HTML template (
mustaches,
v-for attributes, etc) it’s
frameworks, but in the JavaScript code it’s
this.frameworks. This discrepancy might trip you up for a while, but eventually it ends up feeling nice and clean to leave it off in the HTML, and it ends up feeling clear and helpful to have things walled off behind namespaces in the JavaScript.
Step 6: Deleting Data
With the ability to add frameworks, you’ve also allowed users to make mistakes, such as typos ending up in the list! Let’s give users the ability to remove frameworks they’ve added. As you might expect by now, this will involve a new event listener and a new entry in
methods.
Since deletion is a very different user intent than voting, I decided to make it visually distinct by using an anchor tag instead of a button. Other than that, there should be no surprises in the HTML— you added a
v-on:click event listener, which passes
f to a new function
remove. In the JavaScript, that
remove function works by filtering
frameworks to only include things that aren’t
f.
Step 7: Persistence Using LocalStorage
Currently, each time you refresh the page, you reset the app’s state to the hard-coded framework list. To fix this, let’s implement simple
save and
load functions that use the LocalStorage API. Modify your JavaScript file to match what’s below, and then we’ll discuss what you did.
First let’s look at
save. LocalStorage can only store a string, so you encoded
frameworks using
JSON.stringify. Then you used
localStorage.setItem to save it in the browser’s storage, so your app can retrieve it later. Glancing through the code, you can see that you added calls to
this.save() after every modification to
frameworks, so the saved copy is always up-to-date.
The
load function is almost as simple as
save, with a slight twist: When the user visits for the first time, or if storage gets cleared, there won’t be any saved data to load. Because of this, we do a quick check to make sure our loaded string at least has something in it before we parse it and set the value of
frameworks.
It’s probably not immediately obvious where to call your
load function. Fortunately Vue.js supplies a few special functions called lifecycle hooks, for situations like this where you want to do something at a particular moment. In this case, a reasonable place to load your data would be right after the Vue instance is created, which corresponds to the
created lifecycle hook. There aren’t very many lifecycle hooks, and they’re useful to understand, so you may want to check out what the Vue.js guide has to say about them.
Step 8: Hiding Elements
You’ve added some nice features, and things are getting a little cluttered! The app’s most important features are viewing and voting, so let’s try hiding the delete links and input box until we need them, by tucking them behind an “edit mode” toggle. Copy my edits, and then we’ll talk about what changed:
Let’s talk about the JavaScript first. In your
data object, you added a boolean
editMode that starts out false and will keep track of whether the app should show the add/delete UI elements. Then down below in
methods you added a simple function to toggle the value of
editMode. In designing this app, I decided not to bother with preserving
editMode through a page refresh, but feel free to implement that as a personal exercise!
In the HTML you added a new conditional rendering attribute
v-if="editMode" on everything that should only appear during Edit Mode. You also added a button, with an event listener that calls
toggleEditMode. The text of the button changes to “Edit” when
editMode is false, and “Done” when
editMode is true. Instead of using the JavaScript ternary operator like I did, you may want to try implementing the same behavior yourself using
v-if (and consider exploring
v-else too).
Step 9: Computed Properties
One last feature to add — a section to call out the current winner(s). There are a few edge cases to consider, like when several frameworks have the same number of votes, or when there are no frameworks in the list.
Think for a second about how you might design this, or even try your hand at an implementation before continuing on.
Ready? OK, so you might choose to define a new array in
data to store the winners, with a
method to keep that array updated. You would need to call that method after every vote and during loading to keep it fresh.
That could work, but Vue.js has a more convenient way called computed properties. Take a look:
In the HTML, you render the value of
winnerString using mustaches, just like you did previously with
frameworks, which was a variable contained inside
data.
In the JavaScript though, we see that
winnerString is not a variable inside
data, but rather a function contained within
computed. Inside
computed is where you can put any derived values that are basically read-only, but which must be kept up-to-date with changes to the variables (such as vote counts) that they rely on. Vue happily lets us access
winnerString like a variable in
data, but caches its return value and automatically refreshes it whenever the votes change who’s winning, without us needing to manage all of that. Fewer headaches and less code to write. It’s pretty great.
The implementation of
winnerString itself uses map, filter, and apply to grab the highest vote count, create a list of all frameworks having that many votes, and build a comma-separated string of their names.
Step 10: Your Turn
Have ideas for how to extend this? How about calling out the losers like we did with the winners? Sorting the list by votes? CSS to highlight the winners? Better layout & styling? Go forth and hack!
Thanks for Reading!
If you found this useful or interesting, please give the clap icon a few clicks below. Not only does that help other readers find this more easily, but also it tells me that I was able to help you out, which motivates me to write more guides like this one. Thanks again, and let me know what you build! | https://medium.com/@warrenfrancis/step-by-step-a-simple-vue-js-app-55f8eb3ffc63 | CC-MAIN-2018-39 | refinedweb | 2,315 | 72.26 |
Now that installation and client setup is complete, you can now start migrating your Microsoft OracleClient application code to ODP.NET. The first thing your application will need is a reference to ODP.NET.
Adding the ODP.NET namespace makes referencing ODP.NET objects easier. In this step, we will remove the Microsoft OracleClient namespace and replace it with the ODP.NET namespace.
Find the Microsoft OracleClient namespace reference and replace it with an ODP.NET reference. Optionally, if you plan to use ODP.NET-specific data types, you should add that namespace also.
Nearly all Microsoft OracleClient connection string attributes map to ODP.NET connection string attributes. Most applications will not need to make any changes to start using ODP.NET.
To migrate to ODP.NET, remove these attributes if they are part of the Microsoft OracleClient connection string.
If you use the "Server" attribute, change its name to " Data Source" for the ODP.NET equivalent attribute.
Both Microsoft OracleClient and ODP.NET use the colon to bind parameters and bind parameters by name. While ODP.NET supports binding by name, it binds by position by default. To match Microsoft OracleClient's parameter name binding behavior, add the following line to your code after each OracleCommand instantiation that you bind parameters to:
By now, you have completed the great majority of the work needed to migrate your Microsoft OracleClient application to ODP.NET. There are a few class differences between the two providers and any additional changes are generally straightforward. If you have any additional questions about Microsoft OracleClient to ODP.NET migration, feel free to email the ODP.NET product manager, Alex Keh (alex.keh [at] oracle.com).
Next step: ODP.NET Deployment. | http://www.oracle.com/technetwork/topics/dotnet/code-154692.html | CC-MAIN-2014-42 | refinedweb | 285 | 52.66 |
Completing Tag Names
IntelliJ IDEA automatically completes the names of tags, attributes and their values in files of the following types:DTD or Schema
If there is no schema association, IntelliJ IDEA will use the file content (tag and attribute names and their values) to complete your input.
In JSP/JSPX and XML/XSL files, completion for taglibs and namespaces is available.
You can have IntelliJ IDEA do one of the following:
- Insert a declaration of the taglib the tag in question belongs to.
- Import the desired taglib and insert all required import and reference statements.
To invoke the tag name completion
- Press < and start typing the tag name. IntelliJ IDEA displays the list of tag names appropriate in the current context. Use the ArrowUp and ArrowDown buttons to scroll through the list.
- Press Enter to accept selection from the list, or continue typing the tag name manually.
IntelliJ IDEA. IntelliJ IDEA adds the declaration of the selected taglib. | https://www.jetbrains.com/help/idea/2016.1/completing-tag-names.html | CC-MAIN-2018-05 | refinedweb | 159 | 62.38 |
Bummer! This is just a preview. You need to be signed in with a Pro account to view the entire video.
Adding the Password Reset Token8:38 with Jason Seifer
Our application is looking pretty good! People can sign in and out of the application and everything is scoped properly. Now it's time to allow people to reset their password if they forget it.
Code Snippets
Create the migration:
bin/rails g migration add_password_reset_token_to_users
Migration contains:
add_column :users, :password_reset_token, :string add_index :users, :password_reset_token
Migrate database:
bin/rake db:migrate bin/rake db:migrate RAILS_ENV=test
Create the password reset method token:
def generate_password_reset_token! update_attribute(:password_reset_token, SecureRandom.urlsafe_base64(48)) end
- 0:00
[MUSIC]
- 0:04
Okay, so our application is coming along pretty well.
- 0:08
We've got authentication set up.
- 0:10
We've scoped our todo list and todo items to the currently logged in user.
- 0:15
There is just one more problem, which is that we don't have a way for
- 0:19
a user to reset their password once they've forgotten it.
- 0:24
So if for some reason they can't log in, what we wanna do is send them an email and
- 0:30
give them some way to reset their password.
- 0:33
Now the way that we're gonna do this is by generating a random token for the user.
- 0:40
And we're gonna call that the Password Reset Token.
- 0:44
So in order to do that, we need to add an attribute to our user model,
- 0:49
which we're gonna call Password Reset Token.
- 0:53
Then what we'll do when somebody clicks forgot their password,
- 0:56
we'll generate a new random reset token and email them a link to it.
- 1:01
So let's go ahead and do that, we're gonna create a migration first.
- 1:06
And we'll call that add_password_reset_token_to_users.
- 1:21
Okay, now we've generated the migration.
- 1:24
Let's go ahead and open it up.
- 1:25
And this is a pretty easy migration to write, we're gonna add
- 1:30
a column on the :users table called :password_reset_token,
- 1:39
Which is a string.
- 1:42
And since we're gonna occasionally look that password_reset_token,
- 1:46
we need to add an index for it.
- 1:50
Now we'll go ahead and migrate our database.
- 1:56
And let's not forget to migrate our test database as well.
- 2:00
All right, so we've got that done, now what we need to do is write a method
- 2:06
that will change that password_reset_token and generate one for us.
- 2:11
Cuz that's what we're gonna call from our password resets controller,
- 2:15
as soon as we make it.
- 2:18
So let's go ahead and open up our user_spec here.
- 2:22
Now we'll follow this same pattern that we were doing before,
- 2:26
where we write our test first, and then we make it pass.
- 2:32
So, let's call this method that we're gonna write for the user,
- 2:36
generate_password_reset_token.
- 2:40
And, I'm adding an exclamation point to it,
- 2:42
because it is potentially a dangerous method.
- 2:45
And the first thing we'll say is that it changes
- 2:50
the password_reset_token attribute.
- 3:00
So we'll expect that calling generate_password_reset_token changes that
- 3:05
attribute.
- 3:06
We're gonna need a user to work with here, so we'll use factory girl to create one.
- 3:13
And we'll say, user.generate_password_reset_token
- 3:19
Changes.
- 3:27
That attribute.
- 3:28
Now you'll notice we're using curly braces here
- 3:32
instead of parentheses, the reason is when we do that we're having
- 3:37
it evoked as a block instead of just a method.
- 3:44
This will allow the code inside to actually run,
- 3:46
rather then just being called.
- 3:47
Let's go ahead and run that test, and watch it fail.
- 3:55
Okay, undefined method genereate_password_reset_token.
- 3:58
Makes sense, haven't written it yet.
- 4:05
Let's go ahead and open up our user model.
- 4:14
All right, so now we have that method in there.
- 4:18
Let's watch that part of the test fail.
- 4:22
Looks like I spelled it wrong here.
- 4:27
More than one time.
- 4:33
Okay, it should have changed, but it is still nil.
- 4:40
So the way that we're actually gonna generate this password
- 4:45
reset function is with a method called urlsafe_base64.
- 4:50
And what this does is generates a random urlsafe_base64 string.
- 4:58
So, the first argument is the length of the string.
- 5:04
And then the length of the result string is about four thirds of n.
- 5:08
So what we'll do is we get a string like this, and
- 5:12
then the reason we want your url safe is that is the address that we're gonna
- 5:16
send in the email for the user to go reset it to.
- 5:19
Now, this is a random string, so
- 5:22
we can be pretty sure that we're not gonna have any collisions.
- 5:27
We're probably not gonna have tons of users resetting their passwords at
- 5:30
the same time.
- 5:31
And, by the way,
- 5:32
this is on the secure random method in the Ruby standard library.
- 5:40
So the way that we'll do this is we'll say we'll
- 5:44
update the attribute, password_reset_token.
- 5:53
And we'll call that method securerandom.urlsafe_base64.
- 6:00
Let's run that test again.
- 6:06
Okay, that passes.
- 6:20
And let's go ahead and make sure that this method is actually called.
- 6:54
So the way we do that is we expect the secure random class to receive
- 6:59
urlsafe_base64.
- 7:01
And let's go ahead and run that.
- 7:05
Okay, now that passes.
- 7:07
Now, just to be sure, let's go ahead and
- 7:11
specify that we want this to be a certain length.
- 7:17
And that'll just make sure that our long random string is even longer.
- 7:24
Cool. Now, let's go ahead and
- 7:25
just see how that works in the console.
- 7:53
So here I am creating a user in the console.
- 7:57
Okay, the user is valid, so I'm gonna save it.
- 8:00
Okay, the user is saved, we could see that right here.
- 8:05
Now, let me go ahead and
- 8:07
call that generate_password_reset_token and see what happens.
- 8:14
So when I call that, it updates the user's table and
- 8:18
it sets that password reset token to that long random string.
- 8:22
Lets take a look at it.
- 8:24
And there we go.
- 8:26
So what we're going to do in the next screen cast is start our
- 8:32
password reset controller, which will call this password_reset_token method. | https://teamtreehouse.com/library/user-authentication-with-rails/password-resets-and-testing/adding-the-password-reset-token | CC-MAIN-2016-50 | refinedweb | 1,229 | 82.14 |
How to Use GraphQL with Ruby on Rails – Part 2 - React + Apollo Frontend
Andy Leverenz
Originally published at
web-crunch.com
on
・11 min read
Continuing on from Part 1 is the frontend portion of the tutorial. I’ll leverage React, Apollo and Tailwind CSS to build out the frontend of our Ruby on Rails and GraphQL API application.
The tools I’m reaching for include the following:
- React
- React Apollo
- Tailwind CSS Download the source code ## Carrying over from Part 1
Important note: I made an entire copy of the original app and created a new Github repo for you to download/reference. So if you’re coming from Part 1 you either need to carry on from it or clone the new repo.
Here are the steps I took to get the Rails API app up and running.
- Clone the part 1 repo
$ git clone [email protected]:justalever/graphql_fun.git graphql_fun_frontend $ cd/graphql_fun_frontend $ bundle install $ rails db:migrate $ rails db:seed $ rails server
The commands above should get you a booted Ruby on Rails API application with some seeded data to query with GraphQL.
Part 2 Setup
You could potentially separate your front-end completely from this project and have two separate apps communicating in tandem. We’ll be doing this but I’ll house the frontend app within the same repo as the Ruby on Rails app. Version control becomes a touch easier in my opinion for this but it also mixes concerns. To each their own so approach that how you wish.
Rails API
For our front-end app to communicate “securely ” with the Rails API app we need to add a new gem called
rack-cors. It should be commented out in your
Gemfile at this point. Uncomment it and run
bundle install
# Gemfile gem 'rack-cors'
Then, inside your
config/initializers/cors.rb file you can uncomment the code there to match the following:
# Be sure to restart your server when you modify this file. # Avoid CORS issues when API is called from the frontend app. # Handle Cross-Origin Resource Sharing (CORS) in order to accept cross-origin AJAX requests. # Read more: Rails.application.config.middleware.insert_before 0, Rack::Cors do allow do origins '*' resource '*', headers: :any, methods: [:get, :post, :put, :patch, :delete, :options, :head] end end
Important: When pushing this to a production environment you will want to change the
origins to whatever remote domains your app lives on i.e. (
origins 'web-crunch.com', 'staging.web-crunch.com') and so on.
React Frontend
Now on to the frontend portion. If you’ve been around the frontend scene for any amount of time recently you’ve probably heard of React. I won’t go into heavy detail of what React is or why you should/shouldn’t use it but rather direct you to the docs to see the benefits.
I personally am more of a Vue.js fan but React certainly has a large fan base.
All that aside, we’ll make use of
create-react-app to get things set up pretty darn fast.
$ yarn global add create-react-app
I added the
create-react-app module bundle globally so we could reference for other projects later. Consider this optional for your own system.
$ create-react-app frontend $ cd frontend $ yarn start
You may get a noticed about port 3000 being already in use. It will prompt you to use an alternate. I went ahead and said yes to the command. My frontend app now runs on
localhost:3001 in another browser tab.
To get a visual of the current directory structure I like to make use of
tree.
On a mac you can run
brew install tree to use it. Passing an
-I plus a string of folders/files will ignore those.
$ tree . -I 'node_modules' . ├── README.md ├── package.json ├── public │ ├── favicon.ico │ ├── index.html │ ├── logo192.png │ ├── logo512.png │ ├── manifest.json │ └── robots.txt ├── src │ ├── App.css │ ├── App.js │ ├── App.test.js │ ├── index.css │ ├── index.js │ ├── logo.svg │ └── serviceWorker.js └── yarn.lock 2 directories, 16 files
A few notes:
- I’m not going to be worrying about front-end tests here for brevity sake
- We can delete the logo images and svgs since we’ll use our own assets
Add Tailwind CSS
We need some dependencies installed to get Tailwind CSS dev-ready.
$ yarn add tailwindcss $ yarn add postcss-cli autoprefixer -D // Save for dev use only
Initialize a config file:
$ yarn tailwind init --full
This generates a default
tailwind.config.js file with the default scheme thanks to the
--full flag.
Inside
index.css lets scrap everything and add the tailwind directives.
/* frontend/index.css */ @tailwind base; @tailwind components; @tailwind utilities;
Add a
postcss.config.js file within
frontend
// frontend/postcss.config.js module.exports = { plugins: [ require('tailwindcss')('tailwind.config.js'), require('autoprefixer'), ] };
Let’s update our
package.json scripts section to account for Tailwind
"scripts": { "build:style": "tailwind build src/index.css -o src/tailwind.css", "start": "yarn build:style && react-scripts start", "build": "react-scripts build", "test": "react-scripts test", "eject": "react-scripts eject" },
Your results may vary here depending on your own folder structure. The general idea is that we’ll add styles to
index.css and output those to
tailwind.css as compiled styles.
If your server is running at this point you should restart it:
$ yarn start
My updated
frontend folder structure now looks like the following:
# graphql_fun_frontend/frontend $ tree . -I 'node_modules' . ├── README.md ├── package.json ├── postcss.config.js ├── public │ ├── favicon.ico │ ├── index.html │ ├── manifest.json │ └── robots.txt ├── src │ ├── components │ │ ├── App.js │ │ └── Users.js │ ├── index.css │ ├── index.js │ ├── serviceWorker.js │ └── tailwind.css ├── tailwind.config.js └── yarn.lock 3 directories, 15 files
Be sure to update your main
index.js imports and
components/App.js file. Notice I made a components folder for better organization as well. This is just a preference.
// src/index.js import React from 'react'; import ReactDOM from 'react-dom'; import './tailwind.css'; import App from './components/App'; import * as serviceWorker from './serviceWorker'; ReactDOM.render(<App />, document.getElementById('root')); serviceWorker.unregister();
And the
App.js file
// frontend/src/components/App.js import React from 'react'; function App() { return ( <div className="App"> </div> ); } export default App;
Apollo
You may ask why Apollo? My answer is…mostly because it’s the easier/faster solution to querying GraphQL via the front-end. Are there other approaches out there? I’m 100% sure there are but the Apollo team are what I’d consider the pioneers of the approach. We’ll follow their conventions in this tutorial.
I’ll be leveraging:
react-apollo– A React port for using Apollo within components.
apollo-boost– Apollo Boost is a zero-config way to start using Apollo Client. It includes some sensible defaults, such as our recommended
InMemoryCacheand
HttpLink, which come configured for you with our recommended settings.
graphql– GraphQL itself
$ yarn add react-apollo apollo-boost graphql
After those are installed we can extend
frontend/src/index.js to include the following:
// frontend/src/index.js import React from 'react'; import ReactDOM from 'react-dom'; import './tailwind.css'; import App from './components/App'; import * as serviceWorker from './serviceWorker'; import { ApolloProvider } from 'react-apollo'; import { ApolloClient } from 'apollo-client'; import { createHttpLink } from 'apollo-link-http'; import { InMemoryCache } from 'apollo-cache-inmemory'; const link = createHttpLink({ uri: '' // This is relative to our Rails API port running on 3000 }); const client = new ApolloClient({ link: link, cache: new InMemoryCache() }); ReactDOM.render( <ApolloProvider client={client}> <App /> </ApolloProvider>, document.getElementById('root') ); serviceWorker.unregister();
With the
client now passed down from
index.js, we can start writing GraphQL queries. Let’s start with a
Users.js component. Create a new file
src/components/Users.js. Within that file import the following.
// src/components/Users.js import React from 'react'; import { useQuery } from '@apollo/react-hooks'; import gql from 'graphql-tag'; import Gravatar from 'react-gravatar';
We added one more dependency here for Gravatars.
$ yarn add react-gravatar # a handy gravatar package
Next, we can build a familiar query from Part 1. The file then becomes a bit longer.
// src/components/Users.js import React from 'react'; import { useQuery } from '@apollo/react-hooks'; import gql from 'graphql-tag'; import Gravatar from 'react-gravatar'; const GET_USERS = gql` { users { id name email postsCount } } `;
Finally, we can build our
Users component and pipe in the data. We’ll leverage Tailwind CSS for styling here. This also makes use of React hooks.
import React from 'react'; import { useQuery } from '@apollo/react-hooks'; import gql from 'graphql-tag'; import Gravatar from 'react-gravatar'; const GET_USERS = gql` { users { id name email postsCount } } `; function Users() { const { loading, error, data } = useQuery(GET_USERS); if (loading) return 'Loading...'; if (error) return `Error ${error.message}`; return ( <div className="flex flex-wrap items-center"> {data.users.map(user => ( <div class="lg:w-1/3 w-full p-4 border" key={user.id}> <Gravatar email={user.email} size={150} <h3 className="font-bold text-xl">{user.name}</h3> <p className="text-gray-500">{user.email}</p> <p className="text-gray-500">{user.postsCount} posts</p> </div> ))} </div> ); } export default Users;
Within it, we destructure
{ loading, error, data } variables for use. The main one being
data which is what comes back thanks to our GraphQL query.
To actually render this component we need to import it inside
App.js
// frontend/src/components/App.js import React from 'react'; import Users from './Users'; class App extends React.Component { render() { return ( <div className="container mx-auto px-4"> <Users /> </div> ); } } export default App;
That gets us some basic stuff in the view!
User Profile & Posts View
Let’s create a singular profile page called
User.js inside
src/components/User.js. I’ll be using React Hooks where possible as we digress a bit further in creating more components. You can opt for the traditional React component approach as well. You’ll find I mix and match a bit.
For our User component, I went ahead and cleaned up a bit of code to extract some bits into smaller components. The
UserAvatar component now can be used everywhere we want it as a result. It accepts a user prop.
First, we need to import those dependencies and components.
// frontend/src/components/User.js import React from 'react'; import { useQuery } from '@apollo/react-hooks'; import gql from 'graphql-tag'; import UserAvatar from './UserAvatar'; import Posts from './Posts';
Then add the
gql query
// frontend/src/components/User.js const GET_USER = gql` query User($id: ID!) { user(id: $id) { posts { id title } } } `;
And finally, the React Hook itself
function User({ user, selectUser }) { const { loading, error, data } = useQuery(GET_USER, { variables: { id: user.id } }); if (loading) return 'Loading...'; if (error) return `Error ${error.message}`; return ( <React.Fragment> <div className="flex flex-wrap my-4"> <button className="bg-gray-200 hover:bg-gray-400 text-gray-900 font-bold py-2 px-4 rounded" onClick={selectUser.bind(this, null)}> Back </button> </div> <div className="flex flex-wrap items-start mb-4"> <div className="lg:w-1/4 w-full rounded text-center"> <UserAvatar user={user} /> </div> <div className="px-4 flex-1 w-full"> <Posts posts={data.user.posts} user={user} /> </div> </div> </React.Fragment> ); } export default User;
There is some code we reference here that hasn’t been addressed yet so let’s do that now.
// frontend/src/components/UserAvatar.js import React from 'react'; import Gravatar from 'react-gravatar'; const UserAvatar = ({ user }) => ( <React.Fragment> <Gravatar email={user.email} size={200} <div className="px-6 py-4"> <div className="font-bold text-xl mb-2">{user.name}</div> <p className="text-gray-500 text-sm">{user.email}</p> <p className="text-gray-500 text-base">{user.postsCount} posts</p> </div> </React.Fragment> ) export default UserAvatar;
Above is the
UserAvatar component. It wraps our
react-gravatar import into a nice reusable package for us.
// frontend/src/components/Posts.js import React from 'react'; function Posts({ posts, user }) { return ( <React.Fragment> <div className="lg:pl-10"> <h1 className="font-bold mb-4">Posts from {user.name}</h1> {posts.map(post => ( <div key={post.id}> <div className="p-6 shadow mb-4"> <h3 className="text-2xl font-bold text-gray-800">{post.title}</h3> </div> </div> ))} </div> </React.Fragment> ); } export default Posts;
Next is the
Posts component which accounts for the rendering of each user’s posts.
Update the main App.js Component
// frontend/src/components/App.js import React from 'react'; import User from './User'; import Users from './Users'; class App extends React.Component { state = { selectedUser: null } selectUser = (user) => { this.setState({ selectedUser: user }) } render() { return ( <div className="container mx-auto px-4"> {this.state.selectedUser ? <User user={this.state.selectedUser} selectUser={this.selectUser} /> : <Users selectUser={this.selectUser} />} </div> ); } } export default App;
Here we use a traditional React component and some state to manage if a user is indeed selected. If there’s an
onClick fired we see a
User profile instead of the
Users listing.
Create a User
Creating a user requires GraphQL Mutations. Our approach will be similar to our other components with a few variances.
Create a new component called
CreateUser.js. Inside I added the following:
import React, { Component } from 'react'; import gql from "graphql-tag"; import { Mutation } from "react-apollo"; const CREATE_USER = gql` mutation CreateUser($name: String!, $email: String!) { createUser(input: { name: $name, email: $email }) { user { id name email postsCount } errors } } `; class CreateUser extends Component { state = { name: '', email: '' } onSubmit = (e, createUser) => { e.preventDefault(); createUser({ variables: this.state }); this.setState({ name: '', email: '' }); } render() { return ( <Mutation mutation={CREATE_USER} update={this.props.onCreateUser}> {createUserMutation => ( <div className="lg:fixed bottom-0 left-0 w-full bg-white border-t border-gray-300"> <form className="lg:px-8 pt-2 pb-2" onSubmit={e => this.onSubmit(e, createUserMutation)}> <div className="lg:flex flex-wrap flex-between items-center justify-center lg:p-0 p-6"> <h4 className="font-bold lg:pr-4 mb-2">Create new user</h4> <div className="lg:pr-4 mb-2"> <input className="border rounded w-full py-2 px-3" type="text" value={this.state.name} <input className="border rounded w-full py-2 px-3" type="email" value={this.state.email} Create User </button> </div> </form> </div> )} </Mutation> ); } } export default CreateUser;
I chose to use traditional React render props instead of React hooks for this component. Being newer to react this version made more sense to me. We’re setting some state relative to the User object. To create a new user we need an email and name. Adding those happen on the frontend with a form. Using state we can capture events
onChange to fire the
setState method.
When the form is submitted we call a method
createUser where we pass in the state. Once the state updates our GraphQL mutation is finally called.
In the end, the UI looks like the following:
The form is fixed to the bottom of the browser window but you can see I’ve added a couple of my own accounts with gravatar images.
Wrapping Up
We’ve come a long way. GraphQL + React + Ruby on Rails can be a very powerful combo. I invite you to extend this app to account for creating posts as well. You’ll need to add new queries on both the backend and frontend to achieve this result.
If you followed along this far, I can’t thank you enough. Be sure to check out my other content as well as my YouTube channel to see more videos there.
If you’re brand new to Ruby on Rails I also created a full course on it called Hello Rails. It’s 90 videos of jam-packed knowledge about the awesome framework I use every day.
The post How to Use GraphQL with Ruby on Rails – Part 2 appeared first on Web-Crunch.
This website, DEV, is a social media platform designed specifically for developers.
Join Now. 100% Free Forever.
What do you code to relax?
My projects have me jumping through hoops of a wide variety of technology, talkin...
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/justalever/how-to-use-graphql-with-ruby-on-rails-part-2-react-apollo-frontend-19gf | CC-MAIN-2020-05 | refinedweb | 2,657 | 50.73 |
The “Internet of Stranger Things” Wall, Part 3 – Voice Recognition and Intelligence
Overview
I called this project the “Internet of Stranger Things,” but so far, there hasn’t been an internet piece. In addition, there really hasn’t been anything that couldn’t be easily accomplished on an Arduino or a Raspberry Pi. I wanted this demo to have more moving parts to improve the experience and also demonstrate some cool technology.
First is voice recognition. Proper voice recognition typically takes a pretty decent computer and a good OS. This isn’t something you’d generally do on an Arduino alone; it’s simply not designed for that kind of workload.
Next, I wanted to wire it up to the cloud, specifically to a bot. The interaction in the show is a conversation between two people, so this was a natural fit. Speaking of “natural,” I wanted the bot to understand many different forms of the questions, not just a few hard-coded questions. For that, I wanted to use the Language Understanding Intelligent Service (LUIS) to handle the parsing.
This third and final post covers:
- Adding Windows Voice Recognition to the UWP app
- Creating the natural language model in LUIS
- Building the Bot Framework Bot
- Tying it all together
You can find the other posts here:
- Part 1 – Introduction and Remote Wiring
- Part 2 – Constructing the wall and adding Music
- Part 3 – Adding voice recognition and intelligence (this post)
If you’re not familiar with the wall, please go back and read part one now. In that, I describe the inspiration for this project, as well as the electronics required.
Adding Voice Recognition
In the TV show, Joyce doesn’t type her queries into a 1980s era terminal to speak with her son; she speaks aloud in her living room. I wanted to have something similar for this app, and the built-in voice recognition was a natural fit.
Voice recognition in Windows 10 UWP apps is super-simple to use. You have the option of using the built-in UI, which is nice but may not fit your app style, or simply letting the recognition happen while you handle events.
There are good samples for this in the Windows 10 UWP Samples repo, so I won’t go into great detail here. But I do want to show you the code.
To keep the code simple, I used two recognizers. One is for basic local echo testing, especially useful if connectivity in a venue is unreliable. The second is for sending to the bot. You could use a single recognizer and then just check some sort of app state in the events to decide if you were doing something for local echo or for the bot.
First, I initialized the two recognizers and wired up the two events that I care about in this scenario.
SpeechRecognizer _echoSpeechRecognizer; SpeechRecognizer _questionSpeechRecognizer; private async void SetupSpeechRecognizer() { _echoSpeechRecognizer = new SpeechRecognizer(); _questionSpeechRecognizer = new SpeechRecognizer(); await _echoSpeechRecognizer.CompileConstraintsAsync(); await _questionSpeechRecognizer.CompileConstraintsAsync(); _echoSpeechRecognizer.HypothesisGenerated += OnEchoSpeechRecognizerHypothesisGenerated; _echoSpeechRecognizer.StateChanged += OnEchoSpeechRecognizerStateChanged; _questionSpeechRecognizer.HypothesisGenerated += OnQuestionSpeechRecognizerHypothesisGenerated; _questionSpeechRecognizer.StateChanged += OnQuestionSpeechRecognizerStateChanged; }
The HypothesisGenerated event lets me show real-time recognition results, much like when you use Cortana voice recognition on your PC or phone. In that event handler, I just display the results. The only real purpose of this is to show that some recognition is happening in a way similar to how Cortana shows that she’s listening and parsing your words. Note that the hypothesis and the state events come back on a non-UI thread, so you’ll need to dispatch them like I did here.
private async void OnEchoSpeechRecognizerHypothesisGenerated( SpeechRecognizer sender, SpeechRecognitionHypothesisGeneratedEventArgs args) { await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { EchoText.Text = args.Hypothesis.Text; }); }
The next is the StateChanged event. This lets me alter the UI based on what is happening. There are lots of good practices here, but I took an expedient route and simply changed the background color of the text box. You might consider running an animation on the microphone or something when recognition is happening.
private SolidColorBrush _micListeningBrush = new SolidColorBrush(Colors.SkyBlue); private SolidColorBrush _micIdleBrush = new SolidColorBrush(Colors.White); private async void OnEchoSpeechRecognizerStateChanged( SpeechRecognizer sender, SpeechRecognizerStateChangedEventArgs args) { await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () => { switch (args.State) { case SpeechRecognizerState.Idle: EchoText.Background = _micIdleBrush; break; default: EchoText.Background = _micListeningBrush; break; } }); }
I have equivalent handlers for the two events for the “ask a question” speech recognizer as well.
Finally, some easy code in the button click handler kicks off recognition.
private async void DictateEcho_Click(object sender, RoutedEventArgs e) { var result = await _echoSpeechRecognizer.RecognizeAsync(); EchoText.Text = result.Text; }
The end result looks and behaves well. The voice recognition is really good.
So now we can talk to the board from the UWP PC app, and we can talk to the app using voice. Time to add just a little intelligence behind it all.
Creating the Natural Language Model in LUIS
The backing for the wall is a bot in the cloud. I wanted the bot to be able to answer questions, but I didn’t want to have the exact text of the question hard-coded in the bot. If I wanted to hard-code them, a simple web service or even local code would do.
What I really want is the ability to ask questions using natural language, and map those questions (or Utterances as called in LUIS) to specific master questions (or Intents in LUIS). In that way, I can ask the questions a few different ways, but still get back an answer that makes sense. My colleague, Ryan Volum, helped me figure out how LUIS worked. You should check out his Getting Started with Bots Microsoft Virtual Academy course.
So I started thinking about the types of questions I wanted answered, and the various ways I might ask them.
For example, when I want to know the location of where Will is, I could ask, “Where are you hiding?” or “Tell me where you are!” or “Where can I find you?” When checking to see if someone is listening, I might ask, “Are you there?” or “Can you hear me?” As you can imagine, hard-coding all these variations would be tedious, and would certainly miss out on ways someone else might ask the question.
I then created those in LUIS with each master question as an Intent, and each way I could think of asking that question then trained as an utterance mapped to that intent. Generally, the more utterances I add, the better the model becomes.
The above screen shot is not the entire list of Intents; I added a number of other Intents and continued to train the model.
For a scenario such as this, training LUIS is straight forward. My particular requirements didn’t include any entities or Regex, or any connections to a document database or Azure search. If you have a more complex dialog, there’s a ton of power in LUIS to be able to make the model as robust as you need, and to also train it with errors and utterances found in actual use. If you want to learn more about LUIS, I recommend watching Module 5 in the Getting Started with Bots MVA.
Once my LUIS model was set up and working, I needed to connect it to the bot.
Building the Bot Framework Bot
The bot itself was the last thing I added to the wall. In fact, in my first demo of the wall, I had to type the messages in to the app instead of sending it out to a bot. Interesting, but not exactly what I was looking for.
I used the generic Bot Framework template and instructions from the Bot Framework developer site. This creates a generic bot, a simple C# web service controller, which echoes back anything you send it.
Next, following the Bot Framework documentation, I integrated LUIS into the bot. First, I created the class which derived from LuisDialog, and added in code to handle the different intents. Note that this model is changing over time; there are other ways to handle the intents using recognizers. For my use, however, this approach worked just fine.
The answers from the bot are very short, and I keep no context. Responses from the Upside Down need to be short enough to light up on the wall without putting everyone to sleep reading a long dissertation letter by letter.
namespace TheUpsideDown { // Reference: // // Partial class is excluded from project. It contains keys: // // [Serializable] // [LuisModel("model id", "subscription key")] // public partial class UpsideDownDialog // { // } // public partial class UpsideDownDialog : LuisDialog<object> { // None [LuisIntent("")] public async Task None(IDialogContext context, LuisResult result) { string message = $"Eh"; await context.PostAsync(message); context.Wait(MessageReceived); } [LuisIntent("CheckPresence")] public async Task CheckPresence(IDialogContext context, LuisResult result) { string message = $"Yes"; await context.PostAsync(message); context.Wait(MessageReceived); } [LuisIntent("AskName")] public async Task AskName(IDialogContext context, LuisResult result) { string message = $"Will"; await context.PostAsync(message); context.Wait(MessageReceived); } [LuisIntent("FavoriteColor")] public async Task FavoriteColor(IDialogContext context, LuisResult result) { string message = $"Blue ... no Gr..ahhhhh"; await context.PostAsync(message); context.Wait(MessageReceived); } [LuisIntent("WhatIShouldDoNow")] public async Task WhatIShouldDoNow(IDialogContext context, LuisResult result) { string message = $"Run"; await context.PostAsync(message); context.Wait(MessageReceived); } ... } }
Once I had that in place, it was time to test. The easiest way to test before deployment is to use the Bot Framework Channel Emulator.
First, I started the bot in my browser from Visual Studio. Then, I opened the emulator and plugged in the URL from the project properties, and cleared out the credentials fields. Next, I started typing in questions that I figured the bot should be able to handle.
It worked great! I was pretty excited, because this was the first bot I had ever created, and not only did it work, but it also had natural language processing. Very cool stuff.
Now, if you notice in the picture, there are red circles on every reply. It took a while to figure out what was up. As it turns out, the template for the bot includes an older version of the NuGet bot builder library. Once I updated that to the latest version (3.3 at this time), the “Invalid Token” error local IIS was throwing went away.
Be sure to update the bot builder library NuGet package to the latest version.
Publishing and Registering the Bot
Next, it was time to publish it to my Azure account so I could use the Direct Line API from my client app, and also so I could make the bot available via other channels. I used the built-in Visual Studio publish (right click the project, click “Publish”) to put it up there. I had created the Azure Web App in advance.
Next, I registered the bot on the Bot Framework site. This step is necessary to be able to use the Direct Line API and make the bot visible to other channels. I had some issues getting it to work at first, because I didn’t realize I needed to update the credential information in the web.config of the bot service. The BotId field in the web.config can be most anything. Most tutorials skip telling you what to put in that field, and it doesn’t match up with anything on the portal.
As you can see, there are a few steps involved in getting the bot published and registered. For the Azure piece, follow the same steps as you would for any Web App. For the bot registration, be sure to follow the instructions carefully, and keep track of your keys, app IDs, and passwords. Take your time the first time you go through the process.
You can see in the previous screen shot that I have a number of errors shown. Those errors were because of that NuGet package version issue mentioned previously. It wasn’t until I had the bot published that I realized there was an error, and went back and debugged it locally.
Testing the Published Bot in Skype
I published and registered the bot primarily to be able to use the Direct Line channel. But it’s a bot, so it makes sense to test it using a few different channels. Skype is a pretty obvious one, and is enabled by default, so I hit that first.
Through Skype, I was able to verify that it was published and worked as expected.
Using the Direct Line API
When you want to communicate to a bot from code, a good way to do it is using the Direct Line API. This REST API provides an additional layer of authentication and keeps everything within a structured bot framework. Without it, you might as well just make direct REST calls.
First, I needed to enable the Direct Line channel in the bot framework portal. Once I did that, I was able to configure it and get the super-secret key which enables me to connect to the bot. (The disabled field was a pain to try and copy/paste, so I just did a view source, and grabbed the key from the HTML.)
That’s all I needed to do in the portal. Next, I needed to set up the client to speak to the Direct Line API.
First, I added the Microsoft.Bot.Connector.DirectLine NuGet package to the UWP app. After that, I wrote a pretty small amount of code for the actual communication. Thanks to my colleague, Shen Chauhan (@shenchauhan on Twitter), for providing the boilerplate in his Hunt the Wumpus app.
private const string _botBaseUrl = "(the url to the bot /api/messages)"; private const string _directLineSecret = "(secret from direct line config)"; private DirectLineClient _directLine; private string _conversationId; public async Task ConnectAsync() { _directLine = new DirectLineClient(_directLineSecret); var conversation = await _directLine.Conversations .NewConversationWithHttpMessagesAsync(); _conversationId = conversation.Body.ConversationId; System.Diagnostics.Debug.WriteLine("Bot connection set up."); } private async Task<string> GetResponse() { var httpMessages = await _directLine.Conversations .GetMessagesWithHttpMessagesAsync(_conversationId); var messages = httpMessages.Body.Messages; // our bot only returns a single response, so we won't loop through // First message is the question, second message is the response if (messages?.Count > 1) { // select latest message -- the response var text = messages[messages.Count-1].Text; System.Diagnostics.Debug.WriteLine("Response from bot was: " + text); return text; } else { System.Diagnostics.Debug.WriteLine("Response from bot was empty."); return string.Empty; } } public async Task<string> TalkToTheUpsideDownAsync(string message) { System.Diagnostics.Debug.WriteLine("Sending bot message"); var msg = new Message(); msg.Text = message; await _directLine.Conversations.PostMessageAsync(_conversationId, msg); return await GetResponse(); }
The client code calls the TalkToTheUpsideDownAsync method, passing in the question. That method fires off the message to the bot, via the Direct Line connection, and then waits for a response.
Because the bot sends only a single message, and only in response to a question, the response comes back as two messages: the first is the message sent from the client, the second is the response from the service. This helps to provide context.
Finally, I wired it to the SendQuestion button on the UI. I also wrapped it in calls to start and stop the MIDI clock, giving us a bit of Stranger Things thinking music while the call is being made and the result displayed on the LEDs.
private async void SendQuestion_Click(object sender, RoutedEventArgs e) { // start music StartMidiClock(); // send question to service var response = await _botInterface.TalkToTheUpsideDownAsync(QuestionText.Text); // display answer await RenderTextAsync(response); // stop music StopMidiClock(); }
With that, it is 100% complete and ready for demos!
What would I change?
If I were to start this project anew today and had a bit more time, there are a few things I might change.
I like the voice recognition, Bot Framework, and LUIS stuff. Although I could certainly make the conversation more interactive, there’s really nothing I would change there.
On the electronics, I would use a breadboard-friendly Arduino, not hot-glue an Arduino to the back. It pains me to have hot-glued the Arduino to the board, but I was in a hurry and had the glue gun at hand.
I would also use a separate power supply for LEDs. This is especially important if you wish to light more than one LED at a time, as eventually, the Arduino will not be able to support the current draw required by many LED lights.
If I had several weeks, I would have my friends at DF Robot spin a board that I design, rather than use a regular breadboard, or even a solder breadboard. I generally prefer to get boards spun for projects, as they are more robust, and DF Robot can do this for very little cost.
Finally, I would spend more time to find even uglier wallpaper <g>.
Here’s a photo of the wall, packaged up and ready for shipment to Las Vegas (at the time of this writing, it’s in transit), waiting in my driveway. The box was 55” tall, around 42” wide and 7” thick, but only about 25 lbs. It has ¼” plywood on both faces, as well as narrower pieces along the sides. In between the plywood is 2” thick rigid insulating foam. Finally, the corners are protected with the spongier corner form that came with that box.
It costs a stupid amount of money to ship something like that around, but it’s worth it for events. 🙂
After this, it’s going to Redmond where I’ll record a video walkthrough with Channel 9 during the second week of November.
What Next?
Windows Remote Wiring made this project quite simple to do. I was able to use the tools and languages I love to use (like Visual Studio and C#), but still get the IO of a device like the Arduino Uno. I was also able to use facilities available to a UWP app, and call into a simple bot of my own design. In addition to all that, I was able to use voice recognition and MIDI all in the same app, in a way that made sense.
The Bot Framework and LUIS stuff was all brand new to me, but was really fun to do. Now that I know how to connect app logic to a bot, there will certainly be more interactive projects in the future.
This was a fun project for me. It’s probably my last real maker project of the fall/winter, as I settle into the fall home renovation work and also gear up for the NAMM music event in January. But luckily, there have been many other posts here about Windows 10 IoT Core and our maker and IoT-focused technology. If this topic is interesting to you, I encourage you to take a spin through the archives and check them out.
Whatever gift-giving and receiving holiday you celebrate this winter, be sure to add a few Raspberry Pi 3 devices and some Arduino Uno boards on your list, because there are few things more enjoyable than cozying up to a microcontroller or some IoT code on a cold winter’s day. Oh, and if you steal a strand or two of lights from the tree, I won’t tell. 🙂
Resources
- My stranger things repo on GitHub
- Windows Remote Wiring
- Windows 10 IoT Core
- LUIS
- Netflix series: Stranger Things
- com synth sounds of Stranger Things
- Pete’s MIDI helper library
- Pete’s SoundCloud
Questions or comments? Have your own version of the wall, or used the technology described here to help rid the universe of evil? Post below and follow me on Twitter @pete_brown
Most of all, thanks for reading!
Updated June 28, 2018 7:43 am
Join the conversation | https://blogs.windows.com/buildingapps/2016/11/04/the-internet-of-stranger-things-wall-part-3-voice-recognition-and-intelligence/ | CC-MAIN-2018-30 | refinedweb | 3,269 | 63.39 |
To get familiar with WordPress REST API, I fired up Python started playing with the
requests module, the elegant and simple HTTP library for Python, built for human beings. Python and the WordPress REST API Handbook gave me enough information to get started.
The default cookies authentication mechanism would not work with Python and you need to install a plugin for this sort of application. See this tutorial on how to use the WordPress OAuth 1.0a plugin.
For a quick demo, the Application Passwords plugin for WordPress is the easiest choice. The installation is straightforward, and the instructions to generate a password for the client application are quite simple to follow. I needed to modify the .htaccess file due to the way my DreamPress site is configured. Notice: the password generated by the plugin contains white spaces, and they are part of the password. Once the WordPress authentication is taken care of, it’s all much easier.
On to Python, get
json and
requests modules ready, set the base URL of your WordPress site, your username and the application password generated by the plugin (including the white spaces):
import requests import json user = 'username' pythonapp = 'G4kN hBNh r35J luXk aXyd n6Lm' url = ''
The Application Password plugin requires a token made of username and password encoded in base64, so let python create it and add that to the http headers:
token = base64.standard_b64encode(user + ':' + pythonapp) headers = {'Authorization': 'Basic ' + token}
Next, prepare some demo content for a new post. WordPress API expect a JSON object:
post = {'date': '2017-06-19T20:00:35', 'title': 'First REST API post', 'slug': 'rest-api-1', 'status': 'publish', 'content': 'this is the content post', 'author': '1', 'excerpt': 'Exceptional post!', 'format': 'standard' }
That’s all you need to create a post:
r = requests.post(url + '/posts', headers=headers, json=post) print('Your post is published on ' + json.loads(r.content)['link'])
You’ll see printed on screen the URL of the new post. That was fun and quick
Now let’s publish an image: first you have to add it to the Media library (as you would when you use WordPress admin panel). To publish images you’ll need to use the
media API endpoint. For the example, use the file
demo.jpg in the current directory
media = {'file': open('demo.jpg',rb), 'caption': 'My great demo picture'}
And let Python
requests do the heavy lifting:
image = requests.post(url + '/media', headers=headers, files=media) print('Your image is published on ' + json.loads(image.content)['link'])
That should give you the URL for the image you just uploaded. Now to embed that image in a post, we can edit its content. To update a post we need to find its ID and push to the API endpoint the new value using JSON.
imgsrc = json.loads(up.content)['source_url'] postid = json.loads(r.content)['id'] updatedpost = {'content': 'Changed things.<img src=' + imgsrc + '>'}
Update the post with the new content:
update = requests.post(url + '/posts/' + postid, headers=headers, json=updatedpost) print('The updated post is published on ' + json.loads(updatedpost.content)['link'])
And that’s all: you created a new post, added an image to WordPress media library and modified a post using only Python and the REST API. | https://discussion.dreamhost.com/t/how-to-post-content-to-wordpress-using-python-and-rest-api/65166 | CC-MAIN-2018-13 | refinedweb | 540 | 63.49 |
An isomorphic web app gives you the best of both server side rendering and single page application (SPA).
TL;DR
To get the code up and running on localhost:3000
$ git clone
$ cd isomorphic-router-demo
$ npm install
$ npm start
This is the repository to check out the code:
Motivation
There have been many articles written about the benefits of an isomorphic web app, like this, this, this, and this. There’s a book being written about it. I like to think of an isomorphic web app as SPA 2.0. It’s SPA with server side rendering.
A SPA is a client side (browser rendered) application that bootstraps the entire website after initial load. This means when you visit example.com using your browser, example.com’s server sends a HTML template and some javascript for your browser to execute the code that renders the actual content of the webpage. Because the tight coupling of the code that creates the DOM and the DOM, SPAs can handle complex DOM manipulation.
We are all familiar with the features of a SPA: quick response to user input, highly interactive webpages (think google docs), and ability to use it offline once the page loads. Most importantly for a startup founder such as myself who’s trying to quickly create a prototype of a website with some dummy data, a SPA lets you build a website independently from a server application. In many cases, you can get away with not building a server application at all if you used a sophisticated front end library like React, Amazon S3 for hosting, and data you store in a CSV file. That’s exactly what I did for LooseLeaf.
This separation of concerns improves productivity initially when you are prototyping a MVP for your website, but there’s a point of diminishing returns for a website deployed as a SPA that talks to a server with an API for data. The main disadvantages of this approach are:
Long load time (bad UX)
because the website is bootstrapped, it takes some time for the page content to display itself after the initial load. Initial load occurs when you type the example.com into your browser and press enter. Whatever the browser gets back from the initial load is whatever the server sends.
If the server sends a blank HTML template and javascript to render stuff into that template, then the user will see a blank page and a maybe a page loading animation. How long the user has to wait until something is displayed scales with the complexity of the webpage and how fast their internet service is so on a mobile device, pages tend to load much slower.
Bad SEO (bad for business)
Search engines and social sharing are two of the most important means of acquiring new users.
Think of search engine optimization (SEO) as ways to get Google to rank your webpage higher on the list to relevant query searches. For Google to rank your webpage content, it needs to know what content’s in your webpage. Google deploys an army of crawlers, which are just programs that make requests to webpages, look at the response, scrap content off the HTML, and look at how to rank that webpage amongst other webpages on the internet based on relevance. These crawlers don’t generally run JavaScript or wait around for a long time for the page to render itself. If your webpage gives the crawler blank pages on initial load, then Google will not know what your page is about to accurately place your webpage high up on the hits list when a relevant search query is entered on google.com.
The same thing happens with social media sites like Facebook and Twitter sharing who have their own army of crawlers to render a preview of the page based on meta tags in the header of HTML. The header is rendered on the server side and don’t change when the content changes based on dynamic loading when the webpage is bootstrapped in the browser. This means if you have a website that sells books and a SPA that uses the same template HTML to render different pages for different books, then when you share a link to the page for a particular book on Facebook, the preview will display a generic preview about your website which says something like it’s a place which sells thousands of titles, but will not display any unique information for the particular book. This article did a good job laying out the limitation of a SPA in its ability to generate unique header for social sharing and how to use server rendering to solve that.
What about Pure Server Rendering Solutions?
If you are reading this, that means I’ve convinced you that a simple SPA is not the way to go. A pure server side application is not the way to go either because from a development standpoint, we want to be able to build our client application and server application separately. From a user experience standpoint, once a SPA is fully loaded, user experience may greatly exceed that of a server-rendered webpage. Also I don’t want the entire page to reload every time I click a button.
So the shortcoming of a pure SPA is in the initial load. The shortcoming of the pure server rendering solution is with what happens after the initial load. What can we do to get the best of both worlds? 🤔
Setting up an isomorphic web app
Client side rendering and server side rendering complement each other. We can build an isomorphic web app that enhances the capability of a server rendered page with a SPA. The isomorphic web app starter project I’m about to introduce takes advantage of the fact that JavaScript is used to build both the client application and the server application. This promotes code reusability as we can use the same code to render the SPA as well as the HTML for the server to send for the initial load.
All the code is contained in this repository. I’ll be walking through snippets of code from there for the remainder of this article.
isomorphic-router-demo - demo project to show you how to set up an isomorphic webapp using React Router 4 and…github.com
The Stack
The stack for this starter project include Node, Express, React, React Router 4, and react-router-config, babel, and Webpack 4. I’m not using any third party universal application middleware or frameworks such as React Universal Component or Loadable Component.
File Structure
The project is divided into server-specific code, client-specific code for rendering the SPA, and shared code, which support both server and client rendering.
~/isomorphic-router-demo$ tree -l 3 --ignore 'node_modules'
/isomorphic-router-demo
├── build
| └── main.bundle.js
├── client
| └── main.js
├── iso-middleware
| └── renderRoute.js
├── package.json
├── .babelrc
├── .env
├── server
| ├── run.js
| └── server.js
├── shared
| ├── App.js
| ├── components
| | ├── About.js
| | ├── HTML.js
| | ├── TopNav.js
| | ├── Home.js
| | ├── Main.js
| | └── NotFound.js
| └── routes.js
└── webpack.config.js
Shared Code
The main entry point for the shared code is the
<App> component:
// shared/App.js
import React from 'react';
import TopNav from './components/TopNav';
import Main from './components/Main';
const App = () => (
<div>
<TopNav />
<Main />
</div>
);
export default App;
It’s a pretty standard top React component, which uses sub-components to render different pages.
<TopNav> defines navigation around the app using React Router’s
<Link> component:
// shared/components/TopNav.js
import React from 'react';
import { Link } from 'react-router-dom';
export default () => (
<nav>
<div className="nav-wrapper">
<a href="/" className="brand-logo">Demo</a>
<ul id="nav-mobile" className="right">
<li><Link to="/">Home</Link></li>
<li><Link to="/about">About</Link></li>
<li><Link to="/foo">Foo</Link></li>
</ul>
</div>
</nav>
);
The mapping for which page to serve based on the route is contained in
routes.js, which is imported into the
<Main> component.
// shared/routes.js
import Home from './components/Home';
import About from './components/About';
import NotFound from './components/NotFound';
const routes = [
{
path: '/',
exact: true,
component: Home
},
{
path: '/about',
component: About
},
{
path: '*',
restricted: false,
component: NotFound
}
];
export default routes;
In the
<Main> component, the
react-router-config renderRoutes function is used to generates the
<Route> component based on the path to component mapping defined in
routes.
import React from 'react';
import { Switch } from 'react-router-dom';
import { renderRoutes } from 'react-router-config';
import routes from '../routes';
const Main = () => (
<Switch>
{renderRoutes(routes)}
</Switch>
);
export default Main;
As defined in
routes, the pages you can render are Home, About, and NotFound, as shown below:
Server Rendering
The idea is we want the server to send the HTML for the Home page when the we type
localhost:3000 into the browser and press enter.
We begin with mounting the middleware function for rendering the app to the server application as shown in
server.js as below:
// server/server.js
import express from 'express';
import renderRouterMiddleware from '../iso-middleware/renderRoute';
// ...
app.get('*', renderRouterMiddleware);
// ...
renderRouterMiddleware contains all the logic for creating the HTML string using the components in the
shared folder.
renderRouterMiddleware is one of the most important file in our project, because it has the logic for making the app isomorphic.
For the most part, the server side rendering part of the code is pretty boilerplate, but has the secret ingredients for server side rendering of an HTML that makes it possible for the client application take over subsequent to the initial load. Specifically, the
<HTML> component, which is imported here for the server side rendering contains the tie to the client application. But before I show you the code
<HTML> we need to go over a few other things.
Side Note: Another thing worth noting is that for server rendering, we want to wrap our
<App> in React Router’s
<StaticRouter> component before converting everything to using React’s
renderToString function. For client rendering, which we are going to discuss next, we want to use
<BrowserRouter>.
Client Rendering
The server code below provides the browser all the code that the browser needs to render a SPA:
// server/server.js
// ...
const buildPath = path.join(__dirname, '../', 'build');
app.use('/', express.static(buildPath));
app.use(express.static(__dirname));
// ...
This block of code is telling the server to serve static assets from the build folder, to
localhost:3000/.
As shown in the File Structure above, there’s only one file —
main.bundle.js — in the build folder. If you type
localhost:3000/main.bundle.js into your browser, you’ll see a bunch of JavaScript, which contains code from our
shared folder that has been transpiled down from ES6 to an earlier version of JavaScript.
main.bundle.js is created by Webpack. In
package.json, scripts have been set up to execute a build before starting the server so the
main.bundle.js is rebuilt every time we start the server.
The build definition is in our
webpack.config.js file, which defines
./client/main.js as the entry for the build.
main.js and everything it uses are bundled up into
main.bundle.js. Here’s the code for
main.js:
// client/main.js
import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter } from 'react-router-dom';
import App from '../shared/App';
const renderRouter = Component => {
ReactDOM.hydrate(
<BrowserRouter>
<Component />
</BrowserRouter>, document.getElementById('root')
);
};
renderRouter(App);
The first thing you probably noticed is that
ReactDOM.hydrate is used instead of
ReactDOM.render. This is because we want to attach the client rendered app to the server rendered HTML’s
root div. Although our app, which uses React v16, will work with
ReactDOM.render, React gives you a warning which says: “ ReactDOM.render() to hydrate server-rendered markup will stop working in React v17. Replace the ReactDOM.render() call with ReactDOM.hydrate() if you want React to attach to the server HTML.”
Remember I said earlier the
<HTML> component is the tie between server rendered HTML and the client rendered application?
<HTML>, which is used by the server to render the HTML for the initial load, creates a
root div and dynamically loads
main.bundle.js in a script tag. This is what makes this isomorphic app work!
Isomorphic App In Action
After starting up the app using
npm start, type
localhost:3000 into your browser address bar, press enter, and you’ll see the home page is rendered after the page load wheel spins a bit in the browser tab.
The page load wheel spinning indicates the server has done some work to deliver this page to you. If you clicked About and Foo from the NavBar, you’ll see the About page and NotFound page load up without any page load wheel spinning in the browser tab. This tells you that the SPA mode has been kicked in and is handling the page navigation based on click events. In fact, the app can even run when you stop the server. Go ahead and stop the server from the terminal to see that you can still click around to load the pages just like before…but with one difference:
Instead of a message from the server and a random quote, you see the word “Loading”.
This is by design. I’ve tried to make the app more interesting by having it deliver a random inspirational quote to the Home page every time you navigate to the Home page via client app TopNav or loading it directly from the server.
This is also to demonstrate a common design pattern in modern web applications whereby parts of the page content is loaded after the page has loaded via an asynchronous fetch from an API.
The
<Home> component fetches some data from two API endpoints right after it mounts.
// shared/components/Home.js
import React from 'react';
import fetch from 'isomorphic-fetch';
class Home extends React.Component {
constructor(props) {
super(props);
this.state = {
resHello: 'Loading...',
resQuote: 'Loading...'
};
}
componentDidMount() {
// Get hello message
this.callApi('')
.then(res => this.setState({ resHello: res.express }))
.catch(err => console.log(err));
// Get random quote
const rand = Math.random();
this.callApi(`{rand}`)
.then(res => this.setState({ resQuote: res.express }))
.catch(err => console.log(err));
}
callApi = async function (endpoint) {
const response = await fetch(endpoint);
const body = await response.json();
if (response.status !== 200) throw Error(body.message);
return body;
}
render() {
console.log('rendering: Home');
return (
<div className="container">
<h1>Home page</h1>
<h6>
{`Message from the server: ${this.state.resHello}`}
</h6>
<h5>Random Quote</h5>
<blockquote>
{this.state.resQuote}
</blockquote>
</div>
);
}
}
export default Home;
The server code responsible for delivering data to those API end points are shown here:
// server/server.js
import apiVersion1 from './api/api1';
// ...
app.use('/api', apiVersion1);
// ...
and here:
// server/api/api1.js
import express from 'express';
const api = express.Router();
// const quotes = ... too long to write it out here
api.get('/quote/:rand', (req, res) => {
const rand = parseFloat(req.params.rand);
if (Number.isNaN(rand)) {
res.send({ express: 'Bad request.' });
return;
}
const randomQuote = quotes[randomInd(rand)];
res.send({ express: `${randomQuote}` });
});
module.exports = api;
For simplicity, I hardcoded the
quotes JSON directly in the api code but you can have the quotes be in a
quotes.json file or stored in a database and use express middleware to fetch them before use.
Bottomline
An isomorphic web app supports both server side rendering and dynamic rendering of webpages. This provides a great amount of flexibility to develop your web apps and to get great user interface as well as good SEO and quick load time. But with great power, comes great responsibility. As web developers, we have to decide how to partition our web apps into smaller isomorphic applications, SPAs, or server rendered pages. We have to decide what to render dynamically in the browser and what to do only on the server side based on what makes more sense for the app we are building.
Thanks for reading!
Once again, this is the repo for the isomorphic starter project:
I read many tutorials to set up this project with the latest and greatest stack. Attributions are provided in the repo’s README file.
I hope this article and starter project can help more people understand the motivation and concept behind an isomorphic web app and start using it. | https://hackernoon.com/get-an-isomorphic-web-app-up-and-running-in-5-minutes-72da028c15dd | CC-MAIN-2019-47 | refinedweb | 2,716 | 63.59 |
Handling stack overflow on custom stacks
On my computer, the callstack of a new process is around 10MB [1]. Modern operating system automatically reserve some amount of virtual memory and install protections on the page below the stack to create a segmentation fault on stack overflow. This ensures that a stack overflow won’t go corrupting random parts of memory.
We want to have a lot of coroutines, so they should have smaller stacks, maybe 16 or 64KB. This makes stack overflow an even greater possibility, but at the same time, coroutines implemented in user space don’t get this checking for free–we have to build it ourselves. In the process, we can even do better: we can give some information about the coroutine which crashed.Here’s the idea:
- Memory-protect the page on the bottom of the stack (remember, x86 callstacks grow down) to disallow reads and writes.
- Install a signal handler for SIGSEGV.
- In the handler, examine which address caused the page fault:
- If the address is within the protected page of a coroutine, report a callstack overflow for the coroutine, with a stack backtrace and some information about the coroutine that crashed.
- Otherwise, report a generic memory fault at the address, with a stack backtrace.
I’m treating stack overflow as a fatal error here, but a more nuanced approach is possible: rather than killing the whole database, it could just kill that coroutine and return to the scheduler. But this particular kind of error tolerance would require broad modifications to RethinkDB which we’re not ready to do. I could also make stack overflow resize the stack to be larger, but this is difficult in C++ because there might be pointers into the stack.
Nothing here is complicated to implement, but it involves the interaction of a few different system calls, which I’ll explain in this article.
Manipulating memory protection
We want to allocate the stack and make the bottom page unreadable and
unwritable. The
mprotect system call manipulates memory protection, and
getpagesize tells us how big a page is (it might not be 4KB).
valloc makes a page-aligned memory allocation.
void *stack = valloc(stack_size); mprotect(stack, getpagesize(), PROT_NONE);
When deallocating the stack, be sure to reset the protection to what it was before.
mprotect(stack, getpagesize(), PROT_READ|PROT_WRITE); free(stack);
Installing a signal handler
In order to catch the segfault, we have to install a signal handler. The
signal system call won’t cut it–it just doesn’t give us enough
information about what happened. Instead, we have to use
sigaction, which
takes a whole struct of parameters, not just a function pointer, for how to
handle the signal. One struct member is
sa_flags. We have to turn on the
SA_ONSTACK flag in order to use a user-provided stack (see below) and the
SA_SIGINFO flag, in order to call a function with more information. If
SA_SIGINFO is set, then we can set the
sa_sigaction member to a function
which takes a
siginfo_t struct as an argument. The
si_addr member of that
struct contains the address of the location which caused the fault. All
together, the code for establishing the page handler is as follows:
struct sigaction action; bzero(&action, sizeof(action)); action.sa_flags = SA_SIGINFO|SA_STACK; action.sa_sigaction = &sigsegv_handler; sigaction(SIGSEGV, &action, NULL);
The signal handler itself will print out the CPU where the coroutine was
initialized, but it would be easy to extend to support printing other metadata
contained in the coroutine.
int coro_t::in_coro_from_cpu(int) reports which
CPU a coroutine was initialized on, or -1 if the address was not from the
protected page of a coroutine stack.
crash will cause the program to
terminate with the given error message, together with a stack trace.
void sigsegv_handler(int signum, siginfo_t *info, void *data) { void *addr = info->si_addr; int info = coro_t::in_coro_from_cpu(addr); if (cpu == -1) { crash("Segmentation fault from reading the address %p.", addr); } else { crash("Callstack overflow from a coroutine initialized on CPU %d at address %p.", cpu, addr); } }
Installing a special stack for the signal handler.
To make it work, we have to provide an alternate stack to execute the signal
handler on. The system call to install this stack is called
sigaltstack.
As a parameter, it takes a
stack_t, which consists of a pointer to the base
of the stack, the size of the stack, and some flags that aren’t relevant for
our purposes.
stack_t segv_stack; segv_stack.ss_sp = valloc(SEGV_STACK_SIZE); segv_stack.ss_flags = 0; segv_stack.ss_size = SEGV_STACK_SIZE; sigaltstack(&segv_stack, NULL);
SEGV_STACK_SIZE doesn’t have to be so big, but it has to be big enough to
call
printf from. The
MINSIGSTKSZ constant indicates how big a stack has to
be to execute any signal handler at all. To be on the safe side, used that
constant plus 4096 for
SEGV_STACK_SIZE.
sigaltstack should be called before
the associated call to
sigaction which is intended to register a signal
handler with that stack.
Operating in a multithreaded environment
If a process calls
sigaction and then spawns pthreads within it, then those
pthreads will inherit the signal handlers that were already installed.
Apparently, this is not the case for
sigaltstack: If a signal handler is
installed with
sigaction using a
sigaltstack, and a thread spawned from
that process is killed with the right signal, then the installed stack will not
be found! The signal handler must instead be installed on each pthread
individually. I’m not sure whether this is a bug in Linux or just a quirk of
POSIX; in any case, I couldn’t find it documented anywhere.
Pulling it all together
A few simple system calls can allow stack overflows in user-space coroutines to be handled nicely, providing detailed error messages without runtime overhead in cases where the stack does not overflow. Indeed, the benchmarks which I previously reported are unaffected by this change. Other programming language runtimes, like that of Io , checks the height of the callstack on every function call in order to catch overflow. This technique is more efficient.
Io supports callstacks that will resize on overflow, to a point. Such a
feature is more difficult to implement in C++ because there might be pointers
into the callstack, and weak typing makes it impossible to trace the stacks
and heap to update these even if we wanted to. However, virtual memory may be
usable to implement this resizing. First, it may just work to allocate very
large stacks and not touch them, hoping that the virtual memory system will
ensure that no physical memory or swap space is used for the stacks. But this
might put stress on the kernel’s data structures, and it may not work well if
overcommitting is turned off, as it is in some server environments.
Alternatively,
mremap may be used to expand the stack from the page fault
handler. But I’m not sure how I could reserve a chunk of virtual memory to
expand into, without depending on overcommitting in some cases. None of these
techniques would work out well on 32-bit systems because there isn’t enough
address space. This is still something to look into in the future, though.
I implemented stack overflow checking after getting stuck on a particular bug in the database, and when returning to that bug, I found that this was in fact the cause! Stack overflow isn’t an obscure, uncommon thing, especially when stacks are small and there might be recursion. Working together with the operating system allows us to broaden the applicability of these software engineering benefits to performance-critical code.
[1] You can find the value on your computer by running the following program (without optimizations!) and examining the difference between the first and last number, when it terminates with a segfault.
#include <stdio.h> int main() { char stuff[2048]; printf("I'm at %p\n", &stuff); main(); } | https://rethinkdb.com/blog/handling-stack-overflow-on-custom-stacks/ | CC-MAIN-2017-47 | refinedweb | 1,315 | 60.55 |
[
]
Bill Blough updated AXIS2C-1564:
--------------------------------
Component/s: (was: xml/om)
util
> axutil_uri_parse_string() only parses URLs, not URIs
> ----------------------------------------------------
>
> Key: AXIS2C-1564
> URL:
> Project: Axis2-C
> Issue Type: Bug
> Components: util
> Affects Versions: 1.6.0
> Reporter: Elliot Silver
> Priority: Major
> Fix For: Next Version
>
> Original Estimate: 48h
> Remaining Estimate: 48h
>
> axutil_uri_parse_string() appears designed to handle parsing of URLs, not generic URIs
(which consist of both URLs and URNs). A URN consists of a scheme ("urn") followed by a colon,
a namespace, another colon an then a name within that namespace. The name itself may contain
further colons.
> It appears that axutil_uri_parse_string is expecting that a second colon in the input
string to indicate the start of the port number (), and if that string
can't be converted to a number, the uri is invalid. With a URN this isn't the case (urn:ietf:rfc:2648).
Unfortunately, the failure isn't communicated upwards.
> axutil_uri_parse_string() should be modified to accept both urls and urns, and the API
updated appropriately (when called on a urn, axutil_uri_get_port() should return an error
value, etc.). a new function to determine if the uri is a urn or url should be added, and
new functions to access the urn specific components added.
> In my particular case, the incorrect parsing is hampering interaction with an ebXML registry/repository,
which indicates operation status in the form of a uri attribute in the returned structure.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.us.apache.org/mod_mbox/axis-c-dev/202004.mbox/%[email protected]%3E | CC-MAIN-2021-04 | refinedweb | 267 | 52.9 |
History
[Main]
1.1.11
Added: New script function LoadEarly
Added: A new option to never ever modify load order, even if scripts request it. (For anyone who uses bash exclusivly to manage load order)
1.1.10
Fixed: Issue with bsa creator when a bsa contains many files
1.1.9
Added: New C# script functions: 'CancelDataFileCopy' and 'CancelDataFolderCopy'
Fixed: BSA's registered via the RegisterBSA script function no longer get added before the redirection BSA.
1.1.8
Added: New C#, vb and python script function 'GenerateBSA' to pack data files into a BSA.
Added: New C# and vb script function 'SetNewLoadOrder' to set the load order of existing plugins.
Tweak: In C#, vb and python scripts, LoadAfter/LoadBefore can now be used on plugins created by CopyPlugin. (This was already possible in obmm scripts)
1.1.7
Added: GetPlugins/GetDataFiles/GetPluginsFolders/GetDataFolders can now be used from simulated scripts. (From niaht)
Added: C#, vb and python scripts can now access data files inside BSAs, instead of just the normal loose files.
Added: C#, vb and python scripts can now generate new data files mid script.
Added: New C#, vb and python script function 'GetOBSEPluginVersion' to check the version info of obse plugin dll's.
Fixed: Script functions to get the Oblivion or OBSE version weren't working from C# or vb scripts.
Fixed: Exceptions when using the simulate button in the script editor to simulate a script that displays dialog boxes or forms.
Fixed: A possible crash when checking the required data files of an esp which uses a mesh stored in a custom BSA.
1.1.6
Fixed: Activating an omod which tried to overwrite an esp which had been added to the data folder while obmm was running would cause the activation to fail.
1.1.5
Added: Script simulation (From Falados)
Added: C#, vb and python scripts get an IsSimulation and ReadExistingDataFile function
Tweak: C# and vb scripts can now use resources in the System.Xml namespace
Tweak: C#, vb and python scripts are now enabled by default
Fixed: PatchDataFile was broken when called from C#, vb or python scripts if the file to be patched already existed
Fixed: CopyDataFolder was broken when called from C#, vb or python scripts
1.1.4
Added: Python scripts now have some security
Added: New functions for C#, vb and python scripts (CreateCustomDialog, ReadDataFile)
Tweak: C# and vb scripts can now use resources in the System.Drawing and System.Windows.Forms namespaces
Fixed: Security exceptions when using c# or vb scripts
Fixed: null reference execption if a C# or vb script didn't contain a Script class. (It'll now give a useful error message explaining the problem)
Fixed: SetPluginXXX functions when called from C#, vb or python scripts were broken
Fixed: Install/DontInstall/CopyDataFolder functions were broken when called from C#, vb or python scripts
1.1.2
Added: On vista, obmm can now detect if file writes are being redirected to the virtual store, and will start up in limited user mode if they are
Added: On vista, if obmm is set to run as an administrator it will offer to move files out of the virtual store back to oblivion's directory
Tweak: Changed the behaviour of the SelectWithDescriptions dialog slightly
Fixed: More python stuff (From Falados)
1.1.1
Added: An option to export omod conversion data without unpacking the omod
Added: 5 new scripting functions for editing esp's
Fixed: Some python syntax highlighting. (From Falados)
Fixed: The DependsOnRegex and ConflictsWithRegex script functions were broken
1.1.0
Added: Support for using python, C# or vb scripts in the place of obmm's normal scripting language. (Thanks to Falados for the python code.)
Added: A few more keyboard shortcuts
Added: obmm scripts can be cancelled by hitting escape
Added: All options on the settings form now have tooltips
Tweak: Gaps in an omods version number now get treated as zero's, instead of causing the version number to reset back to 1. (i.e. '.1.0' becomes '0.1.0' instead of '1')
Tweak: When converting omods to archives, or extracting to folders, the 'do you want to create omod conversion data' prompt is now given before any files are unpacked.
Tweak: When converting an omod to a 7z archive, -mx7 compression is used instead of -mx9 if the compression boost option is not selected
Fixed: Several functions would crash obmm when used on an omod that had been activated using the 'acquisition activate' button
Fixed: When deleting active omods, two error prompts were displayed out of order, causing Weird Things to happen if you clicked yes to the first and no to the second
1.0.7
Added: ConflictsWithRegex and DependsOnRegex script functions.
Added: An option to disable conflict tracking on the settings page. (All inactive omods display green, but decreases startup time and speeds up omod activation.)
Tweak: Updated the script editor to use the latest version of sharpdevelop's text editor. (2.1 beta 1 -> 2.1 final)
Fixed: InputString script function was broken
1.0.6
Fixed: Crash bug if two omods tried to edit the same line of oblivion.ini
Fixed: If two omods edited the same line of oblivion.ini, obmm would still display 'line xxx has already been modified' warnings even after both omods were deactivated
1.0.5
Added: The nif viewer now respects a few extra NiXXXProperty nodes
Added: Any nif subsets which require alpha blending are now sorted before being rendered.
Fixed: Fixed some broken shaders in the nif viewer.
Fixed: A bug that could let multiple copies of obmm run at once.
1.0.4
Added: A new 'acquisition activate' option for omods. (See the readme for details of what it's supposed to do.)
Added: A 'view load order' button, which displays your load order in a human readable format ready for copy/pasting. (Too many people failing to grasp the concept of copy/pasting from plugins.txt)
Added: The nif viewer now has an option to toggle backface culling
Added: Loading and resaving a nif with the nif viewer will automatically fix any forward slashes in texture file names
Tweak: Updated unrar.dll to the latest version. (3.6 -> 3.7)
Tweak: Switched the default archiveinvalidation mode from BSA alteration to BSA redirection. (Only affects new users; updating users will just get a message advising them to switch.)
Tweak: obmm now displays a warning if you try to turn on 'autoupate on exit' while using bsa redirection.
Tweak: When editing shaders, obmm no longer changes the time stamps on any modified sdp's.
Tweak: When viewing a single subset in the nif viewer, it now displays the formats of the normal/glow maps if any are loaded.
Fixed: In some cases PatchPlugin could display an error saying that the plugin couldn't be patched even though it had already been patched succesfully.
Fixed: PatchPlugin was changing the timestamp of the patched file
Fixed: The nif viewer was reading the tangents/binormals incorrectly
Fixed: The non-parallaxed normal map shader had the y axis inverted.
Fixed: The nif viewer was trying to apply a specular map, even when none existed
Fixed: The apply mode combo box in the nif viewer wasn't being updated when the subset being viewed was changed.
1.0.3
Added: An option to allow the use of windows explorer style shift\control shortcuts on SelectMany menus.
Added: The script editor now remembers its previous size.
Added: The bsa browser now lets you preview textures as well as meshes.
Added: When viewing the data files used by an esp, texture files referenced by nifs which are packed into bsas are now picked up
Added: When viewing the data files used by an esp, obmm now tries to work out if _n and _g textures are actually used before listing them.
Added: The nif viewer can now open nifs which lack textures or UV coords.
Added: The nif viewer can now open nifs that use triangle lists intead of strips.
Added: The nif viewer now generates a log of which TriStrips failed to load. (Push 'L')
Added: The nif viewer now has some new shaders available that give colour representations of normal, tangent and binormal data
Added: Initial support in the nif viewer for some additional shaders added by obge.
Tweak: The EditXML... functions now work on other text files. (.ini, .txt, .bat)
Tweak: The various clean options will now always skip over files which are included in the base oblivion installation.
Tweak: The nif viewer now loads the bsa contents when needed instead of at startup.
Tweak: The nif viewer was updated to use the latest version of niflib (0.6.2 -> 0.7.2)
Tweak: If a nif contains a TriStrip that the nif viewer can't handle, only that subset will fail to render instead of the nif viewer failing to load the whole mesh.
Tweak: Saving a mesh with the nif viewer no longer automatically saves the colour map. (Saving the colour map is now a seperate option)
Tweak: Saving a mesh with the nif viewer now brings up a save file dialog instead of automatically trying to save over the old mesh
Tweak: Generating a heightmap for a nif no longer automatically sets the subset that the texture belongs to to use the parallax shader.
Fixed: Opening the script editer caused the saved position of the generic text editor to be lost.
Fixed: Holding down control to specify yes to all or no to all now works again
Fixed: The nif viewer couldn't load any nif's that were stored in the data directory instead of the BSAs
Fixed: Nif viewer crash if you clicked save without opening a nif first
Fixed: Nif viewer crash, due to trying to change the magnification filter when anisotropic filtering was enabled.
Fixed: Nif viewer crash if device was destroyed while a nif was loaded
1.0.2
Added: New script functions to support oblivion graphics extender ('If GraphicsExtenderPresent' and 'If GraphicsExtenderNewerThan')
Added: A context menu to script editor
Added: If the flow control checker finds multiple issues, up to 10 of them will now all be displayed instead of just the first one.
Added: Flow control checker now picks up some unreachable code and multiple sequential Default's.
1.0.1 (Beta)
Added: 'Continue' and 'Exit' script flow control statements.
Tweak: Rewritten flow control in scripts. (Should solve the problems caused by nesting many layers of flow control, but may break scripts that depend on the broken behaviour.)
Fixed: Error when trying to activate an omod which tries to modify a part of oblivion.ini which has already been modified by another omod
Fixed: The script flow control checker no longer displays incorrect results if your script contains unix style line ends
Fixed: The script flow control checker was displaying incorrect errors when Break statements were nested inside If or For statements
1.0.0
Added: A shortcut to the scripting help in the toolbar of the script editor
Added: Run-on lines in scripts. (Put 'AllowRunOnLines' on a line by itself, and then end a line with a '\')
Added: Added a button to remove an image from an omod in the omod creator.
Added: The 'do you want to overwrite ...' prompt when activating conflicting omods now tells you which omods the existing file is owned by.
Added: Readded the Goto and Label script functions. They now behave slightly differently, so see the scripting help for details
Added: Some additional If conditions. (Equal, GreaterThan, GreaterEqual, fGreaterThan, fGreaterEqual)
Added: A 'check flow control' button in the script editor. Scans for malformed lines and control structures.
Added: New command line option '-conflict-detector' to launch the conflict detector without loading the main parts of obmm. (The installer will create an appropriate shortcut if requested)
Tweak: Slightly better FormID formatting in the new conflict detector.
Fixed: The open readme link in the installer had gone missing at some point in the 0.9.x series.
Fixed: New conflict detector crash if no mods were active
0.9.25
Tweak: Slightly reduced memory requirement for using very high 7z compression when an omod contains both esps and data files.
Tweak: Tweaked layout of the new conflict detector, and removed the BETA tag.
Fixed: New conflict reporter crash when 'read EDIDs' was checked and an esp contained 0 length records.
Fixed: The new conflict reporter was displaying empty record groups when 'display non conflicting records' wasn't checked.
0.9.24
Added: 2 new script functions for editing XML files.
Tweak: When creating an omod, you can now remove files by using delete or backspace instead of having to use the context menus
Tweak: SelectManyWith... functions now show the preview/description of the most recently selected option. (Previously you had to deselect all options but the one you wanted to preview.)
0.9.23
Fixed: Crash if you checked 'pack face textures into redirection target BSA' but didn't actually have any face textures.
0.9.22
Added: A new archiveinvalidation method; BSA redirection (Thanks Quarn!)
Tweak: Renamed the archiveinvalidation options to match dev_akm's faq.
Fixed: When clicking 'create' on the omod creation form, the file list would flip between plugins and data files
Fixed: Extracting an omod to a folder now works when the folder you are extracting to is on a different drive to obmm's temp folder.
0.9.21
Added: Extra script functions to manipulate strings
Added: Extra script functions to perform basic maths
Fixed: ScriptExtenderNewerThan _still_ wasn't fixed.
0.9.20
Added: You can now just drag/drop individual files from the BSA browser into windows explorer instead of having to use the extract button and go through the file dialogs.
Fixed: When extracting an omod to a folder or converting it to a standard archive, esps in subfolders no longer go missing.
0.9.19
Added: Drag and drop support for modifing load order
Tweak: Trying to move an esp before an esm in the load order now displays an error message instead of failing silently.
0.9.18
Fixed: Load order problems introduced in 0.9.16
0.9.17
Fixed: obse functions still weren't completely fixed.
Fixed: esps with identicle timestamps weren't properly fixed either.
0.9.16
Added: A new option to ignore plugins with a timestamp in the future when inserting new esps into the load order
Tweak: bash's copy of the plugin load order now includes the timestamps.
Tweak: obse script functions now look for 'obse_loader.exe' instead of 'obse.dll'. (Thanks to Waruddar for pointing out that the dll name changed in version 0009c)
Fixed: Archive invalidation no longer crashes if bits of the [Archive] section are missing from oblivion.ini
Fixed: The load order now handles esms correctly. (Esms always load before esps, regardless of the last modified date)
Fixed: obmm should now handle multiple esps with identical timestamps correctly.
0.9.15
Added: A copy of plugins.txt will be copied to obmm's directory for wyre bash to use.
Added: obmm will now ensure that shader packages are not read only. (Fixes an issue with the 1.2 patch)
Tweak: Improved check for multiple running obmm copies
Fixed: The SetGlobal script function wasn't working on globals containing upper case characters
Fixed: Bug caused by manually removing active omods from obmm's mod directory. (Could cause startup crashes or other omods to deactivate themselves.)
Fixed: Several buglets with Default statements in scripts
0.9.14
Fixed: Possible crash when opening the Nif viewer
Fixed: Possible crash when trying to get a list of used textures for an esp.
0.9.13
Fixed: The nif viewer now renders vertex colour slightly more correctly.
Fixed: Default statements in selects now get fired correctly, (was broken in 0.9.11...)
Fixed: More nested Select screwiness
Removed: The 'check for updates' button. (Updates are too infrequent now to warrent it.)
0.9.12
Added: Certain yes/no dialog boxes now let you choose 'yes to all' or 'no to all' by holding control while clicking
Added: New utility - NIF viewer (requires Managed DirectX December 2005 update or newer)
Added: A preview button to the BSA browser. (Uses the nif viewer, obviously, so also needs MDX)
Tweak: omods are now allowed to overwrite each others ini and sdp edits (A similar warning will be given to when they try to overwrite esp's.)
Fixed: Many omod group bugs caused by the 'None' group getting lost when opening the settings menu.
Fixed: The warning given when trying to delete an active omod now has some resemblance to reality
Fixed: obmm would sometimes try to force deactive omods which had already been deleted, resulting in crashes on startup
Fixed: Unchecking the warn on script ini edit option didn't actually stop obmm from warning you about scripts editing oblivion.ini
Fixed: Force disable was only trying to disable omods which were already disabled
Fixed: syntax highlighting on some functions which take a variable as the first argument wasn't colouring the variable name
0.9.11
Added: A new function 'SelectString' which behaves in the same way as SelectVar but it does not require a variable as an argument
Tweak: Attempting to use the SelectVar function on a variable which doesn't exist will now display a warning
Fixed: Syntax highlighting on IfNot
Fixed: ExecLines now works correctly if it's used in the last line of a script
Fixed: Nested Select functions now work correctly
Fixed: The BSA creator was complaning about files being to big when they were about 50% of the size it could actually cope with
Fixed: The BSA creator could create corrupt BSA's if more than one file was included from a folder when a third file was included from a subfolder of that folder
0.9.10
Added: Syntax highlighting when editing scripts
Tweak: Unterminated variable names or variables overlapping strings in scripts will now display warnings
Fixed: Trying to use an animated image as a preview no longer causes Weird Things to happen. (animated images are not supported)
Fixed: Tooltip text on the save button in the text editor
Fixed: Another possible crash when running obmm directly after an oblivion reinstall.
0.9.9 (BETA)
Tweak: Added scroll bars to the descriptions shown when the SelectWithDescriptions command is used
Tweak: Mods are now allowed to overwrite existing esps. (Although very much under protest :p)
Fixed: Removed a +1 from the text editor which shouldn't have been there. (fixes the 'find next' button when the search term appears twice without any seperation)
Fixed: When a mod has a conflicting data file and you choose not to overwrite, obmm no longer fails to copy the next non conflicting file
Fixed: A couple of documentation errors
0.9.8
Fixed: Trying to display previews in a select function was broken in 0.9.7
0.9.7
Added: 4 new script functions to manipulate file paths. (Useful for modifing the output from the 'For Each ...' functions)
Added: An InputString function to read in player input
Added: A 'Return' function to exit from scripts early
Added: 4 new Select functions to allow the addition of text descriptions of the options
Added: PatchPlugin and PatchDataFile script functions for creating patch omods
Added: An 'Unlink' option to the esp context menu
Tweak: The 'very high' compression setting when using 7-zip no longer uses quite such a ridiculous dictionary size
Tweak: Added a check to the 'Copy...' script functions so that it displays an appropriate error if someone trys to copy a file over itself.
Tweak: Updated inno setup installer version to 5.1.8
Fixed: Missing line breaks in the info view if a shader performed multiple shader edits
Fixed: An empty string wasn't considered to be a safe path, meaning that the 'For Each ...' functions couldn't act on the root of the omod
Fixed: esps/esms no longer show up as data files in the data files view. (They belong in the esps view, after all...)
Fixed: Line numbers in script error messages were always one line out.
Fixed: Error messages given for the CopyPlugin script function would sometimes be mixed up with CopyDataFile errors.
0.9.6
Fixed: omod validation when creating the omod didn't work properly
0.9.5
Added: An option to extract omods to a folder. (As opposed to recompressing them to an archive)
Added: Instead of always removing the newer file, the validate omod option will now give you the choice of files to remove.
Tweak: Rewrote the validate omod option to be much faster when dealing with larger mods
Tweak: The omod creator now validates the omod immediately before creation
Fixed: Error when hiding an omod which contained a picture while that picture was being displayed in the preview box
Fixed: If you choose not to create the omod conversion data when converting an omod to a zip, any esp files weren't saved either
0.9.4
Fixed: The hidden omod switcher wasn't openening
0.9.3
Added: A 'reset' button on the options menu, for people with corrupt settings files.
Added: The ability to hide omods, which reduces the resources required by omods which are not going to be activated in the immediate future
Added: The omod info page now shows info about ini and shader edits that have been made by an omod
Tweak: Made the select form a tool window (no icon or control box. The control box was disabled, so it was confusing people leaving it there)
Tweak: The preview button on the select form is now hidden if there are no previews, and is enabled immediately if the script specifies default values
Fixed: Some file manipulation script functions had become case sensitive
Fixed: The crash on opening the omod creator that was reported by some people. (If this happens to you, read the warning carefully.)
Fixed: Trying to clear your omod groups when none exist no longer crashes obmm
0.9.2
Added: An 'unsafe' mode. In this mode obmm starts up much faster, but manually changing your data folder may cause weird errors
Added: A few command line arguments to launch obmm componants without having to wait for its normal GUI to load
Added: A rescan esp headers option under the batch actions menu. (For use in unsafe mode or if you have 'update unparented esp headers' disabled)
Tweak: A few tweaks to try and decrease startup time
Fixed: obmm would attempt to load settings files from newer obmm versions, resulting in permanent save file corruption.
Fixed: BSA creator error if you started to edit a path but then exited without making any changes
Fixed: The CopyPlugin, CopyDataFile and CopyDataFolder script commands now preserve case
Removed: The option to back up shader packages. Use omods combined with EditShader instead. (This method allows multiple mods to edit the same shader package)
0.9.1
Added: SelectVar script function
Added: ReadINI script function
Added: ReadRendererInfo script function
Added: InstallDataFolder script function
Added: EditShader script function
Added: Select/Case/EndSelect structures can now have a default case
Tweak: When scanning for required data files, _n and and _g textures are now automatically added for all base textures applied to nifs.
Tweak: Shader packages and the original movies are now protected files, and cannot be overwritten by omods. (Use the EditINI and EditShader script functions instead)
Fixed: When getting a texture list from a nif, obmm was adding two sets of 'textures\' to the begining of the path
Fixed: Rounding error in BSA hash Generator. (Only affected wav files)
Fixed: When creating BSAs, uncompressed files were being incorrectly marked as compressed
Fixed: Crash when trying to use the old conflict detector
Fixed: If you manually delete an active omod from your omod directory, any ini edits it had made are now correctly removed.
Fixed: If you used the background processes killer, oblivion would not start after it was finished
0.9.0 (BETA)
Added: Completed the BSA creator. It should now generate working BSAs
Added: A BSA uncorrupter utility, which checks that BSA hashes match up with the filenames, and fixes them if they don't
Added: String variables in scripts.
Added: Loop support in scripts.
Added: CopyDataFolder now optionally acts on subfolders too
Added: An ExecLines script function.
Added: The various Select functions now let you specify default selections by putting a '|' in front of the parameters.
Added: An option to back up the original shader packages, and restore them after using an omod which overwrites them
Added: A data file browser, to view the files in your data folder and BSAs, as well as the omods they're attached to.
Added: The built in updater can now update the readme as well as the main exe.
Added: A background process killer similar to MGE's. (No option to restart processes this time though.)
Added: The beginnings of a new conflict detector tool. (The old one is obviously still available for now, since the new one is barely functional)
Tweak: When converting an omod back to a zip/7-zip archive, you are now given an option to not create the omod conversion data folder
Tweak:Added a warning to the launch oblivion button to point out that it does not have to be used for obmm to work
Tweak: Added a progress dialog to the BSA patcher
Tweak: BSA patcher now checks the BSA flags for file types, speeding it up massively if you only want to edit textures
Tweak: Crash dumps no longer include the blurb at the start, and include a date/time stamp
Tweak: Inner exceptions and obmm's version number are now also included in crash dumps
Tweak: On the old conflict detector, all dialog overlaps now show up as minor conflicts.
Fixed: Passing False as the second parameter of DontInstallDataFolder caused it to fail
Fixed: Unterminated quotes in scripts are now picked up correctly. (Previously, the script interperator would complain about missing arguments.)
Fixed: Extracting files from BSAs now works again. (Broken some time in the late 0.8.x series)
Fixed: Irreversable BSA corruption if you use the BSA editor and then do some combination of oblivion reinstall/patch without reversing the edits.
Fixed: omod conversion data not always imported if the omod contained no data files.
Fixed: omod conversion data wasn't importing the third version number
Fixed: when importing omod conversion data where only the major version was set, the an invalid version number would be created
Fixed: It was possible to move obmm's temp folder to C:\, or some other important location, and hence cause a huge loss of data.
Fixed: When moving the temp folder, any files in the previous location weren't deleted correctly.
Fixed: 0 length or locked BSAs will no longer crash the BSA patcher
Fixed: Not all unhandled exceptions were getting caught by the error handler
Fixed: When using more than 32 omod groups, filters and font changes did not work correctly
0.8.18
Added: Options to import/export omod group info
Fixed: A crash if you started editing the relative path of a file during omod creation, but canceled without making any changes (broken in 0.8.14)
0.8.17
Fixed: A crash if a SelectWithPreview or SelectManyWithPreview function was used but one of the cases didn't have an image assigned.
0.8.16
Fixed: A crash complaining about the OblivionBSA class constructor failing if you run obmm without ever running oblivion first.
0.8.15 (BETA)
Added: The ability to directly open BSAs. The installer will also optionally associate obmm with .bsa files.
Added: A new parameter to DontInstallDataFolder which allows it to optionally act on sub folders
0.8.14 (BETA)
Tweak: Clearer error message if the use of CopyPlugin in a script results in a conflict that render the mod unactivatable
Tweak: The omod creator now allows esps in sub directories. (These esps will be invisable to most obmm functions. Use CopyPlugin to install them)
Fixed: Identical warning messages getting displayed multiple times when trying to convert some archives to omods
0.8.13
Tweak: Clearer error message for some zips which cannot automatically be converted to omods
Fixed: The font selector dialog choosing the incorrect default font when changing the font of a group which already had a font assigned
Fixed: Conflict detector null reference exception
Fixed: BSA patcher complaining that another process was holding the BSA's open
0.8.12
Added: An option to set an individual font and colour for each omod group.
Fixed: Crash if you tried using the BSA patcher after deactivating an omod which contained a patched BSA if you didn't remove the patches first.
Fixed: Conflict reporter crash if there was a corrupt plugin in your data directory
Fixed: SelectMany bugs when more than 2 items were selected at a time
Fixed: Commas were getting stripped from scripts even if contained inside quotes
Fixed: A mistake in the DependsOn documentation.
0.8.11 (BETA)
Added: An option to move obmm's temp folder around. (For best performance, it should be on the same drive as your oblivion install)
Added: A 'CopyDataFolder' script function.
Fixed: A script error if you tried to use CopyPlugin twice with the same destination path each time
0.8.10
Fixed: Images in the save manager. (Really, this time.)
Fixed: The changelog link on the update page
0.8.9
Added: An 'OblivionNewerThan' script function to check oblivions file version. (Can be used to check if the official patch is installed)
Added A 'DontInstallDataFolder' script function to make it easier to not install whole folders.
Added: An option to rescan the headers of unparented esps every startup.
Fixed: Images in the save manager had the blue and red channels flipped
Fixed: ScriptExtenderNewerThan script function running incorrectly if obse was not installed at all
Fixed: Script error if a quoted string ended with a double escape. (i.e. \\" )
0.8.8
Fixed: An active mod marked as dependent on another mod using DependsOn was not deactivated correctly if the mod it depended on was deactivated first
0.8.7
Fixed: InstallPlugin and DontInstallPlugin script functions
0.8.6
Added: If you have obse installed and click 'launch oblivion' without specifing a command line, obse_loader.exe will be launched instead of oblivion.exe
Tweak: The add extra files option on the conflict report now lets you select esm's
Fixed: The SelectWithPreview and SelectManyWithPreview script functions
0.8.5
Added: 2 new scripting functions to check the users currently installed version of obse
Added: A 'view data files used by this esp' option on the main page
Added: The scan for data files required by this esp option now picks up many more files than it did previously
Tweak: Some script functions which would fail silently if a file name was misspelt will now display warnings
0.8.4
Fixed: Crash on startup if you had no omods installed (broken in obmm 0.8.3)
0.8.3
Added: A new limited user mode. (Previously limited users shared the normal interface, and would experience crashes on exit or when using some functions)
Fixed: Correct warnings are now displayed when data files conflict (broken in 0.8.0)
Fixed: The CRCs of installed data files were not getting updated correctly when conflicting mods were activated (broken since 0.1, apparently)
Fixed: esps and data files losing capitalization when the DontInstallAny... script functions were used.
Fixed: Some issues with the omod list forgetting where it had been scrolled to.
0.8.2
Tweak: Improved the installer a bit. (Now flashes up the gpl and offers to display the readme, also creates a shortcut to the readme in the start menu)
Tweak: Fixed some spelling mistakes and made a few changes to the documentation
0.8.1 (BETA)
Added: An option to enable/disable automatic conflict updating when enabling/disabling omods.
Added: The omod creator now remembers your previous compression type and level settings
Added: When adding folders to omods, archiveinvalidation files are now filtered out
Tweak: CopyPlugin now fails if the CopyTo parameter is in a subdirectory of the data folder, or doesn't have a .esp or .esm extension
Tweak: CopyDataFile now fails if you try to give the file a .esp or .esm extension
Fixed: The icon on the create mod form somehow got lost in 0.8.0
Fixed: When converting an omod to a normal archive and back, the omod conversion data folder would end up included in the omod
Fixed: Problems when adding esps with an uppercase extension to an omod, and even more weirdness when trying to activate them
Fixed: File names losing capitalization when added to an omod
Fixed: CopyDataFile now lets you copy data files to folders which do not exist in the original omod again. (Broken in 0.8.0)
Fixed: When converting omods to zip/7z archives, the overwrite prompt is now displayed correctly even if you don't manually add the file extension
Fixed: Including a '.' in the filename when exporting your active mods or load order no longer prevents the correct file extension being added
0.8.0 (BETA) (Save files from 0.7.x or earlier cannot be imported into this version)
Added: New script function 'CopyPlugin' which performs an identical function to 'CopyDataFile' except that it works on plugins
Added: New script function 'SelectMany', which performs a similar function to Select but allows users to select multiple options
Added: New script functions 'SelectWithPreview' and 'SelectManyWithPreview', which allow you to attach preview images to options
Added: New script functions 'DisplayImage' and 'DisplayText' which display an image or text file respectively.
Added: A new line on the omod info page to give the file version of the omod
Tweak: Updated all 3rd party libraries to their latest versions
Tweak: Partially rewrote some stuff to hopefully make things a bit faster
Tweak: The 'show omod info before installing' option no longer needs to temporarily copy the omod to your omod dir to extract the omod info
Tweak: omods are now unpacked before any attached script is run, allowing the addition of some extra script functions
Tweak: Manually deleting an active omod from obmm's mod directory will now cause obmm to clean that mod the next time it is started up
Fixed: Editing the script or readme of an omod after the omod had been created would corrupt it. (Corrupted omods are still usable, but contain a bit of garbage)
Fixed: Insanly slow loading of text files
Fixed: Crash when trying to use an rtf file containing embedded images as a readme
Fixed: (Possibly) Corrupt or password protected rar file error when running on windows xp x64.
Removed: The EditXML script function, since it was completely and utterly useless. (Use wz_Builder instead.)
Removed: The Label and GoTo script functions, which are not useful now that better control structures have been added. (These were undocumented functions anyway)
0.7.14
Added: The ignore normal maps option is available again when in edit BSA mode. (Unlike 0.7.11, it now actually does something)
Added: A 'reset to defaults' button on the archiveinvalidation page.
Fixed: The ConflictsWith and DependsOn script functions were horribly broken.
0.7.13
Added: An option to reset the BSA timestamps (useful after installing the 1.1 patch)
0.7.12
Added: When adding archives or folders to omods, any stray thumbs.db and desktop.ini files will be ignored
Added: The esp tooltips now display the first byte of FormID's from that esp
Tweak: omod creator tweaks to speed up creation and activation of large omods
Tweak: The archiveinvalidation tool now defaults to edit bsa mode
Fixed: The archiveinvalidation tool's ignore normalmaps option is now correctly disabled when in edit BSA mode
Fixed: The update conflicts option now updates the visable omod list instead of just obmm's internal values
Fixed: Removed the empty lines from esp tooltips when the esp doesn't contain a description
0.7.11
Added: Update conflicts option under batch actions
Added: More control over obmm's behaviour when you set the archiveinvalidation to edit BSA mode
Fixed: Possible permenent BSA file corruption if another program (eg. the CS) had a BSA locked open when you exited obmm
Fixed: The CopyDataFile function was broken. (Again...)
Fixed: Archiveinvalidation bug which caused sounds not to get picked up (Again...)
0.7.10
Tweak: Instead of complaining about hash map collisions, obmm will add archiveinvalidation entries for the problem files
Tweak: The save manager now sorts in order of date saved by default
Fixed: hopefully fixed the gradual slowdown when creating multiple omods in quick succession
0.7.9
Tweak: Clicking 'update now' in the archiveinvalidation generator will display a messagebox when done
Fixed: Read only esps no longer cause crashes when you try to modify the load order
Fixed: The 'edit BSA's' feature of the archiveinvalidation generator now checks for hash map collisions
0.7.8
Fixed: BSA timestamps no longer get updated when obmm undoes changes made by setting the ai.txt generator to edit BSA mode
Fixed: switching the ai.txt generator to standard mode from edit BSA mode no longer results in obmm not undoing changes
Fixed: Many fixes to the BSA creator. It still does not generate working BSAs as the hashing algorithm is incorrect
Fixed: When adding individual files to a BSA, the dialog now sets the relative path correctly
Fixed: When creating a BSA based on the contents of an esp, the relative paths get set correctly
0.7.7 (BETA)
Added: A new archiveinvalidation option to edit BSAs to prevent oblivion finding the original files
Added: If the last folder you added to an omod is missing, the dialog will now default to the closest higher level directory
Fixed: Closing the BSA browser while an archive is open no longer results in the file remaining locked
0.7.6
Fixed: Archiveinvalidation bugs with sounds and distantlod entries not being picked up
0.7.5
Fixed: Archiveinvalidation generator was using '\' instead of '/', resulting in missing textures
Fixed: A corrupt esp in the data directory no longer results in a crash on startup
0.7.4
Added: obmm automatically removes empty data subdirectories on exit
Added: A new and much improved archiveinvalidation.txt generator
Added: An option to delete backed up and corrupt omods
Tweak: Many menu options are now greyed out in situations where they are be unusable.
Fixed: Some overzelous file path security checks
Fixed: Removed the random . from the end of the omod belongs to field
0.7.3
Added: A clean filtered option to clean only the currently displayed omods
Added: Import/Export active omod lists and plugin load order options
Added: A 'compression boost' option, which may give slightly better 7zip compression in exchange for slower compression rates
Added: The 'add folder' option on the omod creator remembers the last path you selected over restarts
Added: A BSA creator (incomplete - does not create working BSAs)
Tweak: When loading omods created with obmm 0.5 or less, the extra .0 is no longer appended to the end of the version number
Tweak: Some speed optimizations (probably wont make any noticable difference, but every little helps)
Tweak: Reordered the interface to group all the batch operations together and give activate its own button back
0.7.2
Fixed: Using CopyDataFile to move a file to a directory which doesn't exist in the original mod no longer causes errors
Fixed: omods which contain a zero length file right at the end of the data file list now activate correctly
0.7.1 (BETA)
Added: An option in the omod creator to view an esps required data files without adding them to the omod
Tweak: The esp and omod sort orders are now saved over restarts
Tweak: A couple of things to try and prevent ghost file links from occuring (changes may have broken other stuff)
Fixed: Possible crash when sorting by omod creation date. (Old omods may no longer be sorted correctly)
Fixed: Crash when trying to save the contents of the text editor to a locked file
Fixed: Crash when archives only contain a single folder (introduced in 0.7.0)
0.7.0 (BETA)
Added: omods can be sorted by group (omods can belong to multiple groups, so this may not work as expected)
Added: Importing data from an esp imports the mod name too if the current omod name hasn't already been changed
Added: The conflict detector remembers its settings
Added: The data file conflicts list tells you which omod owns each file
Added: The clean omod option scans for ghost data file links
Added: Rename group and clear groups options
Added: A select/case/break flow control structure to scripts
Added: An option to move a group to the end of the list
Tweak: Clean all will remove any ghost data file links
Tweak: Improved the way the omod creator detects readmes
Tweak: Trying to import the author/description from an esp which doesn't contain them displays a warning
Tweak: renamed the 'delete' menu item to 'remove from omod' on the omod creation form
Fixed: Adding a zip to an omod via the add archive button no longer causes an error if the zip contains a zero length file
Fixed: archives which are packed correctly but which only contain a single folder no longer get turned into corrupt omods
Fixed: A corrupt settings file no longer completely prevents obmm from starting up
Fixed: The DataFileExists script function has been fixed
Fixed: Editing an omod which contains a screenshot, canceling the edit then clicking the omod no longer results in a crash
Fixed: trying to load a read only omod no longer results in an error
0.6.10
Added: The automatic archiveinvalidation generator now lets you select the file types it will include in the file
Fixed: VersionLessThan and VersionGreaterThan script functions now work correctly
Fixed: Crash when editing and overwriting a mod containing a screenshot (introduced in 0.6.9, I think)
Fixed: ArchiveInvalidation generator will no longer generate empty files if you do not have any textures
0.6.9
Fixed: Crash on exit if autoupdate archiveinvalidation.txt was enabled but the archiveinvalidation.txt was read only
Fixed: Auto rename of new omods to avoid overwriting anything wasn't working correctly
Fixed: Empty readme's getting created if you left the editor in rtf mode and then deleted everything
Fixed: omod creation failer if you tried to import the author and description from an esp with no description
Fixed: Adding a screenshot to an omod no longer locks the file open even after the create omod form is closed
Fixed: screenshots with non 4:3 aspect resolutions no longer get stretched
Fixed: adding a single data file from outside the data directory now puts the filename into the relative path
Fixed: Using CopyDataFile to overwrite another file from the same omod did not work correctly
0.6.8
Fixed: Editing active omods no longer screws up obmm
0.6.7
Added: Colomn widths of the esp list get saved over restarts
Tweak: Now gives a more explicit error message if you enter invalid characters into the mod name
Tweak: The ignore inactive esps/omods options in the conflict detector are now seperate checkboxes
Tweak: obmm will now let you delete active omods
Fixed: Many cosmetic layout issues and tab order gunk
Fixed: The 'add extra files' option on the conflict detector now works correctly
0.6.6
Added: An option to disable the default 'esp belongs to omod' warnings when deactivating esps
Added: The ability to supply a custom command line for use when clicking on the 'launch oblivion' button
Added: 'scan esp for required data files' can now pick up models or icons from inside compressed records
Fixed: People using localized copies of oblivion should no longer find their oblivion.ini going all weird
Fixed: 'scan esp for required data files' no longer crashes if the esp contains uncompressed NPC, LAND or PGRD records
0.6.5
Added: Displays a warning if obmm thinks that a mod has been packed weirdly but cant do anything about it
Fixed: obmm no longer tries to open oblivion.ini for writing on exit even if it has nothing to write
0.6.4
Added: You can assign omods to groups on the omod creation page
Fixed: Manually deleting active omods should no longer completely screw up obmm
Fixed: Moving your omod directory no longer prevents you from creating new omods
Fixed: Write protecting oblivion.ini no longer results in a crash on exit if you enable the 'lock FOV at 75' option
Fixed: An omods groups are no longer lost if you edit it and resave with the same file name
0.6.3
Added: You can change load order by holding alt and pushing up or down instead of using the buttons or context menu
Tweak: You can now create omods with zero or one decimal point in their version number instead of obmm adding extra .0's
Fixed: show new omod info and lock fov settings on the options form weren't sticking
0.6.2
Fixed: Crash when editing or deleting omods which contained a screenshot
0.6.1
Added: 'Import mod details' option on create omod form to import an esps author and description
Tweak: Progress bars should now be on top of all other obmm forms
Fixed: Files wrongly being marked as belonging to an omod when blocking an omod from overwriting manually placed files
Fixed: A bug in the autoupdater
0.6.0
Added: You are now allowed an extra decimal point in an omod version number (i.e. 1.1.1 instead of 1.1)
Added: If you add a screenshot when creating an omod, hovering the mouse over the 'add screenshot' button will display it
Added: An option to not include the version number in the omod file name
Added: Major updates to the save game lister
Added: The conflict detector will indicate the most serious conflict level if you choose not to display individual conflicts
Added: A BSA browser
Added: An 'omod report' which will generate an excel importable tabel of your omods
Added: New script functions to modify gmsts and global variables in esps/esms
Added: Major updates to the 'check for updates' tool
Added: The ability to move your omod folder around
Added: Activate and deactivate all buttons for esps. (Wont deactivate base esms, such as oblivion.esm)
Added: Activate/deactive all in group options for omods
Tweak: Update archive invalidation is now off by default.
Tweak: All utilities (save manager, conflict detector etc.) are now in a seperate menu
Tweak: Brought back the 'tidy data folder' option
Tweak: Progress bars are no longer displayed always on top
Tweak: xml files for the EditXML script function get extracted from the misc BSA instead of obmm keeping its own copy.
Fixed: Canceling a conflict report now works correctly if you had already generated one previously
Fixed: omods containing images could sometimes get locked open
Fixed: Save manager reporting the wrong time for save dates
Fixed: The conflict detector didn't work if obmm was running as a 64 bit process
0.5.8
Added: An 'edit groups' option to the bottom of the group filter combo box
Added: When adding a folder or archive to an omod, any screenshot.jpg file will be used for the screenshot
Added: When converting omods to a zip file, omod specific stuff is saved in an 'omod conversion data' folder
Added: The omod creator can read this omod conversion data to exactly recreate the old omod
Added: Esps can now be sorted by creation date
Added: omods can be sorted by the date the omod was packed, and also by the date the omod was installed
Added: A new option on the settings form to display omod info before installing it
Added: You can now add new omods by double clicking them while an existing copy of obmm is open
Added: An option to lock the FOV in oblivion.ini file at 75 degrees
Tweak: The omod creator now picks up readmes with a .rtf extension as well as .txt
Fixed: Some screwiness with file extensions when converting an omod into a zip archive.
Fixed: Uncompressed pathgrid records no longer crash the conflict detector (the arithmatic overflow error)
Fixed: Using 'scan for data files' on an esp containing compressed land or pathgrid records no longer causes crashes.
0.5.7
Added: The text editor remembers its last position and window state
Added: The ability to filter omods that haven't been assigned to any group
Added: When converting an omod to a 7-zip or zip, the omod info gets saved in a text file called 'omod info.txt'
Added: Add archive will now work correctly on some badly packed archives
Added: A save file lister which will tell you which plugins a saved file is dependent on
Fixed: The info page now uses proper line ends, so if you save and reopen it in notepad it doesn't look messed up
Fixed: converting omods to zips now works correctly if the omod doesn't contain a screenshot
Removed: The tidy data folder button. (Clean all will perform the same task)
0.5.6
Added: Tooltips showing an esp's author and description
Added: The omod creator now lets you overwrite existing omods
Tweak: Tidied up the data files conflicts page
Tweak: When converting omods to archives, the screenshot gets saved too
Tweak: When editing an omod, it now overwrites the existing one by default instead of creating a new one
Fixed: Adding rar archives to omods now works correctly
Fixed: Adding meshes to the archive invalidation file could cause crashes, so now I don't
0.5.5
Added: A 'convert omod to zip/7zip' option
Added: View data file and script defined conflicts for individual omods
Added: A force disable option, to disable mods even if they have dependencies which cannot be disabled
Added: A 'save' option in the text editor. (Will save rtf or plain text files)
Added: The ability to search in the text editor
Added: esps and omods can now be sorted by filesize
0.5.4
Added: A much better conflict report tool, to compensate for reduced real time conflict tracking
Fixed: If you deleted a file from an omod and then used 'add folder' or 'add archive' the file would reappear
Removed: Realtime EDID tracking. It was buggy, and far too slow.
0.5.3
Fixed: Index out of range errors and incorrect esp names in conflict reports
0.5.2
Added: The 'add zip' button on the omod creator has been renamed 'add archive', and can now handle .rar and .7z files
Tweak: Added a warning when setting 7-zip compression to very high.
0.5.1
Added: A clean all button to clear out all data files and esps installed by obmm
Tweak: The installer links to the download page for .NET instead of directly downloading it
Tweak: The installer gives the option to continue with the installation even if .NET is not detected
Fixed: The installer now correctly links you to the x64 version of .NET if it detects you are running 64 bit windows
Fixed: The argument null exception on startup if any active esps were parented to omods
Fixed: Possible error from not being able to get a list of the bsa files
Fixed: obmm no longer crashes on exit if oblivion.ini is write protected
Fixed: Cleaning a mod no longer causes the conflict report to display conflicts with deleted plugins
0.5.0 (Save files from 0.4.x or earlier cannot be imported into this version)
Added: An edit button to edit existing omods
Added: Visit website and check for update options on the about page
Added: The ability to include a screenshot inside an omod, which gets displayed in a new preview window
Added: Hovering the mouse over an omod displays a tooltip containing the description
Added: A 'tidy data folder' option which clears out empty directories left behind by mods
Added: The data files form remembers its size across obmm restarts
Added: The omod list can be sorted by file name, author or conflict level
Added: You can now create groups, assign omods to them them and then filter the mod list by group
Added: A launch oblivion button on the data files form
Added: obmm will now automatically add entries to ArchiveInvalidation.txt. (Can be disabled in the options menu)
Added: obmm keeps track of the EDID's of all entries in plugins, and warns about conflicts (Can be disabled)
Added: The option to disable warning messages when deleting, cleaning or deactivating mods
Added: 'Scan for required data files' option for esps when creating omods
Added: Progress bars while obmm is activating omods and when it is updating conflicts
Added: New script functions for editing oblivion.ini and xml files
Added: Script functions for changing the default checked status of an esp, and disabling the uncheck warning
Added: A script function for renaming/copying data files
Added: Script functions for getting the current obmm version
Tweak: The esp list and omod list boundry is now resizable
Tweak: added a cut/copy/paste context menu in text editor, as an alternative to the keyboard shortcuts
Fixed: The RegisterBSA script function now works correctly
Fixed: (Hopefully) The occasional missing temp directory error
Fixed: The tab order on the data files selector and create omod form
0.4.4
Added: About box
Tweak: The installer no longer refuses to run on windows 98.
Fixed: Overlapping text when many omods were installed
0.4.3
Tweak: omod list is now resizable, and doesn't display the omod file extension after every file
Fixed: The check to prevent you from running multiple copies of obmm broke if obmm was left open for too long.
0.4.2
Fixed: Crash on startup if you ran obmm without either installing the construction set or viewing the launchers data files page
Fixed: The 'clean omod' feature now works correctly
0.4.1
Fixed: Crash on startup if your my documents folder wasn't where obmm expected it to be. (Credit goes to ASk for fixing this one)
0.4.0 (This is the first version which works with oblivion instead of morrowind)
Added: The CRC of an omod is displayed in the advanced info view
Added: When validating an omod on the omod creation form, you now have to option to cancel
Added: The esp list can now be sorted by load order, active, owner or alphabetically
Added: Limited user support
Tweak: Many, many tweaks to get it working with oblivion instead of morrowind
Tweak: Performance optimizations and cosmetic tweaks
Fixed: When adding individual data files the relative path would not get set correctly
Fixed: Possible issue with data files not getting installed correctly
Fixed: A few problems caused by running multiple copies of obmm simultaniously
Fixed: Bug caused by manually placing omods in the omod directory and then double clicking on them anyway
Fixed: Many more bugs
Removed: Scrapped the .NET 1.1 version (It was far too much work to maintain two versions)
0.3 (BETA)
Added: An installer which automatically registers file associations and checks .NET framework version
Added: A help button
Fixed: RegisterBSA wasn't doing anything unless another mod had previously installed the same file
Fixed: Missing icons in .NET v1.1 version
Fixed: Text editor crash when changing from RTF to plain text format.
Fixed: Icon transparancy in text editor
Fixed: Working directory issue when installing new omods
Fixed: A few more ickle things
0.2 (BETA)
Added: Viewing the advanced omod info now gives some file stats
Added: omod files now save their creation time
Added: A cancel button when creating omods
Added: A .NET v1.1 build
Tweak: The default settings in the create omod form should now always result in a sucessful omod creation.
Tweak: When creating an omod, files are no longer validated automatically. Right click on the list and click validate to perform a check manually.
Fixed: Manually deleting omod files from the mods directory no longer causes unhandled exceptions
Fixed: Trimming the first letter from filenames when converting a zip to an omod.
Fixed: The gui is no longer completely unresponsive when creating a 7-zip format omod. (Nothing I can do about standard zip.)
Fixed: If omod creation fails for some reason, you should no longer be left with a corrupt file in your mods directory
Fixed: A load order bug that caused esps to be inserted in the wrong place if they had no load order information but a plugin already installed did.
Fixed: Esp load order checks are no longer case sensitive
Fixed: More bug fixes and cleanup
0.1 (BETA)
Added: Documentation
Added: The ConflictsWith script function can now specify how bad the conflict is
Added: A progress bar is now displayed while an omod is being generated
Added: Corrupt omods get moved to the obmm\corrupt directory, so you dont get an error everytime you start up obmm
Added: A feature to automatically create omod files out of mods distributed in .zip format.
Added: You can now alter the readme and script of omods after they've been created.
Tweak: Changed the conflict information icons to something a little less stupid looking.
Tweak: Modified the source code to compile under .NET 1.1 as well as 2.0
Fixed: Way too many bugs to list.
0.0 (ALPHA)
First release, just to prove that obmm does, in fact, exist. A bit buggy though. Don't turn your back on it; it may eat your data files folder. :p | http://timeslip.chorrol.com/obmmm/history.htm | CC-MAIN-2017-13 | refinedweb | 9,670 | 54.05 |
grandmetal 122 Report post Posted October 22, 2008 Hi, everyone. I have a problem with loading a dll in my program using LoadLibrary( LibaryName ). I'm only having that problem with a particular dll and never had that problem before. All of my other dlls are loadable and usable without problems. No significant changes have been made since the last successful use of the dll in my exe. Here's the general format of my dll and all my other dlls: #include <windows.h> #include <stdio.h> #include <math.h> #include <ctime> #include <cstdlib> /* (global constants and some global variables here) (function and class declarations here) (global variables here) */ #ifdef __cplusplus extern "C" { #endif __declspec(dllexport) void __cdecl SomeFunction1( /* arguments */ ) { // bla bla bla } // the list continues... BOOL APIENTRY DllMain( HANDLE hModule, DWORD ul_reason_for_call, LPVOID lpReserved ) { return TRUE; } #ifdef __cplusplus } #endif To exclude the possibility that something was wrong in my main program that caused the error, I've made a simple console program that just loads the dll using LoadLibrary(), and as expected it returned NULL. I've tried using GetLastError after the dll failed to load but it just returned a 0. One of the possible causes that I can think of is my recent installation of SP3 for WinXP on my PC. Any suggestions and/or solutions are greatly appreciated. 0 Share this post Link to post Share on other sites | https://www.gamedev.net/forums/topic/512366-dll-loading-problem/ | CC-MAIN-2017-34 | refinedweb | 233 | 55.24 |
Hi Gang! I am new to the forums, and programming in general, so I apologize forthright if I am slow or just plain dense on anything/everything. With that disclaimer gotten outta the way, I have an assignment and I am not sure what exactly how to execute it. The problem is as follows:
Given Atomic Weights:
Oxygen = 15.9994
Carbon = 12.011
Nitrogen = 14.00674
Sulfur = 32.066
Hydrogen = 1.00794
Problem Description:
Write a program that asks the user to enter the number of atoms of each of the five elements for an amino acid. Then compute and print the average weight of the atoms in the amino acid.
So far, my program code consists of asking the user to input the number of atoms for each of the five elements. I also have an obviously wrong formula to find the average of values entered. I am unsure of how to declare the weights, and how to write the formula to find the average weights. My coding is at this point is as follows:
#include<stdio.h> #include<math.h> int main(void) { int O, C, N, S, H ; double Weight ; printf("Enter number of atoms for O: \n") ; scanf("%d", &O); printf("Enter number of atoms for C: \n") ; scanf("%d", &C) ; printf("Enter number of atoms for N: \n") ; scanf("%d", &N) ; printf("Enter number of atoms for S: \n") ; scanf("%d", &S) ; printf("Enter number of atoms for H: \n") ; scanf("%d", &H) ; Weight = (O+C+N+S+H)/5 ; printf("The average weight of the amino acid: %\n" , (O+C+N+S+H)/5 ) ; system("PAUSE") ; return (0) ; }
Any help would be appreciated. | https://www.daniweb.com/programming/software-development/threads/147044/looking-for-some-help | CC-MAIN-2017-26 | refinedweb | 279 | 70.53 |
Your official information source from the .NET Web Development and Tools group at Microsoft.
Range requests is the ability in HTTP to request a part of a document based on one or more ranges. This can be used in scenarios where the client wants to recover from interrupted data transfers as a result of canceled requests or dropped connections. It can also be used in scenarios where a client requests only a subset of a larger representation, such as a single segment of a very large document for special processing. Ranges specify a start and an end given a unit. The unit can be anything but by far the most common is “bytes”. An example of a range request asking for the first 10 bytes is as follows:
GET /api/range HTTP/1.1 Host: localhost:50231 Range : bytes=0-9
An example asking for the first and last byte contains two ranges separated by comma as follows:
In this example the resource which we are doing range requests over contains the 26 letters of the English alphabet:
HTTP/1.1 200 OK Content-Length: 26 Content-Type: text/plain abcdefghijklmnopqrstuvwxyz
The response to a byte range request is a 206 (Partial Content) response. If only one range was requested then the response looks similar to a 200 (OK) response with the exception that it has a Content-Range header field indicating the range and the total length of the document:
HTTP/1.1 206 Partial Content Content-Length: 10 Content-Type: text/plain Content-Range: bytes 0-9/26 abcdefghij
Note that the Content-Length header indicates the bytes actually in the response and not the total size of the document requested.
If more than one ranges were requested then the response has the media type “multipart/byteranges” with a body part for each range:
HTTP/1.1 206 Partial Content Content-Length: 244 Content-Type: multipart/byteranges; boundary="57c2656a-9716-4ea0-9d3b-2f76cbac4885"
--57c2656a-9716-4ea0-9d3b-2f76cbac4885 Content-Type: text/plain Content-Range: bytes 0-0/26 a --57c2656a-9716-4ea0-9d3b-2f76cbac4885 Content-Type: text/plain Content-Range: bytes 25-25/26 z --57c2656a-9716-4ea0-9d3b-2f76cbac4885--
Range requests that don’t overlap with the extent of the resource result in a 416 (Requested Range Not Satisfiable) with a Content-Range header indicating the current extent of the resource.
HTTP/1.1 416 Requested Range Not Satisfiable Content-Range: bytes */26
In addition to using ranges as described above, range requests can be made conditional using an If-Range header field meaning “send me the following range but only if the ETag matches; otherwise send me the whole response.”
With the addition of the ByteRangeStreamContent class to ASP.NET Web API (available in latest nightly build, not RTM), it is now simpler to support byte range requests. The ByteRangeStreamContent class can also be used in scenarios supporting conditional If-Range requests although we don’t show this scenario in this blog.
The ByteRangeStreamContent is very similar to the already existing StreamContent in that it provides a view over a stream. ByteRangeStreamContent requires the stream to be seekable in order to provide one or more ranges over it. Common examples of seekable streams are FileStreams and MemoryStreams. In this blog we show an example using a MemoryStream but a FileStream or other seekable stream would work just as well.
Below is the sample controller. It is part of the ASP.NET Web API samples where the entire sample project is available in our git repository.
1: public class RangeController : ApiController
2: {
3: // Sample content used to demonstrate range requests
4: private static readonly byte[] _content = Encoding.UTF8.GetBytes("abcdefghijklmnopqrstuvwxyz");
5:
6: // Content type for our body
7: private static readonly MediaTypeHeaderValue _mediaType = MediaTypeHeaderValue.Parse("text/plain");
8:
9: public HttpResponseMessage Get()
10: {
11: // A MemoryStream is seekable allowing us to do ranges over it. Same goes for FileStream.
12: MemoryStream memStream = new MemoryStream(_content);
13:
14: // Check to see if this is a range request (i.e. contains a Range header field)
15: // Range requests can also be made conditional using the If-Range header field. This can be
16: // used to generate a request which says: send me the range if the content hasn't changed;
17: // otherwise send me the whole thing.
18: if (Request.Headers.Range != null)
19: {
20: try
21: {
22: HttpResponseMessage partialResponse = Request.CreateResponse(HttpStatusCode.PartialContent);
23: partialResponse.Content = new ByteRangeStreamContent(memStream, Request.Headers.Range, _mediaType);
24: return partialResponse;
25: }
26: catch (InvalidByteRangeException invalidByteRangeException)
27: {
28: return Request.CreateErrorResponse(invalidByteRangeException);
29: }
30: }
31: else
32: {
33: // If it is not a range request we just send the whole thing as normal
34: HttpResponseMessage fullResponse = Request.CreateResponse(HttpStatusCode.OK);
35: fullResponse.Content = new StreamContent(memStream);
36: fullResponse.Content.Headers.ContentType = _mediaType;
37: return fullResponse;
38: }
39: }
40: }
The first thing to check is if the incoming request is a range request. If it is then we create a ByteRangeStreamContent and return that. Otherwise we create a StreamContent and return that. The ByteRangeStreamContent throws an InvalidByteRangeException is no overlapping ranges are found so we catch that and create a 416 (Requested Range Not Satisfiable) response.
Running the sample creates a set of range requests. We write the corresponding responses to the console as follows:
Full Content without ranges: 'abcdefghijklmnopqrstuvwxyz'
Range 'bytes=0-0' requesting the first byte: 'a'
Range 'bytes=-1' requesting the last byte: 'z'
Range 'bytes=4-' requesting byte 4 and up: 'efghijklmnopqrstuvwxyz'
Range 'bytes=0-0, -1' requesting first and last byte:
--04214a40-a998-4b9e-a564-c21955bd36db
Content-Type: text/plain
Content-Range: bytes 0-0/26
a
--04214a40-a998-4b9e-a564-c21955bd36db
Content-Type: text/plain
Content-Range: bytes 25-25/26
z
--04214a40-a998-4b9e-a564-c21955bd36db--
Range 'bytes=0-0, 12-15, -1' requesting first, mid four, and last byte:
--b1d1d766-c424-49cb-9843-dd741be35f4c
Content-Type: text/plain
Content-Range: bytes 0-0/26
a
--b1d1d766-c424-49cb-9843-dd741be35f4c
Content-Type: text/plain
Content-Range: bytes 12-15/26
mnop
--b1d1d766-c424-49cb-9843-dd741be35f4c
Content-Type: text/plain
Content-Range: bytes 25-25/26
z
--b1d1d766-c424-49cb-9843-dd741be35f4c--
Range 'bytes=100-' resulted in status code 'RequestedRangeNotSatisfiable' with
Content-Range header 'bytes */26'
Have fun!
Henrik
Would this be an option for enabling paging in API requests that return lists of items ?
I think we're mixing up a couple of things here. I don't think it's wise for a controller to return a representation, even if you're going to return a block of bytes and it's really that form that will see its way to the client.
Livingston, no this is not the way to do paging. A better way is to use the [Queryable] attribute from the OData NuGet package which is currently out in preview. We will be posting a sample on how to do paging using the [Queryable] fairly soon.
i think its great mix solution to control ranges..
nice job. this is a good solution to massive downloads.
I think this is very useful in solving the progress bar problem, by requiring several partial contents to the server through multiple jquery ajax calls. An example showing this application might be vey usefull.
Thanks for to share this blog ASP.Net is very essential part for the url address which is send a message between user & server.
Could firewalls or other intermediaries block or remove Range header? | http://blogs.msdn.com/b/webdev/archive/2012/11/23/asp-net-web-api-and-http-byte-range-support.aspx | CC-MAIN-2014-52 | refinedweb | 1,242 | 52.9 |
Agenda
See also: IRC log
<fsasaki>
fsasaki: Felix had action item to
check whether W3C can host the namespace; alternative was to
use the itsx namespace
... David mention that itsx was used by other applications, so it would need to be a different namespace
daveL: we encountered similar
things in provenance, ITS ontology;
... having the extension in a separately addressable file made sense for us
<fsasaki>
<fsasaki> <itst:match
fsasaki: itst is currently used
by the ITS tools,
... People should be able to create their own extension of the namespace, with the ability to have stand-alone files for the namespaces
daveL: can we have sub-spaces there? /rdf, etc?
fsasaki: yes, since it's an URI
<daveL>
daveL: here's an example of how
we used extensions in our provenance RDF example
... we want to be able to use it both in RDF use cases, as well as XLIFF
fsasaki: Why would the namespace for the RDF be different as for XLIFF?
daveL: we could manage them
separately, not sure if any other difference
... in the RDF one, we have 'domain', but it wouldn't map to an existing ITS XML attribute
fsasaki: would you make use of 'domain' data type when validating RDF?
daveL: yes, you could actually do this within RDF
fsasaki: RDF tools usually don't
resolve the XSD to enforce data types
... would people like an XSD on the namespace URI location?
daveL: David should know more on that
<fsasaki>
<fsasaki> <p its-Legal notice for Canada</p>
fsasaki: we want an ability for some content to be excluded; yves proposed using the ! operator for 'excluding' a particular locale
<fsasaki> <p its-Legal notice for all locales except Canada</p>
shaunm: we had that functionality, but no-one wanted it
fsasaki: we dropped it in August, and we didn't have a clear record of discussion beyond private e-mails
Yves_: i hadn't followed up on
that topic
... but it is still useful functionality
fsasaki: would that result into additional changes in the test suite?
Yves_: current test cases only include locales, we'd need to add a new test case for excluding
shaunm: it's a fairly big change for a last call
fsasaki: we shouldn't avoid doing that, we don't have to make a final decision today
Des: I support the ! exclusion proposal
fsasaki: we have 5 implementers for locale filter
<omstefanov> an excluding test case should show excluding of more than 1 case, such would be realistic. The example Yves brought is so "real world" it's hard to believe we missed it.
fsasaki: do people have a preference for the ! over a special attribute?
<omstefanov> what would the attribute look like, and would it be IN ADDITION TO positive selections?
shaunm: attribute also isn't problematic
<Ankit> either way (! or keyword) wors for us (DCU)
Yves_: personally think ! is easier, but not against a specific attribute
<clieske> BCP47 does not provide an impression for "inverse"?
<fsasaki> no
<clieske> Should it? :-)
clieske: this is a suggestion for providing feedback to BCP47 people for including an exclusion syntax
<scribe> ACTION: felix to check with I18N group about exclamation mark to signify an exclusion [recorded in]
<trackbot> Created ACTION-454 - Check with I18N group about exclamation mark to signify an exclusion [on Felix Sasaki - due 2013-03-11].
<scribe> ACTION: Yves to create examples of locale filters with exclusion (using exclamation marks) [recorded in]
<trackbot> Created ACTION-455 - Create examples of locale filters with exclusion (using exclamation marks) [on Yves Savourel - due 2013-03-11].
<fsasaki>
Yves_: some html elements have
counterintuitive behaviour on inheritance, for instance
<del> shouldn't be translatable, while its sub-elements
can have different defaults
... we shouldn't make things like this compicated for the user
kfritsche: it makes sense to implement this as defaults, not as defaults
shaunm: we could make the same
argument for every format
... it becomes a problem on how we apply information
fsasaki: in practice, many tools
have HTML filters, which describe defaults different to the
defaults in the format-agnostic ITS spec
... for instance, the translate in HTML will have different defaults than its:translate
<fsasaki>
<fsasaki> above is the HTML5 bug
Yves_: shaun has a point, but in this case we write against the HTML5 spec, which we refer to specifically. it might make sense to have different defaults for that particular format
fsasaki: another argument might
be the workflows of Linguaserve, where XHTML is used in the XML
processing chain, as well as HTML processing chains, and having
them aligned can be beneficial
... the question is also for the xml-only implementers: will they be required to implement these defaults
Pedro: yes, we have this use case
fsasaki: we need some more feedback on this on mailing list. let's keep the topic on the agenda and see how the discussion moves forward
<fsasaki>
<fsasaki>
<fsasaki>
fsasaki: here's the test suite status
fsasaki: our charter says that we do testing from March to October, and we're mostly done. We can take time to finalize some features, and I'm confident we won't need much time for testing
philr: one topic from last week was discussing which way should the stand-off references point?
fsasaki: I made the experiment from inverting the pointer direction, from outside to inside might be more efficient
<scribe> ACTION: phil to follow up on stand-off pointer direction (quality, provenance) [recorded in]
<trackbot> Created ACTION-456 - Follow up on stand-off pointer direction (quality, provenance) [on Phil Ritchie - due 2013-03-11].
fsasaki: some errors aren't
really errors, for instance RDFa attributes
... the output will likely influence the output of the test suite
... so having an update in time will be helpful
swalter: the discussion split in
several threads. we made some specification on how a program
should behave when not supporting an encoding given
constraints, there wasn't any consensus on what should be done
with the storageSize constraint
... it usually denotes the size of the database field, and it's not a MUST. how can we have a MUST, if it's not required behavior
fsasaki: if we feel that a MUST is not necessary, we can reject the comment
shaunm: what should the program do if it doesn't understand the character encoding?
swalter: the program uses the
encoding to verify the constraint, but it's not obligatory to
validate that
... As an user of ITS, you could feel unsafe whether you give a tool a constraint which it will not obey
... especially when there are consumers that may care
... however, we normally don't constrain consumers
Yves_: we could have a note that
would describe the expected behaviour (described with lowercase
should, must)
... and not worry about conformance at our level
swalter: there could be a distinction between what consumers or producers conform to
fsasaki: should we push for such a note?
<fsasaki>
<scribe> ACTION: Stephan Walter to propose a note for the character encoding behaviour [I18N-ISSUE-246] [recorded in]
<fsasaki> no further update, see discussion above
<fsasaki>
<fsasaki>
<fsasaki>
kfritsche: the defaults in the storage encoding is a value, but the understanding of the default was that it was integrated later; we propose the behavior that would set it to utf-8 if no other value was present
Yves_: it's different behaviour than ordinary defaults
<fsasaki> action-390?
<trackbot> ACTION-390 -- Tadej Štajner to check tan examples and test suite with regards to presence of annotatorsRef -- due 2013-01-30 -- OPEN
<trackbot>
<fsasaki> tadej: can be closed, done, changes are in github pull requeste
<fsasaki> close action-390
<trackbot> Closed ACTION-390 Check tan examples and test suite with regards to presence of annotatorsRef.
<fsasaki> tadej: did the check, all examples are valid
fsasaki: anyone at CeBIT?
... we'll have some demos for ITS, thanks to Cocomore
... is the current schedule of calls ok for all?
... right now, we still don't have a right call time that works for everyone. let's try again after the review
This is scribe.perl Revision: 1.137 of Date: 2012/09/20 20:19:01 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/local/locale/ Succeeded: s/whileits/while its/ Succeeded: s/counterintutitive/counterintuitive/ Succeeded: s/rules/defaults, not as rules/ Succeeded: s/rules/defaults/ Succeeded: s/informaiton/information/ Succeeded: s/bugs/bug/ Found Scribe: tbd Found Scribe: tadej Inferring ScribeNick: tadej Scribes: tbd, tadej Present: fsasaki omstefanov tadej Karl daveLewis philr shaunm Yves Giuseppe DomJones Marcis Ankit mdelolmo Milan Des Pedro leroy chriLi Regrets: Pablo Agenda: Got date from IRC log name: 04 Mar 2013 Guessing minutes URL: People with action items: felix phil philr stephan swalter walter yves yves_[End of scribe.perl diagnostic output] | http://www.w3.org/2013/03/04-mlw-lt-minutes.html | CC-MAIN-2015-11 | refinedweb | 1,468 | 54.86 |
Hi I need help from you guys ,stuck with this scenario for more than a week ,thought to post it in a forum in a hope that i will get a better response from u guys. I will explain my problems clearly.I have a project wjich opens an existing excel file.My project has one jcom.dll file,jcom.jar file,sample excel file which i am opening in the application and my classes jar file.I created the jar file using the netbeans using the one jar script ,but i m still getting the error like jcom.dll cannot be found on the java.library.path.Can anyone please help me regarding this ,the steps that i need to care while copying dll files also and if there is any alternative methods.In my application I m not loading the dll file ,as it is in the current directory it will load automaticallly. also i cannot paste the dll file into client machine ,i have to load from my application only.
Are you sure the dll should not be in a sub-directory called lib. This is where the supporting jars are usually placed. Otherwise, you may need to retrieve the path to the jar in your program and then use the retrieved path to locate the dll.
Just a few suggestions.
You might try putting the DLL in the PATH variable.
*Manikantan Narender Nath*I love Mozilla
/The secret of walking on water is knowing where the stones are/
If you are loading the dll from your code you need to find the directory where your jar is. This can be done using the following code:
import java.security.CodeSource;
import java.net.URL;
import java.io.*;
String jarLocation = ";
try
{
String qualifiedClassName = this.getClass().getName();
Class qc = Class.forName( qualifiedClassName );
CodeSource source = qc.getProtectionDomain().getCodeSource();
URL location = source.getLocation();
File srcFile = new File(location.getFile().replace("%20", " "));
jarLocation = srcFile.getParent();
}
catch (Exception e)
{
//Do what you need to do
}
File jcomDll = new File(jarLocation "\\jcom.dll");
if(jcomDll.exists())
System.load(jcomDll.toString());
else
{
//This is an error - dll is not in same directory as jar file
System.out.println("jcomDll not found");
}
____________
This will load the jcom.dll from the directory your jar file is in on a windows machine. It also works if you are just running a class not in a jar. Let me know if this is what you needed.
Also, I assume you know that if you are on a 64bit machine you will need a 64bit compiled dll where as if you are on a 32bit machine you will need a 32bit compiled dll. Something to keep in mind if you are to deploy on multilple os archetectures. You can determine which dll you need with the following:
private static final boolean is64bit = (System.getProperty("os.arch").indexOf("64") != -1);
Regards,
Richard
I have used the previous code for a wrapped jar (.exe) and it works fine. The dll must be outside the jar for it to work.
(Also, obviously:File jcomDll = new File(jarLocation "\\jcom.dll"); is supose to be jarLocation {plus} "\\jcom.dll") I don't know what plus sign does not show up in reply.
hi,
call this method in main
class"copyResourceFromJarToBinDir(""jcom.dll)";This method will copy the dll
file and put it in *java home path(java.home).*
private void copyResourceFromJarToBinDir(String jcomDll) // Name of
the DLL file
{
InputStream dllInputStream = getClass().getClassLoader()
.getResourceAsStream(jcomDll);
BufferedInputStream binStream = null;
BufferedOutputStream boutStream = null;
String javaHome = System.getProperty("java.home");
System.out.println("javaHome: " + javaHome);
String fileSeparator = System.getProperty("file.separator").charAt(0)
+ "";
File outFile = null;
try {
binStream = new BufferedInputStream(dllInputStream);
outFile = new File(javaHome + fileSeparator + "bin" + fileSeparator
+ jcomDll);
if (outFile.exists()) {
return;
}
boutStream = new BufferedOutputStream(new FileOutputStream(outFile));
int b = -1;
while ((b = binStream.read()) != -1) {
boutStream.write(b);
}
binStream.close();
boutStream.flush();
boutStream.close();
} catch (Exception e) {
try {
binStream.close();
boutStream.flush();
boutStream.close();
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
e.printStackTrace();
}
}
--
With Regards,
Nagarajan.P
LISEC Software
LIU No : B1-Road WB-21
P.O.B. : 54290
Office : +971-4-2994596
Mobile : +971-50-2709342
Hi nagaraj thanks for ur reply.Your info is good,but the thing i cant place or copy any files in client machine,so i was forced to create a double clickable exe file and not installable exe.Can u please tell me any other way.Richard i tried using ur code ,its giving the jar location upto build folder.Actually I am not getting were i have to keep my dll file.Can u please mention the structure of directorys and i will try with this code.I want to create a executable jar file with all the files and directories using one jar.Hoping this code also work safely.I am getting the error like jcom is not in the java.library.path. Fed up with tis error :( .Can u please mention it clearly the location of the dll file should be.My application works fine when i keep it in the project main folder and not calling System.loadlibrary().I am sorry for disturbing u guys.it is very urgent.Thanks in advance
You are not allowed to put dll in jar. It will not work. dll must be in main file system since it is a machine code file.
How do you use dll without System.load?? Are you not using JNI??
The code I gave previously will point to folder jar is in. I was wrong in saying it works for class file also. It only works for executable jar and wrapped jar(.exe)
You can not run dll from inside executable jar. If you need supporting dll consider using JAWS
Again, I know of no way to run dll from inside jar. The dll must be loaded into the operating system. The os does not know how to extract dll from jar. Dll must be extracted first. If you want to ship dll with jar than nagaraj has the best idea for a solution - extract dll yourself. If you cannot write to client drive, I think you are out of luck. The only other solution I can think of is if you can somehow create a Ram Disk and store dll there. You used to be able to create Ram Disks using os commands; however I don't think you can do that anymore. I ran into the same problem and never found a solution.
Richard
Thank you Richard,Anyway i have now a found a way of doing that.I am using jcom.dll file for opening the Excel file.To invoke the Excel Application we need jcom in the project.Without that it will give error like jcom not found java.library.path. I am not calling any methods from dll.and also System.loadlibrary.Simply i have to keep that dll file project main folder.Without the dll file my application fails.I have read one article about one-jar which helps to include native libraries.But i didn't get any exact idea about that.I have done whatever they have mentioned in the tutorial.but..no way.If u get anyway please help me..Will post the steps if i am able to do that.. thanks | https://it.toolbox.com/question/executable-jar-file-050310 | CC-MAIN-2020-05 | refinedweb | 1,221 | 68.57 |
#include <chariter.h>
This is a minimal interface for iteration without random access or backwards iteration. It is especially useful for wrapping streams with converters into an object for collation or normalization.
Characters can be accessed in two ways: as code units or as code points. Unicode code points are 21-bit integers and are the scalar values of Unicode characters. ICU uses the type UChar32 for them. Unicode code units are the storage units of a given Unicode/UCS Transformation Format (a character encoding scheme). With UTF-16, all code points can be represented with either one or two code units ("surrogates"). String storage is typically based on code units, while properties of characters are typically determined using code point values. Some processes may be designed to work with sequences of code units, or it may be known that all characters that are important to an algorithm can be represented with single code units. Other processes will need to use the code point access functions.
ForwardCharacterIterator provides nextPostInc() to access a code unit and advance an internal position into the text object, similar to a
return text[position++].
It provides next32PostInc() to access a code point and advance an internal position.
next32PostInc() assumes that the current position is that of the beginning of a code point, i.e., of its first code unit. After next32PostInc(), this will be true again. In general, access to code units and code points in the same iteration loop should not be mixed. In UTF-16, if the current position is on a second code unit (Low Surrogate), then only that code unit is returned even by next32PostInc().
For iteration with either function, there are two ways to check for the end of the iteration. When there are no more characters in the text object:
Example:
void function1(ForwardCharacterIterator &it) { UChar32 c; while(it.hasNext()) { c=it.next32PostInc(); // use c } } void function1(ForwardCharacterIterator &it) { UChar c; while((c=it.nextPostInc())!=ForwardCharacterIterator::DONE) { // use c } }
Definition at line 89 of file chariter.h.
Value returned by most of ForwardCharacterIterator's functions when the iterator has reached the limits of its iteration.
Definition at line 96 of file chariter.h.
Returns a UClassID for this ForwardCharacterIterator ("poor man's RTTI").
Despite the fact that this function is public, DO NOT CONSIDER IT PART OF CHARACTERITERATOR'S API!
Implemented in StringCharacterIterator, and UCharCharacterIterator.
Generates a hash code for this iterator.
Implemented in UCharCharacterIterator.
Returns FALSE if there are no more code units or code points at or after the current position in the iteration range.
This is used with nextPostInc() or next32PostInc() in forward iteration.
Implemented in UCharCharacterIterator.
Gets the current code point for returning and advances to the next code point in the iteration range (toward endIndex()).
If there are no more code points to return, returns DONE.
Implemented in UCharCharacterIterator.
Gets the current code unit for returning and advances to the next code unit in the iteration range (toward endIndex()).
If there are no more code units to return, returns DONE.
Implemented in UCharCharacterIterator.
Returns true when the iterators refer to different text-storage objects, or to different characters in the same text-storage object.
Definition at line 681 of file chariter.h.
References operator==().
Assignment operator to be overridden in the implementing class.
Definition at line 184 of file chariter.h.
Returns true when both iterators refer to the same character in the same character-storage object.
Implemented in StringCharacterIterator, and UCharCharacterIterator.
Referenced by operator!=(). | http://www.icu-project.org/apiref/icu4c/classForwardCharacterIterator.html | crawl-002 | refinedweb | 581 | 50.23 |
#include <wx/dataview.h>
This class is used to indicate to a wxDataViewCtrl that a certain item (see wxDataViewItem) has extra font attributes for its renderer.
For this, it is required to override wxDataViewModel::GetAttr.
Attributes are currently only supported by wxDataViewTextRendererText.
Constructor.
Returns the colour to be used for the background.
Returns value of the bold property.
Returns this attribute's colour.
Return the font based on the given one with this attribute applied to it.
Returns value of the italics property.
Returns true if the background colour property has been set.
Returns true if the colour property has been set.
Returns true if any property affecting the font has been set.
Returns true if none of the properties have been set.
Call this to set the background colour to use.
Currently this attribute is only supported in the generic version of wxDataViewCtrl and ignored by the native GTK+ and OS X implementations.
Call this to indicate that the item shall be displayed in bold text.
Call this to indicate that the item shall be displayed with that colour.
Call this to indicate that the item shall be displayed in italic text. | https://docs.wxwidgets.org/3.0/classwx_data_view_item_attr.html | CC-MAIN-2019-09 | refinedweb | 193 | 68.67 |
#455 – Using ItemContainerStyle to Bind Data Elements in a Collection to a Grid
December 21, 2011 22 Comments
I showed earlier that to bind Grid.Row and Grid.Column properties on child items in an ItemsControl, we need to set up the binding on the ContentPresenter elements that contain the individual items. I did this by creating a class that inherited from ItemsControl and then set the bindings at runtime, programmatically.
There is a much easier way to do this, pointed out by reader Bruno (thanks Bruno)!
You can set up the bindings by specifying the ItemContainerStyle of the ItemsControl and using property setters to do the binding.
}" HorizontalAlignment="Center" VerticalAlignment="Center"/> </DataTemplate> </ItemsControl.ItemTemplate> <ItemsControl.ItemContainerStyle> <Style> <Setter Property="Grid.Row" Value="{Binding Row}"/> <Setter Property="Grid.Column" Value="{Binding Column}"/> </Style> </ItemsControl.ItemContainerStyle> </ItemsControl>
Pingback: Dew Drop – December 21, 2011 (#1,224) | Alvin Ashcraft's Morning Dew
Thanks. But what properties does the type you are giving as ItemsSource have? How does {Binding Row} works? are you setting it from itemssource for each item?
Ok, got it. If ChessPieces is defined as List this definition works:
public class ChessPiece
{
public string Text { get; set; }
public int Row { get; set; }
public int Column { get; set; }
}
Thank you for your example
Yes, exactly. You just need something with Text, Row and Column properties and then put them in an ObservableCollection (see post #449).
public class ChessPiece
{
public string Text { get; set; }
public int Row { get; set; } // 0..n-1
public int Column { get; set; } // 0..n-1
public ChessPiece(string text, int row, int col)
{
Text = text;
Row = row – 1;
Column = col – 1;
}
}
Hi Sean,
where can I download the (working) sample?
Thanks
All of the code that I have is included in this post or other posts. See and (the post you commented on).
Hi,
thanks. But I can’t see how the GridBasedItemsControl-class is binded with XAML(-Elements). This Code is not working with me.
Thanks
Tom, I just double-checked the code listed above and it’s working fine. Can you maybe post your entire XAML and code-behind files?
Hi Sean,
I am using the same code like codes on this page. The XAML-page:
The Code-Bihind of XAML:
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
ObservableCollection ChessPieces = new ObservableCollection();”);
}
private void OnPropertyChanged(string p)
{
}
}
and I have 2 other classes names:
class ChessPiece
{
public string Text{get; set;}
public int Row { get; set; }
public int Column { get; set; }
public ChessPiece(string text, int row, int column)
{
this.Text = text;
this.Row = row;
this.Column = column;
}
}
and
class GridBasedItemsControl : ItemsControl
{
protected override void PrepareContainerForItemOverride(DependencyObject element, object item)
{
base.PrepareContainerForItemOverride(element, item);
ContentPresenter cp = element as ContentPresenter;
if ((cp != null) && (item != null))
{
BindingOperations.SetBinding(cp, Grid.RowProperty, new Binding { Source = item, Path = new PropertyPath(“Row”)});
BindingOperations.SetBinding(cp, Grid.ColumnProperty, new Binding { Source = item, Path = new PropertyPath(“Column”) });
}
}
}
What is wrong hier?
Thanks again
A couple of comments:
Where are you setting up the data binding, i.e setting the data context? I don’t see it in the code behind. If you want to bind to the ObservableCollection that you created, you’ll have to make it a property of your class, set the data context of the window or the ItemsControl and then implement the INotifyPropertyChanged event.
Also, you don’t need a derived class for the ItemsControl–using the XAML that I posted does the data binding to the Row and Column properties.
Sean
Sorry, my XAML-file can not be displayed.
Hi,
can you complete my class with this code please?
Thanks
:O) I think you’re better off reading my posts and completing it yourself–it’s pretty straightforward.
To start with, just create a simple WPF app that uses data binding to bind some property to a property in your code-behind. This post on using an ItemContainerStyle is much more advanced and won’t make much sense if you don’t understand basic data binding. If you’re going to do any work in WPF, it will be important to have a good understanding of data binding and the INotifyPropertyChanged interface. Take a look at for starters.
Hi Sean,
thanks. It works. But I can’t understand what role plays the GridBasedItemsControl-class. This class is not called.
Thanks
You don’t need a custom class–all you need to bind elements of a collection to the Grid is to specify the ItemContainerStyle in the XAML.
That is correct. But you should save the grid row and column for each object like this “ChessPieces.Add(new ChessPiece(“QR-Blk”, 1, 1));”. I need a way to show the contents automatically for example into two columns and 10, without that the column and rows numbers have to save.
Thanks
How can I set Height of RowDefinition to “Auto” or “*” depended on property “FillParent” in my ControlViewModel?
How can I fill RowDefinitions dynamically depended on collection length?
I have collection of ControlViewModel and I try to create same count of Rows. I use now StackPanel but StackPanel can’t fill all parent height like LinearLayout in Android does. DockPanel is also useless cause my ViewModel should add ControlViewModel in certain sequence (last ControlViewModel will be with FillParent == true).
Thanks.
Probably you don’t want to use a Grid, since you don’t typically change the Row or Column definitions dynamically. You likely want to use something like a ListBox to bind to your collection and then modify its control template to look like what you want/need.
ItemsControl is similar to ListBox. Does WPF have panel which stretch like DockPanel/Grid or LinearLayout in Android? Or should I create class derived from Panel for this?
How would you code the next step for moving the pieces?
Would help me as well 😉 Love your Work. | https://wpf.2000things.com/2011/12/21/455-using-itemcontainerstyle-to-bind-data-elements-in-a-collection-to-a-grid/ | CC-MAIN-2022-33 | refinedweb | 976 | 57.57 |
- Grammar
- Contracts
- Function Return Values
- Functions Without Bodies
- Pure Functions
- Nothrow Functions
- Ref Functions
- Auto Functions
- Auto Ref Functions
- Inout Functions
- Optional Parentheses
- Property Functions
- Virtual Functions
- Inline Functions
- Function Overloading
- Function Parameters
- Local Variables
- Nested Functions
- Delegates, Function Pointers, and Closures
- main() Function
- Function Templates
- Compile Time Function Execution (CTFE)
- No-GC Functions
- Function Safety
- Function Attribute Inference
- Uniform Function Call Syntax (UFCS)
Functions
Grammar
Function declaration attributes
FunctionAttributes: FunctionAttribute FunctionAttribute FunctionAttributes FunctionAttribute: nothrow pure Property MemberFunctionAttributes: MemberFunctionAttribute MemberFunctionAttribute MemberFunctionAttributes MemberFunctionAttribute: const immutable inout return shared FunctionAttribute
Function body contracts
Contracts
The in and out blocks or expressions of a function declaration specify the pre- and post-conditions of the function. They are used in Contract Programming. The code inside these blocks should not have any side-effects, including modifying function parameters and/or return values.
Function Return Values
Function return values are considered to be rvalues. This means they cannot be passed by reference to other functions.
Functions Without Bodies:
- does not read or write any global or static mutable state
- cannot call functions that are not pure
- can override an impure function, but cannot be overridden by an impure function
- is covariant with an impure function
- cannot perform I/O:
- read and write the floating point exception flags
- read and write the floating point mode flags, as long as those flags are restored to their initial state upon function entry
- perform impure operations in statements that are in a ConditionalStatement controlled by a DebugCondition.
Nothrow functions can only throw exceptions derived from class Error.
Nothrow functions are covariant with throwing ones.
Ref Functions
Ref functions allow functions to return by reference. This is analogous to ref function parameters.
ref int foo() { auto p = new int; return *p; } ... foo() = 3; // reference returns can be lvalues
Auto Functions
Inout Functions:
- No argument types are composed of inout types.
- A mutable, const or immutable argument type can be matched against each corresponding parameter inout type.(); }
Optional Parentheses); }
Property Functions
WARNING: The definition and usefulness of property functions is being reviewed, and the implementation is currently incomplete. Using property functions is not recommended until the definition is more certain and implementation more mature..
Simple getter and setter properties can be written using UFCS. These can be enhanced with the additon of the @property attribute to the function, which adds the following behaviors:
- @property functions cannot be overloaded with non-@property functions with the same name.
- @property functions can only have zero, one or two parameters.
- @property functions cannot have variadic parameters.
- For the expression typeof(exp) where exp is an @property function, the type is the return type of the function, rather than the type of the function.
- For the expression __traits(compiles, exp) where exp is an @property function, a further check is made to see if the function can be called.
- @property are mangled differently, meaning that @property must be consistently used across different compilation units.
- The ObjectiveC interface recognizes @property setter functions as special and modifies them accordingly.
A simple property would be:
struct Foo { @property int data() { return m_data; } // read property @property int data(int value) { return m_data = value; } // write property private: int m_data; }.. Static or final functions with Objective-C linkage are virtual as well. This results in fewer bugs caused by not declaring a function virtual and then overriding it anyway.
Member functions which are private or package are never virtual, and hence cannot be { override int def() { ... } // ok, overrides A.def override { // overrides and is covariant with Foo.test() override B test() { return null; } }
Virtual functions all have a hidden parameter called the this reference, which refers to the class object for which the function is called.
Functions with Objective-C linkage has an additional hidden, unnamed, parameter which is the selector it was called with.(); }
Function Inheritance and Overriding
A function foo = A,. }
Static functions with Objective-C linkage are overridable.
Inline Functions
The compiler makes the decision whether to inline a function or not. This decision may be controlled by pragma(inline), assuming that the compiler implements it, which is not mandatory.
Note that any FunctionLiteral should be inlined when used in its declaration scope.
Function Overloading
Functions are overloaded based on how well the arguments to a function can match up with the parameters. The function with the best match is selected. The levels of matching are:
- no match
- match with implicit conversions
- match with qualifier conversion (if the argument type is qualifier-convertible to the parameter type)
- exact match
Each argument (including any this pointer) is compared against the function's corresponding parameter, to determine the match level for that argument. The match level for a function is the worst match level of each of its arguments.
Literals do not match ref or out parameters.. This is because the name mangling might) }
Function Parameters
Parameter Storage Classes
Parameter storage classes are in, out, ref, lazy, const, immutable, shared, inout or scope. For example:
int foo(in int x, out int y, ref int z, int q);
x is in, y is out, z is ref, and q is none.
- The function declaration makes it clear what the inputs and outputs to the function are.
- It eliminates the need for IDL (interface description language) as a separate language.
- It provides more information to the compiler, enabling more error checking and possibly better code generation.
void foo(out int x) { // x is set to int.init, // which Parameters
An argument to a lazy parameter is not evaluated before the function is called. The argument is only evaluated if/when the parameter is evaluated within the function. Hence, a lazy argument can be executed 0 or more times.
import std.stdio : writeln; void main() { int x; 3.times(writeln(x++)); writeln("-"); writeln(x); } void times(int n, lazy void exp) { while (n--) exp(); }
prints to the console:
0 1 2 − 3
A lazy parameter cannot be an lvalue.
The underlying delegate of the lazy parameter may be extracted by using the & operator:
void test(lazy int dg) { int delegate() dg_ = &dg; assert(dg_() == 7); assert(dg == dg_()); } void main() { int a = 7; test(a); }
A lazy parameter of type void can accept an argument of any type.
See Also: Lazy Variadic Functions
Function Default Arguments
Function parameter declarations can have default values:
void foo(int x, int y = 3) { ... } ... foo(4); // same as foo(4, 3);
Default parameters are resolved and semantically checked in the context of the function declaration.
module m; private immutable int b; pure void g(int a=b){}
import m; int b; pure void f() { g(); // ok, uses m.b }
The attributes of the AssignExpression are applied where the default expression is used.
module m; int b; pure void g(int a=b){}
import m; enum int b = 3; pure void f() { g(); // error, cannot access mutable global `m.b` in pure function }
If the default value for a parameter is given, all following parameters must also have default values.
Return Ref Parameters }
Struct non-static methods marked with the return attribute ensure the returned reference will not outlive the struct instance.
struct S { private int x; ref int get() return { return x; } } ref int escape() { S s; return s.get(); // Error: escaping reference to local variable s }
Returning the address of a ref variable is also checked.
int* pluto(ref int i) { return &i; // error: returning &i escapes a reference to parameter i } int* mars(return ref int i) { return &i; // ok }
If the function returns void, and the first parameter is ref or out, then all subsequent return ref parameters are considered as being assigned to the first parameter for lifetime checking. The this reference parameter to a struct non-static member function is considered the first parameter.
If there are multiple return ref parameters, the lifetime of the return value is the smallest lifetime of the corresponding arguments.
Neither the type of the return ref parameter(s) nor the type of the return value is considered when determining the lifetime of the return value.
It is not an error if the return type does not contain any indirections.
int mercury(return ref int i) { return i; // ok }
Template functions, auto functions, nested functions and lambdas can deduce the return attribute.
ref int templateFunction()(ref int i) { return i; // ok } ref auto autoFunction(ref int i) { return i; // ok } void uranus() { ref int nestedFunction(ref int i) { return i; // ok } } void venus() { auto lambdaFunction = (ref int i) { return &i; // ok }; }
inout ref parameters imply the return attribute.
inout(int)* neptune(inout ref int i) { return &i; // ok }
Return Scope Parameters
Parameters marked as return scope that contain indirections can only escape those indirections via the function's return value.
@safe: int* gp; void thorin(scope int*); void gloin(int*); int* balin(return scope int* p, scope int* q, }
Class references are considered pointers that are subject to scope.
@safe: class C { } C gp; void thorin(scope C); void gloin(C); C balin(return scope C p, scope C q, }
return scope can be applied to the this of class and interface member functions.
class C { C bofur() return scope { return this; } }
Template functions, auto functions, nested functions and lambdas can deduce the return scope attribute.
Note: Checks for scope parameters are currently enabled only for @safe functions when compiled with the -dip1000 flag.
Ref Return Scope Parameters
Parameters marked as ref return scope come in two forms:
U xerxes(ref return scope V v); // (1) ref and return scope ref U xerxes(ref return scope V v); // (2) return ref and scope
The first form attaches the return to the scope, and has return scope parameter semantics for the value of the ref parameter.
The second form attaches the return to the ref, and has return ref parameter semantics with additional scope parameter semantics.
Although a struct constructor returns a reference to the instance being constructed, it is treated as form (1).
The lexical order of the attributes ref, return, and scope is not significant.
It is not possible to have both return ref and return scope semantics for the same parameter.
@safe: struct S { this(return scope ref int* p) { ptr = p; } int val; int* ptr; } int* foo1(return scope ref S s); int foo2(return scope ref S s); ref int* foo3(return ref scope S s); ref int foo4(return ref scope S s); int* test1(scope S s) { return foo1(s); // Error: scope variable `s` may not be returned return foo3(s); // Error: scope variable `s` may not be returned } int test2(S s) { return foo2(s); return foo4(s); } ref int* test3(S s) { return foo3(s); // Error: returning `foo3(s)` escapes a reference to parameter `s` } ref int test4(S s) { return foo4(s); // Error: returning `foo4(s)` escapes a reference to parameter `s` } S test5(ref scope int* p) { return S(p); // Error: scope variable `p` may not be returned } S test6(ref return scope int* p) { return S(p); }
User-Defined Attributes for ParametersSee also: User-Defined Attributes
Variadic Functions
Functions taking a variable number of arguments are called variadic functions. A variadic function can take one of three forms:
- C-style variadic functions
- Variadic functions with type info
- Typesafe variadic functions
C-style Variadic Functions }
D-style Variadic Functions); }
0x00870FE0 5 arguments int 2 long 3 double 4.5 Foo 0x00870FE0 Bar 0x00870FD0
D-style variadic functions cannot be marked as @safe.
Typesafe Variadic Functions
Typesafe variadic functions are used when the variable argument portion of the arguments are used to construct an array or class object.
For arrays:
int;); }.
Local Static Variables:
- Declare the functions to be static members of a nested struct:.
Delegates, Function Pointers, and Closures); }
Function pointers can be passed to functions taking a delegate argument by passing them through the std.functional.toDelegate template, which converts any callable to a delegate without context.
Future directions: Function pointers and delegates may merge into a common syntax and be interchangeable with each other.
Anonymous Functions and Anonymous Delegates
See FunctionLiter(string[] args) { ... } int main() { ... } int main(string[] args) { ... }
Function Templates.
Compile Time Function Execution (CTFE)
Functions which are both portable and free of global side-effects can be executed at compile time. In certain contexts, such compile time execution is guaranteed. It is called Compile Time Function Execution (CTFE) then. The contexts that trigger CTFE are:
- initialization of a static variable or a manifest constant
- static initializers of struct/class members
- dimension of a static array
- argument for a template value parameter
- static if
- static foreach
- static assert
- mixin statement
- pragma argument
enum eval(Args...) = Args[0]; int square(int i) { return i * i; } void foo() { static j = square(3); // CTFE writeln(j); assert(square(4)); // run time writeln(eval!(square(5))); // CTFE }
CTFE is subject to the following restrictions:
- The function source code must be available to the compiler. Functions which exist in the source code only as extern declarations cannot be executed in CTFE.
- Executed expressions may not reference any global or local static variables.
- asm statements are not permitted
- Non-portable casts (eg, from int[] to float[]), including casts which depend on endianness, are not permitted. Casts between signed and unsigned types are permitted
- Reinterpretation of overlapped fields in a Union is not permitted.
Pointers are permitted in CTFE, provided they are used safely:
- C-style semantics on pointer arithmetic are strictly enforced. Pointer arithmetic is permitted only on pointers which point to static or dynamic array elements. Such pointers must point to an element of the array, or to the first element past the array. Pointer arithmetic is completely forbidden on pointers which are null, or which point to a non-array.
- The memory location of different memory blocks is not defined. Ordered comparison (<, <=, >, >=) between two pointers is permitted when both pointers point to the same array, or when at least one pointer is null.
- Pointer comparisons between independent memory blocks will generate a compile-time error, unless two such comparisons are combined using &&.
- Equality comparisons (==, !=, is, !is) are permitted between all pointers, without restriction.
- Any pointer may be cast to in CTFE must also be executable at run time. The compile time evaluation of a function does the equivalent of running the function at run time. This means that the semantics of a function cannot depend on compile time values of the function. For example:
int foo(string s) { return mixin(s); } const int x = foo("1");is illegal, because the runtime code for foo cannot be generated. A function template would be the appropriate method to implement this sort of thing.
No-GC Functions
No-GC functions are functions marked with the @nogc attribute. Those functions do not allocate memory on the GC heap, through the following language features:
- constructing an array on the heap
- resizing an array by writing to its .length property
- array concatenation and appending
- constructing an associative array on the heap
- indexing an associative array (because it may throw RangeError if the specified key is not present)
- allocating an object on the heap
. A @nogc function is covariant with a non-@nogc function.
void function() fp; void function() @nogc gp; // pointer to @nogc function void foo(); @nogc void bar(); void test() { fp = &foo; // ok fp = &bar; // ok, it's covariant gp = &foo; // error, not contravariant gp = &bar; // ok }
To ease debugging, in a ConditionalStatement controlled by a DebugCondition @nogc functions can call functions that are not @nogc.
Function Safety
Safe functions are functions that are statically checked to exhibit no possibility of undefined behavior. Undefined behavior is often used as a vector for malicious attacks.
Safe Functions
Safe functions are marked with the @safe attribute.
The following operations are not allowed in safe functions:
- No casting from a pointer type to any type with pointers other than void*.
- No casting from any non-pointer type to a pointer type.
- No pointer arithmetic (including pointer indexing).
- Cannot access unions that have pointers or references overlapping with other types.
- Calling any system functions.
- No catching of exceptions that are not derived from class Exception.
- Disallow @system asm statements.
-.
- Cannot use void initializers for pointers.
- Cannot use void initializers for class or interface references.
Trusted functions are marked with the @trusted attribute.
Trusted functions are guaranteed to not exhibit any undefined behavior if called by a safe function. Furthermore, calls to trusted functions cannot lead to undefined behavior in @safe code that is executed afterwards. It is the responsibility of the programmer to ensure that these guarantees are upheld.
Example:
immutable(int)* f(int* p) @trusted { version (none) p[2] = 13; // Invalid. p[2] is out of bounds. This line would exhibit undefined // behavior. version (none) p[1] = 13; // Invalid. In this program, p[1] happens to be in-bounds, so the // line would not exhibit undefined behavior, but a trusted function // is not allowed to rely on this. version (none) return cast(immutable) p; // Invalid. @safe code still has mutable access and could trigger // undefined behavior by overwriting the value later on. int* p2 = new int; *p2 = 42; return cast(immutable) p2; // Valid. After f returns, no mutable aliases of p2 can exist. } void main() @safe { int[2] a = [10, 20]; int* mp = &a[0]; immutable(int)* ip = f(mp); assert(a[1] == 20); // Guaranteed. f cannot access a[1]. assert(ip !is mp); // Guaranteed. f cannot introduce unsafe aliasing. }
Trusted functions may call safe, trusted, or system functions.
Trusted functions are covariant with safe or system functions. Attribute Inference.
Uniform Function Call Syntax (UFCS)[]) } } | https://dlang.org/spec/function.html | CC-MAIN-2020-16 | refinedweb | 2,918 | 52.6 |
Bringing my Meetup APIMASH Starter Kit to Windows Phone
Bringing my Meetup APIMASH Starter Kit to Windows Phone
Join the DZone community and get the full member experience.Join For Free:
The goal of my starter kit (and the other kits on the site) is to demonstrate how to call the API from a Windows Store app, how to display the results (in my case using databinding), and also to provide a starting point for folks who might want to "mash up" one of the starter kits with additional data from other APIs.
Reimagining for Windows Phone
The next step in APIMASH is bringing our starter kits to Windows Phone. In my case, my original starter kit was written using HTML and JavaScript, and built on top of the Grid App Visual Studio template, which uses the WinJS ListView control to display the data on the home page.
Because WinJS is not supported on Windows Phone, I had to make a decision on how to move the app to that platform. I could have chosen to use the WebBrowser control, and architect the app using the single-page-app (SPA) style of development, using libraries like KnockoutJS, etc. And if I was starting an app today from scratch that I wanted to build with HTML and JavaScript, and deploy to both the Windows Store and Windows Phone, that would probably be an avenue I'd explore. Perhaps I'll do just that in a future blog post. In the meantime…
Given the UI differences between a PC and a phone, I figured I would just approach the phone version of the app in the same way I did the Windows Store version…find the Visual Studio template that most closely approximates my needs, and modify it as little as needed to achieve the UX and functionality I desired.
I settled on the Windows Phone Databound app template, as shown in the dialog below:
This template provides a simple databound list in the main page, and when any given item is invoked, the app navigates to a details page for that particular item. The app uses a simplified MVVM architecture, and demonstrates the use of sample data as well.
Understanding the Template
The Windows Phone Databound App template is pretty straightforward. It consists of:
- 2 XAML pages, MainPage.xaml and DetailsPage.xaml (both with accompanying "code behind" C# files.
- An App.xaml file, whose main purpose in the context of the app template is to instantiate and initialize the main viewmodel of the app (more on that in a moment)
- A set of design-time sample data, implemented in a XAML file (this file is located in the aptly-named SampleData folder)
- A pair of ViewModel classes, implemented in C#, a MainViewModel, which implements the items collection which will be bound to a list in MainPage.xaml, and an ItemViewModel, which contains the properties that represent an item.
Let's start with the ViewModels. Their job is to abstract the concerns of fetching and parsing data away from the pages. The ItemViewModel has three properties, LineOne, LineTwo, and LineThree, which are implemented with private members for internal representation (whose names start with an underscore and a lowercase first letter), and public properties (whose names start with an uppercase letter). Each one calls NotifyPropertyChanged if the property value changes, so that any controls bound to the data can update the value.
We could just stuff the values we want for a given meetup into these properties, which would avoid some changes elsewhere in the code, but there are a couple of problems with this…one, it would be potentially confusing to stick with such generic property names, since they don't tell us anything about what the property represents, and two, we probably actually need more than three properties. We'll come back to the ItemViewModel momentarily.
The MainViewModel, which as noted above is instantiated and initialized at app launch in App.xaml.cs, is responsible for creating an ObservableCollection of ItemViewModel items, and (via its LoadData function) populating the collection with data. In the template code, the runtime data is populated by simply adding a bunch of instances of the ItemViewModel class initialized with sample data.
Template Modifications
To modify the template code for our purposes, we'll start with the ItemViewModel.cs class. First, we'll rename both the class and the filename to MeetupViewModel, to be more descriptive of what the model is. One nice feature of C#, if you've not used it before, is the support for refactoring, in particular for renaming members. So when we change the class name from ItemViewModel to MeetupViewModel, Visual Studio gives us a hint that there's some context-specific commands available, which we can expand using the Ctrl+. keystroke combo, which shows the following:
This command will search the project, and find all references to the ItemViewModel class and update them to the new name. If you prefer to review each change, you can use Rename with preview… This is a very convenient way to quickly update all (well, nearly all…if a member is referred to by a string, such as in the call to NotifyPropertyChanged in the the property declarations, rename won't update it).
Next, we'll update the properties and replace the generic LineOne, LineTwo, etc. properties with more descriptive properties, including MeetupID, Name, Url, Description, CityState (combines the City and State fields from the Meetup venue property, for easier display), and more. One of the advantages of using a ViewModel is that we can describe how we want the data to look for our app, regardless of whether that's how the original data actually looks. When we load the data, we can massage it to fit the format we want to use.
That's all that's needed for the MeetupViewModel class.
For the MainViewModel (view the full code here), we'll start in the constructor. In order to support the retrieval of live data, we'll add a couple of lines of code to create an instance of the System.Net.WebClient class, and handle its DownloadStringCompleted event:
public MainViewModel() { _client = new WebClient(); _client.DownloadStringCompleted += _client_DownloadStringCompleted; this.Items = new ObservableCollection<MeetupViewModel>(); :
1: public void LoadData()
2: {
3: AppConstants.meetupUri += "&city="
4: + AppConstants.meetupCity
5: + "&state=" + AppConstants.meetupState
6: + "&page=" + AppConstants.maxMeetupsToFind
7: + "&key=" + AppConstants.meetupKey
8: + "&radius=" + AppConstants.meetupDistance;
9: if (AppConstants.meetupKeywords != "")
10: {
11: AppConstants.meetupUri +=
12: "&text=" + AppConstants.meetupKeywords;
13: }
14:
15: _client.DownloadStringAsync(new
16: Uri(AppConstants.meetupUri));
17: }
Pretty simple…we just construct the URI with the parameters desired, and call DownloadStringAsync with the URI.
Next, we add the handler function for handling the response from the WebClient request, like so (apologies for any extra line breaks in the code…you can view the code on gihub for a more readable version):
1: void _client_DownloadStringCompleted(object sender,
2: DownloadStringCompletedEventArgs e)
3: {
4: XElement meetupElements =
5: XElement.Parse(e.Result);
6:
7:
8: var meetups =
9: from meetup in
10: meetupElements.Descendants("item")
11: where meetup.Element("venue") != null
12: select new MeetupViewModel
13: {
14: MeetupID =
15: meetup.Element("id").Value,
16: Name =
17: meetup.Element("name").Value,
18: Url =
19: meetup.Element("event_url").Value,
20: Description =
21: meetup.Element("description").Value,
22: CityState =
23: meetup.Element("venue").Element("city").Value
24: + ", " +
25: meetup.Element("venue").Element("state").Value,
26: Latitude =
27: meetup.Element("venue").Element("lat").Value,
28: Longitude =
29: meetup.Element("venue").Element("lon").Value,
30: LatLong =
31: meetup.Element("venue").Element("lat").Value
32: + ", " +
33: meetup.Element("venue").Element("lon").Value
34: };
35:
36: var index = 0;
37: foreach (MeetupViewModel
38: meetupItem in meetups)
39: {
40: meetupItem.ID = index.ToString();
41: this.Items.Add(meetupItem);
42: index++;
43: }
44:
45: this.IsDataLoaded = true;46: }
In the code above, we start by using Linq to XML (requires a reference to System.Xml.Linq) to parse the XML returned from the request to the Meetup API. We could request JSON as the format as in the HTML/JS version, but Linq to XML is pretty awesome, so XML makes more sense here.
Once parsed into a series of elements, we run a Linq query that looks for all meetups that are in-person (since we're focused on finding local coffee shops near each meetup, including online/virtual meetups wouldn't make much sense), and for each of the returned items, creates a new instance of the MeetupViewModel class, mapping the desired values from the elements in the XML data to the properties of the MeetupViewModel class.
Once that's done, we loop over the results, and add a simple numeric ID to each (this property is used in the app navigation to identify which item was invoked…see DetailsPage.xaml.cs for how its used), and then add the item to the ObservableCollection created at app startup, and increment the ID value.
Assuming no errors, we've now got data! But since we've changed the names of the properties, the databinding code found in MainPage.xaml, and DetailsPage.xaml will no longer work. Let's fix that.
In MainPage.xaml, the changes are pretty straightforward…we just need to update the bindings to the new property names. In the template, MainPage.xaml contains an ItemTemplate for the LongListSelector control, which has a DataTemplate containing two TextBlocks inside a StackPanel. These are bound to LineOne and LineTwo respectively, and I want them to show the meetup Name and the composite CityState property, so I just need to update these like so (some attributes omitted for readability):
1: <TextBlock Text="{Binding Name}"
2:
3: <TextBlock Text="{Binding CityState}"
4: TextWrapping="Wrap"
5:
I also updated the app name and page name to be specific to my app, and added a Meetup-style icon for each row in the list, and the result looks like this:
Note that no changes were needed to the code-behind for MainPage.xaml, as the page relies solely on declarative databinding for the display of the items. Tapping any of the items will navigate to DetailsPage.xaml, but if we do that now, the app won't work, because we've not yet updated that page.
Here, the changes are a little more substantial, but not dramatically so. As with MainPage.xaml, we should change the app and page name, and then also update the placeholder property names to the ones in our MeetupViewModel class. We'll also add a Map control to the page, as well as a couple of buttons to show us nearby coffee shops and to navigate to the web site for the specific meetup. Here's what the updated XAML looks like (some attributes omitted for readability):
1: <StackPanel x:Name="TitlePanel"
2: Grid.
3: <TextBlock
4:
5: <TextBlock
6: Text="{Binding Name}"
7:
8: </StackPanel>
9:
10: <Grid x:Name="ContentPanel"
11: Grid.
12: <ScrollViewer Margin="0,0,0,365">
13: <TextBlock
14: Text="{Binding Description}"
15:
16: </ScrollViewer>
17: <maps:Map x:Name="MyMap"
18: Loaded="MyMap_Loaded"
19: LandmarksEnabled="True"
20: PedestrianFeaturesEnabled="True"
21: VerticalAlignment="Bottom"
22: Height="280"
23:
24: <Button Content="Need COFFEE!"
25: HorizontalAlignment="Left"
26: Margin="0,527,0,0"
27: VerticalAlignment="Top"
28: Height="80"
29: Width="230"
30:
31: <Button
32: Content="Meetup Site"
33: HorizontalAlignment="Left"
34: Margin="216,527,0,0"
35: VerticalAlignment="Top"
36: Height="80" Width="230"
37:
38: </Grid>
In the code-behind (DetailsPage.xaml.cs), we first need some using declarations for the namespaces of the various features we'll add:
1: using System.Device.Location;
2: using System.Windows.Shapes;
3: using System.Windows.Media;
4: using Microsoft.Phone.Maps.Controls;
5: using Microsoft.Phone.Tasks;
6: using APIMASH_MeetupMaps_StarterKit.Customization;
All but the last two are related to the map functionality. Microsoft.Phone.Tasks allows us to access two of the tasks (MapsTask and WebBrowserTask) we'll use to handle our button clicks. And the last one provides easier access to the AppConstants class containing our static variables, which includes the search term used below.
Next, because we'll be using its value in some of our other code, we need to move the declaration of the index variable to the start of the codebehind class:
1: public partial class DetailsPage : PhoneApplicationPage
2: {
3: int index;
4:
5: // Constructor
6: public DetailsPage()
7: {
8: // etc.
9: }
The OnNavigatedTo event is unchanged, apart from the modification to where the index variable was declared.
When the map control is loaded, it will fire its Loaded event, which is mapped to the MyMap_Loaded handler:
1: private void MyMap_Loaded(object sender,
2: RoutedEventArgs e)
3: {
4: double lat =
5: double.Parse(App.ViewModel.Items[index].Latitude);
6: double lon =
7: double.Parse(App.ViewModel.Items[index].Longitude);
8:
9: MyMap.Center = new GeoCoordinate(lat, lon);
10: MyMap.ZoomLevel = 15;
11:
12: // Create a small circle to
13: // mark the current meetup location.
14: Ellipse myCircle = new Ellipse();
15: myCircle.Fill =
16: new SolidColorBrush(Colors.Red);
17: myCircle.Height = 20;
18: myCircle.Width = 20;
19: myCircle.Opacity = 50;
20:
21: MapOverlay myOverlay = new MapOverlay();
22: myOverlay.Content = myCircle;
23: myOverlay.PositionOrigin = new Point(0.5, 0.5);
24: myOverlay.GeoCoordinate = MyMap.Center;
25:
26: MapLayer myLayer = new MapLayer();
27: myLayer.Add(myOverlay);
28: MyMap.Layers.Add(myLayer);
29: }
The code above grabs the latitude and longitude values from the current meetup item, based on the index of the selected item, centers the map on those coordinates, and then adds an Ellipse object to mark the location of the meetup.
The last bit of additional code are the button click event handlers:
1: private void Button_Click(object sender,
2: RoutedEventArgs e)
3: {
4: double lat =
5: double.Parse(App.ViewModel.Items[index].Latitude);
6: double lon =
7: double.Parse(App.ViewModel.Items[index].Longitude);
8:
9: MapsTask getCoffeeTask = new MapsTask();
10: getCoffeeTask.Center = new GeoCoordinate(lat, lon);
11: getCoffeeTask.SearchTerm = AppConstants.searchTerm;
12: getCoffeeTask.ZoomLevel = 16;
13: getCoffeeTask.Show();
14: }
15:
16: private void Button_Click_1(object sender,
17: RoutedEventArgs e)
18: {
19: WebBrowserTask meetupTask =
20: new WebBrowserTask();
21:
22: meetupTask.Uri =
23: new Uri(App.ViewModel.Items[index].Url);
24: meetupTask.Show();
25: }
In the first, we once again grab the latitude and longitude from the meetup item, and use that to launch a new MapsTask, setting its SearchTerm to the configured searchTerm in our AppConstants class (which defaults to "coffee").
The second click handler uses the WebBrowserTask to launch a new browser window for the web site for the meetup being viewed.
Here's what DetailsPage.xaml looks like when we're done:
Clicking the "Need COFFEE!" (and who doesn't?) button launches the Maps app and shows coffee shops in the area:
Because we're leveraging the built-in Maps app on the phone, we can now easily select a desired coffee shop and get walking or driving directions from our current location, without having to write any of that code ourselves.
Last, but not least, clicking the "Meetup Site" button opens a new browser window with the url set to the meetup site being viewed:.
AppBuilder
If you've not yet signed up, head over to the AppBuilder site. It's free, and you can find lots of informative videos and more to help you get started. And AppBuilder has recently added rewards, letting you earn points that you can redeem for XBOX games, a free year's Windows Store or Windows Phone developer account, and more! }} | https://dzone.com/articles/bringing-my-meetup-apimash | CC-MAIN-2018-30 | refinedweb | 2,580 | 54.93 |
A Simple Servlet for Running JUnit in Glassfish
When teaching unit testing in the context of a simple EJB3.1 application, I
was looking for an easy way of testing managed beans and session beans inside
Glassfish. Of course, one can test out-of-container or use an embedded
container (but I didn't quite figure out how to do that with Glassfish
v3—I'd appreciate hints), or amock container (but that seemed to
require a bit of setup).
I hit upon a scheme that I had not seen elsewhere: put the unit tests in the
container and trigger them with a servlet that reports the outcomes.
Advantages:
- There is very little setup to learn
- It is easy to run the tests from a browser
- The tests run in the exact same environment as your app
- No need to wait for container startup with every test. (This could also
be a disadvantage because the database isn't in a pristine state at the
beginning of each test.)
Here is how to do it.
Add the JUnit 4 JAR to
WEB-INF/lib.
Add the following servlet to your WAR file:
package myapp.servlets;
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.junit.internal.JUnitSystem;
import org.junit.runner.JUnitCore;
public class TestServlet extends HttpServlet {
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
String className = request.getParameter("class");
response.setContentType("text/plain");
OutputStream out = response.getOutputStream();
final PrintStream pout = new PrintStream(out);
new JUnitCore().runMain(new JUnitSystem() {
public PrintStream out() { return pout; }
public void exit(int arg0) {}
}, className);
out.close();
}
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
doGet(request, response);
}
}
Add the following entries to
web.xml:
<servlet>
<servlet-name>TestServlet</servlet-name>
<servlet-class>myapp.servlets.TestServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>TestServlet</servlet-name>
<url-pattern>/test</url-pattern>
</servlet-mapping>
Write your JUnit test case in the usual way. For example, here is a test for
a session bean:
package myapp.session;
import static org.junit.Assert.assertEquals;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import org.junit.Before;
import org.junit.Test;
import myapp.entity.Person;
public class UserSBTest {
private UserSB userSB;
@Before public void init() throws NamingException {
InitialContext context = new InitialContext();
userSB = (UserSB) context.lookup("java:global/MyApp/UserSB");
}
@Test public void testAddUser() {
Person p = new Person();
p.setFirstName("Fred");
p.setUsername("fred");
userSB.save(p);
Person q = userSB.find("fred");
assertEquals(p.getFirstName(), q.getFirstName());
userSB.removePerson("fred");
}
}
Unfortunately, you can't just have the container inject the session bean.
@EJB private UserSB userSB; // Doesn't work when JUnit loads the class
When JUnit loads the class, it doesn't deal with EJB annotations.
Then point your browser to
Proverbial exercise to the reader: Add a green bar when the test cases
pass.
- Login or register to post comments
- Printer-friendly version
- cayhorstmann's blog
- 11977 reads
by cayhorstmann - 2009-02-17 07:18Thanks--I agree that using OpenEJB is a reasonable approach. I understand there is now (or there soon will be) a way of embedding Glassfish in a similar way, but I haven't yet figured out the setup.
by bvansomeren - 2009-02-17 00:10Hi, That's actually a nice way of doing it. I've worked on projects where we had a whole maven build that used a small container (OpenEJB) to load our EJB 3.0 components and test them. Setting it up was a pain as you can imagine. Now I'm looking for a way to do something simpler as part of a pet project of mine and this fits the bill nicely. You could even run this during deploy and throw a fat Runtime exception when a test fails and thus bail out of the deploy. Thanks
by cayhorstmann - 2009-02-16 10:28Thanks for the tip about easygloss. Of course, what I really want is not to have to worry at all but find a way for the container to do the injection. Is there some way to locate the ambient app server, hand it an object, and say "hey, inject this"?
by stephenconnolly - 2009-02-16 10:00You could also have a look at easygloss.dev.java.net It could certainly handle injecting your injectables for you ;-) It also might be easier, with EasyMock, for running the tests outside of the servlet container
by cayhorstmann - 2009-02-15 07:17Thanks, but Cactus is much more heavyweight than what I was looking for. Maybe because it has been there for many years :-)
by euxx - 2009-02-14 22:24Apache Cactus has been there for years | https://weblogs.java.net/blog/cayhorstmann/archive/2009/02/a_simple_servle.html | CC-MAIN-2015-22 | refinedweb | 793 | 58.58 |
RIP (Routing Information Protocol) was born to make system administrators happy. Every administrator who has to supervise a large number of routers with never-ending connectivity changes should know RIP. There are a lot of other routing protocols that are much better and more full-featured than RIP, but RIP was and still is the basic routing protocol. In this Daily Drill Down, I’ll take a look at this useful protocol.
Let's start from the beginning
RIP is one of the most enduring of all routing protocols. It’s a very simple protocol, based on distance-vector (or Bellman-Ford, as it's also known) routing algorithms that predate ARPANet (the Advanced Research Projects Agency Network). To be exact, these algorithms were originally described academically by R. E. Bellman, L. R. Ford, Jr., and D. R. Fulkerston between 1957 and 1962. During the 1960s, these algorithms were widely deployed by various companies and marketed under different names.
The RIP protocol in the form that we use now was developed in the 1970s at Xerox Labs as part of the XNS (Xerox Network Systems) Routing Protocol suite. The most popular variants are RIP version 1, described in RFC1058, and RIP version 2, described in RFC2453.
RIP version 2, or RIPv2 as it is more commonly known, was first proposed as an update to RIP in RFC1388 in January 1993. This RFC was later superseded by RFC1723 in November 1994 by Gary Scott Malkin and Scott Bradner. Neither of these RIPv2 proposals was intended to be a replacement for RIP, but they were both designed as an extension of RIP to provide additional functionality and capability. Some of these capabilities are compatible with RIP (first version) and some are not. To avoid supplying information to RIPv1 routes that could be misinterpreted, RIPv2 can use only noncompatible features when its packets are multicast. On interfaces that aren’t capable of IP multicast, RIPv1-compatible packets that don’t contain potentially confusing information are used. Some of the most notable RIPv2 enhancements are:
- Next hop
- Network mask
- Authentication
- RIP tag field
RIP at run time
Because RIP is a distance-vector protocol, a router running RIP will send updates to its neighbors, thus allowing the convergence to a known topology. In each update, the distance of any given router will be broadcast to its neighbor. RIP classifies routers as active and passive (silent). Active routers advertise their routes (reachability information) to others. Passive routers listen and update their routes based on advertisements but do not advertise. Typically, routers run RIP in active mode, while hosts use passive mode.
I'd rather not dwell on the internal RIP structure, but I will discuss ways of implementing this protocol on routers and on servers working as routers. If you're interested in the protocol structure, please refer to the appropriate RFCs.
Let's look at a sample corporate network, which I used to supervise as a network administrator and security engineer some time ago. Its scheme is depicted in Figure A.
The server
The next important node of this network is the server (192.168.85.33), which serves as a router, operating on the FreeBSD 3.2-STABLE operating system and which serves the local network alongside the remote corporate network connected by dial-up connection to this router.
The state of the dial-up connection to the corporate network (192.168.140.0) is very important because it will influence the building of the correct route to this network. In the case of a working dial-up connection, traffic addressed to any host from this network would be routed through 192.168.85.33. Otherwise, all traffic would be routed via the default (0.0.0.0) route. We’ll also be interested in the private network’s (192.168.2.0) announcements to enable it to communicate with hosts on this network.
Back to our Cisco router
Our configuration begins by telling the router to start a RIP process. It then tells the router the networks on which it should send and listen for RIP updates. Because RIP is a classed protocol, the configuration can't specify the aggregate 192.168.85.0/28 directly. Instead, you supply a network statement for each of the class B networks. These statements don’t restrict what routes can be carried to and from this router, only which of the router's directly attached networks will be configured for RIP processing.
In this configuration, I’ve added a static default route through the host 172.16.92.16, which is used as the Internet connection. However, I've commented out the string that tells the router to redistribute all static routes using the RIP protocol. In the topology of the network I'm using as an example, all hosts are using Cisco's IP as a static default route.
! process RIP
router rip
version 2
network 192.168.85.0
network 192.168.140.0
network 192.168.2.0
! redistribute static routes with a default metric
! redistribute static
ip route 0.0.0.0 0.0.0.0 172.16.92.16
Applications
After you've configured the Cisco router, you should start the same process on the server (192.168.85.33) that runs FreeBSD. There are a lot of tools for configuring this on the FreeBSD OS. The most popular one is GateD , a routing daemon that handles multiple routing protocols. Since GateD is very popular in the UNIX world, a lot of UNIX distributions already include it in the binary package. But I'll get back to it a bit later; now I'd like to discuss software solutions from a rather new but still potentially profitable and promising source—GNU Zebra .
GNU Zebra is free software (distributed under GNU General Public License) that manages TCP/IP-based routing protocols. Unlike traditional, GateD-based, monolithic architectures and even the so-called new modular architectures that remove the burden of processing routing functions from the CPU and utilize special ASIC chips instead, Zebra software offers true modularity. Zebra is intended to be used as a route server and a route reflector. Currently, Zebra is available on the GNU/Linux platform and almost all branches of FreeBSD, NetBSD, and OpenBSD, and it’s in the development stage for Solaris 7.
A Cisco of a different color
From the user's side, Zebra looks like Cisco with the same interface and with all the basic commands of Cisco routers. It consists of separate daemons for numerous routing protocols. The main daemon starts on the server and accepts connections to the port 2601 (Zebra). After you've connected to it, you'll get an interface very similar to Cisco's.
And that's why, in the case of a server running Zebra, all you need to do is perform the same operations with it as you did with the Cisco router earlier and change the static route not to the Cisco's E1 link but to the router directly. This looks pretty easy.
ip route 0.0.0.0 0.0.0.0 192.168.85.46
This is the sample cut of /etc/gated.conf, which will enable the RIP protocol and will allow it to send its announcements to the ed2 interface (192.168.85.0). You don’t need any announcements to the ed3 interface since you have no host to use this information in the Windows NT network.
rip yes {
interface "ed2" ripin ripout version 2;
};
import proto rip {
all;
};
export proto rip restrict;
Now, we should check to see whether everything is okay. To do this, let's look at our routing table on the Cisco router:
kitty.16.92.16 to network 0.0.0.0
R 192.168.2.0/24 [120/1] via 192.168.85.33, 00:00:13, Ethernet0
192.168.85.0/28 is subnetted, 1 subnets
C 192.168.85.32 is directly connected, Ethernet0
172.16.0.0/30 is subnetted, 1 subnets
C 172.16.92.16 is directly connected, Serial0
S* 0.0.0.0/0 [1/0] via 172.16.92.16
And on the FreeBSD server, running Zebra:
dixi#show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
B - BGP, * - FIB route.
K* 0.0.0.0/0 ed2 (1) 195.3.85.46
R 0.0.0.0/0 ed2 (1) 195.3.85.46
C* 127.0.0.0/8 lo0 (6) direct
C* 192.168.2.0/24 ed3 (2) direct
C* 195.3.85.32/28 ed2 (1) direct
The state of the FreeBSD server, running GateD, can be easily checked with the ipquery command, which accompanies the GateD package.
You’ve now finished with the RIP setup in your network!
Security
Unless you implement defense methods to secure your network, it will be vulnerable to a RIP protocol attack, which has as its target a default route change. In our example, you have your traffic going through the hacker's host. That's why I'll spend some time describing how to use the RIP protocol and secure it effectively.
During RIP protocol setup in a network with a complex topology, you have a lot of chances to mistakenly reconfigure one of the routers, which will cause a routing loop and cause you to lose control of your remote routers. To avoid this, you have to propagate static backup routes on your remote routers to be able to access them in either case. The trouble with a static route is that, on most routers, static routes normally overlap any route gathered from a dynamic protocol. And you want the static route to take effect only in an emergency.
The secret to allowing a static route to be overlapped by any route gathered from a dynamic routing protocol in the Cisco IOS lies in assigning an administrative distance to the static route. The administrative distance makes that route less preferable than any route that comes from the dynamic routing protocol. To set this distance, you need to know at least the basic administrative distances that Cisco has assigned. These are the most important ones for us:
- Connected Interface 0
- Static Route 1
- RIP 120
- Unknown 255
For a complete list, refer to the Cisco documentation, which comes with the router.
To set the route preference for this static route, insert the distance number after the ip route statement. For example:
! set the administrative distance on the static route to be higher then
! RIP, what will make any route which comes from RIP be more preferable
ip route 0.0.0.0 0.0.0.0 192.168.140.1 130
The next potential problem appears when you need to get away without sending routing updates to the specific interfaces. There are a lot of reasons that can be pertinent here, starting from a low-bandwidth link between the two sites and ending with the absence of a device that can process such updates. To accomplish this, you should turn such interfaces to the "passive" state. For example, to exclude your serial interface from sending any route updates there, use the following:
! process RIP
router rip
version 2
network 192.168.85.0
network 192.168.140.0
network 192.168.2.0
! suppress advertisements on serial interface
passive-interface serial 0
Unfortunately, sometimes this isn't enough to solve the problem. Often, when you're suppressing updates on an interface, you also want to avoid getting any updates that another router can send. The solution is rather simple: administrative distances. Let's look at this example:
! process RIP
router rip
version 2
network 192.168.85.0
network 192.168.140.0
network 192.168.2.0
! suppress advertisements on serial interface
passive-interface serial 0
! set the default administrative distance to 255 what will make router
! ignore routing updates as far as 255 is considered unusable by Cisco IOS
distance 255
! set the administrative distance for these sources back to normal
distance 120 192.168.85.0 0.0.0.255
distance 120 192.168.140.0 0.0.0.255
distance 120 192.168.2.0 0.0.0.255
An administrative distance of 255 is considered unusable by Cisco IOS; therefore, by using this rule set, you can easily avoid having the router get any updates from the "passive" interface.
Finally, I'd like to address the problem of restricting the sources of the route advertisements. First, suppose I want to set my router to trust the routers on 192.168.140.0/24 and 192.168.85.0/24 to tell me anything. At the same time, I don’t want the router on 192.168.2.0/24 to be fully trusted, and I want to trust only route updates for the 192.168.2.0/24 network.
! process RIP
router rip
version 2
network 192.168.85.0
network 192.168.140.0
network 192.168.2.0
distance 120 192.168.140.0 0.0.0.255
distance 120 192.168.85.0 0.0.0.255
! set the administrative distance for these sources normal but only got
! routes that pass access list 1
distance 120 192.168.2.0 0.0.0.255 1
!
access-list 1 permit 192.168.2.0 0.0.0.255
access-list 1 deny 0.0.0.0 255.255.255.255
Suggestions and evolution
It goes without saying that I don't pretend to fully cover the subject of the RIP protocol in this Daily Drill Down. I’ve described only some basics of theory and practice of implementing of the RIP protocol. For more full coverage of this subject, I suggest you refer to the book IP Routing Fundamentals (ISBN: 1-57870-071-x), written by Mark A. Sportack and published by Cisco Press in 1999.
The evolution of information and network infrastructure has required a rethinking of the importance of routing in future networks. Nowadays, routing technologies are being used to perform tasks beyond the capabilities of traditional hardware-based routers. As progress never stops, the need for new routing algorithms and technologies will follow step by step as the demand multiplies. However, steps forward cannot be accomplished without a knowledge of the basics. The RIP protocol is one of the main components of this basic knowledge foundation. It's worth pointing out that even now, RIP is still a core routing protocol in a lot of corporate as well as scientific and technical networks.. | http://www.techrepublic.com/article/understanding-the-rip-protocol/ | CC-MAIN-2017-34 | refinedweb | 2,435 | 65.42 |
Capital Asset Pricing Model (CAPM)
11/10
Markowtiz (1952) did the ground work for the CAPM (Capital Asset Pricing Model). From the study of the early theories we know that the risk of an underlying security is measured by the standard deviation of its pay off or return. Therefore, for a larger risk we will have higher standard deviation of the respective security return. Markowtiz argued that the standard deviations of security returns for any two securities are not additive if they are combined together unless the returns of those two assets are perfectly positively correlated. He also observed that the standard deviation of security return of a portfolio is less than the sum of the standard deviation of those assets constituted the portfolio. Markowitz developed the efficient frontier of portfolio, the efficient set from where the investors select the portfolio which is most suitable for them. Technically, an investor will hold a mean-variance efficient portfolio which will return the highest pay off to them with a given level of variance. Markowitz’s computation of risk reduction is very rigorous and tedious. Sharpe (1964) developed the single index model which is computationally efficient. He derived a common index where the asset return is related with the common index. This common index can be any variable which has influence on the asset return. We can apply this single index model to the portfolio as well since the expected return of a portfolio is the weighted average of the expected returns of the constituents of the portfolio.
When we need to analyze the risk of an individual security, we have to consider the other securities of the portfolio as well. Because, we are interested about the additional risk being added to the portfolio when one addition security is added to the portfolio. Thus the concept of risk share of an individual security to the portfolio is different from the risk of that security itself. An investor faces two kinds of risks. One is called the systematic risk and the other is known as unsystematic risk. Unsystematic risk is a kind of risk which can be minimized or eliminated by increasing the size of the portfolio, namely, by increasing the diversity of the portfolio. The systematic risk is well known as the market risk. Because, it depends on the overall movement of the market and the financial condition of the whole economy. By diversifying the portfolio, we cannot eliminate the systematic risk.
Theoretically CAPM offers very commanding predictions about how to measure risk and return relationship. However, the empirical evidence of CAPM is not very encouraging. One may conclude that these failings are rooted in poor construction of the model but once can argue that this failing arises because of the difficulties of building comprehensive and valid test model. The estimation strategy of CAPM is not free from the data-snooping bias. Because of the non-experimental nature of economic theory we cannot avoid this problem. Moreover a lot of investigations already have been done to test the validity of the CAPM. Thus, no attempt has been made in this paper to test the validity of the model. Here in this paper we will critically examine some literatures on CAPM testing. We will begin with understanding the model. We will briefly outline some mathematics required to understand the underlying assumptions of the model. Then we will focus on the single and multi-factor CAPM models to analyze the model assumptions and restrictions required to hold these models to be true.
2. The Capital Asset Pricing Model Explained
In 1959 Markowitz introduced the notion of mean-variance efficient portfolio. According to him it is optimal for an investor to hold a mean-variance efficient portfolio. The mean-variance efficient portfolio is a portfolio for an investor where he minimizes the portfolio return, given the expected return and maximizes expected return, given the variance. Later Sharpe (1964) and Lintner (1965b) further developed the work of Markowitz. In their work it has been showed that if the investors’ expectations are homogeneous and when the hold the mean-variance efficient portfolio then in the nonexistence of market friction the market portfolio will be a mean-variance efficient portfolio.
There are two basic building blocks to derive the CAPM: one is the capital market line (CML) and the other one is the security market line (SML). In CAPM the securities are priced in a way where the expected risks are compensated by the expected returns. As we will be investigating different form of CAPM in this work it is worthy to review the basic notions of CML and SML.
The capital market line (CML) conveys the return of an investor for his portfolio. As we have already mentioned, there is a linear relationship exists between the risk and return on the efficient portfolio that can be written as follows:
On the Other hand the SML specifies the return what an individual expects in terms of a risk-free rate and the relative risk of a portfolio. The SML with security i can be represented as follows:
Here the Beta is interpreted as the amount of non-diversifiable risk intrinsic in the security relative to the risk of the efficient market portfolio.
The utility function of the market agent is either quadratic or normal
All the diversifiable risks are eliminated
The efficient market portfolio and the risk-free assets dominate the opportunity set of the risky asset.
We can use the security market line can be used to test whether the securities are fairly priced.
3. The Logic of the Model:
To understand the logic of CAPM, let us consider a portfolio M. To clear the asset market this portfolio must be on the efficient frontier. Thus the underlying concept that is true for minimum variance portfolio, must be true for the market portfolio as well. With the minimum variance condition for portfolio M when there are N risky assets, we can write the minimum variance condition by the following equation:
Where is the expected return on the asset i and . The market beta for the asset is derived by dividing the covariance of the market return and individual asset return by the variance of the market return,:
However, this assumption of riskless borrowing and lending is unrealistic. Black (1972) developed a CAPM model where he did not make this extreme assumption. He showed that the mean variance efficient portfolio can be obtained by allowing the short selling of the risky assets. The Black and Sharpe-Lintner model differ in terms of the . Black observed that has to be less than the expected market return which allows the premium for the market beta to be positive. In the Sharpe-Lintner model the expect return was the risk free interest rate. The assumption that Black made about short selling is not realistic either. Because, if there is no risky asset (Sharpe-Lintner version) and if there is unrestricted short selling of the risky asset (Black version) then the efficient portfolio is actually not efficient and there does not exist any relation between market beta and CAPM (Fama and French: 2003). So, the CAPM models are built on some extreme assumptions. To testify the validity of these models researchers have tested the model against the market data. In this paper we will investigate some of those empirical researches.
4. Literature on CAPM testing
There are three relationships between expected return and market beta which is implied by the model. First, the expected returns on all the underlying assets are linearly related to their respective betas. Second, the premium for beta is positive which implies that the expected return on the market portfolio exceeds the expected return on assets. Moreover, the returns of these assets are uncorrelated with the expected return of market portfolio. Third, in the Sharpe-Lintner model we see that the underlying assets which are uncorrelated with the market portfolio have the expected returns which are equal to the risk neutral interest rate. In that model, if we subtract the risk free rate from the expected market return, we get the beta premium. Conventionally the tests of CAPM are based on those three implications mentioned above.
4.1 Tests on Risk Premiums
Most of the previous cross-section regression tests primarily focus on the Sharpe-Lintner model’s findings about the concept and the slope term which studies the relationship between expected return and the market beta. In that model they regressed the mean asset returns on the estimated asset betas. The model suggests that the constant term in the cross-section regression stands for the risk free interest rate and the slope term stands for the difference between market interest rate and risk free interest rate.
There are some demerits of the study. First of all, the estimated betas for individual assets are imprecise which creates the measurement error when we use them to explain average returns. Secondly, the error term in the regression has some common sources of variation which produces positive correlation among the residuals. Thus the regression has the downward bias in the usual OLS estimate. Blume (1970) and Black, Scholes and Jensen (1972) worked on overcoming the shortcomings of Sharpe-Lintner model. Instead of working on the individual securities they worked on the portfolios. They combined the expected returns and market beta in a same way that if the CAPM can explain the security return, it can also explain portfolio return. As the econometric theory suggests, the estimated beta for diversified portfolios are more accurate than the estimated beta for the individual security. Therefore, if we use the market portfolio in the regression of average return on betas, it lessens the critical problem. However, grouping shrinks the range of estimated betas and shrinks the statistical power as well. To tackle this researchers sort securities to create two portfolios. The first one contains securities with the lowest beta and it moves up to the highest beta.
We know that when there exists a correlation among the residuals of the regression model, we cannot draw accurate inference from that. Fama and Macbeth (1973) suggested a method to address this inference problem. They ran the regression of returns on beta based on the monthly data rather than estimating a single cross-section regression of the average returns on beta. In this approach the standard error of the means and the time series means can be used to check whether the average premium for beta is positive and whether the return on the asset is equal to the average risk free interest rate.
Jensen (1968) noted that Sharpe-Lintner model also implies a time series regression test. According to Sharpe-Lintner model, the average realized CAPM risk premium explains the average value of an asset’s excess return. The intercept term in the regression entails that “Jensen’s alpha”. The time series regression takes the following form:
In early studies we reject Sharpe-Lintner model for CAPM. Although there exists a positive relation between average return and beta, it’s too flat. In Sharpe-Lintner model the intercept stands for the risk free rate and the slope term indicates the expected market return in access of the risk neutral rate. In that regression model the intercept is greater than the risk neutral rate and the coefficient on beta is less than . In Jensen’s study the p value for the thirty years period is 0.02 only which indicates that the null hypothesis is rejected at 5% significance level. The five and ten year sub-period demonstrates the strongest evidence against the restrictions imposed by the model.
In past several studies it has been confirmed that the relationship in between average return and beta is too flat (Blume: 1970 and Stambaugh: 1982). With the low betas the constant term in the time series regression of excess asset return on excess market return are positive and it becomes negative for the high betas of the underlying assets.
In the Sharpe-Linter model, it has been predicted that portfolios are plotted along a straight line where the intercept equals the risk free rate, , and the slope equals to the expected excess return on the market rate . Fama and French (2004) observed that risk premium for beta (per unit) is lower than the Sharpe-Lintner model and the relationship between asset return and beta is linear. The Black version of CAPM also observes the same where it predicts only the beta premium is positive.
4.2 Testing the ability of market betas of explaining expected returns
Both the Sharpe-Lintner and Black model predict that market portfolio is mean-variance efficient. The mean-variance efficiency implies that the difference in market beta explains the difference in expected return of the securities and portfolios. This prediction plays a very important role in testing the validity of the CAP.
In past literatures, researchers tend to follow different kinds of tests to see whether the constant term in the time-series regression is zero. However, it is very debatable to conclude about the best small sample properties of the test. Gibbons, Shanken and Ross (1989) came up with an F-test for the constant term that has the exact-small sample properties and which is asymptotically efficient as well.
For the tangency portfolio, this F-test builds an entrant by combining the market proxy and the average value of an asset’s excess return. Then we can test if the efficient set and the risk free asset is superior to that one obtained by combining the market proxy and risk free asset alone. From the study of Gibbons, Ross, and Shanken (1989) we can also test whether market betas are sufficient enough to explain the expected returns. The statistical test what is conventionally done is if the explanatory variables can identify the returns which are not explained by the market betas. We can use the market proxy and the left hand side of the regression we can construct a test to see if the market proxy lies on the minimum variance frontier.
All these early tests really do not test the CAPM. These tests actually tested if market proxy is efficient which can be constructed from it and the left hand side of the time series regression used in the statistical test. Its noteworthy here that the left hand side of the time series regression does not include all marketable assets and it is really very difficult to get the market portfolio data (Roll, 1977). So, many researchers concluded that the prospect of testing the validity of CAPM is not very encouraging.
From the early literatures, we can conclude that the market betas are sufficient enough to explain expected returns which we see from the Black version of CAPM. That model also predicts that the respective risk premium for beta is positive also holds true. But at the same time the prediction made by Sharpe and Lintner that the risk premium beta is derived from subtracting the risk free interest rate from the expected return is rejected. The attractive part of the black model is, it is easily tractable and very appealing for empirical testing.
4.3 Recent Tests on CAPM
Recent investigations started in the late 1970s have also challenged the success of the Black version of the CAPM. In recent empirical literatures we see that there are other sources are variation in expected returns which do not have any significant impact on the market betas. In this regard Basu’s (1977) work is very significant. He shows that if we sort the stocks according to earning-price ratios, then the future returns on high earning-price ratios are significantly higher than the return in CAPM. Instead of sorting the stocks by E/P, if we sort it by market capitalization then the mean returns on small stocks are higher than the one in CAPM (Banz, 1981) and if we do the same by book-to-market equity ratios then the set of stocks with higher ratio gives higher average return (Statman and Rosenberg, 1980).
The ratios have been used in the above mentioned literatures associate the stock prices which involves the information about expected returns which are not captured by the market betas. The price of the stock does not solely depend on the cash flows, rather it depends on the present discounted value of the cash flow. So, the different kind of ratios discussed above play a crucial role in analyzing the CAPM. In line with this Fama and French (1992) empirically analyzed the failure of the CAPM and concluded that the above mentioned ratios have impact on stock return which is provided by the betas. In a time series regression analysis they concluded the same thing. They also observed that the relationship between the average return and the beta is even flatter after the sample periods on which early CAPM studies were done. Chan, Hamao, and Lakonishok (1991) observed a strong significant relationship between book-to-market equity and asset return for Japanese data which is consistent with the findings of Fama and French (1992) implies that the contradictions of the CAPM associated with price ratios are not sample specific.
5. Efficient Set of Mathematics
The mathematics of mean-variance efficient set is known as the efficient set of mathematics. To test the validity of the CAPM, one of the most important parts is to test the mean-variance efficiency of the model. Thus, it is very important to understand the underlying mathematics of the model. Here, we will discuss some of the useful results of it (Roll, 1977).
Here we assume that there are N risky assets with a mean vector μ and a covariance matrix Ω. In addition we also assume that the covariance matrix is of full rank. is vector of the portfolio weight. This portfolio has the average return; and variance. Portfolio p is the minimum variance portfolio with the mean return if its portfolio weight vector is the solution to the following constrained optimization:
We solve this minimization problem by setting the Lagrangian function. Let’s define the following:
The efficient frontier can be generated from any two minimum variance portfolios. Let us assume that p and r be any two minimum variance portfolio. The covariance of these two portfolios is as follows:
For a global minimum-variance portfolio g we have the following:
The covariance of the asset return of the global minimum portfolio g and any other portfolio as defined as a is as follows:
For a multiple regression of the return of an asset or portfolio on any minimum variance portfolio except the global minimum variance portfolio and underlying zero-beta portfolio we have the following:
The above mentioned result deserves some more attention. Here we will prove the result. As . The result is obvious. So, we just need to show that
and . Let us assume that r be the minimum variance portfolio with expected return . From the minimization problem we can write the following:
Portfolio a can be expressed as a combination of portfolio r and an arbitrage portfolio which is composed of portfolio a minus portfolio . The return of is expressed as:
Since , the expected return of is zero. Because, as mentioned earlier that it is an arbitrage portfolio with an expected return of zero, for a minimum variance portfolio q. We have the following minimization problem:
The solution to the optimization problem is c=0. Any other solution will contradict q from being the minimum variance.
Since, , thus taking the derivative gives the following expression:
Setting the derivative equal to zero and by substituting in the solution c=0 gives:
Thus the return of is uncorrelated with the return of all other minimum variance portfolio.
Another important assumption of the CAPM is if the market portfolio is the tangency portfolio then the intercept of the excess return market model is zero. Here we will prove the result. Let us consider the following model with the IID assumptions of the error term:
Now by taking the unconditional expectation we get,
As we have showed above, the weight vector of the market portfolio is,
Using this weight vector, we can calculate the covariance matrix of asset and portfolio returns, the expected excess return and the variance of the market return,
Combining these results provide,
Now, by combining the expression for beta and the expression for the expected excess return give,
Therefore, the immediate result is
6. Single-factor CAP
In practice, to check the validity of the CAPM we test the SML. Although CAPM is a single period ex-ante model, we rely on the realised returns. The reason being the ex ante returns are unobservable. So, the question which becomes so obvious to ask is: does the past security return conform to the theoretical CAPM?
We need to estimate the security characteristic line (SCL) in order to investigate the beta. Here the SCL considers the excess return on a specific security j to the excess return on some efficient market index at time t. The SCL can be written as follows:
Here is the constant term which represents the asset return (constant) and is an estimated value of . We use this estimated value as an explanatory variable in the following cross-sectional regression:
Conventionally this regression is used to test for a positive risk return trade off. The coefficient of is significantly different from zero and is assumed to be positive in order to hold the CAPM to be true. This also represents the market price of risk. When we test the validity of CAPM we test if is true estimate of . We also test whether the model specification of CAPM is correct.
The CAPM is single period model and they do not have any time dimension into the model. So, it is important to assume that the returns are IID and jointly multivariate normal. The CAPM is very useful in predicting stock return. We also assume that investors can borrow and lend at a risk free rate. In the Black version of CAPM we assume that zero-beta portfolio is unobservable and thus becomes an unknown parameter. In the Black model the unconstrained model is the real-return market model. Here we also have the IID assumptions and the joint normality return.
Many early studies (e.g. Lintner, 1965; Douglas, 1969) on CAPM focused on individual security returns. The empirical results are off-putting. Miler and Scholes (1972) found some statistical setback faced when using individual securities in analyzing the validity of the CAPM. Although, some of the studies have overcome the problems by using portfolio returns. In the study by Black,Jensen and Scholes (1972) on New York stock exchange data, portfolios had been formed and reported a linear relationship between the beta and average excess portfolio return. The intercept approaches to be negative (Positive) for the beta greater than one (less than one). Thus a zero beta version was developed of the CAPM model. The model was developed in a model where the intercept term is allowed to take different values in different period. Fama and Mcbeth (1973) extended the work of Black et al (1972). They showed the evidence of a larger intercept than the risk neutral rate. They also found that a linear relationship exists between the average returns and the beta. It has also been observed that this linear relation becomes stronger when we work with a dataset for a long period. However, other subsequent studies provide weak empirical evidence of this zero beta version.
We have mixed findings about the asset return and beta relationship based on the past empirical research. If the portfolio used as a market proxy is inefficient then the single factor CAPM is rejected. This is also true if the proxy portfolio is inefficient by a little margin (Roll: 1977, Ross: 1977). Moreover, there exists survivorship bias in the data used in testing the validity of CAPM (Sloan, 1995). Bos and Newbold (1984) observed that beta is not stable for a period of time. Moreover, there are issues with the model specifications too. Amihud, Christen and Mendelson (1993) observed that there are errors in variables and these errors have impact on the conclusion of the empirical research.
We experience less favourable evidence for CAPM in the late 1970s in the so called anomalies literature. We can think the anomalies as the farm characteristics which can be used to group assets in order to have a high ex post Sharpe ratio relative to the ratio of the market proxy for the tangency portfolio. These characteristics provide explanatory power for the cross-section of the average mean returns beyond the beta of the CAPM which is a contradiction to the prediction of CAPM.
We have already mentioned that the early anomalies include the size effect and P/E ratio as we have already mentioned. Basu (1977) observed that the portfolio formed on the basis of P/E ratio is more efficient than the portfolio formed according to the mean-variance efficiency. With a lower P/E firms have higher sample average return and with high P/E ratio have lower mean return than would be the case if the market portfolio is mean-variance efficient. On the other hand the size effect shows that low market capitalization firms have higher sample return than would be expected if the market portfolio was mean-variance efficient.
Fama and French (1992,1993) observed that beta cannot explain the difference between the portfolio formed based on ratio of book value of equity to the market value of equity. Firm has higher average return for higher book market ratio than originally predicted by the CAPM. However, these results signal economically deviations from CAPM. In these anomalies literatures, there are hardly any motivations to study the farm characteristics. Thus there is a possibility of overstating the evidence against the CAPM since there are sample selection bias problem in estimating the model and also there is a problem of data snooping bias. This a kind of bias refers to the biases in drawing the statistical inference that arises from data to conduct subsequent research with the same or related kind of data. Sample selection bias is rooted if we exclude certain sample of stocks from our analysis. Sloan (1995) argued that data requirements for the study of book market ratios lead to failing stocks being excluded which results the survivorship bias.
Despite an ample amount of evidences against CAPM, it is still being widely used in finance. There is also the controversy exists about how we should interpret the evidence against the CAPM. Some researchers often argue that CAPM should be replaced with multifactor model with different sources of risks. In the following section we will analyze the multifactor model.
7. Multifactor Models
So far we have not talked anything about the cross sectional variation. In many studies we have found that market data alone cannot explain the cross sectional variation in average security returns. In the analysis of CAPM, some variables like, ratio of book-to-market value, price-earning ratio, macroeconomic variables, etc are treated as the fundamental variables. The presence of these variables account for the cross-sectional variation in expected returns. Theoretical arguments also signal that more than one factor are required.
Fama and French (1995), in their study showed that the difference between the return of small stock and big stock portfolio (SMB) and the difference between high and low book-to-market stock portfolio (HML) become useful factor in cross sectional analysis of the equity returns. Chung, Johnson and Schill (2001) found that the SMB and HML become statistically insignificant if higher order co-moments are included in the cross sectional portfolio return analysis. We can infer from here that the SMB and HML can be considered as good proxies for the higher order co-moments. Ferson and Harvey (1999) made a point that many econometric model specifications are rejected because they have the tendency of ignoring conditioning information.
Now we will show one of the very important results of the multifactor model. Let us consider a regression of portfolio on the returns of any set of portfolios from which the entire minimum variance boundary can be generated. We will show that the intercept of this regression will be zero and that factor regression coefficients for any asset will sum to unity. Let the number of the portfolios in the set be K and is the (Kx1) vector of time period t of asset returns. For any value of the constant μ, there exists a combination of portfolio and assets. Let us consider μ be the global minimum variance portfolio and we denote the portfolio as op. Corresponding to op is minimum variance portfolio p which is uncorrelated with the return of op. As long as p and op are efficient portfolios in terms of the minimum variance their returns are the linear combinations of the elements of ,
where and are (Kx1) vectors of portfolio weights. As p and op are minimum variance portfolio their returns are linear combinations of the elements of ,
Then for the K portfolios we have,
By rearranging, we get the following,
Substituting this value into μ returns the following:
Now let us consider a multivariate regression of N assets on K factor portfolios,
where. | https://www.ukessays.com/essays/business/capital-asset-pricing-model.php | CC-MAIN-2019-09 | refinedweb | 4,898 | 50.57 |
Results 1 to 9 of 9
Thread: Good songs to start Ipod?
Good songs to start Ipod?
- Member Since
- Jan 20, 2008
- 149
I just bought a new ipod and I am filling it up with songs. Can anybody recommend me popular or well known songs?
- Member Since
- Feb 23, 2006
- Location
- Barbados
- 143
- Specs:
- PowerBook G4, 1.5Ghz, 768Mb RAM, 80gb hard drive-imac G4-ipod touch 32Gb 3g-Macbook 2.4ghz, 2BG RAM
Anything from Rihanna.
- Member Since
- Dec 03, 2006
- Location
- Irvine, CA
- 9,385
- Specs:
- Black Macbook C2D 2GHz 3GB RAM 250GB HD iPhone 4 iPad 3G
It would probably help if you told us what sort of music you listen to. Wouldn't want to be recommending rap if you're more of a pop or country kind of guy
June 2007
July 2009
- Member Since
- Jul 18, 2007
- Location
- Central California
- 3,185
- Specs:
- 2.16GHz C2D MacBook w/ 2GB RAM & 120GB HD. HTC Droid Incredible.
I love the cotton eye joe song. You should get that one and make a playlist of just that song over and over again.Member Of The Month for December '08.
It's only the internet!
Radiohead - In Rainbows:
And why not fill it with fun and interesting podcasts?
here's a few well known songs of many genres:
(not necessarily my favorites, but pretty well known)
joe walsh . life's been good
curtis mayfield . superfly
derek and the dominos . layla
roxy music . avalon
Robert Johnson . Cross Road Blues
rage against the machine . killing in the name
nina simone . my baby just cares for me
basement jaxx . do your thing
the byrds . eight miles high
supertramp . the logical song
cat stevens . how can i tell you
radiohead . paranoid android
Bauhaus . Bela Lugosi's Dead
faith no more . midlife crisis
black sabbath . war pigs
gnarles barkely . crazy
n.e.r.d. . maybe
bob dylan . maggie's farm
the church . under the milky way
men at work . be good johnny
Til Tuesday . Voices Carry
van halen . you really got me
iron and wine . such great heights
Frou Frou . Let Go
parliament funkadelic . flashlight
john denver . follow me
animotion . obsession
nine inch nails . closer
the replacements . skyway
the white stripes . the hardest button to button
Ten Years After . I'd Love To Change The World
yeah yeah yeahs . maps
the zombies . she's not there
outkast . hey ya
Peter Schilling . Major Tom (Coming Home)
johnny cash . ring of fire
The Cranberries . Zombie
toto . africa
jet . are you gonna be my girl?
the psychedelic furs . love my way
golden earing . radar love
saliva . click click boom
elvis costello . veronica
moody blues . nights in white satin
green day . american idiot
soundgarden . black hole sun
jeff buckley . last goodbye
simon and garfunkel . mrs. robinson
luscious jackson . naked eye
depeche mode . enjoy the silence
death cab for cutie . soul meets body
the alman brothers band . midnight rider
gorillaz . feel good inc
pink floyd . comfortably numb
alice in chains . would?
the flamingos . i only have eyes for you
tatu . all the things she said
the yardbirds . for your love
bill withers . ain't no sunshine
us3 . cantaloop
arcade fire . no cars go
david bowie . heroes
sugar . your favorite thing
r.e.m. . the one i love
janis ian . seventeen
paul simon . 50 ways to leave your lover
doves . there goes the fear
pearl jam . alive
chic . good times
the dream academy . life in a northern town
xtc . dear god
the who . baba o'riley
joy division . dead souls
dave brubeck quartet . take five
unkle (feat thom yorke) . rabbit in your headlights
barry manilow . mandy
the killers . when you were young
otis redding . sittin' on the dock of the bay
beethoven . moonlight sonata
wang chung . dance hall days
björk . it's oh so quiet
the police . every little thing she does is magic
level 42 . something about you
dean martin . ain't that a kick in the head
muddy waters . mannish boy
traffic . glad
blood sweat and tears . spinning wheel
bo diddley . i'm a man
yellowcard . ocean avenue
pet shop boys . west end girls
buffalo springfield . for what it's worth
the hollies . the air that i breathe
tool . sober
the carpenters . close to you
edwin starr . war
the stone roses . love spreads
temptations . i can't get next to you
talk talk - it's my life
stevie wonder . superstition
prince . let's go crazy
fluke . atom bomb
creedence clearwater revival . run through the jungle
steve miller band . fly like an eagle
norah jones . don't know why
extreme . hole hearted
the beach boys . god only knows
jay ferguson . thunder island
david and david . welcome to the boomtown
counting crows . mr jones
coldplay . the scientist
eric clapton . wonderful tonight
neil young . cinnamon girl
elton john . saturday night's alright for fighting
blur . song 2
dido . all you want
chopin . nocturne in b major, op. 9 no. 3
the association . along comes mary
robbie williams . angels
aretha franklin . respect
pretenders . message of love
live . selling the drama
the cure . friday i'm in love
lauryn hill . everything is everything
madness . our house
journey . don't stop believin'
doris day . whatever will be will be (que sera sera)
the jackson 5 . never can say goodbye
grand funk railroad . we're an american band
frank sinatra . night and day
mountain . mississippi queen
george harrison . my sweet lord
beck . devil's haircut
ozzy osborne . i don't know
crosby, stills, nash, and young . suite judy blue eyes
men without hats . the safety dance
metallica . one
boz skaggs . lido shuffle
the eagles . hotel california
dixie chicks . landslide (or the stevie nicks original)
miles davis . so what
cheap trick . i want you to want me (live)
herb albert . rise
queen . don't stop me now
nikka costa . everybody got their something
j giles band . centerfold
gavin degraw . i don't wanna be
foreigner . feels like the first time
wall of voodoo . mexican radio
james brown . it's a man's man's man's world
fiona apple . criminal
ratt . round and round
marvin gaye . let's get it on
bachman-turner overdrive . takin' care of business
iron maiden . run to the hills
foo fighters . learning to fly
ringo starr - it don't come easy
grandmaster flash . the message
duke ellington . echoes of harlem
flock of seagulls . i ran
cyndi lauper . time after time
filter . hey man, nice shot
m . popmuzik
groove armada . i see you baby
pachabel . cannon in d
beastie boys . sabotage
berlin . take my breath away
earth, wind and fire . boogie wonderland
judas priest . you've got another thing coming
etta james . at last
patsy cline . sweet dreams
cream . sunshine of your love
dave matthews . crash
common . come close
roberta flack . killing me softly with his song
billy joel . big shot
massive attack . protection
tom petty . refugee
the cult . she sells sanctuary
nelly . hot in here
def leppard . photograph
jimi hendrix . are you experienced
geto boys . ____ it feels good to be a gangsta
dusty springfield and burt bacharach . the look of love
aerosmith . sweet emotion
chicago . does anybody really know what time it is?
bruce springsteen . born to run
beyonce . crazy in love
carly simon . you're so vain
electric light orchestra . don't bring me down
rolling stones . gimme shelter
kansas . dust in the wind
john coltrane . blue train
cristopher cross . sailing
the doors . five to one
nancy sinatra . these boots are made for walkin'
bryan adams . run to you
diana ross and the supremes . love child
Dead Can Dance . The Ubiquitous Mr. Lovegrove
big country . big country
alan parsons project . eye in the sky
the sex pistols . bodies
gordon lightfoot . sundown
lush . ladykillers
abba . waterloo
paul mccartney and wings . band on the run
inxs . the one thing
billy squier . the stroke
mel torme . do do do
grateful dead . ripple
jefferson airplane . somebody to love
blink 182 . what's my age again
elastica . connection
soft cell . tainted love
normal greenbaum . spirit in the sky
yes . i've seen all good people
the beatles . a day in the life
311 . down
asia . heat of the moment
everclear . santa monica
roy orbison . crying
madonna . borderline
franz ferdinand . take me out
hole . doll parts
neil diamond . girl, you'll be a woman soon
kool and the gang . hollywood swinging
eurithmics . sweet dreams (are made of this)
blonde . heart of glass
duncan sheik . barely breathing
pj harvey . down by the water
deep purple . hush
badfinger . no matter what
cracker . teen angst
quiet riot . cum on feel the noize
fleetwood mac . you make loving fun
nazareth . love hurts
sly and the family stone . stand!
arrested development . tennessee
duran duran . rio
billy bragg . a new england
the mamas and the papas . california dreamin'
ac/dc . dirty deeds done dirt cheap
joe jackson . stepping out
edie brickell and new bohemians . what i am
tears for fears . shout
matthew sweet . girlfriend
el-p . orange 9mm
sarah mclachlan . hold on
the dandy warhols . bohemian like you
jethro tull . aqualung
leadbelly . where did you sleep last night
john lennon . imagine
the clash . should i stay or should i go
howard jones . things can only get better
dire straits . sultans of swing
suzanne vega . tom's diner
ohio players . love rollercoaster
led zeppelin . rock and roll
simple minds . don't you (forget about me)
gerry rafferty . baker street
air . playground love
megadeth . peace sells
zz top . sharp dressed man
danzig . mother
snow patrol . run
jamiroquai . canned heat
heart . barracuda
the verve . bittersweet symphony
rod stewart . every picture tells a story
the kinks . ou really got me
the cars . drive
nick drake . pink moon
naked eyes . always something there to remind me
thomas dolby . she blinded me with science
james taylor . something in the way she moves
nirvana . smells like teen spirit
the guess who . no time
tommy tutone . 867-5309 (jenny)
rare earth . get ready
bobby womack . across 110th street
soul coughing . super bon bon
monster magnet . space lord
manfred mann's earth band . blinded by the light
debby boone . you light up my lifePlease participate in our Member of the Month polls. Every vote counts! And remember to use the user reputation system!
["Dear Homer, I. O. U. one emergency donut. Signed, Homer." - Note by Homer Simpson]
Well, it depends on what kind of person you are and what you listen to.
I'm kind of out there music wise, but some of my favorite artists are Coheed and Cambria, Mindless Self Indulgence, and Frank Sinatra.
The first 2 aren't very old groups, obviously Frank Sinatra is, but he's timeless. Im sure people will still be listening to him 50 years from now.
I also like some of the soundtracks from musicals like RENT and Moulin Rouge.
Just depends on who you are.
- Member Since
- Dec 01, 2007
- Location
- new york city
- 53
- Specs:
- ++24" imac 3ghz / may-08+ ibook G3-500mhz-10g-384m-12", nov-07 +G4 quicksilver 867 mhz/512MB july/08
crue wild side
that is all
you're a little late to this party..
Good software to add metadata to songs?By soonerfan237 in forum Music, Audio, and PodcastingReplies: 2Last Post: 06-17-2011, 11:39 AM
Can't put songs purchased on iPod onto iPodBy Sporanzo in forum iPod Hardware and AccessoriesReplies: 4Last Post: 11-26-2006, 07:06 PM | http://www.mac-forums.com/forums/ipod-hardware-accessories/95191-good-songs-start-ipod.html | CC-MAIN-2016-18 | refinedweb | 1,869 | 82.2 |
# Wireshark 3.x: code analysis under macOS and errors review

Wireshark Foundation released the final stable-version of the popular network traffic analyzer — Wireshark 3.0.0. The new release fixes several bugs, it is now possible to analyze the new protocols, apart from that the driver on Npcap WinPcap is replaced. Here is where quoting of the announcement ends and our note about bugs in the project starts off. The projects authors definitely haven't done their best in fixing bugs before the release.
Let's collect hotfixes right now to give a motive in doing a new release :).
Introduction
------------
[Wireshark](https://www.wireshark.org/) is a well-known tool to capture and analyze network traffic. The program works with the vast majority of known protocols, has intuitive and logical graphical interface, an all-powerful system of filters. Wireshark is cross-platform, works in such OSs, as: Windows, Linux, macOS, Solaris, FreeBSD, NetBSD and many others.
To do the source code analysis, we used [PVS-Studio](https://www.viva64.com/en/pvs-studio/) static code analyzer. To analyze the source code, first we needed to compile the project in an OS. The choice was wide not only due to the cross platform nature of the project, but also because of that of the analyzer. I chose macOS for the analysis. You can also run the analyzer under Windows and Linux.
I'd like to draw special attention to the code quality. Unfortunately, I can't give big points to it. It is a subjective assessment, but since we regularly check plenty of projects, I have a frame of reference. What stands out in this case is a great number of PVS-Studio warnings for a small amount of code. In total, more than 3500 warnings of all levels triggered for this project. It is typical for the projects, which generally don't use static analysis tools, even free ones. Another factor pointing at the project quality is repeated errors detected by the analyzer. I won't cite same-type code examples, whereas some similar errors take place in hundreds of places.
Such inserts also don't give a boost to code quality:
```
/* Input file: packet-acse-template.c */
#line 1 "./asn1/acse/packet-acse-template.c"
```
There are more than 1000 of them in the entire project. Such inserts make it more difficult for the analyzer to match issued warnings with the appropriate files. Well, I think average developers won't get a kick out of maintaining such code.
Typos
-----
**Warning 1**
V641 The size of the allocated memory buffer is not a multiple of the element size. mate\_setup.c 100
```
extern mate_cfg_gog* new_gogcfg(mate_config* mc, gchar* name) {
mate_cfg_gog* cfg = (mate_cfg_gog *)g_malloc(sizeof(mate_cfg_gop));
....
}
```
There are structures of two types: *mate\_cfg\_gog* and *mate\_cfg\_gop,* they are very similar, but not equal. Most likely, in this code fragment functions are mixed up, which is fraught with potential errors in the program when accessing memory by a pointer.
Here are the fragments of mixed-up data structures:
```
typedef struct _mate_cfg_gog {
gchar* name;
GHashTable* items;
guint last_id;
GPtrArray* transforms;
LoAL* keys;
AVPL* extra;
float expiration;
gop_tree_mode_t gop_tree_mode;
gboolean show_times;
....
} mate_cfg_gog;
typedef struct _mate_cfg_gop {
gchar* name;
guint last_id;
GHashTable* items;
GPtrArray* transforms;
gchar* on_pdu;
AVPL* key;
AVPL* start;
AVPL* stop;
AVPL* extra;
float expiration;
float idle_timeout;
float lifetime;
gboolean drop_unassigned;
gop_pdu_tree_t pdu_tree_mode;
gboolean show_times;
....
} mate_cfg_gop;
```
**Warning 2**
V519 The 'HDR\_TCP.dest\_port' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 495, 496. text\_import.c 496
```
void write_current_packet (void)
{
....
HDR_TCP.source_port =isOutbound ? g_htons(hdr_dest_port):g_htons(hdr_src_port);
HDR_TCP.dest_port = isOutbound ? g_htons(hdr_src_port) :g_htons(hdr_dest_port);
HDR_TCP.dest_port = g_htons(hdr_dest_port);
....
}
```
In the last line, the value (that has just been evaluated) of the variable *HDR\_TCP.dest\_port* is rewritten.
Logical errors
--------------
In this section, I'll cite several examples of errors in conditional operators, and all of them will be completely different from each other.
**Warning 1**
V547 Expression 'direction == 0' is always false. packet-adb.c 291
```
#define P2P_DIR_RECV 1
#define P2P_DIR_SENT 0
static void
save_command(....)
{
....
if ( service_data
&& service_data->remote_id == 0
&& direction == P2P_DIR_RECV) {
if (direction == P2P_DIR_SENT) {
service_data->remote_id = arg1; // unreachable code
} else {
service_data->remote_id = arg0;
}
....
}
....
}
```
In the external condition, the *direction* variable is compared with the constant *P2P\_DIR\_RECV.* According to the expressions written with the AND operator, when getting to the inner condition, the value of the variable *direction* will definitely be different from another constant *P2P\_DIR\_SENT*.
**Warning 2**
V590 Consider inspecting the '(type == 0x1) || (type != 0x4)' expression. The expression is excessive or contains a misprint. packet-fcsb3.c 686
```
static int dissect_fc_sbccs (....)
{
....
else if ((type == FC_SBCCS_IU_CMD_HDR) ||
(type != FC_SBCCS_IU_CMD_DATA)) {
....
}
```
The error of this code fragment is that the result of the condition depends only on one expression:
```
(type != FC_SBCCS_IU_CMD_DATA)
```
**Warning 3**
V590 Consider inspecting this expression. The expression is excessive or contains a misprint. snort-config.c 40
```
static char *skipWhiteSpace(char *source, int *accumulated_offset)
{
int offset = 0;
/* Skip any leading whitespace */
while (source[offset] != '\0' && source[offset] == ' ') {
offset++;
}
*accumulated_offset += offset;
return source + offset;
}
```
The result of the conditional operator will depend only on this part of the expression *(source[offset] == ' ')*. The check *(source[offset] != '\0')* is redundant and can be safely removed. It is not the actual error, but redundant code makes code reading and understanding the program more difficult, so it's better to simplify it.
**Warning 4**
V547 Expression 'eras\_pos != NULL' is always true. reedsolomon.c 659
```
int
eras_dec_rs(dtype data[NN], int eras_pos[NN-KK], int no_eras)
{
....
if(eras_pos != NULL){
for(i=0;i
```
Perhaps, we are dealing with a redundant check, probably with a typo, and another thing has to be checked in one of the conditions of the *if* block.
Strange asserts
---------------
**Warning 1**
V547 Expression 'sub\_dissectors != NULL' is always true. capture\_dissectors.c 129
```
void capture_dissector_add_uint(....)
{
....
sub_dissectors = (struct capture_dissector_table*)g_hash_table_lookup(....);
if (sub_dissectors == NULL) {
fprintf(stderr, "OOPS: Subdissector \"%s\" not found ... \n", name);
if (getenv("WIRESHARK_ABORT_ON_DISSECTOR_BUG") != NULL)
abort();
return;
}
g_assert(sub_dissectors != NULL); // <=
....
}
```
The check of the *g\_assert* pointer is redundant here, as the pointer was already checked before that. Perhaps, only *g\_assert* was in this function and a developer forgot to remove it, but maybe a structure field should have been checked here.
**Warning 2**
V547 Expression 'i < count' is always true. packet-netflow.c 10363
```
static int
dissect_v9_v10_template_fields(....)
{
....
count = tmplt_p->field_count[fields_type];
for(i=0; ifields\_p[fields\_type] != NULL) {
DISSECTOR\_ASSERT (i < count); // <=
tmplt\_p->fields\_p[fields\_type][i].type = type;
tmplt\_p->fields\_p[fields\_type][i].length = length;
tmplt\_p->fields\_p[fields\_type][i].pen = pen;
tmplt\_p->fields\_p[fields\_type][i].pen\_str = pen\_str;
if (length != VARIABLE\_LENGTH) {/
tmplt\_p->length += length;
}
}
....
}
....
}
```
It's not quite clear why *assert*, which duplicates the condition from the loop, takes place in the function. The loop counter won't change in the body.
Errors with pointers
--------------------
**Warning 1**
V595 The 'si->conv' pointer was utilized before it was verified against nullptr. Check lines: 2135, 2144. packet-smb2.c 2135
```
static int
dissect_smb2_fid(....)
{
....
g_hash_table_insert(si->conv->fids, sfi, sfi); // <=
si->file = sfi;
if (si->saved) {
si->saved->file = sfi;
si->saved->policy_hnd = policy_hnd;
}
if (si->conv) { // <=
eo_file_info = (.... *)g_hash_table_lookup(si->conv->files,&policy_hnd);
....
}
....
}
```
The pointer *si->conv* gets dereferenced a few lines before its check for null.
**Warning 2**
V774 The 'protos' pointer was used after the memory was released. packet-k12.c 311
```
static gboolean
k12_update_cb(void* r, char** err)
{
gchar** protos;
....
for (i = 0; i < num_protos; i++) {
if ( ! (h->handles[i] = find_dissector(protos[i])) ) {
h->handles[i] = data_handle;
h->handles[i+1] = NULL;
g_strfreev(protos);
*err = g_strdup_printf("Could not find dissector for: '%s'", protos[i]);
return FALSE;
}
}
....
}
```
*protos* is an array of strings. When handling a special case in the program this array is first cleared by the *g\_strfreev* function and then one string of this array is used in the error message. Most likely, these lines should be interchanged:
```
*err = g_strdup_printf("Could not find dissector for: '%s'", protos[i]);
g_strfreev(protos);
```
Memory leaks
------------
V773 The 'ptmpstr' pointer was assigned values twice without releasing the memory. A memory leak is possible. idl2wrs.c 2436
```
static void parsetypedefunion(int pass)
{
char tmpstr[BASE_BUFFER_SIZE], *ptmpstr;
....
while(num_pointers--){
g_snprintf(tmpstr, BASE_BUFFER_SIZE, "%s_%s", ptmpstr, "unique");
FPRINTF(eth_code, "static int\n");
FPRINTF(eth_code, "....", tmpstr);
FPRINTF(eth_code, "{\n");
FPRINTF(eth_code, " ....", ptmpstr, ti->str);
FPRINTF(eth_code, " return offset;\n");
FPRINTF(eth_code, "}\n");
FPRINTF(eth_code, "\n");
ptmpstr=g_strdup(tmpstr);
}
....
}
```
After the [*g\_strdup*](https://developer.gnome.org/glib/stable/glib-String-Utility-Functions.html#g-strdup) function we need to call the [*g\_free*](https://developer.gnome.org/glib/stable/glib-Memory-Allocation.html#g-free) function at some point. It is not done in the given code snippet and a new part of memory is allocated in the loop on each iteration. Here come multiple memory leaks.
Some other warnings for similar code fragments:
* V773 The 'ptmpstr' pointer was assigned values twice without releasing the memory. A memory leak is possible. idl2wrs.c 2447
* V773 The 'ptmpstr' pointer was assigned values twice without releasing the memory. A memory leak is possible. idl2wrs.c 2713
* V773 The 'ptmpstr' pointer was assigned values twice without releasing the memory. A memory leak is possible. idl2wrs.c 2728
* V773 The 'ptmpstr' pointer was assigned values twice without releasing the memory. A memory leak is possible. idl2wrs.c 2732
* V773 The 'ptmpstr' pointer was assigned values twice without releasing the memory. A memory leak is possible. idl2wrs.c 2745
Unfortunately, in the code there are many other similar cases, where memory gets released.
Miscellaneous
-------------
**Warning 1**
V535 The variable 'i' is being used for this loop and for the outer loop. Check lines: 7716, 7798. packet-opa-mad.c 7798
```
/* Parse GetVFInfo MAD from the Performance Admin class. */
static gint parse_GetVFInfo(....)
{
....
for (i = 0; i < records; i++) { // <= line 7716
....
for (i = 0; i < PM_UTIL_BUCKETS; i++) { // <= line 7748
GetVFInfo_Util_Stats_Bucket_item = proto_tree_add_item(....);
proto_item_set_text(....);
local_offset += 4;
}
....
for (i = 0; i < PM_ERR_BUCKETS; i++) { // <= line 7798
GetVFInfo_Error_Stats_Bucket_item = proto_tree_add_item(....);
proto_item_set_text(....);
local_offset += 4;
....
}
....
}
....
}
```
In a very long function developers boldly change the value of the loop counter, even doing it a few times. We cannot say for sure if it's an error or not, however, there are about 10 such loops in the project.
**Warning 2**
V763 Parameter 'item' is always rewritten in function body before being used. packet-cdma2k.c 1324
```
static void cdma2k_message_ORDER_IND(proto_item *item, ....)
{
guint16 addRecLen = -1, ordq = -1, rejectedtype = -1;
guint16 l_offset = -1, rsc_mode_ind = -1, ordertype = -1;
proto_tree *subtree = NULL, *subtree1 = NULL;
item = proto_tree_add_item(tree,hf_cdma2k_OrderIndMsg, tvb, ....); // <=
subtree = proto_item_add_subtree(item, ett_cdma2k_subtree1);
....
}
```
The *item* pointer, taken by the function, is immediately changed with another value. It is very suspicious. Moreover, the code contains several dozen of such places, so it's hard to decide whether it's an error or not. I came across similar code in another large project, such code was correct there, no one simply dared to change the function's interface.
**Warning 3**
V762 It is possible a virtual function was overridden incorrectly. See third argument of function 'headerData' in derived class 'PacketListModel' and base class 'QAbstractItemModel'. packet\_list\_model.h 48
```
QVariant
QAbstractItemModel::headerData(int section, Qt::Orientation orientation,
int role = Qt::DisplayRole) const // <=
class PacketListModel : public QAbstractItemModel
{
Q_OBJECT
public:
....
QVariant headerData(int section, Qt::Orientation orientation,
int role = Qt::DisplayRole | Qt::ToolTipRole) const; // <=
....
};
```
The analyzer has detected the invalid overloading of the *headerData* function. Functions have different default values of the *role* parameter. This can cause the wrong behavior, not the one expected by a programmer.
**Warning 4**
V610 Undefined behavior. Check the shift operator '>>'. The right operand ('bitshift' = [0..64]) is greater than or equal to the length in bits of the promoted left operand. proto.c 10941
```
static gboolean
proto_item_add_bitmask_tree(...., const int len, ....)
{
....
if (len < 0 || len > 8)
g_assert_not_reached();
bitshift = (8 - (guint)len)*8;
available_bits = G_GUINT64_CONSTANT(0xFFFFFFFFFFFFFFFF) >> bitshift;
....
}
```
A 64-bit shift will result in undefined behavior according to language standard.
Most likely, the correct code should be like this:
```
if (bitshift == 64)
available_bits = 0;
else
available_bits = G_GUINT64_CONSTANT(0xFFFFFFFFFFFFFFFF) >> bitshift;
```
Conclusion
----------
It might seem that this review shows few errors, but in the full report the considered cases repeat dozens and hundreds of times. In addition, PVS-Studio warnings reviews are of demonstrative nature. They represent contribution to the quality of open source projects, but one-time checks are the most inefficient in terms of static analysis methodology.
You can get and analyze the full report yourself. To do this, you just need to [download](https://www.viva64.com/en/pvs-studio-download/) and run the PVS-Studio analyzer. | https://habr.com/ru/post/447156/ | null | null | 2,114 | 51.04 |
Hey, Iv decided to tackle a bit off C++ before I start it in college in fall and Im sort of stuck on a problem. The book im reading is Accelerated C++ and its exercise 3.2.
Basically the question asks to get values from the user and take for at a time and give the sum of it...I have taken a few liberties since the actual question is very vague. My idea is that the user enters enough numbers to be divisible by 4 after the vector gets sorted. So you basically sort the vector and take the first 4 and the the second 4 and so on until you reach the end of the vector. I am not going to divide by 4 yet just sum the values right now.
My question is,
a) how can I terminate the second loop
b) will my variable z go out of scope before it goes through the whole vector.
#include <algorithm> #include <iomanip> #include <ios> #include <iostream> #include <string> #include <vector> using std::cin; using std::sort; using std::cout; using std::streamsize; using std::endl; using std::string; using std::setprecision; using std::vector; int main() { // ask for and read the value cout << "Please enter an integer value: "; vector<int> value; double x; while (cin >> x) value.push_back(x); // check that a value has been entered typedef vector<double>::size_type vec_sz; vec_sz size = value.size(); if (size == 0) { cout << endl << "You must enter your grades. " "Please try again." << endl; return 1; } // sort the grades sort(value.begin(), value.end()); // compute the values taking the largest 4 and so on int y = 0; int i, z; vector<int> sum; while(z != size){ while(y != 4){ y++; sum[i] += value[z]; z++; } i++; } // Output and create a vector for sum cout << sum[i] << endl; return 0; } | https://www.daniweb.com/programming/software-development/threads/123210/accelerated-c-question | CC-MAIN-2018-05 | refinedweb | 306 | 69.41 |
Before starting the article, I’m obliged to mention that web scraping is a grey area legally and ethicaly in lots of circumstances. Please consider the positive and negative effects of what you scrape before doing so!
Warning over. Web scraping is a hugely powerful tool that, when done properly, can give you access to huge, clean data sources to power your analysis. The applications are just about endless for anyone interested in data. As a professional analyst, you can scrape fixtures and line-up data from around the world every day to plan scouting assignments or alert you to youth players breaking through. As an amateur analyst, it is quite likely to be your only source of data for analysis.
This tutorial is just an introduction for Python scraping. It will take you through the basic process of loading a page, locating information and retrieving it. Combine the knowledge on this page with for loops to cycle through a site and HTML knowledge to understand a web page, and you’ll be armed with just about any data you can find.
Let’s fire up our modules & get started. We’ll need requests (to access and process web pages with Python) and beautifulsoup (to make sense of the code that makes up the pages) so make sure you have these installed.
import requests from bs4 import BeautifulSoup import pandas as pd
Our process for extracting data is going to go something like this:
- Load the webpage containing the data.
- Locate the data within the page and extract it.
- Organise the data into a dataframe
For this example, we are going to take the player names and values for the most expensive players in a particular year. You can find the page that we’ll use here.
The following sections will run through each of these steps individually.
Load the webpage containing the data
The very first thing that we are going to do is create a variable called ‘headers’ and assign it a string that will tell the website that we are a browser, and not a scraping tool. In short, we’ll be blocked if we are thought to be scraping!
Next, we have three lines. The first one assigns the address that we want to scrape to a variable called ‘page’.
The second uses the requests library to grab the code of the page and assign it to ‘pageTree’. We use our headers variable here to inform the site that we are pretending to be a human browser.
Finally, the BeautifulSoup module parses the website code into html. We will then be able to search through this for the data that we want to extract. This is saved to ‘pageSoup’, and you can find all three lines here:
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36'} page = "" pageTree = requests.get(page, headers=headers) pageSoup = BeautifulSoup(pageTree.content, 'html.parser')
Locate the data within a page & extract it
To fully appreciate what we are doing here, you probably need a basic grasp of HTML – the language that structures a webpage. As simply as I can put it for this article, HTML is made up of elements, like a paragraph or a link, that tell the browser what to render. For scraping, we will use this information to tell our program what information to take.
Take another look at the page we are scraping. We want two things – the player name and the transfer value.
The player name is a link. This is denoted as an ‘a’ tag in HTML, so we will use the ‘find_all’ function to look for all of the a tags in the page. However, there are obviously lots of links! Fortunately, we can use the class given to the players’ names specifically on this page to only take these ones – the class name is passed to the ‘find_all’ function as a dictionary.
This function will return a list with all elements that match our criteria.
If you’re curious, classes are usually used to apply styles (such as colour or border) to elements in HTML.
The code to extract the players names is here:
Players = pageSoup.find_all("a", {"class": "spielprofil_tooltip"}) #Let's look at the first name in the Players list. Players[0].text
'Luís Figo'
Looks like that works! Now let’s take the values.
As you can see on the page, the values are not a link, so we need to find a new feature to identify them by.
They are in a table cell, denoted by ‘td’ in HTML, so let’s look for that. The class to highlight these cells specifically is ‘rechts hauptlink’, as you’ll see below.
Let’s assign this to Values and check Figo’s transfer value:
Values = pageSoup.find_all("td", {"class": "rechts hauptlink"}) Values[0].text
'£54.00m'
That’s a lot of money! Even in today’s market! But according to the page, our data is correct. Now all we need to do is process the data into a dataframe for further analysis or to save for use elsewhere.
Organise the data into a dataframe
PlayersList = [] ValuesList = [] for i in range(0,25): PlayersList.append(Players[i].text) ValuesList.append(Values[i].text) df = pd.DataFrame({"Players":PlayersList,"Values":ValuesList}) df.head()
And now we have a dataframe with our scraped data, pretty much ready for analysis!
Summary
This article has gone through the absolute basics of scraping, we can now load a page, identify elements that we want to scrape and then process them into a dataframe.
There is more that we need to do to scrape efficiently though. Firstly, we can apply a for loop to the whole program above, changing the initial webpage name slightly to scrape the next year – I’ll let you figure out how!
You will also need to understand more about HTML, particularly class and ID selectors, to get the most out of scraping. Regardless, if you’ve followed along and understand what we’ve achieved and how, then you’re in a good place to apply this to other pages.
The techniques in this article gave the data for our joyplots tutorial, why not take a read of that next? | https://fcpython.com/blog/introduction-scraping-data-transfermarkt | CC-MAIN-2022-33 | refinedweb | 1,049 | 72.26 |
Join the 85,000 open source advocates who receive our giveaway alerts and article roundups.
SSH into your Christmas tree with Raspberry Pi
SSH into your Christmas tree with Raspberry Pi
Get the newsletter
Earlier this year, I wrote an article about how to use the Raspberry Pi to create a music light show using an open source project called LightShowPi. My little Christmas tree light show was popular enough that I was invited to demo it for a group of middle school kids in North Carolina.
Which brings me to this year's Christmas season. I was flirting with the idea of taking the light show outdoors, but due to the business of life I just ended up not having enough time (or motivation) to make that leap. I did, however, put a bit of time into improving last year's setup.
Instead of five channels running 500 lights, I have eight channels running 800 lights. I also modified the LightShowPi's configuration to customize the lights a bit more. I am running all songs in four channels and mirroring the other four channels. This makes the lights a little more fun with fewer blackouts from unused channels during certain songs.
My configuration is now headless as well (i.e., no monitor), and is running on a Raspberry Pi 2. The headless configuration is nice, as I don't need room under the tree for a monitor anymore. The Raspberry Pi 2 (instead of the B+) doesn't make much of a difference, as the performance of LightShowPi is great in either version. With the WiFi dongle, I just SSH into the Pi from my phone and start/stop the lights and music anytime I want.
Finally, this year I also put a bit more thought into the the song selection, trying to add a bit more variety and fun songs to the playlist.
Here's a sample of what the tree is looking like this year:
After getting the Christmas tree lights up and "dancing," one problem immediately became apparent: My wife and kids wanted a simple way to toggle between "solid" and "dancing" mode.
So, I went through my Raspberry Pi CanaKit and decided to try to use the push buttons that came with it to solve this interesting problem.
To help me set up the buttons, I used this nice video tutorial. Here's the result.
Then I had to write up some code to make this work. At first I based my code on the YouTube video I linked to above, but I didn't like that it had so many loops. So, I did some more reading and was able to come up with a better piece of code. Granted, I'm sure the code can be improved a lot, but it's good enough for a small personal project/proof of concept.
#!/usr/bin/env python
import RPi.GPIO as gpio
import os
import time
gpio.setmode(gpio.BCM)
gpio.setup(4, gpio.IN, pull_up_down=gpio.PUD_UP)
gpio.setup(17, gpio.IN, pull_up_down=gpio.PUD_UP)
lights = 0
while True:
b1 = gpio.input(4)
b2 = gpio.input(17)
#button 1 (solid lights)
if (b1 == False):
if lights == 0:
os.system("export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi; sudo python /home/pi/lightshowpi/py/hardware_controller.py --state=on")
lights = 1
elif lights == 1:
os.system("export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi; sudo python /home/pi/lightshowpi/py/hardware_controller.py --state=off")
lights = 0
#button 2 (dancing mode)
if (b2 == False):
if lights == 0:
os.system("export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi; sudo lightshowpi/bin/start_music_and_lights")
lights = 1
elif lights == 1:
os.system("export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi; sudo lightshowpi/bin/stop_music_and_lights")
lights = 0
# trying not to waste cycles on the pi
time.sleep(0.2)
The code above basically uses the python RPi python library to interact with the two GPIO pins I use for my buttons (pins 4 and 17), and if a button is pressed and the lights are already off, then turn it on, and vice versa.
Finally—and this took me a while to figure out—I had to modify the
$SYNCHRONIZED_LIGHTS_HOME/bin/stop_music_and_lights script that comes with LightShowPi because it had a
sudo killall python on it that would kill my Python script.
So, I modified that line to:
sudo kill $(ps aux | grep 'synchronized_lights.py' | awk '{print $2}')
That's it! Here's a look at the end result:
9 Comments
I love this article and I want to try to do what you did. I have a Raspberry Pi Canakit and thanks to your wonderful documentation I have a chance to do just that. :)
Great Don. Any questions, shoot me a tweet.
Ok. So is there a way to SSH from your phone to make the lights solid? Now I'm just learning about SSH so I'm not sure if my question is good or makes sense.
Yes. At least using LightShowPi, you will need a SSR board (like the Sainsmart) and you can ssh into the Pi, and run the command: export SYNCHRONIZED_LIGHTS_HOME=/home/pi/lightshowpi && sudo python /home/pi/lightshowpi/py/hardware_controller.py --state=on
You can read my previous article on how to get started.
Great work and documentation on this Anderson. The switching from lights solid to dancing is something I haven't tried yet. I like the way yours is set up. Maybe I will have the time to make it ready for next year. Won't have time this year.
forgive the noobness, but are just turning the lights on and off with the relay via the rpi?
not sure I follow the question, but I am using lightshowpi to do the whole dancing lights thing, but within lightshowpi there is a way to turn the lights on and off... so I added that. but one could hook up the Christmas lights to a relay and they raspberry pi and just write some plain python code to do the same. Does that help?
Hi, Anderson. It rocks! Great article!!! Your name is typically portuguese. Are you brazilian or portuguese?
Gerson, isso. Brasileiro ;-) | https://opensource.com/life/15/12/ssh-your-christmas-tree-raspberry-pi?utm_content=buffer32a11&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer | CC-MAIN-2019-04 | refinedweb | 1,017 | 63.8 |
The statements introduced in this chapter will involve tests or conditions. More syntax for conditions will be introduced later, but for now consider simple arithmetic comparisons that directly translate from math into Python. Try each line separately in the Shell
2 < 5 3 > 7 x = 11 x > 10 2 * x < x type(True)
You see that conditions are either True or False (with no quotes!). These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests.
Run this example program, suitcase.py. Try it at least twice, with inputs: 30 and then 55. As you an see, you get an extra result, depending on the input. The main code is:
weight = float(input("How many pounds does your suitcase weigh? ")) if weight > 50: print("There is a $25 charge for luggage that heavy.") print("Thank you for your business.")
The middle two line are an if statement. It reads pretty much like English. If it is true that the weight is greater than 50, then print the statement about an extra charge. If it is not true that the weight is greater than 50, then don’t do the indented part: skip printing the extra luggage charge. In any event, when you have finished with the if statement (whether it actually does anything or not), go on to the next statement that is not indented under the if. In this case that is the statement printing “Thank you”.
The general Python syntax for a simple if statement is
if condition :indentedStatementBlock
If the condition is true, then do the indented statements. If the condition is not true, then skip the indented statements.
Another fragment as an example:
if balance < 0: transfer = -balance # transfer enough from the backup account: backupAccount = backupAccount - transfer balance = balance + transfer
As with other kinds of statements with a heading and an indented block, the block can have more than one statement. The assumption in the example above is that if an account goes negative, it is brought back to 0 by transferring money from a backup account in several steps.
In the examples above the choice is between doing something (if the condition is True) or nothing (if the condition is False). Often there is a choice of two possibilities, only one of which will be done, depending on the truth of a condition.
Run the example program, clothes.py. Try it at least twice, with inputs 50 and then 80. As you can see, you get different results, depending on the input. The main code of clothes.py is:
temperature = float(input('What is the temperature? ')) if temperature > 70: print('Wear shorts.') else: print('Wear long pants.') print('Get some exercise outside.')
The middle four lines are an if-else statement. Again it is close to English, though you might say “otherwise” instead of “else” (but else is shorter!). There are two indented blocks: One, like in the simple if statement, comes right after the if heading and is executed when the condition in the if heading is true. In the if-else form this is followed by an else: line, followed by another indented block that is only executed when the original condition is false. In an if-else statement exactly one of two possible indented blocks is executed.
A line is also shown outdented next, about getting exercise. Since it is outdented, it is not a part of the if-else statement: It is always executed in the normal forward flow of statements, after the if-else statement (whichever block is selected).
The general Python if-else syntax is
if condition :indentedStatementBlockForTrueConditionelse:indentedStatementBlockForFalseCondition
These statement blocks can have any number of statements, and can include about any kind of statement.
All the usual arithmetic comparisons may be made, but many do not use standard mathematical symbolism, mostly for lack of proper keys on a standard keyboard.
There should not be space between the two-symbol Python substitutes.
Notice that the obvious choice for equals, a single equal sign, is not used to check for equality. An annoying second equal sign is required. This is because the single equal sign is already used for assignment in Python, so it is not available for tests.
Warning
It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment!
Tests for equality do not make an assignment, and they do not require a variable on the left. Any expressions can be tested for equality or inequality (!=). They do not need to be numbers! Predict the results and try each line in the Shell:
x = 5 x x == 5 x == 6 x x != 6 x = 6 6 == x 6 != x 'hi' == 'h' + 'i' 'HI' != 'hi' [1, 2] != [2, 1]
An equality check does not make an assignment. Strings are case sensitive. Order matters in a list.
Try in the Shell:
'a' > 5
When the comparison does not make sense, an Exception is caused. [1]
Following up on the discussion of the inexactness of float arithmetic in String Formats for Float Precision, confirm that Python does not consider .1 + .2 to be equal to .3: Write a simple condition into the Shell to test.
Here is another example: Pay with Overtime. Given a person’s work hours for the week and regular hourly wage, calculate the total pay for the week, taking into account overtime. Hours worked over 40 are overtime, paid at 1.5 times the normal rate. This is a natural place for a function enclosing the calculation.
Read the setup for the function:
def calcWeeklyWages(totalHours, hourlyWage): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40. '''
The problem clearly indicates two cases: when no more than 40 hours are worked or when more than 40 hours are worked. In case more than 40 hours are worked, it is convenient to introduce a variable overtimeHours. You are encouraged to think about a solution before going on and examining mine.
You can try running my complete example program, wages.py, also shown below. The format operation at the end of the main function uses the floating point format (String Formats for Float Precision) to show two decimal places for the cents in the answer:
def calcWeeklyWages(totalHours, hourlyWage): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40. ''' if totalHours <= 40: totalWages = hourlyWage*totalHours else: overtime = totalHours - 40 totalWages = hourlyWage*40 + (1.5*hourlyWage)*overtime return totalWages def main(): hours = float(input('Enter hours worked: ')) wage = float(input('Enter dollars paid per hour: ')) total = calcWeeklyWages(hours, wage) print('Wages for {hours} hours at ${wage:.2f} per hour are ${total:.2f}.' .format(**locals())) main()
Here the input was intended to be numeric, but it could be decimal so the conversion from string was via float, not int.
Below is an equivalent alternative version of the body of calcWeeklyWages, used in wages1.py. It uses just one general calculation formula and sets the parameters for the formula in the if statement. There are generally a number of ways you might solve the same problem!
if totalHours <= 40: regularHours = totalHours overtime = 0 else: overtime = totalHours - 40 regularHours = 40 return hourlyWage*regularHours + (1.5*hourlyWage)*overtime
Write a program, graduate.py, that prompts students for how many credits they have. Print whether of not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)
Write a program headstails.py. It should include a function flip(), that simulates a single flip of a coin: It randomly prints either Heads or Tails. Accomplish this by choosing 0 or 1 arbitrarily with random.randrange(2), and use an if-else statement to print Heads when the result is 0, and Tails otherwise.
In your main program have a simple repeat loop that calls flip() 10 times to test it, so you generate a random sequence of 10 Heads and Tails.
Save the example program jumpFuncStub.py as jumpFunc.py, and complete the definitions of functions jump and main as described in the function documentation strings in the program. In the jump function definition use an if-else statement (hint [2]). In the main function definition use a for-each loop, the range function, and the jump function.
The jump function is introduced for use in Strange Sequence Exercise, and others after that.
Often you want to distinguish between more than two distinct cases, but conditions only have two possible results, True or False, so the only direct choice is between two options. As anyone who has played “20 Questions” knows, you can distinguish more cases by further questions. If there are more than two choices, a single test may only reduce the possibilities, but further tests can reduce the possibilities further and further. Since most any kind of statement can be placed in an indented statement block, one choice is a further if test for one grade at a time, and resolve all the remaining possibilities inside the next else clause:
def letterGrade(score): if score >= 90: letter = 'A' else: # grade must be B, C, D or F if score >= 80: letter = 'B' else: # grade must be C, D or F if score >= 70: letter = 'C' else: # grade must D or F if score >= 60: letter = 'D' else: letter = 'F' return letter
This repeatedly increasing indentation with an if statement as the else block can be annoying and distracting. A preferred alternative in this situation, that avoids all this indentation, is to combine each else and if block into an elif block:
def letterGrade(score): if score >= 90: letter = 'A' elif score >= 80: letter = 'B' elif score >= 70: letter = 'C' elif score >= 60: letter = 'D' else: letter = 'F' return letter
The most elaborate syntax for an if-elif-else statement is indicated in general below:
if condition1 :indentedStatementBlockForTrueCondition1elif condition2 :indentedStatementBlockForFirstTrueCondition2elif condition3 :indentedStatementBlockForFirstTrueCondition3elif condition4 :indentedStatementBlockForFirstTrueCondition4else:indentedStatementBlockForEachConditionFalse
The if, each elif, and the final else line are all aligned. There can be any number of elif lines, each followed by an indented block. (Three happen to be illustrated above.) With this construction exactly one of the indented blocks is executed. It is the one corresponding to the first True condition, or, if all conditions are False, it is the block after the final else line.
Be careful of the strange Python contraction. It is elif, not elseif. A program testing the letterGrade function is in example program grade1.py.
See Grade Exercise.
A final alternative for if statements: if-elif-.... with no else. This would mean changing the syntax for if-elif-else above so the final else: and the block after it would be omitted. It is similar to the basic if statement without an else, in that it is possible for no indented block to be executed. This happens if none of the conditions in the tests are true.
With an else included, exactly one of the indented blocks is executed. Without an else, at most one of the indented blocks is executed.
if weight > 120: print('Sorry, we can not take a suitcase that heavy.') elif weight > 50: print('There is a $25 charge for luggage that heavy.')
This if-elif statement only prints a line if there is a problem with the weight of the suitcase.
Write a program sign.py to ask the user for a number. Print out which category the number is in: 'positive', 'negative', or 'zero'.
In Idle, load grade1.py and save it as grade2.py Modify grade2.py so it has an equivalent version of the letterGrade function that tests in the opposite order, first for F, then D, C, .... Hint: How many tests do you need to do? [3]
Be sure to run your new version and test with different inputs that test all the different paths through the program.
* Modify the wages.py or the wages1.py example to create a program wages2.py1.py easier to adapt than wages.py.
The power of a language like Python comes largely from the variety of ways basic statements can be combined. In particular, for and if statements can be nested inside each other’s indented blocks. For example, suppose you want to print only the positive
numbers from an arbitrary list of numbers in a function with the following heading. Read the pieces for now.
def printAllPositive(numberList): '''Print only the positive numbers in numberList.'''
For example, suppose numberList is [3, -5, 2, -1, 0, 7]. You want to process a list, so that suggests a for-each loop,
for num in numberList:
but a for-each loop runs the same code body for each element of the list, and we only want
print(num)
for some of them. That seems like a major obstacle, but think closer at what needs to happen concretely. As a human, who has eyes of amazing capacity, you are drawn immediately to the actual correct numbers, 3, 2, and 7, but clearly a computer doing this systematically will have to check every number. In fact, there is a consistent action required: Every number must be tested to see if it should be printed. This suggests an if statement, with the condition num > 0. Try loading into Idle and running the example program onlyPositive.py, whose code is shown below. It ends with a line testing the function:
def printAllPositive(numberList): '''Print only the positive numbers in numberList.''' for num in numberList: if num > 0: print(num) printAllPositive([3, -5, 2, -1, 0, 7])
This idea of nesting if statements enormously expands the possibilities with loops. Now different things can be done at different times in loops, as long as there is a consistent test to allow a choice between the alternatives. Shortly, while loops will also be introduced, and you will see if statements nested inside of them, too.
The rest of this section deals with graphical examples.
Run example program bounce1.py. It has a red ball moving and bouncing obliquely off the edges. If you watch several times, you should see that it starts from random locations. Also you can repeat the program from the Shell prompt after you have run the script. For instance, right after running the program, try in the Shell
bounceBall(-3, 1)
The parameters give the amount the shape moves in each animation step. You can try other values in the Shell, preferably with magnitudes less than 10.
For the remainder of the description of this example, read the extracted text pieces.
The animations before this were totally scripted, saying exactly how many moves in which direction, but in this case the direction of motion changes with every bounce. The program has a graphic object shape and the central animation step is
shape.move(dx, dy)
but in this case, dx and dy have to change when the ball gets to a boundary. For instance, imagine the ball getting to the left side as it is moving to the left and up. The bounce obviously alters the horizontal part of the motion, in fact reversing it, but the ball would still continue up. The reversal of the horizontal part of the motion means that the horizontal shift changes direction and therefore its sign:
dx = -dx
but dy does not need to change. This switch does not happen at each animation step, but only when the ball reaches the edge of the window. It happens only some of the time - suggesting an if statement. Still the condition must be determined. Suppose the center of the ball has coordinates (x, y). When x reaches some particular x coordinate, call it xLow, the ball should bounce.
The edge of the window is at coordinate 0, but xLow should not be 0, or the ball would be half way off the screen before bouncing! For the edge of the ball to hit the edge of the screen, the x coordinate of the center must be the length of the radius away, so actually xLow is the radius of the ball.
Animation goes quickly in small steps, so I cheat. I allow the ball to take one (small, quick) step past where it really should go (xLow), and then we reverse it so it comes back to where it belongs. In particular
if x < xLow: dx = -dx
There are similar bounding variables xHigh, yLow and yHigh, all the radius away from the actual edge coordinates, and similar conditions to test for a bounce off each possible edge. Note that whichever edge is hit, one coordinate, either dx or dy, reverses. One way the collection of tests could be written is
if x < xLow: dx = -dx if x > xHigh: dx = -dx if y < yLow: dy = -dy if y > yHigh: dy = -dy
This approach would cause there to be some extra testing: If it is true that x < xLow, then it is impossible for it to be true that x > xHigh, so we do not need both tests together., because it is possible for the ball to reach a corner, and need both dx and dy reversed.
The program also uses several accessor methods for graphics objects that we have not used in examples yet. Various graphics objects, like the circle we are using as the shape, know their center point, and it can be accessed with the getCenter() method. (Actually a clone of the point is returned.) Also each coordinate of a Point can be accessed with the getX() and getY() methods.
This explains the new features in the central function defined for bouncing around in a box, bounceInBox. The animation arbitrarily goes on in a simple repeat loop for 600 steps. (A later example will improve this behavior.))
The program starts the ball from an arbitrary point inside the allowable rectangular bounds. This is encapsulated in a utility function included in the program, getRandomPoint. The getRandomPoint function uses the randrange function from the module random. Note that in parameters for both the functions range and randrange, the end stated is past the last value actually desired:
def getRandomPoint(xLow, xHigh, yLow, yHigh): '''Return a random Point with coordinates in the range specified.''' x = random.randrange(xLow, xHigh+1) y = random.randrange(yLow, yHigh+1) return Point(x, y)
The full program is listed below, repeating bounceInBox and getRandomPoint for completeness. Several parts that may be useful later, or are easiest to follow as a unit, are separated out as functions. Make sure you see how it all hangs together or ask questions!
''' Show a ball bouncing off the sides of the window. ''' from graphics import * import time, random) def getRandomPoint(xLow, xHigh, yLow, yHigh): '''Return a random Point with coordinates in the range specified.''' x = random.randrange(xLow, xHigh+1) y = random.randrange(yLow, yHigh+1) return Point(x, y) def makeDisk(center, radius, win): '''return a red disk that is drawn in win with given center and radius.''' disk = Circle(center, radius) disk.setOutline("red") disk.setFill("red") disk.draw(win) return disk def bounceBall(dx, dy): '''Make a ball bounce around the screen, initially moving by (dx, dy) at each jump.''' win = GraphWin('Ball Bounce', 290, 290) win.yUp() radius = 10 xLow = radius # center is separated from the wall by the radius at a bounce xHigh = win.getWidth() - radius yLow = radius yHigh = win.getHeight() - radius center = getRandomPoint(xLow, xHigh, yLow, yHigh) ball = makeDisk(center, radius, win) bounceInBox(ball, dx, dy, xLow, xHigh, yLow, yHigh) win.close() bounceBall(3, 5)
Write a program short.py with a function printShort with heading:
def printShort(strings): '''Given a list of strings, print the ones with at most three characters. >>> printShort(['a', 'long', one']) a one '''
In your main program, test the function, calling it several times with different lists of strings. Hint: Find the length of each string with the len function.
The function documentation here models a common approach: illustrating the behavior of the function with a Python Shell interaction. This begins with a line starting with >>>. Other exercises and examples will also document behavior in the Shell.
Write a program even1.py with a function printEven with heading:
def printEven(nums): '''Given a list of integers nums, print the even ones. >>> printEven([4, 1, 3, 2, 7]) 4 2 '''
In your main program, test the function, calling it several times with different lists of integers. Hint: A number is even if its remainder, when dividing by 2, is 0.
Write a program even2.py with a function chooseEven with heading:
def chooseEven(nums): '''Given a list of integers, nums, return a list containing only the even ones. >>> chooseEven([4, 1, 3, 2, 7]) [4, 2] '''
In your main program, test the function, calling it several times with different lists of integers and printing the results. Hint: Create a new list, and append the appropriate numbers to it.
* The madlib2.py program has its getKeys function, which first generates a list of each occurrence of a cue in the story format. This gives the cues in order, but likely includes repetitions. The original version of getKeys uses a quick method to remove duplicates, forming a set from the list. There is a disadvantage in the conversion, though: Sets are not ordered, so when you iterate through the resulting set, the order of the cues will likely bear no resemblance to the order they first appeared in the list. That issue motivates this problem:
Copy madlib2.py to madlib2a.py, and add a function with this heading:
def uniqueList(aList): ''' Return a new list that includes the first occurrence of each value in aList, and omits later repeats. The returned list should include the first occurrences of values in aList in their original order. >>> vals = ['cat', 'dog', 'cat', 'bug', 'dog', 'ant', 'dog', 'bug'] >>> uniqueList(vals) ['cat', 'dog', 'bug', 'ant'] '''
A useful Boolean operator is in, checking membership in a sequence:
>>> vals = ['this', 'is', 'it] >>> 'is' in vals True >>> 'was' in vals False
It can also be used with not, as not in, to mean the opposite:
>>> vals = ['this', 'is', 'it] >>> 'is' not in vals False >>> 'was' not in vals True
In general the two versions are:
item in sequenceitem not in sequence
Hint: Process aList in order. Use the new syntax to only append elements to a new list that are not already in the new list.
After perfecting the uniqueList function, replace the last line of getKeys, so it uses uniqueList to remove duplicates in keyList.
Check that your madlib2a.py prompts you for cue values in the order that the cues first appear in the madlib format string.
To be eligible to graduate from Loyola University Chicago, you must have 120 credits and a GPA of at least 2.0. This translates directly into Python as a compound condition:
credits >= 120 and GPA >=2.0
This is true if both credits >= 120 is true and GPA >= 2.0 is true. A short example program using this would be:
credits = float(input('How many units of credit do you have? ')) GPA = float(input('What is your GPA? ')) if credits >= 120 and GPA >=2.0: print('You are eligible to graduate!') else: print('You are not eligible to graduate.')
The new Python syntax is for the operator and:
condition1 and condition2
The compound condition is true if both of the component conditions are true. It is false if at least one of the conditions is false.
See Congress Exercise.
In the last example in the previous section, there was an if-elif statement where both tests had the same block to be done if the condition was true:
if x < xLow: dx = -dx elif x > xHigh: dx = -dx
There is a simpler way to state this in a sentence: If x < xLow or x > xHigh, switch the sign of dx. That translates directly into Python:
if x < xLow or x > xHigh: dx = -dx
The word or makes another compound condition:
condition1 or condition2
is true if at least one of the conditions is true. It is false if both conditions are false. This corresponds to one way the word “or” is used in English. Other times in English “or” is used to mean exactly one alternative is true.
Warning
When translating a problem stated in English using “or”, be careful to determine whether the meaning matches Python’s or.
It is often convenient to encapsulate complicated tests inside a function. Think how to complete the function starting:
def isInside(rect, point): '''Return True if the point is inside the Rectangle rect.''' pt1 = rect.getP1() pt2 = rect.getP2()
Recall that a Rectangle is specified in its constructor by two diagonally oppose Points. This example gives the first use in the tutorials of the Rectangle methods that recover those two corner points, getP1 and getP2. The program calls the points obtained this way pt1 and pt2. The x and y coordinates of pt1, pt2, and point can be recovered with the methods of the Point type, getX() and getY().
Suppose that I introduce variables for the x coordinates of pt1, point, and pt2, calling these x-coordinates end1, val, and end2, respectively. On first try you might decide that the needed mathematical relationship to test is
end1 <= val <= end2
Unfortunately, this is not enough: The only requirement for the two corner points is that they be diagonally opposite, not that the coordinates of the second point are higher than the corresponding coordinates of the first point. It could be that end1 is 200; end2 is 100, and val is 120. In this latter case val is between end1 and end2, but substituting into the expression above
200 <= 120 <= 100
is False. The 100 and 200 need to be reversed in this case. This makes a complicated situation. Also this is an issue which must be revisited for both the x and y coordinates. I introduce an auxiliary function isBetween to deal with one coordinate at a time. It starts:
def isBetween(val, end1, end2): '''Return True if val is between the ends. The ends do not need to be in increasing order.'''
Clearly this is true if the original expression, end1 <= val <= end2, is true. You must also consider the possible case when the order of the ends is reversed: end2 <= val <= end1. How do we combine these two possibilities? The Boolean connectives to consider are and and or. Which applies? You only need one to be true, so or is the proper connective:
A correct but redundant function body would be:
if end1 <= val <= end2 or end2 <= val <= end1: return True else: return False
Check the meaning: if the compound expression is True, return True. If the condition is False, return False - in either case return the same value as the test condition. See that a much simpler and neater version is to just return the value of the condition itself!
return end1 <= val <= end2 or end2 <= val <= end1
Note
In general you should not need an if-else statement to choose between true and false values! Operate directly on the boolean expression.
A side comment on expressions like
end1 <= val <= end2
Other than the two-character operators, this is like standard math syntax, chaining comparisons. In Python any number of comparisons can be chained in this way, closely approximating mathematical notation. Though this is good Python, be aware that if you try other high-level languages like Java and C++, such an expression is gibberish. Another way the expression can be expressed (and which translates directly to other languages) is:
end1 <= val and val <= end2
So much for the auxiliary function isBetween. Back to the isInside function. You can use the isBetween function to check the x coordinates,
isBetween(point.getX(), p1.getX(), p2.getX())
and to check the y coordinates,
isBetween(point.getY(), p1.getY(), p2.getY())
Again the question arises: how do you combine the two tests?
In this case we need the point to be both between the sides and between the top and bottom, so the proper connector is and.
Think how to finish the isInside method. Hint: [4]
Sometimes you want to test the opposite of a condition. As in English you can use the word not. For instance, to test if a Point was not inside Rectangle Rect, you could use the condition
not isInside(rect, point)
In general,
not condition
is True when condition is False, and False when condition is True.
The example program chooseButton1.py, shown below, is a complete program using the isInside function in a simple application, choosing colors. Pardon the length. Do check it out. It will be the starting point for a number of improvements that shorten it and make it more powerful in the next section. First a brief overview:
The program includes the functions isBetween and isInside that have already been discussed. The program creates a number of colored rectangles to use as buttons and also as picture components. Aside from specific data values, the code to create each rectangle is the same, so the action is encapsulated in a function, makeColoredRect. All of this is fine, and will be preserved in later versions.
The present main function is long, though. It has the usual graphics starting code, draws buttons and picture elements, and then has a number of code sections prompting the user to choose a color for a picture element. Each code section has a long if-elif-else test to see which button was clicked, and sets the color of the picture element appropriately.
'''Make a choice of colors via mouse clicks in Rectangles -- A demonstration of Boolean operators and Boolean functions.''' from graphics import * def isBetween(x, end1, end2): '''Return True if x is between the ends or equal to either. makeColoredRect(corner, width, height, color, win): ''' Return a Rectangle drawn in win with the upper left corner and color specified.''' corner2 = corner.clone() corner2.move(width, -height) rect = Rectangle(corner, corner2) rect.setFill(color) rect.draw(win) return rect def main(): win = GraphWin('pick Colors', 400, 400) win.yUp() # right side up coordinates redButton = makeColoredRect(Point(310, 350), 80, 30, 'red', win) yellowButton = makeColoredRect(Point(310, 310), 80, 30, 'yellow', win) blueButton = makeColoredRect(Point(310, 270), 80, 30, 'blue', win) house = makeColoredRect(Point(60, 200), 180, 150, 'gray', win) door = makeColoredRect(Point(90, 150), 40, 100, 'white', win) roof = Polygon(Point(50, 200), Point(250, 200), Point(150, 300)) roof.setFill('black') roof.draw(win) msg = Text(Point(win.getWidth()/2, 375),'Click to choose a house color.') msg.draw(win) pt = win.getMouse() if isInside(pt, redButton): color = 'red' elif isInside(pt, yellowButton): color = 'yellow' elif isInside(pt, blueButton): color = 'blue' else : color = 'white' house.setFill(color) msg.setText('Click to choose a door color.') pt = win.getMouse() if isInside(pt, redButton): color = 'red' elif isInside(pt, yellowButton): color = 'yellow' elif isInside(pt, blueButton): color = 'blue' else : color = 'white' door.setFill(color) win.promptClose(msg) main()
The only further new feature used is in the long return statement in isInside.
return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \ isBetween(point.getY(), pt1.getY(), pt2.getY())
Recall that Python is smart enough to realize that a statement continues to the next line if there is an unmatched pair of parentheses or brackets. Above is another situation with a long statement, but there are no unmatched parentheses on a line. For readability it is best not to make an enormous long line that would run off your screen or paper. Continuing to the next line is recommended. You can make the final character on a line be a backslash (\) to indicate the statement continues on the next line. This is not particularly neat, but it is a rather rare situation. Most statements fit neatly on one line, and the creator of Python decided it was best to make the syntax simple in the most common situation. (Many other languages require a special statement terminator symbol like ‘;’ and pay no attention to newlines). Extra parentheses here would not hurt, so an alternative would be
return (isBetween(point.getX(), pt1.getX(), pt2.getX()) and isBetween(point.getY(), pt1.getY(), pt2.getY()) )
The chooseButton1.py program is long partly because of repeated code. The next section gives another version involving lists.
A person is eligible to be a US Senator who is at least 30 years old and has been a US citizen for at least 9 years. Write an initial version of a program congress.py to obtain age and length of citizenship from the user and print out if a person is eligible to be a Senator or not.
A person is eligible to be a US Representative who is at least 25 years old and has been a US citizen for at least 7 years. Elaborate your program congress.py so it obtains age and length of citizenship and prints out just the one of the following three statements that is accurate:
Here are a few more string methods useful in the next exercises:
s.startswith( pre )
returns True if string s starts with string pre: Both '-123'.startswith('-') and 'downstairs'.startswith('down') are True, but '1 - 2 - 3'.startswith('-') is False.
s.endswith( suffix )
returns True if string s ends with string suffix: Both 'whoever'.endswith('ever') and 'downstairs'.endswith('airs') are True, but '1 - 2 - 3'.endswith('-') is False.
s.replace( sub , replacement , count )
returns a new string with up to the first count occurrences of string sub replaced by replacement. The replacement can be the empty string to delete sub. For example:
s = '-123' t = s.replace('-', '', 1) # t equals '123' t = t.replace('-', '', 1) # t is still equal to '123' u = '.2.3.4.' v = u.replace('.', '', 2) # v equals '23.4.'
In library alphabetizing, if the initial word is an article (“The”, “A”, “An”), then it is ignored when ordering entries. Write a program completing this function, and then testing it:
def startsWithArticle(title): '''Return True if the first word of title is "The", "A" or "An".'''
Be careful, if the title starts with “There”, it does not start with an article. What should you be testing for?
** In the later Safe Number Input Exercise, it will be important to know if a string can be converted to the desired type of number. Explore that here. Save example isNumberStringStub.py as isNumberString.py and complete it. It contains headings and documentation strings for the functions in both parts of this exercise.
A legal whole number string consists entirely of digits. Luckily strings have an isdigit method, which is true when a nonempty string consists entirely of digits, so '2397'.isdigit() returns True, and '23a'.isdigit() returns False, exactly corresponding to the situations when the string represents a whole number!
In both parts be sure to test carefully. Not only confirm that all appropriate strings return True. Also be sure to test that you return False for all sorts of bad strings.
Recognizing an integer string is more involved, since it can start with a minus sign (or not). Hence the isdigit method is not enough by itself. This part is the most straightforward if you have worked on the sections String Indices and String Slices. An alternate approach works if you use the count method from Object Orientation, and some methods from this section.
Complete the function isIntStr.
Complete the function isDecimalStr, which introduces the possibility of a decimal point. The string methods mentioned in the previous part remain useful. | http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/ifstatements.html | CC-MAIN-2014-15 | refinedweb | 5,944 | 64.41 |
Your Account
If you’ve read through most of this book, you’ll notice that it
doesn’t have much of a “Do this, not that” theme. Ruby as a language doesn’t
fit well into that framework, as there are always exceptions to any rule you
can come up with..
However, there are certainly a few things you really shouldn’t do,
unless you know exactly why you are doing them. This appendix is meant to
cover a handful of those scenarios and show you some better alternatives.
I’ve done my best to stick to issues that I’ve been bit by myself, in the
hopes that I can offer some practical advice for problems you might actually
have run into.
A bad practice in programming shouldn’t simply be characterized as
some ill-defined aesthetic imposed upon folks by the “experts.” Instead, we
can often track antipatterns in code down to either flaws in the high-level
design of an object-oriented system, or failed attempts at cleverness in the
underlying feature implementations. These bits of unsavory code produced by
bad habits or the misunderstanding of certain Ruby peculiarities can be a
drag on your whole project, creating substantial technical debt as they
accumulate.
We’ll start with the high-level design issues and then move on to the
common sticking points when implementing tricky Ruby features. Making an
improvement to even a couple of these problem areas will make a major
difference, so even if you already know about most of these pitfalls, you
might find one or two tips that will go a long way..
Ruby’s class variables are one of the easiest ways to break
encapsulation and create headaches for yourself when designing class
hierarchies. To demonstrate the problem, I’ll show an example in which
class variables were tempting but ultimately the wrong
solution.
In my abstract formatting library fatty, I
provide a formatter base class that users must inherit from to make use
of the system. This provides helpers that build up anonymous classes for
certain formats. To get a sense of what this looks like, check out this
example:
class Hello < FattyRBP::Formatter
format :text do
def render
"Hello World"
end
end
format :html do
def render
"<b>Hello World</b>"
end
end
end
puts Hello.render(:text) #=> "Hello World"
puts Hello.render(:html) #=> "<b>Hello World</b>"
Though I’ve omitted most of the actual functionality that
fatty provides, a simple implementation of this system using class
variables might look like this:
module FattyRBP
class Formatter
@@formats = {}
def self.format(name, options={}, &block)
@@formats[name] = Class.new(FattyRBP::Format, &block)
end
def self.render(format, options={})
@@formats[format].new(options).render
end
end
class Format
def initialize(options)
# not important
end
def render
raise NotImplementedError
end
end
end
This code will make the example shown earlier work as advertised.
Now let’s see what happens when we add another subclass into the
mix:
class Goodbye < FattyRBP::Formatter
format :text do
def render
"Goodbye Cruel World!"
end
end
end
puts Goodbye.render(:text) #=> "Goodbye Cruel World!"
At first glance, things seem to be working. But if we dig deeper,
we see two problems:
# Should not have changed
puts Hello.render(:text) #=> "Goodbye Cruel World!"
# Shouldn't exist
puts Goodbye.render(:html) #=> "<b>Hello World</b>"
And here, we see the problem with class variables. If we think of
them as class-level state, we’d be wrong. They are actually
class-hierarchy variables that can have their state modified by any
subclass, whether direct or many levels down the ancestry chain. This
means they’re fairly close to global state in nature, which is usually a
bad thing. So unless you were actually counting on this behavior, an
easy fix is to just dump class variables and use class instance
variables instead:
module FattyRBP
class Formatter
def self.formats
@formats ||= {}
end
def self.format(name, options={}, &block)
formats[name] = Class.new(FattyRBP::Format, &block)
end
def self.render(format, options={})
formats[format].new(options).render
end
end
class Format
def initialize(options)
# not important
end
end
end
Although this prevents direct access to the variable from
instances, it is easy to define accessors at the class level. The
benefit is that each subclass carries its own instance variable, just
like ordinary objects do. With this new code, everything works as
expected:
puts Hello.render(:text) #=> "Hello World"
puts Hello.render(:html) #=> "<b>Hello World</b>"
puts Goodbye.render(:text) #=> "Goodbye Cruel World"
puts Hello.render(:text) #=> "Hello World"
puts Goodbye.render(:html) #=> raises an error
So the moral of the story here is that class-level state should be
stored in class instance variables if you want to allow subclassing.
Reserve class variables for data that needs to be shared across an
entire class hierarchy.
One good practice is to provide alternative constructors for your
classes when there are common configurations that might be generally
useful. One such example is in Prawn, when a user wants to build up a
document via a simplified interface and then immediately render it to
file:
Prawn::Document.generate("hello.pdf") do
text "Hello Prawn!"
end
Implementing this method was very simple, as it simply wraps the
constructor and calls an extra method to render the file
afterward:
module Prawn
class Document
def self.generate(filename,options={},&block)
pdf = Prawn::Document.new(options,&block)
pdf.render_file(filename)
end
end
end
However, some months down the line, a bug report made me realize
that I made a somewhat stupid mistake here. I accidentally prevented
users from being able to write code like this:
class MyDocument < Prawn::Document
def say_hello
text "Hello MyDocument"
end
end
MyDocument.generate("hello.pdf") do
say_hello
end
The problem, of course, is that Prawn::Document.generate hardcodes the
constructor call, which prevents subclasses from ever being instantiated
via generate. The fix is so easy that
it is somewhat embarrassing to share:
Prawn::Document.generate
generate
module Prawn
class Document
def self.generate(filename,options={},&block)
pdf = new(options,&block)
pdf.render_file(filename)
end
end
end
By removing the explicit receiver, we now construct an object
based on whatever self is, rather
than only building up Prawn::Document
objects. This affords us additional flexibility at virtually no cost. In
fact, because hardcoding the name of the current class in your method
definitions is almost always an accident, this applies across the board
as a good habit to get into.
self
Prawn::Document
Although much less severe, the same thing goes for class method
definitions as well. Throughout this book, you will see class methods
defined using def self.my_method
rather than def MyClass.my_method.
The reason for this is much more about maintainability than it is about
style. To illustrate this, let’s do a simple comparison. We start off
with two boring class definitions for the classes A and B:
def self.my_method
def MyClass.my_method
A
B
class A
def self.foo
# ..
end
def self.bar
# ..
end
end
class B
def B.foo
# ...
end
def B.bar
# ...
end
end
These two are functionally equivalent, each defining the class
methods foo and bar on their respective classes. But now,
let’s refactor our code a bit, renaming A to C and
B to D. Observe the work involved in doing
each:
foo
bar
C
D
class C
def self.foo
# ..
end
def self.bar
# ..
end
end
class D
def D.foo
# ...
end
def D.bar
# ...
end
end
To rename A to C, we simply change the name of our class, and
we don’t need to touch the method definitions. But when we change
B to D, each and every method needs to be reworked.
Though this might be OK for an object with one or two methods at the
class level, you can imagine how tedious this could be when that number
gets larger.
So we’ve now found two points against hardcoding class names, and
could probably keep growing the list if we wanted. But for now, let’s
move on to some even higher-level design issues.
Inheritance is very nice when your classes have a clear
hierarchical structure between them. However, it can get in the way when
used inappropriately. Problems begin to crop up when we try to model
cross-cutting concerns using ordinary inheritance. For examples of this,
it’s easy to look directly into core Ruby.
Imagine if Comparable were a
class instead of a module. Then, you would be writing code like
this:
Comparable
class Person < Comparable
def initialize(first_name, last_name)
@first_name = first_name
@last_name = last_name
end
attr_reader :first_name, :last_name
def <=>(other_person)
[last_name, first_name] <=> [other_person.last_name, other_person.first_name]
end
end
However, after seeing this, it becomes clear that it’d be nice to
use a Struct here. If we ignore the
features provided by Comparable here
for a moment, the benefits of a struct to represent this simple data
structure become obvious.
Struct
class Person < Struct.new(:first_name, :last_name)
def full_name
"#{first_name} #{last_name}"
end
end
Because Ruby supports single inheritance only, this example
clearly demonstrates the problems we run into when relying too heavily
on hierarchical structure. A Struct
is certainly not always Comparable.
And it is just plain silly to think of all Comparable objects being Struct objects. The key distinction here is
that a Struct defines what an object
is made up of, whereas Comparable
defines a set of features associated with certain objects. For this
reason, the real Ruby code to accomplish this modeling makes a whole lot
of sense:
class Person < Struct.new(:first_name, :last_name)
include Comparable
def <=>(other_person)
[last_name, first_name] <=> [other_person.last_name, other_person.first_name]
end
def full_name
"#{first_name} #{last_name}"
end
end
Keep in mind that although we are constrained to exactly one
superclass, we can include as many modules as we’d like. For this
reason, modules are often used to implement features that are completely
orthogonal to the underlying class definition that they are mixed into.
Taking an example from the Ruby API documentation, we see Forwardable being used to
very quickly implement a simple Queue
structure by doing little more than delegating to an underlying Array:
Forwardable
Queue
Array
require "forwardable"
class Queue
extend Forwardable
def initialize
@q = [ ]
end
def_delegator :@q, :push, :enq
def_delegator :@q, :shift, :deq
def_delegators :@q, :clear, :first, :push, :shift, :size
end
Although Forwardable would make
no sense anywhere in a class hierarchy, it accomplishes its task
beautifully here. If we were constrained to a purely inheritance-based
model, such cleverness would not be so easy to pull off.
The key thing to remember here is not that you should avoid
inheritance at all costs, by any means. Instead, you should simply
remember not to go out of your way to construct an artificial
hierarchical structure to represent cross-cutting or orthogonal
concerns. It’s important to remember that Ruby’s core is not special or
magical in its abundant use of mixins, but instead, is representative of
a very pragmatic and powerful object model. You can and should apply
this technique within your own designs, whenever it makes sense to do
so.
Ruby lets you do all sorts of clever, fancy tricks. This cleverness
is a big part of what makes Ruby so elegant, but it also can be downright
dangerous in the wrong hands. To illustrate this, we’ll look at the kind
of trouble you can get in if you aren’t careful.
Throughout this book, we’ve dynamically evaluated code blocks all
over the place. However, what you have not seen much of is the use of
eval(), class_eval(), or even instance_eval() with a string. Some might
wonder why this is, because eval()
can be so useful! For example, imagine that you are exposing a way for
users to filter through some data. You would like to be able to support
an interface like this:
eval()
class_eval()
instance_eval()
user1 = User.new("Gregory Brown", balance: 2500)
user2 = User.new("Arthur Brown", balance: 3300)
user3 = User.new("Steven Brown", balance: 3200)
f = Filter.new([user1, user2, user3])
f.search("balance > 3000") #=> [user2, user3]
Armed with instance_eval, this
task is so easy that you barely bat an eye as you type out the following
code:
instance_eval
class User
def initialize(name, options)
@name = name
@balance = options[:balance]
end
attr_reader :name, :balance
end
class Filter
def initialize(enum)
@collection = enum
end
def search(query)
@collection.select { |e| e.instance_eval(query) }
end
end
Running the earlier example, you see that this code works great,
exactly as expected. But unfortunately, trouble strikes when you see
queries like this:
>> f.search("@balance = 0")
=> [#<User:0x40caa4 @name="Gregory Brown", @balance=0>,
#<User:0x409138 @name="Arthur Brown", @balance=0>,
#<User:0x402874 @name="Steven Brown", @balance=0>]
Or, perhaps even scarier:
>> f.search("system('touch hacked')")
=> [#<User:0x40caa4 @name="Gregory Brown", ...]
>> File.exist?('hacked')
=> true
Because the ability for user-generated strings to execute
arbitrary system commands or damage the internals of an object aren’t
exactly appealing, you code up a regex filter to protect against
this:
def search(query)
raise "Invalid query" unless query =~ /^(\w+) ([><!]=?|==) (\d+)$/
@collection.select { |e| e.instance_eval(query) }
end
This protects against the two issues we saw before, which is
great:
>> f.search("system('touch hacked')")
RuntimeError: Invalid query
from (irb):33:in `search'
from (irb):38
from /Users/sandal/lib/ruby19_1/bin/irb:12:in `<main>'
>> f.search("@balance = 0")
RuntimeError: Invalid query
from (irb):33:in `search'
from (irb):39
from /Users/sandal/lib/ruby19_1/bin/irb:12:in `<main>'
But if you weren’t paying very close attention, you would have
missed that we got our anchors wrong. That means there’s still a hole to
be exploited here:
>> f.search("balance == 0\nsystem('touch hacked_again')")
=> [#<User:0x40caa4 @name="Gregory Brown", @balance=0 ...]
>> File.exist?('hacked_again')
=> true
Because our regex checked the first line and not the whole string,
we were able to sneak by the validation. Arguably, if you’re very
careful, you could come up with the right pattern and be reasonably
safe. But as you are already validating the syntax, why play with fire?
We can rewrite this code to accomplish the same goals with none of the
associated risks:
def search(query)
data = query.match(/^(?<attr>\w+) (?<op>[><!]=?|==) (?<val>\d+)$/)
@collection.select do |e|
attr = e.public_send(data[:attr])
attr.public_send(data[:op], Integer(data[:val]))
end
end
Here, we don’t expose any of the object’s internals, preserving
encapsulation. Because we parse out the individual components of the
statement and use public_send to pass
the messages on to our objects, we have completely eliminated the
possibility of arbitrary code execution. All in all, this code is much
more secure and easier to debug. As it turns out, this code will
actually perform considerably better as well.
public_send
Every time you use eval(string), Ruby needs to fire up its parser
and tree walker to execute the code you’ve embedded in your string. This
means that in cases in which you just need to process a few values and
then do something with them, using a targeted regular expression is
often a much better option, as it greatly reduces the amount of work the
interpreter needs to do.
eval(string)
For virtually every situation in which you might turn to a raw
string eval(), you can work around it
using the tools Ruby provides. These include all sorts of methods for
getting at whatever you need, including instance_variable_get, instance_variable_set, const_get, const_set, public_send, send, define_method, method(), and even Class.new/Module.new. These tools allow you to
dynamically manipulate Ruby code without evaluating strings directly.
For more details, you’ll definitely want to read Chapter 3, Mastering the Dynamic Toolkit.
instance_variable_get
instance_variable_set
const_get
const_set
send
define_method
method()
Class.new
Module.new
Ruby provides a lot of different ways to handle exceptions. They
run the gamut all the way from capturing the full stack trace to
completely ignoring raised errors. This flexibility means that
exceptions aren’t necessarily treated with the same gravity in Ruby as
in other languages, as they are very simple to rescue once they are
raised. In certain cases, folks have even used rescue as a stand-in replacement for
conditional statements. The classic example follows:
rescue
name = @user.first_name.capitalize rescue "Anonymous"
Usually, this is done with the intention of capturing the NoMethodError raised by something like
first_name being
nil here. It accomplishes this task well, and looks
slightly nicer than the alternative:
NoMethodError
nil
name = @user.first_name ? @user.first_name.capitalize : "Anonymous"
However, the downside of using this trick is that you will most
likely end up seeing this code again, at the long end of a painful
debugging session. For demonstration purposes, let’s assume our User is implemented like this:
User
require "pstore"
class User
def self.data
@data ||= PStore.new("users.store")
end
def self.add(id, user_data)
data.transaction do
data[id] = user_data
end
end
def self.find(id)
data.transaction do
data[id] or raise "User not found"
end
end
def initialize(id)
@user_id = id
end
def attributes
self.class.find(@user_id)
end
def first_name
attributes[:first_name]
end
end
What we have here is basically a PStore-backed user database. It’s not terribly
important to understand every last detail, but the code should be fairly
easy to understand if you play around with it a bit.
PStore
Firing up irb, we can see that the rescue trick works fine for cases in which
User#first_name returns nil:
User#first_name
>> require "user"
=> true
>> User.add('sandal', email: '[email protected]')
>> @user = User.new('sandal')
=> #<User:0x48c448 @
>> name = @user.first_name.capitalize rescue "Anonymous"
=> "Anonymous"
=> #<User:0x49ab74 @
>> @user.first_name
=> nil
>> @user.attributes
Ordinary execution also works fine:
>> User.add('jia', first_name: "Jia", email: "[email protected]")
=> {:first_name=>"Jia", :email=>"[email protected]"}
>> @user = User.new('jia')
=> #<User:0x492154 @
>> name = @user.first_name.capitalize rescue "Anonymous"
=> "Jia"
>> @user.attributes
=> {:first_name=>"Jia", :email=>"[email protected]"}
>> @user.first_name
=> "Jia"
>> @user = User.new('sandal')
It seems like everything is in order; however, you don’t need to
look far. Notice that this line will succeed even if @user is undefined:
@user
>> @user = nil
=> nil
>> name = @user.first_name.capitalize rescue "Anonymous"
=> "Anonymous"
This means you can’t count on catching an error when a typo or a
renamed variable creeps into your code. This weakness of course
propagates down the chain as well:
>> name = @user.a_fake_method.capitalize rescue "Anonymous"
=> "Anonymous"
>> name = @user.a_fake_method.cannot_fail rescue "Anonymous"
=> "Anonymous"
Of course, issues with a one-liner like this should be easy enough
to catch even without an exception. This is most likely the reason why
this pattern has become so common. However, this is usually an
oversight, because the problem exists deeper down the bunny hole as
well. Let’s introduce a typo into our user implementation:
class User
def first_name
attribute[:first_name]
end
end
Now, we go back and look at one of our previously working
examples:
>> @user = User.new('jia')
=> #<User:0x4b8548 @
>> name = @user.first_name.capitalize rescue "Anonymous"
=> "Anonymous"
>> @user.first_name
NameError: undefined local variable or method `attribute' for #<User:0x4b8548 ...>
from (irb):23:in `first_name'
from (irb):32
from /Users/sandal/lib/ruby19_1/bin/irb:12:in `<main>'
Hopefully, you’re beginning to see the picture. Although good
testing and extensive quality assurance can catch these bugs, using this
conditional modifier rescue hack is
like putting blinders on your code. Unfortunately, this can also go for
code of the form:
def do_something_dangerous
might_raise_an_error
rescue
"default value"
end
Pretty much any rescue that does not capture a
specific error may be a source of silent failure in your applications.
The only real case in which an unqualified rescue might make sense is when it is combined
with a unqualified raise, which
causes the same error to resurface after executing some code:
raise
begin
# do some stuff
rescue => e
MyLogger.error "Error doing stuff: #{e.message}"
raise
end
In other situations, be sure to either know the risks involved, or
avoid this technique entirely. You’ll thank yourself later.
One thing you really don’t want to do is mess up a method_missing hook. Because the purpose of
method_missing is to handle unknown
messages, it is a key feature for helping to find bugs in your
code.
method_missing
In Chapter 3, Mastering the Dynamic Toolkit, we covered
some examples of how to use method_missing properly. Here’s an example of
how to do it wrong:
class Prawn::Document
# Provides the following shortcuts:
#
# stroke_some_method(*args) #=> some_method(*args); stroke
# fill_some_method(*args) #=> some_method(*args); fill
# fill_and_stroke_some_method(*args) #=> some_method(*args); fill_and_stroke
#
def method_missing(id,*args,&block)
case(id.to_s)
when /^fill_and_stroke_(.*)/
send($1,*args,&block); fill_and_stroke
when /^stroke_(.*)/
send($1,*args,&block); stroke
when /^fill_(.*)/
send($1,*args,&block); fill
end
end
end
Although this may look very similar to an earlier example in this
book, it has a critical flaw. Can you see it? If not, this
irb session should help:
>> pdf.fill_and_stroke_cirlce([100,100], :radius => 25)
=> "0.000 0.000 0.000 rg\n0.000 0.000 0.000 RG\nq\nb\n"
>> pdf.stroke_the_pretty_kitty([100,100], :radius => 25)
=> "0.000 0.000 0.000 rg\n0.000 0.000 0.000 RG\nq\nb\nS\n"
>> pdf.donuts
=> nil
By coding a method_missing hook
without delegating to the original Object definition, we have effectively muted
our object’s ability to complain about messages we really didn’t want it
to handle. To add insult to injury, failure cases such as fill_and_stroke_cirlce and stroke_the_pretty_kitty are doubly confusing,
as they return a non-nil value, even though they do
not produce meaningful results.
Object
fill_and_stroke_cirlce
stroke_the_pretty_kitty
Luckily, the remedy to this is simple. We just add a call to
super in the catchall case:
super
def method_missing(id,*args,&block)
case(id.to_s)
when /^fill_and_stroke_(.*)/
send($1,*args,&block); fill_and_stroke
when /^stroke_(.*)/
send($1,*args,&block); stroke
when /^fill_(.*)/
send($1,*args,&block); fill
else
super
end
end
Now, if we rerun our earlier examples, you will see much more
predictable behavior, in line with what we’d expect if we had no hook
set up in the first place:
>> pdf.fill_and_stroke_cirlce([100,100], :radius => 25)
NoMethodError: undefined method `cirlce' for #<Prawn::Document:0x4e59f8>
from prawn/lib/prawn/graphics/color.rb:68:in `method_missing'
from prawn/lib/prawn/graphics/color.rb:62:in `method_missing'
from (irb):4
from /Users/sandal/lib/ruby19_1/bin/irb:12:in `<main>'
>> pdf.stroke_the_pretty_kitty([100,100], :radius => 25)
NoMethodError: undefined method `the_pretty_kitty' for #<Prawn::Document:0x4e59f8>
from prawn/lib/prawn/graphics/color.rb:68:in `method_missing'
from prawn/lib/prawn/graphics/color.rb:64:in `method_missing'
from (irb):5
from /Users/sandal/lib/ruby19_1/bin/irb:12:in `<main>'
>> pdf.donuts
NoMethodError: undefined method `donuts' for #<Prawn::Document:0x4e59f8>
from prawn/lib/prawn/graphics/color.rb:68:in `method_missing'
from (irb):6
from /Users/sandal/lib/ruby19_1/bin/irb:12:in `<main>'
An important thing to remember is that in addition to ensuring
that you call super from within your
method_missing() calls, you are also
responsible for maintaining the method’s signature. It’s possible to
write a hook that captures only a missing method’s name while ignoring
its arguments and associated block:
method_missing()
def method_missing(id)
# ...
end
However, if you set things up this way, even when you call
super, you’ll be breaking things
farther up the chain, as Object#method_missing expects the whole
signature of the function call to remain intact. So it’s not only
delegating to the original that is important, but delegating without
information loss.
Object#method_missing
If you’re sure to act responsibly with your method_missing calls, it won’t be that
dangerous in most cases. However, if you get sloppy here, it is
virtually guaranteed to come back to haunt you. If you get into this
habit right away, it’ll be sure to save you some headaches down the
line.
This appendix doesn’t come close to covering all the trouble that
you can get yourself into with Ruby. It does, however, cover some of the
most common sources of trouble and confusion and shows some much less
painful alternatives.
When it comes to design, much can be gained by simply reducing
complexity. If the path you’re on seems too difficult, odds are that it
can be made a lot easier if you just think about it in a different way. As
for “clever” implementation tricks and shortcuts, they can be more trouble
than they’re worth if they come at the expense of clarity or
maintainability of your code.
Put simply, the worst practices in Ruby are ones that make you work
much harder than you have to. If you start to introduce code that seems
really cool at first, but later is shown to introduce complicated faults
at the corner cases, it is generally wise to just rip it out and start
fresh with something a little less exciting that’s more reliable.
If you maintain the balancing act between creative approaches to
your problems and ones that work without introducing excess complexity,
you’ll have a very happy time writing Ruby code. Because Ruby gives you
the power to do both good and evil, it’s ultimately up to you how you want
to maintain your projects. However, code that is maintainable and
predictable is much more of a joy to work with than fragile and sloppy
hacks that have been simply duct-taped together.
Now that we have reached the very end of this book, I trust that you
have the skills necessary to go out and find Ruby Best (and Worst)
Practices on your own. The real challenge is knowing the difference
between the two, and that ability comes only with practical experience
gained by working on and investigating real problems. This book has
included enough real-world examples to give you a head start in that area,
but the heavy lifting needs to be done by you.
I hope you have enjoyed this wild ride through Ruby with me, and I
really hope that something or the other in this book has challenged or
inspired you. Please go out now and write some good open source Ruby code,
and maybe you’ll make a guest appearance in the second edition!
If you enjoyed this excerpt, buy a copy of Ruby Best Practices.
© 2014, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://archive.oreilly.com/pub/a/ruby/excerpts/ruby-best-practices/worst-practices.html | CC-MAIN-2014-52 | refinedweb | 4,413 | 55.74 |
17 October 2012 14:47 [Source: ICIS news]
LONDON (ICIS)--?xml:namespace>
In its official autumn assessment (“Herbstgutachten”), the government pointed to impacts from the ongoing eurozone crisis and weaker growth trends in emerging economies in Asia and
However, economics minister Philipp Rosler said that while there was a weakening of growth, it would go too far to talk about a “collapse in growth.”
In fact,
“There are many indications that the global economy will gain momentum in 2013. Then
For 2012, the government slightly raised its GDP projection to 0.8%, from 0.7% in its previous assessment issued in April.
In 2011, | http://www.icis.com/Articles/2012/10/17/9604831/germanys-government-cuts-2013-gdp-growth-forecast.html | CC-MAIN-2015-22 | refinedweb | 104 | 63.19 |
On 22 July 2014 15:30, Brian Wylie <briford.wylie at gmail.com> wrote: > Okay, the transformer approach worked amazingly well. It's a bit of a hack > the transformer simply adds a ',' to the beginning of lines where I'm > calling commands that need to be 'auto-quoted'.. but certainly speaks well > of the IPython design that my hack worked so quickly. :) Well, I'm glad it worked. How are you deciding which lines need that treatment? There are two bits of machinery transforming input in IPython. Input transformers handle things where you can tell just from looking at the line, like %magic and !shell commands. Then the prefilter machinery changes things that depend on what's in the current namespace, like autocall. Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/ipython-dev/2014-July/013358.html | CC-MAIN-2022-33 | refinedweb | 133 | 76.82 |
Anim storing things like the timestamp and the request’s ID.
useRef is not only for DOM referencesuseRef is not only for DOM references
There are three ways to store variables within functional components:
- We can define a simple
constor
letwhose value will always be reinitialized with every component re-rendering.
- We can use
useStatewhose value persists across re-renderings, and if you change it, it will also trigger re-rendering.
- We can use.
For instance, the example below will always show 5 even if the component is re-rendered by its parent.
function Component() { let variable = 5; setTimeout(() => { variable = variable + 3; }, 100) return <div>{variable}</div> }
…whereas this one will keep increasing the number by three and keeps re-rendering even if the parent does not change.
function Component() { const [variable, setVariable] = React.useState(5); setTimeout(() => { setVariable(variable + 3); }, 100) return <div>{variable}</div> }
And finally, this one returns five and won’t re-render. However, if the parent triggers a re-render then it will have an increased value every time (assuming the re-render happened after 100 milliseconds).
function Component() { const variable = React.useRef(5); setTimeout(() => { variable.current = variable.current + 3; }, 100) return <div>{variable.current}</div> }
If we have mutable values that we want to remember at the next or later renders and we don’t want them to trigger a re-render when they change, then we should use
useRef. In our case, we will need the ever-changing request animation frame ID at cleanup, and if we animate based on the the time passed between cycles, then we need to remember the previous animation’s timestamp. These two variables should be stored as refs.
The side effects of useEffectThe side effects of useEffect
We can use the
useEffect hook to initialize and cleanup our requests, though we want to make sure it only runs once; otherwise it’s going to end up creating, canceling and re-creating the animation frame request at every render. Here’s a working, but bad example:
function App() { const [state, setState] = React.useState(0) const requestRef = React.useRef() const animate = time => { // Change the state according to the animation requestRef.current = requestAnimationFrame(animate); } // DON’T DO THIS React.useEffect(() => { requestRef.current = requestAnimationFrame(animate); return () => cancelAnimationFrame(requestRef.current); }); return <div>{state}</div>; }
Why is it bad? If you run this, the
useEffect will trigger the animate function that will both change the state and request a new animation frame. It sounds good, except that the state change will re-render the component by running the whole function again including the
useEffect hook that will first as a cleanup cancel the request made by the animate function in the previous cycle and then spin up a new request. This ultimately replaces the request made by the animate function and it’s completely unnecessary. We could avoid this by not spinning up a new request in the animate function, but that still wouldn’t be so nice. It would still leave us with an unnecessary cleanup every round and if the component re-renders for some other reason — like the parent re-renders it or some other state has changed — then the unnecessary cancelation and request re-creation would still happen. It is a better pattern to only initialize requests once, keep them spinning by the animate function and then cleanup once when the component unmounts.
To make sure the
useEffect hook runs only once, we can pass an empty array as a second argument to it. Passing an empty array has a side-effect though, which avoids us from having the correct state during animation. The second argument is a list of changing values that the effect needs to react to. We don’t want to react to anything — we only want to initialize the animation — hence we have the empty array. But React will interpret this in a way that means this effect doesn’t have to be up to date with the state. And that includes the animate function because it was originally called from the effect. As a result, if we try to get the value of the state in the animate function, it will always be the initial value. If we want to change the state based on its previous value and the time passed, then it probably won’t work.
function App() { const [state, setState] = React.useState(0) const requestRef = React.useRef() const animate = time => { // The 'state' will always be the initial value here requestRef.current = requestAnimationFrame(animate); } React.useEffect(() => { requestRef.current = requestAnimationFrame(animate); return () => cancelAnimationFrame(requestRef.current); }, []); // Make sure the effect runs only once return <div>{state}</div>; }
The state’s setter function also accepts a functionThe state’s setter function also accepts a function
There’s a way to use our latest state even if the
useEffect hook locked our state to its initial value. The setter function of the
useState hook can also accept a function. So instead of passing a value based on the current state as you probably would do most of the time:
setState(state + delta)
… you can also pass on a function that receives the previous value as a parameter. And, yes, that’s going to return the correct value even in our situation:
setState(prevState => prevState + delta)
Putting it all togetherPutting it all together
Here’s a simple example to wrap things up. We’re going to put all of the above together to create a counter that counts up to 100 then restarts from the beginning. Technical variables that we want to persist and mutate without re-rendering the whole component are stored with
useRef. We made sure
useEffect only runs once by passing an empty array as its second parameter. And we mutate the state by passing on a function to the setter of
useState to make sure we always have the correct state.
See the Pen
Using requestAnimationFrame with React hooks by Hunor Marton Borbely (@HunorMarton)
on CodePen.
Update: Taking the extra mile with a custom HookUpdate: Taking the extra mile with a custom Hook
Once the basics are clear we can also go meta with Hooks, by extracting most of our logic into a
custom Hook. This will have two benefits:
- It greatly simplifies our component, hiding technical variables that are related to the animation, but not to our main logic
- Custom Hooks are reusable. If you need an animation in another component you can simply use it there as well
Custom Hooks might sound like an advanced topic at first, but ultimately we just move a part of our code from our component to a function, then call that function in our component just like any other function. As a convention, a custom Hook’s name should start with the use keyword and the rules of Hooks apply, but otherwise, they are just simple functions that we can customize with inputs and that might return something.
In our case to make a generic Hook for
requestAnimationFrame we can pass on a callback that our
custom Hook will call at every animation cycle. This way our main animation logic will stay in our component, but the component itself will be more focused.
See the Pen
Using requestAnimationFrame with custom React Hook by Hunor Marton Borbely (@HunorMarton)
on CodePen.
Thanks for the read. New to React. Good to dive right in. Glad to see CSS-Tricks is aware of its ecosystem!
A nice addition could be to extract the RAF logic to a reusable hook:
True, initially I thought that it’s complicated enough already, but then I also added a version with a custom Hook. I ended up with a solution that is really similar to yours except you calculate the time passed from the beginning and I’m using a combination of the last state and the time passed between two cycles. In your case, it’s indeed true that you can round the number when setting the state, but in mine, it’s actually not.
Also, if you round when setting state, it will save unnecessary renders.
Thanks for this post. The problem is that even if you want to do a simple animation it’s going to be very laggy. I don’t know the exact mechanism behind react rendering but I assume they use promise. In browser rAF is processed before even it gets to the css and rendering elements. But react rendering is not the same as browser rendering. so I think we cannot expect to see the same result here.
A simple svg animation with this approach : | https://css-tricks.com/using-requestanimationframe-with-react-hooks/ | CC-MAIN-2021-10 | refinedweb | 1,428 | 52.49 |
stags: Scala tags generatorstags: Scala tags generator
InstallationInstallation
Using Coursier:Using Coursier:
coursier bootstrap co.pjrt:stags-cli_2.12:0.4.2 -o stags
If you want to use
stags tag generation as a library, you can add it to sbt with:
libraryDependencies += "co.pjrt" % "stags_2.12" % "0.4.2"
Using Nailgun:Using Nailgun:
You can use Coursier to create a standalone cli for starting Stags with Nailgun like this:
coursier bootstrap --standalone co.pjrt:stags-cli_2.12:0.4.2 \ -o stags_ng -f --main com.martiansoftware.nailgun.NGServer stags_ng & // start nailgun in background ng ng-alias stags co.pjrt.stags.cli.Main ng stags --version
You can then create an alias for
ng stags if that's still too much typing.
Caveats and tips:
- You must call
ng ng-aliasafter every restart of the nailgun server. You could create a script to do this
- You could also simply make an alias in your terminal (ie:
alias stags=ng co.pjrt.stags.cli.Main).
- If you are running multiple Nailgun instances (for example, one for
stagsand one for
scalafmt) you must run one of them in a different port.
- In the above example, simply call
stags_ng $new_portto run the stags ng instance in a different port. Then all
ngcalls need to have the flag
--nailgun-port $new_portin them.
- You could in theory simply feed both jars (ie: stags and scalafmt) into the same ng instance but beware this could cause classpath conflicts between the two (or more) jars.
UsageUsage
stags ./
This will fetch all Scala files under the current directory. The tags file will be generated in
./tags. To place the tags file somewhere else, do:
stags ./ -o path/to/tags
FeaturesFeatures
The two main differences between stags and a general ctags generator like Universal CTags is its ability to understand Scala code (with all its intricacies) and the ability to produce qualified tags.
Understanding Scala intricacies and static tagging themUnderstanding Scala intricacies and static tagging them
What are static tags? Static tags are tags for "static functions". In the C world this means functions that can only be used in the file where they are defined; you could think of them as "private". Vim understand static tags and will match them first before anything else.
Static tags lend themselves nicely to private field and functions, so
stags marks private statements and fields as static, while taking care of some Scala intricacies.
If a def/val/class/ect is
private within its file, then it is static. If it is private for some large scope, then it isn't static. This means that if it is
private[X] then we check if
X is an enclosing object within the file. However, if X isn't an enclosing object in this file, then we mark it as non-static. For example
package org.example.somepackage.test object X { object Y { private[X] def f = … } } object K { private[somepackage] def g = … }
In this example,
f would be static, but
g isn't because
g might be accessed from outside the file.
Other cases that are marked as static are:
- constructor fields in classes (ie: in
class X(a: Int, b: String, c: Boolean),
a,
band
cwill all be static)
- But non-static for the first parameter group of
caseclasses (since those are accessible by default)
case class X(a: Int)(b: Int)<-
awill be non-static, but
bwill be static
- Any that are marked as "private" are static
- the single field in an implicit class/case class
implicit class X(val x: Int)<-
xis static
- this is done because chances are that
xwill never be accessed anywhere but this file
- all implicit things (val, defs, class, etc)
- these things are rarely, if ever, accessed via their tokens
Qualified tagsQualified tags
A common pattern found when importing conflicting fields is to use them in a qualified form. For example:
import org.example.SomeObject import org.example.OtherObject SomeObject.foo(...) OtherObject.foo(...)
In order to differentiate between the two,
stags generates tags for all fields along with an extra tag that combines their parent with the tag itself. Note that
stags never generates qualified tags for fields/methods in
trait and
class (only objects and package objects) since said fields/methods cannot be qualifiedly referenced.
Following code, by default, would produce three tags:
Example,
foo and
Example.foo:
package object test { object Example { def foo(...) } }
The depth of the qualified tags is controlled by
--qualified-depth. Setting it to three (3) would produce a third tag
test.Example.foo.
Vim support for qualified tagsVim support for qualified tags
Vim won't understand such a tag right off the bat. The following modification is required:
function! QualifiedTagJump() abort let l:plain_tag = expand("<cword>") let l:orig_keyword = &iskeyword set iskeyword+=\. let l:word = expand("<cword>") let &iskeyword = l:orig_keyword let l:splitted = split(l:word, '\.') let l:acc = [] for wo in l:splitted let l:acc = add(l:acc, wo) if wo ==# l:plain_tag break endif endfor let l:combined = join(l:acc, ".") try execute "ta " . l:combined catch /.*E426.*/ " Tag not found execute "ta " . l:plain_tag endtry endfunction nnoremap <silent> <C-]> :<C-u>call QualifiedTagJump()<CR> | https://index.scala-lang.org/pjrt/stags/stags/0.4.1?target=_2.12 | CC-MAIN-2021-39 | refinedweb | 861 | 62.27 |
It's not the same without you
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi,
I'm looking to create a script post function that will set a custom field(group picker) if the user is a member of a specfic group
Group Name = GroupA
Users in GroupA = A,B,C
CustomFiled = Participants (Group Picker (Single Group))
If the report/creator is a member of GroupA i want to set the Participants field to that group name so as that all users in that group can view the ticket.
thanks,
Dan
Hey Daniel,
The script would look something like this:.reporter?.name, 'jira-administrators')) { // Check if the user in Group A
def cf = customFieldManager.getCustomFieldObjectByName("Participants") // Participants Custom Field
def correctGroup = groupManager.getGroup("jira-administrators") //
}
First, the script checks to see is the reporter of the issue in a specific group. Next, we get the custom field object for Participants. Then, we get the group object for Group A. We then add the Group object to a group list. Finally, we update the Participants field and set it to Group A.
Hi Joshua,
Really appreciate the reply, So i have tried the above and it doesnt appear to work, I've replaced my group name in place of 'jira-administrators', you'll have to excuse my ignorance on this bit, but do i need to put something in place of <group>?
thanks,
Dan
Hi Dan,
You're correct that you need to replace your group name instead of 'jira-administrators'. Make sure you replace that in both places in the script, I believe lines 8 and 11. I would ensure that you have the right group name. In JIRA, if you go to User Management -> Groups, it will give you the exact group name. You do not need to replace <Group> with anything.
The second potential issue here is having the correct name of the custom field you want to set. I would verify the name is correct. In my script, it is "Participants". I would go to Custom Fields page in JIRA and double check you have the right field name.
The third potential issue here is that you don't have your post-function in the correct order. After adding the post-function to the workflow step, did you make sure you put it in the right execution order? See attached image:
Post functions should go after the "Fire" step. You can use the up and down arrows when in edit mode to move the post function to the correct order. Remember to publish your workflow when finished editing.
Also, what JIRA and ScriptRunner versions are you running? That might be useful to know if you still can't get it to work.
Hi Joshua,
So i have tried the above and still no joy, I have checked and copied the name from both the Group and Custom Field.
We are currently using Jira 6.3.15 and script runner 3.1.4, saying that we are in the process of upgrading our system to Jira 7.3.8 and the latest compatable version of script runner, we have run into trouble with the scripts as part of the upgrade, thats holding up the upgrade, we are waiting on support for that.
thanks,
Dan
Hi Daniel,
Apologies for not getting that information from you sooner. Just to clarify, you are testing this script on an instance of JIRA 6.3.15 with SR 3.1.4? If so, I am not sure that the script I gave you will work at all. I wrote and tested that script with JIRA 7.4.1 and SR 5.0.15. I unfortunately don't think I have a way to test the script for SR 3.1.4.
A lot changed from JIRA 6 to JIRA 7. I would recommend taking a look at this documentation page on our website. If you're evaluating the new version of ScriptRunner or have already purchased it, you can gladly come to the Adaptavist support portal and we can help you with upgrading your scripts if you're having. | https://community.atlassian.com/t5/Adaptavist-questions/Set-Custom-Filed-value-grouppicker-if-creator-report-is-in-a/qaq-p/608005 | CC-MAIN-2018-17 | refinedweb | 694 | 72.66 |
Contents
- Reason for writing unit tests and how they help us in the development process
- The peculiarities of testing and the ways to generate data
- How to write unit tests?
- TDD, or test-driven development
- Mock: what’s that and why should you apply it?
- Conclusion
Reason for writing unit tests and how they help us in the development process
Let’s say we are in the middle of a project development with complicated logic and a number of various forms. The first sprint is behind and we’re moving towards the end of the second one. We submit some changes into the logic of a form’s functioning model and think everything is alright. However, we soon get a bug report from the QA engineers team telling that the part of the functionality we implemented during the sprint one has failed. Could we avoid this to happen?
A project is a regiment of software components, each performing one small task. A component receives some data, performs operations according to its business logic and returns result. By knowing the code behind a component we can predict the result for any incoming data. To check if the project works fine different kinds of testing techniques are used, unit testing being one of them. In this technique, a test case is written to check every component’s possible variant of behavior. It makes sure a component returns a definite result provided some certain data known beforehand. We can assume the whole project would function well If every component worked out well during this test.
When is the right time for writing tests and what are their advantages?
If you are working on a landing page you will hardly need unit tests. Still, we are totally positive about writing them anyway, at least smoke ones. These tests will let you avoid some critical mistakes in your code, though they won’t save you from invalid data. The origin of such testing type takes place from radioengineering. They used to apply electrical power to a card and wait to see if there would appear smoke, which in its turn indicated malfunctioning of the card. In our case the crux is the same: if there is an error, something doesn’t work properly. So it is always better to write tests that check your code execution.
Developers often say:
1. "Writing unit tests takes too much time";
2. "Running tests takes too much time";
3. "That’s not my job to do testing";
4. "I haven’t got a clue how the code works".
We answer:
1. Writing unit tests during the development process will save a lot of your time at the end of the development. You can discover a bug at the very moment it pops up.
2. You should write test intending them to be fast in their fulfillment. You can also adjust tests on Jenkins at push or pull request in the repository. Thus the tests are not carried out on your computer and the code with mistakes doesn’t go into the stable branch.
3. Just no comments guys :)
4. It can also happen when the project is not developed from scratch by your team. In this case it is better to spend some time and figure it out.
The peculiarities of testing and the ways to generate data
The majority of frameworks use unittest module so it doesn’t matter which one you use. For python unit tests automatization this module supports some important concepts:
Test Case, i.e. a testing scenario (a set of conditions, variables, system states, or modes that are tested). It is generally indivisible and can include one or more asserts. Here on a test is considered a Test Case. However, Test Case is a class derived from unittest.TestCase() in Python documentation. In this article, a Test Case is a method of the mentioned class (starting with test_), not a class.*
Test Suite, i.e. Test Case set within a class or a module. Test sets (Test Suite) are formed on the basis of functional or logical characteristics.
Test Fixture, i.e. a number of functions or means for consistent run of a testing scenario.
Test Runner, i.e. a component that controls tests execution and displays the result for a user. A runner can use graphic or text interface, or it can return a definite entry signaling about the results of the test run.
Every test has its set of statuses:
. - ok - test has been successfully carried out
F - FAIL - test has failed
E - ERROR - an unexpected error occurred whilst running a test
x- expected failure - you got an expected exception (exception)
u - unexpected success - you got success though expected an error
s - skipped 'msg' - text is skipped
Follow the link to get more information.
So there actually is an instrument for testing in Python. So why bother and write them? Let’s find out.
How to write unit tests?
You should always write unit tests alongside the code so that any developer engaged in the project could understand what is written there. Unit tests have a standard structure as a rule. A testing case is derived from TestCase and must be independent meaning it shouldn’t hinge on other tests. The method of every test should start with the word test_. Suppose we need to execute a set of instructions for adjusting, downloading, and subsequently deleting data. There exists a number of methods in unittest module for this:
setUp – method called to prepare the test fixture; it is called before every test.
tearDown – method called immediately after the test method has been called and the result recorded. This is called even if the test method raised an exception
setUpClass – a method called before tests in an individual class run.
tearDownClass – a method called after tests in an individual class have run.
setUpModule – a method called before classes in an individual module run.
tearDownModule – a method called after classes in an individual module run.
setUpClass and tearDownClass are to be used altogether with @classmethod, i.e. a decorator that declares a function in a class in a way that it doesn’t need access to the class where it is located. Moreover, this function can be called using (Class.f()) or its sample (Class().f()).
setUpModule and tearDownModule are implemented as separate functions in a module and they do not enter any of the class of a module.
def setUpModule(): createConnection() def tearDownModule(): closeConnection() class MyUnitTest(unittest.TestCase): @classmethod def setUpClass(cls): do_something_expensive_for_all_sets_of_tests() def setUp(self): do_something_for_test() class MyFirstSetOfTests(MyUnitTest): @classmethod def tearDownClass(cls): super(MyFirstSetOfTests, cls).tearDownClass() do_something_expensive_for_just_these_first_tests() def tearDown(self): do_something_for_test()
A word on how we write unit tests for the functionality. We always start from the preparatory stage called SetUp. Then we divide it into logical parts and test them. It resembles the process of code development. For example:
1. Authorization.
def test_permissions(self): resp = self.client.get(self.login_url, self.valid_sign_up_data) self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED) resp = self.client.patch(self.login_url, self.valid_sign_up_data) self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED) resp = self.client.put(self.login_url, self.valid_sign_up_data) self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED) resp = self.client.delete(self.login_url, self.valid_sign_up_data) self.assertEqual(resp.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
2. Valid execution.
def test_success(self): resp = self.client.post(self.signup_url, self.valid_sign_up_data) self.assertEqual(resp.status_code, status.HTTP_200_OK) resp = self.client.post(self.login_url, self.valid_log_in_data) self.assertEqual(resp.status_code, status.HTTP_200_OK)
3. Form’s errors.
def test_bad_request(self): data = copy.deepcopy(self.valid_sign_up_data) data['username'] = '' resp = self.client.post(self.signup_url, data) self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST) data = copy.deepcopy(self.valid_sign_up_data) data['email'] = 'email' resp = self.client.post(self.signup_url, data) self.assertEqual(resp.status_code, status.HTTP_400_BAD_REQUEST)
It also seems convenient to drag tests to the python module and divide them into separate files, every file being responsible for a logical part of the functionality.
To run tests we need to generate an appropriate set of data. What methods can we use for that?
1. Create testing data set in advance.
2. Generate testing data set for every test.
Fixtures
You can create some definite set of data, so-called fixtures. They will be loading as soon as the test run begins, and they will be processed during its execution. Still this approach has some peculiar drawbacks:
1. You can’t store a big amount of stockpiled data, as their loading takes time.
2. It is not a flexible approach for test execution.
3. If the data structure changes, you’ll have to change all the fixtures.
However, its positive side lets you use a small and invariable set of data for tests execution (for instance, the list of cities).
Factories
If your data structure is constantly changing and there’s a need to change data depending on the condition, we recommend using another option. It is of a critical significance when working with large databases. Here it seems more logical to generate testing data set in the SetUp method. For example, you can manually create an entry in the table of the database, or generate a file, or use a tool that changes fixtures for dynamic data generation factoryboy.readthedocs.io.
This tool is compatible with a few ORMs:
- Django
- MongoEngine
- SQLAlchemy
You can generate different sets of data and strictly set parameters if needed:
class UserFactory(factory.django.DjangoModelFactory): class Meta: model = User @factory.lazy_attribute_sequence def username(self, n): return '{0}_{1}'.format(lorem_ipsum.words(1, False), n) @factory.lazy_attribute_sequence def email(self, n): return '{0}_{1}@example.com'.format(lorem_ipsum.words(1, False), n) @factory.lazy_attribute_sequence def first_name(self, n): return '{0}_{1}'.format(lorem_ipsum.words(1, False), n) @factory.lazy_attribute_sequence def last_name(self, n): return '{0}_{1}'.format(lorem_ipsum.words(1, False), n) @factory.lazy_attribute def password(self): return make_password('qwerty') is_active = True
You can also use SubFactory, RelatedFactory, post_generation for generation of all correspondent connections Foreign Key, Many to Many and others.
class TaskFactory(factory.django.DjangoModelFactory): class Meta: model = Task @factory.lazy_attribute def name(self): return lorem_ipsum.words(3, False) def start(self): return now() @factory.post_generation def end(self, create, extracted, **kwargs): if create: self.end = self.start + timedelta(days=1) self.save() user = factory.SubFactory(UserFactory)
How to use it in tests:
class TestAPI(APITestCase): def setUp(self): self.url = reverse('api:user:api_task_list') # create user self.user = UserFactory() TaskFactory.create_batch(10, user=self.user) TaskFactory.create_batch(10) def test_user_tasks_list(self): self.client.force_authenticate(self.user) self.assertEqual(Task.objects.count(), 20) resp = self.client.get(self.url) self.assertEqual(resp.status_code, status.HTTP_200_OK) self.assertEqual(len(resp.data), 10)
This work with database slows down your tests run because before test execution happens the following:
Test database is emptied in case it contains some information and in case the permission is granted by a user;
All tables and indexes are created in test database;
Fixtures sets are loaded;
Tests are executed;
Everything created during test execution is deleted.
Can we speed everything up?
Well, we can easily change database for sqlite. Tests will be executed a way faster.
PostgreSQL
Creating test database for alias 'default'... .... ---------------------------------------------------------------------- Ran 4 tests in 113.917s
SQL Lite
Creating test database for alias 'default'... .... ---------------------------------------------------------------------- Ran 4 tests in 67.901s
The speed increased twice as you can see. You should, though, be cautious changing the database, for example, if you are working with something specific, like queryset.Extra. Or you can find yourself up the creek working with DATETIME FORMAT.
- SQLIte use ISO-8601 date and time format;
- Postgresql ISO 8601, SQL-compatible, traditional POSTGRES, and others;
You’ll get an error while changing database as well if you add JSONField or ArrayField
$ InterfaceError: Error binding parameter 6 - probably unsupported type.
Your tests should be based on the production server database you are going to use when you work with the database. This will prevent you from having a lot of problems later on.
Code coverage
Code coverage is a measure that determines how much of the application’s source code is being tested. The tool called Coverage is usually used for measuring code coverage. The race for the high percentage of covering results comes to no good. The big number, in this case, isn’t equal to absence of errors. A well-written test should embrace all the cases and errors, it should check all ACLs and services availability.
So how to write tests:
- Cover all cases;
- Consider all the variants of errors;
- Check access rights;
- Check the validity of the received data.
Following these rules will guarantee you great code coverage.
TDD, or test-driven development
This method lies in:
1. First, you write a test for a given task;
2. Then you write code for this task and test passes;
3. After this, you refactor the code and make it comply with the standards;
4. Finally you repeat the whole process for the next part of the code.
Let’s see how it works via unit testing example:
Suppose we have a task to implement form sending that contains a great amount of sending data with various validations. Suppose it’s REST API and we need to send a complicated JSON.
1. We start from writing a unit test for sending a simple form without attachments. The test won’t work, we get crashes. We write code and we have the test running.
2. We add the test for error validation check which first won’t work either. Then we’ll continue with writing code and doing its refactoring.
3. Then we add an attachment to the form we created. The test won’t work again. So we write code to make it run.
4. Then we want to add test for attachment validation and write code which will allow the execution of the test.
5. We’ll end with code refactoring.
The process described above suggests making some concessive and repeated steps which will result in a functioning code.
Where does the advantage of this method lie? Now imagine you came up with the code that sends this complicated data structure. You’ll spend hours to debug it since you’ll need to fill in the data and send it. As an option, you can resort to dividing data into blocks. However, it will lead you to filling in a great amount of data eventually. While coping with the tasks using TDD will let you fill in the data once, after this you’ll just need to debug the code in accordance with the test.
TDD suggests much more than merely correction check, it can also influence the program’s design. Focused on tests in the beginning, you can more clearly understand what kind of functionality the user needs.
Though you’ll need to write more code using TDD method, the general time consumption for the development turns out to be little. That’s why you’ll decrease the amount of time spent on debugging manyfold. Moreover, the more tests you will write, the fewer errors the code will have.
When to apply TDD?
Every developer, who resorted to the TDD at least once in their career and found this methodology useful, chooses implementation area for it. In our company, we apply TDD only when working with huge amounts of data with complicated structure. This does save time.
Mock: what’s that and why should you apply it?
According to the dictionary, a mock means “an act of imitation”. The module with this name helps to simplify modules testing on Python.
Its operation principle is simple: if you need to test a function, you can substitute everything that doesn’t relate to it with mocks (e.g. reading from disk or network). And you won’t need to adapt these functions for tests: Mock replaces the objects in other modules even if the code doesn’t accept them in the form of parameters. It means that you can execute tests without adapting anything to tests.
So this kind of behaviour is not a toy rocket, it is more of a toy planet where you can fly your test jet planes and rockets. And you use mock package for this. If you use Python 2.7 you’ll just need to install package
$ pip install mock
Versions Python 3.3 and above include mock library which you can use.
A Mock object has a number of attributes with the information about calls:
- called — shows if the object was called or not
- call_count — the number of the calls
- call_args — the arguments of the last call
- call_args_list — the list of calls
- method_calls — the track of calls to methods and attributes and their methods and attributes
- mock_calls — the record of calls to the mock object, its methods, attributes and returned values
To check everything once again for better confidence you can also call one of assert_* methods in automated tests.
You can make your stub smart using side_effect
def small_function(args): prepare_fake_data() with patch('module.strong_method', side_effect=small_function) as mock_method: another_class.take(mock_method)
To be more illustrative let’s consider more examples of unit testing:
Suppose we are working on the payment gateway using stripe. After signing up we got all the keys, performed client’s signup and payment. Our next step is to cover this functionality with tests.
Client signup is carried out using stripe library and create method execution for stripe object.
stripe.Customer.create(email=email)
Let’s add registration in stripe on the background alongside regular registration.
Now how can we test it? We can use a stub for this function. We need to patch the function for this. The patch can act as a decorator in the tests.
@patch('stripe.Customer.create', return_value=FAKE_CUSTOMER) def test_success(self, create_stripe_customer): resp = self.client.post(self.signup_url, self.valid_sign_up_data) self.assertEqual(resp.status_code, status.HTTP_200_OK) create_stripe_customer.assert_called_once_with(email=self.valid_sign_up_data['email'])
We can also patch several functions or methods:
@patch("stripe.Customer.create", return_value=deepcopy(FAKE_CUSTOMER)) @patch("stripe.Customer.retrieve", return_value=deepcopy(FAKE_CUSTOMER)) @patch("stripe.Event.retrieve") def test_webhook_with_transfer_event(self, event_retrieve_mock, customer_retrieve_mock, customer_create_mock): Customer.create(self.company.user) customer_create_mock.assert_called_once_with(email=self.company.user.email) fake_event = deepcopy(FAKE_EVENT_CUSTOMER_CREATED) event_retrieve_mock.return_value = fake_event resp = Client().post( reverse("payment:webhook"), json.dumps(fake_event), content_type="application/json" ) self.assertEquals(resp.status_code, 200) customer_retrieve_mock.assert_called_once_with(self.company.user.сustomer.striep_id)
This tool enables us to test the handcrafted wrappers on third-party API without direct address. In other words, it allows replacing intensive operations with stubs.
When to use Mock:
when you need to save resources;
when you need to test third-party API wrappers;
when you need to replace the result of function execution.
Having highlighted python unit test examples and how they help in the development process, we arrive at the following conclusions.
Conclusion
1. Writing unit tests is compulsory at any time;
2. If you lack time you’d better write at least Smoke Tests to except the evident mistakes;
3. Use TDD approach if there’s a need;
4. If you have third-party API wrappers, use Mock for testing and replacing the result of third-party API operation. | https://steelkiwi.com/blog/how-write-unit-tests-and-how-they-help-development/ | CC-MAIN-2019-35 | refinedweb | 3,193 | 58.69 |
I've created a custom search plugin by copying an existing () into ~/.flexget/plugins. However, when I run flexget plugins, my plugin is not there. What could be the reason?
~/.flexget/plugins
flexget plugins
The piratebay plugin already exists in the standard FlexGet package. Why are you installing another copy?
Did you change this at the bottom of the plugin?
plugin.register(UrlRewritePirateBay, 'piratebay', interfaces=['urlrewriter', 'search'], api_ver=2)
The name of the plugin - 'piratebay' - has to be unique.
'piratebay'
@ianstalk: the current one doesn't seem to be working for me (wrong TLD), so I'm making a fixed version. I would also like to make other plugins, based on this one.
@tubedogg: I did, yes,
@event('plugin.register')
def register_plugin():
plugin.register(UrlRewritePirateBayCC, 'piratebaycc', interfaces=['urlrewriter', 'search'], api_ver=2)
I also changed the name of the class to UrlRewritePirateBayCC, and that is the only change I made.
UrlRewritePirateBayCC
As @tubedoggsaid, you need to change the name the plugin is registered under. Alternately you could submit a pull request to contribute your fixes back to the project.
@ianstalk: as you can see, i have changed the name. I wouldn't submit any changes before I know they work, and I won't know if they work until flexget registers the plugin.
Oops missed that. Maybe try renaming the .py file and/or clearing out any .pyc files that were made? The former shouldn't matter, but it's worth a shot.
I've renamed the file piratebaycc.py. I can't find any .pyc files in .flexget/plugins. I'm running flexget as a daemon, do you think that might interfere? For daemons, are external plugins loaded at each command execution, or when the config is reloaded?
piratebaycc.py
.pyc
.flexget/plugins
Ah that might be it. I think they load when the daemon itself starts, so you'll want to completely stop and restart the daemon.
Just FYI you don't have to do that unless you have some specific reason to do so. Changing the name of the plugin is enough to make it unique.
That's correct.
Running flexget -L trace plugins I get the following suspicious line:
flexget -L trace plugins
2017-05-06 17:05 DEBUG plugin Trying to load plugins from: [u'/home/xbian/.flexget/plugins/plugins', '/usr/local/lib/python2.7/dist-packages/flexget/plugins']
Effectively, I moved my script to /home/xbian/.flexget/plugins/plugins and it is now being registered. I did not dig further into why flexget looks into this (erroneous) location. Not sure if there is an error in my system. I guess it comes from
/home/xbian/.flexget/plugins/plugins
env_path = os.environ.get('FLEXGET_PLUGIN_PATH')
but running
echo $FLEXGET_PLUGIN_PATH
in my shell returns nothing.
Thanks for all your help!
An easier way to do this would probably be just to create a dev env and test your changes on the actual plugin | https://discuss.flexget.com/t/custom-search-plugin-not-found/3437/12 | CC-MAIN-2017-39 | refinedweb | 485 | 60.61 |
This is a solution that I came up with by myself. If this follows the same idea as yours, there's nothing to be surprised with, as the basic idea for the optimal solution should be unique in most leetcode problems. I tried out several special cases to finally get the high-level idea, and then this solution.
nextToMaxSum: The next integer to the maximum value of the range that can all be covered up to now. In other words, with some added integers and the elements in array 'nums' we've already checked, nextToMaxSum stands for the first value we cannot reach.
For example, for nums=[1,1,2,9] and n=23. If we've checked 1,1,2, then 5 is the next integer to the current sum.
Needless to say, we'll have to add nextToMaxSum to the array, if it's not there yet. For example, we need to add 5 to the array in the example above. However, if nextToMaxSum is already in the array(If nums=[1,1,2,5]), then we don't need to add it.
Once nextToMaxSum is added to the array, the range almost doubles: Previously, it's (nextToMaxSum-1), but now, it's (nextToMaxSum-1+nextToMaxSum=2nextToMaxSum-1). So the new value for nextToMaxSum should be 2nextToMaxSum. Similarly, if we have an element x in array 'nums', then now the bound should be x+nextToMaxSum-1.
public class Solution { public int minPatches(int[] nums, int n) { int cnt; int patches = 0; long nextToMaxSum = 1; for(cnt = 0; cnt<nums.length && nextToMaxSum-1<n; ){ if(nextToMaxSum < nums[ cnt ]){ nextToMaxSum = nextToMaxSum << 1; patches++; } else { nextToMaxSum += nums[ cnt++ ]; } } while(nextToMaxSum-1 < n){ nextToMaxSum = nextToMaxSum << 1; patches++; } return patches; } }
As for the time complexity, since we need to iterate through array 'nums', it's at least O(nums.length). Also, we need to keep doubling nextToMaxSum from 1, until it exceeds n. Since n is an integer that has only 32 bits, the time complexity for this is a constant. So the total time complexity is O(nums.length+32)=O(nums.length) | https://discuss.leetcode.com/topic/46433/my-1ms-java-solution | CC-MAIN-2017-47 | refinedweb | 351 | 62.78 |
A Killer Vue.js Blog Demo: Launch in 2 Hours TopsNovember 02, 2017
We're very excited to sponsor **VueConf TO 2018** Come hang out and learn the latest Vue.js development (Nov. 15-16).
In a rush? Skip to tutorial or GitHub repo & live demo.
The JS landscape evolves... briskly. And that's putting it mildly.
But amidst the whirlwind of new frameworks & libraries, there ARE awesome tools.
Which ones should you use? Well, a few. My fav right now:
That progressive JavaScript framework is well-worth your time (I swear; scout's honor).
Be it to build e-commerce, refactor your app's frontend, craft a complex, SEO-friendly SPA, or launch a simple blog.
And building a Vue.js blog is precisely what I'm going to focus on today.
In this post, I'm going to provide an open source Vue.js blog demo + cover:
- Setup and routing
- Displaying your blog feed with filters
- Displaying individual posts with comments
- Creating a custom Vue plugin to keep your data decoupled
The result will be a JAMstack-ready, truly decoupled Vue blog you can plug to any data source—more on that later.
Important note: for this post, we assume you have a basic understanding of Vue.js.
Separating concerns in our Vue blog application
Take a quick look at the important bits of the document tree:
I won't go into the details of the webpack setup, but those familiar with Vue might recognize the vue-cli webpack template. We're also using vue-router for... well, routing purposes.
I've only added two folders to the base setup:
src/sass and
src/resources. We'll get into why the Vue app's Sass is separate from the components another time.
The
src/resources folder is where we'll put our decoupled data-access layer which we discuss towards the end of the post.
The component tree itself is nothing groundbreaking:
<Blog>- Our homepage
<BlogNav>- Handles navigation logic
<BlogFeed>- Renders the post listing
<BlogPost>- Renders an individual post
Finally, we've got the router used for URL handling and passing down props.
import Vue from 'vue' import Router from 'vue-router' import Blog from '../components' Vue.use(Router) export default new Router({ mode: 'history', linkActiveClass: 'active', routes: [{ path: '/', name: 'feed', component: Blog }, { path: '/by/:author', name: 'author', props: true, component: Blog }, { path: '/read/:post', name: 'post', props: true, component: Blog }] })
I've taken a slightly non-traditional approach here, as the router will always render the same component. We handle displaying the right content ourselves, the
<Blog> component being our main hub dispatching props to its three children.
The first route is our site root which just displays our default view (
<BlogFeed>) unfiltered.
The second is our authors filter, accessed by navigating to
/by/:author. Vue-Router grabs any path nodes preceded by
: as variables and injects their value into the route's component as props.
Last but not least, we do the same thing for the
/read/:post route, which will display a single blog post's content.
Rendering the blog feed
For now, we'll skip over actually fetching our data and assume it's already been loaded. Here's what the
<BlogFeed> template looks like:
<template> <transition-group <li v- <router-link : <figure class="preview__figure"> <img : <transition name="fade"> <figcaption v- {{ post.title }} </figcaption> </transition> </figure> </router-link> <transition name="fade"> <aside v- <h5 class="preview__meta"> <router-link {{ post.author }} </router-link> <time class="preview__published"> {{ prettyDate(post.published) }} </time> </h5> </aside> </transition> </li> </transition-group> </template>
And its logic:
import { scrollTo, kebabify, prettyDate } from '../helpers' export default { name: 'blog-feed', resource: 'BlogFeed', props: { filters: Object }, data() { return { posts: [] } }, computed: { reading() { return this.filters.post }, classes() { return { 'preview': true, 'blog__post': true, 'preview--reading': this.reading } }, feed() { const filterBy = { post: (filter, { id }) => filter === id, author: (filter, { author }) => filter === this.kebabify(author) } if (!Object.keys(this.filters).length) return this.posts return this.posts.filter(post => { return Object.keys(this.filters).every(filter => { return filterBy[filter](this.filters[filter], post) }) }) } }, methods: { scrollTo, kebabify, prettyDate }, beforeMount() { this.$getResource('feed') } }
As you can see towards the top of the script, we receive a
filters object from the parent
<Blog> component. The
feed() computed property will take care of automatically handling any changes to the filters as they happen. It filters the post array by looping over each active filter and running the corresponding function against the post, returning only posts that pass every test.
Then, in our template, we just
v-for the filtered feed, which will keep it up to date at all times. This is probably the most efficient way of handling filters, as you can easily add new ones by appending a new method to
filterBy.
When a post is clicked, the route will change to that post's ID. However, you may notice that the selected image from the feed remains visible on the left side. We never actually hide the listing, we just filter out any post whose ID does not match, leaving us with just one.
Note: in case you're inclined, I recently covered some Vue transitions used here on CSS-Tricks.
Rendering individual blog posts
Okay, we've got the right image displayed on the left side. Now we just need to bring in the corresponding content to its right! Again, it may seem counter-intuitive, but the
<BlogPost> component is always there, just waiting for something to do.
As soon as a
/read/:post route is hit, it will load the corresponding post and slide into view using a Vue
<transition>. The rest is just a plain old Vue template, putting the right variables in place. You'll generally receive the post body with the HTML pre-inserted, so make sure to use the
v-html attribute instead of
{{ mustaches }} to avoid auto-escaping tags.
In the demo, I used Disqus along with vue-disqus to add comments to posts. This is what I like the most about the state of frontend development these days: you can add features like this in minutes.
Vue data decoupling: a word on the JAMstack
The JAMstack (JavaScript, APIs, & Markup) is the product of frontend development's rapid evolution in recent years, notably in the JS community.
Get up to speed on the JAMstack with this key talk.
IMHO, two of its most redeeming features fuel its rise in popularity:
- Ease of access Familiar languages/frameworks coupled with abstracted backends make it hard to resist. I wouldn't be shocked to learn most frontend devs share my dread for databases.
- Decoupled data source In theory, it doesn't matter where your data is coming from or how many APIs you're calling to get it. As long as you can feed it to your app. For websites, this means you're never strictly bound to your CMS; you can swap it out if need be!
Why does it matter?
The decoupled backend is without a doubt an attractive prospect. One that I sold to my boss when looking into building a website with Vue.js and a headless CMS. As it turns out, it's very easy to allow your API's data structure to define your app's inner workings. Sooner or later, you'll find yourself wondering what happened to the whole "decoupled" argument.
A good hint that you're falling into this trap is if you're fetching data and parsing the response directly in your Vue components. The first step is to remove all your API calls from your components to create a replaceable data-access layer.
There are a ton of tools and techniques to help you implement this sort of pattern. What's difficult is making sure you keep it in mind while building your website or app.
Creating the resource plugin
This demo is a simple, open source example of how you could go about it without adding any dependencies other than
lodash.merge (4.2kb gzipped).
Often, the simplest way with Vue is to use its plugin system. You've probably made use of it before, like mounting the router or Vuex. Just to refresh our memory: all you need to do is pass a plugin to
Vue.use() along with any options before creating your root Vue instance.
Behind the scenes, Vue takes the plugin object and looks for an
install() method, which it calls passing in Vue as the first argument and your options object as the second.
There a whole bunch of sweet things you can do within this scope. But our mission today is just to create a
$getResource instance method that you'll be able to call using
this.$getResource(method, options) from within a component. This approach is pretty cool since it's mounted to each new component instance. That's just a fancy way of saying you get access to the component's
this binding, just as you're used to.
Head over to
./resources/resource.js:
import _merge from 'lodash.merge' export default { // don't be a fool, make use of defaults install(Vue, { endpoint = '', resources = {} }) { // add the method to the Vue prototype Vue.prototype.$getResource = function(method, options) { // this "this" references "this" in this component, ok? let name = this.$options.resource // turn around and walk away if anything is missing if (!name || !resources[name] || !resources[name][method]) return; // get the API path and response resolver let { path, resolve } = resources[name][method](options) // methods return promises to keep chain alive const mappers = { // deep merge dataSet with component's $data merge: dataSet => { _merge(this.$data, dataSet) return Promise.resolve(dataSet) }, // set $data props, accepts "dot.notation" string access set: dataSet => { Object.keys(dataSet).forEach(prop => { this.$set(this.$data, prop, dataSet[prop]) }) return Promise.resolve(dataSet) } } // fetch and parse resource then pass it to the resolver return fetch(endpoint + path) .then(response => response.json()) .then(response => resolve(response, mappers)) } } }
We assign the
$getResource method to the Vue prototype. As you can see, our options are
endpoint, being the API's base URL to which we'll append our query paths, and an object of
resources, which are "implementations" or definitions indicating how to handle the resource. We'll get into those real soon.
The
install() method creates a closure, capturing these options in its scope. Any function defined herein will have access to them at all times.
Stepping into the function, after a few checks to make sure we've got everything, we call the resource method, defined as a string by the first argument, and pass in any option received as the second argument. We grab the
path property and the
resolve method and define our two mappers:
merge() and
set(). The former, using lodash's merge utility, does a deep merge of the
dataSet with the component's
$data while the latter loops over
dataSet's keys, assigning them to the
$data.
That last bit is a pretty nice way of adding a shortcut without convoluting your code. Both methods conserve Vue's reactivity and return a Promise, meaning we can chain
.then(dataSet => {}) to run some logic after the fetch is complete.
Finally, the call to ES2015's native
fetch() method is made. The JSON response is parsed and passed along with the mappers to the
resolve function.
Resource implementations
With most of the heavy lifting taken care of, we're now ready to look at how we define our resources! If we look at the
./resources folder in our project you'll see the implementation directory. Open up
implementation/BlogPost.js:
export default { post(id) { return { path: `/post/${id}.json`, resolve: (response, mappers) => { let { title, content, meta } = response.results[0] content = '<p>' + content.split('\n\n').join('</p><p>') + '</p>' return mappers.merge({ title, content, ...meta }) } } } }
This implementation offers a
post() method, expecting a post's id to be passed in.
It then returns an object containing the computed path and a resolver function that maps the correct response data to the component. Keep in mind, the goal here is to shape the data to match your components and not the other way around.
Connecting the dots
Now we can get things going by telling Vue to use our custom plugin. First, we create an index in our implementations folder that declares
export { default as ResourceName } from './ResourceName' for each resource. In
main.js we pass the plugin and our resource collection to Vue like so:
import resource from './resources/resource' import * as resources from './resources/implementation' Vue.use(resource, { resources, endpoint: '/static/api' })
Calling your resources
Now all that's left to do is to call the resources from our components by adding a
resource option with the name of the resource we want to use. When we call
this.$getResource it will know which one to load. I've included some sample JSON data in the
static/api directory, which is what will be loaded and processed by our plugin.
Take
<BlogPost> for example.
import VueDisqus from 'vue-disqus/VueDisqus' import { kebabify, prettyDate } from '../helpers' export default { name: 'blog-post', resource: 'BlogPost', components: { VueDisqus }, props: { post: String }, data() { return { title: '', author: '', content: '', published: '', description: '' } }, watch: { post(to, from) { if (to === from || !this.post) return; this.$getResource('post', to) } }, methods: { kebabify, prettyDate }, beforeMount() { if (this.post) this.$getResource('post', this.post) } }
Simple stuff! We've got our component's data object schema, all filled out with placeholders as recommended by Vue and our resource definition. In the
beforeMount() hook, we check to see if the visitor has landed directly on a blog-post route and call it. Otherwise we wait for the
post property to change and react by loading the new post.
We don't even need to assign the response to the data; it just gets assigned to our component data by the resolvers! Personally, I quite like this approach because:
- I like things that make my life easier.
- I like it when other people can figure out what's going on. Our placeholder data is nicely defined and the calls to
$getResourcehint that we're fetching data.
Awesome! We're avoiding any references specific to our API's implementation inside the component. Instead we call our AJAX methods using an identifier, much like you would with Vuex, Vue's official state management plugin.
Result: live Vue.js blog demo (steal it!)
See/steal the demo on GitHub
See live demo deployed on Netlify from GH
Closing thoughts
You should end up with a simple blog application frontend that is 1) darn smooth and 2) truly decoupled from the backend. Which means you could completely change API providers & data structure and nothing would change in your Vue components. All the necessary logic is in one place!
Again, my goal in this post is, among other things, to open your eyes to this potential issue so you don't get caught off guard:
While the above is a given for many backend devs, it might be new territory for frontend ones.
The approach I've proposed works quite well but lacks some flexibility for large-scale projects. If you're keen on managing App/API relationship more closely, I recommend checking out Vue.js' documentation and tools like Vuex centralized data-store, AJAX libraries like vue-resource or Axios and data-persistence layers like JS-Data.
As we've seen here, Vue.js not only offers blazing fast data-reactivity, it's also very flexible and extendable. I some recent projects, I've developed similar data-access layers that even take care of component data injection and validation as well as defining and generating the input forms for the CMS.
Knowing your tools, their strengths/weaknesses and how to leverage them is the key to becoming a solid developer, something I've hopefully nudged you closer to today.
If you've enjoyed this post, please take a second to share it on Twitter. Got comments, questions? Hit the section below! | https://snipcart.com/blog/vuejs-blog-demo | CC-MAIN-2018-34 | refinedweb | 2,657 | 65.12 |
Today, I complete my tour through the C++20 core language features with a few small improvements. One interesting of these minor improvements is that most of volatile has been deprecated.
The abstract in the proposal P1152R0 gives a short description of the changes which volatile undergo: "The proposed deprecation preserves the useful parts of volatile, and removes the dubious / already broken ones. This paper aims at breaking at compile-time code which is today subtly broken at runtime or through a compiler update. "
Before I show you what semantic of volatile is preserved, I want to start with the deprecated features:
If you want to know all the sophisticated details, I strongly suggest you watch the CppCon 2019 talk "Deprecating volatile" from JF Bastien. Here are a few examples from the talk referring to used numbers (1) to (3).
(1)
int neck, tail;
volatile int brachiosaur;
brachiosaur = neck; // OK, a volatile store
tail = brachiosaur; // OK, a volatile load
// deprecated: does this access brachiosaur once or twice
tail = brachiosaur = neck;
// deprecated: does this access brachiosaur once or twice
brachiosaur += neck;
// OK, a volatile load, an addition, a volatile store
brachiosau = brachiosaur + neck;
#########################################
(2)
// deprecated: a volatile return type has no meaning
volatile struct amber jurassic();
// deprecated: volatile parameters aren't meaningful to the
// caller, volatile only applies within the function
void trex(volatile short left_arm, volatile short right_arm);
// OK, the pointer isn't volatile, the data is opints to is
void fly(volatile struct pterosaur* pterandon);
########################################
(3)
struct linhenykus { volatile short forelimb; };
void park(linhenykus alvarezsauroid) {
// deprecated: doe the binding copy the foreelimbs?
auto [what_is_this] = alvarezsauroid;
// ...
}
I didn't answer the crucial question so far: When should you use volatile? A note from the C++ standard says that "volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation." This means for a single thread of execution that the compiler must perform load or store operations in the executable as often as they occur in the source code. volatile operations, therefore, cannot be eliminated or reordered. Consequently, you can use volatile objects for communication with a signal handler but not for communication with another thread of execution.
To make it short, volatile avoids aggressive optimization and has no multithreading semantic.
I present the remaining small improvements with a short example that runs in the Compiler Explorer.
With C++20, you can directly use a range-based for-loop with an initializer.
// rangeBasedForLoopInitializer.cpp
#include <iostream>
#include <vector>
int main() {
for (auto vec = std::vector{1, 2, 3}; auto v : vec) { // (1)
std::cout << v << " ";
}
std::cout << "\n\n";
for (auto initList = {1, 2, 3}; auto e : initList) { // (2)
e *= e;
std::cout << e << " ";
}
std::cout << "\n\n";
using namespace std::string_literals;
for (auto str = "Hello World"s; auto c: str) { // (3)
std::cout << c << " ";
}
}
The range-based for-loop uses in line (1) a std::vector, in line (2) a std::initializer_list, and in line (3) a std::string. Additionally, in line (1) and line (2) I apply automatic type deduction for class templates which we have since C++17. Instead of std::vector<int> and std::initalizer_list<int>, I just write std::vector and std::initializer_list.
With GCC 10.2 and the Compiler Explorer, I get the expected output.
A constexpr function has the potential to run at compile-time but can also be executed at run-time. Consequently, you can make a constexpr function with C++20 virtual. Both directions are possible. Either can a virtual constexpr function override a non-constexpr function, but also can a virtual non-constexpr function override a constexpr virtual function. I want to emphasize, that override implies that the regarding function of a base class is virtual.
The following program shows both combinations.
// virtualConstexpr.cpp
#include <iostream>
struct X1 {
virtual int f() const = 0;
};
struct X2: public X1 {
constexpr int f() const override { return 2; }
};
struct X3: public X2 {
int f() const override { return 3; }
};
struct X4: public X3 {
constexpr int f() const override { return 4; }
};
int main() {
X1* x1 = new X4;
std::cout << "x1->f(): " << x1->f() << std::endl;
X4 x4;
X1& x2 = x4;
std::cout << "x2.f(): " << x2.f() << std::endl;
}
Line (1) uses virtual dispatch (late binding) via a pointer, line (2) virtual dispatch via reference. Once more, here is the output with GCC 10.2 and the Compiler Explorer.
Additionally, to the character types char16_t and char32_t from C++11, C++20 gets the new character type char8_t.char8_t is large enough to represent any UTF-8 code unit (8 bits). It has the same size, signedness, and alignment as an unsigned char, but is a distinct type.
Consequently, C++20 has a new typedef for the character type char8_t (1) and a new UTF-8 string literal (2).
std::u8string: std::basic_string<char8_t> (1)
u8"Hello World" (2)
The following program shows the straightforward usage of char8_t:
// char8Str.cpp
#include <iostream>
#include <string>
int main() {
const char8_t* char8Str = u8"Hello world";
std::basic_string<char8_t> char8String = u8"helloWorld";
std::u8string char8String2 = u8"helloWorld";
char8String2 += u8".";
std::cout << "char8String.size(): " << char8String.size() << std::endl;
std::cout << "char8String2.size(): " << char8String2.size() << std::endl;
char8String2.replace(0, 5, u8"Hello ");
std::cout << "char8String2.size(): " << char8String2.size() << std::endl;
}
Without further ado. Here is the output of the program on the Compiler Explorer.
A using enum declaration introduces the enumerators of the named enumeration in the local scope.
// enumUsing.cpp
#include <iostream>
#include <string_view>
enum class Color {
red,
green,
blue
};
std::string_view toString(Color col) {
switch (col) {
using enum Color; // (1)
case red: return "red"; // (2)
case green: return "green"; // (2)
case blue: return "blue"; // (2)
}
return "unknown";
}
int main() {
std::cout << std::endl;
std::cout << "toString(Color::red): " << toString(Color::red) << std::endl;
using enum Color; // (1)
std::cout << "toString(green): " << toString(green) << std::endl; // (2)
std::cout << std::endl;
}
The using enum declaration in (1) introduces the enumerators of the scoped enumerations Color into the local scope. From that point on, the enumerators can be used unscoped (2). This time, the only C++ compiler supporting using enum is the Microsoft Compiler 19.24:
First, what is a bit field? Here is the definition of Wikipedia: "A."
With C++20, we can default initialize the members of a bit-field:
// bitField.cpp
#include <iostream>
struct Class11 { // (1)
int i = 1;
int j = 2;
int k = 3;
int l = 4;
int m = 5;
int n = 6;
};
struct BitField20 { // (2)
int i : 3 = 1;
int j : 4 = 2;
int k : 5 = 3;
int l : 6 = 4;
int m : 7 = 5;
int n : 7 = 6;
};
int main () {
std::cout << std::endl;
std::cout << "sizeof(Class11): " << sizeof(Class11) << std::endl;
std::cout << "sizeof(BitField20): " << sizeof(BitField20) << std::endl;
std::cout << std::endl;
}
According to the member of a class (1) with C++11, the members of bit-field can have default initializers (2) with C++20. Finally, here is the output of the program with the Clang 10.0 compiler:
In the next fortnight, I will be in Italy and I will, therefore, not write a regular post.In case you want to read in the meantime one of my more than 300 posts to modern C++; I created a visual tour through my blog. This visual tour explains the TOC, categories, tags, the archive, and the search system and should help you find the post you are looking for.
Here you go:.
After my short break, I continue my journey through C++20 with the new library. In particular, I will write about std::span.35
Yesterday 8077
Week 24384
Month 208920
All 5761263
Currently are 199 guests and no members online
Kubik-Rubik Joomla! Extensions
Read more...
Read more... | http://modernescpp.com/index.php/volatile-and-other-small-improvements-in-c-20 | CC-MAIN-2021-10 | refinedweb | 1,294 | 59.84 |
Github repository:
Thread:
I also included a class named ScreenContext in context.py that allows (almost) anything to be done on the ODROID-SHOW using Python without having to worry about throttling input or entering escape commands manually. Printing text, changing background/foreground colors, performing linebreaks and most of the functionality can be done easily and neatly using method chaining.
GETTING STARTED:
Assuming you've created a .py file in the same file as context.py and have performed the steps described in INSTALL, you can start with the following template
This template creates a new screen context we can use for interacting with the ODROID-SHOW. Note that we sleep for 6 seconds to make sure ODROID-SHOW is done displaying the bootup screen, after which we can be sure that all commands are received and handled correctly.
Code: Select all
from context import Screen, ScreenContext import atexit ctx = ScreenContext("/dev/ttyUSB0") # Make sure the cleanup routine is called to clear the screen # when we close the script atexit.register(ctx.cleanup) # Wait 6 seconds for the screen to boot up before we start uploading anything ctx.sleep(6).reset_lcd().set_rotation(0)
Now, we can start with a simple Hello World program. Place the following at the end of the script.
This creates a simple loop that displays the text "Hello world!" on the ODROID-SHOW, the word "Hello" in red, on the first line, and the word "world!" in blue, on the second line.
Code: Select all
# Main loop while True: ctx.fg_color(Screen.RED).write("Hello").linebreak() ctx.fg_color(Screen.BLUE).write("world!").home()
The last home() method call makes sure the cursor is placed back at the start, otherwise the words "Hello" and "world!" would be drawn until they were offscreen.
Now you can run the script using the Python interpreter. Assuming you named the file example.py, you can just run the following
python example.py
Another thing to note is that you don't need to call sleep() to throttle the script's execution to keep the ODROID-SHOW in sync; the ScreenContext already takes care of that. However, if you do need it for any reason, you can call ctx.sleep(seconds) to halt the script's execution for any amount of seconds you want.
In case you only want to use ScreenContext but not the SHOWtime script itself, you can simply copy context.py, port_open and utils.py and place them in the same directory as your script.
TIPS:
All of the methods in ScreenContext have been commented, so you shouldn't have trouble checking it yourself for what you need. There are, however, some methods which may need some additional demonstration in order to use them as they were intended.
Prevent ghosting using ctx.write_line()
Let's try out the following script.
Looking at the code, you would expect the screen to display the following text:
Code: Select all
eggs = 555 spam = 1234 while True: ctx.write("Eggs %d" % eggs).linebreak() ctx.write("Spam %d" % spam).home() eggs = 99 spam = 321
However, since we have to explicitly write over text that has already been displayed to clear it, following is displayed instead:
Code: Select all
Eggs 99 Spam 321
Fortunately, ScreenContext has a convenient method that prints the given text to the screen and fills the rest of the line with whitespace, effectively preventing these ghosting issues. You can fix the example by doing this:
Code: Select all
Eggs 995 Spam 3214
Note that this also removes the need to use linebreak() to change the line.
Code: Select all
eggs = 555 spam = 1234 while True: ctx.write_line("Eggs %d" % eggs) ctx.write_line("Spam %d" % spam).home() eggs = 99 spam = 321 | https://forum.odroid.com/viewtopic.php?f=53&t=7715 | CC-MAIN-2021-25 | refinedweb | 619 | 73.68 |
#include "avcodec.h"
#include "get_bits.h"
#include "put_bits.h"
#include "dsputil.h"
Go to the source code of this file.
Definition in file huffyuv.c.
Value:
{\ uint16_t code = get_vlc2(&s->gb, s->vlc[3+plane1].table, VLC_BITS, 1);\ if(code != 0xffff){\ dst0 = code>>8;\ dst1 = code;\ }else{\ dst0 = get_vlc2(&s->gb, s->vlc[0].table, VLC_BITS, 3);\ dst1 = get_vlc2(&s->gb, s->vlc[plane1].table, VLC_BITS, 3);\ }\ }
Definition at line 685 of file huffyuv.c.
Referenced by decode_422_bitstream(), and decode_gray_bitstream().
Value:
Referenced by encode_bgr_bitstream().
Definition at line 828 of file huffyuv.c.
Referenced by decode_bgr_bitstream().
Definition at line 196 of file huffyuv.c.
Referenced by read_huffman_tables().
Definition at line 272 of file huffyuv.c.
Referenced by read_huffman_tables(), and read_old_huffman_tables().
Definition at line 177 of file huffyuv.c.
Referenced by read_huffman_tables(), and read_old_huffman_tables().
Initial value:
{ 3, 1, 2, 2, 2, 2, 3, 3, 7, 5, 7, 5, 8, 6, 11, 9, 7, 13, 11, 10, 9, 8, 7, 5, 9, 7, 6, 4, 7, 5, 8, 7, 11, 8, 13, 11, 19, 15, 22, 23, 20, 33, 32, 28, 27, 29, 51, 77, 43, 45, 76, 81, 46, 82, 75, 55, 56,144, 58, 80, 60, 74,147, 63, 143, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 27, 30, 21, 22, 17, 14, 5, 6,100, 54, 47, 50, 51, 53,106,107,108,109,110,111, 112,113,114,115, 4,117,118, 92, 94,121,122, 3,124,103, 2, 1, 0,129,130,131,120,119,126,125,136,137,138,139,140,141,142,134, 135,132,133,104, 64,101, 62, 57,102, 95, 93, 59, 61, 28, 97, 96, 52, 49, 48, 29, 32, 25, 24, 46, 23, 98, 45, 44, 43, 20, 42, 41, 19, 18, 99, 40, 15, 39, 38, 16, 13, 12, 11, 37, 10, 9, 8, 36, 7,128,127,105,123,116, 35, 34, 33,145, 31, 79, 42,146, 78, 26, 83, 48, 49, 50, 44, 47, 26, 31, 30, 18, 17, 19, 21, 24, 25, 13, 14, 16, 17, 18, 20, 21, 12, 14, 15, 9, 10, 6, 9, 6, 5, 8, 6, 12, 8, 10, 7, 9, 6, 4, 6, 2, 2, 3, 3, 3, 3, 2, }
Definition at line 115 of file huffyuv.c.
Referenced by read_old_huffman_tables().
Initial value:
{ 3, 9, 5, 12, 10, 35, 32, 29, 27, 50, 48, 45, 44, 41, 39, 37, 73, 70, 68, 65, 64, 61, 58, 56, 53, 50, 49, 46, 44, 41, 38, 36, 68, 65, 63, 61, 58, 55, 53, 51, 48, 46, 45, 43, 41, 39, 38, 36, 35, 33, 32, 30, 29, 27, 26, 25, 48, 47, 46, 44, 43, 41, 40, 39, 37, 36, 35, 34, 32, 31, 30, 28, 27, 26, 24, 23, 22, 20, 19, 37, 35, 34, 33, 31, 30, 29, 27, 26, 24, 23, 21, 20, 18, 17, 15, 29, 27, 26, 24, 22, 21, 19, 17, 16, 14, 26, 25, 23, 21, 19, 18, 16, 15, 27, 25, 23, 21, 19, 17, 16, 14, 26, 25, 23, 21, 18, 17, 14, 12, 17, 19, 13, 4, 9, 2, 11, 1, 7, 8, 0, 16, 3, 14, 6, 12, 10, 5, 15, 18, 11, 10, 13, 15, 16, 19, 20, 22, 24, 27, 15, 18, 20, 22, 24, 26, 14, 17, 20, 22, 24, 27, 15, 18, 20, 23, 25, 28, 16, 19, 22, 25, 28, 32, 36, 21, 25, 29, 33, 38, 42, 45, 49, 28, 31, 34, 37, 40, 42, 44, 47, 49, 50, 52, 54, 56, 57, 59, 60, 62, 64, 66, 67, 69, 35, 37, 39, 40, 42, 43, 45, 47, 48, 51, 52, 54, 55, 57, 59, 60, 62, 63, 66, 67, 69, 71, 72, 38, 40, 42, 43, 46, 47, 49, 51, 26, 28, 30, 31, 33, 34, 18, 19, 11, 13, 7, 8, }
Definition at line 96 of file huffyuv.c.
Referenced by read_old_huffman_tables().
Initial value:
{ 66,36,37,38,39,40,41,75,76,77,110,239,144,81,82,83,84,85,118,183, 56,57,88,89,56,89,154,57,58,57,26,141,57,56,58,57,58,57,184,119, 214,245,116,83,82,49,80,79,78,77,44,75,41,40,39,38,37,36,34, 0 }
Definition at line 90 of file huffyuv.c.
Referenced by read_old_huffman_tables().
Initial value:
{ 34,36,35,69,135,232,9,16,10,24,11,23,12,16,13,10,14,8,15,8, 16,8,17,20,16,10,207,206,205,236,11,8,10,21,9,23,8,8,199,70, 69,68, 0 }
Definition at line 84 of file huffyuv.c.
Referenced by read_old_huffman_tables(). | http://ffmpeg.org/doxygen/0.6/huffyuv_8c.html | CC-MAIN-2017-04 | refinedweb | 799 | 82.58 |
ASP.NET Core: Building enum provider to convert C# enums to JavaScript
My previous post about ASP.NET Core and getting C# enums to JavaScript was primitive and used simple attribute on enums to detect ones we need in JavaScript. This blog post extends the idea and makes some generalizations to support also those enums that are located in the libraries we don’t control or on what we don’t want to apply attribute.
- Converting C# enums to JavaScript
- Converting multiple C# enums to JavaScript
- ASP.NET Core: Converting C# enums to JavaScript
- ASP.NET Core: Building enum provider to convert C# enums to JavaScript
JavaScriptEnum attribute
We start again by defining JavaScriptEnum attribute and some sample enums that are decorated with this attribute. JavaScriptEnum attribute is marker attribute and it doesn’t carry any functionality.
public class JavaScriptEnumAttribute : Attribute
{
}
[JavaScriptEnum]
public enum PaymentTypeEnum
{
CreditCard,
Check,
Cash
}
[JavaScriptEnum]
public enum CustomerStatusEnum
{
Regular,
Gold,
Platinum
}
Now let’s move away from web application specifics and suppose we have to work with application that contains multiple libraries. We cannot force JavaScriptEnum attribute in library projects as we don’t want web specifics to pollute so called lower layers. Also we want to support system enums and enums from third-party libraries.
We need something more general than just marker attribute. We need provider and we start with defining interface for this. This way we are able to support multiple implementations of provider.
public interface IEnumProvider
{
IEnumerable<Type> GetEnumTypes();
}
Enum provider will be class that gathers all enum types that are needed in JavaScript and returns them as an IEnumerable<Type>. Notice that code below gathers enums that are decorated with JavaScriptEnum attribute and adds also two system enums to return value.
public class EnumProvider : IEnumProvider
{
public IEnumerable<Type> GetEnumTypes()
{
var enums = new List<Type>();
enums.AddRange(GetJavaScriptEnums());
enums.Add(typeof(CalendarWeekRule));
enums.Add(typeof(ProcessorArchitecture));
return enums;
}
private static IEnumerable<Type> GetJavaScriptEnums()
{
return from a in GetReferencingAssemblies()
from t in a.GetTypes()
from r in t.GetTypeInfo().GetCustomAttributes<JavaScriptEnumAttribute>()
where t.GetTypeInfo().BaseType == typeof(Enum)
select t;
};
}
}
NB! It’s possible to write more flexible and general enum provider. By example, enum provider can read information about enums to return from application configuration. It is also possible to make it return all enums from given name spaces and so on. There are no limits for providers.
Before using enum provider in web application views we have to register it with built-in dependency injection so it will be available for views and view components.
services.AddSingleton<IEnumProvider, EnumProvider>();
No let’s write view component that takes enum provider and turns returned enums to JavaScript string. Here again StringBuilder is used as buffer.
public class EnumsToJavaScriptViewComponent : ViewComponent
{
private readonly IEnumProvider _enumProvider;
public EnumsToJavaScriptViewComponent(IEnumProvider enumProvider)
{
_enumProvider = enumProvider;
}
public Task<HtmlString> InvokeAsync()
{
var buffer = new StringBuilder(10000);
foreach (var jsEnum in _enumProvider.GetEnumTypes())
{);
}
}
Notice that the view component doesn’t need any views. It returns just HtmlString with JavaScript and that’s it. Building view for this would be pointless overkill.
The following code snippet shows how to use the view component on layout page.
<script>
@await Component.InvokeAsync("EnumsToJavaScript")
</script>
And here is the result as seen in browser.
<script>
var PaymentTypeEnum = { "CreditCard": 0, "Check": 1, "Cash": 2 };
var CustomerStatusEnum = { "Regular": 0, "Gold": 1, "Platinum": 2 };
var CalendarWeekRule = { "FirstDay": 0, "FirstFullWeek": 1, "FirstFourDayWeek": 2 };
var ProcessorArchitecture = { "None": 0, "MSIL": 1, "X86": 2, "IA64": 3, "Amd64": 4, "Arm": 5 };
</script>
If some other casing is needed then enum provider can be changed. It also possible to create JavaScript variable formatter and ibject this to enum provider.
Other ways to convert enums to JavaScript
It’s also possible to use other ways to get enums to JavaScript.
- HtmlHelper extension method – it’s possible but it’s not very convenient to make it work with dependency injection. I don’t have any good idea right now how to do it the way the code looks nice and there are no architectural pollution,
- Extension method for Enum – we can replace this solution by simple extension method but then we have to write one line of code per Enum in view where we want to define enums; I don’t like this solution much,
- EnumProvider that returns JavaScript as string – this solution has one good point: we don’t need view component but the drawback is we need base class where method for creating JavaScript string is held (we can use the provider easily in controller actions or when using view injection we can use it also directly in views).
From these methods I would go with last one but there is one thing I don’t like – it binds enum provider strictly to web application and it has additional ballast when used in some other context. Of course, this is the matter of taste and like always – it depends!
Wrapping up
I started this enum to JavaScript saga with converting one enum to JavaScript on classic ASP.NET. Step by step I fleshed out the solution described in this post as ASP.NET Core has good features like view injection and view components that make the framework more flexible and easier to use when architecture is more complex. Although the solution here may seem like too big or too much like “corporate application architecture” it still keeps changes to enums logic in one place – if something changes then is is the matter of provider and there is no need to change layout pages. Also this way it is easier to test enum providers.
3 thoughts on “ASP.NET Core: Building enum provider to convert C# enums to JavaScript”
Pingback:The Morning Brew - Chris Alcock » The Morning Brew #2407
Great Article!
What would happen if you wrap the “@await Component.InvokeAsync(“EnumsToJavaScript”)” in a cache tag?
probably get a small perf gain? Otherwise, just output it as a JS file from a T4 template because it is compile time generated.
Thanks, Thomas for feedback. Optimizations are possible, of course. I think T4 is best way to go as then it is possible to add generated JS file to JS bundle if bundling is enabled for given application. | https://gunnarpeipman.com/aspnet/aspnet-core-enum-provider/ | CC-MAIN-2019-30 | refinedweb | 1,030 | 61.97 |
written two patches. One is for utils.py. It includes two new
classes and two new functions.
The classes are needed to make the SAX parser work and should not be
invoked directly.
The function XmlStringToDocutilsNodes copies an XML string to a docutils
node set.
The function xml_copy_tree makes an identical copy of an XML string for
testing purposes. (It now occurs to me that this function and the class
that goes with it should go in test_utils.py?)
The other patch is for test_utils.py. I've included tests to test the
new functions.
The code is documented.
Paul()
--
Aahz (aahz@...) <*>
"If you think it's expensive to hire a professional to do the job, wait
until you hire an amateur." --Red Adair
On 10/30/11 10:41 AM, Aahz wrote:
>()
etree is not compatible with Python 2.3. SAX is robust, standard, and
fast for this task. The most complex part of the code involves handling
namespaces. By using SAX, I can control the formatting of the
namespaces. For example, I can allow for a default namespace so that the
namespace isn't written for each element, making the XML easier to read.
Paul
I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details | https://sourceforge.net/p/docutils/mailman/docutils-develop/thread/[email protected]/ | CC-MAIN-2016-30 | refinedweb | 240 | 77.33 |
Date: Thu, 1 Jul 1999 09:46:12 +0200 From: Martin Neumann <[email protected]> To: [email protected] Subject: [PATCH] /proc/net/dev_hwaddr --UugvWAfsgieZRqgk Content-Type: text/plain; charset=us-ascii I attached a patch against 2.2.10 which adds "dev_hwaddr" to /proc/net/. Here an example: % cat /proc/net/dev_hwaddr lo 00:00:00:00:00:00 eth0 00:E0:7D:00:55:67 ippp0 - I did this to avoid parsing ifconfig's output. I have a box which has two network interfaces - one is connected to the part of the intranet which is considered to be "trusted", the other is considered "untrusted". This two interfaces are configured differently concerning ipchains-setup etc. I could have used the device name for distinguishing the two interfaces, but it depends on the card initialization order which one gets eth0 and which one is eth1. So if I change one of the NICs one day, the two devices perhaps swap their names and my scripts would the configure the wrong to be trusted. That would really be fatal. But of course I can distinguish both by the NIC hardware address. But to get the hwaddr I would have to parse ifconfig's output. I don't want to rely on that. So I did the patch so that I can easily lookup which device name refers to which NIC hwaddr. This could be also useful for those having need for a "unique system identifier" - now they can easily lookup whether a specific NIC hwaddr is present in the system. [Note] All hwaddr get formated like an (ethernet|tokenring|..) NIC's hwaddr. If you want to have the right format for other NIC types like ARCnet you have to reformat it. But I consider this reformating not to be a kernel job. Instead this should be done in userland. [Disclaimer] I never did any kernel patch before and I don't claim to have superb kernel knowledge. If I did something braindead, tell me. But if this patch is considered useful, I would be glad if this gets into the official kernel source. -- `Unser Kopf ist rund, damit das Denken die Richtung wechseln kann.' ~ Francis Picabia --UugvWAfsgieZRqgk Content-Type: text/plain; charset=us-ascii Content-Disposition: attachment; filename="patch.dev_hwaddr" --- linux/net/core/dev.c Thu Mar 25 18:23:34 1999 +++ linux-changed/net/core/dev.c Thu Jul 1 08:49:39 1999 @@ -56,6 +56,7 @@ * A network device unload needs to purge * the backlog queue. * Paul Rusty Russel : SIOCSIFNAME + * Martin Neumann : Added support for /proc/net/dev_hwaddr */ #include <asm/uaccess.h> @@ -1221,6 +1222,64 @@ return len; } +/* + * The following two procedures create the output for /proc/net/dev_hwaddr + * The code was mostly cloned from dev_get_info and sprintf_stats + */ + +static int sprintf_hwaddr(char *buffer, struct device *dev) +{ + int size; + + if (dev->addr_len) { /* Some devices have a hw address, some don't */ + int i; + char addr_byte_buf[dev->addr_len * 3], tmpbuf[4]; + + for (i=0; i < dev->addr_len; i++) { + sprintf(tmpbuf, "%02X:", (dev->dev_addr[i] & 0377)); + memcpy(&addr_byte_buf[i*3], tmpbuf, 3); + }; + + addr_byte_buf[dev->addr_len * 3 - 1] = 0; + size = sprintf(buffer, "%s %s\n", dev->name, addr_byte_buf); + } + else + size = sprintf(buffer, "%s -\n", dev->name); + + return size; +}; + +int dev_hwaddr_get_info(char *buffer, char **start, off_t offset, int length, int dummy) +{ + int len=0; + off_t begin=0; + off_t pos=0; + int size; + + struct device *dev; + + for (dev = dev_base; dev != NULL; dev = dev->next) + { + size = sprintf_hwaddr(buffer+len, dev); + len+=size; + pos=begin+len; + + if(pos<offset) + { + len=0; + begin=pos; + } + if(pos>offset+length) + break; + } + + *start=buffer+(offset-begin); /* Start of wanted data */ + len-=(offset-begin); /* Start slop */ + if(len>length) + len=length; /* Ending slop */ + return len; +}; + static int dev_proc_stats(char *buffer, char **start, off_t offset, int length, int *eof, void *data) { @@ -1867,6 +1926,13 @@ 0, &proc_net_inode_operations, dev_get_info }; + +static struct proc_dir_entry proc_net_dev_hwaddr = { + PROC_NET_DEV, 10, "dev_hwaddr", + S_IFREG | S_IRUGO, 1, 0, 0, + 0, &proc_net_inode_operations, + dev_hwaddr_get_info +}; #endif #ifdef CONFIG_NET_RADIO @@ -2003,6 +2069,8 @@ struct proc_dir_entry *ent = create_proc_entry("net/dev_stat", 0, 0); ent->read_proc = dev_proc_stats; } + + proc_net_register(&proc_net_dev_hwaddr); #endif #ifdef CONFIG_NET_RADIO --- linux/include/linux/proc_fs.h Tue May 11 19:36:09 1999 +++ linux-changed/include/linux/proc_fs.h Mon Jun 28 13:38:11 1999 @@ -146,6 +146,7 @@ PROC_NET_IPFW_CHAIN_NAMES, PROC_NET_AT_AARP, PROC_NET_BRIDGE, + PROC_NET_HWADDR, PROC_NET_LAST }; --UugvWAfsgieZRqgk-- - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [email protected] Please read the FAQ at | http://lwn.net/1999/0708/a/dev_hwaddr.html | crawl-002 | refinedweb | 738 | 52.49 |
.
Setup
First, I added the eth1 interface. The PUM comes with a host only connect by default. I didn’t want to affect the host only connection, so I added a second interface to the Virtualbox machine settings.
Next, I added the Oracle yum repository to make installing required packages easier.
cd /etc/yum.repos.d wget yum list
These are the prerequisites needed for building…
yum install gmp-devel libtool db4 db4-devel
Note: If you look ahead, you’ll also want to install ncurses-devel and subversion as well.
Install
I downloaded Open Cobol. Now, it is actually called Gnu Cobol.
cp gnu-cobol-2.0_rc-2.tar.gz /opt cd /opt tar -xzvf gnu-cobol-2.0_rc-2.tar.gz cd gnu-cobol-2.0 ./configure make make install
Note, I just ran it all as root. The last make install line you would need sudo in front of.
At this point, I got a compile error, and it didn’t install. This is the end of the make output:
libtool: compile: gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I.. -I.. -O2 -pipe -finline-functions -fsigned-char -Wall -Wwrite-strings -Wmissing-prototypes -Wno-format-y2k -U_FORTIFY_SOURCE -MT libcob_la-screenio.lo -MD -MP -MF .deps/libcob_la-screenio.Tpo -c screenio.c -fPIC -DPIC -o .libs/libcob_la-screenio.o screenio.c:2505: error: conflicting types for ‘cob_field_display’ ../libcob/common.h:1562: error: previous declaration of ‘cob_field_display’ was here screenio.c:2523: error: conflicting types for ‘cob_field_accept’ ../libcob/common.h:1566: error: previous declaration of ‘cob_field_accept’ was here screenio.c:2550: error: conflicting types for ‘cob_screen_accept’ ../libcob/common.h:1559: error: previous declaration of ‘cob_screen_accept’ was here make[2]: *** [libcob_la-screenio.lo] Error 1 make[2]: Leaving directory `/opt/gnu-cobol-2.0/libcob' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/opt/gnu-cobol-2.0' make: *** [all] Error 2 [/sorucecode] Fix: install ncurse development package: yum install ncurses-devel
Ran configure and make again. It worked this time.
Compiling
Created compile script (using the script from the old post)
su - psadm1 cd $PS_HOME/setup vi gnucbl.sh chmod +x gnucbl.sh
Here’s the script that I have so far. I do need to change it to accomodate the PS_APP_HOME. Right now, it only compiles the COBOL in the PS_HOME.
#!/bin/sh if [ ! -e "$PS_HOME" ]; then BASEDIR=$(dirname "$0") $LOGDIR/$BASENAME.out 2>&1 if [ "$?" == 0 ]; then echo "$cbl" >> $LOGDIR/success.txt echo " success" else echo "$cbl" >> $LOGDIR/fail.txt echo " fail" fi done
Updating
I found several compile errors with programs. As best I could tell, the fixes were coded but were not in the release. So, I decided to give the current development code a try. That seemed to fix the compile errors.
yum install subversion svn checkout open-cobol-code
The svn command downloads the code. I ran the configure, make, and make install commands the way I did earlier.
OS Info Issue
The next issue is that it wouldn’t compile the COBOL because of a CBL_GET_OS_INFO procedure. As best I can tell, that is something that Microfocus COBOL has. Here’s the error message:
libcob: Cannot find module 'CBL_GET_OS_INFO'
I found it in:
- PTPNETRT.cbl
- PTPSQLRT.cbl
To fix it, I just commented out the call.
vi $PS_HOME/src/cbl/PTPSQLRT.cbl
Updated to:
* Skp -- Removed to use with GnuCobol * CALL 'CBL_GET_OS_INFO' USING PRMOS
To test, I ran it with this:
cobcrun PTPDBTST ORACLE/HR92U022/PS/PS/TST/167916//0
Required Library Errors
My next error was this:
libcob: Cannot find module 'C_SQLCNC'
I searched for a COBOL program that matched, but I didn’t find one:
find ../src/cbl -exec grep -il C_SQLCNC {} \;
So, I created this script to find the library:
#!/bin/bash function searchLib() { results=$(nm -D "$1" | grep -i "$2") found=$? if [ $found -eq 0 ]; then echo "Found: $1" echo "$results" echo echo fi } #searchLib ./bin/libpssqlapi.so "$1" find $PS_HOME -name \*.so | while read line; do searchLib $line "$1" \; done
Here was the output:
$ ./setup/libSearch.sh C_SQLCNC Found: /opt/oracle/psft/pt/ps_home8.55.14/bin/libpssqlapi.so 000000000000fb40 T C_SQLCNC Found: /opt/oracle/psft/pt/ps_home8.55.14/bin/libpssqlapi_ansi.so 000000000000e4f0 T C_SQLCNC
I attempted to include that path on the library path environment
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:/opt/oracle/psft/pt/ps_home8.55.14/bin/
Next, I attempted to call the COBOL from a C program.
Here’s the source to gnurun.c (I put it into cblopen:
#include <libcob.h> extern int PTPDBTST(); int main(int argc, char **argv) { int ret; cob_init(argc, argv); ret = PTPDBTST(); return ret; }
I added the code to the cblopen directory, compiled it, and linked it together.
vi gnurun.c cc -c `cob-config --cflags` gnurun.c cobc -x -o gnurun gnurun.o PTPDBTST.so ./gnurun ORACLE/HR92U022/PS/PS/TST/167916//0
I linked in the library that I found earlier:pssqlapi_ansi.so: undefined reference to `PS_sqlsil'
With that message, I searched for that library:
setup/libSearch.sh PS_sqlsil
It found this library:
/opt/oracle/psft/pt/ps_home8.55.14/bin/libpsora_ansi64.so
So, I tried to link it like this:psora_ansi64.so
I attempted these two environment variables as well. The first one worked great as far as making it so that I could run it from anywhere and not have to be in the “cblopen” directory where I had compiled the open cobol output. The 2nd variable didn’t help at all. I’m not sure why it wasn’t working.
COB_LIBRARY_PATH=/opt/oracle/psft/pt/ps_home8.55.14/cblopen COB_PRE_LOAD=/opt/oracle/psft/pt/ps_home8.55.14/bin/libpssqlapi_ansi.so
After adding the libpsora_ansi64.so library to the link command, I got this error message:
Application Program Failed Action Type : SQL CONNEC In Pgm Section : SQLRT:GG100 SQL-CONNECT With Return Code: 09977 Error Message : SQLRT: Attempting to use Ansi API for a Unicode DB FAIL TO CONNECT PTPDBTST: Application Program Failed In Pgm Section : CONNECT FAILED With Return Code: 09977 Error Message : SQLRT: Attempting to use Ansi API for a Unicode DB
This fixed the problem:
cobc -x -o gnurun gnurun.o PTPDBTST.so /opt/oracle/psft/pt/ps_home8.55.14/bin/libpssqlapi.so /opt/oracle/psft/pt/ps_home8.55.14/bin/libpsora64.so
Running through Process Scheduler
To get it to run from the process scheduler, I had to create the PSRUN script. I put it into $PS_HOME/bin/PSRUN
#!/bin/sh echo $* >> $PS_HOME/cblout.log echo $* export COB_LIBRARY_PATH=$PS_HOME/cblopen export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib:$PS_HOME/cblopen $PS_HOME/cblopen/gnurun $2 | tee -a $PS_HOME/cblout.log
I can run it from the command line like this:
$ ./PSRUN PTPDBTST ORACLE/HR92U022/PS/PS/TST/167916//0PTPDBTST ORACLE/HR92U022/PS/PS/TST/167916//0 Application Program Failed Action Type : SQL CONNEC In Pgm Section : SQLRT:GG100 SQL-CONNECT With Return Code: 01045 Error Message : ORA-01045: user PS lacks CREATE SESSION privilege; logon denied FAIL TO CONNECT PTPDBTST: Application Program Failed In Pgm Section : CONNECT FAILED With Return Code: 01045 Error Message : ORA-01045: user PS lacks CREATE SESSION privilege; logon denied
I haven’t figured this issue out yet! | https://psst0101.digitaleagle.net/2017/07/14/opencobol-try-2/ | CC-MAIN-2018-13 | refinedweb | 1,190 | 59.09 |
.
What is an Affine Transformation
According to Wikipedia an affine transformation is a functional mapping between two geometric (affine) spaces which preserve points, straight and parallel lines as well as ratios between points. All that mathy abstract wording boils down is a loosely speaking linear transformation that results in, at least in the context of image processing, one or more manipulations like rotating, flipping, scaling or shearing by applying a transformation matrix.
One good thing is that since this is essentially a 2D geometric operation we can visualize it. Let me start off by giving a table of affine transformations that describe each type of geometric manipulation.
* Affine transformation uses angle of rotation that is clockwise which is in contrast to the typical geometry unit circle of angles being measured in counter clockwise rotation with 0 starting from the positive X axis, therefore you will see that the negative of the angle is often applied.
' notation here is just referring to the transformed output coordinate of x or y not the calculus notation for a derivative
For means of simple demonstration I will apply a couple transformations to manipulate the x and y coordinates of the following points which have three dimensional components of x, y and ascii character index similar to the way an image pixel has 3 dimensional components of x, y, and frequency (or intensity).
a = (0, 1, 0)
b = (1, 0, 1)
c = (0, -1, 2)
d = (-1, 0, 3)
The transformations for this example will be Scaling by 2 in all directions and rotation of 90 degrees clockwise. First I will perform the transformations individually to show the direct effect each has on moving the points around then I will combine the transformations and apply them in one action.
To begin I want to build a Numpy array (some may call this a matrix) with each row representing the point where the first column is the x, the second the y, and the third is the index of its letter in the ascii character set similar to the table shown below. Next I use Matplotlib to plot the points (after applying the unchanging Identity transformation) to give a baseline visual of where we stand.
import matplotlib.pyplot as plt import numpy as np import string # points a, b and, c a, b, c, d = (0, 1, 0), (1, 0, 1), (0, -1, 2), (-1, 0, 3) # matrix with row vectors of points A = np.array([a, b, c, d]) # 3x3 Identity transformation matrix I = np.eye(3)
color_lut = 'rgbc' fig = plt.figure() ax = plt.gca() xs = [] ys = [] for row in A: output_row = I @ row x, y, i = output_row xs.append(x) ys.append(y) i = int(i) # convert float to int for indexing c = color_lut[i] plt.scatter(x, y, color=c) plt.text(x + 0.15, y, f"{string.ascii_letters[i]}") xs.append(xs[0]) ys.append(ys[0]) plt.plot(xs, ys, color="gray", linestyle='dotted') ax.set_xticks(np.arange(-2.5, 3, 0.5)) ax.set_yticks(np.arange(-2.5, 3, 0.5)) plt.grid() plt.show()
The three points a, b, and c plotted on a grid after applying the Identity transformation to them via a simple vector matrix dot product leaving them unchanged.
I will now move on to creating a scaling transformation matrix \(T_s\) , as shown below, which scales the placement of the points in all directions.
$$ T_s = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
Now I will move on to plotting the transformed points similar to what was done with the original points unaltered by the Identity transformation but, this time I will apply the scaling transformation matrix defined above. For a better visualization, I plot a dotted line connecting the points.
# create the scaling transformation matrix T_s = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 1]]) fig = plt.figure() ax = plt.gca() xs_s = [] ys_s = [] for row in A: output_row = T_s @ row x, y, i = row x_s, y_s, i_s = output_row xs_s.append(x_s) ys_s.append(y_s) i, i_s = int(i), int(i_s) # convert float to int for indexing c, c_s = color_lut[i], color_lut[i_s] # these are the same but, its good to be explicit plt.scatter(x, y, color=c) plt.scatter(x_s, y_s, color=c_s) plt.text(x + 0.15, y, f"{string.ascii_letters[int(i)]}") plt.text(x_s + 0.15, y_s, f"{string.ascii_letters[int(i_s)]}'") xs_s.append(xs_s[0]) ys_s.append(ys_s[0]) plt.plot(xs, ys, color="gray", linestyle='dotted') plt.plot(xs_s, ys_s, color="gray", linestyle='dotted') ax.set_xticks(np.arange(-2.5, 3, 0.5)) ax.set_yticks(np.arange(-2.5, 3, 0.5)) plt.grid() plt.show()
From the plot above it should be very clear that the x and y dimensions were simply scaled up by a factor of two while the third dimension responsible for the ASCII letter index was left unchanged. In fact, those familiar with matrix algebra will have noticed that for all of the affine transformations listed in the first table the value represented in the third dimension is always left un-altered as indicated by the all zeros and one lone value in the third dimension index of the last column.
Now let me describe how to interpret the rotation transformation. I will start by solving the two trigonometric functions for the desired angle of rotation of 90 degrees, then I simply plug them into the rotation transformation matrix listed in the previous table.
$$
sin (90^{o}) = 1
$$
$$
cos (90^{o}) = 0
$$
$$ T_r = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
Now all I need to do is apply the same logic to transform and plot the points, like so:
# create the rotation transformation matrix T_r = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 1]]) fig = plt.figure() ax = plt.gca() for row in A: output_row = T_r @ row x_r, y_r, i_r = output_row i_r = int(i_r) # convert float to int for indexing c_r = color_lut[i_r] # these are the same but, its good to be explicit letter_r = string.ascii_letters[i_r] plt.scatter(x_r, y_r, color=c_r) plt.text(x_r + 0.15, y_r, f"{letter_r}'") plt.plot(xs, ys, color="gray", linestyle='dotted') ax.set_xticks(np.arange(-2.5, 3, 0.5)) ax.set_yticks(np.arange(-2.5, 3, 0.5)) plt.grid() plt.show()
Hopefully you can tell from the plot that all points were rotated 90 degrees around an axis of rotation at the origin.
The neat thing about affine transformations being essentially linear transformations is that you can combine the transformations and apply them in one step. To demonstrate this I will apply the dot product (matrix multiplication) of my two transformation matrices, like:
$$ T_{comb} = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} * \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 2 & 0 \\ -2 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
Now I can apply this combined transformation matrix to the points and replot them to show a combination of scaling by two and rotation by 90 degrees.
# create combined tranformation matrix T = T_s @ T_r fig = plt.figure() ax = plt.gca() xs_comb = [] ys_comb = [] for row in A: output_row = T @ row x, y, i = row x_comb, y_comb, i_comb = output_row xs_comb.append(x_comb) ys_comb.append(y_comb) i, i_comb = int(i), int(i_comb) # convert float to int for indexing c, c_comb = color_lut[i], color_lut[i_comb] # these are the same but, its good to be explicit letter, letter_comb = string.ascii_letters[i], string.ascii_letters[i_comb] plt.scatter(x, y, color=c) plt.scatter(x_comb, y_comb, color=c_comb) plt.text(x + 0.15 , y, f"{letter}") plt.text(x_comb + 0.15, y_comb, f"{letter_comb}'") xs_comb.append(xs_comb[0]) ys_comb.append(ys_comb[0]) plt.plot(xs, ys, color="gray", linestyle='dotted') plt.plot(xs_comb, ys_comb, color="gray", linestyle='dotted') ax.set_xticks(np.arange(-2.5, 3, 0.5)) ax.set_yticks(np.arange(-2.5, 3, 0.5)) plt.grid() plt.show()
Working with an Image
By now I hope that I've been able to build up some intuition about how affine transformations are used to simply move around points in 2D space, so with that out of the way I'd like to start working with some real image data to give a more concrete demonstration of how all this works.
This also allows me to cover another important topic of affine transformations which deals with the third dimension. The third dimension of data in an image represents the actual pixel value, or sometimes referred to as the intensity domain, whereas the physical 2D location of the pixels in the other two dimensions are referred to as the spatial domain.
To begin I will read in and display an image using matplotlib, which is simply a large capital letter R.
img = plt.imread('letterR.jpg') img.shape # (1000, 1000, 4)
Using the
imread(...) method I am able to read in the JPG image, representing the capital letter R, into a numpy ndarray. I then display the dimensions of the array which are 1000 rows by 1000 columns, together making up 1,000,000 pixels locations in the spatial domain. The individual pixel data is then in the form of an array of 4 unsigned integers representing a red, green, blue and alpha channel (or sample) that together provide the intensity data of each pixel.
plt.figure(figsize=(5, 5)) plt.imshow(img)
Next, I would like to apply the previous scale and rotation to the spatial domain of the image data, thus transforming the pixel locations similar to what I demonstrated earlier with the points data. However, I need to take a slightly different approach because the image data is organized in a different way than that of the rows of data points I worked with earlier. With the image data I need to map the indices for each pixel of the input data to the transformed output indices using the transformation matrix T, defined earlier.
# 2x scaling requires a tranformation image array 2x the original image img_transformed = np.empty((2000, 2000, 4), dtype=np.uint8) for i, row in enumerate(img): for j, col in enumerate(row): pixel_data = img[i, j, :] input_coords = np.array([i, j, 1]) i_out, j_out, _ = T @ input_coords img_transformed[i_out, j_out, :] = pixel_data plt.figure(figsize=(5, 5)) plt.imshow(img_transformed)
Plotting the image after applying the transformation clearly shows that the original image has been rotated 90 degrees clockwise and scaled up 2X. However, the result is now obviously diminished as you can easily see discontinuity in the pixel intensities.
To understand the reason for this I will again utilize a simple grid plot for demonstration. Consider a plot of 4 squares in a 2x2 grid similar to the spatial domain of a 2x2 image.
def plot_box(plt, x0, y0, txt, w=1, h=1): plt.scatter(x0, y0) plt.scatter(x0, y0 + h) plt.scatter(x0 + w, y0 + h) plt.scatter(x0 + w, y0) plt.plot([x0, x0, x0 + w, x0 + w, x0], [y0, y0 + h, y0 + h, y0, y0], color="gray", linestyle='dotted') plt.text(x0 + (.33 * w), y0 + (.5 * h), txt) # x0, y0, letter a = np.array((0, 1, 0)) b = np.array((1, 1, 1)) c = np.array((0, 0, 2)) d = np.array((1, 0, 3)) A = np.array([a, b, c, d]) fig = plt.figure() ax = plt.gca() for pt in A: x0, y0, i = I @ pt x0, y0, i = int(x0), int(y0), int(i) plot_box(plt, x0, y0, f"{string.ascii_letters[int(i)]} ({x0}, {y0})") ax.set_xticks(np.arange(-1, 5, 1)) ax.set_yticks(np.arange(-1, 5, 1)) plt.grid() plt.show()
Now watch what happens when I apply a 2X scaling transformation as depicted below. Recall that:
$$ T_s = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
You will notice that such a spatial transformation results in... well, "gaps" to put it in simple terms, which I've made obvious by plotting question marks along with the coordinates. The 2x2 grid is transformed into a 3x3 grid with the original squares being repositioned based of the linear transformation applied. This means that (0,0) * \(T_s\) remains (0,0) because of its properties as a 0 vector, but all others are scaled by two, such as (1,1) * \(T_s\) -> (2,2).
fig = plt.figure() ax = plt.gca() for pt in A: xt, yt, i = T_s @ pt xt, yt, i = int(xt), int(yt), int(i) plot_box(plt, xt, yt, f"{string.ascii_letters[i]}' ({xt}, {yt})") delta_w, delta_h = 0.33, 0.5 plt.text(0 + delta_w, 1 + delta_h, "? (0, 1)") plt.text(1 + delta_w, 0 + delta_h, "? (1, 0)") plt.text(1 + delta_w, 1 + delta_h, "? (1, 1)") plt.text(1 + delta_w, 2 + delta_h, "? (1, 2)") plt.text(2 + delta_w, 1 + delta_h, "? (2, 1)") ax.set_xticks(np.arange(-1, 5, 1)) ax.set_yticks(np.arange(-1, 5, 1)) plt.grid() plt.show()
The question remains of what to do with those gaps that have been introduced? An intuitive thought would be to simply look to the original image for the answer. It just so happens that if we apply the inverse of the transformation to a coordinate in the output I will get the corresponding location of the original input.
In matrix operations such as backwards mapping looks like this:
$$ (x, y, 1) = T_s^{-1} * (x' y' 1) $$
where x', y' are the coordinates in the above transformed 3x3 grid, specifically the a missing location, such as (2, 1), \(T_s^{-1}\) (actual values shown below) is the inverse of the 2x scaling matrix \(T_s\) and x, y are the coordinates that are found in the original 2x2 grid.
$$ T_s^{-1} = \begin{bmatrix} 1/2 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 1 \end{bmatrix}^{-1} $$
However, you will soon realize there is a bit of an issue that still needs sorted out due to the fact that each of the gap's coordinates map back to fractional values of the 2x2 coordinate system. In the case of image data you can't really have a fraction of a pixel. This will be clearer with an example of mapping the (2, 1) gap back to the original 2x2 space, like so:
$$ T_s^{-1} * (2, 1, 1) = (1, 1/2, 1) $$
In this case I will round the y' = 1/2 down to 0 and say that that maps to (1, 0). In the general sense this method of selecting a value in the original 2x2 grid to put into the gaps of the transformed 3x3 grid is known as interpolation, and in this specific example I am using a simplified version of the nearest neighbor interpolation method.
Ok, now back to the image data. It should be fairly clear what should be done now to fix those gaps in the scaled and rotated version of the letter R. I must develop an implementation of nearest neighbor interpolation based off the backwards mapping, using the inverse of the transformation matrix T, of the pixel coordinates in the transformed image to find either the exact match or nearest neighbor in the original image.
T_inv = np.linalg.inv(T) # nearest neighbors interpolation def nearest_neighbors(i, j, M, T_inv): x_max, y_max = M.shape[0] - 1, M.shape[1] - 1 x, y, _ = T_inv @ np.array([i, j, 1]) if np.floor(x) == x and np.floor(y) == y: x, y = int(x), int(y) return M[x, y] if np.abs(np.floor(x) - x) < np.abs(np.ceil(x) - x): x = int(np.floor(x)) else: x = int(np.ceil(x)) if np.abs(np.floor(y) - y) < np.abs(np.ceil(y) - y): y = int(np.floor(y)) else: y = int(np.ceil(y)) if x > x_max: x = x_max if y > y_max: y = y_max return M[x, y,] img_nn = np.empty((2000, 2000, 4), dtype=np.uint8) for i, row in enumerate(img_transformed): for j, col in enumerate(row): img_nn[i, j, :] = nearest_neighbors(i, j, img, T_inv) plt.figure(figsize=(5, 5)) plt.imshow(img_nn)
Not too shabby right?
I should note that in most cases the nearest neighbor method will not be sufficient. There are two other more common interpolation methods known as bilinear and bicubic interpolation that generally provide much better results. I will speak more about these other interpolation algorithms when introducing the Pillow and OpenCV libraries in latter sections. The purpose of this section is just to build an intuitive understanding of how things work.
Affine Transformations with Pillow
In this section I will be briefly covering how to use the excellent Python image processing library Pillow to perform affine transformations.
First off, Pillow will need to be installed. I used pip to accomplish this, like so:
$ pip install pillow
Now the first step is to import the
Image class from the PIL (PIL is the name of the Python module associated with Pillow) module and read in my image.
from PIL import Image
To read in the sample image file name "letterR.jpg" I call the class method
Image.open(...), passing it the filename, which returns an instance of the
Image class, which I then convert to a numpy array and display with matplotlib.
img = Image.open('letterR.jpg') plt.figure(figsize=(5, 5)) plt.imshow(np.asarray(img))
The Pillow
Image class has a handy method called
transform(...) that allows you to perform fine-grained affine transformations, but there are a few oddities that I must discuss first before I jump into a demonstration of it. The
transform(...) method begins with two required parameters representing
size as a tuple of height and width, followed by the
method of transformation to be applied, which will be
Image.AFFINE in this case.
The remaining parameters are optional keyword arguments that control how the transformation is to be performed. In the case of this example I will be using the
data parameter, which takes the first two rows of an affine transformation matrix.
For example, the 2x scaling transformation matrix I've been working with trimmed down to just the first two rows looks like this:
$$ T_s = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \end{bmatrix} $$
The last parameter that I will be using with the
transform(...) method is
resample, which is used to indicate the type of pixel interpolation algorithm to apply out of the possible choices of
Image.NEAREST (nearest neighbor),
Image.BILINEAR, or
Image.BICUBIC. This choice will often vary depending on the transformation being applied. However, bilinear and bicubic generally give better results than nearest neighbor, but as already demonstrated in this example nearest neighbor works quite well.
There are a few peculiarities that served as real gotchas for me the first time I used the
Image.transform(...) method, particularly around the construction of the affine transformation matrix with the weirdly truncated off last row. Thus, I'd like to spend some time going over why things work the way they do because its a bit of a process.
First thing that must happen is the image must be translated so that the origin (0, 0) is in the middle of the image. In the case of the 1000 x 1000 image of the letter R in this example that means a translation of -500 in the x and y.
Below I show the generic translation transformation matrix \(T_{translate}\) and the one I'll be using in the example \(T_{neg500}\).
$$ T_{translate} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} $$
$$
T_{neg500} = \begin{bmatrix}
1 & 0 & -500 \
0 & 1 & -500 \
0 & 0 & 1
\end{bmatrix}
$$
Then there are the 2X scaling \(T_{scale}\) and 90 degree rotation \(T_{rotate}\) matrices from before. However, the Pillow library actually decided to use standard geometric angles (i.e., counter clockwise) rather than the clockwise rotations I described earlier so the signs on the of the sin functions flip. Below are the resultant individual transformation matrices.
$$ T_{rotate} = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
$$
T_{scale} = \begin{bmatrix}
2 & 0 & 0 \
0 & 2 & 0 \
0 & 0 & 1
\end{bmatrix}
$$
Next another translation matrix needs to be applied which acts to reposition the spatial domain of the pixels essentially negating the first one that centered the origin. In this case I need a positive translation of 1000 in the x and y, where 1000 comes from twice the original because it has been scaled up by two.
$$ T_{pos1000} = \begin{bmatrix} 1 & 0 & 1000 \\ 0 & 1 & 1000 \\ 0 & 0 & 1 \end{bmatrix} $$
These constitute the individual transformation steps that are required, so all that remains is to multiply the matrices in order (i.e., right to left), like so:
$$ T = T_{pos1000} * T_{rotate} * T_{scale} * T_{neg500} $$
Ok, so there is actually one last oddity. The
Image.transform(...) method actually requires the inverse of the transformation matrix be supplied to the
data parameter as a flattened array (or tuple) excluding the last row.
$$ T_{inv} = T^{-1} $$
In code this all works as follows:
# recenter resultant image T_pos1000 = np.array([ [1, 0, 1000], [0, 1, 1000], [0, 0, 1]]) # rotate - opposite angle T_rotate = np.array([ [0, -1, 0], [1, 0, 0], [0, 0, 1]]) # scale T_scale = np.array([ [2, 0, 0], [0, 2, 0], [0, 0, 1]]) # center original to 0,0 T_neg500 = np.array([ [1, 0, -500], [0, 1, -500], [0, 0, 1]]) T = T_pos1000 @ T_rotate @ T_scale @ T_neg500 T_inv = np.linalg.inv(T)
img_transformed = img.transform((2000, 2000), Image.AFFINE, data=T_inv.flatten()[:6], resample=Image.NEAREST) plt.imshow(np.asarray(img_transformed))
Affine Transformations with OpenCV2
Continuing on I would like to briefly describe how to carry out these affine transformations with the popular image processing and computer vision library OpenCV. I use the word brief here because it is largely the same as what is required in the previous demonstration using Pillow.
First things first, you must install like so:
$ pip install opencv-python
As I mentioned above there is significant overlap in methodology between the Pillow approach and using OpenCV. For example, you still create a transformation matrix that first centers the array of pixels to the origin and, you only use the first two rows of the transformation matrix. The major difference is that with OpenCV you give it the standard matrix rather than the inverse.
So, with that understanding laid out I will jump into the code starting with importing the opencv-python module, which is named
cv2.
import cv2
Reading the image is as simple as calling the
cv2.imread(...) method, passing the filename as an argument..
Thus, in order to plot the numpy image data originating from the OpenCV library one must reverse the order of the pixel channels. Luckily, OpenCV provides a convince method
cvtColor(...) that can be used to do this as shown below (although numpy purists are likely to know that
img[:,:,::-1] will do the same).
img = cv2.imread('letterR.jpg') plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
A few last items to mention are that OpenCV requires the data in the transformation matrix to be of type 32 bit float rather than the default 64 bit float, so be sure to convert down to 32 bit with
numpy.float32(...). Also, the API to
cv2.warpAffine(...) does not provide the ability to specify what type of pixel interpolation algorithm to apply and I could not determine from the docs what is used. If you know or find out please post in the comments below.
T_opencv = np.float32(T.flatten()[:6].reshape(2,3)) img_transformed = cv2.warpAffine(img, T_opencv, (2000, 2000)) plt.imshow(cv2.cvtColor(img_transformed, cv2.COLOR_BGR2RGB))
Conclusion
In this article I have covered what an affine transformation is and how it can be applied to image processing using Python. Pure numpy and matplotlib was used to give a low-level intuitive description of how affine transformations work. I concluded by demonstrating how the same can be done using two popular Python libraries Pillow and OpenCV.
Thanks for reading and as always don't be shy about commenting or critiquing below. | https://stackabuse.com/affine-image-transformations-in-python-with-numpy-pillow-and-opencv/ | CC-MAIN-2021-21 | refinedweb | 4,031 | 63.59 |
30 March 2012 21:24 [Source: ICIS news]
HOUSTON (ICIS)--The American Chemistry Council (ACC) has welcomed recent data indicating continued expansion in the ?xml:namespace>
It said that the
Official February data showed personal income increased by 3.2% year on year, while consumer spending rose 4.1%.
“These reports suggest that the economy may be strengthening in light of improving payroll gains, rising wages, and a willingness to take on additional debt,” ACC said in the report.
“Pent-up demand, a stabilizing housing sector, and an accommodative Fed are other factors behind a strong growth scenario.”
However, the ACC warned that rising gasoline prices could stall consumer spending.
Gasoline prices in the US have shot up by about 24% since the start of the year to $3.4006/gal on 29 March, according to ICIS.
Meanwhile, major plastic resins production in the US rose 11.9% year on year in February, while year-to-date production totalled 12.7bn lbs, up 7.4% year on year, the ACC reported.
“Analysis of underlying demand indicates that customers have been restocking their thermoplastics inventories thus far this year,” the ACC said. “This follows after a stretch of drawing down inventories | http://www.icis.com/Articles/2012/03/30/9546634/acc-upbeat-on-us-data-warns-high-gasoline-prices-may-hurt-demand.html | CC-MAIN-2013-48 | refinedweb | 200 | 57.87 |
JSON Support Axis2C
ProjectProgress
JSON support for Axis2/C project progress is outlined under this page. The project deliveribles, important issues and final solutions stated on monthly/weekly basic.
Duration : 1 st of May – 1 st of June
- The initial design of the project was finalized and start the implementation of the JSON object model. The design of the Object model was based on JSON-C implementation which is avalable at the JSON home page.
- Since the project is based on Axis2/C, the studies on Axis 2/C was also required.
- The object model was tested and as I proposed in the project proposal a StAX JSON parser should use to generate StAX events when we are parsing the JSON string. A preliminary studies on StAX was conducted during this period.
- When designing the StAX parser, the axiom StAX wrappers (axiom_xml_reader.h, axiom_xml_writer.h) should be considered and the implementation of the StAX parser should be scnc. with the wrappers.
- A simple JSON StAX parser was implemented based on JSON object model.
Duration : 2 nd of June – 15 th of June
- In the case of the implemention of the StAX parser there was an issue on the integration it with Axis2/C. This problem was discussed on the Axis2/C-dev mailing list and we finally decided to not to have StAX parser. Other than the parser simply converte the JSON to XML (reader) and XML to JSON(writer).
- The JSON to XML conversion can be done using two different conventions; Badgerfish and Mapped conventions. Therefore, there should be a way to switch between those parsers based on convention. This issue was also discussed on the mailing list and finalize the following design.
JSON_StAX_PARSER
- - READER
- - Badgerfish Reader - Mapped Reader
- - Badgerfish Writer - Mapped Writer
The StAX parser has reader and writer which are having two implementations of each conventions. So, when we are creating the StAX parser, the convention should be initialized.
Duration : 16 th of June – 6 th of July
- The implementation of the Badgerfish reader for parsing JSON string was implemented. In this case even if we used the conversion of JSON to XML, we can’t do inline parsing from JSON to XML because of the limitations such as namespace and attrinutes. (i.e. attributes/namespace are coming at the end of the JSON element.). Therefore we have to parser the whole JSON string before converting to XML.
- In the Badgerfish convention the XML mixed contenet was not supported. So the JSON Badgerfish reader should have a proper way to represent the mixed content.
- The solution that I found was as follows.
<alice>bob
<charlie>david</charlie>edgar
</alice>
The relevant JSON string is as follows
{ "alice": [ { "$": "bob" },
- { "charlie": { "$": "david" } }, { "$": "edgar" }
- ]
}
- This issue was discussed on the mailing list and finalized the above implementation. The current version of the JSON reader is capable of supporting this convention.
Milestone 1 : Mid Evaluation Dead Line
<code up-loading> | https://wiki.apache.org/ws/GSOC2007/JSON_Support_Axis2C/ProjectProgress?action=diff | CC-MAIN-2016-50 | refinedweb | 489 | 55.03 |
e-mail. Note that read-only fields are not available in form.cleaned_data (and setting a value in a custom clean() method won't have any effect) because these fields are displayed as text rather than as input elements, and thus are not posted back to the server.
Extending the above">E-mail. E-mail address.
- {{ field.label_tag }}
- The field's label wrapped in the appropriate HTML <label> tag, e.g. <label for="id_email">E-mail address</label>
- {{ if the form field is a hidden field and False otherwise. It's not particularly useful as a template variable, but could be useful in conditional tests such as:
{% if field.is_hidden %} {# Do something special #} {% endif %}. | https://docs.djangoproject.com/en/1.3/topics/forms/ | CC-MAIN-2014-10 | refinedweb | 114 | 57.27 |
Defining functions in python
Posted February 27, 2013 at 02:49 PM | categories: python | tags: | View Comments
Updated March 06, 2013 at 06:28 PM
Compare what's here to the Matlab implementation.
We often need to make functions in our codes to do things.
def f(x): "return the inverse square of x" return 1.0 / x**2 print f(3) print f([4,5])
... ... >>> 0.111111111111 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 3, in f TypeError: unsupported operand type(s) for ** or pow(): 'list' and 'int'
Note that functions are not automatically vectorized. That is why we see the error above. There are a few ways to achieve that. One is to “cast” the input variables to objects that support vectorized operations, such as numpy.array objects.
import numpy as np def f(x): "return the inverse square of x" x = np.array(x) return 1.0 / x**2 print f(3) print f([4,5])
>>> ... ... ... ... >>> 0.111111111111 [ 0.0625 0.04 ]
It is possible to have more than one variable.
import numpy as np def func(x, y): "return product of x and y" return x * y print func(2, 3) print func(np.array([2, 3]), np.array([3, 4]))
6 [ 6 12]
You can define “lambda” functions, which are also known as inline or anonymous functions. The syntax is
lambda var:f(var). I think these are hard to read and discourage their use. Here is a typical usage where you have to define a simple function that is passed to another function, e.g. scipy.integrate.quad to perform an integral.
from scipy.integrate import quad print quad(lambda x:x**3, 0 ,2)
(4.0, 4.440892098500626e-14)
It is possible to nest functions inside of functions like this.
def wrapper(x): a = 4 def func(x, a): return a * x return func(x, a) print wrapper(4)
16
An alternative approach is to “wrap” a function, say to fix a parameter. You might do this so you can integrate the wrapped function, which depends on only a single variable, whereas the original function depends on two variables.
def func(x, a): return a * x def wrapper(x): a = 4 return func(x, a) print wrapper(4)
16
Last example, defining a function for an ode
from scipy.integrate import odeint import numpy as np import matplotlib.pyplot as plt k = 2.2 def myode(t,y): "ode defining exponential growth" return k * t y0 = 3 tspan = np.linspace(0,1) y = odeint(myode, y0, tspan) plt.plot(tspan, y) plt.xlabel('Time') plt.ylabel('y') plt.savefig('images/funcs-ode.png')
Copyright (C) 2013 by John Kitchin. See the License for information about copying. | http://kitchingroup.cheme.cmu.edu/blog/2013/02/27/Defining-functions-in-python/ | CC-MAIN-2019-26 | refinedweb | 460 | 67.15 |
Name: ____________________________________________________ Alpha: _____________________
Describe help received: _________________________________________________________________
#include <iostream> #include <fstream> using namespace std; int main() { ifstream fin("hwdata.txt"); int n; fin >> n; double **D = new double*[n]; for(int i = 0; i < n; i++) D[i] = new double[2]; // index 0 is x-coord, index 1 is y-coord for(int i = 0; i < n; i++) fin >> D[i][0] >> D[i][1]; double* w = new double[2]; cin >> w[0] >> w[1]; int k = search(D,n,w); if (k == n) cout << "No point found!" << endl; else cout << "Point: (" << D[k][0] << ',' << D[k][1] << ')' << endl; }
search(D,n,w)would return index
kprovided that the point described by array element
D[k]was within distance .01 of the point defined by
w. Note: We mean "Euclidean distance" here, i.e. the distance between points (a,b) and (c,d) is sqrt((a - c)^2 + (b - d)^2).
ints from the user and prints them into order so that all the odd numbers come first (in increasing size) followed by all the even numbers (in increasing size). For example:
~/$ ./hw 18 2 7 14 29 3 5 8 16 11 3 5 7 11 29 2 8 14 16 18It is expected that you use selection sort (as described in the notes), and there is a "good" way to do this, and an "ugly hack" way to do it. Try to do it the good way! | https://www.usna.edu/Users/cs/wcbrown/courses/F16IC210/lec/l28/hw.html | CC-MAIN-2018-09 | refinedweb | 240 | 73.71 |
Downloading and processing files and images¶
Scrapy provides reusable item pipelines for downloading files attached to a particular item (for example, when you scrape products and also want to download their images locally). These pipelines share a bit of functionality and structure (we refer to them as media pipelines), but typically you’ll either use the Files Pipeline or the Images Pipeline.
Both pipelines implement these features:
- Avoid re-downloading media that was downloaded recently
- Specifying where to store the media (filesystem directory, Amazon S3 bucket)
The Images Pipeline has a few extra functions for processing images:
- Convert all downloaded images to a common format (JPG) and mode (RGB)
- Thumbnail generation
- Check images width/height to make sure they meet a minimum constraint
The pipelines also keep an internal queue of those media URLs which are currently being scheduled for download, and connect those responses that arrive containing the same media to that queue. This avoids downloading the same media more than once when it’s shared by several items.
Using the Files Pipeline¶
The typical workflow, when using the
FilesPipeline goes like
this:
- In a Spider, you scrape an item and put the URLs of the desired into a
file_urlsfield.
- The item is returned from the spider and goes to the item pipeline.
- When the item reaches the
FilesPipeline, the URLs in the
file_urlsfield are scheduled for download using the standard Scrapy scheduler and downloader (which means the scheduler and downloader middlewares are reused), but with a higher priority, processing them before other pages are scraped. The item remains “locked” at that particular pipeline stage until the files have finish downloading (or fail for some reason).
- When the files are downloaded, another field (
files) will be populated with the results. This field will contain a list of dicts with information about the downloaded files, such as the downloaded path, the original scraped url (taken from the
file_urlsfield) , and the file checksum. The files in the list of the
filesfield will retain the same order of the original
file_urlsfield. If some file failed downloading, an error will be logged and the file won’t be present in the
filesfield.
Using the Images Pipeline¶
Using the
ImagesPipeline is a lot like using the
FilesPipeline,
except the default field names used are different: you use
image_urls for
the image URLs of an item and it will populate an
images field for the information
about the downloaded images.
The advantage of using the
ImagesPipeline for image files is that you
can configure some extra functions like generating thumbnails and filtering
the images based on their size.
The Images Pipeline uses Pillow for thumbnailing and normalizing images to JPEG/RGB format, so you need to install this library in order to use it. Python Imaging Library (PIL) should also work in most cases, but it is known to cause troubles in some setups, so we recommend to use Pillow instead of PIL.
Enabling your Media Pipeline¶
To enable your media pipeline you must first add it to your project
ITEM_PIPELINES setting.
For Images Pipeline, use:
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
For Files Pipeline, use:
ITEM_PIPELINES = {'scrapy.pipelines.files.FilesPipeline': 1}
Note
You can also use both the Files and Images Pipeline at the same time.
Then, configure the target storage setting to a valid value that will be used
for storing the downloaded images. Otherwise the pipeline will remain disabled,
even if you include it in the
ITEM_PIPELINES setting.
For the Files Pipeline, set the
FILES_STORE setting:
FILES_STORE = '/path/to/valid/dir'
For the Images Pipeline, set the
IMAGES_STORE setting:
IMAGES_STORE = '/path/to/valid/dir'
Supported Storage¶
File system is currently the only officially supported storage, but there is also support for storing files in Amazon S3.
File system storage¶
The files are stored using a SHA1 hash of their URLs for the file names.
For example, the following image URL:
Whose SHA1 hash is:
3afec3b4765f8f0a07b78f98c07b83f013567a0a
Will be downloaded and stored in the following file:
<IMAGES_STORE>/full/3afec3b4765f8f0a07b78f98c07b83f013567a0a.jpg
Where:
<IMAGES_STORE>is the directory defined in
IMAGES_STOREsetting for the Images Pipeline.
fullis a sub-directory to separate full images from thumbnails (if used). For more info see Thumbnail generation for images.
Amazon S3 storage¶
FILES_STORE and
IMAGES_STORE can represent an Amazon S3
bucket. Scrapy will automatically upload the files to the bucket.
For example, this is a valid
IMAGES_STORE value:
IMAGES_STORE = 's3://bucket/images'
You can modify the Access Control List (ACL) policy used for the stored files,
which is defined by the
FILES_STORE_S3_ACL and
IMAGES_STORE_S3_ACL settings. By default, the ACL is set to
private. To make the files publicly available use the
public-read
policy:
IMAGES_STORE_S3_ACL = 'public-read'
For more information, see canned ACLs in the Amazon S3 Developer Guide.
Usage example¶
In order to use a media pipeline first, enable it.
Then, if a spider returns a dict with the URLs key (
file_urls or
image_urls, for the Files or Images Pipeline respectively), the pipeline will
put the results under respective key (
files or
images).
If you prefer to use
Item, then define a custom item with the
necessary fields, like in this example for Images Pipeline:
import scrapy class MyItem(scrapy.Item): # ... other item fields ... image_urls = scrapy.Field() images = scrapy.Field()
If you want to use another field name for the URLs key or for the results key, it is also possible to override it.
For the Files Pipeline, set
FILES_URLS_FIELD and/or
FILES_RESULT_FIELD settings:
FILES_URLS_FIELD = 'field_name_for_your_files_urls' FILES_RESULT_FIELD = 'field_name_for_your_processed_files'
For the Images Pipeline, set
IMAGES_URLS_FIELD and/or
IMAGES_RESULT_FIELD settings:
IMAGES_URLS_FIELD = 'field_name_for_your_images_urls' IMAGES_RESULT_FIELD = 'field_name_for_your_processed_images'
If you need something more complex and want to override the custom pipeline behaviour, see Extending the Media Pipelines.
If you have multiple image pipelines inheriting from ImagePipeline and you want to have different settings in different pipelines you can set setting keys preceded with uppercase name of your pipeline class. E.g. if your pipeline is called MyPipeline and you want to have custom IMAGES_URLS_FIELD you define setting MYPIPELINE_IMAGES_URLS_FIELD and your custom settings will be used.
Additional features¶
File expiration¶
The Image Pipeline avoids downloading files that were downloaded recently. To
adjust this retention delay use the
FILES_EXPIRES setting (or
IMAGES_EXPIRES, in case of Images Pipeline), which
specifies the delay in number of days:
# 120 days of delay for files expiration FILES_EXPIRES = 120 # 30 days of delay for images expiration IMAGES_EXPIRES = 30
The default value for both settings is 90 days.
If you have pipeline that subclasses FilesPipeline and you’d like to have different setting for it you can set setting keys preceded by uppercase class name. E.g. given pipeline class called MyPipeline you can set setting key:
MYPIPELINE_FILES_EXPIRES = 180
and pipeline class MyPipeline will have expiration time set to 180.
Thumbnail generation for images¶
The Images Pipeline can automatically create thumbnails of the downloaded images.
In order use this feature, you must set
IMAGES_THUMBS to a dictionary
where the keys are the thumbnail names and the values are their dimensions.
For example:
IMAGES_THUMBS = { 'small': (50, 50), 'big': (270, 270), }
When you use this feature, the Images Pipeline will create thumbnails of the each specified size with this format:
<IMAGES_STORE>/thumbs/<size_name>/<image_id>.jpg
Where:
<size_name>is the one specified in the
IMAGES_THUMBSdictionary keys (
small,
big, etc)
<image_id>is the SHA1 hash of the image url
Example of image files stored using
small and
big thumbnail names:
<IMAGES_STORE>/full/63bbfea82b8880ed33cdb762aa11fab722a90a24.jpg <IMAGES_STORE>/thumbs/small/63bbfea82b8880ed33cdb762aa11fab722a90a24.jpg <IMAGES_STORE>/thumbs/big/63bbfea82b8880ed33cdb762aa11fab722a90a24.jpg
The first one is the full image, as downloaded from the site.
Filtering out small images¶
When using the Images Pipeline, you can drop images which are too small, by
specifying the minimum allowed size in the
IMAGES_MIN_HEIGHT and
IMAGES_MIN_WIDTH settings.
For example:
IMAGES_MIN_HEIGHT = 110 IMAGES_MIN_WIDTH = 110
Note
The size constraints don’t affect thumbnail generation at all.
It is possible to set just one size constraint or both. When setting both of them, only images that satisfy both minimum sizes will be saved. For the above example, images of sizes (105 x 105) or (105 x 200) or (200 x 105) will all be dropped because at least one dimension is shorter than the constraint.
By default, there are no size constraints, so all images are processed.
Extending the Media Pipelines¶
See here the methods that you can override in your custom Files Pipeline:
- class
scrapy.pipelines.files.
FilesPipeline¶
get_media_requests(item, info)¶
As seen on the workflow, the pipeline will get the URLs of the images to download from the item. In order to do this, you can override the
get_media_requests()method and return a Request for each file URL:
def get_media_requests(self, item, info): for file_url in item['file_urls']: yield scrapy.Request(file_url)
Those requests will be processed by the pipeline and, when they have finished downloading, the results will be sent to the
item_completed()method, as a list of 2-element tuples. Each tuple will contain
(success, file_info_or_error)where:
successis a boolean which is
Trueif the image was downloaded successfully or
Falseif it failed for some reason
file_info_or_erroris a dict containing the following keys (if success is
True) or a Twisted Failure if there was a problem.
url- the url where the file was downloaded from. This is the url of the request returned from the
get_media_requests()method.
path- the path (relative to
FILES_STORE) where the file was stored
checksum- a MD5 hash of the image contents
The list of tuples received by
item_completed()is guaranteed to retain the same order of the requests returned from the
get_media_requests()method.
Here’s a typical value of the
resultsargument:
[(True, {'checksum': '2b00042f7481c7b056c4b410d28f33cf', 'path': 'full/0a79c461a4062ac383dc4fade7bc09f1384a3910.jpg', 'url': ''}), (False, Failure(...))]
By default the
get_media_requests()method returns
Nonewhich means there are no files to download for the item.
item_completed(results, items, info)¶
The
FilesPipeline.item_completed()method called when all file requests for a single item have completed (either finished downloading, or failed for some reason).
The
item_completed()method must return the output that will be sent to subsequent item pipeline stages, so you must return (or drop) the item, as you would in any pipeline.
Here is an example of the
item_completed()method where we store the downloaded file paths (passed in results) in the
file_pathsitem field, and we drop the item if it doesn’t contain any files:
from scrapy.exceptions import DropItem def item_completed(self, results, item, info): file_paths = [x['path'] for ok, x in results if ok] if not file_paths: raise DropItem("Item contains no files") item['file_paths'] = file_paths return item
By default, the
item_completed()method returns the item.
See here the methods that you can override in your custom Images Pipeline:
- class
scrapy.pipelines.images.
ImagesPipeline¶
- The
ImagesPipelineis an extension of the
FilesPipeline, customizing the field names and adding custom behavior for images.
get_media_requests(item, info)¶
Works the same way as
FilesPipeline.get_media_requests()method, but using a different field name for image urls.
Must return a Request for each image URL.
item_completed(results, items, info)¶
The
ImagesPipeline.item_completed()method is called when all image requests for a single item have completed (either finished downloading, or failed for some reason).
Works the same way as
FilesPipeline.item_completed()method, but using a different field names for storing image downloading results.
By default, the
item_completed()method returns the item.
Custom Images pipeline example¶
Here is a full example of the Images Pipeline whose methods are examplified above:
import scrapy from scrapy.pipelines.images import ImagesPipeline from scrapy.exceptions import DropItem class MyImagesPipeline(ImagesPipeline): def get_media_requests(self, item, info): for image_url in item['image_urls']: yield scrapy.Request(image_url) def item_completed(self, results, item, info): image_paths = [x['path'] for ok, x in results if ok] if not image_paths: raise DropItem("Item contains no images") item['image_paths'] = image_paths return item | https://doc.scrapy.org/en/1.1/topics/media-pipeline.html?highlight=filespipeline | CC-MAIN-2018-43 | refinedweb | 1,943 | 50.06 |
In the past few months, and in particular in the past two weeks, I’ve gotten a number of people asking me the question: Is Rust a functional programming language? This makes sense: I’m a big advocate of functional programming, I work at a company with FP in its name, my primary programming language is Haskell, and yet I use and enjoy Rust. So is Rust consistent with everything else in my FP-centric world, or is it my dirty vice to get away from it all?
Learn more about Rust at FP Complete
To give an executive summary to people who want a headline to run off with:
- Rust is not a functional programming language, it’s imperative
- I personally don’t think defining languages as “functional” makes a lot of sense, I prefer talking about specific code using a functional style
- Rust does adhere to many of the tenets of functional programming
- In many cases, you can easily, naturally, and idiomatically write Rust in a functional style
Alright, let’s expound on those points.
What is functional programming?
I’ve answered this question in at least one talk in the past, but to my recollection I haven’t done so in blog post form yet. I don’t believe there is any universally agreed upon definition of what makes a language “functional.” To demonstrate:
- Is Javascript functional? Many people successfully leverage higher-order functions, and functions are first class values. Also, by this metric, even C is a functional programming language (though it lacks closures).
- Haskell is a purely functional programming language, since all functions are pure. Instead of allowing effects to occur in functions, we create actions which, when run, perform these effects. And then we use the type system to distinguish these effects from pure values. Is that the bar for functional programming? If so, many presumably functional languages like OCaml wouldn’t meet the bar, so that’s not great.
- I’ve seen people call XSLT a functional programming language since it’s declarative. But most users of functional programmers would be surprised to use a functional programming language where it’s difficult and/or impossible to create user-defined functions.
The definition is quite malleable. And frankly, it doesn’t help anyone to harp on these definitions. Let me show you why.
Imperative Haskell
I wouldn’t be surprised if there is 100% consensus that Haskell is a functional programming language, by just about any definition of the term. So please observe this beauty of functional programming:
import Data.IORef main :: IO () main = do putStrLn "I'm going to calculate a sum, hang on a sec" totalRef <- newIORef (0 :: Int) let loop i | i > 100 = pure () | otherwise = do oldTotal <- readIORef totalRef let newTotal = oldTotal + i writeIORef totalRef $! newTotal loop $! i + 1 loop 1 total <- readIORef totalRef putStrLn $ "The total is " ++ show total
FP at its finest, am I right? We’re using types like
IORef and
IO to ensure that this code is, in fact, purely functional. However, despite that, it has a distinctly imperative flavor. I’d argue that the end result is code that is worse than idiomatic code in an imperative language, e.g.:
print("I'm going to calculate a sum, hang on a sec") total = 0 for i in range(1, 101): total += i print("The total is " + str(total))
Or in Rust:
fn main() { println!("I'm going to calculate a sum, hang on a sec"); let mut total = 0; for i in 1 .. 101 { total += i; } println!("The total is {}", total); }
Obviously that Haskell code is highly non-idiomatic. But you’d be hard pressed to convince me that, because that code is written in Haskell, it magically gains the benefits of functional programming. It’s an identical algorithm to the imperative Python and Rust!
Idiomatic Haskell
Alright, I’m beating up on Haskell, sorry about that. Let’s show some idiomatic, functional, beautiful Haskell:
main = print $ foldl (+) 0 [1..100]
(Side note: in real code, please use
foldl' instead of
foldl. I’m using the bad function to avoid an extra import in these examples.)
This is using beautiful functional concepts like folds. This program leverages laziness of lists to avoid using up too much memory, and introduces no mutability, making it easier to understand. And yes, there’s a
sum function as well, but I wanted to be explicit about the folding.
But wait a second, a wild Rust appears!
fn main() { println!("{}", (1..101).fold(0, |x, y| x + y)); }
Huh, this Rust version has immutability, higher-order functions, and folds. We get the necessary benefits of laziness with iterators. Can we really argue that this Rust solution isn’t functional?
Functional style
With that roundabout introduction, I can now present my thesis on functional programming:
- There are certain principles which makes code more functional (detailed in a bit)
- Some languages make it easier or harder to adhere to those principles
- Generally speaking, languages which make it easier to adhere to functional principles, and harder to violate those principles, are “more functional” languages
- However, it’s almost always possible to override the language default for a specific bit of code and go more or less functional
If you still find the term “functional programming language” useful, don’t let me stop you. But for the rest of this post, I’m going to switch the original question to a pair of questions I feel more comfortable asking:
- What are functional programming principles?
- Does Rust make it easy or hard to adhere to them?
Immutable over mutable
Mutability is a necessary component of software development. At the lowest level of software, machine code is inherently mutable (mutating memory and register values). We layer abstractions of immutability on top of that, such as the fact that in C you write
x = y + z instead of something like (psuedo-assembly):
mov R1, $y mov R2, $z add R3, R1, R2 mov $x, R3
Higher level languages, including Java, Python, and Haskell move immutability higher up the chain, including immutable strings. Haskell goes much further: all variables are immutable, and you have explicit wrappers than add in mutability (
IORef,
MVar,
TVar, etc).
So how about Rust? It’s not as draconian as Haskell, but Rust programs certainly make use of functional patterns that take advantage of immutability.
Immutable by default
Remember this code?
fn main() { let mut total = 0; for i in 1 .. 101 { total += i; } }
We had to explicitly say
mut total to indicate that we wanted to be able to mutate the value. If you leave that off, you’ll get a compile-time error.
Move semantics
Let’s look at a different version of the code above that looks like it violates mutability by using a move:
fn get_sum(mut total: u32, mut i: u32) -> u32 { if i > 100 { return total; } total += i; i += 1; get_sum(total, i) } fn main() { let total = 0; let total = get_sum(total, 1); println!("{}", total); }
This looks like a hot mutable mess! We created an immutable
total variable, but then we pass it to
get_sum which mutates it, and finally update
total. Don’t let my bad code fool you though. In reality, the original immutable
total value gets moved out of
main into the
get_sum function. In my opinion, this adheres to functional principles: the value is fully immutable in
main, and then disappears from
main, so that you cannot be tricked into thinking your value has remained the same. We then grab a new
total value from the result of
get_sum.
Inside
get_sum, we are in fact fully mutable. But this is similar to Haskell’s
ST type, which allows for local mutation. This is the same concept of layering immutability on top of mutability I mentioned above. We get the primary benefit we want from immutable-by-default: mutable code is explicitly called out, so you know where to look for bugs.
Verdict: Rust adheres to the functional principle of immutability.
Recursion
Rustaceans are probably cringing at that last example I just provided. Ignoring integer overflow for a bit, what happens when I change the code to add up to a really large number?
fn get_sum(mut total: u32, mut i: u32) -> u32 { if i > 10000000 { return total; } total = total.wrapping_add(i); i += 1; get_sum(total, i) } fn main() { let total = 0; let total = get_sum(total, 1); println!("{}", total); }
I get the output:
thread 'main' has overflowed its stack fatal runtime error: stack overflow Abort trap: 6
Currently, Rust provides no means of tail-call optimization (TCO). This is quite intentional, as I was helpfully taught by the Rust community a year or two ago. Automatic TCO is brittle and has the potential to be broken by moving code around. This would lead to silently broken performance expectations and potential runtime failures. I buy the argument, and look forward to something like the
become keyword being added for explicit TCO.
But this gets at a root difference with functional programming. FP encourages usage of recursion over iteration. Rust does not encourage this.
Verdict: Rust does not adhere to the functional principle of recursion.
First class functions/higher-order functions
When we say a language has first class functions we are referring to the ability to pass functions around as values. This generally means that you can pass function-values as arguments to other functions. Functions which themselves take functions as arguments are commonly called “higher-order functions.” Rust checks both of these boxes nicely. There are three related concepts:
- Closures are functions which can refer to values in the local scope which weren’t explicitly passed as function arguments. Rust has strong support for closures. However, due to needing to track ownership of values, this can be more clunky than closures in a garbage collected language.
- Lambdas are anonymous functions. We saw one above with the
fold(0, |x, y| x + y)example. The second argument to
foldis a lambda. Not only does Rust have lambdas, but it has a nice, lightweight syntax which encourages their use.
- Partial function application allows you to partially saturate the arguments of a function to create a new function. Using closures and lambdas, you can do this in Rust, but it’s not as lightweight as the equivalent in Haskell.
Let’s demonstrate this a bit. In Haskell, you can double all of the values in a list, sum it, and print the result with:
main = print $ foldl (+) 0 $ map (* 2) [1..100]
* 2 is an operator section, and applies the binary operator
* to a single argument, creating a new anonymous function which takes a single argument. Doing the same in Rust is more involved:
fn main() { println!("{}", (1..101).map(|x| x * 2).fold(0, |x, y| x + y)); }
Also, notice how in Haskell we use
(+) as the first argument to
foldl. We’re able to explicitly refer to the operator as a function. In our Rust code, we’re using a lambda to do the same thing. This isn’t actually necessary, we can refer instead to the
add function underneath the
+ operator:
fn main() { println!("{}", (1..101).fold(0, std::ops::Add::add)); }
However, as you may have noticed, this is slightly more verbose than the original lambda-ified version.
Finally, it’s a bit unfair for me to compare Rust’s facilities here against Haskell. Haskell’s handling of lambdas, closures, first class and higher-order functions is best-in-class, since it’s explicitly the goal of the language. If you compare Rust to most other languages out there, like Javascript, Ruby, or Python, Rust compares even more favorably.
Verdict: Rust mostly does adhere to the functional principles of first class and higher-order functions. However, some aspects of value lifetimes and syntax add a bit more friction than a language like Haskell.
Pure functions
In a mathematical sense, a function returns the same value for the same input. Addition is a great example:
add(1, 2) will always return
3.
With that definition, is
random() a function? The answer is no, it isn’t. Most programming languages that have “functions” do not have mathematical functions. Instead, they have subroutines. This is because they allow effects to interleave in their functions.
Haskell is a purely functional language, in that all effects are listed in the type signature (ignoring unsafe functions that provide an escape hatch for that immutable-over-mutable layering mentioned above). However, Rust does nothing like that. You are free to mutate variables, read files, make network connections, or anything else you’d enjoy doing in a function.
You are of course free to make your functions pure in many cases, but the language will do nothing to help you ensure this.
Verdict: Rust does not adhere to the functional principle of pure functions.
Total functions
Total functions state that, for every valid input value, there is a valid, terminating output value. In contrast to a total function, a partial function may result in an infinite loop, program crash, or runtime exception for some input.
Here’s an example of a partial function:
ages = {} ages["Alice"] = 25 ages["Bob"] = 30 print(ages["Charlie"])
ages["Charlie"] throws a runtime exception. Dictionary indexing is partial because it throws an exception on input which does not appear in the dictionary. (There’s also the total
get method.)
In a turing-complete language, it’s impossible to prevent infinite loops, and therefore Rust allows for partial functions. However, the more common case of partiality are the crash and runtime exception cases. Here, Rust is even more functional than Haskell:
- Haskell allows for bottom values which will throw a runtime exception upon evaluation
- Haskell has runtime exceptions in
IO
Rust does have the
panic! mechanism, which works like a runtime exception, but isn’t intended to be used as a control flow mechanism, and instead should be used for accounting for programmer errors, not unexpected input. (A good example of this would be if an internal invariant of a data structure has somehow been violated.)
Haskellers may argue that the same applies to bottom values in Haskell: they shouldn’t be used in general. Though I’d agree with that, unfortunately in Haskell today that’s not the case. The standard prelude, for instance, still exports functions like
head and
readFile. Typeclasses like
Enum use partial methods like
succ and
pred.
And at the language level itself, Haskell will happily compile code with incomplete pattern matches, while Rust will complain. Compare this partial Haskell code:
data Color = Red | Yellow | Blue sayColor color = case color of Red -> "red" Yellow -> "yellow" main = putStrLn (sayColor Blue)
Versus the following Rust code, which will fail to compile:
enum Color { Red, Yellow, Blue, } fn say_color(color: &Color) -> &'static str { match color { Color::Red => "red", Color::Yellow => "yellow", } } fn main() { println!("{}", say_color(&Color::Blue)); }
Verdict Not only does Rust adhere to the functional principle of total functions, it does a better job than most other functional languages, excepting dependently typed languages.
Advantages of Rust over FP
Overall, Rust does in fact adhere to many FP principles, though not all. I would be remiss to leave this blog post implying that there wasn’t a good reason for those deviations. Keep in mind that Rust has the stated goal of high performance, memory safe systems programming with no garbage collection.
One example of deviation above was the lack of tail call optimization. But this fits in perfectly with Rust’s goal of predictable, high performance.
Rust could certainly add tracking of effects like Haskell has. However, this would add signifiant friction around mutation, which is far more common in Rust to avoid creation of extra garbage data. Again, this is a performance trade-off that I believe the creators of Rust have consciously made.
This lack of a runtime and garbage collection also simplifies the programming model in Rust. In Haskell, for instance, you occassionally need to make decisions about storable versus unboxed vectors, paying attention to how the garbage collector treats them differently. This is a problem that disappears in Rust.
Finally, the lack of runtime exceptions in Rust certainly simplifies code significantly in many cases. For Rust’s use cases, I’m bought into the idea that the language is simply better without the exceptions. In languages like Haskell, I think the jury is still out, and things like asynchronous exceptions complicate the story a lot, adding both advantages and disadvantages to them.
Conclusion
I’ve devoted a lot of my professional career to functional programming. I think it’s a powerful approach to programming, and helps make better code in general. When I write Rust, I tend to write in a functional style.
In my opinion, programmers in any language should be familiar with functional style, and try to use it in their language-of-choice whenever possible. Some languages, like Rust and even Javascript, make this much easier. Other languages, like Java and C#, are actively adopting these approaches. Others, like C, are not moving in that direction, and so FP will be much harder to achieve there.
At the risk of sounding biased, I believe the best way to learn to adopt these functional programming principles is to learn Haskell. As long as you’re using a language where an imperative or object oriented approach feels natural, you’ll tend to gravitate back to that way of thinking. Haskell kicks out the training wheels of for loops and the like.
FP Complete provides a Haskell syllabus online for those interested in learning more. We also provide commercial Haskell, Rust, and DevOps training.
If you’re interested in our professional consulting services, check out our consulting page, and learn more about our Rust offerings. | https://www.fpcomplete.com/blog/2018/10/is-rust-functional | CC-MAIN-2018-51 | refinedweb | 2,963 | 62.78 |
You need to learn Angular 2
Reading Time: 5 minutes
TLDR: Even if you don’t ever plan on using Angular 2 in a production webapp, you should learn it and here’s why.
It’s no secret that with the rate of usage and adoption of Angular 1.x is incredible.
The ease, accessibility, and expressiveness in web development today is incredible. With the increasing size of web developers (at least in the US), scores of frameworks are being constantly released (sometimes it feels like daily). It can be difficult to pick the right web development framework for an app.
The benefits of Angular 1.x are smatterd across the Internet. Although they are out of scope for this post (and honestly, a single Google search will answer better than I could), we’re going to focus on why Angular 2 matters and what it means for you and your team.
Angular 2 is not just another framework
Component-based web development is pretty much the future of web development. If you’re unfamiliar with the component development pattern, it basically emphasizes the separation of concerns throughout a given system (Wikipedia). For all the benefits of using Angular 1.x, it requires the entire stack to be written using Angular. Web components, on the other hand emphasizes the separation of concerns and allows segmentation within an app to be written independently of each other. If you’re familiar with the module-loading mechanism in Angular 1.x, this should come as no surprise. Angular 2 takes this a step further and requires nothing else to be written using Angular.
Angular 2 is more important than you think
Angular 1.x took great ideas from other frameworks and wrapped the process into a pleasant development environment. It does a great job of abstracting a lot of difficult parts of web development so allow us to focus on the functionality of our applications. Rather than dealing with module-loading orders, finding and injecting dependencies, and made building custom components incredibly easy, we could just focus on business logic. These benefits are no longer a convenient feature, but almost a requirement out of any thick-client web framework. With transpilers (such as BabelJS), we’re taken out of the ivory tower and can (and should) start using the pattern today.
Not only does Angular 2 keep this pattern, it makes it a requirement to core development. Take the very first example in the (official current) Angular 2:
import {bootstrap, Component} from 'angular2'; @Component({ selector: 'my-app', template: '<h1>My first Angular 2 app</h1>' }) class AppComponent { // application logic goes here } bootstrap(AppComponent);
There it is, a simple component completely isolated from every other part of our application. If we want to build and use another component, we simple do. There’s no ceremony, no major refactor to support the new functionality. It just works. This idea is fundamental to using the web component pattern and it is here to stay. Angular 2 makes learning and using the pattern easy.
Another really cool and important piece to notice from the example is that the business logic of our application is just JavaScript. There’s no magic in writing JavaScript objects (well… that’s exposed to us, the developer). It is just JavaScript. This makes things really easy to test, to write, to share and expose. Seriously easy.
Sharing code (aka letting someone else do the work)
If you’ve ever built any software worth it’s keyboard salt, you’ve likely used other libraries written by someone else. We’ve already discussed how Angular 1.x abstracts away complexities in software, component-based development not only allows us to write our pieces independently, it also allows us to use other components written by someone else in another framework (be it Angular or not). This is the real saucy promise. Find yourself needing an AutoSuggestion field or a credit card checkout form? Likely someone else needs it too and has written it.
The best software I write these days is the software I don’t write.
Polymer, WebComponents, React , and Angular 2 take this concept seriously (although in different ways). In our opinion, Angular 2 takes the best ideas of these other framework contenders and wraps them up the right way. We diver deeper into these concepts in ng-book 2.
Comparing Angular 2
TypeScript
Let’s get this out of the way first: Angular 2 does not require TypeScript.
So… while trying not to start a flamewar here, but types are awesome. Typescript, the markup language sits atop JavaScript that needs to be ‘compiled’ to JavaScript (browsers don’t yet know about TypeScript). A logical question to ask is why use TypeScript in the first place? How does it make sense to compile a language that can be executed without the pains of compilation?
The answer: error checking, documentation, and share-ability. Just as tests allow us to be confident that our app is running the way that we expect, types force us to only call our functions with expected types. For instance, take the following function:
// ES6 syntax let computeTotal = (a, b) => a + (a * b) // ES5 syntax function computeTotal (a, b) { return a + (a * b); }
Despite the terrible naming, it’s pretty obvious what this function does, right? We pass Numbers into the function and get out another number. Later in our app we realize it’s important to show significant bits of a number to a certain amount of precision (for instance in cents). Taking our example a bit further, let’s use our
computeTotal function to compute the total our customers would pay including sales tax:
// ES6 syntax const tax = 0.075; // Sales tax here in SF, California let preTaxTotal = 10 + 32; // Two items in a shopping cart const total = add(preTaxTotal, tax); // $45.15
Great, now our functionality is done, right? What happens if we don’t pass a number into the function, for instance a JSON object from a backend doesn’t give us a raw number, but instead returns a string — happens more often than I’d like to admit). When we call our add function, it’s not going to do anything different than what we told it to do and it’ll look kinda funny:
// ... let preTaxTotal = "10" + "32"; // Two items in a shopping cart const total = add(preTaxTotal, tax); // "103277.39999999999999"
Umm… not quite right. Typing can help us fix this problem:
// ES6 syntax let computeTotal = (a: number, b: number) => a + (a * b)
Now when we call the
computeTotal function, we can ensure that we’ll only send a number in (compiled with type–checking), so rather than showing our crazy string from above, we’ll throw an error instead. If we write a test to check on the usage, we can be sure that our
computeTotal function works as expected.
TypeScript is not a requirement of Angular 2, but it is awesome.
Seriously, learn Angular 2
Although there is so much more to Angular 2 than we roughly covered here, we strongly recommend learning it. Not only will it help you and your team be faster and write better code, it will make you a better software developer overall. Invest the time into Angular 2. Seriously. | https://blog.ng-book.com/you-need-to-learn-angular-2/ | CC-MAIN-2021-43 | refinedweb | 1,213 | 62.27 |
Details
Description
When trying to run an example CRUD application for Java EE 6 (see) on TomEE beta 2, I noticed that the logs and data for the embedded h2 datasource end up as hsqldb equivalents in [TOMEE HOME]/data/hsqldb/.
The datasource definition in web.xml is as follows:
<data-source> <name>java:app/MyApp/myDS</name> <class-name>org.h2.jdbcx.JdbcDataSource</class-name> <url>jdbc:h2:~/mydb;DB_CLOSE_DELAY=-1</url> <user>sa</user> <password>sa</password> <transactional>true</transactional> <isolation-level>TRANSACTION_READ_COMMITTED</isolation-level> <initial-pool-size>2</initial-pool-size> <max-pool-size>10</max-pool-size> <min-pool-size>5</min-pool-size> <max-statements>0</max-statements> </data-source>
So clearly it should be using h2, and the DB should be created in my home as mydb. When I remove the h2 implementation jar from WEB-INF/lib, TomEE does complain, so it does try to do something with h2 for sure. Inspecting the log reveals it really are hsqldb log lines and not h2.
What's happening here? Why is TomEE silently swapping one DB for the other?
Activity
You're welcome!
My test was rather simple in scope: creating users, listing them, trying to create users with an id that already exists (to trigger DB exception). All seemed to work perfectly.
There is one small thing though, but it's maybe a separate issue. When I shutdown TomEE, I get the following exception:
06-29 23:05:55 database: close
org.h2.message.DbException: General error: "java.lang.NoClassDefFoundError: org/h2/index/PageBtreeCursor"; SQL statement:
SELECT ID FROM INFORMATION_SCHEMA.LOBS WHERE TABLE = ? [50000-161]
at org.h2.message.DbException.convert(DbException.java:269)
at org.h2.store.LobStorage.removeAllForTable(LobStorage.java:158)
at org.h2.engine.Database.close(Database.java:1080)
at org.h2.engine.DatabaseCloser.run(DatabaseCloser.java:80)
Caused by: org.h2.jdbc.JdbcSQLException: General error: "java.lang.NoClassDefFoundError: org/h2/index/PageBtreeCursor"; SQL statement:
SELECT ID FROM INFORMATION_SCHEMA.LOBS WHERE TABLE = ? [50000-161]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
at org.h2.message.DbException.get(DbException.java:158)
at org.h2.message.DbException.convert(DbException.java:277)
at org.h2.command.Command.executeQuery(Command.java:189)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:96)
at org.h2.store.LobStorage.removeAllForTable(LobStorage.java:152)
... 2 more
Caused by: java.lang.NoClassDefFoundError: org/h2/index/PageBtreeCursor
at org.h2.index.PageBtreeIndex.find(PageBtreeIndex.java:186)
at org.h2.index.PageBtreeIndex.find(PageBtreeIndex.java:178)
at org.h2.index.BaseIndex.find(BaseIndex.java:102)
at org.h2.index.IndexCursor.find(IndexCursor.java:145)
at org.h2.table.TableFilter.next(TableFilter.java:321)
at org.h2.command.dml.Select.queryFlat(Select.java:512)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:617)
at org.h2.command.dml.Query.query(Query.java:298)
at org.h2.command.dml.Query.query(Query.java:268)
at org.h2.command.dml.Query.query(Query.java:37)
at org.h2.command.CommandContainer.query(CommandContainer.java:82)
at org.h2.command.Command.executeQuery(Command.java:185)
... 4 more
Caused by: java.lang.ClassNotFoundException: org.h2.index.PageBtreeCursor
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1711)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1556)
... 16 more
06-29 23:05:55 database: close
java.lang.NoClassDefFoundError: org/h2/store/PageStreamTrunk$Iterator
at org.h2.store.PageLog.free(PageLog.java:208)
at org.h2.store.PageStore.compact(PageStore.java:468)
at org.h2.engine.Database.closeOpenFilesAndUnlock(Database.java:1169)
at org.h2.engine.Database.close(Database.java:1119)
at org.h2.engine.DatabaseCloser.run(DatabaseCloser.java:80)
Caused by: java.lang.ClassNotFoundException: org.h2.store.PageStreamTrunk$Iterator
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1711)
at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1556)
... 5 more
My guess is that the war is unloaded before this hook of the H2 DB comes into effect, so there's probably nothing TomEE can do about this. But just wanted to let you know, just in case.
great so i'll close this issue, please reopen it if you see your tests was not complete
thank for your time and your report
I think it actually works Romain! You did it
I tried and the aforementioned application with the configuration given in the description of this ticket.
There's no more [tomee home]/hsqldb/hsqldb.log file being generated, and instead a mydb.h2.db appears in my home. The application works as expected!
I think so
I'll look at then.
Just so that I know where to concentrate my tests on, is java:/app now supposed to work? Thanks!
hmm, probably forgot to mention you can test
, sorry
Romain, just curious, is there any progress to report? If you have any questions or if I can test anything, please let me know.
you are right, was just the current behavior
i hacked a bit on it, i didnt try with JPA but at least injections work
Romain, java:/global and java:/app mapping to the same thing would be rather troublesome. It's very important that they are different namespaces.
java:/global is a namespace that's global for the entire AS. This means that if I deploy 2 individual wars to the same TomEE instance, they both see this.
java:/app is relative to a single war. This means only the war that declared this name can see it.
Put differently, if two wars both declare a java:/global/foo resource, they will conflict. But if both wars declare a java:/app/foo resource, they both will see their own version. That's why especially for embedded datasources java:/app is such an important namespace, but the others should be supported too.
See this for more details about this:
(sorry in advance if I didn't understood your question correctly and explained something obvious to you)
currently java:global, java:app, ... are treated the same way. I thin kthat's the way you think when you speak about java:app.
>Yes from a packaging point of view but notfrom a deployment one
I guess that's more difficult indeed. But what do you think, will TomEE be able to support java:/app?
Yes from a packaging point of view but notfrom a deployment one
>Persistence module are not in the ejb-jar module
Do you mean persistence.xml and a JPA persistence unit? These can be in the ejb-jar module of course.
Or do you mean the driver file? This cannot be in the ejb-jar module indeed, but the ejb-jar has access to all the class loaders of its parent EAR. So a java:module definition could theoretically be in the ejb-jar, which then loads the driver from the parent ear. The spec says the driver needs to be available on the classpath, but not restricts the driver to be available only to the same module.
Is that what you meant?
Persistence module are not in the ejb-jar module so java:module doesnt work. Then what you said work, you can inject the datasource with the java:... name. I thought too it was clear but that's clearly not.
>generally in app server java:... is not supported and the server puts the datasource where it wants...i know that's far to be perfect
For the propriety datasource files (JBoss' -ds.xml, WebLogic's -jdbc.xml) this indeed seemed to be mostly the case, just as before Java EE 6, the AS could put EJBs where it wants.
But it seems the spec is somewhat clear that for @DataSourceDefinition/web.xml/ejb-jar.xml/application.xml the java: namespace should be supported (at least the portable ones as defined for EJB).
JSR 316 in EE.5.17 gives the java:/app example, while the related JSR 250 (Common Annotations) in 2.13 uses an example with java:global. The description of the "name" field in JSR 250 is: "JNDI name by which the data source will be registered"
generally in app server java:... is not supported and the server puts the datasource where it wants...i know that's far to be perfect
>weird since at least as doesnt support it, deltaspike project talked about it and absolute path is not supported in general in application servers (and that's not in the spec)
Romain, what do you mean exactly by that? java:app/ is strictly speaking a relative path, as it's relative to the application in question. java:module/ is also relative. It seems logical that if someone would use java:module/, then the datasource would only be available/visible in the module that declared it. E.g. if it's in web.xml, then only the web module to which that web.xml belongs would see it and not any of the EJB modules in the same EAR (if an EAR is used at all).
java:global/ is an absolute path for sure. This might be problematic to support if the driver is inside the application, but might be doable if it's globally available to the AS.
weird since at least as doesnt support it, deltaspike project talked about it and absolute path is not supported in general in application servers (and that's not in the spec)
we hope it will be fixed in jee7
in tomee we add the app name before the datasource name in jndi that's why you need to add it in your persistence.xml (but that's not mandatory injecting the datasource normally)
That might be the issue. Indeed, many of the proprietary xml files for many servers have something like this. E.g. specify datasource FooDS in WebLogic's *-jdbc.xml as JNDI name, and you'll have to reference it by java:app/FooDS.
Nevertheless, for the name in @DataSourceDefinition and web.xml, it seems to be agreed that if you ask for java:app/MyApp/myDS, that you can refer to it via exactly that; java:app/MyApp/myDS. In EE.5.17 the Java EE 6 spec gives an example using exactly this scope. The other servers that I tested (JBoss AS 7.1.1, GlassFish 3.1.2, WebLogic 12c) all do follow this.
trunk should manage it more correctly
Looking forward again to testing that, will a snap shot of this be made soon? Otherwise I can try building trunk from source. Thanks again for your efforts!
trunk should manage it more correctly
in persistence.xml use <jta-data-source>jsf_ejb_jpa/myDS</jta-data-source> and in web.xml <name>myDS</name> (another note is default password is empty) and it should work.
@DataSourceDefinition or the web.xml datasource is not very well specified actually :s
in tomee we add the app name before the datasource name in jndi that's why you need to add it in your persistence.xml (but that's not mandatory injecting the datasource normally)
i'll have a quick look to see if we can do better
With beta 3 we're one step further as injection is working again, but unfortunately persistence is still being done in the hsqldb,.
I deployed the CRUD app to TomEE and persisted two users. Afterwards the [tomee home]/hsqldb/hsqldb.log contained the followed content:
/C2/SET SCHEMA PUBLIC
DISCONNECT
/C4/SET SCHEMA PUBLIC
DISCONNECT
/C5/SET SCHEMA PUBLIC
CREATE TABLE User (id INTEGER NOT NULL, name VARCHAR(255), surname VARCHAR(255), PRIMARY KEY (id))
/C3/SET SCHEMA PUBLIC
INSERT INTO USER VALUES(0,'ddddddd','ffffffff')
COMMIT
INSERT INTO USER VALUES(1,'ddddddd','gggggg')
COMMIT
So it seems it's still using the default hsqldb and not the h2 DB that I defined in web.xml. The exact TomEE download I tried is
I tried with both h2 on disk (jdbc:h2:~/mydb;DB_CLOSE_DELAY=-1) and in-memory (jdbc:h2:mem:test;DB_CLOSE_DELAY=-1). In case of the on-disk version, there is nothing at all being created in my home (no locks, logs, nothing) which seems to be indicative of h2 not even being touched.
Sorry to bother again, but any news here? Did I do something wrong, or was the wrong version of TomEE released as 1.0?
I've waited a few days, but unfortunately it seems the maven repo doesn't get updated anymore. It's stuck at 27 April.
Maybe worse, the TomEE I downloaded from the official download page at is a version seemingly from 26 April and doesn't have the JSF injections.
I'm looking forward to test again, but I can't seem to get the latest version I'm afraid.
Thanks for the quick fix! I'll try the snapshot tomorrow then and report back how it went. You being able to persist a user sounds very promising already
well for the 1.0.0 one thing we optimized was to avoid to scan multiple times (one for jsf, one for ejb, one for cdi...) but for jsf classes a part was missing (the part allowing injections).
normally i fixed it on trunk if you can build tomee from sources (mvn clean install -Dmaven.test.skip=true -pl tomee/apache-tomee -am) or wait a bit (tomorrow i guess) to get our snapshot to give a try, it should work (i persisted a user with your app)
I also tried TomEE+ from the same repo (), but here too the injection fails.
I immediately tried the 1.0.0 release, but it seems it doesn't work at all now. EJB injection in managed beans doesn't happen anymore. Specially, the following code fragment now throws a NPE at the first line of addUser():
@ViewScoped
@ManagedBean
public class IndexBacking {
private User user = new User();
@EJB
private UserDAO userDAO;
/**
- Adds a user to persistent storage
- @return String - navigation to the success page
*/
public String addUser() { userDAO.add(user); return "success?faces-redirect=true"; }
I deployed the same app to TomEE beta 2 again (and as cross-check to JBoss AS 7.1.1) and there it works. (I tried)
just added a sample:
it seems it works fine on trunk
we just released the 1.0.0 of tomee (available on repo1 of maven:) can you have a try please?
Thanks for your opinion, I appreciate it.
Item 1) is indeed a fiercely debated item. Some people want to keep database configuration external others want to be able to ship self-contained wars. In that debate, a data-source in web.xml isn't really different from META-INF/context.xml (and from JBoss' *-ds.xml, and from oracle-jdbc.xml, etc etc). It's exactly like those, but just with standardized syntax.
When writing applications that already contain their own embedded DB (derby, h2, etc) it's rather clear that their configuration should be embedded as well. However, for regular databases, it also makes sense when deploying to a small amount of internal application servers, especially when deployment descriptors are configurable with placeholders (as might happen for Java EE 7).
Thanks again for looking into this. If there's anything else I can test, let me know.
I agree but let me give you some more info (and a bit my opinion):
1) datasource connection info shouldnt be in the app so external config is fine (for me the spec is not relevant on this, thats just my opinion)
2) it is easy to wrap but that's not done and we don't want to add as many datasource type as driver
3) what you suggest will more or less be the fix i'll do
For the property: i know it is in the annotation so i guess it is in the xml (didnt check)
They should mirror each other exactly, so this should be the case. However, if it's in the properties section it is a vendor specific setting (in this case thus TomEE specific). See
That's already a bit better than a vendor specific configuration file, but since this concerns a fairly basic setting I would say it's not exactly in the spirit of the spec to require for this for the JDBC driver. It would be like vendors requiring the "data-source" in persistence.xml to be in the properties section.
Moreover we use commons dbcp and that's mandatory
Are you specifically referring to BasicDataSource ()?
Long time ago I looked at it, but if I remember correctly it should be rather trivial to wrap an instance of the DataSource class in a Driver. Additionally, for its XA functionality even Commons DBCP uses a DataSource and not a Driver (see).
For the property: i know it is in the annotation so i guess it is in the xml (didnt check)
for "why do we need a driver": because that's the way it currently works for a lot of datasource impl. Moreover we use commons dbcp and that's mandatory
>we create a pool and not just a datasource from the specified class. for this reason we need a jdbc driver.
I wonder if you really need the JDBC driver class. There is a similar discussion currently on the JBoss JIRA (their web.xml data-source/persistence.xml combo also didn't work correctly, see)
To crosspost a relevant part of my reply from there:
.
[...]
Other AS implementations, like GlassFish, do all their magic with a javax.sql.DataSource (or XA variant) instead of a Driver."
Can you try adding the property JdbcDriver=xxx please?
I'll certainly try this. To be clear, this is a property for the data-source element in web.xml?
Can you try adding the property JdbcDriver=xxx please?
we create a pool and not just a datasource from the specified class. for this reason we need a jdbc driver. But ill hack on it and find a solution. dont worry.
Thanks for looking into it.
I do wonder, how can the code explicitly support specific databases? Shouldn't you be just looking at javax.sql.DataSource? I mean, you can never possibly anticipate all possible databases in advance, can you?
I just checked. We only support derby and hsqddb (default) for datasource definition. I'll enhance it next week if i find some time. Thanks for the report
Could maybe the fact that
OPENEJB-1665 is not resolved yet be related to this?
The problem is however that META-INF/resources.xml is Tomcat/TomEE specific. The entire purpose of the data-source element in web.xml is to be portable.
It can be done providing a meta-inf/resources.xml file containing the same datasource definition than in openejb.xml.
I'm used to use it but i define my datasource in openejb.xml (or tomee.xml).
Thanks for the quick reply. I noticed that option, but the entire point of the exercise was to use/test the standard Java EE 6 data-source/@DataSourceDefinition.
Another point is i think you need to define the h2 driver (the default one is hsql in tomee that's why you get this behavior).
I see. An outright fail (like in GlassFish) or some warning that another driver is being used might be in place here.
Incidentally, supporting loading the driver from WEB-INF/lib for app scoped datasources might not be such a bad idea (JBoss AS 7.1 can do this), but if TomEE is not capable of doing this now then I guess this would be a new feature request.
Eclipse WTP project file for example app. Uses JBoss AS 7.1 runtime for Java EE 6 library dependencies.
Hi,
I'm used to use it but i define my datasource in openejb.xml (or tomee.xml).
Another point is i think you need to define the h2 driver (the default one is hsql in tomee that's why you get this behavior).
Romain
can you share your h2 url please? i use it in memory and don't see such an error | https://issues.apache.org/jira/browse/TOMEE-171?focusedCommentId=13404481&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-18 | refinedweb | 3,343 | 58.99 |
How to Think Like a Computer Scientist
- Dinah Bond
- 1 years ago
- Views:
Transcription
1 How to Think Like a Computer Scientist C Version Allen B. Downey C-Version by Thomas Scheffler Version 1.08 November 25th, 2012
2 2 effectflower Hill, Waterville, ME The GNU General Public License is available from or by writing to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA , USA. This book was typeset by the author using LaTeX and dvips, which are both free, open-source programs.
3 Contents 1 The way of the program What is a programming language? What is a program? What is debugging? Compile-time errors Run-time errors Logic errors and semantics Experimental debugging Formal and natural languages The first program Glossary Exercises Variables and types More output Values Variables Assignment Outputting variables Keywords Operators Order of operations Operators for characters
4 ii Contents 2.10 Composition Glossary Exercises Function Floating-point Constants Converting from double to int Math functions Composition Adding new functions Definitions and uses Programs with multiple functions Parameters and arguments Parameters and variables are local Functions with multiple parameters Functions with results Glossary Exercises Conditionals and recursion Conditional execution The modulus operator Alternative execution Chained conditionals Nested conditionals The return statement Recursion Infinite recursion Stack diagrams for recursive functions Glossary Exercises
5 Contents iii 5 Fruitful functions Return values Program development Composition Boolean values Boolean variables Logical operators Bool functions Returning from main() Glossary Exercises Iteration Multiple assignment Iteration The while statement Tables Two-dimensional tables Encapsulation and generalization Functions More encapsulation Local variables More generalization Glossary Exercises Arrays Increment and decrement operators Accessing elements Copying arrays for loops
6 iv Contents 7.5 Array length Random numbers Statistics Array of random numbers Passing an array to a function Counting Checking the other values A histogram A single-pass solution Random seeds Glossary Exercises Strings and things Containers for strings String variables Extracting characters from a string Length Traversal Finding a character in a string Pointers and Addresses String concatenation Assigning new values to string variables strings are not comparable Character classification Getting user input Glossary Exercises
7 Contents v 9 Structures Compound values Point objects Accessing member variables Operations on structures Structures as parameters Call by value Call by reference Rectangles Structures as return types Passing other types by reference Glossary Exercises A Coding Style 113 A.1 A short guide on style A.2 Naming conventions and capitalization rules A.3 Bracing style A.4 Layout B ASCII-Table 117
8 vi Contents
9 Chapter 1 The way of the program The. 1.1 What is a programming language? The programming language you will be learning is C, which was developed in the early 1970s by Dennis M. Ritchie at the Bell Laboratories. C is an example of a high-level language; other high-level languages you might have heard of are Pascal, C++ and Java..
10 2 The way of the program But the advantages are enormous. First, it is much easier to program in a highlevel language; by easier I mean that the program takes less time to write, it s shorter and easier to read, and it s more likely to be correct. Secondly, highlevel. source code compiler object code executor The compiler reads the source code and generates object code. You execute the program (one way or another) and the result appears on the screen. The next step is to run the program, which requires some kind of executor. The
11 1.2 What is a program? 3 role of the executor is to load the program (copy it from disk into memory) and make the computer start executing the program. Although this process may seem complicated, in most programming environments (sometimes called development environments), these steps are automated for you. Usually you will only have to write a program and press a button, which we will call statements, look different in different programming languages, but there are a few basic operations most languages can perform:. That s pretty much all there is to it. Every program you ve ever used, no matter how complicated, is made up of statements that perform these operations. Thus, one way to describe programming is the process of breaking a large, complex task up into smaller and smaller subtasks until eventually the subtasks are simple enough to be performed with one of these basic operations. 1.3.
12 4 The way of the program.
13 1.4 Formal and natural languages Experimental debugging, = 6 is a syntactically correct mathematical statement, but
14 6 The way of the program 3 = +6$ is not. Also, H 2 O is a syntactically correct chemical name, but 2 Zz.
15 1.5 The first program 7 display the words Hello, World. In C, this program looks like this: #include <stdio.h> #include <stdlib.h> /* main: generate some simple output */ int main(void) printf("hello, World.\n"); return(exit_success); /*, two. The first is an output statement, meaning that
16.
17.. statement: A part of a program that specifies an action that will be performed when the program runs. A print statement causes output to be displayed on the screen. comment: A part of a program that contains information about the program, but that has no effect when the program runs.). logical error: An error in a program that makes it do something other than what the programmer intended. debugging: The process of finding and removing any of the three kinds of errors.
18 10 The way of the program 1.7 Exercises Exercise 1.1 Computer. What is an executable? Exercise 1.2 Before you do anything else, find out how to compile and run a C program in your environment. Some environments provide sample programs similar to the example in Section 1.5. a. Type in the Hello World program, then compile and run it. b. Add a second print statement that prints a second message after the Hello World.. Something witty like, How are you? Compile and run the program again. c. Add a comment line to the program (anywhere) and recompile it. Run yourself trying to debug one program while you are accidentally executing another. Adding (and changing) print statements is a simple way to establish the connection between the program you are looking at and the output when the program runs. Exercise 1.3 It is a good idea to commit as many errors as you can think of, so that you see what error messages the compiler produces. Sometimes the compiler will tell you exactly what is wrong, and all you have to do is.
19 1.7 Exercises 11 d. Instead of main, write mian. e. Remove the closing */ from a comment. f. Replace printf with pintf. g. Delete one of the parentheses: ( or ). Add an extra one. h. Delete the semicolon after the return statement.
20 12 The way of the program
21"); /* output one line */ printf ("How are you?\n"); /* output another line */ return (EXIT_SUCCESS); \n from the first printf: int main (void) printf ("Goodbye, "); printf ("cruel world!\n");
22 14 Variables and types return (EXIT_SUCCESS);
23. One of the most powerful features of a programming language is the ability to manipulate values through the use of variables. So far the values that we have used in our statements where fixed to what was written in the statement. Now we will use a variable.).
24 */ warning.!! */
25 2.5 Outputting variables Outputting variables You can output the value of a variable using the same commands we used to output simple values. int hour, minute; char colon; hour = 11; minute = 59; colon = : ; printf ("The current time is "); printf ("%i", hour); printf ("%c", colon); printf ("%i", minute); printf ("\n");. The name of a variable only has significance for the programmer. The compiled program no longer contains a human readable reference to the variable name in your program. The printf() command is capable of outputting several variables in a single statement. To do this, we need to put placeholders in the so called format string, that indicate the position where the variable value will be put. The variables will be inserted in the order of their appearance in the statement. It is important to observe the right order and type for the variables. By using a single output statement, we can make the previous program more concise: int hour, minute; char colon; hour = 11; minute = 59; colon = : ; printf ("The current time is %i%c%i\n", hour, colon, minute); On one line, this program outputs a string, two integers and a character. Very impressive!
26 18 Variables and types and many more. Reserved keywords in the C language auto break case char const continue default do double else enum extern float for goto if inline int long register restrict return short signed sizeof static struct switch typedef union unsigned void volatile while _Bool _Complex _Imaginary The complete list of keywords is included in the C Standard, which is the official language definition adopted by the the International Organization for Standardization (ISO) on September values. In each case the name of the variable is replaced with its value before the computation is performed. Addition, subtraction and multiplication all do what you expect, but you might be surprised by division. For example, the following program:
27 2.8 Order of operations 19 int hour, minute; hour = 11; minute = 59; printf ("Number of minutes since midnight: %i\n", hour*60 + minute); printf ("Fraction of the hour that has passed: %i\n", minute/60); would generate the following output: Number of minutes since midnight: 719 Fraction of the hour that has passed: 0 The first line is what we expected, but the second line is odd. The value of the variable minute is 59, and 59 divided by 60 is ,: printf ("Percentage of the hour that has passed: "); printf ("%i\n",.
28 20 Variables and types 2.9 Operators for characters Interestingly, the same mathematical operations that work on integers also work on characters. For example, char letter; letter = a + 1; printf ("%c\n", letter); ; printf ("%i\n", number);: printf ("%i\n", 17 * 3);
29 2.11 Glossary 21 Actually, I shouldn t say at the same time, since in reality the multiplication has to happen before the output, but the point is that any expression, involving numbers, characters, and variables, can be used inside an output statement. We ve already seen one example: printf ("%i\n", hour * 60 + minute); variable: A named storage location for values. All variables have a type, which determines which values it can store. value: A letter, or number, or other thing that can be stored in a variable. type: The meaning of values. The types we have seen so far are integers (int in C) and characters (char in C). keyword: A reserved word that is used by the compiler to parse programs. Examples we have seen include int, void and char..
30 22 Variables and types precedence: The order in which operations are evaluated. composition: The ability to combine simple expressions and statements into compound statements and expressions in order to represent complex computations concisely Exercises Exercise 2.1 a. Create a new program named MyDate.c. Copy or type in something like the "Hello, World" program and make sure you can compile and run it. b. Following the example in Section 2.5, write a program that creates variables named day, month and year What type is each variable? Assign values to those variables that represent today s date. c. Print the value of each variable on a line by itself. This is an intermediate step that is useful for checking that everything is working so far. d. Modify the program so that it prints the date in standard American form: mm/dd/yyyy. e. Modify the program again so that the total output is: American format: 3/18/2009 European format: The point of this exercise is to use the output function printf to display values with different types, and to practice developing programs gradually by adding a few statements at a time. Exercise 2.2 a. Create a new program called MyTime.c. From now on, I won t remind you to start with a small, working program, but you should. b. Following the example in Section 2.7, create variables named hour, minute and second, and assign them values that are roughly the current time. Use a 24-hour clock, so that at 2pm the value of hour is 14. c. Make the program calculate and print the number of seconds since midnight. d. Make the program calculate and print the number of seconds remaining in the day. e. Make the program calculate and print the percentage of the day that has passed. f. Change the values of hour, minute and second to reflect the current time (I assume that some time has elapsed), and check to make sure that the program works correctly with different values.
31 2.12 Exercises 23 The point of this exercise is to use some of the arithmetic operations, and to start thinking about compound entities like the time of day.
32 24 Variables and types
33 = ; It is also legal to declare a variable and assign a value to it at the same time: int x = 1; char first_char = "a"; double pi = ;-point number, or both? Strictly speaking, C distinguishes the integer value 1 from the floating-point value 1.0, even though they seem to be the same number. They belong to different types, and strictly speaking, you are not allowed to make assignments between types. For example, the following is illegal: int x = 1.1;
34 26 is convenient for the programmer, but it can cause problems; for example: double y = 1 / 3; You might expect the variable y to be given the value , , as expected. All the operations we have seen addition, subtraction, multiplication, and division work on floating-point values, although you might be interested to know that the underlying mechanism is completely different. In fact, most processors have special hardware just for performing floating-point operations. 3.2 Constants In the previous section we have assigned the value to a floating point variable. An important thing to remember about variables is, that they can hold as their name implies different values at different points in your program. For example, we could assign the value to the variable pi now and assign some other value to it later on: double pi = ;... pi = ; /* probably a logical error in your program */ The second value is probably not what you intended when you first created the named storage location pi. The value for π is constant and does not change over time. Using the storage location pi to hold arbitrary other values can cause some very hard to find bugs in your program. C allows you to specify the static nature of storage locations through the use of the keyword const. It must be used in conjunction with the required type of the constant. A value will be assigned at initialisation but can never be changed again during the runtime of the program.
35 3.3 Converting from double to int 27 const double PI = ; printf ("Pi: %f\n", PI);... PI = ; /* wrong, error caught by the compiler */ It is no longer possible to change the value for PI once it has been initialised, but other than this we can use it just like a variable. In order to visually separate constants from variables we will use all uppercase letters in their names. 3.3 requires the explicit specification of the target type, set in parenthesis before the expression (Type). For example: const double PI = ; int x = (int) PI; The (int) operator casts the value of PI into an integer, so x gets the value 3. Converting to an integer always rounds down, even if the fraction part is For every type in C, there is a corresponding operator that typecasts its argument to the appropriate type. 3.
36 28 Function. const stdio.h using an include statement: #include <stdio.h> stdio.h contains information about input and output (I/O) functions available in C. Similarly, the math header file contains information about the math functions. You can include it at the beginning of your program along with stdio.h: #include <math.h> 3.
37 3.6 Adding new functions parentheses containing the keyword (void) in it s definition. The first couple of functions we are going to write also have no parameters, so the syntax looks like this: void PrintNewLine (void) printf ("\n"); This function is named PrintNewLine(). It contains only a single statement, which outputs a newline character. Notice that we start the function name with an uppercase letter. The following words of the function name are also capitalized. We will use this convention for the naming of functions consistently throughout the book. In main() we can call this new function using syntax that is similar to the way we call the built-in C commands: int main (void) printf ("First Line.\n"); PrintNewLine (); printf ("Second Line.\n"); return EXIT_SUCCESS; The output of this program is: First line. Second line. Notice the extra space between the two lines. What if we wanted more space between the lines? We could call the same function repeatedly: int main (void) printf ("First Line.\n");
38 30 Function NewLine (); NewLine (); NewLine (); printf ("Second Line.\n"); Or we could write a new function, named PrintThreeLines(), that prints three new lines: void PrintThreeLines (void) PrintNewLine (); PrintNewLine (); PrintNewLine (); int main (void) printf ("First Line.\n"); PrintThreeLines (); printf ("Second Line.\n"); return EXIT_SUCCESS; You should notice a few things about this program: You can call the same procedure repeatedly. In fact, it is quite common and useful to do so. You can have one function call another function. In this case, main() calls PrintThreeLines() and PrintThreeLines() calls PrintNewLine(). Again, this is common and useful. In PrintThreeLines(), PrintNewLine() or printf("\n")? 2. Creating a new function can make a program smaller by eliminating repetitive code. For example, a short way to print nine consecutive new lines is to call PrintThreeLines() three times. How would you print 27 new lines?
39 3.7 Definitions and uses Definitions and uses Pulling together all the code fragments from the previous section, the whole program looks like this: #include <stdio.h> #include <stdlib.h> void PrintNewLine (void) printf ("\n"); void PrintThreeLines (void) PrintNewLine (); PrintNewLine (); PrintNewLine (); int main (void) printf ("First Line.\n"); PrintThreeLines (); printf ("Second Line.\n"); return EXIT_SUCCESS; This program contains three function definitions: PrintNewLine(), PrintThreeLine(), and main(). Inside the definition of main(), there is a statement that uses or calls PrintThreeLine(). Similarly, PrintThreeLine() calls PrintNew.8 Programs with multiple functions When you look at the C source code
40 32 Function PrintThreeLines(). But while we are executing PrintThreeLines(), we get interrupted three times to go off and execute PrintNewLine(). Fortunately, C is adept at keeping track of where it is, so each time PrintNewLine() completes, the program picks up where it left off in PrintThreeLine(), and eventually gets back to main() so the program can terminate. What s the moral of this sordid tale? When you read a program, don t read from top to bottom. Instead, follow the flow of execution. 3 function definition, the parameter list indicates the type of each parameter. For example: void PrintTwice (char phil) printf("%c%c\n", phil, phil); (void) PrintTwice ( a ); return EXIT_SUCCESS;
41 3.10 Parameters and variables are local 33 EXIT_SUCCESS; stack diagram for PrintTwice() looks like this: main() argument 'B' PrintTwice() phil 'B'
42 34 Function Functions with multiple parameters The syntax for declaring and invoking functions with multiple parameters is a common source of errors. First, remember that you have to declare the type of every parameter. For example void PrintTime (int hour, int minute) printf ("%i", hour); printf (":"); printf ("%i",); Functions with results You might have noticed by now that some of the functions we are using, like the math functions, yield results. Other functions, like PrintNewLine, perform an action but don t return a value. PrintNewLine() + 7? Can we write functions that yield results, or are we stuck with things like PrintNewLine() and PrintTwice()?
43 3.13 Glossary 35 Glossary constant: A named storage location similar to a variable, that can not be changed once it has been initialised. Exercises Exercise 3.1 The point of this exercise is to practice reading code and to make sure that you understand the flow of execution through a program with multiple functions. a. What is the output of the following program? Be precise about where there are spaces and where there are newlines. HINT: Start by describing in words what Ping() and Baffle() do when they are invoked. #include <stdio.h> #include <stdlib.h> void Ping () printf (".\n");
44 36 Function void Baffle () printf ("wug"); Ping (); void Zoop () Baffle (); printf ("You wugga "); Baffle (); int main (void) printf ("No, I "); Zoop (); printf ("I "); Baffle (); return EXIT_SUCCESS; b. Draw a stack diagram that shows the state of the program the first time Ping() is invoked. Exercise 3.2 The point of this exercise is to make sure you understand how to write and invoke functions that take parameters. a. Write the first line of a function named Zool() that takes three parameters: an int and two char. b. Write a line of code that invokes Zool(), passing as arguments the value 11, the letter a, and the letter z. Exercise 3.3 The purpose of this exercise is to take code from a previous exercise and encapsulate it in a function that takes parameters. You should start with a working solution to exercise a. Write a function called PrintDateAmerican() that takes the day, month and year as parameters and that prints them in American format. b. Test your function by invoking it from main() and passing appropriate arguments. The output should look something like this (except that the date might be different): 3/29/2009 c. Once you have debugged PrintDateAmerican(), write another function called PrintDateEuropean() that prints the date in European format.
45 3.14 Exercises 37 Exercise 3.4 Many computations can be expressed concisely using the multadd operation, which takes three operands and computes a*b + c. Some processors even provide a hardware implementation of this operation for floating-point numbers. a. Create a new program called Multadd.c. b. Write a function called Multadd() that takes three doubles as parameters and that prints their multadditionization. c. Write a main() function that tests Multadd() by invoking it with a few simple parameters, like 1.0, 2.0, 3.0, and then prints the result, which should be 5.0. d. Also in main(), use Multadd() to compute the following value: sin π 4 + cos π 4 2 e. Write a function called Yikes() that takes a double as a parameter and that uses Multadd() to calculate and print xe x + 1 e x HINT: the Math function for raising e to a power is double exp(double x);. In the last part, you get a chance to write a function that invokes a function you wrote. Whenever you do that, it is a good idea to test the first function carefully before you start working on the second. Otherwise, you might find yourself debugging two functions at the same time, which can be very difficult. One of the purposes of this exercise is to practice pattern-matching: the ability to recognize a specific problem as an instance of a general category of problems.
46 38 Function
47 Chapter 4 Conditionals and recursion 4.1 Conditional execution In order to write useful programs, we almost always need the ability to check certain conditions and change the behavior of the program accordingly. Conditional statements give us this ability. The simplest form is the if statement: if (x > 0) printf ("x is positive\n"); The expression in parentheses is called the condition. If it is true, then the statements in brackets get executed. If the condition is not true, nothing happens. The condition can contain any of the comparison operators: x == y /* x equals syntax C uses is a little different from mathematical symbols like =, and..
48 40 Conditionals and recursion 4.3 Alternative execution A second form of conditional execution is alternative execution, in which there are two possibilities, and the condition determines which one gets executed. The syntax looks like: if (x%2 == 0) printf ("x is even\n"); else printf ("x is odd\n");) printf ("x is even\n");
49 4.4 Chained conditionals 41 else printf ("x is odd\n");) printf ("x is positive\n"); else if (x < 0) printf ("x is negative\n"); else printf ("x is zero\n");)
50 42 Conditionals and recursion printf ("x is zero\n"); else if (x > 0) printf ("x is positive\n"); else printf ("x is negative\n");) printf ("Positive numbers only, please.\n"); return; double result = log (x); printf ("The log of x is %f
51 4.7 Recursion 43. For example, look at the following function: void Countdown (int n) if (n == 0) printf ("Blastoff!"); else printf ("%i", n);: int main (void) Countdown (3); return EXIT_SUCCESS; The execution of Countdown() begins with n=3, and since n is not zero, it outputs the value 3, and then calls itself... The execution of Countdown() begins with n=2, and since n is not zero, it outputs the value 2, and then calls itself...
52 44 Conditionals and recursion: Blastoff! As a second example, let s look again at the functions PrintNewLine() and PrintThreeLines(). void PrintNewLine () printf ("\n"); void PrintThreeLines () PrintNewLine (); PrintNewLine (); PrintNewLine (); Although these work, they would not be much help if I wanted to output 2 newlines, or 106. A better alternative would be void PrintLines (int n) if (n > 0) printf ("\n"); Print.
53 4.8 Infinite recursion() Countdown() n 3 Countdown() n 2 Countdown() n 1 Countdown() n 0 There is one instance of main() and four instances of Countdown(), each with a different value for the parameter n. The bottom of the stack, Countdown()
54 46 Conditionals and recursion with n=0 is the base case. It does not make a recursive call, so there are no more instances of Countdown(). The instance of main() is empty because main() does not have any parameters or local variables. As an exercise, draw a stack diagram for PrintLines(), invoked with the parameter n= Exercises Exercise 4.1 This exercise reviews the flow of execution through a program with multiple methods. Read the following code and answer the questions below. #include <stdio.h> #include <stdlib.h> void Zippo (int quince, int flag); void Baffle (int output) printf ("%i\n",output); Zippo (12, -5); void Zippo (int quince, int flag) if (flag < 0) printf ("%i zoop\n", quince);
55 4.11 Exercises 47 else printf ("rattle "); Baffle (quince); printf ("boo-wa-ha-ha\n"); int main (void) Zippo (5, 13); return EXIT_SUCCESS; a. Write the number 1 next to the first statement of this program that will be executed. Be careful to distinguish things that are statements from things that are not. b. Write the number 2 next to the second statement, and so on until the end of the program. If a statement is executed more than once, it might end up with more than one number next to it. c. What is the value of the parameter blimp when baffle() gets invoked? d. What is the output of this program? Exercise 4.2 The first verse of the song 99 Bottles of Beer is: prints the entire lyrics of 99 Bottles of Beer. Your program should include a recursive method that does the hard part, but you also might want to write additional methods to separate the major functions of the program. As you are developing your code, you will probably want to test it with a small number of verses, like 3 Bottles of Beer. The purpose of this exercise is to take a problem and break it into smaller problems, and to solve the smaller problems by writing simple, easily-debugged methods. Exercise 4.3 You can use the getchar() function in C to get character input from the user through the keyboard. This function stops the execution of the program and waits for the input from the user.
56 48 Conditionals and recursion The getchar() function has the type int and does not require an argument. It returns the ASCII-Code (cf. Appendix B) of the key that has been pressed on the keyboard. a. Write a program, that asks the user to input a digit between 0 and 9. b. Test the input from the user and display an error message if the returned value is not a digit. The program should then be terminated. If the test is successful, the program should print the input value on the computer screen. Exercise 4.4 such that Fermat s Last Theorem says that there are no integers a, b, and c a n + b n = c n except in the case when n = 2. Write a function named checkfermat() that takes four integers as. You should assume that there is a function named raisetopow() that takes two integers as arguments and that raises the first argument to the power of the second. For example: int x = raisetopow (2, 3); would assign the value 8 to x, because 2 3 = 8.
57: Print
58 50 Fruitful functions)
59 5.2 Program development 51 return x; /* WRONG!! */ 1, y 1 ) and (x 2, y 2 ). By the usual definition, distance = (x 2 x 1 ) 2 + (y 2 y 1 ):
60 52 Fruitful functions double Distance (double x1, double y1, double x2, double y2) return 0.0;); printf ("%f\n" dist); I chose these values so that the horizontal distance is 3 and the vertical distance is 4; that way, the result will be 5 (the hypotenuse of a 2 x 1 and y 2 y 1. I will store those values in temporary variables named dx and dy. double Distance (double x1, double y1, double x2, double y2) double dx = x2 - x1; double dy = y2 - y1; printf ("dx is %f\n", dx); printf ("dy is %f\n", dy;; printf ("d_squared is %f\n", dsquared);
61 5.3 Composition 53 return 0.0; Again, I would compile and run the program at this stage and check the intermediate value (which should be 25.0). Finally, we can use the sqrt() function to compute and return the result.: Start with a working program and make small, incremental changes. At any point, if there is an error, you will know exactly where it is. Use temporary variables to hold intermediate values so you can output and check them.;
62 54 Fruitful functions Wrapping that all up in a function, we get: double AreaFromPoints (double xc, double yc, double xp, double yp) double radius = Distance (xc, yc, xp, yp); double result = Area (radius); return result; The temporary variables radius and area are useful for development and debugging, but once the program is working we can make it more concise by composing the function calls: double AreaFromPoints (double xc, double yc, double xp, double yp) return Area (Distance (xc, yc, xp, yp)); 5.4 Boolean values The types we have seen so far can hold very large values. There are a lot of integers in the world, and even more floating-point numbers. By comparison, the set of characters is pretty small. Well, many computing languages implement an even more fundamental type that is even smaller. It is called _Bool, and the only values in it are true and false. Unfortunately, earlier versions of the C standard did not implement boolean as a separate type, but instead used the integer values 0 and 1 to represent truth values. By convention 0 represents false and 1 represents true. Strictly speaking C interpretes any integer value different from 0 as true. This can be a source of error if you are testing a value to be true by comparing it with 1. Without thinking about it, we have been using boolean values in the last of chapter. The condition inside an if statement is a boolean expression. Also, the result of a comparison operator is a boolean value. For example: if (x == 5) /* do something*/ The operator == compares two integers and produces a boolean value. Pre C99 has no keywords for the expression of true or false. A lot of programs instead are using C preprocessor definitions anywhere a boolean expression is called for. For example, #define FALSE 0 #define TRUE 1... if (TRUE)
63 5.5 Boolean variables 55 /* will be always executed */ is a standard idiom for a loop that should run forever (or until it reaches a return or break statement). 5.5 Boolean variables Since boolean values are not supported directly in C, we can not declare variables of the type boolean. Instead, programmers typically use the short datatype in combination with preprocessor definitions to store truth values. #define FALSE 0 #define TRUE 1... short fred; fred = TRUE; short variable short evenflag = (n%2 == 0); /* true if n is even */ short positiveflag = (x > 0); /* true if x is positive */ and then use it as part of a conditional statement later if (evenflag) printf("n was even when I checked it"); A variable used in this way is called a flag, since it flags the presence or absence of some condition. 5.
64 56 Fruitful functions Logical operators often provide a way to simplify nested conditional statements. For example, how would you write the following code using a single conditional? if (x > 0) if (x < 10) printf ("x is a positive single digit.\n"); 5.7 Bool functions It is sometimes appropriate for functions to return boolean values just like any other return type. This is is especially convenient for hiding complicated tests inside functions. For example: int IsSingleDigit (int x) if (x >= 0 && x < 10) return TRUE; else return FALSE; The name of this function is IsSingleDigit(). It is common to give such test functions names that sound like yes/no questions. The return type is int, which means that again we need to follow the agreement that 0 represents false and 1 represents true. Every return statement has to follow this convention, again, we are using preprocessor definitions. The code itself is straightforward, although it is a bit longer than it needs to be. Remember that the expression x >= 0 && x < 10 is evaluated to a boolean value, so there is nothing wrong with returning it directly, and avoiding the if statement altogether: int IsSingleDigit (int x) return (x >= 0 && x < 10); In main() you can call this function in the usual ways: printf("%i\n", IsSingleDigit (2)); short bigflag =!IsSingleDigit (17);
65 5.8 Returning from main() 57 The first line outputs the value true because 2 is a single-digit number. Unfortunately, when C outputs boolean values, it does not display the words TRUE and FALSE, but rather the integers 1 and 0. The second line assigns the value true to bigflag only if 17 is not a positive single-digit number. The most common use of boolean functions is inside conditional statements if (IsSingleDigit (x)) printf("x is little\n"); else printf("x is big\n"); 5.8 Returning from main() Now that we know functions that return values, we can look more closely at the return value of the main() function. It s supposed to return an integer: int main (void) The usual return value from main() is 0, which indicates that the program succeeded at whatever it was supposed to to. If something goes wrong, it is common to return -1, or some other value that indicates what kind of error occurred. C provides two predefined constants EXIT_SUCCESS and EXIT_FAILURE int main (void) return EXIT_SUCCESS; /*program terminated successfully*/ Of course, you might wonder who this value gets returned to, since we never call main() ourselves. It turns out that when the operating system executes a program, it starts by calling main() in pretty much the same way it calls all the other functions. There are even some parameters that can be passed to main() by the system, but we are not going to deal with them for a little while, so we define main() as having no parameters: int main (void). 5.9 Glossary return type: The type of value a function returns. return value: The value provided as the result of a function call.
66 58 Fruitful functions. boolean: A value or variable that can take on one of two states, often called true and false. In C, boolean values are mainly stored in variables of type short and preprocessor statements are used to define the states. flag: A variable that records a condition or status information. comparison operator: An operator that compares two values and produces a boolean that indicates the relationship between the operands. logical operator: An operator that combines boolean values in order to test compound conditions Exercises Exercise 5. Write a function named IsTriangle() that it takes three integers as arguments, and that returns either TRUE or FALSE, depending on whether you can or cannot form a triangle from sticks with the given lengths. The point of this exercise is to use conditional statements to write a function that returns a value. Exercise 5.2 What is the output of the following program? The purpose of this exercise is to make sure you understand logical operators and the flow of execution through fruitful methods. #define TRUE 1 #define FALSE 0 short IsHoopy (int x) short hoopyflag; if (x%2 == 0)
67 5.10 Exercises 59 hoopyflag = TRUE; else hoopyflag = FALSE; return hoopyflag; short IsFrabjuous (int x) short frabjuousflag; if (x > 0) frabjuousflag = TRUE; else frabjuousflag = FALSE; return frabjuousflag; int main (void) short flag1 = IsHoopy (202); short flag2 = IsFrabjuous (202); printf ("%i\n", flag1); printf ("%i\n", flag2); if (flag1 && flag2) printf ("ping!\n"); if (flag1 flag2) printf ("pong!\n"); return EXIT_SUCCESS; Exercise 5.3 a. Create a new program called Sum.c, and type in the following two functions. int FunctionOne (int m, int n) if (m == n) return n;
68 60 Fruitful functions else return m + FunctionOne (m+1, n); int FunctionTwo (int m, int n) if (m == n) return n; else return n * FunctionTwo (m, n-1); b. Write a few lines in main() to test these functions. Invoke them a couple of times, with a few different values, and see what you get. By some combination of testing and examination of the code, figure out what these functions do, and give them more meaningful names. Add comments that describe their function abstractly. c. Add a prinf statement to the beginning of both functions so that they print their arguments each time they are invoked. This is a useful technique for debugging recursive programs, since it demonstrates the flow of execution. Exercise 5.4 Write a recursive function called Power() that takes a double x and an integer n and that returns x n. Hint: a recursive definition of this operation is Power (x, n) = x * Power (x, n-1). Also, remember that anything raised to the zeroeth power is 1. Exercise 5.5 (This exercise is based on page 44 of Ableson and Sussman s Structure and Interpretation of Computer Programs.) The following algorithm is known as Euclid s Algorithm because it appears in Euclid s Elements (Book 7, ca. 300 B.C.). It may be the oldest nontrivial algorithm. The algorithm is based on the observation that, if r is the remainder when a is divided by b, then the common divisors of a and b are the same as the common divisors of b and r. Thus we can use the equation gcd(a, b) = gcd(b, r) to successively reduce the problem of computing a GCD to the problem of computing the GCD of smaller and smaller pairs of integers. For example, gcd(36, 20) = gcd(20, 16) = gcd(16, 4) = gcd(4, 0) = 4
69 5.10 Exercises 61 implies that the GCD of 36 and 20 is 4. It can be shown that for any two starting numbers, this repeated reduction eventually produces a pair where the second number is 0. Then the GCD is the other number in the pair. Write a function called gcd that takes two integer parameters and that uses Euclid s algorithm to compute and return the greatest common divisor of the two numbers. Exercise 5.6 The distance between two points (x 1, y 1) and (x 2, y 2) is Distance = (x 2 x 1) 2 + (y 2 y 1) 2 Please write a function named Distance() that takes four doubles as parameters x1, y1, x2 and y2 and that prints the distance between the points. You should assume that there is a function named SumSquares() that calculates and returns the sum of the squares of its arguments. For example: double x = SumSquares (3.0, 4.0); would assign the value 25.0 to x. The point of this exercise is to write a new function that uses an existing one. You should write only one function: Distance(). You should not write SumSquares() or main() and you should not invoke Distance(). Exercise 5.7 The point of this exercise is to practice the syntax of fruitful functions. a. Use your existing solution to Exercise 3.4 and make sure you can still compile and run it. b. Transform Multadd() into a fruitful function, so that instead of printing a result, it returns it. c. Everywhere in the program that Multadd() gets invoked, change the invocation so that it stores the result in a variable and/or prints the result. d. Transform Yikes() in the same way.
70 62 Fruitful functions
71; printf ("%i", fred); fred = 7; printf ("%i", 5 fred.
72 64 Iteration Furthermore, in mathematics, a statement of equality is true for all time. If a = b now, then a will always equal b. In C, an assignment statement can make two variables equal, but they don t have to stay that way! int a = 5; int b = a; /* a and b are now equal */ a = 3; /*. In section 4.7 we have seen programs that use recursion to perform repetition, such as PrintLines() and Countdown(). I now want to introduce a new type of repetition, that is called iteration, and C provides several language features that make it easier to write repetetive programs. The two features we are going to look at are the while statement and the for statement. 6.3 The while statement Using a while statement, we can rewrite Countdown(): void Countdown (int n) while (n > 0) printf ("%i\n", n); n = n-1; printf ("Blastoff!\n"); You can almost read a while statement as if it were English. What this means is, While n is greater than zero, continue displaying the value of n and then reducing the value of n by 1. When you get to zero, output the word Blastoff! More formally, the flow of execution for a while statement is as follows:
73 6.3 The while statement Evaluate the condition in parentheses, yielding true or false. 2. If the condition is false, exit the while statement and continue execution at the next statement. 3. If the condition is true, execute each of the statements between the curlybrackets,) printf ("%i\n", n); if (n%2 == 0) /* n is even */ n = n / 2; else /* n is odd */ n = n*3 + 1;
74 66 Iteration) printf ("%.0f\t%f\n", x,log(x)); x = x + 1.0; The sequence \t represents a tab character. The sequence \n represents a newline character. They are so called escape sequences which are used to encode non-printable ASCII-characters. Escape causes the cursor to move on to the next line. The output of this program is:
75 6.4 Tables If these values seem odd, remember that the log() function uses base e. Since powers of two are so important in computer science, we often want to find logarithms with respect to base 2. To do that, we can use the following formula: log 2 x = log ex log e 2 Changing the output statement to printf ("%.0f\t%f\n", x, log(x) / log(2.0)); yields:) printf ("%.0f\t%.0f\n", x, log(x) / log(2.0)); x = x * 2.0; Now instead of adding something to x each time through the loop, which yields an arithmetic sequence, we multiply x by something, yielding a geometric sequence. The result is:
76 68 Iteration (that s 2 16 ). Print it out and memorize it.) printf("%i i = i + 1; printf("\n"); ", i*2); \n from the first output statement, we get all the output on a single line. The output of this program is:.7. Generalization means taking something specific, like printing multiples of 2, and making it more general, like printing the multiples of any integer. Here s a function that encapsulates the loop from the previous section and generalizes it to print multiples of n.
77 6.6 Encapsulation and generalization 69 void PrintMultiples (int n) int i = 1; while (i <= 6) printf("%i ", i*n); i = i + 1; printf("\n");: and with argument 4, the output is(). I only replaced the call of the printf() function with the call of the PrintMultiples() function. The output of this program is which is a (slightly sloppy) multiplication table. If the sloppiness bothers you, try replacing the spaces between columns with tab characters and see what you get.
78 70 Iteration 6.7 Functions In the last section I mentioned all the things functions are good for. About this time, you might be wondering what exactly those things are. Here are some of the reasons functions are useful: By giving a name to a sequence of statements, you make your program easier to read and debug. Dividing a long program into functions allows you to separate parts of the program, debug them in isolation, and then compose them into a whole. Functions facilitate both recursion and iteration. Well-designed functions are often useful for many programs. write and debug one, you can reuse it. Once
79 6.10 More generalization 71. PrintMultiples() n 1 i 3 PrintMultTable() i 1 main() Notice that the value of the parameter n in PrintMultiples() has to be the same as the value of i in PrintMultTable(). On the other hand, the value of i in PrintMultiples() goes from 1 up to 6.;
80 72 Iteration I replaced the value 6 with the parameter high. If I call PrintMultTable() with the argument 7, I get: which is fine, except that I probably want the table to be square (same number of rows and columns), which means I have to add another parameter to PrintMultiples(), to specify how many columns the table should have. Just to be annoying, I will also call this parameter high, demonstrating that different functions can have parameters with the same name (just like local variables): void PrintMultiples (int n, int high) int i = 1; while (i <= high) printf ("%i ", n*i); i = i + 1; printf ("\n"); void PrintMultTable (int high) int i = 1; while (i <= high) PrintMultiples (i, high); i = i + 1; Notice that when I added a new parameter, I had to change the first line of the function, and I also had to change the place where the function is called in PrintMultTable(). As expected, this program generates a square 7x7 table:
81 6.11 Glossary 73: PrintMultiples (i, high); PrintMultiples (i, i); I ll leave it up to you to figure out how it works.
82 74 Iteration 6.12 Exercises Exercise 6.1 void Loop(int n) int i = n; while (i > 1) printf ("%i\n",i); if (i%2 == 0) i = i/2; else i = i+1; int main (void) Loop(10); return EXIT_SUCCESS; a. Draw a table that shows the value of the variables i and n during the execution of the program. The table should contain one column for each variable and one line for each iteration. b. What is the output of this program? Exercise 6.2 In Exercise 5.4 we wrote a recursive version of Power(), which takes a double x and an integer n and returns x n. Now write an iterative function to perform the same calculation. Exercise 6.3 Let s say you are given a number, a, and you want to find its square root. One way to do that is to start with a very rough guess about the answer, x 0, and then improve the guess using the following formula: x 1 = (x 0 + a/x 0)/2 For example, if we want to find the square root of 9, and we start with x 0 = 6, then x 1 = (6 + 9/6)/2 = 15/4 = 3.75, which is closer. We can repeat the procedure, using x 1 to calculate x 2, and so on. In this case, x 2 = and x 3 = So that is converging very quickly on the right answer (which is 3). Write a function called SquareRoot that takes a double as a parameter and that returns an approximation of the square root of the parameter, using this algorithm. You may not use the sqrt() function from the math.h library.
83 6.12 Exercises 75 As your initial guess, you should use a/2. Your function should iterate until it gets two consecutive estimates that differ by less than ; in other words, until the absolute value of x n x n 1 is less than You can use the built-in abs() function form the math.h library to calculate the absolute value. Exercise 6.4 One way to evaluate e x2 is to use the infinite series expansion e x2 = 1 2x + 3x 2 /2! 4x 3 /3! + 5x 4 /4!... In other words, we need to add up a series of terms where the ith term is equal to ( 1) i (i+1)x i /i!. Write a function named Gauss() that takes x and n as arguments and that returns the sum of the first n terms of the series. You should not use factorial() or pow().
84 76 Iteration
85 Chapter 7 Arrays A array is a set of values where each value is identified and referenced by a number (called an index). The nice thing about arrays is that they can be made up of any type of element, including basic types like ints and doubles, but all the values in an array have to have the same type. When you declare an array, you have to determine the number of elements in the array. Otherwise the declaration looks similar to other variable types: int c[4]; double values[10]; Syntactically, array variables look like other C variables except that they are followed by [NUMBER_OF_ELEMENTS], the number of elements in the array enclosed in square brackets. The first line in our example, int c[4]; is of the type "array of integers" and creates a array of four integers named c. The second line, double values[10]; has the type "array of doubles" and creates an array of 10 doubles. C allows you to to initialize the element values of an array immediately after you have declared it. The values for the individual elements must be enclosed in curly brakets and separated by comma, as in the following example: int c[4] = 0, 0, 0, 0; This statement creates an array of four elements and initializes all of them to zero. This syntax is only legal at initialisation time. Later in your program you can only assign values for the array element by element. The following figure shows how arrays are represented in state diagrams: c c[0] c[1] c[2] c[3] The large numbers inside the boxes are the values of the elements in the array. The small numbers outside the boxes are the indices used to identify each
86 78 Arrays box. When you allocate a new array, without initializing, the arrays elements typically contain arbitrary values and you must initialise them to a meaningful value before using them. 7.1 Increment and decrement operators Incrementing and decrementing are such common operations that C provides special operators for them. The ++ operator adds one to the current value of an int, char or double, and -- subtracts one. Technically, it is legal to increment a variable and use it in an expression at the same time. For example, you might see something like: printf ("%i\n ", i++); PrintMultTable() from Section 6.10: void PrintMultTable(int high) int i = 1; while (i <= high) PrintMultiples(i); i++;.2 Accessing elements The [] operator allows us to read and write the individual elements of an array. The indices start at zero, so c[0] refers to the first element of the array, and c[1] refers to the second element. You can use the [] operator anywhere in an expression:
87 7.3 Copying arrays 79 c[0] = 7; c[1] = c[0] * 2; c[2]++; c[3] -= 60; All of these are legal assignment statements. fragment: Here is the effect of this code c c[0] c[1] c[2] c[3] By now you should have noticed that the four elements of this array are numbered from 0 to 3, which means that there is no element with the index 4. Nevertheless, it is a common error to go beyond the bounds of an array. In safer languages such as Java, this will cause an error and most likely the program quits. C does not check array boundaries, so your program can go on accessing memory locations beyond the array itself, as if they where part of the array. This is most likely wrong and can cause very severe bugs in your program. It is necessary that you, as a programmer, make sure that your code correctly observes array boundaries! You can use any expression as an index, as long as it has type int. One of the most common ways to index an array is with a loop variable. For example: int i = 0; while (i < 4) printf ("%i\n", c[i]); i++; This is a standard while loop that counts from 0 up to 4, and when the loop variable i is 4, the condition fails and the loop terminates. Thus, the body of the loop is only executed when i is 0, 1, 2 and 3. Each time through the loop we use i as an index into the array, printing the ith element. This type of array traversal is very common. Arrays and loops go together like fava beans and a nice Chianti. 7.3 Copying arrays Arrays can be a very convenient solution for a number of problems, like storing and processing large sets of data. However, there is very little that C does automatically for you. For example you can not set all the elements of an array at the same time and you can not assign one array to the other, even if they are identical in type and number of elements.
88 80 Arrays double a[3] = 1.0, 1.0, 1.0; double b[3]; a = 0.0; /* Wrong! */ b = a; /* Wrong! */ In order to set all of the elements of an array to some value, you must do so element by element. To copy the contents of one array to another, you must again do so, by copying each element from one array to the other. int i = 0; while (i < 3) b[i] = a[i]; i++; 7 This statement is exactly equivalent to INITIALIZER; while (CONDITION) BODY INCREMENTOR except that it is more concise and, since it puts all the loop-related statements in one place, it is easier to read. For example: int i; for (i = 0; i < 4; i++) printf("%i\n", c[i]); is equivalent to
89 7.5 Array length 81 int i = 0; while (i < 4) printf("%i\n", c[i]); i++; 7.5 Array length C does not provide us with a convenient way to determine the actual length of an array. Knowing the size of an array would be convenient when we are looping through all elements of the array and need to stop with the last element. In order to determine the array length we could use the sizeof() operator, that calculates the size of data types in bytes. Most data types in C use more than one byte to store their values, therefore it becomes necessary to divide the byte-count for the array by the byte-count for a single element to establish the number of elements in the array. sizeof(array)/sizeof(array_element) It is a good idea to use this value as the upper bound of a loop, rather than a constant. That way, if the size of the array changes, you won t have to go through the program changing all the loops; they will work correctly for any size array. int i, length; length = sizeof (c) / sizeof (c[0]); for (i = 0; i < length; i++) printf("%i\n", c[i]); The last time the body of the loop gets executed, the value of i is length - 1, which is the index of the last element. When i is equal to length, the condition fails and the body is not executed, which is a good thing, since it would access a memory location that is not part of the array. 7
90 82 Arrays generate pseudorandom numbers and use them to determine the outcome of the program. Pseudorandom numbers are not truly random in the mathematical sense, but for our purposes, they will do. C provides a function called rand() that generates pseudorandom numbers. It is declared in the header file stdlib.h, which contains a variety of standard library functions, hence the name. The return value from rand() is an integer between 0 and RAND_MAX, where RAND_MAX is a large number (about 2 billion on my computer) also defined in the header file. Each time you call rand() you get a different randomly-generated number. To see a sample, run this loop: for (i = 0; i < 4; i++) int x = rand(); printf("%i\n", x); On my machine I got the following output: = rand (); = rand (); double y = (double) x / RAND_MAX; This code sets y to a random value between 0.0 and 1.0, including both end points. As an exercise, you might want to think about how to generate a random floating-point value in a given range; for example, between and
91 7.7 Statistics Statistics The numbers generated by rand(). 7.8 Array of random numbers The first step is to generate a large number of random values and store them in a array. By large number, of course, I mean 20. It s always a good idea to start with a manageable number, to help with debugging, and then increase it later. The following function takes three arguments, an array of integers, the size of the array and an upper bound for the random values. It fills the array of ints with random values between 0 and upperbound-1. void RandomizeArray (int array[], int length, int upperbound) int i; for (i = 0; i < length; i++) array[i] = rand() % upperbound; The return type is void, which means that this function does not return any value to the calling function. To test this function, it is convenient to have a function that outputs the contents of a array. void PrintArray (int array[], int length) int i; for (i = 0; i < length; i++) printf ("%i ", array[i]); The following code generates an array filled with random values and outputs it: int r_array[20]; int upperbound = 10; int length = sizeof(r_array) / sizeof(r_array[0]); RandomizeArray (r_array, length, upperbound); PrintArray (r_array, length);
92 84 Arrays On my machine the output is: the number of elements in our array. 7.9 Passing an array to a function You probably have noticed that our RandomizeArray() function looked a bit unusual. We pass an array to this function and expect to get a a randomized array back. Nevertheless, we have declared it to be a void function, and miraculously the function appears to have altered the array. This behaviour goes against everything what I have said about the use of variables in functions so far. C typically uses the so called call-by-value evaluation of expressions. If you pass a value to a function it gets copied from the calling function to a variable in the called function. The same is true if the function returns a value. Changes to the internal variable in the called function do not affect the external values of the calling function. When we pass an array to a function this behaviour changes to something called call-by-reference evaluation. C does not copy the array to an internal array it rather generates a reference to the original array and any operation in the called function directly affects the original array. This is also the reason why we do not have to return anything from our function. The changes have already taken place. Call by reference also makes it necessary to supply the length of the array to the called function, since invoking the sizeof operator in the called function would determine the size of the reference and not the original array. We will further discuss call by reference and call by value in Section 8.7, Section 9.6 and Counting A good approach to problems like this is to think of simple functions that are easy to write, and that might turn out to be useful. Then you can combine them into a solution. This approach is sometimes called bottom-up design.
93 7.11 Checking the other values 85. In our current example we want to examine a potentially large set of elements and count the number of times a certain value appears. You can think of this program as an example of a pattern called traverse and count. The elements of this pattern are: A set or container that can be traversed, like a string or a array. A test that you can apply to each element in the container. A counter that keeps track of how many elements pass the test. In this case, I have a function in mind called HowMany() that counts the number of elements in a array that are equal to a given value. The parameters are the array, the length of the array and the integer value we are looking for. The return value is the number of times the value appears. int HowMany (int array[], int length, int value) int i; int count = 0; for (i=0; i < length; i++) if (array[i] == value) count++; return count; 7.11 Checking the other values HowMany() only counts the occurrences of a particular value, and we are interested in seeing how many times each value appears. We can solve that problem with a loop: int i; int r_array[20]; int upperbound = 10; int length = sizeof(r_array) / sizeof(r_array[0]); RandomizeArray(r_array, length, upperbound); printf ("value\thowmany\n"); for (i = 0; i < upperbound; i++)
94 86 Arrays printf("%i\t%i\n", i, HowMany(r_array, length, i)); This code uses the loop variable as an argument to HowMany(), in order to check each value between 0 and 9, in order. The result is: value HowMany Again, it is hard to tell if the digits are really appearing equally often. If we increase the size of the array to 100,000 we get the following: value HowMany In each case, the number of appearances is within about 1% of the expected value (10,000), so we conclude that the random numbers are probably uniform array with length 10. That way we can create all ten storage locations at once and we can access them using indices, rather than ten different names. Here s how: int i; int upperbound = 10; int r_array[100000];
95 7.13 A single-pass solution 87 int histogram[upperbound]; int r_array_length = sizeof(r_array) / sizeof(r_array[0]); RandomizeArray(r_array, r_array_length, upperbound); for (i = 0; i < upperbound; i++) int count = HowMany(r_array, length, i); histogram[i] = count; I called the array histogram because that s a statistical term for a array A single-pass solution Although this code works, it is not as efficient as it could be. Every time it calls HowMany(), it traverses the entire array. In this example we have to traverse the array ten times! It would be better to make a single pass through the array. For each value in the array we could find the corresponding counter and increment it. In other words, we can use the value from the array as an index into the histogram. Here s what that looks like: int upperbound = 10; int histogram[upperbound] = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0; for (i = 0; i < r_array_length; i++) int index = r_array[i]; histogram[index]++; The second an array and the range of values in the array (in this case 0 through 10) as two parameters min and max. You should pass a second array to the function where a histogram of the values in the array can be stored.
96 88 Arrays 7 time() to generate something reasonably unpredictable and unrepeatable, like the number of seconds since January 1970, and use that number as a seed. The details of how to do that depend on your development environment Glossary array: A named collection of values, where all the values have the same type, and each value is identified by an index. element: One of the values in a array. The [] operator selects elements of a array. index: An integer variable or value used to indicate an element of a array. increment: Increase the value of a variable by one. The increment operator in C is ++. decrement: Decrease the value of a variable by one. The decrement operator in C is --..
97 7.16 Exercises 89 histogram: A array of integers where each integer counts the number of values that fall into a certain range Exercises Exercise 7.1 A friend of yours shows you the following method and explains that if number is any two-digit number, the program will output the number backwards. He claims that if number is 17, the method will output 71. Is he right? If not, explain what the program actually does and modify it so that it does the right thing. #include <stdio.h> #include<stdlib.h> int main (void) int number = 71; int lastdigit = number%10; int firstdigit = number/10; printf("%i",lastdigit + firstdigit); return EXIT_SUCCESS; Exercise 7.2 Write a function that takes an array of integers, the length of the array and an integer named target as arguments. The function should search through the provided array and should return the first index where target appears in the array, if it does. If target is not in the array the function should return an invalid index value to indicate an error condition (e.g. -1). Exercise 7.3 One not-very-efficient way to sort the elements of an array is to find the largest element and swap it with the first element, then find the second-largest element and swap it with the second, and so on. a. Write a function called IndexOfMaxInRange() that takes an array of integers, finds the largest element in the given range, and returns its index. b. Write a function called SwapElement() that takes an array of integers and two indices, and that swaps the elements at the given indices. c. Write a function called SortArray() that takes an array of integers and that uses IndexOfMaxInRange() and SwapElement() to sort the array from largest to smallest.
98 90 Arrays
99 Chapter 8 Strings and things 8.1 Containers for strings We have seen four types of values characters, integers, floating-point numbers and strings but only three types of variables char, int and double. So far we have no way to store a string in a variable or perform operations on strings. This chapter is going to rectify this situation and I can now tell you that strings in C are stored as an array of characters terminated by the character \0. By now this explanation should make sense to you and you probably understand why we had to learn quite a bit about the working of the language before we could turn our attention towards string variables. In the previous chapter we have seen that operations on arrays have only minimal support from the C language itself and we had to program extra functions by ourselves. Fortunately things are a little bit easier when we manipulate these special types of arrays - called strings. There exist a number of library functions in string.h that make string handling a bit easier than operations on pure arrays. Nevertheless string operations in C are still a lot more cumbersome than their equivalence in other programing languages and can be a potential source of errors in your programs, if not handled carefully. 8.2 String variables You can create a string variable as an array of characters in the following way: char first[] = "Hello, "; char second[] = "world.";
100 92 Strings and things The first line creates an string and assigns it the string value "Hello." In the second line we declare a second string variable. Remember, the combined declaration and assignment is called initialization. Initialisation time is the only time you can assign a value to a string directly (just as with arrays in general). The initialisation parameters are passed in the form of a string constant enclosed in quotation marks ("... "). Notice the difference in syntax for the initialisation of arrays and strings. If you like you can also initialize the string in the normal array syntax, although this looks a little odd and is not very convenient to type. char first[] = H, e, l, l, o,,,, \0 ; There is no need to supply an array size when you are initialising the string variable at declaration time. The compiler compute the necessary array size to store the supplied string. Remember what we said about the nature of a string variable. It is an array of characters plus a marker that shows where our string ends: the termination character \0. Normally you do not have to supply this termination character. The compiler understands our code and insertes it automatically. However, in the example above, we treated our string exactly like an array and in this case we have to insert the termination character ourselves. When we are using a string variable to store different sting values during the lifetime of our program we have to declare a size big enough for the largest sequence of characters that we are going to store. We also have to make our string variable exactly one character longer than the text we are going to store, because of the necessary termination character. We can output strings in the usual way using the printf() function: printf("%s", first); 8.3 Extracting characters from a string Strings are called strings because they are made up of a sequence, or string, of characters. The first operation we are going to perform on a string is to extract one of the characters. C uses an index in square brackets ([ and ]) for this operation: char fruit[] = "banana"; char letter = fruit[1]; printf ("%c\n", letter); The expression fruit[1] indicates that I want character number 1 from the string named fruit. The result is stored in a char named letter. When I output the value of letter, I get a surprise: a
101 8.4 Length 93 put zero in the square brackets: char letter = fruit[0]; 8.4 Length To find the length of a string (the number of characters this string contains), we can use the strlen() function. The function is called using the string variable as an argument: #include <string.h> int main(void) int length; char fruit[] = "banana"; length = strlen(fruit); return EXIT_SUCCESS; The return value of strlen() in this case is 6. integer length for further use. We assign this value to the In order to compile this code, you need to include the header file for the string.h library. This library provides a number of useful functions for operations on strings. You should familiarize yourself with these functions because they can help you to solve your programming problems faster. To find the last letter of a string, you might be tempted to try something like int length = strlen(fruit); char last = fruit[length]; /* WRONG!! */ That won t work. The reason is that fruit is still an array and there is no letter at the array index fruit[6] in "banana". Since we started counting at 0, the 6 letters are numbered from 0 to 5. To get the last character, you have to subtract 1 from length. int length = strlen(fruit); char last = fruit[length-1]; 8.5 Traversal A common thing to do with a string is start at the beginning, select each character in turn, do something to it, and continue until the end. This pattern of
102 94 Strings and things processing is called a traversal. A natural way to encode a traversal is with a while statement: int index = 0; while (index < strlen(fruit)) char letter = fruit[index]; printf("%c\n", letter); index = index + 1; This loop traverses the string and outputs each letter on a line by itself. Notice that the condition is index < strlen(fruit), which means that when index is equal to the length of the string, the condition is false and the body of the loop is not executed. The last character we access is the one with the index strlen(fruit) a string as an argument and that outputs the letters backwards, all on one line. 8.6 Finding a character in a string If we are looking for a letter in a string, we have to search through the string and detect the position where this letter occurs in the string. Here is an implementation of this function: int LocateCharacter(char *s, char c) int i = 0; while (i < strlen(s)) if (s[i] == c) return i; i = i + 1; return -1; We have to pass the string as the first argument, the other argument is the character we are looking for. Our function returns the index of the first occurrence of the letter, or -1 if the letter is not contained in the string.
103 8.7 Pointers and Addresses Pointers and Addresses When we look at the definition of the LocateCharacter() function you may notice the following construct char *s which looks unfamiliar. Remember, when we discussed how we had to pass an array to a function, back in Section 7.9, we said that instead of copying the array, we only pass a reference to the function. Back then, we did not say exactly what this reference was. C is one of the very few high-level programming languages that let you directly manipulate objects in the computer memory. In order to do this direct manipulation, we need to know the location of the object in memory: it s address. Adresses can be stored in variables of a special type. These variables that point to other objects in memory (such as variables, arrays and strings) are therefore called pointer variables. A pointer references the memory location of an object and can be defined like this: int *i_p; This declaration looks similar to our earlier declarations, with one difference: the asterisk in front of the name. We have given this pointer the type int. The type specification has nothing to do with the pointer itself, but rather defines which object this pointer is supposed to reference (in this case an integer). This allows the compiler to do some type checking on, what would otherwise be, an anonymous reference. A pointer all by itself is rather meaningless, we also need an object that this pointer is referencing: int number = 5; int *i_p; This code-fragment defines an int variable and a pointer. We can use the "address-of" operator & to assign the memory location or address of our variable to the pointer. i_p = &number; Pointer i_p now references integer variable number. We can verify this using the "content-of" operator *. printf("%i\n", *i_p); This prints 5, which happens to be the content of the memory location at our pointer reference. With pointers we can directly manipulate memory locations: *i_p = *i_p + 2; printf("%i\n", number); Our variable number now has the value 7 and we begin to understand how our LocateCharacter() function can directly access the values of string variables through the use of a char pointer.
104 96 Strings and things Pointers are widely used in many C programs and we have only touched the surface of the topic. They can be immensely useful and efficient, however they can also be a potential source of problems when not used appropriately. For this reason not many programming languages support direct memory manipulation. 8.8 String concatenation In Section 8.6 we have seen how we could implement a search function that finds a character in a string. One useful operation on strings is string concatenation. To concatenate means to join the two operands end to end. For example: shoe and maker becomes shoemaker. Fortunately, we do not have to program all the necessary functions in C ourselves. The string.h library already provides several functions that we can invoke on strings. We can use the library function strncat() to concatenate strings in C. char fruit[20] = "banana"; char bakedgood[] = " nut bread"; strncat(fruit, bakedgood, 10); printf ("%s\n", fruit); The output of this program is banana nut bread. When we are using library functions it is important to completely understand all the necessary arguments and to have a complete understanding of the working of the function. The strncat() does not take the two strings, joins them together and produces a new combined string. It rather copies the content from the second argument into the first. We therefore have to make sure that our first string is long enough to also hold the second string. We do this by defining the maximum capacity for string fruit to be 19 characters + 1 termination character (char fruit[20]). The third argument of strncat() specifies the number of characters that will be copied from the second into the first string. 8.9 Assigning new values to string variables So far we have seen how to initialise a string variable at declaration time. As with arrays in general, it is not legal to assign values directly to strings, because it is not possible to assign a value to an entire array. fruit = "orange"; /* Wrong: Cannot assign directly! */ In order to assign a new value to an existing string variable we have to use the strncpy() function. For example,
105 8.10 strings are not comparable 97 char greeting[15]; strncpy (greeting, "Hello, world!", 13); copies 13 characters from the of the second argument string to the first argument string. This works, but not quite as expected. The strncpy() function copies exactly 13 characters from the second argument string into the first argument string. And what happens to our string termination character \0? It is not copied automatically. We need to change our copy statement to copy also the invisible 14th character at the end of the string: strncpy (greeting, "Hello, world!", 14); However, if we only copy parts of the second string into the first we need to explicitly set the n+1th character in the greeting[15] string to \0 afterwards. strncpy (greeting, "Hello, world!", 5); /*only Hello is copied*/ greeting[5] = \0 ; Attention! In the last two sections we have used the strncpy() and the strncat() function that require you to explicitly supply the number of characters that will get copied or attached to the first argument string. The string.h library also defines the strcpy() and the strcat() functions that have no explicit bound on the number of characters that are copied. The usage of these functions is strongly discouraged! Their use has lead to a vast number of security problems with C programs. Remember, C does not check array boundaries and will continue copying characters into computer memory even past the length of the variable strings are not comparable All the comparison operators that work on ints and doubles do work on strings. For example, if you write the following code to determine if two strings are equal: if (word == "banana") /* Wrong! */ This test will always fail. You have to use the strcmp() function to compare two strings with each other. The function returns 0 if the two strings are identical, a negative value if the first string is alphabetically less than the second (would be listed first in a dictionary) or a positive value if the second string is greater. Please notice, this return value is not the standard true/false result, where the return value 0 is interpreted as false. The strcmp() function is useful for putting words in alphabetical order.
106 98 Strings and things if (strcmp(word, "banana") < 0) printf( "Your word, %s, comes before banana.\n", word); else if (strcmp(word, "banana") > 0) printf( "Your word, %s, comes after banana.\n", word); else printf ("Yes, we have no bananas!\n"); You should be aware, though, that the strcmp() function)) printf("the character %c is a letter.", letter); The return value from isalpha() is an integer that is 0 if the argument is not a letter, and some non-zero value if it is. It is legal to use this kind of integer in a conditional, as shown in the example. The value 0 is treated as false, and all non-zero values are treated as true. an argument and return a (possibly converted) character.
107 8.12 Getting user input 99 char letter = a ; letter = toupper (letter); printf("%c\n", letter); stdio.h, C defines a function named scanf() that handles input in much the same way that printf() handles output. We can use the following code to get an integer value from the user: int x; scanf("%i", &x); The scanf() function, the scanf() function returns and leaves the value in x unchanged. Fortunately, there is a way to check and see if an input statement succeeds. The scanf() function returns the number of items that have been successfully read. This number will be 1 when the last input statement succeeded. If not, we know that some previous operation failed, and also that the next operation will fail. Getting input from the user might look like this: int main (void) int success, x; /* prompt the user for input */ printf ("Enter an integer: \n"); /* get input */
108 100 Strings and things success = scanf("%i", &x); /* check and see if the input statement succeeded */ if (success == 1) /* print the value we got from the user */ printf ("Your input: %i\n", x); return EXIT_SUCCESS; printf("that was not an integer.\n"); return EXIT_FAILURE; There is another potential pitfall connected with the scanf() function. Your program code might want to insist that the user types a valid integer, because this value is needed later on. In this case you might want to repeat the input statement in order to get a valid user input: if (success!= 1) while (success!= 1) printf("that was not a number. Please try again:\n"); success = scanf("%i", &x); Unfortunately this code leads into an endless loop. You probably ask yourself, why? The input from the keyboard is delivered to your program by the operating system, in something called an input buffer. A successful read operation automatically empties this buffer. However, if the scanf() function fails, like in our example, the buffer does not get emptied and the next scanf() operation re-reads the old value - you see the problem? We need to empty the input buffer, before we can attempt to read the next input from the user. Since there is no standard way to do this, we will introduce our own code that reads and empties the buffer using the getchar() function. It run through a while-loop until there are no more characters left in the buffer (notice the construction of this loop, where all the operations are executed in the test condition): char ch; /* helper variable stores discarded chars*/ while (success!= 1) printf("that isn t a number. Please try again:\n"); /* now we empty the input buffer*/ while ((ch = getchar())!= \n && ch!= EOF); success = scanf("%i", &x);
109 8.13 Glossary 101 The scanf() function can also be used to input a string: char name[80] ; printf ("What is your name?"); scanf ("%s", name); printf ("%s", name); Again, we have to make sure our string variable is large enough to contain the complete user input. Notice the difference in the argument of the scanf() function when we are reading an integer or a string. The function requires a pointer to the variable where the input value will be stored. If we are reading an integer we need to use the address operator & with the variable name. In the case of a string we simply provide the variable name. Also notice, that the scanf() function only takes the first word of the input, and leaves the rest for the next input statement. So, if you run this program and type your full name, it will only output your first name Glossary. concatenate: To join two operands end-to-end. pointer: A reference to an object in computer memory. address: The exact storage location of objects in memory Exercises Exercise 8.1 A word is said to be abecedarian if the letters in the word appear in alphabetical order. For example, the following are all 6-letter English abecedarian words. abdest, acknow, acorsy, adempt, adipsy, agnosy, befist, behint, beknow, bijoux, biopsy, cestuy, chintz, deflux, dehors, dehort, deinos, diluvy, dimpsy a. Describe an algorithm for checking whether a given word (String) is abecedarian, assuming that the word contains only lower-case letters. Your algorithm can be iterative or recursive. b. Implement your algorithm in a function called IsAbecedarian().
110 102 Strings and things Exercise 8.2 Write a function called LetterHist() that takes a String as a parameter and that returns a histogram of the letters in the String. The zeroeth element of the histogram should contain the number of a s in the String (upper and lower case); the 25th element should contain the number of z s. Your solution should only traverse the String once. Exercise 8.3 A word is said to be a doubloon if every letter that appears in the word appears exactly twice. For example, the following are all the doubloons I found in my dictionary. Abba, Anna, appall, appearer, appeases, arraigning, beriberi, bilabial, boob, Caucasus, coco, Dada, deed, Emmett, Hannah, horseshoer, intestines, Isis, mama, Mimi, murmur, noon, Otto, papa, peep, reappear, redder, sees, Shanghaiings, Toto Write a function called IsDoubloon() that returns TRUE if the given word is a doubloon and FALSE otherwise. Exercise 8.4 The Captain Crunch decoder ring works by taking each letter in a string and adding 13 to it. For example, a becomes n and b becomes o. The letters wrap around at the end, so z becomes m. a. Write a function.5 In Scrabble each player has a set of tiles with letters on them, and the object of the game is to use those letters to spell words. The scoring system is complicated, but as a rough guide longer words are often worth more than shorter words. Imagine you are given your set of tiles as a String, like "qijibo" and you are given another String to test, like "jib". Write a function called TestWord() that takes these two Strings and returns true if the set of tiles can be used to spell the word. You might have more than one tile with the same letter, but you can only use each tile once. Exercise 8.6 In real Scrabble, there are some blank tiles that can be used as wild cards; that is, a blank tile can be used to represent any letter. Think of an algorithm for TestWord() that deals with wild cards. Don t get bogged down in details of implementation like how to represent wild cards. Just describe the algorithm, using English, pseudocode, or C.
111 Chapter 9 Structures 9.1 Compound values Most of the data types we have been working with represent a single value an integer, a floating-point number, a character member variables). This ambiguity is useful. It is also useful to be able to create your own compound values. C provides a mechanism for doing that: structures.: typedef struct double x; double y; Point_t;
112 104 Structures struct definitions appear outside of any function definition, usually at the beginning of the program (after the include statements). This definition indicates that there are two elements in this structure, named x and y. These elements are called the members or fields of a structure. It is a common error to leave off the semi-colon at the end of a structure definition. It might seem odd to put a semi-colon after curly-brackets, but you ll get used to it. Once you have defined the new structure, you can create variables with that type: Point_t blank; blank.x = 3.0; blank.y = 4.0; The first line is a conventional variable declaration: blank has type Point_t. The next two lines initialize the fields of the structure. The dot notation used here is called the field selection operator and allows to access the structure fields. The result of these assignments is shown in the following state diagram: blank x: 3 y: 4 As usual, the name of the variable blank appears outside the box and its value appears inside the box. In this case, that value is a compound object with two named member variables. 9.3 Accessing member variables You can read the values of an member variable using the same syntax we used to write them: double x = blank.x; The expression blank.x means go to the object named blank and get the value of x. In this case we assign that value to a local variable named x. Notice that there is no conflict between the local variable named x and the member variable named x. The purpose of dot notation is to identify which variable you are referring to unambiguously. You can use dot notation as part of any C expression, so the following are legal. printf ("%0.1f, %0.1f\n", blank.x, blank.y); double distance = blank.x * blank.x + blank.y * blank.y; The first line outputs 3, 4; the second line calculates the value 25. Python
WEEK ONE Introduction to Python Python is such a simple language to learn that we can throw away the manual and start with an example. Traditionally, the first program to write in any programming language 2: Problem Solving Using C++
Chapter 2: Problem Solving Using C++ 1 Objectives In this chapter, you will learn about: Modular programs Programming style Data types Arithmetic operations Variables and declaration statements Common
Moving from CS 61A Scheme to CS 61B Java
Moving from CS 61A Scheme to CS 61B Java Introduction Java is an object-oriented language. This document describes some of the differences between object-oriented programming in Scheme (which we hope One Introduction to Programming
Chapter One Introduction to Programming 1-1 Algorithm and Flowchart Algorithm is a step-by-step procedure for calculation. More precisely, algorithm is an effective method expressed as a finite list of
Chapter 3. Input and output. 3.1 The System class
Chapter 3 Input and output The programs we ve looked at so far just display messages, which doesn t involve a lot of real computation. This chapter will show you how to read input from the keyboard, use
Adjusted/Modified by Nicole Tobias. Chapter 2: Basic Elements of C++
Adjusted/Modified by Nicole Tobias Chapter 2: Basic Elements of C++ Objectives In this chapter, you will: Become familiar with functions, special symbols, and identifiers in C++ Explore simple data
A First Book of C++ Chapter 2 Data Types, Declarations, and Displays
A First Book of C++ Chapter 2 Data Types, Declarations, and Displays Objectives In this chapter, you will learn about: Data Types Arithmetic Operators Variables and Declarations Common Programming Errors
Programming in Access VBA
PART I Programming in Access VBA In this part, you will learn all about how Visual Basic for Applications (VBA) works for Access 2010. A number of new VBA features have been incorporated into the 2010
Access Tutorial 12: An Introduction to Visual Basic
Access Tutorial 12: An Introduction to Visual Basic 12.1 Introduction: Learning the basics of programming Programming can be an enormously complex and difficult activity. Or it can be quite straightforward.
Lecture 1 Notes: Introduction
Introduction to C++ January 4, 2011 Massachusetts Institute of Technology 6.096 Lecture 1 Notes: Introduction 1 Compiled Languages and C++ 1.1 Why Use a Language Like C++? At its core, a computer is just
The C Programming Language course syllabus associate level
TECHNOLOGIES The C Programming Language course syllabus associate level Course description The course fully covers the basics of programming in the C programming language and demonstrates fundamental,
[Refer Slide Time: 05:10]
Principles of Programming Languages Prof: S. Arun Kumar Department of Computer Science and Engineering Indian Institute of Technology Delhi Lecture no 7 Lecture Title: Syntactic Classes Welcome to lecture
Topics. Introduction. Java History CS 146. Introduction to Programming and Algorithms Module 1. Module Objectives
Introduction to Programming and Algorithms Module 1 CS 146 Sam Houston State University Dr. Tim McGuire Module Objectives To understand: the necessity of programming, differences between hardware and software,
Sources: On the Web: Slides will be available on:
C programming Introduction The basics of algorithms Structure of a C code, compilation step Constant, variable type, variable scope Expression and operators: assignment, arithmetic operators, comparison,
Introduction to C Language
Introduction to C Language A look at C We are going to learn a language called C. C is a very flexible and powerful programming language originally designed in the early 1970s. It is famous as the language
PHP Debugging. Draft: March 19, 2013 2013 Christopher Vickery
PHP Debugging Draft: March 19, 2013 2013 Christopher Vickery Introduction Debugging is the art of locating errors in your code. There are three types of errors to deal with: 1. Syntax errors: When code++ INTERVIEW QUESTIONS
C++ INTERVIEW QUESTIONS Copyright tutorialspoint.com Dear readers, these C++ Interview Questions have been designed specially to get
Visual Logic Instructions and Assignments
Visual Logic Instructions and Assignments Visual Logic can be installed from the CD that accompanies our textbook. It is a nifty tool for creating program flowcharts, but that is only half of the story.
C++ Programming Language
C++ Programming Language Lecturer: Yuri Nefedov 7th and 8th semesters Lectures: 34 hours (7th semester); 32 hours (8th semester). Seminars: 34 hours (7th semester); 32 hours (8th semester). Course abstract
TN203. Porting a Program to Dynamic C. Introduction
TN203 Porting a Program to Dynamic C Introduction Dynamic C has a number of improvements and differences compared to many other C compiler systems. This application note gives instructions and suggestions
Lecture 03 Bits, Bytes and Data Types
Lecture 03 Bits, Bytes and Data Types In this lecture Computer Languages Assembly Language The compiler Operating system Data and program instructions Bits, Bytes and Data Types ASCII table Data Types
University of Hull Department of Computer Science. Wrestling with Python Week 01 Playing with Python
Introduction Welcome to our Python sessions. University of Hull Department of Computer Science Wrestling with Python Week 01 Playing with Python Vsn. 1.0 Rob Miles 2013 Please follow the instructions carefully.
Python Programming: An Introduction to Computer Science
Python Programming: An Introduction to Computer Science Chapter 1 Computers and Programs 1 The Universal Machine n A computer -- a machine that stores and manipulates information under the control of a.
Chapter 2: Elements of Java
Chapter 2: Elements of Java Basic components of a Java program Primitive data types Arithmetic expressions Type casting. The String type (introduction) Basic I/O statements Importing packages. 1 Introduction
Writing Portable Programs COS 217
Writing Portable Programs COS 217 1 Goals of Today s Class Writing portable programs in C Sources of heterogeneity Data types, evaluation order, byte order, char set, Reading period and final exam Important
Programming Languages & Tools
4 Programming Languages & Tools Almost any programming language one is familiar with can be used for computational work (despite the fact that some people believe strongly that their own favorite programming
Chapter 13: Program Development and Programming Languages
Understanding Computers Today and Tomorrow 12 th Edition Chapter 13: Program Development and Programming Languages Learning Objectives Understand the differences between structured programming, object-oriented
Computer Programming Tutorial
Computer Programming Tutorial COMPUTER PROGRAMMING TUTORIAL by tutorialspoint.com tutorialspoint.com i ABOUT THE TUTORIAL Computer Prgramming Tutorial Computer programming is the act of writing computer
2 Programs: Instructions in the Computer
2 2 Programs: Instructions in the Computer Figure 2. illustrates the first few processing steps taken as a simple CPU executes a program. The CPU for this example is assumed to have a program counter (PC),
Introduction to Python
Caltech/LEAD Summer 2012 Computer Science Lecture 2: July 10, 2012 Introduction to Python The Python shell Outline Python as a calculator Arithmetic expressions Operator precedence Variables and assignment
Pemrograman Dasar. Basic Elements Of Java
Pemrograman Dasar Basic Elements Of Java Compiling and Running a Java Application 2 Portable Java Application 3 Java Platform Platform: hardware or software environment in which a program runs. Oracle
(
General Software Development Standards and Guidelines Version 3.5
NATIONAL WEATHER SERVICE OFFICE of HYDROLOGIC DEVELOPMENT Science Infusion Software Engineering Process Group (SISEPG) General Software Development Standards and Guidelines 7/30/2007 Revision History Date
#820 Computer Programming 1A
Computer Programming I Levels: 10-12 Units of Credit: 1.0 CIP Code: 11.0201 Core Code: 35-02-00-00-030 Prerequisites: Secondary Math I, Keyboarding Proficiency, Computer Literacy requirement Semester 1
Exercise 1: Python Language Basics
Exercise 1: Python Language Basics In this exercise we will cover the basic principles of the Python language. All languages have a standard set of functionality including the ability to comment code,,
C++ Basics. C++ Basics: Names: Identifiers. Names: Identifiers. Display 2.1 A C++ Program (1 of 2) Display 2.1 A C++ Program (2 of 2)
C++ Basics C++ Basics: 2.1 Variables and Assignments Variables and Assignments Input and Output Data Types and Expressions Simple Flow of Control Program Style A C++ variable can hold a number or other
River Dell Regional School District. Computer Programming with Python Curriculum
River Dell Regional School District Computer Programming with Python Curriculum 2015 Mr. Patrick Fletcher Superintendent River Dell Regional Schools Ms. Lorraine Brooks Principal River Dell High School
Exercise 4 Learning Python language fundamentals
Exercise 4 Learning Python language fundamentals Work with numbers Python can be used as a powerful calculator. Practicing math calculations in Python will help you not only perform these tasks, but also
Python Programming: An Introduction to Computer Science
Python Programming: An Introduction to Computer Science Chapter 1 Computers and Programs 1 Objectives To understand the respective roles of hardware and software in a computing system. To learn what computer
C Compiler Targeting the Java Virtual Machine
C Compiler Targeting the Java Virtual Machine Jack Pien Senior Honors Thesis (Advisor: Javed A. Aslam) Dartmouth College Computer Science Technical Report PCS-TR98-334 May 30, 1998 Abstract One of the
Hypercosm. Studio.
Hypercosm Studio Hypercosm Studio Guide 3 Revision: November 2005 Copyright 2005 Hypercosm LLC All rights reserved. Hypercosm, OMAR, Hypercosm 3D Player, and Hypercosm Studio are trademarks
MATLAB Basics MATLAB numbers and numeric formats
MATLAB Basics MATLAB numbers and numeric formats All numerical variables are stored in MATLAB in double precision floating-point form. (In fact it is possible to force some variables to be of other types
C++ Programming: From Problem Analysis to Program Design, Fifth Edition. Chapter 4: Control Structures I (Selection)
C++ Programming: From Problem Analysis to Program Design, Fifth Edition Chapter 4: Control Structures I (Selection) Objectives In this chapter, you will: Learn about control structures Examine relational)
Computer Programming I
Computer Programming I COP 2210 Syllabus Spring Semester 2012 Instructor: Greg Shaw Office: ECS 313 (Engineering and Computer Science Bldg) Office Hours: Tuesday: 2:50 4:50, 7:45 8:30 Thursday: 2:50 4:50,
JavaScript: Control Statements I
1 7 JavaScript: Control Statements I 7.1 Introduction 2 The techniques you will learn here are applicable to most high-level languages, including JavaScript 1 7.2 Algorithms 3 Any computable problem can
Objectives. Python Programming: An Introduction to Computer Science. Lab 01. What we ll learn in this class
Python Programming: An Introduction to Computer Science Chapter 1 Computers and Programs Objectives Introduction to the class Why we program and what that means Introduction to the Python programming language
Writing Simple Programs
Chapter 2 Writing Simple Programs Objectives To know the steps in an orderly software development process. To understand programs following the Input, Process, Output (IPO) pattern and be able to modify
Python Loops and String Manipulation
WEEK TWO Python Loops and String Manipulation Last week, we showed you some basic Python programming and gave you some intriguing problems to solve. But it is hard to do anything really exciting until
Practical Programming, 2nd Edition
Extracted from: Practical Programming, 2nd Edition An Introduction to Computer Science Using Python 3 This PDF file contains pages extracted from Practical Programming, 2nd Edition, published by the Pragmatic
Chapter Goals. 1.1 Computer Programs. Contents 1/9/13
CHAPTER 1 Chapter Goals To learn about computers and programming To compile and run your first Java program To recognize compile-time and run-time errors To describe an algorithm with pseudocode In this
Pseudo code Tutorial and Exercises Teacher s Version
Pseudo code Tutorial and Exercises Teacher s Version Pseudo-code is an informal way to express the design of a computer program or an algorithm in 1.45. The aim is to get the idea quickly and also easy
COLLEGE ALGEBRA. Paul Dawkins
COLLEGE ALGEBRA Paul Dawkins Table of Contents Preface... iii Outline... iv Preliminaries... Introduction... Integer Exponents... Rational Exponents... 9 Real Exponents...5 Radicals...6 Polynomials..
Formatting Variables in C-Max 2.0
Formatting Variables in C-Max 2.0 One of the many new features in C-Max 2.0 is the enhanced formatting now available for variables. This new capability is available in two distinct areas of variable usage:
CHAPTER 3 Numbers and Numeral Systems
CHAPTER 3 Numbers and Numeral Systems Numbers play an important role in almost all areas of mathematics, not least in calculus. Virtually all calculus books contain a thorough description of the natural, | http://docplayer.net/213486-How-to-think-like-a-computer-scientist.html | CC-MAIN-2017-22 | refinedweb | 17,234 | 61.16 |
« Gecko Plugin API Reference « Plug-in Side Plug-in API
Summary
Provides global deinitialization for a plug-in.
Syntax
#include <npapi.h> void NP_Shutdown(void);
Windows
#include <npapi.h> void WINAPI NP_Shutdown(void);
Description
The browser calls this function once after the last instance of your plug-in is destroyed, before unloading the plug-in library itself. Use
NP_Shutdown to delete any data allocated in to be shared by all instances of a plug-in.
If you have defined a Java class for your plug-in, be sure to release it at this time so that Java can unload it and free up memory.
NOTE: If enough memory is available, the browser can keep the plug-in library loaded if it expects to create more instances in the near future. The browser calls
NP_Shutdown only when the library is finally unloaded. | https://developer.mozilla.org/en-US/Add-ons/Plugins/Reference/NP_Shutdown | CC-MAIN-2016-36 | refinedweb | 141 | 55.13 |
# Here’s How to Update Node.js Via Visual Studio, NPM, Windows/Mac

I hope that you will find Node version 12 new capabilities compelling and soon you will upgrade your app to it.
In turn, you will get advanced debugging, intelligent coding with the powerful IntelliSense engine, interactive window, quick tracking of performance issues, unit testing, typescript integration, source control, cloud integration, and npm integration.
To get started in this walkthrough, this post captures the steps on how to update Node.js in Visual Studio, Windows/macOS, and NPM.
**First, see a couple of useful tricks to check which Node.js npm version you have installed:**
Write the command line to update Node.js npm:
`“node -v” or “npm -v”` simply type the one you want to check.
If the installed version of npm is not the latest one, you can update it using the syntax code:
```
npm npm@latest -g
```
***(Note: The -g flag is used to update npm globally.)***
Secondly, see which Node/Npm version Visual Studio you are using.
Now, use **Visual Studio Command Prompt** that is also called **Visual Studio plugin.** It can help you open a command line.
If you are not able to use Visual Studio Command Prompt, you can use the **“Task Runner Explorer”** by adding a task and running it in this way:
```
"check": "node -v && npm -v"
```
Another way is using **C:\Program Files (x86)\Microsoft Visual Studio 14.0\Web\External\”** on your Window’s.
***(Note: This way you will come to know about the local version of your Visual Studio.)***
If you have any issues with these steps, you can call for help from the industry’s best Node.js developers on a particular project on which you happen to be working.
All of this is fine. A developer is expecting to have Node.js updated on Microsoft Visual Studio that supports all those new features.
**Steps to Update Node.js in Visual Studio**
Development teams usually build one or two node.js update strategy that can be sometimes using an automated process. Or, sometimes updates can be based on the goodwill of the developer. The second one seems to be the riskiest approach. Consider applying the first strategy by using automated tools to update node.js.
**For Example:**
The tool like [greenkeeper](https://greenkeeper.io/) offers automatic dependency management for node.js npm dependencies. It improves your code quality, catches problems that you would have missed, easy to use, and reliable.
So, if you wish to get an update Node.js, there’re actually simple ways to do so. Here are the ways how to update node.js in Visual Studio and various operating systems.
Start by installing the new global version on node.js on your computer. Simply visit the Node.js download page and start installing the new version. Like the newest version, Node 12 is packed with notable features.
* Node 12 runs on the V8 engine for faster JavaScript execution
* Improved startup time by 30%
* Node 12 supports TLS 1.3 for increased security of codes
* N-API improvements to prevent libraries from breaking
To tell Visual Studio to use the global version follow the command:
Go to Tools > Options > Projects and Solutions > External Web Tools
Visual Studio uses the $(PATH) to look for its external tools. When performing node.js installation make sure that the $(PATH) should be first on your list.
If that doesn’t work, you can restart the Visual Studio. Or, you can add the usual path directly to the list in which node.js is installed by clicking on **“Add” and set “C:\Program Files\nodejs\.”** Now, restart the system and recheck the node.js update process.
***(Note: If it doesn’t work, make sure node.js is actually installed in that folder. If it is there, but still not work, try uninstalling, remove that folder manually, and install it again.)***
**How to Update Node.js on Windows and Mac Operating System?**
Already familiar with the Node.js updating steps on Windows. Great, these are the foundation of a successful development strategy. However, in the past decade, the node.js development world had gone through dramatic changes but the updating processes were left intact. Some modern node.js update techniques can even replace the traditional ones to give you a better and leaner update strategy with a better ROI.
**Example:**
If you wish to upgrade Node.js on Windows and Mac OS, then simply visit the [Node.js homepage](https://nodejs.org/en/) and select your operating system.
From there, a wizard will magically update your Node, and replace the older version with the new one.
**Now, See How to Update Node.js Using npm (Node Package Manager)**
To update node.js you can use [Node Package Manager](https://www.npmjs.com/package/n) (npm) that is already preinstalled. But before you start updating node.js, make sure your npm is of the latest version. Here are a few simple steps to follow to update your npm.
First, you need to find out the version of your Node Package Manager (npm), by running npm -v command.
After checking the version, you can run the command npm install `npm@latest -g` to install the latest version of Node Package Manager.
Finally, use the command `npm -v` to check whether your new version of npm was successfully installed or not.
Further, to update node.js using npm, use [n module](https://www.npmjs.com/package/n). Also, use the following code to clear cache memory from your Node Package Manager and install the latest version of node.js:
```
sudo npm cache clean -f
sudo npm install -g n
sudo n stable
```
***(Note: If you are looking for a specific version of node.js, you can also use the command n #.#.#.)***
**Also, See the Steps for Updating NPM (Node Package Manager) in Visual Studio**This will not take much of your time. If you have installed node.js from their official website, you probably have installed NPM with it. To check it, you can use the command line: `“npm-v”`.
**To upgrade npm in Visual Studio, you can follow the command:**
```
cd C:\Program Files (x86)\Microsoft Visual Studio 14.0\Web\External\ npm install npm@latest
```
This is how you can update Visual Studio’s NPM version to the latest one.
**Final Words — Improve Your Skills with the Right Knowledge**
There are various other techniques to update node.js that can help you take the right steps at the right time. Nevertheless, many developers neither having a strategy for effectively performing node.js update. Spending a few hours on reading the steps I have shared here can make a great difference to your knowledge on node.js updates and development processes. | https://habr.com/ru/post/467335/ | null | null | 1,142 | 68.26 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
Hi!
attached comes my MapServer ebuild contribution.
MapServer is an OpenSource development environment for building spatially enabled Internet applications.
The only hard dependence is media-libs/gd that already exists on portage tree.
This has its own license ... but I don`t know how to include this, so it's attached too.
Created an attachment (id=42853) [edit]
mapserver-4.2.5.ebuild (New Package)
It could be placed at app-sci. But, perhaps, we could create the science`s own
tree. If so, it should be like sci-geo or sci-gis.
Created an attachment (id=42854) [edit]
mapserver-4.2.5 license file
A new license
A new USE tag is used in this ebuild: proj.
proj "Adds support for dev-libs/proj cartographic projection library"
Cool!
What would be the best way to have support for the various mapscript modules (perl, php, etc..)?
Created an attachment (id=44845) [edit]
mapserver-4.2.5.ebuild with gdal and postgis
I have added support for gdal (ebuild can be found here in the bug system) and
postgis just for those who need it. See 'emerge -vp mapserver'.
I added on php support, so that it now generates php_mapscript.so. To do this I
had to patch the php ebuild to use regex=system. This is a requirement rather
than the default regex handling -- thus I included with the php patch
USE="mapserver..." e.g.:
# You must rebuild php if you have it already installed w/o the mapserver use
flag.
USE="mapserver proj tiff php gdal" emerge php mapserver
Created an attachment (id=45224) [edit]
Patch to add php support to mapserver ebuild
cd /usr/portage/*/mapserver
# or
# /usr/local/portage*/mapserver
# then
patch -p0 < mapserver_w_php.patch
Created an attachment (id=45225) [edit]
mapserver ebuild w/ php support
Php must have regex=system.
Created an attachment (id=45226) [edit]
Patch for php ebuild to make it work w/ mapserver
This patch simply changes the configure setting for php's regex handling.
I think that it would be good to mention that mapserver php module works only
when php is run as cgi. I think that this is still true. I think it would be
good to add this notice to the mapserver ebuild. This can save troubles :-)
Yes, probably a good idea. I'll add a little echo in somewhere to that affect
and maybe add this url as well:
Created an attachment (id=45357) [edit]
Newest ebuild w/ php support
Simply added some einfo and made small change to path (now uses vars/was
hardcoded).
Hi,
as of version 4.4 there is a possibility to compile mapserver php module (php_mapscript.so) for php running in DSO mode (that is not in cgi mode). This feature unfortunatelly requires some regex/*.o files from php build. I'm not an ebuild programmer but I think that the most simple solution would be if mod_php ebuild laid off (= installed) somewhere the regex/*.o files. Then mapserver could easily use the same regex and compile as module for php running as a DSO.
Here is the info from mapserver's README.CONFIGURE:
--with-php-regex-dir=DIR Specify directory where the PHP4 bundled REGEX
object files (regex/*.o) are available. Required in
order to compile the PHP/MapScript module with PHP
configured as a DSO.
Would this approach be correct?
Yes, I think so. Someone more familiar w/ the php build than myself could
probably be of some help.
PHP's Makefile (at least version 5) has the following targets which produce the
regex/*.o files:
regex/regcomp.lo
regex/regexec.lo
regex/regerror.lo
regex/regfree.lo
Wouldn't it be better to just perform a make of these targets (in the mapserver
ebuild) using a user's portage php source and point --with-php-regex-dir to the
regex directory? The advantage of this is mod_php doesn't need to be altered.
I managed to do this by hand from source (on a RedHat system), so I imagine it
should be possible to do from an ebuild. I'm new to ebuilds/gentoo so apologies
if I haven't notice some insurmountable obstacle!
Created an attachment (id=54349) [edit]
Update of ebuld file
I noticed that "proj" and "gdal" are now in the "sci-libs" directory in the
latest portage tree, so I made changes to the ebuild accordingly.
I also added a "if use php" check to the "src_install" function so that the
ebuild would not try to install "php_mapscript.so" unless "php" is in use.
Is there a consensus on where to put the mapserver ebuild in the portage tree?
From the comments I noticed someone has sugested "sci-geo" or "sci-gis".
Looking at the latest portage tree I see there is already "sci-geosciences".
Applications like GRASS reside there. Perhaps mapserver should go there as
well?
Geosciences seems to be the correct place since this is a GIS application.
As for the regex stuff. Mapserver itself does not need the php includes, tho mapscript/php does. I've been working on a project for the last few weeks on mapserver.
To compile mapserver/mapscript with php DSO suport you need the source of mod_php so having the work directory availible is needed. however as most people don't have keepwork in KEYWORDS we'll need to have a workaround. generally I ebuild mod_php compile then configure mapserver.
I'll also have a look at the php ebuilds and see if there is a clean way of doing this.
*** Bug 20106 has been marked as a duplicate of this bug. ***
*** Bug 39972 has been marked as a duplicate of this bug. ***
*** Bug 91774 has been marked as a duplicate of this bug. ***
I marked all other open requests for this app as duplicates of this bug. Having
4 separate requests for one ebuild is just waste of development effort.
Created an attachment (id=58494) [edit]
mapserver-4.4.2.ebuild
I present my apologies for my last post (on the bad bug...), in fact I've made
this post too quicly just before a trip, realy sorry ..
This ebuild support the mapscripts : ruby, perl, python, tcl, java, php and
mono. The lastest compile, but I'm unable to run the compiled files (which are
exe files) cause I'm a n00b in the mono environment, help for this language is
NOTE:
If you have proj in your USE variable then mapserver will be installed with
the wmsclient support.
If you use, in addition to postgis, the gdal option then the ogr wfs wcs
wmsclient wfsclient will be enabled automaticaly.
The examples files for each languages supports are now located directly into
the /usr/share/doc/mapserver-4.4.2/mapscript directory (I use /opt/mapserver
before but I don't think it's a good idea to do that way .... ).
For the geos support please refer this bug : (here I use the last one).
For postgis support I use this ebuild : which is extracted
from this bug : (you could also
use the 1.0.0 version of the ebuild).
If someone have any kind of problem using this ebuild thanks to use this
bugzilla interface to inform me.
Regards.
Created an attachment (id=58498) [edit]
The patch for php-cgi ebuild
When this patch is applied, you are able to use the option "mapscript" to
compile the php-cgi ebuild with the well known --with-regex=system option. As
you'll be warned when installing the mapserver ebuild, you also need to add the
noclean flag to your FEATURES in order to use the sources tree where php-cgi
has been compiled to compile mapserver.
Now I hope that you've got all the requiered parts.
(else you could ever find what I forget here : ,
but thanks to put a note on that page)
Created an attachment (id=58756) [edit]
DBD-XBase-0.240.ebuild
Maybe not the right place to put this ebuild but it's so simple ...
For a complete usablity of the sample perl scripts you need this to be
installed (the insertion in the mapserver ebuild is not made for the moment
cause it imply that we've already add the DBD-XBase-0.240.ebuild to portage
tree or to your overlay ... ).
Was building mapserver today on a fresh system and it looks like there is a
swig
dependency that needs to be in the ebuild, or one of the dependant ebuilds.
This
is the first error , there were a few more.
looking for Tcl in /usr
found lib/tclConfig.sh in /usr
looking for Swig in /usr/local
can not find swig.h in /usr/local/include
using pre-built swig tcl interface
tcl version = 8.4
creating Makefile
swig -tcl8 -dhtml -namespace -DIGNORE_MISSING_DATA -DNEED_STRLCAT -DUSE_EPPL
-DUSE_PROJ -DUSE_PROJ_API_H _OGR -DUSE_GDAL -DUSE_WMS_SVR -DUSE_WMS_LYR -DUSE_WFS_SVR -DUSE_WFS_LYR
-DUSE_WCS_SVR -I. -I/usr/include -I/usr/include/gdal \
mapscript.i
make: swig: Command not found
make: *** [mapscript_wrap.c] Error 127
Created an attachment (id=61374) [edit]
mapserver-4.4.2.ebuild
Thank you for your post and scuse me, it's my fault. Indeed as you could see in
the old ebuild there is no requirement for the tcl support but you need tcl and
swig so you could add it by yourself (add this line in the DEPEND set : tcl?
(dev-lang/tcl dev-lang/swig) ) or simply use this new proposed version of the
mapserver ebuild. That's must work now for the tcl support. If anyone have
tested the mono support, it will be welcome to notice here the results gotten.
The same modification must be made for the mono support
(If you want the lastest release made, you could use this link : and browse
the overlay, you could also read the corresponding "emerging log" in the "logs" directory).
Hmm,
I would like to test mapserver with postgis and mapscirpt, but this is what I
got when emerging mapserver. How should I go on?
--
evis
----8<------
i686-pc-linux-gnu-gcc -c .c -o
shp2img.o
i686-pc-linux-gnu-gcc .o -L.
-lmap -lgd -L/usr/lib -lgd -ljpeg -lfreetype -lpng -L/lib -lz -lXpm -lX11 -lpdf
-ltiff -ljpeg -lfreetype -lpng -L/lib -lz -lXpm -lX11 -lproj -ljpeg
-L/usr/lib -lpq -L/usr/lib -lcurl -lidn -lssl -lcrypto -ldl -lssl -lcrypto -ldl
-lz -lc -lm -lstdc++ -o shp2img
i686-pc-linux-gnu-gcc: /usr/lib/lib: No such file or directory
i686-pc-linux-gnu-gcc: /usr/lib/-lpq: No such file or directory
make: *** [shp2img] Error 1
!!! ERROR: sci-geosciences/mapserver-4.4.2_p20050608 failed.
!!! Function src_compile, Line 120, Exitcode 2
!!! make failed
!!! If you need support, post the topmost build error, NOT this status message.
---->8-------------
And emerge info:
evis mapserver # emerge info AMD Athlon(tm) XP 2700+
Gentoo Base System version 1.6.12
Python: dev-lang/python-2.2.3-r5,dev-lang/python-2.3.5 [2.3.5 (#1,
Apr 30 2005, 12:17:20)]
dev-lang/python: 2.2.3-r5, 2.3.5
sys-apps/sandbox: [Not Present]
sys-devel/autoconf: 2.13, 2.59-r6
sys-devel/automake: 1.5, 1.9.5, 1.6.3, 1.7.9-r1, -march=athlon-xp -fomit-frame-pointer ="-O2 -march=athlon-xp -fomit-frame-pointer -pipe"
DISTDIR="/usr/portage/distfiles"
FEATURES="autoaddcvs autoconfig candy ccache distlocks noclean sandbox sfperms
strict"
GENTOO_MIRRORS=""
MAKEOPTS="-j2"
PKGDIR="/usr/portage/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
PORTDIR_OVERLAY="/usr/local/portage"
SYNC="rsync://trumpetti.atm.tut.fi/gentoo-portage"
USE="x86 3dnow X acpi adns alsa apache2 apm arts avi berkdb bitmap-fonts bonobo
cdr crypt cups curl doc dvb dvd eds emboss encode esd ethereal fam flac
foomaticdb fortran gd gdal gdbm geos gif gnome gphoto2 gpm gps gstreamer gtk
gtk2 gtkhtml ieee1394 imagemagick imap imlib innodb ipv6 java jpeg junit kde
libg++ libwww mad maildir mikmod mmx motif mozilla mp3 mpeg mysql ncurses nls
ogg oggvorbis opengl oss pam pdflib perl png postgres proj python qt quicktime
readline samba sasl scanner sdl slang speex spell ssl svga tcltk tcpd tetex tiff
truetype truetype-fonts type1-fonts vorbis xine xml xml2 xmms xv zlib
userland_GNU kernel_linux elibc_glibc"
Unset: ASFLAGS, CBUILD, CTARGET, LANG, LC_ALL, LDFLAGS, LINGUAS
emerging version of gdal helped.
Mapserver 4.6.0 is now running (I didn't use ebuild).
./mapserv -v
MapServer version 4.6.0 OUTPUT=GIF OUTPUT=PNG OUTPUT=JPEG OUTPUT=WBMP OUTPUT=PDF
OUTPUT=SVG SUPPORTS=PROJ SUPPORTS=FREETYPE SUPPORTS=WMS_SERVER
SUPPORTS=WMS_CLIENT SUPPORTS=WFS_SERVER SUPPORTS=WFS_CLIENT SUPPORTS=WCS_SERVER
SUPPORTS=GEOS INPUT=EPPL7 INPUT=POSTGIS INPUT=OGR INPUT=GDAL INPUT=SHAPEFILE
--
evis
I have looked into mapserver-4.6.0 and found that it no more relies on regex
object files from php. So it seems there might be some shift with this ebuild to
get it into the portage tree.
Can someone post an ebuild for the new version 4.6?
Created an attachment (id=64358) [edit]
mapserver-4.6.0.ebuild
I just come back from hollyday ... and I just finish to create a "pre-release"
of the mapserver-4.6.0 which is avaible (as always ;)) from the page. I compile it without tcl support for now, but the
ruby support works as fine as with the old ebuild, idem for python and perl (if
Xbase was installed via the g-cpan.pl script or via the ebuild which could be
downloaded here but it's a wrong way ... ), the tests of this scripts are just
made with the examples which you could found in the
/usr/share/doc/mapserver-4.6.0/mapscript/examples/ directory (as mentioned
during the "emergeance" of the ebuild) and all mapscript's support mentioned
before work just fine.
Testers for mono support are always require.
Thanks in advance.
Hope you enjoy this new ebuild ...
Created an attachment (id=64604) [edit]
gdal-config.patch
Hi,
after an emerge -uv world I have the same error as that described in comment
#29 . There is indeed a problem with gdal-1.2.5 ebuild, the mapserver configure
script use the gdal-config command but if you use "gdal-config --libs" from
command line then you get this result :
gdal-config --libs
And it's here that the error come from.
So I've just made a patch which solve the problem. In this patch, which is very
simple, you could see that the call to gdal-config --dep-libs return the
desired values except the /usr/lib/libgdal.a so. So I've just added it at the
begining of the "GDAL_LIB" variable's value and that seems to work well (here
it works like a charme after adding this line : "epatch
${FILESDIR}/gdal-config.patch" just before the line which contain the econf
call).
This patch must be used with the lastest ebuild version (so the 4.6.0 version),
I now work on another way to solve this problem. Cause if we choose to use the
patch way, we need to make one for each version of the mapserver ebuild, but
it's always the same line to change so maybe a use of sed could be enough to
solve this problem. If someone have another idea thanks in advance to give it
to us.
If someone has more informations on what is going on with gdal-1.2.5.ebuild, he
is very welcome to give it to us. Maybe we could create a new bug to specify
this error.
To compile mapserver with gdal-1.3.0, remove patch_it() from mapserver ebuild
and it will work fine.
when i try to load the php_mapserver module i get:
Unable to initialize module\nModule compiled with module API=20020429, debug=0,
thread-safety=0\nPHP compiled with module API=20020429, debug=0,
thread-safety=1\nThese options need to match\n in Unknown on line 0
in the apache log, so the module is clearly not matching phps compile options,
any suggestions how this can be fixed ?
Created an attachment (id=69105) [edit]
updated for dev-lang/php with some preliminary version detection...
see for more install details and
some examples
Rember,if you want the mapscript support, to edit the file
/etc/php/cgi-php4/php.ini adding
extension= "php_mapscript.so"
I have tried the ebuild updated for dev-lang/php. I have both php4 and php5
versions installed. Though I used eselect to set php to version 5, the ebuild
still used php4 include files. I think that using 'eselect php show' or
something like that would be better instead of 'portageq match / dev-lang/php'
which returns on my systems this:
dev-lang/php-5.0.5-r1
dev-lang/php-4.4.0-r1
The best approach would be if the ebuild could install mapserver for all
installed php main versions (4 and 5).
Created an attachment (id=70849) [edit]
files/mapserver-4.6.1_phps.patch
This patch will be used by the futur mapserver ebuild to ensure that, if both
dev-lang/php-4* and dev-lang/php-5* are installed on the target box, the
php_mapscript.so and eventualy (if you have "proj" in your use flags)
php_proj.so will be built and installed in their respectives extension-dir.
It could be used in other distro to install php_mapscript for both php versions
in "one pass" by using the new configure options : --with-php4 and --with-php5.
To use it on others distros, you must do as it will be done in the
mapserver-4.6.1 ebuild :
1) copying the mapscript/php3 directory into mapscript/php5,
2) applying the patch
3) be sure to run autoreconf before doing anything in next step
4) runing ./configure --your-options --with-php4=/path/to/php4. (and/or php5)
WARNING :
If you use php4 and you get compilation error message about the php_header
function call from the php_mapscript.c file, then you need to remove then end
of this patch before applying it (remove all lines after this one : "---
mapscript/php3/php_mapscript.c 2005-06-14 18:03:35.000000000 +0200").
Created an attachment (id=70853) [edit]
mapserver-4.6.1.ebuild
This is the new mapserver ebuild which, as requested, install the mapscript
support for both php4 and php5.
This is only the *first purpose* for installing mapserver for both php-4* and
php-5* and it must be conciderate as this.
In fact I choose the simplier way I've found to handle this, but if someone
have already thougth about another way to handle this then thx to post it here.
I stay open to all propositions.
In fact, I simply make a copy of the whole mapscript/php3 into mapscript/php5
and add the --with-php4 and --with-php5 options to the configure.in file
(that's why there is a new autotools inheritance for calling the eautoreconf
function). The whole copy will not be necessary in the futur version, but it's
now needed because of the php_mapscript.c file which must be patched only for
php-4* versions (tests was made with php 4.4.0-pl1-gentoo for the php4 version,
this was discussed in more detail in the Comment #40 Warning).
The mono support would be removed for the futur versions if I'm unable to find
a way to compile the mapscript support for this language.
AS before the sample data could be found in :
/usr/share/doc/mapserver-4.6.1/tests/ and sample scripts are in :
/usr/share/doc/mapserver-4.6.1/mapscript/examples/your_favorite_language.
All supports (except for mono as discribed above) seems to work well (for
sample data discussed above).
NEWS:
Cleaner syntax.
Appropriates die messages was added.
Inheritances added : depend.php, autotools.
WARNING:
Mapscript support which must not be used (don't compile here) :
* mono
NOTE:
As you've maybe already seen in the ebuild, you'll need a patch called
mapserver-4.6.1_php4pb.patch which is not available here (you could find it
here :).
Indeed I think it's not realy relevant to post it here cause it only consists
in a copy of the mapserver-4.6.1_phps.patch end (the part needed was discribed
in the Comment #40 warning, so simply add the part from : "---
mapscript/php3/php_mapscript.c 2005-06-14 18:03:35.000000000 +0200" to
then end of the patch into files/mapserver-4.6.1_php4pb.patch).
TODO :
* Recheck if the php-config call is realy needed in this ebuild and if it
can't be remplaced by a new eclass function call. (thx for your work stuart)
* Verify optiomization options used in all makefiles involved in the
compilation process.
* Make the tests for all the possible php-4* php-5* couples (for all versions
availables in portage, tests was only made with dev-lang/php-5.0.5-r1 and
dev-lang/php-4.4.0-r1).
* Make the tests when only one version of php was installed (not tested now
cause it must work without modification for already tested versions ... ).
* Check the php_mapscript.so and php_proj.so linking (indeed there is still
an uneedded reference to libmysqlclient.so for both libraries).
* Learn more about the mono language, to be able to add its mapscript support
(if someone could help me on that part he's very very welcome ...).
I need your opinions before continuing in that way.
When trying the latest ebuild, I got this error:
>>> md5 files ;-) mapserver-4.4.1.ebuild
>>> md5 files ;-) mapserver-4.4.0.ebuild
>>> md5 files ;-) mapserver-4.6.1.ebuild
>>> md5 files ;-) mapserver-4.4.2.ebuild
>>> md5 files ;-) mapserver-4.6.1.ebuild.my
>>> md5 files ;-) mapserver-4.2.5.ebuild
>>> md5 files ;-) mapserver-4.6.0.ebuild
>>> md5 files ;-) files/digest-mapserver-4.2.5
>>> md5 files ;-) files/digest-mapserver-4.4.0
>>> md5 files ;-) files/digest-mapserver-4.4.1
>>> md5 files ;-) files/digest-mapserver-4.4.2
>>> md5 files ;-) files/digest-mapserver-4.6.0
>>> md5 files ;-) files/digest-mapserver-4.6.1
>>> md5 src_uri ;-) mapserver-4.6.1.tar.gz
*
* Using dev-lang/php-5.0.5-r1
*
* Checking for required PHP feature(s):
* Discovered missing USE flag cgi
*
* dev-lang/php-5.0.5-r1 needs to be re-installed with all of the following
* USE flags enabled:
*
* cgi
*
I only use php_mapscript.so with php as Apache module so I do not need the cgi
USE flag ON for php. But I don't know what's the policy for this, whether to
force users to have the cgi USE flag for php ON although they will use php only
as Apache module or let them have the cgi USE flag OFF.
At this moment I use previous mapserver ebuild and it works fine with php having
cgi flag off.
Created an attachment (id=70860) [edit]
mapserver-4.6.1.ebuild
Thank you Miroslav
Created an attachment (id=70860) [edit]
mapserver-4.6.1.ebuild
Thank you Miroslav ulc for your fast and relevant comment and please sorry me
for this forgotten thing. I've just updated the ebuild following your
instructions.
All remarks, warnings and notes from previous version still the sames.
NEWS:
* Using depen.php.eclass to check if php as cgi or cli is available.
WARNING :
* tests was only made with php as *cgi* by now.
* two php versions with different types (cgi and cli at the same time, i.e.
php4 as cgi and php5 as cli) was not tested by now but wouldn't work.
TODO (append the previous one) :
* tests must be made for php as apache module.
* handle when both php are in different state (one version configured as cli
and the other one as cgi).
I thank you, FENOY G
I thank you, FENOY Gérald, for the time you spend on this ebuild.
I have merged the updated ebuild and it built successfully for both php versions
I have installed.
I have tested php_mapscript.so for basic functionality (display of raster and
vector maps) using:
dev-lang/php-4.3.11-r1
dev-lang/php-5.0.5-r1
(both as Apache module) and it works. Thank you, Gérald.
The ebuild fails when I tried to build mapserver with only one version of php
installed (either php4 or php5). It only works if I have both installed.
When I tried with only php5, the ebuild failed when it tried to do
'cp *.so ../php4/ || die "Unable to copy php4 mapscript object files"'
This is probably because the mkdir calls at src_unpack only happen if two
versions of PHP are installed.
Building with only php4 also fails. It fails at the epatch call as you are
missing the 'cd ${S}' call.
It also fails with the 'cp *.so ../php4/" call.
Created an attachment (id=70931) [edit]
mapserver-4.6.1.ebuild
Thank you Ehud Shabtai for your comment.
I've corrected the ebuild following your instructions.
Hope it works now.
I've this error with mapserver 4.6.1
configure: checking for curl-config...
checking for curl-config... /usr/bin/curl-config
found libcurl version 7.15.0
configure: error: libcurl version 7.10.1 or more recent is required.
Created an attachment (id=72947) [edit]
mapserver-4.6.1_curlv.patch
Hi Mateo, I currently rewrite the mapserver ebuild, but there still some work
to do ...
It's not a problem relative to your architechture, indeed I've get the same
error (on x86) which tell us that the curl vesion we use is too old even if
it's not the case. So there are some weeks I've made a patch (sorry I did't
have the time to put it on the bugzilla), so here is the patch. The ebuild will
be updated as soon as possible.
I have lot of work this times ...
Hope this help.
Thanks, now it works.
I changed the ebuild with this patch, I hope it's correct.
--- mapserver-4.6.1.ebuild 2005-11-14 21:02:58.000000000 +0100
+++ mapserver-4.6.1.ebuild 2005-11-23 22:54:23.000000000 +0100
@@ -76,6 +76,8 @@
fi
fi
fi
+
+ epatch "${FILESDIR}"/mapserver-4.6.1_curlv.patch
}
src_compile() {
4.6.2 is out...You can found more info here:
"MapServer 4.6.2 has been released. This release contains no new
functionality and only bug fixes since 4.6.1. The complete list is
included below.
The source is available on the website at
Daniel "
merge fails, saying I don't have dev-lang/php.
The Merge Attempt:
beefy mapserver # emerge --resume mapserver
*** Resuming merge...
>>> emerge (1 of 1) sci-geosciences/mapserver-4.6.1 to /
>>> md5 files ;-) mapserver-4.6.1.ebuild
>>> md5 files ;-) files/digest-mapserver-4.6.1
>>> md5 src_uri ;-) mapserver-4.6.1.tar.gz
!!! ERROR: sci-geosciences/mapserver-4.6.1 failed.
!!! Function has_php, Line 233, Exitcode 1
!!! Unable to find an installed dev-lang/php package
!!! If you need support, post the topmost build error, NOT this status message.
What PHP I *do* have installed:
The php I have got installed is:
* dev-php/mod_php
Latest version available: 4.4.0-r9
Latest version installed: 4.4.0-r3
Size of downloaded files: 5,071 kB
Description: Apache module for PHP
License: PHP-3
* dev-php/php
Latest version available: 4.4.0-r4
Latest version installed: 4.4.0
Size of downloaded files: 13,052 kB
Description: PHP Shell Interpreter
License: PHP-3
My Emerge Info:
beefy mapserver # emerge info
Portage 2.0.54 (default-linux/x86/2005.0, gcc-3.3.6, glibc-2.3.5-r2,
2.6.14-gentoo-r5 i686)
=================================================================
System uname: 2.6.14-gentoo-r5 i686 AMD Athlon(tm) XP 2000+
Gentoo Base System version 1.6/terminfo /etc/env.d"
CXXFLAGS="-march=athlon-xp -O3 -pipe -fomit-frame-pointer"
DISTDIR="/usr/portage/distfiles"
FEATURES="autoconfig ccache distlocks fixpackages sandbox sfperms strict"
GENTOO_MIRRORS=""
LINGUAS="en zh_CN"
MAKEOPTS="-j2"
PKGDIR="/usr/portage/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
SYNC="rsync://rsync.au.gentoo.org/gentoo-portage"
USE="x86 3dfx X alsa apache2 apm arts audiofile avi berkdb bitmap-fonts bonobo
bzip2 cdr cjk crypt cups curl dri eds emboss encode esd ethereal exif expat fam
ffmpeg flac foomaticdb fortran gd gdbm gif gimpprint glut gpm gstreamer gtk
gtk2 gtkhtml guile howl idn imlib ipv6 java jpeg kde lcms libg++ libwww mad
mhash mikmod mng motif mozilla mp3 mpeg mysql ncurses nls ogg oggvorbis opengl
oss pam pcre pdflib perl php png postgres ppds python qt quicktime readline
samba sdl snmp sqlite ssl tcltk tcpd tiff truetype truetype-fonts type1-fonts
udev unicode usb vorbis xml xml2 xmms xv xvid zlib linguas_en linguas_zh_CN
userland_GNU kernel_linux elibc_glibc"
Unset: ASFLAGS, CTARGET, LANG, LC_ALL, LDFLAGS, PORTDIR_OVERLAY
I can't install dev-lang/php just to satisfy it either :
gummay@beefy ~ $ emerge -p dev-lang/php
These are the packages that I would merge, in order:
Calculating dependencies ...done!
[blocks B ] dev-php/mod_php (is blocking dev-lang/php-5.0.5-r5)
[blocks B ] dev-php/php (is blocking dev-lang/php-5.0.5-r5)
[ebuild N ] app-admin/php-toolkit-1.0-r2
[ebuild N ] dev-lang/php-5.0.5-r5
Hi Tom,
as mentionned here :, you
would migrate to the "unified" dev-lang/php ebuild. You could find relevant
documentation on howto to do this here : and, to finish your
installation, just have look at this :.
Hi!
First I have to apologize for my poor english ...
I gave mapserver-4.6.1.ebuild a try. (Without curlv-patch)
With "tcl"-flag set it said:
QA Notice: the following files contain insecure RUNPATH's
Please file a bug about this at
For more information on this issue, kindly review:
/usr/lib:/usr/lib/MapscriptTcl1.1:/usr/lib:/var/tmp/portage/mapserver-4.6.1/work/mapserver-4.6.1:/usr/lib
usr/lib/MapscriptTcl1.1/libMapscript11.so
!!! ERROR: www-apps/mapserver-4.6.1 failed.
!!! Function dyn_install, Line 1057, Exitcode 0
!!! Insecure binaries detected
!!! If you need support, post the topmost build error, NOT this status message.
with "-tcl" (and "-ruby -xpm") it worked fine.
Problem with "ruby" was:
swig -ruby mapscript.i
make: *** No rule to make target `ruby.h', needed by `mapscript_wrap.o'. Stop.
And "xpm? ( media-libs/xpm )" does not exist.
*** Bug 129701 has been marked as a duplicate of this bug. ***
Created an attachment (id=84510) [edit]
mapserver-4.8.3.ebuild
Testing the new ebuild got this error:
../../maphash.h:69: Warning(801): Wrong class name (corrected to
`HashTableObj')
make: *** No rule to make target `ruby.h', needed by `mapscript_wrap.o'. Stop.
gekomachine geko # emerge info
Portage 2.0.54 (default-linux/x86/2005.1, gcc-3.3.6, glibc-2.3.5-r3,
2.6.14-gentoo-r2 i686)
=================================================================
System uname: 2.6.14-gentoo-r2 i686 Intel(R) Pentium(R) M processor 1500MHz
Gentoo Base System version 1.6.14
dev-lang/python: 2.3.5,=pentium3 -msse2 -O2 -pipe -fomit-frame-pointer"/eselect/compiler /etc/gconf /etc/terminfo /etc/env.d"
CXXFLAGS="-march=pentium3 -msse2 -O2 -pipe -fomit-frame-pointer"
DISTDIR="/usr/portage/distfiles"
FEATURES="autoconfig distlocks sandbox sfperms strict"
GENTOO_MIRRORS=""
LANG="it_IT@euro"
LINGUAS="it"
PKGDIR="/usr/portage/packages"
PORTAGE_TMPDIR="/var/tmp"
PORTDIR="/usr/portage"
PORTDIR_OVERLAY="/usr/local/portage"
SYNC="rsync://rsync.gentoo.org/gentoo-portage"
USE="x86 X acpi alsa apache2 apm asf audiofile avi berkdb bitmap-fonts bzip2
cdb cdr crypt cups curl dri dvd dvdr eds emboss encode exif expat fam ffmpeg
foomaticdb fortran gd gdbm geos gif glut gmp gpm gtk2 hal idn imagemagick imlib
ipod ipv6 isdnlog java jpeg junit kde lcms libg++ libwww mad mhash mikmod mng
motif mp3 mpeg mysql ncurses nls nvidia ogg oggvorbis opengl oss pam pcre
pdflib perl php png postgres pppd proj python qt quicktime readline ruby samba
scanner sdl speex spell sqlite ssl tcltk tcpd tiff truetype truetype-fonts
type1-fonts udev usb vorbis xine xml2 xv xvid zlib linguas_it userland_GNU
kernel_linux elibc_glibc"
Unset: ASFLAGS, CTARGET, INSTALL_MASK, LC_ALL, LDFLAGS, MAKEOPTS
Created an attachment (id=84539) [edit]
mapserver-4.8.3.ebuild
Hi,
thank you all for your feedbacks and comments.
Here the new ebuild' version which now use the webapp, java-pkg and ruby
eclass. So the java-config tool could be used to enable the mapscript java
package and the webapp-config one to install the mapserv cgi script where you
need it.
Some new patchs are required which I'll comment later..
Created an attachment (id=84540) [edit]
mapserver_tcl.patch
This patch solve the insecure RUNPATH problem described in Comment #53 and the
tclmodule.i dependancie.
Created an attachment (id=84541) [edit]
mapserver-4.8.3_php.patch
This patch make gentoo users able to compile the php mapscript and proj
extension for php version(s) installed (test was made with versions :
5.1.2-gentoo and 4.4.1-pl1-gentoo).
Created an attachment (id=84542) [edit]
mapserver_php4pb.patch
This patch solve problems encontred when trying to compile php mapscript
against only php 4 version.
Before posting this ebuild on the portage tree, I think it should be slotted.
Hi all,
I am a mapserver developer, mostly working on Java mapscript.
It seems that Java mapscript is not built correctly because the ebuild script
does not run the 'make interface' target in mapscript/java.
For historical reasons we provide a one-size-fits-all wrapper file in
mapscript/java, but its usage is discouraged, as per the README.
Note that the interface file requires swig at least >=1.3.21.
I will look into the ebuild and provide further comment asap..
(In reply to comment #65)
> .
>
I have a user report that it is working. Gentoo is the first distro to have
java mapscript support (almost) built-in.
Thanks,
Umberto
(In reply to comment #67)
>
>
GEOS support has been rewritten last week to support C API (it was previously
built and linked against C++ APIs). You will also need a recent GEOS to compile
mapserver successfully: 2.2.2 or higher - that's when length and area were
exposed to C API
BTW: GEOS support in mapserver is stable, thread-safe and cool.
I'm attempting to use the latest e-build from comment #67 and it is telling me
that it depends on ruby even thought the ruby USE flag is off.
can you past the output of it and eventually the emerge --info? I don't think
it a problem strictly related with mapserver
Hi Aran,
indeed you're right there is problem with the ruby eclass used in this ebuild.
You must add this line before eclasses inheritence : RUBY_OPTIONAL="yes".
This should solve the problem.
(In reply to comment #72)
>
>
Are you using cgi, fastcgi, php or mapscript and have you looked in apache's
error_log?
Umberto
[Wed Sep 20 14:02:04 2006] [notice] suEXEC mechanism enabled (wrapper:
/usr/sbin/suexec2)
[Wed Sep 20 14:02:05 2006] [notice] Digest: generating secret for digest
authentication ...
[Wed Sep 20 14:02:05 2006] [notice] Digest: done
[Wed Sep 20 14:02:05 2006] [notice] Apache configured -- resuming normal
operations
[Wed Sep 20 14:02:29 2006] [error] [client 127.0.0.1] File does not exist:
/var/www/localhost/ htdocs/favicon.ico
[Wed Sep 20 14:02:30 2006] [error] [client 127.0.0.1] File does not exist:
/var/www/localhost/ htdocs/ka-map2/htdocs/tools/kaExplorer/images, referer: s/kaExplorer/tools.css
[Wed Sep 20 14:03:04 2006] [error] [client 127.0.0.1] File does not exist:
/var/www/localhost/htdocs/favicon.ico
[Wed Sep 20 14:03:43 2006] [error] [client 127.0.0.1] File does not exist:
/var/www/localhost/htdocs/favicon.ico
After this, the connection between ka-map and apache seem lost....
Thanks
Luca
I have investigated more...Seem that each time you zoom or do a query on the
map,the number of MB of ram used by apache, increase. When it reach the limit,
map is no more usable
4.10 has been released:
The mapserver ebuild for 4.10.0 was in portage since the begining of the
week-end.
I hope it convince you and we could then close this old bug.
What new ? not so much, only the sos support has been added and some part has
been rewritten.
Thanks to give there your thoughs about this ebuild.
(In reply to comment #77)
> The mapserver ebuild for 4.10.0 was in portage since the begining of the
> week-end.
it could be masked ~amd64
Portage 2.1.1-r2 (default-linux/amd64/2006.1, gcc-4.1.1, glibc-2.4-r4,
2.6.18-gentoo-r3 x86_64)
There is a dependency missing in the 4.10.0 ebuild. If you have a version of
gdal less than 1.2.6 then you will get the following error message when you try
to emerge mapserver:
mapogr.cpp:165:28: error: gdal_version.h: No such file or directory
make: *** [mapogr.o] Error 1
This error is discussed in the following MAPSERVER-USERS post:
*** Bug 169463 has been marked as a duplicate of this bug. ***
Sorry, i forgot to say that i'm trying to use MapServer 4.8.4.
mapserver-4.8.4.ebuild, line 147: Called econf '--with-gdal' '--with-perl'
'--with-python' '--without-ruby' '--with-tcl' '--with-proj' '--without-postgis'
'--with-tiff' '--with-pdf' '--without-ming' '--without-java' '--with-ogr'
'--with-freetype' '--with-gd=/usr/lib64' '--with-geos=/usr/bin/geos-config'
'--with-wfs' '--with-wcs' '--with-wmsclient' '--with-wfsclient'
'--with-wmsserver' '--with-wmsclient' '--with-php=/usr/lib64/php5/include/php'
'--with-mapscript'
(In reply to comment #83)
>
>
You should emerge gd with USE="freetype jpeg"
Closing. | http://bugs.gentoo.org/69417 | crawl-002 | refinedweb | 6,316 | 66.33 |
Dynamixel and 3mxl driver
threemxl provides a library to communicate with Dynamixel servos and 3mxl motor control boards. It also allows multiple ROS nodes to communicate with a single Dynamixel chain using a shared_serial node.
As threemxl depends on some external packages, make sure to run
rosdep install threemxl at least once!
To quickly test the board, you can use the
console node. Just type
rosrun threemxl console, provided you have a
roscore running somewhere. Note that the console defaults to
/dev/ttyUSB0. You can specify a different port or namespace as argument.
The main classes are CDynamixel and C3mxl. These communicate with the hardware directly through a serial port (either LxSerial or LxFTDI). The ROS versions, CDynamixelROS and C3mxlROS, just subclass CDynamixel and C3mxl and set the package handler to CDxlROSPacketHandler such that the communication works over a shared_serial node. | https://docs.ros.org/en/indigo/api/threemxl/html/ | CC-MAIN-2021-21 | refinedweb | 141 | 57.27 |
On Sat, 2005-12-31 at 11:55 +0200, Pekka Enberg wrote:> > * u8, u16, ...> > * uint8_t, uint16_t, ...> > * u_int8_t, t_int16_t, ...> > From the above list, the first ones. See>. Please note that> there's also __le32 and __be32 for variables that have fixed byte> ordering.As ever, however, be aware that our esteemed leader is fickle.Especially when he's wrong, as he was on that occasion.The bit about namespace pollution is a red herring -- that's a goodenough reason for using '__u8', '__u16' etc. in those headers which areuser-visible and which mustn't require standard types, but it's noexcuse for the existence of the 'u8', 'u16' forms in code and headerswhich _aren't_ user-visible.The reason for the existence of the 'uXX' form is because once upon atime, the kernel was buildable with compilers which predated the C99standard types. It remains for historical reasons and because somepeople (especially Linus) have some kind of emotional attachment to it.The choice of whether to use 'uXX' or to use the proper standard'uintXX_t' types is to a large extent a matter of the individualdeveloper's taste. If you're writing large chunks of your own code, thendo as you see fit; if you're modifying existing code, then use what'sthere already.-- dwmw2-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at | https://lkml.org/lkml/2005/12/31/85 | CC-MAIN-2016-36 | refinedweb | 241 | 63.19 |
When i received the new Arduino MKR1000, i was surprised that there is no official getting started, tutorial, or support in the IDE. I decided to write a short getting started guide to avoid others to spend.
Let's begin withe the Led blinking example. We will use here the pin 6 instead of the 13 that come with the IDE example as illustrated bellow:
void setup() { pinMode(6, OUTPUT); } void loop() { digitalWrite(6, HIGH); // turn the LED on (HIGH is the voltage level) delay(500); // wait for 500ms digitalWrite(6, LOW); // turn the LED off by making the voltage LOW delay(500); // wait for 500ms }
To upload a sketch, choose Arduino/Genuino MKR1000 from the Tools > Board menu in the Arduino IDE, and select the correct serial port from the Tools > Serial Port menu. In my case the port is COM19 as illustrated in the Figure 6.
One uploaded, you should get something similar to the output on the Figure 7. The green Led should also blink each 500ms as illustrated in figure 8.
One of the main features of the MKR1000 is it's ability to access to a WiFi network. To be able to use WIFI, you have to install the library first.Install the WIFI 101 Library
There is many way to install the wifi101 library (you must use WIFI101 0.8.0) on the IDE. We propose in this tutorial to install this library using the Library manager. This method DO NOT WORK as is, it needs an extra manipulation described bellow while waiting for the library update. First go to Sketch > Include Library > Manage Libraries as illustrated in Figure 9.
Search "101" and install the WiFi101 library as illustrated in the Figure 10.
IF THE INSTALLED VERSION OF THE WIFI101 IS 0.7.0
Download the library from github. open the folder "%userprofile%\documents\Documents\Arduino\libraries\WiFi101" and replace the content of the library with the content of the "WiFi101-master" folder in the downloaded zip
To check the WiFi101 library, open the Sketch located at Examples > WiFi101 > CheckWifi101FrimwareVersion as illustrated in Figure 11.
You can use the flowing code to start a web server that can turn on and off the MKR1000 Led. this code is an adaptation from the example from the WiFi101 library called "SimpleWebServerWiFi". Once uploaded, you should see the address of the server in the Serial Monitor. Open it on any browser and you can enjoy executing the examples of the WiFi101 library.
TroubleshootTroubleshoot
#include <WiFi101.h> #include <WiFiClient.h> #include <WiFiServer.h> #include <WiFiSSLClient.h> #include <WiFiUdp.h> /* * This example is modified from the original file * */ #include <SPI.h> #include <WiFi101.h> char ssid[] = "yourNetworkSSID"; // your network SSID (name) char pass[] = "yourNetworkPassword"; // your network password int keyIndex = 0; // your network key Index number (needed only for WEP) int ledpin = 6; bool val = true; int status = WL_IDLE_STATUS; WiFiServer server(80); void setup() { Serial.begin(9600); // initialize serial communication Serial.print("Start Serial "); pinMode(ledpin, OUTPUT); // set the LED pin mode // Check for the presence of the shield Serial.print("WiFi101 shield: "); if (WiFi.status() == WL_NO_SHIELD) { Serial.println("NOT PRESENT"); return; // don't continue } Serial.println("DETECTED"); // attempt to connect to Wifi network: while ( status != WL_CONNECTED) { digitalWrite(ledpin, LOW); Serial.print("Attempting to connect to Network named: "); Serial.println(ssid); // print the network name (SSID); digitalWrite(ledpin, HIGH); // digitalWrite(ledpin, HIGH); }pin, HIGH); // GET /H turns the LED on } if (currentLine.endsWith("GET /L")) { digitalWrite(ledpin,); }
As the MKR1000 still brand new, there is some issues.
There is a Topic in Arduino forum about the wifi101 that may help. Another Topic discuss the IDE related issues.
If you have issues with the Arduino/Genuino MKR1000 port, you can use the Zero port. Note that the official documentation of the Zero board commands using the Programming Port, However, it do not work for the MKR1000. So i recommend using the Native USB Port.
I would recommend to you to try 3 things if your device is not detected by windows:
- Verify that the USB cable that you are using supports data. The D+ and D- data lines are missing in some charge only cables. For that, try to connect your android device or another board using the USB cable and see if windows can detect it.
- Verify that the USB port in your computer. For that, simply try to plug device in an other USB port and wait some second to see if there is any change in the windows device manager. Sometime, unplugging all the USB devices and restarting windows resolve this issue.
- Verify that the driver is installed correctly. Open the Device manager > Ports, unplug the MKR1000 and plug it, if you see a new device appear that is not recolonized as MKR1000, right click on this device and click on update the driver. Click on choose the driver from my computer than choose the Arduino drivers folder. This should update the driver and detect the device as MKR1000.
An interesting pins description is illustrated in Figure13. This description was included in the code (then removed) from SAMD; It was used for their experiments on the MKR1000. The commit is available on github SAMD repository.
Steps are very well explained in this link:
Do not hesitate to post comment on this tutorial if you need help. | https://www.hackster.io/charifmahmoudi/arduino-mkr1000-getting-started-08bb4a | CC-MAIN-2021-10 | refinedweb | 888 | 65.52 |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- USAGE
- SUPPORT
- AUTHORS
NAME
inc::latest - use modules bundled in inc/ if they are newer than installed ones
VERSION
version 0.500
SYNOPSIS
# in Makefile.PL or Build.PL use inc::latest 'Some::Configure::Prereq';
DESCRIPTION
WARNING -- THIS IS AN EXPERIMENTAL MODULE. It was originally bundled (as an experiment) with Module::Build and has been split out for more general use.
The
inc::latest module helps bootstrap configure-time dependencies for CPAN distributions. These dependencies get bundled into the
inc directory within a distribution and are used by Makefile.PL or Build Makefile.PL or Build.PL). This bundled
inc::latest is the one that determines which module to load.
Special notes on bundling.
inc::latest has a number of heuristics to discover module names,.
Managing dependency chains
Before bundling a distribution you must ensure that all prerequisites are also bundled and load in the correct order.
For example, if you need
Wibble, but
Wibble depends on
Wobble, and you have bundled
Module::Build, before uploading to CPAN.
USAGE
As bundled in inc/
Using "Author-mode", a special stub module will be created in your distribute directory as inc/latest.pm. In your Makefile.PL or Build.PL, you can then load
inc::latest to load bundled modules.
When calling
use, the bundled
inc::latest takes a single module name and optional arguments to pass to that module's own import method.
use inc::latest 'Foo::Bar' qw/foo bar baz/;
The implementation is private. Only the
import method is public.
Author-mode
When you have inc::latest installed from CPAN, then you are in author-mode if any of the Author-mode methods are available. For example:
if ( inc::latest->can('write') ) { inc::latest->write('inc'); }
Using author-mode, you can create the stub inc/latest.pm and bundle modules into inc.
- loaded_modules()
my @list = inc::latest->loaded_modules;
This takes no arguments and always returns a list of module names requested for loading via "use inc::latest 'MODULE'", regardless of whether the load was successful or not.
- write()
inc::latest->write( 'inc' );
This writes the bundled version of inc::latest to the directory name given as an argument. It almost all cases, it should be '
inc'.
- bundle_module()>
Eric Wilhelm <[email protected]>
This software is Copyright (c) 2009 by David Golden.
This is free software, licensed under:
The Apache License, Version 2.0, January 2004 | https://metacpan.org/pod/inc::latest | CC-MAIN-2015-40 | refinedweb | 400 | 51.55 |
Any help would be greatly appreciated. Does not recognize cat's game also goes into infinate loop when you try to use the same box twice?
I have spent hours trying to figure it out. I know it something simple.
Any ideas?
//Board Display Class - Tic-Tac-Toe Program #include <iostream> using namespace std; char board[3][3]; int checkWinState(int); bool boardDraw(int, int, int); void showBoard(); int main() { bool boxPlayed; int player, boxNum; int wrong; int turn=1; int check=0; int x, y; board[0][0] = '1'; board[0][1] = '2'; board[0][2] = '3'; board[1][0] = '4'; board[1][1] = '5'; board[1][2] = '6'; board[2][0] = '7'; board[2][1] = '8'; board[2][2] = '9'; showBoard(); while(check == 0) // checks for a continuation of the game (0 = continue) { if(turn % 2 == 0) // determines the player player = 2; //o else player = 1; //x cout << "Player " << player << " What box do you want? "; // gets box choice cin >> boxNum; if(boxNum >=1 && boxNum <= 9) // validates box number { switch(boxNum) // converts the boxNum to x and y values for { // a two-dimensional array case 1: x = 0, y = 0; break; case 2: x = 0, y = 1; break; case 3: x = 0, y = 2; break; case 4: x = 1, y = 0; break; case 5: x = 1, y = 1; break; case 6: x = 1, y = 2; break; case 7: x = 2, y = 0; break; case 8: x = 2, y = 1; break; case 9: x = 2, y = 2; break; } wrong = 0; boxPlayed = boardDraw(x, y, turn); // seek valid box if(boxPlayed == false) check = checkWinState(turn); // looks for a win or draw while(boxPlayed == true) // play box if condition is true { cout << " That box has already been played.\nPlayer " << player << " Choose again: "; wrong = 1; } if(check != 1 && wrong != 1) // makes sure the right player is indicated as winner { turn++; } } else cout << "Enter an empty box number of 1 through 9.\n"; } if(check == 1) // displays tic-tac-toe or draw { cout << "Tic-Tac-Toe! "; if(turn % 2 == 0) cout << "O Wins!!\n"; else cout << "X Wins!!\n"; } else cout << "Cat's Game!\n"; system("pause"); return 0; } // takes the row and the column as input and if the spot is open // it will put an x or an o there depending on which player's turn it is. // returns true if the move was successful and false if it wasn't. //checks if the spot on the board is open. input is one and two. bool boardDraw(int one, int two, int t) { if(board[one][two] == 'X') return true; else if(board[one][two] == 'O') return true; else if(t % 2 != 0) { board[one][two] = 'X'; showBoard(); return false; } else { board[one][two] = 'O'; showBoard(); return false; } } //prints out the board void showBoard() { for(int q = 0; q < 3; q++) { for(int z = 0; z < 3; z++) { if(z == 0) { cout << " "; } cout << board[q][z]; if(z < 2) { cout << " | "; } if(z == 2) { cout << endl; } } if( q < 2) { cout << "-----------------" << endl; } } cout << endl; } int checkWinState(int t) { char player; if (t % 2 != 0) { player ='X'; } else { player ='O'; } if (board[0][0] == player && board[0][1] == player && board[0][2] == player ) { return 1; } else if (board[1][0] == player && board[1][1] == player && board[1][2] == player) { return 1; } else if (board[2][0] == player && board[2][1] == player && board[2][2] == player ) { return 1; } else if (board[0][0] == player && board[1][0] == player && board[2][0] == player ) { return 1; } else if (board[0][1] == player && board[1][1] == player && board[2][1] == player ) { return 1; } else if (board[0][2] == player && board[1][2] == player && board[2][2] == player ) { return 1; } else if (board[0][0] == player && board[1][1] == player && board[2][2] == player ) { return 1; } else if (board[0][2] == player && board[1][1] == player && board[2][0] == player ) { return 1; } if (t>9) return 3; return 0; } | https://www.daniweb.com/programming/software-development/threads/72362/thought-i-had-it-figured-out | CC-MAIN-2018-43 | refinedweb | 649 | 60.18 |
The key to do so is the "explorer elevated unelevated factory" registry as described in The only caveat with this method: It's hard to distinguish different explorer instances that run How to Configuring Windows XP Offline Files for My Documents: Before we can configure Offline Files we need to create a shared folder in the server where we can host the I say "problems"; I'm sure there's ONE guy in Redmond to whom many of the answers to my questions are obvious. Now from the desktop clisk [Start] -> Run and type the server name presided ‘\\' (example: \\IBM2003).
Well. Thanks poppet View Public Profile Send a private message to poppet Find all posts by poppet #2 05-23-2003, 07:25 AM scouse Offline Registered User Join Date: Aug 2002 About the "explorer cannot be run as a different user" - that's only partially true. Now I did what was advised on this page here, and it solved the problem!
Can't find your answer ? sharetweetshare More Information Subscribe to our blog feed: Related Posts: Citrix XenApp/XenDesktop API Hooking Explained How to Enable BitLocker Hardware Encryption with SSDs Measuring the Impact of Folder Redirection - Application That is how Offline Files work. Here are the steps to follow: Enable Offline Files feature: Double-click My Computer and click on Tools Click on Folder Options and select the Offline Files tab.
Now logoff the workstation Here the Synchronization starts working. Too Many Files Available Offline Problem You have administratively assigned one or more directories to be available offline. Notice that if you try to do it from a Remote Descktop Connection you will get the Following error. Reset Offline Files Windows 10 Be sure you have enough disk space on the server disk.
This one did have me baffled for a little while but problems now solved................ File Sync: why you need it? If that is your case you can use this mapping. Publish Related resources Solvedexternal hdd error 'incorrect function' at d time of initialization showing unknown disc Forum Offline files/folders trying to synchronize other users of..
I asked the system administrator here to do something about it, but all he said was, 2 minutes seem acceptable ... Delete Offline Files Cache Windows 10 How to Make Windows 2008 (or Windows 7) to Search for File Contents How to show my ‘Contacts’ in the Address Book - Outlook HP ProLiant: Unable to open the event Carol ccr_carol View Public Profile Send a private message to ccr_carol Find all posts by ccr_carol #7 12-09-2011, 02:49 AM ciara jack Offline Registered User Join Date: Dec The time now is 04:30 AM.
On Win 8, however, forget it. SoftwareTipsandTricks Forum > Operating Systems > Windows XP Windows XP How to Disable File Synchronization User Name Remember Me? Clear Offline Files Cache Windows 7 Offline Files considers \\server.domain.com\share and \\server\share 2 independent namespaces. Clear Offline Files Cache Windows 10 Uncheck the Use Fast User Switching.
Today when i start the pc it give me this error The Boot Configuration Data file is mis Forum My samsung HT 330 DVD player keeps giving me "Codec not supported" Get More Info THANKS awfully! CSC has the mark of an arrogant man at its core…) Reply Jon August 8, 2014 at 10:33 # You can also access the shares directly by using $NOCSC$ on the The memory could not be "read" ASP.NET SQL - Win32Exception (0×80004005): The system cannot find the file specified 0×80131904 Autocomplete for Outlook 2007, Outlook 2010 and Outlook 2013 Can't open or Delete Csc Folder Windows 7
Simple solution. Windows Explorer does not work, though: it cannot be run as different user. Mark Forums Read | View Forum Leaders Website Homepage - Site Map - Top Powered by vBulletin Version 3.5.2Copyright ©2000 - 2016, Jelsoft Enterprises Ltd. useful reference We have Notebooks with redirected Documentsfolder to the server and activated the gpo setting for subfolders, but this does not work proper.
In our example we had created a share folder named Users on our IBM2003 server. Delete Offline Files Windows 10 Example: If the path is \\full.qualified.domain.com\dfsroot\users, the synchronization must be enabled on dfsroot (which is a share on your domain controllers) and on users (which probably is a share on your After ticking No files or programs from the shared folder are available offline and logging off and back on the superfluous cached files were gone.
Steve More about : offline file synchronize incorrect function error Rose82 22 November 2009 20:42:47 I use a program to file synchronization, which is called File Sync. Locate and right-click the My Documents folder and click on Make Available Offline option and close the Windows. regards, Christian Reply Mat May 1, 2014 at 15:45 # I've no end of frustration (scrap that - outright hatred) for Offline Files. Reinitialize Offline Files Windows 10 poppet View Public Profile Send a private message to poppet Find all posts by poppet #5 12-27-2007, 10:46 AM Paldrion Offline Registered User Join Date: Dec 2007 Posts:
Why does CDO.Message give me 8004020F errors? The 'specialist' of the company I work for had not been able to resolve the problem for me. The 'make available offline' option under the network drive is greyed out meaning I cannot change the option. this page All Rights ReservedTom's Hardware Guide ™ Please click here if you are not redirected within a few seconds.
We worked around it by changing the configuration of those network shares that were not supposed to be available offline. Unfortunately, very strict permissions prevent even administrators to peek inside the cache - only SYSTEM has full access. It is permissions thing. However, the users have files from other directories in the Offline Files cache, too.
The synchronization runs as planned: the Offline Files icon in the notification area of the taskbar spins regularly and the Offline Files event log (Applications and Services Logs -> Microsoft -> But the advice given here has! Forum SolvedCMD-chkdsk error "index $0 of file 25 is incorrect" Forum CD Drive:is not accessable, Incorrect function error Forum Off-line file synchronization Forum Changing server name used in offline file synchronization Double click on "My Computer" Click "Tools" -> "Folder Options" Click on the "Offline Files" tab Uncheck the "Enable Offline Files" check box Well, hope this helps someone else that may
After transferring files it now gives me an error message of E:/ is not accessi Forum Batch file runs, but then gives error can't find file Forum Offline files - synch There may be additional settings you can choose that effectively allow you to keep it from working. Once we changed the location of Cookies and History back to the default locations in the user profile and rebooted, Offline Files synchronization worked as expected. What do you think about it?
Solution When Offline Files is enabled, network access is filtered and potentially redirected to the local offline files cache. I have a pool laptop that is connected to a domain and around 5 or 6 users will share the machine. Password Register FAQ Members List Calendar Today's Posts Search Windows XP How to Disable File Synchronization Thread Tools Search this Thread Rate Thread Display Modes #1 05-23-2003, 05:38 Ask !
For more information about Offline Files see also my other articles about this topic. Make sure you have enabled Offline Files synchronization on each share by checking Only the files and programs that users specify are available offline: If you are using DFS, this setting Locate and right-click your desktop My Documents folder and select Properties and enter the path to the user shared location in the Target box and click ‘Apply’. Message rejected. #554 5.7.1 STOP: C0000218 {Registry File Failure} The registry cannot load the hive (file): \SystemRoot\System32\Config\SOFTWARE Synology - Accessing shared folders without password Technical Documentation The application failed to initialize
Then click ‘Yes’ in the ‘Move Documents’ windows. | http://dlldesigner.com/offline-files/offline-synchronization-windows-xp-error.php | CC-MAIN-2017-51 | refinedweb | 1,340 | 55.78 |
From: Andrey Semashev (andysem_at_[hidden])
Date: 2008-02-10 11:35:21
Sorry for the long post in advance...
> * What is your evaluation of the design?
In short, questionable. More elaborate notes follow:
* To my mind, the named destination design approach is not intuitive. I
would expect the writer to accept a stream (or a number thereof) as a
destination, not a string names and their definitions. Such code is more
difficult to modify - you can misspel the destination name or forget one
in the list if names in destination::named and it'll only show in run
time. It also complicates making the list of destinations configurable
(e.g. loading logging settings of the user's app from a file).
* The approach to adding formatters is not very intuitive. Seeing this
piece of code:
g_l()->writer().add_formatter( formatter::idx() );
g_l()->writer().add_formatter( formatter::append_newline() );
g_l()->writer().add_formatter( formatter::tag::file_line() );
g_l()->writer().add_formatter( formatter::tag::level() );
I cannot tell what the output will look like. A lambda-like syntax would
be much better.
* Lifetime of the objects provided by users, such as streams, should be
controlled by the library. There should be no cases when you require the
user to make sure the object exists long enough for the library. Use
shared_ptr instead. The destination::stream is an example of such case.
* Filtering looks way too minimalistic to me. As far as I can see, a
filter is always static in the way it doesn't decide which particular
log record passes and which does not. It just has a flag which tells
what to do. Additionally, I see levels concept which I feel is very
close to filters but has a different interface for some reason. You may
argue that it's up to user to provide filtering logic, but the library
provides no sufficient interface to implement a filter more complex than
check a flag or a number thereof.
* I don't see the reason of all these macros for declaring and defining
logs, tags, etc. I would rather prefer to see functions or objects
forward declarations instead.
* I'm not sure I got it right with log records caching and
mark_as_initialized function. Does it mean that until this function is
called all log records are stored in memory and when it gets called all
records are flushed to the writers? Is it possible then to alter the set
of writers or destinations in run time after mark_as_initialized is
called? Anyway, I don't like functions like this, they are always a
source of mistakes.
* I see that there is no easy way to use once-initialized logs in module
A (exe or dll) in another module B. This would only be possible if
linking these logs from A, thus making a dependancy between A and B,
right? I have seen the dll_and_exe example and each module there has its
own logging objects and its own code of their initialization. I don't
like such code duplication.
* The most critical note, from my point of view. I didn't find any trace
of attributes or something like that. The library focuses on processing
raw strings and there is no way to pass any additional data to the
writers (I underline, the data as a typed object, not the formatted
string). Tags look like an attempt to implement attributes support but I
got the impression that they are only meant to rearrange data in the
formatted string and are not capable to aid filtering and cannot be
processed by writers as an independent piece of data.
This is a major drawback at the library extensibility side. I remember,
someone in the thread has also noted that the raw text logs make little
use in the enterprise-scaled applications. I totally agree and add that
even formatting the text (e.g. trying to put some layer of managing
attributes on top of the existing raw text oriented solution) does not
help much. I had a pleasure of analyzing gigabytes of logs to track down
a rarely seen bug that one of our customers encountered and without a
proper support for attributes and filtering based on them it is a nightmare.
* The design is not OOP-friendly. I can't create a logger object that is
specific for a request that my application is currently processing (or
can I?). This might be very useful if I wanted to log the context of
execution (the request parameters, again, as attributes or a similar
feature) in each log record. One can try to emulate such feature with a
more elaborate logging macro definitions but this is surely not the way
to go.
* The "optimize" namespace and "favor" stuff is a questionable. IMO, the
code should not contain things like this. Why would I chose to use
non-optimized features? Because optimized ones lack some features or
don't work in some cases? Which are those then? What are the guidelines
for their usage? But actually, I, as a user, don't need to know these
details. The library should function correctly and effectively without
involving the user into the optimization process. If there is a more
optimal way to provide some functionality, the library should enable it
itself, not involving the user. For example, std::distance just does the
right thing in the most optimal manner transparently for the user.
> * What is your evaluation of the implementation?
I didn't dig too deep into the code, but here are my 2 cents:
* Maybe a compilable configuration should be provided to reduce user's
code build times. This would be a better solution than to depend on a
macro that has different default values in release and debug.
* A better header segregation is needed. And it would be good to see in
docs what I have to include to use each component of the library.
* Line wrapping is needed. BTW, there's a requirement on this here:
* I can see that filtering may unnecessarilly block threads. That could
be fixed with read/write mutexes.
* Use __LINE__ with care when compiling on MSVC 7.0+ with /ZI. You may
run into problems when forming unique variable names. Use __COUNTER__
instead in such cases.
* Don't count on that pthread_t is ostreamable. Some platforms define it
as a fundamental type, some as a structure. It may well be a char* which
will most likely cause crashes in your code.
* Strictly speaking you are not guaranteed to have only a single thread
before main. A thread may be spawned in a namespace scope object
constructor. Either this case should be mentioned in the docs, or you
should protect function-local statics against multithreading. Actually,
you need that protection in either way because of cache synchronization
issues. After being initialized in one thread the function-local static
may look not initialized or partially initialized to another thread on
another CPU.
> * What is your evaluation of the documentation?
Needs a bit restructurization. I'd like to see it separated: a simple
usage tutorial, advanced features description (logging in depth :)),
extending the library section, concepts and rationale sections. It would
also be good if it followed the common Boost docs style.
> * What is your evaluation of the potential usefulness of the library?
I have a dual feeling about this. On one hand it provides a good set of
features to implement logging in a relatively small and simple
application. On the other hand, would I bother using it instead
std::cout? Maybe.
> * Did you try to use the library? With what compiler? Did you have any
> problems?
No, I did not compile anything.
> * How much effort did you put into your evaluation? A glance? A quick
reading?
> In-depth study?
About 4 hours of reading docs, examples and the library code.
> * Are you knowledgeable about the problem domain?
I believe, I am.
> * Do you think the library should be accepted as a Boost library?
No, at least at its current shape. I expect something different and more
elaborate from the logging library. I believe that logging is not only
writing strings into a file but it is an event reporting system. The
library should be flexible enough to be able to support statistics
gathering and alarming. The library should provide a flexible filtering
mechanism based on attributes of the logging records. I cannot see this
in the library, along with other less fundamental features.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2008/02/133287.php | CC-MAIN-2021-39 | refinedweb | 1,427 | 66.74 |
This is mainly true but there is a big problem with updating as fast as you can.
1. Uses up unnecessary amounts of resources.
2. You are forced to make the game run at a reasonable pace by waiting in the update code or in the main game loop with timers. This is a waste of computing power and often leads to crashing games.
Wont work: (Java)
public class Main { public static void main(String [] args) { while(true) { game.update(); wait(10); } } }
To fix this you can run the game at a designated ticks per second rate. So even if the game runs behind it uses all it needs without disabling threads. If it runs ahead it simply waits for the next tick.
Will work: (Java)
public class Main { private static boolean tickDone = true; private static long oldTime = System.nanoTime() /1000000; //Gets time in milliseconds, more accurate than System.currentTimeMillis() private static int ticksPerSecond = 200; public static void main(String [] args) { Game game = new Game(); while(true) { if(tickDone) { tick(System.nanoTime() /1000000, game); } } } private void tick(long inputTime, Game game) { tickDone = false; while(inputTime > oldTime) { game.update(); oldTime = oldTime + (1000/ticksPerSecond); //Every tick has a time in which it should be done. In this case 5 ms. This adds five ms's to time until it is up to date } tickDone = true; } }
This code means that the ticks will run at a steady rate. Each time the tick method is run the time is either less or more than what the game is up to. If less then the game runs until it is more. When more it declares the tick complete and waits for it to be less. If each object is given a set speed per tick then this enables the game to use minimal resources and doesnt waste what it is given by sleeping in a tick.
Any questions will be answered. | http://forum.codecall.net/topic/74011-how-to-make-an-economical-main-game-loop-essential-for-big-games/ | CC-MAIN-2016-18 | refinedweb | 316 | 81.63 |
Greetings.
I`ve been wanting to program for ages but have been put of by the fact it seems so confusing and for some reason i`ve always had this niggle inside me that told me to program, and so about 4 hours or so ago i did.
I`m using the tutorial/s on this site and am finding them helpful so far.
I am however (and this is one of the things that has put me off programming) confused over what a namespace is, i looked on the FAQ to this site and all over the web,and i just can`t understand them, so can someone please explain it to me in simple terms.
Also do you use <iostream> on all projects?
Many Thanks | http://cboard.cprogramming.com/cplusplus-programming/151866-namespace-iostream-help-required-confused-noob-printable-thread.html | CC-MAIN-2014-15 | refinedweb | 127 | 74.56 |
Deep Dive Into the RavenDB Client API
In this chapter, we're going to take a deep dive into how the client API works. We're going to show mostly C# code examples, but the same concepts apply to any of the RavenDB client APIs, regardless of platform, with minor changes needed to make things applicable.
There are still some concepts that we haven't gotten around to (clustering or indexing, for example), which will be covered in their own chapters. But the client API is rich and has a lot of useful functionality on its own, quite aside from the server-side behavior.
We already looked into the document store and the document session, the basic building blocks of CRUD in RavenDB. But in this chapter, we're going to look beyond the obvious and into the more advanced features.
One thing we'll not talk about in this chapter is querying. We'll cover that extensively in Chapter 9, so let's keep it there. You already know the basics of querying in RavenDB, but there's a lot more power for you to discover.
This chapter is going to contain a large number of code examples, and it will discuss the nitty-gritty details of using the client. It's divided into brief sections each dealing with a specific feature or behavior. I suggest reading this over to note the capabilities of RavenDB and coming back to it as needed in your application.
For the rest of this chapter, we'll use the classes shown in Listing 4.1 as our model, using a simplified help desk as our example.
Listing 4.1 Simplified Help Desk sample model
public class Customer { public string Id { get; set; } public string Name { get; set; } } public class SupportCall { public string Id { get; set; } public string CustomerId { get; set; } public DateTime Started { get;set; } public DateTime? Ended { get;set; } public string Issue { get; set; } public int Votes { get; set; } public List<string> Comments { get; set; } }
Writing documents
Writing documents in RavenDB is easy, as we saw in "Zero to RavenDB". If we want to create a new support call, we can use the code in Listing 4.2 to do so.
Listing 4.2 Creating a new support call using the session
using (var session = store.OpenSession()) { var call = new SupportCall { Started = DateTime.UtcNow, Issue = customerIssue, CustomerId = customerId }; session.Store(call); session.SaveChanges(); }
This is the basic behavior of RavenDB and how you would typically work with saving data. But there are lot of additional things that we can do when writing data. For example, let's say the user might have sent us some screenshots that we want to include in the support call.
Working with attachments
You can add attachments to a RavenDB document to store binary data related to that document. Let's assume the user sent us a screenshot of the problem along with the call. Listing 4.3 shows how we can store and retrieve the attachments.
Listing 4.3 Saving attachments to RavenDB as part of opening the support call
using (var session = store.OpenSession()) { var call = new SupportCall { Started = DateTime.UtcNow, Issue = customerIssue, CustomerId = customerId }; session.Store(call); foreach (var file in attachedFiles) { session.Advanced.StoreAttachment(call, file.Name, file.OpenStream()); } session.SaveChanges(); }
Note that we're using the session to store both the support call document and any attachments the user might have sent. An attachment is basically
a file name and a stream that will be sent to the server (with an optional content type). When the call to
SaveChanges is made, the RavenDB client API
will send both the new document and all of its attachments to the server in a single call, which will be treated as a transaction. Both the document and
the attachments will be saved, or both will fail.
That was easy enough, but now how do we retrieve the attachments? The list of attachments for a particular document is accessible via the session, as shown in Listing 4.4.
Listing 4.4 Getting the list of attachments for a support call
using (var session = store.OpenSession()) { var call = session.Load<SupportCall>("SupportCalls/238-B"); var attachments = session.Advanced.GetAttachmentNames(call); // render the call and the attachment names }
Calling
GetAttachmentNames is cheap; the attachments on a document are already present in the document metadata, which we loaded as part of
getting the document. There is no server-side call involved. Note that the result of
GetAttachmentNames doesn't include the content of the
attachments. To get the attachment itself and not just its name, you need to make an additional call, as shown in Listing 4.5.
Listing 4.5 Getting an attachment content
using (var session = store.OpenSession()) { var call = session.Load<SupportCall>("SupportCalls/238-B"); var attachments = session.Advanced.GetAttachmentNames(call); using (var stream = session.Advanced.GetAttachment(call, attachments[0].Name)) { // process the content of the attachment } }
Each call to
GetAttachment will make a separate call to the server to fetch the attachment. If you have a lot of attachments, be aware that
fetching all their information can be expensive due to the number of remote calls that are involved.
Working with the document metadata
In the attachments section, we noted that attachment information is stored in the document metadata. RavenDB uses the metadata for a lot of things. Most of them you don't generally care about (etag, change vector, etc.). But the document metadata is also available to you for your own needs and use.
An actual use case for direct use of the document metadata is pretty rare. If you want to store information, you'll typically want to store it in the document itself, not throw it to the metadata sidelines. Typical use cases for storing data in the metadata are cross-cutting concerns. The preeminent one is auditing. You may want to see who edited a document, for example.
In order to demonstrate working with the metadata, we'll consider creating a support call. Handling a support call can be a complex process that has to go through several steps. In this case,
we will save the new support call document to RavenDB with a draft status in the metadata.
Typical modeling advice would be to model this
explicitly in the domain (so you'll have an
IsDraft or
Status property on your model), but for this example, we'll use the metadata. You can see
the code for setting a draft status in the metadata in Listing 4.6.
Listing 4.6 Setting a metadata flag as part of creating a new support call
using (var session = store.OpenSession()) { var call = new SupportCall { Started = DateTime.UtcNow, Issue = customerIssue, CustomerId = customerId }; session.Store(call); var metadata = session.Advanced.GetMetadataFor(call); metadata["Status"] = "Draft"; session.SaveChanges(); }
We can call
GetMetadataFor on any document that has been associated with the session. A document is associated with the session either by loading
it from the server or by calling
Store. After the document has been associated with the session, we can get its metadata and manipulate it.
Changes to the metadata count as changes to the document and will cause the document to be saved to the server when
SaveChanges is called.
Change tracking and
SaveChanges
The document session implements change tracking on your documents, as you can see in Listing 4.7.
Listing 4.7 Saving the changes to a document without manually tracking what changed
using (var session = store.OpenSession()) { var call = session.Load<SupportCall>("SupportCalls/238-B"); call.Ended = DateTime.UtcNow; session.SaveChanges(); }
The session's change tracking (and identity map) means that you don't have to keep track of what changed and manually call
Store. Instead, when
you call
SaveChanges, all your changes will be sent to the server in a single request.
You have a few knobs available to tweak the process.
session.Advanced.HasChanges will let you know if calling
SaveChanges will result in a call to the
server. And
session.Advanced.HasChanged(entity) will tell you when a particular
entity has changed. You can also take it up a notch and ask RavenDB
to tell you what changed, using
session.Advanced.WhatChanged(). This will give you all the changes that happened in the session. The
WhatChanged
feature can be nice if you want to highlight changes for user approval, for example, or if you just want to see what modifications were made to your model
after a certain operation.
You can also tell RavenDB not to update a particular instance by calling
session.Advanced.IgnoreChangesFor(entity). The document will remain attached to
the session and will be part of any identity map operations, but it won't be saved to the server when
SaveChanges is called. Alternatively, you can call
session.Advanced.Evict(entity) to make the session completely forget about a document.
These operations tend to be useful only in specific cases, but they are very powerful when utilized properly.
Optimistic concurrency
We covered optimistic concurrency in Chapter 3, but only in the most general terms. Now, let's take a look and see how we can use optimistic concurrency in practice. Listing 4.8 shows two simultaneous sessions modifying the same support call.
Listing 4.8 Concurrent modifications of a support call
using (var sessionOne = store.OpenSession()) using (var sessionTwo = store.OpenSession()) { var callOne = sessionOne.Load<SupportCall>("SupportCalls/238-B"); var callTwo = sessionTwo.Load<SupportCall>("SupportCalls/238-B"); callOne.Ended = DateTime.Today; callTwo.Ended = DateTime.Today.AddDays(1); sessionOne.SaveChanges(); sessionTwo.SaveChanges(); }
In the case of the code in Listing 4.8, we're always going to end up with the support call end date set to tomorrow. This is because, by default, RavenDB
uses the
Last Write Wins model. You can change that by setting
store.Conventions.UseOptimisticConcurrency to "true," which will affect all sessions, or you can change it
on a case-by-case basis by setting
session.Advanced.UseOptimisticConcurrency to "true" on the session directly.
In either case, when this flag is set and
SaveChanges is called, we'll send the modified documents to the server alongside their change vectors that were received from the server when loading the documents. This allows the server to reject any stale writes. If the flag were set to true, the code in Listing 4.8 would result in a
ConcurrencyException on the
sessionTwo.SaveChanges() call.
This ensures that you can't overwrite changes you didn't see, and if you set
UseOptimisticConcurrency, you need to handle this error in some manner.
Pessimistic locking
When changes happen behind our backs to the document we modified, optimistic locking handles it. Pessimistic locking, on the other hand, prevents those changes entirely. RavenDB does not support pessimistic locking. And while you really need support from the database engine to properly implement pessimistic locking, we fake it in an interesting way. The following is a recipe for using approximating pessimistic locking in RavenDB. We mention it not so much because it's a good idea, but because it allows us to explore several different features and see how they work together.
Using pessimistic locking, we can lock a document for modification until we release the lock or until a certain amount of time has gone by. We can build a pessimistic
lock in RavenDB by utilizing the document metadata and optimistic concurrency. It's easier to explain with code, and you can find the
Lock and
Unlock
implementations in Listing 4.9.
The locks are opt-in
In RavenDB, both the pessimistic lock explored in this section and the optimistic lock in the previous section are opt-in. That means that you have to explicitly participate in the lock. If you're using
UseOptimisticConcurrencyand another thread isn't, that thread will get the
Last Write Winsbehavior (and might overwrite the changes made by the thread using optimistic concurrency).
In the same manner, the pessimistic lock recipe described here is dependent on all parties following it. If there's a thread that isn't, the lock will not be respected.
In short, when using concurrency control, make sure that you're using it across the board, or it may not hold.
Listing 4.9 Extension method to add pessimistic locking to the session
public static IDisposable Lock( this IDocumentSession session, string docToLock) { var doc = session.Load<object>(docToLock); if (doc == null) throw new DocumentDoesNotExistException("The document " + docToLock + " does not exists and cannot be locked"); var metadata = session.Advanced.GetMetadataFor(doc); if (metadata.GetBoolean("Pessimistic-Locked")) { // the document is locked and the lock is still value var ticks = metadata.GetNumber("Pessimistic-Lock-Timeout"); var lockedUntil = new DateTime(ticks); if (DateTime.UtcNow <= lockedUntil) throw new ConcurrencyException("Document " + docToLock + " is locked using pessimistic"); } metadata["Pessimistic-Locked"] = true; metadata["Pessimistic-Lock-Timeout"] = DateTime.UtcNow.AddSeconds(15).Ticks; // will throw if someone else took the look in the meantime session.Advanced.UseOptimisticConcurrency = true; session.SaveChanges(); return new DisposableAction(() => { metadata.Remove("Pessimistic-Locked"); metadata.Remove("Pessimistic-Lock-Timeout"); Debug.Assert(session.Advanced.UseOptimisticConcurrency); session.SaveChanges(); }); }
There's quite a bit of code in Listing 4.9, but there isn't actually a lot that gets done. We load a document and check if its metadata contains the
Pessimistic-Locked value. If it does, we check if the lock time expired. If it isn't locked, we first update the document metadata,
then enable optimistic concurrency and finally call
SaveChanges. If no one else modified the document in the meantime, we'll successfully mark the
document as ours, and any other call to
Lock will fail.
The
Lock method returns an
IDisposable instance that handles releasing the lock. This is done by removing the metadata values and then calling
SaveChanges again. If the lock has timed out and someone took the lock, we'll fail here with a concurrency exception as well.
Avoid your own distributed pessimistic locks
There's a reason why RavenDB does not include a pessimistic lock feature, and I strongly recommend you avoid using the recipe above. It's here to show how you'd use several different features at once to achieve a goal.
Actually handling a distributed lock is a non-trivial issue. Consider a RavenDB cluster with multiple nodes. If two lock requests go to two distinct nodes at the same time, both of them will succeed.1 The two nodes will quickly discover the conflicting updates and generate a conflict. But it isn't guaranteed they'll discover it inside the lock/unlock period.
Another issue is the subject of timing. If two clients have enough of a clock skew, a client might consider a lock to have expired even though it's still valid. Proper distributed locking requires a consensus protocol of some kind, and those aren't trivial to build or use. RavenDB does have a consensus protocol, but pessimistic locking is usually a bad fit for an OLTP environment, and we decided not to implement it.
A typical use for pessimistic locks is to lock a document while a user is editing it. That might sound like a good idea, but experience has shown that, in most cases, it leads to trouble. Consider, for example, version control systems. If you're reading this book, you've likely used a SCM of some kind. If you haven't, I suggest you pick up a book about source control and prioritize it over this one: learning about that is far more foundational.
Early source control systems (SourceSafe is a good example) used locks as their concurrency model, and that led to a lot of problems. Say Joe locks a file and then leaves on vacation. All of his coworkers then have to wait until he gets back before committing code to that file. This is a typical problem that arises in such cases. The same happens when you have pessimistic locks. Implementing pessimistic locks requires you to also implement forced lock release, a feature that tells you who locked the document and a whole slew of management functions around it. Typically, implementing optimistic concurrency or merging is easier and matches most users' expectations.
Offline optimistic concurrency
We looked at online optimistic concurrency in Listing 4.8, where we loaded a document into the session, modified it and then saved. In that time frame, if there was a change, we'd get a concurrency exception. But most software doesn't work like that. In a web application, you aren't going to keep the document session open for as long as the user is on the site. Instead, most likely you'll use a session-per-request model. The user will load a page with the document's content in one request and modify it in another request. There isn't a shared session in sight, so how can we implement optimistic concurrency?
All you need to do is send the change vector of the document to the user and accept it back when the user wants to save the document. Listing 4.10 shows an example using two separate sessions with concurrency handling between them.
Listing 4.10 Concurrent modifications of a support call
string changeVector; Supportcall callOne; using (var sessionOne = store.OpenSession()) using (var sessionTwo = store.OpenSession()) { callOne = sessionOne.Load<SupportCall>("SupportCalls/238-B"); changeVector = sessionOne.Advanced.GetChangeVectorFor(callOne); var callTwo = sessionTwo.Load<SupportCall>("SupportCalls/238-B"); callTwo.Ended = DateTime.Today.AddDays(1); sessionTwo.SaveChanges(); } using (var sessionThree = store.OpenSession()) { sessionThree.Advanced.UseOptimisticConcurrency = true; callOne.Ended = DateTime.Today; sessionThree.Store(callOne, changeVector, callOne.Id); sessionThree.SaveChanges(); // will raise ConcurrencyException }
The code in Listing 4.10 first loads the support call in
sessionOne. Then the code loads it again in
sessionTwo, modifies the support call and saves it to the server.
Both sessions are then closed, and we open a new session,
sessionThree. We call
Store, passing the entity instance and the change vector that we got from the first session, as
well as the document ID.
This gives the RavenDB client API enough information for us to do an optimistic concurrency check from the time we loaded
callOne in the first session. In
a web scenario, you'll typically send the change vector alongside the actual data and get it back from the client to do the check. You might also want to check out the
Changes API, which is covered a little later in this chapter. This might help you get early change notifications when you need to implement offline
optimistic concurrency.
Patching documents and concurrent modifications
The typical workflow with RavenDB is to open a session, load a document, run some business logic and call
SaveChanges. When you follow those steps, the
session will figure out what documents have changed and send them to the server. This model is simple and easy to follow, and it's the recommended way to work.
However, there are a few scenarios in which we don't want to send the entire document back to the server. For example, if our document is very large and we want to make a small change, we can avoid that cost. Another reason to avoid the full document save is a scenario that calls for concurrent work on a document.
Let's consider the
SupportCall.Votes property. Two users may very well want to vote on the same support call at the same time. One way to handle that is to
load the support call, increment the
Votes property and call
SaveChanges. In order to handle concurrent modifications, we can utilize optimistic
concurrency and retries. But that's quite a lot of work to write, and if the document is large, there's also a lot of data going back and forth over the
network for little reason. Listing 4.11 shows how we can do much better.
Listing 4.11 Incrementing a property using the Patch API
using (var session = store.OpenSession()) { session.Advanced.Increment<SupportCall, int>("SupportCalls/238-B", c => c.Votes, 1); session.SaveChanges(); }
What the code in Listing 4.11 does is generate a patch request, rather than load and save the full document. That request is stored in the session, and when
SaveChanges is called, it will be sent to the server (alongside any other changes/operations made on the session, as usual). On the server side, we'll apply
this operation to the document. This patch request is safe to call concurrently, since there's no loss of data when executing patches.
Which is faster, patching or load/save?
The hasty answer is that patching is faster. We send a lot less data, and we need one less round trip to do so. Winning all around, right?
But the real answer is that things are bit more complex. The patch request is actually a JavaScript function that we send. That means we need to parse and run it on the server side, potentially marshal values into the script environment and then marshal it back. Conversely, the code path for loading and saving documents in RavenDB is well trodden and has been optimized plenty. That means in many cases it might be easier to just load and modify the document directly, rather than use a patch.
Patching isn't expensive; I want to emphasize that. But at the same time, I've seen codebases where all writes had to be made using patching because of perceived performance benefits. That resulted in an extremely hard-to-understand system that was resistant to change. The general recommendation is to utilize patching only when you need to support concurrent modifications.
Note that in most cases concurrent modifications of the same document is not the default. A properly modeled document should have a single reason to change, but it's common that documents have additional data on them (like the
Votesproperty or
Comments) that are important to save but don't have any real business logic attached to them.
That kind of change is fine to do using patching. If you find yourself trying to run serious business logic in patch scripts (we'll see exactly how to do this in a bit), you should move that into your own business logic.
An important consideration to take into account is that there is no guarantee as to the order in which the patches will run. But, you don't need to worry about concurrency between patches on the same document as there is no concurrent/interleaved execution of the scripts on the same document.
A slightly more complex example of the use of patching is adding a comment to a
SupportCall. Just like before, we want to support adding a comment
concurrently. But to make things a bit more interesting, we'll add a business rule: a call that has been ended cannot have additional comments added to
it. Listing 4.12 shows the obvious way to accomplish this.
Listing 4.12 Adding a comment to a support call using patch
using (var session = store.OpenSession()) { var call = session.Load<SupportCall>("SupportCalls/238-B"); if (call.Ended != null) throw new InvalidOperationException("Cannot comment on closed call"); session.Advanced.Patch(call, c => c.Comments, comments => comments.Add("This is important stuff!")); session.SaveChanges(); }
In Listing 4.12, you can see how we moved from using the simple
Increment call to the
Patch, which allows us to either replace a property value completely or add an item to a collection. If you look closely at the code in Listing 4.12, you'll find that there's a hidden race condition there. The business rule
is that we can't add comments to a closed call; we're able to load the call, check that its
Ended property is null and then
send the patch request to the server. However, in the meantime, another client could have closed the call, and yet we'd still add the comment.
The seriousness of this issue depends entirely on the domain and the model. It's possible that you do want to add comments during that period, or it's possible that allowing it could break important business invariants.
Patch requests are sent as part of
SaveChanges
It's probably obvious, but I wanted to spell this out explicitly. Calls to
Patch,
Incrementor
Deferdon't go to the server immediately. Instead, they're added to the list of operations the session needs to execute and will be sent to the server in a single batch (along with any modified documents) when
SaveChangesis called. If you have multiple patch operations in the same session on the same document, they'll be merged into a single patch. And if there are multiple patches on different documents, they'll all be executed within the same transaction, as a single unit.
There are two ways to avoid the race condition. We can send a change vector to the server asking it to fail with a concurrency exception if the document has been
modified since we last saw it. That works, but it defeats the whole point of using patches for concurrent modification of the document. The second alternative
is to move the invariant check into the script itself. Calls to
Increment and
Patch are actually just wrappers around the
Defer call, which allows you to
add work to the session that's to be sent to the server when
SaveChanges is called.
In Listing 4.13, we're dropping down to using
Defer directly to manipulate the patch request ourselves, with no wrappers. As you can see, this is a bit involved,
but overall it's pretty straightforward.
Listing 4.13 Using a patch script to maintain call invariants
using (var session = store.OpenSession()) { session.Advanced.Defer(new PatchCommandData( id: "SupportCalls/238-B", changeVector: null, patch: new PatchRequest { Script = @" if (this.Ended != null) throw 'Cannot add a comment to a closed call'; this.Comments.push($comment); ", Values = { ["comment"] = "This is important stuff!!" } }, patchIfMissing: null)); session.SaveChanges(); }
The code in Listing 4.13 passes a
PatchCommandData, containing the relevant document ID, to
Defer. The key part is in the
PatchRequest itself. We do the check on
the document and fail if the call has already been closed. If it hasn't, we add the comment to the call. You can also see that we don't have to deal with string
concatenation here since we can pass arguments to the scripts directly. There's also the option to run a script if the document does not exist. This gives you
the option to do a "modify or create" style of operation.
Using this method, we can be sure that we'll never violate the rule about adding comments to a closed call. A cautionary word, though: this is more complex than anything we've dealt with before, and I would only recommend doing it if you really must. Running logic in this manner inside the database is a powerful technique, but it's also usually a bad idea because of its potential for abuse. Problems that come with abusing this feature include the scripts being run under a lock, which prevents the database from completing transactions quickly. If you find yourself doing something like this frequently, stop and reconsider.
Deferring commands
In the previous section, we used the
Defer method to register a
PatchCommandData on the session, to be executed when
SaveChanges is called. But
Defer is
more generic then this. It's a general mechanism for the user to register arbitrary commands that will be part of the same transaction as the
SaveChanges.
The RavenDB API is like an ogre. It has layers
Like the popular character Shrek, RavenDB API is composed of layers. At the top, you have the document store and the document session. Those are built using the operations concept, which you typically won't use directly. And the operations are handled via the request executor, which allows you to generate requests directly to the server (and take advantage of RavenDB's authentication and automatic failover).
The case of
Deferis a good example. Instead of forcing you to drop down all the way, we expose an extension point in the session so you can plug in a command of your own and piggyback on the session handling of transactions.
The available commands range from putting and deleting documents and attachments to applying patches that delete documents having some prefix. Aside from the patch
operation, the rest are only useful in rare cases. The most common use of
Defer beyond patch is when you need fine-grained control over the operations
that will be executed in a single transaction, to the point where you want to control the ordering of the operations.
RavenDB doesn't allow changing the document's collection, you can't just update the
@collection metadata. So, if you need to do this,2 you must first delete the old document and then create a new one with the appropriate collection. The session doesn't allow you to both
delete and modify a document. And for the purpose of discussion, let's say that we have to make this in a single transaction, so no other client sees a
point in time where the document was deleted.
To be clear, this is a strange situation, dreamt up specifically to showcase a feature that should be only used in special circumstances. This escape hatch in the API is specifically intended to prevent you from being blocked if you need something we didn't foresee, but I can't emphasize enough that this is probably a bad idea. The emergency exit is important, but you don't want to make it the front door.
Another reason to avoid using
Defer is that it's lower in the RavenDB client API layers. Instead of dealing with high level concepts
like entity, you'll be dealing with the direct manner in which RavenDB is representing JSON, the blittable format (which we'll discuss later in this chapter). That format is meant to be high performance,
and things like developer convenience were secondary in its design.
Bulk inserting documents to RavenDB
RavenDB is fast, really fast, but it still needs to face operational realities. The fallacies of distributed computing still apply, and I/O takes a non-trivial amount of time. This means that when you want to get the best speed out of RavenDB, you need to help it achieve that.
Stacking the deck
I'm going to be talking performance numbers in this section, and I wanted to make it clear that I've intentionally chosen the worst possible situation for RavenDB and then compounded the issue by using the wrong approaches. This is so I can show the real costs in a manner that's highly visible.
I'll refer you again to the fallacies of distributed computing. I'm trying to select a scenario that would break as many of those fallacies as possible and show how RavenDB is able to handle them.
Listing 4.14 shows the absolute slowest way to write 10,000 documents into RavenDB.
Listing 4.14 Writing 10,000 documents, one at a time
var sp = Stopwatch.StartNew(); for (int i = 0; i < 10_000; i++) { using (var session = store.OpenSession()) { session.Store(new Customer { Name = "Customer #" + i }); session.SaveChanges(); } } Console.WriteLine(sp.Elapsed);
For fun, I decided to run the code in Listing 4.14 against the live test instance we have. That instance was in San Francisco, and I was testing this from Israel. The test instance was also running as a container inside an AWS t2.medium machine (two cores and 4 GB of memory, with burst-only mode). In other words, this performance test was heavily biased against RavenDB, and the results were not great. In fact, they were bad.
This is because we're running each write as an independent operation, and we have to wait for the previous operation to complete before we can start the new one. What's more, the database server handles just a single request concurrently, which means we have no way to amortize I/O costs across multiple requests. This is the absolute worst way you can write a large amount of documents into RavenDB because most of the time is spent just going back and forth between the client and the application. For each request, we have to make another REST call, send a packet to the server, etc. On the other side, the server accepts a new request, processes it and commits it to disk. During the entire process, it's effectively idle, since most of the time is spent waiting for I/O. That's a big waste all around.
You can see the various times nicely when looking at the Fiddler3 statistics. Each request takes about 220–260 milliseconds to run. Writing the first 1,000 documents took four minutes and six seconds, and 2,000 requests took eight minutes on the dot. The full 10,000 documents would take 40 minutes or so. Granted, we're intentionally going to a remote server, but still...
What happens when we're running the writes in parallel? The code in Listing 4.15 shows how to do this.
Listing 4.15 Writing 10,000 documents, with a bit of parallelism thrown in
var sp = Stopwatch.StartNew(); Parallel.For(0, 10_000, i => { using (var session = store.OpenSession()) { session.Store(new Customer { Name = "Customer #" + i }); session.SaveChanges(); } }); Console.WriteLine(sp.Elapsed);
Using the method in Listing 4.15, I was able to write 1,000 documents in 56 seconds. We got to 2,000 in a minute and a half, 3,000 in a minute and 50 seconds, etc. The reason for the speed up is actually related to how thread pooling is handled on the client side. Since we make a lot of blocking requests, the thread pool figures out that we have plenty of blocking work and creates more threads. That means we have the chance to do more concurrent work. So as time goes by, more threads are created and we make additional concurrent requests to RavenDB.
The total time of writing 10,000 documents in this setup was two minutes and 52 seconds. So we've gotten done 20 times faster than we did using sequential writes. The code in Listing 4.15 is still using synchronous calls, which means the client side is spinning threads to handle the load and we're limited by the rate of new thread creation on the client.
RavenDB also supports an async API, which is much more suitable for scale-out scenarios because we aren't holding a thread for the duration of the connection. Listing 4.16 shows how we can write all those documents in parallel, using the async API. The code is a tad complex because we want to control the number of concurrent requests we make. Spinning 10,000 concurrent requests will likely load the network and require careful attention to how they are managed, which is out of scope for this book. Instead, I limited the number of concurrent connections to 128.
Listing 4.16 Writing 10,000 documents, using async API
var sp = Stopwatch.StartNew(); var semaphore = new SemaphoreSlim(128); async Task WriteDocument(int i) { using (var session = store.OpenAsyncSession()) { await session.StoreAsync(new Customer { Name = "Customer #" + i }); await session.SaveChangesAsync(); } semaphore.Release(); } var tasks = new List<Task>(); for (int i = 0; i < 10_000; i++) { semaphore.Wait(); tasks.Add(WriteDocument(i)); } Task.WaitAll(tasks.ToArray()); Console.WriteLine(sp.Elapsed);
The code in Listing 4.16 is also using a local method, which is a new C# 7.0 feature. It allows you to package a bit of behavior quite nicely, and it's very useful for small demos and internal async code. This code writes 1,000 documents in just under 10 seconds, and it completes the full 10,000 writes in under 30 seconds (29.6, on my machine). The speed difference is, again, related to the client learning our pattern of behavior and adjusting itself accordingly (creating enough buffers, threads and other resources needed; warming up the TCP connections.4)
However, we really had to make an effort. We wrote explicit async code and managed it, rate-limited our behavior and jumped through several hoops to get a more reasonable level of performance. Note that we went from over 40 minutes to less than 30 seconds in the span of a few pages. Also note that we haven't actually modified what we're doing — we only changed how we're talking to the server — but it had a huge impact on performance.
You can take it as a given that RavenDB is able to process as much data as you can feed it. The typical concern in handling writes is how fast we can get the data to the server, not how fast the server can handle it.
RavenDB contains a dedicated API and behavior that makes it easier to deal with bulk loading scenarios. The bulk insert API uses a single connection to talk to the server and is able to make much better use of the network. The entire process is carefully orchestrated by both the client and the server to optimize performance. Let's look at the code in Listing 4.17 first and then discuss the details.
Listing 4.17 using bulk insert to write 100,000 documents, quickly
var sp = Stopwatch.StartNew(); using (var bulkInsert = store.BulkInsert()) { for (int i = 0; i < 100_000; i++) { bulkInsert.Store(new Customer { Name = "Customer #" + i }); } } Console.WriteLine(sp.Elapsed);
The code in Listing 4.17 took two minutes and 10 seconds to run on my machine — which is interesting, because it seems slower than the async API usage sample, right? Except there's one problem. I made a typo when writing the code and wrote a hundred thousand documents instead of ten thousand. If I was writing merely 10,000 documents, it would complete in about 18 seconds. The code is fairly trivial to write, similar to our first sample in Listing 4.14, but the performance is many times faster.
To compare the costs, I ran the same code against a local machine, giving me a total time of 11 seconds to insert 100,000 documents (instead of two minutes remotely). If we wanted to compare apples with apples, then the cost for writing 10,000 documents is shown in Table. 4.1.
Table: Bulk insert costs locally and remotely
You can see that while bulk insert is significantly faster in all cases, being over three times faster than the session option (Listing 4.14) locally seems insignificant considering it's over 130 times faster in the remote case. The major difference, as you can imagine, is the cost of going over the network, but even on the local machine (and we're not even talking about the local network), there's a significant performance benefit for bulk insert.
Amusingly enough, using bulk insert still doesn't saturate the server. For large datasets, it's advisable to have parallel bulk insert operations going at the same time. This gives the server more work to do, and it allows us to do optimizations that increase the ingest rate of the server.
The way bulk insert works is by opening a single long running request to the server and writing the raw data directly into the database. That means we don't need to go back and forth between the client and the server and can rely on a single network roundtrip to do all the work. The server, for its part, will read the data from the network and write it to disk when it's best to do so. In other words, bulk inserts are not transactional. A bulk insert is actually composed of many smaller transactions, whose size and scope are determined by the server based on its own calculations, in order to maximize performance.
When the bulk insert is completed, you can rest assured that all the data has been safely committed to disk properly. But during the process, the data is committed incrementally instead of going with a single big-bang approach.
For the most part, RavenDB performance is ruled by how many requests you can send it. The more requests, the higher the degree of parallelism and the more efficiently RavenDB can work. In our internal tests, we routinely bumped into hardware limits (the network card cannot process packets any faster, the disk I/O is saturated, etc.), not software ones.
Reading documents
We just spent a lot of time learning how we can write documents to RavenDB in all sorts of interesting ways. But for reading, how much is there really to know? We already learned how to load and query a document; we covered that in "Zero to RavenDB." We also covered
Include and how
to use it to effortlessly get referenced documents from the server. What else is there to talk about? As it turns out, quite a bit.
In most applications, reads are far more numerous than writes — often by an order of magnitude. That means RavenDB needs to be prepared to handle a lot of reads, and those applications typically have a number of ways in which they access, shape and consume the data. RavenDB needs to be able to provide an answer to all those needs.
The first feature I want to present allows you to dramatically increase your overall performance by being a little lazy.
Lazy requests
In Section 4.1.7, which dealt with bulk insert, we saw how important the role of the network is. Running the same code on the local network vs. the public internet results in speed differences of 20 seconds to 41 minutes, just because of network latencies. On the other hand, moving from many requests to a single bulk insert request is the primary reason we cut our costs by two-thirds on the local machine and over two orders of magnitude in the remote case.
I talked about this a few times already, but it's important. The latency of going to the server and making a remote call is often much higher than the cost of actually processing the request on the server. On the local machine, you'll probably not notice it much. That's normal for running in a development environment. When you go to production, your database is typically going to run on a dedicated machine,5 so you'll have to go over the network to get it. And that dramatically increases the cost of going to the database.
This problem is well known: it's the fallacies of distributed computing. RavenDB handles the issue in several ways. A session has a budget on the number of remote calls it can make. (This is controlled by
session.Advanced.MaxNumberOfRequestsPerSession.) If it goes over that limit, an exception is thrown. We had this feature from the get-go, and that
led to a lot of thinking about how we can reduce the number of remote calls.
Include is obviously one such case. Instead of going to the server multiple times, we let the server know we'll need additional information after this request and tell it to send that immediately. But we can't always do that. Let's take a look at Listing 4.18, showing two queries that we
can't optimize using
Include.
Listing 4.18 Loading a customer and the count of support calls for that customer
using (var session = store.OpenSession()) { var customer = session.Load<Customer>("customers/8243-C"); var countOfCalls = session.Query<SupportCall>() .Where(c => c.CustomerId == "customers/8243-C")) .Count(); // show the customer and the number of calls to the user }
A scenario like the one outlined in Listing 4.18 is incredibly common. We have many cases where we need to show the user information from multiple sources,
and that's a concern. Each of those calls turns out to be a remote call, requiring us to go over the network. There are ways to optimize this specific
scenario. We can define a MapReduce index and run a query and
Include on it. We haven't yet gone over exactly what this means,6 but this is a pretty complex solution, and it isn't relevant when you have different
types of queries. If we wanted to also load the logged-in user, for example, that wouldn't work.
RavenDB's solution for this issue is the notion of lazy requests. A lazy request isn't actually being executed when you make it. Instead, it's
stored in the session, and you get a
Lazy<T> instance back. You can make multiple lazy requests, one after another, and no network activity will occur. However,
as soon as you access the value of one of those lazy instances, all the lazy requests held up by the session will be sent to the server as a single
unit.
All those requests will be processed by the server, and all the replies will be sent as a single unit. So no matter how many lazy requests you have, there will only ever be a single network round trip to the server. You can see the code sample in Listing 4.19.
Listing 4.19 Lazily loading a customer and their count of support calls
using (var session = store.OpenSession()) { Lazy<Customer> lazyCustomer = session.Advanced.Lazily .Load<Customer>("customers/8243-C"); Lazy<int> lazyCountOfCalls = session.Query<SupportCall>() .Where(c => c.CustomerId == "customers/8243-C")) .CountLazily(); // no network calls have been made so far // force execution of pending lazy operations explicitly session.Advanced.Eagerly.ExecuteAllPendingLazyOperations(); // if ExecuteAllPendingLazyOperations wasn't called, it // will be implicitly called here. int countOfCalls = lazyCountOfCalls.Value; Customer customer = lazyCustomer.Value; // show the customer and the number of calls to the user }
As the code in Listing 4.19 shows, we can define multiple lazy operations. At that stage, they're pending. They're stored in the session but haven't been
sent to the server yet. We can either call
ExecuteAllPendingLazyOperations to force all pending operations to execute, or we can have that happen implicitly
by accessing the
Value property on any of the lazy instances we received.
Why do we need ExecuteAllPendingLazyOperations?
The existence of ExecuteAllPendingLazyOperations is strange. It's explicitly doing something that will happen implicitly anyway. So why is it needed? This method exists to allow users to have fine-grained control over the execution of requests. In particular, it allows you to set up a stage in your pipeline that will request all the data it's going to need. Then it will call ExecuteAllPendingLazyOperations to fetch this explicitly.
The next stage is supposed to operate on the pure in-memory data inside the session and not require any calls to the server. This is important in advanced scenarios, when you need this level of control and want to prevent the code from making unexpected remote calls in performance-critical sections of your code.
The performance gain from
Lazy is directly correlated to the number of lazy requests it's able to batch and how far away the actual server is. The more
requests that can be batched and the further away the database server, the faster this method becomes. On the local machine, it's rarely worth going to the trouble,
but once we go to production, this can get you some real benefits.
Note that, as useful as
Lazy is, it's limited to requests that you can make with the information you have on hand. If you need to make queries based on the
results of another query, you won't be able to use
Lazy for that. For most of those scenarios, you can use
Include. And of course,
Lazy and
Include
can work together, so that will usually suffice.
Streaming data
When dealing with large amounts of data, the typical API we use to talk to RavenDB is not really suitable for the task. Let's consider the case of the code in Listing 4.20.
Listing 4.20 Query all support calls for a customer
using (var session = store.OpenSession()) { List<SupportCall> calls = session.Query<SupportCall>() .Where(c => c.CustomerId == "customers/8243-C")) .ToList(); }
What will happen if this is a particularly troublesome customer that opened a lot of calls? If this customer had just 30 calls, it's easy to see that we'll get them all in the list. But what happens if this customer has 30,000 calls? Figure 4.1 shows how a query is processed on the server in this case.
The server will accept the query, find all matches, prepare the results to send and then send them all over the network. On the client side, we'll read the results from the network and batch them all into the list that we'll return to the application.
If there are 30 results in all, that's great, but if we have 30,000, we'll likely suffer from issues. Sending 30,000 results over the network, reading 30,000 results from the network and then populating a list of 30,000 (potentially complex) objects is going to take some time. In terms of memory usage, we'll need to hold all those results in memory, possibly for an extended period of time.
Due to the way memory management works in .NET,7 allocating a list with
a lot of objects over a period of time (because we are reading them from the network) will likely push the list instance, and all of its contents, into a higher
generation. This means that, when you're done using it, the memory will not be collected without a more expensive
Gen1 or even
Gen2 round.
In short, for a large number of results, the code in Listing 4.20 will take more time, consume more memory and force more expensive GC in the future. In previous versions of RavenDB, we had guards in place to prevent this scenario entirely. It's easy to start writing code like that in Listing 4.20 and over time have more and more results come in. Our logic was that, at some point, there needed to be a cutoff point where an exception is thrown before this kind of behavior poisoned your application.
As it turned out, our users really didn't like this behavior. In many cases, they would rather the application do more work (typically unnecessarily) than to have it
throw an error. This allowed them to fix a performance problem rather than a "system is down" issue. As a result of this feedback, this limitation was removed, but
we still recommend always using a
Take clause in your queries to prevent just this kind of issue.
All queries should have a
Takeclause
A query that doesn't have a take clause can be a poison pill for your application. As data size grows, the cost of making this query also grows until the entire thing goes down.
The RavenDB client API contains a convention setting called
ThrowIfQueryPageSizeIsNotSet, which will force all queries to specify a
Takeclause and will error otherwise. We recommend that, during development, you set this value to true to ensure your code will always be generating queries that have a limit to the number of results they get.
Very large queries are bad, it seems, but that isn't actually the topic of this section. Instead, it's just the preface explaining why buffered large queries are a bad idea. RavenDB also supports the notion of streaming queries. You can see what that would look like in Figure 4.2.
Unlike the previous example, with streaming, neither client nor server need to hold the full response in memory. Instead, as soon as the server has a single result, it sends that result to the client. The client will read the result from the network, materialize the instance and hand it off to the application immediately. In this manner, the application can start processing the results of the query before the server is done sending it, and it doesn't have to wait. You can see the code for that in Listing 4.21.
Listing 4.21 Stream all support calls for a customer
using (var session = store.OpenSession()) { var callsQuery = session.Query<SupportCall>() .Where(c => c.CustomerId == "customers/8243-C")); using (var stream = session.Advanced.Stream(callsQuery)) { while (stream.MoveNext()) { SupportCall current = stream.Current; // do something with this instance } } }
Instead of getting all the results in one go, the code in Listing 4.21 will pull them from the stream one at a time. This way, the server, the client API and the application can all work in parallel with one another to process the results of the query. This technique is suitable for processing a large number of results (in the millions).
The use of streaming queries requires you to keep a few things in mind:
The results of a streaming query are not tracked by the session. Changes made to them will not be sent to the server when
SaveChangesis called. This is because we expect streaming queries to have a high number of results, and we don't want to hold all the references for them in the session. If we did, we would prevent the GC from collecting them.
Since streaming is happening as a single large request, there's a limit to how long you can delay before you call
MoveNextagain. If you wait too long, it's possible for the server to give up sending the rest of the request to you (since you didn't respond in time) and abort the connection. Typically, you'll be writing the results of the stream somewhere (to a file, to the network, etc.).
If you want to modify all the results of the query, don't call
session.Storeon each as they're pulled from the stream. You'll just generate an excess of work for the session and eventually end up with a truly humongous batch to send to the server. Typically, if you want to read a lot of results and modify them, you'll use a stream and a
Bulk Insertat the same time. You'll read from the stream and call
Storeon the bulk insert for each. This way, you'll have both streaming for the query as you read and streaming via the bulk insert on the write.
When you need to select whether to use a regular (batched) query or a streaming query, consider the number of items that you expect to get from the query and what you intend to do with them. If the number is small, you'll likely want to use a query for its simple API. If you need to process many results, you should use the streaming API.
Note that the streaming API is intentionally a bit harder to use. We made sure the API exposes the streaming nature of the operation. You should strive to avoid wrapping that. Streaming should be used on the edge of your system, where you're doing something with the results and passing them directly to the outside in some manner.
Caching
An important consideration when speeding up an application is caching. In particular, we can avoid expensive operations if we cache the results from the last time we accessed them. Unfortunately, caching is hard. Phil Karlton said:
There are only two hard things in Computer Science: cache invalidation and naming things.
Caching itself is pretty trivial to get right. The hard part is how you're going to handle cache invalidation. If you're serving stale information from the cache, the results can range from nothing much to critical, depending on what exactly you're doing.
With RavenDB, we decided early on that caching was a complex topic, so we'd better handle it properly. It's done in two parts. The server side generates an etag for all read operations. This etag is computed by the server and can be used by the client later on. The client, on the other hand, is able to cache the request from the server internally. The next time a similar read request is made, the client will send the cached etag to the server alongside the request.
When the server gets such a request with an etag, it follows a dedicated code path, optimized specifically for that, to check whether the results of the operation have changed. If they didn't, the server can return to the client immediately, letting it know that it's safe to use the cached copy. In this manner, we save computation costs on the server and network transfer costs between the server and the client.
Externally, from the API consumer point of view, there's no way to tell that caching happened. Consider the code in Listing 4.22.
Listing 4.22 Query caching in action
using (var session = store.OpenSession()) { var calls = session.Query<SupportCall>() .Where(c => c.CustomerId == "customers/8243-C")) .ToList(); } using (var session = store.OpenSession()) { // this query will result in the server // returning a "this is cached" notification var calls = session.Query<SupportCall>() .Where(c => c.CustomerId == "customers/8243-C")) .ToList(); }
The client code doesn't need to change in any way to take advantage of this feature. This is on by default and is always there to try to speed up your requests. Caching is prevalent in RavenDB, so although the example in Listing 4.22 uses queries, loading a document will also use the cache, as will most other read operations.
The cache that's kept on the client side is the already-parsed results, so we saved not only the network round-trip time but also the parsing costs. We keep the data in unmanaged memory because it's easier to keep track of the size of the memory and avoid promoting objects into Gen2 just because they've been in the cache for a while. The cache is scoped to the document store, so all sessions from the same document store will share the cache and its benefits.
Time to skip the cache
Caching by default can be a problem with a particular set of queries — those that use the notion of the current time. Consider the case of the following query:
session.Query<SupportCall>() .Where(c => c.StartedAt >= DateTime.Now.AddDays(-7) && c.Ended == null);
This query is asking for all opened support calls that are over a week old.
This kind of query is quite innocent looking, but together with the cache, it can have surprising results. Because the query uses
DateTime.Now, on every call, it will generate a different query. That query will never match any previously cached results, so it will always have to be processed on the server side. What's worse, every instance of this query will sit in the cache, waiting to be evicted, never to be used.
A much better alternative would be to use the following:
c.StartedAt >= DateTime.Today.AddDays(-7)
By using
Today, we ensure that we can reuse the cached entry for multiple calls. Even if you need more granularity than that, just truncating the current time to a minute/hour interval can be very beneficial.
The cache works by utilizing HTTP caching behavior. Whenever we make a
GET request, we check the cache to see if we previously made a request with the same URL and query
string parameters. If we did, we fetch the cached etag for that request and send it to the server. The server will then use that etag to check if the results have
changed. If they didn't, the server returns a
304 Not Modified response. The client will then just use the cached response that it already has.
While most read requests are cached, there are a few that aren't. Anything that will always be different, such as stats calls, will never be cached. This is because stats and debug endpoints must return fresh information any time they're called. Attachments are also not cached because they can be very large and are typically handled differently by the application.
Aggressive caching
Caching is great, it would seem. But in order to utilize caching by RavenDB, we still need to go back to the server. That ensures we'll get a server confirmation that the data we're going to return for that request is indeed fresh and valid. However, there are many cases where we don't care much about the freshness of the data.
On a heavily used page, showing data that might be stale for a few minutes is absolutely fine, and the performance benefits gained from not having to go to the server can be quite nice. In order to address this scenario, RavenDB supports the notion of aggressive caching. You can see an example of that in Listing 4.23.
Listing 4.23 Aggressive caching in action
using (var session = store.OpenSession()) using (session.Advanced.DocumentStore.AggressivelyCache()) { var customer = session.Load<SupportCall>( "customers/8243-C"); }
The code in Listing 4.23 uses aggressive caching. In this mode, if the request is in the cache, the RavenDB client API will never even ask the server if it's up to date. Instead, it will immediately serve the request from the cache, skipping all network traffic. This can significantly speed up operations for which you can live with a stale view of the world for a certain period.
However, operating in this mode indefinitely would be pretty bad since you'd never see new results. This leads us back to the problem of cache invalidation. Aggressive caching
isn't just going to blindly cache all data at all times. Instead, the first time it encounters an instruction to aggressively cache, the client is going
to open a connection to the server and ask the server to let it know whenever something changes in the database. This is done using the
Changes API, which
is covered in the next section of this chapter.
Whenever the server lets the client know that something has changed, the client will ensure that the next cached request will actually hit the server for the
possibly updated value. The client will not be asking specifically for changes it has in its cache. Instead, it will ask for all changes on the server.
(Take note of that, as you might be caching a lot of different queries, documents, etc.)
The client knows to check with the server for anything in the cache that is older than that notification.
When this is done, the cached etag is still being sent, so if that particular response hasn't changed, the server will still respond with a
304 Not Modified
and we'll use the cached value (and update its timestamp).
The idea is that, with this behavior, you get the best of both worlds. If nothing has changed, immediate caching is available without having to go to the server. But if something might have changed, we'll check the server for the most up-to-date response. Given typical behavioral patterns for the application, we'll often be able to use the aggressive cache for quite some time before a write will come in and make us check with the server.
Why isn't aggressive caching on by default?
Aggressive caching isn't on by default because it may violate a core constraint of RavenDB: that a request will always give you the latest information. With the requirement that aggressive caching must be turned on explicitly, you're aware that there's a period of time where the response you receive might have diverged from the result on the server.
Caching behavior in a cluster
Typically, RavenDB is deployed in a cluster, and a database will reside on multiple machines. How does caching work in this context? The caching is built on the full URL of the request, and that takes into account the particular server that we'll be making the request to. That means that the cache will store a separate result for each server, even if the request is identical otherwise.
For the most part, the etags generated for HTTP requests among servers should be identical for identical data since it uses the change vectors of the documents to compute it. However, different servers may receive documents in a different order, which may result in a difference in the actual results. That shouldn't impact the behavior of the system, but for now we skipped implementing cross node caching.
Changes API
We mentioned the
Changes API in the previous section since the aggressive caching is using it. The
Changes API is a way for us to connect to the server and
ask it to let us know when a particular event has happened. Listing 4.24 shows how we can ask the server to tell us when a particular document has changed.
Listing 4.24 Getting notified when a document changes
var subscription = store.Changes() .ForDocument("customers/8243-C") .Subscribe(change => { // let user know the document changed }); // dispose to stop getting future notifications subscription.Dispose();
Typically, we use the code in Listing 4.24 when implementing an edit page for a document. When the user starts editing the document, we register for notifications on changes to this document. If it does change, we let the user know immediately. That allows us to avoid having to wait for the save action to discover we need to redo all our work.
The
Changes API works by opening a WebSocket to the server and letting the server know exactly what kind of changes it's interested in. We can register for
a particular document, a collection, documents that start with a particular key, or even global events, such as operations or indexes.
Changes API in a cluster
In a cluster, the
Changes APIwill always connect to one node, and changes must first flow to that node before the changes will be sent to the client. The failure of the node will cause us to reconnect (possibly to a different server) and resume waiting for the changes we're interested in.
The
Changes API is meant to get non-critical notifications. It's cheap to run and is pretty simple, but it's possible that a failure scenario will cause you
to miss updates. If the connection has been reset, you might lose notifications that happened while you were reconnecting. It's recommended that you use
the
Changes API for enrichment purposes and not rely on it. For example, you might use it to tell if a document has changed so you can not only give early notification but also ensure
you have optimistic concurrency turned on so it will catch the change on save anyway.
Another example of a way you might use the
Changes API is with aggressive caching. If we missed a single notification, that isn't too bad. The next notification will put us in the same state. And we'll
be fine because the user explicitly chose performance over getting the latest version, in this case. Yet another example of
Changes API use might be for monitoring. You want to know what's going on in the server, but it's fine to lose something if there's an error because you're interested in what's happening now, not the full
and complete history of actions on the server.
For critical operations — ones where you can't afford to miss even a single change — you can use
Subscriptions, which are covered in the next chapter. They're
suited for such a scenario, since they guarantee that all notifications will be properly sent and acknowledged.
Projecting data in queries
In the previous chapter, we talked a lot about modeling and how we should structure our documents to be independent, isolated and coherent. That makes for an excellent system for transaction processing (OLTP) scenarios. But there are quite a few cases where, even in a business application, we have to look at the data differently. Let's take a look at our support case example. If I'm a help desk engineer and I'm looking at the list of open support calls, I want to see all the recent support calls and the customer that opened them.
Based on what we know so far, it would be trivial to write the code in Listing 4.25.
Listing 4.25 Recent orders and their customers
using (var session = store.OpenSession()) { List<SupportCall> recentOrders = session.Query<SupportCall>() .Include(c => c.CustomerId) // include customers .Where(c => c.Ended == null) // open order .OrderByDescending(c => c.Started) // get recent .Take(25) // limit the results .ToList(); }
The code in Listing 4.25 is doing a lot. It gets us the 25 most recently opened support calls, and it also includes the corresponding customers. We can now show this to the user quite easily, and we were able to do that in a single request to the server. Under most circumstances, this is exactly what you'll want to do.
However, in order to pull the few fields we need to show to the user in the application, we had to pull the full support calls and customers documents. If those documents are large, that can be expensive. Another way to handle this is by letting the server know exactly what we need returned and then letting it do the work on the server side.
This can be done by specifying a projection during the query. Projections allow us to control exactly what is being returned from the query. On the most basic level it allows us to decide what fields we want to get from the server, but we can actually do a lot more than that.
Let's see how we can use a query to project just the data that we are interested in. You can see the details in Listing 4.26.
Listing 4.26 Recent orders and their customers, using projection
using (var session = store.OpenSession()) { var recentOrders = ( from call in session.Query<SupportCall>() where call.Ended == null // open order orderby call.Started descending // get recent // fetch customer (happens on the server side) let customer = session.Load<Customer>(call.CustomerId) select new { // project just the data that we care about // back to the client CustomerName = customer.Name, call.CustomerId, call.Started, call.Issue, call.Votes } ) .Take(25) // limit the results .ToList(); }
There are a few things to note in Listing 4.26. First, we no longer include the
Customer in the query. We don't need that because the result of this query isn't a list
of
SupportCall documents but a list of projections that already include the
CustomerName that we want. The key part, as far as we're concerned, is the call to the
session.Load<Customer>() method. This is translated into a server side load operation that fetches the related
Customer document and extracts just the
Name field from
it.8
The output of a query like the one in Listing 4.26 is not a document, it is a projection and as a result of that, it isn't tracked by the session.
This means that changes to a projection won't be saved back to the server when
SaveChanges is called. You can call
Store on the result of a projection, but be
aware that this will create a new document, which is probably isn't what you intended to happen.
Cross-cutting concerns on the client
The RavenDB client API is quite big. We already discussed the notion of layers in the design of the client API, allowing you to select at what level you want to work at any given point in time. There is quite a lot that you can configure and customize in the client behavior. The most common customization is changing the conventions of the client by providing your own logic.
Most of the decisions that the RavenDB client API makes are actually controlled by the
DocumentConventions class. This class allows you to modify all sorts
of behaviors, from how RavenDB should treat complex values in queries to selecting what property to use as the document ID in entities.
If you need fine-grained control over the serialization and deserialization of your documents, this is the place to look. The
DocumentConventions class holds important configurations,
such as the maximum number of requests to allow per server or whether we should allow queries without a
Take clause. Listing 4.27 shows an example of controlling
what collection an object will find itself in.
Listing 4.27 Customize the collection to allow polymorphism
store.Conventions.FindCollectionName = type => typeof(Customer).IsAssignableFrom(type) ? "customers" : DocumentConventions.DefaultGetCollectionName(type);
In Listing 4.27, we're letting RavenDB know that
Customer or any derived type should be in the "customers" collection. That means that we can create a class
called
VIPCustomer that will have additional properties, but it will still be treated as a
Customer by anything else (indexing, queries, etc.). Such options allow
you to have absolute control over how RavenDB will work within your environment.
Event handlers
Alongside conventions, which are typically reactive, you can use event handlers to perform various operations during the execution of requests by RavenDB. The events are available as events that can be subscribed to at the document store level or at the individual session level.
The following events are available:
- OnBeforeStore
- OnAfterSaveChanges
- OnBefore
- OnBeforeQueryExecuted
This allows us to register to be called whenever a particular event happens, and that in turn gives us a lot of power. Listing 4.28 shows how we can implement
auditing with the help of the
OnBeforeStore event.
Listing 4.28 Implementing auditing mechanism
store.OnBeforeStore += (sender, args) => { args.DocumentMetadata["Modified-By"] = RequestContext.Principal.Identity.GetUserId(); };
Listing 4.28 ensures that whenever the document is modified, we'll register which user has made the modification in the metadata. You can also use this event
to handle validation in a cross-cutting fashion. Throwing an error from the event will abort the
SaveChanges operation and raise the error to the caller.
Another example of using events is to ensure that we always include a particular clause in our queries, as you can see in Listing 4.29.
Listing 4.29 Never read inactive customer
store.OnBeforeQueryExecuted += (sender, args) => { if (args.QueryCustomization is IDocumentQuery<Customer> query) { query.WhereEquals("IsActive", true); } }
The code in Listing 4.29 will apply to all queries operating on
Customer and will ensure that all the results returned have the
IsActive property set to true.
This can also be used in multi-tenancy situations, where you want to add the current tenant ID to all queries.
The full details of what you can do with conventions and listeners are in RavenDB's online documentation. I encourage you to browse the documentation and consider when it might make sense to use listeners in your application. Using them can be quite a time saver because listeners can be applied globally with ease.
Document revisions
In the previous section, we briefly mentioned auditing. In this section, we're going to take the notion of auditing and dial it up a notch, just to see what will happen. Certain classes of applications have very strict change control requirements for their data. For example, most medical, banking, payroll and insurance applications have strict "never delete since we need to be able to see all changes on the system" rules. One particular system I worked with had the requirement to keep all changes for a minimum of seven years, for example.
With RavenDB, this kind of system is much easier. That's because RavenDB has built-in support for handling revisions. Allow me to walk you through setting up such a system.
Go into the RavenDB Studio in the browser and create a new database, as we have seen in "Zero to RavenDB." Now, go into the database, and on the
left side menu, click
Settings and then
Document Revisions.
You can configure revisions globally for all collections or a single particular collection. For the purpose of this exercise, we'll define revisions
for all collections, as seen in Figure 4.3. You can enable revisions by clicking on
Create default configuration, accepting the defaults by clicking
OK
and then
Save.
Now that we've enabled revisions, let's see what this means. Go ahead and create a simple customer document, as seen in Figure 4.4.
You can see that this document has a
@flags metadata property that is set to
HasRevisions. And if you look at the right-hand side, you'll see a revisions
tab that you can select to see previous revisions of this document. Play around with this document for a bit, modify it, save and see how
revisions are recorded on each change.
Documents' revisions in RavenDB are created whenever you have revisions enabled and a document is modified. As part of saving the new document, we create a snapshot of the document (and its metadata, attachments, etc.) and store it as a revision. This allows us to go back in time and look at a previous revision of a document. In a way, this is similar to how we work with code. Every change creates a new revision, and we can go back in time and compare the changes between revisions.
Revisions in a cluster
In a cluster, revisions are going to be replicated to all the nodes in the database. Conflicts cannot occur with revisions, since each revision has its own distinct signature. One of the most common usages of revisions is to store the entire document history when you have automatic conflict resolution. We'll cover this behavior in depth in Chapter 6.
As part of configuring revisions on the database, you can select how many revisions we'll retain and for what period of time. For example, you may choose to keep around 15 revisions for seven days. Under those conditions, RavenDB will delete all revisions that are both older than seven days and have more than 15 revisions after them. In other words, if you've made 50 changes to a document in the span of a week, we'll keep all of them and only delete the earliest of them when they're over seven days old.
From the client side, you don't really need to consider revisions at all. This behavior happens purely on the server side and requires no involvement from the client. But you can access the revisions through the client API, as you can see in Listing 4.30.
Listing 4.30 Getting revisions of a document
List<SupportCall> revisions = session.Advanced .GetRevisionsFor<SupportCall>("SupportCalls/238-B");
The code in Listing 4.30 will fetch the most recent revisions and provide you with the revisions of the document as it was changed. You can also page through the revision history.
Once a revision has been created, it cannot be changed. This is done to ensure compliance with strict regulatory requirements, since it means you can treat the data in the revision as safe. It cannot be manipulated by the client API or even by the admin.
I've found that some applications, without the regulations requiring versioning, make use of revisions just because they give the users an easy way to look at the changes on an entity over time. That was especially true in one example I can recall, where the user in question was the business expert in charge of the whole system. This feature was very valuable to her since it allowed her to see exactly what happened and who took the action. (The application utilized a listener to implement audits, as shown in Listing 4.29). It was quite interesting to see how much value she got out of this feature.
In terms of cost, revisions obviously increase the amount of disk space required. But given today's disk sizes, that isn't usually a significant concern. Aside from disk space utilization, revisions don't actually have any impact on the system. Revisions are also quite important for ensuring that transaction boundaries are respected when replicating changes in the cluster9, handling historical subscriptions10 and ETL processes.
Revisions from an older version of your software
One thing you should note when using the revisions feature is that over time, as your software evolves, you might see revisions from previous versions of your application. As such, the revision document might be missing properties or have properties that have been removed or had their type changed.
Because revisions are immutable, it isn't possible to run migration on them, and you need to take that into account. When working with revisions, you might want to consider working with the raw document, rather than turning it into an instance of an object in your model.
How RavenDB uses JSON
The RavenDB server and the RavenDB C# client API use a dedicated binary format to represent JSON in memory. The details on this format are too low level for this book and generally shouldn't be of much interest to outside parties, but it's worth understanding a bit about how RavenDB handles JSON even at this stage. Typically, you'll work with JSON documents in their stringified form — a set of UTF8 characters with the JSON format. That is human-readable, cheap to parse and simple to work with.
But JSON parsing requires you to work in a streaming manner, which means that to pull up just a few values from a big document, you still need to parse the full document. As it turns out, once a document is inside RavenDB, there are a lot of cases where we want to just get a couple of values from it. Indexing a few fields is common, and parsing the JSON each and every time can be incredibly costly. Instead, RavenDB accepts the JSON string on write and turns it into an internal format called blittable.11
A blittable JSON document is a format that allows RavenDB random access to any piece of information in the document without having to parse the document, with a traversal cost of (amortised) O(1). Over the wire, RavenDB is sending JSON strings, but internally, it's all blittable. The C# client is also using the blittable format internally since that greatly helps with memory consumption and control. You generally won't see that in the public API, but certain low-level operations may expose you to it.
Blittable documents are immutable once created and must be disposed of once you're done with them. Since the document session will typically hold such blittable objects, the session must also be disposed of to make sure all the memory it's holding is released. An important consideration for the overall performance of RavenDB is that blittable documents always reside in native memory. This allows RavenDB fine-grained control over where and how the memory is used and reused, as well as its life cycle.
On the client side, using the blittable format means we have reduced memory consumption and reduced fragmentation. It also reduces the cost of caching significantly.
Summary
In this chapter, we've gone over a lot of the advanced features in the client API. We looked at working with attachments and understanding how we can use them to store binary data. Then we moved on to working with the document metadata in general. The document metadata is a convenient place to stash information about our documents that doesn't actually belong to the document itself. Auditing is one such example, and we saw how we can use listeners on the events that the client API exposes to us.
We looked at how change tracking is managed and how we can get detailed information from the session about what exactly changed in our documents. Then we examined how we should handle concurrency in our application. We looked at optimistic concurrency in RavenDB and even implemented pessimistic locking.12 Online optimistic concurrency can be handled for us automatically by the session, or we can send the change vector value to the client and get it back on the next request, thus implementing offline optimistic concurrency.
There's another way to handle concurrency — or just to save yourself the trouble of shuffling lots of data between client and server — and that way is to use patching. The client API offers patching
at several levels. Setting a value or incrementing a number is supported by a strongly typed API, but more complex tasks can be handled using
Defer,
which also offers you the ability to write JavaScript code that will be executed on the server to mutate your documents.
We also looked at various ways to get a lot of data into RavenDB, from a sequential
SaveChanges per document, to running them in parallel, to using the
bulk insert API to efficiently push data into RavenDB. We saw that the major limiting factor was typically the cost of going to the
database and that different approaches could produce significantly different performance profiles.
After looking at the various ways we could write data to RavenDB, it was time to look at the other side: seeing how we can optimize reads
from the server. We had already gone over
Include in "Zero to RavenDB," and in this chapter we looked at lazy requests, allowing us to combine
several different requests to the server into a single round trip.
The mirror image to bulk insert is the streaming feature, suitable for handling an immense amount of data. Streaming allows us to start processing the request from the server immediately, without having to wait for the complete results. This allows us to parallelize the work between client and server and gives us the option to immediately start sending results to the user.
Following the reading and writing of documents, we looked into caching them. The client API has sophisticated caching behaviors, and we delved into exactly how that works, as well as how it reduces the time we need to provide answers to the user. Caching allows us to tell the server we already have the result of a previous call to the same URL. And it allows the server to let us know if that hasn't been modified. If that's the case, the server doesn't need to send any results on the wire, and the client can use the cached (parsed and processed) data immediately. RavenDB also supports the notion of aggressive caching, which allows us to skip going to the server entirely. This is done by asking the server to notify the client when things change, and only then go to the server to fetch those changes.
That option is also exposed in the client API, using the changes API. The changes API gives you the ability to ask the server to tell you when a particular document, collection or set of documents with a given prefix has changed. This lets users know that someone has changed the document they're working on, and allows them to implement features such as "this data has changed," etc.
Next, we looked at how we can project the results of queries and document loads on the server side using projections. A projection allows you to modify the shape of the data that RavenDB returns to the client. This can be done by simply returning a subset of the data from the documents — or even by loading additional documents and merging the data from associated documents into a single result.
We looked at cross-cutting concerns on the client and how we can apply behavior throughout the client once. We can modify the client behavior
by controlling how RavenDB decides what class belongs in what collection, as well as serialization and deserialization. Listeners allow you to attach behavior to certain actions in the client API, giving you the option to customize certain behaviors. We looked at adding auditing to our application in about three lines of code and even saw how we can limit all the queries on the client to only include active users as a cross-cutting behavior.
Following cross-cutting behaviors, we moved to looking at the revisions feature in RavenDB and how to use it from the client side. The revisions feature asks the server to create a new revision of a document upon each change. Those revisions are immutable and create an audit trail and a change log for the documents in question. While this is primarily a server-side feature, we looked at how we can expose the revisions to the user through the client API and allow the users to view previous revisions of a document.
Our final endeavor was to cover at a high level the native serialization format that RavenDB uses, the blittable format. This format is meant to be extremely efficient in representing JSON. It's typically stored in native memory to reduce managed memory consumption and garbage collection costs down the road. You don't typically need to concern yourself with this, except to remember that it's important to dispose of sessions when you're done with them.
This chapter is a long one, but it still doesn't cover the full functionality. There are plenty of useful features, but they tend to be useful in specific, narrow circumstances or only make sense to talk about in a larger scope. We've barely discussed queries so far, but we'll do so extensively when we get to Chapter 11 and when we discuss indexing.
The next chapter is a good example of this. We'll dedicate that chapter to handling data subscriptions and all the myriad ways they make data processing tasks easier for you.
We'll discuss in detail how RavenDB clusters work in the next chapter.↩
And this should be a very rare thing indeed.↩
The Fiddler web proxy is a great debugging tool in general, and is quite useful to peek into the communication between RavenDB's server and clients.↩
TCP slow start can be a killer on benchmarks.↩
In fact, it's likely that a database cluster will be used on a set of machines.↩
We'll cover this technique when we discuss MapReduce indexes in Chapter 11.↩
The behavior on the JVM is the same. Other clients environment have different policies.↩
This is possible because we are using a synchronous session and queries. If we were using async queries, we'll need to use
RavenQuery.Load<Customer>in the Linq query.↩
See "Transaction atomicity and replication" in Chapter 6.↩
See "Versioned Subscriptions" in the next chapter.↩
I don't like the name, but we couldn't come up with anything better.↩
Although you shouldn't probably use that in real production code.↩ | https://ravendb.net/learn/inside-ravendb-book/reader/4.0/4-deep-dive-into-the-ravendb-client-api | CC-MAIN-2022-27 | refinedweb | 15,168 | 63.49 |
This note tells why and how you can use DocOnce to ease the writing of scientific books with much mathematics and computer code.
Why DocOnce?
Hwo do I get started?
Some real writing
References
Scientific books are very often written in LaTeX, but LaTeX is primarily suited for PDF output on paper. Today, your readers will be using different devices like tablet and phones for reading, and to address these media you need to write HTML. DocOnce lets you write in a syntax that is as simple as you use in email, and can then automatically translate that syntax to highly professional LaTeX or HTML for output (it can produce other formats too). So, the writing itself is easier since you avoid a lot of LaTeX or HTML tags, and the output is more versatile. We refer to the DocOnce tutorial and the web page for more arguments!
doc/srcand start writing the first chapter! (replace
why.do.txt)
bash make.sh, see
my-book-4print.pdf(this is the standard Springer book layout, the
svmonoclass adapted to DocOnce)
bash make_html.sh, see
my-book.html
Let us demonstrate emphasize text, bold text,
inline monospace font,
and of course computer code that we can copy from a part of a file
using regular expressions:
def f(x): return 42*x
It is a big advantage to copy computer code directly into the book, but you can also write it as part of the text, this time the FORTRAN equivalent:
subroutine f(x) real*8 x f = 42*x return end
Mathematics is written in plain LaTeX inside a begin-end tex environment:
$$ f(x) = 42x.$$
Remember to use simple LaTeX: just the
equation,
equation*,
\[ ... \],
align,
align*,
alignat, or
alignat* environments!
Inline mathematics makes use of dollar signs: \( f(x)=42x \).
As LaTeX writer, remember that white space counts in DocOnce syntax! Be extra careful with indentation of lists.
Also remember that DocOnce avoids backslash in label, ref and cite, e.g., in references like [1]. | http://hplgit.github.io/doconce/doc/src/book/repo/doc/pub/book/html/my-book.html | CC-MAIN-2018-39 | refinedweb | 335 | 64 |
from flask import Flask app = Flask(__name__) @app.route('/Wolves') <!DOCTYPE html> <html> <head> <title>Wolves<title> </head> <body> <!--Introduces Wolves, provides some information--> <h1>Wolves</h1> <p>There are three wolf species in the world. The Grey Wolf, the Red Wolf, and the Ethiopian Wolf. As well as subspecies of the three main species, that are named based on the region they inhabit. There are many similaries that are shared between these types of wolves but enough differences to seperate them and as research continues the distinct differences become more visible</p> <!--First Section contains some information about the Gray Wolves--> <h2>Gray Wolf</h> <p>The gray wold is also referred to as the timber wolf. In North America there are between 5 to 24 subspecies recognized. In Eurasia there are 7 to 12 subspecies.In Africa there is one subspecies recognized. A typical male can be 6.6 feet long, including their tail. They can stand 30 inches tall at the shoulder. Their weight can range anywhere from 31 to 143 pounds depending on the area they live in. On average females are around 20 percent smaller that males are. Fur color can be gray, brown, reddish, black or whitish, all with yellow- white fur on the under parts. Fur color is dependent on location.</p> <!--Second Section contains some information about Red Wolves--> <h3>Red Wolf</h3> <p>The most endanged member of the dog family.They are about 4 feet long and weigh between 45 to 80 pounds. Red wolves are native to the United States. They have a tawny reddish coat. The grey wolf and coyote were interbred in the past to produce the red wolves ancestors. So, they are intermediate in size between the two species.There are only an estimated 35 or less wild red wolves that remain in the Alligator River National Wildlife Refuge in eastern North Carolina and the surrounding countries. They are critically endanged.</p> <!--Third Section contains information about Ethiopian Wolves--> <h4>Ethiopian Wolf</h4> <p>The Ethiopian wolf is found in six or seven mountain ranges of Ethiopian.They are long-limbed and slender looking. They have a reddish coat with white markings on the legs, underbelly, tail, face, and chin.The boundries between the red and white are quite distinct. They have a characteristic white cresents below their eyes and white spots on their cheeks. There are black stripes down its tail and black hairs on the tip. Females are usually smaller and paler in color than the males.Ethiopian wolves have 5 toes on the front feet and 4 toes on the rear feet. They can weigh between 24 to 43 pounds and are between 33 to 40 inches long</p> <!--Contains information about animals related to wolves--> <h5>Canidae Family</5> <p>The Canidae Family is one of the oldest groups of carnivores still in exsistence. The Qualities found throughout the Canidae family include deep- chested bodies with elongated muzzles, physical endurance and an acute sense of smell and hearing. Some of the members include:</p> <ol> <li>Wolves</li> <li>Dogs</li> <li>Foxes</li> <li>Coyotes</li> <li>Jackals</li> </ol> <!--General Characteristics of Wolves--> <h6>General Characteristics of Wolves</h6> <p>Wolves have developed the capacity to survive in the most inhospital of climates.There are many characteristics that make up wolves and some of those include:</p> <ul> <li>Very intelligent creatures</li> <li>Upright ears</li> <li>Sharp teeth</li> <li>Pointed Muzzle</li> <li>Inquiring eyes</li> <li>Height varies from 26 to 38 inches at the shoulders</li> <li>Weight varies from 22 to 175 pounds</li> <li>Bodies built for stamina</li> <li>Posses features ideal for long distance travel</li> <li>Narrow Chests</li> <li>Powerful backs and legs</li> <li>And many more</li> </ul> <p>"<em><a href = "wolfworlds.com/wolf-species/">Wolf Species</a></em>"webpage has more information on the Wolf species.</p> <p>"<em><a href = "britannica.com/animal/gray-wolf">Gray Wolf</a></em>"webpage has more information on the Gray Wolf.</p> <p>"<em><a href = "nationalgeographic.com/animal/mammals/r/red-wolf/">Red Wolves </a></em>"webpage has more information about Red Wolves.</p> <p>"<em><w href ="animaldiversity.org/accounts/canis_simensis/">Ethiopian Wolf </a></em>"webpage has more information on the Ethiopian Wolf.</p> <p>"</em><a href = "animals.mom.me/animals-part-wolf-family-5278.html">Animals that are a part of the wolf family</a></em>"webpage has more information on the animals that are in the same family as wolves.</p> <p>"<em><a href = "animalcorner.co.uk/animals/wolves/">Wolves</a></em>" webpage has more information about the characteristics of Wolves.</p> </body> </html> app.run(host ='0.0.0.0', port = 8080)
This is what I'm getting when I try to run the program:
File "/home/ec2-user/environment/Project6/Python Web Page Code.py", line 10
<!DOCTYPE html>
^
SyntaxError: invalid syntax
Why is it invalid syntax? What am I missing? | https://www.dreamincode.net/forums/topic/417952-how-to-get-html-to-work/ | CC-MAIN-2020-05 | refinedweb | 841 | 58.79 |
1. Introduction
As we know, today’s web technology advances are fast in good and bad ways. With almost every technology, if not used properly, its results might be devastating. Many programmers are not introduced to the vulnerabilities that might occur when working and parsing XML files, so that was the reason for me to write this article. I hope you like it.
2. What is XML?
XML stands for Extensible Markup Language, mostly used for representing structured information. XML is widely employed in today’s web technology like web services (SOAP, REST, WSDL), RSS feed, Atom, configuration files (Microsoft Office and many other Desktop applications). XML has been standardized by the World Wide Web Consortium (W3C) and is part of SGML (ISO 8879). XML was created in 1996 by Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, Eve Maler, François Yergeau, and John Cowan. The first standardization and specification for XML was made on 10 Feb 1998.
W3schools.com has a nice short description of what XML represents ():
-
3. Designing an XML structure
- XML Header (Document Type Definition – DTD)
Designing an XML structure is pretty straightforward. Each XML document begins with a header that defines the XML declaration:
<?xml version=”1.0″ encoding=”UTF-8″ ?>
Code 1: Sample of header declaration
For the current example, the header defines the type of the encoding and the version. Also, in the header some additional entities could be included such as !DOCTYPE or other material. This is known as DTD (Document Type Definition) where a set of declarations are added to the XML file (for the tags used in DTD visit).
- XML Elements
Each XML file contains elements that could be defined with any character you want, except for special characters. The start of the tag is “” for example and end of a tag is “”.
<item> <title>Looking for a job!</title> <description>Recent graduate student looking for..</description> <link></link> </item>
Code 2: Sample of element declaration
The declaration of tags is quite easy; you just need to stick to the following rules ():
- Names can contain letters, numbers, and other characters
- Names cannot start with a number or punctuation character
- Names cannot start with the letters xml (or XML, or Xml, etc)
- Names cannot contain spaces
- XML Attributes
Instead of making an element within an element, you can make the child element be an attribute to its parent element. Kind of confusing to explain, but in practice it’s very easy.
<item title="Looking for a job!"> <description>Recent graduate student looking for..</description> <link></link> </item>
Code 3: Sample of element declaration
Look in the previous sample code (Code 2); there, as you can see, I have made the “title” an element, and in this sample code, I made it an attribute. Not much of a difference, since there are not many rules about how you will make your XML file
- XML Validation
There are also web sites where you can validate you an XML file, to see if it is properly designed or not:,, and many more.
Today many web application and desktop application use XML as part of its structure and the RSS feed is one of them. It stands for Rich Site Summary, or more colloquially Really Simple Syndication, and its main function is to display summarized text of recent published blogs, posts, news and etc. Today many news aggregators including Google News works by using the RSS feed. Here is a sample script in PHP for making an RSS feed. This is just a sample for you to see how it works. I definitely wouldn’t recommend this for using in real life project!
<?php header("Content-Type: application/rss+xml; charset=UTF-8"); $"; $rss.="<channel>"; $rss.="<title>Feeds RSS Localhost</title>"; $rss.="<link></link>"; $rss.="<description>Sample RSS feed</description>"; $rss.="<language>en-us</language>"; $rss.="<copyright>Copyright (C) localhost.com</copyright>"; $rss.="</channel>"; foreach($adverts as $advert){ $rss.="<item>"; $rss.="<title>".$advert->title."</title>"; $rss.="<description>".$advert->advert_text."</description>"; $rss.="<link>{$advert->id}</link>"; $rss.="</item>"; } $rss.="</rss>"; echo $rss; ?>
Code 4. Sample XML generator in PHP
As you can see from the code, this script is used for generating a XML file with the tags that i have defined. This script will generate the following XML content (this is just a sample script from an old web page that i have been working on so it will not work on your server).
<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0"> <channel> <title>Feeds RSS Localhost</title> <link></link> <description>Sample RSS feed</description> <language>en-us</language> <copyright>Copyright (C) localhost.com</copyright> </channel> <item> <title>Selling car for 30k$</title> <description>I am interested in selling my car...</description> <link></link> </item> <item> <title>Looking for a job!</title> <description>Recent graduate student looking for.. </description> <link></link> </item> <item> <title>Selling house for 430k$</title> <description>Want to live in great house…</description> <link></link> </item> </rss>
Code 5: Sample of generated XML
If you open it in Chrome, it will look like:
Figure 1: XML file in browser
In Python, you can easily parse XML files. There are many modules that can be used for this purpose, for this sample will be used BeautifulSoap ().
def parse_score(link): xml = urllib2.urlopen(link) xml_content = xml.read() soup = BeautifulSoup(xml_content) results = soup.find_all("item") for result in results: print result.contents
Code 6: Sample of XML parsing
The code is straightforward; three steps are involved: loading the XML link (or file), parsing the content by using BeautifulSoap and the last step is extracting the XML content.
6. Common XML vulnerabilities (sample of vulnerable code)
Every application has vulnerabilities, so XML parsers have some too. This is a list of well-known XML vulnerabilities that might occur in your application:
- Billion laughs
This vulnerability is a DoS (Denial Of Service) aimed for the parsers of the XML. This vulnerability is also known as XML bomb or Entity Expansion XML bomb. It also might happen that this vulnerability pass the validation of the XML schema. Consider the following tag:
<!ENTITY entityName “Some Value”>
Code 7: DTD tag
Now consider the following vulnerable code (the code is taken from):
Figure 2: Billion laughs vulnerable code
As you can see, we have 10 “lols”. So what is happening here? At the end, we have instance of “lol9”. When the &lol9; is parsed the entity lol9 will be called which has 10 “lol8” instances. The lol8 has 10 “lol7” instances and so on. At the end you may assume that there will be a lot of “lol” (100,000,000 instances = billion). The billion lol’s might cause DoS (Denial of Service). That’s why it is called the Billion Laughs Vulnerability. For more information about the vulnerability, check the link.
- Quadratic blowup
Another Entity Expansion XML bomb is the quadratic blowup vulnerability discovered by Amin Klein of Trusteer. The “kaboom” entity has 50,000 “a” represented as “&a;” When parsed, the size of it changes, from 200KB to 2.5gb, causing DoS. Still the billion laughs create much bigger size when parsing compared to quadratic blowup.
<?xml version="1.0"?> <!DOCTYPE kaboom [ <!ENTITY a "aaaaaaaaaaaaaa..... ]> <kaboom>&a;&a;&a;&a;&a;&a;.....</kaboom>
Code 7. Quadratic blowup
- DTD retrieval
Also with entity declaration, you can have an URL link for replacement (for definition of replacement see previous vulnerability). When using the System identifiers you can download the content from external location and embed it in you XML file.
<!DOCTYPE external [ <!ENTITY ee SYSTEM ""> ]> </span></p><p><span style="font-size:14pt"><root>ⅇ</root> </span></p><p><span style="font-size:14pt">
Code 8. Remote entity expansion retrieval example
The same vulnerability could be used for local file also:
<!DOCTYPE external [ <!ENTITY ee SYSTEM ""> > <root>ⅇ</root>
Code 9. Local entity expansion retrieval example
According to an article from Python’s blog about XML vulnerabilities, here are the possible “bad things” that might happen because of this vulnerability ():
-.
7. How to defend
Figure 3: Modules that lack protection from XML exploits ()
Tips:
-
-
- Using parsers that use safe functions
8. Conclusion
I think that this topic was interesting because it is something that many programmers are not aware of. We should care more about the security of web applications, because XML is more and more part of them, and that increases the risks of being exploited. We saw that the results of exploiting these vulnerabilities are devastating, and that is why we should be more concerned about using safe modules and functions.
9. References
-
-
-
-
-
-
-
-
-
-
- | http://resources.infosecinstitute.com/xml-vulnerabilities/ | CC-MAIN-2018-05 | refinedweb | 1,429 | 55.64 |
This article is a practical overview of Object Oriented Programming (OOP) in Python. It explains why OOP is useful, aside from how it’s done. This should be useful to both people who don’t know what OOP is, and experienced developers transitioning from other languages.
I am not a professional Python developer, and I am currently re-learning the language after not having used it for 8 years. So keep that in mind as you read, and feel free to offer feedback that can improve the quality of this article. Just be nice. 🙂
Due to the eternal divide between Python 2 and 3, I have to state that I’m using Python 3.6.4 here. Why Python 3? Because it makes no difference to me. When you are just learning and don’t have any requirements for maintaining backwards compatibility, you can afford to use the latest and greatest.
Introduction
In his hysterical rant on the web and OOP (in which he says the word “bizarre” enough times to qualify as a cover of OMC’s song), Zed Shaw cites OOP being “difficult to teach” as one of its major flaws.
Image credit: taken from here.
That’s a bold claim coming from someone who wrote in his own book:
“Search online for “object-oriented programming” and try to overflow your brain with what you read. Don’t worry if it makes absolutely no sense to you. Half of that stuff makes no sense to me either.” — Learn Python the Hard Way, Third Edition. Zed A. Shaw. 2014.
There are many things in computing that are hard to teach. I don’t think that Object Oriented Programming is one of them.
Motivation
In order to understand why OOP is useful, we’ll start off by not using it, and observe the problems we encounter. To do this, we need a proper example. People often teach OOP in terms of animals or cars, but I think games make more fun and interesting examples.
Screenshot from Dark Sun: Shattered Lands (1993)
A player-controlled character in a game typically has a number of attributes (e.g. name, hit points, etc). In order to group the attributes for our character, we need some kind of record or structure, which in C-style languages would be a struct. We don’t have that in Python, but we can use dictionaries instead.
talorus = { 'name': 'Talorus', 'hitpoints': 30, 'dead': False, 'inventory': [] }
Once we have a way to hold related data, we’ll want to perform some kind of operations on it.
def rename(character, newName): character['name'] = newName def sufferDamage(character, damage): character['hitpoints'] -= damage if (character['hitpoints'] <= 0): character['dead'] = True def receiveItem(character, item): character['inventory'].append(item)
Here’s some example usage:
You’ll notice a common theme across these functions. In all cases, we’re passing our character as the first parameter, and then using some of its attributes within the body of each function. We’re not using OOP yet, but we can already see a natural progression towards the character object being a first class citizen.
However, our current approach has a number of flaws. One of these is that it is easy for any code, anywhere, to tamper with our dictionary’s state.
Our logic from the
sufferDamage() function specifies that characters die only if they run out of hitpoints, so how is our character dead with 26 hitpoints?
Being able to tamper with an object’s state without restriction is a bad thing: it is a violation of encapsulation, which is one of the three pillars of OOP (along with inheritance and polymorphism). We’ll discuss these later.
Classes and Objects
A class is just an abstract template for a type of object. For instance:
class Troll: pass
We’re declaring a Troll class, and using the
pass keyword to indicate that there’s nothing in it for the time being. Once we have this class, then we can create concrete instances:
tom = Troll() bert = Troll() bill = Troll()
In Python, we create instances of a class (i.e. objects) by calling the class name as if it were a function.
An object may have any number of attributes (data members), just like the elements in a dictionary, but accessed using dot notation. Since Python is a dynamic language, it poses no restriction on the attributes that a class must have. We can add and remove attributes on the fly:
A class may define functions (called methods) that operate on an instance of the class:
class Character: def setName(self, newName): self.name = newName
This might look a bit weird, so let’s see some example usage and then discuss what we’re doing here:
The structure of the method might be familiar from the earlier section where we emulated OOP with dictionaries. In this case, we are similarly passing in the object itself as the first parameter, named
self by convention. This extra parameter is required by Python. Through
self, we can then access attributes of the class using dot notation.
What might look really strange here is that although
setName() takes two parameters, we’re calling it with one. That’s because the
self parameter is passed in implicitly when you call a method.
Constructors
A class may define a special method called
__init__() which serves as the class’s constructor. It is usually used to initialise the object’s attributes, and may optionally take parameters which must be supplied when the object is instantiated:
class Character: def __init__(self, name, hitPoints): self.name = name self.hitPoints = hitPoints self.dead = False self.inventory = [] def setName(self, newName): name = newName
Class-Level Variables
Screenshot from Ravenloft: Stone Prophet (1995)
A class may define variables within its scope:
class Monster: totalMonsters = 0 def __init__(self, name, immortal): self.immortal = immortal Monster.totalMonsters += 1
Such class-level variables are not attributes of individual objects. They are shared across all instances of the class, just like static member variables in other languages. The distinction should be clear when you see that you access object attributes using
self and class attributes using the name of the class itself. In this example, the shared totalMonsters counter is incremented every time a new monster is created:
Composition
Screenshot from Dark Sun: Shattered Lands (1993)
In the real world, complex objects are made up (composed) of other objects. The classic example is that a car has an engine (among other parts), but I prefer to stick to the game example. So let’s say we develop our inventory beyond a simple list, and make it into its own class:
class Inventory: def __init__(self): self.items = [] def add(self, item): self.items.append(item) def has(self, item): return item in self.items
While this is a trivial implementation, it can be extended to support more complex operations.
We can now change our Character class to contain the new Inventory class:
class Character: def __init__(self, name, hitPoints): self.name = name self.hitPoints = hitPoints self.dead = False self.inventory = Inventory() def setName(self, newName): name = newName
Composition is used to model a has-a relationship (e.g. Character has an Inventory). As you can see, it’s nothing special. It’s merely a case of a class (e.g. Character) having an attribute whose type is also a class (e.g. Inventory).
Inheritance
Screenshot from Ultima 9: Ascension (1999)
A sword is a very common weapon in games. We can represent a simple sword by the following class:
class Sword: def __init__(self): self.damage = 10 def attack(self, target): print('%d damage done to %s' % (self.damage, target))
Here’s an example usage:
However, there isn’t just one type of sword across all games in existence. Many games have magical swords with all sorts of positive (and negative) effects. One example is a fire sword. It does extra fire damage.
class FireSword: def __init__(self): self.damage = 10 self.fireDamage = 5 def attack(self, target): print('%d damage done to %s' % (self.damage, target)) print('%d extra fire damage done to %s' % (self.fireDamage, target))
As you can see, there’s a lot of repetition here. If we also add classes for lightning swords, poison daggers etc, do we really want to duplicate this code and have to maintain it in several different places?
Fortunately, OOP allows us to create classes that inherit from others.
class FireSword (Sword): pass
The above code states that
FireSword is-a
Sword, and as a result, it inherits all of
Sword‘s attributes and methods:
However, while we are reusing
Sword‘s implementation for
FireSword, we don’t yet have the extra functionality (i.e. extra fire damage) that makes it a fire sword, as we had in the original example. In order to do that, we must override
Sword‘s methods to provide the extra functionality.
class FireSword (Sword): def __init__(self): super().__init__() self.fireDamage = 5 def attack(self, target): super().attack(target) print('%d extra fire damage done to %s' % (self.fireDamage, target))
Here’s an example usage:
By calling
super(), we’re calling the
Sword class’s implementation before doing the extra logic specific to
FireSword. In OOP terminology,
Sword is the base class, parent class or superclass, and
FireSword is the derived class or child class.
When you request an attribute or call a method on a derived class, Python will first look for an implementation in the derived class, and if it’s not there, it will look it up in the base class. This mechanism is what enables inheritance. However, it is also possible to have a method in the derived class to replace or extend the equivalent method in the base class, as we have seen above.
In other OOP languages, methods must usually be marked as virtual to allow them to be overridden. This is not necessary in Python.
“For C++ programmers: all methods in Python are effectively virtual.” — The Python Tutorial – Classes
Python allows a class to inherit from more than one base class. This is known as multiple inheritance, and is strongly discouraged because it makes classes extremely hard to work with. More modern OOP languages such as Java and C# expressly forbid multiple inheritance.
As a humorous aside, if you have a copy of Zed Shaw’s “Learn Python the Hard Way” book, you might want to read his section on “Inheritance vs Composition” for laughs. Shaw wastes almost a whole page with a silly story about a forest and an evil queen, which are supposed to be analogies for inheritance and multiple inheritance. His argument is that inheritance is bad because multiple inheritance is troublesome. That’s a bit like saying we should ban fire because some idiot got burned.
“In object-oriented programming, inheritance is the evil forest. Experienced programmers know to avoid this evil because they know that deep inside the dark forest of.” — Learn Python the Hard Way, Third Edition. Zed A. Shaw. 2014.
Shaw suggests that inheritance should be avoided, and composition should be used instead. For him, the choice between “inheritance versus composition comes down to an attempt to solve the problem of reusable code”. Unfortunately, he misses the point entirely. The main benefit of OOP is to model objects and their relationships. Inheritance models an is-a relationship, whereas composition models a has-a relationship. Code reuse is a practical benefit of both, but does not make them interchangeable.
Encapsulation
In the Motivation section towards the beginning of this article, we saw how emulating OOP with dictionaries results in a situation where the internal state of our classes can be tampered with. Let’s revisit that example, but with OOP:
class Character: def __init__(self, name, hitPoints): self.name = name self.hitPoints = hitPoints self.dead = False def sufferDamage(self, damage): self.hitPoints -= damage if (self.hitPoints <= 0): self.dead = True
Unfortunately, OOP in Python doesn’t do much to protect our internal state, and we can still tamper with it without restriction:
Other OOP languages usually have
private,
protected and
public access modifiers to control access to internal data members of the class; these are enforced by the language. There is none of this in Python. The only thing you can do is follow a convention where private attributes are prefixed by an underscore, and hope that people play fair. It doesn’t stop people from accessing internal state.
Hiding the internal state of a class is called encapsulation. One strong reason why it is important is, as we’ve just seen, to ensure the consistency of that internal state (dead with 255 hit points? huh?). Another reason is to be able to modify the way that state works, without external code being affected.
So right now, we have an attribute called dead (or _dead, if we’re making it private by convention). Let’s add a method that exposes it:
class Character: def __init__(self, name, hitPoints): self._name = name self._hitPoints = hitPoints self._dead = False def sufferDamage(self, damage): self._hitPoints -= damage if (self._hitPoints <= 0): self._dead = True def isDead(self): return self._dead
Code external to this class may now check whether the character is dead by calling the
isDead() method, and should not access
_dead directly:
xanathar.isDead()
This extra method gives us a lot of flexibility because external code does not get to see how we store our internal state. We could, for instance, replace our
_dead attribute with a computation based on
_hitPoints, and the external code would never know the difference:
def isDead(self): return self._hitPoints <= 0
So while in Python you can’t force external code not to touch a class’s internal state (as other OOP languages usually do), it is good practice to hide internal state using the available conventions, and expose only what needs to be exposed.
Polymorphism
Image credit: screenshot of Ultima 7: The Black Gate (1992) using Exult, taken from Let’s Play Archive entry
Typically, a person in a game can talk:
class Person: def Talk(self): print('Hello!')
Sometimes, though, an item can also talk.
class BlackSword: def Talk(self): print('Which of my powers dost thou seek to use?')
Animals, too, may surprise you with their gift of speech.
class SherryTheMouse: def Talk(self): print('Do you have any cheese?')
So here we have three completely unrelated classes, but they all have the same ability: we can call the
Talk() method. When different objects exhibit similar behaviour, and thus we can work with them in a consistent manner, it’s called Polymorphism.
This is useful, for instance, when iterating over different kinds of objects in a loop:
This is unusual in the world of OOP, but since Python uses duck typing, it’s enough that two classes have the same method signature so that you can use them in the same way. In more strongly-typed OOP languages such as C# or Java, the classes would need to have something in common for you to do this (e.g. they implement the same interface, or they share a common base class).
Generics
This section is for developers coming from OOP in other languages. If you’re new to OOP, you may skip it.
Sometimes, you want to make a class have the same behaviour with different data types. For instance, you create a class representing a stack, and it should work the same regardless of whether it’s a stack of integers or of strings.
C++ provides this through templates, and C# and Java provide generics. These are a way to generalise the class implementation across dependent types, while still enforcing type safety.
Since Python is a dynamic language and it does not care what types you use, generics are not necessary. Your stack (or whatever) class will work just as will with integers, strings, or Animals (although I don’t recommend putting elephants at the top of the stack).
Summary
In this article, we’ve covered the basics of OOP in Python.
- Even if you’re not currently doing OOP, you’ll notice that groups of variables and functions will tend to relate to the same entity. There is a natural tendency towards OOP.
- Classes are groups of attributes and functions (methods). They provide a template but are not concrete.
- Objects are concrete instances of classes. Person is a class. Joe is an object.
- A constructor allows you to initialise attributes and pass in any parameters at instantiation time.
- Class-level variables are shared across all instances of that class.
- Composition is when a class contains other classes. It expresses a has-a relationship.
- Inheritance expresses an is-a relationship. If FireSword is-a Sword, then FireSword inherits all of Sword’s attributes and methods, and may override those methods to provide more specialised variants.
- Encapsulation is hiding internal attributes of a class so that external code can’t change them, and so that internal code can be changed without affecting external code. This is not enforced by the language but is upheld by convention.
- Polymorphism is when different objects behave in a similar way. In Python, it works as a result of duck typing.
- Generics aren’t necessary in a language without type safety.
This material includes basic concepts and language syntax, but is merely a starting point.
Mastering OOP is a matter of understanding that it is all about abstraction, and learning to work with abstractions in a way that is beneficial to software (e.g. models problem domains, maximises code reuse, reduces coupling, increases maintainability etc). The three pillars of OOP (inheritance, encapsulation and polymorphism) are basic building blocks of such abstraction. Various design patterns have emerged which demonstrate OOP abstractions put to good use. | https://gigi.nullneuron.net/gigilabs/tag/oop/ | CC-MAIN-2020-05 | refinedweb | 2,961 | 64.61 |
In Chapter.
Problem
Learncpp.com is unable to see where are the common mistakes, errors in train of thought (wrong assumptions), where students stop.
Suggestion:
How about setting up accounts to send in answers?
Each account will have data to see where are the common errors and mistakes.
Data analytics could mark the papers like TIOBE.
You can see where they stop and pause for extremely long periods of time ie most likely pointers and arrays. Specifically, passing an array through a function.
If we can declared variable inside the function then why we need to declare argument and parameter.
For example:
1. Variable declared inside function
Void func()
{
int i
}
2. Pass by argument
Void func(int i)
{
}
I don't understand why we need 2. If 1 is useful then why 2?
In 1, you have to know the value of 'i' at compile time.
In 2, the value of 'i' can be either known at compile time or at runtime.
If a value is known at compile time, it should not be an argument but stored as a constexpr likely in a header file.
If you need to experience the above, try writing a function calculating the average gravitational force the Earth will exert on an object sitting on its surface.
You will need the following variables:
- G the gravitational constant
- R the average radius of the Earth
- M the mass of the Earth
- m the mass of the object
The formula is F = GMm/R².
Of course the function has to work for objects of any mass...
hey nascardriver I have another ambiguity here,you said that all of the function parameters are created and then the value of the argument is copied into parameters.so here is what I think that happens:
1:the function parameters are created and the value of argument is copied into function parameter,this copy costs the same amount of byte as the type that is copied right ?so for an int argument totally 8 bytes is consumed for this process.
2:if the above reasoning is correct, in case of passing by reference or pointer the same things happen but the difference is that the copy that is passed is an address.however in passing by address section I asked you some questions and you said the cost of copying address is 8 bytes. don't addresses always occupy 4 bytes ? because they are the value that a pointer holds and pointers are always 4 bytes wide.so why did you say it is 8 bytes ?
thanks in advance!
1: Function parameters are normal variables that are initialized via copy-initialization by the caller.
2: An integer is usually 4 bytes wide, a pointer 8 bytes. On 32 bit systems, pointers were 4 bytes wide.
oh.. then the only cost is defining the function parameters, I thought passing argument also costs some memory...
thanks!
Dear Alex and nascardriver,
I want to tell you big thanks for your site, effort and patience to bring us the holy C++ knowledge :)
I finished reading and now is trying to do first small steps with C++ development. According a task in www I have to use qsort to do sorting of data in memory. As I find out it's not a good idea to use qsort. Anyway I have to use it.
According qsort specification
Could you explain please what const void* p1, const void* p2 mean are ? Are these function pointers .... confusing syntax void for function parameter.
Actually I didn't find any mentions, references or examples in your tutorials... I'd be very thankful.. :) Do you think is it needs to be described in tutorials ?
And please one more question. I can't understand why qsort with vectors does sorting the only for specified "id" field in custom type? It doesn't sort Worker structures in the current vector... but does sorting of id for the appropriate Workers structures. I supposed to get sorted Worker dataset in the vector.
-----------------------------------------------------------------------------
ID Surnmame Name Patronymic Post h/Pay Hours Salary
-----------------------------------------------------------------------------
50 Smith Alex Test_2 Driver 10 40 400
30 Columbus Christopher Test_1 Seaman 100 720 7200
After sorting
-----------------------------------------------------------------------------
ID Surnmame Name Patronymic Post h/Pay Hours Salary
-----------------------------------------------------------------------------
30 Smith Alex Test_2 Driver 10 40 400
50 Columbus Christopher Test_1 Seaman 100 720 7200
My apologies, just after reading I found ... Void pointers :)
Anyway, the second question is actual )
Please skip and the second question.
According "The type of the elements of the array must be a TrivialType, otherwise the behavior is undefined."
In my case each element is not a Trivial Type.
Thank you.
Minor typo: The opening sentence mentions Chapter 1 but all linked sections below are from Chapter 2. I suspect refactoring. :)
Why this happen?
The compile okay and programm works.
Even I put ";" at the end of function:
The complier still complain about ";".
My question is what ";" really means? Why it works when the
function is defined out of the main function, but not within the
main function?
You cannot define functions inside functions.
If you want to define a function in-line, have a look at lambda-functions.
Hi Alex and Nascardriver! I don't understand the meaning of the term 'copied' in sentence 'the value of the arguments are copied into the parameters'. Could you explain to me? cause it's confuse me.
Think of the parameter as a variable. Someone the value for that variable has to come from the caller to the function.
Think of it like this:
@i's value was copied to @j. They have the same value, but they're not the same. When you call a function, the same thing happens.
Hi!
I'm preparing for an IT exam in University and I'm facing a little problem. In one exercise it ask to write the prototype of the function fz, giving you these details:
int x;
double y[100];
I have to write the prototype based on this last information:
1.5 + fz(y, y[4], x + y[x]);
in a way that it is possible.
My question is: is it possible?
'cause searching everywhere, I can't find anything that says that is correct to identify a var with (x+y[x]), so I wanted to ask you if it is correct.
In that case my prototype should be: int fz (int[], int[], int[]); is it correct?
Thanks for your answer!
Hi Steven!
y is a double array, so int[] is wrong.
Judging by the context fz returns a double (1.5 + fz()).
The first parameter to fz is y, we know y is a double[].
The second parameter is y[4], since y is a double array every element in y is a double.
The third parameter is (int + double) which is a double.
The answer I'd go with would be:
double fz(double[], double, double)
y is defined as a double array, so the first parameter is clearly double[].
y[4] is the type of element 4 in the double array, so the second parameter is clearly double.
An int plus a double yields a double result, so the third parameter is also a double.
The return value isn't well defined (it could be anything), but since it's being added to a double, it's most likely a double.
I had to enter a series of even number,but it is not working as it should.
#include <iostream>
using namespace std;
bool number(int x);
int main()
{
int x = 0,i=0;
while (i!=-1)
{
cout << number(x);
if (number(x) == true)
cout << "Its even"<<endl;
else
cout << "its odd"<<endl;
}
return 0;
}
bool number(int x = 0)
{
int t = 0;
cout << "Enter Number";
cin >> x;
t = x % 2;
if (t = 0)
{
return true;
}
else
{
return false;
}
}
There are several errors here. First, in function number(), you're using assignment (=) instead of comparison (==). Second, you call number(x) from main() twice per iteration. You should only call it once (stick the result in a variable if need be).
Hi Alex,
Just write this down to thank you for putting such a comprehensive and yet easy-to-pick-up tutorial for c++. I have been slacking off for several days since finishing coding the blackjack at the end of Chapter 6. It's super rewarding experience. Now ready to move on!
Thank you!: 'A' : :D
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/function-parameters-and-arguments/ | CC-MAIN-2021-17 | refinedweb | 1,415 | 73.78 |
I’ve been writing primarily about Solid Edge part functionality as I’ve been learning my way through the software. I haven’t covered all the functions in parts, and I’ll want to come back to write about more of it later, but for now I want to move on and talk about something else. There’s a lot more to the software than just parts. The last series of articles was about multi-body tools and methods, which is going to serve as a bridge to working through assemblies.
This first topic in the assemblies area is going to cover Virtual Components.
Virtual Components, sometimes called “Zero D”, is a method that enables you to either layout an assembly using 2D sketch blocks, or to just create the assembly from a BOM (Bill of Materials) list of components. Virtual components can be empty documents, or they can be documents with special 2D sketch blocks that help you place the component in a layout sketch. You can also create Virtual assemblies to establish the organization of the model right from the start, without having to create actual part or assembly geometry.
The concept of virtual components is not exclusive to Solid Edge, but the ability to create an assembly from a BOM or to place parts using the 2D sketch are things you won’t find in SolidWorks. (Works used to have a Labs project called Treehouse that allowed you to build an assembly from a BOM, but the Labs area was done away with a couple of years ago.) Plus, Solid Edge goes a step further with some surprising tools for combining the ease of manipulating 2D sketch blocks with the power of 3D assembly modeling. We’ll get to that a little later.
You can work with Virtual Components in two different ways in Solid Edge:
This blog post is just going to go through the first one of these, to serve as a partial introduction to the capabilities of this tool. Virtual Components is not new, it was introduced about 10 years ago, but it may be another one of those features that doesn’t get enough attention.
In addition to creating an assembly from a BOM, you can also use Virtual Components to represent parts within the assembly that you don’t have to model. Things like compressed propane gas, grease, paint, batteries with a charge, or other non-geometrical elements that go into making your product what it is.
Let’s have a look at how to get started with Virtual Components.
This might seem obvious, but I’ve got to say it. If you want to learn Virtual Components, you have to start with an assembly document. You can start with a blank assembly, or you can start in an existing assembly that already contains other components. A Virtual “Component” can be either an assembly or a part. “Component” is just a generic term for an item in an assembly, whether part or assembly.
Here is the interface for creating Virtual Components:
Notice that the interface opens up with the Parts Library so that you can add either existing parts with 3D geometry, or just add virtual parts with no geometry.
The best way to add blank virtual components in this interface is to select the assembly that you want to add subassemblies or parts to, set the document type (part or assembly) and just start typing part names (or numbers), pressing Enter to create each component.
Notice parts and assemblies have different icons, and also notice that virtual components (shown in this image under the Engine in the Pathfinder) are different from the real components (shown at the bottom of the Pathfinder as Part1.par and Fourbar.asm)
If you start in an existing assembly, the Structure Editor will read the existing structure and allow you to add to it. You can also delete components.
You can create a combination of real and virtual components, which is very nice.
To make the Virtual Components into “real” components (that have files saved on a hard drive), you can use the Publish Virtual Components tool.
The publish tools are really nice, they allow you to specify the path for each individual component, and also you can specify templates to be used for each file as well. But there is only one option for Publish, and that’s to publish all the virtual components that exist at the time. You can have virtual items like paint, glue, grease, mentioned earlier, but if they exist when you publish, they will stop being virtual. To work around this, you might add items that you want to remain virtual only after all of your other components have been added. A good enhancement request for this area of the software might be a selective publish option.
What I’ve shown to this point is just a quick introduction to the Virtual Component tool, and only the simplest way to use Virtual Components. There is a lot more to show, and I’m going to save that for a second post on this topic. In my next post, I’ll talk about how you can use sketches and 2D block functionality to seriously accelerate your assembly design process.
Great article Matt -- and looking forward to future installments. I encourage folks to play around with the editor itself. I know you can only cover so much here. I think folks will find some good depth and ease of use in this tool. Just fill in a name and hit return (to make a part say); key in another, hit return, another, hit return. Really fast. Drag and drop to restructure parts into different asms. RMB and "Change Component Type" if you first thing something is PAR and then want it SM. Explore!!!
This is something I have never touched, mostly because I had trouble finding a purpose for using it. Looking forward to post #2 and seeing the light!
Can you copy and paste a BOM from excel or ERP?
Yes great article again Matt... but as Chally states above... when and why would I use this. A good example of how this workflow is used would be helpful. To be honest even using Virual Companents is a mystery to me. So a demo of how these tools are used and for what, would be very enlightening.
Bob
Dan,
Thanks for filling in some of the blanks there. I found the drag and drop reordering, but I had missed the RMB Change Component Type. That's very powerful.
HDS:
No, I don't think you can copy/paste a whole list, but the entry is pretty quick. You can copy/paste individual component names/numbers. They have to be added one at a time. That would be another great enhancement for this tool - copy/paste structure from a text file.
Bob,
Yeah, you know this is going to turn out to be more of a book than a blog. You're right, I think an example would go a long way here.
The truth is that I thought Virtual Components was just about the BOM entry thing, and then when the article was mostly written, I found the 2D sketch block functionality, which is even more compelling to me than the structure. So I had to re-edit the whole thing and turn it into another multi-part post.
So I think some important feedback here... While it is important to know the how, that is the content we normally get and Help is most helful on. It's the when and why that seems to be hard to come by. Think of it more as a process white paper that we would like to see. When and why would we use this tool, how does it interact with all the other tools to fulfill the "why"?
Good point, Ken. That helps focus on what the actual need is. Thanks again.
We were planning on showing this at the Productivity Summits around the country. My idea of how this could be used is for planning new products. If you create a virtual assembly you can create a report/BOM and copy it into a spread sheet. Then you could add cost information to that to figure out how much it could cost to produce. You could group together the purchased parts to get the purchasing folks to get estimates on that. You could plan which design groups can design each piece. Also if you added assemblies that you know you have already created you would know which areas you don't have to work on. You can also give part numbers to all the virtual components if you have a block of numbers to use. All this would be done before you create any new parts. Once the product has been costed out and you get approval to go forward then you can publish it and it would create all the parts for you. Then you could start concurrent design by design teams accessing the parts and assemblies they have been assigned to work on.
That is my scenario. Does that make sense?
Barry
I really think this needs to be approached form a process of early 2D conceptual/layout work that then transforms into 3D parts with the 2D layouts still being used to initiate revisions to the then existing 3D parts.
First, let me say how much I am diggin the dialog going on here on the community. Great to see the active participation rate rising. As far as some of the comments -- many of you have it right.
Barry has it right that its a great tool to create a "shopping list". In a larger project it can be very helpful to understand scope before you even get down to the details. This was certainly one of the goals.
Ken is right that where this gets really powerful is the 2D layout etc. You can solve a BUNCH of problems in 2D, but instead of having a big-ole-sketch, you actually have 2D PARTS. You don't have to worry about all those part files and stuff yet. Then, when the time is right you publish and all the parts get created on disk and all the sketches get moved down into them and its really pretty magical from a productivity standpoint -- but Matt will tell all y'all (back in the south!) about that shortly...
Yep, come to find out that "y'all" is singular and "all y'all" is plural...
Then there's the in between-er [for when you don't really know how many]... "suma ya'll"
But back to the post for a second...this is something I just never think of using, but seems like I really need to. Looking forward to the next installment, that will map it out a little clearer in practice.....THANKS Matt.
You must be a registered user to add a comment. If you've already registered, sign in. Otherwise, register and sign in. | https://community.plm.automation.siemens.com/t5/Solid-Edge-Blog/Virtual-Components-Part-1/ba-p/4107 | CC-MAIN-2019-22 | refinedweb | 1,848 | 70.94 |
LesHouchesFileReader is an base class to be used for objects which reads event files from matrix element generators.
More...
#include <LesHouchesFileReader.h>
LesHouchesFileReader is an base class to be used for objects which reads event files from matrix element generators.
It inherits from LesHouchesReader and extends it 39 of file LesHouchesFileReader.h.
Copy-constructor.
Note that a file which is opened in the object copied from will have to be reopened in this.
Make a simple clone of this object.
Implements ThePEG::InterfacedBase.
Reimplemented in ThePEG::MadGraphReader.
Referenced by filename()..
Referenced by LesHouchesFileReader().
Make a clone of this object, possibly modifying the cloned object to make it sane.
Reimplemented from ThePEG::InterfacedBase.
Initialize.
This function is called by the LesHouchesEventHandler to which this object is assigned.
Open a file with events.
Derived classes should overwrite it and first calling it before reading in the run information into the corresponding protected variables.
Function used to read in object persistently.
Function used to write out object persistently.
If LHF.
Map of attributes (name-value pairs) found in the last event tag.
Definition at line 228 of file LesHouchesFileReader.h.
Additional comments found with the last read event.
Definition at line 222 of file LesHouchesFileReader.h.
All lines from the header block.
Definition at line 206 of file LesHouchesFileReader.h.
Map of attributes (name-value pairs) found in the init tag.
Definition at line 217 of file LesHouchesFileReader.h.
Additional comments found in the init block.
Definition at line 211 of file LesHouchesFileReader.h.
If the file is a standard Les Houches formatted file (LHF) this is its version number.
If empty, this is not a Les Houches formatted file
Definition at line 195 of file LesHouchesFileReader.h.
All lines (since the last open() or readEvent()) outside the header, init and event tags.
Definition at line 201 of file LesHouchesFileReader.h. | https://thepeg.hepforge.org/doxygen/classThePEG_1_1LesHouchesFileReader.html | CC-MAIN-2018-39 | refinedweb | 308 | 61.22 |
AWS
AWS Fargate
Overview
AWS Fargate is a container execution service provided by AWS. AWS manages the complexity of running and scaling the underlying infrastructure (which happens to be AWS EC2 instances).
Why Fargate?
The AWS Lambda service is another way you can run code in the cloud in a serverless manner. Choose lambda if:
- You don’t need more than 500MB hard disk space
- You don’t need more than 3GB of memory
- Your program can finish within 15 minutes
- Your program does not need heavy compute power
Choose AWS Fargate if:
- You need more than 500MB disk space (you are only limited to what EC2 can provide).
- You need more than 3GB of memory (you are only limited to what EC2 can provide).
- Your program might take more than 15 minutes to complete. There is no runtime limitations with AWS Fargate.
- Your program needs heavy compute power. Fargate provides you with all the CPU options that EC2 can offer.
CloudFormation is a good example of setting up a Fargate service in CloudFormation.
Hello, World Example
Invoke Fargate Job
You can invoke Fargate jobs from within Python by using
boto3:
import boto3 client = boto3.client('ecs') response = client.run_task( cluster='fargatecluster', # name of the cluster launchType = 'FARGATE', taskDefinition='my-batch-job:1', # replace with your task definition name and revision count = 1, platformVersion='LATEST', networkConfiguration={ 'awsvpcConfiguration': { 'subnets': [ 'subnet-2ec3a94a', # replace with your public subnet or a private with NAT 'subnet-413a9c6e' # Second is optional, but good idea to have two ], 'assignPublicIp': 'DISABLED' } }) return str(response) | https://blog.mbedded.ninja/programming/cloud/aws/aws-fargate/ | CC-MAIN-2021-17 | refinedweb | 256 | 52.6 |
How can I detect programmatically whether the /3GB switch is enabled?
Raymond
A customer was doing some diagnostic work and wanted a way to detect
whether the
/3GB switch was enabled.
(Remember that the
/3GB switch is
meaningful only for 32-bit versions of Windows.)
The way to detect the setting is to call
GetSystemInfo and look at the
lpMaximumApplicationAddress.
#include <windows.h> #include <stdio.h> int __cdecl main(int, char **) { SYSTEM_INFO si; GetSystemInfo(&si); printf("%p", si.lpMaximumApplicationAddress); return 0; }
Compile this as a 32-bit program and run it.
On 32-bit systems, this reports the system-wide setting that
specifies the maximum user-mode address space,
regardless of how your application is marked.
Note, however, that your application must be marked
LARGEADDRESSAWARE
in order to take advantage of the space above 2GB.
On the other hand,
when you run a 32-bit application on 64-bit Windows,
it runs the application in an emulation layer.
Therefore, 64-bit Windows can give each application a different view of
the system.
In particular, depending on how your application is marked,
Windows can emulate a 32-bit system with or without the
/3GB switch enabled,
based on what the application prefers.
Armed with this knowledge, perhaps you can help this customer. Remember, you sometimes need to go beyond simply answering the question and actually solve the customer’s problem.
We would like to know how to detect from our 32-bit application whether the host operating system is 64-bit or 32-bit.
We need to know this because our program does some data processing, and we have to choose an appropriate algorithm. We have written one algorithm that is faster but uses 1½GB of address space, and we have also written a fallback algorithm that is slower but does not use anywhere near as much address space. When running on a native 32-bit system, there is typically not 1½GB of address space available, so we have to use the slow algorithm. But when running on a native 64-bit system (or a native 32-bit system with the
/3GBswitch enabled), our program can use the fast algorithm. Therefore, we would like to detect whether the native operating system is 64-bit so that we can decide whether to use the fast or slow algorithm.
Here’s another customer question you can now answer:
We have a 64-bit program, and since we know that Windows currently does not use the full 64-bit address space, we would like to steal the upper bits of the pointer to hold additional information: If there are at least 8 bits available, we can use a more efficient data format. Otherwise, we fall back to a less efficient format. How can we detect whether the upper 8 bits are being used for addressing?
Update: Clarified the table based on misunderstanding in comments. | https://devblogs.microsoft.com/oldnewthing/20141023-00/?p=43783 | CC-MAIN-2021-49 | refinedweb | 480 | 60.75 |
We're making some changes at Activetuts+. From now on, our tutorials will be using class-based code, instead of timeline code, wherever possible. This Quick Tip explains what you'll need to know.
Why Use Class Files?
I'll admit it - sometimes, coding entirely on the timeline is useful. It's a quick way to test out an effect, and the easiest way to sync actions to specific frames of an animation.
But for any project that relies more on code than on animation, there are serious disadvantages. All your ActionScript is trapped inside the FLA file; you can't split the programming between different developers, you have to copy and paste code if you want to re-use it, and you're forced to use Flash's Actions Panel.
Using class files sets your code free. And it's really no harder than coding on the timeline; it just involves a little more setup. I'll walk you through creating a Flash project that uses classes, then break down a class file in detail.
(Part of the reason we're switching to classes is to make it easier for AS3 developers that don't use Flash itself to follow our tutorials. If you're one of them, I expect you're used to dealing with classes already, but you can always read this Quick Tip as a refresher - just ignore the bits about Flash!)
Step 1: Create a FLA
I'm sure you already know how to do this. Open Flash and click File > New ... Flash File (ActionScript 3.0). Save it wherever you like. I've called mine Example.fla, but it doesn't matter what name you choose.
Step 2: Create an ActionScript File
Click File > New ... ActionScript File. Save this as Main.as in the same folder as your FLA.
This file is where we're going to put the code that powers the FLA itself, but how will Flash know how to find it?
Step 3: Link the FLA to the AS File
You may have dozens of AS files in the same folder as the FLA, so Flash won't want to guess which one to use. We'll have to tell it.
Switch to the Selection tool (Shortcut: V), then make sure you have nothing selected (hit Ctrl-Shift-A). Open the Properties panel (Window > Properties).
If you're using Flash CS3, it'll look like this:
Enter Main in the box labeled "Document class" - that's to match the name of your ActionScript file, minus the ".as" file extension..
If you're using Flash CS4, it'll look like this:
In this case, you'll need to enter Main in the box labeled "Class". For some reason, Adobe dropped the "Document" bit.
Step 4: (Optional) Reorganize your Folder Structure
You don't have to keep all your files in the same directory. Check out this Quick Tip screencast if you'd like to know how to move things around.
Step 5: Write your Document Class
Open your Main.as file and paste the following code:
package { import flash.display.MovieClip; public class Main extends MovieClip { public function Main() { } } }
This is a basic empty document class. It's the smallest amount of code we can write that will actually run. Let me break it down:
The package keyword tells Flash that all of the code between its curly braces is part of a single group.
Writing class Main also groups code together, but in a different way. Classes contain functions and variables; packages contain classes and import statements.
Note: you have to give your class the same name as the AS file: Main.
What about public? Well, that just means that other classes in your code will be able to see this class.
This class Main is going to power our FLA. By default, our FLA is a movie clip (it has a timeline).
We want Main to be able to do everything that a movie clip can do, plus more based on the code that we write. In other words, we want to extend the functionality of a regular MovieClip.
(Sometimes, we might not need to do any animation on the stage's main timeline; in this case, we don't need to extend MovieClip, and we can just extend Sprite instead. MovieClip itself extends Sprite, but adds extra features for animation, like the nextFrame() function. So if you're not sure whether you should extend MovieClip or Sprite, go for MovieClip -- it's safer!)
MovieClip is itself a class.
Flash doesn't automatically keep track of where all its class files are stored; in order for our extends MovieClip code to work, we need to tell Flash where to find the MovieClip class. That's what the import line does.
Import statements always go inside the package and outside the class, at the top.
Every class contains a function with the same name as the class. It's called the constructor function.
All the code inside this function is run when an object of this type of class is created - in our case, code between these curly braces will be run when the SWF is loaded.
Don't worry if you feel you don't quite grasp all of this yet. Once you start actually using classes and writing your own, it'll all snap into place.
Step 6: Make it do Something
As I said in Step 5, the constructor function contains the very first code to be run when your SWF is loaded. So let's put something in there to make sure everything's working:
package { import flash.display.MovieClip; public class Main extends MovieClip { public function Main() { trace( "Yep, it's working" ); } } }
Line 8 is the only new one there. Test your SWF in the usual way (Control > Test Movie). If all's well, you should see "Yep, it's working" pop up in the Output panel. If not...
- Have you saved the change you made to Main.as?
- Is your FLA's Document Class set to Main?
- Are you definitely testing the Example.fla movie?
If none of these questions help, please post a comment.
Step 7: Try Something a Little More Complex
Try replacing your Main.as code with this:
package { import flash.display.MovieClip; public class Main extends MovieClip { public function Main() { var greeting:String = "Hello"; trace( greeting ); } } }
Simple, right? We've just created a new String variable inside the constructor function. Now let's add a new function:
package { import flash.display.MovieClip; public class Main extends MovieClip { public function Main() { var greeting:String = "Hello"; changeGreetingToFrench(); trace( greeting ); } public function changeGreetingToFrench():void { greeting = "Bonjour"; } } }
There are a few things to note here.
Firstly, the new function goes inside the class, and after the constructor - by convention, the constructor is the first function in the class.
Secondly, the new function is public; when coding inside a class (and not on the timeline) it's good practice to put "public" (or "private" or "protected", but I'll leave those for another post) at the start of the line that defines the function. It's just a way of letting other classes know whether or not they can access it.
Thirdly, the new function's definition ends with :void. This just means it doesn't return a value. Constructor functions don't need the :void because they can't return a value.
If you test this movie, you'll get an error message:
Main.as, Line 15: 1120: Access of undefined property greeting.
When you create a variable inside a function, it can't be accessed by other functions. If you want every function in the class to be able to access the variable, you need to declare it inside the class but outside all of the functions:
package { import flash.display.MovieClip; public class Main extends MovieClip { public var greeting:String = "Hello"; public function Main() { changeGreetingToFrench(); trace( greeting ); } public function changeGreetingToFrench():void { greeting = "Bonjour"; } } }
Just as with functions, if you declare a variable outside of a function, you need to start it with "public" (or "private" or "protected"). Unlike functions, variables should be defined above the constructor.
Test your movie now and you should finally get a nice greeting in French. How useful!
Wrapping Up
So, this is not exactly an exciting result, but hopefully you now feel able to follow tutorials that don't code on the timeline.
I really want to make sure everyone understands how to use a document class, so please, if any of this was unclear, post a note in the comments. Once we've sorted out the confusion, I'll edit the Quick Tip to make it easier for the next person to understand. Thanks :)
Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
| https://code.tutsplus.com/tutorials/how-to-use-a-document-class-in-flash--active-3233 | CC-MAIN-2017-13 | refinedweb | 1,477 | 73.07 |
{-# OPTIONS_GHC -fno-warn-name-shadowing #-} {-# LANGUAGE CPP, DeriveDataTypeable #-} #if __GLASGOW_HASKELL__ >= 701 {-# LANGUAGE Trustworthy #-} #endif ----------------------------------------------------------------------------- -- | -- Module : Control.Concurrent.STM.TChan -- Copyright : (c) The University of Glasgow 2004 -- License : BSD-style (see the file libraries/base/LICENSE) -- -- Maintainer : [email protected] -- Stability : experimental -- Portability : non-portable (requires STM) -- -- TChan: Transactional channels -- (GHC only) -- ----------------------------------------------------------------------------- module Control.Concurrent.STM.TChan ( #ifdef __GLASGOW_HASKELL__ -- * TChans TChan, -- ** Construction newTChan, newTChanIO, newBroadcastTChan, newBroadcastTChanIO, dupTChan, -- ** Reading and writing readTChan, tryReadTChan, peekTChan, tryPeekTChan, writeTChan, unGetTChan, isEmptyTChan, cloneTChan #endif ) where #ifdef __GLASGOW_HASKELL__ import GHC.Conc import Data.Typeable (Typeable) #define _UPK_(x) {-# UNPACK #-} !(x) -- | 'TChan' is an abstract type representing an unbounded FIFO channel. data TChan a = TChan _UPK_(TVar (TVarList a)) _UPK_(TVar (TVarList a)) deriving (Eq, Typeable) type TVarList a = TVar (TList a) data TList a = TNil | TCons a _UPK_(TVarList a) -- |Build and return a new instance of 'TChan' newTChan :: STM (TChan a) newTChan = do hole <- newTVar TNil read <- newTVar hole write <- newTVar hole return (TChan read write) -- |@IO@ version of 'newTChan'. This is useful for creating top-level -- 'TChan's using 'System.IO.Unsafe.unsafePerformIO', because using -- 'atomically' inside 'System.IO.Unsafe.unsafePerformIO' isn't -- possible. newTChanIO :: IO (TChan a) newTChanIO = do hole <- newTVarIO TNil read <- newTVarIO hole write <- newTVarIO hole return (TChan read write) -- | :: STM (TChan a) newBroadcastTChan = do write_hole <- newTVar TNil read <- newTVar (error "reading from a TChan created by newBroadcastTChan; use dupTChan first") write <- newTVar write_hole return (TChan read write) -- | @IO@ version of 'newBroadcastTChan'. newBroadcastTChanIO :: IO (TChan a) newBroadcastTChanIO = do dummy_hole <- newTVarIO TNil write_hole <- newTVarIO TNil read <- newTVarIO dummy_hole write <- newTVarIO write_hole return (TChan read write) -- |Write a value to a 'TChan'. the next value from the 'TChan'. readTChan :: TChan a -> STM a readTChan (TChan read _write) = do listhead <- readTVar read head <- readTVar listhead case head of TNil -> retry TCons a tail -> do writeTVar read tail return a -- | A version of 'readTChan' which does not retry. Instead it -- returns @Nothing@ if no value is available. tryReadTChan :: TChan a -> STM (Maybe a) tryReadTChan (TChan read _write) = do listhead <- readTVar read head <- readTVar listhead case head of TNil -> return Nothing TCons a tl -> do writeTVar read tl return (Just a) -- | Get the next value from the @TChan@ without removing it, -- retrying if the channel is empty. peekTChan :: TChan a -> STM a peekTChan (TChan read _write) = do listhead <- readTVar read head <- readTVar listhead case head of TNil -> retry TCons a _ -> return a -- | A version of 'peekTChan' which does not retry. Instead it -- returns @Nothing@ if no value is available. tryPeekTChan :: TChan a -> STM (Maybe a) tryPeekTChan (TChan read _write) = do listhead <- readTVar read head <- readTVar listhead case head of TNil -> return Nothing TCons a _ -> return (Just a) -- |Duplicate a 'TChan': the duplicate channel begins empty, but data written to -- either channel from then on will be available from both. Hence this creates -- a kind of broadcast channel, where data written by anyone is seen by -- everyone else. dupTChan :: TChan a -> STM (TChan a) dupTChan (TChan _read write) = do hole <- readTVar write new_read <- newTVar hole return (TChan new_read write) -- |Put a data item back onto a channel, where it will be the next item read. unGetTChan :: TChan a -> a -> STM () unGetTChan (TChan read _write) a = do listhead <- readTVar read newhead <- newTVar (TCons a listhead) writeTVar read newhead -- |Returns 'True' if the supplied 'TChan' is empty. isEmptyTChan :: TChan a -> STM Bool isEmptyTChan (TChan read _write) = do listhead <- readTVar read head <- readTVar listhead case head of TNil -> return True TCons _ _ -> return False -- |Clone a 'TChan': similar to dupTChan, but the cloned channel starts with the -- same content available as the original channel. cloneTChan :: TChan a -> STM (TChan a) cloneTChan (TChan read write) = do readpos <- readTVar read new_read <- newTVar readpos return (TChan new_read write) #endif | http://hackage.haskell.org/package/stm-2.4.2/docs/src/Control-Concurrent-STM-TChan.html | CC-MAIN-2014-35 | refinedweb | 618 | 56.08 |
Vue 3 is scheduled to release in Q2 of 2020, and it comes with a host of new features. One of the biggest features in Vue 3’s release is Vue’s new Composition API. As developers begin building more large-scale projects with Vue, some of the underlying challenges of code organization, readability, and reusability have become more apparent.
The goal of the Composition API, which is inspired by React Hooks, is to simplify your components, improving readability and organization, and make it easier to reuse code throughout your application.
The Composition API will not only allow developers to use it over component options, but it also adds in composition functions, which can be used over Vue’s other methods of code reusability (e.g., mixins and scoped slots). In this article, we’ll outline the drawbacks that Vue 2 had in terms of reusability, readability and organization, and highlight the ways that the Composition API addresses those issues.
Wijmo’s Vue 3 Plans: The Wijmo team is working hard to make sure that Wijmo’s controls are ready for Vue 3 on launch. Wijmo is dedicated to providing powerful components, such as FlexGrid, to help Vue developers build their applications with ease.
Installing the Composition API
Though the Composition API is a part of Vue 3, it has been made available for Vue 2 as well. You can add the Composition API to your project by running the following command:
npm install @vue/composition-api --save
Then, you’ll import the file inside of your main.js file:
import VueCompositionAPI from “@vue/compostion-api”;
Vue.use(VueCompositionAPI);
Your project will now be able to utilize Vue 3’s Composition API.
Code Organization and Readability
In Vue 2, features of components were organized using component options. These options include things like data, method, etc.; in total, there are six component options, meaning that your component’s logic can be split across up to six sections of your component.
As your components grow, this fracturing of code within the components will not only make them harder to organize, but it will also make it harder for other developers that are reading through the code to understand the flow of logic within the component.
Vue 3’s Composition API gets rid of component options and replaces them with its setup() function. The setup() function gives developers more control over how they organize their code base, allowing it to be organized by logic and functionality instead of being restricted by the component options.
>
As we see above, with just a few methods, this doesn’t look too bad. But as we add more functionality to the component, the logic will become more and more fragmented across the component options.
() method. Now you’re able to structure your components in ways that best fit your logic and functionality, instead of being restricted by the component options.
This will make it easier for both you as well as other developers to read the code and understand the logic behind the component. This is especially useful as components grow larger and encompass more logic.
Developers are not required to use the Composition API, either; Vue 3 continues to support component options. If you’re more comfortable using the component options model, you’ll be able to use Vue 3 while maintaining the same component structure.
Code Reusability
The Composition API aims to tackle another issue that Vue 2 had: code reusability. Two ways that Vue 2 allowed you to reuse code was through mixins and scoped slots. However, both methods have drawbacks.
Mixins
Mixins are designed to let components share properties and methods between one another. Say we want to share the properties and methods of our base counter with another counter. We would need to create a mixin of our base counter that can be consumed by the new counter.
const counterMixin = { data: function() { return { count: 0 } }, methods: { countIncrement() { this.count++; }, clearCount() { this.count = 0; }, countByFive() { this.count += 5; }, } } export default { mixins: [counterMixin] }
Here, we create our mixin and allow it to be exported. It can then be imported to any component that needs to consume it by adding it to the mixin configuration property. However, one of the issues that mixins have is that they are prone to conflict.
If a property name in the mixin matches a property name of the component consuming it, the component will prioritize keeping the value of its property and will discard the property of the mixin.
Mixins also lack flexibility; the methods are restricted to their base functionality and cannot be configured to function based on the needs of the component consuming it.
Scoped Slots
Vue 2 also offered scoped slots as a way for developers to reuse code. While standard slots allowed you to pass a rendered element, scoped slots allowed you to create custom templates, which the scoped slots then consume to render the template in the parent element.
Here, we’ll show how to display users’ names along with their associated count value:
<template> <div> <slot v- <!-- fallback content here --> </slot> </div> </template> <script> export default { props: { items: { type: Array, default: () => [] } } }; </script>
This template binds the items attribute to the slot element, making it available to the parent element. That will allow us to modify this template when it is consumed.
Here, we have no fallback content when data isn’t provided to the template; typically, you’ll have something set up to display some fallback data.
<template> <div id="app"> <div class="list-title">Counter List</div> <List : <div slot- <span>Name:</span><span>Count:</span> </div> </List> </div> </div> </template> <script> import List from "./components/List.vue"; export default { components: { List }, data() { return { listItems: [ { name: "Bill", count: "6" }, { name: "Susan", count: "13" }, { name: "Jim", count: "2" }, ] }; } }; </script>
You can see how we can use scoped slots to render templates, as well as modify the templates to render the content that we need to display. While this addresses some of the issues that mixins had regarding flexibility, scoped slots still have their own limitations.
With more complex templates comes more indentation, which makes code less readable, and scoped slots also require a lot of configurations to be set up. Scoped slots also break up code across multiple components, which means that there is a performance cost as well.
Composition Functions
Vue 3’s Composition API brings composition functions to the table as another way of tackling code reusability. Composition functions accept configurations, making them easy to customize and use across multiple components.
First, we need to define the composition functions that we will use. In this case, we’ll create generic count increment and count reset methods:
export default function useIncrement({ increment }) {}
export default function useReset({ value }) {}
We’ll then need to import these functions into our component. After importing them, we can pass them the configurations that we want to use for them, and then give our component access to these functions by returning them in our setup() method:
<script> import { ref } from "@vue/composition-api"; import useIncrement from "./components/countIncrements"; import useReset from "./components/countReset"; export default { setup() { const count = ref(0); const countIncrement = useIncrement( /* configurations */ ); const countReset = useReset( /* configurations */ ); return { count, countIncrement, countReset }; } } </script>
By setting up these composition functions, we can now use them across our application, and customize them to fit our needs. Maybe we don’t want the counter to reset to 0, and instead reset to 10; we would just need to pass a different value into the function’s configuration, and it will now behave as needed.
Composition functions cut out a lot of excess code, allowing for more readable code. Configurations allow you to customize them across your application, making your code more reusable; and because they are functions, they are supported by Intellisense.
Closing Notes
The Composition API aims to address both code reusability and code readability in Vue 3. By replacing component options with the API’s setup() method, the Composition API allows developers to organize their components by logic and functionality, instead of being restricted by organizing it through component options.
Composition functions allow us to create easily reusable functions, which can accept configurations that customize it to fit the needs of the component consuming it.
Try Wijmo's Vue UI library Download the latest version of Wijmo
Try Wijmo's Vue UI library
Download the latest version of WijmoDownload Now! | https://www.grapecity.com/blogs/getting-started-with-vue-3-composition-api?utm_source=CooperPress&utm_medium=Dynamic&utm_campaign=FrontEndFocus_Q3_placement_08/2020 | CC-MAIN-2020-40 | refinedweb | 1,401 | 50.46 |
A shalfloop is a great circle on a sphere map.
Figure
depicts the relationship between a shalfloop and its incident
shalfloops, and sfaces on a sphere map. A shalfloop is
an oriented sloop. It is always paired with a
shalfloop whose supporting Sphere_circle is pointing in
the opposite direction. The twin() member function returns
this shalfloop of opposite orientation.
A sphere map having a shalfloop models the neighborhood of a vertex which is isolated on a facet. That facet is returned by the member function facet.
#include <CGAL/Nef_polyhedron_3.h>
The following types are the same as in Nef_polyhedron_3<Traits>.
There is no need for a user to create a SHalfloop explicitly. The class Nef_polyhedron_3<Traits> manages the needed shalfloops internally.
CGAL::Nef_polyhedron_3<Traits>::Halffacet
CGAL::Nef_polyhedron_3<Traits>::SFace
CGAL::Nef_polyhedron_S2<Traits>::Sphere_point | http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Nef_3_ref/Class_Nef_polyhedron_3-Traits---SHalfloop.html | crawl-001 | refinedweb | 133 | 51.95 |
1. Sending Mail
The
smtplib module provides a quick and easy way
to send e-mail messages.
This module contains a class called
SMTP
that initiates a connection to a mail server.
In the CS labs, you can use
mail.eecs.berkeley.edu.
Elsewhere, you will have to choose a server that's willing
to deliver mail for you (such as the one provided by your ISP).
Instances of the
SMTP class provide a
sendmail() method, which you can call with three arguments:
the sender's address, the receiver's address, and the message.
The message should be a multi-line string with the headers at the beginning
(such as
Date: or
Subject:),
followed by a blank line,
and then the body of the message.
When you're done with a connection, call its
close() method.
>>>>> >>> import smtplib >>> s = smtplib.SMTP('mail.eecs.berkeley.edu') >>> s.sendmail('[email protected]', '[email protected]', message) {} >>> s.close() >>>
Q1. Send a message to yourself – any message you like.
2. Fetching Pages
The
urllib module
lets you easily download Web pages (or anything else from the Web).
Calling
urllib.urlopen() on a URL
returns a file-like object (sometimes we call this a stream).
You can read all the data by calling
read() on this stream,
read the data one line at a time by calling
readline(),
or read a list of all the lines of the file with
readlines().
>>> import urllib >>> stream = urllib.urlopen('') >>> print stream.read() <HTML> <HEAD> <TITLE>Example Web Page < need to find out other information
about the downloaded file,
the received headers are stored in a dictionary
in
stream.headers.
>>> stream.headers.keys() ['last-modified', 'content-length', 'etag', 'date', 'accept-ranges', 'content-type', 'connection', 'server'] >>> stream.headers['content-type'] 'text/html' >>> stream.headers['last-modified'] 'Wed, 08 Jan 2003 23:11:55 GMT' >>>
Q2.
Get the top headline from Google News.
If you download,
it turns out that each headline is on a line
that contains the string
class=y .
Just look for that and print the first such line.
The
find() method on strings will come in handy.
You can use the regular expression
<[^>]*>
to match HTML tags and remove them.
Q3. Put these two pieces together to get a script that will e-mail you the top headline from Google News.
3. Operating System Functions
The
os module
provides all the system and file functions you might expect.
Commonly used functions from this module include:
listdir(path): return a list of the files in a directory
chdir(path): change the current directory
mkdir(path): create a new directory
rmdir(path): remove a directory
rename(old, new): rename (or move) a file
unlink(path): unlink (delete) a file
system(command): run a system command
Unix-heads will also recognize functions such as
fork(),
waitpid(pid, options),
link(source, dest),
symlink(source, dest),
readlink(path),
popen(command, mode),
and
kill(pid).
Look at the list of functions in
help(os) for more.
If you don't know what all of these do, that's okay. I'm just mentioning them here so you know they're available, and you can find them if you need to use them.
The
os module also contains a submodule,
os.path, specifically for manipulating paths.
Here are some useful functions in
os.path:
exists(path): test whether a path exists
isdir(path): test whether a path is a directory
isfile(path): test whether a path is an ordinary file
getatime(path): get access time of a file
getmtime(path): get modification time of a file
getsize(path): get size of a file
There are more; again, look at
help(os.path)
for anything having to do with file paths.
4. Class Objects
Everything in Python is an object. Certain other languages like to say this, but they don't really mean it. In Python, we really mean it. Numbers, strings, lists, dictionaries, and instances are objects; but functions, modules, classes, and methods are objects too.
Classes have their own namespaces,
just as modules and instances do.
Last time, we just talked about defining methods in classes.
But you can write any statements inside a class, if you want.
What really happens is that
the whole block inside the
class keyword
just gets executed inside the namespace of the class.
Any definitions or assignments are left in that namespace.
>>> class Foo: ... a = 3 ... def noogie(self): ... return self.a ... >>> Foo <class __main__.Foo at 0x81532b4> >>> Foo.a 3 >>> Foo.b = 4 >>> Foo.b 4 >>>
Try creating an instance of
Foo.
Call its
noogie() method.
What happens? Do you understand why?
Now try setting
Foo.a to some other number.
What happens when you call the
noogie() method
of the instance you just created?
Now create a second instance of
Foo.
Try setting the
a attribute directly on that instance.
Now what happens to the
a attribute of the first instance,
or of the class?
Explore the behaviour of the class and instances
by setting and checking their attributes
until you feel you understand how they work.
(Keep in mind that you can also remove assignments
using
del, as in
del Foo.a,
if that helps to clear things up as you experiment.)
Q4.
Draw a picture showing the relationships
between the
Foo class
and the two instances you just created,
showing how the value of the
a attribute is determined.
Call me over and show me your picture.
5. Functions and Methods
Define a function outside the class, such as this one:
>>> def ribbit(self, z): ... print self, z ... >>>
Try assigning this function to an attribute of a
Foo instance.
What happens when you try to call it on the instance?
What happens when you try to call it on the class?
Try assigning this function to an attribute of the
Foo class.
What happens when you try to call it on the class?
What happens when you try to call it on a previously-created instance?
What happens when you try to call it on a newly-created instance?
See if you can figure out what is happening. Ask me if you are confused.
Q5. Draw a picture showing how the class and instance behave when functions are assigned to their attributes, and showing what happens when you retrieve attributes that refer to functions. Call me over and show me your picture.
6. Inheritance
You can define a class that is derived from another class by giving the other class name in parentheses, like this:
>>> class Wong(Foo): ... def meow(self): ... return self.noogie() + 1 ... >>>
The result is a class with two callable methods:
noogie() and
meow().
Try creating an instance and calling its
meow() method.
You can test that this instance is an instance of the
Wong class by asking
isinstance(x, Wong)
(assuming your instance is called
x).
Notice that
isinstance(x, Foo) is also true,
because
Wong is a subclass of
Foo.
This is also the preferred way to check types;
for example,
isinstance(3, int) is true.
What happens if you set the
a attribute
on the
Wong class and call the
instance's
meow() method again?
How about if you set the
a attribute
on the
Foo class?
Q6.
Explain what happens when you call
meow(),
and explain which value of
a affects the result.
Now try this definition:
>>> class Wing(Wong): ... def noogie(self): ... return self.meow() + 1 ... >>>
What do you think will happen
when you call
noogie() on an instance of
Wing?
Try it.
If you want a method to call another method of a specific class, you call the method on the class, passing the instance as the first argument.
Q7.
What one change could you make, among the definitions of
Foo,
Wong, and
Wing,
that would cause a call to
noogie() on an
instance of
Wing to add 1 to
Foo.a twice,
returning 5?
7. The Exception Hierarchy
The exceptions are actually classes, arranged in an inheritance hierarchy to represent how some exceptions are subcategories of others.
You can see this hierarchy by doing
import exceptions
and then asking for
help(exceptions). Try it.
Here are some of the most common exceptions used by Python programmers (as opposed to exceptions raised by Python itself):
TypeError: an argument had the wrong type, or the wrong number of arguments was given
ValueError: an argument had a bad value
IOError: something went wrong with reading or writing (e.g. a file is missing, corrupted, etc.)
RuntimeError: something went wrong inside the program (e.g. an internal variable somehow ended up with a bad value); this usually means there's a bug
When you catch exceptions using
except,
you catch not only exceptions that are exactly of the class you
specify, but also its subclasses.
8. Custom Exceptions
You can create your own classes of exceptions, to represent exceptional conditions more specifically than with the existing categories.
When you do this, you should define your exception
as a subclass of an appropriate existing exception.
If none of the specific errors is appropriate,
you can make your exception a subclass of
StandardError, the superclass of all errors.
When you raise an exception,
you can create an instance of the exception and raise that.
You can define your exception to take whatever arguments it likes,
in order to record what caused the problem;
your exception class should then define a
__str__ method
to produce the message that gets displayed.
Q8.
Define a custom exception that you might throw
to indicate that someone tried to create a
Date
using a string in the wrong format.
Here's the seventh assignment.
If you have any questions about these exercises or the
assignment, feel free to send me e-mail at
bc
zesty
ca. | http://zesty.ca/bc/explore-07.html | CC-MAIN-2017-13 | refinedweb | 1,623 | 73.37 |
Namespaces overview (LINQ to XML)
This article introduces XML names, XML namespaces, XML namespace prefixes, and the XName and XNamespace classes.
XML names are often a source of complexity in XML programming. An XML name consists of an XML namespace (also called an XML namespace URI) and a local name. An XML namespace is similar to a namespace in a .NET program. It enables you to uniquely qualify the names of elements and attributes to avoid name conflicts between various parts of an XML document. When you've declared an XML namespace, you can select a local name that only has to be unique within that namespace.
Another aspect of XML names is XML namespace prefixes, which cause most of the complexity of XML names. These prefixes enable you to create a shortcut for an XML namespace, which makes the XML document more concise and understandable. However, the meaning of an XML prefix depends on context, which adds complexity. For example, the XML prefix
aw could be associated with one XML namespace in part of an XML tree, and with a different namespace in another part.
One of the advantages of using LINQ to XML with C# is that you don't have to use XML prefixes. When LINQ to XML loads or parses an XML document, each XML prefix is resolved to its corresponding XML namespace. After that, when you work with a document that uses namespaces, you almost always access the namespaces through the namespace URI, and not through the namespace prefix. When developers work with XML names in LINQ to XML they always work with a fully-qualified XML name (that is, an XML namespace and a local name). However, LINQ to XML does allow you to work with and control namespace prefixes as needed.
When using LINQ to XML with Visual Basic and XML literals, you must use namespace prefixes when working with documents in namespaces.
In LINQ to XML, the class that represents XML names is XName. XML names appear frequently throughout the LINQ to XML API, and wherever an XML name is required, you will find an XName parameter. However, you rarely work directly with an XName. XName contains an implicit conversion from string.
For more information, see XNamespace and XName. | https://docs.microsoft.com/en-us/dotnet/standard/linq/namespaces-overview | CC-MAIN-2021-04 | refinedweb | 377 | 61.87 |
Next: Creating a project, Previous: EDE Mode, Up: Top [Contents]
Once you have EDE enabled, you can create a project. This chapter provides an example C++ project that will create Automake files for compilation.
First, lets create a directory for our project. For this example, we’ll start with something in /tmp.
C-x C-f /tmp/myproject/README RET M-x make-directory RET RET
Now put some plain text in your README file to start.
Now, lets create the project:
M-x ede-new RET Automake RET myproject RET
Nothing visible happened, but if you use
dired to look at the
directory, you should see this:
/tmp/myproject: total used in directory 32 available 166643476 drwxr-xr-x 2 zappo users 4096 2012-02-23 22:10 . drwxrwxrwt 73 root root 20480 2012-02-23 22:10 .. -rw-r--r-- 1 zappo users 195 2012-02-23 22:10 Project.ede -rw-r--r-- 1 zappo users 10 2012-02-23 22:09 README
We’ll make a more complex project, so use dired to create some more directories using the + key, and typing in new directories:
+ include RET + src RET
Now I’ll short-cut in this tutorial. Create the following files:
include/myproj.hh
/** myproj.hh --- */ #ifndef myproj_hh #define myproj_hh 1 #define IMPORTANT_MACRO 1 int my_lib_function(); #endif // myproj_hh
src/main.cpp
/** main.cpp --- */ #include <iostream> #include "myproj.hh" int main() { } #ifdef IMPORTANT_MACRO int my_fcn() { } #endif
src/mylib.cpp
/** mylib.cpp --- * * Shared Library to build */ int my_lib_function() { }
EDE needs subdirectories to also have projects in them. You can now create those projects.
With main.cpp as your current buffer, type:
M-x ede-new RET Automake RET src RET
and in myproj.hh as your current buffer, type:
M-x ede-new RET Automake RET include RET
These steps effectively only create the Project.ede file in which you will start adding targets.
In order to build a program, you must have targets in your EDE
Projects. You can create targets either from a buffer, or from a
dired directory buffer.
Note: If for some reason a directory list buffer, or file does not have the ‘Project’ menu item, or if EDE keybindings don’t work, just use M-x revert-buffer RET to force a refresh. Sometimes creating a new project doesn’t restart buffers correctly.
Lets start with the header file. In include/myproj.hh, you could use the menu, but we will now start using the EDE command prefix which is C-c ..
C-c . t includes RET miscellaneous RET y
This creates a misc target for holding your includes, and then adds myproj.hh to the target. Automake (the tool) has better ways to do this, but for this project, it is sufficient.
Next, visit the src directory using dired. There should be a ‘Project’ menu. You can create a new target with
. t myprogram RET program RET
Note that . t is a command for creating a target. This command is also in the menu. This will create a target that will build a program. If you want, visit Project.ede to see the structure built so far.
Next, place the cursor on main.cpp, and use . a to add that file to your target.
. a myprogram RET
Note that these prompts often have completion, so you can just press TAB to complete the name myprogram.
If you had many files to add to the same target, you could mark them all in your dired buffer, and add them all at the same time.
Next, do the same for the library by placing the cursor on mylib.cpp.
. t mylib RET sharedobject RET . a mylib RET
Next, we’ll try to compile the project, but we aren’t done yet, so it won’t work right away.
Visit /tmp/myproject/Project.ede. We’re starting here because we don’t have any program files in this directory yet. Now we can use the compile command:
C-c . C
Because this is the very first time, it will create a bunch of files for you that are required by Automake. It will then use automake to build the support infrastructure it needs. This step is skipped if you choose just a Makefile build system.
After the Automake init, it runs compile. You will immediately discover the error in main.cpp can’t find myproj.hh. We need to go fix this.
To fix the failed compile, we need to add /tmp/myproject/include to the include path.
Visit main.cpp.
M-x customize-project RET
Select the ‘[Settings]’ subgroup of options. Under ‘Variable :’ click ‘[INS]’. At this point, you need to be somewhat savvy with Automake. Add a variable named ‘CPPFLAGS’, and set the value to ‘../include’.
You should see something like this:
Variables : [INS] [DEL] Cons-cell: Name: AM_CPPFLAGS Value: -I../include [INS] Variables to set in this Makefile.
Click ‘[Apply]’. Feel free to visit Project.ede to see how it changed the config file.
Compile the whole project again with C-c . C from main.cpp. It should now compile.
Note: Supporting shared libraries for Automake in this way is easy, but doing so from a project of type Makefile is a bit tricky. If you are creating shared libraries too, stick to Automake projects.
Next, lets add a dependency from main.cpp on our shared library. To do that, update main like this:
int main() { my_lib_function(); }
Now compile with:
C-c . c
where the lower case c compiles just that target. You should see an error.
This time, we need to add a dependency from main.cpp on our shared library. To do that, we need to customize our target instead of the project. This is because variables such as the include path are treated globally, whereas dependencies for a target are target specific.
M-x customize-target RET
On the first page, you will see an Ldlibs-local section. Add mylib to it by first clicking ‘[INS]’, and they adding the library. It should look like this:
Ldlibs-Local : [INS] [DEL] Local Library: libmylib.la [INS] Libraries that are part of this project. [Hide Rest] The full path to these libraries should be specified, such as: ../lib/libMylib.la or ../ar/myArchive.a
You will also see other variables for library related flags and system libraries if you need them. Click ‘[Accept]’, and from main.cpp, again compile the whole project to force all dependent elements to compile:
C-c . C
You can run your program directly from EDE.
C-c . R RET RET
If your program takes command line arguments, you can type them in when it offers the command line you want to use to run your program.
Next: Creating a project, Previous: EDE Mode, Up: Top [Contents] | https://www.gnu.org/software/emacs/manual/html_node/ede/Quick-Start.html | CC-MAIN-2016-22 | refinedweb | 1,130 | 76.32 |
MATLAB Newsgroup
(Roozbeh))
Does anyone know how to solve this problem. Below are what I have done so far to explore/solve the problem:
- The jnilib file is already added to Matlab path (librarypath.txt). It is not the case that Matlab does not see the jnilib file. My other java classes that rely on other jni files in the same directory run successfully in Matlab.
- The problem happens with both 32 bit and 64 bit compilations of the jnilib file on both 32 bit and 64 bit Matlab. A mismatch between the jni library and Matlab is ruled out.
- Because my java code runs successfully outside Matlab the code in the HIDAPI library can not be at fault. The jnilib file and its related C/C++ functions are kosher.
- My latest hypothesis is that Matlab may be lacking some of the dynamic libraries that are called by the C/C++ functions. Running 'otool -L' on the jnilib produces the following report:
/System/Library/Frameworks/IOKit.framework/Versions/A/IOKit (compatibility version 1.0.0, current version 275.0.0)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 550.44.0)
/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
/usr/lib/libstdc++.6.dylib (compatibility version 7.0.0, current version 7.9.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 125.2.11)
I am not sure how to make Matlab see all of these libraries. I have added their paths to librarypath.txt but UnsatisfiedLinkError persists.
Any hypotheses, ideas or solutions? I will greatly appreciate any help.
"Roozbeh " <[email protected]> wrote in message <[email protected]>...
>)
[snip]
An UnsatisfiedLinkError exception is thrown if JNI encounters any error when loading a library file. This could range from file-not-found, to not being on the librarypath, to not being loadable as a dynamic library, and so on. It is therefore always prudent to test for this exception when loading native libraries.
Admittedly, all this is far from trivial. It gets even more complicated when we try to access Matlab native (dynamic library) functionality using JNI: The basic building block of JNI usage is the java.lang.System.loadLibrary(libName) method. Unfortunately, unlike almost any other Java method that can be tested from the Matlab Command Prompt, it appears that some internal bug or limitation in Matlab’s classloader prevents direct usage of System.loadLibrary from the Command Prompt. Instead, it can only be used from within Java code (i.e., a user-created Java class).
Therefore, to test our dynamic library, we need to create a simple Java class that does the actual loadLibrary, and then call that class from Matlab:
public class LoadLibrary
{
public static void loadLibrary(String s)
{
try {
System.loadLibrary(s);
} catch (UnsatisfiedLinkError error) {
System.out.println("Error loading library "+s);
}
}
}
We have several alternatives of specifying the librarypath in Matlab: we can set the LD_LIBRARY_PATH environment variable, or add a corresponding -Djava.library.path= directive to our java.opts file.
Alternatively, we could add a line in the librarypath.txt file, which is located in the %matlabroot%/toolbox/local/ folder. Type edit('librarypath.txt') at the Matlab Command Prompt to edit this file in the Matlab Editor, or use any external text editor for this. If we do not have administrator access to this file, then we can also place a copy of this file in our user's Matlab startup folder.
Once we have librarypath.txt set-up correctly and have restarted Matlab, we can now load our library in Matlab as follows:
javaaddpath('path-to-the-folder-that-contains-LoadLibrary.class');
LoadLibrary.loadLibrary('libMylib.so'); % or libMylib.dll in Windows
Using JNI in Matlab is described in detail in Section 9.5 of my Matlab-Java Programming book:
Yair Altman
Thanks Yair. The path to the jni file is already added to librarypath.txt (and is present in LD_LIBRARY_PATH). Also, I have already implemented LoadLibrary, following one of your earlier posts. I can soundly confirm that I can load the jni file in Matlab without invoking UnsatisfiedLinkError. The error seems to happen when I call the functions of the jni file. Because I can call those functions outside Matlab without invoking an error I assume that UnsatisfiedLinkError is caused because Matlab fails to load the dependencies of the jni file (i.e., the dynamic libraries it calls). How can I confirm it? Does Matlab load all dependencies of the jni file when it is loaded? I assume that Matlab loads them only when the related functions in the jni file are called. How can I ensure Matlab loads those files?
-roozbeh
The problem is solved by moving the java class that calls the functions of the jni library to the static java path of Matlab. Apparently, Matlab had a problem with calling the jni functions from the dynamic java path. I do not understand this oddity of Matlab but I am happy to report that the problem is solved.. | http://www.mathworks.com/matlabcentral/newsreader/view_thread/316596 | CC-MAIN-2015-22 | refinedweb | 849 | 50.73 |
[
]
Allen Wittenauer commented on HDFS-8003:
----------------------------------------
bq. However, since the saving namespace process can take minutes or even throw exception,
there is no way to guarantee the NN can correctly verify/do checkpoint before getting stopped.
You can always catch the exceptions and trap the signal from within the namenode.
In fact, given the above scenario...
bq. Instead if we add this functionality outside of NN (i.e., into the stopping NN shell),
we can make sure the checkpoint verification happens/finishes before stopping NameNode, and
the RPC timeout can provide a time bound of the operation.
... doing it in the shell program is even worse. If it gets an exception, it's just going
to assume that the previous command worked and continue on its way. The whole thing that
this hack is supposed to prevent is going to happen anyway, because there is no way within
the shell code that it can guarantee that the checkpoint is valid.
Here's a key question: if the checkpoint isn't valid, what is the shell code supposed to do
about it?
> hdfs has 3 new shellcheck warnings and the related code change is questionable
> ------------------------------------------------------------------------------
>
> Key: HDFS-8003
> URL:
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0
> Reporter: Allen Wittenauer
>
> HDFS-6353 introduced three new shell check warnings due to an unprotected ${HADOOP_OPTS}.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332) | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201503.mbox/%[email protected]%3E | CC-MAIN-2017-47 | refinedweb | 236 | 62.58 |
Minutes of XML Protocol Working Group telcon, 4 February 2004.
> 1. Roll call. Scribe for minutes selected from attached list. Actions to be
> recorded on IRC. (8.30PT + 5)
>
Present 14/11
Canon, Herve Ruellan
DaimlerChrysler R. & Tech, Mario Jeckle
IBM, David Fallside
IBM, John Ibbotson
IONA Technologies, Seumas Soltysik
Microsoft Corporation, Martin Gudgin
Nokia, Michael Mahan
SAP AG, Volker Wiechers
SeeBeyond, Pete Wenzel
Sun Microsystems, Marc Hadley
Sun Microsystems, Tony Graham (scribe)
Systinet, Jacek Kopecky
W3C, Yves Lafon
W3C, Carine Bournez
Excused
Canon, Jean-Jacques Moreau
DaimlerChrysler R. & Tech, Andreas Riegg
IBM, Noah Mendelsohn
IONA Technologies, Mike Greenberg
Microsoft Corporation, Jeff Schlimmer
SAP AG, Gerd Hoelzing
Systinet, Miroslav Simek
Regrets
BEA Systems, Mark Nottingham
BEA Systems, David Orchard
Oracle, Anish Karmarkar
Absent
Oracle, Jeff Mischkinsky
>
>
> 2. Agenda review, and AOB (8.35 + 5)
>
>
>
> 3. Approval of January 28 telcon minutes,
> (8.40 + 5)
Approved without objection.
> 4. Review action items, see,
> the status of each action listed here taken as correct and will not be
> mentioned, unless any WG member has a question. These action items are
> taken from the XMLP member page. Some AIs may be discussed under other
> agenda items.
>
> 2004/01/21: Gudge&MarkN
> Find out what was agreed regarding allowable children of xop:include
> PENDING
>
> 2004/01/28: Yves
> Change teleconference time (30mn earlier)
> DONE
>
> 2004/01/28: Yves
> Send email ot xmlp-comments to open an issue about extension definition for
> HTTP for the representation header
> PENDING
Done.
> 2004/01/28: Jacek
> Send closing email to xmlp-comments and originator to close issue 442
> noting we expect to handle extensions in separate issue and that media type
> is handled under issue 443
> DONE
>
> 2004/01/28: Noah
> Send email to open issue on SOAP processing model and and representation
> headers
> DONE ()
> 5. Status reports and misc (8.45 + 10)
> -- Placeholder for pending items
> o March 2004, f2f planning (possible items include: co-ordination mtg w/
> WSD WG re. description of attachments)
>
> -- Update on XMLP collaboration with other WGs on MTOM/XOP
> 2004/01/28: DavidF
> Compile technical reasons not to use XInclude
> PENDING, have collected some material sent me after last week's call, still
> to track down more references
DavidF hopes to complete some time this week.
> -- Media types registrations ("application/soap+xml", etc)
>
> -- XMLP/WSD Task Force
>
No reports.
>
>
> 6. Attachments (8.55 + 65)
> --, Ed Copy, 26 January
> 2004,
>
> . Awaiting publication of WD.
> o XML-binary Optimized Packaging, Ed Copy, 26 January 2004,
>. Awaiting
> publication of WD.
Yves: More approval is still required to be published. A few things to fix
before publishing. Hopes for reply with publication date this week.
> -- Meeting our requirements and use cases:
> See
>
> with regard meeting requirements. We will go through this report to check
> "Met" entries and to resolve the "Unsure" entries.
>
> 2004/01/28: MichaelM
> Go through the UC and send a report by monday feb 1
> DONE, see
>.
> We will go through this report in the same manner as for the requirements.
Use Cases by MichaelM:
UC2: Met.
UC6: Not met. MTOM 4.3.11 conflicts.
Jacek: Met by XOP. In scope for XOP, out of scope for MTOM http
binding.
MichaelM: Will look in XOP.
Agreed with objection that okay if handled by XOP but not by MTOM.
MichaelM to produce consolidated evaluation of requirements and use
cases.
UC9: Met.
UC10: Met.
UC11: Met.
Requirements:
R9: Met. Extensibility defined in a few places in MTOM.
DavidF: Not met.
MichealM: Issue 444 is extensibility point?
MichaelM: It's not explicit in spec, but requirements don't ask for
explicit.
MichaelM: Is R9 met or unmet?
Silence.
DavidF: We'll leave this open and come back to it.
R9 question to be added to issues list.
R15: Met.
R32: Met.
R18: Met.
R27: Unclear about "extent possible". Left as unsure.
DavidF: Fair, since security is TBD.
Gudge: We can secure optimized content using C14N, but would require
producing Base64 characters. Have left open option of creating
MTOM-aware C14N, but not planning on creating it.
DavidF: Will wait on met/unmet until security section is written.
R29: Met.
Agreement.
R33: Met.
Agreement without objection.
R22: Met in XOP.
Agreement without objection.
R36. Unsure about security section.
DavidF: Do we need anything written down about encryption?
Gudge: Relatively straightforward. Encrypted XML is Base64, which you
could optimize.
Gudge: Can also encrypt data which contains one or more xbinc:Include
elements.
DavidF: Will leave open pending 'security' section.
R37:
Gudge: Can do that.
Will be covered by security section.
Gudge to write 'Security' section by 9th February.
R1: Met.
R2: Partially met. MTOM only does Base64.
Jacek: Since even XML fragments can be serialised as binary data, we
have met that. Requirement doesn't say we have to handle e.g. XML
fragments natively.
MichaelM to change to 'met'.
R3: Met.
R4: Met.
DavidF: The binary form is space-efficient representation? Saying
preserves Base64-binary is not space-efficient.
Rationale will change. Requirement is still met.
R35: Met.
R5: Met.
R13: Met.
R26: Not sure.
Jacek: Does MIME support chunked encoding?
MarcH: MIME/interleaved does.
Gudge: Could do with multiple xbinc:Include siblings. Would have to
be sure all except last were integral number of 24 bits so no
padding. No problem with canonical Base64 is no whitespace between
include elememnts.
Gudge: There's no terminator for Base64, but get equals signs if not
integral number of 24 bits. Keeps decoding 24 bits at a time until
you hit an equals sign.
Jacek: Would have to specify xbinc:Include siblings trick in XOP.
DavidF: What were we thinking when we drafted this requirement?
MichaelM reads UC12.
DavidF: R26 is a hold-out from UC12, which is out of scope. Any
objections to removing R26?
Approved without objections.
R26 to be removed from editor's copy.
DavidF: Consolidated document about meeting requirements should
include rationales for met/unmet/unsure. Not necessary for ones
no-one asked for clarification.
R21: Met by MIME.
Gudge: And by the fact that we're using XML.
Jacek: Should also cover versions of MIME. Met because other versions
of MIME would be compatible or use a different MIME type.
R31: Met by MIME RFC 2045.
R30: Met by MIME headers.
R6: Met.
R7: Met but need to verify.
MarcH: XOP allows multiple references to a single attachment. You can
have attachments that aren't referenced. In HTTP binding, can have
attachments without reference. If remove reference, effectively
remove attachment.
Jacek: Requirement is more like shouldn't create URIs by counting
parts in the serialisation. By the way we create the identification
URIs, this requirement is met by the basic idea of the URI.
DavidF: How to create packaging section in XOP?
Jacek: Cid? If create URIs in own domain, no conflict, no necessity
to rename on addition or deletion.
MarcH: Can remove any given one without affecting the others.
DavidF: True as specified in HTTP binding.
Gudge: HTTP binding says must use CID?
DavidF: Don't know.
Jacek: Should think it's in XOP.
DavidF: Have we met this requirement?
Agreement without objection.
Jacek: When people are creating URIs, DNS handles conflicts. Since
CID URIs contain any other kind of URIs, would be stupid idea for an
implementor to always use one URI, e.g., if two parts are named the
same name.
DavidF: The process for creating a XOP package -- one part that has a
unique URI -- then we went to some trouble to avoid conflict.
MichaelM: XOP 4.21.
DavidF: Add issue on meeting R7.
R12: Met.
R34: Met.
R17: Met.
DavidF: Been through our evaluation of requirements and use case. Any
problem with any of the met?
The working group believes that it has met the requirements as listed
by MichaelM except for the ones called out for further work.
> -- Moving forward with the Primer.
> See comments from Jacek,
>. Are
> there other comments to make? Shall we tell Nilo to continue, do we need to
> make corrections, etc?
Jacek: The new parts mostly okay except for minor things. E.g. more
mentions of XOP and some editorial issues. Have not identified
anything substantial. Should ask Nilo to finish this with the current
status of MTOM and XOP. Probably needs another round before Last
Call.
MichaelM: Not as clear as it could be in some places. Will try to
raise comments on it.
DavidF: Looks like doing the right thing. Need to make sure everybody
reads it, probably before Last Call, and do review.
DavidF: Nilo left the group. His company is still W3C member. I
invited him "back" to work on the Primer. We decided to get Nilo to
continue as best way forward instead of new editor or new Attachment
primer. He will attend meetings if invited to discuss Primer.
> -- Issue discussion. We will initially make a a pass over this list to
> determine what needs to be done to resolve each issue.
> o 449,, multiple
> representations of a single resource. Is this issue not now resolved?
MarcH: Not sure if resolved.
DavidF: More intended by this issue?
Jacek: Decided to include some text about the representation header
being used multiple times in SOAP message. Until have that text, not
sure covered. Keep issue until have text.
DavidF: Should be easily resolvable after finished representation
header description?
Agreement without objection.
> o 454,, variability
> of encoding in Miffy
DavidF: Need more discussion?
DavidF: Still under discussion. Will mark as "further discussion".
> o 443,, Media-type
> information for binary data
Gudge: Have attribute for doing that in XOP. Say should copy value
into MIME part.
Jacek: Subject of joint task force with WSD WG?
DavidF: Weren't they defining the representation of the typing names?
Jacek: Don't know.
No-one from that group on the call.
Gudge: Text under issue says Jacek copied stuff out of PASWA as
approach to fixing this. Defining do media type in XOP, can describe
in schema. What else?
Jacek: Restrict to GIF, JPEG, and PNG?
Gudge: Use a pattern.
Jacek to contact Anish to see what WSD WG will provide w.r.t. issue
443.
> o 447,, recognizing
> and processing MTOM/Miffy in the HTTP Binding
Gudge: Still TBD.
DavidF: When have media type, will have answer? Actually have to have
media type?
Gudge: Asked IETF if can use media type for transform of media type.
Waiting for them to give feedback. Shouldn't wait much longer.
DavidF: Ask MarkN to give read on likely outcome of issue 447.
Action to Yves.
> o 448,,
> extensibility/versioning in mtom elements
Gudge: We have a very restricted usage and defining stuff in
completely new namespace for new stuff is reasonable approach.
MarcH: Is versioning and extensibility. Do we allow extensibility of
stuff we're creating? Do we allow attribute extensibility?
Gudge: Don't allow any extensibility in schema. Don't think yet what
to do about children of xbinc:Include.
MarcH: This issue is placeholder for that question.
Gudge: For WD, decided xbinc:Include doesn't have any children.
MarcH: For versioning, reinvent it. Extensibility unresolved.
DavidF: We need to do some work on this.
> o 450,, Referenced
> parts MUST include the 'Content-Transfer-Encoding' field and should include
> the 'Content-Length' field. Discussion started at
>
Gudge: Decided in BEA F2F.
DavidF: So a solution/response should exist for that.
Action to Gudge to look through F2F minutes and WD text for answer to
issue 450.
> o 451,, referencing
> mechanism via 'Content-Type' and 'http' URI
Gudge: Must use CID.
Action to Gudge to follow up on this.
> o 452,, Addressing
> the 'data:' URI directly in the MIFFY specification. Discussion started at
>
Gudge: Pretty sure decided 'no'.
MarcH: Sent something to SVG group about this? Sent question about
them planning on moving away from this approach?
DavidF: Need to send them an email about closing issue since email
unanswered.
Action to DavidF.
> o 453,, Packaging
> multipart payloads via use of the 'multipart/multiplexed' mechanism in RFC
> 3391 should be considered as valid MIFFY encoding mechanism. Discussion
> started at
>
Multipart/Interleaved Internet Draft has been deleted.
Action to MarcH to research issue 453.
> o 455,. SOAP
> Processing model and representation header.
Jacek: Arising from discussion a few weeks ago. Representation header
and MAY/SHOULD/MUST do or not do?
DavidF: Needs more discussion.
> o 456,. Constraints
> on node types.
Gudge: Agree with Herve's proposal. Just talk about type property of
ones we will optimise.
DavidF: Proposal to drop two sentences. On next week's agenda.
> o 444,, MTOM and
> Data Model Details.
DavidF: Sounds like action to give to Noah, but he's not here.
> 8. SOAP 1.2 Recommendation (postponed)
> -- Issues and proposed resolutions.
>
> o Issue 8,,
> proposal ( (Anish)
>
> o Issue 9,,
> proposal
> (MarcH)
>
> o Comments on proposed resolutions to issues 10 and 11, see
>
>
> - Issue 10,,
> proposal (Herve)
>
> - Issue 11,,
> proposal
> (MarkN)
>
> o Issue 18,,
> proposal
>
>
>
Meeting concluded at 6:14 PM. BST | http://www.w3.org/2000/xp/Group/4/02/04-minutes.html | CC-MAIN-2014-10 | refinedweb | 2,149 | 69.99 |
I really haven't *understood* kernel configutarion but i can tell you
what i do when i want to add a config option. I first edit arch/i386/config.in
and add a line that looks like
bool 'whatever explanation' CONFIG_WHATEVER default
this is supposed to mean that CONFIG_WHATEVER is a boolean taking values
y or n. When you 'make config' you'll get something like
'whatever explanation (CONFIG_WHATEVER) [default]'
and you type in y or n. Now this automagically #defines CONFIG_WHATEVER
in <linux/autoconf.h>. Code that is specefic to the configuration can now
be enclosed in #ifdef CONFIG_WHATEVER ... #endif so it will be compiled in
only when configured. If you want any more explanation than can be given
on one line, you can have a set of 'comment ...." lines before the
'bool ....' line and that will be displayed for you during configuration.
I don't know if you'll find it useful but still .....
Venkatesha Murthy ([email protected]) | http://www.tldp.org/LDP/khg/HyperNews/get/tour/tour/3/2.html | CC-MAIN-2014-52 | refinedweb | 163 | 58.79 |
>can u tell me what this does
I could but MSDN does a better job than I could.
Printable View
It is windows programming.
<time.h>
go here...
I can't believe you're asking this again.
Get a programming book and read it. You need major help. This question has been answered in multiple ways and you're still not happy. Maybe it's time you look into doing something else. How about starting with the hello world program? When you get that working then move onto something more complicated.
I'm sorry but I've lost my patience with you.
hey dumbass were u not one of those people who said that making a clock was impossible whithout rellying on the system time yes i remember.
hate smartasses like that
by the way it's pode
heres one of ure posts
///////////////////////////////////////////////////////////////////
:: shakes head ::
This again?
What exactly are you trying to do? And why do you refuse to use the functions in time.h?
I guess I fail to see how you're going to make a "clock" without relying on the system at some point to keep it consistent.
and a nother
/////////////////////////////////////////
#include <windows.h>
Then you can use Sleep(1000);
Here's the prototype:
VOID Sleep(DWORD dwMilliseconds)
But that still relies on the system clock.
__________________
-Challenge stupidity.
man you are really stupid u know that
so my advice to u would be challenge yourself
lol
get it?
so what have u got to say now then
idiot
lol
And just how did you make a clock without using the system? Enlighten me.
GL with a clock with out useing system time thats a bit hardcore for me to help u.
how is ur clock supposed to get the correct time to start out with? it cant just look on ur wall and find out thats why the system is neccisary, it keeps track of your time. I guess you could make it look online but that would be a crapy clock. good luck.
look at the beginning of the thread it's not my work but it shows that it's possible
oookkk.
i thought u wanted a real clock where like they open the program and bam the time.
that one takes input and increases it based on the seconds.
I assume you mean the post using the Sleep() function. Sleep() relies on the system clock to wait at least the amount of milliseconds passed to it (meaning it could take longer making a not so efficient clock). From your quotes of mine, that suggestion wasn't good enough for you before. So again I ask, how do you make a clock without using the system? I'm very curious.
Use winsock and connect to a time server? | http://cboard.cprogramming.com/cplusplus-programming/11340-i-want-make-clock-2-print.html | CC-MAIN-2014-23 | refinedweb | 465 | 83.56 |
10 minutes
how many threads are enough threads?
The other night there was a discussion about python multi-threading on the nornir channel on slack so I decided to do some benchmarks and explain a couple of things. I am by no means an expert on the topic, I mostly know enough to be scared about the topic and to test assumptions to avoid surprises. I am also going to try to simplify things a bit so apologies in advanced if something is slightly inaccurate. Feel free to let me know if you think something needs further explanation or clarification.
The first thing you need to know is what a thread is, according to the wikipedia “a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system”. The TL;DR; is that a thread is something you can put on a CPU core to be executed. Threads are somewhat expensive to create and manage as the OS needs to maintain several datastructures and run complex algorithms so an alternative to threads are coroutines. Coroutines offer similar functionality to OS threads but are managed by a runtime instead of by the operating system and are much more lightweight than OS threads. You probably heard about asyncio or golang’s goroutines, those are examples of coroutine implementations.
Second thing you need to know is you can only run as many threads concurrently as cores you have available (twice with technologies like hyperthreading), however, computers have mechanisms to put threads in an idle state while waiting for some event to occur. For instance, if a python program runs
time.sleep(1) it’s going to go into this state for 1 second, during that second the program won’t consume any CPU, and, when the time comes, the program will be woken up and resume operations. This same technique can be used when waiting for IO operations, for instance, when trying to read/write to disk or when waiting for the network to send you some information. Because those operations are several orders of magnitude slower than executing CPU instructions it is worth trying to parallelize as many of those operations as possible. If you have heard the term “IO-bound program”, this is a summary of what it means.
Testing assumptions
Now that we are experts on CPU design and have read all the research ever written around the topic of schedulers, let’s design a simple test; we are going to simulate an IO-bound application by pretending we are going to connect to 10.000 devices. The application won’t really connect to any device, instead it will just go to sleep for a given amount of time. This time we are sleeping should simulate RTT.
Note that this is a very simple test and doesn’t really consume the same resources that a program connecting to real devices would consume (sockets, file descriptors, etc), resources that would add up and could cause side-effects, specially if you run the code on a shared machine. Quoting your favorite physics teacher “this only works in the vacuum with no friction”
Some of the things I want to see with the tests we are going to perform is:
- How does RTT affect the execution of the program
- How many threads are worth creating given they are expensive to create and manage under different RTTs
- How helpful coroutines are, are they a fad or do they solve an actual problem?
Counting threads with nornir
To see how many is worth using when attempting to parallelize the connection to 10.000 devices using different RTTs we are going to use
nornir. A continuation you can see the script (note it’s using a beta version of nornir 3.0 so it might not work out of the box if you try to execute it with nornir 2.0, however, it shouldn’t affect performance):
import sys import time from nornir import InitNornir from nornir.core.inventory import Defaults, Groups, Hosts, Host, Inventory from nornir.core.plugins.inventory import InventoryPluginRegister from nornir.core.task import Task NUM_DEVICES = 10000 class TestInv: """ Fake inventory that generates hosts dynamically """ def load(self) -> Inventory: hosts = Hosts() for i in range(0, NUM_DEVICES): name = f"dev{i}" hosts[name] = Host(name) return Inventory( hosts=hosts, groups=Groups(), defaults=Defaults() ) def fake_task(task: Task, sleep_time: float) -> None: """ fake task that simulates RTT """ time.sleep(sleep_time) def main(num_workers: int, sleep_time: float) -> None: InventoryPluginRegister.register("test-inv", TestInv) nr = InitNornir( inventory={"plugin": "test-inv"}, core={"num_workers": num_workers}, ) nr.run(task=fake_task, sleep_time=sleep_time) if __name__ == "__main__": num_workers = int(sys.argv[1]) sleep_time = float(sys.argv[2]) main(num_workers, sleep_time)
Great, now let’s see the results of running this with different parameters. First, with an RTT of 50ms:
python script.py 100 0.05 0.78s user 0.29s system 19% cpu 5.532 total python script.py 200 0.05 0.79s user 0.34s system 39% cpu 2.854 total python script.py 500 0.05 0.65s user 0.37s system 73% cpu 1.389 total python script.py 1000 0.05 0.81s user 0.37s system 118% cpu 0.995 total python script.py 1500 0.05 0.73s user 0.48s system 125% cpu 0.969 total python script.py 2000 0.05 0.78s user 0.47s system 125% cpu 0.993 total python script.py 5000 0.05 0.78s user 0.47s system 126% cpu 0.987 total python script.py 10000 0.05 0.82s user 0.37s system 123% cpu 0.962 total
Now, with an RTT of 100ms:
python script.py 100 0.1 0.77s user 0.30s system 10% cpu 10.551 total python script.py 200 0.1 0.75s user 0.32s system 19% cpu 5.424 total python script.py 500 0.1 0.79s user 0.35s system 47% cpu 2.376 total python script.py 1000 0.1 0.82s user 0.35s system 84% cpu 1.391 total python script.py 1500 0.1 0.86s user 0.56s system 119% cpu 1.192 total python script.py 2000 0.1 0.89s user 0.62s system 128% cpu 1.177 total python script.py 5000 0.1 0.89s user 0.84s system 136% cpu 1.266 total python script.py 10000 0.1 1.08s user 0.74s system 140% cpu 1.292 total
A continuation with 300ms:
python script.py 100 0.3 0.82s user 0.24s system 3% cpu 31.016 total python script.py 200 0.3 0.74s user 0.27s system 6% cpu 15.381 total python script.py 500 0.3 0.75s user 0.30s system 16% cpu 6.360 total python script.py 1000 0.3 0.73s user 0.38s system 33% cpu 3.354 total python script.py 1500 0.3 0.82s user 0.42s system 50% cpu 2.460 total python script.py 2000 0.3 0.94s user 0.42s system 67% cpu 2.004 total python script.py 5000 0.3 1.15s user 1.28s system 154% cpu 1.575 total python script.py 10000 0.3 1.14s user 1.04s system 141% cpu 1.535 total
And finally, with an RTT of 1s, just because reasons:
python script.py 100 1 0.70s user 0.28s system 0% cpu 1:40.55 total python script.py 200 1 0.75s user 0.19s system 1% cpu 50.445 total python script.py 500 1 0.64s user 0.30s system 4% cpu 20.335 total python script.py 1000 1 0.77s user 0.28s system 10% cpu 10.360 total python script.py 1500 1 0.73s user 0.39s system 15% cpu 7.364 total python script.py 2000 1 0.86s user 0.37s system 22% cpu 5.507 total python script.py 5000 1 1.04s user 0.79s system 60% cpu 3.005 total python script.py 10000 1 1.43s user 1.11s system 97% cpu 2.598 total
As you can see latency has a huge impact. If latency is low (~50ms), the cost of creating a large amount of threads is relatively high compared to the time each thread is idle so going from 200 threads to 500 threads doesn’t gain you a lot but it increase CPU consumption by 34%. With a latency of 100ms you can see the same effect going from 500 to 1000 threads. With 300 ms of latency there isn’t a massive spike but you certainly don’t gain much beyond 1000 threads. As a bonus, with a fake RTT of 1s you can see the sweet spot is around 1000 threads too, however, CPU is proportionally lower to the RTT, which makes sense as you are doing the same work over a longer period of time.
Coroutines to the rescue
At the time of writing this post
nornir doesn’t have support for
asyncio (even though there has been some proposals and even some working code, if you are interested in seeing this happen reach out to me). Instead, we are going to use
gornir to perform the same tests as before but using coroutines instead (or
goroutines as they are called in
golang). First the code:
package main import ( "context" "flag" "fmt" "time" "github.com/nornir-automation/gornir/pkg/gornir" "github.com/nornir-automation/gornir/pkg/plugins/logger" "github.com/nornir-automation/gornir/pkg/plugins/runner" ) func FakeInv() gornir.Inventory { hosts := make(map[string]*gornir.Host) for i := 0; i < 10000; i++ { name := fmt.Sprintf("dev%d", i) hosts[name] = &gornir.Host{Hostname: name} } return gornir.Inventory{ Hosts: hosts, } } type fakeRTT struct { rtt time.Duration } func (t *fakeRTT) Metadata() *gornir.TaskMetadata { return nil } func (t *fakeRTT) Run(ctx context.Context, logger gornir.Logger, host *gornir.Host) (gornir.TaskInstanceResult, error) { time.Sleep(t.rtt) return nil, nil } func main() { rtt := flag.Duration("fake-rtt", time.Millisecond, "") flag.Parse() log := logger.NewLogrus(false) gr := gornir.New().WithInventory(FakeInv()).WithLogger(log).WithRunner(runner.Parallel()) _, err := gr.RunSync( context.Background(), &fakeRTT{rtt: *rtt}, ) if err != nil { log.Fatal(err) } }
Before moving forward, some explanations of how multi-threading/coroutines work here:
- For each device
gorniris going to create a coroutine
- The golang runtime is going to create as many threads as
GOMAXPROCSindicates, by default the number of cores. These threads will be used to run the scheduler, the garbage collector, each coroutine, etc…
First we need to compile it:
$ go build -o fakertt-test main.go
If you haven’t dealt with golang before, yes, it’s that easy :) Now let’s run it with the default number of threads for an RTT of 50ms:
./fakertt-test -fake-rtt 50ms 0.16s user 0.04s system 185% cpu 0.111 total
As you can see with the default number of threads (one per core) and using coroutines we managed to squeeze the CPU and execute the program in 111ms, barely more than twice the RTT we set. Let’s see with only one thread:
GOMAXPROCS=1 ./fakertt-test -fake-rtt 50ms 0.09s user 0.03s system 74% cpu 0.158 total
CPU is now down to 74% and the application took 158ms, not bad. Let’s now try with 100 threads:
GOMAXPROCS=100 ./fakertt-test -fake-rtt 50ms 0.15s user 0.12s system 187% cpu 0.139 total
Unsurprisingly, it took longer than using only one per core while consuming the same amount of CPU.
Let’s do similar tests with higher latency, now with 100ms:
./fakertt-test -fake-rtt 100ms 0.12s user 0.11s system 144% cpu 0.160 total GOMAXPROCS=1 ./fakertt-test -fake-rtt 100ms 0.10s user 0.03s system 62% cpu 0.208 total GOMAXPROCS=100 ./fakertt-test -fake-rtt 100ms 0.19s user 0.13s system 168% cpu 0.192 total
We got similar results, CPU went down and execution time went up proportionally to the increase in RTT. Now let’s try with 300ms of RTT:
./fakertt-test -fake-rtt 300ms 0.13s user 0.08s system 57% cpu 0.363 total GOMAXPROCS=1 ./fakertt-test -fake-rtt 300ms 0.10s user 0.02s system 29% cpu 0.425 total GOMAXPROCS=100 ./fakertt-test -fake-rtt 300ms 0.15s user 0.14s system 74% cpu 0.387 total
Which lead to similar results, and finally with 1s of RTT:
./fakertt-test -fake-rtt 1s 0.17s user 0.05s system 19% cpu 1.071 total GOMAXPROCS=1 ./fakertt-test -fake-rtt 1s 0.08s user 0.05s system 11% cpu 1.121 total GOMAXPROCS=100 ./fakertt-test -fake-rtt 1s 0.14s user 0.13s system 25% cpu 1.078 total
And again, consistent results.
It is worth noting that, as we saw in python (although several orders of magnitude different), the relative cost of creating threads diminishes as RTT increases. This makes sense as creating a thread is orders of magnitude faster than crossing the Atlantic. It is also worth noticing how CPU utilization goes down with RTT, which makes sense as well as the same amount of work is spread across a longer period of time.
It is also worth noting how efficient golang with its goroutines is. In all cases the software took barely a bit longer than the RTT to complete, squeezing the CPU as much as possible.
Summary
Threads are great for parallelizing work and you can certainly create more than CPU cores you have, specially for IO-bound applications, however, they are not free, they have a cost. Coroutines help immensely lowering this cost but, specially in programming languages like python, having access to coroutines isn’t trivial.
This post is not trying to convince you that using a high number of threads is bad, on the contrary, it’s trying to encourage you to understand how computers work, your workload, and the environment you are running your code on as the same workload under different circumstances (different resources available, different latency, etc) may cause our application to behave differently. | https://nornir.tech/2020/05/01/how-many-threads-are-enough-threads/ | CC-MAIN-2020-29 | refinedweb | 2,375 | 71.71 |
It's been a month and a half since I started studying Java, and I have the task of creating a simple game.
The contents of the game are as follows.
・ 3 players
・ Roll the dice 3 times each and find the total value for each.
・ The person with the largest total value wins
I'm creating it using an array and random numbers, but I'm stumbling because I don't know how to write the code to find the total value.
I would appreciate it if you could teach me.
I look forward to working with you.
[1]: 5 3 1
9 ← The total value is not calculated, and it seems that the random number is output as it is.
[2]: 2 6 3
8 ← The total value is not calculated, and it seems that the random number is output as it is.
[3]: 5 2 6
5 ← The total value is not calculated, and it seems that the random number is output as it is.
package test;
public class SaikoroGames {
public static void main (String [] args) {
// Decide the number of players
int [] r = {1,2,3};
int sum = 0;
// Loop for the number of people
for (int j = 0;j
for (int i = 0;i
System.out.print (dice () + "");
sum = dice () + dice () + dice ();
}
}
System.out.println ();
System.out.print (sum);
System.out.println ();
}
}
// roll the dice
static int dice () {
return (int) (Math.random () * 6) + 1;
}
}
I googled how to find the total value using an array and random numbers, but it didn't work.Supplementary information (FW/tool version, etc.)
- Answer # 1
- Answer # 2
The total value is not calculated, and it looks like the random number is output as it is.
You can only see it (if you put it out as it is, you won't get "8" or "9"). The total value of the random numbers is actually recorded.
However, in the total calculation and output respectively
dice ()Is called, so the displayed value is not used for the total.
Related articles
- html - i would like to know how to calculate the average of the data
- mysql - i would like to know the sql that calculates the total from four tables
- java - i would like to know how to make twitter and evernote an ios or android application
- i would like to know how to solve [literal 9 of type int is out of range] junit unit test java
- calculate total number of elements using javascript filter
- java - calculate the shortest path using dp
- java - [scala] i want to calculate my age when i give birth date
- i would like a code review to check the total number of two dice in python
- java - i would like to see if this complies with my understanding of polymorphism, but can you look at an example?
- java - calculate the average value of the student information class that manages multiple students
- java - i want to calculate the deviation value from command line arguments
- java - i would like to know how to call the original method with the original implementation when using mockito's spy
- about java button creation
- java - calculator i would like to divide it into a number display part and a calculation display part
- java - total points are not displayed
- java - display of the total of each of the three grades
- java - i would like to know about tableau's tabcmd
- html - how to calculate the total value of columns stored in db
- java - i would like to know how to combine two csv files in batch and narrow down where and output
The cause is here
The dice function only describes the process of randomly creating numbers.
So the dice () displayed in print and the dice () performed in sum areIt's a different thing
If you tell me completely, it will not be good for you, so I commented on the pointed out part.
About two hints (almost the answer) of the solution
-Let's define a variable once and put it in it instead of displaying dice () directly. Then you can use it for other processing without losing the numerical value.
・ How about calculating the total value every time you ask for a random number?
Something is entered in ○○. It is the same as the one displayed by print, so you should be able to do it by referring to the first hint.
Please do your best. | https://www.tutorialfor.com/questions-323042.htm | CC-MAIN-2021-25 | refinedweb | 734 | 62.72 |
hi, this should be an easy one, i just havent dealt much with global variables.
i have an unmanaged c++ project set up with many different classes. Each class has its own .h and .cpp set up correctly. They're included in a standard library that each cpp references. All of this works.
There is a main cpp file which also references the standard library. all of its functions are global, and can be accessed from any other class or file. Should its variables, declared outside of functions then also be global, and available from any class? They're not. Other classes can't find these variables. When added to the standard library, they get an error because they are created about 20 different times (once per .cpp file that includes the library) despite the "#pragma once" that i have.
I need these variables to be available everywhere. Am i just missing some syntax? or would a namespace be my solution. I'd like to avoid the latter if at all possible.
thanks in advance. | http://cboard.cprogramming.com/cplusplus-programming/71270-global-variables-printable-thread.html | CC-MAIN-2015-27 | refinedweb | 175 | 86.6 |
For Django developers obsessed with performance. Minimize your Django templates so that your HTML is served up already minimized.
Project description
django-template-minimizer
For Django developers obsessed with performance. Minimize your templates once not your HTML each time you serve it up.
Run this Django command to minimize your Django templates. Eliminate the need to reprocess your HTML to minimize it; as the HTML is now being put together in minimized form.
The command minimizes django templates along with the html, in-line <script> javascript, and in-line <style> css inside the templates. The command includes a minimize and an undo option. The minimizers for html, css, and javascript are plugable so you can override or add your own.
Installing django-template-minimizer
You can install django-template-minimizer either via the Python Package Index (PyPI) or from source.
To install using pip (recommended):
$ pip install django-template-minimizer
To install using easy_install:
$ easy_install django-template-minimizer
Register the app in your Django project’s settings file:
import tmin ... INSTALLED_APPS += ('tmin',)
To install from source, download the source from github (). Decompress it and put it in the folder with your django project as another django app. Register the app in your Django project’s settings file.
Usage
Commands:
$ python manage.py minimizetemplates -> help text $ python manage.py minimizetemplates -m -> minimize $ python manage.py minimizetemplates -u -> undo
Use these commands to minimize (or unminimize) Django templates after development. This way, your templates are small when they are evaluated and the HTML served is already minimized; eliminiating any post-processing minimization step.
Use the comment tags {# NOMINIFY #} {# ENDNOMINIFY #} inside your templates to wrap content you do not want minified.
Uses the setting TEMPLATE_DIRS in your Django settings file to tell the command where to find templates to minimize.:
TEMPLATE_DIRS = [...]
Customization
The minimizer command uses default minimizers for html, style tag embeded css, and script tag embeded javascript. You can override these and chain any number of your own minimizers using the settings below. These settings go in the Django settings file. Custom minimizers must be functions that accept text as a string parameter and return text as a string.:
JAVASCRIPT_MINIMIZERS = [my_function_1, my_function_2, ...] CSS_MINIMIZERS = [my_function_3, my_function_4, ...] HTML_MINIMIZERS = [my_function_5, my_function_6, ...]
To turn off a minimizer, use the following pattern:
f = lambda x: x JAVASCRIPT_MINIMIZERS = [f,]
You can tell the minimizer command to disable an aggressive HTML minimizer in the default HTML minimizer chain. This minimizer normally removes (instead of just collapsing) the remaining space between ‘>’ & ‘<’ character. To disable this minimizer in the default chain, set the following setting to False in your Django settings file:
AGGRESSIVE_HTML_MINIMIZER = False
Method
For each template, the minimizer command:
- Replaces any {# NOMINIFY #} {# ENDNOMINIFY #} tags and content with a unique identifier and saves the content in memory so that it is excluded from the rest of the process.
- Remaining Django comments are removed.
- Django tags and django variables are replaced with unique identifiers. The tags and variables are saved in memory. This approach “protects” the tags and variables from the minimizers. It also allows you to use Django tags and variables inside your javascript and CSS without ill effect by the CSS or javascript minimizer.
- HTML script tags and content are replaced with unique identifiers. The tags and content are saved in memory for additional processing. The type attribute for the script tag is checked to see if the script content is javascript. If no type is provided, then javascript is assumed. Any javascript is then run through the javascript minimizers.
- An almost identical process to step 4 is implemented on the HTML style tags for css.
- The remaining text (with the identifiers) is run through the html minimizers.
- All of the content saved in memory and associated with unique identifiers are put back.
- The original template is moved to an archive folder. The minimized template is put in the original location.
Limitations
Use the {# NOMINIFY #} {# ENDNOMINIFY #} comment tags to overcome these limitations.
The default javascript and css minimizers do not handle script tags inside script tags or style tags inside style tags; an unusual occurance.
eg: <script>bla bla<script>bla</script>bla</script>
The minimizer collapses all white space not in a django tag, django variable, javascript, or inline css. This includes whitespace inside <pre>, <textarea>, and similar tags; and whitespace inside html attributes.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/django-template-minimizer/ | CC-MAIN-2019-30 | refinedweb | 740 | 57.47 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.