Dataset columns:
text: string (length 454 to 608k)
url: string (length 17 to 896)
dump: string (length 9 to 15)
source: string (1 class)
word_count: int64 (101 to 114k)
flesch_reading_ease: float64 (50 to 104)
Provided by: libgetdata-doc_0.10.0-5build2_all

NAME
gd_close, gd_discard — close a Dirfile and free associated memory

SYNOPSIS
#include <getdata.h>

int gd_close(DIRFILE *dirfile);
int gd_discard(DIRFILE *dirfile);

DESCRIPTION
The gd_close() and gd_discard() functions attempt to close the open Dirfile dirfile and free all memory associated with it. The gd_close() function first flushes all pending metadata updates to disk. This step is skipped by gd_discard(), which simply discards metadata changes. For dirfiles opened read-only, these two functions are equivalent. Next, all pending data is flushed to disk and all open data files are closed. In order to ensure that modified data files associated with RAW fields are properly terminated, changes to RAW data files are still flushed to disk by gd_discard(). Finally, if the above didn't encounter an error, these functions free the memory associated with the DIRFILE object. If dirfile is NULL, nothing happens, and the call succeeds.

RETURN VALUE
gd_close() and gd_discard() return zero on success. On error, they do not de-allocate dirfile and instead return a negative-valued error code. The error code is also stored in the DIRFILE object and may be retrieved after this function returns by calling gd_error(3). A descriptive error string for the error may be obtained by calling gd_error_string(3).

HISTORY
The function dirfile_close() appeared in GetData-0.3.0. The function dirfile_discard() appeared in GetData-0.6.0. In GetData-0.7.0 these functions were renamed to gd_close() and gd_discard(). In GetData-0.10.0, the error return from these functions changed from -1 to a negative-valued error code.

SEE ALSO
gd_dirfile_standards(3), gd_error(3), gd_error_string(3), gd_flush(3), gd_invalid_dirfile(3), gd_open(3)
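For illustration, a minimal usage sketch (not part of the manual page; the dirfile path is hypothetical, and error reporting follows the gd_error(3)/gd_error_string(3) pattern described above):

#include <getdata.h>
#include <stdio.h>

int main(void)
{
    char errbuf[256];
    DIRFILE *df = gd_open("/data/my_dirfile", GD_RDONLY); /* hypothetical path */

    if (gd_error(df)) {
        gd_error_string(df, errbuf, sizeof errbuf);
        fprintf(stderr, "gd_open: %s\n", errbuf);
        gd_discard(df); /* equivalent to gd_close() for a read-only dirfile */
        return 1;
    }

    /* ... read data ... */

    if (gd_close(df) < 0) {
        /* On failure, df is NOT freed; the error code remains queryable. */
        gd_error_string(df, errbuf, sizeof errbuf);
        fprintf(stderr, "gd_close: %s\n", errbuf);
        return 1;
    }
    return 0;
}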
http://manpages.ubuntu.com/manpages/disco/man3/gd_discard.3.html
CC-MAIN-2019-39
refinedweb
276
58.38
Prerequisite: Bool Data Type. The following program fails to compile as C, because bool, true and false are not defined without the appropriate header:

int main()
{
    bool arr[2] = {true, false};
    return 0;
}

If we include the header file "stdbool.h" in the above program, it will work fine as a C program:

#include <stdbool.h>

int main()
{
    bool arr[2] = {true, false};
    return 0;
}

Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
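As an aside, a small sketch (mine, not from the article) showing that stdbool.h's bool is just the C99 built-in _Bool, which clamps any nonzero value to 1:

#include <stdbool.h>
#include <stdio.h>

int main()
{
    bool flag = true;   /* bool is a macro expanding to _Bool */
    _Bool raw = 42;     /* _Bool stores only 0 or 1, so 42 collapses to 1 */

    /* sizeof(bool) is implementation-defined; it is commonly 1 */
    printf("%d %d %zu\n", flag, raw, sizeof(bool));
    return 0;
}
/* Typical output: 1 1 1 */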
https://www.geeksforgeeks.org/bool-in-c/
CC-MAIN-2018-13
refinedweb
136
61.06
Anything Slider Database ID Link rather than Slide # Link

I've been toying with the Anything Slider for use on a personal portfolio site. I'm going to be pulling and populating the slides with information and photos from a database (via PHP/MySQL). I want to be able to hard link specific slides from other pages and other sites (facebook sharing, etc.). The problem I'm seeing is it appears (and maybe I'm wrong) that hard linking to specific slides is done via what slide number it is in the progression (slide #1, #2, #3, etc.). Obviously, if I'm pulling from a database that is frequently updated, what is slide #2 initially may be slide #3 or #4, etc. depending on the new content being added to the database. In other words, I need to link to a slide based upon the database ID rather than the slide #. Now, I can easily insert the database record ID as a class or id in the 'li' tag, but have no idea how to make the javascript tie to the 'li' ID rather than the slide #. Anyone have a simple mod for this one? I should mention, I'm a design side guy. I know my way around PHP just enough to be dangerous, and my knowledge of javascript is pretty much plug and play only (though it's on the list to learn). Thanks, Justin

Hi Justin! Actually if you think about it, AnythingSlider is just progressing through the slides that exist inside of it. I don't know exactly how you are displaying or interacting with the slider, but it would help if you described how the next slide information is obtained. Here is the basic idea… if you look at the AnythingSlider demo page, you'll notice a button to add a slide (appended to the end) and a button to remove slides (the last button). The script that does this could be adaptable to your needs:

// Add a slide
var imageNumber = 1;
$('button.add').click(function(){
  $('#slider1')
    .append('<li>…</li>')  // this adds a slide that alternates between two images (demo purposes only)
    .anythingSlider();     // update the slider
});

// Remove last slide
$('button.remove').click(function(){
  if ($('#slider1').data('AnythingSlider').pages > 1) {
    $('#slider1 > li:not(.cloned):last').remove(); // remove last non-cloned slide if you have "infiniteSlides:true" otherwise just use "li:last"
    $('#slider1').anythingSlider(); // update the slider
  }
});

If you need specific slides, then maybe include the ID number in the panel when it is added - <li id="1234">…</li>. For instance, if you want to insert a new slide after a particular slide, use the jQuery "after()" function:

$('#slider').find('li#1234')  // find slide to add new slide after
  .after('<li>…</li>')        // insert new slide after
  .anythingSlider();          // update the slider

To delete a particular slide, just target the ID, then remove it:

$('#slider1').find('li#1234').remove();

I can help with more specifics when I get a better idea of what you need.

Mottie, Thanks for the reply. Maybe I wasn't quite clear. I haven't actually built the page yet for this project, though I've experimented with this slider elsewhere. There's no question about how to add or remove slides, that will all be done automatically with the database query. I'll use PHP to do a query of a MySQL database to pull the proper text and image URL that will be dropped into a repeating 'li' tag for however many slides are called for. And yes, as mentioned, I can easily tag the 'li' tag with an id="1234" pulled from the same dataset. The problem comes if I want to create a permanent external link to a specific slide in the set.
If records are added to the database between the time that I post a link, say to facebook, and the time that someone else clicks on the link weeks or months later, what used to be slide #1 in the series would now be slide #2 or #3 etc. I need a way to have the permanent link tied to that 'li' id rather than the slide number in the series on the page, which will forever be fluid and changing. In other words, how do I create a permanent link to a DB driven slide that may be #2 in the series on one day, and #5 on another or #6 etc. It appears the links are driven by slide # in the series, and not the id tag on the 'li'. Does that make more sense?

Hey, Sorry I made a mistake in my previous post. You can't start an ID with a number, so let's say you make the id of each li start with "record", so you should end up with "record1234" as an ID. Then the following code will target the correct slide (by finding the slide number, of course):

// dbID is the database ID number
var dbID = '1234',
    // the actual slider
    slider = $('#slider1'),
    // get the plugin variables
    plugin = slider.data('AnythingSlider'),
    // find target slide - change "#record" to whatever prefix you decide
    // to use for the record ID (leave the "#" in front)
    target = slider.find('#record' + dbID).closest('.panel');

// navigate to selected slide
slider.anythingSlider( plugin.$items.index( target ) );

So if I'm following this correctly, I'll be feeding the dbID via a URL variable, then the code you submitted above would find the li with that ID and then point to the corresponding slide number. Where would I insert the above code and do I need to do something further to pull the ID out of the URL string?

Hiya! Ok, so if your id is in the url, let's say in the "dbid" variable, all the script does is look for the slide (the <li>) with an ID of "record" + the number. So if you have a url like that, use this code to get the id from the url:

/* Returns a URL parameter
 * z = gup('name', s); // s = string to search, or leave blank to target window.location
 * Original code from Netlobo.com
 */
function gup(n, s){
  n = n.replace(/[\[]/, "\\[").replace(/[\]]/, "\\]");
  var p = (new RegExp("[\\?&]" + n + "=([^&#]*)")).exec(s || window.location.href);
  return (p === null) ? "" : p[1];
}

Add the above function to extract the id from the url string. Then use the following:

var url = '',
    // the actual slider
    slider = $('#slider1'),
    // get the plugin variables
    plugin = slider.data('AnythingSlider'),
    // dbid is the variable name in the string
    dbid = gup('dbid', url),
    // find target slide - change "#record" to whatever prefix you decide
    // to use for the record ID (leave the "#" in front)
    target = slider.find('#record' + dbid).closest('.panel');

// navigate to selected slide
slider.anythingSlider( plugin.$items.index( target ) );

Thanks Mottie, I'll give it a shot and let you know how it works…

Hmm, think I have a very similar question – I'm hoping to use a mysql database drawn out with php to populate my slider but I have a problem – The slider will be split into categories with a main/cover slide to introduce the category, then there are an unspecified number of product slides within each category and the number of these will change as more are added/deleted from the mysql database, so I'm guessing the number of the particular category slide will change also.
I want to be able to link to the specific category slides directly but as the number of the category slide will change, I'm not sure how I'd achieve this with something like the following:

// External Link with callback function
$("#slide-jump").click(function(){
  $('#slider2').anythingSlider(4, function(slider){
    /* alert('Now on page ' + slider.currentPage); */
  });
  return false;
});

e.g. is there a way I can update the "4" here dynamically without having to manually count through the slides each time something changes to directly link to it? I've had a look at the solution above – as the value is passed as a querystring does this require a page refresh? I'd like it to be as instantaneous as the logic above shown in the demos, really. Thanks in advance, MK

I think the easiest solution would be to add an ID to your category slides, then you can just target the ID:

$("#cat-1").click(function(){
  $('#slider').anythingSlider("#category1", function(slider){
    /* alert('Now on page ' + slider.currentPage); */
  });
  return false;
});

For reference, look under "Targeting an object inside the slide" on this page. And the id can be on the panel itself (i.e. the LI, if you are using UL LI's in the slider).

Hi, I have a problem somehow the same as discussed above and it would really be great if someone helps me out. Actually, I am generating multiple sliders from my MSSql server and the string I am able to generate is … I am writing it to my page dynamically, but somehow the slider doesn't seem to be working correctly. My problem is something like @mkultron has mentioned, as sliders being separated based on category.

Well first off, it's not a good idea to have an ID start with a number – I think it is allowed in HTML5, but from looking at your markup, that doesn't look like HTML5. Secondly, I only see one panel in slider 3 and one panel in slider 4. So the slider won't do anything – no navigation, no arrows – because there is only one slide. Try adding more.

Thanks a lot pal :D, You really saved my day. That suggestion really worked, but if possible can you give me an idea of how I can make it look the way it looks for more than one panel. I mean, can I generate the next and previous buttons for a slider that may have a single panel only? That's because the content generated is dynamic and I can not say how many panels will be added to one slider. And in the above case the navigation buttons are not visible (I know that's what it is supposed to be doing, but can't help in that). Some more help is really appreciated :).
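Pulling the thread's pieces together, a minimal sketch (my own, not code posted by Mottie) that deep-links to a database-driven slide on page load, assuming the gup() helper defined above, the "record" ID prefix suggested earlier, and a URL parameter named dbid:

// jump to the slide whose <li id="record..."> matches ?dbid=... in the URL
$(function () {
  var slider = $('#slider1'),                 // the actual slider
      plugin = slider.data('AnythingSlider'), // get the plugin variables
      dbid   = gup('dbid'),                   // read the record id from the URL
      target = slider.find('#record' + dbid).closest('.panel');

  if (dbid && plugin && target.length) {
    // compute the slide's current position in the series at load time
    slider.anythingSlider(plugin.$items.index(target));
  }
});

Because the slide number is computed at load time, the link stays valid no matter how many records have been added to or removed from the database since it was shared.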
https://css-tricks.com/forums/topic/anything-slider-database-id-link-rather-than-slide-link/
CC-MAIN-2016-36
refinedweb
1,716
68.7
I have a problem in C# that I have faced a few times before, but I have always managed to work around it. Basically, I need to, from one class, access another class that has not been created in the current class. Both classes are in the same namespace. Example:

Code:
//Form1.cs
ClassTwo testClass;

public void Initialize()
{
    testClass = new ClassTwo();
    testClass.doTest();
}

public void aTest(string text)
{
    this.Text = text;
}
//End of Form1.cs

//ClassTwo.cs
public void doTest()
{
    Form1.aTest("This is a test");
}
//End of ClassTwo.cs

This is basically what I want to do. But it won't allow me to access the method(s) in Form1. How can I do this without creating a second instance of Form1?
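A sketch of one common fix (my example, assuming Windows Forms; it keeps the method names from the question): pass the existing Form1 instance into ClassTwo, instead of calling the method through the class name, which only works for static members.

using System.Windows.Forms;

//Form1.cs
public partial class Form1 : Form
{
    ClassTwo testClass;

    public void Initialize()
    {
        testClass = new ClassTwo(this);  // hand ClassTwo the existing instance
        testClass.doTest();
    }

    public void aTest(string text)
    {
        this.Text = text;
    }
}

//ClassTwo.cs
public class ClassTwo
{
    private readonly Form1 form;

    public ClassTwo(Form1 form)
    {
        this.form = form;               // remember the caller; no second Form1 is created
    }

    public void doTest()
    {
        form.aTest("This is a test");   // instance call instead of Form1.aTest(...)
    }
}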
http://cboard.cprogramming.com/csharp-programming/99111-accessing-classes.html
CC-MAIN-2015-27
refinedweb
123
69.79
Due by 11:59pm on Monday, 2/23

Alyssa's program is incomplete because she has not specified the implementation of the interval abstraction. Define the constructor and selectors in terms of two-element lists:

def interval(a, b):
    """Construct an interval from a to b."""
    "*** YOUR CODE HERE ***"

def lower_bound(x):
    """Return the lower bound of interval x."""
    "*** YOUR CODE HERE ***"

def upper_bound(x):
    """Return the upper bound of interval x."""
    "*** YOUR CODE HERE ***"

Any answer will be accepted, but please attempt to answer the question correctly.

multiple reference problem..."""

Note: No tests will be run on your solution to this problem. Any answer will be accepted, but please attempt to answer the question correctly.
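For reference, a minimal sketch of the two-element-list implementation the prompt asks for (one standard answer, not the official solution):

def interval(a, b):
    """Construct an interval from a to b."""
    return [a, b]

def lower_bound(x):
    """Return the lower bound of interval x."""
    return x[0]

def upper_bound(x):
    """Return the upper bound of interval x."""
    return x[1]

>>> x = interval(1, 2)
>>> lower_bound(x), upper_bound(x)
(1, 2)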
http://gaotx.com/cs61a/hw/hw04/
CC-MAIN-2018-43
refinedweb
116
67.65
Why Do Projects Continue To Support Old Python Releases? 432 Posted by Unknown Lamer from the developers-vs-developers dept. On Planet Python, Gregory Szorc asks why many projects continue to support Python releases prior to 2.7, when they are all EOLed (2.4 in 2008, 2.5 in 2011, and 2.6 last October), and Python 2.7 provides a clear upgrade path to Python 3. Quoting: "I think maintainers of Python projects should seriously consider dropping support for Python 2.6 and below. Are there really that many people on systems that don't have Python 2.7 easily available? Why are we Python developers inflicting so much pain on ourselves to support antiquated Python releases? As a data point, I successfully transitioned Firefox's build system from requiring Python 2.5+ to 2.7.3+ and it was relatively pain free." Shortly after posting, other developers responded with their reasons for using older Python releases. First, Rob Galanakis of CCP (EVE Online) explains the difficulties involved in upgrading a mature commercial project embedding Python. Nathan Froyd adds "I think this list of reasons to upgrade misses the larger point in providing software for other people: You do not get to tell your users what to do. ... Maybe those users don't have sufficient control over their working environments to install a new version of Python. ... Maybe those users rely on certain APIs only available in older versions of Python and don't wish to take an indeterminate amount of time to rewrite (retest, recertify, etc. etc.) their software. ... Maybe those users are just rationally lazy and don't want to deal with downloading, configuring, and installing a new version of Python, plus dealing with inevitable fallout, when the old version has worked Just Fine for everything else." People don't upgrade (Score:3) When developing, it's important to aim for the lowest support level. If all my customers are running offline RHEL 4, I'm stuck with python 2.3. Offline side-by-side Python (Score:3) If all my customers are running offline RHEL 4, I'm stuck with python 2.3. How do you deliver an application to your offline users? And why can't you deliver an up-to-date side-by-side version of Python along with it? Re: (Score:3, Insightful) Because he doesn't want the support calls headache? Using the RH-provided Python makes it RH's responsibility to support rather than the ISV's. Re: (Score:2) Why would a Python install break something, if it's installed side by side with your app? If you don't want to break the OS, don't touch /usr at all, it's as simple as that - and it has always been possible on Linux. Re: (Score:2) Why would a Python install break something, if it's installed side by side with your app? ...as long as everything you call script/daemon-wise reads "python27 /path/to/my/new_product.py", sure. Unless you want the sysadmins to love the idea of re-jiggering links. 'course, that might break dependent packages, other python bits running on the same box, etc. Re: (Score:3) Ideally, the sysadmins should not even be aware that you're using Python. The entry point should be a shell script that will spin up the appropriate Python interpreter from the directory that was bundled with the app, and that shell script is what's referenced from crontab, service scripts etc. Re: (Score:2) Python is like Java in that it isn't simply a single program, it's a complex web of programs, libraries and model classes.
Although Java was expressly designed to permit multiple versions to co-exist and even execute at the same time in the same system, I'm less certain that you can do that with Python. At least without doing a chroot or a VM. Which isn't so much making the affected app play well with other systems that use other Python versions as it is setting up the app in a little world of its own. Re: (Score:2) You still haven't explained how one can break a system by copying a bunch of files to /opt/yourapp. Yes, there are techniques where truly "nothing can go wrong", within certain reasonable limits. Knowing and applying those techniques is part of the skill set in this industry. If absolutely everything could break everything else, we'd never get things done. Sticking with system-installed Python means that you need to worry whenever that system decides to update the version of Python that it uses. Re: (Score:3) Sticking with system-installed Python means that you need to worry whenever that system decides to update the version of Python that it uses. "That system" would never "[decide] to update the version of (whatever)" because RedHat provides guarantees of that. If RedHat would start sending its RHEL users such updates, the next thing the users would do is to migrate to SuSE's SLES or CentOS. Remember, not every upstream is as fsckd up as GNOME or Mozilla or Google. Even MSFT is much much more sensible in that regard. Re: (Score:2, Informative) You can: python virtualenv Re: (Score:2) If all my customers are running offline RHEL 4, I'm stuck with python 2.3. How do you deliver an application to your offline users? And why can't you deliver an up-to-date side-by-side version of Python along with it? Anaconda - Red Hat's automated boot-time hardware configurator - is one of several critical RHEL components written in Python. If you swapped out the Python that came with the system with a new, incompatible Python, chances are good that it'd break so hard it wouldn't even be able to boot up. Re:People don't upgrade (Score:5, Interesting) You should be working in Java. Or COBOL. Most languages mutate enough that yes, keeping ahead of the bit rot is indeed as much the developer's job as coding it in the first place. The only exceptions are systems designed with the mainframe mindset that you code it and forget it for 3-4 decades. COBOL, because no one could really stand to muck with it, and Java because Sun put in deprecation mechanisms so that even really nasty old stuff will still be maintainable in an emergency.
BUT: you pretty much get a guarantee that any code written today will continue to run for at least 10 years (due to specific OS version support for that long) and likely a lot longer than that; due to API support normally extending even further. Re-writing shit just because the platform changed costs real world time and money. Re: (Score:3) What do you mean, "most?" All the other commonly used languages -- C, C++, the various .NET languages, Java, etc. -- most certainly do not mutate like that! New versions come out, sure, but they're not so broken in design that programs written in different versions have trouble coexisting on the same system! Re: (Score:3) Requiring the developer to install the correct old version of the compiler toolchain in order to make a change is much more reasonable than requiring every user to install an old version of a runtime environment, especially when said runtime environment doesn't like coexisting with other versions of itself. Plus, Microsoft sucks. Compiling a C89 program in GCC is only a command-line switch away... Re:People don't upgrade (Score:4) The jump to python 3.0 is a little tricky because some code is not compatible (which is why we still have 2.x) so there's lots of software that would break if people upgrade. That much is true, but I read this post as just calling for support for pre-2.7 Python to be dropped, not all of 2.x. Upgrading to 2.7 doesn't introduce the kind of incompatibility that upgrading to 3.0 does. Re:People don't upgrade (Score:5, Insightful) No, but it still introduces significant incompatibility -- things can and will break -- and so re-testing, re-coding, re-certifying... still required. Now, you have two choices: a) Don't screw with the system, and everything continues to work just fine, no extra costs or problems are encountered, and everyone is happy except the cluetard who write TFA, who no one cares about anyway. Or: b) You can upgrade, break everything, incur lots of costs, and everyone in every direction is pissed off except said cluetard. Now, I ask you: what's obviously the best choice here? Re:People don't upgrade (Score:4, Interesting) The whole question of why users don't upgrade seems to be to be about making the developer's lives easier as opposed to the customers lives. But for my entire career I've had to support older releases I'd rather forget about, and it was taken as a normal and expected part of being a professional. It has only been relatively recent that developers start feeling peeved and annoyed at customers. However if you make it easier for customers to upgrade, maintaining perfect backwards compatibility, the problem can go away. For security issues, then back-port the fixes to older releases. You're selling a product, probably selling product support, so start offering the customers some service. Now I can understand with Python being an open source project that the devs who aren't getting paid just want to program for fun. That's the trap of adopting someone's pet project for production uses. But there are professional organizations who will offer paid support for open source projects. Re: (Score:3) Re: (Score:3) Part of this means the developer should not be making changes that break existing code. At the very least save this for major version changes. This isn't about pissing money into the wind, but giving customer value for the money they already spent. Ie, python 2.7 should not break python 2.3 code, there's no reason for that to occur. 
If customers figure out that older versions aren't supported and are left to rot away while devs work on the cool new stuff, then the customers will find other vendors to deal Re: (Score:2) You can build local copies of virtually everything, including the libc, into your own prefix so it won't affect anyone else (./configure --prefix=/home/user/blah etc)... The only thing you can't upgrade is the kernel. Not that you should have to do this, as its very painful to maintain but it is at least possible. Most userland apps should run just fine, even on the 2.4.x kernel providing you compile an appropriate set of libs for them. On the other hand, what are you doing with a compute cluster running such Wrong question (Score:5, Insightful) Let's say I have something in python that was developed on python 2.5 and is in maintenance now. The correct question is why would I demand python 2.7 suddenly? Wouldn't that be kinda bone headed of me? Why should I do extra work with the express purpose of making my software more demanding in it's requirements? Re:Wrong question (Score:4, Insightful) What does he care if they support older releases, which in no way implies that they won't also work with newer releases. It's the maintainers of those projects who do the work, not him. Re: (Score:2) Exactly. I have yet to find a case where 2.7 won't run code written for 2.5. There is a better case (over time) for adding 3.x compatibility so the 2.x line can eventually retire, but considering there are still new systems coming with 2.6, I suspect 2.7 will be with us for a while. Then again, it is mature and highly functional so I'm not so worried about that. Re: (Score:3) >He only said that he didn't want it to work with older versions, and that it was not a lot of work - i.e. it still took some amount of work - to make it not work with older versions He doesn't say that at all. Really. It's not even remotely in the article. He talks about dropping support for Python 2.6. This isn't an action involving work! It means you no longer care if it doesn't work in Python 2.6. Re: (Score:3) tl;dr: If it aint' broke, don't fix it. To that I would add, if it works, don't break it! The problem here is that 3.0 is not exactly friendly to 2.x scripts. I'm not going to argue the virtues of 2.4 verses 3.0, but I am going to say that if you break something when you upgrade your interpreter, expect to support the previous version for a LONG time. Re: (Score:3) I wish people in IT would stop saying "If it ain't broke". That's almost as bad as "All You Have To Do Is..." Computer systems don't decay in the literal sense. So in theory, once done, done forever. Reality, however, is different. Virtually no system is so isolated that some external hardware, OS, language or other upgrade cannot break otherwise healthy unchanged software. It will break. And according to Murphy, it's going to break at a time when the inconvenience, expense, and damage to your professional rep Re: (Score:3) If you wait long enough you can then just jump to whatever the latest and greatest language happens to be. In 20 or 30 years it may be harder to continue to use Python than t would be to jump to Psychedelic Giraffe, or whatever the hot new programming language will be. It's simple, its Redhat (Score:2, Informative) Get Redhat to upgrade their version of Python, then the developers will use it. Until then if you want to write portable code, you ignore the newer features. 
One Word (Score:2) Re: (Score:3) I worked at a company that used CentOS and they wouldn't upgrade Python for production environments because newer versions weren't certified to be stable. Even though common sense says they are. When core OS services are written in Python (and they are in CentOS/Red Hat), it doesn't matter how stable the newer releases are. If they don't perform exactly identically, there's a real risk that the OS itself may malfunction. Change = bugs (Score:4, Insightful) Change means bugs. Why would you want to disrupt your production system to fix bugs introduced in a new release? What benefit would there be to you? The system is built, and it works. Don't touch it unless you have to, or unless it was designed to be upgradeable. Because it works. (Score:5, Insightful) Many Linux distros in wide use still come with Python 2.6 as the stock Python. If you have some little program that needs to be portable, you write for Python 2.6 and test on 2.6 and 2.7. People aren't converting to Python 3 for the same reason Perl users aren't converting to Perl 6 - it's different and incompatible. Many third party Python 2.x packages were never ported to Python 3. Some were replaced by new packages with different APIs, which means rewriting code and finding out what's broken in the new packages. Newly written programs tend to be written for Python 3, but much old stuff will never be ported. Re: (Score:2) Indeed. A similar question would be why people aren't "upgrading" from Python to Java. It's a different language. Period. Re: (Score:3) Python 2.x and 3.x are definitely much more similar than Python and Java, or even than Perl 5 and Perl 6. Especially when you look at 2.7 and 3.3 (where both versions had bits and pieces added to make syntax more uniform - like adding b"" literals to 2.7 and u"" literals to 3.3), it's actually quite easy to write code that's valid and behaves identically on both. Re: (Score:2) Sure, it is. You need to know what you're doing to pull that off. It's still a much better situation than Perl 5 vs 6. Strings however are a serious impediment to 3.0. You are blissfully ignoring the huge problem: a range of bytes can contain any arrangement. It wouldn't actually fix print as the differences are much more than that. 2.7 print had a lot of funky syntax in it - e.g. you could use >>stream as the first argument to print to a given stream rather than stdout, and then there's trailing-comma-meaning-no-newline (a la BASIC). I don't think these can be uniformly handled without treating print specially, and the whole point of the change was to stop doing so. I think it's a good thing, too - there's really no reason to single out print, and when te Re: (Score:2) Many Linux distros in wide use still come with Python 2.6 as the stock Python. How many, actually? So far the only one that has been named explicitly is RHEL. RedHat (Score:5, Informative) The Python version included with RHEL 6 (that's the very latest version): 2.6 The Python version included with RHEL 5 (still widely used): 2.4 Thank them for forcing us to keep supporting old versions. Re: (Score:2) Debian stable was on older Pythons until recently also, but Debian 7.0 (released May 2013) now has 2.7 default. The flip side... (Score:2) Maybe users don't want to upgrade, like Mr. Froyd says. But maybe they would upgrade if you gave them a reason to. I'm definitely not a large scale user, but I've never encountered a situation where an older version is more attractive in any way. Re:The flip side...
(Score:5, Funny) Windows 8 says hello. Not surprising (Score:4, Insightful) I'm just going to mirror what this guy said. There's a reason why IE6 was still around *years* after it should have been taken out back and shot. There's a lot of money dumped into these projects and applications. I'd make every effort to encourage a transition, but those cost money and time. Ubuntu and Windows (Score:2) Re: (Score:3) Another is the fact that some Python libraries use C modules (as opposed to using pure-Python code that uses ctypes to interact with shared libraries), and some users under Windows may not own a copy of the appropriate version of Visual Studio to recompile the module. This particular aspect actually makes it easier to use more recent Python versions on Windows, not the other way around. Case in point: if you use 3.3, then your C extensions have to be built with the VS 2010 compiler. If you use 2.7, they have to be built with VS 2008. Now, VC++ 2010 Express is still available as a download, and works just fine so long as you don't need to build 64-bit binaries. VC++ 2008 Express is also available, but only as a direct download link; the landing page (with the information about Incompatibility (Score:2) New Python versions come with incompatible changes. Very rarely can you grab code written for one version and run it completely unmodified on a newer version. This sucks and is made worse by third party APIs that also have to be modified. It is hard to release new language versions while also keeping backward compatibility. Some languages do it better than others. Simple (Score:3) Turn the question in the right direction (Score:5, Insightful) Why doesn't the new version of the Python interpreter support older dialects? Re: (Score:3) Why doesn't the new version of the Python interpreter support older dialects? Not justifying it, but possibly it's to keep the code as homogenous as possible in the aid of readability. Like with the indenting, Python likes to force you to do things its way, and its way is the only way. :) Re: (Score:3) Why doesn't the new version of the Python interpreter support older dialects? Bingo. Why has Microsoft dominated so much for so long? Backwards compatibility. (Not the only reason, but arguably the big one that nobody can compete with.) (One day you'll get your day in the sun WINE...) Re:Turn the question in the right direction (Score:5, Informative) If we're talking about 2.4/5/6 -> 2.7 migration, then the question does not apply, because there's no "older dialect" - it's the same language, albeit with a few minor additions. There are no major breaking changes there. The problem is that you can't be sure that any of the minor changes do not somehow break your program, especially if it relies on something that was never a hard guarantee, and so you need to retest on the new Python version before you can officially declare it supported. This is no different from Java, for example. If we're talking about 2.x -> 3.x, then the break is by design. Python 3 is simply a new language, which disposes with a lot of cruft that accumulated over the years in Python 2 (remember, we're talking about something that's 20 years old at this point). It makes it much better for writing new code, but also requires that you port old code, and places an additional burden on library developers to support both (at least for as long as 2.x remains popular). Re: (Score:3) Odd, that doesn't seem to have been a problem for many much older languages. What, the accumulation of cruft?
It has been a problem for all much older languages that are still being actively used, but especially those that are also actively evolving. Java is piles upon piles of cruft at this point, for example. PHP, even more so. C++, don't even get me started on that. C doesn't have that much, but only because it's a much smaller language in general; proportionally, it has just as much cruft as anyone else (think "register" and "auto"). And yes, those languages do in fact break things. Re: (Score:3) I don't consider Python perfect or ideal, I can agree with that; just reasonably good. At the very least, certainly better than PHP. Not that. PHP's only real problems are inconsistent naming and parameter order. (Interestingly enough, a problem partially shared by python in spite of PEP 8) Unlike Python, it doesn't suffer from any serious design flaws. Here's an example that you're undoubtedly tired of hearing about: Meaningful white space. It looks like just a matter of personal preference at first, but has far-reaching consequences. I've mentioned in the past how it makes some editors unusable and it makes mov Re:Turn the question in the right direction (Score:4, Informative) Is this some kind of joke? Python's use of syntactically-significant whitespace is not in the same league as all the issues PHP has. [veekun.com] Re: (Score:2) Laziness? Lack of respect for users? Not understanding who the users actually are and what they do? Younger devs who've never dealt with backwards compatibility nightmares before? Rhel 5 (Score:2) Still ships with 2.4. Rhel 6 not much newer. Lots of us with lots of production boxes want stable and fight to keep it that way. There is also the library issue: many have not ported to 3.x or made major "we broke things" changes to do so. Because Python 3 still needs work (Score:3) (source [selenic.com]) Legacy Support (Score:2) Because some people can't upgrade to the latest and greatest for a variety of reasons. Tone down your arrogance - it's messing up your nose hair. It's because Python 3 is broken. (Score:3) Re: (Score:2, Insightful) No really. I took a pass at Python 3 a while back. The amount of hoops I needed to jump through, to deal with compilation errors around Unicode handling, was terrifying. It was simply a poor user experience. That doesn't necessarily mean Python 3 is broken. Lots of Python developers are discovering that they don't really understand how to do Unicode properly. Their code works under Python 2 (mostly by accident), but Python 3 calls BS and forces you to do it right. Re: (Score:3) The amount of hoops I needed to jump through, to deal with compilation errors around Unicode handling, was terrifying. It was simply a poor user experience. That often (though not always) indicates that the original codebase is written without much if any awareness of the existence of different text encodings, and will therefore handle anything other than ASCII or Latin-1 (or, if you're really lucky, UTF-8) wrong. Exposing that is a good thing. The whole point of changes around strings in Python 3 was to explicitly expose the various assumptions centered around "text is just a bunch of bytes" and "a bunch of bytes is always meaningful text" (like "every byte i Because it costs money... (Score:2, Insightful) Also, to be realistic, you don't earn any money upgrading your existing software to the latest platform. There is no point? (Score:2) If you don't require features from Python 2.7, why drop support for older versions?
Re: (Score:2) Because the older version can have bugs, and those bugs will never be fixed at this point as it is EOL'd? The Client (Score:4, Insightful) Developer Blindness (Score:2, Insightful) The appropriate question is: Why should I re-write/replace/upgrade my software, which works perfectly fine(!), because some developer of the Python project decided to refactor/deprecate a function/break a feature or otherwise fiddle with the system? Any software that runs properly today should continue running long into the future. If it doesn't, if the language developers break my software, they are the issue. From a user perspective, there is nothing I hate more than some app that suddenly needs a completely because the writers of the language blew it (Score:3) since the authors of python don't do proper unit and regression tests and maintain backward compatibility, the 2.4, 2.5, 2.6 and 2.7 ARE DIFFERENT LANGUAGES! It's past time for open source wares (including the Linux kernel ABI) to get some maturity already in the realm of backwards compatibility; open source could go more places if it had that. Blame Jython (Score:5, Informative) I think a large part of it is due to the convenience of Jython, which makes it really easy to embed Python directly into a Java application as a user scripting engine that doesn't require explicit installation or configuration by the user. I'd go so far as to say that Jython is probably 99% of the reason WHY so many Java apps that support scripting do it via Python instead of some other language. Jython only supports 2.x syntax. There IS a way to bind a Python interpreter to Java so it can exchange objects directly (py4j), but it still requires separate installation of Python, with all the usual things that can go wrong (and frequently do, at least under Windows), like environment variables, path definitions, etc. There's also IronPython, which is another 2.x-only Python that enjoys lots of "automatic" mindshare from Windows developers because it presents itself as the "official Microsoft-blessed .Net CLR Python" for Windows, and everyone remembers that a decade ago, Activestate Perl was the de-facto Perl for anyone running Windows (and eventually, the de-facto Perl, period). It's basically abandonware at this point, but ActiveState doesn't go out of its way to make it obvious. That said, I'd put most of the blame/credit for 2.x on the non-existence of "Jython 3". Why is support for old versions even needed? (Score:3) Why are we Python developers inflicting so much pain on ourselves to support antiquated Python releases? If I were a Python developer, instead of a Python user, and there was a lot of pain supporting old Pythons, I'd just stop supporting them (instead of whining). Older versions of Python only receive bug fixes. People who are still using an older version evidently are happy with it, bugs included. So, Python developers should fix old bugs if it's painless, or not if it's painful. About the only exception I can think of is security problems: if a security flaw in an old version is suddenly revealed, it should be fixed by Python developers if at all possible. Then again, if that becomes too much trouble for Python developers, they can just (politely) say "if this bug really is a problem for you, you should just pay the pain yourself by upgrading to a newer version where we've fixed it." Another clear exception is that the last 2.x version, 2.7, should have its bugs fixed into the foreseeable future because so many of us still prefer 2.x.
Then again, I would have already switched to 3.x if 2.7 didn't (already and into the foreseeable future) work just fine for everything I need Python for. First try 2.4 to 2.7 (Score:3) Re: (Score:3) We have a few applications written in zope. Zope was one of those classical python programs that needs a very specific set of libraries for each version of zope. Some of the required versions aren't even on the net any more, according to google. At some point the Zope team split up, and now the new zope won't do most of what the old zope was doing, and the old zope has bugs and is a security risk and is now abandon-ware. So far we have hired 3 different python coders to "fix" the old applications yet none Unit tests (Score:2) Re:python sucks (Score:5, Informative) If you are upgrading Python without so much as reading about what 3.x is and how it's different from 2.x, you're probably not qualified to be handling such an upgrade. 3.x was explicitly declared as a new language with breaking changes all around, so as to get rid of legacy cruft. This information was widely disseminated through all official channels - documentation, mailing lists, changelogs etc - and carried over on unofficial ones, like textbooks. That's precisely why all Linux distros out there install 3.x and 2.x side by side, and don't treat one as an update to another. If your Linux distro silently re-symlinked /usr/bin/python to python3 when you installed the latter, then your Linux distro is broken - you should complain to its maintainers. If you installed it manually, then you have only yourself to blame. And, getting back on topic, this all has nothing to do with the 2.6 -> 2.7 transition, where the language is the same and backwards-compatible. Re: (Score:3, Insightful) Bullshit. All programs had '/usr/bin/python' set as their interpreter. Some developers assumed it would be python 2.5, the others assumed it would point to 3.0. Why is it my fault (the user of a package) that python developers make a mess of their versioning system? They made a new and incompatible language and gave the interpreter the exact same name and you think I am not qualified?????? Re: (Score:3) Python has a well-defined versioning scheme for interpreters which is documented in detail [python.org]. In particular, it requires that python2 is a symlink to some version of Python 2.x, and python3 is a symlink to some version of Python 3.x; authors of Python code should author their scripts correspondingly. If someone wrote a 3.x script that starts with #!/usr/bin/python, then they are the problem, and you should whack them with a stick until they fix that. Also note that this spec requires that python be the same as Re: (Score:3) So prior to 02-Mar-2011, people using say python 2.5.2 (released in 2008) were not being "informed" to write their code using /usr/bin/python2, and so all that old code is now "improper" according to a spec released *3 years* after the code was written. Any code written prior to the release of that PEP would be using #!/usr/bin/python, and the spec requires it to be a synonym for python2 for now, and at least for 2 more years (realistically, that change is probably going to be postponed for much longer). So someone using Python 2.5.2 in 2008, assuming they wrote the proper shebang, and assuming that the distro (which has no excuse for not following the PEP today) did things properly, will work fine. The problem that OP described was someone writing a script Re: (Score:3) Sounds like you did it wrong.
Why didn't you just rename the binaries? Re: (Score:2) Well, that and custom add-on code often blows up under 2.7 unless it's re-factored (or in some cases re-written), and nobody ever seems to want to do that... Re:OS versions (Score:5, Insightful) Just look at the time line: (2.4 in 2008, 2.5 in 2011, and 2.6 last October), Stop obsoleting code on a 3 year cycle and you wouldn't have so many stragglers. Programming languages need stability and backward compatibility. You can't rebuild the world each time someone decides to issue a point release. Handle it in the compiler. Re:OS versions (Score:4, Interesting) Which is why COBOL will probably out live Python. Stability. Re:OS versions (Score:4, Insightful) Exactly. I'm pretty sure COBOL programs I wrote almost 30 years ago would still compile today. That code would be about the same age as the guy mentioned in the summary (5 years out of university btw) who is whining about supporting 3-5 year old code. Boo-hoo. I'm pretty sure when the world ends the only thing left will be cockroaches and they will be using COBOL. Re: (Score:3) That too, alive and kicking. BTW the last two versions, '08 and '03, are nice as the compilers give support for F77 while adding in OO and parallel programming functionality. Re:OS versions (Score:5, Insightful) I get this feeling that the core developers of Python have lost focus of what the typical developer wants. I see the same thing happening in C++ with templates. Templates aren't bad its just that the bulk of C++ developers are far more interested in the cool new iterating system in C++ than all the wonky template features that are added hourly. Luckily with C++ there has been no threats to break backward compatibility. It is almost like these people are trying to impress some old CS professor instead of making their customers more productive. But where the python people are playing with fire is that if a company has a massive codebase that is getting creaky and needs a huge Python porting effort that would probably be combined with a huge architecture refactor then that company might think, "we don't want to be forced to do this in the future, so what other language has been more version stable?" Or you might have a new IT head who has been pushing for his favorite language and will take advantage of this to switch. Python is awesome but there are other fish in the sea; and as we all know even good languages can fade (see PERL) and no so good languages take their place (PHP). When upgrades break code (Score:5, Interesting) The problem with upgrades is the developers don't take nearly enough care to ensure compatibility with existing software. For instance, I moved a Python script from an older machine (redhat 9) to an Ubuntu 12 system with a later 2.x series Python, and the script immediately bombed out. Turned out that I'd used "as" as a variable name, and it had become a reserved word in the interim. Elsewhere, I was adding ints to floats or vice-versa; now that produces an error. Somewhere, the behavior of "global" changed. I had a procedure written with global VariableName up front so the procedure could see a global switch. I handed the script to someone else, and they had to *remove* that to make it work. I moved a large system depending on perl scripts (yes, I know, not Python, but the issues are identical) across the same machine pair and it was so broken under the new perl that it took me two months (and cost my client a great deal of money) to adequately debug and update it. 
From external module changes (use DBI) to how references to hashes were handled, it was one thing after another. What made me fume the entire time is that this stuff worked *perfectly* under redhat 9 with the earlier perl -- it really wasn't broken. Bloody perl was broken! Worse, I'd abandoned Perl for Python years ago, and my Perl skills were very rusty... that's tough when you're trying to work through tens of thousands of lines of sophisticated code. I truly regret every line of Perl I ever wrote, and cringe at the idea of having to revisit it for any reason at all, because Perl, whatever its true merits, is a language that I was destined to hate to my very $core; This kind of thing makes me extremely unwilling to upgrade; for that matter, it can send me hunting for the old version to install in new machines, although that can present its own set of problems and requires at least some of a different skill set than programming that really leans more into system administration, which I am not in the least enthused about. For some strange reason, enjoying research on auroras does not automatically carry with it a predilection for meddling with OS environments. This is all part of a larger gripe set I have where makers of languages and APIs break existing code in favor of new ideas. Here's my position: If you ship the language with X features, you have NO BUSINESS breaking ANY of those features. You have a new idea? Fine. Implement it as something completely new within the language or the API. If it's got new keywords, provide the programmer a switch to enable them -- words that were not reserved before may have crept into the user's namespace(s.) Never take a call out of an API. Never make an API call work differently. If you think you need to do that, then *I* think you need an entirely new API. If you even use the word "deprecated", I will burn you in effigy and spit on the ashes. Furthermore, if you do it right, you will NEVER need to say "end of life" because the most current version will still run the first code ever written for API or language 1.0; upgrades will actually be upgrades rather than incompatible replacements. Windows got this very nearly right for a long time, then with XP, they began to break everything from window metrics to system calls. By the time Windows 7 rolled around, apps that worked perfectly looked like trash and worked even worse, and I tossed my Windows machine in the can both for development and as a user. Apple's guilty of this too, even going so far as to obsolete their own applications and feature-sets. One really irritating example of this is having bought Aperture 1, 2, and 3 for OSX 10.6, I bought a new camera, a Canon EOS 6D. Works *fabulous*, pretty much the first DSLR I've ever been actually satisfied with. Both Lightroom and Aperture required an update to read the RAW images. Lightroom upgrade went smoothly. The Aperture upgrade? Told me that in order to install the updated RAW support, I'd have to upgrade the OS. But 10.7 and later are not highly compatible upgrades, I have to support OSX Re: (Score:3) That was probably going from Perl 4 to Perl 5. Going across a major release where many features have changed is going to cause problems with any language. The changes from Python 2.5 to 2.7 are likely to be much less pain. Re:When upgrades break code (Score:5, Insightful) Why should I accept any pain at all? How about if I just don't upgrade, and my stuff doesn't break, and I continue on making new things? 
Isn't that more sensible on every possible level? Re:When upgrades break code (Score:5, Insightful) Exactly. Refuse to accept new release that don't support backward compatibility of existing code. Every other sane programming language does this to the fullest extent possible. Seems Python likes to just cut the cord. Re: (Score:3) Re: (Score:3) True, but then why do we have this entire whiny slashdot article wondering why people support old Python versions? The clear implication is that it is somehow the programmers fault for not re-writing the universe each time python developers sneeze? It should be patently obvious that the fault lies with the Python Devs for failing to bring their users along, and that some people, most people, just about all people, have something better to do than rewrite systems every three years. But you STILL have a maintenan Re:When upgrades break code (Score:5, Insightful) That was probably going from Perl 4 to Perl 5. Going across a major release where many features have changed is going to cause problems with any language. The changes from Python 2.5 to 2.7 are likely to be much less pain. not necessarily, in my experience one of the bigger issues is incompatible changes made in CPAN code: plenty of things change and it's not like you can say 'install from CPAN of 3 years ago'. If you write straight perl (i.e. no external modules) it's unusual for things to break badly, but if like most people you use CPAN modules you're at the mercy of each individual CPAN package mantainer Re:When upgrades break code (Score:5, Insightful) Exactly that. Nobody cares if your new version is 10% faster, or has new feature X. If someone is using your language for something serious in the real world, and your upgrade BREAKS CODE, then all bets are off. Even if the time and effort (in the real world, money, essentially) is spent to "fix" the project to work with the new version, all prior testing is essentially worthless, and the developer can't "trust" the code is correct and bug free any more until they've had sufficient time and testing to be comfortable with it. This problem is endemic in open source unfortunately, and not just for developers - constant UI change, application deprecation/replacement, etc. Every time you change something significantly, there's a real cost to any person using it, even if your software is free. Even if they decide not to use it any more - they need to find a replacement and re-learn how to use the replacement. I'm not saying "never change anything". But more work needs to be put into shims, switches, etc. to enable backwards compatible behaviour if possible - even if it is not enabled by default. Re: (Score:3) this, and I am not sure how whomever decided the string/unicode changes in 3.x with no 2.x backwards compatibility couldn't figure out that it would be very unlikely that people would port all their code right away. This blog post I read a few days ago sums up some issues pretty well [pocoo.org] Re:When upgrades break code (Score:4, Insightful) Very nice post! If I may summarize: To paraphrase Robert Monroe: "Change is the only constant. Fast change is called revolution, slow change evolution." The inexperienced young want change because of some cool new feature X, Y, and Z. The experienced older want stability because they know the wisdom of "Don't fix what isn't broken!" Businesses typically want stability because why would they pay someone again to re-write, refind bugs, re-test, and re-fix bugs, when the old version worked before? 
The only reason to *really* change an API is because of security issues, aka how Microsoft "extended" legacy C APIs to 'safe' versions such as scanf vs scanf_s, etc. Re: (Score:3) But, furthermore, Python 3.0 violates this principle as well: If there was one way to do it in 2.x, that way should continue to be the only way to do it in 3.0. I think they should just admit it was a mistake and deprecate Python3. AFAIK many of its features have been backported to 2.7 anyway. Let's face it, if a major revision of a very popular language has been out for 5 years, and it's still not that widely used, there is something wrong with the new revision.
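Several comments above describe writing single-source code that runs on both 2.7 and 3.3+ (b""/u"" literals, uniform print). As a concrete illustration, here is a small sketch of that style (my example, not from the thread):

# Single-source Python 2.7 / 3.3+ module, a sketch of the style discussed above
from __future__ import print_function, unicode_literals

import sys

def greet(name):
    text = u"Hello, " + name      # u"" literals: valid on 2.7 and again on 3.3+
    data = b"\x48\x69"            # b"" literals: valid on both lines
    print(text, file=sys.stderr)  # function-call print works on 2.7
                                  # thanks to the __future__ import
    return text, data

if sys.version_info[0] >= 3:
    # explicit version checks handle the rare incompatible corner
    text_type = str
else:
    text_type = unicode  # noqa: F821 (only defined on Python 2)

greet("world")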
http://developers.slashdot.org/story/14/01/09/1940232/why-do-projects-continue-to-support-old-python-releases?sdsrc=popbyskidbtmprev
CC-MAIN-2015-27
refinedweb
7,836
72.66
29 September 2010 18:00 [Source: ICIS news] By Nigel Davis

That was the message from a senior management briefing this week by Shell Chemicals. It used to be fine to talk about the refinery/chemicals interface. That phrase is now an anathema to Shell. “As long as you still have an interface, you have a problem,” said Ben van Beurden, the company's executive vice president for chemicals. Shell Chemicals wants to capitalise on what it calls “advantaged” feeds from the refinery, both liquids and gases, but the company is under pressure in Europe and Shell is likely to stick with its “cracker plus one” business model, which means it generally only ventures as far as first-line ethylene derivatives. But the company appears not to be averse to creative thinking if further ethylene offtake into downstream units, such as polymer plants, is warranted. Becoming further integrated within the Shell system, however, is the name of the game. “If you want to be in upgrading hydrocarbons, adding a petrochemicals part to it does make a lot of strategic sense,” van Beurden said, referring to Shell’s view of chemicals. Continued stress and strain on the refining system is pointing up (chemicals) opportunities. “If you play the game very well, the whole envelope becomes more robust,” he said of Shell’s oil and gas-to-chemicals business strategy. Refining has become a more complex, and certainly less profitable, business. But chemicals at Shell can capitalise on low-value and other refinery streams and add value to the whole, he suggested. Shell is almost through in getting its overall portfolio in the right place, he said. The refinery divestment and closure programme affects chemicals in a relatively minor way – the loss of some capacities in

Shell’s strategy in petrochemicals is to become the “highest profitable hydrocarbon upgrader,” said van Beurden. “We have discovered a whole seam of feedstock advantage,” he added. Tapping into those seams takes time: Shell has reconfigured its crackers at

In Europe, it is closing a cracker in

The company’s cracker at Moerdijk, the Netherlands, has been given the capability to crack hydrowax, a heavy-refinery product that is difficult to transport and not always welcome. The shift towards advantaged feeds at the cracker could be significant by mid-decade. The strategy fits closely with Shell’s view of refining, which is becoming a more complex business in every way. Chemicals profitability could benefit, van Beurden suggests, if the company gets its right, alongside the big investments in new petrochemicals capacities in

Further down the line, Shell has a number of technology plays that could benefit chemicals. Its Pearl gas-to-liquids (GTL) venture in

Shell is more gas-oriented upstream. So, on the technology front, at least, it makes sense, in chemicals, to follow suit.
http://www.icis.com/Articles/2010/09/29/9397422/insight-looking-at-refinery-streams-for-chemicals.html
CC-MAIN-2013-20
refinedweb
471
50.57
mq_send() - Send a message to a queue

Synopsis:

    #include <mqueue.h>

    int mq_send( mqd_t mqdes,
                 const char * msg_ptr,
                 size_t msg_len,
                 unsigned int msg_prio );

Arguments:
- mqdes - The message-queue descriptor, returned by mq_open(), of the message queue that you want to send a message to.
- msg_ptr - A pointer to the message that you want to send.
- msg_len - The size of the message.
- msg_prio - The priority of the message, in the range from 0 to (MQ_PRIO_MAX-1).

Returns:
Zero on success; on failure, -1 is returned and errno is set.

Errors:
- EAGAIN - The O_NONBLOCK flag (in oflag of mq_open()) has been set, and the message queue is full.
- EBADF - The mqdes argument isn't a valid message-queue descriptor, or the queue wasn't opened for writing.
- EINTR - The call was interrupted by a signal.
- EINVAL - One of the following cases is true:
  - msg_len was negative
  - msg_prio was greater than (MQ_PRIO_MAX-1)
  - msg_prio was less than 0
- EMSGSIZE - The msg_len argument was greater than the msgsize associated with the specified queue.

Classification:

See also:
mq_close(), mq_open(), mq_receive(), mq_timedsend()
mq, mqueue in the Utilities Reference
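A minimal usage sketch (the queue name, attributes and priority below are illustrative, and error handling is abbreviated):

    #include <mqueue.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Create (or open) a queue for writing. */
        struct mq_attr attr;
        attr.mq_flags   = 0;
        attr.mq_maxmsg  = 8;    /* at most 8 queued messages */
        attr.mq_msgsize = 64;   /* each at most 64 bytes */

        mqd_t mqdes = mq_open("/example_queue", O_WRONLY | O_CREAT, 0660, &attr);
        if (mqdes == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        const char msg[] = "hello";

        /* Priority 3 is arbitrary; it must be in 0..(MQ_PRIO_MAX-1). */
        if (mq_send(mqdes, msg, sizeof(msg), 3) == -1)
            perror("mq_send");  /* e.g. EAGAIN if O_NONBLOCK is set and the queue is full */

        mq_close(mqdes);
        return 0;
    }

A reader would use the counterpart mq_receive() (see above) with a buffer of at least msgsize bytes.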
http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/m/mq_send.html
crawl-003
refinedweb
148
55.34
#ifndef _GBITMAP_H_
#define _GBITMAP_H_
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#if NEED_GNUG_PRAGMAS
# pragma interface
#endif

#include "GSmartPointer.h"
#ifndef NDEBUG
#include "GException.h"
#endif

#ifdef HAVE_NAMESPACES
namespace DJVU {
# ifdef NOT_DEFINED // Just to fool emacs c++ mode
}
#endif
#endif

class GRect;
class GMonitor;
class ByteStream;

/** @name GBitmap.h
    Files #"GBitmap.h"# and #"GBitmap.cpp"# implement class \Ref{GBitmap}. Instances of this class represent bilevel or gray-level images. The ``bottom left'' coordinate system is used consistently in the DjVu library. Line zero of a bitmap is the bottom line in the bitmap. Pixels are organized from left to right within each line. As suggested by its name, class #GBitmap# was initially a class for bilevel images only. It was extended to handle gray-level images when the need arose to render anti-aliased images. This class has been a misnomer since then.
    {\bf ToDo} --- Class #GBitmap# can internally represent bilevel images using a run-length encoded representation. Some algorithms may benefit from a direct access to this run information.
    @memo Generic support for bilevel and gray-level images.
    @author L\'eon Bottou <[email protected]> */
//@{

/** Bilevel and gray-level images. Instances of class #GBitmap# represent bilevel or gray-level images. Images are usually represented using one byte per pixel. Value zero represents a white pixel. A value equal to the number of gray levels minus one represents a black pixel. The number of gray levels is returned by the function \Ref{get_grays} and can be manipulated by the functions \Ref{set_grays} and \Ref{change_grays}. The border width can be increased using function \Ref{minborder}. The border pixels are initialized to zero and therefore represent white pixels. You should never write anything into border pixels because they are shared between images and between lines. */
class DJVUAPI GBitmap : public GPEnabled
{
protected:
  GBitmap(void);
  GBitmap(int nrows, int ncolumns, int border=0);
  GBitmap(const GBitmap &ref);
  GBitmap(const GBitmap &ref, int border);
  GBitmap(const GBitmap &ref, const GRect &rect, int border=0);
  GBitmap(ByteStream &ref, int border=0);
public:
  virtual ~GBitmap();
  void destroy(void);

  /** @name Construction. */
  //@{
  /** Constructs an empty GBitmap object. The returned GBitmap has zero rows and zero columns. Use function \Ref{init} to change the size of the image. */
  static GP<GBitmap> create(void) {return new GBitmap;}
  /** Constructs a GBitmap with #nrows# rows and #ncolumns# columns. All pixels are initialized to white. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. The number of gray levels is initially set to #2#. */
  static GP<GBitmap> create(const int nrows, const int ncolumns, const int border=0) {return new GBitmap(nrows,ncolumns, border); }
  /** Copy constructor. Constructs a GBitmap by replicating the size, the border and the contents of GBitmap #ref#. */
  static GP<GBitmap> create(const GBitmap &ref) {return new GBitmap(ref);}
  /** Constructs a GBitmap by copying the contents of GBitmap #ref#. Argument #border# specifies the width of the optional border. */
  static GP<GBitmap> create(const GBitmap &ref, const int border) { return new GBitmap(ref,border); }
  /** Constructs a GBitmap by copying a rectangular segment #rect# of GBitmap #ref#. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. */
  static GP<GBitmap> create(const GBitmap &ref, const GRect &rect, const int border=0) { return new GBitmap(ref,rect,border); }
  /** Constructs a GBitmap by reading PBM, PGM or RLE data from ByteStream #ref# into this GBitmap. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. See \Ref{PNM and RLE file formats} for more information. */
  static GP<GBitmap> create(ByteStream &ref, const int border=0) { return new GBitmap(ref,border); }
  //@}

  /** @name Initialization. */
  //@{
  /** Resets this GBitmap size to #nrows# rows and #ncolumns# columns and sets all pixels to white. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. The number of gray levels is initialized to #2#. */
  void init(int nrows, int ncolumns, int border=0);
  /** Initializes this GBitmap with the contents of the GBitmap #ref#. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. */
  void init(const GBitmap &ref, int border=0);
  /** Initializes this GBitmap with a rectangular segment #rect# of GBitmap #ref#. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. */
  void init(const GBitmap &ref, const GRect &rect, int border=0);
  /** Reads PBM, PGM or RLE data from ByteStream #ref# into this GBitmap. The previous content of the GBitmap object is lost. The optional argument #border# specifies the size of the optional border of white pixels surrounding the image. See \Ref{PNM and RLE file formats} for more information. */
  void init(ByteStream &ref, int border=0);
  /** Assignment operator. Initializes this GBitmap by copying the size, the border and the contents of GBitmap #ref#. */
  GBitmap& operator=(const GBitmap &ref);
  /** Initializes all the GBitmap pixels to value #value#. */
  void fill(unsigned char value);
  //@}

  /** @name Accessing the pixels. */
  //@{
  /** Returns the number of rows (the image height). */
  unsigned int rows() const;
  /** Returns the number of columns (the image width). */
  unsigned int columns() const;
  /** Returns a constant pointer to the first byte of row #row#. This pointer can be used as an array to read the row elements. */
  const unsigned char *operator[] (int row) const;
  /** Returns a pointer to the first byte of row #row#. This pointer can be used as an array to read or write the row elements. */
  unsigned char *operator[] (int row);
  /** Returns the size of a row in memory (in pixels). This number is equal to the difference between pointers to pixels located in the same column in consecutive rows. This difference can be larger than the number of columns in the image. */
  unsigned int rowsize() const;
  /** Makes sure that the border is at least #minimum# pixels large. This function does nothing if the border width is already larger than #minimum#. Otherwise it reorganizes the data in order to provide a border of #minimum# pixels. */
  void minborder(int minimum);
  //@}

  /** @name Managing gray levels. */
  //@{
  /** Returns the number of gray levels. Value #2# denotes a bilevel image. */
  int get_grays() const;
  /** Sets the number of gray levels without changing the pixels. Argument #grays# must be in range #2# to #256#. */
  void set_grays(int grays);
  /** Changes the number of gray levels. The argument #grays# must be in the range #2# to #256#. All the pixel values are then rescaled and clipped in range #0# to #grays-1#. */
  void change_grays(int grays);
  /** Binarizes a gray level image using a threshold. The number of gray levels is reduced to #2# as in a bilevel image. All pixels whose value was strictly greater than #threshold# are set to black. All other pixels are set to white. */
  void binarize_grays(int threshold=0);
  //@}

  /** @name Optimizing the memory usage. The amount of memory used by bilevel images can be reduced using function \Ref{compress}, which encodes the image using a run-length encoding scheme. The bracket operator decompresses the image on demand. A few highly optimized functions (e.g. \Ref{blit}) can use a run-length encoded bitmap without decompressing it. There are unfortunate locking issues associated with this capability (c.f. \Ref{share} and \Ref{monitor}). */
  //@{
  /** Reduces the memory required for a bilevel image by using a run-length encoded representation. Functions that need to access the pixel array will decompress the image on demand. */
  void compress();
  /** Decodes run-length encoded bitmaps and recreates the pixel array. This function is usually called by #operator[]# when needed. */
  void uncompress();
  /** Returns the number of bytes allocated for this image. */
  unsigned int get_memory_usage() const;
  /** Returns a possibly null pointer to a \Ref{GMonitor} for this bitmap. You should use this monitor to ensure that the data representation of the bitmap will not change while you are using it. We suggest using class \Ref{GMonitorLock} which properly handles null monitor pointers. */
  GMonitor *monitor() const;
  /** Associates a \Ref{GMonitor} with this bitmap. This function should be called on all bitmaps susceptible of being simultaneously used by several threads. It will make sure that function \Ref{monitor} returns a pointer to a suitable monitor for this bitmap. */
  void share();
  //@}

  /** @name Accessing RLE data. The next functions are useful for processing bilevel images encoded using the run length encoding scheme. These functions always return zero if the bitmap is not RLE encoded. Function \Ref{compress} must be used to ensure that the bitmap is RLE encoded. */
  //@{
  /** Gets the pixels for line #rowno#. One line of pixels is stored as #unsigned char# values into array #bits#. Each pixel is either 1 or 0. The array must be large enough to hold the whole line. The number of pixels is returned. */
  int rle_get_bits(int rowno, unsigned char *bits) const;
  /** Gets the bitmap line rle data passed. One line of pixels is stored with 8 bits per #unsigned char# in an array. The array must be large enough to hold the whole line. */
  static void rle_get_bitmap(const int ncolumns,const unsigned char *&runs, unsigned char *bitmap, const bool invert );
  /** Gets the lengths of all runs in line #rowno#. The array #rlens# must be large enough to accommodate #w+2# integers where #w# is the number of columns in the image. These integers represent the lengths of consecutive runs of alternatively white or black pixels. Lengths can be zero in order to allow for lines starting with black pixels. This function returns the total number of runs in the line. */
  int rle_get_runs(int rowno, int *rlens) const;
  /** Gets the smallest rectangle enclosing black pixels. Rectangle rect gives the coordinates of the smallest rectangle containing all black pixels. Returns the number of black pixels. */
  int rle_get_rect(GRect &rect) const;
  //@}

  /** @name Additive Blit. The blit functions are designed to efficiently construct an anti-aliased image by copying smaller images at predefined locations. The image of a page, for instance, is composed by copying the images of characters at predefined locations. These functions are fairly optimized. They can directly use compressed GBitmaps (see \Ref{compress}). We consider in this section that each GBitmap comes with a coordinate system defined as follows. Position (#0#,#0#) corresponds to the bottom left corner of the bottom left pixel. Position (#1#,#1#) corresponds to the top right corner of the bottom left pixel, which is also the bottom left corner of the second pixel of the second row. Position (#w#,#h#), where #w# and #h# denote the size of the GBitmap, corresponds to the top right corner of the top right pixel. */
  //@{
  /** Performs an additive blit of the GBitmap #bm#. The GBitmap #bm# is first positioned above the current GBitmap in such a way that position (#u#,#v#) in GBitmap #bm# corresponds to position (#u#+#x#,#v#+#y#) in the current GBitmap. The value of each pixel in GBitmap #bm# is then added to the value of the corresponding pixel in the current GBitmap.
      {\bf Example}: Assume for instance that the current GBitmap is initially white (all pixels have value zero). This operation copies the pixel values of GBitmap #bm# at position (#x#,#y#) into the current GBitmap. Note that function #blit# does not change the number of gray levels in the current GBitmap. You may have to call \Ref{set_grays} to specify how the pixel values should be interpreted. */
  void blit(const GBitmap *bm, int x, int y);
  /** Performs an additive blit of the GBitmap #bm# with anti-aliasing. The GBitmap #bm# is first positioned above the current GBitmap in such a way that position (#u#,#v#) in GBitmap #bm# corresponds to position (#u#+#x#/#subsample#,#v#+#y#/#subsample#) in the current GBitmap. This mapping results in a contraction of GBitmap #bm# by a factor #subsample#. Each pixel of the current GBitmap can be covered by a maximum of #subsample^2# pixels of GBitmap #bm#. The value of each pixel in GBitmap #bm# is then added to the value of the corresponding pixel in the current GBitmap.
      {\bf Example}: Assume for instance that the current GBitmap is initially white (all pixels have value zero). Each pixel of the current GBitmap then contains the sum of the gray levels of the corresponding pixels in GBitmap #bm#. There are up to #subsample*subsample# such pixels. If for instance GBitmap #bm# is a bilevel image (pixels can be #0# or #1#), the pixels of the current GBitmap can take values in range #0# to #subsample*subsample#. Note that function #blit# does not change the number of gray levels in the current GBitmap. You must call \Ref{set_grays} to indicate that there are #subsample^2+1# gray levels. Since there are at most 256 gray levels, this also means that #subsample# should never be greater than #15#.
      {\bf Remark}: Arguments #x# and #y# do not represent a position in the coordinate system of the current GBitmap. According to the above discussion, the position is (#x/subsample#,#y/subsample#). In other words, you can position the blit with a sub-pixel resolution. The resulting anti-aliasing changes are paramount to the image quality. */
  void blit(const GBitmap *shape, int x, int y, int subsample);
  //@}

  /** @name Saving images. The following functions write PBM, PGM and RLE files. PBM and PGM are well known formats for bilevel and gray-level images. The RLE is a simple run-length encoding scheme for bilevel images. These files can be read using the ByteStream based constructor or initialization function. See \Ref{PNM and RLE file formats} for more information. */
  //@{
  /** Saves the image into ByteStream #bs# using the PBM format. Argument #raw# selects the ``Raw PBM'' (1) or the ``Ascii PBM'' (0) format. The image is saved as a bilevel image. All non zero pixels are considered black pixels. See section \Ref{PNM and RLE file formats}. */
  void save_pbm(ByteStream &bs, int raw=1);
  /** Saves the image into ByteStream #bs# using the PGM format. Argument #raw# selects the ``Raw PGM'' (1) or the ``Ascii PGM'' (0) format. The image is saved as a gray level image. See section \Ref{PNM and RLE file formats}. */
  void save_pgm(ByteStream &bs, int raw=1);
  /** Saves the image into ByteStream #bs# using the RLE file format. The image is saved as a bilevel image. All non zero pixels are considered black pixels. See section \Ref{PNM and RLE file formats}. */
  void save_rle(ByteStream &bs);
  //@}

  /** @name Stealing or borrowing the memory buffer (advanced). */
  //@{
  /** Steals the memory buffer of a GBitmap. This function returns the address of the memory buffer allocated by this GBitmap object. The offset of the first pixel in the bottom line is written into variable #offset#. Other lines can be accessed using pointer arithmetic (see \Ref{rowsize}). The GBitmap object no longer ``owns'' the buffer: you must explicitly de-allocate the buffer using #operator delete []#. This de-allocation should take place after the destruction or the re-initialization of the GBitmap object. This function will return a null pointer if the GBitmap object does not ``own'' the buffer in the first place. */
  unsigned char *take_data(size_t &offset);
  /** Initializes this GBitmap by borrowing a memory segment. The GBitmap then directly addresses the memory buffer #data# provided by the user. This buffer must be large enough to hold #w*h# bytes representing each one pixel. The GBitmap object does not ``own'' the buffer: you must explicitly de-allocate the buffer using #operator delete []#. This de-allocation should take place after the destruction or the re-initialization of the GBitmap object. */
  inline void borrow_data(unsigned char &data, int w, int h);
  /** Same as borrow_data, except GBitmap will call #delete[]#. */
  void donate_data(unsigned char *data, int w, int h);
  /** Return a pointer to the rle data. */
  const unsigned char *get_rle(unsigned int &rle_length);
  /** Initializes this GBitmap by setting the size to #h# rows and #w# columns, and directly addressing the memory buffer #rledata# provided by the user. This buffer contains #rledatalen# bytes representing the bitmap in run length encoded form. The GBitmap object then ``owns'' the buffer (unlike #borrow_data#, but like #donate_data#) and will deallocate this buffer when appropriate: you should not deallocate this buffer yourself. The encoding of buffer #rledata# is similar to the data segment of the RLE file format (without the header) documented in \Ref{PNM and RLE file formats}. */
  void donate_rle(unsigned char *rledata, unsigned int rledatalen, int w, int h);
  /** Static function for parsing run data. This function returns one run length encoded at position #data# and increments the pointer #data# accordingly. */
  static inline int read_run(const unsigned char *&data);
  static inline int read_run(unsigned char *&data);
  /** Static function for generating run data. This function encodes run length #count# at position #data# and increments the pointer accordingly. The pointer must initially point to a large enough data buffer. */
  static inline void append_run(unsigned char *&data, int count);
  /** Rotates bitmap by 90, 180 or 270 degrees anticlockwise and returns a new pixmap, input bitmap is not changed. count can be 1, 2, or 3 for 90, 180, 270 degree rotation. It returns the same bitmap if not rotated. The input bitmap will be uncompressed for rotation. */
  GP<GBitmap> rotate(int count=0);
  //@}

  // These are constants, but we use enum because that works on older compilers.
  enum {MAXRUNSIZE=0x3fff};
  enum {RUNOVERFLOWVALUE=0xc0};
  enum {RUNMSBMASK=0x3f};
  enum {RUNLSBMASK=0xff};

protected:
  // bitmap components
  unsigned short nrows;
  unsigned short ncolumns;
  unsigned short border;
  unsigned short bytes_per_row;
  unsigned short grays;
  unsigned char *bytes;
  unsigned char *bytes_data;
  GPBuffer<unsigned char> gbytes_data;
  unsigned char *rle;
  GPBuffer<unsigned char> grle;
  unsigned char **rlerows;
  GPBuffer<unsigned char *> grlerows;
  unsigned int rlelength;
private:
  GMonitor *monitorptr;
public:
  class ZeroBuffer;
  friend class ZeroBuffer;
  GP<ZeroBuffer> gzerobuffer;
private:
  static int zerosize;
  static unsigned char *zerobuffer;
  static GP<ZeroBuffer> zeroes(int ncolumns);
  static unsigned int read_integer(char &lookahead, ByteStream &ref);
  static void euclidian_ratio(int a, int b, int &q, int &r);
  int encode(unsigned char *&pruns,GPBuffer<unsigned char> &gpruns) const;
  void decode(unsigned char *runs);
  void read_pbm_text(ByteStream &ref);
  void read_pgm_text(ByteStream &ref, int maxval);
  void read_pbm_raw(ByteStream &ref);
  void read_pgm_raw(ByteStream &ref, int maxval);
  void read_rle_raw(ByteStream &ref);
  static void append_long_run(unsigned char *&data, int count);
  static void append_line(unsigned char *&data,const unsigned char *row, const int rowlen,bool invert=false);
  static void makerows(int,const int, unsigned char *, unsigned char *[]);
  friend class DjVu_Stream;
  friend class DjVu_PixImage;
public:
#ifndef NDEBUG
  void check_border() const;
#endif
};

/** @name PNM and RLE file formats
    {\bf PNM} --- There are actually three PNM file formats: PBM for bilevel images, PGM for gray level images, and PPM for color images. These formats are widely used by popular image manipulation packages such as NetPBM \URL{} or ImageMagick \URL{}.
    {\bf RLE} --- The binary RLE file format is a simple run-length encoding scheme for storing bilevel images. Encoding or decoding an RLE encoded file is extremely simple. Yet RLE encoded files are usually much smaller than the corresponding PBM encoded files. RLE files always begin with a header line composed of:\\
    - the two characters #"R4"#,\\
    - one or more blank characters,\\
    - the number of columns, encoded using characters #"0"# to #"9"#,\\
    - one or more blank characters,\\
    - the number of lines, encoded using characters #"0"# to #"9"#,\\
    - exactly one blank character (usually a line-feed character).
    The rest of the file encodes a sequence of numbers representing the lengths of alternating runs of white and black pixels. Lines are encoded starting with the top line and progressing towards the bottom line.
    @memo Simple image file formats. */
//@}

// ---------------- IMPLEMENTATION

inline unsigned int
GBitmap::rows() const
{
  return nrows;
}

inline unsigned int
GBitmap::columns() const
{
  return ncolumns;
}

inline unsigned int
GBitmap::rowsize() const
{
  return bytes_per_row;
}

inline int
GBitmap::get_grays() const
{
  return grays;
}

inline unsigned char *
GBitmap::operator[](int row)
{
  if (!bytes)
    uncompress();
  if (row<0 || row>=nrows) {
#ifndef NDEBUG
    if (zerosize < bytes_per_row + border)
      G_THROW( ERR_MSG("GBitmap.zero_small") );
#endif
    return zerobuffer + border;
  }
  return &bytes[row * bytes_per_row + border];
}

inline const unsigned char *
GBitmap::operator[](int row) const
{
  if (!bytes)
    ((GBitmap*)this)->uncompress();
  if (row<0 || row>=nrows) {
#ifndef NDEBUG
    if (zerosize < bytes_per_row + border)
      G_THROW( ERR_MSG("GBitmap.zero_small") );
#endif
    return zerobuffer + border;
  }
  return &bytes[row * bytes_per_row + border];
}

inline GBitmap&
GBitmap::operator=(const GBitmap &ref)
{
  init(ref, ref.border);
  return *this;
}

inline GMonitor *
GBitmap::monitor() const
{
  return monitorptr;
}

inline void
GBitmap::euclidian_ratio(int a, int b, int &q, int &r)
{
  q = a / b;
  r = a - b*q;
  if (r < 0) {
    q -= 1;
    r += b;
  }
}

inline int
GBitmap::read_run(unsigned char *&data)
{
  register int z=*data++;
  return (z>=RUNOVERFLOWVALUE)? ((z&~RUNOVERFLOWVALUE)<<8)|(*data++):z;
}

inline int
GBitmap::read_run(const unsigned char *&data)
{
  register int z=*data++;
  return (z>=RUNOVERFLOWVALUE)? ((z&~RUNOVERFLOWVALUE)<<8)|(*data++):z;
}

inline void
GBitmap::append_run(unsigned char *&data, int count)
{
  if (count < RUNOVERFLOWVALUE) {
    data[0] = count;
    data += 1;
  }
  else if (count <= MAXRUNSIZE) {
    data[0] = (count>>8) + GBitmap::RUNOVERFLOWVALUE;
    data[1] = (count & 0xff);
    data += 2;
  }
  else {
    append_long_run(data, count);
  }
}

inline void
GBitmap::borrow_data(unsigned char &data,int w,int h)
{
  donate_data(&data,w,h);
  bytes_data=0;
}

// ---------------- THE END
#ifdef HAVE_NAMESPACES
}
# ifndef NOT_USING_DJVU_NAMESPACE
using namespace DJVU;
# endif
#endif
#endif
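Based on the interfaces documented in this header, a hypothetical usage sketch might look like the following (the sizes, pixel values and loop are illustrative only; consult the DjVuLibre sources for authoritative examples):

    #include "GBitmap.h"

    void example()
    {
        // A 100-row by 200-column gray-level image with a 4-pixel border.
        GP<GBitmap> gbmp = GBitmap::create(100, 200, 4);
        GBitmap &bmp = *gbmp;           // GP<> is DjVuLibre's smart pointer

        bmp.set_grays(4);               // pixel values 0..3; 3 means black
        unsigned char *row = bmp[50];   // line 50, counted from the bottom
        for (unsigned int x = 0; x < bmp.columns(); x++)
            row[x] = 3;                 // paint one black line

        bmp.compress();                 // switch to the RLE representation;
                                        // operator[] re-expands on demand
    }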
http://djvulibre.sourcearchive.com/documentation/3.5.24-7/GBitmap_8h_source.html
CC-MAIN-2017-26
refinedweb
3,479
56.35
Topic closed. Posted 10 years ago. No replies.

Here's some info on pick3 states:

CA midday has not had a double in 16 draws. Furthermore, these 3 sums, combined, are over 9 times due: sums 7, 22 and 23. (17 boxed numbers: 016, 025, 034, 124, 589, 679, 689, 007, 115, 223, 133, 778, 779, 688, 788, 499, 599) Makes that 007 (box) look very tempting to play.

NY midday has not had a double in 13 draws. Here are the longest out doubles: 009, 668, 599, 133, 113, 228, 449, 577, 244, 088, 556, 588, 677, 122, 225, 266, 558, 004, 277. Mind that none of these numbers, straight, have fallen for over a thousand draws: 111, 113, 131, 133, 311, 313, 331, 333. But somehow I like the 569 (box) very much there...

DC midday: 151, 154, 451 (straight)

Kansas eve: 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, with special attention to a 2-digit return (2 and 6) and perhaps a 3-digit return: 261, and even a 3-vtrac return: 266. (all straights)

Florida eve: lasas3 might be lining up for that 20X frontpair to return: 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 (straight)

MO eve: long time no see: 065, 067, 069, 085, 087, 089, 265, 267, 269, 285, 287, 289, 465, 467, 469, 485, 487, 489 (straight)

Good luck,
Ricky

lasas3
An onion a day keeps everyone away.
https://www.lotterypost.com/thread/158789
CC-MAIN-2017-26
refinedweb
258
80.24
Single-Image Stereograms Seeing (double) is believing Dennis Cronin Dennis writes drivers for Central Data's scsiTerminal Servers. He can be contacted at [email protected]. Three-dimensional illusions are showing up in everything from magazine advertisements to the comic pages of daily newspapers. Although appearing to be nothing more than a random field of dots or wavy patterns, striking 3-D images emerge when you "correctly" view the designs. Once you learn to get a fix on an image (and almost everyone can), you can look around the virtual 3-D image just like looking out a window. Figure 1 is a typical stereogram in which the word "SONY" appears. If you haven't been able to pick the images out, understanding the concept behind them may help you experience the illusion. In this article, I'll discuss how the illusion works and the origins of the technique. I'll also examine the basic algorithm for generating the images and present a sample program (available electronically; see "Availability," page 3) that lets you display 3-D images on your PC screen. You'll then be able to quickly design and generate your own custom 3-D illusions using a standard PC paint program. A 3-D Backgrounder The terms "single-image stereogram" and "autostereogram" refer to a 3-D illusion composed of only one image and requiring no special viewing apparatus. Other types of stereograms use two small, side-by-side images or require special glasses or other optics for viewing. The most basic single-image stereogram is the single-image, random-dot stereogram (SIRDS), which looks like a field of random dots with no apparent texture or pattern. In its simplest form, the image is composed only of black and white dots, yet a vivid 3-D image is clearly visible when viewed correctly. Commercial 3-D illusion posters take SIRDS a step further, replacing the TV-not-tuned-in dot field with a more visually appealing texture or repeated pattern. Nevertheless, the principle behind the illusion is the same. The current crop of 3-D illusions has its roots in basic vision research. Bela Julesz is generally credited with being the first to use computer-generated, random-dot images to create a sense of depth. In his early-1960s depth-perception studies, Julesz used pairs of random-dot images to demonstrate that a sense of depth could be achieved with no other visual cues. Christopher Tyler and Maureen Clark, in turn, are generally credited for combining two images into a single, random-dot image circa 1990, creating the forerunner of today's gift-shop rage. Since then, numerous companies and individuals have advanced the art with clever posters, books, and online images of autostereograms. The newsgroup alt.3d, for instance, carries a steady discussion of SIRDS-related issues, and the FTP site katz.anu.edu.au is probably the most active central clearing house of autostereograms, information, and programs (see the directory /pub/stereograms). Making a Point To understand how single-image stereograms work, I'll first examine the most fundamental case of how you make a single dot appear at some point out in virtual 3-D space. Assume that you want to make point A appear somewhere off in the distance beyond the plane of the paper (or screen). Imagine for a minute that the image is transparent; your eyes will have to converge (or triangulate) to view point A off beyond the plane of the image. Note the points where the rays from each eye intersect the plane of the image; see Figure 2. 
By placing a pair of dots at exactly those locations, you can imply the first point in the 3-D landscape. It's not easy to get much feeling of depth from setting up just one virtual point with two dots. The brain has no additional cues to help it interpret those two lonely dots as a magic point in deep space. The effect doesn't kick in until you start to build a larger set of information for your brain to cue from. Now let's see what happens if you want to make a dot appear somewhere further away than point A. Referring again to Figure 2, notice which rays converge at point B and where they cross the image plane. They are slightly farther apart than the two points identified for point A. With some basic geometry, you can formulate the distance of that virtual point as a function of dot separation; see Example 1. Figure 3 shows the basic convergence diagram again, but with the parameters in Example 1 indicated. To give a convincing illusion of depth, you have to build a complete system of dots that map out a 3-D scene. In doing so, you will be able to use the formula in Example 1 to map out a system of dot pairs such that a complete 3-D scene will be visible to the unaided eye. Getting the Effect As you stare at a stereogram, you shift the convergence of your eyes and let your focus wander. When you triangulate on a normal object, your brain tends to select a focal length that will closely match the distance implied by your eyes' convergence. To see a stereogram, you need to break focal length away from triangulation. When you find the exact point of triangulation that makes the dot pairs overlap, a portion of the 3-D image fuses. When your brain stumbles on the right triangulation and starts to note this image appearing, it will attempt to adjust focus to cause the image to solidify. For some people, this separation of triangulation and focus comes quite easily; others have trouble with it. When the image finally locks in for you, it's astounding how naturally the brain adjusts its vision machinery to this new set of rules. With a little practice, you can effortlessly maintain a good lock on the image as you look around in it. The effect for most people is exhilarating--some of the fun comes from just seeing the image appear, and some of it comes from the strange, unnatural feeling of having your eyes operating in a way they're not used to. Making the Scene Rendering a scene is a three-step process: - Develop a 2-D depth map of the scene to be displayed. - Process the depth map and build a map of dot-pair constraints. - Assign colors such that the constraints are met. Depth maps can also be generated which are not based on any real 3-D scene. I use a paint program to generate a depth map using different colors to represent different depths. The test program in Listing One (page 92) generates this type of color depth map using a simple mathematical formula. In Step #2 of the rendering process, you develop a constraint map that describes all the dot pairings necessary to create the final image. You don't describe what color the dot pairs have to be, just which ones have to match which other ones. A major simplifying assumption is that humans keep their heads oriented vertically with respect to the image. Thus, for every point in virtual 3-D space, you need define only two dots along the horizontal axis to imply that virtual point. In fact, if you tilt your head slightly when viewing a commercial stereogram, you will quickly lose the image. 
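Rearranging the Example 1 relationship gives the dot separation needed to place a point at a desired virtual distance; a small sketch (not from the article's source, with variable names mirroring the formula) makes the geometry concrete:

    /* Sketch: dot separation for a desired virtual distance, obtained
       by inverting Example 1, virt = (view * eyes) / (eyes - sep).
       eyes = interocular distance, view = eye-to-image distance,
       virt = desired distance of the virtual point, all in the same
       units (say, pixels at the output device's resolution). */
    double separation(double eyes, double view, double virt)
    {
        return eyes * (1.0 - view / virt);
    }

As virt grows toward infinity the separation approaches the interocular distance, and points just beyond the image plane call for much smaller separations. Note that this, like everything else in the rendering process, leans on the simplifying assumption that the viewer's head stays vertical with respect to the image.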
This assumption allows you to break the problem into a single case of rendering one horizontal line of the scene at a time. When you have devised an algorithm for rendering a single horizontal line, you are ready to generate the constraint map for the entire scene. The traditional algorithm for constraint mapping is quite simple. An adaptation of it (included in the sample program provided electronically) looks like Example 2, where base_period is the number of pixels between the dot pairs with the greatest separation. Picking this value correctly is critical to the success of the illusion. Usually, base_period pixels should map to 1.5-2 inches on the output device. cmap is the array in which you build the constraint map. The get_depth function returns a small integer value representing the z-coordinate (or depth), with larger values for closer virtual points. An index value is just an ID used to identify points which must be the same color. This algorithm simply scans a horizontal line, "looking back" and copying a previous index value. For closer virtual points, it looks back a smaller number of points; for maximum depth (get_depth returns 0), it looks back the full base_period number of points. While this algorithm does a nice job, several limitations make it unusable for more-serious stereogram work. A right-shift distortion becomes apparent as the depth variations increase and, more fatally, it can develop rather distracting artifacts or echoes around sudden jumps in depth. In Step #3, you pick colors for all the points in the scene adhering to the constraints developed in Step #2. For black-and-white stereograms, picking colors can be as simple as looking at each possible index value in the constraint map and randomly assigning it to be either black or white. For color stereograms, you randomly assign a color value from an available palette. Note that constraint-index values carry no meaning outside an individual horizontal line. A given value can be assigned one color in one line and another in the next, as long as consistency is maintained within each individual line. As you assign a color to each constraint index in a line, you scan the line and replace all occurrences of the index with the selected final-output color value. When you have finished processing all the lines this way, you've created the classic SIRDS. While a SIRDS certainly provides the 3-D experience, the random nature of the color-assignment process can lend a drab sameness to the final product. You can improve the aesthetics of the image by embellishing Step #3. Just Stare, It's There There are techniques to help spot the image in these illusions. The most well-known is to put the image behind glass and look for your reflection in the glass. Since the sample program displays directly to your computer screen, all you have to do is orient your screen and/or adjust the ambient lighting until you can see your reflection in the screen. This technique helps you achieve a fixed triangulation beyond the plane of the image. With a little practice, your brain will spot the emerging lines and acquire a focal lock on this image. Note that this technique will not work with all stereograms--it is suited for stereograms where the virtual image is rendered to appear at approximately twice the viewing distance. These stereograms use pattern separations mostly in the 1.25- to 1.5-inch range. 
It is also possible to make stereograms that use wider separations, requiring you to stare farther off into the distance to get the proper triangulation. Although these can be more difficult for novice viewers, I find them somewhat more breathtaking, possibly because the vision machinery is operating even farther out of its normal parameters using close focal lengths and very distant triangulations. Some stereograms provide a pair of registration dots immediately above or below the main image. By staring at these dots and adjusting your triangulation until you see exactly three dots, you can set your eyes to the depth for which the image was designed. Another technique for viewing images on paper is to pick an object in the distance and focus on it, then slowly interpose the stereogram, trying not to move your eyes. This is helpful in acquiring the image in stereograms designed with more-distant triangulations. If you don't find the image right away, keep trying. Almost anyone with normal or reasonably corrected binocular vision can eventually perceive the image. I've seen people spot the image within 30 seconds, and I've seen them struggle for hours. It generally gets easier with practice. Going into Depth In the SIRDS algorithm, the right-hand dot of the dot pair is always in fixed relation to the point in the depth map. This means that at closer virtual distances, the center of the dot pair is skewed to the right of the actual point in the depth map. You can easily correct this by calculating the necessary separation as before, then splitting it equally and symmetrically about the point in the depth map. It is important to clean up artifacts (or echoes), false images that result from unplanned repetitions in the pattern, or sudden depth changes. The paper "Displaying 3D Images: Algorithms for Single Image Random Dot Stereograms," by Thimbleby, Inglis, and Witten (available via FTP from katz.anu.edu.au in /pub/stereograms/papers) covers this artifacting phenomenon in some detail. Thimbleby et al. propose a method for removing artifacts based on hidden surface removal (HSR); however, this method removes portions of the image necessary for a solid illusion. This results in blurry areas to the sides of sudden depth transitions. HSR assumes that if a point in the scene is hidden from one eye by a foreground object, then constraints need not be developed for a dot pair describing that partially hidden point. While there is some geometric basis for this approach, it doesn't benefit the overall effect of the illusion. In real life, when a point is hidden from one eye, the other eye can still focus on it solo, providing some information about it. Illusions benefit from providing information about this partially hidden point to both eyes, even though this defies real-life physics. Despite its faults, HSR is on the right track: The solution to artifacts is to deconstrain points immediately to either side of the edges of foreground objects. You deconstrain only the number of points necessary to control unwanted repetitions of the pattern. This lets you reduce the number of points on which you sacrifice constraints, yielding a more-solid image overall while still removing artifacts. The program 3D.C (available electronically in both source and executable form) allows the artifact-removal parameter to be varied. At its maximum value of 10, it removes more constraints than it really needs, which leads to blurring. 
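One crude, hypothetical sketch of this deconstraining idea (the actual symmetrical_constraint_map routine in 3D.C is more selective about which constraints it sacrifices; this relies on the same globals as Example 2):

    /* Crude sketch: after building cmap[] for line y, hand out fresh
       indices around sudden depth edges so that unplanned repetitions
       of the pattern cannot form there.  'limit' plays the role of the
       program's artifact-removal parameter. */
    void deconstrain_edges(int y, int limit)
    {
        int x, i;

        for (x = 1; x < MAX_X; x++) {
            if (abs(get_depth(x, y) - get_depth(x - 1, y)) > 1) {
                for (i = -limit; i <= limit; i++) {
                    if (x + i >= 0 && x + i < MAX_X)
                        cmap[x + i] = index++;  /* fresh, unconstrained index */
                }
            }
        }
    }

The larger the limit, the more constraints are given up around each depth edge.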
At its nominal value of 7, the program removes artifacts from all images I've tested, causing only slight, mostly unnoticeable blurring around foreground objects. (I haven't tried to prove that this technique can fully remove artifacts--I'll leave that to commercial stereogram developers.) The routine symmetrical_constraint_map in 3D.C gives details about the artifact-removal process. Many commercial stereograms have scenes that appear to have smooth, contoured, 3-D surfaces. In other stereograms, including those generated to the screen by 3D.C, the levels are discrete and distinguishable. This is not a fault of the rendering algorithm, but a product of the relatively low resolution of the output device. A 53-dpi EGA/VGA screen can't provide the smooth depth changes because the pixel width is too coarse to present very fine changes in spacing. Colors, Textures, and Sprites Coloring the final image is an art. At this stage, you have a constraint map, and as long as you assign colors based on the constraints, you can do just about anything. A simple method for improving the look of the stereogram is to map a texture onto the constraint map. An interesting texture is more appealing than a spray of random dots, and it may make it easier to lock onto the image from a distance, since the elements of the texture are coarser than little random dots. See the color_texture routine in 3D.C for my approach to developing textures on top of the constraints. Possibly the most ingenious method of coloring a stereogram is to map a repeated smaller image onto the constraint maps. This yields an attractive wallpaper effect when viewed normally; upon achieving proper focus, the 3-D image jumps out just as crisply as from any random dot or textured image. The scene itself then looks as if its been wallpapered. I've borrowed the term "sprite" from computer-game animation to describe the little image to be repeated. The sprite-mapping process should start near the center of the image and proceed outward. Some distortion of the sprite is inherent to the process, and starting at the center makes its mangling more symmetrical. Certain subjects may warrant starting the sprite mapping at a particular part of the screen so that the main subject matter gets mapped with relatively intact sprites. Clever choice of material for both the sprite and the 3-D scene can conceal the sprite distortions. Remember that this sprite-mapping technique is as much art as science and may require extensive fiddling to generate a pleasing final product. One additional trick 3D.C provides to conceal some of the side effects of the sprite distortion is the ability to sweep the sprite image from side to side in a gentle, wavy motion. This helps disguise fragmentation of the sprite image and can add some interesting effects to the sprite itself. About the Example Program The 3D.C example stereogram program showcases some of the popular stereogram techniques, displaying the images directly to the screen of your computer monitor. It was designed for use by the largest number of PC users: It runs in DOS mode, requires no extended memory, needs only 640x480 EGA/VGA graphics mode, and is built with a standard, 16-bit C compiler. To easily and quickly generate a custom 3-D scene, you use a standard, PCX-format file as a 3-D depth map. You can use any PC paint program that can generate a 16-color PCX image to edit a scene, with colors 0-15 being mapped to 16 depth levels. 
While it would have been nice to support more levels of depth, higher resolutions, and larger image sizes, PC memory limitations and compiler issues constrained the program design. However, you should have no trouble expanding upon the basic functionality. The program was built and tested with Borland Turbo C 2.0 and Borland C++ 2.0. Borland's graphics driver module EGAVGA.BGI must be accessible to the program at run time. If you use another compiler, you will need the graphics primitives or equivalents in Table 1. Painting in Space Before editing a scene, you will need to work out what colors map to what color numbers so you can control depth. Using the Edit Colors option, you can specify the color components. Set up a palette as in Table 2 and save it to a file using the Save Colors option. With this palette, color 0 (black) will be the farthest away, while color 15 (white) will yield the closest foreground image. Once you have established a known palette, you are ready to begin editing depth maps. Next, build a simple, depth-map image with the paint program. Start with large, uncomplicated shapes until you get a feel for how things work in 3-D. Your image will be more believable if you maintain depth order and don't let background-depth colors obscure foreground images. Once you have an image you want to look at in 3-D, save it as a PCX file and exit the paint program. Run 3D.EXE in DOS mode to render the scene; it runs extremely slow under Windows. Invoking 3D with no options other than the filename of the depth map (3D [filename.pcx]) causes it to default to a textured-coloring mode, picking random image colors and control parameters. First it loads and displays the depth-map image, then renders the image to 3D, line by line. An average image takes about 30 seconds on a 486/33. When you are done viewing the image, hit any key to exit. As the program exits, it prints out the random parameters it selected. You can use the parameters in command-line options to (more or less) re-create an image. Two other coloring modes are possible besides the default. To generate a traditional SIRDS using random dots, specify the -r option, or use the -s option to render the image using a sprite as coloring material. To view an image using the traditional constraint-mapping algorithm, use the -f option. The artifact-removal option (-a) has no effect in this mode. Images can be dramatically altered by using the control parameters horizontal, vertical, and wiggles. These have different meaning in texture and sprite modes; in random-dot mode, they are meaningless. In texture mode, the "horizontal" parameter determines the length of horizontal runs of like-colored pixels. The vertical parameter is a True/False value (1/0) that enables vertical runs. The wiggles parameter can be anything in the range 0-500 and only takes effect when the horizontal parameter is greater than 1. If wiggles is 0, the horizontal runs all drift right; if wiggles is greater than 480, the horizontal runs all drift left; in all other cases, the horizontal drifts reverse direction every line. In sprite mode, the parameter's functions are somewhat different. The horizontal parameter determines the maximum amount of horizontal waviness that can be injected into the sprite mapping. The vertical parameter can be: -1, where wiggles start at top of screen and decrease toward bottom; 0, where wiggle depth is constant top to bottom; and 1, where wiggle depth increases toward the bottom of the screen. 
The wiggles parameter controls the wiggle frequency, as in the texturing mode. While the limited depth resolution resulting from the use of a 16-color PCX image as a depth map is restrictive, clever scene design can still result in striking stereograms. The Egg-Carton Test Listing One is EGGCARTN.C, a test program. Compiling this program lets you see stereograms without editing a depth-map yourself. The program generates an example PCX depth-map file called "EGGCARTN.PCX." Invoking the program with an optional command-line parameter 0-15 causes it to generate different surfaces of varying degrees of interest. In DOS mode, you should first run the test program to generate the PCX depth map (type EGGCARTN and press Enter). Next run the stereogram program to view the results by entering 3D EGGCARTN.PCX. You should be able to see a repeating contour not unlike that of an egg carton. Conclusion Many areas of single-image-stereogram generation are still being explored. An example is "shimmering," which involves rendering the image several different ways, then rapidly flipping the graphics page between the different images. The image appears very solidly, but has a shimmering quality. Some people find the images easier to view this way. Shimmering paves the way for possible stereogram animation. So keep your eyes peeled. That apparently random texture on the stone front of a building, the slight shift in the wallpaper of your company's bathroom, the advertisement with the undulating, repeated logo in the background (that one's real already)--they all might be carrying secret messages in 3-D.

Figure 1: Typical stereogram (courtesy of NVision Grafix and Sony Corp.). This figure is unavailable in electronic format.

Example 1: Distance of the virtual point as a function of dot separation.

    virt = (view * eyes) / (eyes - sep)

Example 2: Basic constraint-mapping algorithm.

    simple_constraint_map(int y)
    {
        int lx,  /* left hand X coor */
            mx,  /* middle X coor */
            rx,  /* right hand X coor */
            dx;  /* left hand coor w/ delta */

        for(rx = 0, lx = -base_period, mx = lx / 2, index = CMAP_INDEX;
            rx < MAX_X;
            rx++, mx++, lx++)
        {
            /* if dx on screen, copy to right */
            if((dx = lx + get_depth(mx,y)) >= 0)
                cmap[rx] = cmap[dx];
            /* otherwise, just pick new index */
            else
                cmap[rx] = index++;
        }
    }

Table 1: Graphic primitives required by the 3D program.

    Function     Description
    initgraph    Sets up 640x480 EGA/VGA graphics mode.
    closegraph   Restores original screen mode.
    setpixel     Plots a point on screen with a specified color.
    getpixel     Returns the color at a point on screen.
    setcolor     Sets the draw color for line drawing.
    line         Draws a line from point x1,y1 to point x2,y2.

Table 2: Setting up the palette.

    Color #   Red   Green   Blue
    0         0     0       0
    1         128   0       0
    2         0     128     0
    3         128   128     0
    4         0     0       128
    5         128   0       128
    6         0     128     128
    7         128   128     128
    8         192   192     192
    9         255   0       0
    10        0     255     0
    11        255   255     0
    12        0     0       255
    13        255   0       255
    14        0     255     255
    15        255   255     255

Listing One

    /* EGGCARTN.C - generates a sample PCX depth map file called
       "eggcartn.pcx" for testing 3D.C stereogram program.
       Command-line param 0-15 generates different surfaces */

    #include <stdio.h>
    #include <graphics.h>
    #include <stdlib.h>
    #include <string.h>
    #include <math.h>
    #include <conio.h>

    #define MAX_X   640
    #define MAX_Y   480
    #define BPP     80  /* bytes per plane */
    #define NPLANES 4   /* number of color planes */

    void main(int argc,char **argv);
    void doodle(void);
    void dump_screen(void);
    void put_line(unsigned char *data,int cnt);
    void out_run(int byte,int rep_cnt);
    void open_pcx(char *name);
    void init_graphics(void);
    double func(int val);

    int mode;
    double eggsz = 200.0;

    /* main */
    void main(int argc,char **argv)
    {
        if(argc == 2)
            mode = atoi(argv[1]);
        open_pcx("eggcartn.pcx");
        init_graphics();
        doodle();
        dump_screen();
        closegraph();
        printf("Output in 'eggcartn.pcx'.\n");
        exit(0);
    }

    /* doodle */
    void doodle(void)
    {
        int x,y,color;
        double xf,yf;

        if(mode & 4)
            eggsz *= 2.0;
        for(y = 0; y < MAX_Y; y++) {
            if(kbhit()) {
                getch();
                closegraph();
                exit(0);
            }
            yf = func(y);
            for(x = 0; x < MAX_X; x++) {
                xf = func(x);
                color = (int)(7.99 * (1 + xf * yf));
                putpixel(x,y,color);
            }
        }
    }

    /* func */
    double func(int val)
    {
        double res;

        res = mode & 1 ? val % (int) eggsz : val;
        if(mode & 2) {
            res /= eggsz / 2.0;
            res -= 1.0;
        }
        else {
            res *= M_PI / eggsz;
            res = cos(res);
        }
        return(mode & 8 ? res * 1.666 : res);
    }

    /* stuff for PCX dump routines */
    struct pcx_header {
        char manufacturer, version, encoding, bits_per_pixel;
        int xmin, ymin, xmax, ymax, hres, vres;
        char colormap[16 * 3];
        char reserved, num_planes;
        int bytes_per_line, palette_code;
        char filler[58];
    } header = {
        10,              /* manu */
        5,               /* version */
        1,               /* encoding */
        1,               /* bits per pixel */
        0,               /* xmin */
        0,               /* ymin */
        639,             /* xmax */
        479,             /* ymax */
        800,             /* src hres */
        600,             /* src vres */
        0,0,0,           /* color 0 */
        0x80,0,0,        /* color 1 */
        0,0x80,0,        /* color 2 */
        0x80,0x80,0,     /* color 3 */
        0,0,0x80,        /* color 4 */
        0x80,0,0x80,     /* color 5 */
        0,0x80,0x80,     /* color 6 */
        0x80,0x80,0x80,  /* color 7 */
        0xc0,0xc0,0xc0,  /* color 8 */
        0xff,0,0,        /* color 9 */
        0,0xff,0,        /* color 10 */
        0xff,0xff,0,     /* color 11 */
        0,0,0xff,        /* color 12 */
        0xff,0,0xff,     /* color 13 */
        0,0xff,0xff,     /* color 14 */
        0xff,0xff,0xff,  /* color 15 */
        0,               /* reserved */
        4,               /* # planes */
        80,              /* bytes per line */
    };

    unsigned char planes[BPP * NPLANES];
    FILE *pcx_fp;

    #define pcx_putc(c) fputc(c,pcx_fp);

    /* dump_screen */
    void dump_screen(void)
    {
        int x,y,color,mask;
        unsigned char *p;
        static masktab[] = {0x80,0x40,0x20,0x10,0x8,0x4,0x2,0x1};

        /* write PCX header */
        for(x = 0, p = (unsigned char *)&header; x < sizeof(header); x++)
            pcx_putc(*p++);
        /* write out the screen */
        for(y = 0; y < MAX_Y; y++) {
            memset(planes,0,sizeof(planes));  /* clear planes */
            for(x = 0; x < MAX_X; x++) {
                /* break color into sep planes */
                color = getpixel(x,y);
                mask = masktab[x & 7];
                p = &planes[x >> 3];
                if(color & 1) *p |= mask;
                p += BPP;
                if(color & 2) *p |= mask;
                p += BPP;
                if(color & 4) *p |= mask;
                p += BPP;
                if(color & 8) *p |= mask;
            }
            put_line(planes,BPP * NPLANES);
        }
    }

    /* put_line */
    void put_line(unsigned char *data,int cnt)
    {
        int i,byte,rep_cnt;

        for(i = rep_cnt = 0; i < cnt; i++) {
            if(rep_cnt == 0) {  /* no "current byte" */
                byte = data[i];
                rep_cnt = 1;
                continue;
            }
            if(data[i] == byte) {
                /* same as previous, inc run length */
                rep_cnt++;
                /* full run then output */
                if(rep_cnt == 0x3f) {
                    out_run(byte,rep_cnt);
                    rep_cnt = 0;
                }
                continue;
            }
            out_run(byte,rep_cnt);  /* not equal to previous */
            byte = data[i];
            rep_cnt = 1;
        }
        if(rep_cnt)  /* shove out any stragglers */
            out_run(byte,rep_cnt);
    }

    /* out_run */
    void out_run(int byte,int rep_cnt)
    {
        if((byte & 0xc0) == 0xc0 || rep_cnt > 1)
            pcx_putc(0xc0 | rep_cnt);
        pcx_putc(byte);
    }

    /* open_pcx */
    void open_pcx(char *name)
    {
        pcx_fp = fopen(name,"wb");
        if(pcx_fp == NULL) {
            printf("Can't open output PCX file '%s'.\n",name);
            exit(1);
        }
    }

    /* init_graphics */
    void init_graphics(void)
    {
        int graphdriver,graphmode,grapherr;

        /* set VGA 640x480 graphics mode */
        graphdriver = VGA;
        graphmode = VGAHI;
        initgraph(&graphdriver,&graphmode,"");
        if((grapherr = graphresult()) != grOk) {
            printf("Graphics init error: %s\n",grapherrormsg(grapherr));
            exit(1);
        }
        /* make double sure we hit 640x480 mode */
        if(getmaxx() != MAX_X - 1 || getmaxy() != MAX_Y - 1) {
            closegraph();
            printf("Wrong screen size, x=%u y=%u.\n",getmaxx(),getmaxy());
            exit(1);
        }
    }
http://www.drdobbs.com/database/single-image-stereograms/184409585?pgno=6
CC-MAIN-2015-11
refinedweb
4,788
61.16
This article describes how to create a ProgressBar control which, by having an appearance that can be customized, is a better looking and (to some extent) a more functional progress bar than what is provided as standard on the Windows Mobile 5 platform. There are already some good articles on creating good-looking progress bars (such as this article), but this one will focus on making a progress bar that can take on virtually any appearance and run on a mobile device. I will also provide some tips on how to set up a Visual Studio project to reduce development time when implementing for Windows Mobile 5.

Updated: this update contains a performance fix. The fix is described in the Performance chapter.

The source code ZIP file that can be downloaded for this article contains one Visual Studio solution in a folder called Bornander UI. This solution contains the code for the progress bar and some code that tests it; all of it has Windows as the target platform. This project can be used when the .NET Compact Framework is not installed to try out the progress bar in a desktop environment. The downloadable ZIP also includes a ZIP file called Bornander UI (Cross platform).zip which contains the solution I used when building this progress bar. This also has projects that build the source code for a Device environment.

The code for the progress bar control is all in the file ProgressBar.cs. This file holds a class called ProgressBar that extends System.Windows.Forms.Panel. Since it extends a standard Windows Forms control, it is possible to lay it out on a form or panel using the visual designer. When creating this control, I decided on a set of requirements that the control should implement: So, there are four rather straightforward requirements to implement. So, how did it go?

One way to do custom rendering of a control is to override the paint methods and call methods such as DrawRectangle or FillRectangle to draw the desired graphics, for example:

    ...
    protected override void OnPaint(PaintEventArgs e)
    {
        e.Graphics.FillRectangle(new SolidBrush(backgroundColor), 0, 0, this.Width, this.Height);
        e.Graphics.FillRectangle(new SolidBrush(foregroundColor), 0, 0, scrollbarValue, this.Height);
    }
    ...

This would first render a solid rectangle in backgroundColor and then render another, possibly shorter, rectangle over that in foregroundColor. This is basically how I think they've implemented the standard progress bar in the .NET Compact Framework. However, it would be quite hard (or at least very time-consuming) to draw a progress bar such as the one in Windows Vista this way since that uses gradient transitions between colours. Luckily, System.Drawing.Drawing2D has brushes that do gradient fill (such as System.Drawing.Drawing2D.LinearGradientBrush). Excellent! We've got the tools we need to do the job. Or do we?

Everyone who has done a bit of .NET programming and then started doing .NET Compact Framework programming has realized just how compact the Compact Framework really is. Not only are some of the methods on some classes missing, but entire classes have also disappeared. For example, there is no System.Drawing.Drawing2D.LinearGradientBrush in the Compact Framework. After looking at the level of customization I was going for, I decided that even if there had been a System.Drawing.Drawing2D.LinearGradientBrush in the Compact Framework, it still wouldn't have met my requirements. I want my progress bar to allow for a greater level of customization.
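For reference, this is roughly the kind of desktop-only code the Compact Framework rules out (a sketch only, reusing the field names from the snippet above; LinearGradientBrush lives in System.Drawing.Drawing2D of the full framework):

    // Desktop-only sketch: a vertical gradient fill for the progress area.
    // This does not compile against the .NET Compact Framework.
    protected override void OnPaint(PaintEventArgs e)
    {
        using (LinearGradientBrush brush = new LinearGradientBrush(
            new Rectangle(0, 0, this.Width, this.Height),
            Color.LightGreen,
            Color.DarkGreen,
            LinearGradientMode.Vertical))
        {
            e.Graphics.FillRectangle(brush, 0, 0, scrollbarValue, this.Height);
        }
    }

Even with such a brush available, though, plain gradients would not cover the kind of customization this control is after.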
I would like to be able to have text or images in the background, kind of what you can sometimes see on progress bars found in games. I decided to go with a solution where my progress bar, instead of rendering itself using primitives, uses a set of images that I can provide using the visual form designer in Visual Studio. This has three big benefits: So images it is then. I started out trying to reproduce an XP kind of style and realized that I would need three images to do this: How big should the images be? Do they all need to be the same size? What if I want my progress bar to be wider than the images used? Obviously, I want the progress bar to be able to take on more or less any dimension regardless of the images used, so how should the progress bar render, for example, the background if the background image isn't as wide as I want my progress bar to be? There are two ways to fix such a case: I realized that both of these approaches have pros and cons. You can't stretch an image to create an XP-like look where the progress is indicated by green blocks. In this case, you have to tile the images. On the other hand, with a look such as Vista's it is more convenient to just stretch the image. In the end, I decided that I could get a cool effect from both of these approaches and that is why I left the choice up to the developer using the control instead. I did that by exposing a property called [imageType]DrawMethod. I wanted to have the option to tile or stretch the images differently for background, foreground and overlay: public class ProgressBar : Panel { public enum DrawMethod { Tile, Stretch } private DrawMethod foregroundDrawMethod = DrawMethod.Stretch; ... public DrawMethod ForegroundDrawMethod { get { return foregroundDrawMethod; } set { foregroundDrawMethod = value; } } ... } By exposing it as a public property, the visual form designer will allow me to change it using the property pages for the control. Perfect. Ok, so we have the types of images we need and we have ways to make sure that any length of image will still cover the entire width of the progress bar. Great, that means that we can make small images and save memory and resources that way. The XP-style progress bar in my example above is made up of these three images: However, what if we wanted to draw, for example, the background image stretched to 200 pixels? That would mess up the proportions of the image in the corner, like this: That does not look good, so I came up with the concept of image segments. I expose three segment related properties per image and then only stretch or tile the center segment. The three segments are defined by two properties of the ProgressBar class: [imageType]LeadingSize: this defines the left-most area that will not be stretched. [imageType]TrailingSize: this defines the right-most area that will not be stretched. The center segment is implicitly defined as the segment between the leading and trailing segments. Again, public properties expose the segment values to the form designer: public class ProgressBar : Panel { ... private int backgroundLeadingSize = 0; private int backgroundTrailingSize = 0; ... public int BackgroundLeadingSize { get { return backgroundLeadingSize; } set { backgroundLeadingSize = value; } } public int BackgroundTrailingSize { get { return backgroundTrailingSize; } set { backgroundTrailingSize = value; } } } Now, finally, all the properties we need are defined. It's now time to render the images. 
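Restating the geometry just described as code may help; this is a sketch with assumed variable names (image, leadingSize and trailingSize stand in for the image and the property values discussed above), showing how the two size properties split a source image into the three segments:

// Sketch only: the three source rectangles implied by the two properties.
Rectangle leading  = new Rectangle(0, 0, leadingSize, image.Height);
Rectangle center   = new Rectangle(leadingSize, 0,
                                   image.Width - (leadingSize + trailingSize),
                                   image.Height);
Rectangle trailing = new Rectangle(image.Width - trailingSize, 0,
                                   trailingSize, image.Height);

Only the center rectangle is ever stretched or tiled; the other two are always copied at their natural width.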
We do that by overloading OnPaint: protected override void OnPaintBackground(PaintEventArgs e) { // Do nothing in here as all the painting is done in OnPaint }); } A few things to note here: first, we not only override OnPaint, but also OnPaintBackground to make sure the panel is not trying to render its default background. This is important because failing to do this can cause the progress bar to flicker. Further, I do not render directly to the Graphics object that is passed to the OnPaint method, as that would also lead to flicker. Instead, I create an image in memory (this is done in the CreateOffscreen method) and render the progress bar to that. Then I render the off-screen image to e.Graphics at the end of the method. This way, the Graphics object that is visible on-screen only gets one update. No flicker! A method called Render in the progress bar class is used to render an image up to a certain width. This is called three times, one for each type of image: background, foreground and overlay. For background and overlay, the width of the control is passed as a width parameter, making the background and overlay always being drawn to fill the entire progress bar. The foreground is drawn with a width parameter representing the amount of "progress" on the progress bar. The Render method looks like this: protected void Render(Graphics graphics, Image sourceImage, DrawMethod drawMethod, int leadingSize, int trailingSize, int distance) { // If we don't have an image to render just bug out, this allows us // to call Render without checking sourceImage first. if (sourceImage == null) return; // // Draw the first segment of the image as defined by leadingSize, // this is always drawn at (0, 0). // ProgressBar.DrawImage( graphics, sourceImage, new Rectangle(0, 0, leadingSize, this.Height), new Rectangle(0, 0, leadingSize, sourceImage.Height)); // // Figure out where the last segment of the image should be drawn, // this is always to the right of the first segment // and then at the given distance minus the width of the last segment. // int trailerLeftPosition = Math.Max(leadingSize, distance - trailingSize); ProgressBar.DrawImage( graphics, sourceImage, new Rectangle(trailerLeftPosition, 0, trailingSize, this.Height), new Rectangle(sourceImage.Width - trailingSize, 0, trailingSize, sourceImage.Height)); // // We only draw the middle segment if the width of the first and last // are less than what we need to display. // if (distance > leadingSize + trailingSize) { RenderCenterSegment(graphics, sourceImage, drawMethod, leadingSize, trailingSize, distance, trailerLeftPosition); } } By passing in a source rectangle (specifying which area of the image being drawn is going to be used) and the destination rectangle (the area on the graphics object the image is drawn onto), it is easy to draw the leading, trailing and center segments. At the end of the Render, and only if the width parameter is greater than the leading segment width plus the trailing segment width (this check is here to make sure that the "ends" for the progress bar are always drawn). the center segment is rendered. This is the part that takes into account the current DrawMode: private void RenderCenterSegment(Graphics graphics, Image sourceImage, DrawMethod drawMethod, int leadingSize, int trailingSize, int distance, int trailerLeftPosition) { switch (drawMethod) { // This draws the middle segment stretched to fill the area // between the first and last segment. 
case DrawMethod.Stretch: ProgressBar.DrawImage( graphics, sourceImage, new Rectangle(leadingSize, 0, distance - (leadingSize + trailingSize), this.Height), new Rectangle(leadingSize, 0, sourceImage.Width - ( leadingSize + trailingSize ), sourceImage.Height)); break; // This draws the middle segment un-stretched as many times // as required to fill the area between the first and last segment. case DrawMethod.Tile: { Region clipRegion = graphics.Clip; int tileLeft = leadingSize; int tileWidth = sourceImage.Width - (leadingSize + trailingSize); // By setting clip we don't have to change the size // of either the source rectangle or the destination // rectangle, the clip will make sure the //overflow is cropped away. graphics.Clip = new Region( new Rectangle(tileLeft, 0, trailerLeftPosition - tileLeft, this.Height + 1)); while (tileLeft < trailerLeftPosition) { ProgressBar.DrawImage( graphics, sourceImage, new Rectangle(tileLeft, 0, tileWidth, this.Height), new Rectangle(leadingSize, 0, tileWidth, sourceImage.Height)); tileLeft += tileWidth; } graphics.Clip = clipRegion; } break; } } The observant reader might have reacted to the use of ProgressBar.DrawImage rather than graphics.DrawImage. This is because of a portability issue between .NET and .NET Compact Framework. I want the same code base to run both platforms, desktop and device. This leads to the use of some pre-processor directives, since we must change the code slightly between platforms. The .NET Compact Framework ignores transparent pixels in PNG files and renders them as white. This won't do, so I fix that by using a "chroma key" defined in an ImageAttribute class to achieve transparency. This would have worked for the desktop as well, but it is so much nicer being able to draw the images the way I want them to look, without the use of a "green screen" colour. This is why I decided to keep the original behaviour on the desktop. protected static void DrawImage(Graphics graphics, Image image, Rectangle destinationRectangle, Rectangle sourceRectangle) { /* * The only place where some porting issues arises in when * drawing images, because of this the ProgressBar code does not * draw using Graphics.DrawImage directly. It instead uses this * wrapper method that takes care of any porting issues using pre- * processor directives. */ #if PocketPC // // The .NET Compact Framework can not handle transparent pngs // (or any images), so to achieve transparancy we need to set // the image attributes when drawing the image. // I've decided to hard code the "chroma key" value to // Color.Magenta but that can easily // be set by a property instead. // if (imageAttributes == null) { imageAttributes = new ImageAttributes(); imageAttributes.SetColorKey(Color.Magenta, Color.Magenta); } graphics.DrawImage(image, destinationRectangle, sourceRectangle.X, sourceRectangle.Y, sourceRectangle.Width, sourceRectangle.Height, GraphicsUnit.Pixel, imageAttributes); #else graphics.DrawImage(image, destinationRectangle, sourceRectangle, GraphicsUnit.Pixel); #endif } I went with magenta as a hard-coded chroma key, making it impossible to use that colour in the progress bar as it will not be rendered. This is a good thing because magenta is an ugly color. And that's it. Custom progress bars that can take on any appearance! (This chapter was added in version 3 of this article.) As was correctly pointed out to me in this article's discussion, my implementation lacks a marquee mode. I decided to add that and implement the same type of customization possibilities. 
For those of you who are unfamiliar with marquee progress bars, this is what it's called when a progress bar is used to indicate processing rather than progress. It is normally used to show the user that the application is doing something, but does not know how much work there is left to do. The first thing my progress bar needed was a way to indicate what type of bar it was, so I added an enumeration: public class ProgressBar : Panel { ... public enum BarType { Progress, Marquee } ... } Using a member of this enumeration that is exposed by a get/set property, it is then easy to set the type of bar using the property pages in the visual designer: public class ProgressBar : Panel { ... public enum BarType { Progress, Marquee } private BarType barType = BarType.Progress; #if !PocketPC [Category("Progressbar")] #endif public BarType Type { get { return barType; } set { barType = value; } } ... } You might wonder what the pre-processor directive around an attribute on the property is for: #if !PocketPC [Category("Progressbar")] #endif public BarType Type By adding a Category attribute to the property, property pages in the visual designer can group related properties together. However, the .NET Compact Framework does not include the Category attribute, so the code would not compile without the pre-processor directive. This means that if we're using the progress bar in a desktop environment, the properties will be neatly grouped together. Unfortunately, this wont happen on the Device version. I also added another enumeration and member/property pair: public class ProgressBar : Panel { ... public enum MarqueeStyle { TileWrap, BlockWrap, Wave } ... } This gives the option to select a type of marquee rendering. There are three different rendering types to choose from in my implementation: Since the background and overlay part of the progress bar do not change with BarType, most of the ProgressBar.OnPaint method remains the same as in the previous implementation. The only change is the addition of a switch statement that renders the foreground based on the BarType. The rendering of the marquee foreground is delegated to another method, as there are three options and I want to keep the ProgressBar.OnPaint method tidy:); switch (barType) { case BarType.Progress: //); } break; case BarType.Marquee: // There are a couple of ways to render the marquee foreground // so this is delegated to a method RenderMarqueeForeground(); break; } //); } ProgressBar.RenderMarqueeForeground() is then responsible for rendering the foreground using the correct method: private void RenderMarqueeForeground() { switch (marqueeStyle) { case MarqueeStyle.TileWrap: RenderMarqueeTileWrap(); break; case MarqueeStyle.Wave: RenderMaqueeWave(); break; case MarqueeStyle.BlockWrap: RenderMarqueeBlockWrap(); break; } } The three methods are then used to render the foreground. These methods are all basically the same: they all render the foreground leading part, a center part and, finally, the trailing part. The only difference between them is how they calculate where the drawing should begin. As Windows Mobile Devices are a lot less powerful than desktop computers, I found that I needed to improve the rendering performance of the progress bar so that it would work smoother in cases where there are frequent updates to the progress value. In this case, it was not very difficult to find an area where some time could be saved. I know that the progress bar is basically not doing anything but rendering itself. 
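The article does not show how the marquee position advances between paints; a plausible sketch (the member names are mine, not the article's) is a timer that steps a wrap-around offset and invalidates the control:

// Sketch: advancing a wrap-around marquee offset on a timer tick.
private int marqueeOffset;

private void OnMarqueeTick(object sender, EventArgs e)
{
    // Step the offset and wrap it at the control width so the
    // foreground appears to travel continuously across the bar.
    marqueeOffset = (marqueeOffset + 2) % this.Width;
    Invalidate(); // triggers OnPaint, which renders at the new offset
}

Each of the three marquee renderers would then derive its drawing start position from marqueeOffset, which matches the statement above that they differ only in where drawing begins.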
So, to optimize it, I needed to optimize the methods that render the background, foreground and overlay. When I say that it would not be very difficult, I mean that it would be easy to find a compromise where speed was gained at the cost of something else. This is usually the case when optimizing, unless the code is really poorly written to start with, in which case optimizations can be made by removing redundant things or just do thing correctly. I decided to gain speed at the cost of memory used by storing the "calculated" graphics in cache images. By "calculated" I mean that the background, foreground and overlay as the way they appear on-screen is calculated using their leading, trailing properties. By rendering them to a cache image on Resize and then using the cache images when the progress bar needed repainting, no calculation would have to be done for a normal repaint. This approach is all right for background and overlay, but for foreground, which changes as the progress value changes, it becomes more complicated. I could have created a complete set of cache images for the foreground, one for each amount of progress, and then rendered the correct one based on the progress value. This would have either made the progress bar appear as if it where snapping between values, if I had used too few cache images, or consumed too much memory if I had created one cache image for each possible state (equal to the width in pixels of the progress bar). I decided to go for optimizing just the background and the overlay. The first thing needed then is a method that renders the images in their correct size, not to the off-screen, but to their cache images. This method is called from the method that handles resizes: protected void RenderCacheImages() { ProgressBar.DisposeToNull(backgroundCacheImage); ProgressBar.DisposeToNull(overlayCacheImage); backgroundCacheImage = new Bitmap(Width, Height); Graphics backgroundCacheGraphics = Graphics.FromImage(backgroundCacheImage); // Render the background, here we pass the entire // width of the progressbar as the distance value because we // always want the entire background to be drawn. Render(backgroundCacheGraphics, backgroundImage, backgroundDrawMethod, backgroundLeadingSize, backgroundTrailingSize, this.Width); overlayCacheImage = new Bitmap(Width, Height); Graphics overlayCacheGraphics = Graphics.FromImage(overlayCacheImage); // Make sure that we retain our chroma key value by starting with a // fully transparent overlay cache image overlayCacheGraphics.FillRectangle(new SolidBrush(Color.Magenta), ClientRectangle); // Render the overlay, this way we can get neat border on our // progress bar (for example) Render(overlayCacheGraphics, overlayImage, overlayDrawMethod, overlayLeadingSize, overlayTrailingSize, this.Width); } I can still re-use the existing ProgressBar.Render method, as that was created to render to an off-screen. However, instead of supplying the off-screen graphics as the first argument, the graphics for the cache images are provided. Note that in order to maintain any transparency, the overlay image is drawn onto a magenta area, as this will be left by the transparent (magenta) pixels on this pass and made transparent on the render to off-screen. 
The next step is then to replace the calls to ProgressBar.Render for background and overlay in the OnPaint method with calls to ProgressBar.DrawImage instead:

// Render the background using the cached image
ProgressBar.DrawImage(offscreen, backgroundCacheImage, ClientRectangle, ClientRectangle);

switch (barType)
{
    // Render foreground here...
    ...
}

// Render the overlay using the cached image
ProgressBar.DrawImage(offscreen, overlayCacheImage, ClientRectangle, ClientRectangle);

// Finally, draw our offscreen onto the Graphics in the event.
e.Graphics.DrawImage(offscreenImage, 0, 0);
}

And that's it for the performance fix. When running the application, it is hard to see the difference on Windows Mobile 5 devices, but it's visible on PocketPC 2003 devices. Although this implementation is intended for use on Mobile 5 or later, I still try to make my code work on as wide a range of platforms as possible.

All the requirements defined at the beginning of this article are implemented, and the only thing that needs improvement is the performance when running on Mobile 5 devices. Overall, I'm pretty happy with the result.

The ZIP file inside the download shows how to set up a solution so that two projects can reference the same source files. This is good when developing for the .NET Compact Framework, because it can take a little while to launch the test application on the emulator. It is therefore convenient to try the code out in a desktop environment, but at the same time I want instant feedback on API conformance, so that I do not spend days implementing something that is then useless because I have been using features that are unsupported by the Compact Framework.

All comments are welcome, both on the code and the article.
http://www.codeproject.com/KB/dotnet/CustomProgressBar.aspx
crawl-002
refinedweb
3,548
54.42
popen() - Execute a command, creating a pipe to it

Synopsis:

#include <stdio.h>

FILE* popen( const char* command,
             const char* mode );

Arguments:
- command - The command that you want to execute.
- mode - The I/O mode for the pipe, which must be "r" or "w"; see below.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
- If mode is "r", then when the child process is started:
  - Its file descriptor, STDOUT_FILENO, is the writable end of the pipe.
  - The fileno( stream ) in the calling process is the readable end of the pipe, where stream is the stream pointer returned by popen().
- If mode is "w", then when the child process is started:
  - Its file descriptor, STDIN_FILENO, is the readable end of the pipe.
  - The fileno( stream ) in the calling process is the writable end of the pipe, where stream is the stream pointer returned by popen().
- If mode is any other value, the result is undefined.

Returns:

A non-NULL stream pointer on successful completion. If popen() is unable to create either the pipe or the subprocess, it returns a NULL stream pointer and sets errno.

Errors:
- EINVAL - The mode argument is invalid.
- ENOSYS - There's no pipe manager running.

The popen() function may also set errno values as described by the pipe() and spawnl() functions.

See also: errno, pclose(), pipe(), spawnlp()
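A minimal usage sketch for the "r" mode described above — the command shown is arbitrary, and error handling is kept short:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char line[256];

    /* Run a command and read its standard output through the pipe. */
    FILE *fp = popen("ls -l", "r");
    if (fp == NULL) {
        perror("popen");
        return EXIT_FAILURE;
    }
    while (fgets(line, sizeof(line), fp) != NULL)
        fputs(line, stdout);

    /* pclose() waits for the command and reports its termination status. */
    if (pclose(fp) == -1)
        perror("pclose");
    return EXIT_SUCCESS;
}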
http://www.qnx.com/developers/docs/6.4.0/neutrino/lib_ref/p/popen.html
crawl-003
refinedweb
239
73.47
Equinox p2 Metadata Authoring

This page describes the design of the proposed p2 meta data authoring project.

Contents - 1 Metadata authoring proposal - 2 Files and Formats - 3 The IU Editor - 3.1 Overview Page - 3.1.1 Namespace - 3.1.2 ID - 3.1.3 Name - 3.1.4 Version - 3.1.5 Provider - 3.1.6 Filter - 4 Required Capabilities - 5 Provided Capabilities - 6 Artifact Keys - 7 Information - 8 Touchpoint Page - 9 Update - 10 TODO

The problems this project should solve:
- Many types are automatically "adapted" to installable units (e.g. features and plugins), but there is no direct authoring available.
- Existing "adapted" installable units may have meta data that needs to be modified/augmented in order for the unit to be useful - there is a mechanism available to provide advice to the resolution mechanism, but authoring of such advice is not available.

I think the work consists of:
- An XML definition for "authored installable unit" (most likely the <unit> format in the local meta data repository). It should be possible to create such an "iu" file with any name, without requiring that it is in some special project or folder.
- An XML definition for "advice to adapted installable unit" (as I am not sure what is needed here, a separate definition is perhaps needed).
- An editor for Installable Unit
- An editor for Installable Unit advice (perhaps the same editor).
- A way to test the installable unit (resolution, install-ability)
- A way to export/publish installable units

Files and Formats

A new Installable Unit file can be created with the New IU Wizard. It is possible to browse for the container, and then create the file:

The format of the "IU" file

It seems obvious to reuse the xml format from the local meta data repository as it already describes an installable unit.

<?xml version='1.0' encoding='UTF-8'?>
<?InstallableUnit class='org.eclipse.equinox.internal.p2.metadata.InstallableUnit' version='1.0.0'?>
<installable version="1.0.0">
  <unit>
    <!-- like local metadata repository for 'unit' element -->
  </unit>
</installable>

In order to make it easier to reuse the parser/writer for the local metadata repository, a new root element "<installable>" is used to wrap a single "<unit>" element. This also opens up the possibility to include other elements (than unit) if required.

Naming convention

To make it easy to detect "IU" files, an ".iu" extension should be required.

Writer/Parser design

It was possible to create a prototype that reads and writes this format by reusing the metadata repository XMLReader and XMLWriter. It is however a bit unclear if the intention is to allow the InstallableUnit to be used by an editor, or if these classes should be considered internal/private.

The IU Editor

The IU editor is written with PDE Forms. The editor has support for:
- Validation of (most) fields with interactive error reporting
- Undo/Redo support that also moves focus to the modified page/field

Overview Page

Namespace

The namespace is the naming scope of the installable unit's name. From what has been understood by looking at some installable units, these are examples of namespaces:
- org.eclipse.equinox.p2.iu
- org.eclipse.equinox.p2.eclipse.type
- osgi.bundle
- java.package
- org.eclipse.equinox.p2.localization

Namespace Questions
- Should the user be allowed to type anything in the namespace field?
- If not allowed to type anything - where are the valid namespaces found?
- Can the set of namespaces be extended?
- Does namespace have a valid format (it looks like it follows package name format)?
- Can namespace be left empty? - There are two namespace properties - one called namespace_flavor, and one namespace_IU_ID - is there a description of how these are used? Namespace Implementation - Namespace is required - The namespace field is validated as a structured name (i.e. like a java package name) ID ID Questions - What is ID - how is it different from name? - What is the valid format? - What determines the format - can it be different in different name spaces? ID Implementation - ID Is required - Any string input is accepted Name Name Questions - What is the valid format of a name? - Is format determined by p2 for all namespaces, or can name format vary between different namespaces? - If it can vary, how is the namespace/name validation extended to new namespaces? Name Implementation The assumption is that the name should follow structured java naming - i.e. a pattern that: - name can consist of multiple parts where parts are separated by a "." - each part must be at least one char long - each part must begin with a-zA-Z_$ - subsequent chars in a part can be a-zA-Z0-9_$ Version Version Implementation - Version is an OSGi version and is validated as one. - Version is required. Provider Provider is an optional information about the provider (name of organization or individual) providing the unit. Provider Implementation - Optional string value. No validation. Filter This is assumed to be an LDAP filter expression. Filter Questions - Is the assumption correct (LDAP filter expression)? - Is it meaningful to provide structured input of the filter (i.e. like feature editor with separate sections for platform, arch, window system, arch and language)? - is p2 open and can filter on an expandable set of variables, or is it a fixed set (platform, arch, window system, arch and language)? Filter Implementation - The field uses a LDAP filter validator that conforms to the RFC 2254 for textual representation of LDAP filter with the following exceptions: - OCTAL STRING input is not handled - attribute options (e.g.";binary") is not handled - use of extensions or reference to LDAP matching rules is not handled - (currently) the attribute names are restricted to US ASCII letters, digits and the hyphen '-' character and '.' to separate parts in a structured name. (RFC 2254 specifies that ISO 10646 should be used, and that "letters", "digits" and "hyphens" are allowed.) Required Capabilities A list of required capabilities is shown and the user can add/remove, and move items up/down. Selecting an item opens the detail editor. Icons are set based on namespace. There is currently no repository lookup. Provided Capabilities This is implemented in the prototype by showing a list of provided capabilities. Entries can be added/removed, and moved up/down and edited. Artifact Keys Artifact keys can be added/removed, and moved up/down and edited. There is currently no repository lookup. Artifact Questions - What is "Classifier" - can this be a drop down list? - What determines what the classifier can be? - What determines its format? - it is now validated as a structured name Information The editor has editing of the information copyright notice, license agreement, and description. Some boilerplate text is inserted as a starting point. Touchpoint Page An installable Unit can be installed into one touchpoint. The IU meta data consists of a reference to the touchpoint (Touchpoint Type), and describes a set of actions/instructions to execute on the referenced touchpoint. 
Currently, two touchpoint types (native, and eclipse/osgi) have been implemented. The native touchpoint has approximately five different actions, and the eclipse touchpoint has approximately twenty. Some of these actions take parameters. Here is a list of the actions per touchpoint.

The editor allows blocks of instructions to be added (touchpoint data), and each such instruction block allows editing of actions per instruction (aka phase). There is meta data that describes the available touchpoints (native 1.0.0, and eclipse 1.0.0), which makes it possible to show better labels, list available actions, etc.

Instructions and actions are added by pressing add and selecting from the popup menu that appears: the available actions are displayed, and the user can select the action to add.

If the user switches between touchpoint types, I decided to keep the actions previously added (the alternative would be to remove actions that do not apply). Instead, a message is displayed above the area for editing the action. (I really wanted to use the same error reporting as used elsewhere, but ran into a design flaw that made it difficult to keep this particular type of problem in sync.)

Handling of Unknown Touchpoint Type

The editor handles an IU file with an unknown touchpoint type by allowing editing of existing actions, but not adding new ones. Actions for an unknown touchpoint type are shown with the parameter names instead of formatted labels. When the original IU has an unknown type, that type is selectable in the type box (this is to support a user trying to change the touchpoint type to something else and realizing that it is best to stick with the original). If the file is saved with a known touchpoint type, it is not possible to get the unknown type back again without editing the XML directly.

Touchpoint Questions
- Do you think it is ok to keep actions and instructions when switching touchpoint type even if they do not apply, and let the user delete them, or should all non-applicable instructions and actions be removed (undoable)?

Touchpoint Notes
- p2 has a bug (link to issue TBD) that merges multiple instruction blocks into one when reading the meta data. (The editor is not to blame if you run into this problem.)
- There is an enhancement request logged (link to issue TBD) for being able to save the name of an instruction block (aka TouchpointData). The editor assigns labels called "Instruction block n", where n is the block number.
- There is an issue with parameter values containing "," that causes p2 meta data to go off track. The editor therefore filters out all "," characters from parameter value fields for actions.
- There is no validation of input fields for actions - this is somewhat difficult as runtime substitution of "${variable}" is supported.
- Actions can be moved up/down within an instruction.
- Instructions can not be moved - the editor displays all phases/instructions (empty instructions are not written to XML).
- Instruction blocks can be moved up/down.

Update

The editor has an "Update" page where information is kept about which IUs the specified IU is an update of.

Update Questions
- What are the valid values for severity?

Update Implementation
- The severity can now be set to a value >= 0
- The description field should probably be longer.
TODO - lookup of required capabilities from meta data repositories - lookup of artifacts from artifact repository - lookup of things from workspace - handle persistent problem markers (at least managed by the editor) - "test/build/install" - handle fragment - handle patch - Are there additional properties that should be editable (lock? contact email?)
http://wiki.eclipse.org/index.php?title=Equinox_p2_Metadata_Authoring&oldid=120433
CC-MAIN-2017-13
refinedweb
1,737
54.02
On Fri, Oct 13, 2017 at 1:45 PM, Ethan Furman <ethan at stoneleaf.us> wrote: > On 10/13/2017 09:48 AM, Steve Dower wrote: >> >> On 13Oct2017 0941, Yury Selivanov wrote: > > >>> Actually, capturing context at the moment of coroutine creation (in >>> PEP 550 v1 semantics) will not work at all. Async context managers >>> will break. >>> >>> class AC: >>> async def __aenter__(self): >>> pass >>> >>> ^ If the context is captured when coroutines are instantiated, >>> __aenter__ won't be able to set context variables and thus affect the >>> code it wraps. That's why coroutines shouldn't capture context when >>> created, nor they should isolate context. It's a job of async Task. >> >> >> Then make __aenter__/__aexit__ when called by "async with" an exception to >> the normal semantics? >> >> It seems simpler to have one specially named and specially called function >> be special, rather than make the semantics >> more complicated for all functions. > It's not possible to special case __aenter__ and __aexit__ reliably (supporting wrappers, decorators, and possible side effects). > +1. I think that would make it much more usable by those of us who are not > experts. I still don't understand what Steve means by "more usable", to be honest. Yury
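A small sketch of the behavior being argued for, written against the contextvars module that later shipped in Python 3.7 — an after-the-fact illustration consistent with the thread, not code from it:

import asyncio
import contextvars

var = contextvars.ContextVar("var", default="outer")

class AC:
    async def __aenter__(self):
        # If coroutines captured (and isolated) the context at creation
        # time, this set() could not be observed by the async-with body.
        var.set("set by __aenter__")
        return self

    async def __aexit__(self, *exc):
        return False

async def main():
    async with AC():
        print(var.get())  # prints "set by __aenter__"

asyncio.run(main())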
https://mail.python.org/pipermail/python-ideas/2017-October/047332.html
CC-MAIN-2022-33
refinedweb
200
71.14
In .NET, the CommonDialog class is the base class for displaying common dialog boxes, such as the Color dialog box, the Font dialog box and the File dialog box. All the classes that inherit from the CommonDialog class override the RunDialog() function to create a specific common dialog box. The RunDialog() function is automatically invoked when a user of a common dialog box calls the ShowDialog() function.

The following are the types of CommonDialog classes:

ColorDialog: Displays the color picker dialog box that enables users to set the color of an interface element.
FontDialog: Displays a dialog box that enables users to set a font and its attributes.
OpenFileDialog: Displays a dialog box that enables users to navigate to and select a file.
SaveFileDialog: Displays a dialog box that allows users to save a file.
PrintDialog: Displays a dialog box that enables users to select a printer and set its attributes.
PageSetupDialog: Displays a dialog box that enables users to set up page settings for printing.

To change the background color and the foreground color of the text, you can use the default color dialog box. This is how the color dialog box appears. To use it, you can either add a ColorDialog control to the form or create an instance of the inherited ColorDialog class.

AllowFullOpen: Gets/Sets whether the user can use the dialog box to define custom colors.
AnyColor: Gets/Sets whether the dialog box displays all the available colors in the set of basic colors.
Color: Gets/Sets the color selected by the user.
CustomColors: Gets/Sets the set of custom colors shown in the dialog box.
FullOpen: Gets/Sets whether the controls used to create custom colors are visible when the dialog box is opened.
ShowHelp: Gets/Sets whether the dialog box displays a help button.
SolidColorOnly: Gets/Sets whether the dialog box will restrict users to selecting solid colors only.

The following code snippets illustrate how you can invoke the default color dialog box in two ways. To execute the following code snippets, you need to add one text box and two buttons to the form. You also need to change the Name and Text properties of the buttons to Background and Foreground.

Private Sub Background_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Background.Click
    ColorDialog1.ShowDialog()
    TextBox1.BackColor = ColorDialog1.Color
End Sub

Private Sub Foreground_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Foreground.Click
    ColorDialog1.ShowDialog()
    TextBox1.ForeColor = ColorDialog1.Color
End Sub

Dim CDialog1 As New ColorDialog

Private Sub Background_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Background.Click
    CDialog1.ShowDialog()
    TextBox1.BackColor = CDialog1.Color
End Sub

Private Sub Foreground_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Foreground.Click
    CDialog1.ShowDialog()
    TextBox1.ForeColor = CDialog1.Color
End Sub

When you click either of the two buttons, the color dialog box appears and you can change the background and the foreground colors for the text in the text box.

The FontDialog class exposes the following properties:

AllowSimulations: Gets/Sets whether the dialog box allows graphics device interface font simulations.
AllowVectorFonts: Gets/Sets whether the dialog box allows vector fonts.
AllowVerticalFonts: Gets/Sets whether the dialog box displays both vertical and horizontal fonts or only horizontal fonts.
Color: Gets/Sets the selected font color.
FixedPitchOnly: Gets/Sets whether the dialog box allows only the selection of fixed-pitch fonts.
Font: Gets/Sets the selected font.
FontMustExist: Gets/Sets whether the dialog box specifies an error condition if the user attempts to select a font or size that doesn't exist.
MaxSize: Gets/Sets the maximum point size the user can select.
MinSize: Gets/Sets the minimum point size the user can select.
ShowApply: Gets/Sets whether the dialog box contains an apply button.
ShowColors: Gets/Sets whether the dialog box displays the color choice.
ShowEffects: Gets/Sets whether the dialog box contains controls that allow the user to specify strikethrough, underline and text color options.
ShowHelp: Gets/Sets whether the dialog box displays a help button.

Again, you can do this in two ways, the same as for the color dialog box.

Private Sub DisplayFont_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles DisplayFont.Click
    FontDialog1.ShowDialog()
    TextBox1.Font = FontDialog1.Font
End Sub

Dim FDialog1 As New FontDialog

Private Sub DisplayFont_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles DisplayFont.Click
    FDialog1.ShowDialog()
    TextBox1.Font = FDialog1.Font
End Sub

When you click the button, the font dialog box appears and you can change the font, font size and font style for the text in the text box.

The FileDialog class is an abstract class that inherits from the CommonDialog class. Since it is abstract, you cannot instantiate it directly. However, you can use the OpenFileDialog or SaveFileDialog class to open a file or save an existing file. These two are inherited from the FileDialog class.

Note: It is important to remember that just selecting a file in the Open or Save As dialog box will not open or save the file. To open or save a file, you need to use the file I/O features of .NET.

This is how the dialog box appears:

AddExtension: Gets/Sets whether the dialog box adds an extension to file names if the user doesn't supply the extension.
CheckFileExists: Checks whether the specified file exists before returning from the dialog.
CheckPathExists: Checks whether the specified path exists before returning from the dialog.
DefaultExt: Allows you to set the default file extension.
FileName: Gets/Sets the file name selected in the file dialog box.
FileNames: Gets the file names of all selected files.
Filter: Gets/Sets the current file name filter string, which sets the choices that appear in the "Files of Type" box.
FilterIndex: Gets/Sets the index of the filter selected in the file dialog box.
InitialDirectory: This property allows you to set the initial directory which should open when you use the OpenFileDialog.
Multiselect: When set to True, this property allows the user to select multiple files.
ReadOnlyChecked: Gets/Sets whether the read-only checkbox is checked.
RestoreDirectory: If True, this property restores the original directory before closing.
ShowHelp: Gets/Sets whether the help button should be displayed.
ShowReadOnly: Gets/Sets whether the dialog displays a read-only check box.
Title: This property allows you to set a title for the file dialog box.
ValidateNames: This property is used to specify whether the dialog box accepts only valid file names.

It works in the same way as the above two dialog boxes, i.e. you have two ways of using it. To execute the following code snippets, add one button to the form and change its name and text to Open.
Private Sub open_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles open.Click
    OpenFileDialog1.ShowDialog()
End Sub

Private Sub open_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles open.Click
    Dim openfileDlg As New OpenFileDialog
    openfileDlg.ShowDialog()
End Sub

It looks like this:

The properties of the Save File dialog are the same as those of the Open File dialog; please refer to them above. A notable property of the Save File dialog is the OverwritePrompt property, which displays a warning if we choose to save to a name that already exists. It works in the same way as the above two dialog boxes, i.e. you have two ways of invoking the default Save As dialog box. To execute the following code snippets, add one button to the form and change its name and text to Save.

Private Sub save_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles save.Click
    SaveFileDialog1.ShowDialog()
End Sub

Private Sub save_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles save.Click
    Dim savefileDlg As New SaveFileDialog
    savefileDlg.ShowDialog()
End Sub

You can use the default Print dialog box to print any text or graphics. For this you need to either add a PrintDialog control and a PrintDocument control to the form, or create an instance of the PrintDialog and PrintDocument classes. You also need to set the Document property of the PrintDialog object either to the instance of the PrintDocument class or to the PrintDocument control that you have added to the form. The following code snippet shows how this works.

Imports System.Drawing.Printing

Public Class Form1
    Inherits System.Windows.Forms.Form

    Private Sub Print_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Print.Click
        PrintDialog1.Document = PrintDocument1
        Dim result As DialogResult = PrintDialog1.ShowDialog()
        If result = Windows.Forms.DialogResult.OK Then
            PrintDocument1.Print()
        End If
    End Sub

    Private Sub PrintDocument1_PrintPage(ByVal sender As System.Object, ByVal e As System.Drawing.Printing.PrintPageEventArgs) Handles PrintDocument1.PrintPage
        e.Graphics.DrawString(TextBox1.Text, New Font("Arial", 40, FontStyle.Bold), Brushes.Black, 150, 125)
    End Sub
End Class

As opposed to directly printing a file, a user may want to perform some preliminary preparation on the file or the printer. Microsoft Windows provides another dialog box used to control printing, called Page Setup. To invoke this dialog box, you will need to create an instance of the PageSettings class. The PageSettings class is included in the System.Drawing.Printing namespace. It specifies the settings to be applied to every printed page.
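The text stops short of code for the Page Setup dialog; here is a sketch in the same style as the earlier snippets (the button name PageSetup is an assumption, and PrintDocument1 is assumed to be on the form as in the print example):

' Invoke the Page Setup dialog and keep any settings the user accepts.
Private Sub PageSetup_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles PageSetup.Click
    Dim pageSetupDlg As New PageSetupDialog
    pageSetupDlg.Document = PrintDocument1
    pageSetupDlg.PageSettings = PrintDocument1.DefaultPageSettings
    If pageSetupDlg.ShowDialog() = Windows.Forms.DialogResult.OK Then
        PrintDocument1.DefaultPageSettings = pageSetupDlg.PageSettings
    End If
End Sub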
https://beansoftware.com/NET-Tutorials/Common-Dialog-Classes.aspx
CC-MAIN-2021-49
refinedweb
1,538
51.14
Hi Team, As we saw in the XamDataGrid, the control has the In / Not In filter; we want to implement the same for the XamGrid control. We want to create a custom filter (public class InOperand<T> : FilterOperand), where we must override the FilteringExpression method. Can you share the expression that is used for the XamDataGrid? The class should be generic, so that it works for every type of column (just like in the XamDataGrid control).

Hello, You can follow the example in our online documentation to create custom filters. Keep in mind the XamDataGrid is our premier grid that we maintain and support for use in WPF. Therefore it's preferable that you use it, because we plan on retiring the XamGrid (TBA).

Hi Michael, Yes, we know that the XamGrid will be removed, and we have started the migration as well, but until then there are some tasks we must also do for the XamGrid; this custom In filter is one of them. I already know the link you added, but I was asking about the FilteringExpression for the InOperator, in order to create the custom filter.

Are you trying to override InOperator? Did you try creating a custom class that inherits from FilterOperator? You can create a new operator based on InOperator as you wish and then remove the existing one like this:

FilterColumnSettings fcs = this.MyDataGrid.Columns.DataColumns["ProductID"].FilterColumnSettings;
fcs.RowFilterOperands.Remove(ComparisonOperator.InOperand);

Hi Fernando, Yes, I tried to create a new custom operator, and as I see it, I must override a new expression.

Okay, where are you at? What issues are you running into?
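Whatever the exact override signature in a given Infragistics version (left open above, so treat it as an assumption), the heart of an In operand is a Contains-style predicate over the selected values. A library-agnostic C# sketch of building that expression:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Sketch (not Infragistics code): builds item => values.Contains(selector(item)),
// the predicate a generic In operand would return from its filtering expression.
public static class InExpressionBuilder
{
    public static Expression<Func<TItem, bool>> Build<TItem, TValue>(
        Expression<Func<TItem, TValue>> selector, IEnumerable<TValue> values)
    {
        // Pick the two-parameter Enumerable.Contains overload and close it
        // over the column's value type.
        var contains = typeof(Enumerable).GetMethods()
            .First(m => m.Name == "Contains" && m.GetParameters().Length == 2)
            .MakeGenericMethod(typeof(TValue));
        var body = Expression.Call(contains,
                                   Expression.Constant(values, typeof(IEnumerable<TValue>)),
                                   selector.Body);
        return Expression.Lambda<Func<TItem, bool>>(body, selector.Parameters);
    }
}

Because the builder is generic in TValue, the same operand works for every column type, which is the requirement stated in the first post.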
https://www.infragistics.com/community/forums/f/ultimate-ui-for-wpf/120762/custom-filter-with-inoperand-in-xamgrid
CC-MAIN-2019-51
refinedweb
265
53.1
Published: 10 Jan 2007 By: Alessandro Gallo

As we've seen in parts 2 and 3 of the tutorial, if a client side type exposes an event in its type descriptor, we can handle it declaratively using xml-script. A powerful way to handle events with xml-script is by using actions; in this article we'll see how to build custom actions to handle events declaratively.

An action encapsulates a block of JavaScript code that is executed when a particular event is fired. An example is the SetProperty action, which allows setting the value of a property exposed by a client side component, or the InvokeMethod action, which enables invoking a method and passing parameters to it.

The Microsoft Ajax Library defines a third built-in action called PostBackAction. The purpose of this action is to postback the page (using the ASP.NET postback mechanism, i.e. a call to the __doPostBack function) in response to a particular event.

The example for the PostBack action is a simple HTML button that, when clicked, causes a postback of the page. In this case, clicking the button causes JavaScript code to be executed that calls the __doPostBack function.

If we want to create a custom action, the easy way is to create a class that inherits from Sys.Preview.Action and override the performAction method.

The following example shows a simple AlertAction that can be used to display an alert on screen using xml-script. Create a fresh JavaScript file and name it AlertAction.js. Then, save it inside a ScriptLibrary folder in your Ajax CTP-enabled website and add the code for the AlertAction class (a sketch of it appears at the end of this article).

The AlertAction class exposes a property called message that can be used to set the message to display in the alert box. The performAction method is overridden to call the JavaScript alert method that displays the message on screen. Note that the AlertAction class inherits from Sys.Preview.Action and exposes a type descriptor (returned by the static descriptor expando), and this allows using the class in declarative code.

The declarative code showing the new AlertAction in action is pretty simple to understand, since the custom action is used in the same way as the built-in actions provided by the Microsoft Ajax Library. However, since the AlertAction class is contained in the Samples namespace, we have to bind this namespace to a custom XML namespace. To do this, we have to add a namespace prefix as an attribute to the page element, and then use the defined prefix in the alertAction tag.

The reason why we don't use a prefix for elements like button, label, textBox or postBackAction is that the xml-script parser searches for the corresponding classes inside a set of default namespaces. All the classes declared in different namespaces need to have their namespace mapped to an XML namespace to be correctly used in declarative code. In any case, for XML compliance, it is always recommended to declare at least the global namespace in the root element of a portion of XML code, as we're doing in the root page element.

In this fourth part of the tutorial we've seen how to use the PostBack action to postback a page declaratively and how to build custom actions by inheriting from the base Sys.Preview.Action class and overriding the performAction method. Actions are a great way to encapsulate a portion of JavaScript code and execute it through declarative code.
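For reference, here is a sketch of an AlertAction matching the description above. The registerClass call and the descriptor layout follow Ajax CTP conventions, so treat the exact shape as an assumption rather than the article's original listing:

Type.registerNamespace("Samples");

Samples.AlertAction = function() {
    Samples.AlertAction.initializeBase(this);
    this._message = "";
}

Samples.AlertAction.prototype = {
    get_message: function() { return this._message; },
    set_message: function(value) { this._message = value; },

    // Called by the framework when the associated event fires.
    performAction: function() {
        alert(this._message);
    }
}

// Type descriptor: exposes the "message" property to declarative code.
Samples.AlertAction.descriptor = {
    properties: [ { name: 'message', type: String } ]
}

Samples.AlertAction.registerClass('Samples.AlertAction', Sys.Preview.Action);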
http://dotnetslackers.com/articles/ajax/xml_script_tutorial_part4.aspx
crawl-003
refinedweb
605
57.3
3.12 dup and dup2 Functions

An existing file descriptor is duplicated by either of the following functions:

#include <unistd.h>
int dup(int filedes);
int dup2(int filedes, int filedes2);

Both return: new file descriptor if OK, -1 on error

The new file descriptor returned by dup is guaranteed to be the lowest numbered available file descriptor. With dup2 we specify the value of the new descriptor with the filedes2 argument. If filedes2 is already open, it is first closed. If filedes equals filedes2, then dup2 returns filedes2 without closing it.

The new file descriptor that is returned as the value of the functions shares the same file table entry as the filedes argument. We show this in Figure 3.4.

Figure 3.4. Kernel data structures after dup(1).

In this figure we're assuming that the process executes newfd = dup(1); when it's started. We assume the next available descriptor is 3 (which it probably is, since 0, 1, and 2 are opened by the shell). Since both descriptors point to the same file table entry they share the same file status flags (read, write, append, etc.) and the same current file offset. Each descriptor has its own set of file descriptor flags. As we describe in the next section, the close-on-exec file descriptor flag for the new descriptor is always cleared by the dup functions.

Another way to duplicate a descriptor is with the fcntl function, which we describe in the next section. Indeed, the call

dup(filedes);

is equivalent to

fcntl(filedes, F_DUPFD, 0);

and the call

dup2(filedes, filedes2);

is equivalent to

close(filedes2);
fcntl(filedes, F_DUPFD, filedes2);

In this last case, the dup2 is not exactly the same as a close followed by an fcntl. The differences are:

- dup2 is an atomic operation, while the alternate form involves two function calls. It is possible in the latter case to have a signal catcher called between the close and fcntl that could modify the file descriptors. (We describe signals in Chapter 10.)
- There are some errno differences between dup2 and fcntl.

The dup2 system call originated with Version 7 and propagated through the BSD releases. The fcntl method for duplicating file descriptors appeared with System III and continued with System V. SVR3.2 picked up the dup2 function and 4.2BSD picked up the fcntl function and the F_DUPFD functionality. POSIX.1 requires both dup2 and the F_DUPFD feature of fcntl.
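A short sketch of the most common use of dup2 — redirecting standard output to a file (the file name is arbitrary):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return 1;
    /* Descriptor 1 now shares the file's file table entry; if fd != 1,
       descriptor 1 is first closed, and the whole step is atomic. */
    if (dup2(fd, STDOUT_FILENO) < 0)
        return 1;
    close(fd);               /* the duplicate keeps the file open */
    printf("this line goes to out.log\n");
    return 0;
}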
http://www.informit.com/articles/article.aspx?p=99706&seqNum=12
CC-MAIN-2020-10
refinedweb
412
72.46
Hello. Please upload with full context to make patches easier to read.

Sorry, I'll make sure to upload with full context next time. The comment partially covers my concern. The comment and the assertion that you've pointed out only cover the case where the latch block has a conditional branch that does not exit the loop. However, if the latch block has an unconditional branch, then it will break in the call LatchBR->getSuccessor(ExitIndex) with an out-of-range assertion, since the latch has an unconditional branch. The way I see it, the two problems are related. For this reason, I think that it would be better if we catch both cases of malformed loops. But since the comment that you mention says that "This needs to be guaranteed by the callers of UnrollRuntimeLoopRemainder", I guess it would be better to handle it with an assertion instead of return false (as I suggested in this patch). If you agree with my point, I can update the patch with the assertion (instead of return false). Thanks for the feedback.

We do seem to return false in every other case; I'm not very sure why this would be different and considered a precondition we would assert on. Perhaps it's best to update the assert to a return false too. Mind changing it that way instead?

I completely agree with your assessment. I've done the suggested change and tested it. All the LLVM tests pass as expected.

Please upload the patch with complete context. Could you also add a test case showcasing your problem (which you've described earlier): just a loop with an unconditional latch terminator, run through runtime unrolling. I'd added this as an assert because upstream passes guarantee that by the time we reach unroll remainder, we will have a latch with a conditional terminator.

I've added a test case. Please make sure that it follows the testing standards.

However, as mentioned by Anna, LLVM's loop unroll would not hit this problem as it guarantees externally that the loop latch has an exit branch. I'm happy with this as is, but if you want the API to be tested and ensure it keeps working, you could add a unittest. Something like unittests/Transforms/Utils/BasicBlockUtilsTest.cpp or the ones in CloningTest.cpp. Because I wanted to make sure it worked, I wrote this. Feel free to clean it up and use it as a unittest if you like:

//===- BasicBlockUtils.cpp - Unit tests for BasicBlockUtils ---------------===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
// //===----------------------------------------------------------------------===// #include "llvm/Transforms/Utils/UnrollLoop.h" #include "llvm/Analysis/AssumptionCache.h" #include "llvm/Analysis/LoopInfo.h" #include "llvm/Analysis/ScalarEvolution.h" #include "llvm/Analysis/TargetLibraryInfo.h" #include "llvm/AsmParser/Parser.h" #include "llvm/IR/BasicBlock.h" #include "llvm/IR/Dominators.h" #include "llvm/IR/LLVMContext.h" #include "llvm/Support/SourceMgr.h" #include "gtest/gtest.h" using namespace llvm; static std::unique_ptr<Module> parseIR(LLVMContext &C, const char *IR) { SMDiagnostic Err; std::unique_ptr<Module> Mod = parseAssemblyString(IR, Err, C); if (!Mod) Err.print("BasicBlockUtilsTests", errs()); return Mod; } TEST(LoopUnrollRuntime, Latch) { LLVMContext C; std::unique_ptr<Module> M = parseIR( C, R"(define i32 @test(i32* %a, i32* %b, i32* %c, i64 %n) { entry: br label %while.cond while.cond: ; preds = %while.body, %entry %i.0 = phi i64 [ 0, %entry ], [ %inc, %while.body ] %cmp = icmp slt i64 %i.0, %n br i1 %cmp, label %while.body, label %while.end while.body: ; preds = %while.cond %arrayidx = getelementptr inbounds i32, i32* %b, i64 %i.0 %0 = load i32, i32* %arrayidx %arrayidx1 = getelementptr inbounds i32, i32* %c, i64 %i.0 %1 = load i32, i32* %arrayidx1 %mul = mul nsw i32 %0, %1 %arrayidx2 = getelementptr inbounds i32, i32* %a, i64 %i.0 store i32 %mul, i32* %arrayidx2 %inc = add nsw i64 %i.0, 1 br label %while.cond while.end: ; preds = %while.cond ret i32 0 })" ); auto *F = M->getFunction("test"); DominatorTree DT(*F); LoopInfo LI(DT); AssumptionCache AC(*F); TargetLibraryInfoImpl TLII; TargetLibraryInfo TLI(TLII); ScalarEvolution SE(*F, TLI, AC, DT, LI); bool ret = UnrollRuntimeLoopRemainder(*LI.begin(), 4, true, true, false, &LI, &SE, &DT, &AC, true); EXPECT_FALSE(ret); } Thanks for the suggestion and the unit-test code. I'll add it to the patch. About the check in the test file, I don't want to prevent the pass to unroll this loop. Hi, I'd like to confirm that someone can commit this patch for me because I believe I don't have commit access. Thank you all for the review. Sorry for the delay. Sure, I can do that. Can you rebase this on trunk first? It looks like the unit test files were renamed. I've rebased to the patch and changed the name of the unit-test UnrollLoop.cpp to UnrollLoopTest.cpp, similar to the other files in the same folder.
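The change under discussion reduces to an early out before the successor lookup; a simplified sketch of the shape of the guard (not the exact diff under review):

// Bail out instead of asserting: an unconditional latch branch means the
// loop is not in the form runtime unrolling expects, and calling
// getSuccessor(ExitIndex) on it would be out of range.
BasicBlock *Latch = L->getLoopLatch();
BranchInst *LatchBR = dyn_cast<BranchInst>(Latch->getTerminator());
if (!LatchBR || LatchBR->isUnconditional())
    return false;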
https://reviews.llvm.org/D51486
CC-MAIN-2019-43
refinedweb
825
68.97
Syntax::Feature::Method - Provide a method keyword

version 0.001

use syntax 'method';

method foo ($n) { $n * $self->bar }

my $method = method ($msg) {
    print "$msg\n";
};

This module will install the Method::Signatures::Simple syntax extension into the requesting namespace. You can import the keyword multiple times under different names with different options:

use syntax 'method',
    'method' => { -as => 'classmethod', -invocant => '$class' };

use syntax method => { -as => 'provide' };

provide addition ($n, $m) { $n + $m }

The -as option allows you to install the keyword under a different name. This is especially useful if you want a separate keyword for class methods with a different invocant.

-invocant

use syntax method => { -invocant => '$me' };

method sum { $me->foo + $me->bar }

Allows you to set a different default invocant. Useful if you want to import a second keyword for class methods that has a $class invocant.

install

Called by the syntax dispatcher to install the extension into the requesting package.

SEE ALSO

syntax, Method::Signatures::Simple

BUGS

Please report any bugs or feature requests to [email protected] or through the web interface.
http://search.cpan.org/~phaylon/Syntax-Feature-Method-0.001/lib/Syntax/Feature/Method.pm
CC-MAIN-2017-51
refinedweb
176
53.51
BITSTRING(3)              OpenBSD Programmer's Manual              BITSTRING(3)

NAME
     bit_alloc, bit_clear, bit_decl, bit_ffc, bit_ffs, bit_nclear, bit_nset,
     bit_set, bitstr_size, bit_test - bit-string manipulation macros

SYNOPSIS
     #include <bitstring.h>

     bitstr_t *
     bit_alloc(int nbits);

     bit_clear(bit_str name, int bit);

     bit_decl(bit_str name, int nbits);

     bit_ffc(bit_str name, int nbits, int *value);

     bit_ffs(bit_str name, int nbits, int *value);

     bit_nclear(bit_str name, int start, int stop);

     bit_nset(bit_str name, int start, int stop);

     bit_set(bit_str name, int bit);

     bitstr_size(int nbits);

     bit_test(bit_str name, int bit);

DESCRIPTION
     These macros operate on strings of bits.

     The bit_alloc() macro returns a pointer of type bitstr_t * to
     sufficient space to store nbits bits, or NULL if no space is available.

     The bit_decl() macro allocates sufficient space to store nbits bits on
     the stack.

     The bitstr_size() macro returns the number of elements of type bitstr_t
     necessary to store nbits bits. This is useful for copying bit strings.

     The bit_clear() and bit_set() macros clear or set the zero-based
     numbered bit bit in the bit string name.

     The bit_nclear() and bit_nset() macros clear or set the zero-based
     numbered bits from start through stop in the bit string name.

     The bit_ffs() macro stores in the location referenced by value the
     zero-based number of the first bit set in the array of nbits bits
     referenced by name. If no bits are set, the location referenced by
     value is set to -1.

     The bit_ffc() macro stores in the location referenced by value the
     zero-based number of the first bit not set in the array of nbits bits
     referenced by name. If all bits are set, the location referenced by
     value is set to -1.

     The arguments to these macros are evaluated only once and may safely
     have side effects.

EXAMPLE
     #include <limits.h>
     #include <bitstring.h>

     #define LPR_BUSY_BIT            0
     #define LPR_FORMAT_BIT          1
     #define LPR_DOWNLOAD_BIT        2
     #define LPR_AVAILABLE_BIT       9
     #define LPR_MAX_BITS            10

     make_lpr_available() { ... }

SEE ALSO
     malloc(3)

HISTORY
     The bitstring functions first appeared in 4.4BSD.

OpenBSD 2.6                      July 19, 1993
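A sketch of how make_lpr_available() might use these macros — the bit layout comes from the defines above, while the body is an assumption rather than the original example:

void
make_lpr_available(void)
{
        bitstr_t *bits;

        /* Allocate a string of LPR_MAX_BITS bits. */
        if ((bits = bit_alloc(LPR_MAX_BITS)) == NULL)
                return;
        bit_set(bits, LPR_AVAILABLE_BIT);
        if (!bit_test(bits, LPR_BUSY_BIT)) {
                bit_clear(bits, LPR_FORMAT_BIT);
                bit_set(bits, LPR_DOWNLOAD_BIT);
        }
        free(bits);
}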
http://www.rocketaware.com/man/man3/bitstring.3.htm
crawl-002
refinedweb
280
61.26
compat utilities. Setting the environment variable COMMAND_MODE to the value unix2003 selects the conforming behavior; if the variable is unset, it behaves as if it were set to unix2003.

COMPILATION
Defining _NONSTD_SOURCE for i386 causes library and kernel calls to behave as closely to Mac OS X 10.3's library and kernel calls as possible. Any behavioral changes are documented in the LEGACY sections of the man pages for the individual function calls. Defining this macro when compiling for any other architecture will result in a compilation error. For deployment target values less than 10.5, UNIX conformance will be off when targeting i386 (the equivalent of defining _NONSTD_SOURCE).

In order to provide both legacy and conformance versions of functions, two versions of affected functions are provided. Legacy variants have symbol names with no suffix in order to maintain ABI compatibility. Conformance versions have a $UNIX2003 suffix appended to their symbol name. These $UNIX2003 suffixes are automatically appended by the compiler toolchain and should not be used directly. Platforms that were released after these updates only have conformance variants available and do not have a $UNIX2003 suffix.

 i386
 ------------------------------+--------------------------------------------
 user defines       deployment | namespace conformance          suffix
                    target     |
 ------------------------------+--------------------------------------------
 (none)             < 10.5     | full      10.3 compatibility   (none)
 (none)             >= 10.5    | full      SUSv3 conformance    $UNIX2003
 _NONSTD_SOURCE     (any)      | full      10.3 compatibility   (none)
 _DARWIN_C_SOURCE   < 10.4     | full      10.3 compatibility   (none)
 _DARWIN_C_SOURCE   >= 10.4    | full      SUSv3 conformance    $UNIX2003
 _POSIX_C_SOURCE    < 10.4     | strict    10.3 compatibility   (none)
 _POSIX_C_SOURCE    >= 10.4    | strict    SUSv3 conformance    $UNIX2003
 ------------------------------+--------------------------------------------

 Newer Architectures
 ------------------------------+--------------------------------------------
 user defines       deployment | namespace conformance          suffix
                    target     |
 ------------------------------+--------------------------------------------
 (none)             (any)      | full      SUSv3 conformance    (none)
 _NONSTD_SOURCE     (any)      | (error)
 _DARWIN_C_SOURCE   (any)      | full      SUSv3 conformance    (none)
 _POSIX_C_SOURCE    (any)      | strict    SUSv3 conformance    (none)
 ------------------------------+--------------------------------------------

STANDARDS
With COMMAND_MODE set to anything other than legacy, utility functions conform to Version 3 of the Single UNIX Specification (``SUSv3'').

With _POSIX_C_SOURCE or _DARWIN_C_SOURCE for i386, or when building for any other architecture, system and library calls conform to Version 3 of the Single UNIX Specification (``SUSv3'').

BUGS
Different parts of a program can be compiled with different compatibility settings. The resultant program will have a mixture of the two behaviors.

Darwin                          June 30, 2010                          Darwin
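In practice these feature macros must be visible before the first system header is included. A minimal sketch (the specific macro value is one common choice, not the only one; which symbol variants you actually get follows the tables above):

/* Request strict SUSv3-conforming variants ($UNIX2003 where applicable).
 * This must appear before any #include of a system header. */
#define _POSIX_C_SOURCE 200112L

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Library calls made here resolve to the conformance variants
     * (on i386 with a >= 10.4 deployment target). */
    printf("%zu\n", strlen("conformance"));
    return 0;
}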
http://manpagez.com/man/5/compat/
CC-MAIN-2017-22
refinedweb
353
50.84
CARM User's Guide: printf

#include <stdio.h>

int printf (
    const char *fmtstr     /* format string */
    [, arguments ...]);    /* additional arguments */

The printf function formats a series of strings and numeric values and builds a string to write to the output stream using the putchar function. The fmtstr argument is a format string that may be composed of characters, escape sequences, and format specifications. Ordinary characters and escape sequences are copied to the stream in the order in which they are interpreted. Format specifications always begin with a percent sign ('%') and require that additional arguments are included in the printf function call.

The format string is read from left to right. The first format specification encountered references the first argument after fmtstr and converts and outputs it using the format specification. The second format specification accesses the second argument after fmtstr, and so on. If there are more arguments than format specifications, extra arguments are ignored. Results are unpredictable if there are not enough arguments for the format specifications or if the argument types do not match those specified by fmtstr.

Format specifications have the following general format:

    % [flags] [width] [.precision] [{l|L}] type

Each field in the format specification may be a single character or a number which specifies a particular format option. The type field is a single character that specifies whether the argument is interpreted as a character, string, number, or pointer, as shown in the following table. The optional characters l or L may immediately precede the type character to respectively specify long versions of the integer types.

The width field is a non-negative number that specifies the minimum number of characters printed. If the number of characters in the output value is less than width, blanks are added on the left (by default) or right (when the - flag is specified) to pad to the minimum width. If width is prefixed with a '0', zeros are added instead of blanks. The width field may be an asterisk ('*'), in which case an int argument from the argument list provides the value. Specifying a 'b' in front of the asterisk specifies that the argument is an unsigned char.

The precision field is a non-negative number that specifies the number of characters to print, the number of significant digits, or the number of decimal places. The precision field can cause truncation or rounding of the output value in the case of a floating-point number as specified in the following table. The precision field may be an asterisk ('*'), in which case an int argument from the argument list provides the value. Specifying a 'b' in front of the asterisk specifies that the argument is an unsigned char.

Note: The printf function returns the number of characters actually written to the output stream.

See Also: gets, puts, scanf, sprintf, sscanf, vprintf, vsprintf

Example:

#include <stdio.h>

void tst_printf (void) {
    char a = 1;
    int b = 12365;
    long c = 0x7FFFFFFF;

    unsigned char x = 'A';
    unsigned int y = 54321;
    unsigned long z = 0x4A6F6E00;

    float f = 10.0;
    float g = 22.95;

    char buf [] = "Test String";
    char *p = buf;

    printf ("char %d int %d long %ld\n", a, b, c);
    printf ("Uchar %u Uint %u Ulong %lu\n", x, y, z);
    printf ("xchar %x xint %x xlong %lx\n", x, y, z);
    printf ("String %s is at address %p\n", buf, p);
    printf ("%f != %g\n", f, g);
    printf ("%*f != %*g\n", 8, f, 8, g);
}
http://www.keil.com/support/man/docs/ca/ca_printf.htm
CC-MAIN-2020-16
refinedweb
542
53.41
I have uninstalled McAfee Antivirus and installed Symantec Endpoint Protection. McAfee is still showing up in inventory and as out of date on the dashboard. Any ideas? J

3 Replies

Nov 16, 2010 at 9:34 UTC
Follow at least #1 - #3 of this how-to: Troubleshooting Spiceworks Inventory Inconsistencies - Spiceworks ...

Nov 16, 2010 at 10:22 UTC
Do try what bytesnake suggests first. Also check with a WMI query to the AntiVirusProduct namespace. It could be possible that both the old A/V and the new A/V are registered. I have seen some mention of that in the past.

Dec 15, 2010 at 2:22 UTC
WMI hit it right on the head..... Thanks.......TimJ42 and all others......
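For reference, a sketch of the WMI check the second reply suggests, using Tim Golden's third-party wmi package for Python (an assumption on tooling; also note the namespace is root\SecurityCenter on XP/2003-era machines and root\SecurityCenter2 on Vista and later):

import wmi

# Use root\SecurityCenter on XP/2003, root\SecurityCenter2 on Vista/7+.
c = wmi.WMI(namespace=r"root\SecurityCenter2")

# If both the old and the new product are still registered,
# both rows will show up here.
for av in c.AntiVirusProduct():
    print(av.displayName)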
https://community.spiceworks.com/topic/118767-removed-mcaffee-still-in-inventory
CC-MAIN-2017-17
refinedweb
123
83.76
This is the mail archive of the [email protected] mailing list for the pthreads-win32 project.

Gianluca wrote:
> I have built the library with Will Bryant's bmakefile. I've received a
> bunch of warnings but it was OK. I put PthreadBC.dll in the Windows
> directory, I included PthreadBC.lib in my .bpr project, and I compiled
> and linked the program below.

There are pthreads implementations that allow a NULL thread parameter (Solaris - see below) and the question of a NULL value has been asked before on this list. My copy of the SUSV3 standard doesn't say that NULL can't be passed and doesn't require an error be returned. It appears to be left to the implementation.

In general (i.e. not for pthreads-win32 specifically), you need to declare a pthread_t and pass its address as the first argument to pthread_create. Try making that change first.

#include <stdio.h>
#include <pthread.h>

void * thr(void * arg)
{
  printf("Inside thread\n");

  /* Mandatory if we're going to be well behaved. */
  pthread_detach(pthread_self());
}

int main()
{
  pthread_t t;
  int result = 0;

  result == pthread_create(NULL, NULL, thr, NULL);

  sleep(2);
  return result;
}

Linux (Redhat 9) segfaults without running the thread. Solaris 7 runs the thread and exits with no error or fault.
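For reference, a corrected sketch along the lines the reply suggests: pass the address of a real pthread_t, and note that the original's `result == pthread_create(...)` comparison was presumably meant to be an assignment (joining instead of sleeping is my choice here, not part of the thread):

#include <stdio.h>
#include <pthread.h>

void *thr(void *arg)
{
    printf("Inside thread\n");
    return NULL;
}

int main(void)
{
    pthread_t t;
    /* Assignment, not comparison; first argument is a real pthread_t. */
    int result = pthread_create(&t, NULL, thr, NULL);
    if (result != 0)
        return result;
    pthread_join(t, NULL);  /* deterministic, unlike sleep(2) + detach */
    return 0;
}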
http://sourceware.org/ml/pthreads-win32/2004/msg00096.html
CC-MAIN-2018-30
refinedweb
216
69.18
I'm trying to make a Palindrome program and this loop is giving me trouble: for (int i= 0; i < 79; i++) { character[i] = scan.next(); if (character[i] == ".") {break; } } The user enters a word, and when they are done, they enter a period, which is supposed to break them out of the loop. But when I run the program and enter a period, nothing happens. It just allows me to enter letters until the array is full. Why is the program skipping over the if-statement? Here is the full program (And yes, I'm sure there are plenty of other mistakes, but I'll try to fix them on my own later. Forgive me, I'm new to Java): import java.util.Scanner; public class Palindrome { public static void main(String[] args) { System.out.println("Please enter a word or phrase. Enter a '.' when you are done."); Scanner scan = new Scanner (System.in); String[] character = new String[79]; for (int i= 0; i < 79; i++) { character[i] = scan.next(); if (character[i] == ".") {break; } } if (isPalindrome(character)==true) {System.out.println("It is a palindrome."); } if (isPalindrome(character) != true) {System.out.println("It is not a palindrome."); } } public static Boolean isPalindrome (String[] chars){ String[] reverse = new String[79]; for (int i=reverse.length-1;i>=0; i--) { if(reverse[i] == chars[i]) {return true; } else return false; } } }
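The likely culprit here is the string comparison: in Java, == on two String objects compares references, not contents, and scan.next() returns a fresh String object, so character[i] == "." is false even when the user types a period. A minimal sketch of the fix (variable names taken from the post):

// Compare string contents, not references:
if (character[i].equals(".")) {
    break;
}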
https://www.daniweb.com/programming/software-development/threads/463985/beginner-s-java-loop-problem
CC-MAIN-2020-29
refinedweb
228
69.68
LoPy and LIDAR-Lite v2 I2C

Hi All, I've recently purchased a LoPy and am trying to interface with a Lidar-Lite version 2 over I2C. I'm able to find the peripheral on the bus and successfully send commands, but reading from memory returns only 0x00 or 0x04, an unexpected result. The I2C protocol can be found here. I've verified with an Arduino that the sensor is functional and that the power supply is stable enough for measurements. The Arduino library can be found here. While the sensor requires 5V for operation, the I2C is at 3.3V. I've also tried adding 4.7k pullups to SDA and SCL (P9 and P10 on the LoPy) with no change in the behavior.

I initialize as follows:

from machine import I2C
import time
i2c = I2C(0)
i2c = I2C(0, I2C.MASTER)
time.sleep(1)
i2c.init(I2C.MASTER, baudrate=100000)
addr = i2c.scan()[0]

addr becomes 98, as expected for the address 0x62. According to the register definitions, a measurement is taken by writing 0x04 to 0x00:

i2c.writeto_mem(addr,0x00,bytes([0x04])) #take measurement

The write command returns 1. The LSB of register 0x01 can then be polled to check the sensor status, where 0 indicates it's not busy:

data = i2c.readfrom_mem(addr,0x01, 1)
dataLSB = data[0] & 1

This returns only 0x00 or 0x04, which I believe to be erroneous. Finally, the measurement can be read back from 0x8f as two bytes:

data = i2c.readfrom_mem(addr,0x8f, 2) #Read 2 bytes from measurement register
print(data)
distance = (data[0] << 8) + data[1]
print(distance)

Again, it only returns 0x00 or 0x04. Stuck on this one and appreciate any help or past experiences that the community might have.

@kingj First change your init to only one line - I don't suppose this matters, but...

i2c = I2C(0, I2C.MASTER, baudrate=100000)

and first try adding the default config:

i2c.writeto_mem(addr,0x00,bytes([0x00]))

@livius and @RobTuDelft thanks for your suggestions, I have attempted both. Unfortunately reading the status register always returns 0x00 and therefore a first bit of 0. I tried using a level shifter but the result was the same. The returned data is only 0x00.

RobTuDelft: Probably a good idea to read the status register (0x01) in a loop until the first bit is "0".

@kingj First, in your situation I would add 2x 2N7000 or BSS138 or any 3V3-to-5V level shifter and try again.
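Pulling the thread's steps together, a minimal MicroPython polling sketch per the register map above (untested on real hardware; the 20 ms delay and the retry count are arbitrary choices):

import time
from machine import I2C

i2c = I2C(0, I2C.MASTER, baudrate=100000)
addr = 0x62

def read_distance():
    i2c.writeto_mem(addr, 0x00, bytes([0x04]))       # trigger a measurement
    for _ in range(20):                              # poll busy flag (LSB of 0x01)
        if i2c.readfrom_mem(addr, 0x01, 1)[0] & 1 == 0:
            break
        time.sleep(0.02)
    data = i2c.readfrom_mem(addr, 0x8f, 2)           # high byte, low byte
    return (data[0] << 8) + data[1]                  # distance in cm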
https://forum.pycom.io/topic/1261/lopy-and-lidar-lite-v2-i2c
CC-MAIN-2022-33
refinedweb
420
66.33
…(an object factory that creates an object instance) and Spring (an object factory that does not create objects itself; it simply looks them up from Spring).

Lastly, the Java component allows you to configure interceptors. An interceptor enables the developer to provide additional synchronous processing that is executed around the Mule processor in the flow. When no response is required from the client, the transaction is referred to as one-way.

General Tab

The General tab lets you change the display name, and specify a class and object. To configure the Java component:

1. From the Message Flow canvas, double-click the Java component to open the Properties Editor.
2. Click the green plus symbol to the right of the Class Name field to create a class for the component. Selecting a class is a required setting.
3. In New Java Class, specify the class name in the Name field and click Finish. The Java class file opens in the text editor. See Basic Hello World Java Component Class for an example program you can copy and paste into the editor.

Advanced Tab

The Advanced tab lets you manage interceptors and select the object factory to be used by the component. An interceptor contains the business logic that is applied to the message payload before being sent to the next processor.

Basic Hello World Java Component Class

Right-click your project and select New > Class. Name your class, and make the package the name of your project. You can use the basic class as a skeleton to construct a simple Java component:

package project_name;

import org.mule.api.MuleEventContext;
import org.mule.api.MuleMessage;
import org.mule.api.lifecycle.Callable;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractMessageTransformer;

public class helloWorldComponent implements Callable{

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        eventContext.getMessage().setInvocationProperty("myProperty", "Hello World!");
        return eventContext.getMessage().getPayload();
    }
}

Drag a new Java Component into your flow, and set the Class Name field to reference your newly created class. Or in the XML view, add a component element, and reference the Java class in the class attribute:

<component class="javacomponent.helloWorldComponent"/>

Project XML Code

The complete XML code is (attributes lost in extraction are shown as ...):

<mule ...>
    <flow name="javacomponentFlow">
        <http:listener .../>
        <component class="javacomponent.helloWorldComponent">
            <spring-object/>
        </component>
    </flow>
</mule>
https://docs.mulesoft.com/mule-runtime/3.9/java-component-reference
CC-MAIN-2019-51
refinedweb
354
50.43
Discover, access, and read raw and processed data from any data historian compliant with the OPC Historical Data Access standard.

Manage connections to OPC servers and collections of server items or tags.

Read from or write to individual items or all the items in the group simultaneously.

Browse the server namespace and retrieve fully qualified IDs of each item stored on the server.

Browse for available OPC UA servers. You then connect to an OPC UA server by creating an OPC UA Client object.

Discover more about OPC Toolbox by exploring these resources.

Explore documentation for OPC Toolbox functions and features, including release notes and examples.

View system requirements for the latest release of OPC Toolbox.

View articles that demonstrate technical advantages of using OPC Toolbox.

Read how OPC Toolbox is accelerating research and development in your industry.

Find answers to questions and explore troubleshooting resources.

There are many ways to start using OPC Toolbox. Download a free trial, or explore pricing and licensing options. Purchase OPC Toolbox and explore related products.

Use OPC Toolbox to solve scientific and engineering challenges.
https://jp.mathworks.com/products/opc.html?action=changeCountry
CC-MAIN-2017-13
refinedweb
193
59.6
Alright, time to have some fun exploring efficient negative sampling implementations in NumPy…

Negative sampling is a technique used to train machine learning models that generally have several orders of magnitude more negative observations than positive ones. And in most cases, these negative observations are not given to us explicitly and instead must be generated somehow. Today, I think the most prevalent usage of negative sampling is in training Word2Vec (or similar) and in training implicit recommendation systems (BPR). In this post, I'm going to frame the problem under the recommendation system setting — sorry NLP fans.

Problem

For a given user, we have the indices of positive items corresponding to that user. These are items that the user has consumed in the past. We also know the fixed size of the entire item catalog. Oh, we will also assume that the given positive indices are ordered. This is quite a reasonable assumption because positive items are often stored in CSR interaction matrices (err… at least in the world of recommender systems). And from this information, we would like to sample from the other (non-positive) items with equal probability.

n_items = 10
pos_inds = [3, 7]

Bad Ideas

We could enumerate all the possible choices of negative items and then use np.random.choice (or similar). However, as there are usually orders of magnitude more negative items than positive items, this is not memory friendly.

Incremental Guess and Check

As a trivial (but feasible) solution, we are going to continually sample a random item from our catalog, and keep items if they are not positive. This will continue until we have enough negative samples.

def negsamp_incr(pos_check, pos_inds, n_items, n_samp=32):
    """ Guess and check with arbitrary positivity check """
    neg_inds = []
    while len(neg_inds) < n_samp:
        raw_samp = np.random.randint(0, n_items)
        if not pos_check(raw_samp, pos_inds):
            neg_inds.append(raw_samp)
    return neg_inds

A major downside here is that we are sampling a single value many times — rather than sampling many values once. And although it will be infrequent, we have to re-sample if we get unlucky and randomly choose a positive item. This family of strategies will pretty much only differ by how item positivity is checked.

We will go through a couple of ways to tinker with the complexity of the positivity check, but keep in mind that the number of positive items is generally small, so these modifications are actually not super-duper important.

Using the in operator on the raw list: With a list, the item positivity check is O(n) as it checks every element of the list.

def negsamp_incr_naive(pos_inds, n_items, n_samp=32):
    """ Guess and check with list membership """
    pos_check = lambda raw_samp, pos_inds: raw_samp in pos_inds
    return negsamp_incr(pos_check, pos_inds, n_items, n_samp)

Using the in operator on a set created from the list: Here, we're going to first convert our list into a python set, which is implemented as a hashtable. Insertion is O(1), so the conversion itself is O(n). However, once the set is created, our item positivity check (set membership) will be O(1) thereafter. So we can expect this to be a nicer strategy if n_samp is large.
def negsamp_incr_set(pos_inds, n_items, n_samp=32):
    """ Guess and check with hashtable membership """
    pos_inds = set(pos_inds)
    pos_check = lambda raw_samp, pos_inds: raw_samp in pos_inds
    return negsamp_incr(pos_check, pos_inds, n_items, n_samp)

Using a binary search on the list (assuming it's sorted): One of the best things you can do to exploit the sortedness of a list is to use binary search. All this does is change our item positivity check to O(log(n)).

from bisect import bisect_left

def bsearch_in(search_val, val_arr):
    i = bisect_left(val_arr, search_val)
    return i != len(val_arr) and val_arr[i] == search_val

def negsamp_incr_bsearch(pos_inds, n_items, n_samp=32):
    """ Guess and check with binary search
    `pos_inds` is assumed to be ordered
    """
    pos_check = bsearch_in
    return negsamp_incr(pos_check, pos_inds, n_items, n_samp)

(Aside: LightFM, a popular recommendation system, implements this in Cython. They also have a good reason to implement this in a sequential fashion — but we won't go into that.)

Vectorized Binary Search

Here we are going to address the issue of incremental generation. All random samples will now be generated and verified in a vectorized manner. The upside here is that we will reap the benefits of NumPy's underlying optimized vector processing. Any positives found during this check will then be masked off.

A new problem arises in that if we hit any positives, we will end up returning fewer samples than prescribed by the n_samp parameter. Yeah, we could fill in the holes with the previously discussed strategies, but let's just leave it at that.

def negsamp_vectorized_bsearch(pos_inds, n_items, n_samp=32):
    """ Guess and check vectorized
    Assumes that we are allowed to potentially
    return less than n_samp samples
    """
    raw_samps = np.random.randint(0, n_items, size=n_samp)
    ss = np.searchsorted(pos_inds, raw_samps)
    pos_mask = raw_samps == np.take(pos_inds, ss, mode='clip')
    neg_inds = raw_samps[~pos_mask]
    return neg_inds

Vectorized Pre-verified Binary Search

Finally, we are going to address both main pitfalls of the guess-and-check strategies:

- Vectorize: generate all our random samples at once
- Pre-verify: no need for an item positivity check

We know how many negative items are available to be sampled since we have the size of our item catalog, and the number of positive items ( len(pos_inds) is just O(1) ) to subtract off. So let's sample uniformly over a range of imaginary negative indices with 1–1 correspondence with our negative items. This gives us the correct distribution since we have the correct number of negative item slots to sample from; however, the indices now need to be adjusted.

To fix our imaginary index, we must add the number of positive items that precede each position. Assuming our positive indices are sorted, this is just a binary search (compliments of np.searchsorted). But keep in mind that in our search, for each positive index, we also need to subtract the number of positive items that precede each position.
n_items = 10 pos_inds = [3, 7] # raw_samp = np.random.randint(0, n_items - len(pos_inds), size=n_samp) # Instead of sampling, see what happens to each possible sampled value raw_samp = np.arange(0, n_items - len(pos_inds)) raw_samp: array([0, 1, 2, 3, 4, 5, 6, 7]) # Subtract the number of positive items preceding pos_inds_adj = pos_inds - np.arange(len(pos_inds)) pos_inds_adj: array([3, 6]) # Find where each raw sample fits in our adjusted positive indices ss = np.searchsorted(pos_inds_adj, raw_samp, side='right') ss: array([0, 0, 0, 1, 1, 1, 2, 2]) # Adjust our raw samples neg_inds = raw_samp + ss neg_inds: array([0, 1, 2, 4, 5, 6, 8, 9]) As desired, each of our sampled values has a 1–1 mapping to a negative item. Summary Notebook with Results The notebook linked below compares the implementations discussed in this post in some example scenarios. The previously discussed “Vectorized Pre-verified Binary Search” strategy seems to be the most performant except in the edge case where n_samp=1 where vectorization no longer pays off (in that case, all strategies are very close). Concluding Remarks In models that require negative sample, the sample stage is often a bottleneck in the training process. So even little optimizations like this are pretty helpful. Some further thinking: - how to efficiently sample for many users at a time (variable length number of positive items) - at what point (sparsity of our interaction matrix) does our assumption that n_neg_items >> n_pos_itemswreck each implementation - how easy is it to modify each implementation to accommodate for custom probability distributions — if we wanted to take item frequency or expose into account
https://tech.hbc.com/2018-03-23-negative-sampling-in-numpy.html
CC-MAIN-2019-35
refinedweb
1,320
51.18
Recursive Regular Expression

August 20, 2015

I was discussing with a colleague a simple problem that his company was asking during an interview: "Given a string composed of open and closed parentheses, detect if all the parentheses are closed"

((())(())) -> Ok
()() -> Ok
()()) -> Wrong
())( -> Wrong

You can solve this problem with a counter starting from 0, incrementing by 1 when you meet ( and decrementing by 1 when you meet ). The sum needs to stay positive or equal to zero, otherwise it's an invalid string. A basic function in python to do this check of parentheses could look like this:

def check(val):
    counter = 0
    for c in val:
        if c == '(':
            counter += 1
        elif c == ')':
            counter -= 1
        else:
            raise AttributeError('invalid character in the argument')
        if counter < 0:
            return False
    return counter == 0

It's not the most elegant piece of python code, but would it be possible to do the same with a regular expression? And the answer is YES!

Now, it's not possible to do it with the built-in re package in Python because it doesn't support recursive patterns! To solve this problem with a regular expression in Python, then, you need to install the regex package, which is more compatible with PCRE.

PCRE 4.0 and later introduced regular expression recursion; this allows re-executing all or a part of the regular expression on the unmatched text. To use a recursive regex, you use (?R) or (?0). When the regex engine reaches (?R), this tells the engine to attempt the whole regex again at the present position in the string. If you want to reapply only a specific part of the regex, then you use the grouping index: (?1), (?2)

Using this, we can solve more complex problems with regex. Let's start with a simpler one and try to detect palindromes:

>>> import regex
>>> regex.search(r"(\w)((?R)|(\w?))\1", "kayak") is not None
True
>>> regex.search(r"(\w)((?R)|(\w?))\1", "random") is not None
False

Let's analyse and decompose this regex:

- (\w) matches a single alphabetic character. eg: 'k'
- (\w)\1 matches 2 identical alphabetic characters. \1 matches the same value as (\w) matched. The number 1 represents the group position. eg: 'aa', 'bb'
- (\w)(\w?)\1 matches 2 or 3 alphabetic characters where the first and the last are equal. eg: 'kak', 'kk'
- (\w)(((\w)\4)|(\w?))\1 matches a 3- or 4-character palindrome. eg: 'kaak' or 'kak'

With (\w)(((\w)\4)|(\w?))\1, you can see that we are repeating the same logic to be able to match a palindrome 1 character longer than (\w)(\w?)\1. Ideally we would like a way to make a loop or define a recursive pattern. Perfect, that's what this post is about, and you can express that with ((?R)|(\w?)), which applies the whole regex at the current position, or stops if there is 1 or 0 character left to process (\w?).

You can play with this regex via this link.

Let's come back to our initial problem of parentheses. Using what we learned with the palindrome example, we can write a regex to solve it.
The answer is ^(\((?1)*\))(?1)*$. Let's see this regex in action:

>>> import regex
>>> regex.search(r"^(\((?1)*\))(?1)*$", "()()") is not None
True
>>> regex.search(r"^(\((?1)*\))(?1)*$", "(((()))())") is not None
True
>>> regex.search(r"^(\((?1)*\))(?1)*$", "()(") is not None
False
>>> regex.search(r"^(\((?1)*\))(?1)*$", "(((())())") is not None
False

Let's analyse and decompose this regex:

- ^ matches the start of the string
- $ matches the end of the string
- (\(\)) matches open and close parentheses ()
- (\((?R)?\)) matches parentheses like ((()))
- (\((?R)*\)) matches parentheses like (()()())
- (\((?1)*\))(?1)* matches parentheses like (()()())(()), where (?1) is (\((?1)*\))
- ^(\((?1)*\))(?1)*$: we add ^ and $ to consume the whole string

You can play with this regex via this link.

If you ask about performance, then the python code that we wrote at the start performs better:

>>> import regex
>>> %timeit -n1000 check("(()())")
1000 loops, best of 3: 1.81 µs per loop
>>> %timeit -n1000 regex.search(r"^(\((?1)*\))(?1)*$", "(()())")
1000 loops, best of 3: 13 µs per loop
>>> comp = regex.compile(r"^(\((?1)*\))(?1)*$")
>>> %timeit -n1000 comp.match("(()())")
1000 loops, best of 3: 7.49 µs per loop

The syntax above is based on ipython, which allows executing timeit with the syntactic sugar %timeit. This test was done on my laptop, but you can see that the simple python code is much faster than using a regex for this problem. It's not a surprising result, because this problem can be solved in O(n), and even if I don't know the complexity of applying a regular expression, I expect it to be bigger than O(n), since it has to parse the input and do some kind of recursion. Still, I was curious to try it, because it's difficult to anticipate some behavior in Python when a component is written in C and used from Python.

I hope that this post will give you a taste of the advanced features possible with regular expressions.
http://rachbelaid.com/recursive-regular-experession/
CC-MAIN-2018-26
refinedweb
832
72.46
#include <Servo.h>

//int buttonPin = 7;
int servoPin = 12;
Servo servo;
int angle = 0; // servo position in degrees

void setup() {
  servo.attach(servoPin);
  // pinMode(buttonPin, INPUT);
}

void loop() {
  // val = digitalRead(buttonPin);
  // if (val == HIGH) {
  //   servo.write(180);
  //   delay(3000);
  //   servo.write(0);
  // }
}

Do you have a link to the Servo you purchased? Sounds like you bought a continuous rotation servo.

> Sounds like you bought a continuous rotation servo.

A way to test that would be to send it to position 90, which if I understand things correctly is 0 speed on a continuous servo, with position 0 being max speed one way and 180 being max speed the other. (Although from what I read here it's not always 90 exactly, try values either side.) Your 0 would indeed be sending it off in circles if it's a continuous servo.

I can't even do that because as soon as I connect the signal wire, it starts turning.... Uno 5V to breadboard +

No, I had it working to go from 0 to 180, and back to 0.... But somehow after my kid did something, it just started to turn endlessly.... now the servo keeps turning.
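A quick test sketch along the lines suggested above (pin 12 is taken from the original post; on a continuous-rotation servo, write(90) should stop it, and nearby values give slow rotation in either direction):

#include <Servo.h>

Servo servo;

void setup() {
  servo.attach(12);  // same signal pin as the original post
  servo.write(90);   // ~neutral: a continuous servo should stop here
}

void loop() {
  // If the servo still creeps at 90, try 88..92 to find the true stop point.
}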
http://forum.arduino.cc/index.php?topic=154309.msg1157102
CC-MAIN-2015-22
refinedweb
229
66.33
Today's lab focuses on an application of nested loops: sorting lists of numbers. The basic idea of a sorting algorithm is to take a list of items (we'll focus on whole numbers, but you can sort just about anything that you can measure) and return the list in order. So, for example, if you were given as input the list:

[3, 8, 23, 2, 4, 9, 45, 1]

the result of sorting the numbers (in increasing order) would be:

[1, 2, 3, 4, 8, 9, 23, 45]

There are many different approaches to sorting. Here's the idea for the selectionSort algorithm:

[3, 8, 23, 2, 4, 9, 45, 1] <-- starting list
[1, 3, 8, 23, 2, 4, 9, 45] <-- selected 1 and moved it to the front
[1, 2, 3, 8, 23, 4, 9, 45] <-- selected 2 from remaining lst[1:] and moved it
[1, 2, 3, 8, 23, 4, 9, 45] <-- selected 3 from lst[2:] but already in place
[1, 2, 3, 4, 8, 23, 9, 45] <-- selected 4 from lst[3:] and moved it
[1, 2, 3, 4, 8, 23, 9, 45] <-- selected 8 from lst[4:] but already in place
[1, 2, 3, 4, 8, 9, 23, 45] <-- selected 9 from lst[5:] and moved it
[1, 2, 3, 4, 8, 9, 23, 45] <-- selected 23 from lst[6:] but already in place

The python code (from the book's Chapter 13) is:

def selSort(lst):
    n = len(lst)
    for bottom in range(n-1):
        mp = bottom
        for i in range(bottom+1,n):
            if lst[i] < lst[mp]:
                mp = i
        lst[bottom], lst[mp] = lst[mp], lst[bottom]

Note the nested loops: the outer loop moves the boundary of the sorted part of the list, while the inner loop searches the rest of the list for the smallest remaining value.

The program generates 10 random numbers (and pauses, waiting for a mouse click) and first sorts the list with the selectionSort algorithm from above. The code has been modified slightly from the book to update the graphics display as it runs. When the program runs, it repeatedly traverses the list. On its first traversal of the list, it finds the smallest number and then interchanges it with the first number in the list. It then repeats, looking for the next smallest number, and puts that in the second position in the list. It keeps going, putting the next largest number in its place, until the whole list is sorted. Run the program several times, paying attention to the first sort only. Can you see the smallest number moving each time?

Algorithms are often written in pseudo-code, which is an informal, high-level description:

func bubblesort( var a as array )
    for i from 2 to N
        for j from 0 to N - 2
            if a[j] > a[j + 1]
                swap( a[j], a[j + 1] )
end func

(from)

Let's translate this line-by-line into python:

func bubblesort( var a as array )
    for i from 2 to N
        for j from 0 to N - 2
            if a[j] > a[j + 1]
                swap( a[j], a[j + 1] )
end func

func bubblesort( var a as array )

This first line says we have a function with an array variable called a. In python, we define functions with the def keyword:

def bubbleSort(a):

for i from 2 to N

This next line is a for loop, but what is N? Usually, N or n refers to the length of the list, so, let's set that up as a variable first, then write out the for loop:

    N = len(a)
    for i in range(2, N+1):

Note that their for loop goes from 2 up to and including N, so we need to change the range accordingly.

for j from 0 to N - 2

Since we have N defined, we can easily write out this for-loop:

        for j in range(N-1):

if a[j] > a[j + 1]

We need one small addition to make this line proper python:

            if a[j] > a[j+1]:

swap( a[j], a[j + 1] )

Lastly, we need to swap the two values.
In python, we can do this in one line with simultaneous assignment:

                a[j],a[j+1] = a[j+1],a[j]

Putting it all together:

def bubbleSort(a):
    N = len(a)
    for i in range(2,N+1):
        for j in range(N-1):
            if a[j] > a[j+1]:
                a[j],a[j+1] = a[j+1],a[j]

To make this function show the graphics, we will add two calls to the drawBox() as well as to sleep() (to slow it down enough to see):

import time

def bubbleSort(a,w):
    N = len(a)
    for i in range(2,N+1):
        for j in range(N-1):
            if a[j] > a[j+1]:
                a[j],a[j+1] = a[j+1],a[j]
                drawBox(a,j,w,False)
                drawBox(a,j+1,w,False)
                time.sleep(.2)

Note that we need to add w, the graphics window variable, to the formal parameters of our function so that we can use it when calling our drawing function.

Add the bubbleSort() above to your program and try running it. Can you see the largest value "bubble" up to the top? Try running the program multiple times to see the effect.

# Spring 2014, CMP230, Lab 11
# This is a modified version of sortGUI.py which is available
# on
# Modifications are made by Maryam Ghaffari Saadat

from graphics import *
from random import *
import time

n = 10

def main():
    #holds list of numbers to be sorted:
    a = [0]*n

    #our display window:
    w = GraphWin("Sorting Demo",500,500)
    w.setCoords(-5,0,n*10,n*10)

    #First, display the selection sort (code from book):
    initializeList(a)
    displayList(a,w)
    selSort(a,w)

    #Next, display the bubble sort
    clearScreen(w)
    initializeList(a)
    displayList(a,w)
    bubbleSort(a,w)

    w.getMouse()
    w.close()

#Selection sort function from the textbook, with the addition
# of code to draw the change each time
def selSort(lst,w):
    n = len(lst)
    for bottom in range(n-1):
        mp = bottom
        for i in range(bottom+1,n):
            if lst[i] < lst[mp]:
                mp = i
        if mp != bottom:
            # draw the bars that are to be swapped
            drawMovingBars(lst, bottom, mp, w)
            # swap the bars
            lst[bottom], lst[mp] = lst[mp], lst[bottom]
            # draw the bars after they are swapped
            drawMovingBars(lst, bottom, mp, w)

#Bubble sort from pseudocode, with drawing calls added:
def bubbleSort(a,w):
    N = len(a)
    for i in range(2,N+1):
        for j in range(N-1):
            if a[j] > a[j+1]:
                a[j],a[j+1] = a[j+1],a[j]
                drawBox(a,j,w,False)
                drawBox(a,j+1,w,False)
                time.sleep(.2)

#Set up the initial list to have n random numbers:
def initializeList(a):
    for i in range(n):
        a[i] = (n-1)*10*random()+10

def drawMovingBars(lst, box_1_index, box_2_index, w):
    # draw the boxes that are moving, with red outlines
    drawBox(lst, box_1_index, w, True)
    drawBox(lst, box_2_index, w, True)
    # get a mouse click
    w.getMouse()
    ## uncomment to: wait for .2 seconds (= 2 deciseconds)
    #time.sleep(.2)
    # redraw the boxes without red outlines
    drawBox(lst, box_1_index, w, False)
    drawBox(lst, box_2_index, w, False)

#Draw a single box to the screen:
def drawBox(a,i,w, isMoving):
    # clear the space for bar i
    barSpace = Rectangle(Point(i*10, 0), Point((i+1)*10, n*10))
    barSpace.setFill("white")
    barSpace.setOutline("white")
    barSpace.draw(w)
    # create a new bar with height of a[i]
    bar = Rectangle(Point(i*10, 0), Point((i+1)*10, a[i]))
    # set the colour of the bar to be a shade of blue
    # the taller the bar, the lighter the blue.
    bar.setFill(color_rgb(0,0,a[i]*(255/(n*10))))
    # set the thickness of the outline of the bar to 3 pixels
    bar.setWidth(3)
    # if the bar is moving, set the outline colour to red
    if isMoving:
        bar.setOutline('red')
    # otherwise set the outline colour to white
    else:
        bar.setOutline("white")
    # draw the bar
    bar.draw(w)

#Draw the whole list:
def displayList(a,w):
    for i in range(len(a)):
        drawBox(a,i,w,False)

#White out the window, so, we can start again:
def clearScreen(w):
    r = Rectangle(Point(0,0),Point(n*10,n*10))
    r.setFill("white")
    r.draw(w)

main()

If you finish early, you may work on the programming problems.
https://stjohn.github.io/teaching/cmp/cmp230/f14/lab11.html
CC-MAIN-2022-27
refinedweb
1,283
64.58
This]. Then finding out the average of students on the base of marks. In the end to display result on the screen. Problem statement: This is C program that asks user to find out the highest marks and the average. This is C program to find out highest marks and average. Output of this program shown below. #include <stdio.h> int main () { float mark[10] = {45.6, 78.4, 65.9, 58.3, 82.1, 44.5, 61.8, 53.6, 49.2, 37.7}; int i; float sum = 0, average, highest = 0; clrscr(); for (i = 0; i < 10; i++) { sum += mark[i]; if (mark[i] > highest) highest = mark[i]; } average = sum / 10.0; printf("The Average Mark is %5.2f \n", average); printf("The Highest Mark is %5.2f \n", highest); getch(); return 0; }
http://ecomputernotes.com/c-program/write-a-program-find-the-highest-marks-and-average
CC-MAIN-2019-04
refinedweb
135
85.18
We have tested some of our classes for the moment. That's great, but we can't stop there! We have to make sure our entire application is tested! Well, at least we agreed we'd be testing the Model. Good intentions! ☺️

So, how to be sure that we tested the whole model? We are mere human beings and we can make mistakes. The good news is that there is a tool for that too! The tool is called code coverage. It analyzes which parts of the code are covered by the tests. The working principle is simple: it identifies the lines of code that were executed while running the tests. The parts of the code that were not executed are, by definition, identified as not covered (or not tested).

Setting up the code coverage

To install the code coverage, we'll have to edit the scheme of our application. To do this, click on the name of the application at the top left:

And select Edit Scheme... in the dropdown:

This will present a configuration dialog:

But what's this scheme? 🤔 A scheme is a set of actions Xcode can take. We've got 6 that are presented on the left of the pop-up. We've already used 3 of them:

Build: compile the app (cmd + b).
Run: execute the app after compilation (cmd + r).
Test: run the tests (cmd + u).

Back to our task - select Test on the left, check off Gather coverage data and click Close:

Now, when you run your tests, Xcode will collect the coverage data and indicate the lines of code that were executed during the tests.

Inspecting the Code Coverage

To inspect Code Coverage, you must first run the tests (cmd + u). Once that's done, switch to the Test navigator:

The test navigator allows you to view all the tests of your application, organized by class. You can also run the tests from this interface by clicking the play button to the right of each test, class, or target.

In the navigator, you can right-click on any test and choose Jump to Report:

You'll be taken to the test report page where you can observe all the tests - whether they passed or failed - and sort them according to certain criteria. For the moment, what we're interested in is located under the Coverage tab:

On this tab, you can see the coverage percentage of each file and even each function. So you can easily see what your tests are not covering:

As you can see, most of our components are 100% tested through just thoroughly testing the Game class. We did a good job!

For more details, you can view the Game.swift file, and the coverage information is now available to the right of the file:

On the other hand, if we look at the Tournament.swift file, it shows gaps:

All the red lines on the right have an execution count of zero - they are not tested! We'll fix it shortly!

Setting suitable goals

Of course, ideal test coverage would be 100%. More often than not, optimal is more appropriate than ideal. What percentage of coverage is optimal? 100%? 90%? 50%? And the answer is: it's not a question of percentage, but of strategy! 🤓

For example, when working with an MVC model, I recommend aiming to cover the Model 100%. The rest may not be as trivial to test and may not be worth the effort. This makes it come down to answering the question, what percentage of your code is the model?!

The key is to define which parts of the code are the most critical and make sure those parts are fully covered! The overall percentage doesn't matter. It's just a tool.

Closing the gaps!

Game on - let's close the gaps in our model! It's clear now, we only have one method there remaining to test: the addGame method! This method is quite small. Guess what?
It's your turn now! Go ahead and create the tests for the Tournament class! As you write your tests, some may become redundant. Do not hesitate to delete them! Here's my version:

import XCTest
@testable import TicTacToe

class TournamentTestCase: XCTestCase {
    var tournament: Tournament!

    override func setUp() {
        super.setUp()
        tournament = Tournament()
    }

    func testGivenGameIsOverWithWinningPlayerOne_WhenAdded_ThenScoreShouldBeOneForPlayerOneAndZeroForPlayerTwo() {
        let tournament = Tournament()
        tournament.addGame(withWinner: .one)
        XCTAssertEqual(tournament.score(forPlayer: .one), 1)
        XCTAssertEqual(tournament.score(forPlayer: .two), 0)
    }
}

And let's confirm the coverage: That's it! Nicely done! ☺️

Let's Recap!

- You can install the Code Coverage by editing the scheme of your application for the Test action.
- The Code Coverage allows you to identify areas of your code that are not covered by tests.
- No need to reach 100% coverage at all costs! It's important to choose a test strategy that is optimal for the project instead.
- The Code Coverage is a handy tool, but not an ultimate criterion to address.
https://openclassrooms.com/en/courses/4554386-enhance-an-existing-app-using-test-driven-development/5095716-evaluate-the-coverage-of-your-tests
CC-MAIN-2022-21
refinedweb
825
74.69
The MU puzzle

Let me first describe the MU puzzle shortly. The puzzle deals with strings that may contain the characters M, I, and U. We can derive new strings from old ones using the following rewriting system:

1. From xI, derive xIU (append a U to a string ending in I).
2. From Mx, derive Mxx (double the part after the leading M).
3. From xIIIy, derive xUy (replace III with U).
4. From xUUy, derive xy (drop UU).

The question is whether it is possible to turn the string MI into the string MU using these rules. You may want to try to solve this puzzle yourself, or you may want to look up the solution on the Wikipedia page.

The code

The code is not only concerned with deriving MU from MI, but with derivations as such.

Preliminaries

We import Data.List:

import Data.List

Basic things

We define the type Sym of symbols and the type Str of symbol strings:

data Sym = M | I | U deriving Eq

type Str = [Sym]

instance Show Sym where
    show M = "M"
    show I = "I"
    show U = "U"

[...] Today, I presented a Haskell program that computes derivations in the MIU formal system from Douglas Hofstadter's MU puzzle. I have posted a write-up of my talk on my personal blog. [...]

In the "replace" function, I noticed that you didn't use the "stripPrefix" function – was that for readability concerns?

The stripPrefix function returns a Maybe value, since it needs to signal whether the given list starts with the given prefix or not. In the replace function, I have already ensured that it does; so it is simpler to use the solution with drop and length, where I do not have to get rid of the Maybe.

By the way, in your Gravatar profile you say, "I like seeing how other languages do things differently." So I wonder, did you already look at languages that are not indo-european, like Estonian? They are at some points quite different from what we "indo-europeans" are used to.

Why is split more efficient than split'?? I was unable to exhibit any difference between the two (Criterion used). Thank you!

Please enter this on the GHCi prompt: This should give you the result 1000001 quite quickly. Now try this: I wasn't able to get a result within a reasonable amount of time.

The problem with split' lies in the use of inits. The inits function is implemented as follows:

inits []     = [[]]
inits (x:xs) = [] : map (x :) (inits xs)

Note that the recursive application of inits is under a map. So the suffix of an expression inits [x_1,x_2,…,x_n] that starts at an index i is given by a nested application of map of depth i:

map (x_1 :) (map (x_2 :) (…(map (x_i :) […])…))

To detect that the result of an expression map f xs is non-empty, we have to detect that xs is non-empty. So to detect that the above suffix is non-empty, we need to walk through the i layers of map, which takes O(i) time.

This means that to fetch the i-th element of inits xs, we need O(i) time.

The implementation of split doesn't suffer from this problem, since it creates prefixes by a single application of map, and computes every single prefix independently.
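To make the rewriting system concrete in terms of the Sym/Str types above, here is a hypothetical sketch of rules 1 and 2 (illustrative only — this is not the post's actual implementation, which is not reproduced here):

-- Rule 1: a string ending in I may get a U appended.
ruleI :: Str -> [Str]
ruleI xs | not (null xs) && last xs == I = [xs ++ [U]]
         | otherwise                     = []

-- Rule 2: from Mx, derive Mxx.
ruleII :: Str -> [Str]
ruleII (M : xs) = [M : (xs ++ xs)]
ruleII _        = []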
http://jeltsch.wordpress.com/2013/04/18/miu-in-haskell/
CC-MAIN-2014-42
refinedweb
481
70.33
What I have: I have implemented three MediaPlayer objects in my app. All three are created using a thread:

protected void onResume() {
    // Threads
    mTT1 = new TrackThread(this, R.raw.audiofile1, 1, mHandler);
    mTT2 = new TrackThread(this, R.raw.audiofile2, 2, mHandler);
    mTT3 = new TrackThread(this, R.raw.audiofile3, 3, mHandler);

    // start thread
    mTT1.start();
    mTT2.start();
    mTT3.start();

    super.onResume();
}

"Simplified" code in the thread for creating:

public class TrackThread extends Thread implements OnPreparedListener {
    ...
    public void run() {
        super.run();
        try {
            mMp.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(),
                    afd.getDeclaredLength());
            mMp.prepare();
        } catch (IllegalArgumentException | IllegalStateException | IOException e) {
            Log.e(TAG, "Unable to play audio queue do to exception: " + e.getMessage(), e);
        }
    }
}

As I read in several tutorials, the "prepare()" method takes a little bit of time to finish. Therefore I implemented a "waiting loop" which waits until all MPs are prepared and created. When "prepare and create" are done, I enable the Start button and I want to start all 3 MediaPlayers SIMULTANEOUSLY. I again use a thread for doing so:

public void onClick(View v) {
    // Button 1
    if (mBtn.getId() == v.getId()) {
        mTT1.startMusic();
        mTT2.startMusic();
        mTT3.startMusic();
    }
}

Code in the thread:

public class TrackThread extends Thread implements OnPreparedListener {
    ...
    // start
    public void startMusic() {
        if (mMp == null)
            return;
        mMp.start();
    }
}

Please note that the code above is not the full code, but it should be enough to define my problem.

What I want, my problem: All MPs should play their music in sync, but unfortunately sometimes when I start the music, there is a time delay between them. The MPs must start at the exact same time, as the 3 audio files must be played simultaneously (and exactly in sync).

What I have already tried:
+) using SoundPool: my audio files are too big (5 MB and larger) for SoundPool
+) seekTo(msec): I wanted to seek every MP to a specific time, e.g. 0, but this did not solve the problem.
+) to reach more programmers I also asked this question on: coderanch.com

I hope somebody can help me! Thanks in advance

As nobody could help me, I found a solution on my own! MediaPlayer will not fulfill my requirements!!

Android JETPlayer in combination with JETCreator fulfills all my requirements.

CAUTION: Installing Python for using JETCreator is very tricky, therefore follow this tutorial:

And be careful with the versions of python and wxpython, not all versions support the JETCreator! I used:
- Python Version 2.5.4 (python-2.5.4.msi)
- wxPython 2.8 (wxPython2.8-win32-unicode-2.8.7.1-py25.exe)

For those who do not know how to implement the JetPlayer, take a look at: (at min. 5 he starts with programming the JetPlayer)

Unfortunately I do not speak French, so I just followed the code, which worked for me. Using Android JETCreator you can create your own JET files and use them as your resource!
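For reference, the basic android.media.JetPlayer calls look roughly like this (a sketch only; "music.jet" is a placeholder asset name, the segment/track arguments are illustrative values, and IOException handling is omitted):

import android.media.JetPlayer;

// Inside an Activity; "music.jet" is a placeholder JET asset.
JetPlayer jet = JetPlayer.getJetPlayer();
jet.clearQueue();
jet.loadJetAsset(getAssets().openFd("music.jet"));
// queue segment 0 from the default library, no repeat, no transpose
jet.queueJetSegment(0, -1, 0, 0, 0, (byte) 0);
jet.play();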
http://databasefaq.com/index.php/answer/121239/android-eclipse-synchronization-android-mediaplayer-play-music-synchronous-using-3-mediaplayer-objects-on-android-eclipse
CC-MAIN-2019-09
refinedweb
474
58.08
Created on 2015-07-30 22:18 by terry.reedy, last changed 2019-03-23 04:15 by terry.reedy. This issue is now closed.

PyShell currently has this code:

try:
    from tkinter import *
except ImportError:
    print("** IDLE can't import Tkinter.\n"
          "Your Python may not be configured for Tk. **",
          file=sys.__stderr__)
    sys.exit(1)
import tkinter.messagebox as tkMessageBox

When Idle is started from an icon, there is no place for the error message to go, so it appears that nothing happens. But this is the best we can do without invoking system specific error message functions. (This, if possible, would be another issue.)

The second import assumes that messagebox is available, which it should be without modification of the tkinter package since long ago. But I think we should guard against someone trying to start Idle on pre 8.5. The following seems to work nicely. This is a prerequisite for any ttk patches. Any comment before I apply?

try:
    from tkinter import ttk
except:
    root = Tk()
    root.withdraw()
    tkMessageBox.showerror("Fatal Idle Import Error",
        "Idle cannot import required module tkinter.ttk.\n"
        "Click OK to exit.", parent=root)
    sys.exit(1)

I tested by renaming installed 3.5 ttk, editing 3.5 PyShell with 3.4, and clicking the 3.5 icon.

Sounds good. Only suggestion, given it's user facing, is to have the error message point them towards a solution. Off the top of my head, maybe something like "IDLE requires Python be configured to use Tk 8.5 or newer (you have Tk x.x)" with a title of "IDLE Cannot be Started"

New changeset 8203fc75b3d2 by Terry Jan Reedy in branch '2.7': Issue 24759: Gracefull exit Idle if ttk import fails.
New changeset 13a8782a775e by Terry Jan Reedy in branch '3.4': Issue 24759: Gracefull exit Idle if ttk import fails.

Improved message. Thanks. Should be good enough for the extremely few times it should ever be triggered.

IMHO Tcl and Tk should be in title case (as Python or Django) or at least all in upper case (as APL or SDL). I don't see any mentions of ttk in IDLE source code except the added check. Is ttk really required for IDLE?

Re-opened for tweak. This patch, to exit gracefully if ttk is not available, is the first step before using ttk in Idle. Mark, a long-time tk/ttk expert and site/book author, has volunteered to help upgrade Idle with ttk. Please see "ttk in idle" on the Idle-Sig list for general discussion and #24750 for the first substantive patches. The only reason I have not applied the ttk.Scrollbar patch, on that issue, is because Ned mentioned, on that issue, that the OS X 10.5 python requires tcl/tk 8.4. I strongly feel that this should not stop the use of ttk, but I am first doing a few non-ttk patches, including Mark's patch for #24745, before continuing.

Re-reading PEP 434, I was mistaken to apply this patch to 2.7 and 3.4 without further discussion. The PEP says "The PEP would apply to minor ..., but not necessarily to possible major re-writes such as switching to themed widgets ... ." Nick, should I post something on python-ideas or pydev, or continue here (moved from #24750) about making the switch also in 2.7 and 3.4? I believe it would be best overall to at least upgrade Idle 2.7, but I will not cry if we stop patching it now. Ditto for 3.4, which will soon get security fixes only anyway. I am ok with using ttk starting with 3.5.

Got it re: 2.7/3.4. So the only stumbling block left to 8.5/ttk in 3.5 would be the current Mac build that is ok with Tcl/Tk 8.4...?
Ttk widgets are partially compatible with Tk widgets, so in many cases it is possible to make a partial upgrade without major rewriting. Some changes perhaps need major rewriting (using styles, changing layout), and I think this is what the PEP does not allow. However, it is possible that all changes can be done without major rewriting; we don't know until we try.

If 2.7 and 3.4 are left out of consideration, then, AFAIK, the only people necessarily affected are those with a PowerPC with OS 10.5 who upgrade to python3.5 and also want to run Idle. People with an Intel machine instead might be affected if there is no python3.5 that will run with the ActiveState 8.5 that Mark says works on Intel 10.5 machines. I wish I knew how many people that is. I suspect a tiny fraction of Idle users and of people with such machines. But ...

The compatibility approach will work for Scrollbars, but I am dubious about other widgets. For instance, tk Buttons have at least 31 options. ttk Buttons have 9 of the same + style (and class_, which I do not think we would use), leaving 22 tk-only options. To write tk&ttk code *in one file*, I believe ...ttk.Button(parent, <options>, style='xyz') would have to be written (compactly) as something like

b=...Button(parent, <common options>)
b.config(**({'style':'xyz'} if ttk else { <tk style options>}))

or (expansively, in 5 lines)

b=...Button(parent, <common options>)
if ttk:
    b['style'] = 'xyz'
else:
    b.config(<tk style options in '=' format>)

I consider this impractical. I am unwilling to write code that way, in part because it would have to be carefully tested both with and without ttk -- by hand. I cannot imagine anyone else doing so either. It also does not work for uses of ttk.Treeview, whose API is different from the multiple classes used in the Path and Class (Module) browsers. I believe the same is true for ttk.Notebook and the idlelib tab widget.

What I already planned to do instead is copy existing dialog code to new files, possibly refactored, with pep8 filenames and internal names style. I already planned to leave the existing files in place for now, though as zombies, in case of any external code imports. With a little more work, it should be possible to optionally use either old or new files. Most of the work would be done with alternate bindings to menu items and accelerator keys, such as Find in Files and Alt-F3.

It might be more acceptable to use ttk in 2.7 and 3.4 as an option rather than as a replacement. Though I would only commit a ttk version when confident that it works, leaving the option to switch back would add a safety factor.

I changed the title and will revert the patch later. Note also that the committed patch doesn't work at all. "from tkinter import ttk" doesn't raise an exception with Tcl/Tk 8.4. See my patch in issue24750 that does a working check.

I consider impractically complex code for supporting 8.4 and ttk too. But in the simplest cases this can be done easily.

I realize now that tkinter.ttk is (normally) present and will define Python classes even if the tk widgets needed for them to work are not present. More comments on the added module on #24750.

New changeset 0511b1165bb6 by Terry Jan Reedy in branch '2.7': Issue #24759: Revert 8203fc75b3d2.
New changeset 06852194f541 by Terry Jan Reedy in branch '3.4': Issue #24759: Revert 13a8782a775e.
New changeset 863e3cdbbabe by Terry Jan Reedy in branch '3.5': Issue #24759: Merge with 3.4
New changeset 3bcb184b62f8 by Terry Jan Reedy in branch 'default': Issue #24759: Merge with 3.5

A second specific reason to make ttk optional is that either the ttk widgets or the accompanying re-writing may (and probably will) break some extension that goes beyond the narrowly defined extension interface. For the present, a user using such an extension would be able to continue to do so, by turning use_ttk off. (I plan to add some DeprecationWarnings, either at startup or on old tk module import, when use_ttk is possible but turned off.)

The normal bugfix-only policy, and the Idle exemption, starts with each x.y.0b1 release. The point of excluding 'major re-writes such as switching to themed widgets' was to exclude changes in bugfix releases that prevent idle from running in the 'current' environment. In private email, Nick agreed with me that with ttk and any possible disablement made optional, it can be added to all current releases. He also suggested being on by default when possible. I decided not to put anything into 3.5.0. I intend to start pushing ttk patches perhaps next week after the release branch is split off and the main 3.5 branch is 3.5.1.

IDLE's in an "interesting" place right now - it isn't showing people Tcl/Tk in its best light, so folks are likely to assume all Tcl/Tk apps necessarily look that way, and it's also using GUI idioms like separate shell and editor windows that don't reflect the conventions of modern IDEs. For 3.5.1+, I think there's no question that we want to show IDLE in the best possible light, and we want to try to do that by default. That means modernising it to use the best cross-platform features that Tcl/Tk has to offer (including ttk). However, Ned pointed out that the last PPC-supporting Mac OS X (10.5) has a Tcl/Tk version older than 8.5, and there's the general compatibility risk for breaking extensions with large refactorings, so retaining a "non-ttk mode" will be a valuable approach to the modernisation effort.

In msg265349, Ned stated that running IDLE with 8.4 should no longer be a requirement in 3.6. Hence the revised title and restriction to 3.6.

Serhiy, I read the code in your #24750 patch. Should it not be enough to first check tkinter.TkVersion >= 8.5 before trying to import ttk? And is testing ttk then needed? Is it possible for ttk to not work in 8.5/6? TkVersion (or a refinement thereof) is already checked in colorizer, config, editor, macosx, and pyshell. With one check on startup, the existing checks can be eliminated, along with code only for older versions.

My patch in issue24750 makes sense only if we want to support Tcl/Tk 8.4. I think that if we require Tcl/Tk 8.5+, we can use Ttk directly and unconditionally. I don't think it can fail to work.

Given what Serhiy said, the core of this patch is the original patch with a test of TkVersion instead of importing ttk. Code that only ran for 8.4- is removed. Some minimal new tests are added, and I may add some more. None of the changes should depend on OS. I want to apply this in a few days as it is a prerequisite for ttk patches.

Did you forget to attach a patch?

Trying again. And yes, I would like a review. I don't think there is anything system specific, but some deletions required a minor rewrite.

Did you try to run tests with Tk 8.4?

No, 3.4 is security fixes only and this is a 3.6 only issue.

I meant Tk 8.4, not Python 3.4. Tests should be either passed or skipped, no errors raised.
I saw the review comment about adding lines. I do not have 8.4 either, but I get the point: the startup version check does not guard the unittests. After test.test_idle imports unittest and tkinter (as tk) (line 6), I will add

    if tk.TkVersion < 8.5:
        raise unittest.SkipTest("IDLE requires tk 8.5 or later.")

I will add a 'private API' and version-required notice to idlelib.idle_test.__init__. I will add the version requirement to the notice already in idlelib.__init__. For similar reasons, the proposed interface module (#27162) would also need an explicit check. (Note added there.)

New changeset e6560f018845 by Terry Jan Reedy in branch '2.7': Issue #24759: Add 'private' notice for idlelib.idle_test.

New changeset d75a25b3abe1 by Terry Jan Reedy in branch '3.5': Issue #24759: Add 'private' notice for idlelib.idle_test.

I believe require85-v2.diff adds everything specified in my last post.

Fixed whitespace and added comment.

New changeset 81927f86fa3a by Terry Jan Reedy in branch 'default': Issue #24759: Add test for IDLE syntax colorizer.

New changeset 76f831e4b806 by Terry Jan Reedy in branch 'default': Issue #24759: IDLE requires tk 8.5 and available ttk widgets.
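For illustration, a startup guard along the lines discussed in this thread could look like the sketch below. This is my own sketch, not the committed code: the message text and the use of SystemExit are assumptions.

    # Hypothetical sketch of an IDLE-style startup guard.
    import tkinter as tk

    def check_tk_version(minimum=8.5):
        """Exit early when the linked Tcl/Tk is too old to supply ttk."""
        if tk.TkVersion < minimum:
            raise SystemExit(
                "IDLE requires tk %s or later; found %s." % (minimum, tk.TkVersion))

    check_tk_version()
    from tkinter import ttk  # safe to import once the version check passed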
https://bugs.python.org/issue24759
CC-MAIN-2020-16
refinedweb
2,109
77.03
A subset of the C++17 language is used in the Zircon tree. This includes both the kernel and userspace code. C++ is mixed with C (and some assembly) in both places. Some C++ language features are avoided or prohibited, and use of the C++ standard library features is very circumspect.

Avoided or prohibited:

- dynamic_cast
- thread_local in kernel code

In use:

- constexpr
- nullptr
- enum classes
- templates
- auto

TODO: pointer to style guide(s)?

Zircon code is built with -std=c++17 and in general can use C++ 17 language and library features freely (subject to style/feature constraints described above and library use guidelines described below). There is no general concern with staying compatible with C++ 14 or earlier versions. When a standard C++ 17 feature is the cleanest way to do something, do it that way.

However any library that is published to the SDK must be compatible with SDK users building in both C++ 14 and C++ 17 modes. So, any libraries exported to the SDK must have public header files that are compatible with both -std=c++14 and -std=c++17. If a library is exported to the SDK as source code rather than as a binary, then its source code must also be completely compatible with both -std=c++14 and -std=c++17 (and not require other special options).

TODO(mcgrathr): pointer to build-system docs about maintaining code to be exported to SDK

All pure C code (.c source files and headers used by them) is C 11. Some special exceptions are made for code meant to be reused by out-of-tree boot loaders, which stick to a conservative C 89 subset for embedded code.

The C++ standard library API has many interfaces of widely varying characteristics. We subdivide the standard library API into several categories below, based on the predictability and complexity of each particular interface's code generation and use of machine and OS facilities. These can be thought of as widening concentric circles of the API, from the most minimal C-like subset out to the full C++ 17 API.

This section gives guidelines for how to think about the impact of using a particular standard C++ library API on the system as a whole. There are no hard and fast rules, except for the kernel (see the next section)--and except for implementation constraints, which one always hopes should be temporary. The overwhelming rule is be circumspect. Consider how well you understand the time and space complexity, the dynamic allocation behavior (if any), and the failure modes of each API you use. Then consider the specific context where it's being used, and how sensitive that context is to those various kinds of concerns. Be especially wary about input-dependent behavior that can quickly become far harder to predict when using nontrivial library facilities.

If you're writing the main I/O logic in a driver, or anything that's in a hot path for latency, throughput, or reliability, in any kind of system service, then you should be pretty conservative in what library facilities you rely on. They're all technically available to you in userspace (though far fewer in the kernel; see the next section). But there's not so many you actually should use. You probably don't want to lean on a lot of std containers that do fancy dynamic allocation behind the scenes. They will make it hard for you to understand, predict, and control the storage/memory footprint, allocation behavior, performance, and reliability of your service.

Nonetheless, even a driver is a userspace program that starts up and parses configuration files or arguments and so on.
For all those nonessential or start-time functions that are not part of the hot path, using more complex library facilities is probably fine when that makes the work easier. Just remember to pay attention to overall metrics for your code, such as minimal/total/peak runtime memory use, code bloat (which uses both device storage and runtime memory), and resilience to unexpected failure modes. Maybe don't double the code size and memory footprint of your driver just to leverage that fancy configuration-parsing library.

std in kernel

The C++ std namespace cannot be used in kernel code, which also includes the bootloader. The few C++ standard library headers that don't involve std:: APIs can still be used directly. See the next section.

No other C++ standard headers should be used in kernel code. Instead, any library facilities worthwhile to have in the kernel (such as std::move) are provided via kernel-specific APIs (such as ktl::move). The kernel's implementations of these APIs may in fact rely on toolchain headers providing std:: implementations that are aliased to kernel API names. But only those API implementations and very special cases in certain library headers should ever use std:: in source code built into the kernel.

These header APIs are safe to use everywhere, even in the kernel. They include the C++ wrappers on the subset of standard C interfaces that the kernel supports:

- `<cstdarg>`
- `<cstddef>`
- `<climits>`
- `<cstdint>`
- `<cinttypes>`
- `<cassert>`
- `<cstring>`

The std namespace aliases for C library APIs from these headers should not be used in kernel code. One pure C++ header is also available even in the kernel: `<new>`.

The vanilla non-placement operator new and operator new[] are not available in the kernel. Use fbl::AllocChecker new instead.

These header APIs are safe to use everywhere. They're not allowed in the kernel because they're all entirely in the std namespace. But subsets of these APIs are likely candidates to get an in-kernel API alias if there is a good case for using such an API in kernel code.

These are pure header-only types and templates. They don't do any dynamic allocation of their own. The time and space complexity of each function should be clear from its description.

- `<algorithm>`
- `<array>`
- `<atomic>`
- `<bitset>`
- `<initializer_list>`
- `<iterator>`
- `<limits>`
- `<optional>`
- `<tuple>`
- `<type_traits>`
- `<utility>`
- `<variant>`

These involve some dynamic allocation, but only what's explicit: `<memory>`.

The std::shared_ptr, std::weak_ptr, and std::auto_ptr APIs should never be used. Use std::unique_ptr and fbl::RefPtr instead.

These are not things that would ever be available at all or by any similar API or name in the kernel. But they are generally harmless everywhere in userspace. They do not involve dynamic allocation. Floating-point is never available in kernel code, but can be used (subject to performance considerations) in all userspace code:

- `<cfenv>`
- `<cfloat>`
- `<cmath>`
- `<complex>`
- `<numeric>`
- `<ratio>`
- `<valarray>`

Full C 11 standard library, via C++ wrappers or in standard C `<*.h>`:

- `<csetjmp>`
- `<cstdlib>`

Synchronization and threads. These standard APIs are safe to use in all userspace code with appropriate discretion. But it may often be better to use Zircon's own library APIs for similar things, such as `<lib/sync/...>`:

- `<condition_variable>`
- `<execution>`
- `<mutex>`
- `<shared_mutex>`
- `<thread>`

These involve dynamic allocation that is hard to predict and is generally out of your control. The exact runtime behavior and memory requirements are often hard to reason about.
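As an illustration of the fbl::AllocChecker pattern mentioned above, here is a minimal sketch. The Foo type and the surrounding function are my own invented example; consult the actual fbl headers for the precise interface.

```cpp
// Sketch only: Foo and MakeFoo are hypothetical; see fbl/alloc_checker.h
// for the real interface.
#include <fbl/alloc_checker.h>
#include <zircon/types.h>

struct Foo { int value = 0; };

zx_status_t MakeFoo(Foo** out) {
  fbl::AllocChecker ac;
  Foo* foo = new (&ac) Foo();   // checked allocation instead of plain new
  if (!ac.check()) {            // query the checker; no exceptions in the kernel
    return ZX_ERR_NO_MEMORY;
  }
  *out = foo;
  return ZX_OK;
}
```

The point of the pattern is that every allocation failure becomes an explicit, recoverable error path rather than an exception or an abort.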
Think very hard before using these interfaces in any critical path for reliability or performance or in any component that is meant to be lean and space-efficient.

- The entire Containers library
- std::function (see `<lib/fit/function.h>` for a homegrown alternative)

FBL is the Fuchsia Base Library, which is shared between kernel and userspace. As a result, FBL has very strict dependencies. For example, FBL cannot depend on the syscall interface because the syscall interface is not available within the kernel. Similarly, FBL cannot depend on C library features that are not available in the kernel.

NOTE: Some FBL interfaces below that overlap with standard C++ library interfaces will probably be either removed entirely or made kernel-only (and perhaps renamed inside the kernel) once userspace code has migrated to using standard C++ library facilities where appropriate.

FBL provides:

FBL has strict controls on memory allocation. Memory allocation should be explicit, using an AllocChecker to let clients recover from allocation failures. In some cases, implicit memory allocation is permitted, but functions that implicitly allocate memory must be #ifdef'ed to be unavailable in the kernel.

FBL is not available outside the Platform Source Tree.

ZX contains C++ wrappers for the Zircon objects and syscalls. These wrappers provide type safety and move semantics for handles but offer no opinion beyond what's in syscalls.abigen. At some point in the future, we might autogenerate ZX from syscalls.abigen, similar to how we autogenerate the syscall wrappers in other languages. ZX is part of the Fuchsia SDK.

FZL is the Fuchsia Zircon Library. This library provides value-add for common operations involving kernel objects and is free to have opinions about how to interact with the Zircon syscalls. If a piece of code has no dependency on Zircon syscalls, the code should go in FBL instead. FZL is not available outside the Platform Source Tree.

We encourage using C++ rather than C as the implementation language throughout Fuchsia. However, in many instances we require a narrow ABI bottleneck to simplify the problem of preventing, tracking, or adapting to ABI drift. The first key way to keep the ABI simple is to base it on a pure C API (which can be used directly from C++, and via foreign-function interfaces from many other languages) rather than a C++ API. When we link together a body of code into a module with a pure C external API and ABI but using C++ internally for its implementation, we call that hermetic C++.

It's a hard and fast rule for binaries exported in Fuchsia's public SDK that shared libraries must have a pure C API and ABI. Such libraries can and should use C++ rather than C in their implementations, and they can use other statically-linked libraries with C++ APIs as long as ABI aspects of those internal C++ APIs don't leak out into the shared library's public ABI.

A "loadable module" (sometimes called a "plug-in" module) is very similar to a shared library. The same rules about a pure C ABI bottleneck apply for loadable module ABIs. Fuchsia device drivers are just such loadable modules that must meet the driver (pure C) ABI. Hence, every driver implemented in C++ must use hermetic C++.

The Fuchsia C++ toolchain provides the full C++17 standard library using the libc++ implementation. In C++ executables (and shared libraries with a C++ ABI) this is usually dynamically linked, and that's the default behavior of the compiler.
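To make the hermetic C++ idea concrete, here is a small sketch of a pure C API over a C++ implementation. Everything in it (the frob_* names and behavior) is invented for illustration, not a real Fuchsia API:

```cpp
// frob.h -- public, pure C API; this is the entire ABI surface.
#ifdef __cplusplus
extern "C" {
#endif
typedef struct frob frob_t;       // opaque handle; layout never exposed
frob_t* frob_create(void);
int frob_run(frob_t* f, int input);
void frob_destroy(frob_t* f);
#ifdef __cplusplus
}
#endif

// frob.cc -- C++ inside, statically linked; none of it leaks into the ABI.
#include <vector>

struct frob {
  std::vector<int> history;       // C++ types stay behind the C boundary
};

extern "C" frob_t* frob_create(void) { return new frob; }
extern "C" int frob_run(frob_t* f, int input) {
  f->history.push_back(input);
  return static_cast<int>(f->history.size());
}
extern "C" void frob_destroy(frob_t* f) { delete f; }
```

Because callers only ever see the opaque frob_t and three C functions, the implementation is free to change its internal C++ types without any ABI drift.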
The toolchain also provides libc++ for hermetic static linking via the -static-libstdc++ switch to the compiler (clang++). In the Zircon GN build system, a linking target such as executable(), test(), or library() (with shared = true) uses this line to request the hermetic C++ standard library:

```gn
configs += [ "$zx/public/gn/config:static-libc++" ]
```

This is required in each library() that is exported to the public SDK in binary form via sdk = "shared". Every driver() automatically uses hermetic C++ and so this line is not required for them. (Drivers cannot depend on their own shared libraries, only the dynamic linking environment provided by the driver ABI.)

For executables and non-exported shared libraries, it's a judgment call whether to use static linking or dynamic linking for the standard C++ library. In Fuchsia's package deployment model, there is no particular updatability improvement to using shared libraries as in many other systems. The primary trade-off is between the savings in memory and storage from many stored packages and running processes on the system using exactly the same shared library binary, and the compactness (and sometimes performance) of the individual package. Since many packages in the system build will all use the same shared libc++ library, that's usually the right thing to do unless there are special circumstances. It's the default in the compiler and build system.
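For example, a complete target using that config might look like the following; the target and file names are invented for illustration:

```gn
# Hypothetical target showing the static-libc++ config in context.
executable("my_tool") {
  sources = [ "my_tool.cc" ]
  configs += [ "$zx/public/gn/config:static-libc++" ]
}
```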
https://fuchsia.googlesource.com/fuchsia/+/refs/heads/sandbox/ravoorir/DEBUG_FINAL_REFS/zircon/docs/cxx.md
CC-MAIN-2020-16
refinedweb
1,982
53
Structure encapsulating the complete state of an 802.11 device. #include <net80211.h> An 802.11 device is always wrapped by a network device, and this network device is always pointed to by the netdev field. In general, operations should never be performed by 802.11 code using netdev functions directly. It is usually the case that the 802.11 layer might need to do some processing or bookkeeping on top of what the netdevice code will do. Definition at line 786 of file net80211.h. The net_device that wraps us. Definition at line 789 of file net80211.h. Referenced by ath5k_probe(), ath5k_start(), ath_pci_probe(), iwlist(), iwstat(), net80211_alloc(), net80211_autoassociate(), net80211_change_channel(), net80211_check_settings_update(), net80211_deauthenticate(), net80211_free(), net80211_handle_auth(), net80211_prepare_assoc(), net80211_prepare_probe(), net80211_probe_start(), net80211_register(), net80211_rx(), net80211_rx_err(), net80211_set_rate_idx(), net80211_set_state(), net80211_step_associate(), net80211_tx_complete(), net80211_tx_mgmt(), net80211_unregister(), rtl818x_init_hw(), rtl818x_init_rx_ring(), rtl818x_init_tx_ring(), rtl818x_probe(), rtl818x_start(), trivial_change_key(), trivial_init(), wpa_derive_ptk(), wpa_psk_start(), and wpa_send_eapol(). List of 802.11 devices. Definition at line 792 of file net80211.h. Referenced by net80211_check_settings_update(), net80211_get(), net80211_register(), and net80211_unregister(). 802.11 device operations Definition at line 795 of file net80211.h. Referenced by net80211_alloc(), net80211_change_channel(), net80211_filter_hw_channels(), net80211_netdev_close(), net80211_netdev_irq(), net80211_netdev_open(), net80211_netdev_poll(), net80211_netdev_transmit(), net80211_prepare_assoc(), net80211_prepare_probe(), net80211_probe_start(), net80211_probe_step(), net80211_process_capab(), net80211_process_ie(), net80211_register(), net80211_set_rate_idx(), net80211_set_state(), and net80211_unregister(). Driver private data. Definition at line 798 of file net80211.h. Referenced by ath5k_attach(), ath5k_config(), ath5k_detach(), ath5k_irq(), ath5k_poll(), ath5k_probe(), ath5k_remove(), ath5k_setup_bands(), ath5k_start(), ath5k_stop(), ath5k_tx(), ath9k_bss_info_changed(), ath9k_config(), ath9k_irq(), ath9k_process_rate(), ath9k_start(), ath9k_stop(), ath9k_tx(), ath_isr(), ath_pci_probe(), ath_pci_remove(), ath_tx_setup_buffer(), ath_tx_start(), grf5101_rf_init(), grf5101_rf_set_channel(), grf5101_rf_stop(), grf5101_write_phy_antenna(), max2820_rf_init(), max2820_rf_set_channel(), max2820_write_phy_antenna(), net80211_alloc(), rtl818x_config(), rtl818x_free_rx_ring(), rtl818x_free_tx_ring(), rtl818x_handle_rx(), rtl818x_handle_tx(), rtl818x_init_hw(), rtl818x_init_rx_ring(), rtl818x_init_tx_ring(), rtl818x_irq(), rtl818x_poll(), rtl818x_probe(), rtl818x_set_hwaddr(), rtl818x_start(), rtl818x_stop(), rtl818x_tx(), rtl818x_write_phy(), rtl8225_read(), rtl8225_rf_conf_erp(), rtl8225_rf_init(), rtl8225_rf_set_channel(), rtl8225_rf_set_tx_power(), rtl8225_rf_stop(), rtl8225_write(), rtl8225x_rf_init(), rtl8225z2_rf_init(), rtl8225z2_rf_set_tx_power(), sa2400_rf_init(), sa2400_rf_set_channel(), sa2400_write_phy_antenna(), write_grf5101(), write_max2820(), and write_sa2400(). Information about the hardware, provided to net80211_register() Definition at line 801 of file net80211.h.
Referenced by iwlist(), iwstat(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_step(), net80211_process_ie(), net80211_register(), net80211_rx(), net80211_send_assoc(), and net80211_step_associate(). A list of all possible channels we might use. Definition at line 806 of file net80211.h. Referenced by net80211_add_channels(), net80211_change_channel(), net80211_duration(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_step(), net80211_process_ie(), net80211_register(), and rtl818x_config(). The number of channels in the channels array. Definition at line 809 of file net80211.h. Referenced by iwstat(), net80211_add_channels(), net80211_change_channel(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_start(), net80211_probe_step(), and net80211_process_ie(). The channel currently in use, as an index into the channels array. Definition at line 812 of file net80211.h. Referenced by net80211_change_channel(), net80211_duration(), net80211_filter_hw_channels(), net80211_prepare_probe(), net80211_probe_start(), net80211_probe_step(), net80211_process_ie(), net80211_register(), and rtl818x_config(). A list of all possible TX rates we might use. Rates are in units of 100 kbps. Definition at line 818 of file net80211.h. Referenced by ath5k_config(), ath9k_config(), iwstat(), net80211_cts_duration(), net80211_ll_push(), net80211_marshal_request_info(), net80211_prepare_probe(), net80211_process_ie(), net80211_set_rate_idx(), net80211_set_rtscts_rate(), net80211_tx_mgmt(), rc80211_set_rate(), rc80211_update_rx(), rc80211_update_tx(), rtl818x_config(), and rtl818x_tx(). The number of transmission rates in the rates array. Definition at line 821 of file net80211.h. Referenced by iwstat(), net80211_marshal_request_info(), net80211_prepare_probe(), net80211_process_ie(), net80211_set_rate_idx(), net80211_set_rtscts_rate(), rc80211_maybe_set_new(), rc80211_pick_best(), and rc80211_update_rx(). The rate currently in use, as an index into the rates array. Definition at line 824 of file net80211.h. Referenced by ath5k_config(), ath9k_config(), iwstat(), net80211_cts_duration(), net80211_ll_push(), net80211_prepare_assoc(), net80211_prepare_probe(), net80211_process_ie(), net80211_set_rate_idx(), net80211_set_rtscts_rate(), net80211_tx_mgmt(), rc80211_maybe_set_new(), rc80211_pick_best(), rc80211_set_rate(), rc80211_update_tx(), rtl818x_config(), and rtl818x_tx(). The rate to use for RTS/CTS transmissions. This is always the fastest basic rate that is not faster than the data rate in use. Also an index into the rates array. Definition at line 831 of file net80211.h. Referenced by ath5k_config(), ath9k_config(), net80211_cts_duration(), net80211_set_rtscts_rate(), and rtl818x_config(). Bitmask of basic rates. If bit N is set in this value, with the LSB considered to be bit 0, then rate N in the rates array is a "basic" rate. We don't decide which rates are "basic"; our AP does, and we respect its wishes. We need to be able to identify basic rates in order to calculate the duration of a CTS packet used for 802.11 g/b interoperability. Definition at line 843 of file net80211.h. Referenced by net80211_marshal_request_info(), net80211_process_ie(), and net80211_set_rtscts_rate().
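A small illustrative use of the rate fields (my own sketch, not from the generated documentation): since rates[] entries are in units of 100 kbps, converting the current rate to bits per second looks like this.

    /* Sketch: dev is assumed to be a valid struct net80211_device *. */
    #include <ipxe/net80211.h>

    static unsigned long current_rate_bps ( struct net80211_device *dev ) {
        /* rates[] entries are in units of 100 kbps per the field docs */
        return dev->rates[dev->rate] * 100000UL;
    }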
If it is successful, the wrapping net_device is set as "link up". If it fails, assoc_rc will be set with an error indication. Definition at line 858 of file net80211.h. Referenced by net80211_alloc(), net80211_autoassociate(), net80211_netdev_close(), and net80211_step_associate(). Network with which we are associating. This will be NULL when we are not actively in the process of associating with a network we have already successfully probed for. Definition at line 866 of file net80211.h. Referenced by iwstat(), net80211_autoassociate(), net80211_step_associate(), trivial_init(), wpa_handle_3_of_4(), wpa_make_rsn_ie(), and wpa_start(). Definition at line 874 of file net80211.h. Referenced by net80211_autoassociate(), and net80211_step_associate(). Definition at line 875 of file net80211.h. Referenced by net80211_autoassociate(), and net80211_step_associate(). Context for the association process. This is a probe_ctx if the PROBED flag is not set in state, and an assoc_ctx otherwise. Referenced by net80211_autoassociate(), and net80211_step_associate(). Security handshaker being used. Definition at line 879 of file net80211.h. Referenced by net80211_check_settings_update(), net80211_netdev_close(), net80211_prepare_assoc(), net80211_step_associate(), wpa_psk_start(), and wpa_psk_step(). State of our association to the network. Since the association process happens asynchronously, it's necessary to have some channel of communication so the driver can say "I got an association reply and we're OK" or similar. This variable provides that link. It is a bitmask of any of NET80211_PROBED, NET80211_AUTHENTICATED, NET80211_ASSOCIATED, NET80211_CRYPTO_SYNCED to indicate how far along in associating we are; NET80211_WORKING if the association task is running; and NET80211_WAITING if a packet has been sent that we're waiting for a reply to. We can only be crypto-synced if we're associated, we can only be associated if we're authenticated, we can only be authenticated if we've probed. If an association process fails (that is, we receive a packet with an error indication), the error code is copied into bits 6-0 of this variable and bit 7 is set to specify what type of error code it is. An AP can provide either a "status code" (0-51 are defined) explaining why it refused an association immediately, or a "reason code" (0-45 are defined) explaining why it canceled an association after it had originally OK'ed it. Status and reason codes serve similar functions, but they use separate error message tables. A iPXE-formatted return status code (negative) is placed in assoc_rc. If the failure to associate is indicated by a status code, the NET80211_IS_REASON bit will be clear; if it is indicated by a reason code, the bit will be set. If we were successful, both zero status and zero reason mean success, so there is no ambiguity. To prevent association when opening the device, user code can set the NET80211_NO_ASSOC bit. The final bit in this variable, NET80211_AUTO_SSID, is used to remember whether we picked our SSID through automated probing as opposed to user specification; the distinction becomes relevant in the settings applicator. Definition at line 921 of file net80211.h. 
Referenced by ath5k_config(), ath9k_bss_iter(), ath9k_config_bss(), iwlist(), iwstat(), net80211_autoassociate(), net80211_check_settings_update(), net80211_handle_mgmt(), net80211_ll_push(), net80211_netdev_close(), net80211_netdev_open(), net80211_rx(), net80211_send_disassoc(), net80211_set_state(), net80211_step_associate(), net80211_update_link_quality(), and rtl818x_config(). Return status code associated with state. Definition at line 924 of file net80211.h. Referenced by net80211_autoassociate(), net80211_deauthenticate(), net80211_ll_push(), net80211_set_state(), and net80211_step_associate(). RSN or WPA information element to include with association. If set to NULL, none will be included. It is expected that this will be set by the init function of a security handshaker if it is needed. Definition at line 932 of file net80211.h. Referenced by net80211_marshal_request_info(), net80211_prepare_assoc(), wpa_psk_init(), wpa_send_2_of_4(), and wpa_start(). 802.11 cryptosystem for our current network For an open network, this will be set to NULL. Definition at line 940 of file net80211.h. Referenced by ath_tx_setup_buffer(), net80211_handle_auth(), net80211_netdev_close(), net80211_netdev_transmit(), net80211_prepare_assoc(), net80211_rx(), net80211_tx_mgmt(), trivial_change_key(), trivial_init(), and wpa_install_ptk(). 802.11 cryptosystem for multicast and broadcast frames If this is NULL, the cryptosystem used for receiving unicast frames will also be used for receiving multicast and broadcast frames. Transmitted multicast and broadcast frames are always sent unicast to the AP, who multicasts them on our behalf; thus they always use the unicast cryptosystem. Definition at line 951 of file net80211.h. Referenced by net80211_prepare_assoc(), net80211_rx(), and wpa_install_gtk(). MAC address of the access point most recently associated. Definition at line 954 of file net80211.h. Referenced by ath5k_config(), ath9k_bss_iter(), eapol_key_rx(), net80211_handle_assoc_reply(), net80211_ll_push(), net80211_prepare_assoc(), net80211_rx(), net80211_send_disassoc(), net80211_step_associate(), rtl818x_config(), wpa_derive_ptk(), and wpa_send_eapol(). SSID of the access point we are or will be associated with. Although the SSID field in 802.11 packets is generally not NUL-terminated, here and in net80211_wlan we add a NUL for convenience. Definition at line 962 of file net80211.h. Referenced by iwstat(), net80211_autoassociate(), net80211_check_settings_update(), net80211_marshal_request_info(), net80211_prepare_assoc(), net80211_probe_start(), net80211_process_ie(), net80211_step_associate(), and wpa_psk_start(). Association ID given to us by the AP. Definition at line 965 of file net80211.h. Referenced by ath9k_bss_iter(), and net80211_handle_assoc_reply(). TSFT value for last beacon received, microseconds. Definition at line 968 of file net80211.h. Referenced by net80211_prepare_assoc(), and net80211_update_link_quality(). Time between AP sending beacons, microseconds. Definition at line 971 of file net80211.h. Referenced by iwstat(), and net80211_prepare_assoc(). Smoothed average time between beacons, microseconds. Definition at line 974 of file net80211.h. Referenced by iwstat(), and net80211_update_link_quality(). Physical layer options. These control the use of CTS protection, short preambles, and short-slot operation. Definition at line 983 of file net80211.h. 
Referenced by ath5k_config(), ath5k_txbuf_setup(), ath9k_bss_info_changed(), net80211_duration(), net80211_process_capab(), net80211_process_ie(), rtl818x_tx(), and rtl8225_rf_conf_erp(). Signal strength of last received packet. Definition at line 986 of file net80211.h. Referenced by iwstat(), and net80211_rx(). Rate control state. Definition at line 989 of file net80211.h. Referenced by net80211_free(), net80211_rx(), net80211_step_associate(), net80211_tx_complete(), rc80211_maybe_set_new(), rc80211_pick_best(), rc80211_set_rate(), rc80211_update(), and rc80211_update_tx(). Fragment reassembly state. Definition at line 994 of file net80211.h. Referenced by net80211_accum_frags(), net80211_free_frags(), and net80211_rx_frag(). The sequence number of the last packet we sent. Definition at line 997 of file net80211.h. Referenced by net80211_ll_push(), and net80211_tx_mgmt(). Packet duplication elimination state. We are only required to handle immediate duplicates for each direct sender, and since we can only have one direct sender (the AP), we need only keep the sequence control field from the most recent packet we've received. Thus, this field stores the last sequence control field we've received for a packet from the AP. Definition at line 1008 of file net80211.h. Referenced by net80211_rx(). RX management packet queue. Sometimes we want to keep probe, beacon, and action packets that we receive, such as when we're scanning for networks. Ordinarily we drop them because they are sent at a large volume (ten beacons per second per AP, broadcast) and we have no need of them except when we're scanning. When keep_mgmt is TRUE, received probe, beacon, and action management packets will be stored in this queue. Definition at line 1021 of file net80211.h. Referenced by net80211_alloc(), net80211_handle_mgmt(), and net80211_mgmt_dequeue(). RX management packet info queue. We need to keep track of the signal strength for management packets we're keeping, because that provides the only way to distinguish between multiple APs for the same network. Since we can't extend io_buffer to store signal, this field heads a linked list of "RX packet info" structures that contain that signal strength field. Its entries always parallel the entries in mgmt_queue, because the two queues are always added to or removed from in parallel. Definition at line 1034 of file net80211.h. Referenced by net80211_alloc(), net80211_handle_mgmt(), and net80211_mgmt_dequeue(). Whether to store management packets. Received beacon, probe, and action packets will be added to mgmt_queue (and their signal strengths added to mgmt_info_queue) only when this variable is TRUE. It should be set by net80211_keep_mgmt() (which returns the old value) only when calling code is prepared to poll the management queue frequently, because packets will otherwise pile up and exhaust memory. Definition at line 1046 of file net80211.h. Referenced by net80211_handle_mgmt(), and net80211_keep_mgmt().
http://dox.ipxe.org/structnet80211__device.html
CC-MAIN-2019-51
refinedweb
1,973
51.04
Learn ReactJS on Replit!📙

Some of you guys may have heard of ReactJS. According to the official website:

> [React is] a JavaScript library for building user interfaces.

But what exactly does that mean? This tutorial is meant for people that know a bit of JavaScript and want to learn the fundamentals of React. But first, lemmie tell you about a question that we've all probably asked at one point

How do I put one HTML file in another HTML file?

When we create HTML files by hand, we often come across the problem of reusability. For example, we may want to reuse a heading component across a bunch of HTML files. Or, we have a navigation bar, and we want to avoid copying and pasting each navigation item over and over again. React helps us solve this problem.

Fundamentals

Before I show you how to do this, I'd like to share one of the fundamental concepts of React. Components. You can think of a component as a capsule that contains some HTML, CSS, and JavaScript. Let me show you some examples

As you can see, each colored rectangle is a component. Rectangles that have the same color are the same component, but with a few differences. For example, all the components with the blue rectangle look the same because they all have a title, a 'last edited' time, and a picture. The differences are the actual numbers, and the actual text of the titles. The formatting is near-identical.

I found another website, and outlined a few components. The header component is in red, and the navigation components are in yellow.

From these examples, you may have noticed that components are made to be reusable. That's part of the point. 🙂

So rather than having the following in your HTML:

```html
<nav>
  <ul>
    <li>
      <div class="wrapper">
        <a href="/">Home</a>
      </div>
    </li>
    <li>
      <div class="wrapper">
        <a href="/about">About</a>
      </div>
    </li>
    <li>
      <div class="wrapper">
        <a href="/contact">Contact</a>
      </div>
    </li>
  </ul>
</nav>
```

You may want to have something like this:

```html
<nav>
  <ul>
    <NavigationItem></NavigationItem>
    <NavigationItem></NavigationItem>
    <NavigationItem></NavigationItem>
  </ul>
</nav>
```

(Note that this is not valid HTML markup, but it's useful to think this way when using React components)

Each <NavigationItem> is a reusable component and may contain the following.

```html
<li>
  <div>
    <a href="placeholder1">placeholder 2</a>
  </div>
</li>
```

Although this code won't actually run, it's useful to have this mindset of components. If you are reusing a set of elements on the page, it's likely you can create a component out of it.

Components in React

You create components in React with something called JSX. In a nutshell, it's a way of writing HTML in JavaScript. Here is a classic example of JSX

```jsx
const element = <h1>Hello, world!</h1>;
```

With some special tools, you will be able to create HTML elements using this kind of syntax with React. Here is another example - writing JSX over multiple lines.

```jsx
let MyComponent = (
  <div className="App">
    <h1>Very neat</h1>
    <MyOtherComponent />
  </div>
);
```

As you have noticed, one main difference between writing HTML and JSX is the use of HTML classes. Because JSX is closer to JavaScript (rather than HTML), you cannot use the word class. It's a reserved word (see JS Classes). If we ever want to have an HTML class, we just have to write className instead of class.

Creating a React Repl 🚀

Now that we know a bit about React, and JS, let's create a React repl!

A bunch of files are already created for us. I'd recommend looking at public/index.html. This is the actual html file that's sent to your browser.
You might be a little confused because there aren't any <script></script> tags to any JavaScript files or <link> tags to any CSS files. But after "Inspect Elementing" our website, you can clearly see that someone put them there (see the bottom of the orange rectangle or underneath the red rectangle).

Who's putting our script tags there? The details are a bit too complicated for this tutorial, but just know that certain build tools automatically inject JavaScript and CSS files in our HTML. You don't need to know about these tools right now. Just know they exist when you create a React repl, and that they work ✨magically✨ behind the scenes.

Before we leave our index.html element, just take a mental note of the single HTML element in our <body> tag

```html
<div id="root"></div>
```

So now let's check out our src/index.js (I deleted a few lines, since some of them were not needed). index.js is the start for our app. Here, we're importing React, our global CSS, and our main App component.

- Our global CSS is the index.css file. It contains CSS for the entire page. Don't put too much CSS in here since it can potentially override CSS in your React components
- Our App component is where all our other components will start. For example, you might have a <Heading />, <BodyComponent />, and <FooterComponent /> in your App component

Under all the imports, we really have one line of code on our page

```jsx
ReactDOM.render(<App />, document.getElementById('root'));
```

It's pretty self explanatory, but this makes React render our App component. It renders our App component and puts it all inside of that <div id="root"></div> element that I showed you earlier.

So React is rendering our App component! What is inside of it? Here, we create a class called App. This class is a React component. Inside this class, we are creating a render function. This function returns JSX, which will eventually get rendered on the screen. If you're a bit confused, try looking at the render function this way:

```jsx
class App extends Component {
  render() {
    let myJSX = (
      <div className="App">
        <header className="App-header">
          ...
        </header>
      </div>
    );
    return myJSX;
  }
}
```

Here, we explicitly create a variable called myJSX, then return it.

Creating our Own Components

Now that you know a bit about React and JSX, let's create our own components. Components are usually put in a folder titled components. I'm going to start off by making a Heading component. I only have text in here, just to keep it simple. We want to put the Heading component inside our App component. To do this, we just have to import it inside our App component!

```jsx
import Heading from './components/Heading'
```

Now, anywhere in your JSX, you can add the <Heading /> element. I'm going to add it to the top. It's working! However, the formatting is a bit off since there is a scrollbar. I solved this by removing everything inside of <header></header> and replacing it with my own <Body /> component. I also replaced the App.css file with my own css:

```css
* {
  margin: 0;
  padding: 0;
}

html, body {
  height: 100%;
  width: 100%;
}

.App {
  height: 100%;
}
```

If you want to see the repl up to this point, you can find it here. It looks like this:

Remember how I was talking about reusing components? This is where we would do it - just copy whatever element you want to be repeated! Go ahead and try to play around - create some components and add them to your App, Heading, or Body component!

Props

There is one more basic concept that you should know about: props.
Props (which stands for properties) let us transfer data from our parent component to a child component. This might sound confusing at first, but hopefully it'll clear up when I give an example. For this example, I'm going to be using the repl located here. Example I want to transfer data from my Heading to each NavigationItem. So, the parent component is Heading and child component is NavigationItem. Inside of the parent component, I need to have my child component imported and being used. I already have imported the NavigationItem component. I'm also using it (3 times). The next step is to add the prop. Props need to have a name and a value. I'm going to name them myPropName and myPropValue Great! We're now sending the prop from our Heading component to the NavigationItem component! We are almost finished - now we just have to render it! The props you pass down are properties of the this.props object. Simply access your prop by its name (in the child component), and you're good to go! Don't forget to add the {}curly brackets! You can also use props when you're dealing with HTML attribute values. Just add brackets and access the prop! <img src={ this.props.myImageLink } /> I hope this introduction to React was helpful! I remember when I was getting into JavaScript frameworks it was difficult since tutorials would gloss over the "obvious" parts. If this helped you or some parts were confusing, let me know and I'll make it better! Great react starter tutorial. I have forked your app and changed your class based components into functional ones. I have also used CSS modules to style the app. Check it out: Why do we need the class name = ‘app’? When I removed it, there was no difference. @Coder100 i guess for a simple example, it won't do much, but it exists because there is css for it in `src/App.css' on line 11 ;) Ah, ok. Thank you! @eankeen
https://repl.it/talk/learn/Learn-ReactJS-on-Replit/15980
CC-MAIN-2020-50
refinedweb
1,574
65.93
This is the mail archive of the cygwin mailing list for the Cygwin project.

Hi, we are developing a multiplatform application using C++ with the embedded Python. The problem is that the import of the module 'binascii' (located in python2.7/lib-dynload/binascii.dll) fails when the C++ application (which eventually calls the embedded python script) is called from the Windows cmd. Resulting error message is: 'ImportError: No such file or directory\n'

However, the import works when the C++ application is called from the cygwin shell. Import of another module, in particular 'time' (located in python2.7/lib-dynload/time.dll), works in both cases. In fact it seems that the python found the binascii.dll, but was unable to load it somehow. If we create an auxiliary python2.7/lib-dynload/binascii.py we get the same error, so we guess that the importer found binascii.dll first but failed to load it and didn't continue on to find binascii.py either. The behavior is similar to the issue. We tried to reinstall zlib and related packages without success.

From our point of view it is a bug, but possibly such usage of the embedded Python is unsupported and we were just lucky until now, using modules that work only by chance. Anyway, if it is not a bug, can anybody provide some hint about the mechanism for loading DLL python modules and how to investigate the real problem (hidden by the generic error message)?

The code to reproduce the problem is on github: and also pasted at the end.

thank you for any help.

Jan Brezina and Jan Hybs

Cygwin version:

$ uname -a
CYGWIN_NT-6.1 Flow123d 2.4.1(0.293/5/3) 2016-01-24 11:26 x86_64 Cygwin

Windows version is: Windows 7

Bug occurs in both python2.7 and python3.4 versions:

$ python2.7 --version
Python 2.7.10
$ python3.4 --version
Python 3.4.3

C++ code to reproduce the error:
-----------------------------------------------------------------------
#include <Python.h>
#include <iostream>
#include <string>

using namespace std;

int main(int argc, char *argv[]) {
    Py_Initialize();
    cout << "Python path: " << (Py_GetPath()) << endl;
    cout << "Python prefix: " << (Py_GetPrefix()) << endl;
    cout << "Python exec prefix: " << (Py_GetExecPrefix()) << endl;
    cout << "------------------" << endl;
    cout << "Python version" << endl;
    PyRun_SimpleString("\nimport sys; print (sys.version_info)");
    cout << "------------------" << endl;
    cout << "Test: time module" << endl;
    PyRun_SimpleString("\nimport time as mod; print (mod, mod.__file__)"
        "\nfor x in dir(mod):\n"
        "\n    print ('{:20s}{}'.format(x, str(getattr(mod, x, ''))[0:35]))");
    cout << "------------------" << endl;
    cout << "Test: binascii module" << endl;
    PyRun_SimpleString("\nimport binascii as mod; print (mod, mod.__file__)"
        "\nfor x in dir(mod):\n"
        "\n    print ('{:20s}{}'.format(x, str(getattr(mod, x, ''))[0:35]))");
    Py_Finalize();
    return 0;
}
------------------------------------------------------------

--
------------------------
Mgr. Jan Brezina, Ph. D.
Technical University in Liberec, New technologies institute

--
Problem reports:
FAQ:
Documentation:
Unsubscribe info:
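Not part of the original mail, but one way to start investigating: ask the import machinery directly what it finds, and run the interpreter in verbose mode so the failing DLL load is reported. A sketch for the Python 2.7 case:

    # debug sketch (my addition): run with 'python2.7 -v' or use imp directly
    import sys, imp
    print sys.path
    print imp.find_module('binascii')   # shows which file the importer picked
    import binascii                     # with -v, the dlopen failure is printed

If the same snippet is run through PyRun_SimpleString() in the embedded case, comparing sys.path and the find_module() result between the cmd and cygwin-shell launches should narrow down whether it is a path problem or a DLL dependency problem.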
http://cygwin.com/ml/cygwin/2016-03/msg00041.html
CC-MAIN-2018-22
refinedweb
464
59.4
11.30.2008 14:32 opennms

I was going to look at nagios for fink and look into adding checks for my tide stations, but I figured that since OpenNMS is already in fink and one of the fink maintainers also works on OpenNMS, it might be worth giving it a try on my Mac laptop.

% fink describe opennms
% less /sw/share/doc/opennms/README.Darwin
% fink install opennms
% fink install tomcat5 # Did I really need this one?
% sudo emacs /sw/var/postgresql-8.3/data/pg_hba.conf
# Realize that the file looks to be already configured
% sudo emacs /sw/var/postgresql-8.3/data/postgresql.conf
listen_addresses = 'localhost' # what IP address(es) to listen on;
max_connections = 192 # (change requires restart)
% sudo daemonic enable postgresql83

These are the shared memory settings for Mac OS X 10.5:

% sudo emacs /etc/sysctl.conf
kern.sysv.shmmax=16777216
kern.sysv.shmmin=1
kern.sysv.shmmni=64
kern.sysv.shmseg=8
kern.sysv.shmall=32768

Once the above parameters are set, you need to reboot the mac.

% #cd /sw/bin && sudo pgsql.sh-8.3 restart # Not needed with reboot
# There is something weird with my postgresql setup...
# Lots of complaints on restart
% sudo -u postgres createdb -U postgres -E UNICODE opennms
% install_iplike_83.sh
% sudo /sw/var/opennms/bin/install -dis
% sudo /sw/bin/tomcat5 start
% sudo opennms start
% open
U: admin, P: admin

Then, when you are ready to have an OpenNMS service always running, do this:

% sudo daemonic enable opennms

Above, you can see my comment about the postgresql restart. What's up with this?

% sudo pgsql.sh-8.3 restart
pg_ctl-8.3: PID file "/sw/var/postgresql-8.3/data/postmaster.pid" does not exist
Is server running? starting server anyway
shell-init: error retrieving current directory: getcwd: cannot access parent directories: Permission denied
server starting
LOG: 00000: could not identify current directory: Permission denied
LOCATION: find_my_exec, exec.c:196
FATAL: XX000: /sw/bin/postgres-8.3: could not locate my own executable path
LOCATION: PostmasterMain, postmaster.c:470
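Not part of the original entry, but a quick sanity check I'd suggest after the reboot -- the kernel should echo back the values from /etc/sysctl.conf:

% sysctl kern.sysv.shmmax kern.sysv.shmall
kern.sysv.shmmax: 16777216
kern.sysv.shmall: 32768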
* Run under Windows & Unix operating systems

The bdec xml specification uses constructs loosely based on those found in ASN.1. Example description:

<protocol>
    <sequence name="png">
        <field name="signature" length="64" type="hex" value="0x89504e470d0a1a0a" />
        <sequenceof name="chunks">
            <reference name="unknown chunk" />
        </sequenceof>
    </sequence>
    <common>
        <sequence name="unknown chunk">
            <field name="data length" length="32" type="integer" />
            <field name="type" length="32" type="hex" />
            <field name="data" length="${data length} * 8" type="hex" />
            <field name="crc" length="32" type="hex" />
        </sequence>
    </common>
</protocol>

11.30.2008 09:55 Univ of Alaska - Autonomous Aircraft

Autonomous aircraft make a lot of sense for certain tasks. If a ship can carry a bunch of them, then they can act as a force multiplier. Additionally, by not having to support a person onboard the aircraft, it can be made small enough to be easily transported as needed. For science applications, typical payloads might consist of 3-axis magnetometers, bathy or topo lidar (laser ranging), weather & chemical sensors and cameras. Especially interesting is the idea of the airborne magnetometers. A magnetometer must be towed a long ways behind the ship, thus inducing operational issues and layback positional issues. Additionally, I've experienced losing one in ice.

Univ. of Alaska testing two autonomous aircraft on the NOAA ship Oscar Dyson in October:

Jeff Gee has been pushing the magnetometer idea for a long time.

11.29.2008 16:44 Call for Computer Science articles for Wikipedia

There is a call for new and improved articles on Computer Science in Wikipedia: Wanted: Better Wikipedia coverage of theoretical computer science [Scott Aaronson]

# Property testing
# Quantum computation (though Quantum computer is defined)
# Algorithmic game theory
# Derandomization
# Sketching algorithms
# Propositional proof complexity (though Proof complexity is defined)
# Arithmetic circuit complexity
# Discrete harmonic analysis
# Streaming algorithms
# Hardness of approximation
...

11.28.2008 16:13 vimeo

At Loic's suggestion, I checked out Vimeo for HD video. The source video is 1024 wide, so it's only half HD. The text is definitely more readable. There is a full screen mode, but the artifacting is pretty strong.

Coastal Images for Preparation and Situational Awareness from Kurt Schwehr on Vimeo.

11.28.2008 08:14 Chart of the Future Videos on YouTube

Here are a couple videos from the CCOM Visualization Lab at CCOM produced by Matt Plumlee. YouTube doesn't make for the highest resolution videos, but at least they are published someplace.

11.27.2008 07:03 USACE contract with Woods Hole Group

Note: this is not WHOI. Environmental Planning for USACE [mtr]

Woods Hole Group has been awarded a contract by the United States Army Corps of Engineers (USACE) New England District to provide environmental planning and consulting services for various projects located throughout the 14 northeastern states from Maine to Virginia and the nation's capital. The contract can extend for up to five years, with an estimated budget of USD15 million.
* Environmental impact reports/statements
* Dredging/dredge material management
* Multi-disciplinary environmental measurements and monitoring
* Oceanography
* Risk assessments
* Hazardous and toxic waste site monitoring and remediation planning
* Other specialised studies

11.26.2008 12:42 mbsystem in fink for PPC

I got a little too clever for my own good...

% fink install mbsystem
#!/bin/bash -ve
# NOTE: do not set -e for bash in the line above. That will break the grep
perl -pi.bak -e 's|\@FINKPREFIX\@|/sw|g' install_makefiles
cpp -dM /dev/null | grep __LITTLE_ENDIAN__
### execution of /var/tmp/tmp.1.TrwYax failed, exit code 1

On PPC, the little endian check fails and grep has an exit status of 1. This is fine, but bash then sees it as an error and quits with a failure. Sigh.

Here are some options. First is 'uname -m'

big endian machine

% uname -m
Power Macintosh

little endian machine

$ uname -m
i386

That works, but if uname -m ever changes, we are sunk. Here is my new, more robust endian check that doesn't require actually compiling anything.

big endian ppc machine

% cpp -dM /dev/null | grep -i ENDIAN__ | cut -d_ -f 3
BIG

little endian i386 machine

% cpp -dM /dev/null | grep -i ENDIAN__ | cut -d_ -f 3
LITTLE

Of course, now I'm getting mbsystem failing to build. I thought I had mbsystem not building with OpenGL stuff...

mbview_callbacks.c:1719: warning: too many arguments for format
mbview_callbacks.c:1720: warning: format '%f' expects type 'double', but argument 4 has type 'int'
mbview_callbacks.c:1720: warning: too many arguments for format
mbview_callbacks.c:1775: error: 'GLwNrgba' undeclared (first use in this function)
mbview_callbacks.c:1775: error: (Each undeclared identifier is reported only once
mbview_callbacks.c:1775: error: for each function it appears in.)
mbview_callbacks.c:1775: warning: left-hand operand of comma expression has no effect
mbview_callbacks.c:1777: error: 'GLwNdepthSize' undeclared (first use in this function)
mbview_callbacks.c:1777: warning: left-hand operand of comma expression has no effect
mbview_callbacks.c:1779: error: 'GLwNdoublebuffer' undeclared (first use in this function)
% ./test_ctype.py 123 456.789001465Now to figure out the actual structure of magic_set of file. 11.25.2008 15:49 New SIO building Scripps Gets $12m for New Building [mtr] Scripps Institution of Oceanography at UC San Diego has been awarded $12 million by the U.S. Department of Commerce (DoC)/National Institute of Standards and Technology (NIST) to construct a new laboratory building on the Scripps campus for research on marine ecosystem forecasting. This new building will become a resource for marine ecological research at Scripps and for other national and international ocean science organizations that address the management of marine resources.. ... 11.25.2008 15:43 Py++ generating boost interfaces for python 11.25.2008 13:56 OBIS-SEAMAP datasets The Google Earth Blog today pointed to Ocean Biogeographic Maps in Google Earth. OBIS-SEAMAP, Ocean Biogeographic Information System Spatial Ecological Analysis of Megavertebrate Populations, is a spatially referenced online database, aggregating marine mammal, seabird and sea turtle data from across the globe. The collection can be searched and visualized through a set of advanced online mapping applications.Check out the restrictions on the site. The first usage restriction is a bit weird. I am surprised that you have to ask permission for publications. I get the commercial restriction and the citation. * Not to use data contained in OBIS-SEAMAP in any publication, product, or commercial application without prior written consent of the original data provider. * To cite both the data provider and OBIS-SEAMAP appropriately after approval of use is obtained. .... 11.25.2008 10:42 Seacoast NH farmers' markets Last weekend, Dover had an excellent farmers' market in the McIntosh Atlantic Culinary Academy. I picked up some really nice locally raised meats. For the info on local farmers' markets, check out Seacoast Eat Local Now if it would just stop storming... Now if it would just stop storming... 11.25.2008 09:49 Zort Labs netblox 11.25.2008 08:00 Home brew laser scanner Why is it that I always loose my laser pointer? I got this link from Peter Selkin. Thanks Peter! 3-D Laser Scanner. [instructables] Requires a laser pointer, wine glass, and a imaging device (aka the iSight camera built into your mac laptop). The website even provides some matlab code to get you going. Errr... never look into the laser. Know How to Make a 3-D Scanner Know How to Make a 3-D Scanner 11.25.2008 07:15 ctypes python interface built by pyglet I've been looking at python ctypes code generators a bit. The idea is to look at the C header and get most of the work done automatically to code the functions, structures, and #defines. Then I can focus on making a pythonic interface. The first one to work is wraptypes in pyglet. However, the release does not seem to contain wraptype, so I had to pull it from svn. % mkdir pyglets % cd pyglets % svn co % cd wraptypes % chmod +x wrap.py % ./wrap.py magic.hI've cut down the output some, but it shows that I can use some of the code it generates. 
__docformat__ = 'restructuredtext' import ctypes from ctypes import * import pyglet.lib _lib = pyglet.lib.load_library('magic') _int_types = (c_int16, c_int32) if hasattr(ctypes, 'c_int64'): _int_types += (ctypes.c_int64,) for t in _int_types: if sizeof(t) == sizeof(c_size_t): c_ptrdiff_t = t class c_void(Structure): _fields_ = [('dummy', c_int)] MAGIC_NONE = 0 # magic.h:32 MAGIC_DEBUG = 1 # magic.h:33 MAGIC_SYMLINK = 2 # magic.h:34 MAGIC_COMPRESS = 4 # magic.h:35 MAGIC_DEVICES = 8 # magic.h:36 # ... class struct_magic_set(Structure): __slots__ = [ ] struct_magic_set._fields_ = [ ('_opaque_struct', c_int) ] class struct_magic_set(Structure): __slots__ = [ ] struct_magic_set._fields_ = [ ('_opaque_struct', c_int) ] magic_t = POINTER(struct_magic_set) # magic.h:62 # magic.h:63 magic_open = _lib.magic_open magic_open.restype = magic_t magic_open.argtypes = [c_int] # magic.h:68 magic_buffer = _lib.magic_buffer magic_buffer.restype = c_char_p magic_buffer.argtypes = [magic_t, POINTER(None), c_size_t] # ... __all__ = ['MAGIC_NONE', 'MAGIC_DEBUG', 'MAGIC_SYMLINK', 'MAGIC_COMPRESS', 'MAGIC_DEVICES', 'MAGIC_MIME_TYPE', 'MAGIC_CONTINUE', 'MAGIC_CHECK', 'MAGIC_PRESERVE_ATIME', 'MAGIC_RAW', 'MAGIC_ERROR', 'MAGIC_MIME_ENCODING', 'MAGIC_MIME', 'MAGIC_NO_CHECK_COMPRESS', 'MAGIC_NO_CHECK_TAR', 'MAGIC_NO_CHECK_SOFT', 'MAGIC_NO_CHECK_APPTYPE', 'MAGIC_NO_CHECK_ELF', 'MAGIC_NO_CHECK_ASCII', 'MAGIC_NO_CHECK_TOKENS', 'MAGIC_NO_CHECK_FORTRAN', 'MAGIC_NO_CHECK_TROFF', 'magic_t', 'magic_open', 'magic_close', 'magic_file', 'magic_descriptor', 'magic_buffer', 'magic_error', 'magic_setflags', 'magic_load', 'magic_compile', 'magic_check', 'magic_errno']Other than the custom import and the opaque structs, I should be able to use most of this. 11.24.2008 17:54 mbsystem in fink This has been coming up a lot lately. Yes, I am still maintaining the fink install of MB-System. It still is there and working (minus the OpenGL 3D graphics portion). If you want to see how it is packaged and built, take a look at the fink info file: My fink packages: Fink - Package Database - Browse (Maintainer = 'Kurt Schwehr') and - MBSystem in fink - install XCode - installed the binary package from - run "fink configure" and activate the "unstable" This edits /sw/etc/fink.conf and puts "unstable/main" on your "Trees:" line. - run "fink selfupdate-rsync" (after the first time, you should just use "fink selfupdate" - fink list mbsystem - fink install mbsystem - wait a good while as it builds everything - run "dpkg -L mbsystem | grep bin" to see all the programs it installed. less /sw/fink/10.4/unstable/main/finkinfo/sci/mbsystem.infoNote that 10.4 and 10.5 use a symbolic link so that the trees are the same. My fink packages: Fink - Package Database - Browse (Maintainer = 'Kurt Schwehr') and - MBSystem in fink 11.22.2008 16:36 netflix on the XBox Today we finally got to update the XBox 360 and check out the big update. It's very different. At first glance, the new UI is very confusing. One very cool thing is that it now has Netflix built in. Add something to your "play it now" list and it appears on your xbox list of movies after a minute or two. Would be nice to be able to search all the available movies, but it isn't that hard to just have your laptop with you and select a movie that way. I was too lazy to hook up one of my many computers to the TV, so this is a big help. The first movie played started in absolutely horrible quality. 
But, after hitting pause, making some popcorn, and coming back, the movie restarted in full standard def NTSC video. I know... I'm still using the same Sony 24" Trinitron that I bought in 1991 in Kimball Hall.

11.22.2008 09:33 A sick mac?

A week ago, a mac that I have had running since late 2005/2006 suddenly kernel panicked. The machine hadn't crashed since I'd updated it to 10.4 after a bad experience with Microsoft's Virtual PC. It's a quad core 2.5GHz G5 with 8 GB of RAM. Not fast compared to my laptop or mac mini, but a workhorse that is currently my N-AIS box. I thought it was a fluke and rebooted the machine. This week has seen 4 more kernel panics with shorter and shorter intervals between them. Today, I ran DiskWarrior on it and rebuilt the directory. Then I went ahead and finally upgraded the OS to 10.5... this was my last 10.4 box. I didn't want to rock the boat with data logging, but since the machine is unhappy, I might as well update the OS now.

% uptime
 7:37 up 2 days, 21:12, 4 users, load averages: 0.95 0.75 0.60

11.21.2008 10:53 AIS Binary Messages and Okeanos Explorer at sunset

eNavigation 2008 finished on Wednesday and yesterday we had the RTCM SC121 Working Group on Expanded Use of AIS in VTS (focusing on AIS Binary Broadcast Messages [BBM] at the moment) at the Port of Seattle, Pier 69 in Seattle. The RTCM meeting had some intense and good discussion on the Zone message that is now renamed to Timed Area Notice [TAN] to follow my convention and remove confusion with the word zone. We also started looking into the waterways management messages. We will be using the St Lawrence Seaway and European RIS messages as background material to support our work. If you know of other waterway management AIS binary messages, please send them along.

Just because it is a cool picture...

11.20.2008 11:48 SeaClear chart

Brian T. pointed me to another free charting application: SeaClear - PC Chart plotter and GPS Navigation software. It includes support for AIS.

11.19.2008 15:57 python ctypes to wrap C code

I've looked at Python's ctypes before, but this is the first time I've done this much. The BSD/Darwin file command comes with a python interface, but it is not very satisfying. It's written in CPython and is not very flexible. I thought that I might be able to get ctypes to make a simpler interface and allow for additional functionality to be bolted on as I need it. First, here is the code:

#!/usr/bin/env python
import ctypes

# FIX: make a loader that will be cross platform
libmagic = ctypes.CDLL('/sw/lib/libmagic.1.0.0.dylib')
libmagic.magic_file.restype = ctypes.c_char_p

MAGIC_NONE    = 0x000000
MAGIC_DEBUG   = 0x000001  # Turn on debugging
MAGIC_SYMLINK = 0x000002  # Follow symlinks
# ...

class Magic:
    'Provide file identification services'
    def __init__(self, flags=MAGIC_NONE, filename=None):
        self.cookie = libmagic.magic_open(flags)
        result = libmagic.magic_load(self.cookie, filename)
    def id(self, filename):
        message = libmagic.magic_file(self.cookie, filename)
        return message
    def __del__(self):
        libmagic.magic_close(self.cookie)

if __name__ == '__main__':
    m = Magic()
    print m.id('magic.py')
    print m.id('does not exist')

Here is what the little demo outputs when run:

% ./magic.py
a python script text executable
cannot open `does not exist' (No such file or directory)

I start off the code by pulling in the ctypes library. One trick is that the return code for all functions is assumed to be an integer. That works for everything here except the magic_file call that returns a string. Therefore, I have to tell it that its restype is a c_char_p (aka a C string).
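The "FIX" comment about the hard-coded fink path could presumably be handled with ctypes.util.find_library; a minimal sketch (the fallback path here is my assumption, not part of the original code):

#!/usr/bin/env python
import ctypes
import ctypes.util

# Ask the platform linker for libmagic rather than hard-coding the fink path.
lib_path = ctypes.util.find_library('magic')
if lib_path is None:
    # Fall back to a known location (assumption: fink on Mac OS X).
    lib_path = '/sw/lib/libmagic.1.0.0.dylib'
libmagic = ctypes.CDLL(lib_path)
libmagic.magic_file.restype = ctypes.c_char_p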
For comparison, here is the C implementation of the same call in py_magic.c, the CPython interface that ships with file:

static char _magic_file__doc__[] =
"Returns a textual description of the contents of the argument passed \
as a filename or None if an error occurred and the MAGIC_ERROR flag \
is set. A call to errno() will return the numeric error code.\n";

static PyObject* py_magic_file(PyObject* self, PyObject* args)
{
    magic_cookie_hnd* hnd = (magic_cookie_hnd*)self;
    char* filename = NULL;
    const char* message = NULL;
    PyObject* result = Py_None;

    if(!(PyArg_ParseTuple(args, "s", &filename)))
        return NULL;

    message = magic_file(hnd->cookie, filename);

    if(message != NULL)
        result = PyString_FromString(message);
    else
        Py_INCREF(Py_None);

    return result;
}

The py_magic.c is called on the python side like this:

#!/usr/bin/env python
import magic

ms = magic.open(magic.MAGIC_NONE)
ms.load()
ms.file("somefile")

11.19.2008 14:49 DoRIS

DoRIS is the Austrian River Information System. Mario Sattler gave a presentation entitled:

11.19.2008 11:35 Okeanos Explorer

Thanks to Elaine for a tour of the Okeanos Explorer. Here are a couple images. The vessel has dynamic positioning system (DP) and a Transas chart system.

Bridge communications systems:

HD over Internet2 (I2) is controlled here:

11.19.2008 08:40 MapMyPage - add Google Maps popups

I gave MapMyPage a quick try. It does have some funny things that it does. It took "US Coast Guard" and found a random USCG facility.

Found via Links: Where 2.0, Weather Buoys, Argentina, Earthscape, MapMyPage, and more [gearthblog]

11.19.2008 08:27 EarthNC NOAA Buoys

11.18.2008 16:05 Bluetooth Pilot Interface

Captain Jorge Viso has these in his talk today... Turns out that Raven has exactly what I'm looking for: Raven Industries Unveils Bluetooth Pilot Interface

  Raven Industries, a manufacturer of lightweight Portable DGPS Pilot Systems, announced the introduction of the Bluetooth Pilot Interface (BPI): a Class 1 product that gives ship pilots a quality Bluetooth wireless device that transmits data from the ship's Pilot Port Interface (PPI) to a computer. The BPI features the Wire Wizard that automatically corrects mis-wired pilot plugs, a rechargeable battery, and small lightweight packaging. The integrated status lights, automatic baud-rate ...

TrueHeading also has one: Blue-Pilot.pdf

11.18.2008 14:07 Okeanos Explorer

I looked out the window from the conference center at Pier 66 and saw the NOAA ship Okeanos Explorer

11.18.2008 11:46 VTS Survey

During Brian Tetreault's presentation on the Cosco Busan incident, he mentioned that the USCG is running a survey of VTS users: USCG Customer Satisfaction Survey of Vessel Traffic Services ... expires at the end of Dec 2008. If you use a VTS, please go fill in the survey.

11.18.2008 09:26 origins of AIS

Jorge is talking at eNav2008. I didn't know about ADSSE on Alaskan tankers from the Oil Pollution Act of 1990.

The Prince William Sound automated dependent surveillance system
Radice, J.T.; Cairns, W.R.
Position Location and Navigation Symposium, 1994., IEEE
Volume , Issue , 11-15 Apr 1994 Page(s):823 - 832
Digital Object Identifier 10.1109/PLANS.1994.303397

  Summary: The US Coast Guard is presently conducting a project in order to implement an automated dependent surveillance system (ADSS) in Prince William Sound (PWS), Alaska. The ADSS will expand the coverage area of the present vessel traffic service (VTS) by over 3000 square miles, provide all weather surveillance, and maintain well defined tracks for all participants. The ADSS will utilize differential GPS (DGPS), digital selective calling (DSC), and an electronic chart based geographic information system (GIS) in order to track participating vessels. Additionally, the PWS ADSS can integrate radar and dependent surveillance data where radar coverage is provided. This paper describes the impact of the PWS ADSS on current VTS systems, the forces behind ADS implementation, the ADS concept and design, along with suggested directions for ADS systems in the United States

Also, the new AIS rule coming soon will mean that ALL vessels 65 feet or longer will have to have AIS. Also towing vessels 26 feet or greater and 600 hp will need AIS.

11.18.2008 09:21 emacs dired - directory editor

I use dired in emacs fairly often. It's easy to start... just C-x C-f to open a file and instead of a file, select a directory name (e.g ".").

Emacs: Dired Directory Management [Greg Newman]

I'll have to check out DiredPlus. And I just saw that there is PyMacs for crossing between emacs lisp and python. (Pymacs framework)

11.17.2008 18:10 eNavigation 2008 (formerly AIS) conference starts tomorrow

eNavigation 2008 starts tomorrow in Seattle. There will be a big push to get AIS related stuff figured out this week.

11.16.2008 07:34 How do I use matplotlib to make same scale graphs?

I wrote a program to walk through some data and make one png plot per item with matplotlib. This is pretty great, but how do I make the axes all the same range if the min and max values are often different? In gnuplot, I use something like this:

set xrange [0:1000]
set yrange [0:260]

With matplotlib in python, I was trying this, but it doesn't control the axes.

import matplotlib.pyplot as plt
plt.ioff()

in_file = 'somedata.dat'
somedata = data_class(in_file)

for item_num,item in enumerate(some_data):
    out_filename = ('%s-%03d.png' % (in_file[:-4],item_num))
    print out_filename
    p = plt.subplot(111)
    p.set_xlabel('sample number')
    p.set_ylabel('value')
    p.set_title('File: %s  Item num: %d' % (in_file[:-3], item_num))
    p.set_ylim(0,260)
    p.plot(item)
    #p.draw()
    plt.savefig(out_filename) #,'png') # Freaks out if I remind it that I want png
    plt.close()

With great power comes great complexity and pain. Gnuplot is easy, but I often push past its capabilities. So lazyweb, what's the answer?
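A hedged guess at an answer (I have not tested it against that exact script): set the limits after the plot call, or turn autoscaling off, since plot() can rescale axes that were set beforehand. Something like:

p = plt.subplot(111)
p.plot(item)
# Set the ranges after plotting so autoscale does not override them,
# mirroring gnuplot's set xrange/yrange.
p.set_xlim(0, 1000)
p.set_ylim(0, 260)
# Or disable autoscaling up front:
# p.set_autoscale_on(False)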
11.14.2008 16:38 AIS on the Healy

Phil just pointed me to this in his RVTEC meeting notes: The UNOLS Research Vessel Technical Enhancement Committee (RVTEC) Meeting 30.

  AIS use on HEALY, D. Chayes: Used $250 AIS to track LSSL (Louis S. St Laurent) position. ... Used Kurt Schwehr's Python encoding off his webpage.

11.14.2008 16:16 MA bridge allision

Of local interest...

Barge collides with Amesbury Swing Bridge [coastguardnews.com]

  The Coast Guard is investigating what caused a 250-foot barge to strike the Amesbury Swing Bridge in the Merrimack River around 12 p.m. today. Coast Guard Sector Boston received a call at approximately 11:50 a.m., reporting the barge William Breckenridge hit the Swing Bridge, which connects Newburyport to Amesbury, as it transited through the channel near Deer Island. No injuries or pollution have been reported. The Massachusetts State Highway Department is inspecting the extent of the damage to the bridge.
  ...

11.14.2008 12:09 spidering a disk

I've been thinking about how to best spider a disk. In python, there is os.path.walk. Combining that with the file python interface, we get the beginnings of a spider utility.

#!/usr/bin/env python
import sys
import os.path
import magic

ms = magic.open(magic.MAGIC_NONE)
ms.load()

def visit(arg, dirname, names):
    print ' ',dirname
    for name in names:
        file_type = ms.file(dirname+'/'+name)
        print '   ',name,file_type

os.path.walk('.', visit, 'arg')

This is similar to this shell command:

% find . | xargs file

The above spider.py outputs something like this (but won't yet work on windows due to '/'):

% spider.py
== . ==
  acinclude.m4 ASCII M4 macro language pre-processor text
  aclocal.m4 ASCII English text, with very long lines
  AUTHORS ASCII text, with no line terminators
  ChangeLog ISO-8859 English text
  config.guess POSIX shell script text executable
== ./doc ==
  file.man troff or preprocessor input text
  libmagic.man troff or preprocessor input text
  magic.man troff or preprocessor input text
  Makefile.am ASCII text
  Makefile.in ASCII English text
== ./magic ==
  Header magic text file for file(1) cmd
  Localstuff ASCII text
  Magdir directory
  Makefile.am ASCII English text
  Makefile.in ASCII English text
== ./magic/Magdir ==
  acorn ASCII English text
  adi ASCII text
  adventure ASCII English text
  allegro ASCII text
  alliant ASCII English text
== ./python ==
  example.py ASCII Java program text
  Makefile.am ASCII text
  Makefile.in ASCII English text
  py_magic.c ASCII C program text
  py_magic.h ASCII C program text
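For what it's worth, the newer os.walk generator avoids both the callback style and the hard-coded '/' separator (a sketch, reusing the same magic module from above):

#!/usr/bin/env python
import os
import os.path
import magic

ms = magic.open(magic.MAGIC_NONE)
ms.load()

for dirname, dirs, files in os.walk('.'):
    print ' ', dirname
    for name in files:
        # os.path.join picks the right separator, so this should
        # also behave on windows.
        print '   ', name, ms.file(os.path.join(dirname, name))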
11.14.2008 10:19 Mac networking issue?

Last night, I tried to connect my mac laptop to a Linksys wireless router. I've connected to this router many times before without trouble. However, last night, it connected, but nothing worked. When I looked at the network settings, I saw that I had a self-assigned IP address. I could ping the router, but was otherwise out of luck. I tried connecting via wired ethernet with the same result. I found in my syslog something like this:

Nov 13 19:47:52 CatBoxIII mDNSResponder[21]: Note: Frequent transitions for interface en0 (169.254.237.105); network traffic reduction measures in effect

What? Now, back at the office, things are working just fine both wireless and wired. What gives? This is an up-to-date intel mac running 10.5.5. Is there a bug in the mac dhcp handler? Another computer was using this router just fine.

11.13.2008 15:22 ka-Map and GeoMedia

Mashkoor sent out an email today asking for people to get their data submitted to NGDC. He sent out the link again to our intranet browser based web mapping system that Jim Case had done. It is driven by Intergraph's GeoMedia [wikipedia], ka-Map, and a big Oracle database. Here is a quick screenshot of the tool as it exists presently:

11.13.2008 09:35 PowerPoint handout printing mode

I didn't know until today that this is how everybody around me has been turning their powerpoints into handouts to take notes on.

11.13.2008 09:35 Scripting seminar

Yesterday, I gave the CCOM student seminar on scripting. I mostly focused on python and there were a lot of good questions and comments during the seminar.

20081112-Scripting

11.13.2008 09:03 Tide data for tappy

Here is the tide data that I used for tappy. Small section of waterlevels after converting to metric:

Here is a histogram done in matplotlib:

#!/usr/bin/env python2.5
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab

waterlevels = mlab.load('tide-all-2.dat')
n, bins, patches = plt.hist(waterlevels, 50)
plt.grid(True)
plt.show()

Also, here is the sparse.def file that tappy.py uses for my data:

parse = [
    positive_integer('year'),
    positive_integer('month'),
    positive_integer('day'),
    positive_integer('hour'),
    positive_integer('minute'),
    positive_integer('second'),
    real('water_level'),
    ]

11.13.2008 07:18 tappy - tidal analysis in python

Yesterday, I tried tappy. The program crashed and I sent Tim, the author, a crash report. Last night, he put a new version up on source forge and it seems to work! I need to look at the output to see if it makes sense, but here is the process so far.

% head tide2.raw
# START LOGGING UTC seconds since the epoch: 1221162698.25
# SPEED: 9600
# PORT: /dev/ttyS0
# TIMEOUT: 300.0
# STATIONID: memma
# DAEMON MODE: True
635 222 497,rmemma,1221162708.95
635 222 497,rmemma,1221162721.07
635 222 497,rmemma,1221162733.2
635 223 497,rmemma,1221162745.32

Then I converted the engineering units to meters above the sensor.

#!/usr/bin/env python
import datetime

# Coefficients for water level in meters
A = -1.008E-01
B =  5.125E-03
C =  7.402E-08
D =  0.

out = file('tide-tappy.dat','w')
for line in file('tide2.raw'):
    try:
        fields = line.split()
        station_id = int(fields[0])
        waterlevel_raw = int(fields[1])
        temp_c, station_name, timestamp = fields[2].split(',')
        temp_c = int(temp_c)
        timestamp = float(timestamp)
        N = waterlevel_raw
        waterlevel = ( A + B*N + C*(N**2) + D*(N**3) )
    except:
        print 'bad data:',line
        continue
    time_str = datetime.datetime.utcfromtimestamp(timestamp).strftime("%Y %m %d %H %M %S")
    out.write('%s %s\n' % (time_str,waterlevel))

The metric result was this:

% head tide-tappy.dat
2008 09 11 19 51 48 1.04059800168
2008 09 11 19 52 01 1.04059800168
2008 09 11 19 52 13 1.04059800168
2008 09 11 19 52 25 1.04575594058
2008 09 11 19 52 37 1.04575594058
2008 09 11 19 52 49 1.04575594058
2008 09 11 19 53 01 1.04575594058
2008 09 11 19 53 13 1.04575594058
2008 09 11 19 53 25 1.04575594058
2008 09 11 19 53 38 1.04575594058

Note that this data is going to be non-zero mean and all the data is way above 0. I don't know if that will cause problems with the tidal analysis. After clipping out a large continuous chunk of data, I still had 247K water levels.
That will take a long time, so I ran my decimate script to get it down to 22K samples:

decimate -l 10 tide-tappy-short.dat > tide-tappy-short2.dat

Now that I have a reasonably sized data set so it will run faster, here are the results:

% tappy.py tide-tappy-short2.dat
# NAME      SPEED            H        PHASE
# ====      =====            =        =====
Mm          0.54437626       0.1129   41.9029
MSf         1.01589577       0.0358   287.9779
2Q1         12.85428307      0.0115   188.2313
Q1          13.39865933      0.0248   200.5201
O1          13.94303559      0.1089   187.5583
NO1         14.49669238      0.0201   93.0352
K1          15.04106864      0.1053   173.8819
J1          15.58544490      0.0046   165.1417
OO1         16.13910169      0.0065   316.7976
ups1        16.68347795      0.0056   214.2090
MNS2        27.42383220      0.0128   164.7614
mu2         27.96820846      0.0873   291.0659
N2          28.43972797      0.2989   78.1917
M2          28.98410423      1.5892   105.8615
L2          29.52848049      0.0857   119.4599
S2          30.00000000      0.2055   129.5660
eta2        30.62651354      0.0193   21.9170
2SM2        31.01589577      0.0032   315.2719
MO3         42.92713982      0.0056   240.8188
M3          43.47615634      0.0011   145.7261
MK3         44.02517287      0.0039   147.9154
SK3         45.04106864      0.0031   8.1197
MN4         57.42383220      0.0186   26.3005
M4          57.96820846      0.0210   73.1954
SN4         58.43972797      0.0097   275.5858
MS4         58.98410423      0.0075   68.4415
S4          60.00000000      0.0011   315.0172
2MN6        86.40793643      0.0169   117.4640
M6          86.95231269      0.0407   124.8344
2MS6        87.96820846      0.0234   164.1220
2SM6        88.98410423      0.0075   256.1520
S6          90.00000000      0.0026   304.1771
M8          115.93641692     0.0068   174.5350

# INFERRED CONSTITUENTS
# NAME      SPEED            H        PHASE
# ====      =====            =        =====
rho1        13.47151608      0.0041   58.4877
M1          14.49205211      0.0077   248.6799
P1          14.95893136      0.0349   260.5434
2N2         27.89535171      0.0413   76.3025
nu2         28.51258472      0.0604   175.2170
lambda2     29.45562374      0.0111   331.9892
T2          29.95893332      0.0121   101.4720
R2          30.04106668      0.0016   110.5621
K2          30.08213728      0.0559   63.0246

# AVERAGE (Z0) = 2.21195756363

I now need to plot this up against the data to see how it did. I might need to add some flags to tappy, but I need more time to give it a look.
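One way to sanity-check the fit would be to rebuild a predicted series from the table, since each constituent contributes H*cos(speed*t - phase) with speed in degrees/hour, and overlay it on the observations. A rough sketch (the constituent list is abbreviated to the biggest terms and the phase convention is my assumption; tappy's docs would be the authority):

#!/usr/bin/env python
import math

Z0 = 2.21195756363  # AVERAGE from the tappy run
# (name, speed [deg/hr], H [m], phase [deg]) -- just the biggest few
constituents = [
    ('M2', 28.98410423, 1.5892, 105.8615),
    ('N2', 28.43972797, 0.2989,  78.1917),
    ('S2', 30.00000000, 0.2055, 129.5660),
    ('K1', 15.04106864, 0.1053, 173.8819),
    ('O1', 13.94303559, 0.1089, 187.5583),
]

def predict(t_hours):
    'Predicted water level at t hours after the reference time.'
    h = Z0
    for name, speed, H, phase in constituents:
        h += H * math.cos(math.radians(speed * t_hours - phase))
    return h

for hour in range(0, 25):
    print hour, predict(hour)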
11.12.2008 13:32 European LRIT data center

Moving Closer to an EU LRIT Data Centre [emsa.europa.eu] press_release_07-11-2008.pdf

  Today, 6 November 2008, at the European Maritime Safety Agency (EMSA) headquarters in Lisbon contracts have been signed for the development of the European Union Long Range Identification and Tracking Data Centre. This step marks the end of a European public procurement process started with the tender publication on 11 June 2008. Willem de Ruiter, EMSA Executive Director, Christophe Vassal, CEO of CLS (Collecte Localisation Satellites) based in Ramonville, France and Isabelle Roussin, Executive Vice President of Marketing for Highdeal S.A., based in Paris, France met to underline their dedication to setting-up a Data Centre for the EU Member States as early as possible in 2009. Mr. de Ruiter expressed his confidence in the two companies to set up a robust system for the collection and distribution of long range ship position reports. Mr. de Ruiter recalled that it was only on 2 October 2007 that the task of providing an EU LRIT Data Centre was given to the Agency by the European Transport Ministers. Substantial work has been carried out by a dedicated task force since this date. He stated: "With the professional experience of these companies (CLS and Highdeal) I trust that a good European LRIT system will soon be available to serve over 10,000 vessels flying the flag of EU Member States and interested Overseas Territories."
  ...

11.12.2008 10:12 pyproj example

I don't think I've ever posted a pyproj example in my blog, so here is a quick one:

% python
>>> from pyproj import Proj
>>> params={'proj':'utm','zone':19}
>>> proj = Proj(params)
>>> proj(-70.2052,42.0822)
(400316.0002622169, 4659605.5586070241)
>>> proj(400316.0002622169, 4659605.5586070241,inverse=True)

11.12.2008 10:01 Marine Information Objects/Overlays for Marine Protected Areas

ICAN and CARIS have been working with NOAA on Marine Information Objects/Overlays (MIO) of Marine Protected Areas (MPA). I don't know the full details of the project, but I just got permission to share some screen shots from Aldebaran II. I believe both of these are off of Florida.

11.12.2008 04:09 GIS Day at UNH Today

Today is GIS Day at UNH. Not quite the same day as the normal GIS Day

11.11.2008 16:53 Right whale AIS Project - PVA demonstration

Yesterday, Dave Wiley, Dudley Baker, and I gave an invited talk at the Passenger Vessel Association's Original Colonies Region Meeting in Portsmouth. Nice group and some good questions. Expect to see an article on Right Whales next month in their FogHorn magazine. (Membership required).

11.11.2008 16:42 Winter has come to the SeaCoast area

We were over in Portsmouth this morning and winter has definitely come to the area.

11.11.2008 06:56 AGU Virtual Globes session

Update 18-Nov-2008: Google Oceans will not be out during 2008.

A quick aside: I hear that Google Oceans has been delayed to Dec 9th. Google also has Introducing Google.org Geo Challenge Grants

The web page is now up: Virtual Globes at AGU 2008

Visualizing the Operations of the Phoenix Mars Lander

11.10.2008 19:03 Phx end of mission

Mars Phoenix Returns to Ashes [wired] with interactive timelines.
Mars Phoenix Lander Finishes Successful Work On Red Planet [LPL]
...
From Andy M.: Mars Lander Succumbs to Winter [New York Times]
Requiem for a Robot: Mars Probe Dies [abcnews]
Phoenix Mars Lander Declared Dead [slashdot]
JPL has a tribute video: Phoenix - A Tribute and Phoenix

11.10.2008 18:00 EPA uses MS Virtual Earth

Some concepts of the Environmental Response Management Application (ERMA) in here. We've met with the EPA to share ideas and it is exciting to see this kind of application.

Microsoft Virtual Earth used by EPA for "Cleanups in My Community" web site [virtualearth4gov]

  The EPA has expanded its use of the Virtual Earth mapping platform through its "Cleanups in My Community" web portal to provide a mapping and listing tool that shows sites where pollution is being or has been cleaned up throughout the United States. It maps, lists and provides cleanup progress profiles...

11.10.2008 17:41 Buzzards Bay grounding

Just found out that I know someone who was on this tug. Small world.

Boston based tug runs aground near Buzzards Bay [Coast Guard News]

  Coast Guard crews helped a Boston-based tug crew today.
  ...
11.09.2008 19:50 SWIG interface for GSF

I've been thinking about a python interface for the SAIC Generic Sensor Format (GSF) for sensor data. If I use SWIG, then it can support other languages in addition to Python (e.g. Perl). However, it turns out to create a very awkward interface. I was hoping that it would be easy to do a quick swig run and that I could write a Python class around it, but I am wondering if that is worth my time. Roland has taken a liking to boost, but this is straight C and I think that ctypes would work nicely (and keep me in working python as much as possible). Can SWIG include docstrings?

In my Makefile:

CFLAGS += ${shell python-config --includes} -std=c99

Then I build the library like this:

% swig -python -module gsf gsf.h  # Produces gsf_wrap and gsf.py
% perl -pi -e 's|\#define SWIGPYTHON|\#include "gsf.h"\n\#define SWIGPYTHON|' gsf_wrap.c
% gcc-4 -O3 -funroll-loops -fexpensive-optimizations -DNDEBUG -ffast-math \
    -Wall -Wimplicit -W -pedantic -Wredundant-decls -Wimplicit-int \
    -Wimplicit-function-declaration -Wnested-externs -march=core2 \
    -I/sw/include/python2.6 -I/sw/include/python2.6 -std=c99 -c -o gsf_wrap.o gsf_wrap.c
% gcc-4 -L/sw/lib -bundle -undefined dynamic_lookup libgsf.a gsf_wrap.o -o _gsf.so
% ipython

In [1]: import gsf

I was then unable to use gsf.gsfOpen. The prototype for gsfOpen shows that it returns the handle in the memory pointed to by handle.

int gsfOpen(const char *filename, const int mode, int *handle);

I then read SWIG:Examples:python:pointer [os.apple.com]. I realized that I have to create a swig interface file to go this route. In the examples, they have a typemap to specify which pointers are inputs and which are outputs.

// -*- c -*-
%module gsf
//
#define GSF_CREATE 1
#define GSF_READONLY 2
#define GSF_UPDATE 3
#define GSF_READONLY_INDEX 4
#define GSF_UPDATE_INDEX 5
#define GSF_APPEND 6
//
//%{
//int gsfOpen(const char *filename, const int mode, int *handle);
//%}
//
%include typemaps.i
int gsfOpen(const char *INPUT, const int INPUT, int* OUTPUT);

Now I run swig on the .i interface definition to build the gsf_wrap.c file:

% swig -python gsf.i

Then recompile as above and build the .so shared object. Here is running what I have so far:

% ipython

In [1]: import gsf

In [2]: status,fd = gsf.gsfOpen('C1101635.gsf',gsf.GSF_READONLY)

In [3]: print status,fd
(0, 1)

What I really want is something like this:

import mb.gsf
new_line = gsf('C1101635-v2.gsf',gsf.CREATE)  # Note that I got rid of GSF_ from GSF_CREATE
for ping in gsf('C1101635.gsf'):  # Note that GSF_READONLY is the default
    # plot ping
    new_ping = ping.dup()  # or gsf.ping(template=ping) or gsf.ping(ping) to copy all the nav in a ping?
    for beam_num,beam in enumerate(ping):
        # Apply some function to the beam
        new_ping[beam_num] = some_transform(beam)
new_line.close()

Or something like that. BTW, I had trouble finding GSF lines while not at CCOM. Thanks to Dave C. for this Reson 8101 survey off of Alaska: H10906 [NGDC] and I was looking at this file: C1101635.gsf. Hmm... try googling for H10906 multibeam. Or search for H10906 site:noaa.gov - these sites are really not setup well for searching. I tried to find the metadata.
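Since ctypes is the alternative mentioned above, here is roughly how the gsfOpen out-parameter would look there, a sketch only: gsfOpen and GSF_READONLY come from gsf.h, but the shared-library path and the error handling are my assumptions.

#!/usr/bin/env python
import ctypes

libgsf = ctypes.CDLL('./libgsf.so')  # assumption: gsf built as a shared lib

GSF_READONLY = 2

def gsf_open(filename, mode=GSF_READONLY):
    'Return the integer handle, raising on error, instead of the C out-parameter.'
    handle = ctypes.c_int()
    status = libgsf.gsfOpen(filename, mode, ctypes.byref(handle))
    if status != 0:
        raise IOError('gsfOpen failed for %s' % filename)
    return handle.value

fd = gsf_open('C1101635.gsf')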
11.09.2008 17:04 Slashdot on emacs tricks

(Stupid) Useful Emacs Tricks? [Ask Slashdot]

First, try these:

M-x tetris
M-x doctor
M-x yow
M-x phases-of-moon
M-x hanoi
M-x spook

And for your .emacs file:

;; remote compile support
(setq remote-compile-host "hostname")
(setq remote-shell-program "/usr/bin/ssh")
(setq default-directory "/home/username/compileDir")

;; remote edit of files:
(require 'tramp)
(setq tramp-default-method "scp")
;in one's .emacs file. Then open remote files with:
;/username@host:remote_path
;/ssh:example.com:/Users/someone/.emacs
;C-x C-f /sudo::/path/to/file, or su: C-x C-f /root@localhost:/path/to/file.
;
(setq inhibit-startup-message t)
(menu-bar-mode -1)

; Put backup files not in my directories
(setq backup-directory-alist `(("." . ,(expand-file-name "~/.emacs-backup"))))
;
(global-set-key "\C-x\C-e" 'compile)
(global-set-key "\C-x\C-g" 'goto-line)

;; setup function keys
(global-set-key [f1] 'compile)
(global-set-key [f2] 'next-error)
(global-set-key [f3] 'manual-entry)
;
(display-time)
(setq compile-command "make ")
(fset 'yes-or-no-p 'y-or-n-p) ; stop forcing me to spell out "yes"

and:

- pdb-mode - Python debugger mode for emacs
- Emacs Speaks Statistics (R)

11.09.2008 16:54 Canadian AUV's for UNCLOS

Kind of a surprising use of an AUV. Launch and recovery in the ice could prove challenging.

Unmanned robot subs key to Canada's claim on Arctic riches [canada.com]

  The twin Autonomous Underwater Vehicles are being built by Vancouver-based International Submarine Engineering Ltd. in a $4-million deal with Natural Resources Canada, Defence Research and Development Canada and other federal agencies.
  ...
  But the bright yellow, six-metre-long, 1,800-kilogram submersibles - being designed to cruise a long, pre-programmed course above the Arctic's underwater mountains - would allow Canadian scientists to gain more detailed information about the geology of the polar seabed. Jacob Verhoef, the chief federal scientist responsible for Canada's Arctic mapping mission, said Friday that the AUVs being built will make it much easier to conduct seabed surveying in the sometimes harsh polar conditions that can buffet ships, ground helicopters and create long delays in data collection.
  ...

11.07.2008 09:37 UDel CSHEL visit

Yesterday I visited the UDel Coastal Sediments, Hydrodynamic, and Engineering Lab (CSHEL). It was great to see what Art Trembanis and his students have been up to. I had a blast giving a talk for the Geology of Coasts class and a seminar for the department of Geological Sciences.

11.06.2008 08:06 grace for data exploration

I've been trying to find a good data exploration tool for 2D plots. There are lots of options and I just tried another one that Jeff Gee introduced me to back in 2002 or so: Grace.

I've got data that is time based. It's a waterlevel pressure sensor that logs the time and water level in UNIX UTC seconds. In python I convert the timestamps to be ISO 8601 dates for Grace:

a_datetime = datetime.datetime.utcfromtimestamp(timestamp)
a_datetime.strftime('%Y-%m-%dT%H:%M:%S') # ISO 8601

Running grace in the GUI mode like this brings up the data:

% fink install grace
% xmgrace event.dat

I then see this. I am not sure how to change the x-axis into something friendlier. I added the circle in photoshop to highlight the data that I am trying to explore.

11.06.2008 06:26 Arriving in Delaware

I've never been to Delaware before.
Last night I got to hang out with Art and two of his students.

11.05.2008 10:39 MIT Odyssey IV AUV visits the UNH Chase tank

Today CCOM has a visitor being tested in our big tank:

11.04.2008 17:40 python2.6 and fink - ipython

We have started working on getting python 2.6 setup in fink. Things are going pretty well, but there are some funky things. For ipython, fink is back at 0.8.2 (the latest is 0.9.1). Looks like we need to get some things updated.

% ipython
/sw/lib/python2.6/site-packages/IPython/Extensions/path.py:32: DeprecationWarning: the md5 module is deprecated; use hashlib instead
  import sys, warnings, os, fnmatch, glob, shutil, codecs, md5
/sw/lib/python2.6/site-packages/IPython/iplib.py:58: DeprecationWarning: the sets module is deprecated
  from sets import Set
Python 2.6 (r26:66714, Nov 2 2008, 16:14:23)
Type "copyright", "credits" or "license" for more information.

IPython 0.8.2 -- An enhanced Interactive Python.

In [1]:

11.04.2008 06:11 Time lapse video of sampling a gravity core

I'm planning to show this video in class today for "Non-Biogenous Sediments," so I figured it was time to put it in YouTube. I am collecting paleomag cubes for sediment fabric analysis using Anisotropy of Magnetic Susceptibility (AMS). This is from an Apple iSight camera on a firewire cable using a bash script to trigger iSightCapture about 1 time a second. Overall it takes a couple hours to process one section of a core. The video here represents something like 1-2 hours. Someone asked me what is the procedure for processing a core. I responded with a series of these time lapse videos that are hidden in the subfolders here:

11.04.2008 04:47 More python gems/tricks/tips

I was reading Eric Florenzano's trigger article and decided to take a look at his non-django posts: Gems of Python [Eric Florenzano]

That led me to this: Python gems of my own [Eric Holscher]

Also, Eric Holscher has an article on unittesting for Django... Django testing: Basic Unit Tests

- filter
- itertools.chain
- setdefault
- collections.defaultdict
- zip
- title
- urlparse.urlparse
- inspect.getargspec(somefunction)
- Generators
- re.DEBUG
- enumerate

I do i+=1 all the time. Time to break the habit. Here is enumerate:

#!/usr/bin/env python
l = ['zero','one','two','three']
for count,number_str in enumerate(l):
    print count,'...',number_str

When run:

% ./enum.py
0 ... zero
1 ... one
2 ... two
3 ... three

Starting from an arbitrary number (e.g. 1) is a bit less readable:

#!/usr/bin/env python
from itertools import izip, count
a_list = ['one','two','three','four']
start=1
for i,number_str in izip(count(start), a_list):
    print i,'...',number_str

The result:

% ./enum2.py
1 ... one
2 ... two
3 ... three
4 ... four

More stuff I need to add to my giant template file.
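For what it's worth, Python 2.6 added a start argument to enumerate itself, which makes the izip/count dance unnecessary:

#!/usr/bin/env python
# Python 2.6+ only: enumerate takes an optional start value.
a_list = ['one','two','three','four']
for i, number_str in enumerate(a_list, 1):
    print i, '...', number_str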
11.03.2008 17:22 AIS Class B alert from the USCG

The AIS Notices page at the USCG Navigation Center has announced an alert for AIS Class B: New Automatic Identification System (AIS) devices may not be visible on older AIS units [pdf]

Make sure to check out the list of devices that need updating: Safety_Alert_10-08_Class_A_Listing.pdf

There is also a notice on the same page about the testing going on in Tampa, FL of the met-hydro test message. This is a follow-on to the demonstration last ...

  Commencing 11 September 2008, the Tampa Bay Cooperative Vessel Traffic Service began broadcasting Automatic Identification System (AIS) test messages to select test participants in the area via standard AIS channels. These broadcasts-originating from MMSI 003660471-are less than 1/2 second in duration, and, should not impact other AIS users in the area. However, should they, please notify us via our AIS Problem Report or by calling our.
  ...

11.03.2008 16:20 Gallery2 and django

I'd like to see something like Gallery2 in Django. I've been using a moderate sized PHP site at NASA that is built around Gallery2 and have been impressed. I figured the best thing to do is evaluate Gallery2 on my laptop and see if I could get Django to pick up the database model via introspection.

First to install Gallery2. I got the "full" English only version. This is on a mac with fink libapache2-mod-php5.

% createdb gallery2 -E UNICODE   # Setup Postgresql database
% createuser -l -P gallery2      # Make a database user for Gallery2
% wget
% cd /sw/var/www
% tar xf ~/Desktop/gallery-2.3-full-en.tar.gz
% open

I started walking through the install. The only thing a bit custom is that I put the data directory in /sw/var/g2data so that it is not accessible for viewing as web pages. I selected Postgres. I stopped when I got to Step 9 that wants to install all the modules. At this point, the "Core" module is installed and I want to check out the database.

% mkdir ~/Desktop/djangotest
% cd ~/Desktop/djangotest
% django-admin.py startproject g2
% cd g2
% python2.5 manage.py inspectdb > model_tmp.py

The last command will dump the database model that Django thinks goes with the database tables created by Gallery2:

from django.db import models

class G2Schema(models.Model):
    g_name = models.CharField(max_length=128, primary_key=True)
    g_major = models.IntegerField()
    g_minor = models.IntegerField()
    g_createsql = models.TextField()
    g_pluginid = models.CharField(max_length=32)
    g_type = models.CharField(max_length=32)
    g_info = models.TextField()
    class Meta:
        db_table = u'g2_schema'

class G2Eventlogmap(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_userid = models.IntegerField()
    g_type = models.CharField(max_length=32)
    g_summary = models.CharField(max_length=255)
    g_details = models.TextField()
    g_location = models.CharField(max_length=255)
    g_client = models.CharField(max_length=128)
    g_timestamp = models.IntegerField()
    g_referer = models.CharField(max_length=128)
    class Meta:
        db_table = u'g2_eventlogmap'

class G2Externalidmap(models.Model):
    g_externalid = models.CharField(max_length=128)
    g_entitytype = models.CharField(max_length=32)
    g_entityid = models.IntegerField()
    class Meta:
        db_table = u'g2_externalidmap'

class G2Failedloginsmap(models.Model):
    g_username = models.CharField(max_length=32, primary_key=True)
    g_count = models.IntegerField()
    g_lastattempt = models.IntegerField()
    class Meta:
        db_table = u'g2_failedloginsmap'

class G2Accessmap(models.Model):
    g_accesslistid = models.IntegerField()
    g_userorgroupid = models.IntegerField()
    g_permission = models.TextField() # This field type is a guess.
    class Meta:
        db_table = u'g2_accessmap'

class G2Accesssubscribermap(models.Model):
    g_itemid = models.IntegerField(primary_key=True)
    g_accesslistid = models.IntegerField()
    class Meta:
        db_table = u'g2_accesssubscribermap'

class G2Albumitem(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_theme = models.CharField(max_length=32)
    g_orderby = models.CharField(max_length=128)
    g_orderdirection = models.CharField(max_length=32)
    class Meta:
        db_table = u'g2_albumitem'

class G2Animationitem(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_width = models.IntegerField()
    g_height = models.IntegerField()
    class Meta:
        db_table = u'g2_animationitem'

class G2Cachemap(models.Model):
    g_key = models.CharField(max_length=32)
    g_value = models.TextField()
    g_userid = models.IntegerField()
    g_itemid = models.IntegerField()
    g_type = models.CharField(max_length=32)
    g_timestamp = models.IntegerField()
    g_isempty = models.SmallIntegerField()
    class Meta:
        db_table = u'g2_cachemap'

class G2Childentity(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_parentid = models.IntegerField()
    class Meta:
        db_table = u'g2_childentity'

class G2Dataitem(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_mimetype = models.CharField(max_length=128)
    g_size = models.IntegerField()
    class Meta:
        db_table = u'g2_dataitem'

class G2Derivative(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_derivativesourceid = models.IntegerField()
    g_derivativeoperations = models.CharField(max_length=255)
    g_derivativeorder = models.IntegerField()
    g_derivativesize = models.IntegerField()
    g_derivativetype = models.IntegerField()
    g_mimetype = models.CharField(max_length=128)
    g_postfilteroperations = models.CharField(max_length=255)
    g_isbroken = models.SmallIntegerField()
    class Meta:
        db_table = u'g2_derivative'

class G2Derivativeimage(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_width = models.IntegerField()
    g_height = models.IntegerField()
    class Meta:
        db_table = u'g2_derivativeimage'

class G2Derivativeprefsmap(models.Model):
    g_itemid = models.IntegerField()
    g_order = models.IntegerField()
    g_derivativetype = models.IntegerField()
    g_derivativeoperations = models.CharField(max_length=255)
    class Meta:
        db_table = u'g2_derivativeprefsmap'

class G2Descendentcountsmap(models.Model):
    g_userid = models.IntegerField()
    g_itemid = models.IntegerField()
    g_descendentcount = models.IntegerField()
    class Meta:
        db_table = u'g2_descendentcountsmap'

class G2Entity(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_creationtimestamp = models.IntegerField()
    g_islinkable = models.SmallIntegerField()
    g_linkid = models.IntegerField()
    g_modificationtimestamp = models.IntegerField()
    g_serialnumber = models.IntegerField()
    g_entitytype = models.CharField(max_length=32)
    g_onloadhandlers = models.CharField(max_length=128)
    class Meta:
        db_table = u'g2_entity'

class G2Factorymap(models.Model):
    g_classtype = models.CharField(max_length=128)
    g_classname = models.CharField(max_length=128)
    g_implid = models.CharField(max_length=128)
    g_implpath = models.CharField(max_length=128)
    g_implmoduleid = models.CharField(max_length=128)
    g_hints = models.CharField(max_length=255)
    g_orderweight = models.CharField(max_length=255)
    class Meta:
        db_table = u'g2_factorymap'

class G2Filesystementity(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_pathcomponent = models.CharField(max_length=128)
    class Meta:
        db_table = u'g2_filesystementity'

class G2Group(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_grouptype = models.IntegerField()
    g_groupname = models.CharField(unique=True, max_length=128)
    class Meta:
        db_table = u'g2_group'

class G2Item(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_cancontainchildren = models.SmallIntegerField()
    g_description = models.TextField()
    g_keywords = models.CharField(max_length=255)
    g_ownerid = models.IntegerField()
    g_renderer = models.CharField(max_length=128)
    g_summary = models.CharField(max_length=255)
    g_title = models.CharField(max_length=128)
    g_viewedsincetimestamp = models.IntegerField()
    g_originationtimestamp = models.IntegerField()
    class Meta:
        db_table = u'g2_item'

class G2Itemattributesmap(models.Model):
    g_itemid = models.IntegerField(primary_key=True)
    g_viewcount = models.IntegerField()
    g_orderweight = models.IntegerField()
    g_parentsequence = models.CharField(max_length=255)
    class Meta:
        db_table = u'g2_itemattributesmap'

class G2Maintenancemap(models.Model):
    g_runid = models.IntegerField(primary_key=True)
    g_taskid = models.CharField(max_length=128)
    g_timestamp = models.IntegerField()
    g_success = models.SmallIntegerField()
    g_details = models.TextField()
    class Meta:
        db_table = u'g2_maintenancemap'

class G2Mimetypemap(models.Model):
    g_extension = models.CharField(max_length=32, primary_key=True)
    g_mimetype = models.CharField(max_length=128)
    g_viewable = models.SmallIntegerField()
    class Meta:
        db_table = u'g2_mimetypemap'

class G2Movieitem(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_width = models.IntegerField()
    g_height = models.IntegerField()
    g_duration = models.IntegerField()
    class Meta:
        db_table = u'g2_movieitem'

class G2Permissionsetmap(models.Model):
    g_module = models.CharField(max_length=128)
    g_permission = models.CharField(unique=True, max_length=128)
    g_description = models.CharField(max_length=255)
    g_bits = models.TextField() # This field type is a guess.
    g_flags = models.IntegerField()
    class Meta:
        db_table = u'g2_permissionsetmap'

class G2Photoitem(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_width = models.IntegerField()
    g_height = models.IntegerField()
    class Meta:
        db_table = u'g2_photoitem'

class G2Pluginmap(models.Model):
    g_plugintype = models.CharField(max_length=32)
    g_pluginid = models.CharField(max_length=32)
    g_active = models.SmallIntegerField()
    class Meta:
        db_table = u'g2_pluginmap'

class G2Pluginpackagemap(models.Model):
    g_plugintype = models.CharField(max_length=32)
    g_pluginid = models.CharField(max_length=32)
    g_packagename = models.CharField(max_length=32)
    g_packageversion = models.CharField(max_length=32)
    g_packagebuild = models.CharField(max_length=32)
    g_locked = models.SmallIntegerField()
    class Meta:
        db_table = u'g2_pluginpackagemap'

class G2Pluginparametermap(models.Model):
    g_plugintype = models.CharField(max_length=32)
    g_pluginid = models.CharField(max_length=32)
    g_itemid = models.IntegerField()
    g_parametername = models.CharField(max_length=128)
    g_parametervalue = models.TextField()
    class Meta:
        db_table = u'g2_pluginparametermap'

class G2Recoverpasswordmap(models.Model):
    g_username = models.CharField(max_length=32, primary_key=True)
    g_authstring = models.CharField(max_length=32)
    g_requestexpires = models.IntegerField()
    class Meta:
        db_table = u'g2_recoverpasswordmap'

class G2Sessionmap(models.Model):
    g_id = models.CharField(max_length=32, primary_key=True)
    g_userid = models.IntegerField()
    g_remoteidentifier = models.CharField(max_length=128)
    g_creationtimestamp = models.IntegerField()
    g_modificationtimestamp = models.IntegerField()
    g_data = models.TextField()
    class Meta:
        db_table = u'g2_sessionmap'

class G2Tkoperatnmap(models.Model):
    g_name = models.CharField(max_length=128, primary_key=True)
    g_parameterscrc = models.CharField(max_length=32)
    g_outputmimetype = models.CharField(max_length=128)
    g_description = models.CharField(max_length=255)
    class Meta:
        db_table = u'g2_tkoperatnmap'

class G2Tkoperatnmimetypemap(models.Model):
    g_operationname = models.CharField(max_length=128)
    g_toolkitid = models.CharField(max_length=128)
    g_mimetype = models.CharField(max_length=128)
    g_priority = models.IntegerField()
    class Meta:
        db_table = u'g2_tkoperatnmimetypemap'

class G2Tkoperatnparametermap(models.Model):
    g_operationname = models.CharField(max_length=128)
    g_position = models.IntegerField()
    g_type = models.CharField(max_length=128)
    g_description = models.CharField(max_length=255)
    class Meta:
        db_table = u'g2_tkoperatnparametermap'

class G2Tkpropertymap(models.Model):
    g_name = models.CharField(max_length=128)
    g_type = models.CharField(max_length=128)
    g_description = models.CharField(max_length=128)
    class Meta:
        db_table = u'g2_tkpropertymap'

class G2Tkpropertymimetypemap(models.Model):
    g_propertyname = models.CharField(max_length=128)
    g_toolkitid = models.CharField(max_length=128)
    g_mimetype = models.CharField(max_length=128)
    class Meta:
        db_table = u'g2_tkpropertymimetypemap'

class G2Unknownitem(models.Model):
    g_id = models.IntegerField(primary_key=True)
    class Meta:
        db_table = u'g2_unknownitem'

class G2User(models.Model):
    g_id = models.IntegerField(primary_key=True)
    g_username = models.CharField(unique=True, max_length=32)
    g_fullname = models.CharField(max_length=128)
    g_hashedpassword = models.CharField(max_length=128)
    g_email = models.CharField(max_length=255)
    g_language = models.CharField(max_length=128)
    g_locked = models.SmallIntegerField()
    class Meta:
        db_table = u'g2_user'

class G2Usergroupmap(models.Model):
    g_userid = models.IntegerField()
    g_groupid = models.IntegerField()
    class Meta:
        db_table = u'g2_usergroupmap'

class G2Lock(models.Model):
    g_lockid = models.IntegerField()
    g_readentityid = models.IntegerField()
    g_writeentityid = models.IntegerField()
    g_freshuntil = models.IntegerField()
    g_request = models.IntegerField()
    class Meta:
        db_table = u'g2_lock'

That's a lot and I won't get to dig into the tables just yet.

As an aside, you can also just directly look at the database tables. For Postgresql:

% psql gallery2
gallery2=# \d

or in MySQL:

% mysqldump --no-data --tables -u gallery2 -p******** gallery2
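If the introspected model holds up, the next experiment would presumably be registering a table or two in the Django admin to browse Gallery2's data. A sketch only; the admin.py contents and the g2.models import path are my guesses, not something tested against this schema:

# g2/admin.py -- hypothetical, pairing the inspectdb output with the admin
from django.contrib import admin
from g2.models import G2Item, G2User

class G2ItemAdmin(admin.ModelAdmin):
    list_display = ('g_id', 'g_title', 'g_ownerid')

admin.site.register(G2Item, G2ItemAdmin)
admin.site.register(G2User)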
11.03.2008 12:53 UDel talk on Thursday

  Geological Sciences Seminar
  Phoenix Mars Lander - Visualization of the Surface of Another Planet
  Presented by: Kurt Schwehr
  Center for Coastal and Ocean Engineering
  University of New Hampshire
  Thursday, November 6, 2008
  3:30 p.m.
  Room 209, Penny Hall

11.03.2008 12:49 python time

Time and date just keeps coming up as a confusing point with data logging. I wrote a quick program to help understand some common time issues with finding local time. It doesn't help that I'm looking at data from last week and there was a daylight savings change over the weekend.

% ./check_time.py
/sw/bin/date -R -u  : Mon, 03 Nov 2008 17:45:58 +0000
/sw/bin/date -R  : Mon, 03 Nov 2008 12:45:58 -0500
/sw/bin/date -u  : Mon Nov 3 17:45:58 UTC 2008
/sw/bin/date  : Mon Nov 3 12:45:58 EST 2008
/sw/bin/date -u +%s : 1225734358
/sw/bin/date +%s : 1225734358
time.time(): 1225734358.86
datetime.datetime.utcnow(): 2008-11-03 17:45:58.863182
datetime.datetime.utcnow().timetuple(): (2008, 11, 3, 12, 45, 58, 0, 308, -1)
calendar.timegm(datetime.datetime.utcnow().timetuple()) 1225734358
datetime.datetime.fromtimestamp(time.time()) 2008-11-03 12:45:58.863338
EST EST 2008-11-03 17:45:58+00:00 -> 2008-11-03 12:45:58-05:00
EST5EDT EST5EDT 2008-11-03 17:45:58+00:00 -> 2008-11-03 12:45:58-05:00

Here is the source code to the above.

#!/usr/bin/env python
import datetime, calendar, time
import pytz
import os

cmds = [
    '/sw/bin/date -R -u ',
    '/sw/bin/date -R ',
    '/sw/bin/date -u ',
    '/sw/bin/date ',
    '/sw/bin/date -u +%s',
    '/sw/bin/date +%s',
    ]

for cmd in cmds:
    print cmd,':',
    for line in os.popen(cmd):
        print line,

print 'time.time():',time.time()
print 'datetime.datetime.utcnow():',datetime.datetime.utcnow()
# NB: the label below says utcnow() but the call is actually now(), which is
# why the tuple in the output above came back in local time.
print 'datetime.datetime.utcnow().timetuple():',datetime.datetime.now().timetuple()
print 'calendar.timegm(datetime.datetime.utcnow().timetuple())',calendar.timegm(datetime.datetime.utcnow().timetuple())
print 'datetime.datetime.fromtimestamp(time.time())',datetime.datetime.fromtimestamp(time.time())

utc = pytz.utc
now = datetime.datetime.utcnow()
now = datetime.datetime(now.year,now.month,now.day, now.hour, now.minute, now.second, tzinfo=utc)

tz_strs = ['EST','EST5EDT']
for tz_str in tz_strs:
    tz = pytz.timezone(tz_str)
    print tz_str, str(tz), now, '->', now.astimezone(tz)

11.02.2008 17:52 fink... php5, apache2, python2

Looks like we now have Python 2.6 (python26) in fink! dmacks had me give a go for setuptools-py26. Seems to work when I tested it with magicdate-py26. I don't use the whole ball 'o wax, so if anybody wants to give it a runthrough, please let me know how it went and how you tested it so I can hopefully repeat the tests in the future.

# Wiped fink
% fink configure          # use unstable
% fink selfupdate-rsync   # (I use cvs rather than rsync for devel)
% fink install apache2
% fink install imagemagick
% fink install libapache2-mod-php5
% fink install php5-apache2-ssl-pgsql
% fink install netpbm-bin

Then setup the environment...

% sudo daemonic enable postgresql83
% sudo -u postgres initdb -D /sw/var/postgresql-8.3/data # Did I have to do this before?
% sudo -u postgres createuser -U postgres -W $USER -P
% sudo pgsql.sh start
% createdb gallery2 -E UNICODE
% createuser -l -P gallery2

I then untarred the gallery2 full tar into /sw/var/www. I did an "open"... but nothing.

% sudo tail -1 /sw/var/log/apache2/error.log
[Sun Nov 02 17:17:28 2008] [notice] child pid 58410 exit signal Segmentation fault (11)

What? So I tried hello world and that died too. I tried the cli:

% fink install php5-cli
% cat << EOF > hello.php
#!/usr/bin/env php5
<?php
Print "Hello world";
?>
EOF
% chmod +x hello.php
% ./hello.php
PHP Warning: PHP Startup: Unable to load dynamic library '/sw/lib/php5/20060613/pdo.so' - (null) in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/sw/lib/php5/20060613/pdo_pgsql.so' - (null) in Unknown on line 0
PHP Warning: PHP Startup: Unable to load dynamic library '/sw/lib/php5/20060613/pgsql.so' - (null) in Unknown on line 0
Hello world

Well that sort of worked. I did an apache2ctl restart and then hello world worked from within apache2. And with that, I have to put this on hold for a bit.
http://schwehr.org/blog/archives/2008-11.html
On 2/4/07, Ben Collins-Sussman <[email protected]> wrote:
> ...

Does the current model really make sense? It's certainly possible that users could encode top-secret information in their log messages, but this isn't always the case. For example, the log message "Initial import from CVS" is useful, but isn't top-secret. Even if the code itself is secret, the log message might not be.

Permission checks on log messages are also particularly expensive. If you import a million files into a Subversion repository with an "initial import" log message, Subversion will force any user who wants to view that log message to wait for a million Apache permission-check subrequests to finish. I've seen repositories where it takes hours to simply run "svn log" on a single file because the log-message permission checks are so expensive.

It might make sense to allow users to configure their log-message permissions separately, so as to avoid this bottleneck, without turning off permissions completely. Perhaps we should simply set up an "SVNLogMessageAuthz Off" flag? This flag would disable authz for log messages, therefore allowing any user who has any access to the repository to also access log messages.

Tom, would this flag help with your use case?

(By the way: What happened to the artem-soc-work branch? This branch should substantially improve the performance of log message permission checks.)

Cheers,
David
https://svn.haxx.se/dev/archive-2007-02/0055.shtml
Welcome to part four of the Machine Learning with Python tutorial series. In the previous tutorials, we got our initial data, we transformed and manipulated it a bit to our liking, and then we began to define our features. Scikit-Learn does not fundamentally need to work with Pandas and dataframes, I just prefer to do my data-handling with it, as it is fast and efficient. Instead, Scikit-learn actually fundamentally requires numpy arrays. Pandas dataframes can be easily converted to NumPy arrays, so it just so happens to work out for us!

Our code up to this point:

import Quandl, math
import numpy as np
import pandas as pd
from sklearn import preprocessing, cross_validation, svm
from sklearn.linear_model import LinearRegression

df = Quandl.get("WIKI/GOOGL")
print(df.head())
#print(df.tail())

df = df[['Adj. Close', 'HL_PCT', 'PCT_change', 'Adj. Volume']]
print(df.head())

forecast_col = 'Adj. Close'
df.fillna(value=-99999, inplace=True)
forecast_out = int(math.ceil(0.01 * len(df)))
df['label'] = df[forecast_col].shift(-forecast_out)

We'll then drop any still-NaN information from the dataframe:

df.dropna(inplace=True)

It is a typical standard with machine learning in code to define X (capital x) as the features, and y (lowercase y) as the label that corresponds to the features. As such, we can define our features and labels like so:

X = np.array(df.drop(['label'], 1))
y = np.array(df['label'])

Above, what we've done is defined X (features) as our entire dataframe EXCEPT for the label column, converted to a numpy array. We do this using the .drop method that can be applied to dataframes, which returns a new dataframe. Next, we define our y variable, which is our label, as simply the label column of the dataframe, converted to a numpy array.

We could leave it at this, and move on to training and testing, but we're going to do some pre-processing. Generally, you want your features in machine learning to be in a range of -1 to 1. This may do nothing, but it usually speeds up processing and can also help with accuracy. Because this range is so popularly used, it is included in the preprocessing module of Scikit-Learn. To utilize this, you can apply preprocessing.scale to your X variable:

X = preprocessing.scale(X)

Next, create the label, y:

y = np.array(df['label'])

Now comes the training and testing. The way this works is you take, for example, 75% of your data, and use this to train the machine learning classifier. Then you take the remaining 25% of your data, and test the classifier. Since this is your sample data, you should have the features and known labels. Thus, if you test on the last 25% of your data, you can get a sort of accuracy and reliability, often called the confidence score. There are many ways to do this, but probably the best way is using the built-in cross_validation provided, since this also shuffles your data for you. The code to do this:

X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.2)

The return here is the training set of features, testing set of features, training set of labels, and testing set of labels. Now, we're ready to define our classifier. There are many classifiers in general available through Scikit-Learn, and even a few specifically for regression. We'll show a couple in this example, but for now, let's use Support Vector Regression from Scikit-Learn's svm package:

clf = svm.SVR()

We're just going to use all of the defaults to keep things simple here, but you can learn much more about Support Vector Regression in the sklearn.svm.SVR documentation.
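One caveat worth flagging before training (my aside, not part of the original tutorial): preprocessing.scale fits its mean and standard deviation on everything you hand it, so for real forecasting you'd want to learn the scaling on the training data only and reuse it, for example with StandardScaler:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
# Learn mean/std from the training features only...
X_train = scaler.fit_transform(X_train)
# ...and apply the same transform to the held-out features.
X_test = scaler.transform(X_test)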
Once you have defined the classifier, you're ready to train it. With Scikit-Learn (sklearn), you train with .fit:

clf.fit(X_train, y_train)

Here, we're "fitting" our training features and training labels.

Our classifier is now trained. Wow, that was easy. Now we can test it!

confidence = clf.score(X_test, y_test)

Boom, tested, and then:

print(confidence)

0.960075071072

So here, we can see the accuracy is about 96%. Nothing to write home about. Let's try another classifier, this time using LinearRegression from sklearn:

clf = LinearRegression()

0.963311624499

A bit better, but basically the same. So how might we know, as scientists, which algorithm to choose? After a while, you will get used to what works in most situations and what doesn't. You can also check out: choosing the right estimator from scikit-learn's website. This can help you walk through some basic choices. If you ask people who use machine learning often though, it's really trial and error. You will try a handful of algorithms and simply go with the one that works best.

Another thing to note is that some of the algorithms must run linearly, others not. Do not confuse linear regression with the requirement to run linearly, by the way. So what does that all mean? Some of the machine learning algorithms here will process one step at a time, with no threading; others can thread and use all the CPU cores you have available. You could learn a lot about each algorithm to figure out which ones can thread, or you can visit the documentation and look for the n_jobs parameter. If it has n_jobs, you have an algorithm that can be threaded for high performance. If not, tough luck! Thus, if you are processing massive amounts of data, or you need to process medium data but at a very high rate of speed, then you would want something threaded. Let's check for our two algorithms.

Heading to the docs for sklearn.svm.SVR, and looking through the parameters, do you see n_jobs? Not me. So no, no threading here. As you could see, on our small data, it makes very little difference, but, on say even as little as 20mb of data, it makes a massive difference. Next up, let's check out the LinearRegression algorithm. Do you see n_jobs here? Indeed! So here, you can specify exactly how many threads you'll want. If you put in -1 for the value, then the algorithm will use all available threads. To do this:

clf = LinearRegression(n_jobs=-1)

That's all. While I have you doing such a rare thing (looking at documentation), let me draw your attention to the fact that, just because machine learning algorithms work with default parameters, it doesn't mean you can just ignore them. For example, let's revisit svm.SVR. SVR is support vector regression, which is a kind of... architecture... when doing machine learning. I highly encourage anyone interested in learning more to research the topic and learn from people who are far more educated than I am on the fundamentals; I will do my best to explain things simply here, but I am not an expert.

Back on topic, however. There is a parameter to svm.SVR for example which is kernel. What in the heck is that? Think of a kernel like a transformation against your data. It's a way to grossly, and I mean grossly, simplify your data. This makes processing go much faster. In the case of svm.SVR, the default is rbf, which is a type of kernel. You have a few other choices though. Check the documentation; you have 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable.
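Tying the trial-and-error advice together, one way to audition several regressors at once is a plain loop (a sketch using only the two estimators already shown):

for name, clf in [('SVR', svm.SVR()),
                  ('LinearRegression', LinearRegression(n_jobs=-1))]:
    clf.fit(X_train, y_train)
    # For regressors, score() returns R^2 on the held-out data.
    print(name, clf.score(X_test, y_test))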
Again, just like the suggestion to try the various ML algorithms that can do what you want, try the kernels. Let's do a few:

for k in ['linear','poly','rbf','sigmoid']:
    clf = svm.SVR(kernel=k)
    clf.fit(X_train, y_train)
    confidence = clf.score(X_test, y_test)
    print(k,confidence)

linear 0.960075071072
poly 0.63712232551
rbf 0.802831714511
sigmoid -0.125347960903

As we can see, the linear kernel performed the best, closely followed by rbf, then poly; sigmoid was clearly just goofing off and definitely needs to be kicked from the team.

So: we have trained and tested. Let's say we're happy with the roughly 96% confidence at this point. What would we do next? We've trained and tested, but now we want to move forward and forecast out, which is what we'll be covering in the next tutorial.
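To tie the whole thing together, here is a minimal, self-contained sketch of the train/test flow from this part. It uses synthetic data so it runs without Quandl, and note that newer scikit-learn versions moved train_test_split from sklearn.cross_validation to sklearn.model_selection:

import numpy as np
from sklearn import preprocessing, svm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Fake "features" with a known linear relationship plus noise.
rng = np.random.RandomState(42)
X = rng.rand(500, 4)
y = X.dot([1.5, -2.0, 0.5, 3.0]) + rng.normal(scale=0.1, size=500)

X = preprocessing.scale(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

for name, clf in [('LinearRegression', LinearRegression()),
                  ('SVR (linear kernel)', svm.SVR(kernel='linear'))]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))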
08.31.2008 22:00 Moved old blog entries into nanoblogger

I just finished converting some of my old blog posts from a raw weblog.html file into nanoblogger entries. The eventual goal is to move beyond nanoblogger. I'm at 2780 entries and it is getting to be way too slow. I needed something somewhat mindless to do today and that one won. The posts range from July 2007 back to Nov 2004. Somewhere I have a file with even older stuff, but I can't seem to find it right now. I tried to do a little bit of spelling and formatting cleanup. I even added a couple of images to the posts that referred to images.

08.30.2008 07:52 UNH Aquaculture Grant

Gregg announces funding for N.H. aquaculture initiative

DURHAM - U.S. Sen. Judd Gregg, R-N.H., announced that the University of New Hampshire will receive $355,000 in federal funding for aquaculture research efforts from the National Oceanic and Atmospheric Administration (NOAA). These funds will allow UNH to research technologies for reducing damage to offshore cages by marine organisms and to develop new depth control technologies for optimizing fish feeding, metabolism, stress reduction and growth. ...

"Technology for offshore aquaculture has advanced significantly in recent years; however, we need to achieve greater efficiency for production of native species like cod, haddock and halibut to be commercially viable. Growth of marine organisms on cage surfaces increases drag forces and reduces the flow of oxygen rich water to the fish." Langan continued, "This grant will develop advanced, non-toxic materials that resist unwanted growth, greatly reducing the labor required to keep cages clean. It will also develop precision depth control of cages, so that fish can be placed at depths where temperatures are ideal for fish health and growth." ...

08.29.2008 12:31 Another seminar page - UNH Marine Program

There are too many sites out there and none of them seem to have subscribable public calendars or RSS feeds. UNH Marine Program Seminars

08.29.2008 08:19 New Nikon camera with GPS

Nikon's New D90 Shoots Video, Includes GPS, Steals Canon's 50D's Thunder, and the body is $999 at Amazon.com.

GPS geo-tagging: The optional GP-1 GPS unit provides automatic real-time geo-tagging.

But... why not just put the GPS inside the camera? And how much will it cost?

Trackback: Nikon D90 SLR with GPS Support [slashgeo]

08.29.2008 07:52 GeoDjango on This Week in Django podcast.

08.28.2008 14:07 More on ORBCOMM AIS

AIS Satellite Announcement [gCaptain.com] via TheDigitalShip. ... Also, they announce a new book by Andy Norris:

Digital Ship's resident navigation expert Andy Norris has launched a new book, detailing some of the latest advancements in navigation technology. 'Radar and AIS' by Dr Norris builds on the basic radar theory and target tracking knowledge that seagoing officers already have - while looking ahead into the future where New Technology (NT) radars are hoped to provide significantly enhanced performance. From the 1st July 2008, all new radars required mandatory AIS (automatic identification system) integration. While much effort has gone into ensuring that AIS, radar and chart information is consistent, with uniform symbols and a standard resolution, operators still need guidance and instruction. ...

08.28.2008 13:54 CCOM Weather Station

We've been pulling out the stops with the new building.
Andy M. set up a weather station on the roof.

08.28.2008 12:36 Accessing the Apple Address Book application

This is a pretty cool hint. I've altered their example with a cut so you can't see actual people in my address book.

% sqlite3 ~/Library/Application\ Support/AddressBook/AddressBook-v22.abcddb \
    "select ZADDRESSNORMALIZED from ZABCDEMAILADDRESS;" | \
    cut -d@ -f 2 | sort -u | tail
sdsc.edu
sim.no
stanfordalumni.org
tone.jpl.nasa.gov
ucsd.edu
unh.edu
usgs.gov
yahoo.com

Quickly extract all email addresses from Address Book [macosxhints]

08.28.2008 09:18 North Pole NOAA cameras

This is definitely a case where the camera system needs to have an integrated GPS. I just sent StarDot a suggestion to add EXIF fields to the images. Hopefully this will come in a future firmware update. What path has this camera drifted along? Where is it now?

Arctic theme page - Live from the North Pole, noaa cam 2 latest. In this image, I did a little cropping to pick out the part that is not just flat gray.

% EXIF.py noaa2.jpg
noaa2.jpg: No EXIF information found

08.28.2008 09:03 Sunrise in Scituate

From the NOAA dock last week as Matt and I got ready for a cruise.

08.27.2008 21:23 Super basic CSS

I'm finally taking the time to understand CSS as more than just randomly hacking on things I only guess are the right things to do (which usually works). Today I got Eric Meyer's CSS: The Definitive Guide, 3rd Ed. The web is one great big distraction, so a physical book is often a relief. I already feel more comfortable playing with CSS. Here is my first pathetic test file (looking at the html that code2html output, I realize that I need to get a better code highlighter in my tool box):

<html>
  <head>
    <title>First CSS attempt</title>
    <style type="text/css">
      h1 {color: purple;}
      body {background: orange;}
    </style>
  </head>
  <body>
    <h1>h1 heading</h1>
    <p>General text</p>
    <p style="color:red;">Second paragraph</p>
  </body>
</html>

The results are not that exciting, but it does work.

08.27.2008 19:55 Django 1.0 party in Mountain View, CA

Arg! I'm on the wrong coast right now. I grew up not too far from the Tied House and ended up there many times after long days coding just down the street at NASA Ames in the Intelligent Mechanisms group (now the Autonomy and Robotics Area or some such). Some very good memories there.

08.27.2008 13:35 django 1.0 beta2 differences

BackwardsIncompatibleChanges:

class Category:
    # ...

from django.contrib import admin
admin.site.register(Category)

Also, the urls.py for admin has changed:

from django.contrib import admin
admin.autodiscover()

# ...
urlpatterns += patterns('',
    (r'^admin/doc/', include('django.contrib.admindocs.urls')),
    (r'^admin/(.*)', admin.site.root),
)

08.27.2008 11:46 JavaScript speedup with TraceMonkey

Steve L. just pointed me to this: Mozilla Speeds Up JavaScript with TraceMonkey. Sounds like the Firefox 3.1 release will be a huge deal.

08.27.2008 08:54 new noaa weather web site

New From NOAA - Southeast U.S. Marine Weather Website [gCaptain.com]. How does this relate to nowCOAST? National Weather Service, Southeastern U.S.
Marine Weather

08.27.2008 08:52 Arctic Healy links

Maintaining a presence in the Arctic requires a national commitment [An Unofficial Coast Guard Blog].

08.26.2008 14:54 10 knots limit on the US East Coast

Warning... this is one giant posting. I've extracted some key pieces of the FEIS NOAA submission... it's a lot of text. There is no direct reference to what we are working on with AIS or ListenForWhales.org.

NOAA Files Final Environmental Impact Statement on Ship Strike Reduction Measures: Agency Seeks to Slow Ships to Protect North Atlantic Right Whales. NOAA Fisheries. Final Environmental Impact Statement [pdf]

... Specific measures of each type are described in greater detail by region of application in Sections 2.1.1 through 2.1.4. For each measure, which alternative(s) include(s) it is specified. Only a subset of the measures is included in the proposed action (Alternative 6), as summarized in Section 2.2.6. As the modifications to the Boston Traffic Separation Scheme (TSS) and creation of an Area To Be Avoided (ATBA) in the Great South Channel are independent of the NMFS rulemaking and the vessel operational measures considered in the FEIS, they are no longer included as potential measures (see Section 1.4).

In all regions, unless otherwise noted, the vessel operational measures would apply only to nonsovereign vessels subject to the jurisdiction of the United States that are 65 ft (19.8 m) or greater in length overall. Sixty-five feet is a vessel-size class recognized by the maritime community and commonly used in maritime regulations (e.g., Automatic Identification System [AIS]; International Navigational Rules Act, Rules of the Road sections) to distinguish between a motorboat and a larger vessel. All Federal vessels and those state enforcement vessels engaged in enforcement or human safety missions would be exempt. In response to comments about vessel maneuverability, NMFS also decided to exempt all vessels from the speed restrictions where oceanographic, hydrographic, and/or meteorological conditions severely restrict vessel maneuverability (see Section 1.4).

With regard to speed restrictions, NMFS' proposed limit is 10 knots; however, for comparison purposes, the FEIS also considers speed limits of 12 and 14 knots. Records of ship strikes in which vessel speed was known indicate that the majority of serious injuries to, or deaths of, whales resulting from ship strikes involved ships operating at speeds of 14 knots or more (Laist et al., 2001; Jensen and Silber, 2003); therefore, a vessel traveling at less than 14 knots would reduce the likelihood and the severity of a ship strike. Recent analysis of these same records indicates that the probability of death or serious injury increases with ship speed. There is a 50 percent (0.26-0.71 for 95 percent confidence interval [CI]) chance that death or serious injury will occur if a right whale is hit by a vessel traveling at 10.5 knots. The probability increases to 75 percent at 14 knots, and exceeds 90 percent at 17 knots (Pace and Silber, 2005). Vanderlaan and Taggart (2007) came to a similar conclusion, determining that the probability of death from a collision was approximately 35-40 percent at 10 knots, 45-60 percent at 12 knots, and 60-80 percent at 14 knots; above 15 knots, it asymptotically approaches 100 percent.
Additionally, vessels traveling at lower speeds may also produce weaker hydrodynamic forces. At higher speeds, such forces have the capacity to first push a whale away from a moving ship and then draw the whale back toward the ship or propeller, resulting in a strike (Knowlton et al., 1998). These forces increase with the vessel's speed; therefore, a whale's ability to avoid a ship in close quarters may be reduced at higher vessel speeds. In a modeling study using data from observed encounters of right whales with vessels, Kite-Powell et al. (2007) determined that more than half of the right whales located in or swimming into the path of an oncoming ship traveling at 15 knots or more are likely to be struck even if the whales attempt evasive action. ...

Then at page 404:

EL Paso - Southern LNG - Elba Island, Georgia

This LNG terminal on Elba Island, Georgia is already an existing terminal (see Section 4.7.3.1 for a description of current operations at this terminal); however El Paso - Southern LNG submitted a proposal to FERC to expand this terminal. Southern LNG has agreed to notify LNG terminals via an automated identification system (AIS) to slow to 10 knots or less when consistent with safe navigation. The AIS is currently operational and sends an AIS message to all incoming vessels. Current AIS data is being archived until a live feed to NOAA's Southeast Regional Office AIS network is achieved. Informal Section 7 consultation has been completed on this terminal and NOAA has concluded that the project would not be likely to adversely affect Right Whales.

Offshore LNG Deepwater Ports

The two offshore facilities addressed in detail in this section that would have potential impacts on right whales are the Neptune and Northeast Gateway Deepwater Ports. Neptune has been approved and construction started in July 2008, and Northeast Gateway is fully operational. This section addresses the cumulative impacts of constructing/operating these facilities and the increase in vessel traffic generated by the proposed LNG terminals on right whales in the reasonably foreseeable future.

Neptune LNG

The Neptune LNG terminal is being built approximately 22 mi (35 km) northeast of Boston, Massachusetts, in a water depth of approximately 260 ft (79 m). One unloading buoy system at the deepwater port would moor up to two shuttle regasification vessels (SRVs). There would be an initial increase in vessel traffic in Massachusetts Bay during the construction of the terminal and installation of a 10.9-mi (17.5-km) pipeline that would connect to the existing Algonquin HubLine natural gas pipeline (Neptune LNG, LLC, 2005). The Deepwater Port license application includes estimates of the vessel traffic from operations (including construction); support vessels are estimated to take 61 round trips per year, SRVs would take approximately 50 round trips per year, and pilot vessels would also take 50 round trips per year, accompanying the SRVs (Neptune LNG, LLC, 2005). Therefore, this facility would increase vessel traffic by approximately 161 round trips (322 one-way trips) per year.

The USCG and MARAD published a notice of availability for the FEIS on November 2, 2006 (71 FR 64606), and the record of decision (ROD) has been approved with conditions. In their scoping comments on the NOI to prepare an EIS for the Neptune LNG Deepwater Port, NOAA specifically requested that the EIS consider the potential impacts of the construction and operation of the terminal on endangered species, including right whales.
While the FEIS does consider the potential impacts of this vessel traffic and construction on right whales, the findings of the BO supersede the conclusions in the FEIS.

In addition to the FEIS, these agencies consulted with NMFS under Section 7 of the ESA. The BO resulting from this consultation determined that the action may adversely affect but is not likely to jeopardize right whales or adversely modify or destroy critical habitat. During this process, the applicant and the agencies agreed to the following mitigation measures (which are not specific terms and conditions of the BO): seasonal speed restrictions of 10 knots or less, in accordance with the proposed rule to reduce ship strikes to right whales; year round speed restrictions in the Off Race Point SMA; and installation of passive acoustic detection buoys (to determine the presence of calling whales) in the portion of the Boston TSS that passes through SBNMS. Right whale detections through the buoys or reports from the Sighting Advisory System will be monitored prior to entering the area, and appropriate action will be taken in response to active sightings. Also, Neptune vessels will enter the Boston TSS as soon as practicable and remain in the TSS until the Boston Harbor Precautionary Area (see Figure 4-11).

Northeast Gateway

The Northeast Gateway LNG terminal is located offshore in Massachusetts Bay, approximately 13 mi (21 km) south-southeast of the city of Gloucester, Massachusetts, in Federal waters approximately 270 to 290 ft (82 to 88 m) in depth. The natural gas is delivered to shore by a new 16.4-mi (26.4-km) pipeline from the deepwater port to the existing Algonquin HubLine pipeline (Northeast Gateway Energy Bridge, LLC, 2005). As with the Neptune project, the construction and operation of this terminal will increase vessel traffic over current levels. The Deepwater Port license application states that there would be an estimated 55 to 62 Energy Bridge regasification vessel (EBRV) arrivals per year. In addition, support vessels would take one trip per week, or 52 trips per year. Therefore, this facility would increase vessel traffic by 162 to 176 round trips (324 to 352 one-way trips) per year (Northeast Gateway Energy Bridge, LLC, 2005).

The USCG and MARAD published a notice of availability for the FEIS on October 26, 2006 (71 FR 62657), and the ROD has been approved with conditions. In addition to commenting on the NOI, NOAA also provided comments to assist the USCG with their completeness determination and recommended the collection of additional data for further analyses that will be necessary to evaluate the impacts on NOAA's trust resources. These comments include NOAA's concern that the Northeast Gateway project would negatively impact conservation within SBNMS, specifically with respect to NOAA's plans to reconfigure the Boston TSS to reduce the risks of collisions between ships and endangered whales. NOAA issued an Incidental Harassment Authorization (IHA) on May 14, 2007 (72 FR 27077), which contained various monitoring and mitigation measures to prevent ship strikes to right whales.

Northeast Gateway did include some mitigation measures in its application. The applicant expressly states that "EBRV speed while transiting outer Massachusetts Bay will be less than the sea speed of the vessel because the vessel will be slowing down in preparation for docking at the Northeast Port.
In addition, Northeast Gateway will observe seasonal speed restrictions while transiting through or in the TSS adjacent to the Great South Channel and Off Race Point to minimize potential ship strikes on whales" (Northeast Gateway Energy Bridge, LLC, 2005). NOAA's comment letter reiterated that while speed may reduce the number of strikes, speed reduction alone will not reduce the risk of ship strike to zero, and the additional vessel traffic is expected to increase the risk of ship strike mortalities in SBNMS.

Another topic addressed with respect to right whales is the planned construction period of late summer to early spring, which overlaps with the high-use period of right whales in the area, primarily from January through April. The actual construction period has since been changed to May through November to avoid this seasonal aggregation. Construction commenced in May 2007 and was completed in November 2007. Also, noise during construction and the potential for entanglement by fishing gear displaced by LNG sites pose additional threats to right whales. These topics have been analyzed in the EIS and Section 7 consultations.

The BO for Northeast Gateway also came to a finding that the project may adversely affect, but is not likely to jeopardize right whales or adversely modify or destroy critical habitat. Through the Section 7 consultation process, the applicant and agencies voluntarily committed to the following mitigation measures: a seasonal 10-knot speed restriction in the Off Race Point and Great South Channel SMAs; a year-round 12-knot speed restriction in the Boston TSS; and these vessels will enter the TSS as soon as practicable, and remain in the TSS until they need to divert to transit north to the deepwater port. ...

And in the comment section:

59b) Maybe AIS would be a means to track whales; it would be hard to do for every whale, but it might be useful to help the ships identify where the whales were.

Response: If it were possible to develop this technology, it is likely many years away. Experience with satellite tagging indicates that attachment to the whale is the most significant challenge. Moreover, even if it were possible to determine where every right whale was at all times, the mariner would still need to take evasive action, e.g., limit speed.

Another comment:

72c) Commenter suggested a system with 24/7 real time reporting 365 days of the year, where information would be transmitted back to a clearinghouse, and then distributed to the maritime community through AIS and radio, and then the mariners could make decisions for themselves as to what avoidance actions they should take.

Response: Currently the infrastructure for such a system does not exist, and knowledge of right whale locations is only part of the equation. A mariner must still take some type of evasive action, which would be subjective. See the final rule for a more detailed explanation.

And another:

94g) Commenter believes that implementing new technology, such as pop-up buoys and tagging whales with transmitters, can improve the detection of whales. Also, AIS with VHF radio communication and MSR should be considered for real time ship strike avoidance.

Response: See response to #58d in reference to pop-up buoys and #88a in reference to tagging whales.

Here are 58d and 88a
to go with the responses above:

58d) Response: Pop-up buoy identification of whales has several limitations; the whales must be vocalizing, the system would not detect all whales present, and it is not always possible to determine the number of whales without visual verification. This approach would still require evasive action by the mariner.

And finally:

88a) Strongly supports the 10-knot speed restriction. Support Alternative 5, but if Alternative 6 is implemented, commenter encourages NMFS to consider using telemetry devices to track individual whales whenever possible. This would allow vessels to be notified well in advance of the presence of right whales, and would greatly improve the effectiveness of DMAs.

Response: Support acknowledged. Using telemetry devices would require attaching a transmitter to all right whales to track each individual's movement. Historically, tags attached to large whales have had a short lifetime, and sometimes resulted in infection. Finally, while telemetry may remain a useful tool for monitoring the movements of individual animals, it is improbable for an entire population. Even with knowledge of the location of every individual, the mariner would still need to take evasive action, e.g. slow the vessel. This increases unpredictability for shipping companies - an undesirable outcome, as indicated by the industry. Known times and locations of restrictions provide predictability.

08.26.2008 14:03 A blog from the Healy

Stephen Howard's Arctic web log:

I woke up this morning to find out that the night watch saw a polar bear at about 5am. Trying to hide my jealousy, I spent the next two hours up on the bridge scanning for bears, but no luck. We have hit an area of thick ice, with many floes four to six feet thick at the water line and remnants of pressure ridges that are thrust above the ice another eight feet or so. This is a good environment for bears, and everyone is keeping their cameras handy in the event one happens to be spotted. I've heard that last year at this same time fourteen bears were spotted, and at about halfway through our cruise we've only seen one. I did see a few fresh tracks of bears, though. Well, it's something!

Because of the polar bear sighting, I thought I'd dedicate today's journal entry to describing a few of the key marine mammal species in this region of the Arctic. I'm no expert on this topic, but I've located some books to help me out. I've also asked George, the community observer from Barrow who I profiled earlier, to check it over for me, as he has a great knowledge of these animals. Thanks, George!

The top predator around here is, of course, the polar bear. These guys can get enormous, up to 1800 pounds, and their principal prey are ringed seals, which they sniff out and excavate from their birthing lairs or stalk near the seal's breathing holes in the ice. Polar bears are powerful swimmers, and can swim 60 miles or more in open water. On open ice, they can run as fast as 30 mph! George says that in Barrow it is not uncommon to see three or four bears foraging on the beaches. When science teams deploy on the ice, Coast Guard crew go down with guns in case a polar bear is around. The Inupiats have an agreement with the federal government allowing subsistence hunting of polar bears, which they use for food and fur. Other than that, there is no commercial hunting of polar bears.
There is much concern about the future of polar bears; with thinning ice due to climate change, there will be fewer seal pupping sites and therefore less for the polar bears to eat. ...

08.26.2008 11:02 Alvin's replacement in the works

Thanks to Janice for pointing this out: New Sphere in Exploring the Abyss [nytimes]. Replacement HOV [whoi.edu]

08.25.2008 20:11 Submitted GeoEndNote to Thomson Reuters

I just submitted GeoEndNote as a suggested output format to Thomson Reuters' EndNote output style suggestion page.

Trackback: Sean Gillies points out that similar mapping of locations for pubs has been done before: Friends don't let friends use Endnote

Kurt, I have in fact seen mapping integrated with Zotero. Shekhar Krishnan used the GeoNames database to locate items by their place of publication in a demo at THATCamp.

08.25.2008 16:36 Healy science team spots a polar bear

Just in the last two days, the science team on the Healy spotted a polar bear. I haven't seen any bears in the hourly aloftcon camera images, but we can always hope to see one. The ice ponds are a spectacular color. 20080825-2001:

08.25.2008 11:14 Marine Measurement Forum

This is across the pond, so I will not be going, but it sounds interesting.

The Marine Measurement Forum (MMF) is a non-profit making, one-day event, etc. The MMF is normally hosted within southern England with occasional excursions to other parts of the UK.

Format: The event covers a single day and is organised such that the average attendee can travel to and from the venue on the same day. Frequency is targeted at 6 monthly intervals.

08.25.2008 11:07 noaadata 0.40 released

This version has two months of changes. I've been slowly smoking out some bugs in the postgis network bridge. Also, the regular expression (regex) for AIS NMEA strings is a big help. noaadata downloads

- aisutils/uscg.py - added a REGEX for AIS NMEA messages
- ais_positions.py works again - bug report by Joe Healy
- aisutils.uscg - added a regular expression for the USCG format. Now need to move code to using this
- Fixed AIS NMEA encoding bug reported by Miguel Eduardo Gil Biraud
- Added scripts/jpegexif2kml.py for geotagged photos
- ais-net-to-postgis - better exception logging
- ais-dumpnames - uses the uscg regex now to parse the nmea lines
- ais_pg_transitlines_noMakeLine - removed bug on vessels with no points
- ais_positions - lots of changes, but still not flexible
- six_min_avg.m - Matlab code by Val for tide stations
- ais_pg_transitsummary - timezone management with pytz. Now not specific to 2006
- tideconvert - for the memme station for June 2008 summer hydro class
- ais_pg_vesselsummary - new program to write excel spreadsheets
- database.py - fixed name handling bug with empty ship names. Traceback on trouble
- template.py - more examples
- binary.py - minor cleanup

08.25.2008 10:33 NOAA Natural Hazards db in kml

Links: Burning Man, Earth API, GlobalMapper, Hazards Database, X-Prize [Google Earth Blog]

Historical Natural Hazards Database [Google Earth Community]

... Natural Hazards Database - The NOAA National Geophysical Data Center (NGDC) maintains a database of information about historical natural hazards such as earthquakes, tsunamis, and volcanic eruptions.
They have created a KML file [Google Earth required] which lets you view the data, which helps with planning for future events in areas like disaster recovery, disaster response, etc. But it also helps you get a perspective on dangerous locations. This one is definitely worth a look. ...

This database would have been helpful to have during my PhD!

08.24.2008 19:59 3DUI conference

IEEE Symposium on 3D User Interfaces 2009, March 14-15, Lafayette, LA. Papers are due in August. In conjunction with IEEE Virtual Reality 2009.

08.24.2008 13:24 Bill Borucki - Finding planets

William "Bill" Borucki was my first boss at NASA back in 1989. He was very generous with his time and got me started on my long and windy history working with NASA. I did several projects with him. The first was on the Acquire program to collect photometric data from robotic telescopes. I later worked on software to try to find extrasolar planets [wikipedia]. I never found any (PlanetSearch1991), but now people have found many.

Systems engineering for the Kepler Mission: a search for terrestrial planets (2006). Duren, Riley M., ... Borucki, Bill, ..., SPIE Astronomical Telescopes and Instrumentation 2004.

The Kepler mission will launch in 2007 and determine the distribution of earth-size planets (0.5 to 10 earth masses) in the habitable zones (HZs) of solar-like stars. The mission will monitor > 100,000 dwarf stars simultaneously for at least 4 years. Precision differential photometry will be used to detect the periodic signals of transiting planets. Kepler will also support asteroseismology by measuring the pressure-mode (p-mode) oscillations of selected stars. Key mission elements include a spacecraft bus and 0.95 meter, wide-field, CCD-based photometer injected into an earth-trailing heliocentric orbit by a 3-stage Delta II launch vehicle as well as a distributed Ground Segment and Follow-up Observing Program. The project is currently preparing for Preliminary Design Review (October 2004) and is proceeding with detailed design and procurement of long-lead components. In order to meet the unprecedented photometric precision requirement and to ensure a statistically significant result, the Kepler mission involves technical challenges in the areas of photometric noise and systematic error reduction, stability, and false-positive rejection. Programmatic and logistical challenges include the collaborative design, modeling, integration, test, and operation of a geographically and functionally distributed project. A very rigorous systems engineering program has evolved to address these challenges. This paper provides an overview of the Kepler systems engineering program, including some examples of our processes and techniques in areas such as requirements synthesis, validation & verification, system robustness design, and end-to-end performance modeling.

08.24.2008 10:30 Camera with GPS to Google Earth track and thumbnails

I got my first draft of a Google Earth map of the GPS camera run through Boston Harbor: boston-gpscamera.kmz. The source code will be out in the next release of noaadata as a script called "jpegexif2kml."
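The jpegexif2kml script isn't out yet, so to give a feel for the approach, here is a rough, hypothetical sketch of my own (not the actual noaadata code). It assumes the old pyexiv2 Image/readMetadata API that shows up in the next entry, and it only handles photos that actually have GPS tags:

import sys
import pyexiv2

def dms_to_deg(rationals, ref):
    # Convert EXIF degree/minute/second rationals to signed decimal degrees.
    deg, minute, sec = [r.numerator / float(r.denominator) for r in rationals]
    value = deg + minute / 60.0 + sec / 3600.0
    return -value if ref in ('S', 'W') else value

def photo_placemark(filename):
    image = pyexiv2.Image(filename)
    image.readMetadata()
    lat = dms_to_deg(image['Exif.GPSInfo.GPSLatitude'],
                     image['Exif.GPSInfo.GPSLatitudeRef'])
    lon = dms_to_deg(image['Exif.GPSInfo.GPSLongitude'],
                     image['Exif.GPSInfo.GPSLongitudeRef'])
    return ('<Placemark><name>%s</name>'
            '<Point><coordinates>%f,%f</coordinates></Point>'
            '</Placemark>' % (filename, lon, lat))

print '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
for f in sys.argv[1:]:
    print photo_placemark(f)
print '</Document></kml>'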
08.23.2008 22:49 pyexiv2 - reading GPS info in JPG headers

I've been working on how to manage the large number of georeferenced images that Matt and I collected earlier this week. I'd like to be able to work directly in python. I first tried EXIF.py. It works, but it is not a proper python package.

% ./EXIF.py IMG_6829.JPG | grep -i gps
GPS GPSAltitude (Ratio): 7
GPS GPSAltitudeRef (Byte): 0
GPS GPSDOP (Ratio): 19/5
GPS GPSDate (ASCII): 2008:08:20
GPS GPSDestBearing (Ratio): 0/0
GPS GPSDestBearingRef (ASCII): T
GPS GPSDestDistance (Ratio): 0/0
GPS GPSDestDistanceRef (ASCII): K
GPS GPSDestLatitude (Ratio): [0/0, 0/0, 0/0]
GPS GPSDestLatitudeRef (ASCII):
GPS GPSDestLongitude (Ratio): [0/0, 0/0, 0/0]
GPS GPSDestLongitudeRef (ASCII):
GPS GPSImgDirection (Ratio): 0/0
GPS GPSImgDirectionRef (ASCII): T
GPS GPSLatitude (Ratio): [42, 80511/5000, 0]
GPS GPSLatitudeRef (ASCII): N
GPS GPSLongitude (Ratio): [70, 140097/2500, 0]
GPS GPSLongitudeRef (ASCII): W
GPS GPSMapDatum (ASCII):
GPS GPSMeasureMode (ASCII): 3
GPS GPSSatellites (ASCII): 08,31,73,359,36,32,, ... ]
GPS GPSSpeed (Ratio): 9
GPS GPSSpeedRef (ASCII): N
GPS GPSStatus (ASCII): A
GPS GPSTimeStamp (Ratio): [15, 20, 3709/100]
GPS GPSTrack (Ratio): 200
GPS GPSTrackRef (ASCII): T
GPS GPSVersionID (Byte): [2L, 2L, 0L, 0L]
GPS Tag 0x001B (Undefined): [0L, 0L, 0L, ... ]
GPS Tag 0x001C (Undefined): [0L, 0L, 0L, ... ]
GPS Tag 0x001E (Short): 0
Image GPSInfo (Long): 8670

Then I took a look at pyexiv2. I was not able to figure out how to properly pass include and library flags to the compile lines, so I wrote a quick setup.py:

from distutils.core import setup, Extension

srcs = ['src/libpyexiv2.cpp', 'src/libpyexiv2_wrapper.cpp']

setup(name='pyexiv2',
      version='1.0',
      ext_modules=[
          Extension('libpyexiv2', srcs,
                    libraries=['boost_python', 'exiv2']),
      ],
      py_modules=['pyexiv2'],
      )

Then I whipped up a fink info file: pyexiv2-py.info

% ipython
import pyexiv2
image = pyexiv2.Image('IMG_6829.JPG')
image.readMetadata()
lat = image['Exif.GPSInfo.GPSLatitude']
[str(field) for field in lat]
lat[0].numerator/lat[0].denominator \
    + lat[1].numerator/float(lat[1].denominator) / 60 \
    + lat[2].numerator/lat[2].denominator / 3600.

The above outputs:

['42/1', '161022/10000', '0/1']
42.268369999999997

08.23.2008 20:20 James Reserve Research Webcams

I realized that I've never blogged about this. Becca (a fellow PhD student in the Driscoll lab) is Director of the James Reserve. The reserve has quite a few cameras on-site to monitor birds and the environment. Check it out! Robocam ...

08.23.2008 20:10 PLATO computer system

I was talking to Capt Ben yesterday and he mentioned that he got to try a system called PLATO back in school. His description was pretty impressive. I had to check it out and of course it is on wikipedia:

PLATO was the first (circa 1960, on ILLIAC I) generalized computer assisted instruction system. ...

08.23.2008 10:47 Seismic air guns and whales in the Gulf of Mexico

Oil, Gas Seismic Work Not Affecting Gulf Sperm Whales, Study Shows

... The multi-year $9 million study, the largest of its type ever undertaken and formally titled the Sperm Whale Seismic Study (SWSS), found that "... air gun noise from seismic surveys that are thousands of yards distant does not drive away sperm whales living in the Gulf," Biggs explains. "... air guns come within one-third of a mile of whales or groups of whales in the Gulf." ...
In the Gulf of Mexico, sperm whales forage for deep-living prey in continental margin areas that are receiving increasing human effort in exploration for and production of oil and gas. Because these endangered species use echolocation "clicks" to search for their prey at depths of 500-2000 m, federal regulatory agencies have expressed concern that sperm whales may be impacted by anthropogenic noise produced by geophysical seismic surveys. To address this concern, TAMU scientists based in College Station and Galveston worked in partnership during summers 2002-2005 with colleagues from seven other universities for a cooperative study of sperm whales, their habitat in the Gulf of Mexico, and their response to man-made noise.

Our cooperative Sperm Whale Seismic Study (SWSS) was sponsored by the Minerals Management Service in cooperation with the Industry Research Funders Coalition (IRFC), National Science Foundation (NSF), and Office of Naval Research (ONR), with additional support provided by the National Fish and Wildlife Foundation (NFWF). This study was conducted in cooperation with scientists from Oregon State University (OSU), Woods Hole Oceanographic Institution (WHOI), Scripps Institution of Oceanography (SIO), University of Colorado (CU), University of South Florida (USF), University of Durham (UD), University of St. Andrews (UStA), and a UK small business venture called Ecologic Ltd. A Science Review Board was established to provide review and comment on the Summary Report for 2002-2004 and the project's final Synthesis Report. This board consisted of five members: one from the federal government (NOAA), one from industry, one retired from the Marine Mammal Commission, and two from the academic community. All activities involving sperm whales were performed under the terms of valid permits from NOAA Fisheries. ...

08.22.2008 13:13 night run

Would be good to do a GeoZui nighttime simulation of this location to contrast with the movie of an actual night run through a harbor. I also didn't know that flickr can do movies.

Found through: Running The Houston Ship Channel - At night [gCaptain]

08.21.2008 13:31 Boston photo survey

Matt and I spent all of yesterday doing a photo survey of much of Boston Harbor. We put on about 100 miles and took over 3000 photos. I think we both have sore arms from the cameras. We had along two Canons. Matt had the big zoom lens, I had the wide angle. The goal was to get all of the key features that a mariner might use for navigation. Things like this:

Matt under the 1 bridge:

Matt in action just around the corner from the Constitution...

Kurt in action...

We made it to many of the corners of the harbor.

08.19.2008 10:41 CODA underwater inspection system

The Underwater Inspection System (UIS) [CodaOctopus] seems to be an AUV. ...

Using patented 3D mosaicing techniques, large areas can be inspected very quickly and integrated into an intuitive geo-referenced visualisation of the whole underwater scene in real-time, in the murkiest of waters. ...

08.18.2008 19:36 Two more Healy blogs

The Exploratorium has a section about arctic regions: Ice Stories, Dispatches from Polar Scientists. Kevin Fall and Phil McGillivary are on the Healy and blogging.
08.15.2008 08:47 Updates from the Healy

Some updates from the Arctic... Monica has a blog post up from the ship: Healy Cruise - update 1. Val sent along this photo taken during the helicopter ride out to the Healy:

08.13.2008 19:20 CCOM crew in on the Healy

Just got word that the CCOM crew is onboard the USCG Healy ice breaker. They transited by helo... always fun.

08.13.2008 17:54 Passing arrangements?

This is from a few years ago, but wow. Incident Photo of The Week - Head On Collision [gCaptain / CargoLaw]

08.13.2008 11:28 LaTeX error messages

I'm not sure what is going on with my LaTeX document...

<Figures/santabarbC_bath-v6.pdf, id=279, 451.6875pt x 325.215pt>
<use Figures/santabarbC_bath-v6.pdf> [6
! pdfTeX error (ext4): \pdfendlink ended up in different nesting level than \pdfstartlink.
<to be read again>
\endgroup
\set@typeset@protect
l.633

!  ==> Fatal error occurred, the output PDF file is not finished!
Transcript written on seismic-py.log.
make: *** [force] Error 1

This happens when I add a \cite{swig} later in my document, but not early on. My bib entry isn't that interesting:

@article{swig,
  title   = {Feeding a Large--scale Physics Application to Python},
  author  = {D M Beazley and P S Lomdhal},
  year    = {1997},
  journal = {International Python Conference},
  volume  = {6},
}

I really like the way LaTeX does so many things, but its error reporting is often beyond me. At least the great google knows... pdflatex/natbib: \pdfendlink ended up in different nesting level than \pdfstartlink

I've found that pdflatex from tetex 1.0.7 aborts when natbib author-name citations are split across a page boundary in twocolumn mode. I've attached a test case and the output.

Heiko Oberdiek confirms that the same thing happens with TeXLive 6 and pdfTeX 1.00a. For technical reasons, he expects that it will be very difficult to fix.

Of course, the workaround is to adjust the spacing or linebreaking to avoid the troublesome linebreak. However, this can be difficult because no PDF output is generated when pdflatex aborts, and the error message does not list the line or section of the source file that would need to be changed.

Luckily, there's a solution: when this error occurs, write down the page number where pdflatex aborted, then add the "draft" option to hyperref:

\usepackage[draft]{hyperref}

In draft mode, I see that this reference puts a later citation over a page boundary. That was some serious frustration. Time to finish the paper in draft mode and then see if I get lucky. Otherwise, I will be adding some padding or figuring out how to prevent LaTeX from splitting particular references. I was sure I was doing something dumb in my BibTeX file. Nice to know that I was doing the right thing.

08.12.2008 10:00 Gardening - protection for seedlings

Something has been taking out my peas and beans as they start. Time to bring out the plastic bottles to give them at least a chance. In the front, you can see some of the cabbages that I'm giving a go. Only Walmart had anything, and starting this late in the summer, I don't think the chances are good.

08.11.2008 09:52 Healy webcam animation

Animation by Val Schmidt of the USCG Healy ice breaker.
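For anyone curious how an animation like this gets assembled, here is a hypothetical sketch of the approach (this is not Val's actual process): pull down the hourly webcam frames, then hand them to ImageMagick's convert. The URL pattern below is a made-up placeholder, not the real aloftcon address:

import os
import urllib

BASE_URL = 'http://example.com/healy/aloftcon/%s.jpg'  # placeholder URL

def fetch_frames(names, outdir='frames'):
    # Download each hourly webcam grab into a local directory.
    if not os.path.isdir(outdir):
        os.mkdir(outdir)
    paths = []
    for name in names:
        path = os.path.join(outdir, name + '.jpg')
        urllib.urlretrieve(BASE_URL % name, path)
        paths.append(path)
    return paths

frames = fetch_frames(['20080825-%02d00' % h for h in range(24)])
os.system('convert -delay 20 %s healy-day.gif' % ' '.join(frames))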
08.11.2008 09:21 USCG in the arctic

Coast Guard Unprepared for Climate Change in Arctic [An Unofficial Coast Guard Blog] points to Coast Guard Unprepared for Climate Change in Arctic [National Defense Magazine]

... The United States must take a two-pronged approach of matching strength with strength and engaging the Russians diplomatically. It is doing neither, Borgerson said. Denmark, Finland, Norway and Iceland are also eying the region, and Canada is spending billions to build a fleet to patrol arctic waters. ... For now, the Coast Guard is doing what it can to discover what is out there. Last fall the Coast Guard started "arctic domain awareness" flights to get a feel for traffic patterns, as well as to monitor foreign vessels to ensure they don't take fish in U.S. territorial waters, Brooks said. ... And in a report last February, the U.S. Polar Operations and Policy Work Group identified problem areas and made a series of recommendations including enhancing national security, projecting a U.S. presence and protecting sovereignty. ...

A team from here at CCOM leaves this week to continue the Law of the Sea mapping of the Arctic using the Healy. Note that the US has yet to ratify the treaty.

08.10.2008 07:53 Scripps Pier

Here is an image I just ran into in my photo collection. I took this 30-Jan-2002 while working on a paper about the deformation in the Ardath shale that crops out on the cliff here. Monica, Roxie, and Priyantha were on this beach yesterday.

08.10.2008 07:50 Phx WCL model

Mike Stetsion did an amazing job with the Phoenix Wet Chemistry Lab (WCL).

08.10.2008 07:39 Standards for Nautical Pubs

I knew about the S-100 effort, the update to S-57 to make an XML format for Electronic Navigation Charts (ENC), but I hadn't looked into the standards bodies' web pages to see what is going on. There are some interesting papers. Standardization of Nautical Publications Working Group (SNPWG) for the CoastPilot. This is a part of the Committee on Hydrographic Requirements for Information Systems (CHRIS). For example, Jeppesen - BSH Pilot Study Report: [9th meeting]

This project involved review of selected BSH publications, mapping selected sailing directions content to the SNPWG8.x object model, definition of a Jeppesen XML model, content conversion, and production of sample outputs. BSH had re-structured the content to support the SNPWG model. They translated the content to English, and provided background materials. The pilot team mapped the SNPWG/S57 object model to the sailing directions content. Jeppesen defined an XML model, XSL code, and steel-thread processes to convert the BSH source content from MS Word into an XML repository. Jeppesen defined print, web, and data extract output formats, and additional transforms and processes were developed to move data from the repository into the 3 outputs. BSH reviewed and approved version 1 outputs, then updates were applied in the XML model, and Jeppesen produced a version 2 print sample.

Groundwork for the pilot started in Hamburg, Germany in October 2007. Earnest preparation started in November. The first project outputs were reviewed in February, and the team drafted these findings in March.
The pilot concludes that it is feasible to apply the SNPWG/S57 model to the sailing directions sample and use the Jeppesen XML solution to produce print, web, and data extract encapsulations that preserve the BSH content. Caveats:

* This mapping of SNPWG/S57 object model to the BSH Handbook was defined by the pilot team, and has not been validated by SNPWG.
* We chose SNPWG/S57 objects that best fit the BSH SD text; we did not evaluate whether these also best fit the BSH implementation of ENCs.
* Some content in the sample text was unique to text document formats, and not a clear candidate for chart object attribution; the Jeppesen XML handles both, but this is beyond the SNPWG/S57 model.

The pilot study suggests that with diligent follow-through, publications products that begin to implement the SNPWG/S57 model could be introduced in advance of the 2011 milestone for S-100. This can make S-100 stronger and soften the impact of introducing S-100 for both the public and private sectors. All 3 outputs provide useful evidence on what is ...

08.09.2008 22:17 GeoDjango merged into Django mainline

This is great/big news! I didn't realize GeoDjango was going to be merged into the mainline Django so soon. First baby-steps in GeoDjango:

For a couple of days now, Django trunk finally also includes GeoDjango. A geo-spatial extension module for Django that adds -- among other things -- model fields and managers to easily use backends like the geo-spatial extensions for PostgreSQL, MySQL and Oracle. Let's see, how hard it is to get it going on my little MacBook. ...

Too bad the author talks about MacPorts instead of fink. From the GeoDjango site:

GeoDjango was merged to trunk on 5th August 2008!

08.09.2008 11:22 Practical Django Projects

I've got a copy of Practical Django Projects and am working through it. This book is a bit challenging for people like me who don't have the time to really dig deep into the topic, but it is good. The book drops you into django very quickly. In the first 3 chapters (25 pages), the book has you running a working website using static Flatpages and even has you editing the default Admin pages to add the JavaScript based TinyMCE editor to the flatpages Admin edit page.

08.09.2008 11:17 Rain and gardening

It's been raining quite a bit. We got something like two inches in the last two days. I had to add some drainage to the lake (err... garden). Here is the waterfall in Newmarket yesterday.

08.09.2008 09:58 Stellwagen Bank National Marine Sanctuary management plan comment period

Public Comment Period Extended

NOAA's Office of National Marine Sanctuaries has extended the period for public comment on the draft management plan and draft environmental assessment for Gerry E. Studds Stellwagen Bank National Marine Sanctuary (SBNMS) to Oct. 3, 2008. The original 90-day public comment period, during which eight public hearings were held throughout New England, was scheduled to end on Aug. 4, 2008. Comments on the draft management plan and draft environmental assessment will now be considered if received on or before Oct. 3, 2008, and may be submitted by mail, emailed to [email protected], or faxed to 781-545-8036.

For a copy of the draft management plan and draft environmental assessment, contact the Management Plan Review Coordinator, Stellwagen Bank National Marine Sanctuary, 175 Edward Foster Rd., Scituate, MA 02066. Copies can also be downloaded from the sanctuary Web site.
Recruitment for new members on sanctuary's advisory council

The SBNMS is seeking applicants for the newly vacant Conservation (Member) seat on its Sanctuary Advisory Council (Council) as well as the alternate seat for the At-Large member. Application packages are available at the sanctuary website, or can be obtained by contacting [email protected] or at (781) 545-8026, Ext. 201. Application packages must be received by close of business day 15 September 2008 at Stellwagen National Marine Sanctuary Office, 175 Edward Foster Road, Scituate, MA 02066; faxed to 781-545-8036; or emailed to [email protected].

08.08.2008 20:38

Under the current US copyright rules, my blog is copyright protected until at least 2074 and possibly until 2124 or so (depending on how long I live). Does that make sense? How copyright got to its current state (Patry blog ending)

What would the world of computer software be like if we had even stronger copyright protections, but they only lasted 14 years? ...

Now imagine the world where the blog and all the software I wrote was only protected for 14 years. That means that all the software I wrote in the first 4 years of undergrad would be public domain. The first software I wrote for a Marsokhod (do you even know what the Marsokhods are?) would have 2 more years to go. That would be code written for VxWorks VWMare 4.x on a 68030. In the computer world, that is the dark ages (and a world I'd rather not revisit).

08.08.2008 12:02 Python at AGU

Too bad I'm not into hydrology! AGU has a section called: H17: Python Application in Hydrology

Python is rapidly becoming one of the main tools in the toolbox of many hydrologists. Python is used for linking models, visualization, data analysis, pre and post processing of model data, computational mathematics, time series analysis, and many other tasks. In addition, it is very attractive for educational purposes. Python is a free and open-source computer language. Some of its main features include a clear and powerful syntax, a large collection of packages (libraries) and an active, helpful, open-source community. In this session, we seek to provide a representative overview of existing applications of Python in the hydrological sciences, to form a hydrological community of Python aficionados, and to demonstrate a critical mass. Above all, we want to show that Python is an excellent choice for the development of software for the solution of both small and large scale hydrological problems.

08.08.2008 07:24 PTP IEEE 1588 timing protocol for hydrographic work

Just out today: Innovation in High-precision GNSS Timing Products [Hydro International]

NovAtel Inc. and Brilliant Telecommunications Inc. have signed a technology partnership agreement to develop and deliver innovative timing, synchronization and positioning solutions. The companies will undertake cooperative development activities, combining their respective technologies to create new product platforms that target high precision applications.

"GPS and GNSS precise positioning technology is a key element to meet the growing demands placed on Network Time Protocol (NTP) and Precision Time Protocol (PTPv2) server technology, as the end-to-end transmission and synchronisation of voice, data and video across packet-based networks becomes more sophisticated," said Jon Ladd, NovAtel's Chief Executive Officer.
... If you are interested in high precision timing, you should first read Brian Calder, Rick Brennan, et al. 2007: Application of High-Precision Timing to Distributed Survey Systems [pdf]

Abstract: "... micro-s (rms) due to software constraints within the capture system. This compares to 288 ms (rms) using Reson's standard hybrid hardware/software solution, and 13.6 ms (rms) using a conventional single-oscillator timestamping model."

08.07.2008 14:57 LRIT on vessels

First Ship with LRIT Compliance ...

08.07.2008 14:08 Computer security and the US President

Memo to Next President: How to Get Cyber Security Right, by Bruce Schneier. ...

08.07.2008 06:27 Oil spill buoy

Prototype of robotic buoy developed to fight maritime pollution [Japan Today]

Robot buoy to track oil spills [primidi.com]

... This robot buoy has been designed by Naomi Kato, professor of submersible robotic engineering at the Department of Naval Architecture at Osaka University, Japan, with the members of his lab. The 'Katolab' "is conducting education and research on underwater robotics, biomechanics on aquatic animals and its application to engineering, computational hydrodynamics of viscous flow fields." ...

Development of Spilled Oil Autonomously Chasing Buoy System [Creator's website - Kato Lab]

I can't find any other pictures, so here is the same one that all the websites are showing...

08.06.2008 16:25 NOAA Rainier launches

CCOM just had two researchers out checking out these launches. NOAA 4-Year Vessel Acquisition Plan [mtr]

The design of the new launch is based on an evolution of the original hull form designed for NOAA in 1975 and was updated by Jensen Maritime Architects. This proven design features sturdy construction and a full keel for survey work in poorly charted waters. Design updates include an open working deck and 200 lb. capacity A-frame that can mount a wide variety of equipment. The propulsion package consists of a Cummins QSC 8.3 liter 490-hp engine turning a 25"x26" ZF propeller through a ZF 305 gear. Cruising speed is 24 knots and typical survey speeds are approximately eight knots. The multi-mission designed boats are equipped with a state-of-the-art hydrographic surveying suite, including dual frequency Reson 7125 multibeam sonar, Applanix POS MV positioning and attitude sensor, and a Brooke Ocean MVP-30 moving vessel profiler.

08.06.2008 15:55 Marine navigation ontologies

I'm re-reading XML Encapsulation of Navigation Typography by John Tucker and John Nyberg [Hydro 2005]. John^2 mention some work by Raphael Malyankar that I have not yet looked into. This project looks very interesting and, looking at the papers, I realized that our GeoCoastPilot project draws on this research: Representation, Distribution and Ontologies for Geospatial Knowledge [ASU]

This project is creating a computational ontology that will facilitate the creation of software that understands the meaning of geographical features. The most sophisticated geographical information systems (GIS), ENC (Electronic Navigational Chart) systems, and digital cartographic systems currently available still contain only very basic geospatial information, for example, representations of routes and waypoints, tide tables, currents, overlays of one kind or another.
Further, they are capable of only relatively basic geographic operations, for example, distance, adjacency, Voronoi diagrams, etc. Smarter processing of geospatial information is needed, giving software the ability to understand what geospatial entities "mean". Future plans for this research include ontologies for other kinds of geospatial knowledge.

08.06.2008 09:48 NOAA and Navy test marine mammal reactions to Navy Sonar

Scientists Use Naval Exercises to Learn More About How Marine Mammals React to Sonar [noaa] ...

08.05.2008 17:31 Nahant, MA

Today, we had lunch just north of Boston on a little peninsula called Nahant at Tides.

08.05.2008 08:40 First AIS problem report to the USCG

Earlier this year, the USCG said that nobody had reported any problems with AIS to them. I finally got around to submitting my first problem. My software had the faulty assumption that a vessel name in msg 5 always had at least one character. Then, last night, my daemon crashed:

NAME for 366981240 is ""

My code now handles vessels with empty name fields. In python, I now use name.strip('@ ') in one more place. I went to the USCG reporting page and submitted a report. I should really get around to reporting the two government vessels using a MMSI/UserID of 0 in the Boston area.

08.04.2008 15:00 Marine mammal tag download NOAA system request

Thanks to Andy A. for pointing us to this... Develop a System and Build Prototype to Download Dive and GPS Location Information from Telemetry Tags - Solicitation Number: WRAD-8-838

... This project will involve the development of a prototype VHF/UHF/GSM receiving station that can archive data transferred from telemetry devices deployed on marine mammals. The recipients of this contract will work in conjunction with NMFS staff and other partners on the construction of this station using existing technologies. The final product should be able to receive and store data from standard dive/gps location tags deployed on marine mammals such as the Wildlife Computers MK10 satellite tag or Sea Mammal Research Unit GPS/GSM tag. Some existing tags utilize UHF or GSM, but designers of the prototype should consider all options for transmitting/receiving archived data. The ultimate goal, beyond the scope of this contract, is a station that is portable, robust and able to be deployed in remote areas and in a variety of climates.

Timeline: Sept 2008 - Aug 2010

Deliverables:
* Report on the strengths and weaknesses (including cost, bandwidth, range, power consumption etc.) of using VHF, UHF, GSM or other technology for the transfer of data to the logging system. This should include a discussion of the feasibility of modifying existing telemetry tags to communicate with the tower.
* Schematics for a portable micro-tower high-bandwidth receiving station with data-logging capabilities capable of recording data transmitted from a marine mammal telemetry tag using VHF, UHF, GSM or other technology.
* Reports on performance tests on various aspects of the technology including, but not limited to: data quality, range, power consumption, portability.
* Quarterly project reports that will include progress on development.
* Prototype receiving-archiving station.
...
08.04.2008 08:32 2007 AUV Competition Video
This last weekend in San Diego was the annual AUV competition out at SPAWAR on Point Loma. Wish I was there to see it, but I did find a video of last year's event. Sib had me come along and help out one year while I was living in San Diego. I had fun helping out for one day.
auv08v5final.mov
AUVSI Competition page

08.03.2008 23:01 python bitarray module

08.02.2008 13:25 Neal gets a grant to map San Diego Bay
Thanks to Monica for pointing me at this article that includes quotes from Neal Driscoll (one of my PhD committee co-chairs): Scientists from two universities team up to map marine habitats in S.D. Bay [SignOnSanDiego] ...

08.02.2008 12:39 boost.python hello world on Mac OSX 10.5 with fink
I've been attempting to take a look at the boost.python tool for adding a python interface to a C++ class. I'm trying a restart on creating a cBitVector class. I don't know how to use bjam, so I'm not sure if there is an easier way to do this. But I finally got at least something to work. I had trouble with a missing init when the module name and shared object file name did not match. What I have installed from fink for this:

% fink list -i boost
Information about 7010 packages read in 0 seconds.
 i   boost-jam                3.1.16-1     Extension of Perforce Jam make replacement
 i   boost1.33-shlibs         1.33.1-1009  Boost C++ Shared Library Files
 i   boost1.34.python25       1.34.1-1004  Boost C++ Libraries: static and source libs
 i   boost1.34.python25-shli  1.34.1-1004  Boost C++ Libraries (shared libs)

There is the source file hello.cpp. I'm not a fan of .cpp for C++ code, but that was the tutorial's style.

char const* greet() {
    return "hello";
}

#include <boost/python.hpp>

BOOST_PYTHON_MODULE(hello) {
    using namespace boost::python;
    def("greet",greet);
}

Build the module. Here is the Makefile:

default:
	g++ -fno-strict-aliasing -Wno-long-double \
	    -g -fwrapv -g -Wall \
	    -I/sw/include/python2.5 -I/sw/include \
	    -c hello.cpp -o hello.o
	gcc -L/sw/lib -bundle -undefined dynamic_lookup \
	    hello.o -o hello.so -lboost_python

Now try it out...

In [1]: import hello
In [2]: hello.   # Press tab here
hello.__class__      hello.__file__           hello.__name__       hello.__repr__      hello.greet
hello.__delattr__    hello.__getattribute__   hello.__new__        hello.__setattr__   hello.html
hello.__dict__       hello.__hash__           hello.__reduce__     hello.__str__       hello.o
hello.__doc__        hello.__init__           hello.__reduce_ex__  hello.cpp           hello.so
In [2]: hello.greet()
Out[2]: 'hello'

08.01.2008 16:48 eNav 2008
Michael Winkler pointed me to the recently released schedule for eNavigation 2008 coming up in November... 2008 Agenda, e.g.:
- AIS Regulations, an update - Mr. Jorge Arroyo, U. S. Coast Guard
- Electronic Chart Systems; Where are things going, and will there be a domestic standard? And, if so, what should it be? - U.S. Coast Guard speaker
- An update of the status of Radio navigation systems (Loran and DGPS) - U.S. Coast Guard speaker
- The Coastal and River Information System: System goals, current status, future plans - Mr. Michael F. Winkler, Research Hydraulic Engineer, U. S. Army Corps of Engineers
- The COSCO BUSAN Allision and its Aftermath - CDR Brian Tetreault, USCG. Chief, Vessel Traffic Services, U.S. Coast Guard
- Expanding the use of AIS within Vessel Traffic Services: Developing binary messaging to reduce voice communications and workload - Ms. Irene M. Gonin, U.S.
Coast Guard Research & Development Center

08.01.2008 08:47 Me in the Anchor newsletter
The Anchor newsletter this month has several articles that I want to mention. There is one on the GeoCoastPilot and another on the new Integrated Ocean and Coastal Mapping (IOCM) processing center, in which I now physically reside. There is also a small paragraph about me visiting a local chapter of the Boy Scouts' Order of the Arrow.
http://schwehr.org/blog/archives/2008-08.html
crawl-002
refinedweb
10,538
53.71
Mousetracker Data #3

In this post, I'm extracting some additional information about the stimuli so that we can run further analysis on participants' choices. For further background, please refer to the first post in this series.

import os
import re
import pandas as pd

data = pd.read_csv('./data cleaned/%s' % os.listdir('./data cleaned')[0])
data.head()

includelist = pd.read_csv('n=452 subjectID.csv', header = None)
includelist = includelist[0].values
data = data.loc[data['subject'].isin(includelist)]

Overview

The basic idea is that participants were shown one of two sets of poker chips that they would split between themselves and another person close to them. In every case, they could make a choice that was either selfish (gave more to themselves), or altruistic (gave more to their partner). We wanted to know how much utility a choice would have to give to a participant before it made them selfish. In other words, how much more would a selfish choice have to give me for me not to be altruistic?

data['RESPONSE1'] = [x for x in data['resp_1'].str.extract(r'..(\d*)_.(\d*)', expand = True).values]
data['RESPONSE2'] = [x for x in data['resp_2'].str.extract(r'..(\d*)_.(\d*)', expand = True).values]

The columns 'resp_1' and 'resp_2' are image files of the choices shown. The naming convention is as follows: 'S' for how many chips the self gets, followed by that number, and 'O' for how many chips the other person gets. We also have two other columns of interest: 'response', and 'error'. 'response' is which option the participant chose, and 'error' is whether or not that was the selfish choice. In this case, '0' is a selfish choice and '1' is an altruistic choice. This will become important shortly.

data.head()
5 rows × 229 columns

Algorithm

This got a little more complicated because the software we were using to capture mouse telemetry data randomized the position of the stimuli (i.e., whether the selfish choice was on the left or right of the screen was randomized), as it should. This information is not a feature/variable on its own, but can be inferred from the 'response' and 'error' variables. If a participant chose the option on the left (response == 1), and that was coded as an 'error', it means the selfish choice was on the right (because 'errors' are altruistic choices).

There were two pieces of information that we wanted to extract:
- How many more chips did the selfish choice give vs. the altruistic choice?
- How many more chips did the selfish choice give the group vs. the altruistic choice?

For example, let's look at the first row of data:

data.head(1)
1 rows × 229 columns

In this case, our participant chose the left option, which was the selfish choice. This choice gave him/her 1 more chip (9-8), and gave the group 4 fewer chips ((9+7)-(8+12)). I didn't have time to think of an efficient way to do this for all the rows at once, so I decided to brute force it. First, I created a smaller dataframe:

tempdata = pd.DataFrame(columns = ('RESPONSE','ERROR','RESPONSE1','RESPONSE2'))
tempdata['RESPONSE'] = data['response']
tempdata['ERROR'] = data['error']
tempdata['RESPONSE1'] = data['RESPONSE1']
tempdata['RESPONSE2'] = data['RESPONSE2']
tempdata['SELFISHCHOICESELFMORE'] = 0
tempdata['SELFISHCHOICEGROUPMORE'] = 0

This algorithm basically iterates through each row in the data frame, checks to see if the selfish choice is on the left or right, and does the math I described above.
SELFISHCHOICESELFMORE = []
SELFISHCHOICEGROUPMORE = []
for row in tempdata.iterrows():
    # row[1] holds (RESPONSE, ERROR, RESPONSE1, RESPONSE2).
    # Selfish choice on the left: chose left and it was selfish (response 1, error 0),
    # or chose right and it was altruistic (response 2, error 1).
    # With responses coded 1/2 and errors 0/1, every row matches one of the two branches.
    if ((row[1][0] == 1) & (row[1][1] == 0)) | ((row[1][0] == 2) & (row[1][1] == 1)):
        try:
            SELFISHCHOICESELFMORE.append(int(row[1][2][0]) - int(row[1][3][0]))
            SELFISHCHOICEGROUPMORE.append((int(row[1][2][0]) + int(row[1][2][1])) - (int(row[1][3][0]) + int(row[1][3][1])))
        except:
            SELFISHCHOICESELFMORE.append(None)
            SELFISHCHOICEGROUPMORE.append(None)
    # Selfish choice on the right: the mirror image of the above.
    elif ((row[1][0] == 2) & (row[1][1] == 0)) | ((row[1][0] == 1) & (row[1][1] == 1)):
        try:
            SELFISHCHOICESELFMORE.append(int(row[1][3][0]) - int(row[1][2][0]))
            SELFISHCHOICEGROUPMORE.append((int(row[1][3][0]) + int(row[1][3][1])) - (int(row[1][2][0]) + int(row[1][2][1])))
        except:
            SELFISHCHOICESELFMORE.append(None)
            SELFISHCHOICEGROUPMORE.append(None)

tempdata = tempdata.drop(['RESPONSE1','RESPONSE2', 'RESPONSE', 'ERROR'], axis = 1)
tempdata['SELFISHCHOICESELFMORE'] = SELFISHCHOICESELFMORE
tempdata['SELFISHCHOICEGROUPMORE'] = SELFISHCHOICEGROUPMORE

Concatenating and writing to a csv:

outdata = pd.concat([data,tempdata], axis = 1)
outdata.to_csv('combineddata3.csv')
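For larger datasets, the same bookkeeping can be done without an explicit Python loop. The following is a minimal vectorized sketch of the logic above, assuming the same column layout (RESPONSE in {1, 2}, ERROR in {0, 1}, and RESPONSE1/RESPONSE2 holding the two extracted digit strings); it is not from the original post.

import numpy as np

# Selfish option is on the left when (response==1, error==0) or (response==2, error==1).
selfish_left = ((tempdata['RESPONSE'] == 1) & (tempdata['ERROR'] == 0)) | \
               ((tempdata['RESPONSE'] == 2) & (tempdata['ERROR'] == 1))

left_self   = tempdata['RESPONSE1'].str[0].astype(float)   # chips to self, left option
left_other  = tempdata['RESPONSE1'].str[1].astype(float)   # chips to other, left option
right_self  = tempdata['RESPONSE2'].str[0].astype(float)
right_other = tempdata['RESPONSE2'].str[1].astype(float)

# Difference in own chips, and in total chips, between the selfish and altruistic options.
self_more  = np.where(selfish_left, left_self - right_self, right_self - left_self)
group_more = np.where(selfish_left,
                      (left_self + left_other) - (right_self + right_other),
                      (right_self + right_other) - (left_self + left_other))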
http://bryansim.github.io/categories/mousetracker/
CC-MAIN-2019-18
refinedweb
739
50.12
Many mobile apps need the ability to preview files — email attachments, web links, cloud photos, and other assets. Some apps even need the ability to open and handle files themselves. Although file sharing between iOS applications hasn't always been available and easy, basic file sharing scenarios are now entirely accessible and easily available to any iOS app. In this post we'll take a look at how iOS apps can register as a file type handler for a specific file type, how apps can preview files, and how apps can trigger an "Open in…" dialog so that another app can get its chance to handle the file. As in previous posts, I'll be using Xamarin, but the Objective C APIs are almost identical, so it should be quite easy to port the examples below to a native iOS app if necessary. Let's get started with file association.

File Association

An iOS app declares itself as a file type handler in its main .plist file, which contains a section for the document types the app supports. The association is not based only on the file extension. Instead, you have to use the UTI (Uniform Type Identifier) for the file type you're interested in, unless you're making up your own file format — in which case you'll have to register a custom UTI as well. Although you can edit the .plist file directly, Xamarin's project options offer an "Advanced" tab where you can add the document type quite easily. In Xcode, the same property pane is under the Target properties if you select your project in the navigator.

After adding the document type in the .plist, your app is all set to handle the specified file type — in the example above, we're all set to handle PDF files. Now it's just a matter of finding some other app that wants to open a PDF file, such as Safari. Click the PDF file's background and wait for the "Open in…" menu to appear on the top. Your app should be one of the candidates now.

But wait, we haven't done anything yet on the app side to actually handle the incoming file. What iOS does when sharing a file from one app to another is copy the file to the target application's Documents/Inbox directory, where you can find it yourself. The system also calls the OpenUrl ([UIApplicationDelegate application:openUrl:sourceApplication:annotation:] in Objective C) method on your application delegate, which can work directly with the url parameter to display or process the incoming file. Because the application delegate is often decoupled from the view controller that will be handling the file, you might consider using a pub/sub solution, such as NSNotificationCenter, to post a message from the application delegate to the view controller responsible for displaying the file. Something along the following lines:

// In your AppDelegate class:
public override bool OpenUrl (UIApplication application, NSUrl url,
                              string sourceApplication, NSObject annotation)
{
    Console.WriteLine ("Invoked with OpenUrl: {0}", url.AbsoluteString);
    NSNotificationCenter.DefaultCenter.PostNotificationName ("OpenUrl", url);
    return true;
}

Preview

OK, at this time we're registered to handle PDF files, but we don't actually do anything about them. The next step is to display a preview of the provided PDF file. Instead of integrating a third party PDF library, we're going to use QuickLook, a powerful built-in framework that can display a preview for a large number of file formats, including Microsoft Office documents, PDF files, and images.
QuickLook is built around QLPreviewController, which is a view controller that you can present with the item(s) to preview. It requires an implementation of QLPreviewControllerDataSource, which feeds it with preview items (QLPreviewItem) that have a title and a URL. The following view controller handles the details of receiving the notification from NSNotificationCenter posted in the previous step, and offers the user a button which will display the document using QuickLook's preview controller.

public class MainViewController : UIViewController
{
    private NSUrl _documentUrl;

    public override void ViewDidLoad ()
    {
        base.ViewDidLoad ();
        View.BackgroundColor = UIColor.LightGray;
        UIButton previewButton = new UIButton (new RectangleF (100, 200, 150, 30));
        previewButton.SetTitle ("Preview", UIControlState.Normal);
        previewButton.TouchUpInside += OnPreviewTapped;
        View.AddSubview (previewButton);
        NSNotificationCenter.DefaultCenter.AddObserver ("OpenUrl", OnOpenUrl);
    }

    private class PreviewItem : QLPreviewItem
    {
        public string Title { get; set; }
        public NSUrl Url { get; set; }
        public override NSUrl ItemUrl { get { return Url; } }
        public override string ItemTitle { get { return Title; } }
    }

    private class PreviewDataSource : QLPreviewControllerDataSource
    {
        private NSUrl _url;

        public PreviewDataSource(NSUrl url)
        {
            _url = url;
        }

        public override int PreviewItemCount (QLPreviewController controller)
        {
            return 1;
        }

        public override QLPreviewItem GetPreviewItem (QLPreviewController controller, int index)
        {
            return new PreviewItem { Title = "PDF Document", Url = _url };
        }
    }

    private void OnPreviewTapped(object sender, EventArgs args)
    {
        QLPreviewController qlPreview = new QLPreviewController();
        qlPreview.DataSource = new PreviewDataSource(_documentUrl);
        PresentViewController (qlPreview, true, null);
    }

    private void OnOpenUrl (NSNotification notification)
    {
        _documentUrl = (NSUrl) notification.Object;
    }
}

The result, when the button is tapped, looks like this — the additional sharing capabilities are provided directly by QLPreviewController.

Open In…

Next, it would be nice to integrate our app back with the rest of the system. Specifically, the "Open in…" feature we used in Safari to test our app is something users might look for when viewing documents or working with data. If our app can handle PDF documents, it might be nice enough to offer other apps a go at that document. Even if the app doesn't handle any files, it might still have some data (such as links, text, tables, or images) to share. Modern versions of iOS have a very simple facility for sharing data from your application — UIActivityViewController. This view controller accepts an array of data items such as links, images, URLs, or files, and displays a list of apps and actions appropriate for your data source. Again, all you need to do is provide the data:

// In the following code, _documentUrl is the NSUrl for the document we're working on
UIActivityViewController activityVC = new UIActivityViewController (
    new NSObject[] { new NSString("PDF Document"), _documentUrl }, null);
PresentViewController (activityVC, true, null);

The result is a menu with the actions suitable for a PDF document, such as sending it by email.

Summary

In this post, we looked at some ways an iOS app can integrate with the rest of the system by sharing files and receiving files from other applications. Associating your app with a file type makes sense if you can display, edit, or process specific files.
Sharing data/files or previewing files makes sense even if you're not a content provider or the user's first pick at editing a certain type of file.

I am posting short links and updates on Twitter as well as on this blog. You can follow me: @goldshtn

Pingback: Intents, Contracts, and App Extensions: App-to-App Communication

Hi Israel, thanks for sharing this. It is hard to find good documentation about this for Xamarin. I am trying to replicate your code to associate images with a little app, but UTIs such as "public.image", "public.jpeg", "public.png" do not work! Any idea?

What are you trying to do, and which part of it doesn't work? By the way, my name is Sasha.

Thanks for this post, it is really clear and helpful. Just a small change: now in Xamarin you have to write:

public override IQLPreviewItem GetPreviewItem (QLPreviewController controller, nint index)
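As an aside for readers fighting UTI registration like the commenter above: the document-type section the post describes ends up in Info.plist as a CFBundleDocumentTypes array. The sketch below is an illustrative reconstruction based on Apple's documented keys, not a snippet from the original post, so verify it against the CFBundleDocumentTypes reference before relying on it.

<key>CFBundleDocumentTypes</key>
<array>
  <dict>
    <!-- Display name for the type, shown in the "Open in..." UI -->
    <key>CFBundleTypeName</key>
    <string>PDF Document</string>
    <!-- Ranking relative to other apps that handle this type: Owner, Default, Alternate, or None -->
    <key>LSHandlerRank</key>
    <string>Alternate</string>
    <!-- The UTI(s) this app accepts; com.adobe.pdf is the system UTI for PDF -->
    <key>LSItemContentTypes</key>
    <array>
      <string>com.adobe.pdf</string>
    </array>
  </dict>
</array>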
http://blogs.microsoft.co.il/sasha/2014/05/29/ios-file-association-preview-open-xamarin/
CC-MAIN-2017-22
refinedweb
1,213
50.06
I have just begun to learn SPOPS by our own lachoy. Last night (after RTFM'ing for a good two hours), i finally made some progress. The result is a script that reads MP3 files in a given directory and stores their ID3 tags into a database table. The table i used has an 'id' field which is an auto incremented primary key, and five other fields (title, artist, album, year, and genre) which can be any database type you want. Mine were just varchars that can be NULL, for simplicity's sake.

SPOPS stands for Simple Perl Object Persistence with Security. SPOPS is not one module, but instead a collection of almost FIFTY modules! There is a lot to learn up front, and this post will not go deep into SPOPS (it barely scratches the surface). This post only covers instantiating fairly simple objects that can communicate to a database, it doesn't cover persistence. A review and a tutorial for SPOPS are 'in the works'. Think of this as just a primer.

This code uses File::Find and MP3::Info to find the MP3 files and extract the tag info, respectively. With these two tools, it is easy to write DBI code to store the found tags into a database. SPOPS is used here to abstract the SQL completely away from the client. This is currently implemented in SPOPS by subclassing a SPOPS::DBI object:

package MySPOPS::DBI;

use strict;
use SPOPS::DBI;
@MySPOPS::DBI::ISA = qw(SPOPS::DBI);

use constant DBI_DSN  => 'DBI:vendor:database:host';
use constant DBI_USER => 'user';
use constant DBI_PASS => 'pass';

my ($DB);

sub global_datasource_handle {
    unless (ref $DB) {
        $DB = DBI->connect(
            DBI_DSN, DBI_USER, DBI_PASS,
            {RaiseError => 1}
        );
    }
    return $DB;
}

1;

This subclassed SPOPS::DBI class can be reused by different clients. The client that i wrote uses the SPOPS::Initialize object to create my MySPOPS::MP3 object.

use strict;
use MP3::Info;
use File::Find;
use SPOPS::Initialize;

$|++;
@ARGV = ('.') unless @ARGV;

# the configuration
SPOPS::Initialize->process({ config => {
    myobject => {
        class       => 'MySPOPS::MP3',
        isa         => [qw( MySPOPS::DBI )],
        field       => [qw(title artist album year genre)],
        id_field    => 'id',
        object_name => 'mp3',
        base_table  => 'songs',
    }
}});

find sub {
    return unless m/\.mp3$/;
    my $tag = get_mp3tag($_) or return;
    my $mp3 = MySPOPS::MP3->new({map {lc($_)=>$tag->{$_}} keys %$tag});
    print STDERR join(':',values %$mp3),"\n";
    $mp3->save();
}, @ARGV;

I use File::Find in a similar manner as the code from the Perl Cookbook, recipe number 9.7. If no argument is supplied, then the current directory is recursively scanned. MP3::Info is used to obtain the MP3 tag from the file. I have not bothered to validate in the interest of keeping the code simple. Adding validation should be trivial, see Identifying MP3 files with no MP3 tag for some tips on that. Another possibility is to use CDDB.

The trick is the instantiation of the MySPOPS::MP3 object. If you compare the Data::Dumper outputs of an MP3::Info with the Dumper output of what a MySPOPS::MP3 object _should_ look like, you will see that they are very similar:

$VAR1 = bless( {
    'ARTIST' => 'The Fixx',
    'GENRE' => 'Pop',
    'ALBUM' => 'Phantoms',
    'TITLE' => 'Woman On A Train',
    'YEAR' => '1984',
    'COMMENT' => 'underrated band',
}, 'MP3::Info' );

$VAR1 = bless( {
    'artist' => '',
    'genre' => '',
    'album' => '',
    'title' => '',
    'year' => '',
    'id' => undef,
}, 'MySPOPS::MP3' );

The MySPOPS::MP3 constructor accepts a hash reference as an argument and will use that hash reference to define its attributes. All that is needed is to lower case the keys of the MP3::Info object and the two will have virtually the same keys ('comment' will be ignored because it is not in the configuration and 'id' is not needed because it will be handled for you). So, the MySPOPS::MP3 object is instantiated with a transformed copy of the MP3::Info's internal attribute hash. I could have named my database table fields with all upper case letters and there would be no need for the transformation. Finally, a message is printed to standard error and the MySPOPS::MP3::save() method is called, which stores the object's attributes in the database.

The next version of SPOPS (0.56) will allow you to skip having to subclass a SPOPS::DBI object and simply pass the connection credentials along with your configuration:

myobject => {
    class       => 'MySPOPS::MP3',
    isa         => [qw( MySPOPS::DBI )],
    field       => [qw(title artist album year genre)],
    id_field    => 'id',
    object_name => 'mp3',
    base_table  => 'songs',
    dbi_config  => {
        dsn      => 'DBI:vendor:database:host',
        username => 'user',
        password => 'pass',
    },
}

But this should be used for 'one-offs' only. By subclassing SPOPS::DBI you allow other clients to share and have the database connection code abstracted away.

jeffa
L-LL-L--L-LL-L--L-LL-L--
-R--R-RR-R--R-RR-R--R-RR
F--F--F--F--F--F--F--F--
(the triplet paradiddle)

package Song;
use base qw(Class::DBI);
__PACKAGE__->set_db( 'Main', 'dbi:mysql', 'username', 'password' );
__PACKAGE__->table('Song');
__PACKAGE__->columns( All => qw( song_id title artist album year genre ) );
__PACKAGE__->columns( Primary => 'song_id' );

package main;
find sub {
    return unless m/\.mp3$/;
    my $tag = get_mp3tag($_) or return;
    Song->create($tag);
}, @ARGV;

package Song;
use base qw(Class::DBI::mysql);
__PACKAGE__->set_db( 'Main', 'dbi:mysql', 'username', 'password' );
__PACKAGE__->set_up_table('Song');

package main;
...

Tony

Hi Tony, I created a module to do this for SPOPS. I've tested this on MySQL, PostgreSQL and DB2/AS400 thru ODBC. Basically, all you need to do is issue a dummy query against a table so DBI can grab the metadata. Here's an example:

my $dbh = DBI->connect( ... ) || die "Error connecting: $DBI::errstr";
$dbh->{RaiseError} = 1;
my $sql = "SELECT * FROM $table where 1 = 0";
my ( $sth );
eval {
    $sth = $dbh->prepare( $sql );
    $sth->execute;
};
if ( $@ ) { die "Cannot fetch column info: $@" }
my @column_names = @{ $sth->{NAME} };
my @column_types = @{ $sth->{TYPE} };

I think most DBDs support this basic metadata. Whenever I've found one that didn't support both of these (like DBD::ASAny at one time), I've harangued the author and always gotten a quick response :-)

Chris
M-x auto-bs-mode

package DBH;

use strict;
use DBIx;

use constant DBI_DSN  => 'DBI:vendor:database:host';
use constant DBI_USER => 'user';
use constant DBI_PASS => 'pass';

my $DBH;

sub import {
    unless (ref $DBH) {
        $DBH = DBI->connect(
            DBI_DSN, DBI_USER, DBI_PASS,
            {RaiseError => 1}
        );
    }
    $DBIx::DBH = $DBH;
}

1;

package main;

use DBH; # creates database handle via import()

### initialize + config code un-necessary
### not with SQL::Catalog + DBIx
### and not, as mentioned in the conclusion
### with DBIx::Recordset
### and probably not Alzabo either

find sub {
    return unless m/\.mp3$/;
    my $tag = get_mp3tag($_) or return;
    my %tag = %$tag;
    # sql_do finds the $dbh for you...
    sql_do sql_lookup 'insert_mp3_tag_data', @tag{qw(title artist album year genre)};
}, @ARGV;

Also, consider that your entire example would be a DBIx::Recordset one-liner.

I think I will contact Dave Rolsky and coax him into posting an Alzabo version. Then every person who is actively working on database frameworks will have an example here.

use strict;
use MP3::Info;
use File::Find;
use Alzabo::Runtime;

$|++;
@ARGV = ('.') unless @ARGV;

my $schema = Alzabo::Runtime::Schema->load_from_file( name => 'mp3' );

find sub {
    return unless m/\.mp3$/;
    my $tag = get_mp3tag($_) or return;
    my %mp3 = map {lc($_)=>$tag->{$_}} keys %$tag;
    print STDERR join(':',values %mp3),"\n";
    $schema->table('Track')->insert( values => \%mp3 );
}, @ARGV;

Terrence, you neglected to mention that 'sql_lookup' bit in your example actually requires some setup in advance ... and then use any DBI shortcut interface to execute the SQL for the desired results. I was a little surprised it could be used in a one-off script like this, myself. (That said, it's a great simple example!)

SPOPS wasn't really designed for this sort of thing -- it was originally designed for server applications using lots of objects related to one another. That's why I assumed creating a subroutine to fetch the database handle wouldn't be a big deal. (As jeffa mentioned, I've since added a helper class to the SPOPS distribution so you can specify the DBI or LDAP connection info in the object configuration. Feedback is good!)

So if you could do this more tersely (even one-line) in other tools, great! However, there are many features that SPOPS has that make it more than just an object-relational (or DBI-wrapper) tool.

- It supports other datasources than DBI. You can have pretty much all the same functionality with objects in LDAP directories, including manipulating them across different directories. You can also create relationships between objects on different databases.

- You can add transparent behaviors to the persistence functionality. For instance, if we were going to create an application using jeffa's core idea, we might want to fill in holes in the ID info from the MP3 -- maybe the genre is left out or something else. It's very simple to create a 'rule' that gets triggered before the object is saved to check a CDDB source and find our missing information. Then in our application we never have to worry about having all the correct information -- the object will do it for us. (You can use these rules to do just about anything :-)

In any case, as always: use the tool most appropriate for the job. IMO, there's no competition here with any of the POOP stuff. I know I'm not bashful about swiping features from other modules :-) And what will be superfly is if the P5EE effort is able to create an API for object persistence. Then no matter what implementing module you used, your code would look the same.

Also, some of the statements about SPOPS and its means of abstracting SQL sound like they will not be making use of DBIx::AnyDBD, which on the dbi-dev and dbi-users list, Tim Bunce has stated will be rolled into DBI proper. Finally, read this recent post to dbi-users which explains that DBI itself will in the future have business-logic capabilities:

From: Tim Bunce <[email protected]>
Date: Wed Jan 02, 2002 08:45:59 PM US/Eastern
To: [email protected]
Subject: (Fwd) Important: Subclassing and Merging DBIx::AnyDBD into the DBI

FYI - This has just been sent to [email protected]. Probably best to discuss it there in whatever threads it spawns (but do post here if you've a driver-development related question or you're not subscribed to dbi-users)

Tim.

----- Forwarded message from Tim Bunce <[email protected]> -----

Delivered-To: [email protected]
Date: Thu, 3 Jan 2002 01:25:03 +0000
From: Tim Bunce <[email protected]>
To: [email protected]
Cc: Matt Sergeant <[email protected]>, Tim Bunce <[email protected]>
Subject: Important: Subclassing and Merging DBIx::AnyDBD into the DBI

Here's what I'm thinking, and developing, at the moment...

[Please read and think about it all before commenting]

[Also, I've CC'd Matt Sergeant <[email protected]>, please ensure that he's CC'd on any replies. Thanks.]

Firstly, subclassing... (we'll use MyDBI as the example subclass here)

The "mixed case with leading capital letter" method namespace will be reserved for use by subclassing of the DBI. The DBI will never have any methods of it's own in that namespace. (DBI method names are either all lower case or all upper case.)

The need to call MyDBI->init_rootclass will be removed. Simply calling $dbh = MyDBI->connect(...) will be interpreted as a request to have the $dbh blessed into the MyDBI::db class (and a $dbh->prepare will return $sth blessed into MyDBI::st). A warning will be generated if @MyDBI::db::ISA is empty.

Also, and this is where it gets interesting, calling:

DBI->connect(,,, { RootClass => 'MyDBI' })

will have the same effect as above, with the added feature that the DBI will try to automatically load MyDBI.pm for you. It'll ignore a failure to load due to the file not existing if the MyDBI class already exists.

This feature dramatically opens up the scope of DBI subclassing. The idea behind it is that the $dbh object is no longer 'just' encapsulating a simple database connection, it can now encapsulate a high-level information repository that can be 'queried' at a more abstract level. So instead of just calling low-level do/prepare/execute/fetch methods you can now call higher-level methods that relate to your own data and concepts. More below.

Typically a 'Sales Order Handling' database could now be given a SalesOrderDBI::db class containing high-level methods that deal directly with Sales Order Handling concepts and could do things like automatically trigger re-ordering when stocks get low.

Also consider, for example, that DBD::Proxy would be able to dynamically load the subclass on the proxy server instead of the proxy client. The subclass can perform multiple DBI method calls before returning a result to the client. For example: $ok=$dbh->Check_Available($a,$b) on the proxy client triggers a $dbh->Check_Available($a,$b) call on the proxy server and that method may perform many selects to gather the info before returning the boolean result to the client. Performing the selects on the proxy server is far far more efficient.

In terms of buzzwords, the dynamic loading of subclasses can translate into "Encapsulating Business Logic" and thowing in the proxy extends that to "3-Tier" :)

Also, the ability to embed attributes into the DSN may lead to some interesting possibilities...

DBI->connect("dbi:Oracle(PrintError=1,RootClass=OtherDBI):...", ...)

I'm not sure where that road leads but I suspect it'll be interesting, though I may disable it by default, or just provide a way to do so.

Next, merging in DBIx::AnyDBD functionality...

Rather than describe Matt Sergeant's excellent DBIx::AnyDBD module I'll just describe my plans. You can take a look at to see the obvious inspiration.

Calling $dbh = DBI->connect("dbi:Oracle:foo",,, { DbTypeSubclass => 1 }) will return a $dbh that's blessed into a class with a name that depends on the type of database you've connected to. In this case 'DBI::Oracle::db'. @DBI::Oracle::db::ISA=('DBI::db') is automatically setup for you, if it's empty, so the inheritance works normally.

For ODBC and ADO connections the underlying database type is determined and a class hierarchy setup for you. So an ODBC connection to an Informix database, for example, would be blessed into 'DBI::Informix::db' which would automatically be setup as a subclass of 'DBI::ODBC::db' which would be setup as a subclass of 'DBI::db'.

The DBI will try to automatically load these classes for you. It'll ignore a failure to load caused by the file not existing.

The idea behind this, if it's not dawned on you already, is to enable a simple way to provide alternate implementations of methods that require different SQL dialects for different database types. See below...

Finally, putting it all together...

These two mechanisms can be used together so

$dbh = MyDBI->connect("dbi:Oracle:foo",,, { DbTypeSubclass=>1 })

will return a $dbh blessed into 'MyDBI::Oracle::db'. In fact users of DbTypeSubclass are strongly encouraged to also subclass into a non-'DBI' root class. They are a natural fit together.

Imagine, for example, that you have a Sales Order Handling database and a SalesOrderDBI::db class containing high-level methods like automatically triggering re-ordering when stocks get low. Imagine you've implemented this, or just prototyped it, in Access and now want to port it to PostgreSQL...

A typical porting process might now be...

1/ Break up any large methods that include both high-level business logic and low-level database interactions. Put the low-level database interactions into new methods called from the (now smaller) original method.

2/ Add { DbTypeSubclass=>1 } to your connect() method call.

3/ Move the low-level database interaction methods from the SalesOrderDBI::db class into the SalesOrderDBI::Access::db class.

4/ Implement and test alternate versions using PostgreSQL in the SalesOrderDBI::Pg::db class.

Since PostgreSQL supports stored procedures you could move some of the business logic into stored procedures within the database. Thus your Access specific class may contains select statements but your PostgreSQL specific class may contain stored procedure calls.

Random extra thoughts...

AUTOLOAD (ala Autoloader, SelfLoader etc) could be put to good and interesting uses in either handling large libraries of queries (load in demand) or even automatically generating methods with logic based on the method name:

$dbh->Delete_from_table($table, $key)

Oh, scary.

The logic for mapping a connection into a hierarchy of classes will be extensible and overridable so that new or special cases can be handled, such as considering/including the database version.

Comments welcome. (But please trim replies, and be brief and to the point!)

Tim.

p.s. Don't forget to CC Matt Sergeant <[email protected]>

----- End forwarded message -----

Actually, some enterprising folks have already done this mapping of objects to multiple tables -- with inheritance -- using SPOPS. (Search the openinteract-dev mailing list for ESPOPS, or /msg me for a url.) I haven't been able to fold it into the main module yet but I hope to in the near future (~2 months).

Also note that SPOPS doesn't prevent you from doing this, it's just not built-in. There's nothing stopping you from doing:

My::News->global_datasource_handle->{AutoCommit} = 0;
eval {
    $news_article->save;
    $news_category->save;
    $news_author->save;
    foreach my $news_topic ( @topics ) {
        $news_topic->save;
    }
};
if ( $@ ) {
    My::News->global_datasource_handle->rollback;
}
else {
    My::News->global_datasource_handle->commit;
}
My::News->global_datasource_handle->{AutoCommit} = 1;

I also have a tickling idea for transactions across multiple datasources, but that is much further in the future. And SPOPS may take advantage of the DBI features you mentioned when they exist and are stable. The SQL abstraction stuff is well-factored so if we wanted to plop something else in there it wouldn't be too hard.
http://www.perlmonks.org/index.pl/jacques?node_id=136913
CC-MAIN-2017-17
refinedweb
3,002
52.09
Originally posted by Jona
I found it first!!!

Well you're not using it so Ha! P.S. It's on my server not yours.

That makes no difference! LOL. It wasn't on my server when I first found it, either! ...In fact, I found it a long time ago. But it looks like they've closed down now... That's where I got my little baby face and the freaky woman avatar. I'll put the freaky woman one back up, I really liked that one... Reminds me of my mom.

Oh boy. Something to tell your mother about.

Originally posted by Jona
...Okay, now I'm spooked!! If you wanna listen to something from the 70s, listen to Van Halen or Journey.

halen was more 80s. If you want 70s 80s and early 90s, ronnie james dio all the way.

~~== Best of the 80s ==~~
def leppard
van halen (NON VANHAGAR, it turned gay when he came around)
led zeppelin
ac/dc
the scorpions
men at work
black sabbath (ozzy and dio)
ronnie james dio
the cure (some of it)
yngwie malmsteen (possibly best guitarist ever)
steppenwolf

this list goes on and on... never mind, because I can just keep going. I should just list my favorite metal bands, I can think of them real fast, get about 10 out. Buntine and myself seem to be the two local metal fans... he likes death metal more, I like power metal more.

ps: my avatar still puts everyone else's to shame.

Last edited by PeOfEo; 05-25-2004 at 08:54 PM.

You know sometimes I just don't see how people like some music... But oh well, I'm not going to kill anyone for it...

Or you can be like me and have no clue what you are talking about.

Originally posted by Jona
I like D12 - My Band! ... I listen to 97.9 "the box" which plays rap only, 94.5 "the buzz" which plays alternative rock (Linkin Park, Trapt, Hoobastank, 3 Doors Down, Limp Bizkit, Evanescence, etc.) ... Evanescence and Creed (w00t! Go Creed!!) ... Beach Boys!?

I agree, I do not see how people can listen to some music: this rap, new age 'alternative' rock / raprock (my mother didn't love me enough I will sing about it, wahhh, kinda stuff), and punk. Grrr. I'll listen to the beach boys and classical any day. tchaikovsky woot!

Hey everyone, now we all know Peo's problem!!

Originally posted by Jona
Hey everyone, now we all know Peo's problem!!

people who listen to the same music as you

We need to destroy the bad music jona. It's time for you to admit that you have a problem, then burn your cds and delete your files.

Originally posted by PeOfEo
(my mother didn't love me enough I will sing about it, wahhh, kinda stuff)

That's only stuff from the 50s or if you listen to Country music...

Originally posted by Jona
That's only stuff from the 50s or if you listen to Country music...

or stained maybe? That is 'alternative' rock.

Originally posted by PeOfEo
people who listen to the same music as you

I agree - we should completely destroy every single classical song!

Originally posted by PeOfEo
or stained maybe? That is 'alternative' rock.

You mean Staind? They're cool, but I don't even know their lyrics, so I have no clue. lol. They aren't on the radio much.

Haven't we all figured this all out by now? All the really good music is from the 60's-70's. Two words everyone, classic rock. Although there is some good current rock, though I'm really fussy when it comes to modern rock. The only exception probably would be Linkin Park... But come on, THE BEATLES, The Rolling Stones, The Doors, Led Zeppelin, and PINK FLOYD!!! Who can touch these bands?
No one, plain and simple. You want some good modern rock, I still say Jet and Audioslave. And I will say this real shocker: Disco Sucks!

Originally posted by Jona
You mean Staind? They're cool, but I don't even know their lyrics, so I have no clue. lol. They aren't on the radio much.

Their lyrics are 'daddy and mommy didn't give me enough attention, I am going to cry about it and write lil songs'. There is little or no substance in modern alternative rock, it is weak and meaningless crap. Rap today is like the stereotype, sex sex sex. Watch the music videos much?
http://www.webdeveloper.com/forum/showthread.php?35251-Avatars&p=193992
CC-MAIN-2015-06
refinedweb
803
85.39
Pythonic Parsing Programs

Creed of Python Developers

Pythonistas are eager to extol the lovely virtues of our language. Most beginning Python programmers are invited to run import this from the interpreter right after the canonical hello world. One of the favorite quips from running that command is:

There should be one-- and preferably only one --obvious way to do it.

But the path to Python enlightenment is often covered in rocky terrain, or thorns hidden under leaves.

A Dilemma

On that note, I recently had to use some code that parsed a file. A problem arose when the API had been optimized around the assumption that what I wanted to parse would be found in the filesystem of a POSIX compliant system. The implementation was a staticmethod on a class that was called from_filepath. Well in 2013, we tend to ignore files and shove those lightweight chisels of the 70s behind in favor of shiny new super-powered jack-hammers called NoSQL!
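The article breaks off here, but the design problem it sets up is easy to illustrate. Below is a hypothetical sketch — the class and method names besides from_filepath are invented for illustration, not taken from the article — showing a parser whose only entry point is a filesystem path, and the usual remedy of parsing from any file-like object so that strings, sockets, or database blobs work too.

import io

class Record:
    """Stand-in result type for the parsed data."""
    def __init__(self, lines):
        self.lines = lines

class Parser:
    # The pattern the article complains about: tied to the filesystem.
    @staticmethod
    def from_filepath(path):
        with open(path) as handle:
            return Parser.from_file(handle)

    # The more flexible entry point: accept any file-like object.
    @staticmethod
    def from_file(handle):
        return Record([line.rstrip('\n') for line in handle])

    # Convenience wrapper for data that never touched a disk.
    @staticmethod
    def from_string(text):
        return Parser.from_file(io.StringIO(text))

record = Parser.from_string("spam\neggs")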
http://www.linuxjournal.com/node/1090745
CC-MAIN-2017-39
refinedweb
163
67.99
Contents - 1 Ada - 2 Aikido - 3 ALGOL 68 - 4 AutoHotkey - 5 AWK - 6 Batch File - 7 Bracmat - 8 C - 9 C++ - 10 C++ - 11 C# - 12 Clojure - 13 COBOL - 14 Coco - 15 CoffeeScript - 16 Common Lisp - 17 D - 18 Delphi - 19 DWScript - 20 E - 21 EchoLisp - 22 ECL - 23 Elena - 24 Elixir - 25 Erlang - 26 Euphoria - 27 F# - 28 Factor - 29 Falcon - 30 Fantom - 31 Fortran - 32 FreeBASIC - 33 Frink - 34 FunL - 35 Gambas - 36 Gastona - 37 Go - 38 Groovy - 39 Haskell - 40 HicEst - 41 Icon and Unicon - 42 J - 43 Java - 44 JavaScript - 45 jq - 46 Julia - 47 Kotlin - 48 Lasso - 49 LiveCode - 50 Lua - 51 M2000 Interpreter - 52 Mathematica - 53 Maxima - 54 Nemerle - 55 NetRexx - 56 Nim - 57 OCaml - 58 OOC - 59 Oz - 60 PARI/GP - 61 Perl - 62 Perl 6 - 63 Phix - 64 PHP - 65 PicoLisp - 66 PL/I - 67 PowerShell - 68 Prolog - 69 PureBasic - 70 Python - 71 Racket - 72 REBOL - 73 REXX - 74 Ring - 75 Ruby - 76 Run BASIC - 77 Rust - 78 Scala - 79 Sed - 80 Seed7 - 81 Sidef - 82 SNOBOL4 - 83 Stata - 84 Swift - 85 Tcl - 86 TUSCRIPT - 87 UNIX Shell - 88 Ursala - 89 VBA - 90 Verbexx - 91 Visual Basic .NET - 92 zkl

Ada[edit]

with Ada.Strings.Fixed, Ada.Text_IO;
use Ada.Strings, Ada.Text_IO;

procedure String_Replace is
   Original : constant String := "Mary had a @[email protected] lamb.";
   Tbr      : constant String := "@[email protected]";
   New_Str  : constant String := "little";
   Index    : Natural := Fixed.Index (Original, Tbr);
begin
   Put_Line (Fixed.Replace_Slice (Original, Index, Index + Tbr'Length - 1, New_Str));
end String_Replace;

Alternatively

Put_Line ("Mary had a " & New_Str & " lamb.");

Aikido[edit]

const little = "little"
printf ("Mary had a %s lamb\n", little)
// alternatively
println ("Mary had a " + little + " lamb")

ALGOL 68[edit]

AutoHotkey[edit]

; Using the = operator
LIT = little
string = Mary had a %LIT% lamb.
; Using the := operator
LIT := "little"
string := "Mary had a " LIT " lamb."
MsgBox %string%

Documentation: Variables (see Storing values in variables and Retrieving the contents of variables)

AWK[edit]

String interpolation is usually done with functions sub() and gsub(). gawk has also gensub().

#!/usr/bin/awk -f
BEGIN {
    str = "Mary had a # lamb."
    gsub(/#/, "little", str)
    print str
}

Batch File[edit]

Bracmat[edit]

Use pattern matching to find the part of the string up to and the part of the string following the magic X. Concatenate these parts with the string "little" in the middle.

@("Mary had a X lamb":?a X ?z) & str$(!a little !z)

C[edit]

Include the <stdio.h> header to use the functions of the printf family:

#include <stdio.h>

int main() {
    const char *extra = "little";
    printf("Mary had a %s lamb.\n", extra);
    return 0;
}

C++[edit]

#include <string>
#include <iostream>

int main( ) {
    std::string original( "Mary had a X lamb." ) , toBeReplaced( "X" ) , replacement ( "little" ) ;
    std::string newString = original.replace( original.find( "X" ) , toBeReplaced.length( ) , replacement ) ;
    std::cout << "String after replacement: " << newString << " \n" ;
    return 0 ;
}

C++[edit]

// Variable argument template
#include <string>
#include <vector>
using std::string;
using std::vector;

template<typename S, typename... Args>
string interpolate( const S& orig , const Args&... args)
{
    string out(orig);
    // populate vector from argument list
    auto va = {args...};
    vector<string> v{va};
    size_t i = 1;
    for( string s: v)
    {
        string is = std::to_string(i);
        string t = "{" + is + "}";   // "{1}", "{2}", ...
        try {
            auto pos = out.find(t);
            if ( pos != out.npos)    // found token
            {
                out.erase(pos, t.length());  // erase token
                out.insert( pos, s);         // insert arg
            }
            i++;  // next
        }
        catch( std::exception& e)
        {
            std::cerr << e.what() << std::endl;
        }
    }  // for
    return out;
}

C#[edit]

This is called "composite formatting" in MSDN.

class Program {
    static void Main() {
        string extra = "little";
        string formatted = $"Mary had a {extra} lamb.";
        System.Console.WriteLine(formatted);
    }
}

Clojure[edit]

(let [little "little"]
  (println (format "Mary had a %s lamb." little)))

COBOL[edit]

IDENTIFICATION DIVISION.
PROGRAM-ID. interpolation-included.

DATA DIVISION.
WORKING-STORAGE SECTION.
01  extra PIC X(6) VALUE "little".

PROCEDURE DIVISION.
    DISPLAY FUNCTION SUBSTITUTE("Mary had a X lamb.", "X", extra)
    GOBACK
    .

Coco[edit]

As CoffeeScript, but the braces are optional if the expression to be interpolated is just a variable:

size = 'little'
console.log "Mary had a #size lamb."

CoffeeScript[edit]

Common Lisp[edit]

(let ((extra "little"))
  (format t "Mary had a ~A lamb.~%" extra))

More documentation on the FORMAT function.

D[edit]

Delphi[edit]

program Project1;

uses
  System.SysUtils;

var
  Template : string;
  Marker : string;
  Description : string;
  Value : integer;
  Output : string;
begin
  // StringReplace can be used if you are definitely using strings
  Template := 'Mary had a X lamb.';
  Marker := 'X';
  Description := 'little';
  Output := StringReplace(Template, Marker, Description, [rfReplaceAll, rfIgnoreCase]);
  writeln(Output);

  // You could also use format to do the same thing.
  Template := 'Mary had a %s lamb.';
  Description := 'little';
  Output := format(Template,[Description]);
  writeln(Output);

  // Unlike StringReplace, format is not restricted to strings.
  Template := 'Mary had a %s lamb. It was worth $%d.';
  Description := 'little';
  Value := 20;
  Output := format(Template,[Description, Value]);
  writeln(Output);
end.

- Output:
Mary had a little lamb.
Mary had a little lamb.
Mary had a little lamb. It was worth $20.

DWScript[edit]

PrintLn(Format('Mary had a %s lamb.', ['little']))

- Output:
Mary had a little lamb.

E[edit]

EchoLisp[edit]

format and printf use replacement directives to perform interpolation. See format specification in EchoLisp documentation.

;; format uses %a or ~a as replacement directive
(format "Mary had a ~a lamb" "little") → "Mary had a little lamb"
(format "Mary had a %a lamb" "little") → "Mary had a little lamb"

ECL[edit]

IMPORT STD;
STD.Str.FindReplace('Mary had a X Lamb', 'X','little');

Elena[edit]

ELENA 3.4:

import extensions.

public program [
    var s := "little".
    console printLineFormatted("Mary had a {0} lamb.",s);
    readChar
]

Elixir[edit]

Elixir borrows Ruby's #{...} interpolation syntax.

x = "little"
IO.puts "Mary had a #{x} lamb"

Erlang[edit]

- Output:
7> S1 = "Mary had a ~s lamb".
8> S2 = lists:flatten( io_lib:format(S1, ["big"]) ).
9> S2.
"Mary had a big lamb"

Euphoria[edit]

constant lambType = "little"
sequence s
s = sprintf("Mary had a %s lamb.",{lambType})
puts(1,s)

F#[edit]

let lambType = "little"
printfn "Mary had a %s lamb." lambType

Factor[edit]

Falcon[edit]

'VBA/Python programmer's approach. I'm just a junior Falconeer but this code seems falconic

/* created by Aykayayciti Earl Lamont Montgomery
   April 9th, 2018 */

size = "little"
> @ "Mary had a $size lamb"
// line 1: use of the = operator
// line 2: use of the @ and $ operator

- Output:
Mary had a little lamb
[Finished in 0.2s]

Fantom[edit]

Interpolating a variable value into a string is done by using a $ prefix on the variable name within a string. For example:

fansh> x := "little"
little
fansh> echo ("Mary had a $x lamb")
Mary had a little lamb

Documentation at: Fantom website

Fortran[edit]

FreeBASIC[edit]

FreeBASIC has a complex Print statement which, amongst other things, enables variables to be embedded in the string to be printed. It is also possible to use C library functions such as printf or sprintf, which allow more conventional string interpolation, as easily as if they were part of FB itself:

' FB 1.05.0 Win64
#Include "crt/stdio.bi"  '' header needed for printf

Dim x As String = "big"
Print "Mary had a "; x; " lamb"  '' FB's native Print statement
x = "little"
printf("Mary also had a %s lamb", x)
Sleep

- Output:
Mary had a big lamb
Mary also had a little lamb

Frink[edit]

x = "little"
println["Mary had a $x lamb."]

FunL[edit]

X = 'little'
println( "Mary had a $X lamb." )

Gambas[edit]

Click this link to run this code

Public Sub Main()
  Print Subst("Mary had a &1 lamb", "little")
End

Output:
Mary had a little lamb

Gastona[edit]

This kind of string interpolation is indeed a strong feature in Gastona. We add one more indirection in the sample just to illustrate it.

#listix#
  <how>  //little
  <what> //has a @<how> lamb
  <main> //Mary @<what>

- Output:
Mary has a little lamb

Go[edit]

package main

import (
    "fmt"
)

func main() {
    str := "Mary had a %s lamb"
    txt := "little"
    out := fmt.Sprintf(str, txt)
    fmt.Println(out)
}

Groovy[edit]

def adj = 'little'
assert 'Mary had a little lamb.' == "Mary had a ${adj} lamb."

Haskell[edit]

No such facilities are defined in Haskell 98, but the base package distributed with GHC provides a printf function.

import Text.Printf

main = printf "Mary had a %s lamb\n" "little"

HicEst[edit]

Further documentation on HicEst string interpolation function EDIT()

CHARACTER original="Mary had a X lamb", little = "little", output_string*100

output_string = original
EDIT(Text=output_string, Right='X', RePLaceby=little)

Icon and Unicon[edit]

J[edit]

... That said, note that in recent versions of J, strings is no longer a separate script but part of the core library.

Java[edit]

JavaScript[edit]

var original = "Mary had a X lamb";
var little = "little";
var replaced = original.replace("X", little);  // does not change the original string

Or,

// ECMAScript 6
var X = "little";
var replaced = `Mary had a ${X} lamb`;

jq[edit]

Julia[edit]

X = "little"
"Mary had a $X lamb"

Kotlin[edit]

// version 1.0.6

fun main(args: Array<String>) {
    val s = "little"
    // String interpolation using a simple variable
    println("Mary had a $s lamb")
    // String interpolation using an expression (need to wrap it in braces)
    println("Mary had a ${s.toUpperCase()} lamb")
    // However if a simple variable is immediately followed by a letter, digit or underscore
    // it must be treated as an expression
    println("Mary had a ${s}r lamb")  // not $sr
}

- Output:
Mary had a little lamb
Mary had a LITTLE lamb
Mary had a littler lamb

Lasso[edit]

Lasso doesn't really have built-in string interpolation, but you can use the built-in email mail-merge capability:

- Output:
Mary had a little lamb

LiveCode[edit]

Livecode has a merge function for interpolation

local str="little"
put merge("Mary had a [[str]] lamb.")
-- Mary had a little lamb.

Lua[edit]

Variable names[edit]

There is no default support for automatic interpolation of variable names being used as placeholders within a string. However, interpolation is easily emulated by using the [string.gsub] function:

str = string.gsub( "Mary had a X lamb.", "X", "little" )
print( str )

Literal characters[edit]

Interpolation of literal character escape sequences does occur within a string:

print "Mary had a \n lamb"  -- The \n is interpreted as an escape sequence for a newline

M2000 Interpreter[edit]

module checkit {
      size$="little"
      m$=format$("Mary had a {0} lamb.", size$)
      Print m$
      Const RightJustify=1
      \\ format$(string_expression) process escape codes
      Report RightJustify, format$(format$("Mary had a {0} {1} lamb.\r\n We use {0} for size, and {1} for color\r\n", size$, "wh"+"ite"))
      \\ we can use { } for multi line string
      Report RightJustify, format$({Mary had a {0} {1} lamb.
      We use {0} for size, and {1} for color
      }, size$, "wh"+"ite")
}
checkit

Mathematica[edit]

Extra = "little";
StringReplace["Mary had a X lamb.", {"X" -> Extra}]
->"Mary had a little lamb."

Maxima[edit]

printf(true, "Mary had a ~a lamb", "little");

Nemerle[edit]

NetRexx[edit]

Nim[edit]

import strutils

var str = "little"
echo "Mary had a $# lamb" % [str]  # doesn't need an array for one substitution, but use an array for multiple substitutions

import strformat

var str: string = "little"
echo fmt"Mary had a {str} lamb"
echo &"Mary had a {str} lamb"

OCaml[edit]

The OCaml standard library provides the module Printf:

let extra = "little" in
Printf.printf "Mary had a %s lamb." extra

OOC[edit]

In a String all expressions between #{...} will be evaluated.

main: func {
    X := "little"
    "Mary had a #{X} lamb" println()
}

Oz[edit]

String interpolation is unidiomatic in Oz. Instead, "virtual strings" are used. Virtual strings are tuples of printable values and are supported by many library functions.

declare
X = "little"
in
{System.showInfo "Mary had a "#X#" lamb"}

PARI/GP[edit]

Perl[edit]

$extra = "little";
print "Mary had a $extra lamb.\n";
printf "Mary had a %s lamb.\n", $extra;

Perl 6[edit]

my $extra = "little";
say "Mary had a $extra lamb";            # variable interpolation
say "Mary had a { $extra } lamb";        # expression interpolation
printf "Mary had a %s lamb.\n", $extra;  # standard printf
say $extra.fmt("Mary had a %s lamb");    # inside-out printf
my @lambs = <Jimmy Bobby Tommy>;
say Q :array { $$$ The lambs are called @lambs[]\\\.}  # only @-sigiled containers are interpolated

Phix[edit]

string size = "little"
string s = sprintf("Mary had a %s lamb.",{size})
?s

- Output:
"Mary had a little lamb."

PHP[edit]

<?php
$extra = 'little';
echo "Mary had a $extra lamb.\n";
printf("Mary had a %s lamb.\n", $extra);
?>

PicoLisp[edit]

(let Extra "little"
   (prinl (text "Mary had a @1 lamb." Extra)) )

PL/I[edit]

PowerShell[edit]

Prolog[edit]

PureBasic[edit]

Python[edit]

Using the new f-strings string literal available from Python 3.6:

>>> extra = 'little'
>>> f'Mary had a {extra} lamb.'
'Mary had a little lamb.'
>>>

Racket[edit]

See the documentation on fprintf for more information on string interpolation in Racket.

#lang racket
(format "Mary had a ~a lamb" "little")

REBOL[edit]

str: "Mary had a <%size%> lamb"
size: "little"
build-markup str

;REBOL3 also has the REWORD function
str: "Mary had a $size lamb"
reword str [size "little"]

REXX[edit]

Ring[edit]

aString = substr("Mary had a X lamb.", "X", "little")
see aString + nl

Ruby[edit]

Run BASIC[edit]

a$ = "Mary had a X lamb."
a$ = word$(a$,1,"X")+"little"+word$(a$,2,"X")

Rust[edit]

Rust has very powerful string interpolation. Documentation here.

fn main() {
    println!("Mary had a {} lamb", "little");
    // You can specify order
    println!("{1} had a {0} lamb", "little", "Mary");
    // Or named arguments if you prefer
    println!("{name} had a {adj} lamb", adj="little", name="Mary");
}

Scala[edit]

Sed[edit]

#!/bin/bash
# Usage example: . interpolate "Mary has a X lamb" "quite annoying"
echo "$1" | sed "s/ X / $2 /g"

Seed7[edit]

Sidef[edit]

var extra = 'little';
say "Mary had a #{extra} lamb";

or:

say ("Mary had a %s lamb" % 'little');

See: documentation

SNOBOL4[edit]

Stata[edit]

See printf in Stata help.

: printf("Mary had a %s lamb.\n", "little")
Mary had a little lamb.

Swift[edit]

let extra = "little"
println("Mary had a \(extra) lamb.")

Tcl[edit]

set extra "little"
puts [string map [list @[email protected] $extra] "Mary had a @[email protected] lamb."]

TUSCRIPT[edit]

$$ MODE TUSCRIPT
sentence_old="Mary had a X lamb."
values=*
DATA little
DATA big
sentence_new=SUBSTITUTE (sentence_old,":X:",0,0,values)
PRINT sentence_old
PRINT sentence_new

- Output:
Mary had a X lamb.
Mary had a little lamb.

UNIX Shell[edit]

C Shell[edit]

set extra='little'
echo Mary had a $extra lamb.
echo "Mary had a $extra lamb."
printf "Mary had a %s lamb.\n" $extra

C Shell has $extra and ${extra}. There are also modifiers, like $file:t; csh(1) manual explains those.

Ursala[edit]

- Output:
Mary had a little lamb.

VBA[edit]

Here are 2 examples:

With Replace

a="little"
debug.print replace("Mary had a X lamb","X",a)  'prints Mary had a little lamb

With Interpolation function

Sub Main()
    a="little"
    debug.print Format("Mary had a {0} lamb",a)
End Sub

Public Function Format(ParamArray arr() As Variant) As String
    Dim i As Long, temp As String
    temp = CStr(arr(0))
    For i = 1 To UBound(arr)
        temp = Replace(temp, "{" & i - 1 & "}", CStr(arr(i)))
    Next
    Format = temp
End Function

Verbexx[edit]

////////////////////////////////////////////////////////////////////////////////////////
//
//  The @INTERPOLATE verb processes a string with imbedded blocks of code. The code
//  blocks are parsed and evaluated. Any results are converted to a string, which
//  is then inserted into the output string, replacing the code and braces.
//
//  example:  @INTERPOLATE "text{ @IF (x > y) then:{x} else:{y} }more text "
//
////////////////////////////////////////////////////////////////////////////////////////

@VAR v = "little";
@SAY (@INTERPOLATE "Mary had a { v } lamb");   // output: Mary had a little lamb

Visual Basic .NET[edit]

Dim name as String = "J. Doe"
Dim balance as Double = 123.45
Dim prompt as String = String.Format("Hello {0}, your balance is {1}.", name, balance)
Console.WriteLine(prompt)

zkl[edit]

"Mary had a X lamb.".replace("X","big")

Generates a new string. For more info, refer to manual in the downloads section of zenkinetic.com zkl page
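Several sections above (Factor, jq, PL/I, Prolog, PureBasic, REXX, Ruby, Scala, SNOBOL4, and a few others) lost their code in this capture. As one reconstructed reference — written from general knowledge of the language rather than recovered from the page — the empty Scala section would be solved like this:

object StringInterpolation extends App {
  val extra = "little"
  // The s-interpolator substitutes in-scope values marked with $ (or ${expr}).
  println(s"Mary had a $extra lamb.")
  // printf-style formatting is also available via the f-interpolator.
  println(f"Mary had a $extra%s lamb.")
}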
http://rosettacode.org/wiki/String_interpolation_(included)
CC-MAIN-2018-43
refinedweb
2,780
56.25
Which is better for creating a settings file for Python programs: the built-in module (ConfigParser), the independent project (ConfigObj), or using the YAML data serialization format?

I have heard that ConfigObj is easier to use than ConfigParser, even though it is not a built-in library. I have also read that PyYAML is easy to use, though YAML takes a bit of time to get used to. Ease of implementation aside, which is the best option for creating a settings/configuration file?

Using ConfigObj is at least very straightforward, and ini files in general are much simpler (and more widely used) than YAML. For more complex cases, including validation, default values and types, ConfigObj provides a way to do this through configspec validation.

Simple code to read an ini file with ConfigObj:

from configobj import ConfigObj
conf = ConfigObj('filename.ini')
section = conf['section']
value = section['value']

It automatically handles list values for you and allows multiline values. If you modify the config file, you can write it back out whilst preserving order and comments. See the ConfigObj articles for more info, including examples of the type validation.
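A short sketch of the write-back and configspec validation mentioned above (the file names here are hypothetical, and this assumes the separate validate module that ConfigObj pairs with):

from configobj import ConfigObj
from validate import Validator

# 'settings.ini' and 'settings.spec' are made-up names for this example.
conf = ConfigObj('settings.ini', configspec='settings.spec')

# validate() checks types and fills in defaults declared in the spec.
ok = conf.validate(Validator())
print(ok)  # True when every entry passed

# Modify a value and write the file back; order and comments are preserved.
conf['section']['value'] = 42
conf.write()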
https://codedump.io/share/pfmUz2LY3q67/1/configobjconfigparser-vs-using-yaml-for-python-settings-file
CC-MAIN-2017-04
refinedweb
186
53.31
/* version.h */

#ifndef CSOUND_VERSION_H
#define CSOUND_VERSION_H

/* Define to the full name of this package. */
#define CS_PACKAGE_NAME "Csound"

/* Define to the full name and version of this package. */
#define CS_PACKAGE_STRING "Csound 5.13"

/* Define to the one symbol short name of this package. */
#define CS_PACKAGE_TARNAME "csound"

/* Define to the version of this package. */
#define CS_PACKAGE_VERSION "5.13"

#define CS_VERSION (5)
#define CS_SUBVER (13)
#define CS_PATCHLEVEL (0)

#define CS_APIVERSION 2  /* should be increased anytime a new version contains
                            changes that an older host will not be able to
                            handle -- most likely this will be a change to an
                            API function or the CSOUND struct */

#define CS_APISUBVER 5   /* for minor changes that will still allow
                            compatiblity with older hosts */

#endif  /* CSOUND_VERSION_H */
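A small sketch, not part of the original header, of how a host program might consume these macros, for example to refuse to load against an incompatible API version (EXPECTED_APIVERSION is a hypothetical guard):

#include <stdio.h>
#include "version.h"   /* the Csound header shown above */

#define EXPECTED_APIVERSION 2

int main(void)
{
    printf("%s (API %d.%d)\n", CS_PACKAGE_STRING, CS_APIVERSION, CS_APISUBVER);
    if (CS_APIVERSION != EXPECTED_APIVERSION) {
        fprintf(stderr, "incompatible Csound API version\n");
        return 1;
    }
    return 0;
}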
http://csound.sourcearchive.com/documentation/1:5.13.0~dfsg-3build1/version_8h_source.html
CC-MAIN-2018-09
refinedweb
115
63.59
Error [ERR_REQUIRE_ESM]: Must use import to load ES Module:

Hello, I am new here, and also in development. I just made my first website using the MERN stack, and now I want to deploy it on DigitalOcean. The frontend part is working OK and my Nginx configuration is OK, but I have a problem with the backend: my server.js file is not working, it is causing this error: 'Error [ERR_REQUIRE_ESM]: Must use import to load ES Module:'

I am using the import and export ES module syntax, and also have "type": "module" in my package.json. The app is working OK on a local server. Can anyone help? :))

Here is my server.js file:

========================

import path from 'path'
import express from 'express'
import dotenv from 'dotenv'
import colors from 'colors'
import { notFound, errorHandler } from './middleware/errorMiddlleware.js'
import connectDB from './config/db.js'
import eventRouter from './routes/eventsRoute.js'
import blogRouter from './routes/blogRoute.js'
import userRouter from './routes/userRoutes.js'
import uploadRouter from './routes/uploadRoutes.js'

dotenv.config()
connectDB()

const app = express()
app.use(express.json())

app.use('/api/events', eventRouter)
app.use('/api/blogs', blogRouter)
app.use('/api/users', userRouter)
app.use('/api/uploads', uploadRouter)

const dirname = path.resolve()
app.use('/uploads', express.static(path.join(dirname, '/uploads')))

if (process.env.NODE_ENV === 'production') {
  app.use(express.static(path.join(__dirname, '/frontend/build')))
  app.get('*', (req, res) => {
    res.sendFile(path.resolve(__dirname, 'frontend', 'build', 'index.html'))
  })
} else {
  app.get('/', (req, res) => {
    res.send('App Is Running...')
  })
}

app.use(notFound)
app.use(errorHandler)

const PORT = process.env.PORT || 5000

app.use((req, res, next) => {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
});

app.listen(5000, () => {
  console.log(`Server Is Running In ${process.env.NODE_ENV} Mode On Port ${PORT}`.yellow.bold)
})

==========================

Discussion (1)

I'm gonna give some advice: dev.to is mainly for sharing ideas and posts, not for solving your problems. So for fast and efficient help, I suggest going on Stack Overflow to see if someone has had the same problem as you. If not, be the first to ask for help there and I guarantee you'll get an answer in 30 minutes. 😎 Hope you get a solution.
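A hedged note, not from the original thread: ERR_REQUIRE_ESM is typically raised when something still loads the app (or one of its dependencies) through a CommonJS require, often an outdated Node version or process manager on the server. Separately, the code above defines dirname but then references __dirname, which does not exist inside an ES module. A common ESM-safe replacement looks like this:

// ESM-safe equivalents of __filename/__dirname (Node 12+).
import path from 'path'
import { fileURLToPath } from 'url'

const __filename = fileURLToPath(import.meta.url)
const __dirname = path.dirname(__filename)

// __dirname can then be used as in the production branch above, e.g.:
// app.use(express.static(path.join(__dirname, '/frontend/build')))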
https://practicaldev-herokuapp-com.global.ssl.fastly.net/shmagiavro/error-errrequireesm-must-use-import-to-load-es-module-4kmm
CC-MAIN-2021-21
refinedweb
371
54.59
Extending Visual Studio 2005

The Visual Studio 2005 Integrated Development Environment (IDE) has an abundance of features. These include support for all of your favorite .NET languages, out-of-the-box solutions (i.e., web sites, Windows Forms, mobile applications, etc.), built-in refactoring and code snippet support, IntelliSense, and a sophisticated compiler, among other elements. What really sets Visual Studio 2005 apart from its predecessors (and its competitors) is its advanced extensibility options. As a developer, you can build macros to bind to keystroke combinations or toolbar buttons, develop add-ins to enable the addition of new features, or build VS Packages through the Visual Studio Industry Partner Program to enable the integration of entire products or advanced customizations into the IDE. Although the primary focus of this article will be the fundamentals of the VS Package, the other extensibility levels will also be discussed. Additionally, I will walk you through the development of a managed Visual Studio 2005 Package, explore some of the Managed Package Framework (MPF), and take a look at the Visual Studio SDK interop assemblies.

Levels of Extensibility

Visual Studio 2005 (VS 2005) provides you with three levels of extensibility, with each level increasing in power and functionality. First, the easiest way to extend Visual Studio is to record a macro. Recording a macro is extremely simple and provides you with the ability to record almost all commands and keystrokes. Additionally, macros give you access to nearly all of the Visual Studio automation object model, which contains more than 140 objects and provides you with full access to the .NET Framework.

Macros

Macros are not new. Office developers have been using them for thousands of years, and they have been an integral aspect of most Microsoft automation models. Unfortunately, for all of their functionality, macros do have some legitimate drawbacks. To start with, they can only be written in the Visual Basic language, limiting their usability to a somewhat narrow audience. Next, macros cannot be used to implement new tool windows, commands, or Tools Options pages. Finally, macros are not compiled, which means you have less control when trying to protect your intellectual property.

Add-ins

The next level in the hierarchy is a VS 2005 add-in. Add-ins provide you with full control over the Visual Studio automation object model. This ability facilitates interaction with most of the tools and functionality in the VS 2005 IDE, including but not limited to the Text Editor, Output window, Task List, and Code Model. Additionally, there are very few barriers to developing an add-in because there is an out-of-the-box add-in solution and wizard in VS 2005. Unlike macros, add-ins can be written in any language that supports COM, including Visual C#, VB.NET, and Visual C++. Add-ins are compiled as DLLs, which also means you have greater flexibility in protecting your intellectual property. Furthermore, unlike macros, add-ins can implement new commands, Tools Options pages, and tool windows. However, add-ins cannot implement new document types, new project types, new debug engines, etc., which all require the next level of extensibility.

Packages

Packages are the third and final level of extensibility in the Visual Studio 2005 hierarchy.
Packages in previous versions of Visual Studio .NET would have required some investment from the developer, but with the new VSIP program for VS 2005, Microsoft has added a free participation level. Packages are at the top of the pyramid; they will provide you with more flexibility than is available through an add-in or a macro. When developing a package, you can use C# or C++ to access the Visual Studio SDK. At this point, it's worth stating that developing a VS Package is not for the faint of heart. Although you can get by developing a package without any knowledge of C++, you are going to have a lot less gray hair if you have a solid understanding of the fundamentals of C++, such as header files and interfaces. After you register with the VSIP program and download the 100+ MB SDK, you can start building applications that truly extend the IDE. Some examples of tasks that can be performed in a package include customizing editors, creating new project types, integrating debug tools, and integrating new languages. One caveat worth mentioning is that if you plan on commercially marketing any package, you will need to pay into the VSIP partner program. Plus, as most would assume, the developer's license key distributed with the SDK cannot be used for distribution. To distribute a VS Package, one needs to apply for a product license key (PLK) from Microsoft.

Managed VS Packages

While the functionality found in add-ins and macros is extremely useful, there is already a lot of documentation out there on these subjects. Not to say that the help files and the samples in the VS SDK are bad, because they're not, but many of them are complicated for non-C++ developers because they refer to C++ interfaces, use Hungarian notation, and use COM terms like IUnknown, co-creatable, etc. that many new developers will not be familiar with. Additionally, there are many significant changes to the VS SDK that need to be examined. The VS 2005 SDK supports several new and enhanced features to make it easier for you to integrate your programs into VS 2005. The first new feature is the MPF. The MPF enables you to develop packages in managed code using C# or VB.NET. The MPF is a significant upgrade from the VSIP 2003 program because it provides default implementations for many of the more complicated COM interfaces. You may be wondering why I keep mentioning COM. Well, if you haven't figured it out yet, Visual Studio is developed using COM objects. What this means is that you should be prepared for GUIDs and cryptic interface names. Fortunately, when using managed code you have two layers of abstraction from the underlying COM interfaces, as illustrated in Figure 1.

Figure 1: [figure not preserved in this copy]

As Figure 1 shows, when developing a managed package there is a hierarchy of assemblies. Although I wish I could say that you will only be interacting with the MPF, the fact is you will probably need to access the Visual Studio interop assemblies directly to retrieve handles to various interfaces. Both scenarios will be explored further in the example later in this article. So, you are probably wondering what you can actually do with the MPF? Let's look at some specifics.
The MPF can help you with all of the following points of extensibility:

- Tool windows / document windows: creation and manipulation
- Commands: adding commands, command routing
- Tools Options, automation model additions, and VS Settings participation: made much easier, and the same underlying code can handle all these areas
- Package registration: accomplished through declarative code attributes; extensible by third parties
- Task List / Error List integration
- Properties window integration

For the most part, you can accomplish the majority of tasks through the MPF. Some examples of places where you will have to dive into the interop assemblies include language services, editors, and debuggers. Additionally, there are cases where you will have to use a combination of the MPF and the interop assemblies to accomplish your goals. (A minimal package skeleton appears at the end of this article.)

The final point worth mentioning before we move into an example is the fact that the MPF and the Visual Studio SDK assemblies are completely separate from the automation model (different namespaces, assemblies, etc.). Although there is overlap in functionality, the VS SDK hooks directly into the core of Visual Studio, while the automation model, which is stored in the EnvDTE and EnvDTE80 assemblies, works through a higher-level layer of abstraction. With that said, there is no reason why you can't use the automation model in your VS Package; in fact, it is probably recommended in some cases. I just want to make sure that we are clear, and that nobody misunderstands and thinks that they have worked with the SDK by using EnvDTE for an add-in.
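A minimal sketch of the declarative package registration just described, written in C# against the MPF. This is illustrative only and not code from the article; the GUID is a placeholder:

using System.Runtime.InteropServices;
using Microsoft.VisualStudio.Shell;

// Declarative registration: attributes replace hand-written registry code.
[PackageRegistration(UseManagedResourcesOnly = true)]
[Guid("00000000-1111-2222-3333-444444444444")]   // placeholder GUID
public sealed class MyFirstPackage : Package
{
    protected override void Initialize()
    {
        base.Initialize();
        // Add commands, tool windows, Tools Options pages, etc. here.
    }
}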
http://www.codeguru.com/csharp/.net/net_general/visualstudionetadd-ins/article.php/c11835/Extending-Visual-Studio-2005.htm
crawl-003
refinedweb
1,353
51.58
jakzaprogramowac.pl: All questions (Date added / Question)

2017-09-20 20:09 Spark Streaming Guarantee Specific Start Window Time » I'm using Spark Streaming to read data from Kinesis using the Structured Streaming framework, my connection is as follows val kinesis = spark .rea... (0 answers)
2017-09-20 14:09 How to make compiler check that 2 method arguments have the same type? » Probably I am missing something fundamental here, but I was refactoring certain things in my code and on the halfway I noticed that my code compiles, ... (2 answers)
2017-09-20 13:09 groupByKey vs. aggregateByKey - where exactly does the difference come from? » There is some scary language in the docs of groupByKey, warning that it can be "very expensive", and suggesting to use aggregateByKey instead whenever... (2 answers)
2017-09-19 22:09 Prevent 0-ary functions from being called implicitly in Scala » I got nipped by a production bug where I passed an impure 0-ary function to a class that mistakenly expected a bare result type. def impureFunc(): ... (1 answer)
2017-09-19 19:09 Does Scala tuple type uses all of its elements to compute its hash code? » Or does Scala compute tuple hashcode using something independent of its elements, like memory address? In other words, given two Tuple2s (a, b) and (... (2 answers)
2017-09-19 10:09 Chisel3: How to get verilog, cpp and vcd files simultaneously » I am a novice with chisel. I would be using it in my project in coming days and I am trying to get familiar with the library. After working with Chi...
2017-09-18 23:09 Shapeless: Iterate over the types in a Coproduct » I want to do something really simple, but I'm struggling to craft the correct search or just understanding some of the solutions I've seen. Given a m... (1 answer)
2017-09-18 21:09 scala weird symbol "_@" meaning » I am wondering what this scala symbol is: _@. (Search engines have trouble with weird characters so it's hard to find anything on google...) Here is... (1 answer)
2017-09-18 06:09 How do I change a project's id in SBT 1.0? » I have a bunch of SBT 0.13 project definitions that look like this: lazy val coreBase = crossProject.crossType(CrossType.Pure).in(file("core")) .se... (1 answer)
2017-09-17 16:09 Scala - function which takes a type composition of 2 traits » I want my function to receive a type which implements 2 traits. Is it possible to create an "adhoc" trait type like that? For instance: trait t1 { ... (1 answer)
2017-09-17 14:09 Higher-order unification for type constructor inference » def meh[M[_], A](x: M[A]): M[A] = x meh((x: Int) => "") After SI-2712 () fix type A is inferred to th... (1 answer)
2017-09-17 14:09 Scala's TreeSet `min` operation performance » When exploring Scala's standard library, I found out, that TreeSet doesn't override min operation at all, so this operation is derived from Traversabl...
2017-09-16 23:09 Scala objects and thread safety » I am new to Scala. I am trying to figure out how to ensure thread safety with functions in a Scala object (aka singleton). From what I have read so ... (3 answers)
2017-09-16 15:09 How can I obtain the DAG of an Apache Spark job without running it? » I have some Scala code that I can run with Spark using spark-submit. From what I understood, Spark creates a DAG in order to schedule the operation. ... (1 answer)
2017-09-16 12:09 How to Inject() into a class with constructor parameters? » I have a WebSocket controller which creates per connection actor handler: class WebSocketController @Inject()(cc: ControllerComponents)(implicit exc:... (2 answers)
2017-09-15 11:09 Scala: implicitly converted type as implicit param of a method » I have a method that expects some implicit param of type String. I have an object of class MyClass which I want to pass to that method implicitly. Fi... (1 answer)
2017-09-15 08:09 Find wrapped array entry that is not empty » I´m new with spark. My problem is that I have to find in a list, these which are not empty. When I use the filter function is not null, than I get a... (2 answers)
2017-09-15 03:09 sbt corrupts terminal display » This is a head scratcher for me; I'm on Mac OS 10.11.6 and I recently started using sbt for scala. I'm running into the situation that every time I r... (1 answer)
2017-09-14 13:09 Play Framework Stop responding after 2-3 days » I am using play framework and it stops responding after 2-3 days and when I restart server then everything works fine. Please let me know what I am... (1 answer)
2017-09-14 13:09 Prepend to vector scala » Newbie to scala, im finding the symbolic notation for dealing with collections confusing. To append an item to a list or vector i can use: List(1, 2,... (1 answer)
2017-09-14 09:09 Scalatest building and running in Intellij, but not building the project when run on the command line with maven » I'm able to run my tests in a spec file individually when using intellij, however when I try running with Maven on the command line the project will n... (1 answer)
2017-09-12 23:09 Scala: Function0 vs by-name parameters » Can anyone give a definitive answer on how by-name parameters => T and Function0 parameters () => T are transformed into one another by the Scal... (1 answer)
2017-09-12 18:09 Get type of a "singleton type" » We can create a literal types via shapeless: import shapeless.syntax.singleton._ var x = 42.narrow // x: Int(42) = 42 But how can I operate with In... (2 answers)
2017-09-12 16:09 Spark Dataframe Group by having New Indicator Column » I need to group by "KEY" Column and need to check whether "TYPE_CODE" column has both "PL" and "JL" values , if so then i need to add a Indicator Colu... (2 answers)
2017-09-11 20:09 Unwrap/strip Shapeless Coproduct objects » I have a domain that looks like this: package object tryme { type ALL = AlarmMessage :+: PassMessage :+: HeyMessage :+: CNil } import com.tryme._ ... (2 answers)
2017-09-11 18:09 is it possible to implement flip as a Scala function (and not a method) » As a part of learning Scala I try to implement Haskell's flip function (a function with signature (A => B => C) => (B => A => C)) in Scala - and imple... (1 answer)
2017-09-11 11:09 What is Applicative Builder » I am scala and functional newbie and trying to learn applicatives. val o1 = Some(1) val o2 = Some(2) val o3 = Some(3) val result = (o1 |@| o2 |@| o3)... (1 answer)
2017-09-10 22:09 With a Scala Either, how do you stop at the first error, but get the already computed values » For example, let say I have a function def foo(): Either[String, Int] = ??? I want to call this function 3 times. If all the values are Right, I wa... (3 answers)
2017-09-10 21:09 Decoding structured JSON arrays with circe in Scala » Suppose I need to decode JSON arrays that look like the following, where there are a couple of fields at the beginning, some arbitrary number of homog... (1 answer)
2017-09-10 16:09 Scala can't infer type arguments of Java method » I have the following sophisticated type hierarchy in Java: // the first type interface Element<Type extends Element<Type>> { Type foo... (1 answer)
2017-09-09 21:09 How to properly pass many dependencies (external APIs) to a class in Scala? » How to properly pass many dependencies (external APIs) to a class in Scala? I'm working on the application that uses many APIs to collect data. For A... (1 answer)
2017-09-09 12:09 How to create a custom Seq with bounded type parameter in Scala? » Consider the following working custom Seq: class MySeq[B](val s: Seq[B]) extends Seq[B] with GenericTraversableTemplate[B, MySeq] with SeqLike[B, My... (2 answers)
2017-09-09 03:09 Scala way to remove duplicate in an Array » I want to learn how to write for loop functions without declaring "var" at the front of the code. For example, I want to remove duplicates in an inter...
http://jakzaprogramowac.pl/lista-pytan-jakzaprogramowac-wg-tagow/286/strona/3
CC-MAIN-2017-43
refinedweb
1,430
68.91
How to convert a List to an Array in Java

In our previous tutorial, we discussed various ways of converting an Array to a List in Java 7 as well as in Java 8, but there are scenarios when we have to change a List to an Array. In this tutorial, we will discuss various possible ways of converting a List to an Array. The approaches that we will discuss today are:

- List.toArray()
- Using the Java 8 Streams API

Using List.toArray()

This method has two overloaded forms:

- <T> T[] toArray(T[] a)
- Object[] toArray()

If we know what the input type is and what the output type should be, then we use the first declaration and pass an array of the same type; otherwise, we call the second method, which returns an array of Object.

As the official documentation of this method suggests, the input array is mainly used to determine the type of the returned array; if the array is smaller or larger than needed, the method takes care of that itself.

Let's understand it via an example:

import java.util.List;

public class ListToArrayExample {
    public static void main(String[] args) {
        // List.of() was introduced in Java 9
        List<String> list = List.of("Harry Potter", "Lord Voldemort", "Hedwig");
        System.out.println("Contents of list ::" + list);

        String[] convertedArray = new String[list.size()];
        list.toArray(convertedArray);
        for (int i = 0; i < convertedArray.length; i++) {
            System.out.println("Element at the index " + i + " is ::" + convertedArray[i]);
        }
    }
}

Output:
Contents of list ::[Harry Potter, Lord Voldemort, Hedwig]
Element at the index 0 is ::Harry Potter
Element at the index 1 is ::Lord Voldemort
Element at the index 2 is ::Hedwig

Using Java Streams

Java 8 streams also provide a simple way to transform a list into an array. In this approach, we create a stream from the list, perform operations like filtering, and get the array using the terminal method toArray(). Have a look at this example of using Java 8 streams to convert a list to an array:

import java.util.List;

public class ListToArrayUsingStreams {
    public static void main(String[] args) {
        List<String> list = List.of("Harry Potter", "Harry Potter 2", "Lord Voldemort", "Hedwig");
        System.out.println("Contents of list ::" + list);

        // filters names which start with "Harry" only
        String[] convertedArray = list.stream().filter(name -> name.startsWith("Harry")).toArray(String[]::new);
        for (int i = 0; i < convertedArray.length; i++) {
            System.out.println("Element at the index " + i + " is ::" + convertedArray[i]);
        }
    }
}

Output:
Contents of list ::[Harry Potter, Harry Potter 2, Lord Voldemort, Hedwig]
Element at the index 0 is ::Harry Potter
Element at the index 1 is ::Harry Potter 2

Do come back for more. Hope this helps and you like the tutorial. Do ask any queries in the comment box and provide your valuable feedback. Share and subscribe. Keep Coding!! Happy Coding!! 🙂
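A small additional sketch, not from the tutorial, contrasting the two overloads side by side (plus the Java 11 shorthand, if it is available to you):

import java.util.List;

public class ToArrayVariants {
    public static void main(String[] args) {
        List<String> list = List.of("a", "b", "c");

        // Zero-argument overload: always returns Object[].
        Object[] objects = list.toArray();

        // Typed overload: passing an empty array lets the method size it.
        String[] typed = list.toArray(new String[0]);

        // Java 11+ shorthand taking a generator function.
        String[] viaGenerator = list.toArray(String[]::new);

        System.out.println(objects.length + " " + typed[0] + " " + viaGenerator[2]);
    }
}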
https://www.codingeek.com/java/how-to-convert-list-to-array-in-java/
CC-MAIN-2020-50
refinedweb
467
55.54
Internal Representation of Characters in C/C++

Examine this code snippet and explain why it will work, or why it will not:

char a;
for (a = 'a'; a < 'm'; a++)
    printf("%c\n", a);

---

I need a simple C program that reads each line of data from a file and displays the information.

---

How C supports implicit constructors. Compare this C feature with other languages. How reliable is this feature?

---

Please help with the following as it relates to the existing code (separate attachment). Instead of getting the product names and prices from the standard input, enhance the program to read them from a file instead. Do not hard-code the filename within the program; ask the user to enter it instead. Also, enhance the program to s...

---

I need to modify the existing code (separate attachment) to be more orderly in its output appearance, basically so that the displays are similar to the samples below. Right now, I get this error when attempting to compile in Miracle C: c:\program files\miracle c\newweek4.c: line 18: wrong # args in function call '{'

---

C Programming Problem. I need help writing a program in C that meets the following criteria: 1) There are 5 products for sale at a grocery store (I want to use cucumbers, lettuce, grapes, apples and bananas). 2) The five products are all sold by weight (pounds). 3) Customers need to be prompted to enter the price (per pound)...

---

C programming. Let us consider a simple grocery store with just 5 products. Assume that all items are sold by weight. Select 5 vegetables to use in the program. Prompt the user to enter the price (i.e. price per pound) for each of the vegetables. A customer walks in with an unusual request: he wants to know the maximum amount o...

---

Use the following to answer questions 1-1 and 1-2. Given the following memory values and a one-address machine with an accumulator, what values do the following instructions load into the accumulator? Word 20 contains 40. Word 30 contains 50. Word 40 contains 60. Word 60 contains 20. Register 1 contains 10.

1-1 Binary code "000000000000000111111111100" represents:
A. 4099 (decimal)  B. 4093 (decimal)  C. 4094 (decimal)  D. 4092 (decimal)  E. none of the above is correct.

1-2 What is the 2's complement of F, wherein F = 2's complement of (2's complement of "11111101000001")?
A. "100011110111110"  B. "110110001000001"  C. "0111000010000...

(See attached file for full problem description)

---

1. Analyze the following pseudocode. What is ansr's final value?

num = 8
ansr = 0
if num > 5 then
   if num < 20 then
      ansr = 1
   else
      ansr = 2
   endif
endif

A) 0  B) 1  C) 2  D) 7

2. Which piece of pseudocode represents checking the loop condition?
A) rep = 1  B) rep = r...

---

Arrays are collections of elements of the same type. The elements of the array can be accessed using their position (index) in the array. Arrays may have several dimensions: they can be similar to lists (1 dimension), matrices (2 dimensions), cubes (3 dimensions), etc. 1. The first exercise involves a 1-dimensional array of...

---

28. What is the first step that should be taken when creating a chart?
A. providing a name for the chart
B. selecting the chart type
C. selecting the range of cells that contain the data the chart will use
D. choosing the data labels that will be used on the chart

29. If you want to print only the chart in a worksheet, wh...

---

Expand the "Currency Conversion" program to include a menu that allows the user to choose which currency he/she wishes to display in its equivalency to the US dollar.

- Correctly use at least 1 function or subroutine.

Permit the user to input an amount to be converted into US Dollars. You may assume that a number is input...

---

I ran the previous problem but still got errors; maybe it is the compiler we have to use, which is Miracle C. This is the error that I have: unrecognised types in comparison 'while (Amount<=0)' (See attached file for full problem description)

---

Computer Organization: Digital Logic Circuits (III), Boolean Algebra

Information about computer terminals in a computer network is maintained in a file. The terminals are numbered 1 through 100, and information about the nth terminal is stored in the nth line of the file. This information consists of a terminal type (string), the building in which it is located (string), the transmission rate (...

---

Write a function convertLength() that receives a real value and two strings inUnits and outUnits, then converts the value given in inUnits to the equivalent metric value in outUnits and displays this value. The function should carry out the following conversions: inUnits outUnits...

---

Reverend Zeller developed a formula for computing the day of the week on which a given date fell or will fall. Suppose that we let a, b, c, and d be integers defined as follows: a = the number of the month of the year, with March = 1, April = 2, and so on, with January and February being counted as months 11 and 12 of the preceding...

---

I need an example of a flow chart and specification sheet for a C program. Here is an example of a C program:

/* Currency conversion. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <system.h>

main()
{
    /* Data declaration statements */
    int Curr_type;
    float For_curr;
    float Equ_curr;
    pri...

The attachment shows where I am in the program and what needs to be accomplished.

---

See the attached file for the problem. Please read R as R+.

Question: Prove that n! ∈ o(n^n) by using the definition of the small-o notation.

Definition: Let f : N → R ∪ {0}, where N is the set of all non-negative integers and R is the set of all positive real numbers. o(f) = { g : N → R ∪ {0} | ∀c ∈ R, ∃n...
CC-MAIN-2017-13
refinedweb
1,049
64.81
On woensdag, juni 26, 2002, at 07:38 , Raymond Hettinger wrote: To fix it, I found myself writing the same code over and over again: def _toset(container): return dict([(elem, True) for elem in container]) This repeated dictionary construction exercise occurs in so many guises that it would be worthwhile to provide a fast, less magical looking approach. I disagree on this being "magical", I tend to think of it as "Pythonic". If there is a reasonably easy to remember construct (such as this one: if you've seen it once you'll remember it) just use that, in stead of adding extra layers of functionality. Moreover, this construct has lots of slight modifications that are useful in slightly different situations (i.e. don't put True in the value but something else), and people will "magically" understand these if they've seen this one. What I could imagine would be nice is a warning if you're doing inefficient "in" operations. But I guess this would have to be done in the interpreter itself (I don't think pychecker could do this, or could it?), and the definition of "inefficient" is going to be difficult ("if your program has done more than N1 in operations on a data structure with more than N2 items in it and these took an average of O(N1*N2/2) compares", and keep that information per object). -- - Jack Jansen [email protected] - - If I can't dance I don't want to be part of your revolution -- Emma Goldman -
https://mail.python.org/archives/list/[email protected]/message/6QXEHA7AX4EPZENXU3Z6FDMPBTVFGXXB/
CC-MAIN-2021-39
refinedweb
258
55.58
Std ios Search view with search icon - problem I have posted this here as I am not sure its a bug or not. But as far as I can see its not possible to create a std ui Search view as in ios using the ui module(as far as I can see, very possible I have over looked something). But I think the main reason you cant get the same look and feel as a standard ios search view is because you cant manipulate some attrs of the ui.TextField, eg bg_color, corner_radius. I would have thought you would compose the the view like - - Create a ui.View with a corner_radius and set the flex attrs. - Create an ui.ImageView and add the built in image 'iob:ios7_search_24' - Create a ui.TextField, with no border_width, corner_radius, bg_color - Add the image and textfield to the view, positioning them with x any attrs, set the flex of the textfield. I have tried various things, but I cant get a search field view that is the equivalent of the ios one. Again, maybe I am just missing some thing simple. Adding an search emoji in the placeholder is not the correct way. Any insight would be great I did a somewhat convincing search bar in my objc_browser.py in objc_hacks repo. using: - a container view (gray bg color), 320x420 ** subview: textfield, frame=(20,4,280,32) to center it up in the larger frame, corner radius 16, border color 0,0,0, bg color slightly off white, and placeholder='\U0001F50E Search' with a delegate that looked like: def textfield_did_change(self,textfield): self.filter=textfield.text.lower() self.filtereditems=[x for x in self.items if self.filter in x.lower()] self.tv.reload() Well, probably best just to go look at the code... I have in my half finished projects a wrapper that implements a UISearchController in objc_util, but this isnt quite finished(it displays, but is not functional). you might also look at the blackmamba code -- i feel like this is something @zrzka may have done already. There's no search controller in BM. I was kind of lazy to do it. I just use simple text field and table updates (insert, ...) or reload. @JonB , thanks I did copy out the relevant parts of your objc_browser.py and ran it. It's not what Iam looking for. Have done similar before. The search icon should not be in the placeholder text, you do not get the same iOS feel. Clearly in the Native iOS version the search Icon is an image object positioned at the head of textfield in a different object. I am not trying to be smart or funny, but its pretty fundamental to the look of native iOS apps. I didn't look at @zrzka code, but the search boxes I see in his helper apps are not native iOS looking. Look, I will post this as an issue on the repo. As far as I can see the only thing stopping us making a 100% native looking/behavioural iOS search view is the inability to modify some of the Textfields attrs. bg_color, corner_radius , bg_width. I am not sure why @omz ignores these attrs when you set them. There could be some technical reasons why he does this. I am not sure. But it seems to me we should be able to easily create this view. Then with the built in transform features go on to provide some of the different behaviours you see in different iOS apps like when you tap into the textfield, like in contacts, the textfield width shrinks and a Cancel button appears at the right of the enclosing view. Granted, this could also be done now. Not sure ppl think this is an issue or not. I personally would be able to be able to make a native looking ios search view. Also with out the help of objc. 
I mention without the help of objc, for me it does not seem necessary and search fields do various transformations, more than the one simple one I mentioned above where a Cancel button is added to the rightmost of the field in the enclosing view. Although I dont know if objc could interfere with things like this but it would better not to have to worry about it. If you are interested in a UISearchController, I wrote a wrapper for the native controller for PyDoc it needs a little bit of work still but it might give you a starting point. You can find it here this would be something that I would find very useful if it was in the std ui module. I also had started a wrapper of UISearchController. Basically works, though currently not move/delete. Guys, please dont take me the wrong way. But I am not really into the whole objc thing. I am happy if I can just get better at Python. I am glad objc is included in Pythonista and for sure others code with objc has helped me in the past. But having zero experince with it makes me feel quite uneasy when I am relying on something that uses objc. But as I say I am personally happy its there as I know a lot of cool things can be done with with. Eg. Not that I have looked, but i expect BlackMumba relies on obj, I use BM all the time. But as a utility. I install and forget, the way it should be. I suspect StaSH also relies on objc (I am not sure about that, I am not sure objc was around when StaSH was released). Anyway, I like I am sure many others rely on that also. So thats why I am hoping this will get resolved in the ui Module. I feel it should be quite simple thing to do. I was just interested to make a function that created a search panel and basically save it as a snippet. Same as IK did with the Cancel_OK button view I posted the other day. Might have given it some Basic functionality, but more important for me was just creating a well behaved view that could be created to another view and visually take care of its self size wise just using flex. Anyway, I guess we will see. I opened an issue. Once someone develops something with objc that works, and provides a convienent python interface that hides the details from you, then it effectively becomes no different than the native option, and you never need to worry about the objc underpinnings. You just have to import an extra module. The question becomes, what should that interface look like? I am thinking a SearchableTableView should be identical to a tableview, except it should have an option to show_search_bar, which will toggle the search box. The datasource then is responsible for implementing a searchbox_did_change(tableview, searchtext), and filtering the results. The SearchableListDataSource would be the "default" to work like ListDataSource, and would handle the default use case of presenting a list of textual items. @JonB, in the case you cited I agree. Basically where its a module you import and only interact with it via the intended API. I was more thinking about building blocks. Were you are more likely to paste the code into you module and interact with it a bit more than you would in a standalone module. Look, I realise you could can put anything in a module and import it. Sometimes it doesn't seem so natural to do that. But natural has different meanings. I have had this mental block with Pythonista in the past of using a lot of modules. And it's because in past to share (simple projects) that contain multiple modules is not the easiest thing for a lot of ppl to install, test use etc. 
I mean trivial projects. However, with the beta and iOS11 this is quickly becoming less problematic. Less problematic is not the right phrase, its getting quite easy. Even though there has been bridges between Pythonista and 'Working Copy' before, the multitasking between the 2 apps now is great. Working both ways with repo's makes like so much easier. So hopefully to share trivial multi module projects in the future will become a no brainer. What I should have mentioned, whilst the project might be trivial, you still might say have 5 modules for the trivial project, as 4 of those modules might be just helper modules you include in every project you make. You make only use a couple of classes/functions from one or 2 of the modules. But to me it doesn't make sense to go through and cherry pick out those support modules/classes or functions each time you want to share something. In way, this has hindered me as I have never really made my own support files directory with modules that I could reuse in all my projects (this is my own fault). But I think I will start to do that now. Sorry for the tangent, but I feel its all related, well for me it is. @JonB , your searchable table script was almost exactly what I was looking for in order to provide an iOS citation picker for my writing. I've adapted it a bit so that it inputs a list of my citation keys. However, I can't seem to figure out how to return the selection. Tapping on an item only highlights the item itself but never returns it or closes the search table. I've tried to adapt the script to return the selected item, based off other examples, but have been unable to figure it out. I'm sure it's relatively simple but I'm quite new to python and only at the level of basic trial & error. Any help you could provide would be most appreciated, and thank you for your work either way!
https://forum.omz-software.com/topic/4558/std-ios-search-view-with-search-icon-problem
CC-MAIN-2021-04
refinedweb
1,665
72.26
15.1. Introduction: Optional Parameters¶ In the treatment of functions so far, each function definition specifies zero or more formal parameters and each function invocation provides exactly that many values. Sometimes it is convenient to have optional parameters that can be specified or omitted. When an optional parameter is omitted from a function invocation, the formal parameter is bound to a default value. When the optional parameter is included, then the formal parameter is bound to the value provided. Optional parameters are convenient when a function is almost always used in a simple way, but it’s nice to allow it to be used in a more complex way, with non-default values specified for the optional parameters. Consider, for example, the int function, which you have used previously. Its first parameter, which is required, specifies the object that you wish to convert to an integer. For example, if you call in on a string, int("100"), the return value will be the integer 100. That’s the most common way programmers want to convert strings to integers. Sometimes, however, they are working with numbers in some other “base” rather than base 10. For example, in base 8, the rightmost digit is ones, the next digit to the left is 8s, and the one to the left of that is the 64s place (8**2). The int function provides an optional parameter for the base. When it is not specified, the number is converted to an integer assuming the original number was in base 10. We say that 10 is the default value. So int("100") is the same as invoking int("100", 10). We can override the default of 10 by supplying a different value. Note Tom Lehrer’s New Math Some math educators believe that elementary school students will get a much deeper understanding of the place-value system, and set a foundation for learning algebra later, if they learn to do arithmetic not only in base-10 but also in base-8 and other bases. This was part of a movement called “The New Math”, though it’s not so new now (I had it when I was in elementary school!) Tom Lehrer made a really funny song about it, and it’s set with visuals in several YouTube renditions now. Try this very nice lip-synched version. When defining a function, you can specify a default value for a parameter. That parameter then becomes an optional parameter when the function is called. The way to specify a default value is with an assignment statement inside the parameter list. Consider the following code, for example. (clens15_1_1) Notice the different bindings of x, y, and z on the three invocations of f. The first time, y and z have their default values, 3 and 7. The second time, y gets the value 5 that is passed in, but z still gets the default value of 7. The last time, z gets the value 8 that is passed in. If you want to provide a non-default value for the third parameter (z), you also need to provide a value for the second item (y). We will see in the next section a mechanism called keyword parameters that lets you specify a value for z without specifying a value for y. Note This is a second, related but slightly different use of = than we have seen previously. In a stand-alone assignment statement, not part of a function definition, x=3 assigns 3 to the variable x. As part of specifying the parameters in a function definition, x=3 says that 3 is the default value for x, used only when no value is provided during the function invocation. There are two tricky things that can confuse you with default values. 
The first is that the default value is determined at the time that the function is defined, not at the time that it is invoked. So in the example above, if we wanted to invoke the function f with a value of 10 for z, we cannot simply set initial = 10 right before invoking f. See what happens in the code below, where z still gets the value 7 when f is invoked without specifying a value for z. (clens15_1_2) The second tricky thing is that if the default value is set to a mutable object, such as a list or a dictionary, that object will be shared in all invocations of the function. This can get very confusing, so I suggest that you never set a default value that is a mutable object. For example, follow the exceution of this one carefully. (opt_params_4) When the default value is used, the same list is shared. But on lines 8 and 9 two different copies of the list [“Hello”] are provided, so the 4 that is appended is not present in the list that is printed on line 9. Check your understanding - 0 - Since no parameters are specified, x is 0 and y is 1, so 0 is returned. - 1 - 0 * 1 is 0. - None - The function does return a value. - Runtime error since no parameters are passed in the call to f. - Because both parameters have default values specified in the definition, they are both optional. advfuncs-1-1: What will the following code print? def f(x = 0, y = 1): return x * y print(f()) - 0 - Since one parameter value is specified, it is bound to x; y gets the default value of 1. - 1 - Since one parameter value is specified, it is bound to x; y gets the default value of 1. - None - The function does return a value. - Runtime error since the second parameter value is missing. - Because both parameters have default values specified in the definition, they are both optional. advfuncs-1-2: What will the following code print? def f(x = 0, y = 1): return x * y print(f(1)) str_multthat takes in a required string parameter and an optional integer parameter. The default value for the integer parameter should be 3. The function should return the string multiplied by the integer parameter.
https://runestone.academy/runestone/static/fopp/AdvancedFunctions/OptionalParameters.html
CC-MAIN-2018-51
refinedweb
1,020
62.27
ECharts_study There are many libraries for rendering visual icons. Because of the needs of the project, I learned Echarts. The full name is Apache ECharts Many drawings will not be explained in detail in the basic preparation part, so beginners must see the basic preparation part Besides, there are a lot of charts, What is Echarts ECharts, an open-source visual chart library implemented with JavaScript, can run smoothly on PC s and mobile devices, is compatible with most current browsers (IE9/10/11, Chrome, Firefox, Safari, etc.), and the underlying layer relies on vector graphics library ZRender , provide intuitive, interactive and highly personalized data visualization charts. ECharts provides a general Line chart,Histogram,Scatter diagram,Pie chart,K-line diagram , for statistics Box diagram , for geographic data visualization Map,Thermodynamic diagram,Line diagram , for relational data visualization Diagram,treemap,Rising sun chart , multidimensional data visualization Parallel coordinates , and for BI Funnel diagram,Dashboard And supports the mashup between graphs. It's awesome. It's cool. That's not much nonsense. Let's start to see how to do it First experience It can be copied directly. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http- <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>ECharts-First experience</title> <script src=""></script> </head> <body> <div id="main" style="width: 600px;height:400px;margin:100px auto;"></div> <script> // Initialize the ecarts instance based on the prepared dom var myChart = echarts.init(document.getElementById('main')); // Specify configuration items and data for the chart var option = { // Icon title title: { text: 'ECharts First experience' }, // The legend tag of type HTML is also the title of the icon. But this label can be a specific icon. The title above can be the total title showing all icons legend: { data: ['achievement'] }, // Column, discount and other titles with horizontal and vertical coordinates will be easier to understand in combination with the figure xAxis: { data: ['language', 'mathematics', 'English', 'Physics', 'biology', 'Chemistry'] }, yAxis: {}, // Figure list. series: [{ name: 'achievement', type: 'bar', data: [120, 130, 119, 90, 90, 88] }] }; // Display the chart using the configuration item and data you just specified. myChart.setOption(option); </script> </body> </html> install There are many installation methods, such as downloading code directly, npm acquisition, and CDN. You can even select only what you need to install. npm npm install echarts --save CDN <script src=""></script> Or in Select dist / ecarts.js, click and save it as an ecarts.js file. Then use it in the file <script src="echarts.js" /> Or special customization provided by the official Import project The previous section describes how to install. For those using CDN, all modules will be loaded. Those that use npm or yarn to install or only install some functions. We can import only some required modules. Import all import * as echarts from 'echarts'; This will import the components and renderers provided by echarts. And focus on the eckarts object. If we want to use the pass through point method. echarts.init(....) Partial import For partial import, we can only import the modules we need to reduce the final volume of the code. The core module and renderer are necessary. Other icon components can be introduced as needed. Note that you must register before using it. 
//'; // The introduction prompt box, title, rectangular coordinate system, data set, built-in data converter Component, and Component suffix are all Component import { TitleComponent, TooltipComponent, GridComponent, DatasetComponent, DatasetComponentOption, TransformComponent } from 'echarts/components'; // Automatic label layout, global transition animation, etc import { LabelLayout, UniversalTransition } from 'echarts/features'; // When introducing the Canvas renderer, note that introducing the Canvas renderer or SVGRenderer is a necessary step import { CanvasRenderer } from 'echarts/renderers'; // Register required components echarts.use([ TitleComponent, TooltipComponent, GridComponent, DatasetComponent, TransformComponent, BarChart, LabelLayout, UniversalTransition, CanvasRenderer ]); // The next use is the same as before, initializing the chart and setting configuration items var myChart = echarts.init(document.getElementById('main')); myChart.setOption({ // ... }); Basic preparation (also a common step for other charts) Preparation steps Create a box. Used to display charts. Remember to give it an initial width and height. The init method provided by echart is introduced, and other API s or components can be introduced as needed Prepare an initialization chart function, which can be called when rendering for the first time or re rendering is required. (not necessary, if data need not be modified) Call the init method in the initial function and pass in the box node prepared in advance. The init method returns an instance of a chart (echartsInstance). A box can only initialize one chart instance Use the setOption method of the chart instance to pass in an option object, which is used to configure the generated icon type, data and style. <template> <div> <!-- Set up a box to display the chart --> <div id="con" :</div> </div> </template> <script> // Import init import { init } from "echarts"; export default { name: "Echarts_bar", // Invoke after component mount, initialize chart mounted() { this.initChart(); }, methods: { initChart() { // Call init to return an ecahrt instance let myChart = init(document.getElementById("con")); // Configuration chart. option is an object used to configure the chart myChart.setOption(option) }; </script> option object The configuration of option object is the difficulty of learning charts. The basic preparation is general, simple and easy. Therefore, to learn echarts is to learn how to configure the option object and how to configure the required charts. From the official website, the option object has many configurations. The key value under option may be an object, and the object has many key values. But these are summaries. Only part of a chart is used, and some can be used but not necessary. option:{ title:{...} // Configuration title xAxis:{...} yAxis:{...} // Configure x and y axes series:[{}] // Configuration chart //.... } The option set in the same chart instance has another feature. The old and new options are merged rather than replaced. That is, for the first time, we configured a lot of new data. If there is new data, it will be new to configure. I just want to add new or to be changed in the new option. If the configuration remains unchanged, you can not configure it for the second time. Bar chart Bar chart is a common chart in junior high school. The advantage of using bar chart for data visualization in the project is that we can know the approximate quantity and change trend at a glance. 
So let's learn how to use echart to make a bar chart. (the following code is written based on vue's scaffolding project) Chart configuration Basic preparations are common. The difficulty lies in that different charts need different configurations (but some configurations are also common). The functions are complex and there are more configuration items. This section only describes some common configurations of bar charts. Other configurations can be found on the official website as needed. The setOption method is passed an object. It's not difficult to know. What we need to write is the key value pair. (the following keys are in no order) - Title: the title of the configuration chart. The value is an object. title: { // Title content, support \ nline break text: "echarts_bar" // Other configurations will be described in the following general configuration section }, - **xAxis / yAxis: * * the x-axis in the rectangular coordinate system grid. Generally, a single grid component can only place the upper and lower X-axes at most. If there are more than two X-axes, you need to configure the offset attribute to prevent the overlap of multiple X-axes at the same position. (if a chart requires a coordinate axis, you can't configure xAxis, but the value of xAxis can be an empty object.) x: alignment axis. Applies to logarithmic data. // Category data, valid in category axis (type: 'category'). data : [] // Position of the x-axis. position : 'top'/'bottom' // The offset of the x-axis from the default position is useful when there are multiple X-axes in the same position offset : '20' (Default units px) // Axis scale min. And maximum scale value min / max : // If you don't write, it will be calculated automatically. There are many ways to write values. Has different functions. See the documentation for details. } - **Series: * * series list. Each series determines its own chart type by type. The value is an array. The element of an array is an object. An object corresponds to a chart. (however, these charts all work in the same container. It can also be the superposition of multiple charts of the same kind) series:[ { // Series name, used for displaying tooltip, legend filtering of legend, and specifying corresponding series when setOption updates data and configuration items name:'System name', // Series type, that is, the type of table. Bar: bar table. Pie: pie chart. Line: line chart. etc. type:'bar', // data data: [], // If we have set the data of a coordinate axis (its type is category), the data here can be a one-dimensional array as the value of the category. It corresponds to another prepared category one by one. // It can also be a multidimensional array, which is used to set x-axis data and y-axis data. (you need to specify which axis is category in xAxis or yAxis). 
    // data: [
    //   ['Apple', 4],
    //   ['milk', 2],
    //   ['watermelon', 6],
    //   ['Banana', 3]
    // ]
  }
]

Simple case

<template>
  <div>
    <div id="con" :style="{ width: '600px', height: '400px' }"></div>
  </div>
</template>

<script>
import { init } from "echarts";

export default {
  name: "Echarts_bar",
  data() {
    return {
      xdata: ["watermelon", "Banana", "Apple", "Peach"],
      ydata: [2, 3.5, 6, 4],
    };
  },
  mounted() {
    this.initChart();
  },
  methods: {
    initChart() {
      // Initialize the echarts instance based on the prepared dom
      let myChart = init(document.getElementById("con"));
      // Configure the chart
      myChart.setOption({
        title: {
          // Title component
          text: "Fruit unit price table",
        },
        xAxis: {
          data: this.xdata, // type defaults this axis to category, otherwise data will not take effect
        },
        yAxis: {
          name: 'Unit / yuan'
        },
        // type defaults to value; the data is configured in series
        series: [
          // Series list. Each series determines its own chart type by type
          {
            name: "Unit Price",  // name of this chart
            type: "bar",         // the type of the chart
            data: this.ydata,    // series data
          }
        ],
      });
    },
  },
};
</script>

Other effects

barWidth

If the column width is given as a percentage, the x axis is first divided equally by the number of categories (four in this figure), and the percentage is taken relative to that slot width. If it is not set, the width adapts automatically. It is set inside a series element and applies to bar charts:

// Width of the bar column
barWidth: '20',
// With a percentage width, each category slot (one quarter of the box width in
// this figure) is the reference; '100%' fills the whole slot. Unset = adaptive.

markPoint

Mark points can highlight some special data. Used inside a series:

// Mark points, to mark special data
markPoint: {
  symbol: "pin",  // marker shape: pin, circle, square, etc.
  data: [
    // The marker points: an array of objects, one object per marker
    {
      type: "max",  // maximum
    },
    {
      type: "min",  // minimum
      name: "name of the marker",  // optional
    },
  ],
},

markLine

Mark lines, used inside a series:

markLine: {
  data: [
    {
      type: "average",           // mean line
      name: "Marker line name",
    },
  ],
},

label

The text label on the graph can describe some data information of the graph, such as value and name. It is used inside a series. For a bar chart, if only the column is drawn, the exact value it represents cannot be read directly, but it can be shown with a label:

label: {
  show: true,       // whether to display; the default is false
  position: '...',  // supports strings: left, inside, etc.; also supports array
                    // positions like [20, 30] or ['10%', '20%']
  rotate: 0,        // rotation of the text
  // other styles, such as color, can also be set
},

Multi-effect use case

<template>
  <div>
    <div id="con" :style="{ width: '600px', height: '400px' }"></div>
  </div>
</template>

<script>
import { init } from "echarts";

export default {
  name: "Echarts",
  data() {
    return {
      xdata: ["watermelon", "Banana", "Apple", "Peach"],
      ydata: [5, 8, 4, 9],
      ydata2: [10, 20, 40, 30],
    };
  },
  mounted() {
    this.initChart();
  },
  methods: {
    initChart() {
      let myChart = init(document.getElementById("con"));
      myChart.setOption({
        title: {
          text: "Fruits",
          link: '',
          left: '30'
        },
        xAxis: {},
        yAxis: {
          type: "category",
          data: this.xdata,
        },
        series: [
          {
            name: "Unit Price",
            type: "bar",
            markLine: {
              // Mark line
              data: [
                {
                  type: "average",
                },
              ],
            },
            label: {
              show: true, // whether to display; the default is false
The default is false }, barWidth: "25%", data: this.ydata, // Chart data }, { name: "Storage capacity", type: "bar", data:this.ydata2, markPoint: { symbol: "pin", data: [ { type: "max", // Maximum }, { type: "min", }, ], }, }, ], }); }, }, }; </script> General configuration Here are some general configurations, but not all of them are listed. Some contents are used in special scenarios or rarely used. I won't introduce them. If you need to consult the documents, this note is for introductory learning and providing some cases. title Configure the title of the chart. The value is an object. title: { // Title content, support \ nline break text: "echarts_bar", // Title hyperlink, click the title to jump link :'url' // Title Style object textStyle : { /* color fontStyle / fontWeight Equal font style width / height There are also border, shadow and other styles */ } // Sub text title, it also has independent styles, links, etc. subtext :'character string' // Horizontal alignment of the whole (including text and subtext) textAlign :'' // 'auto','left','right','center' // position adjustment left / top / right / button // The value can be 20 (default unit px), the percentage of '20%' relative to the width and height of the parent box, or 'top', 'middle', 'bottom'. }, tooltip Prompt box component. It can be set in many locations, such as global, local, a series, a data, etc The prompt box component has different effects on different types of diagrams. For example, in the bar chart, the effect is like this: (a prompt box appears when the mouse hovers over the bar column) The value of tooltip is also an object. You can configure whether to display, display trigger conditions, displayed text, style, etc. // Prompt box component. tooltip: { show: true, trigger: "item", // Trigger type. // Item data item graph trigger is mainly used in scatter chart, pie chart and other charts without category axis. (that is, it will be triggered on the graph) // Axis coordinate axis trigger is mainly used in histogram, line chart and other charts that will use category axis (it can be on the coordinate axis, not necessarily on the graph) // none triggers nothing. triggerOn: "mousemove", // Triggered when the mouse moves. Or click / none // formatter:'{b}: {c}' / / you can customize the content of the prompt box. If not, there will be the default content. // Some variables have been assigned values. You can refer to documents according to different types, such as b and c above // It supports the writing of {variable}. It also supports HTML tags, // In addition to a string, the value can also be a function. A function is a callback function. The return value of the function is a string, and the value of this string is displayed as the value of formatter. // The function receives a parameter, which is an object with relevant data in it. formatter: (res) => { return `${res.name}:${res.value}`; // Category name: value }, // Text style of the floating layer of the prompt box textStyle :{} // Color, fontsize, etc }, toolbox According to the column, there are five built-in tools: export chart as picture, data view, dynamic type switching, data area scaling and reset. They can be added as needed. 
toolbox: { // Tool size itemSize: 15 // Spacing between tools itemGap: 10 // Configuration tool feature: { saveAsImageL: {}, // Export picture dataView: {}, // Direct data display restore: {}, // Reset dataZoom: {}, // zoom magicType: { // switch type: ["bar", "line"], }, }, }, legend The legend component displays symbols, colors and names of different series. You can click the legend to control which series are not displayed legend: { data: ["Unit Price", "Storage capacity"], // The element value is the name of the series // There are also some control styles }, series:[ { name:'Unit Price', ... },{ name ;"Storage capacity" .... } ] Line chart Line chart is used to show the change of data in a continuous time interval or time span. Its characteristic is to reflect the trend of things changing with time or ordered categories. The change trend is obvious, and when multiple groups of data are displayed in the same table, it is easy to compare and simple. Basic use The use of line chart is almost the same as that of bar chart. In most cases, you only need to replace the type bar with the type line. Set box - > Import ecahrts related API - > create ecahrts instance - > set option <template> <div> <div id="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "Echarts_line", mounted() { this.initChart(); }, methods: { initChart() { let myChart = init(document.getElementById("con")); let option = { title:{ text:'New user statistics', left:'center' }, xAxis:{ name:'month', data:['1','2','3','4','5','6'] }, yAxis:{ name:'New user / individual' }, series:[ { name:'New user statistics', type:'line', data:[3000,2400,6300,2800,7300,4600,2200] } ] }; myChart.setOption(option); }, }, }; </script> Other effects xAxis , yAxis The configuration of the x and y axes is roughly the same. The section has been described in the bar chart section. Here are some necessary supplements. - boundaryGap The setting and performance of category axis and non category axis are different. The value can be Boolean or array ['value1','value2']. Set the blank size on both sides. xAxis:{ type:'', data:[], boundaryGap:false } - scalse type:'value' is valid only in the value axis. Whether it is a scale away from the 0 value. When set to true, the coordinate scale does not force zero scale. It is suitable for data with large data and small fluctuation range, or in the scatter diagram of double value axis. yAxis:{ // type defaults to value scale:true } lineStyle The style of polyline is set in the series. series: [ { // .... lineStyle: { color: "red", type: "dashed", // dashed: dotted line, dotted: dotted line, solid: solid line (default) }, }, ] - smooth Sets the style of the break point of the line chart, whether it is smooth or angular. The value is Boolean, and the default is false. Set in the series series: [ { // .... smooth:true }, ] - markArea Marking area, in the bar chart, we learned to mark points and lines. A marker area is added here. The marked area will have a shadow. We can set the color of the shadow. It is also set in the series // In the element object of sereis, markArea:{ data:[ // data is a two-dimensional array. [ // Set marker area { xAxis:'1' }, { xAxis:'2' } ], [ // Set second marker area { xAxis:'4', yAxis:'3100' }, { xAxis:'6', yAxis:'3300' } ] ] } - areaStyle Area fill pattern. After setting, it will be displayed as an area map. series: [ { // ... 
areaStyle:{ color:'pink', origin:'end' // Auto (default), start, end } }, ], - stack Stack graph, when there is more than one group of data, and we want the second group of data to be superimposed on the first group of data. series: [ { name: "New user statistics", type: "line", data: [700, 1000, 400, 500, 300, 800, 200], // The value of stack is used for matching, and the superimposed value will match it stack:'mystack', areaStyle:{} // Configure the color. If it is blank, ecahrts will automatically configure the color },{ name:'Overlay data', type:'line', data: [700, 300, 400, 200, 500, 100, 300], stack:'mystack', // Superimposed with the data of the stack configured first. areaStyle:{} } ], Scatter diagram Basic use Scatter plot can help us infer the correlation between variables. Its type is scatter. Compared with bar and line, its feature is that both coordinates are data (value). It displays points one by one. The basic operation is the same as that of line chart and bar chart. The type of coordinate axis is value. The type of series is scatter. The data is a two-dimensional array. <template> <div> <div id="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "Echarts_scatter", data() { return {}; }, mounted() { this.initChart(); }, methods: { initChart() { let myChart = init(document.getElementById("con")); let option = { title: { text: "Scatter diagram", left: "center", }, xAxis:{ type:"value", scale:true }, yAxis:{ type:"value", scale:true }, series:[ { type:'scatter', data:[ // x , y [10,11], [9,10], [11,11], [10,10], [10,12], [13,11], [12,15], [12,13], [14,13], [13,13], [13,12], [15,14], [14,13], [14,14], [16,15], [16,17], [17,16], [17,17], [17,18], [18,16], [18,17], [18,20], ], } ] } myChart.setOption(option); }, }, }; </script> Other effects symbolSize Configuration item that controls the size of scatter points. Used in series. It can control the size of all scatter points by = one by one, or it can control the size of some added points series:[ { // ... // Controls the size of the scatter. // symbolSize:10, / / the value can be a numerical value to control the size of all scatter points in this series symbolSize:function(e){ // The value can also be a function console.log(e); // The value is an array of numeric values one point at a time. if(e[1]/e[0]>1){ // When the value of the y-axis is greater than the value of the x-axis, the size is 12, otherwise it is 6 return 12 } return 6 }, } ] itemStyle Control the color of scatter points, which is used in sereis series It can control the color of all points or the color of some points. itemStyle:{ // color:'red' / / direct assignment, color:function(e){ // Function values are also supported console.log(e.data); // E is the scattered data, and e.data takes out the numerical array let [a,b] = e.data if(b/a>1){ return 'red' } return 'green' } } Figure of the combination of the above two effects effectScatter It is the value of type of series series. It is also a scatter chart. But it has ripple effect. For example: We set the value of type to effectscutter. Then all scatter points of this series of scatter diagrams have the effect of ripple. It can be used with showEffectOn trigger timing and rippleEffect ripple effect. series:[{ type:'effectScatter', // Ripple animation effect. After use, turn on the ripple effect for all scatter points in this series. After the default icon is loaded, turn on the effect. // Trigger timing showEffectOn:'emphasize', // The default value is render. 
It means that the scatter is enabled after loading. emphasize means that the scatter is enabled only when the mouse hovers over the scatter // Ripple style rippleEffect:{ color:'', // The color of ripples, which defaults to the color of scattered points number : 2 // Number of ripples period : 3 // Animation period in seconds scale:10 // Control the size of the expansion of ripples. scale is based on how many times the original. }, }] General configuration of xoy axis The above three figures are all on the X and Y axes. This section describes some general configurations of the xoy axis. grid For the drawing grid in the rectangular coordinate system, the upper and lower X axes and the left and right Y axes can be placed in a single grid. grid:{ // Show chart border show:true, // border width borderWidth:10, // Border color borderColor:'red', // left / top / controls the offset between the chart and the box. It will affect the size of the chart left:120 // left ,top , button,right // Controls the size of the chart // width:/height: }, axis x and y axes. This knowledge point is actually mentioned in the bar chart. xAxis:{ // Or y: on the number axis. Applicable to logarithmic data. // Category data, valid in category axis (type: 'category'). data : [] // x. The position of the y-axis. position : 'top'/'bottom' / 'left'/ 'right' // The offset of the x-axis from the default position is useful when there are multiple X-axes in the same position offset : '20' (Default units px) // Axis scale minimum and maximum scale values min / max : // If you don't write, it will be calculated automatically. There are many ways to write values. They have different functions. Please refer to the document for details. } Emphasize a few points - If data is set, type defaults to category. If it is empty, it defaults to value. The value is obtained from series. - If category is set but data is not set, the data in the series must be a two-dimensional array. - If the data of an axis is obtained from the series and it is of category type, it must be configured with category type in axis. dataZoom Previously, we knew that there was a dataZoom tool in the toolbox. You can select a part of the graph to zoom in and out. This dataZoom is not configured in the toolbox, but directly under option. The value is an array, and the element of the array is a scaling function. For example, you can configure one for the x-axis and one for the y-axis data. Generally, when there is only one, the default is let option = { dataZoom:[ { // Zoom type, control zoom position, and method. type:'slider' // There is a sliding axis at the bottom of the x axis. 'inside' / / zoom through the mouse pulley in the figure } ] // ... } **If it is inside * *, no UI will be added. However, when the mouse is placed in the figure and scrolled, it can be scaled according to the mouse position. Multiple scalers When there are multiple scalers, you need to configure the index of their corresponding coordinates corresponding to different coordinates. Generally, if there are no multiple x and multiple y, then inde is 0. dataZoom:[ { type:'slider', xAxisIndex:0 // x axis },{ type:'slider', yAxisIndex:0 } ], Pie chart The union chart is also a kind of chart with simple structure. Its characteristic is that it can intuitively see the approximate proportion of various purposes. It doesn't need the x and y axes. But the basic preparation remains the same. The main note is that the type of series is pie. 
And the data format: the data value of the series can be written in several ways, as a one-dimensional array, a two-dimensional array, or an array of objects. Object arrays are the most common:

[
  {
    name: 'Data item name',
    value: 'Data value'
  }
]

Simple case

<template>
  <div>
    <div class="con" :style="{ width: '600px', height: '400px' }"></div>
  </div>
</template>
<script>
import { init } from 'echarts'
export default {
  name: 'echarts_pie',
  mounted() {
    this.initChart()
  },
  methods: {
    initChart() {
      let chart = init(document.querySelector('.con'))
      let option = {
        title: {
          text: 'pie',
          left: 'center'
        },
        series: [
          {
            type: 'pie',
            data: [
              {
                name: 'sleep',
                value: '7'
              }, {
                name: 'Code',
                value: '10'
              }
            ]
          }
        ]
      }
      chart.setOption(option)
    }
  }
}
</script>

Common effects

label

The text label on the pie chart can describe some of the graph's data, such as the value or the name. It is displayed by default; those are the "Code" and "sleep" labels in the figure above.

// If the data is just a simple one-dimensional array: data = [7, 10]
// then there are only values and no names, so you may want to hide the labels
label: {
  show: false
}
// The picture then shows a bare circle.

Besides controlling whether the label is shown, we can also control the content and style of the displayed text.

label: {
  show: true,
  formatter: 'a string', // may contain {variable} placeholders:
  // {a} : series name
  // {b} : data name
  // {c} : data value
  // {d} : percentage
  // The value of formatter can also be a function. It receives one parameter carrying the data; assemble what you need from it and return the string.
  // There are also position, color and other style options
}

radius

The radius of the pie chart. By setting it we can change the radius or turn the pie into a ring.

series: [{
  // ....
  radius: '' // Number: absolute radius in px
  // Percentage: take half of the smaller of the container's width and height, then apply the percentage to that
  // Array: ['inner radius', 'outer radius'], which produces a ring
}]

label: {
  show: true,
  formatter: '{b}:{d}%'
},
radius: ['60', '80']

roseType

Whether to display as a Nightingale (rose) chart, distinguishing data size by radius. Two modes are available:

- 'radius': the sector's central angle shows the data's percentage, and the radius shows the data's size.
- 'area': all sectors have the same central angle, and the data size is shown only by the radius.

series: [{
  // ...
  roseType: 'radius'
}]

selectedMode

Click-to-select mode. The pie chart has its own hover animation, but clicking does nothing by default. Set selectedMode to add a click-selection effect to the sectors. It can be combined with selectedOffset.

series: [{
  // ...
  selectedMode: 'single' // the other value is 'multiple'
  // The effect is shown in the figure below. With 'single' only one sector can be selected at a time, while 'multiple' allows several sectors to be in the selected state.
  // selectedOffset sets how far a selected sector is offset
  selectedOffset: 30
}]

Map

Basic use

Brief steps

The basic preparation remains unchanged:

- Introduce the echarts-related APIs, prepare the container, initialize the instance, and set the option. Still, maps differ quite a bit from the previous charts.
- The vector map's JSON data must be prepared first. It can be loaded through ajax or downloaded locally in advance.
- Register the map data globally using registerMap, exposed by echarts.
import {registerMap} from 'echarts' registerMap('chinaMap',china) // The first parameter is the name required for later use, and the second is JSON data - In option, you can configure either series or geo. geo:{ // type is map type:'map', map:'chinaMap' // It needs to be the same as the name set during registration } // perhaps series:[{ type:'map', map:'chinaMap' }] Simple case Download address of JSON file in this case <template> <div> <div id="con" :</div> </div> </template> <script> // Import init and registerMap import {init,registerMap} from 'echarts' // Import map JSON data downloaded locally in advance import china from '../../jsondata/china.json' export default { name:'echarts_map', mounted(){ this.initChart() }, methods: { async initChart(){ let Chart = init(document.querySelector('#con')) // Register maps with register await registerMap('chinaMap',china) let option = { geo:{ type:'map', map:'chinaMap' // It needs to be the same as the name set during registration } } Chart.setOption(option) } }, } </script> Common configuration roam Set the effects that allow zooming and dragging. When the mouse is placed on the map, you can zoom through the scroll wheel, and click the map to drag. The value is Boolean geo:{ // ... roam:true } label In the rendering of the above example, we can see that each province has no name. label can control the display and style of province names. geo:{ // ... label:{ show:true, // The displayed content is not set. The default is the province name. The value can be a string or function. It is the same as the above. formatter: '' // There are many configurations of styles. Please refer to the document } } zoom Set the zoom scale of the map during initial loading. The value is array. Under geo, it is the same level as label center Set the center point. The values are array. ['longitude coordinate', 'latitude coordinate'] set the point corresponding to longitude and latitude to the center of the chart Common effects Set color In the above maps, they are all gray. Except that the mouse hovers, it will change color. It looks very monotonous. Through the combination of series and visual map, we can set the color for the map, but its color will not be too dazzling. Generally, the color is used to display the population or temperature. **Configuration steps * * (based on the configured basic map) - Prepare a data, which saves the names of provinces in the map and the corresponding data, or the names of cities (according to the national map, provincial map, or others) let temdata = [ { name:'Guangdong Province', // Suppose it is a national map, and name matches Guangdong Province value:'32' // Set the corresponding data. For example, here we set the temperature. },{ name:'Fujian Province', value:'29' },{ // ... },//... ] - Associate series with geo. Of course, geo can also be replaced by series, so it is to add a series. series:[{ // Configuration data is the data prepared above data:temdata, // The index of the associated map is required. The index can be configured in geo. If there is only one, the default is 0 geoIndex:0, // The configuration type needs to be map. type:'map' }], - Configure visual map. This is a required step, otherwise the color will not be displayed. It is a visual mapping component used for "visual coding", that is, mapping data to visual elements (visual channels). visualMap:{ // After configuring visaulMap, show defaults to true. If it is set to false, this component will not be displayed. 
show:true, // Set minimum value min:0, // Set maximum max:100, // Configure the color change from minimum to maximum. If not configured, it will be set automatically inRange:{ color:['white','pink','red'] // Multiple colors can be configured }, calculable:true // The slider appears and is not set to false. However, it has the same filtering function as the original component. } case <template> <div> <div id="con" :</div> </div> </template> <script> import { init, registerMap } from "echarts"; import china from "../../jsondata/china.json"; // import axios from 'axios' export default { name: "echarts_map", data(){ return { temdata:[] } }, mounted() { // get data china.features.forEach(v=>{ let obj = { name:v.properties.name, value:(Math.random()*(100)).toFixed(2) } this.temdata.push(obj) }) // Initialize chart this.initChart(); }, methods: { async initChart() { let Chart = init(document.querySelector("#con")); await registerMap("china", china); let option = { geo:{ type:'map', map:'china', label:{ show:true } }, series:[{ data:this.temdata, geoIndex:0, type:'map' }], visualMap:{ show:true, min:0, max:100, inRange:{ color:['white','red'] } } }; Chart.setOption(option); }, }, }; </script> Combined with other figures Maps can be displayed in combination with other maps. Combined scatter diagram A container can have multiple graphs. Here we add a scatter graph to the map, in fact, we add a series of objects with the type of scatter graph. But our previous scatter map is on the coordinate axis, and the map has no coordinate axis. Therefore, we need to use longitude and latitude to mark the position of points. Steps (based on map already configured) - Prepare data. The data uses longitude and latitude to mark the position of points. Longitude and latitude can be obtained in the json data of the map. let opintData = [ [116.405285, 39.904989], [117.190182, 39.125596], //.... ] - Add an object in the series, whose type is scatter or effecrScatter, and set data. - At the same time, the coordinateSysem is configured as geo, that is, the longitude and latitude are set as the location reference, and the default is xoy coordinate axis series:[ { type:'effectScatter', data:opintData, coordinateSystem:'geo', } ] Combine lines Path map It is used for drawing line data with start and end information, mainly for route and route visualization on the map Its coordinateSystem is geo by default. Here we cooperate with the map to present the effect shown in the figure below. The type of its series series is lines. No xoy shaft is required. Therefore, it also needs to pay attention to data processing. series:[ { type:'lines', data: [ // The value is an array of objects, and each object is a path. { coords: [ [116.405285, 39.904989], // starting point [129.190182, 38.125596], // End, // Support multiple points, if necessary ], // Configure path style lineStyle:{ color:'red', width:1, opacity:1 }, // Adds a text label to the end of the path. label:{ show:true, formatter:'Beijing XXX company', fontSize:16, borderRadius:4, padding:[10,15], // verticalAlign :'middle', backgroundColor:'orange' } }, ], } ] Radar chart Radar chart is generally a visual processing of data after testing the performance of an object. Can clearly see the advantages and disadvantages. Basic use step - Prepare the box, import ecahrt related API s, etc. (some of the above are not described in detail) - Configure radar.indicator: it is used to configure a framework. As shown in the figure above, the Pentagon and text. 
// Its value is an array, and the elements of the array are objects. An object corresponds to a corner. There are name and max attributes in the object. Are the displayed text and the maximum value, respectively option = { radar:{ indicator:[ { name:'hearing', max:'100' },{ // .... } ] } } - Configure series. The type is radar. data is an array of objects. series: [ { type: "radar", data: [ { name: "English", value: ["80", "90", "77", "65", "100"], // Match the object configuration sequence in the indicator one by one. }, { // Can there be more than one } ], }, ], Simple case <template> <div> <div class="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "ecahrts_radar", mounted() { this.initChart(); }, methods: { initChart() { let chart = init(document.querySelector(".con")); let option = { radar: { indicator: [ { name: "hearing", max: "100", }, { name: "fluent", max: "100", }, { name: "pronunciation", max: "100", }, { name: "vocabulary", max: "100", }, { name: "Aggregate quantity", max: "100", }, ], }, series: [ { type: "radar", data: [ { name: "English", value: ["80", "90", "77", "65", "100"], }, ], }, ], }; chart.setOption(option); }, }, }; </script> Common configuration label Text label. Set to true to display data or series:[{ // ... label:{ show:true formatter:'show contents' // The default is the corresponding value position // Equal style settings } }] areaStyle Area fill pattern, which can fill the part surrounded by some data with other colors. You can set color, transparency, etc. series:[{ // ... areaStyle:{ // Even empty objects have a default color fill color:'orange' , opacity:0.6 } }] shape The radar chart is not necessarily square, but also round. We can modify the shape by adding a configuration shape to the label object radar: { shape: "circle", // It can be a circle or a polygon (default) // indicator: ... }, Instrument diagram The instrument diagram is the simplest configuration diagram. Basic use step - Basic box preparation, introduction of ecarts API, etc - The type of the object element of the configuration series is gauge. data configuration pointer series: [ { name:'Company or data name' // For example, weight, or km/h, can also be absent type: "gauge", data: [ // The value is an array, the elements of the array are objects, and an object is a pointer { value: "92", // Set the current value pointed by the pointer, and the maximum data and scale value of the instrument will be automatically adjusted }, ], }, ], Common configuration itemStyle Configure the color of the pointer in the object that configures the pointer. Each pointer can be configured with different colors, border colors, shadows and other styles. data:[{ value:'92', itemStyle:{ color:'pink' } }] min/max The scale value of the instrument will be set automatically or manually. series: [ { // ..... min: "50", max: "150", }, ], style Color theme The easiest way to change the global style is to directly adopt the color theme. Built in theme color In addition to the consistent default theme (light), ECharts5 also has a built-in 'dark' theme. The init API can also pass in the second parameter to change the color theme. The default is light. If you need to change to dark, you can pass in dark. let myChart = init(document.querySelector('#con'),'dark') Custom theme colors In the theme editor, we can edit the theme and then download it as a local file. Then use it in the project. The specific steps are clearly written in the official documents. 
- After configuring your own font, set a theme name. Of course, you can also use the default font - Then click the download topic. There are two versions that can be downloaded and used. The use steps and official documents are also very clear Introduction method // es6 import '../assets/mytheme' // use let chart = init(document.querySelector("#con"),'mytheme'); // CDN <script scr='url/mytheme.js'></script> If you change the file name and forget the topic name, you can open the downloaded topic file and find the following code echarts.registerTheme('mytheme',...) // The first parameter is the subject name. Debug color wheel Whether it's a built-in theme or a theme we set ourselves. Each has a palette. For example: If there are no special settings, the chart will take colors from the palette. For example, we often do not set color, but the bar column has color, and the pie chart also has color. Global palette In addition to the theme's own color palette, we can also set a global color palette, which will overwrite the theme color palette. The global palette is directly set under option, the key is color and the value is array option:{ color:['#ff0000','blue','green '] / / support colors such as hexadecimal } If there are fewer kinds of colors, the same color may appear in parts that should have different colors, such as color:['red','blue '] Local palette The partial palette is set in the series. Dedicated to a graph. series:[{ type:'pie', color:['red','yellow'] }] Direct style settings (xxxStyle) The color palette provides color, but when rendering, the color of a part is uncertain, or may change. Therefore, we can use the relevant configuration to set its color directly. Direct style setting is a common setting method. In option, itemStyle,textStyle,lineStyle,areaStyle,label, etc. can be set in many places. In these places, the color, line width, point size, label text, label style, etc. of graphic elements can be set directly. itemStyle This can be used in many charts. It can set the color of the bar column of the bar chart, control the color of each part of the pie chart, and use it in the scatter chart, etc. series:[{ //... itemStyle:{ color:'red' // Control all, support callback function color: (RES) = > {} } }] There are also graphs that support writing in data to control the color of some data, such as pie charts series:[{ type:'pie', data:[{ name: "sleep", value: "7", // Set color itemStyle: { color: "green", }, }] }] In addition to controlling the color, itemStyle has other styles to control // part borderColor: '#000 ', / / border color borderWidth: 0 , // Border width borderType: 'solid' , // Border type borderDashOffset: 0 , // Used to set the offset of the dashed line borderCap: butt , // Specifies how segment ends are drawn opacity: 1 , // transparency ...... textStyle Text Style is configured in the title. We usually use to configure the color, size and position. Therefore, the use method is basically the same as that of other styles. There are some general descriptions in the general configuration. We won't repeat it. lineStyle It can be used in line related graphs such as line graph and path graph. areaStyle We have used this configuration in line chart and radar chart. But there are more than two kinds of charts that can use it. But its configuration is also relatively simple. label Text tags, which we have used many times in the above examples, will not be repeated. 
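As a recap of the direct style settings above, here is a minimal sketch combining them on a single line series; all colors and data are arbitrary.

// Minimal sketch: the xxxStyle options together on one line series
option = {
  xAxis: { type: "category", data: ["Mon", "Tue", "Wed"] },
  yAxis: {},
  series: [{
    type: "line",
    data: [120, 200, 150],
    itemStyle: { color: "orange" },                  // the data points
    lineStyle: { color: "orange", type: "dashed" },  // the poly-line itself
    areaStyle: { color: "orange", opacity: 0.2 },    // the filled area under it
    label: { show: true, formatter: "{c}" },         // value labels on each point
  }],
};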
Constant light and highlight Normally on, that is, its color in the general state, which can be configured by direct style or color palette. This section focuses on highlighting. Highlight is the color (such as mouse hovering or clicking) that appears when the graph corresponding to some data is selected. It is generally hovering. The highlighted style is configured through emphasis. Take the pie chart as an example, others are similar. series:[{ //... emphasis:{ scale:true // Zoom in or not, scaleSize:2 // Magnification focus:'none' // Fade other graphics // For example, linestyle, itemstyle, label, etc. some can also be placed in emphasis. The writing method remains the same, but the position changes. Configure the style when highlighting itemStyle: { color:'red' } }] Color gradient and texture fill The value of color supports the use of RGB to represent pure color, such as' rgb(128, 128, 128) ', or RGBA, such as' rgba(128, 128, 128, 0.5)', or hexadecimal format, such as' #ccc '. The value of color can also be a callback function. In addition, the value of color can be an object. The gradient used to configure the color // Linear gradient color:{ type:'linear' // Linear gradient x: , y: , x2:, y2:, // (x,y) represents one point, (x2,y2) represents another point, and x,y points to x2,y2. Indicates the direction colorStops:[{ offset:0, // Color at 0% color:'' },{ offset: 0.3 // Color at 30% color:'' },{ // .... }] } // Radial Gradient { type: 'radial', x: 0.5, y: 0.5, r: 0.5, // (x,y) is the origin and r is the radius colorStops: [{ offset: 0, color: 'red' // Color at 0% }, { offset: 1, color: 'blue' // Color at 100% }], } What's more amazing about color is that it supports texture filling, which is equivalent to backgroundImage. // Texture fill { image: imageDom, // Htmlimageelement and htmlcanvas element are supported, but path string is not supported repeat: 'repeat' // Whether to tile. It can be 'repeat-x', 'repeat-y', 'no repeat' } case <template> <div> <div class="con" :</div> </div> </template> <script> import { init } from "echarts"; export default { name: "ecahrts_pie", mounted() { this.initChart(); }, methods: { initChart() { // Load picture resources as textures let imgsrc = ""; let piePatternImg = new Image(); piePatternImg.src = imgsrc; let chart = init(document.querySelector(".con")); let option = { color: ["pink", "lightblue"], title: { text: "pie", left: "center", }, series: [ { type: "pie", data: [ { name: "sleep", value: "7", itemStyle: { color: { type: "linear", x: 0, y: 0, x2: 0, y2: 1, colorStops: [ { offset: 0, color: "red", // Color at 0% }, { offset: 1, color: "blue", // Color at 100% }, ], global: false, // The default is false }, }, }, { name: "Code", value: "9", itemStyle: { color: { type: "radial", x: 0.5, y: 0.5, r: 0.5, colorStops: [ { offset: 0, color: "#46bbf2 ", / / color at 0% }, { offset: 1, color: "#Aa47d7 ", / / color at 100% }, ], global: false, // The default is false }, }, }, { name: "Daydreaming", value: "5", itemStyle: { color: { image: piePatternImg, repeat: "repeat", }, }, }, ], }, ], }; chart.setOption(option); }, }, }; </script> Container and container size Containers are used to store charts. In the above use, we only used one method to set the width and height. Define a parent container with width and height in HTML (recommended) Generally speaking, you need to define a < div > node in HTML first, and make the node have width and height through CSS. Using this method, you must ensure that the container has width and height at init. 
<!-- Specify a fixed width and height -->
<div class="con" :style="{ width: '600px', height: '400px' }"></div>

<!-- Specify the height, with the width taken as the width of the browser's viewport -->
<div class="con" :style="{ width: '100vw', height: '400px' }"></div>

Specifying the size of the chart

If the chart container does not have a width and height, or if you want the chart's size not to equal the container's size, you can also specify the size during initialization.

<div id="con"></div>

var myChart = init(document.getElementById('con'), null, {
  width: 600,
  height: 400
});

This takes advantage of the third parameter of init, an object that specifies the width and height of the chart.

Responding to changes in container size

When the width and height are specified, the size of the chart is fixed. If we change the size of the browser window, the chart's size will not change. Even when the width is not hard-coded and the container width is adaptive, the chart itself will not adapt, and part of it may be cut off, as shown in the figure.

If we want the chart to be adaptive, we can use the resize() method on the chart instance returned by init.

var myChart = init(document.getElementById('con'));
window.onresize = function() { // Listen for window resize events
  myChart.resize(); // Call the resize method
};
// This does the same
window.onresize = myChart.resize
// Or use an event listener
window.addEventListener("resize", myChart.resize);

resize accepts an object parameter, equivalent to the third parameter of init, so that the actual width and height of the chart do not depend on the container.

myChart.resize({
  width: 800,
  height: 400
});

Note, however, that the size of the chart is limited by the container. Once the chart's width and height are fixed, they cannot change adaptively. In the following cases, calling resize directly cannot change the width and height of the chart:

// 1. The container's width and height are fixed
<div id="con" :style="{ width: '600px', height: '400px' }"></div>
// 2. The chart's width and height are fixed
var myChart = init(document.getElementById('con'), null, {
  width: 600,
  height: 400
});

But this is not without a solution. We can read the viewport's width and height and update the container's size before calling resize (or reset it via setOption). If the container does not set overflow to hidden, the dynamically obtained width and height can also be passed directly as the resize parameter.

animation

Load animation

A lot of chart data is requested from a server. If the network is slow, or there is a lot of data, the chart stays blank for a while. For this, echarts has a built-in loading animation.

myChart.showLoading() // show the animation
myChart.hideLoading() // hide the animation
// myChart is the chart instance returned by init

We can turn the animation on when we start loading data, then hide it when the data comes back.

animation

Animation configuration inside option. option enables animation effects by default, for example after the data loads, or when the data changes.

option = {
  animation: true, // whether to enable animation; the default is true
  animationDuration : 100, // animation duration in milliseconds
  // The value can also be a function receiving the index of each element, so different elements can animate for different durations, as the situation requires
  animationEasing: 'linear', // easing animation
  // See the documentation for more easing effects
  animationThreshold: 8, // animation threshold
  // Take the bar chart as an example: each column, marker point, or marker line has its own independent animation.
If these independent animations exceed the number set here, all animation effects will be cancelled. It is equivalent to that the animation is false. // There are also some configurations of animation control, which can be referred to the official documents. } API The API s provided by ecahrts can be divided into four categories. - ecahrts: Global ecarts object - Echartsinstance: ecahrts instance, that is, the object returned by init - Action: Chart behavior supported in echarts - Event: event in ECharts echarts The global ecarts object is obtained after the script tag is imported into the ecarts.js file, or through the import module. init Create an ecarts instance and return an ecarts instance. You cannot initialize multiple ecarts instances on a single container. init(Container node, theme, chart width and height) // Of the three parameters, only the first is required connect connect is an API exposed by ecarhts, which can associate multiple charts. Because a container registers a chart instance. If multiple charts need to be displayed and cannot be merged, they will become separate pieces. If we want to click to download as an image, or refresh the chart at the same time, it will be a little troublesome. connect can solve this problem. import {connect} from 'ecahrts' // Set the group id of each instance separately. You need to set the same id for the associated chart chart1.group = 'group1'; chart2.group = 'group1'; connect('group1'); // Or you can directly pass in the instance array that needs linkage connect([chart1, chart2]); disconnect Disassociate connect. The parameter is the associated group id. import {disconnect} from 'ecahrts' disconnect(groupid) dispose dispose is to destroy the instance. It destroys the chart and destroys the instance. At this time, resetting the option is invalid. It is different from the API of another clear. import {dispose} from 'ecahrts' dispose(myEchart) // Parameters are icon instances or container nodes use Use components to cooperate with new on-demand interfaces. It needs to be used before init. //'; // Rectangular coordinate system components are introduced, and the suffix of components is Component import { GridComponent } from 'echarts/components'; // When introducing the Canvas renderer, note that introducing the Canvas renderer or SVGRenderer is a necessary step import { CanvasRenderer } from 'echarts/renderers'; // Register required components echarts.use( [GridComponent, BarChart, CanvasRenderer] ); registerMap Register the map. After obtaining the SJON data of the map, register the map globally. import {registerMap} from 'ecahrts' registerMap('mapName',mapJsonValue) echartsInstance API on icon instance group Grouping of charts for connect ion let myChart = init(document.querySelector(".con")) mtChart.group = 'String' // group id . The value is a string for linkage setOption chart.setOption(option, notMerge, lazyUpdate); - option: Chart configuration - notMerge: the value is Boolean, and the default is false. That is, the new option is merged with the old option. If set to true, the new option will completely replace the old option - lazyUpdate: Select. Whether to not update the chart immediately after setting the option. The default value is false, that is, synchronize and update immediately. If true, the chart will be updated in the next animation frame myChart.setOption(option) The setOption parameter has a lot to write. But the commonly used one is actually option resize Recorded in the container and container size section. 
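Pulling the instance-level APIs above together, a typical wiring looks roughly like this. This is only a sketch: the container ids, the group name, and optionA/optionB (standing for any chart options) are all illustrative.

import { init, connect, dispose } from "echarts";

const chartA = init(document.getElementById("conA"));
const chartB = init(document.getElementById("conB"));
chartA.setOption(optionA); // initial render
chartB.setOption(optionB);

chartA.group = "g1"; // give both instances the same group id
chartB.group = "g1";
connect("g1");       // link them for coordinated behavior

window.addEventListener("resize", () => { // keep both adaptive
  chartA.resize();
  chartB.resize();
});

// later, e.g. in a Vue beforeUnmount hook, destroy the instances:
dispose(chartA);
dispose(chartB);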
clear Clear the current chart, but the instance still exists after clearing. You can also reset the option. Chart instances also have a dispose method, which directly destroys the instance without parameters. action Chart behaviors supported in ECharts are triggered through dispatchAction. Some events triggered manually can be simulated. dispatchAction is a method on a chart instance. It is here because it mainly serves action. Chart behavior is to click a part of the icon to highlight it, hover it, or switch the chart type, zoom, and so on The following is an example. You can view the document for more examples. The method of use is similar. // To highlight a series: dispatchAction({ // Specify type type: 'highlight', // Use index or id or name to specify the series seriesIndex:0 // Represents the first chart in the series // You can also use arrays to specify multiple series. // seriesId?: string | string[], // seriesName?: string | string[], // If you do not specify the index of the data item, you can also specify the data item by name through the name attribute dataIndex?: number | number[], // Optional, data item name, ignored when there is dataIndex name?: string | string[], }); event event events are functions that execute when certain events are triggered. action is the corresponding style change of the chart after some chart behaviors occur. event includes some DOM events, such as click ing, mouse entering mouseover and other mouse events. There are also some custom events of ecahrts, such as highlight event, event triggered when the data selection status changes, and so on. These events are monitored through the on method of the instance, which is equivalent to an event listener. The on method is also an example of a chart. myChart.on('Event type',Callback function) // Event type, click, highlight, selectchange, etc // The callback function will receive a parameter with some data of the event The off method can cancel listening to events. myChart.off('Event type',Callback function) // The event type is to cancel the event listening on the instance // The callback function does not have to be passed. If it is not passed, the event bound on this type will be cancelled.
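To tie event and action together, here is a small sketch that listens for clicks and highlights the clicked item via dispatchAction. myChart is the chart instance from init, as throughout this tutorial.

const handler = (params) => {
  // params carries the event data: seriesIndex, dataIndex, name, value, ...
  console.log(params.name, params.value);
  myChart.dispatchAction({
    type: "highlight",
    seriesIndex: params.seriesIndex,
    dataIndex: params.dataIndex,
  });
};
myChart.on("click", handler);  // start listening
// ...
myChart.off("click", handler); // stop listening for exactly this handler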
https://programmer.help/blogs/619eae5ac6be3.html
CC-MAIN-2021-49
refinedweb
8,995
56.15
It's been a while since I have used Python and am stumbling already at a simple import!

statements.py:

str = "hello"

main.py:

import statements
print statements.str

Obviously the final program will have more going on, and the statements will be stuff like URLs. For this simple example, however, with both files sitting side by side in the same folder, I get the following error on running main.py:

AttributeError: 'module' object has no attribute 'str'

I know I am doing something terribly silly, but I can't see what it is. Searching around it looks fine. Any help/insight is appreciated.
http://www.howtobuildsoftware.com/index.php/how-do/dlO/python-import-attributeerror-trouble-remembering-import-rules-in-python
CC-MAIN-2017-47
refinedweb
104
67.65
: 50 comments: I've been noticing this sort of thing myself but couldn't quite see how to get rid of the interfaces. Great to see example code and know that it can actually work! I found Mark Seeman's description of replacing interfaces with functions interesting too: ... ... In my experience, you can model everything with single-member interfaces, which also implies that you can model everything with functions. Totally agree with you but there's something I see as a problem, and let me do a little strechy analogy here, moving from OOP to Functional feels like moving from Typescript back to JavaScript, I'll explain in just one second: IEmailer becomes Action problem with this is: IEmailer and its method Send(string toAddress, string body); are very, extremely, explicit about what they mean and do, we all agree here whereas Action is completely meaningless other than it's a function that received 2 string parameters and returns void. What parameters, what do they mean? I'm sure we all can imagine different, simple ways of solving this issue with trivial language features, but they don't exist yet, in C# at least. @Ricardo I guess it might help to use value objects to make it Action<EmailAddress, HtmlString>, though that is a lot of work in C#, compared to F#. Exactly, that's why I said to solve it properly it's a language feature still to be developed, so I'm not sure moving to a functional style in C# is a good idea, yet. IEmailer is just a typedef (essentially) for an interface about a function that takes some strings and sends an email. There are functional equivalents of that, renaming to reduce a developer's cognitive load. I'm not sure what it would be in C# (I'd be surprised if there wasn't an equivalent way), but in Haskell it would look something like: type Address = String type Body = String type Emailer = Address -> Body -> Email Where here Emailer is the type that represents a function that takes an Address and a Body and returns an Email. I could probably have chosen the names better, but hopefully you get the idea. There isn't a succint way in C# (as James also pointed out), hence my comment. Ah fair enough, I didn't see that. The closest I found was this: But it seems comparatively clunky. I like this. I don't think technicalities pose a stumbling block to anyone trying to adopt this style. I take @Ricardo's point. I would say that you would aim to minimize the occasions when you have to resort to the action. Typically it would be when performing operations that have side effects. Ideally you want to keep those at the edges of the application. The core can be built in functional style where each function takes some input and yields some result. The stumbling block would be the recognition of the advantages of the style. This is harder to prove and I think this is one of the reasons functional programming in general hasn't become mainstream outside certain circles. People can easily counter the fact that the code base has shrunk by 40%. Personally I favour this style and I would try it in C#. Ultimately as others have said F# should be at least considered if we're targeting .NET. I'm currently learning F# and Haskell and I'm loving it. I'm beginning to think that people are wired differently. As Mark Seeman (and also stated here) certain constructs seem to get in the way of what you are trying to express. I value clarity, simplicity, less moving parts, referential transparency, idempotency, declarative style which I think you get from a more functional style. Okay. 
I've just had a thought and I've found this on StackOverflow. Essentially we could improve this by having alias for your function specifications. It's a bit left-field but perfectly valid. In some ways it still exposes C# as object oriented and functions tagging along in the trailer :-). So you could say using Emailer = Action; using CustomerRetriever = Func>; Perhaps poor naming convention but you can see that there is a path back to the goodness of types and the expressiveness and compile time guarantees they offer. I don't think that's a good option, 2 different aliases won't work I believe. That SO reference also proves my point. There's too much ceremony in trying to port this functional behavior in C#, the language just doesn't support it yet (if it ever will). Sadly the blog engine completely messed up the code. But if you follow the stackoverflow link you'll see what I was getting at. @Ricardo Rodrigues: Exactly. I think it is at the point where the level of ceremony and cumbersomeness becomes unbearable that people consider a functional first or even a "pure" functional language ;-) having said that a lot has been achieved with imperative and object oriented programming. It'll probably be with us for a while. All roads lead to Rome as someone said :-). Perhaps its more about the journey. It's an interesting idea. I wonder how it will scale, though, to a real-world application of significant complexity. Also, I have to disagree on test simplification. You could actually construct the unit test almost exactly the same way with Moq, with no extra complexity or lines of code. On the naming thing, how about using a parameter object instead of ths string primitives as arguments? I think then you bascially have a command and handler pattern? public class SendEmailCommand { public string EmailAddress; public string Body; } Action handleSendEmail = (SendEmailCommand) => { ... } (Obviously you could use properties with protected setters and constructor, but this was easier to format!) Sorry, some trouble with formatting - that should be: Action<SendEmailCommand> handleSendEmail = command => { ... } I should also correct my spelling of Mark Seemann :) If you go that style you could switch to F# - it's that you're missing stuff you need in more complex scenarios. Stateful functions? Currying instead of ctors. Just think about the fact that Action and Function are two distinct types. You are also missing algebraic data types. What you have now is a language with some functional elements, basically a subset of the whole language. Delegates. Action can be replaced with: public delegate void Emailer(string email, string body); Still, I think the oo version is easier to reason about. This is exactly how I do it. I use my own delegates. This also means that I can still use a container where it's appropriate. This is such an idiosyncratic style of programming and such a departure from the awful practice whereby devs create an most one-to-one interface -> "service" approach dogmatically. I've found I have to marry the two because if nobody can understand or appreciate the nuances of this different approach then the codes value is undermined regardless of the advantages it brings. HUGE amount of ceremony. I think you're missing the point. It's doable, agreed, yes. Idiomatic C#? No Practical in C# ? Neither Will it ever be? With F# being out there, don't think so. Do you want to use it? 
Yes (learn F#) PS: myself included I'd be interested to hear why all your interfaces (and classes) only have one method, and whether cohesion ends up suffering? I have worked on similar codebases and find the classes too small and disconnected, with dependencies difficult to follow and code difficult to track down, not to mention unit tests all need reshuffling every time the dependency graph changes, and algorithms being split over multiple classes and functions even to do something simple. Personally I would consider moving in the other direction - not to start using static methods, but look at consolidating the classes towards better cohesion and encapsulation. I recently changed 6 classes into 2, and the code was much cleaner. (A provider, a factory, a factory-factory, a repository...) Another thing I've noticed is that I actually have remarkable trouble refactoring such "micro-class" codebases - in my experience it seems to be much easier / more natural to break up a large class than reassemble many small ones. If one's not doing it right it's fairly easy to achieve duplication but small interfaces doesn't mean low cohesion at all, exactly the opposite! It does one thing and it's very portable. When it does too many things, then you break cohesion (violating the ISP), because you end up with a swiss knife interface, like IQueryable for example. Did you do any performance profiling you can share? I'm interested in the runtime behavior of such conversions; lots of discussion of functional variants either ignore performance or the functional version is slower due to increased (hidden) looping and memory related to functional collection related thinking. I've enjoyed the recent tech du jour focus on functional languages, but I also am suspicious there's a reason why they have been around forever (i.e. Lisp in 1958) but always just a niche. Performance seems to be one thing keeping it there. I was going to complain about how bad and pointless the interfaces were in your starting code but then realized it's a great example of real world. Totally pointless interfaces that offer nothing and don't need to exist. @Ricardo Rodrigues the purpose of TypeScript isn't to make javascript less functional the purpose is to fix the insanity of how bad it is at handling contracts. It also adds meta programming (which some/all is being absorbed by ES6 because of the major lacking of it in JS). To be very straight to the point: I see a lot of code with interfaces that are realized by only one type of object and it's useless. Many people code with the "what if" syndrome and that's waste of time. I would guess Mike's point here is simplicity. I liked the article. * "what if": what if this needs to change in the future. Nice article, I like the point. This is a journey I've been on for a long time. Initially, bastardizing an OO language to work functionally feels like a nice thing to do. And then you realize that you are writing little functions everywhere to glue things together (like your compose function). So, you write a library of little combinators, such as the ones given in the answer to. You then realize that you could combine things so much better if you had partial application. And then you write some more little combinators. So much boiler-plate. When will it end? Then one day, the new person joins the team and they immediately spew as they see the crazy. You slowly open your eyes and realize that you can't put lipstick on a pig; C# ain't functional.. 
So to me the two implementations are equally readable, with the OO version maybe a little more intention-revealing thanks to "Verify( a method was called...)" and "Setup". But sure, you could be exactly as expressive in the functional version with some well-named helper functions. Anyway, a very good article about the functional approach to dependencies and testing. @Thomas Eyde Using named delegates to make it explicit what a Function is for can be harmful to your career - I recently received the following feedback from an offline coding test I had to submit when going for a contract (which I didn't get). "When using Funcs<> from 3.5 I would expect not to use delages cira 2.0 mixing these sort of stuff is a red flag for me. would expect to use Actions instead of delegates" Sigh. @Harry McIntyre Their loss. I often use explicit delegates to remove ambiguity. i.e: public delegate DateTimeOffset Now(); instead of: Func I think there is something to be said about the clarity? Not just for people either. You can register that with you container and inject date time, or substitute easily in a test. I like the approach even though it might lead to big overloads of constructors, specially when the injected dependencies were quite considerably in size (methods). Thanks for sharing :) Looks interesting. Can you please do a followup post where your interfaces have say 3-4 methods each? I'm not really sure how your approach scale, hence why I'd like your take on a more realistic situation. @Anonymous, when you're thinking functionally like this your interfaces all tend to end up with one method anyway: . Interfaces having one method doesn't stop you grouping your code into classes or modules :) "rantings of a mad man" seems a little presumptive! What is Abstract Data type? This blog awesome and i learn a lot about programming from here.The best thing about this blog is that you doing from beginning to experts level. Love from This blog awesome and i learn a lot about programming from here.The best thing about this blog is that you doing from beginning to experts level. Love. The more I use functional the more I believe OO is a dead end and a productivity time suck. It simply does not give the fluidity of code that functional provides. Just look at your old code, if you have interfaces that have one method the question should be why, just a bunch of pointless noise. An interface, a class and possibly IOC registration and resolution. People used to worry about readonly dictionaries in C# but I just pass around a Func which is the lookup and also means you are not even tied to a dictionary. Functional, because you break everything down more, gives more building block with finer granularity and closures provide far more power and flexibility that classes can. So the current project has 500000 lines of C# and I can work faster and move faster in the functional stuff than I can in the OO. This blog awesome and i learn a lot about programming from here.The best thing about this blog is that you doing from beginning to experts level. Love from Interesting. How does this play in a real world mvc app, using a container like StructureMap or AutoFac? @Ricardo () I'd use delegates for that. They are like interfaces, but for methods, and c# can match them easily to lambda expressions or static methods. C# 7 Named tuples can solve the Action problem. 
class Program
{
    static void Main()
    {
        RunCustomerReportBatch(SendEmail);
        Console.ReadKey();
    }

    public static void SendEmail((string toAddress, string body) emailFields)
    {
        // pretend to send an email here
        Console.Out.WriteLine("Sent Email to: {0}, Body: '{1}'", emailFields.toAddress, emailFields.body);
    }

    public static void RunCustomerReportBatch(Action<(string toAddress, string body)> sendEmail)
    {
        //....code omitted for brevity
        sendEmail(("[email protected]", "Test Report Body"));
    }
}

Going a step further, when I inject a dependency I tend to use only one method. So the next logical step is for the method to take a delegate as an argument instead of an interface or class. One thing I like about this approach is that it makes it more difficult for someone to clutter the code. They can't sneak in new class dependencies by adding new members to existing interfaces.
http://mikehadlow.blogspot.com/2015/08/c-program-entirely-with-static-methods.html?showComment=1446408900711
CC-MAIN-2019-51
refinedweb
2,559
64
Created on 2001-12-12 17:21 by ahlstromjc, last changed 2002-11-29 11:38 by pmoore. This issue is now closed.

This is the "final" patch to support imports from zip archives, and directory caching using os.listdir(). It replaces patch 483466 and 476047. It is a separate patch since I can't delete file attachments. It adds support for importing from "" and from relative paths.

Logged In: YES user_id=64929
I still can't delete files, but I added a new file which contains all diffs as a single file, and is made from the current CVS tree (Mar 15, 2002).

Logged In: YES user_id=64929
I added a diff -c version of the patch.

Logged In: YES user_id=31392
Deleting the old diffs that Jim couldn't delete.

Logged In: YES user_id=21627
Is this patch ready to be applied?

Logged In: YES user_id=6380
Sigh. We waited too long for this one, and now the patch is hopelessly out of date. I managed to fix most of the failing hunks, but the remaining hunk that fails (#11 in import.c) is a whopper: the saved import.c.rej is 270 lines long. I'm going to sleep on that, but I could use some help. In the mean time, I'm uploading an edited version of dashc.diff to help the next person.

Logged In: YES user_id=33168
I've reviewed most of the updated version. Quick summary:
* strncpy() doesn't null terminate strings, this needs to be added
* there seem to be many leaked references
Here are more details:
* strncpy(dest, str, size) doesn't guarantee the result to be null terminated, need dest[size] = '\0'; after all strncpy()s (I see a bunch in getpath.c)
* getpath.c:get_sys_path_0(), I think some compilers warn when you do char[i] = 0; (use '\0' instead) (there may be other places of this)
* import.c:PyImport_InitZip(), have_zlib is an alias to zlib and isn't really helpful/necessary
* import.c:get_path_type() can leak a reference to pyobj if it's an int
* import.c:add_directory_names() pylist reference is leaked
* import.c:add_zip_names(), memcmp(path+i, ".zip", 4) is clearer to me than path[i] == '.' ....
* import.c:add_zip_names, there's a lot of magic #s in this function
* leaked refs when doing PyTuple_SetItem
I think there were other leaked references. I'll try to spend some more time later.

Logged In: YES user_id=6380
Whoa... That's a lot. Neal, do you think you could come up with a reworked version of this? That would be great!

Logged In: YES user_id=33168
Reassigning to me. I'll give it a shot. It will take a while. Do I understand you correctly that your updated patch (dashc-2) has all the necessary pieces, except for the import hunk 11 that was rejected? And I need to get that failed hunk from the original patch?

Logged In: YES user_id=6380
No, the failing hunk is included in dashc-2. I actually *edited* the patch file until all but that one hunk succeeded. Thanks for looking into this! If you need help don't fail to ask on python-dev, I read it daily. :-)

Logged In: YES user_id=64929
This patch is old. I can provide a new patch against Python 2.2.1 if that would help. Or a new patch just for import.c against 2.2.1.

Logged In: YES user_id=33168
James, could you look at what Guido reworked? If that is fine, I can push it forward. Otherwise, feel free to update the patch. If I do any work on it, I'll make comments here so we don't duplicate work. Thanks.

Logged In: YES user_id=6380
I think a new patch just for import.c would be helpful.

Logged In: YES user_id=64929
Here is the import.c diff -c against Python-2.2.1.

Logged In: YES user_id=6380
Alas, the 2.2.1 diff doesn't help much. Current CVS is what we need.
:-(

Logged In: YES user_id=64929
I just grabbed the CVS import.c (only). I will edit this to add my changes and submit it as a new import.c patch. This should help, although I can't test it unless I download the whole tree.

Logged In: YES user_id=64929
Here is a diff -c against today's import.c. It is untested because I didn't get the whole tree, but it is way closer than the old patch. I moved the find_module() loop over file suffixes into search_using_fopen(), a trivial change which produces a large diff. To make the change obvious, I didn't correct the indenting, so please do that after looking at the patch. Next I will download the whole tree and check the other changes once I get a little time.

Logged In: YES user_id=113328
Sorry to poke my nose in on this, but I note that Guido is hoping for a 2.3a1 release by Christmas, and this patch has been making very slow progress. I have an interest in seeing it go in - is there anything I can do to help?

Logged In: YES user_id=21627
Try it out, report whether it works; if it doesn't work, fix it. Verify that documentation and test cases are up-to-date.

Logged In: YES user_id=113328
I've uploaded a revised patch (zip-cvs.diff). This is against current Python CVS. It's basically just the existing import.c patch, plus the rest of the dashc-2 patch. Everything applied OK (with fuzz and offsets, but otherwise OK). The resulting code builds, but when I try importing from a zip file, I get a crash. However, this happens when I use the zlib module directly, so appears to be because I haven't built zlib correctly :-( I will investigate further, but thought I'd upload the new patch in case anyone else wants to take a look.

Logged In: YES user_id=113328
How the *&^!%%$ do I upload files to SF? For now, the patch is at.

Logged In: YES user_id=113328
Sigh. If I'd read the README file properly, I wouldn't have screwed up the zlib build. I am now happy to report that the patch seems to work fine. I've only done some simple tests, but have no problems.

Logged In: YES user_id=21627
Paul, Can you propose documentation changes for this patch? I'd expect to see modifications to NEWS, whatsnew23.tex, api, and ref, perhaps other places as well. Feel free to use material from PEP 273 as appropriate. Also, having test cases for the feature would be good. As you cannot attach to this report (you are not the submitter), feel free to create a new patch. To create a diff for a cvs tree, using the "cvs diff" command is best - no need to have two copies of the tree.

Logged In: YES user_id=113328
I'll see what I can do. I've never looked at documentation or test patches before, so I'll have to get my brain around things first... I'll also have a look at the issues Neal raised. Is the diff format I used a problem? I have dial-up internet access, so having 2 copies of the tree is faster for me than doing cvs diff, as well as being possible while offline :-(

Logged In: YES user_id=21627
The diff format is fine; I was just not certain you know about cvs diff.

Logged In: YES user_id=113328
Opened a new patch (645650) with an updated diff.
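For context, the capability being debated in this thread eventually shipped as Python's zipimport (PEP 273), so in any modern Python the end result works out of the box. A minimal sketch (the archive and module names here are hypothetical):

import sys, zipfile

# Build a small archive containing a module.
with zipfile.ZipFile('bundle.zip', 'w') as zf:
    zf.writestr('hello.py', "def greet():\n    return 'hi'\n")

# Putting the archive on sys.path makes its contents importable.
sys.path.insert(0, 'bundle.zip')
import hello
print(hello.greet())  # -> hi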
http://bugs.python.org/issue492105
crawl-003
refinedweb
1,271
84.68
From: Tobias Waldekranz <[email protected]>
Sent: Tuesday, June 30, 2020 3:31 PM

> On Tue Jun 30, 2020 at 8:27 AM CEST, Andy Duan wrote:
> > From: Tobias Waldekranz <[email protected]> Sent: Tuesday, June 30, 2020 12:29 AM
> > > On Sun Jun 28, 2020 at 8:23 AM CEST, Andy Duan wrote:
> > > > I never saw a bandwidth test cause a netdev watchdog trip.
> > > > Can you describe the reproduction steps for the commit? Then we can
> > > > reproduce it locally. Thanks.
> > >
> > > My setup uses an i.MX8M Nano EVK connected to an ethernet switch, but I
> > > can get the same results with a direct connection to a PC.
> > >
> > > On the iMX, configure two VLANs on top of the FEC and enable IPv4
> > > forwarding.
> > >
> > > On the PC, configure two VLANs and put them in different namespaces.
> > > From one namespace, use trafgen to generate a flow that the iMX will
> > > route from the first VLAN to the second and then back towards the
> > > second namespace on the PC.
> > >
> > > Something like:
> > >
> > > {
> > >     eth(sa=PC_MAC, da=IMX_MAC),
> > >     ipv4(saddr=10.0.2.2, daddr=10.0.3.2, ttl=2)
> > >     udp(sp=1, dp=2),
> > >     "Hello world"
> > > }
> > >
> > > Wait a couple of seconds and then you'll see the output from fec_dump.
> > >
> > > In the same setup I also see a weird issue when running a TCP flow
> > > using iperf3. Most of the time (~70%) when I start the iperf3 client
> > > I'll see ~450Mbps of throughput. In the other case (~30%) I'll see
> > > ~790Mbps. The system is "stably bi-modal", i.e. whichever rate is
> > > reached in the beginning is then sustained for as long as the session
> > > is kept alive.
> > >
> > > I've inserted some tracepoints in the driver to try to understand
> > > what's going on:
> > >
> > > https://svgshare.com/i/MVp.svg
> > > (URL unwrapped here from a mangled Outlook SafeLinks redirect)
> > >
> > > What I can't figure out is why the Tx buffers seem to be collected
> > > at a much slower rate in the slow case (top in the picture). If we
> > > fall behind in one NAPI poll, we should catch up at the next call
> > > (which we can see in the fast case). But in the slow case we keep
> > > falling further and further behind until we freeze the queue. Is this
> > > something you've ever observed? Any ideas?
> >
> > Previously, our test cases didn't reproduce the issue: the CPU has more
> > bandwidth than the ethernet uDMA, so there is a chance to complete the
> > current NAPI poll. On the next poll, work_tx gets the update, so we
> > never catch the issue.
>
> It appears it has nothing to do with routing back out through the same
> interface.
>
> I get the same bi-modal behavior if I just run the iperf3 server on the
> iMX and then have it be the transmitting part, i.e. on the PC I run:
>
> iperf3 -c $IMX_IP -R
>
> It would be very interesting to see what numbers you see in this
> scenario.

I just have an imx8mn evk in my hands; I ran the case, and the number is
~940Mbps, as below.
root@imx8mnevk:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.192.242.132, port 43402
[  5] local 10.192.242.96 port 5201 connected to 10.192.242.132 port 43404
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   109 MBytes   913 Mbits/sec    0    428 KBytes
[  5]   1.00-2.00   sec   112 MBytes   943 Mbits/sec    0    447 KBytes
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    0    472 KBytes
[  5]   3.00-4.00   sec   113 MBytes   944 Mbits/sec    0    472 KBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0    472 KBytes
[  5]   5.00-6.00   sec   112 MBytes   936 Mbits/sec    0    472 KBytes
[  5]   6.00-7.00   sec   113 MBytes   945 Mbits/sec    0    472 KBytes
[  5]   7.00-8.00   sec   112 MBytes   944 Mbits/sec    0    472 KBytes
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0    472 KBytes
[  5]   9.00-10.00  sec   112 MBytes   940 Mbits/sec    0    472 KBytes
[  5]  10.00-10.04  sec  4.16 MBytes   873 Mbits/sec    0    472 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  1.10 GBytes   939 Mbits/sec    0             sender
https://www.mail-archive.com/[email protected]/msg335390.html
CC-MAIN-2021-31
refinedweb
726
83.15
struct error while compiling - Java Beginners error while compiling i am facing problem with compiling and running a simple java program, can any one help me, the error i am getting is javac is not recognised as internal or external command Check if you JAVA_HOME Error While Compiling - Java Beginners Error While Compiling Hi All I Am a beginner and i face the following problem while compiling can anyone help me. C:\javatutorial\example>... command, operable program or batch file. C:\javatutorial\example>   Struts - Struts Struts Dear Sir , I am very new in Struts and want to learn about validation and custom validation. U have given in a such nice way... validation and one of custom validation program, may be i can understand.Plz The ActionForm Class The ActionForm Class In this lesson you will learn about the ActionForm in detail. I will show you a good example of ActionForm. This example will help Struts Struts When Submit a Form and while submit is working ,press the Refresh , what will happen in Struts i got an error while compile this program manually. i got an error while compile this program manually. import..., ActionForm form, HttpServletRequest request...,struts jar files and i got an error in saveErrors() error Heading cannot find struts struts please send me a program that clearly shows the use of struts with jsp java - Struts java hi sir, i need Structs Architecture and flow of the Application. Hi friend, Struts is an open source framework...:// how to compile and run struts application - Struts how to compile and run struts application how to compile and run struts program(ex :actionform,actionclass) from command line or in editplus Articles It is the first widely adopted Model View Controller (MVC) framework, and has proven itself in thousands of projects. Struts was ground-breaking when... years. It is also fair to say that Struts is not universally popular, and has struts <p>hi here is my code can you please help me to solve... { public ActionForward execute(ActionMapping am,ActionForm af... org.apache.struts.action.*; class UserForm extends ActionForm { private String structs doubts structs doubts i tried the fileupload program which is published in roseindia i am getting fileupload page after that when select any file to upload then i click upload i am getting following error struts - Struts struts My struts application runs differently in Firefox and IE... What's the problem? I initially viewed it in Firefox.It was OK... But in IE the o/p was different - Framework using the View component. ActionServlet, Action, ActionForm and struts-config.xml...Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary Java + struts - Struts org.apache.struts.upload.FormFile; public class ImportFileBean extends ActionForm... execute (ActionMapping mapping, ActionForm form, HttpServletRequest req... totalBytesRead = 0; while (totalBytesRead < formDataLength Struts Action Classes Struts Action Classes 1) Is necessary to create an ActionForm to LookupDispatchAction. If not the program will not executed. 2) What is the beauty of Mapping Dispatch Action struts - Struts struts while im running my application in tomcat5.5 i got jasper exceptions /WEB-INF/struts-html.tld is not found it says ....how can i solve struts first - Struts struts first struts first program example compiling programme IJVM compiling programme IJVM I TRY AND COMPILE THE PROGRAMME BELOW... 
METHOD FOR POWER IN THE PROGRAME AS WELL...URGENT // Program to read two single... MULTIPLICATION BIPUSH 48 IADD BIPUSH 80 'P' //* Prints product OUT Struts - Struts a problem in my program, if antbody knows help me. it is a JSP program that i used in a struts aplication. these are the conditions 1. when u entered.... ..............here is the program.............. New Member Personal Dispatch Action - Struts Dispatch Action While I am working with Structs Dispatch Action . I am getting the following error. Request does not contain handler parameter named 'function'. This may be caused by whitespace in the label text Struts - Struts *; public class UserRegisterForm extends ActionForm{ private String action="add...,ActionForm form,HttpServletRequest request,HttpServletResponse response) throws Struts Books -the business logic at the core of the program. The Jakarta Struts Framework is one...; Programming Jakarta Struts, Second Edition While... at the heart of the program. Jakarta Struts addresses this issue by combining Java Struts File Upload Example Struts File Upload Example In this tutorial you will learn how to use Struts to write program to upload files. The interface org.apache.struts.upload.FormFile Struts - Struts UserRegisterForm extends ActionForm{ private String action="add"; private struts validation struts validation I want to apply validation on my program.But i am... unable to solve the problem. please kindly help me.. I describe my program below...;%@ include file="../common/header.jsp"%> <%@ taglib uri="/WEB-INF/struts struts database program struts database program Can u show me 1 example of Struts jsp using oracle 10g as database! with struts config file Struts iteraor Struts iteraor Hi, I am making a program in Struts 1.3.8 in which i have to access data from mysql. I am able to access data but data is not coming...(); int count = 0; while(rs.next()) { count Struts File Upload and Save Struts File Upload and Save  ... regarding "Struts file upload example". It does not contain any... directory of server. In this tutorial you will learn how to use Struts How To Make Executable Jar File For Java P{roject - Java Beginners How To Make Executable Jar File For Java P{roject Hello Sir I want...]); while (true) { int len = fin.read(b, 0, b.length.../articles/java-to-exe.html struts validations - Struts struts validations hi friends i an getting an error in tomcat while running the application in struts validations the error in server... --------------------------------- Visit for more information. ActionForm ActionForm What is ActionForm JAVA ARTICLES - RMI JAVA ARTICLES I WANT SAMPLE RMI PROGRAM WITH COMPILING AND RUNNING METHOD Hi error - Struts java struts error my jsp page is post the problem... ActionForward execute(ActionMapping am,ActionForm af,HttpServletRequest req...*; import javax.servlet.http.*; public class loginform extends ActionForm{ private Articles el, and Italian il, while indefinite articles originate or are same... Articles  ... such as an encyclopedia. These days people publish articles on the web site. Visitors Mastering Struts - Struts Mastering Struts Sir, how can i master over struts...? 
Until and unless i am guided over some struts Project i dnt think...its possible...:// Problems With Struts - Development process Problems With Struts Respected Sir , While Deploying Struts Application in Tomcat 5.0 ,using Forward Action.This can properly work on NetBean but when i deploye in Tomcat only this will generate struct program - Struts Struct program I am using the weblogic 8.1 as application server. I pasted the struts. jar file in the lib folder. when i execute program.do always gives the unavailable service exception. please help me where 1.x and struts2.0 - Struts . While in Struts 2, an Action class implements an Action interface, along with other interfaces use optional and custom services. Struts 2 provides a base...struts 1.x and struts2.0 what are the differences between struts1.x process of compiling and running java program process of compiling and running java program Explain the process of compiling and running java program. Also Explain type casting STRUTS STRUTS MAIN DIFFERENCES BETWEEN STRUTS 1 AND STRUTS 2 Struts Struts How to retrive data from database by using what is SwitchAction in struts STRUTS STRUTS Request context in struts? SendRedirect () and forward how to configure in struts-config.xml Actions in Struts Actions in Struts Hello Sir, Thanks for the solutions you have sent me. i wanted examples on Struts DispatchAction,Forword Action ,Struts lookupDispatchAction,Struts mappingDispatchAction,Struts DynaActionform.please validations in struts - Struts validations in struts hi friends plz give me the solution its urgent I an getting an error in tomcat while running the application in struts...}. ------------------------------- Visit for more information. Developing Struts Application . This is achieved by the FormBean. It is a Struts-specific class known as 'ActionForm... in this struts-config file, the behaviour of the program can be easily changed.  ...Developing Struts Application   configuration - Struts class,ActionForm,Model in struts framework. What we will write in each.... The business objects update the application state. ActionForm bean represents... in the model.The JSP file reads information from the ActionForm bean using JSP tags... when required. I could not use the Struts API FormFile since Struts validation Struts validation I want to put validation rules on my project.But... the validatorform on actionform,created the validation.xml file for describing all... that violate the rules for struts automatic validation.So how I get the solution Struts - Framework Struts Good day to you Sir/madam, How can i start struts application ? Before that what kind of things necessary to learn and can u tell me clearly sir/madam? Hi Its good struts validation struts validation Sir i am getting stucked how to validate struts using action forms, the field vechicleNo should not be null and it should...:// struts struts Hi what is struts flow of 1.2 version struts? i have struts applicatin then from jsp page how struts application flows Thanks Kalins Naik Please visit the following link: Struts Tutorial Unable to understand Struts - Struts = "success"; /** * This is the action called from the Struts.... * @param form The optional ActionForm bean for this request. * @param..., ActionForm form, HttpServletRequest request, HttpServletResponse Tag: Struts Tag: bean:struts Tag -is used to create a new bean containing one of the standard Struts framework configuration objects. This tag retrieve the value of the specified Struts
http://www.roseindia.net/tutorialhelp/comment/847
CC-MAIN-2015-14
refinedweb
1,600
58.69
Over the years I've written a lot about the evolution I've taken with testing. I've also written about the frustration of testing within the confines of static languages. More briefly, I've discussed testing in dynamic languages, and what benefits developers get from that. I think talking about how far dynamic languages can be taken, from a testing point of view, deserves more attention. None of these are real-life examples... they are more about techniques than actual usage.

The most obvious example is the ability to test class/static methods. I've talked about this a few times already, but briefly:

class Audit
  def self.log(message)
    DB.insert(:message => message, :at => Time.now.utc)
  end
end

....

it "saves the message with the current time" do
  Time.stub!(:now).and_return(Time.utc(2013, 3, 4, 5, 8, 9))
  DB.should_receive(:insert).with(:message => 'over 9000!', :at => Time.now.utc)
  Audit.log('over 9000!')
end

There's no silly dependency injection, and we don't have to add any misdirection to properly handle current times or random values or guids. We can also leverage dynamic typing and do:

it "loads all users" do
  Users.stub!(:all).and_return("u think I'm crazy?")
  get :index
  assigns[:users].should == "u think I'm crazy?"
end

At first glance, this seems a bit silly. Plus, if you are using some type of testing factory, it won't always buy you much. Nevertheless, I can't think of a compelling reason not to do this. As soon as you stub out Users.all, it really doesn't matter what it returns. What matters, from the point of view of this test, is that whatever it returns, is made available to the view.

Lately, I've also been stubbing out internal implementations:
https://www.openmymind.net/2012/4/24/I-Rather-Have-Silly-Tests-Than-Silly-Code/
CC-MAIN-2020-05
refinedweb
516
59.5
Archive::Zip(3)        User Contributed Perl Documentation        Archive::Zip(3)

NAME
Archive::Zip - Provide an interface to ZIP archive files.

SYNOPSIS
# Create a Zip file
use Archive::Zip qw( :ERROR_CODES :CONSTANTS );
my $zip = Archive::Zip->new();

# Add a file from a string with compression
my $string_member = $zip->addString( 'This is a test', 'stringMember.txt' );
$string_member->desiredCompressionMethod( COMPRESSION_DEFLATED );

# Add a file from disk
my $file_member = $zip->addFile( 'xyz.pl', 'AnotherName.pl' );

# Save the Zip file
unless ( $zip->writeToFileNamed('someZip.zip') == AZ_OK ) { die 'write error'; }

DESCRIPTION
The Archive::Zip module allows a Perl program to create, manipulate, read, and write Zip archive files.

File Naming: Names of files are in local format. "File::Spec" and "File::Basename" are used for various file operations. When you're referring to a file on your system, use its file naming conventions. Names of archive members are in Unix format. This applies to every method that refers to an archive member, or provides a name for new archive members. The "extract()" methods that can take one or two names will convert from local to zip names if you call them with a single name.

Archive::Zip Object Model
Overview: creating a new Archive::Zip object actually makes an Archive::Zip::Archive object; an archive holds a collection of members, which are instances of Archive::Zip::Member subclasses.

Inheritance
Exporter
    Archive::Zip
        Archive::Zip::Archive
        Archive::Zip::Member
            Archive::Zip::FileMember
                Archive::Zip::ZipFileMember
                Archive::Zip::NewFileMember

EXPORTS
:CONSTANTS
Exports the following constants:
FA_MSDOS FA_UNIX GPBF_ENCRYPTED_MASK GPBF_DEFLATING_COMPRESSION_MASK GPBF_HAS_DATA_DESCRIPTOR_MASK COMPRESSION_STORED COMPRESSION_DEFLATED COMPRESSION_LEVEL_NONE COMPRESSION_LEVEL_DEFAULT COMPRESSION_LEVEL_FASTEST COMPRESSION_LEVEL_BEST_COMPRESSION IFA_TEXT_FILE_MASK IFA_TEXT_FILE IFA_BINARY_FILE

:MISC_CONSTANTS
Exports the following constants (only necessary for extending the module):

:ERROR_CODES
Explained below. Returned from most methods.
AZ_OK AZ_STREAM_END AZ_ERROR AZ_FORMAT_ERROR AZ_IO_ERROR

ERROR CODES
Many of the methods in Archive::Zip return error codes. These are implemented as inline subroutines, using the "use constant" pragma. They can be imported into your namespace using the ":ERROR_CODES" tag:

use Archive::Zip qw( :ERROR_CODES );
...
unless ( $zip->read( 'myfile.zip' ) == AZ_OK ) {
    die "whoops!";
}

AZ_OK (0)
Everything is fine.

AZ_STREAM_END (1)
The read stream (or central directory) ended normally.

AZ_ERROR (2)
There was some generic kind of error.

AZ_FORMAT_ERROR (3)
There is a format error in a ZIP file being read.

AZ_IO_ERROR (4)
There was an IO error.

Compression
Archive::Zip allows each member of a ZIP file to be compressed (using the Deflate algorithm) or uncompressed. Each member's compression method is one of:

COMPRESSION_STORED
File is stored (no compression)

COMPRESSION_DEFLATED
File is Deflated

Compression Levels:

0 or COMPRESSION_LEVEL_NONE
This is the same as saying
$member->desiredCompressionMethod( COMPRESSION_STORED );

1 .. 9
1 gives the best speed and worst compression, and 9 gives the best compression and worst speed.

COMPRESSION_LEVEL_FASTEST
This is a synonym for level 1.

COMPRESSION_LEVEL_BEST_COMPRESSION
This is a synonym for level 9.

COMPRESSION_LEVEL_DEFAULT
This gives a good compromise between speed and compression, and is currently equivalent to 6 (this is in the zlib code). This is the level that will be used if not specified.

Archive::Zip Methods
The Archive::Zip class (and its invisible subclass Archive::Zip::Archive) implement generic zip file functionality. Creating a new Archive::Zip object actually makes an Archive::Zip::Archive object, but you don't have to worry about this unless you're subclassing.

Constructor

new( [$fileName] )
Make a new, empty zip archive.

Utility Methods

Archive::Zip::computeCRC32( $string [, $crc] )
This is a utility function that uses the Compress::Raw::Zlib CRC routine to compute a CRC-32. You can get the CRC of a string:
my $crc = Archive::Zip::computeCRC32( $string );

Archive::Zip::setChunkSize( $number )
Report or change chunk size used for reading and writing. This can make big differences in dealing with large files. Currently, this defaults to 32K. This also changes the chunk size used for Compress::Raw::Zlib. You must call setChunkSize() before reading or writing. This is not exportable, so you must call it like:
Archive::Zip::setChunkSize( 4096 );
or as a method on a zip (though this is a global setting). Returns old chunk size.

Archive::Zip::chunkSize()
Returns the current chunk size:
my $chunkSize = Archive::Zip::chunkSize();

Archive::Zip::setErrorHandler( \&subroutine )
Change the subroutine called with error strings.
This defaults to \&Carp::carp, but you may want to change it to get the error strings. This is not exportable, so you must call it like:
Archive::Zip::setErrorHandler( \&mySub );

Archive::Zip::tempFile( [$tmpdir] )
Create a uniquely named temporary file, returned open for reading and writing. If $tmpdir is given, the file is created in that directory; the directory will not be created if it doesn't exist (this is a change from prior versions!). Returns file handle and name:

my ($fh, $name) = Archive::Zip::tempFile();
my ($fh, $name) = Archive::Zip::tempFile('myTempDir');
my $fh = Archive::Zip::tempFile(); # if you don't need the name

Zip Archive Accessors

members()
Return a copy of the members array
my @members = $zip->members();

numberOfMembers()
Return the number of members I have

memberNames()
Return a list of the (internal) file names of the zip members

memberNamed( $string )
Return ref to member whose filename equals given filename or undef. $string must be in Zip (Unix) filename format.

membersMatching( $regex )
Return array of members whose filenames match given regular expression in list context. Returns number of matching members in scalar context.
my @textFileMembers = $zip->membersMatching( '.*\.txt' );
# or
my $numberOfTextFiles = $zip->membersMatching( '.*\.txt' );

diskNumber()
Return the disk that I start on. Not used for writing zips, but might be interesting if you read a zip in. This should be 0, as Archive::Zip does not handle multi-volume archives.

diskNumberWithStartOfCentralDirectory()
Return the disk number that holds the beginning of the central directory. Not used for writing zips, but might be interesting if you read a zip in. This should be 0, as Archive::Zip does not handle multi-volume archives.

numberOfCentralDirectoriesOnThisDisk()
Return the number of CD structures in the zipfile last read in. Not used for writing zips, but might be interesting if you read a zip in.

numberOfCentralDirectories()
Return the number of CD structures in the zipfile last read in. Not used for writing zips, but might be interesting if you read a zip in.

centralDirectorySize()
Returns central directory size, as read from an external zip file. Not used for writing zips, but might be interesting if you read a zip in.

centralDirectoryOffsetWRTStartingDiskNumber()
Returns the offset into the zip file where the CD begins. Not used for writing zips, but might be interesting if you read a zip in.

zipfileComment( [$string] )
Get or set the zipfile comment. Returns the old comment.
print $zip->zipfileComment();
$zip->zipfileComment( 'New Comment' );

eocdOffset()
Returns the (unexpected) number of bytes between where the end-of-central-directory record was found and where it was expected to be, as of the last read; normally 0.

fileName()
Returns the name of the file last read from. If nothing has been read yet, returns an empty string; if read from a file handle, returns the handle in string form.

Zip Archive Member Operations
addFileOrDirectory( $name [, $newName ] ). addString( $stringOrStringRef, $name )' ); contents( $memberOrMemberName [, $newContents ] ) Returns the uncompressed data for a particular member, or undef.'); Zip Archive I/O operations A Zip archive can be written to a file or file handle, or read from one. writeToFileNamed( $fileName ) Write a zip archive to named file. Returns "AZ_OK" on success.. writeToFileHandle( $fileHandle [, $seekable] ).. writeCentralDirectory( $fileHandle [, $offset ] ) Writes the central directory structure to the given file handle.()); } } overwriteAs( $newName ) Write the zip to the specified file, as safely as possible. This is done by first writing to a temp file, then renaming the original if it exists, then renaming the temp file, then deleting the renamed original if it exists. Returns AZ_OK if successful. overwrite() Write back to the original zip file. See overwriteAs() above. If the zip was not ever read from a file, this generates an error. read( $fileName ) Read zipfile headers from a zip file, appending new members. Returns "AZ_OK" or error code. my $zipFile = Archive::Zip->new(); my $status = $zipFile->read( '/some/FileName. my $fh = IO::File->new( '/some/FileName.zip', 'r' ); my $zip1 = Archive::Zip->new(); my $status = $zip1->readFromFileHandle( $fh ); my $zip2 = Archive::Zip->new(); $status = $zip2->readFromFileHandle( $fh ); Zip Archive Tree operations These used to be in Archive::Zip::Tree but got moved into Archive::Zip. They enable operation on an entire tree of members or files. A usage example:' ); $zip->addTree( $root, $dest [,$pred] ) -- Add tree of files to a zip ( $root, $dest, $pattern [,$pred] ) . $zip->updateTree( $root, [ $dest, [ $pred [, $mirror]]] ); if all is well. $zip->extractTree() $zip->extractTree( $root ) $zip->extractTree( $root, $dest ) $zip->extractTree( $root, $dest, $volume ) If you don't give any arguments at all, will extract all the files in the zip with their original names.. MEMBER OPERATIONS Member Class Methods Several constructors allow you to construct members without adding them to a zip archive. These work the same as the addFile(), addDirectory(), and addString() zip instance methods described above, but they don't add the new members to a zip. Archive::Zip::Member->newFromString( $stringOrStringRef [, $fileName] ) Construct a new member from the given string. Returns undef on error. my $member = Archive::Zip::Member->newFromString( 'This is a test', 'xyz.txt' ); newFromFile( $fileName ) Construct a new member from the given file. Returns undef on error. my $member = Archive::Zip::Member->newFromFile( 'xyz.txt' ); newDirectoryNamed( $directoryName [, $zipname ] )/' ); Member Simple accessors These methods get (and/or set) member attribute values. versionMadeBy() Gets the field from the member header. fileAttributeFormat( [$format] ) Gets or sets the field from the member header. These are "FA_*" values. versionNeededToExtract() Gets the field from the member header. bitFlag() Gets the general purpose bit field from the member header. This is where the "GPBF_*" bits live. compressionMethod(). desiredCompressionMethod( [$method] ). desiredCompressionLevel( [$method] ). externalFileName() Return the member's external file name, if any, or undef. fileName() Get or set the member's internal filename. Returns the (possibly new) filename. Names will have backslashes converted to forward slashes, and will have multiple consecutive slashes converted to single ones. 
lastModFileDateTime() Return the member's last modification date/time stamp in MS-DOS format. lastModTime() Return the member's last modification date/time stamp, converted to unix localtime format. print "Mod Time: " . scalar( localtime( $member->lastModTime() ) ); setLastModFileDateTimeFromUnix() Set the member's lastModFileDateTime from the given unix time. $member->setLastModFileDateTimeFromUnix( time() ); internalFileAttributes() Return the internal file attributes field from the zip header. This is only set for members read from a zip file. externalFileAttributes() Return member attributes as read from the ZIP file. Note that these are NOT UNIX! unixFileAttributes( [$newAttributes] ). localExtraField( [$newField] ) Gets or sets the extra field that was read from the local header. This is not set for a member from a zip file until after the member has been written out. The extra field must be in the proper format. cdExtraField( [$newField] ) Gets or sets the extra field that was read from the central directory header. The extra field must be in the proper format. extraFields() Return both local and CD extra fields, concatenated. fileComment( [$newComment] ) Get or set the member's file comment. hasDataDescriptor() Get or set the data descriptor flag. If this is set, the local header will not necessarily have the correct data sizes. Instead, a small structure will be stored at the end of the member data with these values. This should be transparent in normal operation. crc32() Return the CRC-32 value for this member. This will not be set for members that were constructed from strings or external files until after the member has been written. crc32String() Return the CRC-32 value for this member as an 8 character printable hex string. This will not be set for members that were constructed from strings or external files until after the member has been written. compressedSize() Return the compressed size for this member. This will not be set for members that were constructed from strings or external files until after the member has been written. uncompressedSize() Return the uncompressed size for this member. isEncrypted() Return true if this member is encrypted. The Archive::Zip module does not currently create or extract encrypted members. isTextFile( [$flag] ). isBinaryFile()ToFileNamed( $fileName ) Extract me to a file with the given name. The file will be created with default modes. Directories will be created as needed. The $fileName argument should be a valid file name on your file system. Returns AZ_OK on success. isDirectory() Returns true if I am a directory. writeLocalHeaderRelativeOffset() Returns the file offset in bytes the last time I was written. wasWritten() Returns true if I was successfully written. Reset at the beginning of a write attempt. Low-level member data reading It is possible to use lower-level routines to access member data streams, rather than the extract* methods and contents(). For instance, here is how to print the uncompressed contents of a member in chunks using these methods:(); readChunk( [$chunkSize] )Data() Rewind data and set up for reading data streams or writing zip files. Can take options for "inflateInit()" or "deflateInit()", but this isn't likely to be necessary. Subclass overrides should call this method. Returns "AZ_OK" on success. endRead() Reset the read variables and free the inflater or deflater. Must be called to close files, etc. Returns AZ_OK on success. readIsDone() Return true if the read has run out of data or errored out. 
contents()ToFileHandle( $fh ) Extract (and uncompress, if necessary) the member's contents to the given file handle. Return AZ_OK on success. Archive::Zip::FileMember methods The Archive::Zip::FileMember class extends Archive::Zip::Member. It is the base class for both ZipFileMember and NewFileMember classes. This class adds an "externalFileName" and an "fh" member to keep track of the external file. externalFileName() Return the member's external filename. fh() Return the member's read file handle. Automatically opens file if necessary. Archive::Zip::ZipFileMember methods The Archive::Zip::ZipFileMember class represents members that have been read from external zip files. diskNumberStart() Returns the disk number that the member's local header resides in. Should be 0. localHeaderRelativeOffset() Returns the offset into the zip file where the member's local header is. dataOffset() Returns the offset from the beginning of the zip file to the member's data. REQUIRED MODULES Archive::Zip requires several other modules: Carp Compress::Raw::Zlib Cwd File::Basename File::Copy File::Find File::Path File::Spec IO::File IO::Seekable Time::Local BUGS AND CAVEATS When not to use Archive::Zip If you are just going to be extracting zips (and/or other archives) you are recommended to look at using Archive::Extract instead, as it is much easier to use and factors out archive-specific functionality. Try to avoid IO::Scalar One of the most common ways to use Archive::Zip is to generate Zip files in-memory. Most people have use IO::Scalar for this purpose.. TO DO * auto-choosing storing vs compression * extra field hooks (see notes.txt) * check for dups on addition/renaming? * Text file extraction (line end translation) * Reading zip files from non-seekable inputs (Perhaps by proxying through IO::String?) * separate unused constants into separate module * cookbook style docs * Handle tainted paths correctly * Work on better compatability with other IO:: modules SUPPORT Bugs should be reported via the CPAN bug tracker <> For other issues contact the maintainer AUTHOR>. COPYRIGHT Some parts copyright 2006 - 2009 Adam Kennedy. Some parts copyright 2005 Steve Peters. Original work copyright 2000 - 2004 Ned Konz. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. SEE ALSO Look at Archive::Zip::MemberRead which is a wrapper that allows one to read Zip archive members as if they were files. Compress::Raw::Zlib, Archive::Tar, Archive::Extract There is a Japanese translation of this document at <> that was done by DEQ <[email protected]> . Thanks! perl v5.16.2 2009-06-30 Archive::Zip(3)[top]
http://www.polarhome.com/service/man/?qf=Archive%3A%3AZip&tf=2&of=MacOSX&sf=3pm
CC-MAIN-2018-39
refinedweb
2,555
58.99
In this lab, we'll practice applying gradient descent. As we know, gradient descent begins with an initial regression line and moves to a "best fit" regression line by changing values of $m$ and $b$ and evaluating the RSS. So far, we have illustrated this technique by changing the values of $b$ and evaluating the RSS. In this lab, we will work through applying our technique by changing the value of $m$ instead. Let's get started.

Once again, we'll take a look at movie budgets and revenues, using budget to predict revenue.

first_show = {'budget': 100, 'revenue': 275}
second_show = {'budget': 200, 'revenue': 300}
third_show = {'budget': 400, 'revenue': 700}

shows = [first_show, second_show, third_show]

We can start with some values for an initial not-so-accurate regression line, $y = .6x + 133.33$.

from linear_equations import build_regression_line

budgets = list(map(lambda show: show['budget'], shows))
revenues = list(map(lambda show: show['revenue'], shows))

def regression_line(x):
    return .6*x + 133.33

Now, using the residual_sum_squares function, we calculate the RSS to measure the accuracy of the regression line to our data. Let's take another look at that function:

def squared_error(x, y, m, b):
    return (y - (m*x + b))**2

def residual_sum_squares(x_values, y_values, m, b):
    data_points = list(zip(x_values, y_values))
    squared_errors = map(lambda data_point: squared_error(data_point[0], data_point[1], m, b), data_points)
    return sum(squared_errors)

from plotly.offline import iplot, init_notebook_mode
from graph import plot, m_b_trace
import plotly.graph_objs as go
init_notebook_mode(connected=True)

from graph import trace_values, plot

data_trace = trace_values(budgets, revenues)
regression_trace = m_b_trace(.6, 133.33, budgets)
plot([data_trace, regression_trace])

Now let's use the residual_sum_squares function to build a cost curve. Keeping the $b$ value fixed at $133.33$, write a function called rss_values.

- rss_values takes our dataset through the x_values and y_values arguments, along with a list of m_values and a fixed b value.
- It returns a dictionary with two keys, m_values and rss_values, with each key pointing to a list of the corresponding values.

def rss_values(x_values, y_values, m_values, b):
    pass

budgets = list(map(lambda show: show['budget'] ,shows))
revenues = list(map(lambda show: show['revenue'] ,shows))
initial_m_values = list(range(8, 19, 1))
scaled_m_values = list(map(lambda initial_m_value: initial_m_value/10,initial_m_values))
b_value = 133.33
rss_values(budgets, revenues, scaled_m_values, b_value)

# {'m_values': [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8],
#  'rss_values': [64693.76669999998,
#   45559.96669999998,
#   30626.166699999987,
#   19892.36669999999,
#   13358.5667,
#   11024.766700000004,
#   12890.96670000001,
#   18957.166700000016,
#   29223.36670000002,
#   43689.566700000025,
#   62355.76670000004]}
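If you get stuck, here is one possible solution, consistent with the expected output above (a sample; yours may look different):

def rss_values(x_values, y_values, m_values, b):
    # pair each m with the RSS of the line y = m*x + b over the data
    return {'m_values': m_values,
            'rss_values': [residual_sum_squares(x_values, y_values, m, b) for m in m_values]}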
Plotly provides for us a table chart, and we can pass the values generated from our rss_values function to create a table.

from plotly.offline import iplot, init_notebook_mode
from graph import plot
import plotly.graph_objs as go
init_notebook_mode(connected=True)

def plot_table(headers, columns):
    trace_cost_chart = go.Table(
        header=dict(values=headers, line = dict(color='#7D7F80'), fill = dict(color='#a1c3d1'), align = ['left'] * 5),
        cells=dict(values=columns, line = dict(color='#7D7F80'), fill = dict(color='#EDFAFF'), align = ['left'] * 5))
    plot([trace_cost_chart])

cost_chart = rss_values(budgets, revenues, scaled_m_values, b_value)
if cost_chart:
    column_values = list(cost_chart.values())
    plot_table(headers = ['M values', 'RSS values'], columns=column_values)

And let's plot this out using a line chart.

from plotly.offline import iplot, init_notebook_mode
init_notebook_mode(connected=True)
from graph import plot, trace_values

initial_m_values = list(range(1, 18, 1))
scaled_m_values = list(map(lambda initial_m_value: initial_m_value/10,initial_m_values))
cost_values = rss_values(budgets, revenues, scaled_m_values, 133.33)
if cost_values:
    rss_trace = trace_values(cost_values['m_values'], cost_values['rss_values'], mode = 'lines')
    plot([rss_trace])

In this section, we'll work up to building a gradient descent function that automatically changes our step size. To get you started, we'll provide a function called slope_at that calculates the slope of the cost curve at a given point on the cost curve. Here it is in action:

from helper import slope_at

slope_at(budgets, revenues, .6, 133.33333333333326)
slope_at(budgets, revenues, 1.6, 133.33333333333326)

So the slope_at function takes in our dataset, and returns the slope of the cost curve at that point. So the numbers -296312.33 and 123687.67 reflect the slopes at the cost curve when m is .6 and 1.6 respectively. As you can see, it seems pretty accurate. When the curve is steeper and downwards at $m = 0.6$, the slope is around -290,000. And at $m = 1.6$ with our cost curve pointing upwards yet flatter, our slope is around 120,000.

Now that we are familiar with our slope_at function and how it calculates the slope of our cost curve at a given point, we can begin to use that function with our gradient descent procedure. Remember that gradient descent works by starting at a regression line with values m, and b, which corresponds to a point on our cost curve. Then we alter our m or b value (here, the m value) by looking to the slope of the cost curve at that point. Then we look to the slope of the cost curve at the new m value to indicate the size and direction of the next step.

So now let's write a function called updated_m. The function will tell us the step size and direction to move along our cost curve. The updated_m function takes as arguments an initial value of $m$, a learning rate, and the slope of the cost curve at that value of $m$. Its return value is the next value of m that it calculates.

from error import residual_sum_squares

def updated_m(m, learning_rate, cost_curve_slope):
    pass

This is what our function returns.

current_slope = slope_at(budgets, revenues, 1.7, 133.33333333333326)['slope']
updated_m(1.7, .000001, current_slope)
# 1.5343123333335096

current_slope = slope_at(budgets, revenues, 1.534, 133.33333333333326)['slope']
updated_m(1.534, .000001, current_slope)
# 1.43803233333338

current_slope = slope_at(budgets, revenues, 1.438, 133.33333333333326)['slope']
updated_m(1.438, .000001, current_slope)
# 1.3823523333332086

Take a careful look at how we use the updated_m function.
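For reference, here is an implementation consistent with those numbers: the new m moves against the cost curve's slope, scaled by the learning rate (a sample solution, not the only one):

def updated_m(m, learning_rate, cost_curve_slope):
    # step against the slope, with the learning rate controlling the step size
    return m - learning_rate * cost_curve_slope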
By using our updated value of $m$ we are quickly converging towards an optimal value of $m$.

Now let's write another function called gradient_descent. The inputs of the function are x_values, y_values, steps, the b we are holding constant, the learning_rate, and the current_m that we are looking at. The steps argument represents the number of steps the function will take before it stops. We can get a sense of the return value in the cell below. It is a list of dictionaries, with each dictionary having a key of the current m value, the slope of the cost curve at that m value, and the rss at that m value.

def gradient_descent(x_values, y_values, steps, b, learning_rate, current_m):
    # one possible implementation: record m, rss, and the cost curve slope at
    # each step, then move m against the slope
    descent_steps = []
    for i in range(steps):
        slope = slope_at(x_values, y_values, current_m, b)['slope']
        rss = residual_sum_squares(x_values, y_values, current_m, b)
        descent_steps.append({'m': current_m, 'rss': rss, 'slope': slope})
        current_m = updated_m(current_m, learning_rate, slope)
    return descent_steps

descent_steps = gradient_descent(budgets, revenues, 12, 133.33, learning_rate = .000001, current_m = 0)
descent_steps

# [{'m': 0, 'rss': 368964.16669999994, 'slope': -548316.9999998063},
# {'m': 0.5483169999998062, 'rss': 131437.9413767516, 'slope': -318023.86000024853},
# {'m': 0.8663408600000547, 'rss': 51531.31420747324, 'slope': -184453.83880040026},
# {'m': 1.050794698800455, 'rss': 24649.097944855268, 'slope': -106983.22650372575},
# {'m': 1.1577779253041809, 'rss': 15604.976802103287, 'slope': -62050.271372208954},
# {'m': 1.2198281966763898, 'rss': 12561.987166284125, 'slope': -35989.15739588847},
# {'m': 1.2558173540722781, 'rss': 11538.008028425651, 'slope': -20873.711289696075},
# {'m': 1.2766910653619743, 'rss': 11193.357340315255, 'slope': -12106.752547970245},
# {'m': 1.2887978179099446, 'rss': 11077.310067278091, 'slope': -7021.916477824561},
# {'m': 1.295819734387769, 'rss': 11038.209831325046, 'slope': -4072.711557128059},
# {'m': 1.299892445944897, 'rss': 11025.020590634533, 'slope': -2362.172703124088},
# {'m': 1.302254618648021, 'rss': 11020.562895703, 'slope': -1370.0601677373925}]

if descent_steps:
    m_values = list(map(lambda step: step['m'],descent_steps))
    rss_result_values = list(map(lambda step: step['rss'], descent_steps))
    text_values = list(map(lambda step: 'cost curve slope: ' + str(step['slope']), descent_steps))
    gradient_trace = trace_values(m_values, rss_result_values, text=text_values)
    plot([gradient_trace])

Taking a look at a plot of our trace, you can get a nice visualization of how our gradient descent function works. It starts far away with $m = 0$, and the step size is relatively large, as is the slope of the cost curve. As the $m$ value updates such that it approaches a minimum of the RSS, the slope of the cost curve and the size of each step both decrease. Remember that each of these steps indicates a change in our regression line's slope value towards a "fit" that more accurately matches our dataset. Let's plot these various regression lines below, starting with our slope at 0 and moving from there.
We saw how gradient descent allows our function to improve to a regression line that better matches our data. We see how to change our regression line, by looking at the Residual Sum of Squares related to current regression line. We update our regression line by looking at the rate of change of our RSS as we adjust our regression line in the right direction -- that is, the slope of our cost curve. The larger the magnitude of our rate of change (or slope of our cost curve) the larger our step size. This way, we take larger steps the further away we are from our minimizing our RSS, and take smaller steps as we converge towards our minimum RSS.
https://learn.co/lessons/gradient-descent-step-sizes-lab
CC-MAIN-2019-43
refinedweb
1,580
59.3
IRC log of xproc on 2007-04-05
Timestamps are in UTC.
14:42:58 [RRSAgent] RRSAgent has joined #xproc
14:42:58 [RRSAgent] logging to
14:51:53 [MoZ] Zakim, this will be xproc
14:51:53 [Zakim] ok, MoZ; I see XML_PMWG()11:00AM scheduled to start in 9 minutes
14:52:37 [avernet] avernet has joined #xproc
14:55:59 [PGrosso] PGrosso has joined #xproc
14:57:32 [Norm] Norm has joined #xproc
14:58:58 [richard] richard has joined #xproc
14:59:45 [Zakim] XML_PMWG()11:00AM has now started
14:59:52 [Zakim] +??P22
14:59:59 [Zakim] +[ArborText]
14:59:59 [richard] zakim, ? is me
15:00:00 [Zakim] +richard; got it
15:01:47 [Zakim] +Norm
15:01:59 [Norm] zakim, who's on the phone?
15:01:59 [Zakim] On the phone I see richard, PGrosso, Norm
15:02:05 [Norm] Meeting: XML Processing Model WG
15:02:05 [Norm] Date: 5 Apr 2007
15:02:05 [Norm] Agenda:
15:02:05 [Norm] Meeting number: 62, T-minus 30 weeks
15:02:05 [Norm] Chair: Norm
15:02:07 [Norm] Scribe: Norm
15:02:09 [Norm] ScribeNick: Norm
15:03:07 [Zakim] +Alessandro_Vernet
15:03:25 [Zakim] +Alex_Milows
15:03:39 [Norm] rrsagent, pointer?
15:03:39 [RRSAgent] See
15:04:50 [Norm] Regrets: Henry, Rui
15:05:05 [Norm] zakim, who's on the phone?
15:05:05 [Zakim] On the phone I see richard, PGrosso, Norm, Alessandro_Vernet, Alex_Milows
15:05:07 [alexmilowski] alexmilowski has joined #xproc
15:05:53 [Norm] Thanks, MoZ
15:05:58 [Norm] Regrets: Henry, Rui, Mohamed
15:06:05 [Norm] zakim, who's on the phone?
15:06:05 [Zakim] On the phone I see richard, PGrosso, Norm, Alessandro_Vernet, Alex_Milows
15:08:22 [Norm] Present: Richard, Paul, Norm, Alessandro, Alex
15:08:31 [Norm] Topic: Accept this agenda?
15:08:31 [Norm] ->
15:08:38 [Norm] Accepted.
15:08:42 [Norm] Topic: Accept minutes from the previous meeting?
15:08:42 [Norm] ->
15:08:47 [Norm] Accepted.
15:08:52 [Norm] Topic: Next meeting: telcon 12 Apr 2007
15:09:00 [Norm] No regrets given.
15:09:08 [Norm] Topic: 5 Apr 2007 WD
15:09:08 [Norm] Will be published today.
15:09:37 [Norm] s/Will be/Was/
15:09:48 [Norm] Topic: Caching
15:10:13 [Norm] Norm: So caching is the problem of referring by URI in one component to an output of a previous component.
15:10:49 [Norm] ->
15:12:05 [Norm] Norm: There were follow-on messages, Henry proposed a caching scheme and Mohamed proposed p:map
15:13:01 [Norm] Richard: I would like to be able to implement XProc in a system where components were implemented as completely independent external programs.
15:13:01 [Norm] ...As a consequence, it'd be impossible to do any sort of caching.
15:13:33 [Norm] Norm: That strategy may have trouble with some pipelines.
15:13:52 [Norm] Richard: For protocols in general, it may even be the case that different requests at different times will get different results.
15:14:13 [Norm] Norm: The alternative that got us back into this discussion was the Xinclude-with-sequence component.
15:14:47 [Norm] Norm: I don't think that's a practical answer.
15:17:22 [Norm] Some discussion ranges
15:17:30 [Norm] s/ranges/of the possibilities
15:18:24 [Norm] Richard: What is the circumstance that causes an output to appear at a URI if there's no serializing component.
15:20:18 [Norm] Norm: I was thinking that your implementation of XSLT with extensions would write them to disk
15:20:58 [Norm] Richard: Are you supposed to use http: URIs for local caching?
15:21:03 [Norm] Alex: Browsers cache all the time.
15:21:36 [Norm] Alex: What happens to the base URI of a document when it goes through XInclude.
15:22:02 [Norm] Richard: I agree that the base URI can be anything you like, but I've never before encountered a situation where other processes would see that.
15:22:19 [Norm] Alex: What if we had a way to hook up a sequence to arbitrary steps to say that this is the set of known documents.
15:22:26 [Norm] Richard: You shouldn't use http to refer to the things in there.
15:24:33 [Norm] Norm outlines the include/import case which motivates caching.
15:24:58 [Norm] Richard: I don't have any problem with URIs that are private to a pipeline, but I don't think they should use http: URIs.
15:25:15 [Norm] ...It seems to me that's an abuse of http:
15:25:38 [alexmilowski] That battle has been lost as http uris are used for namespaces all the time...
15:26:03 [Norm] Richard: The use of http: URIs for namespace doesn't bear on this because they don't get dereferenced. They're just strings.
15:26:22 [Norm] ...Here you're proposing a mechanism that does a GET but gets a different result.
15:28:05 [Norm] Alessandro: I agree with Richard. But if you use another component that might help.
15:28:23 [Norm] Richard: I think file: URIs are the way that you'd do this. You'd put it in a temporary file and refer to that.
15:28:45 [Norm] ...Either you have to reuse filenames or make up filenames, so file: URIs aren't perfect.
15:29:35 [Norm] Richard: I can see that there might be objections from others on the basis that this isn't how the web is supposed to work.
15:30:50 [Norm] Alex: Does this mean that if you changed the base URI of the document, you could avoid the problem?
15:31:22 [Norm] Alex: You fabricate an identifier, id:1234, then it's no longer retrievable therefore it's cacheable.
15:32:11 [Norm] Alex: Since it's not retrievable then it's not a problem.
15:32:33 [Norm] Richard: That doesn't help me because I can't use those URIs in external, unmodifiable components.
15:33:04 [Norm] Alex: For caching to work, then we need a way for people to order things.
15:33:51 [Norm] Norm: There was strong resistance on the list to any sort of dependency support and I don't see any consensus being achieved on the caching issue.
15:34:03 [Norm] Alex: I'm not a fan of caching.
15:34:42 [Norm] ...I don't want to be in a situation where arbitrary things can pull documents from a cache so that I have to store everything.
15:35:07 [Norm] Richard: I don't think the caching solution is a good solution anyway. What we have here is the temporary file problem. Having fixed names doesn't work.
15:35:52 [Norm] ...Suppose there's a subpipeline that works by constructing a partial stylesheet or something. Now if you use that module twice, you'll have a conflict.
15:36:19 [Norm] Richard: Programming libraries usually do this with dynamic names, but that's inconvenient in cases like XInclude.
15:36:55 [Norm] Norm: I don't think we're making progress towards an answer. Without a good proposal on the table, we should probably move on.
15:37:21 [Norm] Norm: Is there anyone that wants to continue discussing the caching issue?
15:37:51 [Norm] Richard: If we can't come to a conclusion about it, we ought to produce a list of use cases that seem to require it. That way we have something to test future solutions against.
15:39:16 [Norm] Norm: I think the XInclude/XSLT import/Schema include use case is the only one I can think of. Richard's observation of the problems of multiple inclusions of the same subpipeline is an interesting wrinkle.
15:39:37 [richard] moz - I suppose a scoped catalog mechanism might work for multiple instances
15:41:07 [Norm] Norm: Given a component that can produce a URI for a local file and another component that can replace attribute values, you can probably work around this situation.
15:41:18 [Norm] Alex: You may also be able to work around it with the p:insert component. Possibly.
15:41:38 [Norm] Norm: I propose that caching is dead.
15:41:52 [Norm] Topic: Dependency management.
15:43:24 [Norm] Norm: I propose dependency management is dead. We can abdicate responsibility for side-effects in V1.
15:43:36 [Norm] Alex: You can also use p:group and a funky parameter to force the order.
15:44:15 [Norm] Topic: Review of the step library
15:44:40 [Norm] Alex: We went through the list last time.
15:46:00 [Norm] Alex summarizes his current work queue from last time.
15:46:19 [Norm] Alex: there's a question about non-XML syntaxes for RELAX
15:46:31 [MoZ] yes
15:46:34 [Norm] Norm: I'd like to find some way to start a discussion of the component input and output vocabularies.
15:46:43 [MoZ] and for XQuery also
15:47:26 [Norm] Norm: We have specialized input/output vocabularies for store, XSL-FO, and httpRequest.
15:48:12 [Norm] Alex: XQuery also has one.
15:49:15 [Norm] Alex: The httpRequest component is most odd. Most other components consume things described in other specifications.
15:50:22 [Norm] Norm: I think it's going to be useful, so I don't want to remove it.
15:50:28 [Norm] Alex: It is underspecified.
15:50:47 [Norm] Norm: Can you please start to fully specify it.
15:50:50 [Norm] Alex: Yes.
15:51:46 [Norm] Alex: I should also put the XQuery input into our own namespace.
15:53:54 [Norm] Alex: Can we make the micro-operations optional?
15:54:06 [Norm] Norm: That's not interoperable, I'd rather make them all required.
15:54:26 [Norm] No one objects, so that's what Alex will do
15:54:28 [Norm] s/do/do./
15:54:34 [Norm] Topic: Any other business?
15:54:36 [Norm] None.
15:54:53 [Norm] Adjourned.
15:54:53 [Zakim] -Alex_Milows
15:54:55 [Zakim] -PGrosso
15:54:56 [Zakim] -Alessandro_Vernet
15:54:56 [Zakim] -Norm
15:55:03 [Zakim] -richard
15:55:04 [Zakim] XML_PMWG()11:00AM has ended
15:55:05 [Zakim] Attendees were richard, PGrosso, Norm, Alessandro_Vernet, Alex_Milows
15:55:16 [MoZ] Zakim, you forget me !
15:55:16 [Zakim] I don't understand 'you forget me !', MoZ
15:55:53 [PGrosso] PGrosso has left #xproc
15:56:10 [Norm] rrsagent, set logs world-visible
15:56:14 [Norm] rrsagent, draft minutes
15:56:14 [RRSAgent] I have made the request to generate Norm
15:56:28 [alexmilowski] alexmilowski has left #xproc
16:45:27 [Norm] Norm has joined #xproc
18:19:42 [Zakim] Zakim has left #xproc
18:26:26 [Norm] rrsagent, bye
18:26:26 [RRSAgent] I see no action items
http://www.w3.org/2007/04/05-xproc-irc
Yesterday Adobe released its Flex 3 and naturally I went there to see this new cool technology. I downloaded it and got an Eclipse with a GUI-Builder to create SWF-Applications. The nice thing is that Flex uses an XML-Format to store the GUI. I guess you know what's coming now, don't you?

I sat down and looked closer to find out that it would be fairly easy to write a Transformer to convert this into my custom EXSWT-Format used to build SWT/JFace-GUIs. After about 2 hours I now have a minimal XSLT-Stylesheet transforming MXML-Elements (currently mx:label, mx:combo, mx:button) to EXSWT and feeding it into my EXSWT-Lib. The only problem is the different heights and widths needed by SWT and SWF, but those are minor problems.

Here's the original Flex-Project View one can export as an SWF-Application:

And here the one displayed using SWT-Widgets:

This is only a simple example but it shows how easy it is to define highlevel dialects on top of EXSWT, which is a more or less lowlevel XML-Language. You can as always get the sources from my SVN-Repository and I have once more prepared a .psf-File for all Subclipse users.

That's super cool, Tom. I forwarded this to the Flex/Flex Builder teams. – Mike Morearty, Flex Builder

Tom, that's rad! I'd love to use that to design UI for Flex Builder... if you're ever in San Francisco, I think this deserves a beer :) – David Zuckerman, Flex Builder

RAP, GWT and now Flex 3. How will I choose between them to migrate my RCP app? oO And by the way, I like the idea of developing only one GUI, but I have some doubts about complex GUIs. Have to test it :)

Mike and David, thanks for the kind words, I much appreciate hearing that you like this. Currently the project status is PRE-Alpha but things are evolving, as you may have seen when reading my other blog entries, though I'm working on it only in my spare time :) I'll blog about my future success/failure creating a flexible XML-Specification and Lib to build GUIs on top of SWT/JFace/Eclipse. And David, I love beer; here over in Austria (Europe) we grow up with it :)

I have downloaded the sources from your SVN-Repository in a stupid manner (file by file), put everything in folders (at.bestsolution.exswt.transformer.mxml) and from the root folder imported the project into my Eclipse workspace. I plan to "understand" how it works. But to begin with, there is no class XSLTTransformer. What has to be done? Nikos.

EXSWT is not meant to provide a platform to develop your application once and deploy it anywhere. The interesting thing here is that it should be easily possible to define GUIs with a highlevel XML dialect (e.g. MXML, XUL, ...) and create transformers which convert it to EXSWT. I think RAP and the others are all cool technologies and of course they have their place in the future. I only ask myself why I should be satisfied with JavaScript if I could have Java, OSGi, ... and define my GUIs in some XML-Dialect, making redeployment as easy as possible.

If you ask me, over the next few years RCP is going to be the platform most commercial applications are built on, because of all those browser peculiarities and the extensible and really big framework Eclipse-RCP provides to developers. All those AJAX-Libs are far away from something that complete, and if you can't do things on the client the user experience is not satisfying (something I have problems with when it comes to RAP, but that's a personal problem and I don't want to start a flame war).

boris, you should:
1. Install subclipse ()
2. Download the
3. Copy the file to some project in your workspace
4.
Right-Click and select "Import Project Set". This should automatically check out the following needed projects:
– at.bestsolution.exswt
– at.bestsolution.exswt.examples
– at.bestsolution.exswt.groovy
– at.bestsolution.exswt.transformer.mxml

Having done this, the last thing is to open at.bestsolution.exswt.examples/plugin.xml and click "Launch an Eclipse Application", which brings up a new IDE-Window, and there you can find in "Window > Show View > Sample Category" the two views demonstrating GUIs defined with EXSWT directly and with MXML.

I have downloaded to a project in my workspace:
– at.bestsolution.exswt
– at.bestsolution.exswt.examples
– at.bestsolution.exswt.groovy
– at.bestsolution.exswt.transformer.mxml

But I have had an error message: The constructor Status(int, String, String, Throwable) is undefined, in class:

    package at.bestsolution.exswt;

    public class Activator extends AbstractUIPlugin {
        .........
        public IStatus createStatus(int errorCode, Map dynamicValues, Throwable e) {
            return new Status(IStatus.ERROR, PLUGIN_ID,
                Activator.getDefault().getErrorHandler().getErrorMessage(errorCode, dynamicValues), e);
        }
        ..........
    }

Maybe you have to run 3.3 to make this work?
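For readers curious what such an MXML-to-EXSWT transform could look like: the following is a minimal sketch, not from the original post. The EXSWT output vocabulary is a placeholder invented here (the real EXSWT element names are not shown in the post), and the MXML namespace URI is assumed to be the Flex 2/3 one. Only the label case is shown, with an identity rule passing everything else through:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:mx="http://www.adobe.com/2006/mxml"
        xmlns:exswt="urn:exswt-placeholder">

      <!-- Map an MXML label onto a (made-up) EXSWT label element -->
      <xsl:template match="mx:Label">
        <exswt:label text="{@text}"/>
      </xsl:template>

      <!-- Identity rule: copy everything not handled above -->
      <xsl:template match="@*|node()">
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:template>

    </xsl:stylesheet>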
https://tomsondev.bestsolution.at/2007/06/12/im-perflexed/
AttributeError: 'NoneType' object has no attribute 'shape'

Hi all! I need your help regarding the following code. My goal is to read and show the video with a resolution modification.

    import cv2

    cap = cv2.VideoCapture("C:/Users/user/Desktop/Foot_Detection/ball_tracking_example.mp4")

    def rescale_frame(frame, percent=30):
        width = int(frame.shape[1] * percent / 100)
        height = int(frame.shape[0] * percent / 100)
        dim = (width, height)
        return cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)

    if (cap.isOpened() == False):
        print("Error opening video stream or file")

    while (cap.isOpened()):
        # Capture frame-by-frame
        ret, frame = cap.read()
        frame = rescale_frame(frame, percent=30)
        if ret == True:
            cv2.imshow('Frame', frame)
            if cv2.waitKey(25) & 0xFF == ord('q'):
                break
        else:
            break

    cap.release()
    cv2.destroyAllWindows()

After executing the above code, the video displayed on my screen till the end. However, I have got the following error:

    Traceback (most recent call last):
      File "C:/Users/user/Desktop/Foot_Detection/FootDetection.py", line 24, in <module>
        frame = rescale_frame(frame, percent=30)
      File "C:/Users/user/Desktop/Foot_Detection/FootDetection.py", line 10, in rescale_frame
        width = int(frame.shape[1] * percent/ 100)
    AttributeError: 'NoneType' object has no attribute 'shape'

I tried to install the codec for mp4 and to check that the video path is correct. Could you please assist me in this matter? Thank you in advance.

checking the ret value is mandatory, since it will tell you when the movie's over.

Nope, berak. What does the error tell you? AttributeError: 'NoneType' object has no attribute 'shape'

again, the last frame will be empty (None) (and this is one of the main differences between capturing from a webcam or a video file, please go and check)

Nope. The frame is not empty. I already tested it.. There are no differences between webcam and file.

@Abdu, can you take another look at the code in your question, and try to repair the broken formatting? (whitespace matters in python ...)
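The thread stops without a fixed version being posted. A minimal corrected loop, following berak's hint to test ret before touching frame (same file and scale factor as in the question), could look like this:

    import cv2

    cap = cv2.VideoCapture("C:/Users/user/Desktop/Foot_Detection/ball_tracking_example.mp4")

    def rescale_frame(frame, percent=30):
        width = int(frame.shape[1] * percent / 100)
        height = int(frame.shape[0] * percent / 100)
        return cv2.resize(frame, (width, height), interpolation=cv2.INTER_AREA)

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            # read failed: end of the video file (frame is None here)
            break
        frame = rescale_frame(frame, percent=30)
        cv2.imshow('Frame', frame)
        if cv2.waitKey(25) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

The only change is ordering: the ret check now happens before rescale_frame() is called, so the final None frame never reaches frame.shape.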
http://answers.opencv.org/question/209433/attributeerror-nonetype-object-has-no-attribute-shape/
Hello all! I am having trouble with the following code:

    #include <iostream>
    using namespace std;

    void changeArgument (int x)
    {
        x = x + 5;
    }

    int main()
    {
        int y = 4;
        changeArgument( y ); // y will be unharmed
        cout << y;           // still prints 4
    }

I started playing around with the code to get an idea of how it worked only to find the c++ rug underneath my feet slip away.

I omitted the variable declaration (y), passing an integer 4 as the argument, but the compiler is complaining that there were "too few arguments to function void changeArgument(int)." I also changed "cout << y;" to "cout << changeArguement ( 4 );" Same error....

Does anyone know why this is happening? I was under the impression that I could pass arguments of type integer to this function for it to work. Why is it required that in main() one needs to declare an additional variable and assign it a value? As the x variable is already declared locally inside the function, I thought that one would only need to pass an argument of its type for it to return a value. Despite the fact that changeArgument()'s return type is void, running the above code does return a value.

Any help would be immensely appreciated. Thank you so much for your time! spark*
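No answer survives in this excerpt. A short sketch of the two ideas in play, assuming the "too few arguments" error came from calling changeArgument() with empty parentheses (the post doesn't show the exact failing call):

    #include <iostream>

    // Pass by value: the function receives a copy; the caller's variable is untouched.
    void changeArgument(int x) { x = x + 5; }

    // Pass by reference: the function modifies the caller's variable directly.
    void changeByRef(int& x) { x = x + 5; }

    int main()
    {
        changeArgument(4);       // fine: the literal 4 initializes the parameter copy
        // changeArgument();     // error: too few arguments to function
        // std::cout << changeArgument(4); // error: void returns nothing to print

        int y = 4;
        changeByRef(y);
        std::cout << y << '\n';  // prints 9
    }

A literal argument is perfectly legal; the extra variable is only needed if you want to observe a change, and then the parameter must be a reference, since a by-value parameter is a local copy that dies when the function returns.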
http://cboard.cprogramming.com/cplusplus-programming/146986-noob-question-about-functions-printable-thread.html
Paragraph Style Creation in Adobe CS5 Illustrator

Paragraph styles include attributes that are applied to an entire paragraph. What constitutes a paragraph is all text that falls before a hard return (you create a hard return when you press Enter in Windows or Return on the Mac), so this could be one line of text for a headline or ten lines in a body text paragraph.

To create a paragraph style, open a document that contains text or open a new document and add text to it; then follow these steps:

1. Find a paragraph of text that has the same text attributes throughout it and put your cursor anywhere in that paragraph. You don't even have to select the whole paragraph!

2. Select the Create New Style button to create a new paragraph style; give your new style a name. The Create New Style button is the dog-eared icon at the bottom of the Paragraph panel. The keyboard shortcut for opening the create new style function is Alt-click (Windows) or Option-click (Mac). Your new style now appears in the Paragraph Styles panel list of styles.

3. Create a paragraph of text elsewhere in your document. Make the new paragraph's attributes different from the text in Step 2.

4. Put your cursor anywhere in the new paragraph and Alt-click (Windows) or Option-click (Mac) your named style in the Paragraph Styles panel. The attributes from the style are applied to the entire paragraph.
http://www.dummies.com/how-to/content/paragraph-style-creation-in-adobe-cs5-illustrator.navId-612184.html
A process specifies whether it desires read-only or read/write access. The process may also specify the following:

On Alpha systems, you should specify the retadr argument to determine the exact boundaries of the memory that was mapped by the call. If your application specifies the relpag argument, you must specify the retadr argument. In this case, it is not an optional argument.

Cooperating processes can each issue a SYS$CRMPSC system service call. Example 12.3.7.12 shows one process (ORION) creating a global section and a second process (CYGNUS) mapping the section.

[...] with demand-zero section file pages where no initialization [is required].

12.3.7.10 Mapping into a Defined Address Range (Alpha Only)

On Alpha systems, SYS$CRMPSC and SYS$MGBLSC interpret some of the arguments differently than on VAX systems if you are mapping a section into a defined area of virtual address space. The differences are as follows:

On Alpha systems, you can map a portion of a section file by specifying the address at which to start the mapping as an offset from the beginning of the section file. You specify this offset by supplying a value to the relpag argument of SYS$CRMPSC. The value of the relpag argument specifies the pagelet number relative to the beginning of the file at which the mapping should begin. To preserve compatibility, SYS$CRMPSC interprets the value of the relpag argument in 512-byte units on both VAX systems and Alpha systems. However, because the CPU-specific page size on the Alpha system is larger than 512 bytes, the address specified by the offset in the relpag argument probably does not fall on a CPU-specific page boundary on an Alpha system. SYS$CRMPSC can map virtual memory in CPU-specific page increments only. Therefore, on Alpha systems, the mapping of the section file will start at the beginning of the CPU-specific page that contains the offset address, not at the address specified by the offset.

If you map from an offset into a section file, you must still provide an inadr argument that abides by the requirements presented in Section 12.3.7.10 when mapping into a defined address range.

12.3.7.12 [Example]

In this example for Alpha systems, process ORION creates a global section and process CYGNUS maps to that section:

    /* Process ORION */
    #include <rms.h>
    #include <rmsdef.h>
    [the remainder of this C example was lost in extraction]

After data in a process private section is modified, the process can release (or unmap) the section. The modified pages are then written back into the disk file defined as a section. After data in a global section is modified, the process or processes can release (or unmap) the section. The modified pages are still maintained in memory until the section is deleted. The data is then written back into the disk file defined as a section. Applications relying on modified data to be in the file at a specific point in time must use the SYS$UPDSEC(W) system service to force the write action. See Section 12.3.7.15.

When the section is deleted, the revision number of the file is incremented, and the version number of the file remains unchanged. A full directory listing indicates the revision number of the file and the date and time that the file was last updated.

12.3.7.14 [heading lost]

After a process [deletes a] private read/write section, all modified pages are written back into the section file. For global sections, the system's modified page writer starts writing back modified pages when the section is deleted and all mapping processes have deleted their associated virtual address space.
Applications relying on modified data to be in the file at a specific point in time must use the SYS$UPDSEC(W) system service to force the write action. See Section 12.3.7.15.

After a process private [section is mapped, ...]. For global sections, the channel is only used to identify the file to the system. The system then assigns a different channel to use for future paging I/O to the file. The user-assigned channel can be deassigned immediately after the global section is created.

12.3.7.15 Writing Back Sections

Because read/write sections are not normally updated on disk until either the physical pages they occupy are paged out, or until the section is deleted, a process should ensure that all modified pages are successfully written back into the section file at regular intervals. The Update Section File on Disk (SYS$UPDSEC) system service writes the modified pages in a section into the disk file. The SYS$UPDSEC system service is described in the OpenVMS System Services Reference Manual.

12.3.7.16 Memory-Resident Global Sections

Memory-resident global sections allow a database server to keep larger amounts of currently used data cached in physical memory. The database server then accesses the data directly from physical memory without performing I/O read operations from the database files on disk. With faster access to the data in physical memory, runtime performance increases dramatically.

Memory-resident global sections are non-file-backed global sections. Pages within a memory-resident global section are not backed by the pagefile or by any other file on disk. Thus, no pagefile quota is charged to any process or to the system. When a process maps to a memory-resident global section and references the pages, working set list entries are not created for the pages. No working set quota is charged to the process.

For further information about memory-resident global sections, see Chapter 16.

12.3.7.17 Image Sections

Global sections can contain shareable code. The operating system uses global sections to implement shareable code, as follows: [list lost in extraction]. For details on [...].

12.3.7.18 [... (Alpha Only)]

The pagcnt and relpag arguments are in units of CPU-specific pages for page frame sections. On Alpha systems, a partial section is one where not all of the defined section, whether private or global, is entirely backed up by disk blocks. In other words, a partial section is where a disk file does not map completely onto an Alpha system page.

For example, suppose a file for which you wish to create a section consists of 17 virtual blocks on disk. To map this section, you would need two whole Alpha 8 KB pages, the smallest size Alpha page available. The first Alpha page would map the first 16 blocks of the section, and the second Alpha page would map the 17th block of the section. (A block on disk is 512 bytes, the same as on OpenVMS VAX.) This results in 15/16ths of the second Alpha page not being backed up by the section file. This is called a partial section because the second Alpha page of the section is only partially backed up. When the partial page is faulted in, a disk read is issued for only as many blocks as actually back up that page, which in this case is 1. When that page is written back, only the one block is actually written. If the upper portion of the second Alpha page is used, it is done so at some risk, because only the first block of that page is saved on a write-back operation. This upper portion of the second Alpha page is not really useful space to the programmer, because it is discarded during page faulting.
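Before the worked example in the next section (whose code only partially survived), here is a hedged C sketch of the call pattern these sections describe. It has not been compiled against a VMS system here; the argument order follows the SYS$CRMPSC documentation, and the section name and channel are placeholders:

    #include <descrip.h>
    #include <secdef.h>
    #include <starlet.h>

    /* Map a writable global section backed by an already-opened file channel.
       'chan' would come from a $CREATE/$OPEN with the user-file-open bit set. */
    unsigned int map_section(unsigned short chan)
    {
        static $DESCRIPTOR(gsdnam, "MY_GLOBAL_SECTION");  /* placeholder name */
        unsigned long inadr[2] = {0, 0};  /* with SEC$M_EXPREG the exact range is chosen by the system */
        unsigned long retadr[2];          /* actual mapped boundaries come back here */

        return sys$crmpsc(inadr, retadr, 0,
                          SEC$M_GBL | SEC$M_WRT | SEC$M_EXPREG,  /* global, writable */
                          &gsdnam, 0,
                          0,      /* relpag: map from the start of the file */
                          chan,
                          0,      /* pagcnt = 0: map the entire file */
                          0, 0, 0);
    }

Note how retadr is passed unconditionally, matching the advice above that on Alpha systems you should always retrieve the exact boundaries of what was mapped.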
12.3.8 Example of Using Memory Management System Services (Alpha Only)

In the following example, two programs are communicating through a global section. The first program creates and maps a global section (by using SYS$CRMPSC) and then writes a device name to the section. This program also defines [...]; see the OpenVMS Linker Utility Manual for an explanation of how to use the solitary attribute. The address range requested for a section must end on a page boundary, so SYS$GETSYI is used to obtain the system page size.

Before executing the first program, you need to write a user-open routine that sets the user-open bit (FAB$V_UFO) of the FAB options longword (FAB$L_FOP). Because the Fortran OPEN statement specifies that the file is new, you should use $CREATE to open it rather than $OPEN. No $CONNECT should be issued. The user-open routine reads the channel number that the file is opened on from the status longword (FAB$L_STV) and returns [it ...].

The C example for process ORION begins as follows; only its opening lines and these Fortran fragments survived extraction:

    /* Process ORION */
    #include <rms.h>
    #include <rmsdef.h>

    ! Get the system page size
    BUFF_LEN = 4
    ITEM_CODE = %LOC(SYI$_PAGE_SIZE)
    BUFF_ADDR = %LOC(PAGE_SIZE)
    LENGTH = 0
    TERMINATOR = 0
    STATUS = SYS$GETSYI(,,,BUFF_LEN,,,)

    ! Get location of data
    PASS_ADDR(1) = %LOC(DEVICE)
    PASS_ADDR(2) = PASS_ADDR(1) + PAGE_SIZE - 1

    DEVICE = '$DISK [...]
http://h71000.www7.hp.com/doc/731final/5841/5841pro_040.html
Opened 19 months ago
Closed 19 months ago
Last modified 19 months ago

#21838 closed New feature (wontfix)

What about adding a .reload() method to the QuerySet API?

Description

Sometimes it is needed to reload data from the database backend. It could be necessary, for example, to benefit from some value type-casting applied by the database backend module. In various Django projects I developed, I find useful a simple method QuerySet.reload() like this:

    def reload(self):
        """Reload QuerySet from backend."""
        pks_list = self.values_list('pk')
        return self.model.objects.filter(pk__in=pks_list)

Change History (2)

comment:1 Changed 19 months ago by aaugustin
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
- Resolution set to wontfix
- Status changed from new to closed

This will be very inefficient for large querysets. I'm -1 on this implementation. As far as I know, the easiest way to reload a QuerySet is simply qs.filter(). It will clone the queryset and wipe the result cache. You may get different rows if the database has changed since the QuerySet was first evaluated. That's the behavior I would expect.

comment:2 Changed 19 months ago by fero@…
Thanks I checked it out, I didn't know about this feature.
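aaugustin's suggestion, spelled out as a hedged sketch (Article is a placeholder model, not from the ticket):

    # Evaluate once; the rows are cached on the QuerySet.
    qs = Article.objects.filter(published=True)
    first_pass = list(qs)

    # ... rows change in the database in the meantime ...

    # .filter() with no arguments clones the QuerySet and drops the
    # result cache, so the next evaluation hits the database again.
    fresh = qs.filter()
    second_pass = list(fresh)

This is why the ticket was closed: the clone-and-re-evaluate idiom already exists, and it avoids the extra pk__in round trip of the proposed reload() method.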
https://code.djangoproject.com/ticket/21838
std::byte is an enum class that was first introduced in C++17 and can be initialized using {} with any value from 0 to 255. Make sure to include the cstddef header file.

    #include <cstddef>

    // Initializing a 'byte' variable:
    std::byte x{2};

Only the following bitwise operations can be performed on a byte variable:

Left shift (<<): Shifts all digits of a binary number to the left by the specified number of steps. Shifting a binary number one step to the left equates to multiplying it by 2.

    std::byte x{3};
    // Multiply 'x' by 4 and store the result back in 'x':
    x <<= 2;

Right shift (>>): Shifts all digits of a binary number to the right by the specified number of steps. Shifting a binary number one step to the right equates to dividing it by 2.

    std::byte x{32};
    // Divide 'x' by 8 and store the result back in 'x':
    x >>= 3;

The AND (&), OR (|), and XOR (^) bitwise operators can also be applied to two byte variables.

A byte can be converted to a signed or unsigned integer using the to_integer<type> function. This function is safer than a normal cast because it prevents a byte from being converted into a float.

    #include <iostream>
    #include <cstddef>
    using namespace std;

    int main() {
        byte x{20};
        // Multiply 'x' by 2 => x = 20 * 2 = 40:
        x <<= 1;
        // Divide 'x' by 4 => x = 40 / 4 = 10:
        x >>= 2;
        cout << "x = " << to_integer<int>(x) << endl;
        return 0;
    }
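The AND/OR/XOR sentence above has no example of its own; a small self-contained one (not from the original answer) could look like this:

    #include <cstddef>
    #include <iostream>
    using namespace std;

    int main() {
        byte a{0b1100};  // 12
        byte b{0b1010};  // 10

        byte and_result = a & b;  // 0b1000 = 8
        byte or_result  = a | b;  // 0b1110 = 14
        byte xor_result = a ^ b;  // 0b0110 = 6

        cout << to_integer<int>(and_result) << ' '
             << to_integer<int>(or_result)  << ' '
             << to_integer<int>(xor_result) << endl;  // prints: 8 14 6
        return 0;
    }

As with the shifts, the operands and results all stay of type std::byte, so to_integer is still needed to print them.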
https://www.educative.io/answers/how-to-use-stdbyte-in-cpp
Concepts Used

Depth First Search, Disjoint Set

Difficulty Level

Easy

Problem Statement:

Given an undirected graph, count and print the number of connected components in it.

See original problem statement here

Solution Approach:

Introduction:

By definition, "Connected components in a graph is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph. A vertex with no incident edges is itself a component. A graph that is itself connected has exactly one component, consisting of the whole graph."

This problem can be solved in many ways, like Breadth First Search, Depth First Search or Disjoint Set. The idea is to keep count of the number of connected components. Below we are going to discuss two of the above mentioned methods to solve this problem.

Method 1 (Using DFS):

In this method we are going to use Depth First Search or DFS to find the total number of connected components. In dfs(), we are going to traverse all the adjacent vertices which are not yet visited and mark them as visited so we won't traverse them again. We have to keep count of the different dfs() calls made for each unvisited vertex, and increment count by 1 for each call. We can keep track of whether a node is visited or not using a boolean visited[] array of size n, which is initially set false (why?).

Now the question arises: why are we counting the different calls made to dfs()? The answer lies in the fact that each time we call our dfs() function with some vertex v, it marks all vertices which are connected to v as visited, which means we are visiting the vertices which are directly or indirectly connected to v, and by the definition of a Connected Component above, this is considered as one component. So each call made to dfs() with some unvisited vertex v gives us a different connected component.

Method 2 (Using Disjoint Set):

Disjoint Set Union or DSU data structure is also sometimes called Union-Find Set. This data structure is used to keep track of the different disjoint sets. It basically performs two operations:

- Union: It takes two vertices and joins them into a single set (if they are not already there).
- Find: Determines in which subset the element is.

We will use a parent[] array to keep track of the parents of each vertex. The ith element will store the root/parent of the ith vertex. Now we will perform union() for every edge (u,v) and update the parent[] array. Then iterate over all vertices; for each vertex v, we will find the root of v using find() and keep track of all distinct roots. Every time we encounter a distinct root, increment the counter by 1.

Each disjoint set stores the vertices which are connected (directly or indirectly) to each other and has no relation/connection with any other vertex (as it is a non-overlapping set), which means each set itself represents a connected component of the graph, and since each set contains exactly one vertex as its root, the count of distinct roots over all sets gives us the total number of connected components in the graph.

Algorithms:

dfs():
- For each call, for some vertex v (dfs(v)), we will mark the vertex v as visited (visited[v] = true).
- Iterate over all the adjacent vertices of v, and for every adjacent vertex a, do the following:
  - if a is not visited, i.e. visited[a] = false,
  - recursively call dfs(a).

union():
- For two vertices u & v, find the root of both vertices using find() (a = find(u) & b = find(v)).
- If the roots of u & v are not the same, compare the levels of a & b as follows:
  - if (level[a] > level[b]), update parent[b] = a.
  - else, update parent[a] = b, and if the levels are equal, update level[b] = level[b] + 1.

find():
- If (parent[v] == v), return the vertex v (which is the root).
- Else, update parent[v] by recursively looking up the tree for the root (find(parent[v])).

Complexity Analysis:

The time complexity of Depth First Search is represented in the form of O(V + E), where V is the number of nodes and E is the number of edges. The space complexity of the algorithm is O(V), as it requires one array (visited[]) of size V.

Talking about the Union-Find data structure: since we have used path compression & union by rank (refer to the Disjoint Set article for a more detailed explanation), we reach nearly constant time O(1) for each query. It turns out that the final amortized time complexity is O(α(V)), where α(V) is the inverse Ackermann function, which grows very slowly (α(V) < 5). In our case a single call might take O(log n) in the worst case, but if we do m such calls back to back we will end up with an average time of O(α(n)).

Solutions (the find/union bodies below were garbled in extraction and are reconstructed from the algorithm description above):

    #include <stdio.h>
    #include <stdlib.h>

    long int findd(long int parent[], int i)
    {
        if (parent[i] == i)
            return i;
        /* path compression: point i directly at its root */
        return parent[i] = findd(parent, parent[i]);
    }

    void unionn(long int parent[], long int level[], int u, int v)
    {
        long int a = findd(parent, u);
        long int b = findd(parent, v);
        if (a != b) {
            if (level[a] > level[b])
                parent[b] = a;
            else {
                parent[a] = b;
                if (level[a] == level[b])
                    level[b]++;
            }
        }
    }

    int main()
    {
        int t;
        scanf("%d", &t);
        while (t--) {
            int n, e;
            scanf("%d %d", &n, &e);
            long int parent[n], level[n], hash[n];
            for (long int i = 0; i < n; i++) {
                parent[i] = i;
                level[i] = 0;
                hash[i] = 0;
            }
            while (e--) {
                int u, v;
                scanf("%d %d", &u, &v);
                unionn(parent, level, u, v);
            }
            for (int i = 0; i < n; i++)
                hash[findd(parent, i)]++;
            int count = 0;
            for (int i = 0; i < n; i++) {
                if (hash[i] > 0)
                    count++;
            }
            printf("%d\n", count);
        }
        return 0;
    }

    #include <bits/stdc++.h>
    using namespace std;

    int findd(long int parent[], int i)
    {
        if (parent[i] == i)
            return i;
        // path compression
        return parent[i] = findd(parent, parent[i]);
    }

    void unionn(long int parent[], long int level[], int u, int v)
    {
        int a = findd(parent, u);
        int b = findd(parent, v);
        if (a != b) {
            if (level[a] > level[b])
                parent[b] = a;
            else {
                parent[a] = b;
                if (level[a] == level[b])
                    level[b]++;
            }
        }
    }

    int main()
    {
        int t;
        cin >> t;
        while (t--) {
            int n, e;
            cin >> n >> e;
            long int parent[n], level[n];
            for (long int i = 0; i < n; i++) {
                parent[i] = i;
                level[i] = 0;
            }
            while (e--) {
                int u, v;
                cin >> u >> v;
                unionn(parent, level, u, v);
            }
            set<long int> s;
            for (int i = 0; i < n; i++)
                s.insert(findd(parent, i));
            cout << s.size() << endl;
        }
        return 0;
    }

    import java.util.*;

    class Graph {
        int V;
        LinkedList<Integer>[] adjListArray;
        int c = 0;

        // constructor (body reconstructed: it was lost in extraction)
        Graph(int V) {
            this.V = V;
            adjListArray = new LinkedList[V];
            for (int i = 0; i < V; i++)
                adjListArray[i] = new LinkedList<Integer>();
        }

        // Adds an edge to an undirected graph
        void addEdge(int src, int dest) {
            // Add an edge from src to dest.
            adjListArray[src].add(dest);
            adjListArray[dest].add(src);
        }

        void DFSUtil(int v, boolean[] visited) {
            // Mark the current node as visited
            visited[v] = true;
            // Recur for all the vertices adjacent to this vertex
            for (int x : adjListArray[v]) {
                if (!visited[x])
                    DFSUtil(x, visited);
            }
        }

        int connectedComponents() {
            // Mark all the vertices as not visited
            boolean[] visited = new boolean[V];
            for (int v = 0; v < V; ++v) {
                if (!visited[v]) {
                    c++;
                    DFSUtil(v, visited);
                }
            }
            return c;
        }

        public static void main(String[] args) {
            Scanner sc = new Scanner(System.in);
            int t = sc.nextInt();
            while (t-- > 0) {
                int n = sc.nextInt();
                int e = sc.nextInt();
                Graph g = new Graph(n);
                while (e-- > 0) {
                    int u = sc.nextInt();
                    int v = sc.nextInt();
                    g.addEdge(u, v);
                }
                System.out.println(g.connectedComponents());
            }
        }
    }
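For completeness: Method 1 only appears above in Java. An equivalent standalone C++ sketch of the same DFS counting idea (not from the article) would be:

    #include <iostream>
    #include <vector>
    using namespace std;

    void dfs(int v, const vector<vector<int>>& adj, vector<bool>& visited)
    {
        visited[v] = true;
        for (int next : adj[v])
            if (!visited[next])
                dfs(next, adj, visited);
    }

    int main()
    {
        int n, e;
        cin >> n >> e;
        vector<vector<int>> adj(n);
        for (int i = 0; i < e; i++) {
            int u, v;
            cin >> u >> v;
            adj[u].push_back(v);   // undirected: add both directions
            adj[v].push_back(u);
        }
        vector<bool> visited(n, false);
        int components = 0;
        for (int v = 0; v < n; v++) {
            if (!visited[v]) {     // each new DFS root starts a new component
                components++;
                dfs(v, adj, visited);
            }
        }
        cout << components << endl;
        return 0;
    }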
https://www.prepbytes.com/blog/graphs-interview-questions/count-components/
Objective

The core purpose of this article is to explain how to attach and debug a sequential workflow (using the object model) on a document library of a SharePoint site, with Visual Studio 2008.

In this article, I will discuss:
- How to create a SharePoint Site
- How to create a Document Library inside that
- How to add custom properties to a Document Library

Last but not least, I will discuss how to add a Sequential Workflow to a document library using Visual Studio 2008.

Step 1: Create SharePoint site

Follow the steps given below to create a SharePoint site.

Step 1a: From Site Actions select the Create option.
Step 1b: From the Web Pages tab select Sites and Work Spaces.
Step 1c: Give the name and title of the site, and leave the other settings as default. The name I am giving is C Sharp Document Center and the URL is. So the site will look like below.

Step 2: Create Document Library

Step 2a: Click on Documents from the left panel.
Step 2b: Then click on Create.
Step 2c: From the Libraries tab select Document Library.
Step 2d: Give a name and description, and leave the other default settings. The name I am giving here is My Document Library. Then click on Create.
Step 2e: Now in the Documents tab you will be able to see the newly created document library.

Step 3: Add Properties to the Document Library

Click on My Document Library and then Settings. Then select Create Column. I am adding three columns here:
- Document Status
- Assignee
- Review Comments

Document Status column:
- Give the type as Choice (menu to choose from)
- Give the column name Document Status
- Give any description
- Give the choices as: Review Needed, Review Completed, Changes Requested
- Leave the other fields as default and click OK to create the column.

Assignee column:
Add this column exactly the way explained above. Only give the type of information as Single Line of Text. Leave the other fields with default values.

Review Comments column:
Add this column exactly the way explained above. Only give the type of information as Multiple Lines of Text. Leave the other fields with default values.

Step 4: Enable Document to be edited without Checkout

Go to My Document Library, then Document Library Settings. Then select Versioning settings from the General Settings tab. Then scroll down, select the No option for Check Out, and click OK.

Step 5: Create a SharePoint Sequential Workflow project in Visual Studio 2008

- Start Visual Studio
- Open a new project
- Select the Office node
- Select the 2007 node
- Select SharePoint 2007 workflow from the template pane
- Give any name. I am giving the name "CSharpWorkflow" here.

"What local site do you want to use for debugging?" Extra care is needed here. Give the URL of the site you created in the above steps. For the library, select My Document Library. We created this library in Step 2. Leave the default settings and click on Finish to create the workflow. A sequential workflow like below will get created.

Step 5: Create a Workflow schedule

- From the designer select onWorkflowActivated1, right click, then Properties.
- In the Invoke property type onWorkflowActivated and press Enter.
- A code window will open with the event handler for the Invoke property.
- Again go to the designer. Open the toolbox and from the Windows Workflow version 3.0 tab select the While activity.
- Drag and drop the While activity under onWorkflowActivities. After dragging and dropping, the designer will look like below.
- Go to the Properties of the While activity and in the Condition property choose Code Condition.
- Expand the Condition property and in the child Condition tab type isWorkflowPending. And press Enter.
A code window will open and a method called isWorkflowPending will get added.

- Open the designer. Open the toolbox and select the SharePoint Workflow tab.
- From this, drag OnWorkflowItemChanged inside the While activity. After dragging, the designer will look like below.

Select onWorkflowItemChanged1 and go to Properties. Set the properties as below.
Note: Please type onWorkflowItemChanged in the Invoked property.

Step 6: Handle Activity Events

- Define a global variable:

    Boolean workflowPending = true;

- Create a private method to check the status of the document:

    private void CheckStatus()
    {
        if ((string)workflowProperties.Item["Document Status"] == "Review Completed")
            workflowPending = false;
    }

- Call the CheckStatus method inside the onWorkflowActivated and onWorkflowItemChanged events.

Workflow1.cs:

    using Microsoft.Office.Workflow.Utility;

    namespace CSharpWorkFlow
    {
        public sealed partial class Workflow1 : SequentialWorkflowActivity
        {
            Boolean workflowPending = true;

            public Workflow1()
            {
                InitializeComponent();
            }

            public Guid workflowId = default(System.Guid);
            public SPWorkflowActivationProperties workflowProperties =
                new SPWorkflowActivationProperties();

            private void onWorkflowActivated(object sender, ExternalDataEventArgs e)
            {
                CheckStatus();
            }

            private void isWorkflowPending(object sender, ConditionalEventArgs e)
            {
                e.Result = workflowPending;
            }

            private void onWorkflowItemChanged(object sender, ExternalDataEventArgs e)
            {
                CheckStatus();
            }

            private void CheckStatus()
            {
                if ((string)workflowProperties.Item["Document Status"] == "Review Completed")
                    workflowPending = false;
            }
        }
    }

Step 7: Testing the Workflow

- Press F5 to run with debugging.
- The site whose URL you gave in Step 5 will open. In my case this is the site I created in Step 1.
- If you remember, I gave the library the name My Document Library. That library is open.
- Create a new document inside this library. Just click on New and then New Document.
- Press OK on the message displayed.
- If you look, all the properties I added are attached to the document as columns: Document Status, Assignee, Review Comments. Give some values and save the document at the default location.
  Note: In the Document Status property, select Review Needed.
- Close the document. Now you will be able to see that the workflow status is In Progress.
- Click on the drop-down of the document and select Edit Properties.
- Now change Document Status to Review Completed and press OK.
- Refresh the site. Since the user has changed the status to Review Completed, the workflow status is now also Completed.

Conclusion: In this article, I discussed:
- How to create a SharePoint Site
- How to create a Document Library inside that
- How to add custom properties to a Document Library

Last but not least, I discussed how to add a Sequential Workflow to a document library using Visual Studio 2008.

Thanks for reading. Happy coding.

4 thoughts on "Adding Sequential Workflow to a document library of a SharePoint site using Visual Studio 2008."

Thank u. good article

Thank u Dhananjay Kumar, good article. Keep posting new articles.

awesome article. Good to start with the Sequential workFlows

Thanks, for writing on workflow . hope will continue.
https://debugmode.net/2009/12/05/adding-sequential-workflow-to-a-document-library-of-a-sharepoint-site-using-visual-studio-2008/
Bjorn Pettersen wrote:

>> From: JW [mailto:jkpangtang at yahoo.com]
>>
>> I am trying to embed python within a c program, but I can't get
>> PyImport_ImportModule() command to load my module.
>> Please let me know what I'm doing wrong.
> [..]
>
> I know of at least four people who've given up on that route, and
> instead done:
>
> PyObject* res = PyRun_String("import module\n", Py_file_input, namesp, namesp);
>
> res will be NULL on error, a reference to Py_None otherwise. You can
> grab a PyObject* to the module from the namespace dicts you passed in.
>
> If anyone knows how to do it with purely API calls, I'm all ears <wink>.

PyImport_ImportModule bypasses import hooks. PyImport_Import doesn't, and thus it's what you should normally use unless you're specifically _trying_ to bypass whatever hooks are set. Is that your problem...?

Alex
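Alex's pure-API route, spelled out as a minimal hedged sketch (written here with the modern PyUnicode spelling; the 2003-era equivalent would have used PyString_FromString):

    #include <Python.h>

    /* Import a module the way the 'import' statement does,
       honoring any import hooks that are installed. */
    PyObject *import_module(const char *name)
    {
        PyObject *py_name = PyUnicode_FromString(name);
        if (py_name == NULL)
            return NULL;

        PyObject *module = PyImport_Import(py_name);  /* respects hooks */
        Py_DECREF(py_name);
        return module;  /* NULL on failure, with an exception set */
    }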
https://mail.python.org/pipermail/python-list/2003-March/203925.html
Aug 08, 2012 09:38 AM|haraldsh|LINK

I've created a database and added it to my project using Entity Framework. Since I live in Norway, I've added the globalization tag along with the cultureinfo for 'nb-no', so my decimals and dates should be correct. What I'm experiencing is that while the date is shown correctly, the date cannot be saved unchanged. The datetime format I'm using is {0:d} or dd.MM.yyyy as it corresponds in nb-no. However, I can only store dates as dd/MM/yyyy, but it'll be displayed as dd.MM.yyyy ...

I've tried lots of things:

1. Adding a metaclass

    public class PCStatusMeta
    {
        [DataType(DataType.Time)]
        public TimeSpan OrderedDate { get; set; }
        [DataType(DataType.Time)]
        public TimeSpan ApprovedDate { get; set; }
        [DataType(DataType.Time)]
        public TimeSpan ObsoleteDate { get; set; }
    }

I thought the bug described in had something to do with it. So I changed my datetime fields to time(7) in my database and changed the meta class as well. Now I'm getting a "The value '14.07.2012' is not valid for OrderedDate.". The date is unchanged before save ... Besides, using time does not really make any sense.

2. Tried adding a custom validator

    [DisplayFormat(DataFormatString = "{0:d}", ApplyFormatInEditMode = true)]
    [DateOnly]
    public DateTime OrderedDate { get; set; }

as described in. Here I would get a "the field OrderedDate must have a date" validation error.

3. Currently I've gone back to using 2., but trying to see if I can avoid using the [DateOnly] since it didn't work.

Any clues on what I'm doing wrong here?

Aug 08, 2012 10:58 AM|bcanonica|LINK

This might not be exactly what you want, but I usually use the jQuery Datepicker and it doesn't even allow the user to enter any text but a valid date.

Aug 10, 2012 05:43 AM|haraldsh|LINK

Are there any examples of how you use jquery datepicker in MVC4? I can find some for MVC3 and older, but not for MVC4. I have three datetime fields which are retrieved from Entity Framework. I'd imagine something like:

    <script type="text/javascript">
        $(document).ready(function() {
            $(".calendar").datepicker();
        });
    </script>
    <div class="editor-field">
        @Html.EditorFor(model => model.OrderedDate, new {@class="calendar"})
        @Html.ValidationMessageFor(model => model.OrderedDate)
    </div>

would work. However, I'm getting a ".datepicker is not a function" error message from the DOM. I've got jquery-1.7.2 along with jquery-ui.1.8.2 installed in my solution. There is a datepicker function in jquery-ui.1.8.2, but I can't get it to work. Do I need to add jquery.datepicker from NuGet (which makes me remove all jquery related libraries before I can actually add jquery datepicker)?

After reading this thread, I reorganized the _layout.cshtml file; the jquery scripts weren't loaded in the appropriate order. But datepicker still doesn't load. Arr, frustrating. What is a damn shame with jquery datepicker at the moment is that NuGet has a problem installing it to the project (the datepicker library wants to install an older jquery library but it can't). Yes I know, I can add datepicker manually. I've had a bit of a struggle to get datepicker to work, but I guess it's more of a learning issue than a library issue. I shall continue to fiddle with datepicker since it might be the sensible way of solving the matter.

Still, what I'm trying to figure out is why the validation does not accept 14.09.2012, but must have 14/09/2012, even when the cultureinfo causes the output in other places to be 14.09.2012 ....
Btw, I'm not solving the thread before I've got something working :P

Aug 10, 2012 06:44 AM|haraldsh|LINK

Short story: This does not work in the sense of initiating the jquery libraries; you have to do it manually:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8" />
        <meta name="viewport" content="width=device-width" />
        <title>@ViewBag.Title</title>
        @Styles.Render("~/Content/themes/base/css", "~/Content/css")
        @Scripts.Render("~/bundles/modernizr")
        @Scripts.Render("~/bundles/jquery")
        @Scripts.Render("~/bundles/jqueryui")
    </head>
    <body>
        @RenderBody()
        @RenderSection("scripts", required: false)
    </body>
    </html>

This however works; create a new empty partial view where you add the following:

    <script src="../../../Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="../../../Scripts/jquery-ui-1.8.22.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function() {
            $("#OrderedDate").datepicker($.datepicker.regional["nb-no"]);
        });
    </script>

I'm passing an argument along with datepicker to solve my date problem. At the moment it seems like the jquery libraries are not initiated properly. I'd love a more dynamic solution than what I've come up with, since I'm more than likely to break the code when I update the libraries.

jQuery datepicker does not help with my original problem, which is the . or / problem. That problem persists. When I've picked a date, it shows dd/MM/yyyy; when I come back to re-edit the date it is shown as dd.MM.yyyy, which the validator does not like.

Edit: I'm continuously struggling with this problem. While I'm working on the problem I shall try to describe what I'm trying to achieve:

I want to use culture 'nb-NO' to define how a date should be input, which is dd.MM.yyyy. Normally you'd just add the globalization tag under <system.web> in the web.config with the culture you want to use and that should be it (or?).

I have a model based on Entity Framework, where one class has three DateTime? properties:

    public partial class PC
    {
        public Nullable<System.DateTime> OrderedDate { get; set; }
        public Nullable<System.DateTime> ApprovedDate { get; set; }
        public Nullable<System.DateTime> ObsoleteDate { get; set; }
    }

These properties are extended with a metaclass:

    public class PCStatusMeta
    {
        [DataType(DataType.Date)]
        public DateTime? OrderedDate { get; set; }
        [DataType(DataType.Date)]
        public DateTime? ApprovedDate { get; set; }
        [DataType(DataType.Date)]
        public DateTime? ObsoleteDate { get; set; }
    }

Which is added to the class:

    [MetadataType(typeof (PCStatusMeta))]
    public partial class PCStatus
    {
    }

Perhaps I should use the short {0:d} instead of the long specification .. anyway, the validation of the datetime fields should now use dd.MM.yyyy (at least that's what I think at this stage). However, as soon as I've added a date, the validation error kicks in, either after selection of a date or when trying to save the changes. The validation doesn't like dd.MM.yyyy; it wants dd/MM/yyyy or MM/dd/yyyy (worst case). The error message is telling me 'The field <x> must be a date'. A quick search on the great google tells me this could be a jquery validation issue, since it's not configured(?) to check the locale.

Aug 10, 2012 10:22 AM|haraldsh|LINK

I hope people can forgive me for ranting on this forum .. (especially the moderators). Anyway, after days of debugging I figured the problem out. I sat and debugged the code and found out that the jquery.validation library didn't like my dates (yes, it's the one that creates the 'The field <x> is not a date' messages ..).
Shortly after that, I found a thread which showed how to solve the issue:

I combined the solution and created a partial view which both enables the jQuery datepicker and alters the blasted jquery validate to accept my locale:

    <script src="../../../Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="../../../Scripts/jquery-ui-1.8.22.js" type="text/javascript"></script>
    <script src="../../../Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="../../../Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(document).ready(function () {
            $("#OrderedDate").datepicker({ dateFormat: 'dd.mm.yy' });
            $("#ApprovedDate").datepicker({ dateFormat: 'dd.mm.yy' });
            $("#ObsoleteDate").datepicker({ dateFormat: 'dd.mm.yy' });

            jQuery.validator.addMethod(
                'date',
                function (value, element, params) {
                    if (this.optional(element)) { return true; };
                    var result = false;
                    try {
                        $.datepicker.parseDate('dd.mm.yy', value);
                        result = true;
                    } catch (err) {
                        result = false;
                    }
                    return result;
                },
                ''
            );
        });
    </script>

The entity class and meta class are the same as before.

Aug 10, 2012 11:17 AM|Bieters|LINK

RenderSection doesn't work in partial views. Only in regular views. That's why you must include the script tag in your partial. Better use

    <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>

Aug 31, 2012 02:03 PM|erothvt|LINK

You actually do not need a RenderSection; in your partial view you need to wrap your script tag in a @section scripts like so:

    @section scripts {
        <script>
        ....
        </script>
    }

This will insert the script in the correct location in the _layout.cshtml page so it is below the jquery imports.
http://forums.asp.net/t/1831712.aspx
Intro to APIs

APIs are wonderful tools, and learning why you use them, and how to create them, is incredibly valuable. There are thousands of APIs that are exposed to the public for use. From GitHub, to Stripe or Google Calendar, APIs allow you to access and utilize other systems. This tutorial is going to teach you how to use, and make, your own API so that you can take advantage of the power that REST provides.

What are APIs?

API stands for Application Programming Interface. But that doesn't exactly help anyone understand what it is. An API is a communication protocol between software that allows for the transfer of data between systems. The best example I can give for a use case for an API is a simple request to a database:

Let's say you have a database on a server. Let's say you have a table of users. It has usernames, emails, phone numbers and password hashes stored in it. Now let's say another application wants to access that information. But maybe you don't want to provide them with the raw password hash. Let's say you also only want them to get that data if they have the username and password for the user. In this scenario you could create an API that allows for a request to be made for that information, but only returns what you want, given what you want. If a user makes a request for information without a password, you can just reject or ignore the request.

    {
      "error": "Invalid Credentials"
    }

But if provided correctly, you can return the data in whatever form you wish, and omit whatever fields you wish to omit.

    {
      "username": "matted",
      "phone": 5555555555
    }

What is RESTful?

From codecademy:.

REST is often confused for HTTP. They are not the same, but they do work well together. Most web APIs, specifically the ones I'll be teaching you today, are in fact HTTP. When working with a RESTful HTTP API, there are 4 basic methods that you should learn.

- GET — retrieve a specific resource (by id) or a collection of resources
- POST — update a specific resource (by id)
- PUT — create a new resource
- DELETE — remove a specific resource by id

Some people swap POST and PUT. There is a large discussion on Stack Overflow that talks about the reasons to use each.

Paths

The first thing we should go over is how to structure an API. RESTful APIs are generally accessed through simple web routes. Think of something like google.com/ and google.com/search. You're already likely familiar with the fact that google.com/ is the route to get the main page.

Root

/ is an example of the "root" path of the API. It's what's loaded when you go to either google.com or google.com/.

Routes

/search is the route that exists in the path google.com/search. When you write routes, there is no limit to the path: /this/is/a/long/path/which/is/valid. You should, however, try to keep your routes short and simple. Routes should be easily understood, and generally memorable, so that people are able to type them in, or understand what information will be provided simply from the URL.

Parameters

Parameters are a huge part of APIs. In order for a lot of them to be useful, there needs to be an ability to pass data. You can do this by using parameters. You may have seen parameters in use and never even realized it. Think back to the google.com/search example. Actually go to Google and make a search. If you google 'api' you'll get a route like this:

So where are the parameters? Well, everything after ? is going to define parameters, and every subsequent & will determine another one.
In this example we have the parameters:

- ei: 87CEWsH-L8qV_Qau8ILABw
- q: api
- gs_l: psy-ab.3..35i39k1l2j0i131k1j0j0i20i264k1j0i131k1j0i20i264k1j0i131k1j0l2.3075.3336.0.4076.3.3.0.0.0.0.81.215.3.3.0….0…1c.1.64.psy-ab..1.3.215…0i131i20i264k1.0._czjqWn7yUE

So this is only the case for GET requests. For POST, PUT and DELETE, parameters are passed in as the payload of the request, outside of the URL route. The best way to understand it is that the URL bar of a browser makes GET requests, and other forms of requests must be made through a different medium.

Status Codes

Understanding status codes is imperative to being able to understand what to do with the data returned by a RESTful API. What a status code does is tell you whether or not a request was successful, and it describes the result before you even read the data returned. Status codes are described in a series of specs, as there are many that are predefined, but the basics are this:

- 1xx: Informational responses.
- 2xx: Success.
- 3xx: Redirection.
- 4xx: Client errors.
- 5xx: Server errors.

If you want to know the more common status codes, you can read one of the specs here.

Make your Own

To follow along, clone the example repo and check out the language/framework of your choice.

    git clone
    git checkout [branch]

Options are:
- Python with Flask: python_flask
- Java with Spring: java_spring
- PHP with Slim: php_slim
- Node with Express: node_express

Tools

You may at many points want to test your API. A good tool for that would be Postman. It allows you to make GET, POST, PUT, and DELETE requests easily and read the response.

Getting started

Let's start with simply making a root route that returns "Hello World!" in the body of the response.

Python/Flask
- First run pip install -r requirements.txt
- Make sure you're using Python 3; you may need to use pip3 instead of pip on some systems.
- Edit __init__.py and add the following route, then run app.py to test.

    @app.route("/", methods=["GET"])
    def root():
        return "Hello World!"

Java/Spring
- I recommend using an IDE for all Java development
- I also recommend using IntelliJ
- Add build.gradle and pom.xml to the gradle/maven portions of your IDE
- Make sure to mark src as the sources directory
- Run Application.java to run/test your application

    @RequestMapping("/")
    public String root() {
        return "Hello World!";
    }

PHP/Slim
- First run composer install
- If you don't have composer, install it
- Edit routes.php and add the following route, then run php -S localhost:3000 -t index.php to test.

    $app->get('/', function (Request $request, Response $response) {
        return $response->write("Hello World!");
    });

Node/Express
- First run npm install
- If you don't have node, install it
- Edit app.js and add the following route, then run node app.js to test.

    app.get('/', (req, res) => res.send('Hello World!'));

Multiple Request Types

Not every request in an API is GET; in most cases, an API also allows you to make requests that change the underlying data in the system. In order to do this, you're going to want to take advantage of POST, PUT and DELETE. Doing so in each framework is fairly easy to define.

Python/Flask, Java/Spring, PHP/Slim, Node/Express — [the embedded per-framework snippets for this section were not preserved; see the hedged Flask sketch just below]

Using Status Codes

HTTP requests are made up of two parts: the payload, and the status code. The status code tells whether a request was successful or failed. It also allows you to handle the results of requests based on the code that it provides. Status codes are generally pretty consistent, so if you're confused about what code to return, just reference the spec.
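Filling the gap from the Multiple Request Types section above: a minimal Flask sketch, not from the original repo. The route and in-memory store are placeholders, and the verb semantics follow the article's convention (PUT creates, POST updates):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    users = {}  # placeholder in-memory store, standing in for a real database

    @app.route("/users/<name>", methods=["GET"])
    def get_user(name):
        if name not in users:
            return jsonify({"error": "Not Found"}), 404
        return jsonify(users[name]), 200

    @app.route("/users/<name>", methods=["PUT"])
    def create_user(name):
        # PUT creates a new resource
        users[name] = request.get_json(silent=True) or {}
        return jsonify(users[name]), 201

    @app.route("/users/<name>", methods=["POST"])
    def update_user(name):
        # POST updates an existing resource
        if name not in users:
            return jsonify({"error": "Not Found"}), 404
        users[name].update(request.get_json(silent=True) or {})
        return jsonify(users[name]), 200

    @app.route("/users/<name>", methods=["DELETE"])
    def delete_user(name):
        users.pop(name, None)
        return "", 204

The other three frameworks follow the same shape: declare the route with the verb, read the payload, and return a body plus a status code.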
Python/Flask

    # Return a 205
    @app.route("/success", methods=["GET"])
    def return_success():
        return "This will return a 205 Status Code", 205

    # Return a 404
    @app.route("/fail", methods=["GET"])
    def return_fail():
        return "This will return a 404 Status Code", 404

Java/Spring

    // Return a 205
    @RequestMapping(value = "/success", method = RequestMethod.GET)
    public ResponseEntity returnSuccess() {
        return ResponseEntity.status(205).body("This will return a 205 Status Code");
    }

    // Return a 404
    @RequestMapping(value = "/fail", method = RequestMethod.GET)
    public ResponseEntity returnFail() {
        return ResponseEntity.status(404).body("This will return a 404 Status Code");
    }

PHP/Slim

    // Return a 205
    $app->get('/success', function (Request $request, Response $response) {
        return $response->write("This will return a 205 Status Code")->withStatus(205);
    });

    // Return a 404
    $app->get('/fail', function (Request $request, Response $response) {
        return $response->write("This will return a 404 Status Code")->withStatus(404);
    });

Node/Express

    // Return a 205
    app.get('/success', function (req, res) {
        res.status(205);
        res.send('This will return a 205 Status Code');
    });

    // Return a 404
    app.get('/fail', function (req, res) {
        res.status(404);
        res.send('This will return a 404 Status Code');
    });

Get Data to Return

Often APIs are stateless, meaning that they get the data from somewhere. You aren't going to be storing data in data structures as your long-term storage. Because of that you'll often interact with databases. Each language works really well with a number of frameworks. I will recommend some good ORMs for each language below, but you should do your research to find one that works for your use case.

Python/Flask, Java/Spring, PHP/Slim, Node/Express — [the ORM recommendations were embedded separately and were not preserved]

Continue Learning

If you want to learn more about APIs or any of these frameworks, there are plenty of good resources available.

Frameworks

Originally published at.
Continue Learning

If you want to learn more about APIs or any of these frameworks, there are plenty of good resources available.

Frameworks

Originally published at.

https://medium.com/@devinmatte/intro-to-apis-67eb36057739?source=---------3------------------
CC-MAIN-2020-10
refinedweb
1,538
65.83
drainers 0.0.3

Event-based draining of process output

drainers is an abstraction around subprocess.Popen to read and control process output event-wise. It also allows you to abort running processes either gracefully or forcefully without having to directly interact with the processes or threads themselves.

Overview

Defining a process

A Drainer is a factory and controller wrapper around subprocess.Popen and therefore takes all of the (optional) parameters that subprocess.Popen's initializer takes. For example, the minimal Drainer takes a command array:

    from drainers import Drainer

    def ignore_event(line, is_err):
        pass

    my_drainer = Drainer(['ls', '-la'], read_event_cb=ignore_event)
    my_drainer.start()

But extra arguments are allowed, too:

    my_drainer = Drainer(['echo', '$JAVA_HOME'], shell=True, bufsize=64,
                         read_event_cb=ignore_event)
    my_drainer.start()

The only two arguments to Drainer that are reserved are stdout and stderr. Drainer requires them to be subprocess.PIPE explicitly, and sets them for you accordingly.

Defining a callback

Drainer's strength lies in the fact that each line that is read from the process' standard output or standard error streams leads to a callback function being invoked. This allows you to process virtually any process' output, as long as it's line-based. The callback function can be specified using the read_event_cb parameter to the constructor, as seen in the example above. It is mandatory.

The callback function specified needs to have a specific signature:

    def my_callback(line, is_err):
        ...

It should take two parameters: line (a string) and is_err (a boolean). The latter indicates that the line was read from the standard error stream. There is nothing more to it. It does not need to return anything: its return value will be ignored.

Your callback may be a class method, too, like in the following example. Notice that in those cases, you pass foo.my_method as the value for the read_event_cb parameter:

    class MyClass(object):
        def my_method(self, line, is_err):
            ...

    foo = MyClass()
    my_drainer = Drainer(['ls'], read_event_cb=foo.my_method)
    my_drainer.start()

The granularity currently is a single line. If you want to read predefined chunks (lines) of data, use BufferedDrainer instead. See examples/buffer_results.py for an example.

Aborting processes

Drainer allows you to abort a running process in the middle of execution, forcefully sending the process a terminate() message (the Python equivalent of a Unix SIGTERM) when a certain condition arises. By default, the process will never be terminated abnormally. To specify termination criteria, implement a callback function that takes no parameters and returns True if abortion is desired and False otherwise.

For example, for a long running process you might want to terminate it if the disk is getting (almost) full. But checking how much space is free can be a lengthy operation, so you might want to do it only sparingly:

    def out_of_diskspace():
        left = handytools.check_disk_free()
        total = handytools.check_disk_total()
        return (left / total) < 0.03

    # The following drainer executes the cruncher and checks whether the disk
    # is (almost) full every 5 seconds. It aborts if free disk space runs
    # under 3%.
    my_drainer = Drainer(['/bin/crunch', 'inputfile', 'outputfile'],
                         read_event_cb=ignore_event,
                         should_abort=out_of_diskspace,
                         check_interval=5.0)
    exitcode = my_drainer.start()

The example is pretty self-explanatory. You can check the exitcode to see the result of the process.
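Putting the pieces together, here is a small end-to-end sketch that uses only the constructor arguments documented above (the command and the counting logic are illustrative, not part of the package):

    from drainers import Drainer

    class LineCounter(object):
        """Collects simple statistics about a process' output."""

        def __init__(self):
            self.lines = 0
            self.errors = 0

        def count(self, line, is_err):
            # Called once per line; is_err tells stdout and stderr apart.
            self.lines += 1
            if is_err:
                self.errors += 1

    counter = LineCounter()
    drainer = Drainer(['ls', '-la'], read_event_cb=counter.count)
    exitcode = drainer.start()
    print('%d lines (%d on stderr), exit code %r' % (counter.lines, counter.errors, exitcode))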
More examples

See the examples directory for more detailed examples.

- Author: Vincent Driessen
- License: BSD
- Platform: any
- Package Index Owner: nvie
https://pypi.python.org/pypi/drainers/0.0.3
CC-MAIN-2016-30
refinedweb
559
50.63
Qt Creator, linked boost library and debug

Hello. I have found a strange problem with Qt Creator and the boost library. I have a simple "Hello World!" program in Qt Creator. So far so good. But I need to use the boost::filesystem library. So I add:

@
#include <boost/filesystem.hpp>
@

to the source code and

@
INCLUDEPATH += C:/boost
LIBS += C:/boost/stage/lib/libboost_filesystem-mgw45-mt-d-1_46_1.a
LIBS += C:/boost/stage/lib/libboost_system-mgw45-mt-d-1_46_1.a
@

to the .pro file.

Building the application for the "release" target is OK, no errors. The resulting executable runs and works well. But for the "debug" target it doesn't. I get the error:

The process could not be started: %1 is not a valid Win32 application.

Debugging can be started but it ignores all breakpoints and terminates immediately. Do you know what could cause this problem? If I don't have #include <boost/filesystem.hpp> in the source code, the "debug" executable can be executed, but when I include boost/filesystem, it immediately becomes an invalid win32 application. However the "release" executable works well in both cases. Thank you.

P.S. I tried to create the same simple application in Code::Blocks, just to test whether the included boost libraries are compiled correctly, and yes, there's no problem with both release and debug build targets, all works fine. So it doesn't seem to be a problem with the boost libraries. I'm using Qt SDK 1.1, Qt Creator 2.2 installed later, Windows 7 Ultimate x64

[EDIT: code formatting, please wrap in @-tags, Volker]

That looks a lot like some stale files being left behind from one build being used in the new one. Did you clean out the build directory after switching to debug or did you use different shadow build directories for debug and release? Doing a clean rebuild should fix this.

Thanks for the answer. Unfortunately this is not the problem. I have cleaned the project, then I have deleted the entire directory boosttest-build-desktop and then built "debug" again - and the same problem occurred. What's more, just commenting out the line "#include <boost/filesystem.hpp>" and doing "rebuild project" makes the .exe file executable in the "debug" target again. So there IS something wrong with linking but I have no idea what it can be. Thanks.

On windows you're usually using .dll, not .a. So what's your compiler? (mingw 4.5?) Don't use QtCreator 2.2, which is not delivered with Qt SDK 1.1. Try:

- -L : this is a path
- -l : is a file without extension (.a or .lib is added)
- \ : continue on following line
- $$quote(..) : useful for paths like 'c:/program files'

@
DEFINES += BOOST_ALL_DYN_LINK # for using dynamic libraries
LIBS += -L$$quote(C:/boost/stage/lib) \
    -llibboost_filesystem-mgw45-mt-d-1_46_1 -llibboost_system-mgw45-mt-d-1_46_1
@

eventually make separate configs for debug and release

@
CONFIG(debug, debug|release) {
    LIBS += -L$$quote(C:/boost/stage/lib) \
        -llibboost_filesystem-mgw45-mt-d-1_46_1 -llibboost_system-mgw45-mt-d-1_46_1
} else {
    LIBS += -L$$quote(C:/boost/stage/lib) \
        -llibboost_filesystem-mgw45-mt-1_46_1 -llibboost_system-mgw45-mt-1_46_1
}
@

Hi. Thank you for your reply. You know what? You have solved my problem with one sentence: "Don't use QtCreator 2.2, which is not delivered with Qt SDK 1.1". That is, it works with Qt Creator 2.1 supplied with the Qt SDK, but not with QTC 2.2. (Maybe a different version of MinGW coming with this new version...?) I didn't even have to change the LIBS += settings in the .pro file. And about .dll - they are used for dynamic linking, but I want static linking, I want to produce a single .exe file.
Thank you very much for your help.

Well, great that this helps for now, but the SDK will eventually upgrade to Qt Creator 2.2 and then you will have the same issue. I would really appreciate getting this nailed down :-) Can you make your example available to us? How did you build it in Code::Blocks? Did you use a different build system or did you reuse the qmake .pro-file you had? Did Code::Blocks and Creator use the same mingw?

Hi. Well, it seems there's really a problem with the MinGW versions I have installed, you're right.

My default MinGW installation (C:\MinGW) is version 4.5.0. Code::Blocks uses this default installation, so it's running on 4.5.0. Compilation of the boost libraries (bootstrap, bjam) used this default MinGW too, so boost is compiled with 4.5.0. But QtSDK comes with its own MinGW - 4.4.0. And Qt Creator 2.2 also comes with its own MinGW - also 4.4.0. In "Build Settings" of both Qt Creators (2.1 from QtSDK and standalone 2.2) I can see the same settings in Qt version - "Qt 4.7.3 for Desktop - MinGW 4.4 (Qt SDK)," referencing the "c:\qtsdk\desktop\qt\4.7.3\mingw\bin\qmake.exe" directory.

So I have boost libraries compiled with MinGW 4.5.0, but Qt Creator uses its own 4.4.0 for my program. That can be the problem, I see. But why does release work fine and only debug fail in Qt Creator 2.2? And why does Qt Creator 2.1 release and debug with no problem - especially if both 2.1 and 2.2 use the same Qt from QtSDK?

For Code::Blocks I created a new project from scratch - in fact it creates a "Hello World" main.cpp automatically when a new project is created, so all I had to do was add the libraries to the linker, the includes to the compiler and one line of code: #include <boost/filesystem.hpp>. Thank you.

Could you please try setting up mingw 4.5 in Qt Creator? Go to Tools->Options->Tool chain and add a new mingw tool chain. Point it to the g++ of the mingw 4.5, rename the whole thing (click on the name in the table) and apply the whole thing. You should now be able to select that version to build your project. Does it work when building with the newer mingw?

Just a few comments. Qt Creator 2.2 works fine on windows xp/seven with Qt SDK 1.1:

- install Visual C++ 2008 Express Edition (free, uncheck SQL Server Express if you don't have to use it)
- install QtSDK 1.1 VS2008 (uncheck all about MinGW, check delete QtCreator previous settings)
- install QtCreator 2.2 (uncheck MinGW)

Both Qt Creator 2.1/2.2 work fine (release and debug).

@Tobias Hunger: Oh yes, that's it. It works now. Qt Creator 2.1 probably chose the right MinGW by itself (I cannot check it, the Tool Chain combobox is disabled in build settings there and no "Tool Chain" option in Tools/Options is present) but 2.2 set up its toolchain to QtSDK's MinGW 4.4.0. When I changed the ToolChain to "MinGW (x86 32bit) - 4.5.0" in QTC 2.2, both release and debug builds started to work fine. Thank you very much, Tobias.

- formiaczek

If you happen to change boost versions, this is perhaps a more generic way of doing this: (define your environment variable pointing at boost-root, e.g. C:\boost_1_54_0) then, for gcc4.7.2 (e.g. from qt 5.0.2), in the .pro file:

@
INCLUDEPATH += $(BOOST_ROOT)
BOOST = $(BOOST_ROOT)
GCC_VER = system(gcc -dumpversion)
COMPILER = gcc-mingw-$${GCC_VER}
#dunno how to make mgw47 from gcc-mingw and 4.7.2 from here..
COMPILER_SHORT = mgw47
BOOST_SUFFIX = $${COMPILER_SHORT}-mt-${BOOST_VER}
LIBS += -L$${BOOST}/stage/lib

// now you can add your libraries, e.g.
LIBS += -lboost_system-$${BOOST_SUFFIX}
LIBS += -lboost_WHATEVER-$${BOOST_SUFFIX} // just change 'WHATEVER' to a valid lib name..
@

(obviously the above is assuming you've compiled your boost with the same compiler, e.g.:

@
cd %BOOST_ROOT%
set PATH=C:\Qt\Qt5.0.2\Tools\MinGW\bin;%PATH%
bootstrap gcc
b2 --build-dir=/temp/build-boost toolset=gcc stage
@

if you're using boost_1_54, you'd be better off defining the following in your project-config.jam before the build:

@
using gcc : : : <cxxflags>-std=c++0x <cxxflags>"-include cmath" ;
@

etc..
https://forum.qt.io/topic/5866/qt-creator-linked-boost-library-and-debug
CC-MAIN-2017-39
refinedweb
1,347
69.68
There are a number of breaking changes between ASP.NET Core RC1 and RC2. If you have existing apps targeting RC1, here are some things you should expect to change in order to upgrade the apps to RC2. Consider this an "unofficial" guide.

Update: The official docs have a bunch of great content on this as well – this was just my own personal notes that worked for my projects.

NuGet Sources

Make sure you're configured to use the correct NuGet feed(s) for the RC2 sources. Once RC2 officially ships, the official, standard NuGet source should work, but until then you may need to manually specify the correct source using the nuget command line (nuget sources) or a NuGet.config file. As of now you should be using:

Update: The RC2 packages are all on nuget.org now, so you can use your usual nuget.org package source.

You can browse to that link to see the available packages and their respective versions. You can specify this source in a NuGet.config file like this one (note the other two sources shown are commented out):
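A minimal NuGet.config of the kind described; the commented-out entries stand in for the RC-era MyGet development feeds, so treat the exact keys and URLs here as illustrative:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <!-- RC-era development feeds, disabled once RC2 reached nuget.org -->
        <!-- <add key="aspnetcidev" value="https://www.myget.org/F/aspnetcidev/api/v3/index.json" /> -->
        <!-- <add key="aspnetvnext" value="https://www.myget.org/F/aspnetvnext/api/v3/index.json" /> -->
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
      </packageSources>
    </configuration>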
dotnet CLI

Make sure you have installed the latest dotnet command line interface (CLI), using the .NET Core SDK. You can find the dotnet CLI installer appropriate to your OS here.

project.json

The project.json file structure has changed in a number of subtle ways. You'll of course need to change the versions of all of the packages you're using, and change the references to any that use "Microsoft.AspNet" to be "Microsoft.AspNetCore" (see below for more of this). The current rc2 version that is working for me is "1.0.0-rc2-20801" but I expect this number will change to another string upon release (e.g. "1.0.0-rc2" or "1.0.0-rc2-final"). Note: commands are gone. Here is an example of a working project.json file (from the dependency injection docs):
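The following is a trimmed, representative sketch of an RC2-style project.json rather than the exact file from those docs; the package list and versions are illustrative:

    {
      "dependencies": {
        "Microsoft.AspNetCore.Mvc": "1.0.0-rc2-final",
        "Microsoft.AspNetCore.Server.Kestrel": "1.0.0-rc2-final",
        "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0-rc2-final"
      },
      "frameworks": {
        "netcoreapp1.0": {
          "dependencies": {
            "Microsoft.NETCore.App": {
              "type": "platform",
              "version": "1.0.0-rc2-3002702"
            }
          },
          "imports": [ "dotnet5.6", "dnxcore50", "portable-net45+win8" ]
        }
      },
      "buildOptions": {
        "emitEntryPoint": true
      }
    }

Note that RC1's "compilationOptions" section is now "buildOptions", and the old "commands" section is simply gone.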
Namespaces

Do a search for "Microsoft.AspNet." in your project to find all of the namespaces that use this string, and replace them with "Microsoft.AspNetCore." This will fix the bulk of the compile errors you will encounter immediately. Don't forget to look in your views, including _ViewImports.cshtml, for additional references to namespaces that need to be updated that the compiler won't catch (you'll encounter these problems at runtime).

"Microsoft.Framework" has been renamed to "Microsoft.Extensions". This affects packages and namespaces like DependencyInjection and Logging.

"Microsoft.AspNet.Identity.EntityFramework" is now "Microsoft.AspNetCore.Identity.EntityFrameworkCore".

global.json

You can remove the "sdk" section. Typically it will just list the projects (e.g. "projects" : ["src","test"]).

Startup.cs

- Remove references to UseIISPlatformHandler.
- Remove the public static void main entry point. You'll add a Program.cs file (see below).
- Remove the minimumLevel property from loggerFactory (it's specified per logger type now), if set.
- Change any reference to LogLevel.Verbose (to Information or Debug).
- If building configuration, you can use AddJsonFile("file.json", optional: true, reloadOnChange: true) instead of calling .ReloadOnChanged() off of your call to builder.Build().
- Access to the root of the application is now through IHostingEnvironment.ContentRootPath.
- If configuring Entity Framework, change the call in ConfigureServices from AddEntityFramework to AddDbContext.
- If using services.Configure<T>(Configuration.GetSection("sectionName")) you need to reference Microsoft.Extensions.Options.ConfigurationExtensions and add a using Microsoft.Extensions.DependencyInjection; if you aren't already doing so.

Below is an example Startup.cs:
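This is a sketch reflecting the points above, not a verbatim copy of the original sample; AppDbContext and the connection string key are placeholders:

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Logging;

    public class Startup
    {
        public Startup(IHostingEnvironment env)
        {
            // ContentRootPath replaces the old application base path.
            var builder = new ConfigurationBuilder()
                .SetBasePath(env.ContentRootPath)
                .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        public IConfigurationRoot Configuration { get; }

        public void ConfigureServices(IServiceCollection services)
        {
            // AddDbContext replaces the RC1 AddEntityFramework() chain.
            services.AddDbContext<AppDbContext>(options =>
                options.UseSqlServer(Configuration["Data:DefaultConnection:ConnectionString"]));
            services.AddMvc();
        }

        // No more public static void Main here; that moved to Program.cs.
        public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
        {
            // Minimum level is now given per logger instead of on the factory.
            loggerFactory.AddConsole(LogLevel.Information);

            app.UseMvc(routes =>
                routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}"));
        }
    }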
Entity Framework

If you're using Entity Framework, you'll need to update your DbContext class to accept an options parameter in its constructor and pass it to its base, like so:
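For example (AppDbContext is a placeholder name for your own context type):

    public class AppDbContext : DbContext
    {
        public AppDbContext(DbContextOptions<AppDbContext> options)
            : base(options)  // pass the options through to the base class
        {
        }
    }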
Call AddDbContext in Startup as mentioned above.

Change the namespace from "Microsoft.Data.Entity" to "Microsoft.EntityFrameworkCore".

Migrations will need to be updated. For instance, ".HasAnnotation("Relational:Name", "name")" is now simply ".HasName("name")". Likewise, HasAnnotation("Relational:TableName", tableName) should be updated to ToTable(tableName).

References to "Microsoft.AspNet.Identity.EntityFramework" will need to be changed to "Microsoft.AspNetCore.Identity.EntityFrameworkCore".

Table names will now use the DbSet name, not the Type name, so you may run into issues especially around pluralization. See here for resolutions.

Program.cs

You should add a Program.cs file to configure your web app. Here's an example:
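In RC2 the web host is built explicitly in Main; a typical version looks like this (a minimal sketch, assuming Kestrel behind IIS):

    using System.IO;
    using Microsoft.AspNetCore.Hosting;

    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()   // replaces UseIISPlatformHandler
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }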
Razor

- Change validation summary tag helpers from asp-validation-summary="ValidationSummary.All" to asp-validation-summary="All". Likewise change "ValidationSummary.ModelOnly" to "ModelOnly".
- Add asp-route-returnurl="@ViewData["ReturnUrl"]" to form tags if you wish to use this functionality (and be sure to set this in ViewData).
- Don't forget to change _ViewImports.cshtml. Its reference to TagHelpers should be @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers (and change the reference to Identity to use AspNetCore as well).

Properties – launchSettings.json

You need to change the environment variable name used to set the environment to Development in launchSettings.json. Here's a sample of the file:
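A representative sample with the new variable name (the profile name "MyApp" is a placeholder):

    {
      "profiles": {
        "MyApp": {
          "commandName": "Project",
          "launchBrowser": true,
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        },
        "IIS Express": {
          "commandName": "IISExpress",
          "launchBrowser": true,
          "environmentVariables": {
            "ASPNETCORE_ENVIRONMENT": "Development"
          }
        }
      }
    }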
API Changes

There are many small API changes, as well. Please list those that you find in the comments and I will update this list. Here are the ones I've encountered myself thus far. You can find the team's summary of changes listed in the issues in the Announcements repository on GitHub.

- IFilterFactory: now has a boolean IsReusable property that must be implemented.
- [FromServices]: this attribute is no longer available; use constructor injection instead (learn more)
- IActionResult HttpNotFound: this has been renamed NotFound
- IActionResult HttpBadRequest: this has been renamed BadRequest
- ASPNET_ENV environment variable is now ASPNETCORE_ENVIRONMENT
- Microsoft.Extensions.WebEncoders namespace is now System.Text.Encodings.Web (for HtmlEncoder)

Training

If you or your team need training on ASP.NET Core, I have a virtual 2-day ASP.NET Core class scheduled for June, and I'm available to schedule a custom class for your team, as well.

http://ardalis.com/upgrading-from-aspnet-core-rc1-to-rc2-guide
CC-MAIN-2017-26
refinedweb
964
52.66
test case

Last edit: Anonymous 2014-03-23

    $ i686-w64-mingw32-gcc --version
    i686-w64-mingw32-gcc (GCC) 4.7.0 20120224 (Fedora MinGW 4.7.0-0.5.20120224.fc16_cross)

I am attaching a test case so that you can easily see the problem.

    $ gcc -m32 test-ps.c
    $ ./a.out
    size: 12
    $ gcc -fpack-struct -m32 test-ps.c
    size: 6
    $ i686-w64-mingw32-gcc test-ps.c
    $ wine ./a-1.exe
    size: 12
    $ i686-w64-mingw32-gcc -fpack-struct test-ps.c
    $ wine ./a.exe
    size: 9
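The attached test-ps.c itself is not reproduced in this report; judging from the sizes above (12 unpacked, 6 with -fpack-struct) and the field-by-field walkthrough further down, it was presumably along these lines:

    #include <stdio.h>

    /* 12 bytes with default alignment on 32-bit x86, 6 bytes with -fpack-struct */
    struct s {
        char a[1];
        long b;
        char c[1];
    };

    int main(void)
    {
        printf("size: %d\n", (int)sizeof(struct s));
        return 0;
    }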
gcc 4.7 for mingw has -mms-bitfields activated by default, which means the struct layout and the alignment within structs have changed. You can force a specific layout by using the attribute ms_struct or the attribute gcc_struct. The difference between those two layouts is in bitfield packing and in the alignment of those field members. For example:

    struct {
        unsigned long long c : 1;
        unsigned int a : 1;
        unsigned int b : 1;
    }

will be laid out differently. For the gcc layout, all bits get merged together within one bitfield; gcc_struct makes no difference in actual type-sizes for bitfields. For the ms layout, c will end up in an 8-byte field, and just a and b get merged together as they have the same type-size. This is expected behavior. You might want to try -mno-ms-bitfields here.

Ah, I missed that default alignment is now applied to ms_struct unions/structs too. You can solve the issue also by adding an explicit #pragma pack(1) or __attribute__ ((pack(1))). The cause of it getting aligned to 9 is this: the default alignment of fields is 4 on 32-bit. So the first char a[1]; is placed in 4 bytes, long b also, and the last element takes 4 too. By enabling struct packing, all fields in the structure get merged: char a[1]; has one byte, long b has 4 bytes, plus char c[1] has 1. But due to the structure alignment of 4, the structure gets expanded in the last field to 4. I see that here the alignment is one too big ... hmm, maybe an issue indeed. Workaround: use an explicit #pragma pack(1) if you don't want structure alignment at the end.

Adding -mno-ms-bitfields does fix the issue that I am having. I was expecting 6 and not 8 (or 9). I did not know about the defaults change. Should I report the 9 issue upstream (gcc)?

I was too quick to say it is fixed. There seem to be other byte alignment issues. I will continue investigating.

The general issue is that __attribute__ ((__packed__)) applies only to the last field of the struct. This is the cause for the size of 9. The trick here is '#pragma pack(1)'. With it you get the expected sizes. The issue is that this field alignment gets applied even for structs/unions marked as packed (or via the -fpack-struct option).

The remark that "__attribute__ ((__packed__)) applies only to the last field of a struct" is curious: I never experienced that before in any gcc version, and if that is a gcc-4.7 thing it is a devastatingly incompatible change. For the record, I just compiled Michael's testcase with gcc45 for win32 and it prints 6 when run under wine. Same when compiled for x86-linux. Is your remark accurate Kai?

My last comment was a false alarm. Adding "-mno-ms-bitfields" solves all my issues. I use software that heavily relies on bitfields and packed structs (yes, it is very old). I agree with sezero, even if I am an outside nobody. This is a drastic change in defaults. Perhaps this "bug" needs to be changed to an "enhancement" to change the defaults back to pre-4.7 values to prevent further problems with other projects.

I ran into this as well and I don't think the change is correct.

I now get _different_ results from compiling with the Microsoft compiler and packing when using -mms-bitfields, while I get _identical_ results when compiling with -mno-ms-bitfields. And when __attribute__ ((__packed__)) applies only to the last field of a struct, then -m[no]-ms-bitfields shouldn't really make a difference in my case:

    #include <cstdio>

    #if defined(_MSC_VER)
    #pragma pack( push, packing )
    #pragma pack( 1 )
    #endif

    struct Header
    {
        unsigned char IdLength;
        unsigned char ColorMapType;
        unsigned char ImageType;
        unsigned char FirstEntryIndex[2];
        unsigned short ColorMapLength;
        unsigned char ColorMapEntrySize;
        unsigned char XOrigin[2];
        unsigned char YOrigin[2];
        unsigned short ImageWidth;
    }
    #if defined(__GNUC__)
    __attribute__((packed))
    #endif
    ;

    #if defined(_MSC_VER)
    #pragma pack( pop, packing )
    #endif

    int main ()
    {
        printf("sizeof(Header): %d\n", sizeof(Header));
        return 0;
    }

But my results for size are:

- With VS 2010 (32-bit as well as 64-bit): 14
- Compiled with gcc 4.7 and -mno-ms-bitfields: 14
- gcc 4.7 and the new default (or with -mms-bitfields explicitly): 16

So gcc 4.7 is now incompatible with the MS compiler as well as with gcc 4.6 as far as I can see.
https://sourceforge.net/p/mingw-w64/bugs/275/
CC-MAIN-2017-47
refinedweb
839
76.22
vifm man page

vifm — vi file manager

Synopsis

vifm [OPTION]...
vifm [OPTION]... path
vifm [OPTION]... path path

Description

Vifm is an ncurses based file manager with vi like keybindings. If you use vi, vifm gives you complete keyboard control over your files without having to learn a new set of commands.

Options

vifm starts in the current directory unless it is given a different directory on the command line or the 'vifminfo' option includes "savedirs" (in which case last visited directories are used as defaults).

- -
  Read list of files from standard input stream and compose custom view out of them (see "Custom views" section). Current working directory is used as a base for relative paths.

- <path>
  Starts Vifm in the specified path.

- <path> <path>
  Starts Vifm in the specified paths. Specifying two directories triggers split view even when vifm was in single-view mode on finishing previous session. To suppress this behaviour the :only command can be put in the vifmrc file. When only one path argument is found on the command line, the left/top pane is automatically set as the current view. Paths to files are also allowed in case you want vifm to start with some archive opened.

- --select <path>
  Open parent directory of the given path and select specified file in it.

- -f
  Makes vifm write selection to $VIFM/vimfiles and quit, instead of opening files.

- --choose-files <path>|-
  Sets output file to write selection into on exit instead of opening files. "-" means standard output. Use empty value to disable it.

- --choose-dir <path>|-
  Sets output file to write last visited directory into on exit. "-" means standard output. Use empty value to disable it.

- --delimiter <delimiter>
  Sets separator for list of file paths written out by vifm. Empty value means null character. Default is new line character.

- --on-choose <command>
  Sets command to be executed on selected files instead of opening them. The command may use any of the macros described in "Command macros" section below. The command is executed once for the whole selection.

- --logging[=<startup log path>]
  Log some operational details to $VIFM/log. If the optional startup log path is specified and permissions allow opening it for writing, then logging of early initialization (before the value of $VIFM is determined) is put there.

- --server-list
  List available server names and exit.

- --server-name <name>
  Name of target or this instance (sequential numbers are appended on name conflict).

- --remote
  Sends the rest of the command line to the active vifm server (one of already running instances if any). When there is no server, quits silently. There is no limit on how many arguments can be processed. One can combine --remote with -c <command> or +<command> to execute a command in an already running instance of vifm. See also "Client-Server" section below.

- -c <command> or +<command>
  Run command-line mode <command> on startup. Commands in such arguments are executed in the order they appear in the command line. Commands with spaces or special symbols must be enclosed in double or single quotes or all special symbols should be escaped (the exact syntax strongly depends on shell).

- --help, -h
  Show a brief command summary and exit vifm.

- --version, -v
  Show version information and quit.

- --no-configs
  Skip reading vifmrc and vifminfo.

See "Startup" section below for the explanations on $VIFM.

General keys

- Ctrl-C or Escape
  cancel most operations (see "Cancellation" section below), clear all selected files.

- Ctrl-L
  clear and redraw the screen.
Basic Movement

The basic vi key bindings are used to move through the files and pop-up windows.

- k, gk, or Ctrl-P
  move cursor up one line.

- j, gj or Ctrl-N
  move cursor down one line.

- h
  when 'lsview' is off move up one directory, otherwise move left one file.

- l
  when 'lsview' is off move into a directory or launch a file, otherwise move right one file.

- gg
  move to the first line of the file list.

- G
  move to the last line in the file list.

- gh
  go up one directory.

- gl or Enter
  enter directory or launch a file.

- H
  move to the first file in the window.

- M
  move to the file in the middle of the window.

- L
  move to the last file in the window.

- Ctrl-F or Page Down
  move forward one page.

- Ctrl-B or Page Up
  move back one page.

- Ctrl-D
  jump forward one half page.

- Ctrl-U
  jump back one half page.

- n%
  move to the file that is n percent from the top of the list (for example 25%).

- 0 or ^
  move cursor to the first column. See 'lsview' option description.

- $
  move cursor to the last column. See 'lsview' option description.

- Space
  switch file lists.
There are also several special marks that can't be set manually: - ' (single quote) - previously visited directory of the view, thus hitting '' allows switching between two last locations - < - the first file of the last visually selected block - > - the last file of the last visually selected block Searching - /regular expression pattern search for files matching regular expression in forward direction and advance cursor to next match. - / perform forward search with top item of search pattern history. - ?regular expression pattern search for files matching regular expression in backward direction and advance cursor to previous match. - ? perform backward search with top item of search pattern history. - Matches are automatically selected if 'hlsearch' is set. Enabling 'incsearch' makes search interactive. 'ignorecase' and 'smartcase' options affect case sensitivity of search queries. - [count]n go to the next file matching last search pattern. Takes last search direction into account. - [count]N go to the previous file matching last search pattern. Takes last search direction into account. If 'hlsearch' option is set, hitting n/N to perform search and go to the first matching item resets current selection in normal mode. It is not the case if search was already performed on files in the directory, thus selection is not reset after clearing selection with escape key and hitting n/N key again. Note: vifm uses extended regular expressions for / and ?. - [count]f[character] search forward for file with [character] as first character in name. Search wraps around the end of the list. - [count]F[character] search backward for file with [character] as first character in name. Search wraps around the end of the list. - [count]; find the next match of f or F. - [count], find the previous match of f or F. Note: f, F, ; and , wrap around list beginning and end when they are used alone and they don't wrap when they are used as selectors. File Filters There are three basic file filters: - dot files filter (excluding "." and ".." special directories, which appearance is controlled by the 'dotdirs' option); - manual filter for file names; - automatic filter for file names; - local filter for file names (see description of the "=" normal mode command). Performing operations on manual filter for file names automatically does the same on automatic one. The file name filter is separated mainly for convenience purpose and to get more deterministic behaviour. The basic vim folding key bindings are used for filtering files. - Each file list has its own copy of each filter. - Filtered files are not checked in / search or :commands. - Files and directories are filtered separately. For this a slash is appended to a directory name before testing whether it matches the filter. Examples: " filter directories which names end with '.files' :filter /^.*\.files\/$/ " filter files which names end with '.d' :filter /^.*\.d$/ " filter files and directories which names end with '.o' :filter /^.*\.o\/?$/ - za toggle visibility of dot files. - zo show dot files. - zm hide dot files. - zf add selected files to file name filter. - zO show files hidden by file name filter. - zM restore all filters. - zR remove all filters. - zr remove local filter. - zd exclude selection or current file from custom view. Does nothing for regular view. - =regular expression pattern filter out files that don't match regular expression. Whether view is updated as regular expression is changed depends on the value of the 'incsearch' option. 
This kind of filter is automatically reset when directory is changed. Other Normal Mode Keys - [count]: enter command line mode. [count] generates range. - q: open external editor to prompt for command-line command. See "Command line editing" section for details. - q/ open external editor to prompt for search pattern to be searched in forward direction. See "Command line editing" section for details. - q? open external editor to prompt for search pattern to be searched in backward direction. See "Command line editing" section for details. - q= open external editor to prompt for filter pattern. See "Command line editing" section for details. Unlike other q{x} commands this one doesn't work in Visual mode. - [count]!! and [count]!<selector> enter command line mode with entered ! command. [count] modifies range. - Ctrl-O go backwards through directory history of current view. Nonexistent directories are automatically skipped. - Ctrl-I if 'cpoptions' contains "t" flag, <tab> and <c-i> switch active pane just like <space> does, otherwise it goes forward through directory history of current view. Nonexistent directories are automatically skipped. - Ctrl-G create a window showing detailed information about the current file. - Shift-Tab enters view mode (works only after activating view pane with :view command). - ga calculate directory size. Uses cached directory sizes when possible for better performance. As a special case calculating size of ".." entry results in calculation of size of current directory. - gA like ga, but force update. Ignores old values of directory sizes. If file under cursor is selected, each selected item is processed, otherwise only current file is updated. - gf find link destination (like l with 'followlinks' off, but also finds directories). - gr only for MS-Windows same as l key, but tries to run program with administrative privileges. - av go to visual mode into selection amending state preserving current selection. - gv go to visual mode restoring last selection. - [reg]gs when no register is specified, restore last t selection (similar to what gv does for visual mode selection). If register is present, then all files listed in that register and which are visible in current view are selected. - gu<selector> make names of selected files lowercase. - [count]guu and [count]gugu make names of [count] files starting from the current one lowercase. Without [count] only current file is affected. - gU<selector> make names of selected files uppercase. - [count]gUU and [count]gUgU make names of [count] files starting from the current one uppercase. Without [count] only current file is affected. - e explore file in the current pane. - i handle file (even if it's an executable and 'runexec' option is set). - cw change word is used to rename a file or files. - cW change WORD is used to change only name of file (without extension). - cl change link target. - co only for *nix change file owner. - cg only for *nix change file group. - cp change file attributes (permission on *nix and properties on Windows). - [count]C clone file [count] times. - [count]dd or d[count]selector move selected file or files to trash directory (if 'trash' option is set, otherwise delete). See "Trash directory" section below. - [count]DD or D[count]selector like dd and d<selector>, but omitting trash directory (even when 'trash' option is set). - Y, [count]yy or y[count]selector yank selected files. 
- p copy yanked files to the current directory or move the files to the current directory if they were deleted with dd or :d[elete] or if the files were yanked from trash directory. See "Trash directory" section below. - P move the last yanked files. The advantage of using P instead of d followed by p is that P moves files only once. This isn't important in case you're moving files in the same file system where your home directory is, but using P to move files on some other file system (or file systems, in case you want to move files from fs1 to fs2 and your home is on fs3) can save your time. - al put symbolic links with absolute paths. - rl put symbolic links with relative paths. - t select or unselect (tag) the current file. - u undo last change. - Ctrl-R redo last change. - v or V enter visual mode, clears current selection. - [count]Ctrl-A increment first number in file name by [count] (1 by default). - [count]Ctrl-X decrement first number in file name by [count] (1 by default). - ZQ same as :quit!. - ZZ same as :quit. - . repeat last command-line command (not normal mode command) of this session (does nothing right after startup or :restart command). The command doesn't depend on command-line history and can be used with completely disabled history. - ( goto previous group. Groups are defined by primary sorting key. For name and iname members of each group have same first letter, for all other sorting keys vifm uses size, uid, ... - ) goto next group. See ( key description above. - { similar to ( key, but always considers whether entry is file or directory and thus speeds up navigation to closest previous entry of the opposite type. - } same as {, but in forward direction. Using Count - You can use count with commands like yy. - [count]yy yank count files starting from current cursor position downward. - Or you can use count with motions passed to y, d or D. - d[count]j delete (count + 1) files starting from current cursor position upward. Registers vifm supports multiple registers for temporary storing list of yanked or deleted files. Registers should be specified by hitting double quote key followed by a register name. Count is specified after register name. By default commands use unnamed register, which has double quote as its name. Though all commands accept registers, most of commands ignores them (for example H or Ctrl-U). Other commands can fill register or append new files to it. Presently vifm supports ", _, a-z and A-Z characters as register names. As mentioned above " is unnamed register and has special meaning of the default register. Every time when you use named registers (a-z and A-Z) unnamed register is updated to contain same list of files as the last used register. _ is black hole register. It can be used for writing, but its list is always empty. Registers with names from a to z and from A to Z are named ones. Lowercase registers are cleared before adding new files, while uppercase aren't and should be used to append new files to the existing file list of appropriate lowercase register (A for a, B for b, ...). Registers can be changed on :empty command if they contain files under trash directory (see "Trash directory" section below). Registers do not contain one file more than once. Example: "a2yy puts names of two files to register a (and to the unnamed register), "Ad removes one file and append its name to register a (and to the unnamed register), p or "ap or "Ap inserts previously yanked and deleted files into current directory. 
Selectors y, d, D, !, gu and gU commands accept selectors. You can combine them with any of selectors below to quickly remove or yank several files. Most of selectors are like vi motions: j, k, gg, G, H, L, M, %, f, F, ;, comma, ', ^, 0 and $. But there are some additional ones. - a all files in current view. - s selected files. - S all files except selected. - Examples: - dj - delete file under cursor and one below; - d2j - delete file under cursor and two below; - y6gg - yank all files from cursor position to 6th file in the list. When you pass a count to whole command and its selector they are multiplied. So: - 2d2j - delete file under cursor and four below; - 2dj - delete file under cursor and two below; - 2y6gg - yank all files from cursor position to 12th file in the list. Visual Mode Visual mode has to generic operating submodes: - plain selection as it is in Vim; - selection editing submode. Both modes select files in range from cursor position at which visual mode was entered to current cursor position (let's call it "selection region"). Each of two borders can be adjusted by swapping them via "o" or "O" keys and updating cursor position with regular cursor motion keys. Obviously, once initial cursor position is altered this way, real start position becomes unavailable. Plain Vim-like visual mode starts with cleared selection, which is not restored on rejecting selection ("Escape", "Ctrl-C", "v", "V"). Contrary to it, selection editing doesn't clear previously selected files and restores them after reject. Accepting selection by performing an operation on selected items (e.g. yanking them via "y") moves cursor to the top of current selection region (not to the top most selected file of the view). In turn, selection editing supports three types of editing (look at statusbar to know which one is currently active): - append - amend selection by selecting elements in selection region; - remove - amend selection by deselecting elements in selection region; - invert - amend selection by inverting selection of elements in selection region. No matter how you activate selection editing it starts in "append". One can switch type of operation (in the order given above) via "Ctrl-G" key. Almost all normal mode keys work in visual mode, but instead of accepting selectors they operate on selected items. - Enter save selection and go back to normal mode not moving cursor. - av leave visual mode if in amending mode (restores previous selection), otherwise switch to amending selection mode. - gv restore previous visual selection. - v, V, Ctrl-C or Escape leave visual mode if not in amending mode, otherwise switch to normal visual selection. - Ctrl-G switch type of amending by round robin scheme: append -> remove -> invert. - : enter command line mode. Selection is cleared on leaving the mode. - o switch active selection bound. - O switch active selection bound. - gu, u make names of selected files lowercase. - gU, U make names of selected files uppercase. View Mode This mode tries to imitate the less program. List of builtin shortcuts can be found below. Shortcuts can be customized using :qmap, :qnoremap and :qunmap command-line commands. - Shift-Tab, Tab, q, Q, ZZ return to normal mode. - [count]e, [count]Ctrl-E, [count]j, [count]Ctrl-N, [count]Enter scroll forward one line (or [count] lines). - [count]y, [count]Ctrl-Y, [count]k, [count]Ctrl-K, [count]Ctrl-P scroll backward one line (or [count] lines). 
- [count]f, [count]Ctrl-F, [count]Ctrl-V, [count]Space scroll forward one window (or [count] lines). - [count]b, [count]Ctrl-B, [count]Alt-V scroll backward one window (or [count] lines). - [count]z scroll forward one window (and set window to [count]). - [count]w scroll backward one window (and set window to [count]). - [count]Alt-Space scroll forward one window, but don't stop at end-of-file. - [count]d, [count]Ctrl-D scroll forward one half-window (and set half-window to [count]). - [count]u, [count]Ctrl-U scroll backward one half-window (and set half-window to [count]). - r, Ctrl-R, Ctrl-L repaint screen. - R reload view preserving scroll position. - F toggle automatic forwarding. Roughly equivalent to periodic file reload and scrolling to the bottom. The behaviour is similar to `tail -F` or F key in less. - [count]/pattern - [count]?pattern search backward for ([count]‐th) matching line. - [count]n repeat previous search (for [count]‐th occurrence). - [count]N repeat previous search in reverse direction (for [count]‐th occurrence). - [count]g, [count]<, [count]Alt-< scroll to the first line of the file (or line [count]). - [count]G, [count]>, [count]Alt-> scroll to the last line of the file (or line [count]). - [count]p, [count]% scroll to the beginning of the file (or N percent into file). - v invoke an editor to edit the current file being viewed. The command for editing is taken from the 'vicmd'/'vixcmd' option value and extended with middle line number prepended by a plus sign and name of the current file. All "Ctrl-W x" keys work the same was as in Normal mode. Active mode is automatically changed on navigating among windows. When less-like mode activated on file preview is left using one by "Ctrl-W x" keys, its state is stored until another file is displayed using preview (it's possible to leave the mode, hide preview pane, do something else, then get back to the file and show preview pane again with previously stored state in it). Command line Mode These keys are available in all submodes of the command line mode: command, search, prompt and filtering. Down, Up, Left, Right, Home, End and Delete are extended keys and they are not available if vifm is compiled with --disable-extended-keys option. - Esc, Ctrl-C leave command line mode, cancels input. Cancelled input is saved into appropriate history and can be recalled later. - Ctrl-M, Enter execute command and leave command line mode. - Ctrl-I, Tab complete command or its argument. - Shift-Tab complete in reverse order. - Ctrl-_ stop completion and return original input. - Ctrl-B, Left move cursor to the left. - Ctrl-F, Right move cursor to the right. - Ctrl-A, Home go to line beginning. - Ctrl-E, End go to line end. - Alt-B go to the beginning of previous word. - Alt-F go to the end of next word. - Ctrl-U remove characters from cursor position till the beginning of line. - Ctrl-K remove characters from cursor position till the end of line. - Ctrl-H, Backspace remove character before the cursor. - Ctrl-D, remove character under the cursor. - Ctrl-W remove characters from cursor position till the beginning of previous word. - Alt-D remove characters from cursor position till the beginning of next word. - Ctrl-T swap the order of current and previous character and move cursor forward or, if cursor past the end of line, swap the order of two last characters in the line. - Alt-. insert last part of previous command to current cursor position. Each next call will insert last part of older command. 
- Ctrl-G edit command-line content in external editor. See "Command line editing" section for details. - Ctrl-N recall more recent command-line from history. - Ctrl-P recall older command-line from history. - Up recall more recent command-line from history, that begins as the current command-line. - Down recall older command-line from history, that begins as the current command-line. - Ctrl-] trigger abbreviation expansion. Pasting special values The shortcuts listed below insert specified values into current cursor position. Last key of every shortcut references value that it inserts: - c - [c]urrent file - d - [d]irectory path - e - [e]xtension of a file name - r - [r]oot part of a file name - t - [t]ail part of directory path - a - [a]utomatic filter - m - [m]anual filter - = - local filter, which is bound to "=" in normal mode Values related to filelist in current pane are available through Ctrl-X prefix, while values from the other pane have doubled Ctrl-X key as their prefix (doubled Ctrl-X is presumably easier to type than uppercase letters; it's still easy to remap the keys to correspond to names of similar macros). - Ctrl-X c name of the current file of the active pane. - Ctrl-X d path to the current directory of the active pane. - Ctrl-X e extension of the current file of the active pane. - Ctrl-X r name root of current file of the active pane. - Ctrl-X t the last component of path to the current directory of the active pane. - Ctrl-X Ctrl-X c name of the current file of the inactive pane. - Ctrl-X Ctrl-X d path to the current directory of the inactive pane. - Ctrl-X Ctrl-X e extension of the current file of the inactive pane. - Ctrl-X Ctrl-X r name root of current file of the inactive pane. - Ctrl-X Ctrl-X t the last component of path to the current directory of the inactive pane. - Ctrl-X a value of automatic filter of the active pane. - Ctrl-X m value of manual filter of the active pane. - Ctrl-X = value of local filter of the active pane. - Ctrl-X / last pattern from search history. Command line editing vifm provides a facility to edit several kinds of data, that is usually edited in command-line mode, in external editor (using command specified by 'vicmd' or 'vixcmd' option). This has at least two advantages over built-in command-line mode: - one can use full power of Vim to edit text; - finding and reusing history entries becomes possible. The facility is supported by four input submodes of the command-line: - command; - forward search; - backward search; - file rename (see description of cw and cW normal mode keys). Editing command-line using external editor is activated by the Ctrl-G shortcut. It's also possible to do almost the same from Normal and Visual modes using q:, q/ and q? commands. Temporary file created for the purpose of editing the line has the following structure: - 1. First line, which is either empty or contains text already entered in command-line. - 2. 2nd and all other lines with history items starting with the most recent one. Altering this lines in any way won't change history items stored by vifm. After editing application is finished the first line of the file is taken as the result of operation, when the application returns zero exit code. If the application returns an error (see :cquit command in Vim), all the edits made to the file are ignored, but the initial value of the first line is saved in appropriate history. More Mode This is the mode that appears when status bar content is so big that it doesn't fit on the screen. 
One can identify the mode by "-- More --" message at the bottom. The following keys are handled in this mode: - Enter, Ctrl-J, j or Down scroll one line down. - Backspace, k or Up scroll one line up. - d scroll one page (half of a screen) down. - u scroll one page (half of a screen) up. - Space, f or PageDown scroll down a screen. - b or PageUp scroll up a screen. - G scroll to the bottom. - g scroll to the top. - q, Escape or Ctrl-C quit the mode. - : switch to command-line mode. Commands Commands are executed with :command_name<Enter> Commented out lines should start with the double quote symbol ("), which may be preceded by whitespace characters intermixed with colons. Inline comments can be added at the end of the line after double quote symbol, only last line of a multi-line command can contain such comment. Not all commands support inline comments as their syntax conflicts with names of registers and fields where double quotes are allowed. Most of the commands have two forms: complete and the short one. Example: :noh[lsearch] This means the complete command is nohlsearch, and the short one is noh. Most of command-line commands completely reset selection in the current view. However, there are several exceptions: - ":invert s" most likely leaves some files selected; - :if and :else commands doesn't affect selection on successful execution. '|' can be used to separate commands, so you can give multiple commands in one line. If you want to use '|' in an argument, precede it with '\'. These commands see '|' as part of their arguments even when it's escaped: :[range]! :autocmd :cmap :cnoremap :command :filetype :fileviewer :filextype :map :mmap :mnoremap :nmap :nnoremap :noremap :normal :qmap :qnoremap :vmap :vnoremap :wincmd :windo :winrun To be able to use another command after one of these, wrap it with the :execute command. An example: if filetype('.') == 'reg' | execute '!!echo regular file' | endif - :[count] - :number move to the file number. :12 would move to the 12th file in the list. :0 move to the top of the list. :$ move to the bottom of the list. - :[count]command The only builtin :[count]command are :[count]d[elete] and :[count]y[ank]. - :d3 would delete three files starting at the current file position moving down. - :3d would delete one file at the third line in the list. - :command [args] - :[range]!program execute command via shell. Accepts macros. :[range]!command & same as above, but the command is run in the background using vifm's means. Programs that write to stdout like "ls" create an error message showing partial output of the command. Note the space before ampersand symbol, if you omit it, command will be run in the background using job control of your shell. Accepts macros. - :!! - :[range]!!command same as :!, but pauses before returning. - :!! repeat the last command. - :alink - :[range]alink[!?] create absolute symbolic links to files in directory of inactive view. With "?" prompts for destination file names in an editor. "!" forces overwrite. - :[range]alink[!] path create absolute symbolic links to files in directory specified by the path (absolute or relative to directory of inactive view). - :[range]alink[!] name1 name2... create absolute symbolic links of files in directory of other view giving each next link a corresponding name from the argument list. - :apropos - :apropos keyword... create a menu of items returned by the apropos command. Selecting an item in the menu opens corresponding man page. 
By default the command relies on the external "apropos" utility, which can be customized by altering value of the 'aproposprg' option. - :autocmd - :au[tocmd] {event} {pat} {cmd} register autocommand for the {event}, which can be: - DirEnter - performed on entering a directory Event name is case insensitive. {pat} is a comma-separated list of modified globs patterns, which can contain tilde or environment variables. All paths use slash ('/') as directory separator. The pattern can start with a '!', which negates it. Patterns that do not contain slashes are matched against the last item of the path only (e.g. "dir" in "/path/dir"). Literal comma can be entered by doubling it. Two modifications to globs matching are as follows: - * - never matches a slash (i.e., can signify single directory level) - ** - matches any character (i.e., can match path of arbitrary depth) {cmd} is a :command or several of them separated with '|'. Examples of patterns: - conf.d - matches conf.d directory anywhere - *.d - matches directories ending with ".d" anywhere - **.git - matches something.git, but not .git anywhere - **/.git/** - matches /path/.git/objects, but not /path/.git - **/.git/**/ - matches /path/.git/ only (because of trailing slash) - /etc/* - matches /etc/conf.d/, /etc/X11, but not /etc/X11/fs - /etc/**/*.d - matches /etc/conf.d, /etc/X11/conf.d, etc. - /etc/**/* - matches /etc/ itself and any file below it - /etc/**/** - matches /etc/ itself and any file below it - :au[tocmd] [{event}] [{pat}] list those autocommands that match given event-pattern combination. {event} and {pat} can be omitted to list all autocommands. To list any autocommands for specific pattern one can use * placeholder in place of {event}. - :au[tocmd]! [{event}] [{pat}] remove autocommands that match given event-pattern combination. Syntax is the same as for listing above. - :apropos repeat last :apropos command. - :bmark - :bmark tag1 [tag2 [tag3...]] bookmark current directory with specified tags. - :bmark! path tag1 [tag2 [tag3...]] same as :bmark, but allows bookmarking specific path instead of current directory. This is for use in vifmrc and for bookmarking files. Path can contain macros that expand to single path (%c, %C, %d, %D) or those that can expand to multiple paths, but contain only one (%f, %F, %rx). The latter is done for convenience on using the command interactively. Complex macros that include spaces (e.g. "%c:gs/ /_") should be escaped. - :bmarks - :bmarks display all bookmarks in a menu. - :bmarks [tag1 [tag2...]] display menu of bookmarks that include all of the specified tags. - :bmgo - :bmgo [tag1 [tag2...]] when there are more than one match acts exactly like :bmarks, otherwise navigates to single match immediately (and fails if there is no match). - :cabbrev - :ca[bbrev] display menu of command-line mode abbreviations. - :ca[bbrev] lhs-prefix display command-line mode abbreviations which left-hand side starts with specified prefix. - :ca[bbrev] lhs rhs register new or overwrites existing abbreviation for command-line mode. rhs can contain spaces and any special sequences accepted in rhs of mappings (see "Mappings" section below). Abbreviations are expanded non-recursively. - :cnoreabbrev - :cnorea[bbrev] display menu of command-line mode abbreviations. - :cnorea[bbrev] lhs-prefix display command-line mode abbreviations which left-hand side starts with specified prefix. - :cnorea[bbrev] lhs rhs same as :cabbrev, but mappings in rhs are ignored during expansion. 
- :cd
- :cd or :cd ~ or :cd $HOME change to home directory.
- :cd - go to the last visited directory.
- :cd ~/dir change directory to ~/dir.
- :cd /curr/dir /other/dir change directory of the current pane to /curr/dir and directory of the other pane to /other/dir. Relative paths are assumed to be relative to directory of current view. Command won't fail if one of directories is invalid. All forms of the command accept macros.
- :cd! /dir same as :cd /dir /dir.

- :change
- :c[hange] create a menu window to alter a file's properties.

- :chmod
- :[range]chmod display the file attributes change dialog (permissions on *nix and properties on Windows).
- :[range]chmod[!] arg... only for *nix change permissions for files. See `man 1 chmod` for arg format. "!" means set permissions recursively.

- :chown
- :[range]chown only for *nix same as co key in normal mode.
- :[range]chown [user][:][group] only for *nix change owner and/or group of files. Operates on directories recursively.

- :clone
- :[range]clone[!?] clones files in current directory. With "?" vifm will open vi to edit file names. "!" forces overwrite. Macros are expanded.
- :[range]clone[!] path clones files to directory specified with the path (absolute or relative to current directory). "!" forces overwrite. Macros are expanded.
- :[range]clone[!] name1 name2... clones files in current directory giving each next clone a corresponding name from the argument list. "!" forces overwrite. Macros are expanded.

- :colorscheme
- :colo[rscheme]? print current color scheme name on the status bar.
- :colo[rscheme] display a menu with a list of available color schemes. You can choose primary color scheme here. It is used for view if no directory specific colorscheme fits current path. It's also used to set border color (except view titles) and colors in menus and dialogs.
- :colo[rscheme] color_scheme_name change primary color scheme to color_scheme_name. In case of errors (e.g. some colors are not supported by terminal) either nothing is changed or color scheme is reset to builtin colors to ensure that TUI is left in a usable state.
- :colo[rscheme] color_scheme_name directory associate directory with the color scheme. The directory argument can be either an absolute or a relative path when the :colorscheme command is executed from command line, but it must be an absolute path when the command is executed in scripts loaded at startup (until vifm is completely loaded).

- :comclear
- :comc[lear] remove all user defined commands.

- :command
- :com[mand] display a menu of user commands.
- :com[mand] beginning display user defined commands that start with the beginning.
- :com[mand] name action set a new user command. Trying to use a reserved command name will result in an error message. Use :com[mand]! to overwrite a previously set command. Unlike vim, user commands do not have to start with a capital letter. User commands are run in a shell by default. To run a command in the background you must set it as a background command with & at the end of the command's action (:com rm rm %f &). Command name cannot contain numbers or special symbols (except '?' and '!').
- :com[mand] name /pattern set search pattern.
- :com[mand] name =pattern set local filter value.
- :com[mand] name filter{:filter args} set file name filter (see :filter command description).
For example:

    " display only audio files
    :command onlyaudio filter/.+\.(mp3|wav|flac|ogg|m4a|wma|ape)$/i
    " display everything except audio files
    :command noaudio filter!/.+\.(mp3|wav|flac|ogg|m4a|wma|ape)$/i

- :com[mand] name :command set kind of an alias for internal command (like in a shell). Passes range given to alias to an aliased command, so running :%cp after

    :command cp :copy %a

equals :%copy

- :copy
- :[range]co[py][!?][ &] copy files to directory of other view. With "?" prompts for destination file names in an editor. "!" forces overwrite.
- :[range]co[py][!] path[ &] copy files to directory specified with the path (absolute or relative to directory of other view). "!" forces overwrite.
- :[range]co[py][!] name1 name2...[ &] copy files to directory of other view giving each next file a corresponding name from the argument list. "!" forces overwrite.

- :cquit
- :cq[uit][!] same as :quit, but also aborts directory choosing via --choose-dir (empties output file) and returns non-zero exit code.

- :cunabbrev
- :cuna[bbrev] lhs unregister command-line mode abbreviation by its lhs.
- :cuna[bbrev] rhs unregister command-line mode abbreviation by its rhs, so that abbreviation could be removed even after expansion.

- :delbmarks
- :delbmarks remove bookmarks from current directory.
- :delbmarks tag1 [tag2 [tag3...]] remove set of bookmarks that include all of the specified tags.
- :delbmarks! remove all bookmarks.
- :delbmarks! path1 [path2 [path3...]] remove bookmarks of listed paths.

- :delcommand
- :delc[ommand] user_command remove user defined command named user_command.

- :delete
- :[range]d[elete][!][ &] delete selected file or files. "!" means complete removal (omitting trash).
- :[range]d[elete][!] [reg] [count][ &] delete selected or [count] files to the reg register. "!" means complete removal (omitting trash).

- :delmarks
- :delm[arks]! delete all marks.
- :delm[arks] marks ... delete specified marks, each argument is treated as a set of marks.

- :display
- :di[splay] display menu with registers content.
- :di[splay] list ... display the contents of the numbered and named registers that are mentioned in list (for example "az to display "", "a and "z content).

- :dirs
- :dirs display directory stack.

- :echo
- :ec[ho] [<expr>...] evaluate each argument as an expression and output them separated with a space. See help on :let command for a definition of <expr>.

- :edit
- :[range]e[dit] [file...] open selected or passed file(s) in editor. Macros and environment variables are expanded.

- :else
- :el[se] execute commands until next matching :endif if all other conditions didn't match. See also help on :if and :endif commands.

- :elseif
- :elsei[f] {expr1} execute commands until next matching :elseif, :else or :endif if conditions of previous :if and :elseif branches were evaluated to zero. See also help on :if and :endif commands.

- :empty
- :empty permanently remove files from all existing non-empty trash directories (see "Trash directory" section below). Also remove all operations from undolist that have no sense after :empty and remove all records about files located inside directories from all registers. Removal is performed as a background task with undetermined amount of work and can be checked via :jobs menu.

- :endif
- :en[dif] end conditional block. See also help on :if and :else commands.

- :execute
- :exe[cute] [<expr>...]
evaluate each argument as an expression and join results separated by a space to get a single string which is then executed as a command-line command. See help on :let command for a definition of <expr>. - :exit - :exi[t][!] same as :quit. - :file - :f[ile][ &] display menu of programs set for the file type of the current file. " &" forces running associated program in background. - :f[ile] arg[ &] run associated command that begins with the arg skipping opening menu. " &" forces running associated program in background. - :filetype - :filet[ype] pattern-list [{descr}]def_prog[ &],[{descr}]prog2[ &],... associate given program list to each of the patterns. Associated program (command) is used by handlers of l and Enter keys (and also in the :file menu). If you need to insert comma into command just double it (",,"). Space followed by an ampersand as two last characters of a command means running of the command in the background. Optional description can be given to each command to ease understanding of what command will do in the :file menu. Vifm will try the rest of the programs for an association when the default isn't found. When program entry doesn't contain any of vifm macros, name of current file is appended as if program entry ended with %c macro on *nix and %"c on Windows. On Windows path to executables containing spaces can (and should be for correct work with such paths) be double quoted. See "Patterns" section below for pattern definition. See also "Automatic FUSE mounts" section below. Example for zip archives and several actions: filetype *.zip,*.jar,*.war,*.ear \ {Mount with fuse-zip} \ FUSE_MOUNT|fuse-zip %SOURCE_FILE %DESTINATION_DIR, \ {View contents} \ zip -sf %c | less, \ {Extract here} \ tar -xf %c, - :filet[ype] filename list (in menu mode) currently registered patterns that match specified file name. Same as ":filextype filename". - :filextype - :filex[type] pattern-list [{ description }] def_program,program2,... same as :filetype, but this command is ignored if not running in X. In X :filextype is equal to :filetype. See "Patterns" section below for pattern definition. See also "Automatic FUSE mounts" section below. For example, consider the following settings (the order might seem strange, but it's for the demonstration purpose): filetype *.html,*.htm \ {View in lynx} \ lynx filextype *.html,*.htm \ {Open with dwb} \ dwb %f %i &, filetype *.html,*.htm \ {View in links} \ links filextype *.html,*.htm \ {Open with firefox} \ firefox %f &, \ {Open with uzbl} \ uzbl-browser %f %i &, If you're using vifm inside a terminal emulator that is running in graphical environment (when X is used on *nix; always on Windows), vifm attempts to run application in this order: 1. lynx 2. dwb 3. links 4. firefox 5. uzbl If there is no graphical environment (checked presence of $DISPLAY environment variable on *nix; never happens on Windows), the list will look like: 1. lynx 2. links Just as if all :filextype commands were not there. The purpose of such differentiation is to allow comfortable use of vifm with same settings in desktop environment/through remote connection (SSH)/in native console. - :filext[ype] filename list (in menu mode) currently registered patterns that match specified file name. Same as ":filetype filename". - :fileviewer - :filev[iewer] pattern-list command1,command2,... register specified list of commands as viewers for each of the patterns. 
Viewer is a command whose output is captured and displayed in one of the panes of vifm after pressing "e" or running :view command. When the command doesn't contain any of vifm macros, name of current file is appended as if command ended with %c macro. Comma escaping and missing commands processing rules as for :filetype apply to this command. See "Patterns" section below for pattern definition.

Example for zip archives:

    fileviewer *.zip,*.jar,*.war,*.ear zip -sf %c, echo "No zip to preview:"

- :filev[iewer] filename list (in menu mode) currently registered patterns that match specified filename.

- :filter
- :filter[!] regular_expression_pattern
- :filter[!] /regular_expression_pattern/[flags] will filter all the files out of the directory listing that match the regular expression. Using second variant you can use the bar ('|') symbol without escaping. Empty regular expression (specified by //, "" or '') means using of the last search pattern. Use '!' to control state of filter inversion after updating filter value (also see 'cpoptions' description). Filter is matched case sensitively on *nix and case insensitively on Windows. Supported flags:
- "i" makes filter case insensitive;
- "I" makes filter case sensitive.

Flags might be repeated multiple times, later ones win (e.g. "iiiI" is equivalent to "I" and "IiIi" is the same as "i").

    " filter all files ending in .o from the filelist
    :filter /\.o$/

Note: vifm uses extended regular expressions.

- :filter reset filter (set it to empty string) and show all files.
- :filter! same as :invert.
- :filter? show information on local, name and auto filters.

- :find
- :[range]fin[d] pattern display results of find command in the menu. Searches among selected files if any. Accepts macros. By default the command relies on the external "find" utility, which can be customized by altering value of the 'findprg' option.
- :[range]fin[d] -opt... same as :find above, but user defines all find arguments. Searches among selected files if any.
- :[range]fin[d] path -opt... same as :find above, but user defines all find arguments. Ignores selection and range.
- :[range]fin[d] repeat last :find command.

- :finish
- :fini[sh] stop sourcing a script. Can only be used in a vifm script file. This is a quick way to skip the rest of the file.

- :grep
- :[range]gr[ep][!] pattern will show results of grep command in the menu. Add "!" to request inversion of search (look for lines that do not match pattern). Searches among selected files if any and no range given. Ignores binary files by default. By default the command relies on the external "grep" utility, which can be customized by altering value of the 'grepprg' option.
- :[range]gr[ep][!] -opt... same as :grep above, but user defines all grep arguments, which are not escaped. Searches among selected files if any.
- :[range]gr[ep][!] repeats last :grep command. "!" of this command inverts "!" in repeated command.

- :help
- :h[elp] show the help file.
- :h[elp] argument is the same as using ':h argument' in vim. Use vifm-<something> to get help on vifm (tab completion works). This form of the command doesn't work when 'vimhelp' option is off.

- :highlight
- :hi[ghlight] will show information about all highlight groups in the current directory.
- :hi[ghlight] clear will reset all highlighting to builtin defaults.
- :hi[ghlight] ( group-name | {pat1,pat2,...} | /regexp/ ) will show information on given highlight group or file name pattern of color scheme used in the active view.
- :hi[ghlight] ( group-name | {pat1,pat2,...} | /regexp/[iI] ) cterm=style | ctermfg=color | ctermbg=color sets style (cterm), foreground (ctermfg) and/or background (ctermbg) parameters of highlight group or file name pattern for color scheme used in the active view.

All style values as well as color names are case insensitive.

Available style values (some of them can be combined):
- bold
- underline
- reverse or inverse
- standout
- none

Available group-name values:
- Win - color of all windows (views, dialogs, menus) and default color for their content (e.g. regular files in views)
- Border - color of vertical parts of the border
- TopLineSel - top line color of the current pane
- TopLine - top line color of the other pane
- CmdLine - the command line/status bar color
- ErrorMsg - color of error messages in the status bar
- StatusLine - color of the line above the status bar
- JobLine - color of job line that appears above the status line
- WildMenu - color of the wild menu items
- SuggestBox - color of key suggestion box
- CurrLine - line at cursor position in active view
- OtherLine - line at cursor position in inactive view
- Selected - color of selected files
- Directory - color of directories
- Link - color of symbolic links in the views
- BrokenLink - color of broken symbolic links
- Socket - color of sockets
- Device - color of block and character devices
- Executable - color of executable files
- Fifo - color of fifo pipes

Available colors:
- -1 or default or none - default or transparent
- black and lightblack
- red and lightred
- green and lightgreen
- yellow and lightyellow
- blue and lightblue
- magenta and lightmagenta
- cyan and lightcyan
- white and lightwhite
- 0-255 - corresponding colors from 256-color palette

Light versions of colors are regular colors with bold attribute set. So order of arguments of :highlight command is important and it's better to put "cterm" in front of others to prevent it from overwriting attributes set by "ctermfg" or "ctermbg" arguments.

For convenience of color scheme authors xterm-like names for the 256-color palette are also supported. The mapping is taken from . Duplicated entries were altered by adding an underscore followed by a numerical suffix.
0 Black 86 Aquamarine1 172 Orange3 1 Red 87 DarkSlateGray2 173 LightSalmon3_2 2 Green 88 DarkRed_2 174 LightPink3 3 Yellow 89 DeepPink4_2 175 Pink3 4 Blue 90 DarkMagenta 176 Plum3 5 Magenta 91 DarkMagenta_2 177 Violet 6 Cyan 92 DarkViolet 178 Gold3_2 7 White 93 Purple 179 LightGoldenrod3 8 LightBlack 94 Orange4_2 180 Tan 9 LightRed 95 LightPink4 181 MistyRose3 10 LightGreen 96 Plum4 182 Thistle3 11 LightYellow 97 MediumPurple3 183 Plum2 12 LightBlue 98 MediumPurple3_2 184 Yellow3_2 13 LightMagenta 99 SlateBlue1 185 Khaki3 14 LightCyan 100 Yellow4 186 LightGoldenrod2 15 LightWhite 101 Wheat4 187 LightYellow3 16 Grey0 102 Grey53 188 Grey84 17 NavyBlue 103 LightSlateGrey 189 LightSteelBlue1 18 DarkBlue 104 MediumPurple 190 Yellow2 19 Blue3 105 LightSlateBlue 191 DarkOliveGreen1 20 Blue3_2 106 Yellow4_2 192 DarkOliveGreen1_2 21 Blue1 107 DarkOliveGreen3 193 DarkSeaGreen1_2 22 DarkGreen 108 DarkSeaGreen 194 Honeydew2 23 DeepSkyBlue4 109 LightSkyBlue3 195 LightCyan1 24 DeepSkyBlue4_2 110 LightSkyBlue3_2 196 Red1 25 DeepSkyBlue4_3 111 SkyBlue2 197 DeepPink2 26 DodgerBlue3 112 Chartreuse2_2 198 DeepPink1 27 DodgerBlue2 113 DarkOliveGreen3_2 199 DeepPink1_2 28 Green4 114 PaleGreen3_2 200 Magenta2_2 29 SpringGreen4 115 DarkSeaGreen3 201 Magenta1 30 Turquoise4 116 DarkSlateGray3 202 OrangeRed1 31 DeepSkyBlue3 117 SkyBlue1 203 IndianRed1 32 DeepSkyBlue3_2 118 Chartreuse1 204 IndianRed1_2 33 DodgerBlue1 119 LightGreen_2 205 HotPink 34 Green3 120 LightGreen_3 206 HotPink_2 35 SpringGreen3 121 PaleGreen1 207 MediumOrchid1_2 36 DarkCyan 122 Aquamarine1_2 208 DarkOrange 37 LightSeaGreen 123 DarkSlateGray1 209 Salmon1 38 DeepSkyBlue2 124 Red3 210 LightCoral 39 DeepSkyBlue1 125 DeepPink4_3 211 PaleVioletRed1 40 Green3_2 126 MediumVioletRed 212 Orchid2 41 SpringGreen3_2 127 Magenta3 213 Orchid1 42 SpringGreen2 128 DarkViolet_2 214 Orange1 43 Cyan3 129 Purple_2 215 SandyBrown 44 DarkTurquoise 130 DarkOrange3 216 LightSalmon1 45 Turquoise2 131 IndianRed 217 LightPink1 46 Green1 132 HotPink3 218 Pink1 47 SpringGreen2_2 133 MediumOrchid3 219 Plum1 48 SpringGreen1 134 MediumOrchid 220 Gold1 49 MediumSpringGreen 135 MediumPurple2 221 LightGoldenrod2_2 50 Cyan2 136 DarkGoldenrod 222 LightGoldenrod2_3 51 Cyan1 137 LightSalmon3 223 NavajoWhite1 52 DarkRed 138 RosyBrown 224 MistyRose1 53 DeepPink4 139 Grey63 225 Thistle1 54 Purple4 140 MediumPurple2_2 226 Yellow1 55 Purple4_2 141 MediumPurple1 227 LightGoldenrod1 56 Purple3 142 Gold3 228 Khaki1 57 BlueViolet 143 DarkKhaki 229 Wheat1 58 Orange4 144 NavajoWhite3 230 Cornsilk1 59 Grey37 145 Grey69 231 Grey100 60 MediumPurple4 146 LightSteelBlue3 232 Grey3 61 SlateBlue3 147 LightSteelBlue 233 Grey7 62 SlateBlue3_2 148 Yellow3 234 Grey11 63 RoyalBlue1 149 DarkOliveGreen3_3 235 Grey15 64 Chartreuse4 150 DarkSeaGreen3_2 236 Grey19 65 DarkSeaGreen4 151 DarkSeaGreen2 237 Grey23 66 PaleTurquoise4 152 LightCyan3 238 Grey27 67 SteelBlue 153 LightSkyBlue1 239 Grey30 68 SteelBlue3 154 GreenYellow 240 Grey35 69 CornflowerBlue 155 DarkOliveGreen2 241 Grey39 70 Chartreuse3 156 PaleGreen1_2 242 Grey42 71 DarkSeaGreen4_2 157 DarkSeaGreen2_2 243 Grey46 72 CadetBlue 158 DarkSeaGreen1 244 Grey50 73 CadetBlue_2 159 PaleTurquoise1 245 Grey54 74 SkyBlue3 160 Red3_2 246 Grey58 75 SteelBlue1 161 DeepPink3 247 Grey62 76 Chartreuse3_2 162 DeepPink3_2 248 Grey66 77 PaleGreen3 163 Magenta3_2 249 Grey70 78 SeaGreen3 164 Magenta3_3 250 Grey74 79 Aquamarine3 165 Magenta2 251 Grey78 80 MediumTurquoise 166 DarkOrange3_2 252 Grey82 81 SteelBlue1_2 167 IndianRed_2 253 Grey85 82 Chartreuse2 168 HotPink3_2 
254 Grey89
 83 SeaGreen2          169 HotPink2          255 Grey93
 84 SeaGreen1          170 Orchid
 85 SeaGreen1_2        171 MediumOrchid1

There are two colors (foreground and background) and only one bold attribute. Thus single bold attribute affects both colors when "reverse" attribute is used in vifm run inside terminal emulator. At the same time linux native console can handle boldness of foreground and background colors independently, but for consistency with terminal emulators this is available only implicitly by using light versions of colors. This behaviour might be changed in the future.

Although vifm supports 256 colors in a sense they are supported by UI drawing library, whether you will be able to use all of them highly depends on your terminal. To set up terminal properly, make sure that $TERM in the environment you run vifm is set to name of 256-color terminal (on *nixes it can also be set via X resources), e.g. xterm-256color. One can find list of available terminal names by listing /usr/lib/terminfo/. Number of colors supported by terminal with current settings can be checked via "tput colors" command.

Here is the hierarchy of highlight groups, which you need to know for using transparency:

    JobLine
      SuggestBox
      StatusLine
        WildMenu
      Border
    CmdLine
      ErrorMsg
    Win
      File name specific highlights
      Directory
      Link
      BrokenLink
      Socket
      Device
      Fifo
      Executable
      Selected
      CurrLine
      OtherLine
    TopLine
      TopLineSel

"none" means default terminal color for highlight groups at the first level of the hierarchy and transparency for all others.

Here file name specific highlights mean those configured via globs ({}) or regular expressions (//). At most one of them is applied per file entry, namely the first that matches file name, hence order of :highlight commands might be important in certain cases.

- :history
- :his[tory] creates a pop-up menu of directories visited.
- :his[tory] x x can be:
  d[ir] or . show directory history.
  c[md] or : show command line history.
  s[earch] or / show search history and search forward on l key.
  f[search] or / show search history and search forward on l key.
  b[search] or ? show search history and search backward on l key.
  i[nput] or @ show prompt history (e.g. on one file renaming).
  fi[lter] or = show filter history (see description of the "=" normal mode command).

- :if
- :if {expr1} starts conditional block. Commands are executed until next matching :elseif, :else or :endif command if {expr1} evaluates to non-zero, otherwise they are ignored. See also help on :else and :endif commands.

Example:

    if $TERM == 'screen.linux'
        highlight CurrLine ctermfg=lightwhite ctermbg=lightblack
    elseif $TERM == 'tmux'
        highlight CurrLine cterm=reverse ctermfg=black ctermbg=white
    else
        highlight CurrLine cterm=bold,reverse ctermfg=black ctermbg=white
    endif

- :invert
- :invert [f] invert file name filter.
- :invert? [f] show current filter state.
- :invert s invert selection.
- :invert o invert sorting order of the primary sorting key.
- :invert? o show sorting order of the primary sorting key.

- :jobs
- :jobs shows menu of current backgrounded processes.

- :let
- :let $ENV_VAR = <expr> sets environment variable. Warning: setting environment variable to an empty string on Windows removes it.
- :let $ENV_VAR .= <expr> append value to environment variable.
- :let &[l:|g:]opt = <expr> sets option value.
- :let &[l:|g:]opt .= <expr> append value to string option.
- :let &[l:|g:]opt += <expr> increasing option value, adding sub-values.
- :let &[l:|g:]opt -= <expr> decreasing option value, removing sub-values.
- Where <expr> could be a single-quoted string, double-quoted string, an environment variable, function call or a concatenation of any of them in any order using the '.' operator. Any whitespace is ignored.

- :locate
- :locate filename use "locate" command to create a menu of filenames. Selecting a file from the menu will reload the current file list in vifm to show the selected file. By default the command relies on the external "locate" utility (it's assumed that its database is already built), which can be customized by altering value of the 'locateprg' option.
- :locate repeats last :locate command.

- :ls
- :ls lists windows of active terminal multiplexer (only when terminal multiplexer is used). This is achieved by issuing proper command for active terminal multiplexer, thus the list is not handled by vifm.

- :lstrash
- :lstrash displays a menu with list of files in trash. Each element of the list is original path of a deleted file, thus the list can contain duplicates.

- :mark
- :[range]ma[rk][?] x [/full/path] [filename] Set mark x (a-zA-Z0-9) at /full/path and filename. By default current directory is used. If no filename was given and /full/path is current directory then last file in [range] is used. Using of macros is allowed. Question mark will stop command from overwriting existing marks.

- :marks
- :marks create a pop-up menu of marks.
- :marks list ... display the contents of the marks that are mentioned in list.

- :messages
- :mes[sages] shows previously given messages (up to 50).

- :mkdir
- :mkdir[!] dir ... creates directories with given names. "!" means make parent directories as needed. Macros are expanded.

- :move
- :[range]m[ove][!?][ &] move files to directory of other view. With "?" prompts for destination file names in an editor. "!" forces overwrite.
- :[range]m[ove][!] path[ &] move files to directory specified with the path (absolute or relative to directory of other view). "!" forces overwrite.
- :[range]m[ove][!] name1 name2...[ &] move files to directory of other view giving each next file a corresponding name from the argument list. "!" forces overwrite.

- :nohlsearch
- :noh[lsearch] clear selection in current pane.

- :normal
- :norm[al][!] commands execute normal mode commands. If "!" is used, user defined mappings are ignored. Unfinished last command is aborted as if <esc> or <c-c> was typed. A ":" should be completed as well. Commands can't start with a space, so put a count of 1 (one) before it.

- :only
- :on[ly] switch to a one window view.

- :popd
- :popd remove pane directories from stack.

- :pushd
- :pushd[!] /curr/dir [/other/dir] add pane directories to stack and process arguments like :cd command.
- :pushd exchange the top two items of the directory stack.

- :put
- :pu[t][!] [reg] [ &] puts files from specified register (" by default) into current directory. "!" moves files from their original location instead of copying them. During this operation no confirmation dialogs will be shown, all checks are performed beforehand.

- :pwd
- :pw[d] show the present working directory.

- :quit
- :q[uit][!] exit vifm (add ! to skip saving changes and checking for active backgrounded commands).

- :redraw
- :redr[aw] redraw the screen immediately.

- :registers
- :reg[isters] display menu with registers content.
- :reg[isters] list ... display the contents of the numbered and named registers that are mentioned in list (for example "az to display "", "a and "z content).

- :rename
- :[range]rename[!] rename files using vi to edit names. "!"
means go recursively through directories.
- :[range]rename name1 name2... rename each of selected files to a corresponding name.

- :restart
- :restart frees a lot of things (histories, commands, etc.), rereads vifminfo and vifmrc files and runs startup commands passed in the argument list, thus losing all unsaved changes (e.g. recent history or keys mapped in current session).

- :restore
- :[range]restore restore file from trash directory, doesn't work outside one of trash directories. See "Trash directory" section below.

- :rlink
- :[range]rlink[!?] create relative symbolic links to files in directory of other view. With "?" prompts for destination file names in an editor. "!" forces overwrite.
- :[range]rlink[!] path create relative symbolic links of files in directory specified with the path (absolute or relative to directory of other view). "!" forces overwrite.
- :[range]rlink[!] name1 name2... create relative symbolic links of files in directory of other view giving each next link a corresponding name from the argument list. "!" forces overwrite.

- :screen
- :screen toggle whether to use the terminal multiplexer or not.

A terminal multiplexer uses pseudo terminals to allow multiple windows to be used in the console or in a single xterm. Starting vifm from terminal multiplexer with appropriate support turned on will cause vifm to open a new terminal multiplexer window for each new file edited or program launched from vifm. This requires screen version 3.9.9 or newer for the screen -X argument or tmux (1.8 version or newer is recommended).

- :screen? display whether integration with terminal multiplexers is enabled.

Note: the command is called screen for historical reasons (when tmux wasn't yet supported) and might be changed in future releases, or get an alias.

- :[range]select select files in the given range (current file if no range is given).
- :select {pattern} select files that match specified pattern. Possible {pattern} forms are described in "Patterns" section below. Trailing slash for directories is taken into account, so `:select! */ | invert s` selects only files.
- :select //[iI] same as item above, but reuses last search pattern.
- :select !{external command} select files from the list supplied by external command. Files are matched by full paths, relative paths are converted to absolute ones beforehand.
- :[range]select! [{pattern}] same as above, but resets previously selected items before proceeding.

- :set
- :se[t] display all options that differ from their default value.
- :se[t] all display all options.
- :se[t] opt1=val1 opt2='val2' opt3="val3" ... sets given options. For local options both values are set.

You can use following syntax:
- for all options - option, option? and option&
- for boolean options - nooption, invoption and option!
- for integer options - option=x, option+=x and option-=x
- for string options - option=x and option+=x
- for string list options - option=x, option+=x and option-=x
- for enumeration options - option=x, option+=x and option-=x
- for set options - option=x, option+=x and option-=x
- for charset options - option=x, option+=x, option-=x and option^=x

the meaning:
- option - turn option on (for boolean) or print its value (for all others)
- nooption - turn option off
- invoption - invert option state
- option! - invert option state
- option? - print option value
- option& - reset option to its default value
- option=x or option:x - set option to x
- option+=x - add/append x to option
- option-=x - remove (or subtract) x from option
- option^=x - toggle x presence among values of the option

Option name can be prepended and appended by any number of whitespace characters.

- :setglobal
- :setg[lobal] display all global options that differ from their default value.
- :setg[lobal] all display all global options.
- :setg[lobal] opt1=val1 opt2='val2' opt3="val3" ... same as :set, but changes/prints only global options or global values of local options. Changes to the latter might not be visible until directory is changed.

- :setlocal
- :setl[ocal] display all local options that differ from their default value.
- :setl[ocal] all display all local options.
- :setl[ocal] opt1=val1 opt2='val2' opt3="val3" ... same as :set, but changes/prints only local values of local options.

- :shell
- :sh[ell][!] start a shell in current directory. "!" suppresses spawning dedicated window of terminal multiplexer for a shell. To make vifm adaptive to environment it uses $SHELL if it's defined, otherwise 'shell' value is used.

- :sort
- :sor[t] display dialog with different sorting methods, where one can select primary sorting key. When 'viewcolumns' option is empty and 'lsview' is off, changing primary sorting key will also affect view look (in particular the second column of the view will be changed).

- :source
- :so[urce] file read command-line commands from the file.

- :split
- :sp[lit] switch to a two window horizontal view.
- :sp[lit]! toggle horizontal window splitting.
- :sp[lit] path splits the window horizontally to show both file directories. Also changes other pane to path (absolute or relative to current directory of active pane).

- :substitute
- :[range]s[ubstitute]/pattern/string/[flags] for each file in range replace a match of pattern with string. String can contain \0...\9 to link to capture groups (\0 - all match, \1 - first group, etc.). Pattern is stored in search history. Available flags:
- i - ignore case (the 'ignorecase' and 'smartcase' options are not used)
- I - don't ignore case (the 'ignorecase' and 'smartcase' options are not used)
- g - substitute all matches in each file name (each g toggles this)
- :[range]s[ubstitute]/pattern substitute pattern with an empty string.
- :[range]s[ubstitute]//string/[flags] use last pattern from search history.
- :[range]s[ubstitute] repeat previous substitution command.

- :sync
- :sync [relative path] change the other pane to the current pane directory or to some path relative to the current directory. Using macros is allowed.
- :sync! change the other pane to the current pane directory and synchronize cursor position. If current pane displays custom list of files, position before entering it is used (current one might not make any sense).
- :sync! [location | cursorpos | localopts | filters | filelist | all]... change enumerated properties of the other pane to match corresponding properties of the current pane. Arguments have the following meanings:
- location - current directory of the pane;
- cursorpos - cursor position (doesn't make sense without "location");
- localopts - all local options;
- filters - all filters;
- filelist - list of files for custom view (implies "location");
- all - all of the above.

- :touch
- :touch file... create file(s). Aborts on errors. Doesn't update time of existing files. Macros are expanded.
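For instance, a couple of the commands above as they might be typed on the command line (the file names involved are purely illustrative):

    " replace spaces with underscores in names of all files in the directory
    :%s/ /_/g
    " make the other pane mirror this pane's location and filters
    :sync! location filters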
- :tr - :[range]tr/pattern/string/ for each file in range transliterate the characters which appear in pattern to the corresponding character in string. When string is shorter than pattern, it's padded with its last character. - :trashes - :trashes lists all valid trash directories in a menu. Only non-empty and writable trash directories are shown. This is exactly the list of directories that are cleared when :empty command is executed. - :trashes? same as :trashes, but also displays size of each trash directory. - :undolist - :undol[ist] display list of latest changes. Use "!" to see actual commands. - :unlet - :unl[et][!] $ENV_VAR1 $ENV_VAR2 ... remove environment variables. Add ! to omit displaying of warnings about nonexistent variables. - :unselect - :[range]unselect unselect files in the given range (current file if no range is given). - :unselect {pattern} unselect files that match specified pattern. Possible {pattern} forms are described in "Patterns" section below. Trailing slash for directories is taken into account, so `:unselect */` unselects directories. - :unselect !{external command} unselect files from the list supplied by external command. Files are matched by full paths, relative paths are converted to absolute ones beforehand. - :unselect //[iI] same as item above, but reuses last search pattern. - :version - :ve[rsion] show menu with version information. - :vifm - :vifm same as :version. - :view - :vie[w] toggle on and off the quick file view. - :vie[w]! turn on quick file view if it's off. - :volumes - :volumes only for MS-Windows display menu with volume list. Hitting l (or Enter) key opens appropriate volume in the current pane. - :vsplit - :vs[plit] switch to a two window vertical view. - :vs[plit]! toggle window vertical splitting. - :vs[plit] path split the window vertically to show both file directories. And changes other pane to path (absolute or relative to current directory of active pane). - :wincmd - :[count]winc[md] {arg} same as running Ctrl-W [count] {arg}. - :windo - :windo [command...] execute command for each pane (same as :winrun % command). - :winrun - :winrun type [command...] execute command for pane(s), which is determined by type argument: - ^ - top-left pane - $ - bottom-right pane - % - all panes - . - current pane - , - other pane - :write - :w[rite] write vifminfo file. - :wq - :wq[!] same as :quit, but ! only disables check of backgrounded commands. - :xit - :x[it][!] will exit Vifm (add ! if you don't want to save changes). - :yank - :[range]y[ank] [reg] [count] will yank files to the reg register. - :map lhs rhs - :map lhs rhs map lhs key sequence to rhs in normal and visual modes. - :map! lhs rhs map lhs key sequence to rhs in command line mode. - :cm[ap] lhs rhs map lhs to rhs in command line mode. - :mm[ap] lhs rhs map lhs to rhs in menu mode. - :nm[ap] lhs rhs map lhs to rhs in normal mode. - :qm[ap] lhs rhs map lhs to rhs in view mode. - :vm[ap] lhs rhs map lhs to rhs in visual mode. - :map - :cm[ap] list all maps in command line mode. - :mm[ap] list all maps in menu mode. - :nm[ap] list all maps in normal mode. - :qm[ap] list all maps in view mode. - :vm[ap] list all maps in visual mode. - :map beginning - :cm[ap] beginning list all maps in command line mode that start with the beginning. - :mm[ap] beginning list all maps in menu mode that start with the beginning. - :nm[ap] beginning list all maps in normal mode that start with the beginning. - :qm[ap] beginning list all maps in view mode that start with the beginning. 
- :vm[ap] beginning list all maps in visual mode that start with the beginning.

- :noremap
- :no[remap] lhs rhs map the key sequence lhs to {rhs} for normal and visual modes, but disallow mapping of rhs.
- :no[remap]! lhs rhs map the key sequence lhs to {rhs} for command line mode, but disallow mapping of rhs.
- :cno[remap] lhs rhs map the key sequence lhs to {rhs} for command line mode, but disallow mapping of rhs.
- :mn[oremap] lhs rhs map the key sequence lhs to {rhs} for menu mode, but disallow mapping of rhs.
- :nn[oremap] lhs rhs map the key sequence lhs to {rhs} for normal mode, but disallow mapping of rhs.
- :qn[oremap] lhs rhs map the key sequence lhs to {rhs} for view mode, but disallow mapping of rhs.
- :vn[oremap] lhs rhs map the key sequence lhs to {rhs} for visual mode, but disallow mapping of rhs.

- :unmap
- :unm[ap] lhs remove the mapping of lhs from normal and visual modes.
- :unm[ap]! lhs remove the mapping of lhs from command line mode.
- :cu[nmap] lhs remove the mapping of lhs from command line mode.
- :mu[nmap] lhs remove the mapping of lhs from menu mode.
- :nun[map] lhs remove the mapping of lhs from normal mode.
- :qun[map] lhs remove the mapping of lhs from view mode.
- :vu[nmap] lhs remove the mapping of lhs from visual mode.

Ranges

The ranges implemented include:

    2,3 - from second to third file in the list (including it)
    %   - the entire directory.
    .   - the current position in the filelist.
    $   - the end of the filelist.
    't  - the mark position t.

Examples:

    :%delete would delete all files in the directory.
    :2,4delete would delete the files in the list positions 2 through 4.
    :.,$delete would delete the files from the current position to the end of the filelist.
    :3delete4 would delete the files in the list positions 3, 4, 5, 6.

If a backward range is given (:4,2delete), a query message is given and the user can choose what to do next.

The builtin commands that accept a range are :d[elete] and :y[ank].

Command macros

The command macros may be used in user commands.

- %a User arguments. When user arguments contain macros, they are expanded before performing substitution of %a.
- %c %"c The current file under the cursor.
- %C %"C The current file under the cursor in the other directory.
- %f %"f All of the selected files.
- %F %"F All of the selected files in the other directory list.
- %b %"b Same as %f %F.
- %d %"d Full path to current directory.
- %D %"D Full path to other file list directory.
- %rx %"rx Full paths to files in the register {x}. In case of invalid symbol in place of {x}, it's processed with the rest of the line and default register is used.
- %m Show command output in a menu.
- %M Same as %m, but l (or Enter) key is handled like for :locate and :find commands.
- %u Process command output as list of paths and compose custom view out of it.
- %U Same as %u, but implies less list updates inside vifm, which is absence of sorting at the moment.
- %S Show command output in the status bar.
- %s Execute command in split window of active terminal multiplexer (ignored if not running inside one).
- %n Forbid using of terminal multiplexer to run the command.
- %i Completely ignore command output.
- %pc Marks end of the main command and beginning of the clear command, which is invoked on closing preview of a file.

The following dimensions and coordinates are in characters:
- %px x coordinate of top-left corner of preview area.
- %py y coordinate of top-left corner of preview area.
- %pw width of preview area.
- %ph height of preview area.
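As a sketch of how the preview macros fit together, a fileviewer entry could pass the preview area geometry to an external previewer; here "draw-img" and "clear-img" are hypothetical helper scripts, not programs shipped with vifm:

    fileviewer *.png,*.jpg
        \ draw-img %px %py %pw %ph %c
        \ %pc clear-img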
Use %% if you need to put a percent sign in your command.

Note that %m, %M, %s, %S, %i, %u and %U macros are mutually exclusive. Only the last one of them on the command will take effect.

You can use file name modifiers after %c, %C, %f, %F, %b, %d and %D macros. Supported modifiers are:
- :p - full path
- :u - UNC name of path (e.g. "\\server" in "\\server\share"), Windows only. Expands to current computer name for not UNC paths.
- :~ - relative to the home directory
- :. - relative to current directory
- :h - head of the file name
- :t - tail of the file name
- :r - root of the file name (without last extension)
- :e - extension of the file name (last one)
- :s?pat?sub? - substitute the first occurrence of pat with sub. You can use any character for '?', but it must not occur in pat or sub.
- :gs?pat?sub? - like :s, but substitutes all occurrences of pat with sub.

See ':h filename-modifiers' in Vim's documentation for the detailed description.

Using %x means expand corresponding macro escaping all characters that have special meaning. And %"x means using of double quotes and escape only backslash and double quote characters, which is more useful on Windows systems.

Position and quantity (if there is any) of %m, %M, %S or %s macros in the command is unimportant. All their occurrences are removed from the resulting command.

%c and %f macros are expanded to file names only, when %C and %F are expanded to full paths. %f and %F follow this in %b too.

- :com move mv %f %D

sets the :move command to move all of the files selected in the current directory to the other directory.

- The %a macro is replaced with any arguments given to an alias command. All arguments are considered optional.

    :com lsl !!ls -l %a

sets the lsl command to execute ls -l with or without an argument.

- :lsl<Enter> will list the directory contents of the current directory.
- :lsl filename<Enter> will list only the given filename.

- The macros can also be used in directly executing commands. ":!mv %f %D" would move the current directory selected files to the other directory.

- Appending & to the end of a command causes it to be executed in the background. Typically you want to run two kinds of external commands in the background:
- GUI applications that don't fork and thus block vifm (:!sxiv %f &);
- console tools that do not work with terminal (:!mv %f %D &).

You don't want to run terminal commands, which require terminal input or output something, in the background because they will mess up vifm's TUI. Anyway, if you did run such a command, you can use the Ctrl-L key to update vifm's TUI.

Rewriting the example command with macros given above with backgrounding:

    :com move mv %f %D &

%m, %M, %s, %S, %u and %U macros cannot be combined with background mark (" &") as it doesn't make much sense.

Command backgrounding

Copy and move operations can take a lot of time. That's why vifm supports backgrounding of these two operations. To run a :copy, :move or :delete command in the background just add " &" at the end of the command.

For each background operation a new thread is created. Currently jobs cannot be stopped or paused.

You can see if a command is still running in the :jobs menu. Backgrounded commands have progress instead of process id at the line beginning.

Background operations cannot be undone.

Cancellation

Note that cancellation works somewhat differently on Windows platform due to different mechanism of break signal propagation. One also might need to use Ctrl-Break shortcut instead of Ctrl-C.
The following operations can be cancelled:
- file system operations;
- mounting with FUSE (but not unmounting as it can cause loss of data);
- calls of external applications.

Note that vifm never terminates applications, it sends SIGINT signal and lets the application quit normally.

When one of a set of operations is cancelled (e.g. copying of 5th file of 10 files), further operations are cancelled too. In this case undo history will contain only actually performed operations.

Cancelled operations are indicated by "(cancelled)" suffix appended to information message on statusbar.

File system operations

Currently the following commands can be cancelled: :alink, :chmod, :chown, :clone, :copy, :delete, :mkdir, :move, :restore, :rlink, :touch. File putting (on p/P key) can be cancelled as well. It's not hard to see that these are mainly long-running operations.

Cancelling commands when they are repeated for undo/redo operations is allowed for convenience, but is not recommended as further undo/redo operations might get blocked by side-effects of a partially cancelled group of operations.

These commands can't be cancelled: :empty, :rename, :substitute, :tr.

Mounting with FUSE

It's not considered to be an error, so only a notification on the status bar is shown.

External application calls

Each of these operations can be cancelled: :apropos, :find, :grep, :locate.

Patterns

:highlight, :filetype, :filextype, :fileviewer commands and 'classify' option support globs, regular expressions and mime types to match file names or their paths.

There are six possible ways to write a single pattern:
- 1. [!]{comma-separated-name-globs}
- 2. [!]{{comma-separated-path-globs}}
- 3. [!]/name-regular-expression/[iI]
- 4. [!]//path-regular-expression//[iI]
- 5. [!]<comma-separated-mime-type-globs>
- 6. undecorated-pattern

To combine several patterns (AND them), make sure you're using one of the first five forms and write patterns one after another, like this:

    <text/plain>{*.vifm}

Mind that if you make a mistake the whole string will be treated as the sixth form.

:filetype, :filextype and :fileviewer commands accept a comma-separated list of patterns instead of a single pattern, thus effectively handling OR operation on them:

    <text/plain>{*.vifm},<application/pdf>{*.pdf}

The first five forms can include a leading exclamation mark that negates pattern matching.

The last form implicitly refers to one of the others. :highlight does not accept undecorated form, while :filetype, :filextype, :fileviewer and 'classify' treat it as list of name globs.

Regular expression patterns are case insensitive by default. "Globs" section below provides short overview of globs and some important points that one needs to know about them.

Mime type matching is essentially globs matching applied to mime type of a file instead of its name/path. Note: mime types aren't detected on Windows.

Globs

Globs are always case insensitive as it makes sense in general case.

*, ?, [ and ] are treated as special symbols in the pattern. E.g.

    :filetype * less %c

matches all files. One can use character classes for escaping, so

    :filetype [*] less %c

matches only one file name, the one which contains only the asterisk symbol.

* means any number of any characters (possibly an empty substring), with one exception: asterisk at the pattern beginning doesn't match dot in the first position. E.g.

    :fileviewer *.zip,*.jar zip -sf %c

associates using of zip program to preview all files with zip or jar extensions as listing of their content.

?
means any character at this position. E.g.

    :fileviewer ?.out file %c

calls file tool for all files which have exactly one character before their extension (e.g. a.out, b.out).

Square brackets designate a character class, which means that the whole character class matches against any of the characters listed in it. For example

    :fileviewer *.[ch] highlight -O xterm256 -s dante --syntax c %c

makes vifm call highlight program to colorize source and header files in C language for a 256-color terminal. An equal command would be

    :fileviewer *.c,*.h highlight -O xterm256 -s dante --syntax c %c

Inside square brackets ^ or ! can be used for symbol class negation and the - symbol to set a range. ^ and ! should appear right after the opening square bracket. For example

    :filetype *.[!d]/ inspect_dir

associates inspect_dir as additional handler for all directories that have one character extension unless it's "d" letter. And

    :filetype [0-9].jpg sxiv

associates sxiv picture viewer only for JPEG-files that contain single digit in their name.

set options

- Local options

These are kind of options that are local to a specific view. So you can set ascending sorting order for left pane and descending order for right pane.

In addition to being local to views, each such option also has two values:
- local to current directory (value associated with current location);
- global to current directory (value associated with the pane).

The idea is that current directory can be made a temporary exception to regular configuration of the view, until directory change. Use :setlocal for that. :setglobal changes view value not affecting settings until directory change. :set applies changes immediately to all values.

- 'aproposprg' type: string default: "apropos %a"

Specifies format for an external command to be invoked by the :apropos command. The format supports expanding of macros, specific for a particular *prg option, and %% sequence for inserting percent sign literally. This option should include the %a macro to specify placement of arguments passed to the :apropos command. If the macro is not used, it will be implicitly added after a space to the value of this option.

- 'autochpos' type: boolean default: true

When disabled vifm will set cursor to the first line in the view after :cd and :pushd commands instead of saved cursor position. Disabling this will also make vifm clear information about cursor position in the view history on :cd and :pushd commands (and on startup if 'autochpos' is disabled in the vifmrc). l key in the ":history ." and ":trashes" menus is treated like :cd command. This option also affects marks so that navigating to a mark doesn't restore cursor position.

- 'columns' 'co' type: integer default: terminal width on startup

Terminal width in characters.

- 'cdpath' 'cd' type: string list default: value of $CDPATH with commas instead of colons

Specifies locations to check on changing directory with relative path that doesn't start with "./" or "../". When non-empty, current directory is examined after directories listed in the option. This option doesn't affect completion of :cd command.

Example:

    set cdpath=~

This way ":cd bin" will switch to "~/bin" even if directory named "bin" exists in current directory, while ":cd ./bin" command will ignore value of 'cdpath'.

- 'chaselinks' type: boolean default: false

When enabled path of view is always resolved to real path (with all symbolic links expanded).
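To make the local/global distinction described above concrete, one possible illustration (the sort keys are arbitrary):

    " make only the currently viewed directory sort by size...
    setlocal sort=-size
    " ...while directories visited later in this pane keep sorting by name
    setglobal sort=+name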
- 'classify' type: string list default: ":dir:/" Specifies file name prefixes and suffixes depending on file type or name. The format is either of: - [{prefix}]:{filetype}:[{suffix}] - [{prefix}]::{pattern}::[{suffix}] Possible {pattern} forms are described in "Patterns" section above. Priority rules: - file name patterns have priority over type patterns - file name patterns are matched in left-to-right order of their appearance in this option Either {prefix} or {suffix} or both can be omitted (which is the default for all unspecified file types), this means empty {prefix} and/or {suffix}. {prefix} and {suffix} should consist of at most eight characters. Elements are separated by commas. Neither prefixes nor suffixes are part of file names, so they don't affect commands which operate on file names in any way. Comma (',') character can be inserted by doubling it. List of file type names can be found in the description of filetype() function. - 'confirm' 'cf' type: set default: delete,permdelete Defines which operations require confirmation: - delete - moving files to trash (on d or :delete); - permdelete - permanent deletion of files (on D or :delete! command or on undo/redo operation). - 'cpoptions' 'cpo' type: charset default: "fst" Contains a sequence of single-character flags. Each flag enables behaviour of older versions of vifm. Flags: - f - when included, running :filter command results in not inverted (matching files are filtered out) and :filter! in inverted (matching files are left) filter, when omitted, meaning of the exclamation mark changes to the opposite; - s - when included, yy, dd and DD normal mode commands act on selection, otherwise they operate on current file only; - t - when included, <tab> (thus <c-i>) behave as <space> and switch active pane, otherwise <tab> and <c-i> go forward in the view history. - 'cvoptions' type: set default: Specifies whether entering/leaving custom views triggers events that normally happen on entering/leaving directories: - autocmds - trigger autocommands on entering/leaving custom views; - localopts - reset local options on entering/leaving custom views; - localfilter - reset local filter on entering/leaving custom views. - 'deleteprg' type: string default: "" Specifies program to run on files that are permanently removed. When empty, files are removed as usual, otherwise this command is invoked on each file by appending its name. If the command doesn't remove files, they will remain on the file system. - 'dirsize' type: enumeration default: size Controls how size of directories is displayed in file views. The following values are possible: - size - size of directory (i.e., size used to store list of files) - nitems - number of entries in the directory (excluding . and ..) Size obtained via ga/gA overwrites this setting so seeing count of files and occasionally size of directories is possible. - 'dotdirs' type: set default: nonrootparent Controls displaying of dot directories. The following values are possible: - rootparent - show "../" in root directory of file system - nonrootparent - show "../" in non-root directories of file system Note that empty directories always contain "../" entry regardless of value of this option. "../" disappears at the moment at least one file is created. - 'fastrun' type: boolean default: false With this option turned on you can run partially entered commands with unambiguous beginning using :! (e.g. :!Te instead of :!Terminal or :!Te<tab>). 
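A short sketch of a couple of the options described above in a vifmrc; it assumes "dir" and "link" are among the file type names referred to in the 'classify' description:

    " decorate directories with a trailing slash and symlinks with "@"
    set classify=:dir:/,:link:@
    " only ask for confirmation before permanent deletion
    set confirm=permdelete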
- 'fillchars' 'fcs' type: string list default: ""

Sets characters used to fill borders.

    item       default  used for
    vborder:c  ' '      left, middle and right vertical borders

If value is omitted, its default value is used.

Example:

    set fillchars=vborder:.

- 'findprg' type: string default: "find %s %a -print , -type d \( ! -readable -o ! -executable \) -prune"

Specifies format for an external command to be invoked by the :find command. The format supports expanding of macros, specific for a particular *prg option, and %% sequence for inserting percent sign literally. This option should include the %s macro to specify placement of list of paths to search in and %a or %A macro to specify placement of arguments passed to the :find command. If some of the macros are not used, they will be implicitly added after a space to the value of the option in the following order: %s, %a. Note that when neither %a nor %A are specified, it's %a which is added implicitly.

The macros can slightly change their meaning depending on :find command arguments. When the first argument points to an existing directory, %s is assigned all arguments and %a/%A are left empty. Otherwise, %s is assigned a dot (".") meaning current directory or list of selected file names, if any. %a/%A are assigned arguments when first argument starts with a dash ("-"), otherwise %a gets an escaped version of arguments, prepended by "-name" (on *nix) or "-iname" (on Windows) predicate.

%a and %A macros contain almost the same value, the difference is that %a can be escaped and %A is never escaped. %A is to be used mainly on Windows, where shell escaping is a mess and can break command execution.

Optional %u or %U macro could be used (if both specified %U is chosen) to force redirection to custom or unsorted custom view respectively.

Starting from Windows Server 2003 a where command is available. One can configure vifm to use it in the following way:

    set findprg="where /R %s %A"

As the syntax of this command is rather limited, one can't use :find command with selection of more than one item in this case. The command looks for files only, completely ignoring directories.

When using find port on Windows, another option is to setup 'findprg' like this:

    set findprg="find %s %a"

- 'followlinks' type: boolean default: true

Follow links on l or Enter. That is navigate to destination file instead of treating the link as if it were target file. Doesn't affect links to directories, which are always entered (use gf key for directories).

- 'fusehome' type: string default: "($XDG_DATA_HOME/.local/share | $VIFM)/fuse/"

Directory to be used as a root dir for FUSE mounts. Value of the option can contain environment variables (in form "$envname"), which will be expanded (prepend it with a slash to prevent expansion). The value should expand to an absolute path.

If you change this option, vifm won't remount anything. It affects future mounts only. See "Automatic FUSE mounts" section below for more information.

- 'gdefault' 'gd' type: boolean default: false

When on, 'g' flag is on for :substitute by default.

- 'grepprg' type: string default: "grep -n -H -I -r %i %a %s"

Specifies format for an external command to be invoked by the :grep command. The format supports expanding of macros, specific for a particular *prg option, and %% sequence for inserting percent sign literally.
  This option should include the %i macro to specify placement of the "-v" string when inversion of results is requested, the %a or %A macro to specify placement of arguments passed to the :grep command and the %s macro to specify placement of the list of files to search in. If some of the macros are not used, they will be implicitly added after a space to the value of the 'grepprg' option in the following order: %i, %a, %s. Note that when neither %a nor %A are specified, it's %a which is added implicitly.

  An optional %u or %U macro could be used (if both are specified, %U is chosen) to force redirection to a custom or unsorted custom view respectively.

  See the 'findprg' option for a description of the difference between %a and %A.

  Example of a setup to use ack instead of grep:

      set grepprg=ack\ -H\ -r\ %i\ %a\ %s

  or The Silver Searcher:

      set grepprg=ag\ --line-numbers\ %i\ %a\ %s

- 'history' 'hi'
  type: integer   default: 15
  Maximum number of stored items in all histories.

- 'hlsearch' 'hls'
  type: boolean   default: true
  Highlight all matches of search pattern.

- 'iec'
  type: boolean   default: false
  Use KiB, MiB, ... instead of KB, MB, ...

- 'ignorecase' 'ic'
  type: boolean   default: false
  Ignore case in search patterns (:substitute, / and ? commands) and characters after f and F commands. It doesn't affect file filtering.

- 'incsearch' 'is'
  type: boolean   default: false
  When this option is set, search and view update for local filter is performed starting from the initial cursor position each time the search pattern is changed.

- 'iooptions'
  type: set   default:
  Controls details of file operations. The following values are available:
      - fastfilecloning - perform fast file cloning (copy-on-write), when available (available on Linux and the btrfs file system).

- 'laststatus' 'ls'
  type: boolean   default: true
  Controls if the status bar is visible.

- 'lines'
  type: integer   default: terminal height on startup
  Terminal height in lines.

- 'locateprg'
  type: string   default: "locate %a"
  Specifies format for an external command to be invoked by the :locate command. The format supports expanding of macros, specific for a particular *prg option, and the %% sequence for inserting a percent sign literally. This option should include the %a macro to specify placement of arguments passed to the :locate command. If the macro is not used, it will be implicitly added after a space to the value of this option. An optional %u or %U macro could be used (if both are specified, %U is chosen) to force redirection to a custom or unsorted custom view respectively.

- 'mintimeoutlen'
  type: integer   default: 150
  The fraction of 'timeoutlen' in milliseconds that is waited between subsequent input polls, which affects various asynchronous operations (detecting changes made by external applications, monitoring background jobs, redrawing UI). There are no strict guarantees; however, the higher this value is, the less CPU load there is in idle mode.

- 'lsview'
  type: boolean   default: false   scope: local
  When this option is set, directory view will be displayed in multiple columns with file names similar to output of the `ls -x` command. See "ls-like view" section below for format description.

- 'number' 'nu'
  type: boolean   default: false   scope: local
  Print line number in front of each file name when 'lsview' option is turned off. Use 'numberwidth' to control width of line number. Also see 'relativenumber'.

- 'numberwidth' 'nuw'
  type: integer   default: 4   scope: local
  Minimal number of characters for line number field.
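  Pulling a few of the search-related options above together, a small vifmrc sketch (an illustration, not a recommended default):

      set hlsearch incsearch ignorecase
      set history=50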
- 'relativenumber' 'rnu'
  type: boolean   default: false   scope: local
  Print relative line number in front of each file name when 'lsview' option is turned off. Use 'numberwidth' to control width of line number. Various combinations of 'number' and 'relativenumber' lead to such results:

                          nonumber     number

        norelativenumber  |first       |1 first
                          |second      |2 second
                          |third       |3 third

        relativenumber    |1 first     |1 first
                          |0 second    |2 second
                          |1 third     |1 third

- 'rulerformat' 'ruf'
  type: string   default: "%l/%S "
  Determines the content of the ruler. Its minimal width is 13 characters and it's right aligned. The following macros are supported:
      %=  - separation point between left and right aligned halves of the line
      %l  - file number
      %L  - total number of files in view (including filtered out ones)
      %-  - number of filtered out files
      %S  - number of displayed files
      %%  - percent sign
      %[  - designates beginning of an optional block
      %]  - designates end of an optional block
  The percent sign can be followed by an optional minimum field width. Add '-' before the minimum field width if you want the field to be right aligned. Note the ambiguity with the number of filtered out files, which can be resolved with the help of the width field ("%0-").

  Example:

      set rulerformat='%2l-%S%[ +%0-%]'

- 'runexec'
  type: boolean   default: false
  Run executable file on Enter or l.

- 'scrollbind' 'scb'
  type: boolean   default: false
  When this option is set, vifm will try to keep the difference of scrolling positions of two windows constant.

- 'scrolloff' 'so'
  type: integer   default: 0
  Minimal number of screen lines to keep above and below the cursor. If you want the cursor line to always be in the middle of the view (except at the beginning or end of the file list), set this option to some large value (e.g. 999).

- 'shell' 'sh'
  type: string   default: $SHELL or "/bin/sh" or "cmd" (on MS-Windows)
  Full path to the shell to use to run external commands. On *nix a shell argument can be supplied.

- 'shortmess' 'shm'
  type: charset   default: "p"
  Contains a sequence of single-character flags. Each flag enables shortening of some message displayed by vifm in the TUI. Flags:
      T - truncate status-bar messages in the middle if they are too long to fit on the command line. "..." will appear in the middle.
      p - use tilde shortening in view titles.

- 'slowfs'
  type: string list   default: ""   only for *nix
  A list of mounter fs name beginnings (first column in /etc/mtab or /proc/mounts) or path prefixes for fs/directories that work too slow for you. This option can be used to stop vifm from making some requests to particular kinds of file systems that can slow down file browsing. Currently this means: don't check if a directory has changed; skip checking whether the target of a symbolic link exists; assume that a link target located on a slow fs is a directory (this allows entering directories and navigating to files via gf). If you set the option to "*", all file systems are considered slow (useful for cygwin, where all the checks might render vifm very slow if there are network mounts).

  Example for autofs root /mnt/autofs:

      set slowfs+=/mnt/autofs

- 'smartcase' 'scs'
  type: boolean   default: false
  Overrides the ignorecase option if the search pattern contains at least one upper case character. Only used when the ignorecase option is enabled. It doesn't affect file filtering.
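  Building on 'ignorecase' above, a hypothetical pair of settings for smart-cased searching with a bit of cursor context (values are illustrative):

      set ignorecase smartcase
      set scrolloff=4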
- 'sort'
  type: string list   default: +name on *nix and +iname on Windows   scope: local
  Sets the list of sorting keys (first item is primary key, second is secondary key, etc.):
      [+-]ext     - extension of files and directories
      [+-]fileext - extension of files only
      [+-]name    - name (including extension)
      [+-]iname   - name (including extension, ignores case)
      [+-]type    - file type (dir/reg/exe/link/char/block/sock/fifo)
      [+-]dir     - directory grouping (directory < file)
      [+-]gid     - group id (*nix only)
      [+-]gname   - group name (*nix only)
      [+-]mode    - file type derived from its mode (*nix only)
      [+-]perms   - permissions string (*nix only)
      [+-]uid     - owner id (*nix only)
      [+-]uname   - owner name (*nix only)
      [+-]nlinks  - number of hard links (*nix only)
      [+-]size    - size
      [+-]nitems  - number of items in a directory (zero for files)
      [+-]groups  - groups extracted via regexps from 'sortgroups'
      [+-]target  - symbolic link target (empty for other file types)
      [+-]atime   - time accessed (e.g. read, executed)
      [+-]ctime   - time changed (changes in metadata, e.g. mode)
      [+-]mtime   - time modified (when file contents is changed)

  Note: look for st_atime, st_ctime and st_mtime in "man 2 stat" for more information on time keys.

  '+' means ascending sort for this key, and '-' means descending sort. The "dir" key is somewhat similar in this regard, but it's added implicitly: when "dir" is not specified, sorting behaves as if it was the first key in the list. That's why if one wants the sorting algorithm to mix directories and files, "dir" should be appended to the sorting option, for example like this:

      set sort+=dir

  or

      set sort=-size,dir

  The value of the option is checked to include the dir key and the default sorting key (name on *nix, iname on Windows). Here is what happens if one of them is missing:
      - the dir key is added at the beginning;
      - the default key is added at the end;
  all other keys are left untouched (at most they are moved).

  This option also changes view columns according to the primary sorting key set, unless the 'viewcolumns' option is not empty.

- 'sortnumbers'
  type: boolean   default: false   scope: local
  Natural sort of (version) numbers within text.

- 'sortgroups'
  type: string   default: ""   scope: local
  Sets a comma-separated list of regular expressions to use for group sorting; a double comma is a literal comma. Each expression should contain at least one group, or its value will be considered to be always empty. Only the first match of each regular expression is considered. Groups are considered from first to last, similarly to 'sort' keys: the first group divides the list of files into sub-groups, each of which is sorted by the second group and so on.

  Example:

      set sortgroups=-(done|todo).*

  this would put files with "-done" in their names above all files with "-todo".

- 'sortorder'
  type: enumeration   default: ascending
  Sets sort order for the primary key: ascending, descending.

- 'statusline' 'stl'
  type: string   default: ""
  Determines the content of the status line (the line right above the command-line). An empty string means the same format as in previous versions.
  The following macros are supported:
      - %t - file name (considering value of the 'classify' option)
      - %A - file attributes (permissions on *nix or properties on Windows)
      - %u - user name or uid (if it cannot be resolved)
      - %g - group name or gid (if it cannot be resolved)
      - %s - file size in human readable format
      - %E - size of selected files in human readable format, same as %s when no files are selected, except that it will never show size of ../ in visual mode, since it cannot be selected
      - %d - file modification date (uses 'timefmt' option)
      - %z - short tips/tricks/hints that are chosen randomly after a one minute period
      - all 'rulerformat' macros

  The percent sign can be followed by an optional minimum field width. Add '-' before the minimum field width if you want the field to be right aligned.

  Example:

      set statusline="  %t%=  %A %10u:%-7g %15s %20d  "

  On Windows file properties include the following flags (upper case means the flag is on):
      A - archive
      H - hidden
      I - content isn't indexed
      R - readonly
      S - system
      C - compressed
      D - directory
      E - encrypted
      P - reparse point (e.g. symbolic link)
      Z - sparse file

- 'suggestoptions'
  type: string list   default:
  Controls when, for what and how suggestions are displayed. The following values are available:
      - normal - in normal mode;
      - visual - in visual mode;
      - view - in view mode;
      - otherpane - use other pane to display suggestions, when available;
      - delay[:num] - display suggestions after a small delay (so as not to annoy you if you just want to type a fast shortcut consisting of multiple keys), num specifies the delay in ms (500 by default), 'timeoutlen' at most;
      - keys - include shortcuts (commands and selectors);
      - marks - include marks;
      - registers[:num] - include registers, at most num files (5 by default).

- 'syscalls'
  type: boolean   default: false
  When disabled, vifm will rely on external applications to perform file-system operations, otherwise system calls are used instead (much faster). The feature is {EXPERIMENTAL} and {WORK-IN-PROGRESS}. The option will eventually be removed. Mostly *nix-like systems are affected.

- 'tabstop' 'ts'
  type: integer   default: value from curses library
  Number of spaces that a Tab in the file counts for.

- 'timefmt'
  type: string   default: " %m/%d %H:%M"
  Format of time in file list. See "man 1 date" or "man 3 strftime" for details.

- 'timeoutlen' 'tm'
  type: integer   default: 1000
  The time in milliseconds that is waited for a mapped key in case an already typed key sequence is ambiguous.

- 'title'
  type: boolean   default: true when title can be restored, false otherwise
  When enabled, the title of the terminal or terminal multiplexer's window is updated according to current location.

- 'trash'
  type: boolean   default: true
  Use trash directory. See "Trash directory" section below.

- 'trashdir'
  type: string
  default:
      on *nix: "%r/.vifm-Trash-%u,$VIFM/Trash,%r/.vifm-Trash" or, if $VIFM/Trash doesn't exist, "%r/.vifm-Trash-%u,$XDG_DATA_HOME/vifm/Trash,%r/.vifm-Trash"
      on Windows: "%r/.vifm-Trash,$XDG_DATA_HOME/vifm/Trash"
  List of trash directory path specifications, separated with commas. Each list item either defines an absolute path to a trash directory or a path relative to a mount point root when the list element starts with "%r/". The value of the option can contain environment variables (of form "$envname"), which will be expanded (prepend $ with a slash to prevent expansion). Environment variables are expanded when the option is set.

  On *nix, if an element ends with "%u", the mark is replaced with the real user ID and permissions are set so that only the owner is able to use it.
  Note that even this setup is not completely secure when combined with "%r/", and it's overall safer to keep files in the home directory, but that implies the cost of copying files between partitions.

  When a new file gets cut (deleted) vifm traverses each element of the option in the order of their appearance and uses the first trash directory that it was able to create or that is already writable.

  The default value tries to use a trash directory per mount point and falls back to ~/.vifm/Trash on failure. Vifm will attempt to create the directory if it does not exist. See "Trash directory" section below.

- 'tuioptions' 'to'
  type: charset   default: "ps"
  Each flag configures some aspect of TUI appearance. The flags are:
      p - when included:
          * file list inside a pane gets additional single character padding on left and right sides;
          * quick view and view mode get single character padding.
      s - when included, left and right borders (side borders, hence the "s" character) are visible.

- 'undolevels' 'ul'
  type: integer   default: 100
  Maximum number of changes that can be undone. Note that a single file operation is used as a unit here, not a whole operation, i.e. deletion of 101 files will exceed the default limit.

- 'vicmd'
  type: string   default: "vim"
  The actual command used to start vi. An ampersand sign at the end (regardless of whether it's preceded by a space or not) means backgrounding of the command.

- 'viewcolumns'
  type: string   default: ""   scope: local
  Format string containing the list of columns in the view. When this option is empty, view columns to show are chosen automatically using sorting keys (see 'sort') as a base. The value of this option is ignored if 'lsview' is set. See "Column view" section below for format description.

  An example of setting the options for both panes (note the :windo command):

      windo set viewcolumns=-{name}..,6{size},11{perms}

- 'vixcmd'
  type: string   default: value of 'vicmd'
  The command used to start vi when in X. An ampersand sign at the end (regardless of whether it's preceded by a space or not) means backgrounding of the command.

- 'vifminfo'
  type: set   default: bookmarks,bmarks
  Controls what will be saved in the $VIFM/vifminfo file.
      bmarks    - named bookmarks
      bookmarks - marks, except special ones like '< and '>
      tui       - state of the user interface (sorting, number of windows, quick view state, active view)
      dhistory  - directory history
      state     - file name and dot filters and terminal multiplexers integration state
      cs        - primary color scheme
      savedirs  - save last visited directory (requires dhistory)
      chistory  - command line history
      shistory  - search history (/ and ? commands)
      phistory  - prompt history
      fhistory  - history of local filter (see description of the "=" normal mode command)
      dirstack  - directory stack (overwrites previous stack, unless stack of current session is empty)
      registers - registers content
      options   - all options that can be set with the :set command (obsolete)
      filetypes - associated programs and viewers (obsolete)
      commands  - user defined commands (see :command description) (obsolete)

- 'vimhelp'
  type: boolean   default: false
  Use vim help format.

- 'wildmenu' 'wmnu'
  type: boolean   default: false
  Controls whether possible matches of completion will be shown above the command line.

- 'wildstyle'
  type: enumeration   default: bar
  Picks presentation style of the wild menu.
  Possible values:
      - bar - one-line with left-to-right cursor
      - popup - multi-line with top-to-bottom cursor

- 'wordchars'
  type: string list   default: "1-8,14-31,33-255" (that is all non-whitespace characters)
  Specifies which characters in command-line mode should be considered as part of a word. The value of the option is a comma-separated list of ranges. If both endpoints of a range coincide, a single endpoint is enough (e.g. "a" = "a-a"). Both endpoints are inclusive. There are two accepted forms: a character representing itself or a number encoding a character according to the ASCII table. In case of ambiguous characters (dash, comma, digit) use the numeric form. Accepted characters are in the range from 0 to 255. Any Unicode character with code greater than 255 is considered to be part of a word.

  The option affects Alt-D, Alt-B and Alt-F, but not Ctrl-W. This is intentional, to allow two use cases:
      - Moving by WORDS and deletion by words.
      - Moving by words and deletion by WORDS.
  To get the latter use the following mapping:

      cnoremap <c-w> <a-b><a-d>

  Also used for abbreviations.

- 'wrap'
  type: boolean   default: true
  Controls whether to wrap text in quick view.

- 'wrapscan' 'ws'
  type: boolean   default: true
  Searches wrap around the end of the list.

Mappings

Since it's not easy to enter special characters there are several special sequences that can be used in place of them. They are:

  - <cr>  Enter key.
  - <esc>  Escape key.
  - <space>  Space key.
  - <lt>  Less-than character (<).
  - <nop>  provides a way to disable a mapping (by mapping it to <nop>).
  - <bs>  Backspace key (see key conflict description below).
  - <tab> <s-tab>  Tabulation and Shift+Tabulation keys.
  - <home> <end>  Home/End.
  - <left> <right> <up> <down>  Arrow keys.
  - <pageup> <pagedown>  PageUp/PageDown.
  - <del> <delete>  Delete key. <del> and <delete> mean different codes, but <delete> is more common.
  - <c-a>,<c-b>,...,<c-z>,<c-[>,<c-\>,<c-]>,<c-^>,<c-_>  Control + some key (see key conflict description below).
  - <a-a>,<a-b>,...,<a-z> <m-a>,<m-b>,...,<m-z>  Alt + some key.
  - <a-c-a>,<a-c-b>,...,<a-c-z> <m-c-a>,<m-c-b>,...,<m-c-z>  only for *nix  Alt + Ctrl + some key.
  - <f0> - <f63>  Functional keys.
  - <c-f1> - <c-f12>  only for MS-Windows  functional keys with Control key pressed.
  - <a-f1> - <a-f12>  only for MS-Windows  functional keys with Alt key pressed.
  - <s-f1> - <s-f12>  only for MS-Windows  functional keys with Shift key pressed.

Note that due to the way terminals process their input, several keyboard keys might be mapped to a single key code, for example:
  - <cr> and <c-m>;
  - <tab> and <c-i>;
  - <c-h> and <bs>;
  - etc.

Most of the time they are defined consistently and don't cause surprises, but <c-h> and <bs> are treated differently in different environments (although they match each other all the time); that's why they correspond to different keys in vifm. As a consequence, if you map <c-h> or <bs>, be sure to map the other one to the same combination so that the mapping will work in all environments.

vifm removes whitespace characters at the beginning and end of commands. That's why you may want to use <space> at the end of rhs in mappings. For example:

    cmap <f1> man<space>

will put "man " in the line when you hit the <f1> key in the command line mode.

Expression syntax

Supported expressions are a subset of what VimL provides.

Expression syntax summary, from least to most significant:
    expr1   expr2 || expr2 ..     logical OR
    expr2   expr3 && expr3 ..     logical AND
    expr3   expr4 == expr4        equal
            expr4 != expr4        not equal
            expr4 >  expr4        greater than
            expr4 >= expr4        greater than or equal
            expr4 <  expr4        smaller than
            expr4 <= expr4        smaller than or equal
    expr4   expr5 . expr5 ..      string concatenation
    expr5   - expr5               unary minus
            + expr5               unary plus
            ! expr5               logical NOT
    expr6   number                number constant
            "string"              string constant, \ is special
            'string'              string constant, ' is doubled
            &option               option value
            $VAR                  environment variable
            function(expr1, ...)  function call

".." indicates that the operations in this level can be concatenated.

expr1
-----
expr2 || expr1

Arguments are converted to numbers before evaluation. The result is non-zero if at least one of the arguments is non-zero. It's right associative and with short-circuiting, so sub-expressions are evaluated from left to right until the result of the whole expression is determined (i.e., until the first non-zero) or the end of the expression.

expr2
-----
expr3 && expr2

Arguments are converted to numbers before evaluation. The result is non-zero only if both arguments are non-zero. It's right associative and with short-circuiting, so sub-expressions are evaluated from left to right until the result of the whole expression is determined (i.e., until the first zero) or the end of the expression.

expr3
-----
expr4 {cmp} expr4

Compare two expr4 expressions, resulting in a 0 if it evaluates to false or 1 if it evaluates to true.

    equal                     ==
    not equal                 !=
    greater than              >
    greater than or equal     >=
    smaller than              <
    smaller than or equal     <=

Examples:

    'a' == 'a'    ==  1
    'a' > 'b'     ==  1
    'a' == 'b'    ==  0
    '2' > 'b'     ==  0
    2 > 'b'       ==  1
    2 > '1b'      ==  1
    2 > '9b'      ==  0
    -1 == -'1'    ==  1
    0 == '--1'    ==  1

expr4
-----
expr5 . expr5 ..    string concatenation

Examples:

    'a' . 'b'           ==  'ab'
    'aaa' . '' . 'c'    ==  'aaac'

expr5
-----
- expr5    unary minus
+ expr5    unary plus
! expr5    logical NOT

For '-' the sign of the number is changed. For '+' the number is unchanged. For '!' non-zero becomes zero, zero becomes one. A String will be converted to a Number first. These operations can be repeated and mixed.

Examples:

    --9     ==  9
    ---9    ==  -9
    -+9     ==  9
    !-9     ==  0
    !''     ==  1
    !'x'    ==  0
    !!9     ==  1

expr6
-----

number
------
number constant

Decimal number. Examples:

    0       ==  0
    0000    ==  0
    01      ==  1
    123     ==  123
    10000   ==  10000

string
------
"string"    string constant

Note that double quotes are used. A string constant accepts these special characters:
    \b    backspace <bs>
    \e    escape <esc>
    \n    newline
    \r    return <cr>
    \t    tab <tab>
    \\    backslash
    \"    double quote

Examples:

    "\"Hello,\tWorld!\""
    "Hi,\nthere!"

literal-string
--------------
'string'    string constant

Note that single quotes are used. This string is taken as it is. No backslashes are removed or have a special meaning. The only exception is that two quotes stand for one quote.

Examples:

    'All\slashes\are\saved.'
    'This string contains doubled single quotes ''here'''

option
------
&option      option value (local one is preferred, if it exists)
&g:option    global option value
&l:option    local option value

Examples:

    echo 'Terminal size: '.&columns.'x'.&lines
    if &columns > 100

Any valid option name can be used here (note that "all" in ":set all" is a pseudo option). See ":set options" section above.

environment variable
--------------------
$VAR    environment variable

The String value of any environment variable. When it is not defined, the result is an empty string.

Examples:

    'This is my $PATH env: ' . $PATH
    'vifmrc at ' . $MYVIFMRC . ' is used.'

function call
-------------
function(expr1, ...)    function call

See "Functions" section below.

Examples:

    "'" . filetype('.') . "'"
    filetype('.') == 'reg'

Functions

    USAGE               RESULT    DESCRIPTION
    chooseopt({opt})    String    Queries choose parameters passed on startup.
    executable({expr})  Integer   Checks whether the {expr} command is available.
    expand({expr})      String    Expands special keywords in {expr}.
    filetype({fnum})    String    Returns file type from position.
    getpanetype()       String    Returns type of current pane.
    has({property})     Integer   Checks whether instance has {property}.
    layoutis({type})    Integer   Checks whether layout is of type {type}.
    paneisat({loc})     Integer   Checks whether current pane is at {loc}.
    system({command})   String    Executes shell command and returns its output.

chooseopt({opt})

Retrieves values of options related to file choosing. {opt} can be one of:
    files      returns argument of --choose-files or empty string
    dir        returns argument of --choose-dir or empty string
    cmd        returns argument of --on-choose or empty string
    delimiter  returns argument of --delimiter or the default one (\n)

executable({expr})

If {expr} is an absolute or relative path, checks whether the path destination exists and refers to an executable, otherwise checks whether a command named {expr} is present in directories listed in $PATH. Checks for various executable extensions on Windows. Returns a boolean value describing the result of the check.

Example:

    " use custom default viewer script if it's available and installed
    " in predefined system directory, otherwise try to find it elsewhere
    if executable('/usr/local/bin/defviewer')
        fileview * /usr/local/bin/defviewer %c
    else
        if executable('defviewer')
            fileview * defviewer %c
        endif
    endif

expand({expr})

Expands environment variables and macros in {expr} just like it's done for command-line commands. Returns a string. See "Command macros" section above.

Examples:

    " percent sign
    :echo expand('%%')
    " the last part of directory name of the other pane
    :echo expand('%D:t')
    " $PATH environment variable (same as `:echo $PATH`)
    :echo expand('$PATH')

filetype({fnum})

The result is a string, which represents the file type and is one of the list:
    exe     executables
    reg     regular files
    link    symbolic links
    dir     directories
    char    character devices
    block   block devices
    fifo    pipes
    sock    *nix domain sockets
    ?       unknown file type (should never appear)

Parameter {fnum} can have the following values:
    - '.' to get type of file under the cursor in the active pane

getpanetype()

Retrieves a string describing the type of the current pane. Possible return values:
    regular      regular file listing of some directory
    custom       custom file list (%u)
    very-custom  very custom file list (%U)

has({property})

Allows examining internal parameters from scripts to e.g. figure out the environment in which the application is running. Returns 1 if the property is true/present, otherwise 0 is returned.
Currently the following properties are supported (anything else will yield 0):
    unix    runs in *nix-like environment (including Cygwin)
    win     runs on Windows

Usage example:

    " skip user/group on Windows
    if !has('win')
        let $RIGHTS = '%10u:%-7g '
    endif
    execute 'set' 'statusline="  %t%= %A '.$RIGHTS.'%15E %20d  "'

layoutis({type})

Checks whether the current interface layout is {type} or not, where {type} can be:
    only    single-pane mode
    split   double-pane mode (either vertical or horizontal split)
    vsplit  vertical split (left and right panes)
    hsplit  horizontal split (top and bottom panes)

Usage example:

    " automatically split vertically before enabling preview
    :nnoremap w :if layoutis('only') | vsplit | endif | view<cr>

paneisat({loc})

Checks whether the position of the active pane in the current layout matches one of the following locations:
    top     pane reaches top border
    bottom  pane reaches bottom border
    left    pane reaches left border
    right   pane reaches right border

system({command})

Runs the command in a shell and returns its output (joined standard output and standard error streams). All trailing newline characters are stripped to allow easy appending to command output. Ctrl-C should interrupt the command.

Usage example:

    " command to enter .git/ directory of git-repository (when ran inside one)
    command! cdgit :execute 'cd' system('git rev-parse --git-dir')

Menus and dialogs

When navigating to some path from a menu there is a difference in end location depending on whether the path has a trailing slash or not. Files normally don't have trailing slashes, so "file/" won't work and one can only navigate to a file anyway. On the other hand, with directories there are two options: navigate to a directory or inside of it. To allow both use cases, the first one is used on paths like "dir" and the second one for "dir/".

Commands

    - :range  navigate to a menu line.
    - :exi[t][!] :q[uit][!] :x[it][!]  leave menu mode.
    - :noh[lsearch]  reset search match highlighting.
    - :w[rite] {dest}  write all menu lines into the file specified by {dest}.

General

    j, Ctrl-N - move down.
    k, Ctrl-P - move up.
    Enter, l - select and exit the menu.
    Ctrl-L - redraw the menu.
    Escape, Ctrl-C, ZZ, ZQ, q - quit.

In all menus

The following set of keys has the same meaning as in normal mode:

    Ctrl-B, Ctrl-F
    Ctrl-D, Ctrl-U
    Ctrl-E, Ctrl-Y
    /, ?
    n, N
    [count]G, [count]gg
    H, M, L
    zb, zt, zz

    zh - scroll menu items [count] characters to the right.
    zl - scroll menu items [count] characters to the left.
    zH - scroll menu items half of screen width characters to the right.
    zL - scroll menu items half of screen width characters to the left.

    : - enter command line mode for menus (currently only :exi[t], :q[uit], :x[it] and :{range} are supported).

    b - interpret content of the menu as a list of paths and use it to create a custom view in place of the previously active pane. See "Custom views" section below.
    B - same as above, but creates an unsorted view.

    v - load menu content into the quickfix list of the editor (Vim compatible by assumption) or, if the list doesn't have separators after file names (colons), open each line as a file name.

Below is a description of additional commands and the reaction on selection in some menus and dialogs.

Apropos menu

Selecting a menu item runs man on a given topic. The menu won't be closed automatically to allow viewing several pages one by one.

Command-line mode abbreviations menu

Type dd on an abbreviation to remove it. c leaves the menu preserving file selection and inserts the right-hand side of the selected command into the command-line.
Color scheme menu

Selecting the name of a color scheme applies it the same way as if ":colorscheme <name>" was executed on the command-line.

Commands menu

Selecting a command executes it with empty arguments (%a). dd on a command to remove it.

Marks menu

Selecting a mark navigates to it. dd on a mark to remove it.

Bookmarks menu

Selecting a bookmark navigates to it. Type dd on a bookmark to remove it. gf and e also work, to make it more convenient to bookmark files.

Trash (:lstrash) menu

r on a file name to restore it from trash. dd deletes the file under the cursor.

Trashes menu

dd empties the selected trash in background.

Directory history and Trashes menus

Selecting a directory name will change the directory of the current view as if the :cd command was used.

Directory stack menu

Selecting a directory name will rotate the stack to put the selected directory pair at the top of the stack.

Filetype menu

Commands from vifmrc or typed in command-line are displayed above the empty line. All commands below the empty line are from .desktop files. c leaves the menu preserving file selection and inserts the command after :! in command-line mode.

Grep, find, locate, bookmarks and user menu with navigation (%M macro)

gf - navigate the previously active view to the currently selected item. Leaves menu mode except for the grep menu. Pressing the Enter key has the same effect.

e - open the selected path in the editor, stays in menu mode.

c - leave the menu preserving file selection and insert the file name after :! in command-line mode.

User menu without navigation (%m macro)

c leaves the menu preserving file selection and inserts the whole line after :! in command-line mode.

Grep menu

Selecting a file (via Enter or l key) opens it in the editor set by 'vicmd' at the given line number. The menu won't be closed automatically to allow viewing more than one result. See above for the description of the "gf" and "e" keys.

Command-line history menu

Selecting an item executes it as a command-line command, search query or local filter. c leaves the menu preserving file selection and inserts the line into the command-line of the appropriate kind.

Volumes menu

Selecting a drive navigates the previously active pane to the root of that drive.

Fileinfo dialog

Enter, q - close dialog

Sort dialog

h, Space - switch ascending/descending.
q - close dialog
One shortcut per sorting key (see the dialog).

Attributes (permissions or properties) dialog

h, Space - check/uncheck.
q - close dialog

Item states:
    - * - checked flag.
    - X - means that it has a different value for files in the selection.
    - d (*nix only) - (only for execute flags) means the u-x+X, g-x+X or o-x+X argument for the chmod program. If you're not on OS X and want to remove the execute permission bit from all files, but preserve it for directories, set all execute flags to 'd' and check the 'Set Recursively' flag.

Custom views

Definition

Normally file views contain a list of files from a single directory, but sometimes it's useful to populate them with a list of files that do not belong to the same directory, which is what custom views are for.

Presentation

Custom views are still related to the directory they were in before the custom list was loaded. The path to that directory (the original directory) can be seen in the title of a custom view.

Files in the same directory have to be named differently; this doesn't hold for custom views, thus seeing just file names might be rather confusing. In order to give an idea where files come from, when possible, relative paths to the original directory of the view are displayed, otherwise the full path is used instead.

Custom views normally don't contain any nonexistent files.
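A custom view can be populated, for example, via the %u macro in a user-defined command (a hypothetical sketch; it assumes %u is supported in user commands the same way it is in the *prg options described above):

    " list all files containing TODO in a custom view
    command! todos !grep -rl TODO . %u

Running :todos would then fill the current pane with the matching paths.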
Navigation

Custom views have some differences related to navigation in regular views.

gf - acts similar to gf on symbolic links and navigates to the file at its real location.

h, gh - return to the original directory. Opening the ".." entry also causes a return to the original directory.

History

A custom list exists only while it's visible; once left, one can't return to it, so it doesn't appear in any history.

Filters

Only the local filter affects the content of the view. This is intentional: presumably, if one loads a list, precisely that list should be displayed (except for nonexistent paths, which are ignored).

Search

Although directory names are visible in the listing, they are not searchable. Only file names are taken into account (this might be changed in future, searching whole lines seems quite reasonable).

Sorting

Contrary to search, sorting by name works on the whole visible part of the file path.

Highlight

The whole file name is highlighted as one entity, even if there are directory elements.

Updates

Reloads can occur, though they are not automatic due to files being scattered among different places. On a reload, nonexistent files are removed and the meta-data of all other files is updated.

Once a custom view forgets about a file, it won't add it back even if the file is created again. So it is normal not to see a file that was previously affected by an operation which was then undone.

Operations

All operations that add files are forbidden for custom views. For example, moving/copying/putting files into a custom view doesn't work, because it doesn't make much sense. On the other hand, operations that use files of a custom view as a source (e.g. yanking, copying, moving a file from a custom view, deletion) and operations that modify names are all allowed.

Startup

On startup vifm determines several variables that are used during the session. They are determined in the order they appear below.

On *nix systems $HOME is normally present and used as is.

On Windows systems vifm tries to find the correct home directory in the following order:
    - $HOME variable;
    - $USERPROFILE variable (on Windows only);
    - a combination of $HOMEDRIVE and $HOMEPATH variables (on Windows only).

vifm tries to find the correct configuration directory by checking the following places:
    - $VIFM variable;
    - parent directory of the executable file (on Windows only);
    - $HOME/.vifm directory;
    - $APPDATA/Vifm directory (on Windows only);
    - $XDG_CONFIG_HOME/vifm directory;
    - $HOME/.config/vifm directory.

vifm tries to find the correct configuration file by checking the following places:
    - $MYVIFMRC variable;
    - vifmrc in the parent directory of the executable file (on Windows only);
    - $VIFM/vifmrc file.

Configure

See "Startup" section above for the explanations on $VIFM and $MYVIFMRC.

The vifmrc file contains commands that will be executed on vifm startup. There are two such files: global and local. The global one is at {prefix}/etc/vifm/vifmrc; see the $MYVIFMRC variable description for the search algorithm used to find the local vifmrc. The global vifmrc is loaded before the local one, so that the latter one can redefine anything configured globally.

Use vifmrc to set settings, mappings, filetypes etc.

To use multi-line commands, precede each next line with a backslash (whitespace before the backslash is ignored, but all spaces at the end of the lines are saved). For example:

    set
    \smartcase

equals "setsmartcase". When

    set<space here>
    \ smartcase

equals "set smartcase".

The $VIFM/vifminfo file contains session settings. You may edit it by hand to change the settings, but it's not recommended to do that; edit vifmrc instead.
You can control what settings will be saved in vifminfo by setting the 'vifminfo' option. Vifm always writes this file on exit unless the 'vifminfo' option is empty. Marks, bookmarks, commands, histories, filetypes, fileviewers and registers in the file are merged with the vifm configuration (which has bigger priority).

Generally, runtime configuration has bigger priority during merging, but there are some exceptions:

    - the directory stack stored in the file is not overwritten unless something is changed in the vifm session that performs the merge;
    - each mark or bookmark is marked with a timestamp, so that a newer value is not overwritten by an older one; thus, no matter from where it comes, the newer one wins.

The $VIFM/scripts directory can contain shell scripts. vifm modifies its PATH environment variable to let the user run those scripts without specifying a full path. All subdirectories of $VIFM/scripts will be added to PATH too. A script in a subdirectory overlaps a script with the same name in all its parent directories.

The $VIFM/colors/ and {prefix}/etc/vifm/colors/ directories contain color schemes. Available color schemes are searched in that order, so on a name conflict the one in $VIFM/colors/ wins.

Each color scheme should have the ".vifm" extension. This wasn't the case before, and for this reason the following rules apply during lookup:
    - if there is no file with the .vifm extension, all regular files are listed;
    - otherwise only files with the .vifm extension are listed (with the extension being truncated).

Automatic FUSE mounts

vifm has builtin support of automated FUSE file system mounts. It is implemented using the file associations mechanism. To enable automated mounts, one needs to use a specially formatted program line in filetype or filextype commands. Currently two formats are supported:

1) FUSE_MOUNT

This format should be used in the case when all information needed for mounting all files of a particular type is the same. E.g. mounting of tar files doesn't require any file specific options.

Format line:

    FUSE_MOUNT|mounter %SOURCE_FILE %DESTINATION_DIR [%FOREGROUND]

Example filetype command:

    :filetype FUSE_MOUNT|fuse-zip %SOURCE_FILE %DESTINATION_DIR

2) FUSE_MOUNT2

This format allows one to use specially formatted files to perform mounting and is useful for mounting remotes, for example remote file systems over ftp or ssh.

Format line:

    FUSE_MOUNT2|mounter %PARAM %DESTINATION_DIR [%FOREGROUND]

Example filetype command:

    :filetype FUSE_MOUNT2|sshfs %PARAM %DESTINATION_DIR

Example file content:

    [email protected]:/

All % macros are expanded by vifm at runtime and have the following meaning:
    - %SOURCE_FILE is replaced by the full path to the selected file;
    - %DESTINATION_DIR is replaced by the full path to the mount directory, which is created by vifm based on the value of the 'fusehome' option;
    - %PARAM value is filled from the first line of the file (whole line), though in the future it can be changed to whole file content;
    - %FOREGROUND means that you want to run the mount command as a regular command (required to be able to provide input for communication with the mounter in an interactive way).

%FOREGROUND is an optional macro. Other macros are not mandatory, but mount commands likely won't work without them.

%CLEAR is an obsolete name of %FOREGROUND, which is still supported, but might be removed in future. Its use is discouraged.

The mounted FUSE file systems will be automatically unmounted in two cases:

    - when vifm quits (with ZZ, :q, etc. or when killed by a signal);
    - when you explicitly leave the mount point by going up to its parent directory (with h, Enter on "../" or ":cd ..") and the other pane is not in the same directory or its child directories.

View look

vifm supports displaying of the file list view in two different ways:

    - in a table mode, when multiple columns can be set using the 'viewcolumns' option (see "Column view" section below for details);
    - in a multicolumn list manner which looks almost like `ls -x` command output (see "ls-like view" section below for details).

The look is local for each view and can be chosen by changing the value of the 'lsview' boolean option.

Depending on the view look, some keys change their meaning to allow more natural cursor moving. This concerns mainly h, j, k, l and other similar navigation keys. Also some options can be ignored if they don't affect view displaying in the selected look. For example the value of 'viewcolumns' when 'lsview' is set.

ls-like view

When this view look is enabled by setting the 'lsview' option on, vifm will display files in multiple columns. The number of columns depends on the length of the longest file name present in the current directory of the view. The whole file list is automatically reflowed on directory change, terminal or view resize.

The view looks close to the output of the `ls -x` command, so files are listed left to right in rows.

In this mode file manipulation commands (e.g. d) don't work line-wise like they do in Vim, since such operations would be uncommon for file manipulating tasks. Thus, for example, dd will remove only the current file.

Column view

View columns are described by a comma-separated list of column descriptions, each of which has the following format

    [ '-' | '*' ] [ fw ( [ '.' tw ] | '%' ) ] '{' type '}' '.'{0,3}

where fw stands for full width and tw stands for text width.

So it basically consists of four parts:

    1. Optional alignment specifier
    2. Optional width specifier
    3. Mandatory column name
    4. Optional cropping specifier

Alignment specifier

It's an optional minus or asterisk sign as the first symbol of the string.

Specifies the type of text alignment within a column. Three types are supported:

    left align

        set viewcolumns=-{name}

    right align (default)

        set viewcolumns={name}

    dynamic align

    It's like left alignment, but when the text is bigger than the column, the alignment is made at the right (so the last part of the field is always visible).

        set viewcolumns=*{name}

Width specifier

It's a number followed by a percent sign, two numbers separated with a dot (the second one should be less than or equal to the first one), or a single number.

Specifies column width and its units. There are three size types:

    absolute size - column width is specified in characters

        set viewcolumns=-100{name},20.15{ext}

    results in two columns with lengths of 100 and 20 and a reserved space of five characters on the left of the second column.

    relative (percent) size - column width is specified in percents of view width

        set viewcolumns=-80%{name},15%{ext},5%{mtime}

    results in three columns with lengths of 80/100, 15/100 and 5/100 of view width.

    auto size (default) - column width is automatically determined

        set viewcolumns=-{name},{ext},{mtime}

    results in three columns with length of one third of view width. There is no size adjustment to content, since it would slow down rendering.

Columns of different sizing types can be freely mixed in one view. Though sometimes some of the columns can be seen partly or be completely invisible if there is not enough space to display them.
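Putting the alignment and width pieces together, a hypothetical layout mixing sizing types (column names are sort keys, described next) might look like:

    set viewcolumns=-50%{name},10{size},18{mtime}

This gives a left-aligned name column taking half the view, plus fixed-width size and mtime columns.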
Column name

This is just a sort key surrounded with curly braces, e.g.

    {name},{ext},{mtime}

{name} and {iname} keys are the same and are present both for consistency with the 'sort' option.

Empty curly braces ({}) are replaced with the default secondary column for the primary sort key. So after the next command the view will be displayed almost as if 'viewcolumns' is empty, but adding ellipsis for long file names:

    set viewcolumns=-{name}..,6{}.

Cropping specifier

It's from one to three dots after the closing curly brace in the column format. Specifies the type of text truncation if it doesn't fit in the column. Currently three types are supported:

    truncation - text is truncated

        set viewcolumns=-{name}.

    results in truncation of names that are too long to fit in the view.

    adding of ellipsis - ellipsis on the left or right are added when needed

        set viewcolumns=-{name}..

    results in an ellipsis being added at the end of file names that are too long.

    none (default) - text can pass column boundaries

        set viewcolumns=-{name}...,{ext}

    results in long file names possibly being partially written over the ext column.

Color schemes

The color schemes in vifm can be applied in two different ways:

    - as the primary color scheme;
    - as a local to a pane color scheme.

Both types are set using the :colorscheme command, but of different forms:

    - :colorscheme color_scheme_name - for the primary color scheme;
    - :colorscheme color_scheme_name directory - for local color schemes.

The look of different parts of the TUI (Text User Interface) is determined in this way:

    - Border, TopLineSel, TopLine, CmdLine, ErrorMsg, StatusLine, JobLine, SuggestBox and WildMenu are always determined by the primary color scheme;
    - CurrLine, Selected, Directory, Link, BrokenLink, Socket, Device, Executable, Fifo and Win are determined by the primary color scheme and a set of local color schemes, which can be empty.

There might be a set of local color schemes because they are structured hierarchically according to file system structure. For example, having the following piece of file system:

    ~
    `-- bin
        `-- my

Two color schemes:

    # ~/.vifm/colors/for_bin
    highlight Win cterm=none ctermfg=white ctermbg=red
    highlight CurrLine cterm=none ctermfg=red ctermbg=black

    # ~/.vifm/colors/for_bin_my
    highlight CurrLine cterm=none ctermfg=green ctermbg=black

And these three commands in the vifmrc file:

    colorscheme Default
    colorscheme for_bin ~/bin
    colorscheme for_bin_my ~/bin/my

The file list will look in the following way for each level:

    - ~/ - Default color scheme
        black background
        cursor with blue background
    - ~/bin/ - mix of Default and for_bin color schemes
        red background
        cursor with black background and red foreground
    - ~/bin/my/ - mix of Default, for_bin and for_bin_my color schemes
        red background
        cursor with black background and green foreground

Trash directory

vifm has support of a trash directory, which is used as temporary storage for deleted files or files that were cut. Using trash is controlled by the 'trash' option, and the exact path to the trash can be set with the 'trashdir' option. The trash directory in vifm differs from the system-wide one by default, because of possible incompatibilities of storing deleted files among different file managers. But one can set 'trashdir' to "~/.local/share/Trash" to use a "standard" trash directory.

There are two scenarios of using trash in vifm:

    1. As a place for storing files that were cut by "d" and may be inserted to some other place in the file system.
    2. As a storage of files that are deleted but not purged yet.
The first scenario uses deletion ("d") operations to put files to trash and put ("p") operations to restore files from the trash directory. Note that such operations move files to and from the trash directory, which can be long-term operations in case of different partitions or remote drives mounted locally.

The second scenario uses deletion ("d") operations for moving files to the trash directory and the :empty command-line command to purge all previously deleted files.

Deletion and put operations depend on registers, which can point to files in the trash directory. Normally, there are no nonexistent files in registers, but vifm doesn't keep track of modifications under the trash directory, so one shouldn't expect the value of registers to be absolutely correct if the trash directory was modified by something other than the operations that are meant for it. But this won't lead to any issues with operations, since they ignore nonexistent files.

Client-Server

vifm supports remote execution of command-line mode commands as well as remote changing of directories. This is possible using the --remote command-line argument.

To execute a command remotely combine the --remote argument with -c <command> or +<command>. For example:

    vifm --remote -c 'cd /'
    vifm --remote '+cd /'

To change directory without using command-line mode commands one can specify paths right after the --remote argument, like this:

    vifm --remote /
    vifm --remote ~
    vifm --remote /usr/bin /tmp

Plugin

Plugin for using vifm in vim as a file selector.

Commands:

    :EditVifm    select a file or files to open in the current buffer.
    :SplitVifm   split buffer and select a file or files to open.
    :VsplitVifm  vertically split buffer and select a file or files to open.
    :DiffVifm    select a file or files to compare to the current file with :vert diffsplit.
    :TabVifm     select a file or files to open in tabs.

Each command accepts up to two arguments: left pane directory and right pane directory. After arguments are checked, a vifm process is spawned in a special "file-picker" mode. To pick files just open them either by pressing l, i or Enter keys, or by running the :edit command. If no files are selected, the file under the cursor is opened, otherwise the whole selection is passed to the plugin and opened in vim.

The plugin has a few settings. g:vifm_term is a string variable that lets the user specify the command to run a GUI terminal; by default it's equal to 'xterm -e'. Another string variable, g:vifm_exec, equals "vifm" by default and specifies the path to vifm's executable. To pass arguments to vifm use g:vifm_exec_args, which is empty by default.

To use the plugin copy the vifm.vim file to either the system wide vim/plugin directory or into ~/.vim/plugin. If you would prefer not to use the plugin and it is in the system wide plugin directory, add

    let loaded_vifm=1

to your ~/.vimrc file.

Reserved

The following command names are reserved and shouldn't be used for user commands.

    g[lobal]
    v[global]

Environment

- VIFM
  Points to the main configuration directory (usually ~/.vifm/).

- MYVIFMRC
  Points to the main configuration file (usually ~/.vifm/vifmrc).

These environment variables are valid inside vifm and also can be used to configure it by setting some of them before running vifm.

When $MYVIFMRC isn't set, it's made as $VIFM/vifmrc (exception for Windows: vifmrc in the same directory as vifm.exe has higher priority than $VIFM/vifmrc). See "Startup" section above for more details.
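As a hypothetical illustration (the paths are examples, not defaults), both variables can be exported before launching vifm:

    export VIFM=~/.config/vifm
    export MYVIFMRC=~/dotfiles/vifmrc
    vifm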
- VIFM_FUSE_FILE
  On execution of external commands this variable is set to the full path of the file used to initiate the FUSE mount of the closest mount point from the current pane directory up. It's not set when outside of a FUSE mount point. When vifm is used inside a terminal multiplexer, it tries to set this variable as well (it doesn't work this way on its own).

See Also

vifm-convert-dircolors(1), vifm-pause(1)

Website:
Wiki:

Esperanto translation of the documentation by Sebastian Cyprych:

Author

Vifm was written by ksteen <[email protected]> and is currently developed by xaizek <[email protected]>

Referenced By

ncdu(1), vifm-convert-dircolors(1), vifm-pause(1), vifm-screen-split(1).
How I made my menu: What you could do is make an enum, add the menus in it as states (so MAINMENU, HACKS, UPDATER etc) And then make a button class which takes a couple of arguments ( Like x... Make a new thread about that, and post the code. I'll try to help you then. So there is no difference between Object object; and Object object = null? Could you post the code as is now? 2 small things. First of all, try using the .code=java] ./code] tags (replace the . with a [) so it's easier to read. Also why do you do arrayOneValue = arrayOneValue +1 instead of just... String day = ""; for (int i=0; i< days.length; i++) { if (!dag.equals(day)) { This part, what should it do? What do you want to achieve exactly? I'm not quite understanding the purpose of this yet. Also post your full output. This is what I did: public static void main(String[] args){ String[]... String[] days = { "Monday", "Tuesday", "Wednesday", "Turshday", "Friday", "Saturday", "Sunday" }; String day = ""; for (int i = 0; i < days.length; i++) { day += days[i]; // previously... Ah ok. Your day array has a length of 7 (Monday-Sunday are 7 days). Agreed? So days.length will return 7. Only the thing is, it starts at 0 instead of 1. So "Monday" will be 0. and then it will... What is line 32? hello darey You could update the speed every timer tick, but why not update the position every tick? Make something like an update function, something along the lines of: public void update(){ x+=xspeed;... That's true indeed. Now look at the line I pointed out. It will execute that every 10 ms I believe, so every once in a very short while it will execute ball.getXPoint() and ball.getYPoint(). And... Try to get just 1 ball able to bounce and move properly. If you follow Norm's tips on his post, and then read what I said, you should be able to make a conclusion on what is going wrong :) . Owh, well mine taught me to use this. By this. you refer to a private data field. Also I'm not quite understanding your code. Do you want a simple ball bouncing on the screen? If so, look at this... Also another tip, instead of writing a "_" in front of every variable the class has, you could also use "this." — makes everything a bit easier in my opinion. Example: private int speed = 0;... To slow it you could always make the x and y positions a double or float I suppose and do the same for the x and y point, then multiply them by 0.1 or something. It should go slower now. I don't... Depends how you make it. You could make a rectangle for each button, then whenever the mouse is pressed you could send the mouse pointer location to all your buttons. In your button you should check... Your class's name is also Collections, try renaming your class and see what happens :) For the formatting, I use an IDE called Eclipse, it has a really nifty built-in formatter. Press control shift f and everything goes where it should be. When I'm working together with other people we... That is a good solution, you could've also made an if-else statement. No particular advantages to that, but I find it easier to understand my own code that way: import java.util.Scanner;... What exactly does it give for an error? Also try using the code tags. .code=java] your code here ./code] replace the . with [ Welcome to the forum, you can learn a lot on this forum, just read other people's questions, and read and understand the answers :-). I've had that problem a lot as well. For some of my friends it was an outdated Java version. For the others the .exe worked. Give it a try, it's so much easier.
This article explains how to calculate basic statistics such as average, standard deviation, and variance TLDR; To average a NumPy array x along an axis, call np.average() with arguments x and the axis identifier. For example, np.average(x, axis=1) averages along axis 1. The outermost dimension has axis identifier “0”, the second-outermost dimension has identifier “1”. Python collapses the identified axis and replaces it with the axis average, which reduces dimensionality of the resulting array by one. Feel free to watch the video while skimming over the article for maximum learning efficiency: Graphical Explanation Here’s what you want to achieve: Extracting basic statistics such as average, variance, standard deviation from NumPy arrays and 2D matrices is a critical component for analyzing a wide range of data sets such as financial data, health data, or social media data. With the rise of machine learning and data science, your proficient education of linear algebra operators with NumPy becomes more and more valuable to the marketplace Code Solution Here is how you can accomplish this task in NumPy: import numpy as np x = np.array([[1, 3, 5], [1, 1, 1], [0, 2, 4]]) print(np.average(x, axis=1)) # [3. 1. 2.] print(np.var(x, axis=1)) # [2.66666667 0. 2.66666667] print(np.std(x, axis=1)) # [1.63299316 0. 1.63299316] Slow Explanation Next, I’ll NumPy internally represents data using NumPy arrays ( np.array). These arrays can have an arbitrary number of dimensions. In the figure above, we show a two-dimensional NumPy array but in practice, the array can have much higher dimensionality. You can quickly identify the dimensionality of a NumPy array by counting the number of opening brackets “[“ when creating the array. (The more formal alternative would be to use the ndim property.) Each dimension has its own axis identifier. ? Rule of thumb: The outermost dimension has the identifier “0”, the second-outermost dimension has the identifier “1”, and so on. By default, the NumPy average, variance, and standard deviation functions aggregate all the values in a NumPy array to a single value. Do you want to become a NumPy master? Check out our interactive puzzle book Coffee Break NumPy and boost your data science skills! (Amazon link opens in new tab.) Simple Average, Variance, Standard Deviation What happens if you don’t specify any additional argument apart from the NumPy array on which you want to perform the operation (average, variance, standard deviation)? import numpy as np x = np.array([[1, 3, 5], [1, 1, 1], [0, 2, 4]]) print(np.average(x)) # 2.0 print(np.var(x)) # 2.4444444444444446 print(np.std(x)) # 1.5634719199411433 For example, the simple average of a NumPy array is calculated as follows: (1+3+5+1+1+1+0+2+4)/9 = 18/9 = 2.0 Calculating Average, Variance, Standard Deviation Along an Axis However, sometimes you want to calculate these functions along an axis. For example, you may work at a large financial corporation and want to calculate the average value of a stock price — given a large matrix of stock prices (rows = different stocks, columns = daily stock prices). 
Here is how you can do this by specifying the keyword "axis" as an argument to the average, variance, and standard deviation functions:

import numpy as np

## Stock Price Data: 5 companies
# (row=[price_day_1, price_day_2, ...])
x = np.array([[8, 9, 11, 12],
              [1, 2, 2, 1],
              [2, 8, 9, 9],
              [9, 6, 6, 3],
              [3, 3, 3, 3]])

avg, var, std = np.average(x, axis=1), np.var(x, axis=1), np.std(x, axis=1)

print("Averages: " + str(avg))
print("Variances: " + str(var))
print("Standard Deviations: " + str(std))

"""
Averages: [10. 1.5 7. 6. 3. ]
Variances: [2.5 0.25 8.5 4.5 0. ]
Standard Deviations: [1.58113883 0.5 2.91547595 2.12132034 0. ]
"""

Note that you want to perform these three functions along axis=1, i.e., this is the axis that is aggregated to a single value. Hence, the resulting NumPy arrays have a reduced dimensionality.

High-Dimensional Averaging Along An Axis

Of course, you can also perform this averaging along an axis for high-dimensional NumPy arrays. Conceptually, you'll always aggregate the axis you specify as an argument. Here is an example:

import numpy as np

x = np.array([[[1, 2], [1, 1]],
              [[1, 1], [2, 1]],
              [[1, 0], [0, 0]]])

print(np.average(x, axis=2))
print(np.var(x, axis=2))
print(np.std(x, axis=2))

"""
[[1.5 1. ]
 [1.  1.5]
 [0.5 0. ]]
[[0.25 0.  ]
 [0.   0.25]
 [0.25 0.  ]]
[[0.5 0. ]
 [0.  0.5]
 [0.5 0. ]]
"""

Where to Go From Here?

Solid programming skills are the foundation of your thorough education as a data scientist and machine learning expert. Master Python first! Join more than 55,000 email subscribers and download your personal Python cheat sheets as high-resolution PDFs. Print them, study them, and keep consulting them daily until you master every bit of Python syntax by heart.
https://blog.finxter.com/numpy-average-along-axis/
CC-MAIN-2022-21
refinedweb
846
56.66
tag:nuttnet.net,2005:/index from the desk of 2009-02-01T16:40:00Z tag:nuttnet.net,2005:PostPresenter/4768 2009-02-01T16:40:00Z 2009-02-01T16:47:10-05:00 English Ordinal Suffix <p>As I was previewing the post published yesterday, I realized that the post's date was marked as <em>January 31th, 2009</em>. Looking through my templates, I saw that I had been lazy and hard-coded "th" into the template. Investigating further, I discovered that strftime (the method that <a href="">Liquid Templates</a> use to format dates) had no way to output the correct suffix.</p><p>There didn't seem to be any clean place to put it in the layout or in LimeSpot, but LimeSpot happens to let you add arbitrary javascript to your pages so I added this little snippet: </p><pre> <!-- hack --><br /> <script type="text/javascript"><br /> $$('.post_date').each(function(date) {<br /> var day = date.select('.day')[0]; <br /> var modDay = day.innerHTML % 10; <br /> var suffix = 'th'; // default case<br /> if(modDay == 1) { suffix = 'st'; } <br /> if(modDay == 2) { suffix = 'nd'; } <br /> if(modDay == 3) { suffix = 'rd'; } <br /> date.select('.th')[0].update(suffix);<br /> });<br /> </script> <br /></pre> michael tag:nuttnet.net,2005:PostPresenter/4756 2009-01-31T13:51:00Z 2009-01-31T23:37:01-05:00 The Empty Browser <p>Yesterday I was reading the <a href="">Mozilla Labs Blog</a> where they were talking about removing chrome and making the browser controls take up less screen space. I wanted to explore it more, so as a first step I removed Firefox's chrome piece by piece to see how it affected my browsing experience. </p><p>The <strong>Home</strong> button? I have never clicked it before. My browser opens to whatever page was opened last. Easy decision.</p><p>The <strong>Stop</strong> button. I don't often stop pages loading, and in the few cases where I do I use escape out of habit.</p><p><strong>Reload</strong>? I use it a bit more, but I'm pretty accustomed to Ctrl-R and I think I can deal.</p><p>Next, the <strong>Back</strong>/<strong>Forward</strong> keyhole buttons. I use Alt-left and Alt-right arrow more than I use the actual buttons themselves, but what I found is that the Back and Forward buttons are the only things giving me visual indications that I <em>can</em> go back or forward.</p><p>Removing <strong>tabs</strong> is like going back to the year 2000.</p><p>With the <strong>status bar</strong> gone, I am totally lost. Like walking around in a dark room bumping into things, I can't tell if a page is loaded, or loading, or perhaps stalled. I suppose I could get used to gauging how long pages take to load, but it's definitely an inferior browsing experience.</p><p>In conclusion, it's not as easy as just removing all of the buttons. (not that anyone ever said it was) Being an Emacs user I may like using the keyboard more than most people, but the visual cues I that I get from the buttons are more important than the buttons themselves. It would be very easy to remove the vertical scroll bars in Emacs because the keyboard is the preferred device for navigating through the page, but many people leave them in because they give a visual indication of the length of the document and location of the cursor in relation to the rest of the document.</p><p>Fortunately, I think it's easier to provide visual cues than to provide buttons for people to click. I intend to keep an eye on these new de-cluttered browser developments. 
</p> michael tag:nuttnet.net,2005:PostPresenter/2538 2008-06-10T20:58:00Z 2008-06-10T21:08:33-04:00 Rethinking Car Design <p>I just ran across this brilliant new idea from BMW:</p><p><object height="315" width="388"><param name="movie" value=";hl=en" /><param name="wmode" value="transparent" /><embed src=";hl=en" type="application/x-shockwave-flash" height="315" wmode="transparent" width="388"></embed></object></p><p>Why do we need metal bodies for cars, anyway? Assuming that a carbon fiber frame is just as sturdy as a regular steel frame with regular panels, I can't see any reason you would. Washing might be difficult, but I'd imagine you could just take the fabric off and throw it in the wash. Five years down the road when it's not looking so great, just buy a new cover for a couple hundred bucks.</p><p>Will this revolutionize the car industry? Probably not any time soon at least, but hopefully it'll get things moving in that direction. </p> michael tag:nuttnet.net,2005:PostPresenter/2482 2008-06-05T23:57:00Z 2008-06-06T00:10:59-04:00 Rails routing and namespaces <p>A while back, Rails gained the ability to namespace routes. For instance, if you wanted to add a blog to your app but didn't want to pollute its top-level URL space, you could do this:</p><pre>namespace(:blog) do map.resources :posts map.resources :tags end</pre> <p>Then you get a nice blog at "/blog" and everything is great, right? You now have to put your post blog controller in app/controllers/blog/posts_controller.rb, and put your views in app/view/blogs/posts/. It seems clean, but it turns into so much typing that it gets annoying. When rendering partials explicitly, you have to add the "blog/" prefix. Worse, you now have to append "blog_" to every named route helper you use.</p><p>Namespaced do happen to be great for disambiguation, however. Admin interfaces are a perfect example: you can put your exposed blog controller in app/controllers/blogs_controller.rb and put your admin blog controller in app/controllers/admin/blogs_controller.rb.</p><p>I made the mistake of namespacing a number of controllers because I thought it would make for a cleaner grouping, and I am paying for it in the time I have to take to de-namespace. The moral of the story is to only use namespaced routes when :path_prefix => "/blog" won't work. </p> michael tag:nuttnet.net,2005:PostPresenter/2360 2008-05-27T20:45:00Z 2008-06-06T00:20:10-04:00 mod_rails <p>The webserver stack for Rails-based projects changes every 6 months or so. First it was apache+fastcgi, then lighttpd+fastcgi, then apache+mongrel, then nginx+mongrel. Our production servers for limespot.com are using nginx+mongrel, but there's a new deployment option out now: <a href="">Phusion Passenger</a>, otherwise known as mod_rails.</p><p>At last week's hackfest I decided to try out mod_rails to see how easy it was. The website advertised it as the easiest deployment option, so here it is:</p><p>Installation is really as simple as <span class="Apple-style-span" style="font-style: italic;">gem install phusion-passenger</span> and then <span class="Apple-style-span" style=""><span class="Apple-style-span" style="font-style: italic;">passenger-install-apache2-module</span><span class="Apple-style-span" style="font-family: Verdana; line-height: normal; white-space: normal;"><span class="Apple-style-span" style="font-style: italic;"> </span>while following its instructions. 
I'm using Leopard so I already had apache and simply edited /etc/apache2/httpd.conf and added the three required lines.</span></span></p><p>From there it got a little bit weird. I think mod_rails' ideal use in development would be to be able to point it to ~/Sites and have it rails-ify each of the sub-directories. I tried setting Apache's DocumentRoot to /Users/michael/Sites, but mod_rails didn't pick up the sub-directories as rails apps. I had to link each public directory to my webserver's DocumentRoot and use the RailsBaseURI directive in order to manually specify each rails app.</p><p>The first rails app I tried was my company's intranet app. I was quickly presented with a beautiful error page telling me I had failed. It wasn't very helpful, but wow did it look good! It wasn't hitting rails at all and the only apache log message was "define INLINE_DIR or HOME in your environment" ImageScience was to blame, and I never could figure out how to get it to find INLINE_DIR on its own.</p><p>I figured I would go ahead and try LimeSpot, and to my pleasant surprise it just worked (mostly). Rails' stylesheet caching didn't work properly, but after I turned it off the stylesheets came down just fine and everything was rewritten with the "/limespot" prefix. Our custom mongrel handlers didn't work, obviously, so it will be a while before LimeSpot switches to mod_rails.</p><p>All in all, mod_rails is looking to be a really promising deployment solution, even in a development environment. It would be great if it could be made to detect all rails projects in a directory and automagically serve them from subdomains or subdirectories.</p><p><strong>Update:</strong> mod_rails supposedly supports <a href="">Rack</a> now, though I haven't looked into it. </p> michael tag:nuttnet.net,2005:PostPresenter/2183 2008-05-15T01:37:00Z 2008-05-15T01:44:24-04:00 Graduation <img src="" alt="IMG_3825.JPG" /><p> I went down to Houston this weekend to see my brother graduate. It was <em>hot</em>. I've only been to Houston a couple of times since he started school, and it seems like only recently that I was helping him move into his freshman dorm. Surveying his new apartment, I began thinking of all the trade-offs that one makes living in New York—cramped apartments, dirty streets, and alternating cold/hot weather. I still think they're worth it. </p> michael tag:nuttnet.net,2005:PostPresenter/2044 2008-05-05T23:32:00Z 2008-05-05T23:45:43-04:00 The Bleeding Edge <p>I've spent a few evenings hunting down a particularly annoying bug in my latest rails app.</p> ActionController::InvalidAuthenticityToken (ActionController::InvalidAuthenticityToken): /vendor/rails/actionpack/lib/action_controller/ request_forgery_protection.rb:73:in `verify_authenticity_token' ... <p>I was trying to get a script.aculo.us sortable list working, and I spent tons of time debugging with Firefox + Firebug to try and figure out why the authenticity token wasn't getting passed.</p> <p>Firefox 3 includes tons of great new features and it requires a new Firebug, so every couple of days I'm getting the latest alpha release of Firebug. For some reason I was sure my rails app was broken, rather than Firebug. As a matter of fact, it turns out my rails app was working just fine and Firebug was <i>lying</i> to me and resubmitting the request when I tried to use it to check out the response. (score: Rails CSRF protection 1, my sanity 0)</p> <p>I fixed it by either turning on Firebug's new debugging mode, or turning on its network monitoring. 
I can't really tell which because I can't seem to figure out which one actually solves the problem. </p> <p>These nifty development tools are great, but sometimes there is no substitute for inspecting outgoing TCP packets. </p> michael tag:nuttnet.net,2005:PostPresenter/1786 2008-04-14T15:48:00Z 2008-04-14T15:49:55-04:00 Feeds For the first time, my blog now has an RSS feed. Welcome to 1999! michael tag:nuttnet.net,2005:PostPresenter/1697 2008-04-07T00:55:00Z 2008-04-14T15:34:59-04:00 Spread Thin > <p>One of the topics heavily under discussion at the past few conferences I've attended is the idea of personal content ownership. The current model for startups is to build a service, attract users, and let those users post their own content which attracts more users. Once the cycle repeats a few times network effects start to kick in and the site takes off. Some sites have attempted to address the discrepancy between users providing the content and sites selling ads by offering revenue sharing programs, but I think the possibly more important issue is user control.</p><p>When I press the "publish" button for this post, where does it go? Perhaps I want to also publish it on another site. Plain text is fairly unique in that you can copy and paste your way to your post's freedom, but right now LimeSpot only really offers two ways to see a post: displayed in rendered form in my blog, and in the RSS feed. Take something more abstract, such as adding a user as a friend: does the social networking site "own" that piece of data that says we are friends? Should it?</p><p>I used to think that having control of my data meant being able to move it to my own website, on my hosting provider. So how is that any different than leaving a video on YouTube? Since my hosting provider allows me to run applications, I can create any interface I like for retrieving data. But what if a website (say, LimeSpot) could provide the same flexibility? It would be like a hosting provider that had a bunch of reasonable defaults for retrieving data, but allowed you to come up with your own if you wanted.</p><p>By "retrieving data" I mean a number of different things: perhaps you want to segregate your different types of content and have a separate blog, list of videos, photos, etc. Perhaps you want a big list twitter/pownce-style of everything you're doing online. Maybe you want to see everything you're doing mixed in with everything your friends are doing. Maybe you want to write your own API for retrieving your data.</p><p>Obviously there are disadvantages of all of a user's data actually living in a centralized place. Sites like twitter and youtube are popular because of the sense of community people have with other people on the site. I'd much rather pull in data to LimeSpot than have it hosted there.</p><p>This idea isn't a catch-all solution by any means. I didn't even link to my livejournal site because it is rarely updated and probably embarassing. Some information needs to stay separate, but it should be the user's choice, not the site owner's. </p> > michael tag:nuttnet.net,2005:PostPresenter/1473 2008-03-10T15:11:00Z 2008-03-10T15:12:21-04:00 SXSW I suppose it's a bit late, but I'm down in Austin at South By Southwest this year. Come say hi at our <a href="">party tonight</a>. michael tag:nuttnet.net,2005:PostPresenter/1423 2008-03-05T00:43:00Z 2008-03-05T00:45:29-05:00 The Magic is Temporarily Unavailable And that's what I get for neglecting my own theme while building other peoples'. 
The cobbler's children have no shoes. michael tag:nuttnet.net,2005:PostPresenter/1269 2008-02-20T16:02:00Z 2008-02-20T16:10:30-05:00 Firefox Clutter <p>Confession: I used to use Windows as my main operating system, and for a long time I used Internet Explorer as my browser. As a 14-year-old, I always saved my money to buy as much memory as I could, because at any given time I'd have 20 or 30 browser windows open. When I discovered Firefox and tabbed browsing, I dropped down to 5 or 10 browser windows, but they had 10-20 tabs each. My room was also cluttered as a kid.</p><p>Now I use Mac OS with only a single Firefox window, but I still have entirely too many tabs open. This mostly results from links appearing outside the browser, such as links in instant message windows and emails. I usually open these links once, look at the page, then forget about them. It would be really nice if I could specify one of Firefox's tabs as a "junk tab", to be reused for all external links.</p><p>I can't think of a clean way to fit the feature into Firefox's user interface, but I'm looking forward to the new bookmark management system. </p> michael tag:nuttnet.net,2005:PostPresenter/1234 2008-02-14T00:28:00Z 2008-02-14T00:32:04-05:00 Moving I just shut down the Athlon64 server sitting in my living room. Suddenly the room goes quiet. I could hear crickets chirping if such things could survive in New York. The quiet is disconcerting; I begin to wonder if the steady hum of the fans is all that has kept me sane for the last year and a half as I sit alone in my old apartment. michael tag:nuttnet.net,2005:PostPresenter/1207 2008-02-09T01:03:00Z 2008-02-09T02:02:54-05:00 Challenge <p><img class="embedded-img-right" src="" alt="IMG_0084.JPG copy" />Today I was invited at the last minute to a challenge of epic proportions.</p><p>I'm not usually one to participate in eating competitions, but macaroni and cheese is one of my favorite foods ever (yes, I'm still four years old) and I gave in to old-fashioned peer pressure. Secretly I was planning on only eating half and taking the rest home, but one thing led to another and I ended up finishing the "Mongo" size. So did Nathan, and he finished first. </p><p>We were rewarded with nothing but pride, though the waitress did give me a free water for finishing.</p> michael tag:nuttnet.net,2005:PostPresenter/1191 2008-02-06T23:31:00Z 2008-02-06T23:38:16-05:00 Apartment Hunt <p><img class="embedded-img-right" src="" alt="IMG_0064.JPG copy" />I've nearly secured an apartment. It's in the East Village, and it's quite nice. It's cozy (read: small) but also cozy in that it just feels nice. The bedroom has no windows so that's going to be weird, but I think I'm just going to sleep with the door open. I have one final thing to do tomorrow morning and it's mine.</p><p>Of course, as with anything in New York there's a chance that it'll slip through my fingers and I'll have to go back to looking through craigslist and calling brokers. I think New York requires a certain fortitude that is most heavily tested during relocation.</p><p>At least I'm not buying a house. </p> michael tag:nuttnet.net,2005:PostPresenter/1180 2008-02-05T19:12:00Z 2008-02-05T19:15:10-05:00 Primary <p class="post_body">I voted today. You should too, unless you're Canadian, a convict, or generally disenfranchised.<p>The nearest polling place was a school a few blocks from me, which was incredibly hot and humid, especially considering the nice 50-degree weather we're having in New York. I was not on the list. 
I ended up having to go back to my old polling place in Prospect Heights, where the smell of freshly baked chocolate chip cookies from the school cafeteria made me very hungry and not particularly politically inclined.</p><p>Last night a coworker pointed me to <a href="">Lawrence Lessig's video on why you should vote for Barack Obama</a>. </p> </p> michael tag:nuttnet.net,2005:PostPresenter/1181 2008-02-03T14:07:00Z 2008-02-05T19:19:34-05:00 How I Got Here I was only vaguely familiar with LimeWire in college. The career center at my school basically gave me two options: go to work for one of those mind-numbingly boring local government contractors that showed up to the career fairs, or go to grad school. Our school had an online job board filled with the previously mentioned government contractors, but in the middle of listings by Raytheon and Lockheed-Martin I noticed a Ruby on Rails position listed by Lime Wire.<br /><br />"Those guys are a company?"<br /><br />I've since gotten that question more times than I can count at career fairs and info sessions. People generally seem to think that either LimeWire was written by some nerds in their basements, or that it just appeared one day through some sort of Intelligent Design. Maybe the internet created it.<br /><br />I had a very nice chat with Justin Schmidt, who asked a bunch of algorithms questions, described the design of Limespot, and got me very interested in the project. At the time I was interested in Rails, but was still new to it. For whatever reason he invited me up to New York for an interview.<br /><br />Since Lime Wire was my first real interview ever, there were a few things I hadn't yet learned. I flew myself up to New York and slept on a friend's floor out on Long Island the night before. The day of the interview I woke up very early and took the two hour train ride into New York City. Being the confused tourist I was, I mistakenly got off at the 23rd street subway stop, and walked approximately 30 blocks through the pouring rain. By the time I got there my shoes made squishing noises as I walked, and I was soaked from the shoulders down.<br /><br />Fortunately I made the correct judgment not to wear my Interview/Wedding/Funeral suit that my parents had given me, and instead opted to wear jeans. At least I was comfortable during the strenuous day-long interview. As I left the interview, I was sure that there was no way Lime Wire would hire me. I think I might have even failed the reverse-a-string question at one point. I nearly even declined the dinner invitation as to not waste any more of their time. Two weeks later, I had the job.<br /><br />From the outside, I saw the Lime Wire interview process as a Really Big Deal. I kind of imagined it like a trial, where the interviewers interrogated me, analyzed the laws, and made some sort of ruling. In retrospect it's just a bunch of regular people trying to find the best co-workers they can.<br /><br />One year and two months later, LimeSpot is a reality. michael tag:nuttnet.net,2005:PostPresenter/383 2007-08-28T20:20:10Z 2007-08-28T20:20:16-04:00 Limespot.com has launched! It's finally live! After over a year of working on it, <a href="">limespot.com</a> has gone online. It powers <a href="">blog.limewire.com</a>, also. michael
http://feeds.feedburner.com/nuttnet/qWLn
crawl-002
refinedweb
3,825
63.39
Recently, I was assigned a project to develop Survival International's ecommerce component using Spree. Survival International is a non-profit organization that supports tribal groups worldwide in education, advocacy, and campaigns. Spree is an open source Ruby on Rails ecommerce platform that was sponsored by End Point from its creation in early 2008 until May 2009, and that we continue to support. End Point also offers a hosting solution for Spree (SpreeCamps), which was used for this project.

Spree contains ecommerce essentials and is intended to be extended by developers. The project required customization, including significant cart customization such as adding a buy 4 get 1 free promo discount, adding free giftwrap to the order if the order total exceeded a specific preset amount, adding a 10% discount, and adding a donation to the order. Some code snippets and examples of the cart customization in Rails are shown below.

An important design decision that came up was how to store the four potential cart customizations (buy 4 get 1 free promo, free giftwrap, 10% discount, and donation). The first two items (4 get 1 free and free giftwrap) are dependent on the cart contents, while the latter two items (10% discount and donation) are dependent on user input. Early on in the project, I tried using session variables to track the 10% discount application and donation amount, and I applied an after_filter to calculate the buy 4 get 1 free promo and free giftwrap for every order edit, update, or creation. However, this proved somewhat cumbersome and required that most Rails views be edited (frontend and backend) to show the correct cart contents.

After discussing the requirements with a coworker, we came up with the idea of using a single product with four variants to track each of the customization components. I created a migration file to introduce the variants, similar to the code shown below. A single product by the name of 'Special Product' contained four variants with SKUs to denote which customization component they belonged to ('supporter', 'donation', 'giftwrap', or '5cards').

p = Product.create(:name => 'Special Product',
                   :description => "Discounts, Donations, Promotions",
                   :master_price => 1.00)
v = Variant.create(:product => p, :price => 1.00, :sku => 'supporter') # 10% discount
v = Variant.create(:product => p, :price => 1.00, :sku => 'donation')  # donation
v = Variant.create(:product => p, :price => 1.00, :sku => 'giftwrap')  # free giftwrap
v = Variant.create(:product => p, :price => 1.00, :sku => '5cards')    # buy 4 get 1 free discount

Next, I added accessor methods to retrieve the variants, shown below. Each of these accessor methods is used throughout the code, so this is the only location requiring an update if a variant SKU is modified.

module VariantExtend
  ...
  def get_supporter_variant
    Variant.find_by_sku('supporter')
  end
  def get_donation_variant
    Variant.find_by_sku('donation')
  end
  def get_giftwrap_variant
    Variant.find_by_sku('giftwrap')
  end
  def get_cards_promo_variant
    Variant.find_by_sku('5cards')
  end
  ...
end

The design to use variants makes the display of cart contents on the backend and frontend much easier, in addition to simplifying the calculation of cart totals. In Spree, the line item price is not necessarily equal to the variant price or product master price, so the prices stored in the product and variant objects introduced above are meaningless to individual orders.
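To see what that last point means in practice, here is a tiny illustration. It is mine, not from the post, and the order variable is hypothetical; it assumes only the line_items association used later in this post and the fact that a Spree line item belongs to a variant:

# The variant keeps its 1.00 placeholder price from the migration,
# while each order's line item carries the price that matters.
v  = Variant.new.get_giftwrap_variant
li = order.line_items.detect { |l| l.variant_id == v.id }

v.price   # => 1.00  (placeholder, never shown to customers)
li.price  # => 0.00  (per-order price, e.g. free giftwrap)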
An after_filter was added to the Spree orders controller to add, remove, or recalculate the price for each special product variant. The order of the after_filters was important. The cards (buy 4 get 1 free) discount was added first, followed by a subtotal check for adding free giftwrap, followed by adding the supporter discount which reduces the total price by 10%, and finally a donation would be added on top of the order total:

OrdersController.class_eval do
  after_filter [:set_cards_discount, :set_free_giftwrap,
                :set_supporter_discount, :set_donation],
               :only => [:create, :edit, :update]
end

Each after filter contained specific business logic. The cards discount logic adds or removes the variant from the cart and adjusts the line item price:

def set_cards_discount
  v = Variant.new.get_cards_promo_variant  # get variant
  # calculate buy 4 get 1 free discount (cards_discount)
  # remove variant if order contains variant and cards_discount is 0
  # add variant if order does not contain variant and cards_discount is not 0
  # adjust price of discount line item to cards_discount
  # save order
end

The free giftwrap logic adds or removes the variant from the cart and sets the price equal to 0:

def set_free_giftwrap
  v = Variant.new.get_giftwrap_variant  # get variant
  # remove variant if cart contains variant and order subtotal < 40
  # add variant if cart does not contain variant and order subtotal >= 40
  # adjust price of giftwrap line item to 0.00
  # save order
end

The supporter discount logic adds or removes the discount variant depending on user input. Then, the line item price is adjusted to give a 10% discount if the cart contains the discount variant:

def set_supporter_discount
  v = Variant.new.get_supporter_variant  # get variant
  # remove variant if cart contains variant and user input to receive discount is 'No'
  # add variant if cart does not contain variant and user input to receive discount is 'Yes'
  # adjust price of discount line item to equal 10% of the subtotal (minus existing donation)
  # save order
end

Finally, the donation logic adds or removes the donation variant depending on user input:

def set_donation
  v = Variant.new.get_donation_variant  # get variant
  # remove variant if cart contains variant and user donation is 0
  # add variant if cart does not contain variant and user donation is not 0
  # adjust price of donation line item
  # save order
end

This logic results in a simple process for all four variants to be adjusted for every recalculation or creation of the cart. Also, the code examples above used existing Spree methods where applicable (add_variant) and created a few new methods that were used throughout the examples above (Order.remove_variant(variant), Order.adjust_price(variant, price)); a sketch of these helpers follows at the end of this post.

A few changes were made to the frontend cart view. To render the desired view, line items belonging to the "Special Product" were not displayed in the default order line display. The buy 4 get 1 free promo and free giftwrap were added below the default line order items. Donations and discounts were shown below the line items in order of how they are applied to the order. The backend views were not modified, and as a result the site administrators would see all special variants in an order.

An additional method was created to define the total number of line items in the order, shown at the top right of every page except for the cart and checkout page:

module OrderExtend
  ...
  def mod_num_items
    item_count = line_items.inject(0) { |kount, line_item| kount + line_item.quantity } +
      (contains?(Variant.new.get_supporter_variant)   ? -1 : 0) +
      (contains?(Variant.new.get_donation_variant)    ? -1 : 0) +
      (contains?(Variant.new.get_giftwrap_variant)    ? -1 : 0) +
      (contains?(Variant.new.get_cards_promo_variant) ? -1 : 0)
    item_count.to_s + (item_count != 1 ? ' items' : ' item')
  end
  ...
end

The solution developed for this project was simple and extended the Spree core ecommerce code elegantly. The complex business logic required was easily integrated in the variant accessor methods and after_filters to add, remove, and recalculate the price of the custom variants where necessary. The project required additional customizations, such as view modifications, navigation modifications, and complex product optioning, which may be discussed in future blog posts :).

Learn more about End Point's Rails development or Spree ecommerce development.
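As promised above, here is a minimal sketch of the Order.remove_variant(variant) and Order.adjust_price(variant, price) helpers the post names but does not show. This is my own assumption, not code from the project; it relies only on the line_items association used above and on Spree line items belonging to a variant.

module OrderExtend
  ...
  # Hypothetical sketch: drop every line item holding the given variant.
  def remove_variant(variant)
    line_items.select { |li| li.variant_id == variant.id }.each(&:destroy)
    line_items.reload
  end

  # Hypothetical sketch: set the unit price of the line item holding the
  # given variant (prices are per-order, not taken from the variant).
  def adjust_price(variant, price)
    item = line_items.detect { |li| li.variant_id == variant.id }
    item.update_attribute(:price, price) if item
  end
  ...
end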
http://blog.endpoint.com/2009/10/rails-approach-to-spree-shopping-cart.html
CC-MAIN-2017-26
refinedweb
1,202
51.78
DZone Snippets is a public source code repository. Easily build up your personal collection of code snippets, categorize them with tags / keywords, and share them with the world.

User Friendly Time Entry
Posted by Bob Silva Sat, 22 Apr 2006 08:49:00 GMT

Have a need to track time spent on something? Here's an easy way to allow your users to enter their time in a smart way (any way they want). This example accepts fractional hours and overflowing minutes, and converts and displays them as the user would expect. Times are stored in your database as minutes (integer) and displayed as hours/minutes (regardless of how they were input).

Model Code (model.rb):

def set_travel_time(hours, minutes)
  self.travel_time = ((hours.to_f * 60) + minutes.to_i).to_i
end

def get_travel_time
  travel_time.to_i.divmod(60)
end

Controller Code (models_controller.rb):

def create
  @model = Model.new(...)
  ...
  @model.set_travel_time(params[:hours], params[:minutes])
  ...
  if @model.save
  ...
end

def edit
  @model = Model.find(...)
  ...
  @hours, @minutes = @model.get_travel_time
  ...
end

View Code (_form.rhtml):

<%= text_field_tag 'hours', @hours -%> hours
<%= text_field_tag 'minutes', @minutes -%> minutes

For plugin ideas, see dollars_and_cents: one of the big problems with a database-agnostic framework like ActiveRecord is that it doesn't have a decent data type for money. Yes, you can use a FLOAT, but then you end up charging someone $12.3000000000000001, which is just awkward. This plugin stores as INT, but displays as Currency.
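To make the normalization concrete, here is a short usage illustration. It's my own example, not part of the snippet; "Model" stands in for whatever ActiveRecord class carries the travel_time column.

entry = Model.new
entry.set_travel_time('1.5', '90')   # fractional hours plus overflowing minutes
entry.travel_time                    # => 180 (stored as integer minutes)

hours, minutes = entry.get_travel_time
"#{hours} hours #{minutes} minutes"  # => "3 hours 0 minutes"

Whether the user types 3 hours, 1.5 hours plus 90 minutes, or 180 minutes, the same 180 ends up in the database, which is the whole point of storing a single integer column.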
http://www.dzone.com/snippets/user-friendly-time-entry
CC-MAIN-2014-10
refinedweb
231
59.9
Type: Posts; User: cutee_eyeh

import javax.swing.*;

public class Substring{
    public static void main (String a[]){
        String m = JOptionPane.showInputDialog("Enter a String:");
        String sub =...

..in diamond formation.. the output should look like this

* * * * * * * * *

..but then the output of this program is

* ...

..this code is correct, but numbers that should be reported as prime come out as composite; for example, if we input 1, the output says "1 is composed of 1". Thank you..

public class Diamond {
    public static void main(String args[]){
        int n,i,j,k;
        do {
            n = (int)(3);
        }while(n % 2 ==...

import java.util.Scanner;

public class erelle{
    private static String comps;

    public static void main(String[] args){
        Scanner input = new Scanner(System.in);
        ...
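The diamond posts above are truncated, so for reference here is a minimal, self-contained sketch of the kind of program the poster appears to be after: printing a centered diamond of asterisks for an odd width n. This is my own reconstruction, not the poster's code; the class name and the hard-coded n = 3 are assumptions.

public class DiamondSketch {
    public static void main(String[] args) {
        int n = 3; // odd maximum width, as in the truncated original
        for (int row = -(n / 2); row <= n / 2; row++) {
            int stars = n - 2 * Math.abs(row);      // 1, 3, ..., n, ..., 3, 1
            // pad with spaces so each row is centered
            for (int s = 0; s < (n - stars) / 2; s++) System.out.print(' ');
            for (int s = 0; s < stars; s++) System.out.print('*');
            System.out.println();
        }
    }
}

For n = 3 this prints a one-star row, a three-star row, and a one-star row, each centered.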
http://www.javaprogrammingforums.com/search.php?s=b8eafe610bc771c1a419d9def5d043a4&searchid=477068
CC-MAIN-2013-20
refinedweb
123
67.76
You’ve heard about the benefits of testing. You know that it can improve your code’s reliability and maintainability as well as your development processes. You may even know about the wide range of available modules and idioms that Perl offers for testing Perl and non-Perl programs. In short, you may know everything except where to start. The labs in this chapter walk through the most basic steps of running and writing automated tests with Perl. By the end of the chapter, you’ll know how to start and continue testing, how Perl’s testing libraries work, and where to find more libraries to ease your workload.

One of Perl’s greatest strengths is the CPAN, an archive of thousands of reusable code libraries—generally called modules—for almost any programming problem anyone has ever solved with Perl. This includes writing and running tests. Before you can use these modules, however, you must install them. Fortunately, Perl makes this easy.

The best way to install modules from the CPAN is through a packaging system that can handle the details of finding, downloading, building, and installing the modules and their dependencies. On Unix-like platforms (including Mac OS X), as well as on Windows platforms if you have a C compiler available, the easiest way to install modules is by using the CPAN module that comes with Perl. To install a new version of the Test::Simple distribution, launch the CPAN shell with the cpan script:

% cpan
cpan shell -- CPAN exploration and modules installation (v1.7601)
ReadLine support enabled

cpan> install Test::Simple
Running install for module Test::Simple
Running make for M/MS/MSCHWERN/Test-Simple-0.54.tar.gz
<...>
Appending installation info to /usr/lib/perl5/5.8.6/powerpc-linux/perllocal.pod
/usr/bin/make install UNINST=1 -- OK

If Test::Simple had any dependencies (it doesn’t), the shell would have detected them and tried to install them first. If you haven’t used the CPAN module before, it will prompt you for all sorts of information about your machine and network configuration as well as your installation preferences. Usually the defaults are fine.

By far, most Windows Perl installations use ActiveState’s ActivePerl distribution, which includes the ppm utility to download, configure, build, and install modules. With ActivePerl installed, open a console window and type:

C:\>PPM
PPM> install Test-Simple

ActivePerl also has distributions for Linux and Solaris, so these instructions also work there. If the configuration is correct, ppm will download and install the latest Test::Simple distribution from ActiveState’s repository.

If the module that you want isn’t in the repository at all, or if the version in the repository is older than you like, you have a few options. First, you can search alternate repositories; see PodMaster’s list of ppm repositories. For example, to use dada’s Win32 repository permanently, use the set repository command within ppm:

C:\>PPM
PPM> set repository dada
PPM> set save

If you want to install a pure-Perl module or are working on a platform that has an appropriate compiler, you can download and install the module by hand. First, find the appropriate module, perhaps by browsing the CPAN. Then download the file and extract it to its own directory:

$ tar xvzf Test-Simple-0.54.tar.gz
Test-Simple-0.54/
<...>
Run the Makefile.PL program, and then issue the standard commands to build and test the module:

$ perl Makefile.PL
Checking if your kit is complete...
Looks good
Writing Makefile for Test::Simple

$ make
cp lib/Test/Builder.pm blib/lib/Test/Builder.pm
cp lib/Test/Simple.pm blib/lib/Test/Simple.pm

$ make test

(Be sure to download the file marked This Release, not the Latest Dev. Release, unless you plan to help develop the code.)

If all of the tests pass, great! Otherwise, do what you can to figure out what failed, why, and if it will hurt you. (See "Running Tests" and "Interpreting Test Results," later in this chapter, for more information.) Finally, install the module by running make install (as root, if you’re installing the module system-wide).

Makefile.PL uses a module called ExtUtils::MakeMaker to configure and install other modules. Some modules use Module::Build instead of ExtUtils::MakeMaker. There are two main differences from the installation standpoint. First, they require you to have Module::Build installed. Second, the installation commands are instead:

$ perl Build.PL
$ perl Build
$ perl Build test
# perl Build install

Otherwise, they work almost identically. Windows users may need to install Microsoft’s nmake to install modules by hand. Where Unix users type make, use the nmake command instead: nmake, nmake test, and nmake install.

Q: How do I know the name to type when installing modules through PPM? I tried install Test-More, but it couldn’t find it!

A: Type the name of the distribution, not the module within the distribution. To find the name of the distribution, search for the name of the module that you want. In this example, Test::More is part of the Test-Simple distribution. Remove the version and use that name within PPM.

Q: I’m not an administrator on the machine, or I don’t want to install the modules for everyone. How can I install a module to a specific directory?

A: Set the PREFIX appropriately when installing the module. For example, a PREFIX of ~/perl/lib will install these modules to that directory (at least on Unix-like machines). Then set the PERL5LIB environment variable to point there, or remember to use the lib pragma to add that directory to @INC in all programs in which you want to use your locally installed modules. If you build the module by hand, run Makefile.PL like this:

$ perl Makefile.PL PREFIX=~/perl/lib

If you use CPAN, configure it to install modules to a directory under your control. Launch the CPAN shell with your own user account and follow the configuration questions. When it prompts for the PREFIX, enter the path of a directory where you’d like to store your own modules. If the module uses Module::Build, pass the installbase parameter instead:

$ perl Build.PL --installbase=~/perl

See the documentation for ExtUtils::MakeMaker, CPAN, and Module::Build for more details.

Before you can gain any benefit from writing tests, you must be able to run them. Fortunately, there are several ways to do this, depending on what you need to know. To see real tests in action, download the latest version of Test::Harness from the CPAN and extract it to its own directory. Change to this directory and build the module as usual (see "Installing Test Modules," earlier in this chapter).
To run all of the tests at once, type make test:

$ make test
PERL_DL_NONLAZY=1 /usr/bin/perl5.8.6 "-MExtUtils::Command::MM" "-e" \
"test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00compile.........ok 1/5# Testing Test::Harness 2.46
t/00compile.........ok
t/assert............ok
t/base..............ok
t/callback..........ok
t/harness...........ok
t/inc_taint.........ok
t/nonumbers.........ok
t/ok................ok
t/pod...............ok
t/prove-globbing....ok
t/prove-switches....ok
t/strap-analyze.....ok
t/strap.............ok
t/test-harness......ok 56/208 skipped: various reasons
All tests successful, 56 subtests skipped.
Files=14, Tests=551, 6 wallclock secs ( 4.52 cusr + 0.97 csys = 5.49 CPU)

make test is the third step of nearly every Perl module installation. This command runs all of the test files it can find through Test::Harness, which summarizes and reports the results. It also takes care of setting the paths appropriately for as-yet-uninstalled modules.

Q: How do I run tests for distributions that don’t use Makefile.PL?

A: make test comes from ExtUtils::MakeMaker, an old and venerable module. Module::Build is easier to use in some cases. If there’s a Build.PL file, instead use the commands perl Build.PL, perl Build, and perl Build test. Everything will behave as described here.

Q: How do I run tests individually?

A: Sometimes you don’t want to run everything through make test, as it runs all of the tests for a distribution in a specific order. If you want to run a few tests individually, use prove instead. It runs the test files you pass as command-line arguments, and then summarizes and prints the results. (If you don’t have prove installed, you’re using an old version of Test::Harness. Use bin/prove instead. Then upgrade.)

$ prove t/strap*.t
t/strap-analyze....ok
t/strap............ok
All tests successful.
Files=2, Tests=284, 1 wallclock secs ( 0.66 cusr + 0.14 csys = 0.80 CPU)

If you want the raw details, not just a summary, use prove’s verbose (-v) flag:

$ prove -v t/assert.t
t/assert....1..7
ok 1 - use Test::Harness::Assert;
ok 2 - assert() exported
ok 3 - assert( FALSE ) causes death
ok 4 - with the right message
ok 5 - assert( TRUE ) does nothing
ok 6 - assert( FALSE, NAME )
ok 7 - has the name
ok
All tests successful.
Files=1, Tests=7, 0 wallclock secs ( 0.06 cusr + 0.01 csys = 0.07 CPU)

This flag prevents prove from eating the results. Instead, it prints them directly along with a short summary. This is very handy for development and debugging (see "Interpreting Test Results," later in this chapter).

Q: How do I run tests individually without prove?

A: You can run most test files manually; they’re normally just Perl files.

$ perl t/00compile.t
1..5
ok 1 - use Test::Harness;
# Testing Test::Harness 2.42
ok 2 - use Test::Harness::Straps;
ok 3 - use Test::Harness::Iterator;
ok 4 - use Test::Harness::Assert;
ok 5 - use Test::Harness;

Oops! This ran the test against Test::Harness 2.42, the installed version, instead of Version 2.46, the new version. All of the other solutions set Perl’s @INC path correctly. When running tests manually, use the blib module to pick up the modules as built by make or perl Build:

$ perl -Mblib t/00compile.t
1..5
ok 1 - use Test::Harness;
# Testing Test::Harness 2.46
ok 2 - use Test::Harness::Straps;
ok 3 - use Test::Harness::Iterator;
ok 4 - use Test::Harness::Assert;
ok 5 - use Test::Harness;

The -M switch causes Perl to load the given module just as if the program file contained a use blib; line.
The TEST_FILES argument to make test can simplify this:

$ make test TEST_FILES=t/00compile.t
t/00compile....ok 1/5# Testing Test::Harness 2.46
t/00compile....ok
All tests successful.
Files=1, Tests=5, 0 wallclock secs ( 0.13 cusr + 0.02 csys = 0.15 CPU)

For verbose output, add TEST_VERBOSE=1.

Perl has a wealth of good testing modules that interoperate smoothly through a common protocol (the Test Anything Protocol, or TAP) and common libraries (Test::Builder). You’ll probably never have to write your own testing protocol, but understanding TAP will help you interpret your test results and write better tests. All of the test modules in this book produce TAP output. Test::Harness interprets that output. Think of it as a minilanguage about test successes and failures.

Save the following program to sample_output.pl:

#!perl

print <<END_HERE;
1..9
ok 1
not ok 2
#     Failed test (t/sample_output.t at line 10)
#          got: '2'
#     expected: '4'
ok 3
ok 4 - this is test 4
not ok 5 - test 5 should look good too
not ok 6 # TODO fix test 6
# I haven't had time to add the feature for test 6
ok 7 # skip these tests never pass in examples
ok 8 # skip these tests never pass in examples
ok 9 # skip these tests never pass in examples
END_HERE

(Using Windows and seeing an error about END_HERE? Add a newline to the end of sample_output.pl, then read perldoc perlfaq8.)

Now run it through prove (see "Running Tests," earlier in this chapter):

$ prove sample_output.pl
sample_output....FAILED tests 2, 5
        Failed 2/9 tests, 77.78% okay (less 3 skipped tests: 4 okay, 44.44%)
Failed Test       Stat Wstat Total Fail  Failed  List of Failed
------------------------------------------------------------------------
sample_output.pl                9    2  22.22%  2 5
3 subtests skipped.
Failed 1/1 test scripts, 0.00% okay. 2/9 subtests failed, 77.79% okay.

prove interpreted the output of the script as it would the output of a real test. In fact, there’s no effective difference—a real test might produce that exact output. The lines of the test correspond closely to the results.

The first line of the output is the test plan. In this case, it tells the harness to plan to run 9 tests. The second line of the report shows that 9 tests ran, but two failed: tests 2 and 5, both of which start with not ok. The report also mentions three skipped tests. These are tests 7 through 9, all of which contain the text # skip. They count as successes, not failures. (See "Skipping Tests" in Chapter 2 to learn why.) That leaves one curious line, test 6. It starts with not ok, but it does not count as a failure because of the text # TODO. The test author expected this test to fail but left it in and marked it appropriately. (See "Marking Tests as TODO" in Chapter 2.)

The test harness ignored all of the rest of the output, which consists of developer diagnostics. When developing, it’s often useful to look at the test output in its entirety, whether by using prove -v or running the tests directly through perl (see "Running Tests," earlier in this chapter). This prevents the harness from suppressing the diagnostic output, as found with the second test in the sample output.

Q: What happens when the actual number of tests is different than expected?

A: Running the wrong number of tests counts as a failure. Save the following test as too_few_tests.t:

use Test::More tests => 3;

pass( 'one test' );
pass( 'two tests' );

Run it with prove:

$ prove too_few_tests.t
too_few_tests....ok 2/3# Looks like you planned 3 tests but only ran 2.
too_few_tests....dubious
        Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 3
        Failed 1/3 tests, 66.67% okay
Failed Test      Stat Wstat Total Fail  Failed  List of Failed
------------------------------------------------------------------------
too_few_tests.t     1   256     3    2  66.67%  3
Failed 1/1 test scripts, 0.00% okay. 1/3 subtests failed, 66.67% okay.

Test::More complained about the mismatch between the test plan and the number of tests that actually ran. The same goes for running too many tests. Save the following code as too_many_tests.t:

use Test::More tests => 2;

pass( 'one test' );
pass( 'two tests' );
pass( 'three tests' );

Run it with prove:

$ prove -v too_many_tests.t
too_many_tests....ok 3/2# Looks like you planned 2 tests but ran 1 extra.
too_many_tests....dubious
        Test returned status 1 (wstat 256, 0x100)
DIED. FAILED test 3
        Failed 1/2 tests, 50.00% okay
Failed Test       Stat Wstat Total Fail  Failed  List of Failed
------------------------------------------------------------------------
too_many_tests.t     1   256     2    1  50.00%  3
Failed 1/1 test scripts, 0.00% okay. -1/2 subtests failed, 150.00% okay.

This time, the harness interpreted the presence of the third test as a failure and reported it as such. Again, Test::More warned about the mismatch.

This lab introduces the most basic features of Test::Simple, the simplest testing module. You’ll see how to write your own test for a simple "Hello, world!"-style program. Open your favorite text editor and create a file called hello.t. Enter the following code:

#!perl

use strict;
use warnings;

use Test::Simple tests => 1;

sub hello_world
{
    return "Hello, world!";
}

ok( hello_world() eq "Hello, world!" );

Save it. Now you have a simple Perl test file. Run it from the command line with prove:

$ prove hello.t

You’ll see the following output:

hello....ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.09 cusr + 0.00 csys = 0.09 CPU)

hello.t looks like a normal Perl program; it uses a couple of pragmas to catch misbehavior as well as the Test::Simple module. It defines a simple subroutine. There’s no special syntax a decent Perl programmer doesn’t already know.

The first potential twist is the use of Test::Simple. By convention, all test files need a plan to declare how many tests you expect to run. If you run the test file with perl and not prove, you’ll notice that the plan output comes before the test output:

$ perl hello.t
1..1
ok 1

The other interesting piece is the ok() subroutine. It comes from Test::Simple and is the module’s only export. ok() is very, very simple. It reports a passed or a failed test, depending on the truth of its first argument. In the example, if whatever hello_world() returns is equal to the string Hello, world!, ok() will report that the test has passed. As the output shows, there’s one test in the file, and it passed. Congratulations!

In some cases, the number of tests you run is important, so providing a real plan is a good habit to cultivate.

Q: How do I avoid changing the plan number every time I add a test?

A: Writing 'no_plan' on the use line lets Test::Simple know that you’re playing it by ear. In this case, it’ll keep its own count of tests and report that you ran as many as you ran.

#!perl

use strict;
use warnings;

use Test::Simple 'no_plan';

sub hello_world
{
    return "Hello, world!";
}

ok( hello_world() eq "Hello, world!" );

When you declare no_plan, the test plan comes after the test output:

$ perl hello.t
ok 1
1..1

This is very handy for developing, when you don’t know how many tests you’ll add.
Having a plan is a nice sanity check against unexpected occurrences, though, so consider switching back to using a plan when you finish adding a batch of tests.

Q: How do I make it easier to track down which tests are failing?

A: When there are multiple tests in a file and some of them fail, descriptions help to explain what should have happened. Hopefully that will help you track down why the tests failed. It’s easy to add a description; just change the ok line:

ok( hello_world() eq "Hello, world!", 'hello_world() output should be sane' );

You should see the same results as before when running it through prove. Running it with the verbose flag will show the test description:

$ prove -v hello.t
1..1
ok 1 - hello_world() output should be sane

Q: How do I make more detailed comparisons?

A: Don’t worry; though you can define an entire test suite in terms of ok(), dozens of powerful and freely available testing modules work together nicely to provide much more powerful testing functions. That list starts with the aptly named Test::More.

Most of the Perl testing libraries assume that you use them to test Perl modules. Modules are the building blocks of larger Perl programs, and well-designed code uses them appropriately. Loading modules for testing seems simple, but it has two complications: how do you know you’ve loaded the right version of the module you want to test, and how do you know that you’ve loaded it successfully? This lab explains how to test both questions, with a little help from Test::More.

Imagine that you’re developing a module to analyze sentences to prove that so-called professional writers have poor grammar skills. You’ve started by writing a module named AnalyzeSentence that performs some basic word counting. Save the following code in your library directory as AnalyzeSentence.pm. (Perl is popular among linguists, so someone somewhere may be counting misplaced commas in Perl books.)

package AnalyzeSentence;

use strict;
use warnings;

use base 'Exporter';

our $WORD_SEPARATOR = qr/\s+/;
our @EXPORT_OK      = qw( $WORD_SEPARATOR count_words words );

sub words
{
    my $sentence = shift;
    return split( $WORD_SEPARATOR, $sentence );
}

sub count_words
{
    my $sentence = shift;
    return scalar words( $sentence );
}

1;

Besides checking that words() and count_words() do the right thing, a good test should test that the module loads and imports the two subroutines correctly. Save the following test file as analyze_sentence.t:

#!perl

use strict;
use warnings;

use Test::More tests => 5;

my @subs = qw( words count_words );
use_ok( 'AnalyzeSentence', @subs );
can_ok( __PACKAGE__, 'words' );
can_ok( __PACKAGE__, 'count_words' );

my $sentence = 'Queen Esther, ruler of the Frog-Human Alliance, briskly devours a monumental ice cream sundae in her honor.';
my @words    = words( $sentence );
ok( @words == 17, 'words() should return all words in sentence' );

$sentence = 'Rampaging ideas flutter greedily.';
my $count = count_words( $sentence );
ok( $count == 4, 'count_words() should handle simple sentences' );

Run it with prove:

$ prove analyze_sentence.t
analyze_sentence....ok
All tests successful.
Files=1, Tests=5, 0 wallclock secs ( 0.08 cusr + 0.01 csys = 0.09 CPU)

Instead of starting with Test::Simple, the test file uses Test::More. As the name suggests, Test::More does everything that Test::Simple does—and more! In particular, it provides the use_ok() and can_ok() functions shown in the test file.
use_ok() takes the name of a module to load, AnalyzeSentence in this case, and an optional list of symbols to pass to the module’s import() method. It attempts to load the module and import the symbols, and passes or fails a test based on the results. It’s the test equivalent of writing:

use AnalyzeSentence qw( words count_words );

can_ok() is the test equivalent of the can() method. The tests use it here to see if the module has exported words() and count_words() functions into the current namespace. These tests aren’t entirely necessary, as the ok() functions later in the file will fail if the functions are missing, but the import tests can fail for only two reasons: either the import has failed or someone mistyped their names in the test file.

Q: I don’t want to use use; I want to use require. Can I do that? How?

A: See the Test::More documentation for require_ok().

Q: What if I need to import symbols from the module as it loads?

A: If the test file depends on variables defined in the module being tested, for example, wrap the use_ok() line in a BEGIN block. Consider adding tests for the behavior of $WORD_SEPARATOR. Modify the use_ok() line and add the following lines to the end of analyze_sentence.t:

use_ok( 'AnalyzeSentence', @subs, '$WORD_SEPARATOR' ) or exit;
...
$WORD_SEPARATOR = qr/(?:\s|-)+/;
@words = words( $sentence );
ok( @words == 18, '... respecting $WORD_SEPARATOR, if set' );

Run the test:

$ prove t/analyze_sentence.t
t/analyze_sentence....Global symbol "$WORD_SEPARATOR" requires explicit package name at t/analyze_sentence.t line 28.
Execution of t/analyze_sentence.t aborted due to compilation errors.
# Looks like your test died before it could output anything.
t/analyze_sentence....dubious
        Test returned status 255 (wstat 65280, 0xff00)
FAILED--1 test script could be run, alas—no output ever seen

With the strict pragma enabled, when Perl reaches the last lines of the test file in its compile stage, it hasn’t seen the variable named $WORD_SEPARATOR yet. Only when it runs the use_ok() line at runtime will it import the variable. Change the use_ok() line once more:

BEGIN { use_ok( 'AnalyzeSentence', @subs, '$WORD_SEPARATOR' ) or exit; }

Then run the test again:

$ prove t/analyze_sentence.t
t/analyze_sentence....ok
All tests successful.
Files=1, Tests=6, 0 wallclock secs ( 0.09 cusr + 0.00 csys = 0.09 CPU)

Q: What if Perl can’t find AnalyzeSentence or it fails to compile?

A: If there’s a syntax error somewhere in the module, some of your tests will pass and others will fail mysteriously. The successes and failures depend on what Perl has already compiled by the time it reaches the error. It’s difficult to recover from this kind of failure. The best thing you can do may be to quit the test altogether:

use_ok( 'AnalyzeSentence' ) or exit;

If you’ve specified a plan, Test::Harness will note the mismatch between the number of tests run (probably one) and the number of tests expected. Either way, it’s much easier to see the compilation failure if it’s the last failure reported.

The following listing tests a class named Greeter, which takes the name and age of a person and allows her to greet other people.
Save this code as greeter.t:

    #!perl

    use strict;
    use warnings;

    use Test::More tests => 4;

    use_ok( 'Greeter' ) or exit;

    my $greeter = Greeter->new( name => 'Emily', age => 21 );
    isa_ok( $greeter, 'Greeter' );

    is( $greeter->age(), 21, 'age() should return age for object' );
    like( $greeter->greet(), qr/Hello, .+ is Emily!/,
        'greet() should include object name' );

The examples in "Writing Your First Test," earlier in this chapter, will work the same way if you substitute Test::More for Test::Simple; Test::More is a superset of Test::Simple.

Now save the module being tested in your library directory as Greeter.pm:

    package Greeter;

    sub new {
        my ($class, %args) = @_;
        bless \%args, $class;
    }

    sub name {
        my $self = shift;
        return $self->{name};
    }

    sub age {
        my $self = shift;
        return $self->{age};
    }

    sub greet {
        my $self = shift;
        return "Hello, my name is " . $self->name() . "!";
    }

    1;

Running the file from the command line with prove should reveal four successful tests:

    $ prove greeter.t
    greeter.t....ok
    All tests successful.
    Files=1, Tests=4,  0 wallclock secs ( 0.07 cusr +  0.03 csys =  0.10 CPU)

This program starts by loading the Greeter module and creating a new Greeter object for Emily, age 21. The first test checks to see if the constructor returned an actual Greeter object. isa_ok() performs several checks to see if the variable is actually a defined reference, for example. It fails if it is an undefined value, a non-reference, or an object of any class other than the appropriate class or a derived class.

The next test checks that the object's age matches the age set for Emily in the constructor. Where a test using Test::Simple would have to perform this comparison manually, Test::More provides the is() function, which takes two arguments to compare, along with the test description. It compares the values, reporting a successful test if they match and a failed test if they don't. (Test::More::is() uses a string comparison. This isn't always the right choice for your data. See Test::More::cmp_ok() to perform other comparisons.)

Similarly, the final test uses like() to compare the first two arguments. The second argument is a regular expression compiled with the qr// operator. like() compares this regular expression against the first argument -- in this case, the result of the call to $greeter->greet() -- and reports a successful test if it matches and a failed test if it doesn't.

Avoiding the need to write the comparisons manually is helpful, but the real improvement in this case is how these functions behave when tests fail. Add two more tests to the file, and remember to change the test plan to declare six tests instead of four. The new code is:

    use Test::More tests => 6;

    ...

    is( $greeter->age(), 22, 'Emily just had a birthday' );
    like( $greeter->greet(), qr/Howdy, pardner!/,
        '... and she talks like a cowgirl' );

Run the tests again with prove's verbose mode:

    $ prove -v greeter.t
    greeter.t....1..6
    ok 1 - use Greeter;
    ok 2 - The object isa Greeter
    ok 3 - age() should return age for object
    ok 4 - greet() should include object name
    not ok 5 - Emily just had a birthday
    #     Failed test (greeter.t at line 18)
    #          got: '21'
    #     expected: '22'
    not ok 6 - ... and she talks like a cowgirl
    #     Failed test (greeter.t at line 20)
    #                   'Hello, my name is Emily!'
    #     doesn't match '(?-xism:Howdy, pardner!)'
    # Looks like you failed 2 tests of 6.
    dubious
            Test returned status 2 (wstat 512, 0x200)
    DIED.
    FAILED tests 5-6
            Failed 2/6 tests, 66.67% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    ----------------------------------------------------------------------------
    greeter.t       2   512     6    2  33.33%  5-6
    Failed 1/1 test scripts, 0.00% okay. 2/6 subtests failed, 66.67% okay.

The current version of prove doesn't display the descriptions of failing tests, but it does display diagnostic output. Notice that the output for the new tests -- those that shouldn't pass -- contains debugging information, including what the test saw, what it expected to see, and the line number of the test. If there's only one benefit to using ok() from Test::Simple or Test::More, it's these diagnostics.

Q: How do I test things that shouldn't match?

A: Test::More provides isnt() and unlike(), which work the same way as is() and like(), except that the tests pass if the arguments do not match. Changing the fourth test to use isnt() and the fifth test to use unlike() will make them pass, though the test descriptions will seem weird.
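For illustration (these two assertions are mine, not from the book), the negated tests read almost identically to the originals:

    # Sketch: these pass precisely because the values do NOT match.
    isnt( $greeter->age(), 22, 'Emily has not had a birthday yet' );
    unlike( $greeter->greet(), qr/Howdy, pardner!/,
            '... and she does not talk like a cowgirl' );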
https://www.oreilly.com/library/view/perl-testing-a/0596100922/ch01.html
CC-MAIN-2019-39
refinedweb
4,875
74.79
Welcome to the gadgets API! This developers guide is based on the gadgets.* version of the gadgets JavaScript API. The gadgets API has been "renamespaced" into the gadgets.* JavaScript namespace to provide a cleaner API to program against and support. To learn more about the gadgets.* API, see the reference documentation here.

While the gadgets.* API overlaps significantly with the legacy gadgets API (Labs), there are also important differences. Currently, only some containers (a container is a site or application that runs gadgets) support the gadgets.* API. For a list of containers that support the gadgets.* API, see the OpenSocial container list. Some older containers support only the legacy gadgets API, so be sure to check the documentation for your specific container to see which API is supported. To learn more about different types of gadgets and where they run, see the gadgets API Overview. All containers support the legacy API, regardless of whether they support the gadgets.* API. However, the gadgets.* API offers many new features that don't exist in the legacy API, so you should use it if you can.

This developers guide is intended for people who want to use the gadgets API to write gadgets. Gadgets are so easy to create that they are a good starting point if you are just learning about web programming. A gadget is defined in an XML spec file (a minimal example is sketched at the end of this section), in which:

- The <Module> tag indicates that this XML file contains a gadget.
- The <ModulePrefs> tag contains information about the gadget such as its title, description, author, and other optional features.

Every container that runs social gadgets has slightly different characteristics, so the best way to learn is to get some hands-on experience developing social gadgets in the container you plan to target.

For more general gadget programming information, go to Writing Your Own Gadgets. From there you can go to Development Fundamentals, or back to the documentation home page for an overview of sections and topics.
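For reference, a minimal "Hello World" gadget spec looks roughly like the following. This is a reconstruction of the well-known example from the gadgets documentation, not a verbatim listing from this guide:

    <?xml version="1.0" encoding="UTF-8" ?>
    <Module>
      <!-- ModulePrefs holds the gadget's title and other metadata -->
      <ModulePrefs title="hello world example" />
      <!-- The Content section holds the HTML (and any JavaScript)
           that the container renders for the gadget -->
      <Content type="html">
        <![CDATA[
          Hello, world!
        ]]>
      </Content>
    </Module>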
http://code.google.com/apis/gadgets/docs/gs.html
crawl-002
refinedweb
307
58.69
Introduction to Spark 2.0 - Part 2 : Wordcount in Dataset API

This is the second blog in the series on Spark 2.0, where I discuss the Dataset abstraction of Spark. You can access all the posts in the series here.

TL;DR All code examples are available on github.

Introduction to Dataset

Dataset is a new abstraction in Spark, introduced as an alpha API in Spark 1.6. It becomes a stable API in Spark 2.0. It is the new single abstraction for all user-land code in Spark. From the definition,

"A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations. Each dataset also has an untyped view called a DataFrame, which is a Dataset of Row."

which sounds similar to the RDD definition:

"RDD represents an immutable, partitioned collection of elements that can be operated on in parallel"

The major difference is that a Dataset is a collection of domain-specific objects, whereas an RDD is a collection of any kind of object. The "domain-specific objects" part of the definition signifies the schema part of a Dataset. So the Dataset API is always strongly typed and optimized using the schema, where the RDD API is not.

The Dataset definition also mentions the DataFrame API. A DataFrame is a special Dataset where there are no compile-time checks for the schema. This makes Dataset the new single abstraction, replacing RDD from earlier versions of Spark.

Now that we have understood the Dataset abstraction, in the rest of the post we will see how to work with it.

Dataset Wordcount example

As with any new API, we will learn it through a word count example. Below is the code for word count in the Dataset API.

Step 1 : Create SparkSession

As we discussed in the last blog, we use SparkSession as the entry point for the Dataset API.

    val sparkSession = SparkSession.builder.
      master("local")
      .appName("example")
      .getOrCreate()

Step 2 : Read data and convert to Dataset

We read data using the read.text API, which is similar to the textFile API of RDD. The as[String] part of the code assigns the needed schema for the dataset.

    import sparkSession.implicits._
    val data = sparkSession.read.text("src/main/resources/data.txt").as[String]

Here data will be of type Dataset[String]. Remember to import sparkSession.implicits._ for all the schema conversion magic.

Step 3 : Split and group by word

Dataset mimics a lot of RDD APIs like map, groupByKey, etc. In the code below, we split lines to get words and group them by word.

    val words = data.flatMap(value => value.split("\\s+"))
    val groupedWords = words.groupByKey(_.toLowerCase)

One thing you may have observed: we don't create a key/value pair. The reason is that, unlike RDD, Dataset works at a row-level abstraction. Each value is treated as a row with multiple columns, and any column can act as a key for grouping, as in a database.

Step 4 : Count

Once we have grouped, we can count each word using the count method. It's similar to reduceByKey of RDD.

    val counts = groupedWords.count()

Step 5 : Print results

Finally, once we have counted, we need to print the result. As with RDD, all the above APIs are lazy; we need to call an action to trigger computation. In Dataset, show is one of those actions. It shows the first 20 results. If you want the complete result, you can use the collect API (a small sketch of that follows at the end of this post).

    counts.show()

You can access the complete code here. Now we have written our first example in the Dataset abstraction. We will explore more about the Dataset API in future posts.
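As an aside (this snippet is not from the original post), here is what collecting the complete result could look like. groupedWords.count() yields a Dataset[(String, Long)], so collect() returns an array of word/count pairs:

    // Sketch: print every word count instead of only the first 20 rows.
    // collect() pulls the whole result to the driver, so only use it
    // when the result is small enough to fit in driver memory.
    counts.collect().foreach { case (word, count) =>
      println(s"$word: $count")
    }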
http://blog.madhukaraphatak.com/introduction-to-spark-two-part-2/
CC-MAIN-2019-47
refinedweb
580
68.36
time – Functions for manipulating clock time

The time module exposes C library functions for manipulating dates and times. Since it is tied to the underlying C implementation, some details (such as the start of the epoch and maximum date value supported) are platform-specific. Refer to the library documentation for complete details.

Wall Clock Time

One of the core functions of the time module is time(), which returns the number of seconds since the start of the epoch as a floating point value.

    import time

    print 'The time is:', time.time()

Although the value is always a float, actual precision is platform-dependent.

    $ python time_time.py
    The time is: 1205079300.54

The float representation is useful when storing or comparing dates, but not as useful for producing human readable representations. For logging or printing time, ctime() can be more useful.

    import time

    print 'The time is      :', time.ctime()
    later = time.time() + 15
    print '15 secs from now :', time.ctime(later)

Here the second output line shows how to use ctime() to format a time value other than the current time.

    $ python time_ctime.py
    The time is      : Sun Mar  9 12:18:02 2008
    15 secs from now : Sun Mar  9 12:18:17 2008

Processor Clock Time

While time() returns a wall clock time, clock() returns processor clock time. The values returned from clock() should be used for performance testing, benchmarking, etc., since they reflect the actual time used by the program and can be more precise than the values from time().

    import hashlib
    import time

    # Data to use to calculate checksums
    data = open(__file__, 'rt').read()

    for i in range(5):
        h = hashlib.sha1()
        print time.ctime(), ': %0.3f %0.3f' % (time.time(), time.clock())
        for i in range(100000):
            h.update(data)
        cksum = h.digest()

In this example, the formatted ctime() is printed along with the floating point values from time() and clock() for each iteration through the loop. If you want to run the example on your system, you may have to add more cycles to the inner loop or work with a larger amount of data to actually see a difference.

    $ python time_clock.py
    Sun Mar  9 12:41:53 2008 : 1205080913.260 0.030
    Sun Mar  9 12:41:53 2008 : 1205080913.682 0.440
    Sun Mar  9 12:41:54 2008 : 1205080914.103 0.860
    Sun Mar  9 12:41:54 2008 : 1205080914.518 1.270
    Sun Mar  9 12:41:54 2008 : 1205080914.932 1.680

Typically, the processor clock doesn't tick if your program isn't doing anything.

    import time

    for i in range(6, 1, -1):
        print '%s %0.2f %0.2f' % (time.ctime(), time.time(), time.clock())
        print 'Sleeping', i
        time.sleep(i)

In this example, the loop does very little work by going to sleep after each iteration. The time() value increases even while the app is asleep, but the clock() value does not. Calling sleep() yields control from the current thread and asks it to wait for the system to wake it back up. If your program has only one thread, this effectively blocks the app and it does no work.

struct_time

Storing times as elapsed seconds is useful in some situations, but there are times when you need to have access to the individual fields of a date (year, month, etc.). The time module defines struct_time for holding date and time values with components broken out so they are easy to access. There are several functions that work with struct_time values instead of floats.
    import time

    print 'gmtime   :', time.gmtime()
    print 'localtime:', time.localtime()
    print 'mktime   :', time.mktime(time.localtime())
    print

    t = time.localtime()
    print 'Day of month:', t.tm_mday
    print ' Day of week:', t.tm_wday
    print ' Day of year:', t.tm_yday

gmtime() returns the current time in UTC. localtime() returns the current time with the current time zone applied. mktime() takes a struct_time and converts it to the floating point representation.

    $ python time_struct.py
    gmtime   : (2008, 3, 9, 16, 58, 19, 6, 69, 0)
    localtime: (2008, 3, 9, 12, 58, 19, 6, 69, 1)
    mktime   : 1205081899.0

    Day of month: 9
     Day of week: 6
     Day of year: 69

Parsing and Formatting Times

The two functions strptime() and strftime() convert between struct_time and string representations of time values. There is a long list of formatting instructions available to support input and output in different styles. The complete list is documented in the library documentation for the time module. This example converts the current time from a string to a struct_time instance and back to a string.

    import time

    now = time.ctime()
    print now
    parsed = time.strptime(now)
    print parsed
    print time.strftime("%a %b %d %H:%M:%S %Y", parsed)

The output string is not exactly like the input, since the day of the month is prefixed with a zero.

    $ python time_strptime.py
    Sun Mar  9 13:01:19 2008
    (2008, 3, 9, 13, 1, 19, 6, 69, -1)
    Sun Mar 09 13:01:19 2008

Working with Time Zones

The functions for determining the current time depend on having the time zone set, either by your program or by using a default time zone set for the system. Changing the time zone does not change the actual time, just the way it is represented.

To change the time zone, set the environment variable TZ, then call tzset(). Using TZ, you can specify the time zone with a lot of detail, right down to the start and stop times for daylight savings time. It is usually easier to use the time zone name and let the underlying libraries derive the other information, though.

This example program changes the time zone to a few different values and shows how the changes affect other settings in the time module.

    import time
    import os

    def show_zone_info():
        print '\tTZ    :', os.environ.get('TZ', '(not set)')
        print '\ttzname:', time.tzname
        print '\tZone  : %d (%d)' % (time.timezone, (time.timezone / 3600))
        print '\tDST   :', time.daylight
        print '\tTime  :', time.ctime()
        print

    print 'Default :'
    show_zone_info()

    for zone in [ 'US/Eastern', 'US/Pacific', 'GMT', 'Europe/Amsterdam' ]:
        os.environ['TZ'] = zone
        time.tzset()
        print zone, ':'
        show_zone_info()

My default time zone is US/Eastern, so setting TZ to that has no effect. The other zones used change the tzname, daylight flag, and timezone offset value.

    $ python time_timezone.py
    Default :
        TZ    : (not set)
        tzname: ('EST', 'EDT')
        Zone  : 18000 (5)
        DST   : 1
        Time  : Sun Mar  9 13:06:53 2008

    US/Eastern :
        TZ    : US/Eastern
        tzname: ('EST', 'EDT')
        Zone  : 18000 (5)
        DST   : 1
        Time  : Sun Mar  9 13:06:53 2008

    US/Pacific :
        TZ    : US/Pacific
        tzname: ('PST', 'PDT')
        Zone  : 28800 (8)
        DST   : 1
        Time  : Sun Mar  9 10:06:53 2008

    GMT :
        TZ    : GMT
        tzname: ('GMT', 'GMT')
        Zone  : 0 (0)
        DST   : 0
        Time  : Sun Mar  9 17:06:53 2008

    Europe/Amsterdam :
        TZ    : Europe/Amsterdam
        tzname: ('CET', 'CEST')
        Zone  : -3600 (-1)
        DST   : 1
        Time  : Sun Mar  9 18:06:53 2008
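One pattern worth knowing (this example is mine, not part of the original article) is simple date arithmetic done by round-tripping through the epoch: convert a struct_time to seconds with mktime(), adjust the seconds, and convert back with localtime().

    import time

    # Sketch: compute "this time tomorrow" by adding a day's worth of
    # seconds to the epoch value, then format both with strftime().
    t = time.localtime()
    seconds = time.mktime(t)
    tomorrow = time.localtime(seconds + 24 * 60 * 60)

    print 'now     :', time.strftime('%a %b %d %H:%M:%S %Y', t)
    print 'tomorrow:', time.strftime('%a %b %d %H:%M:%S %Y', tomorrow)

Note that blindly adding 86,400 seconds glosses over daylight savings transitions, so treat this as a sketch rather than a robust calendar calculation.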
https://pymotw.com/2/time/index.html
CC-MAIN-2018-47
refinedweb
1,154
73.88